Spectacular incidents, such as the outage caused by the attack on Dyn on 21 October 2016, sometimes lead people to believe that it is possible to break the Internet, i.e. to interrupt its operation for a shorter or longer period of time. Such a breakdown could be the result of a deliberate attack, or of an accidental failure. Sensational articles have already been published on the subject ...
AFNIC (Association Française pour le Nommage Internet en Coopération) is releasing a thematic dossier exploring the possibility of such a blackout. Let us say it right away: the conclusion will be nuanced. The possibility of such a breakdown cannot be excluded, but it does not seem a likely scenario. This does not mean that the Internet is sufficiently resilient as it stands, nor that we can rest on our laurels ...
The new thematic dossier from AFNIC (1) explores the possibility of a general breakdown of the Internet and solutions to improve its resilience.
Having explored security issues in previous publications, particularly those related to the DNS protocol and the risks of domain name hijacking, AFNIC now turns its attention to a thorny question: is it possible to break the Internet? Can its operation be interrupted, for a shorter or longer period of time, following a deliberate attack or an accidental breakdown?
This thematic dossier aims to answer this vast question, in particular by recalling what the Internet is and identifying its most crucial elements, the routers and the DNS protocol, which are potential targets for attack. It also looks at recent examples of spectacular incidents and their consequences.
Finally, rather than giving a categorical answer to a question that remains purely theoretical, this thematic dossier proposes avenues for reflection and action to improve the resistance and resilience of the Internet.
Is the glass half full or half empty?
Answering the question is difficult because the answer is bound to be nuanced and full of uncertainties.
Experience (alas, plentiful) and the statistical law of large numbers allow frequent events to be predicted within a certain margin of error. But exceptional events, such as a hypothetical total or near-total failure of the Internet, which has never happened, are obviously very hard to predict.
Pessimists will say that breakdowns and attacks are frequent and have serious consequences. They will cite ransomware holding hospitals hostage, malicious software preventing Rafale fighter jets from taking off, a denial-of-service attack blocking access to a popular site... And they will be outraged that a state, a small group of criminals, or even a single high-school student in a garage can block such essential services.
Optimists will note that none of these outages or attacks stopped the Internet, or even a significant portion of it. No matter how inconvenient the consequences for users of these particular services, the Internet continued to function. And the consequences were generally short-lived, lasting no more than a few hours. This is a far cry from the heralded "cyber war".
This apparent clash of views between optimists and pessimists can be summarized by Pierre Col's well-known phrase: "the Internet is locally vulnerable and globally robust". It is very easy (too easy, and this needs to be fixed) to cause failures that are limited in space and time, and much more difficult to do so across the whole Internet over a long period.
But what's the Internet?
One of the reasons why discussions about Internet resistance are difficult is that many people only know the Internet through what they see on their screens. Hence the sensational headline in the New York Times, which announced during the attack on Dyn that "half the Internet is broken". Yet the Internet was working perfectly during this attack, even though several well-known sites were affected.
It is therefore important to remember that the Internet hosts many different services, not just half a dozen websites. Companies exchange data, scientific researchers copy large files, e-mail and instant messaging services continue to work, even if Facebook is down.
Routers, the true heart of the Internet
When we talk about the Internet, we mostly think of the web pages of the most famous organizations, such as Google or Amazon. But the real infrastructure of the Internet is the hundreds of millions of miles of cable connecting computers to each other, through routers, the active equipment that steers messages in the right direction. The core routers are numerous (hundreds of thousands of machines), but they are made by a small number of manufacturers (such as Huawei, Cisco or Juniper), and a failure or security flaw affecting a whole product range could have serious consequences. Here again, diversity is crucial to the health of the Internet.
The world of routers is also that of BGP (Border Gateway Protocol), which is not very secure (and, as with the DNS, the known security solutions are not widely deployed). Deliberate attacks, or accidents, such as the one caused by Google on August 25, 2017, which cut off part of the Internet, particularly in Japan, are a cause for concern.
To date, cooperation between network operators has always been successful in quickly mitigating and then resolving these incidents.
The DNS, an essential and often forgotten link in the infrastructure
While the cables, the routers and their BGP protocol form the heart of the Internet, the DNS (Domain Name System) is an indispensable part of the infrastructure. Without the DNS, there is almost no Internet. And, if you are a bit of a technician, don't say "ah, but you can just type in the IP address to connect": with many web servers, this will not work, or will work badly. There are often several websites on the same IP address, and there are HTML pages that load code or style sheets via URLs, and therefore via domain names. The crucial nature of the DNS was plain to see during outages such as the one at Bouygues in April 2015, or during the problem at Orange in October 2016, which mistakenly blacklisted Wikipedia and Google.
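The point about "just typing the IP address" can be sketched in a few lines of Python. All names and addresses below are invented for illustration: an HTTP server hosting several sites on one IP address chooses which site to serve from the Host header, which carries a domain name, so the IP address alone is not enough.

```python
from typing import Optional

# One IP address hosting several websites (virtual hosting).
# Every name and address here is made up for the example.
VIRTUAL_HOSTS = {
    "203.0.113.10": {
        "www.example-news.fr": "news site content",
        "www.example-shop.fr": "shop content",
    }
}

def serve(ip: str, host_header: Optional[str]) -> str:
    """Dispatch a request the way an HTTP server with virtual hosts does:
    the site is selected from the Host header, not from the IP address."""
    sites = VIRTUAL_HOSTS.get(ip, {})
    if host_header is None or host_header not in sites:
        # Without a usable Host header the server cannot tell which site
        # was meant, and returns an error or a default page.
        return "404: unknown or missing Host header"
    return sites[host_header]

print(serve("203.0.113.10", "www.example-news.fr"))  # the name selects the site
print(serve("203.0.113.10", None))                   # the IP alone is not enough
```

Connecting by IP address alone is the second case: the server has no way of knowing which of its sites you wanted.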
There are two very different kinds of DNS servers: the authoritative servers, generally maintained by the domain name registries (such as AFNIC, which maintains the authoritative servers for .fr) or by DNS hosting providers, and the resolvers, which are maintained by Internet service providers, local IT departments or large foreign operators.
Both are crucial, and can be subject to attack or failure.
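The division of labour between the two kinds of servers can be illustrated with a toy resolution walk. The zone data below is entirely made up, and a real resolver also caches, retries and validates; this sketch only shows the principle: each authoritative server knows its own zone, and the resolver follows delegations downward on behalf of the user.

```python
# Toy model: each key is a zone held by an authoritative server,
# mapping names to a delegation ("NS ...") or a final answer ("A ...").
# All names and addresses are invented for illustration.
AUTHORITATIVE = {
    ".":         {"fr.": "NS d.nic.fr."},           # the root delegates .fr
    "fr.":       {"afnic.fr.": "NS ns1.nic.fr."},   # the .fr registry delegates
    "afnic.fr.": {"www.afnic.fr.": "A 192.0.2.1"},  # the zone's own data
}

def resolve(name: str, zone: str = ".") -> str:
    """Do what a resolver does: follow delegations from the root
    until an authoritative server gives the final answer."""
    records = AUTHORITATIVE[zone]
    for owner, rdata in records.items():
        if name == owner and rdata.startswith("A "):
            return rdata.split(" ", 1)[1]   # final answer found
        if name.endswith(owner):
            return resolve(name, owner)     # follow the delegation down
    raise LookupError(name)

print(resolve("www.afnic.fr."))
```

An attack on an authoritative server breaks resolution for its zones only; an attack on a widely used resolver breaks resolution for all its users.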
The attack on Dyn in 2016
This denial-of-service attack (an attack that aims to prevent a service from operating, not to take control of it) was one of the most spectacular of recent years. It hit the DNS hosting provider Dyn on October 21, 2016, in two phases, each lasting about two hours.
The attack consisted of sending a lot of traffic to Dyn's servers, which could no longer respond to all legitimate requests.
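The mechanism can be sketched with a crude capacity model (the figures are invented): a server can answer only so many queries per second, and once attack traffic fills that capacity, legitimate queries are crowded out.

```python
def answered_fraction(capacity_qps: int, legit_qps: int, attack_qps: int) -> float:
    """Fraction of legitimate queries still answered when the server
    treats all incoming traffic alike. Purely illustrative model."""
    total = legit_qps + attack_qps
    if total <= capacity_qps:
        return 1.0                    # enough capacity for everyone
    return capacity_qps / total      # each query has this chance of service

# Invented figures: a server built for 100k queries/s, flooded with 990k/s.
print(answered_fraction(100_000, 10_000, 990_000))  # 0.1: most real users fail
```

This is why denial-of-service defences focus on absorbing or filtering traffic before it reaches the servers, rather than on the servers alone.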
Note that, contrary to what most media claimed, the majority of the large, well-known websites that were affected were not Dyn customers at all. They were customers of Amazon Web Services, itself a Dyn customer (AWS endpoints in the eastern United States are under the us-east-1.amazonaws.com domain, whose name servers were all at Dyn). It was therefore a cascading failure, common on today's web, where many external services are used just to display a single page.
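This cascading effect can be made concrete with a small dependency graph: a service is unavailable if any link in its chain of dependencies fails. The graph below is a simplified, hypothetical rendering of the Dyn situation, not real dependency data.

```python
# Hypothetical dependency graph: an edge means "depends on".
DEPENDS_ON = {
    "big-site.example": ["aws"],  # hosted on AWS
    "other.example":    ["dyn"],  # a direct Dyn customer
    "aws":              ["dyn"],  # AWS name servers were hosted at Dyn
    "dyn":              [],
}

def impacted(failed: str, graph: dict) -> set:
    """Return every node whose dependency chain reaches the failed node."""
    def reaches(node: str) -> bool:
        return node == failed or any(reaches(dep) for dep in graph[node])
    return {node for node in graph if reaches(node)}

print(sorted(impacted("dyn", DEPENDS_ON)))
# a Dyn failure cascades through AWS to sites that never heard of Dyn
```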
The attack on Cedexis and its consequences
On 10 May 2017, the DNS hosting provider Cedexis was the victim of a denial-of-service attack lasting approximately two and a half hours. This host is used by many media outlets, and the French press was therefore not very accessible that day. This attack illustrates the crucial nature of the DNS, and the impression that "everything is broken" which can result from a successful attack on a service widely used across the web.
Social networks (here, Mastodon) are generally the best tool to learn that there is a breakdown!
In fact, whether or not the Internet can be broken completely remains a purely theoretical question. On the one hand, it is very difficult to give a reliable answer. On the other hand, it is more useful to ask what can be done to improve the resilience of the Internet.
For example, there is now far too much centralization in some services. When Facebook is down, many users complain to their IT department that "the Internet is broken". And, indeed, for them, it almost is, since all their interactions are mediated (and recorded...) by Facebook.
More technically, if all the DNS servers are hosted at A. and all the sites at C., a failure of A. or C. will clearly have far-reaching consequences. It is therefore crucial that Internet services, especially the essential ones, be spread over a large number of different providers.
These principles of redundancy and diversity must guide any design of Internet services. Redundancy means, for example, having several authoritative name servers for a DNS zone and making sure they do not share a common point of failure (they should not, for instance, all be in the same room).
Another example of redundancy is that a country should be connected by a large number of very different physical links. Diversity, for its part, means not putting all one's eggs in the same basket: if the whole Internet used the same DNS server software, a security flaw in that software would have far-reaching consequences.
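The redundancy and diversity checks described above can be sketched as a simple audit: given the name servers of a zone and a few attributes of each (provider, software, location), flag any attribute whose value they all share, since that value is a common point of failure. The zone data below is hypothetical.

```python
def single_points_of_failure(servers: list) -> dict:
    """Given one dict of attributes per name server, return the
    attributes whose value is identical across all servers: each
    such shared value is a common point of failure."""
    spofs = {}
    for attr in servers[0]:
        values = {server[attr] for server in servers}
        if len(values) == 1:          # no diversity on this attribute
            spofs[attr] = values.pop()
    return spofs

# Hypothetical zone: two name servers at different hosts and countries,
# but both running the same DNS software.
zone_ns = [
    {"provider": "host-a", "software": "bind", "country": "FR"},
    {"provider": "host-b", "software": "bind", "country": "DE"},
]
print(single_points_of_failure(zone_ns))  # {'software': 'bind'}
```

Here the zone is redundant in providers and locations, but a single flaw in that one software would still take down every server.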
It should be noted that the Observatory of Internet Resilience in France publishes an annual report with many indicators, such as the variety of network operators for a given DNS zone. (It also contains many excellent BGP indicators.)
Another measurement system, involving the whole community, is the deployment of RIPE Atlas probes, which allow many technical measurements to be made from the farthest corners of the Internet.
Since it is difficult to predict breakdowns, let alone attacks, human reactions are crucial in dealing with problems. That is why cooperation, communication and coordination must be developed.
Finally, like any human endeavour, the Internet is not perfect and can fail. This risk must be factored into security assessments, and systems must not be designed in such a way that dramatic consequences follow if the Internet happens to fail at just that moment.
By way of conclusion ...
While we cannot give a simple answer to the question "can we break the Internet?", it is possible to work on improving its resistance (standing up to attacks) and its resilience (recovering from a crisis).
Moreover, while breaking part of the Internet for a limited time remains too easy, and justifies security efforts, breaking the entire Internet for a long time is, fortunately, not within the reach of any attacker.
(1) Afnic is the registry for .fr (France), .re (Reunion Island), .yt (Mayotte), .wf (Wallis and Futuna), .tf (Terres Australes et Antarctiques), .pm (Saint-Pierre and Miquelon) domain names.
Afnic also positions itself as a provider of technical solutions and registry services. The Afnic - Association Française pour le Nommage Internet en Coopération - is made up of public and private players: representatives of public authorities, users and Internet service providers (registrars). It is a non-profit organization.