
The Fragile Internet

Living in Asia sometimes reminds us of how fragile the Internet actually is. Even though it’s designed to withstand outages across segments of connectivity, it’s really all a game of money, politics, and physical cables that let our packets get across.

The recent earthquake in Taiwan has rendered large segments of the Internet inaccessible from across Asia. If I have my details straight, there are seven-ish undersea cables that link Hong Kong to Europe and North America. Due to the earthquake, six-ish of those cables were severed, knocking out a huge share of trans-Pacific internet access.

Sites still work, though sometimes too slowly to be usable, and various services that require persistent connections (e.g. VoIP or IM software) fail to connect because the latency is so high. Apparently one of the broken undersea cables has been repaired, which has restored some connectivity, but as you can imagine large parts of Asia are still vastly under-connected. This affects not only internet access but also basic telco service, since a lot of that traffic is carried on the same pipes.

To fix these cables, crews literally have to sail ships to the location of the break, dredge the cable up from the seafloor (miles below the surface, mind you), repair it, then re-sink it. In response, some of the telcos have found alternate routes via Singapore and Europe, and via satellites as well. Of course, given the situation, there’s no way those routes can carry the full trans-Pacific load.

These kinds of problems bode poorly for a lot of websites, let alone services that require persistent connections. Pages heavy in JavaScript/CSS or just garbage HTML (e.g. bloated headers, toolbars, navigation elements) load incredibly poorly when packet loss is in the 30% range. Often, sites load so slowly that IE hangs and never comes back. While carrier-level optimizations are necessary to prevent problems like this from recurring, higher-level optimizations must be made as well.
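To make the numbers concrete, here’s a minimal back-of-envelope sketch (my own model with assumed figures, not anything measured on these links) of why 30% packet loss is so much worse than it sounds: every lost packet must be retransmitted, and the losses compound across the hundreds of packets a heavy page needs.

```python
# Back-of-envelope: how per-packet loss inflates page delivery.
# Assumes independent losses and a single retransmission per lost
# packet; real TCP behaves worse (timeouts, congestion backoff),
# so these numbers are optimistic.

def expected_attempts(loss_rate: float) -> float:
    """Expected transmissions per packet (geometric distribution)."""
    return 1.0 / (1.0 - loss_rate)

def clean_first_pass(loss_rate: float, packets: int) -> float:
    """Probability that every packet arrives on its first attempt."""
    return (1.0 - loss_rate) ** packets

PAGE_PACKETS = 200  # a ~300 KB page in ~1500-byte packets, roughly

for loss in (0.01, 0.10, 0.30):
    print(f"loss={loss:.0%}: {expected_attempts(loss):.2f} attempts/packet, "
          f"P(no retransmits across {PAGE_PACKETS} packets) = "
          f"{clean_first_pass(loss, PAGE_PACKETS):.2e}")
```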

Parts of HTML/HTTP should be modernized to account for these problems. For example, when visiting ESPN, the top 40% of the page is redundant content that is sent from the server to the client every time I read a new page. While this may read as an argument for an AJAX-style application, it isn’t. The sheer page weight of an AJAX site (or even a site that relies heavily on CSS files) can keep those sites from ever loading at all. Several times in the last week I’ve seen my browser fully download the HTML for a page, only to never render the content because one of its ten (or however many) linked CSS or JS files failed to load.
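The failure mode is multiplicative: if a page refuses to render until all of its linked files arrive, the odds of a successful render fall off exponentially with the number of required files. The per-file success rates below are hypothetical, but the shape of the curve is the point.

```python
# Probability that a page renders when rendering requires ALL of its
# linked CSS/JS files to arrive. Per-file success rates are hypothetical.

def render_probability(per_file_success: float, files: int) -> float:
    """Chance that every one of `files` independent fetches succeeds."""
    return per_file_success ** files

for files in (1, 3, 10):
    for q in (0.99, 0.90, 0.70):
        print(f"{files:2d} required files at {q:.0%} success each -> "
              f"page renders {render_probability(q, files):.1%} of the time")
```

At a 70% per-file success rate, a ten-file page renders under 3% of the time.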

The presentation layers must become more resilient to these sorts of issues. Not because major carrier-level outages will happen more often in the future (needless to say, they will be part of the global internet landscape forever), but because of two things.

First, the cost of wasted bandwidth will grow without bound. Major services companies (the Yahoos, Microsofts, and Googles of the world) are well aware of this problem and are throwing money at it; one example is Google’s massive facility-building operation in Oregon. The need for bandwidth drives up costs in data centers, servers, operations personnel, etc.
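A crude illustration of the scale involved (every figure below is hypothetical, not drawn from any real provider): even a modest amount of redundant markup multiplies into serious transfer volume at portal scale.

```python
# Hypothetical back-of-envelope: what redundant page weight costs at
# portal scale. All three inputs are made-up; adjust to taste.

redundant_kb_per_view = 100        # repeated header/nav/toolbar markup
views_per_day = 1_000_000_000      # page views/day for a large portal
cost_per_gb = 0.10                 # assumed bandwidth cost, USD

gb_per_day = redundant_kb_per_view * views_per_day / 1024 ** 2
print(f"{gb_per_day:,.0f} GB/day of redundant transfer, "
      f"roughly ${gb_per_day * cost_per_gb * 365:,.0f}/year")
```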

Second, not all users have huge pipes, even in America, and the problem is compounded many-fold globally. There is a limited amount of bandwidth available to carry ever more bandwidth-hungry applications and services, many of which are hosted in the US and delivered globally. Without a maniacal focus on page weight and delivery size, it’s increasingly difficult to build applications that can be delivered globally, or even to domestic users on non-optimal connections. These problems can be solved with money, but not every organization can afford to spend like a Google, Yahoo, or Microsoft can.
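That focus starts with knowing what a page actually weighs. Here’s a minimal sketch of an audit script, using only the Python standard library, that fetches a page, finds its linked CSS/JS/image assets, and tallies the bytes; the URL is a placeholder, and anything loaded dynamically by scripts is missed.

```python
# Minimal page-weight audit using only the standard library. A sketch:
# no caching, no CSS @imports, no script-loaded assets.

from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class AssetFinder(HTMLParser):
    """Collects the URLs of linked stylesheets, scripts, and images."""

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet" and attrs.get("href"):
            self.assets.append(attrs["href"])
        elif tag in ("script", "img") and attrs.get("src"):
            self.assets.append(attrs["src"])

def page_weight(url: str) -> int:
    """Total bytes for the page plus every asset it links."""
    html = urlopen(url).read()
    finder = AssetFinder()
    finder.feed(html.decode("utf-8", errors="replace"))
    total = len(html)
    for ref in finder.assets:
        try:
            total += len(urlopen(urljoin(url, ref)).read())
        except OSError:
            # Exactly the render-blocking failure described above.
            print(f"failed to fetch {ref}")
    return total

if __name__ == "__main__":
    url = "http://example.com/"  # placeholder: substitute the page to audit
    print(f"{page_weight(url) / 1024:.0f} KB total transfer for {url}")
```

Running this against a heavy portal page versus a lean one makes the difference in delivery size obvious at a glance.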