The speed of light and human (great) expectations

Users have certain expectations of products and services. If some products in a class meet those expectations and others don't, competitive forces reward the top performers. If the entire class cannot meet expectations, it dies.

So too it is with communications services. Users expect web pages to load within a certain time. This is a soft real-time goal, but it's generally agreed that the sweet spot is under 3 seconds: according to an Akamai survey, 47% of consumers expect a load time of 2 seconds or less, and 40% will leave a site that takes more than 3 seconds to load.

Now, bandwidth is increasing; that must fix everything, right?

No. Bandwidth is not the panacea everyone thinks it is.

In fact, the time to load a web page comes down to a combination of:

  • bandwidth (speed) and the size of the page
  • latency of the network from client to server and back
  • jitter of the network from client to server and back
  • ‘think’ time of the server and client: JavaScript execution, etc.

A typical ‘web2.0’ web site involves ~10-20 distinct TCP connections (various cookies, trackers, ads, HTML content, images, JavaScript libraries, etc.). Browsers have tried to overcome this with parallelisation of connections. Typically a well-designed site serves the HTML first, which contains the instructions for everything else that needs loading, and the browser then opens those connections in parallel.
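To make that concrete, here's a minimal Python sketch of serial versus parallel fetching. The URLs are placeholders, and a real browser discovers sub-resources by parsing the HTML rather than from a fixed list:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Placeholder resources; a real browser finds these in the HTML.
    URLS = [
        "https://example.com/",
        "https://example.com/style.css",
        "https://example.com/app.js",
        "https://example.com/logo.png",
    ]

    def fetch(url):
        # Each fetch pays DNS + TCP setup + transfer round trips.
        with urlopen(url, timeout=10) as resp:
            return len(resp.read())

    start = time.time()
    for url in URLS:                        # serial: round trips add up
        fetch(url)
    print(f"serial:   {time.time() - start:.2f}s")

    start = time.time()
    with ThreadPoolExecutor(max_workers=len(URLS)) as pool:
        list(pool.map(fetch, URLS))         # parallel: round trips overlap
    print(f"parallel: {time.time() - start:.2f}s")

The parallel version pays roughly one connection's worth of round trips instead of one per resource, which is exactly the trick browsers use.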

So a typical web page will load in something like:

time = (size / bandwidth) +
       (number of DNS lookups * (latency + jitter)) +
       (number of serial TCP connections * (latency + jitter))
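Plugging illustrative numbers into that formula shows how quickly the round-trip terms overtake the transfer term. Every input below is an assumption, not a measurement:

    # Back-of-envelope load-time estimate from the formula above.
    size_bits     = 2 * 8 * 1_000_000   # a 2 MB page
    bandwidth_bps = 100 * 1_000_000     # 100 Mbps link
    rtt_s         = 0.040               # 40 ms round trip
    jitter_s      = 0.010               # 10 ms jitter
    dns_lookups   = 5                   # distinct hostnames to resolve
    serial_conns  = 4                   # connections that can't be parallelised

    transfer = size_bits / bandwidth_bps
    dns      = dns_lookups * (rtt_s + jitter_s)
    tcp      = serial_conns * (rtt_s + jitter_s)

    print(f"transfer {transfer:.2f}s  dns {dns:.2f}s  tcp {tcp:.2f}s")
    print(f"total    {transfer + dns + tcp:.2f}s")
    # On this fast link the transfer term is ~0.16s, and the round-trip
    # terms (~0.45s) already exceed it. Shrinking latency matters more
    # than adding bandwidth.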

And that formula is optimistic: it assumes TCP instantly ramps to the full speed of the connection. In reality, TCP employs a congestion-management algorithm called AIMD (additive increase, multiplicative decrease), preceded by a start-up phase called ‘slow start’. In slow start, TCP roughly doubles its sending rate each round trip until it loses a packet; it then backs off and hovers around that rate. From there, AIMD probes upward additively, and each congestion loss cuts the rate in half.
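A toy model makes the effect visible. The parameters below are assumptions (a ~100 Mbps path, 1500-byte packets, a 200 KB object), but the shape of the result holds generally:

    # Toy model of slow start + AIMD for a short transfer.
    link_cwnd = 850    # packets per RTT the link could carry (~100 Mbps)
    page_pkts = 140    # ~200 KB at 1500 bytes per packet
    cwnd      = 10     # typical initial congestion window
    sent = rtts = peak = 0

    while sent < page_pkts:
        burst = min(cwnd, page_pkts - sent)   # packets sent this round trip
        sent += burst
        peak  = max(peak, burst)
        rtts += 1
        if cwnd < link_cwnd:
            cwnd = min(cwnd * 2, link_cwnd)   # slow start: double each RTT
        else:
            cwnd += 1                         # AIMD: additive increase
        # (a congestion loss would do cwnd //= 2: multiplicative decrease)

    print(f"{rtts} round trips; peak in-flight {peak} of {link_cwnd} packets")

The transfer finishes in four round trips with at most 70 packets in flight on a link that could carry 850 per round trip. At a 40 ms RTT, that is 160 ms for an object that would take 16 ms at line rate, before even counting DNS and connection setup.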

So slow start and AIMD are another area where web performance suffers. The many small TCP connections that make up that web2.0 site never reach their full speed. The latency and jitter dominate.

So, when upgrading that crappy old 2Mbps DSL connection to that spiffy new 100Mbps fibre, what is a user's expectation? Is it a 50x improvement? What will they think when their favourite yahoo.ca page loads at 1.0x the old speed? Will they return the product? This is the big fear of companies like Comcast spending big $$$ on DOCSIS 3.0 technologies.

But fixing latency is tougher than fixing bandwidth. It requires moving content closer to the edge, moving DNS servers down to the edge, removing segments of routing, flattening the network. Ultimately it requires fixing the speed of light. And that has proven tough.
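That last line isn't just a quip. Light in fibre travels at roughly two-thirds of its speed in vacuum, which puts a hard floor under round-trip times no matter how clever the network gets. The distances below are rough, illustrative figures:

    # Lower bound on RTT imposed by the speed of light in fibre.
    C_FIBRE_KM_S = 200_000   # ~2/3 of c, typical for optical fibre

    routes_km = {
        "same city (CDN edge)":     50,
        "New York - London":     5_600,
        "London - Sydney":      17_000,
    }

    for route, km in routes_km.items():
        rtt_ms = 2 * km / C_FIBRE_KM_S * 1000
        print(f"{route:24s} >= {rtt_ms:6.1f} ms RTT")
    # Multiply those floors by the dozen-plus round trips a page load
    # needs, and the case for pushing content and DNS to the edge
    # makes itself.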

