Black & Decker has launched PlantSense, bringing an IP address to geraniums everywhere.

It's this kind of thing that is going to drive IPv6: the need to directly connect to sensor nets within the home.

As to whether I need to be able to ping my potted plant, I don't know just yet.

On my flight from Beijing to Nanjing in China the other day, there was an interesting advertisement on all the seat backs (at least, I think it was an advertisement).

The ad suggests that I "derby the world in my finger". I'm not 100% clear on why I might want to derby the world inside my finger, but I'm sure it's a good thing. It seems <a href="http://talesfromthebigtomato.blogspot.com/2009/11/derby-huh.html">others</a> have <a href="http://batteredleatherjournal.wordpress.com/2010/04/04/butchered-english/">also</a> wondered in vain.

This is a raw idea in progress.

In the early days, audio was always uncompressed. During the '90s, audio compression started to get reasonable, and MP3 became a popular format. This happened around the time people started to get broadband, and the two combined into explosive bandwidth growth.

But that bandwidth growth self-capped, and if you look at downloaded audio today, it's largely the same. The fidelity gap between 128 kbps VBR MP3 and uncompressed audio is small enough that it just didn't matter, so the bandwidth growth due to fidelity capped itself. The second thing that happened is that the number of songs people could listen to (the amount of information they could process) capped out, and thus the overall bandwidth due to song-swappers became driven by the number of users and no other variable.

Video is going in the same direction as audio. It turns out that 1080p H.264 is sufficiently close to 'uncompressed live analog' that there's not much point in going further. Living rooms aren't getting larger, so we don't need more resolution. The gains due to compression are slowing, and bandwidth growth is again being driven by the number of users only.
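
As a rough back-of-the-envelope on why the fidelity gap stops mattering, here is a small Python sketch. The bitrates are commonly quoted ballpark figures (and the 8 Mbps H.264 rate is my assumption for a decent 1080p stream), not measurements of any particular file:

# Rough compression-ratio arithmetic: how much "fidelity bandwidth" has
# already been squeezed out of audio and video.

def uncompressed_audio_bps(sample_rate=44100, bits=16, channels=2):
    # Uncompressed CD-quality PCM, in bits per second.
    return sample_rate * bits * channels

def uncompressed_video_bps(width=1920, height=1080, bits_per_pixel=24, fps=30):
    # Uncompressed 1080p video, in bits per second.
    return width * height * bits_per_pixel * fps

cd = uncompressed_audio_bps()        # ~1.41 Mbps
mp3 = 128_000                        # typical 128 kbps MP3
hd = uncompressed_video_bps()        # ~1.49 Gbps
h264 = 8_000_000                     # assumed 1080p H.264 rate

print(f"audio: {cd/1e6:.2f} Mbps raw vs {mp3/1e6:.3f} Mbps MP3 -> {cd/mp3:.0f}x")
print(f"video: {hd/1e9:.2f} Gbps raw vs {h264/1e6:.0f} Mbps H.264 -> {hd/h264:.0f}x")

Once the compressed stream is 'good enough' for human ears and eyes, there is no perceptible gain left to chase, so per-user bandwidth stops growing and the only variable left is the number of users.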

This could mean that overall internet bandwidth will self-limit at the rate of information a human brain can process live. There is probably a number for that somewhere, some figure in bits per second.

This could mean that the internet will eventually be 'done' when we each have a non-oversubscribed '1x human information rate' link to the back of our skull.

This might be approaching sooner than we think.

And lest the naysayers say "what about the collector, storing to disk only?", well, storage density is also going to asymptotically approach molecular limits and stop growing too.

When I was in grade 5, we had a supply teacher one day. Our regular teacher was a strict, Bible-thumping, beat-you-with-a-stick type, so we had high hopes for good hijinx. Somehow, through that adolescent shared-vision model, we all agreed to drop our pencils precisely at 10 a.m., the idea being it would create a great sound on our portable-classroom floor.

We expected a laugh. We instead got the privilege of writing out five dictionary pages at lunch hour.

AT&T is now facing the same sort of prank. There is a call to have everyone rev up their mobile data devices at noon on Friday: Operation Chokehold is on at noon Pacific. A site called 'Fake Steve' is suggesting it.

What could the prank accomplish? Well, networks are highly oversubscribed; they are designed for the expected peak of normal traffic. My grade 5 classroom didn't fall to bits from pencils falling, and it's unlikely AT&T will catch fire. But it could make things miserable for the people on the network for an hour or so. It's more like 'at 10, stab your neighbour with your pencil'.

One scenario I theorised about with a customer was: what if someone wrote a truly malicious application? It would work sort of like SETI@home, running in the background and giving people something of value for their idle internet connection. Perhaps a reward-points system for whoever sent the most bandwidth, with the top five each month getting a prize. This would cost the consumer nothing, the application provider nothing, and would wreck every residential ISP in the world. Whenever your machine went idle, it would transmit and receive at full rate to and from others in the network.

The internet is a shared medium, and requires all parties to have a common interest in making it work: providers, applications, consumers. When it was created, this shared interest was obvious; everyone knew everyone by name. Today, I have not introduced myself to most of my Internet neighbours. I'm sure they're nice.

So, eyes peeled at noon PST Friday for the AT&T pencil drop.

Users have certain expectations of products and services. If these are not met by some in the class, but are met by others, competitive forces reward the top-performers. If the entire class cannot meet expectations, it dies.

So too it is with communications services. Users have an expectation that web pages load within a certain time. This is somewhat of a soft real-time goal, but it's generally agreed that the sweet spot is < 3s: 47% of consumers expect a load time of <= 2s according to an Akamai survey, and 40% will leave a site if it takes > 3s to load according to the same study.

Now, bandwidth is increasing, so this must fix everything, right?

No. Bandwidth is not the panacea everyone thinks it is.

In fact, the time to load a web page is based on a combination of:

  • bandwidth (speed) & size of page
  • latency of network from client to server and back
  • jitter of network from client to server and back
  • 'Think' time of server and client, JavaScript execution, etc.

A typical 'Web 2.0' site has ~10-20 unique TCP connections that go into loading it (various cookies, spyware, ads, HTML content, images, JavaScript libraries, etc.). Browsers have tried to overcome this with parallelisation of connections. Typically a well-designed site will have the HTML fetched first, which contains the instructions for everything else that needs to be loaded, and then the browser will open those connections in parallel.

So a typical web page will load in something like:

time = (size / bandwidth) +
       (number DNS lookups * (latency + jitter)) +
       (number serial TCP connections * (latency + jitter))
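
As a minimal sketch, that formula in Python looks something like the following; the example numbers are made up for illustration, not measured from any real page:

def page_load_time(page_bytes, bandwidth_bps, dns_lookups,
                   serial_tcp_connections, rtt_s, jitter_s):
    # Crude estimate: assumes each TCP connection instantly runs at full link speed.
    transfer = (page_bytes * 8) / bandwidth_bps              # size / bandwidth
    dns      = dns_lookups * (rtt_s + jitter_s)              # DNS round trips
    setup    = serial_tcp_connections * (rtt_s + jitter_s)   # serial connection setups
    return transfer + dns + setup

# Example: 2 MB page, 100 Mbps link, 5 DNS lookups, 6 serial connections,
# 40 ms round-trip latency, 10 ms jitter -> roughly 0.71 s.
print(page_load_time(2_000_000, 100_000_000, 5, 6, 0.040, 0.010))

Note that only 0.16 s of that total is the bandwidth term; the rest is pure round trips.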

Of course, that assumes TCP instantly ramps to the full speed of the connection. It doesn't. TCP employs a congestion-management algorithm called AIMD (additive increase, multiplicative decrease), along with a start-up phase called 'slow start'. In slow start, TCP ramps its window up from a small initial value (roughly doubling each round trip) until a packet is lost; it then backs off and hovers around that rate, increasing linearly and cutting its rate in half each time packets are lost due to congestion.

So AIMD is another area where web performance suffers. The many, small, TCP connections that make up that web2.0 site never reach their full speed. The latency and jitter dominate.
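
To put a number on that, here is a toy per-round-trip model of slow start. It ignores handshakes, loss, and plenty of real-world detail, and the initial window of 10 segments and the 100 KB object size are assumptions for illustration:

def rtts_to_fetch(object_bytes, bandwidth_bps, rtt_s, mss=1460, init_cwnd=10):
    # Round trips needed to deliver an object when the congestion window
    # starts small and doubles each RTT (slow start), capped at what the
    # link can carry in one RTT.
    link_pkts_per_rtt = max(1, int(bandwidth_bps * rtt_s / 8 / mss))
    packets = -(-object_bytes // mss)   # ceiling division
    cwnd, sent, rtts = init_cwnd, 0, 0
    while sent < packets:
        sent += min(cwnd, link_pkts_per_rtt)
        cwnd = min(cwnd * 2, link_pkts_per_rtt)
        rtts += 1
    return rtts

# A 100 KB object on a 100 Mbps link with a 40 ms round trip:
n = rtts_to_fetch(100_000, 100_000_000, 0.040)
effective_mbps = 100_000 * 8 / (n * 0.040) / 1e6
print(f"{n} RTTs -> ~{effective_mbps:.1f} Mbps effective, on a 100 Mbps link")

In this toy model the fetch finishes in 3 round trips at about 6.7 Mbps effective; run it again at 2 Mbps and it takes about 12 round trips, so the 50x bandwidth upgrade buys only about a 4x improvement on that object.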

So, when upgrading that crappy old 2 Mbps DSL connection to that spiffy new 100 Mbps fibre, what is the user's expectation? Is it a 50x improvement? What will they think when they get a 1.0x speedup on their favourite yahoo.ca page? Will they return the product? This is the big fear of companies like Comcast spending big $$$ on DOCSIS 3.0 technologies.

But fixing latency is tougher than fixing bandwidth. It requires moving content closer to the edge, moving DNS servers down to the edge, removing segments of routing, flattening the network. Ultimately it requires fixing the speed of light. And that has proven tough.