Bravo for BitGravity! On Monday, the would-be Akamai
Is this how Akamai dies?
It's news like this that seems to be spooking Akamai investors. They're wondering whether this Rule Breakers recommendation is permanently broken. (Shares of Akamai are down more than 45% year to date.)
I don't think so. Here's why:
Over the last couple of years we saw a tremendous amount of growth related to broadband adoption. This year we've seen that moderate a little. I think we'll see a second wave as we go from [1 terabit per second] to 500 [terabits per second]. We think in order to scale, you can't do that from a centralized architecture, you have to do that from the edge where the capacity is the most abundant and the least expensive. I think what we'll see is it will be a little bit slower to come but bigger than most people expect.
Those are comments from Akamai Chief Financial Officer J.D. Sherman at a recent investor conference sponsored by Citi. Notice the language -- Sherman is conceding that Akamai's growth story faces shorter-term challenges.
A line in the Web
Is that really a fair claim? I can't be 100% sure. I've long been an Akamai bull because of its algorithm. But that's only one part of the story, according to company Senior Vice President of Networks and Operations Bobby Blumofe, whom I recently interviewed.
Think of a line, he says. On the left side of the line is the Internet, and on the right side are users. Akamai's servers operate on the right side, close to users. BitGravity, Internap, and their rivals operate on the left, reaching users only through the interconnect between the Internet and each ISP.
The trouble is that the interconnection has limits. As Blumofe explains:
If the servers are on the wrong side of the link, then every viewer causes an additional copy to be streamed over that link. For example, consider a 1 [megabit-per-second] stream, maybe TV quality, with an ISP having 10,000 viewers. In that case, that ISP is going to have 10 [gigabits per second] of traffic going over their uplink. That's a lot of load, and they likely don't have that much spare capacity on that link.
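The arithmetic in Blumofe's example is easy to verify with a few lines of Python. The numbers come straight from the quote; the function name and structure are mine, for illustration only:

```python
# Back-of-the-envelope check of Blumofe's example above
# (illustrative only -- not Akamai code).

STREAM_MBPS = 1      # ~TV-quality stream, per the quote
VIEWERS = 10_000     # viewers inside one ISP

def uplink_load_gbps(stream_mbps: float, viewers: int) -> float:
    """Traffic crossing the ISP's uplink when every viewer pulls a
    separate copy from servers on the far side of that link."""
    return stream_mbps * viewers / 1_000  # convert Mbps to Gbps

print(uplink_load_gbps(STREAM_MBPS, VIEWERS))  # 10.0
```

The point of the exercise: uplink load grows linearly with viewers when servers sit on the wrong side of the link, which is exactly the scaling problem Blumofe describes.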
Thus, according to Akamai, the line between the Web and ISPs such as AT&T is a bottleneck.
Contrast that with how Akamai works. By modeling itself in the same decentralized style as the Internet itself, and by using an algorithm that allows data to be broken into bite-size chunks and then reassembled very close to its destination, Akamai doesn't need the delivery system -- the interconnect between users and the Internet -- to scale. Rather, Akamai's algorithm and architecture scale the Web.
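The chunk-and-reassemble idea in that paragraph can be sketched in a few lines. To be clear, this is a toy illustration of the general technique, not Akamai's actual algorithm; the function names and chunk size are mine:

```python
# Toy sketch of chunking data and reassembling it near its
# destination -- the general technique, NOT Akamai's algorithm.

def split_into_chunks(data: bytes, chunk_size: int) -> list[bytes]:
    """Break a payload into fixed-size pieces that can each travel
    independently through the network."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def reassemble(chunks: list[bytes]) -> bytes:
    """Rejoin the pieces at an edge server close to the user."""
    return b"".join(chunks)

payload = b"a video stream or web page, many megabytes in practice"
chunks = split_into_chunks(payload, chunk_size=8)
assert reassemble(chunks) == payload
```

Because each chunk can be fetched from whichever nearby server already holds it, only the final reassembly happens close to the user, so the interconnect never has to carry one full copy per viewer.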
Or at least that's the theory. It could take years for Akamai to be proved right, if ever. BitGravity points to Akamai's storage problem. And Google
But even here Blumofe is skeptical. He asserts that unless these connections become the digital equivalent of a 100-lane multi-terabit highway -- versus the five-lane gigabit highway we have today -- no CDN positioned on the wrong side of the line will scale to the degree that Akamai can. Patient investors will thus be rewarded.
I think he's right. I've bet my hard-earned dollars on it. Do you agree? Disagree? Use the comments box below to express your point of view.