Post of the Day
September 21, 2000

Board Name:
Network Appliance

Posts selected for this feature rarely stand alone. They are usually part of an ongoing thread, and are out of context when presented here. The material should be read in that light.

Subject:  Moving Through Stage Three
Author:  Stocksure

You know, I hate using cliches. I really, really do. Especially after I've made so many sarcastic comments about some of them over the years. Yet, upon seeing the recent EMC/Network Appliance, NAS/SAN controversy, two words keep repeating themselves in my head: disruptive technology, disruptive technology, disruptive...

I used to think that the Network Appliance/EMC saga was similar to EMC's battles with IBM in the early/mid '90s, as I once stated, and as Fox mentioned in an earlier post. Similarities to IBM's inability to dominate the PC industry the way it rules mainframes, another analogy that Fox brought up, also surfaced in my mind. However, the more I think about it, the more it reminds me of yet another battle IBM lost related to the mainframe market (yeah, I know, with the amount of screw-ups they've had over the years, it's hard to keep track).

As we all know, going into the '80s, IBM's name was synonymous with enterprise computing, enjoying a level of mindshare and market share in this arena that even Cisco would envy. Much of this dominance stemmed from their position as the 800-lb. gorilla of the high-end mainframe market. These mainframes were the heart and soul of every major corporation's IT department (I'm not sure if the term "IT department" had been coined yet...but you know what I mean). Companies relied on them for everything, ranging from database management to number-crunching to storage access to handling client software requests. And they cost a fortune. But companies were happy to pay, especially as computing grew ever more valuable to a company's operations.

Then comes the application server (first primarily running Unix, then utilizing Netware and other operating systems), a device that comes across as quite inconspicuous when compared to those huge mainframes. At first, these servers were meant to do one thing and one thing only: handle client/server software requests. In terms of functionality and scalability, they paled in comparison to the IBM mainframes. The IBM (and DEC, HWP, etc.) sales guys scoffed at these contraptions, thinking that there was no way they could ever be a threat to their high-end, high-margin products; but the application servers did have their advantages. They were easier to set up and manage, and they were much, much cheaper.

And so, in standard disruptive fashion, the application server industry went through the following stages in its battle against the IBM-style mainframe:

-------------

Stage One: The servers first appear in the low-end market, where cost-sensitive smaller businesses and small branch offices scoop them up. The mainframe guys barely bother to notice.

Stage Two: The servers make advances into the mid-tier, where companies looking to cut costs find them useful. They also begin to be used as "secondary devices" in certain high-end markets, so as to offload certain tasks from a company's mainframes. The mainframe companies finally notice, but still think little of these servers, considering them a "niche market" at best.

Stage Three: The servers really begin to scale. They're not quite at the same level as mainframes in this regard, but they're getting fairly close. A few begin to appear in high-end implementations, and the mainframe companies begin paying close attention, although they still feel that the market dominance of their product will continue. They make comments regarding how servers and mainframes are "complementary" devices, and how servers are better for certain non-taxing client requests, while mainframes are better for everything else.

They begin to make their first servers. Only thing is, they're not much different from the mainframes they've traditionally made. From the beginning, the server manufacturers' chief advantage was cost, and so they've always approached their market with a "dollar efficiency is paramount" mentality. On the other hand, the mainframe vendors have always done business under the mantra of "performance at all costs." After all, computing is the heart of a company's IT infrastructure, isn't it? So, when they start making servers, they carry this mentality with them. Their devices inevitably outperform the competition, but without attaining the value proposition the latter still offers.

Stage Four: Servers finally equal mainframes in terms of performance and scalability. Although some differences remain, servers are now indistinguishable from mainframes in many regards, such as the functions they carry out and the manner in which they're designed. The one main difference, however, is that the products still classified as "servers" tend to deliver much more value for every dollar spent. Due to the mindshare they have, and due to the deep customer relationships their manufacturers have with so many companies, mainframes still outsell servers in terms of revenues; but servers are gaining momentum.

Stage Five: Servers finally surpass mainframes in terms of annual revenues, and the latter begin the long, grinding path toward obsolescence.

--------------

I'm sure that all of you can easily see how this analogy fits into the NAS/SAN debate, so I guess there's no need for a step-by-step rundown. The only thing that could potentially be up for debate is what stage we're in. I'd say that the most defensible argument is that we're at Stage Three. Network Appliance is getting some major, noticeable high-end wins (Continental, Yahoo!, Deutsche Telekom, etc.), and they're completely on par with the leading SAN vendor, EMC, in terms of software (one might even argue that EMC is behind, as they have nothing comparable to Netapp's Snapshot for backup copies at a fraction of disk capacity). Meanwhile, EMC's finally woken up to the NAS market to the point where they've made some major comments regarding its importance, and have made their first mainstream NAS release, a product that can significantly outperform high-end Netapp filers...but is hard to distinguish from the company's SAN offerings in terms of both architecture and cost.

There's already been a lot of talk regarding how ridiculously expensive/wasteful the EMC Celerra/Symmetrix NAS combo that the company tested actually was when compared to a high-end Netapp filer (84 processors and 103 GB of RAM vs. 1 processor and 1 GB of RAM). However, what hasn't been noted is how badly a scaled-down version of the Celerra can underperform a Netapp filer. For example, I've read that a Celerra using one data mover hooked up to a Symmetrix frame can handle 8,000 operations/second. The data mover contains one processor, and each Symmetrix frame contains twelve. On the other hand, Network Appliance's F840, utilizing just one CPU, can handle 15,000 operations/second, while the F840c, making use of only two CPUs, can deliver over 25,000 operations/second. In terms of price/performance, the difference isn't merely significant, it's mind-boggling.
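
For anyone who wants the arithmetic spelled out, here's a quick back-of-the-envelope rundown. The throughput figures are the ones quoted above, not benchmarks I've run myself, so treat them as rough:

-------------

# Ops/second per CPU, using the figures quoted above. The throughput
# numbers are as reported/read -- approximations, not my own measurements.

systems = {
    # name: (total CPUs, NFS operations/second)
    "Celerra (1 data mover + Symmetrix frame)": (1 + 12, 8_000),
    "Netapp F840":                              (1,      15_000),
    "Netapp F840c":                             (2,      25_000),
}

for name, (cpus, ops) in systems.items():
    print(f"{name}: {ops / cpus:,.0f} ops/sec per CPU ({cpus} CPUs)")

-------------

Per CPU, that works out to roughly 615 operations/second for the Celerra setup against 15,000 for the F840, a gap of about 24x before price even enters the picture.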

Still, one could argue, just as they did during the height of the mainframe/server controversy, that the best possible performance is worth a premium; and just as the argument was made that computing was the heart of corporate IT back in the '80s, storage now more than ever is taking center stage in terms of its value to a company's IT infrastructure; and yes, it's quite obvious that if one were to scale a Celerra-based NAS system to its utmost limits without caring about cost, even the F840c wouldn't be able to keep up.

Meanwhile, a second major argument also currently exists in favor of the performance offered by SANs over NAS devices. In a SAN, a company's application servers directly interface with storage pools as if they were the servers' own disk drives, while with an NAS box, the servers have to go through an intermediary operating system such as Data ONTAP, and have to deal with the IP protocol stack, both of which can add latency and processing overhead. While this is a moot point for any data request coming directly from a client PC, as well as for most small data requests only heading for a server, for large data transfers that are only meant to go to the server (e.g. certain database file requests), a fibre channel SAN can significantly outperform an NAS box hooked up to an ethernet network. Even with 10-gigabit ethernet, this performance bottleneck would most likely exist in some form for such requests, and act as a second strike against the viability of NAS devices for high-end applications.
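
To lay the logic of that argument bare, here's a toy model of the two request paths. Every per-step cost below is a placeholder I made up purely for illustration (I have no measured figures), so the shape of the comparison is the point, not the numbers:

-------------

# Toy additive-overhead model of a large, server-bound data transfer.
# All per-step costs are invented placeholders; the point is only that
# the NAS path has more hops, each adding latency and CPU work.

COST_US = {                       # hypothetical microseconds per step
    "disk access":           5000,
    "fibre channel hop":       50,
    "ethernet hop":            80,
    "IP protocol stack":      150,
    "filer OS (Data ONTAP)":  200,
}

san_path = ["disk access", "fibre channel hop"]
nas_path = ["disk access", "filer OS (Data ONTAP)",
            "IP protocol stack", "ethernet hop"]

san = sum(COST_US[step] for step in san_path)
nas = sum(COST_US[step] for step in nas_path)
print(f"SAN: {san} us   NAS: {nas} us   extra overhead: {nas - san} us")

-------------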

However, a new technology, set to appear next year, is going to get rid of both of the weaknesses that I've mentioned, and, when combined with the rate at which Netapp filers are scaling in terms of capacity, should allow the NAS/SAN battles to reach Stage Four. Allow me to present DAFS:

http://dafs.netapp.com/info/faqs.html

DAFS, a standard pioneered by our favorite company (well, one of my favorite companies), can be implemented in both NAS and SAN environments, can run over ethernet, fibre channel, or any other network protocol, and allows for three major benefits:

1. It allows data requested by an application on a general-purpose server to be sent from a storage device to the application while bypassing the buffers within the general-purpose server's operating system (e.g. Unix, NT, Linux).

NAS/SAN implications: This feature should benefit both camps equally, with the only real winners being us consumers, who might have to wait a few seconds less than usual while an online purchase order gets processed, and thus might have the order go through before the e-tailer handling it goes bankrupt.

2. It allows a server handling a data request to bypass the IP protocol stack while accessing a filer hooked up to an IP network.

NAS/SAN implications: Since fibre channel SANs don't make use of IP as of right now (although they might in the future), this development's meaningless for such networks at this point in time. On the other hand, for block data transfers only meant to reach an application server, it goes a long way toward closing the performance gap between NAS and SAN systems.

3. It allows a server accessing an NAS device to bypass the latter's operating system completely en route to making a file request.

NAS/SAN implications: While obviously irrelevant to SANs, this is absolutely huge for the NAS market. First, as one might guess, when combined with benefit #2, it puts NAS systems on completely equal ground with SANs with regards to application server requests for block data transfers; but it doesn't stop there. The ability to bypass the operating system of a filer also allows NAS systems to scale infinitely from a processing resource perspective.
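
To tie the three benefits together, here's a little sketch of the request path they dismantle. The step names are my own shorthand for the components discussed above, not official DAFS terminology:

-------------

# Which step of a traditional NFS-over-IP request path each DAFS
# benefit above removes. Step names are my own shorthand.

TRADITIONAL_PATH = [
    "filer OS (file system processing)",   # bypassed by benefit #3
    "IP protocol stack",                   # bypassed by benefit #2
    "server OS buffer copy",               # bypassed by benefit #1
    "application memory",
]

BYPASSED_BY = {
    "server OS buffer copy":              "benefit #1",
    "IP protocol stack":                  "benefit #2",
    "filer OS (file system processing)":  "benefit #3",
}

# With all three benefits, data lands directly in application memory.
dafs_path = [step for step in TRADITIONAL_PATH if step not in BYPASSED_BY]
print("traditional path:", " -> ".join(TRADITIONAL_PATH))
print("DAFS path:       ", " -> ".join(dafs_path))

-------------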

The reason NAS systems can scale this way is simple: just as DAFS benefit #3 lets a Unix/NT server directly access the disks on a Netapp filer, it also lets a Netapp filer directly access the disks of another Netapp filer. For example, if Continental Airlines were to have a configuration of 20 F840 filers for the management of its databases, and a request were made for a piece of data that happened to be on filer #18, but the CPU on that filer happened to be handling a number of requests at that time, the given request could be sent to, say, the CPU on filer #4, which could then access content on filer #18 just as quickly as it'd be able to access content on its own directly attached disks. And if this isn't good enough, Continental could buy ten more Netapp filers, only with no disks installed, and use them as additional CPU resources for managing data requests made to the other filers. Or they could buy twenty more filers if they wanted to. Or fifty, for that matter, as many as they consider necessary; and just like that, there goes any scalability advantage that the Celerra/Symmetrix or any other NAS/SAN hybrid might have had were Netapp filers to remain standalone devices; and I guess we can't forget that those Netapp filers would still have a considerable cost advantage.
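
Just to make the Continental example concrete, here's a minimal sketch of that kind of routing. The configuration mirrors the hypothetical above, and the "send it to the idlest CPU" rule is my own stand-in for whatever policy an actual implementation would use:

-------------

# Minimal sketch of DAFS-style request routing: any filer's CPU can
# serve data on any other filer's disks, so a request for busy filer
# #18's data goes to whichever CPU is idlest -- including diskless
# boxes added purely for processing headroom. All hypothetical.

class Filer:
    def __init__(self, name, has_disks=True):
        self.name = name
        self.has_disks = has_disks   # diskless boxes are pure CPU resources
        self.queued = 0              # requests currently queued on this CPU

pool = [Filer(f"F840-{n:02d}") for n in range(1, 21)]               # 20 with disks
pool += [Filer(f"CPU-only-{n:02d}", has_disks=False) for n in range(1, 11)]

def route(data_home: str) -> Filer:
    """Serve a request for data stored on `data_home` with the
    least-loaded CPU in the pool; with DAFS, any CPU can reach it."""
    worker = min(pool, key=lambda f: f.queued)
    worker.queued += 1
    return worker

pool[17].queued = 40     # filer #18's own CPU is swamped
for _ in range(3):
    print("data on F840-18 served by", route("F840-18").name)

-------------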

Of course, one problem could arise from the implementation of the kind of distributed architecture I just outlined: difficulty in efficiently managing all those processing resources. After all, if the resources of such a system, possessing so many individual processing units, aren't properly utilized, expenses related to CPU purchases will quickly run out of control, and the entire system will turn into a mismanaged, convoluted computational bureaucracy, one that would make Joseph Heller roll over in his grave.

But fortunately enough, Network Appliance has already thought this problem out, as shown by their purchase of WebManage (http://biz.yahoo.com/bw/000905/ca_network.html), a company whose software is able to channel individual data requests to the device best fit to handle a given request. Traditionally, such software (Akamai's FreeFlow is a good example) has only been used to manage requests made to internet servers carrying redundant content, whether housed in a single data farm or dispersed over a worldwide content delivery network. Enterprise implementations, and implementations involving file servers that don't all carry the same content, have been rare; but such implementations would prove highly necessary if the kind of DAFS architecture I outlined earlier were to take flight. Now, with this kept in mind, notice how the press release surrounding the WebManage buyout made reference to the use of the company's software in enterprise as well as internet environments. It all seems quite interesting, IMO.
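
For what it's worth, here's my own illustration of why this kind of dispatch is trickier than the classic redundant-content case: the dispatcher has to weigh where a file actually lives against which CPU is free. To be clear, this is a conceptual sketch with made-up names and policy, not a description of WebManage's actual product:

-------------

# Content-aware dispatch sketch: unlike balancing across identical web
# servers, filers hold *different* content, so the dispatcher weighs a
# file's home filer against the idlest CPU. Entirely hypothetical.

placement = {                       # hypothetical file -> home-filer map
    "/db/reservations.dat": "F840-18",
    "/db/schedules.dat":    "F840-04",
}
queued = {"F840-18": 40, "F840-04": 2, "CPU-only-01": 0}

OFFLOAD_THRESHOLD = 5   # made-up policy: offload only when clearly worth it

def dispatch(path: str) -> str:
    home = placement[path]
    idlest = min(queued, key=queued.get)
    # With DAFS, any CPU in the pool can reach the data; without it,
    # the request would have to wait on the home filer's own CPU.
    chosen = idlest if queued[home] - queued[idlest] > OFFLOAD_THRESHOLD else home
    queued[chosen] += 1
    return chosen

print(dispatch("/db/reservations.dat"))   # swamped home -> CPU-only-01
print(dispatch("/db/schedules.dat"))      # lightly loaded home handles it

-------------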

Whenever I try to analyze the quality of the management team of a company that I happen to be invested in, or happen to be considering investing in, one of the things I try to do is look at the moves the company's management has made in the recent past, and ask myself whether, if I were in their position, I would have made the same moves, or would have done something different. It becomes a test, of sorts; and of the numerous companies that I've put to this "test" of mine, up until now, only two, Broadcom and Qualcomm, have passed with flying colors. It seems that it's time for me to add a third company to this list.

Eric

