POST OF THE DAY
Advanced Micro Devices
Two Articles at Anandtech

By eachus
November 25, 2009

Posts selected for this feature rarely stand alone. They are usually a part of an ongoing thread, and are out of context when presented here. The material should be read in that light.

Just finished reading two articles at Anandtech. The first, about the 5970, didn't have much to say, although they did have problems when trying to overclock two different cards. Other than that, the summary says it all:

There are two things that become very clear when looking at our data for the 5970:

1. It's hands down the fastest single card on the market
2. It's so fast that it's wasted on a single monitor


This raises two interesting questions: Should you invest in flat panel makers? Will we be seeing, say, 3840x2400 monitors next Christmas? Other than that, the article does point out that AMD has a lot of work to do to make real Eyefinity monitors and systems available. If people are going to set up three-monitor Eyefinity configurations with 20-degree angles between them, the drivers should be able to take this into account.

The other article is on AMD's server plans for the next two years: http://it.anandtech.com/IT/showdoc.aspx?i=3681 I agree with most of what they say, although I don't expect Intel's Nehalem-EX to take the high end of the server market by storm. My primary reason for that, though, is the inherent conservatism of this market, and for good reason. A balky web server may cause lots of headaches and late nights for the IT staff, but if the main financial database server for a company goes down for more than a few minutes, it will be discussed at the next Board of Directors meeting. If the head of IT can say, "We really need that upgrade you took out of the budget, but we think we can deal with the overload on the current hardware for now"... fine. But if the money got spent, and the system that failed is the new server the IT head promised would eliminate these crashes last year? Ouch!

Incidentally, this does not apply to replacing components. The article mentions companies buying low-power Opterons by the thousands to cut energy costs, and raises its metaphorical eyebrows at the price of the EE Opterons. But if a sharp pencil and some paper shows a payback in 6-9 months from lower power and cooling costs? Why not? And the net performance improvements on the upgraded servers (where that happens) are effectively free.
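
To make that sharp-pencil exercise concrete, here is a minimal sketch of the arithmetic. Every number in it (the wattage delta, PUE, electricity price, and CPU price premium) is an assumption made up for illustration, not a figure from the article, and the payback you compute is very sensitive to all of them:

```python
# Back-of-the-envelope payback for swapping in low-power (EE) Opterons.
# All inputs are illustrative assumptions, not figures from the article.

watts_saved_per_socket = 55      # assumed delta between a standard part and an EE part
pue = 1.9                        # assumed PUE: each watt saved at the CPU also saves cooling
price_per_kwh = 0.11             # assumed electricity price, $/kWh
ee_premium_per_socket = 75.0     # assumed price premium of the EE part, $

hours_per_month = 24 * 30
kwh_saved = watts_saved_per_socket * pue * hours_per_month / 1000.0
savings_per_month = kwh_saved * price_per_kwh   # $ saved per socket per month

payback_months = ee_premium_per_socket / savings_per_month
print(f"${savings_per_month:.2f}/month per socket; payback in {payback_months:.1f} months")
```

With these particular made-up numbers the payback works out to roughly nine months; a larger premium or cheaper power stretches it out quickly.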

This is one area where AMD's long lifetime for the Socket F platform is paying big dividends. (Well, reducing quarterly losses for AMD.) It used to be that upgrading servers, beyond occasionally expanding memory, was just not done. New systems had a lot more to offer than shiny new cases, and new CPUs, even if they didn't require new sockets, often had power requirements that weren't met by older motherboards. AMD has been building new Socket F CPUs for several years now with power requirements identical to the preceding generations. I'll mention in passing the upgrade to Jaguar at Oak Ridge. All the existing CPUs were recently replaced with Istanbul CPUs, and suddenly Jaguar is #1 on the Top 500 list. (It must be nice for Jack Dongarra at the University of Tennessee, after being involved with the Top 500 list almost from the beginning, to have the #1 site in his own backyard.)

Whether sold through the OEM, or purchased by the owner of the system through a vendor, every upgraded system is a sale for AMD. And frankly, if there are any Opteron servers sitting around with dual-core CPUs in them, the owner, or the responsible IT manager, is a fool. Upgrading from Barcelona to Istanbul? Run the numbers. In some cases it will easily be worth it. Shanghai? Probably hold on a few months and eventually plan to upgrade the motherboards, and/or the entire system.

But my guess is that AMD will have milked that upgrade chain for as much as they can, and the IT managers involved should be happy. Whether framed as greening the server suite or as a cost-cutting measure, those upgrades should have been good for their careers.

What about going forward? The article twigs to part of AMD's strategy that I've already commented on. The Intel Nehalem-EP dual-socket systems changed how many IT people thought about some of their larger applications, and about server consolidation. The initial logic on server consolidation was that the more existing systems you could cram into one virtualized server, the better. Choosing a four- or eight-socket system allowed you to reallocate resources among the (virtual) servers. Having a few big boxes instead of a lot of little ones meant that the bin packing problem* you had to solve was much easier. However, Nehalem-EP systems were much less expensive than four- or eight-socket servers, and the jump in capacity meant that you weren't making your bin packing problem that much worse. (There is another bin packing problem when assigning processor affinities, but I digress.)

AMD intends to climb on that bandwagon by offering dual-socket systems with 8 to 24 cores next year. They will raise that with Bulldozer to 32 total cores. We could argue here about how much a Bulldozer core will be worth compared to a Nehalem-EX core, but I'll save that for my next post. ;-) The important takeaway here is that between AMD and Intel, what used to be the high-end quad-socket x86 market has become the mid-range dual-socket x64 market. The question in the dual-socket market now is how to size a system and do the necessary price comparisons. (If you have been paying attention, you will know that sizing a system today is more about power and cooling than it has been in the past. But converting your system requirements to total cost of operation for various vendors' offerings is now often done by a spreadsheet program. Hint: Don't use one offered to you by a system vendor. ;-)
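
For what the spreadsheet version of that looks like, here is a minimal sketch comparing two hypothetical dual-socket boxes over a three-year life. The prices, wattages, and rates are invented placeholders, not vendor figures:

```python
# Rough total-cost-of-operation comparison for two hypothetical dual-socket
# servers. All figures are invented for illustration; substitute real quotes,
# measured wall power, and your local utility rate.

def tco(purchase_price, avg_watts, years=3, price_per_kwh=0.11, pue=1.9):
    """Purchase price plus power and cooling cost over the service life."""
    kwh = avg_watts * pue * 24 * 365 * years / 1000.0
    return purchase_price + kwh * price_per_kwh

server_a = tco(purchase_price=5500, avg_watts=320)   # hypothetical vendor A quote
server_b = tco(purchase_price=4800, avg_watts=500)   # hypothetical vendor B quote

print(f"Vendor A: ${server_a:,.0f} over 3 years")
print(f"Vendor B: ${server_b:,.0f} over 3 years")
```

With these made-up numbers the box with the lower sticker price ends up costing more over three years, which is the sense in which power and cooling now drive the sizing exercise.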


* Technically, in computer science the bin packing problem refers to a problem where you have a number of bins, with a size or weight limit on each, and a number of objects to pack into the bins. For complexity theory purposes, the problem is whether or not it is possible to fit all of the objects into the bins. In practice, when bin packing problems come up, you are much more interested in implementing the solution than in saying yes, it is possible, and walking away. ;-) Fortunately, so far all algorithms for solving bin packing problems do so by construction, so you get an acceptable packing list along with a yes answer.

For those of you acquainted with complexity theory, the bin packing problem is NP-complete. However, problems with a few big bins and lots of objects are easily solved, as are problems with lots of little bins and fewer objects than bins. The toughest cases occur with around twice as many objects as bins, and, of course, where the total weight to be packed is close to or equal to the total load capacity.
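
For what it's worth, here is a minimal sketch of first-fit decreasing, one of the classic constructive heuristics of the kind alluded to above. The item weights and bin capacity are made up, and it is not guaranteed to use the minimum number of bins; it just hands you a packing you can actually act on:

```python
# First-fit decreasing: a simple constructive heuristic for bin packing.
# It does not guarantee the minimum number of bins, but it produces an
# explicit packing list, which is what you need in practice.

def first_fit_decreasing(weights, capacity):
    bins = []                             # each bin is a list of item weights
    for w in sorted(weights, reverse=True):
        for b in bins:
            if sum(b) + w <= capacity:    # first bin with room wins
                b.append(w)
                break
        else:
            bins.append([w])              # no bin had room: open a new one
    return bins

# Example: packing (made-up) VM memory footprints, in GB, onto 64 GB hosts.
vm_memory = [30, 24, 20, 16, 16, 12, 8, 8, 6, 4]
for i, host in enumerate(first_fit_decreasing(vm_memory, capacity=64), 1):
    print(f"host {i}: {host} -> {sum(host)} GB")
```

In the server-consolidation setting above, the "weights" would be per-VM memory or CPU demand and the "bins" the physical hosts.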