POST OF THE DAY
Advanced Micro Devices
What Will the Future Hold?

By eachus
September 27, 2004

There has been quite a lot of discussion here about whether AMD's 90 nm process should be considered successful. I think it is a little early to decide that, but it does bring up a major issue down the road. (This post discusses single vs. dual core from a customer point of view. The next post will discuss what it will take to make faster CPUs in the future.)

As process nodes get smaller, the amount of impedance (and pure resistance) in the interconnects increases. To some extent this can be fought by putting in more interconnect layers. But that means that the signals using the upper metal layers have to go through a longer via stack to get there. At some point, this approach will no longer work. I'm beginning to get the feeling that Intel is already there for the Netburst architecture, and if AMD is not there at 90 nm, they are certainly close.

Both AMD and Intel are getting a significant boost from smaller die sizes. Intel has spent a lot of that on additional cache, and is about to spend more. (The Prescott 2M chip expected late this year or early next year will be a Pentium 4 with 2 Meg of L2, unlike the current P4 EE chips, which have 512k of L2 and 2 Meg of L3.) It should be a decent performer for Intel. Of course, AMD already offers both 512k and 1 Meg L2 chips, and there will be 90 nm 1 Meg L2 Athlon64 FX chips early next year.

AMD could create a 2 Meg L2 Athlon64 FX, but why? The customers who might consider it will be buying dual-core Toledo chips with 2 Meg of L2 instead. ;-) And therein lies the first issue I wanted to discuss.

Let's say it is a year from now and you have a choice between a 3 GHz single-core Athlon64 and a dual-core chip at the same price. At what clock speed do you opt for the dual-core alternative? To tie things to a single number, let's assume that an X*3 GHz dual-core chip has 1.8 times the throughput of the 3 GHz single-core chip, and X*0.9 times its speed on single-threaded code like Primes. (I could write a decent version that would use both cores, but ignore that for now.)

Obviously, if X is below about 0.55 (the point where 1.8*X drops under 1.0), you might as well buy the single-core chip. And if X is close to 1.0, there is no point in buying the single-core chip. But where in that range do we draw the line? It obviously depends to some extent on what you use your computer for, but I think we can concentrate on an 'average' consumer who wants a high-performance CPU.
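
To make that model concrete, here is a minimal C++ sketch of the arithmetic; the 1.8 and 0.9 factors are just the assumptions above, and the rest is the break-even calculation:

    #include <cstdio>

    // Assumed model: a dual-core chip clocked at X times the single core gives
    //   1.8 * X the throughput on well-threaded code, and
    //   0.9 * X the speed on single-threaded code.
    double dual_throughput(double x)    { return 1.8 * x; }
    double dual_single_thread(double x) { return 0.9 * x; }

    int main() {
        // Throughput break-even: 1.8 * X = 1.0, so X = 1/1.8 = 0.556.
        std::printf("throughput break-even: X = %.3f\n", 1.0 / 1.8);

        // At X = 0.7 (picked below), relative to the 3 GHz single core:
        std::printf("at X = 0.70: throughput %.2fx, single-thread %.2fx\n",
                    dual_throughput(0.7), dual_single_thread(0.7));
        return 0;
    }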

Intel's Hyperthreading has started makers of CPU-hog software thinking about how to take advantage of multi-threading, and I think we can assume that even if Microsoft doesn't do much to take advantage of multi-threading, graphics drivers will. I won't go into the gory details, but as long as you have to deal with the possibility of multiple applications talking to the graphics driver at once, you might as well put any heavy lifting in the driver, such as software shading, in its own thread. You quickly get to a design where the code for graphics calls to the OS runs in the caller's thread, and a separate thread is the only one that talks to the graphics card. If you are really tricky, you can have a circular buffer for input to the second thread, and that thread never needs to deal with locks.
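
To illustrate the circular-buffer trick, here is a minimal C++ sketch of a single-producer/single-consumer ring: the graphics-call code pushes commands, the driver thread pops them. The Command type and all names here are hypothetical stand-ins, not any real driver's API:

    #include <atomic>
    #include <cstddef>

    struct Command { int opcode; /* payload... */ };

    // One producer thread, one consumer thread, no locks: only the producer
    // writes tail_, only the consumer writes head_, and the atomic
    // acquire/release pairs order the accesses to the slots.
    template <std::size_t N>  // N must be a power of two
    class SpscRing {
        Command buf_[N];
        std::atomic<std::size_t> head_{0};  // advanced by the consumer
        std::atomic<std::size_t> tail_{0};  // advanced by the producer
    public:
        // Producer side: returns false if the ring is full.
        bool push(const Command& c) {
            std::size_t t = tail_.load(std::memory_order_relaxed);
            if (t - head_.load(std::memory_order_acquire) == N) return false;
            buf_[t % N] = c;
            tail_.store(t + 1, std::memory_order_release);
            return true;
        }
        // Consumer side: returns false if the ring is empty.
        bool pop(Command& out) {
            std::size_t h = head_.load(std::memory_order_relaxed);
            if (h == tail_.load(std::memory_order_acquire)) return false;
            out = buf_[h % N];
            head_.store(h + 1, std::memory_order_release);
            return true;
        }
    };

The point is that neither side ever blocks the other; the producer just retries (or drops work) when the ring is full.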

In any case, I'm going to stick my neck out and say that even in the most CPU-intensive applications, we will see at most 60% of the CPU cycles needing to be in a single thread. That means that as applications continue to migrate to multithreaded code, users will find that a dual-core system feels as fast as a single-core system once X reaches about 0.66: the dominant thread runs at 0.9*X of single-core speed, so covering 60% of the work in the same wall-clock time takes X = 0.6/0.9. But users always want faster, so let's go with X = 0.7.
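
A rough sketch of that parity calculation, using the same assumed numbers (the 60% serial fraction and the 0.9 single-thread factor are the guesses above):

    #include <cstdio>

    int main() {
        // Guesses from the text: at most 60% of cycles stuck in one thread,
        // and single-threaded code running at 0.9 * X on the dual core.
        const double serial_fraction = 0.6;
        const double st_factor       = 0.9;

        // The dominant thread sets the wall-clock time, so the dual core
        // feels as fast once serial_fraction / (st_factor * X) <= 1:
        std::printf("parity at X = %.3f\n", serial_fraction / st_factor);

        // Rounding up to X = 0.7 and applying it to real clocks:
        const double x = 0.7;
        std::printf("3.0 GHz single core ~ %.1f GHz dual core\n", 3.0 * x);
        std::printf("2.8 GHz dual core   ~ %.1f GHz single core\n", 2.8 / x);
        return 0;
    }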

Back to the original question, then: we expect desktop customers to prefer a 2.1 GHz dual-core CPU to a 3 GHz single-core CPU, all other things being equal. Or better, say that a 2 GHz dual-core Athlon FX would be preferred to a 2.8 GHz single-core FX-55. It has taken a while to get here, but here is the point: at 130 nm, dual-core CPUs were possible for AMD, but they definitely would not have been cost effective. At 90 nm, the picture is very different. Even if AMD is limited to 2.8 GHz at 90 nm (late next year, of course), that will provide the equivalent of a 4 GHz 130 nm single-core chip, in the same die size and at lower power consumption.

Notice that the same logic will apply to Intel and their Irwindale chip, with one further caveat. I don't think Intel can even think about dual-core chips at 90 nm. On the other hand, they do expect to be at 65 nm next year. So I think we will see the first dual-core chips from Intel being based on the Pentium-M architecture and fabricated at 65 nm. I also expect this chip to require at least twice as much power as a Dothan chip. That's not a big deal in one sense; in fact, even 80 watts would be a big improvement over Prescott (and Prescott 2M). AMD, of course, won't have a die size advantage over Intel while Intel is at 65 nm and AMD is at 90 nm. But I just don't see that as a big deal. Both AMD and Intel will be able to produce more than enough single-core chips to fill demand. I expect that many of the AMD single-core chips will be dual-core chips with one failed (and turned off by laser surgery) core. Intel will probably do the same. When AMD gets to 65 nm, they will have the die size advantage back in spades--until Intel gets to 45 nm. In reality, it won't be that simple...

