Like a wine aficionado uncorking a new bottle, I like to ruminate on the monikers big tech companies give to their latest architectures. I was a big fan of AMD's
The soon-to-be-released Fermi takes a large step forward in advancing NVIDIA's CUDA architecture, which lets developers program the GPU in a popular C-based language. According to NVIDIA, Fermi will contain up to 512 CUDA cores and is expected to run 400% faster than previous NVIDIA chips while enabling new advanced operations. Such features suggest that Fermi is at least partially targeted at the high-performance computing (HPC) market, where researchers and math-intensive applications can take full advantage of everything Fermi aims to offer.
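For the curious, here is a minimal sketch of what that C-based GPU programming looks like in practice. The kernel name, array sizes, and launch configuration below are illustrative choices, not anything from NVIDIA's marketing materials; the point is simply that CUDA code reads like C, with a function (the "kernel") executed in parallel by hundreds of GPU threads:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Minimal CUDA kernel: each GPU thread adds one pair of array elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Host-side input and output buffers.
    float ha[1024], hb[1024], hc[1024];
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vectorAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

Each of the hundreds of CUDA cores NVIDIA touts runs threads like the ones above in parallel, which is why math-heavy HPC workloads map so naturally onto the architecture.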
That HPC segment happens to be a market that Intel is targeting as well, and there is speculation about a looming turf war between the two tech giants. Intel's graphics processing unit (GPU) offering, Larrabee, takes a different approach to a similar result. Larrabee uses the x86 instruction set, just like modern central processing units (CPUs). By leveraging a large number of processing cores (32 in most demonstrations, versus the two or four cores found in most standard CPUs), Larrabee is able to handle the tasks that current-generation GPUs perform while still offering an attractive hardware tool set for the often eclectic needs of supercomputing.
Much has been made of the increasing CPU/GPU convergence, and the hype machines have been working overtime. Intel, which has partnered with DreamWorks
The Fool's own technology editor, Eric Bleeker, spoke with an NVIDIA senior vice president who downplayed the apparent competition and convergence. That doesn't exactly jibe with the shot across the bow on the NVIDIA website: "Fermi delivers supercomputing features and performance at 1/10th the cost and 1/20th the power of traditional CPU-only servers." Broadpoint Amtech analyst Doug Freedman recently made a case for NVIDIA going so far as to enter the CPU market itself.
So what's an investor to make of all this? Clearly, there is a general awareness of the importance of GPU architectures in the computers of tomorrow. A platform that can scale well from the consumer level on up will not only be lucrative, but could also command a great new nickname. I'm not sure which is more exciting.