POST OF THE DAY
Advanced Micro Devices
Response to LongHook

By onarchy, LongHook
March 11, 2003

Posts selected for this feature rarely stand alone. They are usually a part of an ongoing thread, and are out of context when presented here. The material should be read in that light.

[This is part of an ongoing discussion stemming from a previous Post of the Day. The debate is over computing power, upgrade resistance, and killer software applications that might drive consumers to purchase new hardware.]

onarchy, the Rebuttal:

Hi guys,

I read LongHook's post of the day with great interest. While I think he makes valid points, I really DO think there will be a market for CPU power in the future that far exceeds current computing power. And yes, much of this need for speed will be on a PC. I'm not necessarily sure it will be on a desktop, but more on that later.

1. Voxel 3D graphics

I believe that at some point in the future it will be useful to convert from the current vertex-based graphics to a voxel-based system. Vector graphics was invented mainly due to computing limitations. It was a smart way to obtain full-screen graphics without having to do heavy computations for every single pixel (which is what you do in a voxel-based system). So on a screen with 1024x768 = 786,432 pixels you may only need, say, 5000 vertices to have a quite impressive 3D scene, so the advantage of a vertex-based system is clear.
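
To put rough numbers on that (purely illustrative Python, and ignoring the fact that a vertex carries more state than a single pixel):

# Rough per-frame work comparison -- illustrative numbers only.
pixels_per_frame = 1024 * 768       # 786,432 pixels to evaluate in a per-pixel (voxel) approach
vertices_per_scene = 5000           # a respectable scene by today's standards
frames_per_second = 60

print("pixel evaluations/sec: ", pixels_per_frame * frames_per_second)    # ~47 million
print("vertex evaluations/sec:", vertices_per_scene * frames_per_second)  # 300,000
print("ratio: ~%dx" % (pixels_per_frame // vertices_per_scene))           # ~157x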

But there is still a lot of overhead per vertex, and the complexity of scenes is increasing at a rapid pace, and working with vertices does impose artificial limitations and create unusually hard problems. A voxel model has intrinsic advantages: 1) a constant (hence predictable and scalable) computation load per frame, 2) truly volumetric models, 3) real ray tracing and hence better image quality.

At some point in the future there WILL occur a transition to voxel graphics. But not today, simply because the computational requirements are WAY out of your average desktop PC's league. (The main problem is actually bandwidth.) Once this technology becomes a standard, PCs will actually have to be quite powerful compared with today's. I know, it's still 5 years or so off, but it IS a future killer app waiting to happen.

2. Voice recognition

This subject has been beaten to death by others, but for completeness I include it here. Voice recognition requires a LOT of CPU power, and it WILL come, and it will be a mass market.

3. Image recognition

Image recognition is in my view the big brother of voice recognition. This is an area of research that has made great improvements lately, cf. e.g. www.image-metrics.com. It's actually closer than you think, and it will have a greater impact than even voice recognition. Imagine this: a security system in every home that monitors and recognizes who enters the home. Or this: an automated driving system in every car. The applications are many and I leave it up to your imagination to think of others. Definitely a killer app for CPU power, with room for a LOT of growth.

4. 3DTV

I have already mentioned voxel-based graphics, and one killer app for this is 3DTV. It too is closer than you think. 3DTV products exist even today (cf. www.4d-vision.com) and better products are just around the corner. A 3DTV will require an immense amount of computing power and bandwidth to display 3D movies. The horizon for this product is 5 years down the line. Suddenly your TV will become a power-hungry little thang.

5. Artificial intelligence systems

Since I've already mentioned speech and image recognition, I might as well pull AI out of my hat. I worked in an AI company called Webmind from 1997 to 2000. The company failed due to lack of funding after the dotcom crash, but during that time we gained a lot of insight into what is needed to run an AI system. Power, power, power, and not to mention a LOT of power! At the time we were running the meanest systems available, namely quad systems with 800 MHz Pentiums and 2 GB of RAM, and we weren't even close to what was needed. We made some rough calculations on how much memory and CPU power would be needed to run a truly intelligent system, and the numbers were quite ugly: the CPU power requirement would be some 1000 x 800 MHz Pentium equivalents and 1 terabyte of RAM. Needless to say, much of the reason we failed was that we were simply too far ahead of the available computing technology, and to some extent we still are. However, you can get some quite mean intelligence out of a quad or octo Opteron server with 32 gigs of RAM. In other words, AI is currently within practical reach, although a full-fledged digital mind is still way off.
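
To give a feel for the gap, here is some crude clock-rate-only arithmetic (the 2 GHz per Opteron figure is my own assumption, and real performance obviously doesn't scale linearly with MHz):

# Crude clock-rate-only arithmetic -- real performance doesn't scale linearly with MHz.
required_cpu_mhz = 1000 * 800      # ~1000 x 800 MHz Pentium equivalents
required_ram_gb = 1024             # ~1 terabyte of RAM

webmind_cpu_mhz = 4 * 800          # one of our quad 800 MHz Pentium boxes
webmind_ram_gb = 2

opteron_cpu_mhz = 8 * 2000         # a hypothetical octo Opteron at ~2 GHz per CPU (assumption)
opteron_ram_gb = 32

print("CPU shortfall then: %.0fx" % (required_cpu_mhz / webmind_cpu_mhz))   # ~250x
print("CPU shortfall now:  %.0fx" % (required_cpu_mhz / opteron_cpu_mhz))   # ~50x
print("RAM shortfall then: %.0fx" % (required_ram_gb / webmind_ram_gb))     # 512x
print("RAM shortfall now:  %.0fx" % (required_ram_gb / opteron_ram_gb))     # 32x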

In summary, there is an enormous potential for CPU-hungry applications, even for the mass market. Ironically, much of that power will be needed in what I would dub mass-market servers. That is, servers for your home: the server where you store all your movies, but also the server that monitors your home and to which you can eventually give instructions by speech. At first it will only understand a limited set of commands, but eventually you will be able to speak to it in something that resembles natural language. I am talking about a House Server. It will be as common as the electrical system in your home. Of course, at some point it will be commoditized, but the need for speed will grow faster than the production capability for at least a few more years.

Onar.

LongHook, the Response:

While I think he makes valid points, I really DO think there will be a market for CPU power in the future that far exceeds current computing power. And yes, much of this need for speed will be on a PC. I'm not necessarily sure it will be on a desktop, but more on that later.

I agree. It's not that more CPU horsepower can't be used, it's more that the additional cost can't be justified. So you have a chicken-and-egg syndrome -- do you write software that requires a radically more powerful system than people have or can justify, or do you change your software so that it runs on lower-end hardware?

Say you're a consumer who happily paid $900 for a computer, and two years later it still runs fine for everything you need. Now someone is telling you that you need a $2000 computer -- how good does the new tech have to be to justify this for everyone, not just the elite?

In most cases, the developer alters the system requirements because he doesn't want to be locked out of a broader market. In a few rare cases, your software is SO much more advanced that everyone will have to upgrade their systems in order to run it. The last time I can remember this happening was maybe Windows 95, but the market dynamics have changed radically since then. The migration from Win3.1 -> Win95 was about going from bad to not-so-bad. Subsequent upgrades from Win98 on have been incremental, and the typical consumer just doesn't notice.

1. Voxel 3D graphics

I believe that at some point in the future it will be useful to convert from the current vertex-based graphics to a voxel-based system.


There are immense technical hurdles to this, and computational power isn't the only one. Storage requirements are absolutely immense. And, honestly, it just doesn't look as good, not until we have hardware acceleration for voxel rendering so we can get filtering.

Vector graphics was invented mainly due to computing limitations. It was a smart way to obtain full-screen graphics without having to do heavy computations for every single pixel (which is what you do in a voxel-based system).

It wasn't computation, it was storage. Storing the vertices for vector objects was significantly less expensive than needing a framebuffer to store the color at every addressable pixel location.
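
A quick back-of-the-envelope comparison makes the gap obvious (the byte sizes below are just convenient round figures picked for illustration):

# Illustrative storage comparison -- byte sizes are convenient round figures, not a spec.
width, height = 1024, 768
bytes_per_pixel = 3                  # a 24-bit color framebuffer
framebuffer_bytes = width * height * bytes_per_pixel

vertices = 5000
bytes_per_vertex = 3 * 2             # x, y, z at 2 bytes each
vertex_bytes = vertices * bytes_per_vertex

print("framebuffer: %d KB" % (framebuffer_bytes // 1024))   # 2304 KB
print("vertex list: %d KB" % (vertex_bytes // 1024))        # ~29 KB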

But, this has nothing to do with voxels, since any voxel engine, in the end, is decomposed down to pixels.

Interestingly enough, however, the shift from vector -> raster is analogous to the shift from vertex -> voxel, with the associated costs and penalties. But there's an extra dimension now, and it's no longer bounded by screen space, which makes the vertex -> voxel transition orders of magnitude more expensive than the shift from vector to raster.

But there is still a lot of overhead per vertex, and the complexity of scenes is increasing at a rapid pace, and working with vertices does impose artificial limitations and create unusually hard problems.

A lot of overhead? Compared to what? A vertex-based system is going to have a minute fraction of the memory footprint of a voxel-based system. Scene complexity is increasing, yet modern GPUs are keeping up with this just fine, to the point that within a couple of generations we'll be approaching RenderMan-level graphics. All with vertices. (Note: the high-end computer graphics studios that use massive render clusters are still using vertices and traditional rasterizers.)

A voxel model has intrinsic advantages: 1) a constant (hence predictable and scalable) computation load per frame,

This isn't true. Voxels have a nearly completely unpredictable load, because a large part of it depends on the opacity and complexity of the scene in front of you and how far you cast your rays.
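
A toy ray-march loop shows why; everything below is made up purely for illustration, but the point is that the work per ray depends entirely on what the ray happens to hit:

# Toy ray march: cost per ray varies with scene opacity and ray length.
def march_ray(sample_opacity, max_steps=512, step_size=1.0):
    # Accumulate opacity along the ray; stop early once it is effectively opaque.
    accumulated = 0.0
    steps = 0
    distance = 0.0
    while steps < max_steps and accumulated < 0.99:
        accumulated += sample_opacity(distance) * (1.0 - accumulated)
        distance += step_size
        steps += 1
    return steps    # the work actually done for this ray

wall = lambda d: 1.0 if d > 2.0 else 0.0   # a solid wall right in front of you
fog = lambda d: 0.01                       # thin fog, or near-empty space

print("steps through wall:", march_ray(wall))   # a handful
print("steps through fog: ", march_ray(fog))    # hundreds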

2) truly volumetric models

Which are advantageous...how?

3) real ray tracing and hence better image quality.

Ray tracing sucks. It was neat in the 80s, but no one today that works in computer graphics considers ray tracing that interesting. Radiosity overtook it as the rendering methodology of choice for interiors sometime in the late 80s/early 90s, but even then, most professional computer special effects houses use RenderMan, which is a polygon shader system.

At some point in the future there WILL occur a transition to voxel graphics.

Maybe, but before we get there we have to see some research that addresses all the myriad problems it faces. Voxel rendering is rife with significant problems and limitations, and yet gives almost nothing back. It's elegant in its brute force, sure, but at immense cost in performance, memory and image quality. And you've now dictated the maximum resolution of your world based on the size of each voxel element. If you make that size too big, then as you approach finer-detailed objects you'll start to see nasty artifacts. If you drop the voxel size further, memory requirements skyrocket -- each halving of the size of your base voxel will end up increasing your memory footprint by a factor of EIGHT.
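
The factor of eight is just the cube of doubling the resolution on each axis; a few lines of arithmetic show how fast it gets out of hand (the world size and the single byte per voxel are assumptions picked purely for illustration):

# Halving the voxel edge doubles resolution on all three axes: 2**3 = 8x the voxels.
world_size_m = 100.0        # a modest 100 m cube of world -- purely illustrative
bytes_per_voxel = 1         # wildly optimistic; color and material data want more

voxel_edge_m = 1.0
while voxel_edge_m >= 0.125:
    voxels_per_axis = int(round(world_size_m / voxel_edge_m))
    total_mb = voxels_per_axis ** 3 * bytes_per_voxel / (1024.0 * 1024.0)
    print("%.3f m voxels -> %8.1f MB" % (voxel_edge_m, total_mb))
    voxel_edge_m /= 2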

Voxels, if anything, are completely unscalable.

Now, you can save a ton of memory by then taking your voxel representation and compressing and optimizing the hell out of it into "skins", but that's computationally expensive and, in the process, you've basically turned your voxels into polygons for a net wash.

Finally, if voxels ever show up, they're going to have to be qualitatively superior to hardware accelerated triangle renderers, and I don't see this happening.

2. Voice recognition

This subject has been beaten to death by others, but for completeness I include it here. Voice recognition requires a LOT of CPU power, and it WILL come, and it will be a mass market.


You're missing the point. Sure, voice recognition may be here at some point, and it will be mass market, but it's not going to be the driving force behind new CPU sales. It will show up because there will be enough CPU power to use it, not vice versa.

3. Image recognition

The examples you cite are not mass market, they're completely and utterly niche. Mass market is "everyone who has a computer will be using this technology". Image recognition, if it's used at all, will come about, once again, naturally as the result of the proliferation of Web cams and higher computing power.

[3DTV] is closer than you think.

No, it's not, because I'm still waiting for my HDTV to become relevant.

5. Artificial intelligence systems

This is so nebulous that I won't bother responding. "AI" means about fifty thousand different things.

I don't dispute that some of the things you mention will become important in the future, but they're simply not interesting enough to force consumers to start buying brand new machines outside of their normal cycles. And the cycles are lengthening.

And with the plummeting dollar/power ratio we're seeing, a lot of really nifty new technologies will be viable on the mass market commoditized machines of the future, so again, a net wash -- that new system someone buys four years from now will probably run some new and different kinds of apps that they'll be excited about, but it won't be enough for most consumers to up and buy a system that isn't commodity priced.

It is a very rare application that will make the computing public, at large, need to upgrade their systems. The constant upgrade cycles of the 80s and 90s were happening because computers were owned by a distinct breed of consumer that was not mass market. AND there was a constant cycle of new software that taxed existing systems, and often this software had enough benefits to justify the upgrades.

Both of these are false now. Computer owners today are no different from car owners or people who own a television set -- the computer is an appliance. They don't have the "I want to upgrade!" mindset that computer owners of 10 years ago had. In addition, there just isn't the influx of new and interesting apps to force the market, as a whole, to shift to more computing power. Look at how many people are still running Win98 today.

And this isn't conjecture: just look at the rate of advancement in software over the past three years. Remove games, and nothing has changed, with the possible exception of the advent of digital video editing.

-Hook

