Advanced Micro Devices
Next Killer Application



By Grunchy
March 14, 2003



New "killer apps" are the result of some enabling hardware. For example, CD-ROM drives made "multimedia" the killer app of the early 1990s, and the internet made browsers and the world-wide-web the killer app of the late 90s. It doesn't take long for all the "must-have" applications of a new hardware innovation to be developed and proliferate. The enabling hardware could plausibly be a faster CPU, but that's not likely to happen in the next few years anyway.

IMO, the next killer app will be the result of some human interface breakthrough. That could be either some kind of virtual reality, or some slick input device, or both.

That makes good sense. The only reason Windows was ever such a killer app is the mouse and the 2D-accelerated display adapter. Without those, you've got a text display driven by a keyboard, which makes the computer virtually useless to the majority of today's market. The indispensable CLI is the #1 reason Linux cannot become more than a niche OS. As soon as it becomes as user-friendly as Windows, look out! The percentage of the human population who actually gives a !@$%? what "grep" means will never, ever exceed 1%. Sorry, Linux fans.

However, Windows is a pitiful product, even 2000 Pro and XP. Why? Compare its GUI to just about any stylized GUI of any video game. The video game is always flashier, gives better cues about what it's doing, and has greater responsiveness and stability. Maybe that's because the context of a video game is far smaller than the context of the Windows GUI, but the point is, Windows has a looooooong way to go.

The next killer app is going to be A.I. Let me give you an example. Look at how crummy Linux is for the simplest of tasks: you go through a massive manual learning which scripts to run, which config files to update, and how to use vi to make changes, and eventually you can figure out how to change your monitor resolution from 1280x1024 to 800x600. Then look at how crummy Windows is: you click "Start", then "Settings", then "Control Panel" (but what other settings are there to be changed? Why not just "Control Panel"?). Then you figure out you need "Display" instead of something simple like "Monitor" or "TV", then you gotta click the "Settings" tab (isn't that how we started this whole exercise?), then you move a slider called "Screen area". That is definitely unforgivable.

Now how about A.I.? What if it had a keyboard / dialogue interface that went something like this:

Human: what is the display resolution?

Computer: it is currently 1280x1024 with millions of colors. Maybe that makes the web page you've got on the TV screen right now a little too small to read, because I notice it is mostly textual. May I suggest a smaller display resolution that will make everything slightly bigger and more legible?

Human: nah, let me tell you what. I like the high-res because everything is less jaggy. You change the text rendering so it's as big as if it were 800x600 and I'll be happy with that. Oh, and from now on make sure that all text display is about that size, because I'm tired of squinting at you.

Computer: ok, you got it boss!
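The dialogue above can be sketched as a toy program. This is just a minimal illustration, not any real product: the `DisplayAssistant` class, its keyword-matching rules, and the 1280/800 text-scaling arithmetic are all hypothetical choices made for this example, in the spirit of the simple pattern-matching chatbots that already existed at the time.

```python
# Hypothetical sketch of the imagined dialogue interface: crude keyword
# matching plus a bit of remembered state, standing in for "contextual A.I."
class DisplayAssistant:
    """Toy agent that answers questions about display settings and
    remembers the user's text-size preference across turns."""

    def __init__(self):
        self.resolution = (1280, 1024)
        self.text_scale = 1.0  # 1.0 = text rendered at native size

    def reply(self, utterance: str) -> str:
        text = utterance.lower()
        if "what" in text and "resolution" in text:
            w, h = self.resolution
            return f"It is currently {w}x{h} with millions of colors."
        if "text" in text and ("bigger" in text or "800x600" in text):
            # Keep the high resolution (less jaggy), but scale the text
            # up as if the screen were only 800 pixels wide.
            self.text_scale = self.resolution[0] / 800
            return f"OK, boss! Text will now render at {self.text_scale:.1f}x size."
        return "Sorry, I didn't catch that."

agent = DisplayAssistant()
print(agent.reply("What is the display resolution?"))
# -> It is currently 1280x1024 with millions of colors.
print(agent.reply("Make the text as big as if it were 800x600."))
# -> OK, boss! Text will now render at 1.6x size.
```

Of course, the real thing would need far richer language understanding and context than two `if` statements, but the state-carrying loop is the essential shape: the computer remembers what you told it and applies it from then on.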

Such dialogue systems already exist, and somebody is going to commercialize them and revolutionize the entire computer market. All the big players seem to think voice recognition is going to be the next killer app. That's because they are too close to see what people really need; all they can see is what they have already invested in. Microsoft somehow thinks voice commands will make menu systems easier to navigate, without realizing that nobody actually wants menu systems in the first place, let alone that nobody wants to talk to a non-living computer! The mere thought of it is completely, utterly inhuman... voice recognition is definitely not for this person, anyway.

Mark my words: contextual A.I. is going to revolutionize computing. The mouse is going to give way to some kind of laser target device like quadriplegics use, so the computer always knows which part of the TV you're looking at (and therefore knows what it needs to be working with). We should all be vigorously pursuing our repetitive stress disorder lawsuits to ensure these fundamental technological changes get made fast enough... :)
