I'm sure Silicon Valley has other ideas up its proverbial sleeve. Chips keep getting faster, smaller, cheaper to make, and more power-efficient. The rate of improvement may change over time, but the technology always moves forward.
Aren't computer chips fast enough already?
Likewise, the need for speed isn't going away. You know what I mean if you've ever found yourself twiddling your thumbs while your computer, phone, or media center takes its sweet time doing something important.
Sure, today's chips can juggle dozens of applications simultaneously, stream 4K video without a hiccup, and support real-time multiplayer gaming. However, tomorrow's challenges are just around the corner. We're looking at a future where virtual reality could be as common as our morning coffee, where artificial intelligence analyzes massive data sets to predict global trends, and where real-time language translation happens as effortlessly as a casual conversation. Every one of those workloads calls for computing hardware with massive horsepower.
So, there will always be a need for more silicon-powered performance. What's "good enough" today will seem quaintly slow in a few years and barely usable in a decade or two.
Keeping up with the demand for performance is even more critical when a quick calculation really matters. Self-driving cars must react to ever-changing traffic conditions in the blink of an eye. Fast computing can have life-or-death implications in medical monitoring systems, emergency response solutions, air traffic control platforms, and more. Computing isn't all fun and games.
And so, hardware designers strive to keep Moore's law alive in some form. If that means moving the goalposts from time to time -- tracking performance per watt, or adding more processing cores instead of boosting the performance per core -- then so be it. The show must go on.