The world is changing, and we must change with it.

Every year, we hear of more incredible breakthroughs, which push the boundaries of our understanding further out into the realm of what was once far-fetched science fiction. In fact, our understanding of these new developments is often presented in sci-fi terms: the 3-D printer is Star Trek's replicator; Minority Report presaged responsive advertising and biohacking; Star Wars' hologram videos are already in development; and in hundreds of other sci-fi books and films written years ago you'll find echoes of what's happening right now. But this strange new world will create strange new challenges. We may be the ones who set that world in motion, but it's tomorrow's children -- our children and grandchildren -- who will ultimately build it.

What sort of future will we shape for them, and how will we teach them to handle it? For thousands of years, the answer was simple. Your life looked much like your parents', and your children's lives looked much like yours; the world you shaped looked like the world you inherited. The Industrial Revolution changed that, and the digital revolution continues to shatter the similarities between one generation's growth and the next. Tomorrow's children will become tomorrow's adults in a world where the sort of life taken for granted over the past half-century -- school until you're 18 (or 21, or 25...), a decent job at a living wage, and a retirement supported by pensions and Social Security -- will seem as archaic as a nation of farmers does today.

You might be tempted to reject the notion of a future that looks radically different from our present, but recent history offers more than enough evidence of dramatic change from one generation to the next. Someone born in 1800 might have lived to see their children become the first to travel on machines -- the locomotive or the steamship -- while someone born in 1900 might have lived to see their children travel farther in an aircraft in one day than their parents traveled in a decade. Optimists may be wrong on the particulars in the short run, and occasionally wrong on the big picture as well, but techno-pessimists who say that something will never happen put no time limit on being proven wrong. As it turns out, the time between a pessimistic prediction and a dramatic technological breakthrough -- or simply the maturation of an early-stage technology -- has often been short enough that those who bet against progress were still alive to find out how wrong they were.

  • "A rocket will never be able to leave the Earth's atmosphere." -- The New York Times , 1920.

The first rocket to accomplish this "impossible" feat was the German V-2 in 1944, and the first American rocket to leave Earth's atmosphere went up two years later, in 1946. The Times did not retract its claim until 1969, as Apollo 11 roared toward the Moon.

  • "Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia." -- Early science writer Dr. Dionysius Larder , 1828 .

At the time, rail travel typically topped out at about 15 miles per hour. Passenger locomotives first reached speeds of 60 miles per hour by 1848, and first broke the 100-mile-per-hour mark in 1893.

  • "There is no likelihood man can ever tap the power of the atom." -- Robert Millikan , winner of the 1923 Nobel Prize in physics, 1928.

German chemist Otto Hahn was the first to split a uranium atom by bombarding it with neutrons, in experiments beginning in 1934. The first controlled, self-sustaining nuclear chain reaction took place beneath the stands of Stagg Field at the University of Chicago, under the direction of Enrico Fermi, in 1942.

  • "The horse is here to stay, but the automobile is only a novelty -- a fad." -- President of the Michigan Savings Bank, to Ford investor and inaugural chairman Horace Rackham , 1903.

Some 11,000 automobiles were built in 1903. A decade later, the industry built over 370,000 vehicles. By 1924, Ford dominated the auto industry, producing 1.7 million out of an estimated 3.6 million vehicles. Rackham sold his shares back to Henry Ford in 1919 for $12.5 million (roughly $300 million today), netting a 250,000% gain on his initial $5,000 investment.

  • "Heavier-than-air flying machines are impossible."  -- Scottish mathematician and creator of the Kelvin temperature scale William Thomson, Lord Kelvin , 1895.

The Wright Brothers completed their first successful flight at Kitty Hawk a mere eight years later, in 1903.

  • "There is no reason anyone would want a computer in their home." -- Ken Olsen , founder and president of Digital Equipment Corporation, 1977 .

The "big three" of the first personal computing era all went on sale in 1977. Over 700,000 personal computers were sold in 1980 , and by 1987 annual sales surpassed nine million machines. By the time Olsen retired from DEC in 1992, nearly 65 million personal computers  were in use in the United States alone.

A number of people undoubtedly believed all of these things at one time or another. The failure of their vision shows the limit of much human thinking, which says, "I cannot see it happening, and so it can never happen." Cave-dwelling nomads never imagined cities and fields of grain when they struck the first spark of a controlled fire. Medieval peasants in their fields, toiling near the shadow of a windmill, couldn't believe that their descendants might one day fly thousands of feet overhead and hold the knowledge of the world in the palm of their hands. Today, something still more incredible waits just beyond the limit of our own vision, which already encompasses possibilities so powerful that any other generation might view them as the works of a god on Earth.

The march of progress has been strong for over two centuries now, lifting billions out of a hardscrabble life largely indistinguishable from that of the first farmers. Technology steadily improved before the Industrial Revolution as well, but at a rate too slow for those living through it to appreciate. Why did it take so long to reach the point where we now take progress for granted? Because progress accelerates. It took hundreds of thousands of years to get from fire to the farm, but only a few thousand years more to get from the farm to the aqueduct. Major leaps forward took less and less time. Aqueduct gave way to cannon, which gave way to printing press, which gave way to steam engine, which gave way to telegraph. Progress accelerates because it proceeds at an exponential rate. This is what the progress of human technology looks like on a timeline that stretches back to the dawn of man:


Source: Wait But Why, "Putting Time in Perspective." 

On this timeline, you wouldn't even be able to see the development of the earliest modern computer. ENIAC wouldn't show up until midway through the light blue bar that marks the beginning of a 90-year-old life. That bar shrinks to a sliver on the span of recorded history, and disappears completely on the bar marking the existence of modern humans.

Many people once looked at their era's progress and expected it to continue in a linear fashion. As we've already seen, that's not what has actually happened in the course of technological development, and it's not what we should expect in the lives of tomorrow's children. Technology often improves exponentially, and the difference between linear and exponential is stark: linear growth adds one penny to a pile of pennies every day, while exponential growth doubles the number of pennies added to the pile each day. With linear growth, you become a millionaire in about 274,000 years. With exponential growth, you become a millionaire in 27 days and a trillionaire in less than two months. Even if you start with a pile of a million pennies on the linear side and begin with fractions of a penny on the exponential side, the latter is destined to eclipse the former before long. It's the difference between perpetual poverty and imminent abundance.
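If you want to check those numbers yourself, a few lines of Python will do it. This is a minimal sketch; the only inputs are the penny rules and dollar targets described above:

```python
# Linear vs. exponential penny piles, as described above.
# Linear: one penny added per day. Exponential: the number of pennies
# added doubles each day (1, 2, 4, 8, ...).

def days_linear(target_pennies):
    # One penny per day reaches the target after exactly that many days.
    return target_pennies

def days_exponential(target_pennies):
    days, pile, todays_add = 0, 0, 1
    while pile < target_pennies:
        pile += todays_add   # add today's pennies to the pile
        todays_add *= 2      # tomorrow's contribution doubles
        days += 1
    return days

MILLIONAIRE = 100_000_000            # $1 million in pennies
TRILLIONAIRE = 100_000_000_000_000   # $1 trillion in pennies

print(days_linear(MILLIONAIRE) / 365)   # ~273,973 years to $1 million
print(days_exponential(MILLIONAIRE))    # 27 days to $1 million
print(days_exponential(TRILLIONAIRE))   # 47 days -- less than two months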

Your smartphone is a perfect example of exponential growth. It is millions of times more powerful than ENIAC, which was built during World War II to help the U.S. military figure out how to kill its enemies more efficiently. ENIAC could perform roughly 5,000 operations per second. If computing technology had improved in a linear fashion after ENIAC, our computers would be able to perform about 350,000 operations per second today, with 5,000 more operations possible with each subsequent year's progress. Instead, a high-end smartphone built about seven decades after ENIAC can perform 12 billion operations per second, and high-end PC processors can perform nearly ten times as many operations per second as most smartphones.

When you consider the cost and the accessibility of that smartphone's computing power relative to ENIAC, it looks even more impressive. ENIAC cost about $500,000 to build, which in our time would be equal to $6 million. A basic model of the aforementioned smartphone costs about $200. That's a 99.99% reduction in price to get 2.4 million times the processing power. There's not much point in graphing the change between the two technologies, because on virtually any scale you wouldn't be able to see one of them.
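The same sort of quick sketch makes the ENIAC comparison concrete. The implied doubling time at the end is a simplification on my part, since real progress came in uneven steps rather than a smooth exponential curve:

```python
import math

# ENIAC vs. a modern high-end smartphone, using the figures cited above.
ENIAC_OPS = 5_000            # ENIAC: ~5,000 operations per second
PHONE_OPS = 12_000_000_000   # high-end smartphone: 12 billion ops per second
YEARS = 70                   # roughly seven decades of progress

# Linear world: 5,000 more operations per second with each passing year.
print(ENIAC_OPS + ENIAC_OPS * YEARS)   # 355,000 -- "about 350,000" ops/sec

# Actual improvement, and the pace it implies if growth had been a smooth
# exponential curve (a simplifying assumption, not a figure from the text).
speedup = PHONE_OPS / ENIAC_OPS
print(speedup)                         # 2,400,000x the processing power
print(YEARS / math.log2(speedup))      # roughly one doubling every 3.3 years

# Price: ENIAC's build cost in today's dollars vs. a basic smartphone.
ENIAC_COST, PHONE_COST = 6_000_000, 200
print(1 - PHONE_COST / ENIAC_COST)     # ~0.99997 -- a better-than-99.99% cut
```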

Exponential growth in technology has been so well-observed for so many years that none but the most pessimistic prognosticators predict that progress will slow. We are (almost) all futurists, in a way. That doesn't necessarily mean that flying cars are imminent, since a safe and efficient transportation network of personal aircraft would involve more complex considerations than just sticking a turbine on the back of a Volkswagen. Predictions often fall into the trap of confusing the possible with the feasible, and the feasible with the inevitable. We could build flying cars, but since earth-bound cars result in nearly 11 million accidents every year in the United States alone, would we really want to add a third dimension to our daily commutes? In other cases, predictions of the future fail to consider alternate possibilities. We might all have flying cars once we automate all of them to ensure that we never run into any lousy fliers on our way to work -- or we might not need flying cars at all, because virtual-reality technology will have given us the ability to be anywhere we want without leaving the house.

What we're about to do is not to imagine a future that will be, but one that might be, based on our understanding of technology today. We may find that we've underestimated the explosive change to come, but we can only look forward with the knowledge we have today. The shape of the future cannot be known with exact precision, but we can perhaps trace its outline with the rough tools now available. In time, tomorrow's children will fill in that shape with what they've learned, and it's up to us to make sure they have the right tools for the task.

To help us trace that outline, we will draw on the knowledge of several noted futurists with diverse specialties:

Award-winning science fiction novelist Charles Stross, whose work often focuses on the long-term consequences of accelerating technological improvement. His novel Accelerando is most closely aligned with our investigation today, but Rule 34 and Singularity Sky also offer readers some eye-opening visions of an accelerating future.

Software executive Martin Ford, whose book The Lights in the Tunnel examines a future where much of the economy has been automated. Published in 2009 at the low point of the Great Recession, this book examines the economic outcomes of many of the possibilities we'll discuss here.

Gerontologist Aubrey de Grey, founder of the SENS (Strategies for Engineered Negligible Senescence) Research Foundation and one of the foremost anti-aging theorists. His book Ending Aging explores the worldwide effort to stop the only disease with a guaranteed 100% mortality rate. In later segments, we'll examine some of the possible social and economic impacts on tomorrow's children should this effort succeed.

Technology writer and theorist Michael Chorost, whose experiences overcoming hearing loss with a cochlear implant helped inform World Wide Mind, an exploration of a world where computers are as much a part of the human body as hands or feet. As a scientist and as a modern-day "cyborg," Chorost is uniquely qualified to investigate the social issues that might arise from this merger of man and machine.

Professor Andrew McAfee, whose work at MIT's Center for Digital Business informed Race Against the Machine -- coauthored with fellow professor and CDB director Erik Brynjolfsson -- an examination of the impact of technology on the workforce and society. McAfee is a prolific writer and speaker on both the causes of and solutions to a more automated world, and as a longtime educator, he also brings authority to any examination of tomorrow's education reforms and improvements. McAfee and Brynjolfsson's second book on an increasingly automated future, The Second Machine Age, will be published in early 2014.

Inventor, entrepreneur, and eminent futurist Ray Kurzweil, whose books The Age of Spiritual Machines and The Singularity Is Near are perhaps the most well-known texts on the outcome of accelerating change. Kurzweil has been one of the world's leading futurists for over two decades, and he also boasts a long record of technological achievement and business success dating back to his childhood. His research into accelerating change has greatly informed this and many other discussions of future progress. Kurzweil became Google's director of engineering at the end of 2012, and in this role he will help the company develop the most advanced technology in the world -- no modest responsibility in a company that's already produced self-driving cars and augmented-reality glasses and that recently waded into the challenge of radically extending human lifespans.

You'll find their thoughts throughout this article, helping to add depth, color, and shape to the outline of possible futures awaiting tomorrow's children. Nothing is certain, and we may not be able to imagine what tomorrow brings, but we can begin to prepare for its possibilities today.

2015: The foundations for success

"Young people have a marvelous faculty of either dying or adapting themselves to circumstances." -- Samuel Butler, The Way of All Flesh 

Right from the start, tomorrow's children will enter a world where their success will be primarily shaped by two attitudes: their attitudes toward technology, and their attitudes toward money. These are far from the only attitudes that will shape the success of tomorrow's children, but without technology and without money, society as we know it would be impossible. Even if our children choose to live their lives by other principles, the guiding forces of technology and money will ultimately affect the outcomes of their major life decisions at a very basic level. Developing the right attitudes in tomorrow's children toward these forces -- as we all were once guided by our parents -- will be critical to their long-term success. But what are the right attitudes to adopt? There's no one answer that will work for all children, but in a fast-moving future, a willingness to adapt will provide greater opportunity than an unbending ideology.

The sensible response to accelerating change is to provide our children, from an early age, with the knowledge they'll need to properly command tomorrow's technologies, so that they'll be well prepared to master tomorrow's jobs and control tomorrow's machines. Children of Luddites will either find it impossible to thrive with their parents' reactionary toolsets, or they'll be belatedly forced to adapt to the reality of change with a more limited range of useful skills. Since change keeps moving faster, the longer one waits to move forward with it, the harder it will be to succeed in the world built by that change.

But simply accepting the inevitability of technological progress won't be enough for our children to truly thrive. Nearly everyone today embraces new technology as its benefits become clear, but a surprisingly small number of young people gain any real fluency in the technologies that shape so much of our lives. We wouldn't call a child an expert at engine repair if all they know how to do is smack a carburetor with a rock. Yet the notion that children must be technology experts simply because they spend so much of their lives surrounded by technology remains as pervasive today as it was when children actually were on the cutting edge of technology, during the far rougher-edged times of the '70s and '80s.

Simply using an iPhone from morning to night does not make someone knowledgeable about computing systems, and we should neither assume that it does nor attempt to step in to solve basic problems for tomorrow's children when problems inevitably arise. This demands the development of lifelong inquisitiveness in tomorrow's children -- an important quality in someone trying to navigate a world where technology is improved on at ever-faster rates.

Not everyone needs to be a computer programmer, but everyone should have a basic understanding of how computers work. Few people are mechanics, after all, but when something goes wrong with a car, its owner should at least understand the need for repair. And unlike the gap between ordinary drivers and mechanics, the difference between basic competency and true fluency can often be measured in millions or billions of dollars -- the person using Instagram might entertain their friends with cleverly filtered pictures, but the people who built Instagram got a billion-dollar buyout. The person using Instagram well might get a few more clicks and likes for their updates, but the person who can put an image through Photoshop and create something entirely original can at least offer potential employers a diverse creative toolset, even if they aren't ultimately interested in finding a job as a graphic artist.

Source: Paul Inkles (Playaz Design) via Flickr.

If we accept the idea that technology and society will become ever-more closely aligned as we move on to tomorrow, then at the very least, we owe it to tomorrow's children to help them become interested in understanding, exploiting, and adapting to the rapidly improving technologies that will govern their lives. We can't allow the next generation to grow up with the thought that the Internet is a "series of tubes," or let tomorrow's leaders fence themselves off in the walled gardens of the few major corporations that control so much of our digital world today -- especially when those same corporations are likely to one day be pushed aside as technology races forward. Thinking broadly and creatively will be important to tomorrow's techies, because building and deploying tomorrow's technology is likely to require considerations we've never had to face before. Simply knowing the code won't be enough.

More importantly, everyone should be aware of the possibilities that tomorrow's technology offers the world as it spreads through society. The latter industrial era of the Baby Boomers offered reasonable lifestyles to those who could build a box. Machines can build boxes now, and plenty of other things besides, so -- to use a hackneyed phrase -- success will be found by thinking outside that box, and outside the process that built it.

Michael Chorost took this approach to writing World Wide Mind. When I asked him whether there might be a time frame for the adoption of the proposed "collective telempathy" technology at the heart of his book (essentially a network of brain-to-brain connections), he instead highlighted his thought process, which he described as follows:

I'm trying to enlarge our imaginative scope. To break out of the naive assumption that tomorrow's technologies will be about letting us do what we do now, only better.

So you ask, what good is [collective telempathy]? And it's sort of like trying to explain the use of Twitter to someone from 1950. It's hard to think of plausible uses now. If you explained Twitter to someone from 1950, they'd say, "Why can't you just write a memo?" But new technologies create new social realities in which they become not only useful but indispensable. The future isn't just better than what we have now, but fundamentally different.

As technology progresses from massive and distant to tiny and intimate, development choices will increasingly require moral and social considerations as well as hardware and software considerations. These new developments will be, as Chorost says, fundamentally different from the reality we accept today. Tomorrow's children can't be locked into rigid thought processes if they're to effectively cope with a world where new technologies create larger impacts in less time with each major leap forward. What will we mere flesh-and-bone humans think of people who've implanted machines in their minds? Will we zealously pursue ways to manipulate our genetic code, or outlaw the mere effort to look closer into these new techniques?

Even if life expectancy flatlines from here on out, many of tomorrow's children can be expected to live clear to the next century. They will almost certainly be exposed from a rather young age to transhumanist developments -- that is, efforts to surpass the limits of human biological functionality with technological augmentations. It would be a grave disservice to them if we were to ignore the human side of the technological challenges they'll face, because they certainly won't be able to. Even before the rise of transhumanism will come the proliferation of automation technologies, which offer the potential to radically reshape the way the world works -- or doesn't work, if enough jobs are given to machines without considering new ways to support a world with a much smaller workforce. Changing the way we think about work also requires us to change the way we think about money, which brings us to the second core attitude tomorrow's children will need for success in this strange new world.

Source: FamZoo Staff via Flickr.

Before we can even begin to educate our children to succeed in an accelerating future, we need to understand the world we, as their guardians, are preparing for them as we apply greater levels of technology to our interactions with money -- whether that comes in the form of changing business practices, changing political attitudes, or changing economic policies. That requires serious thought, not only on the meaning of money in a world where fewer people may be needed at work, but on the meaning of work itself -- to say nothing of what investing, social insurance programs, and other methods of wealth accumulation and redistribution will look like in a world where work might mean something different than it has for centuries.

Attitudes toward money have for centuries ranged from extremes of collectivism to extremes of individualism. However, no matter its structure, a modern economy always tends to bestow fantastic rewards on a few high achievers while offering everyone else modest rewards in the best of times. Pure socialism failed because party leaders sought to funnel more to themselves, while the average worker had little incentive to care about his performance so long as the only rewards were those that met basic needs. Pure capitalism, on the other hand, has in the past created staggering monopolies for a few and widespread oppression and misery for everyone else. These extremes are today typically moderated by government, which attempts to protect consumers from the rapacious maw of capitalism by covering it with socialist-lite safety nets. This public-private partnership has helped stretch one century of phenomenal economic growth into two, but the healthy interplay between government and business can also be undermined if one side (populists or plutocrats) gains too much power over the other.

Capital is necessary for economic growth, but without labor -- or at least, without the income that labor earns -- capital can't be put to productive use. But what happens when less and less labor can produce greater and greater returns? That's been a common refrain in modern-day fears of automation, and we can already see the breakdown of this old relationship between labor and capital in the structure of our largest enterprises. At the height of its power in 1974, before being broken up as the last great monopoly in America, AT&T employed over a million people and earned about $5.5 billion in profit, which works out to about $26,000 in real profit per employee after adjusting the original $5,500 per employee for inflation. It was so large (it was the second-largest employer in the country, behind only the federal government) and so in control of a vital conduit in the American economic landscape that the government had already twice failed to break it apart before succeeding with a lawsuit first filed that year.

In our time, the closest thing we have to yesterday's AT&T is probably Google, which controls the conduits of Internet information to such a degree that it has no true competitor. At last count, Google employed about 42,000 people, and it earned $12.4 billion -- nearly $300,000 in profit per employee -- during its last four quarters. That's a more than tenfold increase in profit per person from the industrial age to the digital age, and Google (like many of its super-profitable peers) can funnel those profits back into advanced research projects, like the driverless car, that potentially threaten the livelihood of millions. You might say, "But there's a huge number of businesses that now depend on Google for their livelihood!" A huge number of businesses depended on AT&T, too; you really needed a phone to do business in the 1960s. What matters is the impact each employee at a systemically important company has on the bottom line, and there's really no contest between the old way and the new.
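Here's the arithmetic behind that comparison, sketched with the figures cited above; the inflation factor is simply the one implied by those figures:

```python
# Profit per employee: AT&T at its 1974 peak vs. Google, per the figures
# above. The ~4.7x inflation factor is the one implied by the cited numbers
# ($5,500 per employee in 1974 dollars ~= $26,000 today).
ATT_PROFIT, ATT_EMPLOYEES = 5_500_000_000, 1_000_000
GOOG_PROFIT, GOOG_EMPLOYEES = 12_400_000_000, 42_000

att_nominal = ATT_PROFIT / ATT_EMPLOYEES     # $5,500 per employee (1974 dollars)
att_real = att_nominal * (26_000 / 5_500)    # ~$26,000 in today's dollars
goog = GOOG_PROFIT / GOOG_EMPLOYEES          # ~$295,000 per employee

print(att_real, goog)
print(goog / att_real)                       # ~11.4 -- more than tenfold
```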

As more and more functions and occupations come under the purview of automation technologies, one would expect more and more money to flow toward the people and corporations in control of these technologies. They'll require fewer employees to produce greater results. Capital will have the upper hand -- that is, if nothing changes. Just as the children of the Great Depression grew up with a different way of looking at money than their profligate Roaring 20s elders, the children of the digital age will need to learn new ways of dealing with the changing relationship between labor and capital as they grow into the leaders who must eventually guide the world through a period of incredible change. We -- their parents and grandparents -- will also find ourselves grappling with these changes, but our response to them will indelibly influence the attitudes of the next generation when it comes of age.

That doesn't mean that any one particular economic ideology followed today is the only prism through which we can view a better future. Like those who lived through the Great Depression, our children may also be forced to adapt to changing times by cobbling together the best parts of divergent economic and political ideologies.

Whatever the outcome, it seems likely that tomorrow's children will have to force the current system to change in order to preserve the growth we have long taken for granted. Charles Stross views capitalism, in its present form, as an unsustainable system barreling toward an inflection point -- although what that inflection point might look like is anyone's guess. I asked him for his opinions on the ways in which the spread of technology might affect the world beyond the obvious impacts of technological unemployment and automation, and this was his response:

Source: Ave Maria Mõistlik via Wikimedia Commons.

Capitalism in its modern form is a very recent phenomenon in human history: we're only two and a third centuries past Adam Smith's The Wealth of Nations. It has produced undeniable and huge benefits, but also huge costs: climate instability, depletion of natural resources, pollution on a huge scale, and immiseration and unemployment in the name of progress. (It's all very well to talk about "creative destruction" and "disruption" creating opportunities for growth in the long term: in the short term, people get hurt when their source of steady income goes away.) Nor is it obvious that continuous compound economic growth is possible. Our physical resources are limited, as long as we only live on one planet, and while the intellectual property industries offer the promise of a growth sector that isn't hobbled by physical energy and matter constraints, the only way to maintain their profitability is to effectively tax copying -- which encourages rent-seeking.

I have no time for Leninism or other totalitarian ideologies, but it seems to me that Marx very astutely identified some paradoxes inherent in the unlimited pursuit of capitalism: notably its instability and periodic crises, the need for disequilibria in labor and capital flows in order to facilitate profit-taking, and its corrosive impact on non-fiscal human relationships. While today the repeated mantra is for outsourcing of government services to the private sector and loosening up of regulatory constraints, it doesn't seem to me to be plausible that corporations can provide public services at a lower cost than an efficiently managed state sector: where is their profit margin going to come from? I'm very much afraid that if we don't tame the runaway transnational capitalism that's taking root today, we're going to end up in a situation where we are compelled to embrace socialist solutions (including nationalization of corporate assets without compensation) if we're going to avoid mass starvation and civil unrest.

An important point to note is that our transnational corporations are, in a very real way, the first true artificial intelligences. They employ human beings, true, but the human components are ideally interchangeable: corporate goals are set out in their foundational documents, and the executives then guide the corporation's activities in pursuit of those objectives. Wherever possible, processes that can be automated are automated in order to shed the not-inconsiderable overheads of human employees. And the interests of a corporation are not necessarily aligned with those of the people who work for it, much less the citizens of the nations in whose environment the corporation exists. Much of what we take for a global free trade environment is in fact the product of intensive lobbying of the committees tasked with negotiating international trade treaties: we've seen regulatory capture emerge on a global scale, and the entities that dictate the shape of the free trade environment are not doing so with the best interests of humanity in mind.

The alternative ideologies on offer are all camped outside the big tent right now. The Greens are going to be with us in some form for the long haul -- and they are distinctly skeptical about the very concept of unlimited growth. The left is in general in eclipse, but some sort of new left synergy may emerge after a period of introspection. Racist, nationalist parties seem to wax and wane in line with economic instability, as witnessed in the frightening rise of the Golden Dawn in Greece. What I'd like to see is a new, pragmatic ideology based on humanism and human rights: that we should assess all proposals in terms of whether they hurt people, and aim to choose the policies that do the least harm. But I'm a hopeless optimist, and I see no sign of a large constituency emerging for such an ideology: as a species, we are prone to discounting long term benefits in favor of short term profits, even when doing so hurts us in the long run.

The interplay between technology and money is extremely complex and difficult to predict. The discovery of oil made both the automobile and the airplane possible, but it also created the most impressive monopoly ever seen in the United States and has propped up autocratic regimes for decades in the Middle East and elsewhere. People have been predicting an end to oil supplies almost from the moment oil was discovered, but historically it's been man-made shortages that have driven price spikes and efforts to broaden our energy horizons, not the natural drying-up of the world's wells. The tech industry and high finance have combined to create more millionaires and billionaires than any other industries, but both also express a strong drive to "streamline" business by eliminating what might amount to millions of jobs while producing only thousands more in their own ranks. We once thought that much of the populace would work in the tech industry, but except in the broadest sense of working with technology, that's never been true -- even today, the tech industry employs less than 3% of all American labor.

Ray Kurzweil believes strongly in the long-run ability of the economy to overcome threats of technological unemployment, because it's done such a good job overcoming these threats in the past. His rebuttal to critics who believe that robots are destined to take all our jobs echoes the comments Michael Chorost made about enlarging the scope of our imaginations:

Source: Ray Kurzweil via Google+.

This [technological unemployment] controversy goes back to the advent of automation in the textile industry in England at the beginning of the nineteenth century which marked the beginning of the industrial revolution. Weavers saw that one person with the new machines could replace dozens of weavers. New types of machines were introduced quickly and the weavers predicted that employment would soon be enjoyed only by the elite. They could see clearly the jobs going away but not the new types of employment that could not be described because they had not been invented yet. They formed a society to combat this called the Luddites. The reality turned out very different from their fears. New industries were formed and new jobs created that never existed before. The common man and woman could now have more than one shirt or blouse. The reality of jobs lost could be seen very clearly whereas the advent of new jobs that had not yet been invented were harder to understand.

If I were a prescient futurist giving a speech in 1900, I would say that a third of you now work on farms and another third in factories, but in a hundred years -- that is, by the year 2000 -- that will go down to 3% and 3%. That is indeed what happened; today it is 2% and 2%. Everyone in 1900 would exclaim, "My god, we'll all be out of work!" If I then said not to worry, you'll get jobs as website designers, database editors, or chip engineers, no one would know what I was talking about. In the U.S. today, 65% of workers are knowledge workers of some kind and almost none of these jobs existed fifty years ago.

So again today we can envision types of work that will go away through continued automation and it is difficult to envision the jobs that have not yet been invented.

The challenge in raising tomorrow's children won't lie in simply preparing them for a future that merely wraps what we have now in a sleeker, more technologically capable package. The people who have succeeded in navigating our changing world are the ones who think several steps ahead of the present, and that's the best attitude tomorrow's children can adopt. From an early age, their education is likely to follow one of two models: it will either prepare them for a future that's fundamentally different by developing the right mental tools to quickly adapt to change, or it will prepare them to be successful yesterday. In the next section, we'll examine the increasingly diverse educational opportunities you will be able to choose for tomorrow's children, to better understand how to help them build the mental tools they'll need to thrive.

2020: Teaching for tomorrow instead of yesterday

"[He] learned rapidly because his first training was in how to learn. And the first lesson of all was the basic trust that he could learn. It's shocking to find how many people do not believe they can learn, and how many more believe learning to be difficult. [He] knew that every experience carries its lesson."-Frank Herbert, Dune

Things start to get interesting once tomorrow's children enter primary school, or at least begin the educational equivalent of primary school, around the year 2020. A multitude of efforts at education innovation and education reform, applied from kindergarten to the college level, will have been ripening for over a decade. Many of these efforts may be only in the earliest stages of empirical validation, and some are bound to be rejected before tomorrow's children begin to learn, but some will endure and catch on in the public consciousness. The challenge for new parents (and parents-to-be) in the coming years will be navigating this maze of new educational resources to best build in their children the skills and attitudes that will allow them to thrive in a rapidly changing world.

Parents can no longer simply sit back and wait for change to come. This is as true today for parents who will send their children off to the local public school as it is for those with the means and the motivation to place their offspring in elite institutions with cutting-edge technology and teaching methods. The American public education system, despite undergoing various minor pedagogical transitions over the past few decades, still relies on the same basic model it's used for over a century: identify facts, drill facts, and recall facts. Yesterday's education was designed to produce obedient button-pressers well-suited for an industrial economy that needed masses of obedient button-pressers. Many of those button-pressing jobs have been, or will be, automated in the near future.

None of the futurists I talked to had more to say about the future of education than Andrew McAfee. I first asked him how he might reform education to prepare students for a fast-changing future:

Source: Andrew McAfee via Google+.

The ground rules for education should be that we need to turn out people who are good at things that computers are not good at. Now, that boundary is blurry these days, but I still have never seen a creative computer, or an innovative computer, or a computer that could realize what the problem was, let alone solve that problem. These kinds of skills still are in demand, and I think they are going to continue to be in demand.

The kinds of skills that we drill into students now -- the three R's -- via this factory model are going to become a lot less valuable. So my basic blueprint for educational reform is to start teaching kids creativity, innovation, problem identification, and problem solving skills, in addition to things like interpersonal skills and the ability to communicate clearly and effectively. We still need to be literate and numerate, even in the world that we are heading into.

No one should want to raise children who are ignorant of the basic facts of our world, but when so many of these facts can simply be found by searching the relevant string on Google -- searching for information online is, incidentally, one of the fundamental skills for the future currently being overlooked by most educators -- it becomes far more important that tomorrow's children know what to do with the facts they find. However, public education is one of the largest and most entrenched fields of employment in America, and at the same time it's highly fractured along state and local lines. The results of any major public-school reforms are likely to spread slowly and unevenly through the country, which makes the public system an inadequate option for parents desperately in search of a better alternative to the current fact-based factory model of education.

Due to the grindingly slow pace of public-education reforms, truly preparing children for the future is all but certain to require a hefty amount of parental involvement from an early age. I followed up on McAfee's proposals by asking what he thinks parents can and should do on their own to give their children the tools for success in this fast-moving digital world:

I was a Montessori kid, and I am a huge believer in that system for younger children. It taught me that the world was an interesting place, and my job, even as a little kid, was to go out and ask questions of the world and see if I could figure out the answers to them. They talk about the Montessori Mafia in the high tech industry [Amazon's Jeff Bezos and both Google co-founders are Montessori alumni], and I don't think that's a complete coincidence. It really does encourage a sense of a curiosity and a desire to understand and solve problems, and that comes in incredibly handy. So I am a big believer in that.

At higher education levels, I hate to sound like an old fuddy‑duddy but the advice is: Take difficult courses, hit the books hard, seek out good teachers, and take advantage of the astonishing education resources we now have online -- everything from Khan Academy to the MOOCs [massive open online courses] that are out there. If you are relying on anybody else to spoon-feed you your education and prepare you for the workforce and the economy of tomorrow, I think that's really risky. Take it into your own hands and fill up your toolkit. You are just never going to get another chance.

How do we ensure that education works for tomorrow's children? The answer, as McAfee alluded to, is partly pedagogy -- teaching students how to learn rather than what to learn -- but it's also partly dependent on improvements in technology. If the public education system can't provide the right learning environment on its own, then connected coursework and digital toolkits will have to fill in the gaps. For years, colleges have been extending the reach of great teachers with online coursework, and other digital learning platforms have been deployed into more and more schools at ever-lower grade levels. Done broadly enough, this sort of coursework fits the definition of a MOOC, a massive open online course where hundreds or thousands of eager learners can access the same great teacher at a fraction of the cost of standard tuition, if not for free.

Deploying education this way also makes it more readily quantifiable without necessarily funneling students through the narrow channel of standardized testing, and that data can help to analyze the quality of the educational program itself on an ongoing basis. The gamification of education is a hot topic right now, and it may not bear fruit in the way its backers expect, but the lessons found in developing effective games can be adapted into better ways of enhancing and analyzing student progress in virtual classrooms. After all, most modern games (at least the well-designed ones) give players multiple pathways to the completion of any given task, which turns the multiple-choice standardized testing paradigm on its head. Measuring learning doesn't have to mean accepting only a single right answer to any given question.

This doesn't mean that tomorrow's children will grow up with online coursework taught entirely by algorithm, but there's no reason why we shouldn't expect to see a growing range of worthwhile alternatives to the sclerotic public school system built into tomorrow's technologies. A MOOC led by one of the best teachers in the country, in any given field, ought to be readily available to everyone who wants to learn and has the foundational skill set necessary to understand the lesson. At first, these elite educators will require support structures -- teaching assistants, tutors, and administrators -- that will have to be staffed by humans, most likely those who felt stifled by the current public education system and decided to opt out along with their students. However, as natural-language interfaces and other software-based education tools improve, digital assistants can begin to step in for support, much as they already have in customer service roles.

Source: Steve Jurvetson via Flickr.

Add in the capabilities of tomorrow's gaming consoles, which are likely to be equipped with an immersive virtual reality system such as the Oculus Rift (or its superior descendant, the product of several more years of further development) as well as the motion sensors and lifelike graphics that are standard today, and you have the seeds of a truly virtual classroom that works as well as or better than the real thing. These technologies simply represent the beginning of a wholesale shift away from a model of education that has survived for thousands of years, and there's a long way to go before the dream of a virtual classroom becomes a reality for the masses.

However, we should never rush blindly into the next big thing in education. Online charter schools, which have been pushed heavily by for-profit education company K12 and which attempt to largely replace the bricks-and-mortar model with virtual classrooms, consistently show worse student performance than the public schools they aim to replace. The company's performance has been so abysmal that legendary investor Whitney Tilson likened it to predatory subprime mortgage lenders in a bearish thesis presented at the 2013 Value Investing Congress. When Atlantic writer Hanna Rosin attended a development conference for young children's educational apps, she found that despite their obvious professional enthusiasm for digital media, app developers still frequently restrict their own children from using this nascent format, wary that an overdependence on technology might stunt growth elsewhere. The impact of technology on education is often measured across years, and many of the options now available have a very limited track record on which to be judged, if they offer a track record at all.

While we might take issue with today's systems, we can be reasonably certain that those available to tomorrow's children will be much better than those available today. Seven years (from this article's publication to 2020) is a long time when you're talking about modern technology. It's the span between AOL's peak and the mainstreaming of Facebook, or in hardware terms, the difference between a Motorola Razr and an iPhone. And if technology is indeed improving exponentially, the next seven years will see a great deal more improvement than the last seven, or perhaps even the last 700. The difficulty will lie in balancing the desire for private-sector innovation with the understanding that a child's education should not be subject to the same profit motive as selling tablets.

Source: Intel Free Press via Flickr.

Parents want their children to succeed, and most parents are willing to go to great lengths to ensure that success. If that means adopting models of education other than those provided by public schools, or even those offered by the better private schools (often at an untenable cost to middle-class families), most parents would do it without batting an eye as long as the costs are bearable, even if there isn't much data to support the efficacy of the alternative. And there is a growing body of evidence that the public education system is simply failing to provide what it's historically been meant to: an opportunity for everyone.

There is already a marked level of inequality in American public schools when it comes to the graduation and achievement rates seen between rich and poor students, white and black or Hispanic students, and even students in one state and another. If the best options for tomorrow's education are only within reach of parents who can and will devote substantial effort to directing their children toward those options, the gap between the educational haves and have-nots is bound to widen even further. I asked McAfee whether, given all this, he felt that the link between education and opportunity might soon break down, and he countered that it already appears to have done so:

I think that link [between an ordinary education and opportunity] is already broken. Up until recently, it was a decent bargain that if you went to college, you had a decent career ahead of you when you got out. And that bargain really feels like it's falling apart and there is plenty of blame to go around. When you look at college graduation rates, only half of the students who enter full‑time four‑year undergraduate colleges and universities graduate within six years. The statistics are even worse at the community college level.

So the link between a just-OK education and opportunity is already fairly broken. Now, if you work hard, if you get a good education, or if you're a combination of smart enough, rich enough and fortunate enough to go to an elite institution, that link is still pretty strong. I think those links are going to remain strong for some time to come. But the link between a completely average education and economic opportunity is very rapidly getting weaker.

I think that higher education is going to find itself in serious trouble if and when employers stop requiring college degrees for their job applicants. Now, unfortunately, there is not a lot of evidence that that is happening -- in fact, the opposite is happening. For a lot of jobs, even jobs where we wouldn't think a college degree is necessary, employers are still saying, "show up with a B.A. or we are not even going to talk to you." That insulates the higher education industry from a lot of pressure and a lot of need to change.

I do think that there is some downward pressure on cost, because they have risen so high and some innovators out there are doing different things. But the industry is really only going to get shaken up when employers stop requiring that as a criteria for consideration. And it's starting. We see tiny little changes to the status quo, especially in high tech for the more technical jobs -- programming jobs and coding jobs -- they are relying less on your educational credentials and more on things like your GitHub score, your TopCoder score, your demonstrated abilities out there in the real world. But that's a tiny little part of the economy and just a small number of jobs.

It stands to reason that if more skill sets can be quantified, then more skill sets will be, which should lead to a proliferation of alternative credentialing systems like GitHub and TopCoder. Such systems may never be able to accurately assess more humanistic skills, which involve interacting with people, finding the connections between disparate data points, or using one's creative talents to move an audience. But when it comes to fields where the basic facts are known, digital learning platforms can be very helpful in building advanced abilities, and the better platforms might well emerge as alternatives to the high school diploma. If aspiring coders can log on from anywhere in the world to practice their skills on a recognized platform that can signal competence to potential employers, why can't aspiring scientists, economists, or engineers also have access to a similar system?

As these credentialing systems catch on, parents will face even greater pressure to help tomorrow's children maximize their educational potential. This connects back to the same thread of inequality running through many early alternative options, but inequality of opportunity isn't likely to be restricted to students. Transforming education from a highly localized experience to a digital one with a global reach is likely to have a profound impact on the hundreds of thousands of educators now employed in American schools, as well as on the millions of students who may choose to opt out of those schools.

Massively open teaching platforms, with the proper individualized support and incentive structure, would likely wind up rewarding the few elite teachers who can quantifiably produce the best results, creating a top-heavy system not unlike that of professional sports or the entertainment industries, where a few highly skilled individuals prop up a far larger number of modestly paid supporting players. Once the alternatives to public schooling become clearly superior in terms of both results and costs, it's likely that the shift will occur quickly. That shift may take place in the next ten years or it may not take place until tomorrow's children send their children off to be educated, but history has shown that when a new technology provides clearly superior results to earlier options, society is likely to adopt it rather quickly -- smartphones, for example, went from a niche product to one carried by over a billion people in just over five years.

A quick look at the employment figures in film or pro football might give us pause before we rush headlong into this brave new world. If MOOCs and other widely deployed education systems take off, tomorrow's children may very well encounter, quite early on in their lives, a technologically mediated winner-take-all environment in what's historically been a slow-moving industry that offers far better protections than most to substandard employees. Easier access to elite educators is of course beneficial to students, but it risks disrupting one of the steadiest sources of employment in the postwar era:


Source: Federal Reserve Bank of St. Louis, author's calculations.

It's quite rare to find someone in the public school system who completely wrecks the payroll curve, but it's a fact of life in pro sports and in Hollywood. Players in the four major sports leagues combined to earn $10.1 billion for the 2012 season, out of $38.4 billion in earnings for all employees in the spectator sports industry, the performing arts, and museums and parks -- the St. Louis Fed doesn't break out sports earnings separately. Less than 1% of the workforce in this part of the economy -- which totals about 540,000 people -- received 26% of its wages. Totals for top actors are harder to come by, but since the single highest-paid actor in film (Robert Downey, Jr.) earned $75 million over the past year, it's probably safe to assume that a similar chunk of the total earnings pie in these industries goes to top film and TV stars as well. Some teachers might -- and ought to -- earn much more for their efforts in this sort of environment, but most will find their earning power rather diminished if education transitions toward a winner-take-all design.
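For the curious, here's how lopsided that distribution looks in rough numbers. The wage figures are the ones cited above; the roster headcounts are my own ballpark assumptions, since the Fed data doesn't count players separately:

```python
# Wage concentration in the spectator-sports sector, per the figures above.
PLAYER_WAGES = 10.1e9    # four major leagues, 2012 season
SECTOR_WAGES = 38.4e9    # spectator sports, performing arts, museums, parks
SECTOR_WORKERS = 540_000

print(PLAYER_WAGES / SECTOR_WAGES)   # ~0.26 -> 26% of the sector's wages

# Rough roster headcount across the four leagues. These counts are outside
# assumptions for illustration, not figures from the article.
players = 1_700 + 1_200 + 700 + 450  # NFL, MLB, NHL, NBA (approximate)
print(players / SECTOR_WORKERS)      # ~0.0075 -> less than 1% of workers
```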

We may not necessarily want to create such an environment, but history shows that when talent can be effectively shared with the largest possible audience, the audience tends to gravitate toward a few of the topmost talents, often to the exclusion of anyone seen as even modestly less talented. Salman Khan and his Khan Academy have produced about 5,100 videos that have been viewed over 300 million times, all without any expectation of profit. If each viewing is something akin to one class in a public school, then that's equivalent to the work of about 65,000 single-subject teachers over the course of a full 180-day school year -- and this doesn't account for the fact that a number of school days aren't particularly productive. If a parent has a choice between sending their children to an average group of teachers at the nearest public school and providing them with access to the best teachers in the world in subjects the children are actually interested in, which parent is going to take the first option?
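That teacher-equivalence figure depends on an unstated class size, so treat the following as a rough sketch; the 25-student class is an assumption I've added to make the arithmetic reproducible:

```python
# Converting Khan Academy's view count into teacher-year equivalents. The
# 25-student class size is an assumed input needed to approximate the cited
# figure; the article itself doesn't state a class size.
VIEWS = 300_000_000
SCHOOL_DAYS = 180    # one full school year
CLASS_SIZE = 25      # assumed students per class session

class_sessions = VIEWS / CLASS_SIZE          # 12 million class sessions
teacher_years = class_sessions / SCHOOL_DAYS
print(round(teacher_years))                  # ~66,667 -- close to the cited 65,000
```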

Education is supposed to provide opportunity. If that's no longer the case -- if the desired outcomes for success in a fast-changing future retreat behind walls that can only be scaled by children whose parents have the means and the motivation to do something different -- then tomorrow's children, regardless of their education, will wind up experiencing inequality in a very real and direct way. Due to the way most societies are structured, with high-earners congregating together and low-earners occupying primarily low-earning social circles, tomorrow's children may not really understand the impact of an unequal education until years later. However, if technology begins to replace public schools with superior education opportunities, then inequality among students may very well diminish, but at the cost of greater inequality in a displaced teaching workforce. Since teachers are far from the only ones whose professions are at risk of widespread technologically motivated downsizing, tomorrow's children could begin confronting the reality of systemic unemployment by the time they enter middle school.

2025: What's the measure of a man in an age of machines?

"Many countries today have begun the transition from an industrial wealth system and civilization to a knowledge-based system, without appreciating that a new wealth system is impossible without a corresponding new way of life." -- Alvin Toffler  

By the age of ten, tomorrow's children may well have already experienced more upheaval than any generation since the one raised during the Great Depression. You, their parents and grandparents, will be largely removed from these changes at an educational level, but you won't be able to ignore the next wave of changes, which will begin to mature by the time tomorrow's children prepare to enter middle school.

The spread of automation, which seems likely to take hold in the transportation industry at roughly the same time as automated on-demand manufacturing matures with 3-D printing technology, will affect everything from the manner in which your children get to soccer practice to the fit and traction of the cleats on their feet. When enough of the world's two most important economic functions -- manufacturing and transportation -- can be run with minimal or no human input, then the structure of an economy we've taken largely for granted for over a century is likely to change in fundamental ways, as it once did during the Great Depression.

We know that 3-D printers can be useful in a wide range of ways, but their practical applications remain limited at present to a few niche operations, whether streamlining highly specialized manufacturing processes or creating complex new jewelry and other decorative trinkets. We also know, from the industry's inexorable progress toward more production with less labor, that manufacturers prefer automated machinery to large pools of workers. Despite the best efforts of streamlining experts, machinery has not yet advanced to the point where entire supply chains can be operated without human inputs -- with an emphasis on yet.

The combination of increasingly capable automation with increasingly robust 3-D printing machinery should eventually make real the possibility of an economic infrastructure that can dig raw materials out of the earth, process them, turn them into something useful, and then get that something to the people who want it, all without a single human hand guiding the process. Mining and farming are already highly automated processes, and even without a widespread adoption of 3-D printed manufacturing, it's certainly conceivable that new technologies will further reduce the need for human control to a bare minimum across the entire supply chain. Entirely automated production won't be close to reality by 2025, but we'll see early examples of workerless -- or nearly workerless -- factories, which may very well include banks of high-end 3-D printers churning away on customized models as orders roll in.

Source: Creative Tools via Flickr.

These devices have uses beyond on-demand manufacturing. As their material capabilities improve and unit costs decline, it might soon make sense for forward-looking families to install a home unit for their children's education. Access to next-gen 3-D printers, which are already in use today on projects as diverse as fashion shows and human tissue generation, can provide budding young scientists, entrepreneurs, and artists with a range of hands-on opportunities that most children today won't encounter until they enter college or a trade school.

These devices are natural extensions of a more self-directed model of learning, providing on-demand materials for coursework and a ready canvas for experimentation. A young child who's interested in learning more about the human body can print out a realistic 3-D organ at five times its normal size, the better to teach basic anatomical concepts before diving into more complex material. Another child who wants to learn about electronics might use a 3-D printer to create a functional circuit board. These new methods of hands-on learning would combine naturally with the digital classrooms we've already explored, and early access to 3-D printing will also prime tomorrow's children for a life in which many products can be acquired easily without leaving the home.
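Even the model-preparation step is simple enough for a classroom. Here's a minimal sketch of the organ-scaling example, assuming the open-source numpy-stl package and a hypothetical heart.stl model file -- both are illustrative choices, not recommendations from any curriculum:

```python
# Scale a printable organ model to five times its size, assuming the
# numpy-stl package (pip install numpy-stl) and a hypothetical heart.stl file.
from stl import mesh

model = mesh.Mesh.from_file("heart.stl")  # load the source model
model.vectors *= 5.0                      # scale every triangle vertex 5x about the origin
model.save("heart_5x.stl")                # write a slicer-ready enlarged copy
```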

Ray Kurzweil brought up an important point about the creative impulse behind 3-D printed goods when I asked him about the nature of an automated economy. Intellectual property rights will be very important for the designers of whatever models get formed up in the bellies of tomorrow's 3-D printers, because as this technology enters the mainstream, people will certainly print out the best designs thousands or even millions of times, much as popular digital music or e-books enjoy millions of downloads today. However, easy access to the necessary printable data could also lead to widespread piracy, as it has with today's digital music and movies. Here's what Kurzweil had to say when I asked him what he thought might happen to the economy of a world where most of our manufacturing needs can be completed in an entirely automated way:

While no one will really need to buy anything, they will want to. We can already see how this works today. There are millions of open source songs, movies, videos, books, essays and other media products that you can enjoy for free -- legally -- yet people still pay money to read Harry Potter, see the latest blockbuster movie, and listen to music from their favorite artist. So we have a coexistence of high quality open source forms of information along with proprietary forms. And this has not hurt the music, movie and publishing industries. There has, of course, been a radical shifting of business models, which is still going on, but the overall revenues for proprietary industries have held up.

By about 2020, we will be able to print out clothing on 3-D printers at a few cents a pound, and there will be many thousands of free open source designs. So people conclude that that will be the end of the fashion industry. But we will see the same realignment that we see in the media industries today. While you will be able to download cool open source designs, people will still spend money on the latest hot clothing designs from their favorite designers.

In the 2020s we will be able to print out virtually all the physical things we need -- snap together modules to build a house, food, and even replacement organs. That will dramatically raise living standards, but there will still be a proprietary market for all these forms of information.

Of course, there are likely to be downsides to this level of automation. Martin Ford also believes that 3-D printing might become the new foundation of manufacturing, but is less convinced that it will be a worthwhile means of earning a living in this new economy, intellectual property protections notwithstanding:

Some people believe that technologies like desktop 3-D printing will unleash a lot of new entrepreneurship and perhaps lead to a new craft-based or "maker" economy. I think that trend may be a partial solution; however, I'm very doubtful it will be a solution for most people. Even if everyone has a 3-D printer, it's not clear to me that most people would be successful generating an income on that basis. Also, I think 3-D printing is likely to be most important at industrial scale. The machines used in large factories will be far more capable than home or small business machines, so the technology will definitely eliminate factory jobs, but I'm skeptical that it will lead to an income for most people.

When the entire world has access to all the best creative work humanity can produce, most interest tends to gravitate naturally toward those already validated as the best -- as in the music industry, 3-D printing designs could easily become winner-take-all, or winner-take-most.

Although we've focused so far on greater levels of factory automation as a result of 3-D printing, it's important to recognize that this alone isn't going to reshape the economy. Manufacturing job losses, after all, have been ongoing for decades, and few assessments of the future expect this sector to ever again become a driver of employment growth. Transportation, that critical link between the factory floor and your door, has thus far kept its employment numbers largely intact, but there's plenty of evidence that unmanned delivery vehicles (and unmanned vehicles of many other stripes) will be increasingly common on the road by the time tomorrow's children enter middle school.

Automating the transportation link of the supply chain might happen sooner than the day when a replicator-like 3-D printer churns away in everyone's home. We already know it's possible to automate road vehicles: Google's self-driving car has logged hundreds of thousands of real-world miles without a single accident. That doesn't completely bridge the gap between the assembly line and your door, but it comes pretty close, and there's no reason not to expect automated loading-and-delivery robotics to fill in that gap in due time.

Source: Zack Sheppard via Flickr.

Charles Stross believes that most people are likely to be surprised by the transformative potential of automated road vehicles. When I asked him to name technologies whose eventual impact we might be underestimating, he provided such a wealth of information on self-driving cars, and the problems they could overcome, that his answer could serve as a basic primer for automated-vehicle skeptics. The advantages are simply too compelling to ignore:

The self-driving car technology Google is working on is potentially huge. Around 90% of road traffic accidents are the result of driver error. Accidents kill more people every six weeks in the U.S. than died in 9/11 -- the average death rate over multiple years is in the same league as the Vietnam War. Globally, accidents kill over a million people a year (North America and the EU and the rest of the developed world are relatively safe).

The main constraint on the roll-out of self-driving vehicles is developing the control software, and the price of the control units -- which are microelectronics. If it costs $10,000 today it'll be down to $100 in a couple of years. So it's going to go from an academic curiosity to fitted as standard on all new cars by the end of the decade.

Let me emphasize, this is already happening: Volvo has already rolled out a low-speed collision avoidance radar system on their cars -- if the car detects a pedestrian crossing the road and you don't hit the brakes in time, the car will attempt to brake for you. It's not foolproof by a long shot, but even a 50% success rate would represent a huge number of lives saved. This tech is going to become as ubiquitous as seat belts and air bags within a very short period of time.

Then things get interesting.

First, insurance companies will drive the uptake of self-driving vehicles once their accident rate drops below that of human-driven vehicles. Human drivers lose attentiveness. Computers, in principle, don't. After a few years of statistics, I expect to see insurance premiums rise for people who insist on driving their cars. Not long after, we'll begin to see cars sold without steering wheels.

Then things get more interesting.

Huge numbers of people will end up jobless: truckers, taxi drivers, people who are employed as autopilots for vehicles that now come with autopilots. But that's just the beginning.

Our relationship with our cars is that they're an extension of our personal space; we get territorial about them. But having cars that drive themselves breaks some of this instinct. You're in the back of a car with an invisible robot chauffeur. What are the implications? For one thing, no more school run -- you can tell the car to take your kid to the school gates, or go and collect them. For another thing, drunk-driving becomes a thing of the past. You can go to a bar, drink yourself legless, and then stumble into the back of your car and say "take me home, Jeeves."

Then parking and road usage get interesting. Even at the peak of the morning rush hour, around 95% of the UK's private car fleet is parked at any given moment. Our housing stock averages 75 years of age, and was largely designed and built prior to the age of the private automobiles that now clog our streets. But if you've got a smartphone and a self-driving car, does it need to be parked outside your home? Of course not: when it's self-driving, you can whistle for it and it comes from a car park on the edge of town. My city was gridded out in the 1750s; a one-car garage within a quarter mile of where I live sells for upwards of $60,000, and we're perpetually on the verge of gridlock due to narrow, congested streets lined with parked cars. Being able to remove all the on-street parking bays would be a huge benefit in terms of freeing up roads for traffic movement -- a gigantic infrastructure win.

Around town, we have streets blighted by obstacles -- traffic lights, speed bumps, chicanes, painted markings -- designed to control the flow of vehicles by signaling instructions to human drivers. With self-driving vehicles and a ubiquitous computing infrastructure, we can do away with most signage. Instead, we can rely on the roads to notice the six-year-old playing with a ball in the street and route approaching cars around her. Sidewalks and grade-separated cycle paths won't be needed as much if our cars are self-driving, vigilant, and averse to accidents. In fact, our city streets in 2050 may look a lot more like those of 1850 than those of 1950.

If your car can drive itself, do we need upper speed limits? The argument that speed limits save lives becomes specious, especially if these are auto-only routes where non-powered traffic isn't permitted. The only remaining argument is for fuel conservation, which is undermined by improvements in vehicle efficiency and automatic slipstreaming to reduce drag. Long-distance car travel may become a whole lot faster, competing with the low end of high speed rail.

Finally, we get to the prospect of fractional reserve car ownership. If a car comes when you call it, and you don't drive it yourself but have a magic invisible chauffeur, do you really need to own it outright? Do you really need to own an asset that spends 90% of its time unused? Wouldn't it be better to have a 10% share in a Rolls Royce, cleaned and managed by an agency that maintains it for a high utilization level, than to own outright a boringly utilitarian economy car? I suspect the long-term consequence of self-driving vehicles is going to be a precipitous drop in the number of cars, combined with a vast increase in their quality and comfort levels. Ultimately, the gap between the private automobile and public transport blurs into invisibility. Yes, the rich will want to own their playthings (and may even drive them on track days) -- but for the rest of us, efficiency combined with comfort and safety will eventually win out over the advertising frame that portrays your car as a marker of autonomy and identity, just as we've seen with the decline in cigarette smoking.

But the key benefit is simple: two million fewer deaths and six million fewer injuries a year globally. If you tally up those lost lives in actuarial terms by assigning them a value -- say, $1 million of lost labor on average over a lifetime -- that's $2 trillion saved per year, or about 5% of planetary GDP.
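Stross's closing figures are internally consistent, as a quick check shows; note that the GDP share depends entirely on the world-GDP figure one assumes:

```python
# Checking the internal arithmetic of the estimate above.
deaths_avoided = 2_000_000     # two million fewer deaths a year
value_per_life = 1_000_000     # assumed $1M of lifetime labor per life

savings = deaths_avoided * value_per_life
print(f"Annual savings: ${savings / 1e12:.1f} trillion")                   # -> $2.0 trillion
print(f"Implied world GDP at a 5% share: ${savings / 0.05 / 1e12:.0f} trillion")
```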

In the long run, automated vehicles just make economic sense, and as Stross points out, the impact of this technology on the workforce and on society at large is bound to be immense. Transportation is so tightly bound up in our modern economy that it's difficult to fully quantify just how many jobs might be at risk, but we can give it a try. Truck and taxi drivers are extremely vulnerable, as are any others whose basic job description involves "get X from Point A to Point B." The automotive manufacturing infrastructure is also at very real risk -- if people simply reserve time on a shared vehicle rather than purchasing their own, fewer vehicles will be needed to begin with. We saw how important that infrastructure was during the Great Recession, when the Big Three went to Washington pleading for bailouts, but unlike that crash, a shift toward automated vehicles isn't going to produce immediate catastrophic effects. It's simply going to grind away at the vehicle-based industries now dependent on human labor until it becomes unavoidably apparent that the driver is going the way of the horse.

Source: The Pug Father via Flickr.

So what's the end result of this huge surge in automation? Think about any given industry that now depends on you going to it, or on it delivering to you. That's an awful lot of industries -- retail and restaurants also depend on a transport infrastructure for their business, and they employ a substantial (and growing) chunk of the American workforce. That's not to say that Wal-Mart will go the way of the general store it replaced, but if the retailer that's built itself on better logistics than anyone else can find a way to get customers what they need at lower cost, it'll do so, even if that means eliminating entire superstores' worth of staff at a time.

Why would you really need, or even want, to get into an automated car and go to the store when virtual stores -- using the same virtual-reality interfaces with which tomorrow's children will already be familiar -- can provide the same shopping experience backed by rapid delivery systems? This barely touches on the as-yet-untested delivery capabilities of unmanned aerial vehicles, which could further shorten transit times by simply flying in a straight line to their drop site -- and we haven't even scratched the surface of robotics, which Andrew McAfee believes is a field whose capabilities are about to explode:

In general, robots have not been that astonishing so far. The robots that we see in science fiction movies are way ahead of the robots that we see out there in the real world, but I see a lot of evidence that that is about to change. Before too long we really are going to see robots that can walk and talk and navigate through an environment -- walk around, maintain their balance, have amazing sensory ability, and really come a lot closer to science fiction.

DARPA just announced another grand challenge, and it's a humanoid robot grand challenge. What the robot has to do is hop into a car, turn the key in the lock, drive it around, climb up a rubble field, repair a pump -- you know, this is absolutely sci-fi stuff. And the robotics researchers I know at MIT, when I asked them, "are you guys going to be able to do this within the timeframe of the grand challenge?" They say: "oh, yeah, absolutely." We are about to get blown away in the next few years by robots.

Properly automated on-demand manufacturing afforded by next-gen 3-D printers, a streamlined transportation network of unmanned vehicles, and the robotic support to fill in the gaps these machines can't service all combine to offer us incredible possibilities -- but they also put an incredibly large number of jobs at risk. In fact, nearly half of all American jobs might be at risk by the time tomorrow's children go to college: a recent study from Oxford University's Programme on the Impacts of Future Technology found that 47% of America's present jobs are vulnerable, with transportation and production two of the most threatened fields. It's possible that these jobs will be replaced by other work, as manufacturing once supplanted farming, and as retail and white-collar jobs eventually took the lead from manufacturing. Ray Kurzweil, as we've noted earlier, points out that it's difficult to conceive of the jobs that might await these displaced workers in the future, because many have not yet been invented.

But as more and more of the underpinnings of our economy become automated, the nature of work in that economy must also change. Of what use is a 40-hour workweek when technology can provide us the things we need in exchange for much less than full-time labor? Is a full-time job necessary, or even desirable, when it may be possible to simply requisition raw materials from automated mining-bots that will work their way through a manufacturing process out of science fiction, until something finished arrives at your door?

Tomorrow's children will grapple with these questions before they even enter the workforce, because the impact of automation will be apparent to anyone who doesn't live in their own private bubble. While they may grow up using 3-D printers and self-driving cars, it is likely to take longer for them to come to grips with the changing workforce, which is all but certain to displace someone close to them -- possibly permanently -- within the first 15 years of their lives.

At the same time, tomorrow's children will be reaching maturity in an era where their every move may well be tracked, recorded, and analyzed, whether for advertising purposes or for reasons of national security. The interplay between the changing workforce and the end of any meaningful private life will shape the world tomorrow's children inherit. The impact of constant surveillance is likely to hit home earlier, as tomorrow's children will be the first generation whose entire lives will essentially become part of a public record. And as widespread changes in employment foment unrest, governments everywhere are likely to step up their oversight of the formerly "private" records of their citizens, as well.

2030: Is a world of intelligent machines a world without privacy?

"Essentially, the power of any institution is predicated on followers who have been deceived. ... So if deception and trickery are inevitable for leading the masses toward some goal that you see and they don't understand, then why shouldn't you be able to use that deception and trickery to build a deliberate system?" -- Vladimir Bartol, Alamut 

Tomorrow's children will be approaching adulthood as the world moves through a period of tumultuous change. The politics of their parents will influence the direction these changes take, but a number of factors beyond easy parental control will also help shape the future they inherit. The rise of an automated world brings with it the threat that truly pervasive surveillance and control will no longer be merely the nightmare of dystopian sci-fi writers.

The impact of surveillance -- which is at least as much the natural outcome of participating in a data-driven online society as it is a matter of eyes in the sky and taps on the phones -- will change human relationships in profound ways, not only between man and government, but between man and his fellow man as well. We are already experiencing these changes as today's children (and adults) become constantly connected on social media and through other online platforms. Tomorrow's children, many of whom will be included in social media virtually from the moment of conception, are likely to grapple with what privacy means before finding that they have very little of their own.

Source: Jeff Schuler via Flickr.

Your parents probably had baby pictures of you stored in well-loved albums, and if you've got adult children, you probably kept this tradition alive when they were babies as well. But tomorrow's children -- and many of today's -- are already growing up in a world where their precious moments are plastered across the Internet by parents whose lives are shaped by social media. By the time they enter high school, many of tomorrow's children may find that their online persona has already been created for them. Would you want the whole world to know that you once wore a sailor hat in the bathtub when you were two years old? The parents of tomorrow's children are making it relatively simple for anyone with a modicum of online aptitude to ferret out embarrassing pictures and updates, which will languish on social media servers for years before their youthful subjects are mature enough to discover and claim them.

By the time tomorrow's children gain control of their own online personae after years of parental (over)sharing, they're likely to have passed through hundreds or thousands of hours of a more targeted sort of surveillance. In an ad-driven online world, the advertisement matched to its viewer's needs is far more valuable than the untargeted ad, whether the target happens to be a middle-aged breadwinner or a teenager with little to spend. And by the time they gain control, tomorrow's children will have already grown up in a world that prioritizes sharing in ways that may not be healthy for a young person's personal development. Whether they want to or not, tomorrow's children will find themselves pressured to publicize themselves, in ways both overt (via social media posts) and less apparent (compiling preferences and desires on shopping sites). There are some signs that today's children are already turning toward a more guarded online lifestyle, but the personal choice to share is likely to become only one modest piece in the larger privacy puzzle as tomorrow's children grow up.

Surveillance cameras are already proliferating in many places, for various reasons: to stop speeders or thieves, to watch interesting locations, to protect valuable property, or to better tailor advertising to a nearby audience. As technology improves, smaller cameras will be able to watch more of the world, with less public awareness of their existence. Already, tiny drones can flit about indoors to scout locations in advance of emergency workers, wearable cameras come in small enough packages to drape on heads and fingers and wrists and shoulders, and sensor-embedded billboards can create the first inklings of a Minority Report future where every advertisement knows your name.

By the time tomorrow's children begin carving out their own online space, implantable technology and genetic modifications will be on their way to maturity, offering an unprecedented ability to regulate moods, enhance performance, and even record thoughts, all without the need for repeated dosing or recalibration. These diverse systems, whether placed everywhere and remotely controlled or embedded directly in people's bodies, will offer a range of options for governments searching for ways to keep their citizens compliant, without forcing the deployment of a security state of 1984-style overtness that people would eventually learn to evade.

Source: Amal Graafstra via Flickr.

However, tomorrow's children may barely notice the deployment of pervasive (even implanted) surveillance, or recognize the danger such a state poses to their economic and social well-being. When the simple act of sharing oneself digitally becomes an accepted rite of social passage, the concept of being exposed to others -- willingly or not, beneficially or not -- loses the potency of its threat. Ubiquitous computing and ubiquitous surveillance will only amplify this phenomenon. If young people are prone to share their lives now, with smartphones as the primary connection, how much more likely will they be to share themselves when their very bodies become connected?

The danger of such ubiquitous tracking is not in its mere existence, but in the avenues of control it opens to those in power. Corporations may only want to serve up more effective advertising, but their methods of delivering content may funnel users into narrower channels of knowledge -- a phenomenon known as the "filter bubble," which winds up giving users only the information an algorithm thinks they want to see.

This method of delivery can reduce users' understanding of the world around them, which can make it easier to push them toward one extremist stance or another. Corporations need not cooperate with governments for governments to gain access to the flows of information through corporate servers, as the past year's NSA revelations have made clear. Governments may manipulate the filter-bubble mechanism to create a more amenable population, use the data to identify unrest before it boils over, or simply keep an eye on political enemies, as Nixon so crudely attempted with far less developed technologies.
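The narrowing effect doesn't require any malice; it falls out of naive engagement-maximizing recommendation all by itself. Here's a toy simulation -- the topics, click rates, and greedy serving rule are all illustrative assumptions, not a description of any real platform -- showing how a feed that always serves its best-performing topic quickly stops showing anything else:

```python
# Toy simulation of a greedy, engagement-maximizing feed narrowing a
# user's information diet. Topics, click rates, and the serving rule
# are illustrative.
import random
from collections import Counter

random.seed(42)
topics = ["politics", "science", "sports", "celebrity", "local"]
true_click_rate = {"politics": 0.55, "science": 0.50, "sports": 0.45,
                   "celebrity": 0.60, "local": 0.40}
shown, clicks = Counter(), Counter()

def estimated_rate(topic):
    # naive estimate: observed clicks per impression (optimistic for unseen topics)
    return clicks[topic] / shown[topic] if shown[topic] else 1.0

for _ in range(1000):
    topic = max(topics, key=estimated_rate)   # always serve the "best" topic
    shown[topic] += 1
    if random.random() < true_click_rate[topic]:
        clicks[topic] += 1

print(shown)  # one or two topics dominate; the rest all but vanish from the feed
```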

Ray Kurzweil has, in the past, been a big advocate of the liberalizing power of technology. In The Age of Intelligent Machines, he anticipated the fall of the Soviet Union as a result of emerging decentralized communication networks, but that prediction came in an earlier era, before governments began to truly catch on to the power they might wield over the flow of information. I asked him if his view of technology as a force for democracy still held true, and he seemed somewhat less certain than he once was:

We can paint credible scenarios ranging from a radically free population that creates its communities based on common interest and the free sharing of information to totalitarian control, a la 1984. I would argue that the former has been a stronger trend, but we can find justification for both ends of the continuum.

I wrote in the 1980s that the Soviet Union would be swept away by the then emerging social network consisting of early forms of email. People thought it was ridiculous that this mighty nuclear superpower would be swept away by a few teletype machines, but that's what happened in the 1991 coup against Gorbachev. Time magazine had a cover story with Yeltsin standing on a tank. But the tank had nothing to do with it -- it was the clandestine network of hackers armed with teletype machines that swept away the monopoly of information that the authorities had previously controlled. We then saw a great rise of democracy with the rise of the web in the 1990s and we see the impact of modern social networks today.

That raises a great dilemma with the potential for government surveillance. The democratization of power also implies a democratization of destruction. If there is another 9/11 or worse, the citizenry will demand government surveillance more intrusive than what we have now. The answer is for an informed public to use democratic means to monitor government actions to prevent abuse. This is easier said than done and so far the government's reaction has been negative to the mere disclosure of these programs.

Kurzweil hits on the need for an informed public to properly prevent government overreach, but the creation of filter bubbles threatens the spread of necessary and useful information in a propaganda-rich environment. Charles Stross often examines the interplay between technology and politics in the background of his novels, so I asked him how he expected the relationship between these two forces to change over time as technology -- for surveillance and otherwise -- continues to develop. He shot down the idea that the public can truly be informed in an unimaginably complex and interconnected society:

One problem we face is that our societies are rapidly becoming intractably complex; so much so that we seem to be losing any benefits from transparency. Transparency doesn't help if the issues are so complex that only a handful of specialists understand any given corner of the problem space. We also have a woefully misinformed public, as indicated by numerous studies, such as the recent report by the Royal Statistical Society.

The news media is fundamentally defective insofar as it's finely tuned to maximize advertising revenue at any cost, which includes depriving us of an accurate understanding of what's going on around us. They're also struggling desperately to maintain sales in an environment where advertising revenue is stuck in a deflationary death spiral thanks to the Internet. The Internet is an amazingly powerful tool for disintermediating supply chains, but a side effect of this is that search becomes a vital infrastructure service -- and advertising loses much of its profit-generating efficacy in the face of price-comparison algorithms. So the news media, for decades dependent on advertising as a revenue source, is suddenly deprived of easy money, leaving us adrift in an echo chamber of software-generated "news" that is all too happy to feed our own prejudices back to us, with banner ads attached.

And this is before we even raise the carpet of our democratic forms and peer at the dirt pile beneath, which is the increasing degree to which decisions are removed from the public sphere and taken in private.

In my more pessimistic moments I am pretty certain that we're living in what Colin Crouch characterizes as a post-democratic age -- one in which our governments pay lip service to democracy but in which the application of democracy is strictly limited. Indeed, recent revelations about ubiquitous surveillance, secret courts (in both the U.S. and U.K.), and so on suggest that we're living in a "soft totalitarian" system -- one where dissent is tolerated but ignored as long as it is non-threatening to the neoliberal consensus.

We -- I mean the citizens of the developed world -- are only 25 years removed from a bipolar situation in which the eventual triumph of capitalism was seen as anything but inevitable. We haven't yet really internalized what it means to live in a world dominated by a single political ideology (the neoliberal consensus) -- today, even the center-left parties in Europe have either signed on to the same general set of policies (e.g. the British Labour Party) or accepted a decline into gradual irrelevance in the face of a globalized system that dictates economic policy guidelines from a supra-national level. We still have multiple competing parties, and the ongoing culture wars lend the appearance of real conflict to their sound bites. But if we don't find a way to address the very real problems of global capitalism -- in particular, the secession of multinational corporations from national tax systems and any vestigial sense of social responsibility -- we're going to be in big trouble in a decade or two.

Where we go from here is unclear. But I suspect something not unlike the Arab Spring may be waiting in our future, 30 or 40 years down this road.

But revolutions don't just happen without cause. It's possible that tomorrow's children, steeped in a mode of learning that dares them to question things and explore different options, will be more inquisitive and less trusting of the world presented to them than most of their parents and grandparents have been. If enough are acclimated to this mind-set, the next generation could begin the process of dismantling the systemic surveillance state we now feel so powerless to stop.

Source: Mary Henderson via Flickr.

It's not likely, however, that a return to greater degrees of privacy is in the cards -- no less a tech luminary than Internet pioneer Vint Cerf has said that "privacy may actually be an anomaly." In the past, humans clustered in communal gatherings to share their lives with each other, and the communal nature of social media may simply represent a technologically mediated return to our roots. We may yet push back in a real way against encroaching government surveillance efforts, but it won't be surveillance itself that will cause the backlash -- only overt efforts at control will doom a surveillance state, and even then only if its targets recognize and reject the attempts.

The automation-driven sea change in the workplace, which will be well under way by the time tomorrow's children reach adulthood, is still the most likely cause of any eventual revolution. People may be placid so long as they have some security. If that security is stripped away by an economic system that does not value human inputs, no amount of surveillance-driven coercion is likely to keep them cowed. The only question is: will the lessons we learn today and in the near future be the right ones to offer to tomorrow's children so that they need neither be cowed nor insecure?

2035: Is there such a thing as a post-human economy?

"What we obtain too cheap, we esteem too lightly; it is dearness only that gives everything its value." -- Thomas Paine, The Crisis 

By the time they reach adulthood, tomorrow's children will have more direct experience with technologically driven unemployment than any generation since those who came of age during the Great Depression. The Oxford study we referenced earlier projects that 47% of America's jobs will be at risk of automation within the next two decades. By the time they enter college -- or at least begin attaining some of the credentials we've long associated with college -- tomorrow's children will almost certainly have passed through a period of economic dislocation similar to that of the 1930s as the coming wave of automation-driven job losses crashes ashore. After all, you can't displace nearly half of the workforce within a single generation and not expect some upheaval.

Source: Rob DiCaterino via Flickr.

As we've already explored, vast swathes of low-level service work will soon be highly susceptible to displacement, and once these jobs are gone, where will displaced workers go? More importantly, how will those millions of displaced workers support themselves in an economy where so much can be done without human input? Two decades is undoubtedly far enough in the future that predictions made and colored by today's struggles might seem archaic to the next generation, but what's important is that we begin doing something to ensure that the displaced masses do not fall into abject poverty while the few in control of automation technologies accumulate everything that's left. Failure to do so could make the next generation less resilient and less able to safely navigate this massive shift in the workforce. We often hear about "mortgaging our future" with the national debt, but building an economy that will grow ever more unstable in the future so that we can buy cheap stuff today is a far more dangerous mortgage than the one represented by government IOUs.

I asked Charles Stross to explore, in broad strokes, the course he believes the global economy might take as more and more of the world is mediated by technological processes, and the response (as always) was eye-opening in its detail:

We have come from a world where everyone who was physically able to move or communicate could expect to have a job. And we still make this assumption today. But as we automate aspects of human cognition, we are rapidly rendering a good proportion of our population permanently unemployable.

It started with the not-too-bright folks who could hold a pick or a shovel and work on the road, or use a broom to sweep the pavement clean, or fold sheets in a laundry. Mechanization is eating into those manual jobs at a terrifying pace (although not so much with the sheet-folding, for now -- I'll come back to that shortly). More recently, other specialties have also felt the pinch. Smart algorithms make it possible to run warehouses with far fewer staff. Smart search has also hammered the demand for junior lawyers: the law school bubble in the U.S. appears to be bursting because there's no demand for junior staff with law degrees to pore through compendia of precedent and law, researching cases: much of that legwork has been automated.

The effect of ubiquitous AI is to eliminate jobs -- those jobs that can be automated and that don't require direct human emotional support. It turns out that predicting the drape and folding of fabric is computationally very hard -- there's an X-Prize-like goal of building a robot that can stitch a jacket within ten years, but that's really far out, difficult stuff. [Author's note: Robots already exist that can fold sheets and towels, albeit very slowly.] So our unskilled laundry worker's job may be safe for the time being, as are the jobs of the waiter who served you at the restaurant last night and the home caregivers who change diapers for elderly folks with dementia.

But 90% of lawyers are under threat. So are the radiographers who check mammograms for signs of breast cancer: it's an image-recognition task, and as soon as automated systems can do it better or faster than humans, those humans become redundant. Medical doctors may have some degree of safety in the near term, but some aspects of their tasks are already being automated. Accountancy might seem like a safe profession, but with the move toward online tax-filing it's only a matter of time before expert systems that provide optimal support for tax planning eat into their consultancy market too. Self-driving cars will kill the jobs market for taxi drivers and long-haul truckers; they'll hurt the auto industry too, because self-driving cars and smartphones make fractional reserve car ownership an attractive option -- why own an SUV that spends 90% of its time parked by the roadside when you can own a 10% stake in a Rolls-Royce which will come when you call it?

Maybe 30% or 40% of the working-age population in the developed world currently have jobs that leverage their training. And we moan about unemployment! But how can we expect global society to adapt to massive structural unemployment, such that only 5-10% of the population can have jobs that use their training, and another 10-15% carry out personal services hitherto considered to be low-status occupations, like waiting tables or nursing assistants?

With the economic development of China, India, and (slower and starting later) Africa, we're visibly on course to reach a state sometime between 2050 and 2100 where the world is fully developed. This might sound like a good thing at first, but another way of framing it is that it's a world where first-world problems have become universal. We won't be able to escape the unemployment trap by reimporting outsourced jobs: everyone's going to be in the same mess everywhere, with an aging population, automation eating brain-work, and the majority of jobs remaining found in low-paid service areas that suffer from Baumol's cost disease [a rise in salary due primarily to a rise in salary in other parts of the workforce]. This is a potentially explosive situation, and it's one that Keynes pointed to in the 1920s-1940s, but our current generation of political leadership seem to be obsessively focused on jobs -- more particularly, on the toxic belief that the unemployed are lazy shirkers who should be punished for not finding paid employment.

A friend of mine said, a while ago, that he thought the big political problem of the 21st century would be how we deal with too much information. I think that's the big problem for the first half of the century. The big problem for the second half will be how we live in a world where human intelligence has been sharply devalued.

The trouble with forecasting the long-term outcome of such a drastic change so far in the future is that we simply may not be able to understand the ways in which new technology or policy might ultimately influence society. John Maynard Keynes faced a similar problem when he wrote Economic Possibilities for Our Grandchildren in the teeth of the Great Depression, trying to peer a century ahead to anticipate a 15-hour workweek for the populace, despite having never seen a computer or visited a shopping mall. That 15-hour workweek might still be very much in the cards for tomorrow's children. We don't know yet; 2035 is still a long way off. But will the rationale for such a drastically reduced need for employment resemble the manufacturing-centric vision Keynes foresaw? Will the economic outcome be the same as he predicted? Probably not. We might see the rise of new technology decades in advance, but we can only guess at the ways in which it will shape the future.

What seems certain is that we'll need less work, at least in the aggregate, to accomplish the same economic progress we've historically enjoyed. We already see this trend in productivity per hour worked, in real GDP per capita, and in several other measures of individual economic output. Much of the retail experience, from the factory floor to your door, is expected to be automated by technologies that already exist or are in development. As we've already explored, there's very little economic reason why these tasks shouldn't automate fairly rapidly once mature technologies, capable of more work at lower cost than human employees, are in place.
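The arithmetic behind "less work for the same output" is simple compounding. A minimal sketch, assuming a steady 2% annual gain in output per hour (the rate is an assumption; the mechanism is not):

```python
# How fast steady productivity growth halves the labor needed for a
# fixed amount of output; the 2% growth rate is an assumed figure.
import math

growth = 0.02
years_to_halve = math.log(2) / math.log(1 + growth)
print(f"Hours needed for constant output halve every {years_to_halve:.0f} years")
# -> about 35 years: within a single working lifetime
```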

Source: Martha Soukup via Flickr.

Teachers are ultimately at risk if they do little more than mechanically move through sets of facts. Well-paying white-collar jobs, from legal document discovery for law firms to accounting to basic technical support, are already highly susceptible to automation if their primary job description involves solving problems based on predetermined instruction sets. Drivers and warehouse workers and retail clerks have all seen their jobs streamlined by technology, and nearly everyone who works in these fields today could be streamlined right out of a job by the time tomorrow's children begin graduating college. High-level knowledge work and empathetic relationship-building professions are well-insulated for the foreseeable future, but these jobs, as we've already seen, do not exactly account for that much of the workforce.

The question in a largely automated world becomes not "where will the jobs of the future come from?" but "what will happen to those whose jobs are no longer necessary?" Some will certainly be willing and able to transition from a dying profession to one in higher demand, but millions will find it difficult to find any work at all that sustains them at a level necessary to ensure a middle-class lifestyle, at least as we understand it today. It's certainly possible that these projections of widespread unemployment will prove either far too ambitious or completely off the mark, and both Ray Kurzweil and Andrew McAfee remain convinced that we can make the economy work in ways that our narrow experience can't yet conceive, in spite of ever-greater levels of automation. I asked McAfee what he would do, if given the chance to control national policy, to ensure that our race is with the machines in the future, rather than against them:

I think our economy is clearly going to become more technologically enabled and more automated, but we are not going to head into this worker-free, completely sci-fi economy in that phase.

I think the Econ 101 playbook is still the right playbook, and that playbook encourages us to do a few things with government policy to have a successful, thriving economy and a good workforce. Rebuilding our infrastructure is absolutely one of those things. The American Society of Civil Engineers gives us either a C minus or a D on our overall national infrastructure; we should double down and upgrade that infrastructure so that it's absolutely world class.

I would also pursue comprehensive immigration reform. This is still the country that a lot of people want to come to, and we should be welcoming that kind of immigration, especially from high-skilled workers.

Educational policy also needs to change. And I would explore some innovative alternatives to financing higher education, because right now it's just this flood of cheap money into higher ed without a lot of accountability. So let's explore some other things. The state of Oregon, for example, just passed legislation that will basically give the educational institution a stake in the person's future. I think those kinds of things are great. So, let's keep that going. And let's make sure that we have a good environment for entrepreneurship, and that we don't choke off business creation with tons of regulation -- this kind of patchwork wall that a lot of people see. We want to make sure that we continue to have the easiest and best environment to start a business in because businesses employ people.

There are a lot of people who have that urge to become entrepreneurs. And there are a couple of different things that get in their way. One is, like I mentioned, the thicket of regulation that they face at the local, state, and federal level. It's confusing; you don't exactly know what you have to do when you start a business. So just sweeping that stuff out and making the process of business creation more transparent would help a lot.

I hear a lot from more conservative colleagues that we need to reduce the burden on entrepreneurs, and I believe that. But what you hear from people on the more liberal side is that one of the things keeping people from entrepreneurship is the fact that most of us get our health care from our employers, and health care is really expensive. I have a lot of sympathy for that, too. So I would also overhaul the health care system and make it so that people who want to start a business don't have to worry about preserving their health, or at least make it so that it's not so expensive for them to do so.

And finally, the government should clearly be investing in basic research. Private industry doesn't do enough of that on its own, so let's keep funding basic research, which will turn into the breakthroughs of tomorrow.

These answers explore what we might do to improve our lot as a nation by the time we get to 2035, but we should also wonder whether most of tomorrow's children will be able to find meaningful work at a living wage under these new economic conditions. Not everyone can be an entrepreneur, and infrastructure repairs won't provide lifelong employment for most people. In The Lights in the Tunnel, Martin Ford convincingly paints a picture of more income flowing to fewer workers -- the primary problem of widespread automation. It's not pretty.

Google might have created a hundred millionaires when it went public, but a hundred millionaires only need a hundred cell phones, a hundred cars, a hundred televisions, a hundred (or fewer) accountants to get them better tax breaks, and so on. If the phenomenal success of Google's brain trust -- remember, it earns far more than pre-divestiture AT&T in real terms while fielding far fewer employees -- comes at the expense of 100,000 or a million good middle-class jobs, then that ultimately results in far fewer people who can buy cell phones, cars, and televisions and still pay their basic living expenses. Fewer buyers bring in fewer profits, and over time the negative effect of reduced sales can compound across the entire economy. Just look at where we are today -- years after the Great Recession, GDP growth remains weak and persistently below its previous trend. People just aren't buying like they used to, because many simply don't have the money.
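That compounding is the textbook spending multiplier. A small illustrative sketch -- the marginal propensity to consume is an assumed parameter here, not a measured one:

```python
# Illustrative spending-multiplier sketch: how lost wages compound into
# lost demand across the economy. The MPC is an assumed parameter.
initial_income_loss = 1_000_000_000   # $1B in lost middle-class wages
mpc = 0.8                             # assumed share of each dollar respent

total_lost_demand, round_loss = 0.0, float(initial_income_loss)
for _ in range(50):                   # iterate the rounds of forgone respending
    total_lost_demand += round_loss
    round_loss *= mpc

print(f"Total demand lost: ${total_lost_demand / 1e9:.1f} billion")
# -> ~$5.0 billion; the closed form is 1 / (1 - MPC) = 5x the initial loss
```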

It's easy to say "if people don't have jobs, maybe they shouldn't be buying cars and televisions," but this blinkered view of capitalism ignores the fact that capitalism only grows when ever-greater numbers of people can afford to buy a wide array of products and services beyond the bare necessities. An inexorable dwindling of the workforce seems like a problem to be addressed by the world's governments -- many national governments, including the U.S. government, once put millions to work during the Great Depression when private industry couldn't.

But the world's largest corporations, once kept in check by the thicket of regulation that began to grow during the Great Depression, are themselves now becoming almost as powerful as many governments. Many have learned to use this power in overtly political ways, and frequently stand as gatekeepers against real reform. Might they recognize the need for a broad and well-paid workforce, or are they instead committed to implementing ever-greater levels of automation regardless of the long-term economic consequences? I asked Martin Ford if he felt that today's corporations could or would hold the line against automation-driven downsizing to support sustained economic progress, but he pointed out that unless all of them agree to do so, none are likely to take the initiative -- and he wasn't particularly confident that government action could reverse the effect, either, at least not with the governments in place today:

Source: Martin Ford.

I am very skeptical that corporations can solve this unemployment problem of their own initiative. There is already a strong movement toward hiring part-time workers, but it has nothing to do with sharing work; rather, it is focused on cutting costs for both wages and benefits. The main problem with simply cutting hours is that it will not result in an adequate income for most workers. As it is, part-time workers have to string several jobs together to get by.

In general, I don't think the market will solve this without government intervention. The situation is in many ways like a typical "tragedy of the commons" scenario. Even if business managers understand that widespread automation will undercut consumer demand, they also know that anything they do independently will not be enough to impact the overall market -- only collective action will work.

This is quite similar to a resource, like a lake or the ocean, being over-fished. The fishermen may understand the problem, but they will still go out and catch as many fish as possible -- unless the government steps in with regulation to insure that the same rules apply to everyone.

I think governments are capable (in a technical sense) of addressing the issue. The problem is the politics -- it is more a problem with the public than with the government itself. Conservatives are especially likely to object to anything they see as a handout to the "undeserving." That is a very powerful obstacle to overcome, in spite of the fact that a guaranteed income has its origins in conservative/libertarian circles -- both Friedrich Hayek and Milton Friedman supported a guaranteed minimum income.

From our vantage point, it seems almost inconceivable that we might someday see the implementation of a guaranteed minimum income, or some other supposedly extreme form of income redistribution that might help to better align post-human economies with the capitalist impulse. But massive economic dislocations have a way of speeding socially desirable changes through the political system -- protests in favor of social safety nets had been part of the American landscape during recessions for at least four decades before Social Security became law in 1935, and legally complex guaranteed-minimum-income programs had earned the support of millions leading up to that law's passage.

Pressures for social change both influence and are influenced by the political structure in which they evolve, and this structure can either propel change forward or stand in its way. Early corporate regulations arose around the turn of the last century in response to reporting -- occasionally embellished -- of widespread abuse by the monopolistic trusts, against both their workers and their competition. Social insurance programs arose during the Great Depression in response to populist uprisings that demanded, among other things, a guaranteed basic income for all. In each case the end result was less than the extreme changes sought by the fiercest proponents of reform, but these changes took root because those who supported them simply conducted far better public relations campaigns than their opposition.

Many of the elements of today's economy that were forged during the Great Depression developed as people and politicians moved to counteract the exhaustion of robber-baron capitalism. Social safety nets have yet to celebrate their centennials in many countries, and a number of countries have yet to even implement such protective structures. The elements of tomorrow's economy will be forged in the exhaustion of what can best be called crony capitalism, a too-tight commingling of the country's most powerful corporations and its most ambitious politicians that's allowed many of the benefits created by public-private partnerships to flow to those in control of its private elements rather than back to the public. Outside of the lobbyists and corporate leaders who most benefit from this arrangement, few citizens look very fondly on this system, but unlike the robber barons, today's capitalist cronies have mastered the art of political and media manipulation, which may very well obscure the problem and delay the final dislocation that leads to a system that can effectively benefit a world where far fewer people need to work.

The most recent recession was, by nearly every measure, far less severe than the Great Depression, because policymakers have built up a lifetime's worth of tools and knowledge since that era to prevent another economic dislocation of similar magnitude. This expanded toolset prevented millions from suffering as their forebears suffered in the 1930s, but it has also blunted any widespread rage for change -- the rapid removal of the Occupy movement from city streets would have been far harder to accomplish if its protesters truly had nowhere else to go and no other possible means of support. But that doesn't mean inequality isn't real, or that it won't become a serious problem for society absent structural changes. In fact, the top 10% of American earners now take home more of the total income pie than they did just before the Great Depression, and the top 1% is only a hair away from surpassing its Roaring '20s highs as well:


Source: The World Top Incomes Database.

If rage over economic inequality can be blunted by policy decisions, which may ultimately only serve to push the reckoning further into the future, another form of inequality may prompt tomorrow's children to push back instead. That new inequality, which we'll discuss in some detail in the following section, will be the potential inequality of human capability. This shift will eventually be set in motion by biomedical, biotechnological, and nanotechnological advances that hold the potential to render the baseline human body obsolete, as the automobile once did for the horse and buggy.

2040: Building a better human

"'Playing God' is actually the highest expression of human nature. The urges to improve ourselves, to master our environment, and to set our children on the best path possible have been the fundamental driving forces of all of human history. Without these urges to 'play God,' the world as we know it wouldn't exist today."-Ramez Naam, More Than Human 

What does it mean to be human? This won't just be a philosophical question for the next generation. Maturing fields of biotechnology and nanotechnology will inevitably reach a point where humans can begin overcoming the limitations of biology. Today you can wear a computer -- Google Glass being only the most notable example -- and interact with it as though it were nearly a part of your body. It should not be difficult to imagine the path this puts us on, from the computer next to you to the computer within.

Some people already have computers within them, helping them to hear, to manage the symptoms of cerebral palsy, or to maintain regular heartbeats, and the drive to overcome medical limitations will undoubtedly determine the progress of the computer within for some time to come. But when ordinary individuals discover that people who were once disabled have become super-capable thanks to the latest technological therapies, they'll want a piece of that action too. The human drive for self-improvement is too strong to hold back a surge of interest in this sort of augmentation.

And what about augmenting the flesh-and-blood parts of ourselves, rather than replacing them with cyborg-like machinery? Genomics and genetic modification are still very much in their infancy, at least from a human perspective, because there's a deep fear of meddling with our fundamental human technology -- our DNA. We have no problem splicing jellyfish genes into cats to get them to glow in the dark, but testing genetic modifications in human beings is something most people fear, for various reasons. Religious groups are appalled by what seems an effort to play God, and more secular-minded opponents of human genetics experiments point to the horrific eugenics experiments of the Third Reich and other nations during the first half of the 20th century.

As with technological augmentation, attempts to tinker with our human source code -- highly worthwhile and, eventually, highly successful attempts -- are and will continue to be undertaken initially to help the unwell. When these treatments work, people will notice. Steroid use is already widespread among healthy individuals. Virtually everyone in the developed world dopes themselves up with caffeine to make it through the day. Millions of students take Ritalin and Adderall to keep their noses to the grindstone, whether or not they actually need to. These are temporary solutions that genetic engineering or technological augmentation could make permanent, if the will is there.

One of the most compelling routes of development for this type of technology leads to a healthier and longer period of what we now call "old age." The world is undoubtedly graying, and swelling elderly populations put enormous pressure on economies to care for them, particularly as the ratio of workers to retirees shifts more heavily toward the latter group. What if you could solve both problems -- allow older workers to remain productive longer while also improving their health to the point where they pose no greater burden to the health-care system than their twentysomething grandchildren?

Aubrey de Grey is at the vanguard of this approach from a biochemical perspective as the chief science officer of the SENS Research Foundation. One of his most audacious public pronouncements is that the first person to live to 150 is already alive today. That might apply to the parents of tomorrow's children, but it's more likely to broadly apply to those children, who will mature into a world where robust negligible senescence (the NS of SENS, defined as a lack of the symptoms of aging) research has been in progress for more than a quarter of a century. I asked de Grey if he foresaw a future within the next 50 years where 150-year lifespans become widespread, and further, what the societal implications of a world of sesquicentenarians (150-year-olds) might be:

Source: SHARE Conference via Flickr.

I think it's extremely likely that we'll see widespread 150-year lifespans by [the next 50 years]. Of course, it will take at least 50 years from the arrival of these therapies before we have anyone actually living to 150, but the point is that people will be maintaining the same youthful function (both mentally and physically) whatever their chronological age, so there will be the expectation that they will live to 150, or maybe even much older.

The right way to answer questions on social impacts is to start from what I mentioned earlier. Demographic changes resulting from increased life expectancy are really slow compared to most technology-driven changes, so there's no sense in analyzing how a very different demographically constituted world would work unless one also factors in reasonable predictions for what other changes will have taken place by that time.

In the case of retirement and career structure and so on, the key thing to take into account is the inexorable rise of automation. Automation has thus far had one main effect over the past century, namely a shift of manpower from manufacturing and agriculture to service industries. But what we're seeing now is automation increasingly in the service industries too -- and there's no third sector waiting in the wings to expand to absorb that workforce in the way that the service sector did for manufacturing. There's the entertainment industry -- but seriously, how many entertainers do we need? So the conclusion is that within far less than 50 years, the entire concept of the full-time 40x40x40 career [40 hours a week, 40 weeks a year, for 40 years] will be long gone, and retirement will have a very different definition than it does now. But also, of course, government assistance applies to health care as well as retirement benefits, and the big difference there is that the overwhelming majority of causes of ill-health will be history too, because everyone will be biologically young.

I think the likeliest scenario is that the chronologically elderly will increasingly take advantage of their biological youth to explore more novelty, and that this will especially include more interaction as equals with those of very different ages. It makes theoretical sense that this would occur, since novelty is enjoyable and the only real barrier right now is the great disparity in health. And there is also plenty of practical objective evidence, in the form of (to take just two examples) competitive sports, where people can now remain active at the top level until far greater ages than was once possible; and romantic matters, where partnerships between people of very different ages are becoming more and more common. In the latter regard you're welcome to mention, since it has long been public knowledge, that my own situation is a good example, in that I have three long-term romantic partners aged 24, 45 and 69.

The progression toward better and better life expectancy might seem to point to advanced old age as an inevitability, but it's easy to mistake life expectancy, which averages age at death across everyone born, for the lifespan a surviving adult can actually expect. Before antibiotics and prenatal care and modern medical practice, more people died while very young than while middle-aged -- if you made it past childhood, you could probably survive to what we'd now call old age. Life expectancy has increased dramatically as the world has gotten much better at ensuring the survival of the very young. Expected lifespan, however, has not been growing nearly as fast.


Source: Centers for Disease Control, National Vital Statistics Report.

Over the past 110 years, life expectancy from birth has increased by nearly 60%, but a healthy 25-year-old might only expect to live 25% longer than they did at the turn of the 20th century. Someone who made it all the way to 65 today could expect to live perhaps 10% longer than their great-grandparents did a century ago. If we could only make similar improvements to the expected lifespan of today's 65-year-olds as have been made for the life expectancies of newborn children since 1900, the average 65-year-old today could live to be 135! And that's a very conservative goal for ambitious scientists like Aubrey de Grey and others at SENS. 
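The gap between those two measures is easy to demonstrate with a toy life table. In the sketch below, every mortality number is invented for illustration (these are not CDC figures); the point is simply that slashing infant mortality transforms life expectancy at birth while, in this simplified model, leaving a 65-year-old's outlook untouched:

```python
import math

def survival_curve(infant_mortality, max_age=110):
    """P(alive at age a): a one-time infant-mortality shock at age 0,
    then a Gompertz-style death rate that rises exponentially with age.
    All parameters here are illustrative, not fitted to real data."""
    curve = [1.0]
    for age in range(max_age):
        q = infant_mortality if age == 0 else min(1.0, 1e-4 * math.exp(0.085 * age))
        curve.append(curve[-1] * (1.0 - q))
    return curve

def remaining_years(curve, from_age=0):
    """Expected years of life left for someone alive at from_age:
    the sum of that person's conditional survival probabilities."""
    return sum(s / curve[from_age] for s in curve[from_age + 1:])

for label, infant_q in (("1900-like", 0.25), ("today-like", 0.01)):
    c = survival_curve(infant_mortality=infant_q)
    print(f"{label}: expectancy at birth ~{remaining_years(c, 0):.0f} years; "
          f"expected age at death for a 65-year-old ~{65 + remaining_years(c, 65):.0f}")
```

Only infant mortality differs between the two runs, so expectancy at birth leaps while the 65-year-old's expected age at death doesn't move at all. Reality is less stark -- adult mortality has improved too, which is the modest 10% gain described above -- but the asymmetry is the same.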

The dramatic improvement in life expectancy during the Industrial Revolution has, of course, coincided with an explosion in global population, which surpassed one billion in 1804, doubled by 1927, and then grew another three and a half times in less than a century, so that the planet supports over seven billion people today. Yet the Malthusian catastrophe of overcrowding predicted 200 years ago has not come to pass, thanks to improvements in farming, better energy sources, accelerating technological progress, and the modernization of medical care. We may be tempted to look at the dawn of the super-aged era as the ultimate fulfillment of the Malthusian prophecy, but for that to happen, technological progress must stagnate and birth rates must remain high worldwide. The first assumption is easily discarded, and the second has already been disproven in the developed world, where fertility has fallen below the replacement rate of roughly 2.1 children per woman, with a growing number of developing countries heading the same way. France, Britain, the U.S., Canada, Italy, Japan, and Germany have all recorded sub-replacement fertility rates for decades, China's one-child policy has decimated its fertility rate (now just 1.6 children per woman), and Brazil's development pushed its fertility below replacement in 2005.
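Those population milestones also show how sharply growth accelerated, and a back-of-the-envelope calculation makes the point. Assuming smooth exponential growth between milestones (and dating the seven-billion mark to roughly 2011), the implied average annual growth rate nearly tripled from one period to the next:

```python
import math

# Back-of-the-envelope: average annual population growth implied by the
# milestones in the text, assuming smooth exponential growth throughout.
# The 2011 date for seven billion is approximate.
milestones = [(1804, 1e9), (1927, 2e9), (2011, 7e9)]  # (year, population)

for (y0, p0), (y1, p1) in zip(milestones, milestones[1:]):
    rate = math.log(p1 / p0) / (y1 - y0)  # continuously compounded rate
    print(f"{y0}-{y1}: ~{rate:.2%} per year")
# Prints roughly 0.56% per year for 1804-1927 and 1.49% for 1927-2011.
```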

The combination of declining fertility and lengthening lifespans produces a different sort of problem, though it may not be as intractable as you think. The Baby Boom generation was far larger than its parents' generation, and that influx of workers provided social-insurance programs with very favorable ratios of workers to retirees. The Millennials (or "New Boomers," according to the chart below) -- who will be parents to the generation whose lives we've thus far followed through this fast-changing future -- are not a much larger cohort than their parents, and a number of factors, from weaker economic prospects to changing social attitudes, suggest that they may not wind up having enough children to sustain population equilibrium.


Source: U.S. Census Bureau via Elwood Carlson and the Population Reference Bureau.
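To see why those cohort sizes matter so much to pay-as-you-go programs, consider a minimal sketch of the worker-to-retiree support ratio. The headcounts below are invented for illustration (they are not Census figures), and the model ignores everything except raw numbers of contributors and beneficiaries:

```python
# Minimal sketch of a pay-as-you-go support ratio: workers paying in
# per retiree drawing benefits. All headcounts (in millions) are
# invented for illustration, not Census figures.

def support_ratio(workers_m, retirees_m):
    """Workers contributing per retiree collecting benefits."""
    return workers_m / retirees_m

# A boom generation at working age, supporting its smaller parent cohort:
print(f"boom at work: {support_ratio(workers_m=120, retirees_m=30):.1f} workers per retiree")

# The same boom in retirement, supported by a similar-sized successor
# cohort further thinned by automation's squeeze on full-time work:
print(f"boom retired: {support_ratio(workers_m=90, retirees_m=75):.1f} workers per retiree")
```

Every dollar of benefits must be spread across fewer contributors in the second scenario, which is why lengthening lifespans and a shrinking full-time workforce strain the same arithmetic.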

As we examined earlier, the spread of automation throughout the economy will make it much harder to earn a living wage as a button-presser or in any other repetitive sort of job -- a description that unfortunately applies to many of today's jobs. In time, automation is likely to drastically reduce the need for a full-time workforce, thus shrinking the tax base needed to support elderly people drawing on social-insurance programs, regardless of any other streamlining efforts.

Tomorrow's 150-year-old may be as healthy as a 30-year-old, but as long as we imagine that the world will still run on some form of capitalism, we also have to imagine that people will either still be drawing Social Security checks or still be at work. What happens to the workforce, and to society, when people can continue to work far past the normal retirement age? In a world where such extreme longevity is possible, some major shift will have to occur in the way our children deal with work, retirement, and social interactions -- essentially, everything in their adult lives.

A guaranteed minimum income, which we explored in the previous section, can help reduce the burden on wage-earners if redistributive taxes are properly levied on the holders of capital, who will be the ones deploying automation technologies in the workforce. It should also help ameliorate some of the generational resentment that may well arise if hale and hearty centenarians wind up clinging to a dwindling number of "good" jobs.

There are a number of therapies and approaches to improving the natural functions of human bodies, and de Grey and Ray Kurzweil share similar ambitions for radically extending human life and human biological functionality. But where de Grey's SENS Research Foundation focuses on biotechnology, Kurzweil looks ahead to a future where nanotechnology -- or at least something sufficiently miniaturized -- can operate inside human bodies, enhancing our mental and physical capabilities in ways beyond what's possible with tweaks to the genome:

What happens when machines are smarter than an unenhanced human? The answer to that question is implicit in the adjective "unenhanced." Education has been a form of enhancement and we will incorporate more advanced forms of enhancement.

We are not experiencing an alien invasion of intelligent machines from Mars to displace us. We create these tools to expand our own reach, physically and mentally. We are the only species that does this. The additional neocortex afforded by our large foreheads was the enabling factor that permitted us to create this new form of evolution. Accessing knowledge in the cloud is already a brain extender, even if we now use devices such as smart phones to access it. As we get to the 2030s we will be expanding our neocortex directly in the cloud. So rather than competing with superintelligent machines, we will merge with them.

This is a concise description of progress toward a technological singularity, a concept first developed by John von Neumann, later popularized by Vernor Vinge, and further expanded by Kurzweil, the author of The Singularity Is Near and a co-founder of Singularity University. Just as a black hole is a gravitational singularity beyond which we can see nothing, the technological singularity marks a horizon beyond which we can't predict. It's the leap from fire to the farm, from the farm to the factory, and from the factory to something we're not able to understand from today's vantage point. It stands to reason that augmenting human intelligence with the direct application of computing power would, if and when ultimately successful, produce breakthroughs beyond what we can imagine today, at speeds beyond our comprehension. As Kurzweil says, we are the only species that extends its capabilities with advanced tools, but the human body is limited in a number of ways that computers are not. The singularity, as we understand it, would be the final breakthrough that frees us of our biological constraints.

There are very real risks to human society during this transformation from human to human-plus, perhaps on the scale of the Neanderthals' demise at human hands. In the nearer term (at least nearer in this timeline), a greater risk may be that development of these new augmentation technologies proceeds behind the closed doors of a few determined plutocrats while society at large is pressured to oppose such development. And as it stands today, there simply isn't much of an approved medical rationale for this sort of experiment on healthy human minds or bodies. Michael Chorost doesn't expect this rationale to develop easily:

Source: Festival della Scienza via Flickr.

The problem is that it is very hard for the medical establishment -- the guys with the money -- to find reasons to alter healthy, properly functioning bodies. The risk-to-reward ratio is very poor. In my case, when I went deaf, it was well worth the surgical risk to hear again with a cochlear implant. With a healthy individual, the risk of things going wrong outweighs the potential benefits -- and nobody's going to go near it.

Of course, with things like steroids and other chemical enhancements, the risks are well known -- but that's popping a pill, using tech that's been around for decades. Cutting into a body with a knife to put new things into it is a whole different matter. The question is, what kind of benefits would there be? Beyond current techniques (pills, shoulder modification, LASIK, and so forth) I can't think of any engineering path that would result in significantly enhancing an athlete or any other healthy individual. Can you?

It's only when the potential benefit is enormous that anyone is going to seriously consider physically modifying a healthy body. If you could open up whole new kinds of communication, to let people do things that to us would seem like magic, then the risk-reward ratio starts to shift. Even there, the obstacles are huge. Our current regulatory system makes it nearly impossible to do such things. But let's say that someday, that begins to change. It could start with developing technologies to help locked-in people communicate. Then someone in China might begin trying it in healthy people. And then we would eventually start trying to catch up.

As Chorost points out, it's quite likely that China will pursue augmentations more aggressively than most Western nations. Unless China's repressive political culture gives way to a more open form of government, the best treatments could well wind up reserved for party elites and other hand-picked luminaries. What happens to society if the means of augmentation are kept under lock and key by those who view widespread improvement of the human condition as a threat to their own power? If the only 150-year-olds are plutocrats and the rest of humanity putters along in version 1.0 bodies, will people simply accept the situation? Will they accept the idea that some of their fellow citizens are smarter not because they were born that way or trained themselves to think better, but because they had the resources to improve their minds with a chip or an injection of nanobots?

The path of progress toward technologies that push the human body beyond its biological limitations is clouded in the best of times. What will matter most -- as has been the case with every other technology we've examined here -- is the mind-set of those who pursue that progress, and of those who control its eventual availability. This is the same dilemma the world faced throughout the Cold War, except that now the power to cause great harm may not be limited to the leaders of the world's two most powerful countries. If progress toward posthumanism takes full account of its eventual impact on the unaugmented, the risks to humanity can be kept modest. What's certain to change, if (and likely when) advanced life-extending and capacity-boosting technologies begin to spread around the world, is the nature of society itself.

Source: Steve Jurvetson via Flickr.

Imagine life at 150. It's unlikely that anyone alive today will become a sesquicentenarian by 2040, as that would require them to have been born by 1890 -- they'd already be the oldest people in the world today. However, the progress of research at SENS and other anti-aging centers could make healthy 100-year-old baby boomers relatively commonplace shortly after 2040. And tomorrow's children, born in 2015, would have a lifetime of exposure to negligible-senescence treatments available to them. Why stop at 150? The lives of tomorrow's children may be filled with healthy grandparents who look little different from their grandchildren. As Aubrey de Grey has pointed out, a world where age really is just a number is likely to break down the barriers of age-defined lifestyles, making "retirement" more of a choice than a necessity for weary bodies and commingling the young with the very old in all sorts of enjoyable activities, from sports to sexuality -- provided society is ready to adapt to this changing reality.

The conflicts of a world where age really is nothing but a number and where the human body is just a beginning have been covered by many science fiction books and films over the years, and the wide range of stories told only highlights just how little we can really know about this future. The attitudes tomorrow's children build in school may not fully prepare them to live with cyborgs and centenarians who can outdo and outthink ordinary human beings. All we can do is lay the groundwork today for a world in which all can share in the gains of progress, whether that progress involves a robotic workforce or a humanity that's become more than simply human. Tomorrow's children will inherit that world, but it's up to us to begin building it for them.

2045: The end of our beginning

"Nothing is built on stone; all is built on sand, but we must build as if the sand were stone." -- Jorge Luis Borges

By the time tomorrow's children begin fully entering the workforce -- whatever's left of it -- their experiences and their eventual professions will look much different than ours. After spending the bulk of their lives absorbing a new virtual reality, fabricated reality, and automated reality, they'll be on the vanguard of creating a technological age whose endpoint we are not likely to fully understand. To them, we may seem like the hardscrabble farmers left behind by the industrial revolution, watching the steam train and the steamship and the telegraph link worlds we'd never even considered to progress we might never have thought possible.

By 2045, they'll be entering what's currently considered their peak earning years, though by then it may be something else entirely -- the first surge of mature creative output before their work leaves us far behind. With augmentations, there may be no peak at all, only continuous growth: a personal singularity. It's no coincidence that our understanding of their story ends around 2045, the year in which Ray Kurzweil has forecast the beginning of the technological singularity.

What does that mean for us? A popular depiction of this event shows the awakening of machines into a humanlike self-awareness. These machines may be part of us, or they may be our descendants. As technology gains greater and greater prominence, the notion of what it means to be human will ultimately become inseparable from the notion of a technological being. We are already technological beings, but only at the edges. The change to a being with technology at the core -- an augmented mind, an enhanced heart, a bionic body -- is what may well drive tomorrow's children in the latter half of the 21st century. Or it may not. We may be unleashing technological forces that will grow beyond our control and render us obsolete before we can adapt.

If we look back at their lives, we see that tomorrow's children should have enjoyed an education that taught them how to think rather than what to think, lived through a period of automation that eliminated more jobs than it created, participated in the long-overdue modernization of an economic system originally devised for a world where the most advanced technologies were the tall sail and the blunderbuss, and taken part in the dramatic enhancement of their bodies by means both mechanical and biochemical. This is the world they ought to be prepared for -- a world of breakneck progress and dramatic change, not the one of short steps and iterative upgrades we so often expect.

Our challenge isn't to paint in the details of this picture for the next generation, but to make sure that the outline we create today is one we want tomorrow's children to fill in as they grow up. And many obstacles stand in our way. The American education system, long the envy of the world, may sink so far into a bureaucratic quagmire that it becomes unable to produce the leaders we'll need to guide the next generation. Our economy may fail to adapt to a world where far fewer people need to work. That failure would harm progress as well -- when those in charge of the gears of commerce see no benefit in innovating and are content to run in place, they may grow rich, but the rest of us grow poorer in a stagnating world.

But there will be plenty of opportunities to fix what's wrong, to build something better, and to give tomorrow's children the right tools to guide our world through its next great transformation. What matters most is that we begin this process now with an understanding that, as President John F. Kennedy once said, "All this will not be finished in the first one hundred days. Nor will it be finished in the first one thousand days ... nor even perhaps in our lifetime on this planet." This is a lifelong project with the goal of ensuring a better future for tomorrow's children, even though we may never know how it will play out. We may not know what lies ahead. But let us begin today.