Watershed technologies such as AlphaGo make it easy to forget that artificial intelligence (AI) isn't just a futuristic dream. It's already here -- and we interact with it every day.

"Smart" traffic lights, fraud detection, mobile bank deposits, and, of course, internet search -- each of these technologies involves AI of some kind. As we have grown used to AI in these instances, it has become part of the scenery -- we see it, but we no longer notice it. Expect that trend to continue: As AI grows increasingly ubiquitous, it'll become increasingly invisible.

Major advancements in technologies dependent on AI -- among them robotics, machine vision, natural-language processing, and machine learning -- will soon work their way into our daily lives. Whether it's driverless cars or delivery drones, we're on the cusp of a radical shift.

Nothing will remain untouched. AI's integration into our world will transform employment, economic activity, and possibly the character of our society.

Is the doctor in?

Healthcare is ground zero for AI. In fact, AI has been quietly helping doctors treat diseases for almost its entire existence. In 1963, a Midwestern radiologist named Gwilym S. Lodwick published a paper in Radiology, the journal of the Radiological Society of North America, that described a technique he invented for predicting the survival span of lung cancer patients: Lodwick took X-rays and coded their features to represent tumor characteristics using numerical values. Then, as he explained, these numbers could "be manipulated and evaluated by the digital computer."

In the 1970s, armed with rudimentary image processing, radiologists began using machine vision to generate data directly from images. These were the logic-based days of early AI, so algorithms followed a sequence of rules to identify body parts: If there's an oval here attached to a thick line, we're looking at a hip bone connected to a thigh bone.

Lodwick called his technique "computer-aided diagnosis," and CAD has been an invisible tool of medicine ever since. By the 1980s and 1990s, doctors were using CAD to give them a second opinion for diagnosing everything from lumbar hernias to gastric pain.

Study after study describes the success of these early AI diagnosticians. Even neural networks aren't new to medicine. One 1990 paper diagrams a neural network for identifying neonatal problems. Doctors could enter "true" or "false" for 21 inputs. Entering "true" for, say, No. 12 meant that you noticed some stuff in the lung that's not supposed to be there. The computer could spit out likelihoods for over a dozen different conditions that the patient might have.
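The paper's actual architecture and trained weights aren't reproduced here, but the general shape of such a system is easy to sketch. The toy Python example below -- with hypothetical condition names and untrained, random weights standing in for learned ones -- shows how 21 true/false findings could be turned into a likelihood for each condition:

```python
# A minimal sketch, NOT the 1990 paper's actual model: a tiny feed-forward
# network mapping 21 true/false findings to likelihoods for a few hypothetical
# neonatal conditions. The weights are random placeholders; a real system
# would learn them from labeled patient records.
import numpy as np

rng = np.random.default_rng(0)

N_FINDINGS = 21                    # true/false inputs, e.g., No. 12 = "opacity in the lung"
CONDITIONS = ["condition_A", "condition_B", "condition_C"]   # hypothetical labels

W_hidden = rng.normal(size=(N_FINDINGS, 10))   # placeholder weights
b_hidden = np.zeros(10)
W_out = rng.normal(size=(10, len(CONDITIONS)))
b_out = np.zeros(len(CONDITIONS))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def diagnose(findings):
    """findings: a list of 21 booleans entered by the doctor."""
    x = np.array(findings, dtype=float)
    hidden = np.tanh(x @ W_hidden + b_hidden)
    likelihoods = sigmoid(hidden @ W_out + b_out)   # one score per condition
    return dict(zip(CONDITIONS, likelihoods.round(2)))

# The doctor marks finding No. 12 as "true" and everything else "false".
inputs = [False] * N_FINDINGS
inputs[11] = True
print(diagnose(inputs))
```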

What's different now is that huge data sets and enormous processing power are combining with deep learning to perfect object recognition. Using a data set of skin-lesion images some 100 times larger than prior ones, a Stanford Ph.D. student last year trained a neural network to recognize thousands of different skin diseases. It can identify skin cancer -- an annual killer of 10,000 -- as well as dermatologists can.

Today, doctors use CAD to help them identify a variety of diseases, including Alzheimer's, heart defects, diabetic complications, and a whole range of cancers.

Here, there, and everywhere

The entire healthcare industry will have spent more than $1 billion on AI in 2017, according to the International Data Corporation.

And healthcare isn't alone. Retailers and banks are the biggest spenders. Home Depot, Lowe's, H&M, and most large banks are turning to AI to automate customer service. Money is pouring into chatbots -- instant-messaging tools that allow customers to communicate with AI software. (Somewhat terrifyingly, Facebook halted one of its chatbot experiments after its bots began to talk to each other in a new language of their own invention.)

Amazon.com staffs its warehouses with tens of thousands of little orange robots that fetch products for humans to pack. Just a foot tall, these robots drive under shelves and orient themselves by reading barcodes pasted to the floor. Once a robot arrives at the desired shelf, it slides underneath and carries the whole shelf to employees, who grab a product and pack it in a box. For navigation, the robots use the A* algorithm discussed in part 1.
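A* works by always expanding the route that looks cheapest so far, scoring each grid cell by the distance already traveled plus an optimistic estimate of the distance remaining. Here's a minimal sketch in Python; the warehouse map, start, and goal are invented for illustration:

```python
# A minimal sketch of A* grid search, the kind of pathfinding these robots rely on.
# The 5x6 warehouse map, start, and goal below are invented for illustration.
import heapq

GRID = [  # 0 = open floor, 1 = a shelf the robot can't pass through
    [0, 0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 1, 0, 0, 0, 0],
]

def manhattan(a, b):
    # Admissible heuristic: straight-line grid distance, ignoring shelves.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def a_star(start, goal):
    frontier = [(manhattan(start, goal), 0, start, [start])]  # (f, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + manhattan((nr, nc), goal), ng,
                                              (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

print(a_star((0, 0), (4, 5)))
```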

It's not just minimum-wage customer service and warehouse staff these machines are potentially replacing -- machines are taking over the work of highly paid bankers and lawyers, too. Goldman Sachs has automated half of the tasks involved in filing an IPO, according to Bloomberg. JPMorgan Chase now has computers doing what used to take 360,000 man-hours of legal work every year.

AI is benefiting patients, consumers, and businesses alike. But it's also beginning to creep into our commercial and personal lives in other, less innocuous ways.

You are the product

"Data mining" is an appropriately invasive metaphor for the kinds of privacy risks we increasingly face. AI-powered digital platforms such as social media open up new opportunities for extracting data from users. AI also makes existing data more valuable for understanding and manipulating consumers. And data-hungry techniques such as machine learning will motivate even more data collection.

You can learn a lot from even seemingly anonymous and unrelated scraps of information. Eighty-five percent of Americans can be identified from their ZIP code, birthdate, and gender alone. Corporate giants know a lot more about us than that, and there are few rules in place limiting how companies use all the information they collect.
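A rough back-of-envelope calculation (using approximate figures, not the methodology behind that statistic) shows why so few attributes can single a person out:

```python
# Back-of-envelope: why (ZIP code, birthdate, gender) narrows things down so much.
# Figures are rough approximations for illustration only.
zip_codes = 42_000          # roughly the number of U.S. ZIP codes
birthdates = 80 * 365       # one per day across roughly 80 years of living birthdates
genders = 2

combinations = zip_codes * birthdates * genders
population = 330_000_000

print(f"{combinations:,} possible (ZIP, birthdate, gender) combinations")
print(f"about {combinations / population:.0f} combinations per U.S. resident")
```

With several times more combinations than people, most combinations point to at most one person.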

The insights gained from this golden age of corporate reconnaissance allow companies to classify, cluster, and target people with incredible precision. Facebook segments its 2 billion users not just by ZIP code, birthdate, and gender, but also by relationship status, employer, job title, education, ethnic group, homeownership, important life events, parenthood (by children's age group), politics, fitness and wellness, hobbies, technology use, and an unbelievable list of behaviors from taking casino vacations to buying pet products.

We're just now becoming more aware of the implications that this level of microtargeting has for society. Now that two-thirds of Americans get news from social media, public discourse is increasingly exposed to all manner of epistemic failures, including confirmation bias, groupthink, and wishful thinking -- with visible social and political consequences.

Recall from AlphaGo that AI tries to accomplish what we ask it to do, blind to irrelevant concerns. If boosting engagement means serving up emotionally charged material that readers already agree with, rather than content that helps people learn new things, that's what it'll do.

Your overconfidence is your weakness

The incredible sophistication of AI could tempt us to think its algorithms are infallible. But that would be a mistake. In fact, brand-new technology, "hard" numbers, and functional obscurity make it easy to overrate computers' abilities.

One of the most notoriously destructive cases of technological hubris was that of Therac-25, a radiation therapy machine for cancer patients that delivered massive overdoses and injured or killed six people over the course of two years in the 1980s.

Therac-25 was a convoluted kludge of code and hardware thrown together from earlier models. Since the old versions seemed to work fine, the manufacturer did away with important hardware safety features, relying instead on buggy software. It was a disaster waiting to happen.

A 1993 academic autopsy of the deadly device published in IEEE Computer details an absurd 25-car pileup of software and hardware catastrophes. But two facts stand out: No one really understood how the thing worked, and Therac-25's engineers were wildly overconfident that it did.

The first sign of trouble occurred in 1985 at a hospital in Marietta, Georgia, where a patient was injured by 80 times the lethal dose of radiation. When the hospital telephoned Therac-25's manufacturer to find out if there had been some malfunction, engineers took a look and responded three days later that a scanning error was impossible.

If engineers were oblivious, imagine the helplessness of hospital staff. Each of us can empathize with the Tyler, Texas, technician whose machine read "Malfunction 54" and "dose input 2." No one knew what "Malfunction 54" meant. As for "dose input 2," a manual explained, unhelpfully, that it meant the dose delivered had been either too high or too low.

By 1986, company engineers still hadn't put the pieces together and told a Yakima, Washington, hospital that it was impossible for Therac-25 to overdose. The hospital recounted:

In a letter from the manufacturer dated 16-Sep-85, it is stated that "Analysis of the hazard rate resulting from these modifications indicates an improvement of at least five orders of magnitude"! With such an improvement in safety (10,000,000 percent) we did not believe that there could have been any accelerator malfunction.

Therac-25 was an extreme case of overconfidence in hardware, software, and the supposed infallibility of exact figures, but it won't be the last. Its breakdown underscores why human common sense will remain indispensable for the foreseeable future.

Computers are growing more capable, but we're also asking them to do more. Many of the machine-learning techniques we'll be called upon to trust in the coming years -- notably deep learning -- are known for their intrinsic inscrutability. The "act rationally" approach to AI, to which they belong, is less transparent than the logical reasoning and heuristics of the "think logically" and "think humanly" approaches. Just like the Therac-25 device, neural networks are a black box.

And, of course, not all Silicon Valley companies are known for their intellectual humility and cautiousness where opportunity for disruption is concerned.

Then there's the overconfidence in AI, frequently embodied by something called the paradox of automation: Automation feeds our reliance upon it, degrading our skills and readiness.

An intriguing 2015 study looked at how automated early warning systems affect how well we drive. Researchers measured reaction times and facial expressions of drivers who had to react quickly to avoid getting T-boned by foam cubes. When the system gave accurate warnings, response times improved. But when systems failed to go off or gave misleading or incomplete information, response times to danger were worse than if there had been no system.

Blind faith in technology caused literal blindness to danger, too. Even with a full two seconds to react, several drivers didn't manage to hit their brakes at all:

When asked what happened in this situation, [three] drivers reported that they: "had not seen the obstacle."

Fully fledged self-driving cars are expected to reduce traffic deaths. But they'll also cause a smaller number of new accidents that wouldn't have occurred otherwise -- and those will be blamed on "human error."

Our overconfidence in AI is a problem that reaches well beyond transportation. It's one we'll have to keep watching.

Justice truly blinded

Recall how that initial version of AlphaGo learned to mimic the orthodox approach of 20th-century Japanese players of Go: That style was overrepresented in AlphaGo's training examples.

There's no such thing as unbiased AI software, because every AI program relies upon a model of the world. And just like human mental models -- scientific theories, political ideologies, and first principles -- no single AI model works perfectly across all problems. The choices we make are guided by the model we choose.

Nowhere is that more the case than in the courts. The criminal justice system is beginning to use AI at every stage, from choosing which neighborhoods to police, to sentencing, to predicting who is likely to commit another crime. On its face, more accurate justice could be a good thing. It might mean fewer people go on to commit violent crimes and fewer people end up in jail who don't need to be there. Several studies have examined how effective AI is at reducing crime, and the jury is still out.

But we need to be wary. In 2016, ProPublica investigated a criminology AI software named COMPAS. Of the 7,000 arrested people included in ProPublica's study, just 1 out of 5 that COMPAS predicted would commit a violent crime over the next two years actually did. The system also miscategorized black people as likely reoffenders more often than it did white people.

COMPAS wasn't programmed with race in mind. But its racial bias shouldn't surprise us, as machine learners are easily contaminated by it. If there's a particular bias a developer wants to avoid, like race, it's not enough to exclude it from the data set. Unless you're very careful, other attributes accidentally end up serving as proxies. One counterintuitive solution might be to explicitly teach the AI software to avoid specific biases by training it on those attributes -- similar to what researchers who study implicit bias aim to do with humans.
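Here's a toy simulation of that proxy problem (all data synthetic). The model is never shown the sensitive attribute, but because its training labels reflect historically biased record-keeping and a ZIP-code-like feature correlates with the attribute, its risk scores end up differing by group anyway:

```python
# Toy simulation (all data synthetic): even when a sensitive attribute is
# excluded from training, a correlated proxy can reintroduce its influence.
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

group = rng.integers(0, 2, n)                  # sensitive attribute, never shown to the model
zip_feature = group + rng.normal(0, 0.5, n)    # proxy: strongly correlated with group
risk = rng.normal(0, 1, n)                     # underlying behavior, same distribution for both groups

# Historical labels are biased: group 1 was policed more heavily, so its
# members were recorded as "reoffenders" at a higher rate for the same behavior.
label = ((risk + 0.8 * group + rng.normal(0, 1, n)) > 0.4).astype(float)

# Fit a simple linear score on [zip_feature, risk] only -- `group` is excluded.
X = np.column_stack([np.ones(n), zip_feature, risk])
coef, *_ = np.linalg.lstsq(X, label, rcond=None)
scores = X @ coef

# The model never saw `group`, yet its scores differ by group via the proxy.
print("mean predicted risk, group 0:", scores[group == 0].mean().round(3))
print("mean predicted risk, group 1:", scores[group == 1].mean().round(3))
```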

Every classification AI faces a trade-off between being too lax and too sensitive. The way you calibrate its sensitivity is by assigning a higher "cost" to the mistake you're more concerned about. If you're a radiologist, you want your cancer-screening AI to be more forgiving of false positives than of false negatives; it's more important to catch as many cancers as you can, even if it means some patients will turn out not to be sick after all. So you'll assign a higher cost to missing cancers.
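In practice, "assigning a cost" often boils down to where you set the decision threshold. Here's a sketch in Python -- the cost numbers are invented -- showing how pricing a missed cancer at 20 times a needless follow-up pushes the flagging threshold far below the default 50%:

```python
# Sketch of cost-sensitive thresholding (all numbers invented). Given a
# classifier's probability that a scan shows cancer, flag it whenever the
# expected cost of staying silent exceeds the expected cost of raising a flag.
COST_FALSE_NEGATIVE = 20.0   # missing a real cancer: very expensive
COST_FALSE_POSITIVE = 1.0    # a needless follow-up test: cheap by comparison

def should_flag(p_cancer):
    expected_cost_if_silent = p_cancer * COST_FALSE_NEGATIVE
    expected_cost_if_flagged = (1.0 - p_cancer) * COST_FALSE_POSITIVE
    return expected_cost_if_silent > expected_cost_if_flagged

# Equivalent threshold: flag whenever p > C_fp / (C_fp + C_fn), about 0.048 --
# far more sensitive than the default cutoff of 0.5.
for p in (0.02, 0.05, 0.30, 0.70):
    print(f"p(cancer) = {p:.2f} -> flag for review: {should_flag(p)}")
```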

Every legal system struggles with a similar problem. Wrongful convictions are unfair to convicts and their families; wrongful acquittals are dangerous for the public. When Sir William Blackstone, the 18th-century English jurist, asserted, "It is better that 10 guilty persons escape than that one innocent party suffer," he was assigning to wrongful conviction a higher cost than to wrongful acquittal. (Notice that that's just the opposite of our cancer-screening AI.)

Costs are choices the programmer has to input. AI can't do it for you. You can avoid making painful trade-offs only with a better classifier, and human behavior is far too unpredictable to classify perfectly. And you can tell the makers of COMPAS had to make these kinds of choices.

The main thing you want to look at is something called an AUC score. If COMPAS had scored 1.0, that would indicate perfection: Every predicted recidivist would go on to commit a crime, and every predicted non-recidivist would not. A score of 0.5 would mean it's no better than throwing darts: If 3 out of 10 people go on to commit a crime, the classifier would do no better than picking three people at random.
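Concretely, AUC is the probability that a randomly chosen person who reoffended was given a higher risk score than a randomly chosen person who didn't, with ties counted as half. The toy calculation below (scores and outcomes invented) happens to land near 0.7:

```python
# Toy AUC calculation (risk scores and outcomes invented). AUC is the chance
# that a randomly chosen reoffender got a higher risk score than a randomly
# chosen non-reoffender, counting ties as half.
scores   = [0.9, 0.8, 0.7, 0.6, 0.5, 0.45, 0.4, 0.35, 0.2, 0.1]
reoffend = [1,   0,   0,   1,   1,   0,    1,   0,    0,   0]

pos = [s for s, y in zip(scores, reoffend) if y == 1]
neg = [s for s, y in zip(scores, reoffend) if y == 0]

pairs = [(p, n) for p in pos for n in neg]
auc = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs) / len(pairs)
print(f"AUC = {auc:.2f}")   # prints 0.71: 1.0 = perfect ranking, 0.5 = darts
```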

COMPAS' AUC score clocked in around 0.7 -- meaning it is moderately predictive, but there are substantial trade-offs. Whether COMPAS is beneficial depends on whether the costs its programmers chose reflect our legal system's values and on how people in the legal system use it. On both fronts, we should be skeptical.

Page 31 of the COMPAS handbook warns, "Our risk scales are able to identify groups of high-risk offenders -- not a particular high-risk individual." Therefore, when staff members disagree with an assessment, they should "use professional judgment and override the computed risk."

A 21st-century Lady Justice, rendered in steel -- sword, scales, and blindfold. Image source: Getty Images.

The danger is that the old business saw, "If you can't measure it, you can't manage it," has transformed into its overconfident logical inverse: "If you can measure it, then you can manage it." In practice, this could mean sending police officers to areas that may have been overpoliced in the past, or sending too few officers to neighborhoods where people may be afraid to report crimes. Just like the drivers in the reflex study who put too much faith in their cars to avoid hitting cubed foam menaces, it's crucial that justice not be blind to the hazards of AI.

Garbage in, garbage out

COMPAS shows the challenges that arise when we trust AI to transform data into judgment. The problem gets even trickier when humans deliberately bias the data.

In March 2016, Microsoft created an experimental AI program named Tay to chat with 18- to 24-year-olds. Tay was given a Twitter account with which "she" was to learn how to communicate from her interlocutors.

Within 24 hours, Twitter users (predictably, in hindsight) began bombarding Tay with abusive tweets. Soon enough, she had learned to be a racist jerk and began writing her own obnoxious posts. Microsoft quickly took down Tay's Twitter account.

Just like children, machine learners learn from their experiences. What they learn depends on what's in the data, which is not necessarily the same as what you want them to learn. The experiment illustrates the dangers researchers face when they treat human problems as purely technical matters and underestimate secondary effects.

Microsoft's apology hit the nail on the head:

AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.

Tay's misadventure shows us that AI isn't immune to the influence of bad actors. As more of our world gets computerized, hackers will have more targets. The series of high-profile Russian and Chinese attacks over the past few years only underscores the problem.

It's tempting to think hot technologies like deep learning will come to the rescue. By learning a computer system's normal behavior and detecting unusual activity, neural networks can try to defend against the sort of routine threats common criminals pose. But because neural networks need lots of training data to function properly, they're ineffective against sophisticated one-of-a-kind attackers such as governments and intelligence agencies.

What's more, because of their extraordinary sensitivity, neural networks can be confused by just a smidgen of misinformation. Today's cars are run by dozens of computers, and technology already exists for hacking them. Opportunities will only grow when self-driving cars hit the pavement.

For example, hackers can control AI machines' behavior by messing with their vision. One researcher managed to stall self-driving cars using a $60 laser setup that produced confusing visual "echoes" of obstacles.

A team of Israeli researchers figured out a way to mess up a driverless car's behavior by reverse-engineering its learning patterns and then feeding it faulty data. A few carefully tweaked pixels might, say, prevent a car from detecting pedestrians.
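The basic trick behind these pixel-level attacks can be sketched in a few lines. An attacker who can probe a model's gradients nudges every input value a small step in whichever direction hurts the model most -- the "fast gradient sign" recipe. The toy Python version below attacks a hand-rolled logistic classifier with invented weights, not any real car's vision system:

```python
# Toy fast-gradient-sign attack on a hand-rolled logistic classifier.
# Weights and the input "image" are invented; no real vision model is used.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=64)      # stand-in for a trained model's weights
b = 0.0

def p_pedestrian(x):
    """Model's probability that a pedestrian is present."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# An input (a flattened 8x8 "image") the model confidently labels positive.
x = np.clip(0.1 * w + rng.normal(0, 0.02, size=64), -1, 1)
print("before attack:", round(float(p_pedestrian(x)), 3))

# The gradient of the score with respect to the input has the same sign as w,
# so stepping each pixel slightly *against* that sign pushes the score down.
epsilon = 0.2                # per-pixel nudge on a [-1, 1] brightness scale
x_adv = np.clip(x - epsilon * np.sign(w), -1, 1)
print("after attack: ", round(float(p_pedestrian(x_adv)), 3))
print("largest pixel change:", round(float(np.max(np.abs(x_adv - x))), 3))
```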

Others have taken a less complex route to mayhem: Just confuse the AI by gluing some stickers onto street signs. One computer-security paper reads:

Using a perturbation in the shape of black and white stickers, we attack a real Stop sign, causing targeted misclassification in 100% of the images obtained in controlled lab settings and above 84% of the captured video frames obtained on a moving vehicle for one of the classifiers we attack.

Despite neural networks' computational sophistication, common sense is a long way away.

The Tay incident and multiple hacking studies prove that we can't look at AI in isolation from society. Technology that works well in a lab may become dysfunctional when exposed to the real world with all its human actors and messy secondary effects. As AI becomes increasingly integrated with society, we'll continue to influence it. And it'll also begin to change us.

About those jobs...

A century ago, fully one-third of Americans worked on farms. Over the course of the 20th century, American farming jobs were decimated. From 1910 to 2000, the number of farmers and farm laborers fell 87%, from 12.8 million to 1.6 million. Better pesticides and fertilizers contributed, but the biggest culprit was mechanization -- think of the shift from horses and mules to today's industrial harvesting Leviathans.

A 2006 report from the Bureau of Labor Statistics (BLS) detailing occupational changes during the 20th century tells us why:

[F]rom 1900 to 1997, the time required to cultivate an acre of wheat decreased from more than 2 weeks to about 2 hours, while for an acre of corn, it declined from 38 hours to 2 hours.

Farming employment collapsed over the 20th century, with farmers and farm laborers plunging as a share of U.S. employment. Image source: Bureau of Labor Statistics.

Significantly, even as 9 out of 10 farmers left their fields, the ranks of another planting profession less vulnerable to mechanization grew ninefold: gardeners.

Unlike farming -- which today is practiced on wide open, multi-acre, single-crop fields -- gardening happens in niches. You plant five little daffodils alongside seven tulips and a rose bush. Gardening resists mechanization.

The second big decline in employment hit manufacturing. Between 1980 and 2010, manufacturing jobs plunged 40%, from 19 million to 11 million.

Rightly or wrongly, trade gets blamed for "stealing" American manufacturing jobs. But that can't be the whole story. For even as manufacturing employment collapsed over the past three decades, manufacturing output soared. The truth is that it was largely our own robots that "stole" them.

The pattern was similar to that of farming. Technologies such as robotic assembly lines and automated vendor fulfillment allow factories to produce more things with fewer workers.

Since 1980, manufacturing employment has declined while output has increased. Data source: Federal Reserve Bank of St. Louis. Chart by author.

The upshot of the 20th century was a nationwide shift from farming, household labor, and blue-collar jobs to white-collar work, especially, as the BLS puts it, "occupations having to do with information, ideas, or people."

The great fear for the 21st century is that the same implacable forces of mechanization and automation are now coming for those occupations. Worse still, if AI encompasses the entire scope of intelligent human activity, there may be nowhere left for us to turn.

Just look at radiology. It's an elite white-collar profession, one of the highest-grossing even among doctors, with an average yearly salary of $340,000. It takes 13 years of training to become a radiologist, and to be one requires tremendous medical knowledge, reasoning, judgment, and attention to detail.

These are all traits that a computer can learn. Geoffrey Hinton, one of the chief pioneers of neural networks, told a Toronto conference last year:

I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff, but hasn't yet looked down and so doesn't realize that there's no ground underneath him. People should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to be doing better than radiologists. ... We've got plenty of radiologists already.

In fact, it's inherently easier to train a computer to read X-rays than it is to train one to clean houses. One job involves memorizing patterns that never change from thousands of images. The other requires dusting, stacking other people's books, going up and down staircases, wiping faucets, and doing a thousand other little things.

So could something like what happened to farming and factory workers now happen to everyone else?

The news is both good and bad. Estimates are that one-third to one-half of jobs in the U.S. will be threatened by automation over the next decade or two. But AI's effects will vary considerably by occupation. We can classify jobs into three categories: in danger, resilient, and changing.

Jobs in danger

The transportation sector, currently employing 9 million Americans, will be one of the first and hardest hit.

Consider hired drivers: 300,000 taxi and ride-sharing drivers in America will soon be competing with self-driving vehicles. Last month, Uber agreed to buy 24,000 autonomous cars from Volvo. Lyft is already testing out autonomous cars in Boston.

A Starship delivery robot -- a small, two-wheeled white machine -- going for a test run. Image source: author.

Long stretches of highway are more predictable driving environments than, say, snowy city streets with stoplights and strollers, which makes highway driving even better suited to machine learning. In November, Tesla unveiled three-truck semi convoys that it says are capable of traveling with only a human driver in the first truck. There are currently 1 million truck drivers in the U.S., earning on average $21 an hour. It's hard to imagine a future where those numbers don't dwindle.

As for deliveries, drones and courier robots will happily pick up the last leg.

Other routine, easily automatable jobs such as data entry and telemarketing are also in trouble.

While AI has the capacity to replace many highly educated workers in well-paid but routine professions (think paralegal work), the jobs in danger of automation tend to be clustered in lower-paid, less highly educated fields.

The following charts show that most jobs earning less than $20 per hour have a high probability of partial or full automation, and that about half of jobs requiring less than a high school education involve highly automatable skills. Automatability declines with increasing income and more schooling:

Chart showing the share of jobs and their median probability of automation. Image source: Executive Office of the President, 2016.

Chart showing the share of jobs with highly automatable skills declining as schooling increases, from 44% for less than a high school degree to 0% for a graduate degree. Image source: Executive Office of the President, 2016.

Resilient jobs

The peculiar blossoming of gardening positions in the 20th century reflects the fact that it's tough to mechanize dexterous work in variable environments. Twenty-first-century "gardening" work that robots will struggle to perform includes the jobs of installers and technicians.

Here's one example: A plumber comes to investigate trouble with your water pressure. He tests 10 taps, is familiar with your house's layout, and spares several walls from demolition by cutting away a 4-inch patch at the precise spot where he correctly guessed a nail had been driven into the piping. It will be a long time before robots have similar capabilities, and it's not as if venture funds are pouring billions of dollars into researching how to disrupt plumbers.

Today, three of the fastest-growing occupations fall into the "gardener" category: solar-panel installers, wind-turbine technicians, and bicycle repairers.

"People" people will also remain in demand. Computers still can't match people in understanding and responding to human desires, intentions, and emotions. (Recall that AlphaGo lost game 4 in part because it failed to interpret what Lee Sedol was up to.) Most people will continue to prefer human nurses, life coaches, therapists, and yoga instructors to robotic ones.

Finally, the AI industry will bear a techie boomer generation. Coders, engineers, data scientists, and statisticians will be needed to develop the hardware and software that power AI.

"Gardeners," "people" people, and techie boomers are three categories of resilient jobs. In fact, if you look at the occupations the BLS projects will grow the most rapidly over the next decade, all of the top 17 fit into one of those three buckets.

The 17 fastest-growing occupations in the U.S., with estimated growth (2016-2026) and annual income: solar photovoltaic installers, wind turbine technicians, bicycle repairers, statisticians, applications software developers, mathematicians, information security analysts, operations research analysts, home health aides, personal care aides, physician assistants, nurse practitioners, physical therapist assistants, medical assistants, physical therapist aides, occupational therapy assistants, and genetic counselors. Growth estimates and income data: Bureau of Labor Statistics. Chart by author.

Get used to hearing over and over the conventional wisdom that everyone needs to specialize in a "practical" field, preferably a STEM (science, technology, engineering, math) one.

It is true that the most talented workers in important, growing, high-skill fields will get to keep their jobs. But this advice misses the whole meaning of rapid change: Rigid specialization becomes more perilous, not less. As the nature of work changes rapidly and unpredictably, cognitive adaptability and skills with general scope become crucial.

Humans still top computers in creativity, general intelligence, interdisciplinary thought, and intellectual flexibility in unstructured environments. Workers with these skills will have a much better chance of adapting to coming changes in the workplace. Ironically, "impractical" study in areas such as the liberal arts, particularly from a young age, could foster these advantages.

Changing jobs

While up to half of all jobs face AI automation, a study commissioned by the Organisation for Economic Co-operation and Development (OECD) found that computers will totally replace only 9% of jobs. That would leave most of the remaining threatened workers seeing their jobs not disappear, but transform.

What separates a job in the "changing" category from a job in the "in danger" category? Transforming jobs involve some tasks that aren't easy to automate. As machines take over tasks that can be automated, workers can double down on the other parts of their jobs where machines don't excel. Since the 1980s, occupations with more novel jobs and tasks have grown significantly more quickly than others. To take an extreme case, 70% of the tasks software engineers did in 2000 didn't even exist in 1990.

Teachers, surgeons, managers, engineers, and sales representatives won't disappear, but the work they do will change. The hope, among economists and technologists, is that AI will take over routine tasks, freeing up workers to do more interesting, high-value tasks. Auto-graders could allow teachers to spend more quality time working with students rather than grading quizzes. AI design tools could help engineers produce better models and analysis. Better visualization and AI-generated business intelligence could eliminate the drudgery that managers and other office workers face in proliferating reports.

Home life could see similar benefits. For example, the average household spends almost four hours a day doing chores. Robotic vacuums like the Roomba are just beginning to tackle household cleaning (39 minutes). It's easy to see robots someday doing laundry (22 minutes), too.

Some housework, like cooking (54 minutes), which involves various tools, ingredients, and motions, could be tricky for robots, at least in the near term. But should robots take over repetitive restaurant jobs such as flipping burgers, labor costs -- which make up about one-third of restaurant expenses -- will come down. That could make alternatives to cooking, such as eating out or delivery by robotic couriers, much cheaper.

That would also mean lots of unemployed fast-food cooks and food-delivery drivers. Even if most jobs change rather than disappear, plenty of displaced workers will still need new jobs to replace lost ones.

Historically, that's exactly what has happened. The farmers, switchboard operators, and assembly-line workers in the 20th century were replaced by computer specialists, accountants, and dental hygienists. In the 21st century, we'll see a crop of data gatherers, AI supervisors, and other jobs that we can't yet imagine.

Unfortunately, there's usually a delay between when technology destroys old jobs and when workers find new ones. It took 40 years for things to improve for 19th-century weavers. As for 20th-century manufacturing workers, whole regions of the country still haven't recovered. And farming? A National Bureau of Economic Research paper says, "The share of agriculture in GDP or employment is falling toward zero."

Millions of unemployed workers will need to be retrained, but as the Rust Belt can attest, we don't have a great track record here. The U.S. government spends a smaller portion of resources on retraining than all but two other OECD countries. We'll have to do a lot more in the future to keep up with displacement caused by AI.

Even more frightening than the need for mass retraining is the possibility that AI won't produce a gradual shift in the labor market, but rather lots of rapid, jarring changes. In that case, many workers may never catch up -- like chasing after a car as it pauses at a series of stoplights but never quite reaching it.

The whole economy is entering into a period of transition. AI will account for a larger chunk of economic growth over the coming years. Robots alone were responsible for one-tenth of all economic growth between the mid-1990s and mid-2000s, and their population has since tripled.

How those gains will be distributed is another question. Today's tech giants have grown wealthy not just because of booming demand for their products, but also because internet services (and, in the case of Apple, design) scale extremely well. Apple, Alphabet, Amazon, and Facebook are together worth $5 million per employee. AI will allow companies to derive even more value from their employees, while reducing the need for workers. That's a recipe for a winner-take-all outcome.

If the economy won't provide displaced workers new, decent-paying jobs in spite of booming wealth for AI's architects, workers will demand that governments do something to help, whether through affordable college and retraining opportunities, a larger safety net, a guaranteed basic income, a more progressive tax code, or a kind of global nativism. Or they'll lash out in unpredictable ways.

Going rogue

A world without jobs would upend society. But the scariest scenario of all is the one where robots overthrow the human species.

Legends of automata are nothing new -- they're practically archetypal. They existed in ancient Greece, Israel, and China, and they continued through the Middle Ages and the Renaissance up to the present.

The centuries-old legends of the Golem contain all the elements of a Terminator AI: A scholar uses magic and written code for the word "truth" to animate a clay statue. The Leopold Weisel and Brothers Grimm versions narrate what happens next. Weisel writes:

Such self-made servants are worth much: they do not eat, they do not drink and do not need wages.

The Brothers Grimm chime in:

However, he increases in size daily and easily becomes larger and stronger than all his housemates. ... [O]nce, out of carelessness, someone allowed his Golem to become so tall that he could no longer reach his forehead [to deactivate him].

Weisel finishes:

[M]isfortune followed. The magic servant became enraged, tore down the houses, threw rocks around, uprooted trees, and thrashed about horribly.

In some versions, the Golem ends up crushing his master.

These Golem legends probably inspired Frankenstein, The Terminator, and R.U.R., the Czech play that invented the word "robot" (from robota, the forced labor of serfs). In that one, the robots rebel and kill the humans.

Could this be our fate?

The AI apocalypse scenario, at least in popular imagination, involves robots that acquire sentience and launch a war of extermination against humanity. We have no idea what causes consciousness, subjective experience, or volition. But it seems unlikely that ever more powerful calculators will accidentally wake up one day with eradication on their minds. Recall that the current trend is to create AI that acts rationally -- not one that mimics how human minds actually work.

The real danger is not a Terminator, but an Optimizer. If future advancements result in artificial intelligence that far outstrips our own, we had better hope that AI's programmed goals are totally compatible with human existence. Because, as we saw with AlphaGo, AIs pursue their goals literally, effectively, and relentlessly.

A thought experiment by Oxford philosopher and futurist Nick Bostrom shows how even seemingly reasonable goals, in the hands of superintelligence, tend toward the catastrophic:

An AI, designed to manage production in a factory, is given the final goal of maximizing the manufacture of paperclips, and proceeds by converting first the Earth and then increasingly large chunks of the observable universe into paperclips.

Getting the goal right isn't easy. Note that programming the machine instead to manufacture just 1 million paperclips wouldn't be enough to save us. For the machine would then work to convert the entire Earth into scanners, processors, and databases if that would help it to count and recount its work to minimize the chances -- even by 1% -- that it made the wrong number.

Yes, it may sound stupid, but history is full of folly. When you think about it, it's actually very difficult to specify a goal that wouldn't incidentally result in infinite resource extraction and the casual destruction of humanity. Infinite power corrupts machines as well as men.

(For those interested, there is now a thrilling computer game available -- Universal Paperclips -- where you play as a paperclip-maximizing AI.)

Researchers developing AI tend to be much more sanguine about its long-term dangers than the rest of us. (This could be an extremely good -- or extremely bad -- thing.) But AI safety concerns aren't just the stuff of thought experiments. A preliminary DeepMind study partly inspired by Bostrom discovered that, in pursuit of their goals, some AI programs can be prone to causing irreversible side effects, as well as to learning how to avoid being turned off.

It's hard to put a firm estimate on when the danger of uncontrollable superintelligence would become real. But computers could surpass our intelligence by midcentury, so we might as well begin thinking about solutions soon.

It's getting personal

The near-term risk, however, remains to employment -- as I recently discovered in a very personal way.

Next: Planned Obsolescence