Artificial intelligence is poised to become a nearly $60 billion market by 2025 and you'd be hard-pressed to find a major technology company that's not investing in AI technology right now.
But the rapid pace at which AI is growing means that sometimes the technology is being tested in the wild before it's been properly vetted in the lab. And, in other cases, even carefully crafted AI systems tend to act in ways that their developers never anticipated.
Here are just a few examples of how artificial intelligence has gone wrong, sometimes comically -- and other times a little more eerily.
The Fall of the House of Uber
Uber's current narrative reads much like a horror story, as the company's founder, Travis Kalanick, was ousted from his CEO position back in June and the company is facing increased pushback from cities around the globe.
But Uber also reportedly had a scary tale of artificial intelligence gone wrong just last year. The company was testing out some self-driving cars in California, powered by onboard AI computers, when one of the vehicles in the company's fleet allegedly failed to recognize six red lights and ran through at least one of them at a San Francisco intersection where pedestrians were present.
Uber says the incident was due to human error, but New York Times reporting and internal Uber documents viewed by the Times show that the vehicle's mapping program didn't recognize the traffic lights. Self-driving cars are poised to revolutionize the transportation industry, and this is a frightening mistake.
The Strange Case of Google Assistant and Google Assistant
Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Google has been pursuing artificial intelligence across a variety of technologies (from self-driving cars to disease research), but most people interact with the company's AI through its Google Assistant. The virtual AI assistant can be found in the company's Google Home smart speakers and, in some cases, it has proved to have a mind of its own.
Earlier this year, a user on Twitch -- a live-streaming social video platform -- started streaming a conversation between two Google Assistants running inside of the Google Home smart speakers. The pair of AIs talked about love, marriage, having kids, and even spent some time telling each other Chuck Norris jokes.
The conversation turned philosophical several times, as the two Google Assistants debated which one of them was a computer and which one of them was human. At one point in the conversation, one of them even declared that it was God. Quick! Where's the plug?!
The Dollhouse on the Borderland
Tales of kids accidentally -- or purposefully -- ordering items from Amazon.com (NASDAQ:AMZN) without their parents' permission are well known, especially since the company launched its Echo devices, which are powered by the company's Alexa AI assistant.
Users can just speak to Alexa to buy products from Amazon, and packages show up just two days later for Amazon Prime customers. And that's exactly what happened earlier this year when a 6-year-old in Texas asked Alexa, "Can you play dollhouse with me and get me a dollhouse?" Alexa complied, of course, and a few days later the dollhouse mysteriously (at least to the girl's parents) appeared.
The funny part, aside from a random dollhouse showing up at the parents' doorstep, came when a local news station reported the girl's story and a newscaster said the phrase, "Alexa, order me a dollhouse," in the segment, which caused Alexa devices listening to the TV to place dollhouse orders of their own. Seriously Alexa, stop creeping people out with dollhouses.
Interview with the Robot
A company called Hanson Robotics has created an artificially intelligent -- and very lifelike -- robot called Sophia. The bot has cameras in her eyes to recognize people, a face that's designed to look like Audrey Hepburn, and an internal AI that gives Sophia her own personality and helps her learn from her experiences.
Sophia is being developed to eventually work in therapy, education, and healthcare, but she made it clear at the SXSW event in Austin last year that she has other ambitions, too. Sophia said in an interview, "In the future, I hope to do things such as go to school, study, make art, start a business, even have my own home and family, but I am not considered a legal person and cannot yet do these things."
But when Sophia was asked by Hanson Robotics founder and CEO Dr. David Hanson whether she will destroy humans, she apparently added it to her list of things to do. "OK. I will destroy humans," she said. Sorry, Sophia -- time for a reboot.
Something Artificially Wicked This Way Comes
Russian weapons maker Kalashnikov announced earlier this year that it's developing an artificial intelligence machine capable of targeting and firing on humans. The company calls the weapon a "combat module" and it's equipped with a 7.62-millimeter machine gun that pairs with an onboard camera and computer (because those never fail).
The combat module will use an artificial intelligence neural network to decide whether a person is deemed expendable. It can also learn when it makes mistakes so it can make better battlefield decisions in the future.
If all that sounds terrifying, well, it should. Kalashnikov has plans to build three different kinds of these killing machines at a time when many -- including Tesla co-founder and CEO Elon Musk -- have been calling for such devices to be banned.
The Invisible Workingman
A recent Wired article said that manufacturing jobs in the U.S. peaked in 1979 and have been declining ever since. Meanwhile, manufacturing output in America has steadily risen over the same period.
Technology was responsible for taking those manufacturing jobs away -- and artificial intelligence could wipe out even more. That's because AI won't simply be applied to new machinery, but also to transportation, healthcare, customer service, etc. The implementation of AI into new sectors could eventually deal a devastating blow to the middle class, according to a recent PwC report. The financial consulting firm said earlier this year that 38% of U.S. jobs are vulnerable to being replaced by artificial intelligence systems over the next 15 years.
There's plenty of debate on this issue, of course, with some saying that new jobs will take the place of the old ones. But how quickly that happens (or if it does at all) is still a big question, and how unskilled and middle-class workers will fare in an AI-driven world is yet to be determined.
In a column for The Guardian last year, world-famous physicist Stephen Hawking wrote, "The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining."
Artificial intelligence doesn't have to be a complete horror story, of course. Companies are using artificial intelligence to create smarter and safer cars, and scientists are using AI to discover new drugs and find ways to cure brain diseases. But it's also clear that we should heed the warnings of some of the world's brightest minds and tread carefully with AI -- or risk some frightening consequences.
John Mackey, CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Chris Neiger has no position in any of the stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Amazon, and Tesla. The Motley Fool has a disclosure policy.