Elon Musk is best known as the billionaire co-founder and CEO of Tesla (NASDAQ:TSLA) and SpaceX. His companies are transforming the transportation industry through long-range electric vehicles with semi-autonomous capabilities, and the space industry with SpaceX's reusable rockets and plans to colonize Mars.
Most recently, with his new Boring Company, he's proposed building a new transportation system that uses underground tunnels for faster car travel and says that his SpaceX rockets could lift suborbital transports that could take passengers anywhere in the world in just one hour.
Needless to say, Musk hardly fits the description of a technophobe.
But when it comes to artificial intelligence, Musk has thrown up the caution flag many times. His most recent warning came just last month, when he suggested that AI could eventually start the next world war. Let's take a look at Musk's AI warnings over the past few years, why he thinks the technology is one of humanity's most dire threats, and what he's doing to tame the AI dragon.
Warning No. 1: Don't summon the demon
Speaking at the MIT Aeronautics and Astronautics department's Centennial Symposium in 2014, Musk dropped this bombshell of a warning:
"With artificial intelligence, we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out," Musk told the group.
Musk had warned on Twitter just a few months earlier that AI could be more dangerous than nuclear weapons, and his comments at the symposium further publicized his beliefs. Musk called for government regulation then and has since ramped up his calls for more oversight of AI technology.
Warning No. 2: This is humanity's greatest existential risk
The billionaire offered another direct warning about rogue artificial intelligence this past summer when he was giving a talk at a meeting of U.S. governors.
"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said.
This warning was of particular note because he was addressing political leaders and suggesting that the government implement more regulation of AI before it gets out of hand. Musk said that he has access to lots of AI technology, and based on what he's seen, "it's the scariest problem."
He also mentioned in that talk that he isn't generally a fan of regulation, but in the case of AI, he thinks that if the government waits for an obvious problem before reacting, it will be too late.
"I think people should be really concerned about it," Musk said. "I keep sounding the alarm bell."
Warning No. 3: The race for AI domination has already begun
Most recently, Musk sent out an ominous tweet in September that read, "It begins," with a link to an article from The Verge about Russian President Vladimir Putin saying that whichever nation leads in AI will be "the ruler of the world."
Musk followed up that tweet by saying that China and Russia were gaining strength in computer science, and that competition among nations for AI superiority would likely be the cause of World War III.
"China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo." -- Elon Musk (@elonmusk), September 4, 2017
Technology blog TechCrunch noted that such a war might not come from nations fighting over artificial intelligence, but instead from a government's AI system deciding to start a war in order to come out on top.
This wasn't Musk's first warning about combining AI and weapons, either. In 2015, he endorsed a letter, along with theoretical physicist and cosmologist Stephen Hawking, that laid out why an autonomous-AI arms race may be on the horizon:
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.
The letter ends with pleas that AI be used to benefit humanity, and for a ban on autonomous weapons.
What the billionaire is doing to prevent an AI catastrophe
Musk isn't content just to shout out warnings about AI's potential problems. He's also actively trying to create AI that's good for the world and humanity. To that end, the prolific entrepreneur has helped fund an artificial intelligence research company called OpenAI.
The company says on its website that, "As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world."
Musk has also made investments in AI companies simply to keep track of the technology's progress. His most famous investment in that vein was probably in the startup DeepMind, which Alphabet purchased back in 2014 for $650 million.
"It gave me more visibility into the rate at which things were improving, and I think they're really improving at an accelerating rate, far faster than people realize," Musk told Maureen Dowd in a Vanity Fair interview earlier this year.
Musk recently backed another AI company, called Neuralink, whose goal is to implant devices into human brains that could eventually help people keep pace with artificial intelligence. The company will first focus on treating brain diseases, but it eventually aims to create systems that let people communicate with and control computers wirelessly, using their thoughts.
It's clear that Musk believes one of the best ways to stop AI technology from spiraling out of control is to help develop some of it and steer it in the right direction. But the burden of keeping AI under control shouldn't be shouldered by private companies and tech billionaires alone -- governments need to get out ahead of this now as well.