In this week's Rule Breaker Investing podcast, Motley Fool co-founder David Gardner has invited a special guest: Robin Hanson, a George Mason University economics professor and research associate at Oxford's Future of Humanity Institute. Hanson has master's degrees in physics and philosophy and a Ph.D. in social science, and he previously worked on A.I. at Lockheed Martin and NASA. His recently published book is The Age of Em: Work, Love and Life when Robots Rule the Earth.
Hanson joins us this week to discuss a broad topic: the future. His key prediction is that digital brain emulations will be able to do many of the things humans do today. It's not so far-fetched, and it leads to a concern that has been discussed by such high-profile intellects as Elon Musk, Stephen Hawking, and Bill Gates: that these near-sentient machines will one day decide to wipe out humanity. Could we simply program them not to, a la Isaac Asimov's Laws of Robotics?
This video was recorded on Sept. 14, 2016.
How do you react to people who today (Elon Musk and a few others) say things like we need to make sure that we don't program them to destroy us all? And a lot of the Skynet worries and questions? What is your reaction as the author of The Age of Em?
Well, ems can't be programmed. If you really hate the idea that your descendants would be free and able to choose values and attitudes that are different from yours, you won't like this world, because these descendants have that choice. Of course, that's the freedom you've had relative to your ancestors.
But some people say, "Yeah, but we don't want to tolerate that for our descendants." There are many people who say, "We must figure out a way to make sure our descendants can't have values and attitudes that differ from ours, because otherwise those values could randomly drift. And who knows how far away they could go, and that would be terrible."
The Age of Em may only last a year or two, and then something else may happen, and a plausible thing that might happen next is that we achieve artificial intelligence through other means. That is the way we've been doing it for the last half-century -- slowly writing software. It's possible that we will continue in that vein, and eventually ordinary software will replace ems and be better than ems. I don't know. At that point, you may worry about that other kind of software and whether it can drift away.
But honestly, I think people are too quick to have policy recommendations, and to have evaluations, when they first need to have the foggiest idea of what might happen if you do nothing. And therefore, I've written this book mainly in the mode of telling you what might happen if you do little to stop it. It's a positive analysis of the most likely outcomes. It's not my job to make you love it or hate it. You may want to change it, but first know what is likely to happen.