Artificial intelligence (AI) has become a hot topic in financial and investing media. From explaining how AI will disrupt industries, to warning of another AI winter, to Elon Musk's doomsday scenarios, we spend a lot of time predicting what the future may hold for computers that make their own algorithmic decisions.
As investors, we're also very interested in finding the winning companies from this AI movement. NVIDIA has been a popular picks-and-shovels play -- its graphics processing units train the neural networks that serve as AI's brain. Biogen -- perhaps a less obvious but longer-term beneficiary -- could use AI to design drugs that treat serious and complex diseases like Alzheimer's.
At the recent technology-focused Collision Conference, I discussed the topic with one of the field's most prominent experts: Babak Hodjat. He developed the original natural-language user interface that went on to power Apple's (NASDAQ:AAPL) Siri voice assistant. He has now co-founded Sentient Technologies to use distributed, evolutionary AI to develop stock market trading strategies and to optimize e-commerce conversion rates.
Babak has a firsthand look at everything happening in AI, and in this interview, he shares his thoughts about what we should expect during the next decade, why we should curb our enthusiasm for artificial general intelligence (AGI), and what role regulation should play in AI's future.
A full transcript follows the video.
Simon Erickson: This is all very interesting stuff, Babak. I'm going to change gears on you now. You are the expert here in artificial intelligence. We've talked about everything up to this point today. I'm now going to ask you to break out your crystal ball and look forward to the future of AI.
My first question on that note: with Sentient Technologies, you're doing a lot of cool things right now. What do you really hope to accomplish, either directly with your company or indirectly by influencing the whole field of AI over the next decade?
Babak Hodjat: Obviously, digital marketing alone is a huge undertaking. We talked about website optimization and mobile experience optimization, e-commerce, but there's a whole funnel that happens in digital marketing: reaching the user, the ads themselves, going through the experience, the conversion, email remarketing back to them, deciding what you want to show, inventory management, the design of the products. These all come together. I think AI can be a centerpiece orchestrating this. We have our roadmap cut out for us as far as building out this product suite within digital marketing that can ultimately disrupt the field.
That's one path. Ultimately, though, I believe this technology has very ubiquitous applicability across the board, especially if it's industrialized to the point where I think it can be. It's still a lot of work to get there, but I think we can democratize AI. What do I mean by that? Today, if you want to build an AI model, you have to get a whole bunch of Ph.D.s together to come up with these elaborate designs. Unfortunately, we do not have a one-size-fits-all solution. We do not have an AGI. You have to get these Ph.D.s to come up with an elaborate design for a deep network architecture and then train it, using a lot of compute capacity and lots and lots of data, to get these relatively static models.
I think by virtue of breakthroughs in evolutionary computation and multi-task learning, you can actually have a system whereby anyone who has a problem, and a generic way of tackling it that may not yet be quite acceptable, can upload that into the platform, evolve it against augmented data sets, and get to a level of performance that makes it acceptable to deploy.
I think that ultimately one company obviously cannot hit everything ... we have to remain very, very focused ... but I think the platform has the potential to get to that breakout point where even developers can make use of it and AI-enable their particular application areas.
Erickson: You said democratizing AI. You also said AGI -- artificial general intelligence -- which is very different from the discrete AI we're seeing more of in verticals today.
Can you talk a little bit about where you think we're heading with AGI? Another topic we just looked at is quantum computing, which is now more and more available -- for the right price -- to do very complex computation. A completely different way of processing. Where do you see this all playing out? What role does quantum have, and where do we stand on AGI? Is it still 50 years in the future, but we're going to get there?
Hodjat: Yeah, I'm very skeptical. AGI is ill-defined, to start with. I think it will be a very subjective matter. At a certain point, we might decide that the level of intelligence we have captured, for example, in a conversational system that back-ends into data and so forth on the internet, is AGI. That's going to be a decision. But AGI as I understand it -- a system that manifests all facets of human intelligence and beyond, to the point where it can be considered human-level as far as its intelligence is concerned -- I think that's very, very difficult, if not impossible, to get to. The reason is that our technologies are really good at mimicking, and even exceeding, the individual facets of intelligence that we observe in humans or in nature. But putting them all together in one system today is very, very difficult.
So that integration piece is elusive and very, very difficult to build. There are tracks of research today that take it from one extreme or the other. One extreme says: build a system that learns very, very well, then start teaching it, and it will get there. I think that's misplaced. Humans don't learn that way, so why should we expect our machines to? The other extreme says: just add more and more common sense and knowledge into the system, and ultimately it will become AGI. That's misplaced as well. Humans don't do that. There's a flexibility and robustness that's missing here.
So we have to build something in between that has the best of both worlds. It's a certain, specific configuration that we have to land on. Otherwise, whatever we're building is going to disappoint us, and it's not going to look human. The reason I think we won't be able to get there is that this specific configuration of intelligence in us humans evolved over many millions of years to be exactly the way it is today, for us -- in the environments we've lived in, not just over the past 150 years, but many, many years before that. Capturing that specific configuration and architecture in our AI systems is next to impossible. The complexity of it is going to be beyond us -- very, very difficult to achieve.
That's my humble opinion on AGI. Of course, everybody has an opinion there, which I respect.
On quantum computing: quantum computing is very interesting. I've looked at it from a theory perspective, and machine learning is one of its many applications that it stands to really disrupt. It could suddenly speed up machine learning -- the way we can actually search for and retrieve data -- by orders of magnitude. That could have a huge impact on the quality, speed, and performance of our AI systems. That's in theory. In theory, we know that if we had a quantum computer above a certain scale, then we could get this functionality.
The problem is we do not have that yet. The problem is not whether or not we can build qubits and a quantum computer. I think it's established that we can. It has been built in the past.
In the lab, we can build systems with several qubits. We have seen superposition, and we've been able to verify some of these algorithms on a very, very small scale. It takes a lot of energy, even for those small-scale problems, to run. Currently, we're still not at a point where we can build large enough arrays of qubits that sustain superposition to the point where it's applicable to these types of machine learning systems, and cost-effective.
It's still in the lab. I'm optimistic that we will get there, but it's still in the research phase -- not at a point, at least that I've seen, where I can go do a timeshare on a quantum machine and get benefits from it. Not there yet. Will we get there in the next few years, or the next 20 years? I don't know. I think we still need a few breakthroughs.
Erickson: We are seeing progress, but we should be realistic in our expectations?
Hodjat: I think so. Yeah.
Erickson: Then, the last question I have for you, Babak. The topic that Elon has been tweeting quite a bit about is regulation. What do you feel the role of regulation should be for the progress we're making today?
Hodjat: Regulation is important. Oftentimes, when we talk about regulation pertaining to AI, we're actually talking about regulation of technology as a whole, not specifically AI. I think the first order of business, in my opinion, is: don't make this specific to AI. If you're setting out regulation, it has to be technology-blind.
The second is, I think we have to think long and hard about the current state of technology and its risks, and come up with regulations based on that, versus the perceived risk of what AI or technology might become. It's nowhere near that today, in my humble opinion, and regulations set against that perceived risk are just going to be misplaced.
One example is explainability. There's been regulation in Europe around explainability already. There's been regulation around explainability for insurance companies, for example, for quite a while now. Essentially, it says that if technology and/or AI is being used in areas that pertain to human livelihood, then the decisions of that system have to be explained. But what does explainable mean? If I show you a very, very complex set of rules that are readable but not comprehensible -- is that explainable? Should it be? If I can validate it on data, if I can actually show the logic and what's happening leading up to the decision the system has made, and be able to simulate and replicate that, wouldn't that be enough? I think we have to, at some point, decide whether or not we trust technology to make decisions.
I think we're past that point already. Our credit scores are already being set by technology -- not AI, but technology. We don't understand how that works. It might be an explainable, rule-based system, but clearly -- I don't know about you -- I don't understand how it's being set.
I think we have to be very wise in how we set these regulations.
Erickson: And grounded in the science and the technology.
Hodjat: Exactly right.
Erickson: Babak Hodjat, again: the CEO and co-founder of Sentient Technologies, and an expert in artificial intelligence. Fascinating stuff. Thank you, sir, for your time today.
Hodjat: My pleasure. Thank you very much.