As artificial intelligence expands in capability and usage, it's important to have safeguards in place. In this episode of "The AI/ML Show" on Motley Fool Live, recorded on March 2, guest Jim Chappell, head of Artificial Intelligence & Advanced Analytics at AVEVA, talks about why it's so vital to have traceability of AI as it evolves, and how blockchain technology might be one answer.

Jim Chappell: What could be scary, if it's not managed properly, is that AI is going to evolve from task-based, like it is today, to objective-driven, and those objectives are going to get broader and broader in nature. For example: I want you to optimize the efficiency of my plant using any means you have at your disposal from an AI perspective. That's a very broad objective, and the AI could use anything it has. Then, further complicating it, people think of AI as a robot or a cyborg, but it's not; it's connected everywhere. It's on the internet at large, across the world, and it can do something on this continent and that continent all at the same time. That magnifies the good it can do, but it also magnifies the harm if it makes a mistake. Safety especially, and the ability to adversely impact businesses and society, get magnified by these capabilities, so you have to be super careful, and it needs to be reproducible.

It needs to be more transparent, because typically people refer to AI as a black box. Well, it needs to be reproducible: these are the inputs, let's see what the outputs are. As a black box, it's looking at things like: What's the rate of change? What happened previously? Is this getting worse? It gets very complicated; you can't just feed snapshot inputs into it.

What's exciting is that things like blockchain could help with the traceability of AI, as well as the regulations we talked about: again, what AI can do and what it can't do. But as it moves toward larger, objective-driven things, it could magnify its good or magnify its bad. So from a design perspective, you have to be conscious of this, and from a safety and monitoring perspective, you need to make sure you have the proper safeguards in place. I think that's going to be a bigger and bigger issue as AI becomes more sophisticated.
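The blockchain-for-traceability idea Chappell mentions can be sketched as a minimal hash chain: every record of a model's inputs and outputs also stores the hash of the previous record, so editing any past entry breaks verification. This is a hypothetical illustration (all field and function names are invented here), not a description of anything AVEVA has built.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    # Deterministic hash of a record's contents (sorted keys for stability).
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_entry(chain: list, inputs: dict, outputs: dict) -> None:
    # Each entry links to the previous one by its hash, blockchain-style.
    prev = chain[-1]["hash"] if chain else "0" * 64
    entry = {"inputs": inputs, "outputs": outputs, "prev_hash": prev}
    entry["hash"] = record_hash(
        {k: entry[k] for k in ("inputs", "outputs", "prev_hash")}
    )
    chain.append(entry)

def verify_chain(chain: list) -> bool:
    # Recompute every hash and link; any edited entry fails verification.
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("inputs", "outputs", "prev_hash")}
        if entry["prev_hash"] != prev or record_hash(body) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Log two hypothetical model decisions for a plant-optimization objective.
chain = []
append_entry(chain, {"sensor_temp": 71.5}, {"setpoint": 68.0})
append_entry(chain, {"sensor_temp": 70.1}, {"setpoint": 68.0})
print(verify_chain(chain))            # True: untampered log
chain[0]["outputs"]["setpoint"] = 99  # simulate after-the-fact tampering
print(verify_chain(chain))            # False: the chain detects the edit
```

A real deployment would distribute the ledger across parties rather than keep one in-memory list, but the tamper-evidence property shown here is the core of the traceability argument.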