In an increasingly digitized world, the artificial intelligence (AI) boom is only getting started. But could the risks of artificial intelligence outweigh the potential benefits these technologies might lend to society in the years ahead? In this segment of Backstage Pass, recorded on Dec. 14, Fool contributors Asit Sharma, Rachel Warren, and Demitri Kalogeropoulos discuss.
Asit Sharma: We had two questions that we were going to debate. Well, I'll have to choose one. Let me do the virtual coin toss really quick here. We're going with B, artificial intelligence has the potential to be more harmful than beneficial to society. Rachel Warren, agree or disagree?
Rachel Warren: Gosh. [laughs] This may seem like a bit of a cop out, but I don't really feel like it's a yes or no answer. I think that technology in and of itself is an amoral construct.
I think it can be used for good, I think it can be used for bad. You think of all the benefits that artificial intelligence is providing to the way that companies run, how software runs, how companies monetize their products. You think of companies that are using AI to power more democratized insurance algorithms, for example.
I think artificial intelligence is going to continue to provide both benefits and detriments to society. You think of all the positives of artificial intelligence.
But then you look at how it can be used, for example, by law enforcement agencies to find criminals. That can be a really great thing. It's empowering these law enforcement agencies to have a more efficient way of tracking down criminals, keeping people safer.
But at the same time, how fair are these algorithms? Are these algorithms judging people equally, or are they including certain factors that single out certain individuals in ways that may or may not be fair and may, in fact, result in less justice?
That's just an example. For me, I think personally, artificial intelligence can do great things, I think it can be used as well for very harmful things, and I think it ultimately is something that people need to view with caution and not just automatically view it as good or evil. That's just my quick take. [laughs]
Sharma: Love it. Very well said in a short amount of time. Demitri, reaction to what Rachel said.
Demitri Kalogeropoulos: Asit, if it scares Elon Musk, it should scare me. [laughs].
Warren: True. [laughs]
Kalogeropoulos: I would just say, yeah, I agree with a lot of what Rachel said. I think it's interesting. It clearly has the potential to be harmful in some ways. I was just thinking about the last couple of weeks, where we're hearing about all these changes at Instagram and Facebook. Rachel mentioned the way these algorithms are working, and we're clearly finding problems. Remember, maybe a couple of years ago, there was an issue with YouTube and where its algorithm was driving users.
The algorithm is there to maximize engagement in all these cases, and it's getting smarter at doing that. It's got all this content to work with, and it's using the millions and billions of us as little testing machines to tweak that. But they've had to make adjustments to these algorithms because they were harmful in a lot of ways, without being programmed that way.
If you did a chart on Facebook ranking engagement against how close content gets to being prohibited, engagement rises as you approach the prohibited line, and would go to infinity if you crossed it. That's just human nature, I guess. Bad news travels faster than good news, and conspiracy theories travel a lot faster than the truth. These are all weaknesses in human psychology, you could say, that algorithms can be ruthless at cashing in on, or monetizing, if you want to put it that way.
That's clearly something I think we need to watch out for. In most cases, thankfully, it seems like we're finding these in time, but I think we have to be really careful and keep watching, because who knows which ones we're not finding, and years later we find out that we were being manipulated in these ways.
Sharma: I love both of those comments. I mean, personally for me, I feel that this is a space that has enormous potential to do good. But without some type of oversight or regulation, we open the doors to really deleterious effects. Palantir is an example of a company that I won't invest in because I don't think that they really care that much about the detriment they can do.
Rachel mentioned the inadvertent effects. Well, this may have been reading between the lines, but it has been shown with some of their technologies: inadvertent racial profiling that comes from the tech they're using to help law enforcement.
Warren: Yes. Like mass surveillance, yes.
Sharma: It's interesting, governments have been a little bit slower to think about the regulation of AI. We can vote with our pocketbooks, we can buy companies that are using AI to good effect, and we can be a little bit of activist shareholders as a society, pointing to how we want companies to behave and the level of seriousness with which we want them to examine what their algorithms are arriving at. I'll stop here so I can give the two of you the last word. We've got about a minute left.
Warren: I agree with what you're saying. I think this is also something to remember as investors, as we look at all of the investment opportunities within the artificial intelligence space. These opportunities are only going to grow. I think if there are aspects of this technology that concern or bother you, it's OK to say, "This looks like a really great business, but I personally don't feel comfortable, ethically speaking, investing in it."
That's OK. There is no shortage of fantastic investment opportunities available within the broader technology space. When you look at this area, there are so many potential benefits, and I agree with what you were saying, there's so much potential here as well. For businesses, for companies, there are obviously a lot of profits to be made, but I think it's something to be wary of as well.
Take what Demitri was saying about Facebook's algorithms. My timeline might look very, very different from my good friend's timeline because I click on a couple of articles, and then my entire feed shifts in a certain direction, and then you go deeper down the rabbit hole.
I think the very nature of how these algorithms work makes them extremely difficult to regulate. With that knowledge, I think it's important to approach this area, and investing in it, with a bit of caution.
Sharma: Demitri, you get the last word and then we'll sign off for the night. [laughs]
Kalogeropoulos: I don't have much to add to that for sure.
Sharma: I know. Rachel is on fire tonight, everything is sounding so persuasive and succinct and eloquent.
Kalogeropoulos: You just nailed it. [laughs] I would just say, yeah, you can look for companies that maybe don't have those kinds of incentives. I like a company like Netflix, for example.
If you're evaluating something like that, comparing a Facebook to a Netflix, Netflix made the decision not to advertise on its service, for example, because it doesn't want to get into a lot of these sticky subjects, whereas Facebook has to monetize.
Facebook is a free service, so it has to find ways to monetize in other ways. That's just another thing to think about when you're comparing these companies.
Sharma: That's a great point, think about the business model. Sometimes, that causes behavior that you don't want to see.