What happened

Is the AI revolution about to go up in smoke?

In trading Friday, shares of C3.ai (AI 3.02%) suffered a blip, falling by as much as 5.6% before recovering a bit. As of 1:55 p.m. ET, they were down by 2.7%.

Other stocks in the AI space are faring better. ChatGPT-backer Microsoft (MSFT 1.82%) is still up 1.2% (although it was up more earlier in the day), buoyed by an upgrade from UBS, and Nvidia (NVDA 6.18%) is holding onto a 0.8% gain after a price target hike from Truist Bank. Still, rumors are rumbling in Washington, D.C., and investors in AI stocks might want to keep an eye on what's happening to C3.ai as a harbinger of troubles that may lie ahead.

So what

Specifically, as The Wall Street Journal reported Friday morning, the Federal Trade Commission (FTC) has begun investigating ChatGPT -- the first and by far the most popular of the artificial intelligence-powered chatbots -- over its potential to generate false information for users.

The FTC's investigation appears to center on defamation and privacy concerns -- the potential for ChatGPT to "generate statements about real individuals that are false, misleading or disparaging," or to reveal personally identifiable information about individuals without their consent.

And these are not the only concerns.

In a separate Journal article published Thursday, the paper described the proliferation of low-quality ChatGPT-generated content flooding the internet -- AI-generated articles, for example, that have "perfect spelling and grammar, but a completely incoherent story." As the paper pointed out, numerous YouTube influencers are flogging the idea of using ChatGPT to generate content for sale to online media as a source of income, resulting in an "exponentially" increasing number of ChatGPT users doing just that.  

Now what

Why is this a problem? Looked at one way, it actually sounds like good news for ChatGPT -- and for companies like C3.ai that help clients make use of it, for Microsoft, which owns a stake in ChatGPT owner OpenAI and partners with it, and for Nvidia, which makes the powerful computer chips that enable AI applications to function. The more popular ChatGPT becomes, and the more often and in more new ways it is used, the more money for those AI companies, right?

Well, yes -- to a point. The problem is that errors and misinformation in AI-generated content are growing in volume and, as a result, making up an increasingly large proportion of content on the internet ... which ChatGPT and other AIs scrape to generate additional new content ... which could perversely lead to even more errors and misinformation generated by AI chatbots in a high-tech version of the "telephone game."

There's even a scientific name for this phenomenon: model collapse. As The Wall Street Journal explains: "Just as repeatedly scanning and printing the same photo will eventually reduce its detail, model collapse happens when large learning models become less useful as they digest data they have created."
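The mechanism behind that photocopy analogy can be seen in a toy simulation. This is a hypothetical sketch, not how any real chatbot is trained: here the "model" is just a simple statistical fit (a bell curve) to data, and each generation it is refit on samples drawn from its own previous output. Because each refit works from a limited, noisy sample of model-made data, the spread of the original data steadily erodes -- the statistical analogue of detail lost with each rescan of the photo.

```python
# Toy illustration of "model collapse" (hypothetical sketch, not a real LLM):
# a model repeatedly refit on its own generated output loses information.
# Here the "model" is a Gaussian (mean + spread) fit to data; each generation
# we sample from the current fit, then refit on those synthetic samples.
import random
import statistics

random.seed(42)  # fixed seed so the run is reproducible

mu, sigma = 0.0, 1.0            # generation 0: the "real" data distribution
n_samples, n_generations = 20, 300

for gen in range(n_generations):
    # Generate synthetic "content" from the current model...
    synthetic = [random.gauss(mu, sigma) for _ in range(n_samples)]
    # ...then refit the model on its own output.
    mu = statistics.fmean(synthetic)
    sigma = statistics.pstdev(synthetic)

# The spread collapses toward zero: the model "forgets" the variety
# that was present in the original data.
print(f"spread after {n_generations} generations: {sigma:.6g}")
```

After a few hundred generations the fitted spread has shrunk far below the original value of 1.0 -- the model ends up confidently reproducing an ever-narrower slice of what it started with, which is the worry researchers raise about chatbots retraining on a web full of chatbot output.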

Mind you, at this early stage of the game, model collapse is more of a warning than an inevitability. The fact that the folks developing AI are aware of the problem means they're probably already working to solve it, too. And even if AI becomes less useful for purposes such as generating reliable data and dependable content, the technology might still have many other beneficial uses -- automating routine tasks, for example, filling out forms with verified data, and so on, all of which should generate growth.

Still, model collapse is a worry worth keeping an eye on. It could have a measurable effect on how profitable AI-related investments such as C3.ai, Microsoft, and Nvidia will become.