After years of advances behind the scenes, some are calling 2023 the year of artificial intelligence (AI) -- and with good reason. The launch of ChatGPT late last year took the world by storm; the chatbot reached 100 million monthly active users by January, the fastest adoption of a consumer app in history.

The excitement soon turned to fervor with word that Microsoft had invested $13 billion in OpenAI, ChatGPT's creator. This set off a mad dash by other companies with AI expertise to release their own next-generation chatbots.

However, these advances come with some stark limitations. Subsequent testing of ChatGPT and Alphabet's Google Bard revealed an inherent problem with these systems: They could (at times) provide answers that are "irrelevant, nonsensical, or factually incorrect," according to The New York Times. Dubbed "hallucinations," these errors cast doubt on what had seemed like groundbreaking advances.

Nvidia (NVDA) believes it could have an answer to the problem.

A robotic hand interacting with a visual AI touchscreen display. Image source: Getty Images.

The truth of the matter

In a blog post published on Tuesday, Jonathan Cohen -- Nvidia's vice president for applied research -- detailed a solution the company devised to mitigate the issue of chatbots veering off into fantasy or outright lies. He said the company has developed NeMo Guardrails, "newly released open-source software" designed to help chatbots "stay on track." 

The software provides developers with a way to integrate rules-based systems that stop chatbots from spouting inaccurate information, ensuring their responses are "accurate, appropriate, on topic, and secure," he wrote. In a briefing given to the press late Monday, Cohen said NeMo Guardrails can "detect and mitigate hallucinations."

"You can write a script that says, if someone talks about this topic, no matter what, respond this way," Cohen said. "You don't have to trust that a language model will follow a prompt or follow your instructions. It's actually hard-coded in the execution logic of the guardrail system what will happen."

The issue stems from how these systems are trained. Chatbots are the interface between users and large language models (LLMs), which are trained on vast amounts of publicly available data scraped from the internet. The problem is that not all of that data is factually accurate; it also contains opinions, biases, and unsubstantiated conclusions. As a result, some chatbot responses can veer well off the beaten path and end up being illogical, offensive, or even downright creepy.

If Nvidia's solution proves effective, this could all be a thing of the past.

A vested interest

Nvidia has a vested interest in helping usher in that future. Broader adoption of AI would no doubt be a boon to the company, which provides the semiconductors that power cloud computing and data centers, as well as the chips used to train and run these complex AI models.

Management estimates that AI enterprise software represents a $150 billion opportunity for Nvidia, while chips and systems could generate another $300 billion. In all, Nvidia believes it is chasing a total addressable market (TAM) of roughly $1 trillion, all the more reason the company wants AI to thrive. But the opportunity ahead extends far beyond just Nvidia.

The next phase of AI development

Cathie Wood, CEO of Ark Investment Management, recently published the firm's Big Ideas 2023 report, and the conclusion it reaches is jaw-dropping. If software developers are able to realize just 10% of the value generated by their creations, advances in AI could generate as much as $14 trillion in additional revenue and $90 trillion in enterprise value, according to the report. 

While AI could hold the key to the next generation of high-tech advances, broad adoption of the technology will certainly depend on the ability of users to get useful, accurate, and truthful responses from these AI models.

Nvidia's technology could help keep future advances on track.