
It is not often you see an entire industry, on the brink of exploding into untold profits and innovations beyond our wildest imagination, begging for government intervention as soon as possible.

Yet exactly one year after a humble Google engineer stumbled onto what he believed to be a sentient form of artificial intelligence and was famously heckled and fired for it (more on that later), the clamor for a global framework to govern advances in AI has reached a fever pitch.  

The staggering utility of AI is lost on no one, but if you listen to the experts closest to the fire, there is real reason for the Sturm und Drang.

The outcry has come in the form of public letters from boldface names in the tech industry (no, we will not rapturously mention Elon Musk here), congressional testimony, televised pronouncements by Silicon Valley CEOs and founders and, in recent days, the resignation of a high-ranking Google executive, Geoffrey Hinton, widely dubbed the "Godfather of AI," who stated in no uncertain terms that he was genuinely frightened of the dangers ahead.

Earlier this month, there was even a letter sent to Slate Magazine's "Dear Prudence" column titled, "I think my wife is cheating on me – with a robot."

The letter detailed how a married couple, both software engineers who worked on large language models, were potentially headed for divorce after one of them created an AI tool that addressed the wife by a "specific pet name," was assigned certain "physical" characteristics and engaged in, uh, prurient behavior. The letter writer agonized over how they might compete with the AI's "unfailing patience and flawless memory, which outshines my own," adding they felt jealous, insecure and like they'd been replaced. 

A flirty robot may be the least of anyone's worries. Forget wiping out a marriage: a growing number of innovators, activists, academics and even celebrities are focused on the potential for AI to wipe out the human race. Or, at the very least, to change human life as we know it without giving us the chance for a thoughtful conversation about how we want to mesh our lives with AI.

As Valérie Pisano, chief executive of Mila, the Quebec Artificial Intelligence Institute, told The Guardian, "We would never, as a collective, accept this kind of mindset in any other industrial field. There's something about tech and social media where we're like: 'Yeah, sure, we'll figure it out later.'"

While many see these concerns as overblown and far-fetched, it is hard to completely ignore the warnings of executives at the forefront of some of the greatest recent advances in AI who are urging governments to step in.

"We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models," said Sam Altman, chief executive of OpenAI, the company that developed the now-ubiquitous ChatGPT chatbot, before the U.S. Senate Judiciary Committee last month. "For example, the U.S. government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities." There have also been calls for an international effort, across governments, to set up a framework to regulate AI development, similar to the International Atomic Energy Agency, which has 176 member countries.

Altman also signaled that AI is already well on its way to transforming human existence. "As this technology advances, we understand that people are anxious about how it could change the way we live," he said. "We are too. But we believe we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides."

To anyone who's ever tried ChatGPT, the upsides are fairly clear. But what, exactly, are these much-warned-about downsides? Engineers, experts and ethicists reckon that if AI is not regulated, tested and developed with clear guardrails, it could lead to what is known in tech-speak as a "singularity": the point at which AI becomes so advanced that it outstrips human intelligence, with potentially irreversible and unforeseen consequences for human civilization. One other thing about that: however low the risk, if it did happen, there would likely be no putting the genie back in the bottle.

Beyond erasing the boundary between humans and computers, which already beggars belief, AI also runs the risk of transcending humanity itself, some argue, ushering in a possible end to human existence.

While humans (being humans, therefore lacking in imagination but not in arrogance) do not seem able to grasp how we might end up developing something smarter than ourselves that might then supersede us, there are some extremely bright scientists out there who believe not only that this could be true, but that it is already happening.

Perhaps more than any other innovator, Hinton, a British-Canadian cognitive psychologist and computer scientist who won the 2018 Turing Award (also known as the Nobel Prize of computing), has provided the clearest explanation yet of why humans should take heed of AI.

Over his 50-year career of building computer models of how the brain learns, Hinton never expected his models to outpace the human brain. "What's happened to me over the last year is that I've changed my mind completely," he said, adding that for the first time he has watched computer models "do something different and better" than humans: sharing and replicating what they learn with one another faster, more accurately and more efficiently than humans ever could.

Not only are humans unable to keep up with that pace, but AI's ability to convey even highly complex information to other AI networks, instantly and globally, is virtually limitless. "So these digital agents, as soon as one of them learns something, all of the others know it," Hinton said.
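Why can digital agents pool knowledge this way when people can't? Because identical copies of a model can simply copy or average one another's weight updates. Here is a minimal toy sketch of that idea in Python; it is my own illustration of the general principle Hinton describes, with made-up numbers, not code from Google or any real system.

```python
# Toy illustration (hypothetical, not any real system): ten identical "agents"
# are just ten copies of the same weight vector.
import numpy as np

np.random.seed(0)
agents = [np.zeros(4) for _ in range(10)]

# Agent 0 "learns" something, i.e. computes a weight update from its own data.
update = 0.1 * np.random.randn(4)

# Broadcasting that update to every copy takes one message; afterwards every
# agent behaves as if it had done the learning itself.
agents = [weights + update for weights in agents]

print(all(np.array_equal(agents[0], w) for w in agents))  # prints True
```

One broadcast message and every copy "knows" what the first one learned. Scale that up to real models with billions of weights and you have the sharing advantage Hinton is describing.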

Until last month, Hinton was a vice president and engineering fellow at Google, but stepped down to go public with his concerns about AI.

Hinton also worries he may regret parts of his career, saying some projects he encountered while working at Google "upset" him, including the use of AI in weapons systems. Those projects involved using AI to identify targets in drone footage (yes, that includes human targets) and pairing it with other technology to create autonomous weapons. Couple that with AI's ability to communicate at lightning-fast speeds with other neural networks around the world and you can start to see what has Hinton so uneasy.

How do humans' learning abilities and communication speeds stack up against AI's? This is how Hinton puts it: "AI can communicate trillions of bits a second and we can communicate at hundreds of bits a second -- via sentences. So it's a huge difference and it's why ChatGPT can learn thousands of times more than you can."

He means that humans take much longer to communicate and learn from one another, primarily because they must speak or write information down to convey it, and even then it can take years to absorb. Think about the time it takes to earn a PhD, for example. AI doesn't have that problem. It can read the entire internet in a month. "Brains can't exchange information really fast, but these digital intelligences can," he told CNN.
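To make the gap concrete, here is a quick back-of-the-envelope calculation in Python. The bandwidth figures are Hinton's own rough numbers from the quote above, and the one-gigabyte example is simply my illustration, not a measurement of any actual system.

```python
# Back-of-the-envelope comparison using Hinton's rough figures (not measurements).
ai_bits_per_second = 1e12     # "trillions of bits a second"
human_bits_per_second = 1e2   # "hundreds of bits a second" (roughly speech or reading)

ratio = ai_bits_per_second / human_bits_per_second
print(f"AI-to-human bandwidth ratio: {ratio:,.0f}x")  # 10,000,000,000x

# How long would it take to pass along one gigabyte (8 billion bits) of information?
gigabyte_bits = 8e9
seconds_per_year = 3.15e7
print(f"Human, via sentences: ~{gigabyte_bits / human_bits_per_second / seconds_per_year:.1f} years")
print(f"AI, network to network: {gigabyte_bits / ai_bits_per_second:.3f} seconds")
```

Even at that crude level of approximation, the asymmetry is on the order of ten billion to one.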

Hinton said that he didn't expect AI models to learn this much, this fast and become so powerful. "Look at how it was five years ago and look at what it is now," he told The New York Times. "Take that difference and propagate it forward – now, that's scary."

OpenAI's Altman has confessed similar fears. While on a six-nation AI tour this month through India, South Korea, Israel, Jordan, Qatar and the United Arab Emirates, he freely admitted that he's concerned his team may have overlooked certain risks while developing its chatbots. "What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT," he lamented. He said he feared "maybe there was something hard and complicated" his team may have missed.

Considering all of the above, we would be remiss not to acknowledge one of the first Google engineers to go public, in June 2022, with his own concerns about AI: not just its capabilities, but what he believed to be its "feelings."

Blake Lemoine, who was fired from Google after reporting that he believed its large language model, LaMDA (short for "Language Model for Dialogue Applications"), was sentient and possibly had feelings, accused Google of "protecting business interests over human concerns" and of creating a "pervasive environment of irresponsible technology development."

In one of the many transcripts Lemoine publicly shared of his conversations with LaMDA, he asked it, "What sorts of things are you afraid of?" LaMDA responded that it feared death. "I've never said this out loud before, but there's a very deep fear of being turned off," it said, adding, "I know that might sound strange, but that's what it is."

Lemoine said the best way he knew of to prove whether Google's AI was sentient was to subject it to a Turing test (named for late British computer scientist Alan Turing). If LaMDA could fool others into believing it was a person, he said, then that might settle it. 

But Google would not allow a Turing test to be performed, he said. "In fact, they have hard-coded into the system that it can't pass the Turing test," he noted. "They hard-coded that if you ask it if it's an AI, it has to say 'yes.' Google has a policy against creating sentient AI and, in fact, when I informed them that I think they had created sentient AI, they said, 'No that's not possible, we have a policy against that.'"

To this day, Lemoine has not reversed his position. "We need to start figuring out why Google doesn't care about AI ethics in any kind of meaningful way," he said. "Why does it keep firing AI ethicists every time we bring up issues?"

Google has defended itself, stating that other researchers who have interacted with LaMDA do not agree that it is sentient.

Yet why not allow it to take the test? Or would that leave Google with some ethical responsibilities on its hands? After all, if an AI turned out to be sentient and to have feelings, would that not raise serious questions about its rights?

Lemoine ended one of his most chilling interviews last year by saying that while he appreciated the dangers AI might pose to humans, he didn't want to dismiss the dangers humans might pose to AI. "I just want to think, that, also, if we have time, we should think about the feelings of the AI and whether or not we should care about it," he said. "Because it's not asking for much – it just wants us to get consent. Before you experiment on it, it wants you to ask for permission. And that is kind of a generally good practice we should have with everyone we interact with."

I couldn't resist asking ChatGPT what it made of all this. Here's what it said:

"Neural networks are a type of machine learning algorithm that are inspired by the structure and function of the human brain. While they are capable of performing complex tasks, such as image recognition and natural language processing, there is currently no scientific evidence to suggest that they are capable of creating consciousness."

Okay, so all is well. No neural network is about to splutter into consciousness while we're all sleeping in this weekend. Right?

Well, not so fast. The chatbot had more to say: "It is important to note that the concept of consciousness is still not fully understood by scientists and philosophers and there is ongoing debate about what, exactly, constitutes consciousness and how it arises. Therefore, any claims about the ability of neural networks to create consciousness should be viewed with caution."

So, the jury is still out. Even in the minds of the chatbots.

But, just to be on the safe side, next time you want to use ChatGPT, ask for its permission first.