After the release of the generative artificial intelligence chatbot ChatGPT, it was basically a given that most large tech companies would look to integrate the technology into their businesses. As one of the largest consumer tech companies in the world, Apple is certainly no exception.

Apple CEO Tim Cook said he recently tried ChatGPT, thinks it holds great potential, and confirmed that the company is looking at the technology closely. But he also has his concerns, as do many prominent figures in the tech industry.

Recently, a number of prominent AI scientists and other notable figures signed a one-sentence statement that said: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." While I didn't see Cook on that list, here are three of his biggest concerns about ChatGPT.


1. Bias

One thing Cook and others are concerned about is bias. While Cook didn't say it outright, a big concern among ChatGPT critics has been political bias: the chatbot answering questions and generating content that adheres to or favors the political beliefs of the people who created the technology.

For instance, many have claimed that ChatGPT tends to lean left on the political spectrum. The Brookings Institution conducted several experiments and found this to be largely true, although it also noted that ChatGPT could give different answers to the same questions, and that answers can vary depending on how questions are prompted.

2. Misinformation

With the internet now such a big part of everyday life, misinformation and fake news have become major problems for society to grapple with. Things get murky when people and companies try to curb misinformation without restricting free speech. And ChatGPT creates a whole new avenue for misinformation.

NewsGuard, an organization that tracks misinformation online and grades news sites on their credibility, found that generative AI tools are being used to create fake news websites that can pump out hundreds of articles a day. Furthermore, a NewsGuard analysis found that a January version of ChatGPT proved ineffective at countering false claims. NewsGuard "tempted" ChatGPT with 100 false narratives, and the chatbot delivered incorrect claims about important news topics 80% of the time.

3. Regulation not being able to keep up

Generative AI technology like ChatGPT is obviously very powerful, and it improves quickly because it gets more proficient as it collects more data. Cook thinks this could be a problem for regulation, which in many cases can take years to implement.

"If you look down the road, then it's so powerful that companies have to employ their own ethical decisions," said Cook. "Regulation will have a difficult time staying even with the progress on this because it's moving so quickly. So, I think it's incumbent on companies as well to regulate themselves."

In April, the National Telecommunications and Information Administration (NTIA), a division of the U.S. Commerce Department focusing on telecommunications and information policy, solicited input on how regulation could be used to prove "that AI systems are legal, effective, ethical, safe, and otherwise trustworthy."

In addition, both Democratic and Republican lawmakers seem open to the idea of creating a new agency to regulate AI and protect consumers from the potential harms the technology can cause.