Companies are only just beginning to understand the power of artificial intelligence (AI). It's a driver of efficiency, and it's quickly becoming a major value creator.
But it's not a risk-free technology, especially given the current lack of regulation, which leaves developers free to push the boundaries of what's possible.
The U.S. Federal Trade Commission (FTC) opened an investigation into OpenAI just last week over the way it handles data and over concerns that its ChatGPT chatbot is potentially defaming some consumers in its responses. That's right: The AI, under its own power, might be breaking the law.
Safety is now front-of-mind for the U.S. government, especially because responsible AI developers have been asking for a clear set of rules and guidelines to protect the industry from losing control of the technology. A first step in that process was achieved last Friday, when the following seven companies came together in Washington, D.C., to sign an AI safety pact:
- Microsoft (MSFT)
- Amazon (AMZN)
- Facebook parent Meta Platforms (META)
- Google parent Alphabet (GOOGL) (GOOG)
- OpenAI
- Anthropic
- Inflection
The agreement aims to achieve three main goals
Each of the seven companies is working on large language models in some capacity -- a type of AI that typically powers online chatbots people can use to generate text, images, video, or even computer code. Even though this safety pact was entered into voluntarily by the parties, it's the U.S. government's first foray into setting clear guidelines to rein in the potential risks associated with this type of AI.
It covers three core areas:
- Ensuring products are safe before introducing them to the public. This will involve the seven companies appointing independent experts to test and analyze product updates before wider release. The companies will also share data with the government, civil society, and academia. This data revolves around attempts by malicious actors to breach safety features, so the industry can collaborate on potential threats.
- Building systems that put security first. Further to the above, the companies will invest in cybersecurity to protect unreleased model weights. These are the parameters a model learns during training; for a given input, they determine the output a chatbot will produce (see the brief sketch after this list). The companies will also allow third parties to discover and report vulnerabilities. This will ensure they can't hide or sugarcoat the extent of a breach or attack.
- Earning the public's trust. The companies have agreed to inform end users when an item of content was created by AI; for example, images and videos might be watermarked. Developers will also report biases in their models to contextualize their outputs, and they will fund research into the potential social harms AI could bring. Lastly, the companies have agreed to put their models to use for social good, such as healthcare initiatives or fighting climate change.
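To make the idea of model weights a bit more concrete, here's a minimal, hypothetical sketch in Python (a toy linear layer, not any of these companies' actual models). The weights are just numbers, and anyone who obtains them can reproduce the model's outputs exactly, which is why protecting unreleased weights is a cybersecurity priority.

```python
import numpy as np

# Toy illustration only: a single linear layer standing in for a model's
# trained weights. W and b are the parameters learned during training;
# for any given input x, they fully determine the model's output.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # hypothetical trained weight matrix
b = rng.normal(size=3)        # hypothetical trained bias vector

def predict(x: np.ndarray) -> np.ndarray:
    """Map an input vector to an output vector using the fixed weights."""
    return W @ x + b

x = np.array([1.0, 0.5, -0.2, 0.3])
print(predict(x))  # anyone who holds W and b can reproduce this output
```

Real large language models work the same way in principle, just with billions of such parameters.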
These guidelines are particularly important for OpenAI, Anthropic, and Inflection because they operate as private companies. Microsoft, Meta, Amazon, and Alphabet are publicly listed, so they already abide by more-stringent reporting requirements, and their activities are continuously observed by thousands of investors.
Here's what the AI safety pact could mean for these companies
AI innovation has moved at a breakneck pace in 2023, partly because of a lack of regulation. That's not to say we wouldn't have ended up in the same place even if a firm set of rules existed, but it might have taken longer. Why? Because red tape requires companies to consult lawyers and advisors to ensure they're operating within the rules, and that can drag out development timelines.
That might be the environment the industry is heading toward following this agreement. It won't stop those leading AI companies from advancing the technology, but considering they'll have to consult independent experts before releasing new products, it could take far longer for them to reach consumers.
Plus, sharing data with the academic community, for example, might come with unexpected consequences because its views on safety might differ vastly from the developers' views -- and when the government takes advice on shaping future legislation, it's likely to err on the side of caution.
The recent FTC investigation launched against OpenAI should serve as a shot across the bow for every other AI company, especially Microsoft, which is a major investor in OpenAI. The agency is clearly concerned about the potential harm AI could bring to consumers, and it wants to know what measures are in place to ensure it is minimized.
It's too soon to speculate on the result of this probe, but developers should take it as a warning to fine-tune their AI models to weed out responses that could damage the reputation of any individual. Plus, they should invest heavily in data security, which is a focal point for the FTC.
The AI safety pact should go a long way toward ensuring companies don't run afoul of existing consumer laws (and, therefore, the FTC). But the industry should prepare for a new landscape where its new products are rolled out at a much slower pace. In the long term, this will almost certainly benefit developers and consumers alike.