The world of artificial intelligence (AI) is moving at breakneck speed, and top corporate and political leaders are trying to safely guide its evolution. Alphabet (GOOG -1.96%) (GOOGL -1.97%) CEO Sundar Pichai met with European Commissioner for the internal market Thierry Breton on Wednesday to discuss the progression of AI and map out possible actions to help ensure the emerging technology doesn't get out of hand.

In the meeting, Pichai promised that his company would join an "AI pact" and collaborate with other players in the artificial intelligence space on a voluntary basis to shape and abide by guidelines for tech development. But while there's seemingly a lot to like about the prospect of guardrails being put on AI, it's not clear that voluntary pacts or even strict laws will be able to address many of the potential problems that could arise.

Could an AI pact help stave off potential disasters?

While detailed specifics of the rules and guidelines that members of an AI pact would adhere to haven't been made available, Pichai's discussion with other members of the European Commission suggests that privacy standards and fighting misinformation could be key focuses. The rapid progression of artificial intelligence tech over the last year is unprecedented, and continued advances could prove difficult to manage.

Because legislative bodies can be slow to act and laws vary between territories, it could be important for technology leaders to attempt to establish some kind of presiding framework for artificial intelligence initiatives. Taking a proactive approach to the problems and questions raised by incredible leaps forward for artificial intelligence could help minimize the possibility of disastrous outcomes stemming from the tech. The AI pact mentioned by Pichai at the meeting with Breton seems to be a step in this direction. 

But there's a risk that rules surrounding AI might not amount to much. For one, it's possible that voluntary AI guidelines and restrictions will be followed very loosely or only in the most nominal sense. It's also possible that those who don't abide by rules for artificial intelligence development, whether self-imposed or mandated through regulation, could actually gain significant competitive advantages.

Controlling AI could be incredibly difficult

The European Parliament is currently finalizing a new set of rules and restrictions that will govern AI technologies. The EU AI Act isn't officially law yet, but it's seemingly headed in that direction and could be a landmark response that plays a big role in the tech's evolution in Europe. But it might not be enough, and it may have unintended consequences.

The proposed regulations could incentivize companies to develop workarounds, and the speed at which AI tech has been progressing will make it difficult for regulators to keep up. Additionally, there are already signs that influential players in the space could respond negatively. In response to the legislation, OpenAI CEO Sam Altman said that his company might cease to offer its ChatGPT and DALL-E services in the EU if it can't meet the new regulatory requirements.

The EU's governing bodies almost certainly don't want to fall behind in AI, and regulating the tech is a difficult needle to thread. Regulations will be very difficult to enforce globally, and those who opt not to follow guidelines or regional laws could actually be rewarded.

While an AI pact that includes Alphabet and other large tech companies could be beneficial, it's unlikely to be a panacea and will come with its own set of complications. 

Unless provisions for punitive measures were put in place, a company could simply opt out of a potential pact if it didn't agree with new or existing rules. Given the incredible rate at which AI tech has been advancing lately, any organization attempting to regulate the space would likely have to determine what is and isn't allowed, and forge new restrictions, on a continuous basis. In that kind of situation, the potential for a voluntary artificial intelligence pact to fracture could be high.

There's probably no one-size-fits-all solution

It's already proving difficult to get top technology companies and figures on the same page when it comes to how artificial intelligence should be guided. For example, Tesla CEO Elon Musk and Apple co-founder Steve Wozniak called for a temporary pause on AI development in April, but Altman and Microsoft co-founder Bill Gates dismissed the proposal as unfeasible. Governments will likely have to play a leading role in keeping artificial intelligence safe and on the rails, but it won't be easy.

Placing restrictions on commercially available AI products and services on a regional basis may be feasible, but regulating the use of internal tools could be a much more difficult matter. When it comes to restricting the use of artificial intelligence by adversarial governments and independent actors around the world, the challenge of governing the progression of these emerging technologies is daunting.