
Google wants to make its AI more accurate without gobbling up computing power. 

The company filed a patent application for a diffusion model with "improved accuracy and reduced consumption of computational resources." As the title implies, Google's system aims to train diffusion models, a type of machine learning model commonly used for image generation and restoration, to paint a clearer picture more efficiently.

Google describes a typical diffusion model as consisting of a "noising model" and a "denoising model." If you feed this system a photo, the noising model essentially creates blurry pictures (what Google calls "intermediate data") from the non-blurry one used as input. Then the denoising model reconstructs that image. This process helps the model learn how to deal with randomness and perform more accurately.
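For the technically curious, here's a minimal sketch of the standard forward-noising step that diffusion training generally relies on. The function and variable names (`noise_image`, `alpha_bar`) are illustrative assumptions, not taken from Google's filing, and PyTorch is used purely for illustration:

```python
import torch

def noise_image(x0, alpha_bar):
    """Forward 'noising' step: blend a clean image with Gaussian noise.

    alpha_bar in (0, 1) sets how much of the original signal survives;
    the blurry output is the 'intermediate data' a denoiser learns to invert.
    """
    eps = torch.randn_like(x0)  # random Gaussian noise
    xt = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * eps
    return xt, eps  # eps doubles as the denoiser's training target

# Toy usage: corrupt a fake 3-channel 32x32 "photo" at a mid-level noise amount.
x0 = torch.rand(1, 3, 32, 32)
xt, eps = noise_image(x0, torch.tensor(0.5))
```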

So where does the power-saving come in? This tech relies on something called a "learned noise schedule," which essentially learns a pattern for how to add noise to data during training. That creates less variance when testing the model, with the end result being faster optimization and training with "less consumption of computational resources such as memory usage, processor usage, etc.," Google noted.
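As a rough illustration of what "learned" could mean here, the sketch below makes the endpoints of a simple linear noise schedule trainable parameters, so gradient descent can adjust the noising pattern itself. This is a hypothetical construction, not the patent's actual design:

```python
import torch
import torch.nn as nn

class LearnedNoiseSchedule(nn.Module):
    """Maps a time t in [0, 1] to a signal fraction via trainable endpoints.

    gamma(t) ramps linearly between two learned values and is squashed
    into (0, 1); training nudges the endpoints toward whatever noising
    pattern makes the loss easiest to optimize.
    """
    def __init__(self):
        super().__init__()
        self.gamma0 = nn.Parameter(torch.tensor(-10.0))  # noise level at t=0
        self.gamma1 = nn.Parameter(torch.tensor(10.0))   # noise level at t=1

    def forward(self, t):
        gamma = self.gamma0 + (self.gamma1 - self.gamma0) * t
        return torch.sigmoid(-gamma)  # alpha_bar: ~1 (clean) down to ~0 (pure noise)
```

A real implementation would also constrain the schedule to stay monotonic as it trains; that detail is omitted here to keep the sketch short.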

The system also uses a "continuous-time loss," a way to smoothly optimize the parameters of the machine learning model over time. This allows training to happen in fewer steps, which also cuts resource usage.
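Below is a sketch of what a continuous-time loss can look like in practice: the timestep is drawn uniformly from [0, 1] rather than stepped through a fixed grid, so each update covers the whole noising trajectory in expectation. It reuses the `noise_image` helper and schedule from the sketches above; the epsilon-prediction objective is a common choice in the diffusion literature, assumed for illustration, since the article doesn't specify the patent's exact loss:

```python
def continuous_time_loss(model, schedule, x0):
    """Monte Carlo estimate of a continuous-time diffusion loss.

    Rather than looping over a fixed grid of discrete timesteps,
    draw t uniformly from [0, 1]; averaged over many batches, this
    covers the whole trajectory with far fewer steps per update.
    """
    t = torch.rand(x0.shape[0], 1, 1, 1)   # one random time per example
    alpha_bar = schedule(t)                # learned schedule from above
    xt, eps = noise_image(x0, alpha_bar)   # forward-noise the batch
    eps_hat = model(xt, t)                 # denoiser predicts the noise
    return ((eps_hat - eps) ** 2).mean()   # epsilon-prediction MSE
```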

Photo via the U.S. Patent and Trademark Office.

While this patent is a bit in the weeds, it highlights Google's continuous work to stay an AI frontrunner. The company files patent applications constantly to try to get a grip on AI innovations, ranging from AI-assisted coding tools to a model that detects spammy playlists on YouTube.

Publicly, the company has been quite enthusiastic about weaving AI into almost every facet of its organization. It's been quickly integrating the tech throughout its Google Workspace offerings, merged its two AI units into a supergroup in April, and is keeping up on the investment side, plunging money into high-profile start-ups like Anthropic and Hugging Face. And this week at the Google Next conference, the company announced tons of new features, public availability of its AI chip, and a partnership with chip giant Nvidia.

As Google plugs away at making AI its north star, finding ways to ensure accuracy will certainly help its cause.

This patent also highlights a problem for AI developers: the sheer amount of power and other computational resources that training AI eats up. Training just one AI model can reportedly consume more energy than 100 homes use in a year.

For Google specifically, the company's researchers found that AI accounts for 10% to 15% of its annual energy usage, or roughly 2.3 terawatt-hours, according to Bloomberg. At the scale Google is building AI, even small cuts to the energy cost per model could make a big dent in its overall consumption.