NVIDIA (NVDA 2.33%) announced on Tuesday that, just weeks after its release, the A100 Tensor Core graphics processing unit (GPU) has been adopted by Google Cloud, a division of Alphabet (GOOGL 0.24%) (GOOG 0.33%).

The Accelerator-Optimized VM (A2) family, available on Google Compute Engine, is designed specifically to handle some of the most demanding applications out there, including artificial intelligence (AI) workloads and high-performance computing (HPC). This makes Google the first cloud service provider to offer the new NVIDIA GPUs.

The NVIDIA A100 GPU. Image source: NVIDIA.

AI training in the cloud

For the most demanding workloads, Google Cloud will offer users up to 16 GPUs on a single virtual machine (VM). The cloud provider will also offer the A2 VMs in smaller configurations to match individual users' computing needs. The instances will be available through a private alpha program to start, before opening up to the general public later this year.

In a blog post, NVIDIA said the A100 can also power a broad range of compute-intensive applications in cloud data centers, including "data analytics, scientific computing, genomics, edge video analytics, 5G services, and more."

Based on NVIDIA's new Ampere architecture, the A100 represents the "greatest generational leap" in performance in the company's history, boosting performance for both machine-learning training and inference by 20 times compared with its predecessors. Previous versions of the technology required separate processors for training and inference. The A100 also offers a 10-fold increase in speed versus the previous-generation technology.

Google plans to roll out access to additional instances in the near future, with the NVIDIA A100 coming soon to Google Kubernetes Engine, Cloud AI Platform, and other Google Cloud services.