At NVIDIA's (NVDA -0.68%) GPU Technology Conference, the company announced a new product known as the Tesla V100, which is expected to ship in the third quarter of this year. The Tesla V100 is a graphics processing unit (GPU) built specifically for the artificial-intelligence and high-performance-computing markets. From a financial perspective, revenue from NVIDIA's Tesla products is counted under the company's "data center" reporting segment. 

NVIDIA's data-center business has been on fire over the past year or so, growing 145% year over year in fiscal 2017, as GPU-accelerated computing has gone from a relatively niche technology used in supercomputers to a viable replacement for traditional central processing unit-based computing in many data-center applications.

NVIDIA's Tesla V100 accelerator.

Image source: NVIDIA.

With the Volta-based V100, NVIDIA seems to be looking to continue its winning streak in the data center. Let's look at what NVIDIA has created.

NVIDIA's next "big" thing -- literally!

The Tesla V100 is based on the company's Volta architecture, which the company describes as "a major new redesign of the [streaming multiprocessor] processor architecture that is at the center of the GPU."

"The new Volta SM is 50% more efficient than the previous-generation Pascal design," NVIDIA claims. 

That efficiency boost is apparent in the device's published specifications. Within the same 300-watt thermal envelope as the prior-generation Pascal-based P100, NVIDIA claims 15 teraflops of single-precision and 7.5 teraflops of double-precision computing power -- roughly 40% more grunt than the P100, which offered 10.6 and 5.3 teraflops of single- and double-precision computing power, respectively. (A teraflop is a trillion floating-point operations per second.)
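A quick back-of-the-envelope check of those published figures -- all numbers taken from the specs above -- shows the generation-over-generation gain works out the same for both precisions:

```python
# Peak-throughput figures as published (in teraflops).
V100_SINGLE = 15.0   # Tesla V100, single precision (FP32)
V100_DOUBLE = 7.5    # Tesla V100, double precision (FP64)
P100_SINGLE = 10.6   # Tesla P100, single precision (FP32)
P100_DOUBLE = 5.3    # Tesla P100, double precision (FP64)

# Relative gain over the prior generation, at the same 300-watt envelope.
single_gain = V100_SINGLE / P100_SINGLE - 1
double_gain = V100_DOUBLE / P100_DOUBLE - 1

print(f"FP32 gain: {single_gain:.1%}")  # → FP32 gain: 41.5%
print(f"FP64 gain: {double_gain:.1%}")  # → FP64 gain: 41.5%
```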

The V100 also includes specialized processors that NVIDIA refers to as "Tensor Cores." These cores, NVIDIA says, "are the most important feature of the Volta GV100 architecture to help deliver the performance required to train large neural networks."

"Tensor Cores provide up to 12x higher peak TFLOPs [teraflops] on Tesla V100 for deep learning training compared to P100 FP32 operations, and for deep learning inference, up to 6x higher peak TFLOPs compared to P100 FP16 generations," NVIDIA says.

Finally, the Volta-based V100 is huge -- quite literally. The chip measures 815 square millimeters -- roughly a third larger than the 610-square-millimeter Tesla P100 -- and is manufactured on Taiwan Semiconductor's (TSM 2.74%) 12-nanometer FFN manufacturing technology.

NVIDIA says 12-nanometer FFN is a manufacturing technology tailored to NVIDIA's specific requirements -- the "N" stands for "NVIDIA." It appears to be a higher-performing, more efficient variant of the 16-nanometer FF+ technology that NVIDIA currently uses to manufacture its Pascal-architecture products.

Foolish bottom line

NVIDIA appears to have built an incredible machine with the Tesla V100. Not only has the company made substantial advancements on the traditional GPU part of the equation with a markedly more efficient architecture, but it also added specialized processors built to handle the deep-learning workloads its customers are most interested in.

Also of note, NVIDIA says it "has worked with many popular deep learning frameworks such as Caffe2 and MXNet to enable the use of Tensor Cores for deep-learning research on Volta GPU based systems" and that it "continues to work with other framework developers to enable broad access to Tensor Cores for the entire deep-learning ecosystem."

So in addition to building compelling new hardware, NVIDIA is doing the non-trivial legwork to drive software and ecosystem enablement for these new technologies, since hardware that can't be easily used isn't worth much.

Nicely done, NVIDIA.