Last summer, chip giant Intel (INTC 0.89%) released a new family of processors for the data center, called Xeon Scalable. At the time, Intel touted the Xeon Scalable chips as "the industry's biggest platform advancement in a decade."
The Xeon Scalable products offered significantly better performance than their predecessors, along with a more robust feature set.
Of course, Intel has made it clear that it aims to refresh its key processor products at a roughly annual cadence. To that end, Intel is expected to launch its next generation of Xeon Scalable processors, code-named Cascade Lake, sometime in the second half of 2018.
Here are three things you need to know about the Cascade Lake chips.
1. Manufacturing technology improvements
Intel's current Xeon Scalable chips, which are based on the company's Skylake-SP architecture, are manufactured using the company's second-generation 14-nanometer technology, known as 14+. The Cascade Lake chips are expected to be manufactured in Intel's third-generation 14-nanometer technology, unsurprisingly called 14++.
According to Intel, 14++ offers roughly a 10% speed boost over 14+ at the same power consumption. Intel is likely to parlay those manufacturing improvements into performance gains, most likely through higher operating frequencies than their Skylake-SP predecessors.
2. Optane memory module support
The upcoming Cascade Lake chips are expected to support memory modules based on Intel's 3D XPoint memory technology, a type of memory that's supposed to be more cost-effective than traditional DRAM and can retain its data even when power to the system is removed (such memory is referred to as "persistent memory"). Intel markets 3D XPoint-based products under its Optane brand.
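To make the "persistent memory" idea concrete, here is a minimal sketch. It uses an ordinary memory-mapped file as an analogy: software reads and writes the region like memory, yet the data survives the process ending. This is only an illustration of the programming model; real Optane modules sit on the memory bus and are far faster than file-backed storage. The file name is an arbitrary choice for the demo.

```python
import mmap
import os
import tempfile

# Analogy only: persistent memory lets software treat durable storage
# like ordinary memory. A memory-mapped file offers a similar model --
# loads and stores into a region that outlives the process.
path = os.path.join(tempfile.gettempdir(), "pmem_demo.bin")

# Create a 16-byte backing region.
with open(path, "wb") as f:
    f.write(b"\x00" * 16)

# "Write" phase: store bytes through the memory mapping.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 16) as m:
        m[0:5] = b"hello"
        m.flush()  # push the data to the durable backing store

# Simulated power cycle: reopen the region; the data is still there.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```

Unlike DRAM, whose contents vanish when power is cut, the mapped region above still holds "hello" on the next open, which is the property that makes persistent memory attractive for databases and large in-memory datasets.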
According to a leaked Intel slide deck, the company intends to support Optane memory modules on "select SKUs" of Cascade Lake.
This likely means Intel will charge a premium for Cascade Lake-based chips that support 3D XPoint-based memory modules, in a bid to increase its data center processor average selling prices and ultimately accelerate revenue growth in the company's data center group (DCG).
3. Built-in deep-learning features
Arguably the hottest topic in the world of chips is machine learning, so it's little surprise that Intel would want to add specialized functionality to its chips to help accelerate key machine learning/artificial intelligence workloads.
Per the slide deck that I referenced above, the Cascade Lake chips are going to include new instructions known as Vector Neural Network Instructions (VNNI). These should allow Cascade Lake to be far faster and more efficient for certain types of machine-learning applications.
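The core operation these instructions target is the low-precision multiply-accumulate that dominates neural-network inference: multiplying 8-bit values and summing the products into a wider 32-bit accumulator, which older chips had to do in several separate steps. The sketch below shows that arithmetic in Python; the function name and sample values are illustrative, not from Intel's materials.

```python
import numpy as np

def int8_dot(a, b):
    """Dot product of 8-bit vectors with a 32-bit accumulator --
    the multiply-accumulate pattern that VNNI-style instructions
    collapse into fewer hardware operations."""
    acc = np.int32(0)
    for x, y in zip(a.astype(np.int32), b.astype(np.int32)):
        acc += x * y  # widen to 32 bits so the sum can't overflow
    return int(acc)

# Illustrative quantized weights and activations (8-bit integers).
weights = np.array([100, -50, 25, 7], dtype=np.int8)
activations = np.array([120, 80, -60, 33], dtype=np.int8)
print(int8_dot(weights, activations))  # 6731
```

Neural-network inference performs this dot product billions of times, so fusing its steps into dedicated instructions is where the speed and efficiency gains come from.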
These new instructions aren't likely to make a dent in the rise of alternative computing architectures for machine learning/deep learning, such as graphics processing units (GPUs), but they should nonetheless make Intel's processors more valuable and more versatile, encouraging major data center operators that want better machine-learning performance to consider upgrading.