Microprocessor giant Intel (INTC) wants to be a leader in the market for artificial-intelligence (AI) processing. To that end, back in May, Diane Bryant, who led the company's data-center group before going on leave, hosted a presentation highlighting Intel's rather broad AI portfolio. The company's pitch seems to be that it has solutions suitable for all sorts of machine learning and AI scenarios.

Intel pitches its mainstream Xeon processors as its "most widely deployed machine learning solution." The company further says that its upcoming Knights Mill processor, which will be sold under the Xeon Phi branding, is suited for "high-performance, general-purpose machine learning."

Intel Xeon processors. Image source: Intel.

Diving into more specialized solutions, Intel says Xeon processors paired with a field-programmable gate array (FPGA) are ideal for "programmable, low-latency" machine learning inference calculations. Intel strengthened its hand in this area when it acquired FPGA specialist Altera about two years ago.

And finally, Intel says its upcoming Xeon-plus-Nervana Engine solution will offer "best-in-class neural network performance."

That all sounds good on paper, but my concern with this strategy is that it appears to emphasize breadth over depth.

The problem with breadth over depth

In theory, having a wide portfolio of products that specifically target each kind of use case should allow Intel to be very well positioned for AI. After all, Intel's theoretical total addressable market should be much larger with a broader portfolio than with a narrower one. The problem is that a company can capture a significant portion of that combined total addressable market only if it develops the right products and invests enough in marketing and attendant platform support for each one.

Take, for example, Knights Mill, which is expected to launch in the fourth quarter of this year. It is supposed to compete with NVIDIA's (NVDA) general-purpose GPU solutions, but the reality is that NVIDIA has iterated on its data-center GPUs much more quickly than Intel has on its Xeon Phi lineup.

Intel is, as they say, shooting behind the duck.

NVIDIA is also all-in on general-purpose GPU computing and as such puts a tremendous amount of effort into developing, supporting, and promoting its GPU platforms. As a result, NVIDIA's data-center revenue continues to skyrocket, up 175% year over year last quarter, while Intel has said relatively little about its Xeon Phi lineup on recent earnings calls.

Intel needs to find focus

If Intel is to succeed in AI, I think it'll need to figure out what big-picture architectural approach works best and bet significantly on that.

Longer term, I think Intel will probably invest heavily in future versions of the Nervana Engine and focus on integrating those intelligently with its standard Xeon processors, as this approach seems to offer the best odds for long-term success.