Brought to you by Data Center Knowledge

Intel is designing the next-generation chip in its line of supercomputer processors to power artificial intelligence.

Code-named Knights Mill, the next processor in the Xeon Phi family is expected to be available in 2017, the company said Wednesday. Diane Bryant, executive VP and general manager of Intel’s Data Center Group, made the announcement at the Intel Developer Forum in San Francisco.

As tech giants like Google, Facebook, and Microsoft, along with Elon Musk’s non-profit OpenAI, invest tremendous sums in AI research and fight tooth and nail to attract top minds in the field, a nascent battle is taking shape over who will supply the hardware for the software those minds create.

In this battle, Intel is up against the likes of Nvidia, whose GPUs are a popular way to deploy algorithms for machine learning (a common type of AI technology), as well as the tech giants themselves. Google has developed its own custom processor for its machine learning applications, saying nothing available on the market met its performance and price requirements.

Intel has a different set of concerns than Google does when designing its processors for machine learning. It wants to create a general-purpose chip that will support machine learning and other workloads, Charles Wuischpard, VP of the Data Center Group, said on a call with reporters in June, which was the first time the company discussed its AI chip strategy publicly.

While he had little familiarity with Google’s Tensor Processing Unit, Wuischpard said it appeared to be a highly specialized part designed for a specific workload.

Another big part of Intel’s AI chip strategy is designing for scale-out rather than scale-up architectures. Scale-up is currently the most common architecture for machine learning, he said, which makes those systems difficult to scale.

Knights Mill “is optimized for scale-out analytics implementations, and will include key enhancements for deep learning training,” Intel said in a statement Wednesday.