Intel Nervana: The First Processors for Artificial Intelligence

To offload the main processor and reduce the power consumption of servers and data centers, Intel has developed processors dedicated to artificial intelligence. They are designed for very complex calculations, but also for working through huge databases.

At the Hot Chips 31 conference, held August 18-20 at Stanford University in the United States, Intel unveiled new chips described as high-performance artificial intelligence accelerators: the Intel Nervana “Neural Network Processors” (NNP).

Intel has developed two separate processors, the second of which builds on the company’s new 10-nanometer Ice Lake architecture. They are part of the manufacturer’s long-term vision, which already imagines artificial intelligence everywhere. According to Naveen Rao, general manager of Intel’s artificial intelligence products group, “To achieve a future where AI is everywhere, we will need to handle the huge amount of data being generated and ensure that companies have the tools to use their data efficiently where it is produced.”

The first, called NNP-T (code-named Spring Crest), is used to train artificial intelligence. It differs considerably from Intel’s Xeon processors, which merely add a few AI-specific instructions: it is a new architecture built specifically to accelerate the training of neural networks while reducing the power consumption required.
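The article shows no code, but as a rough illustration of what “training” means here, the minimal PyTorch sketch below runs the gradient-descent loop that a chip like the NNP-T is designed to accelerate. The model and data are toy stand-ins invented for this example, not anything Intel ships.

```python
import torch
import torch.nn as nn

# Toy model: map a 128-feature input to one of 10 classes.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data standing in for a real training set.
inputs = torch.randn(256, 128)
labels = torch.randint(0, 10, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # forward pass
    loss.backward()   # backward pass: the costly step training hardware targets
    optimizer.step()  # update the weights
```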

The second processor, intended for larger-scale deployment, is called NNP-I (for Inference), code-named Spring Hill. In artificial intelligence, inference is the application of the algorithms produced during training: the AI infers results from the data it is given (such as predicting your next purchase based on the items you viewed in the store). Intel has built it as a card that can be plugged into any motherboard, but it primarily targets servers and data centers rather than the general public. Facebook has already integrated it into its huge servers.
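By contrast, inference freezes the trained weights and only runs the network forward. Continuing the toy sketch above (the shopping scenario is the article’s own example; the feature vector is an invented stand-in for a shopper’s browsing history):

```python
model.eval()                        # switch to inference mode
with torch.no_grad():               # no gradients: only a forward pass is needed
    features = torch.randn(1, 128)  # stand-in for the items a shopper viewed
    scores = model(features)
    predicted = scores.argmax(dim=1).item()
print(f"predicted next purchase: item class {predicted}")
```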

The NNP-I plugs into a motherboard’s standard M.2 slot, best known for hosting SSD storage drives, which allows AI tasks to be offloaded from the main processor. The chip communicates over a PCI-Express 3.0 x4 or x8 interface and combines two CPU cores dedicated to artificial intelligence (Sunny Cove IA cores) with 12 inference engines (Inference Compute Engines, or ICE).

The NNP-I card is therefore compatible with most existing servers: it simply needs to be connected to a free slot. The architecture is also scalable, so multiple cards can be added to the same server as needed. The system is fully autonomous; Intel will provide software that sends tasks directly to the NNP-I chip, bypassing the machine’s main processor entirely.
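Intel’s actual NNP-I software stack is not detailed in the article. As a loose illustration of the offload pattern it describes, the PyTorch snippet below moves the toy model from the sketches above onto an accelerator when one is available, so the forward pass no longer runs on the host CPU (a CUDA GPU stands in for the NNP-I here):

```python
# Pick an accelerator if present, otherwise fall back to the host CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)                   # weights now live in device memory
features = torch.randn(1, 128).to(device)  # inputs are sent to the device too
with torch.no_grad():
    scores = model(features)               # forward pass executes on the accelerator
```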
