Intel announces Loihi 2 – A Neuromorphic Chip
18-10-2021 | By Robin Mitchell
Recently, Intel announced the development of its latest neuromorphic chip, the Loihi 2, together with an open framework for developing solutions on it. What is neuromorphic computing, what are the specifics of the Loihi 2 chip, and could neuromorphic designs be the solution to advanced AI systems?
What is neuromorphic computing?
Neuromorphic computing is a computing technology whose operation closely resembles that of neurons in the brain. While traditional computing uses logic elements that are either on or off, a neuromorphic computer instead uses analogue electronics to recreate neural pathways. These pathways can be programmed with a degree of resistance to a signal, and the analogue electronics can process multiple signals simultaneously (i.e. in a near-identical fashion to neurons).
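The weighted-pathway idea can be sketched in a few lines: each connection's programmed resistance acts as a weight, and a neuron's input is the combination of all the analogue signals arriving on its pathways. This is a generic illustration with arbitrary values, not a model of any particular device.

```python
# Illustrative sketch: pathways with programmable resistance act as
# weights on incoming analogue signals. All values here are arbitrary.

def pathway_sum(signals, conductances):
    """Combine several analogue signals, each scaled by its pathway's
    conductance (the inverse of its programmed resistance)."""
    return sum(s * g for s, g in zip(signals, conductances))

# Three incoming signals; the middle pathway has the lowest resistance
# (highest conductance), so it dominates the combined input.
print(pathway_sum([0.5, 1.0, 0.2], [0.1, 0.8, 0.3]))  # weighted sum ≈ 0.91
```

In an analogue implementation this summation happens physically, as currents merging on a shared wire, which is part of what makes neuromorphic hardware so power-efficient compared with digitally computing the same sum.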
One common component that could help produce future neuromorphic designs is the memristor, a resistor whose resistance depends on the history of current that has flowed through it. Such a component could be replicated in the millions and connected into a vast network representing a rudimentary brain.
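The defining memristor behaviour, resistance that drifts with the charge that has passed through the device, can be captured in a toy simulation. The sketch below uses a simple linear-drift state model with arbitrary parameters chosen for demonstration; it is not a model of any real memristor device.

```python
# Toy linear-drift memristor model. Parameters are arbitrary, chosen
# for demonstration only; not a model of any real device.

def simulate_memristor(voltages, dt=1e-3,
                       r_on=100.0, r_off=16_000.0,
                       mobility=1e-2):
    """Return the device resistance after each applied voltage sample.

    The internal state w (0..1) tracks the fraction of the device in its
    low-resistance phase; it drifts in proportion to the current, so the
    resistance depends on the history of charge that has flowed through.
    """
    w = 0.5  # start halfway between R_on and R_off
    history = []
    for v in voltages:
        resistance = r_on * w + r_off * (1.0 - w)
        current = v / resistance
        w += mobility * current * dt       # state drifts with charge
        w = min(1.0, max(0.0, w))          # clamp to physical bounds
        history.append(resistance)
    return history

# A sustained positive bias drives the device toward R_on (lower
# resistance), while a negative bias drives it back toward R_off.
lowering = simulate_memristor([1.0] * 200)
raising = simulate_memristor([-1.0] * 200)
```

Because the state persists after the voltage is removed, a memristor acts as a non-volatile analogue weight, which is why it maps so naturally onto the programmable synaptic connections described above.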
Intel announces its latest neuromorphic chip, the Loihi 2
Recently, Intel announced the release of its next-generation neuromorphic chip, the Loihi 2, said to be a significant improvement over its first device, the Loihi 1. The new chip is built on the Intel 4 process, the company's first node to use extreme ultraviolet (EUV) lithography for nanometre-scale features, and Intel has also developed an open framework to help accelerate software development on the device.
The Loihi 2 is designed to execute neuromorphic AI algorithms and provides up to 10 times faster processing and 15 times greater resource density than the Loihi 1. It supports up to 1 million neurons per chip while also providing better energy efficiency.
Each neuron in the Loihi 2 is programmable using microcode instructions, allowing it to perform basic processing and weighting on its connections with other neurons. Instead of the entire chip updating on every clock cycle, the design uses a spiking method whereby neurons respond to spike events, meaning it operates very similarly to neurons in the human brain.
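The spiking behaviour described above is typically modelled with a leaky integrate-and-fire (LIF) neuron: the membrane potential integrates input and leaks over time, and the neuron only emits an event when a threshold is crossed. The sketch below is a generic textbook LIF model with arbitrary parameters, not Loihi 2's actual microcode or neuron model.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a generic textbook
# sketch of spiking behaviour, not Intel's actual neuron model.

def lif_neuron(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per timestep) for an input sequence.

    The membrane potential integrates input and leaks each step; a
    spike fires only when the potential crosses the threshold, so the
    neuron stays silent (produces no events) while its input is quiet.
    """
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)                    # emit a spike event
            potential = reset                   # reset after firing
        else:
            spikes.append(0)
    return spikes

# Sustained strong input makes the neuron fire periodically.
print(lif_neuron([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

This event-driven behaviour is the source of the energy efficiency: work is only done when a spike occurs, rather than on every clock cycle across the whole chip.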
The new device can also perform on-chip learning to improve itself over the long term, and the use of 3D construction methods gives the Loihi 2 better scaling than its predecessor.
Will neuromorphic processors be the key to future AI?
To understand why neuromorphic AI could be highly advantageous, we first need to compare the performance of computers with that of biological neurons. Computers are incredibly efficient at performing vast numbers of calculations very quickly; the number of computations a computer performs in one second would take a human an entire lifetime. However, computers are only good at basic calculations, and any job they are given must first be broken down into simple steps.
While the human brain cannot perform even the most basic of calculations in a timely manner, it can do much more: it learns and improves over time, reads many thousands of signals simultaneously, determines which signals are most important to respond to, and extrapolates from incomplete data.
For example, the human brain has access to only a pair of eyes, yet from this it can interpret its environment with an accuracy not possible for computers while also controlling a fast-moving vehicle. A self-driving car, by contrast, requires sensors all around the vehicle, multiple cameras, and software to read all the resulting data, and it still makes mistakes.
Neuromorphic AI, however, processes information differently from a conventional computer by operating similarly to the human brain. In neuromorphic computing, the hardware essentially recreates the biological process that enables neurons to communicate and process information, thereby gaining the same advantages as biological brains.
Neuromorphic devices are still in their infancy and have a long way to go. However, because neuromorphic chips operate like brains, they are highly adaptable, can be used in almost any application requiring a level of intelligence, will be able to extrapolate from complex data, and may even be the key to truly self-driving vehicles.