- Zoe Corbyn
- Technology reporter
- Reporting from San Francisco
Modern computers’ hunger for electricity is increasing at an alarming rate.
According to a recent report from the International Energy Agency (IEA), electricity consumption by data centers, artificial intelligence (AI) and cryptocurrency could double from 2022 levels by 2026.
By then, the three sectors' combined consumption could be roughly equivalent to the annual energy needs of Japan.
Companies like Nvidia – whose computer chips power most AI applications today – are working to develop more energy-efficient hardware.
But could an alternative path be to build computers with a fundamentally different, more energy-efficient architecture?
Some companies certainly think so, and they are taking their cue from the structure and function of an organ that performs more operations, faster, on a fraction of the power a conventional computer requires: the brain.
In neuromorphic computing, electronic devices imitate neurons and synapses, connecting them together in a manner similar to the brain’s electrical network.
It is not new: researchers have been working on the technique since the 1980s.
But the energy demands of the AI revolution are increasing pressure to get the emerging technology into the real world.
Today’s systems and platforms exist mainly as research tools, but proponents say they could deliver huge energy efficiency gains.
Among those with commercial ambitions are hardware giants such as Intel and IBM.
A handful of smaller companies are also in the field. “The opportunity is waiting for the company that can figure this out,” said Dan Hutcheson, an analyst at TechInsights. “[And] the odds are such that it could be an Nvidia killer.”
In May, SpiNNcloud Systems, a spinout from Dresden University of Technology, announced that it was taking pre-orders for neuromorphic supercomputers – the first time the machines have been offered for sale.
“We reached the commercialization of neuromorphic supercomputers ahead of other companies,” said Hector Gonzalez, co-CEO.
It’s an important development, says Tony Kenyon, professor of nanoelectronic and nanophotonic materials at University College London, who works in this field.
“While there is still no great app out there… there are many areas where neuromorphic computing will deliver significant gains in energy efficiency and performance, and I am confident we will see widespread adoption of the technology as it matures,” he says.
Neuromorphic computing encompasses a range of approaches – from designs that are merely brain-inspired to a near-total simulation of the human brain (which remains far out of reach).
But there are some fundamental design features that set it apart from conventional computing.
First, unlike conventional computers, neuromorphic computers do not have separate memory and processing units. Instead, these tasks are performed together on one chip in one location.
Removing the need to transfer data between the two reduces energy consumption and speeds up processing time, notes Prof. Kenyon.
An event-driven approach to computing is also common.
Unlike conventional computing, where every part of the system is always on and always available to communicate with every other part, activation in neuromorphic computing can be more sparse.
The artificial neurons and synapses are only activated when they have something to communicate, just as many neurons and synapses in our brains only spring into action when there is a reason to do so.
Only working when there is something to process also saves energy.
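To make that contrast concrete, here is a minimal sketch of the event-driven idea in Python: a simplified “leaky integrate-and-fire” neuron that stays silent until its input pushes it over a threshold. The function and its parameter values are illustrative assumptions for this article, not code from any actual neuromorphic chip or vendor toolkit.

```python
# Sketch of an event-driven "spiking" neuron (leaky integrate-and-fire).
# All names and parameter values are illustrative, not from a real chip.

def simulate_lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires (emits an event)."""
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        # The membrane potential decays ("leaks"), then integrates new input.
        potential = potential * leak + current
        if potential >= threshold:
            # Only now does the neuron communicate - this is the "event".
            spike_times.append(t)
            potential = 0.0  # reset after firing
    return spike_times

# Weak, steady input never crosses the threshold: no events, no traffic.
print(simulate_lif_neuron([0.05] * 10))             # -> []
# A brief burst of strong input produces just a couple of events.
print(simulate_lif_neuron([0.05] * 5 + [0.6] * 5))  # -> [6, 8]
```

In a conventional processor, every one of those time steps would involve clocked activity; in an event-driven design, only the handful of spikes cost communication energy, which is where the claimed savings come from.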
And while modern computers are digital – using 1s or 0s to represent data – neuromorphic computing can be analog.
This method of computing relies on continuous signals and can be useful when data coming directly from the outside world needs to be analyzed.
However, for convenience, most commercially oriented neuromorphic efforts are digital.
The intended commercial applications fall into two main categories.
One of these, which SpiNNcloud focuses on, is providing a more energy-efficient and higher-performing platform for AI applications – including image and video analytics, speech recognition and the large language models that power chatbots like ChatGPT.
The other is in ‘edge computing’ applications, where data is processed in real time on connected devices that operate under power constraints, rather than in the cloud. Autonomous vehicles, robots, mobile phones and wearable technology could all benefit.
However, technical challenges remain. Developing the software needed to run the chips has long been considered a major stumbling block to the advancement of neuromorphic computing.
Having the hardware is one thing, but it still has to be programmed, and that may require a programming style built from the ground up, quite different from that used for conventional computers.
“The potential for these devices is enormous… the problem is how to make them work,” summarizes Mr Hutcheson, who predicts it will be at least a decade, if not two, before the benefits of neuromorphic computing are truly felt.
There is also the issue of cost. Whether they use silicon, as the commercially oriented efforts do, or other materials, making radically new chips is expensive, notes Prof. Kenyon.
Intel’s current prototype neuromorphic chip is called Loihi 2.
In April, the company announced that it had brought together 1,152 of them to create Hala Point, a large-scale neuromorphic research system containing more than 1.15 billion artificial neurons and 128 billion artificial synapses.
With a neuron capacity roughly equivalent to that of an owl’s brain, Intel claims it is the largest such system in the world to date.
At the moment it is still a research project for Intel.
“[But Hala Point] shows that there is some real viability for AI applications,” said Mike Davies, director of Intel’s neuromorphic computing lab.
About the size of a microwave, Hala Point is “commercially relevant” and “rapid progress” is being made on the software side, he says.
IBM is calling its latest brain-inspired prototype chip NorthPole.
Unveiled last year, it is an evolution of its previous TrueNorth prototype chip. Tests show NorthPole to be more energy efficient, space efficient and faster than any chip currently on the market, says Dharmendra Modha, the company’s chief scientist for brain-inspired computing. He adds that his group is now working to demonstrate that the chips can be assembled into a larger system.
“The path to market will come,” he says. One of NorthPole’s great innovations, notes Dr. Modha, is that it was designed together with its software, so that the full capabilities of the architecture can be exploited from the start.
Other smaller neuromorphic companies include BrainChip, SynSense and Innatera.
The supercomputer SpiNNcloud is commercializing builds on neuromorphic computing developed by researchers at TU Dresden and the University of Manchester under the umbrella of the EU’s Human Brain Project.
Those efforts produced two neuromorphic supercomputers for research: SpiNNaker1, based at the University of Manchester, which emulates more than a billion neurons and has been operational since 2018, and the second-generation SpiNNaker2 machine at TU Dresden, currently being configured, which can emulate at least five billion neurons. The commercially available systems SpiNNcloud offers can go further still, to at least 10 billion neurons, Mr. Gonzalez says.
The future will be one in which different types of computing platforms – conventional, neuromorphic and quantum, another emerging type of computing – all work together, says Prof. Kenyon.