Scientists at the University of Cambridge placed physical constraints on an artificial intelligence system, similar to how human and other animal brains have to develop and operate under both physical and biological constraints. The system then developed some features of the brains of complex organisms in order to solve tasks.
In a study published in the journal Nature Machine Intelligence today, Jascha Achterberg and Danyal Akarca from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge worked with their colleagues to develop a simplified version of the brain and applied some physical constraints before giving the system tasks. This technology could potentially be used to develop more efficient AI systems and even understand the human brain itself better.
Developing a system with the same limitations as the brain
Instead of using real neurons or brain cells, the researchers used computational nodes, because neurons and nodes function in similar ways: both take an input, transform it and produce an output. And just like a neuron, a single node might connect to multiple others, sending information to and receiving information from all of them.
The physical constraint they placed on their system of computational nodes was similar to a constraint experienced by neurons in the brain—each node was given a specific location in a virtual space, and the further it was away from another, the more difficult it was for the two to communicate.
After placing this constraint, they gave the system a task to complete. The task in this case was a simplified version of a maze navigation task that is typically given to animals like rats and monkeys when studying their brains. Basically, it was given multiple pieces of information to decide on the shortest route to reach the endpoint of the maze.
The system did not initially know how to complete the task and kept making mistakes. The researchers kept giving it feedback, and by repeating the task over and over again, the system gradually learned to perform it correctly.
As we mentioned earlier, the constraint placed upon the system meant that the further away the two nodes were in the virtual space, the more difficult it was to build a connection between the two nodes in response to the feedback. This is just like how it is more expensive to form and maintain connections across a large physical distance in the brain.
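The distance-based constraint described above can be sketched as an extra "wiring cost" added to the network's training loss. Everything in the sketch below — the node positions, the gamma weighting, and the specific strength-times-distance form of the penalty — is an illustrative assumption, not a detail taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 20

# Each node gets a fixed position in a 3D "virtual space"
# (coordinates here are arbitrary, for illustration only).
positions = rng.uniform(0.0, 1.0, size=(n_nodes, 3))

# Pairwise Euclidean distances between every pair of nodes.
dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

# A random connection-weight matrix standing in for learned weights.
weights = rng.normal(0.0, 0.1, size=(n_nodes, n_nodes))

def wiring_cost(weights, dist, gamma=1.0):
    """Penalty that grows with both a connection's strength and the
    distance it spans, so long-range links are 'expensive'. During
    training this term would be added to the task loss, discouraging
    strong connections between far-apart nodes."""
    return gamma * np.sum(np.abs(weights) * dist)

cost = wiring_cost(weights, dist)
```

Zeroing out the longest-range connections lowers this cost, which is exactly the pressure that pushes such a network toward short, local wiring.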
Same tricks as the human brain
When the system performed these tasks with those constraints, it used some of the same “tricks” used by real human brains to solve the same tasks. One example is how it tried to get around the constraints by developing hubs of highly connected nodes that acted as junctions to pass information across the network.
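The idea of a hub can be illustrated with a toy weight matrix. Note that everything below is fabricated for demonstration — which nodes become hubs, the connection strengths, and the "twice the average" cutoff are all arbitrary choices, not findings from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12

# A made-up weight matrix: most connections are weak...
w = rng.uniform(0.0, 0.05, size=(n, n))

# ...but nodes 0 and 1 are given strong links to everyone, standing
# in for the densely connected hubs that emerged during training.
hub_ids = [0, 1]
w[hub_ids, :] = rng.uniform(0.5, 1.0, size=(2, n))
w[:, hub_ids] = rng.uniform(0.5, 1.0, size=(n, 2))
np.fill_diagonal(w, 0.0)

# Call a node a "hub" if its total connection strength (in + out)
# is more than twice the network average.
strength = w.sum(axis=0) + w.sum(axis=1)
hubs = np.where(strength > 2 * strength.mean())[0]
```

Because most information must route through these few high-strength nodes, they act as the junctions the researchers observed.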
But what surprised the researchers more was the fact that the behaviour of the individual nodes themselves began to change. Instead of having a system where each node solves for one particular property of the maze task like a goal location or the next choice, the nodes developed a “flexible coding scheme.”
This meant that at different moments, the nodes might be “firing” for a mix of the properties of the maze. For example, the same node might encode different locations of the maze instead of needing specialised nodes for encoding particular locations. This is also observed in complex animal brains.
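This "mixed" coding can be demonstrated with a toy node whose activity blends two task variables at once. The variables, the 0.7/0.3 blend, and the correlation check below are all hypothetical, chosen only to show what flexible coding looks like in data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200

# Two made-up task variables across trials: which corner is the
# goal, and where the agent currently is in the maze.
goal = rng.integers(0, 4, size=n_trials)
location = rng.integers(0, 4, size=n_trials)

# A flexible-coding node: its activity is a weighted blend of BOTH
# variables (plus a little noise), rather than encoding just one.
activity = 0.7 * goal + 0.3 * location + rng.normal(0.0, 0.01, size=n_trials)

# The same node's activity correlates with both task variables.
r_goal = np.corrcoef(activity, goal)[0, 1]
r_location = np.corrcoef(activity, location)[0, 1]
```

A node dedicated to a single property would correlate with only one variable; here both correlations are clearly above zero, the signature of mixed selectivity.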
It is quite fascinating that this one simple constraint — making it harder to wire nodes that are further apart — forced the artificial intelligence system to take on complicated characteristics. And these characteristics are shared by biological systems like the human brain.
Designing more efficient AI systems
One major implication of this research is that it could enable the development of more efficient AI models. Many popular AI systems that we know, like the Generative Pre-trained Transformer (GPT) technology used by OpenAI, consume a lot of resources like computing power (GPUs) and electricity.
“We see a lot of potential in using our insights to create AI models which are made simpler in their internal structure while preserving their capabilities, so that they run more efficiently on computer chips. We also think our results can help to better distribute large AI models across multiple chips within large-scale compute clusters,” Achterberg told indianexpress.com in an email interview.
The current implementation of the “spatially embedded AI system” is built using a very small and simple model to study its effects. However, it could be scaled to build larger AI systems.
While many companies, like Google, Amazon, Meta, and IBM, have also built AI chips, Nvidia dominates the market, accounting for more than 70 per cent of AI chip sales. This, coupled with the fact that countries like the United States restrict the sale of AI chips to certain markets, means that they are very expensive and harder to come by. They also consume a lot of electricity, contributing to climate change.
Because of that, there is a lot of interest in building sparse AI models, which work with a smaller set of parameters and fewer “neuronal connections.” In theory, sparse models can run more efficiently. The results of this Cambridge research could help build brain-inspired sparse models which can solve the same problems more efficiently.
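One common route to a sparse model is magnitude pruning — keeping only the strongest connections and zeroing the rest. The sketch below uses that generic technique on a toy weight matrix; it is an illustration of sparsity in general, not the method used in the Cambridge study.

```python
import numpy as np

rng = np.random.default_rng(42)

# A dense toy weight matrix with 10,000 parameters.
w = rng.normal(0.0, 1.0, size=(100, 100))

def magnitude_prune(w, keep_fraction=0.1):
    """Keep only the largest-magnitude `keep_fraction` of weights
    and zero out the rest, producing a sparse matrix that needs far
    fewer stored parameters and multiplications."""
    k = int(w.size * keep_fraction)
    threshold = np.sort(np.abs(w).ravel())[-k]
    return np.where(np.abs(w) >= threshold, w, 0.0)

sparse_w = magnitude_prune(w)
density = np.count_nonzero(sparse_w) / w.size
```

In theory a network like this does the same job with a tenth of the connections — the hope is that brain-inspired constraints can tell us *which* tenth to keep without losing capability.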
Understanding the human brain
There is an even more interesting prospect for the technology—it might even be used to study the actual human brain better.
“The brain is an astonishingly complicated organ, and to understand it we need to build simplified models of its function to explain the principles by which the brain works. One major advantage of these artificial models is that we can study phenomena in them which are difficult to study in the real brain,” said Achterberg. With an actual brain, you cannot remove a neuron and then add it back later to see what the exact role of the neuron is. But with artificial intelligence systems, that is entirely possible.
“One major problem of neuroscience is that we can usually only record the brain’s structure (which neurons are connected to which other neurons?) or the brain’s function (which neurons are currently sending and receiving information?). Using our simplified artificial model, we show that we can study both the brain’s structural and functional principles, to study the links between the brain’s structure and function,” added Achterberg.
What Achterberg described would be incredibly difficult to do with data recorded from an actual brain. It could be a lot easier with simplified artificial brains.
Taking the rudimentary ‘artificial brains’ further
Now, the researchers are focusing on developing their systems in two directions—one is making the model even more brainlike while not being too complex. “In this direction, we have started using so-called ‘Spiking Neural Networks’, which emulate the way information is sent through the brain more closely than what regular AI models do,” said Achterberg.
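The basic unit of a spiking neural network can be sketched as a leaky integrate-and-fire neuron: its voltage builds up with input, leaks away over time, and emits a discrete spike when it crosses a threshold. The sketch below is a minimal, purely illustrative model — the parameters and the simple one-step update are assumptions for demonstration, not details of the researchers' networks.

```python
def simulate_lif(current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron. The membrane
    potential v integrates the input current, leaks back toward
    rest with time constant tau, and emits a spike (recording the
    time step) whenever it crosses the threshold, then resets."""
    v = 0.0
    spikes = []
    for t, i_t in enumerate(current):
        v += dt * (-v / tau + i_t)   # leaky integration of input
        if v >= v_thresh:            # threshold crossed: fire
            spikes.append(t)
            v = v_reset              # reset after the spike
    return spikes

# A constant input current produces regularly spaced spikes.
spikes = simulate_lif([0.15] * 50)
```

Unlike a regular AI node, which outputs a continuous number every step, this neuron communicates only through the timing of discrete spikes — which is what makes spiking networks closer to how the brain sends information, and potentially far more energy-efficient.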
The second is bringing the insights they have from their small and simplified model to large-scale models used by modern AI systems. They hope that by doing this, they can look at the effects of brain-like energy-efficient processing in large-scale systems that otherwise need a lot of energy.