Can the brain compete with advanced artificial intelligence systems?

Artificial intelligence draws on human brain dynamics, but brain learning is restricted in several significant ways compared with deep learning.

Artificial intelligence (photo credit: INGIMAGE)

Many professionals – from architects to graphic artists – are aware that artificial intelligence systems may in the near future either replace them or speed up and improve their work. AI has traditionally drawn its inspiration from human brain dynamics, yet brain learning is restricted in several significant ways compared with deep learning (DL).

Advanced deep-learning architectures consist of dozens of fully connected and convolutional hidden layers, and the deepest now extend to hundreds; in this respect they are far from their biological counterparts.

Can the brain, with its limited realization of precise mathematical operations, compete with advanced AI systems implemented on fast and parallel computers? From our daily experience, we know that for many tasks, the answer is yes. Why is this, and can we build a new type of efficient AI inspired by the brain?

Can an AI inspired by the brain be built?

First, efficient DL wiring structures consist of many dozens of feed-forward layers, while brain dynamics involve only a few. Second, DL architectures typically consist of many consecutive filter layers, which are essential for identifying one of the input classes. If the input is a car, for example, the first filter identifies wheels, the second identifies doors, the third identifies lights, and after many additional filters it becomes clear that the input object is, indeed, a car.
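For readers who want to picture this in code, the following is a minimal, purely illustrative PyTorch sketch of a stack of consecutive convolutional filter layers; the layer counts, channel widths and number of classes are arbitrary assumptions and not the architecture analyzed in the paper.

# Illustrative sketch only (not the paper's code): a deep stack of
# convolutional "filter" layers, each extracting higher-level features.
import torch
import torch.nn as nn

deep_filter_stack = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),    # early filters: edges, wheel-like curves
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),   # mid filters: door-like parts
    nn.ReLU(),
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # later filters: lights, larger parts
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                       # collapse the spatial map
    nn.Flatten(),
    nn.Linear(128, 10),                            # final class decision, e.g. "car"
)

x = torch.randn(1, 3, 32, 32)      # a dummy RGB image
print(deep_filter_stack(x).shape)  # torch.Size([1, 10])

In practice such stacks run to dozens or hundreds of layers; the point here is only the repeated filtering, not the specific sizes.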

Artificial intelligence (credit: PIXABAY/WIKIMEDIA)

Conversely, brain dynamics contain just a single filter, located close to the retina. Third, the mathematically complex DL training procedure is evidently far beyond biological realization.
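By way of contrast, and again only as a hedged illustration rather than the researchers' model, a shallow network with a single filter layer near the input could be sketched like this (all sizes are arbitrary assumptions):

# Illustrative contrast (not the paper's model): one filter layer near the
# input, followed directly by a shallow readout, instead of dozens of
# stacked filter layers.
import torch
import torch.nn as nn

single_filter_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, padding=2),  # the single "filter" stage
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),                     # coarse spatial summary
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 10),                   # direct readout to the classes
)

x = torch.randn(1, 3, 32, 32)
print(single_filter_net(x).shape)  # torch.Size([1, 10])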

In an article published on Monday in the journal Scientific Reports under the title "Learning on tree architectures outperforms a convolutional feedforward network," Yuval Meir and fellow researchers from Bar-Ilan University in Ramat Gan report that they have solved this puzzle.

Lead researcher Prof. Ido Kanter, of BIU’s Physics Department and the Gonda Multidisciplinary Brain Research Center, said: “We’ve shown that efficient learning on an artificial tree architecture, in which each weight has a single route to an output unit, can achieve better classification success rates than previously achieved by DL architectures consisting of more layers and filters. This finding paves the way for efficient, biologically inspired new AI hardware and algorithms.”
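To make the quoted idea concrete, here is a hypothetical sketch of a tree-like network in which every weight lies on exactly one route to exactly one output unit. The per-class branches, their sizes and all names are assumptions for illustration; they do not reproduce the tree architecture studied in the paper.

# Hypothetical sketch (not the paper's architecture): each output class has
# its own branch, so every weight sits on a single route to a single output unit.
import torch
import torch.nn as nn

class TreeSketch(nn.Module):
    def __init__(self, in_features=3 * 32 * 32, hidden=32, classes=10):
        super().__init__()
        # One independent branch per class: no weight is shared between routes.
        self.class_branches = nn.ModuleList(
            nn.Sequential(
                nn.Linear(in_features, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),   # the branch's single output unit
            )
            for _ in range(classes)
        )

    def forward(self, x):
        x = x.flatten(1)  # (batch, in_features)
        # Each branch produces one class score; concatenate the scores.
        return torch.cat([branch(x) for branch in self.class_branches], dim=1)

x = torch.randn(2, 3, 32, 32)  # e.g. two CIFAR-10-sized images
print(TreeSketch()(x).shape)   # torch.Size([2, 10])

Because each branch feeds only its own output unit, adjusting or pruning one branch's weights never touches the routes belonging to another class, which is the structural property the quote describes.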

Meir, a doctoral student who contributed to this work, added that “highly pruned tree architectures represent a step toward a plausible biological realization of efficient dendritic tree learning by a single or several neurons, with reduced complexity and energy consumption, and biological realization of back-propagation mechanism, which is currently the central technique in AI.”

Efficient dendritic tree learning is based on previous research by Kanter and his experimental research team, conducted by Dr. Roni Vardi, which showed evidence for sub-dendritic adaptation in neuronal cultures, together with other anisotropic properties of neurons, such as different spike waveforms, refractory periods and maximal transmission rates.

Efficient implementation of highly pruned tree training requires a new type of hardware, different from the graphics processing units that are better suited to the current DL strategy. The emergence of such hardware is required to imitate brain dynamics efficiently.