In 1990, Australian roboticist Rodney Brooks published a paper titled “Elephants Don’t Play Chess,” which advanced the idea that AI can improve by learning the way the human brain does: emulating human thinking by gradually building simple connections into more complex ones.

However, the ability of AI to imitate neuroscience remains a highly debated topic, with researchers on both sides seeking to learn from each other and to determine whether machines can eventually have human-like minds.

AI was originally inspired by the dynamics of the human brain, yet brain learning appears limited compared with Deep Learning (DL). DL’s efficient architectures contain many feedforward layers, while the brain has only a few. DL also relies on multiple consecutive filter layers, which are crucial for identifying classes of inputs and which the brain lacks.

For example, several filters are needed before it becomes evident that an input is, in fact, a vehicle: the first filter recognizes wheels, the second recognizes doors, the third recognizes lights, and so on.
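The filter progression described above can be sketched as stacked convolution stages. This is only a minimal NumPy illustration of how consecutive filters build on each other’s outputs; the kernels here are random placeholders, whereas a trained network would learn them:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide a small filter over the image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))  # toy 8x8 "photo"

# Three consecutive filter stages, loosely analogous to the wheels ->
# doors -> lights progression described above (random kernels, purely
# illustrative).
k1, k2, k3 = (rng.standard_normal((3, 3)) for _ in range(3))
feat = np.maximum(conv2d(image, k1), 0)  # stage 1 + ReLU
feat = np.maximum(conv2d(feat, k2), 0)   # stage 2 + ReLU
feat = np.maximum(conv2d(feat, k3), 0)   # stage 3 + ReLU
print(feat.shape)  # each 3x3 stage shrinks the map by 2 -> (2, 2)
```

Each stage consumes the previous stage’s feature map rather than the raw image, which is exactly why later filters can respond to increasingly composite patterns.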

Brain dynamics, on the other hand, involve just a single filter, located near the retina.

Finally, there is the mathematically intensive DL training process, which is far beyond what a biological system could plausibly perform.
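To give a sense of what “mathematically intensive” means here, the sketch below runs gradient descent on a single linear unit fit to a toy regression target. Every step requires exact analytic gradients and precise floating-point updates, the kind of arithmetic with no obvious biological counterpart (the data and learning rate are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.random((16, 3))              # 16 toy samples, 3 features
true_w = np.array([1.0, -2.0, 0.5])  # hidden target weights
y = x @ true_w

w = np.zeros(3)
lr = 0.1
for _ in range(2000):
    err = x @ w - y                  # prediction error on all samples
    grad = 2 * x.T @ err / len(x)    # exact analytic gradient of the MSE
    w -= lr * grad                   # precise floating-point update

print(np.round(w, 3))  # approaches [1.0, -2.0, 0.5]
```

Even this single-neuron case depends on globally computed error signals; full DL training repeats such updates across millions of weights via backpropagation.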

Does this mean that brain learning is weaker than artificial intelligence?

Can the brain, limited in its ability to perform precise math operations, compete with highly advanced AI systems run on fast computers?

Our daily experience suggests that, for many tasks, the answer is yes.

Why is this the case, and can this lead to a new, brain-inspired type of efficient AI?

In an article published today in the journal Scientific Reports, researchers from Bar-Ilan University in Israel explain how this puzzle can be solved.

“We’ve shown that efficient learning on an artificial tree architecture, where each weight has a single route to an output unit, can achieve better classification success rates than previously achieved by DL architectures consisting of more layers and filters,” remarks lead author Prof. Ido Kanter.
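The defining property Prof. Kanter describes, that each weight has a single route to the output unit, can be illustrated with a toy tree network in which disjoint groups of inputs feed separate branches and each branch connects to the output exactly once. This is only a hedged sketch of the general idea, not the paper’s actual architecture, and all sizes and weights here are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

n_branches, per_branch = 4, 8
x = rng.random(n_branches * per_branch)       # 32 toy inputs
segments = x.reshape(n_branches, per_branch)  # disjoint input groups, one per branch

# One weight per input, grouped by branch. Because the branches share no
# inputs and each branch meets the output unit exactly once, every weight
# influences the output along exactly one route.
w_in = rng.standard_normal((n_branches, per_branch))
w_out = rng.standard_normal(n_branches)

branch_act = np.tanh((w_in * segments).sum(axis=1))  # one activation per branch
y = float(w_out @ branch_act)                        # single output unit
print(y)
```

Contrast this with a fully connected layer, where a single first-layer weight reaches the output through every unit of the next layer; the tree’s single-route property is what makes credit assignment local to one path.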

“This finding paves the way for efficient, biologically-inspired new AI hardware and algorithms.”

PhD student and co-author Yuval Meir added, “Highly pruned tree architectures represent a step toward a plausible biological realization of efficient dendritic tree learning by a single or several neurons, with reduced complexity and energy consumption, and biological realization of backpropagation mechanism, which is currently the central technique in AI.”

A previous study by Kanter and his experimental research team – led by Dr. Roni Vardi – found evidence for sub-dendritic adaptation using neuronal cultures, as well as other anisotropic features of neurons such as distinct spike waveforms, refractory periods, and peak transmission rates.

For highly pruned tree training to work efficiently, a new type of hardware is required, one that differs from the GPUs tailored to the current DL strategy.

To successfully simulate brain dynamics, the researchers conclude, such hardware must be developed.

Source: 10.1038/s41598-023-27986-6

Image Credit: Getty

