The Unexpected Complexity of Single Neuron Computation

21 September 2021
Giulio Prisco

Neuron Complexity

“[Our] work about ‘Single Cortical Neurons as Deep Artificial Neural Networks’ was finally published in Neuron,” tweeted David Beniaguev earlier this month. Beniaguev is a PhD student in Computational Neuroscience at Hebrew University, Israel.

The paper, published in Neuron, is authored by Beniaguev, Idan Segev, and Michael London, all at Hebrew University. It follows a preprint with the same title posted on bioRxiv in 2019.

So the research results were first released two years ago. But publication in a prestigious peer-reviewed scientific journal like Neuron gives them greater visibility and credibility.

“Our approach is to use deep learning capabilities to create a computerized model that best replicates the I/O properties of individual neurons in the brain,” Beniaguev explains in a press release issued by Hebrew University, titled “Neurons are much smarter than we thought.”

A neuron “receives electrochemical signals through its dendrites, filters those signals, and then selectively passes along its own signals (or spikes),” explains SingularityHub. The researchers “built a model of a biological neuron (in this case, a pyramidal neuron from a rat’s cortex).” Then the researchers investigated neural network algorithms that could most accurately approximate the model.
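
To give a sense of how such input/output pairs might be generated, here is a minimal Python sketch. It uses a simple leaky integrate-and-fire neuron as a stand-in "teacher," not the detailed compartmental pyramidal-cell model the researchers actually simulated; the synapse count, time constant, and synaptic weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the authors' detailed compartmental model): a leaky
# integrate-and-fire neuron used as a stand-in "teacher" that turns incoming
# spike trains into output spikes, producing I/O pairs an ANN could be fit to.

rng = np.random.default_rng(0)

n_synapses = 100      # number of input synapses (illustrative value)
n_steps = 10_000      # simulation length in 1 ms time steps
tau = 20.0            # membrane time constant (ms)
threshold = 1.0       # spike threshold (arbitrary units)
weights = rng.normal(0.0, 0.05, n_synapses)  # fixed random synaptic weights

# Random presynaptic spike trains: shape (n_steps, n_synapses), values 0 or 1.
inputs = (rng.random((n_steps, n_synapses)) < 0.01).astype(float)

v = 0.0
output_spikes = np.zeros(n_steps)
for t in range(n_steps):
    v += (-v / tau) + inputs[t] @ weights   # leaky integration of weighted input
    if v >= threshold:                      # threshold crossing -> output spike
        output_spikes[t] = 1.0
        v = 0.0                             # reset after spiking

# (inputs, output_spikes) is now a supervised dataset: the approximating
# network's job is to predict output_spikes from the recent history of inputs.
print(int(output_spikes.sum()), "output spikes")
```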

The researchers found that, to replicate the I/O of a simulated “smart neuron,” a neural network must be 5-7 layers deep. Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” said Beniaguev as reported by Quanta. The researchers are convinced that artificial neural networks built from such smart neurons would make it possible to reproduce more features of the human brain.
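
Here is a hedged sketch of what such a deep approximating network could look like: a 7-layer stack of temporal (1D) convolutions in PyTorch that maps the recent history of synaptic inputs to a per-time-step spike probability. The layer count echoes the “5-7 layers deep” finding above, but the channel widths, kernel size, and training target used here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch: a 7-layer temporal (1D) convolutional network that
# predicts output-spike probability at each time step from synaptic inputs.
# Layer count follows the "5-7 layers deep" finding; widths and kernel size
# are assumptions, not the published model's exact configuration.

n_synapses = 100   # input channels: one per synapse (illustrative)
hidden = 128       # hidden channels per layer (illustrative)
kernel = 35        # temporal kernel size in time steps (illustrative)

layers = []
in_ch = n_synapses
for _ in range(7):                         # stack of 7 temporal conv layers
    layers += [nn.Conv1d(in_ch, hidden, kernel, padding=kernel // 2), nn.ReLU()]
    in_ch = hidden
layers += [nn.Conv1d(hidden, 1, 1)]        # 1x1 conv: spike logit per time step
model = nn.Sequential(*layers)

# Toy forward pass: batch of 8 input spike trains, 1000 time steps each.
x = (torch.rand(8, n_synapses, 1000) < 0.01).float()
spike_logits = model(x)                    # shape: (8, 1, 1000)
loss = nn.functional.binary_cross_entropy_with_logits(
    spike_logits, torch.zeros_like(spike_logits))  # placeholder target
print(spike_logits.shape, loss.item())
```

In practice the targets would be the simulated neuron's recorded output spikes (and membrane voltage), and the depth could be varied to test how shallow a network can get while still matching the neuron's behavior.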

“An illustration of this would be for the artificial network to recognize a cat with fewer examples and to perform functions like internalizing language meaning,” adds Segev. “The end goal would be to create a computerized replica that mimics the functionality, ability and diversity of the brain - to create, in every way, true artificial intelligence.”

This research builds “a bridge from biological neurons to artificial neurons,” said a neuroscientist not directly involved in this study, Quanta reports. Other neuroscientists are similarly intrigued by the results.

However, much work remains to be done. “The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” says London. “We tried many, many architectures with many depths and many things, and mostly failed.”

“Our brain developed methods to build artificial networks that replicate its own learning capabilities and this in return allows us to better understand the brain and ourselves,” Beniaguev concludes.

I find this intriguing. And I think this research is a step toward closing the loop: using our understanding of how the brain works to build artificial systems that work like the brain.

This is the goal of neuromorphic engineering research. It could deliver not only human-like Artificial Intelligence (AI), but also better brain-computer interfaces. And it suggests future possibilities that look like science fiction becoming science fact.
