Synapses need only a few bits

22nd September, 2015

Deep learning is possibly the most exciting branch of contemporary Machine Learning. Complex image analysis, speech recognition, and self-driving cars are just a few popular examples of the multitude of new applications where Machine Learning, and Deep Learning in particular, show their amazing capabilities.

Deep neural networks are made up of many layers of artificial neurons with hundreds of millions of connections between them. The structure of such deep networks is itself reminiscent of the brain, where billions of neurons are each connected through thousands of synaptic contacts. These types of networks can be trained to perform hard classification tasks over huge datasets, with the remarkable property of being able to extract information from examples and thus generalize to unseen items.

The way neural networks learn is by tuning their multitude of connections, or synaptic weights, following the signal provided by a learning algorithm that reacts to the input data. This process is in some respects similar to what happens in the nervous system, where plastic modifications of synapses are indeed considered to be responsible for the formation and stabilization of memories.
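To make the idea of "learning by tuning weights" concrete, here is a minimal, purely illustrative sketch in Python (using NumPy); the problem sizes and the data are made up, and the rule shown is the classic perceptron update, not the algorithm discussed in the study. The weights are adjusted only when the current weights misclassify an example, so the update signal comes entirely from the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: N synapses, P labeled examples (all values here are arbitrary).
N, P = 100, 200
X = rng.choice([-1.0, 1.0], size=(P, N))        # input patterns
teacher = rng.standard_normal(N)                # hidden rule generating the labels
y = np.sign(X @ teacher)

w = np.zeros(N)                                 # synaptic weights to be tuned

# Perceptron rule: whenever an example is misclassified, nudge every weight
# in the direction that would have reduced that error.
for epoch in range(100):
    errors = 0
    for x, label in zip(X, y):
        if np.sign(w @ x) != label:
            w += label * x                      # update driven by the data
            errors += 1
    if errors == 0:                             # every example is now classified correctly
        break
```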

The problem of devising efficient and scalable learning algorithms for realistic synapses is therefore crucial for both technological and biological applications.

In a recent study, published in the prestigious journal Physical Review Letters (American Physical Society, http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.115.128101), researchers from Politecnico di Torino and the Human Genetics Foundation (Italy) showed that extremely simple synaptic contacts, even 1-bit, switch-like synapses, can be used efficiently for learning in large-scale neural networks, and can lead to unanticipated computational performance. The study was conducted by a research group led by Riccardo Zecchina and composed of Carlo Baldassi, Alessandro Ingrosso, Carlo Lucibello and Luca Saglietti.

Until now, theoretical analyses suggested that learning with simple, discretized synaptic connections was exceedingly difficult and thus impractical. Using tools from the statistical physics of disordered systems, the researchers found that the problem can instead become extremely simple. The authors give an in-depth theoretical explanation of why this is the case and provide concrete learning strategies.
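As a purely illustrative sketch of what learning with 1-bit synapses can look like (this is not the algorithm analyzed in the paper; the update rule, the hidden-state bound H and all problem sizes are assumptions made here for demonstration), one can keep a small integer counter per synapse and use only its sign for classification:

```python
import numpy as np

rng = np.random.default_rng(1)

# Binary-synapse toy: each synapse stores a small integer counter h, but the
# weight actually used for classification is only its sign, i.e. a single bit.
# N is odd so that +/-1 dot products can never be exactly zero.
N, P, H = 1001, 600, 15
X = rng.choice([-1, 1], size=(P, N))
y = np.sign(X @ rng.choice([-1, 1], size=N))    # labels from a random binary "teacher"

h = np.zeros(N, dtype=int)                      # hidden integer states, bounded in [-H, H]
for sweep in range(500):
    errors = 0
    for x, label in zip(X, y):
        w = np.where(h >= 0, 1, -1)             # the 1-bit synapses actually used
        if np.sign(w @ x) != label:
            errors += 1
            h = np.clip(h + label * x, -H, H)   # push counters toward fixing the error
    if errors == 0:
        break

print("misclassified examples in the last sweep:", errors)
```

Note that the classifier itself only ever uses one bit per synapse; the bounded counters are just auxiliary bookkeeping used during training in this toy example.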

These new results are consistent with biological considerations and recent experimental evidence suggesting that synaptic weights are not arbitrarily graded, but store only a few bits each. Still, the most immediate follow-ups will be technological: hardware implementations of learning algorithms relying on extremely simple synapses can overcome many of the computational bottlenecks (memory and speed) that the next generation of learning algorithms will have to face.
