How The Brain Learns
Jan 23, 2024
American physicist Emerson Pugh famously described the fundamental problem in learning about the inner workings of the brain: “If the human brain were so simple that we could understand it, we would be so simple that we couldn't.”
But that does not mean that we are adrift on a scientific sea without a paddle. When it comes to broad topics like how the brain learns, understanding small pieces can give us a glimpse of the overall shape of the issue. Recognizing and studying the organizing principles by which the brain grows, develops, and learns about the world around it gives us a powerful framework upon which to grow and develop our understanding in turn. A crucial organizing principle describing how the brain learns, recognized early in neuroscience, is called Hebbian learning – the idea expressed in the mnemonic that “cells that fire together, wire together.”
Neurons speak in voltages
All cells are electrical – they have a voltage difference, a membrane potential, between the interior and exterior, just like a battery. Usually that membrane potential stays about the same all the time. However, some cells can rapidly change that voltage to send and receive electrical signals.
In neurons, those electrical signals come in the form of action potentials. Neurons are carefully calibrated such that their membrane potential will tend to stay at a fixed resting potential. But when it increases just enough, and reaches a threshold, that opens the floodgates. The voltage quickly increases, then just as quickly decreases, overshoots a little, and returns to its resting potential again. This entire process happens over only three to four milliseconds. This very fast all-or-nothing spike in voltage is the action potential. Somewhat like the dots and dashes of Morse code, it is the basic unit of neuron communication. When it comes to neuroscience in general and Hebbian learning specifically, in which “cells that fire together, wire together,” these action potentials are what it means for a neuron to fire.
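The threshold-and-reset behavior described above can be sketched with a toy leaky integrate-and-fire model – a standard simplification from computational neuroscience, not a description of any real neuron. All the constants here are illustrative:

```python
def simulate(inputs, v_rest=-70.0, v_threshold=-55.0, leak=0.1):
    """Integrate input current each time step; fire an all-or-nothing
    spike when the membrane potential reaches threshold, then reset."""
    v = v_rest
    spike_times = []
    for t, current in enumerate(inputs):
        v += current - leak * (v - v_rest)  # input pushes v up; leak pulls it home
        if v >= v_threshold:                # floodgates open: action potential
            spike_times.append(t)
            v = v_rest                      # back to rest, ready to fire again
    return spike_times
```

With these made-up numbers, a weak steady input of 1.0 per step settles below threshold and never fires, while a strong input of 20.0 crosses threshold on the first step – the spike is all-or-nothing either way.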
A neuron sends those action potential signals via a long, thin projection called an axon. Axons can be very long – the longest in the human body are too thin to see unaided yet over three feet in length – in order to reach the targets of their signals. To receive these signals, a neuron has shorter projections called dendrites arranged in a dendritic tree, forming a dense net where they connect with axons from other neurons. These connections between axons and dendrites are specialized structures called synapses. At these structures, small molecules called neurotransmitters are released to carry the message of the sending neuron's action potential. Receptors on the receiving neuron's dendrites detect those neurotransmitters and encourage that second neuron to fire its own action potential in response.
Changing the odds
But the responding action potential is not guaranteed. A single neuron can have thousands of synapses, and the contribution of just one will usually not make it fire an action potential. Just how likely it is that input at a particular synapse will make its neuron fire is referred to as the synapse's strength. Think, for example, about the synapse between a sensory neuron that detects when you touch a hot stove and the motor neuron that pulls your hand away. That synapse is very strong, because when you touch the hot stove, you want to pull your hand away, guaranteed, as soon as possible.
But a synapse's strength is not fixed – it can change over time by various methods – it is plastic. Things like increasing the amount of neurotransmitter released or the number of receptors ready to detect it can make a synapse stronger. Doing the opposite can make a synapse weaker. In our mnemonic, “cells that fire together, wire together,” this plastic change in synaptic strength is what it means to wire.
Learning to listen – and to ignore
So we can now understand what our mnemonic is about – what guiding principle shapes how the brain organizes itself. A fuller version goes something like this: “cells that fire together, wire together; cells that fire out of sync lose their link.” For all the effort we took to define its terms, Hebbian learning itself is straightforward. If two neurons tend to fire at the same time, one firing will make the other more and more likely to fire in response – but if they don't, one firing will become less and less able to make the other fire. In effect, each synapse becomes a coincidence detector, and that coincidence detection underlies associative learning.
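That rule of thumb can be sketched as a toy update for a single synapse's strength – a deliberately simplified illustration, with a made-up learning rate and decay:

```python
def hebbian_update(w, pre_fired, post_fired, lr=0.1, decay=0.01):
    """Strengthen the synapse when pre and post fire together;
    weaken it when only one of them fires."""
    if pre_fired and post_fired:
        w += lr            # fire together -> wire together
    elif pre_fired or post_fired:
        w -= decay         # out of sync -> lose their link
    return max(w, 0.0)     # strength can't go negative
```

Run over many firing events, coincident spikes ratchet the strength up while out-of-sync spikes erode it – the coincidence-detector behavior in miniature.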
Think about the classic experiment Pavlov did with his dogs. When a dog smells food, it drools. Whenever Pavlov fed his dogs, he would ring a bell. Eventually, the dogs would drool when they heard the bell – even if there was no food to be smelled. They had learned to associate the sound of the bell with the anticipation of being fed.
In those dogs, the olfactory (smell) neurons that fired when they smelled food were connected to the neurons that fired to make their salivary glands start producing all that drool – they had strong synaptic connections. But when Pavlov rang the bell, those firing salivary neurons were accompanied by auditory (sound) neurons that fired when they heard that bell. Over time, the synaptic connection between the auditory neurons and salivary neurons became stronger – and eventually, the auditory neurons firing when they heard the bell were enough to make the salivary neurons fire too.
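Pavlov's conditioning can be sketched as repeated pairings that grow a bell-to-saliva synapse until the bell alone crosses the firing threshold. This is a cartoon of the process, not a model of real dogs – every number here is illustrative:

```python
def condition(trials=30, lr=0.05, threshold=0.5):
    """Pair bell and food on every trial; Hebbian strengthening
    grows the bell->saliva synapse a little each time."""
    w_food, w_bell = 1.0, 0.0          # food synapse starts strong, bell weak
    for _ in range(trials):
        food, bell = 1.0, 1.0          # Pavlov rings the bell at feeding time
        drools = w_food * food + w_bell * bell > threshold
        if drools:                     # fire together -> wire together
            w_bell += lr * bell
    return w_bell

w_bell = condition()
# After training, the bell alone (no food) is enough to trigger drooling:
bell_alone_drools = w_bell * 1.0 > 0.5
```

Before training, the bell contributes nothing; after thirty paired trials, its synapse carries enough weight to trigger the response by itself.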
Practicing and memory
Of course, this kind of synaptic strengthening doesn't only happen in dogs. The adage that “practice makes perfect” is a good example of it in everyday life. The neurons that fire as you practice a sport, or the precise movements of fretwork on a guitar, or any other developed skill, are all connected. Repeating the action over and over makes those neurons fire in sync over and over – strengthening their synaptic connections and making their associated actions easier and easier.
Another example is memory. Think about how a particular taste or smell can make a half-forgotten memory come roaring back, as if it only happened yesterday. Or think on how just a couple notes or words can make a song play in your head over and over – even if you don't want to hear it! Exactly how memories work is still a very active part of neuroscience research, and it is increasingly clear that there are many different kinds of memory with different mechanisms underlying them – but the synaptic strength between different neurons is a crucial part of the process.
Getting the details right
Donald Hebb proposed this idea in 1949, and he was broadly correct, even as we learn more details about how neurons organize themselves and communicate with each other. Nowadays Hebbian learning is encompassed by the idea of spike-timing-dependent plasticity. More precisely than just “at the same time,” experiments have shown that what really strengthens a synapse is when the axonal neuron fires just a few milliseconds before the dendritic neuron does. These are examples of processes that result in long-term potentiation, or the opposing long-term depression, of synapses – terms describing synapses that become persistently stronger or weaker.
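The timing dependence can be sketched with the exponential window commonly used in computational models of spike-timing-dependent plasticity – again with illustrative amplitudes and time constant, not measured values:

```python
import math

def stdp_dw(dt_ms, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Weight change as a function of spike timing.
    dt_ms = t_post - t_pre, so a positive dt means the axonal (pre)
    neuron fired just before the dendritic (post) neuron."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation (LTP)
    return -a_minus * math.exp(dt_ms / tau_ms)       # depression (LTD)
```

Pre-before-post by a few milliseconds gives the largest strengthening; the effect fades as the gap grows, and the reverse order weakens the synapse instead.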
Some of the underlying molecular mechanisms are now increasingly well understood, as well. For example, the coincidence detection that defines Hebbian learning is thought to arise from the action of a particular receptor, named the NMDA (N-methyl-D-aspartate) receptor, which detects the neurotransmitter glutamate. NMDA receptors on the second neuron only fully open when two conditions coincide: the first neuron has released glutamate, and the second neuron's membrane potential is depolarized (closer to zero than normal), which removes magnesium ions that normally keep those receptors blocked. That is exactly the coincidence of factors that happens when the second neuron fires a few milliseconds after the first – the action potential in the first neuron released the glutamate, and the action potential in the second neuron depolarized it as those molecules of glutamate arrived at its receptors.
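That two-condition requirement makes the receptor behave like a logical AND gate, which can be sketched in a few lines – a cartoon of the mechanism, not a biophysical model:

```python
def nmda_receptor_conducts(transmitter_bound, depolarized):
    """The receptor conducts only when both conditions coincide:
    neurotransmitter from the first neuron is bound to the receptor,
    AND the second neuron is depolarized enough to expel the
    magnesium ion blocking the channel."""
    return transmitter_bound and depolarized
```

Either condition alone does nothing; only the coincidence opens the channel, which is what lets the synapse detect “fired together.”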
Turning principles into frameworks
This idea of Hebbian learning is only a small fraction of what medical science knows about how the brain organizes itself – which is in turn so little of all that there is to know about the brain, a fraction so tiny as to be almost nothing. In all likelihood the process will never be finished; there will always be more to learn, more ideas to explore. But this idea of strengthening and weakening the synaptic connections between neurons will remain crucial. It underlies Pavlov's experiments over a century ago, it underlies cutting edge research being done today, and it will underlie discoveries we cannot yet imagine. The point, of course, is the journey, and with this framework we can understand a great deal of the structure of the brain, and get a glimpse of its astonishing beauty.
Written by Robert Hubbard.
Edited by Kashish Sawant.
Images created by BioRender.
References
- Pugh, G. E. The Biological Origin of Human Values. New York: Basic Books, 1977; ch. 7, p. 154.
- Hebb, D. O. The Organization of Behavior. New York: Wiley & Sons, 1949.
- Alberts, B. Molecular Biology of the Cell. Garland Science, 2017.
- Levitan, I. B., & Kaczmarek, L. K. The Neuron: Cell and Molecular Biology. Oxford University Press, 2015.
- Gerstner, W. Hebbian learning and plasticity. In: Arbib, M., & Bonaiuto, J., eds. From Neuron to Cognition via Computational Neuroscience. Cambridge: MIT Press, 2016: 0-25.
- Munakata, Y., & Pfaffly, J. Hebbian learning and development. Developmental Science, 2004; 7(2), 141-148.
- Agliari, E., Aquaro, M., Barra, A., Fachechi, A., & Marullo, C. From Pavlov conditioning to Hebb learning. Neural Computation, 2023; 35(5), 930-957. doi: 10.1162/neco_a_01578.