
by Henrik D. Mettler (University of Bern), Virginie Sabado (University of Bern), Walter Senn (University of Bern), Mihai A. Petrovici (University of Bern and Heidelberg University) and Jakob Jordan (University of Bern)

Despite years of progress, we still lack a complete understanding of learning and memory. We leverage optimisation algorithms inspired by natural evolution to discover phenomenological models of brain plasticity and thus uncover clues to the underlying computational principles. We hope this accelerates progress towards deep insights into information processing in biological systems, with inherent potential for the development of powerful artificial learning machines.

What is the most powerful computing device in your home? Maybe your laptop or smartphone springs to mind, or possibly your new home automation system. Think again! With a power consumption of just 10 W, our brains can extract complex information from a high-throughput stream of sensory inputs like no other known system. But what enables this squishy mass of intertwined cells to perform the required computations? Neurons, capable of exchanging signals via short electrical pulses called "action potentials" or simply "spikes", are the main carriers and processors of information in the brain. It is the organised activity of several billion neurons arranged in intricate networks that underlies the sophisticated behaviour of humans and other animals. However, as physics has long taught us, the nature of matter is determined by the interactions of its constituents. In neural networks, both biological and artificial, the interaction between neurons is mediated by synapses, and it is the evolution of synapses over time that allows sophisticated behaviour to be learned in the first place.

Synaptic plasticity describes how and why changes in synaptic strength take place. Early work on uncovering general principles governing synaptic plasticity goes back to the 1950s, with one of the best-known results often summarised by the mnemonic "what fires together, wires together". However, this purely correlation-driven learning is but one aspect of a much richer repertoire of dynamics exhibited by biological synapses. More recent theories of synaptic plasticity have therefore incorporated additional external signals such as errors or rewards. These theories connect the systems-level perspective ("what should the system do?") with the network level ("which dynamics should govern neuronal and synaptic quantities?"). Unfortunately, the design of plasticity rules remains a laborious, manual process, and the space of possible rules is vast, as the continuous development of new models suggests.
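To make this distinction concrete, the following minimal sketch (illustrative only, with invented variable names; not one of our models) contrasts a purely correlation-driven update with a reward-modulated variant in which an external scalar signal gates the same correlation term:

import numpy as np

rng = np.random.default_rng(0)

# Toy quantities for a single synapse.
x, y = rng.random(), rng.random()  # pre-/postsynaptic activity
w = 0.5                            # synaptic weight
r = 1.0                            # external reward signal
eta = 0.01                         # learning rate

dw_hebbian = eta * x * y           # "what fires together, wires together"
dw_reward = eta * r * x * y        # the same term, gated by reward

w += dw_reward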

In the NeuroTMA group at the Department of Physiology, University of Bern, we are working on supporting this manual process with powerful automated search methods. We leverage modern evolutionary algorithms to discover suitable plasticity models that allow a simulated neuronal network architecture to solve synthetic tasks from a specific family, for example navigating towards a goal position in an artificial two-dimensional environment. In particular, we use genetic programming, an algorithm for searching through mathematical expressions loosely inspired by natural evolution (Figure 1), to generate human-interpretable models. This ensures our discoveries are amenable to intuitive understanding, which is fundamental for successful communication and human-guided generalisation. Furthermore, this interpretability allows us to extract the key interactions between biophysical variables that give rise to plasticity. Such insights provide hints about the underlying biophysical processes and also suggest new approaches for experimental neuroscience (Figure 2).

Figure 1: Schematic overview of our evolutionary algorithm. From an initial population of synaptic plasticity rules (g, h), new solutions (offspring, g', h') are created by mutations. Each rule is then evaluated using a particular network architecture on a predefined task, resulting in a fitness value (Fg, Fg', Fh, Fh'). High-scoring rules are selected to become the new parent population and the process is repeated until a plasticity rule reaches a target fitness.
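As a rough illustration of the loop in Figure 1, the following self-contained Python sketch evolves a tiny population of plasticity rules, each encoded as a small expression tree, towards a dummy target rule. It is a toy stand-in for the Cartesian genetic programming setup we actually use; the encoding, the task and all names are invented for illustration:

import numpy as np

rng = np.random.default_rng(1)

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
TERMINALS = ["pre", "post", "reward", 1.0]

def random_rule(depth=2):
    # Sample a random expression tree over the local synaptic variables.
    if depth == 0 or rng.random() < 0.3:
        return TERMINALS[rng.integers(len(TERMINALS))]
    op = list(OPS)[rng.integers(len(OPS))]
    return (op, random_rule(depth - 1), random_rule(depth - 1))

def evaluate(rule, env):
    # Evaluate an expression tree given concrete values for pre, post, reward.
    if isinstance(rule, tuple):
        op, a, b = rule
        return OPS[op](evaluate(a, env), evaluate(b, env))
    return env[rule] if isinstance(rule, str) else rule

def mutate(rule):
    # Replace a random subtree by a freshly sampled one.
    if not isinstance(rule, tuple) or rng.random() < 0.5:
        return random_rule()
    return (rule[0], mutate(rule[1]), rule[2])

def fitness(rule):
    # Dummy task: match a reward-modulated Hebbian target, dw = pre * post * reward.
    err = 0.0
    for _ in range(20):
        env = {"pre": rng.random(), "post": rng.random(), "reward": rng.random()}
        err += (evaluate(rule, env) - env["pre"] * env["post"] * env["reward"]) ** 2
    return -err

parents = [random_rule() for _ in range(4)]
for generation in range(100):
    offspring = [mutate(p) for p in parents for _ in range(2)]
    candidates = parents + offspring
    candidates.sort(key=fitness, reverse=True)  # high-scoring rules survive
    parents = candidates[:4]

print("best rule:", parents[0], "fitness:", fitness(parents[0]))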

Figure 2: Symbiotic interaction between experimental and theoretical/computational neuroscience. Experimental neuroscientists provide observations about single neuron and network behaviours. Theoretical neuroscientists develop models to explain the data and develop experimentally testable hypotheses, for example about the time evolution of neuronal firing rates due to ongoing synaptic plasticity.



Two of our recent manuscripts highlight the potential of our evolving-to-learn (E2L) approach by applying it to typical learning scenarios in both spiking and rate-based neuronal network models. In [1], we discovered previously unknown mechanisms for learning efficiently from rewards, recovered efficient gradient-descent methods for learning from errors, and uncovered various functionally equivalent spike-timing-dependent plasticity rules with tuned homeostatic mechanisms. In [2], we demonstrated how E2L can incorporate statistical properties of the dataset to evolve plasticity rules that learn faster than some of their more general, manually derived counterparts.
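For readers unfamiliar with the building blocks mentioned above, the following generic sketch shows a textbook pair-based spike-timing-dependent plasticity update combined with a simple homeostatic term. It illustrates the kind of expression such rules are built from; it is not one of the specific rules reported in [1], and all parameter values are arbitrary:

import numpy as np

def stdp_update(delta_t, rate_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0,
                target_rate=5.0, k_homeo=1e-4):
    # Weight change for a pre/post spike pair separated by delta_t (ms).
    # delta_t > 0: pre before post -> potentiation
    # delta_t < 0: post before pre -> depression
    if delta_t > 0:
        dw = a_plus * np.exp(-delta_t / tau_plus)
    else:
        dw = -a_minus * np.exp(delta_t / tau_minus)
    # Homeostatic term: nudge the weight so that the postsynaptic
    # firing rate (Hz) drifts towards a target value.
    dw += k_homeo * (target_rate - rate_post)
    return dw

print(stdp_update(+10.0, rate_post=3.0))   # potentiation, rate below target
print(stdp_update(-10.0, rate_post=8.0))   # depression, rate above target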

Since our approach requires a large number of neuronal network simulations, we make use of modern HPC infrastructure, such as Piz Daint at the Swiss National Supercomputing Centre (CSCS), as well as high-performance software for the simulation of neuronal networks [3] and tools from the Scientific Python ecosystem. To support the specific needs of our research, we have developed an open-source, pure-Python library for genetic programming [L1]. We believe that the open nature of such community codes holds significant potential to accelerate scientific progress in the computational sciences.

In the future, we will explore the potential of neuromorphic systems, dedicated hardware for the accelerated simulation of neuronal network models. To this end, we collaborate closely with hardware experts at the Universities of Heidelberg, Manchester and Sussex.

In summary, our E2L approach represents a powerful addition to the neuroscientist’s toolbox. By accelerating the design of mechanistic models of synaptic plasticity, it will contribute not only new and computationally powerful learning rules, but, importantly, also experimentally testable hypotheses for synaptic plasticity in biological neuronal networks. This effectual loop between theory and experiments will hopefully go a long way towards unlocking the mysteries of learning and memory in healthy and diseased brains.

This research has received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3). We would like to thank Maximilian Schmidt for our continued collaboration and the Manfred Stärk Foundation for ongoing support.

Links:
[L1] https://github.com/Happy-Algorithms-League/hal-cgp
[L2] NeuroTMA group: https://physio.unibe.ch/~petrovici/group/
[L3] (Graphics) www.irasutoya.com

References:
[1] J. Jordan et al.: “Evolving to learn: discovering interpretable plasticity rules for spiking networks”, 2020, arXiv:2005.14149 [q-bio.NC].
[2] H.D. Mettler et al.: “Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming”, 2021, arXiv:2102.04312 [cs.NE].
[3] J. Jordan et al.: “Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers”, Frontiers in Neuroinformatics, 12, 2018, doi:10.3389/fninf.2018.00002.

Please contact:
Henrik D. Mettler,
NeuroTMA group, Department of Physiology, University of Bern

 
