
by Martin Nilsson (RISE Research Institutes of Sweden) and Henrik Jörntell (Lund University, Department of Experimental Medical Science)

Biology-inspired computing is often based on spiking networks, but can we improve efficiency by moving to higher levels of abstraction? To do so, we need to explain the precise meaning of the spike trains that biological neurons use for mutual communication. In a collaboration between RISE and Lund University, we found a spectacular match between a mechanistic, theoretical model with only three parameters on the one hand and in vivo neuron recordings on the other, providing a clear picture of exactly what biological neurons “do”, i.e., what they communicate to each other.

Most neuron models are empirical or phenomenological, because this allows a comfortable match with experimental data: if nothing else works, additional parameters can be added to the model until it fits the data sufficiently well. The disadvantage of this approach is that an empirical model cannot explain the neuron; we cannot escape the uneasiness of perhaps having missed some important hidden feature of the neuron. A proper explanation requires a mechanistic model, which is based instead on the neuron’s underlying biophysical mechanisms. However, it is hard to find a mechanistic model at an appropriate level of detail that also matches data well. What we ultimately want, of course, is a simple mechanistic model that matches the data as well as any empirical model does.

We struggled for a considerable time to find such a mechanistic model of the cerebellar Purkinje neuron (Figure 1a), which we use as a model neuron system. Biological experiments revealed, at an early stage, that the neuron heavily low-pass filters its input, so the cause of the high-frequency component of the interspike-interval variability could clearly not be the input but had to be found locally. The breakthrough came with the mathematical solution of the long-standing first-passage-time problem for stochastic processes with moving boundaries [1]. This method enabled us to solve the model equations in an instant and allowed us to correct and refine the model against a large amount of experimental data.
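
To give a concrete feel for what a first-passage-time problem is, the sketch below simply steps a drifted Wiener process forward in time until it hits a moving boundary and records the hitting time. It is a brute-force Monte Carlo stand-in, with entirely hypothetical drift, noise, and boundary parameters; the exact and fast solution is the moving-eigenvalue method of [1].

```python
import numpy as np

def first_passage_times(n_trials=10_000, dt=1e-4, t_max=2.0,
                        mu=1.0, sigma=0.3,
                        boundary=lambda t: 1.0 - 0.2 * t):
    """Monte Carlo first-passage times of a drifted Wiener process
    X(t) = mu*t + sigma*W(t) to a moving boundary b(t).
    Brute-force Euler-Maruyama; illustration only."""
    rng = np.random.default_rng(0)
    n_steps = int(t_max / dt)
    fpt = np.full(n_trials, np.nan)        # NaN = no crossing before t_max
    x = np.zeros(n_trials)
    alive = np.ones(n_trials, dtype=bool)  # trials that have not yet crossed
    for k in range(1, n_steps + 1):
        t = k * dt
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
        crossed = alive & (x >= boundary(t))
        fpt[crossed] = t
        alive &= ~crossed
        if not alive.any():
            break
    return fpt

fpt = first_passage_times()
valid = fpt[~np.isnan(fpt)]
print(f"crossed: {valid.size} of {fpt.size}, mean hitting time: {valid.mean():.3f} s")
```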

We eventually found that the neuron’s spiking can be accurately characterised by a simple mechanistic model with only three free parameters. Crucially, we found that the model necessarily requires three compartments and must be stochastic. The division into three compartments is shown in Figure 1b: a distal compartment consisting of the dendrite portions far from the soma; a proximal compartment consisting of the soma and the nearby portions of the dendrites; and an axon-initial-segment compartment consisting of the initial part of the axon. One way to describe the model is as a trisection of the classical Hodgkin-Huxley model, obtained by inserting two axial resistors and taking into account the stochastic behaviour of the ion channels in the proximal compartment.
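
The exact equations and parameter values are given in [2]; purely as a schematic of such a trisection, a three-compartment circuit with distal (d), proximal (p), and axon-initial-segment (a) potentials, coupled through two axial resistances R1 and R2 and with a fluctuating channel current confined to the proximal compartment, takes the generic form

```latex
\begin{aligned}
C_d \dot V_d &= -g_d (V_d - E_d) + I_{\mathrm{in}}(t) - \frac{V_d - V_p}{R_1} \\
C_p \dot V_p &= -g_p (V_p - E_p) + \frac{V_d - V_p}{R_1} - \frac{V_p - V_a}{R_2} + I_{\mathrm{ch}}(t) \\
C_a \dot V_a &= -g_a (V_a - E_a) + \frac{V_p - V_a}{R_2}
\end{aligned}
```

with a spike emitted when V_a first crosses its threshold. Here the stochastic channel current I_ch(t) is what ultimately drives the interspike-interval variability, while the membrane parameters are placeholders rather than the calibrated values of [2].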

From the theoretical model we could compute the theoretical probability distribution of the interspike intervals (ISIs). Comparing this with the ISI histograms (or, better, the kernel density estimators) of long in vivo recordings (1,000–100,000 ISIs), we found the match to be consistently and surprisingly accurate (Figure 1c). Using this matching to estimate the model parameters as an inverse problem, we found that the estimation error was within a factor of two of the Cramér-Rao lower bound for all recordings.
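
The comparison methodology can be illustrated with a small, self-contained sketch. The model’s actual ISI density is derived in [2]; here, purely as a stand-in, we use the inverse-Gaussian (Wald) density, which is the first-passage-time law of a drifted Wiener process to a fixed boundary, and compare it against the histogram and a kernel density estimator of surrogate “recorded” ISIs.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Stand-in "theory": inverse-Gaussian (Wald) ISI density, the
# first-passage-time law of a drifted Wiener process to a *fixed*
# boundary. The model in [2] involves a moving boundary, so this is
# only to illustrate how histogram, KDE, and theory are compared.
mean_isi, shape = 0.020, 0.08                 # hypothetical: 20 ms mean ISI

def wald_pdf(t):
    return np.sqrt(shape / (2 * np.pi * t**3)) * np.exp(
        -shape * (t - mean_isi) ** 2 / (2 * mean_isi**2 * t))

rng = np.random.default_rng(0)
isi = rng.wald(mean_isi, shape, size=10_000)  # surrogate "recorded" ISIs

hist, edges = np.histogram(isi, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
kde = gaussian_kde(isi)                       # smoother than the raw histogram

print("max |histogram - theory|:", np.abs(hist - wald_pdf(centers)).max())
print("max |KDE - theory|      :", np.abs(kde(centers) - wald_pdf(centers)).max())
```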

It seems that the distal compartment is responsible for integrating the input; the proximal compartment generates a ramp and samples it; and the axon-initial-segment compartment detects the threshold passage and generates the spike.
We conclude that the neuron’s function appears rather straightforward, and that this indicates a potential to proceed beyond the spiking level towards higher levels of abstraction, even for biological neurons. The neuron’s inherent stochasticity is unavoidable and must be taken into account, but there is no need to worry excessively about hidden, as-yet-undiscovered neuron features that would disrupt our view of what neurons are capable of. The reason is the nearly perfect match between model and data: as we show using the Cramér-Rao lower bound, the match cannot be significantly improved even if the model is elaborated.
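
This functional picture can be caricatured in a few lines of code: a toy discretisation with made-up parameters, making no claim to reproduce the calibrated model of [2], already produces irregular spike trains whose ISI statistics can be collected as above.

```python
import numpy as np

def simulate_isis(i_in=1.2, t_total=60.0, dt=1e-4, seed=1,
                  tau_d=0.1,       # distal low-pass time constant (s), hypothetical
                  ramp_gain=40.0,  # proximal ramp slope per unit of distal drive
                  noise=3.0,       # channel-noise amplitude on the ramp
                  theta=1.0):      # AIS threshold on the ramp variable
    """Toy three-stage sketch: the distal compartment low-pass filters
    the input, the proximal compartment builds a noisy ramp from it,
    and the axon initial segment emits a spike (and resets the ramp)
    at the first threshold crossing. All numbers are made up."""
    rng = np.random.default_rng(seed)
    v_d, ramp, spikes = 0.0, 0.0, []
    for k in range(int(t_total / dt)):
        v_d += dt / tau_d * (i_in - v_d)       # distal: low-pass integration
        ramp += ramp_gain * v_d * dt + noise * np.sqrt(dt) * rng.standard_normal()
        if ramp >= theta:                      # AIS: threshold detection
            spikes.append(k * dt)
            ramp = 0.0                         # reset after the spike
    return np.diff(spikes)

isis = simulate_isis()
print(f"{isis.size} ISIs, mean {isis.mean()*1e3:.1f} ms, CV {isis.std()/isis.mean():.2f}")
```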

The major limitation is that we assume stationary input. This is by design, because we want to eliminate uncontrollable error sources such as cortical input; in the cerebellum, this can be achieved experimentally by decerebration. However, given the distal compartment’s low-pass filtering properties, it is straightforward to generalise the model to accept non-stationary input using a pseudo-stationary approach.
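
A minimal sketch of such a pseudo-stationary treatment, assuming nothing more than a first-order low-pass stage (with a hypothetical time constant) standing in for the distal compartment and an arbitrary stationary input-to-rate function, could look as follows.

```python
import numpy as np

def pseudo_stationary_rate(i_of_t, dt=1e-3, tau_d=0.1,
                           rate=lambda i: 50.0 * np.log1p(np.exp(i - 1.0))):
    """Pseudo-stationary sketch: low-pass filter a time-varying input
    through a first-order stage standing in for the distal compartment,
    then evaluate a stationary input-to-rate function along the filtered
    trajectory. Both tau_d and rate() are hypothetical placeholders."""
    filtered = np.empty_like(i_of_t)
    v = i_of_t[0]
    for k, i in enumerate(i_of_t):
        v += dt / tau_d * (i - v)        # distal low-pass filtering
        filtered[k] = v
    return rate(filtered)                # instantaneous firing-rate estimate

# Example: a step input switched on at t = 0.5 s
t = np.arange(0.0, 2.0, 1e-3)
i_of_t = np.where(t < 0.5, 0.5, 2.0)
r = pseudo_stationary_rate(i_of_t)
print(f"rate before the step: {r[400]:.1f} Hz, after settling: {r[-1]:.1f} Hz")
```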

So, what do the neurons do, then? In brief, it turns out that the Purkinje neurons first soft-threshold the internal potential and then encode it using pulse-frequency modulation, dithered by channel noise to reduce distortion. And that is it! One of the three free parameters is the input, and the other two correspond to the soft-threshold function’s gain (slope) and offset (bias), respectively. Please refer to [2] for details, or to [L1] for a popular description without formulas.
From a technical point of view, it is interesting to note that the soft-thresholding function is nearly identical to the rectified linear unit (ReLU), and even more so to the exponential, or smooth, soft-thresholding function that has recently received much attention in the machine-learning field.
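
For comparison, the small sketch below contrasts a hard rectifier with a softplus-style smooth soft-threshold parameterised by gain and offset; the softplus form is assumed here for illustration and is not necessarily the exact function derived in [2].

```python
import numpy as np

def relu(x, gain=1.0, offset=0.0):
    """Hard rectifier: zero below the offset, linear above it."""
    return gain * np.maximum(x - offset, 0.0)

def soft_threshold(x, gain=1.0, offset=0.0, smooth=1.0):
    """Softplus-style smooth soft-threshold (assumed form)."""
    return gain * smooth * np.log1p(np.exp((x - offset) / smooth))

x = np.linspace(-3, 5, 9)
print(np.round(relu(x, gain=2.0, offset=1.0), 2))
print(np.round(soft_threshold(x, gain=2.0, offset=1.0, smooth=0.5), 2))
# Far above the offset the two curves coincide; near the offset the smooth
# version rounds the corner, much as channel noise dithers a hard threshold
# into a soft one.
```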

The next step is to investigate the implications for ensembles of neurons. Is it possible to formulate an abstraction that treats such an assembly of neurons, now known to behave as ReLUs, as a single unit, without considering each neuron individually?
This research was partially funded by the European Union FP6 IST Cognitive Systems Initiative research project SENSOPAC, “SENSOrimotor structuring of Perception and Action for emerging Cognition”, and the FP7 ICT Cognitive Systems and Robotics research project THE, “The Hand Embodied”, under grant agreements 028056 and 248587, respectively.

Figure 1: (a) Confocal photomicrograph of Purkinje neuron. (b) Proposed division of neuron into three compartments. (c) Example of the match between theoretical model (red solid trace) and experimental interspike-interval histogram (blue bars; black dashed trace for kernel density estimator). Image credits: CC BY 4.0 [2].

Link:
[L1] https://www.drnil.com/#neurons-doing-what  (retrieved 2021-03-16)

References:
[1] M. Nilsson: “The moving-eigenvalue method: Hitting time for Itô processes and moving boundaries”, Journal of Physics A: Mathematical and Theoretical (Open Access), October 2020. DOI: 10.1088/1751-8121/ab9c59
[2] M. Nilsson and H. Jörntell: “Channel current fluctuations conclusively explain neuronal encoding of internal potential into spike trains”, Physical Review E (Open Access), February 2021. DOI: 10.1103/PhysRevE.103.022407

Please contact:
Martin Nilsson, RISE Research Institutes of Sweden
