Galves–Löcherbach model

The Galves–Löcherbach model (or GL model) is a mathematical model for a network of neurons with intrinsic stochasticity. In the most general definition, a GL network consists of a countable number of elements (idealized ''neurons'') that interact by sporadic nearly-instantaneous discrete events (''spikes'' or ''firings''). At each moment, each neuron ''N'' fires independently, with a probability that depends on the history of the firings of all neurons since the last time ''N'' fired. Thus each neuron "forgets" all previous spikes, including its own, whenever it fires. This property is a defining feature of the GL model. In specific versions of the GL model, the past network spike history since the last firing of a neuron ''N'' may be summarized by an internal variable, the ''potential'' of that neuron, which is a weighted sum of those spikes. The potential may include the spikes of only a finite subset of other neurons, thus modeling arbitrary synapse topologies. In particular, the GL model includes as a special case the general leaky integrate-and-fire neuron model.


Formal definition

The GL model has been formalized in several different ways; the notation below is borrowed from several of those sources.

The GL network model consists of a countable set of neurons with some set I of indices. The state is defined only at discrete sampling times, represented by integers, with some fixed time step \Delta. For simplicity, these times are assumed to extend to infinity in both directions, implying that the network has existed forever. In the GL model, all neurons are assumed to evolve synchronously and atomically between successive sampling times. In particular, within each time step, each neuron may fire at most once. A Boolean variable X_i[t] denotes whether the neuron i\in I fired (X_i[t]=1) or not (X_i[t]=0) between sampling times t\in\mathbb{Z} and t+1.

Let X[t':t] denote the matrix whose rows are the histories of all neuron firings from time t' to time t inclusive, that is

:X[t':t] \;=\; (X_i[s])_{i\in I,\; t'\le s\le t}

and let X[-\infty:t] be defined similarly, but extending infinitely far into the past. Let \tau_i[t] be the time of the last firing of neuron i before time t, that is

:\tau_i[t] \;=\; \max\,\{\, s < t \;:\; X_i[s] = 1 \,\}.

Then the general GL model says that

:\Pr\bigl(\,X_i[t] = 1 \,\big|\, X[-\infty:t-1]\,\bigr) \;=\; \Phi_i\bigl(X[\tau_i[t]:t-1]\bigr)

Moreover, the firings in the same time step are conditionally independent, given the past network history, with the above probabilities. That is, for each finite subset K\subset I and any configuration a_i\in\{0,1\}, i\in K, we have

:\Pr\biggl(\,\bigcap_{k\in K}\bigl\{X_k[t]=a_k\bigr\} \,\Big|\, X[-\infty:t-1]\,\biggr) \;=\; \prod_{k\in K} \Pr\bigl(\,X_k[t]=a_k \,\big|\, X[\tau_k[t]:t-1]\,\bigr)
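The synchronous update just described can be sketched in code. This is a minimal illustrative sketch for a finite network; the names `gl_step`, `windows`, and `phi` are assumptions for this example, not notation from the original papers.

```python
import random

def gl_step(windows, phi, rng=random):
    """One synchronous sampling step of a finite GL network (sketch).

    windows[i] -- list of past network spike vectors since neuron i's own
                  last firing, i.e. the relevant history X[tau_i[t] : t-1].
    phi(i, window) -- the firing probability Phi_i as a function of that
                  history; it must return a value in [0, 1].
    """
    n = len(windows)
    # Given the past, the spike indicators X_i[t] are independent
    # Bernoulli variables with probabilities Phi_i(X[tau_i[t] : t-1]).
    fired = [1 if rng.random() < phi(i, windows[i]) else 0 for i in range(n)]
    for i in range(n):
        if fired[i]:
            windows[i].clear()               # a firing neuron forgets its past
        else:
            windows[i].append(tuple(fired))  # otherwise new spikes accumulate
    return fired
```

Note how the reset of `windows[i]` implements the defining "forgetting" property: after a firing, the neuron's relevant history starts over from the empty window.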


Potential-based variants

In a common special case of the GL model, the part of the past firing history X[\tau_i[t]:t-1] that is relevant to each neuron i\in I at each sampling time t is summarized by a real-valued internal state variable or ''potential'' V_i[t] (corresponding to the membrane potential of a biological neuron), which is basically a weighted sum of the past spike indicators since the last firing of neuron i. That is,

:V_i[t] \;=\; \sum_{t'=\tau_i[t]}^{t-1} \biggl( E_i[t'] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t'] \biggr)\,\alpha_i\bigl[\,t'-\tau_i[t],\; t-1-t'\,\bigr]

In this formula, w_{j\to i} is a numeric weight, corresponding to the total weight or strength of the synapses from the axon of neuron j to the dendrites of neuron i. The term E_i[t'], the ''external input'', represents some additional contribution to the potential that may arrive between times t' and t'+1 from other sources besides the firings of other neurons. The factor \alpha_i[r,s] is a ''history weight function'' that modulates the contributions of firings that happened r whole steps after the last firing of neuron i and s whole steps before the current time. Then one defines

:\Pr\bigl(\,X_i[t] = 1 \,\big|\, X[-\infty:t-1]\,\bigr) \;=\; \phi_i\bigl(V_i[t]\bigr)

where \phi_i is a monotonically non-decreasing function from \mathbb{R} into the interval [0,1]. If the synaptic weight w_{j\to i} is negative, each firing of neuron j causes the potential V_i to decrease. This is the way inhibitory synapses are approximated in the GL model. The absence of a synapse between those two neurons is modeled by setting w_{j\to i} = 0.
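The potential sum above can be computed directly. The sketch below assumes a particular container layout (`E[i][s]`, `X[j][s]`, `w[j][i]`); those names and that layout are illustrative choices for this example, not part of the model's definition.

```python
def potential(i, tau_i, t, E, X, w, alpha):
    """Potential V_i[t] of the potential-based GL variant (sketch).

    E[i][s]     -- external input to neuron i at time s
    X[j][s]     -- spike indicator X_j[s]
    w[j][i]     -- synaptic weight w_{j->i} (0 encodes a missing synapse)
    alpha(r, s) -- history weight function alpha_i[r, s]
    """
    total = 0.0
    for tp in range(tau_i, t):  # sum over t' = tau_i[t], ..., t-1
        drive = E[i][tp] + sum(
            w[j][i] * X[j][tp] for j in range(len(X)) if j != i
        )
        total += drive * alpha(tp - tau_i, t - 1 - tp)
    return total
```

The firing probability for the step is then phi_i applied to this value, for any non-decreasing phi_i into [0,1] (a logistic function, for instance).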


Leaky integrate and fire variants

In an even more specific case of the GL model, the potential V_i is defined to be a decaying weighted sum of the firings of other neurons. Namely, when a neuron i fires, its potential is reset to zero. Until its next firing, a spike from any neuron j increments V_i by the constant amount w_{j\to i}. Apart from those contributions, during each time step the potential decays by a fixed ''recharge factor'' \mu_i towards zero. In this variant, the evolution of the potential V_i can be expressed by a recurrence formula

:V_i[t+1] \;=\; \begin{cases} \mu_i\,V_i[t] & \text{if } X_i[t]=0 \\ 0 & \text{if } X_i[t]=1 \end{cases} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]

Or, more compactly,

:V_i[t+1] \;=\; (1 - X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]

This special case results from taking the history weight factor \alpha_i[r,s] of the general potential-based variant to be \mu_i^{s}. It is very similar to the leaky integrate-and-fire model.
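The compact form of this recurrence translates into a one-line update per neuron. As before, the function and argument names below are illustrative; all inputs refer to time t and the result is the vector of potentials at time t+1.

```python
def lif_gl_update(V, X, E, w, mu):
    """One step of the leaky integrate-and-fire GL recurrence (sketch):

        V_i[t+1] = (1 - X_i[t]) * mu_i * V_i[t]
                   + E_i[t] + sum_{j != i} w_{j->i} * X_j[t]

    w[j][i] is the synaptic weight w_{j->i}; mu[i] is the recharge factor.
    """
    n = len(V)
    return [
        (1 - X[i]) * mu[i] * V[i]       # decay, or reset to 0 if i fired
        + E[i]                          # external input
        + sum(w[j][i] * X[j] for j in range(n) if j != i)  # synaptic input
        for i in range(n)
    ]
```

A neuron that fired (X[i] = 1) loses its accumulated potential entirely, but still receives the current step's external and synaptic inputs.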


Reset potential

If, between times t and t+1, neuron i fires (that is, X_i[t] = 1), no other neuron fires (X_j[t] = 0 for all j\neq i), and there is no external input (E_i[t] = 0), then V_i[t+1] will be w_{i\to i}. This self-weight therefore represents the ''reset potential'' that the neuron assumes just after firing, apart from other contributions. The potential evolution formula can therefore also be written as

:V_i[t+1] \;=\; \begin{cases} V^\mathsf{R}_i & \text{if } X_i[t]=1 \\ \mu_i\,V_i[t] & \text{if } X_i[t]=0 \end{cases} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]

where V^\mathsf{R}_i = w_{i\to i} is the reset potential. Or, more compactly,

:V_i[t+1] \;=\; X_i[t]\,V^\mathsf{R}_i \;+\; (1 - X_i[t])\,\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]


Resting potential

These formulas imply that the potential decays towards zero with time, when there are no external or synaptic inputs and the neuron itself does not fire. Under these conditions, the membrane potential of a biological neuron will tend towards some negative value, the resting or baseline potential V^\mathsf{B}_i, on the order of −40 to −80 millivolts. However, this apparent discrepancy exists only because it is customary in neurobiology to measure electric potentials relative to that of the extracellular medium. That discrepancy disappears if one chooses the baseline potential V^\mathsf{B}_i of the neuron as the reference for potential measurements. Since the potential V_i has no influence outside of the neuron, its zero level can be chosen independently for each neuron.


Variant with refractory period

Some authors use a slightly different ''refractory'' variant of the integrate-and-fire GL neuron, which ignores all external and synaptic inputs (except possibly the self-synapse w_{i\to i}) during the time step immediately after its own firing. The equation for this variant is

:V_i[t+1] \;=\; \begin{cases} V^\mathsf{R}_i & \text{if } X_i[t]=1 \\[1mm] \displaystyle \mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \text{if } X_i[t]=0 \end{cases}

or, more compactly,

:V_i[t+1] \;=\; X_i[t]\,V^\mathsf{R}_i \;+\; (1 - X_i[t])\,\biggl(\mu_i\,V_i[t] \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\biggr)
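The difference from the non-refractory update is only in where the parenthesis falls: here a neuron that just fired discards the step's inputs entirely. A sketch, with illustrative names:

```python
def refractory_update(V, X, E, w, mu, Vr):
    """Refractory GL variant (sketch).  A neuron that fired at step t sits
    at its reset potential Vr[i] at step t+1 and ignores all inputs during
    that step:

        V_i[t+1] = X_i[t] * Vr_i
                   + (1 - X_i[t]) * (mu_i * V_i[t] + E_i[t]
                                     + sum_{j != i} w_{j->i} * X_j[t])
    """
    n = len(V)
    out = []
    for i in range(n):
        if X[i]:
            out.append(Vr[i])  # just fired: external and synaptic inputs ignored
        else:
            syn = sum(w[j][i] * X[j] for j in range(n) if j != i)
            out.append(mu[i] * V[i] + E[i] + syn)
    return out
```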


Forgetful variants

Even more specific sub-variants of the integrate-and-fire GL neuron are obtained by setting the recharge factor \mu_i to zero. In the resulting neuron model, the potential V_i (and hence the firing probability) depends only on the inputs in the previous time step; all earlier firings of the network, including of the same neuron, are ignored. That is, the neuron does not have any internal state, and is essentially a (stochastic) function block. The evolution equations then simplify to

:V_i[t+1] \;=\; \begin{cases} V^\mathsf{R}_i & \text{if } X_i[t]=1 \\ 0 & \text{if } X_i[t]=0 \end{cases} \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]

:V_i[t+1] \;=\; X_i[t]\,V^\mathsf{R}_i \;+\; E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]

for the variant without refractory step, and

:V_i[t+1] \;=\; \begin{cases} V^\mathsf{R}_i & \text{if } X_i[t]=1 \\[1mm] \displaystyle E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t] & \text{if } X_i[t]=0 \end{cases}

:V_i[t+1] \;=\; X_i[t]\,V^\mathsf{R}_i \;+\; (1 - X_i[t])\,\biggl(E_i[t] \;+\; \sum_{j\in I\setminus\{i\}} w_{j\to i}\,X_j[t]\biggr)

for the variant with refractory step.

In these sub-variants, while the individual neurons do not store any information from one step to the next, the network as a whole still can have persistent memory, because of the implicit one-step delay between the synaptic inputs and the resulting firing of the neuron. In other words, the state of a network with n neurons is a list of n bits, namely the value of X_i[t] for each neuron, which can be assumed to be stored in its axon in the form of a traveling depolarization zone.
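Both forgetful sub-variants reduce to a stateless function of the previous step's spikes. A combined sketch (illustrative names; the `refractory` flag selects between the two equations above):

```python
def forgetful_update(X, E, w, Vr, refractory=False):
    """Forgetful GL sub-variant, obtained by setting mu_i = 0 (sketch).
    The new potential depends only on the previous step's spikes X and
    external input E; with refractory=True, a neuron that just fired also
    ignores its inputs for one step.
    """
    n = len(X)
    out = []
    for i in range(n):
        syn = sum(w[j][i] * X[j] for j in range(n) if j != i)
        if refractory:
            out.append(X[i] * Vr[i] + (1 - X[i]) * (E[i] + syn))
        else:
            out.append(X[i] * Vr[i] + E[i] + syn)
    return out
```

Since no term involves the previous potential, the only "memory" left in the network is the one-step delay carried by the spike vector X itself, as noted above.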


History

The GL model was defined in 2013 by mathematicians Antonio Galves and Eva Löcherbach. Its inspirations included Frank Spitzer's interacting particle systems and Jorma Rissanen's notion of stochastic chains with memory of variable length. Another work that influenced this model was Bruno Cessac's study of the leaky integrate-and-fire model, who was himself influenced by Hédi Soula. Galves and Löcherbach referred to the process that Cessac described as "a version in a finite dimension" of their own probabilistic model.

Prior integrate-and-fire models with stochastic characteristics relied on adding a noise term to simulate stochasticity. The Galves–Löcherbach model distinguishes itself because it is inherently stochastic, incorporating probabilistic measures directly into the calculation of spikes. It is also a model that may be applied relatively easily, from a computational standpoint, with a good ratio between cost and efficiency. It remains a non-Markovian model, since the probability of a given neuronal spike depends on the accumulated activity of the system since the last spike.

Contributions to the model have been made considering the hydrodynamic limit of the interacting neuronal system, the long-range behavior of the process, aspects pertaining to prediction and classification of behaviors as a function of parameters, and the generalization of the model to continuous time. The Galves–Löcherbach model was a cornerstone of the NeuroMat project ("Modelos matemáticos do cérebro", Fernanda Teixeira Ribeiro, ''Mente e Cérebro'', Jun. 2014).


See also

* Biological neuron model
* Hodgkin–Huxley model
* Computational neuroscience
* NeuroMat

