
Neurocomputing 152 (2015) 27–35


Biological context of Hebb learning in artificial neural networks, a review

Eduard Kuriscak a,*, Petr Marsalek b,c, Julius Stroffek b, Peter G. Toth b

a Institute of Physiology, First Medical Faculty, Charles University in Prague, Albertov 5, CZ-128 00 Praha 2, Czech Republic
b Institute of Pathological Physiology, First Medical Faculty, Charles University in Prague, U Nemocnice 5, CZ-128 53 Praha 2, Czech Republic
c Czech Technical University in Prague, Zikova 4, CZ-166 36 Praha 6, Czech Republic

Article info

Article history:
Received 7 July 2014
Received in revised form 26 September 2014
Accepted 9 November 2014
Communicated by W.L. Dunin-Barkowski
Available online 27 November 2014

Abstract

In 1949 Donald Olding Hebb formulated a hypothesis describing how neurons excite each other and how the efficiency of this excitation subsequently changes with time. In this paper we present a review of this idea. We evaluate its influence on the development of artificial neural networks and on the way we describe biological neural networks. We explain how Hebb's hypothesis fits into the research both of that time and of the present. We highlight how it has gone on to inspire many researchers working on artificial neural networks. The underlying biological principles that corroborate this hypothesis, which were discovered much later, are also discussed, in addition to recent results in the field and further possible directions of synaptic learning research.
© 2014 Elsevier B.V. All rights reserved.

Keywords:
Artificial neural networks
Biological neural networks
Hebb learning
Hebb rule
Hebb synapse
Synaptic plasticity

1. Introduction
In 2014 we commemorate 110 years since the birth of Donald Olding Hebb and 65 years since the first publication of his influential book The Organization of Behavior [1]. In the first half of the twentieth century, one of the most tantalizing questions in the field of mind and brain research was the problem of how the physiology of the brain correlates to the high level behavior of mammals and especially humans. Pavlov proposed conditioned reflexes [2] as an explanation of how a line of neural excitation connects some external excitation, along the way, to the triggered target muscle. In 1943, McCulloch and Pitts [3] used ideas coined almost a decade earlier by Turing [4] to formulate a logical calculus as a framework of neural computation. A few examples of similar hypotheses include that of Jeffress, who during his sabbatical at the California Institute of Technology in 1948 proposed a neural circuit for sound localization [5]. The proposed circuit was found in birds 40 years later [6]. Similarly, in 1947 Laufberger [7] proposed the idea of

Abbreviations: ANN, Artificial neural networks; AMPA, α-Amino-3-hydroxy-5-Methyl-4-isoxazole-Propionic Acid; BNN, Biological neural networks; BCM, Bienenstock, Cooper, and Munro; CA3, Cornu Ammonis no. 3 (area in the hippocampus); LTD, Long-term depression; LTP, Long-term potentiation; NMDA, N-Methyl-D-Aspartate; NO, Nitric oxide; STDP, Spike timing dependent plasticity
* Corresponding author. Tel.: +420 224 96 8413.
E-mail address: Eduard.kuriscak@lf1.cuni.cz (E. Kuriscak).
http://dx.doi.org/10.1016/j.neucom.2014.11.022
0925-2312/© 2014 Elsevier B.V. All rights reserved.

binary (all-or-none) representation and processing of information in the brain [8]. However, this idea was not entirely new and its roots can be traced back to 1926, to a paper by Adrian and Zotterman [9]. The mid-twentieth century saw many researchers developing sophisticated hypotheses of how information might be processed by neuronal circuits in the brain. In 1948, Wiener's book [10] brought the perspectives of information theory and signal processing to the field.

During this time, Hebb was working on a theory that would explain complex psychological behaviors within the framework of neural physiology. The approach he adopted stemmed from the best practices used by behavioral psychologists in North America of the time, dating back some 40 years to William James [11]. To explain behavior using hypothetical neural computations, Hebb had to make a few novel assumptions, one of which has become the most cited sentence of his 1949 book [1]. It is the formulation of the general rule describing how changes in synaptic weights (also strengths, or efficiencies) control the way neurons excite each other: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."

Hebb's simplified formulation of the assumption was deliberate. Whilst the idea of increasing synaptic efficiency had been previously presented and the possible underlying chemical and biological processes already studied [12], their biological nature was not
yet known in detail [13]. Using the above postulate allowed Hebb to develop his theory further without discussing the processes involved in changing synaptic transmission. A further simplified and popular version of this assumption is "Neurons that fire together, wire together."

Today, the phenomena of neural adaptation, training, learning and working memory seem to be almost trivial facts present in our everyday lives as we improve our cognitive skills. By applying this simple rule in the study of artificial neural networks (ANN) we can obtain powerful models of neural computation that might be close to the function of structures found in the neural systems of many diverse species.

This review paper is divided into the following parts: after this initial introduction, the application of the Hebb rule in selected neural networks is outlined. This is followed by a brief review of how biological synapses, neurons and networks function, prior to the closing section, which consists of a summary and future directions. A commented list of relevant web links can be found in the appendix.

2. Models of neural networks


The above rule that Hebb proposed describes only the way in which synaptic efficiencies are changed in a dynamic system. Most artificial neural networks are characterized by two phases of synaptic change, learning and recall. First, in the learning phase, target outputs are used to train the network to achieve a desired response to a given input. Secondly, in the recall phase, the synaptic efficiencies do not usually change and the network is only used to calculate the response to a given input, based on the synaptic efficiencies calculated previously.

Let us illustrate the two phases using the example of the Hopfield network [14–16]. In this network, the Hebb rule is used in the iterative learning phase to set up the weights of the input patterns successively stored in the network. This is achieved by mixing the input and required output of all the neurons in the network with the first pattern. After this the next pattern is learned. The activity of the neurons approaches the desired values by repeating the learning procedure. The Hebb rule is repeated in the network to set the synaptic efficiencies accordingly, while the update algorithms may vary. Finally, in the recall phase, the weights are no longer adjusted and the memory retrieval of the network consists of the completed recall of partial inputs.
The type of learning whereby the required neuron output is used instead of the actual neuron output to change the synaptic weights is often called the supervised learning rule. In some cases, instead of using the neuron output directly, only the difference between the original and the required output is used during learning. This eliminates a change of weights if the neuron already yields the required output, or if it is close to the output value. In contrast, applying only the changes in synaptic weights based on the actual neuron activity is called the unsupervised learning rule. In a strict sense, the Hebb rule in its original formulation did not include supervised learning, as only one synapse from neuron A to neuron B is increasing its efficiency. In supervised learning there is a third factor C, which represents the additional input from another source of information, or the output of B. The efficiency here is a function of the activity of A, B and C (there are also supervised learning rules where the activity of B is ignored).

However, the Hebb rule can be utilized in supervised learning. For example, the idea behind contrastive Hebbian learning is to clamp the output neurons to desired values and then use the Hebbian learning rule to set the weights across the network [17]. Several other supervised learning rules use mathematical formulations similar to that of the Hebb rule; therefore, we have incorporated them into this review, with particular emphasis on those that have a biological counterpart.
In addition to the term supervised learning, the more general term of reinforcement learning is often used. From a psychological, or behavioral, point of view, this is any learning whose process is facilitated (reinforced) either by a (positively emotionally charged) reward or by a (negatively emotionally charged) punishment [2]. Originating in behaviorism, the term reinforcement was introduced into the ANN theories by several researchers between the years 1980 and 1986 [14,18,19]. Reinforcement in ANN means learning with the use of a feedback input. This feedback is usually binary, signaling only whether the output is to be accepted or rejected. There is no information about the desired output as in supervised learning.

The neural systems of all higher animals are hierarchically organized, with many levels of connections between the neurons. Lower level circuits typically consist of local, mostly inhibitory interneurons. Unsupervised learning is believed to be more likely present in these lower level biological cellular circuits, since fewer conditions are imposed on its mechanism. Hebb's view on changes in synaptic efficiency considers only local factors acting on the corresponding neurons and synapses. To implement the Hebb rule, there is no need for any supervision in learning. However, in biological neural networks (BNN) there exist multiple forms of reinforced feedback that affect learning and employ both local mechanisms in the circuit and global, longer range mechanisms. Many examples of biological feedback connections that change their synaptic weight are found in the visual pathway, specifically in the retina and visual cortex, which contain well described hierarchies of feedback connections. Global hierarchical reinforcement can be described in an abstract form. This higher level of description also bridges psychology with biology. From both a psychological and a biological perspective, we can look at emotions as an example of tagging and reinforcing memories [20].
2.1. Formulations of Hebb rule
One of the simpler formulations of the Hebb rule can be written as

\frac{dw_i}{dt} = \frac{1}{\tau_w}\, f(w_i)\, r^{\mathrm{out}} r_i^{\mathrm{in}},   (1)

where the weight w_i of the i-th input changes with time t and time constant \tau_w. This constant includes both the change rate and the pre-set strength factor. According to the terminology of [21,22], \tau_w^{-1} is a constant in the term correlating post- and pre-synaptic rates. The right side of this ordinary differential equation contains an unspecified function of the weight, f(w_i); r^{\mathrm{out}} and r_i^{\mathrm{in}} are the respective output and input rates. Other equivalent formulations of the Hebb rule can be found in [21].
Several properties of the Hebb rule are important for its implementation in ANN. Six properties are summarized by Gerstner and
Kistler [22]: (1) locality, meaning restricting the rule to input and
output neurons of the given synapse (however some supervised
learning rule variants are not local); (2) cooperativity, the requirement
of simultaneous activity of both neurons; (3) boundedness of the
weight values thereby preventing their divergence; (4) competition,
that some synapses are strengthened at the expense of other
synapses that are weakened; (5) long term stability, which is a
natural requirement for the dynamic stability of the neural network
system (this is however only one side of the dilemma of stability
versus plasticity); (6) synaptic depression and facilitation, or weight
decrease and increase, which is perhaps the most important property.
Synapses must be able to change in both directions, or, as is the
case when extremal values occur in biological neurons, when new
connections grow or existing links are disconnected. Obviously,

E. Kuriscak et al. / Neurocomputing 152 (2015) 2735

the purpose of decreasing efciency is to assure the convergence


of the network to a mean net activity value [23,24].
In Eq. (1), when f wi is positive, because the ring rates can be
only positive numbers, the weight only grows and diverges.
Simply limiting the weights (according to point 3. above) would
not eliminate the divergence problem, because at some point all
weights in the network will reach this limit. In any practical
application of neural networks, a decrease of the efciency is as
important as its increase (point 6 above). In order to obtain a
stable solution, we can equivalently rewrite the weight changes as
differences of ring rates, which can converge to a stable point:

\frac{dw_i}{dt} = \frac{1}{\tau_w}\, f(w_i)\,\left(r^{\mathrm{out}} - r^{\mathrm{out}}_{0}\right)\left(r^{\mathrm{in}}_{i} - r^{\mathrm{in}}_{i,0}\right),   (2)

where r^{\mathrm{out}}_{0} and r^{\mathrm{in}}_{i,0} are thresholds for the output and input rates, respectively. This variant is called the Hebb rule with post-synaptic and pre-synaptic gating [22]. Next, we will look into how the learning process using the Hebb rule is accomplished in selected ANNs.
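For illustration only, the gated rate-based rule of Eq. (2) can be simulated with a few lines of code. The sketch below is not taken from the cited works; the choice f(w) = 1, the rates, thresholds and time constant are arbitrary assumptions.

```python
import numpy as np

def hebb_gated_step(w, r_in, r_out, r_in_0, r_out_0, tau_w, dt=1e-3):
    """One Euler step of the Hebb rule with pre- and post-synaptic gating, Eq. (2).

    f(w) = 1 is used for simplicity; any soft bound, e.g. f(w) = 1 - w,
    could be substituted."""
    dw = (1.0 / tau_w) * (r_out - r_out_0) * (r_in - r_in_0)
    return w + dt * dw

# toy example: one input above and one below its rate threshold
w = np.array([0.5, 0.5])
r_in = np.array([20.0, 2.0])      # input firing rates
r_in_0 = np.array([5.0, 5.0])     # input rate thresholds
r_out, r_out_0 = 15.0, 10.0       # output rate and its threshold
for _ in range(1000):
    w = hebb_gated_step(w, r_in, r_out, r_in_0, r_out_0, tau_w=10.0)
print(w)                          # the first weight grows, the second is depressed
```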
2.2. Feed-forward networks
The first published model of a neural network with learning capability was the perceptron [25]. The network was inspired by the putative biological neural circuits involved in visual perception. Although the change of synaptic efficiency proposed by Hebb was not used in the perceptron, Hebb's theory was discussed in the paper. Hebb's theory was however criticized, mainly as it did not offer a model of behavior and only attempted to build a bridge between biophysics and psychology. The simplified model of the perceptron consists of three layers: a sensory cell layer, an association unit layer and a response unit layer. The proposed learning rule only adjusts the values of the synapses from the first and second layers.

The first concept of neuronal learning in networks was demonstrated in the ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) network, an example of a perceptron, a single layer network [26]. The authors, Widrow and Hoff, used the steepest gradient descent technique to search for the extreme function values. This procedure is similar to the back-propagation learning algorithm for multi-layer feed-forward networks, developed in 1986 [19]. Widrow and Hoff formulated the error function as the difference between the obtained neuron output and the desired neuron output. They then minimized the square of the defined error function, as is done in the least squares optimization method. A simplified formula for only one neuron output y and only one pattern x = (x_1, x_2, \dots, x_n) can be written as follows:
y(x) = a_0 + a_1 x_1 + a_2 x_2 + \dots + a_n x_n,   (3)

E^2 = \left(d(x) - y(x)\right)^2,   (4)

where d(x) denotes the desired output d_1(x), d_2(x), \dots, d_n(x). When we calculate the derivatives of the square error E^2 with respect to the weights a_i and use the gradient method, we get the learning rule formula:

a_i^t = a_i^{t-1} + \eta\,\left(d(x) - y(x)\right) x_i,   (5)

used by Widrow and Hoff for the t-th iteration of the rule. This is the iterative form of the supervised learning rule, with learning rate \eta. Instead of the weights being directly assigned their corresponding target values, they approach their target values with each iteration. The weight change in the next iteration is calculated based on not one but several patterns. This can potentially result in a clash, when one pattern requires an increase of a weight while another requires a decrease.
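A minimal sketch of the Widrow-Hoff iteration of Eq. (5) is given below for illustration; the synthetic data, the learning rate and the number of passes are our assumptions, not values from [26].

```python
import numpy as np

def lms_epoch(a, X, d, eta=0.01):
    """One pass of the Widrow-Hoff rule, Eq. (5): a_i <- a_i + eta*(d - y)*x_i."""
    for x, target in zip(X, d):
        y = a[0] + np.dot(a[1:], x)      # linear unit output, Eq. (3)
        err = target - y                 # error used in Eq. (4)
        a[0] += eta * err                # bias term a_0 (its input is constantly 1)
        a[1:] += eta * err * x
    return a

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_a = np.array([0.5, 1.0, -2.0, 0.3])   # bias plus three weights
d = true_a[0] + X @ true_a[1:]
a = np.zeros(4)
for _ in range(50):
    a = lms_epoch(a, X, d)
print(a)                                   # approaches true_a
```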
Willshaw [27] described a two layer model of an associative neural network with only binary valued neural outputs and synaptic weights (valued using only 0 and 1). The name of this model, non-holographic associative memory, stems from the biological memory properties that are analogous to holography, an imaging method of physical optics [28]. The term non-holographic indicates concepts developed beyond the holographic metaphor. This model has two layers, where each neuron of the input layer is connected with all neurons in the output layer. The model can be trained to associate a given input with the required output. When we denote the k-th pattern input as x_1^k, x_2^k, \dots, x_n^k and the corresponding required output as y_1^k, y_2^k, \dots, y_n^k, the synaptic weight from input neuron i to output neuron j is set using the equation

w_{ij} = H\left(\sum_{p=1}^{P} x_i^p\, y_j^p\right),

where P denotes the number of patterns stored in the network and H(x) is a hard limiter function, with H(x) = 1 for x > 0 and H(x) = 0 otherwise. This is a straightforward example of the Hebb learning rule on the synapse between neurons with forced outputs. Since the weights and neural states attain only the values 0 or 1, simultaneous activity of neurons in one pattern causes the synapse to have maximum strength, or, we can say, neurons that fired together once, wired together.
Further analysis of the Willshaw model can be found in [29]. More biologically plausible modifications of the network, with only partial connectivity between layers and probabilistic synaptic transmission, are compared to hippocampal cells in the work of Graham and Willshaw [30].
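The binary associative memory described above is easy to express in code. The following sketch illustrates the storage rule with randomly generated sparse patterns; the recall threshold equal to the cue activity is a common convention and an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, P, k = 100, 100, 20, 5        # input size, output size, patterns, active units

X = np.zeros((P, n), dtype=int)     # sparse binary input patterns
Y = np.zeros((P, m), dtype=int)     # associated binary output patterns
for p in range(P):
    X[p, rng.choice(n, k, replace=False)] = 1
    Y[p, rng.choice(m, k, replace=False)] = 1

# Willshaw storage: w_ij = H(sum_p x_i^p y_j^p), realized here by a logical OR
W = (X.T @ Y > 0).astype(int)

# recall: output unit j fires if its summed input reaches the cue activity k
recalled = (X[0] @ W >= k).astype(int)
print(np.array_equal(recalled, Y[0]))   # True unless spurious overlaps occur
```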
Learning rules based on back-propagation of errors are often criticized as biologically implausible, as they assume the fast propagation of information about errors along axons in the opposite direction. Moreover, this information must be very precise and specific for each neuron [31]. Signaling in the BNN using various chemical compounds to alter synaptic efficiencies is relatively slow and its target areas are spread out. An example of a biologically more plausible learning rule for multi-layer feed-forward networks was introduced by Barto and Anandan [32]. Their associative reward-penalty rule utilizes the Hebb learning rule together with a global reinforcement signal:
\Delta w_{ij} =
\begin{cases}
\rho\,\left(x_i - \langle x_i\rangle\right) x_j, & \text{if } r = +1 \text{ (reward)},\\
-\lambda\rho\,\left(x_i - \langle x_i\rangle\right) x_j, & \text{if } r = -1 \text{ (penalty)},
\end{cases}   (6)

where \rho > 0 and \lambda > 0 are constants and \langle x_i\rangle is the expected (mean value) output of neuron i in the network, based on stochastic neural units. A network incorporating a variant of this rule was successfully used to model visual space representation in area 7a, in the posterior parietal cortex of monkeys [31].
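A toy sketch of an associative reward-penalty update in the spirit of [32] is shown below. The stochastic unit, the task, the constants and the exact penalty term are our assumptions for illustration and do not reproduce the original formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def arp_step(w, x, rho=0.1, lam=0.01):
    """One associative reward-penalty update in the spirit of Barto and Anandan [32].

    A single stochastic unit fires with probability p; the environment returns
    r = +1 when the emitted output matches a toy target, r = -1 otherwise."""
    p = 1.0 / (1.0 + np.exp(-w @ x))        # expected output <x_i>
    out = float(rng.random() < p)           # stochastic binary output x_i
    r = 1 if out == float(x.sum() > 0) else -1
    if r == 1:
        w += rho * (out - p) * x            # reward: Hebbian term as in Eq. (6)
    else:
        w += lam * rho * ((1.0 - out) - p) * x   # penalty: push towards the other output
    return w

w = np.zeros(5)
for _ in range(5000):
    w = arp_step(w, rng.choice([-1.0, 1.0], size=5))
print(w)
```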
Another example of a learning rule in feed-forward networks based on Hebb's postulate is Sanger's rule [33] (also known as the generalized Hebbian algorithm), mainly used in principal component analysis. It is an extension of the Oja learning rule [34] for networks with multiple outputs. The Oja learning rule for one neuron acting as a principal component analyzer is written as

\Delta w_i = x_j \left(x_i - x_j w_i\right),   (7)

where the term x_j x_i of the Hebb rule was replaced by x_j x_i', with x_i' = x_i - x_j w_i considered as the effective input to the unit. This additional feedback controls the growth of the weights and prevents divergence.
2.3. Recurrent networks


Hopfield was inspired by the spin glass theory in physics [35,36] and published a similar neural network model [14,15]. There are n neurons and the output of each neuron is connected as an input to all other neurons. The neuron states are again binary, valued originally 0 and 1, or equivalently -1 and +1 [16]. This model works as an auto-associative memory and the input to the network is presented by setting up all the neuron states
according to the network input. The neuron states are then


updated in iterations either synchronously (with all the neuron
states updated in one iteration) or asynchronously (only one
neuron is picked up and its state is updated in one iteration).
The synaptic weights are assigned the values obtained by the following version of the Hebb rule:

w_{ij} = \sum_{k=1}^{p} x_i^k x_j^k,   (8)

where w_{ij} denotes the synaptic weight from neuron i to neuron j, x_i^k and x_j^k denote the neural states of neurons i and j respectively in pattern k, and p denotes the number of patterns to be stored in the network.
An application of Hebb's hypothesis here would not assume symmetrical synaptic weights; however, the formulation of the learning in Eq. (8) results in symmetry. This symmetry is the essential property of the Hopfield network. Without this symmetry the network would not converge to its stable state. Further analysis of the auto-associative memories of the Hopfield type can be found in [37].
An extension of this model to include continuous neuron states as a function of time was published in 1984 [15], in close succession to the first paper [14]. In this second paper, Hopfield proposed that the model could be implemented directly using electrical circuits. The learning rule is the same as in the previous paper.
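For concreteness, a minimal Hopfield network with the Hebbian storage of Eq. (8) and asynchronous recall can be sketched as follows; the pattern statistics, the network size and the update schedule are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100, 5
patterns = rng.choice([-1, 1], size=(p, n))   # states -1/+1 as in [16]

W = patterns.T @ patterns                     # Hebbian storage, Eq. (8)
np.fill_diagonal(W, 0)                        # no self-connections

def recall(cue, steps=10 * n):
    """Asynchronous recall: update one randomly chosen neuron per step."""
    s = cue.copy()
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

cue = patterns[0].copy()                      # corrupt 10% of the first pattern
flip = rng.choice(n, size=10, replace=False)
cue[flip] *= -1
print(np.mean(recall(cue) == patterns[0]))    # close to 1.0 for p well below 0.15*n
```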
One generalization of the Hebb rule was aimed at explaining particular phenomena of the visual cortex. The BCM theory (named after Bienenstock, Cooper, and Munro [23]) was proposed to account for experiments measuring the selectivity of neurons in the primary sensory cortex and selectivity changes dependent on neuronal input. The linear neuron output y is obtained simply by summing the inputs x_i multiplied by the weights w_i:

y = \sum_i w_i x_i.   (9)

The weight is calculated by a rule expressing synaptic change as a product of the presynaptic activity and a nonlinear function of postsynaptic activity. The weight depends on the modification threshold \theta_M:

\frac{dw_i}{dt} = y\,(y - \theta_M)\, x_i - \epsilon w_i,   (10)

and decays with the factor \epsilon. The function \theta_M = E_p[y / y_0] sets the modification (sliding) threshold according to the mean activity taken over all the p patterns. Synapses are facilitated or depressed according to this equation.
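A compact sketch of a BCM-like update following Eqs. (9) and (10) is given below. The running estimate of the sliding threshold and all parameter values are our assumptions; the original theory averages over the input patterns.

```python
import numpy as np

rng = np.random.default_rng(5)

def bcm_step(w, x, theta, eps=1e-3, eta=1e-3, y0=1.0, tau_theta=100.0):
    """One step of a BCM-like rule: Eq. (9) for the output, Eq. (10) for the weights."""
    y = w @ x                                      # Eq. (9)
    w = w + eta * (y * (y - theta) * x - eps * w)  # Eq. (10), with decay eps
    theta = theta + (y / y0 - theta) / tau_theta   # running estimate of E_p[y / y_0]
    return w, theta

w, theta = rng.random(10) * 0.1, 1.0
for _ in range(20000):
    x = rng.random(10)                             # random positive input rates
    w, theta = bcm_step(w, x, theta)
print(w, theta)
```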
It has been shown that if cycles of sparse activity are stored in a model with a Hopfield network topology and Willshaw-model-like neurons and synapses, where both neuron outputs and synapses are only binary 0 or 1, the capacity of the model can be higher than that of the Hopfield network. In the Hopfield network, the capacity scales with the number of neurons n and is proportional to 0.15n. The properties of sparse networks were studied under conditions where the percentage of the active neurons in the whole cycle approached that observed in biological networks and their proportion was maintained at around 1.5% [38].
We have used another variant of the Hebb learning rule to store the patterns in the network in a similar way to that of the Willshaw model. When many indexes in the rule are omitted, the form of the Hebb rule can be clearly seen. We use weight modification in cyclic activation sequences:

w_{ij} = H\left(\sum_{k=1}^{r}\left({}^{k}x_i^{l(k)}\,{}^{k}x_j^{1} + \sum_{q=1}^{l(k)-1} {}^{k}x_i^{q}\,{}^{k}x_j^{q+1}\right)\right),   (11)

where r is the number of patterns stored, {}^{k}x denotes the k-th pattern, l(k) denotes the k-th pattern's cycle length, {}^{k}x^{1}, {}^{k}x^{2}, \dots, {}^{k}x^{l(k)} denote the network activities within the cycle iterations and {}^{k}x_1^{q}, {}^{k}x_2^{q}, \dots, {}^{k}x_n^{q} denote the activities of individual neurons within iteration q of the cycle in pattern k. The learning rule itself is the same as in the Willshaw model except that it is rewritten in a form that includes neural activities within cycles. Further details can be found in [39,40]. Synaptic facilitation and depression can additionally enlarge the network capacity, as demonstrated in [41–43].
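A compact sketch of the storage step of Eq. (11) for a set of binary cyclic sequences is shown below; the recall dynamics described in [39,40] are not reproduced here, and the pattern sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 50, 3                                   # neurons, number of cyclic patterns
cycles = [rng.integers(0, 2, size=(rng.integers(3, 6), n)) for _ in range(r)]

W = np.zeros((n, n), dtype=int)
for cyc in cycles:                             # cyc[q] is the activity in iteration q+1
    l = len(cyc)
    for q in range(l):
        pre, post = cyc[q], cyc[(q + 1) % l]   # wrap-around links the last step to the first
        W |= np.outer(pre, post)               # the hard limiter H(.) becomes a logical OR

print(W.sum(), "connections set")
```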

2.4. Networks of self-organizing maps and other unsupervised networks

After 1980, statistics, optimization, dynamic differential equations, and the corresponding numerical methods were rapidly developing due to the advances in scientific computing. From 2000 onwards, applications of ANN algorithms were regarded as standard computational methods, as can be seen in a vast number of examples in technical and scientific computing [44].
Observations made regarding the spatial organization of neural systems led to the development of other ANN learning theories. Kohonen proposed another unsupervised neural network model, motivated by sensory neural circuits, especially by vision. This model is referred to as the self-organizing map [45]. Kohonen proposed a model capable of forming a topographic map corresponding to the inner (spatial) organization of inputs. Such a spatial relationship imposes the metric of the input space onto the synaptic weights. Consider for example the topology of a two dimensional space. In this case, neurons are viewed as nodes on a grid with lateral connections to neighboring neurons. All the neurons have input connections from the same source. The lateral connections are excitatory to close neurons, inhibitory to those further away, and there are no lateral connections between neurons distant from each other. The model calculates a discriminant function (an analogy of neuronal output) for every neuron and picks the one with the highest function value, referred to as the winner neuron. The weights of the neurons are then updated according to the equation

w_i(t+1) = \frac{w_i(t) + \eta(t)\, x(t)}{\lVert w_i(t) + \eta(t)\, x(t)\rVert},   (12)

where t denotes time and the network is updated in discrete time steps. The value w_i(t) denotes the weight vector of the i-th neuron, x(t) is the vector input presented to the network and \lVert y \rVert is the Euclidean norm of the vector y. As in the equations above, the parameter \eta(t) is the learning rate, here a monotonically decreasing function of time t. This function differs for the winner neuron, the nearest neighboring neurons, those further away and the neurons most distant from the winner neuron.
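The normalized update of Eq. (12) for a winner-take-all layer can be sketched as below; the neighbourhood function is reduced to the winner only, and the input data and learning rate schedule are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_neurons, dim = 16, 3
W = rng.normal(size=(n_neurons, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit weight vectors, ||w_i|| = 1

def som_step(W, x, t, eta0=0.5, tau=200.0):
    eta = eta0 * np.exp(-t / tau)               # monotonically decreasing learning rate
    winner = np.argmax(W @ x)                   # discriminant: dot product, as in Eq. (13)
    w_new = W[winner] + eta * x                 # numerator of Eq. (12)
    W[winner] = w_new / np.linalg.norm(w_new)   # renormalize to unit length
    return W

for t in range(1000):
    x = rng.normal(size=dim)
    x /= np.linalg.norm(x)
    W = som_step(W, x, t)
print(W[:3])
```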
Similar ideas about lateral excitation and inhibition, as well as choosing the neuron with the highest response, have been previously discussed by Rosenblatt in [25], where reinforcement learning was used instead to adjust the weights. There is no visible direct evidence of any Hebbian idea behind Eq. (12). However, the discriminant function used in the neurons is the dot product of the weight and input vectors:

w_i \cdot x = \lVert w_i \rVert\, \lVert x \rVert \cos\varphi = \lVert x \rVert \cos\varphi,   (13)

where \varphi is the angle between the vectors w_i and x and, due to the normalization in Eq. (12), the weight vector norm is unity, \lVert w_i(t)\rVert = 1. If the weight vector is of a different direction than the presented pattern, the angle \varphi will be decreased by the learning Eq. (12), which increases the cosine value in Eq. (13). This observation can be rephrased as follows: the response of the winner neuron and the nearest neighboring neurons to the presented input is increased by the learning rule. From this point of view the self-organized map is close to the Hebb rule and increases the weights in a way that the target neuron output is higher. However, this is done simultaneously for all input weights
and subsequently the normalization can decrease the weight even though it would be increased by the regular form of the Hebb rule. Biological neural networks discussed in this section are summarized in Table 1.

Table 1
Neural networks with biological correlates. Table 1 shows the hypothetical biological neural mechanisms used as paradigms for the construction of artificial neural networks.

Hopfield ANN. ANN: an auto-associative network by Hopfield [14]; it uses the Hebb rule in iterative learning and update. BNN correlate: the CA3 region in the hippocampus is one of several brain regions resembling the Hopfield ANN [46].
Perceptron. ANN: it consists of layers of neurons with converging connections [25]. BNN correlate: all sensory pathways, including the visual pathway, contain pathways of converging axons [46].
ADALINE. ANN: the ADAptive LInear NEuron uses an optimization algorithm in learning [26]. BNN correlate: no biological correlate to our knowledge.
Holographic model. ANN: it was proposed by Pribram [28] and Willshaw [27]. BNN correlate: image storage in a hologram is a metaphor of biological memory; at present it is mostly discussed metaphorically.
BCM learning rule. ANN: it is named after Bienenstock, Cooper and Munro [23] and can be used in ANN. BNN correlate: it was originally described as a mechanism of the visual cortex.
Kohonen ANN. ANN: it is also called the self-organizing map, an ANN with a geometrical organization. BNN correlate: it was originally proposed by Kohonen to model neocortical columns [45].
Cyclic ANN. ANN: it is an example of a more elaborated Hebb rule [39]. BNN correlate: Wilson [47] proposes such cyclic activity in a model of the hippocampus; many other neural circuits function with the use of periodic activation.

3. Comparison of artificial and biological neuronal networks

In this section we discuss how artificial neural networks, which are frequently parts of computer software applications, are influenced by studies of biological neural networks, made of real living neurons. Artificial neural networks were first developed in the mid-to-late 1940s [3], inspired by neural theories formulated from the 1920s onwards that explain signal processing in real nervous systems [9,48]. The time lag between these observations and their computational implementations was due to the lack of practically usable computational hardware and the absence of sufficiently elaborated mathematical and information theories. These did not appear until the late 1940s [10,49]. Since then both fields of research have mutually inspired each other, but have remained separate. The advancement of artificial neural networks was motivated by the desire to mimic and replicate fundamental biological, neurophysiological and psychological phenomena (e.g. adaptation, memory and cognition). Gradually this became an applied science, oversimplifying some processes and omitting redundant and detailed biological features, including the Hebb rule. In spite of this approach, and owing to the core aspects of artificial neural networks that use Hebb rule variants, the simplifications proved fruitful and successfully mimicked many attributes and functions of biological neural circuits.
Studies of BNN have shown that many discovered mechanisms have not yet been implemented in any artificial neural network. Examples include the signal processing on dendrites and dendritic spines changing with time [50,51], the nonlinear summation of postsynaptic currents and shunting effects [52], and the variable signal propagation delays and effects of spike timing jitter [53–55]. Such ideas later entered the field of computational neuroscience, the branch of research exploring, modeling and analyzing biological neural circuits by means of software and hardware computational resources, numerical and information processing.
The motivation stemming from biological neurons and neuronal circuits continues to inspire ANN design. Many of the observations are based on mammalian brain studies. Hierarchical layers of functional units and auto-associative connections are present in biological neural nets. The hierarchy of the visual pathway was used by Hebb and his numerous followers as an example of a layered structure. The structure of the CA3 area in the hippocampus (Cornu Ammonis area 3) is believed to contain an auto-associative neural network [56], similar to that of Hopfield [46]. Its function in mammals is related to memory, emotions, motivation and space navigation, via connections through the limbic system circuits to the rest of the brain [20]. Shepherd's book The Synaptic Organization of the Brain, from which we highlight the chapter on the hippocampus and its CA3 area [46], serves as a good literary source for mathematical and engineering readers.
Biological neurons are functional units signaling their activity by spikes (action potentials), which engage intricate sub-cellular machinery as they pass the synaptic connections between neurons. At the synapse, an electrical signal of the pre-synaptic membrane is converted into a chemical signal transmitted by molecules called mediators. These have either excitatory or inhibitory effects on the post-synaptic membrane. A common example of an excitatory mediator is Glutamic Acid (GLU), whilst a common example of an inhibitory mediator is γ-Amino-Butyric Acid (GABA). Inhibition is frequently realized by local interneurons [46,38]. Two examples of excitatory postsynaptic membrane receptors, named after their drug activators, are α-Amino-3-hydroxy-5-Methyl-4-isoxazole-Propionic Acid (AMPA) and N-Methyl-D-Aspartate (NMDA). Calcium (Ca2+) ions act as important intra-cellular messengers of processes started at the neural membrane [57,58]. The mediator molecules themselves in turn act on the post-synaptic membrane, giving rise to the post-synaptic potential which then contributes to the generation of the spike in the post-synaptic neuron. The propagation of chemical signals in the opposite direction, from the post-synaptic neuron back to the pre-synaptic neuron, and the coincident activation of both these neurons within a short (millisecond) time frame are two putative mechanisms enabling the realization of the Hebb rule in biological neurons. Two examples of messengers of retrograde neuronal signals are intra-cellular Ca2+ ions [59] and nitric oxide (NO, both intra- and extra-cellular) [60]. Spikes converging from two neurons are grouped together on virtually all possible time ranges, but not all of them elicit synaptic changes [5,61,62].
There are numerous experimental protocols stimulating neurons either in the intact nervous system (in vivo), in brain slices and neuronal cultures maintained in a dish (in vitro), or alternatively in detailed models reproducing the biological neural machinery in computer hardware (in silico). In experimental protocols, synaptic weight changes are referred to as potentiation, depression, plasticity, and also Hebbian changes. Short term potentiation (STP) and long term potentiation (LTP) are two phenomena elicited in hippocampal slices and other preparations [46]. The spike timing dependent plasticity (STDP), and changes following the BCM rule, are assumed to take place also in the intact brain functioning in vivo [23,24,63–66].
There are numerous examples of reproducible behaviors in biological neurons. In behavioral, psychological and biological experiments, one can manipulate newly perceived patterns of a visual scene [53], complex sounds of speech-like utterances [67] and other sensory stimuli. Even perceptual illusions are used in experimental manipulations [68] and their neuronal mechanisms established. Numerous attempts to bridge biology and psychology have not yet grasped the exactness of the studied phenomena and are far from complete. Our perceptions retain some of their emotional and sensory qualities even as recollections from our memory. How are these seamlessly executed and experienced phenomena realized? How are our inner thoughts and memories represented by biological neurons? These and many other similar questions are left as part of a wide array of open problems in neurobiology. There remains a long way to go in closing the gap between the multitude of elementary phenomena of experimental biology mentioned above and the psychological and behavioral phenomena which were the subject of Hebb's highly celebrated book [69].

4. Beyond Hebb's assumptions and future directions


The processes of learning executed by the ANN models mentioned here are inspired by Hebb's original idea. But the learning rules are very often not exactly the rule originally proposed by Hebb. The fundamental concept maintained is that changes in the synaptic efficiency are based on the local properties and activity of the corresponding neurons. Various simulations and approaches have been used in models as well as in biological experimentation on neurons. It was originally hypothesized that a clear dividing line existed between the learning and recall processes, where a change in synaptic efficiency is part of the learning process, while other neural activities caused by synaptic transmission are part of the recall process.

The detailed models of signal transmission in biological synapses known today describe highly complex dynamics of the underlying synaptic changes [70,54]. These changes influence the parameters that describe synaptic dynamics [24]. New variables and more degrees of freedom in single synapse descriptions have been introduced. The natural question that comes to mind is what exactly should be considered as part of the learning or the recall process. This is especially relevant when designing novel ANN mechanisms. It can be seen from experiments and some models that synaptic efficiencies change dynamically on time ranges that are fast enough to be compared to the duration of interspike intervals and could therefore also affect memory recall processes under certain circumstances [70]. It has been shown that short term potentiation can improve the capacity of auto-associative memory.

Synapses in a neural network can be strengthened either via a supervised or an unsupervised weight change process. Hebb proposed a rule of weight changes in time. The application of this rule has been fruitful in the design of artificial neural networks. There is growing evidence to show that, in biological neural networks, variations of this rule are employed. This justifies the reference to the Hebb rule even in the description of higher level memory processes and inspires studies of synaptic weight change mechanisms in both artificial and biological neural networks.

4.1. Future perspectives of ANN

In order to discuss the future perspectives of ANN computations, we must first divide the problems solved by the ANN into two groups. The first of these are problems with known solutions. These are prototypical tasks for the ANN and their solutions are well described. The second are open problems, where the solutions, or even heuristic approaches, are not known. Some examples of problems successfully solved using an ANN are as follows: (1) the classification of data into several categories; (2) the recognition of input patterns, typically 2D images; (3) curve fitting; and (4) the forecasting of future data, based on a time series of historical data.

Even before the advent of the ANN, their solutions had already been devised, for example: (1) the classification of data was solved using statistical factor analysis, cluster analysis or Bayesian classification; (2) the recognition of patterns could be achieved by covariance or correlation analysis as well as their successors such as principal component analysis; (3) curves could be fitted by regression methods or their generalized variants, such as the generalized least squares method, and also by polynomial and spline fitting; and (4) time series forecasts could be performed by (nonlinear) auto-regressive moving average methods. These examples are mostly statistical methods with overlapping types of tasks and solutions. The ANN solution to these classical problems is thereby sometimes more robust, or it is used as a method of abstraction of the original problem, yielding a more concise solution.

As a practical example, consider a task to classify vehicles on a highway. There is a toll gate equipped with several standard industrial laser scanners, which have a given angular resolution and frame rate. Motor vehicles pass under the gate at a certain cruising speed. The task of the software is to classify vehicles into several categories: personal cars, busses and trucks of a particular size and weight. This task includes both pattern recognition and classification. The task is accomplished by a three layer feed-forward ANN. The use of the ANN makes it possible to combine both vehicle shape recognition and classification into one algorithm [44].

Before some of the data mining techniques were made possible by numerical computation, it was known that some problems would be harder to solve than others. This led to the concept of ill posed and well posed problems, and the subsequent development of regularization methods to convert the former to the latter. Regularization and optimization methods are therefore useful alternatives to ANN application. One example of a generally considered harder problem is the task of inferring a 3D structure based on 2D information. This task can be alternatively modified to an intermediary problem, creating a 2 1/2 D object sketch based on projection information of a 2D object [18]. Another example of a harder problem is the production of meaningful sentences based on grammatical rules. When attempts are made to cross beyond the mathematical formulation of such hard problems using ANN, we find that these problems relate to questions of artificial intelligence. Yet it seems that the decision as to which problems are contained in the field of artificial intelligence is rather arbitrary and has changed more frequently over the years than the problems solved by classical ANN.

Another future challenge for the ANN is the demand for higher modularity and scalability. There is a need for modular interfaces, which would make it possible to connect the output from one neural network to the input of another. In addition to this, not many ANNs to date contain modules within modules, or even several levels of nesting modules. The major difference between ANN and BNN is that in the latter the modularity of both interconnections (in neural pathways) and the nesting of modules (in neural nuclei, with specific reference to the columnar organization of the cerebral cortex) are ubiquitous.

4.2. Future perspectives of BNN

The worm Caenorhabditis elegans is a commonly used animal model in biology, with a nervous system consisting of exactly 302 cells. Their connectivity is well described and the function of individual neurons has been identified to some extent. A connectivity map of this system guides us in understanding its functioning. To date, the connectivity and functional maps of higher organisms are only partially described. Examples of them are invertebrates such as the fruit fly, Drosophila melanogaster, or vertebrates such as the rhesus macaque, Macaca mulatta, and also humans. New methods of tracing connections and probing neuronal function are now available. Most findings extend the observation of the modularity and universality of neurons and synapses as functional units, which make up the individual specialized neural circuits of the animal brain.
Most synapse strengthening biological protocols have been described in simple, isolated preparations. The definition of the STDP is a generalization of the LTP and LTD protocols; variations of these protocols have resulted in the description of the STDP. The neuromuscular synapse of embryonic cells of the African clawed frog Xenopus laevis has been used as an in vivo preparation demonstrating the STDP [71]. Gerstner et al. [72] presented a theoretical argument that, in order to explain the fine tuning of the circuit encompassing the first binaural neuron in higher animals, we need learning of finely tuned spike timing triggered neural responses. A natural way of uncovering the biological mechanisms is to progress from embryonic or in vitro isolated tissues, or from lower animal preparations, to more complex systems; the phenomena are later found in a more subtle form in neurons of later developmental stages, then in vivo and finally in higher animals. The succession of experimental studies of the LTP, LTD and STDP follows this chronological order. In this succession, the back-propagating action potential was one of the missing links conveying the synaptic information in the anti-dromic direction [57]. It was later shown that this effect was required for the LTP as well [73]. Recently, the STDP has been demonstrated not only in synapses from excitatory to excitatory neurons, but also in excitatory to inhibitory and inhibitory to excitatory synapses. Therefore it is not surprising that the STDP is present in both the hippocampus and the neocortex in the excitatory to excitatory synapses, and a variant of the LTP can be demonstrated in the cerebellum, where most of the circuitry uses inhibitory synapses. The effect of dopamine from the reward pathway in the neocortex is a conceptual link of supervised learning, or reinforcement, between the synapses and behavior [65].
However, many fundamental findings, whether structural or functional, still await plausible reasoning, explanation or implementation into ANN models. Although our understanding of the nervous system contains many conceptual elements adopted from and elaborated by ANN, an abundance of new findings in fundamental research has shown that biological neural networks use a multitude of computationally diverse processes. Our explanation of them is made possible by the Hebb rule and other principles already implemented in artificial neural networks. There is no doubt that some of these findings, which are yet to be elaborated by molecular biology, quantum chemistry, biological physics, or any quantitative branch of natural sciences, will profoundly influence our understanding of both artificial and biological neural networks. In order to limit the number of literary references to a reasonable scale, we have deliberately not cited many further papers or books in this last section. We advise the reader to direct her attention to the world wide web pages listed in Appendix A.

Acknowledgments
This work was supported by the graduate students research program SVV 2014 no. 260 033 to P.M., J.S. and P.G.T. and the PRVOUK program no. 205024 to P.M., both programs at Charles University in Prague. Thanks to Elisa Brann, Libor Husnik, Jiri Kofranek and Martin Vokurka.

Appendix A
We gathered several web reference sources of mostly computer
source codes and open source books and papers. Most of the
sources listed below were visited on July 22, 2014:
1. MATLAB (RM) library sources of ANN simulations are at http://
www.mathworks.com/products/neural-network/.

2. MATLAB (RM) like matrix computation numerical libraries are


in free open source programs: (1) Octave, http://www.gnu.org/
software/octave/ and (2) SciLab, https://www.scilab.org/.
3. MATLAB (RM) sources to the book of Wilson [47] are at his
homepage: http://cvr.yorku.ca/webpages/wilson.htm#book.
4. Open source book by Gerstner and Kistler, Neuron Models [21] is available at http://icwww.epfl.ch/~gerstner/SPNM/node72.html.
5. Elsevier Freedom collection (for academic institutions) contains many open source papers [57,67,74] at https://info.
myelsevier.com/?q=products/collections/freedom-collection.
6. JSTOR (Journal STORage) available from academic subscriptions contains many papers of historical reference and primary
sources at http://www.jstor.org.
7. Neurocomputing journal offers open access to older papers
[40,41,43,61,62] at http://www.journals.elsevier.com/neuro
computing/.
8. Proc. Natl. Acad. Sci. USA journal has an open archive, where
[6,14,15,31,53,56,59,60] and other papers can be found at
http://www.pnas.org/.
9. Some papers of our group are related to both ANN and BNN. They are [44,53–55,57]; the newer ones are available on request, the older ones are mostly open source now and are at (1) http://nemo.lf1.cuni.cz/mlab/marsalek-HOME/; or (2) http://scholar.google.cz/citations?view_op=search_authors&mauthors=PetrMarsalek.
10. Scholarpedia contains many papers related to Hebb learning: (1) BCM theory; (2) Hebb [75]; (3) Hebb rule; (4) Hopfield network [16]; (5) Kohonen network; (6) LTD; (7) LTP; (8) memory; (9) models of synaptic plasticity; (10) reinforcement learning; (11) STDP [66] and (12) supervised learning; the pages are at http://www.scholarpedia.org/.

References
[1] D.O. Hebb, The Organization of Behavior, Wiley, New York, 1949. Reprinted in 2002 by Lawrence Erlbaum Associates, NJ.
[2] I.P. Pavlov, Conditioned Reflexes, an Investigation of the Physiological Activity of the Cerebral Cortex, Oxford University Press, London, UK, 1927.
[3] W.S. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys. 5 (4) (1943) 115–133.
[4] A.M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. Lond. Math. Soc. 42 (1936) 230–265.
[5] L.A. Jeffress, A place theory of sound localization, J. Comput. Physiol. Psychol. 41 (1) (1948) 35–39.
[6] C.E. Carr, M. Konishi, Axonal delay lines for time measurement in the owl's brainstem, Proc. Natl. Acad. Sci. USA 85 (21) (1988) 8311–8315.
[7] V. Laufberger, Vzruchova theorie (The Impulse Theory), Czech Medical Association, Prague, 1947 (in Czech).
[8] L.J. Kohout, A Perspective on Intelligent Systems: A Framework for Analysis and Design, Chapman and Hall, London, New York, 1990.
[9] E.D. Adrian, Y. Zotterman, The impulses produced by sensory nerve-endings: Part II. The response of a single end-organ, J. Physiol. Lond. 61 (2) (1926) 151–171.
[10] N. Wiener, Cybernetics: Or Control and Communication in the Animal and the Machine, Hermann and Cie, 1948 (2nd revised ed., Wiley, New York, 1961).
[11] W. James, Pragmatism: A New Name for Some Old Ways of Thinking, Longman Green and Company, New York, NY, 1907.
[12] C.U.A. Kappers, G.C. Huber, E.C. Crosby, The Comparative Anatomy of the Nervous System of Vertebrates, Including Man, Macmillan, New York, NY, 1936.
[13] R.E. Brown, P.M. Milner, The legacy of Donald O. Hebb: more than the Hebb synapse, Nat. Rev. Neurosci. 4 (12) (2003) 1013–1019.
[14] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (8) (1982) 2554–2558.
[15] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA 81 (10) (1984) 3088–3092.
[16] J.J. Hopfield, Hopfield network, Scholarpedia 2 (5) (2007) 1977.
[17] J.R. Movellan, Contrastive Hebbian learning in the continuous Hopfield model, in: Proceedings of the 1990 Connectionist Models Summer School, 1990, pp. 10–17.
[18] D. Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, Henry Holt and Company, New York, NY, 1982.
[19] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning representations by back-propagating errors, Nature 323 (6088) (1986) 533–536.
[20] E.R. Kandel, In Search of Memory: The Emergence of a New Science of Mind, W.W. Norton and Company, New York, NY, 2007.
[21] W. Gerstner, W.M. Kistler, Neuron Models: Single Neurons, Populations, Plasticity, Cambridge University Press, Cambridge, MA, 2002.
[22] W. Gerstner, W.M. Kistler, Mathematical formulations of Hebbian learning, Biol. Cybern. 87 (5–6) (2002) 404–415.
[23] E.L. Bienenstock, L.N. Cooper, P.W. Munro, Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex, J. Neurosci. 2 (1) (1982) 32–48.
[24] L. Benuskova, W.C. Abraham, STDP rule endowed with the BCM sliding threshold accounts for hippocampal heterosynaptic plasticity, J. Comput. Neurosci. 22 (2) (2007) 129–133.
[25] F. Rosenblatt, The perceptron, a probabilistic model for information storage and organization in the brain, Psychol. Rev. 65 (6) (1958) 386–408.
[26] B. Widrow, M.E. Hoff Jr., Adaptive switching circuits, IRE WESCON Conv. Rec. 4 (1960) 96–104.
[27] D.J. Willshaw, O.P. Buneman, H.C. Longuet-Higgins, Non-holographic associative memory, Nature 222 (5197) (1969) 960–962.
[28] K.H. Pribram, The neurophysiology of remembering, Sci. Am. 220 (1) (1969) 73–86.
[29] D. Golomb, N. Rubin, H. Sompolinsky, Willshaw model: associative memory with sparse coding and low firing rates, Phys. Rev. A 41 (4) (1990) 1843–1854.
[30] B. Graham, D. Willshaw, Probabilistic synaptic transmission in the associative net, Neural Comput. 11 (1) (1999) 117–137.
[31] P. Mazzoni, R.A. Andersen, M.I. Jordan, A more biologically plausible learning rule for neural networks, Proc. Natl. Acad. Sci. USA 88 (10) (1991) 4433–4437.
[32] A.G. Barto, P. Anandan, Pattern-recognizing stochastic learning automata, IEEE Trans. Syst. Man Cybern. 15 (3) (1985) 360–375.
[33] T.D. Sanger, Optimal unsupervised learning in a single-layer linear feedforward neural network, Neural Netw. 2 (6) (1989) 459–473.
[34] E. Oja, Simplified neuron model as a principal component analyzer, J. Math. Biol. 15 (3) (1982) 267–273.
[35] E. Ising, Beitrag zur Theorie des Ferromagnetismus, Z. F. Phys. A 31 (1) (1925) 253–258 (in German).
[36] D. Sherrington, S. Kirkpatrick, Solvable model of a spin-glass, Phys. Rev. Lett. 35 (26) (1975) 1792–1796.
[37] D.J. Amit, Modeling Brain Function. The World of Attractor Neural Networks, Cambridge University Press, Cambridge, MA, 1989.
[38] E.T. Rolls, A. Treves, Neural Networks and Brain Function, Oxford University Press, New York, NY, 1998.
[39] J. Stroffek, E. Kuriscak, P. Marsalek, Pattern storage in a sparsely coded neural network with cyclic activation, Biosystems 89 (1–3) (2007) 257–263.
[40] J. Stroffek, P. Marsalek, Short-term potentiation effect on pattern recall in sparsely coded neural network, Neurocomputing 77 (1) (2012) 108–113.
[41] J. Torres, J. Cortes, J. Marro, H. Kappen, Attractor neural networks with activity-dependent synapses: the role of synaptic facilitation, Neurocomputing 70 (10–12) (2007) 2022–2025.
[42] D. Bibitchkov, J.M. Herrmann, T. Geisel, Pattern storage and processing in attractor networks with short-time synaptic dynamics, Netw. Comput. Neural Syst. 13 (1) (2002) 115–129.
[43] D. Bibitchkov, J.M. Herrmann, T. Geisel, Effects of short-time plasticity on the associative memory, Neurocomputing 44–46 (2002) 329–335.
[44] J. Stroffek, E. Kuriscak, P. Marsalek, Highway toll enforcement, IEEE Veh. Technol. Mag. 5 (4) (2010) 56–65.
[45] T. Kohonen, Self-organized formation of topologically correct feature maps, Biol. Cybern. 43 (1) (1982) 59–69.
[46] T.H. Brown, A.M. Zador, Hippocampus, in: G.M. Shepherd (Ed.), The Synaptic Organization of the Brain, Oxford University Press, New York, 1990, pp. 346–388.
[47] H.R. Wilson, Lyapunov functions and memory, in: Spikes, Decisions and Actions, Oxford University Press, New York, 1999, pp. 223–250.
[48] J.C. Eccles, Some aspects of Sherrington's contribution to neurophysiology, Notes Rec. R. Soc. Lond. (1957) 216–225.
[49] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 28 (4) (1949) 656–715.
[50] F. Santamaria, S. Wils, E.D. Schutter, G.J. Augustine, The diffusional properties of dendrites depend on the density of dendritic spines, Eur. J. Neurosci. 34 (4) (2011) 561–568.
[51] C. Sala, M. Segal, Dendritic spines: the locus of structural and functional plasticity, Physiol. Rev. 94 (1) (2014) 141–188.
[52] S. Cushing, T. Bui, P.K. Rose, Effect of nonlinear summation of synaptic currents on the input-output properties of spinal motoneurons, J. Neurophysiol. 94 (5) (2005) 3465–3478.
[53] P. Marsalek, C. Koch, J. Maunsell, On the relationship between synaptic input and spike output jitter in individual neurons, Proc. Natl. Acad. Sci. USA 94 (2) (1997) 735–740.
[54] E. Kuriscak, P. Marsalek, Model of neural circuit comparing static and adaptive synapses, Prague Med. Rep. 105 (4) (2004) 369–380.
[55] E. Kuriscak, P. Marsalek, J. Stroffek, Z. Wunsch, The effect of neural noise on spike time precision in a detailed CA3 neuron model, Comput. Math. Methods Med. 2012 (595398) (2012) 1–16.

[56] S.R. Kelso, A.H. Ganong, T.H. Brown, Hebbian synapses in hippocampus, Proc. Natl. Acad. Sci. USA 83 (14) (1986) 5326–5330.
[57] P. Marsalek, F. Santamaria, Investigating spike backpropagation induced Ca2+ influx in models of hippocampal and cortical pyramidal neurons, Biosystems 48 (1–3) (1998) 147–156.
[58] M. Mlcek, J. Neumann, O. Kittnar, V. Novak, Mathematical model of the electromechanical heart contractile system: regulatory subsystem physiological considerations, Physiol. Res. 50 (4) (2001) 425–432.
[59] A. Zador, C. Koch, T.H. Brown, Biophysical model of a Hebbian synapse, Proc. Natl. Acad. Sci. USA 87 (17) (1990) 6718–6722.
[60] T.J. O'Dell, R.D. Hawkins, E.R. Kandel, O. Arancio, Tests of the roles of two diffusible substances in long-term potentiation: evidence for nitric oxide as a possible early retrograde messenger, Proc. Natl. Acad. Sci. USA 88 (24) (1991) 11285–11289.
[61] P. Marsalek, Neural code for sound localization at low frequencies, Neurocomputing 38–40 (2001) 1443–1452.
[62] P. Marsalek, J. Kofranek, Sound localization at high frequencies and across the frequency range, Neurocomputing 58–60 (2004) 999–1006.
[63] Y. Dan, M.M. Poo, Spike timing-dependent plasticity of neural circuits, Neuron 44 (1) (2004) 23–30.
[64] S. Song, Hebbian learning and spike-timing dependent plasticity, in: J. Feng (Ed.), Computational Neuroscience: A Comprehensive Approach, CRC Press, Boca Raton, FL, USA, 2004, pp. 324–358.
[65] N. Caporale, Y. Dan, Spike timing-dependent plasticity: a Hebbian learning rule, Annu. Rev. Neurosci. 31 (2008) 25–46.
[66] J. Sjöström, W. Gerstner, Spike-timing dependent plasticity, Scholarpedia 5 (2) (2010) 1362.
[67] J.L. Eriksson, A.E.P. Villa, Learning of auditory equivalence classes for vowels by rats, Behav. Process. 73 (3) (2006) 348–359.
[68] R. von der Heydt, E. Peterhans, G. Baumgartner, Illusory contours and cortical neuron responses, Science 224 (4654) (1984) 1260–1262.
[69] T.J. Sejnowski, The book of Hebb, Neuron 24 (4) (1999) 773–776.
[70] M. Tsodyks, H. Markram, The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proc. Natl. Acad. Sci. USA 94 (2) (1997) 719–723.
[71] T. Morimoto, X.H. Wang, M.M. Poo, Overexpression of synaptotagmin modulates short-term synaptic plasticity at developing neuromuscular junctions, Neuroscience 82 (4) (1998) 969–978.
[72] W. Gerstner, R. Kempter, J.L. van Hemmen, H. Wagner, A neuronal learning rule for sub-millisecond temporal coding, Nature 383 (1996) 76–78.
[73] L.N. Cooper, STDP: spiking, timing, rates and beyond, Front. Synaptic Neurosci. 2 (14) (2010) 00014-13.
[74] G.E. Hinton, Connectionist learning procedures, Artif. Intell. 40 (1) (1989) 185–234.
[75] R.M. Klein, Donald Olding Hebb, Scholarpedia 6 (4) (2011) 3719.

Eduard Kuriscak received his M.D. in the field of General Medicine from Palacky University, Olomouc, in 1998, and his Ph.D. in Normal and Pathological Human Physiology from Charles University, Prague, in 2002. He is also trained as an applied physicist. He was awarded the Hlavka scholarship for young scientists of the Czech Republic. He was a coordinator of several system administration and software development projects, and has been an assistant professor at the Department of Physiology, First Medical Faculty, Charles University, Prague, since 1998.

Petr Marsalek is a professor of Biophysics and Computer Science in Medicine at Charles University in Prague. There he received his M.D., in 1990, his B.S. in Mathematics and Computer Science, in 1992, and his Ph.D. in Biophysics, in 1999. His postdoctoral stays have included the California Institute of Technology and Johns Hopkins University in the USA, and the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany. He is affiliated with the Department of Pathological Physiology at the First Medical Faculty of Charles University in Prague and part time with the Czech Technical University in Prague.


Julius Stroffek is currently working towards his Ph.D.
at the Department of Pathological Physiology, First
Medical Faculty, Charles University in Prague. He
received his master's degree in Computer Science, in
2004, at the Faculty of Mathematics and Physics,
Charles University in Prague, under the supervision of
Petr Marsalek. After his graduation he has been working as a software engineer at multinational software
companies.

Peter G. Toth is currently working towards his Ph.D. at
the Department of Pathological Physiology, First Medical Faculty, Charles University in Prague. He received
his master's degree in Computer Science, in 2011, at the
Faculty of Mathematics and Physics, Charles University
in Prague, under the supervision of Petr Marsalek.
