Neurocomputing
a Institute of Physiology, First Medical Faculty, Charles University in Prague, Albertov 5, CZ-128 00 Praha 2, Czech Republic
b Institute of Pathological Physiology, First Medical Faculty, Charles University in Prague, U Nemocnice 5, CZ-128 53 Praha 2, Czech Republic
c Czech Technical University in Prague, Zikova 4, CZ-166 36 Praha 6, Czech Republic
Article info

Article history: Received 7 July 2014; Received in revised form 26 September 2014; Accepted 9 November 2014; Available online 27 November 2014.
Communicated by W.L. Dunin-Barkowski.

Keywords: Artificial neural networks; Biological neural networks; Hebb learning; Hebb rule; Hebb synapse; Synaptic plasticity.

Abstract

In 1949 Donald Olding Hebb formulated a hypothesis describing how neurons excite each other and how the efficiency of this excitation subsequently changes with time. In this paper we present a review of this idea. We evaluate its influence on the development of artificial neural networks and on the way we describe biological neural networks. We explain how Hebb's hypothesis fits into the research of both its time and the present. We highlight how it has gone on to inspire many researchers working on artificial neural networks. The underlying biological principles that corroborate this hypothesis, discovered much later, are also discussed, in addition to recent results in the field and further possible directions of synaptic learning research.

© 2014 Elsevier B.V. All rights reserved.
1. Introduction

In 2014 we commemorate 110 years since the birth of Donald Olding Hebb and 65 years since the first publication of his influential book The Organization of Behavior [1]. In the first half of the twentieth century, one of the most tantalizing questions in the field of mind and brain research was how the physiology of the brain relates to the high-level behavior of mammals, and especially of humans. Pavlov proposed conditioned reflexes [2] as an explanation of how a line of neural excitation connects some external excitation to the triggered target muscle along the way. In 1943, McCulloch and Pitts [3] used ideas coined almost a decade earlier by Turing [4] to formulate a logical calculus as a framework of neural computation. A few examples of similar hypotheses include that of Jeffress, who during his sabbatical at the California Institute of Technology in 1948 proposed a neural circuit for sound localization [5]; the proposed circuit was found in birds 40 years later [6]. Similarly, in 1947 Laufberger [7] proposed the impulse theory ("vzruchová theorie") of neural excitation.
Abbreviations: ANN, artificial neural networks; AMPA, α-amino-3-hydroxy-5-methyl-4-isoxazole-propionic acid; BNN, biological neural networks; BCM, Bienenstock, Cooper, and Munro; CA3, cornu Ammonis no. 3 (area in the hippocampus); LTD, long-term depression; LTP, long-term potentiation; NMDA, N-methyl-D-aspartate; NO, nitric oxide; STDP, spike timing dependent plasticity
⁎ Corresponding author. Tel.: +420 224 96 8413.
E-mail address: Eduard.kuriscak@l.cuni.cz (E. Kuriscak).
http://dx.doi.org/10.1016/j.neucom.2014.11.022
The physiological mechanisms of synaptic change assumed by Hebb's postulate were not yet known in detail [13]. Using the above postulate allowed Hebb to develop his theory further without discussing the processes involved in changing synaptic transmission. A further simplified and popular version of this assumption is "Neurons that fire together, wire together."
Today, the phenomena of neural adaptation, training, learning and working memory seem almost trivial facts of our everyday lives, as we continually improve our cognitive skills. By applying this simple rule in the study of artificial neural networks (ANN), we can obtain powerful models of neural computation that might be close to the function of structures found in the neural systems of many diverse species.
This review paper is divided into the following parts: after this initial introduction, the application of the Hebb rule in selected neural networks is outlined. This is followed by a brief review of how biological synapses, neurons and networks function, and by a closing section consisting of a summary and future directions. A commented list of relevant web links can be found in the appendix.
$$\tau_w \frac{dw_i}{dt} = f(w_i)\, r^{\mathrm{out}}\, r^{\mathrm{in}}_i,$$

where the weight $w_i$ of the $i$-th input changes with time $t$ and time constant $\tau_w$. This constant includes both the change rate and the pre-set strength factor. According to the terminology of [21,22], $\tau_w^{-1}$ is the constant in the term correlating post- and pre-synaptic rates. The right side of this ordinary differential equation contains an unspecified function of the weight, $f(w_i)$; $r^{\mathrm{out}}$ and $r^{\mathrm{in}}_i$ are the respective output and input rates. Other equivalent formulations of the Hebb rule can be found in [21].
Several properties of the Hebb rule are important for its implementation in ANN. Six properties are summarized by Gerstner and Kistler [22]: (1) locality, meaning that the rule is restricted to the input and output neurons of the given synapse (some supervised learning rule variants, however, are not local); (2) cooperativity, the requirement of simultaneous activity of both neurons; (3) boundedness of the weight values, preventing their divergence; (4) competition, meaning that some synapses are strengthened at the expense of others, which are weakened; (5) long-term stability, a natural requirement for the dynamic stability of the neural network system (this is, however, only one side of the dilemma of stability versus plasticity); and (6) synaptic depression and facilitation, or weight decrease and increase, which is perhaps the most important property.
Synapses must be able to change in both directions or, as happens at extremal values in biological neurons, new connections must grow or existing links must be disconnected. An obvious variant accommodating both directions of change is

$$\tau_w \frac{dw_i}{dt} = f(w_i)\,\bigl(r^{\mathrm{out}} - r^{\mathrm{out}}_0\bigr)\,\bigl(r^{\mathrm{in}}_i(t) - r^{\mathrm{in}}_{i,0}\bigr),$$

where $r^{\mathrm{out}}_0$ and $r^{\mathrm{in}}_{i,0}$ are thresholds for the output and input rates, respectively. This variant is called the Hebb rule with post-synaptic and pre-synaptic gating [22]. Next, we will look into how the learning process using the Hebb rule is accomplished in selected ANNs.
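To make the two rate-based formulations above concrete, the following minimal NumPy sketch integrates both the plain and the gated update for a single linear unit. It is not the formulation used in [21,22]: the choice f(w_i) = eta (a constant), the function names, thresholds and all parameter values are illustrative assumptions. Note how the positive feedback between output rate and weights makes the weights grow without bound, which is exactly why the boundedness and competition properties listed above are needed.

# Minimal sketch of the rate-based Hebb rule, plain and gated variants (assumptions noted above).
import numpy as np

def hebb_step(w, r_in, r_out, eta=0.001, dt=1.0, tau_w=100.0):
    # tau_w * dw_i/dt = f(w_i) * r_out * r_in_i, with f(w_i) = eta assumed constant
    return w + (eta * r_out * r_in) * dt / tau_w

def gated_hebb_step(w, r_in, r_out, r_in_0, r_out_0, eta=0.001, dt=1.0, tau_w=100.0):
    # tau_w * dw_i/dt = f(w_i) * (r_out - r_out_0) * (r_in_i - r_in_0_i)
    return w + (eta * (r_out - r_out_0) * (r_in - r_in_0)) * dt / tau_w

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.1, size=10)        # initial weights of one output unit
r_in = rng.uniform(0.0, 50.0, size=10)    # pre-synaptic firing rates (Hz)
for _ in range(50):
    r_out = float(w @ r_in)               # output rate of a linear unit
    w = gated_hebb_step(w, r_in, r_out, r_in_0=10.0, r_out_0=5.0)
print(np.round(w, 3))                     # weights grow steadily: divergence without bounding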
2.2. Feed-forward networks
The first published model of a neural network with learning capability was the perceptron [25]. The network was inspired by the putative biological neural circuits involved in visual perception. Although the change of synaptic efficiency proposed by Hebb was not used in the perceptron, Hebb's theory was discussed in the paper; it was criticized mainly for not offering a model of behavior and for only attempting to build a bridge between biophysics and psychology. The simplified model of the perceptron consists of three layers: a sensory cell layer, an association unit layer and a response unit layer. The proposed learning rule only adjusts the values of the synapses between the first and second layers.
The first concept of neuronal learning in networks was demonstrated in the ADALINE (Adaptive Linear Neuron, later Adaptive Linear Element) network, an example of a perceptron, i.e. a single-layer network [26]. The authors, Widrow and Hoff, used the steepest gradient descent technique to search for extreme values of a function. This procedure is similar to the back-propagation learning algorithm for multi-layer feed-forward networks, developed in 1986 [19]. Widrow and Hoff formulated the error function as the difference between the obtained neuron output and the desired neuron output. They then minimized the square of the defined error function, as is done in the least-squares optimization method. A simplified formula for only one neuron output $y$ and only one pattern $\mathbf{x} = (x_1, x_2, \dots, x_n)$ can be written as follows:

$$y(\mathbf{x}) = a_0 + a_1 x_1 + a_2 x_2 + \dots + a_n x_n,$$
$$E^2 = \bigl(d(\mathbf{x}) - y(\mathbf{x})\bigr)^2,$$

where $d(\mathbf{x})$ is the desired output.
Widrow and Hoff used an iterative form of this supervised learning rule, updating the weights at the $t$-th iteration with a given learning rate. Instead of weights being directly assigned their target values, they approach those targets with each iteration. The weight change in the next iteration is calculated from not one but several patterns. This can potentially result in a clash, when one pattern requires a weight to increase while another requires it to decrease.
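The iterative Widrow-Hoff step described above can be sketched as follows. The pattern-by-pattern update order, the symbol eta for the learning rate and the toy data set are illustrative assumptions, not the original ADALINE implementation.

# Sketch of the Widrow-Hoff (ADALINE) gradient step on the squared error (assumptions noted above).
import numpy as np

def adaline_epoch(a, X, d, eta=0.01):
    # a[0] is the bias a_0, a[1:] are the input weights a_1..a_n
    for x, target in zip(X, d):
        y = a[0] + a[1:] @ x              # y(x) = a_0 + a_1 x_1 + ... + a_n x_n
        err = target - y                  # d(x) - y(x)
        a[0]  += eta * err                # gradient step on E^2 = (d(x) - y(x))^2
        a[1:] += eta * err * x
    return a

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([-1., -1., -1., 1.])         # a small linearly separable target
a = np.zeros(3)
for _ in range(50):
    a = adaline_epoch(a, X, d)
print(np.round(a, 3), np.round(X @ a[1:] + a[0], 3))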
Willshaw [27] described a two-layer model of an associative neural network with binary-valued neural outputs and synaptic weights (taking only the values 0 and 1). The name of this model, non-holographic associative memory, stems from its contrast with holographic models of biological memory.
In the Willshaw model the weights accumulate the co-activations of the stored input–output pattern pairs,

$$w_{ij} = \sum_{p} x_i^p y_j^p,$$

clipped to binary values. A related modification addresses the unlimited growth of Hebbian weights: the term $x_j x_i$ in the Hebb rule is replaced by $x_j x'_i$, where $x'_i = x_i - x_j w_i$ is considered as the effective input to the unit, giving

$$\Delta w_{ij} = x_j\,(x_i - x_j w_i).$$

This additional feedback controls the growth of the weights and prevents their divergence.
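A minimal sketch of this effective-input feedback (an Oja-type normalization), written for a single linear unit with output y = w . x, is given below. The learning rate, the iteration count and the single-unit setting are illustrative assumptions.

# Sketch of the effective-input (Oja-like) feedback that keeps the weight vector bounded.
import numpy as np

def oja_step(w, x, eta=0.01):
    y = w @ x
    x_eff = x - y * w              # effective input x' = x - y*w (subtractive feedback)
    return w + eta * y * x_eff     # Hebb term computed with the effective input

rng = np.random.default_rng(1)
w = 0.1 * rng.normal(size=5)
for _ in range(5000):
    x = rng.normal(size=5)
    w = oja_step(w, x)
print(round(float(np.linalg.norm(w)), 3))   # the norm stays bounded (close to 1) instead of diverging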
$$w_i^{t+1} = w_i^t + y\,(y - \theta_M)\,x_i - \epsilon\, w_i^t, \qquad (10)$$

where the weight grows with the correlated pre- and post-synaptic activity and decays with the factor $\epsilon$. The function $\theta_M = E_p[y/y_0]$ sets the modification (sliding) threshold according to the mean activity taken over all the $p$ patterns. Synapses are facilitated or depressed according to this equation.
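As a rough illustration only, the following sketch implements a sliding-threshold (BCM-style) update in which the threshold tracks the mean recent activity and the weights decay slowly. The exact functional form, the decay factor epsilon, the averaging constant and all numerical values are assumptions made for this example; they are not the formulation of [23,24].

# Generic BCM-style update with a sliding modification threshold (illustrative assumptions noted above).
import numpy as np

def bcm_step(w, x, theta, eta=0.001, eps=1e-4, tau_theta=100.0, y0=1.0):
    y = float(w @ x)
    dw = eta * y * (y - theta) * x - eps * w    # facilitation above theta, depression below it
    theta += (y / y0 - theta) / tau_theta       # sliding threshold follows the mean activity
    return w + dw, theta

rng = np.random.default_rng(2)
w, theta = rng.uniform(0.0, 0.1, size=8), 0.5
for _ in range(10000):
    x = rng.uniform(0.0, 1.0, size=8)
    w, theta = bcm_step(w, x, theta)
print(np.round(w, 3), round(theta, 3))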
It has been shown that if cycles of sparse activity are stored in a model with a Hopfield network topology and Willshaw-model-like neurons and synapses, where both neuron outputs and synapses are binary (0 or 1), the capacity of the model can be higher than that of the Hopfield network. In the Hopfield network, capacity scales with the number of neurons n and is proportional to 0.15n. The properties of sparse networks were studied under conditions where the percentage of active neurons in the whole cycle approached that observed in biological networks and was maintained at around 1.5% [38].
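The quoted capacity of roughly 0.15n random patterns can be illustrated with a small Hebbian simulation of a Hopfield-type network. Synchronous updates, plus/minus-one coding and the chosen sizes are illustrative simplifications, not the setup of [14,38].

# Sketch of Hebbian (outer-product) storage and recall in a Hopfield-type network.
import numpy as np

def store(patterns):
    n = patterns.shape[1]
    W = (patterns.T @ patterns) / n       # Hebbian outer-product rule
    np.fill_diagonal(W, 0.0)              # no self-connections
    return W

def recall(W, x, steps=20):
    for _ in range(steps):                # synchronous threshold updates
        x = np.sign(W @ x)
        x[x == 0] = 1
    return x

rng = np.random.default_rng(3)
n, p = 200, 20                            # p = 0.1 n, below the ~0.15 n capacity limit
patterns = rng.choice([-1.0, 1.0], size=(p, n))
W = store(patterns)
probe = patterns[0].copy()
probe[:20] *= -1                          # corrupt 10% of the bits
print(np.mean(recall(W, probe) == patterns[0]))   # fraction of bits recovered (close to 1.0)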
We have used another variant of the Hebb learning rule to store
the patterns in the network in a similar way to that of the
Willshaw model. When many indexes in the rule are omitted,
the form of the Hebb rule can be clearly seen. We use weight
modication in cyclic activation sequences:
$$w_{ij} = H\!\left(\sum_{k}\left(x_i^{k,l_k}\, x_j^{k,1} + \sum_{q=1}^{l_k-1} x_i^{k,q}\, x_j^{k,q+1}\right)\right), \qquad (11)$$

where $H$ is the Heaviside step function clipping the weights to binary values, $l_k$ is the length of the $k$-th cycle and $(x_1^{k,q}, x_2^{k,q}, \dots, x_n^{k,q})$ denotes the activities of individual neurons within iteration $q$ of the cycle in pattern $k$. The learning rule itself is the same as in the Willshaw model, except that it is rewritten in a form that includes neural activities within cycles. Further details can be found in [39,40]. Synaptic facilitation and depression can additionally enlarge the network capacity, as demonstrated in [41–43].
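In the spirit of Eq. (11), the sketch below stores one sparse activation cycle with binary clipped weights and replays it. The recall step (activating the k most excited neurons) and the data shapes are illustrative assumptions, not the implementation of [39,40].

# Sketch of storing and replaying a cyclic activation sequence with binary Willshaw-like weights.
import numpy as np

def store_cycles(cycles, n):
    # cycles: list of binary arrays of shape (l_k, n), one array per stored cycle k
    W = np.zeros((n, n))
    for cyc in cycles:
        for q in range(len(cyc)):
            pre = cyc[q]
            post = cyc[(q + 1) % len(cyc)]           # wrap-around closes the cycle
            W = np.maximum(W, np.outer(post, pre))   # Heaviside-like clipping to {0, 1}
    return W

def recall_step(W, x, k_active):
    # sparse recall: activate the k most excited neurons
    h = W @ x
    y = np.zeros_like(x)
    y[np.argsort(h)[-k_active:]] = 1.0
    return y

rng = np.random.default_rng(4)
n, k_active, length = 100, 3, 5
cycle = np.zeros((length, n))
for q in range(length):                              # a random sparse cycle (3% activity)
    cycle[q, rng.choice(n, size=k_active, replace=False)] = 1.0
W = store_cycles([cycle], n)
x = cycle[0]
for _ in range(length):                              # replay one full loop from the first state
    x = recall_step(W, x, k_active)
print(np.array_equal(x, cycle[0]))                   # back at the starting pattern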
A normalization step keeps the weight vector bounded after each update:

$$w_i(t+1) = \frac{w_i(t) + \alpha(t)\, x(t)}{\bigl\| w_i(t) + \alpha(t)\, x(t) \bigr\|}. \qquad (12)\text{–}(13)$$
Table 1
Neural networks with biological correlates. Table 1 shows the hypothetical biological neural mechanisms used as paradigms for the construction of artificial neural networks.

Name
Hopfield ANN
Perceptron
ADALINE
Holographic model
BCM learning rule
Kohonen ANN
Cyclic ANN
Even before the advent of the ANN, solutions to its typical tasks had already been devised, for example: (1) the classification of data was solved using statistical factor analysis, cluster analysis or Bayesian classification; (2) the recognition of patterns could be achieved by covariance or correlation analysis as well as their successors such as principal component analysis; (3) curves could be fitted by regression methods or their generalized variants, such as the generalized least squares method, and also by polynomial and spline fitting; and (4) time series forecasts could be performed by (nonlinear) auto-regressive moving average methods. These examples are mostly statistical methods with overlapping types of tasks and solutions. The ANN solution to these classical problems is therefore sometimes more robust, or it is used as a method of abstraction of the original problem, yielding a more concise solution.
As a practical example, consider the task of classifying vehicles on a highway. A toll gate is equipped with several standard industrial laser scanners, which have a given angular resolution and frame rate. Motor vehicles pass under the gate at a certain cruising speed. The task of the software is to classify vehicles into several categories: passenger cars, buses and trucks of a particular size and weight. This task includes both pattern recognition and classification. The task is accomplished by a three-layer feed-forward ANN. Using the ANN makes it possible to combine both vehicle shape recognition and classification into one algorithm [44].
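As an illustration only, a three-layer feed-forward classifier of the kind mentioned above can be trained on synthetic feature vectors standing in for laser-scanner profiles. The feature construction, class labels and the scikit-learn model are invented for this example; they are not the system of [44].

# Illustrative three-layer (one hidden layer) feed-forward classifier on synthetic "scanner profiles".
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

def synthetic_profile(vehicle_class, n_samples=200, n_features=32):
    # crude stand-in for a scanned cross-section: cars low, buses high, trucks high with a lower cab
    base = {"car": 1.6, "bus": 3.2, "truck": 3.8}[vehicle_class]
    profiles = base + 0.3 * rng.normal(size=(n_samples, n_features))
    if vehicle_class == "truck":
        profiles[:, :8] -= 1.5            # lower cab section at the front of the profile
    return profiles

X = np.vstack([synthetic_profile(c) for c in ("car", "bus", "truck")])
y = np.repeat(["car", "bus", "truck"], 200)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)                             # input, one hidden and output layer: three layers in total
print(clf.score(X, y))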
Before some of the data mining techniques were made possible by numerical computation, it was known that some problems would be harder to solve than others. This led to the concept of ill-posed and well-posed problems, and to the subsequent development of regularization methods to convert the former into the latter. Regularization and optimization methods are therefore useful alternatives to ANN application. One example of a problem generally considered harder is the task of inferring a 3D structure based on 2D information. This task can alternatively be modified into an intermediary problem, creating a 2½-D object sketch based on projection information of a 2D object [18]. Another example of a harder problem is the production of meaningful sentences based on grammatical rules. When attempts are made to cross beyond the mathematical formulation of such hard problems using ANN, we find that these problems relate to questions of artificial intelligence. Yet it seems that the decision as to which problems belong to the field of artificial intelligence is rather arbitrary and has changed more frequently over the years than the problems solved by classical ANN.
Another future challenge for the ANN is the demand for higher modularity and scalability. There is a need for modular interfaces, which would make it possible to connect the output from one neural network to the input of another. In addition, not many ANNs to date contain modules within modules, or even several levels of nested modules. A major difference between ANN and BNN is that in the latter, modularity of both interconnections (in neural pathways) and nesting of modules (in neural nuclei, with specific reference to the columnar organization of the cerebral cortex) are ubiquitous.
4.2. Future perspectives of BNN
Acknowledgments
This work was supported by the graduate students' research program SVV 2014 no. 260 033 to P.M., J.S. and P.G.T. and by the PRVOUK program no. 205024 to P.M., both programs at Charles University in
Prague. Thanks to Elisa Brann, Libor Husnik, Jiri Kofranek and
Martin Vokurka.
Appendix A
We gathered several web reference sources, mostly of computer source code and open-source books and papers. Most of the sources listed below were visited on July 22, 2014:

1. MATLAB® library sources of ANN simulations are at http://www.mathworks.com/products/neural-network/.
References
[1] D.O. Hebb, The Organization of Behavior, Wiley, New York, 1949. Reprinted in 2002 by Lawrence Erlbaum Associates, NJ.
[2] I.P. Pavlov, Conditioned Reflexes, an Investigation of the Physiological Activity of the Cerebral Cortex, Oxford University Press, London, UK, 1927.
[3] W.S. McCulloch, W. Pitts, A logical calculus of the ideas immanent in nervous activity, Bull. Math. Biophys. 5 (4) (1943) 115–133.
[4] A.M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. Lond. Math. Soc. 42 (1936) 230–265.
[5] L.A. Jeffress, A place theory of sound localization, J. Comput. Physiol. Psychol. 41 (1) (1948) 35–39.
[6] C.E. Carr, M. Konishi, Axonal delay lines for time measurement in the owl's brainstem, Proc. Natl. Acad. Sci. USA 85 (21) (1988) 8311–8315.
[7] V. Laufberger, Vzruchova theorie (The Impulse Theory), Czech Medical Association, Prague, 1947 (in Czech).
[8] L.J. Kohout, A Perspective on Intelligent Systems: A Framework for Analysis
and Design, Chapman and Hall, London, New York, 1990.
[9] E.D. Adrian, Y. Zotterman, The impulses produced by sensory nerve-endings: Part II. The response of a single end-organ, J. Physiol. Lond. 61 (2) (1926) 151–171.
[10] N. Wiener, Cybernetics: Or Control and Communication in the Animal and the
Machine, Hermann and Cie, 1948 (2nd revised ed., Wiley, New York, 1961).
[11] W. James, Pragmatism: A New Name for Some Old Ways of Thinking, Longman
Green and Company, New York, NY, 1907.
[12] C.U.A. Kappers, G.C. Huber, E.C. Crosby, The Comparative Anatomy of the
Nervous System of Vertebrates, Including Man, Macmillan, New York, NY,
1936.
[13] R.E. Brown, P.M. Milner, The legacy of Donald O. Hebb: more than the Hebb synapse, Nat. Rev. Neurosci. 4 (12) (2003) 1013–1019.
[14] J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (8) (1982) 2554–2558.
[15] J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA 81 (10) (1984) 3088–3092.
[16] J.J. Hopfield, Hopfield network, Scholarpedia 2 (5) (2007) 1977.
[17] J.R. Movellan, Contrastive Hebbian learning in the continuous Hopfield model, in: Proceedings of the 1990 Connectionist Models Summer School, 1990, pp. 10–17.
[18] D. Marr, Vision: A Computational Investigation into the Human Representation
and Processing of Visual Information, Henry Holt and Company, New York, NY,
1982.
[19] D.E. Rumelhart, G.E. Hinton, R.J. Williams, Learning representations by back-propagating errors, Nature 323 (6088) (1986) 533–536.
[20] E.R. Kandel, In Search of Memory: The Emergence of a New Science of Mind,
W.W. Norton and Company, New York, NY, 2007.
[21] W. Gerstner, W.M. Kistler, Neuron Models: Single Neurons, Populations,
Plasticity, Cambridge University Press, Cambridge, MA, 2002.
[22] W. Gerstner, W.M. Kistler, Mathematical formulations of Hebbian learning, Biol. Cybern. 87 (5–6) (2002) 404–415.
[23] E.L. Bienenstock, L.N. Cooper, P.W. Munro, Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex, J. Neurosci. 2 (1) (1982) 32–48.
[24] L. Benuskova, W.C. Abraham, STDP rule endowed with the BCM sliding threshold accounts for hippocampal heterosynaptic plasticity, J. Comput. Neurosci. 22 (2) (2007) 129–133.
[25] F. Rosenblatt, The perceptron, a probabilistic model for information storage and organization in the brain, Psychol. Rev. 65 (6) (1958) 386–408.
[26] B. Widrow, M.E. Hoff Jr., Adaptive switching circuits, IRE WESCON Conv. Rec. 4 (1960) 96–104.
[27] D.J. Willshaw, O.P. Buneman, H.C. Longuet-Higgins, Non-holographic associative memory, Nature 222 (5197) (1969) 960–962.
[28] K.H. Pribram, The neurophysiology of remembering, Sci. Am. 220 (1) (1969) 73–86.
[29] D. Golomb, N. Rubin, H. Sompolinsky, Willshaw model: associative memory with sparse coding and low firing rates, Phys. Rev. A 41 (4) (1990) 1843–1854.
[30] B. Graham, D. Willshaw, Probabilistic synaptic transmission in the associative net, Neural Comput. 11 (1) (1999) 117–137.
[31] P. Mazzoni, R.A. Andersen, M.I. Jordan, A more biologically plausible learning rule for neural networks, Proc. Natl. Acad. Sci. USA 88 (10) (1991) 4433–4437.
[32] A.G. Barto, P. Anandan, Pattern-recognizing stochastic learning automata, IEEE Trans. Syst. Man Cybern. 15 (3) (1985) 360–375.
[33] T.D. Sanger, Optimal unsupervised learning in a single-layer linear feedforward neural network, Neural Netw. 2 (6) (1989) 459–473.
[34] E. Oja, Simplified neuron model as a principal component analyzer, J. Math. Biol. 15 (3) (1982) 267–273.
[35] E. Ising, Beitrag zur Theorie des Ferromagnetismus, Z. F. Phys. A 31 (1) (1925) 253–258 (in German).
[36] D. Sherrington, S. Kirkpatrick, Solvable model of a spin-glass, Phys. Rev. Lett. 35 (26) (1975) 1792–1796.
[37] D.J. Amit, Modeling Brain Function. The World of Attractor Neural Networks,
Cambridge University Press, Cambridge, MA, 1989.
[38] E.T. Rolls, A. Treves, Neural Networks and Brain Function, Oxford University
Press, New York, NY, 1998.
[39] J. Stroffek, E. Kuriscak, P. Marsalek, Pattern storage in a sparsely coded neural network with cyclic activation, Biosystems 89 (1–3) (2007) 257–263.
[40] J. Stroffek, P. Marsalek, Short-term potentiation effect on pattern recall in sparsely coded neural network, Neurocomputing 77 (1) (2012) 108–113.
[41] J. Torres, J. Cortes, J. Marro, H. Kappen, Attractor neural networks with activity-dependent synapses: the role of synaptic facilitation, Neurocomputing 70 (10–12) (2007) 2022–2025.
[42] D. Bibitchkov, J.M. Herrmann, T. Geisel, Pattern storage and processing in attractor networks with short-time synaptic dynamics, Netw. Comput. Neural Syst. 13 (1) (2002) 115–129.
[43] D. Bibitchkov, J.M. Herrmann, T. Geisel, Effects of short-time plasticity on the associative memory, Neurocomputing 44–46 (2002) 329–335.
[44] J. Stroffek, E. Kuriscak, P. Marsalek, Highway toll enforcement, IEEE Veh. Technol. Mag. 5 (4) (2010) 56–65.
[45] T. Kohonen, Self-organized formation of topologically correct feature maps, Biol. Cybern. 43 (1) (1982) 59–69.
[46] T.H. Brown, A.M. Zador, Hippocampus, in: G.M. Shepherd (Ed.), The Synaptic Organization of the Brain, Oxford University Press, New York, 1990, pp. 346–388.
[47] H.R. Wilson, Lyapunov functions and memory, in: Spikes, Decisions and Actions, Oxford University Press, New York, 1999, pp. 223–250.
[48] J.C. Eccles, Some aspects of Sherrington's contribution to neurophysiology, Notes Rec. R. Soc. Lond. (1957) 216–225.
[49] C.E. Shannon, A mathematical theory of communication, Bell Syst. Tech. J. 28 (4) (1949) 656–715.
[50] F. Santamaria, S. Wils, E.D. Schutter, G.J. Augustine, The diffusional properties of dendrites depend on the density of dendritic spines, Eur. J. Neurosci. 34 (4) (2011) 561–568.
[51] C. Sala, M. Segal, Dendritic spines: the locus of structural and functional plasticity, Physiol. Rev. 94 (1) (2014) 141–188.
[52] S. Cushing, T. Bui, P.K. Rose, Effect of nonlinear summation of synaptic currents on the input-output properties of spinal motoneurons, J. Neurophysiol. 94 (5) (2005) 3465–3478.
[53] P. Marsalek, C. Koch, J. Maunsell, On the relationship between synaptic input and spike output jitter in individual neurons, Proc. Natl. Acad. Sci. USA 94 (2) (1997) 735–740.
[54] E. Kuriscak, P. Marsalek, Model of neural circuit comparing static and adaptive synapses, Prague Med. Rep. 105 (4) (2004) 369–380.
[55] E. Kuriscak, P. Marsalek, J. Stroffek, Z. Wunsch, The effect of neural noise on spike time precision in a detailed CA3 neuron model, Comput. Math. Methods Med. 2012 (595398) (2012) 1–16.
[56] S.R. Kelso, A.H. Ganong, T.H. Brown, Hebbian synapses in hippocampus, Proc. Natl. Acad. Sci. USA 83 (14) (1986) 5326–5330.
[57] P. Marsalek, F. Santamaria, Investigating spike backpropagation induced Ca2+ influx in models of hippocampal and cortical pyramidal neurons, Biosystems 48 (1–3) (1998) 147–156.
[58] M. Mlcek, J. Neumann, O. Kittnar, V. Novak, Mathematical model of the electromechanical heart contractile system: regulatory subsystem physiological considerations, Physiol. Res. 50 (4) (2001) 425–432.
[59] A. Zador, C. Koch, T.H. Brown, Biophysical model of a Hebbian synapse, Proc. Natl. Acad. Sci. USA 87 (17) (1990) 6718–6722.
[60] T.J. O'Dell, R.D. Hawkins, E.R. Kandel, O. Arancio, Tests of the roles of two diffusible substances in long-term potentiation: evidence for nitric oxide as a possible early retrograde messenger, Proc. Natl. Acad. Sci. USA 88 (24) (1991) 11285–11289.
[61] P. Marsalek, Neural code for sound localization at low frequencies, Neurocomputing 38–40 (2001) 1443–1452.
[62] P. Marsalek, J. Kofranek, Sound localization at high frequencies and across the frequency range, Neurocomputing 58–60 (2004) 999–1006.
[63] Y. Dan, M.M. Poo, Spike timing-dependent plasticity of neural circuits, Neuron 44 (1) (2004) 23–30.
[64] S. Song, Hebbian learning and spike-timing dependent plasticity, in: J. Feng (Ed.), Computational Neuroscience: A Comprehensive Approach, CRC Press, Boca Raton, FL, USA, 2004, pp. 324–358.
[65] N. Caporale, Y. Dan, Spike timing-dependent plasticity: a Hebbian learning rule, Annu. Rev. Neurosci. 31 (2008) 25–46.
[66] J. Sjöström, W. Gerstner, Spike-timing dependent plasticity, Scholarpedia 5 (2) (2010) 1362.
[67] J.L. Eriksson, A.E.P. Villa, Learning of auditory equivalence classes for vowels by rats, Behav. Process. 73 (3) (2006) 348–359.
[68] R. von der Heydt, E. Peterhans, G. Baumgartner, Illusory contours and cortical neuron responses, Science 224 (4654) (1984) 1260–1262.
[69] T.J. Sejnowski, The book of Hebb, Neuron 24 (4) (1999) 773–776.
[70] M. Tsodyks, H. Markram, The neural code between neocortical pyramidal neurons depends on neurotransmitter release probability, Proc. Natl. Acad. Sci. USA 94 (2) (1997) 719–723.
[71] T. Morimoto, X.H. Wang, M.M. Poo, Overexpression of synaptotagmin modulates short-term synaptic plasticity at developing neuromuscular junctions, Neuroscience 82 (4) (1998) 969–978.
[72] W. Gerstner, R. Kempter, J.L. van Hemmen, H. Wagner, A neuronal learning rule for sub-millisecond temporal coding, Nature 383 (1996) 76–78.
[73] L.N. Cooper, STDP: spiking, timing, rates and beyond, Front. Synaptic Neurosci.
2 (14) (2010) 00014-13.
[74] G.E. Hinton, Connectionist learning procedures, Artif. Intell. 40 (1) (1989) 185–234.
[75] R.M. Klein, Donald Olding Hebb, Scholarpedia 6 (4) (2011) 3719.
Petr Marsalek is a professor of Biophysics and Computer Science in Medicine at Charles University in Prague. There he received his M.D. in 1990, his B.S. in Mathematics and Computer Science in 1992, and his Ph.D. in Biophysics in 1999. His postdoctoral stays have included the California Institute of Technology and Johns Hopkins University in the USA, and the Max Planck Institute for the Physics of Complex Systems in Dresden, Germany. He is affiliated with the Department of Pathological Physiology at the First Medical Faculty of Charles University in Prague and part-time with the Czech Technical University in Prague.
Peter G. Toth is currently working towards his Ph.D. at
the Department of Pathological Physiology, First Medical Faculty, Charles University in Prague. He received
his master's degree in Computer Science, in 2011, at the
Faculty of Mathematics and Physics, Charles University
in Prague, under the supervision of Petr Marsalek.