By Manuel Alfonseca
IBM Madrid Scientific Center
Paseo de la Castellana, 4
28046 Madrid (SPAIN)
© 1990 ACM 089791-371-X/90/0008/0002...$1.50

The response of a neuron is a procedure that computes the output of the neuron as a function of its inputs, the weights of the input connections and the threshold of the neuron. Usually, the response of a neuron can be expressed in the following way:

      f(Σ wi·xi − θ)        (Equation 1)

where xi is the set of inputs to the neuron, wi are the respective weights of the input connections, θ is the neuron threshold and f is the response function.

In typical neural networks, connections are such that the neurons can be divided into a certain number of layers. Neurons in the first layer (the input layer) have inputs that do not come from other neurons, but from outside the neural network (from the environment). Neurons in the last layer (the output layer) have outputs that do not go to other neurons, but to the environment.

In general, any neuron in a neural network can provide an input (a connection) to any other neuron. Therefore, the network structure can be represented by a square n by n matrix, where n is the number of neurons in the network and the element i,j in the matrix is the weight of the connection between neuron i and neuron j. Nonexistent connections can be represented as connections of zero weight (since Equation 1 is not affected by those null connections).

The connection matrix represents the structure of the network. To include all the available information we need an additional vector with the thresholds of all the neurons in the network, given, of course, in the same order as in the matrix rows and columns.

Example

The following is an example of a neural network that computes the 'exclusive OR' operation:
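The matrix figure itself did not survive in this copy. As a sketch of the representation just described, the following Python fragment (rather than APL2) implements Equation 1 with a strict step response and iterates until the network stabilizes. The weight matrix is a plausible reconstruction consistent with the threshold vector 0 0 1 0 quoted in the text, not necessarily the paper's own values; neurons 1-2 are the inputs, 3 the hidden neuron, 4 the output.

```python
import numpy as np

# Hypothetical reconstruction of the XOR network (the paper's matrix
# figure is not reproduced here). W[i][j] = weight of the connection
# from neuron i to neuron j.
W = np.array([
    [0, 0, 1,  1],
    [0, 0, 1,  1],
    [0, 0, 0, -2],
    [0, 0, 0,  0],
], dtype=float)
TH = np.array([0.0, 0.0, 1.0, 0.0])   # threshold vector: 0 0 1 0

def response(z, W, TH):
    # Equation 1: f(sum_i wi*xi - theta), with f a strict step function
    return (z @ W - TH > 0).astype(float)

def compute(W, TH, inp):
    # Propagate until the state stabilizes. Adding the input vector back
    # restores the input neurons, whose computed value is always 0
    # because no connection feeds them.
    inp = np.array(inp, dtype=float)
    z = inp.copy()
    while True:
        a = z
        z = response(a, W, TH) + inp
        if np.array_equal(a, z):
            return z

for pattern in ([0, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0]):
    print(compute(W, TH, pattern).astype(int))
```

With these (assumed) weights, the steady states reproduce the results printed later in the text: the fourth neuron ends up holding the exclusive OR of the first two.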
while the threshold vector is equal to:

      0 0 1 0

COMPUTATION OF THE NETWORK OUTPUT

Let us assume that we have a neural network represented by a matrix and a threshold vector. How do we compute the output of the network as a result of receiving certain inputs?

We will represent the inputs as a vector of values which we will extend to the same length as the number of neurons in the network. This extension is easy. It is enough to assume that all the neurons have exactly one input, and to assign zero as the input value of those neurons that in actual fact did not have any input.

In the case of the previous example, there are two input neurons (corresponding to rows 1 and 2 of our matrix). Rows 3 and 4, which are not input neurons, should always have an input value of zero. Therefore, the input vector for the example will be of the form:

      x x 0 0

where the first row is the threshold neuron, the weights of whose connections are the negatives of the thresholds of the remaining neurons in the network. This neuron should always fire. Rows 2 and 3 correspond to the input neurons, row 4 to the hidden neuron and row 5 to the output neuron.

In the case of the example, there are two input neurons (corresponding to rows 2 and 3 of our matrix). Neuron 1 (the threshold neuron) should always receive an input of 1, to make sure it fires. Rows 4 and 5, which are not input neurons, should always have an input value of zero. Therefore, the input vector for the example will be of the form:

      1 x x 0 0

where x can be replaced by either zero or one. There are, therefore, four possible input vectors in the example.

The preceding function has a loop. This is due to the fact that each inner product propagates the effect of the input to the next accessible layer. The loop, which proceeds until the network stabilizes, will eliminate the transient stages and provide us with the steady-state result. In a neural network without feedback, the loop will be executed at most n times, where n is the number of layers in the network, usually equal to 3.

Let us assume that the matrix of the example network is contained in variable MATRIX, while the threshold vector is in variable TH. The following APL2 instructions compute the results of the network corresponding to all the possible inputs:

      IN←(0 0 0 0)(0 1 0 0)(1 0 0 0)(1 1 0 0)
      (⊂MATRIX TH) COMPUTE¨IN
 0 0 0 0  0 1 0 1  1 0 0 1  1 1 1 0

It will be noticed that the fourth element of the result is the exclusive OR of the first two elements, which are the inputs. The COMPUTE function also provides us with additional information, since it includes the final state of all the neurons in the network.

ANALOGIC NEURONS

The neurons in the previous examples were digital, since their output can only be zero or one. Analogic neurons can produce other outputs, such as any number in the [0,1] interval. For example, a commonly used response function for neural networks is:

      1/(1 + e^−x)

with appropriate corrections when the value obtained is too near one or zero. The following APL2 function computes the result of a neural network composed of neurons with this response function. The neural network is assumed to be fully represented by a single connectivity matrix.

[0] Z←CONEC COMPUTEP INPUT;A
[1] Z←INPUT
[2] L:A←Z
[3] Z←÷1+*-A+.×CONEC
[4] Z[(Z<0.2)/⍳ρZ]←0
[5] Z[(Z>0.8)/⍳ρZ]←1
[6] Z←INPUT+Z
[7] →(∨/A≠Z)/L
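The analogic computation above can be sketched in Python as follows. This is my reading of the COMPUTEP idea, under stated assumptions: the network is given by a single connectivity matrix in the threshold-neuron form, and the threshold and input neurons are re-asserted on every pass (since nothing feeds them). The 5-neuron XOR matrix is hypothetical, chosen so that the clamping at 0.2 and 0.8 drives every neuron to 0 or 1.

```python
import numpy as np

def compute_analog(conec, inp, clamped):
    # Sigmoid response 1/(1+e^-x), with outputs forced to 0 below 0.2
    # and to 1 above 0.8, iterated until the network stabilizes.
    inp = np.array(inp, dtype=float)
    z = inp.copy()
    while True:
        a = z
        z = 1.0 / (1.0 + np.exp(-(a @ conec)))  # response of every neuron
        z[z < 0.2] = 0.0                        # correction near zero
        z[z > 0.8] = 1.0                        # correction near one
        z[clamped] = inp[clamped]               # re-assert fixed inputs
        if np.array_equal(a, z):
            return z

# Hypothetical 5-neuron XOR network in the threshold-neuron form:
# neuron 0 always fires (input 1), neurons 1-2 are the inputs,
# neuron 3 is hidden, neuron 4 is the output. The first row carries
# the negated thresholds, scaled so the sigmoid saturates.
CONEC = np.array([
    [0, 0, 0, -15,  -5],
    [0, 0, 0,  10,  10],
    [0, 0, 0,  10,  10],
    [0, 0, 0,   0, -20],
    [0, 0, 0,   0,   0],
], dtype=float)

for x1 in (0, 1):
    for x2 in (0, 1):
        state = compute_analog(CONEC, [1, x1, x2, 0, 0], [0, 1, 2])
        print(x1, x2, '->', int(state[4]))
```

As with the digital network, the loop runs once per layer for a feedback-free network, and the final state vector contains every neuron's steady-state value, not just the output.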
[0] TEACHNRO;IN;OU;I;II
[1] IN←2 7ρ1 1 1 0 1 1 1 0 0 1 0 0 1 0
[2] IN←IN,[1]2 7ρ1 0 1 1 1 0 1 1 0 1 1 0 1 1
[3] IN←IN,[1]3 7ρ0 1 1 1 0 1 0 1 1 0 1 0 1 1 1 1 0 1 1 1 1
[4] IN←IN,[1]3 7ρ1 0 1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1
[5] OU←10 10ρ1,10ρ0
[6] LO:I←1+II←0
[7] L:→(OU[I;]≡(-LAYERS[3])↑CONEC COMPUTE(+/LAYERS)↑IN[I;])/L1
[8] BKPROP I
[9] II←II+1
[10] L1:I←I+1
[11] →(I≤↑ρIN)/L
[12] →(II≠0)/LO

The teaching algorithm is very simple. The back-propagation method is applied to the ten inputs one by one. At the end of the process, the first inputs are not guaranteed, since the changes made to the network to learn the last numbers may have modified the conditions for the first ones.

The preceding set of examples makes it clear that emulating neural networks in APL2 is very simple. Very general networks (with or without feedback) can be emulated with a five-line function. Learning procedures can also be implemented easily. Different methods can be tested and compared with very high productivity.

The only extension to APL2 that could improve the treatment of neural networks would be the implementation of sparse matrices. This is due to the fact that the matrices representing neural networks are usually very large and most of their values are equal to zero (since neurons in the same layer do not have common connections).
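The control structure of the TEACHNRO function (repeat full passes over the training set until one complete pass requires no corrections) can be sketched as follows. The per-sample update here is a simple perceptron rule standing in for BKPROP, whose listing does not appear in this excerpt, and the single-neuron AND data set is hypothetical; only the pass-repetition logic mirrors the APL2 function.

```python
import numpy as np

# Hypothetical training set: bias input plus two inputs, target = AND.
IN  = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
OUT = np.array([0, 0, 0, 1], dtype=float)
w   = np.zeros(3)

changed = True
while changed:                    # LO: start another full pass
    changed = False
    for x, t in zip(IN, OUT):     # apply the update to each input in turn
        y = float(x @ w > 0)
        if y != t:                # earlier inputs are "not guaranteed":
            w += (t - y) * x      # this change may undo a previous one,
            changed = True        # so another full pass is required

for x in IN:
    print(int(x @ w > 0))
```

As in the APL2 version (the II counter), the outer loop only stops when an entire pass produces no corrections, at which point every training input is reproduced correctly.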