

NEURAL NETWORKS IN APL

By Manuel Alfonseca
IBM Madrid Scientific Center
Paseo de la Castellana, 4
28046 Madrid (SPAIN)

ABSTRACT

Neural networks are fairly straightforward to program in a matrix-oriented language such as APL. The only general improvement that would benefit them would be the implementation of sparse matrices. Small networks can be trained quite easily using the standard procedures (back propagation, etc.).

INTRODUCTION

A neural network (also called a "connectionist system") is a set of elementary units, called neurons, mutually related by means of connections.

Each neuron has a certain number of inputs and a single output. The output, however, can divide itself to provide connections (inputs) to many other neurons. In addition, each neuron has an associated number (its threshold). Each connection also has an associated number (its weight).

The response of a neuron is a procedure that computes the output of the neuron as a function of its inputs, the weights of the input connections and the threshold of the neuron. Usually, the response of a neuron can be expressed in the following way:

    $f(\sum_i w_i x_i - \theta)$    (Equation 1)

where $x_i$ is the set of inputs to the neuron, $w_i$ are the respective weights of the input connections, $\theta$ is the neuron threshold and $f$ is the response function.

In typical neural networks, connections are such that the neurons can be divided into a certain number of layers. Neurons in the first layer (the input layer) have inputs that do not come from other neurons, but from outside the neural network (from the environment). Neurons in the last layer (the output layer) have outputs that do not go to other neurons, but to the environment. There may be zero to any number of intermediate layers (also called "hidden layers").

A neural network where at least one neuron sends a connection to another neuron in a preceding layer is a neural network with feedback. An interesting family of neural networks with feedback is called "Hopfield neural networks" (see reference 1).

A neural network with just two layers (one input layer and one output layer) and no feedback between them is called a "perceptron". In an important paper (reference 2) Minsky and Papert proved that it is impossible to generate the "exclusive OR" operation with a perceptron. This paper effectively put an end to all research in neural networks for several years. Current research usually uses neural networks with at least one intermediate layer.

MATRIX-VECTOR REPRESENTATION OF A NEURAL NETWORK

In general, any neuron in a neural network can provide an input (a connection) to any other neuron. Therefore, the network structure can be represented by a square n by n matrix, where n is the number of neurons in the network and the element i,j in the matrix is the weight of the connection from neuron i to neuron j. Nonexistent connections can be represented as connections of zero weight (since Equation 1 is not affected by those null connections).

The connection matrix represents the structure of the network. To include all the available information we need an additional vector with the thresholds of all the neurons in the network, given, of course, in the same order as in the matrix rows and columns.

Example
The following is an example of a neural network that computes the "exclusive OR" operation:



[Figure: the example network; the two input neurons receive the values marked x.]

All the neurons in the network have the following response function:

    $(\sum_j w_j x_j - \theta) > 0$

The preceding neural network has three layers: two input neurons, one hidden neuron and one output neuron.

The structure of this neural network can be represented by the following matrix:

    0 0 1  1
    0 0 1  1
    0 0 0 -2
    0 0 0  0

while the threshold vector is equal to:

    0 0 1 0

COMPUTATION OF THE NETWORK OUTPUT

Let us assume that we have a neural network represented by a matrix and a threshold vector. How do we compute the output of the network as a result of receiving certain inputs?

We will represent the inputs as a vector of values which we will extend to the same length as the number of neurons in the network. This extension is easy. It is enough to assume that all the neurons have exactly one input, and to assign zero as the input value of those neurons that in actual fact do not have any input.

In the case of the previous example, there are two input neurons (corresponding to rows 1 and 2 of our matrix). Rows 3 and 4, which are not input neurons, should always have an input value of zero. Therefore, the input vector for the example will be of the form:

    x x 0 0

where x can be replaced by either zero or one. There are, therefore, four possible input vectors in the example:

    (0 0 0 0) (0 1 0 0) (1 0 0 0) (1 1 0 0)

The output of the network can be computed by means of the following APL2 function:

    [0] Z←NETWRK COMPUTE INPUT;A;CONEC;THRESHOLD
    [1] (CONEC THRESHOLD)←NETWRK
    [2] Z←INPUT
    [3] L:A←Z
    [4] Z←(A+.×CONEC)>THRESHOLD
    [5] Z←INPUT∨Z
    [6] →(~A≡Z)/L

The left argument is the definition of the network, and is a general vector of two elements. The first is the connectivity matrix, the second is the threshold vector.

It will be noticed that the response function, applied to the whole neural network, reduces in this case to an inner product and a comparison.

The preceding function has a loop. This is due to the fact that each inner product propagates the effect of the input to the next accessible layer. The loop, which proceeds until the network stabilizes, will eliminate the transient stages and provide us with the steady-state result. In a neural network without feedback, the loop will be executed at most n times, where n is the number of layers in the network, usually equal to 3.

Let us assume that the matrix of the example network is contained in variable MATRIX, while the threshold vector is in variable TH. The following APL2 instructions compute the results of the network corresponding to all the possible inputs:

    IN←(0 0 0 0) (0 1 0 0) (1 0 0 0) (1 1 0 0)
    (⊂MATRIX TH) COMPUTE¨IN
    0 0 0 0  0 1 0 1  1 0 0 1  1 1 1 0

It will be noticed that the fourth element of the result is the exclusive OR of the first two elements, which are the inputs. The COMPUTE function also provides us with additional information, since it includes the final state of all the neurons in the network.
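The same steady-state computation is easy to mirror outside APL2. The following Python/NumPy sketch is not part of the original paper; the function and variable names are mine, but the matrix, the thresholds and the four inputs are copied from the example above:

    import numpy as np

    # Connection matrix of the XOR example: element (i, j) is the
    # weight of the connection from neuron i to neuron j.
    MATRIX = np.array([[0, 0, 1,  1],
                       [0, 0, 1,  1],
                       [0, 0, 0, -2],
                       [0, 0, 0,  0]])
    TH = np.array([0, 0, 1, 0])          # threshold vector

    def compute(conec, threshold, inp):
        """Iterate the step response until the network state stabilizes."""
        z = inp.copy()
        while True:
            a = z
            z = (a @ conec > threshold).astype(int)  # inner product + comparison
            z = np.maximum(inp, z)                   # re-apply the external inputs
            if np.array_equal(a, z):
                return z

    for x1, x2 in ((0, 0), (0, 1), (1, 0), (1, 1)):
        print(compute(MATRIX, TH, np.array([x1, x2, 0, 0])))
    # Prints the same four final states as above; the last element is x1 XOR x2.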



MATRIX REPRESENTATION OF A NEURAL NETWORK

If the response of all the neurons in a network is of the form indicated by Equation 1, the network will be equivalent to another network, where all the neurons in the original network are present, with the same connections and weights, but with zero threshold, and where an additional input neuron has been added, whose output is always 1, and which is connected to every neuron in the network by means of a connection whose weight is equal to minus the threshold of the target neuron in the original network. The proof of this assertion is obvious, just by looking at Equation 1.

Therefore, if the indicated hypothesis holds (which is usually the case) a neural network with n neurons and arbitrary thresholds can be considered equivalent to another neural network with n+1 neurons, all of them with zero threshold. Therefore, there exists a single-matrix representation of the total behavior of any neural network if the response of its neurons can be represented by means of Equation 1.

The neural network in the above example can be fully represented by the following matrix:

    0 0 0 -1  0
    0 0 0  1  1
    0 0 0  1  1
    0 0 0  0 -2
    0 0 0  0  0

where the first row is the threshold neuron, the weights of whose connections are the negatives of the thresholds of the remaining neurons in the network. This neuron should always fire. Rows 2 and 3 correspond to the input neurons, row 4 to the hidden neuron and row 5 to the output neuron.

In the case of the example, there are two input neurons (corresponding to rows 2 and 3 of our matrix). Neuron 1 (the threshold neuron) should always receive an input of 1, to make sure it fires. Rows 4 and 5, which are not input neurons, should always have an input value of zero. Therefore, the input vector for the example will be of the form:

    1 x x 0 0

where x can be replaced by either zero or one. There are, therefore, four possible input vectors in the example:

    (1 0 0 0 0) (1 0 1 0 0) (1 1 0 0 0) (1 1 1 0 0)

The output of the network can be computed by means of the following, slightly simpler APL2 function:

    [0] Z←CONEC COMPUTE INPUT;A
    [1] Z←INPUT
    [2] L:A←Z
    [3] Z←0<A+.×CONEC
    [4] Z←INPUT∨Z
    [5] →(~A≡Z)/L

Let us assume that the modified matrix of the example network is contained in variable MATRIX1. The following APL2 instructions compute the results of the network corresponding to all the possible inputs:

    (⊂MATRIX1) COMPUTE¨(1 0 0 0 0) (1 0 1 0 0) (1 1 0 0 0) (1 1 1 0 0)
    1 0 0 0 0  1 0 1 0 1  1 1 0 0 1  1 1 1 1 0

The result, once the threshold neuron is dropped, is compatible with the result obtained with the matrix-vector representation.
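Continuing the NumPy sketch from the previous section (again an illustration of mine, not the paper's code), the equivalence of the two representations can be checked by augmenting the matrix with the threshold neuron and passing a zero threshold vector:

    # Single-matrix representation: row/column 1 is the always-firing
    # threshold neuron; its outgoing weights are minus the old thresholds.
    MATRIX1 = np.array([[0, 0, 0, -1,  0],
                        [0, 0, 0,  1,  1],
                        [0, 0, 0,  1,  1],
                        [0, 0, 0,  0, -2],
                        [0, 0, 0,  0,  0]])

    for x1, x2 in ((0, 0), (0, 1), (1, 0), (1, 1)):
        state = compute(MATRIX1, np.zeros(5, dtype=int),
                        np.array([1, x1, x2, 0, 0]))
        print(state)   # dropping the first element gives the previous results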

ANALOGIC NEURONS

The neurons in the previous examples were digital, since their output can only be zero or one. Analogic neurons can produce other outputs, such as any number in the [0,1] interval. For example, a commonly used response function for neural networks is:

    $1/(1+e^{-x})$

with appropriate corrections when the value obtained is too near one or zero. The following APL2 function computes the result of a neural network composed of neurons with this response function. The neural network is assumed to be fully represented by a single connectivity matrix:

    [0] Z←CONEC COMPUTEP INPUT;A
    [1] Z←INPUT
    [2] L:A←Z
    [3] Z←1÷1+*-A+.×CONEC
    [4] Z[(Z<0.2)/⍳ρZ]←0
    [5] Z[(Z>0.8)/⍳ρZ]←1
    [6] Z←INPUT⌈Z
    [7] →(~A≡Z)/L
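A NumPy rendering of the same analogic computation (mine, not the paper's; it reuses the conventions of the earlier sketches, and the 0.2 and 0.8 cutoffs come from the listing above) could look as follows. Like the APL2 function, it assumes the clamped iteration reaches an exact fixed point:

    def compute_analog(conec, inp):
        """Sigmoid response 1/(1+e^-x), iterated to a steady state."""
        z = inp.astype(float)
        while True:
            a = z
            z = 1.0 / (1.0 + np.exp(-(a @ conec)))  # analogic response
            z[z < 0.2] = 0.0                        # correction near zero
            z[z > 0.8] = 1.0                        # correction near one
            z = np.maximum(inp, z)                  # re-apply external inputs
            if np.array_equal(a, z):
                return z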



LEARNING

We say that a neural network "learns" when it modifies its behavior in such a way that its response to a certain set of inputs adapts to another set of predefined "desired outputs".

There are different learning procedures that modify the weights of the connections of the neural network in such a way that the outputs get closer and closer to the desired values. These techniques require a teaching period during which the following happens:

1. One or several inputs are applied to the network.
2. The corresponding outputs are computed.
3. The outputs are compared to the desired outputs.
4. The weights of the connections are modified so that the outputs get closer to the desired outputs.

The above process is repeated until the network behavior is acceptable.

One of the learning procedures most used in neural networks is called "back propagation", because the weight corrections are first applied to those output neurons in the last layer that differ from the desired value, and then the correction is propagated to the preceding layers. The following APL2 program executes a version of back propagation:

    [0] BKPROP1 I;INPUT;OUTPUT;O;OUT;E;d;NT;NO;ER;N;E1
    [1] E←0.02
    [2] NT←1++/N←LAYERS
    [3] 'Input value: ',⍕IN[I;]
    [4] INPUT←1,IN[I;],(N[2]+N[3])ρ0
    [5] 'Output value: ',⍕OU[I;]
    [6] OUTPUT←OU[I;]
    [7] L1:O←(-N[3])↑OUT←CONEC COMPUTE INPUT
    [8] ER←0.5×+/(E1←O-OUTPUT)*2
    [9] →(ER<1E¯10)/0
    [10] d←E×OUT∘.×E1
    [11] NO←1+N[1]+N[2]+⍳N[3]
    [12] d←d×CONEC[;NO]≠0
    [13] CONEC[;NO]←CONEC[;NO]-d
    [14] E1←(-NT)↑E1
    [15] NO←(∨/d≠0)/⍳↑ρd
    [16] d←E×OUT∘.×(CONEC+.×E1)[NO]
    [17] d←d×CONEC[;NO]≠0
    [18] CONEC[;NO]←CONEC[;NO]-d
    [19] →L1

This program makes use of several global variables: CONEC is the matrix of the neural network. LAYERS is a vector that contains the number of neurons in each layer. IN is a matrix of possible inputs. Finally, OU is the desired set of output values.

The program assumes that the number of layers in the network is three (this is the usual number). Some modifications would have to be done to apply a similar procedure to a perceptron or to a network with four or more layers.
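One call of the program can be transliterated into the NumPy terms of the earlier sketches roughly as follows. This is my sketch, not the author's code; the learning rate 0.02 and the 1E¯10 error bound are taken from the listing, and, like the original, it loops until the error for the given pattern vanishes:

    E = 0.02                                   # learning rate

    def bkprop1(conec, layers, in_row, target):
        """Train one pattern until its error vanishes, mirroring BKPROP1."""
        nt = 1 + sum(layers)                   # neuron count, threshold neuron included
        n_out = layers[2]
        inp = np.concatenate(([1], in_row, np.zeros(nt - 1 - len(in_row), int)))
        while True:
            out = compute(conec, np.zeros(nt, int), inp)   # full network state
            e1 = out[-n_out:] - target                     # output-layer error
            if 0.5 * np.sum(e1 ** 2) < 1e-10:
                return
            d = E * np.outer(out, e1)                      # output-column corrections
            d *= conec[:, -n_out:] != 0                    # only existing connections
            conec[:, -n_out:] -= d
            e_full = np.concatenate((np.zeros(nt - n_out), e1))
            no = np.flatnonzero((d != 0).any(axis=1))      # neurons just corrected
            d = E * np.outer(out, (conec @ e_full)[no])    # propagated corrections
            d *= conec[:, no] != 0
            conec[:, no] -= d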
A Learning Example

The preceding back-propagation program will be used to teach a neural network to distinguish the numbers in a digital matrix similar to the following:

[Figure: a seven-segment digit display with seven slots.]

Each number will be obtained by lighting some of the slots in the preceding matrix, but not the others. For instance, number 8 will be obtained by lighting all the slots. Number 3 by lighting all the slots except the two vertical ones to the left. And so on.

The network will have seven input neurons in the first layer. Their input will be one if the corresponding slot is lighted, zero otherwise.

There will be ten output neurons in the third layer. Each output neuron will be trained to recognize one and only one number.

The hidden layer is unnecessary to solve this problem, which falls under the reach of perceptrons. However, since the back propagation program we are going to use requires three layers, we shall put a single neuron in the intermediate layer.

The following program generates the network:

    [0] Y NETWORK N;NT;Z
    [1] ⍝ Generate a network with N[1] input elements,
    [2] ⍝ N[2] hidden elements and N[3] output elements
    [3] LAYERS←N
    [4] N[1]←N[1]+1
    [5] NT←+/N
    [6] Z←((NT,N[1])ρ0),((N[1],N[2])ρY),[⎕IO]((N[2]+N[3]),N[2])ρ0
    [7] Z←Z,(((N[1]+N[2]),N[3])ρY),[⎕IO](N[3],N[3])ρ0
    [8] Z[⎕IO;]←-Z[⎕IO;]
    [9] CONEC←Z

where Y is the initial value to be assigned to all the connections. The program generates the two global variables LAYERS and CONEC.



In our case we will execute:

    1 NETWORK 7 1 10
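Assuming this reading of the listing is right, the generated matrix has a simple block structure; in the NumPy terms of the earlier sketches (names mine):

    def network(y, layers):
        """Initial connection matrix; layers = (inputs, hidden, outputs).
        Row/column 1 is the always-firing threshold neuron."""
        n_in, n_hid, n_out = layers
        n_in += 1                            # add the threshold neuron
        nt = n_in + n_hid + n_out
        w = np.zeros((nt, nt))
        w[:n_in, n_in:n_in + n_hid] = y      # (threshold + inputs) -> hidden
        w[:n_in + n_hid, n_in + n_hid:] = y  # (inputs + hidden) -> outputs
        w[0] = -w[0]                         # threshold row holds minus the thresholds
        return w

    conec = network(1.0, (7, 1, 10))         # the equivalent of: 1 NETWORK 7 1 10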
The following program teaches this network to recognize the definitions of the ten digits in the digital matrix:

    [0] TEACHNRO;IN;OU;I;II
    [1] IN←2 7ρ1 1 1 0 1 1 1 0 0 1 0 0 1 0
    [2] IN←IN,[1]2 7ρ1 0 1 1 1 0 1 1 0 1 1 0 1 1
    [3] IN←IN,[1]3 7ρ0 1 1 1 0 1 0 1 1 0 1 0 1 1 1 1 0 1 1 1 1
    [4] IN←IN,[1]3 7ρ1 0 1 0 0 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 1
    [5] OU←10 10ρ1,10ρ0
    [6] L0:I←1+II←0
    [7] L:→(OU[I;]≡(-LAYERS[3])↑CONEC COMPUTE 1,(+/LAYERS)↑IN[I;])/L1
    [8] BKPROP1 I
    [9] II←II+1
    [10] L1:I←I+1
    [11] →(I≤↑ρIN)/L
    [12] →(II≠0)/L0

The teaching algorithm is very simple. The back-propagation method is applied to the ten inputs one by one. At the end of the process, the first inputs are not guaranteed, since the changes made to the network to learn the last numbers may have modified the conditions for the first ones. Therefore, the learning procedure is repeated until all of the numbers are simultaneously learned.
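The same driver loop, transliterated into the NumPy terms of the previous sketches (the seven-segment patterns are copied from the listing; the remaining names are mine, and nothing guarantees this transliteration converges as fast as the APL2 original did):

    IN = np.array([[1,1,1,0,1,1,1], [0,0,1,0,0,1,0], [1,0,1,1,1,0,1],
                   [1,0,1,1,0,1,1], [0,1,1,1,0,1,0], [1,1,0,1,0,1,1],
                   [1,1,0,1,1,1,1], [1,0,1,0,0,1,0], [1,1,1,1,1,1,1],
                   [1,1,1,1,0,1,1]])            # digits 0-9
    OU = np.eye(10, dtype=int)                  # one output neuron per digit

    layers = (7, 1, 10)
    conec = network(1.0, layers)
    learned = False
    while not learned:                          # repeat until all ten digits pass
        learned = True
        for i in range(10):
            inp = np.concatenate(([1], IN[i], np.zeros(11, int)))
            out = compute(conec, np.zeros(19, int), inp)[-10:]
            if not np.array_equal(out, OU[i]):
                bkprop1(conec, layers, IN[i], OU[i])
                learned = False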
The teaching process is finished in 2 minutes and 25 seconds, executing under APL2/PC (a 32-bit system) on an IBM PS/2 model 80 with the math co-processor. One hundred percent accuracy is obtained.

CONCLUSION

The preceding set of examples makes it clear that emulating neural networks in APL2 is very simple. Very general networks (with or without feedback) can be emulated with a five-line function. Learning procedures can also be implemented easily. Different methods can be tested and compared with huge productivity.

The only extension to APL2 that could improve the treatment of neural networks would be the implementation of sparse matrices. This is due to the fact that the matrices representing neural networks are usually very large and most of their values are equal to zero (since neurons in the same layer do not have common connections).
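For comparison, in a language that already has a sparse matrix type the rest of the code stays unchanged; for instance, in Python the inner product of the earlier sketches could be written over SciPy's compressed storage (an illustration of mine, not something the paper proposes):

    from scipy import sparse
    import numpy as np

    ws = sparse.csr_matrix(network(1.0, (7, 1, 10)))   # only nonzeros are stored
    state = np.zeros(19)                               # some current network state
    z = (ws.T.dot(state) > 0).astype(int)              # same inner product and comparison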
REFERENCES

1. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Nat. Acad. Sci. USA, Vol. 79, 2554-2558, 1982.

2. Minsky, M., Papert, S. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, Mass., 1969.



