
Lecture 2: Neural Networks
Neural Network Models
Terms
Network Architecture: defines the network structure (a
description of the number of layers in a neural network,
each layer's transfer function, the number of neurons per
layer, and the connections between layers)
Learning Algorithm / Learning Function / Learning Rule: the
procedure used to update the weights and biases
Learning Rate: a training parameter that controls the size
of weight and bias changes during learning
Supervised Learning: a learning process in which changes
in a network's weights and biases are due to the
intervention of an external teacher. The teacher typically
provides output targets.
Unsupervised Learning: the weights and biases are
adjusted based on network inputs only (no target outputs
available)
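As a minimal sketch of where these terms surface in the older MATLAB Neural Network Toolbox API (the architecture and parameter values below are illustrative assumptions, not part of the lecture):

net = newff([-1 1; -1 1], [4 1], {'tansig','purelin'});  % architecture:
                                % 2 inputs, 4 hidden tansig neurons,
                                % 1 linear output neuron
net.trainFcn = 'traingd';       % learning algorithm (gradient descent)
net.trainParam.lr = 0.05;       % learning rate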
Learning Functions in MATLAB
Examples of ANN
Supervised Learning
Perceptrons
Adalines/Madalines
Multi-Layer Perceptrons (Backpropagation Algorithm)
Unsupervised Learning
Simple Competitive Networks: Winner-take-all, Hamming network
Learning Vector Quantization (LVQ)
Adaptive Resonance Theory (ART)
Kohonen Self-Organizing Maps (SOMs)
Perceptrons
Perceptrons are useful as classifiers: they can classify
linearly separable input vectors very well.
The perceptron neuron uses the hard-limit transfer
function, hardlim.
If there is no error (e = t - a = 0), there is no update to
the weights or bias; otherwise the perceptron learning rule
applies dW = e*p' and db = e.
(Algorithm: refer to Handout 6)
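A minimal sketch of this workflow using the older toolbox constructor newp (the AND data set below is an illustrative assumption, not the lecture's example):

P = [0 0 1 1; 0 1 0 1];     % input vectors as columns
T = [0 0 0 1];              % targets (logical AND)
net = newp([0 1; 0 1], 1);  % one perceptron neuron, two inputs in [0,1]
net = train(net, P, T);     % train with the perceptron learning rule
A = sim(net, P)             % A should match T after training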
Example for Perceptrons
If the output is not the same as the target, use the
perceptron learning rule to update W and b.
With the updated W and b, present the next input vector, p2.
The target is 1, so the error is zero. Thus there are no
changes in weights or bias.
Continue by presenting p3 next, and so on through the
training set; a hand-worked single-step sketch follows below.
Try demop1 to see the final values of W and b.
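A hand-rolled sketch of a single update step, assuming an initial W = [0 0], b = 0 and an illustrative first training pair (not necessarily the slide's values):

W = [0 0]; b = 0;           % initial weights and bias
p1 = [2; 2]; t1 = 0;        % assumed first input vector and target
a = hardlim(W*p1 + b);      % perceptron output: a = 1 here
e = t1 - a;                 % error: e = -1, so an update is needed
W = W + e*p1';              % perceptron rule: W becomes [-2 -2]
b = b + e;                  % bias update: b becomes -1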
Implementing the Example in MATLAB
Example for Perceptrons (Cont'd)
Apply adapt for one pass through the sequence of all
four input vectors.
Run the problem again for two passes.
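A minimal sketch of these two steps (the input sequence and targets below are illustrative assumptions):

P = {[2;2] [1;-2] [-2;2] [-1;1]};  % sequence of four input vectors
T = {0 1 0 1};                     % corresponding targets
net = newp([-2 2; -2 2], 1);
net.adaptParam.passes = 1;         % one pass through the sequence
[net, a, e] = adapt(net, P, T);    % weights update after each vector
net.adaptParam.passes = 2;         % run the problem again for two passes
[net, a, e] = adapt(net, P, T);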
ADALINES/MADALINES
ADALINE: ADAptive Linear Neuron, with a learning
rule called the LMS algorithm (Least Mean Square)
ADALINEs solve linearly separable problems
Transfer function: linear
MADALINE: Many ADALINEs
Single ADALINE
For n = 0, the equation Wp + b = 0 specifies the decision
boundary.
Input vectors in the upper right gray area will lead to
an output greater than 0. Input vectors in the lower
left white area will lead to an output less than 0.
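A small sketch that evaluates n = Wp + b on either side of the boundary (the weight and bias values are illustrative assumptions):

W = [1 1]; b = -1;        % assumed weights and bias
n1 = W*[1; 1] + b         % n = 1 > 0: this side gives output a > 0
n2 = W*[0; 0] + b         % n = -1 < 0: this side gives output a < 0
n0 = W*[0.5; 0.5] + b     % n = 0: on the decision boundary Wp + b = 0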
Single ADALINE
Single-layer linear networks can perform linear function
approximation or pattern association.
Linear networks can be designed directly or trained with the
Widrow-Hoff rule to find a minimum-error solution.
In the Widrow-Hoff (LMS) update, the error e and the bias b
are vectors, and α is a learning rate.
If α is large, learning occurs quickly, but if it is too
large it may lead to instability, and errors may even
increase. Try: nnd10lc
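A minimal sketch of one Widrow-Hoff (LMS) update for a single ADALINE. The learning rate, training pair, and initial weights are illustrative assumptions; the factor of 2 follows the textbook statement of the rule (MATLAB's learnwh folds it into the learning rate lr):

alpha = 0.04;              % learning rate
W = [0 0]; b = 0;          % initial weights and bias
p = [1; -1.2]; t = 0.5;    % one training pair
a = W*p + b;               % linear (purelin) output
e = t - a;                 % error
W = W + 2*alpha*e*p';      % LMS weight update
b = b + 2*alpha*e;         % LMS bias update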
Implementing the Example in MATLAB
Single ADALINE
The ranges of the two scalar inputs are [-2, 2]; the network
has a single output.

P = [2 1 -2 -1; 2 -2 2 1];
t = [0 1 0 1];
net = newlin([-2 2; -2 2], 1);
net.trainParam.goal = 0.1;
[net, tr] = train(net, P, t);

The problem runs, producing the following training record:

TRAINWB, Epoch 0/100, MSE 0.5/0.1.
TRAINWB, Epoch 25/100, MSE 0.181122/0.1.
TRAINWB, Epoch 50/100, MSE 0.111233/0.1.
TRAINWB, Epoch 64/100, MSE 0.0999066/0.1.
TRAINWB, Performance goal met.

Thus, the performance goal is met in 64 epochs. The new
weights and bias are:

weights = net.iw{1,1}
weights =
   -0.0615   -0.2194
bias = net.b(1)
bias =
   [0.5899]

Simulate the trained network and compute the error:

A = sim(net, P)
A =
    0.0282    0.9672    0.2741    0.4320
err = t - sim(net, P)
err =
   -0.0282    0.0328   -0.2741    0.5680
Application of ADALINE
Application: adaptive filtering (DSP: Finite Impulse
Response (FIR) filter)
The input signal enters from the left and passes through
N-1 delays. The output of the tapped delay line (TDL) is an
N-dimensional vector, made up of the input signal at the
current time, the previous input signal, etc.
The adaptive filter (an ADALINE fed by a TDL) is a finite
impulse response (FIR) filter.
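The TDL network's computation can be sketched with MATLAB's filter function (the tap weights and input signal below are illustrative assumptions):

w = [7 8 9];            % tap weights: current, 1-delayed, 2-delayed input
x = [3 4 5 6];          % input signal
y = filter(w, 1, x)     % FIR output with zero initial delay states
% y = [21 52 94 118], i.e. y(k) = 7*x(k) + 8*x(k-1) + 9*x(k-2)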
Adaptive Filtering in MATLAB

net = newlin([0,10],1);
net.inputWeights{1,1}.delays = [0 1 2];   % the tapped delay line
net.IW{1,1} = [7 8 9];
net.b{1} = [0];

Initial values of the outputs of the delays:

pi = {1 2}

p = {3 4 5 6}
[a,pf] = sim(net,p,pi);

The output sequence is

a =
    [46]    [70]    [94]    [118]

and the final values for the delay outputs are

pf =
    [5]    [6]

Define the target sequence and adapt the network for 10 passes:

T = {10 20 30 40}
net.adaptParam.passes = 10;
[net,y,E,pf,af] = adapt(net,p,T,pi);

wts = net.IW{1,1}
wts =
    0.5059    3.1053    5.7046
bias = net.b{1}
bias =
   -1.5993
y =
    [11.8558]    [20.7735]    [29.6679]    [39.0036]
