
17/06/2011

An Introduction to Artificial Neural Networks


K. Sivakumar, Ph.D., Professor, Mechanical Engineering, Bannari Amman Institute of Technology, Sathyamangalam

What Are Artificial Neural Networks?


- An extremely simplified model of the brain
- First proposed by Frank Rosenblatt in 1958 at Cornell University
- An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information

- Composed of many neurons that cooperate to perform the desired function

From Biological Neuron to Artificial Neuron

[Figure: a biological neuron, showing the dendrites, cell body, and axon]


Analogy between biological and artificial neural networks


Biological Neural Network      Artificial Neural Network
Soma                           Neuron
Dendrite                       Input
Axon                           Output
Synapse                        Weight

[Figure: side-by-side diagram of a biological neuron (soma, dendrites, axon, synapses) and an artificial network with an input layer, middle layer, and output layer carrying input and output signals]

From Biology to Artificial Neural Networks


How Do Neural Networks Work?

- The output of a neuron is a function of the weighted sum of the inputs plus a bias
- The function of the entire neural network is simply the computation of the outputs of all the neurons
- An entirely deterministic calculation
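As a concrete illustration (a minimal sketch, not taken from the slides; the sigmoid activation and all names here are my own choices):

    import math

    def neuron_output(inputs, weights, bias):
        # Weighted sum of inputs plus bias ("net"), squashed by a sigmoid
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-net))

    # Example with two inputs: the result is a value in (0, 1)
    print(neuron_output([0.5, -1.0], [0.8, 0.2], bias=0.1))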

Network Training

Supervised Training
- The network is presented with the input and the desired output.
- Uses a set of inputs for which the desired outputs (results/classes) are known. The difference between the desired and actual output is used to calculate adjustments to the weights of the NN structure.
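To make the supervised idea concrete, here is a minimal sketch of error-driven weight adjustment in Python (a plain delta-rule step; the function name, learning rate, and signature are assumptions, not from the slides):

    def delta_rule_update(weights, bias, inputs, desired, actual, lr=0.1):
        # Move each weight in proportion to the output error and its input
        error = desired - actual
        new_weights = [w + lr * error * x for w, x in zip(weights, inputs)]
        new_bias = bias + lr * error
        return new_weights, new_bias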

Unsupervised Training
- The network is not shown the desired output.
- The concept is similar to clustering.
- It tries to create classifications in the outcome.

Types of ANNs

- Single-Layer Perceptron
- Multilayer Perceptrons (MLPs)
- Radial Basis Function Networks (RBFs)
- Hopfield Network
- Boltzmann Machine
- Self-Organizing Map (SOM)
- Modular Networks (Committee Machines)
- Backpropagation method

The Perceptron
The operation of Rosenblatt's perceptron is based on the McCulloch and Pitts neuron model. The model consists of a linear combiner followed by a hard limiter. The weighted sum of the inputs is applied to the hard limiter (a step or sign function).
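A minimal Python sketch of such a perceptron, assuming a sign-function hard limiter (names and values are illustrative only):

    def perceptron(inputs, weights, threshold):
        # Linear combiner followed by a hard limiter (sign function)
        net = sum(w * x for w, x in zip(weights, inputs)) - threshold
        return 1 if net >= 0 else -1

    # Two-input example, as in the next slide's diagram
    print(perceptron([1.0, 0.0], weights=[0.3, 0.2], threshold=0.25))  # -> 1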


Single-Layer Two-Input Perceptron

[Figure: inputs x1 and x2, multiplied by weights w1 and w2, enter a linear combiner; the combined sum, less a threshold, passes through a hard limiter to give the output Y]

Possible Activation functions


- Step function:    Y = 1 if X >= 0, else 0
- Sign function:    Y = +1 if X >= 0, else -1
- Sigmoid function: Y = 1 / (1 + e^(-X))
- Linear function:  Y = X

[Figure: plots of the four activation functions, Y against X]
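These four functions translate directly into Python (a sketch; the function names are my own):

    import math

    def step(x):
        return 1 if x >= 0 else 0

    def sign(x):
        return 1 if x >= 0 else -1

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def linear(x):
        return x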


Back Propagation Neural Network (BPN)

Learning Rule: a gradient-descent-based delta (supervised) learning rule that minimizes the total error produced by the net.

Training rule steps:
- Initialization of weights
- Feed-forward
- Back propagation of error
- Updating of weights and error

Updating Weights with Learning Rate and Momentum Factor
- A new weight is computed in each iteration.
- Initial weights are small random values lying between -0.5 and 0.5, or between -1 and 1.
- Vij(new): new weight vector for the hidden layer; Wij(new): new weight vector for the output layer.
- Vij(old): previous-iteration weight vector for the hidden layer; Wij(old): previous-iteration weight vector for the output layer.
- Momentum factor: 0.0 to 1.0.
- Learning factor: 0.0 to 2.0.
- Vij(t), Vij(t-1): current- and previous-iteration weights of the hidden layer.
- Wij(t), Wij(t-1): current- and previous-iteration weights of the output layer.
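The update formula itself did not survive in the text, so the sketch below shows the standard gradient-descent-with-momentum update that these symbols describe (assumed, not copied from the slide; eta is the learning factor, alpha the momentum factor):

    def momentum_update(w_t, w_t_prev, gradient, eta=0.5, alpha=0.9):
        # New weight: a gradient step (scaled by eta) plus a fraction
        # (alpha) of the previous weight change, which smooths the descent
        return w_t - eta * gradient + alpha * (w_t - w_t_prev)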


Momentum Coefficient and Learning Factor
- If the learning factor is too small, the algorithm will take a long time to converge.
- Conversely, if it is too large, we may end up bouncing around the error surface out of control, and the algorithm diverges.

BPN Network

[Figure: a 3-3-1 backpropagation network. Inputs x1, x2, x3 feed hidden-layer neurons z1, z2, z3 through weights v11..v33; the hidden neurons feed the output-layer neuron y1 through weights w1, w2, w3. Biases b0 = b1 = b2 = b3 ~ 1. Each neuron applies an activation function and threshold, and the ANN output y takes Value 1, Value 2, or Value 3.]
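A sketch of the forward pass through this 3-3-1 network (the sigmoid activation, weight values, and inputs are illustrative assumptions):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def bpn_forward(x, V, b, W, b0):
        # Forward pass: 3 inputs -> 3 hidden neurons (z) -> 1 output (y)
        z = [sigmoid(sum(V[i][j] * x[i] for i in range(3)) + b[j])
             for j in range(3)]
        return sigmoid(sum(W[j] * z[j] for j in range(3)) + b0)

    # Illustrative values only
    x = [0.2, 0.5, 0.9]
    V = [[0.1, 0.4, -0.2], [0.3, -0.1, 0.2], [-0.3, 0.2, 0.1]]  # v_ij
    W = [0.5, -0.4, 0.3]                                        # w_j
    print(bpn_forward(x, V, b=[1.0, 1.0, 1.0], W=W, b0=1.0))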


Effect of Learning Rate
- A higher learning factor gives faster learning.
- But start with a low learning rate and increase it steadily.
- When the learning rate is increased, the performance of the net improves.
- The performance of the ANN varies with the learning rate.
- The appropriate learning rate differs from problem to problem.

Effect of Momentum Factor
- The weight is changed in a direction that is a combination of the current weight vector with the previous weight vector.
- The momentum factor reduces the time needed for the learning process.
- The momentum factor accelerates the convergence of the error produced by the net.
- Assigning this value wrongly causes wide swings in the weights.


Effect of Number of Neurons in the Hidden Layer
- Good results should be obtained if the number of neurons in the hidden layer is greater than 2 times the number of input neurons.
- Number of neurons in the hidden layer >= [2 x no. of input neurons + 1]. For example, a net with 3 inputs calls for at least 7 hidden neurons.

Normalized / De-normalized Values
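The body of this slide did not survive extraction; a common practice, shown here as an assumed sketch, is to scale raw inputs into [0, 1] before training and map the network's outputs back afterwards:

    def normalize(value, lo, hi):
        # Min-max normalization: scale a raw value into [0, 1]
        return (value - lo) / (hi - lo)

    def denormalize(scaled, lo, hi):
        # Map a [0, 1] network output back to the original range
        return lo + scaled * (hi - lo)

    print(normalize(75.0, 0.0, 100.0))    # -> 0.75
    print(denormalize(0.75, 0.0, 100.0))  # -> 75.0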


Case Studies

Thank You
