
Neural Networks

Dr. Debdoot Sheet


Assistant Professor, Department of Electrical Engineering
Principal Investigator, Kharagpur Learning, Imaging and Visualization Group
Indian Institute of Technology Kharagpur

www.facweb.iitkgp.ernet.in/~debdoot/
Contents
• Simple neuron
• Neural network formulation
• Learning with error backpropagation
• Gradient checking and optimization

Neural Networks [Debdoot Sheet] 2


Simple Neuron Model

A neuron takes inputs x1, x2, x3 weighted by w1, w2, w3, plus a bias w0 = b applied to a constant input 1:

y = w0 + w1 x1 + w2 x2 + w3 x3 = [w b][x 1]^T
p̂ = f_NL(y)

Common choices for the nonlinearity f_NL are the sigmoid

f_NL(y) = 1 / (1 + exp(−y))

and the hyperbolic tangent

f_NL(y) = tanh(y) = (exp(y) − exp(−y)) / (exp(y) + exp(−y))
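The neuron above can be sketched in a few lines of NumPy; the weights, bias, and inputs below are illustrative values, not taken from the slides.

```python
import numpy as np

def sigmoid(y):
    # f_NL(y) = 1 / (1 + exp(-y)), maps y into (0, 1)
    return 1.0 / (1.0 + np.exp(-y))

# Illustrative weights w1..w3, bias w0 = b, and inputs x1..x3
w = np.array([0.5, -0.3, 0.8])   # w1, w2, w3
b = 0.1                          # w0 = b
x = np.array([1.0, 2.0, -1.0])   # x1, x2, x3

y = b + w @ x                    # y = w0 + w1*x1 + w2*x2 + w3*x3
p_hat_sigmoid = sigmoid(y)       # output in (0, 1)
p_hat_tanh = np.tanh(y)          # alternative nonlinearity, output in (-1, 1)
```

Both nonlinearities squash the unbounded weighted sum y into a bounded activation, which is what lets layers of such neurons be stacked.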
Neural Network Formulation

A layer connects every input x_j to every neuron k through a weight w_{k,j}, with bias w_{k,0}; each neuron computes its own weighted sum and nonlinearity:

y1 = [w1 b1][x 1]^T,  p̂1 = f_NL(y1)
y2 = [w2 b2][x 1]^T,  p̂2 = f_NL(y2)
⋮
yk = [wk bk][x 1]^T,  p̂k = f_NL(yk)

Collected over all neurons: y = [W b][x 1]^T and p̂ = f_NL(y).
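Stacking the per-neuron weight rows w_{k,j} into a matrix W turns the whole layer into one matrix-vector product; the sizes here (3 inputs, 2 neurons) and all values are illustrative.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def layer_forward(W, b, x):
    # Computes y_k = [w_k b_k][x 1]^T for every neuron k at once,
    # then applies f_NL elementwise: p_hat = f_NL(y)
    y = W @ x + b
    return sigmoid(y)

# Illustrative layer: 2 neurons, 3 inputs; W[k, j] = w_{k,j}
W = np.array([[0.5, -0.3,  0.8],
              [0.1,  0.4, -0.2]])
b = np.array([0.1, -0.5])        # one bias per neuron
x = np.array([1.0, 2.0, -1.0])

p_hat = layer_forward(W, b, x)   # one activation per neuron
```

Row k of W reproduces the single-neuron computation from the previous slide, so the first output here equals the single-neuron example's sigmoid output.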


Error in Prediction

Each output p̂k is compared with its target pk:

e1 = p1 − p̂1
e2 = p2 − p̂2
⋮
ek = pk − p̂k

or, stacked over all outputs, E = p − p̂.
Error Backpropagation

The cost accumulates the prediction error over all n training samples:

J(W) = Σ_n ‖p_n − p̂_n‖

Learning seeks the weight matrix that minimizes this cost,

W = arg min_W J(W)

via iterative gradient-descent updates (η is the learning rate, or step size):

W^(k+1) = W^(k) − η ∂J(W)/∂W
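A minimal gradient-descent loop for a single sigmoid layer can make the update rule concrete. Note one substitution: the slides write the cost with a norm ‖p − p̂‖, while this sketch uses the differentiable squared-error cost J = ½ Σ ‖p − p̂‖², whose gradient has a simple closed form. The dataset, sizes, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda y: 1.0 / (1.0 + np.exp(-y))

# Illustrative data: 50 samples, 3 inputs, 2 targets in (0, 1)
X = rng.normal(size=(50, 3))
P = sigmoid(X @ rng.normal(size=(3, 2)))   # targets from a hidden "true" model

W = rng.normal(scale=0.1, size=(3, 2))     # weights to be learned
b = np.zeros(2)
eta = 0.5                                  # learning rate
costs = []

for epoch in range(200):
    P_hat = sigmoid(X @ W + b)             # forward pass: p_hat = f_NL([W b][x 1]^T)
    E = P - P_hat                          # prediction error E = p - p_hat
    costs.append(0.5 * np.sum(E ** 2))     # squared-error cost J(W)
    # Backpropagated gradient: dJ/dy = -(p - p_hat) * p_hat * (1 - p_hat)
    delta = -E * P_hat * (1.0 - P_hat)
    W -= eta * (X.T @ delta) / len(X)      # W <- W - eta * dJ/dW
    b -= eta * delta.mean(axis=0)
```

Each pass through the loop is one epoch; the recorded costs trace out the decreasing curve shown on the next slide.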
Gradient Descent Learning

W^(k+1) = W^(k) − η ∂J(W)/∂W

[Figure: the cost function J(W^(k)) decreasing over epochs 1, 2, …, k]
Understanding Gradient Descent

[Figure: the cost surface J(W^(k)) plotted over weights w1 and w2; starting from some random initial value, the descent steps move downhill on the surface, tracing the decreasing cost-function curve J(W^(k)) over epochs 1, 2, …, k]
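The contents list "gradient checking", which the slides do not spell out: the idea is to verify the backpropagated (analytic) gradient against a centred finite-difference estimate of ∂J/∂W. The small network and data below are illustrative, using the same squared-error cost as before.

```python
import numpy as np

sigmoid = lambda y: 1.0 / (1.0 + np.exp(-y))

def cost(W, x, p):
    # J(W) = 0.5 * ||p - f_NL(W x)||^2
    p_hat = sigmoid(W @ x)
    return 0.5 * np.sum((p - p_hat) ** 2)

def analytic_grad(W, x, p):
    # Backpropagation: dJ/dy = -(p - p_hat) * p_hat * (1 - p_hat), dJ/dW = dJ/dy x^T
    p_hat = sigmoid(W @ x)
    delta = -(p - p_hat) * p_hat * (1.0 - p_hat)
    return np.outer(delta, x)

def numeric_grad(W, x, p, eps=1e-5):
    # Centred finite difference: (J(W + eps) - J(W - eps)) / (2 eps), per entry
    g = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (cost(Wp, x, p) - cost(Wm, x, p)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))      # illustrative 2-neuron, 3-input layer
x = rng.normal(size=3)
p = np.array([0.2, 0.9])         # illustrative targets

diff = np.max(np.abs(analytic_grad(W, x, p) - numeric_grad(W, x, p)))
```

If the backpropagation derivation is correct, `diff` is tiny (on the order of eps²); a large discrepancy flags a bug in the gradient code.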
Take Home Messages
• Haykin, Simon, Neural Networks and Learning Machines, 3rd ed., 2009.
• Toolboxes
– Matlab – Neural Network Toolbox (nprtool)
– Python – Theano
– Lua – Torch, nn, cuDNN, nngraph

