Lecture 1
Introduction
Synapses
Linear neurons:
  o Computationally limited.
  o $y = b + \sum_i x_i w_i$, where $b$ is the bias and $w_i$ is the weight on input $x_i$.
  o [Plot: $y$ is a straight-line function of the total input.]

Binary threshold neurons:
  o $z = \sum_i x_i w_i$; $y = \begin{cases} 1 & \text{if } z \ge \theta \\ 0 & \text{otherwise} \end{cases}$
  o OR
  o $z = b + \sum_i x_i w_i$; $y = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{otherwise} \end{cases}$
  o which means $\theta = -b$.
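
To make the two definitions above concrete, here is a minimal Python sketch (my own illustration, not from the lecture; function names and numbers are invented):

    # Sketch: linear and binary threshold neurons (illustrative).
    def linear_neuron(x, w, b):
        # y = b + sum_i x_i w_i
        return b + sum(xi * wi for xi, wi in zip(x, w))

    def binary_threshold_neuron(x, w, b):
        # z = b + sum_i x_i w_i; y = 1 if z >= 0, else 0 (theta = -b)
        z = b + sum(xi * wi for xi, wi in zip(x, w))
        return 1 if z >= 0 else 0

    print(linear_neuron([1.0, 2.0], [0.5, -0.25], 0.125))           # -> 0.125
    print(binary_threshold_neuron([1.0, 2.0], [0.5, -0.25], 0.125)) # -> 1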
Rectified Linear Neurons:
  o Sometimes called linear threshold neurons.
  o They compute a linear weighted sum of their inputs.
  o The output is a nonlinear function of the total input.
  o $z = b + \sum_i x_i w_i$; $y = \begin{cases} z & \text{if } z > 0 \\ 0 & \text{otherwise} \end{cases}$
  o [Plot: $y$ is zero for $z \le 0$ and rises linearly for $z > 0$.]
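
A matching sketch for the rectified linear case, under the same illustrative assumptions (names and numbers are my own):

    # Sketch: rectified linear neuron (illustrative).
    def relu_neuron(x, w, b):
        z = b + sum(xi * wi for xi, wi in zip(x, w))  # linear weighted sum
        return z if z > 0 else 0.0                    # nonlinear output

    print(relu_neuron([1.0, 2.0], [0.5, -0.25], 0.25))   # -> 0.25
    print(relu_neuron([1.0, 2.0], [0.5, -0.25], -0.25))  # -> 0.0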
Sigmoid neurons:
  o $z = b + \sum_i x_i w_i$; $y = \frac{1}{1 + e^{-z}}$
  o [Plot: a smooth S-shaped curve of $y$ against $z$.]

Stochastic binary neurons:
  o $z = b + \sum_i x_i w_i$; $p(s = 1) = \frac{1}{1 + e^{-z}}$
  o [Plot: the same logistic curve, read as a spike probability.]
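
A sketch of both units (my own illustration; it treats the logistic output as a spike probability, as the formula above says):

    # Sketch: sigmoid neuron and stochastic binary neuron (illustrative).
    import math, random

    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))                 # 1 / (1 + e^-z)

    def sigmoid_neuron(x, w, b):
        z = b + sum(xi * wi for xi, wi in zip(x, w))
        return logistic(z)                                 # real value in (0, 1)

    def stochastic_binary_neuron(x, w, b):
        z = b + sum(xi * wi for xi, wi in zip(x, w))
        return 1 if random.random() < logistic(z) else 0   # spike with prob p(s=1)

    print(sigmoid_neuron([1.0, 2.0], [0.5, -0.25], 0.25))           # -> ~0.562
    print(stochastic_binary_neuron([1.0, 2.0], [0.5, -0.25], 0.25)) # -> 0 or 1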
Supervised learning
  o Learn to predict an output when given an input vector.
Reinforcement learning
  o Learn to select an action to maximize payoff.
Unsupervised learning
  o Discover a good internal representation of the input.
Supervised learning
  o $\frac{1}{2}(y - t)^2$ is often a sensible measure of the discrepancy between the actual output $y$ and the target output $t$.
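
A tiny worked example of that measure (the numbers are invented):

    # Sketch: squared-error measure for one training case (invented numbers).
    y, t = 0.8, 1.0              # actual output and target
    print(0.5 * (y - t) ** 2)    # -> ~0.02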
Lecture 2
Types of neural network architectures

Feed-forward neural networks:
  o The activities of the neurons in each layer are a nonlinear function of the activities in the layer below.
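
A sketch of one such layer (my own illustration; the logistic nonlinearity is my choice, the notes don't fix one here):

    # Sketch: one feed-forward layer; each activity is a nonlinear function
    # of the activities in the layer below (logistic chosen for illustration).
    import math

    def layer(below, weights, biases):
        # weights[j][i] connects unit i in the layer below to unit j here
        return [1.0 / (1.0 + math.exp(-(b + sum(w * a for w, a in zip(ws, below)))))
                for ws, b in zip(weights, biases)]

    hidden = layer([0.0, 1.0], [[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0])
    print(hidden)  # two activities in (0, 1): ~[0.378, 0.5]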
Recurrent networks:
  o What we do is we have connections between hidden units, and the hidden units act like a network that's very deep in time.
     o They are equivalent to very deep nets with one hidden layer per time slice.
  o So at each time step the states of the hidden units determine the states of the hidden units at the next time step.
  o One way in which they differ from feed-forward nets is that we use the same weights at every time step.
     o So if you look at the red arrows where the hidden units are determining the next state of the hidden units, the weight matrix depicted by each red arrow is the same at each time step.
     o They also get inputs at every time step and often give outputs at every time step, and those'll use the same weight matrices too.
  o They have the ability to remember information in their hidden state for a long time.
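
A minimal sketch of that weight sharing, assuming a scalar hidden state and a tanh nonlinearity (both are my simplifications, not from the lecture):

    # Sketch: a recurrent net unrolled in time; the SAME weights are reused
    # at every time step (scalar hidden state, tanh chosen for illustration).
    import math

    def run_rnn(inputs, w_xh, w_hh, b):
        h = 0.0                                       # initial hidden state
        states = []
        for x in inputs:                              # one hidden "layer" per time slice
            h = math.tanh(b + w_xh * x + w_hh * h)    # same w_xh, w_hh each step
            states.append(h)
        return states

    print(run_rnn([1.0, 0.0, -1.0], w_xh=0.8, w_hh=0.5, b=0.0))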
Perceptrons
The perceptron convergence procedure: Training binary output neurons as classifiers
  o A bias can be treated as a weight on an extra input line that always has an activity of 1.
  o We can now learn a bias as if it were a weight.
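
A sketch of the standard perceptron learning rule with the bias folded in as an always-on input (the AND data and all names are my own illustration):

    # Sketch: perceptron training with the bias learned as a weight on an
    # extra input line that always has activity 1 (invented example data).
    def train_perceptron(cases, epochs=20):
        w = [0.0] * (len(cases[0][0]) + 1)        # last weight is the bias
        for _ in range(epochs):
            for x, target in cases:
                x = list(x) + [1.0]               # the always-on extra input
                y = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0
                if y != target:                   # on error: add or subtract the input
                    step = 1 if target == 1 else -1
                    w = [wi + step * xi for wi, xi in zip(w, x)]
        return w

    cases = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND is separable
    print(train_perceptron(cases))                # -> [2.0, 1.0, -3.0]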
A geometrical view of perceptrons
Weight-space
  o The problem is convex.
     o The average of two solutions is itself a solution.
     o In general, in machine learning, if you can get a convex learning problem, that makes life easy.
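
A quick numeric check of the averaging claim (the weight vectors and the AND task are my own choices): if two weight vectors both classify every case correctly, so does their average.

    # Sketch: the average of two perceptron solutions is itself a solution.
    def classify(w, x):
        x = list(x) + [1.0]                       # bias as the last weight
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

    cases = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w1 = [2.0, 1.0, -3.0]                         # one solution for AND
    w2 = [1.0, 2.0, -3.0]                         # another solution
    w_avg = [(a + b) / 2 for a, b in zip(w1, w2)]

    for w in (w1, w2, w_avg):
        print(all(classify(w, x) == t for x, t in cases))  # True, True, True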
Limitations of perceptrons