
Introduction to Neural Networks


Course Code : EE 480
Prof. Adel Abdennour

Overview
Biological Neuron
Artificial Neuron
Activation Functions
Perceptron Learning

Biological Neurons
Soma (cell body): a large, round central body in which almost all the logical functions of the neuron are realized.
Axon (output): a nerve fiber attached to the soma that serves as the final output channel of the neuron. An axon is usually highly branched.
Dendrites (inputs): a highly branching tree of fibers. These long, irregularly shaped nerve fibers (processes) are attached to the soma.
Synapses: specialized contacts on a neuron that are the termination points for the axons from other neurons.

Biological Neuron Contd

The spikes travelling along the axon trigger the release of neurotransmitter substances at the synapse.
The neurotransmitters cause excitation or inhibition of the signal in the dendrite. The contribution of each signal depends on the strength of the synaptic connection.

The Artificial Neuron

Artificial Neuron (Contd.)
An Artificial Neuron (AN) implements a nonlinear mapping from its inputs to its output; the form of the mapping depends on the activation function used.
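In symbols, using a standard textbook form (the slide's own equation is not reproduced here, so the notation below is an assumption), an AN with inputs x_1, ..., x_I, weights w_1, ..., w_I, and threshold θ computes:

```latex
f_{AN}\colon \mathbb{R}^{I} \to [0,1], \qquad
o = f\!\left(\sum_{i=1}^{I} x_i w_i - \theta\right)
```

where f is the activation function; the output range [0, 1] assumes a bounded activation such as the sigmoid.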

Calculating the Net Input Signal:

Net input = weighted sum of all input signals
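As a minimal runnable sketch of this (the function name and the sample values are illustrative, not from the slides):

```python
def net_input(inputs, weights):
    """Net input of an artificial neuron: the weighted sum of all input signals."""
    return sum(x * w for x, w in zip(inputs, weights))

# Example: three binary inputs with hypothetical weights.
print(net_input([1, 1, 0], [0.25, 0.25, 0.0]))  # -> 0.5
```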


Artificial Neuron (Contd.)

The Artificial Neuron's functionality is determined by:

- the nature of its activation function
- its ability to approximate a function to be learned



Activation Functions
1. Linear Function:

2. Step Function:

3. Ramp Function:

4. Sigmoid Function:

5. Hyperbolic Tangent Function:

6. Gaussian Function:


Artificial Neuron Learning


Linearly Separable Boolean Perceptrons
1. AND Perceptron (linearly separable)


Artificial Neuron Learning


Linearly Separable Boolean Perceptrons
2. OR Perceptron (linearly separable)



Artificial Neuron Learning


There are three main types of learning:

Supervised Learning:
The neuron is provided with a data set (the training set) consisting of input vectors and a target (desired output) for each. The aim is then to adjust the weight values.

Unsupervised Learning:
The aim is to discover patterns or features in the input data with no assistance from an external source.

Reinforcement Learning:
The aim is to reward the neuron for good performance and to penalize it for bad performance.


Artificial Neuron Learning


Perceptron Learning: remember what we like about fruits:

Taste: Sweet = 1, Not Sweet = 0
Seeds: Edible = 1, Not Edible = 0
Skin:  Edible = 1, Not Edible = 0

For the output:

Good Fruit = 1, Not Good Fruit = 0


Perceptron Learning Contd.


Let's start with no knowledge:

[Diagram: a perceptron with inputs Taste, Seeds, and Skin, all three weights set to 0.0, and the output rule "if net input > 0.4 then fire".]


Perceptron Learning Contd.


Let's start with no knowledge.

To train the perceptron, we will show it each example and have it categorize each one. Since it starts with no knowledge, it is going to make mistakes. When it makes a mistake, we adjust the weights to make that mistake less likely in the future.


Perceptron Learning Contd.

When we adjust the weights, we're going to take relatively small steps to be sure we don't over-correct and create new problems.


Perceptron Learning Contd.

We are going to learn the category "good fruit," defined as anything that is sweet. Good Fruit = 1, Not Good Fruit = 0.


Perceptron Learning Contd.


Show it a banana:

[Diagram: the perceptron with banana inputs Taste = 1, Seeds = 1, Skin = 0, all weights 0.0; net input 0.0, so the "> 0.4 then fire" rule does not fire.]


Perceptron Learning Contd.


Show it a banana:

In this case we have (1 × 0.0) + (1 × 0.0) + (0 × 0.0), which adds up to 0.0. Since that is less than the threshold (0.4), the perceptron responded "no." Is that correct? No.
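This first, incorrect classification can be checked with a small runnable sketch (encodings taken from the slides: taste = 1, seeds = 1, skin = 0; all weights 0.0; threshold 0.4):

```python
# Banana example: all weights start at zero, so the neuron cannot fire yet.
weights = [0.0, 0.0, 0.0]          # Taste, Seeds, Skin
banana = [1, 1, 0]                 # sweet, edible seeds, inedible skin
threshold = 0.4

net = sum(x * w for x, w in zip(banana, weights))
output = 1 if net > threshold else 0
print(net, output)  # -> 0.0 0  (does not fire, which is the wrong answer)
```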


Perceptron Learning Contd.

Since we got it wrong, we know we need to change the weights. We'll do that using the delta rule (delta for change):

Δw = learning rate × (target output - actual output) × input
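The delta rule is a one-line computation; a minimal sketch (function name chosen here for illustration):

```python
def delta_w(learning_rate, target, actual, x):
    """Delta rule: weight change for one input line.

    delta_w = learning_rate * (target output - actual output) * input
    """
    return learning_rate * (target - actual) * x

# Banana, first input line: rate 0.25, target 1, actual 0, input 1.
print(delta_w(0.25, 1, 0, 1))  # -> 0.25
```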


Perceptron Learning Contd.


Learning Delta Rule: the three parts are:

Learning rate: we set this ourselves. It should be large enough that learning happens in a reasonable amount of time, but small enough that we don't go too fast. Here we pick 0.25.

(Target output - actual output), the error: the target output should be 1 ("good fruit") since a banana is sweet; the actual output is 0, so (1 - 0) = 1.

Input: for the first node (Taste) it is 1.


Perceptron Learning Contd.

Learning Delta Rule

Calculating Δw:

Δw = 0.25 × (1 - 0) × 1 = 0.25

Δw is the change in weight, so we add 0.25 to that weight. If (target output - actual output) = 0, there is no need to adjust the weight. If (target output - actual output) is positive, we increase the weight; if it is negative, we decrease the weight.


Perceptron Learning Contd.

Changing weights for the banana:

Feature   Learning Rate   (Target - Actual)   Input   Δw
Taste     0.25            1                   1       +0.25
Seeds     0.25            1                   1       +0.25
Skin      0.25            1                   0        0


Perceptron Learning Contd.

Here it is with adjusted weights:

[Diagram: the perceptron with weights Taste = 0.25, Seeds = 0.25, Skin = 0.0, and the rule "if net input > 0.4 then fire".]


Perceptron Learning Contd.


To continue training, we show it the next example and adjust the weights. We keep cycling through the examples until we go all the way through one time without making any changes to the weights. At that point, the concept is learned.
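The procedure just described can be sketched end to end. The example encodings and targets below are inferred from the slides' weight-change tables (banana, pear, lemon, strawberry), so treat the data set as an assumption:

```python
# Perceptron training loop for the fruit example.
# Inputs are (Taste, Seeds, Skin); target 1 means "good fruit" (i.e. sweet).
examples = [
    ("banana",     [1, 1, 0], 1),
    ("pear",       [1, 0, 1], 1),
    ("lemon",      [0, 0, 0], 0),
    ("strawberry", [1, 1, 1], 1),
]
weights = [0.0, 0.0, 0.0]
rate, threshold = 0.25, 0.4

changed = True
while changed:                     # keep cycling until a full pass makes no change
    changed = False
    for name, inputs, target in examples:
        net = sum(x * w for x, w in zip(inputs, weights))
        actual = 1 if net > threshold else 0
        error = target - actual
        if error != 0:             # delta rule on every weight for a misclassified example
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            changed = True

print(weights)  # -> [0.5, 0.25, 0.25], matching the walkthrough
```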


Perceptron Learning Contd.

Show it a pear:

[Diagram: inputs Taste = 1, Seeds = 0, Skin = 1; weights 0.25, 0.25, 0.0; net input 0.25, below the 0.4 threshold, so it does not fire. That is wrong, since a pear is a good fruit.]


Perceptron Learning Contd.

Changing weights for the pear:

Feature   Learning Rate   (Target - Actual)   Input   Δw
Taste     0.25            1                   1       +0.25
Seeds     0.25            1                   0        0
Skin      0.25            1                   1       +0.25


Perceptron Learning Contd.

Here it is with adjusted weights:

[Diagram: the perceptron with weights Taste = 0.50, Seeds = 0.25, Skin = 0.25.]


Perceptron Learning Contd.

Show it a lemon:

[Diagram: inputs Taste = 0, Seeds = 0, Skin = 0; weights 0.50, 0.25, 0.25; net input 0.0, so it does not fire, which is correct since a lemon is not a good fruit.]


Perceptron Learning Contd.

Changing weights for the lemon:

Feature   Learning Rate   (Target - Actual)   Input   Δw
Taste     0.25            0                   0        0
Seeds     0.25            0                   0        0
Skin      0.25            0                   0        0


Perceptron Learning Contd.

Show it a strawberry:

[Diagram: inputs Taste = 1, Seeds = 1, Skin = 1; weights 0.50, 0.25, 0.25; net input 1.0 > 0.4, so it fires, which is correct.]


Perceptron Learning Contd.


Changing weights for the strawberry:

Feature   Learning Rate   (Target - Actual)   Input   Δw
Taste     0.25            0                   1        0
Seeds     0.25            0                   1        0
Skin      0.25            0                   1        0


Perceptron Learning Contd.

[Diagram: the perceptron with final weights Taste = 0.50, Seeds = 0.25, Skin = 0.25, and the rule "if net input > 0.4 then fire".]


Perceptron Learning Contd.

If you keep going, you will see that this perceptron can correctly classify all of the examples that we have.
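This claim can be verified with a short check. The final weights come from the walkthrough above; the example encodings are inferred from the slides' tables and are an assumption:

```python
# Verify that the learned weights classify every training example correctly.
weights = [0.50, 0.25, 0.25]       # final weights: Taste, Seeds, Skin
threshold = 0.4
examples = [
    ("banana",     [1, 1, 0], 1),
    ("pear",       [1, 0, 1], 1),
    ("lemon",      [0, 0, 0], 0),
    ("strawberry", [1, 1, 1], 1),
]
for name, inputs, target in examples:
    net = sum(x * w for x, w in zip(inputs, weights))
    fired = 1 if net > threshold else 0
    print(f"{name}: fired={fired}, target={target}")
```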
