
EXPERIMENT NO. 08
AIM: To study and implement a perceptron network for the AND function using the NN tool in
MATLAB.
THEORY:
Perceptron Networks
In machine learning, the perceptron is an algorithm for supervised learning of binary
classifiers (functions that can decide whether an input, represented by a vector of numbers,
belongs or not to some specific class). It is a type of linear classifier, i.e. a classification
algorithm that makes its predictions based on a linear predictor function combining a set of
weights with the feature vector. The algorithm allows for online learning, in that it processes
elements in the training set one at a time.
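As a rough illustration of the "linear predictor function combining a set of weights with the feature vector" mentioned above, the following Python sketch computes the net input and thresholds it (the lab itself uses the MATLAB NN tool; the function name and the sample numbers here are purely illustrative):

```python
# Illustrative sketch: a perceptron's prediction is a threshold applied to
# a linear combination of weights and inputs plus a bias.
def predict(weights, bias, x):
    net = bias + sum(w * xi for w, xi in zip(weights, x))
    # Bipolar output with a zero threshold
    return 1 if net > 0 else (0 if net == 0 else -1)

# Example with made-up weights: net = -1 + 1*1 + 1*1 = 1, so output is 1
print(predict([1, 1], -1, (1, 1)))
```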
Figure shows the architecture of a single-layer perceptron. In the architecture, only the
associator unit and the response unit are shown. The sensory unit is not shown, because only the
weights between the associator and the response unit are adjusted. The input layer consists of
inputs X1, ..., Xi, ..., Xn. There always exists a common bias of 1. The input neurons are
connected to the output neurons through weighted interconnections. This is a single-layer
network because it has only one layer of interconnections between the input and the output
neurons. This network perceives the input signal received and performs the classification.

Algorithm:
To start the training process, the weights and the bias are initially set to zero. The initial
weights of the network can also be obtained from other techniques such as fuzzy systems, genetic
algorithms, etc. It is also essential to set the learning rate parameter, which ranges between 0 and 1.
Then the input is presented, and the net input is calculated by multiplying the weights with the inputs
and adding the bias to the result. The output is compared with the target; if there is any
difference, the weights are updated based on the perceptron learning rule, else the network
training is stopped.
The training algorithm is as follows:
Step 1: Initialize weights and bias (initially they can be zero). Set the learning rate α (0 to 1).
Step 2: While stopping condition is false do Steps 3-7.
Step 3: For each training pair s:t do steps 4-6.
Step 4: Set activations of input units.
xi = si for i = 1 to n
Step 5: Compute the output unit response.
First calculate the net input,
yin = b + Σ xi wi (i = 1 to n)
The activation function used is,
y = f(yin) = 1 if yin > θ; 0 if -θ ≤ yin ≤ θ; -1 if yin < -θ
Step 6: The weights and bias are updated if the target is not equal to the output response.
If t != y and the value of xi is not zero,
wi(new) = wi(old) + α t xi
b(new) = b(old) + α t
else,
wi(new) = wi(old)
b(new) = b(old)
Step 7: Test for stopping condition.


The stopping condition may be that no weight changes occur during a complete epoch.
Note:
1. Only weights connecting active input units (xi ≠ 0) are updated.
2. Weights are updated only for patterns that do not produce the correct value of y.
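The training algorithm above (Steps 1-7) can be sketched in Python as follows. This is only an illustration of the procedure, not the MATLAB NN tool implementation; the function and variable names are assumptions, and a zero threshold θ is used, as in the example below:

```python
# Minimal sketch of the perceptron training algorithm (Steps 1-7),
# assuming bipolar inputs/targets and threshold theta = 0.
def train_perceptron(samples, alpha=1.0, theta=0.0, max_epochs=100):
    n = len(samples[0][0])
    w = [0.0] * n          # Step 1: zero initial weights
    b = 0.0                # ... and zero bias
    for _ in range(max_epochs):            # Step 2: repeat while weights change
        changed = False
        for x, t in samples:               # Step 3: each training pair s:t
            # Step 5: net input and activation
            net = b + sum(wi * xi for wi, xi in zip(w, x))
            y = 1 if net > theta else (-1 if net < -theta else 0)
            if t != y:                     # Step 6: update only on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                    # Step 7: stop when no change occurs
            break
    return w, b

# Bipolar AND training set: output is 1 only for input (1, 1)
and_samples = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
print(train_perceptron(and_samples))   # → ([1.0, 1.0], -1.0)
```

Running this on the bipolar AND patterns converges after the first epoch to the same weights as the hand calculation later in this experiment.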
Example: Develop a perceptron for the AND function with bipolar inputs and targets.
Solution: The training patterns for the AND function with bipolar inputs and targets are,

x1   x2   b    t
 1    1   1    1
 1   -1   1   -1
-1    1   1   -1
-1   -1   1   -1
Step 1: Initial weights w1 = w2 = 0 and b = 0; α = 1, θ = 0.


Step 2: Begin computation.
Step 3: For input pair (1, 1): 1, do Steps 4-6.
Step 4: Set activations of input units
(x1, x2) = (1, 1).
Step 5: Calculate the net input.
yin = b + x1 w1 + x2 w2 = 0 + 1(0) + 1(0) = 0
Applying the activation,
y = f(yin) = f(0) = 0
Therefore y = 0.
Step 6: t = 1 and y = 0.
Since t != y, the new weights are,
w1(new) = w1(old) + α t x1 = 0 + 1(1)(1) = 1
w2(new) = w2(old) + α t x2 = 0 + 1(1)(1) = 1
b(new) = b(old) + α t = 0 + 1(1) = 1
The new weights and bias are [1 1 1].
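The hand computation for this first pattern can be checked with a short Python snippet (an illustration only; the variable names mirror the notation above):

```python
# Verify the Step 6 update for input (1, 1), target t = 1,
# starting from zero weights and bias with learning rate alpha = 1.
alpha, t = 1, 1
w1, w2, b = 0, 0, 0
x1, x2 = 1, 1
y_in = b + x1 * w1 + x2 * w2       # net input = 0, so y = f(0) = 0 and t != y
w1 = w1 + alpha * t * x1           # 0 + 1(1)(1) = 1
w2 = w2 + alpha * t * x2           # 0 + 1(1)(1) = 1
b = b + alpha * t                  # 0 + 1(1) = 1
print([w1, w2, b])                 # → [1, 1, 1]
```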


The algorithmic steps are repeated for all the input vectors, taking the previously calculated
weights as the initial weights for each new input vector.
By presenting all the input vectors, the updated weights after each step of the first epoch are shown in the table below:

Input (x1 x2 1)   t    yin   y    Δw1  Δw2  Δb   Weights (w1 w2 b)
( 1  1  1)        1     0    0     1    1    1   ( 1  1   1)
( 1 -1  1)       -1     1    1    -1    1   -1   ( 0  2   0)
(-1  1  1)       -1     2    1     1   -1   -1   ( 1  1  -1)
(-1 -1  1)       -1    -3   -1     0    0    0   ( 1  1  -1)
This completes one epoch of the training.


The final weights after the first epoch is completed are w1 = 1, w2 = 1, b = -1.
We know that the decision boundary is given by b + x1 w1 + x2 w2 = 0.
Substituting the final weights, -1 + x1 + x2 = 0, so

x2 = -x1 + 1 is the separating line equation.
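A quick Python check (illustrative only) confirms that these final weights place (1, 1) on one side of the line x2 = -x1 + 1 and the other three bipolar inputs on the other, reproducing the AND function:

```python
# Classify all four bipolar inputs with the final weights w1 = 1, w2 = 1, b = -1.
w1, w2, b = 1, 1, -1
outputs = {}
for x1, x2 in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    net = b + x1 * w1 + x2 * w2
    outputs[(x1, x2)] = 1 if net > 0 else -1
print(outputs)   # only (1, 1) maps to 1, matching the bipolar AND targets
```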


The decision boundary for the AND function trained by the perceptron network is the plot of this separating line.

CONCLUSION: Thus we studied and implemented Perceptron Networks for AND function
using NN Tool in MATLAB.

OUTPUT SCREENSHOTS:
