Prepared by: PRIYANKA GUPTA
Roll No.: 0916IT111025
List of Experiments

No.  Experiment                              Page No.  Sign.
1    Study of Biological Neural Network
2    Study of Artificial Neural Network

Experiment No. 1
Aim: Study of Biological Neural Network.
Experiment No.2
Aim: Study of ANN.
An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system. Why is it necessary to implement artificial neural networks? Although computing these days is truly advanced, there are certain tasks that a program written for a conventional microprocessor cannot perform; even so, a software implementation of a neural network can be made, with its own advantages and disadvantages.
Advantages of ANN
A neural network can perform tasks that a linear program cannot.
When an element of the neural network fails, the network can continue without any problem because of its parallel nature.
Disadvantages of ANN
Different architectures of artificial neural networks exist, and they consequently require different types of learning algorithms; however, despite appearing to be a complex system, an individual neural network is relatively simple.
Artificial neural networks (ANN) are among the newest signal-processing technologies in the
engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view
to the engineering perspective. In engineering, neural networks serve two important functions:
as pattern classifiers and as nonlinear adaptive filters.
An Artificial Neural Network is an adaptive, most often nonlinear system that learns to perform a function (an input/output map) from data. Adaptive means that the system parameters are changed during operation, normally called the training phase. After the training phase the Artificial Neural Network parameters are fixed and the system is deployed to solve the problem at hand (the testing phase). The Artificial Neural Network is built with a systematic step-by-step procedure to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. The input/output training data are fundamental in neural network technology, because they convey the necessary information to "discover" the optimal operating point. The nonlinear nature of the neural network processing elements (PEs) gives the system great flexibility to achieve practically any desired input/output map; i.e., some Artificial Neural Networks are universal mappers. There is a style in neural computation that is worth describing.
An input is presented to the neural network and a corresponding desired or target response is set at the output (when this is the case, the training is called supervised). An error is computed from the difference between the desired response and the system output. This error information is fed back to the system, which adjusts its parameters in a systematic fashion (the learning rule). The process is repeated until the performance is acceptable.
Neural Network Topologies
In the previous section we discussed the properties of the basic processing unit in an artificial
neural network. This section focuses on the pattern of connections between the units and the
propagation of data. As for this pattern of connections, the main distinction we can make is
between:
Feed-forward neural networks, where the data flow from input to output units is strictly feed-forward. The data processing can extend over multiple (layers of) units, but no feedback connections are present; that is, there are no connections extending from outputs of units to inputs of units in the same layer or previous layers.
Recurrent neural networks, which do contain feedback connections, so that activations can flow around the network until a stable state is reached.
Experiment No. 3
Aim: Study and implementation of the Perceptron training algorithm.
Algorithm
Start with a randomly chosen weight vector w0;
Let k = 1;
While there exist input vectors that are misclassified by w(k-1), do:
    Let i(j) be a misclassified input vector;
    Let x(k) = class(i(j)) * i(j), implying that w(k-1) . x(k) < 0;
    Update the weight vector to w(k) = w(k-1) + n * x(k);
    Increment k;
End while.
Program
#include <iostream>
using namespace std;

int main()
{
    int in[3], d, w[3], a = 0;
    for (int i = 0; i < 3; i++)
    {
        cout << "\n initialize the weight vector w" << i << " ";
        cin >> w[i];
    }
    for (int i = 0; i < 3; i++)
    {
        cout << "\n enter the input vector i" << i << " ";
        cin >> in[i];
    }
    cout << "\n enter the desired output ";
    cin >> d;
    int ans = 1;
    while (ans == 1)
    {
        a = 0;
        for (int i = 0; i < 3; i++)
            a = a + w[i] * in[i];        // weighted sum of the inputs
        cout << "\n desired output is " << d;
        cout << "\n actual output is " << a;
        int e = d - a;                   // error
        cout << "\n error is " << e;
        cout << "\n press 1 to adjust weight else 0 ";
        cin >> ans;
        if (e < 0)
        {
            for (int i = 0; i < 3; i++)
                w[i] = w[i] - 1;         // decrease weights
        }
        else if (e > 0)
        {
            for (int i = 0; i < 3; i++)
                w[i] = w[i] + 1;         // increase weights
        }
    }
    return 0;
}
OUTPUT:
Experiment No. 4
Aim: Write a program to train a single-neuron perceptron with a single input.

#include <iostream>
using namespace std;

int main()
{
    float x, w, t, net, dw, a, al;
    cout << "consider a single neuron perceptron with a single i/p\n";
    cout << "enter the input: ";
    cin >> x;
    cout << "enter the initial weight: ";
    cin >> w;
    cout << "enter the target output: ";
    cin >> t;
    cout << "enter the learning coefficient: ";
    cin >> al;
    for (int i = 0; i < 10; i++)
    {
        net = x * w;                 // net input to the neuron
        if (net < 0)
            a = 0;                   // threshold activation
        else
            a = 1;
        dw = al * (t - a) * x;       // perceptron weight change
        w = w + dw;
        cout << "\niteration " << i + 1 << ": output = " << a
             << ", change in weight = " << dw
             << ", adjusted weight = " << w;
    }
    return 0;
}
OUTPUT:
Experiment No. 5
Aim: Write a program to implement the delta rule.
#include <iostream>
using namespace std;

int main()
{
    float input[3], weight[3], val[3], d, a, del;
    for (int i = 0; i < 3; i++)
    {
        cout << "\n initialize weight vector " << i << "\t";
        cin >> weight[i];
    }
    for (int i = 0; i < 3; i++)
    {
        cout << "\n enter input vector " << i << "\t";
        cin >> input[i];
    }
    cout << "\n enter the desired output\t";
    cin >> d;
    do
    {
        a = 0;
        for (int i = 0; i < 3; i++)
            a = a + weight[i] * input[i];   // actual output
        del = d - a;                        // error (delta)
        for (int i = 0; i < 3; i++)
        {
            val[i] = del * input[i];        // delta-rule correction
            weight[i] = weight[i] + val[i];
        }
        cout << "\n value of delta is " << del;
        cout << "\n weights have been adjusted";
    } while (del > 0.001 || del < -0.001);  // stop when delta is ~0
    cout << "\n output is correct";
    return 0;
}
OUTPUT
Experiment No. 7
Aim: Write a program for the Back Propagation Algorithm (second method).
#include <iostream>
#include <cmath>
using namespace std;

int main()
{
    struct input
    {
        float val, aop, out, w0, w1;
        int top;
    } s[3];

    float delta, corr, coeff = 0.1;
    float w0 = 0.2, w1 = -0.3;   // initial weights (w1 from the original; w0 assumed)
    int i = 0;

    // sample training data: input value and target output
    // (assumed; the original listing omits these lines)
    s[0].val = 0; s[0].top = 0;
    s[1].val = 1; s[1].top = 1;
    s[2].val = 1; s[2].top = 1;

    do
    {
        if (i > 0)
        {
            w0 = s[i - 1].w0;    // carry forward the adjusted weights
            w1 = s[i - 1].w1;
        }
        s[i].aop = w0 + (w1 * s[i].val);         // activation
        s[i].out = 1 / (1 + exp(-s[i].aop));     // sigmoid output
        delta = (s[i].top - s[i].out) * s[i].out * (1 - s[i].out);
        corr = coeff * delta * s[i].val;         // weight correction
        s[i].w0 = w0 + coeff * delta;
        s[i].w1 = w1 + corr;
        i++;
    } while (i != 3);

    cout << "VALUE\tTarget\tActual\tw0\tw1\n";
    for (i = 0; i < 3; i++)
    {
        cout << s[i].val << "\t" << s[i].top << "\t" << s[i].out
             << "\t" << s[i].w0 << "\t" << s[i].w1 << "\n";
    }
    return 0;
}
OUTPUT
Experiment No.8
Aim: Write a program to implement logic gates.
#include <iostream>
using namespace std;
int main()
{
char menu;
//Menu control variable
int result;
//final output variable
int dataValue1;
int dataValue2;
cout << "enter your Boolean operator code: (A,O,N,X): ";
cin >> menu;
switch (menu) //Menu control variable
{
case 'A':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 == 1 && dataValue2 == 1)
{
result = 1;
}
else
{
result = 0;
}
cout << "show result:" << result;
break;
case 'O':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 == 1 || dataValue2 == 1)
{
result = 1;
}else
{
result = 0;
}
cout << "show result:" << result;
break;
case 'N':
cout << "Enter first Boolean value:";
cin >> dataValue1;
result = !dataValue1;
cout << "show result:" << result;
break;
case 'X':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 != dataValue2)
{
result = 1;
}else
{
result = 0;
}
cout << "show result:" << result;
break;
default:
result = 0;
break;
}//end switch
cin.ignore(2);
return 0;
}//end main
OUTPUT
Experiment No. 9
Aim: Study of genetic algorithm.
Genetic Algorithms were invented to mimic some of the processes observed in natural evolution. The father of the original Genetic Algorithm was John Holland, who invented it in the early 1970s. Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the evolutionary ideas of natural selection and genetics. As such, they represent an intelligent exploitation of a random search used to solve optimization problems. Although randomized, GAs are by no means random; instead, they exploit historical information to direct the search into regions of better performance within the search space. The basic techniques of GAs are designed to simulate processes in natural systems necessary for evolution, especially those that follow the principles first laid down by Charles Darwin of "survival of the fittest".
Why Genetic Algorithms?
GAs are more robust than conventional AI systems. Unlike older AI systems, they do not break easily when the inputs change slightly or in the presence of reasonable noise. Also, in searching a large state space, a multi-modal state space, or an n-dimensional surface, a genetic algorithm may offer significant benefits over more typical search and optimization techniques (linear programming, heuristic search, depth-first, breadth-first, and praxis).
GA Algorithm:
1 randomly initialize population(t)
2 determine fitness of population(t)
3 repeat
4 select parents from population(t)
5 perform crossover on parents creating population(t+1)
6 perform mutation of population(t+1)
7 determine fitness of population(t+1)
8 until best individual is good enough
Genetic Algorithm Flowchart: (figure)
3. Mutation Operator
With some low probability, a portion of the new individuals will have some of their bits flipped. Its purpose is to maintain diversity within the population and inhibit premature convergence. Mutation alone induces a random walk through the search space. Mutation and selection (without crossover) create a parallel, noise-tolerant, hill-climbing algorithm.
Applications of GA:
Design: Circuit board layout, Communication network design, Keyboard layout, Parametric design in aircraft
Control: Missile evasion, Gas pipeline control, Pole balancing
Machine Learning: Designing Neural Networks, Classifier Systems, Learning rules
Robotics: Trajectory Planning, Path planning
Combinatorial Optimization: TSP, Bin Packing, Set Covering, Graph Bisection, Routing
Signal Processing: Filter Design
Image Processing: Pattern recognition
Business: Economic forecasting, evaluating credit risks, detecting stolen credit cards before the customer reports the theft
Medical: Studying health risks for a population exposed to toxins
Experiment No. 10
Aim: Study of Genetic Programming.
Genetic Programming is a branch of evolutionary computation inspired by biological evolution. It was introduced by Koza and his group. It is popular for its ability to learn relationships hidden in data and express them automatically in a mathematical manner. It is a machine learning technique used to optimize a population of computer programs according to a fitness landscape, where fitness is determined by a program's ability to perform a given computational task. In this direction, a variety of classifier programs have been considered. These classifiers have used different representation techniques, including decision trees, expression trees, and classification rule sets. Genetic programming is an extension of the genetic algorithm in which the genetic population contains computer programs.
Genetic Programming, one of a number of evolutionary algorithms, follows Darwin's theory of evolution (survival of the fittest). There is a population of computer programs (individuals) that reproduce with each other. The best individuals survive and eventually evolve to do well in the given environment.
Why Genetic Programming?
Genetic programming (GP) is a technique to automatically discover computer programs using the principle of Darwinian evolution. GP is a means of getting computers to solve problems without being explicitly programmed: the programmer does not specify the size, shape, or structural complexity of the solution in advance; rather, all these factors are determined automatically. Automatic programming has been a goal of computer scientists for decades, and genetic programming is among the most promising ways to automatically write computer programs, so there is considerable hope for GP's role in future computing.
Steps of GP:
Following are the steps of the Genetic Programming process:
1. Generate an initial population of random compositions of the functions and terminals of the problem (computer programs).
2. Iteratively perform the following substeps until the termination criterion has been satisfied:
   (a) Execute each program in the population and assign it a fitness value.
   (b) Create a new population of computer programs by applying the following two primary operations, which are applied to programs selected from the population with a probability based on fitness:
       Reproduce an existing program by copying it into the new population.
       Create two new computer programs from existing programs by genetically recombining randomly chosen parts of the two programs, using the crossover operation applied at a randomly chosen crossover point within each program.
3. Designate the program identified by the method of result designation as the result of the run of GP. This result may represent a solution to the problem.
Advantages of GP
GP is used successfully for multicategory classification problems because it has the following advantages:
No analytical knowledge is needed, and we can still get accurate results.
Every component of the resulting GP rule base is relevant in some way to the solution of the problem; thus no null operations are encoded that would expend computational resources at runtime.
GP imposes no restrictions on the structure of solutions, and we do not bound the complexity or the number of rules of the computed solution.
GP provides a mathematical representation of the classifier. As the classifier uses only a few selected features, this mathematical representation can easily be analyzed to learn more about the underlying system.
A further advantage of Genetic Programming is the absence, or relatively minor role, of preprocessing of inputs and postprocessing of outputs. The inputs, intermediate results, and outputs are typically expressed directly in terms of the natural terminology of the problem domain, and the programs produced by genetic programming consist of functions that are natural for the problem domain. The postprocessing of the output of a program, if any, is done by a wrapper (output interface).
Applications of GP
Genetic Programming has been applied successfully to symbolic regression (system identification, empirical discovery, modeling, forecasting, data mining), classification, control, optimization, equation solving, game playing, induction, image compression, cellular automata programming, decision tree induction, and many others. These problems span the fields of machine learning, artificial intelligence, and neural networks.