
GWALIOR ENGINEERING COLLEGE

AIRPORT ROAD MAHARAJ PURA GWALIOR 474015

Department of CSE & IT


LAB MANUAL
OF

Soft Computing (IT802)

B.E. (VIIITH SEM)

Prepared by:

Approved & Reviewed by:

PRIYANKA GUPTA
ROLL NO. 0916IT111025
List of Experiments

S. No.  Experiment                                                              Page No.  Sign.
1       Study of Biological Neural Network
2       Study of Artificial Neural Network
3       Write a program of the Perceptron Training Algorithm
4       Write a program to implement Hebb's rule
5       Write a program to implement the delta rule
6       Write a program for the Back Propagation Algorithm
7       Write a program for the Back Propagation Algorithm by a second method
8       Write a program to implement logic gates
9       Study of genetic algorithm
10      Study of Genetic Programming (Content Beyond the Syllabus)

Experiment No. 1

Aim: Study of Biological Neural Network (BNN).


Neural networks are inspired by our brains. A biological neural network describes a
population of physically interconnected neurons or a group of disparate neurons whose inputs
or signaling targets define a recognizable circuit. Communication between neurons often
involves an electrochemical process. The interface through which they interact with
surrounding neurons usually consists of several dendrites (input connections), which are
connected via synapses to other neurons, and one axon (output connection). If the sum of the
input signals surpasses a certain threshold, the neuron generates an action potential (AP) at the
axon hillock and transmits this electrical signal along the axon.
The control unit - or brain - can be divided into different anatomical and functional sub-units,
each having certain tasks such as vision, hearing, motor and sensory control. The brain is
connected by nerves to the sensors and actuators in the rest of the body.
The brain consists of a very large number of neurons, about 10^11 on average. These can be seen
as the basic building blocks of the central nervous system (CNS). The neurons are
interconnected at points called synapses. The complexity of the brain is due to the massive
number of highly interconnected simple units working in parallel, with an individual neuron
receiving input from up to 10000 others.
The neuron contains all structures of an animal cell. The complexity of the structure and of the
processes in a simple cell is enormous. Even the most sophisticated neuron models in artificial
neural networks seem comparatively toy-like.
Structurally the neuron can be divided in three major parts: the cell body (soma), the
dendrites, and the axon.
The cell body contains the organelles of the neuron, and the dendrites also originate there.
These are thin and widely branching fibers, reaching out in different directions to make
connections to a large number of cells within the cluster.
Input connections are made from the axons of other cells to the dendrites or directly to the
body of the cell. These are known as axodendritic and axosomatic synapses.

Fig: Biological Neurons


There is only one axon per neuron. It is a single, long fiber, which transports the output
signal of the cell as electrical impulses (action potentials) along its length. The end of the axon
may divide into many branches, which are then connected to other cells. The branches serve
to fan out the signal to the inputs of many other neurons.
There are many different types of neuron cells found in the nervous system. The differences
are due to their location and function.
A neuron basically performs the following function: all the inputs to the cell, which may
vary in the strength of the connection or the frequency of the incoming signal, are summed
up. The input sum is processed by a threshold function and produces an output signal.
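
As a rough illustration of this sum-and-threshold behaviour, the following minimal C++ sketch models a single abstract neuron; the input values, weights, and threshold are made-up illustrative numbers, not biological data.

#include <iostream>
#include <vector>
using namespace std;

// Step-threshold model of a single neuron: sum the weighted inputs
// and fire (output 1) only if the sum exceeds the threshold.
int neuronOutput(const vector<double>& inputs,
                 const vector<double>& weights, double threshold)
{
    double sum = 0.0;
    for (size_t i = 0; i < inputs.size(); i++)
        sum += inputs[i] * weights[i];          // weighted summation
    return (sum > threshold) ? 1 : 0;           // threshold function
}

int main()
{
    vector<double> inputs  = {0.5, 1.0, 0.25};  // example input signals
    vector<double> weights = {0.4, 0.3, 0.8};   // example connection strengths
    cout << "Neuron output: " << neuronOutput(inputs, weights, 0.6) << endl;
    return 0;
}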
The brain works in both a parallel and a serial way. The parallel and serial nature of the brain is
readily apparent from the physical anatomy of the nervous system. That there is serial and
parallel processing involved can be easily seen from the time needed to perform tasks. For
example, a human can recognize the picture of another person in about 100 ms. Given the
processing time of about 1 ms for an individual neuron, this implies that the serial processing
chain involves fewer than 100 neurons, with the rest of the work done in parallel.
Biological neural systems usually have a very high fault tolerance. Experiments with people
with brain injuries have shown that damage to neurons up to a certain level does not
necessarily influence the performance of the system, though tasks such as writing or speaking
may have to be learned again. This can be regarded as re-training the network.

Experiment No.2
Aim: Study of ANN.
An artificial neural network is a system based on the operation of biological neural networks;
in other words, it is an emulation of a biological neural system. Why is the implementation of
artificial neural networks necessary? Although computing these days is truly advanced, there
are certain tasks that a program written for a conventional microprocessor is unable to perform
well; for such tasks a software implementation of a neural network can be used, with its own
advantages and disadvantages.
Advantages of ANN

A neural network can perform tasks that a linear program cannot.

When an element of the neural network fails, the network can continue without any problem
because of its parallel nature.

A neural network learns and does not need to be reprogrammed.

It can be implemented in any application.

It can be implemented without any problem.

Disadvantages of ANN

The neural network needs training to operate.

The architecture of a neural network is different from the architecture of microprocessors and
therefore needs to be emulated.

Large neural networks require high processing time.

Another aspect of artificial neural networks is that there are different architectures, which
consequently require different types of algorithms; but despite being an apparently complex
system, a neural network is relatively simple.
Artificial neural networks (ANN) are among the newest signal-processing technologies in the
engineer's toolbox. The field is highly interdisciplinary, but our approach will restrict the view
to the engineering perspective. In engineering, neural networks serve two important functions:
as pattern classifiers and as nonlinear adaptive filters.
An Artificial Neural Network is an adaptive, most often nonlinear system that learns to
perform a function (an input/output map) from data. Adaptive means that the system
parameters are changed during operation, normally called the training phase. After the training
phase the Artificial Neural Network parameters are fixed and the system is deployed to solve
the problem at hand (the testing phase). The Artificial Neural Network is built with a
systematic step-by-step procedure to optimize a performance criterion or to follow some
implicit internal constraint, which is commonly referred to as the learning rule. The
input/output training data are fundamental in neural network technology, because they convey
the necessary information to "discover" the optimal operating point. The nonlinear nature of
the neural network processing elements (PEs) provides the system with lots of flexibility to
achieve practically any desired input/output map, i.e., some Artificial Neural Networks are
universal mappers. There is a style in neural computation that is worth describing.

An input is presented to the neural network and a corresponding desired or target response is set
at the output (when this is the case the training is called supervised). An error is computed
from the difference between the desired response and the system output. This error information
is fed back to the system and adjusts the system parameters in a systematic fashion (the
learning rule). The process is repeated until the performance is acceptable.
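
A minimal sketch of this present-compare-adjust cycle, using an assumed toy system y = w * x, a single made-up training pair, and an assumed learning rate (none of these values come from the text):

#include <iostream>
using namespace std;

int main()
{
    double w = 0.0;                      // the single adjustable system parameter
    double eta = 0.1;                    // assumed learning rate
    double x = 2.0, d = 6.0;             // illustrative input and desired (target) response

    // Repeat the cycle until the error is acceptably small.
    for (int epoch = 0; epoch < 50; epoch++)
    {
        double y = w * x;                // present the input: system output
        double e = d - y;                // error = desired response - system output
        w += eta * e * x;                // feed the error back to adjust the parameter
        if (e > -1e-6 && e < 1e-6)
            break;
    }
    cout << "Learned parameter w = " << w << endl;   // converges towards 3
    return 0;
}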
Neural Network Topologies
In the previous section we discussed the properties of the basic processing unit in an artificial
neural network. This section focuses on the pattern of connections between the units and the
propagation of data. As for this pattern of connections, the main distinction we can make is
between:

Feed-forward neural networks, where the data flow from input to output units is strictly
feedforward. The data processing can extend over multiple (layers of) units, but no feedback
connections are present, that is, connections extending from outputs of units to inputs of units in
the same layer or previous layers.

Recurrent neural networks that do contain feedback connections. Contrary to feed-forward
networks, the dynamical properties of the network are important. In some cases, the activation
values of the units undergo a relaxation process such that the neural network will evolve to a
stable state in which these activations do not change anymore. In other applications, the change
of the activation values of the output neurons is significant, such that the dynamical behaviour
constitutes the output of the neural network (Pearlmutter, 1990).
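
A minimal sketch of the feed-forward case, assuming one hidden layer with made-up sizes, weights, and sigmoid activations (all names and numbers here are illustrative):

#include <iostream>
#include <vector>
#include <cmath>
using namespace std;

double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

// Propagate an input vector through one fully connected layer:
// out[j] = sigmoid( bias[j] + sum_i in[i] * w[j][i] ).
vector<double> forward(const vector<double>& in,
                       const vector<vector<double>>& w,
                       const vector<double>& bias)
{
    vector<double> out(w.size());
    for (size_t j = 0; j < w.size(); j++)
    {
        double net = bias[j];
        for (size_t i = 0; i < in.size(); i++)
            net += in[i] * w[j][i];
        out[j] = sigmoid(net);
    }
    return out;
}

int main()
{
    vector<double> input = {1.0, 0.5};                      // two input units
    vector<vector<double>> w1 = {{0.4, -0.6}, {0.3, 0.8}};  // input -> hidden weights
    vector<vector<double>> w2 = {{1.2, -0.7}};              // hidden -> output weights

    // Data flow strictly forward: input layer -> hidden layer -> output layer.
    vector<double> hidden = forward(input, w1, {0.1, -0.2});
    vector<double> output = forward(hidden, w2, {0.05});
    cout << "Network output: " << output[0] << endl;
    return 0;
}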

Training of Artificial neural networks


A neural network has to be configured such that the application of a set of inputs produces
(either directly or via a relaxation process) the desired set of outputs. Various methods to set the
strengths of the connections exist. One way is to set the weights explicitly, using a priori
knowledge. Another way is to 'train' the neural network by feeding it teaching patterns and
letting it change its weights according to some learning rule.
We can categorize the learning situations into the following sorts:
Supervised learning or Associative learning in which the network is trained by providing it
with input and matching output patterns. These input-output pairs can be provided by an
external teacher, or by the system which contains the neural network (self-supervised).

Unsupervised learning or Self-organization, in which an (output) unit is trained to respond
to clusters of patterns within the input. In this paradigm the system is supposed to discover
statistically salient features of the input population. Unlike the supervised learning
paradigm, there is no a priori set of categories into which the patterns are to be classified;
rather, the system must develop its own representation of the input stimuli.

Reinforcement Learning: This type of learning may be considered an intermediate form
of the above two types of learning. Here the learning machine does some action on the
environment and gets a feedback response from the environment. The learning system
grades its action as good (rewarding) or bad (punishable) based on the environmental
response and accordingly adjusts its parameters. Generally, parameter adjustment is
continued until an equilibrium state occurs, following which there will be no more
changes in its parameters. Self-organizing neural learning may be categorized under
this type of learning.

Experiment No.3
Aim: Study and implementation of Perceptron training Algorithm.
Algorithm
Start with a randomly chosen weight vector W0;
Let k = 1;
While there exist input vectors that are misclassified by Wk-1, do:
    Let Ij be a misclassified input vector;
    Let Xk = class(Ij) * Ij, implying that Wk-1 . Xk < 0;
    Update the weight vector to Wk = Wk-1 + n * Xk;   (n is the learning rate)
    Increment k;
End while;

Program

#include <iostream>
using namespace std;

int main()
{
    int in[3], d, w[3], a;

    // read the initial weight vector
    for (int i = 0; i < 3; i++)
    {
        cout << "\nInitialize the weight vector w[" << i << "]: ";
        cin >> w[i];
    }

    // read the input vector
    for (int i = 0; i < 3; i++)
    {
        cout << "\nEnter the input vector in[" << i << "]: ";
        cin >> in[i];
    }

    cout << "\nEnter the desired output: ";
    cin >> d;

    int ans = 1;
    while (ans == 1)
    {
        // actual output: weighted sum of the inputs
        a = 0;
        for (int i = 0; i < 3; i++)
            a = a + w[i] * in[i];

        cout << "\nDesired output is " << d;
        cout << "\nActual output is " << a;

        int e = d - a;                               // error term
        cout << "\nError is " << e;
        cout << "\nPress 1 to adjust the weights, 0 to stop: ";
        cin >> ans;

        // fixed-increment correction of the weight vector
        if (e < 0)
        {
            for (int i = 0; i < 3; i++)
                w[i] = w[i] - 1;
        }
        else if (e > 0)
        {
            for (int i = 0; i < 3; i++)
                w[i] = w[i] + 1;
        }
    }
    return 0;
}

OUTPUT:

Experiment No.4

Aim: Write a program to implement Hebb's rule.

#include <iostream>
using namespace std;

int main()
{
    float x, w, alpha, net, a, dw;

    cout << "Consider a single neuron perceptron with a single input\n";
    cout << "Enter the input value: ";
    cin >> x;
    cout << "Enter the initial weight: ";
    cin >> w;
    cout << "Enter the learning coefficient: ";
    cin >> alpha;

    for (int i = 0; i < 10; i++)
    {
        net = x * w;                    // net input to the neuron
        if (net < 0)                    // binary threshold activation
            a = 0;
        else
            a = 1;

        dw = alpha * x * a;             // Hebb rule: dw = alpha * input * output
        w = w + dw;                     // weight adjustment

        cout << "Iteration " << i + 1 << ": output = " << a
             << ", change in weight = " << dw
             << ", adjusted weight = " << w << "\n";
    }
    return 0;
}

OUTPUT:

Experiment No.5
Aim: Write a program to implement the delta rule.
#include <iostream>
using namespace std;

int main()
{
    float input[3], weight[3], val[3], d, a, del;

    // read the input vector
    for (int i = 0; i < 3; i++)
    {
        cout << "\nEnter input value " << i << "\t";
        cin >> input[i];
    }

    // initialize the weight vector
    for (int i = 0; i < 3; i++)
    {
        cout << "\nInitialize weight vector " << i << "\t";
        cin >> weight[i];
    }

    cout << "\nEnter the desired output\t";
    cin >> d;

    do
    {
        // actual output: weighted sum of the inputs
        a = 0;
        for (int i = 0; i < 3; i++)
            a = a + weight[i] * input[i];

        del = d - a;                              // delta = desired - actual

        // delta rule: adjust each weight in proportion to delta and its input
        for (int i = 0; i < 3; i++)
        {
            val[i] = del * input[i];
            weight[i] = weight[i] + val[i];
        }

        cout << "\nValue of delta is " << del;
        cout << "\nWeights have been adjusted";
    } while (del != 0);

    cout << "\nOutput is correct";
    return 0;
}

OUTPUT

Experiment No.7
Aim: Write a program for the Back Propagation Algorithm by a second method.
#include <iostream>
#include <cmath>
using namespace std;

// one training pattern for a single sigmoid neuron
struct Pattern
{
    float val;    // input value
    float top;    // target output
    float out;    // actual output
    float w0;     // bias weight after this step
    float w1;     // input weight after this step
};

int main()
{
    Pattern s[3];
    float w0, w1, aop, delta, corr, coeff = 0.1;

    cout << "\nEnter the input value and target output for each of the 3 patterns:\n";
    for (int i = 0; i < 3; i++)
        cin >> s[i].val >> s[i].top;

    int i = 0;
    do
    {
        if (i == 0)
        {
            w0 = -1.0;                 // initial bias weight
            w1 = -0.3;                 // initial input weight
        }
        else
        {
            w0 = s[i - 1].w0;          // continue from the previous pattern's weights
            w1 = s[i - 1].w1;
        }

        aop = w0 + (w1 * s[i].val);                    // net activation
        s[i].out = 1.0 / (1.0 + exp(-aop));            // sigmoid output

        // back-propagated error for an output neuron
        delta = (s[i].top - s[i].out) * s[i].out * (1 - s[i].out);
        corr = coeff * delta * s[i].val;               // weight correction

        s[i].w0 = w0 + coeff * delta;                  // bias update
        s[i].w1 = w1 + corr;                           // weight update
        i++;
    } while (i != 3);

    cout << "VALUE\tTARGET\tACTUAL\tw0\tw1\n";
    for (i = 0; i < 3; i++)
        cout << s[i].val << "\t" << s[i].top << "\t" << s[i].out
             << "\t" << s[i].w0 << "\t" << s[i].w1 << "\n";

    return 0;
}

OUTPUT

Experiment No.8
Aim: Write a program to implement logic gates.
#include <iostream>
using namespace std;
int main()
{
char menu;
//Menu control variable
int result;
//final output variable
int dataValue1;
int dataValue2;
cout << "enter your Boolean operator code: (A,O,N,X): ";
cin >> menu;
switch (menu) //Menu control variable
{
case 'A':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 == 1 && dataValue2 == 1)
{
result = 1;
}
else
{
result = 0;
}
cout << "show result:" << result;
break;
case 'O':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 == 1 || dataValue2 == 1)
{
result = 1;
}else
{
result = 0;
}
cout << "show result:" << result;

break;
case 'N':
cout << "Enter first Boolean value:";
cin >> dataValue1;
result = !dataValue1;
cout << "show result:" << result;
break;
case 'X':
cout << "Enter first Boolean value:";
cin >> dataValue1;
cout << "Enter second Boolean value:";
cin >> dataValue2;
if(dataValue1 != dataValue2)   // XOR: true only when the two inputs differ
{
result = 1;
}else
{
result = 0;
}
cout << "show result:" << result;
break;
default:
result = 0;
break;
}//end switch
cin.ignore(2);
return 0;
}//end main

OUTPUT

Experiment No. 9
Aim: Study of genetic algorithm.
Genetic Algorithms were invented to mimic some of the processes observed in natural
evolution. The father of the original Genetic Algorithm was John Holland, who invented it in the
early 1970's. Genetic Algorithms (GAs) are adaptive heuristic search algorithms based on the
evolutionary ideas of natural selection and genetics. As such they represent an intelligent
exploitation of a random search used to solve optimization problems. Although randomized,
GAs are by no means random; instead they exploit historical information to direct the search
into the region of better performance within the search space. The basic techniques of GAs
are designed to simulate processes in natural systems necessary for evolution, especially those
that follow the principle first laid down by Charles Darwin of "survival of the fittest".
Why Genetic Algorithms?
GAs are better than conventional AI in that they are more robust. Unlike older AI systems, they do
not break easily even if the inputs change slightly or in the presence of reasonable noise. Also, in
searching a large state space, a multi-modal state space, or an n-dimensional surface, a genetic
algorithm may offer significant benefits over more typical search or optimization techniques
(linear programming, heuristic, depth-first, breadth-first, and praxis).
GA Algorithm:
1 randomly initialize population(t)
2 determine fitness of population(t)
3 repeat
4 select parents from population(t)
5 perform crossover on parents creating population(t+1)
6 perform mutation of population(t+1)
7 determine fitness of population(t+1)
8 until best individual is good enough
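
A minimal C++ sketch of this loop, assuming a toy fitness function (count of 1-bits in a fixed-length bit string), binary tournament selection, single-point crossover, and bit-flip mutation; the problem, parameter values, and operator choices are illustrative assumptions, not prescribed by the manual:

#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
#include <ctime>
using namespace std;

const int LEN = 16, POP = 20, GENS = 50;

// Toy fitness: count of 1-bits in the string (higher is better).
int fitness(const vector<int>& s)
{
    int f = 0;
    for (int b : s) f += b;
    return f;
}

int main()
{
    srand((unsigned)time(0));

    // 1-2. randomly initialize population(t); fitness is evaluated on demand below
    vector<vector<int>> pop(POP, vector<int>(LEN));
    for (auto& s : pop)
        for (int& b : s) b = rand() % 2;

    // 3-8. repeat selection, crossover and mutation for a fixed number of generations
    for (int g = 0; g < GENS; g++)
    {
        vector<vector<int>> next;
        while ((int)next.size() < POP)
        {
            // 4. select parents by binary tournament on fitness
            int i1 = rand() % POP, i2 = rand() % POP;
            int j1 = rand() % POP, j2 = rand() % POP;
            vector<int> a = fitness(pop[i1]) > fitness(pop[i2]) ? pop[i1] : pop[i2];
            vector<int> b = fitness(pop[j1]) > fitness(pop[j2]) ? pop[j1] : pop[j2];

            // 5. single-point crossover on the two parents
            int cut = rand() % LEN;
            for (int i = cut; i < LEN; i++) swap(a[i], b[i]);

            // 6. mutation: flip each bit with a low probability
            for (int& bit : a) if (rand() % 100 < 2) bit = 1 - bit;
            for (int& bit : b) if (rand() % 100 < 2) bit = 1 - bit;

            next.push_back(a);
            next.push_back(b);
        }
        pop = next;                      // population(t+1)
    }

    int best = 0;
    for (auto& s : pop) best = max(best, fitness(s));
    cout << "Best fitness after " << GENS << " generations: " << best << endl;
    return 0;
}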

Fig: Genetic Algorithm Flowchart

2. Crossover Operator
A crossover point is chosen at random, and the values of the two strings are exchanged up to this point.
If S1=000000 and S2=111111 and the crossover point is 2, then S1'=110000 and S2'=001111.
The two new offspring created from this mating are put into the next generation of the population.
By recombining portions of good individuals, this process is likely to create even better
individuals.
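
A short sketch of this single-point crossover on bit strings, reproducing the worked example above (the string values and the crossover point come from the text; the function name is illustrative):

#include <iostream>
#include <string>
#include <utility>
using namespace std;

// Exchange the first 'point' characters of the two parent strings.
void crossover(string& s1, string& s2, int point)
{
    for (int i = 0; i < point; i++)
        swap(s1[i], s2[i]);
}

int main()
{
    string s1 = "000000", s2 = "111111";
    crossover(s1, s2, 2);                  // crossover point = 2
    cout << "S1' = " << s1 << "\n";        // prints 110000
    cout << "S2' = " << s2 << "\n";        // prints 001111
    return 0;
}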

3. Mutation Operator
With some low probability, a portion of the new individuals will have some of their bits flipped.
Its purpose is to maintain diversity within the population and inhibit premature convergence.
Mutation alone induces a random walk through the search space. Mutation and selection (without
crossover) create a parallel, noise-tolerant, hill-climbing algorithm.

Effects of Genetic Operators


Using selection alone will tend to fill the population with copies of the best individual from the
population. Using selection and crossover operators will tend to cause the algorithm to converge
on a good but sub-optimal solution. Using mutation alone induces a random walk through the
search space. Using selection and mutation creates a parallel, noise-tolerant, hill-climbing
algorithm.
Applications of Genetic Algorithms
Scheduling: Facility, Production, Job, and Transportation Scheduling

Design: Circuit board layout, Communication Network design, keyboard layout, Parametric
design in aircraft
Control: Missile evasion, Gas pipeline control, Pole balancing
Machine Learning: Designing Neural Networks, Classifier Systems, Learning rules
Robotics: Trajectory Planning, Path planning
Combinatorial Optimization: TSP, Bin Packing, Set Covering, Graph Bisection, Routing
Signal Processing: Filter Design
Image Processing: Pattern recognition
Business: Economic Forecasting; Evaluating credit risks, Detecting stolen credit cards before
the customer reports it stolen
Medical: Studying health risks for a population exposed to toxins

Experiment No. 10
Aim: Study of Genetic Programming.
Genetic Programming is a branch of evolutionary computation inspired by biological
evolution. It was introduced by Koza and his group. It is popular for its ability to learn
relationships hidden in data and express them automatically in a mathematical manner. It is a
machine learning technique used to optimize a population of computer programs according to a
fitness landscape. This fitness has been determined by a program's ability to perform a given
computational task. In this direction, a variety of classifier programs have been considered.
These classifiers have used different representation techniques including decision trees,
expression trees and classification rule sets. Genetic programming is an extension of the
genetic algorithm in which the genetic population contains computer programs.
Genetic Programming, one of a number of evolutionary algorithms, follows Darwin's theory of
evolution (survival of the fittest). There is a population of computer programs (individuals)
that reproduce with each other. The best individuals will survive and eventually evolve to do
well in the given environment.
Why Genetic Programming?
Genetic programming (GP) is a technique to automatically discover computer programs using
principle of Darwinian evolution. GP is a means of getting computers to solve problems
without being explicitly programmed. By "without being explicitly programmed" it is meant that the
programmer does not specify the size, shape, or structural complexity of the solution in
advance; rather, all these factors are determined automatically. Automatic programming has
been the goal of computer scientists for a number of decades. Genetic programming is among the
most promising approaches to writing computer programs automatically, so there is considerable
hope for GP's role in future computing.
Steps of GP:
Following are the steps of the Genetic Programming process (a small sketch of a program-tree
individual follows the list):
1. Generate an initial population of random compositions of the functions and terminals of
   the problem (computer programs).
2. Iteratively perform the following sub-steps until the termination criterion has been
   satisfied:
   a. Execute each program in the population and assign it a fitness value.
   b. Create a new population of computer programs by applying the following two
      primary operations. The operations are applied to computer programs in the
      population selected with a probability based on fitness:
      - Reproduce an existing program by copying it into the new population.
      - Create two new computer programs from existing programs by genetically
        recombining randomly chosen parts of two existing programs, using the
        crossover operation applied at a randomly chosen crossover point within
        each program.
3. Designate the program that is identified by the method of result designation as the result of
   the run of GP. This result may represent a solution to the problem.
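
As a rough illustration of what a "computer program" in the GP population can look like, the following sketch builds and evaluates a tiny arithmetic expression tree; the node layout, function set, and example program are illustrative assumptions, not Koza's exact representation:

#include <iostream>
using namespace std;

// A GP individual is a program tree: internal nodes are functions (+, *),
// leaves are terminals (the input x or a constant).
struct Node
{
    char op;           // '+', '*', 'x' (input terminal), or 'c' (constant terminal)
    double value;      // used only when op == 'c'
    Node* left;
    Node* right;
};

// Execute the program on a given input x by recursively evaluating the tree.
double eval(const Node* n, double x)
{
    switch (n->op)
    {
        case 'x': return x;
        case 'c': return n->value;
        case '+': return eval(n->left, x) + eval(n->right, x);
        default:  return eval(n->left, x) * eval(n->right, x);   // '*'
    }
}

int main()
{
    // Hand-built individual representing the program (x * x) + 2
    Node x1  = {'x', 0, nullptr, nullptr};
    Node x2  = {'x', 0, nullptr, nullptr};
    Node two = {'c', 2, nullptr, nullptr};
    Node mul = {'*', 0, &x1, &x2};
    Node root = {'+', 0, &mul, &two};

    // A fitness value could be assigned by how well the program maps sample inputs to targets.
    cout << "Program output at x = 3: " << eval(&root, 3) << endl;   // prints 11
    return 0;
}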
Advantages of GP
GP is used successfully for multicategory classification problems because it has following
advantages:
No analytical knowledge is needed and we can still get accurate results.
Every component of the resulting GP rule-base is relevant in some way to the solution of
the problem. Thus no null operations are encoded that would expend computational
resources at runtime.
With GP, no restrictions are imposed on what the structure of solutions should be. Also, we
do not bound the complexity or the number of rules of the computed solution.
GP provides a mathematical representation of the classifier. As the classifier uses only a
few selected features, the mathematical representation of the classifier can be easily
analyzed to know more about the underlying system.
A further strength of Genetic Programming is the absence, or relatively minor role, of
preprocessing of inputs and postprocessing of outputs. The inputs, intermediate results, and
outputs are typically expressed directly in terms of the natural terminology of the problem domain. The
programs produced by genetic programming consist of functions that are natural for the
problem domain. The postprocessing of the output of a program, if any, is done by a
wrapper (output interface).
Applications of GP
Genetic Programming has been applied successfully to symbolic regression (system
identification, empirical discovery, modeling, forecasting, data mining), classification, control,
optimization, equation solving, game playing, induction, image compression, cellular automata
programming, decision tree induction and many others. The problems in Genetic Programming
include the fields of machine learning, artificial intelligence, and neural networks.
