INDORE
Project report:
ARTIFICIAL NEURAL NETWORK
● The first computational model of a neuron was proposed by Warren McCulloch and
Walter Pitts in 1943.
Single layer feed forward network
● In this type of network there are only two layers: the input layer and the output layer.
● The input layer is not counted as a layer because no computation is performed in it.
● The output layer is formed by applying weights to the input nodes and taking the
cumulative (weighted-sum) effect at each node.
● The output neurons then collectively compute the output signals.
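As an illustrative sketch (the function and parameter names below are our own, not from the report), a single output node of such a network can be written as a weighted sum of the inputs followed by a threshold, in the spirit of the McCulloch-Pitts neuron:

```python
def single_layer_output(inputs, weights, threshold=1.0):
    """One output node of a single-layer feed-forward network:
    a weighted sum of the inputs followed by a step (threshold)
    activation, as in the McCulloch-Pitts neuron."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND-like behaviour with equal weights and threshold 2
print(single_layer_output([1, 1], [1.0, 1.0], threshold=2.0))  # fires: 1
print(single_layer_output([1, 0], [1.0, 1.0], threshold=2.0))  # does not fire: 0
```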
Multilayer feed forward network
● This network has one or more hidden layers, which are internal to the network and
have no direct contact with the external environment.
● The existence of one or more hidden layers makes the network computationally
stronger; it is called feed-forward because information flows forward from the inputs,
through the intermediate computations, to the output Z.
● There are no feedback connections, so the outputs of the model are never fed back into it.
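A minimal sketch of such a forward pass, assuming sigmoid activations and illustrative weights (none of these names come from the report): information flows from the inputs through one hidden layer to the output Z, with no feedback.

```python
import math

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, W_hidden, W_output):
    """One forward pass: input -> hidden layer -> output Z.
    W_hidden: one weight vector per hidden neuron.
    W_output: weight vector over the hidden activations."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in W_hidden]
    z = sigmoid(sum(w * h for w, h in zip(W_output, hidden)))
    return z

# illustrative weights for 2 inputs, 2 hidden neurons, 1 output
z = forward([1.0, 0.5], W_hidden=[[0.2, -0.4], [0.7, 0.1]], W_output=[0.5, -0.3])
```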
Recurrent network
● In this type of network, the output of a processing element can be directed back to
processing elements in the same layer or in a preceding layer, forming a multilayer
recurrent network.
● Recurrent networks perform the same task for every element of a sequence, with each
output depending on the previous computations.
● The main feature of a Recurrent Neural Network is its hidden state, which captures
information about the sequence seen so far.
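The hidden-state idea can be sketched as follows (a scalar toy model with made-up weights, not a full RNN implementation): the same weights are applied at every element of the sequence, and the hidden state depends on all the inputs seen so far.

```python
import math

def rnn_step(h_prev, x, w_h, w_x):
    """One recurrent step: the new hidden state mixes the previous
    hidden state (feedback) with the current input."""
    return math.tanh(w_h * h_prev + w_x * x)

def run_sequence(xs, w_h=0.5, w_x=1.0):
    """Apply the same weights at every element of the sequence;
    the final hidden state summarises the whole sequence."""
    h = 0.0  # initial hidden state
    for x in xs:
        h = rnn_step(h, x, w_h, w_x)
    return h
```

Because the hidden state carries history, the same multiset of inputs in a different order yields a different final state.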
Perceptron Model
● The perceptron model, proposed by Frank Rosenblatt and later analyzed by Minsky and
Papert, is a more general computational model than the McCulloch-Pitts neuron.
● It overcomes some of the limitations of the M-P neuron by introducing the concept of
numerical weights for inputs, and a mechanism for learning those weights.
● Inputs are no longer limited to boolean values as in the M-P neuron; the perceptron
supports real-valued inputs as well, which makes it more useful and general.
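A minimal sketch of the perceptron learning rule on real-valued inputs (the hyper-parameters and the toy dataset are illustrative, not from the report): whenever the prediction is wrong, the weights are nudged toward or away from the input.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: adjust weights and bias only
    on misclassified samples."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            error = target - pred  # 0 when correct, +/-1 when wrong
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# linearly separable toy data (AND over real-valued inputs)
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]
w, b = train_perceptron(data)
```

For linearly separable data such as this, the perceptron convergence theorem guarantees the rule finds a separating set of weights.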
Back Propagation
● The Backpropagation neural network is a multilayered, feedforward neural network.
● Backpropagation works by approximating the non-linear relationship between the input
and the output by adjusting the weight values internally.
● It has two stages: training and testing.
● A Backpropagation neural network includes an input layer, a hidden layer, and an output
layer. It should be noted that Backpropagation neural networks can have more than one
hidden layer.
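The two stages can be sketched in minimal form, assuming sigmoid activations, squared-error loss, and a single hidden layer (all names and hyper-parameters here are illustrative): training runs a forward pass, then propagates the error backwards and adjusts every weight by gradient descent; testing is just a forward pass with the trained weights.

```python
import math, random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def train(samples, n_hidden=3, lr=0.5, epochs=5000, seed=0):
    """Training stage: minimal backpropagation for one hidden layer
    with squared-error loss."""
    rng = random.Random(seed)
    n_in = len(samples[0][0])
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]  # +1 bias
    W2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    for _ in range(epochs):
        for x, t in samples:
            xb = list(x) + [1.0]
            h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in W1]
            hb = h + [1.0]
            y = sigmoid(sum(w * v for w, v in zip(W2, hb)))
            # backward pass: output delta first, then hidden deltas
            d_out = (y - t) * y * (1 - y)
            d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden + 1):
                W2[j] -= lr * d_out * hb[j]
            for j in range(n_hidden):
                for i in range(n_in + 1):
                    W1[j][i] -= lr * d_hid[j] * xb[i]
    return W1, W2

def predict(W1, W2, x):
    """Testing stage: a forward pass with the trained weights."""
    xb = list(x) + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(ws, xb))) for ws in W1] + [1.0]
    return sigmoid(sum(w * v for w, v in zip(W2, h)))

# XOR is not linearly separable, so the hidden layer is essential here
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
W1, W2 = train(xor)
```

With these illustrative settings the network usually learns XOR, although convergence depends on the random initialization.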