
An Artificial Neural Network (ANN) is an information processing paradigm inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system: a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and the same is true of ANNs. This project explains how to develop neural network programs. It includes two different neural network programs:

- DigitalNeuralGate - a two-input neural digital gate which can be trained to perform the functions of various digital gates (like XOR, AND, OR, etc.)
- PatternDetector - a simple handwriting/pattern detection program which can analyze an image to detect the pattern it contains

This article also covers the following concepts:

- the concept of a neuron and of a neural network;
- the programming model and design of the BrainNet library;
- Neural XML (NXML), an XML-based programming language (which is a part of the BrainNet library) for creating, training and running neural networks.

Here are some basic facts about the structure of a neural network:

- A neural network consists of various layers.
- Each layer can have any number of neurons in it.
- The first layer of the network is called the input layer, and it is here we apply the input.
- The last layer is called the output layer, and it is from here we take the output.
- A neural network can have any number of hidden layers between the input and output layers.
- In most neural network models, a neuron in one layer is connected to all neurons in the next layer.
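As a concrete illustration of this layered structure (a sketch only, not BrainNet's actual API), such a network can be described by one weight matrix per pair of adjacent layers, plus one bias per non-input neuron. The random initial values echo what the article describes below.

```python
import random

def make_network(layer_sizes):
    """Build a fully connected feed-forward network description.

    `weights[k]` holds one row of incoming weights per neuron in
    layer k+1; `biases[k]` holds one bias per neuron in layer k+1.
    All values start random, as the article describes.
    """
    weights = []
    biases = []
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        weights.append([[random.uniform(-1, 1) for _ in range(n_in)]
                        for _ in range(n_out)])
        biases.append([random.uniform(-1, 1) for _ in range(n_out)])
    return weights, biases

# A 2-2-1 network: two input neurons, two hidden neurons, one output neuron.
weights, biases = make_network([2, 2, 1])
```

With this shape, `weights[0]` carries the input-to-hidden connections and `weights[1]` the hidden-to-output connections.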

In the above network, N1 and N2 are neurons in the input layer, N3 and N4 are neurons in the hidden layer, and N5 is the neuron in the output layer. We provide the inputs to N1 and N2. Each neuron in each layer is connected to all neurons in the next layer. Based on the number of neurons in each layer, the above network can be called a 2-2-1 network.

Now let us see how training takes place in a backward propagation neural network. In such a network there are several layers, and each neuron in each layer is connected to all neurons in the next layer. For each connection, a random weight is assigned when the network is initialized, and a random bias value is assigned to each neuron during initialization. Training is the process of adjusting the connection weights and biases of all neurons in the network (other than the neurons in the input layer), to enable the network to produce the expected output for all input sets.

Now, let us see how the training actually happens. Consider a small 2-2-1 network, which we are going to train with the AND truth table:

AND TRUTH TABLE
A B Output
0 0 0
0 1 0
1 0 0
1 1 1
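As an illustrative data representation (names and layout are this sketch's assumptions, not the library's), the AND truth table can be written as a list of input/expected-output pairs ready to feed to a training loop:

```python
# Each entry pairs an input vector [A, B] with the expected output vector.
AND_TRAINING_SET = [
    ([0, 0], [0]),
    ([0, 1], [0]),
    ([1, 0], [0]),
    ([1, 1], [1]),
]
```

Training would then cycle over these four pairs for many rounds, as described below.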

Now, some basic facts about training.


- You can train a neural network by providing inputs and outputs. The network will actually learn from the inputs and outputs - this is explained in detail later.
- Once training is over, you can provide the inputs to obtain the outputs.

Now we will see how you can use the BrainNet library to develop a neural network which can be trained to perform digital gate functions. We are going to create a 2-2-1 network - that is, a network with two input neurons, two hidden-layer neurons and one output neuron, exactly as shown in the picture above. Then we will see how to train this network to perform the functions of various two-input digital gates - like the AND gate, OR gate, XOR gate, etc. We can train the same network to learn the functions of various gates: after a number of training rounds, the network will learn, from the truth table of a gate, which output to produce for a given input.

First, let us see how we train our 2-2-1 network on the first condition in the truth table, i.e., when A=0 and B=0, the output is 0.

Step 1 - Feeding The Inputs

Initially, we feed the inputs to the neural network. This is done by simply setting the outputs of the neurons in Layer 1 to the input values we need to feed. As per the above example, our inputs are 0,0 and the expected output is 0, so we set the output of neuron N1 to 0, and the output of N2 to 0. In pseudo code terms, Inputs is the input array, and the number of elements in the input array should match the number of neurons in the input layer.

Step 2 - Finding The Output Of The Network

We have already seen how we calculate the output of a single neuron. As per our example, the outputs of neurons N1 and N2 act as the inputs of N3 and N4. Finding the output of a neural network involves calculating the outputs of all hidden layers and of the output layer; as we discussed earlier, a neural network can have a number of hidden layers. As per our example, let us calculate the net value of neuron N3. We know that N1 and N2 are connected to N3.

Net Value Of N3 = N3.Bias + (N1.Output * Weight Of Connection From N1 to N3) + (N2.Output * Weight Of Connection From N2 to N3)

Similarly, to calculate the net value of N4,

Net Value Of N4 = N4.Bias + (N1.Output * Weight Of Connection From N1 to N4) + (N2.Output * Weight Of Connection From N2 to N4)
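Steps 1 and 2 can be sketched in Python as follows. The sigmoid squashing function is one common choice of activation; the article has not named an activation function at this point, so treat that (and the example weight values) as assumptions of this sketch.

```python
import math

def sigmoid(x):
    """A common activation function, squashing any net value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, weights, biases):
    """Step 1: the outputs of the input-layer neurons are simply the inputs.
    Step 2: for each later layer, net = bias + sum(prev_output * weight),
    and the neuron's output is activation(net)."""
    outputs = list(inputs)  # Step 1: feed the inputs
    for layer_w, layer_b in zip(weights, biases):
        outputs = [sigmoid(b + sum(o * w for o, w in zip(outputs, row)))
                   for row, b in zip(layer_w, layer_b)]
    return outputs

# Net Value Of N3 = N3.Bias + N1.Output*w(N1->N3) + N2.Output*w(N2->N3),
# then squashed by the activation. Example with assumed weights:
example_w = [[[0.1, 0.2], [0.3, 0.4]],  # input layer -> N3, N4
             [[0.5, 0.6]]]              # N3, N4 -> N5
example_b = [[0.0, 0.0], [0.0]]
out = forward([0, 0], example_w, example_b)
```

Note that `forward([0, 0], ...)` will generally not return 0 before training; that mismatch is exactly the error measured in Step 3.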

Step 3 - Calculating The Error Or Delta

In this step, we calculate the error of the network. The error, or delta, is the difference between the expected output and the obtained output. For example, when we find the output value of the network for the first time, the output will most probably be wrong. We need to get 0 as the output for inputs A=0 and B=0, but the output may be some other value, like 0.55, depending on the random values assigned to the biases and connection weights of each neuron.

Now let us see how to calculate the error or delta of each neuron in all the layers.

- First, we calculate the error or delta of each neuron in the output layer.
- The delta values thus calculated are used to calculate the error or delta of the neurons in the previous layer (i.e., the last hidden layer).
- The delta values of all neurons in the last hidden layer are used to calculate the error or delta of all neurons in the previous layer (i.e., the second-last hidden layer).
- This process continues until we reach the first hidden layer (the delta of the input layer is not calculated).

In Step 2, we propagate values forward - starting from the first hidden layer and moving towards the output layer - to find the output. In Step 3, we start from the output layer and propagate the error values backward; hence, this neural network is called a backward propagation neural network.
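The article does not spell out the delta formulas themselves. For sigmoid neurons, one standard formulation (an assumption of this sketch, not quoted from BrainNet) is:

```python
def output_delta(expected, actual):
    """Delta of an output-layer neuron: the error (expected - actual)
    scaled by the sigmoid derivative actual * (1 - actual)."""
    return (expected - actual) * actual * (1.0 - actual)

def hidden_delta(actual, downstream_weights, downstream_deltas):
    """Delta of a hidden neuron: the weighted sum of the deltas of the
    neurons it connects to in the next layer, scaled the same way.
    This is the 'backward' step that gives the algorithm its name."""
    back_error = sum(w * d
                     for w, d in zip(downstream_weights, downstream_deltas))
    return back_error * actual * (1.0 - actual)
```

For the 0.55-instead-of-0 example above, `output_delta(0, 0.55)` yields a negative delta, signalling that the output neuron's free parameters must be pushed down.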

Step 4 - Adjusting The Weights And Bias

After calculating the delta of all neurons in all layers, we should correct the weights and biases with respect to the error or delta, so that the network produces a more accurate output next time. The connection weights and biases together are called free parameters. Remember that a neuron usually has more than one weight to update - because, as we already discussed, there is a weight associated with each connection to a neuron. Have a look at the model below. Please note that this model holds only the major interfaces and classes within the framework.
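Step 4 can be sketched as follows; the learning rate, which controls how far each correction moves, is an assumed parameter that the article has not introduced:

```python
LEARNING_RATE = 0.5  # assumed value for this sketch

def adjust(weights_into_neuron, bias, inputs, delta, lr=LEARNING_RATE):
    """Nudge every incoming weight of one neuron, and its bias, in the
    direction indicated by the neuron's delta. Returns updated values.
    There is one weight per incoming connection, so `weights_into_neuron`
    and `inputs` run in parallel."""
    new_weights = [w + lr * delta * x
                   for w, x in zip(weights_into_neuron, inputs)]
    new_bias = bias + lr * delta
    return new_weights, new_bias
```

Repeating Steps 1-4 over every row of the truth table, for many rounds, is what the article means by training.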

Fig: A Partial Model of the BrainNet Framework

As we discussed earlier, a neural network consists of various neuron layers, and each neuron layer has various neurons. A neuron has a strategy, which decides how it performs tasks like summation, activation, error calculation, bias adjustment, weight adjustment, etc. To summarize the UML diagram above:

- INeuron, INeuronStrategy, INeuralNetwork and INetworkFactory are interfaces.
- A neuron should implement the INeuron interface.
- A neural network should implement the INeuralNetwork interface.
- A neuron has a strategy, and a strategy should implement the INeuronStrategy interface. We have a concrete implementation of INeuronStrategy, called BackPropNeuronStrategy (for a backward propagation neural network).
- A neural network is initialized, and the connections between layers are made, by a neural network factory. A factory should implement the INetworkFactory interface. We have a concrete implementation of INetworkFactory, called BackPropNetworkFactory, for creating backward propagation neural networks.
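To make the neuron/strategy relationship concrete, here is an illustrative Python analogue of part of the model. The interface names mirror the diagram, but the bodies (including the sigmoid activation) are this sketch's assumptions, not the library's actual code:

```python
import math
from abc import ABC, abstractmethod

class INeuronStrategy(ABC):
    """Decides how a neuron performs summation, activation,
    error calculation, and weight/bias adjustment."""
    @abstractmethod
    def activation(self, net_value):
        ...

class BackPropNeuronStrategy(INeuronStrategy):
    """Concrete strategy for a backward propagation network,
    here using a sigmoid activation (an assumption)."""
    def activation(self, net_value):
        return 1.0 / (1.0 + math.exp(-net_value))

class Neuron:
    """Analogue of an INeuron implementation: the neuron holds a
    strategy object and delegates behaviour to it."""
    def __init__(self, strategy: INeuronStrategy):
        self.strategy = strategy
```

The point of the strategy pattern here is that swapping the strategy object changes how every neuron computes, without touching the network structure.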

The major interfaces in the model are briefed below.

INetworkFactory - An interface to define a neural network factory
INeuron - The interface for defining a neuron
INeuronStrategy - The interface for defining the strategy of a neuron
INeuralNetwork - The interface for defining a neural network

The major classes in the model are briefed below.

BackPropNeuronStrategy - A backward propagation neuron strategy; a concrete implementation of INeuronStrategy
NetworkHelper - A class to help the user initialize and train the network; it maintains a list of training data elements
NeuralNetwork - A generic neural network; a concrete implementation of INeuralNetwork
NeuralNetworkCollection - A collection of neural networks
Neuron - A concrete implementation of INeuron
NeuronCollection - A collection of INeurons
NeuronConnections - A hash table that keeps track of all neurons connected to/from a neuron, along with the related weights
