
An Introduction to Artificial Neural Networks | Kaushik Bose

An Introduction to
Artificial Neural Networks
TERM PAPER DOCUMENT
B.TECH 6TH SEMESTER
PAPER CSEB 605(P)

Kaushik Bose

13/06/2014

ROLL NO. 91/CSE/111029


TABLE OF CONTENTS
Introduction
    Basics
    Networks
    Why Neural Networks?
        Technical Viewpoint
        Biological Viewpoint
Biological Neural Networks
    Characteristics that ANNs Share with Biological Neural Systems
Artificial Neural Networks
    What Is a Neural Network
    Formal Definition of Artificial Neural Network
    Characterization of ANN
    A General Framework for ANN Models
    Neurons: The Basic Computational Entities
    The Perceptron and Linear Separability
        Perceptron for Classification
        Limitations of Perceptron
Artificial Neural Network Architecture or Topology
    Architecture Based on Number of Layers
        Single Layer Neural Network
        Multilayer Neural Network
    Architecture Based on the Connection Pattern
        Totally Connected Neural Network
        Partially Connected Neural Network
    Architecture Based on Information Flow
        Feed-Forward Neural Network
        Feed-Back or Recurrent Neural Network
ANN Learning Process
    Supervised Learning
    Reinforcement Learning
    Unsupervised Learning
    Back Propagation
    Learning Laws
        Hebb's Rule
        Hopfield Law
        The Delta Rule
        Kohonen's Learning Law
Benefits of Neural Networks
    Nonlinearity
    Input-Output Mapping
    Adaptivity
    Fault Tolerance
    Neurobiological Analogy
Applications of ANN
    Signal Processing
    Pattern Recognition
    Medicine
    Speech Production
    Speech Recognition
    Clustering/Categorization
    Prediction/Forecasting
    Optimization
Future Scope of Artificial Neural Networks
References

An Introduction to Artificial neural network | Kaushik Bose

INTRODUCTION
BASICS

The great majority of digital computers in use today are based on the
principle of using one very powerful processor through which all computations
are channelled. This is the so-called von Neumann architecture, after John von
Neumann, one of the pioneers of modern computing. The power of such a
processor can be measured in terms of its speed (the number of instructions it
can execute in a unit of time) and its complexity (the number of different
instructions it can execute).
Nowadays there is a field of computational science that brings together
methods for solving problems that cannot easily be described with a traditional
algorithmic approach. These methods, in one way or another, have their origin
in the more or less intelligent emulation of the behaviour of biological systems.
This way of computing is known as Artificial Intelligence; through different
methods it is capable of managing the imprecision and uncertainty that appear
when trying to solve problems related to the real world, offering robust
solutions that are easy to implement. One such technique is Artificial Neural
Networks (ANNs), inspired by the functioning of the human brain.

NETWORKS

One efficient way of solving complex problems is to follow the maxim
"divide and conquer": a complex system may be decomposed into simpler
elements in order to understand it, and simple elements may be gathered to
produce a complex system (Bar-Yam, 1997). Networks are one approach for
achieving this. There are many different types of networks, but they are all
characterized by the same components: a set of nodes and the connections
between them. The nodes can be seen as computational units: they receive
inputs and process them to obtain an output. This processing might be very
simple (such as summing the inputs) or quite complex (a node might itself
contain another network). The connections determine the information flow
between nodes. They can be unidirectional, when the information flows in only
one direction, or bidirectional, when it flows in either direction.
Networks are used to model a wide range of phenomena in physics,
computer science, biochemistry, ethology, mathematics, sociology, economics,
telecommunications, and many other areas. This is because many systems can
be seen as a network: proteins, computers, communities, etc.
WHY NEURAL NETWORKS?

There are categories of problems that cannot be formulated as an algorithm:
problems that depend on many subtle factors, for example the purchase price of a real
estate, which our brain can (approximately) estimate but which a computer, without
an algorithm, cannot. The question to be asked is therefore: how do we learn to
approach such problems?
So we need to learn, a capability computers obviously do not have. Humans
have a brain that can learn. Computers have processing units and memory; these
allow the computer to perform the most complex numerical calculations in a very
short time, but they are not adaptive.
The largest part of the brain works continuously, while the largest part of
the computer is only passive data storage. Thus the brain is parallel and performs
close to its theoretical maximum, from which the computer is orders of
magnitude away. Additionally, a computer is static; the brain, as a biological neural
network, can reorganize itself during its "lifespan" and is therefore able to learn, to
compensate for errors, and so forth. There are two basic reasons why we are
interested in building artificial neural networks (ANNs):
TECHNICAL VIEWPOINT

Some problems, such as character recognition or the prediction of future states
of a system, require massively parallel and adaptive processing.
BIOLOGICAL VIEWPOINT

Artificial neural networks can be used to replicate and simulate components of
the human (or animal) brain, thereby giving us insight into natural information
processing.


BIOLOGICAL NEURAL NETWORKS

A biological neuron has three types of components that are of particular
interest in understanding an artificial neuron:
Dendrites
Soma
Axon
The many dendrites receive signals from other neurons. The signals are electric
impulses that are transmitted across a synaptic gap by means of a chemical process.
The action of the chemical transmitter modifies the incoming signal (typically, by
scaling the frequency of the signals that are received) in a manner similar to the action
of the weights in an artificial neural network.
The soma, or cell body, sums the incoming signals. When sufficient input is
received, the cell fires; that is, it transmits a signal over its axon to other cells. It is
often supposed that a cell either fires or doesn't at any instant of time, so that
transmitted signals can be treated as binary. However, the frequency of firing varies
and can be viewed as a signal of either greater or lesser magnitude. This corresponds
to looking at discrete time steps and summing all activity (signals received or signals
sent) at a particular point in time.
The transmission of the signal from a particular neuron is accomplished by an
action potential resulting from differential concentrations of ions on either side of the
neuron's axon sheath (the brain's "white matter"). The ions most directly involved are
potassium, sodium, and chloride.

Figure 1: Biological Neuron



Figure 2: Biological Neuron to Neuron Connection

CHARACTERISTICS THAT ANNS SHARE WITH BIOLOGICAL NEURAL SYSTEMS

The processing element receives many signals.
Signals may be modified by a weight at the receiving synapse.
The processing element sums the weighted inputs.
Under appropriate circumstances (sufficient input), the neuron
transmits a single output.
The output from a particular neuron may go to many other neurons (the
axon branches).
Systems are fault tolerant.
Information processing is local (although other means of transmission,
such as the action of hormones, may suggest means of overall process
control).
Memory is distributed:
Long-term memory resides in the neurons' synapses or weights.
Short-term memory corresponds to the signals sent by the
neurons.
A synapse's strength may be modified by experience.
Neurotransmitters for synapses may be excitatory or inhibitory.


ARTIFICIAL NEURAL NETWORKS


WHAT IS A NEURAL NETWORK

Work on artificial neural networks, commonly referred to as neural networks,
has been motivated from its inception by the recognition that the human brain
computes in an entirely different way from the conventional digital computer. The
brain is a highly complex, nonlinear, and parallel computer (information-processing
system). It has the capability to organize its structural constituents, known as neurons,
so as to perform certain computations (e.g., pattern recognition, perception, and
motor control) many times faster than the fastest digital computer in existence today.
A neural network is a machine that is designed to model the way in which the
brain performs a particular task or function of interest; the network is usually
implemented by using electronic components or is simulated in software on a digital
computer. Our interest is confined largely to an important class of neural networks
that perform useful computations through a process of learning. To achieve good
performance, neural networks employ a massive interconnection of simple computing
cells referred to as "neurons" or "processing units."
FORMAL DEFINITION OF ARTIFICIAL NEURAL NETWORK

An artificial neural network is a massively parallel distributed processor made
up of simple processing units, which has a natural propensity for storing experiential
knowledge and making it available for use.
It resembles the brain in two respects:
Knowledge is acquired by the network from its environment through a learning
process.
Interneuron connection strengths, known as synaptic weights, are used to store
the acquired knowledge.
CHARACTERIZATION OF ANN

A neural network is characterized by:
Its pattern of connections between the neurons (called its architecture)
Its method of determining the weights on the connections (called its training,
or learning, algorithm)
Its activation function.


A GENERAL FRAMEWORK FOR ANN MODELS

A neural net consists of a large number of simple processing elements called
neurons, units, cells, or nodes. Each neuron is connected to other neurons by means
of directed communication links, each with an associated weight. The weights
represent information being used by the net to solve a problem. Neural nets can be
applied to a wide variety of problems, such as storing and recalling data or patterns,
classifying patterns, performing general mappings from input patterns to output
patterns, grouping similar patterns, or finding solutions to constrained optimization
problems.
There are many different ANN models but each model can be precisely specified
by the following eight major aspects:
A set of processing units
A state of activation for each unit
An output function for each unit
A pattern of connectivity among units or topology of the network
A propagation rule, or combining function, to propagate the activities of
the units through the network
An activation rule to update the activities of each unit by using the
current activation value and the inputs received from other units
An external environment that provides information to the network
and/or interacts with it.
A learning rule to modify the pattern of connectivity by using
information provided by the external environment.
NEURONS: THE BASIC COMPUTATIONAL ENTITIES

The basic unit of neural networks, the artificial neuron, simulates the four basic
functions of natural neurons: it receives inputs from other sources, combines them in
some way, performs a generally nonlinear operation on the result, and then outputs
the final result. Artificial neurons are much simpler than biological neurons. Here
we identify three basic elements of the artificial neuron model:
A set of synapses or connecting links, each of which is characterized by a weight
or strength of its own. Specifically, a signal xj at the input of synapse j connected
to neuron k is multiplied by the synaptic weight wj. Unlike a synapse in the brain,
the synaptic weight of an artificial neuron may lie in a range that includes
negative as well as positive values.


An adder for summing the input signals, weighted by the respective synapses
of the neuron; the operations described here constitute a linear combiner.
An activation function for limiting the amplitude of the output of a neuron. The
activation function is also referred to as a squashing function in that it squashes
(limits) the permissible amplitude range of the output signal to some finite
value.

Figure 3: Nonlinear model of an Artificial Neuron

The neuronal model of Figure 3 also includes an externally applied bias, denoted by b.
The bias b has the effect of increasing or lowering the net input of the activation
function, depending on whether it is positive or negative, respectively.
In mathematical terms, we may describe a neuron by writing the following pair of
equations:

u = w1x1 + w2x2 + ... + wmxm

and

y = φ(u + b)

where x1, x2, ..., xm are the input signals; w1, w2, ..., wm are the synaptic weights
of the neuron; u is the linear combiner output due to the input signals; b is the bias;
φ(.) is the activation function; and y is the output signal of the neuron. The use of
the bias b has the effect of applying an affine transformation to the output u of the
linear combiner in the model of Figure 3:

v = u + b

where v is called the induced local field of the neuron.
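The pair of equations above can be sketched directly in code. This is a minimal illustration only; the input values, weights, bias, and the choice of a sigmoid activation function below are assumptions made for the example.

```python
import math

def sigmoid(v):
    """A common choice of activation (squashing) function."""
    return 1.0 / (1.0 + math.exp(-v))

def neuron(x, w, b, phi=sigmoid):
    """Output of a single artificial neuron: y = phi(u + b),
    where u = w1*x1 + ... + wm*xm is the linear combiner output."""
    u = sum(wj * xj for wj, xj in zip(w, x))  # linear combiner
    v = u + b                                 # induced local field
    return phi(v)

# Three inputs with assumed weights and bias
y = neuron(x=[1.0, 0.5, -1.0], w=[0.4, 0.6, 0.2], b=0.1)
print(round(y, 4))  # 0.6457, i.e. sigmoid(0.5 + 0.1)
```

Any bounded function can stand in for the sigmoid here; the squashing behaviour is what limits the output amplitude.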
THE PERCEPTRON AND LINEAR SEPARABILITY

The perceptron was the first supervised artificial neural network model. In
1958 Frank Rosenblatt proposed the perceptron model, which can also be used as a
pattern classifier. The single-layer perceptron model consists of one layer of binary
input units and one layer of binary output units. There are no hidden layers, and
therefore there is only one layer of modifiable weights.
A perceptron uses a step function that returns +1 if the weighted sum of its
inputs is >= 0 and -1 otherwise.

φ(v) = +1 if v >= 0
       -1 if v < 0

Figure 4: The Single Layer Perceptron



PERCEPTRON FOR CLASSIFICATION

The perceptron is used for binary classification.
First train a perceptron for a classification task.
A perceptron can only model linearly separable classes.
When the two classes are not linearly separable, it may be desirable to obtain
a linear separator that minimizes the mean squared error.

Figure 5: Linearly Separable Boolean Function OR


LIMITATIONS OF PERCEPTRON

The perceptron can only model linearly separable functions: functions
whose values, when drawn in a two-dimensional graph, can be separated
into two parts by a single straight line.
The Boolean functions given below are linearly separable:
o AND
o OR
o COMPLEMENT
It cannot model the XOR function, as it is not linearly separable. When the
two classes are not linearly separable, it may be desirable to obtain a
linear separator that minimizes the mean squared error.
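The behaviour described above is easy to demonstrate. The sketch below trains a perceptron with the classic Rosenblatt update rule; the learning rate, epoch count, and bipolar (+1/-1) target encoding are assumptions for the example. Training converges on OR but can never classify all four XOR points correctly.

```python
def predict(w, b, x):
    """Step activation: +1 if the weighted sum plus bias is >= 0, else -1."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

def train_perceptron(data, epochs=20, lr=0.1):
    """Rosenblatt's rule: on a misclassified sample with target t,
    update w <- w + lr * t * x and b <- b + lr * t."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, t in data:
            if predict(w, b, x) != t:
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

# OR is linearly separable, so the perceptron converges
OR = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
print([predict(w, b, x) for x, _ in OR])  # [-1, 1, 1, 1], matching the targets

# XOR is not linearly separable: no weights classify all four points
XOR = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
w, b = train_perceptron(XOR)
print(all(predict(w, b, x) == t for x, t in XOR))  # False
```

The XOR failure is not a matter of training longer: no choice of two weights and a bias can put (0,1) and (1,0) on one side of a line and (0,0) and (1,1) on the other.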


ARTIFICIAL NEURAL NETWORK ARCHITECTURE OR TOPOLOGY

The expressions architecture, structure, and topology of an artificial neural
network all refer to the way in which the computational neurons are organized in the
network: in particular, how the nodes are connected and how information is
transmitted through the network. The architecture can be classified in terms of three
aspects:
Number of levels or layers
Connection pattern
Information flow
ARCHITECTURE BASED ON NUMBER OF LAYERS
SINGLE LAYER NEURAL NETWORK

This is the simplest form of layered neural network. Here an input layer of
source nodes (input nodes) projects onto an output layer of neurons, or vice versa.

Figure 6: Single layer Neural Network

MULTILAYER NEURAL NETWORK

A multilayer neural network is a network with one or more layers (or levels) of
nodes (the so-called hidden units) between the input layer and the output layer.
Multilayer neural networks can solve more complicated problems than single-layer
neural networks can, but training may be more difficult.

An Introduction to Artificial neural network | Kaushik Bose

Figure 7: Multilayer (Three-layer) Neural Network

Figure 8: Multilayer Neural Network



ARCHITECTURE BASED ON THE CONNECTION PATTERN


TOTALLY CONNECTED NEURAL NETWORK

A neural network is said to be totally connected when every output from one
layer goes to each of the nodes in the following layer. In this case there will be more
connections than nodes.

Figure 9: Totally connected Neural Network


PARTIALLY CONNECTED NEURAL NETWORK

A neural network is said to be partially connected if a neuron of the first layer
does not have to be connected to all neurons of the second layer, and so on.

Figure 10: Partially connected Neural Network



ARCHITECTURE BASED ON INFORMATION FLOW


FEED-FORWARD NEURAL NETWORK

In a feed-forward artificial neural network a unit only sends its output to units
from which it does not receive an input, directly or indirectly (via other units). In other
words, there are no feedback loops. A feed-forward ANN arranged in layers, where the
units are connected only to the units situated in the next consecutive layer, is called a
strictly feed-forward ANN.

Figure 11: A strictly Feed-Forward Neural Network
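A strictly feed-forward pass can be sketched as a chain of per-layer computations. The 2-3-1 layer sizes, the weight values, and the sigmoid units below are arbitrary assumptions for illustration.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(x, W, b):
    """One layer: each output unit computes sigmoid(w . x + b)."""
    return [sigmoid(sum(wij * xj for wij, xj in zip(wi, x)) + bi)
            for wi, bi in zip(W, b)]

def feed_forward(x, layers):
    """Propagate the input strictly forward, layer by layer; no feedback loops."""
    for W, b in layers:
        x = layer(x, W, b)
    return x

# Assumed weights for a 2-3-1 network (2 inputs, 3 hidden units, 1 output)
hidden = ([[0.2, -0.1], [0.4, 0.3], [-0.5, 0.2]], [0.1, 0.0, -0.1])
output = ([[0.3, -0.2, 0.5]], [0.05])
print(feed_forward([1.0, 0.5], [hidden, output]))
```

Because each layer's output feeds only the next layer, the whole computation is a single left-to-right sweep, which is what distinguishes this architecture from the recurrent case below.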


FEED-BACK OR RECURRENT NEURAL NETWORK

A neural network is said to be feed-back or recurrent if there is at least one
feedback loop.

Figure 12: A Feed-back Neural Network



ANN LEARNING PROCESS

Learning is a process by which the free parameters of a neural network are
adapted through a process of stimulation by the environment in which the network is
embedded. The type of learning is determined by the manner in which the parameter
changes take place.
This definition of the learning process implies the following sequence of events:
The neural network is stimulated by an environment.
The neural network undergoes changes in its free parameters as a result
of this stimulation.
The neural network responds in a new way to the environment because
of the changes that have occurred in its internal structure.
By learning we mean the procedure for modifying the weights and biases of a
network. The purpose of a learning rule is to train the network to perform some task.
The learning process falls into four broad categories.
SUPERVISED LEARNING

In supervised learning the external environment also provides a desired output
for each one of the training input vectors, and it is said that the external environment
acts as a "teacher".
REINFORCEMENT LEARNING

A special case of supervised learning is reinforcement learning, where the
external environment only provides the information that the network output is "good"
or "bad", instead of giving the correct output. In the case of reinforcement learning it
is said that the external environment acts as a "critic".
UNSUPERVISED LEARNING

In unsupervised learning the external environment neither provides the
desired network output nor classifies the output as good or bad. Using the correlations
of the input vectors, the learning rule changes the network weights in order to group
the input vectors into "clusters", such that similar input vectors will produce similar
network outputs since they will belong to the same cluster. Ideally, the learning rule
finds the number of clusters and their respective centres, if they exist, for the training
data. This learning method is also called self-organization.

BACK PROPAGATION

This method has proven highly successful in training multilayer neural nets.
The network is not just given reinforcement for how it is doing on a task: information
about errors is also filtered back through the system and used to adjust the
connections between the layers, thus improving performance. It is a form of
supervised learning.
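The idea can be sketched numerically for a tiny, assumed 2-2-1 network of sigmoid units trained on squared error; the weights, learning rate, and training target are invented for the example. The error at the output is filtered back through the hidden layer, and repeated updates shrink it.

```python
import math, random

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, W1, b1, W2, b2):
    """Forward pass: one hidden layer of sigmoid units, one sigmoid output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def backprop_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One backpropagation step on squared error 0.5*(y - t)^2.
    Returns the updated output bias; other parameters update in place."""
    h, y = forward(x, W1, b1, W2, b2)
    d_out = (y - t) * y * (1 - y)            # output delta (sigmoid derivative)
    d_hid = [d_out * w * hi * (1 - hi)       # error filtered back through W2
             for w, hi in zip(W2, h)]
    for j in range(len(W2)):                 # output-layer update
        W2[j] -= lr * d_out * h[j]
    for i in range(len(W1)):                 # hidden-layer update
        for j in range(len(x)):
            W1[i][j] -= lr * d_hid[i] * x[j]
        b1[i] -= lr * d_hid[i]
    return b2 - lr * d_out

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

x, t = [1.0, 0.0], 1.0
_, y_before = forward(x, W1, b1, W2, b2)
for _ in range(100):
    b2 = backprop_step(x, t, W1, b1, W2, b2)
_, y_after = forward(x, W1, b1, W2, b2)
print(abs(t - y_after) < abs(t - y_before))  # True: the error shrinks
```

The key point is the `d_hid` line: a hidden unit's share of the blame is its contribution (via W2) to the output error, scaled by its own sigmoid derivative.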

LEARNING LAWS
HEBB'S RULE:

The first, and undoubtedly the best known, learning rule was introduced by
Donald Hebb; the description appeared in his 1949 book The Organization of
Behavior. His basic rule is: if a neuron receives an input from another neuron, and if
both are highly active (mathematically, have the same sign), the weight between the
neurons should be strengthened.
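In its simplest mathematical form the rule is delta wj = lr * xj * y: a weight grows when input and output carry the same sign. A minimal sketch, where the learning rate and the bipolar values are assumptions:

```python
def hebb_update(w, x, y, lr=0.1):
    """Hebb's rule: delta w_j = lr * x_j * y, so a weight is strengthened
    when input x_j and output y are active together (same sign)."""
    return [wj + lr * xj * y for wj, xj in zip(w, x)]

# Bipolar input, active output: weights move toward the input pattern
w = hebb_update(w=[0.0, 0.0, 0.0], x=[1, -1, 1], y=1)
print(w)  # [0.1, -0.1, 0.1]
```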
HOPFIELD LAW:

This law is similar to Hebb's Rule with the exception that it specifies the
magnitude of the strengthening or weakening. It states: "If the desired output and the
input are both active or both inactive, increment the connection weight by the learning
rate; otherwise decrement the weight by the learning rate." (Most learning functions
have some provision for a learning rate, or a learning constant. Usually this term is
positive and between zero and one.)
THE DELTA RULE:

This rule is a further variation of Hebb's Rule. It is one of the most commonly
used. This rule is based on the simple idea of continuously modifying the strengths of
the input connections to reduce the difference (the delta) between the desired output
value and the actual output of a processing element. This rule changes the synaptic
weights in the way that minimizes the mean squared error of the network. This rule is
also referred to as the Widrow-Hoff Learning Rule and the Least Mean Square (LMS)
Learning Rule.
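A single delta-rule (Widrow-Hoff) update can be sketched as follows; the weights, input, and learning rate are invented values for illustration.

```python
def delta_update(w, b, x, target, output, lr=0.1):
    """LMS / delta rule: delta w_j = lr * (target - output) * x_j.
    Repeated over the training set, this minimizes the mean squared error."""
    err = target - output
    return [wj + lr * err * xj for wj, xj in zip(w, x)], b + lr * err

# One update step with assumed values; err = 1.0 - 0.2 = 0.8
w, b = delta_update(w=[0.5, -0.3], b=0.0, x=[1.0, 2.0], target=1.0, output=0.2)
print([round(v, 2) for v in w], round(b, 2))  # [0.58, -0.14] 0.08
```

Note how the correction to each weight scales with the corresponding input: inputs that contributed more to the wrong answer receive larger adjustments.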
KOHONEN'S LEARNING LAW:

This procedure, developed by Teuvo Kohonen, was inspired by learning in
biological systems. In this procedure, the neurons compete for the opportunity to
learn, that is, to update their weights. The processing neuron with the largest output is
declared the winner and has the capability of inhibiting its competitors as well as
exciting its neighbours. Only the winner is permitted an output, and only the winner
plus its neighbours are allowed to update their connection weights.
The Kohonen rule does not require a desired output. Therefore it is
implemented in unsupervised methods of learning.
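A minimal competitive-learning sketch in the spirit of the Kohonen rule. One common formulation declares the unit whose weight vector lies closest to the input the winner; neighbourhood updates are omitted here, and all numeric values are assumptions.

```python
def kohonen_step(weights, x, lr=0.5):
    """Winner-take-all step: the unit whose weight vector is closest to the
    input wins, and only the winner moves its weights toward the input."""
    dists = [sum((wj - xj) ** 2 for wj, xj in zip(w, x)) for w in weights]
    winner = dists.index(min(dists))
    weights[winner] = [wj + lr * (xj - wj)
                       for wj, xj in zip(weights[winner], x)]
    return winner

# Two competing units; inputs cluster around (0, 0) and (1, 1)
weights = [[0.0, 0.0], [1.0, 1.0]]
for x in [[0.1, 0.2], [0.9, 0.8], [0.0, 0.1], [1.0, 0.9]]:
    kohonen_step(weights, x)
print(weights[0][0] < 0.5 < weights[1][0])  # True: each unit claims a cluster
```

With no desired output anywhere in the update, the two units self-organize so that each one comes to represent one cluster of inputs, which is exactly the unsupervised behaviour described above.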

BENEFITS OF NEURAL NETWORKS

It is apparent that a neural network derives its computing power through, first,
its massively parallel distributed structure and, second, its ability to learn and
therefore generalize. Generalization refers to the neural network producing
reasonable outputs for inputs not encountered during training (learning). These two
information-processing capabilities make it possible for neural networks to solve
complex (large-scale) problems that are currently intractable.
NONLINEARITY

An artificial neuron can be linear or nonlinear. A neural network, made up of an
interconnection of nonlinear neurons, is itself nonlinear.
INPUT-OUTPUT MAPPING

A popular paradigm of learning called learning with a teacher, or supervised
learning, involves modification of the synaptic weights of a neural network by applying
a set of labelled training samples or task examples. Each example consists of a unique
input signal and a corresponding desired response. The network is presented with an
example picked at random from the set, and the synaptic weights (free parameters) of
the network are modified to minimize the difference between the desired response
and the actual response of the network produced by the input signal in accordance
with an appropriate statistical criterion. The training of the network is repeated for
many examples in the set until the network reaches a steady state where there are no
further significant changes in the synaptic weights. Thus the network learns from the
examples by constructing an input-output mapping for the problem at hand.

ADAPTIVITY

Neural networks have a built-in capability to adapt their synaptic weights to
changes in the surrounding environment. In particular, a neural network trained to
operate in a specific environment can easily be retrained to deal with minor changes
in the operating environmental conditions. Moreover, when it is operating in a
non-stationary environment (i.e., one whose statistics change with time), a neural
network can be designed to change its synaptic weights in real time.
FAULT TOLERANCE

A neural network, implemented in hardware form, has the potential to be
inherently fault tolerant, or capable of robust computation, in the sense that its
performance degrades gracefully under adverse operating conditions. For example, if
a neuron or its connecting links are damaged, recall of a stored pattern is impaired in
quality. However, due to the distributed nature of information stored in the network,
the damage has to be extensive before the overall response of the network is
degraded seriously.
NEUROBIOLOGICAL ANALOGY

The design of a neural network is motivated by analogy with the brain, which is
living proof that fault-tolerant parallel processing is not only physically possible but
also fast and powerful. Neurobiologists look to (artificial) neural networks as a
research tool for the interpretation of neurobiological phenomena. On the other hand,
engineers look to neurobiology for new ideas to solve problems more complex than
those based on conventional hard-wired design techniques.
APPLICATIONS OF ANN

SIGNAL PROCESSING

There are many applications of neural networks in the general area of signal
processing. One of the first commercial applications was (and still is) to suppress noise
on a telephone line. The neural net used for this purpose is a form of ADALINE.
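The idea behind such noise suppression can be sketched with a one-weight adaptive element trained by the LMS rule (this simplified setup, with its signal, reference, and step size, is an illustrative assumption, not the actual telephone-line system): the primary input carries signal plus noise, a reference input carries a correlated copy of the noise, and the element learns to subtract the noise out.

```python
import math

def cancel(primary, reference, lr=0.02):
    """Adaptive noise cancelling: learn to subtract the reference noise."""
    w = 0.0
    cleaned = []
    for p, r in zip(primary, reference):
        noise_est = w * r
        e = p - noise_est       # the error IS the cleaned signal estimate
        w += lr * e * r         # LMS weight update
        cleaned.append(e)
    return cleaned

t = [0.01 * k for k in range(2000)]
signal = [math.sin(x) for x in t]                  # the wanted signal
noise = [0.5 * math.sin(7.3 * x) for x in t]       # interfering noise
primary = [s + n for s, n in zip(signal, noise)]
cleaned = cancel(primary, noise)

# Once adapted, the output sits much closer to the clean signal than the
# noisy primary input did.
err = sum((c - s) ** 2 for c, s in zip(cleaned[-500:], signal[-500:])) / 500
print(err)
```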
PATTERN RECOGNITION

Many interesting problems fall into the general area of pattern recognition. One
specific area in which many neural network applications have been developed is the
automatic recognition of handwritten characters (digits or letters).


MEDICINE

One of many examples of the application of neural networks to medicine was
developed by Anderson et al. in the mid-1980s [Anderson, 1986; Anderson, Golden,
and Murphy, 1986]. It has been called the "Instant Physician" [Hecht-Nielsen, 1990].
The idea behind this application is to train an autoassociative memory neural network
(the "Brain-State-in-a-Box") to store a large number of medical records, each of which
includes information on symptoms, diagnosis, and treatment for a particular case.
After training, the net can be presented with input consisting of a set of symptoms; it
will then find the full stored pattern that represents the "best" diagnosis and
treatment.
SPEECH PRODUCTION

Learning to read English text aloud is a difficult task, because the correct
phonetic pronunciation of a letter depends on the context in which the letter appears.
SPEECH RECOGNITION

Progress is being made in the difficult area of speaker-independent recognition
of speech. Several types of neural networks have been used for speech recognition,
including multilayer nets.
CLUSTERING/CATEGORIZATION

In clustering, there are no training data with known class labels. A clustering
technique explores the similarity between patterns and places similar patterns in a
cluster.
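One way this works in a neural setting is competitive learning, sketched here for one-dimensional data (the two cluster units, learning rate, and data are illustrative assumptions): no labels are given; for each pattern the nearest unit "wins" and its weight moves toward that pattern, so similar patterns end up grouped under the same unit.

```python
import random

def competitive(patterns, k=2, lr=0.1, epochs=50, seed=0):
    """Unsupervised clustering by winner-take-all competitive learning."""
    rng = random.Random(seed)
    centers = rng.sample(patterns, k)   # initialize units on data points
    for _ in range(epochs):
        for x in patterns:
            # the unit closest to the pattern wins ...
            i = min(range(k), key=lambda j: abs(centers[j] - x))
            # ... and its weight moves toward the pattern
            centers[i] += lr * (x - centers[i])
    return sorted(centers)

# Two natural groups, around 1.0 and around 5.0 - no labels supplied.
data = [0.9, 1.1, 1.0, 4.8, 5.2, 5.0]
centers = competitive(data)
print(centers)   # the two units settle near the two groups
```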
PREDICTION/FORECASTING

Artificial neural networks are used for prediction and forecasting tasks such as
stock-market prediction and weather forecasting.
OPTIMIZATION

A wide variety of problems in mathematics, statistics, engineering, science,
etc. can be posed as optimization problems. The goal of an optimization algorithm is
to find a solution satisfying a set of constraints such that an objective function is
minimized or maximized. ANNs are used to solve such problems.


FUTURE SCOPE OF ARTIFICIAL NEURAL NETWORKS

A great deal of research is going on in neural networks worldwide:

Basic research into networks that can respond to temporally varying
patterns.

Research into techniques for implementing neural networks directly in
silicon. One chip is already commercially available, but it does not include
adaptation. Edinburgh University has implemented a neural network chip and is
working on the learning problem.

There is particular interest in sensory and sensing applications: nets that learn
to interpret real-world sensors and learn about their environment.

REFERENCES

Neural Networks: A Comprehensive Foundation, Simon Haykin
Fundamentals of Neural Networks, Laurene Fausett
Artificial Neural Networks: A Tutorial, Anil K. Jain
A Brief Introduction to Neural Networks, David Kriesel
Artificial Neural Networks, Girish Kumar Jha
Artificial Neural Networks for Beginners, Carlos Gershenson
Artificial Neural Networks Technology, Dave Anderson and George McNeill
http://en.wikipedia.org/wiki/Artificial_neural_network
