
Artificial Neural Network

INDEX

1. Introduction
2. Definition
3. Structure Of Human Brain
4. Neurons
5. Basics Of ANN Model
6. Artificial Neural Networks
   6.1 How ANN Differs From Conventional Computers
   6.2 ANN Vs Von Neumann Computers
7. Perceptron
8. Learning Laws
   8.1 Hebb's Rule
   8.2 Hopfield Rule
   8.3 The Delta Rule
   8.4 The Gradient Descent Rule
   8.5 Kohonen's Learning Rule
9. Basic Structure Of ANNs
10. Network Architectures
    10.1 Single Layer Feed Forward ANN
    10.2 Multilayer Feed Forward ANN
    10.3 Recurrent ANN
11. Learning Of ANNs
    11.1 Learning With A Teacher
    11.2 Learning Without A Teacher
    11.3 Learning Tasks
12. Control
13. Adaptation
14. Generalization
15. Probabilistic ANN
16. Advantages
17. Limitations
18. Applications
19. References

Introduction
Since time immemorial, the one thing that has set human beings apart from the rest of the animal kingdom is the brain. The most intelligent device on earth, the human brain is the driving force behind an ever-progressing species that dives deeper into technology and development with each passing day.

Driven by his inquisitive nature, man tried to build machines that could process jobs intelligently and take decisions according to the instructions fed to them. What resulted was the machine that revolutionized the whole world: the computer (more technically speaking, the Von Neumann computer). Even though it could perform millions of calculations every second, display incredible graphics and three-dimensional animations, and play audio and video, it made the same mistake every time; practice could not make it perfect. So the quest for a more intelligent machine continued. This research led to more powerful processors with high-tech equipment attached to them, supercomputers with the capability to handle more than one task at a time, and finally networks with resource-sharing facilities. But the problem of designing machines capable of intelligent self-learning still loomed large in front of mankind. Then the idea of imitating the human brain struck the designers, who began research on one of the technologies that will change the way computers work: Artificial Neural Networks.

Definition
A Neural Network is a specialized branch of Artificial Intelligence. In general, Neural Networks are simply mathematical techniques designed to accomplish a variety of tasks. A Neural Network uses a set of processing elements (or nodes) loosely analogous to neurons in the brain (hence the name, neural networks). These nodes are interconnected in a network that can then identify patterns in data as it is exposed to the data. In a sense, the network learns from experience just as people do. Neural networks can be configured in various arrangements to perform a range of tasks including pattern recognition, data mining, classification, and process modeling.

Structure of Human Brain


As stated earlier, a Neural Network is very similar to the biological structure of the human brain. The functional structure of the brain is shown below.

[Figure: Functions of the Brain. Left part: sequential functions (rules, concepts, calculations); expert systems; learns by rules. Right part: parallel functions (images, pictures, control); neural networks; learns by experience.]

As shown in the figure, the left part of the brain deals with rules, concepts and calculations. It follows "rule-based learning" and hence solves problems by passing them through rules. It has sequential pairs of neurons. This part of the brain is therefore similar to expert systems. The right part of the brain deals with functions, images, pictures and control. It follows "parallel learning" and hence learns through experience. It has parallel pairs of neurons. This part of the brain is therefore similar to a neural network.

Neurons
The conceptual constructs of a neural network stemmed from our early understanding of the human brain. The brain consists of billions and billions of interconnected neurons (some experts estimate upwards of 10^11 neurons in the human brain). The fundamental building blocks of this massively parallel cellular structure are really quite simple when studied in isolation. A neuron receives incoming electrochemical signals from its dendrites and collects these signals at the neuron nucleus. The neuron nucleus has an internal threshold that determines whether the neuron itself fires in response to the incoming information. If the combined incoming signals exceed this threshold, the neuron fires and an electrochemical signal is sent to all neurons connected to the firing neuron on its output connections, or axons. Otherwise, the incoming signals are ignored and the neuron remains dormant.

There are many types of neurons, or cells. From a neuron body (soma), many fine branching fibers, called dendrites, protrude. The dendrites conduct signals to the soma, or cell body. Extending from a neuron's soma, at a point called the axon hillock (initial segment), is a long fiber called an axon, which generally splits into the smaller branches of the axonal arborization. The tips of these axon branches (also called nerve terminals, end bulbs, or telodendria) impinge either upon the dendrites, somas or axons of other neurons, or upon effectors.

The axon-dendrite (axon-soma, axon-axon) contact between end bulbs and the cell they impinge upon is called a synapse. The signal flow in the neuron is (with some exceptions, when the flow can be bidirectional) from the dendrites through the soma, converging at the axon hillock and then down the axon to the end bulbs. A neuron typically has many dendrites but only a single axon. Some neurons lack axons, such as the amacrine cells.
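To make this fire-or-stay-dormant behaviour concrete, here is a minimal sketch of a single threshold neuron in Python; the signal values, synapse strengths and threshold below are illustrative, not biological measurements.

def neuron_output(incoming_signals, synapse_strengths, threshold):
    """Sum the weighted inputs at the 'nucleus'; fire only above threshold."""
    combined = sum(s * w for s, w in zip(incoming_signals, synapse_strengths))
    return 1 if combined > threshold else 0  # 1 = fire, 0 = stay dormant

# Three dendrites delivering signals of varying strength (illustrative values).
signals = [0.9, 0.3, 0.7]
strengths = [0.5, 0.8, 0.2]
print(neuron_output(signals, strengths, threshold=0.6))  # prints 1 (fires)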

Basics of Artificial Neural Models


The human brain is made up of computing elements, called neurons, coupled with sensory receptors (affecters) and effectors. The average human brain, roughly three pounds in weight and 90 cubic inches in volume, is estimated to contain about 100 billion cells of various types. A neuron is a special cell that conducts an electrical signal, and there are about 10 billion neurons in the human brain. The remaining 90 billion cells are called glial or glue cells, and these serve as support cells for the neurons. Each neuron is about one-hundredth the size of the period at the end of this sentence. Neurons interact through contacts called synapses. Each synapse spans a gap about a millionth of an inch wide. On average, each neuron receives signals via thousands of synapses.

The motivation for artificial neural network (ANN) research is the belief that a human's capabilities, particularly in real-time visual perception, speech understanding, sensory information processing and adaptivity, as well as intelligent decision making in general, come from the organizational and computational principles exhibited in the highly complex neural network of the human brain. The expectation of faster and better solutions challenges us to build machines using the same computational and organizational principles, simplified and abstracted from the neurobiology of the brain.

Artificial Neural Network Model

Artificial Neural Networks


Artificial neural networks (ANNs), also called parallel distributed processing systems (PDPs) and connectionist systems, are intended to model the organizational principles of the central nervous system, in the hope that the biologically inspired computing capabilities of the ANN will allow cognitive and sensory tasks to be performed more easily and more satisfactorily than with conventional serial processors. Because of the limitations of serial computers, much effort has been devoted to the development of parallel processing architectures, in which the function of a single processor is at a level comparable to that of a neuron. If the interconnections between such simple fine-grained processors are made adaptive, a neural network results.

ANN structures, broadly classified as recurrent (involving feedback) or non-recurrent (without feedback), have numerous processing elements (also dubbed neurons, neurodes, units or cells) and connections (forward and backward interlayer connections between neurons in different layers, lateral connections between neurons in the same layer, and self-connections between the input and output of the same neuron). Neural networks not only have differing structures or topologies, but are also distinguished from one another by the way they learn, the manner in which computations are performed (rule-based, fuzzy, even nonalgorithmic), and the component characteristics of the neurons (or the input/output description of the synaptic dynamics). These networks are required to perform significant processing tasks through collective local interactions that produce global properties. Since the components, the connections and their packaging under stringent spatial constraints make the system large-scale, the role of graph theory, algorithms, and neuroscience is pervasive.

How do Neural Networks differ from Conventional Computers?


Neural networks perform computation in a very different way from conventional computers, where a single central processing unit sequentially dictates every piece of the action. Neural networks are built from a large number of very simple processing elements that individually deal with pieces of a big problem. A processing element (PE) simply multiplies its inputs by a set of weights and transforms the result into an output value (table lookup). The principles of neural computation come from the massively parallel processing and from the adaptive nature of the parameters (weights) that interconnect the PEs.

Similarities and differences between a neural net and a von Neumann computer

Neural net:
- Trained (learning by example) by adjusting the connection strengths, thresholds, and structure
- Memory and processing elements are collected together
- Parallel (discrete or continuous), digital, asynchronous
- May be fault-tolerant because of distributed representation and large-scale redundancy
- Self-organization during learning
- Knowledge stored is adaptable; information is stored in the interconnections between neurons
- Processing is anarchic
- Cycle time, which governs processing speed, occurs in the millisecond range

Von Neumann computer:
- Programmed with instructions (if-then analysis based on logic)
- Memory and processing are separate
- Sequential or serial, synchronous (with a clock)
- Not fault-tolerant
- Software dependent
- Knowledge is stored in an addressed memory location and is strictly replaceable
- Processing is autocratic
- Cycle time corresponds to processing one step of a program in the CPU during one clock cycle and occurs in the nanosecond range

Perceptron
At the heart of every neural network is what is referred to as the perceptron (sometimes called a processing element or neural node), which is analogous to the neuron nucleus in the brain. The second layer, that is, the very first hidden layer, is known as the perceptron layer. As was the case in the brain, the operation of the perceptron is very simple; however, also as in the brain, when all the connected neurons operate as a collective they can provide some very powerful learning capacity. Input signals are applied to the node via input connections (dendrites in the case of the brain). The connections have strengths, which change as the system learns. In neural networks the strengths of the connections are referred to as weights. Weights can either excite or inhibit the transmission of the incoming signal. Mathematically, incoming signal values are multiplied by the values of those particular weights. At the perceptron, all weighted inputs are summed. This sum value is then passed to a scaling function. The selection of the scaling function is part of the neural network design. The structure of the perceptron (neural node) is shown below.

Perceptron
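In code, the operation just described can be sketched in a few lines: inputs are multiplied by weights, summed, and passed through a scaling function. A sigmoid is used here as one common choice of scaling function, and the input and weight values are illustrative.

import math

def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, passed through a scaling function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid scaling function

print(perceptron([1.0, 0.5], [0.4, -0.6]))  # 0.4 - 0.3 = 0.1 -> about 0.525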

Learning Laws
Many learning laws are in common use. Most of these are some variation of the best known and oldest learning law, Hebb's rule. Research into different learning functions continues as new ideas routinely show up in trade publications. Some researchers have the modeling of biological learning as their main objective. Others are experimenting with adaptations of their perceptions of how nature handles learning. Either way, man's understanding of how neural processing actually works is very limited. Learning is certainly more complex than the simplification represented by the learning laws currently developed. A few of the major laws are presented as examples.

Hebb's Rule
The first, and undoubtedly the best known, learning rule was introduced by Donald Hebb. The description appeared in his book The Organization of Behavior in 1949. His basic rule is: if a neuron receives an input from another neuron, and if both are highly active (mathematically have the same sign), the weight between the neurons should be strengthened.

Hopfield Law
It is similar to Hebb's rule, with the exception that it specifies the magnitude of the strengthening or weakening. It states: if the desired output and the input are both active or both inactive, increment the connection weight by the learning rate; otherwise decrement the weight by the learning rate.
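Expressed as update rules, Hebb's rule strengthens a weight in proportion to the joint activity of the two connected neurons, while the Hopfield variant steps the weight by exactly the learning rate. The sketch below is one minimal reading of the two rules; the learning rate of 0.1 is illustrative.

def hebb_update(w, x, y, rate=0.1):
    """Hebb's rule: strengthen the weight when both neurons are active."""
    return w + rate * x * y

def hopfield_update(w, x, desired, rate=0.1):
    """Hopfield variant: step the weight by exactly the learning rate."""
    if (x > 0) == (desired > 0):   # both active, or both inactive
        return w + rate            # increment by the learning rate
    return w - rate                # otherwise decrement by the learning rate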

The Delta Rule


This rule is a further variation of Hebb's rule, and is one of the most commonly used. It is based on the simple idea of continuously modifying the strengths of the input connections to reduce the difference (the delta) between the desired output value and the actual output of a processing element. The rule changes the synaptic weights in the way that minimizes the mean squared error of the network.

This rule is also referred to as the Widrow-Hoff learning rule and the least mean square (LMS) learning rule. The way the delta rule works is that the delta error in the output layer is transformed by the derivative of the transfer function and is then used in the previous neural layer to adjust the input connection weights. In other words, the error is back-propagated into previous layers one layer at a time. The process of back-propagating the network errors continues until the first layer is reached. The network type called feedforward back-propagation derives its name from this method of computing the error term. When using the delta rule, it is important to ensure that the input data set is well randomized. A well-ordered or structured presentation of the training set can lead to a network which cannot converge to the desired accuracy. If that happens, the network is incapable of learning the problem.

The Gradient Descent Rule
This rule is similar to the delta rule in that the derivative of the transfer function is still used to modify the delta error before it is applied to the connection weights. Here, however, an additional proportional constant tied to the learning rate is appended to the final modifying factor acting upon the weights. This rule is commonly used, even though it converges to a point of stability very slowly. It has been shown that different learning rates for different layers of a network help the learning process converge faster. In these tests, the learning rates for the layers close to the output were set lower than those for the layers near the input. This is especially important for applications where the input data is not derived from a strong underlying model.
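A minimal sketch of one such weight update follows, assuming a sigmoid transfer function so that its derivative is y(1 - y); the learning rate plays the role of the proportional constant mentioned under the gradient descent rule, and the variable names and values are illustrative.

import numpy as np

def delta_rule_update(weights, x, desired, rate=0.5):
    """One delta-rule step: reduce (desired - actual), scaled by the
    derivative of the sigmoid transfer function."""
    y = 1.0 / (1.0 + np.exp(-np.dot(weights, x)))  # actual output
    delta = (desired - y) * y * (1.0 - y)          # error times derivative
    return weights + rate * delta * x              # adjust input weights

w = np.array([0.2, -0.4])
w = delta_rule_update(w, np.array([1.0, 0.5]), desired=1.0)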

Kohonen's Learning Law
This procedure, developed by Teuvo Kohonen, was inspired by learning in biological systems. In this procedure, the processing elements compete for the opportunity to learn, or update their weights. The processing element with the largest output is declared the winner and has the capability of inhibiting its competitors as well as exciting its neighbors. Only the winner is permitted an output, and only the winner plus its neighbors are allowed to adjust their connection weights. Further, the size of the neighborhood can vary during the training period. The usual paradigm is to start with a larger definition of the neighborhood and narrow it in as the training process proceeds. Because the winning element is defined as the one that has the closest match to the input pattern, Kohonen networks model the distribution of the data and are sometimes referred to as self-organizing maps or self-organizing topologies.
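The competitive step described above can be sketched as follows, with the winner chosen as the unit whose weight vector most closely matches the input. The one-dimensional row of units, the learning rate and the fixed neighborhood size are illustrative simplifications of a full self-organizing map.

import numpy as np

def kohonen_step(weights, x, rate=0.3, neighborhood=1):
    """One competitive-learning step on a 1-D row of units.
    weights: (units, features) array; x: input vector.
    Only the best-matching unit and its neighbors are updated."""
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    lo = max(0, winner - neighborhood)
    hi = min(len(weights), winner + neighborhood + 1)
    weights[lo:hi] += rate * (x - weights[lo:hi])  # move toward the input
    return weights

units = np.random.rand(10, 2)  # ten units, two input features
units = kohonen_step(units, np.array([0.5, 0.5]))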

Basic Structure of Artificial Neural Networks


Input layer: The bottom layer is known as the input layer; in this case x1 to x5 are input-layer neurons.
Hidden layers: The layers in between the input and output layers are known as hidden layers, where the knowledge of past experience / training is stored.
Output layer: The topmost layer, which gives the final output. In this case z1 and z2 are output neurons.

Basic Structure of an Artificial Neural Network

Network Architectures
1. Single layer feedforward networks: In a layered neural network the neurons are organized in the form of layers. In this simplest form of a layered network, we have an input layer of source nodes that projects onto an output layer of neurons, but not vice versa. In other words, this network is strictly of the feedforward, or acyclic, type. It is as shown in the figure:

Such a network is called a single-layer network, with the designation "single layer" referring to the output layer of neurons.

2. Multilayer feedforward networks: The second class of feedforward neural network distinguishes itself by the presence of one or more hidden layers, whose computation nodes are correspondingly called hidden neurons or units.
The function of the hidden neurons is to intervene between the external input and the network output in some useful manner. The ability of hidden neurons to extract higher-order statistics is particularly valuable when the size of the input layer is large. The input vectors are fed forward to the 1st hidden layer, which passes its output to the 2nd hidden layer, and so on until the last layer, i.e. the output layer, which gives the actual network response.
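A sketch of that forward pass: the input vector feeds the first hidden layer, whose outputs feed the next layer, until the output layer yields the network response. The 3-4-2 layer sizes, random weights and tanh squashing are illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

def forward(x, layers):
    """Feed the input through each layer in turn (input -> hidden -> output)."""
    for W, b in layers:
        x = np.tanh(W @ x + b)  # each layer: weighted sum, then squashing
    return x

# Illustrative 3-4-2 network: one hidden layer of four units, two outputs.
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),
          (rng.standard_normal((2, 4)), np.zeros(2))]
print(forward(np.array([0.1, 0.7, -0.2]), layers))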

3. Recurrent networks: A recurrent network distinguishes itself from a feedforward neural network in that it has at least one feedback loop. As shown in the figures, an output of a neuron fed back into its own input is referred to as self-feedback. A recurrent network may consist of a single layer of neurons, with each neuron feeding its output signal back to the inputs of all the other neurons. The network may or may not have hidden layers.
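A minimal sketch of the feedback idea: at each step the layer's previous output is fed back alongside the new input. The layer sizes and random weights are illustrative.

import numpy as np

rng = np.random.default_rng(1)
W_in = rng.standard_normal((3, 2))  # input connections
W_fb = rng.standard_normal((3, 3))  # feedback: outputs back to the inputs

state = np.zeros(3)
for x in [np.array([1.0, 0.0]), np.array([0.0, 1.0])]:
    state = np.tanh(W_in @ x + W_fb @ state)  # output fed back next step
print(state)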

Learning of ANNs
The property that is of primary significance for a neural network is its ability to learn from its environment, and to improve its performance through learning. A neural network learns about its environment through an interactive process of adjustments applied to its synaptic weights and bias levels. The network becomes more knowledgeable about its environment after each iteration of the learning process.

Learning with a teacher:


1. Supervised learning: the learning process in which a teacher teaches the network by giving it knowledge of the environment in the form of sets of pre-calculated input-output examples, as shown in the figure.

The neural network's response to the inputs is observed and compared with the predefined outputs. The difference, referred to as the error signal, is calculated and fed back to the input-layer neurons along with the inputs, to reduce the error until the network responds as per the predefined outputs.
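Putting these pieces together, a supervised session can be sketched as repeatedly comparing the network response with the pre-calculated outputs and feeding the error signal back into the weights (here with the sigmoid delta rule from earlier). The AND-gate examples, learning rate and epoch count are illustrative assumptions.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
d = np.array([0, 0, 0, 1], dtype=float)  # teacher's outputs (AND gate)

w, b = np.zeros(2), 0.0
for _ in range(2000):                        # iterate the learning process
    for x, target in zip(X, d):
        y = 1 / (1 + np.exp(-(w @ x + b)))   # network response
        err = target - y                     # error signal
        w += 0.5 * err * y * (1 - y) * x     # feed the error back into weights
        b += 0.5 * err * y * (1 - y)
print([round(float(1 / (1 + np.exp(-(w @ x + b))))) for x in X])
# typically [0, 0, 0, 1] after training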

Learning without a teacher:


Unlike supervised learning, in unsupervised learning the learning process takes place without a teacher; that is, there are no examples of the function to be learned by the network.

1. Reinforcement learning / neurodynamic programming: In reinforcement learning, the learning of an input-output mapping is performed through continued interaction with the environment in order to minimize a scalar index of performance, as shown in the figure.

In reinforcement learning, because no information on what the right output should be is provided, the system must employ some random search strategy so that the space of plausible and rational choices is searched until a correct answer is found. Reinforcement learning is usually involved in exploring a new environment when some knowledge (or subjective feeling) about the right response to environmental inputs is available. The system receives an input from the environment and produces an output as its response. Subsequently, it receives a reward or a penalty from the environment. The system learns from a sequence of such interactions.

2. Unsupervised learning: In unsupervised or self-organized learning there is no external teacher or critic to oversee the learning process, as indicated in the figure.

Rather, provision is made for a task-independent measure of the quality of the representation that the network is required to learn, and the free parameters of the network are optimized with respect to that measure. Once the network has become tuned to the statistical regularities of the input data, it develops the ability to form internal representations for encoding features of the input, and thereby to create new classes automatically.

Learning Tasks

Pattern recognition: Humans are good at pattern recognition. We can recognize the familiar face of a person even though that person has aged since our last encounter, identify a familiar person by his voice on the telephone, or recognize food by its smell. Pattern recognition is formally defined as the process whereby a received pattern or signal is assigned to one of a prescribed number of classes. A neural network performs pattern recognition by first undergoing a training session, during which the network is repeatedly presented a set of input patterns along with the category to which each particular pattern belongs. Later, a new pattern that has not been seen before, but which belongs to the same category of patterns used to train the network, is presented to the network. The network is able to identify the class of that particular pattern because of the information it has extracted from the training data. Pattern recognition performed by a neural network is statistical in nature, with the patterns represented by points in a multidimensional decision space. The decision space is divided into regions, each of which is associated with a class. The decision boundaries are determined by the training process.

As shown in the figure, in generic terms pattern-recognition machines using neural networks may take two forms: 1) features are extracted by an unsupervised network; 2) the features are passed to a supervised network for pattern classification to give the final output.
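That two-stage arrangement can be sketched as follows, under strong simplifying assumptions: distances to a few stored prototypes stand in for the unsupervised feature extractor, and a single sigmoid unit stands in for the supervised classifier. All data and parameters below are illustrative.

import numpy as np

rng = np.random.default_rng(2)
data = rng.standard_normal((100, 2))                 # unlabelled patterns
labels = (data[:, 0] + data[:, 1] > 0).astype(float) # classes to learn

# Stage 1: unsupervised feature extraction (distances to prototypes).
prototypes = data[rng.choice(len(data), 4, replace=False)]
features = np.linalg.norm(data[:, None, :] - prototypes[None, :, :], axis=2)

# Stage 2: supervised classification of the extracted features.
w, b = np.zeros(4), 0.0
for _ in range(200):
    for f, t in zip(features, labels):
        y = 1 / (1 + np.exp(-(w @ f + b)))
        grad = (t - y) * y * (1 - y)
        w += 0.5 * grad * f
        b += 0.5 * grad

preds = (1 / (1 + np.exp(-(features @ w + b))) > 0.5).astype(float)
print("training accuracy:", (preds == labels).mean())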

Control
The control of a plant is another learning task that can be done by a neural network; by a "plant" we mean a process or a critical part of a system that is to be maintained in a controlled condition. The relevance of learning to control should not be surprising because, after all, the human brain is a computer, the outputs of which as a whole system are actions. In the context of control, the brain is living proof that it is possible to build a generalized controller that takes full advantage of parallel distributed hardware and can control many thousands of processes, as the brain does to control the thousands of muscles.

Adaptation
The environment of interest is often nonstationary, which means that the statistical parameters of the information-bearing signals generated by the environment vary with time. In situations of this kind, the traditional methods of supervised learning may prove to be inadequate, because the network is not equipped with the necessary means to track the statistical variations of the environment in which it operates. To overcome this shortcoming, it is desirable for a neural network to continually adapt its free parameters to variations in the incoming signals in a real-time fashion. Thus an adaptive system responds to every distinct input as a novel one. In other words, the learning process encountered in an adaptive system never stops, with learning going on while signal processing is being performed by the system. This form of learning is called continuous learning, or learning on the fly.

Generalization
In back-propagation learning we typically start with a training sample and use the back-propagation algorithm to compute the synaptic weights of a multilayer perceptron by loading (encoding) as many of the training examples as possible into the network. The hope is that the neural network so designed will generalize. A network is said to generalize well when the input-output mapping computed by the network is correct, or nearly so, for test data never used in creating or training the network; the term generalization is borrowed from psychology. A neural network that is designed to generalize well will produce a correct input-output mapping even when the input is slightly different from the examples used to train the network. When, however, a neural network learns too many input-output examples, the network may end up memorizing the training data. It may do so by finding a feature that is present in the training
data but not true of the underlying function that is to be modeled. Such a phenomenon is referred to as overfitting or overtraining. When the network is overtrained, it loses the ability to generalize between similar input-output patterns.

The Probabilistic Neural Network
Another multilayer feedforward network is the probabilistic neural network (PNN). In addition to the input layer, the PNN has two hidden layers and an output layer. The major difference from a feedforward network trained by back-propagation is that the PNN can be constructed after only a single pass over the training exemplars in its original form, or two passes in a modified version. The activation function of a neuron in the case of the PNN is statistically derived from estimates of probability density functions (PDFs) based on the training patterns.
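A sketch of the PNN idea under the common Parzen-window reading: the pattern layer simply stores each training exemplar in a single pass, and classification sums a Gaussian activation per class as a PDF estimate. The smoothing width sigma and the toy data are assumptions.

import numpy as np

def pnn_classify(x, exemplars, classes, sigma=0.5):
    """Probabilistic neural network: one 'pattern unit' per stored exemplar,
    Gaussian activation, summed per class as a PDF estimate."""
    act = np.exp(-np.sum((exemplars - x) ** 2, axis=1) / (2 * sigma ** 2))
    return max(set(classes), key=lambda c: act[np.array(classes) == c].sum())

# Built in a single pass: just store the training exemplars and their classes.
train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
labels = [0, 0, 1, 1]
print(pnn_classify(np.array([0.2, 0.1]), train, labels))  # prints 0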

Advantages of Neural Networks
1. Networks start processing the data without any preconceived hypothesis. They start with random weight assignments to the various input variables. Adjustments are made based on the difference between predicted and actual output. This allows for an unbiased and better understanding of the data.
2. Neural networks can be retrained using additional input variables and numbers of individuals. Once trained, they can be called on to make predictions for a new patient.
3. There are several neural network models available to choose from for a particular problem.
4. Once trained, they are very fast.
5. Due to increased accuracy, they result in cost savings.
6. Neural networks are able to represent any function. Therefore they are called "Universal Approximators".
7. Neural networks are able to learn representative examples by back-propagating errors.

Limitations of Neural Networks

Low learning rate: For problems requiring a large and complex network architecture or having a large number of training examples, the time needed to train the network can become excessively long.

Forgetfulness: The network tends to forget old training examples as it is presented with new ones. A previously trained neural network that must be updated with new information must be trained using both the old and the new examples; there is currently no known way to incrementally train the network.

Imprecision: Neural networks do not provide precise numerical answers, but rather relate an input pattern to the most probable output state.

Black box approach: Neural networks can be trained to transform an input pattern into an output, but provide no insight into the physics behind the transformation.

Limited flexibility: An ANN is designed and implemented for only one particular system. It is not applicable to another system.

Applications of Artificial Neural Networks

In parallel with the development of theories and architectures for neural networks, the scope for applications is broadening at a rapid pace. Neural networks may develop intuitive concepts but are inherently ill-suited for implementing rules precisely, as in the case of rule-based computing. Some of the decision-making tools of the human brain, such as the seats of consciousness, thought, and intuition, do not seem to be within our capability for comprehension in the near future, and are dubbed by some to be essentially non-algorithmic. Following are a few applications where neural networks are employed at present:

1. Time Series Prediction
Predicting the future has always been one of humanity's desires. Time series measurements are the means for us to characterize and understand a system and to predict its future behavior. Gershenfeld and Weigend defined three goals for time series analysis: forecasting, modeling, and characterization. Forecasting is predicting the short-term evolution of the system. Modeling involves finding a description that accurately captures the features of the long-term behavior. The goal of characterization is to determine the fundamental properties of the system, such as the degrees of freedom or the amount of randomness. The traditional methods used for time series prediction are the moving average (MA) model, the autoregressive (AR) model, or the combination of the two, the ARMA model. Neural network approaches have produced some of the best short-term predictions. However, methods that reconstruct the state space by time-delay embedding and develop a representation for the geometry in the system's state space have yielded better longer-term predictions than neural networks in some cases.
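As a sketch of the forecasting task: the series is embedded in sliding windows (time-delay embedding) and a predictor is trained to map each window to the next value. A least-squares linear predictor stands in here for a small network; the sine-wave series and window length are illustrative.

import numpy as np

series = np.sin(np.linspace(0, 8 * np.pi, 200))  # toy time series
window = 5                                        # time-delay embedding length
X = np.array([series[i:i + window] for i in range(len(series) - window)])
t = series[window:]                               # next value to forecast

# Fit the window-to-next-value mapping by least squares.
w, *_ = np.linalg.lstsq(X, t, rcond=None)
forecast = X[-1] @ w                              # one-step-ahead forecast
print(round(float(forecast), 3), round(float(t[-1]), 3))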

2. Speech Generation
One of the earliest successful applications of the back-propagation algorithm for training multilayer feedforward networks was a speech generation system called NETtalk, developed by Sejnowski and Rosenberg. NETtalk is a fully connected layered feedforward network with only one hidden layer. It was trained to pronounce written English text. Turning written English text into speech is a difficult task, because most phonological rules have exceptions that are context-sensitive. NETtalk is a simple network that learns this function in several hours using exemplars.

3. Speech Recognition
Kohonen used his self-organizing map for the inverse problem to that addressed by NETtalk: speech recognition. He developed a phonetic typewriter for the Finnish language. The phonetic typewriter takes speech as input and converts it into written text. Speech recognition in general is a much harder problem than turning text into speech. Current state-of-the-art English speech recognition systems are based on hidden Markov models (HMMs). An HMM, which is a Markov process, consists of a number of states, the transitions between which depend on the occurrence of some symbol.

4. Autonomous Vehicle Navigation
Vision-based autonomous vehicle and robot guidance has proven difficult for algorithm-based computer vision methods, mainly because of the diversity of the unexpected cases that must be explicitly dealt with in the algorithms, and because of the real-time constraint. Pomerleau successfully demonstrated the potential of neural networks for overcoming these difficulties. His ALVINN (Autonomous Land Vehicle In a Neural Network) set a world record for autonomous navigation distance. After training on a two-mile stretch of highway, it
drove the CMU Navlab, equipped with video cameras and laser range sensors, for 21.2 miles at an average speed of 55 mph on a relatively old highway open to normal traffic. ALVINN was not disturbed by passing cars while it was driven autonomously. ALVINN nearly doubled the previous distance world record for autonomous navigation. A network in ALVINN for each driving situation consists of a single hidden layer of only four units, an output layer of 30 units, and a 30 x 32 retina for the 960 possible input variables. The retina is fully connected to the hidden layer, and the hidden layer is fully connected to the output layer. The graph of the feedforward network is a node-coalesced cascade version of bipartite graphs.

5. Handwriting Recognition
Members of a group at AT&T Bell Laboratories have been working in the area of neural networks for many years. One of their projects involves the development of a neural network recognizer for handwritten digits. A feedforward layered network with three hidden layers is used. One of the key features of this network is that it reduces the number of free parameters, to enhance the probability of valid generalization by the network. Artificial neural networks are also applied to image processing.

6. In the Field of Robotics
With the help of neural networks and artificial intelligence, intelligent devices that behave like humans are designed. These are helpful to humans in performing various tasks.

Following are some of the applications of Neural Networks in various fields:

Business
  o Marketing
  o Real Estate
Document and Form Processing
  o Machine Printed Character Recognition
  o Graphics Recognition
  o Hand Printed Character Recognition
  o Cursive Handwriting Character Recognition
Food Industry
  o Odor/Aroma Analysis
  o Product Development
  o Quality Assurance
Financial Industry
  o Market Trading
  o Fraud Detection
  o Credit Rating
Energy Industry
  o Electrical Load Forecasting
  o Hydroelectric Dam Operation
  o Oil and Natural Gas Companies
Manufacturing
  o Process Control
  o Quality Control
Medical and Health Care Industry
  o Image Analysis
  o Drug Development
  o Resource Allocation
Science and Engineering
  o Chemical Engineering
  o Electrical Engineering

At last I want to say that after 200 or 300 years neural networks will be so developed that they can find the errors of even human beings, and will be able to rectify those errors and make human beings more intelligent.

References
"Artificial Neural Networks" by Robert J. Schalkoff
"Neural Networks" by Simon Haykin
Internet:
www.anns.mit-edu.com
www.cs.barkeley.com
www.academicresources.com
