A NEURAL NETWORK FOR COMPUTING THE PSEUDO-INVERSE OF A MATRIX AND APPLICATIONS TO KALMAN FILTERING

Mathew C. Yeates
California Institute of Technology
Jet Propulsion Laboratory
Pasadena, California 91109

ABSTRACT

A single layer linear neural network for associative memory is described. The matrix which best maps a set of input keys to desired output targets is computed recursively by the network using a parallel implementation of Greville's algorithm. This model differs from the Perceptron in that the input layer is fully interconnected, leading to a parallel approximation to Newton's algorithm. This is in contrast to the steepest descent algorithm implemented by the Perceptron. By further extending the model to allow synapse updates to interact locally, a biologically plausible addition, the network implements Kalman filtering for a single output system.

I. INTRODUCTION

One result of the current interest in neural modeling [2][5][6][9][11] has been the emergence of structures that, while not strict models of biological neural systems, are, nevertheless, interesting in their own right for their ability to do complex processing in a massively parallel way. This correspondence describes such a structure, a single layer linear feedforward network, as well as its ability to implement several classical signal processing algorithms.

Suppose, given the vector pairs (x_i, y_i), i = 1, ..., p, with x_i ∈ R^n and y_i ∈ R^m, it is desired to find an m x n matrix W such that W x_i = y_i for all i. If such a matrix exists, the linear system described by y_i = W x_i acts as an associative memory where the y_i can be

thought of as stored memories and the x_i are the stimuli that retrieve them.

One neural approach is to form W as the sum of outer products of the input vectors with the output vectors, the Correlation Matrix Memory [8]. This scheme can be easily implemented by a neural network, with W being the matrix of connectivity between the inputs and outputs, but perfect recall is realized only when the input vectors are orthogonal. Attempts at using this type of memory are described in [8][3].
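For illustration, a minimal NumPy sketch of the correlation matrix memory (the dimensions, random data, and variable names are illustrative assumptions, not from the paper): recall is exact for orthonormal keys and corrupted by cross-talk otherwise.

```python
import numpy as np

# Correlation Matrix Memory: W is the sum of outer products y_i x_i^T,
# i.e., W = Y X^T with keys and targets stored as the columns of X and Y.
rng = np.random.default_rng(0)
n, m, p = 8, 4, 3                        # key dim, target dim, number of pairs

# Orthonormal keys (columns of Q from a QR factorization): perfect recall.
Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
X, Y = Q, rng.standard_normal((m, p))
W = Y @ X.T                              # sum over i of outer(y_i, x_i)
print(np.allclose(W @ X, Y))             # True: W x_i = y_i for all i

# Generic (non-orthogonal) keys: cross-talk corrupts recall.
X2 = rng.standard_normal((n, p))
W2 = Y @ X2.T
print(np.allclose(W2 @ X2, Y))           # False in general
```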

A better choice for W involves calculating the pseudo-inverse of the matrix X whose columns are the input vectors. The mapping problem can be formulated in matrix terminology by letting X = [x_1, x_2, ..., x_p] and Y = [y_1, y_2, ..., y_p]. The input-output pairs form the columns of X and Y, and a W is sought that minimizes ||Y - WX||, where ||·|| is the Euclidean matrix norm satisfying ||A||^2 = tr(A A^T). In general, X may not be invertible, and the best solution [10] is given by W = Y X^+ + Z - Z X X^+, where X^+ is the generalized notion of an inverse called the Penrose pseudoinverse and Z is an arbitrary matrix of the same dimensions as W. Taking Z = 0 for uniqueness, the solution of minimum norm, W = Y X^+, is obtained.

Several researchers have investigated the use of the pseudoinverse to implement the desired mapping; see [12]. However, these attempts involve an off-line calculation, and only after the desired W is found is it stored in a neural network. The contribution of this correspondence is to show that the neural network itself can quickly learn the desired mapping.
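A short sketch of the minimum-norm solution and of the role of Z, again with illustrative NumPy shapes and data: every W = Y X^+ + Z(I - X X^+) maps the keys to the targets exactly, and Z = 0 gives the smallest Frobenius norm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 8, 4, 3
X = rng.standard_normal((n, p))          # keys as columns (independent a.s.)
Y = rng.standard_normal((m, p))          # targets as columns

X_plus = np.linalg.pinv(X)               # Penrose pseudoinverse
W = Y @ X_plus                           # minimum-norm solution (Z = 0)
print(np.allclose(W @ X, Y))             # exact recall

# Any Z (same dimensions as W) yields another exact solution, but not a
# smaller one: the correction Z(I - X X^+) annihilates the columns of X.
Z = rng.standard_normal((m, n))
W_Z = W + Z @ (np.eye(n) - X @ X_plus)   # W = Y X^+ + Z - Z X X^+
print(np.allclose(W_Z @ X, Y))           # still exact
print(np.linalg.norm(W, "fro") <= np.linalg.norm(W_Z, "fro"))  # minimum norm
```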

Calculating X^+ involves a matrix inversion when either the rows or the columns of X are independent. Specifically,

    X^+ = X^T (X X^T)^{-1}    (rows independent)
    X^+ = (X^T X)^{-1} X^T    (columns independent)

While any matrix inversion routine could be used to solve for X^+ in either of these cases, a recursive algorithm is desired for implementation on a neural network. This is provided for by a special case of Greville's algorithm [8]. W can be generated by

    W^{k+1} = W^k + (y_{k+1} - W^k x_{k+1}) x_{k+1}^T / (1 + x_{k+1}^T x_{k+1}),    k = 0, ..., p-2    (1.1)
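A sketch of recursion (1.1) as reconstructed above, under two stated assumptions that are not from the paper: the network starts from W^0 = 0, and the pairs are presented cyclically (a single pass already approximates the mapping; repeated passes drive W toward the minimum-norm solution Y X^+).

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, p = 8, 4, 3
X = rng.standard_normal((n, p))          # keys as columns
Y = rng.standard_normal((m, p))          # targets as columns
W_batch = Y @ np.linalg.pinv(X)          # off-line solution, for reference

# Each presentation of a pair (x, y) corrects W along the output error
# (y - W x), with gain x^T / (1 + x^T x), as in (1.1).
W = np.zeros((m, n))                     # assumed initialization W^0 = 0
for _ in range(2000):                    # repeated presentations of the pairs
    for k in range(p):
        x, y = X[:, k], Y[:, k]
        W += np.outer(y - W @ x, x) / (1.0 + x @ x)

print(np.linalg.norm(W - W_batch))       # near zero: recursion matches Y X^+
print(np.allclose(W @ X, Y, atol=1e-6))  # the stored pairs are recalled
```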
