

Hermite-Gaussian-Like Eigenvectors of the Discrete Fourier Transform Matrix Based on the Singular-Value Decomposition of Its Orthogonal Projection Matrices
Magdy Tawfik Hanna, Senior Member, IEEE, Nabila Philip Attalla Seif, and Waleed Abd El Maguid Ahmed
Abstract: A new technique is proposed for generating initial orthonormal eigenvectors of the discrete Fourier transform matrix F by the singular-value decomposition of its orthogonal projection matrices on its eigenspaces, and efficiently computable expressions for those matrices are derived. In order to generate Hermite-Gaussian-like orthonormal eigenvectors of F given the initial ones, a new method called the sequential orthogonal Procrustes algorithm (SOPA) is presented, based on the sequential generation of the columns of a unitary matrix rather than the batch evaluation of that matrix as in the orthogonal Procrustes algorithm (OPA). It is proved that for any of the SOPA, the OPA, or the Gram-Schmidt algorithm (GSA), the output Hermite-Gaussian-like orthonormal eigenvectors are invariant under the change of the input initial orthonormal eigenvectors.

Index Terms: Discrete fractional Fourier transform (DFRFT), Gram-Schmidt algorithm (GSA), Hermite-Gaussian-like eigenvectors, orthogonal Procrustes algorithm (OPA), sequential OPA (SOPA), singular-value decomposition.

I. INTRODUCTION

HAVING developed the continuous fractional Fourier transform (FRFT) [1]-[3], researchers are now developing its discrete counterpart, namely the discrete FRFT (DFRFT) [4]-[8]. In order for the DFRFT to satisfy the requirements of unitarity and index additivity, orthonormal eigenvectors should be generated for the discrete Fourier transform (DFT) matrix F. In order for the DFRFT to approximate its continuous counterpart, it is logical to demand that the eigenvectors of F approximate the Hermite-Gaussian functions, which are the eigenfunctions of the FRFT [9]. Candan et al. obtained a second-order difference equation by discretizing the second-order differential equation satisfied by the Hermite-Gaussian functions [6], [7]. Periodic solutions of this difference equation exist since its coefficients are periodic with period N. One period of each solution sequence forms the elements of an eigenvector of an almost tridiagonal real symmetric matrix S. The matrices S and F have
Manuscript received November 6, 2003; revised June 17, 2004. This paper was recommended by Associate Editor R. R. Rao. M. T. Hanna and W. A. E. M. Ahmed are with the Department of Engineering Mathematics and Physics, Faculty of Engineering, Cairo University/Fayoum Branch, Fayoum 63514, Egypt (e-mail: hanna@ieee.org; alkodseforever@yahoo.com). N. P. A. Seif is with the Department of Engineering Mathematics and Physics, Faculty of Engineering, Cairo University, Giza 12613, Egypt (e-mail: npaseif@mail.com). Digital Object Identifier 10.1109/TCSI.2004.836850

a common set of eigenvectors because they commute. Candan et al. used those Hermite-Gaussian-like orthonormal eigenvectors of S as a basis for a legitimate definition of the DFRFT. Actually, the work of Candan et al. was an extension of that of Dickinson and Steiglitz [10], who previously studied the eigenstructure of the special matrix¹ S.

Pei et al. achieved markedly superior results. They regarded the orthonormal eigenvectors of matrix S only as an initial orthonormal basis spanning the eigenspaces of matrix F. In each eigenspace, they searched for other orthonormal eigenvectors that better approximate the Hermite-Gaussian functions [5]. The unitarity of matrix F implies that its eigenspaces corresponding to its distinct eigenvalues are orthogonal to each other [11], and the task reduces to finding good Hermite-Gaussian-like orthonormal eigenvectors for each eigenspace individually. More specifically, since matrix F has four distinct eigenvalues [12], [13], the corresponding initial eigenvectors are grouped as the columns of four matrices, one per eigenvalue. Pei et al. generated a set of vectors by sampling the Hermite-Gaussian functions and proved that they are approximate eigenvectors of matrix F corresponding to the exact eigenvalues [4], [5]. Those vectors are arranged as the columns of four further matrices. Pei et al. proposed two techniques for getting orthonormal eigenvectors of F that better approximate the Hermite-Gaussian functions than the initial orthonormal eigenvectors [5]. The first technique is the Gram-Schmidt algorithm (GSA), where for each value of k separately the columns of the matrix of Hermite-Gaussian samples are projected on the column space of the initial basis matrix to get exact nonorthogonal eigenvectors of F that are next orthonormalized by applying the Gram-Schmidt method. The second technique is the orthogonal Procrustes algorithm (OPA), where for each value of k separately the desired superior orthonormal eigenvectors are assumed to form the columns of a matrix expressed as the product of the initial basis matrix and a unitary matrix, and the unitary matrix is evaluated by minimizing the Frobenius norm of the difference between that product and the matrix of Hermite-Gaussian samples [14].

One objective of this paper is to present a direct technique for generating initial orthonormal eigenvectors of F without appealing to matrix S.
¹Strictly speaking, denoting the matrix S in the work of Dickinson and Steiglitz [10] and Pei et al. [5] by S1 and the matrix S in the work of Candan et al. [6] by S2, the two matrices are related by S2 = S1 - 4I. Therefore, S1 and S2 have the same eigenvectors.
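Before turning to that technique, the following sketch (not part of the original paper) illustrates numerically that a matrix of the kind discussed above commutes with the DFT matrix. The specific form used for S, with diagonal entries 2cos(2*pi*n/N) and cyclic unit off-diagonal entries, is the standard nearly tridiagonal form associated with [10] and is an assumption of this sketch.

```python
# Sketch (assumption: the standard nearly tridiagonal form of S from [10]).
import numpy as np

def dft_matrix(N):
    """Unitary DFT matrix with entries exp(-2j*pi*m*n/N)/sqrt(N)."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

def commuting_matrix_S(N):
    """Nearly tridiagonal real symmetric matrix that commutes with the DFT matrix."""
    n = np.arange(N)
    S = np.diag(2.0 * np.cos(2.0 * np.pi * n / N))
    S += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    S[0, N - 1] = S[N - 1, 0] = 1.0          # cyclic corner entries
    return S

N = 32
F, S = dft_matrix(N), commuting_matrix_S(N)
print(np.linalg.norm(S @ F - F @ S))          # ~1e-14, i.e., S and F commute
```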



The technique depends on the generation of the orthogonal projection matrices of matrix F on its eigenspaces and the computation of initial eigenvectors by applying the singular-value decomposition technique. More specifically, expressions are derived for the projection matrices and simplified in order to get efficiently computable forms. A second objective is to present an alternative technique for generating good Hermite-Gaussian-like eigenvectors of F given initial ones. In this technique, to be referred to as the sequential OPA (SOPA), the columns of the unitary matrix are sequentially evaluated by solving a series of constrained minimization problems rather than batch evaluated as in the OPA. A third contribution is the proof that the superior Hermite-Gaussian-like eigenvectors computed using any of the three techniques (GSA, OPA, or SOPA) are invariant under the change of the initial eigenvectors. This implies that for any of the three refined techniques, the final eigenvectors will be the same whether the initial eigenvectors are computed by finding the eigenvectors of the auxiliary matrix S or by the singular-value decomposition of the projection matrices. More surprisingly, it will be proved that both the GSA and the SOPA produce identical results despite being algorithmically quite different.

In Section II, the projection matrices on the four eigenspaces of matrix F will be derived using two completely different methods. In Section III, each projection matrix is decomposed by the singular-value decomposition technique in order to get an orthonormal basis of the corresponding eigenspace. In Section IV, the GSA and the OPA will be surveyed and proved to produce an output that is invariant under the change of the initial eigenvectors. The contributed SOPA will next be presented and proved to have the same property. In Section V, it will be shown that the SOPA and GSA produce identical outputs despite being algorithmically distinct. Some simulation results will be presented in Section VI.

II. ORTHOGONAL PROJECTION MATRICES ON EIGENSPACES OF MATRIX F

The DFT matrix F of order N is defined by

(F)_{m,n} = (1/sqrt(N)) W^{mn},   m, n = 0, 1, ..., N - 1   (1)

where

W = e^{-j 2π/N}.   (2)

Matrix F has the following four distinct eigenvalues [12]:

λ_k = (-j)^k = e^{-j k π/2},   k = 0, 1, 2, 3.   (3)

It is straightforward to show that matrix F is unitary and consequently is diagonalizable [15]. According to the spectral theorem [16], F has the following spectral decomposition:

F = Σ_{k=0}^{3} λ_k P_k   (4)
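As a quick illustration (not part of the original paper), the sketch below builds the unitary DFT matrix of (1)-(2) in NumPy and checks that F is unitary with F^4 = I, so that every eigenvalue of F is one of the four values listed in (3). The size N = 16 is an arbitrary choice.

```python
import numpy as np

def dft_matrix(N):
    """Unitary DFT matrix of (1)-(2): entries exp(-2j*pi*m*n/N)/sqrt(N)."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)

N = 16
F = dft_matrix(N)
assert np.allclose(F @ F.conj().T, np.eye(N))                # F is unitary
assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(N))  # F^4 = I
print(np.unique(np.round(np.linalg.eigvals(F), 6)))          # subset of {1, -j, -1, j}
```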

where P_k is the orthogonal projection matrix on the k-th eigenspace of F, to be denoted by E_k. Two different methods will be presented below for deriving the four projection matrices.

A. Method A

Since for any integer n, matrix F^n has the same eigenvectors, and consequently the same projection matrices, as matrix F, (4) implies that

F^n = Σ_{k=0}^{3} λ_k^n P_k.   (5)

The special case n = 0 of (5) is the resolution of the identity matrix induced by F. Since the objective is the derivation of the four projection matrices, the above equation will be written for n = 0, 1, 2, 3 in order to get four matrix equations that can be expressed as

(6)

where the partitioned matrix of coefficients is given by

(7)

In preparation for solving (6), one obtains the following results from (3):

Σ_{k=0}^{3} λ_k^n = 4 if n = 4m (m an integer), and 0 otherwise,   (8)

Σ_{n=0}^{3} (λ_k^*)^n λ_l^n = 4 if l = k + 4m (m an integer), and 0 otherwise.   (9)

Using (7)-(9), one obtains² that the conjugate transpose of the coefficient matrix of (7) multiplied by that matrix equals four times the identity matrix,

(10)

and consequently that its inverse is one quarter of its conjugate transpose,

(11)


²The superscripts * and H denote, respectively, the complex conjugate and the complex conjugate transpose.
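The orthogonality property expressed by (10) and (11) is easy to verify numerically for the 4 x 4 matrix with entries λ_k^n, n, k = 0, ..., 3; the following short sketch (not part of the original paper) does so.

```python
import numpy as np

lam = np.array([(-1j) ** k for k in range(4)])                 # the eigenvalues of (3)
Lam = np.array([[lam[k] ** n for k in range(4)] for n in range(4)])
assert np.allclose(Lam.conj().T @ Lam, 4 * np.eye(4))          # cf. (10)
assert np.allclose(np.linalg.inv(Lam), Lam.conj().T / 4)       # cf. (11)
```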


Therefore, (6) can be directly solved using (11) and (7) to get the following compact expression for the four orthogonal projection matrices:

P_k = (1/4) Σ_{n=0}^{3} (λ_k^*)^n F^n,   k = 0, 1, 2, 3.   (12)

In order to put the above expression in an efficiently computable form, one should utilize the fact that [17, p. 351]

F^2 = [ 1  0 ]
      [ 0  J ]   (13)

where J is the contra-identity matrix of order N - 1 defined by

(J)_{m,n} = 1 if m + n = N - 2, and 0 otherwise,   m, n = 0, ..., N - 2.   (14)

Upon using the fact that [10]

F^4 = I   (15)

and the unitarity and symmetry of F, one obtains³

F^3 = F^*.   (16)

From (3), one directly gets

λ_k^* = j^k,   k = 0, 1, 2, 3.   (17)

Substituting (3), (13), (16), and (17) in (12), one obtains

P_k = (1/4) [ I + j^k F + (-1)^k F^2 + (-j)^k F^* ],   k = 0, 1, 2, 3.   (18)

Therefore, the final expressions required for computing the orthogonal projection matrices are given by

P_0 = (1/4) ( I + F + F^2 + F^* )   (19)
P_1 = (1/4) ( I + jF - F^2 - jF^* )   (20)
P_2 = (1/4) ( I - F + F^2 - F^* )   (21)
P_3 = (1/4) ( I - jF - F^2 + jF^* )   (22)

³Although matrix F is symmetric, it is not Hermitian.

B. Method B

According to a corollary of the spectral theorem, each orthogonal projection matrix can be expressed as [16, p. 434]

P_k = g_k(F)   (23)

where g_k(λ) is a polynomial in λ which satisfies the conditions

g_k(λ_i) = 1 if i = k, and 0 otherwise,   i = 0, 1, 2, 3.   (24)

Since matrix F is diagonalizable and has only four distinct eigenvalues, the polynomial g_k(λ) is of the third degree. Using the Lagrange interpolation formula, g_k(λ) can be expressed as

(25)

where

(26)

Combining (24)-(26), one gets

(27)

Upon substituting (3) in the above equation, one obtains (28), (29), (30), and (31). Using (23) and (28)-(31), one gets the expressions (32), (33), (34), and (35) for the projection matrices. Using (3), it immediately follows that the above four expressions are identical to (12) obtained using the first approach.
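The compact expression (12) translates directly into a few lines of NumPy. The sketch below (not part of the original paper) computes the four projection matrices from the first four powers of F and verifies the defining properties of the spectral decomposition; dft_matrix is the helper defined in the earlier sketch and N = 16 is arbitrary.

```python
import numpy as np

def projection_matrices(F):
    """Orthogonal projection matrices on the four eigenspaces of F, cf. (12)."""
    lam = [(-1j) ** k for k in range(4)]
    powers = [np.linalg.matrix_power(F, n) for n in range(4)]
    return [sum(np.conj(lam[k]) ** n * powers[n] for n in range(4)) / 4.0
            for k in range(4)]

N = 16
F = dft_matrix(N)                       # helper from the earlier sketch
P = projection_matrices(F)
assert np.allclose(sum(P), np.eye(N))                       # resolution of the identity
for k in range(4):
    assert np.allclose(P[k] @ P[k], P[k])                   # idempotent
    assert np.allclose(P[k], P[k].conj().T)                 # Hermitian
    assert np.allclose(F @ P[k], (-1j) ** k * P[k])         # F P_k = lambda_k P_k
print([int(round(np.trace(P[k]).real)) for k in range(4)])  # the ranks r_k (cf. Table I)
```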


III. INITIAL ORTHONORMAL EIGENVECTORS OF MATRIX F

TABLE I
DIMENSIONS r_k OF THE FOUR EIGENSPACES

The Nth-order orthogonal projection matrix P_k has rank r_k, which is the dimension of the k-th eigenspace of matrix F given by Table I [12]. The objective here is to compute an orthonormal basis for each eigenspace. The singular-value decomposition will be shown to be the right technique to apply. The singular-value decomposition of an arbitrary square matrix A of order N is [18]

(36)

where the two outer factors are unitary matrices of order N and the middle factor is the diagonal matrix of singular values

(37)

In the above equation, the singular values σ_i, i = 1, ..., N, are real and satisfy σ_1 ≥ σ_2 ≥ ... ≥ σ_N ≥ 0.

Lemma 1: Let A be a square Hermitian matrix of order N having a modal matrix and eigenvalues arranged such that their absolute values are in nonincreasing order. The singular-value decomposition of A is given by (36), where

(38)

(39)

(40)

and

(41)

Proof: See Appendix A.

The above lemma implies that for a Hermitian matrix the singular values are equal to the absolute values of its eigenvalues and the right singular vectors are equal to its orthonormal eigenvectors. If the rank of A is r, the two unitary factors in (36) can be partitioned as

(42)

(43)

where

(44)

In (42), the leading submatrix has r orthonormal columns by the unitarity of the factor being partitioned. For a Hermitian matrix of rank r, combining (36), (38), (40), (42), and (43), one obtains

(45)

where the diagonal block appearing in (45) is the leading diagonal block of order r of the matrix defined by (40). In the particular case of an orthogonal projection matrix, all eigenvalues are either 1 or 0 [18]. Consequently, the diagonal matrices in (45) reduce to the identity matrix and (45) simplifies to

(46)

Therefore, in order to find an orthonormal basis for any space of dimension r given its orthogonal projection matrix, one has only to apply the singular-value decomposition (36) and select the first r columns of the left unitary factor in (42). It should be emphasized that one should never try to get the same result computationally by applying an eigenvalue decomposition procedure, since the projection matrix might have repeated eigenvalues and the corresponding eigenvectors, as evaluated by that procedure, will not generally be orthogonal. Applying the result (46) to the four projection matrices of the last section, one gets

(47)
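The procedure just described, namely taking the first r_k left singular vectors of each P_k as in (46) and (47), is illustrated by the following sketch. It is not part of the original paper and assumes that the helpers dft_matrix and projection_matrices from the earlier sketches are in scope.

```python
import numpy as np

def initial_eigenvectors(F):
    """Initial orthonormal eigenvectors of F: one matrix of orthonormal columns
    per eigenvalue (-1j)**k, obtained from the SVD of the projection matrices."""
    bases = []
    for k, P_k in enumerate(projection_matrices(F)):
        U, s, _ = np.linalg.svd(P_k)
        r_k = int(round(s.sum()))       # the singular values of a projection are 1 or 0
        V_k = U[:, :r_k]                # first r_k columns of the left factor, cf. (46)
        assert np.allclose(F @ V_k, (-1j) ** k * V_k)
        bases.append(V_k)
    return bases

V = initial_eigenvectors(dft_matrix(16))
print([V_k.shape[1] for V_k in V])      # the eigenspace dimensions r_k of Table I
```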

The orthonormal basis of the k-th eigenspace of F given by the columns of the matrix in (47) will be taken as the initial orthonormal eigenvectors of F corresponding to λ_k. They will be utilized for deriving the desired Hermite-Gaussian-like orthonormal eigenvectors needed for defining a DFRFT that approximates its continuous counterpart.

IV. HERMITE-GAUSSIAN-LIKE EIGENVECTORS

By sampling the Hermite-Gaussian functions, Pei et al. obtained approximate eigenvectors of the DFT matrix F that correspond to the exact eigenvalues λ_k of (3), as delineated in [4], [5]. Those vectors are grouped to form the columns of four matrices, one per eigenvalue, where the dimensions of the corresponding eigenspaces are given in Table I. Being approximate rather than exact eigenvectors, the columns of each of those matrices do not belong to the eigenspace corresponding to the eigenvalue λ_k. The objective here is to find an orthonormal basis of that eigenspace, forming the columns of a matrix that is as close as possible to the matrix of Hermite-Gaussian samples. Toward achieving that goal, two techniques have been proposed by Pei et al. [5], namely the GSA and the OPA. A third technique, to be termed the SOPA, will be proposed in this paper. In preparation for presenting the new technique, the first two techniques will be surveyed below and cast in a form that will facilitate the comparison. More importantly, they will be proved to produce an output that is invariant under the change of the initial orthonormal basis of the eigenspace.

Since the eigenspaces E_k, k = 0, 1, 2, 3, are orthogonal to each other due to the unitarity of matrix F, each eigenspace will be dealt with separately. In order to simplify the notation, the subscript k will be dropped in the remainder of this paper: the initial orthonormal basis matrix, the matrix of Hermite-Gaussian samples, and the matrix of refined orthonormal eigenvectors of a generic eigenspace will be written as V, H, and Y, respectively, and the eigenspace itself will be denoted by E.

A. GSA

First, matrix H will be expressed as the partitioned matrix of its columns

(48)


Each column of H will be projected on E to get a corresponding column of a matrix W. Since the resulting vectors are not orthogonal, they will be orthonormalized by applying the Gram-Schmidt technique in order to get the columns of Y. More specifically, since the space E is spanned by the orthonormal columns of V, the projection of the i-th column of H on E can be expressed as

(49)

The above equation can be rewritten as

(50)

By the orthonormality of the columns of V, the above equation implies that

(51)

By virtue of the definition of matrix V, the same equation can be compactly expressed in matrix form. Upon defining the matrix of projection coefficients as

(52)

the vector equations (51) can be combinedly expressed as

(53)

The Gram-Schmidt technique is next applied to orthonormalize the columns of matrix W in order to get the columns of matrix Y using the following steps:

1) (54)

2) For i = 2, ..., r:
   a) (55)
   b) (56)

Matrix Y is next defined as

(57)

Lemma 2: The result of the GSA is invariant under the change of the initial orthonormal basis of the space E given by the columns of matrix V.

Proof: Consider a second set of initial orthonormal basis vectors given by the columns of a matrix defined by

(58)

Since the columns of V form a basis of E, the columns of the second basis matrix can be expressed as linear combinations of those of V, i.e.,

(59)

The above vector equations can be compactly expressed as the matrix equation

(60)

where the matrix relating the two bases is a square matrix of order r. It follows immediately that

(61)

and, by the orthonormality of the columns of the two basis matrices individually,

(62)

Based on this unitarity property of the relating matrix, it follows from (60) that

(63)

This proves that the product of the basis matrix and its conjugate transpose is invariant under the change of the initial orthonormal basis of the space E. It follows from (53) that W is invariant under the change of V. The same applies to Y, as can be concluded from (54)-(57).

B. OPA

Here, the desired matrix Y of Hermite-Gaussian-like eigenvectors defined by (57) will be expressed as⁴

Y = V U   (64)

where U is a unitary matrix to be evaluated such that the square of the Frobenius norm of the difference H - Y is minimized. The solution of this problem is given by the orthogonal Procrustes algorithm expounded in [14], used in [5], and summarized in the following three steps.

1) Form the matrix

(65)

2) Find the singular-value decomposition of that matrix

(66)

3) Compute the unitary matrix U as

(67)

⁴It should be mentioned that in [5] the OPA was erroneously applied, since the unitary matrix U was formed by multiplying the two unitary factors of the decomposition (66) in the reverse order.
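Both algorithms surveyed above admit compact NumPy realizations, sketched below. This is not the paper's code: the Gram-Schmidt step is carried out through a QR factorization with a phase correction, the Procrustes step through the standard SVD-based solution, and random matrices stand in for the initial basis V and the matrix H of Hermite-Gaussian samples.

```python
import numpy as np

def gsa(V, H):
    """Gram-Schmidt algorithm (Section IV-A): project the columns of H onto the
    column space of V, then orthonormalize the projections."""
    W = V @ (V.conj().T @ H)                  # projected vectors, cf. (49)-(53)
    Q, R = np.linalg.qr(W)                    # QR equals Gram-Schmidt up to column phases
    phases = np.diag(R) / np.abs(np.diag(R))  # assumes W has full column rank
    return Q * phases

def opa(V, H):
    """Orthogonal Procrustes algorithm (Section IV-B): Y = V @ U with U the unitary
    matrix minimizing the Frobenius norm of (H - V @ U)."""
    M = V.conj().T @ H
    Q, _, Rh = np.linalg.svd(M)               # standard Procrustes solution U = Q @ Rh
    return V @ (Q @ Rh)

# Toy data: an orthonormal basis V of an r-dimensional subspace and noisy targets H.
rng = np.random.default_rng(0)
N, r = 16, 4
V, _ = np.linalg.qr(rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r)))
H = V @ rng.standard_normal((r, r)) + 0.05 * rng.standard_normal((N, r))
for Y in (gsa(V, H), opa(V, H)):
    assert np.allclose(Y.conj().T @ Y, np.eye(r))        # orthonormal columns
    assert np.allclose(V @ (V.conj().T @ Y), Y)          # columns stay inside the subspace
print(np.linalg.norm(H - gsa(V, H)), np.linalg.norm(H - opa(V, H)))
```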


Lemma 3: The matrix Y determined by the OPA is invariant under the change of the initial orthonormal basis of the space E given by the columns of matrix V.

Proof: Let two initial orthonormal bases of E be given; they should be related by

(68)

where the relating matrix is unitary. (This follows along the same lines as (59)-(62).) Consider the unitary matrices evaluated by the OPA, and the resulting optimal matrices given by (64), for each of the two bases. It follows from (65)-(67) and (64) that

(69)

(70)

(71)

(72)

Our task reduces to showing that the two resulting optimal matrices are equal. From (68) and (69), one gets

(73)

Substituting (70) in the above equation, one obtains

(74)

By the uniqueness of the singular values, the above equation implies that

(75)

By the unitarity of the matrices involved, (74) leads to

(76)

Since the diagonal matrices appearing above have real nonnegative diagonal elements arranged in decreasing order of absolute value, a direct application of Lemma 1 results in

(77)

By comparing (76) and (77), one gets

(78)

It follows immediately that

(79)

Upon utilizing (71), the above equation reduces to

(80)

Substituting (68) and (80) in (72), one eventually obtains

(81)

C. SOPA

Although the desired matrix Y will be expressed as in (64), the unitary matrix U will not be evaluated by minimizing the square of the Frobenius norm as in the OPA; rather, its columns will be evaluated sequentially in the manner delineated next. By virtue of the definition of the Frobenius norm of a matrix and the Euclidean norm of a vector [18], and using (48), (57), and (64), one obtains

(82)

where

(83)

(84)

(85)

In the OPA, matrix U has been evaluated by minimizing the total performance index of (83). In the SOPA, there will be r stages of sequential minimizations. In stage s, the s-th column of U will be evaluated by minimizing the partial performance index of (84) subject to the constraints that it be orthogonal to the previously evaluated columns and be of unit norm, in order to satisfy the unitarity of U. These constraints can be expressed as

(86)

(87)

The first set of s - 1 orthogonality constraints can be compactly expressed as

(88)

where

(89)

It should be mentioned that the rows of the matrix of (89) are linearly independent because they are orthogonal, due to the way they have been sequentially generated. For mathematical tractability, one will set aside the quadratic constraint (87) for a while, minimize (84) subject to the linear constraints (88), and call the resulting vector the unnormalized solution of stage s. Next, by normalizing that vector in order to satisfy the normalization condition (87), one obtains the s-th column of U. More specifically, the constrained minimization problem defined by the quadratic criterion (84) and the linear constraints (88) is solved in Appendix B, and its solution is given by

(90)

By the orthonormality of the columns of matrix V, the above equation reduces to

(91)


By utilizing the orthonormality of the rows of the matrix of (89), the above result simplifies to

(92)

In the special case of generating the first column of U, one has the unconstrained minimization problem defined by the criterion of (84), whose solution is given by

(93)

The quadratic constraint (87) accounting for the normalization condition will be satisfied by computing

(94)

Therefore, the SOPA can be summarized in the following steps.

1) For s = 1:
   a) compute the unconstrained solution according to (93);
   b) normalize it according to (94) to obtain the first column of U;
   c) initialize matrix U with that column;
   d) initialize the constraint matrix of (89) (the null matrix).

2) For s = 2, ..., r:
   a) augment the constraint matrix of (89) by the row vector obtained from the previously computed column;
   b) compute the constrained solution according to (92);
   c) normalize it according to (94);
   d) augment matrix U by the resulting column vector.

3) Generate Y according to (64).

Lemma 4: The matrix Y determined by the SOPA is invariant under the change of the initial orthonormal basis of the space E given by the columns of matrix V.

Proof: Let two initial orthonormal bases of E be given; they should be related by (68). Let Y be the matrix determined by the SOPA corresponding to the first basis, and let the corresponding matrix determined from the second basis be distinguished by an extra subscript; the same convention applies to the vectors and matrices of (89)-(94) when they are computed based on the second basis. From (93) and (68), one obtains

(95)

By virtue of (94) and the unitarity of the matrix relating the two bases, one gets

(96)

From (89) and (96), it follows that

(97)

From (92), (97), and (68), one obtains

(98)

It follows from the above equation and (94) that

(99)

By the same token, it can be shown that

(100)

From (85) and (100), one obtains

(101)

The above equation together with (64) and (68) lead to

(102)
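The sequential procedure summarized above is sketched below in NumPy. This is not the paper's code: the constrained least-squares step uses the closed-form solution of Appendix B specialized to a matrix V with orthonormal columns and to orthonormal constraint rows, in which case it reduces to removing the components along the previously generated columns; random matrices again stand in for V and H.

```python
import numpy as np

def sopa(V, H):
    """Sequential orthogonal Procrustes algorithm (Section IV-C): the columns of the
    unitary matrix U are generated one at a time, then Y = V @ U as in (64)."""
    r = V.shape[1]
    B = V.conj().T @ H                       # unconstrained minimizers, cf. (93)
    U = np.zeros((r, r), dtype=complex)
    for s in range(r):
        u = B[:, s].copy()
        for i in range(s):                   # impose the orthogonality constraints (88)
            u -= U[:, i] * (U[:, i].conj() @ u)
        U[:, s] = u / np.linalg.norm(u)      # normalization constraint (87), cf. (94)
    return V @ U

rng = np.random.default_rng(1)
N, r = 16, 4
V, _ = np.linalg.qr(rng.standard_normal((N, r)) + 1j * rng.standard_normal((N, r)))
H = V @ rng.standard_normal((r, r)) + 0.05 * rng.standard_normal((N, r))
Y = sopa(V, H)
assert np.allclose(Y.conj().T @ Y, np.eye(r))      # orthonormal columns
assert np.allclose(V @ (V.conj().T @ Y), Y)        # columns lie in the subspace spanned by V
```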


V. EQUALITY OF OUTPUTS OF GSA AND SOPA

In the GSA, one starts by projecting the columns of H on the column space of matrix V to get the nonorthogonal columns of W. Next, by applying the Gram-Schmidt orthonormalization technique, one sequentially obtains the orthonormal columns of Y. In the SOPA, one sequentially obtains the columns of the unitary matrix U. This corresponds to the sequential evaluation of the columns of Y, since (64), (57), and (85) imply that

(103)

It will be shown below that the GSA and SOPA produce identical results. Toward that goal, one starts by a further manipulation of the equations pertaining to both techniques, only for the sake of proving the equality of their outputs.

A. GSA

Manipulating (55), one obtains

(104)

where

(105)

Substituting (51) in (104), one gets

(106)

where

(107)

(108)

Since the projected vectors forming the columns of W lie in the column space of matrix V, it follows from (105) that

(109)

where the coefficient matrix appearing in (109) is an r x r matrix. Consequently,

(110)

By the orthonormality of the columns of V and those of Y, the above equation results in

(111)

From (107), (108), and (109), one obtains

(112)

Therefore, (106) simplifies to

(113)

In the special case of the first column, (51) and (107) result in

(114)

B. SOPA

One starts by defining

(115)

where the vectors on the right-hand side are given by (92) and (93). The orthonormality of the columns of V implies that

(116)

Equations (115), (116), (94), and (103) result in

(117)

Since (94) implies that the vector of (92) is an unnormalized version of the corresponding column of U, the above equation implies that the vector defined by (115) can be interpreted as an unnormalized version of the corresponding column of Y. Combining (115) and (92), one obtains

(118)

Using (89), (103), and (105), it follows that

(119)

Substituting the above equation in (118) and utilizing (107) and (108), one gets

(120)

In the special case of the first column, it follows from (115), (93), and (107) that

(121)

C. Identity

From (107) and (108), it is obvious that the matrix appearing there is the same for both the GSA and the SOPA, since it is an input matrix, while the other matrices of (107) and (108) are computed for each algorithm separately. From (114) and (121), it is clear that the quantities produced by the two algorithms at the first stage coincide, and from (56) and (117) it follows that the first column of Y is the same for both algorithms. By virtue of (105) and (108), one concludes that the corresponding auxiliary matrices are identical for both algorithms. Equations (113) and (120) then imply that the unnormalized vectors produced at every subsequent stage coincide, and consequently each column of Y is the same for both algorithms. Proceeding in the same manner, one concludes that both the GSA and SOPA result in exactly the same set of vectors, namely the columns of Y.

VI. SIMULATION RESULTS

Orthonormal eigenvectors of the DFT matrix F have been computed using the following three techniques.

1) The P method, where one only obtains initial orthonormal eigenvectors by the singular-value decomposition of the projection matrices P_k of F according to (47), as was explained in Section III.

2) The OPA explained in Section IV-B.

3) The SOPA explained in Section IV-C.

It should be mentioned that even if one intends to apply a refined technique such as the second or third one, one should start by applying the first technique in order to generate initial orthonormal eigenvectors of F to be used as input to the advanced technique. Since the main goal is to generate Hermite-Gaussian-like orthonormal eigenvectors of F, the error vectors between the eigenvectors of F and samples of the Hermite-Gaussian functions of the same order have been computed, as was done in [5]. The norms of these error vectors have been plotted versus the order of the Hermite-Gaussian function, the eigenvectors being the columns of the modal matrix of F. Figs. 1 and 2 show the results for N = 64 and 128, respectively. Since in the P method the act of approximating the Hermite-Gaussian functions was not taken into account, it is quite expected that the P method has the largest error among the three techniques being compared. For the OPA or the SOPA, the error tends to increase on the average with the order. The interpretation is that the samples of the Hermite-Gaussian functions are approximate eigenvectors of F with an approximation error that grows with the order of those functions [4], [5]. Consequently, the error between the exact Hermite-Gaussian-like eigenvectors determined by the OPA or SOPA and the approximate eigenvectors tends to increase with the order. Upon comparing the OPA and the SOPA, one clearly notices that the OPA has a lower rate of growth of the error. The interpretation is that for the SOPA the number of linear constraints expressed by (88) and (89) increases with the stage index, thus restricting the freedom left in the solution space for minimizing the criterion of (84). This is to be contrasted with the OPA, where matrix U, rather than its individual columns, is batch evaluated. On the other hand, the SOPA has the merit that the error begins to be noticeable at a higher order than for the OPA.
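A rough way to reproduce the flavor of this comparison is sketched below. It is not the paper's experiment: the Hermite-Gaussian sampling grid, the normalization of the samples, and the pairing of Hermite orders with the four eigenvalue classes are assumptions made here for illustration (the paper follows the construction of [4], [5]), and the helpers dft_matrix, initial_eigenvectors, opa, and sopa from the earlier sketches are assumed to be in scope.

```python
import numpy as np
from numpy.polynomial import hermite

def hermite_gaussian_samples(N, order):
    """Unit-norm samples of the order-th Hermite-Gaussian function on a wrapped grid
    with spacing sqrt(2*pi/N); this grid is an assumption of this sketch."""
    dt = np.sqrt(2.0 * np.pi / N)
    t = dt * (((np.arange(N) + N // 2) % N) - N // 2)   # index 0 corresponds to t = 0
    coeff = np.zeros(order + 1)
    coeff[order] = 1.0
    h = hermite.hermval(t, coeff) * np.exp(-t ** 2 / 2.0)
    return h / np.linalg.norm(h)

N = 64
F = dft_matrix(N)
V = initial_eigenvectors(F)                             # the "P method" bases
for k in range(4):
    r_k = V[k].shape[1]
    orders = [k + 4 * i for i in range(r_k)]            # orders paired with eigenvalue (-1j)**k
    H = np.column_stack([hermite_gaussian_samples(N, n) for n in orders])
    for name, Y in (("P", V[k]), ("OPA", opa(V[k], H)), ("SOPA", sopa(V[k], H))):
        err = np.linalg.norm(Y - H, axis=0)             # per-order error norms, cf. Figs. 1 and 2
        print(k, name, np.round(err[:4], 3))
```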


Fig. 1. Norm of the error vectors between the exact and approximate eigenvectors for N = 64.

Fig. 2. Norm of the error vectors between the exact and approximate eigenvectors for N = 128.

VII. CONCLUSION

A new technique has been developed for generating initial orthonormal eigenvectors of the DFT matrix F based on the singular-value decomposition of the projection matrices of F on its eigenspaces, after deriving efficiently computable expressions for these projection matrices. In order to generate Hermite-Gaussian-like eigenvectors of F given the initial ones, a new method called the SOPA has been proposed, based on the sequential evaluation of the columns of a unitary matrix rather than the batch evaluation of that matrix as in the OPA. Surprisingly, the output of the SOPA has been proved to be equal to that of the GSA. Furthermore, it has been proved that for any of the GSA, OPA, or SOPA, the output is invariant under the change of the input initial orthonormal eigenvectors of F.

APPENDIX A
PROOF OF LEMMA 1

The modal decomposition of a Hermitian matrix A is

(A1)

where

(A2)

and all eigenvalues are real [15]. The diagonal matrix of eigenvalues can be expressed as

(A3)

where the real diagonal matrices appearing in (A3) are defined by (39)-(41). Substituting (A3) in (A1), one gets

(A4)

By comparing (36) and (A4), one obtains (38).

APPENDIX B

Statement of the Problem: Find the vector that minimizes

(B1)

subject to the constraints

(B2)

where the given vector appearing in (B1) is of compatible dimension, the matrix appearing in (B1) has linearly independent columns, and the matrix appearing in (B2) has linearly independent rows.

Solution: Augmenting the constraints (B2) to the criterion (B1) by means of a complex vector of Lagrange multipliers, one gets the following real augmented criterion:

(B3)

By virtue of the definition of the Euclidean norm, one obtains from (B1) and (B3)

(B4)

Since the augmented criterion is a real-valued scalar function of the unknown complex vector and its complex conjugate, a necessary and sufficient condition for minimization is

(B5)

where, in finding the gradient vector, one should view the vector and its complex conjugate as two different vectors, i.e., one should treat the complex conjugate as a constant vector [19]. Consequently, it follows from (B4) when evaluating the gradient that

(B6)

Upon applying condition (B5), one gets

(B7)


By applying condition (B2) in order to evaluate the vector of Lagrange multipliers, one obtains

(B8)

Substituting (B8) in (B7), one obtains

(B9)

REFERENCES
[1] V. Namias, "The fractional order Fourier transform and its application to quantum mechanics," J. Inst. Math. Appl., vol. 25, pp. 241-265, 1980.
[2] A. C. McBride and F. H. Kerr, "On Namias's fractional Fourier transforms," IMA J. Appl. Math., vol. 39, pp. 159-175, 1987.
[3] L. B. Almeida, "The fractional Fourier transform and time-frequency representations," IEEE Trans. Signal Processing, vol. SP-42, pp. 3084-3091, Nov. 1994.
[4] S.-C. Pei, C.-C. Tseng, and M.-H. Yeh, "A new discrete fractional Fourier transform based on constrained eigendecomposition of DFT matrix by Lagrange multiplier method," IEEE Trans. Circuits Syst. II, vol. 46, pp. 1240-1245, Sept. 1999.
[5] S.-C. Pei, M.-H. Yeh, and C.-C. Tseng, "Discrete fractional Fourier transform based on orthogonal projections," IEEE Trans. Signal Processing, vol. SP-47, pp. 1335-1348, May 1999.
[6] Ç. Candan, M. A. Kutay, and H. M. Ozaktas, "The discrete fractional Fourier transform," IEEE Trans. Signal Processing, vol. SP-48, pp. 1329-1337, May 2000.
[7] Ç. Candan, "The discrete fractional Fourier transform," M.S. thesis, Dept. Elect. Electron. Eng., Bilkent Univ., Ankara, Turkey, 1998.
[8] H. M. Ozaktas, Z. Zalevsky, and M. A. Kutay, The Fractional Fourier Transform With Applications in Optics and Signal Processing. Chichester, U.K.: Wiley, 2001.
[9] H. Dym and H. P. McKean, Fourier Series and Integrals. San Diego, CA: Academic Press, 1972.
[10] B. W. Dickinson and K. Steiglitz, "Eigenvectors and functions of the discrete Fourier transform," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 25-31, Feb. 1982.
[11] F. R. Gantmacher, The Theory of Matrices, 2nd ed. New York: Chelsea, 1990, vol. 1.
[12] J. H. McClellan and T. W. Parks, "Eigenvalue and eigenvector decomposition of the discrete Fourier transform," IEEE Trans. Audio Electroacoust., vol. AU-20, pp. 66-74, Mar. 1972.
[13] L. Auslander and R. Tolimieri, "Is computing with the finite Fourier transform pure or applied mathematics?," Bull. Amer. Math. Soc., vol. 1, no. 6, pp. 847-897, Nov. 1979.
[14] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD: Johns Hopkins Univ. Press, 1989.
[15] G. Strang, Linear Algebra and Its Applications, 3rd ed. San Diego, CA: Harcourt Brace Jovanovich, 1988.
[16] S. H. Friedberg, A. J. Insel, and L. E. Spence, Linear Algebra. Englewood Cliffs, NJ: Prentice-Hall, 1979.
[17] Z. I. Borevich and I. R. Shafarevich, Number Theory, N. Greenleaf, Ed. New York: Academic, 1966.
[18] G. W. Stewart, Introduction to Matrix Computations. New York: Academic, 1973.
[19] D. H. Brandwood, "A complex gradient operator and its application in adaptive array theory," Proc. Inst. Elect. Eng., pts. F and H, vol. 130, no. 1, pp. 11-16, 1983.

Magdy Tawfik Hanna (S'81-M'85-SM'90) received the B.S. degree (with honors) from Alexandria University, Alexandria, Egypt, in 1975, the M.S. degree from Cairo University, Cairo, Egypt, in 1980, and the M.S. and Ph.D. degrees from the University of Pittsburgh, Pittsburgh, PA, in 1983 and 1985, respectively, all in electrical engineering. From 1976 to 1980, he was a Research Assistant with the Planning Techniques Center, Institute of National Planning, Cairo, Egypt. From 1981 to 1985, he was a Teaching Fellow in the Department of Electrical Engineering, University of Pittsburgh. During the summer of 1983, he was a Research Assistant with the Very Large Array Telescope, National Radio Astronomy Observatory, Socorro, NM. From 1985 to 1987, he was a Visiting Assistant Professor in the Department of Electrical Engineering, University of Iowa, Iowa City. Since 1988, he has been with the Department of Engineering Mathematics and Physics, Faculty of Engineering, Cairo University, Fayoum Branch, Fayoum, Egypt, where he is now a Professor and Chairman of the Department. From September 1992 to August 1996, he was an expatriate faculty member with the Department of Electrical Engineering, University of Bahrain, the State of Bahrain. His main areas of research interest are the fractional Fourier transform, wavelets and filter banks, two-dimensional digital filter design, and array signal processing. Dr. Hanna is a member of Eta Kappa Nu and Sigma Xi. The University of Pittsburgh recognized him as a University Scholar on its Annual Honors Day Convocation in 1985 for superior performance in the graduate program.

Nabila Philip Attalla Seif was born in Cairo, Egypt, on November 6, 1951. She received the B.S. degree in electronics and communication engineering and the B.S. degree in mathematics from Cairo University, Cairo, Egypt, in 1973 and 1975, respectively, and the M.S. and Ph.D. degrees in mathematics from Colorado State University, Fort Collins, in 1978 and 1981, respectively. She is currently an Associate Professor in the Department of Engineering Mathematics and Physics, Faculty of Engineering, Cairo University. Her main research interests are in numerical linear algebra and approximation theory.

Waleed Abd El Maguid Ahmed was born in Cairo, Egypt, in 1974. He received the B.S. degree (with honors) in computer and control systems and the M.S. degree (with a thesis entitled "A new technique for learning neural networks") from Cairo University, Cairo, Egypt, in 1996 and 2001, respectively. He is currently working toward the Ph.D. degree at the same university in the area of fractional Fourier transforms. He has been a Teaching Assistant in the Department of Engineering Mathematics and Physics, Faculty of Engineering, Cairo University, Fayoum Branch, Fayoum, since 1997. His main research interests are in the fractional Fourier transform and its applications in optical signal analysis, time-frequency analysis, optimization techniques for learning neural networks, and digital signal processing.
