

IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 52, NO. 3, MARCH 2004

Split Wiener Filtering With Application in Adaptive Systems


Leonardo S. Resende, Member, IEEE, João Marcos T. Romano, Member, IEEE, and Maurice G. Bellanger, Fellow, IEEE
Abstract—This paper proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multisplit filter structure, is considered. It is shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multisplit approach is extended to linear-phase adaptive filtering and linear prediction. The optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multisplit LMS algorithm.

Index Terms—Adaptive filtering, linear-phase filtering, linear prediction, linearly constrained filtering, split filtering, Wiener filtering.

I. INTRODUCTION

Nonrecursive systems have been frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications, the filter order must be large (e.g., noise and echo cancellation and channel equalization, to name a few in the communications field) in order to obtain acceptable performance. Consequently, an excessive number of multiplication operations is required, and the implementation of the filter becomes infeasible, even for the most powerful digital signal processors. The problem grows worse in adaptive filtering: besides the computational complexity, the convergence rate and the tracking capability of the algorithms also deteriorate with an increasing number of coefficients to be updated. Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance
Manuscript received March 21, 2002; revised May 19, 2003. The associate editor coordinating the review of this paper and approving it for publication was Dr. Naofal M. W. Al-Dhahir. L. S. Resende is with the Electrical Engineering Department, Federal University of Santa Catarina, 88040-900, Florianópolis-SC, Brazil (e-mail: leonardo@eel.ufsc.br). J. M. T. Romano is with the Communications Department, State University of Campinas, 13083-970, Campinas-SP, Brazil (e-mail: romano@decom.fee.unicamp.br). M. G. Bellanger is with the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers, 75141, Paris, France (e-mail: bellang@cnam.fr). Digital Object Identifier 10.1109/TSP.2003.822351

in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix [1]-[3]. Transform-domain LMS algorithms, based on transforms such as the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity [2], [4]. In general, this approach consists of using an orthogonal transform together with power normalization to speed up the convergence of the LMS algorithm. Other interesting and efficient approaches have also been proposed in the literature [5], [6], but they present the same tradeoff between performance and complexity.

Another alternative for overcoming the aforementioned drawbacks of nonrecursive adaptive systems is the split processing technique. Its fundamental principles were introduced when Delsarte and Genin proposed a split Levinson algorithm for real Toeplitz matrices in [7]. By identifying the redundancy in computing the symmetric and antisymmetric parts of the predictors, they reduced the number of multiplication operations of the standard Levinson algorithm by about one half. Subsequently, the same authors extended the technique to classical algorithms in linear prediction theory, such as the Schur, the lattice, and the normalized lattice algorithms [8]. A split LMS adaptive filter for autoregressive (AR) modeling (linear prediction) was proposed in [9] and generalized to a so-called unified approach [10], [11] by the introduction of continuous splitting and its application to a general transversal filtering problem. Nevertheless, an appropriate formulation of the split filtering problem has yet to be provided; such a formulation would bring more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, and concurrency. This is the purpose of the present paper.
By combining split transversal filtering with linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints, implemented with the so-called generalized sidelobe canceller (GSC) structure. Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. We show that such multisplit processing does not reduce the eigenvalue spread, but it does improve the diagonalization factor of the input signal correlation matrix. The interpretations of the splitting transform as a linearly constrained processing are then

1053-587X/04$20.00 © 2004 IEEE


Fig. 1. Split adaptive transversal filtering.

Fig. 2. Generalized sidelobe canceller.

considered in adaptive filtering, and a power-normalized, time-varying step-size LMS algorithm is suggested for updating the parameters of the proposed scheme. We also extend the approach to linear-phase adaptive filtering and linear prediction. Finally, simulation results obtained with the multisplit algorithm are presented and compared with the standard LMS, DCT-LMS, and recursive least squares (RLS) algorithms.

II. SPLIT TRANSVERSAL FILTERING

Let us start with the following statement: any finite sequence (e.g., the impulse response of a transversal filter) can be expressed as the sum of a symmetric sequence and an antisymmetric sequence. The symmetric (antisymmetric) sequence is given by half the sum (difference) of the original sequence and its backward version. Specifically, in matrix notation, let

w = [w_0, w_1, …, w_{N−1}]^T   (1)

denote the N-by-1 tap-weight vector of a transversal filter [see Fig. 1(a)]. With w_s and w_a denoting the vectors of the symmetric and antisymmetric parts of w, then

w = w_s + w_a   (2)

where

w_s = (1/2)(w + Jw)   (3)

w_a = (1/2)(w − Jw)   (4)

and J is the N-by-N reflection matrix (or exchange matrix), which has unit elements along the cross diagonal and zeros elsewhere; thus, Jw = [w_{N−1}, …, w_1, w_0]^T. The symmetry and antisymmetry conditions of w_s and w_a are, respectively, described by

Jw_s = w_s   (5)

and

Jw_a = −w_a   (6)

which can be easily verified from (3) and (4).

Now, consider the classical scheme of a transversal filter shown in Fig. 1, in which the tap-weight vector w is split into its symmetric and antisymmetric parts. The input signal x(n) and the desired response d(n) are modeled as wide-sense stationary discrete-time stochastic processes of zero mean. Without loss of generality, all the parameters are assumed to be real valued.

III. SPLIT FILTERING AS A LINEARLY CONSTRAINED FILTERING PROBLEM

The principle of linearly constrained optimal transversal filtering is to minimize the power of the estimation error e(n), subject to a set of linear equations defined by

C^T w = f   (7)

where C is the N-by-K constraint matrix and f is a K-element response vector. An alternative implementation is represented in block-diagram form in Fig. 2. This structure is called the generalized sidelobe canceller (GSC) [12], [14]. Essentially, it consists of changing a constrained minimization problem into an unconstrained one. The columns of the N-by-(N − K) matrix B represent a basis for the orthogonal complement of the subspace spanned by the columns of C; the matrix B is termed the signal blocking matrix. The (N − K)-element vector v represents an unconstrained filter, and the coefficient vector w_q = C(C^T C)^{-1} f represents a filter that satisfies the constraints (C^T w_q = f).

The splitting of w into its symmetric part w_s and antisymmetric part w_a (see Fig. 1) can be interpreted as a linearly constrained optimization problem. Let us define matrices C_s and C_a, as well as vectors f_s and f_a, as

C_s = [ I_M ; 0_{1×M} ; −J_M ],   f_s = 0   (8)

C_a = [ I_M , 0_{M×1} ; 0_{1×M} , 1 ; J_M , 0_{M×1} ],   f_a = 0   (9)

for N odd (N = 2M + 1), where the middle row corresponds to the center tap, or

C_s = [ I_M ; −J_M ],   f_s = 0   (10)

C_a = [ I_M ; J_M ],   f_a = 0   (11)

for N even (N = 2M), where I_M and J_M are the M-by-M identity and reflection matrices and the blocks are stacked row-wise.
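As a concrete check, the following sketch (NumPy; the variable names are ours, not the paper's) builds the reflection matrix J, splits an arbitrary tap-weight vector into its symmetric and antisymmetric parts as in (1)-(6), and verifies the even-case constraint matrices C_s and C_a, taken here as the stacks [I; −J] and [I; J] (our reading of (10) and (11)):

```python
import numpy as np

N = 8                      # filter length (even case, N = 2M)
M = N // 2
rng = np.random.default_rng(0)

J = np.fliplr(np.eye(N))   # reflection (exchange) matrix
w = rng.standard_normal(N) # arbitrary tap-weight vector

w_s = 0.5 * (w + J @ w)    # symmetric part, eq. (3)
w_a = 0.5 * (w - J @ w)    # antisymmetric part, eq. (4)

assert np.allclose(w, w_s + w_a)    # eq. (2)
assert np.allclose(J @ w_s, w_s)    # eq. (5)
assert np.allclose(J @ w_a, -w_a)   # eq. (6)

# Even-case constraint matrices: C_s^T w = 0 enforces symmetry,
# C_a^T w = 0 enforces antisymmetry, and each one blocks the
# subspace spanned by the other.
Jm = np.fliplr(np.eye(M))
C_s = np.vstack([np.eye(M), -Jm])
C_a = np.vstack([np.eye(M), Jm])

assert np.allclose(C_s.T @ w_s, 0)  # w_s satisfies the symmetry constraint
assert np.allclose(C_a.T @ w_a, 0)  # w_a satisfies the antisymmetry constraint
assert np.allclose(C_s.T @ C_a, 0)  # mutual orthogonality of the two subspaces
```

The last assertion is the property that lets each constraint matrix serve as the signal blocking matrix of the other branch in the GSC structure.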


Fig. 3. GSC implementation of the split filter.

Then, consider the constraints

C_s^T w_s = f_s = 0   (12)

and

C_a^T w_a = f_a = 0   (13)

on w_s and w_a. They establish the symmetry and antisymmetry properties of w_s and w_a, respectively. Notice that (12), with f_s = 0, requires that w_s be orthogonal to the subspace spanned by the columns of C_s. Likewise, (13) requires w_a to be orthogonal to the subspace spanned by the columns of C_a. Using the GSC structure and these constraints on the respective branches of w_s and w_a in Fig. 1(b) leads to the block diagram shown in Fig. 3(a) (N even). However, since f_s = 0 and f_a = 0, the quiescent vectors of the two branches vanish, eliminating those branches. Moreover, it is easy to verify that C_s^T C_a = 0, so that C_a is a possible choice of signal blocking matrix for the symmetric branch: its columns span the subspace that is the orthogonal complement of the subspace spanned by the columns of C_s, and vice versa. This property can also be verified by the fact that the blocking matrix C_a forces w_s to be symmetric through (12), whereas C_s would force it to be antisymmetric. Considering the above properties, Fig. 3(a) can be simplified to the block diagram shown in Fig. 3(b).

It is interesting to observe that the vectors v_s and v_a are merely composed of the first N/2 coefficients of w_s and w_a. This can easily be verified by noting that the premultiplication of w_s by C_a^T yields 2v_s and that of w_a by C_s^T yields 2v_a. The estimation error is then given by

e(n) = d(n) − (w_s + w_a)^T x(n)   (14)

where

x(n) = [x(n), x(n − 1), …, x(n − N + 1)]^T   (15)

denotes the N-by-1 tap-input vector. In the mean-squared-error sense, the vectors w_s and w_a are chosen to minimize the cost function

J = σ_d^2 − 2 w_s^T p − 2 w_a^T p + w_s^T R w_s + w_a^T R w_a + 2 w_s^T R w_a   (16)

where σ_d^2 is the variance of the desired response d(n), R is the N-by-N correlation matrix of x(n), and p is the N-by-1 cross-correlation vector between x(n) and d(n).

Appealing to the symmetric and Toeplitz properties of the correlation matrix R, it can easily be shown that JRJ = R. A matrix with this property is said to be centrosymmetric [15] and, in the case of R, can be partitioned into the form

R = [ R_1 , R_2 ; R_2^T , J_M R_1 J_M ]   (17)

for N even (N = 2M), where R_1 and R_2 are M-by-M correlation matrices. When N is odd (N = 2M + 1), it can be partitioned as

R = [ R_1 , J_M r , R_2 ; r^T J_M , σ_x^2 , r^T ; R_2^T , r , J_M R_1 J_M ]   (18)

where r is an M-by-1 correlation vector of x(n), and σ_x^2 is a scalar denoting

the variance of the input signal x(n). Then, it is easy to show by direct substitution of (10), (11), and (17) (or (8), (9), and (18) for N odd) that the last term in (16) is equal to zero. In other words, w_s^T x(n) and w_a^T x(n) are statistically uncorrelated, and consequently, the symmetric and antisymmetric parts can be optimized separately. Thus, the optimum solutions are given by

v_{s,opt} = (C_a^T R C_a)^{-1} C_a^T p   (19)

and

v_{a,opt} = (C_s^T R C_s)^{-1} C_s^T p   (20)

and the scheme of Fig. 3(b) corresponds to the optimum split Wiener filter

w_opt = R^{-1} p = w_{s,opt} + w_{a,opt}   (21)

where

w_{s,opt} = C_a v_{s,opt}   (22)

and

w_{a,opt} = C_s v_{a,opt}   (23)

The proof of (21) is given in the Appendix. Notice that (22) and (19) define the true optimum linear-phase Wiener filter w_{s,opt}, having both constant group delay and constant phase delay (symmetric impulse response). On the other hand, (23) and (20) define a second type of optimum generalized linear-phase Wiener filter w_{a,opt} (affine-phase filter), having only a constant group delay (antisymmetric impulse response).

IV. MULTISPLIT AND LINEAR TRANSFORMS

For ease of presentation, let N = 2^K, where K is an integer greater than one. Now, if each branch in Fig. 3(b) is considered separately, the transversal filters v_s and v_a can also be split into their symmetric and antisymmetric parts. By proceeding continuously with this process and splitting the resulting filters, we arrive, after K steps composed of 2^{k−1} splitting operations each (k = 1, 2, …, K), at the multisplit scheme shown in Fig. 4. There, the blocking matrices at each stage are of the forms in (10) and (11), and w_0, w_1, …, w_{N−1} are the single parameters of the resulting zero-order filters.

Fig. 4. Multisplit adaptive filtering.
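The separability expressed by (19)-(23) can be checked numerically. A minimal sketch (NumPy; the blocking-matrix names and the toy correlation model are our assumptions): it verifies that the two branch solutions, computed independently, add up to the full Wiener solution R^{-1} p:

```python
import numpy as np

N = 8
M = N // 2
rng = np.random.default_rng(1)

idx = np.arange(N)
R = 0.9 ** np.abs(idx[:, None] - idx[None, :])  # symmetric Toeplitz => centrosymmetric
p = rng.standard_normal(N)                       # arbitrary cross-correlation vector

Jm = np.fliplr(np.eye(M))
B_sym = np.vstack([np.eye(M), Jm])    # blocking matrix of the symmetric branch
B_anti = np.vstack([np.eye(M), -Jm])  # blocking matrix of the antisymmetric branch

# Half-size Wiener problems solved independently (cf. (19), (20)):
v_sym = np.linalg.solve(B_sym.T @ R @ B_sym, B_sym.T @ p)
v_anti = np.linalg.solve(B_anti.T @ R @ B_anti, B_anti.T @ p)

w_split = B_sym @ v_sym + B_anti @ v_anti  # cf. (21)-(23)
w_full = np.linalg.solve(R, p)             # ordinary Wiener solution

assert np.allclose(B_sym.T @ R @ B_anti, 0)  # cross term vanishes (centrosymmetry)
assert np.allclose(w_split, w_full)          # the split filter attains the optimum
```

The first assertion is the numerical counterpart of the vanishing last term in (16); the second is the statement proved in the Appendix.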

The above multisplit scheme can be viewed as a linear transformation of x(n), which is denoted by

x̃(n) = T x(n)   (24)

where T is the product of the K sparse splitting-stage matrices [see (25) and (26)]. It can be verified by direct substitution that T, for N = 2^K, is a matrix of +1s and −1s in which the inner product of any two distinct columns is zero. In fact, T is a nonsingular matrix, and T^T T = N I; in other words, the columns of T are mutually orthogonal. The correlation matrix of x̃(n) is given by

R̃ = E[x̃(n) x̃^T(n)] = T R T^T   (27)

The matrix on the right side of the above equation is obtained from N R by means of a similarity transformation [16], since T^T = N T^{-1}. Therefore, based on (27), we can stress that the linear transformation does not affect the eigenvalue spread of the input data correlation matrix R.

Now, an interesting point to mention is that the columns of T can be permuted without affecting its properties. This amounts to a rearrangement of the single parameters in Fig. 4 in different sequences; there are N! possible permutations. The remarkable result is that one of them turns T into the N-order Hadamard matrix H, so that the multisplit scheme can be represented in the compact form shown in Fig. 5. The Hadamard matrix of order 2N can be constructed from that of order N as follows:

H_{2N} = [ H_N , H_N ; H_N , −H_N ]   (28)

Starting with H_1 = [1], this gives H_2, H_4, H_8, and Hadamard matrices of all orders that are powers of two. An alternative way of describing (28) is

H_{2N} = H_2 ⊗ H_N   (29)

for N ≥ 2, where ⊗ denotes the Kronecker product of matrices, and

H_2 = [ 1 , 1 ; 1 , −1 ]   (30)
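The Sylvester construction (28) and its Kronecker form (29), (30) can be written directly; a small sketch verifying both, and the column orthogonality T^T T = N I:

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]])   # eq. (30)

def hadamard(n):
    """Hadamard matrix of order n (a power of two), built via eq. (28)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])   # H_{2N} from H_N
    return H

H8 = hadamard(8)
assert np.array_equal(H8, np.kron(H2, hadamard(4)))  # eq. (29): H_{2N} = H_2 (x) H_N
assert np.array_equal(H8.T @ H8, 8 * np.eye(8))      # mutually orthogonal columns
```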


Fig. 5. Hadamard transform of the input x(n).

Fig. 6. Flow graph of butterfly computation for M x(n).

Another very interesting linear transform is obtained by making the choices in (31) and (32). Using (25), this results in a linear transformation of x(n) with the flow graph depicted in Fig. 6. Denoting this linear transform by M, as in (33), the multisplit scheme is also represented by Fig. 5 with M substituted for H.

At this stage, it is worth pointing out that the aforementioned linear transforms do not convert the vector x(n) into a corresponding input vector of uncorrelated variables. Therefore, the single parameters in Figs. 4 and 5 cannot be optimized separately by the mean-squared-error criterion. Consider the linear transform T. Premultiplying and postmultiplying R in (17) by T and T^T, we get (34). This enables us to verify that the correlation matrix of x̃(n) is block diagonal and that the multisplit operation improves the diagonalization of R. The improvement can be measured by the diagonalization factor defined in [11] by (35).

With the T_1 and T_2 transforms, the zeros in R̃ change position, but they never occur on the main diagonal. Hence, T, T_1, and T_2 are not orthogonal transforms that decorrelate the input signal; in other words, (34) is not a unitary similarity transformation. As an exception, when the conditions in (36) and (37) hold, (34) does become a unitary similarity transformation; as a matter of fact, (34) then reveals that the multisplit operation makes the two subvectors of the transformed input, defined in (38), mutually uncorrelated.
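Two claims of this section are easy to check numerically: the transform leaves the eigenvalue spread of R unchanged, per (27), while it concentrates energy on the main diagonal. The sketch below uses the Hadamard matrix as T and, since (35) is not reproduced above, measures diagonalization simply as diagonal energy over total energy (an assumed surrogate for the factor of [11]):

```python
import numpy as np

def hadamard(n):
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
idx = np.arange(N)
R = 0.9 ** np.abs(idx[:, None] - idx[None, :])  # Toeplitz input correlation matrix

T = hadamard(N)
Rt = T @ R @ T.T                                 # transformed correlation, cf. (27)

def spread(A):
    eig = np.linalg.eigvalsh(A)
    return eig.max() / eig.min()

def diag_fraction(A):
    # assumed surrogate for the diagonalization factor of [11]
    return np.sum(np.diag(A) ** 2) / np.sum(A ** 2)

# T.T = N * inv(T): eigenvalues are scaled by N, so the spread is unchanged.
assert np.isclose(spread(Rt), spread(R))
# The transform increases the fraction of energy on the diagonal.
assert diag_fraction(Rt) > diag_fraction(R)
```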


Finally, the optimum coefficients w_i, for i = 0, 1, …, N − 1, in the scheme of Fig. 5 can be obtained by minimizing the mean-squared error, which results in (39); from (39), we also have (40). From here, we describe the application of the constrained-optimization interpretation to the generation of new multisplit adaptive filtering structures and algorithms.

V. MULTISPLIT ADAPTIVE FILTERING

In the adaptive context, we can exploit the aforementioned properties of the multisplit transform in order to propose a power-normalized, time-varying step-size LMS algorithm that updates the single parameters independently. Let us start with N = 2. Since, in this case, the two transformed input samples are uncorrelated, a least-squares version of the Newton method can be applied to update the two parameters as in (41), with the quantities defined in (42) and (43). Equation (43) is an estimate of the eigenvalues of the correlation matrix of the transformed input, which defines the step sizes used to adapt the individual weights in (41). To account for operation in a nonstationary environment, a forgetting factor is included in this recursive computation of the eigenvalues; a unit forgetting factor applies to a wide-sense stationary environment. Notice that a particular choice of the step size in (41) corresponds to the RLS algorithm applied to the two parameters independently.

For N > 2, despite the residual correlation among the variables of the transformed input vector, the same strategy as in (41) can be used, since the diagonalization factor of R̃ has been improved by the multisplit transforms. In other words, the multisplit orthogonal transform together with power normalization can be used to improve the convergence rate of the LMS algorithm. In this case, based on (27), convergence of the single parameters is assured by the step-size condition (44), with the quantities defined in (45) and (46), where λ_max is the largest eigenvalue of R̃.

Table I presents a summary of the proposed algorithm for multisplit adaptive filtering. It is important to stress that the use of the Hadamard transform in Table I is conditioned on N being a power of two; otherwise, the linear transform matrix is not composed only of +1s and −1s, and additional multiplication operations are required. Notwithstanding, the number of filter coefficients can usually be set to the next power of two to take advantage of the implementation simplicity and reduced computational burden. The butterfly computation in Fig. 6 requires only addition operations per iteration; in other words, no multiplication operations are demanded. Finally, the procedure can be extended to complex parameters by applying the split processing to the real part as well as to the imaginary part.

VI. LINEAR-PHASE ADAPTIVE FILTERING

In applications that require an adaptive filter with a linear-phase response, a symmetry constraint on the impulse response has been used. In that case, the DFT-LMS, DCT-LMS, and RLS techniques, to name a few, would not solve the problem directly, and a symmetry constraint would have to be imposed. Such applications fit the multisplit technique perfectly, because the symmetric or antisymmetric condition on the impulse response of the filter is already guaranteed: we need to consider just one branch in Fig. 3(b), with the input samples for updating the single parameters taken from the symmetric branch under the symmetric impulse-response constraint, and from the antisymmetric branch otherwise.

There is another interesting point concerning linear-phase adaptive filtering. Consider again the structure of the split filter in Fig. 3(b). The least-squares (LS) criterion can be used in either the symmetric or the antisymmetric part to obtain the linear-phase filters; for example, the optimum solution for the filter with symmetric impulse response follows from applying the LS criterion to the symmetric branch alone. Proceeding, in the adaptive context, the RLS algorithm can be directly applied to update v_s. On the other hand, it is worth


TABLE I MULTISPLIT LMS (MS-LMS) ALGORITHM
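Table I itself is not reproduced in this extraction. The recursion it summarizes, as described in Section V, can be sketched as follows (a minimal sketch; the step-size, forgetting-factor, and initialization choices are our assumptions, not the paper's):

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix of order n (a power of two), Sylvester construction."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def ms_lms(x, d, N=8, mu=0.05, lam=0.99, eps=1e-6):
    """Power-normalized, time-varying step-size LMS on the
    Hadamard-transformed tap-input vector (MS-LMS sketch)."""
    T = hadamard(N)
    w = np.zeros(N)            # the N single parameters
    P = np.full(N, float(N))   # per-coefficient power estimates (rough init)
    err = np.zeros(len(x))
    for n in range(N - 1, len(x)):
        u = T @ x[n - N + 1:n + 1][::-1]   # transformed tap-input vector
        err[n] = d[n] - w @ u
        P = lam * P + (1 - lam) * u ** 2   # recursive power (eigenvalue) estimates
        w = w + mu * err[n] * u / (P + eps)
    return w, err

# Toy system identification: an unknown 8-tap FIR filter plus weak noise.
rng = np.random.default_rng(2)
h = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, err = ms_lms(x, d)
assert np.mean(err[-500:] ** 2) < np.mean(err[100:600] ** 2)  # error power decreases
```

Note that the Hadamard multiplication involves only additions and sign changes, which is the source of the computational advantage discussed above.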

pointing out that fast LS algorithms, which exploit the time-shift relationship of the input data vector, cannot be applied, due to the fact that x̃(n) does not satisfy this property [3].

A final observation is required here. The direct application of the LS criterion in the scheme of Fig. 3(b), in order to obtain the optimum Wiener filter by computing v_s and v_a independently, corresponds to an approximate (quasioptimal) LS solution. Since the LS-estimated correlation matrix is not centrosymmetric, the last term in (16) becomes nonzero, and consequently, v_s and v_a cannot be independently computed.

VII. LINEAR PREDICTION

All the split and multisplit transversal filtering theory developed above can be applied to linear prediction by taking the desired response to be a suitably advanced or delayed version of the input signal. In the case of linear-phase prediction, the appropriate structure of the prediction-error filter is presented in Fig. 7, in which both forward and backward prediction have been considered. For forward prediction, the sign of x(n) in the sum block is positive, whereas the sign of the delayed sample depends on the symmetry or antisymmetry condition; the opposite combination corresponds to backward prediction.

VIII. SIMULATION RESULTS

To evaluate the performance of the MS-LMS algorithm in adaptive filtering, the same equalization system as in [2, Ch. 5] is considered (see Fig. 8). The input of the channel is binary, with
Fig. 7. Linear phase prediction-error filter.

Fig. 8. Adaptive equalizer for simulation.

x(n) = ±1, and the impulse response of the channel is described by the raised cosine

h(n) = (1/2)[1 + cos((2π/W)(n − 2))],  n = 1, 2, 3
h(n) = 0,  otherwise   (47)
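The effect of the parameter in (47) on the conditioning of the equalizer input can be illustrated numerically. A sketch, assuming an 11-tap equalizer, a unit-power input, and an additive-noise variance of 0.001 (the noise variance is our assumption for this illustration):

```python
import numpy as np

def input_correlation(W, n_taps=11, noise_var=1e-3):
    """Correlation matrix of the equalizer input for the channel of (47)."""
    n = np.arange(1, 4)
    h = 0.5 * (1 + np.cos(2 * np.pi / W * (n - 2)))   # raised-cosine channel
    # Autocorrelation of the channel output for a unit-variance white input,
    # plus the white-noise contribution on the main diagonal.
    r = np.correlate(h, h, mode='full')[len(h) - 1:]  # r(0), r(1), r(2)
    r = np.concatenate([r, np.zeros(n_taps - len(r))])
    i = np.arange(n_taps)
    return r[np.abs(i[:, None] - i[None, :])] + noise_var * np.eye(n_taps)

def spread(R):
    eig = np.linalg.eigvalsh(R)
    return eig.max() / eig.min()

chi_low = spread(input_correlation(2.9))
chi_high = spread(input_correlation(3.5))
assert chi_low < chi_high   # a larger W yields a larger eigenvalue spread
```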


where W controls the eigenvalue spread χ(R) of the correlation matrix of the tap inputs in the equalizer, with χ(R) = 6.0782 for W = 2.9 and χ(R) = 46.8216 for W = 3.5. The channel output is corrupted by an additive white noise sequence, and the equalizer has 11 coefficients.

Fig. 9 shows a comparison of the ensemble-averaged error performances of the DCT-LMS, MS-LMS, standard LMS, and RLS algorithms for W = 2.9 and W = 3.5 (100 independent trials). The good performance of the MS-LMS algorithm can be observed in terms of convergence rate when compared with the standard LMS algorithm. On the other hand, we can verify that the MS-LMS algorithm is somewhat sensitive to variations in the eigenvalue spread, so that the DCT-LMS exhibits better performance in the second case. This shows clearly that the multisplit preprocessing does not orthogonalize the input data vector, but it does improve the diagonalization of the input signal correlation matrix, which is taken into account in the power normalization used by the MS-LMS algorithm. Nevertheless, as far as computational burden is concerned, the simplicity of the multisplit transform is notable when compared with the DCT. Such an aspect, together with the convergence improvement, justifies its application.

Finally, the good performance of the linear-phase RLS (LP-RLS) algorithm is also illustrated in this equalization example, where the considered channel has the linear-phase property. It is clear that the LP-RLS technique surpasses all the other approaches in terms of convergence rate, even the standard RLS, because the symmetry of the response is built into the adaptive filtering structure.

Fig. 9. Learning curves: adaptive equalization. (a) χ(R) = 6.0782. (b) χ(R) = 46.8216.

IX. CONCLUSION

In the present work, it has been shown that the split transversal filtering problem can be formulated and solved using linearly constrained optimization and implemented by means of a parallel GSC structure. The optimum split Wiener filter has thereby been introduced, together with its symmetric and antisymmetric linear-phase parts. Furthermore, a continued split procedure, which leads to a multisplit filter structure, has been considered. It has also been shown that the multisplit transform is not an input whitening transformation; instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power-normalized, time-varying step-size LMS algorithm, which exploits the nature of the transformed input correlation matrix, has been proposed for updating the adaptive filter coefficients. The novel approach is generic: it can be used for any value of the filter order N, for complex as well as real parameters, and it can be extended to linear-phase adaptive filtering and linear prediction. For a power-of-two value of N, the proposition corresponds to a Hadamard transform-domain filter, whose structure exhibits high modularity, parallelism, and concurrency, and is thus suited to very large-scale integration (VLSI) implementation. Finally, the simulation results confirm that split processing structures provide a powerful and interesting tool for adaptive filtering.

APPENDIX
PROOF OF (21)

The purpose of this Appendix is to show that the optimum Wiener filter can be split into two filters, with symmetric and antisymmetric impulse responses, connected in parallel according to Fig. 3(b) and (21). Using (19)-(23), we need to prove that

R^{-1} p = C_a (C_a^T R C_a)^{-1} C_a^T p + C_s (C_s^T R C_s)^{-1} C_s^T p   (48)

or

[ C_a (C_a^T R C_a)^{-1} C_a^T + C_s (C_s^T R C_s)^{-1} C_s^T ] R = I   (49)

The left side of (49) can be rewritten as

U (U^T R U)^{-1} U^T R   (50)


where U = [C_a  C_s], and it has been taken into account that C_a^T R C_s = 0. The square matrix U denotes the similarity transformation that splits the Wiener filter once into its symmetric and antisymmetric parts. In fact, the subspaces spanned by the columns of C_a and C_s are complementary, and U is an orthogonal transform with U^T U = 2I, so that U^T = 2U^{-1}. From (50), since U^T R = (U^T R U) U^{-1}, we have

U (U^T R U)^{-1} (U^T R U) U^{-1} = U U^{-1} = I   (51)

which proves the veracity of (21).

ACKNOWLEDGMENT

The authors wish to thank Prof. J. C. M. Bermudez and R. D. Souza, from the Federal University of Santa Catarina, and the anonymous reviewers for their suggestions, which have improved the presentation of the material in this paper.

REFERENCES
[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[2] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[3] M. G. Bellanger, Adaptive Digital Filters and Signal Analysis, 2nd ed. New York: Marcel Dekker, 2001.
[4] F. Beaufays, "Transform-domain adaptive filters: An analytical approach," IEEE Trans. Signal Processing, vol. 43, pp. 422-431, Feb. 1995.
[5] J. S. Goldstein, I. S. Reed, and L. L. Scharf, "A multistage representation of the Wiener filter based on orthogonal projections," IEEE Trans. Inform. Theory, vol. 44, pp. 2943-2959, Nov. 1998.
[6] P. Strobach, "Low-rank adaptive filters," IEEE Trans. Signal Processing, vol. 44, pp. 2932-2947, Dec. 1996.
[7] P. Delsarte and Y. V. Genin, "The split Levinson algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 470-478, June 1986.
[8] P. Delsarte and Y. V. Genin, "On the splitting of classical algorithms in linear prediction theory," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 645-653, May 1987.
[9] K. C. Ho and P. C. Ching, "Performance analysis of a split-path LMS adaptive filter for AR modeling," IEEE Trans. Signal Processing, vol. 40, pp. 1375-1382, June 1992.
[10] P. C. Ching and K. F. Wan, "A unified approach to split structure adaptive filtering," in Proc. IEEE ISCAS, Detroit, MI, May 1995.
[11] K. F. Wan and P. C. Ching, "Multilevel split-path adaptive filtering and its unification with discrete Walsh transform adaptation," IEEE Trans. Circuits Syst. II, vol. 44, pp. 147-151, Feb. 1997.
[12] L. J. Griffiths and C. W. Jim, "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. Antennas Propagat., vol. AP-30, pp. 27-34, Jan. 1982.
[13] B. D. Van Veen and K. M. Buckley, "Beamforming: A versatile approach to spatial filtering," IEEE Acoust., Speech, Signal Processing Mag., vol. 5, pp. 4-24, Apr. 1988.
[14] S. Haykin and A. Steinhardt, Eds., Adaptive Radar Detection and Estimation. New York: Wiley, 1992.
[15] S. L. Marple, Jr., Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[16] B. Noble and J. W. Daniel, Applied Linear Algebra, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1988.

Leonardo S. Resende (M'96) received the electrical engineering degree from the Pontifical Catholic University of Minas Gerais (PUC-MG), Belo Horizonte, Brazil, in 1988, and the M.S. and Ph.D. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1991 and 1996, respectively. From October 1992 to September 1993, he worked toward the doctoral degree at the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers (CNAM), Paris, France. Since 1996, he has been with the Electrical Engineering Department, Federal University of Santa Catarina (UFSC), Florianópolis, Brazil, where he is an Associate Professor. His research interests are in constrained and unconstrained digital signal processing and adaptive filtering.

João Marcos T. Romano (M'90) was born in Rio de Janeiro, Brazil, in 1960. He received the B.S. and M.S. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1981 and 1984, respectively. In 1987, he received the Ph.D. degree from the University of Paris-XI, Paris, France. In 1988, he joined the Communications Department of the Faculty of Electrical and Computer Engineering, UNICAMP, where he is now a Professor. He served as an Invited Professor with the University René Descartes, Paris, during the winter of 1999 and at the Communications and Electronics Laboratory of CNAM, Paris, during the winter of 2002. He is responsible for the Signal Processing for Communications Laboratory, and his research interests concern adaptive and intelligent signal processing and its applications to telecommunications problems such as channel equalization and smart antennas. Since 1988, he has been a recipient of the Research Fellowship of CNPq-Brazil. Prof. Romano is a member of the IEEE Electronics and Signal Processing Technical Committee. Since April 2000, he has been the President of the Brazilian Communications Society (SBrT), a sister society of ComSoc-IEEE, and since April 2003, he has been the Vice-Director of the Faculty of Electrical and Computer Engineering at UNICAMP.

Maurice G. Bellanger (F'84) graduated from the École Nationale Supérieure des Télécommunications (ENST), Paris, France, in 1965 and received the doctorate degree from the University of Paris in 1981. He joined T.R.T. (Philips Communications in France), Paris, in 1967, and since then, he has worked on digital signal processing and its applications in telecommunications. From 1974 to 1983, he was head of the telephone transmission department of the company, which developed speech, audio, video, and data terminals as well as multiplexing and line equipment for digital communication networks. He was then deputy scientific director of the company and, from 1988 to 1991, its scientific director. In 1991, he joined the Conservatoire National des Arts et Métiers (CNAM), Paris, a public education and research institute, where he is a Professor of electronics and head of the Electronics and Communications research team. Dr. Bellanger has published 100 papers, has been granted 16 patents, and is the author of two textbooks: Theory and Practice of Digital Signal Processing (New York: Wiley, 3rd ed., 2000) and Adaptive Filtering and Signal Analysis (New York: Marcel Dekker, 1st ed., 1987; 2nd ed., 2001). He was elected a Fellow of the IEEE for contributions to the theory of digital filtering and its applications to communication systems. He is a former associate editor of the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING and was the technical program chairman of ICASSP, Paris, in 1982. He was the president of EURASIP, the European Association for Signal Processing, from 1987 to 1992. He is a member of the French Academy of Technology.
