12.7 Wiener Filters for Filtering and Prediction


In many practical applications we are given an input signal {x(n)}, consisting of the sum of a desired signal {s(n)} and an undesired noise or interference {w(n)}, and we are asked to design a filter that suppresses the undesired interference component. In such a case, the objective is to design a system that filters out the additive interference while preserving the characteristics of the desired signal {s(n)}. In this section we treat the problem of signal estimation in the presence of an additive noise disturbance. The estimator is constrained to be a linear filter with impulse response {h(n)}, designed so that its output approximates some specified desired signal sequence {d(n)}. Figure 12.7.1 illustrates the linear estimation problem. The input sequence to the filter is x(n) = s(n) + w(n), and its output sequence is y(n). The difference between the desired signal and the filter output is the error sequence e(n) = d(n) - y(n). We distinguish three special cases:
1. If d(n) = s(n), the linear estimation problem is referred to as filtering.

2. If d(n) = s(n + D), where D > 0, the linear estimation problem is referred to as signal prediction. Note that this problem is different from the prediction considered earlier in this chapter, where d(n) = x(n + D), D ≥ 0.

3. If d(n) = s(n - D), where D > 0, the linear estimation problem is referred to as signal smoothing.

Our treatment will concentrate on filtering and prediction. The criterion selected for optimizing the filter impulse response {h(n)} is the minimization of the mean-square error. This criterion has the advantages of simplicity and mathematical tractability. The basic assumptions are that the sequences {s(n)}, {w(n)}, and {d(n)} are zero mean and wide-sense stationary. The linear filter will be assumed to be either FIR or IIR. If it is IIR, we assume that the input data {x(n)} are available over the infinite past. We begin with the design of the optimum FIR filter. The optimum linear filter, in the sense of minimum mean-square error (MMSE), is called a Wiener filter.

Figure 12.7.1 Model for linear estimation problem.


12.7.1 FIR Wiener Filter

Suppose that the filter is constrained to be of length M with coefficients {h(k), 0 ≤ k ≤ M - 1}. Hence its output y(n) depends on the finite data record x(n), x(n - 1), ..., x(n - M + 1):
$$ y(n) = \sum_{k=0}^{M-1} h(k)\,x(n-k) \qquad (12.7.1) $$
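As a small illustration of (12.7.1), the following Python sketch (my own, not from the text) computes the filter output from a finite data record, assuming real-valued data and x(n) = 0 for n < 0:

```python
import numpy as np

def fir_output(h, x):
    """Compute y(n) = sum_{k=0}^{M-1} h(k) x(n-k), Eq. (12.7.1).

    Assumes real-valued data and x(n) = 0 for n < 0.
    """
    M = len(h)
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(min(M, n + 1)):
            y[n] += h[k] * x[n - k]
    return y

# Equivalent to the first len(x) samples of np.convolve(h, x).
```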

The mean-square value of the error between the desired output d ( n) a nd y(n) is

I /
= E jdfn) -

M -1

L h(k)x(n- k)l
k=U

(12.7.2)

Since this is a quadratic function of the filter coefficients, the minimization of $\mathcal{E}_M$ yields the set of linear equations

$$ \sum_{k=0}^{M-1} h(k)\,\gamma_{xx}(l-k) = \gamma_{dx}(l), \qquad l = 0, 1, \ldots, M-1 \qquad (12.7.3) $$

where $\gamma_{xx}(k)$ is the autocorrelation of the input sequence {x(n)} and $\gamma_{dx}(k) = E[d(n)x^*(n-k)]$ is the crosscorrelation between the desired sequence {d(n)} and the input sequence {x(n), 0 ≤ n ≤ M - 1}. The set of linear equations that specify the optimum filter is called the Wiener-Hopf equation. These equations are also called the normal equations, encountered earlier in the chapter in the context of linear one-step prediction. In general, the equations in (12.7.3) can be expressed in matrix form as
$$ \Gamma_M \mathbf{h}_M = \boldsymbol{\gamma}_d \qquad (12.7.4) $$

where $\Gamma_M$ is an $M \times M$ (Hermitian) Toeplitz matrix with elements $\Gamma_{lk} = \gamma_{xx}(l-k)$ and $\boldsymbol{\gamma}_d$ is the $M \times 1$ crosscorrelation vector with elements $\gamma_{dx}(l)$, $l = 0, 1, \ldots, M-1$. The solution for the optimum filter coefficients is

$$ \mathbf{h}_{\mathrm{opt}} = \Gamma_M^{-1} \boldsymbol{\gamma}_d \qquad (12.7.5) $$

and the resulting minimum MSE achieved by the Wiener filter is
$$ \mathrm{MMSE}_M = \min_{\mathbf{h}_M} \mathcal{E}_M = \sigma_d^2 - \sum_{k=0}^{M-1} h_{\mathrm{opt}}(k)\,\gamma_{dx}^*(k) \qquad (12.7.6) $$

or, equivalently,

$$ \mathrm{MMSE}_M = \sigma_d^2 - \boldsymbol{\gamma}_d^{*T}\,\Gamma_M^{-1}\,\boldsymbol{\gamma}_d \qquad (12.7.7) $$


where $\sigma_d^2 = E|d(n)|^2$. Let us consider some special cases of (12.7.3). If we are dealing with filtering, then d(n) = s(n). Furthermore, if s(n) and w(n) are uncorrelated random sequences, as is usually the case in practice, then

$$ \gamma_{xx}(k) = \gamma_{ss}(k) + \gamma_{ww}(k), \qquad \gamma_{dx}(k) = \gamma_{ss}(k) \qquad (12.7.8) $$

and the normal equations in (12.7.3) become

$$ \sum_{k=0}^{M-1} h(k)\left[\gamma_{ss}(l-k) + \gamma_{ww}(l-k)\right] = \gamma_{ss}(l), \qquad l = 0, 1, \ldots, M-1 \qquad (12.7.9) $$

If we are dealing with prediction, then d(n) = s(n + D), where D > 0. Assuming that s(n) and w(n) are uncorrelated random sequences, we have

$$ \gamma_{dx}(l) = \gamma_{ss}(l + D) \qquad (12.7.10) $$
Hence the equations for the Wiener prediction filter become

$$ \sum_{k=0}^{M-1} h(k)\left[\gamma_{ss}(l-k) + \gamma_{ww}(l-k)\right] = \gamma_{ss}(l + D), \qquad l = 0, 1, \ldots, M-1 \qquad (12.7.11) $$

In all these cases, the correlation matrix to be inverted is Toeplitz. Hence the (generalized) Levinson-Durbin algorithm may be used to solve for the optimum filter coefficients.
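As an illustration (my own sketch, not part of the text), the filtering normal equations (12.7.9) can be solved in Python with SciPy's solve_toeplitz, which uses a Levinson-type solver that exploits the Toeplitz structure; the helper name fir_wiener and its argument names are assumptions made for this example.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def fir_wiener(gamma_ss, gamma_ww, M):
    """Length-M FIR Wiener filter for d(n) = s(n), x(n) = s(n) + w(n).

    gamma_ss, gamma_ww : autocorrelations at lags 0..M-1 (real-valued data)
    Returns (h_opt, mmse) using Eqs. (12.7.9) and (12.7.6).
    """
    gamma_xx = np.asarray(gamma_ss[:M]) + np.asarray(gamma_ww[:M])
    gamma_dx = np.asarray(gamma_ss[:M])
    # solve_toeplitz exploits the Toeplitz structure of Gamma_M (Levinson recursion)
    h_opt = solve_toeplitz(gamma_xx, gamma_dx)
    # Eq. (12.7.6) with sigma_d^2 = gamma_ss(0), real-valued data assumed
    mmse = gamma_ss[0] - np.dot(h_opt, gamma_dx)
    return h_opt, mmse
```

For the data of Example 12.7.1 below, fir_wiener([1, 0.6], [1, 0], 2) returns coefficients of approximately (0.451, 0.165) and an MMSE of approximately 0.45.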
EXAMPLE 12.7.1 Let us consider a signal x(n) = s(n) + w(n), where s(n) is an AR(1) process that satisfies the difference equation

$$ s(n) = 0.6\,s(n-1) + v(n) $$

where {v(n)} is a white noise sequence with variance $\sigma_v^2 = 0.64$, and {w(n)} is a white noise sequence with variance $\sigma_w^2 = 1$. We will design a Wiener filter of length M = 2 to estimate {s(n)}.

Solution. Since {s(n)} is obtained by exciting a single-pole filter by white noise, the power spectral density of s(n) is

$$ \Gamma_{ss}(f) = \sigma_v^2\,|H(f)|^2 = \frac{0.64}{|1 - 0.6\,e^{-j2\pi f}|^2} = \frac{0.64}{1.36 - 1.2\cos 2\pi f} $$


The corresponding autocorrelation sequence {$\gamma_{ss}(m)$} is

$$ \gamma_{ss}(m) = (0.6)^{|m|} $$

The equations for the filter coefficients are

$$ 2h(0) + 0.6\,h(1) = 1 $$
$$ 0.6\,h(0) + 2h(1) = 0.6 $$

Solution of these equations yields the result

$$ h(0) = 0.451, \qquad h(1) = 0.165 $$

The corresponding minimum MSE is

$$ \mathrm{MMSE}_2 = 1 - h(0)\gamma_{ss}(0) - h(1)\gamma_{ss}(1) = 1 - 0.451 - (0.165)(0.6) = 0.45 $$

This error can be reduced further by increasing the length of the Wiener filter (see Problem 12.35).
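As a check on the numbers above, here is a short Python sketch of mine that sets up and solves the 2x2 normal equations of the example directly:

```python
import numpy as np

# Example 12.7.1: gamma_ss(m) = 0.6**|m|, white noise with sigma_w^2 = 1, M = 2
gamma_ss = np.array([1.0, 0.6])            # gamma_ss(0), gamma_ss(1)
gamma_ww = np.array([1.0, 0.0])            # white-noise autocorrelation
gamma_xx = gamma_ss + gamma_ww             # Eq. (12.7.8)
gamma_dx = gamma_ss                        # d(n) = s(n)

Gamma = np.array([[gamma_xx[0], gamma_xx[1]],
                  [gamma_xx[1], gamma_xx[0]]])
h = np.linalg.solve(Gamma, gamma_dx)       # [0.4505..., 0.1648...] ~ (0.451, 0.165)
mmse = gamma_ss[0] - h @ gamma_dx          # Eq. (12.7.6): ~ 0.45
print(h, mmse)
```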

12.7.2 Orthogonality Principle in Linear Mean-Square Estimation

The normal equations for the optimum filter coefficients given by (12.7.3) can be obtained directly by applying the orthogonality principle in linear mean-square estimation. Simply stated, the mean-square error $\mathcal{E}_M$ in (12.7.2) is a minimum if the filter coefficients {h(k)} are selected such that the error is orthogonal to each of the data points in the estimate,
$$ E[e(n)\,x^*(n-l)] = 0, \qquad l = 0, 1, \ldots, M-1 \qquad (12.7.12) $$

where

$$ e(n) = d(n) - \sum_{k=0}^{M-1} h(k)\,x(n-k) \qquad (12.7.13) $$

Conversely, if the filter coefficients satisfy (12.7.12), the resulting MSE is a minimum. When viewed geometrically, the output of the filter, which is the estimate
$$ \hat{d}(n) = \sum_{k=0}^{M-1} h(k)\,x(n-k) \qquad (12.7.14) $$

is a vector in the subspace spanned by the data {x(k), 0 ≤ k ≤ M - 1}. The error e(n) is a vector from $\hat{d}(n)$ to d(n) [i.e., $d(n) = e(n) + \hat{d}(n)$], as shown in Fig. 12.7.2. The orthogonality principle states that the length $\mathcal{E}_M = E|e(n)|^2$ is a minimum when e(n) is perpendicular to the data subspace [i.e., e(n) is orthogonal to each data point x(k), 0 ≤ k ≤ M - 1].


Figure 12.7.2 Geometric interpretation of linear MSE problem.
We note that the solution obtained from the normal equations in (12.7.3) is unique if the data {x(n)} in the estimate $\hat{d}(n)$ are linearly independent. In this case, the correlation matrix $\Gamma_M$ is nonsingular. On the other hand, if the data are linearly dependent, the rank of $\Gamma_M$ is less than M and therefore the solution is not unique. In this case, the estimate $\hat{d}(n)$ can be expressed as a linear combination of a reduced set of linearly independent data points equal to the rank of $\Gamma_M$. Since the MSE is minimized by selecting the filter coefficients to satisfy the orthogonality principle, the residual minimum MSE is simply

$$ \mathrm{MMSE}_M = E[e(n)\,d^*(n)] \qquad (12.7.15) $$

which yields the result given in (12.7.6).
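To make the orthogonality principle concrete, the following Python sketch of mine reuses the model and the optimum coefficients of Example 12.7.1 and estimates $E[e(n)x(n-l)]$ by sample averages; with the optimum filter these averages come out close to zero, as (12.7.12) requires.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200_000, 2

# Simulate the signal model of Example 12.7.1
v = rng.normal(scale=np.sqrt(0.64), size=N)
s = np.zeros(N)
for n in range(1, N):
    s[n] = 0.6 * s[n - 1] + v[n]
w = rng.normal(scale=1.0, size=N)
x = s + w

h = np.array([0.451, 0.165])              # optimum coefficients from Example 12.7.1
y = h[0] * x[M - 1:] + h[1] * x[M - 2:-1] # y(n) = h(0)x(n) + h(1)x(n-1), n >= 1
e = s[M - 1:] - y                         # e(n) = d(n) - y(n), with d(n) = s(n)

# Sample averages of e(n) x(n - l) should be near zero for l = 0, 1
for l in range(M):
    print(l, np.mean(e * x[M - 1 - l:len(x) - l]))
```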

12.7.3 IIR Wiener Filter

In the preceding section we constrained the filter to be FIR and obtained a set of M linear equations for the optimum filter coefficients. In this section we allow the filter to be infinite in duration (IIR) and the data sequence to be infinite as well. Hence the filter output is
$$ y(n) = \sum_{k=0}^{\infty} h(k)\,x(n-k) \qquad (12.7.16) $$

The filter coefficients are selected to minimize the mean-square error between the desired output d(n) and y(n), that is,

$$ \mathcal{E}_\infty = E\left| d(n) - \sum_{k=0}^{\infty} h(k)\,x(n-k) \right|^2 \qquad (12.7.17) $$
