
ADVANCED SIGNAL PROCESSING FOR COMMUNICATIONS

MASTER IN ADVANCED SCIENCES OF MODERN TELECOMMUNICATIONS, SECOND QUARTER, COURSE 2012 - 2013

EQUALIZATION

© Baltasar Beferull Lozano, Baltasar.Beferull@uv.es, http://www.uv.es/gsic/beferull
Group of Information and Communication Systems (GSIC)
Instituto de Robótica y de Tecnologías de la Información y las Comunicaciones (IRTIC)
Escuela Técnica Superior de Ingeniería, Universitat de València


© Baltasar Beferull Lozano - Advanced Signal Processing for Communications & Signal Processing, 2012 - 2013 (2nd Quarter)

Equalization: Low complexity suboptimal receivers

In many applications, it is not possible to use optimal receivers
Even with an efficient implementation, they are usually too computationally intensive
Then, lower-complexity receivers are designed at the cost of optimality
We explore here several sub-optimal structures for detection in ISI channels
We use Linear Estimation as a tool to derive and analyze several suboptimal schemes
We first make a short review of Linear Estimation (without proofs)


Linear Estimation
Suppose we are given observations:

y(k) = A x(k) + z(k)

and we want to estimate x(k), assuming that the z(k) are i.i.d. Gaussian noise vectors

Estimation criterion: MMSE criterion

Linear estimators:

min E[ ||x(k) − x̂(k)||² ],   x̂(k) = W y(k)

where W is simply the linear estimator matrix. Hence, the error is defined as:

e(k) = x(k) − x̂(k) = x(k) − W y(k)

thus, the linear estimation problem is to find a matrix W_opt such that:

W_opt = arg min_W E[ ||x(k) − W y(k)||² ]

and the optimum MMSE for linear estimation is given by:

σ²_MMSE = E[ ||x(k) − W_opt y(k)||² ]

Notice that the MSE can be calculated in general as follows:

E[ ||x(k) − W y(k)||² ] = Tr{ E[(x(k) − W y(k)) (x(k) − W y(k))*] }
                        = Tr{ R_xx − R_xy W* − W R_yx + W R_yy W* }
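As a quick numerical sanity check of this setup (a sketch with illustrative numbers: the mixing matrix A, dimensions and noise level below are arbitrary choices, not from the notes), we can build the linear MMSE estimator from sample correlation matrices and verify that the resulting error is orthogonal to the observations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2x3 mixing matrix A (arbitrary numbers, for demonstration only)
A = np.array([[1.0, 0.5, 0.2],
              [0.3, 1.0, 0.7]])
N = 200_000
x = rng.standard_normal((3, N))          # zero-mean source vectors x(k)
z = 0.1 * rng.standard_normal((2, N))    # i.i.d. Gaussian noise z(k)
y = A @ x + z                            # observations y(k) = A x(k) + z(k)

# Sample correlation matrices
Rxy = (x @ y.T) / N                      # estimate of E[x y^T]
Ryy = (y @ y.T) / N                      # estimate of E[y y^T]

# Optimal linear estimator: W_opt R_yy = R_xy  =>  W_opt = R_xy R_yy^{-1}
W_opt = Rxy @ np.linalg.inv(Ryy)

e = x - W_opt @ y                        # estimation error e(k)
# Orthogonality principle: E[e(k) y(k)^T] should vanish
print(np.max(np.abs(e @ y.T / N)))       # ~0 (machine precision)
```

Because W_opt is computed from the same sample statistics, the sample cross-correlation between error and observations is zero up to floating-point precision.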

Orthogonality Principle: The MSE is minimized if and only if the following condition is satisfied:

E[e(k) y*(k)] = 0,   ∀ k

which implies that:

E[(x(k) − W_opt y(k)) y*(k)] = 0   =>   W_opt R_yy = R_xy

Geometry of random processes: a random variable/vector/process is a mapping from a probability space Ω to C^n, i.e., if x ∈ C^n, x = X(ω), X : Ω → C^n. Think of ω as the outcome of a single trial, and X(ω) as a mapping from the outcome of the trial to a complex vector.

Figure 5.1: Geometry of random processes: the estimate x̂ = W y lies in the linear space L(y) spanned by {y}, and the error e = x − x̂ is orthogonal to that space.

Theorem (Pythagorean theorem for random processes): just as in elementary geometry, there is a Pythagorean relationship between x, x̂_opt and the error:

E[ ||x(k) − x̂_opt(k)||² ] + E[ ||x̂_opt(k)||² ] = E[ ||x(k)||² ]


Optimum (non-linear) MMSE Estimation

In general, the optimal estimator is the conditional mean x̂(k) = E[x(k)|y(k)]
In most cases, E[x(k)|y(k)] is non-linear
If x and y are jointly Gaussian, then x̂(k) = E[x(k)|y(k)] is linear => linear estimators are optimal for estimating a Gaussian random process from another correlated Gaussian process
A more general orthogonality principle holds: for any (measurable) function g(y),

E[(x(k) − E[x(k)|y(k)]) g*(y(k))] = 0
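Both facts on this slide can be checked by Monte Carlo in a small sketch (the coefficients 0.8 and 0.6 below are illustrative choices): for jointly Gaussian zero-mean scalars, the conditional mean is the linear estimator, and the residual is uncorrelated with any function of the observation, not just with linear ones:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
y = rng.standard_normal(N)
x = 0.8 * y + 0.6 * rng.standard_normal(N)   # x, y jointly Gaussian, zero mean

# For jointly Gaussian zero-mean variables the conditional mean is linear:
# E[x|y] = (sigma_xy / sigma_yy) * y = 0.8 * y
x_hat = 0.8 * y
resid = x - x_hat

# General orthogonality: the residual is uncorrelated with ANY function g(y),
# including non-linear ones
for g in (y, y**3, np.tanh(y)):
    print(np.mean(resid * g))   # all ~0 up to Monte Carlo error
```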


Wiener smoothing
Recall that for a discrete random (scalar and white-sense stationary) process {x(k )}, autocorrelation is given by: rxx(l) = E[x(k )x(k l)] (l) rxx(l) = rxx and the power spectrum is expressed through the D transform:
Sxx(D) = D{E[xk x k n ]} = E[X (D )X (D )] (D) Sxx(D) = Sxx

(mneumonic notation)

Suppose that we observe {y (k )} and want to estimate {x(k )} Notice that there is correlation between y (l) and x(k ), for k = l (k )} Thus, we need to lter {y (k )} to get a estimate {x x (k ) = w (k ) y ( k ) = Hence, ( D ) = W (D ) Y ( D ), X E ( D ) = X ( D ) W (D ) Y (D )
(D) X n

w(n) y (k n)

The criterion that is used is to nd W (D) is such that: arg min E |ek |2
W (D)

The optimal estimator W_opt(D) will satisfy the orthogonality principle:

E[e_opt(k) y*(k − n)] = 0,   ∀ n

where e_opt(k) = x(k) − x̂_opt(k) and X̂_opt(D) = W_opt(D) Y(D)

Notice that the following holds:

E[e_opt(k) y*(k − n)] = 0  ∀ n   <=>   E[E(D) Y*(D^{-*})] = 0

Remember that (mnemonic form):

Y(D) = Σ_n y(n) D^n
Y*(D) = Σ_n y*(n) (D*)^n
Y*(D^{-*}) = Σ_n y*(n) D^{−n}

Linear Prediction
Given a sequence {xk }, we want to use only past samples in order to predict the present sample: x k =
m=1

amxkm

Question: Find {am} k |2 is minimized m=1 such that E |xk x Using orthogonality principle, we have that: k )xkn, ek = (xk x Based on this, the optimality condition is given by: E or E[xk x k m ] = rx(n) =
m=1 m=1

n = 1,

xk

am x k m
m=1

x k n = 0

amE[xkm x k n ] n = 1,

am rx(n m),
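In practice one truncates the predictor to a finite order and solves the resulting Toeplitz normal equations. As an illustrative sketch (the order P, the AR(1) model and ρ = 0.9 are assumptions for the example, not part of the notes), the equations recover the single non-zero coefficient of a first-order autoregressive process:

```python
import numpy as np

rho = 0.9
P = 4                                      # predictor order (finite truncation)
idx = np.arange(1, P + 1)

# Autocorrelation of an AR(1) process is proportional to rho^|n|, so the
# normal equations r_x(n) = sum_m a_m r_x(n-m), n = 1..P, become a
# symmetric Toeplitz linear system
R = rho ** np.abs(idx[:, None] - idx[None, :])   # R[n,m] = r_x(n-m)
rhs = rho ** idx                                 # r_x(n), n = 1..P

a = np.linalg.solve(R, rhs)
print(np.round(a, 6))   # -> [0.9, 0, 0, 0]: one tap suffices for AR(1)
```

The zero taps beyond the first reflect that for an AR(1) process only the most recent sample carries predictive information.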


Let

g_n = r_x(n) − Σ_{m=1}^{∞} a_m r_x(n − m) = Σ_{m=0}^{∞} ā_m r_x(n − m)   =>   G(D) = A(D) S_x(D)

where ā_0 = 1, ā_m = −a_m for m ≥ 1, and g_n is an anti-causal sequence (g_n = 0 for n ≥ 1)

Suppose that the Paley-Wiener condition holds for S_x(D) and that we can write:

S_x(D) = σ_x² L(D) L*(D^{-*})

where L(D) is causal, stable and minimum phase (i.e., zeros & poles of L(D) strictly outside the unit circle). Then:

G(D) = A(D) S_x(D) = σ_x² A(D) L(D) L*(D^{-*})

where A(D) is causal & monic and G(D) is anti-causal, so that:

G(D) / (σ_x² L*(D^{-*})) = A(D) L(D)

Since L(D) is minimum phase, 1/L*(D^{-*}) is anti-causal and stable:

1 / L*(D^{-*}) = l̄_0 + Σ_{n=1}^{∞} l̄_n D^{−n}   (anti-causal)

which implies that G(D) / (σ_x² L*(D^{-*})) is anti-causal. Since A(D) L(D) is causal, we can conclude that:

A(D) = 1 / L(D)   (causal and stable, since L(D) is minimum-phase)

Notice also that for l ≥ 1,

E[e_k e*_{k−l}] = E[ e_k (x_{k−l} − Σ_{m=1}^{∞} a_m x_{k−m−l})* ]
              = E[e_k x*_{k−l}] − Σ_{m=1}^{∞} a*_m E[e_k x*_{k−m−l}]

and, due to the orthogonality principle, since E[e_k x*_{k−l}] = 0 for l ≥ 1, we get that:

E[e_k e*_{k−l}] = 0,   l ≠ 0

which means that the prediction filter is also a whitening filter for the error sequence
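The whitening property can be observed directly in a small simulation (the AR(2) coefficients below are illustrative assumptions): applying the prediction-error filter to an autoregressive process leaves an error sequence whose sample autocorrelation is an impulse:

```python
import numpy as np

rng = np.random.default_rng(2)
a1, a2 = 0.5, -0.3            # AR(2) coefficients (illustrative, stable)
N = 100_000
w = rng.standard_normal(N)
x = np.zeros(N)
for k in range(2, N):         # x_k = a1 x_{k-1} + a2 x_{k-2} + w_k
    x[k] = a1 * x[k - 1] + a2 * x[k - 2] + w[k]

# Prediction error e_k = x_k - a1 x_{k-1} - a2 x_{k-2} recovers the driving
# noise w_k, so the error sequence must be white
e = x[2:] - a1 * x[1:-1] - a2 * x[:-2]
corr = [np.mean(e[l:] * e[:len(e) - l]) for l in range(4)]
print(np.round(corr, 3))      # ~[1, 0, 0, 0]
```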


Suboptimal detection: Equalization

(Figure: matched-filter front end followed by the equalizer; only the caption is recoverable in this copy.)

Let us start with just the output of the matched filter:

y_k = Σ_n x_n ||p|| q_{k−n} + z_k   =>   y_k = ||p|| (x_k * q_k) + z_k

In suboptimal receivers, we are only interested in minimizing marginal measures, i.e., E[|e_k|²]

We could work equivalently with the output of the whitened matched filter (WMF) and do exactly the same; the final conclusions are the same.


Figure 5.2: The ISI channel.
Figure 5.3: Structure of linear equalizer: y_k → equalizer → r_k

Goal: the output {r_k} should make the equivalent channel as close as possible to AWGN
Equalizer = linear filtering blocks + symbol-by-symbol detection at the output
Because of the symbol-by-symbol detection, complexity does not grow exponentially with the channel length as in optimal decoding (MLSE, MAPSD). However, the price we pay is a decrease in performance.

Four types of structures will be considered here:
- Zero-forcing equalizer (ZFE): inverts the channel and eliminates ISI
- MMSE linear equalizer (MMSE-LE): takes noise into account and inverts the channel as best as possible in the presence of noise
- Zero-forcing decision-feedback equalizer (ZF-DFE): uses previous decisions to eliminate ISI and inverts the channel
- MMSE decision-feedback equalizer (MMSE-DFE): uses previous decisions to reduce ISI and takes into account the presence of noise

Notation (mnemonic): to re-iterate a point we made earlier about notation, we define the power spectrum and cross spectrum as

S_xx(D) = D{E[x_k x*_{k−n}]},   S_xy(D) = D{E[x_k y*_{k−n}]}

where D{·} denotes the D-transform. In this section, we very often denote the power spectrum and cross spectrum loosely in this way.


Zero-forcing Equalizer (ZFE)


Simplest possible equalizer to understand and analyze
If transmitted symbols have been distorted by a known linear filter, we try to eliminate the distortion by filtering the output through the inverse filter
It does not take into account the presence of noise => noise enhancement

The output of the matched filter is given by:

y_k = ||p|| (x_k * q_k) + z_k   =>   Y(D) = ||p|| X(D) Q(D) + Z(D)

Figure 5.4: Zero-forcing equalizer: Y(D) → W_ZFE(D) → X(D) + Z(D)/(||p|| Q(D))

W_ZFE(D) = 1 / (||p|| Q(D))

R(D) = W_ZFE(D) Y(D) = (1/(||p|| Q(D))) (||p|| Q(D) X(D) + Z(D)) = X(D) + Z(D)/(||p|| Q(D))

The term Z(D)/(||p|| Q(D)) could be quite large, degrading severely the effective SNR
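How large this noise term can get is easy to quantify numerically. The sketch below (channel taps and N0 are illustrative assumptions; ||p|| is normalized to 1) uses a channel with a spectral null near ω = π and compares the noise power before and after zero-forcing:

```python
import numpy as np

h = np.array([1.0, 0.95])          # illustrative channel with a near-null at w = pi
w_grid = np.linspace(-np.pi, np.pi, 4001)
Q = np.abs(h[0] + h[1] * np.exp(-1j * w_grid)) ** 2   # Q(e^{jw}) >= 0

N0 = 1.0
# Noise PSD at the matched-filter output is N0*Q(e^{jw}); the ZFE applies a
# gain |W_ZFE|^2 = 1/Q^2, so the equalized noise PSD is N0/Q(e^{jw})
noise_in = N0 * np.mean(Q)          # average noise power before the ZFE
noise_out = N0 * np.mean(1.0 / Q)   # average noise power after the ZFE
print(noise_in, noise_out)          # noise_out >> noise_in: noise enhancement
```

The closer the channel zero sits to the unit circle, the larger the 1/Q term and the worse the enhancement.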


Performance analysis of the ZFE


As we know, the noise at the output of the matched filter has a PSD given by:

S_z(D) = N0 Q(D)

Normalized per real dimension, we denote it as S̄_z(D) = (N0/2) Q(D)

After the ZFE, we have:

r_k = x_k + z_k^{ZFE}

where

z_k^{ZFE} = D^{−1}{ Z(D) / (||p|| Q(D)) }

Hence, per dimension, the PSD of the noise z_k^{ZFE} is:

S̄_z^{ZFE}(D) = (N0/2) Q(D) (1/(||p|| Q(D))) (1/(||p|| Q*(D^{-*})))

but notice that:

Q*(D^{-*}) = Q(D)

due to the conjugate symmetry of {q_l}, that is, q_l = q*_{−l}; thus, we have:

S̄_z^{ZFE}(D) = (N0/2) (1/(||p||² Q(D))) = (N0/2) (W_ZFE(D)/||p||)

Let us calculate now the SNR_ZFE.
σ̄²_ZFE = (T/2π) ∫_{−π/T}^{π/T} S̄_z^{ZFE}(e^{jωT}) dω
       = (1/2) (N0/||p||) (T/2π) ∫_{−π/T}^{π/T} W_ZFE(e^{jωT}) dω
       = (1/2) (N0/||p||) w_ZFE(0)

Thus we have:

SNR_ZFE = Ē_x / σ̄²_ZFE = Ē_x ||p|| / ((N0/2) w_ZFE(0))

Noise enhancement: the basic problem shows up when Q(D) has zeros close to the unit circle (consider T = 1 for normalization), as seen in Figure 5.5. Inverting Q(D) then results in a gain W(e^{jωT}) = 1/Q(e^{jωT}) that becomes large, and hence enhances the noise power that was ignored!

Figure 5.5: The noise enhancement of the ZFE (plot of Q(e^{jωT}) and W(e^{jωT}) over [−π/T, π/T]).

Minimum mean squared error - Linear Equalization (MMSE - LE)


To avoid the ZFE noise enhancement, we need to take into account the presence of noise
Basic approach: find a linear filter {w_k} that minimizes the output error variance:

e_k = x_k − w_k * y_k   =>   E(D) = X(D) − W(D) Y(D)

The MMSE linear equalizer minimizes the following:

W_MMSE-LE(D) = arg min_{W(D)} E[|e_k|²]

Using the orthogonality principle, the following has to be satisfied:

E[E(D) Y*(D^{-*})] = 0   =>   E[(X(D) − W_MMSE-LE(D) Y(D)) Y*(D^{-*})] = 0

which gives us:

S_xy(D) = W_MMSE-LE(D) S_yy(D)

hence,

W_MMSE-LE(D) = S_xy(D)/S_yy(D) = ||p|| Q(D) E_x / (||p||² Q²(D) E_x + N0 Q(D))
             = ||p|| E_x / (||p||² Q(D) E_x + N0)
             = 1 / (||p|| (Q(D) + 1/SNR_MFB))   (noise enhancement is avoided)

As N0 → 0, or SNR → ∞, the MMSE-LE equalizer tends towards the ZFE (as expected)
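The limiting behavior can be checked on a frequency grid. In the sketch below (channel taps, noise levels, and the normalization ||p|| = E_x = 1 are illustrative assumptions), the gap between the MMSE-LE and the ZFE shrinks monotonically as N0 decreases:

```python
import numpy as np

w_grid = np.linspace(-np.pi, np.pi, 2001)
h = np.array([1.0, 0.95])                 # illustrative channel taps
Q = np.abs(h[0] + h[1] * np.exp(-1j * w_grid)) ** 2
p_norm, Ex = 1.0, 1.0                     # ||p|| and Ex normalized to 1

W_zfe = 1.0 / (p_norm * Q)                # zero-forcing equalizer
gaps = []
for N0 in (1.0, 1e-2, 1e-6):
    snr_mfb = p_norm ** 2 * Ex / N0
    W_mmse = 1.0 / (p_norm * (Q + 1.0 / snr_mfb))   # MMSE-LE response
    gaps.append(np.max(np.abs(W_mmse - W_zfe)))
print(gaps)                               # gap to the ZFE shrinks as N0 -> 0
```

Note that the regularizing term 1/SNR_MFB is exactly what caps the gain near the channel's spectral null.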


Performance of the MMSE - LE Equalizer


At the output of the MMSE-LE, in the time domain and the D-transform domain, we have:

r_k = y_k * w_MMSE-LE(k) = [ ||p|| (x_k * q_k) + z_k ] * w_MMSE-LE(k)

R(D) = (||p|| Q(D) / (||p|| (Q(D) + 1/SNR_MFB))) X(D) + Z(D) / (||p|| (Q(D) + 1/SNR_MFB))

where:

SNR_MFB = ||p||² E_x / N0 = ||p||² Ē_x / (N0/2)

The above expression can also be written as:

R(D) = (Q(D) / (Q(D) + 1/SNR_MFB)) X(D) + Z'(D) = (1 − V(D)) X(D) + Z'(D)

where we have defined:

V(D) = (1/SNR_MFB) / (Q(D) + 1/SNR_MFB) = (||p|| / SNR_MFB) W_MMSE-LE(D)

and Z'(D) = Z(D) / (||p|| (Q(D) + 1/SNR_MFB)). In the time domain, we get:

r_k = x_k − v_k * x_k + z'_k


Examining the contribution of x_k in (v_k * x_k):

v_k * x_k = Σ_n v_n x_{k−n} = v_0 x_k + Σ_{n≠0} v_n x_{k−n}

where the second term does not depend on x_k. Thus, we have:

r_k = x_k − v_0 x_k − Σ_{n≠0} v_n x_{k−n} + z'_k   =>   r_k = (1 − v_0) x_k + e'_k

where e'_k contains past/future symbols (and noise).

Hence, one can define the detection SNR observed by the detector as:

SNR_MMSE-LE,U = E_x (1 − v_0)² / E[|e'_k|²]

However, notice that in the MMSE minimization we compute E[|e_k|²] instead, where:

e_k = x_k − w_MMSE-LE(k) * y_k = x_k − r_k

From here, one can see that:

e_k = v_0 x_k − e'_k

Hence, E[|e_k|²] is not the same as what the detector encounters in the computation of SNR_MMSE-LE,U

Next, we calculate σ²_MMSE-LE = E[|e_k|²]


Similar to the ZFE analysis,

S_EE(D) = D{E[e_l e*_{l−k}]}
        = E_x − W_MMSE-LE(D) S*_XY(D^{-*}) − W*_MMSE-LE(D^{-*}) S_XY(D) + W_MMSE-LE(D) S_YY(D) W*_MMSE-LE(D^{-*})
        = E_x − W_MMSE-LE(D) S*_XY(D^{-*})

since, at the optimum W_MMSE-LE(D) = S_XY(D)/S_YY(D), the last two terms cancel.

Now, taking into account that:

W_MMSE-LE(D) = 1 / (||p|| (Q(D) + 1/SNR_MFB))
S_YY(D) = ||p||² Q²(D) E_x + N0 Q(D) = E_x ||p||² Q(D) (Q(D) + 1/SNR_MFB)
S*_XY(D^{-*}) = ||p|| Q(D) E_x   (by the conjugate symmetry of Q(D))

substituting and operating, one gets:

S_EE(D) = E_x − E_x Q(D)/(Q(D) + 1/SNR_MFB)
        = (E_x / SNR_MFB) / (Q(D) + 1/SNR_MFB)
        = (N0/||p||²) (1/(Q(D) + 1/SNR_MFB))

Therefore,

S_EE(D) = E_x V(D) = (N0/||p||) W_MMSE-LE(D)

'

c Baltasar Beferull Lozano - Advanced Signal Processing for Communications & Signal Processing, 2012 - 2013 (2nd Quarter)

19

In discrete-time, this implies that:

σ²_MMSE-LE = E[|e_k|²] = E_x v_0 = (N0/||p||) w_MMSE-LE(0)

Even though one computes σ²_MMSE-LE, the detector is operating based on r_k = (1 − v_0) x_k + e'_k, and the effective error is e'_k. It is clear that E[|e'_k|²] determines the performance, and both errors are related:

E[|e_k|²] = v_0² E_x + E[|e'_k|²]

=> σ²_MMSE-LE,U = E[|e'_k|²] = E[|e_k|²] − v_0² E_x = E_x v_0 − v_0² E_x = E_x v_0 (1 − v_0)

Thus,

SNR_MMSE-LE = E_x / σ²_MMSE-LE = E_x / (E_x v_0) = 1/v_0

SNR_MMSE-LE,U = E_x (1 − v_0)² / σ²_MMSE-LE,U = E_x (1 − v_0)² / (E_x v_0 (1 − v_0)) = (1 − v_0)/v_0 = 1/v_0 − 1

In conclusion,

SNR_MMSE-LE = 1 + SNR_MMSE-LE,U

The detector actually operates with SNR_MMSE-LE,U; thus, this is the one that determines the error probability

Where do we go next?

So far:
Both ZFE and MMSE-LE filter the received sequence to try to convert the ISI problem into something close to an AWGN problem
The ZFE does this by inverting the channel, but this can cause noise enhancement
The MMSE-LE takes the noise into account, but it treats transmitted symbols as part of the noise:

r_k = x_k − v_0 x_k − Σ_{n≠0} v_n x_{k−n} + z'_k   =>   r_k = (1 − v_0) x_k + e'_k

Question: can we take advantage of the fact that the noise contains some past/future transmitted symbols?
This is the main basis of Decision Feedback Equalizers


Decision-Feedback Equalizer (DFE)

(Figure: structure of the decision-feedback equalizer, with a whitened matched filter front end, feedforward filter, decision device, and decision-feedback filter; only the captions are recoverable in this copy.)

Basic idea: one could potentially use previous decisions while attempting to estimate the current symbol

The derivation and analysis of the DFE require the strong assumption that the previous decisions are indeed correct!
Without this assumption, the analysis of DFEs is still an open question
The derivation and analysis have nevertheless been shown to be valuable for real cases

We make this assumption and proceed.

Criterion for the MMSE-DFE

min_{W(D), B(D), b_0 = 1} E[|x_k − r_k|²]

1. In order to utilize past decisions, one should ensure that r_k depends only on the past symbols
2. The feedforward filter W(D) has to shape the sequence {y_k} in order to have only trailing ISI
3. One possible structure that causes the channel to be causal: the whitened matched filter
4. We need B(D) to be causal and monic, i.e., so that:

B(D) = 1 + b_1 D + b_2 D² + …   =>   1 − B(D) = − Σ_{n=1}^{∞} b_n D^n

hence, X(D)(1 − B(D)) will depend only on past decisions


Main steps in deriving the MMSE-DFE

Step 1: Fix the feedback filter B(D) and find the feedforward filter W(D), in terms of B(D), such that E[|x_k − r_k|²] is minimized

Step 2: Express the result of operating W(D) on Y(D), i.e., R(D), in terms of B(D), and set up a linear prediction problem

Step 3: Solve the linear prediction problem to find the causal filter B(D) that minimizes E[|x_k − r_k|²]

Step 4: (Performance analysis) As in the MMSE-LE, remove the bias term to find the equivalent SNR_MMSE-DFE,U


Step #1

Assuming that B(D) is fixed, we find W(D) in order to minimize the MMSE criterion. The error E(D) can be written as follows:

E(D) = X(D) − [W(D) Y(D) + (1 − B(D)) X(D)],   with R'(D) = W(D) Y(D) + (1 − B(D)) X(D)

Hence, E(D) = B(D) X(D) − W(D) Y(D)

In order to find the W(D) that minimizes E[|e_k|²], we use the orthogonality principle:

E[E(D) Y*(D^{-*})] = E[ (B(D) X(D) − W(D) Y(D)) Y*(D^{-*}) ] = 0
=> B(D) S_XY(D) − W(D) S_YY(D) = 0

Hence,

W(D) = B(D) S_XY(D)/S_YY(D) = B(D) W_MMSE-LE(D) = B(D) / (||p|| (Q(D) + 1/SNR_MFB))


Step #2

We express the error in terms of B(D) by substituting the solution obtained for W(D) in Step #1:

E(D) = X(D) − [W(D) Y(D) + (1 − B(D)) X(D)]
     = B(D) X(D) − B(D) W_MMSE-LE(D) Y(D)
     = B(D) [X(D) − W_MMSE-LE(D) Y(D)] = B(D) U(D)
     = (1 + B(D) − 1) U(D) = U(D) − (1 − B(D)) U(D)

where 1 − B(D) is a strictly causal sequence. This is exactly a linear prediction problem, where we want to predict the sequence {u_k} = D^{−1}{U(D)}: we predict sample u_k using only past samples

Step #3

The optimal linear predictor is given by:

B_opt(D) = 1 / L(D)

where B_opt(D) plays the role of A(D) in the previous section on linear prediction, and L(D) is such that:

S_UU(D) = σ_U² L(D) L*(D^{-*})

We can first calculate S_UU(D) as follows:

S_UU(D) = E[ (X(D) − W_MMSE-LE(D) Y(D)) (X(D) − W_MMSE-LE(D) Y(D))* ]
        = S_XX(D) − W_MMSE-LE(D) S*_XY(D^{-*})

where we have taken into account that W_MMSE-LE(D) = S_XY(D)/S_YY(D). Making use of previous results, we have that:

S_UU(D) = E_x − E_x Q(D)/(Q(D) + 1/SNR_MFB) = (N0/||p||²) (1/(Q(D) + 1/SNR_MFB))
        = σ_U² L(D) L*(D^{-*})

with L(D) being stable, causal, minimum-phase and monic => B_opt(D) = 1/L(D) is also causal, stable and monic, hence:

S_EE(D) = B_opt(D) S_UU(D) B*_opt(D^{-*}) = σ_U² L(D) L*(D^{-*}) (1/L(D)) (1/L*(D^{-*})) = σ_U²

=> we get a white noise sequence (assuming no decision error propagation!)


Alternatively, one could factorize in a different way, namely:

Q(D) + 1/SNR_MFB = γ_0 G(D) G*(D^{-*})

then:

S_UU(D) = (N0/||p||²) (1/(Q(D) + 1/SNR_MFB)) = (N0/||p||²) (1/(γ_0 G(D) G*(D^{-*})))

Thus, in this notation, we have:

σ_U² = N0 / (||p||² γ_0)

and:

L(D) = 1/G(D)   =>   B_opt(D) = G(D)

Substituting in the expression of the optimal feedforward filter W_opt(D), we get:

W_opt(D) = B_opt(D) W_MMSE-LE(D)
         = G(D) (1/(||p|| (Q(D) + 1/SNR_MFB)))
         = G(D) (1/(||p|| γ_0 G(D) G*(D^{-*})))
         = 1/(||p|| γ_0 G*(D^{-*}))

therefore, the main computation is to figure out the spectral factorization of Q(D) + 1/SNR_MFB


For the choices of B_opt(D) and W_opt(D), the error spectrum is:

S_EE(D) = B_opt(D) S_UU(D) B*_opt(D^{-*}) = σ_U² = N0/(γ_0 ||p||²)

Next, we perform the performance analysis of the MMSE-DFE


Performance Analysis of the MMSE-DFE


Since S_EE(D) = N0/(γ_0 ||p||²), the per-dimension PSD is S̄_EE(D) = (N0/2)/(γ_0 ||p||²)

In order to compute γ_0, one can make use of the Szegő formula

Szegő formula: assuming a rational spectrum S(D), i.e.,

S(D) = s_0 Π_{k=1}^{M} (1 − c_k D)(1 − c*_k D^{−1}) / Π_{k=1}^{N} (1 − d_k D)(1 − d*_k D^{−1})

the following expression holds:

(1/2π) ∫_{−π}^{π} ln S(e^{jθ}) dθ = ln γ_0

if the finite energy constraint is satisfied, i.e.,

r(0) = (1/2π) ∫_{−π}^{π} S(e^{jθ}) dθ < ∞

Notes:
This result actually holds for more general forms than just the rational spectrum; however, in most practical cases, a rational spectrum is assumed.
The finite energy constraint for rational spectra is equivalent to saying that there are no poles of S(D) on the unit circle.
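A quick numerical check of the Szegő formula (the value a = 0.7 is an illustrative assumption): for S(e^{jω}) = |1 − a e^{jω}|² with |a| < 1, the canonical factor is monic, so the geometric mean of the spectrum over the unit circle, i.e., γ_0, must equal 1:

```python
import numpy as np

a = 0.7                                   # illustrative zero location, |a| < 1
w_grid = np.linspace(-np.pi, np.pi, 200_001)
S = np.abs(1.0 - a * np.exp(1j * w_grid)) ** 2

# Szego formula: gamma_0 = exp( (1/2pi) * int ln S(e^{jw}) dw )
gamma0 = np.exp(np.mean(np.log(S)))
print(round(gamma0, 4))                   # -> 1.0
```

This is the spectral-domain counterpart of Jensen's formula: the log-integral of |1 − a e^{jω}|² vanishes when the zero lies inside the unit circle.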


Using the Szegő formula for the MMSE-DFE, we obtain the so-called Salz formula:

σ²_MMSE-DFE = E[|e_k|²] = N0/(γ_0 ||p||²) = exp{ (1/2π) ∫_{−π}^{π} ln S_EE(e^{jω}) dω }

Thus, we obtain the following SNR_MMSE-DFE:

SNR_MMSE-DFE = E_x / σ²_MMSE-DFE = E_x exp{ −(1/2π) ∫_{−π}^{π} ln S_EE(e^{jω}) dω } = γ_0 ||p||² E_x / N0 = γ_0 SNR_MFB

Next, we check whether or not we have a bias (as we did for the MMSE-LE):

R'(D) = R(D) + (1 − B(D)) X(D)
      = W(D) Y(D) + X(D) − B(D) X(D)
      = X(D) − (B(D) X(D) − W(D) Y(D))
      = X(D) − E(D)

The receiver makes decisions on x_k based on r'_k


R'(D) = X(D) − G(D) X(D) + (1/(||p|| γ_0 G*(D^{-*}))) Y(D)
      = X(D) − G(D) X(D) + (||p|| Q(D) X(D) + Z(D)) / (||p|| γ_0 G*(D^{-*}))

Using Q(D) = γ_0 G(D) G*(D^{-*}) − 1/SNR_MFB:

R'(D) = X(D) − G(D) X(D) + G(D) X(D) − (1/(SNR_MFB γ_0 G*(D^{-*}))) X(D) + Z(D)/(||p|| γ_0 G*(D^{-*}))
      = (1 − V(D)) X(D) + Z'(D)

where we have defined:

V(D) = 1/(SNR_MFB γ_0 G*(D^{-*})),   Z'(D) = Z(D)/(||p|| γ_0 G*(D^{-*}))

and V(D) is a purely anti-causal filter


Thus, as we did for the MMSE-LE, we have that:

r'_k = (1 − v_0) x_k − Σ_{n≠0} v_n x_{k−n} + z'_k   =>   r'_k = (1 − v_0) x_k + e'_k

Hence, we have again a bias, since the detector is based on r'_k, not r_k
Notice that since E(D) = X(D) − R'(D), in the time domain we conclude that:

e_k = v_0 x_k − e'_k

We need to find E[|e'_k|²], and to do that we first use E[|e_k|²], as follows:

E[|e_k|²] = σ²_MMSE-DFE = v_0² E_x + E[|e'_k|²] = v_0² E_x + σ²_MMSE-DFE,U

=> σ²_MMSE-DFE,U = σ²_MMSE-DFE − v_0² E_x = N0/(||p||² γ_0) − v_0² E_x

Given that V(D) = (1/(SNR_MFB γ_0)) (1/G*(D^{-*})), where G*(D^{-*}) is monic and anti-causal (and therefore so is 1/G*(D^{-*})), we obtain:

v_0 = 1/(SNR_MFB γ_0) = N0/(||p||² E_x γ_0)   =>   σ²_MMSE-DFE = v_0 E_x


hence,

σ²_MMSE-DFE,U = E_x v_0 − v_0² E_x = v_0 E_x (1 − v_0)

In terms of SNRs, the relationship is given by:

SNR_MMSE-DFE,U = (1 − v_0)² E_x / (v_0 (1 − v_0) E_x) = (1 − v_0)/v_0 = 1/v_0 − 1 = SNR_MMSE-DFE − 1

Therefore, once again:

SNR_MMSE-DFE = SNR_MMSE-DFE,U + 1

Notice the similarity of this relationship with the MMSE-LE (fundamental relationship between biased and unbiased detectors)
Notice that the error sequence {e'_k} is not in general white, even with the correct-past-decisions assumption


Zero-Forcing DFE

We can find the ZF-DFE forward and feedback filters by simply setting SNR_MFB → ∞ in all the expressions derived for the MMSE-DFE
This results in a spectral factorization of:

Q(D) = η_0 P_c(D) P*_c(D^{-*})

and setting the feedforward and feedback filters of the ZF-DFE as:

W(D) = 1/(η_0 ||p|| P*_c(D^{-*})),   B(D) = P_c(D)

All the analysis performed for the MMSE-DFE carries through to the ZF-DFE as well.


Fractionally spaced equalization (FSE)


We have assumed till now that there is perfect synchronization: we know exactly when the sampling at the output of the WMF occurs
Suppose that we think that sampling occurs at kT, but it actually occurs at kT + t_0. The equivalent channel is:

y(kT + t_0) = Σ_m x_m ||p|| q(kT − mT + t_0) + z(kT + t_0)

where

q(kT − mT + t_0) = ∫ φ(t − mT) φ*(t − kT − t_0) dt = ∫ φ(t) φ*(t − (k − m)T − t_0) dt

=> we are sampling q(t + t_0), while we designed the equalizers assuming that it was q(t)
The equivalent channel is given by Q(e^{jω}) e^{jωt_0}, which could cause a loss in performance
One solution is to sample the signal at the Nyquist rate (this is usually faster than the symbol rate 1/T) to ensure we collect sufficient statistics.

Figure 5.7: Fractionally spaced equalizer: x_k → p(t) → (+ noise z(t)) → anti-aliasing filter → sampling at kT/L → fractionally spaced equalizer → x̂_k



There are two main motivations for this:
Robustness to timing errors
In practice, the channel is unknown to the receiver, and one needs to estimate it => we may not be able to form the matched filter to collect sufficient statistics => this motivates the use of a channel-independent method (i.e., Nyquist sampling)

Instead of performing matched filtering + sampling, we first sample faster and then perform equalization in the discrete domain
The anti-aliasing filter captures most of the signal energy while removing the noise outside of the band of interest. Assume a perfect ideal low-pass filter.

Stacking up all the oversampled (fractionally sampled) versions, one obtains, in compact notation,

Y(D) = P(D) X(D) + Z(D)

and the equalizer structure can now be a row vector:

W(D) = [W_0(D), …, W_{L−1}(D)]


We collect samples as follows:

y_i(k) = y(kT − iT/L),   i = 0, …, L − 1

=> Y_i(D) = P_i(D) X(D) + Z_i(D),   i = 0, …, L − 1

where

P_i(D) = D{p_i(k)},   p_i(k) = [h(t) * φ(t)]|_{t = kT − iT/L},   z_i(k) = z(kT − iT/L)

We have to make use of all Y_0(D), …, Y_{L−1}(D) to derive the equalizer


Stacking up all the fractionally sampled versions, one obtains:

Y(D) = [Y_0(D), …, Y_{L−1}(D)]^T = [P_0(D), …, P_{L−1}(D)]^T X(D) + [Z_0(D), …, Z_{L−1}(D)]^T

=> Y(D) = P(D) X(D) + Z(D)

Then, we can consider the equalizer structure as given by the row vector:

W(D) = [W_0(D), …, W_{L−1}(D)]

and the output of the equalizer will be given by:

R(D) = W(D) Y(D) = W(D) P(D) X(D) + W(D) Z(D)

Considering the error E(D) = X(D) − R(D), the fractionally spaced MMSE-LE can be found through the orthogonality principle:

E[E(D) Y*(D^{-*})] = 0   =>   E[(X(D) − W_MMSE-LE(D) Y(D)) Y*(D^{-*})] = 0

=> W_MMSE-LE(D) = E[X(D) Y*(D^{-*})] E[Y(D) Y*(D^{-*})]^{−1}

If we apply the anti-aliasing filter + Nyquist sampling, the noise becomes white, and:

W_MMSE-LE(D) = E_x P*(D^{-*}) [ E_x P(D) P*(D^{-*}) + L N0 I ]^{−1}


Note that the equalization is done first and then the output is downsampled by L, instead of matched filtering + sampling (with aliasing) and then equalizing
Similarly, both the biased and unbiased SNRs are also exactly the same as given for the MMSE-LE, as long as we sample at the Nyquist rate or higher.


Zero-forcing FSE

Suppose we want to use a Zero-forcing FSE, that is, we need:

    W_ZF(D) P(D) = 1   ⟹   Σ_{i=0}^{L-1} W_{i,ZF}(D) P_i(D) = 1

that is, we enforce now the downsampled version of the equalizer output to be like a discrete-time delta function.

Theorem (Bezout Identity): If {P_i(D)}_{i=0}^{L-1} do not share common zeros (i.e., they are coprime), then there exists a vector polynomial (of finite degree!)

    W(D) = [W_0(D), ..., W_{L-1}(D)]

such that W_ZF(D) P(D) = 1.

Notice that having finite-length polynomials {W_i(D)}_{i=0}^{L-1} means FIR filters ⟹ one is able to convert the channel to a discrete AWGN channel without using infinite-length inverses.

Some notes:
- The Bezout condition is in fact necessary and sufficient, as shown by Sylvester [1840].
- This shows that there exists a finite-impulse-response inverse to the vector channel (notice that this inverse is applied after downsampling).
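A minimal numerical illustration of the Bezout identity for L = 2, with hypothetical coprime polyphase polynomials: the FIR coefficients of W_0(D) and W_1(D) solve a Sylvester-type linear system, and the combined response is exactly a discrete delta.

```python
import numpy as np

# Sketch of the Bezout identity for L = 2 (hypothetical coprime polyphase
# channels): find FIR filters W0(D), W1(D) with
#   W0(D) P0(D) + W1(D) P1(D) = 1
# by solving the Sylvester-type linear system over the filter coefficients.
p0 = np.array([1.0, 0.5, 0.2])     # P0(D), degree nu = 2 (hypothetical taps)
p1 = np.array([1.0, -0.4, 0.1])    # P1(D), no zeros shared with P0(D)
nu = len(p0) - 1
nw = nu                            # nw taps (degree nu-1) suffice for L = 2

# Columns of S are shifted channel taps, so S @ [w0; w1] stacks the conv sums
n = nu + nw                        # length of conv(w_i, p_i)
S = np.zeros((n, 2 * nw))
for j in range(nw):
    S[j:j + nu + 1, j] = p0        # shifts of P0 (multiplying w0 coefficients)
    S[j:j + nu + 1, nw + j] = p1   # shifts of P1 (multiplying w1 coefficients)
target = np.zeros(n); target[0] = 1.0       # enforce a discrete delta
w = np.linalg.solve(S, target)              # solvable iff P0, P1 are coprime
w0, w1 = w[:nw], w[nw:]

# FIR zero-forcing check: the combined response is exactly a delta
out = np.convolve(w0, p0) + np.convolve(w1, p1)
assert np.allclose(out, target)
```

The system matrix is nonsingular exactly when the resultant of P_0 and P_1 is nonzero, i.e., when they share no common zeros, which is the Bezout condition stated above.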

Finite-Length Equalizers

In practice, one can only implement finite-length equalizers using digital signal processors:
- Very simple implementation
- Usually, FIR filters have better numerical properties than IIR filters

One could truncate the infinite-length filter solutions and implement them (sub-optimal approach). Instead, we should find the optimal equalizer solution for a given filter length.

Idea: work in the discrete domain to design the best FIR MMSE-LE (digital equalizer) ⟹ an anti-aliasing filter precedes the sampler and the digital equalizer.

Similarly to what we did in the case of the FSE, we take more samples:

    y(kT - iT/L) = Σ_m x_m p(kT - iT/L - mT) + z(kT - iT/L),   i = 0, ..., L-1

We define:

    y_k = [y(kT); ...; y(kT - (L-1)T/L)]
    p_{k-m} = [p(kT - mT); ...; p(kT - (L-1)T/L - mT)]
    z_k = [z(kT); ...; z(kT - (L-1)T/L)]

so that:

    y_k = Σ_n p_n x_{k-n} + z_k


Assumption: the combined pulse-response/anti-aliasing filter p(t) has finite support:

    p(t) = 0 for t ∉ [0, νT]   ⟹   p_k = 0 for k < 0 and k > ν

In practice, it is enough to require having negligible values outside the interval [0, νT] (most real-world channels are approximately time-limited). Hence:

    y_k = [p_0, p_1, ..., p_ν] [x_k; x_{k-1}; ...; x_{k-ν}] + z_k

Suppose that we collect N_f samples of y_k, i.e., a frame corresponding to N_f + ν transmitted symbols:

    Y_k = [y_k; y_{k-1}; ...; y_{k-N_f+1}] = P X_k + Z_k

where

    P = [ p_0  p_1  ...  p_ν   0    ...   0
           0   p_0  p_1  ...  p_ν   ...   0
           .    .    .    .    .     .    .
           0    0   ...   0   p_0   ...  p_ν ]

    X_k = [x_k; x_{k-1}; ...; x_{k-N_f-ν+1}],   Z_k = [z_k; z_{k-1}; ...; z_{k-N_f+1}]

Each p_n is an L × 1 column, so P is an N_f L × (N_f + ν) block-Toeplitz matrix. We are going to use this model for the finite-length equalizer design.
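The block-Toeplitz model Y_k = P X_k + Z_k can be checked numerically. Below is a sketch with hypothetical values (L = 2, ν = 2, N_f = 4) that builds P from the vector taps p_n and verifies it against the direct convolution.

```python
import numpy as np

# Sketch (hypothetical taps): build the Nf*L x (Nf + nu) block-Toeplitz
# channel matrix P from the vector taps p_0, ..., p_nu and check that it
# reproduces the convolution y_k = sum_n p_n x_{k-n} on a stacked frame.
L, nu, Nf = 2, 2, 4
rng = np.random.default_rng(0)
p = rng.standard_normal((nu + 1, L))        # p[n] is the L x 1 tap vector p_n

# Block-Toeplitz P: block row i holds [0 ... 0 p_0 p_1 ... p_nu 0 ... 0]
P = np.zeros((Nf * L, Nf + nu))
for i in range(Nf):
    for n in range(nu + 1):
        P[i * L:(i + 1) * L, i + n] = p[n]

# Frame of Nf + nu symbols: entry j of x stands for x_{k-j}
x = rng.choice([-1.0, 1.0], size=Nf + nu)

# Direct convolution: y_{k-i} = sum_n p_n x_{k-i-n}, stacked for i = 0..Nf-1
Y = np.concatenate([sum(p[n] * x[i + n] for n in range(nu + 1))
                    for i in range(Nf)])
assert np.allclose(Y, P @ x)
```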


FIR MMSE-LE

The equalizer is restricted to operate on N_f symbol times ⟹ N_f L samples of the received sequence. The FIR equalizer is an N_f L-dimensional row vector applied to the received (sampled) vector Y_k:

    r_k = w Y_k,   where w ∈ C^{1×N_f L}

For causality, we pick a channel-equalizer delay of Δ samples ⟹ the equalized output r_k is close to x_{k-Δ}, where Δ is the delay. Hence, the equalizer works by minimizing the following error:

    e_k = x_{k-Δ} - r_k

The criterion for the FIR MMSE-LE equalizer is (as usual):

    w_opt = arg min_w E[|e_k|²]

Next, we apply the orthogonality principle to find the optimal solution.


Using the orthogonality principle:

    E[e_k Y_k*] = E[(x_{k-Δ} - w Y_k) Y_k*] = 0
    ⟹   E[x_{k-Δ} Y_k*] = w_opt E[Y_k Y_k*] = w_opt R_YY

Defining the matrix R_xY(Δ) = E[x_{k-Δ} Y_k*], we can see easily that:

    w_opt = R_xY(Δ) R_YY^{-1}

Regarding the matrix R_xY(Δ), one can see that:

    R_xY(Δ) = E[x_{k-Δ} (X_k* P* + Z_k*)] = [0 ... 0 Ex 0 ... 0] P*     (Ex in the (Δ+1)-th position)
            = Ex [0 ... 0 p_0* ... p_ν* 0 ... 0]                        (p_0* starting at the (Δ+1)-th position)

Let 1_Δ = [0, ..., 0, 1, 0, ..., 0], where the 1 occurs in the (Δ+1)-th position. Hence:

    R_xY(Δ) = Ex 1_Δ P*

On the other hand:

    R_YY = E[P X_k X_k* P*] + E[Z_k Z_k*] = Ex P P* + L N0 I_{N_f L}

Next, we look at the performance analysis.



Performance Analysis

We can calculate the error variance as follows:

    σ²_MMSE-LE = E[|x_{k-Δ} - w Y_k|²] = Ex - R_xY(Δ) R_YY^{-1} R_xY*(Δ)
               = Ex - Ex 1_Δ P* (Ex P P* + L N0 I_{N_f L})^{-1} P 1_Δ* Ex

Considering the Matrix Inversion Lemma,

    (A + B C B*)^{-1} = A^{-1} - A^{-1} B (C^{-1} + B* A^{-1} B)^{-1} B* A^{-1}

we can identify:

    A^{-1} = Ex I_{N_f+ν},   B = P*,   C^{-1} = L N0 I_{N_f L}

and operating, we conclude the following:

    σ²_MMSE-LE = 1_Δ [ (1/Ex) I_{N_f+ν} + (1/(N0 L)) P* P ]^{-1} 1_Δ*
               = N0 L · 1_Δ [ (N0 L / Ex) I_{N_f+ν} + P* P ]^{-1} 1_Δ*
               = N0 L · 1_Δ Q(Δ)^{-1} 1_Δ*

where Q(Δ) = (N0 L / Ex) I_{N_f+ν} + P* P.

The smallest σ²_MMSE-LE is achieved by choosing Δ corresponding to the smallest diagonal element of Q(Δ)^{-1}.

The corresponding unbiased and biased SNRs are defined as usual (see book notes).
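A numerical sketch (hypothetical channel taps) that verifies the two equivalent expressions for the FIR MMSE-LE error variance, the direct formula Ex - R_xY R_YY^{-1} R_xY* and the compact form L N0 [Q^{-1}]_{Δ,Δ}, for every delay Δ.

```python
import numpy as np

# Sketch (hypothetical channel): compute the FIR MMSE-LE error variance
# two ways and check they agree for every delay:
#   direct:  sigma^2 = Ex - R_xY(d) R_YY^{-1} R_xY*(d)
#   via Q:   sigma^2 = L N0 [Q^{-1}]_{d,d},  Q = (L N0 / Ex) I + P* P
L, nu, Nf, Ex, N0 = 2, 2, 4, 1.0, 0.05
rng = np.random.default_rng(1)
p = rng.standard_normal((nu + 1, L))

P = np.zeros((Nf * L, Nf + nu))             # block-Toeplitz channel matrix
for i in range(Nf):
    for n in range(nu + 1):
        P[i * L:(i + 1) * L, i + n] = p[n]

Ryy = Ex * P @ P.T + L * N0 * np.eye(Nf * L)
Q = (L * N0 / Ex) * np.eye(Nf + nu) + P.T @ P
Qinv = np.linalg.inv(Q)

for delta in range(Nf + nu):                # sweep the equalizer delay
    one_d = np.zeros(Nf + nu); one_d[delta] = 1.0
    RxY = Ex * one_d @ P.T                  # R_xY(delta) = Ex 1_delta P*
    w = RxY @ np.linalg.inv(Ryy)            # the MMSE-LE itself
    mmse_direct = Ex - RxY @ np.linalg.inv(Ryy) @ RxY
    assert np.allclose(mmse_direct, L * N0 * Qinv[delta, delta])
```

The best delay is then simply the index of the smallest diagonal entry of Q^{-1}.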

FIR MMSE-DFE

[Figure 5.8: Structure of the FIR MMSE-DFE. The symbols x_k pass through the pulse response p(t), noise z(t) is added, then an anti-aliasing filter and sampler at rate kT/L, followed by the feedforward filter w (length N_f L); a feedback filter b (length N_b) operates on the past decisions, and its output is combined before the decision device produces x̂_{k-Δ}.]

We have additionally a symbol-spaced feedback filter of finite length N_b, which filters the past decisions {x̂_{k-Δ-1}, ..., x̂_{k-Δ-N_b}}. The error is:

    e_k = x_{k-Δ} - ( w Y_k - [b_1, ..., b_{N_b}] [x̂_{k-Δ-1}; ...; x̂_{k-Δ-N_b}] )

Assumption: as we did for the infinite-length case, we derive the DFE assuming correct past decisions.


Let

    b̃ = [1, b_1, b_2, ..., b_{N_b}]   and   X_{k-Δ:k-Δ-N_b} = [x_{k-Δ}; x_{k-Δ-1}; ...; x_{k-Δ-N_b}]

Then, we have that:

    e_k = x_{k-Δ} - w Y_k + [b_1, ..., b_{N_b}] [x_{k-Δ-1}; ...; x_{k-Δ-N_b}]
        = [1, b_1, b_2, ..., b_{N_b}] [x_{k-Δ}; x_{k-Δ-1}; ...; x_{k-Δ-N_b}] - w Y_k
        = b̃ X_{k-Δ:k-Δ-N_b} - w Y_k

taking into account the perfect decision feedback assumption.


The FIR MMSE-DFE criterion is given by:

    {b_opt, w_opt} = arg min_{b,w} E[|e_k|²] = arg min_b min_w E[|e_k|²]

thus, we will perform the minimization in a nested manner (similar to the infinite-length case). We first fix b and find w in terms of b. Applying the orthogonality principle:

    E[e_k Y_k*] = 0   ⟹   E[(b̃ X_{k-Δ:k-Δ-N_b} - w Y_k) Y_k*] = 0
    ⟹   w E[Y_k Y_k*] = b̃ E[X_{k-Δ:k-Δ-N_b} Y_k*]

We need to evaluate now the various terms carefully.


    R_XY(Δ) = E[X_{k-Δ:k-Δ-N_b} Y_k*]
            = E[ [x_{k-Δ}; ...; x_{k-Δ-N_b}] (X_k* P* + Z_k*) ]
            = Ex J_Δ P*

where the (N_b+1) × (N_f+ν) matrix J_Δ is defined as:

    J_Δ = (1/Ex) E[ [x_{k-Δ}; ...; x_{k-Δ-N_b}] [x_k*, ..., x_{k-N_f-ν+1}*] ]

Assume first that x_{k-Δ} occurs in the information window, i.e., Δ ≤ N_f + ν - 1; then J_Δ ≠ 0 for i.i.d. symbols {x_k}. Now we can consider two cases:
- x_{k-Δ-N_b} occurs after x_{k-N_f-ν+1} (i.e., inside the window)
- x_{k-Δ-N_b} occurs before x_{k-N_f-ν+1} (i.e., outside the window)

Case I: Δ + N_b ≤ N_f + ν - 1 ⟹ x_{k-Δ-N_b} occurs in the information window {x_k, ..., x_{k-N_f-ν+1}} (usual case):

    J_Δ = (1/Ex) E[ [x_{k-Δ}; ...; x_{k-Δ-N_b}] [x_k*, ..., x_{k-N_f-ν+1}*] ]
        = [ 0_{(N_b+1)×Δ},  I_{(N_b+1)×(N_b+1)},  0_{(N_b+1)×(N_f+ν-1-Δ-N_b)} ]

Case II: Δ + N_b > N_f + ν - 1 (see book readers for details) ⟹ x_{k-Δ-N_b} does not appear in the observation window {x_k, ..., x_{k-N_f-ν+1}}:

    {x_k, ..., x_{k-Δ}, ..., x_{k-N_f-ν+1}, ..., x_{k-Δ-N_b}}
     └────── observation window ────────┘

thus, we have a truncation at the end. This is really an edge effect, which can be solved by using a shorter decision feedback window, i.e., truncating the filter b to this length (setting the rest of the taps to 0) and calculating the optimal b.
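In Case I the matrix J_Δ is just a selection matrix, which is easy to confirm numerically (hypothetical sizes below).

```python
import numpy as np

# Sketch: in Case I the matrix J_delta = [0 | I | 0] simply selects the
# window x_{k-delta}, ..., x_{k-delta-Nb} out of X_k, and J J* = I.
Nf, nu, Nb, delta = 4, 2, 2, 1            # hypothetical sizes; Case I holds
assert delta + Nb <= Nf + nu - 1

J = np.hstack([np.zeros((Nb + 1, delta)),
               np.eye(Nb + 1),
               np.zeros((Nb + 1, Nf + nu - 1 - delta - Nb))])

X = np.arange(Nf + nu, dtype=float)       # stand-in for [x_k, ..., x_{k-Nf-nu+1}]
assert np.allclose(J @ X, X[delta:delta + Nb + 1])   # picks the feedback window
assert np.allclose(J @ J.T, np.eye(Nb + 1))          # J J* = I (used below)
```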

We assume from now on only Case I. The error variance will be given by:

    σ²_e(Δ) = E[ |b̃ X_{k-Δ:k-Δ-N_b} - w Y_k|² ]
            = b̃ ( Ex I_{(N_b+1)} - Ex J_Δ P* R_YY^{-1} P J_Δ* Ex ) b̃*

where R_YY = Ex P P* + L N0 I_{N_f L}. Next, we apply the Matrix Inversion Lemma

    (A + B C B*)^{-1} = A^{-1} - A^{-1} B (C^{-1} + B* A^{-1} B)^{-1} B* A^{-1}

to the expression:

    σ²_e(Δ) = Ex b̃ ( I_{(N_b+1)} - J_Δ P* [ P P* + (L N0/Ex) I_{N_f L} ]^{-1} P J_Δ* ) b̃*

so that:

    P* [ P P* + (L N0/Ex) I_{N_f L} ]^{-1} P = I_{N_f+ν} - [ I_{N_f+ν} + (Ex/(L N0)) P* P ]^{-1}

by making:

    B = P*,   A = I_{N_f+ν},   C^{-1} = (L N0/Ex) I_{N_f L}

hence, we get the expression:

    σ²_e(Δ) = Ex b̃ ( I_{(N_b+1)} - J_Δ [ I_{N_f+ν} - ( I_{N_f+ν} + (Ex/(L N0)) P* P )^{-1} ] J_Δ* ) b̃*


Notice that for Case I, we had that J_Δ J_Δ* = I_{N_b+1}. Hence:

    σ²_e(Δ) = Ex b̃ ( I_{(N_b+1)} - J_Δ [ I_{N_f+ν} - ( I_{N_f+ν} + (Ex/(L N0)) P* P )^{-1} ] J_Δ* ) b̃*
            = Ex b̃ ( I_{N_b+1} - J_Δ J_Δ* + J_Δ ( I_{N_f+ν} + (Ex/(L N0)) P* P )^{-1} J_Δ* ) b̃*
            = Ex b̃ J_Δ ( I_{N_f+ν} + (Ex/(L N0)) P* P )^{-1} J_Δ* b̃*
            = L N0 b̃ J_Δ ( (L/SNR) I_{N_f+ν} + P* P )^{-1} J_Δ* b̃*
            = L N0 b̃ Q̃(Δ)^{-1} b̃*

where SNR = Ex/N0 and Q̃(Δ)^{-1} = J_Δ ( (L/SNR) I_{N_f+ν} + P* P )^{-1} J_Δ*.

In the case of infinite-length filters, we performed spectral factorization, while here we use the Cholesky decomposition of the matrix Q̃(Δ):

    Q̃(Δ) = G* S^{-1} G,   Q̃(Δ)^{-1} = G^{-1} S G^{-*}

where G is an upper triangular matrix (with unit diagonal) and thus G* is lower triangular. If G is upper triangular ⟹ G^{-1} is upper triangular ⟹ G^{-*} is lower triangular.

    Q̃(Δ) = G* S^{-1} G,   Q̃(Δ)^{-1} = G^{-1} S G^{-*}

Suppose we have the diagonal matrix S = diag(s_0(Δ), s_1(Δ), ..., s_{N_b}(Δ)) with the ordering property:

    s_0(Δ) ≤ s_1(Δ) ≤ ... ≤ s_{N_b}(Δ)

(this property can be ensured in the Cholesky decomposition).

Using the Cholesky decomposition, our minimization becomes:

    σ²_e(Δ) = L N0 b̃ [ G^{-1} S G^{-*} ] b̃* = L N0 (b̃ G^{-1}) S (b̃ G^{-1})*

To minimize this, we should pick off s_0(Δ), i.e., the top-left corner element of S, thus:

    b̃ G^{-1} = [1, 0, ..., 0]

which implies that b̃ must be the first row of the upper triangular matrix G. Writing G = [g(0); g(1); ...; g(N_b)] in terms of its rows, and since G^{-1} G = I:

    b̃ = [1, 0, ..., 0] G = g(0)   ⟹   b̃_opt = g(0)

The results can be summarized as follows:

    b̃_opt = g(0)
    w_opt = b̃_opt R_XY(Δ) R_YY^{-1} = g(0) J_Δ [ (L/SNR) I_{N_f+ν} + P* P ]^{-1} · P*
            └──────────── feed-forward filter ────────────┘   matched filter: P*

which follows because:

    P* ( P P* + (L/SNR) I ) = ( P* P + (L/SNR) I ) P*
    ⟹   P* ( P P* + (L/SNR) I )^{-1} = ( P* P + (L/SNR) I )^{-1} P*

Regarding the error variance, we get:

    σ²_e(Δ) = L N0 s_0(Δ)

This error is evaluated for each candidate Δ, and the best value of Δ is chosen.
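The monic minimizer above can also be computed without an explicit Cholesky factorization: minimizing b̃ M b̃* with M = Q̃(Δ)^{-1} = J_Δ [(L/SNR) I + P* P]^{-1} J_Δ* subject to the monic constraint has the closed-form solution b̃* = M^{-1} e_1 / (e_1' M^{-1} e_1), which is equivalent to taking the first row of G. Below is a sketch with hypothetical channel taps, checked against random monic competitors.

```python
import numpy as np

# Sketch (hypothetical channel): the monic b~ minimizing
#   sigma_e^2(delta) = L N0 * b~ [J Q^{-1} J*] b~*
# is, with M = J Q^{-1} J* and e1 the first unit vector,
#   b~* = M^{-1} e1 / (e1' M^{-1} e1),  sigma_e^2 = L N0 / (e1' M^{-1} e1),
# equivalent to the first row of G in the Cholesky route above.
L, nu, Nf, Nb, delta, Ex, N0 = 2, 2, 4, 2, 1, 1.0, 0.05
rng = np.random.default_rng(2)
p = rng.standard_normal((nu + 1, L))

P = np.zeros((Nf * L, Nf + nu))            # block-Toeplitz channel matrix
for i in range(Nf):
    for n in range(nu + 1):
        P[i * L:(i + 1) * L, i + n] = p[n]

Q = (L * N0 / Ex) * np.eye(Nf + nu) + P.T @ P
J = np.hstack([np.zeros((Nb + 1, delta)), np.eye(Nb + 1),
               np.zeros((Nb + 1, Nf + nu - 1 - delta - Nb))])
M = J @ np.linalg.inv(Q) @ J.T             # (Nb+1) x (Nb+1) = Q~(delta)^{-1}

e1 = np.zeros(Nb + 1); e1[0] = 1.0
Minv_e1 = np.linalg.solve(M, e1)
b_opt = Minv_e1 / Minv_e1[0]               # monic optimal feedback filter b~
sigma2_dfe = L * N0 / Minv_e1[0]           # = L N0 s_0(delta)

# Any other monic b~ does no better than b_opt
for _ in range(50):
    b = np.concatenate([[1.0], rng.standard_normal(Nb)])
    assert L * N0 * b @ M @ b >= sigma2_dfe - 1e-12
```

In practice the error σ²_e(Δ) is evaluated over the candidate delays as stated above, and the Δ with the smallest value is kept.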
