
ADAPTIVE FILTER THEORY COURSE ASSIGNMENT

Prof. Dr. Simon HAYKIN

COMPUTER EXPERIMENT ON ADAPTIVE EQUALIZATION

Ulas GUNTURKUN
0326155
McMaster University CRL
Adaptive Systems Laboratory

CONTENTS
1. Application of LMS (Least Mean Square) Algorithm to Adaptive Equalization
2. Application of NLMS (Normalized Least Mean Square) Algorithm to Adaptive Equalization
3. Application of GAL (Gradient Adaptive Lattice) Algorithm to Adaptive Equalization
4. Application of RLS (Recursive Least Squares) Algorithm to Adaptive Equalization
5. Comparison of the Algorithms In Terms of Convergence Rates and Robustness
6. References

1. LMS ALGORITHM APPLIED TO ADAPTIVE EQUALIZATION


1.1. Effect of Eigenvalue Spread
The LMS algorithm performs better with orthogonal (uncorrelated) input data than with correlated data. The channel parameter W controls the eigenvalue spread χ(R) of the correlation matrix of the equalizer tap inputs: increasing W increases χ(R).
As can be seen in Figure 1.1, increasing the eigenvalue spread slows down the convergence of the adaptive equalizer and also raises the steady-state value of the average squared error. For each eigenvalue spread, an approximation to the ensemble-average learning curve is obtained by averaging the instantaneous squared error e²(n) versus n over 200 independent trials of the computer experiment.

To run the experiment in MATLAB, open the directory LMS and type plot_lms_eig_effect in the command prompt.
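
As a reference for how the experiment is set up, the following is a minimal MATLAB sketch of one trial (raised-cosine channel, Bernoulli data, additive noise of variance 0.001, and the LMS update). The variable names are my own; the bundled script additionally performs the 200-trial ensemble averaging and the plotting.

    % One trial of the LMS adaptive-equalization experiment (illustrative sketch;
    % parameter values follow the text, variable names are my own).
    W  = 3.1;  M = 11;  mu = 0.075;  N = 500;  delay = 7;
    h  = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));     % raised-cosine channel, parameter W
    a  = sign(randn(1, N));                     % Bernoulli (+/-1) data sequence
    v  = sqrt(0.001)*randn(1, N);               % additive white noise, variance 0.001
    x  = filter(h, 1, a) + v;                   % channel output = equalizer input
    w  = zeros(M, 1);                           % equalizer taps, zero initial condition
    e  = zeros(1, N);
    for n = M:N
        u    = x(n:-1:n-M+1).';                 % regressor (tap-input vector)
        d    = a(n - delay);                    % desired response (delayed data symbol)
        e(n) = d - w.'*u;                       % estimation error
        w    = w + mu*e(n)*u;                   % LMS weight update
    end
    % Averaging e.^2 over ~200 independent trials gives the learning curves of Fig. 1.1.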
Figure 1.1. Learning curves (ensemble-average mean squared error versus number of iterations n) of the LMS algorithm for an adaptive equalizer with number of taps M = 11, step-size parameter μ = 0.075, and varying eigenvalue spread χ(R) (W = 2.9, 3.1, 3.3, 3.5).

1.2. Effect of Step-Size Parameter


To observe the effect of the step-size parameter, W was fixed at 3.1, yielding an eigenvalue spread of 11.1238. The step-size parameter μ was assigned one of the three values 0.075, 0.025, and 0.0075.
As illustrated in Fig. 1.2, the rate of convergence depends strongly on the step-size parameter. The LMS algorithm can operate in a non-stationary environment and thereby track statistical variations in that environment [1], [3]; the choice of step-size parameter therefore depends on the environment in which LMS will operate. For the large step-size parameter (μ = 0.075), the equalizer converged to steady-state conditions in approximately 120 iterations. On the other hand, when μ is small (equal to 0.0075), the rate of convergence slowed down by more than an order of magnitude. The results also show that the steady-state value of the average squared error (and hence the misadjustment) increases with increasing μ.

To run the experiment in MATLAB, open the directory LMS and type plot_lms_mu_effect in the command prompt.
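
The eigenvalue spread quoted above can be checked directly from the channel coefficients. A short sketch, assuming the quindiagonal structure of the input correlation matrix for this channel (variable names are my own):

    % Eigenvalue spread chi(R) of the equalizer input correlation matrix for a given W.
    W  = 3.1;  M = 11;  sigma_v2 = 0.001;
    h  = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));
    r0 = h(1)^2 + h(2)^2 + h(3)^2 + sigma_v2;   % r(0)
    r1 = h(1)*h(2) + h(2)*h(3);                 % r(1)
    r2 = h(1)*h(3);                             % r(2); all higher lags are zero
    R  = toeplitz([r0 r1 r2 zeros(1, M-3)]);    % symmetric Toeplitz correlation matrix
    lam = eig(R);
    chi = max(lam)/min(lam)                     % approximately 11.1 for W = 3.1
    % The text's LMS step-size range is 0 < mu < 2/max(lam).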
Figure 1.2. Learning curves of the LMS algorithm for an adaptive equalizer with number of taps M = 11, fixed eigenvalue spread (W = 3.1), and varying step-size parameter μ = 0.0075, 0.025, 0.075.

2. NLMS ALGORITHM APPLIED TO ADAPTIVE EQUALIZATION


Whenever adaptation is performed, it should perturb the system in the smallest way possible [3] (the principle of minimum disturbance).
In the NLMS formulation, the squared Euclidean norm of the change in the tap-weight vector, ‖w(n+1) − w(n)‖², is minimized subject to the constraint wᵀ(n+1)u(n) = d(n).
This is a constrained optimization problem, and its solution gives rise to the NLMS algorithm.
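
A minimal MATLAB sketch of the resulting NLMS recursion, applied to the same equalization experiment as in Section 1, is given below (illustrative only; the regularization constant delta is my own choice, and the bundled script plot_nlms_eig_effect performs the full ensemble-averaged experiment).

    % NLMS version of the Section 1 sketch (illustrative values).
    W = 3.1;  M = 11;  mu_tilde = 0.75;  delta = 1e-3;  N = 500;  delay = 7;
    h = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));
    a = sign(randn(1, N));
    x = filter(h, 1, a) + sqrt(0.001)*randn(1, N);
    w = zeros(M, 1);  e = zeros(1, N);
    for n = M:N
        u    = x(n:-1:n-M+1).';
        e(n) = a(n - delay) - w.'*u;
        w    = w + (mu_tilde/(delta + u.'*u))*e(n)*u;   % regularized, normalized update
    end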
2.1. Effect of Eigenvalue Spread
Figure 2.1 shows the results of this experiment, which confirm the effectiveness of the principle of minimum disturbance. Even for a high eigenvalue spread, the algorithm has a less perturbed learning curve than LMS. On the other hand, increasing the eigenvalue spread slows down the rate of convergence, just as for LMS. In addition, the NLMS algorithm has a smaller misadjustment, a consequence of minimizing the Euclidean norm of the weight change at each adaptation.
To run the experiment in MATLAB, open the directory NLMS and type plot_nlms_eig_effect in the command prompt.
Figure 2.1. Learning curves of the NLMS algorithm for an adaptive equalizer with number of taps M = 11, step-size parameter μ = 0.75, and varying eigenvalue spread χ(R) (W = 2.9, 3.1, 3.3, 3.5).

2.2. Effect of Step-Size Parameter


The NLMS algorithm uses a normalized version of the LMS step-size parameter. Since the squared Euclidean norm of the input (regressor) vector is used for the normalization, the effective step-size parameter is time-varying. To avoid the numerical difficulty that arises when the input vector becomes very small, a small constant is added to the normalizing term; the weight update rule is thus regularized. The choice of the step-size parameter also differs from the LMS case: for NLMS it can lie anywhere between zero and 2, whereas for LMS it should lie within the range 0 < μ < 2/λ_max.
Figure 2.2 illustrates the different convergence rates of the NLMS algorithm with W fixed at 3.1 and the step-size parameter set to 0.075, 0.75, and 1.25.
Comparing Figures 1.2 and 2.2, the NLMS algorithm is faster than its unnormalized counterpart.

To run the experiment in MATLAB, open the directory NLMS and type plot_nlms_mu_effect in the command prompt.
Figure 2.2. Learning curves of the NLMS algorithm for an adaptive equalizer with number of taps M = 11, fixed eigenvalue spread (W = 3.1), and varying step-size parameter μ = 0.075, 0.75, 1.25.

3. GAL ALGORITHM APPLIED TO ADAPTIVE EQUALIZATION


The gradient adaptive lattice (GAL) algorithm is built on two features [3]:
- a lattice predictor as the main structure;
- use of the stochastic gradient for adjusting the lattice parameters.
The backward prediction errors, which are uncorrelated with one another, constitute the regressor used for estimating the desired response. The GAL filter therefore works with an orthogonalized input vector, which speeds up convergence compared with the LMS algorithm operating on correlated inputs.
Although it converges much faster than LMS and NLMS, the steady-state misadjustment of the GAL algorithm is much larger than that of LMS.
The gradient adaptive lattice is an order-recursive filter derived from the stochastic-gradient approach, which is easy to design but approximate in nature. The simplicity of design results from the fact that each stage of the lattice predictor is characterized by a single reflection coefficient. In contrast, order-recursive adaptive filters based on the least-squares approach are exact, but their algorithmic formulations are more demanding in software-coding terms [1].
Figure 3 confirms the high misadjustment and fast convergence obtained by the GAL algorithm. As shown in Figure 3, GAL has almost the same rate of convergence for all the eigenvalue spreads because it benefits from the uncorrelated backward prediction errors: the eigenvalue spread affects only the misadjustment, whereas it affected both the rate of convergence and the misadjustment in the LMS and NLMS cases.
Another difference between the GAL algorithm and the other algorithms studied here is that the desired-response estimator of the GAL is not a tapped-delay line, as it is for all the others. Hence, I used 2 delay units instead of 7 to align the desired response with the output of the filter.
To run the experiment in MATLAB, open the directory GAL and type gal_eig_effect in the command prompt.
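
The following simplified, real-valued MATLAB sketch illustrates the GAL recursions for this equalizer: per-stage reflection-coefficient updates normalized by a smoothed error energy, plus a joint-process estimator driven by the backward prediction errors. The smoothing constant beta and the plain NLMS-type update of the regression coefficients are my own simplifications; the full algorithm in the GAL directory uses the parameter set quoted in the caption of Figure 3.

    % Simplified GAL equalizer sketch (single trial; variable names and beta are illustrative).
    W = 3.1;  M = 11;  mu = 0.098;  beta = 0.9;  N = 500;  delay = 2;
    h = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));
    a = sign(randn(1, N));
    x = filter(h, 1, a) + sqrt(0.001)*randn(1, N);
    kappa = zeros(M-1, 1);  E = ones(M-1, 1);     % reflection coefficients, error energies
    hcoef = zeros(M, 1);                          % joint-process (regression) coefficients
    b_old = zeros(M, 1);  e = zeros(1, N);
    for n = delay+1:N
        f = zeros(M, 1);  b = zeros(M, 1);
        f(1) = x(n);  b(1) = x(n);                % order 0: both prediction errors equal the input
        for m = 2:M                               % lattice stages (order recursion)
            f(m)   = f(m-1) + kappa(m-1)*b_old(m-1);          % forward prediction error
            b(m)   = b_old(m-1) + kappa(m-1)*f(m-1);          % backward prediction error
            E(m-1) = beta*E(m-1) + (1-beta)*(f(m-1)^2 + b_old(m-1)^2);
            kappa(m-1) = kappa(m-1) - (mu/E(m-1))*(f(m)*b_old(m-1) + b(m)*f(m-1));
        end
        e(n)  = a(n-delay) - hcoef.'*b;           % joint-process estimation error
        hcoef = hcoef + mu*e(n)*b/(b.'*b + 1e-6); % NLMS-type update on the backward errors
        b_old = b;
    end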
Figure 3. Learning curves of the GAL algorithm for an adaptive equalizer with number of taps M = 11, fixed step-size parameter μ = 0.098, varying eigenvalue spread (W = 2.9, 3.1, 3.3, 3.5), and the remaining lattice parameters set to 0.4, 0.4, and a = 0.6.

4. RLS ALGORITHM APPLIED TO ADAPTIVE EQUALIZATION


Given the least-squares estimate of the tap-weight vector of the filter at iteration n−1, we may compute the updated estimate of the vector at iteration n upon the arrival of new data. We refer to the resulting algorithm as the recursive least-squares (RLS) algorithm [1].
An important feature of this filter is that its rate of convergence is typically an order of magnitude faster than that of the simple LMS filter, because the RLS filter whitens the input data by using the inverse correlation matrix of the data, assumed to be of zero mean. This improvement in performance, however, is achieved at the expense of an increase in the computational complexity of the RLS filter [1].
The tap weights of the transversal filter are treated as fixed over the observation interval 1 ≤ i ≤ n for which the cost function is defined.
In contrast to LMS, RLS is relatively insensitive to variations in the eigenvalue spread of the correlation matrix; this is confirmed in Figure 4 by the closely matched misadjustment values and the very fast, nearly identical convergence rates for the four eigenvalue spreads.
Since the noise level in the tap inputs in this experiment is low (SNR = 30 dB, equivalently a noise variance of σv² = 0.001), the RLS algorithm exhibits an exceptionally fast rate of convergence. The exponential weighting factor λ, which should lie in the interval 0 < λ ≤ 1, is chosen equal to one; this corresponds to infinite memory, which is desirable at such a high SNR. With this choice, the beneficial effect of adding the regularizing term δλⁿ‖w(n)‖² to the cost function is not forgotten over time, as it would be if λ were less than unity.
To run the experiment in MATLAB, open the directory RLS and type rls_eig_effect in the command prompt.
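
A minimal MATLAB sketch of the RLS recursion for this equalizer, using the exponential weighting factor λ = 1 and regularization δ = 0.004 quoted in Figure 4 (variable names are my own; the bundled script performs the ensemble-averaged experiment):

    % RLS recursion for the same equalization experiment (illustrative sketch).
    W = 3.1;  M = 11;  lambda = 1;  delta = 0.004;  N = 500;  delay = 7;
    h = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));
    a = sign(randn(1, N));
    x = filter(h, 1, a) + sqrt(0.001)*randn(1, N);
    w = zeros(M, 1);  P = (1/delta)*eye(M);       % P(0) = inverse of (delta * I)
    e = zeros(1, N);
    for n = M:N
        u    = x(n:-1:n-M+1).';
        k    = (P*u)/(lambda + u.'*P*u);          % gain vector
        e(n) = a(n - delay) - w.'*u;              % a priori estimation error
        w    = w + k*e(n);                        % tap-weight update
        P    = (P - k*(u.'*P))/lambda;            % recursion for the inverse correlation matrix
    end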
Figure 4. Learning curves of the RLS algorithm for an adaptive equalizer with number of taps M = 11, λ = 1, δ = 0.004, and varying eigenvalue spread (W = 2.9, 3.1, 3.3, 3.5).

5. COMPARISON OF THE ALGORITHMS IN TERMS OF CONVERGENCE RATES AND ROBUSTNESS

As shown in Figure 5.1, the convergence rates of the algorithms can be ordered, from fastest to slowest, as:
RLS > GAL >> NLMS > LMS

To run the experiment in MATLAB, open the directory ROBUSTNESS and type convergence_comparison in the command prompt.
Figure 5.1. Learning curves of the LMS, NLMS, GAL, and RLS algorithms for an adaptive equalizer with number of taps M = 11 and W = 3.1 for all; μ = 0.075 for LMS; μ = 1.25 for NLMS; μ = 0.098 and lattice parameters 0.4, 0.4, a = 0.6 for GAL; λ = 1, δ = 0.004 for RLS.


But can we say that the same ordering holds for the robustness of the algorithms as well as for their rates of convergence?
The last part of this study explores the answer to this question.
To quantify robustness, the basic robustness question can be posed for any estimation algorithm:
Can small disturbances and modeling errors lead to large estimation errors?
This question implies that any approach to robust estimation requires a notion of largeness and smallness for the signals involved. Many measures are possible; one that is widely used in practice and lends itself to analytic tractability is the energy of a signal. This is what leads to H∞ theory [2].

The system is robust if the energy gain is less than unity under all possible operating conditions.
The model error eo(n), i.e. the part of the model output d(n) that is not explained by its dependence on the input u(n), is one source of disturbance.
Since the optimal weights are unknown, the initial value w(0) assigned to the weight vector w(n) in the algorithm results in another form of disturbance [3].
Energy gain of the algorithm = (energy of the output estimation error) / (energy of the disturbances due to eo(n) and w(0))

Analytical results indicate that, provided 0 < μ < 1/‖u(n)‖² for all n, the energy gain of the LMS algorithm is always less than unity despite the possible disturbances. The algorithm is therefore robust in the H∞-norm sense. However, it is possible for the RLS algorithm to have an energy gain greater than unity, which makes it potentially non-robust.
For the simulations on robustness, I defined the model error as a random sequence with a normal distribution N(0, 10⁻⁴), which is quite a small source of disturbance.
In addition, I started all the algorithms with an initial weight vector whose elements are all equal to 2, to make sure that it deviates from the optimal weights.
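
One plausible way of estimating the energy gain for the disturbed LMS run is sketched below; the exact bookkeeping in robustness_lms_rls may differ. Here the converged weight vector is used as a stand-in for the unknown optimal weights, and the model error eo(n) is injected into the desired response.

    % Energy-gain estimate for LMS under the disturbed conditions described above (sketch).
    W = 3.1;  M = 11;  mu = 0.075;  N = 500;  delay = 7;
    h  = 0.5*(1 + cos(2*pi/W*((1:3) - 2)));
    a  = sign(randn(1, N));
    x  = filter(h, 1, a) + sqrt(0.001)*randn(1, N);
    eo = 1e-2*randn(1, N);                   % model error ~ N(0, 1e-4)
    w0 = 2*ones(M, 1);  w = w0;              % deliberately offset initial weight vector
    e  = zeros(1, N);
    for n = M:N
        u    = x(n:-1:n-M+1).';
        e(n) = a(n - delay) + eo(n) - w.'*u; % disturbed desired response
        w    = w + mu*e(n)*u;
    end
    % Ratio of output-error energy to disturbance energy (due to eo(n) and w(0));
    % values at or below 1 are consistent with robustness in the H-infinity sense.
    gain = sum(e.^2)/(sum(eo.^2) + norm(w - w0)^2)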
Figure 5.2 illustrates the robustness comparison between the LMS and RLS algorithms. LMS is a model-independent algorithm and is robust in the H∞-norm sense, while RLS is model-dependent and potentially non-robust.
Figure 5.2 supports these statements: the steady-state error of the RLS algorithm deviates from its undisturbed performance by almost 20 dB, while the LMS algorithm converges to the same state as its undisturbed version.
The measured energy gains of the two algorithms are:
Energy gain of the LMS algorithm: 0.916616
Energy gain of the RLS algorithm: 1474.473592
which confirms the statements above.

To observe Figure 5.2 and compute the energy gains of the two algorithms, open the directory ROBUSTNESS and type robustness_lms_rls in the MATLAB command prompt.
I also simulated the robustness performance of the other algorithms. The results appear in Figure 5.3.
Although NLMS loses its rate-of-convergence advantage over LMS, it still performs about as well as LMS. GAL, on the other hand, becomes very slow under the disturbed conditions, although it finally converges to almost the same steady-state value as its undisturbed version. RLS shows the worst performance under the disturbed operating conditions.

To observe Figure 5.3, open the directory ROBUSTNESS and type robustness_comparison in the MATLAB command prompt.
The robustness of the algorithms can therefore be ordered, from best to worst, as LMS > NLMS > GAL > RLS, which is the opposite of the ordering obtained for the rate of convergence.

Figure 5.2. Learning curves of the LMS and RLS algorithms under disturbed (dotted lines) and undisturbed (solid lines) operating conditions, for an adaptive equalizer with number of taps M = 11 and W = 3.1 for both; μ = 0.075 for LMS; λ = 1, δ = 0.004 for RLS.

Figure 5.3. Learning curves of the LMS, NLMS, GAL, and RLS algorithms under disturbed (dotted lines) and undisturbed (solid lines) operating conditions, for an adaptive equalizer with number of taps M = 11 and W = 3.1 for all; μ = 0.075 for LMS; μ = 1.25 for NLMS; μ = 0.098 and lattice parameters 0.4, 0.4, a = 0.6 for GAL; λ = 1, δ = 0.004 for RLS.

REFERENCES:
[1] S. Haykin, Adaptive Filter Theory, 4th edition, Prentice Hall, New Jersey, 2002.
[2] B. Hassibi, "On the robustness of LMS filters," in Least-Mean-Square Adaptive Filters, S. Haykin and B. Widrow, Eds., John Wiley & Sons, 2003.
[3] S. Haykin, Adaptive Filter Theory lecture notes, McMaster University, Oct.-Dec. 2003.
