Ulas GUNTURKUN
0326155
McMaster University CRL
Adaptive Systems Laboratory
CONTENTS
1. Application of LMS (Least Mean Square) Algorithm to Adaptive Equalization
2. Application of NLMS (Normalized Least Mean Square) Algorithm to Adaptive Equalization
3. Application of GAL (Gradient Adaptive Lattice) Algorithm to Adaptive Equalization
4. Application of RLS (Recursive Least Squares) Algorithm to Adaptive Equalization
5. Comparison of the Algorithms In Terms of Convergence Rates and Robustness
6. References
To run the experiment in MATLAB, open the directory LMS and type plot_lms_eig_effect; at the command prompt.
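The LMS recursion driving these learning curves is w(n+1) = w(n) + mu u(n) e(n). The following is a minimal Python/NumPy sketch of such an experiment; the original code is MATLAB, and the channel taps, decision delay, and noise level used here are illustrative assumptions, not the values in plot_lms_eig_effect.

```python
import numpy as np

def lms_equalizer(u, d, M=11, mu=0.075):
    """LMS adaptive equalizer: returns final weights and error sequence."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_vec = u[n - M + 1:n + 1][::-1]   # tap inputs [u(n), ..., u(n-M+1)]
        e[n] = d[n] - w @ u_vec            # estimation error
        w = w + mu * e[n] * u_vec          # LMS weight update
    return w, e

# Toy run: +/-1 symbols through an assumed dispersive channel, noisy output.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=3000)
h = np.array([0.3, 1.0, 0.3])                      # hypothetical channel
u = np.convolve(a, h)[:len(a)] + 0.01 * rng.standard_normal(len(a))
delay = 6                                          # assumed decision delay
d = np.concatenate([np.zeros(delay), a[:-delay]])
w, e = lms_equalizer(u, d, M=11, mu=0.05)
```

Averaging e(n)^2 over many independent runs of this loop produces the ensemble-average learning curves plotted in the figures.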
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500); one curve per eigenvalue spread, W = 2.9, 3.1, 3.3, 3.5]
Figure 1.1. Learning Curves of the LMS algorithm for an adaptive equalizer with number of taps M = 11, step-size parameter μ = 0.075, and varying eigenvalue spread χ(R).
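The eigenvalue spread in Figure 1.1 is controlled by the parameter W of the raised-cosine channel of the equalization experiment in [1], h_k = (1/2)[1 + cos(2π(k - 2)/W)] for k = 1, 2, 3. The sketch below shows how χ(R) can be computed in Python/NumPy; the noise variance 0.001 is assumed to match the experiment in [1].

```python
import numpy as np

def eigenvalue_spread(W, M=11, noise_var=0.001):
    """Eigenvalue spread chi(R) of the M x M correlation matrix of the
    equalizer input for the raised-cosine channel of the experiment."""
    k = np.arange(1, 4)
    h = 0.5 * (1 + np.cos(2 * np.pi / W * (k - 2)))   # h_1, h_2, h_3
    r = np.zeros(M)                                   # autocorrelation r(0..M-1)
    for lag in range(3):                              # r(lag) = 0 for lag >= 3
        r[lag] = np.sum(h[:3 - lag] * h[lag:])
    r[0] += noise_var                                 # additive channel noise
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    lam = np.linalg.eigvalsh(R)                       # ascending eigenvalues
    return lam[-1] / lam[0]

# Spread grows with W, matching the ordering of the curves in Figure 1.1.
spreads = {W: eigenvalue_spread(W) for W in (2.9, 3.1, 3.3, 3.5)}
```

Larger W widens the channel's raised-cosine lobe, which deepens the spectral null of the channel and inflates χ(R), slowing LMS convergence.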
To run the experiment in MATLAB, open the directory LMS and type plot_lms_mu_effect; at the command prompt.
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-1500); one curve per step size, μ = 0.0075, 0.025, 0.075]
Figure 1.2. Learning Curves of the LMS algorithm for an adaptive equalizer with number of taps M = 11, fixed eigenvalue spread, and varying step-size parameter μ.
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500); one curve per eigenvalue spread, W = 2.9, 3.1, 3.3, 3.5]
Figure 2.1. Learning Curves of the NLMS algorithm for an adaptive equalizer with number of taps M = 11, step-size parameter μ = 0.75, and varying eigenvalue spread χ(R).
Figure 2.2 illustrates the different convergence rates of the NLMS algorithm: W was fixed at 3.1, and the algorithm was run for the step-size parameters 0.075, 0.75, and 1.25.
Comparing Figures 1.2 and 2.2, the NLMS algorithm converges faster than its unnormalized counterpart.
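The speed-up comes from normalizing the step size by the tap-input energy, w(n+1) = w(n) + (mu_tilde / ||u(n)||^2) u(n) e(n). A minimal Python/NumPy sketch follows; the regularization constant eps and the toy channel are assumptions, not part of the original MATLAB code.

```python
import numpy as np

def nlms_equalizer(u, d, M=11, mu_tilde=0.75, eps=1e-6):
    """NLMS adaptive equalizer: step size normalized by tap-input energy."""
    w = np.zeros(M)
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_vec = u[n - M + 1:n + 1][::-1]    # tap inputs [u(n), ..., u(n-M+1)]
        e[n] = d[n] - w @ u_vec             # estimation error
        # normalized step: large inputs no longer force a tiny mu for stability
        w = w + (mu_tilde / (eps + u_vec @ u_vec)) * e[n] * u_vec
    return w, e

# Toy run: +/-1 symbols through an assumed dispersive channel, noisy output.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=3000)
h = np.array([0.3, 1.0, 0.3])                      # hypothetical channel
u = np.convolve(a, h)[:len(a)] + 0.01 * rng.standard_normal(len(a))
delay = 6                                          # assumed decision delay
d = np.concatenate([np.zeros(delay), a[:-delay]])
w, e = nlms_equalizer(u, d)
```

Because the effective step size adapts to the input power, mu_tilde can be chosen close to 1 regardless of the signal level, which is why the NLMS curves settle faster.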
To run the experiment in MATLAB, open the directory NLMS and type plot_nlms_mu_effect; at the command prompt.
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-1500); one curve per step size, μ = 0.075, 0.75, 1.25]
Figure 2.2. Learning Curves of the NLMS algorithm for an adaptive equalizer with number of taps M = 11, fixed eigenvalue spread, and varying step-size parameter μ.
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500); one curve per eigenvalue spread, W = 2.9, 3.1, 3.3, 3.5]
Figure 3. Learning Curves of the GAL algorithm for an adaptive equalizer with number of taps M = 11, fixed step-size parameter 0.098, varying eigenvalue spreads, and lattice parameters 0.4, 0.4, and a = 0.6.
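GAL adapts the reflection coefficients of a lattice predictor by stochastic gradient descent, with the step normalized by a smoothed estimate of the prediction-error energy. The following single-stage Python/NumPy sketch shows the mechanism only (not the full joint-process equalizer of the experiment); the parameter values and the AR(1) test signal are illustrative assumptions.

```python
import numpy as np

def gal_stage(x, mu_tilde=0.01, beta=0.9, delta=1e-2):
    """Single-stage gradient-adaptive lattice predictor (sketch).
    Returns the trajectory of the reflection coefficient kappa."""
    kappa = 0.0
    E = delta                  # smoothed prediction-error energy estimate
    b_delayed = 0.0            # b_0(n-1): delayed order-0 backward error
    k_hist = np.zeros(len(x))
    for n, xn in enumerate(x):
        f1 = xn + kappa * b_delayed        # order-1 forward prediction error
        b1 = b_delayed + kappa * xn        # order-1 backward prediction error
        E = beta * E + (1 - beta) * (xn ** 2 + b_delayed ** 2)
        kappa -= (mu_tilde / E) * (f1 * b_delayed + b1 * xn)  # gradient step
        b_delayed = xn                     # b_0(n) = x(n), delayed for next step
        k_hist[n] = kappa
    return k_hist

# Toy check: for an AR(1) process x(n) = 0.8 x(n-1) + v(n), the optimal
# first reflection coefficient is -r(1)/r(0) = -0.8.
rng = np.random.default_rng(1)
v = rng.standard_normal(20000)
x = np.zeros(20000)
for n in range(1, 20000):
    x[n] = 0.8 * x[n - 1] + v[n]
k_hist = gal_stage(x)
```

The energy normalization (mu_tilde / E) plays the same role for each lattice stage that the norm normalization plays in NLMS, which is why GAL is less sensitive to the eigenvalue spread than LMS.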
The beneficial effect of adding the regularizing term δλⁿ‖w(n)‖² to the cost function is not forgotten with time, which would be the case if the forgetting factor λ were less than unity.
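With λ = 1 and δ = 0.004 as above, the standard exponentially weighted RLS recursion can be sketched in Python/NumPy as follows; the recursion mirrors the textbook algorithm of [1], while the toy channel and decision delay are assumptions for illustration.

```python
import numpy as np

def rls_equalizer(u, d, M=11, lam=1.0, delta=0.004):
    """Exponentially weighted RLS equalizer (textbook recursion)."""
    w = np.zeros(M)
    P = np.eye(M) / delta                 # P(0) = delta^{-1} I
    e = np.zeros(len(u))
    for n in range(M - 1, len(u)):
        u_vec = u[n - M + 1:n + 1][::-1]  # tap inputs [u(n), ..., u(n-M+1)]
        pi = P @ u_vec
        k = pi / (lam + u_vec @ pi)       # gain vector
        e[n] = d[n] - w @ u_vec           # a priori estimation error
        w = w + k * e[n]
        P = (P - np.outer(k, pi)) / lam   # inverse-correlation matrix update
    return w, e

# Toy run: +/-1 symbols through an assumed dispersive channel, noisy output.
rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=3000)
h = np.array([0.3, 1.0, 0.3])                      # hypothetical channel
u = np.convolve(a, h)[:len(a)] + 0.01 * rng.standard_normal(len(a))
delay = 6                                          # assumed decision delay
d = np.concatenate([np.zeros(delay), a[:-delay]])
w, e = rls_equalizer(u, d)
```

Because P(n) tracks the inverse of the input correlation matrix, the convergence of RLS is essentially independent of the eigenvalue spread, at the cost of O(M^2) operations per iteration.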
To run the experiment in MATLAB, open the directory RLS and type rls_eig_effect in the command prompt.
[Figure: learning curve, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500); one curve per eigenvalue spread, W = 2.9, 3.1, 3.3, 3.5]
Figure 4. Learning Curves of the RLS algorithm for an adaptive equalizer with number of taps M = 11, λ = 1, δ = 0.004.
To run the experiment in MATLAB, open the directory ROBUSTNESS, and type
convergence_comparison in the command prompt.
[Figure: learning curves of the LMS, NLMS, GAL, and RLS algorithms, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500)]
Figure 5.1. Learning Curves of the LMS, NLMS, GAL, and RLS algorithms for an adaptive equalizer with number of taps M = 11, W = 3.1 for all, μ = 0.075 for LMS, μ = 1.25 for NLMS, step-size 0.098 with parameters 0.4, 0.4, and a = 0.6 for GAL, and λ = 1, δ = 0.004 for RLS.
The system is robust if the energy gain is less than unity under all possible operating conditions.
The model error e_o(n) is a source of disturbance, and it arises because the input u(n) depends on the output d(n) of the model.
Since the optimal weights are unknown, the initial value w(0) assigned to the weight vector w(n) results in another form of disturbance. [3]
Energy Gain of the Algorithm = (energy of the estimation errors at the output) / (energy of the disturbances)
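Taking the disturbances to be the initial weight error w(0) - w_o and the model-error sequence e_o(n), as described above, the empirical energy gain can be computed as in this short Python sketch; the variable names are assumptions, not those of the MATLAB scripts.

```python
import numpy as np

def energy_gain(e, e_o, w0, w_opt):
    """Empirical energy gain: output error energy divided by total
    disturbance energy (initial weight error plus model-error energy)."""
    disturbance = np.sum(np.abs(w0 - w_opt) ** 2) + np.sum(np.abs(e_o) ** 2)
    return np.sum(np.abs(e) ** 2) / disturbance

# Toy numbers: small output errors, unit model error, unit initial weight error.
g = energy_gain(e=np.array([0.1, 0.1]), e_o=np.array([1.0]),
                w0=np.zeros(2), w_opt=np.array([1.0, 0.0]))
```

A gain below unity, as in this toy case, is exactly the robustness condition stated above: the algorithm does not amplify the disturbance energy at its output.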
To observe Figure 5.2 and compute the energy gains of the two algorithms, open the directory ROBUSTNESS, and type robustness_lms_rls in the MATLAB command prompt.
I also simulated the robustness performance of the other algorithms; the results appear in Figure 5.3.
Although it loses its convergence-rate advantage over LMS, NLMS still performs about as well as LMS. GAL, on the other hand, becomes very slow under the disturbed conditions, although it eventually converges to almost the same steady-state value as in the undisturbed case. Finally, RLS shows the worst performance under the disturbed operating conditions.
To observe Figure 5.3, open the directory ROBUSTNESS, and type robustness_comparison in the MATLAB command prompt.
Thus, the robustness of the algorithms can be ordered from best to worst as LMS > NLMS > GAL > RLS, which is the opposite of the ordering obtained for the rate of convergence.
[Figure: learning curves of the LMS and RLS algorithms under disturbed and undisturbed conditions, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500)]
Figure 5.2. Learning Curves of the LMS and RLS algorithms under disturbed and undisturbed conditions for an adaptive equalizer with number of taps M = 11, W = 3.1 for both, μ = 0.075 for LMS, and λ = 1, δ = 0.004 for RLS.
[Figure: learning curves of the LMS, NLMS, GAL, and RLS algorithms under disturbed and undisturbed conditions, semilog plot of ensemble-average mean squared error versus number of iterations n (0-500)]
Figure 5.3. Learning Curves of the LMS, NLMS, GAL, and RLS algorithms under disturbed and undisturbed conditions for an adaptive equalizer with number of taps M = 11, W = 3.1 for all, μ = 0.075 for LMS, μ = 1.25 for NLMS, step-size 0.098 with parameters 0.4, 0.4, and a = 0.6 for GAL, and λ = 1, δ = 0.004 for RLS.
REFERENCES:
[1] S. Haykin, Adaptive Filter Theory, 4th edition, Prentice Hall, New Jersey, 2002.
[2] B. Hassibi, "On the robustness of LMS filters," in Least-Mean-Square Adaptive Filters, S. Haykin and B. Widrow, Eds., John Wiley & Sons, 2003.
[3] S. Haykin, Adaptive Filter Theory Lecture Notes, at McMaster University, Oct.- Dec. 2003.