
2012 IEEE 7th Sensor Array and Multichannel Signal Processing Workshop (SAM)

An L1-Norm Linearly Constrained LMS Algorithm Applied to Adaptive Beamforming

Jose F. de Andrade Jr. and Marcello L. R. de Campos
Federal University of Rio de Janeiro (UFRJ)
Program of Electrical Engineering (PEE), COPPE
Rio de Janeiro, Brazil, 21.941-972
Emails: andrade@dctim.mar.mil.br and mcampos@ieee.org

Jose A. Apolinario Jr.
Military Institute of Engineering (IME)
Department of Electrical Engineering (SE/3)
Rio de Janeiro, Brazil, 22.290-270
Email: apolin@ime.eb.br

Abstract—We propose in this work an L1-norm Linearly Constrained Least-Mean-Square (L1-CLMS) algorithm. In addition to the linear constraints present in the CLMS algorithm, the L1-CLMS algorithm takes into account an L1-norm penalty on the filter coefficients. The performance of the L1-CLMS algorithm is evaluated for time-varying system identification under Gaussian noise and for an adaptive beamforming scenario. The effectiveness of the L1-CLMS algorithm is demonstrated by comparing, via computer simulations, its results with those of the CLMS algorithm. When employed in a sensor array, the L1-norm constraint increases the convergence rate, making the proposed algorithm a good candidate for adaptive beamforming applications.

I. INTRODUCTION
In many applications of adaptive filters, constraints can be incorporated into the algorithm design in order to force desired characteristics upon the solution at every iteration. For example, in array signal processing, linear constraints satisfied by the narrowband beamforming coefficients can guarantee nulls at specified directions.

In the past decades, many adaptation algorithms have been proposed to solve, in real time, some approximation of the linearly constrained quadratic programming problem. Algorithms based on the gradient direction (or on its estimate) have a clear advantage in terms of computational complexity when compared to Newton-type or least-squares algorithms. Particularly for array signal processing, where the number of sensors and coefficients can be very large, reducing computational complexity and energy consumption must be ranked high on the designer's priority list. One may also benefit from filter sparsity to increase the speed of convergence, as will be shown in our simulations.
Motivated by the LASSO [1] and the Sparse LMS algorithms [2], we propose an algorithm with an additional L1-norm constraint based on the CLMS algorithm [3]. The main features that distinguish the proposed algorithm from the CLMS algorithm are its convergence rate, when applied to antenna arrays (adaptive beamforming), and its flexibility for dynamic identification of sparse systems based on the selection of the L1-norm value. Because the Sparse LMS algorithms presented in [2] do not enforce linear constraints, we compare the results of the L1-CLMS algorithm with those of the CLMS algorithm.

978-1-4673-1071-0/12/$31.00 2012 IEEE

II. THE CLMS AND L1-CLMS ALGORITHMS


The CLMS algorithm was proposed in [3] to adjust, in real time, the coefficients of an array of sensors so as to receive a signal coming from a desired direction while attenuating interference coming from other directions. The algorithm can be derived using Lagrange multipliers [4]. Let $e(k) = d(k) - \mathbf{w}^H\mathbf{x}(k)$ be the estimation error, where $\mathbf{x}(k)$ and $d(k)$ are the input and desired signals, respectively. The problem can be stated as

$$\min_{\mathbf{w}} E[|e(k)|^2] \quad \text{s.t.} \quad \mathbf{C}^H\mathbf{w} = \mathbf{z}, \qquad (1)$$

where $\mathbf{C}$ is an $N \times N_C$ constraint matrix and $\mathbf{z}$ is the respective constraint vector containing $N_C$ (number of constraints) elements.

The Lagrange multipliers are introduced in the objective function as follows:

$$\xi(\mathbf{w}) = E[|e(k)|^2] + \boldsymbol{\lambda}_1^H(\mathbf{C}^H\mathbf{w} - \mathbf{z}). \qquad (2)$$

The cost function $\xi(\mathbf{w})$ can be minimized by applying a steepest-descent method such that the coefficient vector is updated at each iteration as

$$\mathbf{w}(k+1) = \mathbf{w}(k) - \frac{\mu}{2}\nabla_{\mathbf{w}}\xi(\mathbf{w}). \qquad (3)$$

The CLMS algorithm uses an instantaneous estimate for $\nabla_{\mathbf{w}}\xi(\mathbf{w})$ such that, using the constraints and solving for $\boldsymbol{\lambda}_1$, its updating equation is expressed as

$$\mathbf{w}(k+1) = \mathbf{P}\left[\mathbf{w}(k) + \mu e^*(k)\mathbf{x}(k)\right] + \mathbf{F}, \qquad (4)$$

where $\mathbf{P} = \mathbf{I}_N - \mathbf{C}(\mathbf{C}^H\mathbf{C})^{-1}\mathbf{C}^H$ is a (projection) matrix and $\mathbf{F} = \mathbf{C}(\mathbf{C}^H\mathbf{C})^{-1}\mathbf{z}$ is an $N \times 1$ vector.
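The update in Eq. (4) is straightforward to prototype. The following NumPy sketch (function and variable names are ours, not from the paper) precomputes $\mathbf{P}$ and $\mathbf{F}$ and applies one iteration; by construction, the updated weights satisfy the linear constraints exactly:

```python
import numpy as np

def clms_init(C, z):
    """Precompute P = I - C (C^H C)^{-1} C^H and F = C (C^H C)^{-1} z from Eq. (4)."""
    G = np.linalg.inv(C.conj().T @ C)
    P = np.eye(C.shape[0]) - C @ G @ C.conj().T
    F = C @ G @ z
    return P, F

def clms_update(w, x, d, mu, P, F):
    """One CLMS iteration: w(k+1) = P[w(k) + mu e*(k) x(k)] + F."""
    e = d - np.vdot(w, x)              # e(k) = d(k) - w^H x(k)
    return P @ (w + mu * np.conj(e) * x) + F
```

Since $\mathbf{C}^H\mathbf{P} = \mathbf{0}$ and $\mathbf{C}^H\mathbf{F} = \mathbf{z}$, the constraints hold after every iteration regardless of the gradient step.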
A. The New L1-CLMS Algorithm

The L1-CLMS algorithm searches for the solution of the following constrained optimization problem:

$$\min_{\mathbf{w}} E[|e(k)|^2] \quad \text{s.t.} \quad \begin{cases} \mathbf{C}^H\mathbf{w} = \mathbf{z} \\ \|\mathbf{w}\|_1 = t \end{cases} \qquad (5)$$

A cost function with the L1-norm penalty can be defined as

$$\xi(\mathbf{w}) = E[|e(k)|^2] + \boldsymbol{\lambda}_1^H(\mathbf{C}^H\mathbf{w} - \mathbf{z}) + \lambda_2\left(\|\mathbf{w}\|_1 - t\right). \qquad (6)$$

Using the same instantaneous estimate employed in the derivation of the CLMS algorithm, we have

$$\nabla_{\mathbf{w}}\xi(\mathbf{w}) = -2e^*(k)\mathbf{x}(k) + \mathbf{C}\boldsymbol{\lambda}_1 + \lambda_2\,\mathrm{sign}[\mathbf{w}], \qquad (7)$$

where $\mathrm{sign}[\mathbf{w}] \triangleq \mathbf{w}/|\mathbf{w}|$.

Substituting Eq. (7) into Eq. (3), we have

$$\mathbf{w}(k+1) = \mathbf{w}(k) - \frac{\mu}{2}\left(-2e^*(k)\mathbf{x}(k) + \mathbf{C}\boldsymbol{\lambda}_1 + \lambda_2\,\mathrm{sign}[\mathbf{w}(k)]\right). \qquad (8)$$

Multiplying Eq. (8) by $\mathbf{C}^H$, it is possible to solve for $\boldsymbol{\lambda}_1$:

$$\boldsymbol{\lambda}_1 = \frac{2}{\mu}\mathbf{H}\mathbf{w}(k) + 2e^*(k)\mathbf{H}\mathbf{x}(k) - \frac{2}{\mu}\mathbf{G}\mathbf{z} - \lambda_2\mathbf{H}\,\mathrm{sign}[\mathbf{w}(k)], \qquad (9)$$

where $\mathbf{G} = (\mathbf{C}^H\mathbf{C})^{-1}$ and $\mathbf{H} = (\mathbf{C}^H\mathbf{C})^{-1}\mathbf{C}^H$.

In order to overcome the difficulty of deriving the algorithm, due to the necessity of eliminating the unknown a priori coefficient vector $\mathbf{w}(k+1)$ on the left side of Eq. (8), we propose the approximation $\mathrm{sign}^H[\mathbf{w}(k)]\,\mathbf{w}(k+1) \approx t$. This approximation is based on an assumed convergence of the algorithm, when $\mathbf{w}(k+1) \approx \mathbf{w}(k)$.

Defining $t_k = \mathrm{sign}^H[\mathbf{w}(k)]\,\mathbf{w}(k)$ as the L1-norm at instant $k$ and approximating $\mathrm{sign}^H[\mathbf{w}(k)]\,\mathrm{sign}[\mathbf{w}(k)]$ by $N$, it is possible to premultiply Eq. (8) by $\mathrm{sign}^H[\mathbf{w}(k)]$ and eliminate the unknown a priori coefficient vector $\mathbf{w}(k+1)$:

$$t = t_k - \frac{\mu}{2}\left(-2e^*(k)\,\mathrm{sign}^H[\mathbf{w}(k)]\mathbf{x}(k) + \mathrm{sign}^H[\mathbf{w}(k)]\mathbf{C}\boldsymbol{\lambda}_1 + \lambda_2\,\mathrm{sign}^H[\mathbf{w}(k)]\,\mathrm{sign}[\mathbf{w}(k)]\right); \qquad (10)$$

after isolating $\lambda_2$ from Eq. (10), we have

$$\lambda_2 = -\frac{2}{\mu N}e_{L1}(k) + \frac{2}{N}e^*(k)\,\mathrm{sign}^H[\mathbf{w}(k)]\mathbf{x}(k) - \frac{1}{N}\mathrm{sign}^H[\mathbf{w}(k)]\mathbf{C}\boldsymbol{\lambda}_1, \qquad (11)$$

where $e_{L1}(k) = t - t_k$ is the L1-norm error.

The Lagrange multipliers $\boldsymbol{\lambda}_1$ and $\lambda_2$ can be calculated by solving the system of equations stated by Eqs. (9) and (11). In order to achieve a compact representation, we can define the following auxiliary variables:

$$\mathbf{P} = \mathbf{I}_N - \mathbf{C}\mathbf{H} \qquad (12)$$

and

$$Q(k) = 1 - \frac{1}{N}\mathrm{sign}^H[\mathbf{w}(k)]\,\mathbf{C}\mathbf{H}\,\mathrm{sign}[\mathbf{w}(k)]. \qquad (13)$$

The iterative final algorithm expression can be stated as

$$\begin{aligned}
\mathbf{w}(k+1) ={}& \mathbf{P}\left[\mathbf{I} + \frac{\mathrm{sign}[\mathbf{w}(k)]\,\mathrm{sign}^H[\mathbf{w}(k)]}{N Q(k)}\mathbf{C}\mathbf{H}\right]\mathbf{w}(k) \\
&+ \mu\,\mathbf{P}\left[\mathbf{I} - \frac{\mathrm{sign}[\mathbf{w}(k)]\,\mathrm{sign}^H[\mathbf{w}(k)]}{N Q(k)}\mathbf{P}\right]e^*(k)\mathbf{x}(k) \\
&+ \left[\mathbf{I} - \mathbf{P}\frac{\mathrm{sign}[\mathbf{w}(k)]\,\mathrm{sign}^H[\mathbf{w}(k)]}{N Q(k)}\right]\mathbf{C}\mathbf{G}\mathbf{z} \\
&+ \frac{1}{N Q(k)}\mathbf{P}\,\mathrm{sign}[\mathbf{w}(k)]\,e_{L1}(k). \qquad (14)
\end{aligned}$$

Note that, under the approximation $\mathrm{sign}^H[\mathbf{w}(k)]\,\mathrm{sign}[\mathbf{w}(k)] \approx N$, $Q(k)$ can equivalently be computed as $\frac{1}{N}\mathrm{sign}^H[\mathbf{w}(k)]\,\mathbf{P}\,\mathrm{sign}[\mathbf{w}(k)]$, which is the form used below. The algorithm can then be summarized as in Algorithm 1.

Algorithm 1: L1-CLMS Algorithm
Initialization:
  k ← 1; t ← t_opt; w(1) ← randn(N, 1);
  P ← I_N − C(C^H C)^{−1} C^H;
  P̄ ← C(C^H C)^{−1} C^H;
  F ← C(C^H C)^{−1} z;
while (k < K_max) do
  e(k) ← d(k) − w^H(k) x(k);
  e_{L1}(k) ← t − ‖w(k)‖₁;
  Q(k) ← (1/N) sign^H[w(k)] P sign[w(k)];
  P₁ ← sign[w(k)] sign^H[w(k)] / (N Q(k));
  w(k+1) ← P[I + P₁ P̄] w(k) + μ P[I − P₁ P] e*(k) x(k) + [I − P P₁] F + (1/(N Q(k))) P sign[w(k)] e_{L1}(k);
  k ← k + 1
end while
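One iteration of Algorithm 1 can be sketched in NumPy as follows. This is a didactic sketch with our own function names: the matrices P, P̄, and F are recomputed here for self-containment, whereas in practice they would be precomputed once, as in the algorithm's initialization step. Regardless of the step size, the update leaves the linear constraints exactly satisfied:

```python
import numpy as np

def l1_clms_update(w, x, d, mu, t, C, z):
    """One L1-CLMS iteration following Algorithm 1 / Eq. (14)."""
    N = w.size
    G = np.linalg.inv(C.conj().T @ C)
    Pbar = C @ G @ C.conj().T               # C (C^H C)^{-1} C^H
    P = np.eye(N) - Pbar                    # projection onto the constraint null space
    F = C @ G @ z
    s = w / np.abs(w)                       # sign[w] = w / |w| (elementwise)
    e = d - np.vdot(w, x)                   # e(k)
    eL1 = t - np.sum(np.abs(w))             # L1-norm error e_L1(k)
    Q = np.real(np.vdot(s, P @ s)) / N      # Q(k) = (1/N) sign^H P sign
    P1 = np.outer(s, s.conj()) / (N * Q)    # sign sign^H / (N Q(k))
    I = np.eye(N)
    return (P @ (I + P1 @ Pbar) @ w
            + mu * np.conj(e) * (P @ (I - P1 @ P) @ x)
            + (I - P @ P1) @ F
            + (P @ s) * eL1 / (N * Q))
```

Because every w-dependent term is premultiplied by P, and $\mathbf{C}^H\mathbf{P} = \mathbf{0}$, only the $\mathbf{F}$ term contributes to $\mathbf{C}^H\mathbf{w}(k+1)$, yielding $\mathbf{z}$ at every iteration.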

B. Comments on Convergence

Although a formal convergence analysis is beyond the scope of this paper, it is possible to notice that the vector $\frac{\mu\lambda_2}{2}\mathrm{sign}[\mathbf{w}(k)]$ in Eq. (8) is bounded within the interval $\left[-\frac{\mu|\lambda_2|}{2}\mathbf{1}_{N\times 1},\ \frac{\mu|\lambda_2|}{2}\mathbf{1}_{N\times 1}\right]$, implying, therefore, convergence whenever the equivalent CLMS algorithm converges [2].
III. THE ANTENNA ARRAY
Consider a uniform linear array (ULA) composed of $N$ receiving antennas (sensors) receiving $N_C$ narrowband signals coming from different directions $\theta_1, \ldots, \theta_{N_C}$. The samples observed from the $N$ sensors during $M$ snapshots, assuming an analytic signal in the discrete domain, can be denoted as $\mathbf{x}(k)$, with $k$ from 1 to $M$. The $N \times 1$ signal vector is then written as

$$\mathbf{x}(k) = \sum_{q=1}^{N_C}\mathbf{a}(\theta_q)s_q(k) + \mathbf{n}(k), \quad k = 1, 2, \ldots, M. \qquad (15)$$

Using matrix notation, the last expression can be written as $\mathbf{A}\mathbf{s}(k) + \mathbf{n}(k)$, and the $N \times M$ input data matrix $\mathbf{X}$ can be expressed as

$$\mathbf{X} = [\mathbf{x}(1)\,\mathbf{x}(2)\cdots\mathbf{x}(M)] = \mathbf{A}\mathbf{S} + \mathbf{N}, \qquad (16)$$

where $\mathbf{S}$ is the $N_C \times M$ complex signal envelope matrix at the array center, $\mathbf{N}$ is the $N \times M$ noise matrix, and $\mathbf{A} = [\mathbf{a}(\theta_1), \ldots, \mathbf{a}(\theta_{N_C})]$ is the $N \times N_C$ steering matrix whose columns are $\mathbf{a}(\theta_i) = [1, e^{j(2\pi/\lambda)d\cos(\theta_i)}, \ldots, e^{j(2\pi/\lambda)d(N-1)\cos(\theta_i)}]^T$; $\lambda$ and $d$ are the wavelength of the signal and the distance between antenna elements (sensors), respectively.
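The array model of Eqs. (15)-(16) is easy to simulate. The sketch below uses our own names, the $e^{+j(\cdot)}$ phase convention written above, and half-wavelength spacing by default; the envelope and noise statistics are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def steering_vector(theta_deg, N, d_over_lambda=0.5):
    """a(theta) = [1, e^{j(2pi/lambda) d cos(theta)}, ..., e^{j(2pi/lambda) d (N-1) cos(theta)}]^T."""
    n = np.arange(N)
    return np.exp(1j * 2 * np.pi * d_over_lambda * n * np.cos(np.deg2rad(theta_deg)))

def snapshots(thetas_deg, N, M, noise_std=0.1, seed=0):
    """Generate X = A S + N as in Eq. (16)."""
    rng = np.random.default_rng(seed)
    A = np.stack([steering_vector(t, N) for t in thetas_deg], axis=1)   # N x Nc steering matrix
    S = (rng.standard_normal((len(thetas_deg), M))
         + 1j * rng.standard_normal((len(thetas_deg), M))) / np.sqrt(2)  # complex envelopes
    Noise = noise_std * (rng.standard_normal((N, M))
                         + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
    return A @ S + Noise, A
```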
From the formulation above and imposing linear constraints, it is possible to obtain a closed-form expression for $\mathbf{w}_{\mathrm{opt}}$ [5], referred to herein as the Linearly Constrained Minimum Variance (LCMV) solution:

$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}^{-1}\mathbf{C}\left(\mathbf{C}^H\mathbf{R}^{-1}\mathbf{C}\right)^{-1}\mathbf{z}, \qquad (17)$$

where $\mathbf{R}$ is the input signal correlation matrix and $\mathbf{C}^H\mathbf{w}_{\mathrm{opt}} = \mathbf{z}$, with the $i$-th column of $\mathbf{C}$, $i = 1$ to $N_C$, given by [4]

$$[1, e^{j(2\pi/\lambda)d\cos(\theta_i)}, \ldots, e^{j(2\pi/\lambda)d(N-1)\cos(\theta_i)}]^T. \qquad (18)$$

In Eq. (18), angle $\theta_i$ indicates the directions of the signals of interest and of the jammers. These signals are taken into account as constraints: mathematically, each constraint is stated as a column in matrix $\mathbf{C}$ and a row in vector $\mathbf{z}$.
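Given an estimate of $\mathbf{R}$, Eq. (17) can be evaluated directly. A minimal sketch (function name is ours) that solves linear systems rather than forming explicit inverses, which is the numerically preferable route:

```python
import numpy as np

def lcmv_weights(R, C, z):
    """w_opt = R^{-1} C (C^H R^{-1} C)^{-1} z, Eq. (17)."""
    Ri_C = np.linalg.solve(R, C)                          # R^{-1} C
    return Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, z)   # times (C^H R^{-1} C)^{-1} z
```

By construction, $\mathbf{C}^H\mathbf{w}_{\mathrm{opt}} = \mathbf{z}$, so the LCMV weights satisfy the constraints exactly.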
Considering a plane wave with wavelength $\lambda$, incident from direction $\theta$, that propagates across a linear array of $N$ isotropic antennas at locations $p_1, p_2, \ldots, p_N$ arranged on the same plane, the beampattern is given by

$$B(\theta) = \sum_{n=1}^{N} w_n e^{j\frac{2\pi}{\lambda}p_n\cos(\theta)}, \qquad (19)$$

where $w_n$ is the $n$-th component of vector $\mathbf{w}$.
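Eq. (19) is, for each look direction, an inner product between the weights and an array phase vector; it can be evaluated over a grid of angles as follows (a sketch with our own names, positions expressed in wavelengths):

```python
import numpy as np

def beampattern(w, p_over_lambda, thetas_deg):
    """B(theta) = sum_n w_n e^{j (2pi/lambda) p_n cos(theta)}, Eq. (19),
    evaluated at every angle in thetas_deg."""
    cos_t = np.cos(np.deg2rad(np.asarray(thetas_deg, dtype=float)))
    phase = np.exp(1j * 2 * np.pi * np.outer(cos_t, p_over_lambda))  # len(thetas) x N
    return phase @ w
```

As a sanity check, phase-conjugate weights steered to some $\theta_0$ sum coherently there, giving $|B(\theta_0)| = N$.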

IV. EXPERIMENTAL SIMULATION RESULTS

A. System Identification

The first simulation was carried out in a system identification problem with 16 coefficients, where the sparsity changes, in steps, from a very sparse to a non-sparse system. We used three different sets of filter coefficients to generate the desired signal, resulting in a system with three degrees of sparsity: 15/16, 8/16, and 0, i.e., 15 null coefficients, 8 null coefficients, and 0 null coefficients, respectively. The simulation begins with a sparse filter; after 30,000 iterations it changes to semi-sparse, and after 60,000 iterations a non-sparse filter is used to produce $d(k)$. All the taps are chosen randomly. The L1-norms of the three coefficient vectors used in the simulations are 0.9, 4.2, and 10.0, respectively. We ran the simulations for ensembles of 100 independent runs. For the L1-CLMS algorithm, we tried four different values of the L1-norm: $t = 1.0$, $t = 3.0$, $t = 6.0$, and $t = 9.0$. The SNR used was $10^3$.

Note in Figure 1 that, depending on the value chosen for the L1-norm, the performance of the L1-CLMS algorithm becomes better than or equivalent to that of the CLMS algorithm in terms of MSE and convergence speed. When the system is very sparse, the L1-CLMS algorithm yields a faster convergence rate than the CLMS algorithm. The poor performance of the L1-CLMS algorithm in some situations depicted in Fig. 1 comes from the sparsity forced by the L1-norm constraint when the system to be identified was not sparse. For example, $t = 1$ yielded 15 null coefficients, which was adequate for the first 30,000 iterations, when the system was very sparse, but this value of $t$ was not adequate when the system became non-sparse.
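For reference, target systems with the stepwise sparsity described above can be generated along these lines. This is a sketch under our own naming; the paper only states that taps are drawn randomly and that the three stages have L1-norms 0.9, 4.2, and 10.0:

```python
import numpy as np

def sparse_system(n_taps, n_null, l1_norm, seed=0):
    """Random real coefficient vector with n_null zeroed taps,
    rescaled so that its L1-norm equals l1_norm."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n_taps)
    w[rng.choice(n_taps, size=n_null, replace=False)] = 0.0
    return w * (l1_norm / np.sum(np.abs(w)))

# The three stages of the experiment: 15, 8, and 0 null coefficients.
stages = [sparse_system(16, k, t) for k, t in [(15, 0.9), (8, 4.2), (0, 10.0)]]
```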
Fig. 1: Tracking behavior of the 15th-order adaptive filter with L1-norm constraints $t = 1.0$ (a), $3.0$ (b), $6.0$ (c), and $9.0$ (d).

B. Adaptive Beamforming

The second and third simulations were carried out employing two ULAs with elements equally spaced by $\lambda/2$, with 14 elements ($N = 14$) and 8 elements ($N = 8$), respectively. For both experiments, the imposed L1-norm constraint was $t = 1.0$. As linear constraints, we considered the azimuth of the signal of interest equal to $70°$ and the directions of the interfering signals as $10°$, $35°$, $140°$, and $155°$. These values were considered known and available. The two experiments comprised ensembles of 500 independent runs, and the convergence steps were set to $\mu = 0.5\times10^{-4}$ and $\mu = 0.15\times10^{-3}$, respectively. In order to take into account the linear constraints, corresponding to one direction of interest and four interfering signals, matrix $\mathbf{C}$ had 5 columns and vector $\mathbf{z}$ had 5 rows, $\mathbf{z} = [1\ 0\ 0\ 0\ 0]^T$.
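The constraint pair (C, z) used in these beamforming experiments can be assembled from the steering vectors of the five directions; a sketch follows (the function name and the $e^{+j(\cdot)}$ convention are our assumptions, matching the steering-vector definition of Section III):

```python
import numpy as np

def build_constraints(thetas_deg, N, d_over_lambda=0.5):
    """C has one steering-vector column per direction (Eq. (18));
    z = [1, 0, ..., 0]^T passes the first direction with unit gain
    and places nulls at the remaining ones."""
    n = np.arange(N)
    cols = [np.exp(1j * 2 * np.pi * d_over_lambda * n * np.cos(np.deg2rad(t)))
            for t in thetas_deg]
    C = np.stack(cols, axis=1)
    z = np.zeros(len(thetas_deg), dtype=complex)
    z[0] = 1.0
    return C, z

# Directions used in the paper: desired signal at 70 deg, jammers at 10, 35, 140, 155 deg.
C, z = build_constraints([70.0, 10.0, 35.0, 140.0, 155.0], N=14)
```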
A fourth simulation was carried out employing a ULA with 100 elements equally spaced by $\lambda/2$ and $\mu = 0.25\times10^{-4}$. All the other simulation parameters remained identical to the ones used in the second and third simulations. This simulation intends to present the performance, in terms of convergence speed, of the proposed algorithm when solving the beamforming problem with a large number of sensors.
When applied to adaptive beamforming, as seen in the simulation results, the L1-CLMS algorithm presented very fast convergence while satisfying all constraints, with sidelobes in good accordance with the sidelobe levels given by the optimal LCMV solution [5].
V. CONCLUSION
The results of the simulations presented in Section IV show that the L1-CLMS algorithm proposed in this paper provides very fast convergence for adaptive beamforming applications. For system identification, when the system is sparse, the proposed algorithm outperforms the CLMS in terms of convergence rate and MSE, provided the constraint on the L1-norm is chosen appropriately. Even for non-sparse systems, the performance of the L1-CLMS algorithm is comparable to that of the CLMS algorithm in terms of convergence speed and MSE, given that a proper value of $t$ is set.

Fig. 2: The L1-CLMS algorithm performance compared to the CLMS algorithm for a 14-element constrained ULA adaptive array.
Fig. 3: 14-element constrained ULA adaptive array beampattern for the L1-CLMS, CLMS, and LCMV algorithms, shown at iterations k = 50, 200, 400, and 800.

Fig. 5: 8-element constrained ULA adaptive array beampattern for the L1-CLMS, CLMS, and LCMV algorithms, shown at iterations k = 25, 50, 150, and 250.

ACKNOWLEDGMENT
The authors thank the Brazilian Agencies CNPq (contracts
306548/2010-0 and 306749/2009-2), FAPERJ, the Brazilian
Navy, and the Brazilian Army for partially funding this work.

REFERENCES

[1] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society, Series B (Methodological), vol. 58, no. 1, pp. 267-288, 1996.
[2] Y. Chen, Y. Gu, and A. O. Hero, "Sparse LMS for system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2009, pp. 3125-3128.
[3] O. L. Frost, "An algorithm for linearly constrained adaptive array processing," Proceedings of the IEEE, vol. 60, no. 8, pp. 926-935, 1972.
[4] P. S. R. Diniz, Adaptive Filtering: Algorithms and Practical Implementation, 3rd ed. Springer, Oct. 2010.
[5] H. L. Van Trees, Optimum Array Processing (Detection, Estimation, and Modulation Theory, Part IV), 1st ed. Wiley-Interscience, Mar. 2002.

Fig. 6: The L1-CLMS algorithm performance compared to the CLMS algorithm for a 100-element constrained ULA adaptive array.

Fig. 4: The L1-CLMS algorithm performance compared to the CLMS algorithm for an 8-element constrained ULA adaptive array.