High-dimensional data
Illustration
Consider C_N = I_N : the spectrum of the sample covariance matrix \hat S_N is nevertheless different from that of C_N
Spectrum of eigenvalues
Marchenko-Pastur Law
Figure: Histogram of the eigenvalues of \hat S_N against the Marchenko–Pastur law.
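A quick numerical illustration of this phenomenon (a sketch in Python/NumPy; dimensions and variable names are ours): even with C_N = I_N, the eigenvalues of the sample covariance matrix spread over an interval whose edges are those of the Marchenko–Pastur law.

```python
import numpy as np

# Sample covariance of n i.i.d. N-dimensional standard Gaussian vectors:
# the population covariance is C_N = I_N, yet for c = N/n not small the
# empirical eigenvalues spread over roughly [(1-sqrt(c))^2, (1+sqrt(c))^2].
rng = np.random.default_rng(0)
N, n = 500, 1000                  # c = N/n = 0.5
X = rng.standard_normal((N, n))
S = X @ X.T / n                   # sample covariance \hat S_N
eigs = np.linalg.eigvalsh(S)

c = N / n
print(eigs.min(), eigs.max())             # far from the population value 1
print((1 - c**0.5)**2, (1 + c**0.5)**2)   # Marchenko-Pastur edges
```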
Example
MUSIC with few samples (or in large arrays). Call A(\theta) = [a(\theta_1), \ldots, a(\theta_K)] \in C^{N\times K}, N large, K small, the steering vectors to identify, and X = [x_1, \ldots, x_n] \in C^{N\times n} the n samples, taken from

x_t = \sum_{k=1}^{K} a(\theta_k)\sqrt{p_k}\, s_{k,t} + w_t .

The MUSIC localization function reads \eta(\theta) = a(\theta)^H U_W U_W^H a(\theta), with U_W taken from the signal vs. noise spectral decomposition

X X^H = U_S \Lambda_S U_S^H + U_W \Lambda_W U_W^H .

Writing equivalently A(\theta) P A(\theta)^H + \sigma^2 I_N = U_S \Lambda_S U_S^H + \sigma^2 U_W U_W^H, as n, N \to \infty, n/N \to c,
MUSIC is NOT consistent in the large N, n regime! We need improved RMT-based solutions.
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 7/142
Stieltjes Transform
Definition
Let F be a real probability distribution function. The Stieltjes transform m_F of F is the function defined, for z \in C^+, as

m_F(z) = \int \frac{1}{\lambda - z}\, dF(\lambda) .

For a < b continuity points of F, denoting z = x + iy, we have the inverse formula

F(b) - F(a) = \lim_{y \downarrow 0} \frac{1}{\pi}\int_a^b \Im[m_F(x + iy)]\, dx .
The Stieltjes transform is to the Cauchy transform as the characteristic function is to the Fourier transform.
Equivalence F ↔ m_F
Similar to the Fourier transform, knowing mF is the same as knowing F .
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 8/142
I The Stieltjes transform of a random matrix is the normalized trace of the resolvent matrix Q(z) = (X - zI_N)^{-1}. The resolvent matrix plays a key role in the derivation of many of the results of random matrix theory.
I For compactly supported F, m_F(z) is linked to the moments M_k = E[\frac{1}{N}\mathrm{tr}\, X^k],

m_F(z) = -\sum_{k=0}^{+\infty} M_k\, z^{-k-1} .
A. M. Tulino, S. Verdú, Random matrix theory and wireless communications, Now Publishers Inc., 2004.
Definition
Let F be a probability distribution and m_F its Stieltjes transform; then the Shannon transform V_F of F is defined as

V_F(x) \triangleq \int_0^{\infty} \log(1 + x\lambda)\, dF(\lambda) = \int_{1/x}^{\infty} \left(\frac{1}{t} - m_F(-t)\right) dt .
V. A. Marčenko, L. A. Pastur, Distributions of eigenvalues for some sets of random matrices, Math USSR-Sbornik, vol. 1, no. 4, pp. 457-483, 1967.
The theorem to be proven is the following

Theorem
Let X_N \in C^{N\times n} have i.i.d. zero mean, variance 1/n entries with finite eighth order moment. As n, N \to \infty with N/n \to c \in (0, \infty), the e.s.d. of X_N X_N^H converges almost surely to a nonrandom distribution function F_c with density f_c given by

f_c(x) = (1 - c^{-1})^+ \delta(x) + \frac{1}{2\pi c x}\sqrt{(x-a)^+ (b-x)^+}

where a = (1 - \sqrt{c})^2 and b = (1 + \sqrt{c})^2.
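For reference, the absolutely continuous part of f_c can be coded directly (a minimal sketch; the function name is ours):

```python
import numpy as np

def mp_density(x, c):
    """Absolutely continuous part of the Marchenko-Pastur density f_c."""
    a, b = (1 - np.sqrt(c))**2, (1 + np.sqrt(c))**2
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > a) & (x < b)
    out[inside] = np.sqrt((x[inside] - a) * (b - x[inside])) / (2 * np.pi * c * x[inside])
    return out

# For c <= 1 there is no mass at 0, so the density integrates to 1:
xs = np.linspace(1e-6, 4.0, 200001)
total = mp_density(xs, 0.5).sum() * (xs[1] - xs[0])
print(total)   # close to 1
```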
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 11/142
Figure: Marčenko–Pastur law for different limit ratios c = lim_N N/n.
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 12/142
Since we want an expression of m_F, we start by identifying the diagonal entries of the resolvent (X_N X_N^H - zI_N)^{-1} of X_N X_N^H. Denote

X_N = \begin{pmatrix} y^H \\ Y \end{pmatrix} .

Now, for z \in C^+, we have

X_N X_N^H - zI_N = \begin{pmatrix} y^H y - z & y^H Y^H \\ Y y & Y Y^H - zI_{N-1} \end{pmatrix} .

Consider the first diagonal element of (R_N - zI_N)^{-1}. From the matrix inversion lemma,

\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} (A - BD^{-1}C)^{-1} & -A^{-1}B(D - CA^{-1}B)^{-1} \\ -(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1} \end{pmatrix} .
Trace Lemma
Z. Bai, J. Silverstein, Spectral Analysis of Large Dimensional Random Matrices, Springer Series
in Statistics, 2009.
To go further, we need the following result,
Theorem
Let {A_N} \subset C^{N\times N} have bounded spectral norm. Let {x_N} \subset C^N be a random vector of i.i.d. entries with zero mean, variance 1/N and finite 8th order moment, independent of A_N. Then

x_N^H A_N x_N - \frac{1}{N}\mathrm{tr}\, A_N \xrightarrow{a.s.} 0 .
Applied to the first diagonal entry, this yields the fixed-point equation

m_F(z) \simeq \frac{1}{1 - \frac{N}{n} - z - z\frac{N}{n} m_F(z)} .
Related bibliography
I V. A. Marčenko, L. A. Pastur, Distributions of eigenvalues for some sets of random matrices, Math USSR-Sbornik, vol. 1, no. 4, pp. 457-483, 1967.
I J. W. Silverstein, Z. D. Bai, On the empirical distribution of eigenvalues of a class of large dimensional
random matrices, Journal of Multivariate Analysis, vol. 54, no. 2, pp. 175-192, 1995.
I Z. D. Bai and J. W. Silverstein, Spectral analysis of large dimensional random matrices, 2nd Edition
Springer Series in Statistics, 2009.
I R. B. Dozier, J. W. Silverstein, On the empirical distribution of eigenvalues of large dimensional
information-plus-noise-type matrices, Journal of Multivariate Analysis, vol. 98, no. 4, pp. 678-694, 2007.
I V. L. Girko, Theory of Random Determinants, Kluwer, Dordrecht, 1990.
I A. M. Tulino, S. Verdú, Random matrix theory and wireless communications, Now Publishers Inc., 2004.
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 17/142
Theorem 1
Let Y_N = \frac{1}{\sqrt{n}} X_N C_N^{1/2}, where X_N \in C^{n\times N} has i.i.d. entries of mean 0 and variance 1. Consider the regime n, N \to +\infty with N/n \to c. Let \underline{m}_N be the Stieltjes transform associated with Y_N Y_N^H. Then \underline{m}_N - \underline{m} \to 0 almost surely for all z \in C\setminus R^+, where \underline{m}(z) is the unique solution in the set \{z \in C^+ \Rightarrow \underline{m}(z) \in C^+\} of

\underline{m}(z) = \left(-z + c\int \frac{t\, dF^{C_N}(t)}{1 + t\,\underline{m}(z)}\right)^{-1}

and, denoting m_N the Stieltjes transform associated with Y_N^H Y_N,

\underline{m}_N = c\, m_N + (c-1)\frac{1}{z} .

This gives access to the spectrum of the sample covariance matrix model of x, when

y_i = C_N^{1/2} x_i, \quad x_i \text{ i.i.d.}, \quad C_N = E[y y^H] .
Part 1: Fundamentals of Random Matrix Theory/1.1. The Stieltjes Transform Method 18/142
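The canonical equation of Theorem 1 can be solved numerically by fixed-point iteration (a sketch under our own naming; we take F^C with three even masses, and read the density from the imaginary part of the Stieltjes transform near the real axis):

```python
import numpy as np

def density(x, c, ts, ws, eta=1e-2, iters=5000):
    """Spectral density of the sample covariance model at x, from the
    canonical fixed-point equation for the companion transform m_u:
        m_u(z) = ( -z + c * sum_k w_k t_k / (1 + t_k m_u(z)) )^{-1},
    evaluated at z = x + i*eta, with m = (m_u + (1-c)/z) / c."""
    z = x + 1j * eta
    m = -1.0 / z
    for _ in range(iters):                       # damped fixed-point iteration
        m = 0.5 * m + 0.5 / (-z + c * np.sum(ws * ts / (1 + ts * m)))
    m_full = (m + (1 - c) / z) / c
    return m_full.imag / np.pi

c = 0.1
ts = np.array([1.0, 3.0, 7.0])    # population eigenvalues of C_N
ws = np.array([1/3, 1/3, 1/3])    # their weights
d_in = density(3.0, c, ts, ws)    # inside the bulk: strictly positive
d_out = density(12.0, c, ts, ws)  # outside the support: ~ 0
print(d_in, d_out)
```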
Getting F from m_F

Figure: Histogram of the eigenvalues of B_N = \frac{1}{n} C_N^{1/2} Z_N^H Z_N C_N^{1/2}, N = 3000, n = 300, with C_N diagonal composed of three evenly weighted masses in (i) 1, 3 and 7 on top, (ii) 1, 3 and 4 at bottom.
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 21/142
Support of a distribution
The support of a density f is the closure of the set \{x, f(x) \neq 0\}.
For instance, the support of the Marčenko–Pastur law is [(1 - \sqrt{c})^2, (1 + \sqrt{c})^2].
Figure: Marčenko–Pastur law for limit ratio c = 0.5; the support of the law is highlighted on the horizontal axis.
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 22/142
Extreme eigenvalues
I Limiting spectral results are insufficient to infer the location of the extreme eigenvalues.
I Example: consider dF_N^0(x) = \frac{1}{N}\sum_{k=1}^{N}\delta_{a_k}(x) and dF_N = \frac{N-1}{N} dF_N^0 + \frac{1}{N}\delta_{A_N}(x). Then F_N and F_N^0, with A_N > a_N, satisfy

dF_N - dF_N^0 \Rightarrow 0 .

I However, the supports of F_N and F_N^0 differ by the mass at A_N.
Question: How do the extreme eigenvalues of random covariance matrices behave?
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 23/142
Theorem
Let X_N \in C^{N\times n} have i.i.d. entries with zero mean, unit variance and finite fourth order moment. Let C_N \in C^{N\times N} be nonrandom and bounded in norm. Let \underline{m}_N be the unique solution in C^+ of

\underline{m}_N(z) = \left(-z + \frac{N}{n}\int \frac{\tau\, dF^{C_N}(\tau)}{1 + \tau\,\underline{m}_N(z)}\right)^{-1}, \quad m_N(z) = \frac{n}{N}\,\underline{m}_N(z) + \frac{n-N}{N}\,\frac{1}{z}, \quad z \in C^+ .
J. W. Silverstein, Z.D. Bai, Y.Q. Yin, A note on the largest eigenvalue of a large dimensional
sample covariance matrix, Journal of Multivariate Analysis, vol. 26, no. 2, pp. 166-168, 1988.
I If the 4th order moment is infinite,

\limsup_N \lambda_{\max}\!\left(XX^H\right) = \infty \quad a.s.

J. Silverstein, Z. Bai, No eigenvalues outside the support of the limiting spectral distribution of information-plus-noise type matrices, to appear in Random Matrices: Theory and Applications.
I Only recently were similar results obtained for information-plus-noise models, X with i.i.d. zero mean, variance 1/N, finite fourth order moment entries.
I In order to derive statistical detection tests, we need more information on the extreme
eigenvalues.
I We will study the fluctuations of the extreme eigenvalues (second order statistics)
I However, the Stieltjes transform method is not adapted here!
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 26/142
Theorem
Let X \in C^{N\times n} have i.i.d. Gaussian entries of zero mean and variance 1/n. Denoting \lambda_N^+ the largest eigenvalue of XX^H, then

N^{2/3}\,\frac{\lambda_N^+ - (1+\sqrt{c})^2}{(1+\sqrt{c})^{4/3}\, c^{1/2}} \Rightarrow X^+ \sim F^+,

the Tracy–Widom distribution.
Figure: Distribution of N^{2/3} c^{-1/2}(1+\sqrt{c})^{-4/3}\left(\lambda_N^+ - (1+\sqrt{c})^2\right) against the distribution of X^+ (distributed as the Tracy–Widom law) for N = 500, n = 1500, c = 1/3, for the covariance matrix model XX^H. Empirical distribution taken over 10,000 Monte-Carlo simulations.
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 28/142
Techniques of proof
The method of proof requires very different tools:
I orthogonal (Laguerre) polynomials: to write the joint unordered eigenvalue distribution as a kernel determinant,

P_N(\lambda_1, \ldots, \lambda_p) = \det\left[K_N(\lambda_i, \lambda_j)\right]_{i,j=1}^{p},

the hole probability being the Fredholm determinant \det(I - K_N).
I differential equation tricks: the hole probability in [t, \infty) gives the right-most eigenvalue distribution, which simplifies as the solution of a Painlevé differential equation: the Tracy–Widom distribution,

F^+(t) = e^{-\int_t^{\infty}(x-t)q(x)^2 dx}, \quad q'' = tq + 2q^3, \quad q(x) \sim_{x\to\infty} \mathrm{Ai}(x) .
Part 1: Fundamentals of Random Matrix Theory/1.2 Extreme eigenvalues 29/142
Spiked models

E[x_1 x_1^H] = I + P

Theorem
Let B_N = \frac{1}{n}(I+P)^{1/2} X_N X_N^H (I+P)^{1/2}, where X_N \in C^{N\times n} has i.i.d., zero mean and unit variance entries,

\mathrm{eig}(P) = \mathrm{diag}(\omega_1, \ldots, \omega_K, \underbrace{0, \ldots, 0}_{N-K})

with \omega_1 > \ldots > \omega_K > -1, c = \lim_N N/n. Let \lambda_1 \geq \ldots \geq \lambda_N be the eigenvalues of B_N. We then have
I if \omega_j > \sqrt{c}, \lambda_j \xrightarrow{a.s.} 1 + \omega_j + c\frac{1+\omega_j}{\omega_j} (i.e. beyond the Marčenko–Pastur bulk!)
I if \omega_j \in (0, \sqrt{c}], \lambda_j \xrightarrow{a.s.} (1+\sqrt{c})^2 (i.e. right edge of the Marčenko–Pastur bulk!)
I if \omega_j \in [-\sqrt{c}, 0), \lambda_j \xrightarrow{a.s.} (1-\sqrt{c})^2 (i.e. left edge of the Marčenko–Pastur bulk!)
I for the other eigenvalues, we discriminate over c:
I if \omega_j < -\sqrt{c}, c < 1, \lambda_j \xrightarrow{a.s.} 1 + \omega_j + c\frac{1+\omega_j}{\omega_j} (i.e. beyond the Marčenko–Pastur bulk!)
I if \omega_j < -\sqrt{c}, c > 1, \lambda_j \xrightarrow{a.s.} (1-\sqrt{c})^2 (i.e. left edge of the Marčenko–Pastur bulk!)
Part 1: Fundamentals of Random Matrix Theory/1.3 Extreme eigenvalues: the spiked models 33/142
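The first bullet of the theorem is easy to check by simulation (a sketch with our own parameters, using a single rank-one spike so that (I+P)^{1/2} is diagonal):

```python
import numpy as np

# With one population spike omega > sqrt(c), the largest eigenvalue of B_N
# converges to 1 + omega + c*(1+omega)/omega, beyond the MP bulk edge.
rng = np.random.default_rng(1)
N, n, omega = 600, 1800, 2.0            # c = 1/3, omega > sqrt(c)
c = N / n
P = np.zeros((N, N)); P[0, 0] = omega   # eig(P) = (omega, 0, ..., 0)
X = rng.standard_normal((N, n))
sqrtIP = np.diag(np.sqrt(1 + np.diag(P)))
B = sqrtIP @ (X @ X.T / n) @ sqrtIP
lam_max = np.linalg.eigvalsh(B)[-1]
predicted = 1 + omega + c * (1 + omega) / omega   # = 3.5 here
print(lam_max, predicted, (1 + np.sqrt(c))**2)    # spike vs. bulk edge
```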
Figure: Eigenvalues of B_N = \frac{1}{n}(P+I)^{1/2} X_N X_N^H (P+I)^{1/2}, where \omega_1 = \omega_2 = 1 and \omega_3 = \omega_4 = 2; the isolated eigenvalues concentrate around 1 + \omega_1 + c\frac{1+\omega_1}{\omega_1} and 1 + \omega_2 + c\frac{1+\omega_2}{\omega_2}. Dimensions: N = 500, n = 1500.
Part 1: Fundamentals of Random Matrix Theory/1.3 Extreme eigenvalues: the spiked models 34/142
I if c is large or, alternatively, if some population spikes are small, part or all of the population spikes are attracted by the bulk support!
I if so, there is no way to decide on the existence of the spikes from looking at the largest eigenvalues
I in signal processing words, signals might be missed when using largest-eigenvalue methods.
I as a consequence,
I the more sensors (N),
I the larger c = lim N/n,
I the more probable it is that we miss a spike
Part 1: Fundamentals of Random Matrix Theory/1.3 Extreme eigenvalues: the spiked models 35/142
I if x is an eigenvalue of B_N but not of XX^H, then for n large, x > (1+\sqrt{c})^2 (edge of the MP law support) and

0 = \det\left(I_r + \Omega\left(I_r + x\, U^H(XX^H - xI_N)^{-1}U\right)\right)

with P = U\Omega U^H, U \in C^{N\times r}.
I due to the unitary invariance of X,

U^H(XX^H - xI_N)^{-1}U \xrightarrow{a.s.} \int (t-x)^{-1} dF^{MP}(t)\, I_r \triangleq m(x) I_r

with F^{MP} the MP law, and m(x) the Stieltjes transform of the MP law (the convergence is often known, for r = 1, as the trace lemma).
I finally, we have that the limiting solutions x_k satisfy x_k\, m(x_k) + \frac{1+\omega_k}{\omega_k} = 0.
I replacing m(x), this is finally:

\lambda_k \xrightarrow{a.s.} x_k \triangleq 1 + \omega_k + c\,(1+\omega_k)\,\omega_k^{-1}, \quad \text{if } \omega_k > \sqrt{c} .
Part 1: Fundamentals of Random Matrix Theory/1.3 Extreme eigenvalues: the spiked models 36/142
for m C+ .
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 39/142
Remember that we can evaluate the spectrum density by taking a complex line close to R and evaluating \Im[m_F(z)] along this line. Now we can do better.
It is shown that

\lim_{z \in C^+ \to x \in R} m_F(z) = m_0(x) \text{ exists.}

We also have,
I for x_0 inside the support, the density f(x_0) of F in x_0 is \frac{1}{\pi}\Im[m_0], with m_0 the unique solution m \in C^+ of

x_0 = -\frac{1}{m} + c\int \frac{t}{1+tm}\, dF^C(t) \quad [= z_F(m)]

I let m_0 \in R^* and x_F be the equivalent of z_F on the real line. Then x_0 outside the support of F is equivalent to: x_F'(m_F(x_0)) > 0, m_F(x_0) \neq 0, and -1/m_F(x_0) outside the support of F^C.
This provides another way to determine the support! For m \in (-\infty, 0), evaluate x_F(m). Whenever x_F increases, the image is outside the support. The rest is inside.
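This recipe can be checked numerically (a sketch with our own helper names; for C_N = I_N the two critical points of x_F on (-∞, 0) recover the Marchenko–Pastur edges):

```python
import numpy as np

def x_F(m, c, ts, ws):
    # x_F(m) = -1/m + c * sum_k w_k t_k / (1 + t_k m)
    return -1.0/m + c * sum(w * t / (1 + t * m) for t, w in zip(ts, ws))

def x_F_prime(m, c, ts, ws):
    return 1.0/m**2 - c * sum(w * t**2 / (1 + t * m)**2 for t, w in zip(ts, ws))

# C_N = I_N (single mass t = 1, weight 1): the critical points of x_F on
# (-inf, 0) sit at m = -1/(1 -+ sqrt(c)) and map to the edges (1 -+ sqrt(c))^2.
c, ts, ws = 0.5, [1.0], [1.0]
m = np.linspace(-20.0, -0.01, 400001)
m = m[np.abs(1.0 + m) > 1e-2]           # stay away from the pole at m = -1/t
d = x_F_prime(m, c, ts, ws)
sign_change = np.nonzero(np.sign(d[:-1]) != np.sign(d[1:]))[0]
edges = np.sort(x_F(m[sign_change], c, ts, ws))
print(edges)   # ~ [(1-sqrt(c))^2, (1+sqrt(c))^2] = [0.0858..., 2.9142...]
```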
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 40/142
Figure: Histogram of the eigenvalues of B_N = \frac{1}{n} C_N^{1/2} X_N X_N^H C_N^{1/2}, N = 300, n = 3000, with C_N diagonal composed of three evenly weighted masses in 1, 3 and 7.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 41/142
Figure: The function x_F(m) associated with B_N = \frac{1}{n} C_N^{1/2} X_N X_N^H C_N^{1/2}, N = 300, n = 3000, with C_N diagonal composed of three evenly weighted masses in 1, 3 and 7 (poles at m = -1, -1/3, -1/7). The support of F is read on the vertical axis, as the complement of the image of the branches where x_F increases.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 42/142
Xavier Mestre, Improved estimation of eigenvalues of covariance matrices and their associated
subspaces using their sample estimates, IEEE Transactions on Information Theory, vol. 54, no.
11, Nov. 2008.
Theorem
Let X_N \in C^{N\times n} have i.i.d. entries of zero mean, unit variance, and let C_N be diagonal such that F^{C_N} \Rightarrow F^C as n, N \to \infty, N/n \to c, where F^C has K masses in t_1, \ldots, t_K with respective weights n_1, \ldots, n_K. Then the l.s.d. of B_N = \frac{1}{n} C_N^{1/2} X_N X_N^H C_N^{1/2} has support S determined by

x_F(m) = -\frac{1}{m} + c\sum_{k=1}^{K} n_k\,\frac{t_k}{1 + t_k m}

with 2Q the number of real-valued solutions, counting multiplicities, of x_F'(m) = 0, denoted in order m_1^- < m_1^+ \leq m_2^- < m_2^+ \leq \ldots \leq m_Q^- < m_Q^+, so that

S = \bigcup_{q=1}^{Q}\left[x_F(m_q^-),\, x_F(m_q^+)\right] .
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 43/142
Figure: Histogram of the eigenvalues of B_N with C_N composed of three evenly weighted masses in 1, 3 and 7; the clusters carry the respective weights n_1, n_2, n_3, with edges located through the roots of x_F'(m) = 0.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 45/142
\hat C_N = \frac{1}{n}\sum_{k=1}^{n} x_k x_k^H

G_1(\hat C_N) - \frac{1}{n}\log\det(C_N) \to 0

in probability.
I However, Girko's proofs are rarely readable, if existent.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 47/142
I It has long been thought that the inverse problem of estimating t_1, \ldots, t_K from the Stieltjes transform method was not possible.
I The only attempts were iterative convex optimization methods.
I The problem was partially solved by Mestre in 2008!
I His technique uses elegant complex analysis tools. The description of this technique is the
subject of this course.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 48/142
Reminders
I Consider the sample covariance matrix model B_N = \frac{1}{n} C_N^{1/2} X_N X_N^H C_N^{1/2}.
I Up to now, we saw:
I that there is no eigenvalue outside the support, with probability 1, for all large N;
I that, for all large N, when the spectrum is divided into clusters, the number of empirical eigenvalues in each cluster is exactly as expected.
I These results are of crucial importance for the following.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 49/142
Theorem
Consider the model B_N = \frac{1}{n} C_N^{1/2} X_N X_N^H C_N^{1/2}, with X_N \in C^{N\times n} with i.i.d. entries of zero mean and unit variance. Then

\hat t_k = \frac{n}{N_k}\sum_{m \in \mathcal{N}_k} (\lambda_m - \mu_m)

is an (N, n)-consistent estimator of t_k, where \mathcal{N}_k = \{N - \sum_{i=k}^{K} N_i + 1, \ldots, N - \sum_{i=k+1}^{K} N_i\}, \lambda_1, \ldots, \lambda_N are the eigenvalues of B_N and \mu_1, \ldots, \mu_N are the N solutions of

m_{X_N^H C_N X_N}(\mu) = 0

or equivalently, \mu_1, \ldots, \mu_N are the eigenvalues of \mathrm{diag}(\lambda) - \frac{1}{n}\sqrt{\lambda}\sqrt{\lambda}^T.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 50/142
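The estimator is short to implement (a sketch of our own, with evenly weighted masses so that the clusters are easy to index; the diag(λ) − (1/n)√λ√λ^T characterization of the μ's avoids any root search):

```python
import numpy as np

# Estimate the population masses (1, 3, 7) from a single realization of B_N.
rng = np.random.default_rng(2)
N, n = 300, 3000
t_true = np.repeat([1.0, 3.0, 7.0], N // 3)     # C_N diagonal, three even masses
X = rng.standard_normal((N, n))
B = (np.sqrt(t_true)[:, None] * X) @ (X.T * np.sqrt(t_true)[None, :]) / n
lam = np.sort(np.linalg.eigvalsh(B))
# mu's: eigenvalues of diag(lam) - (1/n) sqrt(lam) sqrt(lam)^T (they interlace lam)
mu = np.sort(np.linalg.eigvalsh(np.diag(lam) - np.outer(np.sqrt(lam), np.sqrt(lam)) / n))
Nk = N // 3
t_hat = [n / Nk * np.sum(lam[k*Nk:(k+1)*Nk] - mu[k*Nk:(k+1)*Nk]) for k in range(3)]
print(t_hat)   # close to [1, 3, 7]
```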
with mN the deterministic equivalent of mBN . This is the only random matrix result we need.
I Before going further, we need some reminders from complex analysis.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 52/142
Complex integration
I From Cauchy's integral formula, denoting C_k a contour enclosing only t_k,

t_k = \frac{1}{2\pi i}\oint_{C_k} \frac{\omega}{\omega - t_k}\, d\omega = \frac{1}{2\pi i}\oint_{C_k} \frac{1}{N_k}\sum_{j=1}^{K} N_j\,\frac{\omega}{\omega - t_j}\, d\omega = -\frac{N}{2\pi i N_k}\oint_{C_k} \omega\, m_{F^{C_N}}(\omega)\, d\omega .

m_F(z) \simeq m_{B_N}(z) \triangleq \frac{1}{N}\sum_{k=1}^{N} \frac{1}{\lambda_k - z}, \quad \text{with } (\lambda_1, \ldots, \lambda_N) = \mathrm{eig}(B_N) = \mathrm{eig}(YY^H) .
Figure: The function x_F(m) for C_N with three evenly weighted masses in 1, 3 and 7 (poles at m = -1, -1/3, -1/7); the integration contour around a cluster is mapped from an interval [m_1, m_2] on the horizontal axis.
For the residue calculus, denote

f(w) = \frac{n}{N}\, w\, \frac{m_{B_N}'(w)}{m_{B_N}(w)} .

I The \mu_k's are zeros of m_{B_N} and contribute residues \frac{n}{N}\mu_k.
I The \lambda_k's are poles of order 1 and

\lim_{z\to\lambda_k} (z - \lambda_k) f(z) = -\frac{n}{N}\lambda_k .

I So, finally,

\hat t_k = \frac{n}{N_k}\sum_{m \in \text{contour}} (\lambda_m - \mu_m) .
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 57/142
which leads to

\frac{1}{2\pi i}\oint_{\Gamma_k} \frac{m_F'(w)}{m_F(w)}\, dw = 0

the empirical version of which is

\#\{i : \lambda_i \in \Gamma_k\} - \#\{i : \mu_i \in \Gamma_k\} .

Since their difference tends to 0, there are asymptotically as many \lambda_k's as \mu_k's in the contour, hence each \mu_k is asymptotically in the integration contour.
Related bibliography
I C. A. Tracy and H. Widom, On orthogonal and symplectic matrix ensembles, Communications in Mathematical Physics, vol. 177, no. 3, pp.
727-754, 1996.
I G. W. Anderson, A. Guionnet, O. Zeitouni, An introduction to random matrices, Cambridge studies in advanced mathematics, vol. 118, 2010.
I F. Bornemann, On the numerical evaluation of distributions in random matrix theory: A review, Markov Process. Relat. Fields, vol. 16, pp.
803-866, 2010.
I Y. Q. Yin, Z. D. Bai, P. R. Krishnaiah, On the limit of the largest eigenvalue of the large dimensional sample covariance matrix, Probability
Theory and Related Fields, vol. 78, no. 4, pp. 509-521, 1988.
I J. W. Silverstein, Z.D. Bai and Y.Q. Yin, A note on the largest eigenvalue of a large dimensional sample covariance matrix, Journal of Multivariate
Analysis, vol. 26, no. 2, pp. 166-168. 1988.
I Z. D. Bai, J. W. Silverstein, No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance
matrices, The Annals of Probability, vol. 26, no.1 pp. 316-345, 1998.
I Z. D. Bai, J. W. Silverstein, Exact Separation of Eigenvalues of Large Dimensional Sample Covariance Matrices, The Annals of Probability, vol.
27, no. 3, pp. 1536-1555, 1999.
I J. W. Silverstein, P. Debashis, No eigenvalues outside the support of the limiting empirical spectral distribution of a separable covariance matrix,
J. of Multivariate Analysis vol. 100, no. 1, pp. 37-57, 2009.
I J. W. Silverstein, J. Baik, Eigenvalues of large sample covariance matrices of spiked population models Journal of Multivariate Analysis, vol. 97,
no. 6, pp. 1382-1408, 2006.
I I. M. Johnstone, On the distribution of the largest eigenvalue in principal components analysis, Annals of Statistics, vol. 99, no. 2, pp. 295-327,
2001.
I K. Johansson, Shape Fluctuations and Random Matrices, Comm. Math. Phys. vol. 209, pp. 437-476, 2000.
I J. Baik, G. Ben Arous, S. Péché, Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices, The Annals of Probability, vol. 33, no. 5, pp. 1643-1697, 2005.
Part 1: Fundamentals of Random Matrix Theory/1.4 Spectrum Analysis and G-estimation 59/142
I J. W. Silverstein, S. Choi, Analysis of the limiting spectral distribution of large dimensional random matrices, Journal of Multivariate Analysis, vol.
54, no. 2, pp. 295-309, 1995.
I W. Hachem, P. Loubaton, X. Mestre, J. Najim, P. Vallet, A Subspace Estimator for Fixed Rank Perturbations of Large Random Matrices, arxiv
preprint 1106.1497, 2011.
I R. Couillet, W. Hachem, Local failure detection and diagnosis in large sensor networks, (submitted to) IEEE Transactions on Information Theory,
arXiv preprint 1107.1409.
I F. Benaych-Georges, R. Rao, The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices, Advances in
Mathematics, vol. 227, no. 1, pp. 494-521, 2011.
I X. Mestre, On the asymptotic behavior of the sample estimates of eigenvalues and eigenvectors of covariance matrices, IEEE Transactions on
Signal Processing, vol. 56, no.11, 2008.
I X. Mestre, Improved estimation of eigenvalues and eigenvectors of covariance matrices using their sample estimates, IEEE trans. on Information
Theory, vol. 54, no. 11, pp. 5113-5129, 2008.
I R. Couillet, J. W. Silverstein, Z. Bai, M. Debbah, Eigen-Inference for Energy Estimation of Multiple Sources, IEEE Transactions on Information
Theory, vol. 57, no. 4, pp. 2420-2439, 2011.
I P. Vallet, P. Loubaton and X. Mestre, Improved subspace estimation for multivariate observations of high dimension: the deterministic signals
case, arxiv preprint 1002.3234, 2010.
Application to Signal Sensing and Array Processing/2.1 Eigenvalue-based detection 62/142
Problem formulation
with h \in C^N, x \in C^N, W \in C^{N\times n}.
I We assume no knowledge whatsoever, except that W has i.i.d. (not necessarily Gaussian) entries.
Application to Signal Sensing and Array Processing/2.1 Eigenvalue-based detection 63/142
I Advantages:
I much simpler than finite size analysis
I ratio independent of the noise variance, so it needs not be known
I Drawbacks:
I only stands for very large N (the dimension N at which asymptotic results kick in is a function of the problem parameters!)
I ad-hoc method, does not rely on a performance criterion.
Application to Signal Sensing and Array Processing/2.1 Eigenvalue-based detection 65/142
I Denote

T_N = \frac{\lambda_{\max}(YY^H)}{\frac{1}{N}\mathrm{tr}\, YY^H} .

To guarantee a maximum false alarm rate of \alpha, decide H_1 if T_N exceeds a threshold \xi_N(\alpha) calibrated on the fluctuations of \lambda_{\max} under H_0, and H_0 otherwise.
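A toy version of this test (a sketch entirely of our own: the threshold is hard-coded as the asymptotic bulk edge (1+√c)² plus a fixed margin, whereas a practical test would calibrate the margin from the Tracy–Widom quantiles for the target α):

```python
import numpy as np

rng = np.random.default_rng(3)
N, n = 100, 400
c = N / n
threshold = (1 + np.sqrt(c))**2 + 0.2   # bulk edge + ad-hoc margin

def T(Y):
    """Ratio test statistic: largest eigenvalue over normalized trace."""
    eigs = np.linalg.eigvalsh(Y @ Y.conj().T / n)
    return eigs[-1] / eigs.mean()

# H0: pure noise
Y0 = rng.standard_normal((N, n))
# H1: rank-one signal plus noise (population spike omega = |h|^2/N = 3)
h = rng.standard_normal(N); h *= np.sqrt(3.0 * N) / np.linalg.norm(h)
s = rng.standard_normal(n)
Y1 = np.outer(h, s) / np.sqrt(N) + rng.standard_normal((N, n))

# the noise sample stays below the threshold; the spiked sample exceeds it
print(T(Y0), T(Y1), threshold)
```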
Figure: ROC curve for a priori unknown \sigma^2 of the Neyman-Pearson test, the condition number method and the GLRT, K = 1, N = 4, M = 8, SNR = 0 dB. For the Neyman-Pearson test, both uniform and Jeffreys priors, with exponent 1, are provided.
Related bibliography
I R. Couillet, M. Debbah, A Bayesian Framework for Collaborative Multi-Source Signal Sensing, IEEE Transactions on Signal Processing, vol. 58,
no. 10, pp. 5186-5195, 2010.
I T. Ratnarajah, R. Vaillancourt, M. Alvo, Eigenvalues and condition numbers of complex random matrices, SIAM Journal on Matrix Analysis and
Applications, vol. 26, no. 2, pp. 441-456, 2005.
I M. Matthaiou, M. R. McKay, P. J. Smith, J. A. Nossek, On the condition number distribution of complex Wishart matrices, IEEE Transactions on Communications, vol. 58, no. 6, pp. 1705-1717, 2010.
I C. Zhong, M. R. McKay, T. Ratnarajah, K. Wong, Distribution of the Demmel condition number of Wishart matrices, IEEE Trans. on
Communications, vol. 59, no. 5, pp. 1309-1320, 2011.
I L. S. Cardoso, M. Debbah, P. Bianchi, J. Najim, Cooperative spectrum sensing using random matrix theory, International Symposium on Wireless
Pervasive Computing, pp. 334-338 , 2008.
I P. Bianchi, M. Debbah, M. Maida, J. Najim, Performance of Statistical Tests for Source Detection using Random Matrix Theory, IEEE Trans. on
Information Theory, vol. 57, no. 4, pp. 2400-2419, 2011.
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 69/142
Source localization
A uniform array of M antennas receives signal from K radio sources during n signal snapshots.
Objective: Estimate the arrival angles \theta_1, \ldots, \theta_K.
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 70/142
x_t = \sum_{k=1}^{K} a(\theta_k) s_{k,t} + \sigma w_t, \quad t = 1, \ldots, n

I A_N = [a_N(\theta_1), \ldots, a_N(\theta_K)] with a_N(\theta) = \left(1, e^{2\pi i d\sin\theta}, \ldots, e^{2\pi i (N-1) d\sin\theta}\right)^T
I \sigma^2 is the noise variance and is set to 1 for simplicity,
I Objective: infer \theta_1, \ldots, \theta_K from the n observations
I Let X_N = [x_1, \ldots, x_n]; then

X = AS + \sigma W = [A \ \sigma I_N]\begin{pmatrix} S \\ W \end{pmatrix}

The MUSIC localization estimate reads

\hat\eta(\theta) = a_N(\theta)^H (I_N - \Pi)\, a_N(\theta)

where \Pi is the orthogonal projection matrix on the eigenspace associated to the K largest eigenvalues of \frac{1}{n} X_N X_N^H.
I It is well known that this estimator is consistent when n \to +\infty with K, N fixed,
I We consider the case of the K-finite spiked covariance model
I What happens when n, N \to +\infty?
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 72/142
Denote P = AA^H = U_S \Omega U_S^H, \Omega = \mathrm{diag}(\omega_1, \ldots, \omega_K), and Z = [S^T\ W^T]^T to recover (up to one row) the generic spiked model

X = (I_N + P)^{1/2} Z .

I Reminder: If x is an eigenvalue of \frac{1}{n}XX^H with x > (1+\sqrt{c})^2 (edge of the MP law), for all large n,

x \xrightarrow{a.s.} x_k \triangleq 1 + \omega_k + c\,(1+\omega_k)\,\omega_k^{-1}, \quad \text{if } \omega_k > \sqrt{c}

for some k.
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 73/142
\hat\eta(\theta) = a(\theta)^H \hat U_W \hat U_W^H a(\theta) \quad (\hat U_W \in C^{N\times(N-K)} \text{ such that } \hat U_W^H \hat U_S = 0)

= a(\theta)^H a(\theta) - \sum_{i=1}^{K} a(\theta)^H \hat u_i \hat u_i^H a(\theta)

with \hat u_1, \ldots, \hat u_N the eigenvectors belonging to \hat\lambda_1 > \ldots > \hat\lambda_N.
To fall back on known RMT quantities, we use the Cauchy integral:

a(\theta)^H \hat u_i \hat u_i^H a(\theta) = -\frac{1}{2\pi i}\oint_{C_i} a(\theta)^H \left(\frac{1}{n}XX^H - zI_N\right)^{-1} a(\theta)\, dz

where P = U_S \Omega U_S^H, and the integrand expands through the quantities

\hat b = I_K + z\,(I_K + \Omega)^{-1} U_S^H \left(\frac{1}{n}ZZ^H - zI_N\right)^{-1} U_S

\hat a_1^H = z\, a(\theta)^H (I_N + P)^{1/2}\left(\frac{1}{n}ZZ^H - zI_N\right)^{-1} U_S

\hat a_2 = (I_K + \Omega)^{-1} U_S^H \left(\frac{1}{n}ZZ^H - zI_N\right)^{-1} (I_N + P)^{1/2} a(\theta) .
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 74/142
The contour integrals reduce, by residue calculus, to terms of the form

T_i = \sum_{\ell=1}^{K} \frac{1}{1+\omega_\ell}\, \frac{1}{2\pi i}\oint_{C_i} \frac{z\, m^2(z)}{\frac{1+\omega_\ell}{\omega_\ell} + z\, m(z)}\, dz .

Therefore,

\hat\eta(\theta) \xrightarrow{a.s.} a(\theta)^H a(\theta) - \sum_{i=1}^{K} \frac{1 - c\,\omega_i^{-2}}{1 + c\,\omega_i^{-1}}\; a(\theta)^H u_i u_i^H a(\theta) .
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 75/142
Improved G-MUSIC
Recall that:

\frac{1 + c\,\omega_k^{-1}}{1 - c\,\omega_k^{-2}}\; a(\theta)^H \hat u_k \hat u_k^H a(\theta) - a(\theta)^H u_k u_k^H a(\theta) \xrightarrow{a.s.} 0

whence the estimator

\hat\eta_G(\theta) \simeq a(\theta)^H a(\theta) - \sum_{k=1}^{K} \frac{1 + c\,\hat\omega_k^{-1}}{1 - c\,\hat\omega_k^{-2}}\; a(\theta)^H \hat u_k \hat u_k^H a(\theta)

with

\hat\omega_k = \frac{\hat\lambda_k - (c+1) + \sqrt{(\hat\lambda_k - (c+1))^2 - 4c}}{2} .

We then obtain another (N, n)-consistent MUSIC estimator, only valid for K finite!
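The spike inversion underlying \hat\omega_k is a one-liner and can be verified against the direct mapping (a sketch; function name is ours):

```python
import numpy as np

def omega_hat(lam, c):
    """Invert lam = 1 + w + c*(1+w)/w for the population spike w
    (valid when lam lies beyond the right edge (1+sqrt(c))^2)."""
    b = lam - (c + 1)
    return (b + np.sqrt(b**2 - 4 * c)) / 2

c, omega = 1/3, 2.0
lam = 1 + omega + c * (1 + omega) / omega   # asymptotic spike location: 3.5
print(omega_hat(lam, c))                    # recovers 2.0 exactly
```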
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 76/142
Simulation results
Figure: MUSIC against G-MUSIC for DoA detection of K = 3 signal sources, N = 20 sensors, M = 150 samples, SNR of 10 dB. Angles of arrival of 10°, 35°, and 37°.
Application to Signal Sensing and Array Processing/2.2 The spiked G-MUSIC algorithm 77/142
\hat C_N = \frac{1}{n}\sum_{i=1}^{n} \frac{x_i x_i^*}{\frac{1}{N} x_i^* \hat C_N^{-1} x_i}

\hat C_N = \frac{1}{n}\sum_{i=1}^{n} u\!\left(\frac{1}{N} x_i^* \hat C_N^{-1} x_i\right) x_i x_i^*

where u satisfies
(i) u : [0, \infty) \to (0, \infty) nonnegative, continuous and non-increasing
(ii) \phi : x \mapsto x\, u(x) increasing and bounded with \lim_{x\to\infty}\phi(x) \triangleq \phi_\infty > 1
(iii) \phi_\infty < c_+^{-1}.
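The implicit definition above is routinely solved by plain fixed-point iteration (a sketch of our own, with u(x) = (1+α)/(α+x), which satisfies (i)–(iii) for this choice of N/n; convergence behavior beyond this toy setting is not claimed here):

```python
import numpy as np

rng = np.random.default_rng(4)
N, n, alpha = 50, 500, 0.2
u = lambda x: (1 + alpha) / (alpha + x)    # phi(x) = x u(x) bounded by 1+alpha
X = rng.standard_normal((N, n))            # here C_N = I_N

C = np.eye(N)
for _ in range(200):
    Cinv = np.linalg.inv(C)
    # q_i = (1/N) x_i^* C^{-1} x_i for all i at once
    q = np.einsum('ij,jk,ki->i', X.T, Cinv, X) / N
    C_new = (X * u(q)[None, :]) @ X.T / n  # (1/n) sum_i u(q_i) x_i x_i^*
    if np.linalg.norm(C_new - C, 2) < 1e-9:
        C = C_new
        break
    C = C_new
print(np.linalg.norm(C - np.eye(N), 2))    # bounded perturbation of I_N
```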
I Additional technical assumption: let \nu_n \triangleq \frac{1}{n}\sum_{i=1}^{n}\delta_{\tau_i}; for each a > b > 0, a.s.
Heuristic approach
I Major issues with \hat C_N:
I It is defined implicitly.
I It is a sum of non-independent rank-one matrices from the vectors \sqrt{u(\frac{1}{N} x_i^* \hat C_N^{-1} x_i)}\, x_i (\hat C_N depends on all the x_j's).
I But there is some hope:
I First remark: we can work with C_N = I_N without generality restriction!
I Denote

\hat C_{(j)} = \frac{1}{n}\sum_{i \neq j} u\!\left(\frac{1}{N} x_i^* \hat C_N^{-1} x_i\right) x_i x_i^*

so that, intuitively,

\hat C_N \simeq \frac{1}{n}\sum_{i=1}^{n} (u \circ f)\!\left(\tau_i\, \frac{1}{N}\mathrm{tr}\, \hat C_N^{-1}\right) x_i x_i^*

I Use random matrix results to find a limiting value \gamma for \frac{1}{N}\mathrm{tr}\, \hat C_N^{-1}, and conclude

\hat C_N \simeq \frac{1}{n}\sum_{i=1}^{n} (u \circ f)(\gamma\tau_i)\, x_i x_i^* .
Advanced Random Matrix Models for Robust Estimation/3.1 Robust Estimation of Scatter 86/142
so that

\frac{1}{N} x_i^* \hat C_{(i)}^{-1} x_i = \frac{\frac{1}{N} x_i^* \hat C_N^{-1} x_i}{1 - c_N\,\phi\!\left(\frac{1}{N} x_i^* \hat C_N^{-1} x_i\right)} .

Now the function g : x \mapsto x/(1 - c_N\phi(x)) is monotonically increasing (we use the assumption \phi_\infty < c^{-1}!), hence, with f = g^{-1},

\frac{1}{N} x_i^* \hat C_N^{-1} x_i = g^{-1}\!\left(\frac{1}{N} x_i^* \hat C_{(i)}^{-1} x_i\right) .
Advanced Random Matrix Models for Robust Estimation/3.1 Robust Estimation of Scatter 87/142
Main result

Theorem
Define \hat S_N \triangleq \frac{1}{n}\sum_{i=1}^{n} v(\tau_i\gamma)\, x_i x_i^*, with \gamma the unique solution of

1 = \frac{1}{n}\sum_{i=1}^{n} \frac{\psi(\tau_i\gamma)}{1 + c\,\psi(\tau_i\gamma)} .

Then

\|\hat C_N - \hat S_N\| \xrightarrow{a.s.} 0 .

I Remarks:
I The theorem says: a first order substitution of \hat C_N by \hat S_N is allowed for large N, n.
I It turns out that v \geq u, although both share the same general behavior.
I Corollaries:

\max_{1\leq i\leq n}\left|\lambda_i(\hat S_N) - \lambda_i(\hat C_N)\right| \xrightarrow{a.s.} 0

\left|\frac{1}{N}\mathrm{tr}\,(\hat C_N - zI_N)^{-1} - \frac{1}{N}\mathrm{tr}\,(\hat S_N - zI_N)^{-1}\right| \xrightarrow{a.s.} 0

This is an important feature for detection and estimation.
I Proof: so far in the tutorial, we do not have a rigorous proof!
Advanced Random Matrix Models for Robust Estimation/3.1 Robust Estimation of Scatter 89/142
Proof

e_j \triangleq \frac{v\!\left(\tau_j\,\frac{1}{N} w_j^*\left(\frac{1}{n}\sum_{i\neq j}\tau_i\, v(\tau_i d_i)\, w_i w_i^*\right)^{-1} w_j\right)}{v(\tau_j\gamma)} = \frac{v\!\left(\tau_j\,\frac{1}{N} w_j^*\left(\frac{1}{n}\sum_{i\neq j}\tau_i\, v(\tau_i\gamma e_i)\, w_i w_i^*\right)^{-1} w_j\right)}{v(\tau_j\gamma)}

\leq \frac{v\!\left(\tau_j\, e_n\,\frac{1}{N} w_j^*\left(\frac{1}{n}\sum_{i\neq j}\tau_i\, v(\tau_i\gamma)\, w_i w_i^*\right)^{-1} w_j\right)}{v(\tau_j\gamma)}
Advanced Random Matrix Models for Robust Estimation/3.1 Robust Estimation of Scatter 90/142
Proof
I Specialization to e_n:

e_n \leq \frac{v\!\left(\tau_n\, e_n\,\frac{1}{N} w_n^*\left(\frac{1}{n}\sum_{i\neq n}\tau_i\, v(\tau_i\gamma)\, w_i w_i^*\right)^{-1} w_n\right)}{v(\tau_n\gamma)}

or equivalently, recalling \psi(x) = x\,v(x),

\frac{1}{N} w_n^*\left(\frac{1}{n}\sum_{i\neq n}\tau_i\, v(\tau_i\gamma)\, w_i w_i^*\right)^{-1} w_n \leq \frac{\gamma\,\psi\!\left(\tau_n\, e_n\,\frac{1}{N} w_n^*\left(\frac{1}{n}\sum_{i\neq n}\tau_i\, v(\tau_i\gamma)\, w_i w_i^*\right)^{-1} w_n\right)}{\psi(\tau_n\gamma)} .
Proof
I Back to the original problem: for all large n a.s., we then have (using the growth of \psi)

e_n \leq \frac{\psi(\tau_n\gamma\, e_n(1+\varepsilon))}{\psi(\tau_n\gamma)} .

I Proof by contradiction: assume e_n > 1 + \ell infinitely often; then on a subsequence, e_n > 1 + \ell always and

1+\ell \leq \frac{\psi(\tau_n\gamma(1+\ell)(1+\varepsilon))}{\psi(\tau_n\gamma)} .

I Bounded support for the \tau_i: if 0 < \tau^- < \tau_i < \tau^+ < \infty for all i, n, then on a subsequence where \tau_n \to \tau^0,

1+\ell \leq \underbrace{\frac{\psi(\tau^0\gamma(1+\ell)(1+\varepsilon))}{\psi(\tau^0\gamma(1+\ell))}}_{\to 1 \text{ as } \varepsilon \to 0}\;\underbrace{\frac{\psi(\tau^0\gamma(1+\ell))}{\psi(\tau^0\gamma)}}_{< 1+\ell} \qquad \text{CONTRADICTION!}
Simulations

Figure: Histogram of the eigenvalues of \frac{1}{n}\sum_{i=1}^{n} x_i x_i^* against the limiting density, for n = 2500, N = 500, C_N = diag(I_{125}, 3I_{125}, 10I_{250}), \tau_i with Γ(.5, 2)-distribution.
Advanced Random Matrix Models for Robust Estimation/3.1 Robust Estimation of Scatter 93/142
Simulations
Figure: Histogram of the eigenvalues of \hat C_N (left) and \hat S_N (right) for n = 2500, N = 500, C_N = diag(I_{125}, 3I_{125}, 10I_{250}), \tau_i with Γ(.5, 2)-distribution.
System Setting
I Signal model:

y_i = \sum_{l=1}^{L} \sqrt{p_l}\, a_l\, s_{li} + \sqrt{\tau_i}\, w_i = A_i \tilde w_i

A_i \triangleq \left[\sqrt{p_1} a_1\ \ldots\ \sqrt{p_L} a_L\ \ \sqrt{\tau_i} I_N\right], \quad \tilde w_i \triangleq [s_{1i}, \ldots, s_{Li}, w_i^T]^T

with y_1, \ldots, y_n \in C^N satisfying:
1. \tau_1, \ldots, \tau_n > 0 random, such that \nu_n \triangleq \frac{1}{n}\sum_{i=1}^{n}\delta_{\tau_i} \to \nu weakly and \int t\,\nu(dt) = 1;
2. w_1, \ldots, w_n \in C^N random, independent, unitarily invariant, of norm \sqrt{N};
3. L \ll N, p_1 > \ldots > p_L > 0 deterministic;
4. a_1, \ldots, a_L \in C^N deterministic or random with A^* A \xrightarrow{a.s.} \mathrm{diag}(p_1, \ldots, p_L) as N \to \infty, where A \triangleq [\sqrt{p_1} a_1, \ldots, \sqrt{p_L} a_L] \in C^{N\times L};
5. s_{11}, \ldots, s_{Ln} \in C independent with zero mean, unit variance.
I Relation to previous model: if L = 0, y_i = \sqrt{\tau_i} w_i.
Elliptical model with covariance a low-rank (L) perturbation of I_N.
We expect a spiked version of previous results.
I Application contexts:
I wireless communications: signals s_{li} from L transmitters, N-antenna receiver; a_l random i.i.d. channels (a_l^* a_{l'} \to \delta_{ll'}, e.g. a_l \sim CN(0, I_N/N));
I array processing: L sources emit signals s_{li} at steering angle a_l = a(\theta_l). For a ULA,

[a(\theta)]_j = N^{-1/2}\exp(2\pi i\, d j \sin\theta) .
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 97/142
Some intuition
Theoretical results
Theorem (Extension to spiked robust model)
Under the same assumptions as in the previous section,

\|\hat C_N - \hat S_N\| \xrightarrow{a.s.} 0

where

\hat S_N \triangleq \frac{1}{n}\sum_{i=1}^{n} v(\tau_i\gamma)\, A_i \tilde w_i \tilde w_i^* A_i^*

and we recall

A_i \triangleq \left[\sqrt{p_1} a_1\ \ldots\ \sqrt{p_L} a_L\ \ \sqrt{\tau_i} I_N\right], \quad \tilde w_i = [s_{1i}, \ldots, s_{Li}, w_i^T]^T .
Localization of eigenvalues
Further denote

p^- \triangleq \lim_{x \downarrow S^+}\left(c\int \frac{\delta(x)\, v_c(t)}{1 + \delta(x)\, t\, v_c(t)}\,\nu(dt)\right)^{-1}

with S^+ the right edge of the support of the limiting spectral measure of \hat S_N. Then, if p_j > p^-,

\hat\lambda_j \xrightarrow{a.s.} \Lambda_j > S^+,

otherwise \limsup_n \hat\lambda_j \leq S^+ a.s., with \Lambda_j the unique positive solution to

\left(c\int \frac{\delta(\Lambda_j)\, v_c(\tau)}{1 + \delta(\Lambda_j)\,\tau\, v_c(\tau)}\,\nu(d\tau)\right)^{-1} = p_j .
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 100/142
Simulation

Figure: Histogram of the eigenvalues of \frac{1}{n}\sum_i y_i y_i^* against the limiting spectral measure, L = 2, p_1 = p_2 = 1, N = 200, n = 1000, Student-t impulsions.
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 101/142
Simulation

Figure: Histogram of the eigenvalues of \hat C_N against the limiting spectral measure, for u(x) = (1+\alpha)/(\alpha+x) with \alpha = 0.2, L = 2, p_1 = p_2 = 1, N = 200, n = 1000, Student-t impulsions; the right edge S^+ of the support is marked.
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 102/142
Comments
I SCM vs. robust: spikes invisible in the SCM in impulsive noise are reborn in the robust estimate of scatter.
I Largest eigenvalues:
I \lambda_i(\hat C_N) > S^+: presence of a source!
I \lambda_i(\hat C_N) \in (\sup(\mathrm{Support}), S^+): may be due to a source or to a noise impulse.
I \lambda_i(\hat C_N) < \sup(\mathrm{Support}): as usual, nothing can be said.
This induces a natural source detection algorithm.
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 103/142
\left(c\int \frac{\hat\delta(\hat\lambda_j)\,\hat v_c(\tau)}{1 + \hat\delta(\hat\lambda_j)\,\tau\,\hat v_c(\tau)}\,\hat\nu(d\tau)\right)^{-1} \xrightarrow{a.s.} p_j

where

w_k = \frac{\displaystyle\int \frac{v_c(t)}{\left(1 + \delta(\Lambda_k)\, t\, v_c(t)\right)^2}\,\nu(dt)}{\displaystyle\int \frac{v_c(t)}{1 + \delta(\Lambda_k)\, t\, v_c(t)}\,\nu(dt)\left(1 - \frac{1}{c}\int \frac{\delta(\Lambda_k)^2\, t^2\, v_c(t)^2}{\left(1 + \delta(\Lambda_k)\, t\, v_c(t)\right)^2}\,\nu(dt)\right)} .
Advanced Random Matrix Models for Robust Estimation/3.2 Spiked model extension and robust G-MUSIC 104/142
2. Purely empirical bilinear form estimation. For each a, b \in C^N with \|a\| = \|b\| = 1, and each p_j > p^-,

\sum_{k,\, p_k = p_j} a^* u_k u_k^* b - \sum_{k,\, p_k = p_j} \hat w_k\, a^* \hat u_k \hat u_k^* b \xrightarrow{a.s.} 0

where

\hat w_k = \frac{\displaystyle\frac{1}{n}\sum_{i=1}^{n} \frac{\hat v(\hat\tau_i\hat\gamma)}{\left(1 + \hat\delta(\hat\lambda_k)\,\hat\tau_i\,\hat v(\hat\tau_i\hat\gamma)\right)^2}}{\displaystyle\frac{1}{n}\sum_{i=1}^{n} \frac{\hat v(\hat\tau_i\hat\gamma)}{1 + \hat\delta(\hat\lambda_k)\,\hat\tau_i\,\hat v(\hat\tau_i\hat\gamma)}\left(1 - \frac{1}{N}\sum_{i=1}^{n} \frac{\hat\delta(\hat\lambda_k)^2\,\hat\tau_i^2\,\hat v(\hat\tau_i\hat\gamma)^2}{\left(1 + \hat\delta(\hat\lambda_k)\,\hat\tau_i\,\hat v(\hat\tau_i\hat\gamma)\right)^2}\right)}

with \hat\tau_i \triangleq \frac{1}{N} y_i^* \hat C_{(i)}^{-1} y_i, and \hat\gamma, \hat\delta(x) defined as \gamma, \delta(x) but for (\tau_i, \gamma) replaced by (\hat\tau_i, \hat\gamma).
Application to G-MUSIC
I Assume the model a_i = a(\theta_i) with

a(\theta) = N^{-\frac{1}{2}} \left[ \exp\left(2\pi \mathrm{i}\, d j \sin(\theta)\right) \right]_{j=0}^{N-1}.

I Define the localization functions

\hat\eta_{RG}(\theta) = 1 - \sum_{k=1}^{|\{j,\, p_j > p^-\}|} w_k\, a(\theta)^* \hat u_k \hat u_k^* a(\theta)

\hat\eta_{RG}^{\mathrm{emp}}(\theta) = 1 - \sum_{k=1}^{|\{j,\, p_j > p^-\}|} \hat w_k\, a(\theta)^* \hat u_k \hat u_k^* a(\theta)

and the angle estimates

\hat\theta_j \triangleq \operatorname{argmin}_{\theta \in \mathbb{R}} \left\{ \hat\eta_{RG}(\theta) \right\}, \qquad \hat\theta_j^{\mathrm{emp}} \triangleq \operatorname{argmin}_{\theta \in \mathbb{R}} \left\{ \hat\eta_{RG}^{\mathrm{emp}}(\theta) \right\}.
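A sketch of the resulting localization step (Python/NumPy; the eigenvectors `U_hat` and weights `w` are assumed to come from the robust scatter estimate, and the inter-sensor spacing `d` is an illustrative parameter):

```python
import numpy as np

def steering(theta, N, d=0.5):
    # a(theta) = N^{-1/2} [exp(2*pi*i*d*j*sin(theta))], j = 0, ..., N-1
    j = np.arange(N)
    return np.exp(2j * np.pi * d * j * np.sin(theta)) / np.sqrt(N)

def rgmusic_localization(thetas, U_hat, w, d=0.5):
    """eta(theta) = 1 - sum_k w_k |a(theta)^* u_k|^2; sources sit at minima."""
    N = U_hat.shape[0]
    eta = np.empty(len(thetas))
    for i, th in enumerate(thetas):
        a = steering(th, N, d)
        eta[i] = 1.0 - float(w @ (np.abs(U_hat.conj().T @ a) ** 2))
    return eta
```

Angle estimates are then read off as the local minimizers of `eta` over the grid `thetas`.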
MUSIC

Figure: Random realization of the localization functions for the various MUSIC estimators, with N = 20, n = 100, two sources at 10° and 12°, Student-t impulsions with parameter 100, u(x) = (1+\alpha)/(\alpha+x) with \alpha = 0.2. Powers p_1 = p_2 = 10^{0.5} = 5 dB.
Robust MUSIC

Figure: Mean square error performance of the estimation of \theta_1 = 10°, with N = 20, n = 100, two sources at 10° and 12°, Student-t impulsions with parameter 10, u(x) = (1+\alpha)/(\alpha+x) with \alpha = 0.2, p_1 = p_2.
Robust MUSIC

Figure: Mean square error performance of the estimation of \theta_1 = 10°, with N = 20, n = 100, two sources at 10° and 12°, sample outlier scenario (\tau_i = 1 for i < n, \tau_n = 100), u(x) = (1+\alpha)/(\alpha+x) with \alpha = 0.2, p_1 = p_2.
Advanced Random Matrix Models for Robust Estimation/3.3 Robust shrinkage and application to mathematical finance 110/142
Context
Ledoit and Wolf, 2004. A well-conditioned estimator for large-dimensional covariance matrices.
Pascal, Chitour, Quek, 2013. Generalized robust shrinkage estimator: application to STAP data.
Chen, Wiesel, Hero, 2011. Robust shrinkage estimation of high-dimensional covariance matrices.

Ledoit–Wolf shrinkage:

(1-\rho) \frac{1}{n} \sum_{i=1}^n x_i x_i^* + \rho I_N, \quad \text{for some } \rho \in [0,1].

Pascal's regularized robust estimator:

\hat C_N(\rho) = (1-\rho) \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} x_i^* \hat C_N^{-1}(\rho) x_i} + \rho I_N, \quad \rho \in (\max\{0, 1 - n/N\}, 1]

Chen's regularized robust estimator:

\check C_N(\rho) = \frac{\check B_N(\rho)}{\frac{1}{N} \operatorname{tr} \check B_N(\rho)}, \quad \check B_N(\rho) = (1-\rho) \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} x_i^* \check C_N^{-1}(\rho) x_i} + \rho I_N, \quad \rho \in (0,1]
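A numerical sketch of the regularized robust estimator via its defining fixed point (Python/NumPy; convergence of the plain iteration is assumed here rather than proved, and the iteration count and tolerance are illustrative):

```python
import numpy as np

def regularized_tyler(X, rho, n_iter=100, tol=1e-9):
    """Fixed-point iteration for
    C(rho) = (1-rho) * (1/n) * sum_i x_i x_i^* / ((1/N) x_i^* C(rho)^{-1} x_i) + rho * I.

    X has shape (N, n): one sample per column.
    """
    N, n = X.shape
    C = np.eye(N)
    for _ in range(n_iter):
        Cinv = np.linalg.inv(C)
        # q[i] = (1/N) x_i^* C^{-1} x_i for each sample
        q = np.einsum('in,ij,jn->n', X.conj(), Cinv, X).real / N
        C_new = (1 - rho) * (X / q) @ X.conj().T / n + rho * np.eye(N)
        if np.linalg.norm(C_new - C) < tol:
            return C_new
        C = C_new
    return C
```

The returned matrix is Hermitian positive definite for admissible rho, with the regularization term rho * I keeping it well conditioned even when n < N.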
I Our result: in the random matrix regime, both estimators tend to be one and the same!
I Assumptions: as before, elliptical-like model

x_i = \sqrt{\tau_i}\, C_N^{\frac{1}{2}} w_i.

Pascal's estimator
\hat C_N(\rho) is uniformly well approximated by \hat S_N(\rho), where

\hat C_N(\rho) = (1-\rho) \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} x_i^* \hat C_N^{-1}(\rho) x_i} + \rho I_N

\hat S_N(\rho) = \frac{1}{\gamma(\rho)} \frac{1-\rho}{1 - (1-\rho)c} \frac{1}{n} \sum_{i=1}^n C_N^{\frac{1}{2}} w_i w_i^* C_N^{\frac{1}{2}} + \rho I_N

and \gamma(\rho) is the unique positive solution to the equation in \gamma

1 = \frac{1}{N} \sum_{i=1}^N \frac{\lambda_i(C_N)}{\gamma \rho + (1-\rho) \lambda_i(C_N)}.

Moreover, \rho \mapsto \gamma(\rho) is continuous on (0,1].
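The function gamma(rho) is easy to evaluate numerically, since the right-hand side of its defining equation is strictly decreasing in gamma; a minimal bisection sketch (Python; the eigenvalues of C_N are assumed known):

```python
def gamma_rho(eigs, rho, tol=1e-12):
    """Solve 1 = (1/N) sum_i lambda_i / (gamma*rho + (1-rho)*lambda_i)
    for the unique positive gamma, by bisection (f is decreasing in gamma)."""
    N = len(eigs)
    f = lambda g: sum(l / (g * rho + (1 - rho) * l) for l in eigs) / N
    lo, hi = tol, 1.0
    while f(hi) > 1:          # expand the bracket until f(hi) <= 1
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 1:        # f too large: gamma must be larger
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For rho = 1 the equation reduces to gamma = (1/N) sum_i lambda_i, a quick sanity check of the solver.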
Chen's estimator

Theorem (Chen's estimator)
For \varepsilon \in (0,1), define R_\varepsilon = [\varepsilon, 1]. Then, as N, n \to \infty, N/n \to c \in (0, \infty),

\sup_{\rho \in R_\varepsilon} \left\| \check C_N(\rho) - \check S_N(\rho) \right\| \xrightarrow{\text{a.s.}} 0

where

\check C_N(\rho) = \frac{\check B_N(\rho)}{\frac{1}{N} \operatorname{tr} \check B_N(\rho)}, \qquad \check B_N(\rho) = (1-\rho) \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} x_i^* \check C_N^{-1}(\rho) x_i} + \rho I_N

\check S_N(\rho) = \frac{1-\rho}{1 - \rho + T_\rho} \frac{1}{n} \sum_{i=1}^n C_N^{\frac{1}{2}} w_i w_i^* C_N^{\frac{1}{2}} + \frac{T_\rho}{1 - \rho + T_\rho} I_N

in which T_\rho = \rho\, \check\gamma(\rho)\, F(\check\gamma(\rho); \rho) with, for all x > 0,

F(x; \rho) = \frac{1}{2} \left( \rho - c(1-\rho) \right) + \sqrt{\frac{1}{4} \left( \rho - c(1-\rho) \right)^2 + (1-\rho) \frac{1}{x}}

and \check\gamma(\rho) is the unique positive solution to the equation in \check\gamma

1 = \frac{1}{N} \sum_{i=1}^N \frac{\lambda_i(C_N)}{\check\gamma \rho + \frac{1-\rho}{(1-\rho)c + F(\check\gamma; \rho)} \lambda_i(C_N)}.

Moreover, \rho \mapsto \check\gamma(\rho) is continuous on (0,1].
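For numerical use, F is immediate to implement (Python; passing c = N/n explicitly is an assumption on the calling convention):

```python
import math

def F(x, rho, c):
    """F(x; rho) = (rho - c(1-rho))/2 + sqrt((rho - c(1-rho))^2/4 + (1-rho)/x)."""
    a = rho - c * (1.0 - rho)
    return 0.5 * a + math.sqrt(0.25 * a * a + (1.0 - rho) / x)
```

Note that F(x; 1) = 1 regardless of x, and F is decreasing in x for rho < 1 through the (1-rho)/x term.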
Up to a change of variable \rho \mapsto \underline\rho, both \hat S_N and \check S_N take the common form

\underline S_N(\underline\rho) = (1-\underline\rho) \frac{1}{\gamma(\underline\rho) \left( 1 - (1-\underline\rho)c \right)} \frac{1}{n} \sum_{i=1}^n C_N^{\frac{1}{2}} w_i w_i^* C_N^{\frac{1}{2}} + \underline\rho\, I_N.

Denote

D^\star = \frac{c\,(M_2 - 1)}{c + M_2 - 1}, \qquad \rho^\star = \frac{c}{c + M_2 - 1}, \qquad M_2 = \lim_N \frac{1}{N} \sum_{i=1}^N \lambda_i(C_N)^2,

and \hat\rho^\star, \check\rho^\star the unique solutions to

\frac{\hat\rho^\star}{\frac{1-\hat\rho^\star}{\gamma(\hat\rho^\star) \left( 1 - (1-\hat\rho^\star)c \right)} + \hat\rho^\star} = \frac{T_{\check\rho^\star}}{1 - \check\rho^\star + T_{\check\rho^\star}} = \rho^\star.

Then

N\, D_N(\hat\rho^\star) \xrightarrow{\text{a.s.}} D^\star, \qquad N\, D_N(\check\rho^\star) \xrightarrow{\text{a.s.}} D^\star.
Estimating \hat\rho^\star and \check\rho^\star
I The theorem is only useful if \hat\rho^\star and \check\rho^\star can be estimated!
I Careful control of the proofs provides many ways to estimate these.
I The proposition below provides one example:

\hat\rho_N = \frac{c_N}{\frac{1}{N} \operatorname{tr}\left[ \left( \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} \|x_i\|^2} \right)^2 \right] - 1}

\check\rho_N = \frac{c_N\, \frac{1}{N} \frac{1}{n} \sum_{i=1}^n \frac{x_i^* \check C_N^{-1}(\hat\rho_N) x_i}{\|x_i\|^2}}{\frac{1}{N} \frac{1}{n} \sum_{i=1}^n \frac{x_i^* \check C_N^{-1}(\hat\rho_N) x_i}{\|x_i\|^2} + \frac{1}{N} \operatorname{tr}\left[ \left( \frac{1}{n} \sum_{i=1}^n \frac{x_i x_i^*}{\frac{1}{N} \|x_i\|^2} \right)^2 \right] - 1}
Simulations

Figure: Performance of optimal shrinkage averaged over 10 000 Monte Carlo simulations, for N = 32, various values of n, [C_N]_{ij} = r^{|i-j|} with r = 0.7; \hat\rho_N as above; \rho_O the clairvoyant estimator proposed in (Chen11). Curves (normalized Frobenius norm): \inf_{\rho \in (0,1]} \{ D_N(\rho) \}, D_N(\hat\rho_N), D^\star, and D_N(\hat\rho_O).
Simulations

Figure: Shrinkage parameter averaged over 10 000 Monte Carlo simulations, for N = 32, various values of n, [C_N]_{ij} = r^{|i-j|} with r = 0.7; \hat\rho_N and \check\rho_N as above; \rho_O the clairvoyant estimator proposed in (Chen11); \hat\rho^\star = \operatorname{argmin}_{\rho \in (\max\{0, 1-c^{-1}\}, 1]} \{ D_N(\rho) \} and \check\rho^\star = \operatorname{argmin}_{\rho \in (0,1]} \{ \check D_N(\rho) \}.
Advanced Random Matrix Models for Robust Estimation/3.4 Optimal robust GLRT detectors 120/142
Context
I Hypothesis testing problem: two sets of data.
I Initial pure-noise data: x_1, \ldots, x_n, with x_i = \sqrt{\tau_i}\, C_N^{\frac{1}{2}} w_i as before.
I New incoming data y given by

y = \begin{cases} x, & H_0 \\ \alpha p + x, & H_1 \end{cases}

with x = \sqrt{\tau}\, C_N^{\frac{1}{2}} w, p \in \mathbb{C}^N deterministic and known, \alpha unknown.
I GLRT detection test:

T_N(\rho) \underset{H_0}{\overset{H_1}{\gtrless}} \Gamma, \qquad T_N(\rho) \triangleq \frac{|y^* \hat C_N^{-1}(\rho) p|}{\sqrt{y^* \hat C_N^{-1}(\rho) y} \sqrt{p^* \hat C_N^{-1}(\rho) p}}.

I Initial observations:
I As N, n \to \infty, N/n \to c > 0, under H_0, T_N(\rho) \xrightarrow{\text{a.s.}} 0. A trivial result of little interest!
I Natural question: for finite N, n and given \Gamma, find \rho such that P(T_N(\rho) > \Gamma) is minimized.
I It turns out the correct non-trivial object is, for \gamma > 0 fixed,

P\left( \sqrt{N}\, T_N(\rho) > \gamma \right) = \min.

I Objectives:
I for each \rho, develop a central limit theorem to evaluate

\lim_{N, n \to \infty,\ N/n \to c} P\left( \sqrt{N}\, T_N(\rho) > \gamma \right)
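The statistic itself is straightforward to compute once \hat C_N(\rho) is formed; a minimal sketch (Python/NumPy; by Cauchy–Schwarz the value always lies in [0, 1]):

```python
import numpy as np

def glrt_statistic(y, p, C_hat):
    """T_N(rho) = |y^* C^{-1} p| / sqrt((y^* C^{-1} y)(p^* C^{-1} p))."""
    Cinv = np.linalg.inv(C_hat)
    num = abs(y.conj() @ Cinv @ p)
    den = np.sqrt((y.conj() @ Cinv @ y).real * (p.conj() @ Cinv @ p).real)
    return float(num / den)
```

Here `C_hat` would be the regularized robust scatter estimate built from the pure-noise samples.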
What do we need?
Main results
where \rho \mapsto \underline\rho is the aforementioned mapping and

\sigma_N^2(\underline\rho) \triangleq \frac{1}{2}\, \frac{p^* C_N Q_N^2(\underline\rho) p}{p^* Q_N(\underline\rho) p} \cdot \frac{(1-\underline\rho)^2\, m(-\underline\rho)^2\, \frac{1}{N} \operatorname{tr}\left( C_N^2 Q_N^2(\underline\rho) \right)}{\frac{1}{N} \operatorname{tr}\left( C_N Q_N(\underline\rho) \right) \left( 1 - c\, (1-\underline\rho)^2\, m(-\underline\rho)^2\, \frac{1}{N} \operatorname{tr}\left( C_N^2 Q_N^2(\underline\rho) \right) \right)}

with Q_N(\underline\rho) \triangleq \left( I_N + (1-\underline\rho)\, m(-\underline\rho)\, C_N \right)^{-1}.
Simulation

Figure: Histogram (left, density) and distribution function (right) of \sqrt{N}\, T_N(\rho) versus R_N(\underline\rho), N = 20, p = N^{-\frac{1}{2}} [1, \ldots, 1]^T, C_N Toeplitz from AR of order 0.7, c_N = 1/2, \rho = 0.2.
Simulation

Figure: Histogram (left, density) and distribution function (right) of \sqrt{N}\, T_N(\rho) versus R_N(\underline\rho), N = 100, p = N^{-\frac{1}{2}} [1, \ldots, 1]^T, C_N Toeplitz from AR of order 0.7, c_N = 1/2, \rho = 0.2.
\hat\sigma_N^2(\rho) \triangleq \frac{1}{2} \cdot \frac{\frac{p^* \hat C_N^{-1}(\rho) p}{\frac{1}{N} \operatorname{tr} \hat C_N^{-1}(\rho)} \left( 1 - \frac{1}{N} \operatorname{tr} \hat C_N(\rho)\, \frac{1}{N} \operatorname{tr} \hat C_N^{-1}(\rho) \right)^2}{\left( 1 - c + c\, \frac{1}{N} \operatorname{tr} \hat C_N(\rho)\, \frac{1}{N} \operatorname{tr} \hat C_N^{-1}(\rho) \right) \left( 1 - \frac{1}{N} \operatorname{tr} \hat C_N(\rho)\, \frac{1}{N} \operatorname{tr} \hat C_N^{-1}(\rho) \right)}.

Also let \hat\sigma_N^2(1) \triangleq \lim_{\rho \uparrow 1} \hat\sigma_N^2(\rho). Then

\sup_{\rho \in R_\varepsilon} \left| \hat\sigma_N^2(\rho) - \sigma_N^2(\underline\rho) \right| \xrightarrow{\text{a.s.}} 0.
Final result
Simulations

Figure: False alarm rate P(\sqrt{N}\, T_N(\rho) > \gamma) for \gamma = 2 and \gamma = 3, N = 20, p = N^{-\frac{1}{2}} [1, \ldots, 1]^T, C_N Toeplitz from AR of order 0.7, c_N = 1/2. Curves: limiting theory, empirical estimator, detector.
Simulations

Figure: False alarm rate P(\sqrt{N}\, T_N(\rho) > \gamma) for \gamma = 2 and \gamma = 3, N = 100, p = N^{-\frac{1}{2}} [1, \ldots, 1]^T, C_N Toeplitz from AR of order 0.7, c_N = 1/2.
Simulations

Figure: False alarm rate P(T_N(\rho) > \Gamma) for N = 20 and N = 100, p = N^{-\frac{1}{2}} [1, \ldots, 1]^T, [C_N]_{ij} = 0.7^{|i-j|}, c_N = 1/2. Curves: limiting theory, detector.
Future Directions/4.1 Kernel matrices and kernel methods 134/142
N. El Karoui. The spectrum of kernel random matrices. The Annals of Statistics, 38(1):150, 2010.
(RatioCut) \min_{\substack{S_1 \cup \cdots \cup S_k = S \\ i \neq j \Rightarrow S_i \cap S_j = \emptyset}} \sum_{i=1}^k \sum_{\substack{j \in S_i \\ j' \in S_i^c}} \frac{f(x_j, x_{j'})}{|S_i|}

where \mathcal{M} = \{ M = [m_{ij}]_{1 \le i \le n,\, 1 \le j \le k},\ m_{ij} = |S_j|^{-\frac{1}{2}} \mathbb{1}_{x_i \in S_j} \} and

L = [L_{ij}]_{1 \le i,j \le n} = [-W + \operatorname{diag}(W \mathbb{1})]_{1 \le i,j \le n} = \left[ -f(x_i, x_j) + \delta_{i,j} \sum_{l=1}^n f(x_i, x_l) \right]_{1 \le i,j \le n}.
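The matrix L above is the standard unnormalized graph Laplacian built from the kernel f; a small construction sketch (Python/NumPy, with a Gaussian kernel as an illustrative choice of f):

```python
import numpy as np

def graph_laplacian(X, f):
    """L = -W + diag(W 1), with W_ij = f(x_i, x_j)."""
    n = len(X)
    W = np.array([[f(X[i], X[j]) for j in range(n)] for i in range(n)])
    return np.diag(W.sum(axis=1)) - W

# illustrative kernel choice
gauss = lambda a, b: np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2))
```

By construction the rows of L sum to zero, and spectral clustering proceeds from the bottom eigenvectors of L.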
Objectives
A new random matrix model, which can nonetheless be analyzed with the usual tools.
Future Directions/4.2 Neural networks 140/142
Related bibliography
Our webpages:
I http://couillet.romain.perso.sfr.fr
I http://sri-uq.kaust.edu.sa/Pages/KammounAbla.aspx
Spread it!