Independent Component Analysis

Background paper:
http://www-stat.stanford.edu/~hastie/Papers/ica.pdf


ICA Problem

X = AS
where
• X is a random p-vector representing multivariate input
measurements.
• S is a latent source p-vector whose components are
independently distributed random variables.
• A is a p × p mixing matrix.
Given realizations x1 , x2 , . . . , xN of X, the goals of ICA are to
• Estimate A
• Estimate the source distributions Sj ∼ fSj , j = 1, . . . , p.
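
As a concrete illustration, a minimal R sketch of this data-generating model follows; the mixing matrix A and the two source densities are arbitrary choices for the example, not part of the ICA problem itself.

## Sketch: generate data from X = AS with independent, non-Gaussian sources.
set.seed(1)
N <- 1000
S <- cbind(runif(N, -sqrt(3), sqrt(3)),            # uniform source, variance 1
           sign(rnorm(N)) * rexp(N) / sqrt(2))     # double-exponential source, variance 1
A <- matrix(c(2, 1, 1, 1), 2, 2)                   # hypothetical mixing matrix
X <- S %*% t(A)                                    # rows are observations x_i = A s_i
## ICA: recover A (up to sign, scale and permutation) and the f_Sj from X alone.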

Cocktail Party Problem

In a room there are p independent sources of sound, and p microphones placed
around the room hear different mixtures.

[Figure: four panels of time series — Source Signals, Measured Signals, PCA Solution, ICA Solution.]

Here each x_{ij} = x_j(t_i), and the recovered sources are time series sampled
uniformly at times t_i.

Independent vs Uncorrelated

WLOG we can assume that E(S) = 0 and Cov(S) = I, and hence

    Cov(X) = Cov(AS) = A A^T.

Suppose X = AS with S unit-variance and uncorrelated. Let R be any orthogonal
p × p matrix. Then

    X = AS = (A R^T)(R S) = A* S*

and Cov(S*) = I.

It is not enough to find uncorrelated variables, as they are not unique under
rotations. Hence methods based on second-order moments, like principal
components and Gaussian factor analysis, cannot recover A. ICA uses
independence, and the non-Gaussianity of S (e.g. via higher-order moments),
to recover A.
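
A quick numerical check of the rotation argument above, in R (a sketch with a pair of hypothetical uniform sources): rotating unit-variance, uncorrelated sources by any orthogonal R leaves them uncorrelated and reproduces X exactly.

## Sketch: uncorrelatedness survives any rotation, so X = AS = A*S*.
set.seed(2)
N <- 2000
S <- matrix(runif(N * 2, -sqrt(3), sqrt(3)), N, 2)   # independent sources, Cov(S) ~ I
A <- matrix(c(2, 1, 1, 1), 2, 2)
theta <- pi / 6
R <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), 2, 2)        # an orthogonal rotation
S.star <- S %*% t(R)                                 # S* = RS (applied rowwise)
A.star <- A %*% t(R)                                 # A* = A R^T
round(cov(S.star), 2)                                # ~ identity: still uncorrelated
max(abs(S %*% t(A) - S.star %*% t(A.star)))          # ~ 0: same X either way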

Independent vs Uncorrelated Demo

[Figure: scatterplots of the Source S, the Data X, the PCA Solution, and the ICA Solution.]

Principal components are uncorrelated linear combinations of X, chosen to
successively maximize variance.

Independent components are also uncorrelated linear combinations of X, chosen
to be as independent as possible.

Example: ICA representation of Natural Images

Pixel blocks are treated as vectors, and then the collection of such
vectors for an image forms an image database. ICA can lead to a
sparse coding for the image, using a natural basis.

See http://www.cis.hut.fi/projects/ica/imageica/ (Patrik Hoyer and Aapo
Hyvärinen, Helsinki University of Technology).

Example: ICA and EEG data

See http://www.cnl.salk.edu/~tewon/ica_cnl.html (Scott Makeig, Salk Institute).

Approaches to ICA

The ICA literature is HUGE. The recent book by Hyvärinen, Karhunen & Oja
(Wiley, 2001) is a great source for learning about ICA, and for some good
computational tricks.
• Mutual information and entropy, maximizing non-Gaussianity: FastICA
  (HKO 2001), Infomax (Bell and Sejnowski, 1995)
• Likelihood methods: ProDenICA (Hastie and Tibshirani), covered later
• Nonlinear decorrelation: Y1 and Y2 are independent iff
  max_{f,g} Corr[f(Y1), g(Y2)] = 0 (Hérault-Jutten, 1984);
  KernelICA (Bach and Jordan, 2001)
• Tensorial moment methods

Simplification

Let Σ = Cov(X) and suppose X = AS, with B = A^{-1}. Then we can write
B = W Σ^{-1/2} for some nonsingular W, and

    S = BX = W Σ^{-1/2} X,  with Cov(S) = I and W orthonormal.

So operationally we sphere the data, X̃ = Σ^{-1/2} X, and then seek an
orthogonal matrix W so that the components of S = W X̃ are independent.
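
A small R sketch of the sphering step, using an eigendecomposition of the sample covariance (any symmetric square root of the inverse covariance would do):

## Sphere ("whiten") the data: X.tilde = Sigma^{-1/2} X (a sketch).
sphere <- function(X) {
  X <- scale(X, center = TRUE, scale = FALSE)      # remove the mean
  Sigma <- cov(X)
  e <- eigen(Sigma, symmetric = TRUE)
  Sig.inv.sqrt <- e$vectors %*% diag(1 / sqrt(e$values)) %*% t(e$vectors)
  X %*% Sig.inv.sqrt                               # rows now have covariance ~ I
}
## After sphering, ICA only needs an orthogonal W with S = W X.tilde:
## X.tilde <- sphere(X); round(cov(X.tilde), 2)    # ~ identity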


Entropy and Mutual Information

Entropy: H(Y) = −∫ f(y) log f(y) dy, which for fixed variance is maximized by
f(y) = φ(y), the Gaussian density.

Mutual information: I(Y) = ∑_{j=1}^p H(Y_j) − H(Y)
• Y is a random vector with joint density f(y) and entropy H(Y).
• H(Y_j) is the (marginal) entropy of component Y_j, with marginal density f_j(y_j).
• I(Y) is the Kullback-Leibler divergence between f(y) and its independence
  version ∏_{j=1}^p f_j(y_j) (which is the KL-closest of all independence
  densities to f(y)).
• Hence I(Y) is a measure of dependence between the components of the random
  vector Y.

Entropy, Mutual Information, and ICA

If Cov(X) = I and W is orthogonal, then simple calculations show

    I(WX) = ∑_{j=1}^p H(w_j^T X) − H(X).

Hence

    min_W I(WX)  ⟺  min_W { dependence between the w_j^T X }
                 ⟺  min_W { the sum of the entropies of the w_j^T X }
                 ⟺  max_W { departures from Gaussianity of the w_j^T X }

• Many methods for ICA look for low-entropy or non-Gaussian projections
  (Hyvärinen, Karhunen & Oja, 2001).
• There are strong similarities with projection pursuit (Friedman and Tukey, 1974).

Negentropy and FastICA

Negentropy: J(Y_j) = H(Z_j) − H(Y_j), where Z_j is a Gaussian RV with the same
variance as Y_j. It measures the departure from Gaussianity.

FastICA uses simple approximations to negentropy,

    J(w_j^T X) ≈ [ E G(w_j^T X) − E G(Z_j) ]^2,

with the expectations replaced by sample averages when working with data.
They use

    G(y) = (1/a) log cosh(a y),   1 ≤ a ≤ 2.
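
For concreteness, here is a bare-bones R sketch of the one-unit FastICA fixed-point update on sphered data X̃ (rows are observations), taking a = 1 so that g(y) = G′(y) = tanh(y). It only illustrates the update, not the full algorithm with symmetric orthogonalization; the CRAN package fastICA provides a complete implementation.

## One-unit FastICA fixed-point step (sketch), with a = 1 so g = tanh.
fastica_one_unit <- function(X.tilde, maxit = 200, tol = 1e-8) {
  p <- ncol(X.tilde)
  w <- rnorm(p); w <- w / sqrt(sum(w^2))            # random unit-length start
  for (it in seq_len(maxit)) {
    wx  <- drop(X.tilde %*% w)                      # projections w^T x_i
    gwx <- tanh(wx)                                 # g  = G'
    g1  <- 1 - gwx^2                                # g' = G''
    w.new <- colMeans(X.tilde * gwx) - mean(g1) * w # E[x g(w'x)] - E[g'(w'x)] w
    w.new <- w.new / sqrt(sum(w.new^2))
    if (abs(abs(sum(w.new * w)) - 1) < tol) { w <- w.new; break }
    w <- w.new
  }
  w                                                 # one column of W
}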


ICA Density Model

Under the ICA model, the density of S is

    f_S(s) = ∏_{j=1}^p f_j(s_j),

with each f_j a univariate density with mean 0 and variance 1.

Next we develop two direct ways of fitting this density model.


Discrimination from Gaussian

• “Unsupervised vs supervised” idea.
• Consider the case of p = 2 variables. Idea: the observed data is assigned to
  class G = 1, and a background sample is generated from a spherical Gaussian
  g_0(x) and assigned to class G = 0.
• We fit to this data a projection pursuit model of the form

    log [ Pr(G = 1) / (1 − Pr(G = 1)) ] = f_1(a_1^T X) + f_2(a_2^T X)      (1)

  where [a_1, a_2] is an orthonormal basis.
• This implies the data density model

    g(X) = g_0(X) · exp{f_1(a_1^T X)} · exp{f_2(a_2^T X)}.

  One can also show that g_0(X) factors into a product h_1(a_1^T X) · h_2(a_2^T X),
  since [a_1, a_2] is an orthonormal basis.

• Hence model (1) is a product density for the data. It also looks like a
  neural network.
• It can be fit by (a) sphering (“whitening”) the data and then (b) applying
  the ppr function in R, combined with IRLS (iteratively reweighted least
  squares); a rough sketch follows.
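
A rough R sketch of this recipe (hypothetical helper code, not the authors' implementation): append a spherical-Gaussian background sample to the sphered data, then iterate weighted ppr fits on the IRLS working response for model (1). The directions found by ppr are not constrained to be orthonormal here; the sketch only shows the mechanics.

## Sketch: fit the two-term projection pursuit logistic model (1) by IRLS + ppr.
fit_pp_logistic <- function(X.tilde, n.terms = 2, n.iter = 10) {
  N  <- nrow(X.tilde)
  p  <- ncol(X.tilde)
  X0 <- matrix(rnorm(N * p), N, p)       # background sample from a spherical Gaussian
  XX <- rbind(X.tilde, X0)
  y  <- c(rep(1, N), rep(0, N))          # G = 1 for the data, G = 0 for the background
  eta <- rep(0, 2 * N)                   # current fit of f1(a1'x) + f2(a2'x)
  for (it in seq_len(n.iter)) {
    prob <- 1 / (1 + exp(-eta))
    w    <- prob * (1 - prob)            # IRLS weights
    z    <- eta + (y - prob) / w         # IRLS working response
    fit  <- ppr(XX, z, weights = w, nterms = n.terms)
    eta  <- drop(fitted(fit))
  }
  fit                                    # fit$alpha holds the directions a1, a2
}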


Projection Pursuit Solutions


ICA Density Model: direct fitting

The density of X = AS is

    f_X(x) = |det B| ∏_{j=1}^p f_j(b_j^T x),

with B = A^{-1}.

The log-likelihood, given x_1, x_2, . . . , x_N, is

    ℓ(B, {f_j}_1^p) = ∑_{i=1}^N [ log |det B| + ∑_{j=1}^p log f_j(b_j^T x_i) ].

ICA Density Model: fitting, continued

• As before, let Σ = Cov(X); then for any B we can write B = W Σ^{-1/2} for
  some nonsingular W. Then S = BX and Cov(S) = I imply that W is orthonormal.
  So we sphere the data, X̃ = Σ^{-1/2} X, and seek an orthonormal W.
• Let

    f_j(s_j) = φ(s_j) e^{g_j(s_j)},

  a tilted Gaussian density. Here φ is the standard Gaussian density, and g_j
  satisfies the normalization conditions. Then

    ℓ(W, Σ, {g_j}_1^p) = −(N/2) log |Σ| + ∑_{i=1}^N ∑_{j=1}^p [ log φ(w_j^T x̃_i) + g_j(w_j^T x̃_i) ].

• We estimate Σ̂ by the sample covariance of the x_i, and ignore it thereafter;
  this is pre-whitening in the ICA literature.

ProDenICA algorithm

Maximize the log-likelihood on the previous slide over W and g_j(·),
j = 1, 2, . . . , p, with smoothness constraints on the g_j(·).
Details on the next three slides.


Restrictions on gj

Our model is still over-parameterized, so we maximize instead a penalized
log-likelihood

    ∑_{j=1}^p [ (1/N) ∑_{i=1}^N { log φ(w_j^T x̃_i) + g_j(w_j^T x̃_i) } − λ_j ∫ {g_j'''(t)}^2 dt ]

w.r.t. W^T = (w_1, w_2, . . . , w_p) and g_j, j = 1, . . . , p, subject to
• W^T W = I
• each f_j(s) = φ(s) e^{g_j(s)} is a density, with mean 0 and variance 1.
The solutions ĝ_j are piecewise quartic splines.

ProDenICA: Product Density ICA algorithm

1. Initialize W (a random Gaussian matrix followed by orthogonalization).
2. Alternate until convergence of W (measured in the Amari metric):
   (a) Given W, optimize the penalized log-likelihood w.r.t. g_j (separately
       for each j), using the penalized density estimation algorithm.
   (b) Given g_j, j = 1, . . . , p, perform one step of a fixed-point
       algorithm towards finding the optimal W.


Penalized density estimation for step (a)

• Discretize the data into bins.
• Fit a generalized additive model with the Poisson family (see text,
  Chapter 9); a sketch follows.
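
A rough R sketch of this for a single component: bin the current projections s_i = w_j^T x̃_i, and fit a Poisson regression of the bin counts on a smooth function of the bin centres, with log φ as an offset, so that the fitted smooth plays the role of ĝ_j. A natural-spline basis (splines::ns) stands in here for the penalized smoothing-spline fit of the previous slides.

## Sketch: tilted-Gaussian density estimate for one source via a Poisson fit.
library(splines)

fit_tilt <- function(s, n.bins = 100, df = 6) {
  br   <- seq(min(s) - 0.1, max(s) + 0.1, length.out = n.bins + 1)
  mids <- (br[-1] + br[-length(br)]) / 2
  y    <- as.vector(table(cut(s, br)))             # bin counts
  off  <- dnorm(mids, log = TRUE)                  # log phi(s), the Gaussian baseline
  fit  <- glm(y ~ ns(mids, df = df) + offset(off), family = poisson)
  list(mids = mids,
       g.hat = predict(fit, type = "link") - off)  # estimated tilt g_j (up to a constant)
}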


Simulations
[Figure: the 18 source densities, labeled (a) through (r), taken from Bach and
Jordan (2001).]

• Each distribution was used to generate 2-dimensional S and X (with a random A);
  30 replications each.
• 300 replications with 4-dimensional S, with the distributions of the S_j
  picked at random.

We compared FastICA (a homegrown Splus version), KernelICA (Francis Bach's
Matlab version), and ProDenICA (Splus).

Amari Metric

We evaluate solutions by comparing their estimated W with the true W_0, using
the Amari metric (HKO 2001; Bach and Jordan 2001):

    d(W_0, W) = (1/(2p)) ∑_{i=1}^p ( ∑_{j=1}^p |r_ij| / max_j |r_ij| − 1 )
              + (1/(2p)) ∑_{j=1}^p ( ∑_{i=1}^p |r_ij| / max_i |r_ij| − 1 )

where r_ij = (W_0 W^{-1})_ij.
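
A direct R transcription of the metric (a sketch, assuming W0 and W are invertible p × p matrices):

## Amari distance between an estimated W and the truth W0.
amari <- function(W0, W) {
  p <- nrow(W0)
  R <- abs(W0 %*% solve(W))                 # |r_ij|, with r = W0 W^{-1}
  sum(rowSums(R) / apply(R, 1, max) - 1) / (2 * p) +
  sum(colSums(R) / apply(R, 2, max) - 1) / (2 * p)
}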


2-Dim examples

[Figure: Amari distance from the true W (log scale) for FastICA, KernelICA,
and ProDenICA, for each of the distributions (a) through (r).]

4-dim examples

[Figure: boxplots of the Amari distance from the true W for the 4-dimensional
simulations. The values printed above the boxes are 0.25 (0.014) for FastICA,
0.08 (0.004) for KernelICA, and 0.14 (0.011) for ProDenICA.]

[Figure: scatterplot matrix comparing the FastICA, KernelICA, and ProDenICA
solutions over the simulation replications.]
