
General Linear Model & Classical Inference

London, SPM M/EEG course, May 2012

C. Phillips, Cyclotron Research Centre, ULg, Belgium
http://www.cyclotron.ulg.ac.be
Overview

Introduction
  ERP example
General Linear Model
  Definition & design matrix
  Parameter estimation & interpretation
  Contrast & inference
  Correlated regressors
Conclusion
Overview of SPM
Raw EEG/MEG data
  → Pre-processing: converting, filtering, resampling, re-referencing,
    epoching, artefact rejection, time-frequency transformation
  → Image conversion
  → Design matrix → General Linear Model → parameter estimates
  → Contrast, e.g. c = [-1 1] → Statistical Parametric Map (SPM)
  → Inference & correction for multiple comparisons
ERP example

Random presentation of faces and scrambled faces
70 trials of each type
128 EEG channels

Question: is there a difference between the ERP of faces and that of scrambled faces?
ERP example: channel B9
Focus on N170

Two-sample t-test:

$$ t = \frac{\bar{y}_f - \bar{y}_s}{\sqrt{\hat{\sigma}^2 \left( \frac{1}{n_f} + \frac{1}{n_s} \right)}} $$

compares the size of the effect to its error standard deviation.
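To make the formula concrete, here is a minimal sketch in Python (NumPy/SciPy); the amplitudes, their spread, and the effect size are invented for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_f = n_s = 70                          # 70 trials per condition, as in the example
faces = rng.normal(-4.0, 2.0, n_f)      # hypothetical N170 amplitudes (uV)
scrambled = rng.normal(-2.5, 2.0, n_s)

# Pooled variance estimate, then t = (mean_f - mean_s) / sqrt(s2 * (1/n_f + 1/n_s))
s2 = (np.sum((faces - faces.mean())**2) +
      np.sum((scrambled - scrambled.mean())**2)) / (n_f + n_s - 2)
t = (faces.mean() - scrambled.mean()) / np.sqrt(s2 * (1/n_f + 1/n_s))

print(t, stats.ttest_ind(faces, scrambled).statistic)  # the two agree
```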
Overview

Introduction
  ERP example
General Linear Model
  Definition & design matrix
  Parameter estimation & interpretation
  Contrast & inference
  Correlated regressors
Conclusion
Data modeling

Data = b1 × [faces regressor] + b2 × [scrambled regressor] + error

$$ Y = X_1 b_1 + X_2 b_2 + \varepsilon $$

Design matrix: collect the regressors as the columns of X and the parameters in a vector b:

$$ Y = X b + \varepsilon $$
General Linear Model

$$ Y = X b + \varepsilon $$

Y: N × 1 data vector (N: # trials)
X: N × p design matrix (p: # regressors)
b: p × 1 parameter vector
ε: N × 1 error vector

The GLM is defined by the design matrix X and the error distribution, e.g. $\varepsilon \sim N(0, \sigma^2 I)$.
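A minimal simulation sketch of these dimensions, reusing the two-condition faces/scrambled design (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 140, 2                        # N: # trials, p: # regressors
X = np.zeros((N, p))                 # design matrix, N x p
X[:70, 0] = 1                        # regressor 1: faces trials
X[70:, 1] = 1                        # regressor 2: scrambled trials
b = np.array([-4.0, -2.5])           # parameter vector, p x 1
eps = rng.normal(0.0, 2.0, N)        # error ~ N(0, sigma^2 I)
Y = X @ b + eps                      # data vector, N x 1
```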
General Linear Model

The design matrix embodies all available knowledge about experimentally controlled factors and potential confounds.

Applied to all channels & time points: mass-univariate parametric analysis (sketched below), which covers
one-sample t-test
two-sample t-test
paired t-test
Analysis of Variance (ANOVA)
factorial designs
correlation
linear regression
multiple regression
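Since the same GLM is fitted independently at every channel and time point, the whole analysis can be vectorised; a minimal sketch (array shapes and trial counts are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_chan, n_time = 140, 128, 200            # trials x channels x time points
Y = rng.normal(size=(N, n_chan, n_time))     # epoched data (placeholder)
X = np.zeros((N, 2))
X[:70, 0] = 1                                # faces
X[70:, 1] = 1                                # scrambled

# One OLS fit per (channel, time point), all at once:
pinvX = np.linalg.pinv(X)                    # p x N
B = np.einsum('pn,nct->pct', pinvX, Y)       # parameter estimates, p x channels x time
```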
Parameter estimation

$$ Y = X b + \varepsilon, \qquad \text{residuals: } \hat{\varepsilon} = Y - X \hat{b} $$

If i.i.d. error is assumed: $\varepsilon \sim N(0, \sigma^2 I)$.

Ordinary Least Squares parameter estimate:

$$ \hat{b} = (X^T X)^{-1} X^T Y $$

chosen such that $\sum_{i=1}^{N} \hat{\varepsilon}_i^2$ is minimal.
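In code, the OLS estimate looks as follows (a sketch with the same simulated design as above; np.linalg.pinv is used so the expression also covers rank-deficient designs):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 140
X = np.zeros((N, 2))
X[:70, 0] = 1
X[70:, 1] = 1
Y = X @ np.array([-4.0, -2.5]) + rng.normal(0.0, 2.0, N)

b_hat = np.linalg.pinv(X.T @ X) @ X.T @ Y      # (X'X)^+ X'Y
residuals = Y - X @ b_hat                      # eps_hat = Y - X b_hat
df = N - np.linalg.matrix_rank(X)
sigma2_hat = residuals @ residuals / df        # error variance estimate
print(b_hat)                                   # close to the true [-4.0, -2.5]
```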
Hypothesis Testing

The Null Hypothesis H0: typically what we want to disprove (i.e. no effect).
The Alternative Hypothesis HA: the outcome of interest.
Contrast & t-test

A contrast specifies a linear combination of the parameter vector: $c^T b$.

ERP: faces < scrambled? c = [-1 +1], giving an SPM-t over time & space.

$\hat{b}_1 < \hat{b}_2$? ($\hat{b}_i$: estimate of $b_i$)
⇔ $-1 \times \hat{b}_1 + 1 \times \hat{b}_2 > 0$?
⇔ test $H_0$: $c^T b = 0$ against $H_A$: $c^T b > 0$

$$ T = \frac{c^T \hat{b}}{\sqrt{\hat{\sigma}^2 \, c^T (X^T X)^{+} c}} = \frac{\text{contrast of estimated parameters}}{\text{variance estimate}} $$
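A sketch putting the pieces together for the contrast c = [-1 1] (same simulated data as before; scipy.stats supplies the null distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N = 140
X = np.zeros((N, 2))
X[:70, 0] = 1                                  # faces
X[70:, 1] = 1                                  # scrambled
Y = X @ np.array([-4.0, -2.5]) + rng.normal(0.0, 2.0, N)

b_hat = np.linalg.pinv(X.T @ X) @ X.T @ Y
res = Y - X @ b_hat
df = N - np.linalg.matrix_rank(X)
sigma2 = res @ res / df

c = np.array([-1.0, 1.0])                      # faces < scrambled ?
T = c @ b_hat / np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
p = stats.t.sf(T, df)                          # one-sided p-value
print(T, p)
```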
Hypothesis Testing

The Test Statistic T:
summarises the evidence about H0,
is (typically) small in magnitude when H0 is true and large when it is false,
and we need to know its distribution under the null hypothesis (the null distribution of T).
Hypothesis Testing

Significance level α: the acceptable false positive rate.
The threshold u controls the false positive rate:

$$ \alpha = p(T > u \mid H_0) $$

Observation of the test statistic t, a realisation of T.
Conclusion about the hypothesis: reject H0 in favour of HA if t > u.

P-value: summarises the evidence against H0, i.e. the chance of observing a value more extreme than t under H0:

$$ p = p(T \geq t \mid H_0) $$
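A short sketch of the relation between α, the threshold u, and the p-value, assuming a Student-t null distribution (the degrees of freedom and observed t are made up):

```python
from scipy import stats

df = 138                        # degrees of freedom (hypothetical)
alpha = 0.05                    # acceptable false positive rate
u = stats.t.isf(alpha, df)      # threshold u: p(T > u | H0) = alpha
t_obs = 3.1                     # an observed realisation of T (made up)
p_val = stats.t.sf(t_obs, df)   # p = p(T >= t_obs | H0)
print(u, p_val, t_obs > u)      # reject H0 in favour of HA if t_obs > u
```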
Contrast & t-test, a few remarks

Contrasts = simple linear combinations of the betas.
The t-test is a signal-to-noise measure: the ratio of the estimate to the standard deviation of the estimate.
The t-statistic does NOT depend on the scaling of the regressors or of the contrast.
Unilateral (one-sided) test:

$$ H_0: c^T b = 0 \quad \text{vs.} \quad H_A: c^T b > 0 $$
Extra-sum-of-squares & F-test

Model comparison: full vs. reduced model?
Null hypothesis H0: the true model is X0 (the reduced model).

Test statistic: ratio of explained to unexplained variability (error):

$$ F = \frac{(RSS_0 - RSS)/\nu_1}{RSS/\nu_2} = \frac{ESS/\nu_1}{RSS/\nu_2} \sim F_{\nu_1, \nu_2} $$

with $\nu_1 = \text{rank}(X) - \text{rank}(X_0)$ and $\nu_2 = N - \text{rank}(X)$, where $RSS_0$ is the residual sum of squares of the reduced model X0 and $RSS$ that of the full model X = [X0 X1].

Full model? Or reduced model?
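A sketch of the extra-sum-of-squares F-test (the design matrix and true parameters are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
N = 100
X0 = np.ones((N, 1))                        # reduced model: intercept only
X1 = rng.normal(size=(N, 2))                # regressors of interest
X = np.column_stack([X0, X1])               # full model [X0 X1]
Y = X @ np.array([1.0, 0.5, 0.3]) + rng.normal(size=N)

def rss(M, y):
    """Residual sum of squares of the OLS fit of y on M."""
    r = y - M @ (np.linalg.pinv(M) @ y)
    return r @ r

nu1 = np.linalg.matrix_rank(X) - np.linalg.matrix_rank(X0)
nu2 = N - np.linalg.matrix_rank(X)
F = ((rss(X0, Y) - rss(X, Y)) / nu1) / (rss(X, Y) / nu2)
p = stats.f.sf(F, nu1, nu2)                 # p(F' >= F | H0)
print(F, p)
```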
F-test & multidimensional contrasts

Tests multiple linear hypotheses. H0: the true model is X0, e.g. $H_0: b_3 = b_4 = 0$, i.e. test $H_0: c^T b = 0$ with

$$ c^T = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} $$

Full model (X = [X0 X1], with X1 holding b3 and b4) or reduced model (X0)? (See the sketch below.)
Correlated and orthogonal regressors

$$ y = x_1 b_1 + x_2 b_2 + e \qquad \text{vs.} \qquad y = x_1 b_1 + x_2^* b_2^* + e $$

Correlated regressors: the explained variance is shared between the regressors, e.g. $\hat{b}_1 = \hat{b}_2 = 1$.

With x2 orthogonalized w.r.t. x1 ($x_2^*$): only the parameter estimate for x1 changes (it absorbs the shared variance), not that for x2: $\hat{b}_2^* = 1$.
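A numerical sketch of this effect, with made-up correlated regressors:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
x1 = rng.normal(size=N)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=N)       # x2 correlated with x1
y = x1 + x2 + rng.normal(scale=0.5, size=N)    # true b1 = b2 = 1

def fit(*cols):
    """OLS parameter estimates for the given regressors."""
    X = np.column_stack(cols)
    return np.linalg.pinv(X) @ y

x2_star = x2 - x1 * (x1 @ x2) / (x1 @ x1)      # x2 orthogonalized w.r.t. x1

print(fit(x1, x2))        # approx [1, 1]
print(fit(x1, x2_star))   # b1 absorbs the shared variance; b2* stays approx 1
```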
Inference & correlated regressors

Each regressor implicitly tests for an additional effect only, so be careful if there is correlation.
Orthogonalisation = decorrelation (not generally needed): the parameter and the test on the non-modified regressor change.

It is always simpler to have orthogonal regressors and therefore orthogonal designs.

Use F-tests in case of correlation, to see the overall significance. There is generally no way to decide to which regressor the shared part should be attributed.

The original regressors may not matter: it is the contrast you are testing which should be as decorrelated as possible from the rest of the design matrix.
Overview

Introduction
  ERP example
General Linear Model
  Definition & design matrix
  Parameter estimation & interpretation
  Contrast & inference
  Correlated regressors
Conclusion
Modelling?

Why? To make inferences about effects of interest.

How?
1. Decompose the data into effects and error.
2. Form a statistic using the estimates of the effects (of interest) and of the error.

Model? Use any available knowledge.

[Schematic: data → model → effects estimate and error estimate → statistic, via a contrast, e.g. c = [1 -1]]
Thank you for your attention!

Any questions?

Thanks to Klaas, Guillaume, Rik, Will, Stefan, Andrew & Karl for the borrowed slides!
