
Chapter 5: Linear Models

1 Relationship among variables

Linear regression model:

    Y = X_1 β_1 + ... + X_p β_p + ε

Y    dependent variable
X    independent/explanatory variables
ε    error term


2 Least Squares

2.1 Ordinary Least Squares (OLS)


    y_i = Σ_j β_j x_ij + ε_i,   i.e.   y = Xβ + ε

Least squares solves, for k = 1, ..., p,

    Σ_i 2(y_i - Σ_j β_j x_ij)(-x_ik) = 0

or in matrix form

    2X′Xβ - 2X′y = 0

so that β̂_OLS = (X′X)⁻¹X′y, which is possible if X has f.c.r. (full column
rank), so that X′X is nonsingular.

Simple Rule: If Σ a_i = 0, then for any c we have Σ w_i a_i = Σ (w_i - c) a_i.
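A minimal numerical sketch of this closed form (simulated data and
illustrative variable names, not from the text; numpy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # f.c.r.
    beta = np.array([1.0, 2.0, -0.5])
    y = X @ beta + rng.normal(size=n)             # y = X beta + eps

    # Normal equations X'X beta = X'y; solve() avoids forming (X'X)^{-1}
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print(beta_hat)                               # close to (1, 2, -0.5)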

2.2 Projections

    P_X = X(X′X)⁻¹X′

The OLS solution decomposes y into two orthogonal vectors:

    y = P_X y + (I_n - P_X) y = Xβ̂ + ε̂

where

    X′ε̂ = 0
    ||y||² = ||Xβ̂||² + ||ε̂||²
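A quick numerical check of this orthogonal decomposition (simulated data,
illustrative only):

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 20, 3
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)

    # P_X = X (X'X)^{-1} X', the projection ("hat") matrix
    P = X @ np.linalg.solve(X.T @ X, X.T)
    fitted = P @ y                       # X beta_hat
    resid = y - fitted                   # (I_n - P_X) y = eps_hat

    print(np.allclose(X.T @ resid, 0))                          # X' eps_hat = 0
    print(np.isclose(y @ y, fitted @ fitted + resid @ resid))   # Pythagoras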

2.3 Simple Linear Regression

    X = (ι_n, x),   y = ι_n β_1 + x β_2 + ε

We have the following definitions of the sample mean, variance, covariance
and correlation:

    x̄ = (1/n) Σ x_i
    s_x² = (1/(n-1)) Σ (x_i - x̄)²
    s_xy = (1/(n-1)) Σ (x_i - x̄)(y_i - ȳ)
    r = s_xy / (s_x s_y)

Hence

    β̂_OLS = ( ȳ - (s_xy/s_x²) x̄ ,  s_xy/s_x² )′
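The same slope and intercept follow from the sample moments; a small sketch
with simulated data (names illustrative, not from the text):

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.normal(size=100)
    y = 1.0 + 2.0 * x + rng.normal(size=100)

    s_xy = np.cov(x, y, ddof=1)[0, 1]    # sample covariance
    s_x2 = np.var(x, ddof=1)             # sample variance
    b2 = s_xy / s_x2                     # slope
    b1 = y.mean() - b2 * x.mean()        # intercept
    print(b1, b2)                        # close to (1, 2)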
3 The classical stochastic linear model

3.1 The model

    y = Xβ + ε

where

(i)   X : n × p, f.c.r., nonrandom
(ii)  E(ε) = 0
(iii) Var(ε) = σ²I_n

If X is random we could instead state (when X contains errors this doesn't
hold):

(ii′)  E(ε|X) = 0
(iii′) E(εε′|X) = σ²I_n

3.2 Properties of the OLS estimator

    E(y) = Xβ
    Var(y) = σ²I_n
    E(β̂) = β    (unbiased!)
    Var(β̂) = σ²(X′X)⁻¹

Gauss-Markov Theorem: Under assumptions (i)-(iii), the OLS estimator is BLUE.

3.3 Estimation of σ²

Unbiased estimators:

    s² = ε̂′ε̂ / (n - p)
    V̂ar(β̂) = s²(X′X)⁻¹,   with E{V̂ar(β̂)} = Var(β̂)
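A sketch of these estimators on simulated data (illustrative only):

    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y
    resid = y - X @ beta_hat

    s2 = resid @ resid / (n - p)         # unbiased estimator of sigma^2
    var_hat = s2 * XtX_inv               # estimated Var(beta_hat)
    se = np.sqrt(np.diag(var_hat))       # standard errors of beta_hat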

4 Restricted least squares

Consider q linear constraints given by Hβ = 0; the RLS solution β̂_RLS (in
the book β̂⁰) minimizes (y - Xβ)′(y - Xβ) subject to these constraints. This
gives

    β̂_RLS = β̂_OLS - (X′X)⁻¹H′{H(X′X)⁻¹H′}⁻¹ H β̂_OLS

Note that if the constraints are Hβ = h, then

    β̂_RLS = β̂_OLS - (X′X)⁻¹H′{H(X′X)⁻¹H′}⁻¹ (H β̂_OLS - h)
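A sketch of the RLS correction on simulated data; the constraint H used here
is a hypothetical example, not from the text:

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 40, 3
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)

    H = np.array([[1.0, -1.0, 0.0]])   # one constraint: beta_1 = beta_2
    h = np.array([0.0])

    XtX_inv = np.linalg.inv(X.T @ X)
    b_ols = XtX_inv @ X.T @ y
    M = H @ XtX_inv @ H.T
    b_rls = b_ols - XtX_inv @ H.T @ np.linalg.solve(M, H @ b_ols - h)

    print(H @ b_rls)                   # ~ [0]: the constraint holds exactly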
5 Normal Theory

5.1 The Model

    y = Xβ + ε

where

(i)  X : n × p, f.c.r., nonrandom
(iv) ε ~ N(0, σ²I_n)

Then y ~ N(Xβ, σ²I_n).

5.2 ML Estimators

    β̂_ML = β̂_OLS
    σ̂²_ML = ε̂′ε̂ / n    (consistent!)
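For comparison, a small sketch (simulated data, illustrative only) of the ML
variance estimator next to the unbiased s² from 3.3:

    import numpy as np

    rng = np.random.default_rng(5)
    n, p = 30, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # = ML = OLS estimate
    resid = y - X @ beta_hat
    sigma2_ml = resid @ resid / n        # ML estimator: biased but consistent
    s2 = resid @ resid / (n - p)         # unbiased estimator from 3.3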
5.3 A test statistic

The decomposition

    y′y = y′P_X y + y′(I_n - P_X)y

shows us

    SS_Tot  = y′y,              SS_Tot/σ²  ~ χ²_n(δ)
    SS_Regr = y′P_X y,          SS_Regr/σ² ~ χ²_p(δ)
    SS_Res  = y′(I_n - P_X)y,   SS_Res/σ²  ~ χ²_{n-p}

where δ = β′X′Xβ/σ². Hence

    E(MS_Tot)  = σ²(n + δ)/n
    E(MS_Regr) = σ²(p + δ)/p
    E(MS_Res)  = σ²(n - p)/(n - p) = σ²

We find we can test H_0 : β = 0 against H_1 : β ≠ 0 by the test statistic

    T = [y′P_X y / p] / [y′(I_n - P_X)y / (n - p)]
      = MS_Regr / MS_Res
      ~ F_{p,n-p}
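A sketch of this overall F test on simulated data (scipy assumed for the F
distribution; names illustrative):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n, p = 40, 3
    X = rng.normal(size=(n, p))
    y = X @ np.array([0.8, 0.0, -0.3]) + rng.normal(size=n)

    P = X @ np.linalg.solve(X.T @ X, X.T)
    ss_regr = y @ (P @ y)
    ss_res = y @ y - ss_regr
    T = (ss_regr / p) / (ss_res / (n - p))    # MS_Regr / MS_Res
    p_value = stats.f.sf(T, p, n - p)         # P(F_{p,n-p} > T)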
6 Linear Hypotheses

6.1 An F test

    H_0 : Hβ = 0
    H_1 : Hβ ≠ 0

    T = [y′(P_1 - P_0)y / q] / [y′(I_n - P_1)y / (n - p)] ~ F_{q,n-p}

where

    P_1 = P_X
    P_0 = P_X - P_{X(X′X)⁻¹H′}

6.2 Alternative Approach

To test H_0 : Hβ = h against H_1 : Hβ ≠ h, we can use the following test
statistic

    T = [(Hβ̂_OLS - h)′{H(X′X)⁻¹H′}⁻¹(Hβ̂_OLS - h)/q] / s² ~ F_{q,n-p}

6.3 A t test

    H_0 : Hβ = h
    H_1 : Hβ > h    (one-sided, H a single row)

    T = (Hβ̂ - h) / √(s² H(X′X)⁻¹H′) ~ t_{n-p}

A two-sided test can be based on T² ~ F_{1,n-p}. For a single coefficient,

    H_0 : β_i = 0
    H_1 : β_i > 0

    t-value = β̂_i / √(s² e_i′(X′X)⁻¹e_i) ~ t_{n-p}

Here, the denominator is also called the standard error of β̂_i.

6.4 Confidence sets

    H_0 : β_i = β_i⁰
    H_1 : β_i ≠ β_i⁰

    T = (β̂_i - β_i⁰) / se(β̂_i) ~ t_{n-p},   T² ~ F_{1,n-p}

Inverting the test gives the confidence set

    C_{1-α} = {β_i : β̂_i - se(β̂_i) t ≤ β_i ≤ β̂_i + se(β̂_i) t}

where t is the critical value of t_{n-p}. In general, if T ~ F_{q,n-p} and
P(T > T_α) = α, we reject the null at significance level α if T > T_α.
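A sketch of the t test and the resulting confidence interval for a single
coefficient (simulated data; scipy assumed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n, p = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)

    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - p)
    se = np.sqrt(s2 * np.diag(XtX_inv))       # se(beta_hat_i)

    i = 2                                     # coefficient whose true value is 0
    t_val = b[i] / se[i]                      # t statistic for H0: beta_i = 0
    p_two_sided = 2 * stats.t.sf(abs(t_val), n - p)

    t_crit = stats.t.ppf(0.975, n - p)        # 95% two-sided critical value
    ci = (b[i] - se[i] * t_crit, b[i] + se[i] * t_crit)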

7 Some important applications
7.1 The regression F test
Sometimes we have y = ι β_1 + X_2 β_2 + ε; then the hypothesis of no
regression is Hβ = 0 with H = (0, I_{p-1}), and

    F = [β̂_2′ X_2′(I_n - P_ι) X_2 β̂_2 / (p - 1)] / [y′(I_n - P_X)y / (n - p)]
      = [(n - p)/(p - 1)] · [R² / (1 - R²)]

where

    R² = s_ŷ² / s_y²
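A sketch of the R² route to the regression F statistic (simulated data,
names illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    n, p = 60, 4                               # p counts the intercept column
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ np.array([1.0, 0.4, -0.2, 0.1]) + rng.normal(size=n)

    b = np.linalg.lstsq(X, y, rcond=None)[0]
    y_hat = X @ b
    r2 = np.var(y_hat, ddof=1) / np.var(y, ddof=1)   # R^2 = s_yhat^2 / s_y^2
    F = (n - p) / (p - 1) * r2 / (1 - r2)            # regression F statistic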

7.2 Two-sample t test

    y_1 ~ N(μ_1 ι_{n_1}, σ²I_{n_1})
    y_2 ~ N(μ_2 ι_{n_2}, σ²I_{n_2})

    H_0 : μ_1 = μ_2
    H_1 : μ_1 ≠ μ_2

Stacking the two samples,

    y = (y_1′, y_2′)′ = [ ι_{n_1}  0 ; 0  ι_{n_2} ] (μ_1, μ_2)′ + ε = Xβ + ε

so the hypothesis becomes

    H_0 : Hβ = (1, -1)(μ_1, μ_2)′ = μ_1 - μ_2 = 0
    H_1 : Hβ ≠ 0
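A sketch of this regression formulation, checked against the classical pooled
two-sample t test (simulated data; scipy assumed):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    n1, n2 = 15, 20
    y1 = rng.normal(loc=1.0, size=n1)
    y2 = rng.normal(loc=1.5, size=n2)

    # Regression form: one dummy column per group, beta = (mu1, mu2)'
    y = np.concatenate([y1, y2])
    X = np.zeros((n1 + n2, 2))
    X[:n1, 0] = 1.0
    X[n1:, 1] = 1.0

    b = np.linalg.lstsq(X, y, rcond=None)[0]          # the two group means
    resid = y - X @ b
    s2 = resid @ resid / (n1 + n2 - 2)
    H = np.array([1.0, -1.0])
    se = np.sqrt(s2 * H @ np.linalg.inv(X.T @ X) @ H)
    t_val = (b[0] - b[1]) / se                        # H0: mu1 = mu2

    print(t_val, stats.ttest_ind(y1, y2).statistic)   # the two statistics agree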

7.3 Analysis of variance


TO DO

8 Sum-of-squares decomposition
                              Total            Regression       Residual
Sum of Squares (SS)           y′y              y′P_X y          y′(I_n - P_X)y
Degrees of Freedom (df)       n                p                n - p
Mean Squares (MS)             y′y/n            y′P_X y/p        y′(I_n - P_X)y/(n - p)
Expected Mean Squares (EMS)   σ² + β′X′Xβ/n    σ² + β′X′Xβ/p    σ²

For the finer decomposition of Section 6,

    y = P_0 y + (P_1 - P_0)y + (I_n - P_1)y

        Total           P_0 part                  (P_1 - P_0) part            Residual
SS      ||y||²          ||P_0 y||²                ||(P_1 - P_0)y||²           ||(I_n - P_1)y||²
df      n               p - q                     q                           n - p
EMS     σ² + β′X′Xβ/n   σ² + β′X′P_0 Xβ/(p - q)   σ² + β′X′(P_1 - P_0)Xβ/q    σ²

with ||y||² = ||P_0 y||² + ||(P_1 - P_0)y||² + ||(I_n - P_1)y||² and
n = (p - q) + q + (n - p).
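A quick numerical check that the first decomposition adds up (simulated
data, illustrative only):

    import numpy as np

    rng = np.random.default_rng(10)
    n, p = 25, 3
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)

    P = X @ np.linalg.solve(X.T @ X, X.T)
    ss_tot = y @ y
    ss_regr = y @ (P @ y)
    ss_res = y @ ((np.eye(n) - P) @ y)
    print(np.isclose(ss_tot, ss_regr + ss_res))   # SS_Tot = SS_Regr + SS_Res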
