\[
\hat{\sigma}^2 = \frac{1}{n-p}\,(y - X\hat{\beta})^T (y - X\hat{\beta}).
\]
The above estimate of \(\beta\) is also the least-squares estimate.
The predicted value of \(y\) is given by
\[
\hat{y} = X\hat{\beta} = P_X y, \qquad \text{where } P_X = X(X^T X)^{-1} X^T.
\]
\(P_X\) is called the projector of \(X\): it projects any vector onto the space spanned by the columns of \(X\).
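As a quick numerical sanity check (a NumPy sketch with a made-up design matrix), \(P_X\) is symmetric and idempotent, and applying it to \(y\) recovers the fitted values \(X\hat{\beta}\):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))   # hypothetical design matrix, n = 20, p = 3
y = rng.normal(size=20)

# Projection matrix onto the column space of X
PX = X @ np.linalg.inv(X.T @ X) @ X.T

# P_X is symmetric and idempotent: P_X^2 = P_X
assert np.allclose(PX, PX.T)
assert np.allclose(PX @ PX, PX)

# Projecting y gives the fitted values X beta_hat
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(PX @ y, X @ beta_hat)
```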
The model residual is estimated as
\[
\hat{e} = y - X\hat{\beta},
\]
with sum of squares
\[
\hat{e}^T \hat{e} = (y - X\hat{\beta})^T (y - X\hat{\beta}) = y^T (I - P_X) y,
\]
so that
\[
\hat{\sigma}^2 = \frac{1}{n-p}\, y^T (I - P_X) y.
\]
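The two equivalent expressions for \(\hat{\sigma}^2\) can be checked numerically (a sketch; the data below are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 4
X = rng.normal(size=(n, p))                    # hypothetical design
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
e_hat = y - X @ beta_hat                       # residual vector e_hat
sigma2_hat = (e_hat @ e_hat) / (n - p)         # residual-based estimate

# Equivalent quadratic-form expression y^T (I - P_X) y / (n - p)
PX = X @ np.linalg.inv(X.T @ X) @ X.T
sigma2_quad = y @ (np.eye(n) - PX) @ y / (n - p)

assert np.isclose(sigma2_hat, sigma2_quad)
```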
For \(j = 1, \dots, M\), draw \(\sigma^{2(j)}\) from \(p(\sigma^2 \mid y)\):
\[
\sigma^{2(j)} \sim \mathrm{IG}\!\left(\frac{n-p}{2},\; \frac{(n-p)s^2}{2}\right), \qquad j = 1, \dots, M.
\]
For \(j = 1, \dots, M\), draw \(\beta^{(j)}\) from \(p(\beta \mid \sigma^{2(j)}, y)\):
\[
\beta^{(j)} \sim N\!\left((X^T X)^{-1} X^T y,\; \sigma^{2(j)} (X^T X)^{-1}\right).
\]
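The two-step sampler can be sketched in NumPy (a sketch with simulated data; the inverse-gamma draws are implemented by inverting gamma draws, using NumPy's shape/scale parameterization):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, M = 100, 3, 5000
X = rng.normal(size=(n, p))                    # simulated design
y = X @ np.array([1.0, 0.0, -1.0]) + rng.normal(size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)

# sigma^2(j) ~ IG((n-p)/2, (n-p)s^2/2): draw the precision from a
# Gamma((n-p)/2, rate=(n-p)s^2/2) and invert
sigma2 = 1.0 / rng.gamma(shape=(n - p) / 2, scale=2.0 / ((n - p) * s2), size=M)

# beta(j) | sigma^2(j), y ~ N(beta_hat, sigma^2(j) (X^T X)^{-1})
L = np.linalg.cholesky(XtX_inv)                # so L z ~ N(0, (X^T X)^{-1})
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.normal(size=(M, p)) @ L.T)

# The posterior mean of beta should be close to beta_hat
assert np.allclose(beta.mean(axis=0), beta_hat, atol=0.1)
```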
The marginal posterior density of \(\beta\) is multivariate \(t\):
\[
p(\beta \mid y) = \frac{\Gamma(n/2)}{(\pi(n-p))^{p/2}\,\Gamma((n-p)/2)\,\bigl|s^2 (X^T X)^{-1}\bigr|^{1/2}}
\left[ 1 + \frac{(\beta - \hat{\beta})^T (X^T X)(\beta - \hat{\beta})}{(n-p)s^2} \right]^{-n/2}.
\]
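As a sanity check, this density is exactly a multivariate \(t\) with \(n-p\) degrees of freedom, location \(\hat{\beta}\), and scale \(s^2 (X^T X)^{-1}\), which can be verified against SciPy (assumes SciPy \(\geq\) 1.6 for `scipy.stats.multivariate_t`; the data are simulated):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import multivariate_t

rng = np.random.default_rng(5)
n, p = 30, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ y)
s2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)
Sigma = s2 * np.linalg.inv(XtX)                # scale matrix s^2 (X'X)^{-1}

beta = np.array([0.8, 2.1])                    # an arbitrary evaluation point
d = beta - beta_hat
quad = d @ XtX @ d / ((n - p) * s2)
log_dens = (gammaln(n / 2) - gammaln((n - p) / 2)
            - (p / 2) * np.log(np.pi * (n - p))
            - 0.5 * np.linalg.slogdet(Sigma)[1]
            - (n / 2) * np.log1p(quad))

# Matches scipy's multivariate t with df = n - p
assert np.isclose(log_dens,
                  multivariate_t(loc=beta_hat, shape=Sigma, df=n - p).logpdf(beta))
```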
Suppose we have observed the new predictors \(\tilde{X}\), and we wish to predict
\[
\tilde{y} = \tilde{X}\beta + \tilde{e}, \qquad \tilde{e} \sim N(0, \sigma^2 I).
\]
Note \(p(\tilde{y} \mid y, \beta, \sigma^2) = p(\tilde{y} \mid \beta, \sigma^2) = N(\tilde{y} \mid \tilde{X}\beta, \sigma^2 I)\).
The posterior predictive distribution:
\[
p(\tilde{y} \mid y) = \int\!\!\int p(\tilde{y} \mid y, \beta, \sigma^2)\, p(\beta, \sigma^2 \mid y)\, d\beta\, d\sigma^2
= \int\!\!\int p(\tilde{y} \mid \beta, \sigma^2)\, p(\beta, \sigma^2 \mid y)\, d\beta\, d\sigma^2.
\]
By now we are comfortable evaluating such integrals:
First obtain: \((\beta^{(j)}, \sigma^{2(j)}) \sim p(\beta, \sigma^2 \mid y)\), \(j = 1, \dots, M\).
Next draw: \(\tilde{y}^{(j)} \sim N(\tilde{X}\beta^{(j)},\; \sigma^{2(j)} I)\).
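Putting the two steps together (a sketch with simulated data and a hypothetical \(\tilde{X}\), `X_new` below):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, M = 100, 2, 4000
X = rng.normal(size=(n, p))
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)
X_new = rng.normal(size=(5, p))      # hypothetical new predictors (X tilde)

# Step 1: (beta(j), sigma^2(j)) ~ p(beta, sigma^2 | y)
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)
sigma2 = 1.0 / rng.gamma((n - p) / 2, scale=2.0 / ((n - p) * s2), size=M)
L = np.linalg.cholesky(XtX_inv)
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.normal(size=(M, p)) @ L.T)

# Step 2: y_tilde(j) ~ N(X_new beta(j), sigma^2(j) I)
y_tilde = beta @ X_new.T + np.sqrt(sigma2)[:, None] * rng.normal(size=(M, 5))

# Predictive means track X_new @ beta_hat
assert np.allclose(y_tilde.mean(axis=0), X_new @ beta_hat, atol=0.2)
```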
Example: For the linear model, our parameters are \((\beta, \sigma^2)\). We write \(\theta = (\beta, \log(\sigma^2))\) and, at the \(j\)-th iteration, propose \(\theta^* \sim N(\theta^{(j-1)}, \Sigma)\). The log transformation on \(\sigma^2\) ensures that all components of \(\theta\) have support on the entire real line and can have meaningful proposed values from the multivariate normal.
But we need to transform our prior to \(p(\beta, \log(\sigma^2))\).
Let \(z = \log(\sigma^2)\) and assume \(p(\beta, z) = p(\beta)p(z)\). Let us derive \(p(z)\). REMEMBER: we need to adjust for the Jacobian. Then \(p(z) = p(\sigma^2)\,|d\sigma^2/dz| = p(e^z)e^z\). The Jacobian here is \(e^z = \sigma^2\).
With an \(\mathrm{IG}(a, b)\) prior on \(\sigma^2\), the log posterior of \((\beta, z)\) is, up to an additive constant and the log prior on \(\beta\),
\[
-(a + n/2 + 1)z + z - e^{-z}\left\{ b + \frac{1}{2}(Y - X\beta)^T (Y - X\beta) \right\}.
\]
If \(\log r \ge 0\) then set \(\theta^{(j)} = \theta^*\). If \(\log r < 0\) then draw \(u \sim U(0, 1)\). If \(u \le r\) (or \(\log u \le \log r\)) then \(\theta^{(j)} = \theta^*\). Otherwise, \(\theta^{(j)} = \theta^{(j-1)}\).
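A minimal sketch of the resulting Metropolis sampler, assuming a \(N(0, \tau^2 I)\) prior on \(\beta\), an \(\mathrm{IG}(a, b)\) prior on \(\sigma^2\), a spherical proposal covariance, and made-up data and hyperparameter values:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 60, 2
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -1.0]) + rng.normal(size=n)
a, b, tau2 = 2.0, 1.0, 100.0   # hypothetical IG(a, b) and N(0, tau2 I) hyperparameters

def log_post(theta):
    """Log posterior of theta = (beta, z), z = log(sigma^2), up to a constant."""
    beta, z = theta[:p], theta[-1]
    resid = y - X @ beta
    # -(a + n/2 + 1)z + z - e^{-z}{b + 0.5 resid'resid}, plus the N(0, tau2 I)
    # log prior on beta
    return (-(a + n / 2 + 1) * z + z
            - np.exp(-z) * (b + 0.5 * resid @ resid)
            - (beta @ beta) / (2 * tau2))

M = 5000
theta = np.zeros(p + 1)        # start at beta = 0, z = 0
step = 0.15                    # hypothetical spherical proposal scale
samples = np.empty((M, p + 1))
for j in range(M):
    prop = theta + step * rng.normal(size=p + 1)   # theta* ~ N(theta, step^2 I)
    log_r = log_post(prop) - log_post(theta)
    if np.log(rng.uniform()) <= log_r:             # accept with probability min(1, r)
        theta = prop
    samples[j] = theta

beta_mc = samples[2000:, :p].mean(axis=0)          # posterior mean after burn-in
assert np.allclose(beta_mc, [1.0, -1.0], atol=0.5)
```

Working on the log scale, \(\log u \le \log r\), avoids overflow when evaluating the ratio \(r\) directly.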