

Contents

6 Conjugate families
  6.1 Binomial - beta prior
    6.1.1 Uninformative priors
  6.2 Gaussian (unknown mean, known variance)
    6.2.1 Uninformative prior
  6.3 Gaussian (known mean, unknown variance)
    6.3.1 Uninformative prior
  6.4 Gaussian (unknown mean, unknown variance)
    6.4.1 Completing the square
    6.4.2 Marginal posterior distributions
    6.4.3 Uninformative priors
  6.5 Multivariate Gaussian (unknown mean, known variance)
    6.5.1 Completing the square
    6.5.2 Uninformative priors
  6.6 Multivariate Gaussian (unknown mean, unknown variance)
    6.6.1 Completing the square
    6.6.2 Inverted-Wishart kernel
    6.6.3 Marginal posterior distributions
    6.6.4 Uninformative priors
  6.7 Bayesian linear regression
    6.7.1 Known variance
    6.7.2 Unknown variance
    6.7.3 Uninformative priors
  6.8 Bayesian linear regression with general error structure
    6.8.1 Known variance
    6.8.2 Unknown variance
    6.8.3 (Nearly) uninformative priors
  6.9 Appendix: summary of conjugacy

6 Conjugate families

Conjugate families arise when the likelihood times the prior produces a
recognizable posterior kernel

$$p(\theta \mid y) \propto \ell(\theta \mid y)\, p(\theta)$$

where the kernel is the characteristic part of the distribution function that
depends on the random variable(s) (the part excluding any normalizing
constants). For example, the density function for a univariate Gaussian or
normal is

$$\frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{1}{2\sigma^2}(x-\mu)^2\right]$$

and its kernel (for $\sigma$ known) is

$$\exp\left[-\frac{1}{2\sigma^2}(x-\mu)^2\right]$$

as $\frac{1}{\sigma\sqrt{2\pi}}$ is a normalizing constant. Now, we discuss a few common conjugate
family results^1 and uninformative prior results to connect with classical results.

^1 A more complete set of conjugate families is summarized in chapter 7 of Accounting
and Causal Effects: Econometric Challenges as well as tabulated in an appendix at the
end of the chapter.

6.1 Binomial - beta prior

A binomial likelihood with unknown success probability, $\theta$,

$$\ell(\theta \mid s; n) = \binom{n}{s} \theta^s (1-\theta)^{n-s},
\qquad s = \sum_{i=1}^n y_i, \quad y_i \in \{0,1\}$$

combines with a beta$(\theta; a, b)$ prior (i.e., with parameters $a$ and $b$)

$$p(\theta) = \frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}\, \theta^{a-1} (1-\theta)^{b-1}$$

to yield

$$p(\theta \mid y) \propto \theta^s (1-\theta)^{n-s}\, \theta^{a-1} (1-\theta)^{b-1}
\propto \theta^{s+a-1} (1-\theta)^{n-s+b-1}$$

which is the kernel of a beta distribution with parameters $(a+s)$ and
$(b+n-s)$, beta$(\theta \mid y; a+s, b+n-s)$.

6.1.1 Uninformative priors

Suppose priors for $\theta$ are uniform over the interval zero to one or, equivalently,
beta$(1,1)$.^2 Then, the likelihood determines the posterior distribution for $\theta$,

$$p(\theta \mid y) \propto \theta^s (1-\theta)^{n-s}$$

which is beta$(\theta \mid y; 1+s, 1+n-s)$.

^2 Some would utilize Jeffreys' prior, $p(\theta) \propto \mathrm{beta}\left(\theta; \frac{1}{2}, \frac{1}{2}\right)$, which is invariant to
transformation, as the uninformative prior.
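To make the update concrete, here is a small numerical sketch in Python; the prior parameters and data counts below are illustrative values, not taken from the text.

```python
import numpy as np
from scipy import stats

# Illustrative (hypothetical) prior parameters and data counts.
a, b = 2.0, 2.0        # beta prior
n, s = 20, 13          # trials and successes

# Conjugate update: beta(a, b) prior x binomial(n, s) likelihood
# gives a beta(a + s, b + n - s) posterior.
posterior = stats.beta(a + s, b + n - s)

print("posterior mean:", posterior.mean())             # (a + s) / (a + b + n)
print("95% credible interval:", posterior.interval(0.95))
```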

6.2 Gaussian (unknown mean, known variance)

A single draw from a Gaussian likelihood with unknown mean, $\mu$, and known
standard deviation, $\sigma$,

$$\ell(\mu \mid y, \sigma) \propto \exp\left[-\frac{(y-\mu)^2}{2\sigma^2}\right]$$

combines with a Gaussian or normal prior for $\mu$ given $\sigma^2$ with prior mean
$\mu_0$ and prior variance $\tau_0^2$

$$p(\mu \mid \sigma^2; \mu_0, \tau_0^2) \propto \exp\left[-\frac{(\mu-\mu_0)^2}{2\tau_0^2}\right]$$

or, writing $\tau_0^2 \equiv \sigma^2/\kappa_0$, we have

$$p(\mu \mid \sigma^2; \mu_0, \sigma^2/\kappa_0) \propto \exp\left[-\frac{\kappa_0(\mu-\mu_0)^2}{2\sigma^2}\right]$$

to yield

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{(y-\mu)^2 + \kappa_0(\mu-\mu_0)^2\right\}\right]$$

Expansion and rearrangement gives

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{1}{2\sigma^2}\left(y^2 + \kappa_0\mu_0^2 - 2\mu(y + \kappa_0\mu_0) + \mu^2 + \kappa_0\mu^2\right)\right]$$

Any terms not involving $\mu$ are constants and can be discarded as they are
absorbed on normalization of the posterior

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{(\kappa_0+1)\mu^2 - 2\mu(\kappa_0\mu_0 + y)\right\}\right]$$

Completing the square (add and subtract $\frac{(\kappa_0\mu_0+y)^2}{\kappa_0+1}$), dropping the term
subtracted (as it's all constants), and factoring out $(\kappa_0+1)$ gives

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{\kappa_0+1}{2\sigma^2}\left(\mu - \frac{\kappa_0\mu_0+y}{\kappa_0+1}\right)^2\right]$$

Finally, we have

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{(\mu-\mu_1)^2}{2\tau_1^2}\right]$$

where $\mu_1 = \frac{\kappa_0\mu_0+y}{\kappa_0+1} = \frac{\frac{\mu_0}{\tau_0^2} + \frac{y}{\sigma^2}}{\frac{1}{\tau_0^2} + \frac{1}{\sigma^2}}$ and
$\tau_1^2 = \frac{\sigma^2}{\kappa_0+1} = \frac{1}{\frac{1}{\tau_0^2} + \frac{1}{\sigma^2}}$, or the posterior
distribution of the mean given the data and priors is Gaussian or normal.
Notice, the posterior mean, $\mu_1$, weights the data and prior beliefs by their
relative precisions.

For a sample of $n$ exchangeable draws, the likelihood is

$$\ell(\mu \mid y, \sigma) \propto \prod_{i=1}^n \exp\left[-\frac{(y_i-\mu)^2}{2\sigma^2}\right]$$

combined with the above prior yields

$$p(\mu \mid y, \sigma, \mu_0, \sigma^2/\kappa_0) \propto
\exp\left[-\frac{(\mu-\mu_n)^2}{2\tau_n^2}\right]$$

where $\mu_n = \frac{\kappa_0\mu_0+n\bar{y}}{\kappa_0+n} = \frac{\frac{\mu_0}{\tau_0^2} + \frac{n\bar{y}}{\sigma^2}}{\frac{1}{\tau_0^2} + \frac{n}{\sigma^2}}$, $\bar{y}$ is the sample mean, and
$\tau_n^2 = \frac{\sigma^2}{\kappa_0+n} = \frac{1}{\frac{1}{\tau_0^2} + \frac{n}{\sigma^2}}$, or the posterior distribution of the mean, $\mu$, given the data and
priors is again Gaussian or normal and the posterior mean, $\mu_n$, weights the
data and priors by their relative precisions.
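A quick numerical sketch of the precision-weighted update follows; the known standard deviation, the prior settings, and the simulated data are illustrative (hypothetical) choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (hypothetical): known sigma, prior (mu0, kappa0).
sigma, mu0, kappa0 = 2.0, 0.0, 5.0
y = rng.normal(loc=1.0, scale=sigma, size=50)
n, ybar = y.size, y.mean()

# Posterior of mu given sigma^2: normal with precision-weighted mean.
mu_n = (kappa0 * mu0 + n * ybar) / (kappa0 + n)
tau2_n = sigma**2 / (kappa0 + n)

print("posterior mean:", mu_n, "posterior sd:", np.sqrt(tau2_n))
```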

6.2.1 Uninformative prior

An uninformative prior for the mean, $\mu$, is the (improper) uniform, $p(\mu \mid \sigma^2) \propto 1$.
Hence, the likelihood

$$\ell(\mu \mid y, \sigma) \propto \prod_{i=1}^n \exp\left[-\frac{(y_i-\mu)^2}{2\sigma^2}\right]
\propto \exp\left[-\frac{1}{2\sigma^2}\left(\sum_{i=1}^n y_i^2 - 2n\bar{y}\mu + n\mu^2\right)\right]$$
$$\propto \exp\left[-\frac{1}{2\sigma^2}\left(\sum_{i=1}^n y_i^2 - n\bar{y}^2 + n(\mu-\bar{y})^2\right)\right]
\propto \exp\left[-\frac{1}{2\sigma^2}\, n(\mu-\bar{y})^2\right]$$

determines the posterior

$$p(\mu \mid \sigma^2, y) \propto \exp\left[-\frac{n(\mu-\bar{y})^2}{2\sigma^2}\right]$$

which is the kernel for a Gaussian or $N\left(\mu \mid \sigma^2, y; \bar{y}, \frac{\sigma^2}{n}\right)$, the classical result.

6.3 Gaussian (known mean, unknown variance)

For a sample of $n$ exchangeable draws with known mean, $\mu$, and unknown
variance, $\sigma^2$, a Gaussian or normal likelihood

$$\ell(\sigma^2 \mid y, \mu) \propto \prod_{i=1}^n \left(\sigma^2\right)^{-\frac{1}{2}} \exp\left[-\frac{(y_i-\mu)^2}{2\sigma^2}\right]$$

combines with an inverted-gamma$(a, b)$ prior

$$p(\sigma^2; a, b) \propto \left(\sigma^2\right)^{-(a+1)} \exp\left[-\frac{b}{\sigma^2}\right]$$

to yield an inverted-gamma$\left(\frac{n+2a}{2}, b + \frac{1}{2}t\right)$ posterior distribution where

$$t = \sum_{i=1}^n (y_i - \mu)^2$$

Alternatively and conveniently (but equivalently), we could parameterize
the prior as a scaled inverted-chi square$(\nu_0, \sigma_0^2)$^3

$$p(\sigma^2; \nu_0, \sigma_0^2) \propto \left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)} \exp\left[-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right]$$

and combine with the above likelihood to yield

$$p(\sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\left(\frac{n+\nu_0}{2}+1\right)} \exp\left[-\frac{\nu_0\sigma_0^2 + t}{2\sigma^2}\right]$$

an inverted chi-square$\left(\nu_0 + n, \frac{\nu_0\sigma_0^2 + t}{\nu_0+n}\right)$.

^3 $\frac{\nu_0\sigma_0^2}{X}$ is a scaled, inverted-chi square$(\nu_0, \sigma_0^2)$ with scale $\sigma_0^2$ where $X$ is a
chi-square$(\nu_0)$ random variable.
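The sketch below illustrates the scaled inverse chi-square update numerically and checks the posterior mean by Monte Carlo; the prior settings and simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative settings (hypothetical): known mean mu, prior (nu0, sigma0^2).
mu, nu0, sigma02 = 0.0, 4.0, 1.0
y = rng.normal(loc=mu, scale=1.5, size=30)
n = y.size
t = np.sum((y - mu) ** 2)

# Posterior: scaled inverse chi-square(nu0 + n, (nu0*sigma0^2 + t)/(nu0 + n)).
nu_n = nu0 + n
sigma2_n = (nu0 * sigma02 + t) / nu_n

# Monte Carlo draws: sigma^2 = nu_n * sigma2_n / chi2(nu_n).
draws = nu_n * sigma2_n / rng.chisquare(nu_n, size=100_000)
print("posterior mean of sigma^2 (exact, nu_n > 2):", nu_n * sigma2_n / (nu_n - 2))
print("posterior mean of sigma^2 (Monte Carlo):   ", draws.mean())
```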

6.3.1 Uninformative prior

An uninformative prior for scale is

$$p(\sigma^2) \propto \left(\sigma^2\right)^{-1}$$

Hence, the posterior distribution for scale is

$$p(\sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\left(\frac{n}{2}+1\right)} \exp\left[-\frac{t}{2\sigma^2}\right]$$

which is the kernel of an inverted-chi square$\left(\sigma^2; n, \frac{t}{n}\right)$.

6.4 Gaussian (unknown mean, unknown variance)

For a sample of $n$ exchangeable draws, a normal likelihood with unknown
mean, $\mu$, and unknown (but constant) variance, $\sigma^2$, is

$$\ell(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n}{2}} \prod_{i=1}^n \exp\left[-\frac{(y_i-\mu)^2}{2\sigma^2}\right]$$

Expanding and rewriting the likelihood gives

$$\ell(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n}{2}}
\exp\left[-\frac{1}{2\sigma^2}\sum_{i=1}^n\left(y_i^2 - 2y_i\mu + \mu^2\right)\right]$$

Adding and subtracting $\sum_{i=1}^n 2y_i\bar{y} = 2n\bar{y}^2$, we write

$$\ell(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n}{2}}
\exp\left[-\frac{1}{2\sigma^2}\left\{\sum_{i=1}^n\left(y_i^2 - 2y_i\bar{y} + \bar{y}^2\right)
+ n\left(\bar{y}^2 - 2\bar{y}\mu + \mu^2\right)\right\}\right]$$

or

$$\ell(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n}{2}}
\exp\left[-\frac{1}{2\sigma^2}\left\{\sum_{i=1}^n (y_i-\bar{y})^2 + n(\bar{y}-\mu)^2\right\}\right]$$

which can be rewritten as

$$\ell(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n}{2}}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^2\right\}\right]$$

where $s^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i-\bar{y})^2$. The above likelihood combines with a
Gaussian or normal$(\mu \mid \sigma^2; \mu_0, \sigma^2/\kappa_0)$ $\times$ inverted-chi square$(\sigma^2; \nu_0, \sigma_0^2)$ prior^4

$$p(\mu \mid \sigma^2; \mu_0, \sigma^2/\kappa_0)\, p(\sigma^2; \nu_0, \sigma_0^2) \propto
\left(\sigma^2\right)^{-\frac{1}{2}} \exp\left[-\frac{\kappa_0(\mu-\mu_0)^2}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)} \exp\left[-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right]$$
$$\propto \left(\sigma^2\right)^{-\frac{\nu_0+3}{2}}
\exp\left[-\frac{\nu_0\sigma_0^2 + \kappa_0(\mu-\mu_0)^2}{2\sigma^2}\right]$$

to yield a normal$(\mu \mid \sigma^2; \mu_n, \sigma_n^2/\kappa_n)$ $\times$ inverted-chi square$(\sigma^2; \nu_n, \sigma_n^2)$ joint
posterior distribution^5 where

$$\kappa_n = \kappa_0 + n$$
$$\nu_n = \nu_0 + n$$
$$\nu_n\sigma_n^2 = \nu_0\sigma_0^2 + (n-1)s^2 + \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^2$$

That is, the joint posterior is

$$p(\mu, \sigma^2 \mid y; \mu_0, \sigma^2/\kappa_0, \nu_0, \sigma_0^2) \propto
\left(\sigma^2\right)^{-\frac{n+\nu_0+3}{2}}
\exp\left[-\frac{1}{2\sigma^2}\left\{\nu_0\sigma_0^2 + (n-1)s^2
+ \kappa_0(\mu-\mu_0)^2 + n(\mu-\bar{y})^2\right\}\right]$$

^4 The prior for the mean, $\mu$, is conditional on the scale of the data, $\sigma^2$.
^5 The product of normal or Gaussian kernels produces a Gaussian kernel.
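A compact sketch of the normal-inverse chi-square hyperparameter update follows; all prior settings and data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative prior (hypothetical): mu0, kappa0, nu0, sigma0^2.
mu0, kappa0, nu0, sigma02 = 0.0, 2.0, 3.0, 1.0
y = rng.normal(loc=0.8, scale=1.2, size=40)
n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)

# Posterior hyperparameters for the normal-inverse chi-square family.
kappa_n = kappa0 + n
nu_n = nu0 + n
mu_n = (kappa0 * mu0 + n * ybar) / kappa_n
nu_n_sigma2_n = nu0 * sigma02 + (n - 1) * s2 \
    + kappa0 * n / (kappa0 + n) * (mu0 - ybar) ** 2
sigma2_n = nu_n_sigma2_n / nu_n

print(mu_n, kappa_n, nu_n, sigma2_n)
```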

6.4.1 Completing the square

The expression for the joint posterior is written by completing the square.
Completing the weighted square for $\mu$ centered around

$$\mu_n = \frac{1}{\kappa_0+n}(\kappa_0\mu_0 + n\bar{y})$$

where $\bar{y} = \frac{1}{n}\sum_{i=1}^n y_i$ gives

$$(\kappa_0+n)(\mu-\mu_n)^2 = (\kappa_0+n)\mu^2 - 2(\kappa_0+n)\mu\mu_n + (\kappa_0+n)\mu_n^2
= (\kappa_0+n)\mu^2 - 2\mu(\kappa_0\mu_0 + n\bar{y}) + (\kappa_0+n)\mu_n^2$$

While expanding the exponent includes the square plus additional terms
as follows

$$\kappa_0(\mu-\mu_0)^2 + n(\mu-\bar{y})^2
= \kappa_0\mu^2 - 2\kappa_0\mu\mu_0 + \kappa_0\mu_0^2 + n\mu^2 - 2n\mu\bar{y} + n\bar{y}^2
= (\kappa_0+n)\mu^2 - 2\mu(\kappa_0\mu_0 + n\bar{y}) + \kappa_0\mu_0^2 + n\bar{y}^2$$

Add and subtract $(\kappa_0+n)\mu_n^2$ and simplify.

$$\kappa_0(\mu-\mu_0)^2 + n(\mu-\bar{y})^2
= (\kappa_0+n)\mu^2 - 2(\kappa_0+n)\mu\mu_n + (\kappa_0+n)\mu_n^2
- (\kappa_0+n)\mu_n^2 + \kappa_0\mu_0^2 + n\bar{y}^2$$
$$= (\kappa_0+n)(\mu-\mu_n)^2
+ \kappa_0\mu_0^2 + n\bar{y}^2 - \frac{1}{\kappa_0+n}(\kappa_0\mu_0 + n\bar{y})^2$$

Expand and simplify the last term.

$$\kappa_0(\mu-\mu_0)^2 + n(\mu-\bar{y})^2
= (\kappa_0+n)(\mu-\mu_n)^2 + \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^2$$

Now, the joint posterior can be rewritten as

$$p(\mu, \sigma^2 \mid y; \mu_0, \sigma^2/\kappa_0, \nu_0, \sigma_0^2) \propto
\left(\sigma^2\right)^{-\frac{n+\nu_0+3}{2}}
\exp\left[-\frac{1}{2\sigma^2}\left\{\nu_0\sigma_0^2 + (n-1)s^2
+ \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^2 + (\kappa_0+n)(\mu-\mu_n)^2\right\}\right]$$

or

$$p(\mu, \sigma^2 \mid y; \mu_0, \sigma^2/\kappa_0, \nu_0, \sigma_0^2) \propto
\left(\sigma^2\right)^{-\left(\frac{n+\nu_0}{2}+1\right)} \exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\frac{1}{2}} \exp\left[-\frac{(\kappa_0+n)(\mu-\mu_n)^2}{2\sigma^2}\right]$$

Hence, the conditional posterior distribution for the mean, $\mu$, given $\sigma^2$ is
Gaussian or normal$\left(\mu \mid \sigma^2; \mu_n, \frac{\sigma^2}{\kappa_0+n}\right)$.
6.4.2 Marginal posterior distributions

We're often interested in the marginal posterior distributions which are
derived by integrating out the other parameter from the joint posterior. The
marginal posterior for the mean, $\mu$, on integrating out $\sigma^2$ is a noncentral,
scaled-Student $t\left(\mu; \mu_n, \frac{\sigma_n^2}{\kappa_n}, \nu_n\right)$^6 for the mean

$$p\left(\mu; \mu_n, \frac{\sigma_n^2}{\kappa_n}, \nu_n\right) \propto
\left[\nu_n\sigma_n^2 + \kappa_n(\mu-\mu_n)^2\right]^{-\frac{\nu_n+1}{2}}$$

or

$$p\left(\mu; \mu_n, \frac{\sigma_n^2}{\kappa_n}, \nu_n\right) \propto
\left[1 + \frac{\kappa_n(\mu-\mu_n)^2}{\nu_n\sigma_n^2}\right]^{-\frac{\nu_n+1}{2}}$$

and the marginal posterior for the variance, $\sigma^2$, is an inverted-chi
square$(\sigma^2; \nu_n, \sigma_n^2)$ on integrating out $\mu$.

$$p(\sigma^2; \nu_n, \sigma_n^2) \propto \left(\sigma^2\right)^{-\left(\frac{\nu_n}{2}+1\right)}
\exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]$$

Derivation of the marginal posterior for the mean, $\mu$, is as follows. Let
$z = \frac{A}{2\sigma^2}$ where

$$A = \nu_0\sigma_0^2 + (n-1)s^2 + \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^2 + (\kappa_0+n)(\mu-\mu_n)^2
= \nu_n\sigma_n^2 + (\kappa_0+n)(\mu-\mu_n)^2$$

The marginal posterior for the mean, $\mu$, integrates $\sigma^2$ out of the joint
posterior

$$p(\mu \mid y) = \int_0^\infty p(\mu, \sigma^2 \mid y)\, d\sigma^2
= \int_0^\infty \left(\sigma^2\right)^{-\frac{n+\nu_0+3}{2}} \exp\left[-\frac{A}{2\sigma^2}\right] d\sigma^2$$

Utilizing $\sigma^2 = \frac{A}{2z}$ and $dz = -\frac{A}{2(\sigma^2)^2}d\sigma^2$ or $d\sigma^2 = -\frac{A}{2z^2}dz$,

$$p(\mu \mid y) \propto \int_0^\infty \left(\frac{A}{2z}\right)^{-\frac{n+\nu_0+3}{2}} \exp[-z]\, \frac{A}{2z^2}\, dz
\propto A^{-\frac{n+\nu_0+1}{2}} \int_0^\infty z^{\frac{n+\nu_0+1}{2}-1} \exp[-z]\, dz$$

The integral $\int_0^\infty z^{\frac{n+\nu_0+1}{2}-1} \exp[-z]\, dz$ is a constant since it is the kernel of
a gamma density and therefore can be ignored when deriving the kernel of
the marginal posterior for the mean

$$p(\mu \mid y) \propto A^{-\frac{n+\nu_0+1}{2}}
\propto \left[\nu_n\sigma_n^2 + (\kappa_0+n)(\mu-\mu_n)^2\right]^{-\frac{n+\nu_0+1}{2}}
\propto \left[1 + \frac{(\kappa_0+n)(\mu-\mu_n)^2}{\nu_n\sigma_n^2}\right]^{-\frac{n+\nu_0+1}{2}}$$

which is the kernel for a noncentral, scaled Student $t\left(\mu; \mu_n, \frac{\sigma_n^2}{\kappa_0+n}, n+\nu_0\right)$.

Derivation of the marginal posterior for $\sigma^2$ is somewhat simpler. Write
the joint posterior in terms of the conditional posterior for the mean
multiplied by the marginal posterior for $\sigma^2$.

$$p(\mu, \sigma^2 \mid y) = p(\mu \mid \sigma^2, y)\, p(\sigma^2 \mid y)$$

Marginalization of $\sigma^2$ is achieved by integrating out $\mu$.

$$p(\sigma^2 \mid y) = \int p(\sigma^2 \mid y)\, p(\mu \mid \sigma^2, y)\, d\mu$$

Since only the conditional posterior involves $\mu$ the marginal posterior for
$\sigma^2$ is immediate.

$$p(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n+\nu_0+3}{2}} \exp\left[-\frac{A}{2\sigma^2}\right]
\propto \left(\sigma^2\right)^{-\frac{n+\nu_0+2}{2}} \exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\frac{1}{2}} \exp\left[-\frac{(\kappa_0+n)(\mu-\mu_n)^2}{2\sigma^2}\right]$$

Integrating out $\mu$ yields

$$p(\sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\frac{n+\nu_0+2}{2}}
\exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\int \left(\sigma^2\right)^{-\frac{1}{2}}
\exp\left[-\frac{(\kappa_0+n)(\mu-\mu_n)^2}{2\sigma^2}\right] d\mu
\propto \left(\sigma^2\right)^{-\left(\frac{\nu_n}{2}+1\right)}
\exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]$$

which we recognize as the kernel of an inverted-chi square$(\sigma^2; \nu_n, \sigma_n^2)$.

^6 The noncentral, scaled-Student $t\left(\mu; \mu_n, \sigma_n^2/\kappa_n, \nu_n\right)$ implies $\frac{\mu-\mu_n}{\sigma_n/\sqrt{\kappa_n}}$ has a standard
Student-$t(\nu_n)$ distribution $p(\mu \mid y) \propto
\left[1 + \frac{1}{\nu_n}\left(\frac{\mu-\mu_n}{\sigma_n/\sqrt{\kappa_n}}\right)^2\right]^{-\frac{\nu_n+1}{2}}$.
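The composition-sampling sketch below checks the scaled Student t marginal for $\mu$ by simulation; the posterior hyperparameters are illustrative values, not computed from any particular data set.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative posterior hyperparameters (hypothetical values).
mu_n, kappa_n, nu_n, sigma2_n = 0.7, 42.0, 43.0, 1.1

# Composition sampling from the joint posterior:
# sigma^2 ~ scaled inverse chi-square(nu_n, sigma2_n), then mu | sigma^2 ~ N(mu_n, sigma^2/kappa_n).
sig2 = nu_n * sigma2_n / rng.chisquare(nu_n, size=200_000)
mu = rng.normal(mu_n, np.sqrt(sig2 / kappa_n))

# The marginal of mu should match a scaled Student t(nu_n) with
# location mu_n and scale sqrt(sigma2_n / kappa_n).
t_marginal = stats.t(df=nu_n, loc=mu_n, scale=np.sqrt(sigma2_n / kappa_n))
print("Monte Carlo sd:", mu.std(), "  Student t sd:", t_marginal.std())
```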

6.4.3 Uninformative priors

The case of uninformative priors is relatively straightforward. Since priors
convey no information, the prior for the mean is uniform (proportional to
a constant, $\kappa_0 \rightarrow 0$) and an uninformative prior for $\sigma^2$ has $\nu_0 \rightarrow 0$ degrees
of freedom so that the joint prior is

$$p(\mu, \sigma^2) \propto \left(\sigma^2\right)^{-1}$$

The joint posterior is

$$p(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\left(\frac{n}{2}+1\right)}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-1)s^2 + n(\mu-\bar{y})^2\right\}\right]$$
$$\propto \left(\sigma^2\right)^{-\left(\frac{n-1}{2}+1\right)} \exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\frac{1}{2}} \exp\left[-\frac{n(\mu-\bar{y})^2}{2\sigma^2}\right]$$

where

$$\nu_n\sigma_n^2 = (n-1)s^2$$

The conditional posterior for $\mu$ given $\sigma^2$ is Gaussian$\left(\bar{y}, \frac{\sigma^2}{n}\right)$. And, the
marginal posterior for $\mu$ is noncentral, scaled Student $t\left(\bar{y}, \frac{s^2}{n}, n-1\right)$, the
classical estimator.

Derivation of the marginal posterior proceeds as above. The joint posterior
is

$$p(\mu, \sigma^2 \mid y) \propto \left(\sigma^2\right)^{-\left(\frac{n}{2}+1\right)}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-1)s^2 + n(\mu-\bar{y})^2\right\}\right]$$

Let $z = \frac{A}{2\sigma^2}$ where $A = (n-1)s^2 + n(\mu-\bar{y})^2$. Now integrate $\sigma^2$ out of the
joint posterior following the transformation of variables.

$$p(\mu \mid y) \propto \int_0^\infty \left(\sigma^2\right)^{-\left(\frac{n}{2}+1\right)}
\exp\left[-\frac{A}{2\sigma^2}\right] d\sigma^2
\propto A^{-\frac{n}{2}} \int_0^\infty z^{\frac{n}{2}-1} e^{-z}\, dz$$

As before, the integral involves the kernel of a gamma density and therefore
is a constant which can be ignored. Hence,

$$p(\mu \mid y) \propto A^{-\frac{n}{2}}
\propto \left[(n-1)s^2 + n(\mu-\bar{y})^2\right]^{-\frac{n}{2}}
\propto \left[1 + \frac{n(\mu-\bar{y})^2}{(n-1)s^2}\right]^{-\frac{(n-1)+1}{2}}$$

which we recognize as the kernel of a noncentral, scaled Student
$t\left(\mu; \bar{y}, \frac{s^2}{n}, n-1\right)$.
6.5 Multivariate Gaussian (unknown mean, known variance)

More than one random variable (the multivariate case) with joint Gaussian
or normal likelihood is analogous to the univariate case with Gaussian
conjugate prior. Consider a vector of $k$ random variables (the sample is
comprised of $n$ draws for each random variable) with unknown mean, $\mu$,
and known variance, $\Sigma$. For $n$ exchangeable draws of the random vector
(containing each of the $k$ random variables), the multivariate Gaussian
likelihood is

$$\ell(\mu \mid y, \Sigma) \propto \prod_{i=1}^n \exp\left[-\frac{1}{2}(y_i-\mu)^T \Sigma^{-1} (y_i-\mu)\right]$$

where superscript $T$ refers to transpose, $y_i$ and $\mu$ are $k$-length vectors and
$\Sigma$ is a $k \times k$ variance-covariance matrix. A Gaussian prior for the mean
vector, $\mu$, with prior mean, $\mu_0$, and prior variance, $\Sigma_0$, is

$$p(\mu \mid \Sigma; \mu_0, \Sigma_0) \propto \exp\left[-\frac{1}{2}(\mu-\mu_0)^T \Sigma_0^{-1} (\mu-\mu_0)\right]$$

The product of the likelihood and prior yields the kernel of a multivariate
posterior Gaussian distribution for the mean

$$p(\mu \mid \Sigma, y; \mu_0, \Sigma_0) \propto
\exp\left[-\frac{1}{2}(\mu-\mu_0)^T \Sigma_0^{-1} (\mu-\mu_0)\right]
\prod_{i=1}^n \exp\left[-\frac{1}{2}(y_i-\mu)^T \Sigma^{-1} (y_i-\mu)\right]$$

6.5.1 Completing the square

Expanding terms in the exponent leads to

$$(\mu-\mu_0)^T \Sigma_0^{-1} (\mu-\mu_0) + \sum_{i=1}^n (y_i-\mu)^T \Sigma^{-1} (y_i-\mu)$$
$$= \mu^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu
- 2\mu^T\left(\Sigma_0^{-1}\mu_0 + n\Sigma^{-1}\bar{y}\right)
+ \mu_0^T \Sigma_0^{-1} \mu_0 + \sum_{i=1}^n y_i^T \Sigma^{-1} y_i$$

where $\bar{y}$ is the sample average. While completing the (weighted) square
centered around

$$\mu_n = \left(\Sigma_0^{-1} + n\Sigma^{-1}\right)^{-1}\left(\Sigma_0^{-1}\mu_0 + n\Sigma^{-1}\bar{y}\right)$$

leads to

$$(\mu-\mu_n)^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)(\mu-\mu_n)
= \mu^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu
- 2\mu^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu_n
+ \mu_n^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu_n$$

Thus, adding and subtracting $\mu_n^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu_n$ in the exponent
completes the square (with three extra terms).

$$(\mu-\mu_0)^T \Sigma_0^{-1} (\mu-\mu_0) + \sum_{i=1}^n (y_i-\mu)^T \Sigma^{-1} (y_i-\mu)$$
$$= (\mu-\mu_n)^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)(\mu-\mu_n)
- \mu_n^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)\mu_n
+ \mu_0^T \Sigma_0^{-1} \mu_0 + \sum_{i=1}^n y_i^T \Sigma^{-1} y_i$$

Dropping constants (the last three extra terms unrelated to $\mu$) gives

$$p(\mu \mid \Sigma, y; \mu_0, \Sigma_0) \propto
\exp\left[-\frac{1}{2}(\mu-\mu_n)^T\left(\Sigma_0^{-1} + n\Sigma^{-1}\right)(\mu-\mu_n)\right]$$

Hence, the posterior for the mean has expected value $\mu_n$ and variance

$$Var[\mu \mid y, \Sigma, \mu_0, \Sigma_0] = \left(\Sigma_0^{-1} + n\Sigma^{-1}\right)^{-1}$$

As in the univariate case, the data and prior beliefs are weighted by their
relative precisions.
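A short sketch of the multivariate precision-weighted update; the prior, the known covariance, and the simulated data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative settings (hypothetical): k = 2, known Sigma, prior (mu0, Sigma0).
Sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
mu0 = np.zeros(2)
Sigma0 = np.eye(2) * 10.0
y = rng.multivariate_normal(mean=[0.5, -1.0], cov=Sigma, size=60)
n, ybar = y.shape[0], y.mean(axis=0)

# Posterior precision and mean: precision-weighted combination of prior and data.
prec = np.linalg.inv(Sigma0) + n * np.linalg.inv(Sigma)
V_post = np.linalg.inv(prec)
mu_post = V_post @ (np.linalg.inv(Sigma0) @ mu0 + n * np.linalg.inv(Sigma) @ ybar)

print(mu_post)
print(V_post)
```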

6.5.2 Uninformative priors

Uninformative priors for $\mu$ are proportional to a constant. Hence, the
likelihood determines the posterior

$$\ell(\mu \mid \Sigma, y) \propto \exp\left[-\frac{1}{2}\sum_{i=1}^n (y_i-\mu)^T \Sigma^{-1} (y_i-\mu)\right]$$

Expanding the exponent and adding and subtracting $n\bar{y}^T \Sigma^{-1} \bar{y}$ (to complete
the square) gives

$$\sum_{i=1}^n (y_i-\mu)^T \Sigma^{-1} (y_i-\mu)
= \sum_{i=1}^n y_i^T \Sigma^{-1} y_i - 2n\mu^T \Sigma^{-1} \bar{y} + n\mu^T \Sigma^{-1} \mu
+ n\bar{y}^T \Sigma^{-1} \bar{y} - n\bar{y}^T \Sigma^{-1} \bar{y}$$
$$= n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)
+ \sum_{i=1}^n y_i^T \Sigma^{-1} y_i - n\bar{y}^T \Sigma^{-1} \bar{y}$$

The latter two terms are constants, hence, the posterior kernel is

$$p(\mu \mid \Sigma, y) \propto \exp\left[-\frac{n}{2}(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)\right]$$

which is Gaussian or $N\left(\mu; \bar{y}, n^{-1}\Sigma\right)$, the classical result.

6.6 Multivariate Gaussian (unknown mean, unknown variance)

When both the mean, $\mu$, and variance, $\Sigma$, are unknown, the multivariate
Gaussian case remains analogous to the univariate case. Specifically, a
Gaussian likelihood

$$\ell(\mu, \Sigma \mid y) \propto \prod_{i=1}^n |\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{1}{2}(y_i-\mu)^T \Sigma^{-1} (y_i-\mu)\right]$$
$$\propto |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{\sum_{i=1}^n (y_i-\bar{y})^T \Sigma^{-1} (y_i-\bar{y})
+ n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)\right\}\right]$$
$$\propto |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)\right\}\right]$$

where $s^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i-\bar{y})^T \Sigma^{-1} (y_i-\bar{y})$, combines with a
Gaussian-inverted Wishart prior

$$p(\mu \mid \Sigma; \mu_0, \Sigma/\kappa_0)\, p(\Sigma; \Lambda, \nu) \propto
|\Sigma|^{-\frac{1}{2}} \exp\left[-\frac{\kappa_0}{2}(\mu-\mu_0)^T \Sigma^{-1} (\mu-\mu_0)\right]
|\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+k+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]$$

where $tr(\cdot)$ is the trace of the matrix and $\nu$ is degrees of freedom, to produce

$$p(\mu, \Sigma \mid y) \propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+k+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]
|\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{1}{2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)
+ \kappa_0(\mu-\mu_0)^T \Sigma^{-1} (\mu-\mu_0)\right\}\right]$$

6.6.1 Completing the square

Completing the square involves the matrix analog to the univariate unknown
mean and variance case. Consider the exponent (in braces)

$$(n-1)s^2 + n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)
+ \kappa_0(\mu-\mu_0)^T \Sigma^{-1} (\mu-\mu_0)$$
$$= (n-1)s^2 + n\bar{y}^T \Sigma^{-1} \bar{y} - 2n\mu^T \Sigma^{-1} \bar{y} + n\mu^T \Sigma^{-1} \mu
+ \kappa_0\mu^T \Sigma^{-1} \mu - 2\kappa_0\mu^T \Sigma^{-1} \mu_0 + \kappa_0\mu_0^T \Sigma^{-1} \mu_0$$
$$= (n-1)s^2 + (\kappa_0+n)\mu^T \Sigma^{-1} \mu
- 2\mu^T \Sigma^{-1}(\kappa_0\mu_0 + n\bar{y})
+ (\kappa_0+n)\mu_n^T \Sigma^{-1} \mu_n
- (\kappa_0+n)\mu_n^T \Sigma^{-1} \mu_n
+ \kappa_0\mu_0^T \Sigma^{-1} \mu_0 + n\bar{y}^T \Sigma^{-1} \bar{y}$$
$$= (n-1)s^2 + (\kappa_0+n)(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)
+ \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^T \Sigma^{-1} (\mu_0-\bar{y})$$

Hence, the joint posterior can be rewritten as

$$p(\mu, \Sigma \mid y) \propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+k+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]
|\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{1}{2}\left\{(\kappa_0+n)(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)
+ (n-1)s^2
+ \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^T \Sigma^{-1} (\mu_0-\bar{y})\right\}\right]$$
$$\propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+k+1}{2}}
\exp\left[-\frac{1}{2}\left\{tr\left(\Lambda\Sigma^{-1}\right) + (n-1)s^2
+ \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})^T \Sigma^{-1} (\mu_0-\bar{y})\right\}\right]
|\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{\kappa_0+n}{2}(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)\right]$$

6.6.2 Inverted-Wishart kernel

We wish to identify the exponent with Gaussian by inverted-Wishart kernels
where the inverted-Wishart involves the trace of a square, symmetric
matrix, call it $\Lambda_n$, multiplied by $\Sigma^{-1}$.

To make this connection we utilize the following general results. Since a
quadratic form, say $x^T \Sigma^{-1} x$, is a scalar, it's equal to its trace,

$$x^T \Sigma^{-1} x = tr\left(x^T \Sigma^{-1} x\right)$$

Further, for conformable matrices $A$, $B$ and $C$, $D$,

$$tr(A) + tr(B) = tr(A + B)$$

and

$$tr(CD) = tr(DC)$$

We immediately have the results

$$tr\left(x^T x\right) = tr\left(x x^T\right)$$

and

$$tr\left(x^T \Sigma^{-1} x\right) = tr\left(\Sigma^{-1} x x^T\right) = tr\left(x x^T \Sigma^{-1}\right)$$

Therefore, the above joint posterior can be rewritten as a
$N\left(\mu; \mu_n, \Sigma(\kappa_0+n)^{-1}\right)$ $\times$ inverted-Wishart$\left(\Sigma^{-1}; \nu+n, \Lambda_n\right)$

$$p(\mu, \Sigma \mid y) \propto |\Lambda_n|^{\frac{\nu+n}{2}} |\Sigma|^{-\frac{\nu+n+k+1}{2}}
\exp\left[-\frac{1}{2}tr\left(\Lambda_n \Sigma^{-1}\right)\right]
|\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{\kappa_0+n}{2}(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)\right]$$

where

$$\mu_n = \frac{1}{\kappa_0+n}(\kappa_0\mu_0 + n\bar{y})$$

and

$$\Lambda_n = \Lambda + \sum_{i=1}^n (y_i-\bar{y})(y_i-\bar{y})^T
+ \frac{\kappa_0 n}{\kappa_0+n}(\bar{y}-\mu_0)(\bar{y}-\mu_0)^T$$

Now, it's apparent the conditional posterior for $\mu$ given $\Sigma$ is
$N\left(\mu_n, \Sigma(\kappa_0+n)^{-1}\right)$

$$p(\mu \mid \Sigma, y) \propto
\exp\left[-\frac{\kappa_0+n}{2}(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)\right]$$
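A minimal sketch of the normal-inverted Wishart hyperparameter update ($\mu_n$, $\kappa_0+n$, $\nu+n$, $\Lambda_n$); the prior settings and simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative prior (hypothetical): mu0, kappa0, nu (Wishart dof), Lam (scale matrix).
k = 2
mu0, kappa0 = np.zeros(k), 1.0
nu, Lam = k + 2, np.eye(k)
y = rng.multivariate_normal(mean=[1.0, 0.0],
                            cov=[[1.0, 0.4], [0.4, 0.8]], size=50)
n, ybar = y.shape[0], y.mean(axis=0)

# Posterior hyperparameters of the normal-inverted Wishart family.
mu_n = (kappa0 * mu0 + n * ybar) / (kappa0 + n)
S = (y - ybar).T @ (y - ybar)                     # sum of squares about ybar
Lam_n = Lam + S + kappa0 * n / (kappa0 + n) * np.outer(ybar - mu0, ybar - mu0)
kappa_n, nu_n = kappa0 + n, nu + n

print(mu_n)
print(Lam_n)
```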

6.6.3 Marginal posterior distributions

Integrating out the other parameter gives the marginal posteriors, a
multivariate Student t for the mean,

$$\text{Student } t_k\left(\mu; \mu_n, \tilde{\Sigma}, \nu+n-k+1\right)$$

and an inverted-Wishart for the variance,

$$\text{I-W}\left(\Sigma^{-1}; \nu+n, \Lambda_n\right)$$

where

$$\tilde{\Sigma} = (\kappa_0+n)^{-1}(\nu+n-k+1)^{-1}\Lambda_n$$

Marginalization of the mean derives from the following identities (see Box
and Tiao [1973], p. 427, 441). Let $Z$ be an $m \times m$ positive definite symmetric
matrix consisting of $\frac{1}{2}m(m+1)$ distinct random variables $z_{ij}$
$(i, j = 1, \ldots, m;\ i \leq j)$. And let $q > 0$ and $B$ be an $m \times m$ positive definite
symmetric matrix. Then, the distribution of $z_{ij}$,

$$p(Z) \propto |Z|^{\frac{q-1}{2}} \exp\left[-\tfrac{1}{2}tr(ZB)\right], \qquad Z > 0$$

is a multivariate generalization of the $\chi^2$ distribution obtained by Wishart
[1928]. Integrating out the distinct $z_{ij}$ produces the first identity.

$$\int_{Z>0} |Z|^{\frac{q-1}{2}} \exp\left[-\tfrac{1}{2}tr(ZB)\right] dZ
= |B|^{-\frac{1}{2}(q+m-1)}\, 2^{\frac{1}{2}(q+m-1)m}\,
\Gamma_m\left(\tfrac{q+m-1}{2}\right) \qquad \text{(I.1)}$$

where $\Gamma_p(b)$ is the generalized gamma function (Siegel [1935])

$$\Gamma_p(b) = \pi^{\frac{1}{4}p(p-1)} \prod_{\alpha=1}^p
\Gamma\left(b + \frac{1-\alpha}{2}\right), \qquad b > \frac{p-1}{2}$$

and

$$\Gamma(z) = \int_0^\infty t^{z-1} e^{-t}\, dt$$

or for integer $n$,

$$\Gamma(n) = (n-1)!$$

The second identity involves the relationship between determinants that
allows us to express the above as a quadratic form. The identity is

$$|I_k - PQ| = |I_l - QP| \qquad \text{(I.2)}$$

for $P$ a $k \times l$ matrix and $Q$ a $l \times k$ matrix.

If we transform the joint posterior to $p\left(\mu, \Sigma^{-1} \mid y\right)$, the above identities
can be applied to marginalize the joint posterior. The key to the transformation
is

$$p\left(\mu, \Sigma^{-1} \mid y\right) = p(\mu, \Sigma \mid y)
\left|\frac{\partial\Sigma}{\partial\Sigma^{-1}}\right|$$

where $\left|\frac{\partial\Sigma}{\partial\Sigma^{-1}}\right|$ is the (absolute value of the) determinant of the Jacobian or

$$\left|\frac{\partial(\sigma_{11}, \sigma_{12}, \ldots, \sigma_{kk})}
{\partial(\sigma^{11}, \sigma^{12}, \ldots, \sigma^{kk})}\right| = |\Sigma|^{k+1}$$

with $\sigma_{ij}$ the elements of $\Sigma$ and $\sigma^{ij}$ the elements of $\Sigma^{-1}$. Hence,

$$p(\mu, \Sigma \mid y) \propto |\Sigma|^{-\frac{\nu+n+k+1}{2}}
\exp\left[-\frac{1}{2}tr\left(\Lambda_n \Sigma^{-1}\right)\right]
|\Sigma|^{-\frac{1}{2}}
\exp\left[-\frac{\kappa_0+n}{2}(\mu-\mu_n)^T \Sigma^{-1} (\mu-\mu_n)\right]
\propto |\Sigma|^{-\frac{\nu+n+k+2}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right]$$

where $S(\mu) = \Lambda_n + (\kappa_0+n)(\mu-\mu_n)(\mu-\mu_n)^T$, can be rewritten

$$p\left(\mu, \Sigma^{-1} \mid y\right) \propto |\Sigma|^{-\frac{\nu+n+k+2}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right] |\Sigma|^{\frac{2k+2}{2}}
\propto \left|\Sigma^{-1}\right|^{\frac{\nu+n-k}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right]$$

Now, applying the first identity yields

$$\int_{\Sigma^{-1}>0} p\left(\mu, \Sigma^{-1} \mid y\right) d\Sigma^{-1}
\propto |S(\mu)|^{-\frac{1}{2}(\nu+n+1)}
\propto \left|\Lambda_n + (\kappa_0+n)(\mu-\mu_n)(\mu-\mu_n)^T\right|^{-\frac{1}{2}(\nu+n+1)}
\propto \left|I + (\kappa_0+n)\Lambda_n^{-1}(\mu-\mu_n)(\mu-\mu_n)^T\right|^{-\frac{1}{2}(\nu+n+1)}$$

And the second identity gives

$$p(\mu \mid y) \propto
\left[1 + (\kappa_0+n)(\mu-\mu_n)^T \Lambda_n^{-1} (\mu-\mu_n)\right]^{-\frac{1}{2}(\nu+n+1)}$$

We recognize this is the kernel of a multivariate Student
$t_k\left(\mu; \mu_n, \tilde{\Sigma}, \nu+n-k+1\right)$ distribution.

6.6.4 Uninformative priors

The joint uninformative prior (with a locally uniform prior for $\mu$) is

$$p(\mu, \Sigma) \propto |\Sigma|^{-\frac{k+1}{2}}$$

and the joint posterior is

$$p(\mu, \Sigma \mid y) \propto |\Sigma|^{-\frac{k+1}{2}} |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)\right\}\right]$$
$$\propto |\Sigma|^{-\frac{n+k+1}{2}}
\exp\left[-\frac{1}{2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^T \Sigma^{-1} (\bar{y}-\mu)\right\}\right]
\propto |\Sigma|^{-\frac{n+k+1}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right]$$

where now $S(\mu) = \sum_{i=1}^n (\bar{y}-y_i)(\bar{y}-y_i)^T + n(\bar{y}-\mu)(\bar{y}-\mu)^T$. Then, the
conditional posterior for $\mu$ given $\Sigma$ is $N\left(\bar{y}, n^{-1}\Sigma\right)$

$$p(\mu \mid \Sigma, y) \propto \exp\left[-\frac{n}{2}(\mu-\bar{y})^T \Sigma^{-1} (\mu-\bar{y})\right]$$

The marginal posterior for $\mu$ is derived analogous to the above informed
conjugate prior case. Rewriting the posterior in terms of $\Sigma^{-1}$ yields

$$p\left(\mu, \Sigma^{-1} \mid y\right) \propto |\Sigma|^{-\frac{n+k+1}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right] |\Sigma|^{\frac{2k+2}{2}}
\propto \left|\Sigma^{-1}\right|^{\frac{n-k-1}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right]$$

$$p(\mu \mid y) \propto \int_{\Sigma^{-1}>0} p\left(\mu, \Sigma^{-1} \mid y\right) d\Sigma^{-1}
\propto \int_{\Sigma^{-1}>0} \left|\Sigma^{-1}\right|^{\frac{n-k-1}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\mu)\Sigma^{-1}\right)\right] d\Sigma^{-1}$$

The first identity (I.1) produces

$$p(\mu \mid y) \propto |S(\mu)|^{-\frac{n}{2}}
\propto \left|\sum_{i=1}^n (\bar{y}-y_i)(\bar{y}-y_i)^T + n(\bar{y}-\mu)(\bar{y}-\mu)^T\right|^{-\frac{n}{2}}
\propto \left|I + n\left[\sum_{i=1}^n (\bar{y}-y_i)(\bar{y}-y_i)^T\right]^{-1}
(\bar{y}-\mu)(\bar{y}-\mu)^T\right|^{-\frac{n}{2}}$$

The second identity (I.2) identifies the marginal posterior for $\mu$ as
(multivariate) Student $t_k\left(\mu; \bar{y}, n^{-1}s^2, n-k\right)$

$$p(\mu \mid y) \propto
\left[1 + \frac{n}{n-k}(\bar{y}-\mu)^T \left(s^2\right)^{-1} (\bar{y}-\mu)\right]^{-\frac{n}{2}}$$

where $(n-k)s^2 = \sum_{i=1}^n (\bar{y}-y_i)(\bar{y}-y_i)^T$. The marginal posterior for the
variance is I-W$\left(\Sigma^{-1}; n, \Lambda_n\right)$ where now
$\Lambda_n = \sum_{i=1}^n (\bar{y}-y_i)(\bar{y}-y_i)^T$.

6.7 Bayesian linear regression

Linear regression is the starting point for more general data modeling
strategies, including nonlinear models. Hence, Bayesian linear regression
is foundational. Suppose the data are generated by

$$y = X\beta + \varepsilon$$

where $X$ is an $n \times p$ full column rank matrix of (weakly exogenous) regressors,
$\varepsilon \sim N\left(0, \sigma^2 I_n\right)$, and $E[\varepsilon \mid X] = 0$. Then, the sample conditional
density is $y \mid X, \beta, \sigma^2 \sim N\left(X\beta, \sigma^2 I_n\right)$.

6.7.1 Known variance

If the error variance, $\sigma^2 I_n$, is known and we have informed Gaussian priors
for $\beta$ conditional on $\sigma^2$,

$$p\left(\beta \mid \sigma^2\right) \sim N\left(\beta_0, \sigma^2 V_0\right)$$

where we can think of $V_0 = \left(X_0^T X_0\right)^{-1}$ as if we had a prior sample $(y_0, X_0)$
such that

$$\beta_0 = \left(X_0^T X_0\right)^{-1} X_0^T y_0$$

then the conditional posterior for $\beta$ is

$$p\left(\beta \mid \sigma^2, y, X; \beta_0, V_0\right) \sim N\left(\bar{\beta}, V_\beta\right)$$

where

$$\bar{\beta} = \left(X_0^T X_0 + X^T X\right)^{-1}\left(X_0^T X_0 \beta_0 + X^T X \hat{\beta}\right)$$
$$\hat{\beta} = \left(X^T X\right)^{-1} X^T y$$

and

$$V_\beta = \sigma^2\left(X_0^T X_0 + X^T X\right)^{-1}$$

The variance expression follows from rewriting the estimator

$$\bar{\beta} = \left(X_0^T X_0 + X^T X\right)^{-1}\left(X_0^T X_0 \beta_0 + X^T X \hat{\beta}\right)
= \left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T X_0 \left(X_0^T X_0\right)^{-1} X_0^T y_0
+ X^T X \left(X^T X\right)^{-1} X^T y\right)$$
$$= \left(X_0^T X_0 + X^T X\right)^{-1}\left(X_0^T y_0 + X^T y\right)$$

Since the DGP is

$$y_0 = X_0\beta + \varepsilon_0, \qquad \varepsilon_0 \sim N\left(0, \sigma^2 I_{n_0}\right)$$
$$y = X\beta + \varepsilon, \qquad \varepsilon \sim N\left(0, \sigma^2 I_n\right)$$

then

$$\bar{\beta} = \left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T X_0 \beta + X_0^T \varepsilon_0 + X^T X \beta + X^T \varepsilon\right)$$

The conditional (and by iterated expectations, unconditional) expected
value of the estimator is

$$E\left[\bar{\beta} \mid X, X_0\right]
= \left(X_0^T X_0 + X^T X\right)^{-1}\left(X_0^T X_0 + X^T X\right)\beta = \beta$$

Hence,

$$\bar{\beta} - E\left[\bar{\beta} \mid X, X_0\right] = \bar{\beta} - \beta
= \left(X_0^T X_0 + X^T X\right)^{-1}\left(X_0^T \varepsilon_0 + X^T \varepsilon\right)$$

so that

$$V_\beta \equiv Var\left[\bar{\beta} \mid X, X_0\right]
= E\left[\left(\bar{\beta} - \beta\right)\left(\bar{\beta} - \beta\right)^T \mid X, X_0\right]$$
$$= E\left[\left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T \varepsilon_0 + X^T \varepsilon\right)
\left(X_0^T \varepsilon_0 + X^T \varepsilon\right)^T
\left(X_0^T X_0 + X^T X\right)^{-1} \mid X, X_0\right]$$
$$= E\left[\left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T \varepsilon_0 \varepsilon_0^T X_0 + X^T \varepsilon \varepsilon_0^T X_0
+ X_0^T \varepsilon_0 \varepsilon^T X + X^T \varepsilon \varepsilon^T X\right)
\left(X_0^T X_0 + X^T X\right)^{-1} \mid X, X_0\right]$$
$$= \left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T \sigma^2 I X_0 + X^T \sigma^2 I X\right)
\left(X_0^T X_0 + X^T X\right)^{-1}$$
$$= \sigma^2\left(X_0^T X_0 + X^T X\right)^{-1}
\left(X_0^T X_0 + X^T X\right)\left(X_0^T X_0 + X^T X\right)^{-1}
= \sigma^2\left(X_0^T X_0 + X^T X\right)^{-1}$$

Now, let's backtrack and derive the conditional posterior as the product
of conditional priors and the likelihood function. The likelihood function
for known variance is

$$\ell\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}(y-X\beta)^T(y-X\beta)\right]$$

Conditional Gaussian priors are

$$p\left(\beta \mid \sigma^2\right) \propto
\exp\left[-\frac{1}{2\sigma^2}(\beta-\beta_0)^T V_0^{-1} (\beta-\beta_0)\right]$$

The conditional posterior is the product of the prior and likelihood

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{(y-X\beta)^T(y-X\beta)
+ (\beta-\beta_0)^T V_0^{-1} (\beta-\beta_0)\right\}\right]$$
$$= \exp\left[-\frac{1}{2\sigma^2}\left\{y^T y - 2y^T X\beta + \beta^T X^T X\beta
+ \beta^T X_0^T X_0 \beta - 2\beta_0^T X_0^T X_0 \beta
+ \beta_0^T X_0^T X_0 \beta_0\right\}\right]$$

The first and last terms in the exponent do not involve $\beta$ (are constants)
and can be ignored as they are absorbed through normalization. This leaves

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{-2y^T X\beta + \beta^T X^T X\beta
+ \beta^T X_0^T X_0 \beta - 2\beta_0^T X_0^T X_0 \beta\right\}\right]$$
$$= \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T\left(X_0^T X_0 + X^T X\right)\beta
- 2\left(y^T X + \beta_0^T X_0^T X_0\right)\beta\right\}\right]$$

which can be recognized as the expansion of the conditional posterior
claimed above.

$$p\left(\beta \mid \sigma^2, y, X\right) \sim N\left(\bar{\beta}, V_\beta\right)
\propto \exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1} \left(\beta-\bar{\beta}\right)\right]
= \exp\left[-\frac{1}{2\sigma^2}\left(\beta-\bar{\beta}\right)^T
\left(X_0^T X_0 + X^T X\right)\left(\beta-\bar{\beta}\right)\right]$$
$$= \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T\left(X_0^T X_0 + X^T X\right)\beta
- 2\bar{\beta}^T\left(X_0^T X_0 + X^T X\right)\beta
+ \bar{\beta}^T\left(X_0^T X_0 + X^T X\right)\bar{\beta}\right\}\right]$$
$$= \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T\left(X_0^T X_0 + X^T X\right)\beta
- 2\left(X_0^T X_0 \beta_0 + X^T y\right)^T\beta
+ \bar{\beta}^T\left(X_0^T X_0 + X^T X\right)\bar{\beta}\right\}\right]$$

The last term in the exponent is all constants (does not involve $\beta$) so it's
absorbed through normalization and disregarded for comparison of kernels.
Hence,

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1} \left(\beta-\bar{\beta}\right)\right]
\propto \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T\left(X_0^T X_0 + X^T X\right)\beta
- 2\left(y^T X + \beta_0^T X_0^T X_0\right)\beta\right\}\right]$$

as claimed.
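For concreteness, here is a small sketch of the known-variance conjugate regression update; the simulated design, true coefficients, and prior precision are illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative data and prior (hypothetical): known sigma^2, prior as if from (y0, X0).
n, p, sigma2 = 100, 3, 1.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, -0.5, 2.0])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

beta0 = np.zeros(p)
X0tX0 = np.eye(p) * 0.5          # prior precision, as if X0'X0 from a prior sample

# Posterior mean weights the prior and OLS estimates by their precisions.
XtX = X.T @ X
b_ols = np.linalg.solve(XtX, X.T @ y)
beta_bar = np.linalg.solve(X0tX0 + XtX, X0tX0 @ beta0 + XtX @ b_ols)
V_beta = sigma2 * np.linalg.inv(X0tX0 + XtX)

print(beta_bar)
print(np.sqrt(np.diag(V_beta)))
```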

Uninformative priors

If the prior for $\beta$ is uniformly distributed conditional on known variance,
$\sigma^2$, $p\left(\beta \mid \sigma^2\right) \propto 1$, then it's as if $X_0^T X_0 \rightarrow 0$ (the information matrix for
the prior is null) and the posterior for $\beta$ is

$$p\left(\beta \mid \sigma^2, y, X\right) \sim N\left(\hat{\beta}, \sigma^2\left(X^T X\right)^{-1}\right)$$

equivalent to the classical parameter estimators.

To see this intuition holds, recognize combining the likelihood with the
uninformative prior indicates the posterior is proportional to the likelihood.

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}(y-X\beta)^T(y-X\beta)\right]$$

Expanding this expression yields

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{y^T y - 2y^T X\beta + \beta^T X^T X\beta\right\}\right]$$

The first term in the exponent doesn't depend on $\beta$ and can be dropped
as it's absorbed via normalization. This leaves

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{-2y^T X\beta + \beta^T X^T X\beta\right\}\right]$$

Now, write $p\left(\beta \mid \sigma^2, y, X\right) \sim N\left(\hat{\beta}, \sigma^2\left(X^T X\right)^{-1}\right)$

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left(\beta-\hat{\beta}\right)^T X^T X \left(\beta-\hat{\beta}\right)\right]$$

and expand

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T X^T X\beta
- 2\hat{\beta}^T X^T X\beta + \hat{\beta}^T X^T X\hat{\beta}\right\}\right]$$

The last term in the exponent doesn't depend on $\beta$ and is absorbed via
normalization. This leaves

$$p\left(\beta \mid \sigma^2, y, X\right) \propto
\exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T X^T X\beta - 2\hat{\beta}^T X^T X\beta\right\}\right]
\propto \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T X^T X\beta
- 2\beta^T X^T X\left(X^T X\right)^{-1} X^T y\right\}\right]
\propto \exp\left[-\frac{1}{2\sigma^2}\left\{\beta^T X^T X\beta - 2\beta^T X^T y\right\}\right]$$

As this latter expression matches the simplified likelihood expression, the
demonstration is complete,
$p\left(\beta \mid \sigma^2, y, X\right) \sim N\left(\hat{\beta}, \sigma^2\left(X^T X\right)^{-1}\right)$.

6.7.2 Unknown variance

In the usual case where the variance as well as the regression coefficients,
$\beta$, are unknown, the likelihood function can be expressed as

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}(y-X\beta)^T(y-X\beta)\right]$$

Rewriting gives

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\varepsilon^T\varepsilon\right]$$

since $\varepsilon = y - X\beta$. The estimated model is $y = Xb + e$, therefore
$X\beta + \varepsilon = Xb + e$ where $b = \left(X^T X\right)^{-1} X^T y$ and $e = y - Xb$ are estimates
of $\beta$ and $\varepsilon$, respectively. This implies $\varepsilon = e - X(\beta - b)$ and

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\left\{e^T e - 2(\beta-b)^T X^T e
+ (\beta-b)^T X^T X (\beta-b)\right\}\right]$$

Since $X^T e = 0$ by construction, this simplifies as

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\left\{e^T e + (\beta-b)^T X^T X (\beta-b)\right\}\right]$$

or

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-p)s^2 + (\beta-b)^T X^T X (\beta-b)\right\}\right]$$

where $s^2 = \frac{1}{n-p}e^T e$.^7

The conjugate prior for linear regression is the Gaussian
$\left(\beta \mid \sigma^2; \beta_0, \sigma^2\Lambda_0^{-1}\right)$ $\times$ inverse chi square$\left(\sigma^2; \nu_0, \sigma_0^2\right)$

$$p\left(\beta \mid \sigma^2; \beta_0, \sigma^2\Lambda_0^{-1}\right)
p\left(\sigma^2; \nu_0, \sigma_0^2\right) \propto
\sigma^{-p} \exp\left[-\frac{(\beta-\beta_0)^T \Lambda_0 (\beta-\beta_0)}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)}
\exp\left[-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right]$$

Combining the prior with the likelihood gives a joint Gaussian$\left(\bar{\beta}, \sigma^2\Lambda_n^{-1}\right)$ $\times$
inverse chi square$\left(\nu_0+n, \sigma_n^2\right)$ posterior

$$p\left(\beta, \sigma^2 \mid y, X; \beta_0, \sigma^2\Lambda_0^{-1}, \nu_0, \sigma_0^2\right) \propto
\sigma^{-n} \exp\left[-\frac{(n-p)s^2}{2\sigma^2}\right]
\exp\left[-\frac{(\beta-b)^T X^T X (\beta-b)}{2\sigma^2}\right]$$
$$\times \sigma^{-p} \exp\left[-\frac{(\beta-\beta_0)^T \Lambda_0 (\beta-\beta_0)}{2\sigma^2}\right]
\left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)}
\exp\left[-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right]$$

Collecting terms and rewriting, we have

$$p\left(\beta, \sigma^2 \mid y, X; \beta_0, \sigma^2\Lambda_0^{-1}, \nu_0, \sigma_0^2\right) \propto
\left(\sigma^2\right)^{-\left[\frac{\nu_0+n}{2}+1\right]}
\exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\sigma^{-p} \exp\left[-\frac{1}{2\sigma^2}
\left(\beta-\bar{\beta}\right)^T \Lambda_n \left(\beta-\bar{\beta}\right)\right]$$

where

$$\bar{\beta} = \left(\Lambda_0 + X^T X\right)^{-1}\left(\Lambda_0\beta_0 + X^T X b\right)$$
$$\Lambda_n = \Lambda_0 + X^T X$$

and

$$\nu_n\sigma_n^2 = \nu_0\sigma_0^2 + (n-p)s^2
+ \left(\bar{\beta}-\beta_0\right)^T \Lambda_0 \left(\bar{\beta}-\beta_0\right)
+ \left(\bar{\beta}-b\right)^T X^T X \left(\bar{\beta}-b\right)$$

where $\nu_n = \nu_0 + n$. The conditional posterior of $\beta$ given $\sigma^2$ is
Gaussian$\left(\bar{\beta}, \sigma^2\Lambda_n^{-1}\right)$.

^7 Notice, the univariate Gaussian case is subsumed by linear regression where $X = \iota$
(a vector of ones). Then, the likelihood as described earlier,

$$\ell\left(\beta, \sigma^2 \mid y, X\right) \propto \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-p)s^2 + (\beta-b)^T X^T X (\beta-b)\right\}\right]$$

becomes

$$\ell\left(\mu, \sigma^2 \mid y, X\right) = \sigma^{-n}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-1)s^2 + n(\mu-\bar{y})^2\right\}\right]$$

where $\beta = \mu$, $b = \left(X^T X\right)^{-1} X^T y = \bar{y}$, $p = 1$, and $X^T X = n$.

Completing the square

The derivation of the above joint posterior follows from the matrix version
of completing the square where $\Lambda_0$ and $X^T X$ are square, symmetric,
full rank $p \times p$ matrices. The exponents from the prior for the mean and
likelihood are

$$(\beta-\beta_0)^T \Lambda_0 (\beta-\beta_0) + (\beta-b)^T X^T X (\beta-b)$$

Expanding and rearranging gives

$$\beta^T\left(\Lambda_0 + X^T X\right)\beta
- 2\left(\beta_0^T\Lambda_0 + b^T X^T X\right)\beta
+ \beta_0^T\Lambda_0\beta_0 + b^T X^T X b \qquad (6.1)$$

The latter two terms are constants not involving $\beta$ (and can be ignored
when writing the kernel for the conditional posterior) which we'll add to
when we complete the square. Now, write out the square centered around $\bar{\beta}$

$$\left(\beta-\bar{\beta}\right)^T\left(\Lambda_0 + X^T X\right)\left(\beta-\bar{\beta}\right)
= \beta^T\left(\Lambda_0 + X^T X\right)\beta
- 2\bar{\beta}^T\left(\Lambda_0 + X^T X\right)\beta
+ \bar{\beta}^T\left(\Lambda_0 + X^T X\right)\bar{\beta}$$

Substitute for $\bar{\beta}$ in the second term on the right hand side and the first two
terms are identical to the two terms in equation (6.1). Hence, the exponents
from the prior for the mean and likelihood in (6.1) are equal to

$$\left(\beta-\bar{\beta}\right)^T\left(\Lambda_0 + X^T X\right)\left(\beta-\bar{\beta}\right)
- \bar{\beta}^T\left(\Lambda_0 + X^T X\right)\bar{\beta}
+ \beta_0^T\Lambda_0\beta_0 + b^T X^T X b$$

which can be rewritten as

$$\left(\beta-\bar{\beta}\right)^T\left(\Lambda_0 + X^T X\right)\left(\beta-\bar{\beta}\right)
+ \left(\bar{\beta}-\beta_0\right)^T \Lambda_0 \left(\bar{\beta}-\beta_0\right)
+ \left(\bar{\beta}-b\right)^T X^T X \left(\bar{\beta}-b\right)$$

or (in the form analogous to the univariate Gaussian case)

$$\left(\beta-\bar{\beta}\right)^T\left(\Lambda_0 + X^T X\right)\left(\beta-\bar{\beta}\right)
+ \left(\beta_0-b\right)^T\left(\Lambda_0^{-1} + \Lambda_1^{-1}\right)^{-1}\left(\beta_0-b\right)$$

where $\Lambda_1 = X^T X$.

Stacked regression

Bayesian linear regression with conjugate priors works as if we have a prior
sample $\{X_0, y_0\}$, $\Lambda_0 = X_0^T X_0$, and initial estimates

$$\beta_0 = \left(X_0^T X_0\right)^{-1} X_0^T y_0$$

Then, we combine this initial "evidence" with new evidence to update our
beliefs in the form of the posterior. Not surprisingly, the posterior mean is
a weighted average of the two "samples" where the weights are based on
the relative precision of the two "samples".

Marginal posterior distributions

The marginal posterior for $\beta$ on integrating out $\sigma^2$ is a noncentral, scaled
multivariate Student $t_p\left(\bar{\beta}, \sigma_n^2\Lambda_n^{-1}, \nu_0+n\right)$

$$p(\beta \mid y, X) \propto \left[\nu_n\sigma_n^2
+ \left(\beta-\bar{\beta}\right)^T \Lambda_n \left(\beta-\bar{\beta}\right)\right]^{-\frac{\nu_0+n+p}{2}}
\propto \left[1 + \frac{\left(\beta-\bar{\beta}\right)^T \Lambda_n
\left(\beta-\bar{\beta}\right)}{\nu_n\sigma_n^2}\right]^{-\frac{\nu_0+n+p}{2}}$$

where $\Lambda_n = \Lambda_0 + X^T X$. This result corresponds with the univariate Gaussian
case and is derived analogously by transformation of variables where
$z = \frac{A}{2\sigma^2}$ and $A = \nu_n\sigma_n^2
+ \left(\beta-\bar{\beta}\right)^T \Lambda_n \left(\beta-\bar{\beta}\right)$. The marginal posterior for $\sigma^2$ is
inverted-chi square$\left(\sigma^2; \nu_n, \sigma_n^2\right)$.

Derivation of the marginal posterior for $\beta$ is as follows.

$$p(\beta \mid y) = \int_0^\infty p\left(\beta, \sigma^2 \mid y\right) d\sigma^2
= \int_0^\infty \left(\sigma^2\right)^{-\frac{n+\nu_0+p+2}{2}}
\exp\left[-\frac{A}{2\sigma^2}\right] d\sigma^2$$

Utilizing $\sigma^2 = \frac{A}{2z}$ and $dz = -\frac{A}{2(\sigma^2)^2}d\sigma^2$ or
$d\sigma^2 = -\frac{A}{2z^2}dz$ ($-1$ and $2$ are constants and can be ignored when
deriving the kernel),

$$p(\beta \mid y) \propto \int_0^\infty
\left(\frac{A}{2z}\right)^{-\frac{n+\nu_0+p+2}{2}} \exp[-z]\, \frac{A}{2z^2}\, dz
\propto A^{-\frac{n+\nu_0+p}{2}}
\int_0^\infty z^{\frac{n+\nu_0+p}{2}-1} \exp[-z]\, dz$$

The integral $\int_0^\infty z^{\frac{n+\nu_0+p}{2}-1} \exp[-z]\, dz$ is a constant since it is the kernel of
a gamma density and therefore can be ignored when deriving the kernel of
the marginal posterior for $\beta$

$$p(\beta \mid y) \propto A^{-\frac{n+\nu_0+p}{2}}
\propto \left[\nu_n\sigma_n^2
+ \left(\beta-\bar{\beta}\right)^T \Lambda_n \left(\beta-\bar{\beta}\right)\right]^{-\frac{n+\nu_0+p}{2}}
\propto \left[1 + \frac{\left(\beta-\bar{\beta}\right)^T \Lambda_n
\left(\beta-\bar{\beta}\right)}{\nu_n\sigma_n^2}\right]^{-\frac{n+\nu_0+p}{2}}$$

the kernel for a noncentral, scaled (multivariate) Student
$t_p\left(\beta; \bar{\beta}, \sigma_n^2\Lambda_n^{-1}, n+\nu_0\right)$.

6.7.3 Uninformative priors

Again, the case of uninformative priors is relatively straightforward. Since
priors convey no information, the prior for the mean is uniform (proportional
to a constant, $\Lambda_0 \rightarrow 0$) and the prior for $\sigma^2$ has $\nu_0 \rightarrow 0$ degrees of
freedom so that the joint prior is $p\left(\beta, \sigma^2\right) \propto \left(\sigma^2\right)^{-1}$.

The joint posterior is

$$p\left(\beta, \sigma^2 \mid y\right) \propto \left(\sigma^2\right)^{-\left[\frac{n}{2}+1\right]}
\exp\left[-\frac{1}{2\sigma^2}(y-X\beta)^T(y-X\beta)\right]$$

Since $y = Xb + e$ where $b = \left(X^T X\right)^{-1} X^T y$, the joint posterior can be
written

$$p\left(\beta, \sigma^2 \mid y\right) \propto \left(\sigma^2\right)^{-\left[\frac{n}{2}+1\right]}
\exp\left[-\frac{1}{2\sigma^2}\left\{(n-p)s^2
+ (\beta-b)^T X^T X (\beta-b)\right\}\right]$$

Or, factoring into the conditional posterior for $\beta$ and marginal for $\sigma^2$, we
have

$$p\left(\beta, \sigma^2 \mid y\right) \propto p\left(\sigma^2 \mid y\right) p\left(\beta \mid \sigma^2, y\right)
\propto \left(\sigma^2\right)^{-\left[\frac{n-p}{2}+1\right]}
\exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]
\sigma^{-p} \exp\left[-\frac{1}{2\sigma^2}(\beta-b)^T X^T X (\beta-b)\right]$$

where

$$\nu_n\sigma_n^2 = (n-p)s^2$$

Hence, the conditional posterior for $\beta$ given $\sigma^2$ is
Gaussian$\left(b, \sigma^2\left(X^T X\right)^{-1}\right)$. The marginal posterior for $\beta$ is multivariate
Student $t_p\left(\beta; b, s^2\left(X^T X\right)^{-1}, n-p\right)$, the classical estimator. Derivation of
the marginal posterior for $\beta$ is analogous to that above. Let $z = \frac{A}{2\sigma^2}$ where
$A = (n-p)s^2 + (\beta-b)^T X^T X (\beta-b)$. Integrating $\sigma^2$ out of the joint posterior
produces the marginal posterior for $\beta$.

$$p(\beta \mid y) \propto \int p\left(\beta, \sigma^2 \mid y\right) d\sigma^2
\propto \int \left(\sigma^2\right)^{-\frac{n+2}{2}}
\exp\left[-\frac{A}{2\sigma^2}\right] d\sigma^2$$

Substitution yields

$$p(\beta \mid y) \propto \int \left(\frac{A}{2z}\right)^{-\frac{n+2}{2}}
\exp[-z]\, \frac{A}{2z^2}\, dz
\propto A^{-\frac{n}{2}} \int z^{\frac{n}{2}-1} \exp[-z]\, dz$$

As before, the integral involves the kernel of a gamma distribution, a constant
which can be ignored. Therefore, we have

$$p(\beta \mid y) \propto A^{-\frac{n}{2}}
\propto \left[(n-p)s^2 + (\beta-b)^T X^T X (\beta-b)\right]^{-\frac{n}{2}}
\propto \left[1 + \frac{(\beta-b)^T X^T X (\beta-b)}{(n-p)s^2}\right]^{-\frac{n}{2}}$$

which is multivariate Student $t_p\left(\beta; b, s^2\left(X^T X\right)^{-1}, n-p\right)$.

6.8 Bayesian linear regression with general error structure

Now, we consider Bayesian regression with a more general error structure.
That is, the DGP is

$$y = X\beta + \varepsilon, \qquad (\varepsilon \mid X) \sim N(0, \Sigma)$$

First, we consider the known variance case, then take up the unknown
variance case.

6.8.1 Known variance

If the error variance, $\Sigma$, is known, we simply repeat the Bayesian linear
regression approach discussed above for the known variance case after
transforming all variables via the Cholesky decomposition of $\Sigma$. Let

$$\Sigma = \Gamma\Gamma^T$$

and

$$\Sigma^{-1} = \left(\Gamma^{-1}\right)^T \Gamma^{-1}$$

Then, the DGP is

$$\Gamma^{-1} y = \Gamma^{-1} X\beta + \Gamma^{-1}\varepsilon$$

where

$$\Gamma^{-1}\varepsilon \sim N(0, I_n)$$

With informed priors for $\beta$, $p(\beta \mid \Sigma) \sim N\left(\beta_0, V_{\beta_0}\right)$ where it is as if
$V_{\beta_0} = \left(X_0^T \Sigma_0^{-1} X_0\right)^{-1}$, the posterior distribution for $\beta$ conditional on $\Sigma$ is

$$p\left(\beta \mid \Sigma, y, X; \beta_0, V_{\beta_0}\right) \sim N\left(\bar{\beta}, V_\beta\right)$$

where

$$\bar{\beta} = \left(X_0^T \Sigma_0^{-1} X_0 + X^T \Sigma^{-1} X\right)^{-1}
\left(X_0^T \Sigma_0^{-1} X_0 \beta_0 + X^T \Sigma^{-1} X \hat{\beta}\right)
= \left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)^{-1}
\left(V_{\beta_0}^{-1}\beta_0 + X^T \Sigma^{-1} X \hat{\beta}\right)$$
$$\hat{\beta} = \left(X^T \Sigma^{-1} X\right)^{-1} X^T \Sigma^{-1} y$$

and

$$V_\beta = \left(X_0^T \Sigma_0^{-1} X_0 + X^T \Sigma^{-1} X\right)^{-1}
= \left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)^{-1}$$

It is instructive to once again backtrack to develop the conditional posterior
distribution. The likelihood function for known variance is

$$\ell(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}(y-X\beta)^T \Sigma^{-1} (y-X\beta)\right]$$

Conditional Gaussian priors are

$$p(\beta \mid \Sigma) \propto
\exp\left[-\frac{1}{2}(\beta-\beta_0)^T V_{\beta_0}^{-1} (\beta-\beta_0)\right]$$

The conditional posterior is the product of the prior and likelihood

$$p(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}\left\{(y-X\beta)^T \Sigma^{-1} (y-X\beta)
+ (\beta-\beta_0)^T V_{\beta_0}^{-1} (\beta-\beta_0)\right\}\right]$$
$$= \exp\left[-\frac{1}{2}\left\{y^T \Sigma^{-1} y - 2y^T \Sigma^{-1} X\beta
+ \beta^T X^T \Sigma^{-1} X\beta
+ \beta^T V_{\beta_0}^{-1}\beta - 2\beta_0^T V_{\beta_0}^{-1}\beta
+ \beta_0^T V_{\beta_0}^{-1}\beta_0\right\}\right]$$

The first and last terms in the exponent do not involve $\beta$ (are constants)
and can be ignored as they are absorbed through normalization. This leaves

$$p(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}\left\{-2y^T \Sigma^{-1} X\beta + \beta^T X^T \Sigma^{-1} X\beta
+ \beta^T V_{\beta_0}^{-1}\beta - 2\beta_0^T V_{\beta_0}^{-1}\beta\right\}\right]$$
$$= \exp\left[-\frac{1}{2}\left\{\beta^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\beta
- 2\left(y^T \Sigma^{-1} X + \beta_0^T V_{\beta_0}^{-1}\right)\beta\right\}\right]$$

which can be recognized as the expansion of the conditional posterior
claimed above.

$$p(\beta \mid \Sigma, y, X) \sim N\left(\bar{\beta}, V_\beta\right)
\propto \exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1}
\left(\beta-\bar{\beta}\right)\right]
= \exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T
\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\left(\beta-\bar{\beta}\right)\right]$$
$$= \exp\left[-\frac{1}{2}\left\{\beta^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\beta
- 2\bar{\beta}^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\beta
+ \bar{\beta}^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\bar{\beta}\right\}\right]$$
$$= \exp\left[-\frac{1}{2}\left\{\beta^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\beta
- 2\left(y^T \Sigma^{-1} X + \beta_0^T V_{\beta_0}^{-1}\right)\beta
+ \bar{\beta}^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\bar{\beta}\right\}\right]$$

The last term in the exponent is all constants (does not involve $\beta$) so it's
absorbed through normalization and disregarded for comparison of kernels.
Hence,

$$p(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1}
\left(\beta-\bar{\beta}\right)\right]
\propto \exp\left[-\frac{1}{2}\left\{\beta^T\left(V_{\beta_0}^{-1}
+ X^T \Sigma^{-1} X\right)\beta
- 2\left(y^T \Sigma^{-1} X + \beta_0^T V_{\beta_0}^{-1}\right)\beta\right\}\right]$$

as claimed.
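A brief sketch of the Cholesky-transformation approach with a known non-spherical covariance; the AR(1)-style $\Sigma$, the prior, and the data below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Illustrative data and prior (hypothetical): known non-spherical Sigma (AR(1)-like).
n, p, rho = 60, 2, 0.6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, -1.0]) + np.linalg.cholesky(Sigma) @ rng.normal(size=n)

beta0 = np.zeros(p)
V0 = np.eye(p) * 10.0                      # prior variance for beta

# Transform by the Cholesky factor so the errors become spherical,
# then apply the known-variance machinery.
G = np.linalg.cholesky(Sigma)
Xs, ys = np.linalg.solve(G, X), np.linalg.solve(G, y)

prec = np.linalg.inv(V0) + Xs.T @ Xs       # V0^{-1} + X' Sigma^{-1} X
V_beta = np.linalg.inv(prec)
beta_bar = V_beta @ (np.linalg.inv(V0) @ beta0 + Xs.T @ ys)

print(beta_bar)
```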

6.8.2 Unknown variance

Bayesian linear regression with unknown general error structure, $\Sigma$, is
something of a composite of ideas developed for exchangeable ($\sigma^2 I_n$ error
structure) Bayesian regression and the multivariate Gaussian case with mean
and variance unknown where each draw is an element of the $y$ vector and
$X$ is an $n \times p$ matrix of regressors. A Gaussian likelihood is

$$\ell(\beta, \Sigma \mid y, X) \propto |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}(y-X\beta)^T \Sigma^{-1} (y-X\beta)\right]$$
$$\propto |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{(y-Xb)^T \Sigma^{-1} (y-Xb)
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]$$
$$\propto |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{(n-p)s^2
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]$$

where $b = \left(X^T \Sigma^{-1} X\right)^{-1} X^T \Sigma^{-1} y$ and
$s^2 = \frac{1}{n-p}(y-Xb)^T \Sigma^{-1} (y-Xb)$.
Combine the likelihood with a Gaussian-inverted Wishart prior

$$p\left(\beta \mid \Sigma; \beta_0, V_{\beta_0}\right) p\left(\Sigma^{-1}; \Lambda, \nu\right)
\propto \exp\left[-\frac{1}{2}(\beta-\beta_0)^T V_{\beta_0}^{-1} (\beta-\beta_0)\right]
|\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+p+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]$$

where $tr(\cdot)$ is the trace of the matrix, it is as if
$V_{\beta_0} = \left(X_0^T \Sigma_0^{-1} X_0\right)^{-1}$, and $\nu$ is degrees of freedom, to produce the
joint posterior

$$p(\beta, \Sigma \mid y, X) \propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]
\exp\left[-\frac{1}{2}\left\{(n-p)s^2
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)
+ (\beta-\beta_0)^T V_{\beta_0}^{-1} (\beta-\beta_0)\right\}\right]$$

Completing the square

Completing the square involves the matrix analog to the univariate unknown
mean and variance case. Consider the exponent (in braces)

$$(n-p)s^2 + (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)
+ (\beta-\beta_0)^T V_{\beta_0}^{-1} (\beta-\beta_0)$$
$$= (n-p)s^2 + b^T X^T \Sigma^{-1} X b - 2\beta^T X^T \Sigma^{-1} X b
+ \beta^T X^T \Sigma^{-1} X\beta
+ \beta^T V_{\beta_0}^{-1}\beta - 2\beta^T V_{\beta_0}^{-1}\beta_0
+ \beta_0^T V_{\beta_0}^{-1}\beta_0$$
$$= (n-p)s^2 + \beta^T\left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)\beta
- 2\beta^T V_\beta^{-1}\bar{\beta}
+ b^T X^T \Sigma^{-1} X b + \beta_0^T V_{\beta_0}^{-1}\beta_0$$
$$= (n-p)s^2 + \beta^T V_\beta^{-1}\beta - 2\beta^T V_\beta^{-1}\bar{\beta}
+ b^T X^T \Sigma^{-1} X b + \beta_0^T V_{\beta_0}^{-1}\beta_0$$

where

$$\bar{\beta} = \left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)^{-1}
\left(V_{\beta_0}^{-1}\beta_0 + X^T \Sigma^{-1} X b\right)
= V_\beta\left(V_{\beta_0}^{-1}\beta_0 + X^T \Sigma^{-1} X b\right)$$

and $V_\beta = \left(V_{\beta_0}^{-1} + X^T \Sigma^{-1} X\right)^{-1}$.

Variation in $\beta$ around $\bar{\beta}$ is

$$\left(\beta-\bar{\beta}\right)^T V_\beta^{-1} \left(\beta-\bar{\beta}\right)
= \beta^T V_\beta^{-1}\beta - 2\beta^T V_\beta^{-1}\bar{\beta}
+ \bar{\beta}^T V_\beta^{-1}\bar{\beta}$$

The first two terms are identical to two terms in the posterior involving $\beta$
and there is apparently no recognizable joint kernel from these expressions.
The joint posterior is

$$p(\beta, \Sigma \mid y, X) \propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{tr\left(\Lambda\Sigma^{-1}\right)}{2}\right]
\exp\left[-\frac{1}{2}\left\{\left(\beta-\bar{\beta}\right)^T V_\beta^{-1}
\left(\beta-\bar{\beta}\right)
+ (n-p)s^2 - \bar{\beta}^T V_\beta^{-1}\bar{\beta}
+ b^T X^T \Sigma^{-1} X b + \beta_0^T V_{\beta_0}^{-1}\beta_0\right\}\right]$$
$$\propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{1}{2}\left\{tr\left(\Lambda\Sigma^{-1}\right) + (n-p)s^2
- \bar{\beta}^T V_\beta^{-1}\bar{\beta} + b^T X^T \Sigma^{-1} X b
+ \beta_0^T V_{\beta_0}^{-1}\beta_0\right\}\right]
\exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1}
\left(\beta-\bar{\beta}\right)\right]$$

Therefore, we write the conditional posteriors for the parameters of interest.
First, we focus on $\beta$ then we take up $\Sigma$.

The conditional posterior for $\beta$ conditional on $\Sigma$ involves collecting all
terms involving $\beta$. Hence, the conditional posterior for $\beta$ is
$(\beta \mid \Sigma) \sim N\left(\bar{\beta}, V_\beta\right)$ or

$$p(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}\left(\beta-\bar{\beta}\right)^T V_\beta^{-1}
\left(\beta-\bar{\beta}\right)\right]$$

Inverted-Wishart kernel

Now, we gather all terms involving $\Sigma$ and write the conditional posterior
for $\Sigma$.

$$p(\Sigma \mid \beta, y, X) \propto |\Lambda|^{\frac{\nu}{2}}
|\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{1}{2}\left\{tr\left(\Lambda\Sigma^{-1}\right) + (n-p)s^2
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]$$
$$\propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{1}{2}\left\{tr\left(\Lambda\Sigma^{-1}\right)
+ (y-Xb)^T \Sigma^{-1} (y-Xb)
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]$$
$$\propto |\Lambda|^{\frac{\nu}{2}} |\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{1}{2}tr\left\{\left(\Lambda + (y-Xb)(y-Xb)^T
+ X(b-\beta)(b-\beta)^T X^T\right)\Sigma^{-1}\right\}\right]$$

We can identify the kernel as an inverted-Wishart involving the trace of
a square, symmetric matrix, call it $\Lambda_n$, multiplied by $\Sigma^{-1}$.
The above conditional posterior can be rewritten as an
inverted-Wishart$\left(\Sigma^{-1}; \nu+n, \Lambda_n\right)$

$$p(\Sigma \mid \beta, y) \propto |\Lambda_n|^{\frac{\nu+n}{2}}
|\Sigma|^{-\frac{\nu+n+p+1}{2}}
\exp\left[-\frac{1}{2}tr\left(\Lambda_n\Sigma^{-1}\right)\right]$$

where

$$\Lambda_n = \Lambda + (y-Xb)(y-Xb)^T + X(b-\beta)(b-\beta)^T X^T$$

With conditional posteriors in hand, we can employ McMC strategies
(namely, a Gibbs sampler) to draw inferences around the parameters of
interest, $\beta$ and $\Sigma$. That is, we sequentially draw $\beta$ conditional on $\Sigma$ and
$\Sigma$, in turn, conditional on $\beta$. We discuss McMC strategies (both the Gibbs
sampler and its generalization, the Metropolis-Hastings algorithm) later.
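Below is a minimal Gibbs-sampler sketch that alternates between the two conditional posteriors stated above; the prior settings, sample size, and data are hypothetical, and the draw for $\Sigma$ follows the inverted-Wishart($\nu+n$, $\Lambda_n$) conditional as written in the text.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(9)

# Illustrative data and priors (hypothetical): small n keeps the n x n Sigma draws cheap.
n, p = 20, 2
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)

beta0, V0inv = np.zeros(p), np.eye(p) * 0.1   # Gaussian prior for beta
nu, Lam = n + 2, np.eye(n)                    # inverted-Wishart prior for Sigma

Sigma = np.eye(n)                             # initial value
draws = []
for it in range(500):
    # beta | Sigma: Gaussian with precision V0^{-1} + X' Sigma^{-1} X.
    Sinv = np.linalg.inv(Sigma)
    Vb = np.linalg.inv(V0inv + X.T @ Sinv @ X)
    bbar = Vb @ (V0inv @ beta0 + X.T @ Sinv @ y)
    beta = rng.multivariate_normal(bbar, Vb)

    # Sigma | beta: inverted-Wishart(nu + n, Lam_n) with Lam_n as stated in the text.
    b = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ y)   # GLS estimate under current Sigma
    e = y - X @ b
    Lam_n = Lam + np.outer(e, e) + X @ np.outer(b - beta, b - beta) @ X.T
    Sigma = invwishart.rvs(df=nu + n, scale=Lam_n, random_state=rng)

    draws.append(beta)

print(np.mean(draws, axis=0))
```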

6.8.3 (Nearly) uninformative priors

As discussed by Gelman, et al [2004], uninformative priors for this case are
awkward, at best. What does it mean to posit uninformative priors for a
regression with general error structure? Consistent probability assignment
suggests that either we have some priors about the correlation structure
or heteroskedastic nature of the errors (informative priors) or we know
nothing about the error structure (uninformative priors). If priors are
uninformative, then maximum entropy probability assignment suggests we
assign independent and unknown homoskedastic errors. Hence, we discuss
nearly uninformative priors for this general error structure regression.

The joint uninformative prior (with a locally uniform prior for $\beta$) is

$$p(\beta, \Sigma) \propto |\Sigma|^{-\frac{1}{2}}$$

and the joint posterior is

$$p(\beta, \Sigma \mid y, X) \propto |\Sigma|^{-\frac{1}{2}} |\Sigma|^{-\frac{n}{2}}
\exp\left[-\frac{1}{2}\left\{(n-p)s^2
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]$$
$$\propto |\Sigma|^{-\frac{n+1}{2}}
\exp\left[-\frac{1}{2}\left\{(n-p)s^2
+ (b-\beta)^T X^T \Sigma^{-1} X (b-\beta)\right\}\right]
\propto |\Sigma|^{-\frac{n+1}{2}}
\exp\left[-\frac{1}{2}tr\left(S(\beta)\Sigma^{-1}\right)\right]$$

where now $S(\beta) = (y-Xb)(y-Xb)^T + X(b-\beta)(b-\beta)^T X^T$. Then, the
conditional posterior for $\beta$ given $\Sigma$ is
$N\left(b, \left(X^T \Sigma^{-1} X\right)^{-1}\right)$

$$p(\beta \mid \Sigma, y, X) \propto
\exp\left[-\frac{1}{2}(\beta-b)^T X^T \Sigma^{-1} X (\beta-b)\right]$$

The conditional posterior for $\Sigma$ given $\beta$ is
inverted-Wishart$\left(\Sigma^{-1}; n, \Lambda_n\right)$

$$p(\Sigma \mid \beta, y) \propto |\Lambda_n|^{\frac{n}{2}} |\Sigma|^{-\frac{n+1}{2}}
\exp\left[-\frac{1}{2}tr\left(\Lambda_n\Sigma^{-1}\right)\right]$$

where

$$\Lambda_n = (y-Xb)(y-Xb)^T + X(b-\beta)(b-\beta)^T X^T$$

As with informed priors, a Gibbs sampler (sequential draws from the
conditional posteriors) can be employed to draw inferences for the uninformative
prior case.

Next, we discuss posterior simulation, a convenient and flexible strategy
for drawing inference from the evidence and (conjugate) priors.

6.9 Appendix: summary of conjugacy

Each entry below lists the focal parameter(s), the prior $p(\theta)$, the likelihood
$\ell(\theta \mid y)$, and the resulting posterior $p(\theta \mid y)$ (kernels only; normalizing
constants are omitted where convenient).

Discrete data:

beta-binomial
  prior (beta): $p^{a-1}(1-p)^{b-1}$
  likelihood (binomial): $p^s(1-p)^{n-s}$
  posterior (beta): $p^{a+s-1}(1-p)^{b+n-s-1}$

gamma-Poisson
  prior (gamma): $\lambda^{a-1}e^{-b\lambda}$
  likelihood (Poisson): $\lambda^s e^{-n\lambda}$
  posterior (gamma): $\lambda^{a+s-1}e^{-(b+n)\lambda}$

gamma-exponential
  prior (gamma): $\lambda^{a-1}e^{-b\lambda}$
  likelihood (exponential): $\lambda^n e^{-\lambda s}$
  posterior (gamma): $\lambda^{a+n-1}e^{-(b+s)\lambda}$

beta-negative binomial
  prior (beta): $p^{a-1}(1-p)^{b-1}$
  likelihood (negative binomial): $p^{nr}(1-p)^s$
  posterior (beta): $p^{a+nr-1}(1-p)^{b+s-1}$

beta-binomial-hypergeometric^8
  focal parameter: $k$, the unknown population success count ($N$ known
  population size, $n$ known sample size drawn without replacement, $x$ known
  sample success count)
  prior (beta-binomial): $\binom{n}{x}\frac{\Gamma(a+x)\Gamma(b+n-x)\Gamma(a+b)}{\Gamma(a)\Gamma(b)\Gamma(a+b+n)}$, $x = 0, 1, 2, \ldots, n$
  likelihood (hypergeometric): $\binom{k}{x}\binom{N-k}{n-x}\big/\binom{N}{n}$
  posterior (beta-binomial): $\binom{N-n}{k-x}\frac{\Gamma(a+k)\Gamma(b+N-k)\Gamma(a+b+n)}{\Gamma(a+x)\Gamma(b+n-x)\Gamma(a+b+N)}$,
  $k = x, x+1, \ldots, x+N-n$

multinomial-Dirichlet
  prior (Dirichlet, vector $\theta$): $\prod_{i=1}^K \theta_i^{a_i-1}$
  likelihood (multinomial): $\theta_1^{s_1}\cdots\theta_K^{s_K}$
  posterior (Dirichlet): $\prod_{i=1}^K \theta_i^{a_i+s_i-1}$

where $s = \sum_{i=1}^n y_i$, $\binom{n}{x} = \frac{n!}{x!(n-x)!}$,
$\Gamma(z) = \int_0^\infty e^{-t}t^{z-1}dt$, $\Gamma(n) = (n-1)!$ for $n$ a positive integer, and
$B(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$.

^8 See Dyer and Pierce [1993].
Continuous data:

Pareto-uniform
  focal parameter: $w$, the unknown upper bound of a uniform (lower bound $0$ known)
  prior (Pareto): $\frac{a b^a}{w^{a+1}}$
  likelihood (uniform): $\frac{1}{w^n}$, $w > \max(x_i)$
  posterior (Pareto): $\frac{(a+n)\max[b, x_i]^{a+n}}{w^{a+n+1}}$

Pareto-Pareto
  focal parameter: $\theta$, the unknown precision ($\alpha$ known shape)
  prior (Pareto): $\frac{a b^a}{\theta^{a+1}}$, $\theta > b$
  likelihood (Pareto): $\propto \theta^{\alpha n}$, $0 < \theta < \min(x_i)$
  posterior (Pareto): $\frac{(a-\alpha n)\, b^{a-\alpha n}}{\theta^{a-\alpha n+1}}$, $a > \alpha n$, $\theta > b$

gamma-Pareto
  focal parameter: $\alpha$, the unknown shape ($\beta$ known precision)
  prior (gamma): $\frac{\alpha^{a-1}e^{-\alpha/b}}{b^a\Gamma(a)}$, $\alpha > 0$
  likelihood (Pareto): $\frac{\alpha^n \beta^{n\alpha}}{m^{\alpha+1}}$, $m = \prod_{i=1}^n x_i$, $0 < \beta < \min(x_i)$
  posterior (gamma): $\frac{\alpha^{a+n-1}e^{-\alpha/b'}}{(b')^{a+n}\Gamma(a+n)}$,
  $b' = \left(\frac{1}{b} + \log m - n\log\beta\right)^{-1}$

gamma-exponential
  focal parameter: $\lambda$
  prior (gamma): $\frac{\lambda^{a-1}e^{-\lambda/b}}{\Gamma(a)b^a}$
  likelihood (exponential): $\lambda^n e^{-\lambda s}$, $s = \sum_{i=1}^n x_i$
  posterior (gamma): $\frac{\lambda^{a+n-1}e^{-\lambda/b'}}{\Gamma(a+n)(b')^{a+n}}$, $b' = \frac{b}{1+bs}$

inverse gamma-gamma
  focal parameter: $\beta$, the unknown rate ($\alpha$ known shape)
  prior (inverse gamma): $\frac{\beta^{-1-a}e^{-1/(b\beta)}}{\Gamma(a)b^a}$
  likelihood (gamma): $\propto \beta^{-n\alpha}e^{-s/\beta}$, $s = \sum_{i=1}^n x_i$
  posterior (inverse gamma): $\frac{\beta^{-1-a-n\alpha}e^{-1/(b'\beta)}}{\Gamma(a+n\alpha)(b')^{a+n\alpha}}$, $b' = \frac{b}{1+bs}$

conjugate prior-gamma (nonstandard)
  focal parameter: $\alpha$, the unknown shape ($\beta$ known rate)
  prior (nonstandard): $\frac{a^{\alpha-1}\beta^{\alpha c}}{\Gamma(\alpha)^b}$, $a, b, c > 0$
  likelihood (gamma): $\frac{\beta^{n\alpha}}{\Gamma(\alpha)^n}m^{\alpha-1}$, $m = \prod_{i=1}^n x_i$, $x_i > 0$
  posterior (nonstandard): $\frac{(am)^{\alpha-1}\beta^{\alpha(c+n)}}{\Gamma(\alpha)^{b+n}}$, $\alpha > 0$

normal-normal
  focal parameter: $\mu$ ($\sigma^2$ known)
  prior (normal): $\exp\left[-\frac{(\mu-\mu_0)^2}{2\tau_0^2}\right]$, $\tau_0^2 = \frac{\sigma^2}{\kappa_0}$
  likelihood (normal): $\prod_{i=1}^n \exp\left[-\frac{(y_i-\mu)^2}{2\sigma^2}\right]$
  posterior (normal): $\exp\left[-\frac{(\mu-\mu_n)^2}{2\tau_n^2}\right]$,
  $\mu_n = \frac{\kappa_0\mu_0+n\bar{y}}{\kappa_0+n}$, $\tau_n^2 = \frac{\sigma^2}{\kappa_0+n}$

inverse gamma-normal
  focal parameter: $\sigma^2$ ($\mu$ known)
  prior (inverse gamma): $\left(\sigma^2\right)^{-(a+1)}\exp\left[-\frac{1}{b\sigma^2}\right]$
  likelihood (normal): $\left(2\pi\sigma^2\right)^{-n/2}\exp\left[-\frac{ss}{2\sigma^2}\right]$, $ss = \sum_{i=1}^n(y_i-\mu)^2$
  posterior (inverse gamma): $\left(\sigma^2\right)^{-\left(\frac{n+2a}{2}+1\right)}\exp\left[-\frac{\frac{1}{b}+\frac{ss}{2}}{\sigma^2}\right]$

inverse gamma-normal (joint)
  focal parameters: $\mu, \sigma^2$
  prior (normal $\mid \sigma^2$ $\times$ inverse gamma):
  $\left(\sigma^2\right)^{-\frac{1}{2}}\exp\left[-\frac{\kappa_0(\mu-\mu_0)^2}{2\sigma^2}\right]
  \left(\sigma^2\right)^{-(a+1)}\exp\left[-\frac{1}{b\sigma^2}\right]$
  likelihood (normal): $\left(2\pi\sigma^2\right)^{-n/2}\exp\left[-\frac{\sum_{i=1}^n(y_i-\mu)^2}{2\sigma^2}\right]$
  joint posterior (normal $\mid \sigma^2$ $\times$ inverse gamma):
  $\left(\sigma^2\right)^{-\frac{1}{2}}\exp\left[-\frac{\kappa_0'(\mu-\mu_0')^2}{2\sigma^2}\right]
  \left(\sigma^2\right)^{-(a'+1)}\exp\left[-\frac{1}{b'\sigma^2}\right]$
  Student t marginal posterior for $\mu$:
  $\left[1 + \frac{\kappa_0' b'(\mu-\mu_0')^2}{2}\right]^{-\frac{2a'+1}{2}}$
  inverse gamma marginal posterior for $\sigma^2$:
  $\left(\sigma^2\right)^{-(a'+1)}\exp\left[-\frac{1}{b'\sigma^2}\right]$
  where $a' = a + \frac{n}{2}$, $\kappa_0' = \kappa_0 + n$,
  $\frac{1}{b'} = \frac{1}{b} + \frac{ss}{2} + \frac{\kappa_0 n(\bar{y}-\mu_0)^2}{2(\kappa_0+n)}$,
  $\mu_0' = \frac{\kappa_0\mu_0+n\bar{y}}{\kappa_0+n}$, and $ss = \sum_{i=1}^n(y_i-\bar{y})^2$

bilateral bivariate Pareto-uniform
  focal parameters: $l, u$, the unknown endpoints of a uniform
  prior (bilateral bivariate Pareto): $\frac{a(a+1)(r_2-r_1)^a}{(u-l)^{a+2}}$, $l < r_1$, $u > r_2$
  likelihood (uniform): $\frac{1}{(u-l)^n}$
  posterior (bilateral bivariate Pareto): $\frac{(a+n)(a+n+1)(r_2'-r_1')^{a+n}}{(u-l)^{a+n+2}}$,
  $r_1' = \min(r_1, x_i)$, $r_2' = \max(r_2, x_i)$

normal-lognormal
  focal parameter: $\mu$ ($\sigma^2$ known)
  prior (normal): $\exp\left[-\frac{(\mu-\mu_0)^2}{2\tau_0^2}\right]$, $\tau_0^2 = \frac{\sigma^2}{\kappa_0}$
  likelihood (lognormal): $\prod_{i=1}^n \exp\left[-\frac{(\log y_i-\mu)^2}{2\sigma^2}\right]$
  posterior (normal): $\exp\left[-\frac{(\mu-\mu_n)^2}{2\tau_n^2}\right]$,
  $\mu_n = \frac{\kappa_0\mu_0+n\overline{\log y}}{\kappa_0+n}$, $\tau_n^2 = \frac{\sigma^2}{\kappa_0+n}$

inverse gamma-lognormal
  focal parameter: $\sigma^2$ ($\mu$ known)
  prior (inverse gamma): $\left(\sigma^2\right)^{-(a+1)}\exp\left[-\frac{1}{b\sigma^2}\right]$
  likelihood (lognormal): $\left(2\pi\sigma^2\right)^{-n/2}\exp\left[-\frac{lss}{2\sigma^2}\right]$,
  $lss = \sum_{i=1}^n(\log y_i-\mu)^2$
  posterior (inverse gamma): $\left(\sigma^2\right)^{-\left(\frac{n+2a}{2}+1\right)}\exp\left[-\frac{\frac{1}{b}+\frac{lss}{2}}{\sigma^2}\right]$

Continuous data: multivariate normal inverted Wishart-multivariate normal
  focal parameters: $\mu, \Sigma$

  prior $p(\mu, \Sigma)$:
  multivariate normal $p(\mu \mid \Sigma) \propto |\Sigma|^{-\frac{1}{2}}
  \exp\left[-\frac{\kappa_0}{2}(\mu-\mu_0)^T\Sigma^{-1}(\mu-\mu_0)\right]$
  inverted Wishart $p(\Sigma) \propto |\Lambda|^{\frac{\nu}{2}}|\Sigma|^{-\frac{\nu+k+1}{2}}
  \exp\left[-\frac{tr(\Lambda\Sigma^{-1})}{2}\right]$

  likelihood $\ell(\mu, \Sigma \mid y)$:
  multivariate normal $\propto |\Sigma|^{-\frac{n}{2}}
  \exp\left[-\frac{1}{2}\left\{(n-1)s^2 + n(\bar{y}-\mu)^T\Sigma^{-1}(\bar{y}-\mu)\right\}\right]$

  joint posterior $p(\mu, \Sigma \mid y)$:
  multivariate normal $p(\mu \mid \Sigma, y) \propto
  \exp\left[-\frac{\kappa_0+n}{2}(\mu-\mu_n)^T\Sigma^{-1}(\mu-\mu_n)\right]$
  inverted Wishart $p(\Sigma \mid y) \propto |\Lambda_n|^{\frac{\nu+n}{2}}
  |\Sigma|^{-\frac{\nu+n+k+1}{2}}\exp\left[-\frac{tr(\Lambda_n\Sigma^{-1})}{2}\right]$

  marginal posterior:
  multivariate Student t $p(\mu \mid y) \propto
  \left[1 + (\kappa_0+n)(\mu-\mu_n)^T\Lambda_n^{-1}(\mu-\mu_n)\right]^{-\frac{1}{2}(\nu+n+1)}$
  inverted Wishart $p(\Sigma \mid y) \propto |\Lambda_n|^{\frac{\nu+n}{2}}
  |\Sigma|^{-\frac{\nu+n+k+1}{2}}\exp\left[-\frac{tr(\Lambda_n\Sigma^{-1})}{2}\right]$

  where $tr(\cdot)$ is the trace of a matrix,
  $\mu_n = \frac{\kappa_0\mu_0+n\bar{y}}{\kappa_0+n}$,
  $s^2 = \frac{1}{n-1}\sum_{i=1}^n(y_i-\bar{y})^T\Sigma^{-1}(y_i-\bar{y})$, and
  $\Lambda_n = \Lambda + \sum_{i=1}^n(y_i-\bar{y})(y_i-\bar{y})^T
  + \frac{\kappa_0 n}{\kappa_0+n}(\mu_0-\bar{y})(\mu_0-\bar{y})^T$

Continuous data: linear regression (normal inverse chi square-normal)
  focal parameters: $\beta, \sigma^2$

  prior $p(\beta, \sigma^2)$:
  normal $p(\beta \mid \sigma^2) \propto \sigma^{-p}
  \exp\left[-\frac{1}{2\sigma^2}(\beta-\beta_0)^T\Lambda_0(\beta-\beta_0)\right]$
  inverse chi square $p(\sigma^2) \propto \left(\sigma^2\right)^{-\left(\frac{\nu_0}{2}+1\right)}
  \exp\left[-\frac{\nu_0\sigma_0^2}{2\sigma^2}\right]$

  normal likelihood $\ell(\beta, \sigma^2 \mid y, X)$:
  $\propto \sigma^{-n}\exp\left[-\frac{1}{2\sigma^2}
  \left\{e^Te + (\beta-b)^TX^TX(\beta-b)\right\}\right]$

  joint posterior $p(\beta, \sigma^2 \mid y, X)$:
  normal $p(\beta \mid \sigma^2, y, X) \propto \sigma^{-p}
  \exp\left[-\frac{1}{2\sigma^2}(\beta-\bar{\beta})^T\Lambda_n(\beta-\bar{\beta})\right]$
  inverse chi square $p(\sigma^2 \mid y, X) \propto
  \left(\sigma^2\right)^{-\left[\frac{\nu_0+n}{2}+1\right]}
  \exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]$

  marginal posterior:
  Student t $p(\beta \mid y, X) \propto
  \left[1 + \frac{(\beta-\bar{\beta})^T\Lambda_n(\beta-\bar{\beta})}{\nu_n\sigma_n^2}\right]^{-\frac{\nu_0+n+p}{2}}$
  inverse chi square $p(\sigma^2 \mid y, X) \propto
  \left(\sigma^2\right)^{-\left[\frac{\nu_0+n}{2}+1\right]}
  \exp\left[-\frac{\nu_n\sigma_n^2}{2\sigma^2}\right]$

  where
  $e = y - Xb$, $b = \left(X^TX\right)^{-1}X^Ty$,
  $\bar{\beta} = \left(\Lambda_0 + X^TX\right)^{-1}\left(\Lambda_0\beta_0 + X^TXb\right)$,
  $\Lambda_n = \Lambda_0 + X^TX$,
  $\nu_n\sigma_n^2 = \nu_0\sigma_0^2 + e^Te
  + \left(\bar{\beta}-\beta_0\right)^T\Lambda_0\left(\bar{\beta}-\beta_0\right)
  + \left(\bar{\beta}-b\right)^TX^TX\left(\bar{\beta}-b\right)$,
  and $\nu_n = \nu_0 + n$

Continuous data: linear regression with general variance (normal inverted Wishart-normal)
  focal parameters: $\beta, \Sigma$

  prior $p(\beta, \Sigma)$:
  normal $p(\beta \mid \Sigma) \propto
  \exp\left[-\frac{1}{2}(\beta-\beta_0)^TV_{\beta_0}^{-1}(\beta-\beta_0)\right]$
  inverted Wishart $p(\Sigma) \propto |\Lambda|^{\frac{\nu}{2}}|\Sigma|^{-\frac{\nu+p+1}{2}}
  \exp\left[-\frac{tr(\Lambda\Sigma^{-1})}{2}\right]$

  normal likelihood $\ell(\beta, \Sigma \mid y, X)$:
  $\propto |\Sigma|^{-\frac{n}{2}}\exp\left[-\frac{1}{2}
  \left\{(n-p)s^2 + (\beta-b)^TX^T\Sigma^{-1}X(\beta-b)\right\}\right]$

  conditional posterior:
  normal $p(\beta \mid \Sigma, y, X) \propto
  \exp\left[-\frac{1}{2}(\beta-\bar{\beta})^TV_\beta^{-1}(\beta-\bar{\beta})\right]$
  inverted Wishart $p(\Sigma \mid \beta, y, X) \propto |\Lambda_n|^{\frac{\nu+n}{2}}
  |\Sigma|^{-\frac{\nu+n+p+1}{2}}\exp\left[-\frac{tr(\Lambda_n\Sigma^{-1})}{2}\right]$

  where $tr(\cdot)$ is the trace of a matrix,
  $s^2 = \frac{1}{n-p}(y-Xb)^T\Sigma^{-1}(y-Xb)$,
  $b = \left(X^T\Sigma^{-1}X\right)^{-1}X^T\Sigma^{-1}y$,
  $V_\beta = \left(V_{\beta_0}^{-1} + X^T\Sigma^{-1}X\right)^{-1}$,
  $\bar{\beta} = \left(V_{\beta_0}^{-1} + X^T\Sigma^{-1}X\right)^{-1}
  \left(V_{\beta_0}^{-1}\beta_0 + X^T\Sigma^{-1}Xb\right)$,
  $\Lambda_n = \Lambda + (y-Xb)(y-Xb)^T + X(b-\beta)(b-\beta)^TX^T$,
  and $V_{\beta_0} = \left(X_0^T\Sigma_0^{-1}X_0\right)^{-1}$