
Structural Macroeconometrics

Chapter 2. Approximating and Solving DSGE Models

David N. DeJong and Chetan Dave


Empirical investigations involving DSGE models invariably require the completion of two preparatory stages. One stage involves preparation of the model to be analyzed; this is the focus of the current chapter. The other involves preparation of the data; this is the focus of Chapter 3.

Regarding the model-preparation stage, DSGE models typically include three components: a characterization of the environment in which decision makers reside; a set of decision rules that dictate their behavior; and a characterization of the uncertainty they face in making decisions. Collectively, these components take the form of a non-linear system of expectational difference equations. Such systems are not directly amenable to empirical analysis, but can be converted into empirically implementable systems through the completion of the general two-step process outlined in this chapter.

The first step involves the construction of a linear approximation of the model. Just as non-linear equations may be approximated linearly via the use of Taylor-series expansions, so too may non-linear systems of expectational difference equations. The second step involves the solution of the resulting linear approximation of the system. The solution is written in terms of variables expressed as deviations from steady state values, and is directly amenable to empirical implementation.

While this chapter is intended to be self-contained, far more detail is provided in the literature cited below. Here, the goal is to impart an intuitive understanding of the model-preparation stage, and to provide guidance regarding its implementation. In addition, we note that there are alternatives to the particular approaches to model approximation and solution presented in this chapter. A leading alternative to model approximation is provided by perturbation methods; for a textbook discussion see Judd (1998). And a leading alternative to the approaches to model solution presented here is based on the use of projection methods. This alternative solution technique is discussed in this text in Chapter 10; for additional textbook discussions, see Judd (1998), Adda and Cooper (2003) and Sargent and Ljungqvist (2004).

1 Linearization

1.1 Taylor Series Approximation

Consider the following n-equation system of non-linear difference equations:

$$\Gamma(x_{t+1}, x_t) = 0, \tag{1}$$

where the $x$'s and $0$ are $n \times 1$ vectors. The parameters of the system are contained in the vector $\mu$. DSGE models are typically represented in terms of such a system, augmented to include sources of stochastic behavior. We abstract from the stochastic component of the model in the linearization stage, since models are typically designed to incorporate stochastic behavior directly into the linearized system (a modest example is provided in Section 2.2; detailed examples are provided in Chapter 5). Also, while expectational terms are typically included among the variables in $x$ (e.g., variables of the form $E_t(x_{t+j})$, where $E_t$ is the conditional expectations operator), these are not singled out at this point, as they receive no special treatment in the linearization stage.

Before proceeding, note that while (1) is written as a first-order system, higher-order specifications may be written as first-order systems by augmenting $x_t$ to include variables observed at different points in time. For example, the $p$th-order equation

$$\omega_{t+1} = \alpha_1\omega_t + \alpha_2\omega_{t-1} + \cdots + \alpha_p\omega_{t-p+1}$$

may be written in first-order form as

$$\begin{bmatrix} \omega_{t+1} \\ \omega_t \\ \vdots \\ \omega_{t-p+2} \end{bmatrix} - \begin{bmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_p \\ 1 & 0 & \cdots & 0 \\ \vdots & \ddots & & \vdots \\ 0 & \cdots & 1 & 0 \end{bmatrix}\begin{bmatrix} \omega_t \\ \omega_{t-1} \\ \vdots \\ \omega_{t-p+1} \end{bmatrix} = 0,$$

or more compactly, as

$$x_{t+1} - Ax_t = 0, \qquad x_{t+1} = [\omega_{t+1}, \omega_t, \ldots, \omega_{t-p+2}]'.$$

Thus (1) is sufficiently general to characterize a system of arbitrary order.

The goal of the linearization step is to convert (1) into a linear system, which can then be solved using any of the procedures outlined below. Anticipating the notation that follows in Section 2.2, the form for the system we seek is given by

$$Ax_{t+1} = Bx_t. \tag{2}$$

Denoting the steady state of the system as $\Gamma(\bar{x}) = 0$, where $\bar{x}$ is understood to be a function of $\mu$, linearization is accomplished via a first-order Taylor series approximation of (1) around its steady state,¹ given by

$$0 \approx \Gamma(\bar{x}) + \frac{\partial\Gamma}{\partial x_t}(\bar{x})(x_t - \bar{x}) + \frac{\partial\Gamma}{\partial x_{t+1}}(\bar{x})(x_{t+1} - \bar{x}), \tag{3}$$

where $(x_t - \bar{x})$ is $n \times 1$, and the $n \times n$ matrix $\frac{\partial\Gamma}{\partial x_t}(\bar{x})$ denotes the Jacobian of $\Gamma(x_{t+1}, x_t)$ with respect to $x_t$ evaluated at $\bar{x}$. That is, the $(i,j)$th element of $\frac{\partial\Gamma}{\partial x_t}(\bar{x})$ is the derivative of the $i$th equation in (1) with respect to the $j$th element of $x_t$. Defining $A = \frac{\partial\Gamma}{\partial x_{t+1}}(\bar{x})$ and $B = -\frac{\partial\Gamma}{\partial x_t}(\bar{x})$ yields (2), where variables are expressed as deviations from steady state values.

¹ It is also possible to work with higher-order approximations; e.g., see Schmitt-Grohé and Uribe (2002).

1.2 Logarithmic Approximations

It is often useful to work with log-linear approximations of (1), due to their ease of interpretation. For illustration, we begin with a simple example in which the system is $1 \times 1$, and can be written as

$$x_{t+1} = f(x_t).$$

Taking logs and using the identity $x_t = e^{\log x_t}$, the system becomes

$$\log x_{t+1} = \log\left[f\left(e^{\log x_t}\right)\right].$$

Then approximating,

$$\log x_{t+1} \approx \log[f(\bar{x})] + \frac{f'(\bar{x})\bar{x}}{f(\bar{x})}\left(\log(x_t) - \log(\bar{x})\right),$$

or since $\log[f(\bar{x})] = \log\bar{x}$,

$$\log\frac{x_{t+1}}{\bar{x}} \approx \frac{f'(\bar{x})\bar{x}}{f(\bar{x})}\log\left(\frac{x_t}{\bar{x}}\right).$$

Note that $\frac{f'(\bar{x})\bar{x}}{f(\bar{x})}$ is the elasticity of $x_{t+1}$ with respect to $x_t$. Moreover, writing $x_t$ as $\bar{x} + \varepsilon_t$, where $\varepsilon_t$ denotes a small departure from steady state,

$$\log\frac{x_t}{\bar{x}} = \log\left(1 + \frac{\varepsilon_t}{\bar{x}}\right) \approx \frac{\varepsilon_t}{\bar{x}},$$

and thus $\log\frac{x_t}{\bar{x}}$ is seen as expressing $x_t$ in terms of its percentage deviation from steady state.
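As a quick numerical check of this interpretation (a Python sketch; the law of motion and the numbers are purely illustrative), consider $f(x) = x^{0.3}$, for which the elasticity $\frac{f'(\bar{x})\bar{x}}{f(\bar{x})}$ equals 0.3:

```python
import numpy as np

f = lambda x: x ** 0.3   # illustrative law of motion; elasticity is 0.3
x_bar = 1.0              # steady state solves x = f(x)
x_t = x_bar * 1.01       # a one-percent departure from steady state

exact = np.log(f(x_t) / x_bar)       # log deviation of x_{t+1}
approx = 0.3 * np.log(x_t / x_bar)   # log-linear prediction
print(exact, approx)  # identical here: the approximation is exact for a power function
```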

Returning to the $n$-equation case, re-write (1) as

$$\Gamma_1(x_{t+1}, x_t) = \Gamma_2(x_{t+1}, x_t), \tag{4}$$

since it is not possible to take logs of both sides of (1). Again using the identity $x_t = e^{\log x_t}$, taking logs of (4) and rearranging yields

$$\log\Gamma_1\left(e^{\log x_{t+1}}, e^{\log x_t}\right) - \log\Gamma_2\left(e^{\log x_{t+1}}, e^{\log x_t}\right) = 0. \tag{5}$$

The first-order Taylor series approximation of this converted system yields the log-linear approximation we seek. The approximation for the first term is

$$\log\Gamma_1(x_{t+1}, x_t) \approx \log[\Gamma_1(\bar{x})] + \frac{\partial\log[\Gamma_1]}{\partial\log(x_t)}(\bar{x})\left[\log\left(\frac{x_t}{\bar{x}}\right)\right] + \frac{\partial\log[\Gamma_1]}{\partial\log(x_{t+1})}(\bar{x})\left[\log\left(\frac{x_{t+1}}{\bar{x}}\right)\right], \tag{6}$$

where $\frac{\partial\log[\Gamma_1]}{\partial\log(x_t)}(\bar{x})$ and $\frac{\partial\log[\Gamma_1]}{\partial\log(x_{t+1})}(\bar{x})$ are $n \times n$ Jacobian matrices, and $\log(\frac{x_t}{\bar{x}})$ and $\log(\frac{x_{t+1}}{\bar{x}})$ are $n \times 1$ vectors. The approximation of the second term in (5) is analogous. Then combining the two approximations and rearranging yields (2), which takes the specific form

$$\left[\frac{\partial\log[\Gamma_1]}{\partial\log(x_{t+1})}(\bar{x}) - \frac{\partial\log[\Gamma_2]}{\partial\log(x_{t+1})}(\bar{x})\right]\log\left(\frac{x_{t+1}}{\bar{x}}\right) = -\left[\frac{\partial\log[\Gamma_1]}{\partial\log(x_t)}(\bar{x}) - \frac{\partial\log[\Gamma_2]}{\partial\log(x_t)}(\bar{x})\right]\log\left(\frac{x_t}{\bar{x}}\right). \tag{7}$$

The elements of $A$ and $B$ are now elasticities, and the variables of the system are expressed in terms of percentage deviations from steady state.

In Part II of the text we will discuss several empirical applications that involve the need to approximate (1) or (5) repeatedly for alternative values of $\mu$. In such cases, it is useful to automate the linearization stage via the use of a numerical gradient calculation procedure. We introduce this briefly here in the context of approximating (1); the approximation of (5) is analogous.

Gradient procedures are designed to construct the Jacobian matrices in (3) without analytical expressions for the required derivatives. Derivatives are instead calculated numerically, given the provision of three components by the user. The first two components are a specification of $\mu$ and a corresponding specification of $\bar{x}$. The third component is a procedure designed to return the $n \times 1$ vector of values $z$ generated by (1) for two cases. In the first case $x_{t+1}$ is treated as variable and $x_t$ is fixed at $\bar{x}$; in the second case $x_t$ is treated as variable and $x_{t+1}$ is fixed at $\bar{x}$. The gradient procedure delivers the Jacobian $\frac{\partial\Gamma}{\partial x_{t+1}}(\bar{x}) = A$ in the first case and $\frac{\partial\Gamma}{\partial x_t}(\bar{x}) = -B$ in the second case. Examples follow.
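As an illustration of how such a procedure might be automated, the following is a minimal Python sketch (the routine name and the one-equation system are hypothetical illustrations; the implementations referenced in this text use GAUSS or Matlab):

```python
import numpy as np

def numerical_jacobian(f, x0, h=1e-6):
    """Central-difference Jacobian of a vector-valued f evaluated at x0."""
    n = x0.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return J

# Hypothetical one-equation instance of (1): Gamma(x_next, x) = x_next - mu*x.
mu = 0.9
Gamma = lambda x_next, x: x_next - mu * x
x_bar = np.array([0.0])  # steady state of the illustration

# Case 1: x_{t+1} treated as variable, x_t fixed at the steady state -> A.
A = numerical_jacobian(lambda v: Gamma(v, x_bar), x_bar)
# Case 2: x_t treated as variable, x_{t+1} fixed at the steady state -> -B.
B = -numerical_jacobian(lambda v: Gamma(x_bar, v), x_bar)
print(A, B)  # [[1.]] [[0.9]], i.e., A x_{t+1} = B x_t
```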

1.3 Examples

Consider the simple resource constraint

$$y_t = c_t + i_t,$$

indicating that output ($y_t$) can be either consumed ($c_t$) or invested ($i_t$). This equation is already linear. In the notation of (1) the equation appears as

$$y_t - c_t - i_t = 0,$$

and in terms of (3), with $x_t = [y_t\ c_t\ i_t]'$ and the equation representing the $i$th of the system, the $i$th row of $\frac{\partial\Gamma}{\partial x_t}(\bar{x})$ is $[1\ \ {-1}\ \ {-1}]$. In the notation of (5), the equation appears as

$$\log y_t - \log\left[\exp(\log c_t) + \exp(\log i_t)\right] = 0,$$

and in terms of (7), the $i$th row of the right-hand-side matrix is

$$\frac{\partial\log[\Gamma_1]}{\partial\log(x_t)}(\bar{x}) - \frac{\partial\log[\Gamma_2]}{\partial\log(x_t)}(\bar{x}) = \left[1\ \ -\frac{\bar{c}}{\bar{c}+\bar{i}}\ \ -\frac{\bar{i}}{\bar{c}+\bar{i}}\right]. \tag{8}$$

Finally, to use a gradient procedure to accomplish log-linear approximation, the $i$th return of the system-evaluation procedure would be

$$z_i = \log y_t - \log\left[\exp(\log c_t) + \exp(\log i_t)\right].$$
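Continuing in the same vein, the elasticity row (8) can be verified numerically by differentiating $z_i$ with respect to the logged variables at steady state (Python; the steady state values $\bar{c} = 0.8$ and $\bar{i} = 0.2$ are hypothetical):

```python
import numpy as np

# z(log y, log c, log i) for the resource constraint, as in the text.
z = lambda lx: lx[0] - np.log(np.exp(lx[1]) + np.exp(lx[2]))

c_bar, i_bar = 0.8, 0.2
lx_bar = np.log(np.array([c_bar + i_bar, c_bar, i_bar]))

h, row = 1e-6, np.zeros(3)
for j in range(3):
    e = np.zeros(3)
    e[j] = h
    row[j] = (z(lx_bar + e) - z(lx_bar - e)) / (2 * h)
print(row)  # approximately [1.0, -0.8, -0.2] = [1, -c/(c+i), -i/(c+i)]
```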

As an additional example consider the Cobb-Douglas production function

$$y_t = a_tk_t^{\alpha}n_t^{1-\alpha}, \qquad \alpha \in (0,1),$$

where output is produced by use of capital ($k_t$) and labor ($n_t$) and is subject to a technology or productivity shock ($a_t$). Linear approximation of this equation is left as an exercise. To accomplish log-linear approximation, taking logs of the equation and rearranging maps into the notation of (5) as

$$\log y_t - \log a_t - \alpha\log k_t - (1-\alpha)\log n_t = 0.$$

With $x_t = \left[\log\frac{y_t}{\bar{y}}\ \ \log\frac{a_t}{\bar{a}}\ \ \log\frac{k_t}{\bar{k}}\ \ \log\frac{n_t}{\bar{n}}\right]'$, the $i$th row of the right-hand-side matrix in (7) is

$$\frac{\partial\log[\Gamma_1]}{\partial\log(x_t)}(\bar{x}) - \frac{\partial\log[\Gamma_2]}{\partial\log(x_t)}(\bar{x}) = \left[1\ \ {-1}\ \ {-\alpha}\ \ {-(1-\alpha)}\right]. \tag{9}$$

And to use a gradient procedure to accomplish log-linear approximation, the $i$th return of the system-evaluation procedure would be

$$z_i = \log y_t - \log a_t - \alpha\log k_t - (1-\alpha)\log n_t.$$

2 Solution Methods

Having approximated the model as in (2), we next seek a solution of the form

$$x_{t+1} = Fx_t + G\varepsilon_{t+1}. \tag{10}$$

This solution represents the time series behavior of $\{x_t\}$ as a function of $\{\varepsilon_t\}$, where $\varepsilon_t$ is a vector of exogenous innovations, or as frequently referenced, structural shocks.

Here we present four popular approaches to the derivation of (10) from (2). Each approach involves an alternative way of expressing (2), and employs specialized notation. Before describing these approaches, we introduce an explicit example of (2), which we will map into the notation employed under each approach to aid with the exposition.

The example is a linearized stochastic version of Ramsey's (1928) optimal growth model:

$$\tilde{y}_{t+1} - \tilde{a}_{t+1} - \alpha\tilde{k}_{t+1} = 0 \tag{11}$$

$$\tilde{y}_{t+1} - \phi_c\tilde{c}_{t+1} - \phi_i\tilde{i}_{t+1} = 0 \tag{12}$$

$$\phi_{1c}E_t(\tilde{c}_{t+1}) + \phi_aE_t(\tilde{a}_{t+1}) + \phi_kE_t(\tilde{k}_{t+1}) + \phi_{2c}\tilde{c}_t = 0 \tag{13}$$

$$\tilde{k}_{t+1} - \delta_k\tilde{k}_t - \delta_i\tilde{i}_t = 0 \tag{14}$$

$$\tilde{a}_{t+1} - \rho\tilde{a}_t = \varepsilon_{t+1}. \tag{15}$$

The variables $\{\tilde{y}_t, \tilde{c}_t, \tilde{i}_t, \tilde{k}_t, \tilde{a}_t\}$ represent output, consumption, investment, physical capital, and a productivity shock, all expressed as logged deviations from steady state values. The variable $\varepsilon_t$ is a serially uncorrelated stochastic process. The vector

$$\mu = [\alpha\ \ \phi_c\ \ \phi_i\ \ \phi_{1c}\ \ \phi_a\ \ \phi_k\ \ \phi_{2c}\ \ \delta_k\ \ \delta_i\ \ \rho]'$$

contains the "deep" parameters of the model.

Two modifications enable a mapping of the model into a specification resembling (2). First, the expectations operator $E_t(\cdot)$ is dropped from (13), introducing an expectational error into the modified equation; let this error be denoted as $\eta^c_{t+1}$. Next, the innovation term $\varepsilon_{t+1}$ in (15) must be accommodated. The resulting expression is

$$
\underbrace{\begin{bmatrix}
1 & 0 & 0 & -\alpha & -1 \\
0 & 0 & 0 & 0 & 1 \\
1 & -\phi_c & -\phi_i & 0 & 0 \\
0 & \phi_{1c} & 0 & \phi_k & \phi_a \\
0 & 0 & 0 & 1 & 0
\end{bmatrix}}_{A}
\underbrace{\begin{bmatrix} \tilde{y}_{t+1} \\ \tilde{c}_{t+1} \\ \tilde{i}_{t+1} \\ \tilde{k}_{t+1} \\ \tilde{a}_{t+1} \end{bmatrix}}_{x_{t+1}}
=
\underbrace{\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & \rho \\
0 & 0 & 0 & 0 & 0 \\
0 & -\phi_{2c} & 0 & 0 & 0 \\
0 & 0 & \delta_i & \delta_k & 0
\end{bmatrix}}_{B}
\underbrace{\begin{bmatrix} \tilde{y}_t \\ \tilde{c}_t \\ \tilde{i}_t \\ \tilde{k}_t \\ \tilde{a}_t \end{bmatrix}}_{x_t}
+
\underbrace{\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}}_{C}
\underbrace{\begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \varepsilon_{t+1} \end{bmatrix}}_{\varepsilon_{t+1}}
+
\underbrace{\begin{bmatrix}
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0
\end{bmatrix}}_{D}
\underbrace{\begin{bmatrix} 0 \\ 0 \\ 0 \\ \eta^c_{t+1} \\ 0 \end{bmatrix}}_{\eta_{t+1}}. \tag{16}
$$
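For later reference, (16) is straightforward to assemble numerically. A Python sketch follows (the parameter values are hypothetical placeholders, not calibrated values from the text):

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
alpha, rho = 0.33, 0.9
phi_c, phi_i = 0.8, 0.2
phi_1c, phi_a, phi_k, phi_2c = -1.0, 0.95, 0.3, 1.0
delta_k, delta_i = 0.9, 0.1

A = np.array([[1, 0,      0,      -alpha, -1    ],
              [0, 0,      0,       0,      1    ],
              [1, -phi_c, -phi_i,  0,      0    ],
              [0, phi_1c, 0,       phi_k,  phi_a],
              [0, 0,      0,       1,      0    ]])

B = np.zeros((5, 5))
B[1, 4] = rho                        # law of motion for a-tilde
B[3, 1] = -phi_2c                    # Euler equation, current consumption
B[4, 2], B[4, 3] = delta_i, delta_k  # capital accumulation

C = np.zeros((5, 5)); C[1, 4] = 1.0  # loads the innovation eps_{t+1}
D = np.zeros((5, 5)); D[3, 3] = 1.0  # loads the expectational error eta^c_{t+1}
```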
2.1 Blanchard and Kahn's Method

The first solution method we present was developed by Blanchard and Kahn (1980), and is applied to models written as

$$\begin{bmatrix} x_{1t+1} \\ E_t(x_{2t+1}) \end{bmatrix} = \widetilde{A}\begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + Ef_t, \tag{17}$$

where the model variables have been divided into an $n_1 \times 1$ vector of endogenous predetermined variables $x_{1t}$ (defined as variables for which $E_tx_{1t+1} = x_{1t+1}$), and an $n_2 \times 1$ vector of endogenous non-predetermined variables $x_{2t}$. The $k \times 1$ vector $f_t$ contains exogenous forcing variables.

Following the approach of King and Watson (2002), a preliminary step is taken before casting a given model into the form (17). The step is referred to as a system reduction: it involves writing the model in terms of a subset of variables that are uniquely determined. In terms of the example, note that observations on $\tilde{a}_t$ and $\tilde{k}_t$ are sufficient for determining $\tilde{y}_t$ using (11), and that given $\tilde{y}_t$, the observation of either $\tilde{c}_t$ or $\tilde{i}_t$ is sufficient for determining both variables using (12). Thus we proceed in working directly with $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$ using (13)-(15), and recover $\{\tilde{y}_t, \tilde{i}_t\}$ as functions of $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$ using (11) and (12). Among $\{\tilde{c}_t, \tilde{k}_t, \tilde{a}_t\}$, $\tilde{k}_t$ is predetermined (given $\tilde{k}_t$ and $\tilde{i}_t$, $\tilde{k}_{t+1}$ is determined as in (14)); $\tilde{c}_t$ is endogenous but not predetermined (as indicated in (13), its time-$(t+1)$ realization is associated with an expectations error); and $\tilde{a}_t$ is an exogenous forcing variable. Thus in the notation of (17), we seek a specification of the model in the form

$$\begin{bmatrix} \tilde{k}_{t+1} \\ E_t(\tilde{c}_{t+1}) \end{bmatrix} = \widetilde{A}\begin{bmatrix} \tilde{k}_t \\ \tilde{c}_t \end{bmatrix} + E\tilde{a}_t. \tag{18}$$

To obtain this expression, let $d_t = [\tilde{y}_t\ \ \tilde{i}_t]'$, $v_t = [\tilde{k}_t\ \ \tilde{c}_t]'$, and note that $E_t(\tilde{a}_{t+1}) = \rho\tilde{a}_t$. In terms of these variables, the model may be written as

$$\underbrace{\begin{bmatrix} 1 & 0 \\ 1 & -\phi_i \end{bmatrix}}_{\Gamma_0}d_t = \underbrace{\begin{bmatrix} \alpha & 0 \\ 0 & \phi_c \end{bmatrix}}_{\Gamma_1}v_t + \underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{\Gamma_2}\tilde{a}_t \tag{19}$$

$$\underbrace{\begin{bmatrix} \phi_k & \phi_{1c} \\ 1 & 0 \end{bmatrix}}_{\Gamma_3}E_t(v_{t+1}) = \underbrace{\begin{bmatrix} 0 & -\phi_{2c} \\ \delta_k & 0 \end{bmatrix}}_{\Gamma_4}v_t + \underbrace{\begin{bmatrix} 0 & 0 \\ 0 & \delta_i \end{bmatrix}}_{\Gamma_5}d_t + \underbrace{\begin{bmatrix} -\rho\phi_a \\ 0 \end{bmatrix}}_{\Gamma_6}\tilde{a}_t. \tag{20}$$

Next, substituting (19) into (20), which requires inversion of $\Gamma_0$, we obtain

$$\Gamma_3E_t(v_{t+1}) = \left[\Gamma_4 + \Gamma_5\Gamma_0^{-1}\Gamma_1\right]v_t + \left[\Gamma_6 + \Gamma_5\Gamma_0^{-1}\Gamma_2\right]\tilde{a}_t. \tag{21}$$

Finally, premultiplying (21) by $\Gamma_3^{-1}$ yields a specification in the form of (18); Blanchard and Kahn's solution method may now be implemented. Hereafter, we describe its implementation in terms of the notation employed in (17).

The method begins with a Jordan decomposition of $\widetilde{A}$, yielding

$$\widetilde{A} = \Lambda^{-1}J\Lambda, \tag{22}$$

where the diagonal elements of $J$, consisting of the eigenvalues of $\widetilde{A}$, are ordered in increasing absolute value in moving from left to right.² Thus $J$ may be written as

$$J = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix}, \tag{23}$$

where the eigenvalues in $J_1$ lie on or within the unit circle, and those in $J_2$ lie outside of the unit circle. $J_2$ is said to be unstable or explosive, since $J_2^n$ diverges as $n$ increases. The matrices $\Lambda$ and $E$ are partitioned conformably as

$$\Lambda = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix}, \qquad E = \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}, \tag{24}$$

where $\Lambda_{11}$ is conformable with $J_1$, etc. If the number of explosive eigenvalues is equal to the number of non-predetermined variables, the system is said to be saddle-path stable and a unique solution to the model exists. If the number of explosive eigenvalues exceeds the number of non-predetermined variables no solution exists (and the system is said to be a source); and in the opposite case an infinity of solutions exist (and the system is said to be a sink).
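As an aside, this eigenvalue count is easy to check numerically. A minimal Python sketch (the matrix here is an arbitrary illustration, not one derived from the example model):

```python
import numpy as np

A_tilde = np.array([[0.9, 0.3],
                    [0.2, 1.4]])   # illustrative only
n2 = 1                             # number of non-predetermined variables

eigvals = np.linalg.eigvals(A_tilde)
n_explosive = np.sum(np.abs(eigvals) > 1.0)

if n_explosive == n2:
    print("saddle-path stable: unique solution")
elif n_explosive > n2:
    print("source: no solution")
else:
    print("sink: infinity of solutions")
```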

Proceeding under the case of saddle-path stability, substitution for $\widetilde{A}$ in (17) yields

$$\begin{bmatrix} x_{1t+1} \\ E_t(x_{2t+1}) \end{bmatrix} = \Lambda^{-1}J\Lambda\begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} + \begin{bmatrix} E_1 \\ E_2 \end{bmatrix}f_t. \tag{25}$$

² Eigenvalues of a matrix $\widetilde{A}$ are obtained from the solution of equations of the form $\widetilde{A}e = \lambda e$, where $e$ is an eigenvector and $\lambda$ the associated eigenvalue. The GAUSS command eigv performs this decomposition.
Next, the system is pre-multiplied by $\Lambda$, yielding

$$\begin{bmatrix} x^*_{1t+1} \\ E_t(x^*_{2t+1}) \end{bmatrix} = \begin{bmatrix} J_1 & 0 \\ 0 & J_2 \end{bmatrix}\begin{bmatrix} x^*_{1t} \\ x^*_{2t} \end{bmatrix} + \begin{bmatrix} D_1 \\ D_2 \end{bmatrix}f_t, \tag{26}$$

where

$$\begin{bmatrix} x^*_{1t} \\ x^*_{2t} \end{bmatrix} = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix}\begin{bmatrix} x_{1t} \\ x_{2t} \end{bmatrix} \tag{27}$$

$$\begin{bmatrix} D_1 \\ D_2 \end{bmatrix} = \begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ \Lambda_{21} & \Lambda_{22} \end{bmatrix}\begin{bmatrix} E_1 \\ E_2 \end{bmatrix}. \tag{28}$$

This transformation effectively "de-couples" the system, so that the non-predetermined variables depend only upon the unstable eigenvalues of $\widetilde{A}$ contained in $J_2$, as expressed in the lower part of (26).

Having de-coupled the system, we derive a solution for the non-predetermined variables by performing a forward iteration on the lower portion of (26). Using $f_{2t}$ to denote the portion of $f_t$ conformable with $D_2$, this is accomplished as follows. First, re-express the lower portion of (26) as

$$x^*_{2t} = J_2^{-1}E_t(x^*_{2t+1}) - J_2^{-1}D_2f_{2t}. \tag{29}$$

This implies an expression for $x^*_{2t+1}$ of the form

$$x^*_{2t+1} = J_2^{-1}E_{t+1}(x^*_{2t+2}) - J_2^{-1}D_2f_{2t+1}, \tag{30}$$

which can be substituted into (29) to obtain

$$x^*_{2t} = J_2^{-2}E_t(x^*_{2t+2}) - J_2^{-2}D_2E_t(f_{2t+1}) - J_2^{-1}D_2f_{2t}. \tag{31}$$

In writing (31) we have exploited the Law of Iterated Expectations, which holds that $E_t[E_{t+1}(x)] = E_t(x)$ for any $x$ (e.g., see Ljungqvist and Sargent, 2004). Since $J_2$ contains explosive eigenvalues, $J_2^{-n}$ disappears as $n$ approaches infinity; thus continuation of the iteration process yields

$$x^*_{2t} = -\sum_{i=0}^{\infty}J_2^{-(i+1)}D_2E_t(f_{2t+i}). \tag{32}$$

Mapping this back into an expression for $x_{2t}$ using (27), we obtain

$$x_{2t} = -\Lambda_{22}^{-1}\Lambda_{21}x_{1t} - \Lambda_{22}^{-1}\sum_{i=0}^{\infty}J_2^{-(i+1)}D_2E_t(f_{2t+i}). \tag{33}$$

In the case of the example model presented above, $E_t(f_{2t+i}) = \rho^i\tilde{a}_t$, and thus (33) becomes

$$x_{2t} = -\Lambda_{22}^{-1}\Lambda_{21}x_{1t} - \Lambda_{22}^{-1}J_2^{-1}\left(I - \rho J_2^{-1}\right)^{-1}D_2\tilde{a}_t. \tag{34}$$

Finally, to solve the non-explosive portion of the system, begin by expanding the upper portion of (25):

$$x_{1t+1} = \widetilde{A}_{11}x_{1t} + \widetilde{A}_{12}x_{2t} + E_1f_t, \tag{35}$$

where $\widetilde{A}_{11}$ and $\widetilde{A}_{12}$ are partitions of $\Lambda^{-1}J\Lambda$ conformable with $x_{1t}$ and $x_{2t}$. Then substituting for $x_{2t}$ using (33) yields a solution for $x_{1t}$ of the form given by (10).

We conclude this subsection by highlighting two requirements of this solution method. First, a model-specific system reduction is employed to obtain an expression of the model that consists of a subset of its variables. The variables in the subset are distinguished as being either predetermined or non-predetermined. Second, invertibility of the lead matrices $\Gamma_0$ and $\Gamma_3$ is required in order to obtain a specification of the model amenable for solution.
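To illustrate the mechanics (and as a possible starting point for Exercise 1 below), a Python sketch of the system-reduction algebra in (19)-(21), again under hypothetical parameter values:

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
alpha, rho = 0.33, 0.9
phi_c, phi_i = 0.8, 0.2
phi_1c, phi_a, phi_k, phi_2c = -1.0, 0.95, 0.3, 1.0
delta_k, delta_i = 0.9, 0.1

# Matrices of (19)-(20): d_t = [y, i]', v_t = [k, c]'.
G0 = np.array([[1.0, 0.0], [1.0, -phi_i]])
G1 = np.array([[alpha, 0.0], [0.0, phi_c]])
G2 = np.array([[1.0], [0.0]])
G3 = np.array([[phi_k, phi_1c], [1.0, 0.0]])
G4 = np.array([[0.0, -phi_2c], [delta_k, 0.0]])
G5 = np.array([[0.0, 0.0], [0.0, delta_i]])
G6 = np.array([[-rho * phi_a], [0.0]])

# Equation (21), then premultiply by G3^{-1} to reach the form (18).
G0_inv = np.linalg.inv(G0)
A_tilde = np.linalg.solve(G3, G4 + G5 @ G0_inv @ G1)
E_mat = np.linalg.solve(G3, G6 + G5 @ G0_inv @ G2)
```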

Exercise 1 Write computer code for mapping the example model expressed in (11)-(15) into the form of the representation given in (17).

2.2 Sims's Method

Sims (2001) proposes a solution method applied to models expressed as

$$Ax_{t+1} = Bx_t + E + C\varepsilon_{t+1} + D\eta_{t+1}, \tag{36}$$

where $E$ is a matrix of constants.³ Relative to the notation we have employed above, $E$ is unnecessary because the variables in $x_t$ are expressed in terms of deviations from steady state values. Like Blanchard and Kahn's (1980) method, Sims's method involves a de-coupling of the system into explosive and non-explosive portions. However, rather than expressing variables in terms of expected values, expectations operators have been dropped, giving rise to the expectations errors contained in $\eta_{t+1}$. Also, while Blanchard and Kahn's method entails isolation of the forcing variables from $x_{t+1}$, these are included in $x_{t+1}$ under Sims's method; thus the appearance in the system of the vector of shocks to these variables, $\varepsilon_{t+1}$. Third, Sims's method does not require an initial system-reduction step. Finally, it does not entail a distinction between predetermined and non-predetermined variables.

³ The programs available on Sims's website perform all of the steps of this procedure. The web address is: http://www.princeton.edu/~sims/. The programs are written in Matlab; analogous code written in GAUSS is currently under construction.

Note from (16) that the example model has already been cast in the form of (36); thus we proceed directly to a characterization of the solution method. The first step employs a "QZ factorization" to decompose $A$ and $B$:

$$A = Q'\Lambda Z' \tag{37}$$

$$B = Q'\Omega Z', \tag{38}$$

where $(Q, Z)$ are unitary, and $(\Lambda, \Omega)$ are upper triangular.⁴ Next, $(Q, Z, \Lambda, \Omega)$ are ordered such that, in absolute value, the generalized eigenvalues of $A$ and $B$ are organized in $\Lambda$ and $\Omega$ in increasing order moving from left to right, just as in Blanchard and Kahn's Jordan decomposition procedure.⁵ Having obtained the factorization, the original system is then pre-multiplied by $Q$, yielding the transformed system expressed in terms of $z_{t+1} = Z'x_{t+1}$:

$$\Lambda z_t = \Omega z_{t-1} + QE + QC\varepsilon_t + QD\eta_t, \tag{39}$$

where we have lagged the system by one period in order to match the notation (and code) of Sims.

⁴ A unitary matrix $X$ satisfies $X'X = XX' = I$. If $Q$ and/or $Z$ contain complex values, the transpositions reflect complex conjugation; that is, each complex entry is replaced by its conjugate and then transposed.
⁵ Generalized eigenvalues of the pair $(A, B)$ are obtained as solutions $\lambda$ of $Be = \lambda Ae$, where $e$ is a generalized eigenvector. Sims's website also provides a program that orders the eigenvalues appropriately.

Next, as with Blanchard and Kahn's (1980) method, (39) is partitioned into explosive and non-explosive blocks:

$$\begin{bmatrix} \Lambda_{11} & \Lambda_{12} \\ 0 & \Lambda_{22} \end{bmatrix}\begin{bmatrix} z_{1t} \\ z_{2t} \end{bmatrix} = \begin{bmatrix} \Omega_{11} & \Omega_{12} \\ 0 & \Omega_{22} \end{bmatrix}\begin{bmatrix} z_{1t-1} \\ z_{2t-1} \end{bmatrix} + \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix}\left[E + C\varepsilon_t + D\eta_t\right]. \tag{40}$$

The explosive block (the lower equations) is solved as follows. Letting $w_t = Q(E + C\varepsilon_t + D\eta_t)$ (partitioned conformably as $w_{1t}$ and $w_{2t}$), the lower block of (40) is given by

$$\Lambda_{22}z_{2t} = \Omega_{22}z_{2t-1} + w_{2t}. \tag{41}$$

Leading (41) by one period and solving for $z_{2t}$ yields

$$z_{2t} = Mz_{2t+1} - \Omega_{22}^{-1}w_{2t+1}, \tag{42}$$

where $M = \Omega_{22}^{-1}\Lambda_{22}$. Then recursive substitution for $z_{2t+1}, z_{2t+2}, \ldots$ yields

$$z_{2t} = -\sum_{i=0}^{\infty}M^i\Omega_{22}^{-1}w_{2t+1+i}, \tag{43}$$

since $\lim_{t\to\infty}M^tz_{2t} = 0$. Recalling that $w_t$ is defined as $w_t = Q(E + C\varepsilon_t + D\eta_t)$, note that (43) expresses $z_{2t}$ as a function of future values of structural and expectational errors. But $z_{2t}$ is known at time $t$, and $E_t(\varepsilon_{t+s}) = E_t(\eta_{t+s}) = 0$ for $s > 0$; thus (43) may be written as

$$z_{2t} = -\sum_{i=0}^{\infty}M^i\Omega_{22}^{-1}Q_2E, \tag{44}$$

where $Q_2E$ is the lower portion of $QE$ conformable with $z_{2t}$.⁶ Noting that $\sum_{i=0}^{\infty}M^i = (I - M)^{-1}$, the solution for $z_{2t}$ is obtained as

$$z_{2t} = (\Lambda_{22} - \Omega_{22})^{-1}Q_2E. \tag{45}$$

⁶ Sims also considers the case in which the structural innovations $\varepsilon_t$ are serially correlated, which leads to a generalization of (44).

Having solved for $z_{2t}$, the final step is to solve for $z_{1t}$ in (40). Note that the solution of $z_{1t}$ requires a solution for the expectations errors that appear in (40). As Sims notes, when a unique solution for the model exists, it will be the case that a systematic relationship exists between the expectations errors associated with $z_{1t}$ and $z_{2t}$; exploiting this relationship yields a straightforward means of solving for $z_{1t}$. The necessary and sufficient condition for uniqueness is given by the existence of a $k \times (n-k)$ matrix $\Phi$ that satisfies

$$Q_1D = \Phi Q_2D, \tag{46}$$

which represents the systematic relationship between the expectations errors associated with $z_{1t}$ and $z_{2t}$ noted above. Given uniqueness, and thus the ability to calculate $\Phi$ as in (46), the solution of $z_{1t}$ proceeds with the pre-multiplication of (39) by $[I\ \ {-\Phi}]$, which yields

$$\begin{bmatrix} \Lambda_{11} & \Lambda_{12} - \Phi\Lambda_{22} \end{bmatrix}\begin{bmatrix} z_{1t} \\ z_{2t} \end{bmatrix} = \begin{bmatrix} \Omega_{11} & \Omega_{12} - \Phi\Omega_{22} \end{bmatrix}\begin{bmatrix} z_{1t-1} \\ z_{2t-1} \end{bmatrix} + \left[Q_1 - \Phi Q_2\right]\left[E + C\varepsilon_t + D\eta_t\right]. \tag{47}$$

Then due to (46), the loading factor for the expectational errors in (47) is zero, and thus the system may be written in the form

$$x_t = \tilde{E} + \Theta_0x_{t-1} + \Theta_1\varepsilon_t, \tag{48}$$

where

$$H = Z\begin{bmatrix} \Lambda_{11}^{-1} & -\Lambda_{11}^{-1}(\Lambda_{12} - \Phi\Lambda_{22}) \\ 0 & I \end{bmatrix} \tag{49}$$

$$\tilde{E} = H\begin{bmatrix} Q_1 - \Phi Q_2 \\ (\Lambda_{22} - \Omega_{22})^{-1}Q_2 \end{bmatrix}E \tag{50}$$

$$\Theta_0 = Z\begin{bmatrix} \Lambda_{11}^{-1}\Omega_{11} & \Lambda_{11}^{-1}(\Omega_{12} - \Phi\Omega_{22}) \\ 0 & 0 \end{bmatrix}Z' \tag{51}$$

$$\Theta_1 = H\begin{bmatrix} Q_1 - \Phi Q_2 \\ 0 \end{bmatrix}C. \tag{52}$$
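A sketch of the decomposition and root-counting steps in Python, using scipy's ordered QZ routine in place of Sims's Matlab code ($A$ and $B$ as assembled in the sketch following (16); the sort callable places stable roots first, matching the ordering convention in the text):

```python
import numpy as np
from scipy.linalg import ordqz

# scipy returns A = Q @ Lam @ Z^H, B = Q @ Om @ Z^H, so the text's Q
# corresponds to the conjugate transpose of scipy's Q.
# The callable keeps eigenvalues with |Om_ii / Lam_ii| < 1 -- the stable
# roots of A x_{t+1} = B x_t -- in the upper-left block.
stable_first = lambda alpha, beta: np.abs(beta) < np.abs(alpha)
Lam, Om, alpha, beta, Q, Z = ordqz(A, B, sort=stable_first)

with np.errstate(divide="ignore", invalid="ignore"):
    roots = np.abs(beta / alpha)        # dynamic generalized eigenvalues
n_explosive = np.sum(roots > 1.0)
print("explosive roots:", n_explosive)  # compare with the number of
                                        # expectational errors for uniqueness
```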

Exercise 2 Using the code cited for this method, compute the solution for (11)-(15) for given values of $\mu$.

2.3 Klein's Method

Klein (2000) proposes a solution method that is a hybrid of those of Blanchard and Kahn (1980) and Sims (2001).⁷ The method is applied to systems written as

$$\widetilde{A}E_t(x_{t+1}) = \widetilde{B}x_t + Ef_t, \tag{53}$$

where the vector $f_t$ (of length $n_z$) has a zero-mean vector autoregressive (VAR) specification with autocorrelation matrix $\Phi$; additionally, $\widetilde{A}$ may be singular.⁸

Like Blanchard and Kahn, Klein distinguishes between the predetermined and non-predetermined variables of the model. The former are contained in $x_{1t+1}$, the latter in $x_{2t+1}$: $E_t(x_{t+1}) = \left[x_{1t+1}'\ \ E_t(x_{2t+1})'\right]'$. The solution approach once again involves de-coupling the system into non-explosive and explosive components, and solving the two components in turn.

⁷ GAUSS and Matlab code that implement this solution method are available at http://www.ssc.uwo.ca/economics/faculty/klein/.
⁸ See Chapter 4 for a description of VAR models.

Returning to the example model expressed in (11)-(15), the form of the model amenable to the implementation of Klein's method is given by (21), repeated here for convenience:

$$\Gamma_3E_t(v_{t+1}) = \left[\Gamma_4 + \Gamma_5\Gamma_0^{-1}\Gamma_1\right]v_t + \left[\Gamma_6 + \Gamma_5\Gamma_0^{-1}\Gamma_2\right]\tilde{a}_t. \tag{54}$$

The main advantage of Klein's approach relative to Blanchard and Kahn's is that $\Gamma_3$ may be singular. To proceed with the description of Klein's approach, we revert to the notation employed in (53).

Klein's approach overcomes the potential non-invertibility of $\widetilde{A}$ by implementing a complex generalized Schur decomposition to decompose $\widetilde{A}$ and $\widetilde{B}$. This is in place of the QZ decomposition employed by Sims. In short, the Schur decomposition is a generalization of the QZ decomposition that allows for complex eigenvalues associated with $\widetilde{A}$ and $\widetilde{B}$. Given the decomposition of $\widetilde{A}$ and $\widetilde{B}$, Klein's method closely follows that of Blanchard and Kahn.

The Schur decompositions of $\widetilde{A}$ and $\widetilde{B}$ are given by

$$Q\widetilde{A}Z = S \tag{55}$$

$$Q\widetilde{B}Z = T, \tag{56}$$

where $(Q, Z)$ are unitary and $(S, T)$ are upper triangular matrices with diagonal elements containing the generalized eigenvalues of $\widetilde{A}$ and $\widetilde{B}$. Once again the eigenvalues are ordered in increasing value in moving from left to right. Partitioning $Z$ as

$$Z = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix}, \tag{57}$$

$Z_{11}$ is $n_1 \times n_1$ and corresponds to the non-explosive eigenvalues of the system. Given saddle-path stability, this conforms with $x_1$, which contains the predetermined variables of the model.

Having obtained this decomposition, the next step in solving the system is to triangularize (53) as was done in working with the QZ decomposition. Begin by defining

$$z_t = Z^Hx_t, \tag{58}$$

where $Z^H$ refers to a Hermitian transpose.⁹ This transformed vector is divided into $n_1 \times 1$ stable ($s_t$) and $n_2 \times 1$ unstable ($u_t$) components.

⁹ Given a matrix $X$, if the lower triangular portion of $X$ is the complex conjugate transpose of the upper triangular portion of $X$, then $X$ is denoted as Hermitian.

Then since $\widetilde{A} = Q'SZ^H$ and $\widetilde{B} = Q'TZ^H$, (53) may be written as

$$\begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix}E_t\begin{bmatrix} s_{t+1} \\ u_{t+1} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix}\begin{bmatrix} s_t \\ u_t \end{bmatrix} + \begin{bmatrix} Q_1 \\ Q_2 \end{bmatrix}Ef_t; \tag{59}$$

once again, the lower portion of (59) contains the unstable components of the system. Solving this component via forward iteration, we obtain¹⁰

$$u_t = Mf_t \tag{60}$$

$$\text{vec}(M) = -\left[\left(\Phi' \otimes S_{22}\right) - \left(I_{n_z} \otimes T_{22}\right)\right]^{-1}\text{vec}(Q_2E). \tag{61}$$

This solution for the unstable component is then used to solve the stable component, yielding

$$s_{t+1} = S_{11}^{-1}T_{11}s_t + S_{11}^{-1}\left\{T_{12}M - S_{12}M\Phi + Q_1E\right\}f_t - Z_{11}^{-1}Z_{12}M\xi_{t+1}, \tag{62}$$

where $\xi_{t+1}$ is a serially uncorrelated stochastic process representing the innovations in the VAR specification for $f_{t+1}$. In the context of our example model, $f_t$ corresponds to $\tilde{a}_t$, the innovation to which is $\varepsilon_t$.

¹⁰ The appearance of the vec operator accommodates the VAR specification for $f_t$. In the context of the example model, $\Phi$ is replaced by the scalar $\rho$, and (61) becomes $M = -(\rho S_{22} - T_{22})^{-1}Q_2E$.

In terms of the original variables the solution is expressed as

$$x_{2t} = Z_{21}Z_{11}^{-1}x_{1t} + Nf_t \tag{63}$$

$$x_{1t+1} = Z_{11}S_{11}^{-1}T_{11}Z_{11}^{-1}x_{2t} + Lf_t \tag{64}$$

$$N = \left(Z_{22} - Z_{21}Z_{11}^{-1}Z_{12}\right)M \tag{65}$$

$$L = -Z_{11}S_{11}^{-1}T_{11}Z_{11}^{-1}Z_{12}M + Z_{11}S_{11}^{-1}\left[T_{12}M - S_{12}M\Phi + Q_1E\right] + Z_{12}M\Phi. \tag{66}$$

This solution can be cast into the form of (10) by substituting (63) into (64):

$$x_{1t+1} = \left[Z_{11}S_{11}^{-1}T_{11}Z_{11}^{-1}Z_{21}Z_{11}^{-1}\right]x_{1t} + \left[Z_{11}S_{11}^{-1}T_{11}Z_{11}^{-1}N + L\right]f_t. \tag{67}$$
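A compact numerical sketch of the decomposition step in Python (a full implementation would follow Klein's posted GAUSS/Matlab code; here scipy's ordqz supplies the ordered complex generalized Schur form, the function name is hypothetical, and the partition blocks are read off directly):

```python
import numpy as np
from scipy.linalg import ordqz

def klein_blocks(A_tilde, B_tilde, n1):
    """Ordered generalized Schur form of (53), stable roots first.

    scipy returns A = Q S Z^H, so the text's Q corresponds to Q^H here.
    n1 is the number of predetermined variables; the saddle-path check
    compares n1 with the number of stable generalized eigenvalues.
    """
    stable_first = lambda alpha, beta: np.abs(beta) < np.abs(alpha)
    S, T, alpha, beta, Q, Z = ordqz(A_tilde, B_tilde, sort=stable_first,
                                    output="complex")
    with np.errstate(divide="ignore", invalid="ignore"):
        n_stable = int(np.sum(np.abs(beta / alpha) < 1.0))
    if n_stable != n1:
        raise ValueError("saddle-path condition fails")
    Z11, Z12 = Z[:n1, :n1], Z[:n1, n1:]
    Z21, Z22 = Z[n1:, :n1], Z[n1:, n1:]
    return S, T, Q, Z11, Z12, Z21, Z22
```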

Exercise 3 Apply Klein's code to the example model presented in (11)-(15).

2.4 An Undetermined Coefficients Approach

Uhlig (1999) proposes a solution method based on the method of undetermined coefficients.¹¹ The method is applied to systems written as

$$0 = E_t\left[Fx_{t+1} + Gx_t + Hx_{t-1} + Lf_{t+1} + Mf_t\right] \tag{68}$$

$$f_{t+1} = Nf_t + \xi_{t+1}, \qquad E_t(\xi_{t+1}) = 0. \tag{69}$$

¹¹ Matlab code available for implementing this solution method is available at http://www.wiwi.hu-berlin.de/wpol/html/toolkit.htm. GAUSS code is currently under construction.

With respect to the example model in (11)-(14), let $x_t = [\tilde{y}_t\ \tilde{c}_t\ \tilde{i}_t\ \tilde{k}_t]'$. Then lagging the first two equations, which are subject neither to structural shocks nor expectations errors, the matrices in (68) and (69) are given by

$$F = \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & \phi_{1c} & 0 & \phi_k \\ 0 & 0 & 0 & 1 \end{bmatrix}, \qquad G = \begin{bmatrix} 1 & 0 & 0 & -\alpha \\ 1 & -\phi_c & -\phi_i & 0 \\ 0 & \phi_{2c} & 0 & 0 \\ 0 & 0 & -\delta_i & -\delta_k \end{bmatrix}, \tag{70}$$

$H = 0$, $L = [0\ \ 0\ \ \phi_a\ \ 0]'$, $M = [-1\ \ 0\ \ 0\ \ 0]'$, and $N = \rho$.

Solutions to (68)-(69) take the form

$$x_t = Px_{t-1} + Qf_t. \tag{71}$$

In deriving (71), we will confront the problem of solving matrix quadratic equations of the form

$$\Psi P^2 - \Gamma P - \Theta = 0 \tag{72}$$

for the $m \times m$ matrix $P$. Thus we first describe the solution of such equations.

To begin, define

$$\Xi = \underset{2m \times 2m}{\begin{bmatrix} \Gamma & \Theta \\ I_m & 0_{m \times m} \end{bmatrix}}, \qquad \Delta = \underset{2m \times 2m}{\begin{bmatrix} \Psi & 0_{m \times m} \\ 0_{m \times m} & I_m \end{bmatrix}}. \tag{73}$$

Given these matrices, let $s$ and $\lambda$ denote the generalized eigenvector and eigenvalue of $\Xi$ with respect to $\Delta$, and note that $s' = [\lambda x', x']$ for some $x \in \Re^m$. Then the solution to the matrix quadratic is given by

$$P = \Omega\Lambda\Omega^{-1}, \qquad \Omega = [x_1, \ldots, x_m], \qquad \Lambda = \text{diag}(\lambda_1, \ldots, \lambda_m), \tag{74}$$

so long as the $m$ eigenvalues contained in $\Lambda$ and the eigenvectors $(x_1, \ldots, x_m)$ are linearly independent. The solution is stable if the generalized eigenvalues are all less than one in absolute value.

Returning to the solution of the system in (68)-(69), the first step towards obtaining (71) is to combine these three equations into a single equation. This is accomplished in two steps. First, write $x_t$ in (68) in terms of its relationship with $x_{t-1}$ given by (71), and do the same for $x_{t+1}$, where the relationship is given by

$$x_{t+1} = P^2x_{t-1} + PQf_t + Qf_{t+1}. \tag{75}$$

Next, write $f_{t+1}$ in terms of its relationship with $f_t$ given by (69). Taking expectations of the resulting equation yields

$$0 = \left[FP^2 + GP + H\right]x_{t-1} + \left[(FP + G)Q + M + (FQ + L)N\right]f_t. \tag{76}$$

Note that in order for (76) to hold, the coefficients on $x_{t-1}$ and $f_t$ must be zero. The first restriction implies that $P$ must satisfy the matrix quadratic equation

$$0 = FP^2 + GP + H, \tag{77}$$

the solution of which is obtained as indicated in (73) and (74), with $\Psi = F$, $\Gamma = -G$, and $\Theta = -H$. The second restriction requires the derivation of $Q$, which satisfies

$$(FP + G)Q + M + (FQ + L)N = 0. \tag{78}$$

The required $Q$ can be shown to be given by

$$\text{vec}(Q) = V^{-1}\left[-\text{vec}(LN + M)\right], \tag{79}$$

where $V$ is defined as

$$V = N' \otimes F + I_k \otimes (FP + G). \tag{80}$$

The solutions for $P$ and $Q$ will be unique so long as the matrix $P$ has stable eigenvalues.
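Both computations are compact in Python. The sketch below (generic in the matrices, and assuming the eigenvector matrix $\Omega$ is invertible with complex roots resolving into a real $P$) implements (73)-(74) for (77), and (79)-(80) for $Q$:

```python
import numpy as np
from scipy.linalg import eig

def solve_P(F, G, H):
    """Solve F P^2 + G P + H = 0 via (73)-(74), with Psi = F, Gamma = -G, Theta = -H."""
    m = F.shape[0]
    Xi = np.block([[-G, -H], [np.eye(m), np.zeros((m, m))]])
    Delta = np.block([[F, np.zeros((m, m))], [np.zeros((m, m)), np.eye(m)]])
    lam, s = eig(Xi, Delta)             # generalized eigenpairs of Xi w.r.t. Delta
    keep = np.argsort(np.abs(lam))[:m]  # retain the m smallest (stable) roots;
    lam, s = lam[keep], s[:, keep]      # infinite roots from singular Delta sort last
    Omega = s[m:, :]                    # lower block of s' = [lam*x', x'] holds x
    return np.real(Omega @ np.diag(lam) @ np.linalg.inv(Omega))

def solve_Q(F, G, L, M, N, P):
    """Solve (F P + G) Q + M + (F Q + L) N = 0; L, M are m x k, N is k x k."""
    m, k = M.shape
    V = np.kron(N.T, F) + np.kron(np.eye(k), F @ P + G)
    vec_Q = np.linalg.solve(V, -(L @ N + M).flatten("F"))
    return vec_Q.reshape((m, k), order="F")
```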

As noted by Christiano (2002), this solution method is particularly convenient for working with models involving endogenous variables that have differing associated information sets. Such models can be cast in the form of (68)-(69), with the expectations operator $\widehat{E}_t$ replacing $E_t$. In terms of calculating the expectation of an $n \times 1$ vector $X_t$, $\widehat{E}_t$ is defined as

$$\widehat{E}_t(X_t) = \begin{bmatrix} E(X_{1t}|\Omega_{1t}) \\ \vdots \\ E(X_{nt}|\Omega_{nt}) \end{bmatrix}, \tag{81}$$

where $\Omega_{it}$ represents the information set available for formulating expectations over the $i$th element of $X_t$. Thus systems involving this form of heterogeneity may be accommodated using an expansion of the system (68)-(69) specified for a representative agent. The solution of the expanded system proceeds as indicated above; for details and extensions, see Christiano (2002).

Exercise 4 Apply Uhlig's code to the example model presented in (11)-(15).
References

[1] Adda, J. and R. Cooper. 2003. Dynamic Economics. MIT Press.

[2] Blanchard, O. J. and C. M. Kahn. 1980. "The Solution of Linear Difference Models Under Rational Expectations." Econometrica 48 (5): 1305-1311.

[3] Christiano, L. J. 2002. "Solving Dynamic Equilibrium Models by a Method of Undetermined Coefficients." Computational Economics 20: 21-55.

[4] Judd, K. 1998. Numerical Methods in Economics. MIT Press.

[5] King, R.G. and M.W. Watson. 2002. "System Reduction and Solution Algorithms for Solving Linear Difference Systems under Rational Expectations." Computational Economics 20: 57-86.

[6] Klein, P. 2000. "Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model." Journal of Economic Dynamics and Control 24: 1405-1423.

[7] Ramsey, F.P. 1928. "A Mathematical Theory of Saving." Economic Journal 38: 543-559.

[8] Sargent, T. and L. Ljungqvist. 2004. Recursive Macroeconomic Theory. MIT Press.

[9] Schmitt-Grohé, S. and M. Uribe. 2002. "Solving Dynamic General Equilibrium Models Using a Second-Order Approximation of the Policy Function." NBER Technical Working Paper No. 0282. Cambridge: NBER.

[10] Sims, C. 2001. "Solving Linear Rational Expectations Models." Computational Economics 20: 1-20.

[11] Uhlig, H. 1999. "A toolkit for analyzing non-linear dynamic stochastic models easily." In Ramon Marimon and Andrew Scott (eds.), Computational Methods for the Study of Dynamic Economies. Oxford University Press, New York, 30-61.
