
Methods for solving nonlinear, rational expectations business cycle models

A 1-day workshop for the Bath-Bristol-Exeter Doctoral Training Centre, by Tony Yates
University of Bristol + Centre for Macroeconomics
21 May 2014
1. Introduction: milestones in history, purpose, curriculum for self-study, plan for today

Purpose

Work of Samuelson, Bewley, Brock, Lucas, Kydland, Prescott and others.
Emphasis, and later dominance, of microfounded, often rational expectations models in macro.
Applies to models of growth as much as business cycles.
Recent colonisation of finance, development, political economy.
Also a rich micro literature [e.g. Pakes...]
Purpose (2)

The dominant paradigm has generated new industries in applied econometrics:
Methods for estimating models.
The search for facts to confront with models.
The purpose is to begin a process of giving you access to these vast literatures.
Some of the great debates employing computational methods for macro

Causes of business cycles. Were they real or not?
The great inflation, great moderation, great contractions.
The costs of business cycles.
Optimal monetary and fiscal policy, inc institutional design.
Cause of, and optimal response to, changes in consumption and wealth distribution.
Finance as a source and propagator of business cycles, and of misallocation.
Labour market institutions (e.g. benefits), welfare and the business cycle.
R&D and growth; finance and growth.
Applications of recursive, numerical methods

IO: optimal entry/exit, pricing.
Labour: search problems when the decision is accept/reject.
DP^2: optimal government unemployment compensation policy when agents solve an accept/reject search problem in the labour market.
Monetary policy and the zero bound [I'm working on this now].
Optimal redistributive taxation with aggregate and idiosyncratic uncertainty.
Plan = f(speed)

Quasi-linear, perfect foresight method. [DSGE at zero lower bound]
Nonlinear, perfect foresight using Newton-Raphson. [Deterministic RBC]
Parameterised expectations. [Stochastic RBC]
A projection method deploying Chebyshev polynomial function approximation. [Deterministic RBC]
Dynamic programming with value and policy function iteration. [Deterministic and stochastic RBC]
Dynamic programming using collocation and Chebyshev polynomials. [Deterministic RBC]
Heterogeneous agent methods using Krusell-Smith. [Deterministic RBC]
DP^2: some remarks only.
The vehicle: RBC model

(Initially) representative agent, consumption-savings-investment-output problem, stochastic, neutral technological progress.
Useful building block for many, more modern and realistic applications in growth, finance, monetary economics.
Many numerical methods textbooks explain using this as an example.
But to repeat: the methods are much more widely applicable.
Minimal tools for applied macro theory

Dynamic optimisation, to solve the problems of the firms and consumers in your model.
Function approximation using sums of polynomials.
Numerical optimisation.
Matrix algebra.
Numerical derivatives, numerical integrals.
Markov chains.
Dynamic programming.
Desirable for applied macro, minimal for theoretical macro

Real analysis: the study of the existence and convergence theorems on which dynamic programming and function approximation rely.
Most minor modifications of RBC models will be such that the required conditions are met.
Many major modifications (e.g. heterogeneous agents) leave you in territory where there is no possibility of proving existence or uniqueness, so you have to rely on experimentation, homotopy.
Not covering today:

Perturbation methods for solving RBC/DSGE models.
Though we are deploying a similar idea to solve the root-finding problems our nonlinear model poses.
Now easy [maybe too easy!] to do this in Dynare, software written for use in Matlab.
These are local methods. Adequate unless:
Large shocks.
Occasionally binding constraints, or other things inducing kinks in policy functions.
Choice set for agents discontinuous.
Good resources

Stokey and Lucas (1989): regularity conditions, convergence theorems for the tools to work.
Judd (1993): excellent discussion of numerical methods for NL DSGE models.
Heer and Maussner (2005): great how-to book, including some good details on tools.
Adda and Cooper (2003): nice explanations of dynamic programming, simulated method of moments. A read-in-the-bath tour of macro.
Miranda and Fackler (2002): includes a powerful toolbox of code.
Good resources (2)

Wouter den Haan's lecture notes on projection methods, + many others.
Den Haan and Marcet (1990) on the parameterised expectations method is excellent and very transparent.
Wouter den Haan's manuscript on PEA.
Christiano and Fisher (2000) on the unity of PEA and projection methods.
Software

Pencil and paper indispensable, but insufficient!
Matlab, Gauss, Python for development, computation, simulation.
Lower-level languages like Fortran, C++ for computationally intensive tasks.
Mathematica, Maple for debugging pencil-and-paper calculation of derivatives [perhaps integrated into your Matlab or other programs].
Companion curriculum on empirical macro

Time series econometrics: VARs, random processes, Markov processes, ergodicity.
Kalman filter.
Filtering. Duality with the optimal linear regulator.
Computing the likelihood of [the state-space form of a] DSGE/RBC model.
Markov Chain Monte Carlo methods.
Characterising posterior densities numerically.
Particle filtering.
For when you don't have analytical expressions for the likelihood.
2. Piecewise linear, perfect foresight solution of a nonlinear RE NK model

[Taken from Brendon, Paustian and Yates, "The pitfalls of speed limit interest rate rules at the ZLB".]

Warm-up: piecewise linear REE solution method

Perfect foresight, nonlinear REE.
Jung, Teranishi and Watanabe (2005), Eggertsson and Woodford (2003).
Policy rules in the NK model with the zero lower bound to interest rates.
Guess the period at which the ZLB binds; solve the resultant, 2-part linear RE system; verify whether we have an REE.
An almost linear New Keynesian RE model

π_t = κ y_t + β E_t π_{t+1},  κ = (1 − θ)(1 − βθ)(σ + η)/θ

y_t = E_t y_{t+1} − (1/σ)(R_t − E_t π_{t+1})

R_t = max[ φ_π π_t + φ_y y_t + φ_Δy (y_t − y_{t−1}), −R^L ]
Why the ZLB is of interest

Rates at the ZLB in Japan for 20 years, and in the UK, US for 3.5 years.
Woodford's forward guidance policy, copied (?) by central banks, is optimal policy at the ZLB.
Fiscal multiplier, and benefits of fiscal policy, heavily dependent on the ZLB.
Other applications of piecewise linear too.

Why are speed limit rules of interest?

Implement/mimic commitment policy:
Walsh (2003a), Giannoni and Woodford (2003), McCallum and Nelson (2004), Stracca (2007), Leduc and Natal (2011).
Insurance against measurement error:
Orphanides and Williams [various].
Some evidence they fit time series for central bank rates:
Mehra (2002).
Calibration of simple NK model

σ     elasticity of intertemporal substitution        1
β     discount rate                                   0.99
θ     Calvo hazard parameter                          0.67
η     inverse Frisch elasticity of labour supply      2
φ_π   weight on inflation in policy rule              1.5
φ_y   weight on output in policy rule                 0
φ_Δy  weight on change in output in p.r.              2

R_t = max[ φ_π π_t + φ_y y_t + φ_Δy (y_t − y_{t−1}), −R^L ]
Solution algorithm

1. Guess the ZLB binds at t = 0, but not thereafter.
2. Conventional RE solvers give us:
   [π_t, y_t, R_t]′ = [a_1, a_2, a_3]′ y_{t−1},  t ≥ 1
3. Now solve for the initial period, substituting out expectations using 2.:
   π_0 = κ y_0 + β a_1 y_0
   y_0 = a_2 y_0 − (1/σ)(−R^L − a_1 y_0)
4. Given initial values, use 2. to solve recursively for t = 1 onwards...
5. Verify:
   R_0^shadow ≤ −R^L
   R_t^shadow > −R^L,  t ≥ 1
Solving the linear RE part

Method of undetermined coefficients [or similar].
Conjecture:
   [π_t, y_t, R_t]′ = [a_1, a_2, a_3]′ y_{t−1}
Substitute in where terms in expectations appear.
Solve for the a's.
Recap on nonlinear ZLB solution method in words

Make a guess at the number of periods for which the zero bound binds.
Use undetermined coefficients or similar to solve for the linear REE in terms of the state from this period on.
Use this solution form to eliminate the expectations terms in the equation system for the initial period.
This leaves you with a 2-equation, 2-unknown system for the initial period, which you can solve. Remember that the interest rate rule is replaced by the assumption that interest rates are zero (from the initial guess).
Having solved for the initial period, use the solution form for the post-zero-bound period to simulate forwards step by step.
Dynamics under self-fulfilling recession

[Figure: "Self-fulfilling crisis: a simple New Keynesian model" — panels for Output, Inflation, Labour and the Policy Rate over 20 quarters]

Inflation and output > 4% from ss.
Rates at the ZLB.
Real rate under self-fulfilling recession

[Figure: "The real interest rate during the crisis episode"]

Real rate high in period 2, sustaining the forecast of low inflation and output.
General lessons

Solving nonlinear problems by recasting as a set of linear problem-solving steps.
Guess and verify.
NB: this algorithm is a stepping stone to solving for optimal policy at the ZLB.

Why perfect foresight?

Hopelessly unrealistic?
Maybe OK for some questions, e.g. transition from one SS to another, in response to pre-announced policy.
Useful benchmark.
Stepping stone to learning other methods.
Stepping stone to coding other methods, once they are learned.
3. Solving a perfect foresight, finite horizon version of the growth model using Newton-Raphson methods

A finite horizon yeoman farmer model

U(C_0, ..., C_T) = Σ_{t=0}^{T} C_t^η / η,  η ∈ (−∞, 1]
f(K_t) = K_t^α,  α ∈ (0, 1)
K_{t+1} + C_t = f(K_t)
0 ≤ C_t,  0 ≤ K_{t+1},  t = 0, ..., T

Example 1.1.1, Heer and Maussner, p. 9.
FONCs for the yeoman farmer model

K_{t+1} = K_t^α − C_t,  t = 0, 1, ..., T
(C_{t+1}/C_t)^{η−1} · αK_{t+1}^{α−1} = 1,  t = 0, 1, ..., T−1

The Euler equation does not hold for period T; no intertemporal choice there, just eat everything.
C_t = K_t^α − K_{t+1}

⇒ ((K_{t+1}^α − K_{t+2}) / (K_t^α − K_{t+1}))^{η−1} · αK_{t+1}^{α−1} = 1

⇒ ((K_{t+1}^α − K_{t+2}) / (K_t^α − K_{t+1}))^{η−1} · αK_{t+1}^{α−1} − 1 = 0

Use the budget constraint to sub out for the C terms in the Euler equation.
The T-dimensional nonlinear equation system

((K_1^α − K_2) / (K_0^α − K_1))^{η−1} · αK_1^{α−1} − 1 = 0
((K_2^α − K_3) / (K_1^α − K_2))^{η−1} · αK_2^{α−1} − 1 = 0
...
((K_T^α − K_{T+1}) / (K_{T−1}^α − K_T))^{η−1} · αK_T^{α−1} − 1 = 0,  with K_{T+1} = 0

This is a set of T nonlinear equations in T unknowns, which we are going to solve using a modified Newton-Raphson method, useful in many, many contexts.
This also, like our warm-up example, converts the solution of nonlinear equations into the sequential solving of linear approximations.
Unidimensional Newton-Raphson

Find x*, s.t. f(x*) = 0
— This is our ultimate goal, formalised.

g_0(x) = f(x_0) + f′(x_0)(x − x_0)
— We approximate f linearly around some initial point x_0, using the first 2 terms of Taylor's theorem.

Find x_1, s.t. g_0(x_1) = 0:
x_1 = x_0 − f(x_0)/f′(x_0)
— Then we solve for the x_1 that makes the approximant g_0(x_1) = 0.
The iterative, unidimensional NR method

x_{s+1} = x_s − f(x_s)/f′(x_s)

If we iterate on this equation, the estimates slowly converge on the roots of the original system.
We can get the problem that new guesses lie outside the domain of the function, so we modify the method by placing bounds on the allowable guesses.
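The iteration above, with the bounds modification, can be sketched in a few lines of Python. The function, the bracket and the tolerances are illustrative choices, not from the slides:

```python
# A minimal sketch of the modified univariate Newton-Raphson iteration
# x_{s+1} = x_s - f(x_s)/f'(x_s), with guesses clipped to a bracket [lo, hi]
# so that iterates cannot leave the domain of f.

def newton_1d(f, fprime, x0, lo, hi, tol=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        # modification: keep the new guess inside the admissible interval
        x_new = min(max(x_new, lo), hi)
        if abs(f(x_new)) < tol:
            return x_new
        x = x_new
    return x

# illustrative example: the root of f(x) = x^2 - 2 on [0, 2] is sqrt(2)
root = newton_1d(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0, lo=0.0, hi=2.0)
```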
Graphical illustration of unidimensional Newton-Raphson

Source: Heer and Maussner, Chapter 8: tools.
Multidimensional Newton-Raphson

0 = f(x):

[0; 0; ...; 0] = [f_1(x_1, ..., x_n); f_2(x_1, ..., x_n); ...; f_n(x_1, ..., x_n)]

Defining the vector function f via a matrix representation of our system of equations.

In our deterministic yeoman farmer model, the f's will look like this:

f_1() = ((K_1^α − K_2)/(K_0^α − K_1))^{η−1} · αK_1^{α−1} − 1  (for any K_3, K_4, ..., K_T)
f_2() = ...
f_{n=T}() = ((K_T^α − K_{T+1})/(K_{T−1}^α − K_T))^{η−1} · αK_T^{α−1} − 1  (for any K_1, ..., K_{T−2})
Defining the Jacobian, and populating it using the deterministic yeoman farmer e.g.

J(x) = [ f_1^1 f_2^1 ... f_n^1 ; f_1^2 f_2^2 ... f_n^2 ; ... ; f_1^n f_2^n ... f_n^n ],  where f_j^i = ∂f_i(x)/∂x_j

Rather than a derivative, we now work with a Jacobian, a matrix of partial derivatives.

f_1^1 = df_1/dx_1 = df_1/dK_1
 = d/dK_1 [ ((K_1^α − K_2)/(K_0^α − K_1))^{η−1} · αK_1^{α−1} − 1 ]
 = (η−1)(K_1^α − K_2)^{η−2}(K_0^α − K_1)^{1−η}(αK_1^{α−1})²
   + (η−1)(K_1^α − K_2)^{η−1}(K_0^α − K_1)^{−η} · αK_1^{α−1}
   + (K_1^α − K_2)^{η−1}(K_0^α − K_1)^{1−η} · α(α−1)K_1^{α−2}

f_2^1 = −(η−1)(K_1^α − K_2)^{η−2}(K_0^α − K_1)^{1−η} · αK_1^{α−1}

f_n^1 = 0,  n = 3, ..., T

Defining the first row: most entries in the Jacobian matrix will be zeros.
From linear approximation to our system, to iterative approximations to the roots

g(x) = f(x_0) + J(x_0)dx,  dx = x − x_0
— Linear approximation to the multivariate system.

x_1 = x_0 − J(x_0)^{−1} f(x_0)
— Solution to g(x) = 0 at the point x_0.

x_{s+1} = x_s − J(x_s)^{−1} f(x_s)
— Solution to g(x) = 0 at the point x_s; and iterations on this converge globally on the solution x* to the nonlinear system, with some modifications.
General, multivariate, modified NR algorithm

1. Initialise: choose x_0 ∈ [x̲, x̄], i = 0
2. (i) Compute J(x_i);
2. (ii) solve f(x_i) + J(x_i)(x_{i+1} − x_i) = 0;
2. (iii) if x_{i+1} ∉ [x̲, x̄], choose λ ∈ (0, 1) s.t. x̃_{i+1} = x_i + λ(x_{i+1} − x_i) ∈ [x̲, x̄];
2. (iv) set x_{i+1} = x̃_{i+1}
3. Check convergence: stop if ‖f(x_{i+1})‖ ≤ ε, else set i = i + 1 and go to 2.

The modification checks if the new guess lies within the domain of the function.
Note this is H-M algorithm 8.5.1, edition 1.
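A minimal Python sketch of the multivariate version of the algorithm above. The 2-equation test system and the box bounds are invented for illustration, not from the slides:

```python
import numpy as np

def newton_system(f, jac, x0, lo, hi, tol=1e-10, max_iter=100):
    """Modified multivariate Newton-Raphson: solve f(x) = 0 via
    x_{i+1} = x_i - J(x_i)^{-1} f(x_i), shrinking the step with a
    lambda in (0,1) whenever the raw update leaves the box [lo, hi]."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            return x
        step = np.linalg.solve(jac(x), -fx)
        lam, x_new = 1.0, x + step
        while np.any((x_new < lo) | (x_new > hi)) and lam > 1e-12:
            lam *= 0.5                     # damp until the guess is admissible
            x_new = x + lam * step
        x = x_new
    return x

# illustrative example: intersect the circle x^2 + y^2 = 2 with the line x = y
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
sol = newton_system(f, jac, [2.0, 0.5], lo=0.0, hi=3.0)
```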
Remarks

Many solution methods boil down to solving nonlinear problems like this.
Many alternative methods and refinements.
Note the similarities with before: converting the problem of solving a nonlinear problem into one of finding ever better solutions to the linear approximations to the nonlinear problem.

Alternative methods and refinements

Using numerical derivatives based on differences in this NR method.
Using a secant method explicitly based on differences.
Stochastic modifications to prevent getting stuck at a local optimum. [Simulated annealing]
Many pre-coded functions to use, but it helps to know the basic method to be able to use them effectively.
PS on perfect foresight and NR

We used this method in our paper on the zero bound too.
Instead of linearising the NK model, and then working with the zero bound using a guess-and-verify method:
Formulate the full, nonlinear NK model, including the ZLB, solving a system involving a finite number of periods, one equation for each period, as you just saw for the RBC model.

What about the infinite horizon RBC model?

The same method can be used to solve the infinite horizon model, on the assumption that we get very close to steady state in some finite number of periods.
Forward iteration, perfect foresight with infinite horizon

max_{ {C_t} } E_0 Σ_{t=0}^{∞} β^t log(C_t)
s.t. K_{t+1} = K_t^α − C_t
C_t ≥ 0
K_t ≥ 0

Now we have discounted utility to make sure the maximand is bounded.
We are going to assume that there is some finite time at which the model should have reached steady state capital, or very close to it.
Then, from some initial conditions, solve for the trajectory to this steady state.
Method the same. Derive the EE, combine with the resource constraint, formulate a system of nonlinear equations. Solve the root-finding problem using NR.
Nonlinear system for forward iteration

0 = (K_1^α − K_2)/(K_0^α − K_1) − αβK_1^{α−1}
0 = (K_2^α − K_3)/(K_1^α − K_2) − αβK_2^{α−1}
...
0 = (K_T^α − K_*)/(K_{T−1}^α − K_T) − αβK_T^{α−1}

Now we have log utility, and discounting, so a slight difference in the RHS.
Note before the final K was 0; now K_* = steady state.
This is T equations in T unknowns. K_0 is given.
Iterations needed: if K_T is too far from ss, then need to lengthen T.
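The system above can be solved with the modified NR method just described. A Python sketch under illustrative parameter values; with log utility and full depreciation the known policy K_{t+1} = αβK_t^α gives a check on the answer:

```python
import numpy as np

# Illustrative parameters: log utility, f(K) = K^alpha, full depreciation.
alpha, beta = 0.3, 0.95
k_star = (alpha * beta) ** (1.0 / (1.0 - alpha))      # steady-state capital
T = 30
k0 = 0.5 * k_star

def residuals(k_path):
    """Stack the T Euler equations, with K_0 given and K_{T+1} = K_*."""
    k = np.concatenate(([k0], k_path, [k_star]))
    c = k[:-1] ** alpha - k[1:]                       # C_t = K_t^alpha - K_{t+1}
    return c[1:] / c[:-1] - alpha * beta * k[1:-1] ** (alpha - 1.0)

def feasible(k_path):
    k = np.concatenate(([k0], k_path, [k_star]))
    return np.all(k_path > 0) and np.all(k[:-1] ** alpha - k[1:] > 0)

x = np.linspace(k0, k_star, T + 2)[1:-1]              # initial guess: straight line
for _ in range(100):
    f0 = residuals(x)
    if np.max(np.abs(f0)) < 1e-12:
        break
    J = np.empty((T, T))
    for j in range(T):                                # forward-difference Jacobian
        xp = x.copy()
        xp[j] += 1e-7
        J[:, j] = (residuals(xp) - f0) / 1e-7
    step = np.linalg.solve(J, -f0)
    lam = 1.0                                         # damp the step if it makes
    while not feasible(x + lam * step) and lam > 1e-10:   # the path infeasible
        lam *= 0.5
    x = x + lam * step
```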
4. Solving the growth model using the parameterised expectations algorithm

Solving the RBC model using parameterised expectations

Den Haan and Marcet (1990).
Analogy with learning; connection with Marcet's work with Sargent on the properties of learning algorithms.
Lots of problems with it, not that widely used now.
But very simple to conceive and program, illustrates the problem nicely, and an introduction into the general class of projection methods.
The RBC or growth model

max_{ {c_t, k_{t+1}} } E_t [ Σ_{t=0}^{∞} β^t (c_t^{1−γ} − 1)/(1 − γ) ]
subject to:
c_t + k_{t+1} = z_t k_t^α + (1 − δ)k_t
ln(z_t) = ρ ln(z_{t−1}) + σe_t,  e_t ~ N(0, 1)

c_t = c(k_t, z_t),  k_{t+1} = k(k_t, z_t)

The solution is a pair of functions linking the choice variables (consumption and capital for use tomorrow) to the state variables [shock z, and capital today].

c_t^{−γ} = E_t[ β c_{t+1}^{−γ} (α z_{t+1} k_{t+1}^{α−1} + 1 − δ) ]
c_t + k_{t+1} = z_t k_t^α + (1 − δ)k_t

The solution has to satisfy the Euler equation [in more complex models, the full set of FONCs] and the resource constraint.
In the special case of log utility [γ = 1] and full depreciation [δ = 1], these solutions can be calculated analytically and are known to be:

c_t = (1 − αβ) z_t k_t^α,  k_{t+1} = αβ z_t k_t^α
Choosing the approximant: expectations in the Euler equation

g(k_t, z_t) = e^{a_1 ln k_t + a_2 ln z_t}

c_t^{−γ} = E_t[ β c_{t+1}^{−γ} (α z_{t+1} k_{t+1}^{α−1} + 1 − δ) ]

We approximate the conditional expectation on the RHS of the Euler equation above with a polynomial in the state variables.
NB: 1. it may be desirable to use a higher order polynomial;
2. there may be a choice about which function to approximate [as here], or more than one function you have to approximate. E.g. there may be more than one agent forming expectations!
PEA algorithm

1. Draw a long sequence of values for the shocks e_t to generate the exogenous technology shock process z_t. (Long = 100,000?)
2. Choose a starting value for a_1, a_2 and a starting value for k_0, perhaps the steady state.
3. Simulate the model by:
3.1 computing c_0^{−γ} = e^{a_1 ln k_0 + a_2 ln z_0}
3.2 solving for k_1 from the resource constraint, thus: k_1 = z_0 k_0^α + (1 − δ)k_0 − c_0
3.3 pushing time forward one period and returning to 3.1
4. Compute residuals by:
4.1 for each t, letting (c_t^true)^{−γ} = β c_{t+1}^{−γ} (α z_{t+1} k_{t+1}^{α−1} + 1 − δ) and (c_t^app)^{−γ} = e^{a_1 ln k_t + a_2 ln z_t}
4.2 R_t = c_t^true − c_t^app
5. Update a_1, a_2 by choosing them to min Σ_{t=0}^{T} R_t² [nonlinear least squares].
6. Check convergence. If converged, stop, else return to 3.
PEA in words

1. Parameterise the expectations function, initialise coefficients.
2. Draw one very long sequence of shocks.
3. Simulate the model.
4. Find a new set of coefficients for 1. using nonlinear least squares, by comparing the polynomial prediction for c_t with the simulated one.
5. Check for convergence. If not converged, go to 3 and continue.
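A compact Python sketch of that loop, under assumptions chosen to keep it checkable: log utility (γ = 1) and full depreciation (δ = 1), so the true solution c_t = (1 − αβ)z_t k_t^α is known. Two liberties are taken: a constant a_0 is added to the approximant (the two-coefficient version above cannot fit the constant in the true solution), and the nonlinear least squares update is replaced by OLS in logs for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, rho, sigma = 0.33, 0.96, 0.9, 0.02
T = 5000

eps = rng.standard_normal(T + 1)                 # 1. a long draw of shocks
lnz = np.zeros(T + 1)
for t in range(1, T + 1):
    lnz[t] = rho * lnz[t - 1] + sigma * eps[t]
z = np.exp(lnz)

k_ss = (alpha * beta) ** (1.0 / (1.0 - alpha))
a_true = np.array([-np.log(1.0 - alpha * beta), -alpha, -1.0])   # known answer
a = a_true + np.array([-0.10, 0.05, 0.10])       # 2. start away from it
lam = 0.5                                        # damped updating weight

for _ in range(200):
    k = np.empty(T + 1)
    c = np.empty(T)
    k[0] = k_ss
    for t in range(T):                           # 3. simulate given the approximant
        c[t] = np.exp(-(a[0] + a[1] * np.log(k[t]) + a[2] * lnz[t]))
        c[t] = min(c[t], 0.99 * z[t] * k[t] ** alpha)    # keep capital positive
        k[t + 1] = z[t] * k[t] ** alpha - c[t]
    # 4. realised value of the term inside the conditional expectation
    e = beta * alpha * z[1:T] * k[1:T] ** (alpha - 1.0) / c[1:T]
    # 5. update by regressing ln(e_t) on (1, ln k_t, ln z_t)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T - 1]), lnz[:T - 1]])
    a_new = np.linalg.lstsq(X, np.log(e), rcond=None)[0]
    if np.max(np.abs(a_new - a)) < 1e-10:        # 6. check convergence
        break
    a = lam * a + (1.0 - lam) * a_new
```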
Problems with PEA

Multicollinearity when you try to get more accurate approximations by including higher order terms in the polynomial.
E.g.: the square of the shock is correlated with the shock itself.
Solution accurate only close to the steady state, since simulations live mostly there.
Potential for instability in the updating algorithm.
Analogy with intertemporal learning models, and projection facilities for solving the problem.
Resolving PEA convergence problems

Homotopy: use solutions to problems you can solve to inform starting values for problems you haven't yet solved.
Projection facilities and damped updating: slow down the updating process in the PEA algorithm.
5. 1 : a
1j1
p
, a
2,j1
p
arg min(

t0
T
R
2
)
5. 2 : a
i,j1
za
i,j
(1 z)a
i,j1
p
Tradeoff:
Tooslowupdatingrisksgettingstuckawayfromthesolution.
Toofastupdatingrisksinducinginstabilityinthealgorithmandnot
findingthesolution.
Example of (not very useful) homotopy algorithm

Objective: solve the RBC model defined by (γ_f, δ_f).

Initialise parameters:
RBC parameters: γ_0 = 1, δ_0 = 1
PEA parameters:
i) solve the model analytically, giving c_t^0(z_t, k_t), k_{t+1}^0(z_t, k_t)
ii) simulate the model, giving sequences c^0, k^0
iii) initialise values of a_1^0 and a_2^0 in c_t^{−γ} = e^{a_1 ln k_t + a_2 ln z_t} that minimise the gap between c_t simulated in ii) and c_t^{−γ}
Counter: set i = 1

Main loop:
1. Set (γ_i, δ_i) = (γ_{i−1}, δ_{i−1}) + λ[(γ_f, δ_f) − (γ_0, δ_0)]
2. Solve for c_t^i(z_t, k_t), k_{t+1}^i(z_t, k_t) using PEA, initialising (a_1, a_2) = (a_1^{i−1}, a_2^{i−1}), saving on convergence (a_1^i, a_2^i)
3. If (γ_i, δ_i) ≠ (γ_f, δ_f), set i = i + 1 and go to 1; else you are done.
Homotopy

General procedure: use the solution to a model you know how to solve well, to solve a model you don't.
Remark: even if not formally doing homotopy, it is often wise to build up to complex models by solving simpler versions, to gain insight and test code.
Remarks

Analogy between the intratemporal process of an iterative algorithm to find the best approximate expectations function...
...and intertemporal learning, with agents updating expectations functions as data become available each period.
The similarity is not just superficial; the maths of convergence, or lack of it, has connections.

Inter- and intratemporal learning analogy

Consumers start with an expectations function.
Take decisions.
Data realised.
Agents compute the surprise. New function updated as a function of the surprise, and the imprecision in estimates.
Gain damps updating.
Under some conditions, converges to REE.
5. Solving the RBC model using collocation, and Chebyshev polynomials

Overview of this section

Preliminary on approximation using Chebyshev polynomials.
Solving RBC models using Cheb polys to approximate the expectations function.
Iterating on the Bellman equation using Cheb polys to approximate the value function.

5.1 Function approximation and Chebyshev polynomials

Intro remarks

Function approximation is a technique widely encountered in theory, econometrics, numerical analysis.
We encountered it already in using linear approximations in Newton-Raphson to find the roots of a nonlinear equation system.
Regression is function approximation.
Fourier series approximation is used to compute the spectrum.
Here: we will approximate the expectations function in the consumer's FOC.
Typical function approximation problem, and solution

f(x) ∈ C[a, b]
f̂(x) = Σ_{i=1}^{n} α_i φ_i(x)
{φ_0(x), φ_1(x), ...}
f(x) = Σ_{i=1}^{∞} α_i φ_i(x),  φ_i(x) = x^i
f(x) ≈ f̂(x) = α_0 + α_1 x + α_2 x² + ... + α_p x^p

We want to approximate some f which is continuous over the [a, b] interval.
We do it by taking some weighted combination of basis functions, e.g. a family of polynomials.
For example, we can represent any continuous function EXACTLY with this infinite sum of polynomials.
Which in practice means using the first p terms.
Chebyshev polynomials

T_i(x) = cos(i · arccos x)
— General expression for the C.P. of order i.

T_{i+1}(x) = 2x T_i(x) − T_{i−1}(x)
— They can be defined recursively.

T_0(x) = cos(0 · arccos x) = 1
T_1(x) = cos(1 · arccos x) = x
T_2(x) = 2x T_1(x) − T_0(x) = 2x² − 1
T_3(x) = 2x T_2(x) − T_1(x) = 4x³ − 3x
— Here are the first four CPs.
Graph of 1st 3 Chebyshev polynomials

Source: Heer + Maussner, ch 8, p. 434.
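The recursion, and the maps between [a, b] and the [−1, 1] domain of the CPs, are easy to sketch in Python (the function names are mine):

```python
import math

def cheb(i, x):
    """T_i(x) on [-1, 1], from T_0 = 1, T_1 = x and T_{i+1} = 2x*T_i - T_{i-1}."""
    if i == 0:
        return 1.0
    t_prev, t = 1.0, x
    for _ in range(i - 1):
        t_prev, t = t, 2.0 * x * t - t_prev
    return t

def to_unit(z, a, b):
    """Map z in [a, b] into [-1, 1]: the X(z) transformation."""
    return 2.0 * z / (b - a) - (a + b) / (b - a)

def from_unit(x, a, b):
    """Map x in [-1, 1] back into [a, b]: the Z(x) transformation."""
    return (x + 1.0) * (b - a) / 2.0 + a

# the recursion agrees with the closed form T_i(x) = cos(i * arccos(x))
check = cheb(5, 0.3) - math.cos(5.0 * math.acos(0.3))   # approximately zero
```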
Domains of the CP and your f(x)

X(z) = 2z/(b − a) − (a + b)/(b − a),  z ∈ [a, b]
Z(x) = (x + 1)(b − a)/2 + a,  x ∈ [−1, 1]

CPs are only defined on [−1, 1]. X(z) converts values defined on [a, b] into values defined on [−1, 1]. Z(x) does the reverse.

f(x): [a, b] → R
g(x) = f(Z(x)): [−1, 1] → R

f̂(z; α) = Σ_{i=0}^{n} α_i T_i(X(z))

This will be our approximating function.
The CPs will take as an input transformations of the original points in our function.
Digression: deriving the Euler equation in the deterministic RBC model

max_{ {c_0, c_1, ...} } U = Σ_{t=0}^{∞} β^t (c_t^{1−η} − 1)/(1 − η),  β ∈ (0, 1),  η > 0
subject to k_{t+1} + c_t = k_t^α + (1 − δ)k_t,  δ ∈ (0, 1)
0 ≤ c_t
0 ≤ k_{t+1}
k_0 given

Non-logarithmic preferences, and the presence of only partial depreciation, will make an analytical solution impossible.
But as a first step we have to derive the Euler equation anyway; though that still leaves the task of solving for c(k), the policy function.
Forming and differentiating the Lagrangian in the RBC model

L = Σ_{t=0}^{∞} β^t { (c_t^{1−η} − 1)/(1 − η) − λ_t[ k_{t+1} + c_t − k_t^α − (1 − δ)k_t ] }
— The Lagrangian for the consumer's problem, which we differentiate wrt today's consumption and tomorrow's capital.

dL/dc_t = 0 = β^t [ (1 − η) c_t^{−η}/(1 − η) − λ_t ]
0 = β^t [ c_t^{−η} − λ_t ]
c_t^{−η} = λ_t
FOC wrt capital → the Euler equation

We differentiate L wrt k_{t+1}, set = 0, then substitute in the FOC wrt consumption to eliminate the Lagrange multipliers, arriving at the Euler equation.

dL/dk_{t+1} = d/dk_{t+1} { β^t [ (c_t^{1−η} − 1)/(1−η) − λ_t( k_{t+1} + c_t − k_t^α − (1−δ)k_t ) ]
  + β^{t+1} [ (c_{t+1}^{1−η} − 1)/(1−η) − λ_{t+1}( k_{t+2} + c_{t+1} − k_{t+1}^α − (1−δ)k_{t+1} ) ] }

−β^t λ_t + β^{t+1} λ_{t+1} [ αk_{t+1}^{α−1} + (1 − δ) ] = 0
0 = −λ_t + β λ_{t+1} [ αk_{t+1}^{α−1} + (1 − δ) ]
λ_t = β λ_{t+1} [ αk_{t+1}^{α−1} + (1 − δ) ]
[sub in the FOC wrt c_t:]
c_t^{−η} = β c_{t+1}^{−η} [ αk_{t+1}^{α−1} + (1 − δ) ]
1 = β [ αk_{t+1}^{α−1} + (1 − δ) ] (c_{t+1}/c_t)^{−η}

This is really a system of Euler equations, one for every t, which the policy function c(k) has to satisfy. This policy function is what we will approximate using CPs.
Solving for the steady state for capital

1 = β [ αk^{α−1} + (1 − δ) ] (c/c)^{−η}
1 − δ + αk^{α−1} = 1/β
αk^{α−1} = (1 − β(1 − δ))/β
k^{α−1} = (1 − β(1 − δ))/(βα)
k^{1−α} = βα/(1 − β(1 − δ))
k_* = [ βα/(1 − β(1 − δ)) ]^{1/(1−α)}

Start with the Euler equation.
Drop the time subscripts, since in the steady state all variables are constant.
Then solve for k_* in terms of the model's primitive parameters.
This value is going to help us define good bounds for capital when solving the dynamic RBC model.
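The closed form is one line of code; the parameter values below are illustrative, not from the slides:

```python
# Steady-state capital k_* = (beta*alpha / (1 - beta*(1 - delta)))**(1/(1 - alpha)),
# from the steady-state Euler equation of the deterministic RBC model.
beta, alpha, delta = 0.99, 0.33, 0.025

k_star = (beta * alpha / (1.0 - beta * (1.0 - delta))) ** (1.0 / (1.0 - alpha))

# check: the steady-state Euler equation beta*(alpha*k^(alpha-1) + 1 - delta) = 1
euler_gap = beta * (alpha * k_star ** (alpha - 1.0) + 1.0 - delta) - 1.0
```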
Bounding the state space

K = [k̲, k̄]
k̄ = 1.5 k_*,  k̲ = 0.5 k_*

Set the bounds experimentally, bracketing the steady state solution for capital.
The residual function

min_γ ∫_{k̲}^{k̄} R(γ, k)² dk

Like all projection methods, we start with the objective of minimising the integral of a residual function, evaluated across the state space, as a function of the params that define the approximating function.

S(γ) = [π(k̄ − k̲)/(2L)] Σ_{l=1}^{L} R(γ, k(k̃_l))² √(1 − k̃_l²)

The k̃_l's are mapped into [k̲, k̄].
This integral is going to be approximated using a sum, evaluated at points corresponding to the zeros of the Chebyshev polynomial. An example of quadrature, a form of numerical integration.
Remarks

Note there are TWO approximations going on.
We are approximating the policy function c(k) using Chebyshev polynomials.
And, in order to find the best such approximation...
...we are approximating the integral of the residual function defined over the capital space with a sum, using quadrature.
Computing the residual function for some arbitrary k_0

R(γ, k_0) = β (ĉ_1/ĉ_0)^{−η} (1 − δ + αk_1^{α−1}) − 1

The idea is that if we have got the gammas right in the approximating function, i.e. if we have a good approximation to the policy function, then the Euler equation should be close to holding. If it does, then the LHS = 0.
But then we need an algorithm for computing ĉ_0, ĉ_1 and k_1.
Computing the residual function

R(γ, k_0) = β (ĉ_1/ĉ_0)^{−η} (1 − δ + αk_1^{α−1}) − 1
— The EE-based residual function we are trying to compute.

1. compute ĉ_0 = ĉ(γ, k_0)
2. compute k_1 = k_0^α + (1 − δ)k_0 − ĉ_0
— Given ĉ_0(k_0), we get k_1 from the resource constraint.
3. compute ĉ_1 = ĉ(γ, k_1)
— Then we evaluate ĉ_1(k_1).

ĉ(γ, k_0) = Σ_{j=0}^{p} γ_j T_j(k̃(k_0))
— Using the CP formula here, where k̃(k_0) maps the latter into the [−1, 1] interval over which the CP is defined.
Not quite done!

We have to minimise that approximate integral of the [function of the] residual function, over the capital space...
...using some minimisation routine!
We have seen elements of how to do this when we used NR methods to find the zeros of a nonlinear equation system.
Can write a function that computes the sum, then use fminsearch, or csminwel, to minimise it.
Remarks

The output of this procedure is a function that specifies what c is, given some inherited level of k.
Remember this was the deterministic growth model.
If we want shocks, we then have a two-dimensional state space, and need to do CP in two dimensions.
6. Intro to dynamic programming, value and policy function iteration

Ljungqvist + Sargent: the "imperialism of recursive methods"

DP very useful.
Details of the conditions under which its lessons hold, and the numerical methods to operationalise it work, are hard. Avoided here.
Implementation in simple settings easy!
Essential if, e.g., agents' choices are discontinuous, when Lagrangian methods break down. [Accept/reject, enter or not, exit or not, bus or train]
Overview of dynamic programming section

1. Deterministic, finite element dynamic programming.
2. Stochastic, finite element DP.
3. Continuous state methods using collocation.
Along the way: Markov chain approximations to continuous random processes.
A few words about DP^2.

6.1 Dynamic programming: general setting
General setting

Choose {u_t}_{t=0}^{∞} to max Σ_{t=0}^{∞} β^t r(x_t, u_t)
s.t. x_{t+1} = g(x_t, u_t);  x_0 ∈ R^n given

Choose a sequence of control settings (u), such that a discounted sum of returns (r) is maximised, subject to the law of motion for the state (x).
We could be thinking of a consumer, or a firm, [and!/]or a large agent like a policymaker setting policy subject to laws of motion generated by the solutions to the small agents' problems [aggregated].
Musings on h and V

u_t = h(x_t)
x_{t+1} = g(x_t, u_t)
V(x_0) = max_{ {u_s}_{s=0}^{∞} } Σ_{t=0}^{∞} β^t r(x_t, u_t)

Dynamic programming turns the problem into a search for V [value function] and h [policy function] such that if we iterate on these 2 equations the discounted sum of returns is maximised.

max_u { r(x, u) + βV(x̃) },  x̃ = g(x, u)

If we knew V, we could compute the RHS for different u's and therefore find h.
Of course, we don't.
The Bellman equation

V(x) = max_u { r(x, u) + βV(g(x, u)) }
V(x) = r(x, h(x)) + βV(g(x, h(x)))

V(today's state) and V(tomorrow's) are linked by the Bellman equation above. Remember g is the transition law that produces tomorrow's state from today's.
The policy function h is a function that attains the max operator above, in which case if we substitute it in for u, we can get rid of the max operator.
Cake eating problem

Can't have your cake and eat it. Rather, can't eat your cake and have it!
How quickly should I eat my cake, given that it rots, that I don't want to go hungry, but tomorrow may never come?
Bellman equation: suppose that from tomorrow, I have an optimal cake eating plan. How much should I eat now?
Contraction mapping yielded by [iterations on] the Bellman equation

V_{j+1} = max_u { r(x, u) + βV_j(x̃) }

Under some conditions, starting from any guess of V_0, even V_0 = 0, iterating on the Bellman equation will converge to give the value function. This is called value function iteration. As a by-product of the max on the RHS it produces the policy function h.
As we will see, we can also iterate on the policy function: policy function iteration.
6.2 Deterministic, analytical DP in the growth model

Deterministic VFI in an RBC model we can solve analytically

max Σ_{t=0}^{∞} β^t u(c_t)
s.t. c_t + k_{t+1} = y_t + (1 − δ)k_t
y_t = f(k_t)
k_0 given

u = ln(c)
δ = 1
y_t = Ak^α

Deterministic: note no expectations term, and no random productivity process.
Cobb-Douglas production.
Log utility.
Full depreciation of capital k.
Deterministic VFI in the RBC model

V_0(k) = 0, ∀k
— We start with any old guess at V, V_0.

V_1(k) = max_{c,k′} { ln(c) + β · 0 }
— We plug it into the Bellman equation. As the BE instructs, we maximise it wrt the choice variables c, k′.

c(k) = Ak^α
V_1(k) = ln(Ak^α) = ln A + α ln k
— This gives us a new guess at the value function, V_1. [Now quite different from V_0.]

We repeat the process over and over until convergence.
2nd iteration on the Bellman equation

V_2(k) = max_{c,k′} { ln(c) + β[ln A + α ln k′] }
 = max_{k′} { ln(Ak^α − k′) + β[ln A + α ln k′] }

FOC: −1/(Ak^α − k′) + βα/k′ = 0
Ak^α − k′ = k′/(βα)
Ak^α = k′[1 + 1/(βα)]
k′ = βαAk^α/(1 + βα)
c = Ak^α − k′ = Ak^α/(1 + βα)

We have substituted in our V_1 guess.
Now we have to maximise this expression wrt c, k′.
Evaluated at these maximised values, we have our next guess at the value function, V_2.
As a by-product, we have the guess at the policy functions (expressions for c and k′ in terms of k).
Completing the 2nd iteration on the Bellman equation

V_2(k) = ln[ Ak^α/(1 + βα) ] + β{ ln A + α ln[ βαAk^α/(1 + βα) ] }
 = ln[ A/(1 + αβ) ] + β ln A + αβ ln[ βαA/(1 + βα) ] + α(1 + αβ) ln k

This is our V_2 guess once we have completed the maximisation [hence no max operator now, since this is done!].
Note that it is quite different from V_1.
In a computer program, we would use the difference between each iteration's guess to decide whether we should keep going or not.
A 3rd iteration?!

V_3(k) = max_{c,k′} { ln c + βV_2(k′) }
 = max_{k′} { ln(Ak^α − k′) + β[ ln(A/(1 + αβ)) + β ln A + αβ ln(βαA/(1 + βα)) + α(1 + αβ) ln k′ ] }

Continuing, we would: differentiate wrt k′; solve for the c, k′ that maximise; plug in the maximised values to produce V_3. Then substitute into the BE again to get an expression for V_4. And so on.
Final expression for value and policy functions

V(k) = (1 − β)^{−1} [ ln(A(1 − βα)) + (βα/(1 − βα)) ln(Aβα) ] + (α/(1 − βα)) ln k
c(k) = (1 − βα)Ak^α
k′(k) = βαAk^α

We would need infinitely many iterations to converge, but use of the algebra of the geometric series that emerge can produce these expressions.
There is also a guess-and-verify way to solve for V, but doing it this way illustrates some of the aspects of numerical iterations on the BE.
Remarks on VFI

Only have convergence under certain conditions. [See Stokey and Lucas.]
Can iterate on the policy function instead, under more restrictive conditions.
Can use a combination of the two to speed computation.
In general we won't have analytical expressions for the derivatives we used at each step [and of course no analytical expressions for V, c, k′].
6.3 Policy function iteration

Policy function iteration algorithm

1. Compute a feasible u = h_0(x).
2. Compute the value of pursuing this forever: V_{h_j}(x) = Σ_{t=0}^{∞} β^t r(x_t, h_j(x_t)), with j = 0, x_{t+1} = g[x_t, h_j(x_t)].
3. Generate a new policy u = h_{j+1}(x) that solves the 2-period problem: max_u { r(x, u) + βV_{h_j}[g(x, u)] }.
4. Iterate on 1–3 to convergence.
Remarks on policy function iteration

Conditions under which it converges are more restrictive.
But when these conditions are satisfied, it may converge faster.
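On a capital grid, the algorithm can be sketched in Python; the policy evaluation step (2.) is done exactly by solving the linear system V = r + βΠV implied by the current policy. Model and grid choices are illustrative (log utility, full depreciation), so the result can be checked against the known policy k′ = αβAk^α:

```python
import numpy as np

alpha, beta, A = 0.3, 0.95, 1.0
k_ss = (alpha * beta * A) ** (1.0 / (1.0 - alpha))
grid = np.linspace(0.5 * k_ss, 1.5 * k_ss, 201)
n = grid.size

c = A * grid[:, None] ** alpha - grid[None, :]        # c for each (k, k') pair
r = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

h = np.zeros(n, dtype=int)        # 1. feasible starting policy: k' = lowest point
for _ in range(100):
    # 2. value of following h forever: solve (I - beta*Pi) V = r(., h) exactly
    Pi = np.zeros((n, n))
    Pi[np.arange(n), h] = 1.0
    V = np.linalg.solve(np.eye(n) - beta * Pi, r[np.arange(n), h])
    # 3. one-step improvement against that value
    h_new = (r + beta * V[None, :]).argmax(axis=1)
    if np.array_equal(h_new, h):  # 4. stop when the policy no longer changes
        break
    h = h_new
policy = grid[h]
```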
6.4 Numerical, deterministic dynamic programming in the growth model

Numerical dynamic programming in Matlab or similar

k ∈ { k_l, k_l + (1/d)(k_u − k_l), k_l + (2/d)(k_u − k_l), ..., k_u }
k_l = 0.5 k_ss,  k_u = 1.5 k_ss
V_0(k_i) = [0, 0, 0, ...]
g(k_i) = [1, 1, 1, ...]

We start out by defining a vector of grid points for capital, k, and initialising vectors for the value function and the policy functions. The policy function vectors will contain numbers pointing us to different elements of the capital grid. I.e. g(1) = 5 would mean that if we inherit the first element of the capital grid, the best choice for capital tomorrow, k′, would be the fifth one.
A loop to compute the first new iteration on the Bellman equation in Matlab or similar

V_1(k_i) = max { u(c_j) + βV_0(k_j′) }

1. Set k = k(1) = k_l, y = Ak_l^α
2. Evaluate u(c_j) + βV_0(k_j′) for each possible c_j, k_j′
3. Choose the highest [in this case c = Ak^α, k′ = 0], and assign this number to V_1(1)
4. Go back to 1. and repeat for the next value of k, k(2) = k_l + (1/d)(k_u − k_l)...

Subsequent loops are the same, with the exception that once we have dispensed with V = 0, the V is going to influence the search for the maximising values of k′, c.
And we will keep going until we have measured and decided on convergence.
Stopping criterion for a value function iteration loop

stop if max|V_diff| = max|V_k − V_{k−1}| < ε

After each new iteration, we will compare the old and new V's element by element, and compute the maximum difference, and stop provided the absolute value of this difference is less than a pre-specified tolerance value.
This is done by using a while max(abs(vdiff)) > e ... end loop in Matlab.
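A vectorised Python sketch of the loop and the stopping rule, again using the log-utility, full-depreciation case so the resulting value and policy can be checked against the analytical expressions derived above (grid size and tolerance are illustrative):

```python
import numpy as np

alpha, beta, A = 0.3, 0.95, 1.0
k_ss = (alpha * beta * A) ** (1.0 / (1.0 - alpha))
grid = np.linspace(0.5 * k_ss, 1.5 * k_ss, 501)       # capital grid [k_l, k_u]

# period utility for every (k_i, k'_j) pair; infeasible pairs get -inf
cons = A * grid[:, None] ** alpha - grid[None, :]
u = np.where(cons > 0, np.log(np.maximum(cons, 1e-12)), -np.inf)

V = np.zeros(grid.size)
while True:
    val = u + beta * V[None, :]                       # RHS of the Bellman equation
    V_new = val.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:              # the stopping criterion above
        break
    V = V_new
policy = grid[val.argmax(axis=1)]                     # k' chosen at each grid point
```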
Remarks on numerical DP

The maximisation step can be very time consuming. The computer has to enumerate and check each candidate value for each policy choice.
Convergence greatly speeded up by good starting values.
Or even missing out the max step. [Accelerator.]
Syntax of loops can be important: vectorising.

Remarks on numerical DP (2)

An alternative stopping criterion is when the policy function [the vector (in our example) of pointers] does not change.
That is, if you aren't interested in V itself.
Simulating the model just requires the pointers.
The e.g. we worked out allows us to trace the trajectory from the initial k to the steady state.
6.5 Stochastic dynamic programming in the growth model

Stochastic dynamic programming

So far we have assumed there are no shocks.
So we can't solve and simulate the classic RBC model, driven by technology shocks.
To do this we have to enlarge the state space to include a dimension for the shocks.
So our V = V(k, A).
Finite element approximation of a continuous random process using Markov chains.

Digression on Markov chains

Some economic contexts best described with finite element random processes [regimes?].
Continuous random processes well approximated by Markov chains [Tauchen (1986)].
Markov chains

prob(x_{t+1}|x_t, x_{t−1}, ..., x_{t−k}) = prob(x_{t+1}|x_t)
— The Markov property.

π_0, x̄, P
— 3 objects needed to define a Markov chain: initial probabilities; a vector storing the possible values for the chain; a matrix defining the transition probabilities.

π_k′ = π_0′ P^k
— Computing the probabilities at t + k.
Ergodic, or stationary distribution of a Markov chain

π_k' = π_{k−1}' P      We can iterate on this equation to find the stationary distribution

π_∞' = π_∞' P,  i.e.  π_∞' (I − P) = 0      . . which happens to solve this equation.

p_ij > 0 ∀ i, j
or p_ij^k > 0 ∀ i, j, where P^k = P · P · . . . · P [k times], for some k ≥ 1

Such a distribution exists, and is independent of π_0, if either of these two conditions holds. Basically requires you can't randomly get stuck in one of the states.
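Iterating π' ← π'P to a fixed point can be sketched as below, using the three-state technology example defined on the next slide (matrix P and initial distribution π_0 as given there).

```python
# Transition matrix and initial distribution from the technology example.
P = [[0.3, 0.3, 0.4],
     [0.8, 0.1, 0.1],
     [0.2, 0.75, 0.05]]
pi = [0.1, 0.1, 0.8]

for _ in range(10000):                    # iterate pi' <- pi' P
    pi_new = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
    if max(abs(a - b) for a, b in zip(pi_new, pi)) < 1e-12:
        break
    pi = pi_new
# pi is now (numerically) the stationary distribution: pi' = pi' P.
```

Since every p_ij > 0 here, the first existence condition holds, so the limit is independent of the initial π_0.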
Defining a Markov chain for technology

A = [2, 1, 0.5]'      Technology can take three values: high, medium and low.

P = [0.3  0.3  0.4
     0.8  0.1  0.1
     0.2  0.75 0.05]      Prob of going from high to low is 0.4; prob of staying in medium if you start in medium is 0.1.

π_0 = [0.1, 0.1, 0.8]'      The initial-period probabilities of high, medium and low.
Simulating a Markov chain for technology

Proc for simulating initial-period technology:

1. draw x as a uniform random number, i.e. 0 ≤ x ≤ 1
2. if 0 ≤ x ≤ π_0(1), A_1 ← A(1);
   if π_0(1) < x ≤ π_0(1) + π_0(2), A_1 ← A(2);
   if π_0(1) + π_0(2) < x ≤ 1, A_1 ← A(3)
Proc for simulating subsequent realisations for technology:

1. set t = t + 1
2. draw x uniform
3. let s = technology state for previous period;
   if 0 ≤ x ≤ P(s,1), A_{t+1} ← A(1);
   if P(s,1) < x ≤ P(s,1) + P(s,2), A_{t+1} ← A(2);
   if P(s,1) + P(s,2) < x ≤ 1, A_{t+1} ← A(3).
4. go to 1. . .
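The two procedures above (one draw against π_0 for the initial period, then draws against the relevant row of P) can be sketched as follows; the chain length of 200 periods is an arbitrary illustration.

```python
import random

A = [2.0, 1.0, 0.5]                       # high, medium, low technology
P = [[0.3, 0.3, 0.4],
     [0.8, 0.1, 0.1],
     [0.2, 0.75, 0.05]]
pi0 = [0.1, 0.1, 0.8]

def draw_state(probs):
    """Invert one uniform draw against cumulative probabilities."""
    x, cum = random.random(), 0.0
    for s, p in enumerate(probs):
        cum += p
        if x <= cum:
            return s
    return len(probs) - 1                 # guard against float rounding

random.seed(0)
s = draw_state(pi0)                       # initial period: draw against pi0
path = [A[s]]
for t in range(200):                      # after that: draw against row s of P
    s = draw_state(P[s])
    path.append(A[s])
```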
Stochastic version of the Bellman equation

V_{j+1} = max_u { r(x, u) + β E[V_j(x') | x] }      Our general case modified to include shocks.

V_{j+1}(k, A) = max_{c,k'} { ln c + β E[V_j(k', A') | A] }      Our RBC case with shocks.

V_{j+1}(k, A_m) = max_{c,k'} { ln c + β Σ_{i=1..3} p_mi V_j(k', A_i) }      Expanding the expectations operator in our case of a discrete-state Markov chain used to approximate the shocks.
The new two-dimensional value function with random, 3-state technology

V_0(k, A) =
[ V_0(k = k_l, A = A(1))   V_0(k = k_l, A = A(2))   V_0(k = k_l, A = A(3))
  . . .                    . . .                    . . .
  V_0(k = k_u, A = A(1))   V_0(k = k_u, A = A(2))   V_0(k = k_u, A = A(3)) ]

The value function is now a matrix storing values corresponding to inherited outcomes for k from k_l to k_u, and in each of these cases, for inherited productivity levels from A(high) = A(1) to A(low) = A(3).
Filling one element of the Bellman Equation with a 3-state Markov chain for technology

V_{j+1}(k_l, A(3)) = max_{c,k'} { ln c + β [ 0.2 V_j(k', A' = A(1) | A = A(3))
                                           + 0.75 V_j(k', A' = A(2) | A = A(3))
                                           + 0.05 V_j(k', A' = A(3) | A = A(3)) ] }
Remarks on approximating an AR(1) for technology with a Markov chain

Before you implement your value function iteration, you will have to approximate technology with the MC.
Original method due to Tauchen (1986).
The more states, the more accurate the approximation.
In the limit, the approximant can be made equal to the continuous counterpart.
But, with more elements, the more time-consuming will be the VFI for a given capital grid size.
Or, to hold computing time constant, you will have to make do with a coarser grid for capital.
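A minimal sketch of a Tauchen (1986)-style discretisation of y' = ρy + ε is given below; the grid-width parameter m = 3 and the AR(1) parameters in the usage are illustrative choices, not from the slides.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tauchen(rho, sigma_eps, n, m=3.0):
    """Discretise y' = rho*y + eps, eps ~ N(0, sigma_eps^2), on n states."""
    sigma_y = sigma_eps / math.sqrt(1.0 - rho ** 2)   # unconditional sd of y
    step = 2.0 * m * sigma_y / (n - 1)                # even grid spacing
    grid = [-m * sigma_y + i * step for i in range(n)]
    P = []
    for yi in grid:                        # transition probs out of state yi
        row = []
        for j, yj in enumerate(grid):
            lo = (yj - rho * yi - step / 2) / sigma_eps
            hi = (yj - rho * yi + step / 2) / sigma_eps
            if j == 0:
                row.append(norm_cdf(hi))          # mass in leftmost bin
            elif j == n - 1:
                row.append(1.0 - norm_cdf(lo))    # mass in rightmost bin
            else:
                row.append(norm_cdf(hi) - norm_cdf(lo))
        P.append(row)
    return grid, P
```

For example, `tauchen(0.9, 0.1, 5)` returns a symmetric 5-point grid and a 5x5 transition matrix whose rows sum to one.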
6.6 Dynamic programming squared

Policymaker devising optimal unemployment compensation; agents solving an accept/reject search problem.
Nested value functions. Each agent's VF is a function of the other's.
These problems can be solved. LQ examples of optimal policy in optimising RBC/NK models.
Solved iteratively. Guess the VF for one agent; iterate on the other agent's BE. Then swap.
6.6 Solving growth model using Chebyshev collocation to solve the Bellman equation

Basic strategy

Approximate the unknown [in this case value] function with a finite combination of n known basis functions, coefficients on which are to be determined.
Require this approximant to satisfy the functional equation [in this case the Bellman Equation] at n prescribed points of the domain, known as the collocation nodes.
Functional equations and collocation: remarks

V(k, a) = max_{k'} { u(c) + β E[V(k', a')] }      This is a functional equation.

A regular equation problem gives us a known function, and asks us to find a value such that some condition is met, e.g. find x such that f(x) = 0.
A functional equation problem asks us to find an unknown function [here V(.)] such that some condition is met [here the condition stipulated in the Bellman Equation].
Collocation here converts the functional equation problem into a regular equation problem.
From the functional equation to the collocation equation

V(k) ≈ Σ_{j=1..n} c_j φ_j(k)

Approximate V with the weighted sum of Chebyshev Polynomials.
Then substitute in on both sides of the Bellman Equation.
Now we have the collocation equation, a regular but nonlinear equation system:

Σ_{j=1..n} c_j φ_j(k_i) = max_{k'} { u(k_i, k') + β E Σ_{j=1..n} c_j φ_j(k') },  i = 1 . . . n
The collocation equation

Φc = v(c),  Φ_ij = φ_j(k_i)      Collocation equation; collocation matrix

v_i(c) = max_{k' ∈ K} { u(k_i, k') + β E Σ_{j=1..n} c_j φ_j(k') }      Conditional value function at a particular capital value k_i
2 ways to solve the collocation equation

c ← Φ^{−1} v(c),  c^{s+1} = Φ^{−1} v(c^s)      Write in fixed point form, then iterate.

Φc − v(c) = 0,  c^{s+1} = c^s − (Φ − v'(c^s))^{−1} (Φ c^s − v(c^s))      Or, pose as a root-finding problem, and update using Newton's method.

Here v'() is the Jacobian of the value function at a particular value for k.
NB this is a system, not just one equation. One equation for every node.
7. Heterogeneous agents

Why heterogeneous agent models

Some RBC-like models have borrowers and lenders, consumers and entrepreneurs.
But may not be enough for some problems.
Can't study dynamics of income and wealth distribution, nor their impacts.
Existence of a representative agent rules out interesting problems like behaviour vis-à-vis uninsurable idiosyncratic risk.
Heterogeneity step by step

Start with no aggregate uncertainty, but uninsurable idiosyncratic risk.
Basic method is to
  take aggregate prices as given
  solve agents' decision problem using stochastic DP
  simulate individual agents to get the distribution
  compute aggregate quantities
  check to see if the market cleared at the assumed prices; if not, find the price that does clear the market, then repeat.
Move on to aggregate uncertainty.
Choices

Checking market clearing.
Solving individual agents' decision problem [we have seen some of these]
  Finite element DP?
  Continuous state DP via collocation?
Computing the stationary distribution of asset holdings
  Monte Carlo simulation
  Approximating the distribution function.
History of methods [to be completed]

Huggett (1993). Pure exchange economy; endowments, no production, no aggregate uncertainty. Idiosyncratic endowment risk.
Aiyagari (1994)
Krusell–Smith (1998); aggregate uncertainty. Means of distributions are sufficient statistics.
Early results on existence and uniqueness for simple cases. Generally not available for later, more realistic models.
Interesting recent papers

Heathcote et al (2009); Guvenen. Surveys.
McKay and Reis (2012). Countercyclical tax policy in a sticky-price het agent model.
Reiter (2006). Computation of het agent model using function approximation to model the distribution.
A het agent model

max E_0 Σ_{t=0..∞} β^t u(c_t)      Agents maximise discounted stream of utility from consumption.

u(c_t) = c_t^{1−ρ} / (1 − ρ),  ρ > 0

income_t = w_t if ε = e;  b_t if ε = u      Wages if employed, benefits if not.

π(ε'|ε) = prob(ε_{t+1} = ε' | ε_t = ε)

P = [ p_uu  p_ue
      p_eu  p_ee ]      Employment status is exogenous, and Markov, with known transition law.
a_{t+1} = (1 + (1 − τ) r_t) a_t + (1 − τ) w_t − c_t,  ε = e
a_{t+1} = (1 + (1 − τ) r_t) a_t + b_t − c_t,  ε = u

State-contingent budget constraint. Return on assets and wages taxed at rate τ.

u'(c_t) = β E_t [ u'(c_{t+1}) (1 + (1 − τ) r_{t+1}) ]      Euler equation for consumption.
Firms and production

Y_t = F(K_t, N_t) = K_t^α N_t^{1−α},  α ∈ (0, 1)      Competitive firms owned by households, maximise profits, subject to this technology.

r_t = α (N_t / K_t)^{1−α}
w_t = (1 − α) (K_t / N_t)^α      In equilibrium, factors are paid their marginal products.
Government

B_t = T_t      For simplicity: governments balance total spending on benefits and total tax revenues from capital income and wages, each period.

T_t = τ r_t K_t + τ w_t N_t      Total tax revenue, summed across agents.
B_t = ∫_{a_min} b_t f(u, a) da      Total benefit spending, summed across the unemployed.
Objects comprising a stationary eqm

V(ε, a), c(ε, a), a'(ε, a)      Value and policy functions for agents
f(e, a), f(u, a)      Time-invariant density functions for employment status and assets
w, r      Constant factor prices: wages and interest rates
K, N, T, B      Constant capital, labour input, taxes and benefits
Such that . .
Aggregate quantities obtained by summing across agents

K = Σ_{ε∈{e,u}} ∫_{a_min} a f(ε, a) da
N = ∫_{a_min} f(e, a) da
C = Σ_{ε∈{e,u}} ∫_{a_min} c(ε, a) f(ε, a) da
T = τ (wN + rK)
B = (1 − N) b
More conditions on the stationary equilibrium of the het agent economy

c(ε, a), a'(ε, a)      Policy functions solve the household max problem

r = α (N/K)^{1−α},  w = (1 − α) (K/N)^α      Factors paid marginal products

T = B      Government budget balanced

F(a', ε') = Σ_{ε∈{e,u}} π(ε'|ε) F(a'^{−1}(a', ε), ε)      Distribution function is time-invariant, i.e. if you take the product of it with the transition matrix, you get back the same distribution.
Basic stylised steps to compute the stationary distribution

Computation of individual policy functions, given aggregate quantities and prices.
  This step we have already done, 2 different ways.
Computation of the distribution given individual policy functions.
  This is the new step that you haven't seen.
Steps for computing the stationary eqm of the het agent model in more detail

1. Compute stationary employment N
2. Make initial guesses at K and τ
3. Compute the factor prices w and r
4. Compute household decision rules c(ε, a), a'(ε, a)
5. Compute F(a', ε'), the stationary distribution of assets for emp and unemp
6. Compute K and T that solve aggregate consistency
7. Compute τ that balances the budget
8. Update K and τ if necessary and go to 2.
1. Computing stationary employment

N_t = p_ue (1 − N_{t−1}) + p_ee N_{t−1}

Today's employment is yesterday's, times the chance that they stay in employment, plus yesterday's unemployed times the chance they leave unemployment.
Given any initial N_0 we can simply iterate on this equation to produce the stationary N.
We can also compute stationary N analytically.
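Both routes can be sketched as below. Setting N_t = N_{t−1} = N in the law of motion gives the analytical fixed point N = p_ue / (1 − p_ee + p_ue); the transition probabilities used are illustrative assumptions.

```python
# Illustrative transition probabilities (assumptions, not from the slides).
p_ue, p_ee = 0.5, 0.95

N = 0.3                                   # arbitrary initial guess N_0
for _ in range(100000):
    N_next = p_ue * (1 - N) + p_ee * N    # today's employment
    if abs(N_next - N) < 1e-12:           # iterate to the stationary N
        break
    N = N_next

N_analytic = p_ue / (1 - p_ee + p_ue)     # solve N = p_ue(1-N) + p_ee N
```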
5. Computing the stationary distribution

Three methods:
  Monte Carlo simulation
  Function approximation
  Discretisation of the density function
5. Computing the stationary dn using Monte Carlo simulation

1. Choose sample size N [not to be confused with N for employment], e.g. 10k
2. Initialise asset holdings a_0^i and employment status ε_0^i.
3. Using the policy function already computed, compute a'^i(ε^i, a^i) for each of the N agents.
4. Use random number generator to draw ε^i for each agent.
5. Compute summary moments of the distribution of asset holdings, e.g. ā, σ_a
6. Iterate until moments converge.
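The steps above can be sketched with a toy stand-in for step 3: an assumed linear saving rule a' = s·a + y with different (s, y) for employed and unemployed agents. In a real application that rule comes from the household's DP problem; all numbers here are illustrative.

```python
import random

random.seed(1)
n_agents = 5000                           # step 1: sample size
p_ue, p_eu = 0.5, 0.05                    # illustrative transition probabilities
rule = {"e": (0.9, 0.2), "u": (0.9, 0.02)}  # assumed (s, y) saving rules

assets = [1.0] * n_agents                 # step 2: initialise a_0 and eps_0
status = ["e"] * n_agents
for t in range(100):
    for i in range(n_agents):
        s, y = rule[status[i]]
        assets[i] = s * assets[i] + y     # step 3: policy-function step
        x = random.random()               # step 4: draw next employment status
        if status[i] == "u":
            status[i] = "e" if x < p_ue else "u"
        else:
            status[i] = "u" if x < p_eu else "e"

mean_a = sum(assets) / n_agents           # step 5: first moment of assets
```

In practice one would repeat the simulation (step 6) until such moments stop changing.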
Remarks

No guarantee that a convergent solution exists, or, if it does, that it is unique.
No guarantee that, even if you have a unique stationary distribution, this algorithm, even if correctly coded, will find it.
Much trial and error needed!
Krusell–Smith

Next logical step is to include aggregate uncertainty. Recall we had only idiosyncratic uncertainty. This is for next time.
That paper contained the methodological insight that agents could make do with just the means of the distributions.
Contains within it the hint that heterogeneity doesn't matter.
Recapping on all the different methods
Recap: quasi-linear, perfect foresight method to solve the zero bound problem

Guess when the nonlinearity stops binding, and the model is linear.
Solve the linear model.
Use that linear solution to solve backwards for the initial period, when the model is ALSO linear, and with 1 less equation, and 1 fewer unknowns.
Verify that the guess holds by checking the max policy rule implies the ZLB binding.
Recap: perfect foresight, nonlinear Newton–Raphson method

RBC example.
Solve for the steady state. Problem: to solve for the trajectory given some initial value.
Derive the FOCs; substitute out for C using the resource constraint.
Formulate a system of nonlinear equations, one for each unknown period.
Solve using multidimensional NR.
Linearise around a point, then find the root of the linear system. Use that root [provided it falls in the domain] as the next point around which to approximate.
Recap: Parameterised expectations algorithm

RBC example.
Derive the Euler equation. Substitute in an approximate function, involving shocks and states.
Draw a large time series of shocks.
Choose new parameters in the forecast function to minimise the gap between simulated forecasts and what consumption actually turns out to be in the simulation.
Recap: Projection using Chebyshev Polynomials

Another example of function approximation.
Approximate at the Chebyshev nodes, gaining global accuracy for a given computational time.
Approximate policy functions using CPs. Define a residual function using the Euler Equation.
Minimise the integral of this residual function, approximating that with a sum of residuals at the nodes.
Recap: deterministic dynamic programming

Formulate the Bellman equation: a recursive definition of the value function.
Magic: starting from any degenerate guess, iterate on the Bellman equation to get the solution.
Discretise the state [in this case, just capital] space.
Essential when choices are discontinuous.
Recap: stochastic dynamic programming

RBC: approximate the technology shock with a finite-state Markov chain.
Now the expectation of the value function next period is a probability-weighted sum of the different choices under the different outcomes for the shock.
Recap: using Chebyshev collocation to solve the Bellman equation

Approximate the value function with CPs.
An alternative to discretising the capital/shock space.
Recap: Aiyagari

Guess the real interest rate that clears the capital market.
At that real rate, solve the individual's problem using dynamic programming.
Sum up individual capital demand, and see what interest rate clears the market.
Repeat until convergence.
Recap: Krusell–Smith