
MFx Macroeconomic Forecasting

IMFx

This training material is the property of the International Monetary Fund (IMF) and is intended for use in IMF Institute for Capacity Development (ICD) courses. Any reuse requires the permission of the ICD.
EViews is a trademark of IHS Global Inc.
Properties of Time Series

L-1: Introduction

Introduction
General comments
Univariate analysis
Two general classes of processes
Both science and art (judgement):
Understanding behavior and forecasting
Assessing/testing
Introduction
Univariate analysis
Stochastic process vs time series

Draw from the process:
p.d.f. of Yt, p.d.f. of Yt+1, p.d.f. of Yt+2

Time series: a subset of the draw
Introduction
Two general classes of processes
Stationary vs nonstationary
Unchanged distribution (pdf) over time?
Covariance stationary:
Unconditional mean and variance constant:
E(Y_t) = E(Y_{t-j}) = μ,  Var(Y_t) = Var(Y_{t-j}) = σ²_Y
Covariance depends on the time j that has elapsed between observations, not on the reference period:
Cov(Y_t, Y_{t-j}) = Cov(Y_s, Y_{s-j}) = γ_j
Introduction
Stationary? (figure: example series, some marked as nonstationary)
Introduction
Understanding behavior and forecasting
Is Y stationary?
  Yes → Find the best ARMA model → Forecast
  No → Transform (difference or other)
Introduction
Assessing and Testing
Is Y stationary?
  Yes → Find the best ARMA model (M3, Part 1) → Forecast → Diagnostic tests (M4)
  No → Unit root tests (M3, Part 2) → Transform (difference or other) → Diagnostic tests
Outline
Part 1: Stationary processes
Identification
Estimation & Model Selection
Putting it all together
Part 2: Nonstationary processes
Characterization
Testing
Properties of Time Series

Part 1: Stationary Time Series

L-2: Identification

Part 1: Stationary Time Series
Just to remind you:
Identification
Estimation & Model Selection
Putting it all together
Identification
The first step is visual inspection: graph and observe your data.

"You can observe a lot just by watching."
Yogi Berra
Identification
Does the series look stationary?
(figures: AR(1); MA(1); AR(1) with trend; AR(1) with break)
Identification
Note: differencing can remove the trend:
y*_t = y_t − y_{t−1}
Identification
Assuming that the process is stationary, there are three basic types that interest us:

Autoregressive (AR): y_t = a + b_1 y_{t−1} + b_2 y_{t−2} + ... + b_p y_{t−p} + ε_t

Moving Average (MA): y_t = u_t + β_1 u_{t−1} + β_2 u_{t−2} + ... + β_q u_{t−q}

Combined (ARMA): y_t = a + b_1 y_{t−1} + ... + b_p y_{t−p} + u_t + β_1 u_{t−1} + ... + β_q u_{t−q}
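These three processes can be simulated directly from their defining equations. A minimal Python/numpy sketch (my own illustration; the course materials themselves use Excel and EViews, and the parameter values here are hypothetical):

```python
import numpy as np

def simulate_arma(a, b, beta, n, seed=0):
    """Simulate y_t = a + b*y_{t-1} + u_t + beta*u_{t-1}, an ARMA(1,1).
    b = 0 gives an MA(1); beta = 0 gives an AR(1). Initialized at zero."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a + b * y[t - 1] + u[t] + beta * u[t - 1]
    return y

ar1 = simulate_arma(a=0.0, b=0.7, beta=0.0, n=500)   # AR(1), b_1 = 0.7
ma1 = simulate_arma(a=0.0, b=0.0, beta=0.7, n=500)   # MA(1), beta_1 = 0.7
```

Since the recursion starts at zero, in practice one would discard a short burn-in sample before analyzing the series.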
Identification
Some notation:
AR(p), MA(q), ARMA(p,q), where p, q refer to the order (maximum lag) of the process

ε_t is a white-noise disturbance:
E(ε_t) = 0,  Var(ε_t) = E(ε_t²) = σ²,  Cov(ε_t, ε_s) = 0 if t ≠ s
Properties of Time Series

Part 1: Stationary Time Series

L-3: Some tools for identification

Part 1: Stationary Time Series
Where are we? Where are we going?

Stationary process for y (visual inspection)
Learned about possible processes for y
Need to identify which one in order to understand, then eventually forecast y
→ tools to help identify
Part 1: Stationary Time Series
Autocovariance and autocorrelation
Relations between observations at different lags:
Autocovariance: γ_j = E[(y_t − μ)(y_{t−j} − μ)]
Autocorrelation: ρ_j = γ_j / γ_0
ACF or Correlogram: graph of autocorrelations at each lag
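The sample autocorrelations follow straight from these definitions. A Python/numpy sketch (an illustration of mine, not part of the course materials; EViews produces the correlogram directly):

```python
import numpy as np

def acf(y, max_lag):
    """Sample autocorrelations r_j = gamma_hat_j / gamma_hat_0, j = 1..max_lag."""
    y = np.asarray(y, dtype=float)
    d = y - y.mean()
    gamma0 = np.sum(d * d)
    return np.array([np.sum(d[j:] * d[:-j]) / gamma0
                     for j in range(1, max_lag + 1)])

# A perfectly alternating series: r_1 near -1, r_2 near +1
r = acf([1.0, -1.0] * 50, max_lag=2)
```

A plot of `r` against the lag number is exactly the correlogram described above.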
Part 1: Stationary Time Series
Back to our previous examples. Different patterns:
AR(1), b_1 = 0.7: geometric decay
MA(1), β_1 = 0.7: cutoff
Part 1: Stationary Time Series
Partial autocorrelation
The pth partial autocorrelation is the pth coefficient of a linear regression of y_t on its lags up to p:
y_t = a + b_1 y_{t−1} + b_2 y_{t−2} + ... + b_p y_{t−p} + e_t
Thus, PAC_p = b_p
Relationship between y_t and y_{t−p}, controlling for effects of other lags up to p
For example, for p = 3, regress y_t on y_{t−1}, y_{t−2}, y_{t−3}
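This regression definition translates directly into code. A Python/numpy sketch of the pth partial autocorrelation (an illustration under the OLS definition above; EViews computes the PACF for you):

```python
import numpy as np

def pacf_p(y, p):
    """p-th partial autocorrelation: the OLS coefficient on y_{t-p} in a
    regression of y_t on a constant and y_{t-1}, ..., y_{t-p}."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    X = np.column_stack([np.ones(n - p)] +
                        [y[p - k:n - k] for k in range(1, p + 1)])
    coefs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coefs[p]  # coefficient on the longest lag, y_{t-p}

# For an AR(1) with b = 0.7, PAC_1 should be near 0.7 and PAC_2 near 0
rng = np.random.default_rng(1)
e = rng.standard_normal(3000)
y = np.zeros(3000)
for t in range(1, 3000):
    y[t] = 0.7 * y[t - 1] + e[t]
pac1, pac2 = pacf_p(y, 1), pacf_p(y, 2)
```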
Part 1: Stationary Time Series
Some identifiable patterns for ACF, PACF
Geometric decay of ACF in AR(1), oscillating if b < 0.
Part 1: Stationary Time Series
Some identifiable patterns for ACF, PACF
Gradual decay of ACF in AR(p), tending toward zero relatively quickly.
Part 1: Stationary Time Series
Some identifiable patterns for ACF, PACF
Abrupt dropoff in PACF in AR(p) after lag p
p.
Part 1: Stationary Time Series
Some identifiable patterns for ACF, PACF
Opposite patterns for MA(q): abrupt dropoff in ACF after lag q, gradual decay toward zero in PACF.
Properties of Time Series

Part 1: Stationary Time Series

L-4: Looking closer at identification

Part 1: Stationary Time Series
We now have a tool (ACF, PACF) to help us identify the stochastic process underlying a time series we are observing.

Now we will:
Summarize the basic patterns to look for
Observe an actual data series and make an initial guess
Next step: estimate (several alternatives) based on this guess
Part 1: Stationary Time Series
Summary of the patterns to look for:

Process      ACF                                     PACF
White noise  All ρ's = 0                             All b's = 0
AR(1)        Geometric decay (oscillating if b < 0)  Cutoff after lag 1; b_1 = ρ_1
AR(p)        Decays toward zero, may oscillate       Cutoff after lag p
MA(1)        Cutoff after lag 1                      Geometric decay (oscillating if β < 0)
MA(q)        Cutoff after lag q                      Decay (oscillating if β < 0)
ARMA(1,1)    Geometric decay after lag 1             Geometric decay after lag 1
             (oscillating if b < 0)                  (oscillating if b < 0)
ARMA(p,q)    Decay (direct or oscillatory)           Decay (direct or oscillatory)
             after lag q                             after lag p
Part 1: Stationary Time Series
Some tips

ACFs that do not go to zero could be a sign of nonstationarity
ACF of both AR, ARMA decays gradually; drops to 0 for MA
PACF decays gradually for ARMA, MA; drops to 0 for AR
Possible approach: begin with a parsimonious low-order AR, check residuals to decide on possible MA terms.
Part 1: Stationary Time Series
When looking at ACF, PACF:

Box and Jenkins provide the sampling variance of the observed ACF and PACF (r's and b's)
Permits one to construct confidence intervals around each, to assess whether each is significantly ≠ 0
Computer packages (EViews) provide this automatically!
Properties of Time Series

Part 1: Stationary Time Series

L-5: Estimation
Part 1: Stationary Time Series
Estimation & Model Selection:
Decide on plausible alternative specifications (ARMA)
Estimate each specification
Choose "best" model, based on:
Significance of coefficients
Fit vs parsimony (criteria)
White noise residuals
Ability to forecast
Account for possible structural breaks
Part 1: Stationary Time Series
Fit vs parsimony (information criteria):
Additional parameters (lags) automatically improve fit but may reduce forecast quality.
Tradeoff between fit and parsimony; widely used criteria:
Akaike Information Criterion (AIC)
AIC = T ln(SSR) + 2(p + q + 1)
Schwarz Bayesian Criterion (SBC)
SBC = T ln(SSR) + (p + q + 1) ln(T)
SBC will tend to prefer more parsimonious models than AIC.
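The two criteria are simple functions of the SSR and the parameter count, so competing specifications can be compared directly. A small Python sketch (illustrative values only):

```python
import numpy as np

def aic_sbc(ssr, T, p, q):
    """AIC and SBC exactly as defined on the slide (lower is better)."""
    k = p + q + 1                       # parameters, including the constant
    return T * np.log(ssr) + 2 * k, T * np.log(ssr) + k * np.log(T)

# Adding a lag with no change in SSR: AIC penalty grows by 2, SBC by ln(T)
a1, s1 = aic_sbc(ssr=100.0, T=200, p=1, q=0)
a2, s2 = aic_sbc(ssr=100.0, T=200, p=2, q=0)
```

Since ln(T) > 2 for any sample larger than about 7 observations, the SBC penalty on the extra lag is the larger one, which is why SBC tends to select the more parsimonious model.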
Part 1: Stationary Time Series
White noise errors:
Aim to eliminate autocorrelation in the residuals
(could indicate that the model does not reflect the lag structure well)
Plot standardized residuals (ε̂_t/σ̂)
No more than 5% of them should lie outside [−2, +2] over all periods
Look at r's, b's (and significance) at different lags
Box-Pierce statistic: joint significance test up to lag s:
Q = T Σ_{k=1..s} r_k²,  distributed χ²(s) under H0
H0: all r_k = 0;  H1: at least one r_k ≠ 0
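The Q statistic is easy to compute from the residual autocorrelations. A Python/numpy sketch (my illustration; EViews reports this, and the related Ljung-Box variant, automatically):

```python
import numpy as np

def box_pierce(resid, s):
    """Q = T * sum_{k=1..s} r_k^2; approximately chi2(s) under H0."""
    r = np.asarray(resid, dtype=float)
    T = len(r)
    d = r - r.mean()
    denom = np.sum(d * d)
    rk = np.array([np.sum(d[k:] * d[:-k]) / denom for k in range(1, s + 1)])
    return T * np.sum(rk ** 2)

rng = np.random.default_rng(0)
wn = rng.standard_normal(1000)          # white-noise "residuals": Q small
ar = np.zeros(1000)
for t in range(1, 1000):                # strongly autocorrelated residuals
    ar[t] = 0.9 * ar[t - 1] + wn[t]
q_wn, q_ar = box_pierce(wn, 10), box_pierce(ar, 10)
```

For the white-noise series Q stays near its expected value of s = 10, while the autocorrelated residuals produce a Q far above any χ²(10) critical value.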
Part 1: Stationary Time Series
(Anticipate M4)
Forecast ability:
Can assess how well the model forecasts out of sample:
Estimate the model for a sub-sample (for example, the first 250 out of 300 observations).
Use the estimated parameters to forecast for the rest of the sample (last 50)
Compute the forecast errors and assess:
Mean Squared Prediction Error
Granger-Newbold Test
Diebold-Mariano Test
Part 1: Stationary Time Series
(Anticipate M4)
Account for possible structural breaks:
Does the same model apply equally well to the entire sample, or do parameters change (significantly) within the sample?
How to approach:
Own priors/suspicion: Chow test for parameter change
If priors are not strong: recursive estimation, tests for parameter stability over the sample, for example, CUSUM.
Properties of Time Series

Part 1: Stationary Time Series

L-6: Putting it all together: Simulated Data

Part 1: Stationary Time Series
Let's first work with simulated data

Look at how MA(1), AR(1) series are simulated:
in Excel
in EViews
Part 1: Stationary Time Series
Let's first work with simulated data

View graph
View ACF, PACF: do they behave as expected?
Decide on alternative specifications (one correct, one or more incorrect)
Estimate and compare the results
Use the EViews Automatic ARIMA Modeling feature

Does the correct specification win?
Properties of Time Series

Part 1: Stationary Time Series

L-7: Putting it all together: Real world data

Part 1: Stationary Time Series
Now let's work with real world data
File M3_Series.wf1
View Sheet PE_ratios and choose a series:
Look at the graph and correlogram for a specific time series
Does it appear to be stationary?
Again, choose two (or more) possible specifications
Estimate, compare results (coefficients, AIC/SBC, ACF of residuals)
Use Automatic ARIMA Modeling
Which specification "wins"?
Properties of Time Series

Part 2: Nonstationary Time Series

L-8: Introduction

Part 2: Nonstationary Time Series
Introduction:
Key Questions:
What is nonstationarity?
Why is it important?
How do we determine whether a time series is nonstationary?
Part 2: Nonstationary Time Series
What is nonstationarity?
Recall from Part 1:
Covariance stationarity of y implies that, over time, y has:
Constant mean
Constant variance
Covariance between different observations that does not depend on time (t), only on the distance or lag between them (j):
Cov(Y_t, Y_{t−j}) = Cov(Y_s, Y_{s−j}) = γ_j
Part 2: Nonstationary Time Series
What is nonstationarity?
Thus, if any of these conditions does not hold, we say that y is nonstationary:
There is no long-run mean to which the series returns (economic concept of long-term equilibrium)
The variance is time-dependent. For example, it could go to infinity as the number of observations goes to infinity
Theoretical autocorrelations do not decay; sample autocorrelations do so very slowly.
Part 2: Nonstationary Time Series
Nonstationary series can have a trend:
Deterministic: a nonrandom function of time:
y_t = α + βt + u_t, where u_t is iid
Example: β = 0.45
Part 2: Nonstationary Time Series
Non-stationary series can have a trend:
Stochastic: random trend, varies over time
Random Walk: y_t = y_{t−1} + u_t
Random Walk with Drift: y_t = a_0 + y_{t−1} + u_t
(as before, u_t is iid)
a_0 is the drift; if a_0 > 0, then y_t will be increasing

Question: RW is a special case of what process?
Part 2: Nonstationary Time Series
Example of a random walk:
Part 2: Nonstationary Time Series
Example of a random walk with drift (drift term = 2.0)

Note: simulated with the same disturbances as in the random walk in the previous slide.
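The two paths are easy to reproduce: driving a pure random walk and a drifting random walk with the same disturbances, as on the slides, isolates the deterministic drift component. A Python/numpy sketch (seed is an arbitrary choice of mine; the drift of 2.0 matches the slide):

```python
import numpy as np

# The same iid disturbances drive both processes, so the only difference
# between the two paths is the deterministic drift component.
rng = np.random.default_rng(42)         # seed is an arbitrary choice
u = rng.standard_normal(300)

rw = np.cumsum(u)                       # y_t = y_{t-1} + u_t
drift = 2.0                             # drift term, as on the slide
rw_drift = np.cumsum(drift + u)         # y_t = drift + y_{t-1} + u_t
```

By construction, the gap between the two series is exactly the straight line drift × t, which is why the drifting path trends steadily upward while the pure random walk wanders.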
Properties of Time Series

Part 2: Nonstationary Time Series

L-9: Consequences

Part 2: Nonstationary Time Series
Key Questions:
What is nonstationarity?
Why is it important?
How do we determine whether a time series is nonstationary?
Part 2: Nonstationary Time Series
Consequences of non-stationarity
Shocks do not die out
Statistical consequences:
Non-normal distribution of test statistics
Bias in AR coefficients; poor forecast ability
Part 2: Nonstationary Time Series
Shocks do not die out
Consider a general AR(1): y_t = b y_{t−1} + ε_t
By recursive substitution, it can be expressed as a moving average:
y_t = b^t y_0 + ε_t + b ε_{t−1} + b² ε_{t−2} + ... + b^{t−2} ε_2 + b^{t−1} ε_1

The impact of shocks (disturbances) will depend on the value of b.
Part 2: Nonstationary Time Series
y_t = b^t y_0 + ε_t + b ε_{t−1} + b² ε_{t−2} + ... + b^{t−2} ε_2 + b^{t−1} ε_1

Three cases:
1. b < 1: b^t → 0 as t → ∞, so the effect of a shock will diminish as time elapses
2. b = 1: b^t = 1 for all t; the effect persists, y_t = y_0 + Σ_{i=0..t−1} ε_{t−i}; variance grows indefinitely with time
3. b > 1: shocks become more influential over time
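The three cases can be checked numerically, since the effect on y_t of a unit shock at time 0 is simply b^t. A one-function Python sketch (the horizon of 50 periods is an arbitrary choice):

```python
def shock_effect(b, t):
    """Effect on y_t of a unit shock at time 0 in y_t = b*y_{t-1} + e_t."""
    return b ** t

# The three cases on the slide, evaluated 50 periods after the shock
decay, persist, explode = (shock_effect(b, 50) for b in (0.7, 1.0, 1.05))
```

With b = 0.7 the shock has essentially vanished after 50 periods; with b = 1 it persists undiminished; with b = 1.05 its influence has grown.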
Part 2: Nonstationary Time Series
Statistical consequences of nonstationarity
Non-normal distribution of test statistics
Bias in autoregressive coefficients (b's); we might mistakenly estimate an AR(1); deficient forecasts
Usual confidence intervals for coefficients not valid
Part 2: Nonstationary Time Series
Statistical consequences of non-stationarity for multivariate regressions (anticipating M6)
For example, two unrelated nonstationary series y and z might appear to be related through a standard OLS regression:
High R²
t-statistics that appear to be significant
The true test: are the regression residuals stationary? (i.e., is there a long-run equilibrium relationship between y and z?)
Part 2: Nonstationary Time Series
Spurious regression, practical exercise:
Simulate two random walk series: y and z
(each with its own disturbances, and either can have drift or not)
Note that by construction, they are unrelated
Run OLS regression of y on z; evaluate coefficients, R², and plot residuals.
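A sketch of this exercise in Python/numpy (seed and sample size are arbitrary choices of mine; the course does the same experiment in EViews):

```python
import numpy as np

# Two independent random walks: unrelated by construction
rng = np.random.default_rng(7)
n = 500
y = np.cumsum(rng.standard_normal(n))
z = np.cumsum(rng.standard_normal(n))

# OLS of y on a constant and z
X = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1.0 - resid.var() / y.var()
```

Across many draws the R² is often surprisingly large, and the residuals wander like the series themselves rather than hovering around zero, which is the point of the exercise.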
Properties of Time Series

Part 2: Nonstationary Time Series

L-10: Unit root tests

Part 2: Nonstationary Time Series
Key Questions:
What is nonstationarity?
Why is it important?
How do we determine whether a time series is non-stationary?
Part 2: Nonstationary Time Series
Testing for non-stationarity
Recall the AR(1) model: y_t = b y_{t−1} + ε_t
Special case: RW, when b = 1
Stationarity requires |b| < 1
Generalizing to AR(p):
The roots of the polynomial below must all be > 1 in absolute value:
1 − b_1 z − b_2 z² − b_3 z³ − ... − b_p z^p = 0
If one of the roots = 1, then y is said to have a unit root.
Part 2: Nonstationary Time Series
Testing for non-stationarity
AR(1) model: y_t = b y_{t−1} + ε_t
Can test whether y is a driftless random walk:
H0: b = 1
Or, equivalently: Δy_t = γ y_{t−1} + ε_t, with γ = b − 1
H0: γ = 0
This is the Dickey-Fuller (DF) test:
Regress Δy on its lag, test for significance of the coefficient.
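The DF regression itself is ordinary OLS; only the critical values are nonstandard. A Python/numpy sketch of the no-constant case (my illustration; in practice one would use a packaged implementation such as EViews, which supplies the DF critical values):

```python
import numpy as np

def df_stat(y):
    """No-constant Dickey-Fuller regression: Delta y_t = gamma*y_{t-1} + e_t.
    Returns (gamma_hat, t_ratio). Under H0 the t-ratio follows the DF
    distribution, not the usual t distribution, so it must be compared
    with DF critical values (roughly -1.95 at 5% for this case)."""
    y = np.asarray(y, dtype=float)
    dy, ylag = np.diff(y), y[:-1]
    gamma = np.sum(ylag * dy) / np.sum(ylag ** 2)
    e = dy - gamma * ylag
    se = np.sqrt(e @ e / (len(dy) - 1) / np.sum(ylag ** 2))
    return gamma, gamma / se

rng = np.random.default_rng(3)
eps = rng.standard_normal(500)
rw = np.cumsum(eps)                     # unit root: gamma_hat near 0
ar = np.zeros(500)
for t in range(1, 500):                 # stationary AR(1): gamma near -0.5
    ar[t] = 0.5 * ar[t - 1] + eps[t]
g_rw, t_rw = df_stat(rw)
g_ar, t_ar = df_stat(ar)
```

The random walk yields a γ̂ near zero with a small t-ratio (fail to reject the unit root), while the stationary AR(1) yields a strongly negative t-ratio.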
Part 2: Nonstationary Time Series
Testing for non-stationarity
Can extend the simple DF test in the previous slide:
Intercept: Δy_t = a_0 + γ y_{t−1} + ε_t
Intercept and time trend: Δy_t = a_0 + γ y_{t−1} + a_2 t + ε_t
In all three cases, H0: γ = 0; y has a unit root
Rejecting the unit root = finding that y is stationary
Note: critical values for the t-statistics of γ will vary depending on whether intercept, trend are included.
Part 2: Nonstationary Time Series
Some terminology
Order of integration: the number of times a series y must be differenced to become stationary
Thus, if y is "integrated of order zero", I(0), then it is stationary (no differencing needed).
That is, it is stationary in levels
If y is I(1), then its first difference (Δy) is stationary
...and so on
Part 2: Nonstationary Time Series
Moving beyond white noise disturbances
The DF test assumes that ε_t is white noise.
However, if ε_t is autocorrelated, we need a different version of the test, allowing for higher-order lags:
Augmented Dickey-Fuller (ADF) test:
Δy_t = a_0 + a_2 t + γ y_{t−1} + Σ_{i=1..p} β_i Δy_{t−i} + ε_t,
where γ = −(1 − Σ_{i=1..p} b_i) and β_i = −Σ_{j=i+1..p} b_j
Part 2: Nonstationary Time Series
ADF test
As with DF, ADF tests whether the coefficient on y_{t−1} (γ) = 0
Must make choices:
Intercept, trend, both, none?
p: how many lags? (test statistics are very sensitive to p)
AIC
SBC
General-to-specific (start out with a large p, then re-estimate with successively smaller p)
Properties of Time Series
Part 2: Nonstationary Time Series
L-11: Testing for nonstationarity,
alternative tests
Part 2: Nonstationary Time Series
DF, ADF have been found to have low power in certain circumstances:
Stationary processes with near-unit roots
For example, difficulty distinguishing between b = 1 and b = 0.95, especially with small samples.
Trend stationary processes
So alternative tests have been designed.
Part 2: Nonstationary Time Series
Phillips-Perron (PP) Test:
Formulation: Δy_t = a_0* + γ* y_{t−1} + u_t, where u_t is I(0) and may be heteroskedastic and autocorrelated, that is, following an ARMA(p,q).
H0: γ* = 0
PP corrects for any serial correlation and heteroskedasticity in the errors u_t by directly modifying the test statistics.
One advantage of PP: no need to specify a lag length.
Part 2: Nonstationary Time Series
Kwiatkowski-Phillips-Schmidt-Shin (KPSS) Test:
Null hypothesis: y is trend stationary
Formulation: y_t = β_0 D_t + μ_t + u_t,   μ_t = μ_{t−1} + ε_t
Where D_t contains deterministic components (constant, or constant plus time trend), and μ_t is a random walk
H0: σ²_ε = 0, therefore μ is a constant; y is trend stationary.
H1: σ²_ε > 0
KPSS critical values are obtained by simulation methods.
Part 2: Nonstationary Time Series
A few notes:
DF, ADF, and PP are called "unit root tests"; the null hypothesis is that y_t has a unit root, i.e., is I(1) or higher.
KPSS, on the other hand, is a "stationarity test": the null hypothesis is that y_t is I(0).
Correct specification is key: intercept and trend should be included when appropriate.
Structural breaks can complicate matters further.
Part 2: Nonstationary Time Series
A unified way of looking at the unit root tests
Slightly different representation:
y_t = a_0 + a_2 t + u_t
u_t = ρ u_{t−1} + ε_t
(In practice, this representation is what EViews uses.)
H0: ρ = 1, y has a unit root
H1: |ρ| < 1, y is stationary (test for ρ).
If ε_t is white noise, then DF can be used
If ε_t is ARMA(p,q), then use ADF or PP.
Properties of Time Series
Part 2: Nonstationary Time Series

L-12: Some exercises with simulated data
Part 2: Nonstationary Time Series
Simulate three processes in EViews
Stationary process with near-unit roots
Trend stationary process
An I(1) process
Graph them and observe their behavior
Conduct Unit Root/Stationarity Tests on all three.
Part 2: Nonstationary Time Series
In "Simulated Times Series Examples.xlsx":
Simulate an I(0) process with a structural break
Import into EViews
Graph and observe
Conduct Unit Root/Stationarity Tests
Properties of Time Series
Part 2: Nonstationary Time Series

L-13: Some exercises with real-world data
Part 2: Nonstationary Time Series
Now let's work with real world data

Choose a series:
Look at graph and correlogram for a specific time series
Does it appear to be non-stationary?
Does it appear to have a trend, or a structural break?

Undertake Unit Root/Stationarity Tests
Do the different tests agree?
If you suspect a structural break, re-test for two sub-samples
