
TIME SERIES ECONOMETRICS:

SOME BASIC CONCEPTS


Reference: Gujarati, Chapters 21, 22
1. Regression analysis with time series data assumes that the underlying series are stationary.
2. Autocorrelation sometimes arises because the underlying time series data are non-stationary.
3. Sometimes one obtains a very high $R^2$ and significant regression coefficients even though there is no meaningful relationship between the two variables: the problem of spurious, or nonsense, regression.
7-1
Stochastic Processes
Let $Z_t$ be the observation made at time $t$. The units of time vary with the application; they could be years, quarters, months, days, ... We assume that the observations are equally spaced in time. The sequence of random variables $\{Z_1, Z_2, \ldots, Z_T\}$ is called a stochastic process. Its mean function is
$$\mu_t = E(Z_t), \qquad t = 0, \pm 1, \pm 2, \ldots$$
$\mu_t$ is the expected value of the process at time $t$. The autocovariance function is
$$\gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) = E(Z_t - \mu_t)(Z_s - \mu_s), \qquad t, s = 0, \pm 1, \pm 2, \ldots$$
The variance function is
$$\mathrm{Var}(Z_t) = \gamma_{t,t} = \mathrm{Cov}(Z_t, Z_t) = E(Z_t - \mu_t)^2$$
The autocorrelation function is
$$\rho_{t,s} = \mathrm{Corr}(Z_t, Z_s) = \frac{\mathrm{Cov}(Z_t, Z_s)}{[\mathrm{Var}(Z_t)\,\mathrm{Var}(Z_s)]^{1/2}}, \qquad t, s = 0, \pm 1, \pm 2, \ldots$$
7-2
STATIONARITY
The time series $Z_t$ is weakly stationary if
$$\mu_t = E(Z_t) = \mu$$
and
$$\gamma_{t,s} = \mathrm{Cov}(Z_t, Z_s) = \mathrm{Cov}(Z_{t-l}, Z_{s-l}) \qquad (1)$$
for any integer $l$. Equation (1) implies
$$\gamma_{t,s} = \gamma_{0,k}, \qquad \text{where } k = |t - s|.$$
Thus, for a stationary process we can simply write
$$\gamma_k = \mathrm{Cov}(Z_t, Z_{t-k}) \qquad \text{and} \qquad \rho_k = \mathrm{Corr}(Z_t, Z_{t-k})$$
Note that $\rho_k = \gamma_k / \gamma_0$.
7-3
WHITE NOISE
Let $\{\varepsilon_t\}$ be a sequence of independent random variables with mean 0 and variance $\sigma^2$, and let
$$Y_t = \mu + \varepsilon_t$$
Then $E(Y_t) = \mu$,
$$\gamma_k = \mathrm{Cov}(Y_t, Y_{t-k}) = \mathrm{Cov}(\varepsilon_t, \varepsilon_{t-k}) = \begin{cases}\sigma^2 & k = 0\\ 0 & k \neq 0\end{cases}$$
and
$$\rho_k = \begin{cases}1 & k = 0\\ 0 & k \neq 0\end{cases}$$
Such a sequence is called a purely random sequence or white noise sequence.
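A minimal SAS sketch (not part of the original notes) that simulates such a white noise series and inspects its sample ACF with PROC ARIMA; the seed, the length T = 200, and the mean value 10 are arbitrary illustrative choices:

DATA WHITE;
   CALL STREAMINIT(12345);              /* arbitrary seed */
   DO T = 1 TO 200;
      Y = 10 + RAND('NORMAL', 0, 1);    /* Y_t = mu + eps_t with mu = 10, sigma = 1 */
      OUTPUT;
   END;
RUN;
PROC ARIMA DATA=WHITE;
   IDENTIFY VAR=Y NLAG=12;              /* sample ACF/PACF should stay within +-2/sqrt(T) */
RUN;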
7-4
Example of Stationary Series
Let $\{\varepsilon_t\}$ be a white noise sequence which is distributed as $N(0, \sigma^2)$. Define a new process $\{Y_t\}$ by
$$Y_t = \mu + \varepsilon_t + \varepsilon_{t-1}$$
Then $E(Y_t) = \mu$,
$$\gamma_0 = \mathrm{Var}(Y_t) = \mathrm{Var}(\varepsilon_t + \varepsilon_{t-1}) = \mathrm{Var}(\varepsilon_t) + \mathrm{Var}(\varepsilon_{t-1}) = 2\sigma^2$$
$$\gamma_1 = \mathrm{Cov}(Y_t, Y_{t-1}) = \mathrm{Cov}(\varepsilon_t + \varepsilon_{t-1},\; \varepsilon_{t-1} + \varepsilon_{t-2}) = \sigma^2$$
$$\gamma_k = \mathrm{Cov}(Y_t, Y_{t-k}) = \mathrm{Cov}(\varepsilon_t + \varepsilon_{t-1},\; \varepsilon_{t-k} + \varepsilon_{t-k-1}) = 0 \quad \text{for } k > 1$$
Hence
$$\gamma_k = \begin{cases}2\sigma^2 & k = 0\\ \sigma^2 & |k| = 1\\ 0 & |k| > 1\end{cases} \qquad \text{and} \qquad \rho_k = \begin{cases}1 & k = 0\\ 1/2 & |k| = 1\\ 0 & |k| > 1\end{cases}$$
Since the mean, variance, and autocovariances do not depend on $t$, $\{Y_t\}$ is stationary.
7-5
Example of Nonstationary Series
In practice, we usually find series which are not stationary. For example, economic or business series often show a trend or a change in mean level over time, reflecting growth, or are nonstationary due to seasonal features in the series.
An important practical matter in time series analysis is how to transform a nonstationary series into a stationary one, or how to model the nonstationarity directly. Two fundamental approaches for dealing with nonstationarity are:
1. Work with changes or differences of the series, since these may be stationary.
2. Remove nonstationary components, e.g. a nonconstant mean, by linear regression techniques.
7-6
RANDOM WALK
Let $a_t$ be iid $N(0, \sigma^2)$ and let
$$Z_t = Z_{t-1} + a_t, \qquad t = 1, 2, \ldots$$
with $Z_0 = 0$. Then
$$Z_t = a_1 + a_2 + \cdots + a_t$$
$Z_t$ is called a random walk, with mean $\mu_t = 0$, variance $\mathrm{Var}(Z_t) = t\sigma^2$, and autocovariance $\gamma_{t,s} = t\sigma^2$ for $1 \le t \le s$. Since $\mathrm{Var}(Z_t)$ and $\gamma_{t,s}$ depend on $t$, $Z_t$ is not stationary.
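To see the nonstationarity in practice, one can simulate a random walk and inspect its sample ACF, which dies out very slowly. This SAS sketch is illustrative only; the seed and series length are arbitrary choices:

DATA RW;
   CALL STREAMINIT(2024);
   Z = 0;                               /* Z_0 = 0 */
   DO T = 1 TO 300;
      Z = Z + RAND('NORMAL', 0, 1);     /* Z_t = Z_{t-1} + a_t */
      OUTPUT;
   END;
RUN;
PROC ARIMA DATA=RW;
   IDENTIFY VAR=Z NLAG=20;              /* ACF decays very slowly: a sign of nonstationarity */
   IDENTIFY VAR=Z(1) NLAG=20;           /* the first difference behaves like white noise */
RUN;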
7-7
Example: Random Walk with Drift
Let $\varepsilon_t$ be iid $N(0, \sigma^2)$ and let
$$Y_t = Y_{t-1} + \delta + \varepsilon_t, \qquad t = 1, 2, \ldots$$
with $Y_0 = 0$, where $\delta$ is a constant. Such a series is called a random walk with drift. We have
$$Y_t = Y_0 + \delta t + \sum_{j=1}^{t} \varepsilon_j$$
Its mean is $\mu_t = E(Y_t) = \delta t$ and its variance is $\mathrm{Var}(Y_t) = t\sigma^2$. Thus $\{Y_t\}$ is not stationary, with both mean and variance depending on $t$.
Note that the series of changes, or first differences, of $\{Y_t\}$, defined by
$$Z_t = Y_t - Y_{t-1} = \delta + \varepsilon_t$$
is a white noise series (shifted by the constant $\delta$).
7-8
ESTIMATION OF MEAN,
AUTOCOVARIANCES,
AND AUTOCORRELATIONS FOR
STATIONARY SERIES
Suppose $Y_1, Y_2, \ldots, Y_T$ is a sample realization of a stationary time series $\{Y_t\}$ with mean
$$\mu = E(Y_t)$$
autocovariance function
$$\gamma_k = \mathrm{Cov}(Y_t, Y_{t+k}) = \mathrm{Cov}(Y_t, Y_{t-k})$$
and autocorrelation function
$$\rho_k = \mathrm{Corr}(Y_t, Y_{t+k}) = \frac{\gamma_k}{\gamma_0}$$
7-9
The estimator for $\mu$ is the sample mean
$$\bar{Y} = \frac{1}{T}\sum_{t=1}^{T} Y_t$$
The estimator for $\gamma_k$ is
$$c_k = \frac{1}{T}\sum_{t=1}^{T-k} (Y_t - \bar{Y})(Y_{t+k} - \bar{Y}), \qquad k = 0, 1, 2, \ldots$$
where $k$ is small relative to $T$. Note that
$$c_0 = \frac{1}{T}\sum_{t=1}^{T} (Y_t - \bar{Y})^2$$
is the sample variance. The estimator for $\rho_k$ is the sample ACF
$$r_k = \frac{c_k}{c_0} = \frac{\sum_{t=1}^{T-k} (Y_t - \bar{Y})(Y_{t+k} - \bar{Y})}{\sum_{t=1}^{T} (Y_t - \bar{Y})^2}, \qquad k = 0, 1, 2, \ldots$$
A plot of $r_k$ versus $k$ is called a correlogram.
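A small worked example (numbers chosen here for illustration, not taken from the notes): for the toy series 1, 2, 3, 4 with $T = 4$, we have $\bar Y = 2.5$, $c_0 = \frac{1}{4}[(-1.5)^2 + (-0.5)^2 + (0.5)^2 + (1.5)^2] = 1.25$, $c_1 = \frac{1}{4}[(-1.5)(-0.5) + (-0.5)(0.5) + (0.5)(1.5)] = 0.3125$, and hence $r_1 = c_1/c_0 = 0.25$.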
7-10
Sampling Properties of Estimators
1. $\bar{Y}$ is an unbiased estimator of $\mu$; that is, $E(\bar{Y}) = \mu$.
2.
$$\mathrm{Var}(\bar{Y}) = \mathrm{Var}\!\left(\frac{1}{T}\sum_{t=1}^{T} Y_t\right) = \frac{\gamma_0}{T}\left[1 + 2\sum_{k=1}^{T-1} \frac{T-k}{T}\,\rho_k\right]$$
If the $Y_t$ are independent, then $\rho_k = 0$ for all $k \neq 0$ and so $\mathrm{Var}(\bar{Y}) = \gamma_0 / T$.
When $T$ is large:
3. $r_k$ is approximately normally distributed;
4. $E(r_k) \approx \rho_k$; and
5.
$$\mathrm{Var}(r_k) \approx \frac{1}{T}\sum_{s=-\infty}^{\infty}\left[\rho_s^2 + \rho_{s+k}\rho_{s-k} - 4\rho_k\rho_s\rho_{s-k} + 2\rho_s^2\rho_k^2\right]$$
7-11
Special case
When the series is white noise, so that $\rho_s = 0$ for $s \neq 0$, then
$$\mathrm{Var}(r_k) \approx \frac{1}{T} \quad \text{for } k \neq 0$$
In fact, $r_k$ is approximately distributed as $N(0, 1/T)$ for $k = 1, 2, \ldots$
This property is used to check whether a fitted model is appropriate. If the model fits the data, the residuals will follow a white noise series, and hence about 95% of their ACF values will lie between $-2/\sqrt{T}$ and $2/\sqrt{T}$.
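As a quick numerical illustration: with $T = 500$ (the sample size of the simulated series analysed later in these notes), $2/\sqrt{T} \approx 0.089$, so for a white noise series about 95% of the sample autocorrelations should fall inside $\pm 0.089$.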
7-12
General Characteristics of Sample ACF
1. Stationary series
   (a) The sample ACF tends to damp out to 0 fairly rapidly as the lag $k$ increases; it may
   (b) cut off, or
   (c) damp out exponentially or sinusoidally.
2. Nonstationary series
   (a) The sample ACF tends to damp out very slowly, roughly linearly, or
   (b) sinusoidally but damping out very slowly, indicating a strong seasonal component.
7-13
PARTIAL AUTOCORRELATION
FUNCTION
For a stationary, normally distributed time series $\{Z_t\}$, the partial autocorrelation function (PACF) at lag $k$ is defined as
$$\phi_{kk} = \mathrm{Corr}(Z_t, Z_{t-k} \mid Z_{t-1}, Z_{t-2}, \ldots, Z_{t-k+1})$$
which is the correlation between $Z_t$ and $Z_{t-k}$ after removing the effect of the intervening variables $Z_{t-1}, Z_{t-2}, \ldots, Z_{t-k+1}$. Its estimator is the sample partial autocorrelation $r_{kk}$.
Property: If $\{Z_t\}$, $t = 1, 2, \ldots, T$, is white noise, then its sample partial autocorrelation function $r_{kk}$ is approximately distributed as $N(0, 1/T)$ for $k = 1, 2, \ldots$
This property is used to check whether a fitted model is appropriate. If the model fits the data, the residuals will follow a white noise series, and hence about 95% of their PACF values will lie between $-2/\sqrt{T}$ and $2/\sqrt{T}$.
7-14
Tests of Stationarity
1. For a stationary series, the sample ACF tends to damp out to 0 fairly rapidly as the lag $k$ increases; it
   (a) cuts off, or
   (b) damps out exponentially or sinusoidally.
2. Likewise, the sample PACF tends to damp out to 0 fairly rapidly as the lag $k$ increases; it
   (a) cuts off, or
   (b) damps out exponentially or sinusoidally.
7-15
Tests of White Noise
If the time series is white noise, we have:
1. its sample ACF $r_k$ is approximately distributed as $N(0, 1/T)$ for $k = 1, 2, \ldots$, and
2. its sample PACF $r_{kk}$ is approximately distributed as $N(0, 1/T)$ for $k = 1, 2, \ldots$
Hence, if the time series is white noise:
1. its ACF $r_k$ should lie between $-2/\sqrt{T}$ and $2/\sqrt{T}$;
2. its PACF $r_{kk}$ should lie between $-2/\sqrt{T}$ and $2/\sqrt{T}$;
3. in addition, we can apply the Box-Pierce Q statistic
$$Q = n\sum_{k=1}^{m} r_k^2$$
or the Ljung-Box Q statistic
$$LB = n(n+2)\sum_{k=1}^{m} \frac{r_k^2}{n-k}$$
where $n$ is the sample size and $m$ is the lag length used to test for white noise.
If the time series is white noise, then approximately $Q \sim \chi^2_m$ and $LB \sim \chi^2_m$.
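As a check against the residual diagnostics reported later in these notes ($n = 500$, residual ACF at lags 1-6 of $-0.027, 0.064, -0.003, -0.031, 0.030, 0.048$): $\sum_{k=1}^{6} r_k^2 \approx 0.0090$, so $LB \approx 500\cdot 502 \cdot 0.0090/499 \approx 4.5$, which agrees with the chi-square value 4.53 printed by SAS; the degrees of freedom there are $6 - 1 = 5$ because one AR parameter has been estimated.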
7-16
Notation:
The backward shift operator $B$ is defined by
$$BY_t = Y_{t-1} \qquad \text{and hence} \qquad B^i Y_t = Y_{t-i}$$
The forward shift operator $F = B^{-1}$ is defined by
$$FY_t = Y_{t+1} \qquad \text{and hence} \qquad F^i Y_t = Y_{t+i}$$
Example 1:
$$Y_t = \varepsilon_t - \theta\varepsilon_{t-1} = (1 - \theta B)\varepsilon_t$$
Example 2:
$$Y_t = \phi Y_{t-1} + \varepsilon_t$$
implies
$$Y_t - \phi Y_{t-1} = \varepsilon_t \qquad \text{or} \qquad (1 - \phi B)Y_t = \varepsilon_t$$
7-17
If $|\phi| < 1$, then
$$Y_t = (1 - \phi B)^{-1}\varepsilon_t$$
We have
$$Y_t = (1 + \phi B + \phi^2 B^2 + \phi^3 B^3 + \cdots)\varepsilon_t$$
and hence
$$Y_t = \varepsilon_t + \phi\varepsilon_{t-1} + \phi^2\varepsilon_{t-2} + \phi^3\varepsilon_{t-3} + \cdots$$
Similarly, in Example 1,
$$\varepsilon_t = (1 - \theta B)^{-1} Y_t$$
and hence
$$\varepsilon_t = (1 + \theta B + \theta^2 B^2 + \theta^3 B^3 + \cdots)Y_t = Y_t + \theta Y_{t-1} + \theta^2 Y_{t-2} + \theta^3 Y_{t-3} + \cdots$$
Remark: In Example 2, when $\phi = 1$ we have
$$Y_t = Y_{t-1} + \varepsilon_t \qquad \text{or} \qquad Y_t - Y_{t-1} = \varepsilon_t$$
which is a random walk series.
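For instance, with the illustrative value $\phi = 0.5$ (chosen here, not in the notes), the expansion gives $Y_t = \varepsilon_t + 0.5\,\varepsilon_{t-1} + 0.25\,\varepsilon_{t-2} + 0.125\,\varepsilon_{t-3} + \cdots$, so the weight attached to past shocks halves at each additional lag.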
7-18
LINEAR MODELS FOR STATIONARY
SERIES
Properties of a series are exhibited by its ACF. Hence, we build models which reflect the ACF structure.
Linear Filters
Often we form a new series $\{Y_t\}$ by a linear operation applied to a given series $\{X_t\}$: a system in which $X_t$ is the input and $Y_t$ is the output resulting from a linear operation on $X_t$.
A linear time-invariant filter applied to the series $\{X_t\}$ produces a new series $\{Y_t\}$ such that
$$Y_t = \sum_{j=-\infty}^{\infty} \psi_j X_{t-j}$$
If the $\psi_j$ satisfy $\psi_j = 0$ for $j < 0$, then
$$Y_t = \sum_{j=0}^{\infty} \psi_j X_{t-j}$$
and the filter is one-sided. The filter is time-invariant because the coefficients $\psi_j$ do not depend on $t$.
7-19
Note
1. $X_t$ may be controllable; e.g. in a production process, $\{X_t\}$ is the input of raw material and $Y_t$ the output of product or by-product.
2. Differencing operators are linear filters:
$$Y_t = \nabla X_t = X_t - X_{t-1}$$
and
$$Y_t = \nabla^2 X_t = \nabla X_t - \nabla X_{t-1} = X_t - 2X_{t-1} + X_{t-2}$$
3. Moving averages are linear filters, e.g.
$$Y_t = \frac{1}{2m+1}\sum_{j=-m}^{m} X_{t-j}$$
If $\{X_t\}$ is stationary with mean $\mu_x$ and autocovariances $\gamma_k$, then
$$Y_t = \sum_{j=-\infty}^{\infty} \psi_j X_{t-j}$$
has mean
$$\mu_Y = \sum_{j=-\infty}^{\infty} \psi_j E(X_{t-j}) = \mu_x \sum_{j=-\infty}^{\infty} \psi_j$$
7-20
and autocovariance function
$$\gamma_Y(s) = \mathrm{Cov}(Y_t, Y_{t+s}) = \mathrm{Cov}\!\left(\sum_{j=-\infty}^{\infty}\psi_j X_{t-j},\; \sum_{k=-\infty}^{\infty}\psi_k X_{t+s-k}\right)$$
$$= \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\psi_j\psi_k\,\mathrm{Cov}(X_{t-j}, X_{t+s-k}) = \sum_{j=-\infty}^{\infty}\sum_{k=-\infty}^{\infty}\psi_j\psi_k\,\gamma_{s+j-k}$$
7-21
Linear Process
$\{Y_t\}$ is a linear process if it can be represented as the output of a one-sided linear filter applied to white noise $\{\varepsilon_t\}$. That is,
$$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$$
where the $\varepsilon_t$ are independent random variables with mean 0 and variance $\sigma^2$. In this situation $\mu_Y = \mu$, and the autocovariance function is
$$\gamma_Y(s) = \mathrm{Cov}(Y_t, Y_{t+s}) = \mathrm{Cov}\!\left(\sum_{j=0}^{\infty}\psi_j\varepsilon_{t-j},\; \sum_{k=0}^{\infty}\psi_k\varepsilon_{t+s-k}\right)$$
$$= \sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\psi_j\psi_k\,\mathrm{Cov}(\varepsilon_{t-j}, \varepsilon_{t+s-k}) = \left(\sum_{j=0}^{\infty}\psi_j\psi_{j+s}\right)\sigma^2$$
because $\mathrm{Cov}(\varepsilon_{t-j}, \varepsilon_{t+s-k}) = \sigma^2$ when $k = j + s$ and equals 0 when $k \neq j + s$.
7-22
Wold's Representation Theorem: If $\{Y_t\}$ is a weakly stationary nondeterministic series with mean $\mu$, then $Y_t$ can always be expressed as
$$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}$$
with $\psi_0 = 1$ and $\sum_{j=0}^{\infty} \psi_j^2 < \infty$, where the $\varepsilon_t$ are uncorrelated random variables with mean 0 and variance $\sigma^2$.
This result supports the use of model representations of the form
$$Y_t = \mu + \sum_{j=0}^{\infty} \psi_j \varepsilon_{t-j}, \qquad \psi_0 = 1$$
as a class of models for stationary series.
7-23
FINITE MOVING AVERAGE MODEL
A simple class of models is obtained by setting $\psi_j = 0$ for $j > q$. $\{Y_t\}$ is said to be a moving average process of order $q$, MA(q), if it satisfies
$$Y_t = \mu + \varepsilon_t - \sum_{j=1}^{q} \theta_j \varepsilon_{t-j}$$
where the $\varepsilon_t$ are white noise with mean 0 and variance $\sigma^2$. We write
$$Y_t = \mu + \varepsilon_t - \sum_{j=1}^{q} \theta_j \varepsilon_{t-j} = \mu + \theta(B)\varepsilon_t$$
where $\theta(B) = 1 - \sum_{j=1}^{q} \theta_j B^j$ is the MA operator (polynomial).
7-24
MA(1)
When $q = 1$, $Y_t = \mu + \varepsilon_t - \theta\varepsilon_{t-1}$, and we have
$$E(Y_t) = \mu$$
$$\mathrm{Var}(Y_t) = \gamma_0 = \sigma^2(1 + \theta^2)$$
$$\gamma_1 = -\theta\sigma^2 \qquad \text{and} \qquad \gamma_k = 0 \text{ for } |k| > 1$$
Hence
$$\rho_1 = \frac{-\theta}{1 + \theta^2}$$
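As a worked example with the illustrative value $\theta = 0.5$ (chosen here, not in the notes): $\rho_1 = -0.5/(1 + 0.25) = -0.4$ and $\rho_k = 0$ for $|k| > 1$. Note that $\theta = 2$ gives exactly the same $\rho_1$, which is why the invertibility condition $|\theta| < 1$ discussed later is imposed to pin down a unique model.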
7-25
MA(2)
When $q = 2$,
$$Y_t = \mu + \varepsilon_t - \theta_1\varepsilon_{t-1} - \theta_2\varepsilon_{t-2}$$
$$E(Y_t) = \mu$$
$$\mathrm{Var}(Y_t) = \gamma_0 = \sigma^2(1 + \theta_1^2 + \theta_2^2)$$
$$\gamma_1 = \sigma^2(-\theta_1 + \theta_1\theta_2), \qquad \gamma_2 = \sigma^2(-\theta_2), \qquad \gamma_k = 0 \text{ for } |k| > 2$$
Hence
$$\rho_1 = \frac{-\theta_1 + \theta_1\theta_2}{1 + \theta_1^2 + \theta_2^2} \qquad \text{and} \qquad \rho_2 = \frac{-\theta_2}{1 + \theta_1^2 + \theta_2^2}$$
7-26
MA(q)
The model is
$$Y_t = \mu + \varepsilon_t - \sum_{j=1}^{q}\theta_j\varepsilon_{t-j} = \mu + \theta(B)\varepsilon_t$$
where $\theta(B) = 1 - \sum_{j=1}^{q}\theta_j B^j$. Then
$$E(Y_t) = \mu$$
$$\mathrm{Var}(Y_t) = \gamma_0 = \sigma^2(1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2)$$
$$\gamma_k = \sigma^2(-\theta_k + \theta_1\theta_{k+1} + \cdots + \theta_{q-k}\theta_q) \quad \text{for } k = 1, 2, \ldots, q$$
and $\gamma_k = 0$ for $|k| > q$. Hence the ACF is
$$\rho_k = \frac{-\theta_k + \theta_1\theta_{k+1} + \cdots + \theta_{q-k}\theta_q}{1 + \theta_1^2 + \theta_2^2 + \cdots + \theta_q^2} \quad \text{for } k = 1, 2, \ldots, q$$
and $\rho_k = 0$ for $|k| > q$; that is, the ACF of an MA(q) process cuts off after lag $q$.
7-27
AUTOREGRESSIVE MODELS
The autoregressive model of order $p$, AR(p), is
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \delta + \varepsilon_t$$
where the $\varepsilon_t$ are white noise with mean 0 and variance $\sigma^2$. We can rewrite it as
$$Y_t - \phi_1 Y_{t-1} - \phi_2 Y_{t-2} - \cdots - \phi_p Y_{t-p} = \delta + \varepsilon_t$$
or
$$\phi(B)Y_t = \delta + \varepsilon_t$$
where $\phi(B) = 1 - \sum_{j=1}^{p}\phi_j B^j$ is the AR operator (polynomial).
The AR(p) model resembles a multiple linear regression in which $Y_{t-1}, \ldots, Y_{t-p}$ are the explanatory variables. It is called an autoregression because $Y_t$ is regressed on its own past values.
7-28
AR(1)
When $p = 1$,
$$Y_t = \phi Y_{t-1} + \delta + \varepsilon_t$$
Is it stationary? By successive substitution,
$$Y_t = \phi(\phi Y_{t-2} + \delta + \varepsilon_{t-1}) + \delta + \varepsilon_t = \phi^n Y_{t-n} + \sum_{j=0}^{n-1}\phi^j\delta + \sum_{j=0}^{n-1}\phi^j\varepsilon_{t-j}$$
Under the assumption that $|\phi| < 1$, as $n \to \infty$ we get
$$Y_t = \sum_{j=0}^{\infty}\phi^j\delta + \sum_{j=0}^{\infty}\phi^j\varepsilon_{t-j} = \frac{\delta}{1-\phi} + \sum_{j=0}^{\infty}\phi^j\varepsilon_{t-j}$$
which is stationary. Note that $\sum_{j=0}^{\infty}|\phi|^j < \infty$.
So, if $|\phi| < 1$, $\{Y_t\}$ is a stationary series with the infinite MA representation above, where $\psi_j = \phi^j$, and
$$E(Y_t) = \mu = \frac{\delta}{1-\phi}$$
7-29

$$\gamma_k = \mathrm{Cov}(Y_t, Y_{t+k}) = \phi^k\sigma^2\sum_{j=0}^{\infty}\phi^{2j} = \frac{\phi^k\sigma^2}{1-\phi^2}$$
If $k = 0$, we have
$$\gamma_0 = \frac{\sigma^2}{1-\phi^2}$$
and hence
$$\rho_k = \phi^k \quad \text{for } k \ge 0$$
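The PROC ARIMA output on the following pages appears to come from a simulated AR(1) series of length 500. A sketch of how such a series could be generated in SAS is given below; the parameter values $\mu = 10$, $\phi = 0.5$, $\sigma = 1$ (so $\delta = \mu(1-\phi) = 5$) are assumptions chosen to be consistent with the printed estimates, not values stated in the notes:

DATA AR1;
   CALL STREAMINIT(1);                       /* arbitrary seed */
   Y = 10;                                   /* start the recursion at the assumed mean */
   DO T = 1 TO 500;
      Y = 5 + 0.5*Y + RAND('NORMAL', 0, 1);  /* Y_t = delta + phi*Y_{t-1} + eps_t */
      OUTPUT;
   END;
RUN;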
7-30
Name of variable = Y.
Mean of working series = 9.97686
Standard deviation = 1.141318
Number of observations = 500
Autocorrelations
Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
0 1.302606 1.00000 | |********************|
1 0.617829 0.47430 | . |********* |
2 0.352030 0.27025 | . |***** |
3 0.156284 0.11998 | . |** |
4 0.058037 0.04455 | . |*. |
5 0.057984 0.04451 | . |*. |
6 0.027178 0.02086 | . | . |
7 -0.087968 -0.06753 | .*| . |
8 -0.141527 -0.10865 | **| . |
9 -0.117101 -0.08990 | **| . |
10 -0.152208 -0.11685 | **| . |
Partial Autocorrelations
Lag Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
1 0.47430 | . |********* |
2 0.05843 | . |*. |
3 -0.03681 | .*| . |
4 -0.01564 | . | . |
5 0.04006 | . |*. |
6 -0.01109 | . | . |
7 -0.10630 | **| . |
8 -0.05613 | .*| . |
9 0.00991 | . | . |
10 -0.06858 | .*| . |
7-31
Autocorrelation Check for White Noise
To Chi Autocorrelations
Lag Square DF Prob
6 159.47 6 0.000 0.474 0.270 0.120 0.045 0.045 0.021
12 187.08 12 0.000 -0.068 -0.109 -0.090 -0.117 -0.089 -0.089
18 197.51 18 0.000 -0.077 -0.067 0.002 0.044 0.077 0.042
24 223.09 24 0.000 -0.017 -0.057 -0.092 -0.093 -0.136 -0.098
Maximum Likelihood Estimation
Approx.
Parameter Estimate Std Error T Ratio Lag
MU 9.97880 0.08538 116.88 0
AR1,1 0.47378 0.03944 12.01 1
Constant Estimate = 5.25101414
Variance Estimate = 1.01335786
Std Error Estimate = 1.00665677
AIC = 1427.82345
SBC = 1436.25266
Number of Residuals= 500
Autocorrelation Check of Residuals
To Chi Autocorrelations
Lag Square DF Prob
6 4.53 5 0.476 -0.027 0.064 -0.003 -0.031 0.030 0.048
12 12.62 11 0.319 -0.054 -0.076 -0.004 -0.075 -0.013 -0.039
18 18.16 17 0.379 -0.027 -0.061 0.017 0.022 0.068 0.030
24 25.19 23 0.340 -0.018 -0.024 -0.054 -0.007 -0.097 -0.009
7-32
Autocorrelation Plot of Residuals
Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
0 1.013358 1.00000 | |********************|
1 -0.027426 -0.02706 | .*| . |
2 0.064949 0.06409 | . |*. |
3 -0.0029458 -0.00291 | . | . |
4 -0.031095 -0.03068 | .*| . |
5 0.030587 0.03018 | . |*. |
6 0.048235 0.04760 | . |*. |
7 -0.054253 -0.05354 | .*| . |
8 -0.076521 -0.07551 | **| . |
Partial Autocorrelations
Lag Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
1 -0.02706 | .*| . |
2 0.06341 | . |*. |
3 0.00044 | . | . |
4 -0.03498 | .*| . |
5 0.02885 | . |*. |
6 0.05368 | . |*. |
7 -0.05554 | .*| . |
8 -0.08692 | **| . |
Model for variable Y
Estimated Mean = 9.97879948
Autoregressive Factors
Factor 1: 1 - 0.47378 B**(1)
7-33
OPTIONS NOCENTER PS=35 LS=72;
DATA A;
   INFILE 'C:\AR1.DATA';
   INPUT Y;
DATA A; SET A;
   T + 1;
PROC PLOT;
   PLOT Y*T;
PROC ARIMA DATA=A;
   IDENTIFY VAR=Y;
PROC ARIMA DATA=A;
   IDENTIFY VAR=Y NOPRINT;
   ESTIMATE P=1 METHOD=ML PLOT;
7-34
AR(2)
The autoregressive model of order 2, AR(2), is
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \delta + \varepsilon_t$$
where the $\varepsilon_t$ are white noise with mean 0 and variance $\sigma^2$. We can rewrite it as
$$Y_t - \phi_1 Y_{t-1} - \phi_2 Y_{t-2} = \delta + \varepsilon_t$$
or
$$\phi(B)Y_t = \delta + \varepsilon_t$$
where $\phi(B) = 1 - \phi_1 B - \phi_2 B^2$.
The AR(2) model resembles a multiple linear regression in which $Y_{t-1}$ and $Y_{t-2}$ are the explanatory variables. As in the AR(1) case, we can use successive substitution to eventually express $Y_t$ as an infinite MA model,
$$Y_t = \mu + \sum_{j=0}^{\infty}\psi_j\varepsilon_{t-j}$$
The infinite MA will be absolutely summable (this means that $\sum_{j=0}^{\infty}|\psi_j| < \infty$).
7-35
Another way to express $Y_t$ in terms of the noise terms is
$$Y_t = (1 - \phi_1 B - \phi_2 B^2)^{-1}\delta + (1 - \phi_1 B - \phi_2 B^2)^{-1}\varepsilon_t = \frac{\delta}{1-\phi_1-\phi_2} + \sum_{j=0}^{\infty}\psi_j\varepsilon_{t-j} = \mu + \psi(B)\varepsilon_t$$
where
$$\mu = \frac{\delta}{1-\phi_1-\phi_2} \qquad \text{and} \qquad \psi(B) = (1 - \phi_1 B - \phi_2 B^2)^{-1} = \sum_{j=0}^{\infty}\psi_j B^j$$
The $\{\psi_j\}$ can be determined from $\psi(B) = \phi(B)^{-1}$, which implies $\phi(B)\psi(B) = 1$. Hence we have
$$(1 - \phi_1 B - \phi_2 B^2)(\psi_0 + \psi_1 B + \psi_2 B^2 + \cdots) = 1$$
and therefore
$$\psi_0 = 1, \qquad \psi_1 - \phi_1\psi_0 = 0,$$
and
$$\psi_j - \phi_1\psi_{j-1} - \phi_2\psi_{j-2} = 0 \quad \text{for } j \ge 1$$
where $\psi_0 = 1$ and $\psi_j = 0$ for $j < 0$. Thus $\{\psi_j\}$ satisfies a second-order difference equation.
7-36
Condition for Stationarity
The condition for stationarity is that the roots of
$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 = 0 \qquad (2)$$
are greater than 1 in absolute value, or equivalently that the roots of
$$m^2 - \phi_1 m - \phi_2 = 0 \qquad (3)$$
are less than 1 in absolute value. This condition makes the infinite MA absolutely summable, $\sum_{j=0}^{\infty}|\psi_j| < \infty$.
Note that the roots of (3) are the reciprocals of the roots of (2). That is, if $m_1$ and $m_2$ are the roots of (3) and $z_1$ and $z_2$ are the roots of (2), then
$$m_1 = \frac{1}{z_1} \qquad \text{and} \qquad m_2 = \frac{1}{z_2}$$
This condition leads to the formula
$$\psi_j = c_1 m_1^j + c_2 m_2^j \quad \text{for any } j \ge 0$$
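As an illustrative check with values chosen here (not in the notes): for $\phi_1 = 0.5$, $\phi_2 = 0.3$, equation (3) becomes $m^2 - 0.5m - 0.3 = 0$ with roots $m = (0.5 \pm \sqrt{1.45})/2$, i.e. approximately $0.852$ and $-0.352$; both are less than 1 in absolute value, so this AR(2) is stationary.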
7-37
Mean of AR(2)
For the model
$$Y_t - \phi_1 Y_{t-1} - \phi_2 Y_{t-2} = \delta + \varepsilon_t$$
we have
$$E(Y_t) - \phi_1 E(Y_{t-1}) - \phi_2 E(Y_{t-2}) = \delta + E(\varepsilon_t)$$
and this implies
$$\mu = \frac{\delta}{1 - \phi_1 - \phi_2}$$
7-38
Variance, Autocovariance and Autocorrelation of AR(2)
Note that
$$\mathrm{Cov}(\varepsilon_t, Y_{t-k}) = \begin{cases}\sigma^2 & k = 0\\ 0 & k > 0\end{cases}$$
Then
$$\gamma_0 = \mathrm{Var}(Y_t) = \mathrm{Var}(\phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \varepsilon_t) = \phi_1^2\gamma_0 + \phi_2^2\gamma_0 + 2\phi_1\phi_2\gamma_1 + \sigma^2$$
$$\gamma_1 = \mathrm{Cov}(Y_t, Y_{t-1}) = \mathrm{Cov}(\phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \varepsilon_t,\; Y_{t-1}) = \phi_1\gamma_0 + \phi_2\gamma_1$$
This implies
$$\gamma_1 = \frac{\phi_1}{1-\phi_2}\,\gamma_0$$
For $k > 0$,
$$\gamma_k = \mathrm{Cov}(Y_t, Y_{t-k}) = \mathrm{Cov}(\phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \varepsilon_t,\; Y_{t-k}) = \phi_1\gamma_{k-1} + \phi_2\gamma_{k-2}$$
This implies
$$\rho_k = \phi_1\rho_{k-1} + \phi_2\rho_{k-2}$$
for $k > 0$. This is called the Yule-Walker equation.
7-39
In the Yule-Walker equation, $k = 1$ and $k = 2$ are particularly important for AR(2). They are
$$\rho_1 = \phi_1\rho_0 + \phi_2\rho_1 = \phi_1 + \phi_2\rho_1 \qquad (4)$$
and
$$\rho_2 = \phi_1\rho_1 + \phi_2\rho_0 = \phi_1\rho_1 + \phi_2 \qquad (5)$$
Solving these two equations, we have
$$\rho_1 = \frac{\phi_1}{1-\phi_2} \qquad \text{and} \qquad \rho_2 = \phi_2 + \frac{\phi_1^2}{1-\phi_2}$$
Higher-lag values of $\rho_k$ can then be computed recursively from the difference equation. For example,
$$\rho_3 = \phi_1\rho_2 + \phi_2\rho_1$$
Equations (4) and (5) can also be used to solve for $\phi_1$ and $\phi_2$:
$$\phi_1 = \frac{\rho_1(1-\rho_2)}{1-\rho_1^2} \qquad \text{and} \qquad \phi_2 = \frac{\rho_2 - \rho_1^2}{1-\rho_1^2}$$
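Continuing the illustrative AR(2) with $\phi_1 = 0.5$, $\phi_2 = 0.3$ (values chosen here, not in the notes): $\rho_1 = 0.5/0.7 \approx 0.714$, $\rho_2 = 0.3 + 0.25/0.7 \approx 0.657$, and recursively $\rho_3 = 0.5(0.657) + 0.3(0.714) \approx 0.543$. Plugging $\rho_1 \approx 0.714$ and $\rho_2 \approx 0.657$ back into the last two formulas recovers $\phi_1 = 0.5$ and $\phi_2 = 0.3$, as it should.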
7-40
ACF
$\rho_k$ satisfies the second-order difference equation (Yule-Walker equation)
$$\rho_k = \phi_1\rho_{k-1} + \phi_2\rho_{k-2}$$
From difference-equation theory, the solution $\rho_k$ has the form
$$\rho_k = c_1 m_1^k + c_2 m_2^k \quad \text{for any } k \ge 0$$
(if $m_1$ and $m_2$ are distinct and real), where $m_1, m_2$ are the roots of
$$m^2 - \phi_1 m - \phi_2 = 0$$
$c_1$ and $c_2$ are determined by the initial conditions
$$\rho_0 = 1 \qquad \text{and} \qquad \rho_1 = \frac{\phi_1}{1-\phi_2}$$
In this situation, $\rho_k$ declines exponentially as $k$ increases.
When $m_1$ and $m_2$ are complex, say
$$m_1, m_2 = R(\cos\Theta \pm i\sin\Theta)$$
$c_1$ and $c_2$ will also be complex, say $c_1, c_2 = a \pm bi$, so that
$$\rho_k = c_1 m_1^k + c_2 m_2^k = (a+bi)R^k(\cos\Theta + i\sin\Theta)^k + (a-bi)R^k(\cos\Theta - i\sin\Theta)^k = R^k\big(a_1\cos(k\Theta) + a_2\sin(k\Theta)\big)$$
7-41
where
$$R = |m_1| = |m_2| = (-\phi_2)^{1/2} < 1$$
and $\Theta$ satisfies
$$\cos\Theta = \frac{\phi_1}{2(-\phi_2)^{1/2}}$$
In this situation, $\rho_k$ is a damped sinusoid with damping factor $R$, period $2\pi/\Theta$, and frequency $\Theta$.
7-42
PACF of AR(2)
The PACF values of AR(2) are
$$\phi_{11} = \rho_1, \qquad \phi_{22} = \frac{\rho_2 - \rho_1^2}{1 - \rho_1^2} \;(= \phi_2), \qquad \phi_{kk} = 0 \text{ for } k > 2$$
Hence the ACF of an AR(2) damps off exponentially or sinusoidally, while the PACF cuts off after lag 2.
7-43
/*-------------------------------------------------------*/
/*---- EXAMPLE ----*/
/*-------------------------------------------------------*/
Approx.
Parameter Estimate Std Error T Ratio Lag
MU 6.96407 0.20628 33.76 0
AR1,1 0.51108 0.08640 5.92 1
Constant Estimate = 3.40488718
Variance Estimate = 1.04185881
Std Error Estimate = 1.02071485
AIC = 290.170809
SBC = 295.38115
Number of Residuals= 100
Autocorrelation Check of Residuals
To Chi Autocorrelations
Lag Square DF Prob
6 6.91 5 0.228 -0.005 0.030 0.106 -0.221 -0.014 0.063
12 13.53 11 0.260 -0.191 0.024 0.022 0.060 -0.127 -0.046
18 17.83 17 0.399 -0.162 -0.058 0.004 0.036 0.010 0.072
24 20.85 23 0.590 -0.086 -0.031 -0.032 -0.055 -0.048 0.092
Autoregressive Factors
Factor 1: 1 - 0.51108 B**(1)
7-44
Approx.
Parameter Estimate Std Error T Ratio Lag
MU 6.96184 0.16749 41.57 0
MA1,1 -0.45979 0.10020 -4.59 1
MA1,2 -0.15518 0.10022 -1.55 2
Constant Estimate = 6.96184387
Variance Estimate = 1.08819896
Std Error Estimate = 1.04316775
AIC = 295.415416
SBC = 303.230926
Number of Residuals= 100
Correlations of the Estimates
Parameter MU MA1,1 MA1,2
MA1,1 0.001 1.000 0.394
MA1,2 -0.001 0.394 1.000
Autocorrelation Check of Residuals
To Chi Autocorrelations
Lag Square DF Prob
6 9.45 4 0.051 0.049 0.100 0.182 -0.203 -0.012 0.051
12 16.43 10 0.088 -0.195 0.008 -0.004 0.018 -0.142 -0.063
18 21.02 16 0.178 -0.170 -0.078 -0.013 0.012 -0.005 0.059
24 24.00 22 0.347 -0.087 -0.037 -0.032 -0.052 -0.038 0.094
Moving Average Factors
Factor 1: 1 + 0.45979 B**(1) + 0.15518 B**(2)
7-45
Approx.
Parameter Estimate Std Error T Ratio Lag
MU 6.97335 0.21712 32.12 0
MA1,1 -0.54556 0.09870 -5.53 1
MA1,2 -0.36785 0.10771 -3.42 2
MA1,3 -0.27311 0.09959 -2.74 3
Constant Estimate = 6.97334862
Variance Estimate = 1.00988046
Std Error Estimate = 1.00492809
AIC = 289.200417
SBC = 299.621098
Number of Residuals= 100
Correlations of the Estimates
Parameter MU MA1,1 MA1,2 MA1,3
MA1,1 -0.006 1.000 0.442 0.239
MA1,2 -0.012 0.442 1.000 0.448
MA1,3 -0.015 0.239 0.448 1.000
Autocorrelation Check of Residuals
To Chi Autocorrelations
Lag Square DF Prob
6 1.60 3 0.661 -0.023 -0.031 -0.004 -0.078 0.003 0.086
12 11.02 9 0.275 -0.222 0.009 0.024 0.110 -0.147 -0.031
Moving Average Factors
Factor 1: 1 + 0.54556 B**(1) + 0.36785 B**(2) + 0.27311 B**(3)
7-46
GENERAL ORDER AUTOREGRESSIVE MODELS
The autoregressive model of order $p$, AR(p), is
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \delta + \varepsilon_t$$
where the $\varepsilon_t$ are white noise with mean 0 and variance $\sigma^2$, or
$$\phi(B)Y_t = \delta + \varepsilon_t$$
where $\phi(B) = 1 - \sum_{j=1}^{p}\phi_j B^j$ is the AR operator (polynomial).
If all roots of
$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0$$
are larger than one in absolute value, or equivalently all roots of
$$m^p - \phi_1 m^{p-1} - \phi_2 m^{p-2} - \cdots - \phi_p = 0$$
are smaller than one in absolute value, then the process is stationary and has a convergent infinite MA representation.
7-47
That is,
$$Y_t = \phi(B)^{-1}\delta + \phi(B)^{-1}\varepsilon_t = \mu + \psi(B)\varepsilon_t$$
where
$$\mu = E(Y_t) = \phi(B)^{-1}\delta = \frac{\delta}{1 - \phi_1 - \phi_2 - \cdots - \phi_p}$$
$$\psi(B) = \sum_{j=0}^{\infty}\psi_j B^j = \phi(B)^{-1} \qquad \text{and} \qquad \sum_{j=0}^{\infty}|\psi_j| < \infty$$
The $\psi_j$ are determined from the relation $\phi(B)\psi(B) = 1$. This implies that $\psi_j$ satisfies
$$\psi_j - \phi_1\psi_{j-1} - \phi_2\psi_{j-2} - \cdots - \phi_p\psi_{j-p} = 0$$
for $j > 0$. Note that $\psi_0 = 1$ and $\psi_j = 0$ for $j < 0$.
The solution of this difference equation implies that
$$\psi_j = \sum_{i=1}^{p} c_i m_i^j$$
where the $m_i$ are the roots of
$$m^p - \phi_1 m^{p-1} - \phi_2 m^{p-2} - \cdots - \phi_p = 0$$
7-48
Autocovariance and Autocorrelation
The autocovariances $\gamma_s$ of an AR(p) process satisfy the Yule-Walker equations
$$\gamma_s = \phi_1\gamma_{s-1} + \phi_2\gamma_{s-2} + \cdots + \phi_p\gamma_{s-p} \qquad (6)$$
Dividing (6) by $\gamma_0$, we get the Yule-Walker equations for the ACF $\rho_s$:
$$\rho_s = \phi_1\rho_{s-1} + \phi_2\rho_{s-2} + \cdots + \phi_p\rho_{s-p}$$
The ACF satisfies the same difference equation as the $\gamma_s$ and $\psi_s$, but with different initial conditions. The general solution to this difference equation is
$$\rho_s = c_1 m_1^s + c_2 m_2^s + \cdots + c_p m_p^s$$
where the $m_i$ are the roots of
$$m^p - \phi_1 m^{p-1} - \phi_2 m^{p-2} - \cdots - \phi_p = 0$$
The Yule-Walker equations are useful for determining the AR parameters $\phi_1, \ldots, \phi_p$. The equations can be expressed in matrix form as
$$P\boldsymbol{\phi} = \boldsymbol{\rho}$$
where
$$P = \begin{pmatrix} 1 & \rho_1 & \rho_2 & \cdots & \rho_{p-1}\\ \rho_1 & 1 & \rho_1 & \cdots & \rho_{p-2}\\ \vdots & \vdots & \vdots & & \vdots\\ \rho_{p-1} & \rho_{p-2} & \rho_{p-3} & \cdots & 1 \end{pmatrix}$$
7-49
$$\boldsymbol{\rho} = \begin{pmatrix}\rho_1\\ \rho_2\\ \vdots\\ \rho_p\end{pmatrix} \qquad \text{and} \qquad \boldsymbol{\phi} = \begin{pmatrix}\phi_1\\ \phi_2\\ \vdots\\ \phi_p\end{pmatrix}$$
These equations can be solved for $\boldsymbol{\phi}$ in terms of the ACF; the solution is
$$\boldsymbol{\phi} = P^{-1}\boldsymbol{\rho}$$
The sample version of this solution replaces $\rho_s$ by the sample ACF $r_s$, and the resulting estimate of $\boldsymbol{\phi}$ (called the Yule-Walker estimate of the AR parameters) is
$$\hat{\boldsymbol{\phi}} = R^{-1}\mathbf{r}$$
where
$$R = \begin{pmatrix} 1 & r_1 & r_2 & \cdots & r_{p-1}\\ r_1 & 1 & r_1 & \cdots & r_{p-2}\\ \vdots & \vdots & \vdots & & \vdots\\ r_{p-1} & r_{p-2} & r_{p-3} & \cdots & 1 \end{pmatrix} \qquad \text{and} \qquad \mathbf{r} = \begin{pmatrix}r_1\\ r_2\\ \vdots\\ r_p\end{pmatrix}$$
Variance: $\gamma_0 = \mathrm{Var}(Y_t)$ can be expressed as
$$\gamma_0 = \phi_1\gamma_1 + \phi_2\gamma_2 + \cdots + \phi_p\gamma_p + \sigma^2$$
Hence
$$\sigma^2 = \gamma_0 - \phi_1\gamma_1 - \cdots - \phi_p\gamma_p = \gamma_0(1 - \phi_1\rho_1 - \phi_2\rho_2 - \cdots - \phi_p\rho_p)$$
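As an illustration (not from the original notes), the Yule-Walker estimates for $p = 2$ can be computed directly from the first two sample autocorrelations. The values $r_1 = 0.714$ and $r_2 = 0.657$ below are the illustrative AR(2) numbers used earlier; with these inputs the sketch returns estimates close to $\phi_1 = 0.5$ and $\phi_2 = 0.3$:

PROC IML;
   r1 = 0.714;  r2 = 0.657;       /* assumed sample autocorrelations */
   R   = (1  || r1) //
         (r1 || 1 );              /* 2 x 2 correlation matrix */
   r   = r1 // r2;                /* right-hand-side vector (r1, r2)' */
   phi = INV(R) * r;              /* Yule-Walker estimates: phi-hat = R^{-1} r */
   PRINT phi;
QUIT;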
7-50
Partial Autocorrelation Function
When fitting AR models to data, we need to choose an appropriate order $p$ for the model. The PACF is useful here.
Suppose $Y_t$ is stationary with ACF $\rho_s$. For $k \ge 1$, consider the first $k$ Yule-Walker equations corresponding to an AR(k) model:
$$\rho_s = \phi_1\rho_{s-1} + \phi_2\rho_{s-2} + \cdots + \phi_k\rho_{s-k}, \qquad s = 1, \ldots, k \qquad (7)$$
and let $\phi_{1k}, \phi_{2k}, \ldots, \phi_{kk}$ denote the solution of these equations for $\phi_1, \phi_2, \ldots, \phi_k$. These equations can be solved for each order $k = 1, 2, \ldots$, and the quantity $\phi_{kk}$ is the PACF at lag $k$.
For $k = 1$: $\rho_1 = \phi_{11}\rho_0$, so $\phi_{11} = \rho_1$.
For $k = 2$:
$$\begin{pmatrix}\rho_1\\ \rho_2\end{pmatrix} = \begin{pmatrix}1 & \rho_1\\ \rho_1 & 1\end{pmatrix}\begin{pmatrix}\phi_{12}\\ \phi_{22}\end{pmatrix} \qquad\Longrightarrow\qquad \begin{pmatrix}\phi_{12}\\ \phi_{22}\end{pmatrix} = \begin{pmatrix}1 & \rho_1\\ \rho_1 & 1\end{pmatrix}^{-1}\begin{pmatrix}\rho_1\\ \rho_2\end{pmatrix}$$
This implies
$$\phi_{12} = \frac{\rho_1(1-\rho_2)}{1-\rho_1^2} \qquad \text{and} \qquad \phi_{22} = \frac{\rho_2 - \rho_1^2}{1-\rho_1^2}$$
7-51
When we actually have an AR(p) process and we set $k = p$ in equation (7), we obtain
$$P\begin{pmatrix}\phi_{1p}\\ \phi_{2p}\\ \vdots\\ \phi_{pp}\end{pmatrix} = \begin{pmatrix}\rho_1\\ \rho_2\\ \vdots\\ \rho_p\end{pmatrix}$$
which is just the Yule-Walker system for the true AR(p) parameters, and hence $\phi_{pp} = \phi_p$. When $k > p$, we get $\phi_{kk} = 0$.
The PACF $\phi_{kk}$ at lag $k$ is actually equal to the partial correlation between $Y_t$ and $Y_{t-k}$ after adjusting for the intermediate values $Y_{t-1}, Y_{t-2}, \ldots, Y_{t-k+1}$.
7-52
INVERTIBILITY OF MA MODELS
Consider the MA(q) model
$$Y_t = \mu + \theta(B)\varepsilon_t$$
If all roots of
$$\theta(z) = 1 - \theta_1 z - \theta_2 z^2 - \cdots - \theta_q z^q = 0$$
are larger than one in absolute value, or equivalently all roots of
$$m^q - \theta_1 m^{q-1} - \theta_2 m^{q-2} - \cdots - \theta_q = 0$$
are smaller than one in absolute value, then the MA process can be expressed in the form of an infinite AR model. That is,
$$\theta(B)^{-1}Y_t = \theta(B)^{-1}\mu + \varepsilon_t \qquad \text{or} \qquad \pi(B)Y_t = \mu^* + \varepsilon_t$$
where
$$\pi(B) = 1 - \pi_1 B - \pi_2 B^2 - \cdots = \theta(B)^{-1}$$
with $\sum_{j=1}^{\infty}|\pi_j| < \infty$. That is,
$$Y_t = \sum_{j=1}^{\infty}\pi_j Y_{t-j} + \mu^* + \varepsilon_t$$
The MA process is then said to be invertible.
7-53
MIXED AUTOREGRESSIVE MOVING AVERAGE
(ARMA) MODEL
$Y_t$ follows an ARMA(p, q) model if it satisfies
$$Y_t = \phi_1 Y_{t-1} + \phi_2 Y_{t-2} + \cdots + \phi_p Y_{t-p} + \delta + \varepsilon_t - \theta_1\varepsilon_{t-1} - \cdots - \theta_q\varepsilon_{t-q}$$
where the $\varepsilon_t$ are white noise with mean 0 and variance $\sigma^2$, or
$$\phi(B)Y_t = \delta + \theta(B)\varepsilon_t$$
where $\phi(B) = 1 - \sum_{j=1}^{p}\phi_j B^j$ is the AR operator and $\theta(B) = 1 - \sum_{j=1}^{q}\theta_j B^j$ is the MA operator.
If all roots of
$$\phi(z) = 1 - \phi_1 z - \phi_2 z^2 - \cdots - \phi_p z^p = 0$$
are larger than one in absolute value, then the process is stationary and has a convergent infinite MA representation:
$$Y_t = \phi(B)^{-1}\delta + \phi(B)^{-1}\theta(B)\varepsilon_t = \mu + \psi(B)\varepsilon_t$$
where
$$\mu = E(Y_t) = \phi(B)^{-1}\delta = \frac{\delta}{1 - \phi_1 - \phi_2 - \cdots - \phi_p}$$
7-54
$$\psi(B) = \sum_{j=0}^{\infty}\psi_j B^j = \phi(B)^{-1}\theta(B) \qquad \text{and} \qquad \sum_{j=0}^{\infty}|\psi_j| < \infty$$
If all roots of
$$\theta(z) = 1 - \theta_1 z - \theta_2 z^2 - \cdots - \theta_q z^q = 0$$
are larger than one in absolute value, then the process is invertible and has a convergent infinite AR representation. That is,
$$\theta(B)^{-1}\phi(B)Y_t = \theta(B)^{-1}\delta + \varepsilon_t \qquad \text{or} \qquad \pi(B)Y_t = \delta^* + \varepsilon_t$$
where
$$\pi(B) = 1 - \pi_1 B - \pi_2 B^2 - \cdots = \theta(B)^{-1}\phi(B)$$
with $\sum_{j=1}^{\infty}|\pi_j| < \infty$.
7-55
/*-------------------------------------------------------*/
/*---- EXAMPLE : FORECASTING STOCK MARKET PRICES ----*/
/*-------------------------------------------------------*/
TITLE 'AT&T STOCK PRICES';
PROC ARIMA DATA=ATTSTOCK;
/*---- First Analysis ----*/
IDENTIFY VAR=X CENTER NLAG=13;
ESTIMATE P=1 METHOD=CLS NOCONSTANT;
ESTIMATE P=1 METHOD=ML NOCONSTANT;
ESTIMATE P=1 METHOD=ULS NOCONSTANT ;
FORECAST OUT=B1 LEAD=12 ID=N;
PROC ARIMA DATA=ATTSTOCK;
/*---- Second Analysis ----*/
IDENTIFY VAR=X(1) CENTER NLAG=13;
ESTIMATE METHOD=ULS NOCONSTANT ;
FORECAST OUT=B2 LEAD=12 ID=N;
PROC ARIMA DATA=ATTSTOCK;
/*---- Third Analysis ----*/
IDENTIFY VAR=X(1) NLAG=13;
ESTIMATE METHOD=ULS NOCONSTANT ;
FORECAST LEAD=12 ID=N;
PROC PLOT DATA=B2(FIRSTOBS=40);
PLOT FORECAST*N=F X*N=* L95*N=L U95*N=U
/OVERLAY VAXIS=46 TO 60;
TITLE2 'ARIMA(0,1,0) FORECAST LEAD=12';
7-56
Name of variable = X.
Mean of working series = 0
Standard deviation = 3.4136
Number of observations = 52
Autocorrelations
Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1 Std
0 11.652662 1.00000 | |********************| 0
1 10.890128 0.93456 | . |******************* | 0.138675
2 10.046985 0.86221 | . |***************** | 0.229833
3 9.490750 0.81447 | . |**************** | 0.285334
4 8.717965 0.74815 | . |*************** | 0.327001
5 7.973288 0.68425 | . |************** | 0.358410
6 7.242184 0.62150 | . |************ . | 0.382707
7 6.447407 0.55330 | . |*********** . | 0.401648
8 5.745524 0.49307 | . |********** . | 0.416048
9 5.150995 0.44204 | . |********* . | 0.427138
10 4.394067 0.37709 | . |******** . | 0.435846
11 3.370612 0.28926 | . |****** . | 0.442076
12 2.532964 0.21737 | . |**** . | 0.445701
13 2.054507 0.17631 | . |**** . | 0.447735
"." marks two standard errors
7-57
Partial Autocorrelations
Lag Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1
1 0.93456 | . |******************* |
2 -0.08847 | . **| . |
3 0.15987 | . |*** . |
4 -0.20256 | . ****| . |
5 0.04819 | . |* . |
6 -0.10464 | . **| . |
7 -0.03046 | . *| . |
8 -0.00025 | . | . |
9 0.02826 | . |* . |
10 -0.14244 | . ***| . |
11 -0.21337 | . ****| . |
12 0.05050 | . |* . |
13 0.15800 | . |*** . |
Autocorrelation Check for White Noise
To Chi Autocorrelations
Lag Square DF Prob
6 212.15 6 0.000 0.935 0.862 0.814 0.748 0.684 0.622
12 278.08 12 0.000 0.553 0.493 0.442 0.377 0.289 0.217
Approx.
Parameter Estimate Std Error T Ratio Lag
AR1,1 0.98453 0.04062 24.24 1
Variance Estimate = 0.94924184
Std Error Estimate = 0.97429043
AIC = 145.8511*
7-58
SBC = 147.802344*
Number of Residuals= 52
* Does not include log determinant.
Autocorrelation Check of Residuals
To Chi Autocorrelations
Lag Square DF Prob
6 8.41 5 0.135 0.056 -0.152 0.337 -0.049 -0.033 0.063
12 16.17 11 0.135 -0.089 -0.090 0.111 0.236 -0.117 -0.135
18 23.05 17 0.148 0.186 -0.041 -0.176 0.028 -0.086 -0.124
24 34.06 23 0.064 0.166 0.116 -0.137 0.148 0.132 -0.142
Data have been centered by subtracting the value 57.795673077.
No mean term in this model.
Autoregressive Factors
Factor 1: 1 - 0.98453 B**(1)
7-59
Forecasts for variable X
Obs Forecast Std Error Lower 95% Upper 95%
53 52.2534 0.8664 50.5553 53.9515
54 52.2568 1.2249 49.8560 54.6575
55 52.2601 1.4997 49.3207 55.1995
56 52.2635 1.7312 48.8704 55.6566
57 52.2669 1.9350 48.4744 56.0593
58 52.2702 2.1190 48.1170 56.4234
59 52.2736 2.2881 47.7890 56.7582
60 52.2770 2.4453 47.4842 57.0697
61 52.2803 2.5929 47.1984 57.3623
62 52.2837 2.7323 46.9285 57.6389
63 52.2870 2.8648 46.6721 57.9019
64 52.2904 2.9913 46.4276 58.1532
7-60
[PROC PLOT output: forecasts (symbol F), actual X (symbol *), and 95% forecast limits (symbols L and U) plotted against N for observations 40-64, vertical axis approximately 46 to 60; title: ARIMA(0,1,0) FORECAST LEAD=12.]
7-61
MODELS FOR NONSTATIONARY TIME SERIES
For many nonstationary series which exhibit homogeneous behavior, the first difference of the series,
$$W_t = Y_t - Y_{t-1} = (1-B)Y_t$$
may be a stationary series; or, if the first difference is not stationary, its second difference
$$W_t = (1-B)^2 Y_t = (1 - 2B + B^2)Y_t = Y_t - 2Y_{t-1} + Y_{t-2}$$
may be stationary.
So a useful class of models for nonstationary series consists of models such that the d-th difference
$$W_t = (1-B)^d Y_t$$
is a stationary series and $W_t$ follows an ARMA(p, q) model. So
$$\phi(B)W_t = \delta + \theta(B)\varepsilon_t$$
or
$$\phi(B)(1-B)^d Y_t = \delta + \theta(B)\varepsilon_t$$
7-62
The model for $Y_t$ is then called an autoregressive integrated moving average model of order (p, d, q), or ARIMA(p, d, q).
Generally, when $d > 0$ it is often the case that $d = 1$ (occasionally $d = 2$), i.e.
$$W_t = (1-B)Y_t \text{ is ARMA(p, q)} \qquad \text{or} \qquad Y_t \text{ is ARIMA(p, 1, q)}$$
To get $Y_t$ from $W_t$, we must sum, or integrate, $W_t$, i.e.
$$Y_t = (1-B)^{-1}W_t = (1 + B + B^2 + \cdots)W_t = W_t + W_{t-1} + W_{t-2} + \cdots$$
7-63
ARIMA(p, d, q) MODEL
$Y_t$ is nonstationary, but
$$W_t = (1-B)^d Y_t$$
is a stationary ARMA(p, q), i.e.
$$\phi(B)W_t = \delta + \theta(B)\varepsilon_t$$
$$\phi(B)(1-B)^d Y_t = \delta + \theta(B)\varepsilon_t$$
Writing
$$\varphi(B) = \phi(B)(1-B)^d = 1 - \varphi_1 B - \cdots - \varphi_{p+d}B^{p+d}$$
$Y_t$ has the form of an ARMA(p+d, q) model, but it is nonstationary, with $d$ roots of $\varphi(B) = 0$ equal to 1.
7-64
Unit Root Stochastic Process
For the AR(1) model
$$Y_t = \rho Y_{t-1} + \delta + \varepsilon_t, \qquad -1 \le \rho \le 1$$
successive substitution for $|\rho| < 1$ gives
$$Y_t = \sum_{j=0}^{\infty}\rho^j\delta + \sum_{j=0}^{\infty}\rho^j\varepsilon_{t-j}$$
This representation breaks down, and the series becomes nonstationary, when $\rho = 1$: the unit root problem.
7-65
Trend Stationary (TS) and Difference Stationary (DS)
Trend stationary: the trend is completely predictable (deterministic).
Difference stationary: the trend is stochastic but becomes stationary after differencing.
Consider
$$Y_t = \beta_1 + \beta_2 t + \beta_3 Y_{t-1} + u_t$$
1. Random walk model (RWM) without drift ($\beta_1 = \beta_2 = 0$, $\beta_3 = 1$):
$$Y_t = Y_{t-1} + u_t$$
is nonstationary, but
$$\Delta Y_t = Y_t - Y_{t-1} = u_t$$
is stationary.
2. Random walk model with drift ($\beta_1 \neq 0$, $\beta_2 = 0$, $\beta_3 = 1$):
$$Y_t = \beta_1 + Y_{t-1} + u_t$$
is nonstationary, but
$$\Delta Y_t = Y_t - Y_{t-1} = \beta_1 + u_t$$
is stationary, and $Y_t$ exhibits a positive ($\beta_1 > 0$) or negative ($\beta_1 < 0$) trend.
7-66
3. Deterministic trend model ($\beta_1 \neq 0$, $\beta_2 \neq 0$, $\beta_3 = 0$):
$$Y_t = \beta_1 + \beta_2 t + u_t$$
is nonstationary, but is stationary after detrending.
4. Random walk with drift and deterministic trend ($\beta_1 \neq 0$, $\beta_2 \neq 0$, $\beta_3 = 1$):
$$Y_t = \beta_1 + \beta_2 t + Y_{t-1} + u_t$$
is nonstationary, and
$$\Delta Y_t = \beta_1 + \beta_2 t + u_t$$
is still nonstationary.
5. Deterministic trend with stationary AR(1) component ($\beta_1 \neq 0$, $\beta_2 \neq 0$, $|\beta_3| < 1$):
$$Y_t = \beta_1 + \beta_2 t + \beta_3 Y_{t-1} + u_t$$
which is stationary around the deterministic trend.
7-67
Integrated Stochastic Process
A time series $Y_t$ is integrated of order $d$, denoted $Y_t \sim I(d)$, if it becomes stationary after being differenced $d$ times. Hence the random walk model without drift, the random walk model with drift, the deterministic trend model, and the random walk with a stationary AR(1) component are I(1), while the random walk with drift and deterministic trend is I(2).
Properties of Integrated Series
1. If $X_t \sim I(0)$ and $Y_t \sim I(1)$, then $Z_t = X_t + Y_t \sim I(1)$.
2. If $X_t \sim I(d)$, then $Z_t = a + bX_t \sim I(d)$.
3. If $X_t \sim I(d_1)$ and $Y_t \sim I(d_2)$, then $Z_t = aX_t + bY_t \sim I(d_2)$, where $d_1 \le d_2$.
4. If $X_t \sim I(d)$ and $Y_t \sim I(d)$, then $Z_t = aX_t + bY_t \sim I(d^*)$, where $d^*$ is generally equal to $d$ but sometimes $d^* < d$ (when the series are cointegrated).
7-68
Problems
1. Consider
$$Y_t = \beta_1 + \beta_2 X_t + u_t$$
The OLS estimate is
$$\hat{\beta}_2 = \frac{\sum x_t y_t}{\sum x_t^2}$$
(lower-case letters denote deviations from sample means). If $X_t \sim I(1)$ and $Y_t \sim I(0)$, then $X_t$ is nonstationary and its variance increases indefinitely, so the denominator dominates the numerator, with the result that $\hat{\beta}_2$ converges to zero asymptotically and does not even have an asymptotic distribution.
2. Spurious regression. Consider
$$Y_t = Y_{t-1} + u_t, \qquad X_t = X_{t-1} + v_t, \qquad Y_0 = 0, \; X_0 = 0$$
where $u_t$ and $v_t$ are independent. However, when we simulate independent $u_t$ and $v_t$ from $N(0, 1)$ and fit
$$Y_t = \beta_1 + \beta_2 X_t + e_t,$$
we find that the estimate of $\beta_2$ is significantly different from zero and that $R^2$ is also significantly different from zero, even though the two series are unrelated.
7-69
The Unit Root Test
Consider
$$Y_t = \rho Y_{t-1} + u_t, \qquad -1 \le \rho \le 1$$
where $u_t$ is an error term. Subtracting $Y_{t-1}$ from both sides,
$$\Delta Y_t = Y_t - Y_{t-1} = \rho Y_{t-1} - Y_{t-1} + u_t = (\rho - 1)Y_{t-1} + u_t = \delta Y_{t-1} + u_t \qquad (8)$$
where $\delta = \rho - 1$. Testing
$$H_0: \rho = 1 \quad \text{vs.} \quad H_1: \rho < 1$$
is equivalent to testing
$$H_0: \delta = 0 \quad \text{vs.} \quad H_1: \delta < 0 \qquad (9)$$
If $H_0$ is true, then $\Delta Y_t = u_t$ is white noise. To test (9), we simply regress $\Delta Y_t$ on $Y_{t-1}$ and obtain the estimated slope coefficient $\hat{\delta}$. Unfortunately, the estimate $\hat{\delta}$ does not follow the t distribution even in large samples.
Dickey and Fuller showed that the test statistic for $\hat{\delta}$ follows the $\tau$ (tau) distribution; the $\tau$ test is known as the Dickey-Fuller (DF) test.
If the hypothesis $H_0: \delta = 0$ is rejected, we can use the usual (Student's) t test.
7-70
The DF test is estimated in three different forms:
1. $Y_t$ is a random walk:
$$\Delta Y_t = \delta Y_{t-1} + u_t \qquad (10)$$
2. $Y_t$ is a random walk with drift:
$$\Delta Y_t = \beta_1 + \delta Y_{t-1} + u_t \qquad (11)$$
3. $Y_t$ is a random walk with drift around a deterministic trend:
$$\Delta Y_t = \beta_1 + \beta_2 t + \delta Y_{t-1} + u_t \qquad (12)$$
If the hypothesis $H_0: \delta = 0$ is rejected, then $Y_t$ is a stationary time series with zero mean for (10), is stationary with a nonzero mean for (11), and is stationary around a deterministic trend for (12).
7-71
The Augmented Dickey-Fuller (ADF) test
In the DF test for (10), (11) and (12), it is assumed that the $u_t$ are uncorrelated. If they are correlated, we use the ADF test based on the regression
$$\Delta Y_t = \beta_1 + \beta_2 t + \delta Y_{t-1} + \sum_{i=1}^{m}\alpha_i\,\Delta Y_{t-i} + \varepsilon_t \qquad (13)$$
where $\varepsilon_t$ is white noise and $\Delta Y_{t-i} = Y_{t-i} - Y_{t-i-1}$. We include as many lagged difference terms as are needed to make $\varepsilon_t$ white noise. The ADF test statistic follows the same asymptotic distribution as the DF statistic, so the same critical values can be used.
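In SAS, the (augmented) Dickey-Fuller test is available through the STATIONARITY= option of the IDENTIFY statement in PROC ARIMA. The sketch below is illustrative only: the data set and variable names are placeholders, and the augmenting lag orders 0, 1, 2 are arbitrary choices.

PROC ARIMA DATA=A;
   IDENTIFY VAR=Y STATIONARITY=(ADF=(0,1,2));   /* zero-mean, single-mean and trend versions are reported */
RUN;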
7-72
Cointegration
If $Y_t \sim I(1)$ and $X_t \sim I(1)$, but $u_t \sim I(0)$, where
$$Y_t = \beta_1 + \beta_2 X_t + u_t, \qquad (14)$$
then $Y_t$ and $X_t$ are said to be cointegrated.
Although both $Y_t$ and $X_t$ are I(1) and so have stochastic trends, their linear combination $u_t \sim I(0)$ cancels out the stochastic trends. As a result, the cointegrating regression (14) is meaningful, and $\beta_2$ is called the cointegrating parameter.
Economically speaking, two variables will be cointegrated if they have a long-term, or equilibrium, relationship between them.
7-73
Testing for Cointegration
A number of tests have been proposed; we consider two simple methods:
1. Engle-Granger (EG) or augmented Engle-Granger (AEG) test. Apply the DF or ADF unit root test to the residuals $\hat{u}_t$ estimated from the cointegrating regression. Since the estimated $\hat{u}_t$ are based on the estimated cointegrating parameter $\hat{\beta}_2$, the DF and ADF critical values are not appropriate. Engle and Granger have calculated the appropriate critical values; the resulting tests are known as the Engle-Granger (EG) and augmented Engle-Granger (AEG) tests.
2. Cointegrating regression Durbin-Watson (CRDW) test. Use the Durbin-Watson d statistic obtained from the cointegrating regression, but now test
$$H_0: d = 0 \quad \text{vs.} \quad H_1: d > 0$$
since $d \approx 2(1 - \hat{\rho})$.
Examples: refer to Gujarati, pp. 825-829.
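A sketch of the Engle-Granger two-step procedure in SAS (illustrative only: the data set and variable names are placeholders, and the Dickey-Fuller p-values printed by PROC ARIMA should be judged against the Engle-Granger critical values mentioned above, not the ordinary DF values):

PROC REG DATA=A;
   MODEL Y = X;                                 /* cointegrating regression (14) */
   OUTPUT OUT=RES R=UHAT;                       /* save the residuals u-hat */
RUN;
PROC ARIMA DATA=RES;
   IDENTIFY VAR=UHAT STATIONARITY=(ADF=(1));    /* unit root test on the residuals */
RUN;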
7-74
