

4.5 Autoregressive Processes AR(p)


The idea behind autoregressive models is to explain the present value of the series, $X_t$, by a function of $p$ past values, $X_{t-1}, X_{t-2}, \dots, X_{t-p}$.

Definition 4.7. An autoregressive process of order $p$ is written as
$$X_t = \phi_1 X_{t-1} + \phi_2 X_{t-2} + \dots + \phi_p X_{t-p} + Z_t, \tag{4.20}$$
where $\{Z_t\}$ is white noise, i.e., $\{Z_t\} \sim WN(0, \sigma^2)$, and $Z_t$ is uncorrelated with $X_s$ for each $s < t$.

Remark 4.12. We assume (for simplicity of notation) that the mean of $X_t$ is zero. If the mean is $E X_t = \mu \neq 0$, then we replace $X_t$ by $X_t - \mu$ to obtain
$$X_t - \mu = \phi_1 (X_{t-1} - \mu) + \phi_2 (X_{t-2} - \mu) + \dots + \phi_p (X_{t-p} - \mu) + Z_t,$$
which can be written as
$$X_t = \alpha + \phi_1 X_{t-1} + \phi_2 X_{t-2} + \dots + \phi_p X_{t-p} + Z_t,$$
where
$$\alpha = \mu(1 - \phi_1 - \dots - \phi_p).$$

Other ways of writing the AR(p) model use:

Vector notation: Denote
$$\boldsymbol{\phi} = (\phi_1, \phi_2, \dots, \phi_p)^T, \qquad \boldsymbol{X}_{t-1} = (X_{t-1}, X_{t-2}, \dots, X_{t-p})^T.$$
Then the formula (4.20) can be written as
$$X_t = \boldsymbol{\phi}^T \boldsymbol{X}_{t-1} + Z_t.$$

Backshift operator: Namely, writing the model (4.20) in the form
$$X_t - \phi_1 X_{t-1} - \phi_2 X_{t-2} - \dots - \phi_p X_{t-p} = Z_t,$$
and applying $B X_t = X_{t-1}$, we get
$$(1 - \phi_1 B - \phi_2 B^2 - \dots - \phi_p B^p) X_t = Z_t$$
or, using the concise notation, we write
$$\phi(B) X_t = Z_t, \tag{4.21}$$
where $\phi(B)$ denotes the autoregressive operator
$$\phi(B) = 1 - \phi_1 B - \phi_2 B^2 - \dots - \phi_p B^p.$$
Then the AR(p) can be viewed as a solution to the equation (4.21), i.e.,
$$X_t = \frac{1}{\phi(B)} Z_t. \tag{4.22}$$
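To make (4.20) concrete, here is a minimal simulation sketch. It is not from the text: the coefficient values, series length, and burn-in are arbitrary illustrative choices, and Gaussian noise stands in for the white noise $\{Z_t\}$.

```python
import numpy as np

def simulate_ar(phi, n, sigma=1.0, burn=200, seed=0):
    """Simulate n values of X_t = phi_1 X_{t-1} + ... + phi_p X_{t-p} + Z_t.

    Starts from zeros and discards a burn-in so the initial values are forgotten.
    """
    rng = np.random.default_rng(seed)
    p = len(phi)
    z = rng.normal(0.0, sigma, size=n + burn)   # Z_t ~ WN(0, sigma^2)
    x = np.zeros(n + burn)
    for t in range(p, n + burn):
        past = x[t - p:t][::-1]                 # (X_{t-1}, X_{t-2}, ..., X_{t-p})
        x[t] = np.dot(phi, past) + z[t]
    return x[burn:]

# Example: an AR(2) path with (phi_1, phi_2) = (0.5, -0.3) -- arbitrary values.
x = simulate_ar([0.5, -0.3], n=200)
```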

4.5.1 AR(1)
According to Definition 4.7 the autoregressive process of order 1 is given by

$$X_t = \phi X_{t-1} + Z_t, \tag{4.23}$$
where $Z_t \sim WN(0, \sigma^2)$ and $\phi$ is a constant.

Is AR(1) a stationary TS?

Corollary 4.1 says that an infinite linear combination of white noise variables is a stationary process. Here, due to the recursive form of the TS, we can write AR(1) in such a form. Namely,
$$\begin{aligned}
X_t &= \phi X_{t-1} + Z_t \\
&= \phi(\phi X_{t-2} + Z_{t-1}) + Z_t \\
&= \phi^2 X_{t-2} + \phi Z_{t-1} + Z_t \\
&\;\;\vdots \\
&= \phi^k X_{t-k} + \sum_{j=0}^{k-1} \phi^j Z_{t-j}.
\end{aligned}$$
This can be rewritten as
$$\phi^k X_{t-k} = X_t - \sum_{j=0}^{k-1} \phi^j Z_{t-j}.$$
What would we obtain if we continued the backwards operation, i.e., what happens when $k \to \infty$?

Taking the expectation we obtain
$$\lim_{k \to \infty} E\left( X_t - \sum_{j=0}^{k-1} \phi^j Z_{t-j} \right)^{\!2} = \lim_{k \to \infty} \phi^{2k} E(X_{t-k}^2) = 0$$
if $|\phi| < 1$ and the variance of $X_t$ is bounded. Hence, we can represent AR(1) as
$$X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j}$$
in the mean square sense. This is a linear process (4.15) with
$$\psi_j = \begin{cases} \phi^j & \text{for } j \geq 0, \\ 0 & \text{for } j < 0. \end{cases}$$
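As a numerical sanity check of this mean square representation (illustrative only; $\phi$, the truncation level $k$, and the seed are arbitrary), one can build the truncated sum $\sum_{j=0}^{k-1} \phi^j Z_{t-j}$ from the same noise that drives a recursively simulated AR(1) and compare:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, n, k = 0.9, 500, 60
z = rng.normal(size=n)

# AR(1) by recursion: X_t = phi * X_{t-1} + Z_t, starting from X_0 = 0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]

# Truncated linear-process form at the last time point t0
t0 = n - 1
approx = sum(phi ** j * z[t0 - j] for j in range(k))

print(x[t0], approx)   # nearly equal: the remainder phi^k X_{t0-k} is tiny for |phi| < 1
```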
This technique of iterating backwards works well for an AR of order 1, but not for higher orders. A more general way to convert the series into a linear process form is the method of matching coefficients.

The AR(1) model is
$$\phi(B) X_t = Z_t,$$
where $\phi(B) = 1 - \phi B$ and $|\phi| < 1$. We want to write the model as a linear process
$$X_t = \sum_{j=0}^{\infty} \psi_j Z_{t-j} = \psi(B) Z_t,$$
where $\psi(B) = \sum_{j=0}^{\infty} \psi_j B^j$. It means we want to find the coefficients $\psi_j$. Substituting $Z_t$ from the AR model into the linear process model we obtain
$$X_t = \psi(B) Z_t = \psi(B) \phi(B) X_t. \tag{4.24}$$
In full, the coefficients of both sides of the equation can be written as
$$\begin{aligned}
1 &= (1 + \psi_1 B + \psi_2 B^2 + \psi_3 B^3 + \dots)(1 - \phi B) \\
&= 1 + \psi_1 B + \psi_2 B^2 + \psi_3 B^3 + \dots - \phi B - \psi_1 \phi B^2 - \psi_2 \phi B^3 - \psi_3 \phi B^4 - \dots \\
&= 1 + (\psi_1 - \phi) B + (\psi_2 - \psi_1 \phi) B^2 + (\psi_3 - \psi_2 \phi) B^3 + \dots
\end{aligned}$$
Now, equating coefficients of $B^j$ on the LHS and RHS of this equation, we see that all the coefficients of $B^j$, $j \geq 1$, must be zero, i.e.,
$$\begin{aligned}
\psi_1 &= \phi \\
\psi_2 &= \psi_1 \phi = \phi^2 \\
\psi_3 &= \psi_2 \phi = \phi^3 \\
&\;\;\vdots \\
\psi_j &= \psi_{j-1} \phi = \phi^j.
\end{aligned}$$

So, we obtained the linear process form of the AR(1):
$$X_t = \sum_{j=0}^{\infty} \phi^j Z_{t-j} = \sum_{j=0}^{\infty} \phi^j B^j Z_t.$$

Remark 4.13. Note that from the equation (4.24) it follows that $\psi(B)$ is an inverse of $\phi(B)$, that is
$$\psi(B) = \frac{1}{\phi(B)}. \tag{4.25}$$
For an AR(1) we have
$$\psi(B) = \frac{1}{1 - \phi B} = 1 + \phi B + \phi^2 B^2 + \phi^3 B^3 + \dots \tag{4.26}$$
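The same matching-coefficients argument applied to a general AR(p) gives the recursion $\psi_0 = 1$, $\psi_j = \phi_1 \psi_{j-1} + \dots + \phi_p \psi_{j-p}$ (with $\psi_i = 0$ for $i < 0$). Below is a small sketch computing these weights; the function name and test values are illustrative choices, not from the text.

```python
import numpy as np

def ar_to_ma_weights(phi, nweights):
    """psi_j coefficients of psi(B) = 1 / phi(B), where phi(B) = 1 - phi_1 B - ... - phi_p B^p."""
    p = len(phi)
    psi = np.zeros(nweights)
    psi[0] = 1.0
    for j in range(1, nweights):
        for i in range(1, min(j, p) + 1):   # psi_j = sum_i phi_i * psi_{j-i}
            psi[j] += phi[i - 1] * psi[j - i]
    return psi

print(ar_to_ma_weights([0.9], 5))   # AR(1): [1, 0.9, 0.81, 0.729, 0.6561] = phi^j, as in (4.26)
```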
As a linear process, AR(1) is stationary with mean
$$E X_t = \sum_{j=0}^{\infty} \phi^j E(Z_{t-j}) = 0 \tag{4.27}$$
and autocovariance function given by (4.19); for $\tau \geq 0$ (for negative lags, $\gamma(-\tau) = \gamma(\tau)$) this is
$$\gamma(\tau) = \sigma^2 \sum_{j=0}^{\infty} \psi_j \psi_{j+\tau} = \sigma^2 \phi^{\tau} \sum_{j=0}^{\infty} \phi^{2j}.$$
However, the infinite sum in this expression is the sum of a geometric progression, as $|\phi| < 1$, i.e.,
$$\sum_{j=0}^{\infty} \phi^{2j} = \frac{1}{1 - \phi^2}.$$
This gives us the following form for the ACVF of AR(1):
$$\gamma(\tau) = \frac{\sigma^2 \phi^{\tau}}{1 - \phi^2}. \tag{4.28}$$
Then the variance of AR(1) is
$$\gamma(0) = \frac{\sigma^2}{1 - \phi^2}.$$
Hence, the autocorrelation function of AR(1) is
$$\rho(\tau) = \frac{\gamma(\tau)}{\gamma(0)} = \phi^{\tau}. \tag{4.29}$$

Figure 4.7: Simulated AR(1) processes for $\phi = -0.9$ (top) and for $\phi = 0.9$ (bottom).

Figure 4.8: Sample ACF for AR(1): (a) $x_t = -0.9 x_{t-1} + z_t$ and (b) $x_t = 0.9 x_{t-1} + z_t$.

Figure 4.9: Simulated AR(1) processes for $\phi = -0.5$ (top) and for $\phi = 0.5$ (bottom).

Figure 4.10: Sample ACF for AR(1): (a) $x_t = -0.5 x_{t-1} + z_t$ and (b) $x_t = 0.5 x_{t-1} + z_t$.

Figures 4.7, 4.9 and 4.8, 4.10 show simulated AR(1) processes for four different values of the coefficient $\phi$ (equal to $-0.9$, $0.9$, $-0.5$ and $0.5$) and the respective sample ACF functions.

Looking at these graphs we can see that for a positive coefficient we obtain a smoother TS than for a negative one. Also, the ACFs are very different. We see that if $\phi$ is negative, neighbouring observations are negatively correlated, but observations two time points apart are positively correlated. In fact, if $\phi$ is negative, neighbouring TS values typically have opposite signs. This is more evident when $\phi$ is close to $-1$.
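The sign-flipping behaviour can be quantified by counting how often neighbouring observations have opposite signs; for a stationary Gaussian AR(1) this fraction is $\tfrac{1}{2} - \arcsin(\phi)/\pi$ (Sheppard's orthant formula with lag-1 correlation $\rho(1) = \phi$), which approaches 1 as $\phi \to -1$. A quick illustrative check with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
for phi in [-0.9, -0.5, 0.5, 0.9]:
    z = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + z[t]
    flips = np.mean(x[1:] * x[:-1] < 0)   # fraction of neighbouring pairs with opposite signs
    print(phi, round(flips, 2), round(0.5 - np.arcsin(phi) / np.pi, 2))   # empirical vs theory
```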

4.5.2 Random Walk


This is a TS where at each point of time the series moves randomly away from its
current position. The model can then be written as

$$X_t = X_{t-1} + Z_t, \tag{4.30}$$
where $Z_t$ is a white noise variable with zero mean and constant variance $\sigma^2$. The model has the same form as an AR(1) process but, since $\phi = 1$, it is not stationary. Such a process is called a random walk.

Repeatedly substituting for past values gives
$$\begin{aligned}
X_t &= X_{t-1} + Z_t \\
&= X_{t-2} + Z_{t-1} + Z_t \\
&= X_{t-3} + Z_{t-2} + Z_{t-1} + Z_t \\
&= \dots \\
&= X_0 + \sum_{j=0}^{t-1} Z_{t-j}.
\end{aligned}$$
If the initial value, $X_0$, is fixed, then the mean value of $X_t$ is equal to $X_0$, that is
$$E X_t = E\left[ X_0 + \sum_{j=0}^{t-1} Z_{t-j} \right] = X_0.$$

So, the mean is constant but, as we see below, the variance and covariance depend on time, not just on the lag.

Figure 4.11: Simulated Random Walk $x_t = x_{t-1} + z_t$.

The white noise variables $Z_t$ are uncorrelated, hence we obtain
$$\operatorname{var}(X_t) = \operatorname{var}\left( X_0 + \sum_{j=0}^{t-1} Z_{t-j} \right) = \operatorname{var}\left( \sum_{j=0}^{t-1} Z_{t-j} \right) = \sum_{j=0}^{t-1} \operatorname{var}(Z_{t-j}) = t \sigma^2$$
and
$$\operatorname{cov}(X_t, X_{t-\tau}) = \operatorname{cov}\left( \sum_{j=0}^{t-1} Z_{t-j}, \sum_{k=0}^{t-\tau-1} Z_{t-\tau-k} \right) = E\left[ \left( \sum_{j=0}^{t-1} Z_{t-j} \right) \left( \sum_{k=0}^{t-\tau-1} Z_{t-\tau-k} \right) \right] = |t - \tau|\, \sigma^2.$$
A simulated series of this form is shown in Figure 4.11.
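The linear growth of the variance is easy to verify by Monte Carlo: simulate many independent walks and compute the variance across replications at a few time points. An illustrative sketch with arbitrary replication count:

```python
import numpy as np

rng = np.random.default_rng(4)
nrep, n = 5000, 200
steps = rng.normal(size=(nrep, n))     # Z_t with sigma = 1
walks = np.cumsum(steps, axis=1)       # each row: X_t = X_0 + sum of Z's, with X_0 = 0

for t in [10, 50, 200]:
    print(t, round(walks[:, t - 1].var(), 1))   # empirical var(X_t) is close to t * sigma^2 = t
```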

Figure 4.12: (a) Differenced Random Walk $\nabla x_t$ and (b) its sample ACF.

As we can see, the random walk meanders away from its starting value in no particular direction. It does not exhibit any clear trend, but at the same time it is not stationary.

However, the first difference of the random walk is stationary, as it is just white noise, namely
$$\nabla X_t = X_t - X_{t-1} = Z_t.$$
The differenced random walk and its sample ACF are shown in Figure 4.12.
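In code the differencing step is one call, and the whiteness of the result shows up as near-zero sample autocorrelations at all nonzero lags. A short illustrative sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
z = rng.normal(size=1000)
x = np.cumsum(z)          # random walk
dx = np.diff(x)           # first difference; here it recovers z[1:] exactly

dc = dx - dx.mean()
for lag in [1, 2, 5, 10]:
    r = np.dot(dc[lag:], dc[:len(dc) - lag]) / np.dot(dc, dc)
    print(lag, round(r, 3))   # all near 0, the white noise pattern of Figure 4.12(b)
```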

4.5.3 Explosive AR(1) Model and Causality


As we have seen in the previous section, the random walk, which is AR(1) with $\phi = 1$, is not a stationary process. So, the question arises whether a stationary AR(1) process with $|\phi| > 1$ exists. Also, what are the properties of AR(1) models for $\phi > 1$?

Clearly, the sum $\sum_{j=0}^{k-1} \phi^j Z_{t-j}$ will not converge in the mean square sense as $k \to \infty$, so we will not get a linear process representation of the AR(1) this way. However, if $|\phi| > 1$ then $|\phi^{-1}| < 1$, and we can express a past value of the TS in terms of a future value by rewriting
$$X_{t+1} = \phi X_t + Z_{t+1}$$
as
$$X_t = \phi^{-1} X_{t+1} - \phi^{-1} Z_{t+1}.$$

Figure 4.13: Simulated Explosive AR(1): $x_t = 1.02 x_{t-1} + z_t$.

Then, substituting for $X_{t+j}$ several times, we obtain
$$\begin{aligned}
X_t &= \phi^{-1} X_{t+1} - \phi^{-1} Z_{t+1} \\
&= \phi^{-1}(\phi^{-1} X_{t+2} - \phi^{-1} Z_{t+2}) - \phi^{-1} Z_{t+1} \\
&= \phi^{-2} X_{t+2} - \phi^{-2} Z_{t+2} - \phi^{-1} Z_{t+1} \\
&= \dots \\
&= \phi^{-k} X_{t+k} - \sum_{j=1}^{k} \phi^{-j} Z_{t+j}.
\end{aligned}$$
Since $|\phi^{-1}| < 1$, letting $k \to \infty$ we obtain
$$X_t = -\sum_{j=1}^{\infty} \phi^{-j} Z_{t+j},$$
which is a future-dependent stationary TS. This, however, does not have any practical meaning, because it requires knowledge of the future to determine the present.

When a process does not depend on the future, such as the AR(1) with $|\phi| < 1$, we say that it is causal.

Figure 4.13 shows a simulated series $x_t = 1.02 x_{t-1} + z_t$. As we can see, the values of the time series quickly become large in magnitude, even for $\phi$ just slightly above 1. Such a process is called explosive. This is not a causal TS.
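A short simulation confirms the explosive behaviour: the noise is soon dominated by the geometric factor $\phi^t$. An illustrative sketch with arbitrary seed and length:

```python
import numpy as np

rng = np.random.default_rng(6)
phi, n = 1.02, 200
z = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + z[t]

print(np.abs(x[[50, 100, 150, 199]]).round(1))   # magnitudes grow roughly like 1.02**t
```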
