2. Linear stationary models



We now consider the modeling of time
series as stochastic processes. The model
defines the mechanism through which the
observations are considered to have been
generated.

A simple example is the model $Y_t = e_t + \theta e_{t-1}$, $t = 1, \dots, n$, where $e_t$, $t = 0, 1, \dots, n$, is a sequence of uncorrelated random variables drawn from a distribution with mean zero and variance $\sigma^2$, and $\theta$ is a parameter.
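
As an illustration (not part of the notes themselves), the following Python sketch simulates one realization of this model; the values $\theta = 0.6$, $\sigma = 1$, and $n = 200$ are arbitrary choices.

```python
import numpy as np

# Illustrative simulation of Y_t = e_t + theta * e_{t-1}; parameter values are
# hypothetical choices, not taken from the notes.
rng = np.random.default_rng(0)
n, theta, sigma = 200, 0.6, 1.0

e = rng.normal(0.0, sigma, size=n + 1)  # e_0, e_1, ..., e_n
Y = e[1:] + theta * e[:-1]              # Y_t = e_t + theta * e_{t-1}, t = 1, ..., n
print(Y[:5])                            # first few observations of one realization
```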



A particular set of values of $e_0, e_1, \dots, e_n$ results in a corresponding sequence of observations $Y_1, \dots, Y_n$.

By drawing different sets of values $e_0, e_1, \dots, e_n$ we can generate an infinite set of realizations over $t = 1, \dots, n$. Thus, the model effectively defines a joint distribution for the random variables $Y_1, \dots, Y_n$.

For a general model, the mean of the process at time $t$ is $E(Y_t) = \mu_t$, $t = 1, \dots, n$, the average of $Y_t$ taken over all possible realizations.
Also,

$$\mathrm{Cov}(Y_t, Y_{t+h}) = E[(Y_t - \mu_t)(Y_{t+h} - \mu_{t+h})], \quad t = 1, \dots, n.$$

If many realizations are available, the above quantities may be estimated by ensemble averages, for example

$$\hat{\mu}_t = m^{-1} \sum_{j=1}^{m} y_t^{(j)}, \quad t = 1, \dots, n, \qquad (1)$$

where $y_t^{(j)}$ denotes the $j$th observation on $Y_t$ and $m$ is the number of realizations.
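
A minimal sketch of the ensemble estimate (1), assuming the MA(1) model above as the generating mechanism; the values of $m$, $n$, and $\theta$ are again illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, theta = 500, 200, 0.6       # m realizations of length n (illustrative values)

e = rng.normal(size=(m, n + 1))   # row j holds e_0, ..., e_n for realization j
Y = e[:, 1:] + theta * e[:, :-1]  # row j is y_1^{(j)}, ..., y_n^{(j)}
mu_hat = Y.mean(axis=0)           # ensemble average (1): one estimate per time t
print(mu_hat[:5])                 # close to the true mean, which is zero here
```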

However, in most time series problems,
only a single series of observations is
available.


In these situations, to carry out meaningful inference on the mean and covariance functions one needs to place restrictions on the process generating the observations, so that instead of aggregating observations at a particular point in time, we do so over time. This is where stationarity becomes necessary. Under stationarity, we can use the estimates

$$\hat{\mu} = \bar{Y} = n^{-1} \sum_{t=1}^{n} Y_t, \qquad (2)$$

$$\hat{\gamma}(h) = c(h) = n^{-1} \sum_{t=1}^{n-h} (Y_t - \bar{Y})(Y_{t+h} - \bar{Y}). \qquad (3)$$
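
The estimators (2) and (3) translate directly into code; a sketch, assuming the single observed series is held in a NumPy array (note that, following (3), the divisor is $n$ at every lag, not $n - h$):

```python
import numpy as np

def sample_mean(y):
    # Estimate (2): the sample mean Y-bar of the single observed series.
    return y.mean()

def sample_autocov(y, h):
    # Estimate (3): sum over t = 1, ..., n - h, with divisor n at every lag.
    n = len(y)
    ybar = y.mean()
    return np.sum((y[: n - h] - ybar) * (y[h:] - ybar)) / n
```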

Under certain conditions, these statistics give consistent estimates of the mean and autocovariance functions. The conditions basically require that observations sufficiently far apart be almost uncorrelated.

The quantities (3) may be normalized to obtain sample autocorrelations

$$\hat{\rho}(h) = r(h) = c(h)/c(0), \quad h = 1, 2, \dots$$

A plot of $r(h)$ against non-negative values of $h$ is known as the sample autocorrelation function or correlogram.
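
A self-contained sketch of the correlogram computation, using the same divisor-$n$ convention as (3); the white-noise example at the end is only a sanity check.

```python
import numpy as np

def correlogram(y, max_lag):
    # Sample autocorrelations r(h) = c(h) / c(0) for h = 0, 1, ..., max_lag.
    n = len(y)
    ybar = y.mean()
    c = [np.sum((y[: n - h] - ybar) * (y[h:] - ybar)) / n
         for h in range(max_lag + 1)]
    return np.array(c) / c[0]

# For pure white noise, r(h) should be near zero for all h >= 1.
rng = np.random.default_rng(2)
print(correlogram(rng.normal(size=500), 5).round(3))
```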


The sample autocorrelations are estimates of the corresponding theoretical autocorrelations for the stochastic process that is assumed to have generated the observations.

Although the correlogram will tend to
mirror the properties of the theoretical
autocorrelation function, it will not
reproduce it exactly.



The correlogram is the main tool for
analyzing the properties of a time series in
the time domain.
We must, however, know the autocovariance functions of different stochastic processes (we then fit a model with time-domain properties similar to those of the data), and also something about the sampling variability of the estimated autocorrelations.
We next study the nature of the autocovariance function for models that may be represented as a linear combination of past and present values of $\{e_t\}$.


2.1 Autoregressive processes
Consider the moving average representation

$$X_t = \sum_{j=0}^{\infty} \psi_j e_{t-j},$$

where $\psi_0, \psi_1, \psi_2, \dots$ are parameters. Under certain conditions (which allow us to interchange expectation and summation of the infinite series), its autocovariance function is given by

$$\gamma_h = E\{X_t X_{t+h}\} = E\left\{\left(\sum_{j=0}^{\infty} \psi_j e_{t-j}\right)\left(\sum_{k=0}^{\infty} \psi_k e_{t+h-k}\right)\right\} = \sigma^2 \sum_{j=0}^{\infty} \psi_j \psi_{j+h}. \qquad (4)$$

The variance of the process is then given by $\gamma(0) = \sigma^2 \sum_{j=0}^{\infty} \psi_j^2$, and thus the process has finite variance if $\sum_{j=0}^{\infty} \psi_j^2 < \infty$.
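
An illustrative numerical check of (4), with hypothetical weights $\psi_j = 0.8^j$ and the infinite sum truncated at $J$ terms (the truncation is an assumption made only for computation):

```python
import numpy as np

sigma2, J = 1.0, 200        # noise variance and truncation point (assumed values)
psi = 0.8 ** np.arange(J)   # hypothetical weights psi_j = 0.8**j

def gamma(h):
    # gamma_h = sigma^2 * sum_j psi_j * psi_{j+h}, truncated at J terms.
    return sigma2 * np.sum(psi[: J - h] * psi[h:])

# For these weights the exact value is 0.8**h / (1 - 0.64); compare:
print([round(gamma(h), 4) for h in range(4)])
```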

A disadvantage of this representation is that it has an infinite number of parameters. A more parsimonious representation is that of autoregressive-moving average ($ARMA(p, q)$) processes.



2.1.1 First-order autoregression

Consider the process $X_t = \sum_{j=0}^{\infty} \phi^j e_{t-j}$, $|\phi| < 1$. Then we have

$$X_t = e_t + \phi e_{t-1} + \phi^2 e_{t-2} + \cdots$$

and

$$\phi X_{t-1} = \phi e_{t-1} + \phi^2 e_{t-2} + \cdots,$$

so that $X_t - \phi X_{t-1} = e_t$, or $X_t = \phi X_{t-1} + e_t$, a first-order autoregressive process (denoted by AR(1)).
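
A numerical sketch of this equivalence: a truncated version of the infinite moving average satisfies the AR(1) recursion up to an error of order $\phi^J$. The coefficient $\phi$ and truncation point $J$ below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
phi, J, n = 0.7, 100, 50   # arbitrary coefficient, truncation point, sample size

e = rng.normal(size=n + J)
# X_t = sum_{j=0}^{J-1} phi^j e_{t-j}: a truncated infinite moving average.
X = np.array([np.sum(phi ** np.arange(J) * e[t : t - J : -1])
              for t in range(J, n + J)])

# X_t - phi * X_{t-1} equals e_t up to a term of order phi**J (negligible here).
resid = X[1:] - phi * X[:-1]
print(np.max(np.abs(resid - e[J + 1 : J + n])))
```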



This suggests that under certain conditions, an infinite moving average of past values of $\{e_t\}$ may be represented as a finite autoregression.

Coming from the other direction, let $X_t = \phi X_{t-1} + e_t$, $t = 1, \dots, n$, $e_t \sim WN(0, \sigma^2)$ ($e_t$ is white noise with mean zero and variance $\sigma^2$), an AR(1) process.

The lag operator $B$, defined by $B X_t = X_{t-1}$, will prove useful for analyzing series. Applying $B$ successively gives $B^2 X_t = B(B X_t) = B X_{t-1} = X_{t-2}$, and in general $B^k X_t = X_{t-k}$.


Although first observed at $t = 1$, the process can be thought of as having started at some time in the remote past. Repeated substitution gives

$$X_t = \phi(\phi X_{t-2} + e_{t-1}) + e_t = \phi^2 X_{t-2} + \phi e_{t-1} + e_t$$
$$= \phi^3 X_{t-3} + \phi^2 e_{t-2} + \phi e_{t-1} + e_t = \cdots = \phi^K X_{t-K} + \sum_{j=0}^{K-1} \phi^j e_{t-j}. \qquad (5)$$



If $|\phi| \geq 1$, the mean value of the process depends on the starting value $X_{t-K}$, and (5) contains a deterministic component that never dies out. On the other hand, if $|\phi| < 1$, this deterministic component is negligible if $K$ is large, and effectively disappears as $K \to \infty$, and so it is legitimate to write

$$X_t = \sum_{j=0}^{\infty} \phi^j e_{t-j}.$$



The process has variance

$$\gamma_0 = \sigma^2 \sum_{j=0}^{\infty} \phi^{2j} = \frac{\sigma^2}{1 - \phi^2},$$

autocovariance function

$$\gamma_h = E(X_t X_{t+h}) = E\left[\left(\phi^h X_t + \sum_{j=0}^{h-1} \phi^j e_{t+h-j}\right) X_t\right] = \phi^h \gamma_0, \quad h = 1, 2, \dots,$$

and autocorrelation function $\rho(h) = \phi^h$, $h = 1, 2, \dots$

For positive values of $\phi$ we have a smooth exponential decay, but for negative values the autocovariance function decays exponentially while oscillating between negative and positive values.
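
As an illustrative check (not from the notes), one can simulate an AR(1) series and compare the sample autocorrelations with the theoretical $\rho(h) = \phi^h$; the choice $\phi = -0.8$ displays the oscillating decay just described.

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n = -0.8, 5000                # negative phi to show the oscillating decay

e = rng.normal(size=n)
X = np.zeros(n)
for t in range(1, n):
    X[t] = phi * X[t - 1] + e[t]   # AR(1) recursion, started at X_0 = 0

Xbar = X.mean()
denom = np.sum((X - Xbar) ** 2)
for h in range(1, 5):
    r_h = np.sum((X[: n - h] - Xbar) * (X[h:] - Xbar)) / denom
    print(h, round(r_h, 3), round(phi ** h, 3))   # sample vs. theoretical rho(h)
```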



If we assume that the process is stationary, we can obtain the autocovariance function in a more direct manner. Multiplying both sides of $X_t = \phi X_{t-1} + e_t$ by $X_{t-h}$ and taking expectations, we obtain

$$E(X_t X_{t-h}) = \phi E(X_{t-1} X_{t-h}) + E(e_t X_{t-h}),$$

and since $X_{t-h}$ depends on $e_{t-h}, e_{t-h-1}, \dots$, it is uncorrelated with $e_t$, and thus

$$\gamma_h = \phi \gamma_{h-1}, \quad h = 1, 2, \dots$$

This is a first-order difference equation with solution given by $\gamma_h = \phi^h \gamma_0$, i.e. $\rho(h) = \phi^h$, $h = 0, 1, 2, \dots$, as before.


For $X_t = \phi X_{t-1} + e_t$, we can write $(1 - \phi B) X_t = e_t$. For stationarity, $|\phi| < 1$, or equivalently the root of $\phi(B) = 1 - \phi B = 0$ must lie outside the unit circle.
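
A small sketch of this condition: the root of $1 - \phi B = 0$ is $B = 1/\phi$, so $|\phi| < 1$ is exactly the requirement that the root lie outside the unit circle. The two values of $\phi$ below are arbitrary examples.

```python
import numpy as np

# numpy.roots takes coefficients from the highest power down: -phi * B + 1.
for phi in (0.5, 1.2):
    root = np.roots([-phi, 1.0])[0]    # root of 1 - phi*B, i.e. B = 1/phi
    print(phi, root, abs(root) > 1.0)  # stationary iff the root lies outside the unit circle
```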
