ARIMA theory

ARIMA (autoregressive integrated moving average) models are generalizations of the simple AR model that use three tools for modeling the serial correlation in the disturbance: the autoregressive (AR) term, the integration order term, and the moving average (MA) term.

Autoregressive term

The AR(1) model uses only the first-order term, but in general, higher-order AR terms
can be used. Each AR term corresponds to the use of a lagged value of the residual in the
forecasting equation for the unconditional residual. An autoregressive model of order p,
AR(p), has the form

u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + \varepsilon_t
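
As a concrete illustration, the following is a minimal sketch in Python (using NumPy) of simulating an AR(2) process of this form; the coefficients 0.6 and 0.2 are illustrative values, not from the text, chosen to keep the process stationary.

import numpy as np

# Simulate u_t = 0.6 u_{t-1} + 0.2 u_{t-2} + e_t, an AR(2) process.
rng = np.random.default_rng(0)
rho = [0.6, 0.2]              # rho_1, rho_2 (illustrative values)
n = 500
eps = rng.standard_normal(n)  # white-noise innovations
u = np.zeros(n)
for t in range(2, n):
    u[t] = rho[0] * u[t - 1] + rho[1] * u[t - 2] + eps[t]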

Integration order term

Each integration order corresponds to differencing the series being forecast. A first-order
integrated component means that the forecasting model is designed for the first difference
of the original series. A second-order component corresponds to using second differences,
and so on.
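
For example, in Python the first and second differences correspond to np.diff with n=1 and n=2; a minimal sketch, with y standing in as a placeholder for the series to be forecast:

import numpy as np

y = np.cumsum(np.random.default_rng(1).standard_normal(200))  # placeholder series
dy = np.diff(y, n=1)   # first difference: the forecasting model is built on this
d2y = np.diff(y, n=2)  # second difference, for a second-order integrated component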

Moving average term

A moving average forecasting model uses lagged values of the forecast error to improve
the current forecast. A first-order moving average term uses the most recent forecast
error, a second-order term uses the forecast errors from the two most recent periods, and
so on. An MA(q) model has the form:

u_t = \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \dots + \theta_q \varepsilon_{t-q}
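
A minimal sketch of simulating an MA(2) special case of this equation; the coefficients 0.5 and 0.3 are illustrative values:

import numpy as np

rng = np.random.default_rng(2)
theta = [0.5, 0.3]                # theta_1, theta_2 (illustrative values)
n = 500
eps = rng.standard_normal(n + 2)  # two extra draws supply the initial lagged errors
# u_t = e_t + theta_1 e_{t-1} + theta_2 e_{t-2}
u = eps[2:] + theta[0] * eps[1:-1] + theta[1] * eps[:-2]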

ARMA (p,q) specification

The autoregressive and moving average specifications can be combined to form an
ARMA (p,q) specification:

u_t = \rho_1 u_{t-1} + \rho_2 u_{t-2} + \dots + \rho_p u_{t-p} + \varepsilon_t + \theta_1 \varepsilon_{t-1} + \theta_2 \varepsilon_{t-2} + \dots + \theta_q \varepsilon_{t-q}
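
Such a specification can be estimated with, for example, statsmodels, where an ARMA (p,q) model is written as an ARIMA with integration order zero. A minimal sketch fitting an ARMA (1,1) to a simulated series whose true coefficients (0.7 and 0.4) are illustrative:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 500
eps = rng.standard_normal(n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = 0.7 * u[t - 1] + eps[t] + 0.4 * eps[t - 1]  # ARMA(1,1): rho_1 = 0.7, theta_1 = 0.4

res = ARIMA(u, order=(1, 0, 1)).fit()  # ARMA (1,1) is ARIMA order (1, 0, 1)
print(res.params)  # estimated AR and MA coefficients should be near 0.7 and 0.4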

Principles of ARIMA modeling

ARIMA modeling consists of three phases: identification, estimation, and diagnostic
checking.

Identification

The first step in forming an ARIMA model for a series of residuals is to look at its
autocorrelation properties. The nature of the correlation between current values of the
residuals and their past values provides guidance in selecting an ARIMA specification.
The autocorrelations are easy to interpret: each one is the correlation of the current value
of the series with the series lagged a certain number of periods. The partial
autocorrelations are a bit more complicated: they measure the correlation of the current
and lagged series after taking into account the predictive power of all the values of the
series with smaller lags. The type of ARIMA model to be used must then be decided. If
the autocorrelations were zero after one lag and the partial autocorrelations declined
geometrically, a first-order moving average process would seem appropriate. If the
autocorrelations appear to have a seasonal pattern, this would suggest the presence of a
seasonal ARMA structure.
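
A minimal sketch of inspecting the two correlograms with statsmodels and matplotlib; the series u is a placeholder for the residuals under study:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

u = np.random.default_rng(0).standard_normal(500)  # placeholder residual series
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(u, lags=24, ax=axes[0])    # sample autocorrelations
plot_pacf(u, lags=24, ax=axes[1])   # sample partial autocorrelations
plt.show()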

The Akaike information criterion and Schwarz criterion provided with each set of
estimates may also be used as a guide to selecting the appropriate lag order.
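
For instance, a small grid search over candidate orders that keeps the specification with the lowest AIC; a sketch, noting that in statsmodels the Akaike and Schwarz criteria are reported as res.aic and res.bic:

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

u = np.random.default_rng(0).standard_normal(500)  # placeholder residual series
best = None
for p in range(3):
    for q in range(3):
        res = ARIMA(u, order=(p, 0, q)).fit()
        if best is None or res.aic < best[0]:
            best = (res.aic, p, q)
print("lowest AIC at ARMA(%d,%d)" % (best[1], best[2]))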

After fitting a candidate ARIMA specification, you should verify that there are no
remaining autocorrelations that your model has not accounted for. Examine the
autocorrelations and the partial autocorrelations of the innovations (the residuals from the
ARIMA model) to see if any important forecasting power has been overlooked.
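
One common formal check is the Ljung-Box test on the innovations; a minimal sketch on a fitted model, where large p-values suggest no remaining autocorrelation:

import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.arima.model import ARIMA

u = np.random.default_rng(3).standard_normal(500)  # placeholder series
res = ARIMA(u, order=(1, 0, 1)).fit()
# lb_pvalue well above 0.05 at each lag suggests the model has captured
# the autocorrelation structure of the series.
print(acorr_ljungbox(res.resid, lags=[10, 20]))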

Nonstationary Time Series

ARMA estimation is based on stationary time series. A series is said to be stationary if
its mean and autocovariances do not depend on time. A common example of a
nonstationary series is the random walk:

y_t = y_{t-1} + \varepsilon_t

where ε_t is a stationary random disturbance term. The series y has a constant forecast
value, conditional on t, and its variance increases over time. The random walk is a
difference stationary series, since the first difference of y is stationary:

y_t - y_{t-1} = (1 - L) y_t = \varepsilon_t
A difference stationary series is said to be integrated and is denoted I(d), where d is the
order of integration. The order of integration is the number of unit roots contained in the
series, or the number of differencing operations it takes to make the series stationary. For
the random walk above, there is one unit root, so it is an I(1) series. Similarly, a stationary
series is I(0).
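
A minimal sketch of these properties: simulate a random walk and compare the sample variance of the level, which grows with the sample length, with that of the first difference, which stays roughly constant:

import numpy as np

rng = np.random.default_rng(4)
y = np.cumsum(rng.standard_normal(1000))  # random walk: y_t = y_{t-1} + e_t
dy = np.diff(y)                           # (1 - L) y_t = e_t, an I(0) series
print(y[:250].var(), y.var())    # variance of the level keeps growing
print(dy[:250].var(), dy.var())  # variance of the difference is stable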

It is important to check whether a series is stationary or not before using it in a
regression. The formal method is the unit root test.

Unit Root Tests

Consider a simple AR(1) process:

y_t = \rho y_{t-1} + x_t' \delta + \varepsilon_t
where x_t are optional exogenous regressors, which may consist of a constant, or a
constant and trend, ρ and δ are parameters to be estimated, and the ε_t are assumed to be
white noise. If |ρ| ≥ 1, y is a nonstationary series; if |ρ| < 1, y is a (trend-)stationary
series. Thus, the hypothesis of (trend-)stationarity can be evaluated by testing whether
the absolute value of ρ is strictly less than one.
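
A minimal sketch of the augmented Dickey-Fuller unit root test in statsmodels, applied to a simulated random walk and to its first difference; regression='c' includes a constant, and 'ct' would add a constant and trend:

import numpy as np
from statsmodels.tsa.stattools import adfuller

y = np.cumsum(np.random.default_rng(4).standard_normal(1000))  # random walk
for name, series in [("level", y), ("first difference", np.diff(y))]:
    stat, pvalue = adfuller(series, regression="c")[:2]
    print("%s: ADF statistic = %.2f, p-value = %.3f" % (name, stat, pvalue))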
