
Assignment 03, Statistics For Management
Shahzeb Khan Marwat, M.Phil One, UOL


1) MULTICOLLINEARITY

DEFINITION: When two or more independent variables in a regression model are highly correlated with each other, it
is difficult to determine whether each of these variables, individually, has an effect on Y, and to quantify the magnitude of
that effect.
EXAMPLE: If all farms in a sample that use a lot of fertilizer also apply large amounts of pesticides (and vice
versa), it would be hard to tell if the observed increase in yield is due to higher fertilizer or to higher pesticide use.
EXAMPLE: In economics, when interest and inflation rates are used as independent variables, it is often hard to
quantify their individual effects on the dependent variable because they are highly correlated with each other.
Mathematically, this occurs because, everything else being constant, the standard error associated with a
given OLS parameter estimate will be higher if the corresponding independent variable is more highly correlated with
the other independent variables in the model.
This potential problem is known as multicollinearity.
Mathematically, a set of variables is perfectly multicollinear if there exist one or more exact linear relationships
among some of the variables. For example, we may have

λ_0 + λ_1 X_{1i} + λ_2 X_{2i} + … + λ_k X_{ki} = 0

holding for all observations i, where the λ_j are constants and X_{ji} is the i-th observation on the j-th explanatory
variable. We can explore one issue caused by multicollinearity by examining the process of attempting to obtain
estimates for the parameters of the multiple regression equation

Y_i = β_0 + β_1 X_{1i} + β_2 X_{2i} + … + β_k X_{ki} + ε_i


The ordinary least squares estimates involve inverting the matrix X^T X, where X is the n × (k+1) design matrix whose
first column is a column of ones (for the intercept) and whose remaining columns hold the observations on the k
explanatory variables.
If there is an exact linear relationship (perfect multicollinearity) among the independent variables, the rank of X (and
therefore of X^T X) is less than k+1, and the matrix X^T X will not be invertible.
In most applications, perfect multicollinearity is unlikely. An analyst is more likely to face a high degree of
multicollinearity. For example, suppose that instead of the above equation holding, we have that equation in modified
form with an error term v_i:

λ_0 + λ_1 X_{1i} + λ_2 X_{2i} + … + λ_k X_{ki} + v_i = 0

In this case, there is no exact linear relationship among the variables, but the X_j variables are nearly perfectly
multicollinear if the variance of v_i is small for some set of values for the λ's. In this case, the matrix X^T X has an
inverse, but it is ill-conditioned, so that a given computer algorithm may or may not be able to compute an approximate
inverse; if it does, the resulting computed inverse may be highly sensitive to slight variations in the data (due to
magnified effects of rounding error) and so may be very inaccurate.
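As an illustration (not part of the original assignment), the following NumPy sketch builds a small design matrix with one exactly redundant column and one nearly redundant column, and checks the rank and condition number of X^T X; the data and variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 2.0 * x1                                     # exact linear dependence -> perfect multicollinearity
x3 = 2.0 * x1 + rng.normal(scale=0.01, size=n)    # near dependence -> high multicollinearity

X_perfect = np.column_stack([np.ones(n), x1, x2])
X_near = np.column_stack([np.ones(n), x1, x3])

for name, X in [("perfect", X_perfect), ("near", X_near)]:
    XtX = X.T @ X
    print(name,
          "rank =", np.linalg.matrix_rank(XtX),        # full rank would be k+1 = 3
          "condition number =", round(np.linalg.cond(XtX), 1))

# The 'perfect' matrix has rank 2 < 3, so X'X is singular and cannot be inverted;
# the 'near' matrix is invertible but very ill-conditioned, so the computed inverse
# (and hence the OLS estimates) is highly sensitive to tiny changes in the data.
```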





Causes of multicollinearity

- Improper use of dummy variables (e.g. failure to exclude one category)
- Including a variable that is computed from other variables in the equation (e.g. family income = husband's
income + wife's income, and the regression includes all three income measures)
- In effect, including the same or almost the same variable twice (height in feet and height in inches; or, more
commonly, two different operationalizations of the same concept)
- The above all imply some sort of error on the researcher's part. But it may just be that the variables really and
truly are highly correlated.

Detection of multicollinearity

Indicators that multicollinearity may be present in a model:
1. Large changes in the estimated regression coefficients when a predictor variable is added or deleted
2. Insignificant regression coefficients for the affected variables in the multiple regression, but a rejection of the
joint hypothesis that those coefficients are all zero (using an F-test)
3. Some authors have suggested using a formal detection tolerance or the variance inflation factor (VIF) to detect
multicollinearity (a short computational sketch follows this list):

Tolerance_j = 1 − R_j² ,   VIF_j = 1 / Tolerance_j

where R_j² is the coefficient of determination of a regression of explanator j on all the other explanators. A
tolerance of less than 0.20 or 0.10 and/or a VIF of 5 or 10 and above indicates a multicollinearity problem (but
see O'Brien 2007).
4. Condition Number Test: The standard measure of ill-conditioning in a matrix is the condition index. It indicates
that the inversion of the matrix is numerically unstable with finite-precision numbers (standard computer floats and
doubles), and hence signals the potential sensitivity of the computed inverse to small changes in the original matrix.
The condition number is computed by finding the square root of the maximum eigenvalue divided by the minimum
eigenvalue. If the condition number is above 30, the regression is said to have significant multicollinearity.
5. Farrar-Glauber Test: If the variables are found to be orthogonal, there is no multicollinearity; if the variables
are not orthogonal, then multicollinearity is present.
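A minimal sketch of the tolerance/VIF and condition-number checks described above, using NumPy only. The data, the choice of computing the condition number from the correlation matrix of the regressors, and the cut-offs printed in the comments (VIF above 10, condition number above 30) are assumptions made for illustration.

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing
    column j of X on a constant and the remaining columns."""
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)   # highly collinear with x1
x3 = rng.normal(size=200)                   # unrelated regressor
X = np.column_stack([x1, x2, x3])

print("VIFs:", np.round(vif(X), 1))               # large values flag x1 and x2
print("tolerances:", np.round(1.0 / vif(X), 3))   # below 0.10 flags a problem

# Condition number: sqrt(max eigenvalue / min eigenvalue), here of the
# correlation matrix of the regressors (one common convention)
eig = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
print("condition number:", round(np.sqrt(eig.max() / eig.min()), 1))  # above 30 flags a problem
```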


Consequences of multicollinearity


- Even extreme multicollinearity (so long as it is not perfect) does not violate OLS assumptions. OLS estimates
are still unbiased and BLUE (Best Linear Unbiased Estimators)
- Nevertheless, the greater the multicollinearity, the greater the standard errors. When high multicollinearity is
present, confidence intervals for coefficients tend to be very wide and t-statistics tend to be very small.
Coefficients will have to be larger in order to be statistically significant, i.e. it will be harder to reject the null
when multicollinearity is present.
- Note, however, that large standard errors can be caused by things besides multicollinearity.

- When two IVs are highly and positively correlated, their slope coefficient estimators will tend to be highly and
negatively correlated.
- Further, a different sample will likely produce the opposite result. In other words, if you overestimate the
effect of one parameter, you will tend to underestimate the effect of the other. Hence, coefficient estimates
tend to be very shaky from one sample to the next.

Remedies for multicollinearity

I. Make sure you have not fallen into the dummy variable trap; including a dummy variable for every category
(e.g., summer, autumn, winter, and spring) and including a constant term in the regression together guarantee
perfect multicollinearity.
II. Try seeing what happens if you use independent subsets of your data for estimation and apply those estimates
to the whole data set. Theoretically you should obtain somewhat higher variance from the smaller datasets used
for estimation, but the expectation of the coefficient values should be the same. Naturally, the observed
coefficient values will vary, but look at how much they vary.
III. Leave the model as is, despite multicollinearity. The presence of multicollinearity doesn't affect the fitted
model provided that the predictor variables follow the same pattern of multicollinearity as the data on which
the regression model is based.
IV. Drop one of the variables. An explanatory variable may be dropped to produce a model with significant
coefficients. However, you lose information (because you've dropped a variable). Omission of a relevant
variable results in biased coefficient estimates for the remaining explanatory variables.

V. Obtain more data, if possible. This is the preferred solution. More data can produce more precise parameter
estimates (with lower standard errors), as seen from the formula in variance inflation factor for the variance of
the estimate of a regression coefficient in terms of the sample size and the degree of multicollinearity.
VI. Mean-center the predictor variables. Mathematically this has no effect on the results from a regression.
However, it can be useful in overcoming problems arising from rounding and other computational steps if a
carefully designed computer program is not used.
VII. Standardize your independent variables. This may help reduce a false flagging of a condition index above 30.
VIII. It has also been suggested that the Shapley value, a game-theory tool, could be used to account for the
effects of multicollinearity: it assigns a value to each predictor by assessing all possible combinations of
importance.
IX. Ridge regression or principal component regression can be used (a brief ridge sketch follows this list).
X. If the correlated explanators are different lagged values of the same underlying explanator, then a distributed
lag technique can be used, imposing a general structure on the relative values of the coefficients to be
estimated.
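For item IX, a minimal ridge-regression sketch using the closed form β̂ = (XᵀX + λI)⁻¹Xᵀy on centred data, with the intercept left unpenalized. The penalty values λ and the toy data are arbitrary choices made for the illustration, not values from the assignment.

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimate on centred data; returns (intercept, slopes)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    k = X.shape[1]
    beta = np.linalg.solve(Xc.T @ Xc + lam * np.eye(k), Xc.T @ yc)
    intercept = y.mean() - X.mean(axis=0) @ beta
    return intercept, beta

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)          # nearly collinear regressors
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2])

for lam in (0.0, 1.0, 10.0):                      # lam = 0 reproduces OLS
    b0, b = ridge(X, y, lam)
    print(f"lambda={lam:5.1f}  intercept={b0:5.2f}  slopes={np.round(b, 2)}")

# As lambda grows, the slope estimates are shrunk and become much more stable
# across samples, at the cost of some bias.
```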








2) HETEROSCEDASTICITY

DEFINITION: Heteroskedasticity is a systematic pattern in the errors where the variances of the
errors are not constant.
Ordinary least squares assumes that all observations are equally reliable.
Regression Model:

Y_i = β_1 + β_2 X_i + u_i

Homoskedasticity:

Var(u_i) = σ²   or   E(u_i²) = σ²

Heteroskedasticity:

Var(u_i) = σ_i²   or   E(u_i²) = σ_i²


Note that we now have an i subscript attached to sigma squared. This indicates that the disturbance for each of the n
units is drawn from a probability distribution that has a different variance.





CONSEQUENCES OF HETEROSCEDASTICITY

If the error term has non-constant variance, but all other assumptions of the classical linear regression model
are satisfied, then the consequences of using the OLS estimator to obtain estimates of the population
parameters are:

I. The OLS estimator is still unbiased.
II. The OLS estimator is inefficient; that is, it is not BLUE.
III. The estimated variances and covariances of the OLS estimates are biased and inconsistent.
IV. Hypothesis tests are not valid.

DETECTION OF HETEROSCEDASTICITY
There are several ways to use the sample data to detect the existence of heteroscedasticity.
Plot the Residuals
The residual for the tth observation,
t
.
, is an unbiased estimate of the unknown and unobservable error for that
observation,
t
. Thus the squared residuals,
t
2.
, can be used as an estimate of the unknown and unobservable error
variance, o
t
2
= E(
t
2
). You can calculate the squared residuals and then plot them against an explanatory variable that
you believe might be related to the error variance. If you believe that the error variance may be related to more than

Assignment 03, Statistics For Management Shahzeb Khan Marwat M.Phil One, UOL
one of the explanatory variables, you can plot the squared residuals against each one of these variables.
Alternatively, you could plot the squared residuals against the fitted value of the dependent variable obtained from the
OLS estimates. Most statistical programs have a command to do these residual plots for you. It must be emphasized
that this is not a formal test for heteroscedasticity. It would only suggest whether heteroscedasticity may exist. You
should not substitute the residual plot for a formal test.

Breusch-Pagan Test, and Harvey-Godfrey Test

There is a set of heteroscedasticity tests that require an assumption about the structure of the heteroscedasticity, if it
exists. That is, to use these tests you must choose a specific functional form for the relationship between the error
variance and the variables that you believe determine the error variance. The major difference between these tests is
the functional form that each test assumes. Two of these tests are the Breusch-Pagan test and the Harvey-Godfrey
Test. The Breusch-Pagan test assumes the error variance is a linear function of one or more variables. The Harvey-
Godfrey Test assumes the error variance is an exponential function of one or more variables. The variables are
usually assumed to be one or more of the explanatory variables in the regression equation.

Example

Suppose that the regression model is given by

Y_t = β_1 + β_2 X_t + ε_t   for t = 1, 2, …, n


We postulate that all of the assumptions of the classical linear regression model are satisfied, except for the assumption
of constant error variance. Instead we assume the error variance is non-constant. We can write this assumption as
follows

Var(ε_t) = E(ε_t²) = σ_t²   for t = 1, 2, …, n

Suppose that we assume that the error variance is related to the explanatory variable X_t. The Breusch-Pagan test
assumes that the error variance is a linear function of X_t. We can write this as follows.

σ_t² = α_1 + α_2 X_t   for t = 1, 2, …, n

The Harvey-Godfrey test assumes that the error variance is an exponential function of X_t. This can be written as
follows

σ_t² = exp(α_1 + α_2 X_t)

or taking a logarithmic transformation


ln(σ_t²) = α_1 + α_2 X_t   for t = 1, 2, …, n

The null-hypothesis of constant error variance (no heteroscedasticity) can be expressed as the following restriction on
the parameters of the heteroscedasticity equation

H_0: α_2 = 0
H_1: α_2 ≠ 0

To test the null hypothesis of constant error variance (no heteroscedasticity), we can use a Lagrange multiplier test.
This follows a chi-square distribution with degrees of freedom equal to the number of restrictions you are testing. In
this case, where we have included only one variable, X_t, we are testing one restriction, and therefore we have one
degree of freedom. Because the error variances σ_t² for the n observations are unknown and unobservable, we must
use the squared residuals as estimates of these error variances. To calculate the Lagrange multiplier test statistic,
proceed as follows.

Step#1: Regress Y_t against a constant and X_t using the OLS estimator.
Step#2: Calculate the residuals from this regression, ê_t.
Step#3: Square these residuals, ê_t². For the Harvey-Godfrey test, take the logarithm of these squared
residuals, ln(ê_t²).
Step#4: For the Breusch-Pagan test, regress the squared residuals, ê_t², on a constant and X_t,
using OLS. For the Harvey-Godfrey test, regress the logarithm of the squared residuals, ln(ê_t²),
on a constant and X_t, using OLS. This is called the auxiliary regression.
Step#5: Find the unadjusted R² statistic and the number of observations, n, for the auxiliary regression.
Step#6: Calculate the LM test statistic as follows: LM = nR².

Once you have calculated the test statistic, compare the value of the test statistic to the critical value for some
predetermined level of significance. If the calculated test statistic exceeds the critical value, then reject the null-
hypothesis of constant error variance and conclude that there is heteroscedasticity. If not, do not reject the null-
hypothesis and conclude that there is no evidence of heteroscedasticity.
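The six steps above translate almost line for line into code. Below is a minimal NumPy/SciPy sketch of the Breusch-Pagan version (the Harvey-Godfrey variant only replaces ê_t² with ln ê_t² in the auxiliary regression); the simulated data and the 5% significance level are assumptions made for the illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(1.0, 10.0, size=n)
eps = rng.normal(scale=np.sqrt(0.5 + 0.4 * x))       # error variance grows with x -> heteroscedastic
y = 2.0 + 1.5 * x + eps

# Steps 1-2: OLS of y on a constant and x, keep the residuals
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ehat = y - X @ beta

# Steps 3-4: auxiliary regression of the squared residuals on a constant and x
u = ehat ** 2                     # use np.log(ehat**2) for the Harvey-Godfrey variant
gamma, *_ = np.linalg.lstsq(X, u, rcond=None)
fitted = X @ gamma
r2 = 1.0 - ((u - fitted) @ (u - fitted)) / ((u - u.mean()) @ (u - u.mean()))

# Steps 5-6: LM = n * R^2, compared with a chi-square(1) critical value
lm = n * r2
crit = stats.chi2.ppf(0.95, df=1)
print(f"LM = {lm:.2f}, 5% critical value = {crit:.2f}, reject H0: {lm > crit}")
```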

These heteroscedasticity tests have two major shortcomings:
1. You must specify a model of what you believe is the structure of the heteroscedasticity, if it exists. For
example, the Breusch-Pagan test assumes that the error variance is a linear function of one or more of the
explanatory variables, if heteroscedasticity exists. Thus, if heteroscedasticity exists, but the error variance
is a non-linear function of one or more explanatory variables, then this test will not be valid.
2. If the errors are not normally distributed, then these tests may not be valid.




White's Test

The White test is a general test for heteroscedasticity. It has the following advantages:
1. It does not require you to specify a model of the structure of the heteroscedasticity, if it exists.
2. It does not depend on the assumption that the errors are normally distributed.
3. It specifically tests if the presence of heteroscedasticity causes the OLS formula for the variances and the
covariances of the estimates to be incorrect.

Example

Suppose that the regression model is given by

Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ε_t   for t = 1, 2, …, n
We postulate that all of the assumptions of the classical linear regression model are satisfied, except for the assumption
of constant error variance. For the White test, assume the error variance has the following general structure.

σ_t² = α_1 + α_2 X_t2 + α_3 X_t3 + α_4 X_t2² + α_5 X_t3² + α_6 X_t2 X_t3   for t = 1, 2, …, n


Note that we include all of the explanatory variables in the function that describes the error variance, and therefore we
are using a general functional form to describe the structure of the heteroscedasticity, if it exists. The null-hypothesis
of constant error variance (no heteroscedasticity) can be expressed as the following restriction on the parameters of
the heteroscedasticity equations

H_0: α_2 = α_3 = α_4 = α_5 = α_6 = 0
H_1: At least one is non-zero

To test the null hypothesis of constant error variance (no heteroscedasticity), we can use a Lagrange multiplier test.
This follows a chi-square distribution with degrees of freedom equal to the number of restrictions you are testing. In
this case, we are testing 5 restrictions, and therefore we have 5 degrees of freedom. Once again, because the error
variances σ_t² for the n units are unknown and unobservable, we must use the squared residuals as estimates of these
error variances. To calculate the Lagrange multiplier test statistic, proceed as follows.

Step#1: Regress Y_t against a constant, X_t2, and X_t3 using the OLS estimator.
Step#2: Calculate the residuals from this regression, ê_t.
Step#3: Square these residuals, ê_t².
Step#4: Regress the squared residuals, ê_t², on a constant, X_t2, X_t3, X_t2², X_t3², and X_t2·X_t3 using OLS.
Step#5: Find the unadjusted R² statistic and the number of observations, n, for the auxiliary regression.
Step#6: Calculate the LM test statistic as follows: LM = nR².


Once you have calculated the test statistic, compare the value of the test statistic to the critical value for some
predetermined level of significance. If the calculated test statistic exceeds the critical value, then reject the null-
hypothesis of constant error variance and conclude that there is heteroscedasticity. If not, do not reject the null-
hypothesis and conclude that there is no evidence of heteroscedasticity.
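A sketch of the White test for the two-regressor model in the example, shown both with the hand-rolled auxiliary regression and, in the last two lines, with the het_white helper that statsmodels provides. The simulated data are made up, and using statsmodels here is just one convenient option.

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_white

rng = np.random.default_rng(4)
n = 200
x2 = rng.uniform(1, 10, size=n)
x3 = rng.uniform(1, 10, size=n)
y = 1.0 + 0.8 * x2 - 0.5 * x3 + rng.normal(scale=0.3 * x2, size=n)

# Steps 1-3: OLS and squared residuals
X = sm.add_constant(np.column_stack([x2, x3]))
res = sm.OLS(y, X).fit()
u = res.resid ** 2

# Step 4: auxiliary regression on levels, squares and the cross product
Z = sm.add_constant(np.column_stack([x2, x3, x2**2, x3**2, x2 * x3]))
aux = sm.OLS(u, Z).fit()

# Steps 5-6: LM = n * R^2 against a chi-square(5) critical value
lm = n * aux.rsquared
print(f"LM = {lm:.2f}, 5% critical value = {stats.chi2.ppf(0.95, 5):.2f}")

# Same test via the library routine (LM statistic and its p-value come first)
lm_stat, lm_pval, _, _ = het_white(res.resid, X)
print(f"het_white: LM = {lm_stat:.2f}, p-value = {lm_pval:.4f}")
```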

The following points should be noted about the White Test.

1. If one or more of the X's are dummy variables, then you must be careful when specifying the auxiliary regression.
For example, suppose that X_3 is a dummy variable. In this case, the variable X_3² is the same as the variable X_3. If
you include both of these in the auxiliary regression, then you will have perfect multicollinearity. Therefore, you
should exclude X_3² from the auxiliary regression.
2. If you have a large number of explanatory variables in the model, the number of explanatory variables in the
auxiliary regression could exceed the number of observations. In this case, you must exclude some variables from
the auxiliary regression. You could exclude the linear terms, and/or the cross-product terms; however, you should
always keep the squared terms in the auxiliary regression.





REMEDIES FOR HETEROSCEDASTICITY

Suppose that we find evidence of heteroscedasticity. If we use the OLS estimator, we will get unbiased but
inefficient estimates of the parameters of the model. Also, the estimates of the variances and covariances of the
parameter estimates will be biased and inconsistent, and as a result hypothesis tests will not be valid. When there is
evidence of heteroscedasticity, econometricians do one of two things.

1. Use the OLS estimator to estimate the parameters of the model. Correct the estimates of the variances and
covariances of the OLS estimates so that they are consistent.
2. Use an estimator other than the OLS estimator to estimate the parameters of the model.

Many econometricians choose alternative #1. This is because the most serious consequence of using the OLS
estimator when there is heteroscedasticity is that the estimates of the variances and covariances of the parameter
estimates are biased and inconsistent. If this problem is corrected, then the only shortcoming of using OLS is that
you lose some precision relative to some other estimator that you could have used. However, to get more precise
estimates with an alternative estimator, you must know the approximate structure of the heteroscedasticity. If you
specify the wrong model of heteroscedasticity, then this alternative estimator can yield estimates that are worse than
the OLS estimator.




Heteroscedasticity Consistent Covariance Matrix (HCCM) Estimation

White developed a method for obtaining consistent estimates of the variances and covariances of the OLS estimates.
This is called the heteroscedasticity consistent covariance matrix (HCCM) estimator. Most statistical packages have
an option that allows you to calculate the HCCM matrix.
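As one illustration of such an option, statsmodels exposes White-type robust covariance estimators through the cov_type argument of fit. This is a minimal sketch with simulated data; "HC1" is just one of the available variants, chosen here for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(1, 10, size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.4 * x, size=n)   # heteroscedastic errors

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                     # conventional (biased) standard errors
robust = sm.OLS(y, X).fit(cov_type="HC1")    # heteroscedasticity-consistent (White) SEs

print("conventional SEs:", np.round(ols.bse, 4))
print("robust (HCCM) SEs:", np.round(robust.bse, 4))
# The coefficient estimates are identical; only the estimated variances change.
```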

Generalized Least Squares (GLS) Estimator

If the error term has non-constant variance, then the best linear unbiased estimator (BLUE) is the generalized least
squares (GLS) estimator. This is also called the weighted least squares (WLS) estimator.

Deriving the GLS Estimator for a General Linear Regression Model with Heteroscedasticity

Suppose that we have the following general linear regression model.


Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ε_t   for t = 1, 2, …, n

Var(ε_t) = σ_t² = some function   for t = 1, 2, …, n

The rest of the assumptions are the same as in the classical linear regression model. To derive the GLS estimator for
this model, proceed as follows. Find the error variance σ_t². Find the square root of the error variance, σ_t. This
yields the standard deviation of the error. Divide every term on both sides of the regression equation by the standard
deviation of the error. This yields the following regression equation.

Y_t/σ_t = β_1(1/σ_t) + β_2(X_t2/σ_t) + β_3(X_t3/σ_t) + ε_t/σ_t

Or equivalently

Y_t* = β_1 X_t1* + β_2 X_t2* + β_3 X_t3* + ε_t*

where

Y_t* = Y_t/σ_t ;  X_t1* = 1/σ_t ;  X_t2* = X_t2/σ_t ;  X_t3* = X_t3/σ_t ;  ε_t* = ε_t/σ_t


This is called the transformed model. Note that the error variance for the transformed model is

Var(ε_t*) = Var(ε_t/σ_t) = Var(ε_t)/σ_t² = 1, and therefore the transformed model has constant error variance and satisfies
all of the assumptions of the classical linear regression model. Apply the OLS estimator to the transformed model.
Application of the OLS estimator to the transformed model is called the GLS estimator. That is, run an OLS
regression of Y_t* on X_t1*, X_t2*, and X_t3*. Do not include an intercept in the regression.

Weighted Least Squares (WLS) Estimator

The GLS estimator is the same as a weighted least squares estimator. The WLS estimator is the OLS estimator
applied to a transformed model that is obtained by multiplying each term on both sides of the regression equation by a
weight, denoted w_t. For the above example, the transformed model is

w_t Y_t = β_1 w_t + β_2 (w_t X_t2) + β_3 (w_t X_t3) + w_t ε_t


For the GLS estimator, w_t = 1/σ_t. Thus, the GLS estimator is a particular kind of WLS estimator: each
observation on each variable is given a weight w_t that is inversely proportional to the standard deviation of the error
for that observation. This means that observations with a large error variance are given less weight, and observations
with a smaller error variance are given more weight in the GLS regression.
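A sketch of the weighting idea above: if the error standard deviation σ_t were known, dividing every variable (including the constant) by σ_t and running OLS without an extra intercept, or equivalently calling a WLS routine with weights 1/σ_t², gives the same GLS estimates. The data and the assumed form of σ_t are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
x2 = rng.uniform(1, 10, size=n)
x3 = rng.uniform(1, 10, size=n)
sigma = 0.5 * x2                                    # pretend the true sigma_t is known
y = 1.0 + 0.8 * x2 - 0.5 * x3 + rng.normal(scale=sigma)

# GLS as OLS on the transformed model: divide every term by sigma_t,
# and do NOT add another intercept (the weighted constant is itself a regressor).
Xstar = np.column_stack([1.0 / sigma, x2 / sigma, x3 / sigma])
ystar = y / sigma
gls = sm.OLS(ystar, Xstar).fit()

# Same thing via WLS with weights proportional to 1/sigma_t^2
X = sm.add_constant(np.column_stack([x2, x3]))
wls = sm.WLS(y, X, weights=1.0 / sigma**2).fit()

print("GLS via transformed OLS:", np.round(gls.params, 3))
print("WLS with weights 1/sigma^2:", np.round(wls.params, 3))   # identical estimates
```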


Problems with Using the GLS Estimator

The major problem with GLS estimator is that to use it you must know the true error variance and standard deviation
of the error for each observation in the sample. However, the true error variance is always unknown and
unobservable. Thus, the GLS estimator is not a feasible estimator.

Feasible Generalized Least Squares (FGLS) Estimator

The GLS estimator requires that σ_t be known for each observation in the sample. To make the GLS estimator
feasible, we can use the sample data to obtain an estimate of σ_t for each observation in the sample. We can then
apply the GLS estimator using the estimates of σ_t. When we do this, we have a different estimator. This estimator is
called the Feasible Generalized Least Squares Estimator, or FGLS estimator.
Example
Suppose that we have the following general linear regression model.

Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ε_t   for t = 1, 2, …, n


Var(ε_t) = σ_t² = some function   for t = 1, 2, …, n

The rest of the assumptions are the same as in the classical linear regression model. Suppose that we assume that the
error variance is a linear function of X_t2 and X_t3. Thus, we are assuming that the heteroscedasticity has the following
structure.

Var(ε_t) = σ_t² = α_1 + α_2 X_t2 + α_3 X_t3   for t = 1, 2, …, n

To obtain FGLS estimates of the parameters β_1, β_2, and β_3, proceed as follows.

Step#1: Regress Y_t against a constant, X_t2, and X_t3 using the OLS estimator.
Step#2: Calculate the residuals from this regression, ê_t.
Step#3: Square these residuals, ê_t².
Step#4: Regress the squared residuals, ê_t², on a constant, X_t2, and X_t3, using OLS.
Step#5: Use the estimates of α_1, α_2, and α_3 to calculate the predicted values σ̂_t². This is an estimate of the
error variance for each observation. Check the predicted values. For any predicted value that is non-positive,
replace it with the squared residual for that observation. This ensures that the estimate of the variance is a
positive number (you can't have a negative variance).
Step#6: Find the square root of the estimate of the error variance, σ̂_t, for each observation.
Step#7: Calculate the weight w_t = 1/σ̂_t for each observation.
Step#8: Multiply Y_t, the constant, X_t2, and X_t3 for each observation by its weight.
Step#9: Regress w_t Y_t on w_t, w_t X_t2, and w_t X_t3 using OLS.
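The nine steps above can be sketched directly in NumPy. This is only an illustration under the assumed linear variance model σ_t² = α_1 + α_2 X_t2 + α_3 X_t3; the data are simulated for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
x2 = rng.uniform(1, 10, size=n)
x3 = rng.uniform(1, 10, size=n)
true_var = 0.2 + 0.3 * x2 + 0.1 * x3
y = 1.0 + 0.8 * x2 - 0.5 * x3 + rng.normal(scale=np.sqrt(true_var))

def ols(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

X = np.column_stack([np.ones(n), x2, x3])

# Steps 1-3: OLS, residuals, squared residuals
ehat = y - X @ ols(X, y)
u = ehat ** 2

# Steps 4-5: auxiliary regression and fitted variances, floored at the squared residual
alpha = ols(X, u)
var_hat = X @ alpha
var_hat = np.where(var_hat > 0, var_hat, u)     # keep every estimated variance positive

# Steps 6-9: weights 1/sigma_hat, then OLS on the weighted variables (no extra intercept)
w = 1.0 / np.sqrt(var_hat)
Xw = X * w[:, None]          # multiplies the constant, x2 and x3 by w_t
yw = y * w
print("FGLS estimates:", np.round(ols(Xw, yw), 3))
```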


Properties of the FGLS Estimator

If the model of heteroscedasticity that you assume is a reasonable approximation of the
true heteroscedasticity, then the FGLS estimator has the following properties. 1) It
is non-linear. 2) It is biased in small samples. 3) It is asymptotically more efficient
than the OLS estimator. 4) Monte Carlo studies suggest it tends to yield more
precise estimates than the OLS estimator. However, if the model of
heteroscedasticity that you assume is not a reasonable approximation of the true
heteroscedasticity, then the FGLS estimator will yield worse estimates than the OLS
estimator.







3) AUTOCORRELATION


INTRODUCTION

Autocorrelation occurs when the errors are correlated. In this case, we can think of the disturbances for different
observations as being drawn from distributions that are not independent of one another.
Example
Suppose that we have the following equation that describes the statistical relationship between annual
consumption expenditures and annual disposable income for the U.S. for the period 1959 to 1995. Thus, we have
37 multivariate observations on consumption expenditures and income, one for each year.

Y_t = α + βX_t + ε_t   for t = 1, …, 37
Because of the time-series nature of the data, we would expect the disturbances for different years to be correlated
with one another. For example, we might expect the disturbance in year t to be correlated with the disturbance in
year t-1. In particular, we might expect a positive correlation between the disturbances in year t and year t-1. For
example, if the disturbance in year t-1 (e.g., 1970) is positive, it is quite likely that the disturbance in year t (1971)
will be positive, or if the disturbance in year t-1 (e.g., 1970) is negative, it is quite likely that the disturbance in year t

(1971) will be negative. This is because time-series data tends to follow trends. The assumption of autocorrelation
can be expressed as follows
Cov(ε_t, ε_s) = E(ε_t ε_s) ≠ 0   for some t ≠ s
Thus, autocorrelation occurs whenever the disturbance for period t is correlated with the disturbance for period s.
In the above example, autocorrelation exists because the disturbance in year t is correlated with the disturbance in
year t-1.
STRUCTURE OF AUTOCORRELATION
There are many different types of autocorrelation.
First-Order Autocorrelation
The model of autocorrelation that is assumed most often is called the first-order autoregressive process. This is
most often called AR(1). The AR(1) model of autocorrelation assumes that the disturbance in period t (current
period) is related to the disturbance in period t-1 (previous period). For the consumption function example, the
general linear regression model that assumes an AR(1) process is given by
Y_t = α + βX_t + ε_t   for t = 1, …, 37

ε_t = ρ ε_{t-1} + v_t   where −1 < ρ < 1


The second equation tells us that the disturbance in period t (current period) depends upon the disturbance in
period t-1 (previous period) plus some additional amount, which is an error. In our example, this assumes that the
disturbance for the current year depends upon the disturbance for the previous year plus some additional amount
or error. The following assumptions are made about the error term v_t: E(v_t) = 0, Var(v_t) = σ_v², and Cov(v_t, v_s) = 0
for t ≠ s. That is, it is assumed that these errors are independently and identically distributed with mean zero and
constant variance. The parameter ρ is called the first-order autocorrelation coefficient. Note that it is assumed that
ρ can take any value between negative one and positive one. Thus, ρ can be interpreted as the correlation coefficient
between ε_t and ε_{t-1}. If ρ > 0, then the disturbances in period t are positively correlated with the disturbances in
period t-1. In this case there is positive autocorrelation. This means that when disturbances in period t-1 are positive,
disturbances in period t tend to be positive; when disturbances in period t-1 are negative, disturbances in period t tend
to be negative. Time-series data sets in economics are usually characterized by positive autocorrelation. If ρ < 0, then
the disturbances in period t are negatively correlated with the disturbances in period t-1. In this case there is negative
autocorrelation. This means that when disturbances in period t-1 are positive, disturbances in period t tend to be
negative; when disturbances in period t-1 are negative, disturbances in period t tend to be positive.





Second-Order Autocorrelation
An alternative model of autocorrelation is called the second-order autoregressive process or AR(2). The AR(2)
model of autocorrelation assumes that the disturbance in period t is related to both the disturbance in period t-1
and the disturbance in period t-2. The general linear regression model that assumes an AR(2) process is given by

Y_t = α + βX_t + ε_t   for t = 1, …, 37

ε_t = ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + v_t


The second equation tells us that the disturbance in period t depends upon the disturbance in period t-1, the
disturbance in period t-2, and some additional amount, which is an error. Once again, it is assumed that these errors
are independently and identically distributed with mean zero and constant variance.
pth-Order Autocorrelation
The general linear regression model that assumes a pth-order autoregressive process, or AR(p), where p can assume
any positive integer value, is given by

Y_t = α + βX_t + ε_t   for t = 1, …, 37

ε_t = ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + … + ρ_p ε_{t-p} + v_t



For example, if you have quarterly data on consumption expenditures and disposable income, you might argue that
a fourth-order autoregressive process is the appropriate model of autocorrelation. However, once again, the most
often used model of autocorrelation is the first-order autoregressive process.

CONSEQUENCES OF AUTOCORRELATION
The consequences are the same as heteroscedasticity. That is:

1. The OLS estimator is still unbiased.
2. The OLS estimator is inefficient; that is, it is not BLUE.
3. The estimated variances and covariances of the OLS estimates are biased and inconsistent.
a) If there is positive autocorrelation, and if the value of a right-hand side variable grows over time, then the
estimate of the standard error of the coefficient estimate of this variable will be too low and hence the t-
statistic too high.
4. Hypothesis tests are not valid.





DETECTION OF AUTOCORRELATION
There are several ways to use the sample data to detect the existence of autocorrelation.
Plot the Residuals
The error for the t-th observation, ε_t, is unknown and unobservable. However, we can use the residual for the t-th
observation, ê_t, as an estimate of the error. One way to detect autocorrelation is to estimate the equation using
OLS, and then plot the residuals against time. In our example, the residual would be measured on the vertical axis.
The years 1959 to 1995 would be measured on the horizontal axis. You can then examine the residual plot to
determine if the residuals appear to exhibit a pattern of correlation. Most statistical packages have a command that
does this residual plot for you. It must be emphasized that this is not a formal test of autocorrelation. It would only
suggest whether autocorrelation may exist. You should not substitute a residual plot for a formal test.

The Durbin-Watson d Test
The most often used test for first-order autocorrelation is the Durbin-Watson d test. It is important to note that this
test can only be used to test for first-order autocorrelation; it cannot be used to test for higher-order
autocorrelation. Also, this test cannot be used if the lagged value of the dependent variable is included as a right-
hand side variable.



Example

Suppose that the regression model is given by

Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ε_t

ε_t = ρ ε_{t-1} + v_t   where −1 < ρ < 1

where Y_t is annual consumption expenditures in year t, X_t2 is annual disposable income in year t, and X_t3 is the
interest rate for year t.

We want to test for first-order positive autocorrelation. Economists usually test for positive autocorrelation
because negative serial correlation is highly unusual when using economic data. The null and alternative
hypotheses are
H_0: ρ = 0
H_1: ρ > 0


Note that this is a one-sided or one-tailed test.
To do the test, proceed as follows.
Step #1: Regress Y_t against a constant, X_t2 and X_t3 using the OLS estimator.
Step #2: Use the OLS residuals from this regression to calculate the following test statistic:

d = Σ_{t=2..n} (ê_t − ê_{t-1})² / Σ_{t=1..n} ê_t²

Note the following:
1. The numerator has one fewer observation than the denominator. This is because one observation must be
used to calculate ê_{t-1}.
2. It can be shown that the test statistic d can take any value between 0 and 4.
3. It can be shown that if d = 0, then there is extreme positive autocorrelation.
4. It can be shown that if d = 4, then there is extreme negative autocorrelation.
5. It can be shown that if d = 2, then there is no autocorrelation.
Step #3: Choose a level of significance for the test and find the critical values d_L and d_U. Table A.5 in
Ramanathan gives these critical values for a 5% level of significance. To find these two critical
values, you need two pieces of information: n = number of observations and k = number of right-
hand side variables, not including the constant. In our example, n = 37 and k = 2. Therefore, the
critical values are d_L = 1.36 and d_U = 1.59.
Step #4: Compare the value of the test statistic to the critical values using the following decision rule.

If d < d_L, then reject the null and conclude there is first-order autocorrelation.
If d > d_U, then accept the null and conclude there is no first-order autocorrelation.
If d_L ≤ d ≤ d_U, the test is inconclusive.

Note: A rule of thumb that is sometimes used is to conclude that there is no first-order autocorrelation if the d
statistic is between 1.5 and 2.5. A d statistic below 1.5 indicates positive first-order autocorrelation. A d statistic of
greater than 2.5 indicates negative first-order autocorrelation. However, strictly speaking, this is not correct.
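A sketch of the d statistic from Step #2, computed directly from the OLS residuals. The AR(1) data are simulated (the income and interest-rate series are stand-ins), and the critical values d_L = 1.36, d_U = 1.59 are the ones quoted above for n = 37 and k = 2.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 37
x2 = np.linspace(100, 500, n)          # stand-in for disposable income
x3 = rng.uniform(2, 8, size=n)         # stand-in for the interest rate

# Build AR(1) errors with rho = 0.7, then the dependent variable
rho = 0.7
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal(scale=5.0)
y = 10.0 + 0.9 * x2 - 2.0 * x3 + eps

# Step 1: OLS residuals
X = np.column_stack([np.ones(n), x2, x3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Step 2: d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2
d = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)
print(f"d = {d:.3f}  (d_L = 1.36, d_U = 1.59; d < d_L suggests positive autocorrelation)")
```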

The Breusch-Godfrey Lagrange Multiplier Test
The Breusch-Godfrey test is a general test of autocorrelation. It can be used to test for first-order autocorrelation or
higher-order autocorrelation. This test is a specific type of Lagrange multiplier test.



Example

Suppose that the regression model is given by

Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ε_t

ε_t = ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + v_t

where Y_t is annual consumption expenditures in year t, X_t2 is annual disposable income in year t, and X_t3 is the
interest rate for year t. We want to test for second-order autocorrelation. Economists usually test for positive
autocorrelation because negative serial correlation is highly unusual when using economic data. The null and
alternative hypotheses are

H_0: ρ_1 = ρ_2 = 0
H_1: At least one is not zero


The logic of the test is as follows. Substituting the expression for ε_t into the regression equation yields the following

Y_t = β_1 + β_2 X_t2 + β_3 X_t3 + ρ_1 ε_{t-1} + ρ_2 ε_{t-2} + v_t


To test the null hypothesis of no autocorrelation, we can use a Lagrange multiplier test of whether the variables
ε_{t-1} and ε_{t-2} belong in the equation.

To do the test, proceed as follows.

Step #1: Regress Y_t against a constant, X_t2 and X_t3 using the OLS estimator and obtain the residuals ê_t.
Step #2: Regress ê_t against a constant, X_t2, X_t3, ê_{t-1}, and ê_{t-2} using the OLS estimator. Note that for this
regression you will have n − 2 observations, because two observations must be used to calculate
the lagged residuals ê_{t-1} and ê_{t-2}. Thus, in our example you would run this regression using
the observations for the period 1961 to 1995. You lose the observations for the years 1959 and
1960. Thus, you have 35 observations.
Step #3: Find the unadjusted R² statistic and the number of observations, n − 2, for the auxiliary regression.
Step #4: Calculate the LM test statistic as follows: LM = (n − 2)R².
Step #5: Choose the level of significance of the test and find the critical value of LM. The LM statistic
has a chi-square distribution with two degrees of freedom, χ²(2). For the 5% level of significance
the critical value is 5.99.
Step #6: If the value of the test statistic, LM, exceeds 5.99, then reject the null and conclude that there is
autocorrelation. If not, accept the null and conclude that there is no autocorrelation.
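A sketch of these six steps, testing for second-order autocorrelation with the LM = (n − 2)R² statistic. The data are simulated for the example; statsmodels' acorr_breusch_godfrey routine would give an answer of the same kind, though it handles the lost observations slightly differently.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 37
x2 = np.linspace(100, 500, n)
x3 = rng.uniform(2, 8, size=n)
eps = np.zeros(n)
for t in range(2, n):                               # AR(2) disturbances
    eps[t] = 0.5 * eps[t - 1] + 0.3 * eps[t - 2] + rng.normal(scale=5.0)
y = 10.0 + 0.9 * x2 - 2.0 * x3 + eps

def ols_resid(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Step 1: original regression and residuals
X = np.column_stack([np.ones(n), x2, x3])
e = ols_resid(X, y)

# Step 2: auxiliary regression on X plus two lagged residuals (two observations lost)
Z = np.column_stack([X[2:], e[1:-1], e[:-2]])
u = e[2:]
resid_aux = ols_resid(Z, u)
r2 = 1.0 - resid_aux @ resid_aux / ((u - u.mean()) @ (u - u.mean()))

# Steps 3-6: LM = (n - 2) R^2 against chi-square(2); the 5% critical value is 5.99
lm = (n - 2) * r2
print(f"LM = {lm:.2f}, critical value = {stats.chi2.ppf(0.95, 2):.2f}")
```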

REMEDIES FOR AUTOCORRELATION
If the true model of the data generation process is characterized by autocorrelation, then the best linear unbiased
estimator (BLUE) is the generalized least squares (GLS) estimator.
Deriving the GLS Estimator for a General Linear Regression Model with First-Order Autocorrelation





Suppose that we have the following general linear regression model. For example, this may be the consumption
expenditures model.

Y_t = α + βX_t + ε_t   for t = 1, …, n

ε_t = ρ ε_{t-1} + v_t


Recall that the error term v_t satisfies the assumptions of the classical linear regression model. This statistical model
describes what we believe is the true underlying process that is generating the data.
To derive the GLS estimator, we proceed as follows.
To derive the GLS estimator, we proceed as follows.

1. Derive a transformed model that satisfies all of the assumptions of the classical linear regression model.
2. Apply the OLS estimator to the transformed model.

The GLS estimator is the OLS estimator applied to the transformed model. To derive the transformed model,
proceed as follows. Substitute the expression for ε_t into the regression equation. Doing so yields



(*)  Y_t = α + βX_t + ρ ε_{t-1} + v_t

If we can eliminate the term ρ ε_{t-1} from this equation, we would be left with the error term v_t, which satisfies all of the
assumptions of the classical linear regression model, including the assumption of no autocorrelation. To eliminate
ε_{t-1} from the equation, we proceed as follows. The original regression equation Y_t = α + βX_t + ε_t must be satisfied
for every single observation. Therefore, this equation must be satisfied in period t − 1 as well as in period t.
Therefore, we can write

Y_{t-1} = α + βX_{t-1} + ε_{t-1}

This is called lagging the equation by one time period. Solving this equation for ε_{t-1} yields

ε_{t-1} = Y_{t-1} − α − βX_{t-1}

Now, multiply each side of this equation by the parameter ρ. Doing so yields

ρ ε_{t-1} = ρY_{t-1} − ρα − ρβX_{t-1}

Substituting this expression for ρ ε_{t-1} into equation (*) yields

Y_t = α + βX_t + ρY_{t-1} − ρα − ρβX_{t-1} + v_t




This can be written equivalently as

Y_t − ρY_{t-1} = (1 − ρ)α + β(X_t − ρX_{t-1}) + v_t

This can be written equivalently as

Y_t* = α* + βX_t* + v_t

where Y_t* = Y_t − ρY_{t-1} ;  α* = (1 − ρ)α ;  X_t* = X_t − ρX_{t-1}.

This is the transformed model. Note the following:

1. The slope coefficient of the transformed model, β, is the same as the slope coefficient of the original model.
2. The constant term in the original model is given by α = α*/(1 − ρ).
3. The error term in the transformed model, v_t, satisfies all of the assumptions of the error term in the classical
linear regression model.

Thus, if we run a regression of the transformed variable Y_t* on a constant and the transformed variable X_t* using the
OLS estimator, we can get a direct estimate of β and solve for the estimate of α. These estimates are not GLS
estimates and therefore are not BLUE. The problem is that when we create the transformed variables Y_t* and X_t* we
lose one observation, because Y_{t-1} and X_{t-1} are lagged one period. Therefore, we have n − 1 observations to estimate
the transformed model. In our example, we would lose the observation for the first year, which is 1959. It can be
shown that to preserve the first observation (t = 1), we can use the following.

Y_1* = (1 − ρ²)^{1/2} Y_1 ;  X_1* = (1 − ρ²)^{1/2} X_1


The GLS estimator involves the following steps.

1. Create the transformed variables Y_t* and X_t*.
2. Regress the transformed variable Y_t* on a constant and the transformed variable X_t* using the OLS estimator and
all n observations.

The resulting estimates are GLS estimates, which are BLUE.
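A sketch of the transformation when ρ is known: quasi-difference all observations after the first, rescale the first observation by (1 − ρ²)^{1/2}, and run OLS on the transformed constant column and the transformed regressor. Regressing on the transformed constant column (rather than a plain intercept) returns α directly. The value of ρ, the data, and the coefficient values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(10)
n, rho = 37, 0.7
x = np.linspace(100, 500, n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = rho * eps[t - 1] + rng.normal(scale=5.0)
y = 10.0 + 0.9 * x + eps

# Transformed variables: quasi-differences for t = 2..n, and the
# Prais-Winsten rescaling of the first observation so all n observations are kept.
y_star = np.empty(n)
c_star = np.empty(n)        # transformed constant regressor
x_star = np.empty(n)
scale = np.sqrt(1.0 - rho**2)
y_star[0], c_star[0], x_star[0] = scale * y[0], scale, scale * x[0]
y_star[1:] = y[1:] - rho * y[:-1]
c_star[1:] = 1.0 - rho
x_star[1:] = x[1:] - rho * x[:-1]

# OLS on the transformed model (the "constant" is the transformed column, not a plain 1)
Z = np.column_stack([c_star, x_star])
alpha_hat, beta_hat = np.linalg.lstsq(Z, y_star, rcond=None)[0]
print(f"alpha = {alpha_hat:.2f}, beta = {beta_hat:.3f}")
```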

Problems with Using the GLS Estimator


The major problem with the GLS estimator is that to use it you must know the true autocorrelation coefficient ρ. If
you don't know the value of ρ, then you can't create the transformed variables Y_t* and X_t*. However, the true value of
ρ is almost always unknown and unobservable. Thus, the GLS estimator is not a feasible estimator.

Feasible Generalized Least Squares (FGLS) Estimator

The GLS estimator requires that we know the value of ρ. To make the GLS estimator feasible, we can use the
sample data to obtain an estimate of ρ. When we do this, we have a different estimator. This estimator is called
the Feasible Generalized Least Squares Estimator, or FGLS estimator. The two most often used FGLS estimators are:

1. Cochrane-Orcutt estimator
2. Hildreth-Lu estimator
Example

Suppose that we have the following general linear regression model. For example, this may be the consumption
expenditures model.


Y_t = α + βX_t + ε_t   for t = 1, …, n

ε_t = ρ ε_{t-1} + v_t

Recall that the error term v_t satisfies the assumptions of the classical linear regression model. This statistical model
describes what we believe is the true underlying process that is generating the data.

Cochrane-Orcutt Estimator

To obtain FGLS estimates of α and β using the Cochrane-Orcutt estimator, proceed as follows.

Step #1: Regress Y_t on a constant and X_t using the OLS estimator.
Step #2: Calculate the residuals from this regression, ê_t.
Step #3: Regress ê_t on ê_{t-1} using the OLS estimator. Do not include a constant term in the regression.
This yields an estimate of ρ, denoted ρ̂.
Step #4: Use the estimate of ρ to create the transformed variables: Y_t* = Y_t − ρ̂Y_{t-1}, X_t* = X_t − ρ̂X_{t-1}.
Step #5: Regress the transformed variable Y_t* on a constant and the transformed variable X_t* using
the OLS estimator.
Step #6: Use the estimates of α and β from Step #5 to calculate a new set of residuals, ê_t.
Step #7: Repeat Step #2 through Step #6.
Step #8: Continue iterating Step #2 through Step #6 until the estimate of ρ from two successive iterations
differs by no more than some small predetermined value, such as 0.001.
Step #9: Use the final estimate of ρ to get the final estimates of α and β.
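A sketch of the iteration following these steps. The convergence tolerance of 0.001 is the one mentioned above; the data and the true parameter values used to simulate them are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 37
x = np.linspace(100, 500, n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(scale=5.0)
y = 10.0 + 0.9 * x + eps

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: initial OLS of y on a constant and x
alpha, beta = ols(np.column_stack([np.ones(n), x]), y)

rho_old = np.inf
for it in range(100):
    e = y - alpha - beta * x                          # Steps 2 / 6: current residuals
    rho = (e[1:] @ e[:-1]) / (e[:-1] @ e[:-1])        # Step 3: regress e_t on e_{t-1}, no constant
    y_star = y[1:] - rho * y[:-1]                     # Step 4: transformed variables
    x_star = x[1:] - rho * x[:-1]
    Xs = np.column_stack([np.ones(n - 1), x_star])
    alpha_star, beta = ols(Xs, y_star)                # Step 5: OLS on the transformed model
    alpha = alpha_star / (1.0 - rho)                  # recover the original intercept
    if abs(rho - rho_old) < 0.001:                    # Step 8: convergence check
        break
    rho_old = rho

print(f"rho = {rho:.3f}, alpha = {alpha:.2f}, beta = {beta:.3f}, iterations = {it + 1}")
```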

Hildreth-Lu Estimator

To obtain FGLS estimates of α and β using the Hildreth-Lu estimator, proceed as follows.

Step #1: Choose a value of ρ between −1 and 1.
Step #2: Use this value of ρ to create the transformed variables: Y_t* = Y_t − ρY_{t-1}, X_t* = X_t − ρX_{t-1}.
Step #3: Regress the transformed variable Y_t* on a constant and the transformed variable X_t* using
the OLS estimator.
Step #4: Calculate the residual sum of squares for this regression.
Step #5: Choose a different value of ρ between −1 and 1.
Step #6: Repeat Step #2 through Step #4.
Step #7: Repeat Step #5 and Step #6. By letting ρ vary between −1 and 1 in a systematic fashion, you get a
set of values for the residual sum of squares, one for each assumed value of ρ.
Step #8: Choose the value of ρ with the smallest residual sum of squares.
Step #9: Use this estimate of ρ to get the final estimates of α and β.
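A sketch of the grid search: try a systematic set of ρ values between −1 and 1, record the residual sum of squares of each transformed regression, and keep the ρ that minimises it. The grid spacing and the simulated data are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 37
x = np.linspace(100, 500, n)
eps = np.zeros(n)
for t in range(1, n):
    eps[t] = 0.7 * eps[t - 1] + rng.normal(scale=5.0)
y = 10.0 + 0.9 * x + eps

def rss_for(rho):
    """Residual sum of squares of the quasi-differenced regression for a given rho."""
    y_star = y[1:] - rho * y[:-1]
    x_star = x[1:] - rho * x[:-1]
    Xs = np.column_stack([np.ones(n - 1), x_star])
    beta, *_ = np.linalg.lstsq(Xs, y_star, rcond=None)
    resid = y_star - Xs @ beta
    return resid @ resid

grid = np.arange(-0.99, 1.0, 0.01)            # Steps 1, 5, 7: systematic values of rho
rss = np.array([rss_for(r) for r in grid])
best = grid[np.argmin(rss)]                   # Step 8: smallest residual sum of squares
print(f"Hildreth-Lu estimate of rho: {best:.2f}")
```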

Comparison of the Two Estimators

If there is more than one local minimum for the residual sum of squares function, the Cochrane-Orcutt estimator
may not find the global minimum, whereas the Hildreth-Lu estimator will. Most statistical
packages have both estimators. Some econometricians suggest that you estimate the model using both estimators
to make sure that the Cochrane-Orcutt estimator doesn't miss the global minimum.


Properties of the FGLS Estimator

If the model of autocorrelation that you assume is a reasonable approximation of the true autocorrelation, then the
FGLS estimator will yield more precise estimates than the OLS estimator. The estimates of the variances and
covariances of the parameter estimates will also be unbiased and consistent. However, if the model of
autocorrelation that you assume is not a reasonable approximation of the true autocorrelation, then the FGLS
estimator will yield worse estimates than the OLS estimator.

Generalizing the Model

The above examples assume that there is one explanatory variable and first-order autocorrelation. The model and
FGLS estimators can be easily generalized to the case of k explanatory variables and higher-order autocorrelation.


THE END



