CHAPTER 9

Multiple Equation Models

In Chapter 7 we analyzed univariate, autoregressive schemes, where a scalar variable is modeled in terms of its own past values. The AR(p) process, for example, is

$$y_t = m + \alpha_1 y_{t-1} + \alpha_2 y_{t-2} + \cdots + \alpha_p y_{t-p} + \epsilon_t$$

We now consider a column vector of $k$ different variables, $\mathbf{y}_t = [y_{1t} \;\; y_{2t} \;\; \cdots \;\; y_{kt}]'$, and model this in terms of past values of the vector. The result is a vector autoregression, or VAR. The VAR(p) process is

$$\mathbf{y}_t = \mathbf{m} + A_1\mathbf{y}_{t-1} + A_2\mathbf{y}_{t-2} + \cdots + A_p\mathbf{y}_{t-p} + \boldsymbol{\epsilon}_t \tag{9.1}$$

The $A_i$ are $k \times k$ matrices of coefficients, $\mathbf{m}$ is a $k \times 1$ vector of constants, and $\boldsymbol{\epsilon}_t$ is a vector white noise process, with the properties

$$E(\boldsymbol{\epsilon}_t) = \mathbf{0} \;\text{ for all } t \qquad E(\boldsymbol{\epsilon}_t\boldsymbol{\epsilon}_s') = \begin{cases} \Omega & s = t \\ \mathbf{0} & s \neq t \end{cases} \tag{9.2}$$

where the $\Omega$ covariance matrix is assumed to be positive definite. Thus the $\epsilon$'s are serially uncorrelated but may be contemporaneously correlated.

9.1 VECTOR AUTOREGRESSIONS (VARs)

9.1.1 A Simple VAR

To explain some of the basic features of VARs we will first consider the simple case where $k = 2$ and $p = 1$. This gives

$$\mathbf{y}_t = \begin{bmatrix} y_{1t} \\ y_{2t} \end{bmatrix} = \begin{bmatrix} m_1 \\ m_2 \end{bmatrix} + \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \epsilon_{1t} \\ \epsilon_{2t} \end{bmatrix} = \mathbf{m} + A\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t \tag{9.3}$$

or, written out explicitly,

$$y_{1t} = m_1 + a_{11}y_{1,t-1} + a_{12}y_{2,t-1} + \epsilon_{1t}$$
$$y_{2t} = m_2 + a_{21}y_{1,t-1} + a_{22}y_{2,t-1} + \epsilon_{2t}$$

Thus, as in all VARs, each variable is expressed as a linear combination of lagged values of itself and lagged values of all other variables in the group. In practice the VAR equations may be expanded to include deterministic time trends and other exogenous variables, but we ignore these for simplicity of exposition. As may be expected from the univariate case, the behavior of the y's will depend on the properties of the $A$ matrix. Let the eigenvalues and eigenvectors of the $A$ matrix be

$$\Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} \qquad C = [\mathbf{c}_1 \;\; \mathbf{c}_2]$$

Provided the eigenvalues are distinct, the eigenvectors will be linearly independent and $C$ will be nonsingular. It then follows that

$$C^{-1}AC = \Lambda \qquad\text{and}\qquad A = C\Lambda C^{-1} \tag{9.4}$$

Define a new vector of variables $\mathbf{z}_t$ as

$$\mathbf{z}_t = C^{-1}\mathbf{y}_t \qquad\text{or}\qquad \mathbf{y}_t = C\mathbf{z}_t \tag{9.5}$$

Premultiplying Eq. (9.3) by $C^{-1}$ and simplifying gives

$$\mathbf{z}_t = \mathbf{m}^* + \Lambda\mathbf{z}_{t-1} + \boldsymbol{\eta}_t \tag{9.6}$$

where $\mathbf{m}^* = C^{-1}\mathbf{m}$ and $\boldsymbol{\eta}_t = C^{-1}\boldsymbol{\epsilon}_t$, which is a white noise vector. Thus

$$z_{1t} = m_1^* + \lambda_1 z_{1,t-1} + \eta_{1t}$$
$$z_{2t} = m_2^* + \lambda_2 z_{2,t-1} + \eta_{2t}$$

Each $z$ variable follows a separate AR(1) scheme and is stationary, I(0), if the eigenvalue has modulus less than 1; is a random walk with drift, I(1), if the eigenvalue is 1; and is explosive if the eigenvalue exceeds 1 in numerical value. Explosive series may be ruled out as economically irrelevant. We will now consider various possible combinations of $\lambda_1$ and $\lambda_2$.

Case 1. $|\lambda_1| < 1$ and $|\lambda_2| < 1$. Each $z$ is then I(0). Because Eq. (9.5) shows that each $y$ is a linear combination of the $z$'s, it follows that each $y$ is I(0), which is written as $\mathbf{y}$ is I(0). Standard inference procedures apply to the VAR as formulated in Eq. (9.3), since all the variables are stationary. It also makes sense to investigate the static equilibrium of the system. Setting the disturbance vector in Eq. (9.3) to zero and assuming the existence of an equilibrium vector $\bar{\mathbf{y}}$ gives

$$(I - A)\bar{\mathbf{y}} = \mathbf{m} \qquad\text{or}\qquad \Pi\bar{\mathbf{y}} = \mathbf{m} \tag{9.7}$$

where $\Pi = I - A$. This equation will solve for a unique, nonzero $\bar{\mathbf{y}}$ if the $\Pi$ matrix is nonsingular. From the results on matrix algebra in Appendix A we have the following:

- The eigenvalues $\mu$ of $\Pi$ are the complements of the eigenvalues $\lambda$ of $A$; that is, $\mu_i = 1 - \lambda_i$.
- The eigenvectors of $\Pi$ are the same as those of $A$.

Thus $\Pi$ is nonsingular in this case, and a unique static equilibrium, $\bar{\mathbf{y}} = \Pi^{-1}\mathbf{m}$, exists. The values of $\lambda$ ensure that deviations from the equilibrium vector are transient and tend to die out with time.
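To see Case 1 concretely, the following minimal numpy sketch checks that both eigenvalues lie inside the unit circle and solves Eq. (9.7) for the static equilibrium. The coefficient matrix A and intercept m are illustrative values, not taken from the text.

```python
import numpy as np

# Illustrative stationary VAR(1): y_t = m + A y_{t-1} + e_t.
A = np.array([[0.5, 0.2],
              [0.1, 0.4]])
m = np.array([1.0, 2.0])

# Both eigenvalues of A have modulus < 1, so y is I(0).
print("eigenvalues of A:", np.linalg.eigvals(A))    # 0.6 and 0.3

# Static equilibrium from Eq. (9.7): Pi ybar = m with Pi = I - A.
Pi = np.eye(2) - A
ybar = np.linalg.solve(Pi, m)
print("equilibrium ybar:", ybar)

# The eigenvalues of Pi are the complements 1 - lambda_i of those of A.
print("eigenvalues of Pi:", np.linalg.eigvals(Pi))  # 0.4 and 0.7
```

Simulated paths from such a system fluctuate around $\bar{\mathbf{y}}$, in line with the claim that deviations from equilibrium are transient.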
Case 2. $\lambda_1 = 1$ and $|\lambda_2| < 1$. Now $z_1$ is I(1), being a random walk with drift, and $z_2$ is I(0). Each $y$ is thus I(1), since it is a linear combination of an I(1) variable and an I(0) variable. We then write $\mathbf{y}$ is I(1). It does not now make sense to look for a static equilibrium relation between $\bar{y}_1$ and $\bar{y}_2$, but it is meaningful to ask if there is a cointegrating relation between $y_{1t}$ and $y_{2t}$. Such a relation is readily found. The second (bottom) row in Eq. (9.5) gives

$$z_{2t} = \mathbf{c}^{(2)}\mathbf{y}_t \tag{9.8}$$

where $\mathbf{c}^{(2)}$ is the bottom row in $C^{-1}$. Thus $z_{2t}$ is a linear combination of I(1) variables but is itself a stationary, I(0) variable. The cointegrating vector annihilates the I(1) component in $\mathbf{y}_t$. This result may be made explicit by writing Eq. (9.5) as

$$\mathbf{y}_t = \mathbf{c}_1 z_{1t} + \mathbf{c}_2 z_{2t}$$

Premultiplying across this equation by the row vector $\mathbf{c}^{(2)}$ then gives $\mathbf{c}^{(2)}\mathbf{y}_t = z_{2t}$, because the properties of nonsingular matrices give $\mathbf{c}^{(2)}\mathbf{c}_1 = 0$ and $\mathbf{c}^{(2)}\mathbf{c}_2 = 1$.

The cointegrating relation may also be shown in terms of the $\Pi$ matrix defined in Eq. (9.7). Reparameterize Eq. (9.3) as

$$\Delta\mathbf{y}_t = \mathbf{m} - \Pi\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t \tag{9.9}$$

The eigenvalues of $\Pi$ are zero and $(1 - \lambda_2)$. Thus $\Pi$ is a singular matrix with rank equal to one. Since it shares eigenvectors with $A$ we have

$$\Pi = C(I - \Lambda)C^{-1} = \begin{bmatrix} c_{12} \\ c_{22} \end{bmatrix}(1 - \lambda_2)\,\mathbf{c}^{(2)} \tag{9.10}$$

Thus $\Pi$, which is of rank one, has been factorized into the product of a column vector and a row vector. This is termed an outer product. The row vector is the cointegrating vector already defined, and the column vector gives the weights with which the cointegrating relation enters into each equation of the VAR. This explanation may be seen more clearly by combining Eqs. (9.9) and (9.10) to get

$$\Delta y_{1t} = m_1 - c_{12}(1 - \lambda_2)z_{2,t-1} + \epsilon_{1t}$$
$$\Delta y_{2t} = m_2 - c_{22}(1 - \lambda_2)z_{2,t-1} + \epsilon_{2t} \tag{9.11}$$

This reformulation of the VAR equations is expressed in terms of first differences and levels, all of which are I(0). It can be regarded as an error correction formulation of the VAR, since $z_{2,t-1}$ measures the extent to which $y_{1,t-1}$ and $y_{2,t-1}$ deviate from the long-run cointegrating relation.

NUMERICAL EXAMPLE. Consider the system

$$y_{1t} = 1.2y_{1,t-1} - 0.2y_{2,t-1} + \epsilon_{1t}$$
$$y_{2t} = 0.6y_{1,t-1} + 0.4y_{2,t-1} + \epsilon_{2t} \tag{9.12}$$

where $\mathbf{m}$ has been set at zero. The eigenvalues of $A$ come from the solution of

$$\begin{vmatrix} a_{11} - \lambda & a_{12} \\ a_{21} & a_{22} - \lambda \end{vmatrix} = 0$$

Thus the eigenvalues satisfy

$$\lambda_1 + \lambda_2 = \operatorname{tr} A = 1.6 \qquad \lambda_1\lambda_2 = |A| = 0.6$$

giving $\lambda_1 = 1$ and $\lambda_2 = 0.6$. The eigenvector corresponding to the first root is obtained from

$$\begin{bmatrix} 0.2 & -0.2 \\ 0.6 & -0.6 \end{bmatrix}\begin{bmatrix} c_{11} \\ c_{21} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

The eigenvector is determined only up to a scale factor. By letting $c_{21} = 1$, the first eigenvector is $\mathbf{c}_1 = [1 \;\; 1]'$. Similarly the second eigenvector is $\mathbf{c}_2 = [1 \;\; 3]'$. Thus,

$$C = \begin{bmatrix} 1 & 1 \\ 1 & 3 \end{bmatrix} \qquad\text{and}\qquad C^{-1} = \frac{1}{2}\begin{bmatrix} 3 & -1 \\ -1 & 1 \end{bmatrix}$$

Equation (9.12) may be rewritten as

$$\Delta y_{1t} = 0.2y_{1,t-1} - 0.2y_{2,t-1} + \epsilon_{1t}$$
$$\Delta y_{2t} = 0.6y_{1,t-1} - 0.6y_{2,t-1} + \epsilon_{2t}$$

with

$$\Pi = \begin{bmatrix} 0.4 \\ 1.2 \end{bmatrix}\begin{bmatrix} -0.5 & 0.5 \end{bmatrix} \qquad z_{2t} = -0.5y_{1t} + 0.5y_{2t}$$

which is the numerical version of Eq. (9.11). The factorization of the $\Pi$ matrix is not unique. Multiplication of the first vector by an arbitrary constant, followed by multiplication of the second by its reciprocal, leaves $\Pi$ unchanged. Thus the cointegrating vector may be written as $z_t = y_{1t} - y_{2t}$, with an appropriate adjustment to the weighting vector.
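All of the numbers in this example can be checked mechanically. The sketch below (a numpy illustration; the seed and sample size are arbitrary choices, not from the text) reproduces the eigenvalues and the rank-one $\Pi$, then simulates the system to show that each $y$ wanders while the cointegrating combination $y_1 - y_2$ stays bounded.

```python
import numpy as np

# The system of Eq. (9.12): A = [[1.2, -0.2], [0.6, 0.4]], m = 0.
A = np.array([[1.2, -0.2],
              [0.6,  0.4]])
print("eigenvalues:", np.linalg.eigvals(A))       # 1.0 and 0.6

# Pi = I - A is singular with rank one: one cointegrating vector.
Pi = np.eye(2) - A
print("rank of Pi:", np.linalg.matrix_rank(Pi))   # 1

# Simulate: each y is I(1), but z = y1 - y2 follows the stable AR(1)
# z_t = 0.6 z_{t-1} + (e1t - e2t) and so is I(0).
rng = np.random.default_rng(0)
y = np.zeros((500, 2))
for t in range(1, 500):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)
z = y[:, 0] - y[:, 1]
print("sample std of y1:", y[:, 0].std())         # large, grows with T
print("sample std of z: ", z.std())               # stays of modest size
```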
Case 3. $\lambda_1 = \lambda_2 = 1$. This case does not yield to the same analysis as the previous two cases, for there are not two linearly independent eigenvectors corresponding to the repeated eigenvalue, so there is then no nonsingular matrix $C$ to diagonalize $A$ as in Eq. (9.4). As an illustration, consider the matrix

$$A = \begin{bmatrix} 0.8 & -0.4 \\ 0.1 & 1.2 \end{bmatrix}$$

It is easily verified that there is a unit eigenvalue of multiplicity two; that is, $\lambda_1 = \lambda_2 = 1$. The equation $(A - \lambda I)\mathbf{c} = \mathbf{0}$ gives

$$\begin{bmatrix} -0.2 & -0.4 \\ 0.1 & 0.2 \end{bmatrix}\begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Normalizing by setting $c_2 = 1$ gives the eigenvector associated with the unit eigenvalue as $\mathbf{c}_1 = [-2 \;\; 1]'$. But we are missing a second, linearly independent eigenvector. The source of the difficulty is that, in general, $A$ is not symmetric. If it were symmetric, there would be two linearly independent eigenvectors associated with the repeated root. Although $A$ cannot be diagonalized, it is possible to find a nonsingular matrix $P$ such that

$$P^{-1}AP = J \qquad A = PJP^{-1} \tag{9.13}$$

where

$$J = \begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix} \tag{9.14}$$

is the Jordan matrix for an eigenvalue $\lambda$ of $A$ with multiplicity two.¹ Now define

$$\mathbf{z}_t = P^{-1}\mathbf{y}_t \qquad \mathbf{y}_t = P\mathbf{z}_t \tag{9.15}$$

Substituting for $\mathbf{y}_t$ from Eq. (9.3) and simplifying gives

$$\mathbf{z}_t = J\mathbf{z}_{t-1} + \mathbf{m}^* + \boldsymbol{\eta}_t \tag{9.16}$$

where $\mathbf{m}^* = P^{-1}\mathbf{m}$ and $\boldsymbol{\eta}_t = P^{-1}\boldsymbol{\epsilon}_t$. Spelling Eq. (9.16) out in detail, we write

$$z_{1t} = \lambda z_{1,t-1} + z_{2,t-1} + m_1^* + \eta_{1t}$$
$$z_{2t} = \lambda z_{2,t-1} + m_2^* + \eta_{2t} \tag{9.17}$$

By substituting the unit eigenvalue, these equations become

$$(1 - L)z_{1t} = z_{2,t-1} + m_1^* + \eta_{1t}$$
$$(1 - L)z_{2t} = m_2^* + \eta_{2t}$$

Multiplying through the first equation by $(1 - L)$ produces

$$(1 - L)^2 z_{1t} = m_2^* + (\eta_{1t} - \eta_{1,t-1} + \eta_{2,t-1})$$

Thus $z_{1t}$ is an I(2) series and $z_{2t}$ is I(1). Consequently each $y$ variable is I(2). It is of interest to calculate the $P$ matrix. From Eq. (9.13) it must, in general, satisfy

$$A[\mathbf{p}_1 \;\; \mathbf{p}_2] = [\mathbf{p}_1 \;\; \mathbf{p}_2]\begin{bmatrix} \lambda & 1 \\ 0 & \lambda \end{bmatrix}$$

that is,

$$A\mathbf{p}_1 = \lambda\mathbf{p}_1 \qquad\text{and}\qquad A\mathbf{p}_2 = \mathbf{p}_1 + \lambda\mathbf{p}_2$$

The first equation obviously gives the sole eigenvector already determined, namely $\mathbf{p}_1 = \mathbf{c}_1 = [-2 \;\; 1]'$. The second equation becomes $(A - I)\mathbf{p}_2 = \mathbf{p}_1$. Solving produces $\mathbf{p}_2 = [8 \;\; 1]'$, and so

$$P = \begin{bmatrix} -2 & 8 \\ 1 & 1 \end{bmatrix} \qquad P^{-1} = \begin{bmatrix} -0.1 & 0.8 \\ 0.1 & 0.2 \end{bmatrix}$$

It can easily be verified that these matrices satisfy Eq. (9.13). Finally we look for a possible cointegrating vector. Since

$$\mathbf{y}_t = \mathbf{p}_1 z_{1t} + \mathbf{p}_2 z_{2t}$$

we need a vector orthogonal to $\mathbf{p}_1$ in order to eliminate the $z_1$ component of $\mathbf{y}$. Clearly the bottom row of $P^{-1}$ does the trick. The cointegrating vector gives a linear combination of I(2) variables that is I(1). In this case the cointegrating variable is not stationary; but it satisfies the general definition of cointegration, which is that a vector of I(d) variables is said to be cointegrated of order (d, b), written CI(d, b), if a linear combination exists that is I(d - b) for positive b. In this case $\mathbf{y}$ is CI(2, 1). It may also be seen that the $\Pi$ matrix has rank one and the bottom row of $\Pi$ also gives the cointegrating vector. All the variables in the VAR are nonstationary, as are all the first differences of these variables, so inference procedures are nonstandard in either case.

¹See Appendix A.
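The Jordan decomposition claimed for this example is easy to verify numerically. Here is a short numpy check, with $P$ and $J$ exactly as derived above:

```python
import numpy as np

# Case 3 example: repeated unit eigenvalue, so A cannot be diagonalized.
A = np.array([[0.8, -0.4],
              [0.1,  1.2]])
print("eigenvalues:", np.linalg.eigvals(A))    # both 1, up to rounding

# P = [p1 p2] with A p1 = p1 and A p2 = p1 + p2, as derived in the text.
P = np.array([[-2.0, 8.0],
              [ 1.0, 1.0]])
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Pinv = np.linalg.inv(P)
print("P^{-1}:\n", Pinv)                       # [[-0.1, 0.8], [0.1, 0.2]]

# Verify Eq. (9.13): P^{-1} A P = J.
print("P^{-1} A P = J ?", np.allclose(Pinv @ A @ P, J))   # True

# The bottom row of P^{-1} is orthogonal to p1, so it annihilates the
# I(2) component z1 and picks out z2, the I(1) cointegrating variable.
print("orthogonality:", Pinv[1] @ P[:, 0])     # 0.0
```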
9.1.2 A Three-Variable VAR

We still retain the assumption of a first-order VAR but expand the system to three variables. Suppose the eigenvalues of the $A$ matrix are $\lambda_1 = 1$, $|\lambda_2| < 1$, and $|\lambda_3| < 1$. Defining $\mathbf{z}_t = C^{-1}\mathbf{y}_t$ as before, $z_1$ is then I(1) while $z_2$ and $z_3$ are I(0), so each $y$ variable is I(1). The eigenvalues of $\Pi = I - A$ are $\mu_1 = 0$, $\mu_2 = 1 - \lambda_2$, and $\mu_3 = 1 - \lambda_3$, and $\Pi$ may be written

$$\Pi = C(I - \Lambda)C^{-1} = \begin{bmatrix} \mu_2\mathbf{c}_2 & \mu_3\mathbf{c}_3 \end{bmatrix}\begin{bmatrix} \mathbf{c}^{(2)} \\ \mathbf{c}^{(3)} \end{bmatrix} \tag{9.19}$$

Thus $\Pi$ splits into the product of a $3 \times 2$ matrix of rank two and a $2 \times 3$ matrix, also of rank two. The latter matrix contains the two cointegrating vectors, and the former gives the weights with which both cointegrating vectors enter into the error correction formulation for each $\Delta y_i$. The full set of equations, obtained by substitution in Eq. (9.9), is

$$\Delta y_{1t} = m_1 - (\mu_2 c_{12})z_{2,t-1} - (\mu_3 c_{13})z_{3,t-1} + \epsilon_{1t}$$
$$\Delta y_{2t} = m_2 - (\mu_2 c_{22})z_{2,t-1} - (\mu_3 c_{23})z_{3,t-1} + \epsilon_{2t} \tag{9.20}$$
$$\Delta y_{3t} = m_3 - (\mu_2 c_{32})z_{2,t-1} - (\mu_3 c_{33})z_{3,t-1} + \epsilon_{3t}$$

More compactly, repeating Eq. (9.9), we write

$$\Delta\mathbf{y}_t = \mathbf{m} - \Pi\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t$$

The factorization of $\Pi$ is written

$$\Pi = \alpha\beta' \tag{9.21}$$

where $\alpha$ and $\beta$ are $3 \times 2$ matrices of rank two.² The rank of $\Pi$ is two, and there are two cointegrating vectors, shown as the rows of $\beta'$. Substituting Eq. (9.21) in Eq. (9.9) gives

$$\Delta\mathbf{y}_t = \mathbf{m} - \alpha\beta'\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t = \mathbf{m} - \alpha\mathbf{z}_{t-1} + \boldsymbol{\epsilon}_t \tag{9.22}$$

where $\mathbf{z}_{t-1} = \beta'\mathbf{y}_{t-1}$ contains the two cointegrating variables.

²This notation departs from our convention of using uppercase letters for matrices, but it has become embedded in the cointegration literature.

Before leaving the three-variable case, suppose that the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1$, and $|\lambda_3| < 1$. If we follow the development in the foregoing Case 3, it is possible to find a nonsingular matrix $P$ such that $P^{-1}AP = J$, where the Jordan matrix is now

$$J = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & \lambda_3 \end{bmatrix}$$

Defining a three-element vector $\mathbf{z}_t = P^{-1}\mathbf{y}_t$, it follows that $z_1$ is I(2), $z_2$ is I(1), and $z_3$ is I(0). In general all three $y$ variables are then I(2), and we may write

$$\mathbf{y}_t = \mathbf{p}_1 z_{1t} + \mathbf{p}_2 z_{2t} + \mathbf{p}_3 z_{3t}$$

Premultiplying by the second row of $P^{-1}$, namely $\mathbf{p}^{(2)}$, will annihilate both $z_1$ and $z_3$, giving $\mathbf{p}^{(2)}\mathbf{y}_t = z_{2t}$, which is I(1). Similarly, premultiplying by $\mathbf{p}^{(3)}$ gives $\mathbf{p}^{(3)}\mathbf{y}_t = z_{3t}$, which is I(0). Thus, there are two cointegrating vectors, but only one produces a stationary linear combination of the $y$'s. The reason is that $\mathbf{y}$ is I(2). The empirical data, however, suggest that most economic series are either I(1) or I(0).

Having a system of I(1) variables is possible even though there are multiple unit eigenvalues. Consider, for example, the matrix

$$A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & a \end{bmatrix}$$

The first two elements in the last row are set at one for simplicity, for the only crucial element in this row is $a$. Clearly the eigenvalues are $\lambda_1 = 1$, $\lambda_2 = 1$, and $\lambda_3 = a$, where the last eigenvalue is assumed to have modulus less than one. The first two $y$ variables are random walks with drift, and thus I(1), and the third equation in the VAR connects all three variables so that the third $y$ is also I(1). The $\Pi$ matrix is

$$\Pi = I - A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & -1 & 1-a \end{bmatrix}$$

The rank of $\Pi$ is one, and it may be factorized as

$$\Pi = \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}\begin{bmatrix} 1 & 1 & a-1 \end{bmatrix}$$

where the row vector is the cointegrating vector. This result may be seen by defining $z_t = y_{1t} + y_{2t} + (a-1)y_{3t}$ and substituting the third VAR equation, $y_{3t} = y_{1,t-1} + y_{2,t-1} + ay_{3,t-1} + m_3 + \epsilon_{3t}$, to obtain

$$z_t = y_{1t} + y_{2t} + (a-1)(y_{1,t-1} + y_{2,t-1} + ay_{3,t-1} + m_3 + \epsilon_{3t})$$
$$= \Delta y_{1t} + \Delta y_{2t} + az_{t-1} + (a-1)m_3 + (a-1)\epsilon_{3t}$$
$$= \text{constant} + az_{t-1} + v_t$$

using $\Delta y_{1t} = m_1 + \epsilon_{1t}$ and $\Delta y_{2t} = m_2 + \epsilon_{2t}$, where $v_t = \epsilon_{1t} + \epsilon_{2t} + (a-1)\epsilon_{3t}$ is a white noise series. Thus $z_t$ follows a stable AR(1) process and is I(0).

9.1.3 Higher-Order Systems

So far we have only looked at first-order systems, but these have sufficed to illustrate the basic ideas. The extension to higher-order systems is fairly simple and may be illustrated with a second-order system,

$$\mathbf{y}_t = \mathbf{m} + A_1\mathbf{y}_{t-1} + A_2\mathbf{y}_{t-2} + \boldsymbol{\epsilon}_t \tag{9.23}$$

Subtracting $\mathbf{y}_{t-1}$ from each side gives

$$\Delta\mathbf{y}_t = \mathbf{m} + (A_1 - I)\mathbf{y}_{t-1} + A_2\mathbf{y}_{t-2} + \boldsymbol{\epsilon}_t$$

Adding and subtracting $(A_1 - I)\mathbf{y}_{t-2}$ on the right side and simplifying gives

$$\Delta\mathbf{y}_t = \mathbf{m} + (A_1 - I)\Delta\mathbf{y}_{t-1} - \Pi\mathbf{y}_{t-2} + \boldsymbol{\epsilon}_t \tag{9.24}$$

where $\Pi = I - A_1 - A_2$. An alternative reparameterization is

$$\Delta\mathbf{y}_t = \mathbf{m} - A_2\Delta\mathbf{y}_{t-1} - \Pi\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t \tag{9.25}$$

Thus, in the first difference reformulation of a second-order system, there will be one lagged first difference term on the right-hand side. The levels term may be lagged one period or two. If we proceed in this way, the VAR(p) system defined in Eq. (9.1) may be reparameterized as

$$\Delta\mathbf{y}_t = \mathbf{m} + B_1\Delta\mathbf{y}_{t-1} + \cdots + B_{p-1}\Delta\mathbf{y}_{t-p+1} - \Pi\mathbf{y}_{t-1} + \boldsymbol{\epsilon}_t \tag{9.26}$$

where the $B$'s are functions of the $A$'s and $\Pi = I - A_1 - \cdots - A_p$. As shown in Appendix 9.2, the behavior of the $\mathbf{y}$ vector depends on the values of $\lambda$ that solve

$$|\lambda^p I - \lambda^{p-1}A_1 - \cdots - \lambda A_{p-1} - A_p| = 0$$

Ruling out explosive roots, we must consider three possibilities (a numerical illustration follows the list):

1. Rank$(\Pi) = k$. If each root has modulus less than one, $\Pi$ will have full rank and be nonsingular. All the $y$ variables in Eq. (9.1) will be I(0), and unrestricted OLS estimates of Eq. (9.1) or Eq. (9.26) will yield identical inferences about the parameters.
2. Rank$(\Pi) = r < k$. This situation will occur if there is a unit root with multiplicity $(k - r)$ and the remaining $r$ roots are numerically less than one. The $\mathbf{y}$ vector will be I(1) or higher, and $\Pi$ may be expressed, following Eq. (9.21), as the outer product of two $(k \times r)$ matrices, each of rank $r$. The right-hand side of Eq. (9.26) then contains $r$ cointegrating variables.
3. Rank$(\Pi) = 0$. This case is rather special. It will only occur if $A_1 + \cdots + A_p = I$, in which case $\Pi = \mathbf{0}$ and Eq. (9.26) shows that the VAR should be specified solely in terms of first differences of the variables.
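As a numerical sketch of this classification (the coefficient matrices A1 and A2 below are illustrative, not from the text), the following code forms $\Pi = I - A_1 - A_2$ for a two-variable VAR(2), checks its rank, and recovers the roots of the characteristic polynomial as eigenvalues of the companion matrix:

```python
import numpy as np

# Illustrative VAR(2) coefficient matrices (not from the text).
A1 = np.array([[0.6, 0.1],
               [0.0, 0.7]])
A2 = np.array([[0.2, -0.1],
               [0.0,  0.3]])
k = A1.shape[0]

# Pi = I - A1 - A2, as in Eqs. (9.24)-(9.26).
Pi = np.eye(k) - A1 - A2
r = int(np.linalg.matrix_rank(Pi))
print("Pi =\n", Pi)
print("rank of Pi:", r)       # here r = 1

# The roots solving |lam^2 I - lam A1 - A2| = 0 are the eigenvalues of
# the companion matrix of the stacked first-order form.
companion = np.block([[A1, A2],
                      [np.eye(k), np.zeros((k, k))]])
print("roots:", np.linalg.eigvals(companion))
# one unit root; all others inside the unit circle
```

With one unit root this system falls under the second possibility: $\Pi$ factorizes as the outer product of two $2 \times 1$ vectors, and Eq. (9.26) contains a single cointegrating variable.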
9.2 ESTIMATION OF VARs

There are two approaches to the estimation of VARs. One is the direct estimation of the system set out in Eq. (9.1) or in the alternative reparameterization of Eq. (9.26). From the argument of the previous section, direct estimation is appropriate if all the roots are numerically less than one, so that $\Pi$ is nonsingular and all the variables are stationary. The second approach, which is appropriate when the $y$ variables are not stationary, is to determine the number $r$ of cointegrating vectors.
