
Q1 Murti and Sastri (Econometrica, vol. 25, no. 2) investigated the production characteristics of various Indian industries, including cotton and sugar. They specified Cobb-Douglas production functions for output (Q) as a double-log function of labor (L) and capital (K):

lnQi = β0 + β1·lnLi + β2·lnKi + εi

and obtained the following estimates (standard errors in parentheses):
Industry    β0      β1        β2        R²
Cotton      0.97    0.92      0.12      .98
                    (0.03)    (0.04)
Sugar       2.70    0.59      0.33      .80
                    (0.14)    (0.17)

a What are the elasticities of output with respect to labor and capital for each industry?

In the double-log equation, β1 represents the elasticity of output with respect to labor and β2 represents the elasticity of output with respect to capital.
For cotton, the elasticity of output with respect to labor is 0.92 and the elasticity of output with respect to capital is 0.12.
For sugar, the elasticity of output with respect to labor is 0.59 and the elasticity of output with respect to capital is 0.33.
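To see why the slope coefficients can be read as elasticities, a one-line check from the double-log form:

$$\beta_1 = \frac{\partial \ln Q}{\partial \ln L} = \frac{\partial Q}{\partial L}\cdot\frac{L}{Q},$$

the percentage change in output per one-percent change in labor; the same argument applied to lnKi gives β2 as the capital elasticity.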
b What economic significance does the sum (β1 + β2) have?
The sum of the elasticities measures returns to scale, i.e., the combined effect on output of a proportional change in labor and capital: β1 + β2 = 1 implies constant returns to scale, β1 + β2 > 1 increasing returns, and β1 + β2 < 1 decreasing returns. For cotton the sum is 0.92 + 0.12 = 1.04, roughly constant returns; for sugar it is 0.59 + 0.33 = 0.92, mildly decreasing returns.
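This reading follows from scaling both inputs by a common factor λ in the Cobb-Douglas form (a quick check):

$$Q(\lambda L, \lambda K) = e^{\beta_0}(\lambda L)^{\beta_1}(\lambda K)^{\beta_2} = \lambda^{\beta_1+\beta_2}\,Q(L,K),$$

so output scales by λ^(β1+β2) when every input scales by λ.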

Q2

The following model allows the return to education to depend upon the total amount of both parents' education, called pareduc:

log(wage) = β0 + β1·educ + β2·educ·pareduc + β3·exper + β4·tenure + u

a Show that, in decimal form, the return to another year of education in this model is ∂log(wage)/∂educ = β1 + β2·pareduc. What sign do you expect for β2? Why?
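A sketch of the requested derivation, differentiating the model with respect to educ while holding exper and tenure fixed:

$$\frac{\partial \log(wage)}{\partial educ} = \beta_1 + \beta_2\,pareduc.$$

Because the dependent variable is in logs, this derivative is the approximate proportionate (decimal) return to another year of education. One would plausibly expect β2 > 0: higher parental education is likely to raise the payoff to a child's own schooling, so the return to educ increases with pareduc.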

b Using sample data, the estimated equation from a. is

log(wage)_hat = 5.65 + .047 educ + .00078 educ·pareduc + .019 exper + .010 tenure
               (.13)  (.010)      (.00021)              (.004)       (.003)
n = 722, R² = .169.

Interpret the coefficient on the interaction term. It might help to choose two specific values for pareduc, for example pareduc = 32 if both parents have a college education, or pareduc = 24 if both parents have a high school education, and to compare the estimated return to educ.
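A worked comparison at the two suggested parental-education levels (simple arithmetic from the estimates above):

$$.047 + .00078(32) \approx .072 \qquad\text{vs.}\qquad .047 + .00078(24) \approx .066,$$

so the estimated return to a year of education is about 7.2% when both parents went to college and about 6.6% when both finished high school, a gap of roughly 0.6 percentage points per year of schooling.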

Q3) Write the transformed equation that has a homoscedastic error term

Consider a linear model to explain monthly beer consumption

beer = β0 + β1·inc + β2·price + β3·educ + β4·female + u
E(u|inc, price, educ, female) = 0
Var(u|inc, price, educ, female) = σ²·inc²

Here h(x) = inc², so dividing every term by √h(x) = inc yields an error term with constant variance:

beer/inc = β0·(1/inc) + β1 + β2·(price/inc) + β3·(educ/inc) + β4·(female/inc) + u/inc

since Var(u/inc | inc, price, educ, female) = σ²·inc²/inc² = σ².
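In practice this estimate can be obtained either by weighted least squares or by running OLS on the transformed variables. A minimal Stata sketch, assuming beer, inc, price, educ, and female are loaded in memory (all variable names here are illustrative):

. * WLS with weights 1/inc^2, matching Var(u|x) = sigma^2 * inc^2
. generate w = 1/(inc^2)
. regress beer inc price educ female [aweight=w]

. * Equivalent OLS on the transformed equation; note the roles swap:
. * the intercept of this regression estimates beta1, and the
. * coefficient on invinc estimates beta0
. generate beer_t  = beer/inc
. generate invinc  = 1/inc
. generate price_t = price/inc
. generate educ_t  = educ/inc
. generate fem_t   = female/inc
. regress beer_t invinc price_t educ_t fem_t

Both runs produce identical coefficient estimates; the transformed version simply makes the homoskedastic error u/inc explicit.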

Q4) When an important variable is omitted from the model, the model suffers from misspecification (omitted-variable) bias. In that case, the coefficients on the included explanatory variables are biased and inconsistent, whether WLS or OLS is used to estimate the model.

WLS estimators may have more or less bias than OLS, depending on the degree of correlation between the error term, which now contains the omitted variable, and the included explanatory variables.

So it would be false to state that WLS is preferred to OLS when an important variable has been omitted from the model.
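For intuition, the direction of the problem follows the standard omitted-variable bias formula for OLS in the two-regressor case (a sketch; x2 denotes the omitted variable and δ̃1 the slope from regressing x2 on the included x1):

$$E(\hat\beta_1) = \beta_1 + \beta_2\,\tilde\delta_1.$$

The bias is nonzero whenever the omitted variable matters (β2 ≠ 0) and is correlated with the included regressor (δ̃1 ≠ 0), and reweighting the observations, as WLS does, does not remove it.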

5.
log: C:\Users\Birens computer\Desktop\q5 ps2.smcl
log type: smcl
opened on: 16 Mar 2017, 12:43:02
. use "C:\Users\Alitiya\Downloads\STOCK7.dta", clear

. reg PE EARN DIV BETA

      Source |       SS        df       MS            Number of obs =      65
-------------+------------------------------         F(3, 61)      =    4.78
       Model |      172.896     3  57.6320001        Prob > F      =  0.0047
    Residual |   735.371549    61  12.0552713        R-squared     =  0.1904
-------------+------------------------------         Adj R-squared =  0.1505
       Total |    908.26755    64  14.1916805        Root MSE      =  3.4721

          PE |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        EARN |   8.790791   8.708454     1.01   0.317    -8.622838    26.20442
         DIV |   39.84828   12.78277     3.12   0.003     14.28754    65.40901
        BETA |  -2.753799   1.660758    -1.66   0.102     -6.07469    .5670911
       _cons |   15.71915   1.826246     8.61   0.000     12.06734    19.37095

. generate logPE = log(PE)

. reg logPE EARN DIV BETA

      Source |       SS        df       MS            Number of obs =      65
-------------+------------------------------         F(3, 61)      =    6.13
       Model |   .962145572     3  .320715191        Prob > F      =  0.0010
    Residual |   3.19398494    61  .052360409        R-squared     =  0.2315
-------------+------------------------------         Adj R-squared =  0.1937
       Total |   4.15613051    64  .064939539        Root MSE      =  .22882

       logPE |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        EARN |   .8286458   .5739236     1.44   0.154    -.3189853    1.976277
         DIV |   2.807713   .8424384     3.33   0.001     1.123153    4.492272
        BETA |  -.2175845   .1094509    -1.99   0.051    -.4364451    .0012761
       _cons |   2.740957   .1203573    22.77   0.000     2.500288    2.981626

. * (a) The log specification explains more of the variation: the level regression explains only 19% of the variation in PE (R-squared = .1904), while the log regression explains 23% (R-squared = .2315), and the constant also decreased (though its scale changes under the log transform), which suggests the log functional form fits better. The magnitudes of the coefficients on EARN and DIV fell sharply while the coefficient on BETA rose slightly in magnitude, but that by itself does not mean the log form is worse. In the log model, log(PE) rises by about .83 with a one-unit increase in earnings growth and by about 2.8 with a one-unit increase in dividends paid, and falls by about .22 with a one-unit increase in company riskiness (BETA). Judging by the t-ratios, DIV is statistically significant (t = 3.33), BETA is borderline (t = -1.99, p = .051), and EARN is not significant (t = 1.44).
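For reference, the semilog (log-level) reading used above rests on the approximation below, where a coefficient b on a regressor implies roughly a 100·b percent change in PE per one-unit change in that regressor (exactly 100·[e^b − 1] percent):

$$\%\Delta PE \approx 100\,\hat\beta_j\,\Delta x_j.$$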


. generate logBETA = log(BETA)

. generate logEARN = log(EARN)
(7 missing values generated)

. generate logDIV = log(DIV)
(20 missing values generated)

. reg logPE logBETA logEARN logDIV

      Source |       SS        df       MS            Number of obs =      42
-------------+------------------------------         F(3, 38)      =    3.99
       Model |   .674950921     3   .22498364        Prob > F      =  0.0145
    Residual |   2.14118305    38  .056346922        R-squared     =  0.2397
-------------+------------------------------         Adj R-squared =  0.1796
       Total |   2.81613397    41  .068686194        Root MSE      =  .23738

       logPE |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
     logBETA |  -.2366714    .146039    -1.62   0.113    -.5323118    .0589691
     logEARN |    .084215   .0590529     1.43   0.162    -.0353314    .2037613
      logDIV |   .0885859   .0346409     2.56   0.015     .0184592    .1587126
       _cons |   3.251919   .2038624    15.95   0.000     2.839221    3.664617

. * (b) The double-log model is not clearly better than the semilog model: the R-squared (.2397) is almost the same, so it does not explain much more of the variation, and the constant increased relative to the semilog model. Taking logs also generated many missing values, cutting the sample from 65 to 42 observations. The t-ratio for logDIV (2.56) is still above 1.96, so it remains statistically significant, but its coefficient is far smaller than DIV's was; the magnitude of logEARN's coefficient also went down and it stays insignificant. This suggests the double-log model may not be as effective as the semilog model (unless some variable is excluded).

. plot PE EARN
[text scatterplot: PE, range 8.36 to 28.28, against EARN, range -.137 to .186]

. plot logPE EARN

[text scatterplot: logPE, range 2.12346 to 3.34215, against EARN, range -.137 to .186]

. plot logPE logEARN


[text scatterplot: logPE, range 2.12346 to 3.34215, against logEARN, range -4.60517 to -1.68201]

. * (c) EARN seems relevant to PE, but let's exclude EARN from the original model and compare the fit to see whether the model would be misspecified without it.

. reg PE BETA DIV

      Source |       SS        df       MS            Number of obs =      65
-------------+------------------------------         F(2, 62)      =    6.66
       Model |   160.611688     2  80.3058441        Prob > F      =  0.0024
    Residual |   747.655861    62  12.0589655        R-squared     =  0.1768
-------------+------------------------------         Adj R-squared =  0.1503
       Total |    908.26755    64  14.1916805        Root MSE      =  3.4726

          PE |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        BETA |  -2.577104   1.651761    -1.56   0.124    -5.878926    .7247186
         DIV |   42.25223   12.56091     3.36   0.001     17.14334    67.36113
       _cons |    15.9701   1.809524     8.83   0.000     12.35291    19.58728

. * (c) The R-squared also decreased (from .1904 to .1768), which means less of the variation in PE is explained when EARN is excluded. This suggests EARN is a relevant variable for this model.
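As a quick cross-check computed from the two outputs above (not part of the original log), the exclusion F-statistic implied by the two residual sums of squares matches the squared t-ratio on EARN from the full model:

$$F = \frac{(SSR_r - SSR_{ur})/1}{SSR_{ur}/61} = \frac{747.656 - 735.372}{735.372/61} \approx 1.02 \approx (1.01)^2,$$

consistent with EARN being individually insignificant even though it adds some explanatory power.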

. log close
name: <unnamed>
log: C:\Users\birens computer\Desktop\q5 ps2.smcl
log type: smcl
closed on: 16 Mar 2017, 13:44:09

6.
opened on: 26 Mar 2017, 19:11:12

. *i*

. reg lwage belavg abvavg female educ exper expersq, robust

Linear regression                               Number of obs     =      1,260
                                                F(6, 1253)        =     120.99
                                                Prob > F          =     0.0000
                                                R-squared         =     0.3598
                                                Root MSE          =     .47683

             |               Robust
       lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
      belavg |  -.1542032   .0410068    -3.76   0.000    -.2346528   -.0737536
      abvavg |  -.0066465   .0312705    -0.21   0.832    -.0679948    .0547018
      female |  -.4532832   .0292106   -15.52   0.000    -.5105904   -.3959761
        educ |   .0663221   .0055299    11.99   0.000     .0554732     .077171
       exper |   .0408305   .0042387     9.63   0.000     .0325147    .0491463
     expersq |  -.0006301   .0000946    -6.66   0.000    -.0008156   -.0004445
       _cons |    .558981   .0814393     6.86   0.000     .3992086    .7187534

*The variables belavg and abvavg are surprising in their signs and magnitudes: being below average in looks carries a large, significant wage penalty, while being above average has essentially no effect. The coefficient on female (-.453) has a p-value of 0.000, below the .05 threshold at the 5% significance level, indicating that it is both practically large and statistically significant.*
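As an illustrative computation from these estimates (my addition, not in the original log), the quadratic in experience implies the wage profile peaks where the marginal effect of exper is zero:

$$\frac{\partial\,\widehat{lwage}}{\partial\,exper} = .0408 - 2(.00063)\,exper = 0 \;\Rightarrow\; exper \approx 32.4 \text{ years.}$$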
. gen belavgXfemale = belavg*female

. gen belavgXfemale = belavg*female
variable belavgXfemale already defined
r(110);

. gen abvavgXfemale = abvavg*female

. gen educXfemale = educ*female

. gen experXfemale = exper*female

. gen expersqXfemale = expersq*female

. *ii*

. reg lwage belavg abvavg female educ exper expersq belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale

      Source |       SS        df       MS            Number of obs =   1,260
-------------+------------------------------         F(11, 1248)   =   66.19
       Model |   163.957194    11  14.9051994        Prob > F      =  0.0000
    Residual |   281.022779 1,248  .225178508        R-squared     =  0.3685
-------------+------------------------------         Adj R-squared =  0.3629
       Total |   444.979972 1,259  .353439215        Root MSE      =  .47453

          lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
----------------+----------------------------------------------------------------
         belavg |  -.1693568   .0531369    -3.19   0.001    -.2736043   -.0651094
         abvavg |  -.0390703   .0380466    -1.03   0.305    -.1137126    .0355721
         female |    -.49681   .1610739    -3.08   0.002    -.8128156   -.1808044
           educ |   .0609789   .0064705     9.42   0.000     .0482846    .0736731
          exper |   .0504833   .0055818     9.04   0.000     .0395325     .061434
        expersq |  -.0008023    .000121    -6.63   0.000    -.0010395    -.000565
  belavgXfemale |   .0436467   .0875103     0.50   0.618    -.1280369    .2153302
  abvavgXfemale |   .0824055   .0637923     1.29   0.197    -.0427466    .2075575
    educXfemale |   .0176664   .0112268     1.57   0.116    -.0043591    .0396919
   experXfemale |   -.020652   .0092816    -2.23   0.026    -.0388613   -.0024427
 expersqXfemale |    .000318   .0002185     1.45   0.146    -.0001108    .0007467
          _cons |   .5375405   .0985018     5.46   0.000     .3442931    .7307878

. reg lwage belavg abvavg female educ exper expersq belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale, robust

Linear regression                               Number of obs     =      1,260
                                                F(11, 1248)       =      69.05
                                                Prob > F          =     0.0000
                                                R-squared         =     0.3685
                                                Root MSE          =     .47453

                |               Robust
          lwage |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
----------------+----------------------------------------------------------------
         belavg |  -.1693568   .0529687    -3.20   0.001    -.2732744   -.0654392
         abvavg |  -.0390703   .0391032    -1.00   0.318    -.1157855    .0376449
         female |    -.49681   .1586357    -3.13   0.002    -.8080321   -.1855879
           educ |   .0609789   .0070913     8.60   0.000     .0470666    .0748911
          exper |   .0504833   .0054175     9.32   0.000     .0398548    .0611117
        expersq |  -.0008023   .0001148    -6.99   0.000    -.0010274   -.0005771
  belavgXfemale |   .0436467   .0831824     0.52   0.600     -.119546    .2068394
  abvavgXfemale |   .0824055   .0647921     1.27   0.204    -.0447079    .2095189
    educXfemale |   .0176664   .0111196     1.59   0.112    -.0041488    .0394816
   experXfemale |   -.020652   .0090353    -2.29   0.022    -.0383781   -.0029258
 expersqXfemale |    .000318   .0002261     1.41   0.160    -.0001256    .0007616
          _cons |   .5375405   .1042491     5.16   0.000     .3330176    .7420633

. LS lwage belavg abvavg female educ exper expersq belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale
command LS is unrecognized
r(199);

. test belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale

( 1) belavgXfemale = 0
( 2) abvavgXfemale = 0
( 3) educXfemale = 0
( 4) experXfemale = 0
( 5) expersqXfemale = 0
F( 5, 1248) = 3.83
Prob > F = 0.0019

. test belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale, robust
option robust not allowed
r(198);

. test belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale = 1
=exp required
r(100);
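A note on the two rejected commands above: Stata's test command takes no robust option, and none is needed. test is a Wald test built on the covariance matrix of the most recent estimation, so when the regression is re-run with the robust option, the same test command automatically delivers the heteroskedasticity-robust joint test:

. reg lwage belavg abvavg female educ exper expersq belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale, robust
. test belavgXfemale abvavgXfemale educXfemale experXfemale expersqXfemale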

. test belavgXfemale = abvavgXfemale = educXfemale = experXfemale = expersqXfemale = 1

 ( 1) belavgXfemale - abvavgXfemale = 0
 ( 2) belavgXfemale - educXfemale = 0
 ( 3) belavgXfemale - experXfemale = 0
 ( 4) belavgXfemale - expersqXfemale = 0
 ( 5) belavgXfemale = 1

   F( 5, 1248) = 4.5e+07
   Prob > F    = 0.0000

. *Both F-statistics have p-values below .05 at the 5% significance level, so the interaction terms are jointly significant; hence we can conclude that the heteroskedasticity-robust version does not change the outcome.*

. test belavgXfemale abvavgXfemale


( 1) belavgXfemale = 0
( 2) abvavgXfemale = 0

F( 2, 1248) = 0.83
Prob > F = 0.4362

. *The p-value of the F-statistic is .4362, which is larger than the .05 critical value at the 5% level of significance, indicating that the two variables are jointly statistically insignificant. The individual t-ratios for belavgXfemale and abvavgXfemale are 0.52 and 1.27, with p-values of .600 and .204, both greater than .05, indicating the variables are not statistically significantly different from zero. This shows us that these coefficients are practically very small.*

. log close
