Department of Economics                                        Chang Sik Kim
SKKU                                                           Spring 2018

Midterm Exam

Instruction: This is a 90-minute closed-book exam. You may use a simple calculator,
but no formula sheet is allowed. If you have difficulty with a question, move on to another.
The total number of points in this exam is 100. Good luck!!

1. (Total 15 points) Determine whether each of the following statements is TRUE or
FALSE, and EXPLAIN BRIEFLY. No credit will be given to an answer with no
explanation.

(a) (3 points) If you have more observations, a t-test will reject more frequently,
both when the null hypothesis is true and when it is false.

(b) (3 points) The correlation coefficient (in our notation r_{X,Y}) has the same sign as
the estimate of the slope coefficient in the simple linear regression model.

(c) (3 points) The interpretation of the slope coefficient in the model ln Yi = β1 + β2 ln Xi + ui
is that a 1% change in X is associated with a β2% change in Y.

(d) (3 points) The OLS estimators β̂1 and β̂2 follow the normal distribution asymptotically
only if the regression error follows the normal distribution.

(e) (3 points) You would like to estimate a simple linear regression of investment
(dependent variable) on interest rates (independent variable). Then, data from a
period when interest rates were stable are better for estimating the slope parameter
than data from a period when interest rates were fluctuating.
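
As an illustration of what statement (a) is probing, here is a Monte Carlo sketch on made-up data (the 1.96 normal critical value is used as a large-sample approximation to the t critical value):

```python
import numpy as np

rng = np.random.default_rng(42)

def reject_rate(mu, n, reps=5000, crit=1.96):
    """Share of replications in which a two-sided t-test of H0: mean = 0 rejects."""
    count = 0
    for _ in range(reps):
        x = rng.normal(mu, 1.0, size=n)
        t = x.mean() / (x.std(ddof=1) / np.sqrt(n))
        count += abs(t) > crit
    return count / reps

# Under a true null (mu = 0) the rejection rate stays near the 5% level for any n;
# under a false null (mu = 0.3) the rejection rate rises toward 1 as n grows.
print(reject_rate(0.0, 50), reject_rate(0.0, 500))
print(reject_rate(0.3, 50), reject_rate(0.3, 500))
```

The simulation separates the two halves of the statement: size is controlled at roughly 5% regardless of n, while power grows with n.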

2. (Total 10 points) You first estimate the following model

Yi = β2 Xi + ei,    i = 1, ..., n,    (1)

by OLS. Assume that the classical assumptions are satisfied with Var(ei) = σ².

(a) (5 points) Obtain the formula for the variance of the OLS estimator β̂2 in (1). Obtain
the unbiased estimator σ̂² for σ². (NO proof is necessary, just give a formula.)

(b) (5 points) Now denote the estimator for the variance of the OLS estimator β̂2 as
V̂ar(β̂2); then show that V̂ar(β̂2) is unbiased for Var(β̂2). Specify any
assumption(s) you need.
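
As a sketch of what parts (a) and (b) are after: in the no-intercept model (1), β̂2 = Σ Xi Yi / Σ Xi², so Var(β̂2) = σ² / Σ Xi², and dividing the residual sum of squares by n − 1 (one estimated parameter) gives an unbiased σ̂². A Monte Carlo check on made-up data with fixed regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta2, sigma2 = 50, 2.0, 4.0
x = rng.uniform(1.0, 3.0, size=n)   # fixed regressors, held constant across replications
true_var = sigma2 / np.sum(x**2)    # Var(beta2_hat) = sigma^2 / sum(x_i^2)

est_vars = []
for _ in range(20000):
    y = beta2 * x + rng.normal(0.0, np.sqrt(sigma2), size=n)
    b2 = np.sum(x * y) / np.sum(x**2)      # OLS slope, no intercept
    resid = y - b2 * x
    s2 = np.sum(resid**2) / (n - 1)        # unbiased for sigma^2 (one parameter estimated)
    est_vars.append(s2 / np.sum(x**2))     # estimated Var(beta2_hat)

# The average estimated variance should be close to the true variance
print(np.mean(est_vars), true_var)
```

The key assumption used (as the question asks you to note) is that Var(ei) = σ² for all i and that the Xi's are nonstochastic, so E[s²]/Σ Xi² equals σ²/Σ Xi² exactly.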

3. (Total 20 points) Consider a simple linear regression model

Yi = β1 + β2 Xi + ei.

According to the data given, we have


Ȳ = 8      Σ_{i=1}^n Yi² = 790      Σ_{i=1}^n Xi Yi = 6030
X̄ = 5      Σ_{i=1}^n Xi² = 3825     n = 152
Σ_{i=1}^n ûi² = 3750.

(a) (5 points) Obtain the estimates β̂2 and β̂1 based on the numerical information
given above.

(b) (5 points) Calculate the estimate of the variance of β̂2.

(c) (5 points) Construct the 95% confidence interval for β2. Please specify any
assumptions you need to get the results.

(d) (5 points) Perform the hypothesis test of the following null and alternative
hypotheses

H0: β2 = 0    vs.    H1: β2 ≠ 0,

with 5% significance level. What is the p-value? Please specify any assumptions
you need to get the results.
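
One way to check the arithmetic in parts (a)–(d) is to work directly from the summary statistics above; this sketch uses the 1.96 normal critical value as a large-sample approximation (n = 152 is large):

```python
import math

# Summary statistics as given in question 3
n, ybar, xbar = 152, 8.0, 5.0
sum_xy, sum_x2, ssr = 6030.0, 3825.0, 3750.0  # ssr = sum of squared residuals

sxx = sum_x2 - n * xbar**2          # sum of squared deviations of X
sxy = sum_xy - n * xbar * ybar      # sum of cross-deviations
b2 = sxy / sxx                      # slope estimate
b1 = ybar - b2 * xbar               # intercept estimate

sigma2_hat = ssr / (n - 2)          # unbiased error-variance estimate
var_b2 = sigma2_hat / sxx           # estimated Var(beta2_hat)
se_b2 = math.sqrt(var_b2)

t_stat = b2 / se_b2                 # t-statistic for H0: beta2 = 0
ci = (b2 - 1.96 * se_b2, b2 + 1.96 * se_b2)  # approximate 95% CI
print(b2, b1, sigma2_hat, var_b2, t_stat, ci)
```

The assumptions the question asks for (homoskedastic errors, and either normal errors or a large-sample normal approximation) are what justify the t-statistic and the 1.96 critical value.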

4. (Total 20 points) You first estimate the simple regression model

Yi = β1 + β2 Xi + ei,    i = 1, ..., n,

by OLS. Assume that the regression model satisfies all the classical assumptions.

(a) (5 points) Show that

Ȳ = β̂1 + β̂2 X̄,

where Ȳ, X̄ are the sample means of Y and X, respectively.

(b) (5 points) What is the value of the sample mean of the residuals? That is, determine
the value of ê̄ = (1/n) Σ_{i=1}^n êi. Explain why you get this value.

(c) Now you delete the j-th observation (Yj, Xj) and re-estimate the regression model
by OLS. Determine and explain whether R² increases, decreases, or stays
unchanged if

i. (5 points) Yj = Ȳ (the dependent variable for the deleted observation equals
the sample mean of Y, Ȳ = (1/n) Σ_{i=1}^n Yi);

ii. (5 points) êj = 0 (the residual for the deleted observation equals zero).
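
The facts in parts (a) and (b) both follow from the first-order (normal) equations of OLS with an intercept. A quick numerical check on made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

# OLS with intercept via the usual closed-form expressions
b2 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
b1 = y.mean() - b2 * x.mean()
resid = y - b1 - b2 * x

# The fitted line passes through the point of means (Xbar, Ybar),
# and the residuals average to zero (both up to floating-point error)
print(y.mean() - (b1 + b2 * x.mean()))
print(resid.mean())
```

Both printed values are zero up to rounding, because the normal equation for the intercept forces Σ êi = 0, which in turn gives Ȳ = β̂1 + β̂2 X̄.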

5. (Total 25 points) Consider a simple linear regression model

Yi = β1 + β2 Xi + ui,

under the classical assumptions. Now, we introduce the assumption that the {ui} are
independent and normally distributed, that is, ui ∼ N(0, σ²), and hence
Yi ∼ N(β1 + β2 Xi, σ²) since the Xi's are deterministic. Therefore, the density
function of Yi is given by

f(Yi, Xi; β1, β2, σ²) = (1/√(2πσ²)) exp( −(Yi − β1 − β2 Xi)² / (2σ²) ).

Then the likelihood function for (β1, β2, σ²) given the data (Y, X) is given by

L(β1, β2, σ²; Y, X) = Π_{i=1}^n (2πσ²)^{−1/2} exp[ −(1/2)(ui²/σ²) ]
                    = Π_{i=1}^n (2πσ²)^{−1/2} exp[ −(1/2)((Yi − β1 − β2 Xi)/σ)² ],

and the Maximum Likelihood Estimator (MLE) of (β1, β2, σ²) maximizes the following
log-likelihood function

ln L(β1, β2, σ²; Y, X) = −(n/2) ln 2π − (n/2) ln σ² − (1/(2σ²)) Σ_{i=1}^n (Yi − β1 − β2 Xi)²

with respect to (β1, β2, σ²) ∈ R¹ × R¹ × R₊.

(a) (5 points) Obtain the MLEs β̂1^MLE, β̂2^MLE of β1 and β2 and show that they
are the same as the OLS estimators.

(b) (5 points) Obtain the MLE σ̂²_MLE of σ².

(c) (5 points) Obtain the bias of the MLE σ̂²_MLE. Be sure to specify how you get
your answer.

(d) (5 points) Obtain the variance of the MLE σ̂²_MLE. Be sure to specify how you
get your answer.

(e) (5 points) Is σ̂²_MLE a consistent estimator? Be sure to specify how you get your
answer.
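
For the two-parameter model, standard theory gives E[σ̂²_MLE] = (n − 2)σ²/n, so the bias asked for in part (c) is −2σ²/n (vanishing as n → ∞, which is the heart of part (e)). A Monte Carlo sketch on made-up data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, sigma2 = 20, 1.0
x = rng.uniform(0.0, 1.0, size=n)       # fixed regressors across replications
X = np.column_stack([np.ones(n), x])    # design matrix with intercept

mles = []
for _ in range(40000):
    y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, size=n)
    b = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS = MLE for (beta1, beta2)
    resid = y - X @ b
    mles.append(np.sum(resid**2) / n)          # MLE of sigma^2 divides by n, not n - 2

# Theory: E[sigma2_mle] = (n - 2)/n * sigma2, i.e. bias = -2 * sigma2 / n
print(np.mean(mles), (n - 2) / n * sigma2)
```

With n = 20 the downward bias is clearly visible (average near 0.9 rather than 1.0); since RSS/σ² is chi-squared with n − 2 degrees of freedom, the same calculation also yields the variance in part (d), 2(n − 2)σ⁴/n².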

6. (Total 10 points) You want to consider the following two different regressions:

Yt = β (1/t) + ut,    t = 1, ..., n,

and

Yt = β (1/√t) + et,    t = 1, ..., n,

where the regression errors et, ut satisfy the classical assumptions. Obtain the OLS
estimators in both regressions and discuss the consistency of the OLS estimators.

(Hint) Notice that

lim_{n→∞} Σ_{t=1}^n (1/t) = ∞    and    lim_{n→∞} Σ_{t=1}^n (1/t²) = π²/6 < ∞.
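
The hint can be checked numerically. Under the classical assumptions Var(β̂) = σ² / Σ_{t=1}^n x_t², so a divergent design sum drives the variance to zero (consistency) while a convergent one does not:

```python
import math

# Var(beta_hat) = sigma^2 / sum(x_t^2) under the classical assumptions.
# x_t = 1/t:       sum(x_t^2) = sum 1/t^2 -> pi^2/6 (finite), so the variance
#                  does NOT vanish: the OLS estimator is NOT consistent.
# x_t = 1/sqrt(t): sum(x_t^2) = sum 1/t (harmonic series) -> infinity, so the
#                  variance -> 0: the OLS estimator IS consistent.
for n in (10, 1000, 100000):
    s_inv_t2 = sum(1.0 / t**2 for t in range(1, n + 1))  # converges to pi^2/6
    s_inv_t = sum(1.0 / t for t in range(1, n + 1))      # grows like log(n)
    print(n, round(s_inv_t2, 6), round(math.pi**2 / 6, 6), round(s_inv_t, 3))
```

The first sum stabilizes near π²/6 ≈ 1.6449 while the second keeps growing, which is exactly the distinction the question turns on.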
