
Topic 2

Nonlinear Regression Models

(I) Linear vs. nonlinear model


Although linear regression models predominate in theory and practice, there are occasions when nonlinear-in-the-parameters regression models (NLRM) are useful.
In this topic, we focus on:
(I). Differentiating between linear and nonlinear models
When is a model linear?
When is a model nonlinear?
Example: Cobb-Douglas function

(II). Estimating Linear and Nonlinear Regression Models


LRM vs. NLRM

When is a model linear?


The CLRM assumes that the regression model is linear in the parameters; it may or may not be linear in the variables:
A model which is linear in the parameters and linear in the variable(s):
$Y_i = b_0 + b_1 X_i + e_i$

A model which is linear in the parameters but nonlinear in the variable(s):
$Y_i = b_0 + b_1 X_i^2 + e_i$

When is a model nonlinear?


What happens if the model is nonlinear in the parameters?
If the model is nonlinear in the parameters, it can be either:
1. An intrinsically linear regression model:
a model that is nonlinear in the parameters but that, with a suitable transformation, can be made a linear-in-the-parameters regression model.
2. An intrinsically nonlinear regression model:
a model that cannot be linearized in the parameters.
When we talk about a nonlinear regression model, we mean that it is intrinsically nonlinear.

Example: Cobb-Douglas function


Consider the famous Cobb-Douglas production
function.
Letting Y = output, X2 = labor input, and X3 = capital
input, we can write this function in several ways:
Model 1: $Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} e^{u_i}$

Model 2: $Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} u_i$

Model 3: $Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} + u_i$
Example: Cobb-Douglas function


Can we linearize the nonlinear models?
Model 1: taking logs gives $\ln Y_i = \ln\beta_1 + \beta_2 \ln X_{2i} + \beta_3 \ln X_{3i} + u_i$, which is linear in the parameters $\ln\beta_1$, $\beta_2$, and $\beta_3$.
Model 2: taking logs gives $\ln Y_i = \ln\beta_1 + \beta_2 \ln X_{2i} + \beta_3 \ln X_{3i} + \ln u_i$, which is also linear in the parameters.
Model 3: $Y_i = \beta_1 X_{2i}^{\beta_2} X_{3i}^{\beta_3} + u_i$ cannot be transformed into a linear-in-the-parameters form, because the error term enters additively.
What can you conclude? Models 1 and 2 are intrinsically linear, whereas Model 3 is intrinsically nonlinear.
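To make the conclusion concrete, here is a minimal Python sketch, using synthetic data invented purely for illustration (not from the lecture), showing that an intrinsically linear model such as Model 1 can be estimated by OLS after a log transformation:

```python
import numpy as np

# Synthetic data for illustration only: generate output from a
# Cobb-Douglas process with beta1 = 2, beta2 = 0.6, beta3 = 0.3
rng = np.random.default_rng(0)
n = 100
labor = rng.uniform(10, 100, n)      # X2
capital = rng.uniform(10, 100, n)    # X3
u = rng.normal(0, 0.05, n)
output = 2.0 * labor**0.6 * capital**0.3 * np.exp(u)   # Model 1

# Model 1 is intrinsically linear: after taking logs it is linear in
# ln(beta1), beta2, beta3, so ordinary least squares applies.
X = np.column_stack([np.ones(n), np.log(labor), np.log(capital)])
y = np.log(output)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
ln_b1, b2, b3 = coef
print("beta1 =", np.exp(ln_b1), "beta2 =", b2, "beta3 =", b3)
```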

(II). Estimating Linear and Nonlinear Regression Models

Example:

Table 1: Management fees paid by a leading mutual fund in the US to its investment advisors to manage its assets

Obs.  Fee, %   Asset, billions of dollars
1     0.520     0.5
2     0.508     5.0
3     0.484    10.0
4     0.460    15.0
5     0.4398   20.0
6     0.4238   25.0
7     0.4115   30.0
8     0.4020   35.0
9     0.3944   40.0
10    0.3880   45.0
11    0.3825   55.0
12    0.3738   60.0

Figure 1: Relationship of advisory fees to fund assets (scatter of FEE, % against ASSET, billions of dollars).
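For the estimation examples that follow, a minimal Python sketch (an illustration, not part of the original slides) puts the Table 1 data into arrays and reproduces the scatter in Figure 1:

```python
import numpy as np
import matplotlib.pyplot as plt

# Data from Table 1
asset = np.array([0.5, 5.0, 10.0, 15.0, 20.0, 25.0,
                  30.0, 35.0, 40.0, 45.0, 55.0, 60.0])   # billions of dollars
fee = np.array([0.520, 0.508, 0.484, 0.460, 0.4398, 0.4238,
                0.4115, 0.4020, 0.3944, 0.3880, 0.3825, 0.3738])  # percent

# Scatter plot corresponding to Figure 1
plt.scatter(asset, fee)
plt.xlabel("ASSET, billions of dollars")
plt.ylabel("FEE, %")
plt.show()
```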



Estimating Linear and Nonlinear Regression Models

To see the difference in estimating linear and nonlinear regression models, we consider the following two models:

(1)  $Y_i = \beta_1 + \beta_2 X_i + u_i$
(2)  $Y_i = \beta_1 e^{\beta_2 X_i} + u_i$

Model (1) is a linear regression model, whereas model (2) is a nonlinear regression model, often known as the exponential regression model.

OLS in Linear Regression Model (LRM)

To estimate the parameters of the LRM, we can use OLS.
OLS minimizes the residual sum of squares.
The mathematics underlying the LRM is comparatively simple, in that one can obtain an explicit, or analytical, solution for the coefficients of such models.
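As a minimal sketch of that analytical solution (assuming the `asset` and `fee` arrays defined earlier), the OLS estimator $\hat{\beta} = (X'X)^{-1}X'y$ can be computed directly:

```python
import numpy as np

# Closed-form OLS: beta_hat = (X'X)^(-1) X'y for model (1), Y = b1 + b2*X + u
X = np.column_stack([np.ones_like(asset), asset])   # design matrix [1, X_i]
beta_hat = np.linalg.solve(X.T @ X, X.T @ fee)       # analytical solution
residuals = fee - X @ beta_hat
rss = residuals @ residuals                          # residual sum of squares
print("beta1_hat, beta2_hat =", beta_hat, " RSS =", rss)
```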

(Scatter of FEE, % against ASSET, billions of dollars, as in Figure 1.)

OLS in Linear Regression Model (LRM)

The small-sample and large-sample theory of inference for such models is well established.
With normally distributed error terms, we are able to develop exact inference procedures (i.e., hypothesis tests) using the t, F, and χ² distributions in small samples as well as large samples.
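As a hedged illustration (not from the slides), and again assuming the `asset` and `fee` arrays from Table 1, statsmodels reports the corresponding t and F statistics for model (1):

```python
import statsmodels.api as sm

# Exact small-sample inference for the linear model (1):
# t statistics for each coefficient and the overall F statistic.
X = sm.add_constant(asset)               # design matrix [1, X_i]
ols_fit = sm.OLS(fee, X).fit()
print(ols_fit.tvalues)                   # t statistics
print(ols_fit.fvalue, ols_fit.f_pvalue)  # F statistic and its p-value
```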

[Histogram of the residuals from the linear model (sample 1–12, 12 observations), with summary statistics: mean = −2.49e−17, median = −0.004280, maximum = 0.020042, minimum = −0.016900, std. dev. = 0.014812, skewness = 0.258415, kurtosis = 1.406468; Jarque–Bera = 1.403229, probability = 0.495784.]


NLLS in NonLinear Regression Model (NLRM)

To estimate the parameters of the NLRM, we can use Nonlinear Least Squares (NLLS).
NLLS minimizes the residual sum of squares using an iterative procedure.
To see how the exponential regression model (2), $Y_i = \beta_1 e^{\beta_2 X_i} + u_i$, fits the data given in Table 1, we can proceed by trial and error.
Suppose that we initially assume $\beta_1 = 0.45$ and $\beta_2 = 0.01$.
These are pure guesses, sometimes based on prior expectations or prior empirical work, or obtained by just fitting a linear regression model even though it may not be appropriate.


NLLS in NonLinear Regression Model (NLRM)

Trial   β1      β2       ESS
1       0.45     0.01    0.3044
2       0.50    -0.01    0.0073

Table 2: Trial-and-error procedure

The trial-and-error, or iterative, process can be implemented easily.
But you might ask, how did we go from $\beta_1 = 0.45$ and $\beta_2 = 0.01$ to $\beta_1 = 0.50$ and $\beta_2 = -0.01$?
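The calculation behind each trial is simple; here is a minimal Python sketch (an illustration assuming the `asset` and `fee` arrays from Table 1) that evaluates the error sum of squares for candidate parameter values. The printed numbers are illustrative and need not match Table 2 exactly, since the slide's figures may have been computed on a different basis:

```python
import numpy as np

def ess(b1, b2, x, y):
    """Error sum of squares for the exponential model y = b1 * exp(b2 * x)."""
    resid = y - b1 * np.exp(b2 * x)
    return float(resid @ resid)

# Candidate values from the two trials
print("trial 1:", ess(0.45,  0.01, asset, fee))
print("trial 2:", ess(0.50, -0.01, asset, fee))
```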

NLLS in NonLinear Regression Model (NLRM)

We need some kind of algorithm to move from one set of values to another before we stop.
There are several approaches, or algorithms, for NLRMs: (1) direct search or trial and error, (2) direct optimization, and (3) iterative linearization.
In practice, with the availability of user-friendly software packages, estimation of NLRMs is no longer a mystery.


NLLS in NonLinear Regression Model (NLRM)

The EViews nonlinear regression routine uses the iterative linearization method.
It took EViews five iterations to obtain the results.
From the results, we can write the estimated model as

$Y_i = \beta_1 e^{\beta_2 X_i} + u_i$
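The slides use EViews; as a hedged alternative sketch (an illustration, not the lecture's own code), scipy's iterative least-squares routine `curve_fit` fits the same exponential model, again assuming the `asset` and `fee` arrays from Table 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, b1, b2):
    """Exponential regression model: E(Y | X) = b1 * exp(b2 * x)."""
    return b1 * np.exp(b2 * x)

# Start the iterations from the same initial guesses as the trial-and-error step
popt, pcov = curve_fit(exp_model, asset, fee, p0=[0.45, 0.01])
print("beta1_hat =", popt[0], "beta2_hat =", popt[1])
```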


NLLS in NonLinear Regression Model (NLRM)

Properties of the NLLS estimator:
Even with normally distributed error terms, the NLLS estimators are not normally distributed.
As a result, we cannot use the t test or F test, because we cannot obtain an unbiased estimate of the error variance from the estimated residuals.
In addition, the residuals do not necessarily sum to zero, ESS and RSS do not necessarily add up to the TSS, and therefore R² may not be meaningful.

NLLS in NonLinear Regression Model (NLRM)

Large-sample theory
Consequently, inferences about the regression parameters in nonlinear regression are usually based on large-sample theory.
This theory tells us that the LS and ML estimators for nonlinear regression models with normal error terms are, when the sample size is large, approximately normally distributed, almost unbiased, and of almost minimum variance.
This large-sample theory also applies when the error terms are not normally distributed.
In short, then, all inference procedures in the NLRM are large-sample, or asymptotic.
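As a hedged continuation of the earlier `curve_fit` sketch (assuming the `popt` and `pcov` objects from that snippet), approximate large-sample standard errors and t ratios can be read off the estimated covariance matrix:

```python
import numpy as np

# Asymptotic (large-sample) inference for the NLLS estimates:
# standard errors are square roots of the diagonal of the covariance matrix,
# and the resulting t ratios are only approximately valid in large samples.
se = np.sqrt(np.diag(pcov))
t_ratios = popt / se
print("approx. std. errors:", se)
print("approx. t ratios:", t_ratios)
```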

