
Roberto Valdez

Rjv2119
Homework 1
1.1 a)
We construct a function that we want to minimize in terms of c with the form:
f(c) = E[(X - c)^2]
     = E[X^2 - 2cX + c^2] = E[X^2] - 2cE[X] + c^2
We differentiate this equation with respect to c and set it equal to 0 to obtain:
df/dc = -2E[X] + 2c = 0
From which we obtain that the value of c that minimizes the function is:
c = E[X]
We can also check the second derivative to confirm that it is indeed a minimum:
d^2 f / dc^2 = 2 > 0
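As a quick numerical sanity check (an R sketch added here, not part of the original derivation; the simulated sample and search interval are arbitrary illustrative choices), minimizing the empirical MSE recovers the sample mean:

set.seed(1)
x <- rnorm(1000, mean = 3, sd = 2)            # sample standing in for X
mse <- function(cc) mean((x - cc)^2)          # empirical version of E[(X - c)^2]
optimize(mse, interval = c(-10, 10))$minimum  # close to mean(x), i.e. about 3
mean(x)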
1.1 b)
We construct a function that we want to minimize in terms of g(Y) with the form:
f(g(Y)) = E[(X - g(Y))^2 | Y]
= E[X^2 - 2Xg(Y) + g(Y)^2 | Y] = E[X^2 | Y] - 2g(Y)E[X | Y] + g(Y)^2
We differentiate this equation with respect to g(Y) and set it equal to 0 to obtain:
df/dg(Y) = -2E[X | Y] + 2g(Y) = 0
From which we obtain that the value of g(Y) that minimizes the function is:
g(Y) = E[X | Y]
We can also check the second derivative to confirm that it is indeed a minimum:
d^2 f / dg(Y)^2 = 2 > 0

1.1 c)
We construct a function that we want to minimize in terms of g(Y) with the form:
f(g(Y)) = E[(X - g(Y))^2] = E[ E[(X - g(Y))^2 | Y] ]
= \int E[(X - g(Y))^2 | Y = y] f_Y(y) dy
Using what was demonstrated in the previous part, we know that E[(X - g(Y))^2 | Y = y] is minimized
pointwise when:
g(y) = E[X | Y = y]
Knowing that the density f_Y(y) is always nonnegative, we can clearly see that:
\int E[(X - g(Y))^2 | Y = y] f_Y(y) dy
will be minimized when the conditional expectation inside the integral is at its pointwise minimum, so we select:
g(Y) = E[X | Y]
1.2 a)
We construct a function that we want to minimize in terms of g(X_1, X_2, ..., X_n) with the form:
f(g(X_1, ..., X_n)) = E[(X_{n+1} - g(X_1, ..., X_n))^2 | X_1, ..., X_n]
= E[X_{n+1}^2 | X_1, ..., X_n] - 2g(X_1, ..., X_n)E[X_{n+1} | X_1, ..., X_n] + g(X_1, ..., X_n)^2
We differentiate this equation with respect to g(X_1, ..., X_n) and set it equal to 0 to obtain:
df/dg = -2E[X_{n+1} | X_1, ..., X_n] + 2g(X_1, ..., X_n) = 0
From which we obtain that the value of g(X_1, ..., X_n) that minimizes the function is:
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]
We can also check the second derivative to confirm that it is indeed a minimum:
d^2 f / dg^2 = 2 > 0

1.2 b)
We construct a function that we want to minimize in terms of g(X_1, ..., X_n) with the form:
f(g(X_1, ..., X_n)) = E[(X_{n+1} - g(X_1, ..., X_n))^2]
= E[ E[(X_{n+1} - g(X_1, ..., X_n))^2 | X_1, ..., X_n] ]
= \int E[(X_{n+1} - g(X_1, ..., X_n))^2 | X_1 = x_1, ..., X_n = x_n] f(x_1, ..., x_n) dx_1 ... dx_n
Using what was demonstrated in the previous part, we know that
E[(X_{n+1} - g(X_1, ..., X_n))^2 | X_1, ..., X_n] is minimized when:
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]
Knowing that the joint density f(x_1, ..., x_n) is always nonnegative, we can clearly see that the
integral will be minimized when the conditional expectation is at its pointwise minimum, so we select:
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]
1.2 c)
We know from the previous part that
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]
minimizes the mean squared error. If we add the information that X_1, X_2, ..., X_{n+1} are iid with finite variance
and mean \mu, then X_{n+1} is independent of X_1, ..., X_n and we can write:
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n] = E[X_{n+1}] = \mu
1.2 d)
We construct a linear estimator for \mu in terms of coefficients a_i and the iid X_1, X_2, ..., X_n:
\hat{\mu} = \sum_{i=1}^{n} a_i X_i
We want this estimator to be unbiased, and therefore this condition has to be met:
E[\hat{\mu}] = \mu
Using our iid assumption we know that:
E[\hat{\mu}] = E[\sum_{i=1}^{n} a_i X_i] = \sum_{i=1}^{n} a_i E[X_i] = \mu \sum_{i=1}^{n} a_i
Therefore we obtain that if we want this estimator to be unbiased, then:
\sum_{i=1}^{n} a_i = 1
Now we want this estimator to have minimum variance as well, because we know by theorem that the
MSE equals the squared bias plus the variance. Therefore, if we construct an estimator that is unbiased and has
minimum variance, then we know it is the best linear unbiased estimator. We therefore compute:
Var(\hat{\mu}) = Var(\sum_{i=1}^{n} a_i X_i) = \sum_{i=1}^{n} a_i^2 Var(X_i) = \sigma^2 \sum_{i=1}^{n} a_i^2
So we know that the estimator will have minimum variance when \sum_{i=1}^{n} a_i^2 is minimal. We
therefore construct this minimization problem:
min_a \sum_{i=1}^{n} a_i^2
s.t. \sum_{i=1}^{n} a_i = 1
So we construct the Lagrangian:
L(a, \lambda) = \sum_{i=1}^{n} a_i^2 + \lambda (\sum_{i=1}^{n} a_i - 1)
We differentiate this with respect to each a_i and \lambda, which gives 2a_i + \lambda = 0 for every i, so all
the a_i are equal; the constraint then forces:
a_i = 1/n
Therefore we have proven that the estimator for \mu:
\hat{\mu} = (1/n) \sum_{i=1}^{n} X_i
is the best linear unbiased estimator under iid conditions.
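A small R simulation (an added illustration; the unequal weights are an arbitrary alternative that also sums to 1) shows that the equal weights a_i = 1/n indeed give the smallest variance:

set.seed(2)
n <- 5
a_equal <- rep(1/n, n)                     # a_i = 1/n
a_other <- c(0.5, 0.3, 0.1, 0.05, 0.05)    # also sums to 1
sims <- replicate(10000, {
  x <- rnorm(n, mean = 1)                  # iid sample with mu = 1
  c(sum(a_equal * x), sum(a_other * x))
})
apply(sims, 1, var)                        # about 0.2 (= 1/n) vs about 0.355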

1.2 e)
In a previous part we showed that
g(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n] = E[X_{n+1}] = \mu
is the value that minimizes the MSE, which is the expected squared difference between X_{n+1} and the estimator.
In the previous part we also obtained that the estimator
\hat{\mu} = (1/n) \sum_{i=1}^{n} X_i
is the best linear unbiased estimator for \mu under iid conditions. We can therefore say that
\hat{X}_{n+1} = (1/n) \sum_{i=1}^{n} X_i
is the best linear predictor of X_{n+1} that is unbiased for \mu.


1.2 f)
From the problem we know that:
X_{n+1} = \phi X_n + Z_{n+1}
and therefore we decide to construct an estimator in terms of X_n of the form:
g(X_n) = \phi X_n + c
that minimizes the MSE for X_{n+1} in the form:
E[(X_{n+1} - g(X_n))^2 | X_n]
With the above information we perform the following calculations:
E[(X_{n+1} - g(X_n))^2 | X_n] = E[(\phi X_n + Z_{n+1} - \phi X_n - c)^2 | X_n] = E[(Z_{n+1} - c)^2 | X_n] = E[(Z_{n+1} - c)^2]
where the last equality holds because Z_{n+1} is independent of X_n.
For this problem we have derived the solution in a previous part:
c = E[Z_{n+1}] = \mu
We therefore construct the estimator with the form:
g(X_n) = \phi X_n + \mu

1.3)
From the strict stationarity definition we select n = 1 and an arbitrary shift h, and we obtain that X_1 has
the same distribution as X_{1+h}. We can therefore see that for any h:
E[X_1] = E[X_2] = ... = E[X_{1+h}]
Therefore the mean is independent of time.
We now select n = 2 and, for any t, the shift 1 - t, and we obtain that (X_t, X_{t+h}) has the same
distribution as (X_1, X_{1+h}). We can therefore see that:
Cov(X_t, X_{t+h}) = Cov(X_1, X_{1+h})
and therefore the covariance is independent of t.
Proving these two conditions proves that the process is weakly stationary as well.
1.4 a)
It is easy to obtain the expectation of this process:
E[X_t] = a + bE[Z_t] + cE[Z_{t-2}] = a
For the covariance we do:
\gamma(h) = Cov(a + bZ_t + cZ_{t-2}, a + bZ_{t+h} + cZ_{t+h-2})
= b^2 Cov(Z_t, Z_{t+h}) + bc Cov(Z_t, Z_{t+h-2}) + bc Cov(Z_{t-2}, Z_{t+h}) + c^2 Cov(Z_{t-2}, Z_{t+h-2})

\gamma(h) = { \sigma^2 (b^2 + c^2),   h = 0
            { bc \sigma^2,            |h| = 2
            { 0,                      otherwise

We can see that both the expected value and the covariance don't depend on t, and therefore the
process is weakly stationary.
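To illustrate (an added R sketch; a0 = 1, b0 = 0.5, c0 = 2 are arbitrary values), the sample ACF of a simulated path shows the predicted spike at lag 2 and nothing elsewhere:

set.seed(3)
n <- 10000; a0 <- 1; b0 <- 0.5; c0 <- 2
z <- rnorm(n + 2)                            # iid N(0,1) noise
x <- a0 + b0 * z[3:(n + 2)] + c0 * z[1:n]    # X_t = a0 + b0 Z_t + c0 Z_{t-2}
acf(x, lag.max = 4, plot = FALSE)            # only lag 2 is far from 0
# theoretical rho(2) = b0*c0 / (b0^2 + c0^2) = 1/4.25, about 0.235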
1.4 b)
It is easy to obtain the expectation of this process:
E[X_t] = E[Z_1] cos(ct) + E[Z_2] sin(ct) = 0
For the covariance we do:
\gamma(h) = Cov(Z_1 cos(ct) + Z_2 sin(ct), Z_1 cos(c(t-h)) + Z_2 sin(c(t-h)))
= \sigma^2 (cos(ct) cos(c(t-h)) + sin(ct) sin(c(t-h)))
= \sigma^2 cos(ct - ct + ch)
\gamma(h) = \sigma^2 cos(ch)
We can see that both the expected value and the covariance don't depend on t, and therefore the
process is weakly stationary.
1.4 c)
It is easy to obtain the expectation of this process:
E[X_t] = E[Z_t] cos(ct) + E[Z_{t-1}] sin(ct) = 0
For the covariance we do:
\gamma(t, t-h) = Cov(Z_t cos(ct) + Z_{t-1} sin(ct), Z_{t-h} cos(c(t-h)) + Z_{t-h-1} sin(c(t-h)))
= cos(ct) cos(c(t-h)) Cov(Z_t, Z_{t-h}) + cos(ct) sin(c(t-h)) Cov(Z_t, Z_{t-h-1})
+ sin(ct) cos(c(t-h)) Cov(Z_{t-1}, Z_{t-h}) + sin(ct) sin(c(t-h)) Cov(Z_{t-1}, Z_{t-h-1})

\gamma(t, t-h) = { \sigma^2,                       h = 0
                 { \sigma^2 sin(ct) cos(c(t-1)),   h = 1
                 { \sigma^2 cos(ct) sin(c(t+1)),   h = -1
                 { 0,                              otherwise

We can see that if c = k\pi for some integer k, both the expected value and the covariance don't depend on t, and
therefore the process is weakly stationary. Otherwise the process is not weakly stationary.
1.4 d)
It is easy to obtain the expectation of this process:
E[X_t] = a + bE[Z_0] = a
For the covariance we do:
\gamma(h) = Cov(a + bZ_0, a + bZ_0)
= b^2 Cov(Z_0, Z_0) = b^2 \sigma^2
We can see that both the expected value and the covariance don't depend on t, and therefore the
process is weakly stationary.
1.4 e)
It is easy to obtain the expectation of this process:
E[X_t] = E[Z_0] cos(ct) = 0
For the covariance we do:
\gamma(t, t-h) = Cov(Z_0 cos(ct), Z_0 cos(c(t-h)))
= cos(ct) cos(c(t-h)) \sigma^2
We can see that if c = k\pi for some integer k, both the expected value and the covariance don't depend on t, and
therefore the process is weakly stationary. Otherwise the process is not weakly stationary.
1.4 f)
It is easy to obtain the expectation of this process:
E[X_t] = E[Z_t] E[Z_{t-1}] = 0
For the covariance we do:
\gamma(h) = Cov(Z_t Z_{t-1}, Z_{t-h} Z_{t-h-1})

\gamma(h) = { \sigma^4,   h = 0
            { 0,          otherwise

We can see that both the expected value and the covariance don't depend on t, and therefore the
process is weakly stationary.
1.5 a)
For this process, X_t = Z_t + \theta Z_{t-2} with {Z_t} ~ WN(0, \sigma^2), we have:

\gamma(h) = { (1 + \theta^2) \sigma^2,   h = 0
            { \theta \sigma^2,           |h| = 2
            { 0,                         otherwise

\rho(h) = { 1,                          h = 0
          { \theta / (1 + \theta^2),    |h| = 2
          { 0,                          otherwise

So we substitute \theta = 0.8 (and \sigma^2 = 1) and obtain:

\gamma(h) = { 1.64,   h = 0
            { 0.8,    |h| = 2
            { 0,      otherwise

\rho(h) = { 1,       h = 0
          { 0.488,   |h| = 2
          { 0,       otherwise
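We can confirm the lag-2 correlation numerically in R (an added check; ARMAacf() from the stats package computes theoretical ARMA autocorrelations):

ARMAacf(ma = c(0, 0.8), lag.max = 3)   # MA coefficients encode X_t = Z_t + 0.8 Z_{t-2}
# lags 0..3: 1.000 0.000 0.488 0.000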

1.5 b)
For this process, with \bar{X} = (1/4) \sum_{t=1}^{4} X_t, we have:
Var(\bar{X}) = (1/16) Var(\sum_{t=1}^{4} X_t) = (1/16) (\sum_{t=1}^{4} Var(X_t) + \sum_{s \ne t} Cov(X_s, X_t))
= (1/16) (4\gamma(0) + 4\gamma(2))
= ((1 + \theta^2) + \theta) \sigma^2 / 4
So we substitute \theta = 0.8 (and \sigma^2 = 1) and obtain:
Var(\bar{X}) = 0.61
1.5 c)
So we substitute \theta = -0.8 and obtain:
Var(\bar{X}) = 0.21
We can see that the negative covariance at lag 2 causes the variance of the sample mean to decrease.
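A direct R computation of Var(\bar{X}) from the 4x4 covariance matrix (an added numerical check) reproduces both values:

var_mean <- function(theta, n = 4) {
  h <- abs(outer(1:n, 1:n, "-"))              # matrix of lags |s - t|
  gam <- ifelse(h == 0, 1 + theta^2,          # gamma(0), with sigma^2 = 1
                ifelse(h == 2, theta, 0))     # gamma(2); all other lags are 0
  sum(gam) / n^2                              # variance of the sample mean
}
var_mean(0.8)    # 0.61
var_mean(-0.8)   # 0.21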
1.6 a)
For this process we have:

\gamma(h) = { \sigma^2 / (1 - \phi^2),   h = 0
            { \phi^{|h|} \gamma(0),      h \ne 0

With \bar{X} = (1/4) \sum_{t=1}^{4} X_t we have:
Var(\bar{X}) = (1/16) Var(\sum_{t=1}^{4} X_t) = (1/16) (\sum_{t=1}^{4} Var(X_t) + \sum_{s \ne t} Cov(X_s, X_t))
= (1/16) (4\gamma(0) + 6\gamma(1) + 4\gamma(2) + 2\gamma(3))
So we substitute \phi = 0.9 and \sigma^2 = 1 and obtain:
Var(\bar{X}) = 4.6375
1.6 b)
So we substitute \phi = -0.9 and \sigma^2 = 1 and obtain:
Var(\bar{X}) = 0.126
We can see that the negative covariance at odd lags causes the variance of the sample mean to decrease.
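The same covariance-matrix check in R (added), now with the AR(1) autocovariances \gamma(h) = \sigma^2 \phi^{|h|} / (1 - \phi^2):

var_mean_ar1 <- function(phi, n = 4, sigma2 = 1) {
  h <- abs(outer(1:n, 1:n, "-"))
  gam <- sigma2 * phi^h / (1 - phi^2)   # AR(1) autocovariance matrix
  sum(gam) / n^2
}
var_mean_ar1(0.9)    # 4.6375
var_mean_ar1(-0.9)   # 0.1256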
1.7)
E[X_t + Y_t] = E[X_t] + E[Y_t] = \mu_X + \mu_Y
We can see that the expected value does not depend on time.
\gamma_{X+Y}(h) = Cov(X_{t+h} + Y_{t+h}, X_t + Y_t) = Cov(X_{t+h}, X_t) + Cov(Y_{t+h}, Y_t) = \gamma_X(h) + \gamma_Y(h)
where the cross terms vanish because the two processes are uncorrelated. We can see that the covariance is just the
sum of the two covariance functions and depends only on h, so the sum process is stationary.
1.8 a)
Recall that X_t = Z_t for t even and X_t = (Z_{t-1}^2 - 1)/\sqrt{2} for t odd, where {Z_t} is iid N(0,1).
Using the MGF, we know that if Z is N(0,1) then the moments are:
E[Z] = 0, E[Z^2] = 1, E[Z^3] = 0, E[Z^4] = 3
Using this we obtain:

E[X_t] = { E[Z_t] = 0,                          t even
         { E[(Z_{t-1}^2 - 1)/\sqrt{2}] = 0,     t odd

Now if t is even and h = 0 we have:
\gamma(0) = Cov(X_t, X_t) = E[Z_t^2] = 1
Now if t is even and h = 1 we have:
\gamma(1) = Cov(X_t, X_{t+1}) = (1/\sqrt{2}) (E[Z_t^3] - E[Z_t]) = 0
Now if t is odd and h = 0 we have:
\gamma(0) = Cov(X_t, X_t) = (1/2) (E[Z_{t-1}^4] - 2E[Z_{t-1}^2] + 1) = 1
Now if t is odd and h = 1 we have:
\gamma(1) = Cov(X_t, X_{t+1}) = E[(Z_{t-1}^2 - 1)/\sqrt{2}] E[Z_{t+1}] = 0
by independence of Z_{t-1} and Z_{t+1}. And clearly for every t, if h > 1 we have:
\gamma(h) = 0
Therefore we have the covariance function:

\gamma(h) = { 1,   h = 0
            { 0,   h \ne 0

And therefore we have proven that this process is white noise WN(0,1). Now to prove that it is not iid we
are going to use the result in the next part.
1.8 b)
If n is odd (so n+1 is even and X_{n+1} = Z_{n+1} is independent of X_1, ..., X_n):
E[X_{n+1} | X_1, X_2, ..., X_n] = E[Z_{n+1} | X_1, ..., X_n] = E[Z_{n+1}] = 0
If n is even (so n+1 is odd and X_{n+1} = (Z_n^2 - 1)/\sqrt{2} = (X_n^2 - 1)/\sqrt{2}):
E[X_{n+1} | X_1, X_2, ..., X_n] = E[(Z_n^2 - 1)/\sqrt{2} | Z_n = X_n] = (X_n^2 - 1)/\sqrt{2}
From this result we can clearly see that {X_t} is not iid: for an iid sequence the conditional expectation
would not depend on X_n.
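A short R simulation (an added illustration) makes this visible: the sample ACF looks like white noise, yet for even t the next value is an exact function of the current one:

set.seed(4)
n <- 2000
z <- rnorm(n + 1)                        # z[k] plays the role of Z_{k-1}
x <- numeric(n)
for (t in 1:n) {
  x[t] <- if (t %% 2 == 0) z[t + 1] else (z[t]^2 - 1) / sqrt(2)
}
acf(x, lag.max = 5, plot = FALSE)        # all sample autocorrelations near 0
t_even <- seq(2, n - 2, by = 2)
all.equal(x[t_even + 1], (x[t_even]^2 - 1) / sqrt(2))   # TRUE: exact dependence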


1.9 a)
To make the results simpler, for the trend x_t = a + bt we select a = -b(n+1)/2. We can then see that:
\bar{x} = a + b(n+1)/2 = 0
We therefore express the sample covariance function (for h > 0) as:
\hat{\gamma}(h) = (1/n) \sum_{t=1}^{n-h} (a + bt)(a + b(t+h)) = (1/n) \sum_{t=1}^{n-h} ((a + bt)^2 + bh(a + bt))
We also know that
\hat{\gamma}(0) = (1/n) \sum_{t=1}^{n} (a + bt)^2
We therefore write the sample correlation as:
\hat{\rho}(h) = [\sum_{t=1}^{n-h} (a + bt)^2] / [\sum_{t=1}^{n} (a + bt)^2] + bh [\sum_{t=1}^{n-h} (a + bt)] / [\sum_{t=1}^{n} (a + bt)^2]
As n \to \infty the first ratio tends to 1 (both sums are of order n^3 and differ only in the last h terms),
while the second term tends to 0 (its numerator is of order n^2 against a denominator of order n^3). Therefore:
lim_{n \to \infty} \hat{\rho}(h) = 1
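A quick R check (added; the intercept and slope are arbitrary) shows the sample ACF of a pure linear trend approaching 1:

n <- 1000
x <- 2 + 0.5 * (1:n)               # x_t = a + b t
acf(x, lag.max = 3, plot = FALSE)  # all values close to 1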

1.9 b)
To make the results simpler, for x_t = cos(ct) we select c = \pi and n even. We can then see that:
\bar{x} = (1/n) \sum_{t=1}^{n} cos(\pi t) = 0
because of the periodicity of cos(\pi t): the terms alternate between -1 and +1.
We therefore express the sample covariance function (for h > 0) as:
\hat{\gamma}(h) = (1/n) \sum_{t=1}^{n-h} cos(\pi t) cos(\pi (t+h))
Now if h \ne 0 and h is odd, every term cos(\pi t) cos(\pi (t+h)) equals -1, so we have:
\hat{\gamma}(h) = (1/n) \sum_{t=1}^{n-h} (-1) = -(n-h)/n
Now if h \ne 0 and h is even we have:
\hat{\gamma}(h) = (1/n) \sum_{t=1}^{n-h} (+1) = (n-h)/n
Now if h = 0 we have:
\hat{\gamma}(0) = (1/n) \sum_{t=1}^{n} (+1) = 1
We therefore obtain the sample correlation function:

\hat{\rho}(h) = { 1 - h/n,       h even
               { -(1 - h/n),     h odd

And therefore obtain:

lim_{n \to \infty} \hat{\rho}(h) = { 1,    h even
                                   { -1,   h odd

which equals exactly the value of cos(ch) when c = \pi.
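And the corresponding R check (added):

n <- 1000                            # n even
x <- cos(pi * (1:n))                 # alternates -1, +1
acf(x, lag.max = 4, plot = FALSE)    # approximately -1, 1, -1, 1 = cos(pi h)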

1.10)
We define:
\nabla m_t = m_t - m_{t-1}
If we define m_t = c_0 + c_1 t + c_2 t^2 + ... + c_p t^p, then:
\nabla m_t = c_0 (t^0 - (t-1)^0) + c_1 (t^1 - (t-1)^1) + ... + c_p (t^p - (t-1)^p)
= \sum_{k=0}^{p} c_k (t^k - (t-1)^k)
Using the binomial expansion we can express:
(t-1)^k = \sum_{j=0}^{k} \binom{k}{j} t^{k-j} (-1)^j = t^k + \sum_{j=1}^{k} \binom{k}{j} t^{k-j} (-1)^j
and therefore we can express \nabla m_t as:
\nabla m_t = -\sum_{k=1}^{p} c_k \sum_{j=1}^{k} \binom{k}{j} t^{k-j} (-1)^j
We can clearly see that the inner summation (from the binomial expansion) starts at j = 1, so for each k the
highest surviving power of t is t^{k-1}; hence the highest power appearing in \nabla m_t is t^{p-1}. In other
words, each application of \nabla reduces the degree of the polynomial by exactly one. Applying this result
recursively, \nabla^p m_t is a constant (namely p! c_p), and therefore:
\nabla^{p+1} m_t = 0
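In R (an added check), repeated differencing of a cubic confirms this: the third difference is constant and the fourth is identically zero:

tt <- 1:20
m <- 2 + 3 * tt - tt^2 + 0.5 * tt^3   # polynomial of degree p = 3
diff(m, differences = 3)              # constant: 3! * 0.5 = 3
diff(m, differences = 4)              # all zeros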
1.11 a)
Here the filter is the simple moving average a_j = 1/(2q+1) for |j| \le q, and we can clearly see that it
satisfies the conditions:
\sum_{j=-q}^{q} a_j = 1
\sum_{j=-q}^{q} j a_j = 0
And therefore, applied to the linear trend m_t = c_0 + c_1 t:
\sum_{j=-q}^{q} a_j m_{t-j} = \sum_{j=-q}^{q} a_j (c_0 + c_1 (t - j)) = c_0 \cdot 1 + c_1 t \cdot 1 - c_1 \cdot 0 = c_0 + c_1 t = m_t

1.11 b)
Writing the filtered noise as A_t = \sum_{j=-q}^{q} (1/(2q+1)) Z_{t-j}, we calculate:
E[A_t] = \sum_{j=-q}^{q} (1/(2q+1)) E[Z_{t-j}] = 0
Var(A_t) = Var(\sum_{j=-q}^{q} (1/(2q+1)) Z_{t-j}) = (1/(2q+1))^2 \sum_{j=-q}^{q} Var(Z_{t-j}) = (1/(2q+1))^2 (2q+1) \sigma^2 = \sigma^2 / (2q+1)
which becomes small as q grows, so the filter passes the trend while attenuating the noise.
1.12 a)
We can express the output of a filter {a_j} applied to m_t = \sum_{k=0}^{p} c_k t^k as:
\sum_j a_j m_{t-j} = \sum_j a_j \sum_{k=0}^{p} c_k (t-j)^k
Expanding (t-j)^k = t^k + \sum_{i=1}^{k} \binom{k}{i} (-1)^i t^{k-i} j^i by the binomial theorem gives:
\sum_j a_j m_{t-j} = m_t \sum_j a_j + \sum_{k=1}^{p} c_k \sum_{i=1}^{k} \binom{k}{i} (-1)^i t^{k-i} (\sum_j j^i a_j)
And from this expression we can see that this is going to equal m_t, for every choice of the c_k, if and only if:
\sum_j a_j = 1
\sum_j j^i a_j = 0
for every i = 1, ..., p.
1.12 b)
We use what was obtained in a), and in R we run the following code to check that:
\sum_j a_j = 1
\sum_j j^i a_j = 0
for i = 1, 2, 3, so that this (Spencer 15-point) filter passes cubic trends without distortion:

j <- -7:7
aj <- (1/320) * c(-3, -6, -5, 3, 21, 46, 67, 74, 67, 46, 21, 3, -5, -6, -3)
sum(aj)          # 1
t(j)   %*% aj    # 0  (first moment)
t(j^2) %*% aj    # 0  (second moment)
t(j^3) %*% aj    # 0  (third moment)

1.14)
We use what was obtained in 1.12 a), and in R we run the following code to check that:
\sum_j a_j = 1
\sum_j j^i a_j = 0
for i = 1, 2, 3:

j <- -2:2
aj <- (1/9) * c(-1, 4, 3, 4, -1)
sum(aj)          # 1
t(j)   %*% aj    # 0  (first moment)
t(j^2) %*% aj    # 0  (second moment)
t(j^3) %*% aj    # 0  (third moment)

1.15 a)
Using properties of the difference operators and of the seasonality (s_t = s_{t-12}, and \nabla annihilates
the remaining linear trend), we can express:
Z_t = \nabla \nabla_{12} X_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}
E[Z_t] = 0
Cov(Z_t, Z_{t-h}) = Cov(Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}, Y_{t-h} - Y_{t-h-1} - Y_{t-h-12} + Y_{t-h-13})
= 4\gamma_Y(h) - 2\gamma_Y(h+1) - 2\gamma_Y(h-1) + \gamma_Y(h+11) + \gamma_Y(h-11) - 2\gamma_Y(h+12)
- 2\gamma_Y(h-12) + \gamma_Y(h+13) + \gamma_Y(h-13)
We can see that neither the expected value nor the covariance depends on t, and therefore Z_t is
weakly stationary.
1.15 b)
Using properties of the difference operators and of the seasonality, we can express:
Z_t = \nabla_{12}^2 X_t = Y_t - 2Y_{t-12} + Y_{t-24}
E[Z_t] = 0
Cov(Z_t, Z_{t-h}) = Cov(Y_t - 2Y_{t-12} + Y_{t-24}, Y_{t-h} - 2Y_{t-h-12} + Y_{t-h-24})
= 6\gamma_Y(h) - 4\gamma_Y(h+12) - 4\gamma_Y(h-12) + \gamma_Y(h+24) + \gamma_Y(h-24)
We can see that neither the expected value nor the covariance depends on t, and therefore Z_t is
weakly stationary.
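A simulated check in R (added; the trend, seasonal pattern, and noise are illustrative choices) shows that both differencing schemes reduce the series to a mean-zero, stationary-looking remainder:

set.seed(5)
n <- 600
tt <- 1:n
s <- rep(sin(2 * pi * (1:12) / 12), length.out = n)   # period-12 seasonal component
x <- 5 + 0.02 * tt + s + rnorm(n)                     # trend + seasonal + noise
z_a <- diff(diff(x, lag = 12))                        # nabla nabla_12 X_t  (part a)
z_b <- diff(x, lag = 12, differences = 2)             # nabla_12^2 X_t     (part b)
round(c(mean(z_a), mean(z_b)), 3)                     # both near 0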
1.18)
We show the resulting graphs of what is requested in the problem:
[Figure: two ITSM series plots. First panel: the original series, values roughly 8000 to 10200, over t = 0 to 70. Second panel: a second series, values roughly -600 to 600, over t = 10 to 70.]
============================================
ITSM::(Tests of randomness on residuals)
============================================
Ljung - Box statistic = .20240E+03 Chi-Square ( 20 ), p-value = .00000
McLeod - Li statistic = .20508E+03 Chi-Square ( 20 ), p-value = .00000
# Turning points = 43.000~AN(46.667,sd = 3.5324), p-value = .29926
# Diff sign points = 36.000~AN(35.500,sd = 2.4664), p-value = .83935
Rank test statistic = .97100E+03~AN(.12780E+04,sd = .10285E+03), p-value = .00284
Jarque-Bera test statistic (for normality) = 10.602 Chi-Square (2), p-value = .00499
Order of Min AICC YW Model for Residuals = 2

[Figure: plot of a third series, values roughly -2000 to 2000, over t = 10 to 90.]
