
Roberto Valdez

Rjv2119
Homework 1
1.1 a)
We construct the function we want to minimize in terms of c:

L(c) = E[(Y - c)²] = E[Y² - 2Yc + c²] = E[Y²] - 2c E[Y] + c²

We differentiate with respect to c and set the derivative equal to 0:

dL/dc = -2 E[Y] + 2c = 0

from which we obtain that the value of c that minimizes the function is:

ĉ = E[Y]

We can also check the second derivative to confirm it is indeed a minimum:

d²L/dc² = 2 > 0
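As an informal numerical check (not part of the required solution), the expectation can be approximated by a sample average in R; the N(3, 2²) distribution and the grid of candidate constants below are arbitrary choices.

# Sketch: approximate E[(Y - c)^2] by a sample average over a grid of constants c.
set.seed(1)
y <- rnorm(1e5, mean = 3, sd = 2)           # any distribution with finite variance
cs <- seq(0, 6, by = 0.01)                  # grid of candidate constants
mse <- sapply(cs, function(c) mean((y - c)^2))
cs[which.min(mse)]                          # close to mean(y), i.e. close to 3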
1.1 b)
We construct the function we want to minimize in terms of f(X):

L(f(X)) = E[(Y - f(X))² | X]

E[Y² - 2Y f(X) + f(X)² | X] = E[Y² | X] - 2 f(X) E[Y | X] + f(X)²

We differentiate with respect to f(X) and set the derivative equal to 0:

dL/df(X) = -2 E[Y | X] + 2 f(X) = 0

from which we obtain that the value of f(X) that minimizes the function is:

f̂(X) = E[Y | X]

We can also check the second derivative to confirm it is indeed a minimum:

d²L/df(X)² = 2 > 0
1.1 c)
We construct the function we want to minimize in terms of g(X):

L(g(X)) = E[(Y - g(X))²] = E[ E[(Y - g(X))² | X] ] = ∫ E[(Y - g(X))² | X = x] f(x) dx

Using what was shown in the previous part, we know that E[(Y - g(X))² | X = x] is minimized for every x when:

ĝ(X) = E[Y | X]

Knowing that the density f(x) is always nonnegative, we can clearly see that

∫ E[(Y - g(X))² | X = x] f(x) dx

is minimized when the conditional expectation inside is at its minimum for every x, so we select:

ĝ(X) = E[Y | X]
1.2 a)
We construct the function we want to minimize in terms of f(X_1, X_2, ..., X_n):

L(f(X_1, ..., X_n)) = E[(X_{n+1} - f(X_1, ..., X_n))² | X_1, ..., X_n]

= E[X_{n+1}² | X_1, ..., X_n] - 2 f(X_1, ..., X_n) E[X_{n+1} | X_1, ..., X_n] + f(X_1, ..., X_n)²

We differentiate with respect to f(X_1, ..., X_n) and set the derivative equal to 0:

dL/df(X_1, ..., X_n) = -2 E[X_{n+1} | X_1, ..., X_n] + 2 f(X_1, ..., X_n) = 0

from which we obtain that the value of f(X_1, ..., X_n) that minimizes the function is:

f̂(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]

We can also check the second derivative to confirm it is indeed a minimum:

d²L/df(X_1, ..., X_n)² = 2 > 0

1.2 b)
We construct the function we want to minimize in terms of g(X_1, X_2, ..., X_n):

L(g(X_1, ..., X_n)) = E[(X_{n+1} - g(X_1, ..., X_n))²] = E[ E[(X_{n+1} - g(X_1, ..., X_n))² | X_1, ..., X_n] ]

= ∫ E[(X_{n+1} - g(X_1, ..., X_n))² | X_1 = x_1, ..., X_n = x_n] f(x_1, ..., x_n) dx_1 ... dx_n

Using what was shown in the previous part, we know that E[(X_{n+1} - g(X_1, ..., X_n))² | X_1, ..., X_n] is minimized when:

ĝ(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]

Knowing that the joint density f(x_1, ..., x_n) is always nonnegative, we can clearly see that the integral above is minimized when the conditional expectation inside is at its minimum for every (x_1, ..., x_n), so we select:

ĝ(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]
1.2 c)
We know from the previous part that

ĝ(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n]

minimizes the mean squared error. If we add the information that X_1, X_2, ... are iid with finite variance and mean μ, then X_{n+1} is independent of X_1, ..., X_n and we obtain:

ĝ(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n] = E[X_{n+1}] = μ
1.2 d)
We construct a linear estimator for μ in terms of coefficients a_i and the iid observations X_1, X_2, ..., X_n:

μ̂ = Σ_{i=1}^{n} a_i X_i

We want this estimator to be unbiased, so the following condition has to be met:

E[μ̂] = μ

For this to happen, using our iid assumption we know that:

E[μ̂] = E[ Σ_{i=1}^{n} a_i X_i ] = Σ_{i=1}^{n} a_i E[X_i] = μ Σ_{i=1}^{n} a_i

Therefore we obtain that if we want this estimator to be unbiased, then:

Σ_{i=1}^{n} a_i = 1

Now we also want this estimator to have minimum variance, because the MSE equals the squared bias plus the variance. Therefore, if we construct an estimator that is unbiased and has minimum variance, we know it is the best linear estimator. We therefore compute:

Var(μ̂) = Var( Σ_{i=1}^{n} a_i X_i ) = Σ_{i=1}^{n} a_i² Var(X_i) = σ² Σ_{i=1}^{n} a_i²

So the estimator has minimum variance when Σ_{i=1}^{n} a_i² is minimal. We therefore construct the minimization problem:

min_{a_i} Σ_{i=1}^{n} a_i²   subject to   Σ_{i=1}^{n} a_i = 1

and the Lagrangian:

L(a_1, ..., a_n, λ) = Σ_{i=1}^{n} a_i² + λ ( Σ_{i=1}^{n} a_i - 1 )

Differentiating with respect to each a_i gives 2a_i + λ = 0, so all the a_i are equal, and the constraint then gives:

â_i = 1/n

Therefore we have shown that the estimator of μ

μ̂ = (1/n) Σ_{i=1}^{n} X_i

is the best linear unbiased estimator under iid conditions.
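As a quick sanity check (not required by the problem), a short R simulation can compare the variance of the equal-weight estimator with another unbiased weighting; the alternative weights and the N(10, 2²) distribution below are arbitrary choices.

# Sketch: compare Var of the equal-weight estimator with another unbiased weighting.
set.seed(1)
n <- 5
w_equal <- rep(1 / n, n)
w_other <- c(0.4, 0.3, 0.2, 0.05, 0.05)     # arbitrary weights, also summing to 1
sims <- replicate(1e4, {
  x <- rnorm(n, mean = 10, sd = 2)          # iid sample with mu = 10, sigma = 2
  c(equal = sum(w_equal * x), other = sum(w_other * x))
})
apply(sims, 1, var)    # smallest for the equal weights: sigma^2/n = 4/5 = 0.8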


1.2 e)
In a previous part we showed that

ĝ(X_1, ..., X_n) = E[X_{n+1} | X_1, ..., X_n] = E[X_{n+1}] = μ

is the value that minimizes the MSE, i.e. the expected squared difference between X_{n+1} and the predictor. In the previous part we also obtained that the estimator

μ̂ = (1/n) Σ_{i=1}^{n} X_i

is the best linear unbiased estimator of μ under iid conditions. We can therefore say that

(1/n) Σ_{i=1}^{n} X_i

is the best linear predictor of X_{n+1} that is unbiased for μ.


1.2 f)
From the problem we know that:

S_{n+1} = S_n + X_{n+1}

and therefore we decide to construct an estimate in terms of S_n of the form:

g(S_n) = S_n + c

that minimizes the MSE for S_{n+1}, i.e. E[(S_{n+1} - g(S_n))² | S_n]. With the above information we perform the following calculation:

E[(S_{n+1} - g(S_n))² | S_n] = E[(S_{n+1} - S_n - c)² | S_n] = E[(X_{n+1} - c)² | S_n] = E[(X_{n+1} - c)²]

since X_{n+1} is independent of S_n. For this problem we derived the solution in a previous part:

ĉ = E[X_{n+1}] = μ

We therefore construct the estimator:

ĝ(S_n) = S_n + μ

1.3)
From the strict stationarity definition we select n = 1 and h = T and obtain that X_1 and X_{1+T} have the same distribution. We can therefore see that for any T:

E[X_1] = E[X_2] = ... = E[X_{1+T}]

Therefore the mean is independent of time.

We now select n = 2 and any t with h = T and obtain that (X_t, X_{t+T}) and (X_1, X_{1+T}) have the same joint distribution. We can therefore see that:

Cov(X_t, X_{t+T}) = Cov(X_1, X_{1+T})

and therefore the covariance depends only on the lag T and not on t. Proving these two conditions (together with the finite second moments) proves that the process is weakly stationary as well.
1.4 a)
It is easy to obtain the expectation of this process:

E[X_t] = a + b E[Z_t] + c E[Z_{t-2}] = a

For the covariance we do:

γ(h) = Cov(a + b Z_t + c Z_{t-2}, a + b Z_{t-h} + c Z_{t-h-2})

= b² Cov(Z_t, Z_{t-h}) + bc Cov(Z_t, Z_{t-h-2}) + bc Cov(Z_{t-2}, Z_{t-h}) + c² Cov(Z_{t-2}, Z_{t-h-2})

γ(h) = σ²(b² + c²) if h = 0,  bcσ² if |h| = 2,  0 otherwise.

We can see that neither the expected value nor the covariance depends on t, and therefore the process is weakly stationary.
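As an informal check (not required), we can simulate the process and compare the sample autocovariances at lags 0 and 2 with σ²(b² + c²) and bcσ²; the values a = 1, b = 2, c = 0.5 (written cc below to avoid masking R's c()) and σ = 1 are arbitrary.

# Sketch: sample ACVF of X_t = a + b*Z_t + cc*Z_{t-2} versus the theoretical values.
set.seed(1)
n <- 1e5; a <- 1; b <- 2; cc <- 0.5; sigma <- 1
z <- rnorm(n + 2, sd = sigma)
x <- a + b * z[3:(n + 2)] + cc * z[1:n]
acf(x, lag.max = 4, type = "covariance", plot = FALSE)$acf[c(1, 3)]
# close to sigma^2*(b^2 + cc^2) = 4.25 at lag 0 and b*cc*sigma^2 = 1 at lag 2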
1.4 b)
It is easy to obtain the expectation of this process:

E[X_t] = E[Z_1] cos(ct) + E[Z_2] sin(ct) = 0

For the covariance we do:

γ(h) = Cov(Z_1 cos(ct) + Z_2 sin(ct), Z_1 cos(c(t-h)) + Z_2 sin(c(t-h)))

= σ²( cos(ct) cos(c(t-h)) + sin(ct) sin(c(t-h)) ) = σ² cos(ct - ct + ch)

γ(h) = σ² cos(ch)

We can see that neither the expected value nor the covariance depends on t, and therefore the process is weakly stationary.
1.4 c)
It is easy to obtain the expectation of this process:

E[X_t] = E[Z_t] cos(ct) + E[Z_{t-1}] sin(ct) = 0

For the covariance we do:

Cov(X_t, X_{t-h}) = Cov(Z_t cos(ct) + Z_{t-1} sin(ct), Z_{t-h} cos(c(t-h)) + Z_{t-h-1} sin(c(t-h)))

= cos(ct) cos(c(t-h)) Cov(Z_t, Z_{t-h}) + cos(ct) sin(c(t-h)) Cov(Z_t, Z_{t-h-1}) + sin(ct) cos(c(t-h)) Cov(Z_{t-1}, Z_{t-h}) + sin(ct) sin(c(t-h)) Cov(Z_{t-1}, Z_{t-h-1})

Cov(X_t, X_{t-h}) = σ² if h = 0,  σ² sin(ct) cos(c(t-1)) if h = 1,  σ² cos(ct) sin(c(t+1)) if h = -1,  0 otherwise.

We can see that if c = kπ with k ∈ ℤ, then both the expected value and the covariance do not depend on t (the lag ±1 terms vanish) and the process is weakly stationary. If not, the covariance depends on t and the process is not weakly stationary.
1.4 d)
It is easy to obtain the expectation of this process:

E[X_t] = a + b E[Z_0] = a

For the covariance we do:

γ(h) = Cov(a + b Z_0, a + b Z_0) = b² Cov(Z_0, Z_0) = b² σ²

We can see that neither the expected value nor the covariance depends on t, and therefore the process is weakly stationary.
1.4 e)
It is easy to obtain the expectation of this process:

E[X_t] = E[Z_0] cos(ct) = 0

For the covariance we do:

Cov(X_t, X_{t-h}) = Cov(Z_0 cos(ct), Z_0 cos(c(t-h))) = cos(ct) cos(c(t-h)) σ²

We can see that if c = kπ with k ∈ ℤ, then both the expected value and the covariance do not depend on t and the process is weakly stationary. If not, the covariance depends on t and the process is not weakly stationary.
1.4 f)
It is easy to obtain the expectation of this process:

E[X_t] = E[Z_t] E[Z_{t-1}] = 0

For the covariance we do:

γ(h) = Cov(Z_t Z_{t-1}, Z_{t-h} Z_{t-h-1})

γ(h) = E[Z_t² Z_{t-1}²] = σ⁴ if h = 0,  0 otherwise.

We can see that neither the expected value nor the covariance depends on t, and therefore the process is weakly stationary.
1.5 a)
For this process we have:

γ(h) = 1 + θ² if h = 0,  θ if |h| = 2,  0 otherwise.

ρ(h) = 1 if h = 0,  θ/(1 + θ²) if |h| = 2,  0 otherwise.

So we substitute θ = 0.8 and obtain:

γ(h) = 1.64 if h = 0,  0.8 if |h| = 2,  0 otherwise.

ρ(h) = 1 if h = 0,  0.488 if |h| = 2,  0 otherwise.

1.5 b)
For this process we have:

Var(X̄) = Var( (1/4) Σ_{i=1}^{4} X_i ) = (1/16) [ Σ_{i=1}^{4} Var(X_i) + Σ_{i≠j} Cov(X_i, X_j) ] = (1/16) ( 4γ(0) + 4γ(2) )

So we substitute θ = 0.8 and obtain:

Var(X̄) = (1/16) ( 4(1.64) + 4(0.8) ) = 0.61
1.5 c)

So we substitute θ = -0.8 and obtain:

Var(X̄) = (1/16) ( 4(1.64) + 4(-0.8) ) = 0.21

We can see that the negative covariance at lag 2 causes the variance of the sample mean to decrease.
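As a sketch of the same calculation in R (not required), the two variances can be reproduced directly from the autocovariance function; the helper function below is our own construction.

# Sketch: Var of the mean of X_1,...,X_4 for X_t = Z_t + theta*Z_{t-2}, Z_t ~ WN(0,1).
var_xbar <- function(theta, n = 4) {
  gam <- function(h) ifelse(h == 0, 1 + theta^2, ifelse(abs(h) == 2, theta, 0))
  lags <- outer(1:n, 1:n, "-")      # matrix of lags i - j
  sum(gam(lags)) / n^2
}
var_xbar(0.8)     # 0.61
var_xbar(-0.8)    # 0.21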
1.6 a)
For this process we have:

γ(h) = σ²/(1 - φ²) if h = 0,  φ^|h| γ(0) if h ≠ 0

For the sample mean of n = 4 observations we have:

Var(X̄) = Var( (1/4) Σ_{i=1}^{4} X_i ) = (1/16) [ Σ_{i=1}^{4} Var(X_i) + Σ_{i≠j} Cov(X_i, X_j) ] = (1/16) ( 4γ(0) + 6γ(1) + 4γ(2) + 2γ(3) )

So we substitute φ = 0.9 and σ² = 1 and obtain:

Var(X̄) = 4.6375
1.6 b)

So we substitute φ = -0.9 and σ² = 1 in the same expression and obtain:

Var(X̄) = (1/16) ( 4γ(0) + 6γ(1) + 4γ(2) + 2γ(3) ) ≈ 0.126

We can see that the negative covariance at odd lags causes the variance of the sample mean to decrease.
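The analogous sketch in R (again our own construction, assuming σ² = 1 and n = 4) reproduces both values from the AR(1) autocovariance function.

# Sketch: same calculation with the AR(1) ACVF gamma(h) = sigma^2 * phi^|h| / (1 - phi^2).
var_xbar_ar1 <- function(phi, sigma2 = 1, n = 4) {
  gam <- function(h) sigma2 * phi^abs(h) / (1 - phi^2)
  lags <- outer(1:n, 1:n, "-")
  sum(gam(lags)) / n^2
}
var_xbar_ar1(0.9)     # 4.6375
var_xbar_ar1(-0.9)    # about 0.126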
1.7)

E[X_t + Y_t] = E[X_t] + E[Y_t] = μ_X + μ_Y

We can see that the expected value does not depend on time.

γ(h) = Cov(X_t + Y_t, X_{t-h} + Y_{t-h}) = Cov(X_t, X_{t-h}) + Cov(Y_t, Y_{t-h}) = γ_X(h) + γ_Y(h)

where the cross-covariance terms vanish because {X_t} and {Y_t} are uncorrelated. We can see that the covariance of the sum is just the sum of the two covariances and only depends on h, so the sum is stationary.
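As an informal check (not required), the sample autocovariances of the sum of two independently simulated stationary series are close to the sum of their individual sample autocovariances; the AR(1) and MA(1) models below are arbitrary choices.

# Sketch: sample ACVF of the sum of two independent stationary series versus
# the sum of their individual sample ACVFs.
set.seed(1)
n <- 1e4
x <- arima.sim(list(ar = 0.6), n)
y <- arima.sim(list(ma = 0.4), n)
g <- function(v) acf(v, lag.max = 3, type = "covariance", plot = FALSE)$acf[, 1, 1]
round(cbind(sum_series = g(x + y), sum_acvfs = g(x) + g(y)), 3)   # nearly equal columns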
1.8 a)
Using the MGF, we know that if Z_t is N(0,1) then its moments are:

E[Z_t] = 0,  E[Z_t²] = 1,  E[Z_t³] = 0,  E[Z_t⁴] = 3

Using this we obtain:

E[X_t] = 0 whether t is even or odd (for t odd, E[(Z_{t-1}² - 1)/√2] = (1 - 1)/√2 = 0).

Now if t is even and h = 0 we have:

γ(0) = γ(t, t) = E[Z_t²] = 1

Now if t is even and h = 1 we have:

γ(1) = γ(t, t+1) = Cov(Z_t, (Z_t² - 1)/√2) = (1/√2)( E[Z_t³] - E[Z_t] ) = 0

Now if t is odd and h = 0 we have:

γ(0) = γ(t, t) = (1/2) E[(Z_{t-1}² - 1)²] = (1/2)( E[Z_{t-1}⁴] - 2 E[Z_{t-1}²] + 1 ) = 1

Now if t is odd and h = 1 we have:

γ(1) = γ(t, t+1) = Cov((Z_{t-1}² - 1)/√2, Z_{t+1}) = E[Z_{t+1}] E[(Z_{t-1}² - 1)/√2] = 0

And clearly for every t, if h > 1 we have γ(h) = 0 by independence.

Therefore we have the covariance function:

γ(h) = 1 if h = 0,  0 if h ≠ 0

and therefore we have proven that this process is white noise WN(0,1). To prove that it is not iid we are going to use the result in the next part.
1.8 b)
If n is odd, then n + 1 is even and X_{n+1} = Z_{n+1}, which is independent of X_1, ..., X_n, so:

E[X_{n+1} | X_1, ..., X_n] = E[Z_{n+1} | X_1, ..., X_n] = E[Z_{n+1}] = 0

If n is even, then n + 1 is odd, X_{n+1} = (Z_n² - 1)/√2 and X_n = Z_n, so:

E[X_{n+1} | X_1, ..., X_n] = E[ (Z_n² - 1)/√2 | X_n = x_n ] = E[ (Z_n² - 1)/√2 | Z_n = x_n ] = (x_n² - 1)/√2

From this result the conditional expectation of X_{n+1} depends on X_n, so we can clearly see that {X_t} is not iid (an iid sequence would give E[X_{n+1} | X_1, ..., X_n] = 0 for every n).
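As an informal simulation check (not required), we can generate the process and see that its sample ACF looks like white noise while the squared values of consecutive terms are correlated; the sample size is arbitrary.

# Sketch: X_t = Z_t for t even, X_t = (Z_{t-1}^2 - 1)/sqrt(2) for t odd.
set.seed(1)
n <- 1e5
z <- rnorm(n)
x <- ifelse(seq_len(n) %% 2 == 0, z, c(NA, head(z, -1)^2 - 1) / sqrt(2))
x <- x[-1]                                 # drop X_1, which has no Z_0 available
acf(x, lag.max = 5, plot = FALSE)$acf      # close to 1, 0, 0, ... as for white noise
cor(x[-length(x)]^2, x[-1]^2)              # clearly nonzero, so the X_t are not iid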


1.9 a)
To make the results simpler we select b = -2a/(n+1), so that:

x̄ = a + (b/2)(n + 1) = 0

We therefore express the sample covariance function as:

γ̂(h) = (1/n) Σ_{t=1}^{n-h} (a + b(t + h))(a + bt) = (1/n) [ Σ_{t=1}^{n-h} (a + bt)² + bh Σ_{t=1}^{n-h} (a + bt) ]

We also know that:

γ̂(0) = (1/n) Σ_{t=1}^{n} (a + bt)²

We therefore obtain the sample autocorrelation:

ρ̂(h) = [ Σ_{t=1}^{n-h} (a + bt)² + bh Σ_{t=1}^{n-h} (a + bt) ] / Σ_{t=1}^{n} (a + bt)²

When n → ∞ with h fixed, the terms missing from the truncated sums become negligible relative to the full sums (n - h → n), and therefore the expression above tends to 1:

lim_{n→∞} ρ̂(h) = 1  for any fixed h > 0.
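As an informal check (not required), the sample ACF of a pure linear trend computed in R is close to 1 and approaches 1 as n grows; the values a = 2 and b = 0.5 are arbitrary.

# Sketch: sample ACF at lag h of the pure trend x_t = a + b*t.
trend_acf <- function(n, h, a = 2, b = 0.5) {
  x <- a + b * (1:n)
  acf(x, lag.max = h, plot = FALSE)$acf[h + 1]
}
trend_acf(100, 1)      # about 0.97
trend_acf(10000, 1)    # about 0.9997, approaching 1 as n grows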

1.9 b)
To make the results simpler we take ω = π and n even. We can then see that:

x̄ = (1/n) Σ_{t=1}^{n} c cos(ωt) = 0

because of the periodicity of cos(ωt). We therefore express the sample covariance function as:

γ̂(h) = (c²/n) Σ_{t=1}^{n-h} cos(ωt) cos(ω(t + h))

Now if h ≠ 0 and h is odd, every term equals -1 and we have:

γ̂(h) = (c²/n) Σ_{t=1}^{n-h} (-1) = (c²/n)(h - n)

Now if h ≠ 0 and h is even, every term equals +1 and we have:

γ̂(h) = (c²/n) Σ_{t=1}^{n-h} (+1) = (c²/n)(n - h)

Now if h = 0 we have:

γ̂(0) = (c²/n) Σ_{t=1}^{n} (+1) = (c²/n)(n) = c²

We therefore obtain the sample autocorrelation function:

ρ̂(h) = 1 - h/n if h is even,  -(1 - h/n) if h is odd

and therefore obtain, for any fixed h > 0:

lim_{n→∞} ρ̂(h) = 1 if h is even,  -1 if h is odd

which equals exactly the value of cos(ωh) when ω = π.
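As an informal check (not required), computing the sample ACF of x_t = c cos(πt) in R shows the alternating pattern ±(1 - h/n); c = 2 and n = 200 are arbitrary.

# Sketch: sample ACF of x_t = 2*cos(pi*t) for n = 200 observations.
n <- 200
x <- 2 * cos(pi * (1:n))
round(acf(x, lag.max = 4, plot = FALSE)$acf[, 1, 1], 3)
# pattern: 1, -(1 - 1/n), (1 - 2/n), -(1 - 3/n), (1 - 4/n)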

1.10)
We define the difference operator:

∇m_t = m_t - m_{t-1}

If we define m_t = c_0 + c_1 t + c_2 t² + ... + c_p t^p, then:

∇m_t = c_0 (t⁰ - (t-1)⁰) + c_1 (t¹ - (t-1)¹) + ... + c_p (t^p - (t-1)^p) = Σ_{k=0}^{p} c_k ( t^k - (t-1)^k )

Using the binomial expansion we can express:

(t - 1)^k = Σ_{i=0}^{k} C(k, i) t^{k-i} (-1)^i = t^k + Σ_{i=1}^{k} C(k, i) t^{k-i} (-1)^i

and therefore we can express ∇m_t as:

∇m_t = - Σ_{k=1}^{p} c_k Σ_{i=1}^{k} C(k, i) t^{k-i} (-1)^i

We can clearly see that the inner summation (the binomial expansion) starts at i = 1, and therefore for every c_k the highest power of t that appears is t^{k-1}; in particular, for c_p the highest power is t^{p-1}. In other words, one application of ∇ turns a polynomial of degree p into a polynomial of degree p - 1. This result applies recursively, so ∇^p m_t is a constant and therefore:

∇^{p+1} m_t = 0
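As an informal check (not required), we can verify numerically that the (p+1)-th difference of a degree-p polynomial is zero; the cubic below with arbitrary coefficients is an example with p = 3.

# Sketch: the 4th difference of a cubic polynomial trend is identically zero,
# while the 3rd difference is a nonzero constant.
tt <- 1:20
m <- 2 + 1.5 * tt - 0.3 * tt^2 + 0.01 * tt^3   # arbitrary degree-3 polynomial
range(diff(m, differences = 3))                # constant 6 * 0.01 = 0.06
range(diff(m, differences = 4))                # zero (up to floating point error)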

1.11 a)
We can clearly see that, given the conditions of the filter:

Σ_{j=-q}^{q} a_j = 1   and   Σ_{j=-q}^{q} j a_j = 0

and therefore:

Σ_{j=-q}^{q} a_j m_{t-j} = Σ_{j=-q}^{q} a_j ( c_0 + c_1 (t - j) ) = c_0 Σ_{j=-q}^{q} a_j + c_1 t Σ_{j=-q}^{q} a_j - c_1 Σ_{j=-q}^{q} j a_j = c_0 + c_1 t = m_t

1.11 b)
We calculate:

E[A_t] = E[ Σ_{j=-q}^{q} a_j Z_{t-j} ] = Σ_{j=-q}^{q} a_j E[Z_{t-j}] = 0

Var(A_t) = Var( Σ_{j=-q}^{q} a_j Z_{t-j} ) = Σ_{j=-q}^{q} a_j² Var(Z_{t-j}) = σ² (2q + 1) (1/(2q + 1))² = σ²/(2q + 1)
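As an informal check (not required), a simulated moving average of white noise has sample variance close to σ²/(2q + 1); q = 2 and σ = 1 are arbitrary choices.

# Sketch: variance of a (2q+1)-term moving average of white noise.
set.seed(1)
q <- 2
z <- rnorm(1e5)
a <- filter(z, rep(1 / (2 * q + 1), 2 * q + 1), sides = 2)   # A_t = sum_j a_j Z_{t-j}
var(a, na.rm = TRUE)    # close to 1/(2q+1) = 0.2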

1.12 a)
We can express:

Σ_j a_j m_{t-j} = Σ_j a_j Σ_{k=0}^{p} c_k (t - j)^k = m_t Σ_j a_j + Σ_{k=1}^{p} c_k Σ_j a_j ( (t - j)^k - t^k )

= m_t Σ_j a_j + Σ_{k=1}^{p} c_k Σ_{i=1}^{k} C(k, i) t^{k-i} Σ_j (-j)^i a_j

and from this expression we can see that this is going to be equal to m_t only if

Σ_j a_j = 1   and   Σ_j (-j)^i a_j = 0  (equivalently Σ_j j^i a_j = 0)

for every i = 1, ..., p.
1.12 b)
Using what was obtained in a), we use the following R code to check that

Σ_j a_j = 1   and   Σ_j j^i a_j = 0

for i = 1, 2, 3.
j=-7:7                                                       # filter indices j = -7,...,7
aj=(1/320)*c(-3,-6,-5,3,21,46,67,74,67,46,21,3,-5,-6,-3)     # filter coefficients a_j
sum(aj)                                                      # check sum of a_j
[1] 1
print(t(j)%*%aj)                                             # check sum of j*a_j
     [,1]
[1,]    0
j2=j^2
print(t(j2)%*%aj)                                            # check sum of j^2*a_j
     [,1]
[1,]    0
j3=j^3
print(t(j3)%*%aj)                                            # check sum of j^3*a_j
     [,1]
[1,]    0

1.14)
Using what was obtained in 1.12 a), we use the following R code to check that

Σ_j a_j = 1   and   Σ_j j^i a_j = 0

for i = 1, 2, 3, for the given filter.
j=-2:2                                  # filter indices j = -2,...,2
aj=(1/9)*c(-1,4,3,4,-1)                 # filter coefficients a_j

sum(aj)                                 # check sum of a_j
[1] 1
print(t(j)%*%aj)                        # check sum of j*a_j
     [,1]
[1,]    0
j2=j^2
print(t(j2)%*%aj)                       # check sum of j^2*a_j
     [,1]
[1,]    0
j3=j^3
print(t(j3)%*%aj)                       # check sum of j^3*a_j
     [,1]
[1,]    0

1.15 a)
Using the properties of the difference operators and of the seasonality (period 12), we can express:

Z_t = ∇∇_12 X_t = Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}

E[Z_t] = 0

Cov(Z_t, Z_{t-h}) = Cov(Y_t - Y_{t-1} - Y_{t-12} + Y_{t-13}, Y_{t-h} - Y_{t-h-1} - Y_{t-h-12} + Y_{t-h-13})
= 4γ(h) - 2γ(h+1) - 2γ(h-1) - 2γ(h+12) - 2γ(h-12) + γ(h+11) + γ(h-11) + γ(h+13) + γ(h-13)

where γ is the autocovariance function of {Y_t}. We can see that neither the expected value nor the covariance depends on t, and therefore {Z_t} is weakly stationary.
1.15 b)
Using the same properties, we can express:

Z_t = ∇_12² X_t = Y_t - 2Y_{t-12} + Y_{t-24}

E[Z_t] = 0

Cov(Z_t, Z_{t-h}) = Cov(Y_t - 2Y_{t-12} + Y_{t-24}, Y_{t-h} - 2Y_{t-h-12} + Y_{t-h-24})
= 6γ(h) - 4γ(h+12) - 4γ(h-12) + γ(h+24) + γ(h-24)

We can see that neither the expected value nor the covariance depends on t, and therefore {Z_t} is weakly stationary.
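As an informal check (not required), we can verify numerically that ∇∇_12 applied to a series with a linear trend and a period-12 seasonal component leaves exactly ∇∇_12 Y_t; the trend, seasonal shape and AR(1) choice for Y_t below are arbitrary.

# Sketch: apply the operator del del_12 (diff at lag 1 after diff at lag 12) to
# X_t = a + b*t + s_t + Y_t and check it equals the same operator applied to Y_t alone.
set.seed(1)
n <- 240
tt <- 1:n
s <- rep(sin(2 * pi * (1:12) / 12), length.out = n)   # seasonal component with period 12
y <- arima.sim(list(ar = 0.5), n)                     # an arbitrary stationary Y_t
x <- 3 + 0.1 * tt + s + y                             # trend + seasonality + Y_t
z1 <- diff(diff(x, lag = 12))                         # del del_12 X_t
z2 <- diff(diff(y, lag = 12))                         # del del_12 Y_t
max(abs(z1 - z2))                                     # numerically zero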
1.18)
We show the resulting graphs of what is requested in the problem:
[Plot 1: Series, y approximately 8000 to 10200, x = 0 to 70]

[Plot 2: Series, y approximately -600 to 600, x = 0 to 70]
============================================
ITSM::(Tests of randomness on residuals)
============================================
Ljung - Box statistic = .20240E+03 Chi-Square ( 20 ), p-value = .00000
McLeod - Li statistic = .20508E+03 Chi-Square ( 20 ), p-value = .00000
# Turning points = 43.000~AN(46.667,sd = 3.5324), p-value = .29926

# Diff sign points = 36.000~AN(35.500,sd = 2.4664), p-value = .83935


Rank test statistic = .97100E+03~AN(.12780E+04,sd = .10285E+03), p-value = .00284
Jarque-Bera test statistic (for normality) = 10.602 Chi-Square (2), p-value = .00499
Order of Min AICC YW Model for Residuals = 2

[Plot 3: Series, y approximately -2000 to 2000, x = 10 to 90]