
ECE 531: Detection and Estimation Theory, Spring 2011

Homework 1
ALL SOLUTIONS BY Shu Wang — thanks for submitting a LaTeX file for the first homework!
Problem 1 (2.1)
Solution:
$$E[\hat{\sigma}^2] = E\left[\frac{1}{N}\sum_{n=0}^{N-1} x^2[n]\right] = \frac{1}{N}\sum_{n=0}^{N-1} E[x^2[n]] = \frac{1}{N}\,N\sigma^2 = \sigma^2$$
So this is an unbiased estimator.
$$\mathrm{Var}(\hat{\sigma}^2) = \mathrm{Var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x^2[n]\right) = \frac{1}{N^2}\sum_{n=0}^{N-1}\mathrm{Var}(x^2[n]) = \frac{1}{N^2}\,N\,\mathrm{Var}(x^2[n]) = \frac{1}{N}\mathrm{Var}(x^2[n])$$
Since the $x[n]$ are IID, the $x^2[n]$ are also IID. We know that $\mathrm{Var}(x^2[n]) = E[x^4[n]] - E[x^2[n]]^2$. By the central-moment formula
$$E[(x-\mu)^p] = \begin{cases} 0 & p \text{ odd} \\ \sigma^p\,(p-1)!! & p \text{ even} \end{cases}$$
where $n!!$ denotes the double factorial (the product of every odd number from $n$ down to 1) and $\mu$ is the mean of $x$. In this problem the mean of $x$ is 0, so $E[x^4[n]] = 3\sigma^4$ and $\mathrm{Var}(x^2[n]) = 3\sigma^4 - \sigma^4 = 2\sigma^4$. Then
$$\mathrm{Var}(\hat{\sigma}^2) = \frac{1}{N^2}\,N\,2\sigma^4 = \frac{2\sigma^4}{N}$$
and $\mathrm{Var}(\hat{\sigma}^2) \to 0$ as $N \to \infty$.
Problem 2 (2.3)
Solution:
$$E[\hat{A}] = E\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{N}\sum_{n=0}^{N-1} E[x[n]] = \frac{1}{N}\,NA = A$$
$$\mathrm{Var}(\hat{A}) = \mathrm{Var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) = \frac{1}{N^2}\,N\,\mathrm{Var}(x[n]) = \frac{1}{N}\sigma^2 = \frac{\sigma^2}{N}$$
Since the $x[n]$ are IID Gaussian, we have $\hat{A} \sim \mathcal{N}\!\left(A, \frac{\sigma^2}{N}\right)$.
Problem 3 (2.8)
Solution:
From 2.3 we know that $\hat{A} \sim \mathcal{N}(A, \sigma^2/N)$. Then, for any $\epsilon > 0$:
$$\lim_{N\to\infty} \Pr\{|\hat{A} - A| > \epsilon\} = \lim_{N\to\infty} \Pr\left\{\frac{|\hat{A} - A|}{\sqrt{\sigma^2/N}} > \frac{\epsilon}{\sqrt{\sigma^2/N}}\right\}$$
In terms of the Q-function,
$$\lim_{N\to\infty} \Pr\left\{\frac{|\hat{A} - A|}{\sqrt{\sigma^2/N}} > \frac{\epsilon}{\sqrt{\sigma^2/N}}\right\} = \lim_{N\to\infty} 2Q\left(\frac{\epsilon}{\sqrt{\sigma^2/N}}\right) = \lim_{N\to\infty} 2Q\left(\frac{\epsilon\sqrt{N}}{\sigma}\right) = 0$$
So $\hat{A}$ is consistent.
Now consider the estimator $\check{A} = \frac{1}{2N}\sum_{n=0}^{N-1} x[n]$:
$$E[\check{A}] = E\left[\frac{1}{2N}\sum_{n=0}^{N-1} x[n]\right] = \frac{1}{2N}\sum_{n=0}^{N-1} E[x[n]] = \frac{1}{2N}\,NA = \frac{A}{2}$$
$$\mathrm{Var}(\check{A}) = \mathrm{Var}\left(\frac{1}{2N}\sum_{n=0}^{N-1} x[n]\right) = \frac{1}{4N^2}\sum_{n=0}^{N-1}\mathrm{Var}(x[n]) = \frac{1}{4N^2}\,N\sigma^2 = \frac{\sigma^2}{4N}$$
Since the $x[n]$ are IID white Gaussian, $\check{A} \sim \mathcal{N}\!\left(\frac{A}{2}, \frac{\sigma^2}{4N}\right)$. By the same Q-function argument as above,
$$\lim_{N\to\infty} \Pr\left\{\left|\check{A} - \frac{A}{2}\right| > \epsilon\right\} = \lim_{N\to\infty} 2Q\left(\frac{\epsilon}{\sqrt{\sigma^2/4N}}\right) = \lim_{N\to\infty} 2Q\left(\frac{2\epsilon\sqrt{N}}{\sigma}\right) = 0$$
so $\check{A}$ converges in probability to $A/2$, not to $A$. $\check{A}$ is a biased estimator centered at $A/2$, so $\check{A}$ is not consistent.
Problem 4 (2.9)
Solution:
$$E[\hat{\theta}] = E\left[\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right)^2\right] = \mathrm{Var}\left(\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right) + E\left[\frac{1}{N}\sum_{n=0}^{N-1} x[n]\right]^2 = \frac{\sigma^2}{N} + A^2 \neq A^2 = \theta$$
So this is a biased estimator. Since $E[\hat{\theta}] \to A^2$ as $N \to \infty$, the estimator becomes unbiased in the limit: it is asymptotically unbiased.
ECE 531 - Detection and Estimation Theory
Homework 2
Solutions
3.3 (Luke Vercimak) The data $x[n] = Ar^n + w[n]$ for $n = 0, 1, \ldots, N-1$ are observed, where $w[n]$ is WGN with variance $\sigma^2$ and $r > 0$ is known. Find the CRLB for $A$. Show that an efficient estimator exists and find its variance. What happens to the variance as $N \to \infty$ for various values of $r$?
The PDF is:
$$p(\mathbf{x}; A) = \frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}\left(x[n] - Ar^n\right)^2\right]$$
$$\frac{\partial \ln p(\mathbf{x}; A)}{\partial A} = -\frac{1}{2\sigma^2}\sum_{n=0}^{N-1} 2\left(x[n] - Ar^n\right)(-1)r^n = \frac{1}{\sigma^2}\sum_{n=0}^{N-1}\left(x[n]r^n - Ar^{2n}\right)$$
$$\frac{\partial^2 \ln p(\mathbf{x}; A)}{\partial A^2} = -\frac{1}{\sigma^2}\sum_{n=0}^{N-1} r^{2n}$$
$$I(A) = -E\left[\frac{\partial^2 \ln p(\mathbf{x}; A)}{\partial A^2}\right] = \frac{1}{\sigma^2}\sum_{n=0}^{N-1} r^{2n} = \frac{1}{\sigma^2}\cdot\frac{r^{2N} - 1}{r^2 - 1} \quad (r \neq 1)$$
$$\mathrm{CRLB}(A) = \begin{cases} \dfrac{\sigma^2}{N} & r = 1 \\[2ex] \dfrac{\sigma^2(r^2 - 1)}{r^{2N} - 1} & \text{otherwise} \end{cases}$$
Since
$$\frac{\partial \ln p(\mathbf{x}; A)}{\partial A} = \frac{\sum_{n=0}^{N-1} r^{2n}}{\sigma^2}\left(\frac{\sum_{n=0}^{N-1} x[n]r^n}{\sum_{n=0}^{N-1} r^{2n}} - A\right) = I(A)\left(\hat{A} - A\right),$$
an efficient estimator exists, namely $\hat{A} = \sum_{n=0}^{N-1} x[n]r^n \big/ \sum_{n=0}^{N-1} r^{2n}$, and its variance equals the CRLB. As $N \to \infty$: for $r \geq 1$ the CRLB goes to 0, so $\hat{A} \to A$; for $r < 1$ the CRLB approaches the nonzero constant $\sigma^2(1 - r^2)$, so the variance never improves beyond that value no matter how many samples are taken.
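To see the three regimes numerically, here is a minimal sketch (not from the original solution) that evaluates the CRLB above for several values of $r$; $\sigma^2 = 1$ and the sample sizes are arbitrary choices.

# CRLB of Problem 3.3 as a function of N for several r
import numpy as np

def crlb(N, r, sigma2=1.0):
    if r == 1.0:
        return sigma2 / N
    return sigma2 * (r**2 - 1) / (r**(2 * N) - 1)

for r in (0.5, 1.0, 1.1):
    print("r =", r, [round(crlb(N, r), 6) for N in (10, 100, 1000)])
# For r = 0.5 the bound levels off at sigma2*(1 - r^2) = 0.75;
# for r >= 1 it keeps decreasing toward 0.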
3.11 (Luke Vercimak) For a $2 \times 2$ Fisher information matrix
$$\mathbf{I}(\theta) = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$$
which is positive definite, show that
$$\left[\mathbf{I}^{-1}(\theta)\right]_{11} = \frac{c}{ac - b^2} \geq \frac{1}{a} = \frac{1}{[\mathbf{I}(\theta)]_{11}}$$
What does this say about estimating a parameter when a second parameter is either known or unknown? When does equality hold and why?
$$\mathbf{I}(\theta) = \begin{bmatrix} a & b \\ b & c \end{bmatrix} \text{ is positive definite} \;\Rightarrow\; \text{all principal minors are positive:}$$
$$\det[a] = a > 0, \qquad \det\begin{bmatrix} a & b \\ b & c \end{bmatrix} = ac - b^2 > 0$$
Since $b^2 \geq 0$ and $c > 0$, we have $ac \geq ac - b^2 > 0$; taking reciprocals and multiplying by $c$ gives
$$\frac{c}{ac - b^2} \geq \frac{c}{ac} = \frac{1}{a}.$$
This shows that the CRLB on a parameter when a second parameter must also be estimated is greater than or equal to the CRLB when that second parameter is known: having to estimate an extra parameter can only make the bound worse. Equality holds when $b = 0$, i.e., when the Fisher information matrix is diagonal so that the two parameters are decoupled (the estimation error of one is uncorrelated with the other), in which case not knowing the second parameter costs nothing.
3.15 (Shu Wang) We know that each $\mathbf{x}[n] \sim \mathcal{N}(\mathbf{0}, \mathbf{C}(\rho))$ and that the $\mathbf{x}[n]$ are independent. If we let $i(\rho)$ be the Fisher information contributed by a single $\mathbf{x}[n]$, then $I(\rho) = N\,i(\rho)$. According to equation 3.32 of the textbook:
$$i(\rho) = \left[\frac{\partial \boldsymbol{\mu}(\rho)}{\partial \rho}\right]^T \mathbf{C}^{-1}(\rho)\left[\frac{\partial \boldsymbol{\mu}(\rho)}{\partial \rho}\right] + \frac{1}{2}\mathrm{tr}\left[\left(\mathbf{C}^{-1}(\rho)\frac{\partial \mathbf{C}(\rho)}{\partial \rho}\right)^2\right] = \frac{1}{2}\mathrm{tr}\left[\left(\mathbf{C}^{-1}(\rho)\frac{\partial \mathbf{C}(\rho)}{\partial \rho}\right)^2\right]$$
since the mean is zero. With
$$\mathbf{C}(\rho) = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}, \qquad \frac{\partial \mathbf{C}(\rho)}{\partial \rho} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad \mathbf{C}^{-1}(\rho) = \frac{1}{1 - \rho^2}\begin{bmatrix} 1 & -\rho \\ -\rho & 1 \end{bmatrix}$$
we get
$$\left(\mathbf{C}^{-1}(\rho)\frac{\partial \mathbf{C}(\rho)}{\partial \rho}\right)^2 = \frac{1}{(1 - \rho^2)^2}\begin{bmatrix} 1 + \rho^2 & -2\rho \\ -2\rho & 1 + \rho^2 \end{bmatrix}$$
So $\frac{1}{2}\mathrm{tr}\left[\left(\mathbf{C}^{-1}(\rho)\,\partial\mathbf{C}(\rho)/\partial\rho\right)^2\right] = \frac{1 + \rho^2}{(1 - \rho^2)^2}$. Then
$$I(\rho) = \frac{N(1 + \rho^2)}{(1 - \rho^2)^2}, \qquad \mathrm{CRLB} = \frac{1}{I(\rho)} = \frac{(1 - \rho^2)^2}{N(1 + \rho^2)}.$$
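A short numerical check (my addition, not in the original solution) of the single-sample Fisher information and the resulting CRLB; $\rho = 0.3$ and $N = 100$ are arbitrary test values.

# Numerical check of i(rho) = (1 + rho^2)/(1 - rho^2)^2 for Problem 3.15
import numpy as np

rho, N = 0.3, 100
C = np.array([[1.0, rho], [rho, 1.0]])
dC = np.array([[0.0, 1.0], [1.0, 0.0]])
M = np.linalg.inv(C) @ dC
i_single = 0.5 * np.trace(M @ M)                   # (1/2) tr[(C^-1 dC/drho)^2]
print(i_single, (1 + rho**2) / (1 - rho**2)**2)    # the two numbers should match
print("CRLB:", (1 - rho**2)**2 / (N * (1 + rho**2)))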
ECE 531 - Detection and Estimation Theory
Homework 3
4.6 (Correction: Shu Wang) In this problem we only have a single frequency component, so $\hat{\boldsymbol{\theta}} = [\hat{a}_k, \hat{b}_k]^T$.
According to Example 4.2, we have
$$\mathbf{C}_{\hat{\theta}} = \begin{bmatrix} \frac{2\sigma^2}{N} & 0 \\ 0 & \frac{2\sigma^2}{N} \end{bmatrix}$$
So $\hat{a}_k \sim \mathcal{N}\!\left(a_k, \frac{2\sigma^2}{N}\right)$ and $\hat{b}_k \sim \mathcal{N}\!\left(b_k, \frac{2\sigma^2}{N}\right)$, and $\hat{a}_k$ and $\hat{b}_k$ are independent.
$$E[\hat{P}] = E\left[\frac{\hat{a}_k^2 + \hat{b}_k^2}{2}\right] = \frac{1}{2}\left(E[\hat{a}_k^2] + E[\hat{b}_k^2]\right) = \frac{1}{2}\left[\mathrm{Var}(\hat{a}_k) + E^2[\hat{a}_k] + \mathrm{Var}(\hat{b}_k) + E^2[\hat{b}_k]\right] = \frac{1}{2}\left[\frac{2\sigma^2}{N} + a_k^2 + \frac{2\sigma^2}{N} + b_k^2\right] = \frac{2\sigma^2}{N} + \frac{a_k^2 + b_k^2}{2}$$
Let $P = \frac{a_k^2 + b_k^2}{2}$. Then $E[\hat{P}] = \frac{2\sigma^2}{N} + P$, so $E^2[\hat{P}] = \left(\frac{2\sigma^2}{N} + P\right)^2$.
$$\mathrm{Var}(\hat{P}) = \mathrm{Var}\left(\frac{\hat{a}_k^2 + \hat{b}_k^2}{2}\right) = \frac{1}{4}\left[\mathrm{Var}(\hat{a}_k^2) + \mathrm{Var}(\hat{b}_k^2)\right]$$
According to the textbook, page 38, Eq. 3.19: if $\xi \sim \mathcal{N}(\mu, \sigma^2)$, then
$$E[\xi^2] = \mu^2 + \sigma^2, \qquad E[\xi^4] = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4, \qquad \mathrm{Var}(\xi^2) = 4\mu^2\sigma^2 + 2\sigma^4$$
So $\mathrm{Var}(\hat{a}_k^2) = 4a_k^2\frac{2\sigma^2}{N} + 2\left(\frac{2\sigma^2}{N}\right)^2$ and $\mathrm{Var}(\hat{b}_k^2) = 4b_k^2\frac{2\sigma^2}{N} + 2\left(\frac{2\sigma^2}{N}\right)^2$. Then:
$$\mathrm{Var}(\hat{P}) = (a_k^2 + b_k^2)\frac{2\sigma^2}{N} + \left(\frac{2\sigma^2}{N}\right)^2 = \frac{2\sigma^2}{N}\left[2P + \frac{2\sigma^2}{N}\right]$$
So
$$\frac{E^2[\hat{P}]}{\mathrm{Var}(\hat{P})} = \frac{\left(\frac{2\sigma^2}{N} + P\right)^2}{\frac{2\sigma^2}{N}\left[2P + \frac{2\sigma^2}{N}\right]} = 1 + \frac{P^2 N^2}{4\sigma^2\left(PN + \sigma^2\right)}$$
If $a_k = b_k = 0$, then $P = 0$ and $\frac{E^2[\hat{P}]}{\mathrm{Var}(\hat{P})} = 1$. But if $P \gg \frac{2\sigma^2}{N}$, then
$$\frac{E^2[\hat{P}]}{\mathrm{Var}(\hat{P})} \approx \frac{P^2}{P\,\frac{4\sigma^2}{N}} = \frac{PN}{4\sigma^2} \gg 1,$$
and the signal will be easily detected.
4.13 (Shu Wang) In practice we sometimes encounter the linear model $\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$, but with $\mathbf{H}$ composed of random variables. Suppose we ignore this difference and use our usual estimator
$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x}$$
where we assume that the particular realization of $\mathbf{H}$ is known to us. Show that if $\mathbf{H}$ and $\mathbf{w}$ are independent, the mean and covariance of $\hat{\boldsymbol{\theta}}$ are
$$E(\hat{\boldsymbol{\theta}}) = \boldsymbol{\theta}, \qquad \mathbf{C}_{\hat{\theta}} = \sigma^2 E_H\!\left[(\mathbf{H}^T\mathbf{H})^{-1}\right]$$
where $E_H$ denotes the expectation with respect to the PDF of $\mathbf{H}$. What happens if the independence assumption is not made?
$$E[\hat{\boldsymbol{\theta}}] = E[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x}] = E[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T(\mathbf{H}\boldsymbol{\theta} + \mathbf{w})] = E[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{H}\boldsymbol{\theta}] + E[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{w}]$$
Because $\mathbf{H}$ and $\mathbf{w}$ are independent and $\mathbf{w}$ has zero mean:
$$E[\hat{\boldsymbol{\theta}}] = E[\boldsymbol{\theta}] + E[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T]\,E[\mathbf{w}] = E[\boldsymbol{\theta}] = \boldsymbol{\theta}$$
$$\begin{aligned}
\mathbf{C}_{\hat{\theta}} &= E\!\left[(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})(\hat{\boldsymbol{\theta}} - \boldsymbol{\theta})^T\right] \\
&= E\!\left[\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} - \boldsymbol{\theta}\right)\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} - \boldsymbol{\theta}\right)^T\right] \\
&= E\!\left[\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} - (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{H}\boldsymbol{\theta}\right)\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} - (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{H}\boldsymbol{\theta}\right)^T\right] \\
&= E\!\left[\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T(\mathbf{x} - \mathbf{H}\boldsymbol{\theta})\right)\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T(\mathbf{x} - \mathbf{H}\boldsymbol{\theta})\right)^T\right] \\
&= E\!\left[\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{w}\right)\left((\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{w}\right)^T\right] \\
&= E_{H,w}\!\left[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{w}\mathbf{w}^T\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\right] \\
&= E_H\!\left[E_{w|H}\!\left[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{w}\mathbf{w}^T\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\right]\right] \\
&= E_H\!\left[(\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\sigma^2\mathbf{I}\,\mathbf{H}(\mathbf{H}^T\mathbf{H})^{-1}\right] \\
&= E_H\!\left[\sigma^2(\mathbf{H}^T\mathbf{H})^{-1}\right] = \sigma^2 E_H\!\left[(\mathbf{H}^T\mathbf{H})^{-1}\right]
\end{aligned}$$
where the inner expectation uses $E[\mathbf{w}\mathbf{w}^T|\mathbf{H}] = \sigma^2\mathbf{I}$, which holds because $\mathbf{H}$ and $\mathbf{w}$ are independent.
If $\mathbf{H}$ and $\mathbf{w}$ are not independent, then $E[\hat{\boldsymbol{\theta}}]$ need not equal $\boldsymbol{\theta}$, so $\hat{\boldsymbol{\theta}}$ may be biased.
5.3 (Luke Vercimak) The IID observations $x[n]$ for $n = 0, 1, \ldots, N-1$ have the exponential PDF
$$p(x[n]; \lambda) = \begin{cases} \lambda\exp(-\lambda x[n]) & x[n] > 0 \\ 0 & x[n] < 0 \end{cases}$$
Find a sufficient statistic for $\lambda$.
Since the observations are IID, the joint distribution is
$$p(\mathbf{x}; \lambda) = \lambda^N\exp\left(-\lambda\sum_{n=0}^{N-1} x[n]\right) = \left[\lambda^N\exp\left(-\lambda\sum_{n=0}^{N-1} x[n]\right)\right]\cdot(1) = \left(\lambda^N\exp[-\lambda T(\mathbf{x})]\right)\cdot(1) = g(T(\mathbf{x}), \lambda)\,h(\mathbf{x})$$
By the Neyman-Fisher factorization theorem,
$$T(\mathbf{x}) = \sum_{n=0}^{N-1} x[n]$$
is a sufficient statistic for $\lambda$.
5.9 (Luke Vercimak) Assume that $x[n]$ is the result of a Bernoulli trial (a coin toss) with
$$\Pr\{x[n] = 1\} = \theta, \qquad \Pr\{x[n] = 0\} = 1 - \theta$$
and that $N$ IID observations have been made. Assuming the Neyman-Fisher factorization theorem holds for discrete random variables, find a sufficient statistic for $\theta$. Then, assuming completeness, find the MVU estimator of $\theta$.
Let $p$ be the number of times $x[n] = 1$, i.e. $p = \sum_{n=0}^{N-1} x[n]$. Since the observations are IID,
$$\Pr[\mathbf{x}] = \prod_{n=0}^{N-1}\Pr[x[n]] = \theta^p(1 - \theta)^{N-p} = \frac{\theta^p(1 - \theta)^N}{(1 - \theta)^p} = \left(\frac{\theta}{1 - \theta}\right)^p(1 - \theta)^N = \left[\left(\frac{\theta}{1 - \theta}\right)^{T(\mathbf{x})}(1 - \theta)^N\right]\cdot[1] = g(T(\mathbf{x}), \theta)\,h(\mathbf{x})$$
By the Neyman-Fisher factorization theorem,
$$T(\mathbf{x}) = p = \sum_{n=0}^{N-1} x[n]$$
is a sufficient statistic for $\theta$.
To get the MVU estimator, the RBLS theorem says that we need:
1. $T(\mathbf{x})$ is complete. This is given in the problem statement.
2. An unbiased function of $T(\mathbf{x})$:
$$E[T(\mathbf{x})] = E\left[\sum_{n=0}^{N-1} x[n]\right] = \sum_{n=0}^{N-1}E[x[n]] = \sum_{n=0}^{N-1}\left[\Pr(x[n] = 1)\cdot 1 + \Pr(x[n] = 0)\cdot 0\right] = \sum_{n=0}^{N-1}\left[\theta\cdot 1 + (1 - \theta)\cdot 0\right] = N\theta$$
Therefore an unbiased estimator of $\theta$ based on $T(\mathbf{x})$ is
$$\hat{\theta} = \frac{1}{N}\sum_{n=0}^{N-1} x[n]$$
By the RBLS theorem, this is also the MVU estimator.
ECE 531 - Detection and Estimation Theory
Homework 4
February 5, 2011
6.7 (Shu Wang) Assume that $x[n] = As[n] + w[n]$ for $n = 0, 1, \ldots, N-1$ are observed, where $w[n]$ is zero-mean noise with covariance matrix $\mathbf{C}$ and $s[n]$ is a known signal. The amplitude $A$ is to be estimated using a BLUE. Find the BLUE and discuss what happens if $\mathbf{s} = [s[0]\; s[1]\; \ldots\; s[N-1]]^T$ is an eigenvector of $\mathbf{C}$. Also, find the minimum variance.
Since the $s[n]$ are known, $E[x[n]] = As[n]$, so the BLUE is
$$\hat{A} = \frac{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{x}}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}$$
and the minimum variance is $\mathrm{var}(\hat{A}) = \frac{1}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}$. From the problem we know that $\mathbf{s}$ is an eigenvector of $\mathbf{C}$. By a standard property of eigenvectors: if $\mathbf{s}$ is an eigenvector of $\mathbf{C}$ corresponding to the eigenvalue $\lambda$ and $\mathbf{C}$ is invertible, then $\mathbf{s}$ is an eigenvector of $\mathbf{C}^{-1}$ corresponding to the eigenvalue $1/\lambda$.
Proof:
$$\mathbf{C}\mathbf{s} = \lambda\mathbf{s} \;\Rightarrow\; \mathbf{C}^{-1}\mathbf{C}\mathbf{s} = \lambda\mathbf{C}^{-1}\mathbf{s} \;\Rightarrow\; \mathbf{s} = \lambda\mathbf{C}^{-1}\mathbf{s} \;\Rightarrow\; \frac{1}{\lambda}\mathbf{s} = \mathbf{C}^{-1}\mathbf{s}$$
So $1/\lambda$ is the eigenvalue of $\mathbf{C}^{-1}$ associated with $\mathbf{s}$, and
$$\mathrm{var}(\hat{A}) = \frac{1}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}} = \frac{1}{\frac{1}{\lambda}\mathbf{s}^T\mathbf{s}} = \frac{\lambda}{\mathbf{s}^T\mathbf{s}}.$$
In this case the BLUE itself reduces to $\hat{A} = \mathbf{s}^T\mathbf{x}/\mathbf{s}^T\mathbf{s}$, exactly as in white noise: since $\mathbf{s}$ is an eigenvector of $\mathbf{C}$, no prewhitening filter is needed!
6.9 (Luke Vercimak) OOK communication system. Given:
$$x[n] = A\cos(2\pi f_1 n) + w[n], \quad n = 0, 1, \ldots, N-1, \qquad \mathbf{C} = \sigma^2\mathbf{I}, \qquad E[w[n]] = 0$$
Find the BLUE for $A$ ($\hat{A}$) and interpret the resultant detector. Find the best frequency in the range $0 \leq f_1 \leq \frac{1}{2}$ to use at the transmitter.
In vector form $\mathbf{x} = \mathbf{H}A + \mathbf{w}$, where
$$\mathbf{H} = \begin{bmatrix} 1 \\ \cos(2\pi f_1) \\ \cos(2\pi f_1\cdot 2) \\ \vdots \\ \cos(2\pi f_1(N-1)) \end{bmatrix}, \qquad \mathbf{C}^{-1} = \frac{1}{\sigma^2}\mathbf{I}$$
Using the Gauss-Markov theorem,
$$\hat{A} = \left(\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{C}^{-1}\mathbf{x} = \left[\frac{1}{\sigma^2}\sum_{n=0}^{N-1}\cos^2(2\pi f_1 n)\right]^{-1}\left[\frac{1}{\sigma^2}\sum_{n=0}^{N-1}\cos(2\pi f_1 n)\,x[n]\right] = \frac{\sum_{n=0}^{N-1}\cos(2\pi f_1 n)\,x[n]}{\sum_{n=0}^{N-1}\cos^2(2\pi f_1 n)}$$
The detector is the ratio of the cross-correlation between the carrier and the received signal to the energy of the carrier signal: it measures how much the received signal looks like the carrier. The decision threshold on $\hat{A}$ would be chosen as $A/2$, since this minimizes both the number of false positives and false negatives.
The best carrier frequency is the one that reduces the variance of $\hat{A}$ the most:
$$C_{\hat{A}} = \left(\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H}\right)^{-1} = \left[\frac{1}{\sigma^2}\sum_{n=0}^{N-1}\cos^2(2\pi f_1 n)\right]^{-1} = \frac{\sigma^2}{\sum_{n=0}^{N-1}\cos^2(2\pi f_1 n)}$$
Maximizing the denominator reduces $C_{\hat{A}}$ the most. If $f_1$ is chosen to be 0 (no carrier), or chosen to be $\frac{1}{2}$ with the added constraint that the transmitting clock and sampling clock are phase aligned with no phase shift, then $\cos^2(2\pi f_1 n) = 1$ for every $n$ and the variance is minimized.
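A small illustration (mine, not from the original solution) of the BLUE and its variance for a few carrier frequencies; $N$, $A$ and $\sigma$ are arbitrary.

# Problem 6.9: BLUE of the OOK amplitude and its variance versus f1
import numpy as np

rng = np.random.default_rng(2)
N, A, sigma = 64, 1.0, 0.5
n = np.arange(N)

for f1 in (0.0, 0.1, 0.25, 0.5):
    h = np.cos(2 * np.pi * f1 * n)
    x = A * h + sigma * rng.normal(size=N)
    A_hat = (h @ x) / (h @ h)            # cross-correlation / carrier energy
    var = sigma**2 / (h @ h)             # variance of the BLUE
    print("f1 =", f1, " A_hat =", round(A_hat, 3), " var =", round(var, 4))
# The variance is smallest at f1 = 0 and f1 = 0.5, where cos^2(2*pi*f1*n) = 1.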
7.3 (Luke Vercimak) We observe $N$ IID samples from the PDFs:
1. Gaussian
$$p(x; \mu) = \frac{1}{\sqrt{2\pi}}\exp\left[-\frac{1}{2}(x - \mu)^2\right]$$
2. Exponential
$$p(x; \lambda) = \begin{cases} \lambda\exp(-\lambda x) & x > 0 \\ 0 & x < 0 \end{cases}$$
In each case find the MLE of the unknown parameter and be sure to verify that it indeed maximizes the likelihood function. Do the estimators make sense?
Gaussian case:
$$p(\mathbf{x}; \mu) = \frac{1}{(2\pi)^{N/2}}\exp\left[-\frac{1}{2}\sum_{n=0}^{N-1}(x[n] - \mu)^2\right]$$
$$\ln p(\mathbf{x}; \mu) = -\frac{N}{2}\ln(2\pi) - \frac{1}{2}\sum_{n=0}^{N-1}(x[n] - \mu)^2$$
$$\frac{\partial \ln p(\mathbf{x}; \mu)}{\partial \mu} = \sum_{n=0}^{N-1}(x[n] - \mu)$$
Setting this to zero gives $N\hat{\mu} = \sum_{n=0}^{N-1}x[n]$, so
$$\hat{\mu} = \frac{1}{N}\sum_{n=0}^{N-1}x[n], \qquad \frac{\partial^2 \ln p(\mathbf{x}; \mu)}{\partial \mu^2} = -N$$
The curvature is negative at the critical point of the first derivative, so setting the first derivative to 0 indeed finds the maximum. The MLE is the sample mean, which is what we would expect.
Exponential case:
$$p(\mathbf{x}; \lambda) = \lambda^N\exp\left(-\lambda\sum_{n=0}^{N-1}x[n]\right)$$
$$\ln p(\mathbf{x}; \lambda) = N\ln\lambda - \lambda\sum_{n=0}^{N-1}x[n]$$
$$\frac{\partial \ln p(\mathbf{x}; \lambda)}{\partial \lambda} = \frac{N}{\lambda} - \sum_{n=0}^{N-1}x[n]$$
Setting this to zero gives
$$\hat{\lambda} = \frac{1}{\frac{1}{N}\sum_{n=0}^{N-1}x[n]}, \qquad \frac{\partial^2 \ln p(\mathbf{x}; \lambda)}{\partial \lambda^2} = -\frac{N}{\lambda^2}$$
The curvature is negative at the critical point of the first derivative, so setting the first derivative to 0 indeed finds the maximum. The MLE is the inverse of the sample mean; since $E[x] = \frac{1}{\lambda}$ for an exponential distribution, this estimator makes sense.
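As a quick empirical check (my addition; the true parameter values and sample size are arbitrary), both MLEs can be verified by simulation:

# Problem 7.3: sample mean and inverse sample mean as MLEs
import numpy as np

rng = np.random.default_rng(3)
N, mu, lam = 1000, 1.5, 2.0
x_gauss = rng.normal(mu, 1.0, size=N)
x_exp = rng.exponential(1.0 / lam, size=N)   # numpy uses the mean 1/lambda as scale
print("mu_hat  =", x_gauss.mean(), " (true", mu, ")")
print("lam_hat =", 1.0 / x_exp.mean(), " (true", lam, ")")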
7.14 (Luke Vercimak + book images)
# Title: EE 531 Detection and estimation theory HW prob 7.14
# Author: Luke Vercimak
# Date: 2/6/2011
# Description:
# Performs a monte-carlo analysis on the distribution of the
# sample mean of standard normally distributed random variables.
import matplotlib.pyplot as plt
import numpy as np
# Calculates the sample mean and variance for
# N IID samples from a standard normal distribution
def CalcStats(N):
    x = np.random.randn(N)
    m = sum(x)/N
    s2 = sum(pow((x-m),2))/N
    return (m,s2)
# Runs a monte-carlo analysis on the std mean and variance estimators
# computed on N samples from a standard normal distribution. Results
# are displayed in figure fig
def MonteCarlo(fig, N):
    # Number of monte-carlo iterations (to get nice histograms)
    N_monte_carlo = 10000
    # Create a vector of Ns to feed through the CalcStats routine.
    points = [N]*N_monte_carlo
    # Run the CalcStats subroutine N_monte_carlo times and store each result
    # in stats
    stats = map(CalcStats, points)
    # Stats is a list of tuples, each tuple is (mean, variance). Rearrange
    # this structure into two lists, one of sample means, one of samp. variances
    m, s2 = zip(*stats)
    # Normalize the results to compute the histogram
    d = m/(np.sqrt(s2)/np.sqrt(N))
    # Open a new figure to display the results
    plt.figure(fig)
    # Draw the histogram, use 50 bins and normalize the bin height
    n, bins, patches = plt.hist(d, 50, density=True, facecolor='green')
    # Draw the theoretical result over the histogram so that
    # a comparison can be made
    y = np.exp(-bins**2 / 2) / np.sqrt(2 * np.pi)   # standard normal pdf
    l = plt.plot(bins, y, 'r--', linewidth=1)
    # Label the graph and clean it up
    plt.ylabel('Normalized Bin count')
    plt.title(r'$\mathrm{Histogram\ of\ }' +
              r'\bar{x} / \left( \hat{\sigma}/ \sqrt{N} \right)' +
              ", N = %d $" % N)
    plt.axis([-6.5, 6.5, 0, 0.5])
    plt.grid(True)
    # Finally show the graph
    plt.show()
# Perform the monte-carlo analysis using 10 samples for the estimators
# in figure 1
MonteCarlo(1, 10)
# Repeat with 100 samples for figure 2
MonteCarlo(2, 100)
ECE 531 - Detection and Estimation Theory
Homework 5
February 22, 2011
7.18 (Luke Vercimak) Newton-Raphson
$$g(x) = \exp\left(-\frac{1}{2}x^2\right) + 0.1\exp\left(-\frac{1}{2}(x - 10)^2\right)$$
$$g'(x) = \exp\left(-\frac{1}{2}x^2\right)(-x) + 0.1\exp\left(-\frac{1}{2}(x - 10)^2\right)(10 - x)$$
$$g''(x) = \exp\left(-\frac{1}{2}x^2\right)(x^2) - \exp\left(-\frac{1}{2}x^2\right) + 0.1\exp\left(-\frac{1}{2}(x - 10)^2\right)(10 - x)^2 - 0.1\exp\left(-\frac{1}{2}(x - 10)^2\right)$$
The Newton-Raphson method finds the zeros of a function:
$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$
We want to find the zeros of $g'$. Therefore:
$$x_{k+1} = x_k - \frac{g'(x_k)}{g''(x_k)}$$
Using a computer to compute, starting from $x_0 = 0.5$:

k    x         g(x)     g'(x)     g''(x)
0    0.5000    0.8825   -0.4412   -0.6619
1   -0.1667    0.9862    0.1644   -0.9588
2    0.0048    1.0000   -0.0048   -1.0000
3    0.0000    1.0000    0.0000   -1.0000
4    0.0000    1.0000    0.0000   -1.0000

Starting from $x_0 = 9.5$:

k    x         g(x)     g'(x)     g''(x)
0    9.5000    0.0882    0.0441   -0.0662
1   10.1667    0.0986   -0.0164   -0.0959
2    9.9952    0.1000    0.0005   -0.1000
3   10.0000    0.1000    0.0000   -0.1000
It is important for the initial guess to be close to the critical point that we wish to estimate; otherwise the iteration will converge to whichever maximum or minimum is closest to the initial guess.
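A minimal script (not part of the original solution) that reproduces the two iteration tables above:

# Problem 7.18: Newton-Raphson on g'(x) from the two starting points
import numpy as np

def g(x):
    return np.exp(-0.5 * x**2) + 0.1 * np.exp(-0.5 * (x - 10)**2)

def g1(x):   # first derivative
    return -x * np.exp(-0.5 * x**2) + 0.1 * (10 - x) * np.exp(-0.5 * (x - 10)**2)

def g2(x):   # second derivative
    return (x**2 - 1) * np.exp(-0.5 * x**2) \
        + 0.1 * ((10 - x)**2 - 1) * np.exp(-0.5 * (x - 10)**2)

for x in (0.5, 9.5):                        # the two initial guesses used above
    print("start x0 =", x)
    for k in range(5):
        print("  k=%d  x=% .4f  g=%.4f  g'=% .4f  g''=% .4f"
              % (k, x, g(x), g1(x), g2(x)))
        x = x - g1(x) / g2(x)               # Newton-Raphson step on g'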
7.20 (Luke Vercimak) Given $x[n] = s[n] + w[n]$ with $w[n] \sim \mathcal{N}(0, \sigma^2)$, determine the MLE of $s[n]$. Since nothing is known about $s[n]$, we cannot determine anything about $s[n+k]$ from $x[n]$. Since we cannot take advantage of any information about the relationship between the values of $s[n]$, the best we can do is treat each $x[n]$ separately, giving us a worst-case estimate. (The joint distribution will not give any additional information over the single-sample distribution.)
$$\ln p(x[n]) = \ln\left(\frac{1}{\sqrt{2\pi\sigma^2}}\right) - \frac{1}{2\sigma^2}(x[n] - s[n])^2$$
Differentiating this with respect to $s[n]$ and setting the result equal to 0, we obtain
$$\hat{s}[n] = x[n]$$
This makes sense because we don't have any more information about $s[n]$ other than $x[n]$. The estimate has the PDF $\hat{s}[n] \sim \mathcal{N}(s[n], \sigma^2)$.
1. Is the MLE asymptotically unbiased? The estimator doesn't change with increasing $N$, so its bias is the same for every $N$:
$$E[\hat{s}[n]] = E[x[n]] = E[s[n] + w[n]] = s[n] + E[w[n]] = s[n]$$
Therefore the estimator is unbiased (and hence asymptotically unbiased).
2. Is the MLE asymptotically efficient? The estimator doesn't depend on $N$, so it is either efficient for every $N$ or not at all. Since
$$\frac{\partial \ln p(x[n])}{\partial s[n]} = \frac{1}{\sigma^2}(x[n] - s[n]) = I(\theta)\,(g(\mathbf{x}) - \theta),$$
$x[n]$ is an efficient estimator of $s[n]$.
3. Is the MLE asymptotically Gaussian? $x[n]$ is Gaussian because it is the sum of a constant and a Gaussian RV, so the MLE is Gaussian.
4. Is the MLE asymptotically consistent? The estimate does not converge as $N \to \infty$; its variance stays at $\sigma^2$. Therefore the estimate is not consistent.
8.5 (Luke Vercimak + Natasha Devroye) DCT Estimation
Given:
$$s[n] = \sum_{i=1}^{p} A_i\cos(2\pi f_i n)$$
Determine:
1. Find the LSE normal equations. The model above is linear and can be put into the form $\mathbf{s} = \mathbf{H}\boldsymbol{\theta}$. Therefore,
$$\mathbf{H} = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ \cos 2\pi f_1(1) & \cos 2\pi f_2(1) & \ldots & \cos 2\pi f_p(1) \\ \vdots & \vdots & \ddots & \vdots \\ \cos 2\pi f_1(N-1) & \cos 2\pi f_2(N-1) & \ldots & \cos 2\pi f_p(N-1) \end{bmatrix}, \qquad \boldsymbol{\theta} = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_p \end{bmatrix}$$
Per the book's results, the normal equations are:
$$\mathbf{H}^T\mathbf{H}\boldsymbol{\theta} = \mathbf{H}^T\mathbf{s}$$
2. Given that the frequencies are $f_i = i/N$, explicitly find the LSE and the minimum LSE error.
$$\mathbf{H} = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ \cos 2\pi\frac{1}{N}(1) & \cos 2\pi\frac{2}{N}(1) & \ldots & \cos 2\pi\frac{p}{N}(1) \\ \vdots & \vdots & \ddots & \vdots \\ \cos 2\pi\frac{1}{N}(N-1) & \cos 2\pi\frac{2}{N}(N-1) & \ldots & \cos 2\pi\frac{p}{N}(N-1) \end{bmatrix}$$
The columns of this matrix are orthogonal. Because of this:
$$\mathbf{H}^T\mathbf{H} = \begin{bmatrix} \frac{N}{2} & 0 & \ldots & 0 \\ 0 & \frac{N}{2} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & \frac{N}{2} \end{bmatrix} = \frac{N}{2}\mathbf{I}$$
Solving the normal equations for $\boldsymbol{\theta}$, we get
$$\hat{\boldsymbol{\theta}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{s} = \frac{2}{N}\mathbf{I}\mathbf{H}^T\mathbf{s} = \frac{2}{N}\mathbf{H}^T\mathbf{s}$$
Converting this back into scalar form gives the LSE estimator:
$$\hat{A}_i = \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)s[n]$$
To find the minimum LSE error, use the result in Eq. 8.13:
$$J_{\min} = \mathbf{s}^T(\mathbf{s} - \mathbf{H}\hat{\boldsymbol{\theta}}) = \mathbf{s}^T\mathbf{s} - \mathbf{s}^T\mathbf{H}\hat{\boldsymbol{\theta}}$$
Here the "data" $\mathbf{s}$ satisfies the model exactly, $\mathbf{s} = \mathbf{H}\boldsymbol{\theta}$, so
$$\hat{\boldsymbol{\theta}} = \frac{2}{N}\mathbf{H}^T\mathbf{H}\boldsymbol{\theta} = \boldsymbol{\theta}, \qquad \mathbf{s}^T\mathbf{H}\hat{\boldsymbol{\theta}} = \boldsymbol{\theta}^T\mathbf{H}^T\mathbf{H}\boldsymbol{\theta} = \mathbf{s}^T\mathbf{s}$$
and therefore
$$J_{\min} = \mathbf{s}^T\mathbf{s} - \mathbf{s}^T\mathbf{s} = 0$$
Because the signal model was linear (and noiseless) to begin with, the LSE gives exact estimates of the parameters and reconstructs the signal in its entirety.
3. Finally, if $x[n] = s[n] + w[n]$, where $w[n]$ is WGN with variance $\sigma^2$, determine the PDF of the LSE assuming the given frequencies.
Because of the above result for $J_{\min}$, any error in the estimate is entirely due to $w[n]$. The estimator doesn't change in this case:
$$\hat{A}_i = \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)x[n] = \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)s[n] + \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)w[n]$$
$$E[\hat{A}_i] = \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)s[n] + \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)E[w[n]] = \frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)s[n] = A_i$$
$$\mathrm{var}[\hat{A}_i] = \mathrm{var}\left[\frac{2}{N}\sum_{n=0}^{N-1}\cos\left(2\pi i\frac{n}{N}\right)w[n]\right] = \frac{4}{N^2}\sum_{n=0}^{N-1}\cos^2\left(2\pi i\frac{n}{N}\right)\mathrm{var}[w[n]] = \frac{4\sigma^2}{N^2}\sum_{n=0}^{N-1}\frac{1 + \cos\left(4\pi i\frac{n}{N}\right)}{2} = \frac{2\sigma^2}{N}$$
Furthermore, $\mathrm{cov}(\hat{A}_i, \hat{A}_j) = \frac{2\sigma^2}{N}\delta_{ij}$, since the columns of $\mathbf{H}$ are orthogonal. The estimate $\hat{A}_i$ is the sum of a constant and a number of Gaussian RVs, so the distribution of $\hat{\mathbf{A}}$ is Gaussian:
$$\hat{\mathbf{A}} \sim \mathcal{N}\left(\mathbf{A}, \frac{2\sigma^2}{N}\mathbf{I}\right)$$
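A short numerical sketch (my addition, not in the original solution) confirming the orthogonality $\mathbf{H}^T\mathbf{H} = \frac{N}{2}\mathbf{I}$ and the estimator $\hat{A}_i = \frac{2}{N}\sum_n\cos(2\pi i n/N)\,x[n]$; the amplitudes, $N$, $p$ and noise level are arbitrary.

# Problem 8.5: LSE amplitudes with f_i = i/N
import numpy as np

rng = np.random.default_rng(4)
N, p, sigma = 128, 3, 0.2
A = np.array([1.0, 0.5, -0.8])
n = np.arange(N)
H = np.cos(2 * np.pi * np.outer(n, np.arange(1, p + 1)) / N)   # N x p

x = H @ A + sigma * rng.normal(size=N)
A_hat = (2.0 / N) * (H.T @ x)                                   # LSE amplitudes
print("A_hat =", np.round(A_hat, 3), " theoretical var =", 2 * sigma**2 / N)
print("H^T H = (N/2) I:", np.allclose(H.T @ H, (N / 2) * np.eye(p)))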
8.10 (Shu Wang) Prove that $\|\hat{\mathbf{s}}\|^2 + \|\mathbf{x} - \hat{\mathbf{s}}\|^2 = \|\mathbf{x}\|^2$.
We write $\hat{\mathbf{s}} = \mathbf{H}\hat{\boldsymbol{\theta}}$. Then:
$$\|\hat{\mathbf{s}}\|^2 = \hat{\mathbf{s}}^T\hat{\mathbf{s}} = \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{H}\hat{\boldsymbol{\theta}}$$
$$\|\mathbf{x} - \hat{\mathbf{s}}\|^2 = (\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}})^T(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}}) = \mathbf{x}^T\mathbf{x} - \mathbf{x}^T\mathbf{H}\hat{\boldsymbol{\theta}} - \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{x} + \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{H}\hat{\boldsymbol{\theta}}$$
$$\|\mathbf{x}\|^2 = \mathbf{x}^T\mathbf{x}$$
$$\begin{aligned}
\|\hat{\mathbf{s}}\|^2 + \|\mathbf{x} - \hat{\mathbf{s}}\|^2 &= \mathbf{x}^T\mathbf{x} + \left[\hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{H}\hat{\boldsymbol{\theta}} - \mathbf{x}^T\mathbf{H}\hat{\boldsymbol{\theta}} - \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{x} + \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{H}\hat{\boldsymbol{\theta}}\right] \\
&= \mathbf{x}^T\mathbf{x} - \left[\mathbf{x}^T\mathbf{H}\hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{x} - 2\hat{\boldsymbol{\theta}}^T\mathbf{H}^T\mathbf{H}\hat{\boldsymbol{\theta}}\right] \\
&= \mathbf{x}^T\mathbf{x} - \left[(\mathbf{x}^T - \hat{\boldsymbol{\theta}}^T\mathbf{H}^T)\mathbf{H}\hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}}^T\mathbf{H}^T(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}})\right] \\
&= \mathbf{x}^T\mathbf{x} - \left[(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}})^T\mathbf{H}\hat{\boldsymbol{\theta}} + \hat{\boldsymbol{\theta}}^T\mathbf{H}^T(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}})\right]
\end{aligned}$$
We know that $(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}})^T\mathbf{H} = \mathbf{0}$ and $\mathbf{H}^T(\mathbf{x} - \mathbf{H}\hat{\boldsymbol{\theta}}) = \mathbf{0}$ (the LS residual is orthogonal to the columns of $\mathbf{H}$).
So $\|\hat{\mathbf{s}}\|^2 + \|\mathbf{x} - \hat{\mathbf{s}}\|^2 = \mathbf{x}^T\mathbf{x} = \|\mathbf{x}\|^2$.
ECE 531: Detection and Estimation Theory, Spring 2011
Homework 6
Problem 1. (8.20) (Shu Wang)
Solution:
From the problem, we know that
$$\mathbf{H} = \begin{bmatrix} 1 \\ r \\ \vdots \\ r^{N-1} \end{bmatrix}$$
so $h[n] = r^n$. According to Eq. 8.46, the sequential LS update is
$$\hat{A}(n) = \hat{A}(n-1) + K[n]\left(x[n] - h[n]\hat{A}(n-1)\right) = \hat{A}(n-1) + K[n]\left(x[n] - r^n\hat{A}(n-1)\right)$$
From the problem, $\sigma^2 = 1$. According to Eq. 8.45, $\mathrm{Var}(\hat{A}(n)) = \Sigma[n]$, and from Eq. 8.47 the gain is
$$K[n] = \frac{\mathrm{Var}(\hat{A}(n-1))\,h[n]}{1 + h[n]^2\,\mathrm{Var}(\hat{A}(n-1))} = \frac{\mathrm{Var}(\hat{A}(n-1))\,r^n}{1 + r^{2n}\,\mathrm{Var}(\hat{A}(n-1))}$$
Also, according to Eq. 8.48,
$$\mathrm{Var}(\hat{A}(n)) = \left(1 - K[n]h[n]\right)\mathrm{Var}(\hat{A}(n-1)) = \left(1 - \frac{\mathrm{Var}(\hat{A}(n-1))\,r^{2n}}{1 + r^{2n}\,\mathrm{Var}(\hat{A}(n-1))}\right)\mathrm{Var}(\hat{A}(n-1)) = \frac{\mathrm{Var}(\hat{A}(n-1))}{1 + r^{2n}\,\mathrm{Var}(\hat{A}(n-1))}$$
Starting from $\mathrm{Var}(\hat{A}(0)) = 1$:
$$\mathrm{Var}(\hat{A}(1)) = \frac{1}{1 + r^2}, \qquad \mathrm{Var}(\hat{A}(2)) = \frac{\frac{1}{1 + r^2}}{1 + r^4\frac{1}{1 + r^2}} = \frac{1}{1 + r^2 + r^4}$$
Then we can conclude that
$$\mathrm{Var}(\hat{A}(n)) = \frac{1}{\sum_{k=0}^{n} r^{2k}}.$$
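The recursion is easy to check numerically; this sketch (mine, not from the original solution) compares it against the closed form $1/\sum_{k=0}^{n} r^{2k}$ for an arbitrary $r$.

# Problem 8.20: sequential LS variance recursion vs. closed form (sigma^2 = 1)
r, n_max = 0.9, 10
var = 1.0                                    # Var(A_hat(0)) = 1
for n in range(1, n_max + 1):
    h = r**n
    K = var * h / (1.0 + h**2 * var)         # Eq. 8.47
    var = (1.0 - K * h) * var                # Eq. 8.48
    closed_form = 1.0 / sum(r**(2 * k) for k in range(n + 1))
    print("n=%2d  recursion %.6f  closed form %.6f" % (n, var, closed_form))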
Problem 2. (8.27) (Luke Vercimak)
Solution:
The model is $x[n] = \exp(\theta) + w[n]$.
1. Newton-Raphson
We assume the signal model $s[n] = \exp(\theta)$, i.e. $\mathbf{s}(\theta) = \exp(\theta)\mathbf{1}$. We want to minimize
$$J = \left(\mathbf{x} - \mathbf{s}(\theta)\right)^T\left(\mathbf{x} - \mathbf{s}(\theta)\right),$$
so we want to solve
$$\frac{\partial \mathbf{s}(\theta)^T}{\partial \theta}\left(\mathbf{x} - \mathbf{s}(\theta)\right) = \sum_{n=0}^{N-1}\exp(\theta)\left(x[n] - \exp(\theta)\right) = 0.$$
Using results 8.59 and 8.60 from the book with $g(\theta) = \exp(\theta)$, the Newton-Raphson iteration becomes
$$\theta_{k+1} = \theta_k + \frac{\sum_{n=0}^{N-1}\exp(\theta_k)\left(x[n] - \exp(\theta_k)\right)}{N\exp(2\theta_k) - \sum_{n=0}^{N-1}\exp(\theta_k)\left(x[n] - \exp(\theta_k)\right)}$$
2. Analytically
Changing the model to vector form: $\mathbf{x} = \exp(\theta)\mathbf{1} + \mathbf{w}$, with signal model $s[n] = \exp(\theta)$. This model can be transformed into a linear model by the transformation
$$\alpha = \exp(\theta) = g(\theta)$$
Since $g(\theta)$ is invertible, $\mathbf{s}(\theta) = \mathbf{s}(g^{-1}(\alpha))$, and the signal model becomes
$$\mathbf{s} = \mathbf{H}\alpha = \mathbf{1}\alpha$$
Using the linear-model results from the book to find the LSE:
$$\hat{\alpha}_{LSE} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\mathbf{x} = (\mathbf{1}^T\mathbf{1})^{-1}\mathbf{1}^T\mathbf{x} = \frac{1}{N}\sum_{n=0}^{N-1}x[n] = \bar{x}$$
$$\hat{\theta}_{LSE} = g^{-1}(\hat{\alpha}_{LSE}) = \ln(\hat{\alpha}_{LSE}) = \ln(\bar{x})$$
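The iteration and the closed-form answer can be compared directly; the sketch below is my illustration (the true $\theta$, $N$, noise level and initial guess are arbitrary, and as noted in Problem 7.18 the initial guess must be reasonably close for Newton-Raphson to converge).

# Problem 8.27: Newton-Raphson vs. the closed-form LSE ln(x_bar)
import numpy as np

rng = np.random.default_rng(5)
theta_true, N, sigma = 0.7, 200, 0.1
x = np.exp(theta_true) + sigma * rng.normal(size=N)

theta = 1.0                                  # initial guess
for k in range(10):
    e = np.exp(theta)
    num = np.sum(e * (x - e))
    theta = theta + num / (N * e**2 - num)   # iteration derived above
print("Newton-Raphson:", theta, "  closed form ln(x_bar):", np.log(x.mean()))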
Problem 3. (10.3) (Shu Wang)
Solution:
Because the $x[n]$ are conditionally independent given $\theta$, we have:
$$p(\mathbf{x}|\theta) = \exp\left[-\sum_{n=0}^{N-1}(x[n] - \theta)\right]U\!\left(\min(x[n]) - \theta\right)$$
$$p(\mathbf{x}, \theta) = p(\mathbf{x}|\theta)p(\theta) = \exp\left[-\sum_{n=0}^{N-1}x[n] + (N-1)\theta\right]U\!\left(\min(x[n]) - \theta\right) \quad (\theta > 0)$$
$$p(\mathbf{x}) = \int_0^{\min(x[n])}p(\mathbf{x}, \theta)\,d\theta = \int_0^{\min(x[n])}\exp\left[-\sum_{n=0}^{N-1}x[n] + (N-1)\theta\right]d\theta = \exp\left[-\sum_{n=0}^{N-1}x[n]\right]\frac{1}{N-1}\left(\exp\left[(N-1)\min(x[n])\right] - 1\right)$$
$$p(\theta|\mathbf{x}) = \frac{p(\mathbf{x}, \theta)}{p(\mathbf{x})} = \frac{\exp[(N-1)\theta]\,U\!\left(\min(x[n]) - \theta\right)}{\frac{1}{N-1}\left(\exp\left[(N-1)\min(x[n])\right] - 1\right)}$$
$$E(\theta|\mathbf{x}) = \int_0^{\min(x[n])}\theta\,p(\theta|\mathbf{x})\,d\theta = \frac{N-1}{\exp\left[(N-1)\min(x[n])\right] - 1}\int_0^{\min(x[n])}\theta\exp[(N-1)\theta]\,d\theta$$
Using integration by parts, we get
$$\hat{\theta}_{MMSE} = \frac{\min(x[n])}{1 - \exp\left[-(N-1)\min(x[n])\right]} - \frac{1}{N-1}$$
ECE 531 - Detection and Estimation Theory
Homework 7
March 10, 2011
11.3 (MMSE and MAP estimation) (Luke Vercimak)
1. MMSE
$$\hat{\theta} = E[\theta|x] = \int_x^{\infty}\theta\,p(\theta|x)\,d\theta = \int_x^{\infty}\theta\exp[-(\theta - x)]\,d\theta = \Big[-(\theta + 1)\exp[-(\theta - x)]\Big]_x^{\infty} = x + 1$$
2. MAP
$$\hat{\theta} = \arg\max_{\theta}\,p(\theta|x) = \arg\max_{\theta}\,\exp[-(\theta - x)] = x$$
(the posterior is decreasing in $\theta$ on its support $\theta \geq x$, so the maximum is at $\theta = x$).
12.1 (LMMSE) (Luke Vercimak)
Given:
$$\hat{\theta} = ax^2[0] + bx[0] + c, \qquad x[0] \sim \mathcal{U}\left[-\frac{1}{2}, \frac{1}{2}\right]$$
Find the LMMSE estimator and the quadratic estimator if $\theta = \cos 2\pi x[0]$. Also, compare the minimum MSEs.
1. Quadratic
$$B_{MSE}[\hat{\theta}] = E\left[(\theta - \hat{\theta})^2\right] = E\left[\left(\theta - (ax^2[0] + bx[0] + c)\right)^2\right]$$
We need to find the minimum of $B_{MSE}$. To do this, we take its derivative with respect to each parameter and set it to 0:
$$0 = \frac{\partial B_{MSE}[\hat{\theta}]}{\partial a} = E\left[2\left(\theta - ax^2[0] - bx[0] - c\right)(-x^2[0])\right]$$
$$0 = \frac{\partial B_{MSE}[\hat{\theta}]}{\partial b} = E\left[2\left(\theta - ax^2[0] - bx[0] - c\right)(-x[0])\right]$$
$$0 = \frac{\partial B_{MSE}[\hat{\theta}]}{\partial c} = E\left[2\left(\theta - ax^2[0] - bx[0] - c\right)(-1)\right]$$
Equivalently,
$$E[\theta x^2[0]] = E\left[ax^4[0] + bx^3[0] + cx^2[0]\right], \qquad E[\theta x[0]] = E\left[ax^3[0] + bx^2[0] + cx[0]\right], \qquad E[\theta] = E\left[ax^2[0] + bx[0] + c\right]$$
In matrix form (writing $x = x[0]$):
$$E\left[\theta\begin{bmatrix} x^2 \\ x \\ 1 \end{bmatrix}\right] = E\begin{bmatrix} x^4 & x^3 & x^2 \\ x^3 & x^2 & x \\ x^2 & x & 1 \end{bmatrix}\begin{bmatrix} a \\ b \\ c \end{bmatrix}$$
Since the distributions of $x$ and $\theta$ are known, direct integration gives
$$E[x] = 0, \quad E[x^2] = \frac{1}{12}, \quad E[x^3] = 0, \quad E[x^4] = \frac{1}{80}, \quad E[x^2\theta] = -\frac{1}{2\pi^2}, \quad E[x\theta] = 0, \quad E[\theta] = 0, \quad E[\theta^2] = \frac{1}{2}$$
Substituting these expectations into the matrix equation above and solving for $a$, $b$, $c$, it is found that
$$\begin{bmatrix} a \\ b \\ c \end{bmatrix} = \begin{bmatrix} -\frac{90}{\pi^2} \\ 0 \\ \frac{15}{2\pi^2} \end{bmatrix}$$
Therefore
$$\hat{\theta} = -\frac{90}{\pi^2}x^2[0] + \frac{15}{2\pi^2}$$
B
MSE
[

] = E
_
(

)
2
_
= E
_
_
cos(2x[0])

_
2
_
= E
_
cos
2
(2x[0]) 2

cos(2x[0]) +

2
_
=
1
2
2E
_

cos(2x[0])
_
+ 0
=
1
2
2E
_
(
90

2
x
2
[0] +
15
2
2
) cos(2x[0])
_
=
1
2
2E
_
90

2
cos(2x[0])x
2
[0]
_
2E
_
15
2
2
cos(2x[0])
_
=
1
2
2(
90

2
)E
_
cos(2x[0])x
2
[0]

2(
15
2
2
)E [cos(2x[0])]
=
1
2
2(
90

2
)E
_
x
2
[0]

2(
15
2
2
)E []
=
1
2
2(
90

2
)
1
2
2
0
=
1
2

90

4
2. Linear
Modifying the results of the quadratic case (set $a = 0$):
$$0 = \frac{\partial B_{MSE}[\hat{\theta}]}{\partial b} = E\left[2\left(\theta - bx[0] - c\right)(-x[0])\right], \qquad 0 = \frac{\partial B_{MSE}[\hat{\theta}]}{\partial c} = E\left[2\left(\theta - bx[0] - c\right)(-1)\right]$$
$$E[\theta x[0]] = E\left[bx^2[0] + cx[0]\right], \qquad E[\theta] = E\left[bx[0] + c\right]$$
In matrix form:
$$E\begin{bmatrix} x^2 & x \\ x & 1 \end{bmatrix}\begin{bmatrix} b \\ c \end{bmatrix} = E\left[\theta\begin{bmatrix} x \\ 1 \end{bmatrix}\right]$$
Using the expectations from above, it is found that $b = 0$, $c = 0$, so $\hat{\theta} = 0$.
Computing the MSE:
$$B_{MSE}[\hat{\theta}] = E\left[\theta^2\right] = E\left[\cos^2(2\pi x[0])\right] = \frac{1}{2}$$
The MSE of the quadratic estimator ($\frac{1}{2} - \frac{45}{\pi^4} \approx 0.038$) is less than that of the linear estimator ($\frac{1}{2}$): the purely linear estimator cannot exploit the even symmetry of $\theta = \cos 2\pi x[0]$ in $x[0]$, while the quadratic term can.
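The coefficients and both MSEs can be reproduced by brute-force simulation; the following sketch (my addition) solves the quadratic normal equations from Monte Carlo moments and should land close to $a = -90/\pi^2$, $b = 0$, $c = 15/(2\pi^2)$ and MSE $= 1/2 - 45/\pi^4$.

# Problem 12.1: Monte Carlo check of the quadratic estimator
import numpy as np

rng = np.random.default_rng(6)
x = rng.uniform(-0.5, 0.5, size=2_000_000)
theta = np.cos(2 * np.pi * x)

G = np.array([[np.mean(x**4), np.mean(x**3), np.mean(x**2)],
              [np.mean(x**3), np.mean(x**2), np.mean(x)],
              [np.mean(x**2), np.mean(x),    1.0]])
rhs = np.array([np.mean(theta * x**2), np.mean(theta * x), np.mean(theta)])
a, b, c = np.linalg.solve(G, rhs)
print("a, b, c =", a, b, c)
print("theory  =", -90 / np.pi**2, 0.0, 15 / (2 * np.pi**2))
print("MSE     =", np.mean((theta - (a * x**2 + b * x + c))**2),
      " theory:", 0.5 - 45 / np.pi**4)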
12.11 (Whitening) (Shu Wang)
If we want $\mathbf{C}_{yy} = \mathbf{I}$, then $y_1, y_2, y_3$ must be orthonormal. We use the Gram-Schmidt orthogonalization procedure, with inner product $(u, v) = E[uv]$; from the given covariance, $E[x_i^2] = 1$, $E[x_1x_2] = E[x_2x_3] = \rho$ and $E[x_1x_3] = \rho^2$.
$$y_1 = \frac{x_1}{\|x_1\|} = x_1$$
$$z_2 = x_2 - (x_2, y_1)y_1 = x_2 - E[x_2x_1]\,x_1 = x_2 - \rho x_1$$
$$y_2 = \frac{z_2}{\|z_2\|}, \qquad \|z_2\| = \sqrt{E[z_2^2]} = \sqrt{1 - \rho^2}, \qquad y_2 = \frac{x_2 - \rho x_1}{\sqrt{1 - \rho^2}}$$
$$z_3 = x_3 - (x_3, y_2)y_2 - (x_3, y_1)y_1 = x_3 - E\left[x_3\frac{x_2 - \rho x_1}{\sqrt{1 - \rho^2}}\right]\frac{x_2 - \rho x_1}{\sqrt{1 - \rho^2}} - E[x_3x_1]\,x_1$$
$$= x_3 - \frac{\rho - \rho^3}{1 - \rho^2}\left(x_2 - \rho x_1\right) - \rho^2 x_1 = x_3 - \rho\left(x_2 - \rho x_1\right) - \rho^2 x_1 = x_3 - \rho x_2$$
$$y_3 = \frac{z_3}{\|z_3\|} = \frac{x_3 - \rho x_2}{\sqrt{1 - \rho^2}}$$
Then we can write
$$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -\frac{\rho}{\sqrt{1 - \rho^2}} & \frac{1}{\sqrt{1 - \rho^2}} & 0 \\ 0 & -\frac{\rho}{\sqrt{1 - \rho^2}} & \frac{1}{\sqrt{1 - \rho^2}} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
So
$$\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 \\ -\frac{\rho}{\sqrt{1 - \rho^2}} & \frac{1}{\sqrt{1 - \rho^2}} & 0 \\ 0 & -\frac{\rho}{\sqrt{1 - \rho^2}} & \frac{1}{\sqrt{1 - \rho^2}} \end{bmatrix}$$
Because $\mathbf{x}$ is zero mean and $\mathbf{C}_{yy} = E[\mathbf{y}\mathbf{y}^T]$, we have $\mathbf{C}_{yy} = E[\mathbf{A}\mathbf{x}\mathbf{x}^T\mathbf{A}^T] = \mathbf{A}\mathbf{C}_{xx}\mathbf{A}^T = \mathbf{I}$, so $\mathbf{C}_{xx} = \mathbf{A}^{-1}(\mathbf{A}^T)^{-1} = (\mathbf{A}^T\mathbf{A})^{-1}$, and therefore $\mathbf{C}_{xx}^{-1} = \mathbf{A}^T\mathbf{A}$.
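A quick check of the whitening matrix (my addition; $\rho = 0.6$ is an arbitrary value, and the covariance below is the one assumed in the derivation above):

# Problem 12.11: A C_xx A^T = I and A^T A = C_xx^{-1}
import numpy as np

rho = 0.6
Cxx = np.array([[1.0,    rho, rho**2],
                [rho,    1.0, rho],
                [rho**2, rho, 1.0]])
s = np.sqrt(1 - rho**2)
A = np.array([[1.0,       0.0,     0.0],
              [-rho / s,  1.0 / s, 0.0],
              [0.0,      -rho / s, 1.0 / s]])
print(np.allclose(A @ Cxx @ A.T, np.eye(3)))       # True
print(np.allclose(A.T @ A, np.linalg.inv(Cxx)))    # True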
ECE 531: Detection and Estimation Theory, Spring 2011
Homework 8
Problem 1. (13.4) (Shu Wang)
Solution:
From Eq. 13.5 we have, for $m \geq n$:
$$C_s[m, n] = a^{m+n+2}\sigma_s^2 + \sigma_u^2\,a^{m-n}\sum_{k=0}^{n}a^{2k} = a^{m-n}\left(a^{2n+2}\sigma_s^2 + \sigma_u^2\sum_{k=0}^{n}a^{2k}\right)$$
From Eq. 13.6 we know that
$$\mathrm{var}(s[n]) = C_s[n, n] = a^{2n+2}\sigma_s^2 + \sigma_u^2\sum_{k=0}^{n}a^{2k}$$
So $C_s[m, n] = a^{m-n}\,C_s[n, n]$.
Problem 2. (13.12) (Shu Wang)
Solution:
$$K[n] = \frac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]}$$
so if $\sigma_n^2 = 0$, then $K[n] = 1$. We will have:
$$\hat{s}[n|n] = \hat{s}[n|n-1] + K[n]\left(x[n] - \hat{s}[n|n-1]\right) = x[n]$$
$$\hat{s}[n|n-1] = a\,\hat{s}[n-1|n-1] = a\,x[n-1]$$
Innovation sequence:
$$\tilde{x}[n] = x[n] - \hat{x}[n|n-1] = x[n] - \hat{s}[n|n-1] = s[n] - \hat{s}[n|n-1] = s[n] - a\,\hat{s}[n-1|n-1] = s[n] - a\,x[n-1] = s[n] - a\,s[n-1] = u[n]$$
(using $x[n] = s[n]$, which holds because $\sigma_n^2 = 0$). So the innovation sequence is white, because $u[n]$ is white.
Problem 3. (13.15) (Optimal l-step predictor) (Luke Vercimak)
We'll use the Kalman filtering equations.
Prediction:
$$\hat{s}[n|n-1] = a\,\hat{s}[n-1|n-1]$$
Kalman gain:
$$K[n] = \frac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]}$$
Correction:
$$\hat{s}[n|n] = \hat{s}[n|n-1] + K[n]\left(x[n] - \hat{s}[n|n-1]\right)$$
If $\sigma_n^2 \to \infty$, the Kalman filter will not use the observed data and will generate its output solely based on the previous estimate:
$$K[n] = \frac{M[n|n-1]}{\sigma_n^2 + M[n|n-1]} \to 0$$
The correction equation then becomes:
$$\hat{s}[n|n] = \hat{s}[n|n-1] + 0\cdot\left(x[n] - \hat{s}[n|n-1]\right) = \hat{s}[n|n-1]$$
The prediction equation can then be expanded:
$$\hat{s}[n+1|n] = a\,\hat{s}[n|n]$$
$$\hat{s}[n+2|n] = a\,\hat{s}[n+1|n] = a^2\,\hat{s}[n|n]$$
$$\hat{s}[n+3|n] = a\,\hat{s}[n+2|n] = a^2\,\hat{s}[n+1|n] = a^3\,\hat{s}[n|n]$$
$$\hat{s}[n+l|n] = a^{l}\,\hat{s}[n|n]$$
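A small simulation sketch (my addition; the AR parameter, noise levels and horizon are arbitrary) of a scalar Kalman filter followed by the l-step prediction $a^{l}\hat{s}[n|n]$:

# Problem 13.15: scalar Kalman filter and l-step prediction
import numpy as np

rng = np.random.default_rng(7)
a, sigma_u, sigma_n, N, l = 0.95, 0.5, 1.0, 50, 5
s = np.zeros(N)
for n in range(1, N):
    s[n] = a * s[n - 1] + sigma_u * rng.normal()
x = s + sigma_n * rng.normal(size=N)

s_hat, M = 0.0, 1.0                               # arbitrary initialization
for n in range(N):
    s_pred, M_pred = a * s_hat, a**2 * M + sigma_u**2           # prediction
    K = M_pred / (sigma_n**2 + M_pred)                          # Kalman gain
    s_hat, M = s_pred + K * (x[n] - s_pred), (1 - K) * M_pred   # correction

print("l-step predictions:", [round(a**k * s_hat, 3) for k in range(1, l + 1)])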
ECE 531: Detection and Estimation Theory, Spring 2011
Homework 9
ALL THANKS TO Shu Wang
Problem 1. (3.4)
Solution:
According to Example 3.2, we have
$$P_D = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}\right)$$
$$Q^{-1}(P_D) = Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}$$
$$\sqrt{\frac{NA^2}{\sigma^2}} = Q^{-1}(P_{FA}) - Q^{-1}(P_D)$$
$$\frac{NA^2}{\sigma^2} = \left(Q^{-1}(P_{FA}) - Q^{-1}(P_D)\right)^2$$
Since $10\log_{10}\frac{A^2}{\sigma^2} = -30\,\mathrm{dB}$, i.e. $\frac{A^2}{\sigma^2} = 10^{-3}$, we have:
$$N = \left(Q^{-1}(P_{FA}) - Q^{-1}(P_D)\right)^2\cdot 10^3 = 36546$$
According to Appendix 2C, we can use Matlab to calculate $Q^{-1}$.
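The same computation in Python (my addition): $Q^{-1}$ is the inverse right-tail standard normal probability, and the values $P_{FA} = 10^{-4}$, $P_D = 0.99$ are an assumption on my part, chosen because they reproduce $N \approx 36546$; they are not restated in this solution.

# Problem 3.4: required N from P_FA and P_D (assumed values)
from scipy.stats import norm

P_FA, P_D = 1e-4, 0.99
Qinv = norm.isf                         # Q^{-1}(p)
N = (Qinv(P_FA) - Qinv(P_D))**2 * 1e3   # A^2/sigma^2 = 10^-3
print(N)                                # approximately 3.65e4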
Problem 2. (3.6)
$$\mathcal{H}_0: x[n] = w[n], \qquad \mathcal{H}_1: x[n] = A + w[n], \quad n = 0, 1, \ldots, N-1, \quad A < 0$$
$$L(\mathbf{x}) = \frac{\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}(x[n] - A)^2\right]}{\frac{1}{(2\pi\sigma^2)^{N/2}}\exp\left[-\frac{1}{2\sigma^2}\sum_{n=0}^{N-1}x^2[n]\right]} > \gamma$$
$$-\frac{1}{2\sigma^2}\left(-2A\sum_{n=0}^{N-1}x[n] + NA^2\right) > \ln\gamma$$
$$A\sum_{n=0}^{N-1}x[n] > \sigma^2\ln\gamma + \frac{NA^2}{2}$$
Since $A < 0$, dividing by $NA$ flips the inequality:
$$\frac{1}{N}\sum_{n=0}^{N-1}x[n] < \frac{\sigma^2}{NA}\ln\gamma + \frac{A}{2} = \gamma'$$
So we decide $\mathcal{H}_1$ if $\bar{x} < \gamma'$ and $\mathcal{H}_0$ if $\bar{x} > \gamma'$.
The test statistic is the sample mean:
$$T(\mathbf{x}) = \bar{x} \sim \begin{cases} \mathcal{N}\left(0, \frac{\sigma^2}{N}\right) & \text{under } \mathcal{H}_0 \\ \mathcal{N}\left(A, \frac{\sigma^2}{N}\right) & \text{under } \mathcal{H}_1 \end{cases}$$
$$P_{FA} = \Pr\{T(\mathbf{x}) < \gamma'; \mathcal{H}_0\} = 1 - \Pr\{T(\mathbf{x}) > \gamma'; \mathcal{H}_0\} = 1 - Q\left(\frac{\gamma'}{\sqrt{\sigma^2/N}}\right)$$
$$P_D = \Pr\{T(\mathbf{x}) < \gamma'; \mathcal{H}_1\} = 1 - Q\left(\frac{\gamma' - A}{\sqrt{\sigma^2/N}}\right)$$
From the first relation, $1 - P_{FA} = Q\left(\gamma'/\sqrt{\sigma^2/N}\right)$, so
$$\gamma' = \sqrt{\frac{\sigma^2}{N}}\,Q^{-1}(1 - P_{FA}) = -\sqrt{\frac{\sigma^2}{N}}\,Q^{-1}(P_{FA})$$
using $Q^{-1}(1 - x) = -Q^{-1}(x)$. Then
$$P_D = 1 - Q\left(-Q^{-1}(P_{FA}) - \frac{A}{\sqrt{\sigma^2/N}}\right) = Q\left(Q^{-1}(P_{FA}) + \frac{A}{\sqrt{\sigma^2/N}}\right)$$
using $Q(-x) = 1 - Q(x)$.
Since $A < 0$,
$$P_D = Q\left(Q^{-1}(P_{FA}) - \frac{|A|}{\sqrt{\sigma^2/N}}\right) = Q\left(Q^{-1}(P_{FA}) - \sqrt{\frac{NA^2}{\sigma^2}}\right)$$
This is the same as for $A > 0$.
Problem 3. (3.12)
If we want a perfect detector, the PDFs under $\mathcal{H}_0$ and $\mathcal{H}_1$ must not overlap. That requires $1 - c > c$, i.e. $c < \frac{1}{2}$.
Problem 4. (3.18)
$$\mathcal{H}_0: x[0] \sim \mathcal{N}(0, 1), \qquad \mathcal{H}_1: x[0] \sim \mathcal{N}(0, 2)$$
We decide $\mathcal{H}_1$ if
$$P(\mathcal{H}_1|x) > P(\mathcal{H}_0|x) \;\Leftrightarrow\; P(x|\mathcal{H}_1)P(\mathcal{H}_1) > P(x|\mathcal{H}_0)P(\mathcal{H}_0) \;\Leftrightarrow\; \frac{P(x|\mathcal{H}_1)}{P(x|\mathcal{H}_0)} > \frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)} = \gamma$$
$$\frac{P(x|\mathcal{H}_1)}{P(x|\mathcal{H}_0)} = \frac{\frac{1}{\sqrt{4\pi}}e^{-\frac{1}{4}x^2[0]}}{\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2[0]}} = \frac{1}{\sqrt{2}}e^{\frac{1}{4}x^2[0]} > \gamma$$
$$x^2[0] > 4\ln(\sqrt{2}\,\gamma) \;\Leftrightarrow\; |x[0]| > 2\sqrt{\ln(\sqrt{2}\,\gamma)}$$
For $P(\mathcal{H}_0) = \frac{1}{2}$ we have $P(\mathcal{H}_1) = \frac{1}{2}$, so $\gamma = \frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)} = 1$, and we decide $\mathcal{H}_1$ when $|x[0]| > 2\sqrt{\ln\sqrt{2}} = 1.1774 \approx 1.18$ (and $\mathcal{H}_0$ otherwise).
For $P(\mathcal{H}_0) = \frac{3}{4}$ we have $P(\mathcal{H}_1) = \frac{1}{4}$, so $\gamma = 3$, and we decide $\mathcal{H}_1$ when $|x[0]| > 2\sqrt{\ln(3\sqrt{2})} = 2.4043 \approx 2.4$ (and $\mathcal{H}_0$ otherwise). In both cases the decision region for $\mathcal{H}_1$ is the two tails beyond the threshold, with the central interval assigned to $\mathcal{H}_0$.
ECE 531: Detection and Estimation Theory, Spring 2011
Homework 10
Problem 1 (4.6 Luke Vercimak)
This is a known signal in WGN. Per Eq. 4.3, the test statistic is
$$T(\mathbf{x}) = \sum_{n=0}^{N-1}x[n]s[n] > \gamma'$$
In this case ($s[n] = Ar^n$), the signal energy is
$$\mathcal{E} = \sum_{n=0}^{N-1}s^2[n] = A^2\sum_{n=0}^{N-1}r^{2n}$$
For $0 < r < 1$:
$$\mathcal{E} = A^2\sum_{n=0}^{N-1}r^{2n} \to \frac{A^2}{1 - r^2} \quad \text{as } N \to \infty$$
Therefore, as we gain additional samples, the detector performance approaches a constant (obtained by plugging $\mathcal{E}$ into Eq. 4.14).
For $r = 1$:
$$\mathcal{E} = A^2\sum_{n=0}^{N-1}r^{2n} = NA^2 \to \infty \quad \text{as } N \to \infty$$
Per Eq. 4.14, $P_D \to 1$ as $N \to \infty$.
For $r > 1$:
$$\mathcal{E} = A^2\sum_{n=0}^{N-1}r^{2n} \to \infty \quad \text{as } N \to \infty$$
Per Eq. 4.14, $P_D \to 1$ as $N \to \infty$.
In all cases, the detector threshold $\gamma'$ is determined by plugging $\mathcal{E}$ into:
$$\gamma' = \sqrt{\sigma^2\mathcal{E}}\,Q^{-1}(P_{FA})$$
Problem 2 (4.10 Shu Wang)
$$\mathbf{V}^T\mathbf{C}\mathbf{V} = \mathbf{\Lambda}, \qquad \mathbf{V}^T = \mathbf{V}^{-1}$$
$$\mathbf{C} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^{-1} = \mathbf{V}\mathbf{\Lambda}\mathbf{V}^T, \qquad \mathbf{C}^{-1} = \mathbf{V}\mathbf{\Lambda}^{-1}\mathbf{V}^T = \mathbf{D}^T\mathbf{D}, \qquad \mathbf{D} = \mathbf{\Lambda}^{-\frac{1}{2}}\mathbf{V}^T$$
First we need to calculate the eigenvalues of $\mathbf{C}$: solving $\det(\lambda\mathbf{I} - \mathbf{C}) = 0$, it is easy to get $\lambda = 1 \pm \rho$. Then it is easy to find the matrix of eigenvectors:
$$\mathbf{V}^T = \mathbf{V} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad \mathbf{\Lambda} = \begin{bmatrix} 1 + \rho & 0 \\ 0 & 1 - \rho \end{bmatrix}$$
$$\mathbf{D} = \begin{bmatrix} \frac{1}{\sqrt{2(1 + \rho)}} & \frac{1}{\sqrt{2(1 + \rho)}} \\ \frac{1}{\sqrt{2(1 - \rho)}} & -\frac{1}{\sqrt{2(1 - \rho)}} \end{bmatrix}$$
Problem 3 (4.19 Siyao Gu)
Since $s_0[0] = s_1[0] = 1$, we can concentrate on planning the decision regions around $s_0[1]$ and $s_1[1]$. The test can be simplified to
$$T \sim \begin{cases} \mathcal{N}(-1, \sigma^2) & \text{under } \mathcal{H}_0 \\ \mathcal{N}(1, \sigma^2) & \text{under } \mathcal{H}_1 \end{cases} \qquad (1)$$
The minimum-probability-of-error test becomes
$$\frac{p(x; \mathcal{H}_1)}{p(x; \mathcal{H}_0)} \;\overset{\mathcal{H}_1}{\underset{\mathcal{H}_0}{\gtrless}}\; \frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)} = \gamma \qquad (2)$$
$$\frac{p(x; \mathcal{H}_1)}{p(x; \mathcal{H}_0)} = \frac{\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x[1] - 1)^2}{2\sigma^2}\right)}{\frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x[1] + 1)^2}{2\sigma^2}\right)} \qquad (3)$$
$$= \exp\left[-\frac{x^2[1] - 2x[1] + 1}{2\sigma^2} + \frac{x^2[1] + 2x[1] + 1}{2\sigma^2}\right] \qquad (4)$$
$$\frac{p(x; \mathcal{H}_1)}{p(x; \mathcal{H}_0)} = \exp\left[\frac{2x[1]}{\sigma^2}\right] \;\overset{\mathcal{H}_1}{\underset{\mathcal{H}_0}{\gtrless}}\; \frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)} \qquad (5)$$
$$x[1] \;\overset{\mathcal{H}_1}{\underset{\mathcal{H}_0}{\gtrless}}\; \frac{\sigma^2}{2}\ln\frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)} \qquad (6)$$
Thus the chosen decision boundary is the line running through $x[1] = \frac{\sigma^2}{2}\ln\frac{P(\mathcal{H}_0)}{P(\mathcal{H}_1)}$ and perpendicular to the line running between $\mathbf{s}_0$ and $\mathbf{s}_1$; in the $(x[0], x[1])$ plane this is a zero-slope (horizontal) line. If $P(\mathcal{H}_0) = P(\mathcal{H}_1)$, the boundary is $x[1] = 0$.
Problem 4 (4.24 Shu Wang)
According to the textbook, we have
$$T_i(\mathbf{x}) = \sum_{n=0}^{N-1}x[n]s_i[n] - \frac{1}{2}\mathcal{E}_i$$
and we choose $\mathcal{H}_i$ corresponding to the maximum statistic $T_i(\mathbf{x})$. The block diagram of the optimal receiver is on page 120, Figure 4.13.
When $M = 2$, according to Eq. 4.25 we have:
$$P_e = Q\left(\sqrt{\frac{\bar{\mathcal{E}}(1 - \rho_s)}{2\sigma^2}}\right)$$
If we want to minimize $P_e$, we need to minimize $\rho_s$:
$$\rho_s = \frac{\mathbf{s}_1^T\mathbf{s}_0}{\frac{1}{2}\left(\mathbf{s}_1^T\mathbf{s}_1 + \mathbf{s}_0^T\mathbf{s}_0\right)} = \frac{NA_0A_1}{\frac{1}{2}N\left(A_0^2 + A_1^2\right)} = \frac{2A_0A_1}{A_0^2 + A_1^2}, \qquad |\rho_s| \leq 1$$
So when $A_0 = -A_1$, $\rho_s = -1$ is the minimum, and then $P_e$ is minimized.
ECE 531: Detection and Estimation Theory, Spring 2011
Homework 11 Solutions
Problem 1. (5.14 Shu Wang)
From Eq. 5.5 and 5.6, we have:
$$T(\mathbf{x}) = \mathbf{x}^T\mathbf{C}_s\left(\mathbf{C}_s + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$
$$\mathbf{s} = A\mathbf{h}, \qquad \mathbf{C}_s = E[\mathbf{s}\mathbf{s}^T] = E[A^2]\,\mathbf{h}\mathbf{h}^T = \sigma_A^2\mathbf{h}\mathbf{h}^T$$
$$T(\mathbf{x}) = \mathbf{x}^T\sigma_A^2\mathbf{h}\mathbf{h}^T\left(\sigma_A^2\mathbf{h}\mathbf{h}^T + \sigma^2\mathbf{I}\right)^{-1}\mathbf{x}$$
By using the matrix inversion lemma,
$$(\mathbf{A} + \mathbf{B}\mathbf{C}\mathbf{D})^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1}\mathbf{B}\left(\mathbf{D}\mathbf{A}^{-1}\mathbf{B} + \mathbf{C}^{-1}\right)^{-1}\mathbf{D}\mathbf{A}^{-1}$$
Here we set $\mathbf{A} = \sigma^2\mathbf{I}$, $\mathbf{B} = \sigma_A^2\mathbf{h}$, $\mathbf{C} = \mathbf{I}$ and $\mathbf{D} = \mathbf{h}^T$. Then we get:
$$\left(\sigma^2\mathbf{I} + \sigma_A^2\mathbf{h}\mathbf{h}^T\right)^{-1} = \frac{1}{\sigma^2}\mathbf{I} - \frac{1}{\sigma^2}\cdot\frac{\frac{\sigma_A^2}{\sigma^2}\mathbf{h}\mathbf{h}^T}{1 + \frac{\sigma_A^2}{\sigma^2}\mathbf{h}^T\mathbf{h}}$$
$$T(\mathbf{x}) = \mathbf{x}^T\sigma_A^2\mathbf{h}\mathbf{h}^T\left(\frac{1}{\sigma^2}\mathbf{I} - \frac{1}{\sigma^2}\cdot\frac{\frac{\sigma_A^2}{\sigma^2}\mathbf{h}\mathbf{h}^T}{1 + \frac{\sigma_A^2}{\sigma^2}\mathbf{h}^T\mathbf{h}}\right)\mathbf{x} = \mathbf{x}^T\mathbf{h}\mathbf{h}^T\mathbf{x}\cdot\frac{\sigma_A^2}{\sigma^2}\left(1 - \frac{\frac{\sigma_A^2}{\sigma^2}\mathbf{h}^T\mathbf{h}}{1 + \frac{\sigma_A^2}{\sigma^2}\mathbf{h}^T\mathbf{h}}\right) = (\mathbf{h}^T\mathbf{x})^T(\mathbf{h}^T\mathbf{x})\,\frac{\sigma_A^2}{\sigma_A^2\mathbf{h}^T\mathbf{h} + \sigma^2} > \gamma$$
So an equivalent test is
$$T'(\mathbf{x}) = (\mathbf{h}^T\mathbf{x})^2 > \frac{\sigma_A^2\mathbf{h}^T\mathbf{h} + \sigma^2}{\sigma_A^2}\,\gamma = \gamma'$$
The distributions are
$$\mathbf{x} \sim \begin{cases} \mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I}) & \text{under } \mathcal{H}_0 \\ \mathcal{N}(\mathbf{0}, \mathbf{C}_s + \sigma^2\mathbf{I}) & \text{under } \mathcal{H}_1 \end{cases}
\qquad
\mathbf{h}^T\mathbf{x} \sim \begin{cases} \mathcal{N}(0, \sigma^2\mathbf{h}^T\mathbf{h}) & \text{under } \mathcal{H}_0 \\ \mathcal{N}\left(0, \sigma_A^2(\mathbf{h}^T\mathbf{h})^2 + \sigma^2\mathbf{h}^T\mathbf{h}\right) & \text{under } \mathcal{H}_1 \end{cases}$$
According to Chapter 2, under $\mathcal{H}_0$ we have
$$\frac{(\mathbf{h}^T\mathbf{x})^2}{\sigma^2\mathbf{h}^T\mathbf{h}} \sim \chi_1^2$$
$$P_{FA} = \Pr\{T'(\mathbf{x}) > \gamma'; \mathcal{H}_0\} = \Pr\left\{\frac{T'(\mathbf{x})}{\sigma^2\mathbf{h}^T\mathbf{h}} > \frac{\gamma'}{\sigma^2\mathbf{h}^T\mathbf{h}}; \mathcal{H}_0\right\}$$
Also from Chapter 2 we know that $Q_{\chi_1^2}(x) = 2Q(\sqrt{x})$. Then
$$P_{FA} = 2Q\left(\sqrt{\frac{\gamma'}{\sigma^2\mathbf{h}^T\mathbf{h}}}\right)$$
Similarly, under $\mathcal{H}_1$,
$$\frac{(\mathbf{h}^T\mathbf{x})^2}{\sigma_A^2(\mathbf{h}^T\mathbf{h})^2 + \sigma^2\mathbf{h}^T\mathbf{h}} \sim \chi_1^2$$
$$P_D = \Pr\{T'(\mathbf{x}) > \gamma'; \mathcal{H}_1\} = \Pr\left\{\frac{T'(\mathbf{x})}{\sigma_A^2(\mathbf{h}^T\mathbf{h})^2 + \sigma^2\mathbf{h}^T\mathbf{h}} > \frac{\gamma'}{\sigma_A^2(\mathbf{h}^T\mathbf{h})^2 + \sigma^2\mathbf{h}^T\mathbf{h}}; \mathcal{H}_1\right\} = 2Q\left(\sqrt{\frac{\gamma'}{\sigma_A^2(\mathbf{h}^T\mathbf{h})^2 + \sigma^2\mathbf{h}^T\mathbf{h}}}\right)$$
Problem 5.16 for Avinash (book)
Problem 2. (5.17 Yao Feng)
The deflection coefficient is defined as
$$d^2 = \frac{\left(E(T; \mathcal{H}_1) - E(T; \mathcal{H}_0)\right)^2}{\mathrm{Var}(T; \mathcal{H}_0)}$$
$$E(T; \mathcal{H}_1) = \sum_{n=0}^{N-1}E\left[\left(A\cos(2\pi f_0 n + \phi) + w[n]\right)A\cos(2\pi f_0 n)\right] = \cos\phi\sum_{n=0}^{N-1}A^2\cos^2(2\pi f_0 n) - \sin\phi\sum_{n=0}^{N-1}A^2\cos(2\pi f_0 n)\sin(2\pi f_0 n) = \frac{NA^2}{2}\cos\phi$$
$$E(T; \mathcal{H}_0) = \sum_{n=0}^{N-1}E\left[w[n]A\cos(2\pi f_0 n)\right] = 0$$
$$\mathrm{Var}(T; \mathcal{H}_0) = \mathrm{Var}\left(\sum_{n=0}^{N-1}w[n]A\cos(2\pi f_0 n)\right) = \sum_{n=0}^{N-1}\mathrm{Var}\left(w[n]A\cos(2\pi f_0 n)\right) = \sigma^2\sum_{n=0}^{N-1}A^2\cos^2(2\pi f_0 n) = \frac{NA^2\sigma^2}{2}$$
So
$$d^2 = \frac{\left(\frac{NA^2}{2}\cos\phi\right)^2}{\frac{NA^2\sigma^2}{2}} = \frac{NA^2}{2\sigma^2}\cos^2\phi$$
We can see that if $\phi = 0$, which means the assumed signal matches the one actually sent, we get the maximum $d^2$ and hence the maximum $P_D$; if $\phi = \pi/2$, so that the truly sent signal is $-A\sin(2\pi f_0 n)$, then $d^2 = 0$ and we get the minimum $P_D$.
Problem 3. (6.2 Shu Wang)
$$L(\mathbf{x}) = \frac{p(x[0], x[1]; \mathcal{H}_1)}{p(x[0], x[1]; \mathcal{H}_0)} = \frac{\lambda^2 e^{-\lambda(x[0] + x[1])}}{\lambda_0^2 e^{-\lambda_0(x[0] + x[1])}} > \gamma$$
$$e^{-(\lambda - \lambda_0)(x[0] + x[1])} > \gamma\frac{\lambda_0^2}{\lambda^2}$$
$$-(\lambda - \lambda_0)(x[0] + x[1]) > \ln\left(\gamma\frac{\lambda_0^2}{\lambda^2}\right)$$
If $\lambda > \lambda_0$, we decide $\mathcal{H}_1$ when
$$T(\mathbf{x}) = x[0] + x[1] < -\frac{\ln\left(\gamma\frac{\lambda_0^2}{\lambda^2}\right)}{\lambda - \lambda_0} = \gamma'$$
$$P_{FA} = \Pr\{T(\mathbf{x}) < \gamma'; \mathcal{H}_0\}$$
The region $T(\mathbf{x}) < \gamma'$ is the triangle in the $(x[0], x[1])$ plane bounded by the two positive axes and the line $x[0] + x[1] = \gamma'$, so
$$P_{FA} = \int_0^{\gamma'}\int_0^{\gamma' - x[0]}\lambda_0^2 e^{-\lambda_0(x[0] + x[1])}\,dx[1]\,dx[0] = \int_0^{\gamma'}\left[-\lambda_0 e^{-\lambda_0(x[0] + x[1])}\right]_0^{\gamma' - x[0]}dx[0] = \int_0^{\gamma'}\left(\lambda_0 e^{-\lambda_0 x[0]} - \lambda_0 e^{-\lambda_0\gamma'}\right)dx[0] = 1 - e^{-\lambda_0\gamma'} - \lambda_0\gamma' e^{-\lambda_0\gamma'}$$
For a given $P_{FA}$, the threshold $\gamma'$ does not depend on the unknown parameter $\lambda$, so the UMP test exists (for the one-sided alternative $\lambda > \lambda_0$).