I. Summary of last lecture
A communication system can be modeled by a block diagram
Figure 1: Block diagram of communication system
We discussed:
The Communication Problem: What is the message?
The Radar Problem: Is there a target, yes or no?
The Estimation Problem: What is the value of an unknown parameter?
The Communication Problem
A digital communication system transmits information by sending ones and zeros, where "one" is sent under hypothesis H_1 and "zero" under hypothesis H_0. The a priori probabilities of transmission are known:

P(one was transmitted) = P(H_1) = p_1
P(zero was transmitted) = P(H_0) = p_0 = 1 - p_1

Given P(H_1), P(H_0), and P(received signal | H_i), i = 0, 1, design an optimal receiver in the sense of minimum probability of error.
The Communication Problem
The cost function is the probability of error:

P_e = P(H_1) P(error | H_1) + P(H_0) P(error | H_0)

The communication problem: given P(H_1), P(H_0), and P(received signal | H_i), i = 0, 1, design an optimal receiver in the sense of minimum probability of error.
A Simple Model of a Communication System

r(t) = x(t) * h(t) + n(t),   x(t) = Σ_i s_i s(t - iT)

where s = (s_1, s_2, ..., s_i, ...) is the transmitted symbol sequence and ŝ = (ŝ_1, ŝ_2, ..., ŝ_i, ...) is the receiver's estimate of it.
Simple Case: The Binary Channel, the Decision Rule
The input takes values in the set A = {0, 1}, with P(a = 0) = q and P(a = 1) = 1 - q.
The output takes values in the set B = {x, y}.
The transition probabilities of the channel are 1 - p and p.
[Figure: binary channel diagram; inputs 0, 1; outputs x, y; crossover probability p; priors q and 1 - q]
What is the decision rule in order to minimize the error?
The Binary Channel, the Decision Rule
D(x) is the decision made when x was received:
If D(x) = 0, then P(e | x) = (1 - q) p
If D(x) = 1, then P(e | x) = q (1 - p)
What is the decision rule in order to minimize the error?
[Figure: the same channel diagram, with the candidate decision D(x) = 0 marked]
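For instance, with the illustrative values q = 0.6 and p = 0.2 (chosen here only for the arithmetic): deciding D(x) = 0 gives P(e | x) = (1 - q) p = 0.4 · 0.2 = 0.08, while D(x) = 1 gives P(e | x) = q (1 - p) = 0.6 · 0.8 = 0.48, so D(x) = 0 is the better decision for this output.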
How to Make the Decision
Let us minimize the probability of error when x was received. Decide D(x) = 0 when

(1 - q) P(r = x | 1) < q P(r = x | 0),   i.e.   (1 - q) p < q (1 - p),

and D(x) = 1 otherwise. Equivalently,

p / (1 - p)  <  q / (1 - q)   =>   D(x) = 0

where q / (1 - q) is the ratio of a priori probabilities.
The Decision Rule for y
The same type of decision rule applies: decide D(y) = 1 when

q P(r = y | 0) < (1 - q) P(r = y | 1),   i.e.   q p < (1 - q)(1 - p),

or equivalently

p / (1 - p)  <  (1 - q) / q   =>   D(y) = 1
The Probability of Error
The error probability depends on the decision rule. Define the indicator

W(a, b) = 0 if a = b,   W(a, b) = 1 if a ≠ b.

Then

P_e = Σ_k [ p(x, k) W(D(x), k) + p(y, k) W(D(y), k) ]
    = p_0 [ p(x | 0) W(D(x), 0) + p(y | 0) W(D(y), 0) ]
    + p_1 [ p(x | 1) W(D(x), 1) + p(y | 1) W(D(y), 1) ]

with p_0 = q and p_1 = 1 - q.
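This calculation is easy to script. Below is a minimal MATLAB sketch (in the style of the MATLAB example later in the lecture); the values of q and p are illustrative assumptions, not values from the lecture:

% MAP decision rule and exact error probability for the binary channel.
q = 0.6;   % a priori probability of input 0 (assumed value)
p = 0.2;   % channel transition probability (assumed value)
% Joint probabilities [P(output, input = 0), P(output, input = 1)].
Px = [q*(1-p), (1-q)*p];          % output x
Py = [q*p, (1-q)*(1-p)];          % output y
% MAP rule: decide the input with the larger joint probability.
[~, ix] = max(Px); Dx = ix - 1;   % D(x)
[~, iy] = max(Py); Dy = iy - 1;   % D(y)
% Pe sums the joint probabilities of the inputs that were not decided.
Pe = Px(2 - Dx) + Py(2 - Dy);
fprintf('D(x) = %d, D(y) = %d, Pe = %.4f\n', Dx, Dy, Pe);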
1-D Detection Problem / Binary Case
Given a communication system with the following two hypotheses:

H_0: r = s_0 + n
H_1: r = s_1 + n

where s_i, i = 0, 1, are known constants which are transmitted with a priori probabilities p_i, i = 0, 1:

P(one was transmitted) = P(H_1) = p_1
P(zero was transmitted) = P(H_0) = p_0 = 1 - p_1

and n is a discrete or continuous random variable with a known probability density function:

P(x < n < x + dx) = f_n(x) dx
MATLAB code
% Parameters.
Delay = 3; DataL = 20; R = .5; Fs = 8; Fd = 1; PropD = 0;
% Generate random data (randsrc is in the Communications Toolbox).
x = .75*sign(randsrc(DataL, 1, [], 1245));
% at time 0, 1/Fd, 2/Fd, ...
tx = [PropD : PropD + DataL - 1] ./ Fd;
% Noise samples (nx was undefined in the original; uniform on [0,1] is
% assumed here, so that nx - 1/2 is zero-mean).
nx = rand(DataL, 1);
% Noisy data.
y = x + (nx - 1/2);
% Plot data (axes are set after each stem call so the limits are kept).
figure
stem(tx, x, 'kx');
axis([0 30 -1.6 1.6]); xlabel('Time'); ylabel('Amplitude');
% Plot noise.
figure
stem(tx, nx, 'kx');
axis([0 30 -1.6 1.6]); xlabel('Time'); ylabel('Amplitude');
% Plot noisy data.
figure
stem(tx, y, 'kx');
axis([0 30 -1.6 1.6]); xlabel('Time'); ylabel('Amplitude');
Example
[Figure: stem plots of the data, the noise, and the noisy data versus time; y-axis: Amplitude]
Objective of the Decision Problem
The cost function is the probability of error:

P_e = P(H_1) P(error | H_1) + P(H_0) P(error | H_0)

Given P(H_1), P(H_0), and P(received signal | H_i), i = 0, 1, design an optimal receiver in the sense of minimum probability of error.
Bayes Decision Rule
The channel maps the message set A = {s_0, s_1} to a received signal r. The receiver has to decide, for each value of r, either H_0 or H_1.

D(r) = H_0 means the receiver decides H_0 when r was received. Let us denote the decision regions by

A_0(r) = { r : D(r) = H_0 }
A_1(r) = { r : D(r) = H_1 }

[Figure: the r axis partitioned into the regions A_0(r) and A_1(r)]
The Subsets
The subsets A_1(r) and A_0(r) satisfy:
1. A_1(r) ∩ A_0(r) = ∅ (we must make a single decision, either 0 or 1)
2. A_1(r) ∪ A_0(r) = the whole observation space of r (for each value of r we have to make a decision)
How to Make the Decision?
Find the optimal receiver (in the sense of minimum probability of error) which makes a decision based on the observation of r such that P_e is minimal:

P_e = P(H_1 was decided | H_0 sent) P(H_0 sent) + P(H_0 was decided | H_1 sent) P(H_1 sent)
    = p_0 P(r ∈ A_1(r) | H_0 sent) + p_1 P(r ∈ A_0(r) | H_1 sent)

and P_c = 1 - P_e.
The General Decision Problem
We have the following relations:

P(H_0 | H_0) = ∫_{A_0(r)} f(r | H_0) dr = P(r ∈ A_0(r) | H_0)
P(H_1 | H_0) = ∫_{A_1(r)} f(r | H_0) dr = 1 - P(H_0 | H_0) = P(r ∈ A_1(r) | H_0)
P(H_1 | H_1) = ∫_{A_1(r)} f(r | H_1) dr = P(r ∈ A_1(r) | H_1)
P(H_0 | H_1) = ∫_{A_0(r)} f(r | H_1) dr = 1 - P(H_1 | H_1) = P(r ∈ A_0(r) | H_1)
The Probability of Correct Decision

P_c = P(H_0) P(H_0 | H_0) + P(H_1) P(H_1 | H_1)
    = P(H_0) ∫_{A_0(r)} f(r | H_0) dr + P(H_1) ∫_{A_1(r)} f(r | H_1) dr

We need to maximize it!
The Optimal Decision
Now we must choose the decision regions for H_0 and H_1 in such a manner that the probability of a correct decision will be maximized.
Let us define the function

G(r) = 1 if r ∈ A_0(r),   G(r) = 0 if r ∈ A_1(r).

Thus

P_c = P(H_0) P(H_0 | H_0) + P(H_1) P(H_1 | H_1)
    = P(H_0) ∫ G(r) f(r | H_0) dr + P(H_1) ∫ (1 - G(r)) f(r | H_1) dr
The Result
Now we can write everything under the same integral:

P_c = P(H_0) ∫_r G(r) f(r | H_0) dr + P(H_1) ∫_r (1 - G(r)) f(r | H_1) dr
    = ∫_r [ P(H_0) G(r) f(r | H_0) + P(H_1) (1 - G(r)) f(r | H_1) ] dr

The probability of a correct decision will be maximized if for every r the integrand is maximized. Therefore the decision rule is:

A_0(r) = { r : P(H_0) f(r | H_0) > P(H_1) f(r | H_1) }
A_1(r) = { r : P(H_0) f(r | H_0) ≤ P(H_1) f(r | H_1) }
Likelihood Ratio Test
An equivalent test is

log_e Λ(r)  ≷  log_e η

(here and below, ≷ means: decide H_1 when the left side is larger, H_0 otherwise). It is equivalent because log_e is strictly increasing, so taking logarithms preserves the inequality. Alternately, we may write

f(r | H_1) / f(r | H_0)  ≷  P(H_0) / P(H_1)

This is called a Bayes test. The quantity on the left is called the likelihood ratio and is denoted by

Λ(r) = f(r | H_1) / f(r | H_0)

The quantity on the right is the threshold

η = P(H_0) / P(H_1)
The Optimal Receiver
The optimal receiver can be implemented as follows:

log_e Λ(r)  ≷  log_e [ P(H_0) / P(H_1) ]

and the probability of error is given by

P_e = P(H_0) ∫_{log_e Λ(r) > log_e η} f(r | H_0) dr + P(H_1) ∫_{log_e Λ(r) < log_e η} f(r | H_1) dr
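As a sketch of how this receiver looks in code, the following MATLAB fragment implements the log-likelihood ratio test with function handles; the two densities (exponential, with rates 1 and 2) and the equal priors are illustrative assumptions, not the lecture's example:

% Log-likelihood-ratio receiver; densities and priors are assumed for illustration.
p0 = 0.5; p1 = 1 - p0;             % P(H0), P(H1)
f0 = @(r) exp(-r);                 % f(r|H0): exponential, rate 1 (assumption)
f1 = @(r) 2*exp(-2*r);             % f(r|H1): exponential, rate 2 (assumption)
llr    = @(r) log(f1(r) ./ f0(r)); % log-likelihood ratio
thresh = log(p0 / p1);             % threshold: log P(H0)/P(H1)
% Monte Carlo estimate of the error probability of this receiver.
N = 1e5;
H = rand(N,1) < p1;                % true hypothesis per trial
r = -log(rand(N,1)) ./ (1 + H);    % exponential samples: rate 1 under H0, rate 2 under H1
Pe_sim = mean((llr(r) > thresh) ~= H);
fprintf('Simulated Pe = %.4f\n', Pe_sim);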
Comments
The optimal decision rule is a function of the a priori probabilities of the source and of:
the conditional probabilities, for discrete outputs
the conditional density functions, for the case that the output is a continuous random variable

We did not use the assumption that the channel is additive; we actually gave the solution for the more general case

r = g(s_i, n),   i = 0, 1.
Example 1: The Additive Gaussian Random Noise
Assumptions:
Source:
H_1: s_1 = +s,  P(H_1) = 1 - p
H_0: s_0 = -s,  P(H_0) = p

Channel:
H_1: r = s + n
H_0: r = -s + n

Noise: n is a Gaussian random variable independent of the source, i.e. N(0, σ²):

f_n(n) = (1 / √(2πσ²)) exp(-n² / 2σ²)

Note that

f(r | H_1) = f(r = s + n | H_1) = f_n(r - s) = (1 / √(2πσ²)) exp(-(r - s)² / 2σ²)
The Optimal Receiver
The decision rule can also be written as

log_e Λ(r) = log_e [ exp(-(r - s)² / 2σ²) / exp(-(r + s)² / 2σ²) ]
           = -(r - s)² / 2σ² + (r + s)² / 2σ² = 2 r s / σ²

so log_e Λ(r) ≷ log_e η becomes

r  ≷  (σ² / 2s) log_e [ p / (1 - p) ]
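A short MATLAB sketch of this receiver (s, σ, and p below are illustrative assumptions):

% Threshold receiver for the antipodal Gaussian example.
s = 1; sigma = 0.8; p = 0.7;              % assumed signal, noise std, P(H0)
T = sigma^2/(2*s) * log(p/(1-p));         % the threshold of this slide
N = 1e5;
H = rand(N,1) >= p;                       % H = 1 with probability 1 - p
r = s*(2*H - 1) + sigma*randn(N,1);       % r = -s + n under H0, +s + n under H1
Hd = r > T;                               % decide H1 when r exceeds T
fprintf('T = %.3f, simulated Pe = %.4f\n', T, mean(Hd ~= H));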
The Decision Region for p = 1/2
[Figure: the densities f_{r|H_0}(r) and f_{r|H_1}(r), centered at s_0 = -s and s_1 = +s; for p = 1/2 the threshold lies at 0, and the shaded tails are Pr{error | H_1} and Pr{error | H_0}]
Error Probability for P[e | H_0]
Let us first compute P[e | H_0]. The test is

r  ≷  T = (σ² / 2s) log_e [ p / (1 - p) ]

so an error under H_0 occurs when r > T:

P[e | H_0] = ∫_T^∞ f(r | H_0) dr = ∫_T^∞ (1 / √(2πσ²)) exp(-(r + s)² / 2σ²) dr

Change variables: z = (r + s) / σ, dz = dr / σ. The lower bound is replaced by

(T + s) / σ = (σ / 2s) log_e [ p / (1 - p) ] + s / σ

P[e | H_0] = ∫_{(T+s)/σ}^∞ (1 / √(2π)) exp(-z² / 2) dz = Q( (T + s) / σ )
Error Probability for P[e | H_1]
Let us now compute P[e | H_1]. An error under H_1 occurs when r < T:

P[e | H_1] = ∫_{-∞}^T f(r | H_1) dr = ∫_{-∞}^T (1 / √(2πσ²)) exp(-(r - s)² / 2σ²) dr

Change variables: z = (r - s) / σ, dz = dr / σ. The upper bound is replaced by

(T - s) / σ = (σ / 2s) log_e [ p / (1 - p) ] - s / σ

P[e | H_1] = ∫_{-∞}^{(T-s)/σ} (1 / √(2π)) exp(-z² / 2) dz = Q( (s - T) / σ )
The Total Probability of Error

P_e = P(H_0) P(e | H_0) + P(H_1) P(e | H_1)
    = P[H_0] Q( (T + s) / σ ) + P[H_1] Q( (s - T) / σ )

With the threshold T = (σ² / 2s) log_e [ p / (1 - p) ]:

P_e = p Q( s/σ + (σ/2s) log_e [p / (1 - p)] ) + (1 - p) Q( s/σ - (σ/2s) log_e [p / (1 - p)] )

Let d = 2s denote the Euclidean distance between the two symbols. Then

P_e = p Q( d/2σ + (σ/d) log_e [p / (1 - p)] ) + (1 - p) Q( d/2σ - (σ/d) log_e [p / (1 - p)] )
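Continuing the sketch above, the closed-form expression can be checked against simulation (same illustrative parameters; Q(x) is computed from erfc, which is available in base MATLAB):

% Closed-form Pe versus Monte Carlo for the antipodal Gaussian example.
s = 1; sigma = 0.8; p = 0.7; d = 2*s;     % assumed parameters; d = symbol distance
Q = @(x) 0.5*erfc(x/sqrt(2));             % Q(x) = (1/2) Erfc(x/sqrt(2))
T  = sigma^2/(2*s) * log(p/(1-p));
Pe = p*Q(d/(2*sigma) + (sigma/d)*log(p/(1-p))) ...
   + (1-p)*Q(d/(2*sigma) - (sigma/d)*log(p/(1-p)));
N = 1e6;
H = rand(N,1) >= p;
r = s*(2*H - 1) + sigma*randn(N,1);
fprintf('Formula Pe = %.5f, simulated Pe = %.5f\n', Pe, mean((r > T) ~= H));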
Conclusions (I)
For p = 1/2:

P_e = Q( d / 2σ )

When p > 1/2 the decision line moves to the right; when p < 1/2, to the left.
[Figure: densities f_{r|H_0}(r) and f_{r|H_1}(r) centered at s_0 and s_1; the threshold sits at 0 for p = 1/2 and shifts with p]
Conclusions (II)
The error probability depends on the distance between the two symbols and not on their actual values:

H_0: r = s_0 + n   is equivalent to   H_0: r = -A + n
H_1: r = s_1 + n   is equivalent to   H_1: r = +A + n

for A = (s_1 - s_0) / 2 (subtract the midpoint (s_0 + s_1)/2 from r to see the equivalence). The problem is equivalent to the symmetric one with d = 2A.
[Figure: the shifted densities; both problems have the same error probability]
On the Q(x) Function
Q(x) is called the Q function or Gaussian integral function:

Q(x) = Prob[n > x] = ∫_x^∞ (1 / √(2π)) exp(-z² / 2) dz,   n = N(0, 1)

Q(x) is related to the complementary error function Erfc(x) as follows:

Erfc(x) = (2 / √π) ∫_x^∞ exp(-z²) dz,   Q(x) = (1/2) Erfc(x / √2)

There are several well-known bounds (*) on Q(x):

Q(x) ≤ min( 1/2, (1 / (x √(2π))) exp(-x² / 2) )
Q(x) ≥ (1 / (x √(2π))) (1 - 1/x²) exp(-x² / 2)

(*) P. O. Borjesson and Carl-Erik W. Sundberg, "Simple Approximations of the Error Function Q(x) for Communication Applications," IEEE Trans. on Comm., COM-27, no. 3, pp. 639-642, March 1979.
The Error Probability Function Q(x)

Q(x) = (1 / √(2π)) ∫_x^∞ exp(-z² / 2) dz

Q(x) ≤ (1 / (x √(2π))) exp(-x² / 2)

Q(x) ≤ (1/2) exp(-x² / 2)
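The bounds are easy to tabulate in MATLAB (a small sketch; erfc is in base MATLAB, so no toolbox is needed):

% Q(x) versus the two upper bounds above.
x  = (0.5:0.5:5)';
Qx = 0.5*erfc(x/sqrt(2));                 % Q(x) = (1/2) Erfc(x/sqrt(2))
b1 = exp(-x.^2/2) ./ (x*sqrt(2*pi));      % (1/(x sqrt(2 pi))) exp(-x^2/2)
b2 = 0.5*exp(-x.^2/2);                    % (1/2) exp(-x^2/2)
disp([x Qx b1 b2])                        % columns: x, Q(x), bound 1, bound 2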
Example 2: The Poisson Channel
The Poisson distribution of events is used frequently as a model of shot noise and other diverse phenomena. We have to design a light detector using a photon counter with the following probability law:

Prob[n pulses] = exp(-μ) μ^n / n!,   E(n) = μ

H_0: f(n | H_0) = exp(-μ_0) μ_0^n / n!
H_1: f(n | H_1) = exp(-μ_1) μ_1^n / n!

We observe only this number, which ranges from 0 to ∞ and obeys a Poisson distribution under both hypotheses. The a priori probabilities are known:

μ_1 > μ_0,   P(H_0) = P(H_1) = 1/2
Poisson Case: The Decision Rule
Find the likelihood ratio:

Λ(n) = f(n | H_1) / f(n | H_0) = [exp(-μ_1) μ_1^n / n!] / [exp(-μ_0) μ_0^n / n!] = (μ_1 / μ_0)^n exp(-(μ_1 - μ_0))

The decision rule:

log_e Λ(n) = n log_e(μ_1 / μ_0) - (μ_1 - μ_0)  ≷  log_e [ P(H_0) / P(H_1) ] = log_e η

or

n  ≷  [ (μ_1 - μ_0) + log_e η ] / [ log_e μ_1 - log_e μ_0 ]

(here log_e η = 0, since the priors are equal).
Poisson Case: The Probability of Error
The probability of error is given by

P_E = P(H_0) Σ_{n ≥ N*} exp(-μ_0) μ_0^n / n!  +  P(H_1) Σ_{n < N*} exp(-μ_1) μ_1^n / n!

where N* is the solution of

N* = min( n : n > [ (μ_1 - μ_0) + log_e η ] / [ log_e μ_1 - log_e μ_0 ] )
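A MATLAB sketch of the Poisson detector (μ_0 and μ_1 are illustrative assumptions; the pmf is written with gammaln so that no toolbox is needed and large n does not overflow):

% Poisson channel: threshold N* and error probability.
mu0 = 2; mu1 = 8;                         % assumed intensities, mu1 > mu0
eta = 1;                                  % P(H0)/P(H1) = 1 for equal priors
% Smallest integer strictly above the threshold.
Nstar = floor((mu1 - mu0 + log(eta)) / (log(mu1) - log(mu0))) + 1;
n   = (0:100)';                           % the tail beyond 100 is negligible here
pmf = @(mu) exp(n*log(mu) - mu - gammaln(n + 1));   % Poisson pmf over n
p0 = pmf(mu0); p1 = pmf(mu1);
PE = 0.5*sum(p0(n >= Nstar)) + 0.5*sum(p1(n < Nstar));
fprintf('N* = %d, P_E = %.4f\n', Nstar, PE);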
Example 3: Random Source
Assumptions:
Source:
H_0: s_0 = +a with probability 1/2, -a with probability 1/2;   P(H_0) = 1/2
H_1: s_1 = +3a with probability 1/2, -3a with probability 1/2;   P(H_1) = 1/2

Channel:
H_0: r = s_0 + n
H_1: r = s_1 + n

Noise: n is a Gaussian random variable independent of the source, i.e. N(0, σ²):

f_n(n) = (1 / √(2πσ²)) exp(-n² / 2σ²)

[Figure: the four symbol locations -3a, -a, a, 3a on the r axis]
The Optimal Receiver
The decision rule can also be written as

Λ(r) = [ exp(-(r - 3a)²/2σ²) + exp(-(r + 3a)²/2σ²) ] / [ exp(-(r - a)²/2σ²) + exp(-(r + a)²/2σ²) ]  ≷  P(H_0)/P(H_1) = 1

(the 1/2 mixture weights and the Gaussian normalization cancel). Factoring exp(-(r² + 9a²)/2σ²) out of the numerator and exp(-(r² + a²)/2σ²) out of the denominator:

Λ(r) = exp(-4a²/σ²) cosh(3ra/σ²) / cosh(ra/σ²)  ≷  1

For |ra/σ²| >> 1 we have cosh(3ra/σ²)/cosh(ra/σ²) ≈ exp(2|r|a/σ²), so the test reduces approximately to

|r|  ≷  2a
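A sketch of this receiver in MATLAB (a and σ are illustrative assumptions), comparing the exact test with the approximate magnitude rule |r| > 2a:

% LRT for the random-source example and its |r| > 2a approximation.
a = 1; sigma = 0.5;                           % assumed amplitude and noise std
lrt = @(r) exp(-4*a^2/sigma^2) .* cosh(3*r*a/sigma^2) ./ cosh(r*a/sigma^2);
N = 1e5;
H = rand(N,1) < 0.5;                          % equal priors
sgn = 2*randi([0 1], N, 1) - 1;               % random symbol sign
r = sgn .* (a + 2*a*H) + sigma*randn(N,1);    % +-a under H0, +-3a under H1
Pe_lrt = mean((lrt(r) > 1) ~= H);             % exact likelihood-ratio test
Pe_mag = mean((abs(r) > 2*a) ~= H);           % magnitude-rule approximation
fprintf('Pe (LRT) = %.4f, Pe (|r|>2a) = %.4f\n', Pe_lrt, Pe_mag);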
The Decision Region
[Figure: decision regions on the r axis, with symbol locations -3a, -a, a, 3a and shaded areas showing Pr{error | H_1} and Pr{error | H_0}]
Summary
A solution for the binary decision case

H_0: r = g(s_0, n)
H_1: r = g(s_1, n)

The log-likelihood ratio is

log_e Λ(r) = log_e [ f(r | H_1) / f(r | H_0) ]

and the optimal receiver is

log_e Λ(r)  ≷  log_e [ P(H_0) / P(H_1) ]