Table of Contents

5.1 Optimum Receiver for Signals Corrupted by Additive White Gaussian Noise
    5.1.1 Correlation Demodulator
    5.1.2 Matched-Filter Demodulator
    5.1.3 The Optimum Detector
    5.1.4 The Maximum-Likelihood Sequence Detector
    5.1.5 A Symbol-by-Symbol MAP Detector for Signals with Memory
5.2 Performance of the Optimum Receiver for Memoryless Modulation
    5.2.3 Probability of Error for M-ary Biorthogonal Signals
    5.2.4 Probability of Error for Simplex Signals
    5.2.5 Probability of Error for M-ary Binary-Coded Signals
    5.2.6 Probability of Error for M-ary PAM
    5.2.7 Probability of Error for M-ary PSK
    5.2.8 Differential PSK (DPSK) and Its Performance
    5.2.9 Probability of Error for QAM
    5.2.10 Comparison of Digital Modulation Methods
5.3 Optimum Receiver for CPM Signals
    5.3.3 Symbol-by-Symbol Detection of CPM Signals
    5.3.4 Suboptimum Demodulation and Detection of CPM Signals
5.4 Optimum Receiver for Signals with Random Phase in AWGN Channel
    5.4.1 Optimum Receiver for Binary Signals
    5.4.2 Optimum Receiver for M-ary Orthogonal Signals
    5.4.3 Probability of Error for Envelope Detection of M-ary Orthogonal Signals
    5.4.4 Probability of Error for Envelope Detection of Correlated Binary Signals
5.5 Performance Analysis for Wireline and Radio Communication Systems
    5.5.1 Regenerative Repeaters
    5.5.2 Link Budget Analysis in Radio Communication Systems

Fall, 2004.
5.1 Optimum Receiver for Signals Corrupted by Additive White Gaussian Noise
We assume that the transmitter sends digital information by use of M signal waveforms {s_m(t), m = 1, 2, ..., M}. Each waveform is transmitted within the symbol interval of duration T, i.e., 0 ≤ t ≤ T. The channel is assumed to corrupt the signal by the addition of white Gaussian noise, as shown in the following figure:

$$ r(t) = s_m(t) + n(t), \qquad 0 \le t \le T $$

where n(t) denotes a sample function of the AWGN process with power spectral density $\Phi_{nn}(f) = \tfrac{1}{2}N_0$ W/Hz.
Our objective is to design a receiver that is optimum in the sense that it minimizes the probability of making an error. It is convenient to subdivide the receiver into two parts: the signal demodulator and the detector. The function of the signal demodulator is to convert the received waveform r(t) into an N-dimensional vector r = [r_1 r_2 ... r_N], where N is the dimension of the transmitted signal waveforms. The function of the detector is to decide which of the M possible signal waveforms was transmitted, based on the vector r.
Two realizations of the signal demodulator are described in the next two sections: One is based on the use of signal correlators. The second is based on the use of matched filters. The optimum detector that follows the signal demodulator is designed to minimize the probability of error.
$$ r_k = \int_0^T r(t)\, f_k(t)\,dt = \int_0^T \left[ s_m(t) + n(t) \right] f_k(t)\,dt $$

$$ r_k = s_{mk} + n_k, \qquad k = 1, 2, \ldots, N $$

The signal is now represented by the vector $\mathbf{s}_m$ with components $s_{mk}$, $k = 1, 2, \ldots, N$. Their values depend on which of the M signals was transmitted.
$$ r(t) = \sum_{k=1}^{N} r_k f_k(t) + n'(t) $$

where

$$ n'(t) = n(t) - \sum_{k=1}^{N} n_k f_k(t) $$

is a zero-mean Gaussian noise process that represents the difference between the original noise process n(t) and the part corresponding to the projection of n(t) onto the basis functions {f_k(t)}.
The noise has power spectral density $\Phi_{nn}(f) = \tfrac{1}{2}N_0$ W/Hz, so its autocorrelation is $E[n(t)n(\tau)] = \tfrac{1}{2}N_0\,\delta(t-\tau)$, and

$$ E(n_k n_m) = \int_0^T \int_0^T E[n(t)n(\tau)]\, f_k(t) f_m(\tau)\,dt\,d\tau $$

$$ = \frac{1}{2} N_0 \int_0^T \int_0^T \delta(t-\tau)\, f_k(t) f_m(\tau)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T f_k(t) f_m(t)\,dt = \frac{1}{2} N_0\, \delta_{mk} $$

Conclusion: the N noise components {n_k} are zero-mean, uncorrelated Gaussian random variables with a common variance $\sigma_n^2 = \tfrac{1}{2}N_0$.
Since the noise components {n_k} are uncorrelated Gaussian random variables, they are also statistically independent, and the conditional PDF factors as

$$ p(\mathbf{r} \mid \mathbf{s}_m) = \prod_{k=1}^{N} p(r_k \mid s_{mk}), \qquad m = 1, 2, \ldots, M \quad \text{----(A)} $$

where

$$ p(r_k \mid s_{mk}) = \frac{1}{\sqrt{\pi N_0}} \exp\left[ -\frac{(r_k - s_{mk})^2}{N_0} \right], \qquad k = 1, 2, \ldots, N \quad \text{---(B)} $$

By substituting Equation (B) into Equation (A), we obtain the joint conditional PDFs

$$ p(\mathbf{r} \mid \mathbf{s}_m) = \frac{1}{(\pi N_0)^{N/2}} \exp\left[ -\sum_{k=1}^{N} \frac{(r_k - s_{mk})^2}{N_0} \right], \qquad m = 1, 2, \ldots, M $$
= E [ n(t )n( ) ] f k ( )d E (n j nk ) f j (t )
T 0 j =1
nk = n(t ) f k (t )dt
0
= =
N 1 1 N 0 ( t ) f k ( )d N 0 jk f j (t ) 2 j =1 2
1 1 N 0 f k (t ) N 0 f k (t ) = 0 2 2
15
Q.E.D.
$$ \mathcal{E}_g = \int_0^T g^2(t)\,dt = \int_0^T a^2\,dt = a^2 T $$
Instead of using a bank of N correlators, we may pass the received signal through a bank of N linear filters with impulse responses

$$ h_k(t) = f_k(T - t), \qquad 0 \le t \le T $$

where {f_k(t)} are the N basis functions and h_k(t) = 0 outside of the interval 0 ≤ t ≤ T. The outputs of these filters are

$$ y_k(t) = r(t) * h_k(t) = \int_0^t r(\tau)\, h_k(t - \tau)\,d\tau = \int_0^t r(\tau)\, f_k(T - t + \tau)\,d\tau, \qquad k = 1, 2, \ldots, N $$
If we sample the outputs of the filters at t = T, we obtain $y_k(T) = \int_0^T r(\tau) f_k(\tau)\,d\tau = r_k$, $k = 1, 2, \ldots, N$; that is, the sampled matched-filter outputs are identical to the correlator outputs.
A filter whose impulse response h(t) = s(T − t), where s(t) is assumed to be confined to the time interval 0 ≤ t ≤ T, is called the matched filter to the signal s(t). An example of a signal and its matched filter is shown in the following figure.
$$ y(t) = s(t) * h(t) = \int_0^t s(\tau)\, h(t - \tau)\,d\tau = \int_0^t s(\tau)\, s(T - t + \tau)\,d\tau $$

which is the time-autocorrelation function of the signal s(t). Note that the autocorrelation function y(t) is symmetric about t = T, where it attains its peak.
If the received signal r(t) = s(t) + n(t) is passed through the filter, its output at the sampling instant t = T is

$$ y(T) = \int_0^T s(\tau)\, h(T - \tau)\,d\tau + \int_0^T n(\tau)\, h(T - \tau)\,d\tau = y_s(T) + y_n(T) $$

The problem is to select the filter impulse response that maximizes the output SNR, defined as

$$ \mathrm{SNR}_0 = \frac{y_s^2(T)}{E\left[ y_n^2(T) \right]}, \qquad E\left[ y_n^2(T) \right] = \frac{1}{2} N_0 \int_0^T h^2(T - t)\,dt $$
With h(t) = C s(T − t) (the matched filter, as follows from the Cauchy-Schwarz inequality), the signal and noise terms become

$$ y_s(T) = \int_0^T s(\tau)\, C s\left( T - (T - \tau) \right) d\tau = C \int_0^T s^2(\tau)\,d\tau $$

$$ E\left[ y_n^2(T) \right] = \frac{1}{2} N_0 C^2 \int_0^T s^2\left( T - (T - t) \right) dt = \frac{1}{2} N_0 C^2 \int_0^T s^2(t)\,dt $$

so that $\mathrm{SNR}_0 = 2\mathcal{E}/N_0$, where $\mathcal{E}$ is the signal energy.
Note that the output SNR from the matched filter depends on the energy of the waveform s(t) but not on the detailed characteristics of s(t).
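The result SNR_0 = 2E/N_0 can be checked numerically. The following is a minimal sketch; the sample rate, rectangular pulse shape, and N_0 value are our own illustrative assumptions, not values from the text:

```python
import numpy as np

# Numerical check of SNR_0 = 2E/N0 for a matched filter.
fs, T, a, N0 = 1000, 1.0, 1.0, 0.5    # sample rate, duration, amplitude, PSD
n = int(fs * T)
dt = 1.0 / fs
s = a * np.ones(n)                    # rectangular pulse s(t) = a on [0, T]
h = s[::-1]                           # matched filter h(t) = s(T - t)

ys_T = np.sum(s * h) * dt             # signal output at t = T (= energy E)
noise_power = 0.5 * N0 * np.sum(h**2) * dt   # E[y_n^2(T)] = (N0/2) * int h^2
snr0 = ys_T**2 / noise_power

energy = np.sum(s**2) * dt
print(snr0, 2 * energy / N0)          # both equal 2E/N0 = 4.0 here
```

Changing the pulse shape leaves the agreement intact, which illustrates that only the energy of s(t) matters.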
Since the matched filter has transfer function $H(f) = S^*(f)\, e^{-j2\pi fT}$, the signal component of the filter output is

$$ y_s(t) = \int_{-\infty}^{\infty} |S(f)|^2\, e^{-j2\pi fT}\, e^{j2\pi ft}\,df $$

By sampling the output of the matched filter at t = T, we obtain (from Parseval's relation)

$$ y_s(T) = \int_{-\infty}^{\infty} |S(f)|^2\,df = \int_0^T s^2(t)\,dt = \mathcal{E} $$

The noise at the output of the matched filter has a power spectral density (from Equation 2.2-27)

$$ \Phi_0(f) = \frac{1}{2} |H(f)|^2 N_0 $$
The total noise power at the output of the matched filter is

$$ P_n = \int_{-\infty}^{\infty} \Phi_0(f)\,df = \frac{1}{2} N_0 \int_{-\infty}^{\infty} |H(f)|^2\,df = \frac{1}{2} N_0 \int_{-\infty}^{\infty} |S(f)|^2\,df = \frac{1}{2} N_0 \mathcal{E} $$

The output SNR is simply the ratio of the signal power $P_s = y_s^2(T) = \mathcal{E}^2$ to the noise power $P_n$:

$$ \mathrm{SNR}_0 = \frac{P_s}{P_n} = \frac{\mathcal{E}^2}{\frac{1}{2} N_0 \mathcal{E}} = \frac{2\mathcal{E}}{N_0} $$
The impulse responses of the two matched filters are (figure b):

$$ h_1(t) = f_1(T - t) = \begin{cases} \sqrt{2/T}, & \tfrac{1}{2}T \le t \le T \\ 0, & \text{otherwise} \end{cases} $$

$$ h_2(t) = f_2(T - t) = \begin{cases} \sqrt{2/T}, & 0 \le t \le \tfrac{1}{2}T \\ 0, & \text{otherwise} \end{cases} $$
From Equation 5.1-27, we have $A^2 T = \mathcal{E}$, and the received vector is

$$ \mathbf{r} = [r_1 \;\; r_2] = \left[ \sqrt{\mathcal{E}} + n_1 \;\;\; n_2 \right] $$

where $n_1 = y_{1n}(T)$ and $n_2 = y_{2n}(T)$ are the noise components at the outputs of the matched filters, given by

$$ y_{kn}(T) = \int_0^T n(t)\, f_k(t)\,dt, \qquad k = 1, 2 $$
The variance of the noise components is

$$ \sigma_n^2 = E\left[ y_{kn}^2(T) \right] = \int_0^T \int_0^T E[n(t)n(\tau)]\, f_k(t) f_k(\tau)\,dt\,d\tau $$

$$ = \frac{1}{2} N_0 \int_0^T \int_0^T \delta(t-\tau)\, f_k(t) f_k(\tau)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T f_k^2(t)\,dt = \frac{1}{2} N_0 $$

Observe that the output SNR for the first matched filter is

$$ \mathrm{SNR}_0 = \frac{\left( \sqrt{\mathcal{E}} \right)^2}{\frac{1}{2} N_0} = \frac{2\mathcal{E}}{N_0} $$
$$ (r_1, r_2) = \left( \sqrt{\mathcal{E}} + n_1,\; n_2 \right),\; \left( n_1,\; \sqrt{\mathcal{E}} + n_2 \right),\; \left( -\sqrt{\mathcal{E}} + n_1,\; n_2 \right),\; \text{and}\; \left( n_1,\; -\sqrt{\mathcal{E}} + n_2 \right) $$
Some simplification occurs in the MAP criterion when the M signals are equally probable a priori, i.e., P(s_m) = 1/M. The decision rule based on finding the signal that maximizes P(s_m|r) is then equivalent to finding the signal that maximizes p(r|s_m).
We call D(r, s_m), m = 1, 2, ..., M, the distance metrics. Hence, for the AWGN channel, the decision rule based on the ML criterion reduces to finding the signal s_m that is closest in distance to the received signal vector r. We shall refer to this decision rule as minimum-distance detection.
$$ D(\mathbf{r}, \mathbf{s}_m) = \|\mathbf{r}\|^2 - 2\,\mathbf{r}\cdot\mathbf{s}_m + \|\mathbf{s}_m\|^2, \qquad m = 1, 2, \ldots, M $$

The term $\|\mathbf{r}\|^2$ is common to all distance metrics and hence may be ignored in the computation of the metrics. The result is a set of modified distance metrics

$$ D'(\mathbf{r}, \mathbf{s}_m) = -2\,\mathbf{r}\cdot\mathbf{s}_m + \|\mathbf{s}_m\|^2 $$

Note that selecting the signal $\mathbf{s}_m$ that minimizes $D'(\mathbf{r}, \mathbf{s}_m)$ is equivalent to selecting the signal that maximizes the metric $C(\mathbf{r}, \mathbf{s}_m) = -D'(\mathbf{r}, \mathbf{s}_m)$,

$$ C(\mathbf{r}, \mathbf{s}_m) = 2\,\mathbf{r}\cdot\mathbf{s}_m - \|\mathbf{s}_m\|^2, \qquad m = 1, 2, \ldots, M $$
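The equivalence of the correlation-metric and minimum-distance rules is easy to verify in code. A minimal sketch follows; the 4-point constellation is an illustrative assumption:

```python
import numpy as np

# M candidate signal vectors (an assumed example constellation).
signals = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])

def detect(r, s=signals):
    """Maximize the correlation metric C(r, s_m) = 2 r.s_m - ||s_m||^2."""
    C = 2 * (s @ r) - np.sum(s**2, axis=1)
    return int(np.argmax(C))

def detect_min_distance(r, s=signals):
    """Equivalent rule: minimize the Euclidean distance ||r - s_m||."""
    return int(np.argmin(np.linalg.norm(s - r, axis=1)))

r = np.array([0.9, -0.2])             # received vector: signal 0 plus noise
assert detect(r) == detect_min_distance(r) == 0
```

Both rules always agree, since dropping the common term ||r||^2 does not change the argmin.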
or PM ( r, s m ) = p ( r | s m ) P ( s m )
$$ r \;\underset{s_2}{\overset{s_1}{\gtrless}}\; \frac{\sigma_n^2}{2\sqrt{\mathcal{E}_b}} \ln\frac{1-p}{p} = \frac{N_0}{4\sqrt{\mathcal{E}_b}} \ln\frac{1-p}{p} $$
The signal transmitted in each signal interval is binary PAM. Hence, there are two possible transmitted signals corresponding to the signal points $s_1 = -s_2 = \sqrt{\mathcal{E}_b}$, where $\mathcal{E}_b$ is the energy per bit.
The conditional PDFs for the two possible transmitted signals are

$$ p(r_k \mid s_1) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left[ -\frac{(r_k - \sqrt{\mathcal{E}_b})^2}{2\sigma_n^2} \right] $$

$$ p(r_k \mid s_2) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left[ -\frac{(r_k + \sqrt{\mathcal{E}_b})^2}{2\sigma_n^2} \right] $$
At time t = T, we receive r_1 = s_1^(m) + n_1 from the demodulator, and at t = 2T we receive r_2 = s_2^(m) + n_2. Since the signal memory introduced by the differential encoding is one bit, which we denote by L = 1, we observe that the trellis reaches its regular (steady-state) form after two transitions.
$$ D_0(0,0) = \left( r_1 + \sqrt{\mathcal{E}_b} \right)^2 + \left( r_2 + \sqrt{\mathcal{E}_b} \right)^2 $$
$$ D_0(1,1) = \left( r_1 - \sqrt{\mathcal{E}_b} \right)^2 + \left( r_2 + \sqrt{\mathcal{E}_b} \right)^2 $$
$$ D_1(0,1) = \left( r_1 + \sqrt{\mathcal{E}_b} \right)^2 + \left( r_2 - \sqrt{\mathcal{E}_b} \right)^2 $$
$$ D_1(1,0) = \left( r_1 - \sqrt{\mathcal{E}_b} \right)^2 + \left( r_2 - \sqrt{\mathcal{E}_b} \right)^2 $$
by using the output r1 and r2 from the demodulator. The two metrics are compared and the signal path with the larger metric is eliminated. Thus, at t=2T, we are left with two survivor paths, one at node S0 and the other at node S1 , and their corresponding metrics.
These two metrics are compared and the path with the larger (greater-distance) metric is eliminated. Similarly, the metrics for the two paths entering node S_1 at t = 3T are

$$ D_1(0,0,1) = D_0(0,0) + \left( r_3 - \sqrt{\mathcal{E}_b} \right)^2 $$
$$ D_1(0,1,0) = D_1(0,1) + \left( r_3 - \sqrt{\mathcal{E}_b} \right)^2 $$

These two metrics are compared and the path with the larger metric is eliminated.
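The survivor-path pruning described above can be sketched compactly. The following toy Viterbi search for the two-state NRZI trellis is our own illustration (state 0/1 stands for the current level −√Eb/+√Eb, an input bit of 1 toggles the state, and the start state is assumed to be 0):

```python
import numpy as np

def viterbi_nrzi(r, Eb=1.0):
    """Minimum-distance sequence detection on the two-state NRZI trellis."""
    amp = np.sqrt(Eb)
    level = {0: -amp, 1: +amp}
    D = {0: 0.0, 1: float("inf")}        # survivor distance metrics
    paths = {0: [], 1: []}               # surviving bit sequences
    for rk in r:
        newD, newpaths = {}, {}
        for s_next in (0, 1):
            cands = []
            for s_prev in (0, 1):
                b = s_prev ^ s_next      # bit b moves s_prev -> s_next
                cands.append((D[s_prev] + (rk - level[s_next])**2,
                              paths[s_prev] + [b]))
            newD[s_next], newpaths[s_next] = min(cands)  # keep smaller distance
        D, paths = newD, newpaths
    return paths[min(D, key=D.get)]

# Noiseless check: bits 1,0,1 map to levels +1,+1,-1 from start state 0.
assert viterbi_nrzi([+1.0, +1.0, -1.0]) == [1, 0, 1]
```

At each step only one survivor per state is kept, so the work per symbol stays constant regardless of sequence length.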
For each symbol, the detector computes the a posteriori probabilities

$$ P\left( s^{(k)} = A_m \mid r_{k+D}, r_{k+D-1}, \ldots, r_1 \right) $$

for the M possible symbol values and chooses the symbol with the largest probability.
By Bayes' rule,

$$ P\left( s^{(k)} = A_m \mid r_{k+D}, \ldots, r_1 \right) = \frac{p\left( r_{k+D}, \ldots, r_1 \mid s^{(k)} = A_m \right) P\left( s^{(k)} = A_m \right)}{p\left( r_{k+D}, \ldots, r_1 \right)} \quad \text{--(A)} $$

and since the denominator is common to all M probabilities, the MAP criterion is equivalent to choosing the value of s^(k) that maximizes the numerator of (A). Thus, the criterion for deciding on the transmitted symbol s^(k) is

$$ \tilde{s}^{(k)} = \arg\max_{s^{(k)}} p\left( r_{k+D}, \ldots, r_1 \mid s^{(k)} = A_m \right) P\left( s^{(k)} = A_m \right) \quad \text{--(B)} $$

When the symbols are equally probable, the probability P(s^(k) = A_m) may be dropped from the computation.
For k = 1, the symbol s^(1) is detected by computing

$$ \tilde{s}^{(1)} = \arg\max_{s^{(1)}} \sum_{s^{(2)}} \cdots \sum_{s^{(1+D)}} p\left( r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)} \right) P_1\left( s^{(1+D)}, \ldots, s^{(2)}, s^{(1)} \right) $$

The joint probability $P_1\left( s^{(1+D)}, \ldots, s^{(2)}, s^{(1)} \right)$ may be omitted if the symbols are equally probable and statistically independent. As a consequence of the statistical independence of the additive noise sequence, we have

$$ p\left( r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)} \right) = p\left( r_{1+D} \mid s^{(1+D)}, \ldots, s^{(1+D-L)} \right) \cdots p\left( r_D \mid s^{(D)}, \ldots, s^{(D-L)} \right) \cdots p\left( r_2 \mid s^{(2)}, s^{(1)} \right) p\left( r_1 \mid s^{(1)} \right) \quad \text{--(C)} $$
Next, for the detection of s^(2), the criterion is

$$ \tilde{s}^{(2)} = \arg\max_{s^{(2)}} \sum_{s^{(3)}} \cdots \sum_{s^{(2+D)}} p\left( r_{2+D}, \ldots, r_1 \mid s^{(2+D)}, \ldots, s^{(1)} \right) P\left( s^{(2+D)}, \ldots, s^{(1)} \right) $$

where the sum over s^(1) can be obtained from the probabilities computed previously in the detection of s^(1). That is,

$$ p\left( r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(2)} \right) = \sum_{s^{(1)}} p\left( r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)} \right) P\left( s^{(1)} \right) \quad \text{--(E)} $$
Hence the detection criterion for s^(2) may be expressed as

$$ \tilde{s}^{(2)} = \arg\max_{s^{(2)}} \sum_{s^{(3)}} \cdots \sum_{s^{(2+D)}} p_2\left( s^{(2+D)}, \ldots, s^{(3)}, s^{(2)} \right) $$

where, by definition,

$$ p_2\left( s^{(2+D)}, \ldots, s^{(3)}, s^{(2)} \right) = p\left( r_{2+D} \mid s^{(2+D)}, \ldots, s^{(2+D-L)} \right) P\left( s^{(2+D)} \right) \sum_{s^{(1)}} p_1\left( s^{(1+D)}, \ldots, s^{(1)} \right) $$
In general, the detection of the symbol s^(k) from the received sequence r_{k+D}, ..., r_1 is

$$ \tilde{s}^{(k)} = \arg\max_{s^{(k)}} \sum_{s^{(k+1)}} \cdots \sum_{s^{(k+D)}} p_k\left( s^{(k+D)}, \ldots, s^{(k+1)}, s^{(k)} \right) \quad \text{--(F)} $$

where

$$ p_k\left( s^{(k+D)}, \ldots, s^{(k)} \right) = p\left( r_{k+D} \mid s^{(k+D)}, \ldots, s^{(k+D-L)} \right) P\left( s^{(k+D)} \right) \sum_{s^{(k-1)}} p_{k-1}\left( s^{(k-1+D)}, \ldots, s^{(k-1)} \right) \quad \text{--(G)} $$

Thus, the recursive nature of the algorithm is established by relations (F) and (G).
Figure (A)
where n represents the additive Gaussian noise component, which has zero mean and variance $\sigma_n^2 = \tfrac{1}{2} N_0$.
In this case, the decision rule based on the correlation metric given by Equation 5.1-44 compares r with the threshold zero. If r >0, the decision is made in favor of s1(t), and if r<0, the decision is made that s2(t) was transmitted.
$$ P(e \mid s_1) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-\sqrt{2\mathcal{E}_b/N_0}} e^{-x^2/2}\,dx = \frac{1}{\sqrt{2\pi}} \int_{\sqrt{2\mathcal{E}_b/N_0}}^{\infty} e^{-x^2/2}\,dx \qquad \left( x = \frac{r - \sqrt{\mathcal{E}_b}}{\sqrt{N_0/2}} \right) $$

$$ = Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right), \qquad \text{where } Q(t) = \frac{1}{\sqrt{2\pi}} \int_t^{\infty} e^{-x^2/2}\,dx, \quad t \ge 0 \quad \text{--(A)} $$
Two important characteristics of this performance measure: first, we note that the probability of error depends only on the ratio $\mathcal{E}_b/N_0$; second, we note that $2\mathcal{E}_b/N_0$ is also the output SNR_0 from the matched-filter (and correlation) demodulator. The ratio $\mathcal{E}_b/N_0$ is usually called the signal-to-noise ratio per bit.
$$ P_b = \frac{1}{2} P(e \mid s_1) + \frac{1}{2} P(e \mid s_2) = Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right) $$
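The antipodal error probability above is easy to verify by Monte Carlo simulation. A minimal sketch follows; Eb, N0, the seed, and the trial count are our own assumptions:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def qfunc(t):
    """Gaussian tail probability Q(t), via the complementary error function."""
    return 0.5 * math.erfc(t / math.sqrt(2))

Eb, N0, trials = 1.0, 1.0, 200_000
bits = rng.integers(0, 2, trials)
s = np.where(bits == 1, math.sqrt(Eb), -math.sqrt(Eb))   # antipodal points
r = s + rng.normal(0.0, math.sqrt(N0 / 2), trials)       # noise variance N0/2
ber = np.mean((r > 0) != (bits == 1))                    # threshold at zero
print(ber, qfunc(math.sqrt(2 * Eb / N0)))                # both near 0.079
```

The simulated error rate tracks Q(sqrt(2 Eb/N0)) to within Monte Carlo noise.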
$$ \mathbf{s}_1 = \left[ \sqrt{\mathcal{E}_b} \;\; 0 \right], \qquad \mathbf{s}_2 = \left[ 0 \;\; \sqrt{\mathcal{E}_b} \right] $$

where $\mathcal{E}_b$ denotes the energy of each waveform. Note that the distance between these signal points is $d_{12} = \sqrt{2\mathcal{E}_b}$.
$$ P_b = \frac{1}{\sqrt{2\pi}} \int_{\sqrt{\mathcal{E}_b/N_0}}^{\infty} e^{-x^2/2}\,dx = Q\left( \sqrt{\frac{\mathcal{E}_b}{N_0}} \right) $$

The same error probability is obtained when we assume that $\mathbf{s}_2$ is transmitted:

$$ P_b = Q\left( \sqrt{\frac{\mathcal{E}_b}{N_0}} \right) = Q\left( \sqrt{\gamma_b} \right) $$

where $\gamma_b = \mathcal{E}_b/N_0$ is the SNR per bit.
Figure (B)
To evaluate the probability of error, let us suppose that the signal $\mathbf{s}_1$ is transmitted. Then the received signal vector is

$$ \mathbf{r} = \left[ \sqrt{\mathcal{E}_s} + n_1 \;\;\; n_2 \;\;\; n_3 \;\cdots\; n_M \right] $$

where $\sqrt{\mathcal{E}_s}$ denotes the symbol-energy component and $n_1, n_2, \ldots, n_M$ are zero-mean, mutually statistically independent Gaussian random variables with equal variance $\sigma_n^2 = \tfrac{1}{2} N_0$.
$$ C(\mathbf{r}, \mathbf{s}_1) = \sqrt{\mathcal{E}_s}\left( \sqrt{\mathcal{E}_s} + n_1 \right), \qquad C(\mathbf{r}, \mathbf{s}_2) = \sqrt{\mathcal{E}_s}\, n_2, \;\; \ldots, \;\; C(\mathbf{r}, \mathbf{s}_M) = \sqrt{\mathcal{E}_s}\, n_M $$

Note that the scale factor $\sqrt{\mathcal{E}_s}$ may be eliminated from the correlator outputs by dividing each output by $\sqrt{\mathcal{E}_s}$. With this normalization, the PDF of the first correlator output is

$$ p_{r_1}(x_1) = \frac{1}{\sqrt{\pi N_0}} \exp\left[ -\frac{\left( x_1 - \sqrt{\mathcal{E}_s} \right)^2}{N_0} \right] $$
where P ( n2 < r1 , n3 < r1 , , nM < r1 | r1 ) denotes the joint probability that n2,n3,, nM are all less than r1, conditioned on any given r1. Then this joint probability is averaged over all r1.
$$ P(n_m < r_1 \mid r_1) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\sqrt{2/N_0}\, r_1} e^{-x^2/2}\,dx, \qquad m = 2, 3, \ldots, M \quad \text{--(B)} $$

where we have substituted $x = \sqrt{2/N_0}\, x_m$. These probabilities are identical for m = 2, 3, ..., M, and the joint probability under consideration is simply the result in Equation (B) raised to the (M−1)th power. Thus, the probability of a correct decision is

$$ P_c = \int_{-\infty}^{\infty} \left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\sqrt{2/N_0}\, r_1} e^{-x^2/2}\,dx \right]^{M-1} p(r_1)\,dr_1 $$
With the change of variable $y = \sqrt{2/N_0}\, r_1$, the probability of a symbol error becomes

$$ P_M = 1 - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} e^{-x^2/2}\,dx \right]^{M-1} \exp\left[ -\frac{1}{2}\left( y - \sqrt{\frac{2\mathcal{E}_s}{N_0}} \right)^2 \right] dy $$

The same expression for the probability of error is obtained when any one of the other M−1 signals is transmitted. Since all the M signals are equally likely, the expression for P_M given above is the average probability of a symbol error. This expression can be evaluated numerically.
Figure (C)
A simple union bound on the symbol error probability is obtained by noting that

$$ P_M \le (M-1)\, P_b = (M-1)\, Q\left( \sqrt{\mathcal{E}_s/N_0} \right) $$
Using the bound $Q(x) < e^{-x^2/2}$, we have

$$ P_M < (M-1)\, Q\left( \sqrt{\mathcal{E}_s/N_0} \right) < M\, Q\left( \sqrt{\mathcal{E}_s/N_0} \right) $$

thus,

$$ P_M < M e^{-\mathcal{E}_s/2N_0} = 2^k e^{-k\mathcal{E}_b/2N_0} $$

$$ P_M < e^{-k\left( \mathcal{E}_b/N_0 - 2\ln 2 \right)/2} \quad \text{--(F)} $$
As $k \to \infty$, or equivalently as $M \to \infty$, the probability of error approaches zero exponentially, provided that $\mathcal{E}_b/N_0$ is greater than $2\ln 2$, i.e.,

$$ \frac{\mathcal{E}_b}{N_0} > 2\ln 2 = 1.39 \;\; (1.42\ \mathrm{dB}) $$
A tighter upper bound is

$$ P_M < 2 e^{-k\left( \sqrt{\mathcal{E}_b/N_0} - \sqrt{\ln 2} \right)^2} $$

which approaches zero as $k \to \infty$ provided that

$$ \frac{\mathcal{E}_b}{N_0} > \ln 2 = 0.693 \;\; (-1.6\ \mathrm{dB}) $$
Hence, −1.6 dB is the minimum required SNR per bit to achieve an arbitrarily small probability of error in the limit as $k \to \infty$ ($M \to \infty$). This minimum SNR per bit is called the Shannon limit for an additive Gaussian noise channel.
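The exponential decay promised by bound (F) is easy to tabulate. A small sketch (the chosen Eb/N0 value is an assumption for illustration):

```python
import math

# Union-bound exponent for M = 2**k orthogonal signals:
#   P_M < exp(-k * (Eb/N0 - 2 ln 2) / 2)
def union_bound(k, EbN0):
    return math.exp(-k * (EbN0 - 2 * math.log(2)) / 2)

EbN0 = 2.0                       # > 2 ln 2 ≈ 1.386, so the bound decays
bounds = [union_bound(k, EbN0) for k in (1, 5, 10, 20)]
assert all(b2 < b1 for b1, b2 in zip(bounds, bounds[1:]))
print(bounds)
```

For Eb/N0 below 2 ln 2 the same expression grows with k, which is why the tighter bound above is needed near the Shannon limit.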
5.2.3 Probability of Error for M-ary Biorthogonal Signals
As indicated in Section 4.3, a set of M = 2^k biorthogonal signals is constructed from M/2 orthogonal signals by including the negatives of the orthogonal signals. Thus, we achieve a reduction in the complexity of the demodulator for the biorthogonal signals relative to that for orthogonal signals, since the former is implemented with M/2 cross-correlators or matched filters, whereas the latter requires M matched filters or cross-correlators. Let us assume that the signal s_1(t) corresponding to the vector $\mathbf{s}_1 = [\sqrt{\mathcal{E}_s}\; 0\; 0 \cdots 0]$ was transmitted. The received signal vector is

$$ \mathbf{r} = \left[ \sqrt{\mathcal{E}_s} + n_1 \;\;\; n_2 \;\cdots\; n_{M/2} \right] $$

where the {n_m} are zero-mean, mutually statistically independent and identically distributed Gaussian random variables with variance $\sigma_n^2 = \tfrac{1}{2} N_0$.
The optimum detector decides in favor of the signal corresponding to the largest in magnitude of the cross-correlators

$$ C(\mathbf{r}, \mathbf{s}_m) = \mathbf{r}\cdot\mathbf{s}_m = \sum_{k=1}^{M/2} r_k s_{mk}, \qquad m = 1, 2, \ldots, M/2 $$

while the sign of this largest term is used to decide whether s_m(t) or −s_m(t) was transmitted. According to this decision rule, the probability of a correct decision is equal to the probability that $r_1 = \sqrt{\mathcal{E}_s} + n_1 > 0$ and $r_1$ exceeds $|r_m| = |n_m|$ for m = 2, 3, ..., M/2. But

$$ P\left( |n_m| < r_1 \mid r_1 > 0 \right) = \frac{1}{\sqrt{\pi N_0}} \int_{-r_1}^{r_1} e^{-x^2/N_0}\,dx = \frac{1}{\sqrt{2\pi}} \int_{-r_1\sqrt{2/N_0}}^{r_1\sqrt{2/N_0}} e^{-y^2/2}\,dy \qquad \left( y = x\sqrt{2/N_0} \right) $$
Then, the probability of a correct decision is

$$ P_c = \int_0^{\infty} \left[ \frac{1}{\sqrt{2\pi}} \int_{-r_1\sqrt{2/N_0}}^{r_1\sqrt{2/N_0}} e^{-x^2/2}\,dx \right]^{M/2-1} p(r_1)\,dr_1 $$

from which, upon substituting the PDF of $r_1$ given in Equation 5.2-15,

$$ p_{r_1}(x_1) = \frac{1}{\sqrt{\pi N_0}} \exp\left[ -\frac{\left( x_1 - \sqrt{\mathcal{E}_s} \right)^2}{N_0} \right] $$

and making the change of variable $v = \left( r_1 - \sqrt{\mathcal{E}_s} \right)\sqrt{2/N_0}$, we obtain

$$ P_c = \frac{1}{\sqrt{2\pi}} \int_{-\sqrt{2\mathcal{E}_s/N_0}}^{\infty} \left[ \frac{1}{\sqrt{2\pi}} \int_{-\left( v + \sqrt{2\mathcal{E}_s/N_0} \right)}^{v + \sqrt{2\mathcal{E}_s/N_0}} e^{-x^2/2}\,dx \right]^{M/2-1} e^{-v^2/2}\,dv \quad \text{--(G)} $$

The probability of a symbol error is $P_M = 1 - P_c$.
$P_c$ and $P_M$ may be evaluated numerically for different values of M. The graph shown in the following Figure (D) illustrates $P_M$ as a function of $\mathcal{E}_b/N_0$, where $\mathcal{E}_s = k\mathcal{E}_b$, for M = 2, 4, 8, 16, and 32.
In this case, the probability of error for M = 4 is greater than that for M = 2. This is due to the fact that we have plotted the symbol error probability $P_M$ in Figure (D). If we plotted the equivalent bit error probability, we would find that the graphs for M = 2 and M = 4 coincide. As in the case of orthogonal signals, as $M \to \infty$ ($k \to \infty$), the minimum required $\mathcal{E}_b/N_0$ to achieve an arbitrarily small probability of error is −1.6 dB, the Shannon limit.
5.2.4 Probability of Error for Simplex Signals
Next we consider the probability of error for M simplex signals. Recall from Section 4.3 that simplex signals are a set of M equally correlated signals with mutual cross-correlation coefficient $\gamma_{mn} = -1/(M-1)$. These signals have the same minimum separation of $\sqrt{2\mathcal{E}_s}$ between adjacent signal points in M-dimensional space as orthogonal signals. They achieve this mutual separation with a transmitted energy of $\mathcal{E}_s (M-1)/M$, which is less than that required for orthogonal signals by a factor of $(M-1)/M$. Consequently, the probability of error for simplex signals is identical to the probability of error for orthogonal signals, but this performance is achieved with a savings of

$$ -10 \log_{10}\left( 1 - \frac{1}{M} \right) = 10 \log_{10} \frac{M}{M-1} \;\; \mathrm{dB} $$

For M = 2, the savings is 3 dB. As M is increased, the savings in SNR approaches 0 dB.
5.2.5 Probability of Error for M-ary Binary-Coded Signals

$$ \mathbf{s}_m = \left[ s_{m1} \; s_{m2} \; \cdots \; s_{mN} \right], \qquad m = 1, 2, \ldots, M $$

where $s_{mj} = \pm\sqrt{\mathcal{E}_s/N}$ for all m and j. N is the block length of the code and is also the dimension of the M signal waveforms. If $d_{\min}^{(e)}$ is the minimum Euclidean distance, then the probability of a symbol error is upper-bounded as

$$ P_M \le (M-1)\, Q\left( \sqrt{\frac{\left( d_{\min}^{(e)} \right)^2}{2N_0}} \right) < 2^k \exp\left[ -\frac{\left( d_{\min}^{(e)} \right)^2}{4N_0} \right] $$
5.2.6 Probability of Error for M-ary PAM

Using the identities

$$ \sum_{m=1}^{M} m = \frac{M(M+1)}{2}, \qquad \sum_{m=1}^{M} m^2 = \frac{M(M+1)(2M+1)}{6} $$

the average energy of the M PAM signals is

$$ \mathcal{E}_{av} = \frac{1}{M} \sum_{m=1}^{M} \mathcal{E}_m = \frac{1}{6}\left( M^2 - 1 \right) d^2 \mathcal{E}_g $$
We note that if the mth amplitude level is transmitted, the demodulator output is

$$ r = s_m + n = A_m \sqrt{\tfrac{1}{2}\mathcal{E}_g} + n $$

where the noise variable n has zero mean and variance $\sigma_n^2 = \tfrac{1}{2} N_0$.
The average probability of a symbol error is the probability that the noise exceeds in magnitude one-half the distance between adjacent levels (the two outer levels can err on one side only):

$$ P_M = \frac{M-1}{M} \cdot \frac{2}{\sqrt{\pi N_0}} \int_{d\sqrt{\mathcal{E}_g/2}}^{\infty} e^{-x^2/N_0}\,dx = \frac{M-1}{M} \cdot \frac{2}{\sqrt{2\pi}} \int_{\sqrt{d^2 \mathcal{E}_g/N_0}}^{\infty} e^{-y^2/2}\,dy \qquad \left( y = x\sqrt{2/N_0} \right) $$

$$ P_M = \frac{2(M-1)}{M}\, Q\left( \sqrt{\frac{d^2 \mathcal{E}_g}{N_0}} \right), \qquad \text{where } Q(t) = \frac{1}{\sqrt{2\pi}} \int_t^{\infty} e^{-x^2/2}\,dx $$
In terms of the average transmitted power $P_{av}$, the error probability becomes

$$ P_M = \frac{2(M-1)}{M}\, Q\left( \sqrt{\frac{6 P_{av} T}{(M^2 - 1) N_0}} \right) = \frac{2(M-1)}{M}\, Q\left( \sqrt{\frac{6 \log_2 M}{M^2 - 1} \cdot \frac{\mathcal{E}_{b,av}}{N_0}} \right) $$

where $\mathcal{E}_{b,av} = P_{av} T_b$ is the average bit energy and $\mathcal{E}_{b,av}/N_0$ is the average SNR per bit.
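The closed-form P_M above can be evaluated directly. A short sketch (the 10-dB-linear SNR value is an assumption for illustration):

```python
import math

def q(t):
    """Gaussian tail probability Q(t)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

# Symbol-error probability of M-ary PAM vs. average SNR per bit (linear):
#   P_M = 2(M-1)/M * Q(sqrt(6 log2(M) / (M^2 - 1) * snr_bit))
def pam_ser(M, snr_bit):
    k = math.log2(M)
    return 2 * (M - 1) / M * q(math.sqrt(6 * k / (M**2 - 1) * snr_bit))

print({M: pam_ser(M, 10.0) for M in (2, 4, 8)})
```

The table makes the SNR penalty of larger constellations explicit: at a fixed SNR per bit, the error rate grows rapidly with M.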
5.2.7 Probability of Error for M-ary PSK

The signal vectors are

$$ \mathbf{s}_m = \left[ \sqrt{\mathcal{E}_s} \cos\frac{2\pi(m-1)}{M} \;\;\; \sqrt{\mathcal{E}_s} \sin\frac{2\pi(m-1)}{M} \right], \qquad \mathcal{E}_s = \tfrac{1}{2}\mathcal{E}_g $$

Since the signal waveforms have equal energy, the optimum detector for the AWGN channel given by Equation 5.1-44 computes the correlation metrics

$$ C(\mathbf{r}, \mathbf{s}_m) = \mathbf{r}\cdot\mathbf{s}_m, \qquad m = 1, 2, \ldots, M $$
After some algebra, the PDF of the phase $\theta_r$ may be expressed as

$$ p_{\Theta_r}(\theta_r) = \frac{1}{2\pi} e^{-2\gamma_s \sin^2\theta_r} \int_0^{\infty} V \exp\left[ -\frac{1}{2}\left( V - \sqrt{4\gamma_s}\cos\theta_r \right)^2 \right] dV $$

where $\gamma_s = \mathcal{E}_s/N_0$ is the SNR per symbol.
Figure: probability density function $p_{\Theta_r}(\theta_r)$ for $\gamma_s = 1, 2, 4$, and $10$.
$$ P_M = 1 - \int_{-\pi/M}^{\pi/M} p_{\Theta_r}(\theta_r)\,d\theta_r $$

In general, the integral of $p_{\Theta_r}(\theta_r)$ does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4. For binary phase modulation, the two signals s_1(t) and s_2(t) are antipodal; hence the error probability is

$$ P_2 = Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right) $$
For M = 4, which may be viewed as two binary phase-modulation systems in quadrature,

$$ P_4 = 1 - (1 - P_2)^2 = 2 Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right) \left[ 1 - \frac{1}{2} Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right) \right] $$
For large values of M, doubling the number of phases requires an additional 6 dB/bit to achieve the same performance
For large $\gamma_s$, $p_{\Theta_r}(\theta_r)$ is well approximated by a Gaussian. Carrying out the integration (with the substitution $u = \sqrt{2\gamma_s}\sin\theta_r$) yields

$$ P_M \approx \frac{2}{\sqrt{2\pi}} \int_{\sqrt{2\gamma_s}\sin(\pi/M)}^{\infty} e^{-u^2/2}\,du = 2 Q\left( \sqrt{2\gamma_s}\, \sin\frac{\pi}{M} \right) = 2 Q\left( \sqrt{2 k \gamma_b}\, \sin\frac{\pi}{M} \right) $$

where $k = \log_2 M$ and $\mathcal{E}_s = k\mathcal{E}_b$.
For four-phase PSK, the received signal is raised to the fourth power to remove the digital modulation, and the resulting fourth harmonic of the carrier frequency is filtered and divided by 4 in order to extract the carrier component. These operations yield a carrier frequency component containing the carrier phase, but there are phase ambiguities of 90° and 180° in the phase estimate. Consequently, we do not have an absolute estimate of the carrier phase for demodulation.
The PSK signals resulting from the encoding process are said to be differentially encoded.
The received signal vector at the output of the demodulator in the kth signaling interval is

$$ \mathbf{r}_k = \left[ \sqrt{\mathcal{E}_s}\cos(\theta_k - \phi) + n_{k1} \;\;\; \sqrt{\mathcal{E}_s}\sin(\theta_k - \phi) + n_{k2} \right] $$

or, equivalently,

$$ r_k = \sqrt{\mathcal{E}_s}\, e^{j(\theta_k - \phi)} + n_k $$

where $\theta_k$ is the phase angle of the transmitted signal at the kth signaling interval, $\phi$ is the carrier phase, and $n_k = n_{k1} + j n_{k2}$ is the noise vector.
Similarly, the received signal vector at the output of the demodulator in the preceding signaling interval is $r_{k-1} = \sqrt{\mathcal{E}_s}\, e^{j(\theta_{k-1} - \phi)} + n_{k-1}$. The decision variable for the phase detector is the phase of the product

$$ r_k r_{k-1}^* = \mathcal{E}_s\, e^{j(\theta_k - \theta_{k-1})} + \sqrt{\mathcal{E}_s}\, e^{j(\theta_k - \phi)} n_{k-1}^* + \sqrt{\mathcal{E}_s}\, e^{-j(\theta_{k-1} - \phi)} n_k + n_k n_{k-1}^* $$

which, in the absence of noise, yields the phase difference $\theta_k - \theta_{k-1}$. Differentially encoded PSK signaling that is demodulated and detected as described above is called differential PSK (DPSK).
The complication in determining the PDF of the phase is the term $n_k n_{k-1}^*$.
The variables x and y are uncorrelated Gaussian random variables with identical variances $\sigma_n^2 = N_0$. The phase is

$$ \theta_r = \tan^{-1}\frac{y}{x} $$

The noise variance is now twice as large as in the case of PSK. Thus we can conclude that the performance of DPSK is 3 dB poorer than that for PSK.
$$ P_b = Q_1(a, b) - \frac{1}{2} I_0(ab) \exp\left[ -\frac{1}{2}\left( a^2 + b^2 \right) \right] $$

where $Q_1(a, b)$ is the Marcum Q function (2.1-122), $I_0(x)$ is the modified Bessel function of order zero (2.1-123), and the parameters a and b are defined as

$$ a = \sqrt{2\gamma_b \left( 1 - \sqrt{\tfrac{1}{2}} \right)}, \qquad b = \sqrt{2\gamma_b \left( 1 + \sqrt{\tfrac{1}{2}} \right)} $$
Figure: probability of bit error for binary and four-phase PSK and DPSK.
To determine the probability of error for QAM, we must specify the signal point constellation.
Figure (a) shows a four-phase modulated signal, and Figure (b) shows a signal set with two amplitude levels, labeled A_1 and A_2, and four phases. Because the probability of error is dominated by the minimum distance between pairs of signal points, let us impose the condition $d_{\min}^{(e)} = 2A$ and evaluate the average transmitter power, based on the premise that all signal points are equally probable.
$$ P_{av} = \frac{1}{M} \sum_{m=1}^{M} \left( A_{mc}^2 + A_{ms}^2 \right) = \frac{A^2}{M} \sum_{m=1}^{M} \left( a_{mc}^2 + a_{ms}^2 \right) $$

where the coordinates $(A_{mc}, A_{ms})$ of each signal point are normalized by A, i.e., $(a_{mc}, a_{ms}) = (A_{mc}/A,\; A_{ms}/A)$.
For rectangular QAM with M = 2^k and k even, the QAM signal may be viewed as two PAM signals in quadrature, so the probability of a correct symbol decision is $P_c = \left( 1 - P_{\sqrt{M}} \right)^2$ and

$$ P_M = 1 - \left( 1 - P_{\sqrt{M}} \right)^2 $$

where $P_{\sqrt{M}}$ is the probability of error of the constituent $\sqrt{M}$-ary PAM.
When k is odd, there is no equivalent $\sqrt{M}$-ary PAM system. This is not a problem, however, because it is still rather easy to determine the error rate for a rectangular signal set.
$$ P_{\sqrt{M}} = 2\left( 1 - \frac{1}{\sqrt{M}} \right) Q\left( \sqrt{\frac{3}{M-1} \cdot \frac{\mathcal{E}_{av}}{N_0}} \right) $$

To compare QAM with PSK, we simply compare the arguments of the Q function for the two signal formats. The ratio of these two arguments (squared) is

$$ R_M = \frac{3/(M-1)}{2\sin^2(\pi/M)} $$
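The ratio R_M translates directly into an SNR advantage in dB for QAM over PSK at the same M. A small sketch of that computation:

```python
import math

# SNR advantage of rectangular QAM over PSK for the same M:
#   R_M = [3/(M-1)] / [2 sin^2(pi/M)], expressed in dB.
def qam_vs_psk_dB(M):
    rm = (3 / (M - 1)) / (2 * math.sin(math.pi / M) ** 2)
    return 10 * math.log10(rm)

advantages = {M: round(qam_vs_psk_dB(M), 2) for M in (4, 8, 16, 32, 64)}
print(advantages)   # M = 4 gives 0 dB; the advantage grows with M
```

For M = 4 the two formats coincide (4-QAM is QPSK), and for larger M the QAM advantage grows steadily.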
For M-ary orthogonal signals, the required bandwidth is

$$ W = \frac{M}{2T} = \frac{M}{2(k/R)} = \frac{MR}{2\log_2 M} $$

In this case, the bandwidth increases as M increases. In the case of biorthogonal signals, the required bandwidth is one-half of that for orthogonal signals.
In contrast, M-ary orthogonal signals yield a bandwidth efficiency of R/W ≤ 1. As M increases, R/W decreases due to an increase in the required channel bandwidth, while the SNR per bit required to achieve a given error probability decreases.
$$ s(t) = \sqrt{\frac{2\mathcal{E}}{T}} \cos\left[ 2\pi f_c t + \phi(t; \mathbf{I}) \right] $$

where $\phi(t; \mathbf{I})$ is the carrier phase. The filtered received signal for an additive Gaussian noise channel is

$$ r(t) = s(t) + n(t), \qquad n(t) = n_c(t)\cos 2\pi f_c t - n_s(t)\sin 2\pi f_c t $$
$$ \phi(t; \mathbf{I}) = 2\pi h \sum_{k=-\infty}^{n} I_k\, q(t - kT) $$

$$ = \pi h \sum_{k=-\infty}^{n-L} I_k + 2\pi h \sum_{k=n-L+1}^{n} I_k\, q(t - kT) = \theta_n + \theta(t; \mathbf{I}), \qquad nT \le t \le (n+1)T $$

where q(t) = 0 for t < 0, q(t) = 1/2 for t ≥ LT, and $q(t) = \int_0^t g(\tau)\,d\tau$.
Phase Tree

These phase diagrams are called phase trees. The figure shows, for CPFSK with binary symbols $I_n = \pm 1$, the set of phase trajectories beginning at time t = 0.
$$ \theta(t; \mathbf{I}) = 2\pi h \sum_{k=n-L+1}^{n-1} I_k\, q(t - kT) + 2\pi h I_n\, q(t - nT) $$
The first term on the right-hand side depends on the information symbols $(I_{n-1}, I_{n-2}, \ldots, I_{n-L+1})$, which is called the correlative state vector, and represents the phase terms corresponding to signal pulses that have not yet reached their final value. The second term represents the phase contribution due to the most recent symbol $I_n$.
$$ N_s = \begin{cases} p M^{L-1}, & \text{even } m \\ 2 p M^{L-1}, & \text{odd } m \end{cases} $$

where h = m/p. Suppose the state of the modulator at t = nT is $S_n$. The effect of the new symbol in the time interval $nT \le t \le (n+1)T$ is to change the state from $S_n$ to $S_{n+1}$. At t = (n+1)T, the state becomes

$$ S_{n+1} = \left( \theta_{n+1}, I_n, I_{n-1}, \ldots, I_{n-L+2} \right), \qquad \theta_{n+1} = \theta_n + \pi h I_{n-L+1} $$
For example, for binary CPFSK with h = 3/4, the four terminal phase states are $\theta_s \in \left\{ 0,\; \tfrac{3}{4}\pi,\; \tfrac{3}{2}\pi,\; \tfrac{1}{4}\pi \right\}$.
Figure: a path through the state trellis corresponding to the sequence (1, −1, −1, −1, 1, 1).
Figure: phase tree for L = 2 partial-response CPM with h = 3/4.
The term $CM_{n-1}(\mathbf{I})$ represents the metrics for the surviving sequences up to time nT, and the term

$$ v_n(\mathbf{I}; \theta_n) = \int_{nT}^{(n+1)T} r(t) \cos\left[ \omega_c t + \theta(t; \mathbf{I}) + \theta_n \right] dt $$

represents the additional increment to the metrics contributed by the signal in the time interval $nT \le t \le (n+1)T$.
$$ \delta_{ij}^2 = \int_0^{NT} \left[ s_i(t) - s_j(t) \right]^2 dt = \int_0^{NT} s_i^2(t)\,dt + \int_0^{NT} s_j^2(t)\,dt - 2\int_0^{NT} s_i(t)\, s_j(t)\,dt $$

$$ = 2N\mathcal{E} - \frac{4\mathcal{E}}{T} \int_0^{NT} \cos\left[ \omega_c t + \phi(t; \mathbf{I}_i) \right] \cos\left[ \omega_c t + \phi(t; \mathbf{I}_j) \right] dt $$

$$ = 2N\mathcal{E} - \frac{2\mathcal{E}}{T} \int_0^{NT} \cos\left[ \phi(t; \mathbf{I}_i) - \phi(t; \mathbf{I}_j) \right] dt = \frac{2\mathcal{E}}{T} \int_0^{NT} \left\{ 1 - \cos\left[ \phi(t; \mathbf{I}_i) - \phi(t; \mathbf{I}_j) \right] \right\} dt $$

Hence the Euclidean distance is related to the phase difference between the paths in the state trellis.
Normalizing by $2\mathcal{E}_b$ (with $\mathcal{E} = \mathcal{E}_b \log_2 M$), we define

$$ d_{ij}^2 = \frac{\delta_{ij}^2}{2\mathcal{E}_b} = \frac{\log_2 M}{T} \int_0^{NT} \left\{ 1 - \cos\left[ \phi(t; \mathbf{I}_i) - \phi(t; \mathbf{I}_j) \right] \right\} dt $$

Furthermore, since $\phi(t; \mathbf{I}_i) - \phi(t; \mathbf{I}_j) = \phi(t; \mathbf{I}_i - \mathbf{I}_j)$, with $\boldsymbol{\xi} = \mathbf{I}_i - \mathbf{I}_j$, $d_{ij}^2$ may be written as

$$ d_{ij}^2 = \frac{\log_2 M}{T} \int_0^{NT} \left\{ 1 - \cos\phi(t; \boldsymbol{\xi}) \right\} dt $$

where any element of $\boldsymbol{\xi}$ can take the values $0, \pm 2, \pm 4, \ldots, \pm 2(M-1)$, except that $\boldsymbol{\xi} \ne \mathbf{0}$.
where the minimum distance is

$$ d_{\min}^2 = \lim_{N\to\infty} \min_{i,j} \frac{\log_2 M}{T} \int_0^{NT} \left\{ 1 - \cos\phi\left( t; \mathbf{I}_i - \mathbf{I}_j \right) \right\} dt $$

We note that for conventional binary PSK with no memory, N = 1 and $d_{\min}^2 = d_{12}^2 = 2$.
Two sequences that differ for k = 0, 1 and agree for k ≥ 2 result in two phase trajectories that merge after the second symbol. This corresponds to the difference sequence

$$ \boldsymbol{\xi} = \{2, -2, 0, 0, \ldots\} $$
and provides an upper bound on $d_{\min}^2$. This upper bound for binary CPFSK is

$$ d_B^2(h) = 2\left( 1 - \frac{\sin 2\pi h}{2\pi h} \right), \qquad M = 2 $$

For example, when h = 1/2, which corresponds to MSK, we have $d_B^2\left( \tfrac{1}{2} \right) = 2$, so that $d_{\min}^2 = 2$. For M > 2 and full-response CPM, an upper bound on $d_{\min}^2$ can be obtained by considering the phase difference sequence $\boldsymbol{\xi} = \{\alpha, -\alpha, 0, 0, \ldots\}$, where $\alpha = \pm 2, \pm 4, \ldots, \pm 2(M-1)$.
5.4 Optimum Receiver for Signals with Random Phase in AWGN Channel
In this section, we consider the design of the optimum receiver for carrier modulated signals when the carrier phase is unknown and no attempt is made to estimate its value. Uncertainty in the carrier phase of the receiver signal may be due to one or more of the following reasons:
The oscillators that are used at the transmitter and the receiver to generate the carrier signals are generally not phase synchronous. The time delay in the propagation of the signal from the transmitter to the receiver is not generally known precisely.
Assuming a transmitted signal of the form

$$ s(t) = \mathrm{Re}\left[ g(t)\, e^{j2\pi f_c t} \right] $$

that propagates through a channel with delay $t_0$, the received signal is

$$ s(t - t_0) = \mathrm{Re}\left[ g(t - t_0)\, e^{j2\pi f_c (t - t_0)} \right] = \mathrm{Re}\left[ g(t - t_0)\, e^{-j2\pi f_c t_0}\, e^{j2\pi f_c t} \right] $$
The carrier phase shift due to the propagation delay $t_0$ is

$$ \phi = -2\pi f_c t_0 $$

Note that large changes in the carrier phase can occur due to relatively small changes in the propagation delay. For example, if the carrier frequency is $f_c = 1$ MHz, an uncertainty or a change of 0.5 μs in the propagation delay will cause a phase uncertainty of π rad. In some channels the time delay in the propagation of the signal from the transmitter to the receiver may change rapidly and in an apparently random fashion. In the absence of knowledge of the carrier phase, we may treat this signal parameter as a random variable and determine the form of the optimum receiver for recovering the transmitted information from the received signal.
$$ \mathcal{E} = \int_0^T s^2(t)\,dt = \frac{1}{2} \int_0^T \left| s_{ml}(t) \right|^2 dt $$

The two signals are characterized by the complex-valued correlation coefficient

$$ \rho_{12} = \frac{1}{2\mathcal{E}} \int_0^T s_{l1}^*(t)\, s_{l2}(t)\,dt $$
where $r_l(t) = s_{lm}(t)\, e^{j\phi} + z(t)$, $0 \le t \le T$, is the equivalent lowpass received signal. This received signal is passed through a demodulator whose sampled output at t = T is passed to the detector.
The outputs of the matched filters or correlators at the sampling instant are the two complex numbers

$$ r_m = r_{mc} + j r_{ms}, \qquad m = 1, 2 $$
where $\rho$ is the complex-valued correlation coefficient of the two signals $s_{l1}(t)$ and $s_{l2}(t)$, which may be expressed as $\rho = |\rho|\, e^{j\alpha_0}$. The random noise variables $n_{1c}$, $n_{1s}$, $n_{2c}$, and $n_{2s}$ are jointly Gaussian, with zero mean and equal variance.
$$ P(s_1 \mid \mathbf{r}) \;\underset{s_2}{\overset{s_1}{\gtrless}}\; P(s_2 \mid \mathbf{r}) $$
$$ p(\mathbf{r} \mid s_m) = \int p(\mathbf{r} \mid s_m, \phi)\, p(\phi)\,d\phi $$
$$ p(r_{1c}, r_{1s} \mid s_1) = \int_0^{2\pi} p(r_{1c}, r_{1s} \mid s_1, \phi)\, p(\phi)\,d\phi = \frac{1}{2\pi\sigma^2} \exp\left( -\frac{r_{1c}^2 + r_{1s}^2 + 4\mathcal{E}^2}{2\sigma^2} \right) I_0\left( \frac{2\mathcal{E}\sqrt{r_{1c}^2 + r_{1s}^2}}{\sigma^2} \right) $$
The optimum detector computes the two envelopes $\sqrt{r_{1c}^2 + r_{1s}^2}$ and $\sqrt{r_{2c}^2 + r_{2s}^2}$ and the corresponding values of the Bessel functions $I_0\left( 2\mathcal{E}\sqrt{r_{1c}^2 + r_{1s}^2}/\sigma^2 \right)$ and $I_0\left( 2\mathcal{E}\sqrt{r_{2c}^2 + r_{2s}^2}/\sigma^2 \right)$ to form the likelihood ratio.
5.4.1 Optimum Receiver for Binary Signals: Detection of Binary FSK Signals
Recall that in binary FSK we employ two different frequencies, say $f_1$ and $f_2 = f_1 + \Delta f$, to transmit a binary information sequence. The choice of the minimum frequency separation $\Delta f = f_2 - f_1$ is considered below. The two signals are

$$ s_1(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos 2\pi f_1 t, \qquad s_2(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos 2\pi f_2 t, \qquad 0 \le t \le T_b $$
Demodulation and square-law detection:
$$ f_{kc}(t) = \sqrt{\frac{2}{T_b}} \cos 2\pi (f_1 + k\Delta f)\, t, \qquad f_{ks}(t) = \sqrt{\frac{2}{T_b}} \sin 2\pi (f_1 + k\Delta f)\, t, \qquad k = 0, 1 $$
If the mth signal is transmitted, the four samples at the detector may be expressed as

$$ r_{kc} = \int_0^{T_b} r(t)\, f_{kc}(t)\,dt = \sqrt{\mathcal{E}_b} \left[ \cos\phi_m\, \frac{\sin 2\pi (m-k)\Delta f T_b}{2\pi (m-k)\Delta f T_b} + \sin\phi_m\, \frac{\cos 2\pi (m-k)\Delta f T_b - 1}{2\pi (m-k)\Delta f T_b} \right] + n_{kc}, \qquad k, m = 0, 1 $$
Similarly,

$$ r_{ks} = \int_0^{T_b} r(t)\, f_{ks}(t)\,dt = \sqrt{\mathcal{E}_b} \left[ \cos\phi_m\, \frac{\cos 2\pi (m-k)\Delta f T_b - 1}{2\pi (m-k)\Delta f T_b} - \sin\phi_m\, \frac{\sin 2\pi (m-k)\Delta f T_b}{2\pi (m-k)\Delta f T_b} \right] + n_{ks}, \qquad k, m = 0, 1 $$
We observe that when k = m, the sampled values to the detector are

$$ r_{mc} = \sqrt{\mathcal{E}_b} \cos\phi_m + n_{mc}, \qquad r_{ms} = -\sqrt{\mathcal{E}_b} \sin\phi_m + n_{ms} $$

We also observe that when k ≠ m, the signal components in the samples $r_{kc}$ and $r_{ks}$ vanish, independently of the values of the phase shifts $\phi_k$, provided that the frequency separation between successive frequencies is $\Delta f = 1/T_b$. In that case, the other two correlator outputs consist of noise only, i.e., $r_{kc} = n_{kc}$, $r_{ks} = n_{ks}$, k ≠ m.
If the signal $s_1(t)$ is transmitted, the relevant demodulator outputs are

$$ r_{1c} = \sqrt{\mathcal{E}_s} \cos\phi_1 + n_{1c}, \qquad r_{1s} = -\sqrt{\mathcal{E}_s} \sin\phi_1 + n_{1s} $$

and

$$ r_{mc} = n_{mc}, \quad r_{ms} = n_{ms}, \qquad m = 2, 3, \ldots, M $$

The additive noise components {n_mc} and {n_ms} are mutually statistically independent zero-mean Gaussian variables with equal variance $\sigma^2 = N_0/2$.
The optimum (square-law) detector computes the envelopes

$$ R_m = \sqrt{r_{mc}^2 + r_{ms}^2}, \qquad m = 1, 2, \ldots, M $$

By averaging the joint PDF $p(R_m, \Theta_m)$ over the phase $\Theta_m$, the phase dependence is eliminated. Thus, we find that $R_1$ has a Rice probability distribution and $R_m$, m = 2, 3, ..., M, are each Rayleigh-distributed.
Because the random variables $R_m$, m = 2, 3, ..., M, are statistically independent and identically distributed, the joint probability conditioned on $R_1$ factors into a product of M−1 identical terms:

$$ P_c = \int_0^{\infty} \left[ P\left( R_2 < R_1 \mid R_1 = x \right) \right]^{M-1} p_{R_1}(x)\,dx \qquad (5.4\text{-}42) $$

where

$$ P\left( R_2 < R_1 \mid R_1 = x \right) = 1 - e^{-x^2/2\sigma^2} \qquad \text{(from Eq. 2.1-129)} $$
Substitution of this result into Equation 5.4-42 and integration over x yields the probability of a correct decision as

$$ P_c = \sum_{n=0}^{M-1} (-1)^n \binom{M-1}{n} \frac{1}{n+1} \exp\left[ -\frac{n}{n+1} \cdot \frac{\mathcal{E}_s}{N_0} \right] $$

where $\mathcal{E}_s/N_0$ is the SNR per symbol. The probability of a symbol error, which is $P_M = 1 - P_c$, becomes

$$ P_M = \sum_{n=1}^{M-1} (-1)^{n+1} \binom{M-1}{n} \frac{1}{n+1} \exp\left[ -\frac{n}{n+1} \cdot \frac{k\mathcal{E}_b}{N_0} \right] $$

where $\mathcal{E}_b/N_0$ is the SNR per bit.
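The finite sum for P_M evaluates directly. A short sketch (the SNR value in the check is an assumption):

```python
import math

# Exact symbol-error probability of noncoherent (square-law detected)
# M-ary orthogonal FSK:
#   P_M = sum_{n=1}^{M-1} (-1)^(n+1) C(M-1,n)/(n+1) exp(-n/(n+1) k γ_b)
def fsk_ser(M, gamma_b):
    k = math.log2(M)
    return sum((-1)**(n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n / (n + 1) * k * gamma_b)
               for n in range(1, M))

# Binary case collapses to the classic P_2 = (1/2) exp(-γ_b / 2):
assert abs(fsk_ser(2, 4.0) - 0.5 * math.exp(-2.0)) < 1e-12
```

The M = 2 sanity check recovers the well-known noncoherent binary FSK result, which gives confidence in the alternating-sum form.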
$$ P_b = \int_0^{\infty} \int_{x_1}^{\infty} p(x_1, x_2)\,dx_2\,dx_1 $$

where $p(x_1, x_2)$ is the joint PDF of the envelopes $R_1$ and $R_2$. This approach was first used by Helstrom (1955), who determined the joint PDF of $R_1$ and $R_2$ and evaluated the double integral. An alternative approach is based on the observation that the probability of error may also be expressed as

$$ P_b = P(R_2 > R_1) = P\left( R_2^2 > R_1^2 \right) = P\left( R_2^2 - R_1^2 > 0 \right) $$
The result of this evaluation is

$$ P_b = Q_1(a, b) - \frac{1}{2} e^{-(a^2 + b^2)/2}\, I_0(ab) $$

where

$$ a = \sqrt{\frac{\mathcal{E}_b}{2N_0}\left( 1 - \sqrt{1 - |\rho|^2} \right)}, \qquad b = \sqrt{\frac{\mathcal{E}_b}{2N_0}\left( 1 + \sqrt{1 - |\rho|^2} \right)} $$

$Q_1(a, b)$ is the Marcum Q function defined in 2.1-123 and $I_0(x)$ is the modified Bessel function of order zero.
Pb is minimized when =0; that is, when the signals are orthogonal.
From the definition of $Q_1(a, b)$ in Equation 2.1-123, it follows that

$$ Q_1\left( 0, \sqrt{\frac{\mathcal{E}_b}{N_0}} \right) = e^{-\mathcal{E}_b/2N_0} $$

Substitution of these relations yields the desired result given previously in Equation 5.4-47. On the other hand, when $|\rho| = 1$, the error probability becomes $P_b = 1/2$, as expected.
For K regenerative repeaters in cascade, the overall binary error probability is approximately

$$ P_b \approx K\, Q\left( \sqrt{\frac{2\mathcal{E}_b}{N_0}} \right) $$

whereas if the signal is simply amplified by K analog repeaters, the SNR is divided among the hops and

$$ P_b = Q\left( \sqrt{\frac{2\mathcal{E}_b}{K N_0}} \right) $$

For the analog system, achieving an error probability of

$$ 10^{-7} = Q\left( \sqrt{\frac{2\mathcal{E}_b}{K N_0}} \right) $$

yields $\mathcal{E}_b/N_0 \approx 29.6$ dB. The difference in the required SNR is about 18.3 dB, or approximately 70 times the transmitter power of the digital communication system.
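The regenerative-versus-analog comparison can be reproduced numerically by solving each error expression for the required SNR. A sketch under our own assumptions (K = 100 hops, target P_b = 1e-7, solved by geometric bisection):

```python
import math

def q(t):
    """Gaussian tail probability Q(t)."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def required_gamma_dB(f, target=1e-7, lo=0.1, hi=1e6):
    """Find γ (linear) with f(γ) = target for a decreasing f; return dB."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return 10 * math.log10(mid)

K = 100
regen = required_gamma_dB(lambda g: K * q(math.sqrt(2 * g)))       # K*Q(sqrt(2γ))
analog = required_gamma_dB(lambda g: q(math.sqrt(2 * g / K)))     # Q(sqrt(2γ/K))
print(regen, analog, analog - regen)   # analog needs roughly 18 dB more
```

The gap of roughly 18 dB illustrates why long-haul digital links use regenerative rather than analog repeaters.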
$$ P_R = \frac{P_T G_T G_R}{\mathcal{L}_s}, \qquad \mathcal{L}_s = \left( \frac{4\pi d}{\lambda} \right)^2 $$

The factor $\mathcal{L}_s$ is called the free-space path loss. If other losses, such as atmospheric losses, are encountered in the transmission of the signal, they may be accounted for by introducing an additional loss factor, say $\mathcal{L}_a$. Therefore, the received power may be written in general as

$$ P_R = \frac{P_T G_T G_R}{\mathcal{L}_s \mathcal{L}_a} $$
Another parameter that is related to the gain (directivity) of an antenna is its beamwidth, which we denote as $\Theta_B$ and which is illustrated graphically in the following figure.
Antenna beamwidth and pattern
Also, suppose that each station employs a 3-m parabolic antenna and that the downlink is operating at a frequency of 4 GHz. The efficiency factor is η = 0.5. By substituting these numbers into $G = \eta \left( \pi D / \lambda \right)^2$, we obtain the value of the antenna gain as 39 dB. The free-space path loss is $\mathcal{L}_s = 195.6$ dB.
$$ (P_R)_{\mathrm{dBW}} = 20 + 17 + 39 - 195.6 = -119.6\ \mathrm{dBW} $$

Or, equivalently,

$$ P_R = 1.1 \times 10^{-12}\ \mathrm{W} $$
The thermal noise power spectral density is $N_0 = k_B T_0$, where $k_B$ is Boltzmann's constant (1.38 × 10⁻²³ W-s/K) and $T_0$ is the noise temperature in kelvin. The total noise power in the signal bandwidth W is $N_0 W$. The performance of the digital communication system is specified by the $\mathcal{E}_b/N_0$ required to keep the error rate performance below some given value.
Since $\mathcal{E}_b = P_R/R$ (the received energy per bit), the maximum achievable data rate satisfies

$$ \frac{P_R}{N_0} = R \left( \frac{\mathcal{E}_b}{N_0} \right)_{\mathrm{req}} \quad\Longrightarrow\quad R = \frac{P_R/N_0}{\left( \mathcal{E}_b/N_0 \right)_{\mathrm{req}}} $$

where $(\mathcal{E}_b/N_0)_{\mathrm{req}}$ is the required SNR per bit. If we have $P_R/N_0$ and the required SNR per bit, we can determine the maximum data rate that is possible.
Now, suppose the receiver front end has a noise temperature of 300 K, which is typical for a receiver in the 4-GHz range. Then $N_0 = 4.1 \times 10^{-21}$ W/Hz, or, equivalently, −203.9 dBW/Hz. Therefore,

$$ \left( \frac{P_R}{N_0} \right)_{\mathrm{dB\text{-}Hz}} = -119.6 + 203.9 = 84.3\ \mathrm{dB\text{-}Hz} $$
This corresponds to the rate of 26.9 megabits/s, which is equivalent to about 420 PCM channels, each operating at 64000 bits/s.
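The whole link-budget chain above reduces to a few lines of arithmetic. A sketch using the example's figures (the 10 dB required Eb/N0 is our assumption to close the budget):

```python
import math

k_B = 1.38e-23                       # Boltzmann's constant, W-s/K

# Link budget in dB: PT (20 dBW) + GT (17 dB) + GR (39 dB) - Ls (195.6 dB)
PR_dBW = 20 + 17 + 39 - 195.6        # received power: -119.6 dBW
N0 = k_B * 300                       # noise PSD for T0 = 300 K
N0_dBW = 10 * math.log10(N0)         # about -203.8 dBW/Hz
PR_over_N0_dB = PR_dBW - N0_dBW      # about 84.2 dB-Hz

EbN0_req_dB = 10.0                   # assumed requirement
R_dB = PR_over_N0_dB - EbN0_req_dB   # achievable rate in dB-Hz
R = 10 ** (R_dB / 10)
print(round(R / 1e6, 1), "Mbit/s")   # in the 26-27 Mbit/s range
```

Dividing the result by 64000 bits/s per PCM channel recovers the roughly 420-channel figure quoted in the text.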