
Chapter 5 Optimum Receivers for the Additive White Gaussian Noise Channel

Table of Contents
5.1 Optimum Receiver for Signals Corrupted by Additive White Noise
5.1.1 Correlation Demodulator
5.1.2 Matched-Filter Demodulator
5.1.3 The Optimum Detector
5.1.4 The Maximum-Likelihood Sequence Detector
5.1.5 A Symbol-by-Symbol MAP Detector for Signals with Memory

5.2 Performance of the Optimum Receiver for Memoryless Modulation


5.2.1 Probability of Error for Binary Modulation
5.2.2 Probability of Error for M-ary Orthogonal Signals
Fall, 2004.

WITS Lab, NSYSU.

5.2.3 Probability of Error for M-ary Biorthogonal Signals
5.2.4 Probability of Error for Simplex Signals
5.2.5 Probability of Error for M-ary Binary-Coded Signals
5.2.6 Probability of Error for M-ary PAM
5.2.7 Probability of Error for M-ary PSK
5.2.8 Differential PSK (DPSK) and Its Performance
5.2.9 Probability of Error for QAM
5.2.10 Comparison of Digital Modulation Methods

5.3 Optimum Receiver for CPM Signals


5.3.1 Optimum Demodulator and Detection of CPM
5.3.2 Performance of CPM Signals
5.3.3 Symbol-by-Symbol Detection of CPM Signals
5.3.4 Suboptimum Demodulation and Detection of CPM Signals

5.4 Optimum Receiver for Signals with Random Phase in AWGN Channel
5.4.1 Optimum Receiver for Binary Signals
5.4.2 Optimum Receiver for M-ary Orthogonal Signals
5.4.3 Probability of Error for Envelope Detection of M-ary Orthogonal Signals
5.4.4 Probability of Error for Envelope Detection of Correlated Binary Signals
5.5 Performance Analysis for Wireline and Radio Communication Systems
5.5.1 Regenerative Repeaters
5.5.2 Link Budget Analysis in Radio Communication Systems

5.1 Optimum Receiver for Signals Corrupted by Additive White Gaussian Noise
We assume that the transmitter sends digital information by use of M signal waveforms {s_m(t), m = 1, 2, ..., M}. Each waveform is transmitted within the symbol interval of duration T, i.e., 0 ≤ t ≤ T. The channel is assumed to corrupt the signal by the addition of white Gaussian noise, as shown in the following figure:

$$ r(t) = s_m(t) + n(t), \qquad 0 \le t \le T $$
where n(t) denotes a sample function of the AWGN process with power spectral density $\Phi_{nn}(f) = \tfrac{1}{2}N_0$ W/Hz.
Our objective is to design a receiver that is optimum in the sense that it minimizes the probability of making an error. It is convenient to subdivide the receiver into two parts: the signal demodulator and the detector.

The function of the signal demodulator is to convert the received waveform r(t) into an N-dimensional vector r = [r_1, r_2, ..., r_N], where N is the dimension of the transmitted signal waveforms. The function of the detector is to decide which of the M possible signal waveforms was transmitted, based on the vector r.
Two realizations of the signal demodulator are described in the next two sections: one is based on the use of signal correlators, and the second is based on the use of matched filters. The optimum detector that follows the signal demodulator is designed to minimize the probability of error.

5.1.1 Correlation Demodulator


We describe a correlation demodulator that decomposes the received signal and the noise into N-dimensional vectors. In other words, the signal and the noise are expanded into a series of linearly weighted orthonormal basis functions {f_n(t)}. It is assumed that the N basis functions {f_n(t)} span the signal space, so every one of the possible transmitted signals of the set {s_m(t), 1 ≤ m ≤ M} can be represented as a linear combination of {f_n(t)}. In the case of the noise, the functions {f_n(t)} do not span the noise space. However, we show below that the noise terms that fall outside the signal space are irrelevant to the detection of the signal.


Suppose the received signal r(t) is passed through a parallel bank of N cross-correlators that compute its projections onto the N basis functions {f_k(t)}, as shown in the following figure:
$$ r_k = \int_0^T r(t) f_k(t)\,dt = \int_0^T [s_m(t) + n(t)] f_k(t)\,dt, \qquad k = 1, 2, \ldots, N $$
$$ r_k = s_{mk} + n_k, \qquad k = 1, 2, \ldots, N $$
where
$$ s_{mk} = \int_0^T s_m(t) f_k(t)\,dt, \qquad n_k = \int_0^T n(t) f_k(t)\,dt, \qquad k = 1, 2, \ldots, N $$
The signal is now represented by the vector s_m with components s_{mk}, k = 1, 2, ..., N. Their values depend on which of the M signals was transmitted.


In fact, we can express the received signal r(t) in the interval 0 ≤ t ≤ T as
$$ r(t) = \sum_{k=1}^{N} s_{mk} f_k(t) + \sum_{k=1}^{N} n_k f_k(t) + n'(t) = \sum_{k=1}^{N} r_k f_k(t) + n'(t) $$
The term n'(t), defined as
$$ n'(t) = n(t) - \sum_{k=1}^{N} n_k f_k(t) $$
is a zero-mean Gaussian noise process that represents the difference between the original noise process n(t) and the part corresponding to the projection of n(t) onto the basis functions {f_k(t)}.


We shall show below that n'(t) is irrelevant to the decision as to which signal was transmitted. Consequently, the decision may be based entirely on the correlator output signal and noise components r_k = s_{mk} + n_k, k = 1, 2, ..., N. The noise components {n_k} are Gaussian, with mean values
$$ E(n_k) = \int_0^T E[n(t)] f_k(t)\,dt = 0 \qquad \text{for all } k $$
and covariances
$$ E(n_k n_m) = \int_0^T \!\! \int_0^T E[n(t)n(\tau)] f_k(t) f_m(\tau)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T \!\! \int_0^T \delta(t-\tau) f_k(t) f_m(\tau)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T f_k(t) f_m(t)\,dt = \frac{1}{2} N_0\, \delta_{mk} $$
where we used the autocorrelation $E[n(t)n(\tau)] = \tfrac{1}{2}N_0\,\delta(t-\tau)$ of the white noise, whose power spectral density is $\Phi_{nn}(f) = \tfrac{1}{2}N_0$ W/Hz. Conclusion: the N noise components {n_k} are zero-mean, uncorrelated Gaussian random variables with a common variance $\sigma_n^2 = \tfrac{1}{2}N_0$.


From the above development, it follows that the correlator outputs {r_k}, conditioned on the mth signal being transmitted, are Gaussian random variables with mean
$$ E(r_k) = E(s_{mk} + n_k) = s_{mk} $$
and equal variance
$$ \sigma_r^2 = \sigma_n^2 = \tfrac{1}{2} N_0 $$
Since the noise components {n_k} are uncorrelated Gaussian random variables, they are also statistically independent. As a consequence, the correlator outputs {r_k} conditioned on the mth signal being transmitted are statistically independent Gaussian variables.



The conditional probability density functions of the random variables r = [r_1, r_2, ..., r_N] are
$$ p(\mathbf{r} \mid \mathbf{s}_m) = \prod_{k=1}^{N} p(r_k \mid s_{mk}), \qquad m = 1, 2, \ldots, M \tag{A} $$
$$ p(r_k \mid s_{mk}) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left[ -\frac{(r_k - s_{mk})^2}{N_0} \right], \qquad k = 1, 2, \ldots, N \tag{B} $$
By substituting Equation (B) into Equation (A), we obtain the joint conditional PDFs
$$ p(\mathbf{r} \mid \mathbf{s}_m) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left[ -\sum_{k=1}^{N} \frac{(r_k - s_{mk})^2}{N_0} \right], \qquad m = 1, 2, \ldots, M $$



The correlator outputs (r_1, r_2, ..., r_N) are sufficient statistics for reaching a decision on which of the M signals was transmitted, i.e., no additional relevant information can be extracted from the remaining noise process n'(t). Indeed, n'(t) is uncorrelated with the N correlator outputs {r_k}:
$$ E[n'(t) r_k] = E[n'(t)]\, s_{mk} + E[n'(t) n_k] = E[n'(t) n_k] = E\!\left\{ \left[ n(t) - \sum_{j=1}^{N} n_j f_j(t) \right] n_k \right\} $$
$$ = \int_0^T E[n(t)n(\tau)] f_k(\tau)\,d\tau - \sum_{j=1}^{N} E(n_j n_k) f_j(t) $$
$$ = \frac{1}{2} N_0 \int_0^T \delta(t-\tau) f_k(\tau)\,d\tau - \frac{1}{2} N_0 \sum_{j=1}^{N} \delta_{jk} f_j(t) = \frac{1}{2} N_0 f_k(t) - \frac{1}{2} N_0 f_k(t) = 0 \qquad \text{Q.E.D.} $$
where we used $n_k = \int_0^T n(\tau) f_k(\tau)\,d\tau$.



Since n'(t) and {r_k} are Gaussian and uncorrelated, they are also statistically independent. Consequently, n'(t) does not contain any information that is relevant to the decision as to which signal waveform was transmitted. All the relevant information is contained in the correlator outputs {r_k}; hence, n'(t) can be ignored.



Example 5.1-1.
Consider an M-ary baseband PAM signal set in which the basic pulse shape g(t) is rectangular, as shown in the following figure. The additive noise is a zero-mean white Gaussian noise process. Let us determine the basis function f(t) and the output of the correlation-type demodulator. The energy in the rectangular pulse is
$$ \mathcal{E}_g = \int_0^T g^2(t)\,dt = \int_0^T a^2\,dt = a^2 T $$



Example 5.1-1. (cont.)
Since the PAM signal set has dimension N = 1, there is only one basis function f(t), given by
$$ f(t) = \frac{1}{\sqrt{a^2 T}}\, g(t) = \begin{cases} 1/\sqrt{T} & (0 \le t \le T) \\ 0 & (\text{otherwise}) \end{cases} $$
The output of the correlation-type demodulator is
$$ r = \int_0^T r(t) f(t)\,dt = \frac{1}{\sqrt{T}} \int_0^T r(t)\,dt $$
The correlator becomes a simple integrator when f(t) is rectangular: if we substitute for r(t), we obtain
$$ r = \frac{1}{\sqrt{T}} \int_0^T [s_m(t) + n(t)]\,dt = \frac{1}{\sqrt{T}} \left[ \int_0^T s_m(t)\,dt + \int_0^T n(t)\,dt \right] = s_m + n $$



Example 5.1-1. (cont.)
The noise term n has E(n) = 0 and variance
$$ \sigma_n^2 = E(n^2) = \frac{1}{T} \int_0^T \!\! \int_0^T E[n(t)n(\tau)]\,dt\,d\tau = \frac{N_0}{2T} \int_0^T \!\! \int_0^T \delta(t-\tau)\,dt\,d\tau = \frac{N_0}{2T} \int_0^T d\tau = \frac{1}{2} N_0 $$
The probability density function for the sampled output is
$$ p(r \mid s_m) = \frac{1}{\sqrt{\pi N_0}} \exp\!\left[ -\frac{(r - s_m)^2}{N_0} \right] $$
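A minimal numerical sketch of this example (parameter values and the discretization are illustrative assumptions of the sketch) shows the integrator output r = s_m + n, with mean equal to the signal component and noise variance N0/2:

```python
import numpy as np

# Sketch of the Example 5.1-1 correlator: with a rectangular pulse the
# demodulator is a simple integrator r = (1/sqrt(T)) * integral of r(t) dt.
rng = np.random.default_rng(1)
T, N0, a = 1.0, 2.0, 1.5    # illustrative values
Ns = 1000
dt = T / Ns

sm = a * np.sqrt(T)         # signal component: (1/sqrt(T)) * integral of a dt = a*sqrt(T)
trials = 20000
noise = rng.normal(0.0, np.sqrt((N0 / 2) / dt), size=(trials, Ns))
rx = a + noise              # received waveform samples: s_m(t) = a on [0, T] plus noise
r = rx.sum(axis=1) * dt / np.sqrt(T)   # integrator output r = s_m + n

print(r.mean(), sm)         # sample mean close to s_m
print(r.var())              # close to N0/2 = 1.0
```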


5.1.2 Matched-Filter Demodulator


Instead of using a bank of N correlators to generate the variables {r_k}, we may use a bank of N linear filters. To be specific, suppose that the impulse responses of the N filters are
$$ h_k(t) = f_k(T - t), \qquad 0 \le t \le T $$
where {f_k(t)} are the N basis functions and h_k(t) = 0 outside of the interval 0 ≤ t ≤ T. The outputs of these filters are
$$ y_k(t) = \int_0^t r(\tau) h_k(t - \tau)\,d\tau = \int_0^t r(\tau) f_k(T - t + \tau)\,d\tau, \qquad k = 1, 2, \ldots, N $$


If we sample the outputs of the filters at t = T, we obtain
$$ y_k(T) = \int_0^T r(\tau) f_k(\tau)\,d\tau = r_k, \qquad k = 1, 2, \ldots, N $$
Hence, the sampled filter outputs at t = T are exactly the correlator outputs. A filter whose impulse response is h(t) = s(T − t), where s(t) is assumed to be confined to the time interval 0 ≤ t ≤ T, is called the matched filter to the signal s(t). An example of a signal and its matched filter is shown in the following figure.



The response of h(t) = s(T − t) to the signal s(t) is
$$ y(t) = \int_0^t s(\tau) h(t - \tau)\,d\tau = \int_0^t s(\tau) s(T - t + \tau)\,d\tau $$
which is the time-autocorrelation function of the signal s(t). Note that the autocorrelation function y(t) is an even function of t that attains its peak at t = T.



Matched filter demodulator that generates the observed variables {rk}
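The equivalence between the correlator and the sampled matched filter can be checked numerically. In this sketch (the discretization and the choice of waveforms are illustrative), the matched filter h(t) = f(T − t) is applied by convolution and sampled at t = T:

```python
import numpy as np

# Numerical check that a filter matched to f, h(t) = f(T - t), sampled at
# t = T, reproduces the correlator output r_k = integral of r(t) f(t) dt.
rng = np.random.default_rng(2)
T = 1.0
Ns = 500
dt = T / Ns
t = np.arange(Ns) * dt

f = np.where(t < T / 2, np.sqrt(2 / T), 0.0)   # a basis function on [0, T]
r = rng.normal(size=Ns)                         # an arbitrary received waveform

# Correlator output.
rk_corr = np.sum(r * f) * dt

# Matched filter: h(t) = f(T - t); full convolution, sampled at t = T,
# which is discrete lag Ns - 1.
h = f[::-1]
y = np.convolve(r, h) * dt
rk_mf = y[Ns - 1]

print(rk_corr, rk_mf)       # the two outputs agree
```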



Properties of the matched filter.
If a signal s(t) is corrupted by AWGN, the filter with an impulse response matched to s(t) maximizes the output signal-to-noise ratio (SNR). Proof: let us assume the received signal r(t) consists of the signal s(t) and AWGN n(t), which has zero mean and power spectral density $\Phi_{nn}(f) = \tfrac{1}{2}N_0$ W/Hz. Suppose the signal r(t) is passed through a filter with impulse response h(t), 0 ≤ t ≤ T, and its output is sampled at time t = T. The output signal of the filter is
$$ y(t) = \int_0^t r(\tau) h(t - \tau)\,d\tau = \int_0^t s(\tau) h(t - \tau)\,d\tau + \int_0^t n(\tau) h(t - \tau)\,d\tau $$



Proof: (cont.) At the sampling instant t = T,
$$ y(T) = \int_0^T s(\tau) h(T - \tau)\,d\tau + \int_0^T n(\tau) h(T - \tau)\,d\tau = y_s(T) + y_n(T) $$
The problem is to select the filter impulse response that maximizes the output SNR, defined as
$$ \mathrm{SNR}_0 = \frac{y_s^2(T)}{E[y_n^2(T)]} $$
$$ E[y_n^2(T)] = \int_0^T \!\! \int_0^T E[n(\tau)n(t)]\, h(T-\tau) h(T-t)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T \!\! \int_0^T \delta(t-\tau)\, h(T-\tau) h(T-t)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T h^2(T-t)\,dt $$



Proof: (cont.) By substituting for y_s(T) and E[y_n²(T)] into SNR₀,
$$ \mathrm{SNR}_0 = \frac{\left[ \int_0^T s(\tau) h(T-\tau)\,d\tau \right]^2}{\frac{1}{2} N_0 \int_0^T h^2(T-t)\,dt} $$
The denominator of the SNR depends on the energy in h(t). The maximum output SNR over h(t) is obtained by maximizing the numerator subject to the constraint that the denominator is held constant.



Proof: (cont.) Cauchy-Schwarz inequality: if g₁(t) and g₂(t) are finite-energy signals, then
$$ \left[ \int_{-\infty}^{\infty} g_1(t) g_2(t)\,dt \right]^2 \le \int_{-\infty}^{\infty} g_1^2(t)\,dt \int_{-\infty}^{\infty} g_2^2(t)\,dt $$
with equality when g₁(t) = C g₂(t) for any arbitrary constant C. If we set g₁(t) = h(t) and g₂(t) = s(T − t), it is clear that the SNR is maximized when h(t) = C s(T − t).



Proof: (cont.) The output (maximum) SNR obtained with the matched filter h(t) = C s(T − t) is
$$ \mathrm{SNR}_0 = \frac{\left[ \int_0^T s(\tau)\, C s\big(T - (T-\tau)\big)\,d\tau \right]^2}{\frac{1}{2} N_0 C^2 \int_0^T s^2\big(T - (T-t)\big)\,dt} = \frac{2}{N_0} \int_0^T s^2(t)\,dt = \frac{2\mathcal{E}}{N_0} $$
Note that the output SNR from the matched filter depends on the energy $\mathcal{E}$ of the waveform s(t), but not on the detailed characteristics of s(t).
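This property is easy to verify numerically. The sketch below (the candidate signal and the mismatched filters are illustrative choices) computes SNR₀ for a matched filter and several mismatched ones, confirming that the matched filter attains the maximum 2E/N0:

```python
import numpy as np

# Deterministic check of the matched-filter property: among candidate
# filters, h(t) = s(T - t) maximizes SNR_0 = y_s(T)^2 / E[y_n(T)^2],
# and the maximum equals 2*E/N0.
T, N0 = 1.0, 1.0
Ns = 1000
dt = T / Ns
t = np.arange(Ns) * dt

s = np.sin(2 * np.pi * t / T)              # an arbitrary finite-energy signal
E = np.sum(s**2) * dt                      # signal energy

def snr0(h):
    # y_s(T) = integral s(tau) h(T - tau) dtau; noise power = (N0/2) * integral h^2
    ys = np.sum(s * h[::-1]) * dt
    return ys**2 / (0.5 * N0 * np.sum(h**2) * dt)

matched = s[::-1]                          # h(t) = s(T - t)
mismatched = [np.ones(Ns), t, np.cos(2 * np.pi * t / T)]

print(snr0(matched), 2 * E / N0)           # matched filter attains 2E/N0
print([snr0(h) <= snr0(matched) + 1e-9 for h in mismatched])
```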


Frequency-domain interpretation of the matched filter
Since h(t) = s(T − t), the Fourier transform of this relationship is
$$ H(f) = \int_0^T s(T - t)\, e^{-j2\pi f t}\,dt = \left[ \int_0^T s(\tau)\, e^{j2\pi f \tau}\,d\tau \right] e^{-j2\pi f T} = S^*(f)\, e^{-j2\pi f T} \qquad (\text{let } \tau = T - t) $$
The matched filter has a frequency response that is the complex conjugate of the transmitted signal spectrum, multiplied by the phase factor $e^{-j2\pi f T}$ (a sampling delay of T). In other words, |H(f)| = |S(f)|, so the magnitude response of the matched filter is identical to the transmitted signal spectrum, while the phase of H(f) is the negative of the phase of S(f).



Frequency-domain interpretation of the matched filter
If the signal s(t) with spectrum S(f) is passed through the matched filter, the filter output has spectrum $Y(f) = S(f) S^*(f)\, e^{-j2\pi f T} = |S(f)|^2 e^{-j2\pi f T}$. The output waveform is
$$ y_s(t) = \int_{-\infty}^{\infty} Y(f)\, e^{j2\pi f t}\,df = \int_{-\infty}^{\infty} |S(f)|^2\, e^{-j2\pi f T} e^{j2\pi f t}\,df $$
By sampling the output of the matched filter at t = T, we obtain (from Parseval's relation)
$$ y_s(T) = \int_{-\infty}^{\infty} |S(f)|^2\,df = \int_0^T s^2(t)\,dt = \mathcal{E} $$
The noise at the output of the matched filter has a power spectral density (from Equation 2.2-27)
$$ \Phi_0(f) = \tfrac{1}{2} |H(f)|^2 N_0 $$


Frequency-domain interpretation of the matched filter
The total noise power at the output of the matched filter is
$$ P_n = \int_{-\infty}^{\infty} \Phi_0(f)\,df = \frac{1}{2} N_0 \int_{-\infty}^{\infty} |H(f)|^2\,df = \frac{1}{2} N_0 \int_{-\infty}^{\infty} |S(f)|^2\,df = \frac{1}{2} N_0 \mathcal{E} $$
The output SNR is simply the ratio of the signal power $P_s = y_s^2(T) = \mathcal{E}^2$ to the noise power $P_n$:
$$ \mathrm{SNR}_0 = \frac{P_s}{P_n} = \frac{\mathcal{E}^2}{\frac{1}{2} N_0 \mathcal{E}} = \frac{2\mathcal{E}}{N_0} $$



Example 5.1-2:
M = 4 biorthogonal signals are constructed from the two orthogonal signals shown in the following figure for transmitting information over an AWGN channel. The noise is assumed to have zero mean and power spectral density ½N₀.



Example 5.1-2: (cont.)
The M = 4 biorthogonal signals have dimension N = 2, and the two basis functions are (shown in figure a)
$$ f_1(t) = \begin{cases} \sqrt{2/T} & (0 \le t \le \tfrac{1}{2}T) \\ 0 & (\text{otherwise}) \end{cases} \qquad f_2(t) = \begin{cases} \sqrt{2/T} & (\tfrac{1}{2}T \le t \le T) \\ 0 & (\text{otherwise}) \end{cases} $$
The impulse responses of the two matched filters are (figure b)
$$ h_1(t) = f_1(T - t) = \begin{cases} \sqrt{2/T} & (\tfrac{1}{2}T \le t \le T) \\ 0 & (\text{otherwise}) \end{cases} \qquad h_2(t) = f_2(T - t) = \begin{cases} \sqrt{2/T} & (0 \le t \le \tfrac{1}{2}T) \\ 0 & (\text{otherwise}) \end{cases} $$



Example 5.1-2: (cont.)
If s₁(t) is transmitted, the responses of the two matched filters are as shown in figure c, where the signal amplitude is A. Since y₁(t) and y₂(t) are sampled at t = T, we observe that $y_{1s}(T) = \sqrt{\tfrac{1}{2}A^2 T}$ and $y_{2s}(T) = 0$.
From Equation 5.1-27, the signal energy is $\mathcal{E} = \tfrac{1}{2}A^2 T$, so $y_{1s}(T) = \sqrt{\mathcal{E}}$ and the received vector is
$$ \mathbf{r} = [r_1 \ r_2] = \left[ \sqrt{\mathcal{E}} + n_1, \ n_2 \right] $$
where $n_1 = y_{1n}(T)$ and $n_2 = y_{2n}(T)$ are the noise components at the outputs of the matched filters, given by
$$ y_{kn}(T) = \int_0^T n(t) f_k(t)\,dt, \qquad k = 1, 2 $$


Example 5.1-2: (cont.)
Clearly, E(n_k) = E[y_{kn}(T)] = 0, and their variance is
$$ \sigma_n^2 = E[y_{kn}^2(T)] = \int_0^T \!\! \int_0^T E[n(t)n(\tau)] f_k(t) f_k(\tau)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T \!\! \int_0^T \delta(t-\tau) f_k(\tau) f_k(t)\,dt\,d\tau = \frac{1}{2} N_0 \int_0^T f_k^2(t)\,dt = \frac{1}{2} N_0 $$
Observe that SNR₀ for the first matched filter is
$$ \mathrm{SNR}_0 = \frac{\left( \sqrt{\mathcal{E}}\, \right)^2}{\frac{1}{2} N_0} = \frac{2\mathcal{E}}{N_0} $$



Example 5.1-2: (cont.)
This result agrees with our previous result. We can also note that the four possible outputs of the two matched filters, corresponding to the four possible transmitted signals, are
$$ (r_1, r_2) = \left( \sqrt{\mathcal{E}} + n_1,\ n_2 \right),\ \left( n_1,\ \sqrt{\mathcal{E}} + n_2 \right),\ \left( -\sqrt{\mathcal{E}} + n_1,\ n_2 \right),\ \text{and}\ \left( n_1,\ -\sqrt{\mathcal{E}} + n_2 \right) $$


5.1.3 The Optimum Detector


Our goal is to design a signal detector that makes a decision on the transmitted signal in each signal interval, based on the observation of the vector r in that interval, such that the probability of a correct decision is maximized. We assume that there is no memory in signals transmitted in successive signal intervals. We consider a decision rule based on the computation of the posterior probabilities defined as P(s_m | r) ≜ P(signal s_m was transmitted | r), m = 1, 2, ..., M. The decision criterion is based on selecting the signal corresponding to the maximum of the set of posterior probabilities {P(s_m | r)}. This decision criterion is called the maximum a posteriori probability (MAP) criterion.


Using Bayes' rule, the posterior probabilities may be expressed as
$$ P(\mathbf{s}_m \mid \mathbf{r}) = \frac{p(\mathbf{r} \mid \mathbf{s}_m) P(\mathbf{s}_m)}{p(\mathbf{r})} \tag{A} $$
where P(s_m) is the a priori probability of the mth signal being transmitted. The denominator of (A), which is independent of which signal is transmitted, may be expressed as
$$ p(\mathbf{r}) = \sum_{m=1}^{M} p(\mathbf{r} \mid \mathbf{s}_m) P(\mathbf{s}_m) $$
Some simplification occurs in the MAP criterion when the M signals are equally probable a priori, i.e., P(s_m) = 1/M. The decision rule based on finding the signal that maximizes P(s_m | r) is then equivalent to finding the signal that maximizes p(r | s_m).


The conditional PDF p(r | s_m), or any monotonic function of it, is usually called the likelihood function. The decision criterion based on the maximum of p(r | s_m) over the M signals is called the maximum-likelihood (ML) criterion. We observe that a detector based on the MAP criterion and one based on the ML criterion make the same decisions as long as the a priori probabilities P(s_m) are all equal. In the case of an AWGN channel, the likelihood function p(r | s_m) is given by
$$ p(\mathbf{r} \mid \mathbf{s}_m) = \frac{1}{(\pi N_0)^{N/2}} \exp\!\left[ -\sum_{k=1}^{N} \frac{(r_k - s_{mk})^2}{N_0} \right], \qquad m = 1, 2, \ldots, M $$
$$ \ln p(\mathbf{r} \mid \mathbf{s}_m) = \underbrace{-\tfrac{1}{2} N \ln(\pi N_0)}_{\text{constant}} - \frac{1}{N_0} \sum_{k=1}^{N} (r_k - s_{mk})^2 $$


The maximum of ln p(r | s_m) over s_m is equivalent to finding the signal s_m that minimizes the Euclidean distance
$$ D(\mathbf{r}, \mathbf{s}_m) = \sum_{k=1}^{N} (r_k - s_{mk})^2 $$
We call D(r, s_m), m = 1, 2, ..., M, the distance metrics. Hence, for the AWGN channel, the decision rule based on the ML criterion reduces to finding the signal s_m that is closest in distance to the received signal vector r. We shall refer to this decision rule as minimum-distance detection.



Expanding the distance metrics,
$$ D(\mathbf{r}, \mathbf{s}_m) = \sum_{n=1}^{N} r_n^2 - 2 \sum_{n=1}^{N} r_n s_{mn} + \sum_{n=1}^{N} s_{mn}^2 = \|\mathbf{r}\|^2 - 2\,\mathbf{r} \cdot \mathbf{s}_m + \|\mathbf{s}_m\|^2, \qquad m = 1, 2, \ldots, M $$
The term ||r||² is common to all distance metrics and, hence, may be ignored in the computation of the metrics. The result is a set of modified distance metrics
$$ D'(\mathbf{r}, \mathbf{s}_m) = -2\,\mathbf{r} \cdot \mathbf{s}_m + \|\mathbf{s}_m\|^2 $$
Note that selecting the signal s_m that minimizes D'(r, s_m) is equivalent to selecting the signal that maximizes the metric C(r, s_m) = −D'(r, s_m),
$$ C(\mathbf{r}, \mathbf{s}_m) = 2\,\mathbf{r} \cdot \mathbf{s}_m - \|\mathbf{s}_m\|^2 $$


The term r · s_m represents the projection of the received signal vector onto each of the M possible transmitted signal vectors. The value of each of these projections is a measure of the correlation between the received vector and the mth signal. For this reason, we call C(r, s_m), m = 1, 2, ..., M, the correlation metrics for deciding which of the M signals was transmitted. Finally, the terms $\|\mathbf{s}_m\|^2 = \mathcal{E}_m$, m = 1, 2, ..., M, may be viewed as bias terms that serve as compensation for signal sets that have unequal energies. If all signals have the same energy, ||s_m||² may also be ignored. The correlation metrics can be expressed as
$$ C(\mathbf{r}, \mathbf{s}_m) = 2 \int_0^T r(t) s_m(t)\,dt - \mathcal{E}_m, \qquad m = 1, 2, \ldots, M $$


These metrics can be generated by a demodulator that cross-correlates the received signal r(t) with each of the M possible transmitted signals and adjusts each correlator output for the bias in the case of unequal signal energies.
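The two equivalent ML decision rules, minimum Euclidean distance and maximum correlation metric, can be sketched as follows (the 4-ary signal set with unequal energies is an illustrative choice, not from the text):

```python
import numpy as np

# Sketch of the two equivalent ML decision rules: minimum Euclidean
# distance D(r, s_m) versus maximum correlation metric
# C(r, s_m) = 2 r.s_m - ||s_m||^2 (the bias term matters here because
# the signals have unequal energies).
rng = np.random.default_rng(3)
S = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, -1.0], [2.0, 2.0]])  # rows: s_m

def detect_min_distance(r):
    return int(np.argmin(np.sum((r - S) ** 2, axis=1)))

def detect_correlation(r):
    C = 2 * S @ r - np.sum(S ** 2, axis=1)   # 2 r.s_m - ||s_m||^2
    return int(np.argmax(C))

# The two rules agree for any received vector r.
for _ in range(1000):
    r = rng.normal(size=2) * 3
    assert detect_min_distance(r) == detect_correlation(r)
print("decision rules agree")
```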



We have demonstrated that the optimum ML detector computes a set of M distances D(r, s_m) or D'(r, s_m) and selects the signal corresponding to the smallest (distance) metric. Equivalently, the optimum ML detector computes a set of M correlation metrics C(r, s_m) and selects the signal corresponding to the largest correlation metric. The above development of the optimum detector treated the important case in which all signals are equally probable; in this case, the MAP criterion is equivalent to the ML criterion. When the signals are not equally probable, the optimum MAP detector bases its decision on the probabilities
$$ P(\mathbf{s}_m \mid \mathbf{r}) = \frac{p(\mathbf{r} \mid \mathbf{s}_m) P(\mathbf{s}_m)}{p(\mathbf{r})} \qquad \text{or, equivalently, on the metrics} \qquad \mathrm{PM}(\mathbf{r}, \mathbf{s}_m) = p(\mathbf{r} \mid \mathbf{s}_m) P(\mathbf{s}_m) $$


Example 5.1-3:
Consider the case of binary PAM signals, in which the two possible signal points are $s_1 = -s_2 = \sqrt{\mathcal{E}_b}$, where $\mathcal{E}_b$ is the energy per bit. The a priori probabilities are P(s₁) = p and P(s₂) = 1 − p. Let us determine the metrics for the optimum MAP detector when the transmitted signal is corrupted with AWGN. The received signal vector (one-dimensional) for binary PAM is
$$ r = \pm\sqrt{\mathcal{E}_b} + y_n(T) $$
where $y_n(T)$ is a zero-mean Gaussian random variable with variance $\sigma_n^2 = \tfrac{1}{2} N_0$.

Fall, 2004.

WITS Lab, NSYSU.

5.1.3 The Optimum Detector


Example 5.1-3: (cont.)
The conditional PDFs p(r | s_m) for the two signals are
$$ p(r \mid s_1) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r - \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] \qquad p(r \mid s_2) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r + \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] $$
Then the metrics PM(r, s₁) and PM(r, s₂) are
$$ \mathrm{PM}(r, \mathbf{s}_1) = p\, p(r \mid s_1) = \frac{p}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r - \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] \qquad \mathrm{PM}(r, \mathbf{s}_2) = (1-p)\, p(r \mid s_2) = \frac{1-p}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r + \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] $$



Example 5.1-3: (cont.)
If PM(r, s₁) > PM(r, s₂), we select s₁ as the transmitted signal; otherwise, we select s₂. This decision rule may be expressed as
$$ \frac{\mathrm{PM}(r, \mathbf{s}_1)}{\mathrm{PM}(r, \mathbf{s}_2)} \mathop{\gtrless}_{\mathbf{s}_2}^{\mathbf{s}_1} 1 $$
Since
$$ \frac{\mathrm{PM}(r, \mathbf{s}_1)}{\mathrm{PM}(r, \mathbf{s}_2)} = \frac{p}{1-p} \exp\!\left[ \frac{\left(r + \sqrt{\mathcal{E}_b}\right)^2 - \left(r - \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] $$
the rule is equivalent to
$$ \frac{\left(r + \sqrt{\mathcal{E}_b}\right)^2 - \left(r - \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \mathop{\gtrless}_{\mathbf{s}_2}^{\mathbf{s}_1} \ln\frac{1-p}{p} $$
or
$$ r\sqrt{\mathcal{E}_b}\ \mathop{\gtrless}_{\mathbf{s}_2}^{\mathbf{s}_1}\ \frac{1}{2}\sigma_n^2 \ln\frac{1-p}{p} = \frac{1}{4} N_0 \ln\frac{1-p}{p} $$


Example 5.1-3: (cont.)
The threshold $\tfrac{1}{4} N_0 \ln\frac{1-p}{p}$, denoted by h, divides the real line into two regions, say R₁ and R₂, where R₁ consists of the set of points greater than h and R₂ consists of the set of points less than h. If $r\sqrt{\mathcal{E}_b} > h$, the decision is made that s₁ was transmitted; if $r\sqrt{\mathcal{E}_b} < h$, the decision is made that s₂ was transmitted.



Example 5.1-3: (cont.)
The threshold h depends on N₀ and p. If p = 1/2, h = 0. If p > 1/2, the signal point s₁ is more probable and, hence, h < 0. In this case, the region R₁ is larger than R₂, so that s₁ is more likely to be selected than s₂; in this way the average probability of error is minimized. It is interesting to note that, in the case of unequal a priori probabilities, it is necessary to know not only the values of the a priori probabilities but also the value of the power spectral density N₀ (or, equivalently, the noise-to-signal ratio) in order to compute the threshold. When p = 1/2, the threshold is zero, and knowledge of N₀ is not required by the detector.
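A small sketch of this decision rule (all parameter values are illustrative) makes the role of the threshold explicit: equal priors give h = 0, and a prior favoring s₁ pulls the threshold negative, so a slightly negative observation can still be decided as s₁:

```python
import numpy as np

# Sketch of the Example 5.1-3 MAP rule for binary PAM: decide s1 when
# r*sqrt(Eb) exceeds the threshold h = (N0/4) * ln((1-p)/p), which
# reduces to the ML rule (h = 0) when p = 1/2.
Eb, N0 = 1.0, 2.0

def map_threshold(p, N0):
    return 0.25 * N0 * np.log((1 - p) / p)

def map_decide(r, p, Eb, N0):
    # returns 1 for s1 = +sqrt(Eb), 2 for s2 = -sqrt(Eb)
    return 1 if r * np.sqrt(Eb) > map_threshold(p, N0) else 2

print(map_threshold(0.5, N0))        # 0.0: equal priors -> ML rule
print(map_threshold(0.9, N0) < 0)    # True: s1 more probable -> h < 0
print(map_decide(-0.1, 0.9, Eb, N0)) # prior toward s1 outweighs a slightly negative r
```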


Proof that the decision rule based on the maximum-likelihood criterion minimizes the probability of error when the M signals are equally probable a priori:
Let us denote by R_m the region in the N-dimensional space for which we decide that signal s_m(t) was transmitted when the vector r = [r₁ r₂ ... r_N] is received. The probability of a correct decision, given that s_m(t) was transmitted, satisfies
$$ P(c \mid \mathbf{s}_m) P(\mathbf{s}_m) = \int_{R_m} p(\mathbf{r} \mid \mathbf{s}_m) P(\mathbf{s}_m)\,d\mathbf{r} $$
The average probability of a correct decision is
$$ P(c) = \sum_{m=1}^{M} \frac{1}{M} P(c \mid \mathbf{s}_m) = \sum_{m=1}^{M} \frac{1}{M} \int_{R_m} p(\mathbf{r} \mid \mathbf{s}_m)\,d\mathbf{r} $$
Note that P(c) is maximized by selecting the signal s_m if p(r | s_m) is larger than p(r | s_k) for all m ≠ k. Q.E.D.



Similarly for the MAP criterion: when the M signals are not equally probable, the average probability of a correct decision is
$$ P(c) = \sum_{m=1}^{M} \int_{R_m} P(\mathbf{s}_m \mid \mathbf{r})\, p(\mathbf{r})\,d\mathbf{r} $$
Since p(r) is the same for all s_m, in order for P(c) to be as large as possible, the points to be included in each particular region R_m are those for which P(s_m | r) exceeds all the other posterior probabilities. Q.E.D. We conclude that the MAP criterion maximizes the probability of correct detection.


5.1.4 The Maximum-Likelihood Sequence Detector


When the signal has no memory, the symbol-by-symbol detector described in the preceding section is optimum in the sense of minimizing the probability of a symbol error. When the transmitted signal has memory, i.e., the signals transmitted in successive symbol intervals are interdependent, the optimum detector bases its decisions on observation of a sequence of received signals over successive signal intervals. In this section, we describe a maximum-likelihood sequence detection algorithm that searches for the minimum Euclidean-distance path through the trellis that characterizes the memory in the transmitted signal.


To develop the maximum-likelihood sequence detection algorithm, let us consider, as an example, the NRZI signal described in Section 4.3-2. Its memory is characterized by the trellis shown in the following figure:
The signal transmitted in each signal interval is binary PAM. Hence, there are two possible transmitted signals, corresponding to the signal points $s_1 = -s_2 = \sqrt{\mathcal{E}_b}$, where $\mathcal{E}_b$ is the energy per bit.


The output of the matched-filter or correlation demodulator for binary PAM in the kth signal interval may be expressed as
$$ r_k = \pm\sqrt{\mathcal{E}_b} + n_k $$
where n_k is a zero-mean Gaussian random variable with variance $\sigma_n^2 = \tfrac{1}{2} N_0$. The conditional PDFs for the two possible transmitted signals are
$$ p(r_k \mid s_1) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r_k - \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] \qquad p(r_k \mid s_2) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r_k + \sqrt{\mathcal{E}_b}\right)^2}{2\sigma_n^2} \right] $$


Since the channel noise is assumed to be white and Gaussian, and f(t − iT) and f(t − jT) for i ≠ j are orthogonal, it follows that E(n_k n_j) = 0 for k ≠ j. Hence, the noise sequence n₁, n₂, ..., n_K is also white. Consequently, for any given transmitted sequence s^(m), the joint PDF of r₁, r₂, ..., r_K may be expressed as a product of K marginal PDFs:
$$ p(r_1, r_2, \ldots, r_K \mid \mathbf{s}^{(m)}) = \prod_{k=1}^{K} p(r_k \mid s_k^{(m)}) = \prod_{k=1}^{K} \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left[ -\frac{\left(r_k - s_k^{(m)}\right)^2}{2\sigma_n^2} \right] = \left( \frac{1}{\sqrt{2\pi}\,\sigma_n} \right)^{\!K} \exp\!\left[ -\sum_{k=1}^{K} \frac{\left(r_k - s_k^{(m)}\right)^2}{2\sigma_n^2} \right] \tag{A} $$



Then, given the received sequence r₁, r₂, ..., r_K at the output of the matched filter or correlation demodulator, the detector determines the sequence s^(m) = {s₁^(m), s₂^(m), ..., s_K^(m)} that maximizes the conditional PDF p(r₁, r₂, ..., r_K | s^(m)). Such a detector is called the maximum-likelihood (ML) sequence detector. By taking the logarithm of Equation (A) and neglecting the terms that are independent of (r₁, r₂, ..., r_K), we find that an equivalent ML sequence detector selects the sequence s^(m) that minimizes the Euclidean distance metric
$$ D(\mathbf{r}, \mathbf{s}^{(m)}) = \sum_{k=1}^{K} \left( r_k - s_k^{(m)} \right)^2 $$

5.1.4 The Maximum-Likelihood Sequence Detector


In searching through the trellis for the sequence that minimizes the Euclidean distance, it may appear that we must compute the distance for every possible sequence. For the NRZI example, which employs binary modulation, the total number of sequences is 2^K, where K is the number of outputs obtained from the demodulator. However, this is not the case. We may reduce the number of sequences in the trellis search by using the Viterbi algorithm to eliminate sequences as new data are received from the demodulator. The Viterbi algorithm is a sequential trellis search algorithm for performing ML sequence detection. We describe it below in the context of the NRZI signal. We assume that the search process begins initially at state S₀.
Viterbi Decoding Algorithm


Basic concept
- Generate the code trellis at the decoder.
- The decoder penetrates through the code trellis level by level in search of the transmitted code sequence.
- At each level of the trellis, the decoder computes and compares the metrics of all the partial paths entering a node.
- The decoder stores the partial path with the better metric and eliminates all the other partial paths. The stored partial path is called the survivor.



The corresponding trellis is shown in the following figure:
At time t = T, we receive r₁ = s₁^(m) + n₁ from the demodulator, and at t = 2T, we receive r₂ = s₂^(m) + n₂. Since the signal memory is one bit, which we denote by L = 1, the trellis reaches its regular (steady-state) form after two transitions. (Recall that NRZI amounts to differential encoding.)


Thus, upon receipt of r₂ at t = 2T, we observe that there are two signal paths entering each of the nodes and two signal paths leaving each node. The two paths entering node S₀ at t = 2T correspond to the information bits (0,0) and (1,1) or, equivalently, to the signal points $\left(-\sqrt{\mathcal{E}_b}, -\sqrt{\mathcal{E}_b}\right)$ and $\left(\sqrt{\mathcal{E}_b}, -\sqrt{\mathcal{E}_b}\right)$, respectively. The two paths entering node S₁ at t = 2T correspond to the information bits (0,1) and (1,0) or, equivalently, to the signal points $\left(-\sqrt{\mathcal{E}_b}, \sqrt{\mathcal{E}_b}\right)$ and $\left(\sqrt{\mathcal{E}_b}, \sqrt{\mathcal{E}_b}\right)$, respectively. For the two paths entering node S₀, we compute the two Euclidean distance metrics
$$ D_0(0,0) = \left(r_1 + \sqrt{\mathcal{E}_b}\right)^2 + \left(r_2 + \sqrt{\mathcal{E}_b}\right)^2 \qquad D_0(1,1) = \left(r_1 - \sqrt{\mathcal{E}_b}\right)^2 + \left(r_2 + \sqrt{\mathcal{E}_b}\right)^2 $$
by using the outputs r₁ and r₂ from the demodulator.




The Viterbi algorithm compares these two metrics and discards the path having the larger metric. The other path, with the lower metric, is saved and is called the survivor at t = 2T. Similarly, for the two paths entering node S₁ at t = 2T, we compute the two Euclidean distance metrics
$$ D_1(0,1) = \left(r_1 + \sqrt{\mathcal{E}_b}\right)^2 + \left(r_2 - \sqrt{\mathcal{E}_b}\right)^2 \qquad D_1(1,0) = \left(r_1 - \sqrt{\mathcal{E}_b}\right)^2 + \left(r_2 - \sqrt{\mathcal{E}_b}\right)^2 $$
by using the outputs r₁ and r₂ from the demodulator. The two metrics are compared and the signal path with the larger metric is eliminated. Thus, at t = 2T, we are left with two survivor paths, one at node S₀ and the other at node S₁, and their corresponding metrics.


Upon receipt of r₃ at t = 3T, we compute the metrics of the two paths entering state S₀. Suppose the survivors at t = 2T are the paths (0,0) at S₀ and (0,1) at S₁. Then the two metrics for the paths entering S₀ at t = 3T are
$$ D_0(0,0,0) = D_0(0,0) + \left(r_3 + \sqrt{\mathcal{E}_b}\right)^2 \qquad D_0(0,1,1) = D_1(0,1) + \left(r_3 + \sqrt{\mathcal{E}_b}\right)^2 $$
These two metrics are compared and the path with the larger (greater-distance) metric is eliminated. Similarly, the metrics for the two paths entering S₁ at t = 3T are
$$ D_1(0,0,1) = D_0(0,0) + \left(r_3 - \sqrt{\mathcal{E}_b}\right)^2 \qquad D_1(0,1,0) = D_1(0,1) + \left(r_3 - \sqrt{\mathcal{E}_b}\right)^2 $$
These two metrics are compared and the path with the larger metric is eliminated.
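The add-compare-select recursion described above can be sketched for the two-state NRZI trellis as follows (the encoding convention, SNR, sequence length, and variable names are illustrative assumptions of the sketch, not specifics from the text):

```python
import numpy as np

# Viterbi search over the two-state NRZI trellis.  State = last encoded
# bit b_k, with differential encoding b_k = a_k XOR b_{k-1}; the level
# transmitted in interval k is +sqrt(Eb) for b_k = 1 and -sqrt(Eb) for
# b_k = 0.  The search starts in state S0.
rng = np.random.default_rng(4)
Eb, N0 = 1.0, 0.05
K = 50

info = rng.integers(0, 2, size=K)              # information bits a_k
b = np.bitwise_xor.accumulate(info)            # NRZI-encoded bits (running XOR, b_0 = 0)
s = np.where(b == 1, np.sqrt(Eb), -np.sqrt(Eb))
r = s + rng.normal(0.0, np.sqrt(N0 / 2), size=K)   # demodulator outputs

def viterbi_nrzi(r, Eb):
    levels = (-np.sqrt(Eb), np.sqrt(Eb))       # level when the NEW state is 0 or 1
    dist = [0.0, np.inf]                       # path metrics; start in state S0
    paths = [[], []]                           # decoded information bits per state
    for rk in r:
        new_dist, new_paths = [], []
        for state in (0, 1):
            # each previous state p reaches `state` with input bit p XOR state
            cands = [(dist[p] + (rk - levels[state]) ** 2, p) for p in (0, 1)]
            d, p = min(cands)                  # keep the survivor (smaller metric)
            new_dist.append(d)
            new_paths.append(paths[p] + [p ^ state])
        dist, paths = new_dist, new_paths
    return np.array(paths[int(np.argmin(dist))])

decoded = viterbi_nrzi(r, Eb)
print(np.mean(decoded != info))                # bit error rate (essentially zero at this SNR)
```

Note that only two survivors (one per state) are kept at each step, so the search cost grows linearly in K rather than as 2^K.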


It is relatively easy to generalize the trellis search performed by the Viterbi algorithm to M-ary modulation. Consider, for example, delay modulation with M=4 signals: such a signal is characterized by the four-state trellis shown in the following figure.



Delay modulation with M=4 signals (cont.): We observe that each state has two signal paths entering and two signal paths leaving each node. The memory of the signal is L=1. Hence, the Viterbi algorithm will have four survivors at each stage and their corresponding metrics. Two metrics corresponding to the two entering paths are computed at each node, and one of the two signal paths entering the node is eliminated at each state of the trellis. The Viterbi algorithm thus minimizes the number of trellis paths searched in performing ML sequence detection. From the description of the Viterbi algorithm given above, it is unclear how decisions are made on the individual detected information symbols given the surviving sequences.


If we have advanced to some stage, say K, where K >> L, in the trellis, and we compare the surviving sequences, we shall find that, with probability approaching one, all surviving sequences are identical in bit (or symbol) positions K − 5L and earlier. In a practical implementation of the Viterbi algorithm, decisions on each information bit (or symbol) are therefore forced after a delay of 5L bits, and, hence, the surviving sequences are truncated to the 5L most recent bits (or symbols). Thus, a variable delay in bit or symbol detection is avoided. The loss in performance resulting from this suboptimum detection procedure is negligible if the delay is at least 5L.
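The two-state search just described can be sketched in code. The following is a minimal illustration, not the text's implementation: it assumes the NRZI trellis above with matched-filter levels ±√εb, starts in state S0, and for brevity keeps full survivor paths rather than truncating to the 5L most recent symbols.

```python
import math

def viterbi_nrzi(r, eb=1.0):
    """Minimal Viterbi search over the two-state NRZI trellis.

    The state is the previously encoded bit; an input bit of 1 flips the
    state.  The demodulator output expected in state 0 is -sqrt(eb) and in
    state 1 is +sqrt(eb).  Returns the minimum-distance bit sequence,
    assuming the trellis starts in state S0.
    """
    level = {0: -math.sqrt(eb), 1: math.sqrt(eb)}
    # one survivor per state: (cumulative squared-distance metric, bit path)
    surv = {0: (0.0, []), 1: (float("inf"), [])}
    for rk in r:
        new = {}
        for nxt in (0, 1):
            # two paths enter each node; keep the smaller-metric one
            cands = []
            for prev in (0, 1):
                metric, path = surv[prev]
                bit = prev ^ nxt          # NRZI: bit = 1 flips the state
                cands.append((metric + (rk - level[nxt]) ** 2, path + [bit]))
            new[nxt] = min(cands, key=lambda c: c[0])
        surv = new
    # final decision: survivor with the smallest metric
    return min(surv.values(), key=lambda c: c[0])[1]
```

With noiseless inputs the survivor with zero metric is the transmitted sequence, which is a quick sanity check on the metric bookkeeping above.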


5.1.5 A Symbol-by-Symbol MAP Detector for Signals with Memory


In this section, we describe a maximum a posteriori probability (MAP) algorithm that makes decisions on a symbol-by-symbol basis, where each symbol decision is based on an observation of a sequence of received signal vectors. Hence, this detector is optimum in the sense that it minimizes the probability of a symbol error. The detection algorithm presented below is due to Abend and Fritchman (1970), who developed it as a detection algorithm for channels with intersymbol interference, i.e., channels with memory.



We illustrate the algorithm in the context of detecting a PAM signal with M possible levels. Suppose that it is desired to detect the information symbol transmitted in the kth signal interval, and let r1, r2, ..., r_{k+D} be the observed received sequence, where D is the delay parameter, which is chosen to exceed the signal memory, i.e., D >> L, where L is the inherent memory in the signal. On the basis of the received sequence, we compute the posterior probabilities

$$P(s^{(k)} = A_m \mid r_{k+D}, r_{k+D-1}, \ldots, r_1)$$

for the M possible symbol values and choose the symbol with the largest probability.


Since

$$P(s^{(k)} = A_m \mid r_{k+D}, \ldots, r_1) = \frac{p(r_{k+D}, \ldots, r_1 \mid s^{(k)} = A_m)\, P(s^{(k)} = A_m)}{p(r_{k+D}, r_{k+D-1}, \ldots, r_1)} \qquad \text{--(A)}$$

and since the denominator is common for all M probabilities, the MAP criterion is equivalent to choosing the value of s(k) that maximizes the numerator of (A). Thus, the criterion for deciding on the transmitted symbol s(k) is

$$\tilde{s}^{(k)} = \arg\max_{s^{(k)}} p(r_{k+D}, \ldots, r_1 \mid s^{(k)} = A_m)\, P(s^{(k)} = A_m) \qquad \text{--(B)}$$

When the symbols are equally probable, the probability $P(s^{(k)} = A_m)$ may be dropped from the computation.


The algorithm for computing the probabilities in Equation (B) recursively begins with the first symbol s(1). We have

$$\tilde{s}^{(1)} = \arg\max_{s^{(1)}} p(r_{1+D}, \ldots, r_1 \mid s^{(1)} = A_m)\, P(s^{(1)} = A_m)$$
$$= \arg\max_{s^{(1)}} \sum_{s^{(2)}} \cdots \sum_{s^{(1+D)}} p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)})\, P(s^{(1+D)}, \ldots, s^{(1)})$$
$$= \arg\max_{s^{(1)}} \sum_{s^{(2)}} \cdots \sum_{s^{(1+D)}} p_1(s^{(1+D)}, \ldots, s^{(2)}, s^{(1)})$$

where $\tilde{s}^{(1)}$ denotes the decision on s(1).




For mathematical convenience, we have defined

$$p_1(s^{(1+D)}, \ldots, s^{(2)}, s^{(1)}) \equiv p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)})\, P(s^{(1+D)}, \ldots, s^{(1)})$$

The joint probability $P(s^{(1+D)}, \ldots, s^{(2)}, s^{(1)})$ may be omitted if the symbols are equally probable and statistically independent. As a consequence of the statistical independence of the additive noise sequence, we have

$$p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)}) = p(r_{1+D} \mid s^{(1+D)}, \ldots, s^{(1+D-L)}) \cdots p(r_2 \mid s^{(2)}, s^{(1)})\, p(r_1 \mid s^{(1)})$$

where we assume that s(k) = 0 for k ≤ 0.




For detection of the symbol s(2), we have

$$\tilde{s}^{(2)} = \arg\max_{s^{(2)}} p(r_{2+D}, \ldots, r_1 \mid s^{(2)} = A_m)\, P(s^{(2)} = A_m) \qquad \text{--(C)}$$
$$= \arg\max_{s^{(2)}} \sum_{s^{(3)}} \cdots \sum_{s^{(2+D)}} p(r_{2+D}, \ldots, r_1 \mid s^{(2+D)}, \ldots, s^{(2)})\, P(s^{(2+D)}, \ldots, s^{(2)})$$

The joint conditional probability in the multiple summation can be expressed as

$$p(r_{2+D}, \ldots, r_1 \mid s^{(2+D)}, \ldots, s^{(2)}) = p(r_{2+D} \mid s^{(2+D)}, \ldots, s^{(2+D-L)})\, p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(2)}) \qquad \text{--(D)}$$




Furthermore, the joint probability $p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(2)})\, P(s^{(1+D)}, \ldots, s^{(2)})$ can be obtained from the probabilities computed previously in the detection of s(1). That is,

$$p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(2)})\, P(s^{(1+D)}, \ldots, s^{(2)}) = \sum_{s^{(1)}} p(r_{1+D}, \ldots, r_1 \mid s^{(1+D)}, \ldots, s^{(1)})\, P(s^{(1+D)}, \ldots, s^{(1)}) = \sum_{s^{(1)}} p_1(s^{(1+D)}, \ldots, s^{(2)}, s^{(1)}) \qquad \text{--(E)}$$



By combining Equations (D) and (E) and then substituting into Equation (C), we obtain

$$\tilde{s}^{(2)} = \arg\max_{s^{(2)}} \sum_{s^{(3)}} \cdots \sum_{s^{(2+D)}} p_2(s^{(2+D)}, \ldots, s^{(3)}, s^{(2)})$$

where, by definition,

$$p_2(s^{(2+D)}, \ldots, s^{(3)}, s^{(2)}) = p(r_{2+D} \mid s^{(2+D)}, \ldots, s^{(2+D-L)})\, P(s^{(2+D)}) \sum_{s^{(1)}} p_1(s^{(1+D)}, \ldots, s^{(2)}, s^{(1)})$$



In general, the recursive algorithm for detecting the symbol s(k) is as follows: upon reception of $r_{k+D}, \ldots, r_2, r_1$, we compute

$$\tilde{s}^{(k)} = \arg\max_{s^{(k)}} p(r_{k+D}, \ldots, r_1 \mid s^{(k)})\, P(s^{(k)}) \qquad \text{--(F)}$$
$$= \arg\max_{s^{(k)}} \sum_{s^{(k+1)}} \cdots \sum_{s^{(k+D)}} p_k(s^{(k+D)}, \ldots, s^{(k+1)}, s^{(k)})$$

where, by definition,

$$p_k(s^{(k+D)}, \ldots, s^{(k+1)}, s^{(k)}) = p(r_{k+D} \mid s^{(k+D)}, \ldots, s^{(k+D-L)})\, P(s^{(k+D)}) \sum_{s^{(k-1)}} p_{k-1}(s^{(k-1+D)}, \ldots, s^{(k-1)}) \qquad \text{--(G)}$$

Thus, the recursive nature of the algorithm is established by relations (F) and (G).


The major problem with the algorithm is its computational complexity. In particular, the averaging performed over the symbols $s^{(k+D)}, \ldots, s^{(k+1)}$ in Equation (F) involves a large amount of computation per received signal, especially if the number M of amplitude levels {Am} is large. On the other hand, if M is small and the memory L is relatively short, this algorithm is easily implemented.
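As a concrete check on the decision rule (F), the following sketch computes the symbol-by-symbol MAP decision by brute-force marginalization over all candidate sequences instead of the recursive p_k update, which is only feasible for tiny examples and illustrates exactly the complexity problem noted above. The channel response h, the noise variance, and all names here are our own illustrative assumptions, not the text's.

```python
import itertools
import math

def map_symbol_detect(r, levels, h, sigma2, k, D):
    """Brute-force MAP decision for symbol s(k) from r_1..r_{k+D}.

    h[0] weights the current symbol, h[1] the previous one, etc., so the
    signal memory is L = len(h) - 1.  Symbols are assumed equiprobable,
    and s(j) = 0 for j <= 0, as in the text.  Sums the Gaussian likelihood
    over every sequence consistent with s(k) = a and picks the best a.
    """
    n = k + D                      # number of observations used
    best_val, best_sym = -1.0, None
    for sym in levels:
        total = 0.0
        for seq in itertools.product(levels, repeat=n):
            if seq[k - 1] != sym:  # fix the symbol under test
                continue
            like = 1.0
            for i in range(n):     # product of per-sample likelihoods
                mean = sum(h[j] * seq[i - j]
                           for j in range(len(h)) if i - j >= 0)
                like *= math.exp(-(r[i] - mean) ** 2 / (2.0 * sigma2))
            total += like
        if total > best_val:
            best_val, best_sym = total, sym
    return best_sym
```

The M^(k+D) sequence enumeration here is what the recursion (F)-(G) avoids, by reusing the p_{k-1} quantities from the previous step.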


5.2.1 Probability of Error for Binary Modulation


Let us consider binary PAM signals, where the two signal waveforms are s1(t) = g(t) and s2(t) = −g(t), and g(t) is an arbitrary pulse that is nonzero in the interval 0 ≤ t ≤ Tb and zero elsewhere. Since s1(t) = −s2(t), these signals are said to be antipodal. The energy in the pulse g(t) is εb. As indicated in Section 4.3.1, PAM signals are one-dimensional, and their geometric representation is simply the one-dimensional vectors $s_1 = \sqrt{\mathcal{E}_b}$, $s_2 = -\sqrt{\mathcal{E}_b}$.

Figure (A)


Let us assume that the two signals are equally likely and that signal s1(t) was transmitted. Then the received signal from the (matched filter or correlation) demodulator is

$$r = s_1 + n = \sqrt{\mathcal{E}_b} + n$$

where n represents the additive Gaussian noise component, which has zero mean and variance $\sigma_n^2 = \frac{1}{2}N_0$.

In this case, the decision rule based on the correlation metric given by Equation 5.1-44 compares r with the threshold zero. If r > 0, the decision is made in favor of s1(t), and if r < 0, the decision is made that s2(t) was transmitted.


The two conditional PDFs of r are

$$p(r \mid s_1) = \frac{1}{\sqrt{\pi N_0}}\, e^{-(r - \sqrt{\mathcal{E}_b})^2 / N_0}$$
$$p(r \mid s_2) = \frac{1}{\sqrt{\pi N_0}}\, e^{-(r + \sqrt{\mathcal{E}_b})^2 / N_0}$$



Given that s1(t) was transmitted, the probability of error is simply the probability that r < 0:

$$P(e \mid s_1) = \int_{-\infty}^{0} p(r \mid s_1)\, dr = \frac{1}{\sqrt{\pi N_0}} \int_{-\infty}^{0} \exp\left[-\frac{(r - \sqrt{\mathcal{E}_b})^2}{N_0}\right] dr$$
$$= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{-\sqrt{2\mathcal{E}_b/N_0}} e^{-x^2/2}\, dx \qquad \left(x = \frac{r - \sqrt{\mathcal{E}_b}}{\sqrt{N_0/2}}\right)$$
$$= \frac{1}{\sqrt{2\pi}} \int_{\sqrt{2\mathcal{E}_b/N_0}}^{\infty} e^{-x^2/2}\, dx \qquad (x \to -x)$$
$$= Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right), \qquad \text{where } Q(t) = \frac{1}{\sqrt{2\pi}} \int_{t}^{\infty} e^{-x^2/2}\, dx, \quad t \ge 0$$



If we assume that s2(t) was transmitted, $r = -\sqrt{\mathcal{E}_b} + n$, and the probability that r > 0 is also $P(e \mid s_2) = Q(\sqrt{2\mathcal{E}_b/N_0})$. Since the signals s1(t) and s2(t) are equally likely to be transmitted, the average probability of error is

$$P_b = \tfrac{1}{2} P(e \mid s_1) + \tfrac{1}{2} P(e \mid s_2) = Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right) \qquad \text{--(A)}$$

Two important characteristics of this performance measure: First, the probability of error depends only on the ratio εb/N0. Second, 2εb/N0 is also the output SNR0 from the matched-filter (and correlation) demodulator. The ratio εb/N0 is usually called the signal-to-noise ratio per bit.



We also observe that the probability of error may be expressed in terms of the distance between the two signals s1 and s2. From Figure (A), we observe that the two signals are separated by the distance $d_{12} = 2\sqrt{\mathcal{E}_b}$. By substituting $\mathcal{E}_b = \frac{1}{4}d_{12}^2$ into Equation (A), we obtain

$$P_b = Q\left(\sqrt{\frac{d_{12}^2}{2N_0}}\right)$$

This expression illustrates the dependence of the error probability on the distance between the two signal points.



Error probability for binary orthogonal signals
The signal vectors s1 and s2 are two-dimensional:

$$s_1 = [\sqrt{\mathcal{E}_b} \quad 0], \qquad s_2 = [0 \quad \sqrt{\mathcal{E}_b}]$$

where εb denotes the energy for each of the waveforms. Note that the distance between these signal points is $d_{12} = \sqrt{2\mathcal{E}_b}$.


To evaluate the probability of error, let us assume that s1 was transmitted. Then the received vector at the output of the demodulator is $r = [\sqrt{\mathcal{E}_b} + n_1 \quad n_2]$. We can now substitute r into the correlation metrics given by $C(r, s_m) = 2\, r \cdot s_m - \|s_m\|^2$ to obtain C(r, s1) and C(r, s2). The probability of error is the probability that C(r, s2) > C(r, s1):

$$P(e \mid s_1) = P[C(r, s_2) > C(r, s_1)] = P[2n_2\sqrt{\mathcal{E}_b} - \mathcal{E}_b > 2\mathcal{E}_b + 2n_1\sqrt{\mathcal{E}_b} - \mathcal{E}_b] = P[n_2 - n_1 > \sqrt{\mathcal{E}_b}]$$



Since n1 and n2 are zero-mean, statistically independent Gaussian random variables, each with variance $\frac{1}{2}N_0$, the random variable x = n2 − n1 is zero-mean Gaussian with variance N0. Hence,

$$P(n_2 - n_1 > \sqrt{\mathcal{E}_b}) = \frac{1}{\sqrt{2\pi N_0}} \int_{\sqrt{\mathcal{E}_b}}^{\infty} e^{-x^2/2N_0}\, dx = \frac{1}{\sqrt{2\pi}} \int_{\sqrt{\mathcal{E}_b/N_0}}^{\infty} e^{-x^2/2}\, dx = Q\left(\sqrt{\frac{\mathcal{E}_b}{N_0}}\right)$$

The same error probability is obtained when we assume that s2 is transmitted:

$$P_b = Q\left(\sqrt{\frac{\mathcal{E}_b}{N_0}}\right) = Q(\sqrt{\gamma_b})$$

where γb is the SNR per bit.



If we compare the probability of error for binary antipodal signals with that for binary orthogonal signals, we find that orthogonal signals require a factor-of-2 increase in energy to achieve the same error probability as antipodal signals. Since 10 log10 2 = 3 dB, we say that orthogonal signals are 3 dB poorer than antipodal signals. The difference of 3 dB is simply due to the distance between the two signal points, which is $d_{12}^2 = 2\mathcal{E}_b$ for orthogonal signals, whereas $d_{12}^2 = 4\mathcal{E}_b$ for antipodal signals. The error probability versus 10 log10 εb/N0 for these two types of signals is shown in Figure (B). As observed from this figure, at any given error probability, the εb/N0 required for orthogonal signals is 3 dB more than that for antipodal signals.
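The 3 dB statement can be checked numerically. A minimal sketch (function names are ours; Q(x) is computed from the complementary error function):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pb_antipodal(eb_n0):
    """Pb = Q(sqrt(2*Eb/N0)) for binary antipodal signals."""
    return qfunc(math.sqrt(2.0 * eb_n0))

def pb_orthogonal(eb_n0):
    """Pb = Q(sqrt(Eb/N0)) for binary orthogonal signals."""
    return qfunc(math.sqrt(eb_n0))
```

Doubling the energy of the orthogonal scheme (a 3 dB increase) reproduces the antipodal error probability exactly, which is the gap visible in Figure (B).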


Figure (B): Probability of error for binary signals.

5.2.2 Probability of Error for M-ary Orthogonal Signals


For equal-energy orthogonal signals, the optimum detector selects the signal resulting in the largest cross-correlation between the received vector r and each of the M possible transmitted signal vectors {sm}, i.e.,

$$C(r, s_m) = r \cdot s_m = \sum_{k=1}^{M} r_k s_{mk}, \qquad m = 1, 2, \ldots, M$$

To evaluate the probability of error, let us suppose that the signal s1 is transmitted. Then the received signal vector is

$$r = [\sqrt{\mathcal{E}_s} + n_1 \quad n_2 \quad n_3 \quad \cdots \quad n_M]$$

where εs denotes the symbol energy and n1, n2, ..., nM are zero-mean, mutually statistically independent Gaussian random variables with equal variance $\sigma_n^2 = \frac{1}{2}N_0$.


In this case, the outputs from the bank of M correlators are

$$C(r, s_1) = \sqrt{\mathcal{E}_s}\,(\sqrt{\mathcal{E}_s} + n_1)$$
$$C(r, s_2) = \sqrt{\mathcal{E}_s}\, n_2$$
$$\vdots$$
$$C(r, s_M) = \sqrt{\mathcal{E}_s}\, n_M$$

Note that the scale factor √εs may be eliminated from the correlator outputs by dividing each output by √εs. With this normalization, the PDF of the first correlator output is

$$p_{r_1}(x_1) = \frac{1}{\sqrt{\pi N_0}} \exp\left[-\frac{(x_1 - \sqrt{\mathcal{E}_s})^2}{N_0}\right]$$



and the PDFs of the other M−1 correlator outputs are

$$p_{r_m}(x_m) = \frac{1}{\sqrt{\pi N_0}}\, e^{-x_m^2 / N_0}, \qquad m = 2, 3, \ldots, M$$

It is mathematically convenient to first derive the probability that the detector makes a correct decision. This is the probability that r1 is larger than each of the other M−1 correlator outputs n2, n3, ..., nM. This probability may be expressed as

$$P_c = \int_{-\infty}^{\infty} P(n_2 < r_1, n_3 < r_1, \ldots, n_M < r_1 \mid r_1)\, p(r_1)\, dr_1$$

where $P(n_2 < r_1, n_3 < r_1, \ldots, n_M < r_1 \mid r_1)$ denotes the joint probability that n2, n3, ..., nM are all less than r1, conditioned on any given r1. Then this joint probability is averaged over all r1.


Since the {rm} are statistically independent, the joint probability factors into a product of M−1 marginal probabilities of the form

$$P(n_m < r_1 \mid r_1) = \int_{-\infty}^{r_1} p_{r_m}(x_m)\, dx_m = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\sqrt{2/N_0}\, r_1} e^{-x^2/2}\, dx, \qquad m = 2, 3, \ldots, M \qquad \text{--(B)}$$

with the change of variable $x = \sqrt{2/N_0}\, x_m$. These probabilities are identical for m = 2, 3, ..., M, and the joint probability under consideration is simply the result in Equation (B) raised to the (M−1)th power. Thus, the probability of a correct decision is

$$P_c = \int_{-\infty}^{\infty} \left[\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\sqrt{2/N_0}\, r_1} e^{-x^2/2}\, dx\right]^{M-1} p(r_1)\, dr_1$$


The probability of a (k-bit) symbol error is PM = 1 − Pc:

$$P_M = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \left\{ 1 - \left[\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} e^{-x^2/2}\, dx\right]^{M-1} \right\} \exp\left[-\frac{1}{2}\left(y - \sqrt{\frac{2\mathcal{E}_s}{N_0}}\right)^2\right] dy \qquad \text{--(C)}$$

with the change of variable $y = \sqrt{2/N_0}\, r_1$. The same expression for the probability of error is obtained when any one of the other M−1 signals is transmitted. Since all the M signals are equally likely, the expression for PM given above is the average probability of a symbol error. This expression can be evaluated numerically.
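Equation (C) can indeed be evaluated numerically. The sketch below is our own (the function names and the simple trapezoidal rule are illustrative choices, not from the text); for M = 2 it reproduces the binary orthogonal result Q(√(εs/N0)):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def phi(y):
    """Standard normal CDF."""
    return 0.5 * math.erfc(-y / math.sqrt(2.0))

def pm_orthogonal(M, es_n0, lo=-8.0, hi=12.0, steps=4000):
    """Symbol error probability of M-ary orthogonal signals, Eq. (C),
    via trapezoidal integration of the Gaussian-weighted integrand."""
    a = math.sqrt(2.0 * es_n0)
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        f = (1.0 - phi(y) ** (M - 1)) \
            * math.exp(-0.5 * (y - a) ** 2) / math.sqrt(2.0 * math.pi)
        total += 0.5 * f if i in (0, steps) else f
    return total * h
```

Increasing M at fixed εs/N0 raises PM, since there are more competing correlator outputs that can exceed the correct one.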


In comparing the performance of various digital modulation methods, it is desirable to have the probability of error expressed in terms of the SNR per bit, εb/N0, instead of the SNR per symbol, εs/N0. With M = 2^k, each symbol conveys k bits of information, and hence εs = kεb. Thus, Equation (C) may be expressed in terms of εb/N0 by substituting for εs. It is also desirable to convert the probability of a symbol error into an equivalent probability of a binary digit error. For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability

$$\frac{P_M}{M - 1} = \frac{P_M}{2^k - 1}$$


Furthermore, there are $\binom{k}{n}$ ways in which n bits out of k may be in error. Hence, the average number of bit errors per k-bit symbol is

$$\sum_{n=1}^{k} n \binom{k}{n} \frac{P_M}{2^k - 1} = k\, \frac{2^{k-1}}{2^k - 1}\, P_M \qquad \text{--(D)}$$

and the average bit error probability is just the result in Equation (D) divided by k, the number of bits per symbol. Thus,

$$P_b = \frac{2^{k-1}}{2^k - 1}\, P_M \approx \frac{1}{2} P_M, \qquad k \gg 1$$

The graphs of the probability of a binary digit error as a function of the SNR per bit, εb/N0, are shown in Figure (C) for M = 2, 4, 8, 16, 32, and 64. This figure illustrates that, by increasing the number M of waveforms, one can reduce the SNR per bit required to achieve a given probability of a bit error.



For example, to achieve Pb = 10^−5, the required SNR per bit is a little more than 12 dB for M = 2, but if M is increased to 64 signal waveforms, the required SNR per bit is approximately 6 dB. Thus, a savings of over 6 dB is realized in the transmitter power required to achieve Pb = 10^−5 by increasing M from M = 2 to M = 64.

Figure (C)



What is the minimum required εb/N0 to achieve an arbitrarily small probability of error as M → ∞?

A union bound on the probability of error: Let us investigate the effect of increasing M on the probability of error for orthogonal signals. To simplify the mathematical development, we first derive an upper bound on the probability of a symbol error that is much simpler than the exact form given in Equation (5.2-21):

$$P_M = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \left\{ 1 - \left[\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{y} e^{-x^2/2}\, dx\right]^{M-1} \right\} \exp\left[-\frac{1}{2}\left(y - \sqrt{\frac{2\mathcal{E}_s}{N_0}}\right)^2\right] dy$$



A union bound on the probability of error (cont.): Recall that the probability of error for binary orthogonal signals is given by (5.2-11): $P_b = Q(\sqrt{\mathcal{E}_b/N_0})$. Now, if we view the detector for M orthogonal signals as one that makes M−1 binary decisions between the correlator output C(r, s1) that contains the signal and the other M−1 correlator outputs C(r, sm), m = 2, 3, ..., M, the probability of error is upper-bounded by the union bound of the M−1 events. That is, if Ei represents the event that C(r, si) > C(r, s1) for i ≠ 1, then we have $P_M = P\left(\bigcup_{i=2}^{M} E_i\right) \le \sum_{i=2}^{M} P(E_i)$. Hence,

$$P_M \le (M-1)\, P_b = (M-1)\, Q\left(\sqrt{\mathcal{E}_s/N_0}\right) < M\, Q\left(\sqrt{\mathcal{E}_s/N_0}\right)$$



A union bound on the probability of error (cont.): This bound can be simplified further by upper-bounding $Q(\sqrt{\mathcal{E}_s/N_0})$. We have

$$Q\left(\sqrt{\mathcal{E}_s/N_0}\right) < e^{-\mathcal{E}_s/2N_0} \qquad \text{--(E)}$$

Thus,

$$P_M < M e^{-\mathcal{E}_s/2N_0} = 2^k e^{-k\mathcal{E}_b/2N_0}$$
$$P_M < e^{-k(\mathcal{E}_b/N_0 - 2\ln 2)/2} \qquad \text{--(F)}$$

As k → ∞, or equivalently, as M → ∞, the probability of error approaches zero exponentially, provided that εb/N0 is greater than 2 ln 2, i.e.,

$$\frac{\mathcal{E}_b}{N_0} > 2 \ln 2 = 1.39 \qquad (1.42\ \text{dB})$$


A union bound on the probability of error (cont.): The simple upper bound on the probability of error given by Equation (F) implies that, as long as SNR > 1.42 dB, we can achieve an arbitrarily low PM. However, this union bound is not a very tight upper bound at sufficiently low SNR, because the upper bound for the Q function in Equation (E) is loose. In fact, by more elaborate bounding techniques, it is shown in Chapter 7 that the upper bound in Equation (F) is sufficiently tight for εb/N0 > 4 ln 2. For εb/N0 < 4 ln 2, a tighter upper bound on PM is

$$P_M < 2 e^{-k\left(\sqrt{\mathcal{E}_b/N_0} - \sqrt{\ln 2}\right)^2}$$


A union bound on the probability of error (cont.): Consequently, PM → 0 as k → ∞, provided that

$$\frac{\mathcal{E}_b}{N_0} > \ln 2 = 0.693 \qquad (-1.6\ \text{dB})$$

Hence, −1.6 dB is the minimum required SNR per bit to achieve an arbitrarily small probability of error in the limit as k → ∞ (M → ∞). This minimum SNR per bit is called the Shannon limit for an additive white Gaussian noise channel.
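The threshold behavior of the bound (F) around 2 ln 2 is easy to verify numerically; a small sketch (names ours):

```python
import math

def union_bound_pm(k, eb_n0):
    """Union bound P_M < exp(-k*(Eb/N0 - 2 ln 2)/2) = 2^k exp(-k*Eb/(2 N0))."""
    return math.exp(-0.5 * k * (eb_n0 - 2.0 * math.log(2.0)))
```

For εb/N0 above 2 ln 2 ≈ 1.386 the bound decays as k grows; below it, the bound diverges, which is why the tighter bound above is needed to reach the −1.6 dB Shannon limit.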


5.2.3 Probability of Error for M-ary Biorthogonal Signals

As indicated in Section 4.3, a set of M = 2^k biorthogonal signals is constructed from ½M orthogonal signals by including the negatives of the orthogonal signals. Thus, we achieve a reduction in the complexity of the demodulator for the biorthogonal signals relative to that for orthogonal signals, since the former is implemented with ½M cross-correlators or matched filters, whereas the latter requires M matched filters or cross-correlators. Let us assume that the signal s1(t) corresponding to the vector s1 = [√εs 0 0 ⋯ 0] was transmitted. The received signal vector is

$$r = [\sqrt{\mathcal{E}_s} + n_1 \quad n_2 \quad \cdots \quad n_{M/2}]$$

where the {nm} are zero-mean, mutually statistically independent and identically distributed Gaussian random variables with variance $\sigma_n^2 = \frac{1}{2}N_0$.


The optimum detector decides in favor of the signal corresponding to the largest in magnitude of the cross-correlators

$$C(r, s_m) = r \cdot s_m = \sum_{k=1}^{M/2} r_k s_{mk}, \qquad m = 1, 2, \ldots, \tfrac{1}{2}M$$

while the sign of this largest term is used to decide whether sm(t) or −sm(t) was transmitted. According to this decision rule, the probability of a correct decision is equal to the probability that $r_1 = \sqrt{\mathcal{E}_s} + n_1 > 0$ and that r1 exceeds |rm| = |nm| for m = 2, 3, ..., ½M. But

$$P(|n_m| < r_1 \mid r_1 > 0) = \frac{1}{\sqrt{\pi N_0}} \int_{-r_1}^{r_1} e^{-x^2/N_0}\, dx = \frac{1}{\sqrt{2\pi}} \int_{-r_1/\sqrt{N_0/2}}^{r_1/\sqrt{N_0/2}} e^{-y^2/2}\, dy \qquad \left(y = \frac{x}{\sqrt{N_0/2}}\right)$$



Then, the probability of a correct decision is

$$P_c = \int_0^\infty \left[\frac{1}{\sqrt{2\pi}} \int_{-r_1/\sqrt{N_0/2}}^{r_1/\sqrt{N_0/2}} e^{-x^2/2}\, dx\right]^{\frac{1}{2}M - 1} p(r_1)\, dr_1$$

from which, upon substitution for p(r1), we obtain

$$P_c = \frac{1}{\sqrt{2\pi}} \int_{-\sqrt{2\mathcal{E}_s/N_0}}^{\infty} \left[\frac{1}{\sqrt{2\pi}} \int_{-(v + \sqrt{2\mathcal{E}_s/N_0})}^{v + \sqrt{2\mathcal{E}_s/N_0}} e^{-x^2/2}\, dx\right]^{\frac{1}{2}M - 1} e^{-v^2/2}\, dv \qquad \text{--(G)}$$

where $v = (r_1 - \sqrt{\mathcal{E}_s})/\sqrt{N_0/2}$ and we have used the PDF of r1 given in Equation 5.2-15, $p_{r_1}(x_1) = \frac{1}{\sqrt{\pi N_0}} \exp[-(x_1 - \sqrt{\mathcal{E}_s})^2/N_0]$. The probability of a symbol error is PM = 1 − Pc.


Pc and PM may be evaluated numerically for different values of M. The graph shown in the following Figure (D) illustrates PM as a function of εb/N0, where εs = kεb, for M = 2, 4, 8, 16, and 32.


In this case, the probability of error for M=4 is greater than that for M=2. This is due to the fact that we have plotted the symbol error probability PM in Figure (D). If we plotted the equivalent bit error probability, we would find that the graphs for M=2 and M=4 coincide. As in the case of orthogonal signals, as M → ∞ (k → ∞), the minimum required εb/N0 to achieve an arbitrarily small probability of error is −1.6 dB, the Shannon limit.


5.2.4 Probability of Error for Simplex Signals

Next we consider the probability of error for M simplex signals. Recall from Section 4.3 that simplex signals are a set of M equally correlated signals with mutual cross-correlation coefficient ρmn = −1/(M−1). These signals have the same minimum separation of $\sqrt{2\mathcal{E}_s}$ between adjacent signal points in M-dimensional space as orthogonal signals. They achieve this mutual separation with a transmitted energy of εs(M−1)/M, which is less than that required for orthogonal signals by a factor of (M−1)/M. Consequently, the probability of error for simplex signals is identical to the probability of error for orthogonal signals, but this performance is achieved with a saving of

$$10 \log_{10}(1 - \rho_{mn}) = 10 \log_{10}\frac{M}{M-1}\ \text{dB}$$

For M = 2, the saving is 3 dB. As M is increased, the saving in SNR approaches 0 dB.
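The energy saving is a one-line computation; a minimal sketch (function name ours):

```python
import math

def simplex_saving_db(M):
    """SNR saving of simplex over orthogonal signals: 10*log10(M/(M-1)) dB."""
    return 10.0 * math.log10(M / (M - 1.0))
```

simplex_saving_db(2) gives about 3.01 dB, and the saving shrinks toward 0 dB as M grows, as stated above.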

5.2.5 Probability of Error for M-ary Binary-Coded Signals


In Section 4.3, we have shown that binary-coded signal waveforms are represented by the signal vectors (4.3-38)

$$s_m = [s_{m1} \quad s_{m2} \quad \cdots \quad s_{mN}], \qquad m = 1, 2, \ldots, M$$

where $s_{mj} = \pm\sqrt{\mathcal{E}_s/N}$ for all m and j. N is the block length of the code and is also the dimension of the M signal waveforms. If $d_{\min}^{(e)}$ is the minimum Euclidean distance, then the probability of a symbol error is upper-bounded as

$$P_M < (M-1)\, P_b = (M-1)\, Q\left(\sqrt{\frac{\left(d_{\min}^{(e)}\right)^2}{2N_0}}\right) < 2^k \exp\left[-\frac{\left(d_{\min}^{(e)}\right)^2}{4N_0}\right]$$

5.2.6 Probability of Error for M-ary PAM


Recall from (4.3-6) that M-ary PAM signals are represented geometrically as M one-dimensional signal points with values

$$s_m = \sqrt{\tfrac{1}{2}\mathcal{E}_g}\, A_m, \qquad m = 1, 2, \ldots, M$$

where $A_m = (2m - 1 - M)d$, m = 1, 2, ..., M. The Euclidean distance between adjacent signal points is $d\sqrt{2\mathcal{E}_g}$. Assuming equally probable signals, the average energy is

$$\mathcal{E}_{av} = \frac{1}{M} \sum_{m=1}^{M} \mathcal{E}_m = \frac{1}{M} \sum_{m=1}^{M} s_m^2 = \frac{d^2 \mathcal{E}_g}{2M} \sum_{m=1}^{M} (2m - 1 - M)^2 = \left(\frac{M^2 - 1}{6}\right) d^2 \mathcal{E}_g$$

where we have used $\sum_{m=1}^{M} m = \frac{1}{2}M(M+1)$ and $\sum_{m=1}^{M} m^2 = \frac{1}{6}M(M+1)(2M+1)$.
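The closed form εav = (M²−1)d²εg/6 can be checked against a direct sum over the amplitudes Am = (2m−1−M)d; a minimal sketch (function names ours):

```python
def pam_average_energy(M, d=1.0, eg=1.0):
    """Average energy of M-ary PAM from s_m = sqrt(eg/2) * A_m directly."""
    return sum(0.5 * eg * ((2 * m - 1 - M) * d) ** 2
               for m in range(1, M + 1)) / M

def pam_average_energy_closed(M, d=1.0, eg=1.0):
    """Closed form (M^2 - 1) * d^2 * eg / 6."""
    return (M * M - 1) * d * d * eg / 6.0
```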



Equivalently, we may characterize these signals in terms of their average power, which is

$$P_{av} = \frac{\mathcal{E}_{av}}{T} = \left(\frac{M^2 - 1}{6}\right)\frac{d^2 \mathcal{E}_g}{T} \qquad (5.2\text{-}40)$$

The average probability of error for M-ary PAM: The detector compares the demodulator output r with a set of M−1 thresholds, which are placed at the midpoints of successive amplitude levels, and a decision is made in favor of the amplitude level that is closest to r. We note that if the mth amplitude level is transmitted, the demodulator output is

$$r = s_m + n = \sqrt{\tfrac{1}{2}\mathcal{E}_g}\, A_m + n$$

where the noise variable n has zero mean and variance $\sigma_n^2 = \frac{1}{2}N_0$.



Assuming all amplitude levels are equally likely a priori, the average probability of a symbol error is the probability that the noise variable n exceeds in magnitude one-half of the distance between levels. However, when either one of the two outside levels is transmitted, an error can occur in one direction only. (The M−2 inner levels each contribute a two-sided error event and the two outer levels a one-sided event, which produces the factor (M−1)/M below.) As a result, we have the error probability

$$P_M = \frac{M-1}{M}\, P\left(|r - s_m| > d\sqrt{\tfrac{1}{2}\mathcal{E}_g}\right) = \frac{M-1}{M} \cdot \frac{2}{\sqrt{\pi N_0}} \int_{d\sqrt{\mathcal{E}_g/2}}^{\infty} e^{-x^2/N_0}\, dx$$
$$= \frac{M-1}{M} \cdot \frac{2}{\sqrt{2\pi}} \int_{\sqrt{d^2\mathcal{E}_g/N_0}}^{\infty} e^{-y^2/2}\, dy \qquad \left(y = \frac{x}{\sqrt{N_0/2}}\right)$$
$$= \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{d^2\mathcal{E}_g}{N_0}}\right), \qquad Q(t) = \frac{1}{\sqrt{2\pi}} \int_t^{\infty} e^{-x^2/2}\, dx, \quad t \ge 0$$



From (5.2-40), we note that $d^2 \mathcal{E}_g = \dfrac{6\, P_{av} T}{M^2 - 1}$. By substituting for $d^2 \mathcal{E}_g$, we obtain the average probability of a symbol error for PAM in terms of the average power:

$$P_M = \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{6\, \mathcal{E}_{av}}{(M^2 - 1)\, N_0}}\right) = \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{6\, P_{av} T}{(M^2 - 1)\, N_0}}\right)$$

It is customary for us to use the SNR per bit as the basic parameter, and since T = kTb and k = log2 M,

$$P_M = \frac{2(M-1)}{M}\, Q\left(\sqrt{\frac{6 \log_2 M}{M^2 - 1} \cdot \frac{\mathcal{E}_{b,av}}{N_0}}\right)$$

where $\mathcal{E}_{b,av} = P_{av} T_b$ is the average bit energy and $\mathcal{E}_{b,av}/N_0$ is the average SNR per bit.
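The final expression can be evaluated directly; a minimal sketch (function names ours). For M = 2 it collapses to the binary antipodal result Q(√(2εb/N0)):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pm_pam(M, ebav_n0):
    """Symbol error probability of M-ary PAM vs. average SNR per bit."""
    k = math.log2(M)
    return 2.0 * (M - 1) / M * qfunc(math.sqrt(6.0 * k / (M * M - 1) * ebav_n0))
```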



The case M=2 corresponds to the error probability for binary antipodal signals. The SNR per bit increases by over 4 dB for every factor-of-2 increase in M. For large M, the additional SNR per bit required to increase M by a factor of 2 approaches 6 dB.
Probability of a symbol error for PAM.

5.2.7 Probability of Error for M-ary PSK


Recall from Section 4.3 that digital phase-modulated signal waveforms may be expressed as (4.3-11)

$$s_m(t) = g(t) \cos\left[2\pi f_c t + \frac{2\pi(m-1)}{M}\right], \qquad 1 \le m \le M, \quad 0 \le t \le T$$

and have the vector representation

$$s_m = \left[\sqrt{\mathcal{E}_s}\, \cos\frac{2\pi(m-1)}{M} \quad \sqrt{\mathcal{E}_s}\, \sin\frac{2\pi(m-1)}{M}\right]$$

where $\mathcal{E}_s = \frac{1}{2}\mathcal{E}_g$ is the energy in each waveform (from 4.3-12). Since the signal waveforms have equal energy, the optimum detector for the AWGN channel given by Equation 5.1-44 computes the correlation metrics

$$C(r, s_m) = r \cdot s_m, \qquad m = 1, 2, \ldots, M$$



In other words, the received signal vector r = [r1 r2] is projected onto each of the M possible signal vectors, and a decision is made in favor of the signal with the largest projection. This correlation detector is equivalent to a phase detector that computes the phase of the received signal from r and selects the signal vector sm whose phase is closest to it. The phase of r is

$$\Theta_r = \tan^{-1}\frac{r_2}{r_1}$$

We will determine the PDF of Θr and compute the probability of error from it. Consider the case in which the transmitted signal phase is zero, corresponding to the signal s1(t).



The transmitted signal vector is $s_1 = [\sqrt{\mathcal{E}_s} \quad 0]$, and the received signal vector has components

$$r_1 = \sqrt{\mathcal{E}_s} + n_1, \qquad r_2 = n_2$$

Because n1 and n2 are jointly Gaussian random variables, it follows that r1 and r2 are jointly Gaussian random variables with $E(r_1) = \sqrt{\mathcal{E}_s}$, $E(r_2) = 0$, and $\sigma_{r_1}^2 = \sigma_{r_2}^2 = \frac{1}{2}N_0 = \sigma_r^2$:

$$p_r(r_1, r_2) = \frac{1}{2\pi\sigma_r^2} \exp\left[-\frac{(r_1 - \sqrt{\mathcal{E}_s})^2 + r_2^2}{2\sigma_r^2}\right]$$

The PDF of the phase Θr is obtained by a change in variables from (r1, r2) to

$$V = \sqrt{r_1^2 + r_2^2}, \qquad \Theta_r = \tan^{-1}\frac{r_2}{r_1}$$



The joint PDF of V and Θr is (with dr1 dr2 = V dV dθr)

$$p_{V, \Theta_r}(V, \theta_r) = \frac{V}{2\pi\sigma_r^2} \exp\left[-\frac{V^2 + \mathcal{E}_s - 2\sqrt{\mathcal{E}_s}\, V \cos\theta_r}{2\sigma_r^2}\right]$$

Integration of $p_{V,\Theta_r}(V, \theta_r)$ over the range of V yields $p_{\Theta_r}(\theta_r)$:

$$p_{\Theta_r}(\theta_r) = \int_0^\infty p_{V,\Theta_r}(V, \theta_r)\, dV = \frac{1}{2\pi}\, e^{-\gamma_s \sin^2\theta_r} \int_0^\infty v\, e^{-(v - \sqrt{2\gamma_s}\cos\theta_r)^2/2}\, dv$$

where we define the symbol SNR as $\gamma_s = \mathcal{E}_s/N_0$ and the normalized variable $v = V/\sqrt{N_0/2}$.


$p_{\Theta_r}(\theta_r)$ becomes narrower and more peaked about θr = 0 as the symbol SNR γs increases.

Probability density function $p_{\Theta_r}(\theta_r)$ for γs = 1, 2, 4, and 10.



When s1(t) is transmitted, a decision error is made if the noise causes the phase to fall outside the range −π/M ≤ Θr ≤ π/M. Hence, the probability of a symbol error is

$$P_M = 1 - \int_{-\pi/M}^{\pi/M} p_{\Theta_r}(\theta_r)\, d\theta_r$$

In general, the integral of $p_{\Theta_r}(\theta_r)$ does not reduce to a simple form and must be evaluated numerically, except for M = 2 and M = 4. For binary phase modulation, the two signals s1(t) and s2(t) are antipodal. Hence, the error probability is

$$P_2 = Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$


When M = 4, we have in effect two binary phase-modulation signals in phase quadrature. Since there is no crosstalk or interference between the signals on the two quadrature carriers, the bit error probability is identical to that of M = 2 (5.2-57). The symbol error probability for M = 4 is then determined by noting that

$$P_c = (1 - P_2)^2 = \left[1 - Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\right]^2$$

where Pc is the probability of a correct decision for the 2-bit symbol. Therefore, the symbol error probability for M = 4 is

$$P_4 = 1 - P_c = 2Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\left[1 - \frac{1}{2}\, Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\right]$$
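P4 can be computed from P2 in a few lines; a minimal sketch (function names ours):

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def p4_qpsk(eb_n0):
    """QPSK symbol error probability P4 = 1 - (1 - P2)^2, expanded."""
    p2 = qfunc(math.sqrt(2.0 * eb_n0))
    return 2.0 * p2 * (1.0 - 0.5 * p2)
```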



For M > 4, the symbol error probability PM is obtained by numerically integrating $P_M = 1 - \int_{-\pi/M}^{\pi/M} p_{\Theta_r}(\theta_r)\, d\theta_r$. For large values of M, doubling the number of phases requires an additional 6 dB/bit to achieve the same performance.



An approximation to the error probability for large M and for large SNR may be obtained by first approximating $p_{\Theta_r}(\theta_r)$. For $\mathcal{E}_s/N_0 \gg 1$ and $|\theta_r| \le \frac{1}{2}\pi$, $p_{\Theta_r}(\theta_r)$ is well approximated as

$$p_{\Theta_r}(\theta_r) \approx \sqrt{\frac{\gamma_s}{\pi}}\, \cos\theta_r\, e^{-\gamma_s \sin^2\theta_r}$$

Performing the change in variable from θr to $u = \sqrt{2\gamma_s}\, \sin\theta_r$, we obtain

$$P_M \approx 1 - \int_{-\pi/M}^{\pi/M} \sqrt{\frac{\gamma_s}{\pi}}\, \cos\theta_r\, e^{-\gamma_s \sin^2\theta_r}\, d\theta_r = \frac{2}{\sqrt{2\pi}} \int_{\sqrt{2\gamma_s}\, \sin(\pi/M)}^{\infty} e^{-u^2/2}\, du$$
$$= 2Q\left(\sqrt{2\gamma_s}\, \sin\frac{\pi}{M}\right) = 2Q\left(\sqrt{2k\gamma_b}\, \sin\frac{\pi}{M}\right)$$

where k = log2 M and γs = kγb.
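The high-SNR approximation is easy to evaluate; a minimal sketch (function names ours). For M = 4 it reduces to 2Q(√(4γb)·sin(π/4)) = 2Q(√(2γb)), consistent with the exact P4 at high SNR.

```python
import math

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pm_psk_approx(M, eb_n0):
    """High-SNR M-ary PSK approximation: 2*Q(sqrt(2*k*Eb/N0)*sin(pi/M))."""
    k = math.log2(M)
    return 2.0 * qfunc(math.sqrt(2.0 * k * eb_n0) * math.sin(math.pi / M))
```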



When a Gray code is used in the mapping of k-bit symbols into the corresponding signal phases, two k-bit symbols corresponding to adjacent signal phases differ in only a single bit. Because the most probable errors result in the erroneous selection of a phase adjacent to the true one, most k-bit symbol errors contain only a single bit error. Hence, the equivalent bit error probability for M-ary PSK is well approximated as

$$P_b \approx \frac{1}{k} P_M$$

In practice, the carrier phase is extracted from the received signal by performing some nonlinear operation that introduces a phase ambiguity.
For binary PSK, the signal is often squared in order to remove the modulation; the double-frequency component that is generated is then filtered and divided by 2 in frequency in order to extract an estimate of the carrier frequency and phase. This operation results in a phase ambiguity of 180° in the carrier phase.
For four-phase PSK, the received signal is raised to the fourth power to remove the digital modulation, and the resulting fourth harmonic of the carrier frequency is filtered and divided by 4 in frequency in order to extract the carrier component. These operations yield a carrier frequency component, but there are phase ambiguities of ±90° and 180° in the phase estimate.
Consequently, we do not have an absolute estimate of the carrier phase for demodulation.
The phase ambiguity problem can be overcome by encoding the information in phase differences between successive signal transmissions, as opposed to absolute phase encoding.
For example, in binary PSK, the information bit 1 may be transmitted by shifting the phase of the carrier by 180° relative to the previous carrier phase, while bit 0 is transmitted by a zero phase shift relative to the phase in the previous signaling interval. In four-phase PSK, the relative phase shifts between successive intervals are 0°, 90°, 180°, and −90°, corresponding to the information bits 00, 01, 11, and 10, respectively.
The PSK signals resulting from the encoding process are said to be differentially encoded.
The detector is a relatively simple phase comparator that compares the phases of the demodulated signal over two consecutive intervals to extract the information. Coherent demodulation of differentially encoded PSK results in a higher probability of error than that derived for absolute phase encoding: an error in the demodulated phase of the signal in any given interval will usually result in decoding errors over two consecutive signaling intervals. Hence, the probability of error in differentially encoded M-ary PSK is approximately twice the probability of error for M-ary PSK with absolute phase encoding.
5.2.8 Differential PSK (DPSK) and Its Performance
The received signal of a differentially encoded phase-modulated signal in any given signaling interval is compared to the phase of the received signal from the preceding signaling interval. We demodulate the differentially encoded signal by multiplying r(t) by \cos 2\pi f_c t and \sin 2\pi f_c t and integrating the two products over the interval T. At the kth signaling interval, the demodulator output is

\mathbf{r}_k = \left[\sqrt{\varepsilon_s}\cos(\theta_k - \phi) + n_{k1},\ \sqrt{\varepsilon_s}\sin(\theta_k - \phi) + n_{k2}\right]

or, equivalently,

r_k = \sqrt{\varepsilon_s}\, e^{j(\theta_k - \phi)} + n_k

where \theta_k is the phase angle of the transmitted signal at the kth signaling interval, \phi is the carrier phase, and n_k = n_{k1} + jn_{k2} is the noise vector.
Similarly, the received signal vector at the output of the demodulator in the preceding signaling interval is

r_{k-1} = \sqrt{\varepsilon_s}\, e^{j(\theta_{k-1} - \phi)} + n_{k-1}

The decision variable for the phase detector is the phase difference between these two complex numbers. Equivalently, we can project r_k onto r_{k-1} and use the phase of the resulting complex number:

r_k r_{k-1}^* = \varepsilon_s e^{j(\theta_k - \theta_{k-1})} + \sqrt{\varepsilon_s}\, e^{j(\theta_k - \phi)} n_{k-1}^* + \sqrt{\varepsilon_s}\, e^{-j(\theta_{k-1} - \phi)} n_k + n_k n_{k-1}^*

which, in the absence of noise, yields the phase difference \theta_k - \theta_{k-1}. Differentially encoded PSK signaling that is demodulated and detected as described above is called differential PSK (DPSK).
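A small numerical sketch (the example values are mine): in the absence of noise, the angle of r_k r*_{k−1} recovers the transmitted phase difference, with the unknown carrier phase φ cancelling in the product:

```python
import cmath
import math

es = 2.0                                   # symbol energy (arbitrary)
phi = 1.234                                # unknown carrier phase; cancels below
theta = [0.0, math.pi / 2, math.pi, math.pi / 2]   # absolute transmitted phases

# Noiseless demodulator outputs r_k = sqrt(es) * exp(j(theta_k - phi))
r = [math.sqrt(es) * cmath.exp(1j * (t - phi)) for t in theta]

# Detected phase differences from r_k * conj(r_{k-1})
diffs = [cmath.phase(r[k] * r[k - 1].conjugate()) for k in range(1, len(r))]
print([round(d, 6) for d in diffs])        # pi/2, pi/2, -pi/2
```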
If the pulse g(t) is rectangular, the matched filter may be replaced by an integrate-and-dump filter.
[Figure: block diagram of the DPSK demodulator]
The error probability performance of a DPSK demodulator and detector: the derivation of the exact value of the probability of error for M-ary DPSK is extremely difficult, except for M = 2. Without loss of generality, suppose the phase difference \theta_k - \theta_{k-1} = 0. Furthermore, the exponential factors e^{-j(\theta_{k-1}-\phi)} and e^{j(\theta_k-\phi)} can be absorbed into the Gaussian noise components n_{k-1} and n_k without changing their statistical properties. Therefore,

r_k r_{k-1}^* = \varepsilon_s + \sqrt{\varepsilon_s}\,(n_k + n_{k-1}^*) + n_k n_{k-1}^*

The complication in determining the PDF of the phase is the term n_k n_{k-1}^*.
However, at SNRs of practical interest, the term n_k n_{k-1}^* is small relative to the dominant noise term \sqrt{\varepsilon_s}(n_k + n_{k-1}^*). If we neglect the term n_k n_{k-1}^* and normalize r_k r_{k-1}^* by dividing through by \sqrt{\varepsilon_s}, the new set of decision metrics becomes

x = \sqrt{\varepsilon_s} + \mathrm{Re}(n_k + n_{k-1}^*)
y = \mathrm{Im}(n_k + n_{k-1}^*)

The variables x and y are uncorrelated Gaussian random variables with identical variances \sigma_n^2 = N_0. The phase is

\theta_r = \tan^{-1}(y/x)

The noise variance is now twice as large as in the case of PSK; thus we conclude that the performance of DPSK is 3 dB poorer than that of PSK.
This result is relatively good for M \ge 4, but for M = 2 it is pessimistic, in the sense that the loss of binary DPSK relative to binary PSK is less than 3 dB at large SNR. In binary DPSK, the two possible transmitted phase differences are 0 and \pi rad. Consequently, only the real part of r_k r_{k-1}^* is needed for recovering the information:

\mathrm{Re}(r_k r_{k-1}^*) = \tfrac{1}{2}(r_k r_{k-1}^* + r_k^* r_{k-1})

Because the phase difference between the two successive signaling intervals is zero, an error is made if \mathrm{Re}(r_k r_{k-1}^*) < 0. The probability that r_k r_{k-1}^* + r_k^* r_{k-1} < 0 is a special case of a derivation given in Appendix B.
Appendix B is concerned with the probability that a general quadratic form in complex-valued Gaussian random variables is less than zero. According to Equation B-21, this probability depends entirely on the first and second moments of the complex-valued Gaussian random variables r_k and r_{k-1}. Upon evaluating these moments, we obtain the probability of error for binary DPSK in the form

P_b = \tfrac{1}{2} e^{-\varepsilon_b/N_0}

where \varepsilon_b/N_0 is the SNR per bit.
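The closed forms for binary DPSK and coherent binary PSK can be compared directly (a sketch, not from the slides); at large SNR the gap is well under 3 dB:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_dpsk(gamma_b):
    """Binary DPSK:  Pb = (1/2) exp(-Eb/N0)."""
    return 0.5 * math.exp(-gamma_b)

def pb_bpsk(gamma_b):
    """Coherent binary PSK:  Pb = Q(sqrt(2 Eb/N0))."""
    return Q(math.sqrt(2.0 * gamma_b))

for db in (4, 8, 12):
    g = 10.0 ** (db / 10.0)
    print(db, "dB:", pb_dpsk(g), pb_bpsk(g))
```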
The probability of a binary digit error for four-phase DPSK with Gray coding can be expressed in terms of well-known functions, but its derivation is quite involved. According to Appendix C, it is expressed in the form

P_b = Q_1(a, b) - \tfrac{1}{2} I_0(ab)\exp\left[-\tfrac{1}{2}(a^2 + b^2)\right]

where Q_1(a,b) is the Marcum Q function (2.1-122), I_0(x) is the modified Bessel function of order zero (2.1-123), and the parameters a and b are defined as

a = \sqrt{2\gamma_b\left(1 - \sqrt{\tfrac{1}{2}}\right)}, \qquad b = \sqrt{2\gamma_b\left(1 + \sqrt{\tfrac{1}{2}}\right)}
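These functions are easy to evaluate numerically; the sketch below (function names and the numerical-integration approach are mine) builds I0 from its power series and Q1 from its defining integral, then evaluates the bit error probability:

```python
import math

def i0(x):
    """Modified Bessel function I0(x) via its power series."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def marcum_q1(a, b, n=8000):
    """Q1(a,b) = ∫_b^∞ x exp(-(x²+a²)/2) I0(ax) dx (trapezoid, truncated)."""
    upper = b + 12.0
    h = (upper - b) / n
    s = 0.0
    for i in range(n + 1):
        x = b + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * x * math.exp(-(x * x + a * a) / 2.0) * i0(a * x)
    return s * h

def pb_dqpsk(gamma_b):
    """Bit error probability of Gray-coded four-phase DPSK."""
    a = math.sqrt(2.0 * gamma_b * (1.0 - math.sqrt(0.5)))
    b = math.sqrt(2.0 * gamma_b * (1.0 + math.sqrt(0.5)))
    return marcum_q1(a, b) - 0.5 * i0(a * b) * math.exp(-(a * a + b * b) / 2.0)

print(pb_dqpsk(10.0))   # Eb/N0 = 10 dB
```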
Because binary DPSK is only slightly inferior to binary PSK at large SNR, and because DPSK does not require an elaborate method for estimating the carrier phase, it is often used in digital communication systems.
[Figure: probability of bit error for binary and four-phase PSK and DPSK]
5.2.9 Probability of Error for QAM
Recall from Section 4.3 that QAM signal waveforms may be expressed as (4.3-19)

s_m(t) = A_{mc}\, g(t)\cos 2\pi f_c t - A_{ms}\, g(t)\sin 2\pi f_c t

where A_{mc} and A_{ms} are the information-bearing signal amplitudes of the quadrature carriers and g(t) is the signal pulse. The vector representation of these waveforms is

\mathbf{s}_m = \left[A_{mc}\sqrt{\tfrac{1}{2}\varepsilon_g}\quad A_{ms}\sqrt{\tfrac{1}{2}\varepsilon_g}\right]

To determine the probability of error for QAM, we must specify the signal-point constellation.
QAM signal sets that have M = 4 points: Figure (a) is a four-phase modulated signal, and Figure (b) is a QAM signal with two amplitude levels, labeled A_1 and A_2, and four phases. Because the probability of error is dominated by the minimum distance between pairs of signal points, let us impose the condition d_{\min}^{(e)} = 2A and evaluate the average transmitter power, based on the premise that all signal points are equally probable.
For the four-phase signal, we have

P_{av} = \tfrac{1}{4}(4)(2A^2) = 2A^2

For the two-amplitude, four-phase QAM, we place the points on circles of radii A and \sqrt{3}A. Thus, d_{\min}^{(e)} = 2A, and

P_{av} = \tfrac{1}{2}\left[(\sqrt{3}A)^2 + A^2\right] = 2A^2

which is the same average power as the M = 4-phase signal constellation. Hence, for all practical purposes, the error rate performance of the two signal sets is the same; there is no advantage of the two-amplitude QAM signal set over M = 4-phase modulation.
QAM signal sets that have M = 8 points: we consider four signal constellations. Assuming that the signal points are equally probable, the average transmitted signal power is

P_{av} = \frac{1}{M}\sum_{m=1}^{M}\left(A_{mc}^2 + A_{ms}^2\right) = \frac{A^2}{M}\sum_{m=1}^{M}\left(a_{mc}^2 + a_{ms}^2\right)

where (a_{mc}, a_{ms}) are the coordinates (A_{mc}, A_{ms}) of each signal point, normalized by A.
The two sets (a) and (c) contain signal points that fall on a rectangular grid and have P_{av} = 6A^2. The signal set (b) requires an average transmitted power P_{av} = 6.83A^2, and (d) requires P_{av} = 4.73A^2. Therefore, the fourth signal set (d) requires approximately 1 dB less power than the first two and 1.6 dB less power than the third to achieve the same probability of error. The fourth signal constellation is known to be the best eight-point QAM constellation, because it requires the least power for a given minimum distance between signal points.
QAM signal sets for M \ge 16: for circular 16-QAM, the signal points at a given amplitude level are phase-rotated by \pi/4 relative to the signal points at the adjacent amplitude levels. However, the circular 16-QAM constellation is not the best 16-point QAM signal constellation for the AWGN channel. Rectangular M-ary QAM signals are most frequently used in practice, for the following reasons: rectangular QAM signal constellations have the distinct advantage of being easily generated as two PAM signals impressed on phase-quadrature carriers.
Moreover, the average transmitted power required to achieve a given minimum distance is only slightly greater than that required for the best M-ary QAM signal constellation. For rectangular signal constellations in which M = 2^k, where k is even, the QAM signal constellation is equivalent to two PAM signals on quadrature carriers, each having \sqrt{M} = 2^{k/2} signal points. The probability of error for QAM is therefore easily determined from the probability of error for PAM. Specifically, the probability of a correct decision for the M-ary QAM system is

P_c = \left(1 - P_{\sqrt{M}}\right)^2

where P_{\sqrt{M}} is the probability of error of a \sqrt{M}-ary PAM.
By appropriately modifying the probability of error for M-ary PAM, we obtain

P_{\sqrt{M}} = 2\left(1 - \frac{1}{\sqrt{M}}\right) Q\left(\sqrt{\frac{3\varepsilon_{av}}{(M-1)N_0}}\right)

where \varepsilon_{av}/N_0 is the average SNR per symbol. Therefore, the probability of a symbol error for M-ary QAM is

P_M = 1 - \left(1 - P_{\sqrt{M}}\right)^2

Note that this result is exact for M = 2^k when k is even. When k is odd, there is no equivalent \sqrt{M}-ary PAM system, but this poses no real difficulty, because it is rather easy to determine the error rate for a rectangular signal set directly.
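A direct evaluation of these expressions (a sketch, not from the slides): for M = 4 the result collapses to the QPSK symbol error probability, which makes a convenient self-check.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def qam_symbol_error(M, snr_avg):
    """Exact symbol error probability of square M-QAM (M = 2^k, k even).
    snr_avg is the average SNR per symbol, eps_av/N0, as a linear ratio."""
    p_half = 2.0 * (1.0 - 1.0 / math.sqrt(M)) * Q(math.sqrt(3.0 * snr_avg / (M - 1)))
    return 1.0 - (1.0 - p_half) ** 2

print(qam_symbol_error(16, 10.0 ** 1.5))   # 16-QAM at 15 dB per symbol
```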
If we employ the optimum detector that bases its decisions on the optimum distance metrics (5.1-43), it is relatively straightforward to show that the symbol error probability is tightly upper-bounded as

P_M \le 1 - \left[1 - 2Q\left(\sqrt{\frac{3\varepsilon_{av}}{(M-1)N_0}}\right)\right]^2 \le 4Q\left(\sqrt{\frac{3k\varepsilon_{bav}}{(M-1)N_0}}\right)

for any k \ge 1, where \varepsilon_{bav}/N_0 is the average SNR per bit.
For a nonrectangular QAM signal constellation, we may upper-bound the error probability by use of a union bound:

P_M < (M-1)\, Q\left(\sqrt{\left[d_{\min}^{(e)}\right]^2 / 2N_0}\right)

where d_{\min}^{(e)} is the minimum Euclidean distance between signal points. This bound may be loose when M is large, in which case we may approximate P_M by replacing M-1 by M_n, where M_n is the largest number of neighboring points that are at distance d_{\min}^{(e)} from any constellation point. It is interesting to compare the performance of QAM with that of PSK for any given signal size M.
For M-ary PSK, the probability of a symbol error is approximated as

P_M \approx 2Q\left(\sqrt{2\gamma_s}\,\sin\frac{\pi}{M}\right)

where \gamma_s is the SNR per symbol. For M-ary QAM, the error probability is given in terms of

P_{\sqrt{M}} = 2\left(1 - \frac{1}{\sqrt{M}}\right) Q\left(\sqrt{\frac{3\gamma_{av}}{M-1}}\right)

We simply compare the arguments of the Q function for the two signal formats. The ratio of these two arguments (in SNR terms) is

R_M = \frac{3/(M-1)}{2\sin^2(\pi/M)}
When M = 4, we have R_M = 1, so 4-PSK and 4-QAM yield comparable performance for the same SNR per symbol. For M > 4, M-ary QAM yields better performance than M-ary PSK.
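The SNR advantage of M-QAM over M-PSK, 10 log₁₀ R_M, is easy to tabulate (a sketch; the function name is mine):

```python
import math

def qam_over_psk_db(M):
    """10*log10 of R_M = [3/(M-1)] / [2 sin²(π/M)]."""
    r = (3.0 / (M - 1)) / (2.0 * math.sin(math.pi / M) ** 2)
    return 10.0 * math.log10(r)

for M in (4, 8, 16, 32, 64):
    print(M, round(qam_over_psk_db(M), 2))   # M = 4 gives 0 dB
```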
5.2.10 Comparison of Digital Modulation Methods
One can compare the digital modulation methods on the basis of the SNR required to achieve a specified probability of error. However, such a comparison would not be very meaningful unless it were made on the basis of some constraint, such as a fixed data rate of transmission or a fixed bandwidth. For multiphase signals, the channel bandwidth required is simply the bandwidth of the equivalent low-pass signal pulse g(t) with duration T, which is approximately equal to the reciprocal of T. Since T = k/R = (\log_2 M)/R, it follows that

W = \frac{R}{\log_2 M}
As M is increased, the channel bandwidth required, when the bit rate R is fixed, decreases. The bandwidth efficiency is measured by the bit-rate-to-bandwidth ratio, which is

\frac{R}{W} = \log_2 M

The bandwidth-efficient method for transmitting PAM is single-sideband. In that case the channel bandwidth required to transmit the signal is approximately equal to 1/2T, so that

\frac{R}{W} = 2\log_2 M

This is a factor of 2 better than PSK. For QAM, we have two orthogonal carriers, with each carrier carrying a PAM signal.
Thus, we double the rate relative to PAM. However, the QAM signal must be transmitted via double-sideband; consequently, QAM and PAM have the same bandwidth efficiency when the bandwidth is referenced to the band-pass signal. As for orthogonal signals, if the M = 2^k orthogonal signals are constructed by means of orthogonal carriers with minimum frequency separation of 1/2T, the bandwidth required for transmission of k = \log_2 M information bits is

W = \frac{M}{2T} = \frac{M}{2(k/R)} = \frac{M}{2\log_2 M}\, R

In this case, the bandwidth increases as M increases. In the case of biorthogonal signals, the required bandwidth is one-half of that for orthogonal signals.
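The bandwidth-efficiency expressions above can be collected into one small sketch (function names are mine):

```python
import math

def rw_psk_qam(M):
    """R/W = log2 M for double-sideband PSK/QAM."""
    return math.log2(M)

def rw_pam_ssb(M):
    """R/W = 2 log2 M for single-sideband PAM."""
    return 2.0 * math.log2(M)

def rw_orthogonal(M):
    """R/W = 2 log2(M) / M for orthogonal carriers at spacing 1/(2T)."""
    return 2.0 * math.log2(M) / M

for M in (2, 4, 8, 16, 32):
    print(M, rw_psk_qam(M), rw_pam_ssb(M), rw_orthogonal(M))
```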
A compact and meaningful comparison of modulation methods is one based on the normalized data rate R/W (bits per second per hertz of bandwidth) versus the SNR per bit (\varepsilon_b/N_0) required to achieve a given error probability. In the case of PAM, QAM, and PSK, increasing M results in a higher bit-rate-to-bandwidth ratio R/W.
However, the cost of achieving the higher data rate is an increase in the SNR per bit. Consequently, these modulation methods are appropriate for communication channels that are bandwidth-limited, where we desire R/W > 1 and where there is sufficiently high SNR to support increases in M. Telephone channels and digital microwave radio channels are examples of such band-limited channels.
In contrast, M-ary orthogonal signals yield R/W \le 1. As M increases, R/W decreases due to an increase in the required channel bandwidth, while the SNR per bit required to achieve a given error probability decreases as M increases.
Consequently, M-ary orthogonal signals are appropriate for power-limited channels that have sufficiently large bandwidth to accommodate a large number of signals. As M \to \infty, the error probability can be made as small as desired, provided that the SNR per bit satisfies \varepsilon_b/N_0 > 0.693 (−1.6 dB). This is the minimum SNR per bit required to achieve reliable transmission in the limit as the channel bandwidth W \to \infty and the corresponding R/W \to 0. The figure also shows the normalized capacity of the band-limited AWGN channel, which is due to Shannon (1948). The ratio C/W, where C (= R) is the capacity in bits/s, represents the highest achievable bit-rate-to-bandwidth ratio on this channel; hence, it serves as an upper bound on the bandwidth efficiency of any type of modulation.
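The −1.6 dB figure follows from the normalized capacity relation C/W = log₂(1 + (C/W)·εb/N0): solving for the required SNR per bit and letting R/W → 0 gives ln 2 (a numerical sketch, mine):

```python
import math

def ebn0_required(rw):
    """Minimum Eb/N0 (linear) at spectral efficiency rw = R/W on the
    band-limited AWGN channel: (2**rw - 1) / rw."""
    return (2.0 ** rw - 1.0) / rw

for rw in (4.0, 2.0, 1.0, 0.5, 0.01):
    print(rw, round(10.0 * math.log10(ebn0_required(rw)), 2), "dB")

# As R/W -> 0 the requirement tends to ln 2 ≈ 0.693, i.e. about -1.6 dB.
```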
5.3 Optimum Receiver for CPM Signals
CPM is a modulation method with memory. The memory results from the continuity of the transmitted carrier phase from one signal interval to the next. The transmitted CPM signal may be expressed as

s(t) = \sqrt{\frac{2\varepsilon}{T}}\cos[2\pi f_c t + \phi(t;\mathbf{I})]

where \phi(t;\mathbf{I}) is the carrier phase. The filtered received signal for an additive Gaussian noise channel is

r(t) = s(t) + n(t), \qquad n(t) = n_c(t)\cos 2\pi f_c t - n_s(t)\sin 2\pi f_c t
5.3.1 Optimum Demodulation and Detection of CPM
The optimum receiver for this signal consists of a correlator followed by a maximum-likelihood sequence detector that searches the paths through the state trellis for the minimum Euclidean distance path; the Viterbi algorithm is an efficient method for performing this search. The carrier phase for a CPM signal with a fixed modulation index h may be expressed as

\phi(t;\mathbf{I}) = 2\pi h \sum_{k=-\infty}^{n} I_k q(t - kT)
= \pi h \sum_{k=-\infty}^{n-L} I_k + 2\pi h \sum_{k=n-L+1}^{n} I_k q(t - kT)
= \theta_n + \theta(t;\mathbf{I}), \qquad nT \le t \le (n+1)T

where q(t) = 0 for t < 0, q(t) = 1/2 for t \ge LT, and

q(t) = \int_0^t g(\tau)\, d\tau
The signal pulse g(t) = 0 for t < 0 and t \ge LT. For L = 1, we have a full response CPM signal; for L > 1, where L is a positive integer, we have a partial response CPM signal. When h is rational, i.e., h = m/p where m and p are relatively prime positive integers, the CPM scheme can be represented by a trellis. There are p phase states when m is even (4.3-60):

\Theta_s = \left\{0, \frac{\pi m}{p}, \frac{2\pi m}{p}, \ldots, \frac{(p-1)\pi m}{p}\right\}

and 2p phase states when m is odd (4.3-61):

\Theta_s = \left\{0, \frac{\pi m}{p}, \ldots, \frac{(2p-1)\pi m}{p}\right\}
Phase Tree
These phase diagrams are called phase trees.
[Figure: for CPFSK with binary symbols I_n = ±1, the set of phase trajectories beginning at time t = 0]
[Figure: phase trajectories for quaternary CPFSK]
If L > 1, we have an additional number of states due to the partial response character of the signal pulse g(t). These additional states can be identified by expressing \theta(t;\mathbf{I}) as

\theta(t;\mathbf{I}) = 2\pi h \sum_{k=n-L+1}^{n-1} I_k q(t - kT) + 2\pi h I_n q(t - nT)

The first term on the right-hand side depends on the information symbols (I_{n-1}, I_{n-2}, \ldots, I_{n-L+1}), which is called the correlative state vector, and represents the phase terms corresponding to signal pulses that have not reached their final value. The second term represents the phase contribution due to the most recent symbol I_n.
Hence, the state of the CPM signal (or the modulator) at time t = nT may be expressed as the combined phase state and correlative state, denoted as

S_n = \{\theta_n, I_{n-1}, I_{n-2}, \ldots, I_{n-L+1}\}

for a partial response signal pulse of length LT, where L > 1. In this case, the number of states is

N_s = pM^{L-1} \ (\text{even } m), \qquad N_s = 2pM^{L-1} \ (\text{odd } m)

where h = m/p and M is the alphabet size. Suppose the state of the modulator at t = nT is S_n. The effect of the new symbol in the time interval nT \le t \le (n+1)T is to change the state from S_n to S_{n+1}; hence, at t = (n+1)T, the state becomes

S_{n+1} = (\theta_{n+1}, I_n, I_{n-1}, \ldots, I_{n-L+2})

where \theta_{n+1} = \theta_n + \pi h I_{n-L+1}.
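The state count is easy to compute directly (a sketch; the function name is mine), and it reproduces the 16-state result of Example 5.3-1 that follows:

```python
from math import gcd

def cpm_num_states(m, p, M, L):
    """Number of trellis states N_s for CPM with h = m/p (m, p coprime),
    alphabet size M and pulse length L:
    p*M^(L-1) for even m, 2p*M^(L-1) for odd m."""
    assert gcd(m, p) == 1
    phase_states = p if m % 2 == 0 else 2 * p
    return phase_states * M ** (L - 1)

# Example 5.3-1: binary CPM, h = 3/4, L = 2  ->  16 states
print(cpm_num_states(3, 4, 2, 2))
```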
Example 5.3-1: consider a binary CPM scheme with a modulation index h = 3/4 and a partial response pulse with L = 2. Determine the states S_n of the CPM scheme and sketch the phase tree and state trellis. First, we note that there are 2p = 8 phase states, namely,

\Theta_s = \left\{0, \pm\frac{\pi}{4}, \pm\frac{\pi}{2}, \pm\frac{3\pi}{4}, \pi\right\}

For each of these phase states, there are two states that result from the memory of the CPM scheme. Hence, the total number of states is N_s = 16, namely,

(0,1), (0,-1), (\pi/4,1), (\pi/4,-1), (\pi/2,1), (\pi/2,-1), (3\pi/4,1), (3\pi/4,-1), (\pi,1), (\pi,-1), (-3\pi/4,1), (-3\pi/4,-1), (-\pi/2,1), (-\pi/2,-1), (-\pi/4,1), (-\pi/4,-1)
Example 5.3-1 (cont.): if the system is in the phase state \theta_n = -\pi/4 and I_{n-1} = -1, then

\theta_{n+1} = \theta_n + \pi h I_{n-1} = -\frac{\pi}{4} - \frac{3\pi}{4} = -\pi

[Figure: state trellis for partial response (L = 2) CPM with h = 3/4]
Example 5.3-1 (cont.):
[Figure: phase tree for L = 2 partial response CPM with h = 3/4; g(t) is a rectangular pulse of duration 2T, with initial state (0, 1)]
[Figure: a path through the state trellis corresponding to the sequence (1, −1, −1, −1, 1, 1)]
Metric computations: it is easy to show that the logarithm of the probability of the observed signal r(t), conditioned on a particular sequence of transmitted symbols \mathbf{I}, is proportional to the cross-correlation metric

CM_n(\mathbf{I}) = \int_0^{(n+1)T} r(t)\cos[\omega_c t + \phi(t;\mathbf{I})]\, dt = CM_{n-1}(\mathbf{I}) + \int_{nT}^{(n+1)T} r(t)\cos[\omega_c t + \theta(t;\mathbf{I}) + \theta_n]\, dt

The term CM_{n-1}(\mathbf{I}) represents the metrics for the surviving sequences up to time nT, and the term

v_n(\mathbf{I};\theta_n) = \int_{nT}^{(n+1)T} r(t)\cos[\omega_c t + \theta(t;\mathbf{I}) + \theta_n]\, dt

represents the additional increment to the metrics contributed by the signal in the time interval nT \le t \le (n+1)T.
There are M^L possible sequences \mathbf{I} = (I_n, I_{n-1}, \ldots, I_{n-L+1}) of symbols and p (or 2p) possible phase states \{\theta_n\}. Therefore, there are pM^L (or 2pM^L) different values of v_n(\mathbf{I};\theta_n) computed in each signal interval, and each value is used to increment the metrics corresponding to the pM^{L-1} surviving sequences from the previous signaling interval.
[Figure: computation of the metric increments v_n(\mathbf{I};\theta_n)]
The number of surviving sequences at each stage of the Viterbi decoding process is pM^{L-1} (or 2pM^{L-1}). For each surviving sequence, we have M new increments of v_n(\mathbf{I};\theta_n) that are added to the existing metrics to yield pM^L (or 2pM^L) sequences with pM^L (or 2pM^L) metrics. This number is then reduced back to pM^{L-1} (or 2pM^{L-1}) survivors with corresponding metrics by selecting the most probable sequence of the M sequences merging at each node of the trellis and discarding the other M - 1 sequences.
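The prune-to-survivors step just described is the ordinary Viterbi recursion; a generic sketch over an abstract trellis (state names and metric values are illustrative, not from the slides):

```python
def viterbi_step(survivors, transitions):
    """One Viterbi iteration.  survivors: {state: (metric, path)}.
    transitions: list of (from_state, symbol, to_state, increment).
    For each destination state, keep only the largest accumulated metric."""
    best = {}
    for s_from, sym, s_to, inc in transitions:
        if s_from not in survivors:
            continue
        metric, path = survivors[s_from]
        cand = (metric + inc, path + [sym])
        if s_to not in best or cand[0] > best[s_to][0]:
            best[s_to] = cand
    return best

# Toy 2-state example (illustrative numbers)
surv = {"A": (0.0, []), "B": (0.0, [])}
trans = [("A", +1, "A", 0.9), ("A", -1, "B", 0.1),
         ("B", +1, "A", 0.2), ("B", -1, "B", 0.7)]
surv = viterbi_step(surv, trans)
print(surv)
```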
5.3.2 Performance of CPM Signals
In evaluating the performance of CPM signals achieved with maximum-likelihood sequence detection, we must determine the minimum Euclidean distance of paths through the trellis that separate at a node at t = 0 and remerge at a later time at the same node. Suppose two signals s_i(t) and s_j(t) correspond to two phase trajectories \phi(t;\mathbf{I}_i) and \phi(t;\mathbf{I}_j); the sequences \mathbf{I}_i and \mathbf{I}_j must differ in their first symbol.
The Euclidean distance between the two signals over an interval of length NT (1/T is the symbol rate) is defined as

d_{ij}^2 = \int_0^{NT} [s_i(t) - s_j(t)]^2\, dt
= \int_0^{NT} s_i^2(t)\, dt + \int_0^{NT} s_j^2(t)\, dt - 2\int_0^{NT} s_i(t)s_j(t)\, dt
= 2N\varepsilon - 2\,\frac{2\varepsilon}{T}\int_0^{NT} \cos[\omega_c t + \phi(t;\mathbf{I}_i)]\cos[\omega_c t + \phi(t;\mathbf{I}_j)]\, dt
= 2N\varepsilon - \frac{2\varepsilon}{T}\int_0^{NT} \cos[\phi(t;\mathbf{I}_i) - \phi(t;\mathbf{I}_j)]\, dt
= \frac{2\varepsilon}{T}\int_0^{NT} \left\{1 - \cos[\phi(t;\mathbf{I}_i) - \phi(t;\mathbf{I}_j)]\right\} dt

Hence, the Euclidean distance is related to the phase difference between the paths in the state trellis.
Since \varepsilon = \varepsilon_b \log_2 M, d_{ij}^2 may be expressed as

d_{ij}^2 = 2\varepsilon_b\, \delta_{ij}^2

where \delta_{ij}^2 is defined as

\delta_{ij}^2 = \frac{\log_2 M}{T}\int_0^{NT}\left\{1 - \cos[\phi(t;\mathbf{I}_i) - \phi(t;\mathbf{I}_j)]\right\} dt

Furthermore, since \phi(t;\mathbf{I}_i) - \phi(t;\mathbf{I}_j) = \phi(t;\mathbf{I}_i - \mathbf{I}_j), with \boldsymbol{\xi} = \mathbf{I}_i - \mathbf{I}_j, \delta_{ij}^2 may be written as

\delta_{ij}^2 = \frac{\log_2 M}{T}\int_0^{NT}\left\{1 - \cos\phi(t;\boldsymbol{\xi})\right\} dt

where any element of \boldsymbol{\xi} can take the values 0, \pm 2, \pm 4, \ldots, \pm 2(M-1), except that \xi_0 \ne 0.
The error rate performance for CPM is dominated by the term corresponding to the minimum Euclidean distance:

P_M \approx K_{\delta_{\min}} Q\left(\sqrt{\frac{\varepsilon_b}{N_0}\,\delta_{\min}^2}\right)

where K_{\delta_{\min}} is the number of paths having the minimum distance

\delta_{\min}^2 = \lim_{N\to\infty} \min_{i,j} \delta_{ij}^2 = \lim_{N\to\infty} \min_{i,j} \frac{\log_2 M}{T}\int_0^{NT}\left\{1 - \cos[\phi(t;\mathbf{I}_i) - \phi(t;\mathbf{I}_j)]\right\} dt

We note that for conventional binary PSK with no memory, N = 1 and \delta_{\min}^2 = \delta_{12}^2 = 2.
Since \delta_{\min}^2 characterizes the performance of CPM, we can investigate the effect on \delta_{\min}^2 of varying the alphabet size M, the modulation index h, and the length of the transmitted pulse in partial response CPM. First, we consider full response (L = 1) CPM. For M = 2, we note that the sequences

\mathbf{I}_i = +1, -1, I_2, I_3, \ldots
\mathbf{I}_j = -1, +1, I_2, I_3, \ldots

which differ for k = 0, 1 and agree for k \ge 2, result in two phase trajectories that merge after the second symbol. This corresponds to the difference sequence

\boldsymbol{\xi} = \{2, -2, 0, 0, \ldots\}
The Euclidean distance for this sequence is easily calculated from

\delta_{ij}^2 = \frac{\log_2 M}{T}\int_0^{NT}\left\{1 - \cos\phi(t;\boldsymbol{\xi})\right\} dt

and provides an upper bound on \delta_{\min}^2. This upper bound for CPFSK with M = 2 is

d_B^2(h) = 2\left(1 - \frac{\sin 2\pi h}{2\pi h}\right), \qquad M = 2

For example, when h = 1/2, which corresponds to MSK, we have d_B^2(1/2) = 2, so that \delta_{\min}^2 \le 2. For M > 2 and full response CPM, an upper bound on \delta_{\min}^2 can be obtained by considering the phase difference sequence \boldsymbol{\xi} = \{\alpha, -\alpha, 0, 0, \ldots\} with \alpha = \pm 2, \pm 4, \ldots, \pm 2(M-1).
This sequence \boldsymbol{\xi} = \{\alpha, -\alpha, 0, 0, \ldots\} yields the upper bound for M-ary CPFSK as

d_B^2(h) = \min_{1 \le k \le M-1} 2\log_2 M \left(1 - \frac{\sin 2k\pi h}{2k\pi h}\right)

Large gains in performance can be achieved by increasing the alphabet size M. The upper bound may not be achievable for all values of h, because \delta_{\min}^2(h) \le d_B^2(h).
[Figure: d_B^2 as a function of the modulation index h for full response CPM]
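The bound is straightforward to evaluate (a sketch; the function name is mine), and the MSK case reproduces d_B² = 2:

```python
import math

def d2_bound(h, M=2):
    """Upper bound d_B²(h) on δ²_min for full response M-ary CPFSK:
    min over 1 <= k <= M-1 of 2*log2(M)*(1 - sin(2πkh)/(2πkh))."""
    def term(k):
        x = 2.0 * math.pi * k * h
        s = 1.0 if x == 0.0 else math.sin(x) / x
        return 2.0 * math.log2(M) * (1.0 - s)
    return min(term(k) for k in range(1, M))

# MSK: binary CPFSK with h = 1/2 gives d_B² = 2, matching binary PSK's δ²_min.
print(d2_bound(0.5), d2_bound(0.5, 4))
```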
5.4 Optimum Receiver For Signals with Random Phase In AWGN Channel
In this section, we consider the design of the optimum receiver for carrier-modulated signals when the carrier phase is unknown and no attempt is made to estimate its value. Uncertainty in the carrier phase of the received signal may be due to one or more of the following reasons:
The oscillators that are used at the transmitter and the receiver to generate the carrier signals are generally not phase-synchronous.
The time delay in the propagation of the signal from the transmitter to the receiver is generally not known precisely.
Assuming a transmitted signal of the form s(t) = \mathrm{Re}\left[g(t)e^{j2\pi f_c t}\right] that propagates through a channel with delay t_0, the received signal is

s(t - t_0) = \mathrm{Re}\left[g(t - t_0)e^{j2\pi f_c (t - t_0)}\right] = \mathrm{Re}\left[g(t - t_0)e^{-j2\pi f_c t_0}e^{j2\pi f_c t}\right]
The carrier phase shift due to the propagation delay t_0 is

\phi = -2\pi f_c t_0

Note that large changes in the carrier phase can occur due to relatively small changes in the propagation delay. For example, if the carrier frequency f_c = 1 MHz, an uncertainty or a change in the propagation delay of 0.5 μs will cause a phase uncertainty of \pi rad. In some channels the time delay in the propagation of the signal from the transmitter to the receiver may change rapidly and in an apparently random fashion. In the absence of knowledge of the carrier phase, we may treat this signal parameter as a random variable and determine the form of the optimum receiver for recovering the transmitted information from the received signal.
5.4.1 Optimum Receiver for Binary Signals
We consider a binary communication system that uses the two carrier-modulated signals s_1(t) and s_2(t) to transmit the information, where

s_m(t) = \mathrm{Re}\left[s_{lm}(t)e^{j2\pi f_c t}\right], \qquad m = 1, 2, \quad 0 \le t \le T

and s_{lm}(t), m = 1, 2, are the equivalent low-pass signals. The two signals are assumed to have equal energy (4.1-22, 4.1-24):

\varepsilon = \int_0^T s_m^2(t)\, dt = \tfrac{1}{2}\int_0^T |s_{lm}(t)|^2\, dt

The two signals are characterized by the complex-valued correlation coefficient

\rho_{12} = \frac{1}{2\varepsilon}\int_0^T s_{l1}(t)\, s_{l2}^*(t)\, dt
The received signal is assumed to be a phase-shifted version of the transmitted signal, corrupted by the additive noise

n(t) = \mathrm{Re}\left\{[n_c(t) + jn_s(t)]e^{j2\pi f_c t}\right\} = \mathrm{Re}\left[z(t)e^{j2\pi f_c t}\right]

Hence, the received signal may be expressed as

r(t) = \mathrm{Re}\left\{\left[s_{lm}(t)e^{j\phi} + z(t)\right]e^{j2\pi f_c t}\right\}

where

r_l(t) = s_{lm}(t)e^{j\phi} + z(t), \qquad 0 \le t \le T

is the equivalent low-pass received signal. This received signal is now passed through a demodulator whose sampled output at t = T is passed to the detector.
The optimum demodulator
In Section 5.1.1, we demonstrated that if the received signal is correlated with a set of orthogonal functions \{f_n(t)\} that span the signal space, the outputs from the bank of correlators provide a set of sufficient statistics for the detector to make a decision that minimizes the probability of error. We also demonstrated that a bank of matched filters could be substituted for the bank of correlators. A similar orthogonal decomposition can be employed for a received signal with an unknown carrier phase. It is mathematically convenient to deal with the equivalent low-pass signals and to specify the signal correlators or matched filters in terms of the equivalent low-pass signal waveforms.
The impulse response h_l(t) of a filter matched to the complex-valued equivalent low-pass signal s_l(t), 0 \le t \le T, is (from Problem 5.6) h_l(t) = s_l^*(T - t), and the output of such a filter at t = T is simply (4.1-24)

\int_0^T |s_l(t)|^2\, dt = 2\varepsilon

where \varepsilon is the signal energy. A similar result is obtained if the signal s_l(t) is correlated with s_l^*(t) and the correlator output is sampled at t = T. The optimum demodulator for the equivalent low-pass received signal r_l(t) may therefore be realized by two matched filters in parallel, one matched to s_{l1}(t) and the other to s_{l2}(t).
[Figure: optimum receiver for binary signals]
The outputs of the matched filters or correlators at the sampling instant are the two complex numbers

r_m = r_{mc} + jr_{ms}, \qquad m = 1, 2
Suppose that the transmitted signal is s_1(t). Then it can be shown (Problem 5.41) that

r_1 = 2\varepsilon\cos\phi + n_{1c} + j(2\varepsilon\sin\phi + n_{1s})
r_2 = 2\varepsilon|\rho|\cos(\phi - \alpha_0) + n_{2c} + j\left[2\varepsilon|\rho|\sin(\phi - \alpha_0) + n_{2s}\right]

where \rho is the complex-valued correlation coefficient of the two signals s_{l1}(t) and s_{l2}(t), which may be expressed as \rho = |\rho|e^{j\alpha_0}. The random noise variables n_{1c}, n_{1s}, n_{2c}, and n_{2s} are jointly Gaussian, with zero mean and equal variance.
The optimum detector
The optimum detector observes the random variables [r1c r1s r2c r2s]=r, where r1=r1c+jr1s, and r2=r2c+jr2s, and bases its decision on the posterior probabilities P(sm|r), m=1,2. These probabilities may be expressed as p (r | sm )P (sm ) P (sm | r ) = , p (r ) m = 1,2
The optimum decision rule may be expressed as
$$P(s_1 \mid \mathbf{r}) \;\underset{s_2}{\overset{s_1}{\gtrless}}\; P(s_2 \mid \mathbf{r}) \qquad\Longleftrightarrow\qquad \frac{p(\mathbf{r} \mid s_1)}{p(\mathbf{r} \mid s_2)} \;\underset{s_2}{\overset{s_1}{\gtrless}}\; \frac{P(s_2)}{P(s_1)}$$
The ratio of PDFs on the left-hand side is the likelihood ratio, which we denote as
$$\Lambda(\mathbf{r}) = \frac{p(\mathbf{r} \mid s_1)}{p(\mathbf{r} \mid s_2)}$$
The right-hand side is the ratio of the two prior probabilities, which takes the value of unity when the two signals are equally probable. The probability density functions $p(\mathbf{r} \mid s_1)$ and $p(\mathbf{r} \mid s_2)$ can be obtained by averaging the PDFs $p(\mathbf{r} \mid s_m, \varphi)$ over the PDF of the random carrier phase, i.e.,
$$p(\mathbf{r} \mid s_m) = \int_0^{2\pi} p(\mathbf{r} \mid s_m, \varphi)\, p(\varphi)\, d\varphi$$
For the special case in which the two signals are orthogonal, i.e., $\rho = 0$, the outputs of the demodulator are (from Equation 5.4-10)
$$r_1 = r_{1c} + j r_{1s} = 2\mathcal{E}\cos\varphi + n_{1c} + j\left(2\mathcal{E}\sin\varphi + n_{1s}\right)$$
$$r_2 = r_{2c} + j r_{2s} = n_{2c} + j n_{2s}$$
where $n_{1c}$, $n_{1s}$, $n_{2c}$, and $n_{2s}$ are mutually uncorrelated, statistically independent, zero-mean Gaussian random variables.
The joint PDF of $\mathbf{r} = [r_{1c}\; r_{1s}\; r_{2c}\; r_{2s}]$ may be expressed as a product of the marginal PDFs:
$$p(r_{1c}, r_{1s} \mid s_1, \varphi) = \frac{1}{2\pi\sigma^2} \exp\left[-\frac{(r_{1c} - 2\mathcal{E}\cos\varphi)^2 + (r_{1s} - 2\mathcal{E}\sin\varphi)^2}{2\sigma^2}\right]$$
$$p(r_{2c}, r_{2s} \mid s_1, \varphi) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{2c}^2 + r_{2s}^2}{2\sigma^2}\right)$$
where $\sigma^2 = 2\mathcal{E}N_0$. The uniform PDF for the carrier phase, $p(\varphi) = 1/2\pi$, represents the most ignorance that can be exhibited by the detector. This is called the least favorable PDF for $\varphi$.
With $p(\varphi) = 1/2\pi$, $0 \le \varphi \le 2\pi$, substituted into the integral for $p(\mathbf{r} \mid s_m)$, we obtain
$$\frac{1}{2\pi}\int_0^{2\pi} p(r_{1c}, r_{1s} \mid s_1, \varphi)\, d\varphi = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{1c}^2 + r_{1s}^2 + 4\mathcal{E}^2}{2\sigma^2}\right) \frac{1}{2\pi}\int_0^{2\pi} \exp\left[\frac{2\mathcal{E}(r_{1c}\cos\varphi + r_{1s}\sin\varphi)}{\sigma^2}\right] d\varphi$$
$$= \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{1c}^2 + r_{1s}^2 + 4\mathcal{E}^2}{2\sigma^2}\right) I_0\left(\frac{2\mathcal{E}\sqrt{r_{1c}^2 + r_{1s}^2}}{\sigma^2}\right)$$
where $I_0(x)$ is the modified Bessel function of order zero.
By performing a similar integration under the assumption that the signal s2(t) was transmitted, we obtain the result
$$p(r_{2c}, r_{2s} \mid s_2) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{2c}^2 + r_{2s}^2 + 4\mathcal{E}^2}{2\sigma^2}\right) I_0\left(\frac{2\mathcal{E}\sqrt{r_{2c}^2 + r_{2s}^2}}{\sigma^2}\right)$$
When we substitute these results into the likelihood ratio $\Lambda(\mathbf{r})$, we obtain
$$\Lambda(\mathbf{r}) = \frac{I_0\!\left(2\mathcal{E}\sqrt{r_{1c}^2 + r_{1s}^2}\,/\sigma^2\right)}{I_0\!\left(2\mathcal{E}\sqrt{r_{2c}^2 + r_{2s}^2}\,/\sigma^2\right)} \;\underset{s_2}{\overset{s_1}{\gtrless}}\; \frac{P(s_2)}{P(s_1)}$$
Thus, the optimum detector computes the two envelopes $\sqrt{r_{1c}^2 + r_{1s}^2}$ and $\sqrt{r_{2c}^2 + r_{2s}^2}$ and the corresponding values of the Bessel functions $I_0(2\mathcal{E}\sqrt{r_{1c}^2 + r_{1s}^2}/\sigma^2)$ and $I_0(2\mathcal{E}\sqrt{r_{2c}^2 + r_{2s}^2}/\sigma^2)$ to form the likelihood ratio.
We observe that this computation requires knowledge of the noise variance $\sigma^2$. The likelihood ratio is then compared with the threshold $P(s_2)/P(s_1)$ to determine which signal was transmitted. A significant simplification in the implementation of the optimum detector occurs when the two signals are equally probable. In this case the threshold becomes unity, and, due to the monotonicity of the Bessel function shown in the figure, the optimum detection rule simplifies to
$$\sqrt{r_{1c}^2 + r_{1s}^2} \;\underset{s_2}{\overset{s_1}{\gtrless}}\; \sqrt{r_{2c}^2 + r_{2s}^2}$$
Because the optimum detector bases its decision on the two envelopes $\sqrt{r_{1c}^2 + r_{1s}^2}$ and $\sqrt{r_{2c}^2 + r_{2s}^2}$, it is called an envelope detector. We observe that the computation of the envelopes of the received signal samples at the output of the demodulator renders the carrier phase $\varphi$ irrelevant in the decision as to which signal was transmitted. Equivalently, the decision may be based on the squared envelopes $r_{1c}^2 + r_{1s}^2$ and $r_{2c}^2 + r_{2s}^2$, in which case the detector is called a square-law detector.
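The square-law decision rule above can be sketched in a few lines (a hypothetical illustration, assuming orthogonal signals and noiseless demodulator outputs; the residual value in the second branch is invented so the comparison is nontrivial) to show that the decision is invariant to the unknown carrier phase:

```python
import cmath
import math

def square_law_detect(r1, r2):
    """Square-law detector for binary signals: decide s1 iff |r1|^2 > |r2|^2.
    By monotonicity of I0, this is equivalent to the likelihood-ratio test
    when the two signals are equally probable."""
    return 1 if abs(r1) ** 2 > abs(r2) ** 2 else 2

# Noiseless demodulator outputs when s1 is sent and rho = 0:
# r1 = 2E e^{j phi}, r2 ~ 0.  The decision is identical for every
# carrier phase phi -- the envelope discards the unknown phase.
E = 1.0
residual = 0.1 + 0.1j  # hypothetical small leakage in the r2 branch
decisions = {square_law_detect(2 * E * cmath.exp(1j * phi), residual)
             for phi in (0.0, 0.7, math.pi / 2, 2.5, 5.9)}
print(decisions)
```

Every trial phase yields the same decision, which is the point of the envelope/square-law structure.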
5.4.1 Optimum Receiver for Binary Signals: Detection of Binary FSK Signals
Recall that in binary FSK we employ two different frequencies, say $f_1$ and $f_2 = f_1 + \Delta f$, to transmit a binary information sequence. The choice of the minimum frequency separation $\Delta f = f_2 - f_1$ is considered below.
$$s_1(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos 2\pi f_1 t, \qquad s_2(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos 2\pi f_2 t, \qquad 0 \le t \le T_b$$
The equivalent lowpass counterparts are
$$s_{l1}(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}}, \qquad s_{l2}(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}}\, e^{j 2\pi \Delta f t}, \qquad 0 \le t \le T_b$$
The received signal may be expressed as
$$r(t) = \sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos\left(2\pi f_m t + \varphi_m\right) + n(t), \qquad m = 0, 1$$
Figure: Demodulation and square-law detection of binary FSK. The correlator basis functions are
$$f_{kc}(t) = \sqrt{\frac{2}{T_b}} \cos 2\pi (f_1 + k\,\Delta f)\,t, \qquad f_{ks}(t) = \sqrt{\frac{2}{T_b}} \sin 2\pi (f_1 + k\,\Delta f)\,t, \qquad k = 0, 1$$
If the mth signal is transmitted, the four samples at the detector may be expressed as
$$r_{kc} = \int_0^{T_b} r(t)\, f_{kc}(t)\, dt = \int_0^{T_b} \left[\sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos\left(2\pi (f_1 + m\,\Delta f)\,t + \varphi_m\right) + n(t)\right] \sqrt{\frac{2}{T_b}} \cos 2\pi (f_1 + k\,\Delta f)\,t \; dt$$
Expanding the product and discarding the double-frequency terms, whose integrals are approximately zero, yields
$$r_{kc} = \sqrt{\mathcal{E}_b}\left[\frac{\sin 2\pi(m-k)\Delta f T_b}{2\pi(m-k)\Delta f T_b}\cos\varphi_m + \frac{\cos 2\pi(m-k)\Delta f T_b - 1}{2\pi(m-k)\Delta f T_b}\sin\varphi_m\right] + n_{kc}, \qquad k, m = 0, 1$$
Similarly, the quadrature samples are
$$r_{ks} = \int_0^{T_b} r(t)\, f_{ks}(t)\, dt = \int_0^{T_b} \left[\sqrt{\frac{2\mathcal{E}_b}{T_b}} \cos\left(2\pi (f_1 + m\,\Delta f)\,t + \varphi_m\right) + n(t)\right] \sqrt{\frac{2}{T_b}} \sin 2\pi (f_1 + k\,\Delta f)\,t \; dt$$
which, after the double-frequency terms are discarded, reduce to
$$r_{ks} = \sqrt{\mathcal{E}_b}\left[\frac{1 - \cos 2\pi(m-k)\Delta f T_b}{2\pi(m-k)\Delta f T_b}\cos\varphi_m + \frac{\sin 2\pi(m-k)\Delta f T_b}{2\pi(m-k)\Delta f T_b}\sin\varphi_m\right] + n_{ks}, \qquad k, m = 0, 1$$
We observe that when $k = m$, the sampled values at the detector are
$$r_{mc} = \sqrt{\mathcal{E}_b}\cos\varphi_m + n_{mc}, \qquad r_{ms} = \sqrt{\mathcal{E}_b}\sin\varphi_m + n_{ms}$$
When $k \ne m$, the signal components in the samples $r_{kc}$ and $r_{ks}$ vanish, independently of the value of the phase shift $\varphi_k$, provided that the frequency separation between successive frequencies is $\Delta f = 1/T$. In that case, the other two correlator outputs consist of noise only, i.e., $r_{kc} = n_{kc}$, $r_{ks} = n_{ks}$, $k \ne m$.
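A quick numeric check of this orthogonality condition (a sketch; the helper name and the coefficient grouping are ours): at $\Delta f = 1/T_b$ both signal coefficients in the $k \ne m$ correlator outputs vanish, whereas $\Delta f = 1/(2T_b)$, which suffices for coherent detection, does not:

```python
import math

def corr_coeffs(x):
    """The two signal coefficients multiplying cos(phi_m) and sin(phi_m)
    in r_kc, with x = 2 pi (m - k) df Tb (the r_ks coefficients are the
    same pair up to sign)."""
    return math.sin(x) / x, (math.cos(x) - 1.0) / x

# df = 1/Tb and m - k = 1  ->  x = 2 pi: both coefficients are zero,
# so the k != m correlator outputs contain noise only, for any phi_m.
c_sin, c_cos = corr_coeffs(2 * math.pi)

# df = 1/(2 Tb)  ->  x = pi: the second coefficient equals -2/pi, so the
# wrong-frequency branch still picks up signal; noncoherent detection
# therefore needs twice the coherent minimum separation.
h_sin, h_cos = corr_coeffs(math.pi)
print(c_sin, c_cos, h_cos)
```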
5.4.2 Optimum Receiver for M-ary Orthogonal Signals
If the equal-energy and equally probable signal waveforms are represented as
$$s_m(t) = \operatorname{Re}\left[s_{lm}(t)\, e^{j 2\pi f_c t}\right], \qquad m = 1, 2, \ldots, M, \quad 0 \le t \le T$$
where $s_{lm}(t)$ are the equivalent lowpass signals, the optimum correlation-type or matched-filter-type demodulator produces the M complex-valued random variables
$$r_m = r_{mc} + j r_{ms} = \int_0^T r_l(t)\, h_{lm}^*(T - t)\, dt = \int_0^T r_l(t)\, s_{lm}^*(t)\, dt, \qquad m = 1, 2, \ldots, M$$
where $r_l(t)$ is the equivalent lowpass received signal. The optimum detector, based on a random, uniformly distributed carrier phase, computes the M envelopes
$$|r_m| = \sqrt{r_{mc}^2 + r_{ms}^2}, \qquad m = 1, 2, \ldots, M$$
or the squared envelopes $|r_m|^2$, and selects the signal with the largest envelope.
Figure: Optimum receiver for M-ary orthogonal FSK signals (demodulation of M-ary signals for noncoherent detection). There are 2M correlators: two for each possible transmitted frequency. The minimum frequency separation between adjacent frequencies required to maintain orthogonality is $\Delta f = 1/T$.
5.4.3 Probability of Error for Envelope Detection of M-ary Orthogonal Signals
We assume that the M signals are equally probable a priori and that the signal s1(t) is transmitted in the signal interval $0 \le t \le T$. The M decision metrics at the detector are the M envelopes
$$|r_m| = \sqrt{r_{mc}^2 + r_{ms}^2}, \qquad m = 1, 2, \ldots, M$$
where
$$r_{1c} = \sqrt{\mathcal{E}_s}\cos\varphi_1 + n_{1c}, \qquad r_{1s} = \sqrt{\mathcal{E}_s}\sin\varphi_1 + n_{1s}$$
and
$$r_{mc} = n_{mc}, \quad r_{ms} = n_{ms}, \qquad m = 2, 3, \ldots, M$$
The additive noise components $\{n_{mc}\}$ and $\{n_{ms}\}$ are mutually statistically independent zero-mean Gaussian variables with equal variance $\sigma^2 = N_0/2$.
The PDFs of the random variables at the input to the detector are
$$p_{r_1}(r_{1c}, r_{1s}) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{1c}^2 + r_{1s}^2 + \mathcal{E}_s}{2\sigma^2}\right) I_0\left(\frac{\sqrt{\mathcal{E}_s\,(r_{1c}^2 + r_{1s}^2)}}{\sigma^2}\right)$$
$$p_{r_m}(r_{mc}, r_{ms}) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{r_{mc}^2 + r_{ms}^2}{2\sigma^2}\right), \qquad m = 2, 3, \ldots, M$$
We define the normalized variables
$$R_m = \frac{\sqrt{r_{mc}^2 + r_{ms}^2}}{\sigma}, \qquad \Theta_m = \tan^{-1}\frac{r_{ms}}{r_{mc}}$$
Clearly, $r_{mc} = \sigma R_m \cos\Theta_m$ and $r_{ms} = \sigma R_m \sin\Theta_m$. The Jacobian of this transformation is
$$|\mathbf{J}| = \begin{vmatrix} \sigma\cos\Theta_m & \sigma\sin\Theta_m \\ -\sigma R_m \sin\Theta_m & \sigma R_m \cos\Theta_m \end{vmatrix} = \sigma^2 R_m$$
Consequently,
$$p(R_1, \Theta_1) = \frac{R_1}{2\pi} \exp\left[-\frac{1}{2}\left(R_1^2 + \frac{2\mathcal{E}_s}{N_0}\right)\right] I_0\left(\sqrt{\frac{2\mathcal{E}_s}{N_0}}\; R_1\right)$$
$$p(R_m, \Theta_m) = \frac{R_m}{2\pi} \exp\left(-\tfrac{1}{2} R_m^2\right), \qquad m = 2, 3, \ldots, M$$
Finally, by averaging $p(R_m, \Theta_m)$ over $\Theta_m$, the factor of $2\pi$ is eliminated. Thus, we find that $R_1$ has a Rice probability distribution and $R_m$, $m = 2, 3, \ldots, M$, are each Rayleigh-distributed.
The probability of a correct decision is simply the probability that $R_1 > R_2$, and $R_1 > R_3$, ..., and $R_1 > R_M$. Hence,
$$P_c = P(R_2 < R_1, R_3 < R_1, \ldots, R_M < R_1) = \int_0^\infty P(R_2 < R_1, R_3 < R_1, \ldots, R_M < R_1 \mid R_1 = x)\, p_{R_1}(x)\, dx$$
Because the random variables $R_m$, $m = 2, 3, \ldots, M$, are statistically independent and identically distributed, the joint probability conditioned on $R_1$ factors into a product of $M - 1$ identical terms:
$$P_c = \int_0^\infty \left[P(R_2 < R_1 \mid R_1 = x)\right]^{M-1} p_{R_1}(x)\, dx \tag{5.4-42}$$
where, from Equation 2.1-129,
$$P(R_2 < R_1 \mid R_1 = x) = \int_0^x p_{R_2}(r_2)\, dr_2 = 1 - e^{-x^2/2}$$
The $(M-1)$th power may be expressed, via the binomial expansion, as
$$\left(1 - e^{-x^2/2}\right)^{M-1} = \sum_{n=0}^{M-1} (-1)^n \binom{M-1}{n} e^{-n x^2/2}$$
Substitution of this result into Equation 5.4-42 and integration over x yield the probability of a correct decision as
$$P_c = \sum_{n=0}^{M-1} (-1)^n \binom{M-1}{n} \frac{1}{n+1} \exp\left[-\frac{n\,\mathcal{E}_s}{(n+1)\,N_0}\right]$$
where $\mathcal{E}_s/N_0$ is the SNR per symbol. The probability of a symbol error, $P_M = 1 - P_c$, becomes
$$P_M = \sum_{n=1}^{M-1} (-1)^{n+1} \binom{M-1}{n} \frac{1}{n+1} \exp\left[-\frac{n k\,\mathcal{E}_b}{(n+1)\,N_0}\right]$$
where $\mathcal{E}_b/N_0$ is the SNR per bit and $k = \log_2 M$.
For binary orthogonal signals ($M = 2$), this expression reduces to the simple form
$$P_2 = \tfrac{1}{2}\, e^{-\mathcal{E}_b / 2 N_0}$$
For $M > 2$, we may compute the probability of a bit error by making use of the relationship
$$P_b = \frac{2^{k-1}}{2^k - 1}\, P_M$$
which was established in Section 5.2. The figure shows the bit-error probability as a function of the SNR per bit for $M = 2, 4, 8, 16$, and 32.
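These error-probability expressions can be evaluated directly (a minimal sketch; the function names are ours, not the text's). For $M = 2$ the sum collapses to $\tfrac{1}{2}e^{-\gamma_b/2}$, which serves as a built-in check:

```python
import math

def symbol_error_mfsk_noncoherent(M, gamma_b):
    """Symbol error probability P_M of noncoherent (envelope) detection of
    M orthogonal FSK signals; gamma_b = Eb/N0 (linear), Es = k*Eb with
    k = log2(M).  Implements the alternating binomial sum."""
    k = math.log2(M)
    return sum((-1) ** (n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n * k * gamma_b / (n + 1))
               for n in range(1, M))

def bit_error_mfsk_noncoherent(M, gamma_b):
    """Bit error probability via P_b = 2^(k-1)/(2^k - 1) * P_M."""
    k = int(math.log2(M))
    return 2 ** (k - 1) / (2 ** k - 1) * symbol_error_mfsk_noncoherent(M, gamma_b)

gamma = 10 ** (10 / 10)   # 10 dB SNR per bit
p2 = symbol_error_mfsk_noncoherent(2, gamma)
print(p2, bit_error_mfsk_noncoherent(4, gamma))
```

At a fixed SNR per bit, the bit-error probability falls as M grows, in line with the discussion that follows.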
Just as in the case of coherent detection of M-ary orthogonal signals, we observe that for any given bit-error probability, the required SNR per bit decreases as M increases. It will be shown in Chapter 7 that, in the limit as $M \to \infty$, the probability of a bit error $P_b$ can be made arbitrarily small provided that the SNR per bit is greater than the Shannon limit of $-1.6$ dB. The cost of increasing M is the bandwidth required to transmit the signals. For M-ary FSK, the frequency separation between adjacent frequencies is $\Delta f = 1/T$ for signal orthogonality, so the bandwidth required for the M signals is $W = M\,\Delta f = M/T$.
The bit rate is $R = k/T$, where $k = \log_2 M$. Therefore, the bit rate-to-bandwidth ratio is
$$\frac{R}{W} = \frac{\log_2 M}{M}$$
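A small tabulation of this ratio (our own illustration) makes the bandwidth penalty of large M concrete:

```python
import math

# Bit-rate-to-bandwidth ratio R/W = log2(M)/M for orthogonal M-ary FSK
# with noncoherent detection (Delta f = 1/T, so W = M/T).
efficiency = {M: math.log2(M) / M for M in (2, 4, 8, 16, 32)}
print(efficiency)
```

The ratio peaks at M = 2 and M = 4 (both 0.5) and then shrinks, mirroring the trade of power efficiency for bandwidth described above.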
5.4.4 Probability of Error for Envelope Detection of Correlated Binary Signals
In this section, we consider the performance of the envelope detector for binary, equal-energy correlated signals. When the two signals are correlated, the input to the detector consists of the complex-valued random variables given by Equation 5.4-10. We assume that the detector bases its decision on the envelopes $|r_1|$ and $|r_2|$, which are correlated (statistically dependent). The marginal PDFs of $R_1 = |r_1|$ and $R_2 = |r_2|$ are Ricean distributed and may be expressed as
$$p(R_m) = \begin{cases} \dfrac{R_m}{2\mathcal{E}_b N_0} \exp\left(-\dfrac{R_m^2 + \nu_m^2}{4\mathcal{E}_b N_0}\right) I_0\left(\dfrac{R_m \nu_m}{2\mathcal{E}_b N_0}\right) & (R_m \ge 0) \\[4pt] 0 & (R_m < 0) \end{cases}$$
for $m = 1, 2$, where $\nu_1 = 2\mathcal{E}_b$ and $\nu_2 = 2\mathcal{E}_b|\rho|$, based on the assumption that signal s1(t) was transmitted.
Since $R_1$ and $R_2$ are statistically dependent as a consequence of the nonorthogonality of the signals, the probability of error may be obtained by evaluating the double integral
$$P_b = P(R_2 > R_1) = \int_0^\infty \int_{x_1}^\infty p(x_1, x_2)\, dx_2\, dx_1$$
where $p(x_1, x_2)$ is the joint PDF of the envelopes $R_1$ and $R_2$. This approach was first used by Helstrom (1955), who determined the joint PDF of $R_1$ and $R_2$ and evaluated the double integral. An alternative approach is based on the observation that the probability of error may also be expressed as
$$P_b = P(R_2 > R_1) = P(R_2^2 > R_1^2) = P(R_2^2 - R_1^2 > 0)$$
But $R_2^2 - R_1^2$ is a special case of a general quadratic form in complex-valued Gaussian random variables, treated later in Appendix B. For the special case under consideration, the derivation yields the error probability in the form
$$P_b = Q_1(a, b) - \tfrac{1}{2}\, e^{-(a^2 + b^2)/2}\, I_0(ab)$$
where
$$a = \sqrt{\frac{\mathcal{E}_b}{2 N_0}\left(1 - \sqrt{1 - |\rho|^2}\right)}, \qquad b = \sqrt{\frac{\mathcal{E}_b}{2 N_0}\left(1 + \sqrt{1 - |\rho|^2}\right)}$$
$Q_1(a, b)$ is the Marcum Q function defined in Equation 2.1-123 and $I_0(x)$ is the modified Bessel function of order zero.
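This expression can be checked numerically (a sketch, not a library implementation: $I_0$ is summed from its power series and $Q_1(a,b)$ is integrated from its defining integral, both our own devices for this illustration). For $\rho = 0$ the result should reduce to $\tfrac{1}{2}e^{-\gamma_b/2}$, and for $|\rho| = 1$ to $\tfrac{1}{2}$:

```python
import math

def bessel_i0(x):
    """Modified Bessel function of order zero via its power series."""
    term, total, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x / (2 * k)) ** 2
        total += term
    return total

def marcum_q1(a, b, steps=20000, span=10.0):
    """Q1(a,b) = int_b^inf x exp(-(x^2+a^2)/2) I0(a x) dx, evaluated by the
    midpoint rule over [b, b+span]; the tail beyond is negligible."""
    h = span / steps
    total = 0.0
    for i in range(steps):
        x = b + (i + 0.5) * h
        total += x * math.exp(-(x * x + a * a) / 2) * bessel_i0(a * x) * h
    return total

def pb_correlated_noncoherent(gamma_b, rho_mag):
    """Error probability of envelope detection of correlated binary
    signals; gamma_b = Eb/N0 (linear), rho_mag = |rho|."""
    root = math.sqrt(1 - rho_mag ** 2)
    a = math.sqrt(gamma_b / 2 * (1 - root))
    b = math.sqrt(gamma_b / 2 * (1 + root))
    return marcum_q1(a, b) - 0.5 * math.exp(-(a * a + b * b) / 2) * bessel_i0(a * b)

gamma = 2.0
p_orth = pb_correlated_noncoherent(gamma, 0.0)   # should equal 0.5*exp(-gamma/2)
p_degenerate = pb_correlated_noncoherent(gamma, 1.0)
print(p_orth, p_degenerate)
```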
The error probability $P_b$ is illustrated in the figure for several values of $|\rho|$ (Figure: Probability of error for noncoherent detection of binary FSK). $P_b$ is minimized when $\rho = 0$; that is, when the signals are orthogonal.
For this case, $a = 0$, $b = \sqrt{\mathcal{E}_b/N_0}$, and the expression for $P_b$ reduces to
$$P_b = Q_1\left(0, \sqrt{\frac{\mathcal{E}_b}{N_0}}\right) - \tfrac{1}{2}\, e^{-\mathcal{E}_b / 2 N_0}$$
From the definition of $Q_1(a, b)$ in Equation 2.1-123, it follows that
$$Q_1\left(0, \sqrt{\frac{\mathcal{E}_b}{N_0}}\right) = e^{-\mathcal{E}_b / 2 N_0}$$
Substitution of these relations yields the desired result given previously in Equation 5.4-47. On the other hand, when $|\rho| = 1$, the error probability becomes $P_b = 1/2$, as expected.
5.5 Performance Analysis for Wireline and Radio Communication Systems
In the communication of digital signals through an AWGN channel, we have observed that the performance of the communication system, measured in terms of the probability of error, depends solely on the received SNR $\mathcal{E}_b/N_0$, where $\mathcal{E}_b$ is the transmitted energy per bit and $N_0/2$ is the power spectral density of the additive noise. The additive noise ultimately limits the performance of the communication system. Another factor that affects the performance of a communication system is channel attenuation. All physical channels, including wirelines and radio channels, are lossy.
The signal is attenuated as it travels through the channel. The simple mathematical model for the attenuation shown in the following figure may be used for the channel (Figure: Mathematical model of a channel with attenuation and additive noise). If the transmitted signal is s(t), the received signal, with $0 \le \alpha \le 1$, is
$$r(t) = \alpha\, s(t) + n(t)$$
If the energy in the transmitted signal is $\mathcal{E}_b$, the energy in the received signal is $\alpha^2\mathcal{E}_b$. Consequently, the received signal has an SNR of $\alpha^2\mathcal{E}_b/N_0$. The effect of signal attenuation is thus to reduce the energy in the received signal and to render the communication system more vulnerable to additive noise. In analog communication systems, amplifiers called repeaters are used to periodically boost the signal strength in transmission through the channel. However, each amplifier also boosts the noise in the system. In contrast, digital communication systems allow us to detect and regenerate a clean (noise-free) signal in a transmission channel. Such devices, called regenerative repeaters, are frequently used in wireline and fiber-optic communication channels.
5.5.1 Regenerative Repeaters
The front end of each regenerative repeater consists of a demodulator/detector that demodulates and detects the transmitted digital information sequence sent by the preceding repeater. Once detected, the sequence is passed to the transmitter side of the repeater, which maps the sequence into signal waveforms that are transmitted to the next repeater. Because a noise-free signal is regenerated at each repeater, the additive noise does not accumulate. However, when errors occur in the detector of a repeater, the errors are propagated forward to the following repeaters in the channel.
To evaluate the effect of errors on the performance of the overall system, suppose that the modulation is binary PAM, so that the probability of a bit error for one hop is
$$P_b = Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
Since errors occur with low probability, the overall probability of error over K regenerative repeaters may be approximated as
$$P_b = 1 - P_c = 1 - \left[1 - Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\right]^K = 1 - \left[1 - K\, Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right) + O\!\left(Q^2\!\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)\right)\right] \approx K\, Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
In contrast, the use of K analog repeaters in the channel reduces the received SNR by a factor of K, and hence the bit error probability is
$$P_b \approx Q\left(\sqrt{\frac{2\mathcal{E}_b}{K N_0}}\right)$$
For the same error rate performance, the use of regenerative repeaters therefore results in a significant saving in transmitter power compared with analog repeaters. Hence, in digital communication systems, regenerative repeaters are preferable. However, in wireline telephone channels that are used to transmit both analog and digital signals, analog repeaters are generally employed.
Example 5.5-1: A binary digital communication system transmits data over a wireline channel of length 1000 km. Repeaters are used every 10 km to offset the effect of channel attenuation, so the number of repeaters in the system is K = 100. Let us determine the $\mathcal{E}_b/N_0$ required to achieve a probability of bit error of $10^{-5}$ if (a) analog repeaters are employed, and (b) regenerative repeaters are employed. If regenerative repeaters are used, we solve
$$10^{-5} = 100\, Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right) \quad\Longrightarrow\quad 10^{-7} = Q\left(\sqrt{\frac{2\mathcal{E}_b}{N_0}}\right)$$
which gives $\mathcal{E}_b/N_0 \approx 11.3$ dB.
Example 5.5-1 (cont.): If analog repeaters are used, the required $\mathcal{E}_b/N_0$ is obtained from
$$10^{-5} = Q\left(\sqrt{\frac{2\mathcal{E}_b}{100 N_0}}\right)$$
which yields $\mathcal{E}_b/N_0 \approx 29.6$ dB. The difference in the required SNR is about 18.3 dB, i.e., the analog-repeater system requires approximately 70 times the transmitter power of the digital communication system.
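The two SNR figures in this example can be reproduced with a numerical inverse of the Q function (a sketch; the bisection-based inverse is our own device, not from the text):

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qfunc_inv(p, lo=0.0, hi=40.0):
    """Invert Q(x) = p by bisection; Q is monotonically decreasing."""
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if qfunc(mid) > p else (lo, mid)
    return (lo + hi) / 2

K = 100
# Regenerative: 1e-5 = K * Q(sqrt(2 Eb/N0))  ->  Q(.) = 1e-7 per hop.
snr_regen = qfunc_inv(1e-5 / K) ** 2 / 2
# Analog: 1e-5 = Q(sqrt(2 Eb/(K N0)))  ->  Eb/N0 is K times larger.
snr_analog = K * qfunc_inv(1e-5) ** 2 / 2

regen_db = 10 * math.log10(snr_regen)
analog_db = 10 * math.log10(snr_analog)
print(round(regen_db, 1), round(analog_db, 1))  # approximately 11.3 and 29.6
```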
5.5.2 Link Budget Analysis in Radio Communication System
In the design of radio communication systems that transmit over line-of-sight microwave channels and satellite channels, the system designer must specify the size of the transmit and receive antennas, the transmitted power, and the SNR required to achieve a given level of performance at some desired data rate. Consider a transmit antenna that radiates isotropically in free space at a power level of $P_T$ watts (Figure: Isotropically radiating antenna).
The power density at a distance d from the antenna is $P_T/4\pi d^2$ W/m². If the transmitting antenna has some directivity in a particular direction, the power density in that direction is increased by a factor called the antenna gain, denoted by $G_T$. In that case, the power density at distance d is $P_T G_T/4\pi d^2$ W/m². The product $P_T G_T$ is usually called the effective radiated power (ERP or EIRP), which is basically the radiated power relative to an isotropic antenna, for which $G_T = 1$. A receiving antenna pointed in the direction of the radiated power gathers a portion of the power that is proportional to its cross-sectional area.
The received power extracted by the antenna may be expressed as
$$P_R = \frac{P_T G_T A_R}{4\pi d^2}$$
where $A_R$ is the effective area of the antenna. From electromagnetic field theory, we obtain the basic relationship between the gain $G_R$ of an antenna and its effective area as
$$A_R = \frac{G_R \lambda^2}{4\pi} \quad \text{m}^2$$
where $\lambda = c/f$ is the wavelength of the transmitted signal, c is the speed of light ($3 \times 10^8$ m/s), and f is the frequency of the transmitted signal.
If we substitute for $A_R$ into the expression for $P_R$, we obtain the received power in the form
$$P_R = P_T G_T G_R \left(\frac{\lambda}{4\pi d}\right)^2$$
The factor
$$L_s = \left(\frac{\lambda}{4\pi d}\right)^2$$
is called the free-space path loss. If other losses, such as atmospheric losses, are encountered in the transmission of the signal, they may be accounted for by introducing an additional loss factor $L_a$. Therefore, the received power may be written in general as
$$P_R = P_T G_T G_R L_s L_a$$
The important characteristics of an antenna are its gain and its effective area. These generally depend on the wavelength of the radiated power and the physical dimensions of the antenna. For example, a parabolic (dish) antenna of diameter D has an effective area
$$A_R = \frac{G_R \lambda^2}{4\pi} = \eta\, \frac{\pi D^2}{4}$$
where $\pi D^2/4$ is the physical area and $\eta$ is the illumination efficiency factor, which typically falls in the range $0.5 \le \eta \le 0.6$. Hence, the antenna gain for a parabolic antenna of diameter D is
$$G_R = \eta \left(\frac{\pi D}{\lambda}\right)^2$$
As a second example, a horn antenna of physical area A has an efficiency factor of 0.8, an effective area of $A_R = 0.8A$, and an antenna gain of
$$G_R = \frac{10 A}{\lambda^2}$$
Another parameter related to the gain (directivity) of an antenna is its beamwidth, which we denote as $\Theta_B$ and which is illustrated graphically in the following figure (Figure: Antenna beamwidth and pattern).
The $-3$ dB beamwidth of a parabolic antenna is approximately
$$\Theta_B = 70\, (\lambda/D) \text{ degrees}$$
so that $G_T$ is inversely proportional to $\Theta_B^2$. That is, a decrease of the beamwidth by a factor of 2, which is obtained by doubling the diameter D, increases the antenna gain by a factor of 4 (6 dB). Based on the general relationship for the received signal power, $P_R = P_T G_T G_R L_s L_a$, the system designer can compute $P_R$ from a specification of the antenna gains and the distance between the transmitter and the receiver. Such computations are usually done on a decibel basis, so that
$$(P_R)_{\mathrm{dB}} = (P_T)_{\mathrm{dB}} + (G_T)_{\mathrm{dB}} + (G_R)_{\mathrm{dB}} + (L_s)_{\mathrm{dB}} + (L_a)_{\mathrm{dB}}$$
Example 5.5-2: Suppose that we have a satellite in geosynchronous orbit that radiates 100 W of power, i.e., 20 dB above 1 W (20 dBW). The transmit antenna has a gain of 17 dB, so that the EIRP = 37 dBW. Also, suppose that each earth station employs a 3-m parabolic antenna and that the downlink operates at a frequency of 4 GHz. The illumination efficiency factor is $\eta = 0.5$. Substituting these numbers into $G_R = \eta(\pi D/\lambda)^2$, we obtain an antenna gain of 39 dB. The free-space path loss is 195.6 dB, i.e., $(L_s)_{\mathrm{dB}} = -195.6$ dB.
Example 5.5-2 (cont.): No other losses are assumed. Therefore, the received signal power is
$$(P_R)_{\mathrm{dB}} = 20 + 17 + 39 - 195.6 = -119.6 \text{ dBW}$$
or, equivalently,
$$P_R = 1.1 \times 10^{-12} \text{ W}$$
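The quoted numbers can be reproduced as follows (a sketch; the slide does not state the satellite distance, so $d = 3.6 \times 10^7$ m, a typical geosynchronous range, is an assumption chosen here because it reproduces the quoted 195.6 dB loss):

```python
import math

c = 3e8          # speed of light, m/s
f = 4e9          # downlink frequency, Hz
lam = c / f      # wavelength, 0.075 m
d = 3.6e7        # satellite distance in meters (assumed, not in the text)

# Free-space path loss Ls = (lambda / 4 pi d)^2, expressed in dB.
Ls_dB = -20 * math.log10(4 * math.pi * d / lam)
# Gain of a 3-m parabolic dish with illumination efficiency 0.5.
GR_dB = 10 * math.log10(0.5 * (math.pi * 3 / lam) ** 2)
# PT = 20 dBW, GT = 17 dB, no other losses.
PR_dB = 20 + 17 + GR_dB + Ls_dB

print(round(Ls_dB, 1), round(GR_dB, 1), round(PR_dB, 1))
```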
To complete the link budget computation, we must consider the effect of the additive noise at the receiver front end. Thermal noise that arises at the receiver front end has a relatively flat power spectral density up to about $10^{12}$ Hz, and is given as
$$N_0 = k_B T_0 \quad \text{W/Hz}$$
where $k_B$ is Boltzmann's constant ($1.38 \times 10^{-23}$ W·s/K) and $T_0$ is the noise temperature in kelvin. The total noise power in the signal bandwidth W is $N_0 W$. The performance of the digital communication system is specified by the $\mathcal{E}_b/N_0$ required to keep the error rate performance below some given value.
Since
$$\frac{\mathcal{E}_b}{N_0} = \frac{T_b P_R}{N_0} = \frac{1}{R}\,\frac{P_R}{N_0}$$
it follows that
$$\frac{P_R}{N_0} = R \left(\frac{\mathcal{E}_b}{N_0}\right)_{\mathrm{req}}$$
where $(\mathcal{E}_b/N_0)_{\mathrm{req}}$ is the required SNR per bit. Hence, if we know $P_R/N_0$ and the required SNR per bit, we can determine the maximum data rate that is possible.
Example 5.5-3: For the link considered in Example 5.5-2, the received signal power is $P_R = 1.1 \times 10^{-12}$ W ($-119.6$ dBW). Now, suppose the receiver front end has a noise temperature of 300 K, which is typical for a receiver in the 4-GHz range. Then $N_0 = 4.1 \times 10^{-21}$ W/Hz or, equivalently, $-203.9$ dBW/Hz. Therefore,
$$\frac{P_R}{N_0} = -119.6 + 203.9 = 84.3 \text{ dB-Hz}$$
Example 5.5-3 (cont.): If the required SNR per bit is 10 dB, then, from Equation 5.5-16, the available rate is
$$R_{\mathrm{dB}} = 84.3 - 10 = 74.3 \text{ dB (with respect to 1 bit/s)}$$
This corresponds to a rate of 26.9 megabits/s, which is equivalent to about 420 PCM channels, each operating at 64,000 bits/s.
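Continuing the numeric sketch for the noise and rate computation ($k_B$ and $T_0$ as given in the example; small rounding differences from the slide's 84.3 dB-Hz and 26.9 Mbit/s are expected, since the slide rounds $N_0$ to $-203.9$ dBW/Hz):

```python
import math

kB = 1.38e-23                  # Boltzmann's constant, W-s/K
T0 = 300.0                     # receiver noise temperature, K
N0 = kB * T0                   # thermal noise density, about 4.14e-21 W/Hz
N0_dB = 10 * math.log10(N0)    # about -203.8 dBW/Hz

PR_dB = -119.6                 # received power from Example 5.5-2, dBW
CN0_dB = PR_dB - N0_dB         # PR/N0 in dB-Hz
R_dB = CN0_dB - 10             # subtract the required 10 dB Eb/N0
R = 10 ** (R_dB / 10)          # achievable data rate, bits/s

print(round(CN0_dB, 1), round(R / 1e6, 1))
```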
It is good practice to introduce some safety margin, which we shall call the link margin $M_{\mathrm{dB}}$, into the above computations for the capacity of the communication link. Typically, this may be selected as $M_{\mathrm{dB}} = 6$ dB. Then the link budget computation for the link capacity may be expressed in the simple form
$$R_{\mathrm{dB}} = \left(\frac{P_R}{N_0}\right)_{\mathrm{dB\text{-}Hz}} - \left(\frac{\mathcal{E}_b}{N_0}\right)_{\mathrm{req}} - M_{\mathrm{dB}} = (P_T)_{\mathrm{dBW}} + (G_T)_{\mathrm{dB}} + (G_R)_{\mathrm{dB}} + (L_a)_{\mathrm{dB}} + (L_s)_{\mathrm{dB}} - (N_0)_{\mathrm{dBW/Hz}} - \left(\frac{\mathcal{E}_b}{N_0}\right)_{\mathrm{req}} - M_{\mathrm{dB}}$$