
1

Baseband describes signals and systems whose range of frequencies extends
from close to 0 hertz up to a cut-off frequency, a maximum bandwidth, or the
highest signal frequency.

A bandpass filter, by contrast, allows signals between two specific
frequencies to pass and discriminates against signals at other frequencies.
The cutoff frequencies, f1 and f2, are the frequencies at which the output
signal power falls to half of its level at f0, the center frequency of the
filter. The value f2 - f1 is called the filter bandwidth.
Baseband Modulation
An information-bearing signal must conform to the limits of its channel

• Generally, modulation is a two-step process
  • baseband: shaping the spectrum of the input bits to fit in a limited spectrum

• The most common baseband modulation is Pulse Amplitude Modulation (PAM)
  • data amplitude-modulates a sequence of time translates of a basic pulse
  • PAM is a linear form of modulation: easy to equalize, BW is the pulse BW
  • Typically the baseband data modulate in-phase [cos] and quadrature [sin]
    data streams onto the carrier passband

• Special cases of modulated PAM include (see the sketch after this list)
  • phase shift keying (PSK)
  • quadrature amplitude modulation (QAM)
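
As a rough illustration of the PAM idea above, the sketch below maps bits onto
2-PAM amplitudes and shapes each symbol with a rectangular pulse. The pulse
shape, oversampling factor, and bit pattern are illustrative assumptions, not
values taken from these slides.

    import numpy as np

    def pam_baseband(bits, samples_per_symbol=8):
        """Minimal 2-PAM baseband modulator: bits -> amplitudes -> pulse train.

        A rectangular pulse is used purely for illustration; practical systems
        use band-limited pulses (e.g. raised cosine) to control the spectrum.
        """
        amplitudes = 2 * np.asarray(bits) - 1            # 0 -> -1, 1 -> +1
        pulse = np.ones(samples_per_symbol)              # basic pulse p(t)
        impulses = np.zeros(len(bits) * samples_per_symbol)
        impulses[::samples_per_symbol] = amplitudes      # time translates of p(t)
        return np.convolve(impulses, pulse)[:len(impulses)]

    if __name__ == "__main__":
        tx = pam_baseband([1, 0, 1, 1, 0])
        print(tx.reshape(5, 8))                          # one row per symbol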
7
8
Baseband Demodulation/Detection
• In the case of baseband signaling, the received signal is already in
  pulse-like form. Why, then, is a demodulator required?
• Arriving baseband pulses are not ideal pulse shapes, each one occupying its
  own symbol interval.
• The channel (as well as any filtering at the transmitter) causes
  intersymbol interference (ISI).
• Channel noise is another cause of bit errors.

9
Effect of Channel

Figure 1.16 (a) Ideal pulse. (b) Magnitude spectrum of the ideal pulse.

10
Figure 1.17 Three examples of filtering an ideal pulse. (a) Example 1:
Good-fidelity output. (b) Example 2: Good-recognition output. (c) Example 3:
Poor-recognition output.

11
Effect of Noise
[Figure: three time-domain plots (amplitude -2 to 2, time 0 to 20) illustrating the effect of additive noise on a received pulse waveform]

12
3.1.2 Demodulation and Detection
[Figure 3.1 block diagram: the transmitted waveform passes through the channel (AWGN) to give the received waveform; the "demodulate & sample" stage comprises frequency down-conversion (for bandpass signals), a receiving filter, and an optional equalizing filter (compensation for channel-induced ISI), sampled at t = T; the "detect" stage applies a threshold comparison to produce the message symbol]

Figure 3.1: Two basic steps in the demodulation/detection of digital signals

The digital receiver performs two basic functions:
• Demodulation, to recover a waveform to be sampled at t = nT.
• Detection, the decision-making process of selecting the digital symbol
  represented by that sample.

13
3.2 Detection of Binary Signal in Gaussian Noise

• For any binary channel, the transmitted signal over a symbol interval
  (0, T) is:

    s_i(t) = \begin{cases} s_1(t), & 0 \le t \le T, & \text{for a binary 1} \\
                           s_2(t), & 0 \le t \le T, & \text{for a binary 0} \end{cases}

• The received signal r(t), degraded by noise n(t) and possibly degraded by
  the impulse response of the channel h_c(t), is

    r(t) = s_i(t) * h_c(t) + n(t), \qquad i = 1, 2        (3.1)

  where n(t) is assumed to be a zero-mean AWGN process.
• For an ideal distortionless channel, where h_c(t) is an impulse function
  and convolution with h_c(t) produces no degradation, r(t) can be
  represented as:

    r(t) = s_i(t) + n(t), \qquad i = 1, 2, \quad 0 \le t \le T        (3.2)

14
3.2 Detection of Binary Signal in Gaussian Noise

• The recovery of the signal at the receiver consists of two parts
  • Filter
    • Reduces the effect of noise (as well as Tx-induced ISI)
    • The output of the filter is sampled at t = T. This reduces the received
      signal to a single variable z(T), called the test statistic
  • Detector (or decision circuit)
    • Compares z(T) to some threshold level \gamma_0, i.e.,

        z(T) \underset{H_2}{\overset{H_1}{\gtrless}} \gamma_0

      where H_1 and H_2 are the two possible binary hypotheses

15
Receiver Functionality
The recovery of the signal at the receiver consists of two parts:
1. Waveform-to-sample transformation (blue block)
   • Demodulator followed by a sampler
   • At the end of each symbol duration T, the predetection point yields a
     sample z(T), called the test statistic:

       z(T) = a_i(T) + n_0(T), \qquad i = 1, 2        (3.3)

     where a_i(T) is the desired signal component and n_0(T) is the noise
     component
2. Detection of the symbol
   • Assume that the input noise is a Gaussian random process and the
     receiving filter is linear:

       p(n_0) = \frac{1}{\sigma_0 \sqrt{2\pi}}
                \exp\!\left[-\frac{1}{2}\left(\frac{n_0}{\sigma_0}\right)^2\right]        (3.4)

16
• Then the output is another Gaussian random process:

    p(z \mid s_1) = \frac{1}{\sigma_0 \sqrt{2\pi}}
                    \exp\!\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]

    p(z \mid s_2) = \frac{1}{\sigma_0 \sqrt{2\pi}}
                    \exp\!\left[-\frac{1}{2}\left(\frac{z - a_2}{\sigma_0}\right)^2\right]

  where \sigma_0^2 is the noise variance
• The ratio of instantaneous signal power to average noise power, (S/N)_T,
  at time t = T, out of the sampler is:

    \left(\frac{S}{N}\right)_T = \frac{a_i^2}{\sigma_0^2}        (3.45)

• We need to achieve the maximum (S/N)_T (a numerical sketch of these
  relations follows below)
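
A quick numerical check of the relations above; the values a1 = 1, a2 = -1,
and sigma0 = 0.5 are assumed for illustration only, not taken from the slides.

    import numpy as np

    rng = np.random.default_rng(0)
    a1, a2, sigma0 = 1.0, -1.0, 0.5        # assumed example values

    # z(T) = a_i(T) + n_0(T): Gaussian samples around each signal component
    z_given_s1 = a1 + sigma0 * rng.standard_normal(100_000)
    z_given_s2 = a2 + sigma0 * rng.standard_normal(100_000)

    print("mean, std given s1:", z_given_s1.mean(), z_given_s1.std())   # ~ a1, sigma0
    print("mean, std given s2:", z_given_s2.mean(), z_given_s2.std())   # ~ a2, sigma0
    print("(S/N)_T for s1    :", a1**2 / sigma0**2)                     # eq. (3.45)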

17
3.2.2 The Matched Filter

• Objective: to maximize (S/N)_T
• Express the signal a_i(t) at the filter output in terms of the filter
  transfer function H(f) (the inverse Fourier transform of the product
  H(f)S(f)):

    a_i(t) = \int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j2\pi f t}\, df        (3.46)

  where S(f) is the Fourier transform of the input signal s(t)
• The output noise power can be expressed as:

    \sigma_0^2 = \frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df        (3.47)

• Expressing (S/N)_T as:

    \left(\frac{S}{N}\right)_T =
      \frac{\left|\int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j2\pi f T}\, df\right|^2}
           {\frac{N_0}{2} \int_{-\infty}^{\infty} |H(f)|^2\, df}        (3.48)

18
• Now, according to Schwarz's inequality:

    \left|\int_{-\infty}^{\infty} f_1(x)\, f_2(x)\, dx\right|^2 \le
    \int_{-\infty}^{\infty} |f_1(x)|^2\, dx \int_{-\infty}^{\infty} |f_2(x)|^2\, dx        (3.49)

• Equality holds if f_1(x) = k f_2^*(x), where k is an arbitrary constant and
  * indicates the complex conjugate
• Associate H(f) with f_1(x) and S(f) e^{j2\pi fT} with f_2(x) to get:

    \left|\int_{-\infty}^{\infty} H(f)\, S(f)\, e^{j2\pi fT}\, df\right|^2 \le
    \int_{-\infty}^{\infty} |H(f)|^2\, df \int_{-\infty}^{\infty} |S(f)|^2\, df        (3.50)

• Substitute into eq. (3.48) to yield:

    \left(\frac{S}{N}\right)_T \le \frac{2}{N_0} \int_{-\infty}^{\infty} |S(f)|^2\, df        (3.51)

19
• Or \max\left(\frac{S}{N}\right)_T = \frac{2E}{N_0}, where the energy E of
  the input signal s(t) is:

    E = \int_{-\infty}^{\infty} |S(f)|^2\, df

• Thus (S/N)_T depends on the input signal energy and the power spectral
  density of the noise, and NOT on the particular shape of the waveform
• Equality for \max\left(\frac{S}{N}\right)_T = \frac{2E}{N_0} holds for the
  optimum filter transfer function H_0(f) such that:

    H(f) = H_0(f) = k\, S^*(f)\, e^{-j2\pi fT}        (3.54)

    h(t) = \mathcal{F}^{-1}\!\left\{ k\, S^*(f)\, e^{-j2\pi fT} \right\}        (3.55)

• For real-valued s(t):

    h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}        (3.56)

20
 The impulse response of a filter producing maximum output signal-
to-noise ratio is the mirror image of message signal s(t), delayed by
symbol time duration T.
 The filter designed is called a MATCHED FILTER

    h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}
 Defined as:
a linear filter designed to provide the maximum
signal-to-noise power ratio at its output for a given
transmitted symbol waveform
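
A minimal numerical sketch of the matched-filter property above, assuming a
short triangular pulse s(t) and unit-variance AWGN (both illustrative
choices): the impulse response is the time-reversed, delayed pulse
h(t) = s(T - t), and sampling the filter output at t = T collects the full
signal energy.

    import numpy as np

    rng = np.random.default_rng(1)
    s = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # assumed pulse s(t), duration T
    h = s[::-1]                                          # matched filter: h(t) = s(T - t)

    r = s + rng.standard_normal(s.size)                  # received r(t) = s(t) + n(t)
    z = np.convolve(r, h)                                # filter output z(t) = r * h
    z_T = z[s.size - 1]                                  # sample at t = T

    print("z(T)                :", z_T)
    print("signal energy sum s^2:", np.dot(s, s))        # expected value of z(T)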

21
3.2.3 Correlation realization of Matched filter
 A filter that is matched to the waveform s(t), has an impulse
response
    h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}
 h(t) is a delayed version of the mirror image of the original signal
waveform

Figure 3.7: the signal waveform, the mirror image of the signal waveform, and
the impulse response of the matched filter

22
• This is a causal system
  • Recall that a system is causal if, before an excitation is applied at
    time t = T, the response is zero for -\infty < t < T
• The signal waveform at the output of the matched filter is

    z(t) = r(t) * h(t) = \int_0^t r(\tau)\, h(t - \tau)\, d\tau        (3.57)

• Substituting h(t) yields:

    z(t) = \int_0^t r(\tau)\, s[T - (t - \tau)]\, d\tau
         = \int_0^t r(\tau)\, s(T - t + \tau)\, d\tau        (3.58)

• When t = T,

    z(T) = \int_0^T r(\tau)\, s(\tau)\, d\tau        (3.59)

23
• The functions of the correlator and the matched filter are the same
• Compare (a) and (b)
• From (a):

    z(t) = \int_0^t r(\tau)\, s(\tau)\, d\tau

    z(t)\big|_{t=T} = z(T) = \int_0^T r(\tau)\, s(\tau)\, d\tau

24
• From (b):

    z'(t) = r(t) * h(t) = \int_{-\infty}^{\infty} r(\tau)\, h(t - \tau)\, d\tau
          = \int_0^t r(\tau)\, h(t - \tau)\, d\tau

  But

    h(t) = s(T - t) \;\Rightarrow\; h(t - \tau) = s[T - (t - \tau)] = s(T - t + \tau)

    \Rightarrow\; z'(t) = \int_0^t r(\tau)\, s(T - t + \tau)\, d\tau

• At the sampling instant t = T, we have

    z'(t)\big|_{t=T} = z'(T) = \int_0^T r(\tau)\, s(T - T + \tau)\, d\tau
                     = \int_0^T r(\tau)\, s(\tau)\, d\tau

• This is the same result as obtained in (a):

    z'(T) = \int_0^T r(\tau)\, s(\tau)\, d\tau

• Hence

    z(T) = z'(T)
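
The equality z(T) = z'(T) can be checked numerically. In the sketch below the
waveform, noise level, and length are arbitrary illustrative choices; the
correlator sums r(t)s(t) over (0, T), while the matched filter convolves r(t)
with s(T - t) and samples the output at t = T.

    import numpy as np

    rng = np.random.default_rng(2)
    s = rng.standard_normal(16)                 # arbitrary symbol waveform s(t) on (0, T)
    r = s + 0.3 * rng.standard_normal(16)       # received waveform r(t)

    z_corr = np.sum(r * s)                      # correlator: z(T) = sum of r*s over (0, T)
    h = s[::-1]                                 # matched filter h(t) = s(T - t)
    z_mf = np.convolve(r, h)[len(s) - 1]        # filter output sampled at t = T

    print(z_corr, z_mf)                         # identical up to floating-point error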

25
Detection
• The matched filter reduces the received signal to a single variable z(T),
  after which the detection of the symbol is carried out
• The concept of the maximum likelihood detector is based on statistical
  decision theory
• It allows us to
  • formulate the decision rule that operates on the data
  • optimize the detection criterion

    z(T) \underset{H_2}{\overset{H_1}{\gtrless}} \gamma_0

26
Probabilities Review

• P[s1], P[s2]: a priori probabilities
  • These probabilities are known before transmission
• P[z]: probability of the received sample
• p(z|s1), p(z|s2): conditional pdf of the received signal z, conditioned on
  the signal class si
• P[s1|z], P[s2|z]: a posteriori probabilities
  • After examining the sample, we make a refinement of our previous knowledge
• P[s1|s2], P[s2|s1]: wrong decision (error)
• P[s1|s1], P[s2|s2]: correct decision

27
How to Choose the Threshold?
• Maximum likelihood ratio test and maximum a posteriori (MAP) criterion:

    if  p(s_1 \mid z) > p(s_2 \mid z)  \rightarrow  H_1
    if  p(s_2 \mid z) > p(s_1 \mid z)  \rightarrow  H_2

• Problem: the a posteriori probabilities are not known.
• Solution: use Bayes' theorem:

    p(s_i \mid z) = \frac{p(z \mid s_i)\, P(s_i)}{p(z)}

    \frac{p(z \mid s_1)\, P(s_1)}{p(z)}
    \underset{H_2}{\overset{H_1}{\gtrless}}
    \frac{p(z \mid s_2)\, P(s_2)}{p(z)}
    \;\Rightarrow\;
    p(z \mid s_1)\, P(s_1) \underset{H_2}{\overset{H_1}{\gtrless}} p(z \mid s_2)\, P(s_2)

28
• MAP criterion:

    L(z) = \frac{p(z \mid s_1)}{p(z \mid s_2)}
           \underset{H_2}{\overset{H_1}{\gtrless}} \frac{P(s_2)}{P(s_1)}
    \qquad \text{likelihood ratio test (LRT)}

• When the two signals s_1(t) and s_2(t) are equally likely, i.e., P(s_2) =
  P(s_1) = 0.5, the decision rule becomes

    L(z) = \frac{p(z \mid s_1)}{p(z \mid s_2)}
           \underset{H_2}{\overset{H_1}{\gtrless}} 1
    \qquad \text{maximum likelihood ratio test}

• This is known as the maximum likelihood ratio test because we are selecting
  the hypothesis that corresponds to the signal with the maximum likelihood
  (a small numerical sketch follows below)
• In terms of the Bayes criterion, it implies that the cost of both types of
  error is the same
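
A hedged sketch of this likelihood ratio test with Gaussian likelihoods; the
means a1, a2, the noise standard deviation sigma0, and the priors are assumed
example values, not parameters taken from the slides.

    import numpy as np

    def lrt_decision(z, a1=1.0, a2=-1.0, sigma0=0.5, p_s1=0.5, p_s2=0.5):
        """Decide H1 or H2 from the likelihood ratio L(z) = p(z|s1)/p(z|s2)."""
        def gaussian(x, a):
            return np.exp(-0.5 * ((x - a) / sigma0) ** 2) / (sigma0 * np.sqrt(2 * np.pi))

        L = gaussian(z, a1) / gaussian(z, a2)
        return "H1" if L > p_s2 / p_s1 else "H2"    # threshold is 1 for equal priors

    print(lrt_decision(0.3))    # -> H1 (sample closer to a1)
    print(lrt_decision(-0.1))   # -> H2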

29
• Substituting the pdfs:

    H_1:\; p(z \mid s_1) = \frac{1}{\sigma_0 \sqrt{2\pi}}
           \exp\!\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]

    H_2:\; p(z \mid s_2) = \frac{1}{\sigma_0 \sqrt{2\pi}}
           \exp\!\left[-\frac{1}{2}\left(\frac{z - a_2}{\sigma_0}\right)^2\right]

    L(z) = \frac{p(z \mid s_1)}{p(z \mid s_2)}
         = \frac{\exp\!\left[-\frac{1}{2}\left(\frac{z - a_1}{\sigma_0}\right)^2\right]}
                {\exp\!\left[-\frac{1}{2}\left(\frac{z - a_2}{\sigma_0}\right)^2\right]}
           \underset{H_2}{\overset{H_1}{\gtrless}} 1

30
• Hence:

    \exp\!\left[\frac{z(a_1 - a_2)}{\sigma_0^2} - \frac{a_1^2 - a_2^2}{2\sigma_0^2}\right]
    \underset{H_2}{\overset{H_1}{\gtrless}} 1

• Taking the log of both sides gives

    \ln\{L(z)\} = \frac{z(a_1 - a_2)}{\sigma_0^2} - \frac{a_1^2 - a_2^2}{2\sigma_0^2}
    \underset{H_2}{\overset{H_1}{\gtrless}} 0

    \frac{z(a_1 - a_2)}{\sigma_0^2}
    \underset{H_2}{\overset{H_1}{\gtrless}}
    \frac{a_1^2 - a_2^2}{2\sigma_0^2} = \frac{(a_1 + a_2)(a_1 - a_2)}{2\sigma_0^2}

31
• Hence

    z \underset{H_2}{\overset{H_1}{\gtrless}}
    \frac{\sigma_0^2}{(a_1 - a_2)} \cdot \frac{(a_1 + a_2)(a_1 - a_2)}{2\sigma_0^2}
    = \frac{a_1 + a_2}{2} = \gamma_0

  where \gamma_0 = (a_1 + a_2)/2 is the optimum threshold (this is the
  minimum error criterion)
• For antipodal signals, s_1(t) = -s_2(t), so a_1 = -a_2 and

    z(T) \underset{H_2}{\overset{H_1}{\gtrless}} 0

32
This means that if the received sample z(T) is positive, s1(t) was sent;
otherwise s2(t) was sent (a small decision-rule sketch follows below).
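
Equivalently, the decision can be implemented by comparing z(T) directly with
gamma_0 = (a1 + a2)/2; a minimal sketch with assumed values a1 = +1, a2 = -1
(so gamma_0 = 0, the antipodal case):

    def threshold_decision(z, a1=1.0, a2=-1.0):
        """Minimum-error decision: choose s1 if z(T) exceeds gamma_0 = (a1 + a2)/2."""
        gamma0 = (a1 + a2) / 2.0            # optimum threshold
        return "s1 sent" if z > gamma0 else "s2 sent"

    print(threshold_decision(0.4))          # positive sample -> s1 for antipodal signals
    print(threshold_decision(-0.2))         # negative sample -> s2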

33
Detection of Binary Signal in Gaussian Noise

The output of the filter, sampled at t = T, is a Gaussian random variable

34
Matched Filter and Correlation
 The impulse response of a filter producing maximum output
signal-to-noise ratio is the mirror image of message signal s(t),
delayed by symbol time duration T.
 The filter designed is called a MATCHED FILTER and is given
by:

    h(t) = \begin{cases} k\, s(T - t), & 0 \le t \le T \\ 0, & \text{elsewhere} \end{cases}

35
Bayes' Decision Criterion and Maximum
Likelihood Detector
• Hence

    z \underset{H_2}{\overset{H_1}{\gtrless}} \frac{a_1 + a_2}{2} = \gamma_0

  where \gamma_0 is the optimum threshold (minimum error criterion)
• For antipodal signals, s_1(t) = -s_2(t), so a_1 = -a_2 and

    z(T) \underset{H_2}{\overset{H_1}{\gtrless}} 0
36
Probability of Error
• An error occurs if
  • s1 is sent but H2 is decided (s2 is received):

      P(H_2 \mid s_1) = P(e \mid s_1) = \int_{-\infty}^{\gamma_0} p(z \mid s_1)\, dz

  • s2 is sent but H1 is decided (s1 is received):

      P(H_1 \mid s_2) = P(e \mid s_2) = \int_{\gamma_0}^{\infty} p(z \mid s_2)\, dz

• The total probability of error is the sum of these errors:

    P_B = \sum_{i=1}^{2} P(e, s_i) = P(e \mid s_1) P(s_1) + P(e \mid s_2) P(s_2)
        = P(H_2 \mid s_1) P(s_1) + P(H_1 \mid s_2) P(s_2)

37
• If the signals are equally probable:

    P_B = P(H_2 \mid s_1) P(s_1) + P(H_1 \mid s_2) P(s_2)
        = \frac{1}{2}\left[ P(H_2 \mid s_1) + P(H_1 \mid s_2) \right]
        = P(H_1 \mid s_2) \quad \text{(by symmetry)}

• Numerically, P_B is the area under the tail of either of the conditional
  distributions p(z \mid s_1) or p(z \mid s_2), and is given by:

    P_B = \int_{\gamma_0}^{\infty} P(H_1 \mid s_2)\, dz
        = \int_{\gamma_0}^{\infty} p(z \mid s_2)\, dz
        = \int_{\gamma_0}^{\infty} \frac{1}{\sigma_0 \sqrt{2\pi}}
          \exp\!\left[-\frac{1}{2}\left(\frac{z - a_2}{\sigma_0}\right)^2\right] dz

38
 1  1  z  a2  
2

PB   exp      dz
0
 0 2  2   0  
( z  a2 )
 u
0
 1  u2 
 
( a1  a2 )
2 0 2
exp    du
 2
 The above equation cannot be evaluated in closed form (Q-function)
 Hence,
 a1  a2 
PB  Q    equation B.18
 2 0 
1  z2 
Q( z )  exp   
z 2  2
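
The Q-function can be evaluated exactly through the complementary error
function, Q(z) = (1/2) erfc(z/sqrt(2)); the sketch below compares this with
the approximation above for an assumed set of values a1, a2, sigma0.

    import math

    def q_function(z):
        """Exact tail probability of a standard Gaussian: Q(z) = 0.5*erfc(z/sqrt(2))."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    def q_approx(z):
        """Approximation Q(z) ~ exp(-z^2/2) / (z*sqrt(2*pi)), useful for large z."""
        return math.exp(-z * z / 2) / (z * math.sqrt(2 * math.pi))

    a1, a2, sigma0 = 1.0, -1.0, 0.5          # assumed example values
    z = (a1 - a2) / (2 * sigma0)             # argument of Q in P_B = Q((a1-a2)/(2*sigma0))
    print("P_B exact :", q_function(z))
    print("P_B approx:", q_approx(z))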
39
Table for computing Q-functions

40
A Vector View of Signals and Noise

• An N-dimensional orthonormal space is characterized by N linearly
  independent basis functions {\psi_j(t)}, where:

    \int_0^T \psi_i(t)\, \psi_j(t)\, dt =
    \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}

• From a geometric point of view, each \psi_j(t) is mutually perpendicular to
  each of the other basis functions \psi_k(t) for j \ne k.

41
• Any set of M energy signals {s_i(t)} can be represented as a linear
  combination of N orthogonal basis functions, where N \le M:

    s_i(t) = \sum_{j=1}^{N} a_{ij}\, \psi_j(t), \qquad
    0 \le t \le T, \quad i = 1, 2, \ldots, M

  where:

    a_{ij} = \int_0^T s_i(t)\, \psi_j(t)\, dt, \qquad
    i = 1, 2, \ldots, M, \quad j = 1, 2, \ldots, N

42
• Therefore we can represent the set of M energy signals {s_i(t)} as vectors:

    \mathbf{s}_i = (a_{i1}, a_{i2}, \ldots, a_{iN}), \qquad i = 1, 2, \ldots, M

• Waveform energy:

    E_i = \int_0^T s_i^2(t)\, dt
        = \int_0^T \Big[\sum_{j=1}^{N} a_{ij}\, \psi_j(t)\Big]^2 dt
        = \sum_{j=1}^{N} a_{ij}^2

Figure: representing M = 3 signals with N = 2 orthonormal basis functions

43
Question 1: Why use orthonormal functions?
• In many situations N is much smaller than M, requiring fewer matched
  filters at the receiver.
• Euclidean distances are easy to calculate.
• Compact representation for both baseband and passband systems.

Question 2: How do we calculate the orthonormal functions?
• Gram-Schmidt orthogonalization procedure (a discrete-time sketch follows
  below).
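
A discrete-time sketch of the Gram-Schmidt procedure; sample vectors stand in
for the continuous waveforms, and the three example signals are arbitrary,
chosen so that M = 3 signals yield only N = 2 basis functions.

    import numpy as np

    def gram_schmidt(signals, tol=1e-10):
        """Return orthonormal basis vectors for the rows of `signals` (discrete s_i)."""
        basis = []
        for s in signals:
            residual = s - sum(np.dot(s, psi) * psi for psi in basis)
            norm = np.linalg.norm(residual)
            if norm > tol:                       # keep only linearly independent parts
                basis.append(residual / norm)
        return np.array(basis)

    signals = np.array([[1.0, 1.0, 0.0, 0.0],    # s1
                        [0.0, 0.0, 1.0, 1.0],    # s2
                        [1.0, 1.0, 1.0, 1.0]])   # s3 = s1 + s2 -> no new basis function
    psi = gram_schmidt(signals)                  # N = 2 basis functions for M = 3 signals
    a = signals @ psi.T                          # coefficients a_ij = <s_i, psi_j>
    print(psi)
    print(a)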

44
 Examples

45
 Examples (continued)

46
Generalized One Dimensional Signals

 One Dimensional Signal Constellation

47
 Binary Baseband Orthogonal Signals
 Binary Antipodal Signals

Binary Orthogonal Signals

48
Constellation Diagram
 Is a method of representing the symbol states of modulated
bandpass signals in terms of their amplitude and phase
 In other words, it is a geometric representation of signals
• There are three types of binary signals:
• Antipodal
  • Two signals are said to be antipodal if one signal is the negative of the
    other: s_1(t) = -s_0(t)
  • The signals have equal energy, with signal points on the real line

• ON-OFF
  • One-dimensional signals that are either ON or OFF, with signaling points
    falling on the real line

49
• With OOK, there are just 2 symbol states to map onto the constellation
  space:
  • a(t) = 0 (no carrier amplitude, giving a point at the origin)
  • a(t) = A cos ω_c t (giving a point on the positive horizontal axis at a
    distance A from the origin)

• Orthogonal
  • Requires a two-dimensional geometric representation, since there are two
    linearly independent functions s_1(t) and s_0(t)

50
• Typically, the horizontal axis is taken as a reference for symbols that are
  in phase with the carrier cos ω_c t, and the vertical axis represents the
  quadrature carrier component, sin ω_c t

Error Probability of Binary Signals

• Recall:

    P_B = Q\!\left(\frac{a_1 - a_0}{2\sigma_0}\right) \qquad \text{(equation B.18)}

  where we have replaced a_2 by a_0.

51
• To minimize P_B, we need to maximize:

    \frac{a_1 - a_0}{\sigma_0}
    \qquad \text{or} \qquad
    \frac{(a_1 - a_0)^2}{\sigma_0^2}

• We have

    \frac{(a_1 - a_0)^2}{\sigma_0^2} = \frac{E_d}{N_0/2} = \frac{2 E_d}{N_0}

• Therefore,

    \frac{a_1 - a_0}{2\sigma_0}
    = \frac{1}{2}\sqrt{\frac{(a_1 - a_0)^2}{\sigma_0^2}}
    = \frac{1}{2}\sqrt{\frac{2 E_d}{N_0}}
    = \sqrt{\frac{E_d}{2 N_0}}

52
• The probability of bit error is given by:

    P_B = Q\!\left(\sqrt{\frac{E_d}{2 N_0}}\right)        (3.63)

  where

    E_d = \int_0^T [s_1(t) - s_0(t)]^2\, dt
        = \int_0^T s_1^2(t)\, dt + \int_0^T s_0^2(t)\, dt
          - 2\int_0^T s_1(t)\, s_0(t)\, dt
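
A small sketch that evaluates E_d and the resulting P_B for a concrete pair
of waveforms; the antipodal pulse samples and the noise density N0 below are
assumed for illustration, with discrete sums standing in for the integrals.

    import math
    import numpy as np

    s1 = np.array([1.0, 1.0, 1.0, 1.0])      # assumed waveform for binary 1
    s0 = -s1                                  # antipodal waveform for binary 0
    N0 = 2.0                                  # assumed noise spectral density

    Ed = np.sum((s1 - s0) ** 2)               # energy of the difference signal
    PB = 0.5 * math.erfc(math.sqrt(Ed / (2 * N0)) / math.sqrt(2))   # Q(sqrt(Ed/2N0))
    print("Ed =", Ed, " PB =", PB)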

53
• The probability of bit error for antipodal signals:

    P_B = Q\!\left(\sqrt{\frac{2 E_b}{N_0}}\right)

• The probability of bit error for orthogonal signals:

    P_B = Q\!\left(\sqrt{\frac{E_b}{N_0}}\right)

• The probability of bit error for unipolar signals:

    P_B = Q\!\left(\sqrt{\frac{E_b}{2 N_0}}\right)

54
• Orthogonal signals require a factor-of-2 increase in energy compared to
  bipolar (antipodal) signals to achieve the same error performance
• Since 10 log10 2 = 3 dB, we say that bipolar signaling offers a 3 dB better
  performance than orthogonal

55
Comparing BER Performance

For E_b/N_0 = 10 dB (a ratio of 10):

    P_{B,\text{orthogonal}} = Q(\sqrt{10}) \approx 7.8 \times 10^{-4}
    P_{B,\text{antipodal}}  = Q(\sqrt{20}) \approx 3.9 \times 10^{-6}

• For the same received signal-to-noise ratio, antipodal signaling provides a
  lower bit error rate than orthogonal signaling
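
These figures follow from the two expressions on the previous slide evaluated
at Eb/N0 = 10 dB; a short check using Q(z) = (1/2) erfc(z/sqrt(2)):

    import math

    def Q(z):
        return 0.5 * math.erfc(z / math.sqrt(2))

    ebn0_db = 10.0
    ebn0 = 10 ** (ebn0_db / 10)                    # dB -> linear ratio

    print("orthogonal:", Q(math.sqrt(ebn0)))       # ~7.8e-4
    print("antipodal :", Q(math.sqrt(2 * ebn0)))   # ~3.9e-6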

56
Relation Between SNR (S/N) and Eb/N0
• In analog communication the figure of merit used is the average signal
  power to average noise power ratio, or SNR.

• In the previous few slides we have used the term Eb/N0 in the bit error
  calculations. How are the two related?

• Eb can be written as S·Tb, and N0 is N/W. So we have:

    \frac{E_b}{N_0} = \frac{S\, T_b}{N/W} = \frac{S}{N}\left(\frac{W}{R_b}\right)

• Thus Eb/N0 can be thought of as a normalized SNR (see the worked example
  below).

 Makes more sense when we have multi-level signaling.

 Reading: Page 117 and 118.
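
A worked example of the relation Eb/N0 = (S/N)(W/Rb) above; the SNR,
bandwidth, and bit rate are chosen purely for illustration.

    import math

    snr = 20.0           # assumed S/N (linear), about 13 dB
    W = 1.0e6            # assumed bandwidth, Hz
    Rb = 5.0e5           # assumed bit rate, bit/s

    ebn0 = snr * (W / Rb)                       # Eb/N0 = (S/N) * (W/Rb)
    print(ebn0, 10 * math.log10(ebn0), "dB")    # 40.0 -> ~16.02 dB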


57
