
ESS140 Digital Communications

www.s2.chalmers.se/undergraduate/courses/ess140

Lecture 1
- Course information
- Communication models: layered and Shannon
- Basic concepts and definitions of digital communications
- Short review of signals and systems
- Binary transmission over noisy channels
- Minimum-distance receiver

Announcements
- Those who want to take the course ESS195 Digital Kommunikation, F, please see Erik in the break
- Changes and updates are found on the web: www.s2.chalmers.se/undergraduate/courses/ess140

Erik Ström, Updated September 1, 2003


Course Material
- J. G. Proakis and M. Salehi, Communication Systems Engineering, Second Edition, Prentice-Hall
- The textbook is available at Cremona or DC
- Solutions to Selected Problems and Notes on Signals and Systems will be available soon at DC
- Project descriptions and additional handouts will be made available during the course at DC and on the web

Lectures and Exercises
- Not all material will be covered in the lectures. Please read the book!
- Exercises will consist of:
  - Short theory review
  - One demonstration solution
  - Problem solving in groups
- The problems in the exercises may not be on the same level as the exam problems



Projects
- The project will be carried out in groups of three students
- Sign up for groups on the list outside room 6323 before Friday
- You will be assigned an assistant for consultation
- Cooperation between groups is not allowed
- The MATLAB software can be bought at Cremona. Go to the following web page for details:
  http://licenser.adm.gu.se/cd/chalmers/CD_2002-2003/Matlab.htm

Project Reports
- The project should be documented in a report and defended at an oral exam
- Guidelines and deadlines for the report are available in the Course-PM
- Make sure to use the cover sheet with time report and feedback page


Project Grading
- The deadlines for reports are strict: late reports will not be graded
- The maximum score per project is 15 points:
  - Written report style and language: 2 points
  - Commented and reasonable results: 10 points
  - Oral exam: 3 points
- All projects must receive a score of 8 points to be approved
- All projects must be approved to pass the course
- The project score will be counted towards the final course grade

Project Grading, Cont'd
- The project score is multiplied by 3 (the number of group members) and divided among the group members
- The point distribution is determined by the group members themselves, but no member can get more than 17 points
- A group that cannot decide on a distribution has to see the examiner (Erik Ström) for arbitration
- A very uneven distribution indicates a group with problems. The examiner can decide to change the total score and the distribution in extreme cases.

Exam
- Focused on understanding rather than memorization
- Allowed material:
  - Any calculator
  - Proakis and Salehi
  - Beta
  - This year's lecture slides
  - Own handwritten notes. No photocopies, printouts, etc., are allowed
- Four problems with a total maximum score of 40

Grades
- The final grade is based on the points accumulated from the exam (max 40) and the projects

    Score: 0-39  40-59  60-79  80-
    Grade: Fail  3      4      5

- ECTS grades are assigned as:

    Score: 0-39  40-49  50-59  60-69  70-84  85-
    Grade: F     E      D      C      B      A


ISO Layered Communications Model

Shannon's Communication Model

[Figure: block diagram. Transmitter: source -> source encoder -> channel encoder -> modulator (bit streams between the blocks). The modulated waveform passes through the channel, where noise is added. Receiver: demodulator -> channel decoder -> source decoder -> sink.]


Lecture 2
- Implementation of the minimum distance receiver
  - Correlator receiver
  - Matched filter receiver
- Receiver for binary antipodal signaling
- Binary pulse amplitude modulation (PAM)
- Synchronization methods

[Figure: weekly timetable (Monday-Friday, 08-19) with the Digital Communications lecture and exercise slots: 08-10 in EC (weeks 2-7, group c) and EL42 (week 5, group c); 10-12 in VM (weeks 1-5, 7 and week 5); 13-15 in EL41-42 (weeks 2-7, groups a, b), HA4 (week 1, groups a, b), and EE, EF (week 5).]

Erik Ström, Updated September 4, 2003


Transmission over Noisy Channels

[Figure: m -> modulator -> s(t) -> (+) <- n(t) -> r(t) -> receiver -> m^]

  m = 1  =>  s(t) = s1(t)
  m = 2  =>  s(t) = s2(t)

Given the observation of r(t), decide which m was transmitted.
[n(t) is unknown, {s1(t), s2(t)} is known]

Distance Measure for Signals

The energy of a signal x(t) is defined as

  E_x = ∫ |x(t)|^2 dt

The length of a signal can be defined as

  ||x(t)|| = sqrt(E_x) = sqrt( ∫ |x(t)|^2 dt )

The distance between x(t) and y(t) can be defined as

  D(x(t), y(t)) = ||x(t) - y(t)|| = sqrt( ∫ |x(t) - y(t)|^2 dt )
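The definitions above can be checked numerically; the following is an illustrative sketch (in Python rather than the course's MATLAB), approximating the integrals by Riemann sums on a grid with spacing dt:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 1, dt)
x = np.ones_like(t)           # rectangular pulse on [0, 1), energy 1
y = -np.ones_like(t)          # its antipodal counterpart

def energy(x, dt):
    # E_x = integral of |x(t)|^2 dt, approximated by a Riemann sum
    return np.sum(np.abs(x) ** 2) * dt

def distance(x, y, dt):
    # D(x, y) = ||x - y|| = sqrt(energy of the difference signal)
    return np.sqrt(energy(x - y, dt))

Ex = energy(x, dt)            # ≈ 1
Dxy = distance(x, y, dt)      # ≈ 2: antipodal signals are 2*sqrt(E) apart
```

Note that the antipodal pair reaches the largest possible distance for a given energy, which is why antipodal signaling is used as the binary baseline on these slides.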


Minimum Distance Receiver

The received signal is

  r(t) = s(t) + n(t)

Compute the distances

  D1 = ||r(t) - s1(t)||
  D2 = ||r(t) - s2(t)||

Choose the signal alternative that is closest to r(t):

  m^ = 1 if D1 < D2
  m^ = 2 if D1 > D2

Binary Correlator Receiver

[Figure: r(t) is multiplied by s1*(t) and integrated, Re{ } is taken, and E1/2 is subtracted to form the decision variable for m = 1; a parallel branch with s2*(t) and E2/2 forms the decision variable for m = 2. The receiver picks the m with the largest decision variable.]
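A minimal sketch of the minimum-distance rule on sampled signals (Python for illustration; the rectangular signal pair and the noise level are assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)
s1 = np.ones_like(t)          # antipodal signal pair
s2 = -np.ones_like(t)

def min_distance_decide(r, s1, s2, dt):
    # D_m = ||r - s_m||; choose the closer alternative
    D1 = np.sqrt(np.sum((r - s1) ** 2) * dt)
    D2 = np.sqrt(np.sum((r - s2) ** 2) * dt)
    return 1 if D1 < D2 else 2

r = s1 + 0.5 * rng.standard_normal(t.size)   # m = 1 plus moderate noise
m_hat = min_distance_decide(r, s1, s2, dt)
```

Since the noise energy integrates to a value much smaller than the squared distance between the alternatives, the decision is almost always correct here.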


Binary Matched Filter Receiver

[Figure: r(t) is fed to two filters h1(t) = s1*(T0 - t) and h2(t) = s2*(T0 - t); each output y_m(t) is sampled at t = T0, Re{ } is taken, and E1/2 and E2/2 are subtracted. The receiver picks the m with the largest decision variable.]

MF Receiver for Antipodal Modulation

[Figure: r(t) -> h(t) = g(T0 - t) -> y(t), sampled at t = T0; b^ = sgn{y(T0)}.]

  s(t) = b g(t) = ±g(t)

T0 is chosen such that the matched filter is causal.


Pulse Amplitude Modulation

[Figure: bit sequence b[k] in {-1, +1} (here b[0] = 1, b[1] = b[2] = -1, b[3] = 1), pulse shape g(t) of duration T and amplitude A, and the transmitted waveform built from the shifted pulses b[0]g(t), b[1]g(t - T), b[2]g(t - 2T), b[3]g(t - 3T).]

  s(t) = sum_k b[k] g(t - kT)

Noiseless Matched Filter Output

[Figure: p(t) is a triangular pulse with peak A^2 T at t = T; the matched filter output y(t) is the superposition of shifted triangles b[k] p(t - kT) with peaks ±A^2 T.]

  y(t) = sum_k b[k] p(t - kT)

  sgn{y(kT + T)} = b[k]
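The noiseless chain above (PAM with a rectangular pulse, matched filtering, symbol-spaced sampling, sign detection) can be sketched in a few lines of Python; the discrete-time grid with 8 samples per symbol is an assumption for illustration:

```python
import numpy as np

sps = 8                                    # samples per symbol period T
b = np.array([1, -1, -1, 1])               # bit sequence b[k]
g = np.ones(sps)                           # rectangular pulse g(t)

# s(t) = sum_k b[k] g(t - kT): concatenate scaled pulses
s = np.concatenate([bk * g for bk in b])

# Matched filter g(T0 - t); for the rectangular pulse this is g reversed
y = np.convolve(s, g[::-1])
T0 = sps - 1                               # delay that makes the filter causal
samples = y[T0::sps][:len(b)]              # y(T0 + kT), peaks of size ±A^2 T
b_hat = np.sign(samples).astype(int)       # sgn{y(T0 + kT)} = b[k]
```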

MF Receiver for Binary Antipodal PAM

[Figure: r(t) -> h(t) = g(T0 - t) -> y(t), sampled at t = T0 + kT; b^[k] = sgn{y(T0 + kT)}.]

  s(t) = sum_k b[k] g(t - kT)

T0 is chosen such that the matched filter is causal.



Lecture 3
- Synchronization of PAM signals
- Signal constellation and signal space concepts
- Inner product, norm, distance, and bases for signal spaces
- Energy and distance computations from signal constellations
- Common signal constellation types: one-dimensional, orthogonal, biorthogonal, and simplex constellations

Review

The matched filter to s(t) has impulse response h(t) = s*(T0 - t), where T0 is the sampling time.

[Figure: r(t) -> h(t) -> y(t), sampled at t = T0 to give y(T0).]

  y(t) = r(t) * h(t) = R_rs(t - T0)

  R_rs(τ) = ∫ r(u + τ) s*(u) du  (crosscorrelation function)

  y(T0) = R_rs(0)  (crosscorrelation at lag zero)

We choose T0 such that the filter becomes causal.
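The identity y(T0) = R_rs(0) = <r, s> can be verified with a tiny discrete-time stand-in (Python, dt = 1 for simplicity; the example signals are assumptions):

```python
import numpy as np

s = np.array([1.0, 2.0, -1.0, 0.5])   # signal to match against
r = 2.0 * s                           # received signal (noiseless, scaled)

T0 = len(s) - 1                       # delay making h[n] = s[T0 - n] causal
h = s[::-1]                           # matched filter impulse response
y = np.convolve(r, h)                 # filter output
y_T0 = y[T0]                          # sample at t = T0
corr = np.dot(r, s)                   # direct crosscorrelation at lag 0
```

The sampled filter output y_T0 and the inner product corr coincide, which is exactly why the matched filter and the correlator receiver are equivalent.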


Erik Ström, Updated September 8, 2003


A binary PAM signal is formed from the bit stream b[k] in {-1, +1} and the pulse shape g(t) as

  s(t) = sum_k b[k] g(t - kT)

The noiseless output from the matched filter is

  y(t) = s(t) * g(T0 - t) = sum_k b[k] p(t - kT)

  p(t) = g(t) * g(T0 - t) = R_gg(t - T0)

  R_gg(τ) = ∫ g(u + τ) g*(u) du  (autocorrelation function)

[Figure: a known synchronization waveform s_s(t) with amplitudes ±A around Tsync and Tsync + Δ, and the corresponding matched-filter output with peaks proportional to A^2 T (up to 4A^2 T) whose location is used to estimate the timing.]


M-ary Modulation

[Figure: m -> modulator -> s(t) -> (+) <- n(t) -> r(t) -> receiver -> m^]

  m in {1, 2, ..., M},  s(t) in {s1(t), s2(t), ..., sM(t)}

Typically, we want to transmit an integer number of bits per symbol, say k bits/symbol, i.e., M = 2^k.

M-ary Matched Filter Receiver

[Figure: r(t) is fed to a bank of M filters h_m(t) = s_m*(T0 - t), each sampled at t = T0; Re{ } is taken and E_m/2 subtracted in branch m. The receiver picks the m with the largest decision variable.]

Important Properties of the Signal Set
- Size: M
- Energies: E1, E2, ..., EM
- Average energy: Es (= Eav in the book)
- Distances between signal alternatives:

    d_ij = ||s_i(t) - s_j(t)||

- Minimum distance:

    d_min = min_{i ≠ j} d_ij

- Dimensionality: N

Inner Product and Norm Properties

  <x(t), y(t)> = ∫ x(t) y*(t) dt  (crosscorrelation between x(t) and y(t))

  <a x(t), y(t)> = a <x(t), y(t)>
  <x(t), a y(t)> = a* <x(t), y(t)>
  <x(t) + y(t), z(t)> = <x(t), z(t)> + <y(t), z(t)>

  ||x(t)|| = sqrt(<x(t), x(t)>) = sqrt( ∫ |x(t)|^2 dt ) = sqrt(E_x)  ("length" of x(t))
  ||a x(t)|| = |a| ||x(t)||


Gram-Schmidt Procedure

The procedure is described on page 341 in the book. Given the signal set {s1(t), ..., sM(t)}, compute an orthonormal basis {φ1(t), ..., φN(t)}.

1) Define

     φ1(t) = s1(t) / sqrt(E1) = s1(t) / ||s1(t)||

2) For i = 2, ..., M compute

     d_i(t) = s_i(t) - sum_{k=1}^{i-1} <s_i(t), φ_k(t)> φ_k(t)

   If d_i(t) ≠ 0, let

     φ_i(t) = d_i(t) / ||d_i(t)||

   If d_i(t) = 0, do not assign any basis function.

3) Renumber the basis functions such that the basis is {φ1(t), ..., φN(t)}. This is only necessary if d_i(t) = 0 for some i in step 2.

Note that N ≤ M.
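The three steps above can be sketched numerically on sampled signals (a Python illustration; the example signals and the tolerance are assumptions). The linearly dependent third signal produces d_i(t) = 0 and therefore no new basis function, so N < M:

```python
import numpy as np

def gram_schmidt(S, dt, tol=1e-9):
    """Gram-Schmidt over sampled signals (rows of S); dt approximates the
    integrals in the inner products by Riemann sums."""
    basis = []
    for s in S:
        d = s.copy()
        for phi in basis:
            d = d - (np.sum(s * phi) * dt) * phi   # subtract <s, phi> phi
        E = np.sum(d ** 2) * dt                    # residual energy
        if E > tol:                                # d_i != 0: new basis fn
            basis.append(d / np.sqrt(E))
    return np.array(basis)                         # N <= M rows

dt = 0.01
t = np.arange(0, 1, dt)
s1 = np.ones_like(t)
s2 = t
s3 = 2 * s1 + 3 * s2          # dependent signal: contributes no basis fn
Phi = gram_schmidt(np.array([s1, s2, s3]), dt)
N = Phi.shape[0]              # dimensionality N = 2 here
```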

Pulse Amplitude Modulation

[Figure: one-dimensional constellation with points s1, s2, s3, s4 at -3A sqrt(Eg), -A sqrt(Eg), A sqrt(Eg), 3A sqrt(Eg) along φ(t).]

  s_m(t) = (2m - 1 - M) A g(t),   φ(t) = g(t) / sqrt(Eg)

  d_min = 2 A sqrt(Eg),   Es = (M^2 - 1) A^2 Eg / 3,   N = 1

Pulse Position Modulation

[Figure: orthogonal constellation: points s1, s2, s3 at distance A sqrt(Eg) from the origin along separate basis functions φ1(t), φ2(t), φ3(t); neighboring points are d_min = A sqrt(2 Eg) apart.]

  s_m(t) = A g(t - (m - 1) T/M)

  d_min = A sqrt(2 Eg),   N = M,   Es = A^2 Eg
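The PAM constellation quantities above are easy to check numerically; this Python sketch (M = 4 with A = Eg = 1 is an assumed example) compares the average energy of the points against the closed form Es = (M^2 - 1) A^2 Eg / 3:

```python
import numpy as np

M, A, Eg = 4, 1.0, 1.0
m = np.arange(1, M + 1)
s = (2 * m - 1 - M) * A * np.sqrt(Eg)     # 1-D points: -3A, -A, A, 3A

# Minimum distance over all distinct pairs
diffs = np.abs(np.subtract.outer(s, s))
d_min = np.min(diffs[~np.eye(M, dtype=bool)])   # = 2 A sqrt(Eg)

Es = np.mean(s ** 2)                             # average symbol energy
Es_formula = (M ** 2 - 1) * A ** 2 * Eg / 3      # closed form from the slide
```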

Lecture 4
- Statistical model of communications
- A priori and a posteriori probabilities
- Vector representation of white Gaussian noise
- MAP detection and probability of error
- ML detection

Review

The signal space is the linear vector space spanned by the signal alternatives:

  span{s1(t), ..., sM(t)} = { x(t) : x(t) = sum_{k=1}^{M} a_k s_k(t), a_k scalar }

The inner product and norm are defined as

  <x(t), y(t)> = ∫ x(t) y*(t) dt

  ||x(t)|| = sqrt(<x(t), x(t)>) = sqrt(E_x)

Erik Ström, Updated September 15, 2003


The set {φ1(t), ..., φN(t)} forms an orthonormal basis for the signal space if

1) <φn(t), φk(t)> = ∫ φn(t) φk*(t) dt = 1 for n = k (unit norm), and 0 for n ≠ k (orthogonal)

2) s_m(t) = sum_{k=1}^{N} s_mk φ_k(t)   (synthesis equation)

   s_mk = <s_m(t), φ_k(t)>              (analysis equation)

The number of basis functions, N, is called the dimension of the space. It can be shown that N ≤ M.

In general, if s_n(t) and s_m(t) are in the signal space, then

  <s_n(t), s_m(t)> = <s_n, s_m> = sum_{k=1}^{N} s_nk s_mk*

which implies that

  ||s_m(t)||^2 = ||s_m||^2 = E_m  (energy of s_m(t))

  ||s_m(t) - s_n(t)|| = ||s_m - s_n||  (distance between s_n(t) and s_m(t))
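The energy relation ||s_m(t)||^2 = ||s_m||^2 can be checked numerically; in this Python sketch the two orthonormal functions (a constant and a full-period sine) and the coefficients (3, 4) are assumed examples:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 1, dt)
phi1 = np.ones_like(t)                     # unit energy on [0, 1)
phi2 = np.sqrt(2.0) * np.sin(2 * np.pi * t)  # unit energy, orthogonal to phi1

s = 3 * phi1 + 4 * phi2                    # coefficient vector s = (3, 4)
E_signal = np.sum(s ** 2) * dt             # waveform energy, ~ 25
E_vector = 3 ** 2 + 4 ** 2                 # ||s||^2 = 25
```

The waveform energy equals the squared length of the coefficient vector, so all energy and distance computations can be done directly in the constellation.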


Inner Product Implementations

Matched filter implementation:

[Figure: x(t) -> h(t) = φ_k*(T0 - t), sampled at t = T0, gives x_k = <x, φ_k>.]

Correlator implementation:

[Figure: x(t) is multiplied by φ_k*(t) and integrated to give x_k = <x, φ_k>.]

Conversion Blocks

[Figure: a vector-to-signal block forms s_m(t) = sum_k s_mk φ_k(t) from the vector s_m = [s_m1 ... s_mN]^T, and a signal-to-vector block recovers the coefficients s_m1, ..., s_mN by inner products with the basis functions φ_1(t), ..., φ_N(t).]


[Figure: the waveform channel m -> modulator -> s(t) -> (+ n(t)) -> r(t) -> demodulator -> m^ is redrawn as m -> (symbol to vector) -> s -> (signal synthesis, channel, signal-to-vector conversion) -> r -> (decision rule) -> m^, and finally as the equivalent vector channel m -> s -> (+ n) -> r -> decision rule -> m^.]

Fundamental Problem

Given the observation of the received vector r, choose the decision m^ such that

  Pc = Pr{m^ = m}

is maximized, or equivalently, such that

  Pe = Pr{m^ ≠ m}

is minimized.


Additive White Gaussian Noise (AWGN) Channels

The noise is assumed to be a white Gaussian process with power spectral density S_n(f) = N0/2:

  E{n(t)} = 0,   R_n(τ) = E{n(t) n(t + τ)} = (N0/2) δ(τ),   S_n(f) = F{R_n(τ)} = N0/2

The elements of the noise vector are independent identically distributed (i.i.d.) zero-mean Gaussian random variables:

  n_k = <n(t), φ_k(t)> ~ N(0, N0/2),   f_nk(x) = (1/sqrt(π N0)) exp(-x^2 / N0)

and for n = [n1 n2 ... nN]^T,

  f_n(x) = prod_{k=1}^{N} (1/sqrt(π N0)) exp(-x_k^2 / N0) = (π N0)^(-N/2) exp(-||x||^2 / N0)
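A quick simulation check of the noise-vector statistics (Python sketch; N0 = 2 and the sample size are assumptions). Each coefficient n_k ~ N(0, N0/2), so the empirical variance should be close to N0/2:

```python
import numpy as np

rng = np.random.default_rng(1)
N0 = 2.0
# Draw coefficients directly from N(0, N0/2), as obtained by projecting
# white Gaussian noise onto an orthonormal basis
n = rng.normal(0.0, np.sqrt(N0 / 2), size=100_000)
var_hat = np.var(n)           # should be close to N0/2 = 1.0
```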


MAP Receiver for AWGN Channels

[Figure: r(t) is converted to the vector r; for each k = 1, ..., M the receiver forms the decision variable Re{<r, s_k>} + (N0/2) ln f_m[k] - E_k/2 and picks the m^ with the largest value.]



Lecture 5
- MAP and ML detection
- Decision regions
- Probability of symbol error
- Union bound

Review
- A common channel model is the additive white Gaussian noise (AWGN) channel
- The noise is assumed to be a white Gaussian process with power spectral density S_n(f) = N0/2
- The noise vector samples are i.i.d. Gaussian random variables with zero mean and variance N0/2
- The noise vector n = [n1 ... nN]^T has pdf

    f_n(x) = (π N0)^(-N/2) exp(-||x||^2 / N0)

Erik Ström, Updated September 22, 2003


[Figure: surface and contour plots of the two-dimensional Gaussian pdf f_n([x1 x2]^T).]

Optimum Detection

The optimum detection (minimum probability of error) is the maximum a posteriori (MAP) decision:

  m^_MAP = argmax_{k in {1,...,M}} f_{m|r}(k | x)                        (general channels)

         = argmax_{k in {1,...,M}} f_{r|m}(x | k) f_m[k]

         = argmax_{k in {1,...,M}} f_n(x - s_k) f_m[k]                   (additive noise channels)

         = argmax_{k in {1,...,M}} Re{<x, s_k>} + (N0/2) ln f_m[k] - E_k/2   (AWGN)


For equally likely transmitted symbols, f_m[k] = 1/M, and MAP becomes maximum likelihood (ML) detection:

  m^_ML = argmax_{k in {1,...,M}} f_{r|m}(x | k)            (general channels)

        = argmax_{k in {1,...,M}} f_n(x - s_k)              (additive noise channels)

        = argmax_{k in {1,...,M}} Re{<x, s_k>} - E_k/2      (AWGN channels)

        = argmin_{k in {1,...,M}} ||x - s_k||               (minimum distance receiver)

We may choose to use ML detection for simplicity (even if f_m[k] ≠ 1/M), since the ML receiver does not need to know f_m[k] or N0.

MAP Receiver for AWGN Channels

[Figure: the MAP receiver bank as before: for each k, form Re{<r, s_k>} + (N0/2) ln f_m[k] - E_k/2 and pick the largest.]
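The last line of the ML derivation, the minimum-distance rule, takes one line on a vector channel; this Python sketch uses an assumed QPSK-like four-point constellation:

```python
import numpy as np

# Equally likely symbols: ML detection = pick the closest constellation point
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])

def ml_detect(x, S):
    # argmin_k ||x - s_k|| over the rows of S
    return int(np.argmin(np.linalg.norm(S - x, axis=1)))

x = np.array([0.9, 0.2])      # received vector near s_1 = [1, 0]
k_hat = ml_detect(x, S)       # -> 0 (index of the closest point)
```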

Decision Regions

[Figure: a four-point constellation s1, ..., s4 with decision regions Z1, ..., Z4 in the (φ1, φ2)-plane; the received vector r = x with m = 2 falls in the region of s2.]

The Q-function is the tail probability of a standard Gaussian:

  Q(x) = ∫_x^∞ (1/sqrt(2π)) e^(-t^2/2) dt

[Figure: the standard Gaussian pdf with the tail area Q(x0) shaded, and a plot of Q(x) = 1 - Φ(x) versus x.]

If Y ~ N(mY, σY^2), then

  Pr{Y > x} = Pr{ (Y - mY)/σY > (x - mY)/σY } = Q( (x - mY)/σY )

where (Y - mY)/σY ~ N(0, 1).
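For evaluating the error-probability expressions on these slides, Q(x) is conveniently expressed through the complementary error function via the standard identity Q(x) = (1/2) erfc(x / sqrt(2)); a minimal Python sketch:

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

p0 = Q(0.0)    # 0.5: half the standard Gaussian lies above its mean

def tail_prob(x, mY, sigmaY):
    # Pr{Y > x} for Y ~ N(mY, sigmaY^2), using the slide's normalization
    return Q((x - mY) / sigmaY)
```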


Symbol Error Probability

[Figure: Pe versus 10 log10(Es/N0) for the four-point constellation; the exact symbol error probability lies between the curves Q(sqrt(Es/N0))^2 and 2Q(sqrt(Es/N0)).]

Union Bound

[Figure: the same constellation with decision regions Z1, ..., Z4 and minimum distance d_min = d12. The exact error event {r in Z2 | m = 1}, with probability I1, is contained in the pairwise event {r closer to s2 than s1 | m = 1}, with probability I2, so I1 ≤ I2.]


[Figure: the pair s1, s2 separated by d12 after a change of coordinates: the ψ1-axis points along s1 - s2, so the pairwise decision depends only on the noise component along that axis.]

Noise vector after change of coordinates: n = [n1 n2]^T, with n1 and n2 independent N(0, N0/2).



Lecture 6
- Union bound
- Linear filtering of random processes
- IQ-modulation
- Power spectral density of PAM and IQ-modulated signals
- Lecture next week (Monday October 6) is moved to Thursday October 2, 10-12 in VM

Review
- Any receiver can be defined by its decision regions {Z1, Z2, ..., ZM}, defined such that

    if r in Zk, then m^ = k

- A maximum likelihood (ML) receiver picks the signal vector that is closest to the observed vector.
- For the maximum likelihood (ML) receiver, the decision regions are

    Zk = { x : ||x - s_k|| < ||x - s_n||, for all n ≠ k }

Erik Ström, Updated September 29, 2003


The probability of symbol error can be computed as

  Pe = sum_{k=1}^{M} Pr{m^ ≠ k | m = k} Pr{m = k} = sum_{k=1}^{M} Pe|k f_m[k]

It is often convenient to use the following relations:

  Pc = Pr{m^ = m} = 1 - Pe

  Pc|k = Pr{m^ = k | m = k} = 1 - Pe|k

If Y ~ N(mY, σY^2), then

  Pr{Y > x} = Q( (x - mY)/σY ),   Q(x) = ∫_x^∞ (1/sqrt(2π)) e^(-t^2/2) dt

Conditioned on m = 1, we have f_{r|m}(x | 1) = f_n(x - s1).

[Figure: pdf contour lines around s1 in the four-point constellation, with the integration area Z2 that produces Pr{r in Z2 | m = 1} = I1 ≤ Pr{r closer to s2 than s1 | m = 1} = I2.]


[Figure: after a change of coordinates with ψ1 along s1 - s2, the pairwise error event depends only on n1; n = [n1 n2]^T with n1, n2 independent N(0, N0/2).]

Union Bound for AWGN Channels and ML Detection

The probability of symbol error can be bounded as

  Pe = sum_{k=1}^{M} f_m[k] Pe|k ≤ P_UB = sum_{k=1}^{M} f_m[k] sum_{l=1, l≠k}^{M} Q( sqrt( d_kl^2 / (2 N0) ) )

Since

  Q( sqrt( d_kl^2 / (2 N0) ) ) ≤ Q( sqrt( d_min^2 / (2 N0) ) ),  for k ≠ l

we have

  P_UB ≤ sum_{k=1}^{M} f_m[k] sum_{l=1, l≠k}^{M} Q( sqrt( d_min^2 / (2 N0) ) ) = (M - 1) Q( sqrt( d_min^2 / (2 N0) ) )
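Both forms of the union bound are straightforward to evaluate; this Python sketch assumes an illustrative QPSK-like constellation and N0 = 0.1:

```python
import math
import itertools

def Q(x):
    # Gaussian tail probability
    return 0.5 * math.erfc(x / math.sqrt(2))

S = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]   # signal points
N0 = 0.1
M = len(S)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

pairs = [(k, l) for k, l in itertools.product(range(M), repeat=2) if k != l]

# Full union bound with f_m[k] = 1/M (equally likely symbols)
P_UB = sum(Q(dist(S[k], S[l]) / math.sqrt(2 * N0)) for k, l in pairs) / M

# Looser d_min-only form: (M - 1) Q(d_min / sqrt(2 N0))
d_min = min(dist(S[k], S[l]) for k, l in pairs)
P_loose = (M - 1) * Q(d_min / math.sqrt(2 * N0))
```

As the derivation shows, P_UB never exceeds the d_min-only bound.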

Linear AWGN Channels

[Figure: m -> modulator -> s(t) -> channel filter C(f) -> (+ n(t)) -> r(t) -> demodulator -> m^. The channel filter has impulse response c(t) and frequency response C(f).]

Wide-Sense Stationary Processes

If X(t) is a wide-sense stationary (WSS) process, then

  Mean value function:       m_X(t) = E[X(t)] = m_X

  Autocorrelation function:  R_X(t, t + τ) = E[X(t) X(t + τ)] = R_X(τ)

  Power spectral density:    S_X(f) = F[R_X(τ)]

  Power:                     P_X = E[X^2(t)] = R_X(0) = ∫ S_X(f) df


Linear Filtering of WSS Processes

If Y(t) = X(t) * h(t), then Y(t) is also WSS:

  S_Y(f) = S_X(f) |H(f)|^2,   H(f) = F[h(t)]

  R_Y(τ) = R_X(τ) * R_h(τ)

Carrier Modulation

[Figure: the in-phase signal X_I(t) is multiplied by sqrt(2) cos(2π f_c t) and the quadrature signal X_Q(t) by -sqrt(2) sin(2π f_c t); at the receiver, multiplication by the same carriers followed by lowpass (LP) filtering recovers X_I(t) and X_Q(t).]

Subscript I stands for in-phase, Q for quadrature-phase.


Lecture 7
- IQ carrier modulation
- Phase-shift keying (PSK)
- Impact of carrier phase estimation errors on coherent receivers
- No lecture on Monday
- Project report deadline 10:00 on Monday
  - Put reports in the box outside the office of Eva Axelsson, room 6330 in the ED-building
- Project 2 is on the web and at DC (deadline Oct. 29)

Review

For a general M-ary signal constellation, ML detection, and equally likely transmitted symbols, the symbol error probability can be bounded as

  Pe ≤ (1/M) sum_{k=1}^{M} sum_{l=1, l≠k}^{M} Q( sqrt( d_kl^2 / (2 N0) ) ) ≤ (M - 1) Q( sqrt( d_min^2 / (2 N0) ) )

  d_kl = ||s_k - s_l||,   d_min = min_{k ≠ l} d_kl

The bound is tight for large signal-to-noise ratios (when Pe is small).
Erik Ström, Updated October 2, 2003


Suppose s1 is transmitted; then r = s1 + n.

[Figure: s1 and s2 separated by d12 in the (ψ1, ψ2)-plane; after a change of coordinates, n = [n1 n2]^T with n1, n2 i.i.d. N(0, N0/2), and only the component along s1 - s2 matters.]

  Pr{r closer to s2 than s1 | s1 transmitted} = Q( (||s1 - s2||/2) / sqrt(N0/2) ) = Q( d12 / sqrt(2 N0) )

The power spectral density of the PAM signal s(t) = sum_k a_k g_T(t - kT) is

  S_s(f) = (1/T) S_a(fT) |G_T(f)|^2,   G_T(f) = F[g_T(t)]

  S_a(F) = sum_{m=-∞}^{∞} R_a[m] e^(-j 2π F m),   R_a[m] = E[a_k a*_{k+m}]

For equally likely and independent symbols,

  S_a(F) = E[|a_k|^2] = σ_a^2


IQ Carrier Modulation

[Figure: X_I(t) and X_Q(t) are carried on sqrt(2) cos(2π f_c t) and -sqrt(2) sin(2π f_c t), passed through the channel C(f), and recovered by mixing with the same carriers and lowpass filtering.]

Separation of X_I(t) and X_Q(t) is possible if f_c >> B (B = bandwidth of the baseband signals).

IQ-modulator

[Figure: bits are mapped to symbols and then to a two-dimensional constellation vector s = [s_I s_Q]^T; s_I and s_Q are PAM-modulated with pulse g_T(t)/sqrt(E_g), multiplied by sqrt(2) cos(2π f_c t) and -sqrt(2) sin(2π f_c t) respectively, and summed to form s(t). The two branches are PAM with ψ_I(t) and ψ_Q(t).]


The signals

  ψ_I(t) = sqrt(2/E_g) g_T(t) cos(2π f_c t)

  ψ_Q(t) = -sqrt(2/E_g) g_T(t) sin(2π f_c t)

  E_g = ∫ |g_T(t)|^2 dt

are orthonormal if

  g_T(t) = 1 for 0 ≤ t < T, and f_c = k/(2T) for any integer k

or very nearly orthonormal if f_c >> bandwidth of g_T(t).

IQ-demodulator

[Figure: r(t) is multiplied by sqrt(2) cos(2π f_c t) and -sqrt(2) sin(2π f_c t); each branch is filtered by the matched filter g_T(T0 - t)/sqrt(E_g) (matched to ψ_I(t) and ψ_Q(t), respectively) and sampled at t = T0 + nT, giving r_I and r_Q. The result is r = s + n = [s_I + n_I, s_Q + n_Q]^T.]

PSK Signal Constellation

[Figure: M = 8 PSK constellation: points s1, ..., s8 equally spaced on a circle of radius A = sqrt(Es) in the (ψ_I, ψ_Q)-plane, with decision regions that are angular sectors of width 2π/M.]

  d_min = 2 sqrt(Es) sin(π/M)

[Figure: the error event for s7 is contained in the union of crossing the two nearest decision boundaries; the two boundary-crossing events have probabilities p1 and p2.]

  p1 = Pr{r beyond boundary 1 | s = s7} = Q( sqrt( d_min^2 / (2 N0) ) )

  p2 = Pr{r beyond boundary 2 | s = s7} = Q( sqrt( d_min^2 / (2 N0) ) )

  Pe = Pr{r not in Z7 | s = s7} ≤ p1 + p2 = 2 Q( sqrt( d_min^2 / (2 N0) ) ) = 2 Q( sqrt(2 Es / N0) sin(π/M) )
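The two forms of the PSK error bound above are algebraically identical, since d_min / sqrt(2 N0) = sqrt(2 Es / N0) sin(π/M); a Python sketch with assumed values M = 8, Es = 1, N0 = 0.05:

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

M, Es, N0 = 8, 1.0, 0.05
d_min = 2 * math.sqrt(Es) * math.sin(math.pi / M)

# Two equivalent expressions for the nearest-neighbour bound on Pe
Pe_dmin = 2 * Q(math.sqrt(d_min ** 2 / (2 * N0)))
Pe_sin = 2 * Q(math.sqrt(2 * Es / N0) * math.sin(math.pi / M))
```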

Impact of Phase Estimation Errors

[Figure: a QPSK constellation s1, ..., s4 before and after rotation by the phase estimation error φ_e = π/8: the nominal points at phase π/4, 3π/4, ... rotate toward the decision boundaries (e.g., from π/4 to 3π/8), reducing the margin against noise.]



Lecture 8
- Differentially encoded PSK and Differential PSK (DPSK)
- Filtering of pulse amplitude modulated (PAM) signals
- Intersymbol interference (ISI)
- Sampling theorem and Nyquist criterion for ISI-free transmission
- Raised-cosine pulses
- QAM and FSK will not be explicitly covered at the lectures; please read the book (Sections 7.6.2, 7.3.3, 7.5.6, 7.6.5 and 7.4.1, 7.5.7) and the handout Notes on QAM, which is found on the web

In-Phase Quadrature Phase Modulator

[Figure: bits are mapped to symbols and then to a two-dimensional constellation vector s = [s_I s_Q]^T; the two branches are PAM with

  ψ_I(t) = sqrt(2/E_g) g_T(t) cos(2π f_c t)

  ψ_Q(t) = -sqrt(2/E_g) g_T(t) sin(2π f_c t)

and the branch outputs are summed to form s(t).]

ψ_I(t) and ψ_Q(t) are orthonormal if f_c >> bandwidth of g_T(t).
Erik Ström, Updated October 14, 2003


IQ-demodulator

[Figure: r(t) is mixed with sqrt(2) cos(2π f_c t) and -sqrt(2) sin(2π f_c t); each branch is filtered by the matched filter g_T(T0 - t)/sqrt(E_g) and sampled at t = T0 + nT, giving r_I and r_Q, which feed the decision rule. The result is r = s + n = [s_I + n_I, s_Q + n_Q]^T.]

Noise vector elements n_I and n_Q are i.i.d. N(0, N0/2).

Carrier Phase Estimation

In general, the received signal can be written as

  r(t) = s_I sqrt(2/E_g) g_T(t) cos(2π f_c t + φ) - s_Q sqrt(2/E_g) g_T(t) sin(2π f_c t + φ) + n(t)

The carrier phase φ is estimated by the receiver. The estimate φ^ is used to phase-shift the cosine and sine carriers in the demodulator. The transmitted vectors appear at the receiver to have been rotated by the estimation error φ_e = φ - φ^. Of course, φ_e is unknown to the receiver.

Differentially encoded PSK

[Figure: QPSK constellations in the (ψ_I, ψ_Q)-plane with states labeled by bit pairs 00, 01, 10, 11. The transmitted bits determine the state transition from the original state to the possible new states, i.e., the phase change rather than the absolute phase.]

Linear AWGN Channels

[Figure: m -> modulator -> s(t) -> channel filter C(f) -> (+ n(t)) -> r(t) -> demodulator -> m^.]

Channel filter: impulse response c(t), frequency response C(f).


[Figure: PAM block splitting. The chain a_n -> g_T(t) -> c(t) -> g_R(t) -> y(t), sampled at t = nT + T0 -> decision rule -> a^_n, is redrawn in equivalent steps: the PAM modulator is split into an impulse train

  z(t) = sum_n a_n δ(t - nT)

followed by the pulse filter g_T(t), and the three filters are merged into the composite filter

  x(t) = g_T(t) * c(t) * g_R(t)

so that the whole system becomes a_n -> x(t) -> y(t), y(nT + T0) -> decision rule -> a^_n.]


[Figure: an example composite response x(t) = g_T(t) * c(t) * g_R(t) with its peak near T0 = 3T/2; its nonzero values at the other sampling instants cause ISI.]

Sampling Theorem for Deterministic Signals

Sampling x_c(t) at t = nTs gives x_d[n] = x_c(nTs). With X_c(f) = F{x_c(t)} and

  X_d(F) = sum_{n=-∞}^{∞} x_d[n] e^(-j 2π F n)

the spectra are related by

  X_d(F) = (1/Ts) sum_{k=-∞}^{∞} X_c( (F - k)/Ts )

We can write

  X_d(f Ts) = sum_{n=-∞}^{∞} x_c(nTs) e^(-j 2π f Ts n) = (1/Ts) sum_{k=-∞}^{∞} X_c( f - k/Ts )


[Figure: the folded spectrum sum_k X(f - k/T): the shifted copies X(f + 1/T), X(f), X(f - 1/T) overlap, and ISI-free transmission requires their sum to be constant (= T).]

- If 1/T > 2W, then ISI-free transmission is not possible
- If 1/T = 2W, then ISI-free transmission is possible only for x(t) = sinc(t/T)
- If 1/T < 2W, then ISI-free transmission is possible with many pulse shapes

Raised-Cosine Spectra

[Figure: the raised-cosine spectrum X_rc(f) is flat (constant = T) up to (1 - α)/(2T), rolls off with a cosine shape between (1 - α)/(2T) and (1 + α)/(2T), and is zero beyond; α is the roll-off factor, and the band beyond 1/(2T) is the excess bandwidth. The shifted copies X_rc(f ± 1/T) overlap X_rc(f) exactly so that sum_k X_rc(f - k/T) = T.]

Lecture 9

Raised-cosine pulses
Square-root raised cosine pulses
Equalization
Zero-forcing equalizers (ZF)
Linear minimum mean squared error (MMSE) equalizers
Decision feedback equalizers (DFE)

PAM over Linear Channels

[Figure: PAM system a_n → gT(t) → c(t) → gR(t) → y(t), sampled at t = nT + T0, followed by a decision rule → â_n; equivalent model with the composite filter x(t) = gT(t) * c(t) * gR(t).]

The composite channel impulse response, x(t), will determine the amount of intersymbol interference.

Erik Ström, Updated October 27, 2003, Lecture 9


If we ignore the time delay T0, the sampled output of the receiver filter can be written as

y_n = y(nT) = a_n x(0) + Σ_{k ≠ n} a_k x((n − k)T)

where a_n x(0) is the desired term and the sum is the intersymbol interference (ISI).

ISI-free transmission is possible only if

x(nT) = 1 for n = 0, and x(nT) = 0 for n ≠ 0

A pulse that satisfies this condition is called a Nyquist pulse.

A pulse x(t) with Fourier transform X(f) is a Nyquist pulse if

Σ_{k=−∞}^{∞} X(f − k/T) = T

An ideal bandlimited channel has channel filter frequency response

C(f) = 1 for |f| ≤ W, and C(f) = 0 for |f| > W

so that X(f) = GT(f) C(f) GR(f) = 0 for |f| > W.

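The Nyquist condition x(nT) = 1 for n = 0 and 0 otherwise is easy to check numerically. A minimal sketch (plain Python, no external libraries; T = 1 is an arbitrary choice) verifying that the sinc pulse satisfies it:

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi x) / (pi x), with sinc(0) = 1.
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

T = 1.0  # symbol period (arbitrary choice)

# A Nyquist pulse must satisfy x(0) = 1 and x(nT) = 0 for n != 0.
samples = [sinc(n * T / T) for n in range(-5, 6)]
assert samples[5] == 1.0
assert all(abs(s) < 1e-12 for i, s in enumerate(samples) if i != 5)
```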

[Figure: the folded spectrum, with the copies X(f + 1/T), X(f), X(f − 1/T) overlapping C(f); the Nyquist criterion requires Σ_k X(f − k/T) = T for all f.]

If 1/T > 2W, then ISI-free transmission is not possible
If 1/T = 2W, then ISI-free transmission is possible if x(t) = sinc(t/T)
If 1/T < 2W, then ISI-free transmission is possible with many pulse shapes

Raised-Cosine Spectra

[Figure: raised-cosine spectrum X_rc(f): constant (= T) for |f| < (1 − α)/(2T), cosine-shaped roll-off between (1 − α)/(2T) and (1 + α)/(2T); the excess bandwidth is α/(2T) and α is the roll-off factor. The shifted copies X_rc(f + 1/T), X_rc(f), X_rc(f − 1/T) add up to the constant Σ_k X_rc(f − k/T) = T.]

RC and RRC Spectra and Pulses

X_rc(f) = T for |f| < (1 − α)/(2T)
X_rc(f) = (T/2)[1 + cos((πT/α)(|f| − (1 − α)/(2T)))] for (1 − α)/(2T) ≤ |f| < (1 + α)/(2T)
X_rc(f) = 0 otherwise

x_rc(t) = sinc(t/T) cos(παt/T) / (1 − (2αt/T)²)

X_rrc(f) = √(X_rc(f))

x_rrc(t) = [sin(π(1 − α)t/T) + (4αt/T) cos(π(1 + α)t/T)] / [(πt/T)(1 − (4αt/T)²)]

Linear Continuous-Time Equalizers

[Figure: a_n → PAM gT(t) → channel filter c(t) (variable, must be estimated) → + n(t) → receive filter gR(t) → equalizer gE(t) (variable, based on the estimate of the channel filter) → to detector. The transmit and receive filters are fixed and usually matched to each other.]

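The time-domain raised-cosine pulse above can be evaluated directly. A small sketch (the roll-off α = 0.35 and period T = 1 are arbitrary choices that keep the sample points away from the pulse's singular point at t = T/(2α)) confirming its Nyquist zero crossings at nonzero multiples of T:

```python
import math

def rc_pulse(t, T=1.0, alpha=0.35):
    # x_rc(t) = sinc(t/T) cos(pi*alpha*t/T) / (1 - (2*alpha*t/T)^2)
    if t == 0.0:
        return 1.0
    num = math.sin(math.pi * t / T) / (math.pi * t / T) * math.cos(math.pi * alpha * t / T)
    den = 1.0 - (2.0 * alpha * t / T) ** 2
    return num / den

# Nyquist property: zero crossings at every nonzero multiple of T.
vals = [rc_pulse(n * 1.0) for n in range(-4, 5)]
assert abs(vals[4] - 1.0) < 1e-12
assert all(abs(v) < 1e-12 for i, v in enumerate(vals) if i != 4)
```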

Noise-Amplification for ZF Equalization

[Figure: noise PSD before and after a zero-forcing equalizer: the PSD after the equalizer, S_ν(f) = S_n(f) X_rc(f) / |C(f)|², is amplified at frequencies where |C(f)| is small; S_n(f) and S_ν(f) denote the noise PSD before and after the equalizer, respectively.]

Linear Discrete-Time Equalizers

[Figure: a_n → gT(t) → c(t) → + n(t) → gR(t) → y(t), sampled at t = nTs to give y_n → discrete-time equalizer → z_n → â_{n−d}; equivalent model with the composite filter x(t) = gT(t) * c(t) * gR(t).]

The sampling time could be Ts = T (symbol-spaced) or Ts = T/2 (fractionally-spaced)
The quantity d is called the detection delay

Linear FIR Equalizers

[Figure: tapped delay line with inputs y_n, y_{n−1}, …, y_{n−2L} and tap coefficients c_0, c_1, …, c_{2L}; the tap outputs are summed to form z_n.]

Data vector y_n = [y_n y_{n−1} … y_{n−2L+1} y_{n−2L}]^T
Coefficient vector c = [c_0 c_1 … c_{2L−1} c_{2L}]^T
Filter output z_n = c^T y_n
MATLAB command z = conv(y, c)

Linear MMSE Filtering

The error is e_n = z_n − a_{n−d}
Mean squared error ε²(c) = E[e_n²] = E[(z_n − a_{n−d})²]
Optimum coefficients c_opt = argmin_c ε²(c) = (E[y_n y_n^T])^{−1} E[y_n a_{n−d}]
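The slide points to MATLAB's conv; a plain-Python equivalent of the FIR equalizer output (full linear convolution of the sample sequence with the tap coefficients; the numerical values are illustrative only):

```python
def conv(y, c):
    # Full linear convolution, equivalent to MATLAB's conv(y, c).
    n = len(y) + len(c) - 1
    out = [0.0] * n
    for i, yi in enumerate(y):
        for j, cj in enumerate(c):
            out[i + j] += yi * cj
    return out

# Example: a 2-tap equalizer applied to a short sample sequence.
z = conv([1.0, 0.5, 0.25], [1.0, -0.5])
print(z)  # → [1.0, 0.0, 0.0, -0.125]
```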

Example for L = 1, d = 1

E[y_n y_n^T] = [[R_y[0], R_y[−1], R_y[−2]], [R_y[1], R_y[0], R_y[−1]], [R_y[2], R_y[1], R_y[0]]], where R_y[k] = E{y_n y_{n+k}}

E[y_n a_{n−d}] = [R_ya[−d], R_ya[1 − d], R_ya[2 − d]]^T = [R_ya[−1], R_ya[0], R_ya[1]]^T, where R_ya[k] = E{y_n a_{n+k}}

Suppose the a_n are independent zero-mean symbols with variance σ_a² = E{a_n²}
Suppose the receiver filter is such that gR(t) * gR(−t) is a Nyquist pulse
It is then easy to show that

R_y[k] = σ_a² R_x[k] + (N0/2) δ[k], where R_x[k] = x_k * x_{−k} = Σ_{n=−∞}^{∞} x_n x_{n+k}

R_ya[k] = σ_a² x_{−k}
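With R = E[y_n y_n^T] and p = E[y_n a_{n−d}] in hand, c_opt = R^{−1} p is a small linear solve. A sketch with made-up correlation values (the numbers are assumptions for illustration, not from the slides):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Toy 2-tap example: R = E[y y^T], p = E[y a_{n-d}] (made-up numbers).
R = [[1.2, 0.4], [0.4, 1.2]]
p = [1.0, 0.2]
c_opt = solve(R, p)   # MMSE coefficients c_opt = R^{-1} p
```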

Decision Feedback Equalizers (DFE)

[Figure: y_n → feed-forward filter c_n → Σ → z_n → decision rule → â_{n−d}; the decisions are fed back through the feedback filter b_n and subtracted ahead of the decision; the error is e_n = z_n − a_{n−d}.]

z_n = Σ_{m=0}^{N1−1} c_m y_{n−m} − Σ_{m=1}^{N2} b_m â_{n−d−m} = c^T y_n

c = [c_0 c_1 … c_{N1−1} b_1 b_2 … b_{N2}]^T
y_n = [y_n y_{n−1} … y_{n−(N1−1)} â_{n−d−1} â_{n−d−2} … â_{n−d−N2}]^T

MMSE DFE

If we assume that the feedback symbols are correct, then

y_n = [y_n y_{n−1} … y_{n−(N1−1)} a_{n−d−1} a_{n−d−2} … a_{n−d−N2}]^T

It is now possible to compute the coefficients that minimize the MSE as before:

c_opt = (E[y_n y_n^T])^{−1} E[y_n a_{n−d}]

where the correlation matrix and cross-correlation vector depend on R_y[k], R_ya[k], and R_a[k]


Lecture 10

Orthogonal FSK with coherent detection
Comparison of modulation formats in terms of
Power efficiency
Spectral efficiency
Bit and symbol error probability
Bit and symbol energy

Review

The symbol error probability for M-PSK is

Pe = (1/π) ∫_0^{π(1 − 1/M)} exp(−(Es/N0) sin²(π/M)/sin²(θ)) dθ

= Q(√(2Es/N0)) for M = 2
= 2Q(√(Es/N0)) − Q²(√(Es/N0)) for M = 4
≈ 2Q(√(2Es/N0) sin(π/M)) for M > 4

Erik Ström, Updated November 3, 2003, Lecture 10


Symbol error probability for differentially encoded PSK with coherent detection is

Pe ≈ 2 Pe,M-PSK ≈ 4Q(√(2Es/N0) sin(π/M))

and for differentially coherent detection

Pe = (1/π) ∫_0^{π(1 − 1/M)} exp(−(Es/N0) sin²(π/M)/(1 + cos(π/M)cos(θ))) dθ

= (1/2) exp(−Es/N0) for M = 2
≈ [(1 + cos(π/M))/(2cos(π/M))] Q(√((2Es/N0)[1 − cos(π/M)])) for M > 2

Symbol error probability for square QAM (M an even power of 2, i.e., √M is an integer), see handout for details:

Pe = (1/M)(na Pe|a + nb Pe|b + nc Pe|c)

Pe|a = 4q − 4q², Pe|b = 3q − 2q², Pe|c = 2q − q²

q = Q(√(3Es/((M − 1)N0)))

na = (√M − 2)², nb = 4(√M − 2), nc = 4

Pe ≤ 4Q(√(3Es/((M − 1)N0))), eq. (7.6.71) in the book

Frequency-Shift Keying

The signals

φ_m(t) = √(2/T) cos(2π[fc + (m − 1)Δf]t), 0 ≤ t < T

are orthonormal if fc ≫ 1/T and Δf = n/(2T), for some integer n

The signal alternatives for orthogonal FSK are

s_m(t) = √Es φ_m(t), m = 1, 2, …, M

Orthogonal M-FSK with coherent detection:

Pe = Q(√(Es/N0)), M = 2
Pe = 1 − ∫_{−∞}^{∞} [1 − Q(x)]^{M−1} (1/√(2π)) exp(−(x − √(2Es/N0))²/2) dx, M > 2

Pe ≤ (M − 1)Q(√(Es/N0))

and with noncoherent detection (Δf = 1/T):

Pe = (1/2) exp(−Es/(2N0)), M = 2
Pe = Σ_{n=1}^{M−1} (−1)^{n+1} C(M−1, n) (1/(n+1)) exp(−(n/(n+1)) Es/N0), M ≥ 2

Binary Modulation

Pe,BPSK = Q(√(2Es/N0))

Pe,diff. enc. BPSK = 2Q(√(2Es/N0)) − 2Q²(√(2Es/N0)) ≈ 2Q(√(2Es/N0)) = 2 Pe,BPSK

Pe,2DPSK = (1/2) e^{−Es/N0}

Pe,BFSK = Q(√(Es/N0)) for coherent detection, (1/2) exp(−Es/(2N0)) for noncoherent detection

[Figure: error probability vs Es/N0 in dB for some binary modulation methods: 2-FSK noncoherent, 2-FSK coherent, 2-DPSK, differentially encoded 2-PSK, and 2-PSK; at the error probability target, the required Es/N0 for coherent FSK is 3 dB larger than for 2-PSK.]
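The curves above can be reproduced from the closed-form expressions using Q(x) = erfc(x/√2)/2. A short sketch (the operating point and the exact 3 dB shift, a factor of 2 in Es/N0, are illustrative):

```python
import math

def Q(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x) = erfc(x/sqrt(2))/2.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pe_bpsk(es_n0_db):
    # Pe,BPSK = Q(sqrt(2 Es/N0))
    return Q(math.sqrt(2.0 * 10.0 ** (es_n0_db / 10.0)))

def pe_2dpsk(es_n0_db):
    # Pe,2DPSK = (1/2) exp(-Es/N0)
    return 0.5 * math.exp(-10.0 ** (es_n0_db / 10.0))

def pe_2fsk_coherent(es_n0_db):
    # Pe,BFSK (coherent) = Q(sqrt(Es/N0)): 3 dB worse than BPSK
    return Q(math.sqrt(10.0 ** (es_n0_db / 10.0)))

# Coherent 2-FSK needs exactly a factor 2 (= 10 log10 2 dB) more Es/N0
# to match BPSK, as the figure's "3 dB" annotation indicates.
delta_db = 10.0 * math.log10(2.0)
assert abs(pe_2fsk_coherent(9.6 + delta_db) - pe_bpsk(9.6)) < 1e-12
```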

[Figure: bit error probability vs Eb/N0 in dB for M-PSK (union bound) and for M-QAM, for M = 2, 4 (k = 1, 2), M = 16 (k = 4), and M = 64 (k = 6); for both formats the required Eb/N0 increases with M, and QAM needs markedly less Eb/N0 than PSK for the same M.]
[Figure: bit error probability vs Eb/N0 in dB for coherent M-FSK (union bound), M = 4 (k = 2), M = 16 (k = 4), M = 64 (k = 6); the required Eb/N0 decreases with M.]

[Figure: bit error probability vs Eb/N0 in dB for 64-ary coherent FSK, PSK, and QAM; FSK requires the least Eb/N0 and PSK the most.]

Lecture 11

Bandwidth and spectral efficiency
Channel capacity
Signal constellation design with block codes
Power and spectral efficiency tradeoffs using binary block codes and binary modulation
Hamming and Euclidean distance measures

The signal alternatives for orthogonal M-FSK and coherent detection are, for m = 1, 2, …, M,

s_m(t) = √Es √(2/T) cos(2π[fc + (m − 1)Δf]t), 0 ≤ t < T

where Es = energy/symbol and Δf = 1/(2T) = frequency spacing

Symbol error probability: eq. (7.6.82) in the book

Pe = Q(√(Es/N0)), M = 2
Pe ≤ (M − 1)Q(√(Es/N0)), M > 2 (standard UB)

Erik Ström, Updated November 11, 2003, Lecture 11


Some fundamental definitions and relations

P = average transmitter power
Rb = 1/Tb = data rate [bits/second]
Eb = P Tb = P/Rb = average energy per bit
k = log2(M) = number of bits per symbol
Rs = 1/T = 1/(kTb) = Rb/k = symbol rate [symbols/second]
Es = P T = P/Rs = kEb = average energy per symbol

Relation between bit and symbol error probabilities

Pe/k ≤ Pb ≤ Pe

Pb ≈ (1/k) Pe for Gray-coded constellations and high Es/N0
Pb = M/(2(M − 1)) Pe for orthogonal constellations

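The definitions chain together directly; a small helper (the 1 W, 1 Mbit/s, 16-ary numbers are illustrative choices) computing k, Eb, Rs, and Es:

```python
import math

def link_params(P, Rb, M):
    # Derived quantities from transmitter power P [W], data rate Rb [bit/s],
    # and constellation size M (illustrative helper based on the relations above).
    k = math.log2(M)      # bits per symbol
    Eb = P / Rb           # average energy per bit,    Eb = P/Rb
    Rs = Rb / k           # symbol rate,               Rs = Rb/k
    Es = k * Eb           # average energy per symbol, Es = k*Eb
    return k, Eb, Rs, Es

k, Eb, Rs, Es = link_params(1.0, 1e6, 16)   # 1 W, 1 Mbit/s, 16-QAM
assert (k, Rs) == (4.0, 250e3)              # 4 bits/symbol, 250 kBd
```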

Different modulation schemes can be compared in terms of their power efficiency.
The comparison is based on the Eb/N0 required to meet a certain target bit error probability Pb
Example: for Pb = 0.001

Modulation   64-FSK   64-QAM   64-PSK
Eb/N0 [dB]   4.27     14.9     24.2

Hence, 64-QAM is more power efficient than 64-PSK.
For QAM and PSK, the power efficiency decreases with M. For FSK, the power efficiency increases with M.

Channel Capacity

Claude Shannon: Pe → 0 is achievable if and only if

Eb/N0 > (2^{Rb/W} − 1) W/Rb

Extension: a target bit error rate Pb is achievable if

Eb/N0 > (2^{Rb[1 − H(Pb)]/W} − 1) W/Rb

H(Pb) = −Pb log2(Pb) − (1 − Pb) log2(1 − Pb)

Virtually no difference when Pb is less than 0.001

Power and Spectral Efficiency at Pb = 10⁻⁵

[Figure: spectral efficiency Rb/W [bits/s/Hz] vs Eb/N0 [dB] for PSK and QAM (M = 4, 16, 64, 256) and FSK (M = 2, 4, 16, 64), together with the capacity limit Rb = C; the asymptotic minimum Eb/N0 for error-free transmission is −1.59 dB.]

A 4-ary Constellation in 2 Dimensions

Some fundamental definitions and relations
E = energy per dimension
n = number of dimensions
Es = nE = kEb = energy per symbol

Example: four signal points on the corners of a square with side 2√E (basis functions φ1(t), φ2(t)):

dmin,2 = 2√E, n = 2, k = 2
E = (k/n)Eb = Eb
d²min,2 = 4E = 4Eb


A 4-ary Constellation in 3 Dimensions

[Figure: four signal points on the corners of a cube with side 2√E (basis functions φ1(t), φ2(t), φ3(t)).]

dmin,3 = 2√(2E), k = 2, n = 3
d²min,3 = 8E = 8(k/n)Eb = (16/3)Eb > d²min,2 = 4Eb

M = 2^k-ary n-dimensional modulator

[Figure: info bits m → (n,k) block encoder → coded bits c → channel bits (0 → −1, 1 → +1) → PAM with pulse x_RRC(t) → transmitted signal s(t); e.g. m = 1,0 → c = 1,0,1 → channel bits 1,−1,1 → s = √E, −√E, √E.]

information word = binary k-vector, code word = binary n-vector, transmitted symbol = real n-vector

k info bits   n coded bits   channel bits   signal vector
0 0           1 1 1          +1 +1 +1       +√E +√E +√E
0 1           0 0 1          −1 −1 +1       −√E −√E +√E
1 0           0 1 0          −1 +1 −1       −√E +√E −√E
1 1           1 0 0          +1 −1 −1       +√E −√E −√E

Hamming Weight and Distance

The Hamming weight of the binary vector

c = [c1 c2 … cn], where ci ∈ {0, 1}

is defined as

wH(c) = Σ_{i=1}^{n} ci = number of ones in c

The Hamming distance from c1 to c2 is defined as

dH(c1, c2) = wH(c1 ⊕ c2) = Σ_{i=1}^{n} (c1,i ⊕ c2,i)

where ⊕ denotes modulo-2 addition (XOR)

0 ⊕ 1 = 1 ⊕ 0 = 1, 0 ⊕ 0 = 1 ⊕ 1 = 0
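The two definitions translate directly into code; a minimal sketch with binary vectors represented as Python lists of 0/1:

```python
def w_hamming(c):
    # Hamming weight: number of ones in the binary vector c.
    return sum(c)

def d_hamming(c1, c2):
    # Hamming distance: weight of the componentwise XOR c1 (+) c2.
    return sum(a ^ b for a, b in zip(c1, c2))

assert w_hamming([1, 0, 1, 1]) == 3
assert d_hamming([1, 0, 1], [0, 0, 1]) == 1
```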

Lecture 12

Linear block codes: generator and parity check matrices
Error correcting and detecting capability of block codes
Hard and soft decoding of block codes
Error probability calculations
Calculation of parity check matrix and syndrome for systematic codes

The spectral efficiency for a system with data rate Rb bits/second and signal bandwidth W Hertz is

Rb/W, bits/second/Hz

Unfortunately, there is no definition of bandwidth which is useful for all possible modulation formats.
M-ary IQ-modulation formats (e.g., PSK, QAM) using RRC pulses with roll-off factor α have strictly bandlimited signals and

Rb/W = log2(M)/(1 + α) = k/(1 + α), which increases with M

Erik Ström, Updated December 4, 2003, Lecture 12
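The IQ spectral-efficiency formula is a one-liner; a sketch (16-QAM with α = 0.25 is an illustrative choice):

```python
import math

def spectral_efficiency_iq(M, alpha):
    # Rb/W = log2(M) / (1 + alpha) for M-ary PSK/QAM with RRC pulses.
    return math.log2(M) / (1.0 + alpha)

assert spectral_efficiency_iq(16, 0.25) == 3.2   # 4 bits / 1.25 = 3.2 bits/s/Hz
```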


For M-ary orthogonal FSK, a (crude) estimate of the required bandwidth is W = M/T, and

Rb/W = log2(M)/M = k/2^k, which decreases with M

The channel capacity C is the maximum possible data rate that can be transmitted with zero error probability
For an AWGN channel with noise power spectral density N0/2, bandwidth W, and transmitter power P

C = W log2(1 + P/(N0 W)) bits/second

The transmitter power P is related to the data rate Rb and the energy per bit Eb as P = Eb Rb
The best possible system, which can achieve Rb = C with zero error probability, must transmit with

Eb/N0 ≥ (2^{Rb/W} − 1) W/Rb

Extension: a target bit error rate Pb is achievable if

Eb/N0 > (2^{Rb[1 − H(Pb)]/W} − 1) W/Rb

H(Pb) = −Pb log2(Pb) − (1 − Pb) log2(1 − Pb)

Virtually no difference when Pb is less than 0.001
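The capacity formula and the resulting Eb/N0 bound can be evaluated directly; a sketch (the parameter values are illustrative) showing the Rb/W → 0 limit approaching −1.59 dB:

```python
import math

def capacity(W, P, N0):
    # AWGN channel capacity C = W log2(1 + P/(N0 W)) [bits/second].
    return W * math.log2(1.0 + P / (N0 * W))

def min_eb_n0_db(Rb, W):
    # Smallest Eb/N0 allowing Pe -> 0: (2^(Rb/W) - 1) / (Rb/W), in dB.
    eta = Rb / W
    return 10.0 * math.log10((2.0 ** eta - 1.0) / eta)

assert abs(capacity(1.0, 3.0, 1.0) - 2.0) < 1e-12   # log2(1 + 3) = 2
print(min_eb_n0_db(1.0, 1e9))   # eta -> 0: approaches ln 2, i.e. -1.59 dB
```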

M = 2^k-ary n-dimensional modulator

[Figure: info bits m → (n,k) block encoder → coded bits c (e.g. 1,0 → 1,0,1) → channel bits (0 → −√E, 1 → +√E) → PAM with x_RRC(t) → s(t) → AWGN channel n(t) → matched filter x_RRC(−t) → samples y(nT), collected into r (soft) or converted from ±1 to 1/0 and collected into y (hard).]

A (n, k) block code produces n coded bits for each block of k information bits.
The code rate is Rc = k/n information bits/coded bit
There are M = 2^k possible code words (binary n-vectors): {c1, c2, …, cM} = C
The effective signal constellation is

s_m = (2c_m − 1)√E, m = 1, 2, …, M

The energy per dimension is E = Rc Eb
The spectral efficiency of a coded system is

(Rb/W)_coded = Rc (Rb/W)_uncoded

The Hamming weight wH(ck) is the number of ones in Soft decoding selects s as the signal vector that is
the binary vector ck. closest to r in Euclidean distance
The Hamming distance between ck, cl is The decoding error probability can be bounded as
2Eb
d H (c k , c l ) = w H (c k c l ) Pe = Pr{s s} = Pr{c c} = Pr{m
m} (2k 1)Q Rc dmin
N
0
where denotes modulo 2 addition An uncoded system has Rc = 1 and dmin = 1. If m is
The minimum distance of a block code is still a k-vector, then
2Eb
Pe = Pr{m m} (2k 1)Q
dmin,H = min d H (c k , c l ) N
k l 0

The squared Euclidean distance of the corresponding The gain in power efficiency for large Eb/N0 is called
signal constellation is the asymptotic coding gain and is
2
E = min sk sl = 4Edmin,H
2
dmin,
k l
Gcoding = 10log10 (Rc dmin )
dB


Linear Block Codes

Let GF(2) = {0, 1} denote the binary field, and GF(2)^k = set of all binary k-vectors
We assume the vectors in GF(2)^k are row vectors
Clearly, m ∈ GF(2)^k and c ∈ GF(2)^n
The set of all code words of a linear code makes up a subspace of GF(2)^n and

c = mG

where c is a 1 × n code vector, m is a 1 × k information vector, and G is a k × n generator matrix

Systematic Linear Block Codes

The generator matrix is of the form

G = [Ik P]

with Ik the k × k identity matrix and P a k × (n − k) matrix, and the code words are of the form

c = mG = [mIk mP] = [m mP]

with k information bits followed by (n − k) parity bits

Parity Check Matrix

For any linear code we can find a matrix H such that

G H^T = 0

For any code word, we have that c = mG and

c H^T = m G H^T = m 0 = 0

For a systematic code with generator matrix G = [Ik P]

H = [P^T  I_{n−k}]

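The identity G H^T = 0 for a systematic code can be checked mechanically; a sketch using a (7,4) Hamming-code P matrix (this particular P is one common choice, an assumption here for illustration):

```python
def mat_mul_mod2(A, B):
    # Matrix product over GF(2): entries are 0/1 ints.
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

# Systematic (7,4) code: G = [I_k | P], H = [P^T | I_{n-k}].
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]
k, r = 4, 3
ident = lambda n: [[1 if i == j else 0 for j in range(n)] for i in range(n)]
G = [ident(k)[i] + P[i] for i in range(k)]
H = [[P[j][i] for j in range(k)] + ident(r)[i] for i in range(r)]
HT = [list(col) for col in zip(*H)]
# Every entry of G H^T is zero, so c H^T = m G H^T = 0 for all code words.
assert all(v == 0 for row in mat_mul_mod2(G, HT) for v in row)
```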

Lecture 13

Standard array and syndrome decoding
Overview of cyclic codes
Convolutional codes: state diagram and trellis descriptions

[Figure: M = 2^k-ary n-dimensional modulator: info bits m → (n,k) block encoder → coded bits c → channel bits (0 → −1, 1 → +1) → PAM with x_RRC(t), amplitude √E → s(t) → AWGN channel n(t) → matched filter → samples y(nT).]

Soft decoding: collect n samples y(nT) into the vector r and apply a decision rule (preferably ML) to obtain m̂
Hard decoding: first make decisions on the channel bits with sgn(y(nT)) (±1 → 1/0), collect n bit decisions into the vector y, and apply a decision rule (preferably ML) to obtain m̂

Erik Ström, Updated November 25, 2002, Lecture 13


Soft decoding minimizes the Euclidean distance between r and the decoded signal vector
The decoding error probability for soft decoding can be bounded as

Pe = Pr{ĉ ≠ c | c transmitted} = Pr{m̂ ≠ m | m transmitted} ≤ (M − 1) Q(√((2Eb/N0) Rc dmin))

The gain in power efficiency for large Eb/N0 is called the asymptotic coding gain and is

G_coding = 10 log10(Rc dmin) dB

The error correcting capability of a block code is

t = floor((dmin − 1)/2)    (floor function: round down to the closest integer)

The probability of a channel bit error (hard decisions) is

p = Q(√((2Eb/N0) Rc))

and the decoding error probability can be bounded as

Pe ≤ 1 − Σ_{d=0}^{t} C(n, d) p^d (1 − p)^{n−d}

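The hard-decoding bound is a truncated binomial sum; a sketch evaluating it for an assumed (7,4) single-error-correcting code at p = 0.01:

```python
import math

def pe_hard_bound(n, t, p):
    # Pe <= 1 - sum_{d=0}^{t} C(n,d) p^d (1-p)^(n-d): decoding can only
    # fail if more than t channel bit errors occur.
    pc = sum(math.comb(n, d) * p**d * (1.0 - p)**(n - d) for d in range(t + 1))
    return 1.0 - pc

# Assumed example: a (7,4) code with t = 1 at coded-bit error rate p = 0.01.
print(pe_hard_bound(7, 1, 0.01))   # about 2e-3
```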

A code is linear if any linear combination of code words is a code word
A linear block code has code words

c = mG = code word, 1 × n binary vector
m = information bits, 1 × k binary vector
G = generator matrix, k × n binary matrix

For a systematic code, the first k elements in the code word are the information bits, and

G = [Ik P], Ik = k × k identity matrix, P = k × (n − k) matrix

Parity Check Matrix

For any linear code we can find an (n − k) × n matrix H such that

G H^T = 0

For any code word, we have that c = mG and

c H^T = m G H^T = m 0 = 0

For a systematic code with generator matrix G = [Ik P]

H = [P^T  I_{n−k}]

Standard Array

[Table: the first row lists the code words c1 (the zero code word), c2, …, c_{2^k}; row i is the coset e_i ⊕ c1, e_i ⊕ c2, …, e_i ⊕ c_{2^k}, with the coset leaders e_2, …, e_{2^{n−k}} in the first column.]

For row i = 2, 3, …, 2^{n−k}, find a pattern of minimum weight which is not already listed in the array (the choice is not always unique)
Call this pattern e_i and form the ith row as the corresponding coset c1 ⊕ e_i, c2 ⊕ e_i, …, c_{2^k} ⊕ e_i

Cyclic Block Codes

A linear code is a cyclic code if all cyclic shifts of a code word are also code words.
There exist methods for encoding and hard decoding which are much more efficient than for standard linear codes.
Hence, we can implement relatively long block codes with reasonable complexity.
Among the cyclic codes we find the important BCH and Reed-Solomon codes.

Rate 1/2 Convolutional Encoder

[Figure: shift register encoder: the information bit x_i enters a register holding [x_{i−1} x_{i−2}] (the encoder memory, which defines the state); two modulo-2 adders form the coded bits y_i^(0) and y_i^(1).]

State diagram and trellis section

[Figure: four-state state diagram and the corresponding trellis section.]


Encoding of N bits with trellis termination

[Figure: trellis with sections 0, 1, 2, 3, 4, …, N − 2, N − 1, N, …, N + L − 1; trellis development during the L − 1 first info bits, trellis termination with L − 1 zero tail bits.]

Hamming distance to received pattern

State transition   Coded bits   00   11   10   01
S0 → S0            00           0    2    1    1
S0 → S2            11           2    0    1    1
S1 → S0            11           2    0    1    1
S1 → S2            00           0    2    1    1
S2 → S1            01           1    1    2    0
S2 → S3            10           1    1    0    2
S3 → S1            10           1    1    0    2
S3 → S3            01           1    1    2    0

Received sequence 01 00 11

[Figure: Viterbi decoding example over three trellis sections (times i to i + 3) for the received sequence 01 00 11, with the branch metrics from the table labeled on each transition; two merging paths have accumulated distances 2 and 1.]

If d1 < d2, then the path p1 + p3 is better than p2 + p3
Hence, p2 can be eliminated.
The path p1 is called the survivor path at node X

Lecture 14

Convolutional codes
Encoder
State diagram
Trellis
Viterbi algorithm
Performance analysis

Review

The generator matrix for a systematic code has the form

G = [Ik P]

Ik = k × k identity matrix and P = k × (n − k) matrix
The parity check matrix for a systematic code is

H = [P^T  I_{n−k}]

The syndrome for y is

s = y H^T = 0 if y is a code word, and s ≠ 0 if y is not a code word

Erik Strm, Updated December 4, 2003 Lecture 14 1 Erik Strm, Updated December 4, 2003 Lecture 14 2

ESS140 Digital Communications ESS140 Digital Communications


www.s2.chalmers.se/undergraduate/courses/ess140 www.s2.chalmers.se/undergraduate/courses/ess140

The columns of the standard array describe the decision regions for a hard ML decoder
The standard array is a table with 2^k columns and 2^{n−k} rows (cosets) that lists all possible 2^n received words.
Construction algorithm:
The first row lists the code words c1, c2, c3, …, c_{2^k}, where c1 = 0
Search among the binary vectors not listed thus far, and select one with the smallest possible weight, e_i, as the coset leader; form the ith coset (row) as e_i ⊕ c1, e_i ⊕ c2, e_i ⊕ c3, …, e_i ⊕ c_{2^k}

All elements in a coset have the same syndrome
Hard ML-decoding: given the received vector y
Compute the syndrome, s_y = y H^T
Find the coset leader e with the same syndrome, s_y = e H^T
Decode as ĉ = y ⊕ e
We recall that

y = c_m ⊕ x  →  ĉ = y ⊕ e = c_m ⊕ (x ⊕ e)

where c_m is the transmitted code word and x the error pattern; x ⊕ e = 0 if x = e, and the coset leaders are therefore the correctable error patterns
For a t-error correcting code, the first coset leaders are all the possible n-bit patterns with weight t or less

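The syndrome-decoding steps translate into a lookup table from syndromes to coset leaders; a sketch for a single-error-correcting (7,4) Hamming code (the specific systematic H below is an assumed choice for illustration):

```python
def syndrome(y, H):
    # s = y H^T over GF(2); one syndrome bit per row of H.
    return tuple(sum(yi & hi for yi, hi in zip(y, row)) % 2 for row in H)

# Assumed systematic H = [P^T | I_3] for a (7,4) Hamming code (t = 1).
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 1, 0, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]
n = 7
unit = lambda j: [1 if i == j else 0 for i in range(n)]
# Syndrome table for all weight-1 coset leaders (the correctable patterns).
table = {syndrome(unit(j), H): unit(j) for j in range(n)}

c = [1, 0, 1, 1, 1, 0, 0]        # a code word: m = 1011 plus its parity bits
y = c[:]
y[4] ^= 1                        # one channel bit error
e = table[syndrome(y, H)]        # coset leader with the same syndrome
decoded = [yi ^ ei for yi, ei in zip(y, e)]
assert decoded == c
```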

The probability of incorrect decoding for a code with error correcting capability t is

Pe = Pr{ĉ ≠ c_m} = 1 − Σ_{i=1}^{2^{n−k}} p^{wH(e_i)} (1 − p)^{n − wH(e_i)} = 1 − Pc

Pc = Pr{error pattern is a coset leader} ≥ Pr{error pattern has weight t or less}

Pe ≤ 1 − Σ_{i=0}^{t} C(n, i) p^i (1 − p)^{n−i}

where p is the probability of a coded bit error

p = Q(√(2E/N0)) = Q(√((2Eb/N0) Rc))

Rate 1/2, Memory 2 Convolutional Encoder

[Figure: shift register encoder: the information bit x_i enters a register holding [x_{i−1} x_{i−2}] (the encoder memory, which defines the state); two modulo-2 adders form the coded bits y_i^(0), y_i^(1).]

The generators g(0) = [101] and g(1) = [111] are the impulse responses of the encoder branches
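The encoder with generators g(0) = [101], g(1) = [111] can be simulated in a few lines; a sketch including the two zero tail bits used for trellis termination:

```python
def conv_encode(bits, g0=(1, 0, 1), g1=(1, 1, 1)):
    # Rate-1/2, memory-2 convolutional encoder, generators g(0)=101, g(1)=111.
    # Two zero tail bits terminate the trellis in the all-zero state.
    state = [0, 0]                         # [x_{i-1}, x_{i-2}]
    out = []
    for x in list(bits) + [0, 0]:
        window = [x] + state               # [x_i, x_{i-1}, x_{i-2}]
        out.append(sum(w & g for w, g in zip(window, g0)) % 2)   # y_i^(0)
        out.append(sum(w & g for w, g in zip(window, g1)) % 2)   # y_i^(1)
        state = [x, state[0]]
    return out

print(conv_encode([1, 0, 1]))   # → [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]
```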

The code rate is defined as

Rc = k/n = number of input bits per coded bit

In practice, k is usually chosen as k = 1, and we will assume that from now on
The memory, m, of the encoder is the number of memory elements
The constraint length is defined here as L = m + 1, but many other definitions exist in the literature
The number of states is 2^m
Performance and complexity increase with m
In practical systems, m is usually less than 10 or so

State diagram and trellis section

[Figure: four-state state diagram and one trellis section for the rate 1/2, memory 2 encoder.]

Encoding of N bits with trellis termination

[Figure: terminated trellis with sections 0, 1, …, N + L − 1; trellis development during the L − 1 first info bits, termination with L − 1 zero tail bits.]

An N-bit packet of information bits is encoded into a path through the trellis with (N + L − 1) trellis sections
The effective rate for a rate Rc = 1/n code is

Reff = number of info bits / number of coded bits = N/(n(N + L − 1)) = (1/n) · 1/(1 + (L − 1)/N) = Rc · 1/(1 + (L − 1)/N)

where (L − 1)/N is the fractional rate loss
The spectral efficiency is

(Rb/W)_coded = Reff (Rb/W)_uncoded < Rc (Rb/W)_uncoded

The rate loss is negligible if N ≫ L

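The effective-rate expression is straightforward to evaluate; a sketch (the packet sizes are illustrative) showing that the fractional rate loss vanishes for N ≫ L:

```python
def effective_rate(N, n, L):
    # R_eff = N / (n (N + L - 1)) for a terminated rate-1/n code with
    # constraint length L: N info bits produce n (N + L - 1) coded bits.
    return N / (n * (N + L - 1))

print(effective_rate(10, 2, 3))     # 10/24, noticeably below R_c = 0.5
print(effective_rate(1000, 2, 3))   # about 0.499, essentially R_c = 0.5
```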

The possible paths in a terminated trellis represent the possible sequences of coded bits
Assuming (as before) that the coded bits are transmitted with binary PAM, the samples from the matched filter are soft decisions on the coded bits
Decoding can be done by finding the path in the trellis that is closest to the received sequence in squared Euclidean distance (soft decoding), or Hamming distance (hard decoding)
The distance (or metric) can be computed by adding the contributions from each trellis section

Hamming distance to received pattern

State transition   Coded bits   00   11   10   01
S0 → S0            00           0    2    1    1
S0 → S2            11           2    0    1    1
S1 → S0            11           2    0    1    1
S1 → S2            00           0    2    1    1
S2 → S1            01           1    1    2    0
S2 → S3            10           1    1    0    2
S3 → S1            10           1    1    0    2
S3 → S3            01           1    1    2    0

Received sequence 01 00 11

[Figure: Viterbi decoding example over three trellis sections (times i to i + 3) for the received sequence 01 00 11, with the branch metrics from the table labeled on each transition; two paths merge at node A with accumulated distances 2 and 1.]

If d1 < d2, then the path p1 + p3 is better than p2 + p3
Hence, p2 can be eliminated already at node A
The path p1 is called the survivor path at node A

Viterbi decoding:
Label all transitions in the trellis with the distance between the received sequence and the coded bits.
For time i = 1 to (N + L − 1):
Compute the accumulated distance of all paths merging in the nodes at time i.
Keep the best path (the survivor) in each node and save the accumulated metric.
The ML path is the survivor in the all-zero node at the end of the trellis.
In practice, at time i, we can decode the information bit at time i − δ without any significant performance loss if δ ≥ 5L.

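The algorithm above can be sketched for the rate-1/2, memory-2 code with generators g(0) = [101], g(1) = [111], using hard decisions and a terminated trellis; this is an illustrative implementation, not the course's reference code:

```python
def viterbi_decode(coded, g0=(1, 0, 1), g1=(1, 1, 1)):
    # Hard-decision Viterbi decoder for the rate-1/2, memory-2 code with
    # generators g(0)=101, g(1)=111, assuming a trellis terminated by two
    # zero tail bits. States are (x_{i-1}, x_{i-2}).
    def branch(state, x):
        window = (x,) + state
        y0 = sum(w & g for w, g in zip(window, g0)) % 2
        y1 = sum(w & g for w, g in zip(window, g1)) % 2
        return (x, state[0]), (y0, y1)

    survivors = {(0, 0): (0, [])}          # state -> (metric, input path)
    for i in range(len(coded) // 2):
        r = (coded[2 * i], coded[2 * i + 1])
        new = {}
        for state, (metric, path) in survivors.items():
            for x in (0, 1):
                ns, out = branch(state, x)
                d = (out[0] ^ r[0]) + (out[1] ^ r[1])   # Hamming branch metric
                cand = (metric + d, path + [x])
                if ns not in new or cand[0] < new[ns][0]:
                    new[ns] = cand                      # keep the survivor
        survivors = new
    # The ML path is the survivor in the all-zero state; drop the tail bits.
    return survivors[(0, 0)][1][:-2]

coded = [1, 1, 0, 1, 0, 0, 0, 1, 1, 1]   # encoding of 1,0,1 plus two tail bits
coded[3] ^= 1                             # one channel bit error
assert viterbi_decode(coded) == [1, 0, 1] # d_free = 5, so one error is corrected
```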

An error path at time i is a path segment that leaves the zero state at time i and returns to the zero state exactly once.

We denote the error paths by x1, x2, ..., and define

  w(xk)  = Hamming weight of the coded bits of xk
  wI(xk) = Hamming weight of the information bits of xk

Clearly, if xk is a part of the decoded path, then this will cause wI(xk) information bit errors.

The average number of bit errors that occur due to the transmission of one trellis transition (k info bits) is

  nb = Σ_{k=1}^∞ wI(xk) Pr{xk part of ML path}

and the bit error probability is

  Pb = nb / k

If x is a part of the ML path, then x has a better metric than the all-zero path. However, if x has a better metric than the all-zero path, then x is not always a part of the ML path. Hence,

  Pr{xk part of ML path} ≤ Pr{xk has better metric than the all-zero path} = P2(w(xk))

The pairwise error probability P2(d) is the probability that an error path with weight d has a better metric than the corresponding all-zero path.

The minimum weight of an error path is the free distance of the code, dfree.

If x1, ..., xν are the paths of weight dfree, xν+1, ..., xμ the paths of weight dfree + 1, etc., then

  nb ≤ Σ_{k=1}^∞ wI(xk) P2(w(xk))
     = [wI(x1) + ... + wI(xν)] P2(dfree)
       + [wI(xν+1) + ... + wI(xμ)] P2(dfree + 1) + ...
     = Σ_{d=dfree}^∞ cd P2(d)

The sum of wI(x) over all error paths of weight d is denoted by cd and is called the (information) weight spectrum of the code.

The weight spectrum can be found from the transfer function of the code and is tabulated for most codes of practical interest.

The union bound on the bit error probability is

  Pb = nb/k ≤ (1/k) Σ_{d=dfree}^∞ cd P2(d)

Clearly, we need to truncate the sum after a finite number of terms:

  Pb ≲ (1/k) Σ_{d=dfree}^{Nt} cd P2(d)

If two paths have distance d, then the corresponding signals have squared Euclidean distance

  dE² = 4Ed = 4 Eb Rc d

For soft decoding:

  P2(d) = Q( √(dE²/(2N0)) ) = Q( √(4Ed/(2N0)) ) = Q( √(2 Eb Rc d / N0) )

For hard decoding:

  P2(d) = Σ_{j=(d_odd+1)/2}^{d} C(d, j) p^j (1 - p)^(d-j)

  where d_odd = d rounded down to the closest odd integer, and

  p = Q( √(2 Eb Rc / N0) )

The first term in the union bound is also the largest (for large Eb/N0), and for soft decoding

  Pb ≈ (cdfree / k) P2(dfree) = (cdfree / k) Q( √(2 Eb Rc dfree / N0) )

The asymptotic coding gain (the expected gain in power efficiency for large Eb/N0) is therefore

  Gcoding = 10 log10(Rc dfree) dB

For hard decoding we expect 2-3 dB less gain.
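These expressions are easy to evaluate numerically. A sketch, assuming a rate-1/2 code with dfree = 5 and the information weight spectrum cd = 1, 4, 12, 32, ... (the spectrum commonly tabulated for the (7, 5) code); the function names are illustrative:

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound_soft(EbN0_dB, Rc, dfree, cd, k=1):
    """Truncated union bound for soft decoding:
       Pb <~ (1/k) * sum over d of cd * Q(sqrt(2 (Eb/N0) Rc d))."""
    EbN0 = 10 ** (EbN0_dB / 10)
    return sum(c * Q(math.sqrt(2 * EbN0 * Rc * (dfree + j)))
               for j, c in enumerate(cd)) / k

cd = [(j + 1) * 2 ** j for j in range(15)]   # cd for d = dfree + j
Pb = union_bound_soft(6.0, Rc=0.5, dfree=5, cd=cd)

# Asymptotic coding gain: 10*log10(Rc * dfree) = 10*log10(2.5), about 4 dB
Gc = 10 * math.log10(0.5 * 5)
print(Pb, Gc)
```

At Eb/N0 = 6 dB the bound is already dominated by the dfree term, which is why the asymptotic coding gain is a useful single-number summary.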


Lecture 15

  Noncoherent detection of orthogonal FSK
  Continuous-phase frequency-shift keying (CPFSK)
  Minimum shift keying (MSK)
  Review of course
  Source coding and future courses in communications

Rules for the Exam

  Allowed material on the exam is
    Any calculator (no computers)
    Copies of this year's lecture slides
    L. Råde, B. Westergren, Beta, Mathematics Handbook
    John G. Proakis and Masoud Salehi, Communication Systems Engineering, First or Second Edition
    Your own handwritten notes. Photocopies or printouts of other material or of other students' notes are not allowed.
    A dictionary
  The exam consists of four 10-point problems
  Exam plus project score determines the final grade
Erik Ström, Updated December 2, 2003, Lecture 15


Frequency-Shift Keying

The signals

  ψm(t) = √(2/Eg) gT(t) cos( 2π [fc + (m-1)Δf] t )

  gT(t) = 1 for 0 ≤ t < T, and 0 otherwise;  Eg = T

are (approximately) orthonormal if

  fc >> 1/T  and  Δf = 1/(2T), 1/T, 3/(2T), ...

The signal alternatives for orthogonal FSK are

  sm(t) = √Es ψm(t),  m = 1, 2, ..., M

The received signal can be written as

  r(t) = √(2Es/Eg) gT(t) cos( 2π [fc + (m-1)Δf] t + φ ) + n(t)

where φ is the carrier phase and n(t) is AWGN.

Clearly, we can use a coherent receiver to demodulate FSK. However, we can avoid the need for a carrier phase estimate by increasing the frequency spacing from Δf = 1/(2T) to Δf = 1/T.
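The orthogonality condition can be checked numerically by correlating two tones over one symbol interval. A small sketch with illustrative values (T = 1, fc = 50 >> 1/T); the function name is made up for the example:

```python
import math

def corr(f1, f2, T=1.0, n=20000):
    """Midpoint-rule approximation of the integral over [0, T] of
       cos(2*pi*f1*t) * cos(2*pi*f2*t) dt."""
    dt = T / n
    return sum(math.cos(2 * math.pi * f1 * (i + 0.5) * dt) *
               math.cos(2 * math.pi * f2 * (i + 0.5) * dt)
               for i in range(n)) * dt

T, fc = 1.0, 50.0
print(corr(fc, fc, T))                # ~ T/2 = 0.5: the energy term
print(corr(fc, fc + 1 / (2 * T), T))  # ~ 0: tones spaced 1/(2T) apart
```

Spacings Δf = 1/(2T), 1/T, 3/(2T), ... all give (near-)zero correlation; the larger spacing Δf = 1/T is what makes noncoherent detection possible.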
www.s2.chalmers.se/undergraduate/courses/ess140 www.s2.chalmers.se/undergraduate/courses/ess140

The signal alternatives for orthogonal M-FSK for noncoherent detection are

  sm(t) = √Es √(2/Eg) gT(t) cos( 2π [fc + (m-1)Δf] t + φ )
        = √Es cos(φ) √(2/Eg) gT(t) cos( 2π [fc + (m-1)Δf] t )
          - √Es sin(φ) √(2/Eg) gT(t) sin( 2π [fc + (m-1)Δf] t )

which are (approximately) orthogonal for any φ and m if

  fc >> 1/T  and  Δf = 1/T

[Figure: noncoherent detector for binary orthogonal FSK. r(t) is correlated with √(2/Eg) gT(t) cos(2π f1 t), sin(2π f1 t), cos(2π f2 t), and sin(2π f2 t), where f1 = fc and f2 = fc + 1/T. Each correlator output is sampled at t = nT; the squared in-phase and quadrature samples are summed to form r1² and r2², and the detector chooses the largest.]

The symbol error probability for noncoherent detection of orthogonal M-FSK is

  Pe = Σ_{n=1}^{M-1} (-1)^(n+1) C(M-1, n) (1/(n+1)) exp( -(n/(n+1)) Es/N0 )

     = (1/2) exp( -Es/(2N0) ),  for M = 2

[Figure: error probability versus Es/N0 in dB for some binary modulation methods: 2-FSK noncoherent, 2-FSK coherent, 2-DPSK, differentially encoded 2-PSK, and 2-PSK. For a given error probability target, the curves show the required Es/N0 for each method; the annotated power difference between 2-PSK and coherent 2-FSK is 3 dB.]
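The closed-form sum is straightforward to evaluate; the sketch below also checks that the general expression collapses to (1/2)exp(-Es/(2N0)) for M = 2. The function name is illustrative:

```python
import math

def pe_noncoh_fsk(EsN0, M):
    """Symbol error probability for noncoherent detection of orthogonal
       M-FSK; EsN0 is the linear (not dB) ratio Es/N0."""
    return sum((-1) ** (n + 1) * math.comb(M - 1, n) / (n + 1)
               * math.exp(-n / (n + 1) * EsN0)
               for n in range(1, M))

EsN0 = 10 ** (10.0 / 10)                  # Es/N0 = 10 dB
print(pe_noncoh_fsk(EsN0, 2))             # same as 0.5*exp(-EsN0/2)
print(0.5 * math.exp(-EsN0 / 2))
```

At a fixed Es/N0, the symbol error probability grows with M, since there are more orthogonal alternatives for the noise to confuse.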

[Figure: binary FSK waveform (with phase discontinuities at the symbol boundaries) and binary continuous-phase FSK waveform, plotted for 0 ≤ t ≤ 4T.]

[Figure: MSK phase tree and MSK phase trellis. Each information bit +1 or -1 changes the phase by +π/2 or -π/2 over one symbol, so the trellis states are the accumulated phases 0, ±π/2, π. As an example, with phase state π/2 at t = 3T and info bit -1 at t = 3T, the transmitted signal for 3T ≤ t < 4T is

  √(2E/T) cos( 2π fc t + π/2 - π(t - 3T)/(2T) )  ]
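The phase trellis can be reproduced with a few lines of code. A sketch assuming MSK's modulation index h = 1/2, so each information bit ±1 advances the carrier phase by ±π/2 per symbol (the helper name is made up):

```python
import math

def msk_phase_states(bits):
    """Accumulated MSK phase at the symbol boundaries t = 0, T, 2T, ...
       for information bits in {+1, -1}, wrapped to (-pi, pi]."""
    phase = [0.0]
    for a in bits:
        phase.append(phase[-1] + a * math.pi / 2)   # +-pi/2 per symbol
    return [math.atan2(math.sin(p), math.cos(p)) for p in phase]

# The trellis only ever visits the phase states 0, +-pi/2 and pi
print(msk_phase_states([+1, +1, -1]))
```

Because the phase moves in ±π/2 steps, the trellis has only the four states shown in the figure, while the phase stays continuous inside each symbol interval.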


[Figure: baseband power spectra in dB for MSK and QPSK versus frequency normalized with the bit rate, f/Rb. The MSK main lobe is somewhat wider, but its sidelobes fall off much faster than those of QPSK.]

Things we have talked about

  Communication links: how to transmit digital information from point A to point B with a given performance (bit error probability)
  Linear AWGN channels and intersymbol interference
  Trade-off between spectral and power efficiency using different modulation formats and error control coding

Shannon's Communication Model

  Transmitter: source -> source encoder -> channel encoder -> modulator -> channel (noise added)
  Receiver: demodulator -> channel decoder -> source decoder -> sink
  The encoders and decoders operate on bit streams; the modulator, channel, and demodulator operate on waveforms.

Some things we have not talked about

  Synchronization and channel estimation
  Other channels than linear AWGN channels: impulse noise, nonlinearities, fading channels, optical channels, magnetic recording channels
  Source coding
  Encryption and security: jamming, eavesdropping
  Multiple access and multiplexing methods: FDMA, TDMA, CDMA, power control, channel allocation
  Network issues: routing, packet switching, circuit switching, addressing, protocols

  Hardware constraints: antennas, amplifiers, cables, nonlinear amplifiers, phase noise in oscillators and mixers, word lengths in baseband processors
  Coding
    Automatic retransmission request (ARQ)
    Algebraic decoding for cyclic block codes
    Suboptimum decoding of codes
    Codes for error detection (CRCs)
    Techniques and codes for channels with burst errors: interleaving, concatenated codes
    Turbo codes and MAP decoding of codes
    Line coding (run-length codes), scrambling
    Trellis-coded modulation
  Modulation
    Continuous-phase modulation (CPM)
    Partial response modulation
    Orthogonal frequency division multiplexing (OFDM)
    Spread spectrum and code division multiple access (CDMA)
  Channel estimation and synchronization
  Fading channels
  Bandwidth measures (spectrum masks)
  Link budgets
  Frame and block errors

Communications Courses in the Spring

  Data Compression (source coding)
  Wireless Communication (radio-based systems)
  Satellites in Communications and Navigation (satellite communications)
  Internet Technology, Advanced Internet Technology (network issues)
  Project course in Communications
