Turbo and LDPC Codes:
Implementation, Simulation, and Standardization
June 7, 2006
Matthew Valenti
Rohit Iyer Seshadri
West Virginia University
Morgantown, WV 26506-6109
mvalenti@wvu.edu
Tutorial Overview
- Channel capacity
- Convolutional codes
- Turbo codes
  - Standard binary turbo codes: UMTS and cdma2000
  - Duobinary CRSC turbo codes: DVB-RCS and 802.16
- LDPC codes
  - Tanner graphs and the message passing algorithm
  - Standard binary LDPC codes: DVB-S2
Session times: 1:15 PM (Valenti), 3:15 PM (Iyer Seshadri), 4:30 PM (Valenti).
Implemented standards:
- Binary turbo codes: UMTS/3GPP, cdma2000/3GPP2.
- Duobinary turbo codes: DVB-RCS, WiMax/802.16.
- LDPC codes: DVB-S2.
Channel Capacity

C = \max_{p(x)} I(X;Y)

where the mutual information between channel input X and output Y is

I(X;Y) = \iint p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)} \, dx \, dy
Capacity of AWGN with Unconstrained Input

C = \max_{p(x)} I(X;Y) = \frac{1}{2} \log_2\left(1 + \frac{2E_s}{N_0}\right) = \frac{1}{2} \log_2\left(1 + \frac{2 r E_b}{N_0}\right)

where E_b is the energy per (information) bit.
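As a check on the formula above, here is a small Python sketch (Python is used here purely for illustration and is not part of the tutorial) that evaluates the unconstrained AWGN capacity and the minimum Eb/N0 needed at a given code rate r, obtained by setting C = r with Es = r·Eb:

```python
import math

def awgn_capacity(es_over_no):
    """Unconstrained-input AWGN capacity C = (1/2) log2(1 + 2 Es/No),
    in bits per (real) channel use."""
    return 0.5 * math.log2(1.0 + 2.0 * es_over_no)

def min_ebno_db(r):
    """Smallest Eb/No (in dB) at which rate r lies below capacity.
    Setting C = r with Es = r*Eb gives Eb/No = (2^(2r) - 1) / (2r)."""
    return 10.0 * math.log10((2.0 ** (2.0 * r) - 1.0) / (2.0 * r))
```

As r approaches 0, min_ebno_db(r) approaches 10·log10(ln 2) ≈ −1.59 dB, the Shannon limit that appears on the capacity plots later in the tutorial.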
Capacity of AWGN with BPSK-Constrained Input

C = \max_{p(x)} I(X;Y)

is maximized when the two signals are equally likely, p(x): p = 1/2. Then

C = H(Y) - H(N)

where the output entropy is H(Y) = -\int p(y) \log_2 p(y) \, dy, the noise entropy is H(N) = \frac{1}{2} \log_2 (\pi e N_0), and the output density is the convolution

p_Y(y) = p_X(y) * p_N(y) = \int p_X(\lambda) \, p_N(y - \lambda) \, d\lambda
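The BPSK-constrained capacity has no closed form, but C = H(Y) − H(N) can be evaluated numerically from the mixture density above. A sketch using simple trapezoid integration (the grid parameters y_max and n are arbitrary choices, not from the tutorial):

```python
import math

def bpsk_awgn_capacity(es_over_no, y_max=25.0, n=50001):
    """C = H(Y) - H(N) for equiprobable BPSK (+/-1) on AWGN, in bits.
    With Es normalized to 1, the noise variance is sigma^2 = No/(2 Es)."""
    sigma2 = 1.0 / (2.0 * es_over_no)
    norm = 1.0 / math.sqrt(2.0 * math.pi * sigma2)

    def p_y(y):  # mixture of the two conditional Gaussians
        return 0.5 * norm * (math.exp(-(y - 1.0) ** 2 / (2 * sigma2))
                             + math.exp(-(y + 1.0) ** 2 / (2 * sigma2)))

    dy = 2.0 * y_max / (n - 1)
    h_y = 0.0  # H(Y) = -integral p(y) log2 p(y) dy, trapezoid rule
    for i in range(n):
        p = p_y(-y_max + i * dy)
        if p > 0.0:
            w = 0.5 if i in (0, n - 1) else 1.0
            h_y -= w * p * math.log2(p) * dy
    h_n = 0.5 * math.log2(2.0 * math.pi * math.e * sigma2)  # noise entropy
    return h_y - h_n
```

At high SNR the result saturates at 1 bit per channel use, the hallmark of the BPSK constraint visible in the capacity plot that follows.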
[Figure: spectral efficiency (code rate r) versus Eb/N0 in dB, for arbitrarily low BER (Pb <= 10^-5). The capacity curve bounds the region in which it is theoretically possible to operate. Milestones plotted: Mariner (1969), Pioneer (1968-72), Odenwalder convolutional codes (1976), Voyager (1977), IS-95 (1991), Galileo:BVD (1992), turbo code (1993), Galileo:LGA (1996), Iridium (1998), LDPC code (2001, Chung, Forney, Richardson, Urbanke), and uncoded BPSK.]
Convolutional Codes
- Constraint length K = 3 in this example.
- n output streams.
- m delay elements arranged in a shift register.
- Combinatorial logic (modulo-2 adders, i.e. XOR gates).
- Each of the n outputs depends on some modulo-2 combination of the k current inputs and the m previous inputs in storage.
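The encoder just described can be sketched in a few lines of Python (the generators (7, 5) in octal are a common K = 3 example chosen here for illustration; the tutorial's figure may use different connections):

```python
def conv_encode(bits, gens=(0b111, 0b101), m=2):
    """Rate-1/n feedforward convolutional encoder.  Each generator is a
    bit mask over [current input | m previous inputs]; each output is the
    modulo-2 (XOR) combination selected by that mask."""
    state = 0  # the m previous input bits
    out = []
    for b in list(bits) + [0] * m:  # m zero tail bits return the state to 0
        reg = (b << m) | state
        for g in gens:
            out.append(bin(reg & g).count("1") % 2)  # XOR of masked bits
        state = (b << (m - 1)) | (state >> 1)        # shift the new bit in
    return out
```

Feeding a single 1 produces the impulse response 11 10 11 of the (7,5) code, whose weight 5 is its free distance.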
State Diagrams
[State diagram of the K = 3 encoder: states S0 = 00, S1 = 10, S2 = 01, S3 = 11, with branches labeled input/output (0/00, 1/11, 1/10, 0/01, 1/00, 0/11, 1/01, 0/10).]
- Since k = 1, 2 branches enter and 2 branches leave each state.
- There are 2^m = 4 total states.
Trellis Diagram
[Trellis figure: states S0-S3 over stages i = 0 through i = 6, starting from the initial state S0 at i = 0; branches are labeled with input/output pairs such as 0/00 and 1/01.]
- Every branch corresponds to a particular data bit and 2 bits of the code word.
- Every sequence of input data bits corresponds to a unique path through the trellis.
- The figure shows the input and output bits for time L = 4, followed by m = 2 tail bits that return the encoder to the final all-zeros state at i = 6.
Recursive Systematic Convolutional (RSC) Codes
[Encoder figure: the systematic output is the input x_i itself; the parity output is produced by a shift register with feedback, whose input is denoted r_i.]
[State diagram of the RSC encoder: states S0 = 00, S1 = 10, S2 = 01, S3 = 11, with branches labeled input/output (0/00, 1/11, 0/10, 1/01, ...).]
[Trellis of the RSC code: states S0-S3 over stages i = 0 through i = 6.]
- The m = 2 tail bits are no longer all-zeros.
- They must be calculated by the encoder, since they depend on the encoder state.
Convolutional Codewords
[Trellis figure showing a path through states S0-S3.]
If there are L input data bits plus m tail bits, the overall transmitted codeword is:

x = [x(1), x(2), ..., x(L), ..., x(L+m)]
MAP Decoding
The maximum a posteriori (MAP) decoder computes the log-likelihood ratio (LLR) of each data bit u(t):

\Lambda(t) = \log \frac{P(u(t) = 1 \mid \mathbf{y})}{P(u(t) = 0 \mid \mathbf{y})}

For each t, define the a posteriori transition probabilities p_{i,j}(t) = P(S(t-1) = S_i, S(t) = S_j \mid \mathbf{y}), shown on the trellis stage as p_{0,0} through p_{3,3}. Then

P(u(t) = 1 \mid \mathbf{y}) = \sum_{S_i \to S_j : u=1} p_{i,j}(t)

and likewise

P(u(t) = 0 \mid \mathbf{y}) = \sum_{S_i \to S_j : u=0} p_{i,j}(t)

with \sum_{S_i \to S_j} p_{i,j}(t) = 1.
Computing \alpha and \beta
- Forward recursion: \alpha_j(t) = \sum_i \alpha_i(t-1) \, \gamma_{i,j}(t); for example, \alpha_3(t) is computed from \alpha_3(t-1) and \alpha_1(t-1) through the branch metric \gamma_{3,3}(t).
- Backward recursion: \beta_i(t) = \sum_j \gamma_{i,j}(t+1) \, \beta_j(t+1); for example, \beta_3(t) is computed from \beta_3(t+1) and \beta_2(t+1) through \gamma_{3,3}(t+1).
Computing \gamma
- Let x_{i,j} = (x_1, x_2, ..., x_n) be the word generated by the encoder when transitioning from S_i to S_j.
- \gamma_{i,j}(t) = P(x_{i,j} \mid y(t)) \propto P(y(t) \mid x_{i,j}) \, P(x_{i,j})
- P(y(t)) is not strictly needed because it takes the same value in the numerator and denominator of the LLR \Lambda(t). Instead of computing it directly, it can be found indirectly as a normalization factor (chosen for numerical stability).
- P(x_{i,j}) is initially found assuming that code bits are equally likely. In a turbo code, this is provided to the decoder as a priori information.
For BPSK transmission over an AWGN channel, each p(y_i \mid x_i) is Gaussian with mean m_x = \sqrt{E_s} \, (2 x_i - 1) and variance \sigma^2 = N_0 / 2, and the branch likelihood factors as

P(\mathbf{y} \mid \mathbf{x}) = \prod_{i=1}^{n} p(y_i \mid x_i)
Putting it together, the LLR of data bit u(t) is

\Lambda(t) = \log \frac{P(u(t) = 1 \mid \mathbf{y})}{P(u(t) = 0 \mid \mathbf{y})}
= \log \frac{\sum_{S_i \to S_j : u=1} \alpha_i(t-1) \, \gamma_{i,j}(t) \, \beta_j(t)}{\sum_{S_i \to S_j : u=0} \alpha_i(t-1) \, \gamma_{i,j}(t) \, \beta_j(t)}
Parallel Concatenated (Turbo) Encoder
[Block diagram: the input feeds RSC encoder #1 directly, and RSC encoder #2 through a nonuniform interleaver; the systematic output x_i and the two parity outputs are multiplexed.]
- It is very unlikely that both encoders produce low weight code words.
- The MUX (puncturing) increases the code rate from 1/3 to 1/2.
Coding dilemma:
All codes are good, except those that we can think of.
Comparison of Minimum-distance Asymptotes
[BER versus Eb/N0, comparing a convolutional code and a turbo code against their free-distance asymptotes.]
- Convolutional code: d_min = 18, c_{d_min} = 187, so the asymptote is

  P_b \approx 187 \, Q\left(\sqrt{18 \, E_b / N_0}\right)

- Turbo code: d_min = 6, \tilde{a}_{d_min} = 3, \tilde{w}_{d_min} = 2, k = 65536, so c_{d_min} = \tilde{a}_{d_min} \tilde{w}_{d_min} / k \approx 9.2 \times 10^{-5} and

  P_b \approx 9.2 \times 10^{-5} \, Q\left(\sqrt{6 \, E_b / N_0}\right)
The Turbo-Principle
Performance as a Function of Number of Iterations
[BER versus Eb/N0 after 1, 2, 3, 6, 10, and 18 decoder iterations.]
- K = 5 constraint length
- r = 1/2 code rate
- L = 65,536 interleaver size (number of data bits)
- Log-MAP algorithm
Other factors: interleaver design, puncture pattern, trellis termination.
[BER versus Eb/N0 for interleaver sizes K = 1024, 4096, 16384, and 65536; constraint length 5, rate r = 1/2, 18 decoder iterations, AWGN channel.]
UMTS Turbo Encoder
[Block diagram: the input X_k feeds the upper RSC encoder directly and, after the interleaver, the interleaved input X'_k feeds the lower RSC encoder. The output multiplexes the systematic bits X_k, the uninterleaved parity Z_k, and the interleaved parity Z'_k.]
UMTS Interleaver: Inserting Data into Matrix
Data are written into an R x C matrix row by row (here R = 5, C = 8 for 40 bits):

X1  X2  X3  X4  X5  X6  X7  X8
X9  X10 X11 X12 X13 X14 X15 X16
X17 X18 X19 X20 X21 X22 X23 X24
X25 X26 X27 X28 X29 X30 X31 X32
X33 X34 X35 X36 X37 X38 X39 X40
UMTS Interleaver: Intra-Row Permutations
Each row is permuted:

X2  X6  X5  X7  X3  X4  X1  X8
X10 X12 X11 X15 X13 X14 X9  X16
X18 X22 X21 X23 X19 X20 X17 X24
X26 X28 X27 X31 X29 X30 X25 X32
X40 X36 X35 X39 X37 X38 X33 X34
UMTS Interleaver: Inter-Row Permutations
The rows are then permuted (here, their order is reversed):

X40 X36 X35 X39 X37 X38 X33 X34
X26 X28 X27 X31 X29 X30 X25 X32
X18 X22 X21 X23 X19 X20 X17 X24
X10 X12 X11 X15 X13 X14 X9  X16
X2  X6  X5  X7  X3  X4  X1  X8
UMTS Interleaver: Reading Data From Matrix

X40 X36 X35 X39 X37 X38 X33 X34
X26 X28 X27 X31 X29 X30 X25 X32
X18 X22 X21 X23 X19 X20 X17 X24
X10 X12 X11 X15 X13 X14 X9  X16
X2  X6  X5  X7  X3  X4  X1  X8

Data are read out of the matrix column by column. Thus:
X'1 = X40, X'2 = X26, X'3 = X18, ...,
X'38 = X24, X'39 = X16, X'40 = X8
Turbo and LDPC Codes
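The three steps above (write rows, permute within rows, permute rows, read columns) can be reproduced with a generic block interleaver. The permutation tables below are transcribed from the 40-bit example; the actual UMTS standard derives them from spec-defined parameters that are omitted here:

```python
def block_interleave(x, intra_row_perms, inter_row_perm):
    """Write x into a matrix row by row, permute within each row,
    permute the row order, then read out column by column."""
    rows, cols = len(intra_row_perms), len(intra_row_perms[0])
    assert len(x) == rows * cols
    mat = [[x[r * cols + p] for p in intra_row_perms[r]] for r in range(rows)]
    mat = [mat[r] for r in inter_row_perm]          # inter-row permutation
    return [mat[r][c] for c in range(cols) for r in range(rows)]

# Intra-row patterns read off the 5 x 8 example (0-based column indices).
INTRA = [[1, 5, 4, 6, 2, 3, 0, 7],
         [1, 3, 2, 6, 4, 5, 0, 7],
         [1, 5, 4, 6, 2, 3, 0, 7],
         [1, 3, 2, 6, 4, 5, 0, 7],
         [7, 3, 2, 6, 4, 5, 0, 1]]
INTER = [4, 3, 2, 1, 0]  # the inter-row permutation reverses the rows
```

Interleaving the sequence X1..X40 then reproduces the slide's result: X'1 = X40, X'2 = X26, X'3 = X18, and X'40 = X8.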
Trellis Termination
[After the L data bits, each constituent encoder is terminated with tail bits X_{L+1}, X_{L+2}, X_{L+3} and the corresponding tail parity Z_{L+1}, Z_{L+2}, Z_{L+3}.]

Counting the 12 tail bits from terminating both encoders, the code rate is

r = \frac{L}{3L + 12} \approx \frac{1}{3}
Channel Model and LLRs
[Block diagram: data bits in {0,1} enter a BPSK modulator, which outputs symbols in {-1, +1}; the channel scales by a gain a and adds Gaussian noise, and the receiver forms LLRs proportional to 2a y.]
- Channel gain a: a Rayleigh random variable if Rayleigh fading; a = 1 if AWGN channel.
- Noise variance:

\sigma^2 = \left( 2 r \frac{E_b}{N_0} \right)^{-1} = \frac{3}{2 E_b / N_0} \quad \text{for } r = 1/3
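A sketch of this channel model in Python (the sign convention here makes a positive LLR favor the bit 1, matching the {0,1} to {-1,+1} mapping above; the seed parameter is an illustrative addition for reproducibility):

```python
import math, random

def channel_llrs(code_bits, ebno_db, rate, a=1.0, seed=1):
    """BPSK over AWGN with gain a: map {0,1} -> {-1,+1}, add Gaussian
    noise of variance sigma^2 = 1/(2 r Eb/No), return LLRs 2 a y / sigma^2."""
    ebno = 10.0 ** (ebno_db / 10.0)
    sigma2 = 1.0 / (2.0 * rate * ebno)
    rng = random.Random(seed)
    llrs = []
    for b in code_bits:
        y = a * (2 * b - 1) + rng.gauss(0.0, math.sqrt(sigma2))
        llrs.append(2.0 * a * y / sigma2)  # channel LLR of the code bit
    return llrs
```

At high Eb/N0 the hard decisions taken from the LLR signs recover the transmitted bits.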
SISO MAP Decoder
[Block: a soft-in soft-out (SISO) MAP decoder with inputs \lambda_{u,i}, \lambda_{c,i} and outputs \lambda_{u,o}, \lambda_{c,o}.]
Inputs:
- \lambda_{u,i}: LLRs of the data bits. These come from the other decoder.
- \lambda_{c,i}: LLRs of the code bits. These come from the channel observations r.
[Turbo decoder block diagram: the received systematic samples r(X_k) and upper parity r(Z_k) are demultiplexed into the upper MAP decoder; the lower parity r(Z'_k) is demultiplexed (with zeros inserted in the systematic positions) into the lower MAP decoder. Extrinsic information is interleaved on the way from the upper to the lower decoder and deinterleaved on the way back, producing the data estimates X̂_k.]
Performance as a Function of Number of Iterations
[BER versus Eb/N0 for the L = 640 bit turbo code in AWGN, after 1, 2, 3, and 10 decoder iterations.]
Log-MAP Algorithm:
Overview
Processing:
Sweep through the trellis in forward direction using modified
Viterbi algorithm.
Sweep through the trellis in backward direction using modified
Viterbi algorithm.
Determine LLR for each trellis section.
Determine output extrinsic info for each trellis section.
The max* operator comes in three flavors:
- log-MAP (exact): max*(x, y) = max(x, y) + log(1 + e^{-|y-x|})
- constant-log-MAP: max*(x, y) ≈ max(x, y) + 0.5 if |y - x| ≤ 1.5, and max(x, y) otherwise
- max-log-MAP: max*(x, y) ≈ max(x, y)
[Plot of the correction function f_c(|y - x|): the smooth log-MAP curve f_c(z) = log(1 + e^{-z}) and its staircase constant-log-MAP approximation, versus |y - x| from 0 to 10.]
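The three variants can be compared directly in code (a minimal sketch; math.log1p is used for numerical robustness):

```python
import math

def max_star_log_map(x, y):
    """Exact Jacobian logarithm: log(e^x + e^y)."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def max_star_constant_log_map(x, y):
    """Constant-log-MAP: add 0.5 whenever the arguments are within 1.5."""
    return max(x, y) + (0.5 if abs(x - y) <= 1.5 else 0.0)

def max_star_max_log_map(x, y):
    """Max-log-MAP: drop the correction term entirely."""
    return max(x, y)
```

At x = y the exact correction is log 2 ≈ 0.693; the constant approximation gives 0.5 and max-log gives 0, which is the source of the max-log-MAP SNR loss seen in the performance comparisons later.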
[One stage of the eight-state trellis: states S0-S7, with example branch labels 00 and 10.]
Forward Recursion
[Trellis stage illustrating the forward (\alpha) recursion along branches labeled 00 and 10.]
Backward Recursion
[Trellis stage illustrating the backward (\beta) recursion along branches labeled 00 and 10.]
Log-likelihood Ratio
In the log domain, the LLR of data bit X_k is

\Lambda(X_k) = \ln \frac{P(X_k = 1 \mid \mathbf{y})}{P(X_k = 0 \mid \mathbf{y})}
= \max^*_{S_i \to S_j : X_k = 1} \left( \alpha_i + \gamma_{ij} + \beta_j \right) - \max^*_{S_i \to S_j : X_k = 0} \left( \alpha_i + \gamma_{ij} + \beta_j \right)
Memory Issues
A naïve solution:
- Calculate the \alpha's for the entire trellis (forward sweep), and store.
- Calculate the \beta's for the entire trellis (backward sweep), and store.
- At the kth stage of the trellis, compute \Lambda by combining the \gamma's with the stored \alpha's and \beta's.
A better approach:
- Calculate the \beta's for the entire trellis and store.
- Calculate the \alpha's for the kth stage of the trellis, and immediately compute \Lambda by combining the \gamma's with these \alpha's and the stored \beta's.
- Use the \alpha's for the kth stage to compute the \alpha's for stage k+1.
Normalization:
- In the log-domain, the \alpha's can be normalized by subtracting a common term from all \alpha's at the same stage.
- Can normalize relative to \alpha_0, which eliminates the need to store \alpha_0. Same for the \beta's.
[Figure: the recursion is initialized using values computed over a preceding region; \alpha, \beta, and \Lambda are then calculated over the current region.]
Extrinsic Information
Performance Comparison
[BER of the 640-bit turbo code with log-MAP, constant-log-MAP, and max-log-MAP decoding, in AWGN and fully-interleaved Rayleigh fading; 10 decoder iterations.]
cdma2000 Turbo Encoder
[Block diagram: the data input X_i produces the systematic output X_i, the first parity output Z_{1,i}, and the second parity output Z_{2,i}.]
[BER of the cdma2000 turbo code in AWGN with interleaver length 1530, at code rates 1/2, 1/3, 1/4, and 1/5.]
[Trellis figure: states S0-S3 across several stages, with branches labeled 0/00 and 1/01.]
Duobinary codes
[Encoder figure: a couple of input bits drives a recursive encoder with register stages S1, S2, S3.]
Hardware benefits:
- Half as many states in the trellis.
- Smaller loss due to max-log-MAP decoding.
DVB-RCS
DVB-RCS: Influence of Decoding Algorithm
[FER versus Eb/N0 for the length N = 212 code with 8 iterations in AWGN, comparing decoding algorithms.]
DVB-RCS: Influence of Block Length
[FER versus Eb/N0 for block lengths N = 48, 64, 212, 432, and 752; max-log-MAP, 8 iterations, AWGN.]
DVB-RCS: Influence of Code Rate
[FER versus Eb/N0 for N = 212 at rates r = 1/3, 2/5, 1/2, 2/3, 3/4, 4/5, and 6/7; max-log-MAP, 8 iterations, AWGN.]
802.16 (WiMax)
[Duobinary turbo encoder diagram with register stages S1, S2, S3.]
Min-sum algorithm: a reduced-complexity approximation to the sum-product algorithm.
Tanner Graphs
- Bipartite means that nodes of the same type cannot be connected (e.g. a c-node cannot be connected to another c-node).
- The ith check node is connected to the jth variable node iff the (i,j)th element of the parity check matrix is one, i.e. if h_{ij} = 1.
- All of the v-nodes connected to a particular c-node must sum (modulo-2) to zero.
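These connectivity rules are easy to express in code. The H below is one common form of the (7,4) Hamming parity-check matrix, assumed here for illustration (the tutorial's figures may use a different arrangement):

```python
H = [[1, 1, 1, 0, 1, 0, 0],   # check node f0
     [1, 1, 0, 1, 0, 1, 0],   # check node f1
     [1, 0, 1, 1, 0, 0, 1]]   # check node f2

def tanner_edges(H):
    """Check node i is connected to variable node j iff H[i][j] == 1."""
    return [(i, j) for i, row in enumerate(H) for j, h in enumerate(row) if h]

def checks_satisfied(H, c):
    """The v-nodes attached to each c-node must sum to zero modulo 2."""
    return all(sum(cj for cj, hj in zip(c, row) if hj) % 2 == 0 for row in H)
```

checks_satisfied returns True exactly for the code words of H, i.e. the words whose syndrome is zero.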
[Tanner graph of a (7,4) code: check nodes (c-nodes) f0, f1, f2 on top, connected to variable nodes (v-nodes) v0-v6 below.]
Bit-Flipping Algorithm: (7,4) Hamming Code
[First figure: the received word is y = (1, 1, 1, 1, 0, 0, 1), while the transmitted code word had c1 = 0, c2 = 1, c3 = 1, c4 = 0, c5 = 0, c6 = 1; checks f0 = 1 and f1 = 1 fail while f2 = 0.]
[Second figure: the bit participating in the largest number of failed checks, y1, is identified and flipped.]
[Third figure: with y = (1, 0, 1, 1, 0, 0, 1), all parity checks are satisfied: f0 = f1 = f2 = 0.]
Step 3: Repeat steps 1 and 2 until all the parity checks are zero or a maximum number of iterations is reached.
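The three steps can be sketched as follows (this simple variant flips every bit tied for the most failed checks, so it can oscillate on some error patterns; the H matrix is an illustrative (7,4) Hamming form, not necessarily the one in the figures):

```python
H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]

def bit_flip_decode(H, y, max_iters=10):
    """Hard-decision bit flipping: recompute all parity checks each
    iteration and flip the bit(s) involved in the most failed checks."""
    y, n = list(y), len(y)
    for _ in range(max_iters):
        fails = [sum(y[j] for j in range(n) if row[j]) % 2 for row in H]
        if not any(fails):
            break  # all parity checks satisfied
        # count how many failed checks each bit participates in
        votes = [sum(f for f, row in zip(fails, H) if row[j]) for j in range(n)]
        worst = max(votes)
        for j in range(n):
            if votes[j] == worst:
                y[j] ^= 1  # flip
    return y
```

Single errors in the high-degree bit positions are corrected within a few iterations.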
[First figure: the received word has y4 = 1 and y14 = 1 (all other bits zero; the transmitted code word was all-zeros, c1 = ... = c14 = 0); checks f4 = 1 and f7 = 1 fail.]
[Second figure: y4 has been flipped to 0; only y14 = 1 remains and only f7 = 1 fails.]
[Third figure: y14 has been flipped to 0; all bits are zero and all parity checks are satisfied.]
Sum-Product Algorithm: Notation
- C_i = {j : h_{ji} = 1} — the set of row locations of the 1s in the ith column.
- C_i\j = C_i \ {j} — the same set, excluding location j.
- R_j = {i : h_{ji} = 1} — the set of column locations of the 1s in the jth row.
- R_j\i = R_j \ {i} — the same set, excluding location i.
Sum-Product Algorithm
Step 1: Initialize

q_{ij}(0) = 1 - p_i = \frac{1}{1 + \exp(-2 y_i / \sigma^2)}
q_{ij}(1) = p_i = \frac{1}{1 + \exp(2 y_i / \sigma^2)}

where q_{ij}(b) is the probability that c_i = b, given the channel sample.
[Tanner graph: the messages q_{ij} flow from each variable node v_i (with channel sample y_i) to the check nodes f_j it is connected to.]
Sum-Product Algorithm
Step 2: At each check node, update the r messages:

r_{ji}(0) = \frac{1}{2} + \frac{1}{2} \prod_{i' \in R_j \setminus i} \left( 1 - 2 \, q_{i'j}(1) \right), \qquad r_{ji}(1) = 1 - r_{ji}(0)

[Tanner graph: the messages r_{ji} flow from the check nodes f_j back to the variable nodes v_i.]
Sum-Product Algorithm
Step 3: At each variable node, update the q messages:

q_{ij}(0) = k_{ij} (1 - p_i) \prod_{j' \in C_i \setminus j} r_{j'i}(0), \qquad q_{ij}(1) = k_{ij} \, p_i \prod_{j' \in C_i \setminus j} r_{j'i}(1)

where the constant k_{ij} is chosen so that q_{ij}(0) + q_{ij}(1) = 1. Using the full set j \in C_i gives the output Q_i(b), and the hard decision is

\hat{c}_i = 1 if Q_i(1) > 0.5, and 0 otherwise

[Tanner graph: the updated q messages.]
Halting Criteria
Halt as soon as the hard decisions satisfy every parity check, \hat{\mathbf{c}} H^T = 0, or a maximum number of iterations is reached.
The LLR of the ith code bit (the ultimate goal of the algorithm) is

Q_i = \log \frac{P(c_i = 0 \mid \mathbf{y}, S_i)}{P(c_i = 1 \mid \mathbf{y}, S_i)}
Sum-Product Decoder (in Log-Domain)
Initialize:

q_{ij} = \lambda_i = 2 y_i / \sigma^2 \quad \text{(the channel LLR value)}

At each c-node, update the r messages:

r_{ji} = \Big( \prod_{i' \in R_j \setminus i} \alpha_{i'j} \Big) \, \phi\Big( \sum_{i' \in R_j \setminus i} \phi(\beta_{i'j}) \Big)

At each v-node, update the q messages and Q LLRs:

Q_i = \lambda_i + \sum_{j \in C_i} r_{ji}
q_{ij} = Q_i - r_{ji}

Make the hard decision:

\hat{c}_i = 1 if Q_i < 0, and 0 otherwise

(with \alpha_{ij}, \beta_{ij}, and \phi as defined on the next slide)
Sum-Product Algorithm: Notation
- \alpha_{ij} = sign(q_{ij})
- \beta_{ij} = |q_{ij}|
- \phi(x) = -\log \tanh(x/2) = \log\frac{e^x + 1}{e^x - 1} = \phi^{-1}(x)
[Plot of \phi(x), which decreases steeply from large values near x = 0.]
Min-Sum Algorithm
Note that

\phi\Big( \sum_{i'} \phi(\beta_{i'}) \Big) \approx \phi\big( \phi( \min_{i'} \beta_{i'} ) \big) = \min_{i'} \beta_{i'}

- This greatly reduces complexity, since now we don't have to worry about computing the nonlinear \phi function.
- Note that since \alpha_{ij} is just the sign of q_{ij}, the sign product can be implemented by using XOR operations.
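The exact check-node update and its min-sum approximation, side by side (a sketch; the incoming list is assumed to already exclude the destination edge, i.e. it is the set R_j\i):

```python
import math

def phi(x):
    """phi(x) = -log tanh(x/2); note that phi is its own inverse."""
    return -math.log(math.tanh(x / 2.0))

def check_update_spa(q_in):
    """Exact log-domain sum-product check message: product of the signs
    times phi(sum of phi(magnitudes))."""
    sign, total = 1.0, 0.0
    for q in q_in:
        sign *= 1.0 if q >= 0 else -1.0
        total += phi(abs(q))
    return sign * phi(total)

def check_update_min_sum(q_in):
    """Min-sum: the magnitude collapses to the minimum |q|."""
    sign = 1.0
    for q in q_in:
        sign *= 1.0 if q >= 0 else -1.0
    return sign * min(abs(q) for q in q_in)
```

The min-sum magnitude always overestimates the exact one, which is why scaling the extrinsic information (as on the following slides) recovers most of the loss.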
BER of Different Decoding Algorithms
[BER versus Eb/N0 for Code #1 (MacKay's construction 2A) in AWGN with BPSK modulation, comparing min-sum and sum-product decoding.]
Extrinsic-information Scaling
BER of Different Decoding Algorithms
[BER versus Eb/N0 for Code #1 (MacKay's construction 2A) in AWGN with BPSK modulation: min-sum, sum-product, and min-sum with extrinsic information scaling (scale factor = 0.9).]
The degree distributions are described by the polynomials

\lambda(x) = \sum_{i=2}^{d_v} \lambda_i x^{i-1}, \qquad \rho(x) = \sum_{i=2}^{d_c} \rho_i x^{i-1}
Around 1996, MacKay and Neal described methods for constructing sparse H matrices.
Construction 2A: M/2 columns have d_v = 2, with no overlap between any pair of columns. The remaining columns have d_v = 3. As with construction 1A, the overlap between any two columns is no greater than 1.
Luby et al. (1998) developed LDPC codes based on irregular Tanner graphs.
Message and check nodes have conflicting requirements:
- Message nodes benefit from having a large degree.
- LDPC codes perform better with check nodes having low degrees.
Density Evolution: Richardson and Urbanke, 2001
Given an irregular Tanner graph with maximum degrees d_v and d_c, what is the best degree distribution?
- How many of the v-nodes should be degree d_v, d_v-1, d_v-2, ... nodes?
- How many of the c-nodes should be degree d_c, d_c-1, ... nodes?
For any LDPC code, there is a worst-case channel parameter, called the threshold, such that the message distribution during belief propagation evolves in such a way that the probability of error converges to zero as the number of iterations tends to infinity.
Density Evolution: Richardson and Urbanke, 2001
Richardson and Urbanke identify a code whose degree distribution pair is 0.06 dB away from capacity:
- "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inf. Theory, Feb. 2001.
Chung et al. use density evolution to design a code which is 0.0045 dB away from capacity:
- "On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit," IEEE Commun. Letters, Feb. 2001.
LDPC codes, especially irregular codes, exhibit error floors at high SNR.
- The error floor is influenced by d_min, but directly designing codes for large d_min is not computationally feasible.
- Trapping sets and stopping sets have a more direct influence on the error floor; error floors can be mitigated by increasing the size of the minimum stopping sets.
  - Tian et al., "Construction of irregular LDPC codes with low error floors," in Proc. ICC, 2003.
- LDPC codes based on projective geometry are reported to have very low error floors.
  - Kou et al., "Low-density parity-check codes based on finite geometries: a rediscovery and new results," IEEE Trans. Inf. Theory, Nov. 2001.
A common method for finding G from H is to first make the code systematic by adding rows and exchanging columns to get the H matrix in the form H = [P^T I].
Then G = [I P].
However, the result of the row reduction is a non-sparse P matrix, so the multiplication c = [u uP] is very complex.
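For a small example, once H is already in the systematic form H = [P^T I], extracting G = [I P] and checking G H^T = 0 over GF(2) takes only a few lines. This sketch uses an illustrative (7,4) Hamming H whose right 3 x 3 block is already the identity; a general H would first need GF(2) row reduction and column swaps:

```python
def systematic_generator(H):
    """Given H = [P^T | I] (m x n over GF(2)), return G = [I | P]."""
    m, n = len(H), len(H[0])
    k = n - m
    # sanity check: the right m x m block must be the identity
    assert all(H[i][k + j] == (1 if i == j else 0)
               for i in range(m) for j in range(m))
    return [[1 if c == r else 0 for c in range(k)] + [H[j][r] for j in range(m)]
            for r in range(k)]

def parity_ok(G, H):
    """Every row of G must satisfy every check: G H^T = 0 (mod 2)."""
    return all(sum(g * h for g, h in zip(grow, hrow)) % 2 == 0
               for grow in G for hrow in H)

H = [[1, 1, 1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [1, 0, 1, 1, 0, 0, 1]]
```

Even here the P block is dense relative to a sparse LDPC H, which is exactly the complexity problem the slide describes.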
Richardson and Urbanke show that even for large n, the encoding complexity can be an (almost) linear function of n.
- "Efficient encoding of low-density parity-check codes," IEEE Trans. Inf. Theory, Feb. 2001.
In this construction, part of the parity-check matrix is the "dual-diagonal" (staircase) matrix

H_2 =
[ 1
  1 1
    1 1
      ...
        1 1 ]

whose transposed inverse H_2^{-T} is triangular with all ones. Multiplication by H_2^{-T} can therefore be implemented with a simple accumulator (the delay element D): the input u is multiplied by H_1^T, and the accumulator produces u H_1^T H_2^{-T}.
Performance Comparison
BPSK modulation
AWGN and fully-interleaved Rayleigh fading
Enough trials run to log 40 frame errors
Sometimes fewer trials were run for the last point (highest SNR).
[Table of the simulated codes' dimensions (n, k); the values shown include n = 10000 and 9987 with k near 5000 (e.g. 5458, 5000, 4542, 4999).]
Code number:
1 = MacKay construction 2A
2 = Richardson & Urbanke
3 = Jones, Wesel, & Tian
4 = Ryan's Extended-IRA
BER in AWGN
[BER versus Eb/N0 for Code #1 (MacKay 2A), Code #2 (R&U), Code #3 (JWT), Code #4 (IRA), and a turbo code. The BPSK/AWGN capacity limit is -0.50 dB for r = 1/3.]
The digital video broadcasting (DVB) project was founded in 1993 by ETSI to standardize digital television services.
- Normal frames support code rates 9/10, 8/9, 5/6, 4/5, 3/4, 2/3, 3/5, 1/2, 2/5, 1/3, and 1/4.
- Short frames do not support rate 9/10.
Valenti et al., "Turbo and LDPC codes for digital video broadcasting," Chapter 12 of Turbo Code Applications: A Journey from a Paper to Realization, Springer, 2005.
[FER versus Eb/N0 for the DVB-S2 LDPC codes at rates r = 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 5/6, 8/9, and 9/10.]
[FER versus Eb/N0 at rates r = 1/4 through 8/9 (short frames, which exclude rate 9/10).]
For the AWGN channel, the log-likelihood of each symbol is (up to a common additive constant)

\log f(\mathbf{y} \mid \mathbf{x}_k) = -\frac{\| \mathbf{y} - \mathbf{x}_k \|^2}{2 \sigma^2} = -\frac{E_s}{N_0} \| \mathbf{y} - \mathbf{x}_k \|^2
Log-Likelihood of Symbol x_k
The symbol log-likelihood ratio is

\lambda_k = \log \frac{f(\mathbf{y} \mid \mathbf{x}_k)}{\sum_{\mathbf{x}_m \in S} f(\mathbf{y} \mid \mathbf{x}_m)}
= \log f(\mathbf{y} \mid \mathbf{x}_k) - \log \sum_{\mathbf{x}_m \in S} \exp\big( \log f(\mathbf{y} \mid \mathbf{x}_m) \big)
= \log f(\mathbf{y} \mid \mathbf{x}_k) - \max^*_{\mathbf{x}_m \in S} \log f(\mathbf{y} \mid \mathbf{x}_m)
[Plot of the correction function f_c(z) = \log(1 + e^{-z}) used in \max^*(x, y) = \max(x, y) + f_c(|y - x|), versus |y - x| from 0 to 10.]
The modulation-constrained capacity can then be written as an expectation over the symbol x_k and noise n_k:

C = \log_2(M) + \frac{ E_{\mathbf{x}_k, \mathbf{n}_k} [ \lambda_k ] }{ \log(2) } \quad \text{bits}
[Monte Carlo estimation of the modulation-constrained capacity: the modulator picks x_k at random from S; a noise generator produces n_k; the receiver computes log f(y | x_k) for every x_k in S and forms \lambda_k = \log f(\mathbf{y} \mid \mathbf{x}_k) - \max^*_{\mathbf{x}_m \in S} \log f(\mathbf{y} \mid \mathbf{x}_m); averaging E[\lambda_k]/\log(2) gives the capacity.]
[Capacity curves versus Eb/N0 for BPSK, QPSK, 8PSK, 16PSK, 16QAM, 64QAM, and 256QAM.]
[Two plots of required Eb/N0 (in dB) versus code rate r, for modulation orders M = 2, 4, 16, and 64.]
BICM
[Block diagram of bit-interleaved coded modulation: a binary encoder produces code bits c'_n, which pass through a bitwise interleaver to become c_n and are then grouped by a binary-to-M-ary mapping into symbols x_k.]
The receiver computes a bitwise LLR for each code bit c_n:

\lambda_n = \log \frac{p(c_n = 1 \mid \mathbf{y})}{p(c_n = 0 \mid \mathbf{y})}
= \log \frac{\sum_{\mathbf{x}_k \in S_n^{(1)}} p(\mathbf{x}_k \mid \mathbf{y})}{\sum_{\mathbf{x}_k \in S_n^{(0)}} p(\mathbf{x}_k \mid \mathbf{y})}
= \log \frac{\sum_{\mathbf{x}_k \in S_n^{(1)}} p(\mathbf{y} \mid \mathbf{x}_k) \, p(\mathbf{x}_k)}{\sum_{\mathbf{x}_k \in S_n^{(0)}} p(\mathbf{y} \mid \mathbf{x}_k) \, p(\mathbf{x}_k)}
= \max^*_{\mathbf{x}_k \in S_n^{(1)}} \log f(\mathbf{y} \mid \mathbf{x}_k) - \max^*_{\mathbf{x}_k \in S_n^{(0)}} \log f(\mathbf{y} \mid \mathbf{x}_k)

where S_n^{(b)} is the set of symbols whose nth label bit equals b.
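The bitwise LLR above can be computed exactly with a log-sum-exp in place of max* (a sketch; the Gray-labeled 4-PAM constellation is an assumed example mapping, not one from the tutorial):

```python
import math

def logsumexp(vals):
    """Numerically stable log(sum(exp(v)))."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

def bicm_bit_llrs(y, constellation, sigma2):
    """lambda_n = log sum_{x in S_n^(1)} f(y|x) - log sum_{x in S_n^(0)} f(y|x)
    for a real constellation given as (label bits, amplitude) pairs."""
    nbits = len(constellation[0][0])
    loglik = [(bits, -(y - x) ** 2 / (2.0 * sigma2)) for bits, x in constellation]
    return [logsumexp([ll for bits, ll in loglik if bits[n] == 1])
            - logsumexp([ll for bits, ll in loglik if bits[n] == 0])
            for n in range(nbits)]

# Gray-labeled 4-PAM: (b0, b1) -> amplitude (an illustrative mapping)
PAM4 = [((0, 0), -3.0), ((0, 1), -1.0), ((1, 1), 1.0), ((1, 0), 3.0)]
```

A received sample near +3 should yield a strongly positive LLR for b0 = 1 and a negative LLR for b1 = 0, matching the label of the nearest symbol.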
BICM Capacity
The capacity of the kth bit position is

C_k = \log(2) - E_{c_n, \mathbf{n}} \left[ \log \frac{p(c_k = 0 \mid \mathbf{y}) + p(c_k = 1 \mid \mathbf{y})}{p(c_k \mid \mathbf{y})} \right] \text{ nats}
= \log(2) - E_{c_n, \mathbf{n}} \left[ \log \left( \exp \log \frac{p(c_k = 0 \mid \mathbf{y})}{p(c_k \mid \mathbf{y})} + \exp \log \frac{p(c_k = 1 \mid \mathbf{y})}{p(c_k \mid \mathbf{y})} \right) \right] \text{ nats}
= \log(2) - E_{c_n, \mathbf{n}} \left[ \max{}^* \left( \log \frac{p(c_k = 0 \mid \mathbf{y})}{p(c_k \mid \mathbf{y})}, \log \frac{p(c_k = 1 \mid \mathbf{y})}{p(c_k \mid \mathbf{y})} \right) \right] \text{ nats}
= \log(2) - E_{c_n, \mathbf{n}} \left[ \max{}^* \left( 0, (-1)^{c_k} \lambda_k \right) \right] \text{ nats}

Summing over the \mu = \log_2(M) bit positions,

C = \sum_{k=1}^{\mu} C_k \text{ nats} = \frac{1}{\log(2)} \sum_{k=1}^{\mu} \Big( \log(2) - E_{c_n, \mathbf{n}} \big[ \max{}^*(0, (-1)^{c_k} \lambda_k) \big] \Big) \text{ bits}
BICM Capacity
[Monte Carlo estimation: the modulator picks x_k at random from S; a noise generator produces n_k; the receiver computes p(y | x_k) for every x_k in S and the bitwise LLRs \lambda_n = \log \big( \sum_{\mathbf{x} \in S_n^{(1)}} p(\mathbf{y} \mid \mathbf{x}) / \sum_{\mathbf{x} \in S_n^{(0)}} p(\mathbf{y} \mid \mathbf{x}) \big); for each symbol, the terms \max{}^*(0, (-1)^{c_k} \lambda_k) are accumulated and converted to bits by dividing by \log(2).]
[Plot: achievable rate (0 to 1) versus minimum Es/N0 in dB, from -10 to 20 dB.]
BICM-ID
With iterative demodulation and decoding (BICM-ID), the demapper uses a priori symbol probabilities fed back from the decoder:

\lambda_n = \log \frac{\sum_{\mathbf{x} \in S_n^{(1)}} p(\mathbf{x} \mid \mathbf{y})}{\sum_{\mathbf{x} \in S_n^{(0)}} p(\mathbf{x} \mid \mathbf{y})}
= \log \frac{\sum_{\mathbf{x} \in S_n^{(1)}} p(\mathbf{y} \mid \mathbf{x}) \, p(\mathbf{x} \mid c_n = 1)}{\sum_{\mathbf{x} \in S_n^{(0)}} p(\mathbf{y} \mid \mathbf{x}) \, p(\mathbf{x} \mid c_n = 0)}
Mutual Information
[Plot 1: mutual information versus noise variance.]
[Plot 2: demapper mutual-information transfer curves (output mutual information versus input I_v) for the gray, SP, MSP, MSEW, and anti-gray mappings of 16-QAM in AWGN at 6.8 dB.]
EXIT Chart
[EXIT chart for 16-QAM in AWGN at 6.8 dB: demapper curves for the gray, SP, MSP, MSEW, and anti-gray mappings plotted against the transfer curve of a K = 3 convolutional code. Adding the curve for a FEC code is what makes this an extrinsic information transfer (EXIT) chart.]
[EXIT chart for 16-QAM at 8 dB in Rayleigh fading: curves for 1x1 MSP, 2x1 Alamouti MSP, 2x1 Alamouti huangNr1, 2x2 Alamouti MSP, and 2x2 Alamouti huangNr2, with the K = 3 convolutional code.]
Conclusions