Abstract—An interesting practical consideration for the decoding of serial or parallel concatenated codes with more than two components is the determination of the lowest-complexity component decoder schedule that results in convergence. This paper presents an algorithm that finds such an optimal decoder schedule. A technique is also given for combining and projecting a series of three-dimensional extrinsic information transfer (EXIT) functions onto a single two-dimensional EXIT chart. This is a useful technique for visualizing the convergence threshold for multiple concatenated codes, and it provides a design tool for concatenated codes with more than two components.

Index Terms—EXIT chart, iterative decoding, multiple concatenated codes, optimal scheduling.
I. INTRODUCTION

SINCE the invention of turbo codes [1] with two parallel concatenated component codes, the turbo principle has been extended to serially concatenated codes [2], multiple parallel concatenated codes [3], and multiple serially concatenated codes [4].
Iterative decoding of concatenated codes with two components can be analyzed using two-dimensional extrinsic information transfer (EXIT) charts [5], [6]. These charts may be used to predict convergence thresholds and average decoding trajectories, and they have proved to be a useful tool for code construction. EXIT chart analysis has been extended to parallel concatenated codes with three components [7], resulting in a three-dimensional chart. For three serially concatenated codes, however, the chart would be four-dimensional, and it is therefore difficult to show the decoding trajectory in a single chart. Without a proper approach, extension of EXIT chart
Fig. 1. [Figure: serially concatenated system with mapper M and component codes C_1, C_2, C_3; the encoder chain runs from the outer code through interleavers to the mapper M, over the channel (s, w, r), and the corresponding decoders M^{-1}, C_1^{-1}, C_2^{-1}, C_3^{-1} exchange extrinsic values E(·) and priors A(·). Only the block-diagram labels survived extraction.]

Fig. 2. [Figure: parallel concatenated system with component codes C_1, C_2, C_3, C_4 and interleavers π_1–π_4; the decoders C_1^{-1}–C_4^{-1} exchange extrinsic values E_n(x) and priors A_n(x). Only the block-diagram labels survived extraction.]
[Equations (1) and (2), defining the decision statistics for x(i) = +1 and x(i) = −1, are not recoverable from this extraction.]

The prior at decoder n is the sum of the other decoders' extrinsic values,

A_n(x) = \sum_{i=1, i \neq n}^{N} E_i(x),   (3)

and the decision statistic is

D(x) = \sum_{i=1}^{N} E_i(x).   (4)

The MI between a statistic G and the corresponding bits X is defined as

I_G \triangleq \frac{1}{2} \sum_{x = \pm 1} \int_{-\infty}^{+\infty} p_{G|X}(\xi|x) \log_2 \frac{2\, p_{G|X}(\xi|x)}{p_{G|X}(\xi|{+1}) + p_{G|X}(\xi|{-1})} \, d\xi.   (5)
Assume G = \mu X + W is Gaussian with mean \mu X, where \mu \geq 0 is a constant and W is zero-mean Gaussian with variance \sigma_w^2. Then

I_G = J(2\mu/\sigma_w),   (6)

where J is defined as [6]

J(\sigma) = 1 - \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{+\infty} e^{-(\xi - \sigma^2/2)^2 / (2\sigma^2)} \log_2\left(1 + e^{-\xi}\right) d\xi.   (7)

[Equations (8)–(10) are not recoverable from this extraction.]

Fig. 3. The J-function and its approximation, where H_1 = 0.3073, H_2 = 0.8935, and H_3 = 1.1064. [Figure: plot of I = J(\sigma) against \sigma = J^{-1}(I); only the axis labels survived extraction.]
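Where the J-function is needed numerically, (7) can be evaluated directly. The sketch below is a minimal plain-Python evaluation by the midpoint rule (the grid size and integration limits are our own choices, not from the paper), together with a bisection inverse, which is valid because J is monotonically increasing:

```python
import math

def J(sigma, num=4000, lim=40.0):
    """Midpoint-rule evaluation of the J-function in (7): the MI of a
    consistent Gaussian LLR with standard deviation sigma."""
    if sigma <= 0.0:
        return 0.0
    step = 2.0 * lim / num
    total = 0.0
    for k in range(num):
        xi = -lim + (k + 0.5) * step
        pdf = math.exp(-((xi - sigma * sigma / 2.0) ** 2)
                       / (2.0 * sigma * sigma)) \
              / math.sqrt(2.0 * math.pi * sigma * sigma)
        total += pdf * math.log2(1.0 + math.exp(-xi))
    return 1.0 - total * step

def J_inv(I, lo=1e-9, hi=60.0, iters=60):
    """Invert J by bisection; valid since J is monotonically increasing."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The bisection inverse is slow but robust; the closed-form approximation of Fig. 3 is the cheaper alternative when many evaluations are needed.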
r(i) = \mu_u y_n(i) + u.   (11)

In (11), \sigma_u^2 = 8RE_b/N_0 = 8R_b is the variance of the zero-mean Gaussian u, and \mu_u = 4RE_b/N_0 = \sigma_u^2/2 [6]. The average MI [6] for the priors in (11) is defined as

I_{A(y_n)} \triangleq \frac{R_n}{L_n} \sum_{i=1}^{L_n/R_n} I(y_n(i); A(y_n(i))),   (12)

for all n = 2, 3, \ldots, N.
In a system with parallel concatenated codes, I_{E(x_n)} = I_{E_n(x)} and I_{A(x_n)} = I_{A_n(x)} (refer to Section II-B and Fig. 2). Since the prior values are sums of N − 1 extrinsic values (3), they are modelled as sums of N − 1 biased Gaussian random variables, (14). Using (7) and (8), the prior MI becomes [7]

I_{A(x_n)} = J\left( \sqrt{ \sum_{i=1, i \neq n}^{N} \left[ J^{-1}\!\left(I_{E(x_i)}\right) \right]^2 } \right),   (23)

and the extrinsic MI follows as

I_{E(x_n)} = T_{x_n}\!\left( J\left( \sqrt{ \sum_{i=1, i \neq n}^{N} \left[ J^{-1}\!\left(I_{E(x_i)}\right) \right]^2 } \right), J\!\left(\sqrt{8R_b}\right) \right),   (24)

for all n = 1, 2, \ldots, N.

Fig. 4. [Figure: three-dimensional EXIT surface over the prior MIs I_{A(x_n)} and I_{A(y_n)}; only the axis labels and ticks survived extraction.]
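Fig. 3 supplies the constants H_1, H_2, H_3 of a closed-form approximation to J. Assuming the approximation takes the form J(\sigma) \approx (1 - 2^{-H_1 \sigma^{2H_2}})^{H_3} (the functional form is our assumption; only the constants appear in the caption), the Gaussian combining used in (23) and (26) can be sketched as follows; indices are 0-based here, unlike the paper's 1-based sums:

```python
import math

H1, H2, H3 = 0.3073, 0.8935, 1.1064  # constants from Fig. 3

def J(sigma):
    """Closed-form approximation of the J-function (assumed form)."""
    if sigma <= 0.0:
        return 0.0
    return (1.0 - 2.0 ** (-H1 * sigma ** (2.0 * H2))) ** H3

def J_inv(I):
    """Algebraic inverse of the approximation above."""
    if I <= 0.0:
        return 0.0
    return (-math.log2(1.0 - I ** (1.0 / H3)) / H1) ** (1.0 / (2.0 * H2))

def prior_mi(I_E, n):
    """Eq. (23): prior MI at decoder n, combining the other decoders'
    extrinsic MIs as a sum of independent consistent Gaussians.
    n is 0-based here."""
    s2 = sum(J_inv(I) ** 2 for i, I in enumerate(I_E) if i != n)
    return J(math.sqrt(s2))

def decision_mi(I_E):
    """Eq. (26): MI on the decision statistic in the parallel case."""
    return J(math.sqrt(sum(J_inv(I) ** 2 for I in I_E)))
```

Because the prior combines through squared J^{-1} values, adding one more informative extrinsic input always raises the prior MI, consistent with the monotonicity assumed later.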
Define the average MI on the decision statistics in (2) and (4) as I_{D(x)} \triangleq \frac{1}{L} \sum_{i=1}^{L} I(x(i); D(x(i))). In the serial case,

I_{D(x)} = I_{E(x_N)} = T_{x_N}\!\left(0, I_{A(y_N)}\right) = T_{x_N}\!\left(0, I_{E(x_{N-1})}\right),   (25)

and in the parallel case,

I_{D(x)} = J\left( \sqrt{ \sum_{i=1}^{N} \left[ J^{-1}\!\left(I_{E(x_i)}\right) \right]^2 } \right).   (26)

Fig. 5. [Figure: EXIT surfaces over I_{A(x_n)}, shown together with the surface where I_D = 1.0 [6]; only the axis labels and ticks survived extraction.]
For two component codes [6], [17], the EXIT chart is two-dimensional for a fixed \gamma_b. In the serial case [17], the MIs are on the coded bits between the encoders, x_1 = \pi_1(y_2), while in the parallel case [6] the MIs are on the source bits, x_1 = \pi_1(x) and x_2 = \pi_2(x). A vertical step in the EXIT chart for a two-code system represents activation of decoder one, C_1^{-1}. A horizontal step in the same EXIT chart represents activation of decoder two, C_2^{-1}.

In a system with N = 3 parallel components, there are three different extrinsic values, E(x_1), E(x_2), and E(x_3), connecting the decoders (cf. Fig. 2, where there are four extrinsic values). According to (24), the EXIT chart is three-dimensional for a fixed \gamma_b [7]. The convergence threshold is now the \gamma_b value that opens a tube between the three surfaces, so that the trajectory can go from I_{E(x_1)} = I_{E(x_2)} = I_{E(x_3)} = 0 to I_{E(x_1)} = I_{E(x_2)} = I_{E(x_3)} = 1 [7].
For N parallel concatenated codes, there will be N extrinsic values connecting the decoders and the EXIT chart will be N-dimensional.

Fig. 6. [Figure: two-dimensional EXIT chart projection for the serial example, with horizontal axis I_H = (I_{E(y_3)}, I_{E(y_2)}) and vertical axis I_V = (I_{E(x_2)}, I_{E(x_1)}); the legend lists I_{E(x_1)} = T_M(I_{E(y_2)}, \gamma_b), I_{E(x_2)} = T_V(I_{E(y_3)}, \gamma_b), I_{E(y_3)} = T_{y_3}(0, I_{E(x_2)}), and I_{E(x_2)} = I_{E(x_1)}. Only the axis labels and legend survived extraction.]
V. OPTIMAL SCHEDULING

Fig. 7. [Figure: two-dimensional EXIT chart projection with horizontal axis I_H = I_{E(x_4)} and vertical axis I_V = I_{E(x_3)}; the legend lists I_{E(x_3)} = T_V(I_{E(x_4)}, \gamma_b) and I_{E(x_4)} = T_H(I_{E(x_3)}, \gamma_b). Only the axis labels and legend survived extraction.]
Fig. 8. The trellis for the decoding schedule of three codes concatenated in serial (solid paths) or in parallel (solid and dashed paths). [Figure: trellis with stages k = 1, 2, \ldots, 6 and one node per decoder C_1^{-1}, C_2^{-1}, C_3^{-1} at each stage; only the node labels survived extraction.]
p^k,

v_2 = c_{p^k} \triangleq \sum_{j=1}^{k} c_{p_j}.   (27)

[Equations (28)–(31), the stopping criterion on I_D, and Steps 1–3 of the algorithm are not recoverable from this extraction.]

4) If |V| ≠ 0, find the metric with the lowest complexity, v* = \arg\min_{v \in V} v_2, and replace the candidate path p* by the path corresponding to v*.
5) Go to Step 6.
6) If |V_k| = 0, output p* as the optimal path with a final set of extrinsic MIs and a total complexity in v*. If |V_k| ≠ 0, go to Step 2.
In the serial case, it is sufficient to initialize P_1 = {(1)}, since the optimal path must start by activating the demapper. This means that the dashed paths in Fig. 8 can be removed for a system with three serially concatenated codes. The general algorithm above will, in the serial case, automatically remove all paths that do not start with p_1 = 1, since f_n(0) will have zero extrinsic MIs in v if n ≠ 1, according to (20)–(22). f_1(0) is the only metric update function in a serial system that will have a non-zero extrinsic MI (v_3 ≠ 0), since it includes the EXIT function of the innermost encoder (mapper) (19).

Using this brute-force exhaustive search, the total number of paths |V_k| grows exponentially with k until the first candidate path is found, and the search is therefore infeasible for large k.
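The brute-force search described above can be sketched as follows. The transfer functions, per-activation costs, and the averaging prior combiner are hypothetical stand-ins (the paper combines priors through the J-function and uses measured T_{x_n} curves); only the search structure is the point: enumerate activation sequences, track the vector of extrinsic MIs and the accumulated complexity as in (27), and keep the cheapest sequence whose MIs all reach a target:

```python
import itertools

# Hypothetical EXIT transfer functions for N = 3 parallel decoders:
# extrinsic MI out of decoder n as a function of its prior MI.
T = [lambda a: min(1.0, 0.4 + a),
     lambda a: min(1.0, 0.3 + a),
     lambda a: min(1.0, 0.2 + a)]
COST = [3.0, 2.0, 1.0]   # per-activation complexity c_n (hypothetical)
TARGET = 0.999           # required MI on every extrinsic value

def prior(v, n):
    """Stand-in prior combiner: average of the other extrinsic MIs."""
    others = [v[i] for i in range(len(v)) if i != n]
    return sum(others) / len(others)

def run(schedule):
    """Apply a schedule (tuple of 0-based decoder indices); return the
    final extrinsic MIs and the accumulated complexity as in (27)."""
    v, cost = [0.0] * len(T), 0.0
    for n in schedule:
        v[n] = max(v[n], T[n](prior(v, n)))   # MIs never decrease
        cost += COST[n]
    return v, cost

def best_schedule(max_len=8):
    """Exhaustive search: the cheapest schedule, up to max_len
    activations, whose extrinsic MIs all reach TARGET."""
    best, best_cost = None, float("inf")
    for L in range(1, max_len + 1):
        for sched in itertools.product(range(len(T)), repeat=L):
            v, cost = run(sched)
            if min(v) >= TARGET and cost < best_cost:
                best, best_cost = sched, cost
    return best, best_cost
```

Even in this toy, the enumeration visits every sequence up to the horizon, which is exactly the exponential growth in |V_k| that motivates the pruning of the next subsection.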
To reduce the search complexity in Algorithm 2, where the metrics are F-dimensional, an adaptation of the well-known Viterbi algorithm [21] can be used. In order to do this, we need to define a partial order ⪰ on the metrics with the following properties: (a) the metrics are monotonically non-decreasing, and (b) deletion of paths entering the same state with smaller metrics according to ⪰ does not affect the final outcome.

Definition 1 (Domination): Define a partial ordering on R^F as follows. For v, v′ ∈ R^F, v ⪰ v′ if and only if v_j ≥ v′_j for all 3 ≤ j ≤ F, and v_2 ≤ v′_2. We say v dominates v′ if v has all F − 2 extrinsic MIs higher than v′ and a lower total complexity.

The operator ⪰ does not operate on the first element of v, since that is the element the stopping criterion is based on. Define the function dom, operating on a set of metrics by discarding all dominated metrics. Thus, v ∈ V_k will be discarded if and only if there is another metric v′ ∈ V_k with v′ ⪰ v.
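Definition 1 translates directly into a pruning routine for the metric sets V_k. In the sketch below a metric is a tuple whose position 0 holds the stopping statistic (ignored by the order), position 1 holds the total complexity v_2, and the remaining positions hold the F − 2 extrinsic MIs; domination is made strict here so that identical metrics do not eliminate one another, a detail the dom function leaves implicit:

```python
def dominates(v, w):
    """Definition 1: v dominates w iff every extrinsic MI of v is at
    least that of w and v's total complexity is at most w's. The
    stopping-statistic element v[0] is not compared."""
    return v[1] <= w[1] and all(vj >= wj for vj, wj in zip(v[2:], w[2:]))

def strictly_dominates(v, w):
    """Domination with some strict improvement among the compared
    elements, so that equal metrics survive pruning."""
    return dominates(v, w) and v[1:] != w[1:]

def dom(metrics):
    """Discard every metric strictly dominated by another metric."""
    return [v for v in metrics
            if not any(strictly_dominates(w, v) for w in metrics)]
```

A surviving metric is one for which no other path reaches at least the same extrinsic MIs at no greater cost, which is precisely condition (b) above.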
TABLE I
THE FIRST 30 ACTIVATIONS FOR DIFFERENT ACTIVATION SCHEDULES FOR THE PARALLEL EXAMPLE CODE AT \gamma_b = 1.0 dB.

3, 1, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 4, 2, 3, 1, 4, 3, 4, 3, 1, 2, 4, 1, 3, 2, 1, 3, 4, …
1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, …
1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, …
3, 1, 2, 1, 3, 2, 1, 4, 3, 1, 2, 1, 3, 4, 1, 2, 3, 1, 4, 2, 3, 1, 2, 4, 1, 3, 2, 1, 3, 4, …
1, 2, 3, 1, 2, 3, …, 1, 2, 4, 1, 2, 4, …, 1, 2, 3, 1, 2, 3, …, 1, 2, 4, 1, 2, 4, …

[The row labels of Table I were lost in extraction; the legend of Fig. 10 lists the schedules Optimal, A, B, C, and D.]
Fig. 9. Performance in BER of the serial example code versus the total computational complexity for different \gamma_b. [Figure: BER curves for \gamma_b values between −0.6 and 4.0 dB, for the schedules Optimal, A: 1,2,3,…, B: 1,2,3,2,…, and C: fix, 2.0 dB; only the legend and axis labels survived extraction.]

Fig. 10. Performance in BER of the parallel example code versus the total computational complexity for different \gamma_b. [Figure: BER curves for the schedules Optimal, A: 1,2,3,4,…, B: 1,2,3,4,3,2,…, C: fix, 0.6 dB, and D; only the legend and axis labels survived extraction.]
VII. CONCLUSION

Fig. 11. EXIT chart projection of the parallel example code at \gamma_b = 1.0 dB, together with the average decoding trajectory for different activation schedules. The markers correspond to the five schedules given in Table I. [Figure: axes I_H = I_{E(x_4)} and I_V = I_{E(x_3)}, with curves I_{E(x_3)} = T_V(I_{E(x_4)}, \gamma_b) and I_{E(x_4)} = T_H(I_{E(x_3)}, \gamma_b); only the axis labels and legend survived extraction.]
v ⪰ v′ \implies g_n(v) \geq g_n(v′),   (33)

where henceforth for vectors, v ⪰ v′ means v_n ≥ v′_n, for n = 1, 2, \ldots, N.

Define f : [0,1]^N \to [0,1]^N and f_n : [0,1]^N \to [0,1]^N, for n = 1, 2, \ldots, N, according to

f(v) = (g_1(v), g_2(v), \ldots, g_N(v)),   (34)

f_n(v) = (v_1, \ldots, v_{n-1}, g_n(v), v_{n+1}, \ldots, v_N).   (35)

Then

v ⪰ v′ \implies f(v) ⪰ f(v′),   (36)

and

v ⪰ v′ \implies f_n(v) ⪰ f_n(v′).   (37)

Define a region \Omega_f as

\Omega_f \triangleq \{ v : v ⪯ f(v) \}.   (38)

Then

v ⪯ f_n(v) ⪯ f(v),  if v ∈ \Omega_f,   (39)

for any n = 1, 2, \ldots, N.
Given a sequence p^K = (p_1, p_2, \ldots, p_K), where p_n ∈ {1, 2, \ldots, N}, and an initial point v^0 = s^0 ∈ \Omega_f, define the following two sequences for k = 1, 2, \ldots, K,

v^k = (v_1^k, v_2^k, \ldots, v_N^k) \triangleq f(v^{k-1}),   (40)

s^k = (s_1^k, s_2^k, \ldots, s_N^k) \triangleq f_{p_k}(s^{k-1}).   (41)

Thus, v^k is a sequence with parallel, or synchronous, updates and s^k is a sequence with serial, or asynchronous, updates [22]. The sequence p^K defines the update order for the serial sequence.
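The lemma machinery can be checked numerically on any monotone update map. In the sketch below, g is a hypothetical monotone stand-in for the updates g_n in (34); starting from v^0 = 0, which lies in \Omega_f because f(0) ⪰ 0, the parallel sequence (40) and serial sequences (41) under different schedules settle at the same limit point, as Theorem 1 asserts:

```python
N = 3

def g(v, n):
    """Monotone component update [0,1]^N -> [0,1], a hypothetical
    stand-in for the EXIT update g_n of (34)."""
    others = [v[i] for i in range(N) if i != n]
    a = sum(others) / len(others)
    return min(1.0, 0.25 + 0.8 * a)

def parallel_step(v):
    """One synchronous update, eq. (40)."""
    return [g(v, n) for n in range(N)]

def serial_step(v, n):
    """One asynchronous update of component n, eq. (41)."""
    w = list(v)
    w[n] = g(v, n)
    return w

def iterate_parallel(k):
    v = [0.0] * N
    for _ in range(k):
        v = parallel_step(v)
    return v

def iterate_serial(schedule):
    v = [0.0] * N
    for n in schedule:
        v = serial_step(v, n)
    return v
```

Any schedule that activates every component often enough traces a different trajectory through the same region but arrives at the same fixed point, which is exactly the schedule-independence of the limit.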
Lemma 1: Let v^0 ∈ \Omega_f. Then the sequence v^k defined by (40) converges monotonically (in the sense v^k ⪰ v^{k-1}) to a unique limit point 0 ⪯ v* ⪯ 1.

Proof: Since f(v) ⪯ 1, the lemma results from showing that v^k is monotonically non-decreasing. This will be accomplished by induction. By assumption, v^0 ∈ \Omega_f and

v^1 = f(v^0) ⪰ v^0,  by (40) and (38).

Now suppose

v^k ⪰ v^{k-1}.   (42)

Then

v^{k+1} = f(v^k) ⪰ f(v^{k-1}) = v^k,  by (40), (36), and (40).   (43)
(ii) v^k ⪯ s^{Kk},

for k ≥ 1, directly prove the lemma. By assumption, s^0 = v^0 ∈ \Omega_f and

v^1 = f(v^0) = f(s^0) ⪰ f_{p_1}(s^0) = s^1,  by (40), (39), and (41).   (44)

Now suppose v^k ⪰ s^k. Then

v^{k+1} = f(v^k) ⪰ f(s^k) ⪰ f_{p_{k+1}}(s^k) = s^{k+1},  by (40), (36), (39), and (41).

For the parallel case, g_n(v) is given by

g_n(v) \triangleq T_{x_n}\!\left( J\left( \sqrt{ \sum_{i=1, i \neq n}^{N} \left[ J^{-1}(v_i) \right]^2 } \right), J\!\left(\sqrt{8R_b}\right) \right).
Assumption (33) can be guaranteed if the monotonicity assumption of the EXIT functions holds (Assumption 1), together with the knowledge that both the J-function and its inverse are monotonically increasing [6], i.e., J(\sigma + \epsilon) > J(\sigma) and J^{-1}(I_G + \epsilon) > J^{-1}(I_G), for any \epsilon > 0.

The decoding trajectory for a concatenated code with serial components can be described in a similar way, where g_n(v) is then given by (19)–(22). In this case, each activation can update one or two elements in v. However, it is straightforward to redefine f_n(v) to be a function that updates several elements in v and to prove that Lemma 2 is still valid.

Theorem 1 results from direct application of the above lemmas as follows.

Theorem 1: (restated) Subject to Assumption 1, the sequence of MIs resulting from successive or parallel activations of EXIT functions converges monotonically to a limit point, independent of the actual activation schedule.

Proof: The elements in v are MIs, and hence 0 ⪯ v ⪯ 1. Further, since v^0 = 0 and f(0) ⪰ 0, v^0 = 0 is inside the region \Omega_f.