
Convergence Analysis and Optimal Scheduling for Multiple Concatenated Codes

Fredrik Brannstrom, Member, IEEE, Lars K. Rasmussen, Senior Member, IEEE, and Alex J. Grant, Senior Member, IEEE

Abstract: An interesting practical consideration for decoding of serial or parallel concatenated codes with more than two components is the determination of the lowest-complexity component decoder schedule that results in convergence. This paper presents an algorithm that finds such an optimal decoder schedule. A technique is also given for combining and projecting a series of three-dimensional extrinsic information transfer (EXIT) functions onto a single two-dimensional EXIT chart. This is a useful technique for visualizing the convergence threshold for multiple concatenated codes and provides a design tool for concatenated codes with more than two components.

Index Terms: EXIT chart, iterative decoding, multiple concatenated codes, optimal scheduling.

I. INTRODUCTION

Since the invention of turbo codes [1] with two parallel concatenated component codes, the turbo principle has been extended to serially concatenated codes [2], multiple parallel concatenated codes [3], and multiple serially concatenated codes [4].

Iterative decoding of concatenated codes with two components can be analyzed using two-dimensional extrinsic information transfer (EXIT) charts [5, 6]. These charts may be used to predict convergence thresholds and average decoding trajectories and have proved to be a useful tool for code construction. EXIT chart analysis has been extended to parallel concatenated codes with three components [7], resulting in a three-dimensional chart. For three serially concatenated codes, however, the chart would be four-dimensional and it is therefore difficult to show the decoding trajectory in a single chart. Without a proper approach, extension of EXIT chart

analysis to codes with more than three components becomes unmanageable.

F. Brannstrom and L. K. Rasmussen are supported by the Swedish Research Council under Grant 621-2001-2976. F. Brannstrom is also supported by Personal Computing and Communication (PCC++) under Grant PCC-0201-09. L. K. Rasmussen and A. J. Grant are supported by the Australian Government under ARC Grant DP0344856. The material in this correspondence was presented in part at the IEEE International Symposium on Information Theory, Yokohama, Japan, June/July 2003, and at the International Symposium on Turbo Codes and Related Topics, Brest, France, September 2003.
F. Brannstrom was with the Department of Computer Engineering, Chalmers University of Technology, Sweden. He is now with the Department of Signals and Systems, Chalmers University of Technology, SE-412 96 Goteborg, Sweden (e-mail: fredrikb@chalmers.se).
L. K. Rasmussen is with the Institute for Telecommunications Research, University of South Australia, Mawson Lakes, SA 5095, Australia (e-mail: Lars.Rasmussen@unisa.edu.au) and with the Department of Computer Engineering, Chalmers University of Technology, SE-412 96 Goteborg, Sweden (e-mail: larsr@ce.chalmers.se).
A. J. Grant is with the Institute for Telecommunications Research, University of South Australia, Mawson Lakes, SA 5095, Australia (e-mail: Alex.Grant@unisa.edu.au).
For two component codes, decoding alternates between the two component decoders [5, 6]. With more than two components, however, the schedule of decoder activations is no longer obvious. Here, the term activate is used instead of iterate (in a two-component code, one iteration is the same as two activations). Previously, fixed schedules have been used, e.g., [4, 7], while a message-passing strategy giving the best bit-error rate (BER) performance for multiple parallel concatenated codes has been investigated in [3, 8].

This paper presents a technique for combining and projecting a series of two- and three-dimensional EXIT functions
onto a single two-dimensional chart. Similar to a system with
two components, the convergence threshold for a system with
multiple concatenated codes can then be visualized as a tunnel
in the projected EXIT chart. An optimization algorithm is also
described, which finds the activation schedule that yields the
best possible performance using the lowest possible decoding
complexity.
In Section II the system model is specified. Section III
defines the different mutual informations and EXIT functions.
Section IV introduces the EXIT chart projection and examples
are given demonstrating convergence analysis for multiple
concatenated codes. Section V develops the decoder schedule
optimization algorithm. Numerical examples are presented in
Section VI and concluding remarks are given in Section VII.

II. SYSTEM MODEL

Consider coded transmission of binary data over an additive white Gaussian noise (AWGN) channel. With reference to the examples in Fig. 1 (three serial codes) and Fig. 2 (four parallel codes), the encoder consists of $N$ serially or parallel concatenated component codes $\mathcal{C}_n$, $n = 1, 2, \ldots, N$. The rate of code $n$ is $R_n$ and the overall transmission rate is $R$. Transmission is divided into blocks of $L$ independent and identically distributed source bits $\mathbf{x} \in \{-1,+1\}^L$. Encoder $\mathcal{C}_n$ maps a sequence of $L_n$ input symbols $\mathbf{x}_n \in \{-1,+1\}^{L_n}$ to a sequence of $L_n/R_n$ output symbols $\mathbf{y}_n \in \{-1,+1\}^{L_n/R_n}$, $1 \le n \le N$. Individual elements of these sequences will be denoted by $x_n(i)$ and $y_n(j)$, respectively, $i = 1, 2, \ldots, L_n$ and $j = 1, 2, \ldots, L_n/R_n$.

The innermost encoder, whose output is connected to the channel, is a modulator $\mathcal{M}$, mapping blocks of $m$ bits to one of $M = 2^m$ symbols, e.g., BPSK, 4PSK, 8PSK, or 16QAM symbols [5]. The rate of the mapper is $m$, and the sequence of symbols $\mathbf{s} \in \{s_1, s_2, \ldots, s_M\}^{L/R}$ is transmitted over the AWGN channel subject to a transmitter average energy constraint $E_s = \frac{1}{M}\sum_{k=1}^{M}|s_k|^2 = R E_b$, where $E_b$ is the average bit energy of the source bits. The receiver's matched filter output is $\mathbf{r} = \mathbf{s} + \mathbf{w}$, where each element in $\mathbf{w}$ is zero-mean Gaussian with variance $\sigma_w^2 = N_0/2$ per dimension. The signal-to-noise ratio (SNR) is $\gamma_b \triangleq E_b/N_0$.

Fig. 1. Three serially concatenated codes. [Block diagram: encoders $\mathcal{C}_3$, $\mathcal{C}_2$ and mapper $\mathcal{M}$ interconnected by interleavers $\pi_2$, $\pi_1$; the AWGN channel; and decoders $\mathcal{C}_3^{-1}$, $\mathcal{C}_2^{-1}$, $\mathcal{M}^{-1}$ exchanging extrinsics $E(\cdot)$ and priors $A(\cdot)$.]

Fig. 2. Four parallel concatenated codes. [Block diagram: interleavers $\pi_1, \ldots, \pi_4$, encoders $\mathcal{C}_1, \ldots, \mathcal{C}_4$, multiplexer/mapper $\mathcal{M}$; the AWGN channel; and demapper $\mathcal{M}^{-1}$ with decoders $\mathcal{C}_1^{-1}, \ldots, \mathcal{C}_4^{-1}$ exchanging extrinsics $E_n(\mathbf{x})$ and priors $A_n(\mathbf{x})$.]

Let $E(\mathbf{x}_n)$ denote a sequence of extrinsic values corresponding to $\mathbf{x}_n$. Likewise, $A(\mathbf{x}_n)$ is a sequence of priors for $\mathbf{x}_n$. This notation is naturally extended to the other sequences, $\mathbf{x}$ and $\mathbf{y}_n$, $1 \le n \le N$. Priors and extrinsics will be represented as sequences of log-likelihoods, $A(\mathbf{x}_n) \in \mathbb{R}^{L_n}$.

The decoder consists of $N$ a posteriori probability (APP) decoders $\mathcal{C}_n^{-1}$ [9], interconnected by interleavers $\pi_n$ and deinterleavers $\pi_n^{-1}$. Upon activation, each decoder uses its code constraint and the most recent priors [8] to update its extrinsic values [9], $(E(\mathbf{x}_n), E(\mathbf{y}_n)) = \mathcal{C}_n^{-1}(A(\mathbf{x}_n), A(\mathbf{y}_n))$.

Let $D(\mathbf{x}) \in \mathbb{R}^L$ denote the decision statistics for the source bits $\mathbf{x}$ after each activation. The hard decision $\hat{x}(i)$ on element $x(i)$ is according to

$$D(x(i)) \underset{\hat{x}(i)=-1}{\overset{\hat{x}(i)=+1}{\gtrless}} 0. \quad (1)$$

Performance is measured in BER, i.e., the probability $\Pr(\hat{x}(i) \ne x(i))$.

A. Serially Concatenated Codes

In a system with $N$ serially concatenated codes [4], the encoders are interconnected by $N-1$ interleavers, $\mathbf{x}_n = \pi_n(\mathbf{y}_{n+1})$, $1 \le n \le N-1$. The outer encoder $\mathcal{C}_N$ has $L_N = L$ and for the other codes, $1 \le n \le N-1$, the block lengths are $L_n = L_{n+1}/R_{n+1} = L_{n-1}R_n$, and the overall rate is $R = \prod_{n=1}^{N} R_n$. Fig. 1 shows an example with three components, where the inner encoder is a mapper $\mathcal{C}_1 = \mathcal{M}$ with $R_1 = m$. The inner encoder may also model an intersymbol interference channel [10] or a multiple access channel [11].

All extrinsic values are set to zero before decoding, $E(\mathbf{x}_n) = \{0\}^{L_n}$ and $E(\mathbf{y}_n) = \{0\}^{L_n/R_n}$, $1 \le n \le N$. The inner decoder (or demapper) uses the matched filter outputs $\mathbf{r}$ together with the prior values on $\mathbf{x}_1$ to produce extrinsics $E(\mathbf{x}_1) = \mathcal{M}^{-1}(A(\mathbf{x}_1), \mathbf{r})$. After initial activation of the demapper, decoding proceeds iteratively according to some activation schedule of the component decoders. Extrinsic values from decoder $n$ become priors to the connecting decoders, $A(\mathbf{y}_{n+1}) = \pi_n^{-1}(E(\mathbf{x}_n))$ and $A(\mathbf{x}_{n-1}) = \pi_{n-1}(E(\mathbf{y}_n))$, respectively. The source priors $A(\mathbf{x}_N)$ will always be zero and hence $A(\mathbf{x}_3)$ is not shown in Fig. 1. The decision statistics

$$D(\mathbf{x}) = E(\mathbf{x}_N), \quad (2)$$

are only updated when $\mathcal{C}_N^{-1}$ is activated.

B. Parallel Concatenated Codes

In a system with $N$ parallel concatenated codes there are $N$ interleavers permuting the source sequence into $N$ different input sequences, $\mathbf{x}_n = \pi_n(\mathbf{x})$, with $L_n = L$, $1 \le n \le N$. Fig. 2 shows an example with four components. In this example, $\mathcal{M}$ is a memoryless modulator/multiplexer, mapping bits from $\mathbf{y}_n$ to antipodal, e.g., BPSK, symbols before they are transmitted in serial over the channel. Since a BPSK mapper has rate $m = 1$, the overall rate is $R = \left(\sum_{n=1}^{N} 1/R_n\right)^{-1}$.

The demapper/demultiplexer $\mathcal{M}^{-1}$ produces $A(\mathbf{y}_n)$, $1 \le n \le N$, and these are constant during the decoding process. Prior to decoding, all extrinsic values are set to zero, $E_n(\mathbf{x}) = \{0\}^L$, $1 \le n \le N$. Decoding proceeds iteratively according to some schedule of activation of the component decoders and in this case, any of the $N$ decoders can be activated first.

Upon activation, decoder $\mathcal{C}_n^{-1}$ updates extrinsics on the source bits, $E_n(\mathbf{x}) = \pi_n^{-1}(E(\mathbf{x}_n))$, as a function of the prior values $A(\mathbf{x}_n) = \pi_n(A_n(\mathbf{x}))$, where

$$A_n(\mathbf{x}) = \sum_{\substack{i=1 \\ i \ne n}}^{N} E_i(\mathbf{x}). \quad (3)$$

The decision statistics are updated according to

$$D(\mathbf{x}) = E_n(\mathbf{x}) + A_n(\mathbf{x}) = \sum_{i=1}^{N} E_i(\mathbf{x}). \quad (4)$$

In contrast to a serial code, the source bit decisions in a parallel code are updated every activation according to (1) and (4).

The mapper in Fig. 2 may also map $m = N$ bits (one bit from each of the $N$ output sequences $\mathbf{y}_n$) to an $M = 2^N$-point constellation. In this case something can be gained by passing extrinsic values $E(\mathbf{y}_n)$ from all decoders $\mathcal{C}_n^{-1}$, $1 \le n \le N$, to the demapper $\mathcal{M}^{-1}$. The demapper is now also a decoder that can be activated, leading to a system with $N+1$ components. In this scenario, $N$ interleavers/deinterleavers should be inserted between the encoders/decoders and the mapper/demapper to enhance performance, resulting in a hybrid parallel/serial system. For simplicity, BPSK modulation will be used for parallel codes in this paper. It is straightforward to apply the principles described in the next sections to hybrid systems.

C. Examples

Two example codes will be used in this paper, one serial and one parallel. The specific combination of component codes and mappers used is not optimal in any sense, and is chosen simply to demonstrate the principles introduced in this paper. A feed-forward convolutional code (CC) will be denoted CC(2, 3), where the numbers are octal representations of the generator polynomials 1 and $1 + D$. A recursive convolutional code will be denoted CC(15/13), with generator $\frac{1+D+D^3}{1+D^2+D^3}$ [6].

The serial example has three components. With reference to Fig. 1, $\mathcal{C}_3$: CC(2, 3) has two states, $R_3 = 1/2$, and $\mathcal{C}_2$: CC(2/3) has two states, $R_2 = 1$. A memoryless 8PSK mapper is used as the inner code with counter-clockwise labelling 0, 1, 2, 3, 6, 7, 5, 4 (octal representation of the bits mapped to the signal points). This mapping is chosen to give a low convergence threshold [5, 12]. The concatenated code transmits $R = 1.5$ bits per channel use.

The parallel example has four CCs as component codes as in Fig. 2. Two of these are feed-forward and two recursive, and $R_n = 1$ for $n = 1, 2, 3, 4$. The component codes are $\mathcal{C}_1$: CC(3) (two states), $\mathcal{C}_2$: CC(7) (four states), $\mathcal{C}_3$: CC(2/3) (two states), and $\mathcal{C}_4$: CC(15/13) (eight states), respectively. No systematic bits are transmitted. These codes are used together with BPSK modulation, producing the overall rate $R = 1/4$ bits per channel use.

III. MUTUAL INFORMATION

Mutual information (MI) [13] is used in EXIT charts to predict the convergence behavior of iterative decoding [5-7]. Let $G \in \mathbb{R}$ be a continuous random variable and $X \in \{+1, -1\}$ a uniformly distributed discrete random variable. Define $p_{G|X}(\xi|x)$ as the conditional probability density function (PDF) of $G$ given $X = x$. The MI $I_G = I(X; G)$ is [6, 13]

$$I_G = \frac{1}{2}\sum_{x = \pm 1} \int_{-\infty}^{+\infty} p_{G|X}(\xi|x) \log_2 \frac{2\, p_{G|X}(\xi|x)}{p_{G|X}(\xi|+1) + p_{G|X}(\xi|-1)}\, d\xi. \quad (5)$$

Assume $G = \mu X + W$ is Gaussian with mean $\mu X$, where $\mu \ge 0$ is a constant and $W$ is zero-mean Gaussian with variance $\sigma_w^2$. Then

$$I_G = J\!\left(\frac{2\mu}{\sigma_w}\right), \quad (6)$$

where $J(\sigma)$ is defined as [6]

$$J(\sigma) = 1 - \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{+\infty} e^{-\frac{(\xi - \sigma^2/2)^2}{2\sigma^2}} \log_2\!\left(1 + e^{-\xi}\right) d\xi. \quad (7)$$

$J(\sigma)$ is monotonically increasing and therefore has a unique inverse function,

$$\sigma = J^{-1}(I_G). \quad (8)$$

It is infeasible to express $J$ or its inverse in closed form. However, they can be closely approximated by [12]

$$J(\sigma) \approx \left(1 - 2^{-H_1 \sigma^{2H_2}}\right)^{H_3}, \quad (9)$$

$$J^{-1}(I_G) \approx \left(-\frac{1}{H_1}\log_2\!\left(1 - I_G^{\frac{1}{H_3}}\right)\right)^{\frac{1}{2H_2}}. \quad (10)$$

Numerical optimization, using the Nelder-Mead simplex method [14] to minimize the total squared difference between (7) and (9), gives $H_1 = 0.3073$, $H_2 = 0.8935$, and $H_3 = 1.1064$. The solid curve in Fig. 3 shows (7) together with the indistinguishable approximation (9).

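For reference, the following minimal Python sketch (ours, not from the paper; function names are our own) evaluates (7) by direct numerical integration and implements the approximations (9)-(10) with the constants above:

```python
import numpy as np

H1, H2, H3 = 0.3073, 0.8935, 1.1064  # constants fitted via Nelder-Mead, as in the text

def J_exact(sigma, num=20001):
    """Evaluate J(sigma) in (7) by numerical integration (rectangle rule)."""
    if sigma < 1e-10:
        return 0.0
    xi = np.linspace(sigma**2 / 2 - 10 * sigma, sigma**2 / 2 + 10 * sigma, num)
    pdf = np.exp(-(xi - sigma**2 / 2)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    dxi = xi[1] - xi[0]
    return 1.0 - float(np.sum(pdf * np.log2(1.0 + np.exp(-xi))) * dxi)

def J(sigma):
    """Approximation (9)."""
    return (1.0 - 2.0 ** (-H1 * sigma ** (2 * H2))) ** H3

def J_inv(I):
    """Approximation (10), the inverse of (9)."""
    return (-np.log2(1.0 - I ** (1.0 / H3)) / H1) ** (1.0 / (2 * H2))

# J_exact and J agree closely over the whole range, consistent with Fig. 3.
```
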
A. Prior Mutual Information

In a parallel system using BPSK modulation, e.g., Fig. 2, the received matched filter output at time $i$ is $r(i) = \sqrt{R E_b}\, y_n(i) + w$, where the variance of the zero-mean Gaussian noise $w$ is $\sigma_w^2 = N_0/2$. The corresponding prior on $y_n(i)$ (calculated by $\mathcal{M}^{-1}$ and fed to decoder $\mathcal{C}_n^{-1}$) is [6]

$$A(y_n(i)) = \ln\frac{p_{G|X}(r(i)|+1)}{p_{G|X}(r(i)|-1)} = \frac{4\sqrt{R E_b}}{N_0}\, r(i) = \frac{\sigma_u^2}{2}\, y_n(i) + u. \quad (11)$$

In (11), $\sigma_u^2 = 8RE_b/N_0 = 8R\gamma_b$ is the variance of the zero-mean Gaussian $u = \frac{4\sqrt{RE_b}}{N_0}\, w$ [6]. The average MI [6] for the priors in (11) is defined as

$$I_{A(y_n)} \triangleq \frac{R_n}{L_n}\sum_{i=1}^{L_n/R_n} I\big(y_n(i); A(y_n(i))\big), \quad (12)$$

and can for a parallel code using BPSK be expressed as

$$I_{A(y_n)} = J\!\left(\sqrt{8R\gamma_b}\right), \quad (13)$$

using (5)-(7). $I_{A(y_n)}$ remains constant during decoding for all $n = 1, 2, \ldots, N$ in a parallel system. Note that (13) is also the BPSK constellation-constrained capacity of an AWGN channel [6].

In a serial system, the priors $A(\mathbf{y}_n)$ are the extrinsic values from the connecting decoder and hence are not Gaussian as in (11). However, simulation experiments have shown that using a Gaussian assumption provides an acceptable approximation to the evolution of the true MIs [6]. Therefore, the Gaussian assumption is applied to all prior values, $A(\mathbf{x}_n)$ and $A(\mathbf{y}_n)$ for all $n = 1, 2, \ldots, N$, in both the serial and the parallel case,

$$A(\mathbf{x}_n) = \frac{\sigma_x^2}{2}\, \mathbf{x}_n + \mathbf{s}_x, \quad (14)$$

$$A(\mathbf{y}_n) = \frac{\sigma_y^2}{2}\, \mathbf{y}_n + \mathbf{s}_y. \quad (15)$$

Here, the elements in $\mathbf{s}_x$ and $\mathbf{s}_y$ are zero-mean Gaussian with variances $\sigma_x^2$ and $\sigma_y^2$, respectively. Using (6) and (12), the average MIs for the prior values in (14)-(15) are expressed as $I_{A(x_n)} = J(\sigma_x)$ and $I_{A(y_n)} = J(\sigma_y)$, respectively.

Fig. 3. The J function (7) and its approximation (9), where $H_1 = 0.3073$, $H_2 = 0.8935$, and $H_3 = 1.1064$. [Plot of $I = J(\sigma)$ versus $\sigma = J^{-1}(I)$.]

B. Extrinsic Mutual Information

The extrinsic average MIs for decoder $n = 1, 2, \ldots, N$, $I_{E(x_n)}$ and $I_{E(y_n)}$, are expressed similarly to (12) and depend on $\mathcal{C}_n$. The extrinsic MIs are calculated using (5) and the conditional PDFs of the extrinsic log-likelihoods, $p_{E|X}(\epsilon|x_n(i))$ and $p_{E|Y}(\epsilon|y_n(i))$. The conditional PDFs are usually estimated from histograms of $E(\mathbf{x}_n)$ and $E(\mathbf{y}_n)$, obtained by Monte Carlo simulations of decoder $\mathcal{C}_n^{-1}$ for specific values of $0 \le I_{A(x_n)} \le 1$ and $0 \le I_{A(y_n)} \le 1$ [6]. In these simulations, the prior values fed to the decoder are modelled according to (14)-(15), where $\sigma_x = J^{-1}\big(I_{A(x_n)}\big)$ and $\sigma_y = J^{-1}\big(I_{A(y_n)}\big)$, respectively.

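As a concrete illustration of this procedure, the following Python sketch (our own, not the paper's code) draws prior log-likelihoods according to the Gaussian model (14) for a target prior MI, and estimates the average MI of a bit/LLR sequence from histograms as in (5):

```python
import numpy as np

H1, H2, H3 = 0.3073, 0.8935, 1.1064
J_inv = lambda I: (-np.log2(1 - I ** (1 / H3)) / H1) ** (1 / (2 * H2))   # (10)

def make_priors(x, I_A, rng=np.random.default_rng(0)):
    """Model priors per (14): A = (sigma^2/2) x + s, with sigma = J^{-1}(I_A)."""
    sigma = J_inv(I_A)
    return sigma**2 / 2 * x + sigma * rng.standard_normal(len(x))

def average_mi(x, llr, bins=100):
    """Histogram-based estimate of the average MI (5) between bits x and LLRs."""
    edges = np.linspace(llr.min(), llr.max(), bins + 1)
    p_pos, _ = np.histogram(llr[x == +1], edges, density=True)
    p_neg, _ = np.histogram(llr[x == -1], edges, density=True)
    dxi = edges[1] - edges[0]
    mi = 0.0
    for p, q in ((p_pos, p_neg), (p_neg, p_pos)):
        m = p > 0
        mi += 0.5 * np.sum(p[m] * np.log2(2 * p[m] / (p[m] + q[m]))) * dxi
    return mi

x = 2 * np.random.default_rng(1).integers(0, 2, 100_000) - 1  # i.i.d. +/-1 bits
print(average_mi(x, make_priors(x, 0.5)))  # should be close to the target 0.5
```
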

C. Extrinsic Information Transfer Functions

The extrinsic MIs are functions, $T_{x_n} : [0,1]^2 \mapsto [0,1]$ and $T_{y_n} : [0,1]^2 \mapsto [0,1]$, of the prior MIs $I_{A(x_n)}$ and $I_{A(y_n)}$ [7],

$$I_{E(x_n)} = T_{x_n}\!\left(I_{A(x_n)}, I_{A(y_n)}\right), \quad (16)$$
$$I_{E(y_n)} = T_{y_n}\!\left(I_{A(x_n)}, I_{A(y_n)}\right). \quad (17)$$

Note that $T_{x_n}(0,0) = T_{y_n}(0,0) = 0$ and $T_{x_n}(1,1) = T_{y_n}(1,1) = 1$. As an example, the EXIT functions (16)-(17) for CC(2/3) are shown in Figs. 4-5.

$I_{A(y_n)}$ is equal to (13) if $\sqrt{RE_b}\, y_n$ is the transmitted BPSK symbol as in Fig. 2. If the inner code (mapper $\mathcal{M}$) is not BPSK, as in Fig. 1, the extrinsic MI depends on the mapping of bits to the $M > 2$ signal points in the constellation and also on $R\gamma_b$,

$$I_{E(x_1)} = T_{\mathcal{M}}\!\left(I_{A(x_1)}, R\gamma_b\right). \quad (18)$$

The EXIT functions (16)-(18) are usually obtained by Monte Carlo simulations of the individual component codes using the Gaussian model in (14) and (15) for the priors, as described above [6]. More recently it has been shown that for certain codes and simple channel models, it is possible to compute the EXIT functions [15, 16].

Average MI is not affected by an interleaver or a deinterleaver. In a system with serially concatenated codes, the prior MIs for decoder $n$ are therefore the extrinsic MIs from the decoders interconnected with decoder $n$ by an interleaver or a deinterleaver, $I_{A(x_n)} = I_{E(y_{n+1})}$, $I_{A(y_n)} = I_{E(x_{n-1})}$, and $I_{A(x_N)} = 0$ (refer to Section II-A and Fig. 1). From (16)-(18) the following $2(N-1)$ mutually coupled EXIT functions can be defined for a system with $N$ serially concatenated codes:

$$I_{E(x_1)} = T_{\mathcal{M}}\!\left(I_{E(y_2)}, R\gamma_b\right), \quad (19)$$
$$I_{E(x_n)} = T_{x_n}\!\left(I_{E(y_{n+1})}, I_{E(x_{n-1})}\right), \quad (20)$$
$$I_{E(y_n)} = T_{y_n}\!\left(I_{E(y_{n+1})}, I_{E(x_{n-1})}\right), \quad (21)$$
$$I_{E(y_N)} = T_{y_N}\!\left(0, I_{E(x_{N-1})}\right), \quad (22)$$

for all $n = 2, 3, \ldots, N-1$.

In a system with parallel concatenated codes, $I_{E(x_n)} = I_{E_n(x)}$ and $I_{A(x_n)} = I_{A_n(x)}$ (refer to Section II-B and Fig. 2). Since the prior values are sums of $N-1$ extrinsic values (3), they are modelled as sums of $N-1$ biased Gaussian random variables, (14). Using (7) and (8), the prior MI becomes [7]

$$I_{A(x_n)} = J\!\left(\sqrt{\sum_{\substack{i=1 \\ i \ne n}}^{N} \left[J^{-1}\!\left(I_{E(x_i)}\right)\right]^2}\right). \quad (23)$$

Fig. 4. EXIT function (16), $I_{E(x_n)} = T_{x_n}(I_{A(x_n)}, I_{A(y_n)})$, for the input bits of CC(2/3). [Surface plot over $0 \le I_{A(x_n)}, I_{A(y_n)} \le 1$.]

Fig. 5. EXIT function (17), $I_{E(y_n)} = T_{y_n}(I_{A(x_n)}, I_{A(y_n)})$, for the output bits of CC(2/3). [Surface plot over $0 \le I_{A(x_n)}, I_{A(y_n)} \le 1$.]

Usually, approximations (9) and (10) are sufficiently accurate. According to (16), (17), (23), and (13), the following $N$ mutually coupled EXIT functions can be defined for a system with $N$ parallel concatenated codes using BPSK modulation:

$$I_{E(x_n)} = T_{x_n}\!\left(J\!\left(\sqrt{\sum_{\substack{i=1 \\ i \ne n}}^{N}\left[J^{-1}\!\left(I_{E(x_i)}\right)\right]^2}\right),\; J\!\left(\sqrt{8R\gamma_b}\right)\right), \quad (24)$$

for all $n = 1, 2, \ldots, N$.

Define the average MI on the decision statistics in (2) and (4) as $I_{D(x)} \triangleq \frac{1}{L}\sum_{i=1}^{L} I\big(x(i); D(x(i))\big)$. In the serial case,

$$I_{D(x)} = I_{E(x_N)} = T_{x_N}\!\left(0, I_{A(y_N)}\right) = T_{x_N}\!\left(0, I_{E(x_{N-1})}\right), \quad (25)$$

and in the parallel case,

$$I_{D(x)} = J\!\left(\sqrt{\sum_{i=1}^{N}\left[J^{-1}\!\left(I_{E(x_i)}\right)\right]^2}\right). \quad (26)$$

$I_{D(x)} = 1.0$ means full information about the source bits, i.e., BER close to zero. The average MI trajectory according to a specific decoder activation schedule may be predicted by repeated application of (19)-(22) (serial system) or (24) (parallel system). With more than two components, it is not immediately obvious whether different activation schedules result in different convergence behavior. Subject to a monotonicity assumption on (16)-(18) (which seems reasonable for any useful code; we suspect it always holds), it turns out that the limit behavior is independent of the activation schedule. This is summarized in Assumption 1 and Theorem 1 (proved in the Appendix).


Assumption 1 (Monotonicity): All EXIT functions $T$ are monotonically non-decreasing, $T\!\left(I_{A(x)} + \epsilon_x, I_{A(y)} + \epsilon_y\right) \ge T\!\left(I_{A(x)}, I_{A(y)}\right)$, if $\epsilon_x \ge 0$ and $\epsilon_y \ge 0$.

Theorem 1: Subject to Assumption 1, the sequence of MI resulting from successive or parallel activations of EXIT functions converges monotonically to a limit point, independent of the actual activation schedule.

Theorem 1 also implies that the MI on the decision statistics will converge according to (25) and (26). The limiting value (independent of the activation schedule) is defined as the convergence point, $I_{D(x)} \to I_D^*$.

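To illustrate Theorem 1, the following Python sketch (entirely our own construction) iterates the coupled parallel recursion (23)-(24) under two different activation orders. The EXIT surfaces `T` are invented monotone stand-ins satisfying Assumption 1, not the measured functions of the example codes:

```python
import numpy as np

H1, H2, H3 = 0.3073, 0.8935, 1.1064
J = lambda s: (1 - 2 ** (-H1 * s ** (2 * H2))) ** H3                      # (9)
J_inv = lambda I: (-np.log2(1 - min(I, 1 - 1e-12) ** (1 / H3)) / H1) ** (1 / (2 * H2))  # (10)

# hypothetical monotone EXIT surfaces, one per decoder (NOT the paper's codes)
T = [lambda a, ch, w=w: 1 - (1 - a) * (1 - ch) ** w for w in (0.6, 0.8, 1.0, 1.2)]

def activate(I_E, n, I_ch):
    """One activation of decoder n: prior MI via (23), extrinsic MI via (24)."""
    sigma = np.sqrt(sum(J_inv(I_E[i]) ** 2 for i in range(len(I_E)) if i != n))
    I_E[n] = T[n](J(sigma), I_ch)

I_ch = J(np.sqrt(8 * 0.25 * 10 ** (1.0 / 10)))  # (13) with R = 1/4, gamma_b = 1.0 dB

for schedule in ([0, 1, 2, 3] * 25, [3, 2, 1, 0] * 25):  # two different orders
    I_E = [0.0] * 4
    for n in schedule:
        activate(I_E, n, I_ch)
    print(np.round(I_E, 4))  # both orders approach the same limit point (Theorem 1)
```

The limit depends only on the set of EXIT surfaces and the channel MI, not on the activation order, which is exactly the content of the theorem.
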
IV. EXIT CHART PROJECTIONS

In an $N = 2$ component code, decoding alternates between the two component decoders [6, 17]. EXIT charts can be used to predict the convergence point and the convergence threshold. The convergence point corresponds to the lowest intersection of the EXIT curves [6]. The convergence threshold is defined as the $\gamma_b$ at which a tunnel between the two curves is opened so that the convergence point is close to one, $I_D^* = 1.0$ [6].

For two component codes [6, 17], the EXIT chart is two-dimensional for a fixed $\gamma_b$. In the serial case [17], the MIs are on the coded bits between the encoders, $\mathbf{x}_1 = \pi_1(\mathbf{y}_2)$, while in the parallel case [6] the MIs are on the source bits, $\mathbf{x}_1 = \pi_1(\mathbf{x})$ and $\mathbf{x}_2 = \pi_2(\mathbf{x})$. A vertical step in the EXIT chart for a two-code system represents activation of decoder one, $\mathcal{C}_1^{-1}$. A horizontal step in the same EXIT chart represents activation of decoder two, $\mathcal{C}_2^{-1}$.

In a system with $N = 3$ parallel components, there are three different extrinsic values, $E(\mathbf{x}_1)$, $E(\mathbf{x}_2)$, and $E(\mathbf{x}_3)$, connecting the decoders (cf. Fig. 2 where there are four extrinsic values). According to (24), the EXIT chart is three-dimensional for fixed $\gamma_b$ [7]. The convergence threshold is now the $\gamma_b$ value that opens a tube between the three surfaces so that the trajectory can go from $I_{E(x_1)} = I_{E(x_2)} = I_{E(x_3)} = 0$ to $I_{E(x_1)} = I_{E(x_2)} = I_{E(x_3)} = 1$ [7].

For $N$ parallel concatenated codes, there will be $N$ extrinsic values connecting the decoders and the EXIT chart will be $N$-dimensional. In the special case when all parallel component codes are identical (symmetric codes), the EXIT chart can be visualized in two dimensions, $I_{E(x_1)}$ versus $I_{E(x_2)}$, by setting $I_{E(x_n)} = I_{E(x_2)}$ for all $1 < n \le N$. The EXIT functions are then [7]

$$I_{E(x_1)} = T_{x_1}\!\left(J\!\left(\sqrt{N-1}\; J^{-1}\!\left(I_{E(x_2)}\right)\right),\; J\!\left(\sqrt{8R\gamma_b}\right)\right),$$
$$I_{E(x_2)} = I_{E(x_1)}.$$

In a system with $N = 3$ serially concatenated codes, there are four extrinsic values, $E(\mathbf{y}_3)$, $E(\mathbf{x}_2)$, $E(\mathbf{y}_2)$, and $E(\mathbf{x}_1)$, connecting the three decoders and therefore four extrinsic MIs, (19)-(22) and Fig. 1. Hence, for a fixed $\gamma_b$, the EXIT chart for three serially concatenated codes is four-dimensional and hard to visualize. For a system with $N$ serial components, the EXIT chart will have $2(N-1)$ dimensions according to (19)-(22). Even if several of the serial component codes are the same, the simplification described above for symmetric parallel codes is not possible.

Convergence analysis for $N > 2$ component codes (serial or parallel, symmetric or asymmetric) can however be accommodated by projecting the EXIT functions back onto two dimensions [12, 18]. The approach is as follows.

The convergence threshold in $\gamma_b$ is found when $I_{D(x)} = 1.0$. Since $I_{D(x)} = T_{x_N}\!\left(0, I_{E(x_{N-1})}\right)$ in a serial system (25), it suffices to analyze the behavior of $I_V \triangleq I_{E(x_{N-1})}$ versus $I_H \triangleq I_{E(y_N)}$. The indexes $V$ and $H$ respectively stand for the vertical and horizontal axes in the EXIT chart.

To obtain $I_{D(x)} = 1.0$ (26) in a parallel system, at least two constituent codes need to be recursive [19], as in the parallel example code. Without loss of generality, the two codes with highest indexes, $\mathcal{C}_{N-1}$ and $\mathcal{C}_N$, in a parallel system are assumed to be recursive. The behavior of the whole system can then be determined by analyzing $I_V \triangleq I_{E(x_{N-1})}$ versus $I_H \triangleq I_{E(x_N)}$. Note that the definition of $I_H$ depends on whether the system is serial or parallel.

The convergence analysis can now be made with a two-dimensional EXIT chart, $I_V = T_V(I_H, R\gamma_b)$ (vertical axis) versus $I_H = T_H(I_V, R\gamma_b)$ (horizontal axis). These two transfer functions can be found, for a specific $R$ and $\gamma_b$, by the following procedure (let $\Delta$ be a small constant):

Algorithm 1 (EXIT Chart Projections):
1) Let the extrinsic MI for all codes be zero, and $I_H = 0$.
2) Activate all decoders except decoder $N$ until $I_V$ has converged to a fixed value and save $I_V = T_V(I_H, R\gamma_b)$.
3) Let $I_H = I_H + \Delta$. If $I_H \le 1.0$, go to Step 2.
4) Let the extrinsic MI for all codes be zero and $I_V = 0$.
5) Activate all decoders except decoder $N-1$ until $I_H$ has converged to a fixed value and save $I_H = T_H(I_V, R\gamma_b)$.
6) Let $I_V = I_V + \Delta$. If $I_V \le 1.0$, go to Step 5.
7) Output the two EXIT functions: $I_V = T_V(I_H, R\gamma_b)$ from Step 2 and $I_H = T_H(I_V, R\gamma_b)$ from Step 5.

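A minimal Python sketch of Algorithm 1 for the parallel case (24) could look as follows; `exit_fns` is a list of hypothetical callables standing in for $T_{x_n}(\cdot,\cdot)$, and the structure and names are our own:

```python
import numpy as np

H1, H2, H3 = 0.3073, 0.8935, 1.1064
J = lambda s: (1 - 2 ** (-H1 * s ** (2 * H2))) ** H3
J_inv = lambda I: (-np.log2(1 - min(I, 1 - 1e-12) ** (1 / H3)) / H1) ** (1 / (2 * H2))

def converge(I_E, active, exit_fns, I_ch, tol=1e-9):
    """Activate the decoders in `active` (updates (23)-(24)) until no extrinsic
    MI changes; decoders not in `active` keep their MI fixed (clamped)."""
    while True:
        delta = 0.0
        for n in active:
            sigma = np.sqrt(sum(J_inv(I_E[i]) ** 2
                                for i in range(len(I_E)) if i != n))
            new = exit_fns[n](J(sigma), I_ch)
            delta, I_E[n] = max(delta, abs(new - I_E[n])), new
        if delta < tol:
            return I_E

def project(exit_fns, I_ch, step=0.01):
    """Algorithm 1, parallel case: sample I_V = T_V(I_H) (Steps 1-3) and
    I_H = T_H(I_V) (Steps 4-6) on a grid of resolution `step` (the Delta)."""
    N = len(exit_fns)
    grid = np.arange(0.0, 1.0 + step / 2, step)
    T_V, T_H = [], []
    for I_H in grid:                       # clamp I_E(x_N), activate the rest
        I_E = [0.0] * N
        I_E[N - 1] = I_H
        T_V.append(converge(I_E, range(N - 1), exit_fns, I_ch)[N - 2])
    for I_V in grid:                       # clamp I_E(x_{N-1}), activate the rest
        I_E = [0.0] * N
        I_E[N - 2] = I_V
        active = [n for n in range(N) if n != N - 2]
        T_H.append(converge(I_E, active, exit_fns, I_ch)[N - 1])
    return grid, T_V, T_H
```

Plotting `T_V` against `grid` and `grid` against `T_H` in the same axes gives the projected chart; an open tunnel between the two curves indicates convergence, as in Figs. 6-7.
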
Step 2 and Step 5 use the EXIT functions in (19)-(22) and (24) to find the convergence values in the serial and the parallel case, respectively. By Assumption 1 and Theorem 1 the actual activation schedule is irrelevant, as long as all $N-1$ decoders are activated until no further gain is possible (see also the Appendix). For example, the conventional activation schedule $1, 2, \ldots, N-1, 1, 2, \ldots, N-1, 1, 2, \ldots$ can be used in Step 2 in Algorithm 1, since it will give the same result as any other chosen schedule that includes all decoders except decoder $N$. The small value $\Delta$ in Steps 3 and 6 is chosen arbitrarily to give sufficient resolution of the final EXIT chart.

The projection is visualized by plotting $T_V(I_H, R\gamma_b)$ versus $I_H$ in the same two-dimensional EXIT chart as $I_V$ versus $T_H(I_V, R\gamma_b)$. A similar projection was independently developed specifically for three serially concatenated codes in [10].

Fig. 6. EXIT chart projection of the serial example code at $\gamma_b = 2.0$ dB. [Axes: $I_H = I_{E(y_3)}, I_{E(y_2)}$ (horizontal) and $I_V = I_{E(x_2)}, I_{E(x_1)}$ (vertical); curves: $I_{E(y_3)} = T_{y_3}(0, I_{E(x_2)})$ (solid), $I_{E(x_1)} = T_{\mathcal{M}}(I_{E(y_2)}, R\gamma_b)$ (dashed-dotted), and the projection $I_{E(x_2)} = T_V(I_{E(y_3)}, R\gamma_b)$ (dashed).]

The solid curve in Fig. 6 is the EXIT function for CC(2, 3) used as an outer code in a serial system. According to (22), $I_H = I_{E(y_N)} = T_{y_N}\!\left(0, I_{E(x_{N-1})}\right) = T_{y_N}(0, I_V)$. This shows that, independently of how many codes are serially concatenated, $I_H$ is fixed (as in a serial system with two codes) and does not depend on $R$ or $\gamma_b$. The dashed-dotted curve in Fig. 6 is the EXIT function for the 8PSK mapper (19) in the serial example code in Section II-C at $\gamma_b = 2.0$ dB [5]. If these two codes were directly concatenated, making it a system with two serially concatenated codes, no projection would be necessary. A vertical step between the solid curve and the dashed-dotted curve represents an activation of the 8PSK demapper, while a horizontal step between the dashed-dotted curve and the solid curve represents an activation of the outer decoder. In this scenario, the performance would be very poor due to the presence of a fixed point at about $I_{E(x_1)} = 0.55$ bits [6].

In the serial example code in Section II-C the CC(2/3) is inserted between the 8PSK mapper and the outer CC(2, 3). The complete EXIT chart is now four-dimensional, (19)-(22). By following the procedure of Algorithm 1 and using the EXIT functions $T_{x_n}$ and $T_{y_n}$ for the CC(2/3) shown in Figs. 4-5, the resulting projection is shown in Fig. 6 as the dashed curve. A vertical step between the solid curve and the dashed curve

represents an unspecified number of activations between the inner demapper and the intermediate decoder, until nothing more can be gained. A horizontal step between the dashed line and the thick solid line represents a single activation of the outer decoder. This projected EXIT chart can therefore be used to determine which points are attainable, but not the number of activations required. The convergence threshold is now easily determined; in this example it is very close to 2.0 dB.

Fig. 7. EXIT chart projection of the parallel example code at $\gamma_b = -0.6$ dB. [Axes: $I_H = I_{E(x_4)}$ and $I_V = I_{E(x_3)}$; curves: $I_{E(x_3)} = T_V(I_{E(x_4)}, R\gamma_b)$ and $I_{E(x_4)} = T_H(I_{E(x_3)}, R\gamma_b)$.]

Fig. 7 shows the projection of the parallel example code in Section II-C at $\gamma_b = -0.6$ dB, which seems to be close to the convergence threshold. The projection is made by using the EXIT functions $T_{x_n}$ and $T_{y_n}$ (similar to the ones in Figs. 4-5) for all four components. Although no systematic bits are used, decoding can successfully start since $\mathcal{C}_3$: CC(2/3) has a non-zero extrinsic MI, $I_{E(x_3)} \ne 0$ when $I_{A(y_3)} \ne 0$ [12], which can be concluded from Fig. 4. A vertical step between the solid curve and the dashed curve in Fig. 7 represents activations of $\mathcal{C}_1^{-1}$, $\mathcal{C}_2^{-1}$, and $\mathcal{C}_3^{-1}$ until nothing more can be gained. A horizontal step represents activations of $\mathcal{C}_1^{-1}$, $\mathcal{C}_2^{-1}$, and $\mathcal{C}_4^{-1}$ until nothing more can be gained.

In a system with $N$ codes, a vertical step represents an unspecified number of activations of all decoders except $\mathcal{C}_N^{-1}$, and a horizontal step represents activations of all decoders except $\mathcal{C}_{N-1}^{-1}$. The convergence threshold, but not the number of activations, can easily be determined for the two example codes by the projections in Figs. 6-7.

Note that a horizontal step in a serial system represents a single activation of the outer decoder, since none of the $N-2$ innermost decoders have any effect on $I_H = T_{y_N}\!\left(0, I_{E(x_{N-1})}\right)$ as long as $\mathcal{C}_{N-1}^{-1}$ is not activated. In a parallel system, both $I_V$ and $I_H$ will depend on $R$ and $\gamma_b$ according to (24).

V. OPTIMAL SCHEDULING

In a system with more than two components it is always favorable to use the most recently updated information passed from all connected decoders [8, 19]. If all codes satisfy Assumption 1, and all decoders are activated sufficiently many times, the same convergence point will be reached independent of the activation schedule, according to Theorem 1. The particular schedule, however, determines the cost of convergence. Here, the term cost can refer to computational complexity, number of activations, decoding delay, power consumption, or any other measure of choice.

There are many ways of choosing the activation schedule. One of the most commonly used periodic activation schedules for $N$ concatenated codes is $1, 2, \ldots, N, 1, 2, \ldots, N, \ldots$ (schedule A) [4, 7]. A similar periodic activation schedule is $1, 2, \ldots, N-1, N, N-1, \ldots, 2, 1, 2, \ldots$ (schedule B). Another suggestion is to activate the decoder for which the gain in MI is maximal [10].

It is of interest to reach the convergence point with the lowest possible cost. For example, let $c_n > 0$ be a constant proportional to the computational complexity of decoder $n$, $1 \le n \le N$. We can now define the optimal decoder schedule, which reaches the convergence point with the lowest total computational complexity $c$ [12, 18, 20]. This optimal schedule depends on the codes and on $\gamma_b$. If $c_n = 1$ for all $1 \le n \le N$, this schedule corresponds to using a minimum number of activations. An optimal decoder schedule here means that, based on the EXIT functions, no other decoder schedule can reach the convergence point using less complexity.

All possible activation schedules can be described by an $N$-state trellis (one state for each decoder). The trellis for the first six activations in a system with three concatenated codes is shown in Fig. 8. Let $a_k$ denote a state at activation $k$. Thus $a_k = n$ corresponds to using $\mathcal{C}_n^{-1}$ at the $k$-th activation. There is a branch between every $a_{k-1} = n'$ and $a_k = n$ for $1 \le n' \ne n \le N$ (nothing is gained by repeated activation of a decoder, and no decoders are allowed to be activated at the same time [8]).

A path (corresponding to a specific activation schedule) entering state $a_k = n$ is denoted by $\mathbf{p}_k = (p_1, p_2, \ldots, p_k)$, where $p_j \in \{1, 2, \ldots, N\}$ for $1 \le j \le k-1$ and $p_k = n$. Let $\mathbf{v} = (v_1, v_2, \ldots, v_F)$ be the associated metric-vector¹ which has as elements all the extrinsic MIs, the MI on the decision statistics, and the total decoding complexity, $c$, resulting from the activations along the path $\mathbf{p}_k$.

In a system with $N$ serially concatenated codes there are $F - 2 = 2(N-1)$ different extrinsic MIs (19)-(22), arranged together with $I_{D(x)}$ and $c$ in the following arbitrary order,

$$\mathbf{v} = \left(I_{D(x)}, c, I_{E(x_1)}, \ldots, I_{E(x_{N-1})}, I_{E(y_2)}, \ldots, I_{E(y_N)}\right).$$

In a system with $N$ parallel concatenated codes there are $F - 2 = N$ different extrinsic MIs (24) and

$$\mathbf{v} = \left(I_{D(x)}, c, I_{E(x_1)}, \ldots, I_{E(x_N)}\right).$$

In both cases, convergence to $v_1 = I_D^*$ is desired. Element $v_2$ represents the total complexity, $c$, using activation schedule $\mathbf{p}_k$,

$$v_2 = c = \sum_{j=1}^{k} c_{p_j}. \quad (27)$$

¹This is not a metric in a metric-space sense.

Fig. 8. The trellis for the decoding schedule of three codes concatenated in serial (solid paths) or in parallel (solid and dashed paths). [Trellis with states $\mathcal{C}_1^{-1}$, $\mathcal{C}_2^{-1}$, $\mathcal{C}_3^{-1}$ over activations $k = 1, 2, \ldots, 6$.]

For each state $n$ define the metric update function $f_n : \mathbb{R}^F \mapsto \mathbb{R}^F$, which produces a new metric-vector using the extrinsic MIs and the total complexity (all included in $\mathbf{v}$) as input arguments.

In a system with three serially concatenated codes, the decoding trellis has three states (see Fig. 8) and the metric-vector is six-dimensional,

$$\mathbf{v} = \left(I_{D(x)}, c, I_{E(x_1)}, I_{E(x_2)}, I_{E(y_2)}, I_{E(y_3)}\right). \quad (28)$$

The metric update functions are, according to (25), (27) and (19)-(22),

$$f_1(\mathbf{v}) = \left(v_1,\; v_2 + c_1,\; T_{\mathcal{M}}(v_5, R\gamma_b),\; v_4,\; v_5,\; v_6\right), \quad (29)$$
$$f_2(\mathbf{v}) = \left(v_1,\; v_2 + c_2,\; v_3,\; T_{x_2}(v_6, v_3),\; T_{y_2}(v_6, v_3),\; v_6\right), \quad (30)$$
$$f_3(\mathbf{v}) = \left(T_{x_3}(0, v_4),\; v_2 + c_3,\; v_3,\; v_4,\; v_5,\; T_{y_3}(0, v_4)\right). \quad (31)$$
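For concreteness, (28)-(31) translate directly into code. The sketch below (ours) assumes hypothetical callables `T_M`, `T_x2`, `T_y2`, `T_x3`, `T_y3` for the EXIT functions, complexities `c = (c1, c2, c3)`, and the product `Rg` standing for $R\gamma_b$:

```python
# v = (I_D, c, I_E(x1), I_E(x2), I_E(y2), I_E(y3)) as in (28), 0-indexed below
def make_update_fns(T_M, T_x2, T_y2, T_x3, T_y3, c, Rg):
    f1 = lambda v: (v[0], v[1] + c[0], T_M(v[4], Rg), v[3], v[4], v[5])   # (29)
    f2 = lambda v: (v[0], v[1] + c[1], v[2],
                    T_x2(v[5], v[2]), T_y2(v[5], v[2]), v[5])             # (30)
    f3 = lambda v: (T_x3(0.0, v[3]), v[1] + c[2], v[2], v[3], v[4],
                    T_y3(0.0, v[3]))                                      # (31)
    return f1, f2, f3
```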

The metric-vector in a system with four parallel concatenated codes is also six-dimensional,

$$\mathbf{v} = \left(I_{D(x)}, c, I_{E(x_1)}, I_{E(x_2)}, I_{E(x_3)}, I_{E(x_4)}\right). \quad (32)$$

The metric update functions² are similar to (29)-(31), and use (26), (27), and (24) to update $\mathbf{v}$.

²In the Appendix, the first two elements of $\mathbf{v}$ are not needed and are thus removed. Similarly, the function $f_n(\mathbf{v})$ is only applied to the remaining elements. Despite this difference, the same notation is used.

Let $\mathcal{P}_k$ and $\mathcal{V}_k$ be the sets of all surviving paths/metrics after $k$ activations and let $\mathcal{P}_{k,n} \subseteq \mathcal{P}_k$ and $\mathcal{V}_{k,n} \subseteq \mathcal{V}_k$ be the sets of paths/metrics entering state $n$ after $k$ activations. The exhaustive search proceeds as follows.

Algorithm 2 (Brute-force Exhaustive Search):
1) Let $k = 1$. Initialize $\mathcal{P}_k$ to only include the $N$ paths $\mathcal{P}_k = \{(1), (2), \ldots, (N)\}$, and initialize $\mathcal{V}_k$ to only include the corresponding metric-vectors $\mathcal{V}_k = \{f_1(\mathbf{0}), f_2(\mathbf{0}), \ldots, f_N(\mathbf{0})\}$. Define $\mathbf{p}^*$ and $\mathbf{v}^*$ to be a candidate path/metric with a high initial complexity, $v_2^* = \infty$.
2) Increment $k$ by one. For each state $n'$, extend every path $\mathbf{p}_{k-1} \in \mathcal{P}_{k-1,n'}$, with metric $\mathbf{v}'$, along each state transition $n' \to n$, $1 \le n' \ne n \le N$, producing the new path $\mathbf{p}_k = (\mathbf{p}_{k-1}, n)$, which is added to $\mathcal{P}_{k,n}$ with the corresponding updated metric $\mathbf{v} = f_n(\mathbf{v}')$ added to $\mathcal{V}_{k,n}$.
3) Reduce the set $\mathcal{V}_k$ to only include metrics that have a lower complexity than the complexity of the candidate path, i.e., $\mathbf{v} \in \mathcal{V}_k$ if $v_2 < v_2^*$, and reduce $\mathcal{P}_k$ accordingly.
4) Define a new set of metrics $\mathcal{V}^*$ containing all metrics $\mathbf{v} \in \mathcal{V}_k$ that have reached the convergence point, $v_1 = I_D^*$. If $|\mathcal{V}^*| \ne 0$, find the metric with the lowest complexity, $\mathbf{v}^* = \arg\min_{\mathbf{v} \in \mathcal{V}^*} v_2$, and replace the candidate path $\mathbf{p}^*$ by the path corresponding to $\mathbf{v}^*$.
5) Go to Step 6.
6) If $|\mathcal{V}_k| = 0$, output $\mathbf{p}^*$ as the optimal path with a final set of extrinsic MIs and a total complexity in $\mathbf{v}^*$. If $|\mathcal{V}_k| \ne 0$, go to Step 2.

In the serial case, it is sufficient to initialize $\mathcal{P}_k = \{(1)\}$ since the optimal path must start by activating the demapper. This means that the dashed paths in Fig. 8 can be removed for a system with three serially concatenated codes. The general algorithm above will in a serial case automatically remove all paths that do not start with $p_1 = 1$, since $f_n(\mathbf{0})$ will have zero extrinsic MIs in $\mathbf{v}$ if $n \ne 1$ according to (20)-(22). $f_1(\mathbf{0})$ is the only metric update function in a serial system that will produce a non-zero extrinsic MI ($v_3 \ne 0$), since it includes the EXIT function of the innermost encoder (mapper) (19).

Using this brute-force exhaustive search, the total number of paths $|\mathcal{V}_k|$ grows exponentially with $k$ until the first candidate path is found, and the search is therefore infeasible for large $k$.

To reduce the search complexity in Algorithm 2, where the metrics are $F$-dimensional, an adaptation of the well-known Viterbi algorithm [21] can be used. In order to do this we need to define a partial order $\succeq$ on the metrics with the following properties: (a) the metrics are monotonically non-decreasing, and (b) deletion of paths entering the same state with smaller metrics according to $\succeq$ does not affect the final outcome.

Definition 1 (Domination): Define a partial ordering on $\mathbb{R}^F$ as follows. For $\mathbf{v}, \mathbf{v}' \in \mathbb{R}^F$, $\mathbf{v} \succeq \mathbf{v}'$ if and only if $v_j \ge v'_j$ for all $3 \le j \le F$, and $v_2 \le v'_2$. We say $\mathbf{v}$ dominates $\mathbf{v}'$ if $\mathbf{v}$ has all $F - 2$ extrinsic MIs at least as high as $\mathbf{v}'$ and no higher total complexity.

The operator $\succeq$ does not operate on the first element of $\mathbf{v}$ since that is the element the stopping criterion is based on. Define the function $\operatorname{dom}$, operating on a set of metrics by discarding all dominated metrics. Thus $\mathbf{v}' \in \mathcal{V}_k$ will be discarded if and only if there is another metric $\mathbf{v} \in \mathcal{V}_k$ with $\mathbf{v} \succeq \mathbf{v}'$. Note that $\mathcal{V} = \operatorname{dom} \mathcal{V}'$ only implies $1 \le |\mathcal{V}| \le |\mathcal{V}'|$.

TABLE I
THE FIRST 30 ACTIVATIONS FOR DIFFERENT ACTIVATION SCHEDULES FOR THE PARALLEL EXAMPLE CODE AT $\gamma_b = 1.0$ dB.

Optimal: 3, 1, 2, 3, 1, 2, 3, 1, 4, 3, 2, 1, 3, 4, 2, 3, 1, 4, 3, 4, 3, 1, 2, 4, 1, 3, 2, 1, 3, 4, ...
A: 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, ...
B: 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4, 3, 2, ...
C: 3, 1, 2, 1, 3, 2, 1, 4, 3, 1, 2, 1, 3, 4, 1, 2, 3, 1, 4, 2, 3, 1, 2, 4, 1, 3, 2, 1, 3, 4, ...
D: 1, 2, 3, 1, 2, 3, ..., 1, 2, 4, 1, 2, 4, ..., 1, 2, 3, 1, 2, 3, ..., 1, 2, 4, 1, 2, 4, ...

Fig. 9. Performance in BER of the serial example code versus the total computational complexity for different $\gamma_b$. [Curves at $\gamma_b$ = 2.0, 2.2, 2.6, and 4.0 dB; schedules: Optimal; A: 1,2,3,...; B: 1,2,3,2,...; C: fixed to the optimal schedule at 2.0 dB.]

Fig. 10. Performance in BER of the parallel example code versus the total computational complexity for different $\gamma_b$. [Curves at $\gamma_b$ = -0.6, -0.4, 0.0, and 1.0 dB; schedules: Optimal; A: 1,2,3,4,...; B: 1,2,3,4,3,2,...; C: fixed to the optimal schedule at -0.6 dB.]


Under Assumption 1, (25)-(26), and (27), the metric-vectors are monotonically non-decreasing, $\mathbf{v} = f_n(\mathbf{v}') = \mathbf{v}' + \boldsymbol{\delta}$ for some $\boldsymbol{\delta} \ge \mathbf{0}$ (see also the Appendix). This implies that there exists only one convergence point $v_1 = I_D^*$, since the initial value of the extrinsic MI is zero. Furthermore, suppose the metric $\mathbf{v}$ for a path $\mathbf{p}_k$ is dominated by the metric $\mathbf{v}'$ ($\mathbf{v}' \succeq \mathbf{v}$) for some other path $\mathbf{p}'_k$ entering the same state $n$ at activation $k$, $p_k = p'_k = n$. Then the metric for the extended path $\mathbf{p}'_{k+1} = (\mathbf{p}'_k, i)$, $1 \le i \le N$, is $f_i(\mathbf{v}')$ and the metric for the path $\mathbf{p}_{k+1} = (\mathbf{p}_k, i)$ is $f_i(\mathbf{v})$. Since $\mathbf{v}' \succeq \mathbf{v}$, Assumption 1 gives that $f_i(\mathbf{v}') \succeq f_i(\mathbf{v})$ and we may therefore remove $\mathbf{p}_k$ from consideration (see also the Appendix). Thus, Step 5 in Algorithm 2 can be modified to include deletion of dominated paths entering the same state, without affecting the final outcome.

Algorithm 3 (Viterbi Search): Retain all steps from Algorithm 2, but change Step 5 to:
5) Delete dominated metrics, i.e., let $\mathcal{V}_{k,n} = \operatorname{dom} \mathcal{V}_{k,n}$ for all $1 \le n \le N$, and remove the corresponding paths from $\mathcal{P}_{k,n}$.

Algorithm 3 can be used as long as all codes have non-decreasing extrinsic MIs according to Assumption 1. This requirement is easily checked by inspection of the EXIT functions, e.g., the functions in Figs. 4-5. Algorithm 3 does not remove the possibility of exponential growth of the number of retained paths, but we have observed in practice that the number of required paths is usually quite small.
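The following Python sketch (our own illustration, not the authors' implementation) combines Algorithms 2 and 3: `fns` are metric update functions such as those in the sketch after (31), `F` is the metric length, `I_D_star` the convergence point, and a small tolerance stands in for exact attainment of $I_D^*$:

```python
def dominates(v, w):
    """Definition 1: v dominates w if all extrinsic MIs (elements 3..F) are at
    least as high and the complexity (element 2) is not higher."""
    return v[1] <= w[1] and all(vi >= wi for vi, wi in zip(v[2:], w[2:]))

def dom(items):
    """Discard all dominated (metric, path) pairs from one state's set."""
    return [(v, p) for v, p in items
            if not any(dominates(w, v) and w != v for w, _ in items)]

def viterbi_schedule(fns, F, I_D_star, tol=1e-6):
    """Search for the cheapest activation schedule reaching v1 = I_D*."""
    N = len(fns)
    states = {n: [(fns[n](tuple([0.0] * F)), (n,))] for n in range(N)}  # k = 1
    best = (float("inf"), None)                                         # (v2*, p*)
    while any(states.values()):
        new = {n: [] for n in range(N)}
        for n_prev, items in states.items():
            for v, p in items:
                for n in range(N):
                    if n == n_prev:
                        continue                    # no repeated activations
                    w = fns[n](v)
                    if w[0] >= I_D_star - tol:      # reached the convergence point
                        if w[1] < best[0]:
                            best = (w[1], p + (n,))
                    elif w[1] < best[0]:            # Step 3: complexity pruning
                        new[n].append((w, p + (n,)))
        states = {n: dom(items) for n, items in new.items()}            # Step 5
    return best  # (total complexity, optimal schedule), 0-based decoder indices
```

The `dom` pruning of Step 5 is what keeps the surviving sets small in practice, as noted above.
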

VI. NUMERICAL EXAMPLES

The simulated BER performance of the two examples in Section II-C versus total complexity is shown in Figs. 9-10 for different $\gamma_b$. In the simulations, the source bits are divided into blocks of size $L = 10^5$. Activation schedule C is fixed to the optimal activation schedule at the convergence threshold (2.0 dB in the serial example code and $-0.6$ dB in the parallel example code). This schedule is found using Algorithm 3. Schedules A, B, and C are fixed for all $\gamma_b$, in contrast to the optimal schedule. Unfortunately, no general structure has been found for the optimal activation schedule, as can be concluded from Table I. This schedule depends on the EXIT functions of all codes in the system and on $\gamma_b$.

The complexity values in the serial example code used in Fig. 9 (proportional to the decoding time) are $c_1 = 54$, $c_2 = 30$, and $c_3 = 17$. The convergence threshold for this system is around 2.0 dB, as predicted by Fig. 6. From Fig. 9 it can be concluded that for all $\gamma_b \ge 2.2$ dB, 30-60% in decoding complexity can be saved by choosing the optimal schedule instead of schedule B, while 30-40% can be saved compared to schedule A, for all BER $< 10^{-2}$. Fig. 9 also shows that using the fixed activation schedule C for all values of $\gamma_b$ gives almost the same complexity as if the optimal schedule were chosen. From Fig. 9 it is obvious that the same performance will eventually be reached independent of the activation schedule of the decoders.

In the parallel example, the component codes have the same rate and structure, so the decoding complexity is proportional to the number of states, $c_1 = 2$, $c_2 = 4$, $c_3 = 2$, and $c_4 = 8$.

Fig. 11. EXIT chart projection of the parallel example code at $\gamma_b = 1.0$ dB, together with the average decoding trajectory for different activation schedules. The markers correspond to the five schedules given in Table I. [Axes: $I_H = I_{E(x_4)}$ and $I_V = I_{E(x_3)}$.]

Fig. 10 shows that for this parallel system, 25-50% in decoding complexity can be saved by choosing the optimal schedule instead of schedule B, and 10-20% can be saved compared to schedule A. The fixed schedule C seems to be somewhere between the optimal schedule and schedule A. To illustrate the difference between the activation schedules, all schedules for the parallel example code at $\gamma_b = 1.0$ dB are given in Table I. In Fig. 10, where the BER reaches equilibrium after many activations, it is even more obvious than in Fig. 9 that the same performance will always be reached independent of the chosen activation schedule of the decoders.

Fig. 11 shows the EXIT chart projection of the parallel example code at $\gamma_b = 1.0$ dB. The average decoding trajectories using the same activation schedules as in Fig. 10 and Table I are also included. Schedule D (marked with diamonds) is one of the schedules that reach the curves in the projected EXIT chart, as explained in Section IV. The average decoding trajectories in Fig. 11 are all based on the EXIT functions of the component codes involved. Fig. 11 shows clearly that different schedules give different average decoding trajectories, which all lie within the two curves in the projected EXIT chart. It also illustrates that the same convergence point is reached independent of the decoding schedule, which is also concluded from Fig. 10.

The savings in percentage stated here are just examples of how much can be saved by choosing the optimal schedule instead of some other activation schedule. For other combinations of component codes than the ones used in these two examples, the savings can be larger or smaller.

VII. CONCLUSION

We considered the problem of iterative decoding of multiple concatenated codes with more than two components. For such codes, the activation schedule of the component decoders is not obvious. Furthermore, EXIT chart analysis is not straightforward.

We have proposed a technique for projecting several three-dimensional EXIT functions onto two dimensions in order to determine the convergence threshold for an arbitrary number, $N$, of multiple concatenated codes. This projection can be used as a design tool to find combinations of codes that give a desired performance. We have also described a Viterbi-like algorithm that finds the activation schedule for the component decoders giving the lowest possible decoding complexity. The only requirement to perform both the projection and to find the optimal activation schedule is that the EXIT functions of the component codes, $I_{E(x_n)} = T_{x_n}\!\left(I_{A(x_n)}, I_{A(y_n)}\right)$ and $I_{E(y_n)} = T_{y_n}\!\left(I_{A(x_n)}, I_{A(y_n)}\right)$, are known and that they are monotonically non-decreasing.

The results from the two example codes presented here show that 10-60% in decoding complexity can be saved by choosing the optimal schedule compared to some fixed activation schedule. The decoding complexity can be substituted with decoding delay or power consumption depending on how the constants in the search algorithm are chosen.

The results also show that instead of using the periodic activation schedule $1, 2, \ldots, N, 1, 2, \ldots, N, \ldots$, the decoder complexity can be reduced if the optimal activation schedule at the convergence threshold is used for all $\gamma_b$. In fact, in the serial example code, this activation schedule gives similar performance to the optimal activation schedule (which depends on $\gamma_b$).

From these two examples it is obvious that the computational complexity of iterative decoding can be substantially reduced, without affecting the performance, by wisely choosing the decoding schedule.

APPENDIX

Let $\mathbf{v} = (v_1, v_2, \ldots, v_N) \in [0,1]^N$ be an $N$-vector with elements $0 \le v_n \le 1$. By assumption, let the functions $g_n : [0,1]^N \mapsto [0,1]$, $n = 1, 2, \ldots, N$, each satisfy

$$\mathbf{v}' \ge \mathbf{v} \implies g_n(\mathbf{v}') \ge g_n(\mathbf{v}), \quad (33)$$

where henceforth for vectors, $\mathbf{v}' \ge \mathbf{v}$ means $v'_n \ge v_n$, $n = 1, 2, \ldots, N$.

Define $\mathbf{f} : [0,1]^N \mapsto [0,1]^N$ and $\mathbf{f}_n : [0,1]^N \mapsto [0,1]^N$, for $n = 1, 2, \ldots, N$, according to

$$\mathbf{f}(\mathbf{v}) = \left(g_1(\mathbf{v}), g_2(\mathbf{v}), \ldots, g_N(\mathbf{v})\right), \quad (34)$$
$$\mathbf{f}_n(\mathbf{v}) = \left(v_1, \ldots, v_{n-1}, g_n(\mathbf{v}), v_{n+1}, \ldots, v_N\right). \quad (35)$$

Note that the assumption (33) implies the relations

$$\mathbf{v}' \ge \mathbf{v} \implies \mathbf{f}(\mathbf{v}') \ge \mathbf{f}(\mathbf{v}) \quad (36)$$

and

$$\mathbf{v}' \ge \mathbf{v} \implies \mathbf{f}_n(\mathbf{v}') \ge \mathbf{f}_n(\mathbf{v}). \quad (37)$$

Define a region $\Omega_f$ as

$$\Omega_f \triangleq \{\mathbf{v} : \mathbf{v} \le \mathbf{f}(\mathbf{v})\}. \quad (38)$$

The region in (38) is referred to as the feasible region of $\mathbf{f}$. The definitions in (34), (35), and (38) together with assumption (33) now imply that

$$\mathbf{f}(\mathbf{v}) \ge \mathbf{f}_n(\mathbf{v}), \quad \text{if } \mathbf{v} \in \Omega_f, \quad (39)$$

for any $n = 1, 2, \ldots, N$.

Given a sequence $\mathbf{p}_K = (p_1, p_2, \ldots, p_K)$, where $p_k \in \{1, 2, \ldots, N\}$, and an initial point $\mathbf{v}^0 = \mathbf{s}^0 \in \Omega_f$, define the following two sequences for $k = 1, 2, \ldots, K$:

$$\mathbf{v}^k = \left(v_1^k, v_2^k, \ldots, v_N^k\right) \triangleq \mathbf{f}(\mathbf{v}^{k-1}), \quad (40)$$
$$\mathbf{s}^k = \left(u_1^k, u_2^k, \ldots, u_N^k\right) \triangleq \mathbf{f}_{p_k}(\mathbf{s}^{k-1}). \quad (41)$$

Thus, $\mathbf{v}^k$ is a sequence with parallel, or synchronous, updates and $\mathbf{s}^k$ is a sequence with serial, or asynchronous, updates [22]. The sequence $\mathbf{p}_K$ defines the update order for the serial sequence.

Lemma 1: Let $\mathbf{v}^0 \in \Omega_f$. Then the sequence $\mathbf{v}^k$ defined by (40) converges monotonically (in the sense $\mathbf{v}^k \ge \mathbf{v}^{k-1}$) to a unique limit point $\mathbf{0} \le \mathbf{v}^* \le \mathbf{1}$.

Proof: Since $\mathbf{f} \le \mathbf{1}$, the lemma results from showing that $\mathbf{v}^k$ is monotonically non-decreasing. This will be accomplished by induction. By assumption, $\mathbf{v}^0 \in \Omega_f$ and

$$\mathbf{v}^1 = \mathbf{f}(\mathbf{v}^0) \ge \mathbf{v}^0,$$

by (40) and (38). Now suppose

$$\mathbf{v}^k \ge \mathbf{v}^{k-1}. \quad (42)$$

Then

$$\mathbf{v}^{k+1} = \mathbf{f}(\mathbf{v}^k) \ge \mathbf{f}(\mathbf{v}^{k-1}) = \mathbf{v}^k,$$

by (40), (42), and (36). Since (42) holds for $k = 1$, it holds for all $k > 1$ by induction.

Henceforth, let

$$\mathbf{v}^* = \lim_{k \to \infty} \mathbf{v}^k, \quad (43)$$

the existence of which is guaranteed by the previous lemma.


Lemma 2: Let $\mathbf{s}^0 = \mathbf{v}^0 \in \Omega_f$ and suppose the sequence $\mathbf{p}_K$ is such that for any integer $M > 0$, there exists a $K_M \le K$ such that each integer $1, 2, \ldots, N$ appears at least $M$ times in the subsequence $\mathbf{p}_{K_M} = (p_1, p_2, \ldots, p_{K_M})$. Then the sequence $\mathbf{s}^k$ defined by (41) converges monotonically to $\mathbf{v}^*$.

Proof: Showing that (i) $\mathbf{v}^k \ge \mathbf{s}^k$ and (ii) $\mathbf{v}^k \le \mathbf{s}^{K_k}$, for $k \ge 1$, directly proves the lemma. By assumption, $\mathbf{s}^0 = \mathbf{v}^0 \in \Omega_f$ and

$$\mathbf{v}^1 = \mathbf{f}(\mathbf{v}^0) = \mathbf{f}(\mathbf{s}^0) \ge \mathbf{f}_{p_1}(\mathbf{s}^0) = \mathbf{s}^1,$$

by (40), (39), and (41), for any $p_1 = 1, 2, \ldots, N$. Suppose

$$\mathbf{v}^k \ge \mathbf{s}^k. \quad (44)$$

Then

$$\mathbf{v}^{k+1} = \mathbf{f}(\mathbf{v}^k) \ge \mathbf{f}(\mathbf{s}^k) \ge \mathbf{f}_{p_{k+1}}(\mathbf{s}^k) = \mathbf{s}^{k+1},$$

by (40), (44) and (36), (39), and (41). Since (44) holds for $k = 1$, it holds for all $k > 1$ by induction, which is part (i).

For part (ii), note that $\mathbf{v}^1 \le \mathbf{s}^{K_1}$, since on the left each element is updated once, in parallel by the corresponding $g_n$, whereas on the right, each element is updated serially, at least once (by the assumption on $\mathbf{p}_{K_1}$):

$$u_{p_1}^1 = g_{p_1}(\mathbf{s}^0) = g_{p_1}(\mathbf{v}^0) = v_{p_1}^1,$$
$$u_{p_2}^2 = g_{p_2}(\mathbf{s}^1) \ge g_{p_2}(\mathbf{v}^0) = v_{p_2}^1,$$
$$\vdots$$

It is easy to see that this domination is preserved, $\mathbf{v}^k \le \mathbf{s}^{K_k}$, $k > 1$.

These two lemmas show that both sequences (40) and (41) have the same limit point. Note that this limit point is a function of the starting point, which was chosen as $\mathbf{v}^0 = \mathbf{s}^0 \in \Omega_f$. Different starting points within the feasible region $\Omega_f$ could result in different limit points. In fact, selecting a starting point outside the feasible region $\Omega_f$ can result in different limit points for the two sequences.

Using the notation above, the decoding trajectory for a concatenated code with parallel components can be described using (24) as follows:

$$\mathbf{v} \triangleq (v_1, v_2, \ldots, v_N) = \left(I_{E(x_1)}, I_{E(x_2)}, \ldots, I_{E(x_N)}\right),$$

$$g_n(\mathbf{v}) \triangleq T_{x_n}\!\left(J\!\left(\sqrt{\sum_{\substack{i=1 \\ i \ne n}}^{N} \left[J^{-1}(v_i)\right]^2}\right),\; J\!\left(\sqrt{8R\gamma_b}\right)\right).$$

Assumption (33) can be guaranteed if the monotonicity assumption on the EXIT functions holds (Assumption 1) together with the knowledge that both the $J$-function and its inverse are monotonically increasing [6], i.e., $J(\sigma + \epsilon) > J(\sigma)$ and $J^{-1}(I_G + \epsilon) > J^{-1}(I_G)$, for any $\epsilon > 0$.

The decoding trajectory for a concatenated code with serial components can be described in a similar way, where $g_n(\mathbf{v})$ is then given by (19)-(22). In this case, each activation can update one or two elements in $\mathbf{v}$. However, it is straightforward to redefine $\mathbf{f}_n(\mathbf{v})$ to be a function that updates several elements in $\mathbf{v}$ and to prove that Lemma 2 is still valid.

Theorem 1 results from direct application of the above lemmas as follows.

Theorem 1 (restated): Subject to Assumption 1, the sequence of MI resulting from successive or parallel activations of EXIT functions converges monotonically to a limit point, independent of the actual activation schedule.

Proof: The elements in $\mathbf{v}$ are MIs, and hence $\mathbf{0} \le \mathbf{v} \le \mathbf{1}$. Further, since $\mathbf{v}^0 = \mathbf{0}$ and $\mathbf{f}(\mathbf{0}) \ge \mathbf{0}$, $\mathbf{v}^0 = \mathbf{0}$ is inside the feasible region $\Omega_f$ defined in (38). This together with Lemmas 1 and 2 completes the proof of Theorem 1.

REFERENCES
[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," in Proc. IEEE Int. Conf. Commun. (ICC '93), vol. 2, Geneva, Switzerland, May 1993, pp. 1064-1070.
[2] S. Benedetto and G. Montorsi, "Serial concatenation of block and convolutional codes," IEE Electron. Lett., vol. 32, no. 10, pp. 887-888, May 1996.
[3] D. Divsalar and F. Pollara, "Multiple turbo codes for deep-space communications," TDA Progress Report 42-121, Jet Propulsion Laboratory, Pasadena, CA, May 1995.
[4] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Analysis, design, and iterative decoding of double serially concatenated codes with interleavers," IEEE J. Selected Areas Commun., vol. 16, pp. 231-244, Feb. 1998.
[5] S. ten Brink, "Convergence of iterative decoding," IEE Electron. Lett., vol. 35, pp. 1117-1119, June 1999.
[6] ——, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, pp. 1727-1737, Oct. 2001.
[7] ——, "Convergence of multi-dimensional iterative decoding schemes," in Proc. Thirty-Fifth Asilomar Conference on Signals, Systems and Computers, vol. 1, Pacific Grove, CA, Nov. 2001, pp. 270-274.
[8] J. Han and O. Y. Takeshita, "On the decoding structure for multiple turbo codes," in Proc. IEEE Int. Symp. Inform. Theory (ISIT '01), Washington, DC, June 2001, p. 98.
[9] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A soft-input soft-output APP module for iterative decoding of concatenated codes," IEEE Commun. Lett., vol. 1, pp. 22-24, Jan. 1997.
[10] M. Tuchler, "Convergence prediction for iterative decoding of threefold concatenated systems," in Proc. IEEE Global Commun. Conf. (GLOBECOM '02), vol. 2, Taipei, Taiwan, Nov. 2002, pp. 1358-1362.
[11] F. Brannstrom, T. M. Aulin, L. K. Rasmussen, and A. J. Grant, "Convergence analysis of iterative detectors for narrow-band multiple access," in Proc. IEEE Global Commun. Conf. (GLOBECOM '02), vol. 2, Taipei, Taiwan, Nov. 2002, pp. 1373-1377.
[12] F. Brannstrom, "Convergence analysis and design of multiple concatenated codes," Ph.D. dissertation, Chalmers Univ. of Techn., Goteborg, Sweden, Mar. 2004.
[13] T. M. Cover and J. A. Thomas, Elements of Information Theory. New York, NY: Wiley, 1991.
[14] J. A. Nelder and R. Mead, "A simplex method for function minimization," The Computer Journal, vol. 7, pp. 308-313, 1965.
[15] A. Ashikhmin, G. Kramer, and S. ten Brink, "Extrinsic information transfer functions, information functions, support weights, and duality," in Proc. Int. Symp. on Turbo Codes and Rel. Topics, Brest, France, Sept. 2003, pp. 223-226.
[16] ——, "Extrinsic information transfer functions: model and erasure channel properties," IEEE Trans. Inform. Theory, vol. 50, pp. 2657-2673, Nov. 2004.
[17] S. ten Brink, "Design of serially concatenated codes based on iterative decoding convergence," in Proc. Int. Symp. on Turbo Codes and Rel. Topics, Brest, France, Sept. 2000, pp. 319-322.
[18] F. Brannstrom, L. K. Rasmussen, and A. Grant, "Optimal scheduling for multiple serially concatenated codes," in Proc. Int. Symp. on Turbo Codes and Rel. Topics, Brest, France, Sept. 2003, pp. 383-386.
[19] S. Huettinger and J. Huber, "Design of multiple-turbo-codes with transfer characteristics of component codes," in Proc. Conf. Inform. Sciences and Syst. (CISS '02), Princeton University, Mar. 2002.
[20] F. Brannstrom, L. K. Rasmussen, and A. Grant, "Optimal scheduling for iterative decoding," in Proc. IEEE Int. Symp. Inform. Theory (ISIT '03), Yokohama, Japan, June/July 2003, p. 350.
[21] G. D. Forney, Jr., "The Viterbi algorithm," Proc. IEEE, vol. 61, pp. 268-278, Mar. 1973.
[22] D. P. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation. Englewood Cliffs, NJ: Prentice-Hall, 1989.
