
Chapter 6: Channel Coding I

Waveform Coding
Types of Error Control
Structured Sequences
Linear Block Codes
Error Detecting and Correcting Capability
Cyclic Codes
Well-Known Block Codes

Channel Coding

Channel coding refers to the class of signal transformations designed to improve communications performance by enabling the transformed signals to better withstand the effects of various channel impairments, such as:
Noise
Interference
Fading
These signal processing techniques can be thought
of as vehicles for accomplishing desirable system
trade-offs such as
Error performance versus bandwidth
Power versus bandwidth

Waveform Coding

Channel coding can be partitioned into two study areas:
Waveform coding, or signal design coding
Waveform coding deals with transforming waveforms into better waveforms, to make the detection process less subject to errors.
Structured sequences, or structured redundancy
Structured sequences deal with transforming data sequences into better sequences having structured redundancy (redundant bits). The redundant bits can then be used for the detection and correction of errors.
The encoding procedure provides the coded signal
(whether waveforms or structured sequences) with
better distance properties than those of their uncoded
counterparts.

Channel Coding: Part 1

Generalized One Dimensional Signals

One Dimensional Signal Constellation

Binary Baseband Orthogonal Signals

Binary Antipodal Signals

Binary Orthogonal Signals

Constellation Diagram

A constellation diagram is a method of representing the symbol states of modulated bandpass signals in terms of their amplitude and phase.
In other words, it is a geometric representation of signals.
There are three types of binary signals:

Antipodal

Two signals are said to be antipodal if one signal is the negative of the other: s1(t) = −s0(t)
The signals have equal energy, with signal points on the real line.

ON-OFF

ON-OFF signals are one-dimensional signals that are either ON or OFF, with signaling points falling on the real line.
With OOK, there are just 2 symbol states to map onto the constellation space:
a(t) = 0 (no carrier amplitude, giving a point at the origin)
a(t) = A cos ωct (giving a point on the positive horizontal axis at a distance A from the origin)

Orthogonal

Orthogonal signaling requires a two-dimensional geometric representation, since there are two linearly independent functions s1(t) and s0(t).

6.1.1 Antipodal and Orthogonal Signals

Antipodal: s1(t) = −s2(t)

Orthogonal pulse waveforms:
s1(t) = p(t), 0 ≤ t ≤ T
s2(t) = p(t − T/2), 0 ≤ t ≤ T
where p(t) is a pulse of duration T/2 and T is the symbol duration.
Another orthogonal waveform set frequently used in communication systems is sin x and cos x.
In general, a set of equal-energy signals si(t), i = 1, 2, …, M, constitutes an orthonormal (orthogonal, normalized to unity) set if and only if

$$ z_{ij} = \frac{1}{E} \int_0^T s_i(t)\, s_j(t)\, dt = \begin{cases} 1 & \text{for } i = j \\ 0 & \text{otherwise} \end{cases} \qquad (6.1) $$

where the signal energy is

$$ E = \int_0^T s_i^2(t)\, dt \qquad (6.2) $$

Figure 6.2: Antipodal signal set

Figure 6.3: Binary Orthogonal signal set

6.1.3 Waveform coding

Waveform coding procedures transform a waveform set (representing a message set) into an improved waveform set.
The most popular of such waveform codes are referred to as orthogonal and biorthogonal codes.
The goal is to render the cross-correlation coefficient among all pairs of signals as small as possible.
The smallest possible value of the cross-correlation coefficient occurs when the signals are anticorrelated (z_ij = −1); this can be achieved only when the number of symbols in the set is two (M = 2) and the symbols are antipodal.
In general, it is possible to make all the cross-correlation coefficients equal to zero; the set is then said to be orthogonal.
Antipodal signal sets are optimum in the sense that each signal is most distant from the other signal in the set (as shown in Figure 6.2), where the distance d between the signal vectors is seen to be d = 2√E, with E representing the signal energy during a symbol duration T.

6.1.3 Waveform coding

Compared with antipodal signals, the distance properties of orthogonal signal sets can be thought of as fairly good (for a given level of waveform energy). As shown in Figure 6.3, the distance between the orthogonal signal vectors is seen to be d = √(2E).
The cross-correlation between two signals is a measure of the distance between the signal vectors: the smaller the cross-correlation, the more distant the vectors are from each other.

Another form of the orthogonality condition, for sequences of binary pulses:

$$ z_{ij} = \frac{\text{number of digit agreements} - \text{number of digit disagreements}}{\text{total number of digits in the sequence}} \quad \text{for } i \neq j \qquad (6.3) $$

(For i = j, z_ij = 1, as in Equation (6.1).)

Orthogonal Codes

A one-bit data set can be transformed, using orthogonal codewords of two digits each, described by the rows of matrix H1, as follows:

Data set    Orthogonal codeword set
0           0 0
1           0 1

H1 = | 0 0 |
     | 0 1 |

A two-bit data set is transformed using the rows of H2, built from H1 and its complement H̄1:

Data set    Orthogonal codeword set
0 0         0 0 0 0
0 1         0 1 0 1
1 0         0 0 1 1
1 1         0 1 1 0

H2 = | H1  H1 |
     | H1  H̄1 |

Similarly, a three-bit data set is transformed using the rows of H3:

Data set    Orthogonal codeword set
0 0 0       0 0 0 0 0 0 0 0
0 0 1       0 1 0 1 0 1 0 1
0 1 0       0 0 1 1 0 0 1 1
0 1 1       0 1 1 0 0 1 1 0
1 0 0       0 0 0 0 1 1 1 1
1 0 1       0 1 0 1 1 0 1 0
1 1 0       0 0 1 1 1 1 0 0
1 1 1       0 1 1 0 1 0 0 1

H3 = | H2  H2 |
     | H2  H̄2 |

(The overbar denotes the bitwise complement.)
In general, we can construct a codeword set Hk, of dimension 2^k × 2^k, called a Hadamard matrix, for a k-bit data set from the H(k−1) matrix, as follows:

H_k = | H_{k−1}   H_{k−1} |
      | H_{k−1}   H̄_{k−1} |        (6.4)

Each pair of words in each codeword set H1, H2, …, Hk has as many digit agreements as disagreements. Hence, in accordance with Equation (6.3), z_ij = 0 (for i ≠ j), and each set is orthogonal.
Just as in M-ary signaling with an orthogonal format (such as MFSK), the error performance improves with increasing M. The probability of codeword error, P_E, can be upper bounded as

$$ P_E(M) \le (M - 1)\, Q\!\left(\sqrt{E_s / N_0}\right) \qquad (6.5) $$

where E_s = k E_b is the energy per codeword.
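As an illustration of Equations (6.3) and (6.4), the following sketch (a minimal Python example, not part of the original slides) builds the Hadamard codeword set recursively and checks that every pair of distinct rows agrees in exactly as many digits as it disagrees, so that z_ij = 0:

```python
import numpy as np

def hadamard_codeword_set(k):
    """Build the 2^k x 2^k binary Hadamard codeword set H_k of Eq. (6.4)."""
    H = np.array([[0, 0],
                  [0, 1]])          # H_1
    for _ in range(k - 1):
        # H_k = [[H, H], [H, complement(H)]]
        H = np.block([[H, H],
                      [H, 1 - H]])
    return H

H3 = hadamard_codeword_set(3)

# Cross-correlation of Eq. (6.3): (agreements - disagreements) / length
n = H3.shape[1]
for i in range(len(H3)):
    for j in range(i + 1, len(H3)):
        disagreements = np.sum(H3[i] != H3[j])
        z = (n - 2 * disagreements) / n
        assert z == 0.0             # every distinct pair is orthogonal
print(H3)
```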

Biorthogonal Codes
A biorthogonal signal set of M total signals, or codewords, can be obtained from an orthogonal set of M/2 signals by augmenting it with the negative of each signal:

B_k = | H_{k−1} |
      | H̄_{k−1} |

Data set    Biorthogonal codeword set
0 0 0       0 0 0 0
0 0 1       0 1 0 1
0 1 0       0 0 1 1
0 1 1       0 1 1 0
1 0 0       1 1 1 1
1 0 1       1 0 1 0
1 1 0       1 1 0 0
1 1 1       1 0 0 1

B3 = | H2 |
     | H̄2 |

The biorthogonal set is really two sets of orthogonal codes, such that each codeword in one set has its antipodal codeword in the other set:

$$ z_{ij} = \begin{cases} 1 & \text{for } i = j \\ -1 & \text{for } i \ne j,\ |i - j| = M/2 \\ 0 & \text{for } i \ne j,\ |i - j| \ne M/2 \end{cases} \qquad (6.8) $$

Advantages
The biorthogonal code requires one-half as many code bits per codeword as orthogonal codes.
Since antipodal signal vectors have better distance properties than orthogonal ones, biorthogonal codes perform slightly better than orthogonal codes.
The probability of codeword error can be upper bounded as:

$$ P_E(M) \le (M - 2)\, Q\!\left(\sqrt{E_s / N_0}\right) + Q\!\left(\sqrt{2 E_s / N_0}\right) \qquad (6.9) $$
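A similar sketch (again an assumed Python illustration) builds B3 by stacking H2 on its complement and verifies the cross-correlation values of Equation (6.8):

```python
import numpy as np

def hadamard_codeword_set(k):
    H = np.array([[0, 0], [0, 1]])
    for _ in range(k - 1):
        H = np.block([[H, H], [H, 1 - H]])
    return H

H2 = hadamard_codeword_set(2)
B3 = np.vstack([H2, 1 - H2])         # biorthogonal set: M = 8 codewords of 4 bits

M, n = B3.shape
for i in range(M):
    for j in range(M):
        disagree = np.sum(B3[i] != B3[j])
        z = (n - 2 * disagree) / n
        if i == j:
            assert z == 1.0
        elif abs(i - j) == M // 2:   # antipodal pair
            assert z == -1.0
        else:
            assert z == 0.0          # orthogonal pair
```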

6.1.4 Waveform-Coding System Example

Tc is the code-bit duration

Figure 6.4: Waveform-encoded system (transmitter)

6.1.4 Waveform-Coding System Example


Figure 6.4 illustrates the example of assigning to each k-bit message, from a message set of size M = 2^k, a coded-pulse sequence from a code set of the same size.
Each k-bit message chooses one of the generators, yielding a coded-pulse sequence or codeword.
The sequences in the coded set that replace the messages form a waveform set with good distance properties (e.g., orthogonal, biorthogonal).
For the orthogonal code, each codeword consists of M = 2^k pulses (representing code bits).
Hence, 2^k code bits replace k message bits.
The chosen sequence then modulates a carrier wave using binary PSK, such that the phase (0 or π) of the carrier during each code-bit duration, 0 ≤ t ≤ Tc, corresponds to the amplitude (−1 or +1) of the jth bipolar pulse in the codeword.

Figure 6.5: Waveform-encoded system with coherent detection (receiver)

6.1.4 Waveform-Coding System Example


At the receiver in Figure 6.5, the signal is demodulated to baseband and fed to M correlators (or matched filters).
For orthogonal codes, such as those characterized by the Hadamard matrix, correlation is performed over a codeword duration that can be expressed as T = 2^k Tc.
For a real-time communication system, messages may not be delayed; hence, the codeword duration must be the same as the message duration, and thus T can also be expressed as T = (log2 M) Tb = k Tb, where Tb is the message-bit duration.
The time duration of a message bit is therefore M/k times longer than that of a code bit.

6.1.4 Waveform-Coding System Example


In other words, the code bits or coded pulses (which are PSK modulated) must move at a rate M/k times faster than the message bits.
For such orthogonally coded waveforms and an AWGN channel, the expected value at the output of each correlator, at time T, is zero, except for the correlator corresponding to the transmitted codeword.
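To make the correlator bank concrete, here is a minimal sketch under assumed parameters (unit-variance AWGN, arbitrary message index 5) of detecting an orthogonally coded waveform: the code bits are mapped to a bipolar sequence and the receiver picks the largest correlator output:

```python
import numpy as np

def hadamard_codeword_set(k):
    H = np.array([[0, 0], [0, 1]])
    for _ in range(k - 1):
        H = np.block([[H, H], [H, 1 - H]])
    return H

rng = np.random.default_rng(0)
code = 1 - 2 * hadamard_codeword_set(3)   # map bits {0,1} -> bipolar {+1,-1}

message = 5                               # transmit the codeword for message 101
noisy = code[message] + rng.normal(0.0, 1.0, code.shape[1])  # AWGN channel

# Bank of M correlators: inner product of the received vector
# with each candidate bipolar codeword.
correlations = code @ noisy
decision = int(np.argmax(correlations))
print(decision)                           # equals 5 unless the noise is severe
```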

Error control Coding

Errors are introduced into the data when it passes through the channel: the channel noise interferes with the signal, the signal power is reduced, and hence errors are introduced.

Rationale for Coding and Types of codes

Transmission of data over the channel depends upon the following two parameters:

transmitter power
channel bandwidth

These two parameters, together with the power spectral density of the channel noise, determine the signal-to-noise power ratio.
The signal-to-noise power ratio determines the probability of error of the modulation scheme.
For a given signal-to-noise ratio, the error probability can be reduced further by using coding techniques.
Coding techniques can also reduce the required signal-to-noise power ratio for a fixed probability of error.

Error control Coding

Figure: Digital communication system with channel encoding

Channel encoder

The channel encoder adds extra bits (redundancy) to the message bits.
The encoded signal is then transmitted over the noisy channel.

Channel decoder

The channel decoder identifies the redundant bits and uses them to detect and correct errors in the message bits, if any.
Thus the number of errors introduced by channel noise is minimized by the encoder and decoder.
Due to the redundant bits, the overall data rate increases; hence the channel has to accommodate this increased data rate.
The system becomes slightly more complex because of the coding techniques.

6.2 Types of Error control


There are two basic ways of using redundancy for controlling errors:
1. Error detection and retransmission
2. Forward error correction (FEC)
Error detection and retransmission
Utilizes parity bits (redundant bits added to the data) to detect that an error has been made. The receiving terminal does not attempt to correct the error; it simply requests that the transmitter retransmit the data.
Note that a two-way link is required for such a dialogue between the transmitter and receiver.
Forward error correction (FEC)
FEC requires a one-way link only, since in this case the parity bits are designed for both the detection and correction of errors.

6.2 Types of Error control


Terminal Connectivity
Simplex
Half duplex
Full duplex

Figure 6.6: Terminal connectivity classifications: (a) Simplex (b) Half-duplex (c) Full-duplex

Automatic Repeat Request


When the error control consists of error detection
only, the communication system generally needs to
provide a means of alerting the transmitter that an
error has been detected and that a retransmission is
necessary.
ARQ vs. FEC
ARQ is much simpler than FEC and need no
redundancy.
ARQ is sometimes not possible if

A reverse channel is not available or the delay with


ARQ would be excessive
The retransmission strategy is not conveniently
implemented
The expected number of errors, without corrections,
would require excessive retransmissions

Figure 6.7: Automatic Repeat Request (ARQ): (a) Stop-and-wait ARQ (b) Continuous ARQ with pullback (c) Continuous ARQ with selective repeat (full duplex)

6.3 Structured Sequences

Parity-check coding procedures are classified as structured sequences because they represent methods of inserting structured redundancy into the source data so that the presence of errors can be detected or the errors corrected.
Structured sequences are partitioned into three subcategories:

Block codes
Convolutional codes
Turbo codes

6.3.2 Code Rate and Redundancy


Block codes

In the case of block codes, the encoder transforms each k-bit data block into a larger n-bit block, or codeword, whose digits are called code bits or channel symbols.
The block or codeword consists of k message bits and (n − k) redundant bits.
The (n − k) bits added to each data block are called redundant bits, parity bits, or check bits; such block codes are called (n, k) block codes.
The redundant bits carry no new information.
The ratio of redundant bits to data bits, (n − k)/k, is called the redundancy of the code.
The ratio of data bits to total bits, k/n, is called the code rate.
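For example, for the (7,4) code illustrated next, the redundancy is (n − k)/k = 3/4 = 0.75 and the code rate is k/n = 4/7 ≈ 0.57.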


6.3.2 Code Rate and Redundancy


Block codes
Illustration: the generating set for the (7,4) code:
1000 ===> 1101000
0100 ===> 0110100
0010 ===> 1110010
0001 ===> 1010001

6.3.2 Code Rate and Redundancy


Convolutional codes

The coding operation is a discrete-time convolution of the input sequence with the impulse response of the encoder.
The convolutional encoder accepts the message bits continuously and generates the encoded sequence continuously.
Codes can also be classified as linear or nonlinear.
Linear code:
If two codewords of a linear code are added by modulo-2 arithmetic, the result is a third codeword in the code, as illustrated below.
This is a very important property, since other codewords can be obtained by addition of existing codewords.
Nonlinear code:
Addition of two nonlinear codewords does not necessarily produce a third codeword.
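For example, in the (7,4) code illustrated earlier, adding the codewords for 1000 and 0100 gives 1101000 + 0110100 = 1011100 (modulo-2), which is itself the codeword for the message 1100.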

6.3.3 Parity-Check Codes

Single-Parity-Check Code
Parity-check codes use linear sums of the information bits, called parity symbols or parity bits, for error detection or correction.

Two-Dimensional Parity-Check Code (Even-Parity Example)

Rectangular Code
Also called a product code; it can be thought of as a parallel code structure. With k = MN message bits arranged in M rows and N columns, and n = (M + 1)(N + 1) total bits, the rate of the code is

k/n = MN / ((M + 1)(N + 1))

Figure 6.8: Parity checks for parallel structure
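A minimal sketch of the rectangular code (an assumed Python illustration following the parallel structure of Figure 6.8): each row and column of an M × N message block gets an even-parity bit, and a single error is located by the intersection of the failing row and column checks:

```python
import numpy as np

def rectangular_encode(block):
    """Even-parity rectangular (product) code: M x N bits -> (M+1) x (N+1)."""
    M, N = block.shape
    out = np.zeros((M + 1, N + 1), dtype=int)
    out[:M, :N] = block
    out[:M, N] = block.sum(axis=1) % 2        # row parity bits
    out[M, :N] = block.sum(axis=0) % 2        # column parity bits
    out[M, N] = block.sum() % 2               # parity on the parity bits
    return out

msg = np.array([[1, 0, 1, 1],
                [0, 1, 1, 0],
                [1, 1, 0, 0]])                # M=3, N=4: rate 12/20 = 0.6
code = rectangular_encode(msg)

# A single bit error is located by the intersection of the failing
# row parity and the failing column parity.
code[1, 2] ^= 1
row_fail = np.where(code.sum(axis=1) % 2 == 1)[0]
col_fail = np.where(code.sum(axis=0) % 2 == 1)[0]
print(row_fail, col_fail)                     # -> [1] [2]
```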

6.3.4 Why Use Error-Correction Coding

Figure: ideal P_B (probability of bit error) versus Eb/N0; bit error probability for coherently detected M-ary orthogonal signaling; and bit error probability for coherently detected multiple phase signaling.

6.3.4 Why Use Error-Correction Coding

Figure 6.9: Comparison of typical coded versus uncoded error performance

Trade-Off 1: Error Performance vs. Bandwidth (A to C rather than B)

Trade-Off 2: Power versus Bandwidth (D to E)

Trade-Off 3: Data Rate versus Bandwidth (D to F and then to E)

Trade-Off 4: Capacity versus Bandwidth (like in CDMA)

6.3.4 Why Use Error-Correction Coding

Trade-Off 1: Error Performance vs. Bandwidth (A to C rather than B)

Error-performance improvement can be achieved at the expense of bandwidth.

Aside from the new components (encoder and decoder) needed, the price is more transmission bandwidth.

Error-correction coding needs redundancy.

If we assume that the system is a real-time communication system (such that the message may not be delayed), the addition of redundant bits dictates a faster rate of transmission, which of course means more bandwidth.

6.3.4 Why Use Error-Correction Coding

Trade-Off 2: Power versus Bandwidth (D to E)

If error-correction coding is introduced, a reduction in the required Eb/N0 can be achieved.

Thus, the trade-off is one in which the same quality of data is achieved, but the coding allows for a reduction in power or Eb/N0.

What is the cost? The same as before: more bandwidth.


6.4 Linear block Codes

Linear block codes are a class of parity-check codes that can be characterized by the (n, k) notation.
The encoder transforms a block of k message digits (a message vector) into a longer block of n codeword digits (a code vector) constructed from a given alphabet of elements.
When the alphabet consists of two elements (0 and 1), the code is a binary code comprising binary digits (bits).
The k-bit messages form 2^k distinct message sequences, referred to as k-tuples (sequences of k digits).
The n-bit blocks can form as many as 2^n distinct sequences, referred to as n-tuples.

6.4 Linear block Codes

The encoding procedure assigns to each of the 2^k message k-tuples one of the 2^n n-tuples.
A block code represents a one-to-one assignment, whereby the 2^k message k-tuples are uniquely mapped into a new set of 2^k codeword n-tuples.
The mapping can be accomplished via a lookup table.

6.4 Linear block Codes


6.4.1 Vector Spaces

Modulo-2 addition:          Modulo-2 multiplication:
0 + 0 = 0                   0 · 0 = 0
0 + 1 = 1                   0 · 1 = 0
1 + 0 = 1                   1 · 0 = 0
1 + 1 = 0                   1 · 1 = 1

6.4 Linear block Codes


6.4.2 Vector Subspaces

For example, the vector space V4 is totally populated by the following 2^4 = 16 4-tuples:
0000 0001 0010 0011 0100 0101 0110 0111
1000 1001 1010 1011 1100 1101 1110 1111
An example of a subset of V4 that forms a subspace is:
0000 0101 1010 1111

The subset chosen for the code should include as many elements as possible, to reduce redundancy, but the elements should be as far apart as possible, to maintain good error performance.

Figure 6.10: Linear block-code structure

6.4.3 A (6,3) Linear Block Code Example

Examine the following coding assignment that describes a (6, 3) code. There are 2^k = 2^3 = 8 message vectors, and therefore eight codewords.
There are 2^n = 2^6 = 64 6-tuples in the V6 vector space.

Message vector    Codeword
000               000000
100               110100
010               011010
110               101110
001               101001
101               011101
011               110011
111               000111

Table 6.1: Assignment of Codewords to Messages

6.4.4 Generator Matrix

If k is large, a lookup-table implementation of the encoder becomes prohibitive.
For a (127, 92) code there are 2^92, or approximately 5 × 10^27, code vectors.
If the encoding procedure consisted of a simple lookup table, imagine the size of the memory necessary to contain such a large number of codewords. Fortunately, it is possible to reduce complexity by generating the required codewords as needed, instead of storing them.
Since a set of codewords that forms a linear block code is a k-dimensional subspace of the n-dimensional binary vector space (k < n), it is always possible to find a set of n-tuples, fewer than 2^k, that can generate all the 2^k codewords of the subspace.
The generating set of vectors is said to span the subspace.
The smallest linearly independent set that spans the subspace is called a basis of the subspace, and the number of vectors in this basis set is the dimension of the subspace.

6.4.4 Generator Matrix

Any basis set of k linearly independent n-tuples V1, V2, …, Vk can be used to generate the required linear block code vectors, since each code vector is a linear combination of V1, V2, …, Vk.
The set of 2^k codewords {U} can be described as:

U = m1 V1 + m2 V2 + … + mk Vk

where mi = (0 or 1) are the message digits, i = 1, …, k.
In general, the generator matrix is defined by the following k × n array:

$$ \mathbf{G} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \vdots \\ \mathbf{V}_k \end{bmatrix} = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1n} \\ v_{21} & v_{22} & \cdots & v_{2n} \\ \vdots & & & \vdots \\ v_{k1} & v_{k2} & \cdots & v_{kn} \end{bmatrix} \qquad (6.24) $$

6.4.4 Generator Matrix

Code vectors are usually designated as row vectors. Thus, the message m, a sequence of k message bits, is shown below as a row vector (a 1 × k matrix having one row and k columns):

m = m1, m2, …, mk

Generation of the codeword U is written in matrix notation as the product of m and G:

U = m G        (6.25)

For the (6,3) code of Table 6.1:

$$ \mathbf{G} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \mathbf{V}_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix} \qquad (6.26) $$
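As a quick illustration of Equation (6.25) (a Python sketch, not from the slides), encoding is a modulo-2 matrix product; iterating over all 2^3 messages with this G reproduces the assignments of Table 6.1:

```python
import numpy as np
from itertools import product

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])   # Eq. (6.26)

def encode(m, G):
    """U = mG over GF(2): modulo-2 matrix product (Eq. 6.25)."""
    return np.asarray(m) @ G % 2

for m in product([0, 1], repeat=3):
    print(m, encode(m, G))           # reproduces Table 6.1
```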

6.4.5 Systematic Linear Block Codes

A systematic (n, k) linear block code is a mapping from a k-dimensional message vector to an n-dimensional codeword in such a way that part of the generated sequence coincides with the k message digits; the remaining (n − k) digits are parity digits.
A systematic linear code has a generator matrix of the form

$$ \mathbf{G} = [\mathbf{P} \mid \mathbf{I}_k] = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1,(n-k)} & 1 & 0 & \cdots & 0 \\ p_{21} & p_{22} & \cdots & p_{2,(n-k)} & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & \ddots & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{k,(n-k)} & 0 & 0 & \cdots & 1 \end{bmatrix} \qquad (6.27) $$

where P is the parity array portion of the generator matrix, with p_ij = (0 or 1), and I_k is the k × k identity matrix (ones on the main diagonal and zeros elsewhere).

With the systematic form, the storage requirement is reduced from 2^k × n bits (a full codeword table) to k × (n − k) bits (the P array alone). For example, for the (7,4) code shown earlier:

$$ \mathbf{G} = [\mathbf{P} \mid \mathbf{I}_k] = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} $$

where P is k × (n − k) and I_k is k × k.

6.4.5 Systematic Linear Block codes

Combining (6.26) and (6.27):

$$ [u_1, u_2, \ldots, u_n] = [m_1, m_2, \ldots, m_k] \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1,(n-k)} & 1 & 0 & \cdots & 0 \\ p_{21} & p_{22} & \cdots & p_{2,(n-k)} & 0 & 1 & \cdots & 0 \\ \vdots & & & \vdots & \vdots & & \ddots & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{k,(n-k)} & 0 & 0 & \cdots & 1 \end{bmatrix} $$

where

u_i = m1 p1i + m2 p2i + … + mk pki    for i = 1, …, (n − k)
u_i = m_(i−n+k)                        for i = (n − k + 1), …, n

Given the message k-tuple m = m1, m2, …, mk and the general code vector n-tuple U = u1, u2, …, un, the systematic code vector can be expressed as

U = p1, p2, …, p_(n−k), m1, m2, …, mk        (6.28)

where the first (n − k) bits are parity bits and the last k bits are message bits, with

p1 = m1 p11 + m2 p21 + … + mk pk1
p2 = m1 p12 + m2 p22 + … + mk pk2
  ⋮
p_(n−k) = m1 p1,(n−k) + m2 p2,(n−k) + … + mk pk,(n−k)        (6.29)

Example
For the (6,3) code example in Section 6.4.3, the codewords can be described as:

$$ \mathbf{U} = [m_1, m_2, m_3] \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix} \qquad (6.30) $$

$$ \mathbf{U} = (\underbrace{m_1 + m_3}_{u_1},\; \underbrace{m_1 + m_2}_{u_2},\; \underbrace{m_2 + m_3}_{u_3},\; \underbrace{m_1}_{u_4},\; \underbrace{m_2}_{u_5},\; \underbrace{m_3}_{u_6}) \qquad (6.31) $$

Equation (6.31) shows that the redundant digits are produced in a variety of ways: the first parity bit is the sum of the first and third message bits, the second parity bit is the sum of the first and second message bits, and the third parity bit is the sum of the second and third message bits.
Such structure, compared with single-parity checks or simple digit-repeat procedures, may provide greater ability to detect and correct errors.


6.4.6 Parity-Check Matrix

Let H denote the parity-check matrix that will enable us to decode the received vectors.
For each (k × n) generator matrix G, there exists an (n − k) × n matrix H such that the rows of G are orthogonal to the rows of H; i.e., G H^T = 0.
Fulfilling the orthogonality requirements:

H = [I_(n−k) | P^T]        (6.32)

and therefore

$$ \mathbf{H}^T = \begin{bmatrix} \mathbf{I}_{n-k} \\ \mathbf{P} \end{bmatrix} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \\ p_{11} & p_{12} & \cdots & p_{1,(n-k)} \\ p_{21} & p_{22} & \cdots & p_{2,(n-k)} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{k,(n-k)} \end{bmatrix} \qquad (6.33) $$

It is easy to verify that the product U H^T of each codeword U generated by G with the H^T matrix yields

U H^T = p1 + p1, p2 + p2, …, p_(n−k) + p_(n−k) = 0

where the parity bits p1, p2, …, p_(n−k) are defined in Equation (6.29). U is a codeword generated by matrix G if and only if U H^T = 0.
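Continuing the (6,3) example, a short sketch (an assumed Python illustration) builds H from the systematic G of Equation (6.26) and verifies both G H^T = 0 and U H^T = 0 for every codeword:

```python
import numpy as np
from itertools import product

G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
k, n = G.shape
P = G[:, :n - k]                               # parity array portion of G
H = np.hstack([np.eye(n - k, dtype=int), P.T]) # Eq. (6.32)

assert not (G @ H.T % 2).any()                 # rows of G orthogonal to rows of H
for m in product([0, 1], repeat=k):
    U = np.asarray(m) @ G % 2
    assert not (U @ H.T % 2).any()             # every codeword has zero syndrome
```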

6.4.7 Syndrome Testing

Let r = r1, r2, …, rn be the received vector (one of 2^n n-tuples) resulting from the transmission of the codeword U = u1, u2, …, un (one of the 2^k codeword n-tuples). We can therefore describe r as

r = U + e        (6.34)

where e = e1, e2, …, en is an error vector, or error pattern, introduced by the channel.
There are a total of 2^n − 1 potential nonzero error patterns in the space of 2^n n-tuples.
The syndrome of r is defined as:

S = r H^T        (6.35)

The syndrome is the result of a parity check performed on r to determine whether r is a valid member of the codeword set.
If, in fact, r is a member, the syndrome S has value 0. If r contains detectable errors, the syndrome has some nonzero value.

If r contains correctable errors, the syndrome (like the symptom of an illness) has some nonzero value that can earmark the particular error pattern.
The decoder, depending upon whether it has been implemented to perform FEC or ARQ, will then take action to locate the errors and correct them (FEC), or else it will request a retransmission (ARQ).
Combining (6.34) and (6.35), the syndrome of r is seen to be

S = (U + e) H^T = U H^T + e H^T        (6.36)

However, U H^T = 0 for all members of the codeword set. Therefore,

S = e H^T        (6.37)

The foregoing development, starting with (6.34) and terminating with (6.37), is evidence that the syndrome test, whether performed on a corrupted code vector or on the error pattern that caused it, yields the same syndrome.
An important property of linear block codes, fundamental to the decoding process, is that the mapping between correctable error patterns and syndromes is one-to-one.

Requirements of the parity-check matrix:

No column of H can be all zeros; otherwise an error in the corresponding codeword position would not affect the syndrome and would be undetectable.
All columns of H must be unique; if two columns of H were identical, errors in those two corresponding codeword positions would be indistinguishable.
Example
Let the codeword U = 1 0 1 1 1 0 be transmitted and r = 0 0 1 1 1 0 be received. Find S = r H^T.

$$ \mathbf{S} = \mathbf{r}\,\mathbf{H}^T = [0\ 0\ 1\ 1\ 1\ 0] \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix} = [1\ 0\ 0] $$

The syndrome of the underlying error pattern gives the same result:

S = e H^T = [1 0 0 0 0 0] H^T = [1 0 0]

6.4.8 Error Correction

Arranging all 2^n n-tuples (representing the possible received vectors) in an array is called the standard array. The standard array for an (n, k) code is:

$$ \begin{array}{cccccc} \mathbf{U}_1 & \mathbf{U}_2 & \cdots & \mathbf{U}_i & \cdots & \mathbf{U}_{2^k} \\ \mathbf{e}_2 & \mathbf{U}_2 + \mathbf{e}_2 & \cdots & \mathbf{U}_i + \mathbf{e}_2 & \cdots & \mathbf{U}_{2^k} + \mathbf{e}_2 \\ \mathbf{e}_3 & \mathbf{U}_2 + \mathbf{e}_3 & \cdots & \mathbf{U}_i + \mathbf{e}_3 & \cdots & \mathbf{U}_{2^k} + \mathbf{e}_3 \\ \vdots & \vdots & & \vdots & & \vdots \\ \mathbf{e}_j & \mathbf{U}_2 + \mathbf{e}_j & \cdots & \mathbf{U}_i + \mathbf{e}_j & \cdots & \mathbf{U}_{2^k} + \mathbf{e}_j \\ \vdots & \vdots & & \vdots & & \vdots \\ \mathbf{e}_{2^{n-k}} & \mathbf{U}_2 + \mathbf{e}_{2^{n-k}} & \cdots & \mathbf{U}_i + \mathbf{e}_{2^{n-k}} & \cdots & \mathbf{U}_{2^k} + \mathbf{e}_{2^{n-k}} \end{array} \qquad (6.38) $$

The first row contains the codewords themselves, starting with the all-zeros codeword U1 (which also serves as the error pattern e1 = 0).
Each row, called a coset, consists of an error pattern in the first column, called the coset leader, followed by the codewords perturbed by that error pattern.
If an error pattern is not a coset leader, erroneous decoding will result.

The Syndrome of a Coset

"Coset" is short for "a set of numbers having a common feature".
If ej is the coset leader, then Ui + ej is an n-tuple in this coset. The syndrome of this n-tuple is:

S = (Ui + ej) H^T = ej H^T

All members of a coset therefore share the same syndrome, and the syndrome must be unique to each coset to estimate the error pattern.

Error-Correction Decoding
1. Calculate the syndrome of r using S = r H^T.
2. Locate the coset leader (error pattern) ej whose syndrome equals r H^T.
3. This error pattern is assumed to be the corruption caused by the channel.
4. The corrected received vector, or codeword, is identified as Û = r + ej; we retrieve the valid codeword by subtracting (in modulo-2 arithmetic, adding) the identified error.

Locating the Error Pattern

An example of a standard array for a (6,3) code is shown. Computing ej H^T for each coset leader:

$$ \mathbf{S} = \mathbf{e}_j \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix} $$

The results are:

Error Pattern    Syndrome
000000           000
000001           101
000010           011
000100           110
001000           001
010000           010
100000           100
010001           111

Table 6.2: Syndrome Lookup Table

Error Correction Example

Since the error pattern is an estimate of the error, the decoder adds the estimated error to the received vector to obtain an estimate of the transmitted codeword:

Û = r + ê = (U + e) + ê = U + (e + ê)        (6.40)

Example: Let U = 1 0 1 1 1 0 be transmitted and r = 0 0 1 1 1 0 be received. Show how the decoder corrects the error using lookup Table 6.2.

S = [0 0 1 1 1 0] H^T = [1 0 0]
Estimated error (from Table 6.2): ê = 1 0 0 0 0 0
The corrected vector is estimated by:
Û = r + ê = 0 0 1 1 1 0 + 1 0 0 0 0 0 = 1 0 1 1 1 0

Implementation of decoder

Figure 6.12: Implementation of the (6,3) decoder
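A minimal syndrome-decoder sketch in Python (an assumed illustration of the procedure behind Figure 6.12, not the original implementation): build the syndrome lookup table from the coset leaders of Table 6.2 using H = [I | P^T] of the (6,3) example, then correct the received vector of the example above:

```python
import numpy as np

H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])     # H = [I | P^T] for the (6,3) code

# Coset leaders of Table 6.2: the all-zeros pattern, all single-error
# patterns, and one double-error pattern (010001).
leaders = [np.zeros(6, dtype=int)]
leaders += [np.eye(6, dtype=int)[i] for i in range(6)]
leaders += [np.array([0, 1, 0, 0, 0, 1])]

table = {tuple(e @ H.T % 2): e for e in leaders}   # syndrome -> error pattern

def decode(r):
    s = tuple(r @ H.T % 2)             # syndrome S = rH^T
    return (r + table[s]) % 2          # add (= subtract, mod 2) the estimate

r = np.array([0, 0, 1, 1, 1, 0])
print(decode(r))                       # -> [1 0 1 1 1 0]
```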

6.5 Error Detection and Correcting Capability


6.5.1 Weight and Distance of Binary Vectors

The Hamming distance d(U, V) between two codewords is the number of elements in which they differ.
The Hamming weight w(U) is the number of nonzero elements in a codeword.

Example:
U = 1 0 0 1 0 1 1 0 1
V = 0 1 1 1 1 0 1 0 0
w(U) = 5
d(U, V) = w(U + V) = 6

6.5.2 Minimum Distance of a Linear Code

The minimum distance, dmin, is the smallest among all the distances between pairs of distinct codewords in the code set.

6.5.3 Error Detection and Correction

The error-correcting capability t of a code is the maximum number of guaranteed correctable errors per codeword:

t = ⌊(dmin − 1) / 2⌋

The error-detecting capability e, defined in terms of dmin:

e = dmin − 1

6.7 Cyclic Codes

An (n, k) linear code is called a cyclic code if it can be described by the following property: every cyclic shift of a codeword is also a codeword in the set. So if we have a codeword

U(X) = u0 + u1 X + u2 X^2 + … + u_(n−1) X^(n−1)

then its end-around shifted version

U^(1)(X) = u_(n−1) + u0 X + u1 X^2 + … + u_(n−2) X^(n−1)

is also a codeword.

6.7.2 Binary Cyclic Codes Properties

A cyclic code can be generated using a generator polynomial:

g(X) = g0 + g1 X + g2 X^2 + … + gp X^p        (6.57)

where g0 and gp must equal 1, and p = n − k.
If the message polynomial is given by:

m(X) = m0 + m1 X + m2 X^2 + … + m_(k−1) X^(k−1)        (6.58)

then the codeword polynomial of the (n, k) cyclic code can be expressed as:

U(X) = (m0 + m1 X + m2 X^2 + … + m_(k−1) X^(k−1)) g(X) = m(X) g(X)        (6.59)

U is said to be a valid codeword of the subspace S if and only if g(X) divides U(X) without a remainder. A generator polynomial g(X) of an (n, k) cyclic code is a factor of X^n + 1; that is:

X^n + 1 = g(X) h(X)

6.7.3 Encoding in Systematic Form

Message polynomial:

m(X) = m0 + m1 X + m2 X^2 + … + m_(k−1) X^(k−1)

Multiply m(X) by X^(n−k):

X^(n−k) m(X) = m0 X^(n−k) + m1 X^(n−k+1) + … + m_(k−1) X^(n−1)

Divide by g(X):

X^(n−k) m(X) = q(X) g(X) + p(X)

Adding p(X) to both sides (modulo-2), we have:

p(X) + X^(n−k) m(X) = q(X) g(X) = U(X)

where:

U = (p0, p1, …, p_(n−k−1), m0, m1, …, m_(k−1))

with the first (n − k) digits being parity bits and the last k digits being message bits.

Example (on the board)
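As a complement to the board example, here is a sketch of the systematic encoding steps, assuming for illustration the (7,4) cyclic code with g(X) = 1 + X + X^3 (a factor of X^7 + 1):

```python
def gf2_poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; coefficients low-order first."""
    r = list(dividend)
    d = len(divisor) - 1                 # degree of g(X)
    for i in range(len(r) - 1, d - 1, -1):
        if r[i]:                         # cancel the leading term
            for j, coeff in enumerate(divisor):
                r[i - d + j] ^= coeff
    return r[:d]

def cyclic_encode(m, g, n):
    k = len(m)
    shifted = [0] * (n - k) + list(m)    # X^(n-k) m(X)
    p = gf2_poly_mod(shifted, g)         # p(X) = X^(n-k) m(X) mod g(X)
    return p + list(m)                   # U = (p0..p_(n-k-1), m0..m_(k-1))

g = [1, 1, 0, 1]                         # g(X) = 1 + X + X^3
print(cyclic_encode([1, 0, 1, 1], g, 7)) # -> [1, 0, 0, 1, 0, 1, 1]
```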

6.8 Well-Known Block Codes


6.8.1 Hamming Codes

A simple class of block codes characterized by the structure:

(n, k) = (2^m − 1, 2^m − 1 − m)

These codes have a minimum distance of 3 and are capable of correcting all single errors.
For hard-decision decoding, the bit error probability is:

$$ P_B = \frac{1}{n} \sum_{j=2}^{n} j \binom{n}{j} p^j (1 - p)^{n-j} $$

where p is the channel symbol error probability. An equivalent equation is:

P_B = p − p(1 − p)^(n−1)

For performance over a Gaussian channel using coherently demodulated BPSK, the channel symbol error probability is:

$$ p = Q\!\left(\sqrt{2 E_c / N_0}\right) $$
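As a numeric illustration (a sketch under the stated BPSK/AWGN assumptions, with an arbitrarily chosen Eb/N0 value), using Q(x) = ½ erfc(x/√2) and accounting for the code-rate energy reduction Ec = (k/n) Eb:

```python
import math

def Q(x):
    """Gaussian tail function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

n, k = 7, 4
EbN0_dB = 8.0
EbN0 = 10 ** (EbN0_dB / 10)
EcN0 = (k / n) * EbN0                  # code bits carry reduced energy

p = Q(math.sqrt(2 * EcN0))             # channel symbol error probability
PB = p - p * (1 - p) ** (n - 1)        # decoded bit error probability (approx.)
print(f"p = {p:.3e}, PB = {PB:.3e}")
```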

6.8.2 BCH Codes

Bose-Chaudhuri-Hocquenghem (BCH) codes are a generalization of Hamming codes.
They are a powerful class of cyclic codes that provides a large selection of block lengths, code rates, alphabet sizes, and error-correcting capabilities.
Table 6.4 lists some code generators g(X) for various values of n, k, and t, up to a block length of 255.

Table 6.4: Generators of BCH Codes
