
Digital Communication and Error Correcting Codes
Timothy J. Schulz
Professor and Chair

Engineering Exploration
Fall, 2004

Department of Electrical and Computer Engineering



Digital Data

• ASCII Text

A 01000001
B 01000010
C 01000011
D 01000100
E 01000101
F 01000110
⋮ ⋮



Digital Sampling
000010001000001011011011010000111110111111111

[Figure: an analog waveform sampled and quantized to the 3-bit levels 000 through 111]



Digital Communication

• Example: Frequency Shift Keying (FSK)


– Transmit a tone with a frequency determined by each bit:

s(t) = b·cos(2πf₀t) + (1 − b)·cos(2πf₁t)
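As a rough illustration, the keying rule above can be simulated numerically. A minimal sketch, assuming example parameters (f0 = 1 kHz, f1 = 2 kHz, 8 kHz sampling, 80 samples per bit) that are not from the slides:

    import numpy as np

    def fsk_modulate(bits, f0=1000.0, f1=2000.0, fs=8000.0, spb=80):
        # Per the slide formula s(t) = b cos(2 pi f0 t) + (1 - b) cos(2 pi f1 t):
        # bit b = 1 transmits the f0 tone, bit b = 0 transmits the f1 tone.
        t = np.arange(spb) / fs
        return np.concatenate(
            [b * np.cos(2 * np.pi * f0 * t) + (1 - b) * np.cos(2 * np.pi * f1 * t)
             for b in bits])

    waveform = fsk_modulate([0, 1, 1, 0])   # 320 samples, one tone per bit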

Digital Channels
Binary Symmetric Channel
0 → 0 and 1 → 1 with probability 1 − p
0 → 1 and 1 → 0 with probability p

Error probability: p



Error Correcting Codes


3 channel bits per 1 information bit: rate = 1/3

encode book
information bits channel bits
0 000
1 111

decode book
channel bits information bits
000 0
001 0
010 0
011 1
100 0
101 1
110 1
111 1
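A minimal Python sketch of this rate-1/3 code (illustrative, not from the slides); the decode book above amounts to a majority vote over each 3-bit block:

    def encode(bits):
        return [b for b in bits for _ in range(3)]            # 0 -> 000, 1 -> 111

    def decode(channel_bits):
        blocks = [channel_bits[i:i+3] for i in range(0, len(channel_bits), 3)]
        return [1 if sum(block) >= 2 else 0 for block in blocks]  # majority vote

    # The example on the next slide: received 010 000 100 001 110 decodes to 0 0 0 0 1.
    assert decode([0,1,0, 0,0,0, 1,0,0, 0,0,1, 1,1,0]) == [0, 0, 0, 0, 1]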




Error Correcting Codes

information bits 0 0 1 0 1
channel code 000 000 111 000 111
received bits 010 000 100 001 110
decoded bits 0 0 0 0 1

5 channel errors; 1 information error




Error Correcting Codes

• An error will only be made if the channel makes two or
three errors on a block of 3 channel bits

situation          probability
ccc  no errors     (1−p)(1−p)(1−p) = 1 − 3p + 3p² − p³
cce  one error     (1−p)(1−p)(p)   = p − 2p² + p³
cec  one error     (1−p)(p)(1−p)   = p − 2p² + p³
cee  two errors    (1−p)(p)(p)     = p² − p³
ecc  one error     (p)(1−p)(1−p)   = p − 2p² + p³
ece  two errors    (p)(1−p)(p)     = p² − p³
eec  two errors    (p)(p)(1−p)     = p² − p³
eee  three errors  (p)(p)(p)       = p³

error probability = 3p² − 2p³

[Figure: bit error probability 3p² − 2p³ versus channel error probability p, for 0 ≤ p ≤ 0.5]
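A quick numerical check of this expression (a sketch; p = 0.1 is an arbitrary example value):

    p = 0.1
    block_error = 3 * p**2 - 2 * p**3     # probability of 2 or 3 errors in a block
    print(block_error)                    # 0.028, well below the raw bit error rate 0.1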




Error Correcting Codes

• Codes are characterized by the number of channel bits
(M) used for (N) information bits. This is called a rate
N/M code.
• An encode book has 2^N entries, and each entry is an
M-bit codeword.
• A decode book has 2^M entries, and each entry is an
N-bit information-bit sequence.



Linear Block Codes



Basic Definitions

• Let u be a k-bit information sequence and
v be the corresponding n-bit codeword.
A total of 2^k n-bit codewords constitute an (n,k) code.
• Linear code: The sum of any two codewords is a codeword.
• Observation: The all-zero sequence is a codeword in every
linear block code.



Generator Matrix
• All 2^k codewords can be generated from a set of k linearly independent
codewords.
• Let g0, g1, …, gk−1 be a set of k linearly independent codewords.

        [ g0   ]   [ g0,0     g0,1     …  g0,n−1   ]
    G = [ ⋮    ] = [ ⋮                    ⋮        ]
        [ gk−1 ]   [ gk−1,0   gk−1,1   …  gk−1,n−1 ]

• v = u·G
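As a sketch, the product v = u·G can be computed over GF(2) with numpy; the (7,4) generator matrix from the following slides is used for concreteness:

    import numpy as np

    G = np.array([[1, 1, 0, 1, 0, 0, 0],
                  [0, 1, 1, 0, 1, 0, 0],
                  [1, 1, 1, 0, 0, 1, 0],
                  [1, 0, 1, 0, 0, 0, 1]])   # G = [P | I4] for the (7,4) code

    u = np.array([1, 0, 1, 1])              # information bits
    v = u.dot(G) % 2                        # mod-2 arithmetic
    print(v)                                # [1 0 0 1 0 1 1]; the last 4 bits are u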



Systematic Codes
• Any linear block code can be put in systematic form:

[ n−k check bits | k information bits ]

• In this case the generator matrix will take the form
G = [ P | Ik ]
• This matrix corresponds to the set of k codewords
corresponding to the information sequences that have a
single nonzero element. Clearly this set is linearly
independent.



Generator Matrix (cont’d)
• EX: The generating set for the (7,4) code:
1000 ===> 1101000; 0100 ===> 0110100
0010 ===> 1110010; 0001 ===> 1010001
• Every codeword is a linear combination of these 4 codewords.
That is: v = u·G, where

    [ 1 1 0 | 1 0 0 0 ]
G = [ 0 1 1 | 0 1 0 0 ] = [ P | Ik ]
    [ 1 1 1 | 0 0 1 0 ]
    [ 1 0 1 | 0 0 0 1 ]
      k×(n−k)   k×k

• Storage requirement reduced from 2^k(n+k) to k(n−k).



Parity-Check Matrix
• For G = [ P | Ik ], define the matrix H = [ In−k | Pᵀ ].
• (The size of H is (n−k)×n.)
• It follows that G·Hᵀ = 0.
• Since v = u·G, then v·Hᵀ = u·G·Hᵀ = 0.
• The parity check matrix of code C is the generator matrix
of another code Cd, called the dual of C.

    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]



Encoding Using H Matrix
(Parity Check Equations)

1 0 0
0 1 0
0 0 1
 v1 v2 v3 v4 v5 v6 v7  1 1 0  0
0 1 1
information 1 1 1
1 0 1

v1+v4  v6  v7  0 v1=v4  v6  v7
v2+v4  v5  v6  0  v2=v4  v5  v6
v3+v5  v6  v7  0 v3=v5  v6  v7



Encoding Circuit



Minimum Distance
• DF: The Hamming weight of a codeword v , denoted by
w(v), is the number of nonzero elements in the codeword.
• DF: The minimum weight of a code, wmin, is the smallest
weight of the nonzero codewords in the code:
wmin = min {w(v): v ∈ C, v ≠ 0}.
• DF: The Hamming distance between v and w, denoted by
d(v,w), is the number of locations where they differ.
Note that d(v,w) = w(v+w).
• DF: The minimum distance of the code:
dmin = min {d(v,w): v,w ∈ C, v ≠ w}
• TH3.1: In any linear code, dmin = wmin.



Minimum Distance (cont’d)

• TH3.2: For each codeword of Hamming weight l there
exist l columns of H such that the vector sum of these
columns is zero. Conversely, if there exist l columns of H
whose vector sum is zero, there exists a codeword of
weight l.
• COL 3.2.2: The dmin of C is equal to the minimum number of
columns in H that sum to zero.
• EX:
    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]



Decoding Linear Codes

• Let v be transmitted and r be received, where
r = v + e
e = error pattern = e1 e2 … en, where
ei = 1 if an error occurred in the i-th location,
ei = 0 otherwise.
The weight of e determines the number of errors.

• We will attempt both processes: error detection and error
correction.



Error Detection
• Define the syndrome
s = r·Hᵀ = (s0, s1, …, sn−k−1)
• If s = 0, then r is a codeword and the receiver assumes e = 0.
• If e is identical to some codeword,
then s = 0 as well, and the error is undetectable.
• EX 3.4: with r = (r0 r1 … r6) and the H above, s = r·Hᵀ gives
s0 = r0 + r3 + r5 + r6
s1 = r1 + r3 + r4 + r5
s2 = r2 + r4 + r5 + r6



Error Correction

• s = r·Hᵀ = (v + e)·Hᵀ = v·Hᵀ + e·Hᵀ = e·Hᵀ

• The syndrome depends only on the error pattern.
• Can we use the syndrome to find e, and hence do the
correction?
• Syndrome digits are linear combinations of error digits.
They provide information about the error locations.
• Unfortunately, for n−k equations and n unknowns there are
2^k solutions. Which one to use?



Example 3.5
• Let r = 1001001
• s = 111
• s0 = e0+e3+e5+e6 =1
• s1 = e1+e3+e4+e5 =1
• s2 = e2+e4+e5+e6 =1
• There are 16 error patterns that satisfy the above equations,
some of them are
0000010 1101010 1010011 1111101
• The most probable one is the one with minimum weight.
Hence v* = 1001001 + 0000010 = 1001011



Standard Array Decoding

• Transmitted codeword is any one of:
v1, v2, …, v2^k
• The received word r is any one of 2^n n-tuples.
• Partition the 2^n words into 2^k disjoint subsets D1, D2, …, D2^k
such that the words in subset Di are closer to codeword vi
than to any other codeword.
• Each subset is associated with one codeword.



Standard Array Construction
1. List the 2^k codewords in a row, starting with the all-zero codeword v1.
2. Select an error pattern e2 and place it below v1. This error pattern will
be a correctable error pattern, therefore it should be selected such that:
(i) it has the smallest weight possible (most probable error);
(ii) it has not appeared before in the array.
3. Add e2 to each codeword and place the sum below that codeword.
4. Repeat Steps 2 and 3 until all the possible error patterns have been
accounted for. There will always be 2^n / 2^k = 2^(n−k) rows in the array.
Each row is called a coset. The leading error pattern is the coset
leader.
• Note that choosing any element in the coset as coset leader does not
change the elements in the coset; it simply permutes them.



Standard Array

v1 = 0        v2              v3              …   v2^k
e2            e2 + v2         e2 + v3         …   e2 + v2^k
e3            e3 + v2         e3 + v3         …   e3 + v2^k
⋮             ⋮               ⋮                   ⋮
e2^(n−k)      e2^(n−k) + v2   e2^(n−k) + v3   …   e2^(n−k) + v2^k

• TH 3.3:
No two n-tuples in the same row are identical.
Every n-tuple appears in one and only one row.



Standard Array Decoding is Minimum
Distance Decoding
• Let the received word r fall in Di subset and lth coset.
• Then r = el + vi
• r will be decoded as vi. We will show that r is closer to vi
than any other codeword.
• d(r,vi) = w(r + vi) = w(el + vi + vi) = w(el)
• d(r,vj) = w(r + vj) = w(el + vi + vj) = w(el + vs),
where vs = vi + vj is a nonzero codeword.
• As el and el + vs are in the same coset, and el is selected to
be of the minimum weight that did not appear before,
w(el) ≤ w(el + vs)
• Therefore d(r,vi) ≤ d(r,vj).



Standard Array Decoding (cont’d)

• TH 3.4
Every (n,k) linear code is capable of correcting exactly 2^(n−k)
error patterns, including the all-zero error pattern.
• EX: The (7,4) Hamming code
# of correctable error patterns = 2^3 = 8
# of single-error patterns = 7
Therefore, all single-error patterns, and only single-error
patterns, can be corrected. (Recall the Hamming bound, and
the fact that Hamming codes are perfect.)



Standard Array Decoding (cont’d)

EX 3.6: The (6,3) code defined by the H matrix:

    [ 1 0 0 0 1 1 ]          Codewords
H = [ 0 1 0 1 0 1 ]          000000  110001
    [ 0 0 1 1 1 0 ]          101010  011011
                             011100  101101
v1 = v5 + v6                 110110  000111
v2 = v4 + v6
v3 = v4 + v5                 dmin = 3
Standard Array Decoding (cont’d)

• Can correct all single errors and one double error pattern
000000 110001 101010 011011 011100 101101 110110 000111
000001 110000 101011 011010 011101 101100 110111 000110
000010 110011 101000 011001 011110 101111 110100 000101
000100 110101 101110 011111 011000 101001 110010 000011
001000 111001 100010 010011 010100 100101 111110 001111
010000 100001 111010 001011 001100 111101 100110 010111
100000 010001 001010 111011 111100 001101 010110 100111
100100 010101 001110 111111 111000 001001 010010 100011
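This array can be rebuilt by brute force: repeatedly pick the smallest-weight word not yet used in any coset as a leader, and add it to every codeword. A sketch (illustrative only; the double-error coset may pick a different representative than the slide's 100100, which permutes but does not change that coset):

    from itertools import product

    codewords = [(0,0,0,0,0,0), (1,1,0,0,0,1), (1,0,1,0,1,0), (0,1,1,0,1,1),
                 (0,1,1,1,0,0), (1,0,1,1,0,1), (1,1,0,1,1,0), (0,0,0,1,1,1)]

    seen, rows = set(), []
    for e in sorted(product([0, 1], repeat=6), key=sum):   # candidates by weight
        if e in seen:
            continue                                       # already in some coset
        coset = [tuple((a + b) % 2 for a, b in zip(e, c)) for c in codewords]
        seen.update(coset)
        rows.append(coset)

    print(len(rows))                  # 2^(n-k) = 8 cosets
    print([row[0] for row in rows])   # leaders: zero word, six single errors, one double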



The Syndrome
• Standard array decoding requires huge storage and a long search time.
• Recall the syndrome
s = r·Hᵀ = (v + e)·Hᵀ = e·Hᵀ
• The syndrome depends only on the error pattern and not on the transmitted
codeword.

• TH 3.6:
All the 2^k n-tuples of a coset have the same syndrome. The syndromes of
different cosets are different.
(1st part) (el + vi)·Hᵀ = el·Hᵀ
(2nd part) Let ej and el be leaders of two cosets, j < l. Assume they have the
same syndrome:
ej·Hᵀ = el·Hᵀ  ⟹  (ej + el)·Hᵀ = 0.
This implies ej + el = vi, or el = ej + vi.
This means that el is in the j-th coset. Contradiction.



The Syndrome (cont’d)

• There are 2^(n−k) cosets and
2^(n−k) syndromes (one-to-one
correspondence).
• Instead of forming the
standard array we form a
decoding table of the
correctable error patterns
and their syndromes.

Error Pattern   Syndrome
0000000         000
1000000         100
0100000         010
0010000         001
0001000         110
0000100         011
0000010         111
0000001         101



Syndrome Decoding

Decoding Procedure:
1. For the received vector r, compute the syndrome s = rHT.
2. Using the table, identify the coset leader (error pattern) el .
3. Add el to r to recover the transmitted codeword v.
• EX:
r = 1110101 ==> s = 001 ==> e = 0010000
Then, v = 1100101
• Syndrome decoding reduces the storage from n·2^n to
2^(n−k)·(2n−k) bits. It also reduces the search time considerably.
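A compact sketch of this table-lookup decoder for the (7,4) code (using the syndrome table from the previous slide; single-error correction only):

    import numpy as np

    H = np.array([[1, 0, 0, 1, 0, 1, 1],
                  [0, 1, 0, 1, 1, 1, 0],
                  [0, 0, 1, 0, 1, 1, 1]])

    # syndrome (as a tuple) -> correctable single-error pattern
    table = {tuple(H[:, i]): np.eye(7, dtype=int)[i] for i in range(7)}

    r = np.array([1, 1, 1, 0, 1, 0, 1])      # received word from the example
    s = tuple(r.dot(H.T) % 2)                # s = r H^T = (0, 0, 1)
    v = (r + table[s]) % 2 if any(s) else r  # add the coset leader e
    print(v)                                 # [1 1 0 0 1 0 1], as in the example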



Hardware Implementation

• Let r = r0 r1 r2 r3 r4 r5 r6 and s = s0 s1 s2
• From the H matrix:
s0 = r0 + r3 + r 5 + r 6
s1 = r1 + r3 + r 4 + r 5
s2 = r2 + r4 + r 5 + r 6
• From the table of syndromes and their corresponding
correctable error patterns, a truth table can be constructed.
A combinational logic circuit with s0 , s1 , s2 as input and
e0 , e1 , e2 , e3 , e4 , e5 , e6 as outputs can be designed.



Decoding Circuit for the (7,4) HC



Error Detection Capability
• A code with minimum distance dmin can detect all error patterns of weight dmin − 1 or
less. It can also detect many error patterns of higher weight, but not all.
• In fact the number of undetectable error patterns is 2^k − 1, out of the 2^n − 1
nonzero error patterns.
• DF: Ai = number of codewords of weight i.
• {Ai; i = 0, 1, …, n} = weight distribution of the code.
• Note that A0 = 1; Aj = 0 for 0 < j < dmin.

Pu(E) = Σ_{i=dmin}^{n} Ai p^i (1 − p)^(n−i)



• EX: Undetectable error probability of the (7,4) HC
A0 = A7 = 1; A1 = A2 = A5 = A6 = 0; A3 = A4 = 7
Pu(E) = 7p³(1−p)⁴ + 7p⁴(1−p)³ + p⁷
For p = 10⁻², Pu(E) ≈ 7×10⁻⁶

• Define the weight enumerator: A(z) = Σ_{i=0}^{n} Ai z^i

• Then
Pu(E) = Σ_{i=1}^{n} Ai p^i (1−p)^(n−i) = (1−p)^n Σ_{i=1}^{n} Ai (p/(1−p))^i
• Let z = p/(1−p); noting that A0 = 1,
A(p/(1−p)) − 1 = Σ_{i=1}^{n} Ai (p/(1−p))^i,  so  Pu(E) = (1−p)^n [A(p/(1−p)) − 1]
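A one-line numerical check of this example (sketch):

    p = 1e-2
    Pu = 7 * p**3 * (1 - p)**4 + 7 * p**4 * (1 - p)**3 + p**7
    print(Pu)    # about 6.8e-06, i.e. on the order of 7e-06 as stated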



• The probability of undetected error can as well be found from the
weight enumerator of the dual code:

Pu(E) = 2^−(n−k) B(1 − 2p) − (1 − p)^n

where B(z) is the weight enumerator of the dual code.

• When neither A(z) nor B(z) is available, Pu may be upper bounded by
Pu ≤ 2^−(n−k) [1 − (1−p)^n]
• For good channels (p → 0), Pu ≤ 2^−(n−k)



Error Correction Capability
• An (n,k) code with minimum distance dmin can correct up to t errors, where

t = ⌊(dmin − 1)/2⌋

• It may be able to correct some higher-weight error patterns, but not all.

• The total number of patterns it can correct is 2^(n−k).

If Σ_{i=0}^{t} C(n,i) = 2^(n−k), the code is perfect.

P(E) = Σ_{i=t+1}^{n} C(n,i) p^i (1−p)^(n−i) = 1 − Σ_{i=0}^{t} C(n,i) p^i (1−p)^(n−i)



Hamming Codes
• Hamming codes constitute a family of single-error correcting codes
defined by:
n = 2^m − 1, k = n − m, m ≥ 3
• The minimum distance of the code is dmin = 3.
• Construction rule of H:
H is an (n−k)×n matrix, i.e. it has 2^m − 1 columns of m-tuples.
The all-zero m-tuple cannot be a column of H (otherwise dmin = 1).
No two columns are identical (otherwise dmin = 2).
Therefore, the H matrix of a Hamming code of order m has as its
columns all non-zero m-tuples.
The sum of any two columns is a column of H. Therefore the sum of
some three columns is zero, i.e. dmin = 3.



Systematic Hamming Codes
• In systematic form:
H = [ Im | Q ]
• The columns of Q are all m-tuples of weight ≥ 2.
• Different arrangements of the columns of Q produce
different codes, but with the same distance property.
• Hamming codes are perfect codes:

Σ_{i=0}^{t} C(n,i) = 2^(n−k)

Left side = 1 + n; right side = 2^m = n + 1.



Decoding of Hamming Codes

• Consider a single-error pattern e(i), where i is a number
determining the position of the error.
• s = e(i)·Hᵀ = Hiᵀ = the transpose of the i-th column of H.
• Example: for the (7,4) code above,

(0 1 0 0 0 0 0)·Hᵀ = (0 1 0),

which is the transpose of the second column of H.



Decoding of Hamming Codes (cont’d)

• That is, the (transpose of the) i-th column of H is the
syndrome corresponding to a single error in the i-th position.
• Decoding rule:
1. Compute the syndrome s = r·Hᵀ.
2. Locate the error (i.e. find i for which sᵀ = Hi).
3. Invert the i-th bit of r.



Weight Distribution of Hamming Codes

• The weight enumerator of Hamming codes is:

A(z) = (1/(n+1)) [ (1+z)^n + n(1−z)(1−z²)^((n−1)/2) ]

• The weight distribution can as well be obtained from the
recursive equations:
A0 = 1, A1 = 0
(i+1)A_{i+1} + A_i + (n−i+1)A_{i−1} = C(n,i),  i = 1, 2, …, n
• The dual of a Hamming code is a (2^m − 1, m) linear code. Its
weight enumerator is

B(z) = 1 + (2^m − 1) z^(2^(m−1))



History

• In the late 1940’s Richard Hamming recognized that


the further evolution of computers required greater
reliability, in particular the ability to not only detect
errors, but correct them. His search for error-
correcting codes led to the Hamming Codes, perfect
1-error correcting codes, and the extended Hamming
Codes, 1-error correcting and 2-error detecting
codes.



Uses

• Hamming Codes are still widely used in computing,


telecommunication, and other applications.
• Hamming Codes are also applied in
– Data compression
– Some solutions to the popular puzzle The Hat
Game
– Block Turbo Codes



A [7,4] binary Hamming Code

• Let our codeword be (x1 x2 … x7) ∈ F2⁷

• x3, x5, x6, x7 are chosen according to the message
(perhaps the message itself is (x3 x5 x6 x7)).
• x4 := x5 + x6 + x7 (mod 2)
• x2 := x3 + x6 + x7
• x1 := x3 + x5 + x7



[7,4] binary Hamming codewords



A [7,4] binary Hamming Code
• Let a = x4 + x5 + x6 + x7 (=1 iff one of these bits is in error)
• Let b = x2 + x3 + x6 + x7
• Let c = x1 + x3 + x5 + x7
• If there is an error (assuming at most one) then abc will be the
binary representation of the subscript of the offending bit.

• If (y1 y2 … y7) is received and abc ≠ 000, then we


assume the bit abc is in error and switch it. If
abc=000, we assume there were no errors (so if
there are three or more errors we may recover the
wrong codeword).



Definition: Generator and Check
Matrices
• For an [n, k] linear code, the generator matrix is a
k×n matrix for which the row space is the given code.
• A check matrix for an [n, k] code is a generator matrix for
the dual code. In other words, an (n−k)×n matrix M for
which Mxᵀ = 0 for all x in the code.



A Construction for binary
Hamming Codes
• For a given r, form an r × (2^r − 1) matrix M, the columns of which
are the binary representations (r bits long) of 1, …, 2^r − 1.
• The linear code for which this is the check matrix is a
[2^r − 1, 2^r − 1 − r] binary Hamming Code = {x = (x1 x2 … xn) : Mxᵀ = 0}.

Example Check Matrix


• A check matrix for a [7,4] binary Hamming Code:



Syndrome Decoding

• Let y = (y1 y2 … yn) be a received codeword.


• The syndrome of y is S := Lr·yᵀ. If S = 0 then there was
no error. If S ≠ 0 then S is the binary representation
of some integer 1 ≤ t ≤ n = 2^r − 1 and the intended
codeword is
x = (y1 … yt + 1 … yn), i.e. y with the t-th bit flipped.



Example Using L3
• Suppose (1 0 1 0 0 1 0) is received.

The syndrome is S = (1 0 0); 100 is 4 in binary, so the intended
codeword was (1 0 1 1 0 1 0).



Extended [8,4] binary Hamming Code
• As with the [7,4] binary Hamming Code:
– x3, x5, x6, x7 are chosen according to the message.
– x4 := x5 + x6 + x7
– x2 := x3 + x6 + x7
– x1 := x3 + x5 + x7
• Add a new bit x0 such that
– x0 = x1 + x2 + x3 + x4 + x5 + x6 + x7 . i.e., the new bit
makes the sum of all the bits zero. x0 is called a
parity check.



Extended binary Hamming Code
• The minimum distance between any two codewords
is now 4, so an extended Hamming Code is a 1-error
correcting and 2-error detecting code.
• The general construction of a [2^r, 2^r − 1 − r] extended
code from a [2^r − 1, 2^r − 1 − r] binary Hamming Code
is the same: add a parity check bit.



Check Matrix Construction of
Extended Hamming Code
• The check matrix of an extended Hamming Code can
be constructed from the check matrix of a Hamming
code by adding a zero column on the left and a row
of 1’s to the bottom.



q-ary Hamming Codes

• The binary construction generalizes to Hamming
Codes over an alphabet A = {0, …, q − 1}, q ≥ 2.
• For a given r, form an r × (q^r − 1)/(q − 1) matrix M over A,
any two columns of which are linearly independent.
• M determines a [(q^r − 1)/(q − 1), (q^r − 1)/(q − 1) − r] (= [n,k])
q-ary Hamming Code for which M is the check matrix.



Example: ternary [4, 2] Hamming
• Two check matrices for some [4, 2] ternary
Hamming Codes:



Syndrome decoding: the q-ary
case
• The syndrome of received word y, S := Myᵀ, will be a
multiple of one of the columns of M, say S = αmᵢ, α a
scalar, mᵢ the i-th column of M. Assume an error
vector of weight 1 was introduced: y = x + (0 … α … 0),
α in the i-th spot.



Example: q-ary Syndrome

• [4,2] ternary with check matrix , word (0


1 1 1) received.

• So decode (0 1 1 1) as
(0 1 1 1) – (0 0 2 0) = (0 1 2 1).



Perfect 1-error correcting

• Hamming Codes are perfect 1-error correcting codes.


That is, any received word with at most one error will
be decoded correctly and the code has the smallest
possible size of any code that does this.
• For a given r, any perfect 1-error correcting linear
code of length n = 2^r − 1 and dimension n − r is a
Hamming Code.



Proof: 1-error correcting
• A code will be 1-error correcting if
– spheres of radius 1 centered at codewords cover the
codespace, and
– if the minimum distance between any two codewords ≥ 3,
since then spheres of radius 1 centered at codewords will be
disjoint.
• Suppose codewords x, y differ by 1 bit. Then x-y is a
codeword of weight 1, and M(x-y) ≠ 0. Contradiction. If x, y
differ by 2 bits, then M(x-y) is the difference of two multiples of
columns of M. No two columns of M are linearly dependent, so
M(x-y) ≠ 0, another contradiction. Thus the minimum distance
is at least 3.



Perfect

• A sphere of radius δ centered at x is
Sδ(x) = {y in Aⁿ : dH(x,y) ≤ δ}, where A is the alphabet,
Fq, and dH is the Hamming distance.
• A sphere of radius e contains Σ_{i=0}^{e} C(n,i)(q−1)^i words.
• If C is an e-error correcting code then
|C| · Σ_{i=0}^{e} C(n,i)(q−1)^i ≤ qⁿ, so |C| ≤ qⁿ / Σ_{i=0}^{e} C(n,i)(q−1)^i.



Perfect
• This last inequality is called the sphere packing
bound for an e-error correcting code C of length n
over Fq:
|C| ≤ qⁿ / Σ_{i=0}^{e} C(n,i)(q−1)^i,
where n is the length of the code and in this case e = 1.
• A code for which equality holds is called perfect.



Proof: Perfect

• The right side of this, for e = 1, is qⁿ / (1 + n(q−1)).
• The left side is q^(n−r), where n = (q^r − 1)/(q − 1).
q^(n−r) (1 + n(q−1)) = q^(n−r) (1 + (q^r − 1)) = qⁿ.

Applications
• Data compression.
• Turbo Codes
• The Hat Game



Data Compression

• Hamming Codes can be used for a form of lossy


compression.
• If n = 2^r − 1 for some r, then any n-tuple of bits x is within
distance at most 1 from a Hamming codeword c. Let
G be a generator matrix for the Hamming Code, and
mG = c.
• For compression, store x as m. For decompression,
decode m as c. This saves r bits of space but
corrupts (at most) 1 bit.



The Hat Game
• A group of n players enter a room whereupon they each receive
a hat. Each player can see everyone else’s hat but not his own.

• The players must each simultaneously guess a hat color, or


pass.
• The group loses if any player guesses the wrong hat color or if
every player passes.
• Players are not necessarily anonymous, they can be numbered.
• Assignment of hats is assumed to be random.
• The players can meet beforehand to devise a strategy.
• The goal is to devise the strategy that gives the highest
probability of winning.



EE 551/451, Fall, 2007

Communication Systems


Zhu Han
Department of Electrical and Computer Engineering

Class 25

Dec. 6th, 2007


Outline
• Project 2
• ARQ Review
• Linear Code
  – Hamming Code Revisit
  – Reed–Muller code
• Cyclic Code
  – CRC Code
  – BCH Code
  – RS Code



ARQ, FEC, HEC
• ARQ: error detection code at tx/rx, with ACK/NACK feedback.

• Forward Error Correction (error correction coding): error correction code
at tx/rx, no feedback.

• Hybrid Error Correction: error detection/correction code at tx/rx,
with ACK/NACK feedback.



Hamming Code
• H(n,k): k information bits, n overall code length
• n = 2^m − 1, k = 2^m − m − 1:
• H(7,4), rate 4/7; H(15,11), rate 11/15; H(31,26), rate 26/31
• H(7,4): distance d = 3, correction ability 1, detection ability 2
• Remember that it is good to have larger distance and rate
• Larger n means larger delay, but usually a better code



Hamming Code Example
• H(7,4)
• Generator matrix G: the first 4-by-4 block is the identity matrix
• Message information vector p
• Transmission vector x
• Received vector r and error vector e
• Parity check matrix H



Error Correction
• If there is no error, the syndrome vector z = 0.

• If there is one error, at location 2, the new syndrome vector z
corresponds to the second column of H. Thus an error
has been detected in position 2, and can be corrected.
Exercise
• Same problem as the previous slide, but p = (1001)ᵀ and the error
occurs at location 4 instead.
• Pause for 5 minutes.
• Might be 10 points in the finals.



Important Hamming Codes
• Hamming (7,4,3) code. It has 16 codewords of length 7. It can
be used to send 2⁷ = 128 messages and can be used to correct 1
error.
• Golay (23,12,7) code. It has 4,096 codewords. It can be used to
transmit 8,388,608 messages and can correct 3 errors.
• Quadratic residue (47,24,11) code. It has 16,777,216
codewords and can be used to transmit 140,737,488,355,328
messages and correct 5 errors.



Reed–Muller code




Cyclic code
• Cyclic codes are of interest and importance because:
– They possess a rich algebraic structure that can be utilized in a
variety of ways.
– They have extremely concise specifications.
– They can be efficiently implemented using simple shift registers.
– Many practically important codes are cyclic.

• In practice, cyclic codes are often used for error detection
(Cyclic Redundancy Check, CRC):
– Used for packet networks.
– When an error is detected by the receiver, it requests
retransmission (ARQ).



BASIC DEFINITION of Cyclic Code

FREQUENCY of CYCLIC CODES

EXAMPLE of a CYCLIC CODE

POLYNOMIALS over GF(q)

EXAMPLE

Cyclic Code Encoder

Cyclic Code Decoder
• Divider
• Similar structure as the multiplier in the encoder

Cyclic Redundancy Checks (CRC)

Example of CRC

Checking for errors


Capability of CRC
• An error E(x) is undetectable if it is divisible by G(x). The
following can be detected:
– All single-bit errors, if G(x) has more than one nonzero term
– All double-bit errors, if G(x) has a factor with three terms
– Any odd number of errors, if G(x) contains a factor x + 1
– Any burst with length less than or equal to n−k
– A fraction of error bursts of length n−k+1; the fraction is 1 − 2^−(n−k−1)
– A fraction of error bursts of length greater than n−k+1; the fraction
is 1 − 2^−(n−k)
• Powerful error detection; more computational complexity
compared to the Internet checksum
• Page 652
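As a sketch of the underlying arithmetic, a CRC is the remainder of a mod-2 long division. The generator below (10011, i.e. G(x) = x⁴ + x + 1) is a small illustrative choice, not one of the standard generators:

    def mod2_div(bits, gen=0b10011, glen=5):
        # long division of the bit stream by the generator polynomial, mod 2
        r = 0
        for b in bits:
            r = (r << 1) | b
            if r >> (glen - 1):            # degree reached -> XOR off the generator
                r ^= gen
        return r

    msg = [1, 0, 1, 1, 0, 1]
    crc = mod2_div(msg + [0, 0, 0, 0])     # append n-k = 4 zeros, keep the remainder
    codeword = msg + [(crc >> i) & 1 for i in (3, 2, 1, 0)]
    assert mod2_div(codeword) == 0         # an error-free block divides cleanly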



BCH Code
• Bose, Ray-Chaudhuri, Hocquenghem
– Multiple error correcting ability
– Ease of encoding and decoding
– Page 653
• Most powerful cyclic code
– For any positive integers m and t < 2^(m−1), there exists a t-error
correcting (n,k) code with n = 2^m − 1 and n−k ≤ mt.

• Industry standards
– (511, 493) BCH code in ITU-T Rec. H.261 ("Video codec for
audiovisual services at p × 64 kbit/s"), a video coding standard used for
video conferencing and video phones.
– (40, 32) BCH code in ATM (Asynchronous Transfer Mode)



BCH Performance




Reed-Solomon Codes
• An important subclass of non-binary BCH codes
• Page 654
• Wide range of applications:
– Storage devices (tape, CD, DVD, …)
– Wireless or mobile communication
– Satellite communication
– Digital television / Digital Video Broadcast (DVB)
– High-speed modems (ADSL, xDSL, …)



Examples
• 10.2, page 639
• 10.3, page 648
• 10.4, page 651
• Might be 4 points in the final.



1971: Mariner 9
• Mariner 9 used a [32,6,16] Reed-Muller
code to transmit its grey images of Mars.

camera rate: 100,000 bits/second
transmission speed: 16,000 bits/second



1979+: Voyagers I & II
• Voyagers I & II used a [24,12,8] Golay code
to send their color images of Jupiter and Saturn.

• Voyager 2 traveled further, to Uranus
and Neptune. Because of the higher
error rate it switched to the more
robust Reed-Solomon code.
Modern Codes
• More recently, Turbo codes were invented; they are used in
3G cell phones, (future) satellites, and in the
Cassini-Huygens space probe [1997–].

• Other modern codes: Fountain, Raptor, LT, online codes, …

• Next, next class



Error Correcting Codes
Define the imperfectness of a given code as the difference between the code's required Eb/No to
attain a given word error probability (Pw) and the minimum possible Eb/No required to
attain the same Pw, as implied by the sphere-packing bound for codes with the same
block size k and code rate r.



Radio System Propagation




Satellite Communications
• Large communication area. Any two
places within the coverage of radio
transmission by satellite can
communicate with each other.
• Seldom affected by land disasters
(high reliability).
• A circuit can be started as soon as an
earth station is established (prompt
circuit starting).
• Can be received at many places
simultaneously, realizing broadcast and
multi-access communication
economically (multi-access).
• Very flexible circuit installation; can
disperse over-centralized traffic at
any time.
• One channel can be used in different
directions or areas (multi-access
connecting).



GPS
• Just a timer, 24 satellites
• Position calculation



IV054
CHAPTER 3: Cyclic and convolution codes

Cyclic codes are of interest and importance because:

• They possess a rich algebraic structure that can be
utilized in a variety of ways.
• They have extremely concise specifications.
• They can be efficiently implemented using simple
shift registers.
• Many practically important codes are cyclic.

Convolution codes allow the encoding of streams of data
(bits).
IV054
BASIC DEFINITION AND EXAMPLES
• Definition: A code C is cyclic if
(i) C is a linear code;
(ii) any cyclic shift of a codeword is also a codeword, i.e. whenever a0 … an−1 ∈ C, then
also an−1 a0 … an−2 ∈ C.

Example
(i) Code C = {000, 101, 011, 110} is cyclic.
(ii) Hamming code Ham(3, 2): with the generator matrix

    [ 1 0 0 0 0 1 1 ]
G = [ 0 1 0 0 1 0 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 ]

it is equivalent to a cyclic code.
(iii) The binary linear code {0000, 1001, 0110, 1111} is not cyclic, but it is
equivalent to a cyclic code.
(iv) Is the Hamming code Ham(2, 3) with the generator matrix

G = [ 1 0 1 1 ]
    [ 0 1 1 2 ]

(a) cyclic?
(b) equivalent to a cyclic code?
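A two-line sketch checking condition (ii) for code (i) above:

    C = {'000', '101', '011', '110'}
    shift = lambda w: w[-1] + w[:-1]          # a0...an-1 -> an-1 a0...an-2
    assert all(shift(w) in C for w in C)      # every cyclic shift stays in C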
IV054 FREQUENCY of CYCLIC CODES
• Compared with linear codes, cyclic codes are quite scarce. For example, there are 11,811
binary linear (7,3) codes, but only two of them are cyclic.

• Trivial cyclic codes. For any field F and any integer n ≥ 3 there are always the following cyclic
codes of length n over F:

• No-information code - code consisting of just one all-zero codeword.

• Repetition code - code consisting of codewords (a, a, …, a) for a ∈ F.

• Single-parity-check code - code consisting of all codewords with parity 0.

• No-parity code - code consisting of all codewords of length n.

• For some cases, for example for n = 19 and F = GF(2), the above four trivial cyclic codes are
the only cyclic codes.
IV054 EXAMPLE of a CYCLIC CODE
• The code with the generator matrix

    [ 1 0 1 1 1 0 0 ]
G = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]

has codewords

c1 = 1011100   c2 = 0101110   c3 = 0010111
c1 + c2 = 1110010   c1 + c3 = 1001011   c2 + c3 = 0111001
c1 + c2 + c3 = 1100101

and it is cyclic because the right shifts have the following impacts:

c1 → c2, c2 → c3, c3 → c1 + c3
c1 + c2 → c2 + c3, c1 + c3 → c1 + c2 + c3, c2 + c3 → c1
c1 + c2 + c3 → c1 + c2
IV054
POLYNOMIALS over GF(q)

• Fq[x] denotes the set of all polynomials over GF(q).
• deg(f(x)) = the largest m such that x^m has a non-zero coefficient in f(x).

Multiplication of polynomials: If f(x), g(x) ∈ Fq[x], then
deg(f(x)g(x)) = deg(f(x)) + deg(g(x)).

Division of polynomials: For every pair of polynomials a(x), b(x) ≠ 0 in Fq[x] there
exists a unique pair of polynomials q(x), r(x) in Fq[x] such that
a(x) = q(x)b(x) + r(x), deg(r(x)) < deg(b(x)).

Example: Divide x³ + x + 1 by x² + x + 1 in F2[x].

Definition: Let f(x) be a fixed polynomial in Fq[x]. Two polynomials g(x), h(x) are said
to be congruent modulo f(x), notation
g(x) ≡ h(x) (mod f(x)),
if g(x) − h(x) is divisible by f(x).
IV054 RING of POLYNOMIALS
• The set of polynomials in Fq[x] of degree less than deg(f(x)), with addition and multiplication
modulo f(x), forms a ring denoted Fq[x]/f(x).

• Example: Calculate (x + 1)² in F2[x]/(x² + x + 1). It holds
(x + 1)² = x² + 2x + 1 ≡ x² + 1 ≡ x (mod x² + x + 1).

• How many elements does Fq[x]/f(x) have?
• Result: |Fq[x]/f(x)| = q^deg(f(x)).

• Example: Addition and multiplication in F2[x]/(x² + x + 1):

 +  | 0    1    x    1+x        ·   | 0    1    x    1+x
 0  | 0    1    x    1+x        0   | 0    0    0    0
 1  | 1    0    1+x  x          1   | 0    1    x    1+x
 x  | x    1+x  0    1          x   | 0    x    1+x  1
1+x | 1+x  x    1    0          1+x | 0    1+x  1    x

Definition: A polynomial f(x) in Fq[x] is said to be reducible if f(x) = a(x)b(x), where a(x), b(x) ∈ Fq[x] and
deg(a(x)) < deg(f(x)), deg(b(x)) < deg(f(x)).

If f(x) is not reducible, it is irreducible in Fq[x].

Theorem: The ring Fq[x]/f(x) is a field if f(x) is irreducible in Fq[x].
IV054
FIELD Rn, Rn = Fq[x]/(xⁿ − 1)

• Computation modulo xⁿ − 1:
Since xⁿ ≡ 1 (mod xⁿ − 1), we can compute f(x) mod xⁿ − 1 as follows:
in f(x), replace xⁿ by 1, xⁿ⁺¹ by x, xⁿ⁺² by x², xⁿ⁺³ by x³, …

• Identification of words with polynomials:
a0 a1 … an−1  ↔  a0 + a1x + a2x² + … + an−1xⁿ⁻¹

• Multiplication by x in Rn corresponds to a single cyclic shift:
x(a0 + a1x + … + an−1xⁿ⁻¹) = an−1 + a0x + a1x² + … + an−2xⁿ⁻¹
IV054 Algebraic characterization of cyclic codes
• Theorem: A code C is cyclic if C satisfies two conditions:
(i) a(x), b(x) ∈ C ⟹ a(x) + b(x) ∈ C
(ii) a(x) ∈ C, r(x) ∈ Rn ⟹ r(x)a(x) ∈ C

• Proof
(1) Let C be a cyclic code. C is linear ⟹ (i) holds.
(ii) Let a(x) ∈ C, r(x) = r0 + r1x + … + rn−1xⁿ⁻¹. Then
r(x)a(x) = r0a(x) + r1xa(x) + … + rn−1xⁿ⁻¹a(x)
is in C by (i), because the summands are cyclic shifts of a(x).

(2) Let (i) and (ii) hold.
Taking r(x) to be a scalar, the conditions imply linearity of C.
Taking r(x) = x, the conditions imply cyclicity of C.
IV054 CONSTRUCTION of CYCLIC CODES
• Notation: If f(x) ∈ Rn, then
⟨f(x)⟩ = {r(x)f(x) | r(x) ∈ Rn}
(multiplication is modulo xⁿ − 1).

• Theorem: For any f(x) ∈ Rn, the set ⟨f(x)⟩ is a cyclic code (generated by f).

• Proof: We check conditions (i) and (ii) of the previous theorem.
(i) If a(x)f(x) ∈ ⟨f(x)⟩ and b(x)f(x) ∈ ⟨f(x)⟩, then
a(x)f(x) + b(x)f(x) = (a(x) + b(x))f(x) ∈ ⟨f(x)⟩
(ii) If a(x)f(x) ∈ ⟨f(x)⟩, r(x) ∈ Rn, then
r(x)(a(x)f(x)) = (r(x)a(x))f(x) ∈ ⟨f(x)⟩.

Example: C = ⟨1 + x²⟩, n = 3, q = 2.
We have to compute r(x)(1 + x²) for all r(x) ∈ R3.
R3 = {0, 1, x, 1 + x, x², 1 + x², x + x², 1 + x + x²}.

Result: C = {0, 1 + x, 1 + x², x + x²}
C = {000, 011, 101, 110}
IV054
Characterization theorem for cyclic codes
• We show that all cyclic codes C have the form C = ⟨f(x)⟩ for some f(x) ∈ Rn.

• Theorem: Let C be a non-zero cyclic code in Rn. Then
• there exists a unique monic polynomial g(x) of the smallest degree such that
• C = ⟨g(x)⟩
• g(x) is a factor of xⁿ − 1.

Proof
(i) Suppose g(x) and h(x) are two monic polynomials in C of the smallest degree.
Then the polynomial g(x) − h(x) ∈ C has a smaller degree, and a multiplication
by a scalar makes out of it a monic polynomial. If g(x) ≠ h(x) we get a contradiction.
(ii) Suppose a(x) ∈ C. Then
a(x) = q(x)g(x) + r(x)  (deg r(x) < deg g(x))
and
r(x) = a(x) − q(x)g(x) ∈ C.
By minimality
r(x) = 0
and therefore a(x) ∈ ⟨g(x)⟩.
IV054
Characterization theorem for cyclic codes
(iii) Clearly,
xⁿ − 1 = q(x)g(x) + r(x) with deg r(x) < deg g(x),
and therefore r(x) ≡ −q(x)g(x) (mod xⁿ − 1) and
r(x) ∈ C ⟹ r(x) = 0 ⟹ g(x) is a factor of xⁿ − 1.

GENERATOR POLYNOMIALS
Definition: If for a cyclic code C it holds
C = ⟨g(x)⟩,
then g is called the generator polynomial for the code C.
IV054 HOW TO DESIGN CYCLIC CODES?
• The last claim of the previous theorem gives a recipe to get all cyclic codes of a given length n:
indeed, all we need to do is to find all factors of xⁿ − 1.

• Problem: Find all binary cyclic codes of length 3.
• Solution: Since
x³ − 1 = (x + 1)(x² + x + 1),
and both factors are irreducible in GF(2),
we have the following generator polynomials and codes.

Generator polynomial   Code in R3                  Code in V(3,2)
1                      R3                          V(3,2)
x + 1                  {0, 1 + x, x + x², 1 + x²}  {000, 110, 011, 101}
x² + x + 1             {0, 1 + x + x²}             {000, 111}
x³ − 1 (= 0)           {0}                         {000}
IV054 Design of generator matrices for cyclic codes
• Theorem: Suppose C is a cyclic code of codewords of length n with the generator polynomial
g(x) = g0 + g1x + … + grx^r.
Then dim(C) = n − r, and a generator matrix G1 for C is

     [ g0  g1  g2  …   gr  0   0   0   …  0  ]
     [ 0   g0  g1  g2  …   gr  0   0   …  0  ]
G1 = [ 0   0   g0  g1  g2  …   gr  0   …  0  ]
     [ ⋮                                  ⋮  ]
     [ 0   0   …   0   0   …   0   g0  …  gr ]

Proof
(i) All rows of G1 are linearly independent.
(ii) The n − r rows of G1 represent the codewords
g(x), xg(x), x²g(x), …, x^(n−r−1)g(x).    (*)
(iii) It remains to show that every codeword in C can be expressed as a linear combination of
vectors from (*).
Indeed, if a(x) ∈ C, then
a(x) = q(x)g(x).
Since deg a(x) < n we have deg q(x) < n − r.
Hence
q(x)g(x) = (q0 + q1x + … + qn−r−1x^(n−r−1))g(x)
= q0g(x) + q1xg(x) + … + qn−r−1x^(n−r−1)g(x).
IV054 EXAMPLE
• The task is to determine all ternary codes of length 4 and generators for them.
• Factorization of x⁴ − 1 over GF(3) has the form
x⁴ − 1 = (x − 1)(x³ + x² + x + 1) = (x − 1)(x + 1)(x² + 1)
• Therefore there are 2³ = 8 divisors of x⁴ − 1 and each generates a cyclic code.

Generator polynomial               Generator matrix
1                                  I4
x − 1                              [ −1  1  0  0 ]
                                   [  0 −1  1  0 ]
                                   [  0  0 −1  1 ]
x + 1                              [ 1 1 0 0 ]
                                   [ 0 1 1 0 ]
                                   [ 0 0 1 1 ]
x² + 1                             [ 1 0 1 0 ]
                                   [ 0 1 0 1 ]
(x − 1)(x + 1) = x² − 1            [ −1  0  1  0 ]
                                   [  0 −1  0  1 ]
(x − 1)(x² + 1) = x³ − x² + x − 1  [ −1  1 −1  1 ]
(x + 1)(x² + 1)                    [ 1 1 1 1 ]
x⁴ − 1 = 0                         [ 0 0 0 0 ]
IV054
Check polynomials and parity check matrices for cyclic codes
• Let C be a cyclic [n,k]-code with the generator polynomial g(x) (of degree n − k). By the last theorem
g(x) is a factor of xⁿ − 1. Hence
xⁿ − 1 = g(x)h(x)
for some h(x) of degree k (where h(x) is called the check polynomial of C).

• Theorem: Let C be a cyclic code in Rn with a generator polynomial g(x) and a check polynomial h(x).
Then c(x) ∈ Rn is a codeword of C if c(x)h(x) ≡ 0 (this and the next congruences are modulo xⁿ − 1).

Proof: Note that g(x)h(x) = xⁿ − 1 ≡ 0.

(i) c(x) ∈ C ⟹ c(x) = a(x)g(x) for some a(x) ∈ Rn
⟹ c(x)h(x) = a(x)g(x)h(x) ≡ 0.
(ii) Suppose c(x)h(x) ≡ 0 and write
c(x) = q(x)g(x) + r(x), deg r(x) < n − k = deg g(x).
Then c(x)h(x) ≡ 0 ⟹ r(x)h(x) ≡ 0 (mod xⁿ − 1).
Since deg(r(x)h(x)) < n − k + k = n, we have r(x)h(x) = 0 in F[x], and therefore
r(x) = 0 ⟹ c(x) = q(x)g(x) ∈ C.
IV054
POLYNOMIAL REPRESENTATION of DUAL CODES
• Since dim(⟨h(x)⟩) = n − k = dim(C⊥), we might easily be fooled into thinking that the check
polynomial h(x) of the code C generates the dual code C⊥.
• Reality is "slightly different":

• Theorem: Suppose C is a cyclic [n,k]-code with the check polynomial
h(x) = h0 + h1x + … + hkx^k.
Then
(i) a parity-check matrix for C is

    [ hk  hk−1  …  h0  0   …  0  ]
H = [ 0   hk    …  h1  h0  …  0  ]
    [ ⋮                       ⋮  ]
    [ 0   0     …  0   hk  …  h0 ]

(ii) C⊥ is the cyclic code generated by the polynomial
h̄(x) = hk + hk−1x + … + h0x^k,
i.e. the reciprocal polynomial of h(x).
IV054 POLYNOMIAL REPRESENTATION of DUAL CODES

• Proof: A polynomial c(x) = c0 + c1x + … + cn−1xⁿ⁻¹ represents a codeword of C if c(x)h(x) = 0.
For c(x)h(x) to be 0, the coefficients at x^k, …, xⁿ⁻¹ must be zero, i.e.

c0hk + c1hk−1 + … + ckh0 = 0
c1hk + c2hk−1 + … + ck+1h0 = 0
⋮
cn−k−1hk + cn−khk−1 + … + cn−1h0 = 0

• Therefore, any codeword c0 c1 … cn−1 ∈ C is orthogonal to the word hk hk−1 … h0 0 0 … 0 and to
its cyclic shifts.
• The rows of the matrix H are therefore in C⊥. Moreover, since hk = 1, these row vectors are
linearly independent. Their number is n − k = dim(C⊥). Hence H is a generator matrix for
C⊥, i.e. a parity-check matrix for C.
• In order to show that C⊥ is a cyclic code generated by the polynomial
h̄(x) = hk + hk−1x + … + h0x^k,
it is sufficient to show that h̄(x) is a factor of xⁿ − 1.
• Observe that h̄(x) = x^k h(x⁻¹), and since
h(x⁻¹)g(x⁻¹) = (x⁻¹)ⁿ − 1,
we have that x^k h(x⁻¹) · x^(n−k) g(x⁻¹) = xⁿ(x⁻ⁿ − 1) = 1 − xⁿ,
and therefore h̄(x) is indeed a factor of xⁿ − 1.
IV054
ENCODING with CYCLIC CODES I
• Encoding using a cyclic code can be done by a multiplication of two polynomials - a
message polynomial and the generator polynomial for the cyclic code.

• Let C be an (n,k)-code over a field F with the generator polynomial
g(x) = g0 + g1x + … + grx^r of degree r = n − k.

• If a message vector m is represented by a polynomial m(x) of degree < k, and m is encoded
by
m → c = mG1,
then the following relation between m(x) and c(x) holds:
c(x) = m(x)g(x).

• Such an encoding can be realized by the shift register shown in the figure below, where the
input is the k-bit message to be encoded followed by n − k 0's, and the output will be the
encoded message.

• Shift-register encodings of cyclic codes: small circles represent multiplication by the
corresponding constant, ⊕ nodes represent modular addition, squares are delay elements.
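The product c(x) = m(x)g(x) over GF(2) is just a convolution of the coefficient vectors with XOR as addition; a sketch (the particular m(x) below is an arbitrary example, not from the notes):

    def gf2_poly_mul(a, b):
        # a, b: coefficient lists over GF(2), lowest degree first
        c = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    c[i + j] ^= bj              # XOR = addition in GF(2)
        return c

    g = [1, 1, 0, 1]                            # g(x) = 1 + x + x^3
    m = [1, 0, 1, 1]                            # m(x) = 1 + x^2 + x^3
    print(gf2_poly_mul(m, g))                   # coefficients of c(x) = m(x)g(x)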
IV054 ENCODING of CYCLIC CODES II

• Another method for encoding of cyclic codes is based on the following (so-called
systematic) representation of the generator and parity-check matrices for cyclic codes.

• Theorem: Let C be an (n,k)-code with generator polynomial g(x) and r = n − k. For i = 0, 1,
…, k − 1, let G2,i be the length-n vector whose polynomial is G2,i(x) = x^(r+i) − (x^(r+i) mod g(x)).
Then the k × n matrix G2 with row vectors G2,i is a generator matrix for C.

• Moreover, if H2,j is the length-n vector corresponding to the polynomial H2,j(x) = x^j mod g(x),
then the r × n matrix H2 with row vectors H2,j is a parity check matrix for C. If the message
vector m is encoded by
m → c = mG2,
then the relation between the corresponding polynomials is
c(x) = x^r m(x) − (x^r m(x) mod g(x)).

• On this basis one can construct the following shift-register encoder for the case of a
systematic representation of the generator for a cyclic code:
• Shift-register encoder for systematic representation of cyclic codes. Switch A is closed for the
first k ticks and open for the last r ticks; switch B is down for the first k ticks and up for the last r ticks.
IV054 Hamming codes as cyclic codes
• Definition (again): Let r be a positive integer and let H be an r × (2^r − 1)
matrix whose columns are the distinct non-zero vectors of V(r,2). Then the
code having H as its parity-check matrix is called the binary Hamming
code, denoted Ham(r,2).
• It can be shown that binary Hamming codes are equivalent to cyclic codes.

Theorem: The binary Hamming code Ham(r,2) is equivalent to a cyclic code.

Definition: If p(x) is an irreducible polynomial of degree r such that x is a primitive element
of the field F[x]/p(x), then p(x) is called a primitive polynomial.

Theorem: If p(x) is a primitive polynomial over GF(2) of degree r, then the cyclic code
⟨p(x)⟩ is the code Ham(r,2).
IV054
Hamming codes as cyclic codes
• Example: The polynomial x³ + x + 1 is irreducible over GF(2), and x is a
primitive element of the field F2[x]/(x³ + x + 1):
F2[x]/(x³ + x + 1) =
{0, 1, x, x², x³ = x + 1, x⁴ = x² + x, x⁵ = x² + x + 1, x⁶ = x² + 1}

• The parity-check matrix for a cyclic version of Ham(3,2):

    [ 1 0 0 1 0 1 1 ]
H = [ 0 1 0 1 1 1 0 ]
    [ 0 0 1 0 1 1 1 ]
IV054
PROOF of THEOREM
• The binary Hamming code Ham(r,2) is equivalent to a cyclic code.
• It is known from algebra that if p(x) is an irreducible polynomial of degree r, then the ring
F2[x]/p(x) is a field of order 2^r.
• In addition, every finite field has a primitive element. Therefore, there exists an element α of
F2[x]/p(x) such that
F2[x]/p(x) = {0, 1, α, α², …, α^(2^r − 2)}.

• Let us identify an element a0 + a1x + … + ar−1x^(r−1) of F2[x]/p(x) with the column vector
(a0, a1, …, ar−1)ᵀ
and consider the binary r × (2^r − 1) matrix
H = [ 1 α α² … α^(2^r − 2) ].

• Let now C be the binary linear code having H as a parity check matrix.
• Since the columns of H are all distinct non-zero vectors of V(r,2), C = Ham(r,2).
• Putting n = 2^r − 1 we get
C = {f0 f1 … fn−1 ∈ V(n,2) | f0 + f1α + … + fn−1α^(n−1) = 0}   (2)
  = {f(x) ∈ Rn | f(α) = 0 in F2[x]/p(x)}   (3)

• If f(x) ∈ C and r(x) ∈ Rn, then r(x)f(x) ∈ C because
r(α)f(α) = r(α) · 0 = 0,
and therefore, by one of the previous theorems, this version of Ham(r,2) is cyclic.
IV054 BCH codes and Reed-Solomon codes
• To the most important cyclic codes for applications belong BCH codes and Reed-Solomon codes.

• Definition: A polynomial p is said to be minimal for a complex number x in Zq if p(x) = 0 and p is
irreducible over Zq.

• Definition: A cyclic code of codewords of length n over Zq, q = p^r, p a prime, is
called a BCH code¹ of distance d if its generator g(x) is the least common multiple of
the minimal polynomials for
ω^l, ω^(l+1), …, ω^(l+d−2)
for some l, where ω is the primitive n-th root of unity.
If n = q^m − 1 for some m, then the BCH code is called primitive.

• Definition: A Reed-Solomon code is a primitive BCH code with n = q − 1.
• Properties:
• Reed-Solomon codes are self-dual.

¹ BCH stands for Bose and Ray-Chaudhuri and Hocquenghem, who discovered
these codes.
IV054
CONVOLUTION CODES

• Very often it is important to encode an infinite stream or several streams of data - say,
bits.
• Convolution codes, with simple encoding and decoding, are quite a simple
generalization of linear codes and have encodings as cyclic codes.

• An (n,k) convolution code (CC) is defined by a k × n generator matrix,
the entries of which are polynomials over F2.

• For example,
G1 = [ x² + 1, x² + x + 1 ]
is the generator matrix for a (2,1) convolution code CC1, and

G2 = [ 1 + x   0   x + 1 ]
     [ 0       1   x     ]

is the generator matrix for a (3,2) convolution code CC2.
IV054 ENCODING of FINITE POLYNOMIALS

• An (n,k) convolution code with a k × n generator matrix G can be used to encode a
k-tuple of plain-polynomials (polynomial input information)
I = (I0(x), I1(x), …, Ik−1(x))
to get an n-tuple of crypto-polynomials
C = (C0(x), C1(x), …, Cn−1(x))
as follows:
C = I · G
EXAMPLES

• EXAMPLE 1
(x³ + x + 1)·G1 = (x³ + x + 1)·(x² + 1, x² + x + 1)
= (x⁵ + x² + x + 1, x⁵ + x⁴ + 1)

• EXAMPLE 2
(x² + x, x³ + 1)·G2 = (x² + x, x³ + 1)·[ 1 + x   0   x + 1 ]
                                       [ 0       1   x     ]
= (x³ + x, x³ + 1, x⁴ + x³)
IV054
ENCODING of INFINITE INPUT STREAMS
• The way infinite streams are encoded using convolution codes will be
illustrated on the code CC1.

• An input stream I = (I0, I1, I2, …) is mapped into the output stream
C = (C00, C10, C01, C11, …) defined by
C0(x) = C00 + C01x + … = (x² + 1) I(x)
and
C1(x) = C10 + C11x + … = (x² + x + 1) I(x).

• The first multiplication can be done by the first shift register in the next
figure; the second multiplication can be performed by the second shift register
on the next slide, and it holds that
C0i = Ii + Ii−2,  C1i = Ii + Ii−1 + Ii−2.
• That is, the output streams C0 and C1 are obtained by convolving the input
stream with the polynomials of G1.
Cyclic codes
IV054 ENCODING
The first shift register

input → [1] [x] [x²] → output

will multiply the input stream by x² + 1, and the second shift register

input → [1] [x] [x²] → output

will multiply the input stream by x² + x + 1.
IV054 ENCODING and DECODING

The following shift register will therefore be an encoder for the
code CC1:

            → C00, C01, C02, …
I → [1] [x] [x²]              (output streams)
            → C10, C11, C12, …

For decoding of convolution codes the so-called

Viterbi algorithm

is used.
Cyclic Linear Codes

Rong-Jaye Chen



OUTLINE
• [1] Polynomials and words
• [2] Introduction to cyclic codes
• [3] Generating and parity check matrices for cyclic codes
• [4] Finding cyclic codes
• [5] Dual cyclic codes



Cyclic Linear Codes
• [1] Polynomials and words
– 1. Polynomials of degree at most n over K:
K[x] = {a0 + a1x + a2x² + a3x³ + … + anxⁿ},
a0, …, an ∈ K, deg(f(x)) = n

– 2. Eg 4.1.1
Let f(x) = 1 + x + x³ + x⁴, g(x) = x + x² + x³, h(x) = 1 + x² + x⁴. Then
(a) f(x) + g(x) = 1 + x² + x⁴
(b) f(x) + h(x) = x + x² + x³
(c) f(x)g(x) = (x + x² + x³) + x(x + x² + x³) + x³(x + x² + x³) + x⁴(x + x² + x³)
    = x + x⁷



Cyclic Linear Codes

– 3. [Algorithm 4.1.8] Division algorithm
Let f(x) and h(x) be in K[x] with h(x) ≠ 0. Then there exist
unique polynomials q(x) and r(x) in K[x] such that
f(x) = q(x)h(x) + r(x),
with r(x) = 0 or deg(r(x)) < deg(h(x)).

– 4. Eg 4.1.9
f(x) = x + x² + x⁶ + x⁸, h(x) = 1 + x + x² + x⁴
q(x) = 1 + x + x⁴, r(x) = 1 + x + x² + x³
f(x) = h(x)(1 + x + x⁴) + (1 + x + x² + x³)
deg(r(x)) = 3 < deg(h(x)) = 4
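A sketch of the division algorithm over K = GF(2), using coefficient lists (lowest degree first); it reproduces the q(x) and r(x) of Eg 4.1.9:

    def gf2_poly_divmod(f, h):
        # f, h: coefficient lists over GF(2), lowest degree first; h must be nonzero
        f = f[:]
        dh = max(i for i, c in enumerate(h) if c)        # deg(h)
        q = [0] * max(len(f) - dh, 1)
        for i in range(len(f) - 1, dh - 1, -1):          # cancel leading terms
            if f[i]:
                q[i - dh] = 1
                for j in range(dh + 1):
                    f[i - dh + j] ^= h[j]
        return q, f[:dh]                                 # (quotient, remainder)

    f = [0, 1, 1, 0, 0, 0, 1, 0, 1]      # x + x^2 + x^6 + x^8
    h = [1, 1, 1, 0, 1]                  # 1 + x + x^2 + x^4
    q, r = gf2_poly_divmod(f, h)
    print(q, r)                          # q = 1 + x + x^4, r = 1 + x + x^2 + x^3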



Cyclic Linear Codes
– 5. Code represented by a set of polynomials
• A code C of length n can be represented as a set of polynomials over K of
degree at most n−1:
f(x) = a0 + a1x + a2x² + … + an−1xⁿ⁻¹ over K
↔ c = a0a1a2…an−1 of length n in Kⁿ

– 6. Eg 4.1.12

Codeword c   Polynomial c(x)
0000         0
1010         1 + x²
0101         x + x³
1111         1 + x + x² + x³
Cyclic Linear Codes
– 7. f(x) and p(x) are equivalent modulo h(x) if
f(x) mod h(x) = r(x) = p(x) mod h(x),
i.e. f(x) ≡ p(x) (mod h(x))

– 8. Eg 4.1.15
f(x) = 1 + x⁴ + x⁹ + x¹¹, h(x) = 1 + x⁵, p(x) = 1 + x⁶
f(x) mod h(x) = r(x) = 1 + x = p(x) mod h(x)
⟹ f(x) and p(x) are equivalent mod h(x)!!

– 9. Eg 4.1.16
f(x) = 1 + x² + x⁶ + x⁹ + x¹¹, h(x) = 1 + x² + x⁵, p(x) = x² + x⁸
f(x) mod h(x) = x + x⁴, p(x) mod h(x) = 1 + x³
⟹ f(x) and p(x) are NOT equivalent mod h(x)!!



Cyclic Linear Codes

– 10. Lemma 4.1.17
If f(x) ≡ g(x) (mod h(x)), then
f(x) + p(x) ≡ g(x) + p(x) (mod h(x))
and
f(x)p(x) ≡ g(x)p(x) (mod h(x))

– 11. Eg 4.1.18
f(x) = 1 + x + x⁷, g(x) = 1 + x + x², h(x) = 1 + x⁵, p(x) = 1 + x⁶,
so f(x) ≡ g(x) (mod h(x)). Then

f(x) + p(x) and g(x) + p(x):
((1 + x + x⁷) + (1 + x⁶)) mod h(x) = x² = ((1 + x + x²) + (1 + x⁶)) mod h(x)

f(x)p(x) and g(x)p(x):
((1 + x + x⁷)(1 + x⁶)) mod h(x) = 1 + x³ = ((1 + x + x²)(1 + x⁶)) mod h(x)
Cyclic Linear Codes

• [2] Introduction to cyclic codes
– 1. Cyclic shift π(v)
• v: 010110, π(v): 001011

v      10110   111000   0000   1011
π(v)   01011   011100   0000   1101

– 2. Cyclic code
• A code C is a cyclic code (or linear cyclic code) if (1) the cyclic shift of each
codeword is also a codeword and (2) C is a linear code
• C1 = {000, 110, 101, 011} is a cyclic code
• C2 = {000, 100, 011, 111} is NOT a cyclic code:
– v = 100, π(v) = 010 is not in C2



Cyclic Linear Codes
– 3. The cyclic shift π is a linear transformation
• Lemma 4.2.3: π(v + w) = π(v) + π(w),
and π(av) = aπ(v), a ∈ K = {0,1}.
Thus to show a linear code C is cyclic,
it is enough to show that π(v) ∈ C
for each word v in a basis for C.

• If S = {v, π(v), π²(v), …, π^(n−1)(v)} and C = ⟨S⟩,
then v is a generator of the linear cyclic code C.


Cyclic Linear Codes

– 4. Cyclic code in terms of polynomials

v → π(v) corresponds to v(x) → xv(x) mod (1 + x^n)

Eg 4.2.11 v = 1101000, n = 7, v(x) = 1 + x + x^3

word        polynomial (mod 1 + x^7)
----------- -----------------------------
0110100     xv(x)   = x + x^2 + x^4
0011010     x^2v(x) = x^2 + x^3 + x^5
0001101     x^3v(x) = x^3 + x^4 + x^6
1000110     x^4v(x) = x^4 + x^5 + x^7 = 1 + x^4 + x^5 mod(1 + x^7)
0100011     x^5v(x) = x^5 + x^6 + x^8 = x + x^5 + x^6 mod(1 + x^7)
1010001     x^6v(x) = x^6 + x^7 + x^9 = 1 + x^2 + x^6 mod(1 + x^7)
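The table can be regenerated mechanically. Here is a sketch of ours (word_to_poly, poly_to_word and shift_poly are names we chose) that multiplies by x^i and reduces mod 1 + x^n using x^n = 1:

```python
def word_to_poly(word):
    return sum(1 << i for i, b in enumerate(word) if b == "1")

def poly_to_word(f, n):
    return "".join("1" if (f >> i) & 1 else "0" for i in range(n))

def shift_poly(f, i, n):
    f <<= i                                   # multiply by x^i
    while f.bit_length() > n:                 # reduce: x^top -> x^(top-n)
        top = f.bit_length() - 1
        f ^= (1 << top) | (1 << (top - n))
    return f

n, v = 7, word_to_poly("1101000")             # v(x) = 1 + x + x^3
for i in range(1, 7):
    print(poly_to_word(shift_poly(v, i, n), n))
# 0110100, 0011010, 0001101, 1000110, 0100011, 1010001
```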

EE576 Dr. Kousa Linear Block Codes 143


Cyclic Linear Codes
– 5. Lemma 4.2.12
Let C be a cyclic code and let v be in C. Then for any polynomial a(x),
c(x) = a(x)v(x) mod (1 + x^n) is a codeword in C

– 6. Theorem 4.2.13
C: a cyclic code of length n,
g(x): the generator polynomial, which is the unique nonzero
polynomial of minimum degree in C,
deg(g(x)) = n - k. Then:
• 1. C has dimension k
• 2. g(x), xg(x), x^2g(x), …, x^{k-1}g(x) are a basis for C
• 3. If c(x) is in C, then c(x) = a(x)g(x) for some polynomial a(x)
with deg(a(x)) < k

EE576 Dr. Kousa Linear Block Codes 144


Cyclic Linear Codes
– 7. Eg 4.2.16
the smallest linear cyclic code C of length 6 containing g(x) = 1 + x^3 <-> 100100 is
{000000, 100100, 010010, 001001, 110110,
101101, 011011, 111111}

– 8. Theorem 4.2.17
g(x) is the generator polynomial for a linear cyclic code of length n if and only if g(x) divides 1 + x^n
(so 1 + x^n = g(x)h(x)).

– 9. Corollary 4.2.18
The generator polynomial g(x) for the smallest cyclic code of length n containing
the word v (polynomial v(x)) is g(x) = gcd(v(x), 1 + x^n)

– 10. Eg 4.2.19
n = 8, v = 11011000, so v(x) = 1 + x + x^3 + x^4
g(x) = gcd(1 + x + x^3 + x^4, 1 + x^8) = 1 + x^2
Thus g(x) = 1 + x^2 generates the smallest cyclic linear code containing
v(x), and that code has dimension k = n - deg(g(x)) = 6.
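Corollary 4.2.18 is easy to verify with the Euclidean algorithm in GF(2)[x]. A sketch of ours, reusing the poly_divmod helper from earlier:

```python
def poly_divmod(f, h):
    q, dh = 0, h.bit_length() - 1
    while f and f.bit_length() - 1 >= dh:
        shift = (f.bit_length() - 1) - dh
        q ^= 1 << shift
        f ^= h << shift
    return q, f

def poly_gcd(a, b):
    # Euclidean algorithm in GF(2)[x].
    while b:
        a, b = b, poly_divmod(a, b)[1]
    return a

n = 8
v = 0b11011          # v(x) = 1 + x + x^3 + x^4  (word 11011000)
m = (1 << n) | 1     # 1 + x^8
print(bin(poly_gcd(v, m)))   # 0b101 -> g(x) = 1 + x^2
```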
EE576 Dr. Kousa Linear Block Codes 145
Cyclic Linear Codes

• [3]. Generating and parity check matrices for cyclic codes

– 1. An effective way to find a generator matrix
• The simplest generator matrix (Theorem 4.2.13):

    [ g(x)        ]
G = [ xg(x)       ]    n: length of the code, k = n - deg(g(x))
    [ :           ]
    [ x^{k-1}g(x) ]

EE576 Dr. Kousa Linear Block Codes 146


Cyclic Linear Codes

2. Eg 4.3.2
• C: the linear cyclic code of length n = 7 with generator polynomial
g(x) = 1 + x + x^3, and deg(g(x)) = 3 => k = 4

g(x)     = 1 + x + x^3      <-> 1101000          [1101000]
xg(x)    = x + x^2 + x^4    <-> 0110100   G  =   [0110100]
x^2g(x)  = x^2 + x^3 + x^5  <-> 0011010          [0011010]
x^3g(x)  = x^3 + x^4 + x^6  <-> 0001101          [0001101]
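Since row i of G is just the coefficient word of g(x) shifted i places (and deg(x^{k-1}g(x)) = n-1, so no reduction is needed), G can be built directly. A sketch of ours:

```python
def generator_matrix(g_word, n):
    g = g_word.rstrip("0")            # coefficient word of g(x), g0 first
    k = n - (len(g) - 1)              # k = n - deg(g)
    return ["0" * i + g + "0" * (n - len(g) - i) for i in range(k)]

for row in generator_matrix("1101000", 7):   # g(x) = 1 + x + x^3, n = 7
    print(row)
# 1101000 / 0110100 / 0011010 / 0001101
```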

EE576 Dr. Kousa Linear Block Codes 147


Cyclic Linear Codes

3. Efficient encoding for cyclic codes

Let C be a cyclic code of length n and dimension k
(so the generator polynomial g(x) has degree n - k).
Message polynomial a(x) = a0 + a1x + ... + a_{k-1}x^{k-1}
(representing the source message (a0, a1, ..., a_{k-1}))

Encoding algorithm: c(x) = a(x)g(x)

This is more time efficient than encoding with
a general linear code (c = aG)
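A sketch (ours) checking that the two encoders agree for the code of Eg 4.3.2; helper names are illustrative:

```python
def poly_mul(f, g):
    result, i = 0, 0
    while f >> i:
        if (f >> i) & 1:
            result ^= g << i
        i += 1
    return result

def encode_matrix(a_bits, G):          # c = aG over GF(2)
    c = [0] * len(G[0])
    for ai, row in zip(a_bits, G):
        if ai:
            c = [x ^ int(r) for x, r in zip(c, row)]
    return "".join(map(str, c))

G = ["1101000", "0110100", "0011010", "0001101"]
a = [1, 0, 1, 1]                       # a(x) = 1 + x^2 + x^3
c_poly = poly_mul(0b1101, 0b1011)      # c(x) = a(x)g(x), g(x) = 1 + x + x^3
word = "".join("1" if (c_poly >> i) & 1 else "0" for i in range(7))
print(word, encode_matrix(a, G))       # both print 1111111
```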

EE576 Dr. Kousa Linear Block Codes 148


Cyclic Linear Codes
– 4. Parity check matrix

• H: wH = 0 if and only if w is a codeword

• Syndrome polynomial s(x)

– c(x): a codeword, e(x): error polynomial, and w(x) = c(x) + e(x)

– s(x) = w(x) mod g(x) = e(x) mod g(x), because c(x) = a(x)g(x)

– H: the i-th row r_i is the word of length n-k given by

=> r_i(x) = x^i mod g(x)

– wH = (c + e)H => c(x) mod g(x) + e(x) mod g(x) = s(x)

EE576 Dr. Kousa Linear Block Codes 149


Cyclic Linear Codes
– 5. Eg 4.3.7

• n = 7, g(x) = 1 + x + x^3, n - k = 3

r0(x) = 1 mod g(x)   = 1             <-> 100          [100]
r1(x) = x mod g(x)   = x             <-> 010          [010]
r2(x) = x^2 mod g(x) = x^2           <-> 001          [001]
r3(x) = x^3 mod g(x) = 1 + x         <-> 110    H  =  [110]
r4(x) = x^4 mod g(x) = x + x^2       <-> 011          [011]
r5(x) = x^5 mod g(x) = 1 + x + x^2   <-> 111          [111]
r6(x) = x^6 mod g(x) = 1 + x^2       <-> 101          [101]
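The rows of H come straight out of the division routine sketched earlier (our illustration):

```python
def poly_divmod(f, h):
    q, dh = 0, h.bit_length() - 1
    while f and f.bit_length() - 1 >= dh:
        shift = (f.bit_length() - 1) - dh
        q ^= 1 << shift
        f ^= h << shift
    return q, f

def parity_check_rows(g, n):
    r = g.bit_length() - 1                  # n - k = deg(g)
    rows = []
    for i in range(n):
        rem = poly_divmod(1 << i, g)[1]     # x^i mod g(x)
        rows.append("".join("1" if (rem >> j) & 1 else "0" for j in range(r)))
    return rows

print(parity_check_rows(0b1011, 7))
# ['100', '010', '001', '110', '011', '111', '101']
```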

EE576 Dr. Kousa Linear Block Codes 150


Cyclic Linear Codes

• [4]. Finding cyclic codes

– 1. To construct a linear cyclic code of length n
• Find a factor g(x) of 1 + x^n with deg(g(x)) = n - k
• Irreducible polynomials
– f(x) in K[x], deg(f(x)) >= 1
– There are no a(x), b(x) such that f(x) = a(x)b(x) with
deg(a(x)) >= 1, deg(b(x)) >= 1
• For n <= 31, the factorization of 1 + x^n is tabulated
(see Appendix B)
• Improper cyclic codes: K^n and {0}

EE576 Dr. Kousa Linear Block Codes 151


Cyclic Linear Codes
– 2. Theorem 4.4.3

If n = 2^r s, then 1 + x^n = (1 + x^s)^{2^r}

– 3. Coro 4.4.4
Let n = 2^r s, where s is odd, and let 1 + x^s be
the product of z irreducible polynomials.
Then there are (2^r + 1)^z - 2 proper linear
cyclic codes of length n.

EE576 Dr. Kousa Linear Block Codes 152


Cyclic Linear Codes

– 4. Idempotent polynomials I(x)

• I(x) = I(x)^2 mod (1 + x^n) for odd n

• Find a "basic" set of I(x) from the cyclotomic cosets

C_i = { s = 2^j i (mod n) | j = 0, 1, …, r },  where 2^r ≡ 1 (mod n)

I(x) = Σ_i a_i c_i(x), a_i ∈ {0,1},  where c_i(x) = Σ_{j ∈ C_i} x^j

EE576 Dr. Kousa Linear Block Codes 153


Cyclic Linear Codes

– 5. Eg 4.4.12
For n = 7,
C0 = {0}, so c0(x) = x^0 = 1
C1 = {1, 2, 4} = C2 = C4, so c1(x) = x + x^2 + x^4
C3 = {3, 5, 6} = C5 = C6, so c3(x) = x^3 + x^5 + x^6

=> I(x) = a0c0(x) + a1c1(x) + a3c3(x), a_i ∈ {0,1},
I(x) ≠ 0
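The cyclotomic cosets are easy to enumerate. A sketch of ours (function names are illustrative):

```python
def cyclotomic_cosets(n):
    seen, cosets = set(), []
    for i in range(n):
        if i in seen:
            continue
        coset, s = [], i
        while s not in coset:        # follow i -> 2i -> 4i ... (mod n)
            coset.append(s)
            s = (2 * s) % n
        seen.update(coset)
        cosets.append(sorted(coset))
    return cosets

def coset_poly(coset):
    return " + ".join("1" if j == 0 else f"x^{j}" for j in coset)

for n in (7, 9):
    print(n, [(c, coset_poly(c)) for c in cyclotomic_cosets(n)])
# n=7: C0={0}, C1={1,2,4}, C3={3,5,6}
# n=9: C0={0}, C1={1,2,4,5,7,8}, C3={3,6}
```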
– 6. Theorem 4.4.13
Every cyclic code contains a unique idempotent
polynomial which generates the code.

EE576 Dr. Kousa Linear Block Codes 154


Cyclic Linear Codes

– 7. Eg. 4.4.14 find all cyclic codes of length 9

C0 = {0}, C1 = {1, 2, 4, 8, 7, 5}, C3 = {3, 6}

=> c0(x) = 1, c1(x) = x + x^2 + x^4 + x^5 + x^7 + x^8, c3(x) = x^3 + x^6
==> I(x) = a0c0(x) + a1c1(x) + a3c3(x)

Idempotent polynomial I(x)      Generator polynomial g(x) = gcd(I(x), 1+x^9)
1                               1
x+x^2+x^4+x^5+x^7+x^8           1+x+x^3+x^4+x^6+x^7
x^3+x^6                         1+x^3
1+x+x^2+x^4+x^5+x^7+x^8         1+x+x^2
:                               :
EE576 Dr. Kousa Linear Block Codes 155
Cyclic Linear Codes
• [5]. Dual cyclic codes
– 1. The dual code of a cyclic code is also cyclic
– 2. Lemma 4.5.1
a <-> a(x), b <-> b(x) and b' <-> b'(x) = x^n b(x^{-1}) mod (1 + x^n);
then
a(x)b(x) mod (1 + x^n) = 0 iff π^k(a) · b' = 0
for k = 0, 1, …, n-1

– 3. Theorem 4.5.2
C: a linear cyclic code of length n and dimension k with generator g(x).
If 1 + x^n = g(x)h(x), then
C⊥ is a linear cyclic code of dimension n-k with generator x^k h(x^{-1})

EE576 Dr. Kousa Linear Block Codes 156


Cyclic Linear Codes
– 4. Eg. 4.5.3
g(x) = 1 + x + x^3, n = 7, k = 7 - 3 = 4
h(x) = 1 + x + x^2 + x^4
The generator for C⊥ is
g⊥(x) = x^4 h(x^{-1}) = x^4(1 + x^{-1} + x^{-2} + x^{-4}) = 1 + x^2 + x^3 + x^4

– 5. Eg. 4.5.4
g(x) = 1 + x + x^2, n = 6, k = 6 - 2 = 4
h(x) = 1 + x + x^3 + x^4
The generator for C⊥ is g⊥(x) = x^4 h(x^{-1}) = 1 + x + x^3 + x^4
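Both steps — computing h(x) = (1 + x^n)/g(x) and reversing its coefficients — can be checked with the helpers sketched earlier (our illustration):

```python
def poly_divmod(f, h):
    q, dh = 0, h.bit_length() - 1
    while f and f.bit_length() - 1 >= dh:
        shift = (f.bit_length() - 1) - dh
        q ^= 1 << shift
        f ^= h << shift
    return q, f

def reciprocal(h, k):
    # x^k * h(x^(-1)): reverse the coefficients of h over degrees 0..k.
    return sum(((h >> i) & 1) << (k - i) for i in range(k + 1))

n, g = 7, 0b1011                        # g(x) = 1 + x + x^3
h, rem = poly_divmod((1 << n) | 1, g)   # h(x) = 1 + x + x^2 + x^4, rem = 0
k = n - (g.bit_length() - 1)
print(bin(h), rem, bin(reciprocal(h, k)))
# 0b10111 0 0b11101  ->  g_perp(x) = 1 + x^2 + x^3 + x^4
```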

EE576 Dr. Kousa Linear Block Codes 157


Modulation, Demodulation and
Coding Course
Period 3 - 2005
Sorour Falahati
Lecture 8

EE576 Dr. Kousa Linear Block Codes 158


Last time we talked about:
• Coherent and non-coherent detections

• Evaluating the average probability of symbol error for different bandpass
modulation schemes

• Comparing different modulation schemes based on their error
performances.

Lecture 8
EE576 Dr. Kousa Linear Block Codes 159
Today, we are going to talk about:

• Channel coding

• Linear block codes


– The error detection and correction capability
– Encoding and decoding
– Hamming codes
– Cyclic codes

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 160
What is channel coding?
• Channel coding:
– Transforming signals to improve communications
performance by increasing the robustness against
channel impairments (noise, interference, fading, ..)
– Waveform coding: Transforming waveforms to
better waveforms
– Structured sequences: Transforming data
sequences into better sequences, having structured
redundancy.
• “Better” in the sense of making the decision
process less subject to errors.

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 161
Error control techniques
• Automatic Repeat reQuest (ARQ)
– Full-duplex connection, error detection codes
– The receiver sends feedback to the transmitter
indicating whether an error is detected in the received
packet (Not-Acknowledgement (NACK)) or not
(Acknowledgement (ACK)).
– The transmitter retransmits the previously sent
packet if it receives NACK.
• Forward Error Correction (FEC)
– Simplex connection, error correction codes
– The receiver tries to correct some errors
• Hybrid ARQ (ARQ+FEC)
– Full-duplex, error detection and correction codes
2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 162
Why use error correction coding?
– Error performance vs. bandwidth
– Power vs. bandwidth
– Data rate vs. bandwidth
– Capacity vs. bandwidth

[Figure: P_B versus E_b/N_0 (dB), coded and uncoded curves, with operating
points A–F illustrating the trade-offs]

Coding gain:
For a given bit-error probability, the coding gain is
the reduction in the Eb/N0 that can be
realized through the use of the code:

G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB]

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 163
Channel models

• Discrete memoryless channels


– Discrete input, discrete output
• Binary Symmetric channels
– Binary input, binary output
• Gaussian channels
– Discrete input, continuous output

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 164
Linear block codes
Some definitions – cont’d
• Binary field:
– The set {0,1}, under modulo-2 binary addition and
multiplication, forms a field.

Addition          Multiplication
0 + 0 = 0         0 · 0 = 0
0 + 1 = 1         0 · 1 = 0
1 + 0 = 1         1 · 0 = 0
1 + 1 = 0         1 · 1 = 1

– The binary field is also called the Galois field, GF(2).

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 165
Some definitions – cont’d
• Fields:
– Let F be a set of objects on which two operations '+' and '·' are
defined.
– F is said to be a field if and only if
1. F forms a commutative group under the + operation. The
additive identity element is labeled "0".
a, b ∈ F => a + b = b + a ∈ F
2. F - {0} forms a commutative group under the · operation. The
multiplicative identity element is labeled "1".
a, b ∈ F => a · b = b · a ∈ F
3. The operations "+" and "·" distribute:
a · (b + c) = (a · b) + (a · c)

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 166
Some definitions – cont’d
• Vector space:
– Let V be a set of vectors and F a field of
elements called scalars. V forms a vector space
over F if:
1. Commutative: u, v ∈ V => u + v = v + u ∈ V
2. a ∈ F, v ∈ V => a · v = u ∈ V
3. Distributive:
(a + b) · v = a · v + b · v and a · (u + v) = a · u + a · v
4. Associative: a, b ∈ F, v ∈ V => (a · b) · v = a · (b · v)
5. ∀v ∈ V, 1 · v = v

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 167
Some definitions – cont’d
– Examples of vector spaces
• The set of binary n-tuples, denoted by Vn

V4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
(1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
• Vector subspace:
– A subset S of the vector space Vn is called a
subspace if:
• The all-zero vector is in S.
• The sum of any two vectors in S is also in S.
• Example:
{(0000), (0101), (1010), (1111)} is a subspace of V4.

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 168
Some definitions – cont’d
• Spanning set:
– A collection of vectors G = {v1, v2, …, vn},
the linear combinations of which include all vectors in
a vector space V, is said to be a spanning set for V or
to span V.
• Example:
{(1000), (0110), (1100), (0011), (1001)} spans V4.
• Bases:
– A spanning set for V that has minimal cardinality is
called a basis for V.
• Cardinality of a set is the number of objects in the set.
• Example:
{(1000), (0100), (0010), (0001)} is a basis for V4.
2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 169
Linear block codes
• Linear block code (n,k)
– A set C ⊂ Vn with cardinality 2^k is called a linear block
code if, and only if, it is a subspace of the vector
space Vn:

Vk → C ⊂ Vn
• Members of C are called code-words.
• The all-zero codeword is a codeword.
• Any linear combination of code-words is a
codeword.

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 170
Linear block codes – cont’d

[Figure: mapping from the message space Vk into the code C, a subspace
of Vn, via the bases of C]

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 171
Linear block codes – cont’d
• The information bit stream is chopped into blocks of k bits.
• Each block is encoded to a larger block of n bits.
• The coded bits are modulated and sent over channel.
• The reverse procedure is done at the receiver.

Data block (k bits) → Channel encoder → Codeword (n bits)

n-k redundant bits

Code rate: Rc = k/n
Lecture 8
2005-02-09
EE576 Dr. Kousa Linear Block Codes 172
Linear block codes – cont’d
• The Hamming weight of a vector U, denoted by
w(U), is the number of non-zero elements in
U.
• The Hamming distance between two vectors
U and V is the number of elements in which
they differ:
d(U, V) = w(U + V)

• The minimum distance of a block code is

d_min = min_{i≠j} d(U_i, U_j) = min_i w(U_i)

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 173
Linear block codes – cont’d
• Error detection capability is given by

e = d_min - 1
• Error correcting capability t of a code, defined as the
maximum number of guaranteed correctable errors per codeword, is

t = ⌊(d_min - 1)/2⌋

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 174
Linear block codes – cont’d
• For memoryless channels, the probability that the
decoder commits an erroneous decoding is

P_M ≤ Σ_{j=t+1}^{n} (n choose j) p^j (1-p)^{n-j}

– p is the transition probability or bit error probability
over the channel.
• The decoded bit error probability is

P_B ≈ (1/n) Σ_{j=t+1}^{n} j (n choose j) p^j (1-p)^{n-j}

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 175
Linear block codes – cont’d
• Discrete, memoryless, symmetric channel model

Tx. bits → Rx. bits: 1 → 1 and 0 → 0 with probability 1-p;
1 → 0 and 0 → 1 with probability p

– Note that for coded systems, the coded bits are
modulated and transmitted over the channel. For example,
for M-PSK modulation on AWGN channels (M > 2):

p ≈ (2 / log2 M) Q( sqrt(2 log2 M · Ec/N0) · sin(π/M) )
  = (2 / log2 M) Q( sqrt(2 log2 M · Eb Rc/N0) · sin(π/M) )

where Ec = Rc·Eb is the energy per coded bit.

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 176
Linear block codes –cont’d
[Figure: mapping from Vk into C ⊂ Vn via the bases of C]

– A matrix G is constructed by taking as its rows the vectors
of the basis, {V1, V2, …, Vk}:

    [V1]   [v11 v12 … v1n]
G = [V2] = [v21 v22 … v2n]
    [ :]   [ :           ]
    [Vk]   [vk1 vk2 … vkn]
2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 177
Linear block codes – cont’d
• Encoding in an (n,k) block code

U = mG

                                  [V1]
(u1, u2, …, un) = (m1, m2, …, mk) [V2]
                                  [ :]
                                  [Vk]

(u1, u2, …, un) = m1·V1 + m2·V2 + … + mk·Vk
– The rows of G are linearly independent.

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 178
Linear block codes – cont’d
• Example: Block code (6,3)

    [V1]   [1 1 0 1 0 0]
G = [V2] = [0 1 1 0 1 0]
    [V3]   [1 0 1 0 0 1]

Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
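The whole table can be generated from G. A sketch of ours:

```python
from itertools import product

G = ["110100", "011010", "101001"]

def encode(m, G):
    # U = mG over GF(2): XOR the rows of G selected by the message bits.
    u = [0] * len(G[0])
    for mi, row in zip(m, G):
        if mi:
            u = [a ^ int(b) for a, b in zip(u, row)]
    return "".join(map(str, u))

for m in product((0, 1), repeat=3):
    print("".join(map(str, m)), encode(m, G))
# 000->000000, 001->101001, 010->011010, 011->110011, 100->110100, ...
```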
2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 179
Linear block codes – cont’d
• Systematic block code (n,k)
– For a systematic code, the first (or last) k elements in
the codeword are information bits.

G = [P | I_k]
I_k = k × k identity matrix
P = k × (n-k) matrix

U = (u1, u2, …, un) = (p1, p2, …, p_{n-k}, m1, m2, …, mk)
                       parity bits         message bits

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 180
Linear block codes – cont’d
• For any linear code we can find a
matrix H_{(n-k)×n} whose rows are
orthogonal to the rows of G:

GH^T = 0

• H is called the parity check matrix and
its rows are linearly independent.
• For systematic linear block codes:

H = [I_{n-k} | P^T]

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 181
Linear block codes – cont’d
[Transmission model:
Data source → Format → Channel encoding (m → U) → Modulation → channel
→ Demodulation/Detection (→ r) → Channel decoding → Format → Data sink]

r = U + e
r = (r1, r2, …, rn)   received codeword or vector
e = (e1, e2, …, en)   error pattern or vector
• Syndrome testing:
– S is the syndrome of r, corresponding to the error
pattern e:

S = rH^T = eH^T

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 182
Linear block codes – cont’d
• Standard array
1. For row i = 2, 3, …, 2^{n-k}, find a vector in Vn of
minimum weight which is not already listed in the
array.
2. Call this pattern e_i and form the i-th row as the
corresponding coset:

codewords   U1            U2               …   U_{2^k}
coset       e2            e2 + U2          …   e2 + U_{2^k}
            :             :                    :
coset       e_{2^{n-k}}   e_{2^{n-k}} + U2 …   e_{2^{n-k}} + U_{2^k}

(the first column holds the coset leaders)

2005-02-09 Lecture 8
EE576 Dr. Kousa Linear Block Codes 183
Linear block codes – cont’d
• Standard array and syndrome table decoding
1. Calculate S = rH^T
2. Find the coset leader ê = e_i corresponding to S.
3. Calculate Û = r + ê and the corresponding m̂.

– Note that Û = r + ê = (U + e) + ê = U + (e + ê)
• If ê = e, the error is corrected.
• If ê ≠ e, an undetectable decoding error occurs.

EE576 Dr. Kousa Linear Block Codes 184


Linear block codes – cont’d
• Example: Standard array for the (6,3) code (first row: codewords)

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110

(the first column holds the coset leaders)
EE576 Dr. Kousa Linear Block Codes 185


Linear block codes – cont’d
Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111

U = (101110) transmitted, r = (001110) is received.
The syndrome of r is computed:
S = rH^T = (001110)H^T = (100)
The error pattern corresponding to this syndrome is
ê = (100000)
The corrected vector is estimated as
Û = r + ê = (001110) + (100000) = (101110)

EE576 Dr. Kousa Linear Block Codes 186


Hamming codes
• Hamming codes
– Hamming codes are a subclass of linear block codes and
belong to the category of perfect codes.
– Hamming codes are expressed as a function of a single
integer m ≥ 2:

Code length:                  n = 2^m - 1
Number of information bits:   k = 2^m - m - 1
Number of parity bits:        n - k = m
Error correction capability:  t = 1
– The columns of the parity-check matrix, H, consist of all
non-zero binary m-tuples.

EE576 Dr. Kousa Linear Block Codes 187


Hamming codes
• Example: Systematic Hamming code (7,4)

    [1 0 0 | 0 1 1 1]
H = [0 1 0 | 1 0 1 1] = [I_{3×3} | P^T]
    [0 0 1 | 1 1 0 1]

    [0 1 1 | 1 0 0 0]
G = [1 0 1 | 0 1 0 0] = [P | I_{4×4}]
    [1 1 0 | 0 0 1 0]
    [1 1 1 | 0 0 0 1]

EE576 Dr. Kousa Linear Block Codes 188


Cyclic block codes

• Cyclic codes are a subclass of linear block codes.


• Encoding and syndrome calculation are easily
performed using feedback shift-registers.
– Hence, relatively long block codes can be
implemented with a reasonable complexity.
• BCH and Reed-Solomon codes are cyclic codes.

EE576 Dr. Kousa Linear Block Codes 189


Cyclic block codes

• A linear (n,k) code is called a cyclic code if all cyclic
shifts of a codeword are also codewords.
– Example:

U = (u0, u1, u2, …, u_{n-1});  "i" cyclic shifts of U give

U^(i) = (u_{n-i}, u_{n-i+1}, …, u_{n-1}, u0, u1, u2, …, u_{n-i-1})

U = (1101)
U^(1) = (1110), U^(2) = (0111), U^(3) = (1011), U^(4) = (1101) = U

EE576 Dr. Kousa Linear Block Codes 190


Cyclic block codes
• The algebraic structure of cyclic codes suggests expressing
codewords in polynomial form:
U(X) = u0 + u1X + u2X^2 + … + u_{n-1}X^{n-1}    (degree n-1)
• Relationship between a codeword and its cyclic shifts:
XU(X) = u0X + u1X^2 + … + u_{n-2}X^{n-1} + u_{n-1}X^n
      = u_{n-1} + u0X + u1X^2 + … + u_{n-2}X^{n-1} + u_{n-1}(X^n + 1)
      = U^(1)(X) + u_{n-1}(X^n + 1)

– Hence: U^(1)(X) = XU(X) modulo (X^n + 1)
By extension:
U^(i)(X) = X^i U(X) modulo (X^n + 1)

EE576 Dr. Kousa Linear Block Codes 191


Cyclic block codes
• Basic properties of cyclic codes:
– Let C be a binary (n,k) linear cyclic code
1. Within the set of code polynomials in C, there
is a unique monic polynomial g(X) with
minimal degree r < n. g(X) is called the
generator polynomial:
g(X) = g0 + g1X + … + grX^r
2. Every code polynomial U(X) in C can be
expressed uniquely as U(X) = m(X)g(X)
3. The generator polynomial g(X) is a factor of
X^n + 1

EE576 Dr. Kousa Linear Block Codes 192


Cyclic block codes
4. The orthogonality of G and H in polynomial
form is expressed as g(X)h(X) = X^n + 1. This
means h(X) is also a factor of X^n + 1.

5. Row i, i = 1, …, k, of the generator matrix is
formed by the coefficients of the "i - 1" cyclic
shift of the generator polynomial:

    [ g(X)        ]   [g0 g1 …  gr              ]
G = [ Xg(X)       ] = [   g0 g1 …  gr           ]
    [ :           ]   [       ⋱                 ]
    [ X^{k-1}g(X) ]   [         g0 g1 …  gr     ]

EE576 Dr. Kousa Linear Block Codes 193


Cyclic block codes
• Systematic encoding algorithm for an (n,k) cyclic
code:

1. Multiply the message polynomial m(X) by X^{n-k}

2. Divide the result of Step 1 by the generator
polynomial g(X). Let p(X) be the remainder.

3. Add p(X) to X^{n-k}m(X) to form the codeword U(X)

EE576 Dr. Kousa Linear Block Codes 194


Cyclic block codes
• Example: For the systematic (7,4) cyclic code with
generator polynomial g(X) = 1 + X + X^3:
1. Find the codeword for the message m = (1011)

n = 7, k = 4, n - k = 3
m = (1011) => m(X) = 1 + X^2 + X^3
X^{n-k}m(X) = X^3 m(X) = X^3(1 + X^2 + X^3) = X^3 + X^5 + X^6
Divide X^{n-k}m(X) by g(X):
X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1
                   quotient q(X)   generator g(X)   remainder p(X)

Form the codeword polynomial:
U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6
U = (1 0 0 1 0 1 1)
     parity bits  message bits
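The three steps translate directly into code. A sketch of ours (bitmask polynomials, bit i = coefficient of X^i), reproducing m = (1011) → U = (1001011):

```python
def poly_divmod(f, h):
    q, dh = 0, h.bit_length() - 1
    while f and f.bit_length() - 1 >= dh:
        shift = (f.bit_length() - 1) - dh
        q ^= 1 << shift
        f ^= h << shift
    return q, f

def systematic_encode(m_word, g, n):
    k = len(m_word)
    m = sum(1 << i for i, b in enumerate(m_word) if b == "1")
    shifted = m << (n - k)             # Step 1: X^(n-k) m(X)
    p = poly_divmod(shifted, g)[1]     # Step 2: remainder p(X)
    u = p ^ shifted                    # Step 3: U(X) = p(X) + X^(n-k) m(X)
    return "".join("1" if (u >> i) & 1 else "0" for i in range(n))

print(systematic_encode("1011", 0b1011, 7))   # 1001011
```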

EE576 Dr. Kousa Linear Block Codes 195


Cyclic block codes
2. Find the generator and parity check matrices, G and H,
respectively.
g(X) = 1 + 1·X + 0·X^2 + 1·X^3 => (g0, g1, g2, g3) = (1101)

    [1 1 0 1 0 0 0]
G = [0 1 1 0 1 0 0]    Not in systematic form.
    [0 0 1 1 0 1 0]    We do the following:
    [0 0 0 1 1 0 1]    row(1) + row(3) -> row(3)
                       row(1) + row(2) + row(4) -> row(4)

    [1 1 0 1 0 0 0]          [1 0 0 1 0 1 1]
G = [0 1 1 0 1 0 0]      H = [0 1 0 1 1 1 0]
    [1 1 1 0 0 1 0]          [0 0 1 0 1 1 1]
    [1 0 1 0 0 0 1]          I_{3×3}   P^T
     P       I_{4×4}

EE576 Dr. Kousa Linear Block Codes 196


Cyclic block codes
• Syndrome decoding for cyclic codes:
– The received codeword in polynomial form is given by

r(X) = U(X) + e(X)    (received codeword = codeword + error pattern)

– The syndrome is the remainder obtained by dividing the
received polynomial by the generator polynomial:

r(X) = q(X)g(X) + S(X),   S(X): syndrome

– With the syndrome and the standard array, the error is estimated.

• In cyclic codes, the size of the standard array is considerably reduced.

EE576 Dr. Kousa Linear Block Codes 197


Example of the block codes

[Figure: P_B versus E_b/N_0 [dB] for 8PSK and QPSK with block coding]
EE576 Dr. Kousa Linear Block Codes 198
ADVANTAGE of GENERATOR MATRIX:

• We need to store only the k rows of G instead of the 2^k
vectors of the code.

• For the example we have looked at, a generator array of
dimensions (3×6) replaces the original code table of
dimensions (8×6).

This is a definite reduction in complexity.

EE576 Dr. Kousa Linear Block Codes 199


Systematic Linear Block Codes
• Systematic (n,k) linear block codes have a mapping such
that part of the generated sequence coincides with the k
message digits.

• The remaining (n-k) digits are parity digits.

• A systematic linear block code has a generator matrix of
the form:

               [p11 p12 … p1,(n-k)   1 0 … 0]
G = [P | Ik] = [p21 p22 … p2,(n-k)   0 1 … 0]
               [ :                    ⋱     ]
               [pk1 pk2 … pk,(n-k)   0 0 … 1]

EE576 Dr. Kousa Linear Block Codes 200


• P is the parity array portion of the generator matrix,
pij = (0 or 1)

• Ik is the (k×k) identity matrix.

• With the systematic generator, encoding complexity is further
reduced since we do not need to store the identity matrix.

Since U = mG:

                                  [p11 p12 … p1,(n-k)   1 0 … 0]
(u1, u2, …, un) = (m1, m2, …, mk) [p21 p22 … p2,(n-k)   0 1 … 0]
                                  [ :                    ⋱     ]
                                  [pk1 pk2 … pk,(n-k)   0 0 … 1]

EE576 Dr. Kousa Linear Block Codes 201


• where
u_i = m1 p1i + m2 p2i + … + mk pki    for i = 1, …, (n-k)
u_i = m_{i-(n-k)}                     for i = (n-k+1), …, n
And the parity bits are
p1 = m1 p11 + m2 p21 + … + mk pk1
p2 = m1 p12 + m2 p22 + … + mk pk2
:
p_{n-k} = m1 p1,(n-k) + m2 p2,(n-k) + … + mk pk,(n-k)

• Given the message k-tuple
m = (m1, …, mk)
and the general code vector n-tuple
U = (u1, u2, …, un)
the systematic code vector is:
U = (p1, p2, …, p_{n-k}, m1, m2, …, mk)

EE576 Dr. Kousa Linear Block Codes 202


Example:
For a (6,3) code the code vectors are described as

                    [1 1 0 | 1 0 0]
U = (m1, m2, m3) ·  [0 1 1 | 0 1 0]
                    [1 0 1 | 0 0 1]
                        P      I3

• U = (m1+m3, m1+m2, m2+m3, m1, m2, m3)
    = (u1, u2, u3, u4, u5, u6)

EE576 Dr. Kousa Linear Block Codes 203


Parity Check Matrix (H)
We define a parity-check matrix since it will enable us to
decode the received vectors.

For a (k×n) generator matrix G
there exists an ((n-k)×n) matrix H
such that the rows of G are orthogonal to the rows of H,
i.e., G H^T = 0.
To satisfy the orthogonality requirement, the H matrix is
written as:  H = [I_{n-k} | P^T]

EE576 Dr. Kousa Linear Block Codes 204


1 0 0 

01  0 
Hence  
  
  
I 

nk

  0
0  1 
H 
T

  p p  p 
 P  
1 ,( n  k )

11 12

pp  p
21 22  2 ,( n  k )

  
 
 p
p  p
k1 k2  k ,( n  k )

The product UHT of each code vector is a zero vector.


UH T  p  p , p  p ,......, p p 0
1 1 2 2 nk nk

Once the parity check matrix H is formed we can use it to


test whether a received vector is a valid member of the
codeword set. U is a valid code vector if and only if
UHT=0.

EE576 Dr. Kousa Linear Block Codes 205


Syndrome Testing
Let r = (r1, r2, …, rn) be a received code vector (one of 2^n n-tuples)
resulting from the transmission of U = (u1, u2, …, un) (one of
the 2^k codewords).
• r = U + e
where e = (e1, e2, …, en) is the error vector or error pattern
introduced by the channel.
• In the space of 2^n n-tuples there are a total of (2^n - 1) potential
nonzero error patterns.
• The SYNDROME of r is defined as:
S = rH^T
The syndrome is the result of a parity check performed on r to
determine whether r is a valid member of the codeword set.
EE576 Dr. Kousa Linear Block Codes 206
• If r contains detectable errors, the syndrome has some non-
zero value.
• The syndrome of r is seen as
S = (U + e)H^T = UH^T + eH^T

• Since UH^T = 0 for all code words:

S = eH^T

• An important property of linear block codes, fundamental to
the decoding process, is that the mapping between correctable
error patterns and syndromes is one-to-one.

EE576 Dr. Kousa Linear Block Codes 207


The parity check matrix must satisfy:

1. No column of H can be all zeros, or else an error in the
corresponding code vector position would not affect the
syndrome and would be undetectable.

2. All columns of H must be unique. If two columns are
identical, errors corresponding to these code
word locations will be indistinguishable.

EE576 Dr. Kousa Linear Block Codes 208


Example:
Suppose that code vector U = [1 0 1 1 1 0] is transmitted and
the vector r = [0 0 1 1 1 0] is received.
Note one bit is in error.
Find the syndrome vector, S, and verify that it is equal to eH^T.

The (6,3) code has the generator matrix G we have seen before:

    [1 1 0 | 1 0 0]
G = [0 1 1 | 0 1 0] = [P | I]
    [1 0 1 | 0 0 1]

P is the parity matrix and I is the identity matrix.

EE576 Dr. Kousa Linear Block Codes 209


 1 0 0 1 0 0
 0   
 0 1 0 
1 0

 0 0 1  
T
H p   0 0 1 
 1,1 p p   
1,2 1,3 1 1 0 
p p p  0 1 1 
 2,1 2,2 2,3 
p  
p p  1 0 1
 3,1 3,2 3,3 
1 0 0 
0 1 0 
 
0 0 1 
S=rH =
T [ 001110 ] 
 1 1 0 
0 1 1 
 
1 0 1 

= [ 1, 1+1, 1+1 ] = [ 1 0 0]
(syndrome of corrupted code vector)
Now we can verify that syndrome of the corrupted code vector is the same as
the syndrome of the error pattern:
S = eHT = [1 0 0 0 0]HT = [ 1 0 0 ]
( =syndrome of error pattern )
EE576 Dr. Kousa Linear Block Codes 210
Error Correction
Since there is a one-to-one correspondence between correctable
error patterns and syndromes, we can correct such error patterns.
Assume the 2^n n-tuples that represent possible received vectors
are arranged in an array called the standard array:
1. The first row contains all the code vectors, starting with the all-
zeros vector.
2. The first column contains all the correctable error patterns.
The standard array for an (n,k) code is:

U1            U2              …   Ui         …   U_{2^k}
e2            U2 + e2         …   Ui + e2    …   U_{2^k} + e2
:             :                   :              :
ej            …                   Ui + ej    …
:             :                   :              :
e_{2^{n-k}}   U2 + e_{2^{n-k}} …             …   U_{2^k} + e_{2^{n-k}}
EE576 Dr. Kousa Linear Block Codes 211
Each row, called a coset, consists of an error pattern in the first
column, also known as the coset leader, followed by the code
vectors perturbed by that error pattern.

• The array contains all 2^n n-tuples in the space Vn.

• Each coset consists of 2^k n-tuples.

• There are 2^n / 2^k = 2^{n-k} cosets.

• If the error pattern caused by the channel is a coset leader, the
received vector will be decoded correctly into the transmitted
code vector Ui. If the error pattern is not a coset leader, the
decoding will produce an error.

EE576 Dr. Kousa Linear Block Codes 212


Syndrome of a Coset
If ej is the coset leader of the jth coset, then

Ui + ej is an n-tuple in this coset.

The syndrome of this coset is:

S = (Ui + ej)H^T = Ui H^T + ej H^T = ej H^T

All members of a coset have the same syndrome, and in fact the
syndrome is used to estimate the error pattern.

EE576 Dr. Kousa Linear Block Codes 213


Error Correction Decoding
The procedure for error correction decoding is as follows:

1. Calculate the syndrome of r using S = rH^T

2. Locate the coset leader (error pattern), ej, whose
syndrome equals rH^T
3. This error pattern is the corruption caused by the channel
4. The corrected received vector is identified as U = r + ej.
We retrieve the valid code vector by subtracting out the
identified error.

Note: In modulo-2 arithmetic, subtraction is identical to
addition.

EE576 Dr. Kousa Linear Block Codes 214


Example:
Locating the error pattern:
For the (6,3) linear block code we have seen before, the standard
array can be arranged as:

000000 110100 011010 101110 101001 011101 110011 000111
000001 110101 011011 101111 101000 011100 110010 000110
000010 110110 011000 101100 101011 011111 110001 000101
000100 110000 011110 101010 101101 011001 110111 000011
001000 111100 010010 100110 100001 010101 111011 001111
010000 100100 001010 111110 111001 001101 100011 010111
100000 010100 111010 001110 001001 111101 010011 100111
010001 100101 001011 111111 111000 001100 100010 010110

EE576 Dr. Kousa Linear Block Codes 215


The valid code vectors are the eight vectors in the first row, and
the correctable error patterns are the eight coset leaders in the
first column.

• Decoding will be correct if and only if the error pattern caused
by the channel is one of the coset leaders.

• We now compute the syndrome corresponding to each of the
correctable error sequences by computing S = ej H^T for each coset
leader, with

        [1 0 0]
        [0 1 0]
H^T  =  [0 0 1]
        [1 1 0]
        [0 1 1]
        [1 0 1]
EE576 Dr. Kousa Linear Block Codes 216


Syndrome look-up table:

Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111
EE576 Dr. Kousa Linear Block Codes 217


Error Correction
We receive the vector r and calculate its syndrome S.
We then use the syndrome look-up table to find the
corresponding error pattern.
This error pattern is an estimate of the error; we denote it as ê.
The decoder then adds ê to r to obtain an estimate of the
transmitted code vector û:
Û = r + ê = (U + e) + ê = U + (e + ê)
If the estimated error pattern is the same as the actual error
pattern, that is, if ê = e, then û = U.
If ê ≠ e, the decoder will estimate a code vector that was not
transmitted and hence we have an undetectable decoding error.

EE576 Dr. Kousa Linear Block Codes 218


Example
Assume code vector U = [1 0 1 1 1 0] is transmitted and the
vector r = [0 0 1 1 1 0] is received.

The syndrome of r is computed as:

S = [0 0 1 1 1 0] H^T = [1 0 0]
From the look-up table, 100 has the corresponding error pattern:
ê = [1 0 0 0 0 0]
The corrected vector is then
Û = r + ê = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0] (corrected)
In this example the actual error pattern is the estimated error
pattern, hence û = U.

EE576 Dr. Kousa Linear Block Codes 219


3F4 Error Control Coding

Dr. I. J. Wassell

EE576 Dr. Kousa Linear Block Codes 220


Introduction

• Error Control Coding (ECC)


– Extra bits are added to the data at the transmitter
(redundancy) to permit error detection or
correction at the receiver
– Done to prevent the output of erroneous bits
despite noise and other imperfections in the
channel
– The positions of the error control coding and
decoding are shown in the transmission model

EE576 Dr. Kousa Linear Block Codes 221


Transmission Model

[Block diagram:
Transmitter: Digital Source → Source Encoder → Error Control Coding →
Line Coding → Modulator (Transmit Filter, etc.) → X(ω)
Channel: Hc(ω), with additive noise N(ω)
Receiver: Y(ω) → Demodulator (Receive Filter, etc.) → Line Decoding →
Error Control Decoding → Source Decoder → Digital Sink]
EE576 Dr. Kousa Linear Block Codes 222
Error Models
• Binary Symmetric Memoryless Channel
– Assumes transmitted symbols are binary
– Errors affect '0's and '1's with equal probability
(i.e., symmetric)
– Errors occur randomly and are independent from
bit to bit (memoryless)

IN → OUT: 0 → 0 and 1 → 1 with probability 1-p;
0 → 1 and 1 → 0 with probability p.
p is the probability of bit error, or the Bit Error Rate (BER), of the channel.
EE576 Dr. Kousa Linear Block Codes 223
Error Models
• Many other types
• Burst errors, i.e., contiguous bursts of bit errors
– output from DFE (error propagation)
– common in radio channels
– Insertion, deletion and transposition errors
• We will consider mainly random errors
Error Control Techniques
• Error detection in a block of data
– Can then request a retransmission, known as automatic repeat request
(ARQ) for sensitive data
– Appropriate for
• Low delay channels
• Channels with a return path
– Not appropriate for delay sensitive data, e.g., real time speech and data
• Forward Error Correction (FEC)
– Coding designed so that errors can be corrected at the receiver
– Appropriate for delay sensitive and one-way transmission (e.g., broadcast
TV) of data
– Two main types, namely block codes and convolutional codes. We will only
look at block codes
EE576 Dr. Kousa Linear Block Codes 224

Block Codes
We will consider only binary data
• Data is grouped into blocks of length k bits (dataword)
• Each dataword is coded into blocks of length n bits (codeword), where
in general n>k
• This is known as an (n,k) block code
• A vector notation is used for the datawords and codewords,
– Dataword d = (d1 d2….dk)
– Codeword c = (c1 c2……..cn)
• The redundancy introduced by the code is quantified by the code rate,
– Code rate = k/n
– i.e., the higher the redundancy, the lower the code rate

Block Code - Example


• Dataword length k = 4
• Codeword length n = 7
• This is a (7,4) block code with code rate = 4/7
• For example, d = (1101), c = (1101001)
EE576 Dr. Kousa Linear Block Codes 225
Error Control Process

[Block diagram:
Source data (e.g., 10110 1000 1…) chopped into datawords (k bits, e.g., 1000)
→ Channel coder → Codeword (n bits) → channel →
Codeword + possible errors (n bits) → Channel decoder →
Dataword (k bits) + Error flags]

• Decoder gives corrected data
• May also give error flags to
– Indicate reliability of decoded data
– Help with schemes employing multiple layers of error correction
EE576 Dr. Kousa Linear Block Codes 226
Parity Codes
• Example of a simple block code – Single Parity Check Code
– In this case, n = k+1, i.e., the codeword is the dataword with
one additional bit
– For 'even' parity the additional bit is

q = Σ_{i=1}^{k} d_i (mod 2)

– For 'odd' parity the additional bit is 1-q

– That is, the additional bit ensures that there are an 'even' or
'odd' number of '1's in the codeword

Parity Codes – Example 1


• Even parity
(i) d=(10110) so,
c=(101101)
(ii) d=(11011) so,
c=(110110)
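A minimal sketch of ours for the even parity code, reproducing both examples:

```python
def encode_even_parity(d):
    # Append q = sum(d_i) mod 2 so the codeword has even weight.
    q = sum(int(b) for b in d) % 2
    return d + str(q)

def check(c):
    # True if parity is consistent (no detectable error).
    return sum(int(b) for b in c) % 2 == 0

print(encode_even_parity("10110"))        # 101101
print(encode_even_parity("11011"))        # 110110
print(check("101101"), check("100101"))   # True False
```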
EE576 Dr. Kousa Linear Block Codes 227
Parity Codes – Example 2
• Coding table for (4,3) even parity code

Dataword   Codeword
0 0 0      0 0 0 0
0 0 1      0 0 1 1
0 1 0      0 1 0 1
0 1 1      0 1 1 0
1 0 0      1 0 0 1
1 0 1      1 0 1 0
1 1 0      1 1 0 0
1 1 1      1 1 1 1

EE576 Dr. Kousa Linear Block Codes 228


Parity Codes
• To decode
– Calculate sum of received bits in block (mod 2)
– If sum is 0 (1) for even (odd) parity then the dataword is the first k bits of
the received codeword
– Otherwise error
• Code can detect single errors
• But cannot correct the error since the error could be in any bit
• For example, if the received codeword is (100000), the transmitted codeword
could have been (000000) or (110000), with the error being in the first or
second place respectively
• Note the error could also lie in other positions, including the parity bit
• Known as a single error detecting code (SED). Only useful if probability
of getting 2 errors is small since parity will become correct again
• Used in serial communications
• Low overhead but not very powerful
• Decoder can be implemented efficiently using a tree of XOR gates

EE576 Dr. Kousa Linear Block Codes 229


Hamming Distance
• Error control capability is determined by the Hamming distance
• The Hamming distance between two codewords is equal to the number
of differences between them, e.g.,
10011011
11010010 have a Hamming distance = 3
• Alternatively, can compute by adding codewords (mod 2)
=01001001 (now count up the ones)
• The Hamming distance of a code is equal to the minimum Hamming
distance between two codewords
• If Hamming distance is:
1 – no error control capability; i.e., a single error in a received
codeword yields another valid codeword
XXXXXXX X is a valid codeword
Note that this representation is diagrammatic only.
In reality each codeword is surrounded by n codewords. That is,
one for every bit that could be changed

EE576 Dr. Kousa Linear Block Codes 230


Hamming Distance
• If Hamming distance is:
2 – can detect single errors (SED); i.e., a single error will yield an
invalid codeword
XOXOXO X is a valid codeword
O in not a valid codeword
See that 2 errors will yield a valid (but incorrect) codeword

• If Hamming distance is:


3 – can correct single errors (SEC) or can detect double errors (DED)
XOOXOOX X is a valid codeword
O in not a valid codeword
See that 3 errors will yield a valid but incorrect codeword

EE576 Dr. Kousa Linear Block Codes 231


Hamming Distance - Example

• Hamming distance 3 code, i.e., SEC/DED


– Or can perform single error correction (SEC)

10011011   X   ← this code is corrected this way
11011011   O
11010011   O
11010010   X   ← this code is corrected this way

X is a valid codeword
O is an invalid codeword

EE576 Dr. Kousa Linear Block Codes 232


Hamming Distance
• The maximum number of detectable errors is

d_min - 1
• The maximum number of correctable
errors is given by

t = ⌊(d_min - 1)/2⌋

where d_min is the minimum Hamming distance
between 2 codewords and ⌊.⌋ means the
largest integer not exceeding the argument.

EE576 Dr. Kousa Linear Block Codes 233


Linear Block Codes
• As seen from the second Parity Code example, it is possible to use a
table to hold all the codewords for a code and to look-up the
appropriate codeword based on the supplied dataword
• Alternatively, it is possible to create codewords by addition of other
codewords. This has the advantage that there is now no longer the
need to hold every possible codeword in the table.
• If there are k data bits, all that is required is to hold k linearly
independent codewords, i.e., a set of k codewords none of which can
be produced by linear combinations of 2 or more codewords in the set.
• The easiest way to find k linearly independent codewords is to choose
those which have ‘1’ in just one of the first k positions and ‘0’ in the
other k-1 of the first k positions.

EE576 Dr. Kousa Linear Block Codes 234


Linear Block Codes
• For example for a (7,4) code, only four codewords are
required, e.g.,

1 0 0 0 1 1 0
0 1 0 0 1 0 1
0 0 1 0 0 1 1
0 0 0 1 1 1 1
• So, to obtain the codeword for dataword 1011, the first, third and fourth
codewords in the list are added together, giving 1011010
• This process will now be described in more detail

EE576 Dr. Kousa Linear Block Codes 235


Linear Block Codes
• An (n,k) block code has code vectors
d=(d1 d2….dk) and
c=(c1 c2……..cn)
• The block coding process can be written as c=dG
where G is the Generator Matrix

    [a11 a12 … a1n]   [a1]
G = [a21 a22 … a2n] = [a2]
    [ :          :]   [ :]
    [ak1 ak2 … akn]   [ak]

• Thus,

c = Σ_{i=1}^{k} d_i a_i
• ai must be linearly independent, i.e.,
Since codewords are given by summations of the ai vectors, then to
avoid 2 datawords having the same codeword the ai vectors must be
linearly independent
EE576 Dr. Kousa Linear Block Codes 236
Linear Block Codes
• The sum (mod 2) of any 2 codewords is also a codeword, i.e.,
since for datawords d1 and d2 we have

d3 = d1 + d2

then

c3 = Σ_{i=1}^{k} d_{3i} a_i = Σ_{i=1}^{k} (d_{1i} + d_{2i}) a_i
   = Σ_{i=1}^{k} d_{1i} a_i + Σ_{i=1}^{k} d_{2i} a_i = c1 + c2

• 0 is always a codeword, i.e.,
since all zeros is a dataword,

c = Σ_{i=1}^{k} 0 · a_i = 0

EE576 Dr. Kousa Linear Block Codes 237


Error Correcting Power of LBC
• The Hamming distance of a linear block code (LBC) is
simply the minimum Hamming weight (number of 1’s or
equivalently the distance from the all 0 codeword) of the
non-zero codewords
• Note d(c1,c2) = w(c1+ c2) as shown previously
• For an LBC, c1+ c2=c3
• So min (d(c1,c2)) = min (w(c1+ c2)) = min (w(c3))
• Therefore to find min Hamming distance just need to
search among the 2k codewords to find the min
Hamming weight – far simpler than doing a pair wise
check for all possible codewords.

EE576 Dr. Kousa Linear Block Codes 238


Linear Block Codes – example 1
• For example, a (4,2) code; suppose

G = [1 0 1 1]    a1 = [1011]
    [0 1 0 1]    a2 = [0101]

• For d = [1 1], then

c = [1 0 1 1] + [0 1 0 1] = [1 1 1 0]
EE576 Dr. Kousa Linear Block Codes 239


Linear Block Codes – example 2

• A (6,5) code with

    [1 0 0 0 0 1]
    [0 1 0 0 0 1]
G = [0 0 1 0 0 1]
    [0 0 0 1 0 1]
    [0 0 0 0 1 1]

• is an even single parity check code

EE576 Dr. Kousa Linear Block Codes 240


Systematic Codes
• For a systematic block code the dataword appears unaltered in
the codeword – usually at the start
• The generator matrix has the structure (R = n-k):

     k columns   R columns
    [1 0 .. 0   p11 p12 .. p1R]
G = [0 1 .. 0   p21 p22 .. p2R] = [I | P]
    [.. .. ..   .. .. .. ..   ]
    [0 0 .. 1   pk1 pk2 .. pkR]

• P is often referred to as parity bits
• I is the k×k identity matrix. It ensures the dataword appears at the
beginning of the codeword
• P is a k×R matrix.

EE576 Dr. Kousa Linear Block Codes 241


Decoding Linear Codes
• One possibility is a ROM look-up table
• In this case received codeword is used as an address
• Example – Even single parity check code;
Address Data
000000 0
000001 1
000010 1
000011 0
……… .
• Data output is the error flag, i.e., 0 – codeword ok,
• If no error, dataword is first k bits of codeword
• For an error correcting code the ROM can also store datawords
• Another possibility is algebraic decoding, i.e., the error flag is computed
from the received codeword (as in the case of simple parity codes)
• How can this method be extended to more complex error detection and
correction codes?

EE576 Dr. Kousa Linear Block Codes 242


Parity Check Matrix
• A linear block code is a linear subspace Ssub of all length n vectors
(Space S)
• Consider the subset Snull of all length n vectors in space S that are
orthogonal to all length n vectors in Ssub
• It can be shown that the dimensionality of Snull is n-k, where n is the
dimensionality of S and k is the dimensionality of Ssub
• It can also be shown that Snull is a valid subspace of S and
consequently Ssub is also the null space of Snull
• Snull can be represented by its basis vectors. In this case the generator
basis vectors (or ‘generator matrix’ H) denote the generator matrix for
Snull - of dimension n-k = R
• This matrix is called the parity check matrix of the code defined by G,
where G is obviously the generator matrix for Ssub- of dimension k
• Note that the number of vectors in the basis defines the dimension of
the subspace
• So the dimension of H is n-k (= R) and all vectors in the null space are
orthogonal to all the vectors of the code
• Since the rows of H, namely the vectors bi are members of the null
space they are orthogonal to any code vector
• So a vector y is a codeword only if yHT=0
• Note that a linear block code can be specified by either G or H
EE576 Dr. Kousa Linear Block Codes 243
Parity Check Matrix
• So H is used to check if a codeword is valid:

    [b11 b12 … b1n]   [b1]
H = [b21 b22 … b2n] = [b2]     R = n-k
    [ :          :]   [ :]
    [bR1 bR2 … bRn]   [bR]
• The rows of H, namely, bi, are chosen to be
orthogonal to rows of G, namely ai
• Consequently the dot product of any valid
codeword with any bi is zero

EE576 Dr. Kousa Linear Block Codes 244


Parity Check Matrix
• This is so since

c = Σ_{i=1}^{k} d_i a_i

and so

b_j · c = b_j · Σ_{i=1}^{k} d_i a_i = Σ_{i=1}^{k} d_i (a_i · b_j) = 0

• This means that a codeword is valid (but not necessarily


correct) only if cHT = 0. To ensure this it is required that the rows
of H are independent and are orthogonal to the rows of G
• That is the bi span the remaining R (= n - k) dimensions of the
codespace

EE576 Dr. Kousa Linear Block Codes 245


Parity Check Matrix

• For example consider a (3,2) code. In this case G has 2 rows, a1 and a2
• Consequently all valid codewords sit in the subspace (in this case a
plane) spanned by a1 and a2
• In this example the H matrix has only one row, namely b1. This vector is
orthogonal to the plane containing the rows of the G matrix, i.e., a1 and
a2
• Any received codeword which is not in the plane containing a1 and a2
(i.e., an invalid codeword) will thus have a component in the direction of
b1 yielding a non- zero dot product between itself and b1

EE576 Dr. Kousa Linear Block Codes 246


Parity Check Matrix
• Similarly, any received codeword which is in the
plane containing a1 and a2 (i.e., a valid codeword) will
not have a component in the direction of b1 yielding a
zero dot product between itself and b1

[Figure: the plane spanned by a1 and a2 containing codewords c1, c2, c3,
with b1 orthogonal to the plane]
EE576 Dr. Kousa Linear Block Codes 247
Error Syndrome
• For error correcting codes we need a method to compute the
required correction
• To do this we use the Error Syndrome, s of a received
codeword, cr
s = cr H^T
• If cr is corrupted by the addition of an error vector, e, then
cr = c + e
and
s = (c + e) H^T = cH^T + eH^T
s = 0 + eH^T
The syndrome depends only on the error.

EE576 Dr. Kousa Linear Block Codes 248


Error Syndrome
• That is, we can add the same error pattern to different
codewords and get the same syndrome.
– There are 2(n - k) syndromes but 2n error patterns
– For example for a (3,2) code there are 2 syndromes and 8
error patterns
– Clearly no error correction possible in this case
– Another example. A (7,4) code has 8 syndromes and 128
error patterns.
– With 8 syndromes we can provide a different value to
indicate single errors in any of the 7 bit positions as well as
the zero value to indicate no errors
• Now need to determine which error pattern caused the
syndrome

EE576 Dr. Kousa Linear Block Codes 249


Error Syndrome
• For systematic linear block codes, H is constructed
as follows,
G = [ I | P] and so H = [-PT | I]
where I is the k*k identity for G and the R*R identity
for H
• Example, (7,4) code, dmin= 3

1 0 0 0 0 1 1
0 0 1 1 1 1 0 0 
1
G   I | P  
1 0 0 1 0
 
H  - P T | I  1 0 1 1 0 1 0
0 0 1 0 1 1 0
  1 1 0 1 0 0 1
0 0 0 1 1 1 1

EE576 Dr. Kousa Linear Block Codes 250


Error Syndrome - Example
• For a correct received codeword cr = [1101001],

        [0 1 1]
        [1 0 1]
        [1 1 0]
H^T  =  [1 1 1]
        [1 0 0]
        [0 1 0]
        [0 0 1]

s = cr H^T = [1 1 0 1 0 0 1] H^T = [0 0 0]

EE576 Dr. Kousa Linear Block Codes 251


Error Syndrome - Example
• For the same codeword, this time with an error in the
first bit position, i.e.,
cr = [1101000]

s = cr H^T = [1 1 0 1 0 0 0] H^T = [0 0 1]    (H^T as above)

• In this case the syndrome 001 indicates an error in bit 1 of the
codeword

EE576 Dr. Kousa Linear Block Codes 252


Comments about H
• The minimum distance of the code is equal to the minimum number of
(non-zero) columns of H which sum to zero
• We can express

cr H^T = [c_{r0}, c_{r1}, …, c_{r,n-1}] [d0; d1; …; d_{n-1}]
       = c_{r0} d0 + c_{r1} d1 + … + c_{r,n-1} d_{n-1}

where d0, d1, …, d_{n-1} are the column vectors of H

• Clearly crHT is a linear combination of the columns of H


• For a codeword with weight w (i.e., w ones), then crHT is a linear
combination of w columns of H.
• Thus we have a one-to-one mapping between weight w codewords and
linear combinations of w columns of H
• Thus the min value of w is that which results in crHT=0, i.e., codeword cr
will have a weight w (w ones) and so dmin = w
EE576 Dr. Kousa Linear Block Codes 253
Comments about H
• For the example code, a codeword with min weight (dmin = 3) is
given by the first row of G, i.e., [1000011]
• Now form linear combination of first and last 2 cols in H, i.e.,
[011]+[010]+[001] = 0
• So need min of 3 columns (= dmin) to get a zero value of cHT in
this example

Standard Array
• From the standard array we can find the most likely transmitted
codeword given a particular received codeword without having
to have a look-up table at the decoder containing all possible
codewords in the standard array
• Not surprisingly it makes use of syndromes

EE576 Dr. Kousa Linear Block Codes 254


Standard Array
• The Standard Array is constructed as follows,

c1 (all zero)  c2       ……  cM       s0
e1             c2+e1    ……  cM+e1    s1     All patterns in a row
e2             c2+e2    ……  cM+e2    s2     have the same syndrome
e3             c2+e3    ……  cM+e3    s3
…              ……       ……  ……       …      Different rows have
eN             c2+eN    ……  cM+eN    sN     distinct syndromes

• The array has 2^k columns (i.e., equal to the number
of valid codewords) and 2^R rows (i.e., the number of
syndromes)
EE576 Dr. Kousa Linear Block Codes 255
Standard Array
• The standard array is formed by initially choosing ei to be,
– All 1 bit error patterns
– All 2 bit error patterns
– ……
• Ensure that each error pattern not already in the array has a
new syndrome. Stop when all syndromes are used
• Imagine that the received codeword (cr) is c2 + e3 (shown in bold in the
standard array)
• The most likely codeword is the one at the head of the column
containing c2 + e3
• The corresponding error pattern is the one at the beginning of the row
containing c2 + e3
• So in theory we could implement a look-up table (in a ROM) which
could map all codewords in the array to the most likely codeword (i.e.,
the one at the head of the column containing the received codeword)
• This could be quite a large table so a more simple way is to use
syndromes

EE576 Dr. Kousa Linear Block Codes 256


Standard Array

• This block diagram shows the proposed
implementation:

cr → [Compute syndrome] → s → [Look-up table] → e
c = cr + e
EE576 Dr. Kousa Linear Block Codes 257


Standard Array
• For the same received codeword c2 + e3, note that the
unique syndrome is s3
• This syndrome identifies e3 as the corresponding error
pattern
• So if we calculate the syndrome as described previously,
i.e., s = crHT
• All we need to do now is to have a relatively small table
which associates s with their respective error patterns. In
the example s3 will yield e3
• Finally we subtract (or equivalently add in modulo 2
arithmetic) e3 from the received codeword (c2 + e3) to yield
the most likely codeword, c2

EE576 Dr. Kousa Linear Block Codes 258


Hamming Codes
• We will consider a special class of SEC codes (i.e.,
Hamming distance = 3) where
– Number of parity bits R = n - k and n = 2^R - 1
– Syndrome has R bits
– 0 value implies zero errors
– 2^R - 1 other syndrome values, i.e., one for each bit
that might need to be corrected
– This is achieved if each column of H is a different
binary word – remember s = eH^T

EE576 Dr. Kousa Linear Block Codes 259


Hamming Codes
• Systematic form of the (7,4) Hamming code:

G = [I | P] = [1 0 0 0 | 0 1 1]    H = [-P^T | I] = [0 1 1 1 | 1 0 0]
              [0 1 0 0 | 1 0 1]                     [1 0 1 1 | 0 1 0]
              [0 0 1 0 | 1 1 0]                     [1 1 0 1 | 0 0 1]
              [0 0 0 1 | 1 1 1]

• The original form is non-systematic:

G = [1 1 1 0 0 0 0]    H = [0 0 0 1 1 1 1]
    [1 0 0 1 1 0 0]        [0 1 1 0 0 1 1]
    [0 1 0 1 0 1 0]        [1 0 1 0 1 0 1]
    [1 1 0 1 0 0 1]

• Compared with the systematic code, the column orders of both
G and H are swapped so that the columns of H are a binary
count
EE576 Dr. Kousa Linear Block Codes 260
Hamming Codes
• The column order is now 7, 6, 1, 5, 2, 3, 4, i.e., col. 1 in the non-
systematic H is col. 7 in the systematic H.
Hamming Codes - Example
• For a non-systematic (7,4) code
d = 1011
c = 1110000
+ 0101010
+ 1101001
= 0110011

e = 0010000
cr= 0100011

s = crHT = eHT = 011


• Note the error syndrome is the binary address of the bit to be
corrected
EE576 Dr. Kousa Linear Block Codes 261
Hamming Codes
• Double errors will always result in wrong bit being
corrected, since
– A double error is the sum of 2 single errors
– The resulting syndrome will be the sum of the
corresponding 2 single error syndromes
– This syndrome will correspond with a third single
bit error
– Consequently the ‘corrected’ codeword will now
contain 3 bit errors, i.e., the original double bit
error plus the incorrectly corrected bit!

EE576 Dr. Kousa Linear Block Codes 262


Bit Error Rates after Decoding
• For a given channel bit error rate (BER), what is the BER after
correction (assuming a memoryless channel, i.e., no burst
errors)?
• To do this we will compute the probability of receiving 0, 1, 2, 3,
…. errors
• And then compute their effect
• Example – A (7,4) Hamming code with a channel BER of 1%, i.e., p = 0.01:
P(0 errors received) = (1 - p)^7 = 0.9321
P(1 error received) = 7p(1 - p)^6 = 0.0659
P(2 errors received) = (7·6/2) p^2 (1 - p)^5 ≈ 0.002
P(3 or more errors) = 1 - P(0) - P(1) - P(2) ≈ 0.000034

EE576 Dr. Kousa Linear Block Codes 263


Bit Error Rates after Decoding

• Single errors are corrected, so
0.9321 + 0.0659 = 0.998 of codewords are correctly
decoded
• Double errors cause 3 bit errors in a 7 bit
codeword, i.e., (3/7)·4 bit errors per 4-bit dataword,
that is 3/7 bit errors per bit.
Therefore the double error contribution is 0.002·3/7
= 0.000856

EE576 Dr. Kousa Linear Block Codes 264


Bit Error Rates after Decoding

• The contribution of triple or more errors will be less


than 0.000034 (since the worst that can happen is
that every databit becomes corrupted)
• So the BER after decoding is approximately
0.000856 + 0.000034 = 0.0009 = 0.09%
• This is an improvement over the channel BER by a
factor of about 11
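The whole calculation fits in a few lines. A sketch of ours, assuming a memoryless channel as above:

```python
from math import comb

p, n = 0.01, 7
P = [comb(n, j) * p**j * (1 - p)**(n - j) for j in range(n + 1)]
p0, p1, p2 = P[0], P[1], P[2]
p3plus = 1 - p0 - p1 - p2
ber = p2 * 3 / 7 + p3plus    # double errors -> ~3 bit errors per 7 bits
print(round(p0, 4), round(p1, 4), round(p2, 4), round(p3plus, 6))
print(round(ber, 4), round(p / ber, 1))   # ~0.0009, improvement factor ~11
```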

EE576 Dr. Kousa Linear Block Codes 265


Perfect Codes
• If a codeword has n bits and we wish to correct up to
t errors, how many parity bits (R) are needed?
• Clearly we need sufficient error syndromes (2R of
them) to identify all error patterns up to t errors
– Need 1 syndrome to represent 0 errors
– Need n syndromes to represent all 1 bit errors
– Need n(n-1)/2 to syndromes to represent all 2 bit
errors
– Need nCe = n!/(n-e)!e! syndromes to represent all e
bit errors

EE576 Dr. Kousa Linear Block Codes 266


Perfect Codes
• So,
2^R ≥ 1 + n                                to correct up to 1 error
2^R ≥ 1 + n + n(n-1)/2                     to correct up to 2 errors
2^R ≥ 1 + n + n(n-1)/2 + n(n-1)(n-2)/6     to correct up to 3 errors
If equality holds, then the code is Perfect.

• The only known perfect codes are the SEC Hamming codes and the TEC
Golay (23,12) code (dmin = 7). Using the previous equation yields

1 + 23 + 23(23-1)/2 + 23(23-1)(23-2)/6 = 2048 = 2^11 = 2^(23-12)
EE576 Dr. Kousa Linear Block Codes 267
Summary

• In this section we have


– Used block codes to add redundancy to messages
to control the effects of transmission errors
– Encoded and decoded messages using Hamming
codes
– Determined overall bit error rates as a function of
the error control strategy

EE576 Dr. Kousa Linear Block Codes 268


Error Correction Codes &
Multi-user Communications
Agenda
• Shannon Theory
• History of Error Correction Code
• Linear Block Codes
• Decoding
• Convolution Codes
• Multiple-Access Technique
• Capacity of Multiple Access
• Random Access Methods
2006/07/07 Wireless Communication Engineering I 270
Shannon Theory:
R < C → Reliable communication. Redundancy (parity bits) in the
transmitted data stream
→ error correction capability

Encoding
  Block Code: code length is fixed
  Convolutional Code: coding rate is fixed

Decoding
  Hard-Decoding: digital information
  Soft-Decoding: analog information
2006/07/07 Wireless Communication Engineering I 271


History of Error Correction Code
• Shannon (1948):
Random Coding, Orthogonal Waveforms
• Golay (1949): Golay Code, Perfect Code
• Hamming (1950):
Hamming Code
(Single Error Correction, Double Error
Detection)
• Gilbert (1952): Gilbert Bound on Coding Rate

2006/07/07 Wireless Communication Engineering I 272


• Muller (1954):
Combinatorial Digital Function and
Error Correction Code
• Elias (1954): Tree Code, Convolutional Code
• Reed and Solomon (1960):
Reed-Solomon Code (Maximum Distance Separable Code)
• Hocquenghem (1959) and Bose and
Chaudhuri (1960):
BCH Code (Multiple Error Correction)
• Peterson (1960):
Binary BCH Decoding, Error Location
Polynomial
2006/07/07 Wireless Communication Engineering I 273
• Wozencraft and Reiffen (1961):
  Sequential decoding for convolutional codes
• Gallager (1962) LDPC
• Fano (1963):
Fano Decoding Algorithm for convolutional code
• Zigangirov (1966):
  Stack decoding algorithm for convolutional codes
• Forney (1966):
Generalized Minimum Distance Decoding
(Error and Erasure Decoding)
• Viterbi (1967):
  Optimal decoding algorithm for convolutional codes
2006/07/07 Wireless Communication Engineering I 274
• Berlekamp (1968): Fast iterative BCH decoding
• Forney (1966): Concatenated codes
• Goppa (1970):
Goppa Code (Rational Function Code)
• Justesen (1972):
  Justesen code (asymptotically good code)
• Ungerboeck and Csajka (1976):
  Trellis-Coded Modulation,
  bandwidth-constrained channels
• Goppa (1980): Algebraic-Geometry Code
2006/07/07 Wireless Communication Engineering I 275
• Welch and Berlekamp (1983):
  Remainder decoding algorithm without using syndromes
• Araki, Sorger and Kotter (1993):
  Fast GMD decoding algorithm
• Berrou (1993): Turbo code,
  parallel concatenated convolutional code

2006/07/07 Wireless Communication Engineering I 276


Basics of Decoding

a) Hamming distance d(ci, cj) ≥ 2t + 1
b) Hamming distance d(ci, cj) = 2t
The received vector is denoted by r.
t → number of correctable errors
Linear Block Codes
(n, k, dmin) code   n: code length
                    k: number of information bits
                    dmin: minimum distance
                    k/n: coding rate
dmin ↑ → good error-correction capability
k/n ↓ → low rate
r = n − k ↑ → dmin ↑
An (n, k, dmin) linear block code is a linear subspace of dimension k in the n-dimensional linear space.

2006/07/07 Wireless Communication Engineering I 278


Arithmetic operations (+, −, ×, /) for encoding and decoding are carried out over a finite field GF(Q), where Q = p^r, p: prime number, r: positive integer.

Example GF(2):
  addition (XOR):        0+0 = 0, 0+1 = 1, 1+0 = 1, 1+1 = 0
  multiplication (AND):  0·0 = 0, 0·1 = 0, 1·0 = 0, 1·1 = 1

2006/07/07 Wireless Communication Engineering I 279


[Encoder]
• The Generator Matrix G and the Parity Check Matrix H

k information bits X → encoder G → n-bit codeword C

C = XG
• Dual (n, n − k) code

  Complementary orthogonal subspace

  Parity-check matrix H = generator matrix of the dual code
  CH^t = 0
  GH^t = 0

2006/07/07 Wireless Communication Engineering I 280


2006/07/07 Wireless Communication Engineering I 281
error vector & syndrome
c : codeword vector
e : error vector
r : received vector (after hard decision)
s : syndrome
s = rH^t = (c + e)H^t = eH^t
s → ê (decoding process: estimate the error pattern from the syndrome)

2006/07/07 Wireless Communication Engineering I 282
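A small numeric illustration of C = XG, CH^t = 0 and s = rH^t = eH^t, using one systematic (7,4) Hamming code (the particular G and H below are an assumed standard choice, not taken from these slides):

import numpy as np

G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,0,1,1],
              [0,0,1,0,1,1,1],
              [0,0,0,1,1,0,1]])   # G = [I | P]
H = np.array([[1,0,1,1,1,0,0],
              [1,1,1,0,0,1,0],
              [0,1,1,1,0,0,1]])   # H = [P^t | I]

assert not (G @ H.T % 2).any()    # GH^t = 0

x = np.array([1, 0, 1, 1])        # information bits X
c = x @ G % 2                     # codeword C = XG
assert not (c @ H.T % 2).any()    # CH^t = 0

e = np.array([0, 0, 0, 0, 1, 0, 0])  # single channel error
r = (c + e) % 2                      # received vector (hard decisions)
s = r @ H.T % 2                      # syndrome s = eH^t
print(s, H.T[4])                     # s equals the 5th column of H -> error located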


[Minimum Distance]
Singleton bound:
Any dmin − 1 columns of H are linearly independent, and at most n − k columns can be, so
dmin ≤ n − k + 1 (Singleton bound)
Maximum distance separable (MDS) code:
dmin = n − k + 1, e.g. Reed-Solomon code

• Some Specific Linear Block Codes


– Hamming code: (n, k, dmin) = (2^m − 1, 2^m − 1 − m, 3)
– Hadamard code: (n, k, dmin) = (2^m, m + 1, 2^(m−1))

2006/07/07 Wireless Communication Engineering I 283


Easy Encoding
• Cyclic Codes
  C = (c_{n−1}, …, c_0) is a codeword → (c_{n−2}, …, c_0, c_{n−1}) is also a codeword
  Codeword polynomial: C(p) = c_{n−1} p^{n−1} + … + c_1 p + c_0
  pC(p) mod (p^n + 1) ⇔ cyclic shift
2006/07/07 Wireless Communication Engineering I 284


Encoding: message polynomial
  X(p) = x_{k−1} p^{k−1} + … + x_1 p + x_0

Codeword polynomial C(p) = X(p) g(p)
where g(p): generator polynomial of degree n − k
  p^n + 1 = g(p) h(p)
h(p): parity polynomial

The encoder is implemented with shift registers.


2006/07/07 Wireless Communication Engineering I 285
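A sketch of non-systematic cyclic encoding C(p) = X(p)g(p) and the cyclic-shift property, assuming the standard generator g(p) = p^3 + p + 1 of the (7,4) cyclic Hamming code:

n = 7
g = 0b1011   # p^3 + p + 1, bits are the coefficients

def gf2_mul(a, b):
    # carry-less polynomial multiplication over GF(2)
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def gf2_mod(a, m):
    # remainder of polynomial division over GF(2)
    while a.bit_length() >= m.bit_length():
        a ^= m << (a.bit_length() - m.bit_length())
    return a

x = 0b1000                             # message polynomial X(p) = p^3
c = gf2_mul(x, g)                      # codeword polynomial C(p)
shift = gf2_mod(c << 1, (1 << n) | 1)  # pC(p) mod (p^n + 1): cyclic shift
assert gf2_mod(shift, g) == 0          # the shifted word is still a codeword
print(f"{c:07b} -> {shift:07b}")       # 1011000 -> 0110001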
Encoder for an (n, k) cyclic code.
2006/07/07 Wireless Communication Engineering I 286
Syndrome calculator for (n, k) cyclic code.

Digital to Analog (BPSK)

c = 1 → s = +1
c = 0 → s = −1
⇒ s = 2c − 1
2006/07/07 Wireless Communication Engineering I 287
Soft-Decoding & Maximum Likelihood

r = s^(k) + n = (r_1, …, r_n) = (s_1, …, s_n) + (n_1, …, n_n)

Prob(r | s^(k)): likelihood

max_k Prob(r | s^(k)) ⇔ min_k ‖r − s^(k)‖² ⇔ max_k correlation(r, c^(k))
2006/07/07 Wireless Communication Engineering I 288
• Optimum Soft-Decision Decoding of Linear Block Codes
The optimum receiver has M = 2^k matched filters → M correlation metrics
  C(r, C_i) = Σ_{j=1}^{n} (2c_ij − 1) r_j
where C_i: i-th codeword
      c_ij: j-th bit of the i-th codeword
      r_j: j-th received signal
→ The largest matched-filter output is selected.

2006/07/07 Wireless Communication Engineering I 289
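A toy correlation decoder (a sketch; the rate-1/3 repetition code and the received samples are illustrative):

import numpy as np

codewords = np.array([[0, 0, 0],
                      [1, 1, 1]])

r = np.array([0.9, -0.2, 0.4])      # noisy received samples (+1 <-> bit 1)

metrics = (2 * codewords - 1) @ r   # C(r, C_i) = sum_j (2c_ij - 1) r_j
best = int(np.argmax(metrics))      # largest matched-filter output wins
print(metrics, "-> decide codeword", codewords[best])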


Error probability for soft-decision decoding (coherent PSK)
  P_M ≤ exp(−γ_b R_c d_min + k ln 2)
where γ_b: SNR per bit
      R_c: coding rate (= k/n)
Uncoded binary PSK:
  P_e = (1/2) exp(−γ_b)
2006/07/07 Wireless Communication Engineering I 290
Coding gain:
  C_g = 10 log₁₀ (R_c d_min − k ln 2 / γ_b)

dmin ↑ → C_g ↑

2006/07/07 Wireless Communication Engineering I 291


• Hard-Decision Decoding

Discrete-time channel = modulator + AWGN channel + demodulator
→ BSC with crossover probability

  p = Q(√(2γ_b R_c)) : coherent PSK
  p = Q(√(γ_b R_c)) : coherent FSK
  p = (1/2) exp(−γ_b R_c / 2) : noncoherent FSK
2006/07/07 Wireless Communication Engineering I 292
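Evaluating the coherent-PSK crossover probability (a sketch; the 6 dB SNR is an illustrative value):

from math import sqrt, erfc

def Q(x):
    # Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

gamma_b = 10 ** (6 / 10)   # SNR per bit of 6 dB
Rc = 4 / 7                 # coding rate of a (7,4) code

p = Q(sqrt(2 * gamma_b * Rc))   # BSC crossover probability, coherent PSK
print(f"p = {p:.4g}")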
Maximum-Likelihood Decoding → Minimum-Distance Decoding
Syndrome calculation with the parity-check matrix H:
  S = YH^t = (C_m + e)H^t = eH^t
where C_m: transmitted codeword
      Y: received word at the demodulator
      e: binary error vector
2006/07/07 Wireless Communication Engineering I 293
• Comparison of Performance between Hard-Decision and Soft-Decision Decoding

→ At most about 2 dB difference

• Bounds on Minimum Distance of Linear Block Codes (R_c vs. d_min)

– Hamming upper bound (2t < d_min):
  1 − R_c ≥ (1/n) log₂ Σ_{i=0}^{t} C(n, i)
– Plotkin upper bound:
  1 − R_c ≥ (2d_min − 2 − log₂ d_min)/n

2006/07/07 Wireless Communication Engineering I 294


– Elias upper bound:
  d_min/n ≤ 2A(1 − A), where R_c ≤ 1 + A log₂ A + (1 − A) log₂(1 − A)
– Gilbert-Varshamov lower bound:
  with δ = d_min/n,
  R_c ≥ 1 − H(δ)

2006/07/07 Wireless Communication Engineering I 295
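The Gilbert-Varshamov bound is easy to evaluate (a sketch; δ = 0.11 is an illustrative normalized distance):

from math import log2

def H2(x):
    # binary entropy function
    return -x * log2(x) - (1 - x) * log2(1 - x)

delta = 0.11                           # d_min / n
print(f"R_c >= {1 - H2(delta):.3f}")   # ~0.500: rate-1/2 codes with this distance exist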


2006/07/07 Wireless Communication Engineering I 296
• Interleaving of Coded Data for Channels with Burst Errors
Multipath and fading channels → burst errors
Burst-error-correcting code: Fire code
Correctable burst length b:
  b ≤ ⌊(n − k)/2⌋
Block and convolutional interleaving is effective against burst errors.
2006/07/07 Wireless Communication Engineering I 297
Convolutional Codes
The performance of convolutional codes exceeds that of comparable block codes, as shown by Viterbi's algorithm:

  P(e) ≤ 2^(−nE(R))
E(R): error exponent
2006/07/07 Wireless Communication Engineering I 298


2006/07/07 Wireless Communication Engineering I 299
Constraint length-3, rate-1/2 convolutional encoder.
2006/07/07 Wireless Communication Engineering I 300
• Parameters of a convolutional code:
  constraint length K
  minimum free distance d_free
• Optimum Decoding of Convolutional Codes –
  The Viterbi Algorithm
  For K ≤ 10, this is practical.
• Probability of Error for Soft-Decision Decoding
2006/07/07 Wireless Communication Engineering I 301
Trellis for the convolutional encoder
2006/07/07 Wireless Communication Engineering I 302
  P_e ≤ Σ_{d ≥ d_free} a_d Q(√(2γ_b R_c d))

where a_d: the number of paths of distance d

• Probability of Error for Hard-Decision Decoding
  Hamming distance is the metric for hard decisions
2006/07/07 Wireless Communication Engineering I 303
Turbo Coding

2006/07/07 Wireless Communication Engineering I 304


RSC Encoder

2006/07/07 Wireless Communication Engineering I 305


Shannon Limit & Turbo Code

2006/07/07 Wireless Communication Engineering I 306


Multi-user Communications
Multiple Access Techniques
1. A common communication channel is shared by many users.
up-link in a satellite communication, a set of terminals →
a central computer, a mobile cellular system
2. A broadcast network
down-links in a satellite system, radio and TV broadcast systems
3. Store-and-forward networks
4. Two-way communication systems
- FDMA (Frequency-Division Multiple Access)
- TDMA (Time-Division Multiple Access)
- CDMA (Code-Division Multiple Access):
  for bursty and low-duty-cycle information transmission
  Spread-spectrum signals → small cross-correlations
  For non-spread random access, collisions and interference occur.
  → Retransmission protocols

2006/07/07 Wireless Communication Engineering I 307


Capacity of Multiple Access
Methods
In FDMA, the normalized total capacity Cn = K·CK / W
(total bit rate for all K users per unit of bandwidth) satisfies
  Cn = log₂(1 + Cn · Eb/N₀)
where W: bandwidth
      Eb: energy per bit
      N₀: noise power spectral density
2006/07/07 Wireless Communication Engineering I 308
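The relation above is implicit in Cn; a sketch that solves it by fixed-point iteration (the 10 dB value is illustrative):

from math import log2

ebno = 10 ** (10 / 10)   # Eb/N0 = 10 dB
Cn = 1.0
for _ in range(100):
    Cn = log2(1 + Cn * ebno)   # iterate Cn = log2(1 + Cn*Eb/N0)
print(f"Cn = {Cn:.2f} bits/s/Hz")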
Normalized capacity as a function of εb / N0 for FDMA.
2006/07/07 Wireless Communication Engineering I 309
Total capacity per hertz as a function of εb / N0 for FDMA.
2006/07/07 Wireless Communication Engineering I 310
In TDMA, there is a practical limit on the transmitter power.
In noncooperative CDMA,

  Cn = log₂ e − 1/(Eb/N₀)
2006/07/07 Wireless Communication Engineering I 311


Normalized capacity as a function of εb / N0 for noncooperative CDMA.

2006/07/07 Wireless Communication Engineering I 312


Capacity region for multiple users

Capacity region of two-user CDMA multiple access Gaussian channel.


2006/07/07 Wireless Communication Engineering I 313
Code-Division Multiple Access
• CDMA Signal and Channel Models
• The Optimum Receiver
Synchronous Transmission
Asynchronous Transmission
- Suboptimum detectors
  Computational complexity grows linearly with the number of users, K.
  Conventional single-user detector
    Near-far problem
  Decorrelation detector
  Minimum mean-square-error (MMSE) detector
Other Types of Detectors
- Performance Characteristics of Detectors
2006/07/07 Wireless Communication Engineering I 314
Random Access Methods
• ALOHA Systems and Protocols
  Channel access protocols:
  – synchronized (slotted) ALOHA
  – unsynchronized (unslotted) ALOHA
  Throughput for slotted ALOHA

2006/07/07 Wireless Communication Engineering I 315
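For slotted ALOHA the classical throughput result is S = G·e^(−G), maximized at G = 1 (this formula is the standard result, stated here since the slide only names it):

from math import e, exp

for G in (0.5, 1.0, 2.0):   # offered load, packets per slot
    print(f"G = {G}: S = {G * exp(-G):.3f}")
print(f"peak throughput = 1/e = {1 / e:.3f}")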


2006/07/07 Wireless Communication Engineering I 316
Throughput & Delay Performance

2006/07/07 Wireless Communication Engineering I 317


• Carrier Sense Systems and Protocols
  CSMA/CD (carrier sense multiple access with collision detection)
  Nonpersistent CSMA
  1-persistent CSMA
  p-persistent CSMA

2006/07/07 Wireless Communication Engineering I 318


2006/07/07 Wireless Communication Engineering I 319
2006/07/07 Wireless Communication Engineering I 320
2006/07/07 Wireless Communication Engineering I 321
Chapter 10
Error Detection
and
Correction

10.322
10-1 INTRODUCTION

This section discusses some issues related, directly or indirectly, to error detection and correction.

Topics discussed in this section:


Types of Errors
Redundancy
Detection Versus Correction
Modular Arithmetic

10.323
Figure 10.1 Single-bit error

In a single-bit error, only 1 bit in the data unit has


changed.

10.324
Figure 10.2 Burst error of length 8

A burst error means that 2 or more bits in the data unit


have changed.

10.325
Error detection/correction
 Error detection
   Check whether any error has occurred
   Don't care about the number of errors
   Don't care about the positions of errors
 Error correction
   Need to know the number of errors
   Need to know the positions of errors
   More difficult

10.326
Figure 10.3 The structure of encoder and decoder

To detect or correct errors, we need to send extra


(redundant) bits with data.

10.327
Modular Arithmetic
 Modulus N: the upper limit
 In modulo-N arithmetic, we use only the
integers in the range 0 to N −1, inclusive.
 If N is 2, we use only 0 and 1
 No carry in the calculations (addition and subtraction)

10.328
Figure 10.4 XORing of two single bits or two words

10.329
10-2 BLOCK CODING

In block coding, we divide our message into blocks,


each of k bits, called datawords. We add r redundant
bits to each block to make the length n = k + r. The
resulting n-bit blocks are called codewords.

Topics discussed in this section:


Error Detection
Error Correction
Hamming Distance
Minimum Hamming Distance

10.330
Figure 10.5 Datawords and codewords in block coding

10.331
Example 10.1

The 4B/5B block coding discussed in Chapter 4 is a good


example of this type of coding. In this coding scheme, k =
4 and n = 5.

As we saw, we have 2^k = 16 datawords and 2^n = 32 codewords. We saw that 16 out of 32 codewords are used for message transfer and the rest are either used for other purposes or unused.

10.332
Figure 10.6 Process of error detection in block coding

10.333
Table 10.1 A code for error detection (Example 10.2)

10.334
Figure 10.7 Structure of encoder and decoder in error correction

10.335
Table 10.2 A code for error correction (Example 10.3)

10.336
Hamming Distance
 The Hamming distance between two
words is the number of differences
between corresponding bits.
 The minimum Hamming distance is the
smallest Hamming distance between
all possible pairs in a set of words.

10.337
We can count the number of 1s in the XOR of two words:

1. The Hamming distance d(000, 011) is 2 because w(000 ⊕ 011) = w(011) = 2
2. The Hamming distance d(10101, 11110) is 3 because w(10101 ⊕ 11110) = w(01011) = 3

10.338
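Counting 1s in the XOR directly (a minimal sketch):

def hamming(a: str, b: str) -> int:
    # number of positions in which the two words differ
    return sum(x != y for x, y in zip(a, b))

print(hamming("000", "011"))       # 2
print(hamming("10101", "11110"))   # 3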
Example 10.5

Find the minimum Hamming distance of the coding scheme


in Table 10.1.

Solution
We first find all Hamming distances.

The dmin in this case is 2.


10.339
Example 10.6

Find the minimum Hamming distance of the coding scheme


in Table 10.2.

Solution
We first find all the Hamming distances.

The dmin in this case is 3.


10.340
Minimum Distance for
Error Detection
 To guarantee the detection of up to s errors in all cases, the
minimum Hamming distance in a block code must be dmin = s + 1.
 Why?
Example 10.7

•The minimum Hamming distance for our first code scheme (Table 10.1) is 2. This code guarantees
detection of only a single error.
•For example, if the third codeword (101) is sent and one error occurs, the received codeword does not
match any valid codeword. If two errors occur, however, the received codeword may match a valid
codeword and the errors are not detected.

10.341
Example 10.8

•Table 10.2 has dmin = 3. This code can detect up to two


errors. When any of the valid codewords is sent, two errors
create a codeword which is not in the table of valid
codewords. The receiver cannot be fooled.
•What if three errors occur?

10.342
Figure 10.8 Geometric concept for finding dmin in error detection

10.343
Figure 10.9 Geometric concept for finding dmin in error correction

To guarantee correction of up to t errors in all cases, the


minimum Hamming distance in a block code must be dmin
= 2t + 1.

10.344
Example 10.9

A code scheme has a Hamming distance dmin = 4. What is


the error detection and correction capability of this
scheme?

Solution
This code guarantees the detection of up to three errors
(s = 3), but it can correct up to one error. In other words,
if this code is used for error correction, part of its capability is wasted. Error
correction codes need to have an odd minimum distance (3, 5, 7, . . . ).

10.345
10-3 LINEAR BLOCK CODES

•Almost all block codes used today belong to a subset called


linear block codes.
•A linear block code is a code in which the exclusive OR
(addition modulo-2 / XOR) of two valid codewords creates
another valid codeword.

10.346
Example 10.10

Let us see if the two codes we defined in Table 10.1 and


Table 10.2 belong to the class of linear block codes.

1. The scheme in Table 10.1 is a linear block code


because the result of XORing any codeword with any
other codeword is a valid codeword. For example, the
XORing of the second and third codewords creates the
fourth one.

2. The scheme in Table 10.2 is also a linear block code.


We can create all four codewords by XORing two
other codewords.
10.347
Minimum Distance for
Linear Block Codes
 For a linear block code, the minimum Hamming distance is the number of 1s in the nonzero valid codeword with the smallest number of 1s.

10.348
Linear Block Codes
 Simple parity-check code
 Hamming codes

Table 10.3 Simple parity-check code C(5, 4)


•A simple parity-check code is a single-bit
error-detecting code in which n = k + 1 with dmin = 2.
•The extra bit (parity bit) is to make the total number
of 1s in the codeword even
•A simple parity-check code can detect an odd
number of errors.

10.349
Figure 10.10 Encoder and decoder for simple parity-check code

10.350
Example 10.12

Let us look at some transmission scenarios. Assume the


sender sends the dataword 1011. The codeword created
from this dataword is 10111, which is sent to the receiver.
We examine five cases:

1. No error occurs; the received codeword is 10111. The


syndrome is 0. The dataword 1011 is created.
2. One single-bit error changes a1 . The received
codeword is 10011. The syndrome is 1. No dataword
is created.
3. One single-bit error changes r0 . The received codeword
is 10110. The syndrome is 1. No dataword is created.
10.351
Example 10.12 (continued)

4. An error changes r0 and a second error changes a3 .


The received codeword is 00110. The syndrome is 0.
The dataword 0011 is created at the receiver. Note that
here the dataword is wrongly created due to the
syndrome value.
5. Three bits—a3, a2, and a1—are changed by errors.
The received codeword is 01011. The syndrome is 1.
The dataword is not created. This shows that the simple
parity check, guaranteed to detect one single error, can
also find any odd number of errors.

10.352
Figure 10.11 Two-dimensional parity-check code

10.353
Figure 10.11 Two-dimensional parity-check code

10.354
Figure 10.11 Two-dimensional parity-check code

10.355
Table 10.4 Hamming code C(7, 4)

1. All Hamming codes discussed in this book have dmin = 3.


2. The relationship between m and n in these codes is n = 2^m − 1.

10.356
Figure 10.12 The structure of the encoder and decoder for a Hamming code

10.357
Table 10.5 Logical decision made by the correction logic analyzer

r0=a2+a1+a0 S0=b2+b1+b0+q0

r1=a3+a2+a1 S1=b3+b2+b1+q1

r2=a1+a0+a3 S2=b1+b0+b3+q2

10.358
Example 10.13

Let us trace the path of three datawords from the sender to the
destination:
1. The dataword 0100 becomes the codeword 0100011.
The codeword 0100011 is received. The syndrome is
000, the final dataword is 0100.
2. The dataword 0111 becomes the codeword 0111001.
   The codeword 0011001 is received. The syndrome is
   011. After flipping b2 (changing the 1 to 0), the final
   dataword is 0111.
3. The dataword 1101 becomes the codeword 1101000.
The codeword 0001000 is received. The syndrome is
101. After flipping b0, we get 0000, the wrong dataword.
This shows that our code cannot correct two errors.

10.359
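A sketch of this C(7, 4) Hamming code built from the parity and syndrome equations of Table 10.5 (codeword layout a3 a2 a1 a0 r2 r1 r0; the syndrome-to-bit table follows that correction logic):

def encode(d):                 # d = [a3, a2, a1, a0]
    a3, a2, a1, a0 = d
    r2 = (a1 + a0 + a3) % 2
    r1 = (a3 + a2 + a1) % 2
    r0 = (a2 + a1 + a0) % 2
    return [a3, a2, a1, a0, r2, r1, r0]

# syndrome (s2, s1, s0) -> index of the bit to flip
FLIP = {(0,0,1): 6, (0,1,0): 5, (1,0,0): 4,
        (0,1,1): 1, (1,0,1): 3, (1,1,0): 0, (1,1,1): 2}

def decode(y):                 # y = [b3, b2, b1, b0, q2, q1, q0]
    b3, b2, b1, b0, q2, q1, q0 = y
    s = ((b1 + b0 + b3 + q2) % 2,
         (b3 + b2 + b1 + q1) % 2,
         (b2 + b1 + b0 + q0) % 2)
    if s in FLIP:
        y[FLIP[s]] ^= 1        # correct the (assumed single) error
    return y[:4]

c = encode([0, 1, 1, 1])       # -> 0111001, as in Example 10.13
c[1] ^= 1                      # the channel flips b2
print(decode(c))               # [0, 1, 1, 1]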
10-4 CYCLIC CODES

Cyclic codes are special linear block codes with one


extra property. In a cyclic code, if a codeword is
cyclically shifted (rotated), the result is another
codeword.

Topics discussed in this section:


Cyclic Redundancy Check
Hardware Implementation
Polynomials
Cyclic Code Analysis
Advantages of Cyclic Codes
Other Cyclic Codes
10.360
Table 10.6 A CRC code with C(7, 4)

10.361
Figure 10.14 CRC encoder and decoder

10.362
Figure 10.15 Division in CRC encoder

10.363
Figure 10.16 Division in the CRC decoder for two cases

10.364
Figure 10.21 A polynomial to represent a binary word

10.365
Figure 10.22 CRC division using polynomials

The divisor in a cyclic code is normally called the


generator polynomial or simply the generator.

10.366
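CRC division is modulo-2 long division by the generator; a sketch with the small generator 1011 (x^3 + x + 1), an illustrative choice rather than a particular standard:

def mod2_div(dividend: str, gen: str) -> str:
    # returns the remainder of XOR long division
    bits = list(dividend)
    for i in range(len(bits) - len(gen) + 1):
        if bits[i] == "1":
            for j, g in enumerate(gen):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-(len(gen) - 1):])

data, gen = "1001", "1011"
crc = mod2_div(data + "000", gen)   # divide the augmented dataword
print(data + crc)                   # transmitted codeword: 1001110
print(mod2_div(data + crc, gen))    # 000 at the receiver: no error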
10-5 CHECKSUM

The last error detection method we discuss here is


called the checksum. The checksum is used in the
Internet by several protocols although not at the data
link layer. However, we briefly discuss it here to
complete our discussion on error checking

Topics discussed in this section:


Idea
One’s Complement
Internet Checksum

10.367
Example 10.18

Suppose our data is a list of five 4-bit numbers that we


want to send to a destination. In addition to sending these
numbers, we send the sum of the numbers. For example, if
the set of numbers is (7, 11, 12, 0, 6), we send (7, 11, 12, 0,
6, 36), where 36 is the sum of the original numbers. The
receiver adds the five numbers and compares the result
with the sum. If the two are the same, the receiver assumes
no error, accepts the five numbers, and discards the sum.
Otherwise, there is an error somewhere and the data are
not accepted.

10.368
Example 10.19

We can make the job of the receiver easier if we send the


negative (complement) of the sum, called the checksum. In
this case, we send (7, 11, 12, 0, 6, −36). The receiver can
add all the numbers received (including the checksum). If
the result is 0, it assumes no error; otherwise, there is an
error.

10.369
Example 10.20

How can we represent the number 21 in one’s


complement arithmetic using only four bits?

Solution
The number 21 in binary is 10101 (it needs five bits). We
can wrap the leftmost bit and add it to the four rightmost
bits. We have (0101 + 1) = 0110 or 6.

10.370
Example 10.21

How can we represent the number −6 in one’s


complement arithmetic using only four bits?

Solution
In one’s complement arithmetic, the negative or
complement of a number is found by inverting all bits.
Positive 6 is 0110; negative 6 is 1001. If we consider only
unsigned numbers, this is 9. In other words, the
complement of 6 is 9. Another way to find the complement
of a number in one’s complement arithmetic is to subtract
the number from 2n − 1 (16 − 1 in this case).

10.371
Figure 10.24 Example 10.22


10.372
Note
Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are
added using one’s complement addition.
4. The sum is complemented and becomes the
checksum.
5. The checksum is sent with the data.

10.373
Note
Receiver site:
1. The message (including checksum) is
divided into 16-bit words.
2. All words are added using one’s
complement addition.
3. The sum is complemented and becomes the
new checksum.
4. If the value of checksum is 0, the message
is accepted; otherwise, it is rejected.

10.374
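The sender/receiver procedure above, sketched for 16-bit words (the word values encode "Forouzan" as in Example 10.23 below):

def oc_sum(words):
    # one's-complement sum of 16-bit words
    s = 0
    for w in words:
        s += w
        s = (s & 0xFFFF) + (s >> 16)   # wrap the carry around
    return s

def checksum(words):
    return ~oc_sum(words) & 0xFFFF     # complement of the sum

data = [0x466F, 0x726F, 0x757A, 0x616E]   # "Fo", "ro", "uz", "an"
cs = checksum(data)
print(hex(cs))
print(oc_sum(data + [cs]) ^ 0xFFFF)       # 0 -> message accepted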
Example 10.23

Let us calculate the checksum for a text of 8 characters


(“Forouzan”). The text needs to be divided into 2-byte (16-
bit) words. We use ASCII (see Appendix A) to change each
byte to a 2-digit hexadecimal number. For example, F is
represented as 0x46 and o is represented as 0x6F. Figure
10.25 shows how the checksum is calculated at the sender
and receiver sites. In part a of the figure, the value of
partial sum for the first column is 0x36. We keep the
rightmost digit (6) and insert the leftmost digit (3) as the
carry in the second column. The process is repeated for
each column. Note that if there is any corruption, the
checksum recalculated by the receiver is not all 0s. We
leave this an exercise.
10.375
Figure 10.25 Example 10.23

10.376
Modern Coding Theory: LDPC Codes

Hossein Pishro-Nik
University of Massachusetts Amherst

November 7, 2006
Outline

 Introduction and motivation


 Error control coding
 Block codes
 Minimum distance
 Modern coding: LDPC codes
 Practical challenges
 Application of LDPC codes to
holographic data storage
378
Errors in Information Transmission

Digital Communications: Received bits = Corrupted version


of the transmitted bits
Transporting information from
one party to another, using a
sequence of symbols, e.g. bits. …. 0100010101 …

Noise & interference:


received sequence may be
different from the transmitted
one.
…. 0110010101 …

Transmitted bits 379


Errors in Information Transmission: Cont.

Magnetic recording Track

Sector
Some of the bits may change 010101011110010101001
during the transmission from the
disk to the disk drive

380
Errors in Information Transmission: Cont.

These communication systems can be modeled


as Binary Symmetric Channels (BSC):
Receiver
Sender

BSC
Information bits Corrupted bits
10010…10101… 10110…00101…

0 1-p 0

Each bit is flipped p

with probability p: 0<p<0.5


p
1-p 1
1
381
Pioneers of Coding Theory

Bell Telephone Laboratories

Richard Hamming Claude Shannon


382
Error Control Coding: Repetition Codes

Error Control Coding:


Use redundancy to reduce the bit error rate

10010…10101… BSC 10110…00101…

Bit error probability p=0.01

Example:

Three-fold repetition code: send each bit three times

383
Repetition Codes: Cont.

( x1 ) ( y1 , y2 , y3 )  ( x1 , x1 , x1 )
Encoder: Repeat
each bit three times
codeword

BSC

( x1 ) ( z1 , z 2 , z3 )
Decoder: majority
voting
Corrupted codeword

384
Repetition Codes: Cont.

( x1 )  (0) (0,0,0)
Encoder
codeword
BSC
(1,0,0)
(0)
Decoder

Successful Corrupted codeword


decoding!

Decoding: majority voting


385
Decoding Error

Decoding Error Probability pe :


= Prob{2 or 3 bits in the codeword received in error}

  p_e = p³ + 3p²(1 − p)
  p = 0.01 → p_e ≈ 3 × 10⁻⁴

 Advantage: reduced bit error rate

 Disadvantage: we lose bandwidth because each bit must be sent three times

386
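The same computation in a line of Python (a sketch):

p = 0.01
pe = 3 * p**2 * (1 - p) + p**3   # P(2 errors) + P(3 errors)
print(f"pe = {pe:.2e}")          # ~3e-4, versus the raw channel BER of 1e-2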
Error Control Coding: Block Codes

( x1 , x2 ,..., xk ) ( y1 , y 2 ,..., y n )
Encoder
Information block codeword
n>k BSC
( x1 , x2 ,..., xk ) ( z1 , z 2 ,..., z n )
Decoder
Corrupted codeword
 Encoding: mapping the information block to the
corresponding codeword

 Decoding: an algorithm for recovering the


information block from the corrupted codeword

 Always n>k: redundancy in the codeword


387
Error Control Coding: Block Codes

( x1 , x2 ,..., xk ) ( y1 , y 2 ,..., y n )
Encoder
Information block Codeword
BSC
( x1 , x2 ,..., xk ) ( z1 , z 2 ,..., z n )
Decoder
Corrupted codeword

K: code dimension, n: code length

This is called an (n,k) block code

388
Code Rate

• In general an (n,k) block code is a 1-1 mapping from k bits to n bits: (x1, x2, …, xk) → (y1, y2, …, yn)

  R = code rate = dimension / code length = k/n

  0 < R ≤ 1

 R shows the amount of redundancy in the codeword

 Higher R = Lower redundancy

389
Repetition Codes Revisited

For repetition code: k=1, n=3, R=1/3

( x1 ) ( y1 , y 2 , y3 )  ( x1 , x1 , x1 )
Encoder
K=1 n=3

BSC
( x1 ) ( z1 , z 2 , z3 )
Decoder

The repetition code is a (3,1) block code

390
Block Codes: Cont.

There are two valid codewords in the repetition code:


(0)  (0,0,0)
(1)  (1,1,1) Valid codewords
 A (5,3) block code: (n=5, k=3, R=3/5):
(0 0 0)  ( 0 0 0 0 0)
(0 0 1)  (0 1 1 0 0)
(0 1 0)  (1 0 0 1 0)
(0 1 1)  (0 1 0 1 1)
(1 0 0)  (1 0 1 1 1)
8 valid codewords
(1 0 1)  (0 1 1 1 0)
(1 1 0)  (0 1 0 0 1)
(1 1 1)  (1 1 0 0 0)

 The number of valid codewords, 2^k, is equal to the number of possible information blocks.
391
Block Codes: Cont.

( x1 , x2 ,..., xk )  (y1 , y 2 ,..., yn )


Valid codewords
All n-tuples
All k-tuples

(0,0)
(0,1)
(1,0)

(1,1)

2^k points                    2^n points
392
Good Block Codes

 There exist more efficient and more powerful


codes.

 Good Codes:

 High rates = lower redundancy (depends on the


channel error rate p)

 Low error rate at the decoder

 Simple and practical encoding and decoding

393
Linear Block Codes

C : (x1, x2, …, xk) → (y1, y2, …, yn)

A linear mapping:
  (y1, y2, …, yn) = (x1, x2, …, xk) · G

      [ g11  g12  …  g1n ]
  G = [ g21  g22  …  g2n ]
      [  .    .        .  ]
      [ gk1  gk2  …  gkn ]

Linear block codes:


Generator matrix G
 Simple structure: easier to analyze
 Simple encoding algorithms.

394
Linear Block Codes

 There are many practical linear block codes:


 Hamming codes
 Cyclic codes
 Reed-Solomon codes
 BCH codes
 …

395
Channel Capacity

( x1 , x2 ,..., xk ) ( y1 , y2 ,..., y n )
Encoder

Noisy
channel
( x1 , x2 ,..., xk ) ( z1 , z 2 ,..., z n )
Decoder

Channel capacity (Shannon):


The maximum achievable data rate

Shannon capacity is achievable using random codes

396
Shannon Codes

( x1 , x2 ,..., xk )  (y1 , y 2 ,..., yn ) Random mapping


Valid codewords
All n-tuples
All k-tuples

(0,0)
(0,1)
(1,0)

(1,1)

2^k points                    2^n points
397
Shannon Random Codes

As n (block length) goes to infinity,


random codes achieve the channel
capacity, i.e,

Code rate R approaches C, while the


decoding error probability goes to zero

398
Error Control Coding:
Low-Density Parity-Check (LDPC) Codes

 Ideal codes
 Have efficient encoding
 Have efficient decoding
 Can approach channel capacity

 Low-density parity-check (LDPC) codes


 Random codes: based on random graphs
 Simple iterative decoding

399
t-Error-Correcting Codes

 The repetition code can correct one error in the codeword; however, it fails to correct higher numbers of errors.

 A code that is capable of correcting t errors in the


codewords is called a t-error-correcting code.

 The repetition code is a 1-error-correcting code.

400
Minimum Distance

The minimum distance of a code is the minimum


Hamming distance between its codewords:

d_min = min{ dist(u, v) : u and v are distinct codewords }


For the repetition code, since there are only two
valid codewords,

  c1 = (0,0,0) and c2 = (1,1,1)

  d_min = dist(c1, c2) = 3

401
Minimum Distance: Cont.

All vectors of length n (n-tuples)

402
Minimum Distance: Cont.

Higher minimum distance = Stronger code

Example: For the repetition code, t = 1 and d_min = 3.

403
Modern Coding Theory

• Random linear codes can achieve


channel capacity
• Linear codes can be encoded efficiently
• Decoding of linear codes: NP hard

• Gallager’s idea:
– Find a subclass of random linear codes that
can be decoded efficiently
404
Modern Coding Theory

 Iterative coding schemes: LDPC codes, Turbo codes

Encoder

BSC
Iterative
Decoder

 Iterative decoding instead of distance-based decoding

405
Introduction to Channel Coding

 Noisy channels:

Noisy
Information bits Corrupted bits
10010…10101…
channel 10e10…e01e1…

Example: binary erasure channel (BEC)

inputs (0, 1) → BEC → outputs (0, 1, e)

Other channels: Gaussian channel, binary symmetric channel,…

406
Low-Density Parity-Check Codes
 Defined by random sparse graphs (Tanner graphs)

y1 ⊕ y2 ⊕ y3 = 0        Check (message) nodes

Variable (bit) nodes

y1 y2 y3 yn

Simple iterative decoding: message-passing algorithm

407
Important Recent Developments

 Luby et al. and Richardson et al.


 Density evolution
 Optimization using density evolution

 Shokrollahi et al.
 Capacity-achieving LDPC codes for the binary erasure
channel (BEC)

 Richardson et al. and Jin et al.


 Efficient encoding
 Irregular repeat-accumulate codes

408
Standard Iterative Decoding over the BEC

Codeword Received word

01101001 → 01e0ee01

Standard Iterative Algorithm: Check node

Repeat for any check node


{
  If only one of the neighbors is missing, recover it
}
Example (neighbors y1 = 0, y2 = 1, y3 = e):
  y3 = y1 ⊕ y2 = 0 ⊕ 1 = 1

409
Standard Iterative Decoding: Cont.

[Figure: check nodes f above the received word 0 1 e e 1 e; the erasures are recovered step by step as e→0, e→1, e→1]

Decoding is successful!

410
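A sketch of the erasure-recovery iteration on a toy Tanner graph (the graph and the received word are illustrative, not the figure's code):

checks = [(0, 1, 2), (1, 3, 4), (2, 4, 5)]   # each check: XOR of these bits = 0
y = [0, 1, None, 1, None, None]              # None marks an erasure

progress = True
while progress:
    progress = False
    for c in checks:
        erased = [i for i in c if y[i] is None]
        if len(erased) == 1:                 # exactly one unknown -> recover it
            i = erased[0]
            y[i] = sum(y[j] for j in c if j != i) % 2
            progress = True

print(y)   # fully recovered unless a stopping set is hit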
Algorithm A: Cont.

 The algorithm may fail

0 e e e 1 1

Stopping Set: S

411
Practical Challenges: Finite-Length Codes

• In practice, we need to use short or


moderate-length codes
• Short or moderate length codes do not
perform as well as long codes

412
Error Floor of LDPC Codes
Capacity-approaching LDPC codes suffer from the error floor problem.

[Figure: BER vs. average erasure probability of the channel, 10⁻¹ down to 10⁻⁹; one curve with a high error floor (~10⁻⁷) and one with a low error floor]


413
Volume Holographic Memory (VHM) Systems

414
Noise and Error Sources

• Thermal noise, shot noise,…


• Limited diffraction
• Aberration
• Misalignment error
• Inter-page interference (IPI)
• Photovoltaic damage
• Non-uniform erasure

415
The Scaling Law

• The magnitudes of systematic errors and thermal noise are assumed to remain unchanged with respect to M (the number of pages), and SNR is proportional to 1/M²:

  SNR ∝ 1/M²

• SNR decreases as the number of pages increases.

• There exists an optimum number of pages that maximizes


the storage capacity.

416
Raw Error Distribution over a Page

• Bits in different regions


of a page are affected
by different noise
powers.

• The noise power is


higher at edges.

417
Properties and Requirements

• Can use large block lengths

• Non-uniform error correction

12
• Error floor: Target BER< 10

• Simple implementation: Simple decoding

418
Ensembles for Non-uniform Error Correction

Check Nodes

Variable Nodes …
c1 c2 ck

Bits from the first region

419
Ensemble Properties

• Threshold effect
• Concentration
theorem
• Density evolution:

420
Ensemble Properties

• Stability condition (BEC):

421
Design Methodology

• The performance of the decoder is not directly related to


the minimum distance.

• However, the minimum distance still plays an important


role:
– Example: error floor effect

• To eliminate error floor, we avoid degree-two variable


nodes to have large minimum distance.

• For efficient decoding, and also for simplicity of design,


we use low degrees for variable nodes.

422
Performance on VHM

[Figure: BER curves on the VHM channel for LDPC codes of rate 0.85 and average degree 6, block lengths n = 10⁴ and n = 10⁵; BER axis from 10⁻² to 10⁻⁹; gap from capacity at BER 10⁻⁹: 0.6 dB]
423
Storage Capacity
Information-theoretic capacity for soft-decision decoding: 0.95 Gb
  LDPC, soft decision: 0.84 Gb
  LDPC, hard decision: 0.76 Gb
  RS, hard decision: 0.52 Gb

[Figure: storage capacity (Gbits) vs. number of pages, 2000 to 6000]
424
Conclusion

• Carefully designed LDPC codes can result in


significant increase in the storage capacity.
• By incorporating channel information in
design of LDPC codes
- Small gap from capacity
- Error floor reduction
- More efficient decoding

425
Modern Coding Theory

 The performance of the decoder is not directly


related to the minimum distance.

 However, the minimum distance still plays an


important role:

 Example: error floor effect

426
Chapter 4

Digital
Transmission
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
4.1 Line Coding

Some Characteristics

Line Coding Schemes

Some Other Schemes

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.1 Line coding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.2 Signal level versus
data level

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.3 DC component

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 1

A signal has two data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
  Pulse Rate = 1/10⁻³ = 1000 pulses/s
  Bit Rate = Pulse Rate × log₂ L = 1000 × log₂ 2 = 1000 bps

Example 2

A signal has four data levels with a pulse duration of 1 ms. We calculate the pulse rate and bit rate as follows:
  Pulse Rate = 1/10⁻³ = 1000 pulses/s
  Bit Rate = Pulse Rate × log₂ L = 1000 × log₂ 4 = 2000 bps

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.4 Lack of
synchronization

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 3

In a digital transmission, the receiver clock is 0.1 percent


faster than the sender clock. How many extra bits per
second does the receiver receive if the data rate is 1
Kbps? How many if the data rate is 1 Mbps?

Solution
At 1 Kbps:
  1000 bits sent → 1001 bits received → 1 extra bps
At 1 Mbps:
  1,000,000 bits sent → 1,001,000 bits received → 1000 extra bps

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.5 Line coding
schemes

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.6 Unipolar encoding

Note:

Unipolar encoding uses only one voltage level.


McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Figure 4.7 Types of polar
encoding

Note:

Polar encoding uses two voltage levels (positive and negative).


McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Note:
In NRZ-L the level of the signal is dependent upon the state of the
bit.

Note:

In NRZ-I the signal is inverted if a


1 is encountered.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.8 NRZ-L and NRZ-I
encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.9 RZ encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

A good encoded digital signal


must contain a provision for
synchronization.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.10 Manchester
encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In Manchester encoding, the


transition at the middle of the bit
is used for both synchronization
and bit representation.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.11 Differential
Manchester encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In differential Manchester encoding, the transition at the middle of


the bit is used only for synchronization.
The bit representation is defined by the inversion or noninversion
at the beginning of the bit.

Note:

In bipolar encoding, we use three levels: positive, zero,


and negative.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.12 Bipolar AMI
encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.13 2B1Q

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.14 MLT-3 signal

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


4.2 Block Coding

Steps in Transformation

Some Common Block Codes

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.15 Block coding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.16 Substitution in
block coding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Table 4.1 4B/5B encoding

Data Code Data Code

0000 11110 1000 10010

0001 01001 1001 10011

0010 10100 1010 10110


0011 10101 1011 10111
0100 01010 1100 11010
0101 01011 1101 11011
0110 01110 1110 11100
0111 01111 1111 11101
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Table 4.1 4B/5B encoding (Continued)

Data Code

Q (Quiet) 00000

I (Idle) 11111

H (Halt) 00100
J (start delimiter) 11000
K (start delimiter) 10001
T (end delimiter) 01101
S (Set) 11001
R (Reset) 00111
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Figure 4.17 Example of 8B/6T
encoding

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


4.3 Sampling

Pulse Amplitude Modulation


Pulse Code Modulation
Sampling Rate: Nyquist
Theorem
How Many Bits per Sample?
Bit Rate
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Figure 4.18 PAM

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

Pulse amplitude modulation has some


applications, but it is not used by itself in data
communication. However, it is the first step in
another very popular conversion method called
pulse code modulation.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.19 Quantized PAM
signal

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.20 Quantizing by using sign
and magnitude

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.21 PCM

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.22 From analog signal to
PCM digital code

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

According to the Nyquist theorem,


the sampling rate must be at least 2
times the highest frequency.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.23 Nyquist theorem

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 4
What sampling rate is needed for a signal with a
bandwidth of 10,000 Hz (1000 to 11,000 Hz)?

Solution
The sampling rate must be twice the highest
frequency in the signal:

Sampling rate = 2 x (11,000) = 22,000 samples/s

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 5

A signal is sampled. Each sample requires at least 12


levels of precision (+0 to +5 and -0 to -5). How many bits
should be sent for each sample?

Solution
We need 4 bits: 1 bit for the sign and 3 bits for the value. A 3-bit value can represent 2³ = 8 levels (000 to 111), which is more than what we need. A 2-bit value is not enough since 2² = 4. A 4-bit value is too much because 2⁴ = 16.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 6
We want to digitize the human voice. What is the bit rate, assuming 8 bits per sample?

Solution
The human voice normally contains frequencies from 0 to 4000 Hz.
Sampling rate = 4000 x 2 = 8000 samples/s

Bit rate = sampling rate x number of bits per sample


= 8000 x 8 = 64,000 bps = 64 Kbps

Note:

Note that we can always change a band-pass signal to a low-pass


signal before sampling. In this case, the sampling rate is twice the
bandwidth.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


4.4 Transmission Mode

Parallel Transmission

Serial Transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.24 Data transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.25 Parallel
transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.26 Serial
transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In asynchronous transmission, we send 1 start bit (0) at the


beginning and 1 or more stop bits (1s) at the end of each byte.
There may be a gap between each byte.

Note:

Asynchronous here means “asynchronous at the byte level,” but


the bits are still synchronized; their durations are the same.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.27 Asynchronous
transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In synchronous transmission,
we send bits one after another
without start/stop bits or gaps.
It is the responsibility of the
receiver to group the bits.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Figure 4.28 Synchronous
transmission

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


PART III

Data Link Layer

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Position of the data-link layer

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Data link layer duties

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


LLC and MAC sublayers

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


IEEE standards for LANs

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Chapters

Chapter 10 Error Detection and Correction


Chapter 11 Data Link Control and Protocols
Chapter 12 Point-To-Point Access
Chapter 13 Multiple Access
Chapter 14 Local Area Networks
Chapter 15 Wireless LANs
Chapter 16 Connecting LANs
Chapter 17 Cellular Telephone and Satellite Networks

Chapter 18 Virtual Circuit Switching


McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Chapter 10

Error Detection
and
Correction
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Note:

Data can be corrupted during


transmission. For reliable
communication, errors must be
detected and corrected.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.1 Types of Error

Single-Bit Error

Burst Error

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In a single-bit error, only one bit in


the data unit has changed.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.1 Single-bit error

Note:

A burst error means that 2 or more bits in the data unit have changed.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.2 Burst error of length 5

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.2 Detection

Redundancy

Parity Check

Cyclic Redundancy Check (CRC)

Checksum

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

Error detection uses the concept of


redundancy, which means adding
extra bits for detecting errors at the
destination.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.3 Redundancy

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.4 Detection methods

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.5 Even-parity concept

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note:

In parity check, a parity bit is added


to every data unit so that the total
number of 1s is even
(or odd for odd-parity).

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 1
Suppose the sender wants to send the word world. In ASCII the five characters
are coded as
1110111 1101111 1110010 1101100 1100100
The following shows the actual bits sent
11101110 11011110 11100100 11011000 11001001

Example 2

Now suppose the word world in Example 1 is received by the receiver without
being corrupted in transmission.
11101110 11011110 11100100 11011000 11001001
The receiver counts the 1s in each character and comes up with even numbers
(6, 6, 4, 4, 4). The data are accepted.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 3
Now suppose the word world in Example 1 is corrupted during transmission.
11111110 11011110 11101100 11011000 11001001
The receiver counts the 1s in each character and comes up with even and odd
numbers (7, 6, 5, 4, 4). The receiver knows that the data are corrupted, discards
them, and asks for retransmission.

Note:

Simple parity check can detect all single-bit errors. It can detect burst
errors only if the total number of errors in each data unit is odd.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.6 Two-dimensional parity

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 4
Suppose the following block is sent:
10101001 00111001 11011101 11100111 10101010

However, it is hit by a burst noise of length 8, and some bits are corrupted.
10100011 10001001 11011101 11100111 10101010

When the receiver checks the parity bits, some of the bits do not follow the
even-parity rule and the whole block is discarded.
10100011 10001001 11011101 11100111 10101010
Note:

In two-dimensional parity check, a block of bits is divided into


rows and a redundant row of bits is added to the whole block.
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
10.7 CRC generator and checker

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.8 Binary division in a CRC
generator

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.9 Binary division in CRC checker

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.10 A polynomial

10.11 A polynomial representing a


divisor

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Table 10.1 Standard polynomials
Name     Polynomial                                   Application
CRC-8    x⁸ + x² + x + 1                              ATM header
CRC-10   x¹⁰ + x⁹ + x⁵ + x⁴ + x² + 1                  ATM AAL
ITU-16   x¹⁶ + x¹² + x⁵ + 1                           HDLC
ITU-32   x³² + x²⁶ + x²³ + x²² + x¹⁶ + x¹² + x¹¹ + x¹⁰ + x⁸ + x⁷ + x⁵ + x⁴ + x² + x + 1   LANs

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 5

It is obvious that we cannot choose x (binary 10) or x² + x (binary 110) as the polynomial because both are divisible by x. However, we can choose x + 1 (binary 11) because it is not divisible by x, but is divisible by x + 1. We can also choose x² + 1 (binary 101) because it is divisible by x + 1 (binary division).

Example 6

The CRC-12 polynomial
  x¹² + x¹¹ + x³ + x + 1
which has a degree of 12, will detect all burst errors affecting an odd number of bits, will detect all burst errors with a length less than or equal to 12, and will detect, 99.97 percent of the time, burst errors with a length of 12 or more.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.12 Checksum

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.13 Data unit and checksum

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Note
:
The sender follows these steps:

•The unit is divided into k sections, each of n bits.

•All sections are added using one’s complement to get the sum.

•The sum is complemented and becomes the checksum.

•The checksum is sent with the data.

Note
:
The receiver follows these steps:

•The unit is divided into k sections, each of n bits.

•All sections are added using one’s complement to get the sum.

•The sum is complemented.

•If the result is zero, the data are accepted: otherwise, rejected.
McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004
Note:

The receiver follows these steps:


•The unit is divided into k sections, each of n bits.

•All sections are added using one’s complement to get the sum.

•The sum is complemented.

•If the result is zero, the data are accepted: otherwise, rejected.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 7
Suppose the following block of 16 bits is to be sent using a
checksum of 8 bits.
10101001 00111001
The numbers are added using one’s complement
10101001
00111001
------------
Sum 11100010
Checksum 00011101
The pattern sent is 10101001 00111001 00011101

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 8
Now suppose the receiver receives the pattern sent in Example 7 and there is no
error.
10101001 00111001 00011101
When the receiver adds the three sections, it will get all 1s, which, after
complementing, is all 0s and shows that there is no error.
10101001
00111001
00011101
Sum 11111111
Complement 00000000 means that the pattern is OK.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Example 9
Now suppose there is a burst error of length 5 that affects 4 bits.
10101111 11111001 00011101
When the receiver adds the three sections, it gets
10101111
11111001
00011101
Partial Sum 1 11000101
Carry 1
Sum 11000110
Complement 00111001 the pattern is corrupted.

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.3 Correction

Retransmission

Forward Error Correction

Burst Error Correction

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Table 10.2 Data and redundancy bits
Number of Number of Total
data bits redundancy bits bits
m r m+r
1 2 3
2 3 5
3 3 6

4 3 7

5 4 9

6 4 10

7 4 11

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.14 Positions of redundancy bits in
Hamming code

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.15 Redundancy bits calculation

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.16 Example of redundancy bit
calculation

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.17 Error detection using Hamming
code

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


10.18 Burst error correction example

McGraw-Hill ©The McGraw-Hill Companies, Inc., 2004


Chapter 11

Chapter 11 : Error-Control
Coding

Lecture edition by K.Heikkinen

EE576 Dr. Kousa Linear Block Codes 517


Chapter 11 goals
• To understand error-correcting codes in use theorems and their
principles
– block codes, convolutional codes, etc.
Chapter 11 contents
• Introduction
• Discrete Memoryless Channels
• Linear Block Codes
• Cyclic Codes
• Maximum Likelihood decoding of Convolutional Codes
• Trellis-Coded Modulation
• Coding for Compound-Error Channels

EE576 Dr. Kousa Linear Block Codes 518


Introduction

• Cost-effective facility for transmitting information at a


rate and a level of reliability and quality
– signal energy per bit-to-noise power density ratio
– achieved practically via error-control coding
• Error-control methods
• Error-correcting codes

EE576 Dr. Kousa Linear Block Codes 519


Discrete Memoryless Channels

EE576 Dr. Kousa Linear Block Codes 520


Discrete Memoryless Channels

• Discrete memoryless channles (see fig. 11.1)


described by the set of transition probabilities
– in simplest form binary coding [0,1] is used of which
BSC is an appropriate example
– channel noise modelled as additive white gaussian
noise channel
• the two above are so called hard-decision
decoding
– other solutions, so called soft-decision coding

EE576 Dr. Kousa Linear Block Codes 521


Linear Block Codes

• A code is said to be linear if any two words in the code can be added in modulo-2 arithmetic to produce a third code word in the code
• Linear block code has n bits of which k bits are
always identical to the message sequence
• Then n-k bits are computed from the message
bits in accordance with a prescribed encoding
rule that determines the mathematical structure
of the code
– these bits are also called parity bits

EE576 Dr. Kousa Linear Block Codes 522


Linear Block Codes

• Normally code equations are written in the form of


matrixes (1-by-k message vector)
– P is the k-by-(n-k) coefficient matrix
– I (of k) is the k-by-k identity matrix
– G is k-by-n generator matrix
• Another way to show the relationship between the
message bits and parity bits
– H is parity-check matrix

EE576 Dr. Kousa Linear Block Codes 523


Linear Block Codes

• In syndrome decoding the generator matrix (G) is used in the encoding at the transmitter and the parity-check matrix (H) at the receiver
  – if a bit is corrupted, r = c + e, and this leads to two important properties
    • the syndrome depends only on the error pattern, not on the transmitted code word
    • all error patterns that differ by a code word have the same syndrome

EE576 Dr. Kousa Linear Block Codes 524


Linear Block Codes

• The Hamming distance (or minimum) can be used to


calculate the difference of the code words
• We have a certain number (2^k) of code vectors, whose subsets constitute a standard array for an (n,k) linear block code
• We pick the error pattern of a given code
– coset leaders are the most obvious error patterns

EE576 Dr. Kousa Linear Block Codes 525


Linear Block Codes

• Example : Let us have H as parity-check matrix which


vectors are
– (1110), (0101), (0011), (0001), (1000), (1111)
– code generator G gives us following codes (c) :
• 000000, 100101,111010, 011111
– Let us find n, k and n-k ?
– what will we find if we multiply Hc ?

EE576 Dr. Kousa Linear Block Codes 526


Linear Block Codes

Examples of (7,4) Hamming code


words and error patterns

EE576 Dr. Kousa Linear Block Codes 527


Cyclic Codes

• Cyclic codes form subclass of linear block codes


• A binary code is said to be cyclic if it exhibits the two
following properties
– the sum of any two code words in the code is also a code
word (linearity)
• this means that we speak linear block codes
– any cyclic shift of a code word in the code is also a code
word (cyclic)
• mathematically in polynomial notation

EE576 Dr. Kousa Linear Block Codes 528


Cyclic Codes

• The polynomial plays major role in the generation of cyclic


codes
• If we have a generator polynomial g(x) of an (n,k) cyclic code
with certain k polynomials, we can create the generator
matrix (G)
• Syndrome polynomial of the received code word corresponds
error polynomial

EE576 Dr. Kousa Linear Block Codes 529


Cyclic Codes

EE576 Dr. Kousa Linear Block Codes 530


Cyclic Codes
• Example : A (7,4) cyclic code that has a block length of 7, let us find the
polynomials to generate the code (see example 3 on the book)
– find code polynomials
– find generation matrix (G) and parity-check matrix (H)
• Other remarkable cyclic codes
– Cyclic redundancy check (CRC) codes
– Bose-Chaudhuri-Hocquenghem (BCH) codes
– Reed-Solomon codes

EE576 Dr. Kousa Linear Block Codes 531


Convolutional Codes
• Convolutional codes work in serial manner, which suits better to such kind of
applications
• The encoder of a convolutional code can be viewed as a finite-state machine that
consists of an M-stage shift register with prescribed connections to n modulo-2
adders, and a multiplexer that serializesthe outputs of the address

• Convolutional codes are portrayed in graphical form by using three different


diagrams
– Code Tree
– Trellis
– State Diagram

EE576 Dr. Kousa Linear Block Codes 532


Maximum Likelihood Decoding of
Convolutional Codes
• We can create log-likelihood function to a convolutional code
that have a certain hamming distance
• The book presents an example algorithm (Viterbi)
– The Viterbi algorithm is a maximum-likelihood decoder, which is optimum for an AWGN channel (see fig. 11.17)
• initialisation
• computation step
• final step

EE576 Dr. Kousa Linear Block Codes 533
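A compact hard-decision Viterbi sketch for the standard K = 3, rate-1/2 code with octal generators (7, 5) (an assumed example code, not necessarily the book's):

G = [0b111, 0b101]   # generator taps

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(rx):
    INF = float("inf")
    metric = [0, INF, INF, INF]                # start in state 0
    paths = [[] for _ in range(4)]
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                out = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(out, r))
                if m < new_metric[ns]:         # keep the survivor path
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[metric.index(min(metric))]

msg = [1, 0, 1, 1, 0, 0]   # two tail zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                 # one channel error
print(viterbi(rx))         # recovers [1, 0, 1, 1, 0, 0]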


Trellis-Coded Modulation

• Here coding is described as a process of imposing certain


patterns on the transmitted signal
• Trellis-coded modulation has three features
– Amount of signal point is larger than what is required,
therefore allowing redundancy without sacrificing
bandwidth
– Convolutional coding is used to introduce a certain dependency between successive signal points
– Soft-decision decoding is done in the receiver

EE576 Dr. Kousa Linear Block Codes 534


Coding for Compound-Error
Channels
• Compound-error channels exhibit independent and burst
error statistics (e.g. PSTN channels, radio channels)
• Error-protection methods (ARQ, FEC)

EE576 Dr. Kousa Linear Block Codes 535


The Logical Domain

Chapter 6

Error Control in the


Binary Channel

Fall 2007 Telecommunications Technology


EE576 Dr. Kousa Linear Block Codes 536
The Exclusive OR
• If A and B are binary variables, the XOR of A and B is defined as:
  0 + 0 = 1 + 1 = 0
  0 + 1 = 1 + 0 = 1

  XOR truth table:
    A⊕B   B=0  B=1
    A=0    0    1
    A=1    1    0

  XOR with 1 complements a variable
• Note dij = w(xi XOR xj)

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 537
Hamming Distance

• The weight of a code word is the number of 1’s in it.


• The Hamming Distance between two code words is
equal to the number of digits in which they differ.
• The distance dij between xi = 1110010 and xj =
1011001 is 4.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 538
x1 = 1110010
x2 = 1100001
y = 1100010

d(x1, y) = w(1110010 + 1100010)
         = w(0010000)
         = 1

d(x2, y) = w(1100001 + 1100010)
         = w(0000011)
         = 2

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 539539
The Binary Symmetric Channel

1- p
x0 y0
p

p
x1 y1
1- p

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 540540
BSC and Hamming Distance

• If x is the input code word to a BSC, and y is the


output, y = x + n, where the noise vector n has a 1
wherever an error has occurred:
x= 1110010, n = 0010000 y = 1100010
• An error causes a distance of 1 between input and
output code words.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 541541
Geometric Point-of-View

Code set = all 8 three-digit words (the cube vertices 000 through 111);
minimum distance = 1.

In the BSC, any error changes a code word into another code word
Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 542542
Reduced Rate Source

Code set = four of the eight three-digit words, chosen so that the
minimum distance = 2.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 543543
Error Correction and Detection
Capability
• The distance between two code words is the
number of places in which they differ.
• dmin is the distance between the two code words
which are closest together.
• A code with minimum distance dmin may be
used
– to detect dmin - 1 errors, or
– to correct floor((dmin - 1)/2) errors (a short sketch follows).
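A minimal sketch of these two bounds; note the floor in the correction bound:

def capability(d_min: int) -> tuple[int, int]:
    detect = d_min - 1            # guaranteed detectable errors
    correct = (d_min - 1) // 2    # guaranteed correctable errors
    return detect, correct

for d in (2, 3, 4, 7):
    print(d, capability(d))
# e.g. d_min = 3 -> detects 2 errors, corrects 1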

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 544544
x1 --1--> y <--2-- x2   (y is at distance 1 from x1 and distance 2 from x2)

The probability that y came from x1 = Pr{1 error} = p q^6

The probability that y came from x2 = Pr{2 errors} = p^2 q^5

p(y|x1) > p(y|x2) whenever p < 1/2

Received word is more likely to have come from the closest code word

Decode the received vector as the closest code word => correction

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 545545
Error Detection and Correction
Shannon’s Noisy Channel Coding Theorem:
• To achieve error-free communications
– Reduce Rate below Capacity
– Add structured redundancy
• Increase the distance between code words

Error Correcting Codes


• Add redundancy in a structured way
• For example, add a single parity check digit
• Choose value of the appended digit to make number of 1’s even
(or odd)

Telecommunications Technology
Fall 2007
EE576 Dr. Kousa Linear Block Codes 546546
Parity Check Equation
Frame: m0 m1 m2 m3 c1

ck = sum of mi for i = 0 ... n-1

Addition is modulo 2:
+ | 0 1
0 | 0 1
1 | 1 0

EE576 Dr. Kousa Linear Block Codes 547


An (n,k) Group Code

Code word: x7 x6 x5 x4 x3 x2 x1 = m4 m3 m2 c3 m1 c2 c1

c1 = m1 + m2 + m4
c2 = m1 + m3 + m4
c3 = m2 + m3 + m4

m4 m3 m2 m1 c3 c2 c1
1  1  1  0  1  0  0
1  1  0  1  0  1  0
1  0  1  1  0  0  1
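A minimal Python sketch of this encoder, using the three parity equations and the codeword ordering from the table:

def encode(m4: int, m3: int, m2: int, m1: int) -> list[int]:
    c1 = m1 ^ m2 ^ m4          # parity equations, XOR = modulo-2 addition
    c2 = m1 ^ m3 ^ m4
    c3 = m2 ^ m3 ^ m4
    return [m4, m3, m2, m1, c3, c2, c1]

print(encode(1, 0, 1, 1))      # -> [1, 0, 1, 1, 0, 0, 1]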
EE576 Dr. Kousa Linear Block Codes 548
Parity Check and Generator Matrices

    1 0 0 0 1 1 1
H = 0 1 0 0 1 1 0
    0 0 1 0 1 0 1
    0 0 0 1 0 1 1

    1 1 1 0 1 0 0
G = 1 1 0 1 0 1 0
    1 0 1 1 0 0 1

(Note this deck's convention: H is the 4x7 generator matrix and G is the
3x7 parity-check matrix, so that GH^T = 0.)
EE576 Dr. Kousa Linear Block Codes 549
Codes

Message m = m3 m2 m1 m0
Code word x = x6 x5 x4 x3 x2 x1 x0

Transmitted code word: x = mH
Received word: y = x + e

The error event e has a 1 wherever an error has occurred.
EE576 Dr. Kousa Linear Block Codes 550


Syndrome Calculation

y = mH + e
s = yG^T
s = mHG^T + eG^T
s = eG^T        (since HG^T = 0)
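A short Python sketch of this chain under the deck's convention (H generates, G checks); the message and error vector are arbitrary examples:

H = [[1,0,0,0,1,1,1],
     [0,1,0,0,1,1,0],
     [0,0,1,0,1,0,1],
     [0,0,0,1,0,1,1]]
G = [[1,1,1,0,1,0,0],
     [1,1,0,1,0,1,0],
     [1,0,1,1,0,0,1]]

def mat_vec(M, v):                        # matrix-vector product over GF(2)
    return [sum(a & b for a, b in zip(row, v)) % 2 for row in M]

m = [1, 0, 1, 1]
x = mat_vec(list(zip(*H)), m)             # x = m.H via the transpose
e = [0, 0, 1, 0, 0, 0, 0]                 # single channel error in position 3
y = [xi ^ ei for xi, ei in zip(x, e)]
print(mat_vec(G, y))                      # s = y.G^T = [1, 0, 1], the 3rd column
                                          # of G, so the syndrome locates the error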
EE576 Dr. Kousa Linear Block Codes 551
Hamming Code
• In the example, n = 7 and k = 4.
• There are r = n - k = 3 parity check digits.
• This code has a minimum distance of 3, thus all single errors can be
corrected.
• If no error occurs the syndrome is 000.
• If an error occurs, the syndrome is another (nonzero) 3-bit sequence.
• Each single error gives a unique syndrome.
• Any single error is more likely to occur than any double, triple, or higher-
order error.
• Any non-zero syndrome is more likely to have occurred because of the
single error that could cause it than for any other reason.
• Therefore, deciding that the single error occurred is most likely the
correct decision.
• Hence, the term error correction.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 552
Properties of Binomial Variables
• Given n bits with a probability of error p and a probability of no error q =
1 - p:
– The probability of no errors is q^n
– The probability of a particular single-error pattern is p q^(n-1)
– The probability of a particular pattern of k errors is p^k q^(n-k)
• It is straightforward to show that if p < 1/2 then any k-error event is more
likely than any (k+1)-error event.
– The most likely number of errors is np. When p is very low the
most likely error event is NO ERRORS; single errors are next most
likely.
• Single-error-correcting codes can be very effective!

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 553
Hamming Codes
• Hamming codes are (n,k) group codes where n = k + r is the
length of the code words, k is the number of data bits, r is the
number of parity check bits, and 2^r = n + 1.
• Typical codes are
• (7,4), r = 3 (2^3 = 8)
• (15,11), r = 4 (2^4 = 16)
• (63,57), r = 6
• Hamming codes are perfect, single-error-correcting codes.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 554
Hamming Code Performance
• Let pu be the probability of bit error without coding and pc with
coding.
• The probability of a word error without coding is 1 - (1 - pu)^n
• The probability of a word error using a (7,4) Hamming Code is:
1 - (1 - pc)^7 - 7 pc (1 - pc)^6
• pu is the uncoded channel error probability.
• pc is the probability of bit error when Eb/No is reduced to 4/7 of
that at which pu was calculated.
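A numeric sketch of the two expressions; evaluating both at the same probability is only for illustration, since in the slide pc comes from the reduced Eb/No (4/7 of the uncoded value):

def p_word_uncoded(p_u: float, n: int = 7) -> float:
    return 1 - (1 - p_u) ** n

def p_word_hamming74(p_c: float) -> float:
    # word error = two or more bit errors in the 7-bit block
    return 1 - (1 - p_c) ** 7 - 7 * p_c * (1 - p_c) ** 6

p = 1e-3
print(p_word_uncoded(p))       # ~7.0e-3
print(p_word_hamming74(p))     # ~2.1e-5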

EE576 Dr. Kousa Linear Block Codes 555


Cyclic Codes
• Cyclic codes are algebraic group codes in which the code words
form an ideal.
• If the bits are considered coefficients of a polynomial, every code
word is divisible by a generator polynomial.
• The rows of the generator matrix are cyclic permutations of one
another.

An ideal here is a set of code polynomials closed under addition and under multiplication by any polynomial in the ring.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 556
Cyclic Code Generation

A message M = 110011 can be expressed as the polynomial
M(X) = X^5 + X^4 + X + 1
(code digits are the coefficients of the polynomial).

With a generator polynomial P(X) = X^4 + X^3 + 1, the code word can be
generated as: T(X) = X^n M(X) + R(X),

where R(X) is the remainder when X^n M(X) is divided by P(X), i.e.

X^n M(X)/P(X) = Q(X) + R(X)/P(X),

where n is the degree of P(X).
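A short Python sketch verifying this worked example, with polynomials stored as integers (bit i = coefficient of X^i):

def poly_mod(dividend: int, divisor: int) -> int:
    while dividend.bit_length() >= divisor.bit_length():
        shift = dividend.bit_length() - divisor.bit_length()
        dividend ^= divisor << shift       # subtract (XOR) the shifted divisor
    return dividend

P = 0b11001          # X^4 + X^3 + 1
M = 0b110011         # M(X) = 1 + X + X^4 + X^5, bit 0 = constant term
n = 4                # degree of P(X)

R = poly_mod(M << n, P)
T = (M << n) ^ R                            # T(X) = X^n M(X) + R(X)
print(bin(R))                               # 0b1001 -> R(X) = 1 + X^3
print(poly_mod(T, P) == 0)                  # True: T(X) is divisible by P(X)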

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 557
Applications of Cyclic Codes
• Cyclic codes (or cyclic redundancy check CRC) are used routinely to
detect errors in data transmission.
• Typical codes are the
– CRC-16: P(X) = X16 + X15 + X2 + 1
– CRC-CCITT: P(X) = X16 + X12 + X5 + 1

Cyclic Code Capabilities


• A cyclic code will detect:
– All single-bit errors.
– All double-bit errors.
– Any odd number of errors (when P(X) has the factor 1 + X).
– Any burst error whose length is no greater than the number of
check bits (the degree of P(X)).
– Most longer burst errors.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 558
Convolutional Codes

• Block codes are memoryless codes - each output depends only on the current k-bit block
being coded.
• The bits in a convolutional code depend on previous source bits.
• The source bits are convolved with the impulse response of a filter.

Why convolutional codes? Because the code set grows exponentially with code length -
the hypothesis being that the Rate could be maintained as n grew, unlike all block codes
- the Wozencraft contribution.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 559
Convolutional Coder
Input 1 1 0 1 0 1 ... -> Encoded output 11 10 11 01 01 01 ...

Shift register Xi -> Xi-1 -> Xi-2, with modulo-2 adders forming
outputs O1 and O2.

• Rate 1/2 convolutional coder
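A minimal Python sketch of this encoder; the taps O1 = Xi XOR Xi-2 and O2 = Xi XOR Xi-1 are inferred from the input/output example above, not taken from the original figure:

def conv_encode(bits):
    x1 = x2 = 0                        # shift-register contents Xi-1, Xi-2
    out = []
    for x in bits:
        out.append((x ^ x2, x ^ x1))   # output pair (O1, O2)
        x1, x2 = x, x1                 # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 0, 1]))
# [(1, 1), (1, 0), (1, 1), (0, 1), (0, 1), (0, 1)] -> 11 10 11 01 01 01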

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 560
Trellis Diagram

States 00, 01, 10, 11; from each state one branch is taken when the
input is 0 and the other when it is 1.
Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 561
Input 1 1 0 1 0 1 encodes to 11 10 11 01 01 01, tracing a single path
through the trellis states 00, 01, 10, 11.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 562
Decoding
Trellis with states 00, 01, 10, 11.

Insert an error in a sequence of transmitted bits and try to decode it.

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 563
Sequential Decoding
• Decoder determines the most likely output sequence.
• Compares the received sequence with all possible sequences that might have
been obtained with the coder
• Selects the sequence that is closest to the received sequence.

Viterbi Decoding
• Choose a decoding-window width b in excess of the block length.
• Compute all code words of length b and compare each to the received code
word
• Select that code word closest to the received word.
• Re-encode the decoded frame and subtract from the received word.
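As a concrete illustration of maximum-likelihood sequence estimation (a sketch, not the book's listing), here is a compact hard-decision Viterbi decoder for the rate-1/2 encoder used earlier (O1 = Xi XOR Xi-2, O2 = Xi XOR Xi-1; state = (Xi-1, Xi-2), encoder assumed to start in state (0, 0)):

def viterbi(received):
    # received: list of (o1, o2) pairs
    INF = float("inf")
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    cost = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}
    for r1, r2 in received:
        new_cost = {s: INF for s in states}
        new_path = {}
        for (x1, x2) in states:               # previous state
            if cost[(x1, x2)] == INF:
                continue
            for x in (0, 1):                  # hypothesised input bit
                o1, o2 = x ^ x2, x ^ x1       # expected channel pair
                c = cost[(x1, x2)] + (o1 != r1) + (o2 != r2)
                nxt = (x, x1)
                if c < new_cost[nxt]:
                    new_cost[nxt] = c
                    new_path[nxt] = path[(x1, x2)] + [x]
        cost, path = new_cost, new_path
    best = min(cost, key=cost.get)            # minimum-distance survivor
    return path[best]

rx = [(1, 1), (1, 0), (0, 1), (0, 1), (0, 1), (0, 1)]  # 11 10 11 ... with one flipped bit
print(viterbi(rx))                             # -> [1, 1, 0, 1, 0, 1]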

Turbo Codes
• Turbo codes were invented by Berrou, Glavieux and Thitimajshima in 1993
• Turbo codes achieve excellent error correction capabilities at rates very close to
the Shannon bound
• Turbo codes are concatenated or product codes
• Iterative decoding is used.
Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 564
Interleaved Concatenated Code

Information

Inner checks on information

Outer checks on information

Checks on checks

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 565
Coding and Decoding

Inner Coder Outer Coder


(Block) (Convolutional)

Outer Decoder Inner Decoder

Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 566
Turbo Code Performance

Spectral efficiency = bits per second per hertz


Telecommunications Technology
EE576 Dr. Kousa Linear Block Codes 567
The Problem: Noise

Information Source -> Transmitter -> (Signal ... Received Signal) -> Receiver -> Destination

Transmitted Message = [1 1 1 1]; Received Message = [1 1 0 1]
Noise Source: Noise = [0 0 1 0]

EE576 Dr. Kousa Linear Block Codes 568


Poor solutions

• Repeats:
Data = [1 1 1 1]
Message = [1 1 1 1] [1 1 1 1] [1 1 1 1]

• Single Checksum:
Truth table:
A B | X-OR
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

General form:
Data = [1 1 1 1]
Message = [1 1 1 1 0]

EE576 Dr. Kousa Linear Block Codes 569


Why they are poor

Shannon Efficiency:
C = W log2(1 + S/N)
C is the channel capacity, W is the raw channel capacity (bandwidth),
S/N is the signal-to-noise ratio.

Repeat 3 times:
• This divides W by 3.
• It divides overall capacity by at least a factor of 3x.

Single Checksum:
• Allows an error to be detected but requires the message to be
discarded and resent.
• Each error reduces the channel capacity by at least a factor of 2
because of the thrown-away message.

EE576 Dr. Kousa Linear Block Codes 570


Hamming's Solution

Encoding: multiple checksums.

Message = [a b c d]           Message = [1 0 1 0]
r = (a+b+d) mod 2             r = (1+0+0) mod 2 = 1
s = (a+b+c) mod 2             s = (1+0+1) mod 2 = 0
t = (b+c+d) mod 2             t = (0+1+0) mod 2 = 1

Code = [r s a t b c d]        Code = [1 0 1 1 0 1 0]

EE576 Dr. Kousa Linear Block Codes 571


Simulation

Stochastic simulation (Fig 1: Error Detection):
• 100,000 iterations
• Add errors to (7,4) data
• No repeat randoms
• Measure error detection

Results (percent of errors detected):
• One error: 100%
• Two errors: 100%
• Three errors: 83.43%
• Four errors: 79.76%

EE576 Dr. Kousa Linear Block Codes 572


How it works: 3 dots

Only 3 possible words: A, B, C; distance increment = 1.
One excluded state (B); two valid code words (A and C).
It is really a checksum:
• Single error detection
• No error correction

This is a graphic representation of the "Hamming Distance".
EE576 Dr. Kousa Linear Block Codes 573
Hamming Distance

Definition:
The number of elements that need to be changed (corrupted) to
turn one codeword into another.

The Hamming distance from:
• [0101] to [0110] is 2 bits
• [1011101] to [1001001] is 2 bits
• "butter" to "ladder" is 4 characters
• "roses" to "toned" is 3 characters

EE576 Dr. Kousa Linear Block Codes 574


Another Dot

The code space is now 4. The Hamming distance is still 1.

Allows:
• Error DETECTION for Hamming distance = 1.
• Error CORRECTION for Hamming distance = 1.
For Hamming distances greater than 1, an error gives a false correction.
EE576 Dr. Kousa Linear Block Codes 575
Even More Dots

Allows:
• Error DETECTION for Hamming distance = 2.
• Error CORRECTION for Hamming distance = 1.
For Hamming distances greater than 2, an error gives a false correction.
For a Hamming distance of 2 an error is detected, but it cannot be
corrected.

EE576 Dr. Kousa Linear Block Codes 576


Multi-dimensional Codes

Code space:
• 2-dimensional
• 5 element states
Circle packing makes more efficient use of the code-space.

EE576 Dr. Kousa Linear Block Codes 577


Cannon Balls

Efficient circle packing is the same as efficient 2-d code spacing;
efficient sphere packing is the same as efficient 3-d code spacing.

Efficient n-dimensional sphere packing is the same as n-code spacing.

• http://wikisource.org/wiki/Cannonball_stacking
• http://mathworld.wolfram.com/SpherePacking.html
EE576 Dr. Kousa Linear Block Codes 578
More on Codes
• Hamming (11,7)
• Golay Codes
• Convolutional Codes
• Reed-Solomon Error Correction
• Turbo Codes
• Digital Fountain Codes

An Example
We will
• Encode a message
• Add noise to the transmission
• Detect the error
• Repair the error
EE576 Dr. Kousa Linear Block Codes 579
Encoding the message

To encode our message we multiply our message by the matrix

    1 0 0 0 0 1 1
H = 0 1 0 0 1 0 1
    0 0 1 0 1 1 0
    0 0 0 1 1 1 1

code = message · H

where multiplication is the logical AND and addition is the logical XOR.

But why? You can verify that:
Hamming[1 0 0 0] = [1 0 0 0 0 1 1]
Hamming[0 1 0 0] = [0 1 0 0 1 0 1]
Hamming[0 0 1 0] = [0 0 1 0 1 1 0]
Hamming[0 0 0 1] = [0 0 0 1 1 1 1]

EE576 Dr. Kousa Linear Block Codes 580


Add noise

• If our message is Message = [0 1 1 0], multiplying by H yields
Code = [0 1 1 0 0 1 1]

• Let's add an error, so pick a digit to mutate:
Code => [0 1 0 0 0 1 1]

EE576 Dr. Kousa Linear Block Codes 581


Testing the message

• We receive the erroneous string: Code = [0 1 0 0 0 1 1]
• We test it: Decoder · Code^T = [0 1 1], and indeed it has an error.

The matrix used to decode is:

          0 0 0 1 1 1 1
Decoder = 0 1 1 0 0 1 1
          1 0 1 0 1 0 1

To test if a code is valid: does Decoder · Code^T = [0 0 0]?
– Yes means it is valid
– No means it has error(s)

EE576 Dr. Kousa Linear Block Codes 582


Repairing the message

• To repair the code we find the column in the decoder matrix whose
elements match the result of the test vector; [0 1 1] is the third
column, so the error is in the third element of our code.
• We then flip that element.
• Our repaired code is [0 1 1 0 0 1 1].

Decoding the message

We trim our received code by 3 elements and we have our original
message:
[0 1 1 0 0 1 1] => [0 1 1 0]
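A short Python sketch (an illustrative reconstruction, not the original author's program) reproducing this worked example end to end:

H = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
DEC = [[0,0,0,1,1,1,1],          # column j is the binary representation of j
       [0,1,1,0,0,1,1],
       [1,0,1,0,1,0,1]]

def mat_vec(M, v):               # matrix-vector product over GF(2)
    return [sum(a & b for a, b in zip(row, v)) % 2 for row in M]

msg = [0, 1, 1, 0]
code = mat_vec(list(zip(*H)), msg)       # encode: [0, 1, 1, 0, 0, 1, 1]
code[2] ^= 1                             # mutate the third digit
syn = mat_vec(DEC, code)                 # test: syndrome [0, 1, 1]
pos = int("".join(map(str, syn)), 2)     # read as binary -> position 3
if pos:
    code[pos - 1] ^= 1                   # repair: flip the erroneous element
print(code[:4])                          # trimmed message: [0, 1, 1, 0]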

EE576 Dr. Kousa Linear Block Codes 583


Channel Coding in
IEEE802.16e
Student: Po-Sheng Wu
Advisor: David W. Lin

EE576 Dr. Kousa Linear Block Codes 584


Outline
• Overview
• RS code
• Convolutional code
• LDPC code
• Future Work

Overview

EE576 Dr. Kousa Linear Block Codes 585


RS code

• The RS code in 802.16a is derived from a systematic
RS (N=255, K=239, T=8) code on GF(2^8)

EE576 Dr. Kousa Linear Block Codes 586


RS code

EE576 Dr. Kousa Linear Block Codes 587


RS code

• This code is then shortened and punctured to enable
variable block size and variable error-correction
capability.
• Shortening: (n, k) -> (n-l, k-l)
• Puncturing: (n, k) -> (n-l, k)
• In general, the generator polynomial is
g(x) = (x + a^h)(x + a^(h+1)) ... (x + a^(h+2T-1));
in IEEE802.16a, h = 0

EE576 Dr. Kousa Linear Block Codes 588


RS code

• They are shortened to K' data bytes and punctured to permit T'
bytes to be corrected.
• When a block is shortened to K', the first 239 - K' bytes of the
encoder input shall be zero.
• When a codeword is punctured to permit T' bytes to be
corrected, only the first 2T' of the total 16 parity bytes shall be
employed.
• When shortened and punctured to (48, 36, 6), the first 203
(= 239 - 36) information bytes are assigned 0.
• And only the first 12 (= 2 x 6) bytes of R(X) will be employed in the
codeword.

EE576 Dr. Kousa Linear Block Codes 589


Shortened and Punctured

EE576 Dr. Kousa Linear Block Codes 590


RS code

EE576 Dr. Kousa Linear Block Codes 591


RS code

• Decoding: Euclid's (or the Berlekamp-Massey) algorithm is a
common decoding algorithm for RS codes.
• Four steps:
– compute the syndrome values
– compute the error-location polynomial
– compute the error locations
– compute the error values

EE576 Dr. Kousa Linear Block Codes 592


Convolutional code

• Each RS block is encoded by a binary convolutional
encoder, which has a native rate of 1/2 and a constraint
length equal to 7.

EE576 Dr. Kousa Linear Block Codes 593


Convolutional code

• "1" means a transmitted bit and "0" denotes a
removed bit; note that this puncturing changes the code rate
from that of the native convolutional code with rate 1/2.

EE576 Dr. Kousa Linear Block Codes 594


Convolutional code

• Decoding: Viterbi algorithm

EE576 Dr. Kousa Linear Block Codes 595


Convolutional code

• The convolutional code in IEEE802.16a needs to be
terminated in a block, and thus becomes a block code.
• Three methods achieve this termination:
– Direct truncation
– Zero tail
– Tail biting

EE576 Dr. Kousa Linear Block Codes 596


RS-CC code
• Outer code: RS code
• Inner code: convolution code
• Input data streams are divided into RS blocks, then each RS
block is encoded by a tail-biting convolutional code.
• Between the convolutional coder and the modulator is a bit
interleaver.

EE576 Dr. Kousa Linear Block Codes 597


LDPC code

• Low-density parity-check matrix
• LDPC codes are also linear codes. The codewords can be
expressed as the null space of H: Hx = 0
• Low density enables efficient decoding
– Better decoding performance than Turbo codes
– Close to the Shannon limit at long block lengths

EE576 Dr. Kousa Linear Block Codes 598


LDPC code

• n is the length of the code, m is the number of parity
check bits

EE576 Dr. Kousa Linear Block Codes 599


LDPC code

• Base model

EE576 Dr. Kousa Linear Block Codes 600


LDPC code
• If p(f,i,j) = -1, replace the entry by the z x z zero matrix;
otherwise p(f,i,j) is the circular shift size:

p(f,i,j) = p(i,j)                       if p(i,j) <= 0
p(f,i,j) = floor( p(i,j) * z_f / z_0 )  if p(i,j) > 0

EE576 Dr. Kousa Linear Block Codes 601


LDPC code
• Encoding
[u p1 p2]

• Decoding
– Tanner Graph
– Sum Product Algorithm

EE576 Dr. Kousa Linear Block Codes 602


LDPC code
• Tanner Graph

EE576 Dr. Kousa Linear Block Codes 603


LDPC code
• Sum Product Algorithm

EE576 Dr. Kousa Linear Block Codes 604


LDPC code

EE576 Dr. Kousa Linear Block Codes 605


LDPC code

Future Work
• Realize these algorithms in software
• Find decoding algorithms that speed up the process
EE576 Dr. Kousa Linear Block Codes 606
Chapter 11

Data Link
Control
and
Protocols
EE576 Dr. Kousa Linear Block Codes 607
11.1 Flow and Error Control

Flow Control
Flow control refers to a set of procedures used to restrict the amount of
data that the sender can send before waiting for acknowledgment.

Error Control
Error control in the data link layer is based on automatic repeat
request, which is the retransmission of data.

EE576 Dr. Kousa Linear Block Codes 608


11.2 Stop-and-Wait ARQ

Operation
Bidirectional Transmission

EE576 Dr. Kousa Linear Block Codes 609


11.1 Normal operation

EE576 Dr. Kousa Linear Block Codes 610


11.2 Stop-and-Wait ARQ, lost frame

EE576 Dr. Kousa Linear Block Codes 611


11.3 Stop-and-Wait ARQ, lost ACK frame

EE576 Dr. Kousa Linear Block Codes 612


Note:

In Stop-and-Wait ARQ, numbering


frames prevents the retaining of
duplicate frames.

EE576 Dr. Kousa Linear Block Codes 613


11.4 Stop-and-Wait ARQ, delayed ACK

EE576 Dr. Kousa Linear Block Codes 614


Note:

Numbered acknowledgments are


needed if an acknowledgment is
delayed and the next frame is lost.

EE576 Dr. Kousa Linear Block Codes 615


11.5 Piggybacking

EE576 Dr. Kousa Linear Block Codes 616


11.3 Go-Back-N ARQ

Sequence Number

Sender and Receiver Sliding Window


Control Variables and Timers
Acknowledgment

Resending Frames

Operation
EE576 Dr. Kousa Linear Block Codes 617
11.6 Sender sliding window

EE576 Dr. Kousa Linear Block Codes 618


11.7 Receiver sliding window

EE576 Dr. Kousa Linear Block Codes 619


11.8 Control variables

EE576 Dr. Kousa Linear Block Codes 620


11.9 Go-Back-N ARQ, normal operation

EE576 Dr. Kousa Linear Block Codes 621


11.10 Go-Back-N ARQ, lost frame

EE576 Dr. Kousa Linear Block Codes 622


11.11 Go-Back-N ARQ: sender window
size

EE576 Dr. Kousa Linear Block Codes 623


Note:

In Go-Back-N ARQ, the size of the


sender window must be less than 2m;
the size of the receiver window is
always 1.

EE576 Dr. Kousa Linear Block Codes 624


11.4 Selective-Repeat ARQ

Sender and Receiver Windows

Operation

Sender Window Size

Bidirectional Transmission

Pipelining

EE576 Dr. Kousa Linear Block Codes 625


11.12 Selective Repeat ARQ, sender and receiver
windows

EE576 Dr. Kousa Linear Block Codes 626


11.13 Selective Repeat ARQ, lost frame

EE576 Dr. Kousa Linear Block Codes 627


Note:

In Selective Repeat ARQ, the size of the
sender and receiver window must be at
most one-half of 2^m.

EE576 Dr. Kousa Linear Block Codes 628


11.14 Selective Repeat ARQ, sender
window size

EE576 Dr. Kousa Linear Block Codes 629


Example 1
In a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit
takes 20 ms to make a round trip. What is the bandwidth-delay product? If the
system data frames are 1000 bits in length, what is the utilization percentage of
the link?

Solution
The bandwidth-delay product is

1 x 10^6 x 20 x 10^-3 = 20,000 bits

The system can send 20,000 bits during the time it takes for the data to go from
the sender to the receiver and then back again. However, the system sends only
1000 bits. We can say that the link utilization is only 1000/20,000, or 5%. For
this reason, for a link with high bandwidth or long delay, use of Stop-and-Wait
ARQ wastes the capacity of the link.
EE576 Dr. Kousa Linear Block Codes 630
Example 2
What is the utilization percentage of the link in Example 1 if the link uses Go-
Back-N ARQ with a 15-frame sequence?

Solution
The bandwidth-delay product is still 20,000. The system can send up to 15
frames or 15,000 bits during a round trip. This means the utilization is
15,000/20,000, or 75 percent. Of course, if there are damaged frames, the
utilization percentage is much less because frames have to be resent.
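A tiny numeric sketch of both calculations:

bandwidth = 1e6           # bits per second
rtt = 20e-3               # round-trip time in seconds
bdp = bandwidth * rtt     # bandwidth-delay product = 20,000 bits

frame = 1000              # bits per data frame
print(frame / bdp)        # Stop-and-Wait: 0.05 -> 5% utilization
print(15 * frame / bdp)   # Go-Back-N, 15-frame window: 0.75 -> 75%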

EE576 Dr. Kousa Linear Block Codes 631


11.5 HDLC

Configurations and Transfer Modes

Frames

Frame Format

Examples

Data Transparency

EE576 Dr. Kousa Linear Block Codes 632


11.15 NRM

EE576 Dr. Kousa Linear Block Codes 633


11.16 ABM

EE576 Dr. Kousa Linear Block Codes 634


11.17 HDLC frame

EE576 Dr. Kousa Linear Block Codes 635


11.18 HDLC frame types

EE576 Dr. Kousa Linear Block Codes 636


11.19 I-frame

EE576 Dr. Kousa Linear Block Codes 637


11.20 S-frame control field in HDLC

EE576 Dr. Kousa Linear Block Codes 638


11.21 U-frame control field in HDLC

EE576 Dr. Kousa Linear Block Codes 639


Table 11.1 U-frame control command and response
Command/response Meaning
SNRM Set normal response mode
SNRME Set normal response mode (extended)
SABM Set asynchronous balanced mode
SABME Set asynchronous balanced mode (extended)
UP Unnumbered poll
UI Unnumbered information
UA Unnumbered acknowledgment
RD Request disconnect
DISC Disconnect
DM Disconnect mode
RIM Request information mode
SIM Set initialization mode
RSET Reset
XID Exchange ID
FRMR Frame reject
EE576 Dr. Kousa Linear Block Codes 640
Example 3
Figure 11.22 shows an exchange using piggybacking where there is no
error. Station A begins the exchange of information with an I-
frame numbered 0 followed by another I-frame numbered 1.
Station B piggybacks its acknowledgment of both frames onto an I-
frame of its own. Station B’s first I-frame is also numbered 0 [N(S)
field] and contains a 2 in its N(R) field, acknowledging the receipt
of A’s frames 1 and 0 and indicating that it expects frame 2 to
arrive next. Station B transmits its second and third I-frames
(numbered 1 and 2) before accepting further frames from station A.
Its N(R) information, therefore, has not changed: B frames 1 and 2
indicate that station B is still expecting A frame 2 to arrive next.

EE576 Dr. Kousa Linear Block Codes 641


11.22 Example 3

EE576 Dr. Kousa Linear Block Codes 642


Example 4
In Example 3, suppose frame 1 sent from station B to
station A has an error. Station A informs station B to
resend frames 1 and 2 (the system is using the Go-Back-
N mechanism). Station A sends a reject supervisory frame
to announce the error in frame 1. Figure 11.23 shows the
exchange.

EE576 Dr. Kousa Linear Block Codes 643


11.23 Example 4

EE576 Dr. Kousa Linear Block Codes 644


Note:

Bit stuffing is the process of adding one


extra 0 whenever there are five
consecutive 1s in the data so that the
receiver does not mistake the
data for a flag.

EE576 Dr. Kousa Linear Block Codes 645


11.24 Bit stuffing and removal

EE576 Dr. Kousa Linear Block Codes 646


11.25 Bit stuffing in HDLC

EE576 Dr. Kousa Linear Block Codes 647


Chapter 11
Data Link Control

EE576 Dr. Kousa Linear Block Codes 648


11.648
11-1 FRAMING

The data link layer needs to pack bits into frames, so


that each frame is distinguishable from another. Our
postal system practices a type of framing. The simple
act of inserting a letter into an envelope separates one
piece of information from another; the envelope serves
as the delimiter.
Topics discussed in this section:
Fixed-Size Framing
Variable-Size Framing

11.649
EE576 Dr. Kousa Linear Block Codes 649
Figure 11.1 A frame in a character-oriented protocol

11.650
EE576 Dr. Kousa Linear Block Codes 650
Figure 11.2 Byte stuffing and unstuffing

Byte stuffing is the process of adding 1 extra byte


whenever there is a flag or escape character in the text.

11.651
EE576 Dr. Kousa Linear Block Codes 651
Figure 11.3 A frame in a bit-oriented protocol

Bit stuffing is the process of adding one extra 0


whenever five consecutive 1s follow a 0 in the
data, so that the receiver does not mistake
the pattern 0111110 for a flag.

11.652
EE576 Dr. Kousa Linear Block Codes 652
Figure 11.4 Bit stuffing and unstuffing

11.653
EE576 Dr. Kousa Linear Block Codes 653
11-2 FLOW AND ERROR CONTROL

The most important responsibilities of the data link


layer are flow control and error control. Collectively,
these functions are known as data link control.

Topics discussed in this section:


Flow Control
Error Control

11.654
EE576 Dr. Kousa Linear Block Codes 654
Note

 Flow control refers to a set of procedures used to


restrict the amount of data that the sender can
send before waiting for acknowledgment.

 Error control in the data link layer is based on


automatic repeat request, which is the
retransmission of data.

11.655
EE576 Dr. Kousa Linear Block Codes 655
11-3 PROTOCOLS

Now let us see how the data link layer can combine
framing, flow control, and error control to achieve the
delivery of data from one node to another.

The protocols are normally implemented in software by


using one of the common programming languages.

To make our discussions language-free, we have written


in pseudocode a version of each protocol that
concentrates mostly on the procedure instead of delving
into the details of language rules.
11.656
EE576 Dr. Kousa Linear Block Codes 656
Figure 11.5 Taxonomy of protocols discussed in this chapter

11.657
EE576 Dr. Kousa Linear Block Codes 657
11-4 NOISELESS CHANNELS

Let us first assume we have an ideal channel in which


no frames are lost, duplicated, or corrupted. We
introduce two protocols for this type of channel.

Topics discussed in this section:


Simplest Protocol
Stop-and-Wait Protocol

11.658
EE576 Dr. Kousa Linear Block Codes 658
Figure 11.6 The design of the simplest protocol with no flow or error control

11.659
EE576 Dr. Kousa Linear Block Codes 659
Figure 11.7 Flow diagram for Example 11.1

Figure 11.7 shows an example of communication using this protocol. It is very simple. The sender sends a sequence of frames without even thinking about the receiver. To send three frames, three events occur at the sender site and three events at the receiver site. Note that the data frames are shown by tilted boxes; the height of the box defines the transmission time difference between the first bit and the last bit in the frame.

11.660
EE576 Dr. Kousa Linear Block Codes 660
Figure 11.8 Design of Stop-and-Wait Protocol

11.661
EE576 Dr. Kousa Linear Block Codes 661
Figure 11.9 Flow diagram for Example 11.2

Figure 11.9 shows an example of communication using this protocol. It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. Note that sending two frames in the protocol involves the sender in four events and the receiver in two events.

11.662
EE576 Dr. Kousa Linear Block Codes 662
11-5 NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea


of how to add flow control to its predecessor, noiseless
channels are nonexistent. We discuss three protocols in
this section that use error control.

Topics discussed in this section:


Stop-and-Wait Automatic Repeat Request
Go-Back-N Automatic Repeat Request
Selective Repeat Automatic Repeat Request

11.663
EE576 Dr. Kousa Linear Block Codes 663
Note
Error correction in Stop-and-Wait ARQ is done by keeping a copy of the
sent frame and retransmitting it when the timer expires.

 In Stop-and-Wait ARQ:
 we use sequence numbers to number the frames.
The sequence numbers are based on modulo-2
arithmetic.
 In Stop-and-Wait ARQ, the acknowledgment
number always announces in modulo-2 arithmetic the
sequence number of the next frame expected.
11.664
EE576 Dr. Kousa Linear Block Codes 664
Figure 11.11 Flow diagram for an example of Stop-and-Wait ARQ.

Frame 0 is sent and acknowledged. Frame 1 is lost and resent after the time-out. The resent frame 1 is acknowledged and the timer stops.

Frame 0 is sent and acknowledged, but the acknowledgment is lost. The sender has no idea if the frame or the acknowledgment is lost, so after the time-out, it resends frame 0, which is acknowledged.

11.665
EE576 Dr. Kousa Linear Block Codes 665
Example 11.4

Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the


line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is
the bandwidth-delay product? If the system data frames are 1000 bits
in length, what is the utilization percentage of the link?
Solution
The bandwidth-delay product is (1 x 10^6) x (20 x 10^-3) = 20,000 bits.
The system can send 20,000 bits during the time it takes for the data
to go from the sender to the receiver and then back again. However,
the system sends only 1000 bits. We can say that the link utilization is
only 1000/20,000, or 5 percent. For this reason, for a link with a
high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes
the capacity of the link.

11.666
EE576 Dr. Kousa Linear Block Codes 666
Example 11.5

What is the utilization percentage of the link in Example


11.4 if we have a protocol that can send up to 15 frames
before stopping and worrying about the
acknowledgments?

Solution
The bandwidth-delay product is still 20,000 bits. The
system can send up to 15 frames or 15,000 bits during a
round trip. This means the utilization is 15,000/20,000, or
75 percent. Of course, if there are damaged frames, the
utilization percentage is much less because frames have to
be resent.
11.667
EE576 Dr. Kousa Linear Block Codes 667
Note
In the Go-Back-N Protocol, the sequence numbers are modulo 2^m,
where m is the size of the sequence number field in bits.

11.668
EE576 Dr. Kousa Linear Block Codes 668
Figure 11.12 Send window for Go-Back-N ARQ

11.669
EE576 Dr. Kousa Linear Block Codes 669
Note
The send window is an abstract concept defining an imaginary box of
size 2^m - 1 with three variables: Sf, Sn, and Ssize.

 The send window can slide one or more slots when a


valid acknowledgment arrives.

11.670
EE576 Dr. Kousa Linear Block Codes 670
Figure 11.13 Receive window for Go-Back-N ARQ

11.671
EE576 Dr. Kousa Linear Block Codes 671
Note

The receive window is an abstract concept defining an imaginary box
of size 1 with one single variable Rn. The window slides when a
correct frame has arrived; sliding occurs one slot at a time.

11.672
EE576 Dr. Kousa Linear Block Codes 672
Figure 11.15 Window size for Go-Back-N ARQ

11.673
EE576 Dr. Kousa Linear Block Codes 673
Note

In Go-Back-N ARQ, the size of the send window must be less than 2^m;
the size of the receiver window is always 1.

11.674
EE576 Dr. Kousa Linear Block Codes 674
Figure 11.16 Flow diagram for Example 11.6

This is an example of a case where the forward channel is reliable,
but the reverse is not: no data frames are lost, but some ACKs are
delayed and one is lost.
11.675
EE576 Dr. Kousa Linear Block Codes 675
Figure 11.17 Flow diagram for Example 11.7

Scenario
showing what
happens when
a frame is
lost.

11.676
EE576 Dr. Kousa Linear Block Codes 676
Note

Stop-and-Wait ARQ is a special case of Go-Back-N ARQ


in which the size of the send window is 1.

11.677
EE576 Dr. Kousa Linear Block Codes 677
Figure 11.18 Send window for Selective Repeat ARQ

11.678
EE576 Dr. Kousa Linear Block Codes 678
Figure 11.19 Receive window for Selective Repeat ARQ

11.679
EE576 Dr. Kousa Linear Block Codes 679
Figure 11.21 Selective Repeat ARQ, window size

11.680
EE576 Dr. Kousa Linear Block Codes 680
Note

In Selective Repeat ARQ, the size of the sender and receiver window
must be at most one-half of 2^m.

11.681
EE576 Dr. Kousa Linear Block Codes 681
Figure 11.22 Delivery of data in Selective Repeat ARQ

11.682
EE576 Dr. Kousa Linear Block Codes 682
Figure 11.23 Flow diagram for Example 11.8

Scenario
showing how
Selective Repeat
behaves when a
frame is lost.

11.683
EE576 Dr. Kousa Linear Block Codes 683
11-6 HDLC

High-level Data Link Control (HDLC) is a bit-oriented


protocol for communication over point-to-point and
multipoint links. It implements the ARQ mechanisms we
discussed in this chapter.

Topics discussed in this section:


Configurations and Transfer Modes
Frames
Control Field

11.684
EE576 Dr. Kousa Linear Block Codes 684
Figure 11.25 Normal response mode

Figure 11.26 Asynchronous balanced mode

11.685
EE576 Dr. Kousa Linear Block Codes 685
Figure 11.27 HDLC frames
Control field
format for the
different frame
types

11.686
EE576 Dr. Kousa Linear Block Codes 686
Table 11.1 U-frame control command and response

11.687
EE576 Dr. Kousa Linear Block Codes 687
Figure 11.31 Example of piggybacking with error

Figure 11.31 shows an exchange in which a


frame is lost. Node B sends three data
frames (0, 1, and 2), but frame 1 is lost.
When node A receives frame 2, it discards it
and sends a REJ frame for frame 1. Note
that the protocol being used is Go-Back-N
with the special use of an REJ frame as a
NAK frame. The NAK frame does two things
here: It confirms the receipt of frame 0 and
declares that frame 1 and any following
frames must be resent. Node B, after
receiving the REJ frame, resends frames 1
and 2. Node A acknowledges the receipt by
sending an RR frame (ACK) with
acknowledgment number 3.

11.688
EE576 Dr. Kousa Linear Block Codes 688
11-7 POINT-TO-POINT PROTOCOL

Although HDLC is a general protocol that can be used


for both point-to-point and multipoint configurations,
one of the most common protocols for point-to-point
access is the Point-to-Point Protocol (PPP). PPP is a
byte-oriented protocol.

Topics discussed in this section:


Framing
Transition Phases
Multiplexing
Multilink PPP
11.689
EE576 Dr. Kousa Linear Block Codes 689
Figure 11.32 PPP frame format

PPP is a byte-oriented protocol using byte stuffing with the escape byte 01111101.

11.690
EE576 Dr. Kousa Linear Block Codes 690
Figure 11.33 Transition phases

11.691
EE576 Dr. Kousa Linear Block Codes 691
Figure 11.35 LCP packet encapsulated in a frame

11.692
EE576 Dr. Kousa Linear Block Codes 692
Table 11.2 LCP packets

11.693
EE576 Dr. Kousa Linear Block Codes 693
Table 11.3 Common options

11.694
EE576 Dr. Kousa Linear Block Codes 694
Figure 11.36 PAP packets encapsulated in a PPP frame

11.695
EE576 Dr. Kousa Linear Block Codes 695
Figure 11.37 CHAP packets encapsulated in a PPP frame

11.696
EE576 Dr. Kousa Linear Block Codes 696
Figure 11.38 IPCP packet encapsulated in PPP frame

Code value
for IPCP
packets

11.697
EE576 Dr. Kousa Linear Block Codes 697
A Survey of Advanced
FEC Systems
Eric Jacobsen
Minister of Algorithms, Intel Labs
Communication Technology Laboratory/
Radio Communications Laboratory
July 29, 2004
With a lot of material from Bo Xia, CTL/RCL

www.intel.com/labs
Communication and Interconnect Technolo

Outline
 What is Forward Error Correction?
 The Shannon Capacity formula and what it means
 A simple Coding Tutorial

 A Brief History of FEC


 Modern Approaches to Advanced FEC
 Concatenated Codes
 Turbo Codes
 Turbo Product Codes
 Low Density Parity Check Codes

www.intel.com/labs 699
Communication and Interconnect Technolo

Information Theory Refresh

The Shannon Capacity Equation:
C = W log2(1 + P/N)
where C is the channel capacity (bps), W is the channel bandwidth (Hz),
P is the transmit power, and N is the noise power.

There are 2 fundamental ways to increase the data rate: increase W or increase P.

C is the highest data rate that can be transmitted error free under
the specified conditions of W, P, and N. It is assumed that P
is the only signal in the memoryless channel and N is AWGN.
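A numeric sketch of the formula for a few signal-to-noise ratios (a 1 MHz bandwidth is assumed only for illustration):

from math import log2

def capacity(w_hz: float, snr: float) -> float:
    return w_hz * log2(1 + snr)

for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)          # convert dB to a linear ratio
    print(snr_db, "dB:", capacity(1e6, snr) / 1e6, "Mbps")
# 0 dB -> 1.0, 10 dB -> ~3.46, 20 dB -> ~6.66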

www.intel.com/labs 700
Communication and Interconnect Technolo

A simple example
A system transmits messages of two bits each through a channel
that corrupts each bit with probability Pe.

Tx Data = { 00, 01, 10, 11 } Rx Data = { 00, 01, 10, 11 }

The problem is that it is impossible to tell at the receiver whether the


two-bit symbol received was the symbol transmitted, or whether it
was corrupted by the channel.

Tx Data = 01 Rx Data = 00

In this case a single bit error has corrupted the received symbol, but
it is still a valid symbol in the list of possible symbols. The most
fundamental coding trick is just to expand the number of bits
transmitted so that the receiver can determine the most likely
transmitted symbol just by finding the valid codeword with the
minimum Hamming distance to the received symbol.

www.intel.com/labs 701
Communication and Interconnect Technolo
Continuing the Simple
Example
A one-to-one mapping of symbol to codeword is produced:

Symbol : Codeword
00 : 0010
01 : 0101
10 : 1001
11 : 1110

The result is a systematic block code with Code Rate R = 1/2 and a minimum
Hamming distance between codewords of dmin = 2.

A single-bit error can be detected and corrected at the receiver by
finding the codeword with the closest Hamming distance. The
most likely transmitted symbol will always be associated with the
closest codeword, even in the presence of multiple bit errors.
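A minimal sketch of this minimum-distance decoder; since dmin = 2, ties are possible, so a practical decoder would treat a tie as detected-but-uncorrectable:

CODEBOOK = {"00": "0010", "01": "0101", "10": "1001", "11": "1110"}

def decode(received: str) -> str:
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    # return the symbol whose codeword is closest to the received word
    return min(CODEBOOK, key=lambda sym: dist(CODEBOOK[sym], received))

print(decode("0111"))   # one bit flipped in "0101" -> decodes to "01"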

This capability comes at the expense of transmitting more bits,


usually referred to as parity, overhead, or redundancy bits.

www.intel.com/labs 702
Communication and Interconnect Technolo

Coding Gain
The difference in performance between an uncoded and a coded
system, considering the additional overhead required by the code,
is called the Coding Gain. In order to normalize the power required
to transmit a single bit of information (not a coded bit), Eb/No is used
as a common metric, where Eb is the energy per information bit, and
No is the noise power in a unit-Hertz bandwidth.

The uncoded symbols require a certain amount of energy to transmit, in
this case over a period Tb. The coded symbols at R = 1/2 can be
transmitted within the same period if the transmission rate is doubled.
Using No instead of N normalizes the noise, considering the differing
signal bandwidths.
www.intel.com/labs

703
Communication and Interconnect Technolo
Coding Gain and Distance to Channel Capacity Example

(BER = Pe vs Eb/No plot, 1 to 11 dB, 0.1 down to 1e-7; curves: R = 3/4
w/RS, R = 9/10 w/RS, Vit-RS R = 3/4, and uncoded QPSK "matched-filter
bound"; capacity limits marked at 1.62 dB for R = 3/4 and 3.2 dB for
R = 9/10.)

These curves compare the performance of two Turbo Codes with a
concatenated Viterbi-RS system. One TC sits d = ~1.4 dB from capacity
with a coding gain of ~5.95 dB; the other sits d = ~2.58 dB from
capacity with a coding gain of ~6.35 dB. The TC with R = 9/10 appears
to be inferior to the R = 3/4 Vit-RS system, but is actually operating
closer to capacity.

www.intel.com/labs 704
Communication and Interconnect Technolo

FEC Historical Pedigree

1950s-1970s:
• Shannon's paper (1948)
• Hamming defines basic binary codes
• Gallager's thesis on LDPCs
• BCH codes proposed
• Reed and Solomon define ECC technique
• Viterbi's paper on decoding convolutional codes
• Berlekamp and Massey rediscover Euclid's polynomial technique
and enable practical algebraic decoding
• Early practical implementations of RS codes for tape and disk drives
• Forney suggests concatenated codes

www.intel.com/labs 705
Communication and Interconnect Technolo

FEC Historical Pedigree II

1980s-2000s:
• Ungerboeck's TCM paper (1982)
• RS codes appear in CD players
• First integrated Viterbi decoders (late 1980s)
• TCM heavily adopted into standards
• Berrou's Turbo Code paper (1993)
• Turbo Codes adopted into standards (DVB-RCS, 3GPP, etc.)
• Renewed interest in LDPCs due to TC research
• LDPC beats Turbo Codes for DVB-S2 standard (2003)

www.intel.com/labs 706
Communication and Interconnect Technolo

Block Codes

Generally, a block code is any code defined with a finite codeword length.

Systematic Block Code: if the codeword is constructed by appending
redundancy to the payload Data Field, it is called a "systematic" code
(Codeword = Data Field + Parity).

The "parity" portion can be actual parity bits, or generated by some other means, like
a polynomial function or a generator matrix. The decoding algorithms differ greatly.

The Code Rate, R, can be adjusted by shortening the data field (using zero padding)
or by "puncturing" the parity field.

Examples of block codes: BCH, Hamming, Reed-Solomon, Turbo Codes,
Turbo Product Codes, LDPCs.

Essentially all iteratively-decoded codes are block codes.

www.intel.com/labs 707
Communication and Interconnect Technolo

Convolutional Codes

Convolutional codes are generated using a shift register to apply a polynomial to a
stream of data. The resulting code can be systematic if the data is transmitted in
addition to the redundancy, but it often isn't.

This is the convolutional encoder for the p = 133/171 polynomial that is in
very wide use. This code has a constraint length of k = 7. Some
low-data-rate systems use k = 9 for a more powerful code.

This code is naturally R = 1/2, but deleting selected output bits, or
"puncturing" the code, can be done to increase the code rate.

Convolutional codes are typically decoded using the Viterbi algorithm, which increases in
complexity exponentially with the constraint length. Alternatively a
sequential decoding algorithm can be used, which requires a much longer constraint length
for similar performance.

Diagram from [1]

www.intel.com/labs 708
Communication and Interconnect Technolo

Convolutional Codes - II

This is the code-trellis, or state diagram, of a k = 2 convolutional
code. Each end node represents a code state, and the branches represent
codewords selected when a one or a zero is shifted into the encoder.

The correcting power of the code comes from the sparseness of the
trellis. Since not all transitions from any one state to any other
state are allowed, a state-estimating decoder that looks at the data
sequence can estimate the input data bits from the state relationships.

The Viterbi decoder is a Maximum Likelihood Sequence Estimator that
estimates the encoder state using the sequence of transmitted codewords.

This provides a powerful decoding strategy, but when it makes a mistake
it can lose track of the sequence and generate a stream of errors until
it reestablishes code lock.

Diagrams from [1]

www.intel.com/labs 709
Communication and Interconnect Technolo

Concatenated Codes

A very common and effective code is the concatenation of an inner convolutional
code with an outer block code, typically a Reed-Solomon code. The convolutional
code is well-suited for channels with random errors, and the Reed-Solomon code is
well suited to correct the bursty output errors common with a Viterbi decoder. An
interleaver can be used to spread the Viterbi output error bursts across multiple RS
codewords.

Data -> RS Encoder (outer code) -> Interleaver -> Conv. Encoder (inner code)
-> Channel -> Viterbi Decoder -> De-Interleaver -> RS Decoder -> Data

www.intel.com/labs 710
Communication and Interconnect Technolo

Concatenating Convolutional Codes

Parallel and serial concatenation:

Serial: Data -> CC Encoder1 -> Interleaver -> CC Encoder2 -> Channel
-> Viterbi/APP Decoder -> De-Interleaver -> Viterbi/APP Decoder -> Data

Parallel: Data -> CC Encoder1, and in parallel via an Interleaver ->
CC Encoder2 -> Channel -> two Viterbi/APP Decoders (with a
De-Interleaver between them) -> Combiner -> Data

www.intel.com/labs 711
Communication and Interconnect Technolo

Iterative Decoding of CCCs

Rx Data -> Viterbi/APP Decoder <-> (Interleaver / De-Interleaver) <->
Viterbi/APP Decoder -> Data

Turbo Codes add coding diversity by encoding the same data twice through
concatenation. Soft-output decoders are used, which can provide reliability update
information about the data estimates to the each other, which can be used during a
subsequent decoding pass.

The two decoders, each working on a different codeword, can “iterate” and continue
to pass reliability update information to each other in order to improve the probability
of converging on the correct solution. Once some stopping criterion has been met,
the final data estimate is provided for use.

These Turbo Codes provided the first known means of achieving decoding
performance close to the theoretical Shannon capacity.

www.intel.com/labs 712
Communication and Interconnect Technolo

MAP/APP decoders
 Maximum A Posteriori/A Posteriori Probability
 Two names for the same thing
 Basically runs the Viterbi algorithm across the data sequence in both
directions
 ~Doubles complexity
 Becomes a bit estimator instead of a sequence estimator
 Optimal for Convolutional Turbo Codes
 Need two passes of MAP/APP per iteration
 Essentially 4x computational complexity over a single-pass Viterbi
 Soft-Output Viterbi Algorithm (SOVA) is sometimes substituted as a
suboptimal simplification compromise

www.intel.com/labs 713
Communication and Interconnect Technolo

Turbo Code Performance

www.intel.com/labs 714
Communication and Interconnect Technolo
Turbo Code Performance II

(BER vs Eb/No plot, 0 to 9 dB, 0.1 down to 1e-8; curves: Uncoded,
Vit-RS R = 1/2, 3/4, 7/8, and Turbo Code R = 1/2, 3/4, 7/8; QPSK
capacity marks at 1.629 dB and 2.864 dB.)

The performance curves shown here were end-to-end measured performance
in practical modems. The black lines are a PCCC Turbo Code, and the
blue lines are for a concatenated Viterbi-RS decoder. The vertical
dashed lines show QPSK capacity for R = 3/4 and R = 7/8. The capacity
for QPSK at R = 1/2 is 0.2 dB.

The TC system clearly operates much closer to capacity. Much of the
observed distance to capacity is due to implementation loss in the modem.

www.intel.com/labs 715
Communication and Interconnect Technolo

Tricky Turbo Codes

Repeat-Accumulate codes use simple repetition followed by a differential encoder
(the accumulator). This enables iterative decoding with extremely simple codes.
These types of codes work well in erasure channels.

Repeat section (outer code, R = 1/2): 1:2 repetition -> Interleaver ->
Accumulate section (inner code, R = 1): differential encoder D with feedback.

Since the differential encoder has R = 1, the final code rate is determined by the
amount of repetition used.

www.intel.com/labs 716
Communication and Interconnect Technolo

Turbo Product Codes

The so-called "product codes" are codes created on the independent
dimensions of a matrix. A common implementation arranges the data in a
2-dimensional array and then applies a Hamming code to each row and
column: horizontal Hamming codes cover the rows, vertical Hamming codes
cover the columns, and the corner block holds checks on checks
(2-dimensional parity).

The decoder then iterates between decoding the horizontal and vertical codes.

Since the constituent codes are Hamming codes, which can be decoded simply, the
decoder complexity is much less than Turbo Codes. The performance is close to capacity
for code rates around R = 0.7-0.8, but is not great for low code rates or short blocks. TPCs
have enjoyed commercial success in streaming satellite applications.

www.intel.com/labs 717
Communication and Interconnect Technolo
Low Density Parity Check
Codes
 Iterative decoding of simple parity check codes
 First developed by Gallager, with iterative decoding, in 1962!
 Published examples of good performance with short blocks
 Kou, Lin, Fossorier, Trans IT, Nov. 2001

 Near-capacity performance with long blocks


 Very near! - Chung, et al, “On the design of low-density parity-check codes within
0.0045dB of the Shannon limit”, IEEE Comm. Lett., Feb. 2001

 Complexity Issues, especially in encoder


 Implementation Challenges – encoder, decoder memory

www.intel.com/labs 718
Communication and Interconnect Technolo

LDPC Bipartite Graph

Check Nodes

Edges

Variable Nodes
(Codeword bits)

This is an example bipartite graph for an irregular LDPC code.

www.intel.com/labs 719
Communication and Interconnect Technolo
Iteration Processing

1st half iteration - check nodes (one per parity bit): compute the
forward metrics alpha, backward metrics beta, and r's for each edge:
alpha(i+1) = max*(alpha(i), q(i))
beta(i) = max*(beta(i+1), q(i))
r(i) = max*(alpha(i), beta(i+1))

2nd half iteration - variable nodes (one per code bit): compute mV and
the q's for each variable node:
mV = mV0 + sum of the r's
q(i) = mV - r(i)
www.intel.com/labs 720
Communication and Interconnect Technolo
LDPC Performance Example

LDPC performance can be very close to capacity. The closest performance
to the theoretical limit ever achieved was with an LDPC, within
0.0045 dB of capacity.

The code shown here is a high-rate code and is operating within a few
tenths of a dB of capacity.

Turbo Codes tend to work best at low code rates and not so well at high
code rates. LDPCs work very well at high code rates and low code rates.

Figure is from [2]

www.intel.com/labs 721
Communication and Interconnect Technolo

Current State-of-the-Art
 Block Codes
 Reed-Solomon widely used in CD-ROM, communications standards.
Fundamental building block of basic ECC
 Convolutional Codes
 K = 7 CC is very widely adopted across many communications standards
 K = 9 appears in some limited low-rate applications (cellular telephones)
 Often concatenated with RS for streaming applications (satellite, cable, DTV)
 Turbo Codes
 Limited use due to complexity and latency – cellular and DVB-RCS
 TPCs used in satellite applications – reduced complexity
 LDPCs
 Recently adopted in DVB-S2, ADSL, being considered in 802.11n, 802.16e
 Complexity concerns, especially memory – expect broader consideration

www.intel.com/labs 722
Cyclic Codes for Error Detection
W. W. Peterson and D. T. Brown

by
Maheshwar R Geereddy

EE576 Dr. Kousa Linear Block Codes 723


Definition

A code is called cyclic if [x_{n-1} x_0 x_1 ... x_{n-2}] is a
codeword whenever [x_0 x_1 ... x_{n-1}] is also a codeword.

Notations
k = number of binary digits in the message before encoding
n = number of binary digits in the encoded message
n - k = number of check bits

EE576 Dr. Kousa Linear Block Codes 724


b = length of a burst of errors
G(X) = message polynomial
P(X) = generator polynomial
R(X) = remainder on dividing X^(n-k) G(X) by P(X)
F(X) = encoded message polynomial
E(X) = error polynomial
H(X) = received encoded message polynomial

H(X) = F(X) + E(X)

EE576 Dr. Kousa Linear Block Codes 725


Polynomial Representation of Binary
Information
It is convenient to think of binary digits as
coefficients of a polynomial in the dummy variable
X.
Polynomial is written low-order-to-high-order.
Polynomials are treated according to the laws of
ordinary algebra with an exception addition is to
be done modulo two.

EE576 Dr. Kousa Linear Block Codes 726


Algebraic Description of Cyclic Codes

A cyclic code is defined in terms of a generator


polynomial P(X) of degree n-k.

If P(X) has X as a factor then every code


polynomial has X as a factor and therefore
zero-order coefficient equal to zero.
Only codes for which P(X) is not divisible by X
are considered.

EE576 Dr. Kousa Linear Block Codes 727


Encoded Message Polynomial F(X)

• Compute X^(n-k) G(X)
• R(X) = remainder of X^(n-k) G(X) / P(X)
• Add the remainder to X^(n-k) G(X):
– F(X) = X^(n-k) G(X) + R(X)

X^(n-k) G(X) = Q(X) P(X) + R(X)

Also F(X) = Q(X) P(X), since R(X) + R(X) = 0 in modulo-2 arithmetic.

EE576 Dr. Kousa Linear Block Codes 728


Principles of Error Detection and Error
Correction
• An encoded message containing errors can be
represented by
H (X) = F (X) + E (X)

H (X) = Received encoded message polynomial


F (X) = encoded message polynomial
E (X) = error polynomial

EE576 Dr. Kousa Linear Block Codes 729


Principles of Error Detection and Error
Correction Contd…
• To detect error, divide the received, possible
erroneous message H(X) by P(X) and test the
remainder.
• If the remainder is nonzero an error has been
detected.
• If the remainder is zero, either no error or an
undetectable error has occurred

EE576 Dr. Kousa Linear Block Codes 730


DETECTION OF SINGLE ERRORS

• Theorem 1: A cyclic code generated by a polynomial
P(X) with more than one term detects all single errors.
• Proof:
– A single error in the i'th position of an encoded message
corresponds to an error polynomial X^i.
– For detection of single errors, it is necessary that P(X) does
not divide X^i.
– Obviously no polynomial with more than one term divides X^i.

EE576 Dr. Kousa Linear Block Codes 731


DETECTION OF SINGLE ERRORS
Contd….

• Theorem 2: Every polynomial divisible by 1 + X has
an even number of terms.
– Proof:

• Also, if P(X) contains a factor 1 + X, any odd number
of errors will be detected.

EE576 Dr. Kousa Linear Block Codes 732


Double and Triple Error Detecting
Codes (Hamming Codes)
• Theorem 3: A code generated by the polynomial
P(X) detects all single and double errors if the length
n of the code is no greater than the exponent e to
which P(X) belongs.
– Detecting double errors requires that P(X) does not divide
X^i + X^j for any i, j < n

EE576 Dr. Kousa Linear Block Codes 733


Double and Triple Error Detecting
Codes Contd…
• Theorem 4: A code generated by P(X) = (1 + X) P1(X)
detects all single, double, and triple errors if the
length n of the code is no greater than the exponent e
to which P1(X) belongs.
– Single and triple errors are detected by the presence of the
factor 1 + X, as proved in Theorem 2.
– Double errors are detected because P1(X) belongs to an
exponent e >= n, as proved in Theorem 3.
– Q.E.D

EE576 Dr. Kousa Linear Block Codes 734


Detection of a Burst-Error
• A burst error of length b will be defined as any pattern of errors for
which the number of symbols between the first and last errors, including
these errors, is b.
• Theorem 5: Any cyclic code generated by a polynomial of degree n-k
detects any burst error of length n-k or less.
– Any burst polynomial can be factored as E(X) = X^i E1(X)
– E1(X) is of degree b-1
– The burst can be detected if P(X) does not divide E(X)
– Since P(X) is assumed not to have X as a factor, it could divide
E(X) only if it could divide E1(X).
– But b <= n - k
• Therefore P(X) is of higher degree than E1(X), which implies that P(X)
cannot divide E1(X)
• Q.E.D
• Theorem 6: The fraction of bursts of length b > n-k that are undetected is
2^-(n-k) if b > n - k + 1
2^-(n-k-1) if b = n - k + 1
EE576 Dr. Kousa Linear Block Codes 735
Detection of Two Bursts of Errors
(Abram son and Fire Codes)
• Theorem 7: The cyclic code generated by P(X) = (1 + X) P1(X)
detects any combination of two burst errors of length two or less
if the length of the code, n, is no greater than e, the exponent
to which P1(X) belongs.
– Proof
– There are four types of error patterns:
• E(X) = X^i + X^j
• E(X) = (X^i + X^(i+1)) + X^j
• E(X) = X^i + (X^j + X^(j+1))
• E(X) = (X^i + X^(i+1)) + (X^j + X^(j+1))

EE576 Dr. Kousa Linear Block Codes 736


Other Cyclic Codes
• There are several important cyclic codes which have not been
discussed in this paper.
– BCH codes (developed by Bose, Chaudhuri, and
Hocquenghem) are a very important type of cyclic codes.
– Reed-Solomon codes are a special type of BCH codes that
are commonly used in compact disc players.

Implementation
• Briefly, to encode a message G(X), n-k zeros are appended (i.e., the multiplication X^(n-k) G(X) is performed) and then X^(n-k) G(X) is divided by the polynomial P(X) of degree n-k. The remainder is then subtracted from X^(n-k) G(X); it replaces the n-k zeros.
• The encoded message is divisible by P(X), which is what the receiver checks to detect errors.
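A sketch of the encoder under these conventions (message and P(X) as MSB-first bit lists; gf2_remainder as defined earlier; the function name is hypothetical):

def crc_encode(message_bits, p_bits):
    """Append the n-k check bits: remainder of X^(n-k) G(X) divided by P(X)."""
    shifted = list(message_bits) + [0] * (len(p_bits) - 1)  # X^(n-k) G(X)
    remainder = gf2_remainder(shifted, p_bits)
    return list(message_bits) + remainder  # subtracting = replacing the zeros

codeword = crc_encode([1, 1, 0, 1], [1, 0, 1, 1])
print(codeword)                               # [1, 1, 0, 1, 0, 0, 1]
print(gf2_remainder(codeword, [1, 0, 1, 1]))  # all zeros: divisible by P(X)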
Implementation Contd…
• It can be seen that modulo-2 arithmetic simplifies the division considerably. Here we do not require the quotient, so the division to find the remainder can be described as follows:
1) Align the coefficients of the highest-degree terms of the divisor and dividend and subtract (in modulo-2 arithmetic, the same as addition).
2) Align the coefficients of the highest-degree terms of the divisor and the difference and subtract again.
3) Repeat the process until the difference has lower degree than the divisor.
• The hardware that implements this algorithm is a shift register and a collection of modulo-2 adders.
• The number of shift-register positions is equal to the degree of the divisor, P(X), and the dividend is shifted through high-order first, left to right.
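A bit-serial sketch of that hardware (register length equal to the degree of P(X); tap positions taken from the low-order coefficients of P(X); the helper name is hypothetical):

def lfsr_remainder(dividend_bits, p_bits):
    """Shift-register division: shift the dividend in high-order first;
    whenever a 1 shifts off the end, XOR (subtract) the divisor in."""
    degree = len(p_bits) - 1
    taps = p_bits[1:]  # low-order coefficients of P(X)
    reg = [0] * degree
    for bit in dividend_bits:
        out = reg[0]                  # bit shifting off the end of the register
        reg = reg[1:] + [bit]         # shift the next dividend bit in
        if out == 1:                  # a one shifted off: subtract the divisor
            reg = [r ^ t for r, t in zip(reg, taps)]
    return reg                        # remainder

print(lfsr_remainder([1, 1, 0, 1, 0, 0, 0], [1, 0, 1, 1]))  # [0, 0, 1], as before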
Implementation Contd…
• As the first one (the coefficient of the high-order term of the dividend) shifts off the end, we subtract the divisor by the following procedure:
1. In the subtraction, the high-order terms of the divisor and dividend always cancel. As the high-order term of the dividend is shifted off the end of the register, this part of the subtraction is done automatically.
2. Modulo-2 adders are placed so that when a one shifts off the end of the register, the divisor is subtracted from the contents of the register. The register then contains a difference that is shifted until another one comes off the end, and the process is repeated. This continues until the entire dividend has been shifted into the register.
Worked example. Input sequence 100010001101011 shifted through the divider; each line shows one shift step and the resulting register contents:

0 -> 10 00 1
0 -> 11 10 1
0 -> 11 01 1
1 -> 11 00 0
1 -> 11 10 0
0 -> 11 11 0
1 -> 01 11 1
0 -> 00 01 0
1 -> 00 00 1
1 -> 00 10 1
Implementation Contd…
• To minimize the hardware, it is desirable to use the same register for both encoding and error detection.
• If the circuit of Fig. 3 is used for error detection, it computes the remainder on dividing X^(n-k) H(X) by P(X) instead of the remainder on dividing H(X) by P(X).
• This makes no difference, because if H(X) is not evenly divisible by P(X), then obviously X^(n-k) H(X) will not be divisible either.
• Error correction is a much more difficult task than error detection.
• It can be shown that each different correctable error pattern must give a different remainder after division by P(X).
• Therefore error correction can be done by mapping each remainder back to its error pattern.
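A small sketch of that correction idea for single errors, building a table from remainder (syndrome) to error position. It assumes the hypothetical gf2_remainder helper and the illustrative (7,4) code with P(X) = X^3 + X + 1 used above:

n = 7
p = [1, 0, 1, 1]  # P(X) = X^3 + X + 1

# Each single-error pattern X^i gives a distinct remainder (syndrome).
syndromes = {}
for i in range(n):
    e = [0] * n
    e[i] = 1
    syndromes[tuple(gf2_remainder(e, p))] = i

received = [1, 1, 0, 0, 0, 0, 1]   # codeword 1101001 with an error in position 3
s = tuple(gf2_remainder(received, p))
if any(s):
    received[syndromes[s]] ^= 1    # flip the bit the syndrome points to
print(received)                    # [1, 1, 0, 1, 0, 0, 1]: codeword restored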
Conclusion
• Cyclic codes for error detection provide high efficiency and ease of implementation.
• They underlie widely used standards such as CRC-8 and CRC-32.
The Viterbi Algorithm
• An application of dynamic programming: the principle of optimality
• A Citation Index search finds 213 references since 1998
• Applications
– Telecommunications
• Convolutional codes and trellis codes
• Inter-symbol interference in digital transmission
• Continuous-phase transmission
• Magnetic recording and partial-response signaling
• Diverse others
– Image restoration
– Rainfall prediction
– Gene sequencing
– Character recognition
Milestones
• Viterbi (1967): decoding of convolutional codes
• Omura (1968): optimality of the VA
• Kobayashi (1971): magnetic recording
• Forney (1973): classic survey recognizing the generality of the VA
• Rabiner (1989): influential survey paper on hidden Markov models
Example-Principle of Optimality
[Figure: Professor X chooses an optimum path on his trip to lunch, from the EE Building across a river with N bridges to the Faculty Club; segment lengths such as 0.5, 0.8, 1.0, and 1.2 label the branches.]

Find the optimal path to each bridge, stage by stage:
• Small example: 6 additions, versus 8 additions by brute force.
• N bridges: 4(N+1) additions, versus (N-1)2^N additions by brute force.
Digital Transmission with
Convolutional Codes
[Block diagram: an information source emits $a_1, a_2, \ldots, a_N$; the convolutional encoder maps them to $c_1, c_2, \ldots, c_N$; the binary symmetric channel (BSC) with crossover probability $p$ delivers $b_1, b_2, \ldots, b_N$; the Viterbi algorithm forms the estimates $\hat{a}_1, \hat{a}_2, \ldots, \hat{a}_N$ for the information sink.]
Maximum a Posteriori (MAP)
Estimate
Define $D(B^N, A^N)$ = Hamming distance between the sequences.

Maximum a posteriori probability:

$$\max_{a_1, a_2, \ldots, a_N} P(b_1, b_2, \ldots, b_N \mid a_1, a_2, \ldots, a_N) = \max_{a_1, a_2, \ldots, a_N} p^{D(A^N, B^N)} (1-p)^{N - D(A^N, B^N)}$$

where $p$ = bit error probability.

Equivalently, since $\log(p/(1-p)) < 0$ for $p < 1/2$, maximize $D(A^N, B^N)\log(p/(1-p))$, i.e.

$$\min_{a_1, a_2, \ldots, a_N} D(A^N, B^N)$$

Brute force: exponential growth with N.
Convolutional codes-Encoding a
sequence
Example: a (3,1) code (output, input); efficiency = input/output = 1/3.

The encoder is a two-stage shift register with contents $s_1, s_2$ and initial state $s_1 = s_2 = 0$. For each input bit $i$ the three output bits are

$$o_1 = i, \qquad o_2 = i \oplus s_1, \qquad o_3 = i \oplus s_1 \oplus s_2$$

after which the register shifts: $s_2 \leftarrow s_1$, $s_1 \leftarrow i$.

Input 110100 → output 111 100 010 110 011 001 000.
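A direct transcription of this encoder into code (a sketch; the function name is hypothetical):

def conv_encode(bits):
    """(3,1) convolutional encoder: o1 = i, o2 = i ^ s1, o3 = i ^ s1 ^ s2."""
    s1 = s2 = 0  # initial state
    out = []
    for i in bits:
        out.append((i, i ^ s1, i ^ s1 ^ s2))
        s1, s2 = i, s1  # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 0, 0]))
# [(1, 1, 1), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 1, 1), (0, 0, 1)]

The slide's trailing 000 block corresponds to flushing the register with one further 0 input.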
Markov chain for Convolutional code
[Fig. 2.14: four-state Markov chain for the code; each arc is labeled input - output: 0-000 from state 00 to itself, 1-111 from 00 to 10, 0-001 and 1-110 out of state 01, 0-011 and 1-100 out of state 10, 0-010 and 1-101 out of state 11.]
Trellis Representation
State $s_1 s_2$ | input 0: output / next state | input 1: output / next state
00 | 000 / 00 | 111 / 10
01 | 001 / 00 | 110 / 10
10 | 011 / 01 | 100 / 11
11 | 010 / 01 | 101 / 11
Iteration for Optimization
$$\min_{a_1, a_2, \ldots, a_N} D(A^N, B^N) = \min_{s_1, s_2, \ldots, s_N} D(A^N, B^N)$$

(the states $s_i$ are the shift-register contents, which determine the transmitted sequence).

By the memorylessness of the BSC,

$$\min_{s_1, \ldots, s_N} D(A^N, B^N) = \min_{s_1, \ldots, s_N} \sum_{i=1}^{N} d(a_i, b_i)$$

$$\min_{s_1, \ldots, s_N} \left( D(A^{N-1}, B^{N-1}) + d(a_N, b_N) \right) = \min_{s_N}\, \min_{s_1, \ldots, s_{N-1}/s_N} \left( D(A^{N-1}, B^{N-1}) + d(a_N, b_N) \right)$$

$$= \min_{s_N} \left( d(a_N, b_N) + \min_{s_1, \ldots, s_{N-1}/s_N} D(A^{N-1}, B^{N-1}) \right)$$

where $s_1, \ldots, s_{N-1}/s_N$ denotes state sequences consistent with the final state $s_N$.
Key step!
$$\min_{s_1, \ldots, s_{N-2}/s_{N-1}, s_N} D(A^{N-2}, B^{N-2}) = \min_{s_1, \ldots, s_{N-2}/s_{N-1}} D(A^{N-2}, B^{N-2})$$

(conditioning on $s_N$ is redundant once $s_{N-1}$ is fixed). Hence

$$\min_{s_1, \ldots, s_{N-1}/s_N} D(A^{N-1}, B^{N-1}) = \min_{s_{N-1}/s_N} \left( d(a_{N-1}, b_{N-1}) + \min_{s_1, \ldots, s_{N-2}/s_{N-1}} D(A^{N-2}, B^{N-2}) \right)$$

with $d(a_{N-1}, b_{N-1})$ the incremental distance and the inner minimum the accumulated distance. The work grows linearly in N.
Deciding Previous State
$$\min_{s_1, \ldots, s_i/s_{i+1}} D(A^i, B^i) = \min_{s_i/s_{i+1}} \left( d(a_i, b_i) + \min_{s_1, \ldots, s_{i-1}/s_i} D(A^{i-1}, B^{i-1}) \right)$$

[Figure: one trellis stage, received block $b_i = 010$. Searching the previous states: state 00 at stage i-1 has accumulated distance 4 and its branch output $a_i = 000$ has incremental distance 1; state 10 has accumulated distance 2 and its branch output $a_i = 001$ has incremental distance 2. The survivor into state 00 at stage i therefore has accumulated distance 4.]
Viterbi Algorithm-shortest path
First step to detect the sequence: trace through successive states $s_0, s_1, s_2, s_3$, keeping for each state only the shortest (surviving) path into it.

[Trellis figure: paths through states $s_0$ to $s_3$, with the survivor retained at each node.]

For convolutional codes the path metric is Hamming distance to the received sequence; for trellis codes it is Euclidean distance. The result is optimum sequence detection.
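A sketch of such a decoder for the (3,1) code above, using the trellis table from the previous slide and a Hamming path metric (a minimal illustration; names are hypothetical):

def viterbi_decode(blocks):
    """Decode blocks of the (3,1) convolutional code by minimum Hamming distance."""
    # state (s1, s2) -> {input bit: (output triple, next state)}
    trellis = {
        (0, 0): {0: ((0, 0, 0), (0, 0)), 1: ((1, 1, 1), (1, 0))},
        (0, 1): {0: ((0, 0, 1), (0, 0)), 1: ((1, 1, 0), (1, 0))},
        (1, 0): {0: ((0, 1, 1), (0, 1)), 1: ((1, 0, 0), (1, 1))},
        (1, 1): {0: ((0, 1, 0), (0, 1)), 1: ((1, 0, 1), (1, 1))},
    }
    survivors = {(0, 0): (0, [])}  # state -> (accumulated distance, decoded bits)
    for b in blocks:
        new = {}
        for state, (dist, bits) in survivors.items():
            for i, (out, nxt) in trellis[state].items():
                d = dist + sum(x != y for x, y in zip(out, b))  # incremental distance
                if nxt not in new or d < new[nxt][0]:           # keep only the survivor
                    new[nxt] = (d, bits + [i])
        survivors = new
    return min(survivors.values())[1]  # bits along the best surviving path

# Codeword for input 110100 is 111 100 010 110 011 001; flip one bit in blocks 1 and 4:
noisy = [(1, 1, 0), (1, 0, 0), (0, 1, 0), (1, 0, 0), (0, 1, 1), (0, 0, 1)]
print(viterbi_decode(noisy))  # [1, 1, 0, 1, 0, 0]: both channel errors corrected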
Inter-symbol Interference
[Block diagram: the transmitter sends $\sum_{i=1}^{N} a_i\, p(t - iT)$; the channel output $z(t)$ passes through an equalizer, and the VA makes the decisions.]

Received signal:

$$z(t) = \sum_{i=1}^{N} a_i\, h(t - iT) + n(t)$$

Pulse correlations:

$$r_{i-j} = \int_0^{\infty} h(t - iT)\, h(t - jT)\, dt$$

$$r_{i-j} = 0 \ \text{for} \ |i - j| > m \quad \text{(finite-memory channel)}$$
AWGN Channel-MAP Estimate
$$\min_{a_1, a_2, \ldots, a_N} \int_0^{\infty} \left( z(t) - \sum_{i=1}^{N} a_i\, h(t - iT) \right)^{2} dt$$

(the Euclidean distance between the received signal and the possible signals).

Simplification (expanding the square and dropping the term that does not depend on the data):

$$\min_{a_1, a_2, \ldots, a_N} \left( -2 \sum_{i=1}^{N} a_i Z_i + \sum_{i=1}^{N} \sum_{j=1}^{N} a_i a_j r_{i-j} \right)$$

where

$$Z_i = \int_0^{\infty} z(t)\, h(t - iT)\, dt$$

is the output of the matched filter.
Viterbi Algorithm for ISI
Define the states $s_k = \{a_{k-m+1}, \ldots, a_k\}$ (memory $m$): a state is the $m$ most recent symbols.

Accumulated distance:

$$D(Z_1, \ldots, Z_k;\ s_{k-m+1}, \ldots, s_k) = -2 \sum_{i=1}^{k} a_i Z_i + \sum_{i=1}^{k} \sum_{j=1}^{k} a_i a_j r_{i-j}$$

Incremental distance:

$$d(Z_k;\ s_{k-1}, s_k) = -2 a_k Z_k + 2 a_k \sum_{i=k-m}^{k-1} a_i r_{k-i} + a_k^2 r_0$$

Recursion, as before:

$$\min_{s_1, \ldots, s_{k-1}/s_k} D(Z_1, \ldots, Z_{k-1};\ s_{k-m+1}, \ldots, s_{k-1}) = \min_{s_{k-1}/s_k} \left( d(Z_k;\ s_{k-1}, s_k) + \min_{s_1, \ldots, s_{k-2}/s_{k-1}} D(Z_1, \ldots, Z_{k-2};\ s_{k-m+1}, \ldots, s_{k-2}) \right)$$
Magnetic Recording
Magnetization pattern:

$$m(t) = \sum_{k \ge 0} (a_k - 1)\, u(t - kT) + 1(t)$$

The magnetic flux passes over the read heads, which differentiate the pulses; the result is controlled ISI. The same model applies to partial-response signaling.

Output:

$$e(t) = \left( \frac{d\, m(t)}{dt} \right) * h(t) = \sum_{k \ge 0} 2 x_k\, h(t - kT), \qquad x_k = a_k - a_{k-1}$$

with $h(t)$ a Nyquist pulse, sampled at the symbol instants.
Continuous Phase FSK
Digital input sequence: $a_1, a_2, \ldots, a_N$

Transmitted signal:

$$y_k = \cos(\omega(a_k)\, t + x_k), \qquad kT \le t \le (k+1)T$$

Constraint, continuous phase at the symbol boundaries:

$$\omega(a_{k-1})\, t + x_{k-1} = \omega(a_k)\, t + x_k \pmod{2\pi}$$

Example, binary signaling: one tone completes an odd number of half cycles per interval, the other a whole number of cycles, so that

$$x_k = \begin{cases} 0, & \text{even number of ones so far} \\ \pi, & \text{odd number of ones so far} \end{cases}$$

[Plot: transmitted waveform over one signaling interval.]
Merges and State Reduction
• Optimal paths through the trellis eventually all merge.
• Merges can be forced to reduce complexity.
• Computation is of order (number of states)^2 per step.
• Carry only the high-probability states.
Image Restoration: Effect of Blurring

[Figure: an input pixel is spread by the optical channel into the optical output signal.]

Blurring is analogous to ISI:

$$s(i, j) = \sum_{l=-L}^{L} \sum_{m=-L}^{L} a(i - l, j - m)\, h(l, m) + n(i, j)$$

where $L$ = optical blur width, $a$ is the input pixel array, $h(l, m)$ the optical channel response, and $n(i, j)$ AWGN.
Row Scan
• VA for the optimal row sequence.
• Known state transitions and decision feedback are utilized for state reduction.
Hidden Markov Chain
• Data suggests a Markovian structure
• Estimate the initial-state probabilities
• Estimate the transition probabilities
• The VA is used in estimating the probabilities
• Iterate
Rainfall Prediction
[State diagram: hidden states Rainy-wet, Rainy-dry, Showery-wet, Showery-dry, and No rain, with transitions among them; the rainfall observations form the visible sequence.]
DNA Sequencing
• DNA: a double helix
– Sequences of four nucleotides: A, T, C, and G
– Pairing between the strands: A-T and C-G bonding
• Genes
– Made up of codons, i.e., triplets of adjacent nucleotides
– Genes may overlap

[Figure: nucleotide sequence CGGATTC with three overlapping genes, Gene 1, Gene 2, Gene 3; the codon containing A lies in all three genes.]
Hidden Markov Chain
Tracking genes with a hidden Markov chain whose states are:
• S: start, the first codon of a gene
• P1-P4: positions +1, ..., +4 from the start
• M1-M4: positions -1, ..., -4 from the start
• E: stop
• H: gap
The initial and transition probabilities are known.

[State diagram: M1-M4 lead into S, S feeds P1-P4, with H and E completing the chain.]
Recognizing Handwritten Chinese Characters

• Text-line images
• Estimate the stroke width
• Set up an m x n grid
• Estimate the initial and transition probabilities
• Detect possible segmentation paths by the VA (results on the next slide)
Example Segmenting Handwritten
Characters
[Four panels: all possible segmentation paths; eliminating redundant paths; removal of overlapping paths; discarding near paths.]