
coms4100/7105

Digital Communications
Lecture 26: Linear Block Codes
This lecture: 1. Matrix Representation of Block Codes. 2. Syndrome Decoding. 3. Cyclic Codes. Ref: CC5 pp. 604-617.


Matrix Representation of Block Codes



Linear Codes
A code is linear if: 1. the all-zero vector 0 = (0, . . . , 0) is a codeword, and 2. modulo-2 addition of two codewords gives another codeword, i.e., for codewords x = (x1, . . . , xn) and y = (y1, . . . , yn),

z = x + y = (x1 ⊕ y1, . . . , xn ⊕ yn)

is a valid codeword (we assume code vectors are row vectors). It follows that a code vector x can be expressed as the matrix multiplication of a message vector m with a generator matrix G:

x = mG

where the addition implicit in the matrix multiplication is mod 2.


We extend the idea of code vectors by introducing generator matrices for systematic, linear block codes.

Define the Hamming weight w(v) of a code vector v as the number of non-zero elements. Observe that the Hamming distance between vectors u and v is d(u, v) = w(u + v). For a linear code, u + v is another valid code vector, so

dmin = min w(v),

where the minimum is taken over all non-zero codewords v in the codebook.
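This property makes dmin easy to compute by enumeration. A minimal sketch in Python, assuming a small hypothetical (6, 3) systematic code whose parity part P is chosen arbitrarily for illustration:

```python
import numpy as np
from itertools import product

# Hypothetical generator matrix for a small (6, 3) systematic linear code,
# G = (I | P), with an assumed parity part P (illustration only).
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

# Enumerate all 2^k codewords via x = mG (mod 2).
codewords = [tuple(np.dot(m, G) % 2) for m in product([0, 1], repeat=3)]

# For a linear code, dmin equals the minimum Hamming weight
# over all non-zero codewords.
d_min = min(sum(c) for c in codewords if any(c))
print(d_min)
```

For this assumed P, every non-zero codeword has weight at least 3, so the code could correct any single-bit error.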

Systematic Codes
A code is systematic if the message bits are preserved in the codeword bits, i.e., if code vector = (message bits | parity bits).

Systematic, Linear Codes


It follows that if a code is systematic and linear then the generator matrix can be partitioned so that G = (I | P) where I is the identity matrix and P generates the parity bits.


Example: The Hamming Code


The (7, 4) Hamming code is a systematic, linear block code with dmin = 3 and

        ( 1 0 0 0 | 1 0 1 )
G  =    ( 0 1 0 0 | 1 1 1 )
        ( 0 0 1 0 | 1 1 0 )
        ( 0 0 0 1 | 0 1 1 )

The code can be easily implemented in hardware. [Figure: encoder circuit omitted]
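The encoding x = mG can be checked numerically. A sketch using the (7, 4) Hamming generator G = (I | P), consistent with the parity-check matrix H = (P^T | I) used for syndrome decoding later in the lecture:

```python
import numpy as np

# Generator matrix G = (I | P) for the (7, 4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])
# Parity-check matrix H = (P^T | I) for the same code.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

m = np.array([1, 0, 1, 1])        # example 4-bit message
x = np.dot(m, G) % 2              # codeword: message bits followed by parity
print(x)
print(np.dot(x, H.T) % 2)         # a valid codeword gives the all-zero syndrome
```

Because the code is systematic, the first four bits of x are the message itself.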

Syndrome Decoding

How should we detect and correct errors in a block code? An obvious approach is to store the codebook in memory and compute the Hamming distance between the received block and each codeword in the codebook. We detect an error if there is no codeword at zero distance. If there is a unique codeword at minimum distance, we decode to it; otherwise we declare an uncorrectable error. This accounts for the most likely pattern of bit errors and so is called maximum likelihood (ML) decoding.

The problem with this approach is that we must store, and compute distances from, all 2^k codewords, each of n bits, which is prohibitive if k is large. With a systematic, linear code, we can define a parity check matrix H = (P^T | I) such that vH^T = 0 for any codeword v.
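The codebook-search ML decoder can be sketched directly, here using the (7, 4) Hamming generator from the earlier example as the codebook (any linear block code would do):

```python
import numpy as np
from itertools import product

# (7, 4) Hamming generator matrix from the earlier example.
G = np.array([[1, 0, 0, 0, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 0, 1, 1]])

# Store the whole codebook: codeword -> message. This is the part that
# becomes prohibitive for large k (2^k entries of n bits each).
codebook = {tuple(np.dot(m, G) % 2): m for m in product([0, 1], repeat=4)}

def ml_decode(y):
    """Return the message whose codeword is nearest to y in Hamming distance."""
    x = min(codebook, key=lambda c: sum(ci != yi for ci, yi in zip(c, y)))
    return codebook[x]

y = np.array([1, 0, 1, 1, 0, 1, 0])   # codeword for (1,0,1,1) with one bit flipped
print(ml_decode(y))
```

Since dmin = 3, any single-bit error leaves a unique nearest codeword; a real decoder would also flag ties as uncorrectable rather than pick the first minimum.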


Conversely, if v is not a codeword, vH^T ≠ 0. With y being the received vector, we call s = yH^T the syndrome. Let e represent the bit errors, so that y = x + e, where x is the transmitted codeword. Observe that

s = yH^T = xH^T + eH^T = eH^T.

Hence, the syndrome is a function only of the bit errors. To decode, we consult a table to determine the ML (lowest weight) error vector that gives rise to the observed syndrome. The advantage is that the syndrome table has only 2^q entries, where q = n - k, rather than 2^k. Since there are C(n, w) bit patterns of length n with weight w, we need

2^q ≥ C(n, 0) + · · · + C(n, t)

in order to have enough syndrome patterns to decode up to t errors.


Example: The Hamming Code


In the (7, 4) Hamming code, observe that any one-bit error yields the corresponding column of H as the syndrome, where

        ( 1 1 1 0 | 1 0 0 )
H  =    ( 0 1 1 1 | 0 1 0 )
        ( 1 1 0 1 | 0 0 1 )

The columns of H contain all possible combinations of s except s = 0. So, to decode when s ≠ 0, we find the column of H that matches s and flip the corresponding bit in y.
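The column-matching rule amounts to a tiny look-up table, one entry per column of H. A sketch:

```python
import numpy as np

# Parity-check matrix of the (7, 4) Hamming code.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Syndrome table: each column of H maps to the bit position to flip.
table = {tuple(H[:, i]): i for i in range(7)}

def correct(y):
    """Correct a single-bit error in y using the syndrome table."""
    s = tuple(np.dot(y, H.T) % 2)
    if any(s):                      # non-zero syndrome: flip the indicated bit
        y = y.copy()
        y[table[s]] ^= 1
    return y

y = np.array([1, 0, 1, 1, 0, 1, 0])   # valid codeword with one bit flipped
print(correct(y))
```

The table has 2^q - 1 = 7 entries, versus the 2^k = 16 codewords a full codebook search would need; the gap widens rapidly for larger codes.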


Cyclic Codes

Hardware encoding and decoding can be simplified for powerful codes by using cyclic codes. By powerful, we mean a code that can detect and/or correct many errors in a single codeword. Cyclic codes are linear block codes in which the cyclic shift of any codeword gives another codeword. The codes arise from the theory of Galois fields, beyond the scope of this course, but we present some of the key elements.

To describe cyclic codes, we think of codewords as polynomials, so that a code vector x can be written as a polynomial x(p) with

x(p) = x_{n-1} p^{n-1} + · · · + x_1 p + x_0.

A cyclic shift of the codeword can be written

x′(p) = x_{n-2} p^{n-1} + · · · + x_0 p + x_{n-1} = p x(p) + x_{n-1} (p^n + 1)

where addition is again modulo-2.
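The shift identity can be checked numerically. A sketch with polynomials over GF(2) stored as Python ints, where bit i holds the coefficient of p^i:

```python
# Check that x'(p) = p x(p) + x_{n-1} (p^n + 1) really is the cyclic shift.
n = 7
x = 0b1011000          # x(p) = p^6 + p^4 + p^3, i.e. codeword (1,0,1,1,0,0,0)

x_top = (x >> (n - 1)) & 1                     # x_{n-1}, coefficient of p^{n-1}
mask = (1 << n) - 1                            # keep only degrees below n

# Right-hand side of the identity: p x(p) + x_{n-1} (p^n + 1), addition = XOR.
shifted = ((x << 1) ^ (x_top * ((1 << n) | 1))) & mask

# Direct cyclic shift of the coefficient vector, for comparison.
direct = ((x << 1) | x_top) & mask
print(shifted == direct)
```

The x_{n-1}(p^n + 1) term cancels the overflow at degree n and re-inserts the top coefficient at degree 0, which is exactly a rotation.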

Encoding
Let g(p), the generator polynomial, be a factor of p^n + 1 of order q = n - k. A cyclic code consists of codewords of the form

x(p) = q_m(p) g(p)    (1)

where q_m(p) has order k - 1. With m(p) representing the message bits, we also write

x(p) = p^q m(p) + c(p).    (2)

Equating (1) and (2), we see that

p^q m(p) / g(p) = q_m(p) + c(p) / g(p).

That is, c(p) is the remainder of dividing p^q m(p) by g(p). The division operation can be efficiently performed in hardware.
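The encoding rule (compute c(p) as the remainder of p^q m(p) divided by g(p), then append it to the shifted message) can be sketched in software, here with the (7, 4) Hamming generator g(p) = p^3 + p + 1:

```python
def gf2_mod(dividend, divisor):
    """Remainder of modulo-2 polynomial division (polynomials as int bitmasks)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

# Systematic cyclic encoding: shift the message up by q = n - k places,
# then append the remainder c(p) of p^q m(p) / g(p) as the parity bits.
g = 0b1011                       # g(p) = p^3 + p + 1 for the (7, 4) Hamming code
n, k = 7, 4
q = n - k

m = 0b1011                       # message bits m(p)
c = gf2_mod(m << q, g)           # parity bits = remainder
x = (m << q) | c                 # systematic codeword: message | parity
print(format(x, f'0{n}b'))
```

A hardware encoder performs the same division with a q-stage shift register and a few XOR gates.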



Decoding
The syndrome is also calculated by division by g(p). A relatively simple shift-register circuit can be used here too. Many cyclic codes also have simple methods of processing the syndrome that do away with table look-up.


Examples
Useful cyclic codes come about from careful selection of g(p).

The (7, 4) Hamming code is generated by g(p) = p^3 + p + 1. Other examples of important cyclic codes are the Bose-Chaudhuri-Hocquenghem (BCH) codes and the (23, 12) Golay code. The Golay code has g(p) = p^11 + p^9 + p^7 + p^6 + p^5 + p + 1 and dmin = 7.

The cyclic redundancy check (CRC) codes are intended for error detection rather than correction. For instance, CRC-16 has g(p) = p^16 + p^15 + p^2 + 1. In CRC-q codes, it is q that is specified, and k can vary (within bounds) according to the length of the message to be protected. They are also prized for their ability to detect long burst errors (longer than should be expected from the value of q).
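CRC error detection follows the same division idea. A sketch using the CRC-16 generator p^16 + p^15 + p^2 + 1 (bitmask 0x18005), treated as raw polynomial division; real CRC-16 implementations add conventions such as initial values and bit reflection that are not modeled here, and the message value below is arbitrary:

```python
def crc_remainder(bits, g):
    """Remainder of modulo-2 polynomial division (polynomials as int bitmasks)."""
    glen = g.bit_length()
    while bits.bit_length() >= glen:
        bits ^= g << (bits.bit_length() - glen)
    return bits

G_CRC16 = 0x18005                  # g(p) = p^16 + p^15 + p^2 + 1
q = 16

msg = 0xACE1                       # example message bits (arbitrary)
crc = crc_remainder(msg << q, G_CRC16)
frame = (msg << q) | crc           # transmitted frame: message | CRC

# The receiver re-divides the whole frame: zero remainder means no error
# was detected, non-zero means an error was caught.
print(crc_remainder(frame, G_CRC16))
print(crc_remainder(frame ^ 0b100, G_CRC16) != 0)
```

Because the CRC is the remainder of the shifted message, the complete frame is divisible by g(p), and any error pattern that is not itself a multiple of g(p) is detected.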


M -ary Codes
The polynomials described up to now assume binary coefficients. The theory can be extended to symbols from an M-ary alphabet. The Reed-Solomon (RS) codes are subsets of the BCH codes for M-ary encoding. With n, k and q now referring to symbols rather than bits, RS codes can correct up to q/2 symbols per block.

