Introduction to Algebra
2 jj 10/3/2018
Group
Fields
1 jj 10/8/2018
Codes for error detection and correction
parity check coding
Introduction
Codes for error detection and correction
Error Control Coding (ECC), also known as Forward Error Correction (FEC)
Extra bits are added to the data at the transmitter (redundancy) to permit error detection or correction at the receiver.
This is done to prevent the output of erroneous bits despite noise and other imperfections in the channel.
There are two main types, namely block codes and convolutional codes.
Block Codes
A vector notation is used for the datawords and codewords:
Dataword d = (d1 d2 … dk)
Codeword c = (c1 c2 … cn)
The redundancy introduced by the code is quantified by the code rate:
Code rate = k/n
i.e., the higher the redundancy, the lower the code rate
Block Codes - Example
Dataword length k = 4
Codeword length n = 7
This is a (7,4) block code with code rate = 4/7
For example, d = (1101), c = (1101001)
Parity Codes
Example of a simple block code – the Single Parity Check Code
In this case, n = k + 1, i.e., the codeword is the dataword with one additional bit
For 'even' parity the additional bit is
q = (d1 + d2 + … + dk) mod 2
Parity Codes - Example
Coding table for the (4,3) even parity code

Dataword   Codeword
0 0 0      0 0 0 0
0 0 1      0 0 1 1
0 1 0      0 1 0 1
0 1 1      0 1 1 0
1 0 0      1 0 0 1
1 0 1      1 0 1 0
1 1 0      1 1 0 0
1 1 1      1 1 1 1
Parity Codes
To decode:
Calculate the sum (mod 2) of the received bits in the block
If the sum is 0 for even parity (or 1 for odd parity), the dataword is the first k bits of the received codeword; otherwise an error has been detected
The code can detect single errors
But it cannot correct the error, since the error could be in any bit
For example, if the received codeword is (100000), the transmitted codeword could have been (000000) or (110000), with the error being in the first or second position respectively
Note the error could also lie in other positions, including the parity bit
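As a concrete illustration, the encode/decode procedure above can be sketched in Python (a minimal sketch; the function names are my own):

```python
def parity_encode(data):
    """Append an even-parity bit so the codeword has an even number of 1s."""
    return data + [sum(data) % 2]

def parity_decode(codeword):
    """Return (dataword, error_detected) for an even-parity codeword."""
    error_detected = sum(codeword) % 2 != 0
    return codeword[:-1], error_detected
```

For the (4,3) code above, parity_encode([0, 1, 1]) gives [0, 1, 1, 0], matching the coding table; flipping any single bit makes the mod-2 sum odd, so the error is detected but its position is unknown.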
Known as a single error detecting (SED) code. It is only useful if the probability of two errors is small, since with two errors the parity becomes correct again and the errors go undetected
Used in serial communications
Low overhead, but not very powerful
The decoder can be implemented efficiently using a tree of XOR gates
Hamming Distance
Error control capability is determined by the
Hamming distance
The Hamming distance between two codewords is
equal to the number of differences between them,
e.g.,
10011011
11010010 have a Hamming distance = 3
Alternatively, can compute by adding the codewords (mod 2):
01001001 (now count up the ones)
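The definition above amounts to counting mismatched positions, which can be sketched as:

```python
def hamming_distance(a, b):
    """Count the positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))
```

For the pair in the example, hamming_distance("10011011", "11010010") returns 3.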
The Hamming distance of a code is equal to the minimum Hamming distance between any two distinct codewords
If the minimum Hamming distance is:
3 – can correct single errors (SEC) or can detect double errors (DED); 3 errors will yield a valid but incorrect codeword
The maximum number of detectable errors is
dmin − 1
The maximum number of correctable errors is given by
t = ⌊(dmin − 1)/2⌋
where dmin is the minimum Hamming distance between 2 codewords and ⌊·⌋ denotes rounding down to the nearest integer
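These two quantities can be checked numerically for the (4,3) even parity code from the earlier table (a rough sketch; the helper name is my own):

```python
from itertools import combinations

def minimum_distance(codewords):
    """Smallest Hamming distance over all pairs of distinct codewords."""
    return min(sum(x != y for x, y in zip(a, b))
               for a, b in combinations(codewords, 2))

# Codewords of the (4,3) even parity code from the earlier coding table
code = ["0000", "0011", "0101", "0110", "1001", "1010", "1100", "1111"]
d_min = minimum_distance(code)   # 2, so it detects dmin - 1 = 1 error
t = (d_min - 1) // 2             # 0: the code corrects no errors, as noted earlier
```

This agrees with the earlier discussion: a single parity check code detects single errors but cannot correct any.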
1 jj 10/15/2018
Linear block codes
Error detecting and correcting capabilities
Generator and parity-check matrices
Standard array and syndrome decoding
Linear Block Codes
By definition, a code is said to be linear if any two codewords in the code can be added in modulo-2 arithmetic to produce a third codeword in the code.
Consider an (n,k) linear block code in which k of the n code bits are always identical to the message sequence to be transmitted.
The remaining (n – k) bits are computed from the message bits in accordance with a prescribed encoding rule that determines the mathematical structure of the code.
These (n – k) bits are referred to as parity-check bits.
Block codes in which the message bits are transmitted in
unaltered form are called systematic codes
For applications requiring both error detection and error
correction, the use of systematic block codes simplifies
implementation of the decoder.
Let m0, m1, …, mk–1 constitute a block of k arbitrary message bits, giving 2^k distinct message blocks.
The encoder produces an n-bit codeword (c0, c1, …, cn–1).
Let (b0, b1, …, bn–k–1) denote the (n – k) parity-check bits in the codeword.
For the code to possess a systematic structure, a codeword is divided into two parts: message bits and parity-check bits.
The message bits of a codeword may appear before the parity-check bits, or vice versa.
Here, the (n – k) leftmost bits of a codeword are identical to the corresponding parity-check bits, and the k rightmost bits of the codeword are identical to the corresponding message bits.
The generator matrix G is in canonical form: its k rows are linearly independent.
It is not possible to express any row of the matrix G as a linear combination of the remaining rows.
The full set of codewords, referred to simply as the code, is generated as c = mG, by letting the message vector m range over the set of all 2^k binary k-tuples (1-by-k vectors).
The sum of any two codewords in the code is another codeword.
This basic property of linear block codes is called closure.
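The generation rule c = mG and the closure property can be sketched as follows (mod-2 arithmetic throughout; the particular G shown is one systematic (7,4) choice used purely for illustration):

```python
def encode(m, G):
    """Codeword c = mG over GF(2): all sums are taken mod 2."""
    k, n = len(G), len(G[0])
    return [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]

# A systematic (7,4) generator matrix G = [I4 | P], chosen for illustration
G = [[1, 0, 0, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 1, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 0, 1, 1]]

# Closure: the mod-2 sum of two codewords is the codeword of the summed messages
m1, m2 = [1, 0, 1, 1], [0, 1, 1, 0]
c1, c2 = encode(m1, G), encode(m2, G)
c_sum = [(a + b) % 2 for a, b in zip(c1, c2)]
m_sum = [(a + b) % 2 for a, b in zip(m1, m2)]
assert c_sum == encode(m_sum, G)   # closure holds
```

With this G, the message d = (1101) encodes to (1101001), matching the (7,4) example given earlier in these notes.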
To prove its validity, consider a pair of code vectors ci and cj corresponding to a pair of message vectors mi and mj.
There is another way of expressing the relationship between the message bits
and parity-check bits of a linear block code.
Let H denote an (n – k)-by-n matrix, defined as
The matrix H is called the parity-check matrix of the code
and the equations specified by (10.16) are called parity-check
equations.
The generator equation (10.13) and the parity-check equation (10.16) are basic to the description and operation of a linear block code.
These two equations are depicted in the form of block diagrams in
Figure 10.5a and b, respectively.
Example 1: (7, 4) Hamming code over GF(2)
The encoding equations for this code are given by
c0 = m0
c1 = m1
c2 = m2
c3 = m3
c4 = m0 + m1 + m2
c5 = m1 + m2 + m3
c6 = m0 + m1 + m3
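The encoding equations above translate directly into code (a minimal sketch; the function name is my own):

```python
def hamming74_encode(m):
    """Systematic (7,4) Hamming encoder implementing the equations above (mod-2 sums)."""
    m0, m1, m2, m3 = m
    return [m0, m1, m2, m3,
            (m0 + m1 + m2) % 2,
            (m1 + m2 + m3) % 2,
            (m0 + m1 + m3) % 2]
```

hamming74_encode([1, 1, 0, 1]) returns [1, 1, 0, 1, 0, 0, 1], agreeing with the earlier (7,4) example d = (1101), c = (1101001).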
Example 2:
H =
1 0 0 1 0 1 1
0 1 0 1 1 1 0
0 0 1 0 1 1 1
Decoding
The generator matrix G is used in the encoding operation at the transmitter.
The parity-check matrix H is used in the decoding operation at the receiver.
Let r denote the 1-by-n received vector that results from sending the code vector c over a noisy binary channel.
Express the vector r as the sum of the original code vector c and a new vector e:
r = c + e
Error vector or Error pattern
The vector e is called the error vector or error pattern.
The ith element of e equals 0 if the corresponding element of r is the same as that of c.
The ith element of e equals 1 if the corresponding element of r is different from that of c, i.e., an error has occurred in the ith location.
That is, for i = 1, 2, …, n, we have ei = ri + ci (mod 2).
Decoding
Let c be transmitted and r be received, where
r = c + e
e = error pattern = (e1 e2 … en), where e = c + r (mod 2)
Syndrome: Properties
Property 3: For a linear block code, the syndrome s is equal to the sum of those rows of the transposed parity-check matrix HT where errors have occurred due to channel noise.
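Property 3 can be checked numerically: with s = rH^T (mod 2), a single error in position i yields a syndrome equal to the ith row of HT, i.e., the ith column of H. A minimal sketch, using the H of Example 2:

```python
def syndrome(r, H):
    """Syndrome s = r H^T over GF(2); s = 0 exactly when r is a codeword."""
    return [sum(r[j] * row[j] for j in range(len(r))) % 2 for row in H]

# Parity-check matrix from Example 2
H = [[1, 0, 0, 1, 0, 1, 1],
     [0, 1, 0, 1, 1, 1, 0],
     [0, 0, 1, 0, 1, 1, 1]]

# A single error in position 3 produces the fourth column of H as its syndrome
e = [0, 0, 0, 1, 0, 0, 0]
print(syndrome(e, H))   # [1, 1, 0]
```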
Minimum Distance Considerations
The Hamming weight w(c) of a code vector c is defined
as the number of nonzero elements in the code vector.
The minimum distance is the same as the smallest Hamming weight of the difference between any pair of code vectors.
From the closure property of linear block codes, the sum (or difference) of two code vectors is another code vector.
dmin is related to the structure of the parity-check
matrix H of the code
cHT = 0, where HT is the transpose of the parity-check
matrix H
Let the matrix H be expressed in terms of its columns as H =
[h1, h2,… hn]
For a code vector c to satisfy the condition cHT = 0, the
vector c must have ones in such positions that the
corresponding rows of HT sum to the zero vector 0.
By definition, the number of ones in a code vector is the Hamming weight of the code vector.
The smallest Hamming weight of the nonzero code vectors in a linear block code equals the minimum distance of the code.
dmin determines the error-correcting capability of the code.
Suppose an (n,k) linear block code is required to detect and correct all error patterns whose Hamming weight is less than or equal to t, over a binary symmetric channel.
That is, if a code vector ci in the code is transmitted and the received vector is r = ci + e, we require that the decoder output ci whenever the error pattern e has Hamming weight w(e) ≤ t.
Assume that the 2^k code vectors in the code are transmitted with equal probability.
The best strategy for the decoder is then to pick the code vector closest to the received vector r, that is, the one for which the Hamming distance d(ci, r) is smallest.
The decoder will be able to detect and correct all error patterns of Hamming weight w(e) ≤ t, provided that the minimum distance of the code is equal to or greater than 2t + 1.
We demonstrate the validity of this requirement by adopting a geometric interpretation: the transmitted 1-by-n code vector and the 1-by-n received vector are represented as points in an n-dimensional space.
Construct two spheres, each of radius t, around the points that represent code vectors ci and cj, under two different conditions:
1. Let the two spheres be disjoint, i.e., d(ci, cj) ≥ 2t + 1. If the code vector ci is transmitted and the Hamming distance d(ci, r) ≤ t, it is clear that the decoder will pick ci, as it is the code vector closest to the received vector r.
2. If, on the other hand, the Hamming distance d(ci, cj) ≤ 2t, the two spheres around ci and cj intersect.
In this case, if ci is transmitted, there exists a received vector r such that the Hamming distance d(ci, r) ≤ t and yet r is as close to cj as it is to ci; there is now the possibility of the decoder picking the wrong vector cj.
Thus, an (n,k) linear block code has the power to correct all error patterns of weight t or less if, and only if, d(ci, cj) ≥ 2t + 1 for all ci and cj.
Decoding
A syndrome-based decoding scheme for linear block codes:
Consider the 2^k code vectors of an (n, k) linear block code, and let r be the received vector, which may have any one of 2^n possible values.
The receiver has the task of partitioning the 2^n possible received vectors into 2^k disjoint subsets in such a way that the ith subset Di corresponds to code vector ci for 1 ≤ i ≤ 2^k.
The received vector r is decoded into ci if it is in the ith subset.
For the decoding to be correct, r must be in the subset that belongs to the code vector ci that was actually sent.
The 2^k subsets described herein constitute a standard array of the linear block code.
Standard array
To construct the standard array:
1. The 2^k code vectors are placed in a row with the all-zero code vector c1 as the leftmost element.
2. An error pattern e2 is picked and placed under c1, and a second row is formed by adding e2 to each of the remaining code vectors in the first row; it is important that the error pattern chosen as the first element in a row has not previously appeared in the standard array.
3. Step 2 is repeated until all the possible error patterns have been accounted for.
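The three construction steps can be sketched as follows (a rough Python sketch; coset leaders are taken in order of increasing Hamming weight, the usual choice for minimum-distance decoding, and codewords are represented as tuples of bits):

```python
from itertools import product

def standard_array(codewords):
    """Build the standard array: one row (coset) per unused error pattern,
    taking the lowest-weight unused pattern as each row's coset leader."""
    n = len(codewords[0])
    # All 2^n words, lowest Hamming weight first
    candidates = sorted(product([0, 1], repeat=n), key=lambda w: (sum(w), w))
    used, rows = set(), []
    for e in candidates:
        if e in used:
            continue                      # pattern already appears in the array
        row = [tuple(a ^ b for a, b in zip(e, c)) for c in codewords]
        rows.append(row)
        used.update(row)
    return rows
```

For an (n, k) code this yields 2^(n−k) rows of 2^k entries each; the first row is the code itself, since the first coset leader is the all-zero pattern.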
Standard Array Decoding
H =
1 0 0 0 1 1
0 1 0 1 0 1
0 0 1 1 1 0
Parity-check equations: c1 = c5 + c6, c2 = c4 + c6, c3 = c4 + c5
Codewords:
000000, 110001, 101010, 011011, 011100, 101101, 110110, 000111
dmin = 3
Standard Array Decoding (cont’d)
Can correct all single errors and one double error pattern
Syndrome Decoding
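Syndrome decoding replaces the full standard array with a small table mapping each syndrome to its coset leader. A minimal sketch (function names are my own), using the (6,3) parity-check matrix from the standard-array example above:

```python
from itertools import product

def build_syndrome_table(H):
    """Map each syndrome to the lowest-weight error pattern that produces it."""
    n = len(H[0])
    table = {}
    for e in sorted(product([0, 1], repeat=n), key=sum):
        s = tuple(sum(e[j] * row[j] for j in range(n)) % 2 for row in H)
        table.setdefault(s, e)       # keep the first (lowest-weight) pattern
    return table

def syndrome_decode(r, H, table):
    """Correct r by adding the coset leader selected by its syndrome."""
    s = tuple(sum(r[j] * row[j] for j in range(len(r))) % 2 for row in H)
    return [a ^ b for a, b in zip(r, table[s])]

# (6,3) code from the standard-array example
H = [[1, 0, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1],
     [0, 0, 1, 1, 1, 0]]
table = build_syndrome_table(H)
print(syndrome_decode([0, 1, 0, 0, 0, 1], H, table))   # recovers 110001
```

Here the received vector is the codeword 110001 with its first bit flipped; the syndrome picks out the first column of H, so the single error is corrected.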