Waveform Coding
Types of Error Control
Structured Sequences
Linear Block Codes
Error Detecting and Correcting Capability
Cyclic Codes
Block Codes
Channel Coding
Waveform Coding
Constellation Diagram
Antipodal
ON-OFF
With OOK, there are just 2 symbol states to map onto the constellation space.
Orthogonal
Antipodal:

    s1(t) = -s2(t)                                    (6.1)

with signal energy

    E = integral from 0 to T of si^2(t) dt            (6.2)

The cross-correlation coefficient between signals is

    zij = (1/E) integral from 0 to T of si(t) sj(t) dt
        = 1 for i = j, 0 otherwise                    (6.3)

for an orthogonal signal set (for an antipodal pair, z12 = -1).
Orthogonal Codes

A one-bit data set can be transformed into the orthogonal codeword set H1:

    Data set    H1
    0           0 0
    1           0 1

For a two-bit data set, H2 is built from H1 and its complement H1':

    Data set    H2 = [H1 H1; H1 H1']
    0 0         0 0 | 0 0
    0 1         0 1 | 0 1
    1 0         0 0 | 1 1
    1 1         0 1 | 1 0

For a three-bit data set, H3 is built from H2 in the same way:

    Data set    H3 = [H2 H2; H2 H2']
    0 0 0       0 0 0 0 | 0 0 0 0
    0 0 1       0 1 0 1 | 0 1 0 1
    0 1 0       0 0 1 1 | 0 0 1 1
    0 1 1       0 1 1 0 | 0 1 1 0
    1 0 0       0 0 0 0 | 1 1 1 1
    1 0 1       0 1 0 1 | 1 0 1 0
    1 1 0       0 0 1 1 | 1 1 0 0
    1 1 1       0 1 1 0 | 1 0 0 1

In general, each orthogonal codeword set is built recursively as

    Hk = [ H(k-1)  H(k-1) ]
         [ H(k-1)  H(k-1)' ]                          (6.4)

where H' denotes the complement of H.
Each pair of words in each codeword set H1, H2, ..., Hk has as many digit agreements as disagreements. Hence, in accordance with Equation (6.3), zij = 0 (for i != j), and each set is orthogonal.
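The recursive construction of Equation (6.4) is easy to check numerically. A minimal sketch (the function name `hadamard` is our own, not from the source):

```python
# Sketch: recursive orthogonal codeword-set construction per Equation (6.4),
# H_k = [[H_{k-1}, H_{k-1}], [H_{k-1}, complement(H_{k-1})]].

def hadamard(k):
    """Return the 2^k x 2^k codeword set H_k as lists of bits."""
    H = [[0, 0], [0, 1]]          # H_1
    for _ in range(k - 1):
        H = ([row + row for row in H] +
             [row + [b ^ 1 for b in row] for row in H])
    return H

H2 = hadamard(2)
# Every distinct pair of codewords agrees in exactly half the positions,
# so z_ij = 0 for i != j, as Equation (6.3) requires.
for i in range(len(H2)):
    for j in range(i + 1, len(H2)):
        agreements = sum(a == b for a, b in zip(H2[i], H2[j]))
        assert agreements == len(H2[i]) // 2
```

The same pairwise check passes for every `hadamard(k)`, since the recursion preserves the agreements-equal-disagreements property.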
Just as in M-ary signaling with an orthogonal format (such as MFSK), the error performance improves as M grows. The probability of codeword error, PE, can be upper bounded as
    PE(M) <= (M - 1) Q( sqrt(Es / N0) )               (6.5)

where Es = k Eb.
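The bound of Equation (6.5) can be evaluated directly; a sketch, using the standard identity Q(x) = (1/2) erfc(x / sqrt(2)):

```python
# Sketch: evaluating the union bound of Equation (6.5),
# P_E(M) <= (M - 1) Q(sqrt(Es/N0)), with Es = k*Eb.
import math

def Q(x):
    """Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pe_orthogonal_bound(k, EbN0):
    """Upper bound on codeword error probability for M = 2^k orthogonal words."""
    M = 2 ** k
    EsN0 = k * EbN0            # Es = k * Eb
    return (M - 1) * Q(math.sqrt(EsN0))

# Example: k = 5 (M = 32) at Eb/N0 = 4 (about 6 dB)
print(pe_orthogonal_bound(5, 4.0))
```

The bound tightens rapidly as Eb/N0 increases, since Q() falls off exponentially.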
Biorthogonal Codes
A biorthogonal signal set of M total signals or codewords can be obtained from an orthogonal set of M/2 signals by augmenting it with the negative of each signal:

    Bk = [ H(k-1)  ]
         [ H(k-1)' ]
    Data set    B3 codeword
    0 0 0       0 0 0 0
    0 0 1       0 1 0 1
    0 1 0       0 0 1 1
    0 1 1       0 1 1 0
    1 0 0       1 1 1 1
    1 0 1       1 0 1 0
    1 1 0       1 1 0 0
    1 1 1       1 0 0 1
The biorthogonal set is really two sets of orthogonal codes such that each codeword in one set has its antipodal codeword in the other set:

    zij =  1   for i = j
           0   for i != j, |i - j| != M/2             (6.8)
          -1   for i != j, |i - j| = M/2
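The cross-correlation values of Equation (6.8) can be verified numerically by mapping code bits 0/1 to antipodal amplitudes +1/-1. A sketch (helper names are our own):

```python
# Sketch: build the biorthogonal set B_k = [H_{k-1}; complement(H_{k-1})]
# and verify the cross-correlations of Equation (6.8).

def hadamard(k):
    """Orthogonal codeword set H_k per Equation (6.4)."""
    H = [[0, 0], [0, 1]]
    for _ in range(k - 1):
        H = [r + r for r in H] + [r + [b ^ 1 for b in r] for r in H]
    return H

def biorthogonal(k):
    """B_k: the orthogonal set H_{k-1} plus its bitwise complement."""
    H = hadamard(k - 1)
    return H + [[b ^ 1 for b in row] for row in H]

B3 = biorthogonal(3)
M = len(B3)                      # M = 8 codewords, each of length M/2 = 4
for i in range(M):
    for j in range(M):
        # Map bits to +/-1 and compute the normalized correlation z_ij
        z = sum((1 - 2 * a) * (1 - 2 * b)
                for a, b in zip(B3[i], B3[j])) / len(B3[i])
        if i == j:
            assert z == 1
        elif abs(i - j) == M // 2:
            assert z == -1       # antipodal pair
        else:
            assert z == 0        # orthogonal pair
```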
Advantages
The biorthogonal code requires one-half as many code bits per codeword as orthogonal codes.
Since antipodal signal vectors have better distance properties than orthogonal ones, biorthogonal codes perform slightly better than orthogonal ones.
Probability of codeword error can be upper bounded as:
    PE(M) <= (M - 2) Q( sqrt(Es / N0) ) + Q( sqrt(2 Es / N0) )    (6.9)
Error-control coding trades between two system resources: transmitter power and channel bandwidth.
Channel encoder
The channel encoder adds extra bits (redundancy) to the message bits.
The encoded signal is then transmitted over the noisy channel.
Channel decoder
The channel decoder identifies the redundant bits and uses them to detect
and correct the errors in the message bits if any.
Thus the number of errors introduced by channel noise is minimized by the encoder and decoder.
Due to the redundant bits, the overall data rate increases; hence the channel has to accommodate this increased data rate.
The system becomes slightly more complex because of the coding techniques.
Block codes
Convolutional codes
Turbo codes
Nonlinear code:
Two-dimensional Parity-check Code (Rectangular Code)
The k = MN message bits are arranged in an M x N array; appending a parity bit to each row and each column gives a codeword of length n = (M + 1)(N + 1).
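A sketch of such a rectangular encoder, assuming even parity and row-major bit ordering (both are our assumptions, not from the source):

```python
# Sketch: encoder for a two-dimensional (rectangular) parity-check code.
# k = M*N message bits fill an M x N array; a parity bit is appended to
# every row and every column (plus a corner parity over the row-parity
# column), giving n = (M+1)*(N+1) code bits.

def rectangular_encode(bits, M, N):
    assert len(bits) == M * N
    rows = [bits[r * N:(r + 1) * N] for r in range(M)]
    # Even parity appended to each row
    coded = [row + [sum(row) % 2] for row in rows]
    # Even parity over each extended column, including the corner bit
    col_par = [sum(coded[r][c] for r in range(M)) % 2 for c in range(N + 1)]
    coded.append(col_par)
    return [b for row in coded for b in row]

# A single bit error flips exactly one row parity and one column parity,
# which locates the error position and so allows it to be corrected.
cw = rectangular_encode([1, 0, 1, 1, 0, 0], M=2, N=3)
print(len(cw))   # n = (2+1)*(3+1) = 12
```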
Figure 6.9: Comparison of typical coded versus uncoded error performance (ideal PB, probability of bit error, versus Eb/N0).
Aside from the new components (encoder and decoder) needed, the
price is more transmission bandwidth.
Thus, the trade-off is one in which the same quality of data is achieved,
but the coding allows for a reduction in power or Eb/N0.
Modulo-2 addition:
    0 + 0 = 0
    0 + 1 = 1
    1 + 0 = 1
    1 + 1 = 0

Modulo-2 multiplication:
    0 . 0 = 0
    0 . 1 = 0
    1 . 0 = 0
    1 . 1 = 1
    Message    Codeword
    000        000000
    100        110100
    010        011010
    110        101110
    001        101001
    101        011101
    011        110011
    111        000111
        [ V1 ]   [ v11  v12  ...  v1n ]
    G = [ V2 ] = [ v21  v22  ...  v2n ]               (6.24)
        [ ...]   [ ...               ]
        [ Vk ]   [ vk1  vk2  ...  vkn ]

    U = m G = m1 V1 + m2 V2 + ... + mk Vk             (6.25)

        [ V1 ]   [ 1 1 0 1 0 0 ]
    G = [ V2 ] = [ 0 1 1 0 1 0 ]                      (6.26)
        [ V3 ]   [ 1 0 1 0 0 1 ]
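Encoding is then U = mG (mod 2). A sketch that reproduces the (6,3) codeword table above:

```python
# Sketch: encoding with the generator matrix of Equation (6.26),
# U = m G (mod 2). Enumerating all 8 messages reproduces the
# (6,3) message/codeword table.
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G):
    """Multiply message row vector m by G over GF(2)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

for m in product([0, 1], repeat=3):
    print(m, '->', encode(list(m), G))
```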
A systematic (n, k) linear block code is a mapping from a k-dimensional message vector to an n-dimensional codeword in such a way that part of the generated sequence coincides with the k message digits. The remaining (n - k) digits are parity digits.
A systematic linear code will have a generator matrix of the form

                 [ p11  p12  ...  p1,(n-k) | 1 0 ... 0 ]
    G = [P | Ik] = [ p21  p22  ...  p2,(n-k) | 0 1 ... 0 ]    (6.27)
                 [ ...                      |   ...    ]
                 [ pk1  pk2  ...  pk,(n-k)  | 0 0 ... 1 ]

where P is the k x (n - k) parity portion of the generator matrix and Ik is the k x k identity matrix. For example, a (7,4) code has a generator of the form

                   [ 1 1 0 | 1 0 0 0 ]
    G = [P | I4] = [ 0 1 1 | 0 1 0 0 ]
                   [ 1 1 1 | 0 0 1 0 ]
                   [ 1 0 1 | 0 0 0 1 ]
Given the message k-tuple

    m = m1, m2, ..., mk

and the general code vector n-tuple

    U = u1, u2, ..., un

the code bits are

    ui = m1 p1i + m2 p2i + ... + mk pki   for i = 1, ..., (n - k)
    ui = m(i-n+k)                         for i = (n - k) + 1, ..., n    (6.28)

so that

    U = p1, p2, ..., p(n-k), m1, m2, ..., mk          (6.29)

where the first (n - k) bits are parity bits and the last k bits are message bits.
Example
For the (6,3) code example in Section 6.4.3, the codewords can be described as:

                     [ 1 1 0 | 1 0 0 ]
    U = [m1, m2, m3] [ 0 1 1 | 0 1 0 ]                (6.30)
                     [ 1 0 1 | 0 0 1 ]
                          P      I

      = (m1 + m3, m1 + m2, m2 + m3, m1, m2, m3)
      = (   u1  ,    u2  ,    u3  , u4, u5, u6)       (6.31)
Equation (6.31) shows that the first parity bit is the sum of the first and third message bits; the second parity bit is the sum of the first and second message bits; and the third parity bit is the sum of the second and third message bits.
Such structure, compared with single-parity checks or simple digit-repeat procedures, may provide greater ability to detect and correct errors.
We see from the structure of the linear block code (Equation (6.31)) that the redundant digits are produced in a variety of ways.
Let H denote the parity-check matrix that will enable us to decode the received vectors.
For each (k x n) generator matrix G, there exists an (n - k) x n matrix H such that the rows of G are orthogonal to the rows of H, i.e., GHT = 0.
Fulfilling the orthogonality requirements:
    H = [ I(n-k) | PT ]                               (6.32)

and

         [ I(n-k) ]   [ 1    0    ...  0        ]
    HT = [ ------ ] = [ 0    1    ...  0        ]
         [   P    ]   [ ...                     ]
                      [ 0    0    ...  1        ]     (6.33)
                      [ p11  p12  ...  p1,(n-k) ]
                      [ p21  p22  ...  p2,(n-k) ]
                      [ ...                     ]
                      [ pk1  pk2  ...  pk,(n-k) ]
The syndrome of the received vector r is

    S = r HT

For the (6,3) code,

         [ 1 0 0 ]
         [ 0 1 0 ]
    HT = [ 0 0 1 ]
         [ 1 1 0 ]
         [ 0 1 1 ]
         [ 1 0 1 ]

and for the received vector r = [0 0 1 1 1 0],

    S = [0 0 1 1 1 0] HT = [1 0 0]

The syndrome of r equals the syndrome of the error pattern it contains. For the error pattern e = [1 0 0 0 0 0]:

    S = e HT = [1 0 0 0 0 0] HT = [1 0 0]
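A sketch of the syndrome computation S = rHT (mod 2) for this (6,3) code:

```python
# Sketch: syndrome computation S = r H^T (mod 2) for the (6,3) code,
# with H^T = [I_{n-k}; P] as in Equation (6.33).

HT = [[1, 0, 0],
      [0, 1, 0],
      [0, 0, 1],
      [1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]

def syndrome(r, HT):
    """Row vector r times H^T over GF(2)."""
    return [sum(r[i] * HT[i][j] for i in range(len(r))) % 2
            for j in range(len(HT[0]))]

r = [0, 0, 1, 1, 1, 0]     # corrupted version of the codeword 1 0 1 1 1 0
print(syndrome(r, HT))     # [1, 0, 0] -- nonzero, so an error is detected
```

A valid codeword always yields the all-zeros syndrome, since GHT = 0.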
Standard array for an (n, k) code: the first row contains the 2^k codewords (beginning with the all-zeros codeword U1 = e1); each of the 2^(n-k) rows (cosets) begins with a coset leader ej:

    U1          U2            ...  Ui            ...  U(2^k)
    e2          U2 + e2       ...  Ui + e2       ...  U(2^k) + e2
    e3          U2 + e3       ...  Ui + e3       ...  U(2^k) + e3
    ...
    ej          U2 + ej       ...  Ui + ej       ...  U(2^k) + ej       (6.38)
    ...
    e(2^(n-k))  U2 + e(2^(n-k)) ... Ui + e(2^(n-k)) ... U(2^k) + e(2^(n-k))
Computing S = ej HT for each coset leader yields the syndrome look-up table:
    Error Pattern    Syndrome
    000000           000
    000001           101
    000010           011
    000100           110
    001000           001
    010000           010
    100000           100
    010001           111
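The table above can be built, and used for correction, as follows (a sketch; the choice of 010001 as the eighth coset leader follows the table):

```python
# Sketch: table look-up (syndrome) decoding for the (6,3) code.
# Build the syndrome table from the coset leaders, then correct r.

HT = [[1, 0, 0], [0, 1, 0], [0, 0, 1],
      [1, 1, 0], [0, 1, 1], [1, 0, 1]]

def syndrome(v, HT):
    return tuple(sum(v[i] * HT[i][j] for i in range(len(v))) % 2
                 for j in range(len(HT[0])))

# Coset leaders: the all-zeros pattern, the six single-error patterns,
# and one double-error pattern covering the remaining syndrome.
leaders = [[0] * 6] + [[1 if i == p else 0 for i in range(6)]
                       for p in range(6)]
leaders.append([0, 1, 0, 0, 0, 1])

table = {syndrome(e, HT): e for e in leaders}   # syndrome -> error pattern

def decode(r):
    """Estimate the error from the syndrome and add it back (mod 2)."""
    e = table[syndrome(r, HT)]
    return [(a + b) % 2 for a, b in zip(r, e)]

print(decode([0, 0, 1, 1, 1, 0]))   # [1, 0, 1, 1, 1, 0]
```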
    U' = r + e' = (U + e) + e' = U + (e + e')         (6.40)

Example: for r = [0 0 1 1 1 0],

    S = [0 0 1 1 1 0] HT = [1 0 0]

estimated error pattern:

    e' = [1 0 0 0 0 0]

The corrected vector is estimated by:

    U' = r + e' = [0 0 1 1 1 0] + [1 0 0 0 0 0] = [1 0 1 1 1 0]
Implementation of decoder
Example:
U =100101101
V =011110100
w(U) = 5
d(U,V) = w(U+V) = 6
The minimum distance dmin is the smallest among all the distances between each pair of codewords in the code set.

Error-correcting capability:
    t = floor( (dmin - 1) / 2 )
Error-detecting capability:
    e = dmin - 1
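For a linear code, dmin equals the minimum weight among the nonzero codewords, so it can be found by enumerating them. A sketch for the (6,3) code:

```python
# Sketch: minimum distance of the (6,3) code. For a linear code,
# d_min equals the minimum weight over the nonzero codewords.
from itertools import product

G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m):
    return [sum(m[i] * G[i][j] for i in range(3)) % 2 for j in range(6)]

weights = [sum(encode(list(m))) for m in product([0, 1], repeat=3)
           if any(m)]
d_min = min(weights)
t = (d_min - 1) // 2
print(d_min, t)   # 3 1 -> the code corrects any single error
```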
    U(X) = u0 + u1 X + u2 X^2 + ... + u(n-1) X^(n-1)

An end-around (cyclic) shift,

    U1(X) = u(n-1) + u0 X + u1 X^2 + ... + u(n-2) X^(n-1)

is also a codeword.
    g(X) = g0 + g1 X + g2 X^2 + ... + gp X^p

    m(X) = m0 + m1 X + m2 X^2 + ... + m(n-p-1) X^(n-p-1)              (6.58)

    U(X) = (m0 + m1 X + m2 X^2 + ... + m(k-1) X^(k-1)) g(X)           (6.59)
Message vector:

    m(X) = m0 + m1 X + m2 X^2 + ... + m(k-1) X^(k-1)

Shift up by (n - k) positions:

    X^(n-k) m(X) = m0 X^(n-k) + m1 X^(n-k+1) + ... + m(k-1) X^(n-1)

Divide by g(X):

    X^(n-k) m(X) = q(X) g(X) + p(X)

so that

    p(X) + X^(n-k) m(X) = q(X) g(X) = U(X)

where

    U = (p0, p1, ..., p(n-k-1), m0, m1, ..., m(k-1))

with (n - k) parity bits followed by k message bits.
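A sketch of this systematic encoding via GF(2) polynomial division; the generator g(X) = 1 + X + X^3 for a (7,4) code is a common illustrative choice, not taken from the source:

```python
# Sketch: systematic cyclic encoding. Compute p(X) = X^{n-k} m(X) mod g(X),
# then U(X) = p(X) + X^{n-k} m(X). Polynomials are bit lists, index = power.

def poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (lists, lowest power first)."""
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            # Subtract (XOR) divisor shifted up to degree i
            for j, g in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= g
    return r[:len(divisor) - 1]

def cyclic_encode(m, g, n):
    """Return U = (p0..p_{n-k-1}, m0..m_{k-1}) for generator g of degree n-k."""
    k = n - (len(g) - 1)
    assert len(m) == k
    shifted = [0] * (n - k) + m          # X^{n-k} m(X)
    p = poly_mod(shifted, g)             # parity polynomial p(X)
    return p + m

# Example generator (assumed, for illustration): g(X) = 1 + X + X^3
g = [1, 1, 0, 1]
print(cyclic_encode([1, 0, 1, 1], g, n=7))   # [1, 0, 0, 1, 0, 1, 1]
```

By construction the resulting U(X) is divisible by g(X), as Equation (6.59) requires of every codeword polynomial.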
Hamming codes: (n, k) = (2^m - 1, 2^m - 1 - m)
    p = Q( sqrt(2 Ec / N0) )