Part 2
Presented By:
Dr Tim O’Farrell
Room 353
Email: T.O’Farrell@leeds.ac.uk
Tel: +44 113 343 2052
Syllabus: Part 2
(ii) Information is related to the possibility of change: certainty conveys no information, but
the resolution of uncertainty does.
(iii) More information is associated with a larger set of possibilities than with a smaller set,
i.e. more information is associated with the occurrence of a rare event than with the
occurrence of a more common one.
(iv) If a message contains i symbols from a given set of possibilities, it can be expected to
contain more information than a message with fewer than i symbols.
I = f(Q^k)

or, taking logs:

I = constant × k log Q

Hence:

I ∝ k (message length)
I ∝ log Q (complexity)

which seems satisfactory.
The simplest possible message is a single binary symbol, i.e. k = 1 and Q = 2. It would be logical for such a message to contain 1 unit of information:

∴ I = constant × 1 × log 2

Putting constant = 1 and taking logs to base 2:

I = 1 × log₂ 2 = 1 unit of information (bit)

It can be shown, by simple analysis, that the optimum (lowest cost) value of Q is e, where 2 < e < 3. Thus, binary transmission is reasonably efficient.
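The claim about the optimum Q can be checked numerically. Below is a minimal Python sketch (illustrative, not from the notes; M and the range of Q are arbitrary choices) evaluating I = k log₂ Q and the "cost" Q × log_Q M of representing M distinct messages:

```python
import math

def information_bits(k: int, Q: int) -> float:
    """Information content I = k * log2(Q) of a k-symbol message
    drawn from a Q-state alphabet (constant = 1, logs to base 2)."""
    return k * math.log2(Q)

# One binary symbol (k = 1, Q = 2) carries exactly 1 bit:
print(information_bits(1, 2))  # 1.0

# Radix economy: representing M messages needs k = log_Q(M) symbol
# positions, each with Q states, so take cost ~ Q * log_Q(M).
M = 10**6
for Q in range(2, 7):
    cost = Q * math.log(M) / math.log(Q)
    print(Q, round(cost, 1))
# Cost is minimised at Q = e (about 2.718): Q = 3 is cheapest among
# integers, with Q = 2 close behind, hence "reasonably efficient".
```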
7.3 Entropy
For a Q-state source, with states Xi (1 ≤ i ≤ Q) occurring with probabilities p(Xi) (equiprobable states have p(Xi) = 1/Q):

When n symbols are transmitted (n very large), the symbol Xi will occur on average n p(Xi) times, and the total information content of these n p(Xi) occurrences is:

−n p(Xi) log₂ p(Xi) bits

Averaging over all possible symbols, the average information content per symbol is:

(1/n) Σ (i = 1 to Q) −n p(Xi) log₂ p(Xi) bits/symbol

or:

H(X) = −Σ (i = 1 to Q) p(Xi) log₂ p(Xi) bits/symbol

where H(X) is the "entropy", uncertainty or average information content of the source.
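A small Python sketch of this formula (the function name and the example probabilities are illustrative):

```python
import math

def entropy(probs):
    """H(X) = -sum p(Xi) * log2 p(Xi), in bits/symbol.
    Terms with p = 0 contribute nothing (0 * log 0 -> 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Equiprobable Q-state source: H = log2(Q)
print(entropy([0.25] * 4))                 # 2.0 bits/symbol
# A non-equiprobable source has lower entropy:
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
```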
For a signal of amplitude A in additive noise of amplitude B:

Total number of distinguishable signal levels = 2(A + B) / (2B)
For levels not to be confused, the total number of distinguishable amplitude levels is:

2(A + B) / (2B) = (A + B) / B

If signal and noise are independent (uncorrelated), the time average:

(A + B)² = A² + 2AB + B² = A² + B²

since the "AB" term averages to zero. But:

A² ∝ S (signal power)
B² ∝ N (noise power)

Hence:

(A + B) / B = [(S + N) / N]^(1/2) = [1 + S/N]^(1/2)

where S/N is the signal-to-noise power ratio.

If all levels are assumed to be equiprobable, the information associated with the occurrence of any level is:

I = log₂ [1 + S/N]^(1/2) = (1/2) log₂ (1 + S/N) bits
When the noisy signal has a bandwidth W Hz, the Nyquist sampling criterion states that it must be sampled at a rate of at least 2W samples per second. Hence the channel capacity is:

C = 2W × (1/2) log₂ (1 + S/N) = W log₂ (1 + S/N) bits/second

and the information transmissible in time T is I = C T = W T log₂ (1 + S/N) bits.

Note that, for a given C, W and S/N can be "traded off" against each other; for a given I, W, T and S/N can be traded off.
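A short Python illustration of the capacity formula and the W versus S/N trade-off (the example bandwidths and S/N figures are assumed, not from the notes):

```python
import math

def capacity(W_hz: float, snr: float) -> float:
    """Shannon capacity C = W * log2(1 + S/N) in bits/second,
    for bandwidth W (Hz) and linear signal-to-noise power ratio."""
    return W_hz * math.log2(1 + snr)

# 3 kHz bandwidth at S/N = 1000 (30 dB):
C = capacity(3e3, 1000)            # ~29.9 kbit/s

# Trade-off: the same C in half the bandwidth needs a far larger S/N.
snr_needed = 2 ** (C / 1.5e3) - 1  # S/N for the same C in 1.5 kHz
print(C, snr_needed)               # ~1e6, i.e. about 60 dB
```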
On the other hand, internal noise forms a fundamental limitation on system performance,
and can only be reduced by specialised design techniques.
For white noise, the autocorrelation function is impulsive and the power spectral density is flat:

φnn(τ) ∝ δ(τ),  Φnn(jω) = constant

The amplitude has a Gaussian probability density function:

p(A) = [1 / (√(2π) σ)] exp[ −(A − Ā)² / (2σ²) ]
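A quick numerical check of these properties, sketched with numpy (the seed, σ and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
n = rng.normal(0.0, sigma, size=100_000)  # white Gaussian noise samples

# Mean square value of zero-mean noise equals sigma^2:
print(np.mean(n**2))  # ~4.0

# Autocorrelation estimate: large at tau = 0, near zero elsewhere
# (delta-like), consistent with a flat power spectral density.
for tau in (0, 1, 2, 3):
    r = np.mean(n * n) if tau == 0 else np.mean(n[:-tau] * n[tau:])
    print(tau, round(float(r), 3))
```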
(ii) Often considered as being applied to a 1 Ω load resistance. In this case, the power dissipated P is given by:

P = v² / R = v²  or  P = i² R = i²
(iii) If two noise voltages from separate and independent sources are summed to give:

vc = v1 + v2

Squaring and averaging:

vc² = (v1 + v2)² = v1² + 2 v1 v2 + v2²

But since v1 and v2 are independent (uncorrelated), the time average of the product of the two noise waveforms tends to zero:

2 v1 v2 → 0

Hence:

vc² = v1² + v2²

∴ The power of the summed noise waveform is the sum of the individual powers of the two sources. This can be extended to the combination of more than two sources.
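This addition of powers is easy to verify numerically; the sketch below (example variances and seed chosen arbitrarily) shows the cross term averaging to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
v1 = rng.normal(0, 1.0, 1_000_000)  # source 1: power ~1.0
v2 = rng.normal(0, 2.0, 1_000_000)  # source 2: power ~4.0
vc = v1 + v2

# Cross term 2*mean(v1*v2) averages to ~0, so powers simply add:
print(np.mean(vc**2))                  # ~5.0
print(np.mean(v1**2) + np.mean(v2**2)) # ~5.0
print(2 * np.mean(v1 * v2))            # ~0.0
```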
In the arrangement shown [figure: noisy resistor modelled as a noise source v0 in series with resistance R, connected to a load resistance RL]:
Maximum power will be transferred to the load when R = RL (maximum power transfer
theorem). Thus, power dissipated in the load is:
P = v0² R / (2R)² = v0² / (4R) = k T B

Therefore:

v0² = 4 k T R B

This is the open-circuit mean square noise voltage generated by a noisy resistor (k is Boltzmann's constant, T the absolute temperature and B the bandwidth in Hz).
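A minimal sketch evaluating v0² = 4 k T R B (the example resistance, temperature and bandwidth are illustrative):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_voltage(R_ohms, T_kelvin, B_hz):
    """RMS open-circuit thermal noise voltage: sqrt(4 k T R B)."""
    return math.sqrt(4 * k_B * T_kelvin * R_ohms * B_hz)

# 50-ohm resistor at 290 K over a 1 MHz bandwidth:
print(thermal_noise_voltage(50, 290, 1e6))  # ~0.9 microvolts
```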
The equivalent mean square noise voltage at the output terminals, veq², is given by:

veq² = 4 k T B Req

In the case of a network of complex impedances with equivalent impedance (R + jX), the equivalent noise voltage at the output terminals is:

veq² = 4 k T B R

where R is now a function of frequency.
The mean square shot noise current is:

i² = 2 I₀ e B

where I₀ is the mean (direct) current, e is the electronic charge and B is the bandwidth (the same form, i² = 2 Ie e B, applies with the emitter current Ie in a transistor).

Shot noise has very stable characteristics and can be used as the basic source in standard noise generators.
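A corresponding sketch for the shot noise formula (the example current and bandwidth are assumed):

```python
import math

e_charge = 1.602176634e-19  # electronic charge, C

def shot_noise_current(I_dc, B_hz):
    """RMS shot noise current: sqrt(2 * I * e * B)."""
    return math.sqrt(2 * I_dc * e_charge * B_hz)

# 1 mA direct current over a 1 MHz bandwidth:
print(shot_noise_current(1e-3, 1e6))  # ~18 nA
```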
The noise figure is defined as the input signal-to-noise ratio divided by the output signal-to-noise ratio:

F = (Si / Ni) / (So / No)

where:
Si = wanted signal input power    Ni = unwanted noise input power
So = wanted signal output power   No = unwanted noise output power
G = power gain                    NR = internal noise power
F = noise figure
Assume that the individual NFs F1 and F2 have been measured previously under
identical noise conditions, i.e. connected to a noise source of power Ni .
Generally, for either stage:

F = No / (G Ni) = (NR + Ni G) / (Ni G)

∴ NR = Ni G (F − 1)

so that:

NR1 = Ni G1 (F1 − 1)
NR2 = Ni G2 (F2 − 1)
From the basic definition of NF, the NF for the 2-stage system is:

F(2) = No / (G1 G2 Ni) = [G1 G2 Ni F1 + G2 Ni (F2 − 1)] / (G1 G2 Ni)

∴ F(2) = F1 + (F2 − 1) / G1

Extending to N stages (Friis' formula):

F(N) = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1 G2) + … + (FN − 1)/(G1 G2 G3 … GN−1)
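A small Python sketch of this cascade formula (the stage values are illustrative); it also shows why a low-noise, high-gain first stage is preferred:

```python
def cascade_noise_figure(stages):
    """Friis formula: overall F for cascaded stages, each given as
    (F, G) with noise figure F and power gain G (linear ratios)."""
    F_total, G_product = 0.0, 1.0
    for i, (F, G) in enumerate(stages):
        # First stage contributes F1; later stages (Fi - 1)/(G1...Gi-1)
        F_total += (F - 1) / G_product if i else F
        G_product *= G
    return F_total

# Low-noise first stage dominates: F1 = 2 (3 dB), G1 = 100 (20 dB)
print(cascade_noise_figure([(2.0, 100.0), (10.0, 10.0)]))  # 2.09
# Swapping the stages gives a much worse overall NF:
print(cascade_noise_figure([(10.0, 10.0), (2.0, 100.0)]))  # 10.1
```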
F = No / (Ni G)
• One of the main tasks of source coding is to remove the redundancy in a way that does not compromise the information.
• Source coding can be made more efficient as more is known about the characteristics of the source, e.g. its state probabilities (see the Huffman coding sketch below).
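As an illustration of exploiting known state probabilities, here is a minimal Huffman coding sketch (not part of the notes; the example source is assumed). It assigns shorter codewords to more probable states, so the average codeword length approaches the source entropy:

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for symbols with known probabilities.
    Rarer symbols get longer codewords."""
    # Heap entries: (probability, unique tie-breaker, partial code table)
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # two least probable groups
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

probs = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}
code = huffman_code(probs)
print(code)  # {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
avg = sum(probs[s] * len(w) for s, w in code.items())
print(avg)   # 1.75 bits/symbol = the source entropy for this example
```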
10. CHANNEL CODING FUNDAMENTALS (ERROR CONTROL CODES)
(i) Objective of channel coding is to protect the source (or source-encoded) data against errors introduced
by noise and other types of distortion encountered on the channel.
(ii) This is achieved by introducing deterministic redundancy at the transmitter which can be exploited at
the receiver to detect and/or correct any errors introduced during transmission.
(iii) The form of channel coding employed must be matched to the types of errors likely to be encountered
on the channel, e.g. random or burst errors.
(iv) The more that is known of the channel characteristics, the more accurately can the channel encoding
scheme be designed.
There are 2 basic types of error control codes (ECCs), i.e. “block” codes and “convolutional” codes.
10.2 Block and Convolutional Codes
(a) Block Codes
• Encoder accepts information in successive k-bit blocks;
• Encoder adds (n-k) redundant bits, derived from logical operations on the k information bits, to form
n-bit codewords;
• Encoder is zero-memory since the successive k-bit blocks are independent.
[Diagram: k-bit input block → BLOCK ENCODER → n-bit output block]

(b) Convolutional Codes
• Encoder operates on the incoming sequence using a "sliding window" of length m bits.

[Diagram: input bit-stream → ENCODER (m-bit memory) → output bit-stream]
10.3 Block Code Definitions
(a) Hamming Weight, wt(c): the number of non-zero elements in codeword c.
(b) Hamming Distance (d): the number of locations in which two codewords ci and cj differ.
(c) Minimum Hamming Distance (dmin): the smallest Hamming distance between any pair of codewords in code C.
(d) Notation: block codes are described by (n, k) or (n, k, dmin); e.g. for the example above, (4, 2) or (4, 2, 2).
(e) Code Rate (or efficiency): R = k/n; e.g. for the example above, R = 2/4 = 1/2.
(f) Error Detection and Correction Power: in general, an (n, k, dmin) linear block code can
(i) detect e errors if and only if dmin ≥ e + 1, or
(ii) correct t errors if and only if dmin ≥ 2t + 1,
(iii) subject to the overall constraint that dmin ≥ e + t + 1.
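These bounds can be checked by computing dmin directly. The sketch below uses an illustrative (4, 2, 2) linear code of my own choosing, since the worked example referred to above is not reproduced here:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions in which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def d_min(code):
    """Minimum Hamming distance over all pairs of codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(code, 2))

# An illustrative (4, 2, 2) linear code: 2 info bits + 2 redundant bits
code = ["0000", "0110", "1011", "1101"]
d = d_min(code)
print(d)                         # 2
print("detect :", d - 1)         # e = dmin - 1 errors detectable
print("correct:", (d - 1) // 2)  # t = (dmin - 1)//2 errors correctable
```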
10.4 Examples of Simple Block Codes
10.4.1 Non-Redundant (NR) Code
• Map the 4 possible information states into normal binary code, i.e.

State 0 → 00
State 1 → 01   (set of codewords = code C)
State 2 → 10
State 3 → 11

• Minimum distance = 1; thus, any single error in any codeword converts it to another valid codeword.

10.4.2 Single Parity Check (SPC) Code
• Take the previous NR code, and compute even-ones (XOR) parity checks (PCs):

INFO   REDUNDANCY
00     0
01     1   (set of 3-bit codewords)
10     1
11     0
10.4.2.1 Error Detection by SPC
(i) Single Error Detection (SED)

e.g.  TX codeword:       011
      RX codeword:       010   (single error in the last digit)
      Recalculated PC:   011

• Recalculated and received PCs differ; therefore, an error is detected.
• No correction is possible since, with a single error, the TX codeword could have been 011, 110 or 000.

For the 3-bit SPC code, the 8 possible words divide into valid (V) and non-valid (NV) codewords:

000 (V)    001 (NV)
011 (V)    010 (NV)
101 (V)    100 (NV)
110 (V)    111 (NV)

For the SPC, there are 4 possible NV codewords; if any of these is detected at the RX, a transmission error is detected.
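A minimal sketch of SPC encoding and checking, reproducing the TX = 011, RX = 010 example above (function names are illustrative):

```python
def spc_encode(info_bits):
    """Append an even-ones (XOR) parity check to the information bits."""
    parity = 0
    for b in info_bits:
        parity ^= b
    return info_bits + [parity]

def spc_check(codeword):
    """Return True if the received word passes the parity check."""
    acc = 0
    for b in codeword:
        acc ^= b
    return acc == 0

tx = spc_encode([0, 1])  # [0, 1, 1]
rx = [0, 1, 0]           # single error in the last position
print(spc_check(tx))     # True  (valid codeword)
print(spc_check(rx))     # False (error detected, but not locatable)
```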
10.4.3 Hamming Single Error Correcting (SEC) Code
• Systematic synthesis of a code to correct single errors
(i) Encoding
• Assume codeword length n; write the binary numbers 1 to n in sequence, i.e. for n = 7:

Numbers 1-n   Binary form   Powers of 2
1             0001          2⁰ = b1
2             0010          2¹ = b2
3             0011          m1
4             0100          2² = b3
5             0101          m2
6             0110          m3
7             0111          m4
(8            1000          2³ = b4, for longer codes)
.             .             .

Positions that are powers of 2 carry the parity bits (b); the remaining positions carry the information digits (m).
• To compute PCs, take binary column corresponding to power of 2 and XOR all
information digits for which there is a ‘1’ in the column, e.g. for n = 7:
b1 = m1 ⊕ m2 ⊕ m4
b2 = m1 ⊕ m3 ⊕ m4
b3 = m2 ⊕ m3 ⊕ m4

(⊕ denotes modulo-2 addition)
(ii) Decoding example

Position:         b1 b2 m1 b3 m2 m3 m4
TX codeword:      1  1  1  1  1  1  1
Error sequence:   0  0  0  0  0  1  0
RX codeword:      1  1  1  1  1  0  1

Recalculated PCs (from the RX information digits):  1 0 0
Syndrome (recalculated ⊕ received PCs):             0 1 1

• The reversed syndrome is 110 = 6, i.e. the error is in the 6th codeword position, m3.
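The whole encode/correct cycle, using the bit ordering and parity checks above, can be sketched as follows (function names are illustrative); it reproduces the all-ones example:

```python
def hamming74_encode(m):
    """Encode info bits [m1, m2, m3, m4] into the codeword
    [b1, b2, m1, b3, m2, m3, m4] used in the notes."""
    m1, m2, m3, m4 = m
    b1 = m1 ^ m2 ^ m4
    b2 = m1 ^ m3 ^ m4
    b3 = m2 ^ m3 ^ m4
    return [b1, b2, m1, b3, m2, m3, m4]

def hamming74_correct(r):
    """Recompute the parity checks, form the syndrome, and correct
    the single error at position (s3 s2 s1) read as a binary number."""
    b1, b2, m1, b3, m2, m3, m4 = r
    s1 = b1 ^ m1 ^ m2 ^ m4
    s2 = b2 ^ m1 ^ m3 ^ m4
    s3 = b3 ^ m2 ^ m3 ^ m4
    pos = (s3 << 2) | (s2 << 1) | s1  # 0 means no error detected
    if pos:
        r = r.copy()
        r[pos - 1] ^= 1               # flip the erroneous bit
    return r

tx = hamming74_encode([1, 1, 1, 1])  # [1, 1, 1, 1, 1, 1, 1]
rx = tx.copy(); rx[5] ^= 1           # error in position 6 (m3)
print(hamming74_correct(rx) == tx)   # True
```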
10.4.4 Repetition Code
• Information is transmitted several times (e.g. each digit sent m times).
• Generally:
(i) m repetitions allow (m − 1) errors to be detected;
(ii) m repetitions allow ⌊(m − 1)/2⌋ errors to be corrected, where ⌊ ⌋ means "integer part of".
10.4.5 Array Code
• 2-dimensional code, e.g.

[Diagram: an (l × m) array of information digits, with l horizontal parity checks (one per row), vertical parity checks (one per column), and a check-on-checks digit.]
• Allows correction of single errors, since the failed horizontal and vertical PCs indicate the co-ordinates of the error (see the sketch below).
• Useful for low-error environments where information is naturally formatted in 2-D, e.g. multi-track digital tape.
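A minimal sketch of the array code idea (the layout is illustrative: information rows, a parity bit per row, then a column-check row including the check-on-checks):

```python
def array_encode(info):
    """Append a parity bit to each row of an l x m information
    array, then a column-check row (including check-on-checks)."""
    rows = [row + [sum(row) % 2] for row in info]
    col_checks = [sum(col) % 2 for col in zip(*rows)]
    return rows + [col_checks]

def locate_error(rx):
    """Failed row and column checks give the error co-ordinates."""
    bad_rows = [i for i, row in enumerate(rx) if sum(row) % 2]
    bad_cols = [j for j, col in enumerate(zip(*rx)) if sum(col) % 2]
    return bad_rows, bad_cols

block = array_encode([[1, 0, 1], [0, 1, 1]])
block[1][2] ^= 1            # flip one information bit
print(locate_error(block))  # ([1], [2]) -> error at row 1, column 2
```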