Institute of Integrated Information Systems

MODULE ELEC 2410: COMMUNICATION SYSTEMS

Part 2

Presented By:

Dr Tim O’Farrell

Room 353
Email: T.O’Farrell@leeds.ac.uk
Tel: +44 113 343 2052

Institute of Integrated Information Systems


School of Electronic & Electrical Engineering
University of Leeds
Leeds LS2 9JT
UK

Syllabus: Part 2

7. Information and Channel Capacity


8. Noise Types, Models and Analysis
9. Source Coding Fundamentals
10. Channel Coding Fundamentals


7. INFORMATION AND CHANNEL CAPACITY


7.1 Information in a Communications Context
Consider the decision tree for an electronic component store:

Total Information Required = I1 + I2 + I3 to resolve uncertainty.


7.1 Information in a Communications Context (contd.)

Statements about the nature of “information”:

(i) Information represents the “removal of uncertainty”.

(ii) Information is related to the possibility of change: certainty conveys no information, but
the resolution of uncertainty does.

(iii) More information is associated with a larger set of possibilities than with a smaller set,
i.e. more information is associated with the occurrence of a rare event than with the
occurrence of a more common one.

(iv) If a message contains i symbols from a given set of possibilities, it can be expected to
contain more information than a message with fewer than i symbols.


7.2 Measurement of Information


Consider message symbols chosen from a set of Q equiprobable states:
    X1, X2, X3, ... , XQ

The message length k (symbols) and the total number of distinct messages are related by:

    k (symbols)     No. of distinct message possibilities
    1               1 out of Q
    2               1 out of Q^2
    3               1 out of Q^3
    :               :
    i               1 out of Q^i

Hartley proposed that the measure of information I should be of the form:

    I = f [ Q^k ]

or, taking logs:

    I = Constant × k log Q

Hence:

    I ∝ k       (message length)
    I ∝ log Q   (complexity)

which seems satisfactory.


7.2 Measurement of Information (contd.)

The simplest possible message is one binary symbol, i.e. k = 1 and Q = 2. It would be logical
for such a message to contain 1 unit of information:

    I = constant × 1 × log 2

Putting the constant = 1 and taking logs to base 2:

    I = 1 × log2 2 = 1 unit of information (bit)

With logs to base 10, units are Hartleys; with logs to base e, units are nats.

It can be shown, by simple analysis, that the optimum (lowest cost) value of Q is e, i.e. between
2 and 3. Thus, binary transmission is reasonably efficient.
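As an illustration of Hartley's measure and the different units, a minimal Python sketch (the function name and example values are ours, not from the notes):

```python
import math

def hartley_information(k, q, base=2):
    """Hartley's measure I = k * log(Q) for a message of k symbols drawn from
    Q equiprobable states (base 2 -> bits, base e -> nats, base 10 -> Hartleys)."""
    return k * math.log(q) / math.log(base)

print(hartley_information(1, 2))              # 1.0 bit: the simplest binary message
print(hartley_information(3, 10))             # ~9.97 bits for 3 decimal digits
print(hartley_information(3, 10, base=10))    # 3.0 Hartleys for the same message
```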


7.3 Entropy
For a Q-state source, where all states Xi (1 ≤ i ≤ Q) are equiprobable with probability
p(Xi) = 1/Q, the information content of 1 symbol is:

    I = log2 Q
      = log2 [ 1 / p(Xi) ]   bits/symbol
      = − log2 p(Xi)

When n symbols are transmitted (n very large), the symbol Xi will occur on average n p(Xi)
times, and the total information content of these n p(Xi) symbols is given by:

    − n p(Xi) log2 p(Xi)   bits

Averaging over all possible symbols, the information content per symbol is:

    (1/n) Σ (i = 1 to Q) [ − n p(Xi) log2 p(Xi) ]   bits/symbol

or:

    H(Xi) = − Σ (i = 1 to Q) p(Xi) log2 p(Xi)   bits/symbol

where H(Xi) is the “entropy”, uncertainty or average information content of the source.


7.3 Entropy (contd.)


For a binary source with states 1 and 0 having probabilities of occurrence p(1) and p(0)
respectively, the entropy is given by:

    H(Xi) = − Σ (i = 0 to 1) p(Xi) log2 p(Xi)   bits/symbol

When p(1) = p(0) = ½ :

    H(Xi) = − [ ½ log2 ½ + ½ log2 ½ ]
          = 1 bit/symbol

When p(1) = ¾ and p(0) = ¼ :

    H(Xi) = − [ ¾ log2 ¾ + ¼ log2 ¼ ]
          = 0.811 bits/symbol

Generally:

    H(Xi) = − p(0) log2 p(0) − p(1) log2 p(1)

But p(0) + p(1) = 1, i.e. p(1) = [ 1 − p(0) ], so:

    H(Xi) = − p(0) log2 p(0) − [ 1 − p(0) ] log2 [ 1 − p(0) ]
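A minimal Python sketch of the binary entropy calculation above (the function name is ours); it reproduces the 1 bit/symbol and 0.811 bits/symbol values:

```python
import math

def binary_entropy(p0):
    """Entropy H (bits/symbol) of a binary source with p(0) = p0 and p(1) = 1 - p0."""
    entropy = 0.0
    for p in (p0, 1.0 - p0):
        if p > 0.0:                      # the 0 * log2(0) term is taken as 0
            entropy -= p * math.log2(p)
    return entropy

print(binary_entropy(0.5))    # 1.0 bit/symbol      (p(0) = p(1) = 1/2)
print(binary_entropy(0.25))   # ~0.811 bits/symbol  (p(0) = 1/4, p(1) = 3/4)
```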


7.3 Entropy (contd.)

Plotting H(Xi) as a function of p(0) for a binary source gives a curve that is symmetric about
p(0) = ½, with its maximum of 1 bit/symbol at p(0) = p(1) = ½ .


7.4 Channel Capacity


Consider a signal with maximum amplitude ±A volts embedded in noise with maximum
amplitude ±B volts:

    Total number of distinguishable signal levels = 2(A + B) / (2B)


7.4 Channel Capacity (contd.)

For levels not to be confused, the total number of distinguishable amplitude levels is:

    2(A + B) / (2B) = (A + B) / B

If signal and noise are independent (uncorrelated), the time average:

    (A + B)² = A² + 2AB + B²
             = A² + B²

since the “AB” term averages to zero.

But:

    A² ∝ S (Signal Power)
    B² ∝ N (Noise Power)

Hence:

    (A + B) / B = (S + N)^½ / N^½ = ( 1 + S/N )^½

where S/N is the signal-to-noise power ratio.

If all levels are assumed to be equiprobable, the information associated with the occurrence
of any level is:

    I = log2 [ No. of possible states / levels ]
      = log2 ( 1 + S/N )^½
      = ½ log2 ( 1 + S/N )   bits per independent sample of the noisy signal


7.4 Channel Capacity (contd.)

When the noisy signal has a bandwidth W Hz, the Nyquist sampling criterion states that it
must be sampled at a rate of at least 2W samples per second.

The channel capacity C (bits/s) is calculated from:


    C = [ No. of bits / sample ] × [ No. of samples / s ]
      = ½ log2 ( 1 + S/N ) × 2W

    C = W log2 ( 1 + S/N )   bits/s

Or, the information I transmitted in T seconds is:

    I = W T log2 ( 1 + S/N )   bits

Note that for a given C, W and S/N can be “traded off” against each other; for a given I, W,
T and S/N can be traded off.

The above assumes:


(i) Ideal “brickwall” channel response;
(ii) Uniform distribution of signal and noise energy within W;
(iii) Uniform amplitude PDFs for both signal and noise.
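A minimal Python sketch of the Shannon capacity formula (the bandwidth and SNR values below are illustrative assumptions, not taken from the notes):

```python
import math

def channel_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = W * log2(1 + S/N) in bits/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 3.1 kHz telephone-type channel with a 30 dB signal-to-noise ratio.
snr_db = 30.0
snr_linear = 10.0 ** (snr_db / 10.0)            # convert dB to a linear power ratio
print(channel_capacity(3100.0, snr_linear))     # ~30.9 kbit/s
```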


8. NOISE TYPES, MODELS AND ANALYSIS

8.1 Noise Types

Two basic types:


(i) “Internal Noise” : noise generated within the units of the communication system,
principally the receiver.
(ii) “External Noise” : noise generated by sources apart from the communication system
itself, e.g. lightning, co-channel interference, electrical machinery, power lines, etc.

In principle, the effects of external noise can be overcome by better screening, earthing,
filtering, antenna directivity, frequency planning, etc.

On the other hand, internal noise forms a fundamental limitation on system performance,
and can only be reduced by specialised design techniques.


8.2 Ideal Noise


In system performance calculations, it is convenient to use a noise model termed
“Additive Gaussian White Noise” (AGWN).

    Autocorrelation function:      φnn(τ) ∝ δ(τ)
    Power spectral density:        Φnn(jω) = constant

Also, the amplitude PDF is Gaussian:

    p(A) = [ 1 / ( √(2π) σ ) ] exp[ − (A − Ā)² / (2σ²) ]

where:

    Ā = mean value
    σ = standard deviation or RMS value


8.3 Combination of Noise Sources


(i) Conventional to describe noise sources by mean square voltages or currents ( v² or i² ).

(ii) Often considered as being applied to a 1 Ω load resistance. In this case, the power
     dissipated P is given by:

        P = v² / R = v²     or     P = i² R = i²

(iii) If two noise voltages from separate and independent sources are summed to give:

        vc = v1 + v2

      squaring and time-averaging gives:

        vc² = (v1 + v2)² = v1² + 2 v1v2 + v2²

      But since v1 and v2 are independent (uncorrelated), the time average of the product of
      the two noise waveforms tends to zero, i.e. 2 v1v2 → 0. Hence:

        vc² = v1² + v2²

∴ Power of the summed noise waveform is the sum of the individual powers of the two sources.
Can be extended to the combination of more than two sources.
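A quick numerical check of this result, using NumPy to generate two independent Gaussian noise records (the RMS values chosen are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

v1 = rng.normal(0.0, 1.0, n)      # independent source, 1 V RMS (power 1 V^2 into 1 ohm)
v2 = rng.normal(0.0, 2.0, n)      # independent source, 2 V RMS (power 4 V^2 into 1 ohm)
vc = v1 + v2

print(np.mean(v1 * v2))           # cross term: ~0, since the sources are uncorrelated
print(np.mean(vc ** 2))           # combined power: ~5 V^2 = 1 + 4, i.e. the powers add
```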


8.4 Internal Noise Types


(a) Thermal or Johnson Noise
Results from thermal agitation of electrons in a conducting medium. Occurs for all
temperatures > absolute zero (−273 °C), and thus is present in all practical
electromagnetic (EM) systems.

In the arrangement:

it is found that the maximum average noise power available is:


    Pave = k T B

where:

    T = absolute temperature (K)
    B = bandwidth (Hz)
    k = Boltzmann's constant (1.38 × 10⁻²³ J/K)


(a) Thermal or Johnson Noise (contd.)


When a noisy resistor is represented by an equivalent voltage generator circuit, and
connected to a noise-free load resistance RL:

Maximum power will be transferred to the load when R = RL (maximum power transfer
theorem). Thus, the power dissipated in the load is:

    P = v0² R / (2R)² = k T B

Therefore:

    v0² = 4 k T R B

This is the open-circuit mean square noise voltage generated by a noisy resistor.
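A minimal sketch of this result in Python (the 50 Ω / 1 MHz / 290 K values are illustrative assumptions):

```python
import math

K_BOLTZMANN = 1.38e-23     # J/K

def thermal_noise_rms(resistance_ohm, bandwidth_hz, temperature_k=290.0):
    """Open-circuit RMS thermal noise voltage, sqrt(4 k T R B)."""
    return math.sqrt(4.0 * K_BOLTZMANN * temperature_k * resistance_ohm * bandwidth_hz)

# Example: 50 ohm resistor, 1 MHz bandwidth, room temperature (290 K).
print(thermal_noise_rms(50.0, 1e6))     # ~0.9 microvolts RMS
```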


(a) Thermal or Johnson Noise (contd.)


Noisy resistor can be represented, for analysis purposes, by either voltage or current
generator equivalent circuits, i.e.

In a more complex network of resistances, e.g.

The equivalent mean square noise voltage at the output terminals, veq², is given by:

    veq² = 4 k T B Req

In the case of a network of complex impedances with equivalent impedance (R + jX), the
equivalent noise voltage at the output terminals is:

    veq² = 4 k T B R

where R is now a function of frequency.


(b) Shot Noise


Occurs in both thermionic and semiconductor devices and is due to current carriers
acting as discrete charge-carrying particles, rather than as a homogeneous current with
uniform velocity. The noise arises from the statistical fluctuations in the number of
carriers reaching anode from cathode, or collector from emitter.

    io² = 2 e Io B   (thermionic devices)          io² = 2 e Ie B   (semiconductor devices)

Shot noise has very stable characteristics and can be used as the basic source in
standard noise generators.


(c) 1/f (Flicker) Noise

Important at low frequencies since:

    Power ∝ 1 / Frequency

Mechanism is imperfectly understood; however, it seems to be due to random


variations in the position of the main emitting region of either emitter or cathode; this
becomes more pronounced as frequency decreases, causing the efficiency of emission
to vary, i.e. noise.


8.5 Noise Analysis 1: Noise Figure of a Receiving System


The “noise figure” (NF) of a receiving system allows the effect of internal (thermal) noise on
system performance to be calculated. NF is a measure of receiver “goodness”.
Consider the system:

Where:
Si = wanted signal input power Ni = unwanted noise input power
So = wanted signal output power No = unwanted noise output power
G = power gain NR = internal noise power
F = noise figure

    F = [ total mean square noise at output ] / [ mean square noise at output due to input ]

      = No / (No − NR)  =  No / (G Ni)  =  No So / (G Ni So)  =  No G Si / (G Ni So)

      = (Si / Ni) / (So / No)  =  [ Input SNR ] / [ Output SNR ]

∴ F is a measure of the degradation of SNR due to internal noise within the receiver.
For an ideal receiver, F = 1.
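A minimal sketch of this definition (the SNR values are illustrative; converting the ratio to dB is shown for convenience):

```python
import math

def noise_figure(snr_in, snr_out):
    """Noise figure F = (input SNR) / (output SNR); F = 1 for an ideal (noiseless) receiver."""
    return snr_in / snr_out

f = noise_figure(snr_in=1000.0, snr_out=250.0)   # SNR degraded by a factor of 4
print(f)                                         # 4.0
print(10.0 * math.log10(f))                      # ~6.0 dB
```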


8.6 Noise in Cascaded Systems


For a multi-element receiver, consider a 2-stage system:

Assume that the individual NFs F1 and F2 have been measured previously under
identical noise conditions, i.e. connected to a noise source of power Ni .
Generally, for either stage:

    F = No / (G Ni) = [ NR + Ni G ] / (Ni G)

    ∴ NR = Ni G (F − 1)

so that:

    NR1 = Ni G1 (F1 − 1)
    NR2 = Ni G2 (F2 − 1)

There are 3 components to the output noise No :

(i)   Amplified input noise        =  Ni G1 G2

(ii)  Amplified noise from stage 1 =  G2 NR1
                                   =  Ni G1 G2 [ F1 − 1 ]

(iii) Noise from stage 2           =  NR2
                                   =  Ni G2 [ F2 − 1 ]


8.6 Noise in Cascaded Systems (contd.)


∴ Total output noise is given by:

    No = G1 G2 Ni + G1 G2 Ni [ F1 − 1 ] + G2 Ni [ F2 − 1 ]
       = G1 G2 Ni F1 + G2 Ni [ F2 − 1 ]

From the basic definition of NF, the NF for the 2-stage system is:

    F(2) = No / (G1 G2 Ni)

         = { G1 G2 Ni F1 + G2 Ni [ F2 − 1 ] } / (G1 G2 Ni)

    ∴ F(2) = F1 + [ F2 − 1 ] / G1

A similar analysis can be applied to an N-stage system, giving:

    F(N) = F1 + [ F2 − 1 ] / G1 + [ F3 − 1 ] / (G1 G2) + ... + [ FN − 1 ] / (G1 G2 G3 ... G(N−1))

Note the importance of the first-stage noise figure F1.
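A minimal sketch of the cascade formula (the stage values below are illustrative; all quantities are linear ratios, not dB):

```python
def cascaded_noise_figure(stages):
    """Overall noise figure of cascaded stages, given as (F, G) pairs in order from the input:
    F(N) = F1 + (F2 - 1)/G1 + (F3 - 1)/(G1*G2) + ..."""
    total_f = 0.0
    gain_product = 1.0
    for i, (f, g) in enumerate(stages):
        total_f += f if i == 0 else (f - 1.0) / gain_product
        gain_product *= g
    return total_f

# Example: a low-noise first stage (F1 = 1.5, G1 = 100) followed by a noisier second stage.
print(cascaded_noise_figure([(1.5, 100.0), (10.0, 20.0)]))   # 1.59 -- dominated by F1
```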


8.7 Variation of Noise Figure with Frequency

In practice, the NF of a unit does not remain constant with frequency:

    F = No / (Ni G)


9. SOURCE CODING FUNDAMENTALS

9.1 General Nature of Source Coding

• Most sources contain both information and redundancy.

• One of the main tasks of source coding is to remove the redundancy in a way
that does not compromise the information.

• Source coding can be made more efficient as more is known about the
characteristics of the source, e.g. its state probabilities.

• A specific illustration of data source coding (Huffman coding) will now be
considered; a minimal sketch is given below.
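As a hedged illustration of the Huffman construction (the symbol set and probabilities below are assumed for the example, not taken from the notes):

```python
import heapq
import itertools

def huffman_code(probabilities):
    """Build a Huffman code for a {symbol: probability} source; returns {symbol: bit string}."""
    counter = itertools.count()                 # tie-breaker so heap entries always compare
    heap = [(p, next(counter), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, group1 = heapq.heappop(heap)     # the two least probable groups are merged,
        p2, _, group2 = heapq.heappop(heap)     # prefixing '0' to one and '1' to the other
        merged = {s: "0" + c for s, c in group1.items()}
        merged.update({s: "1" + c for s, c in group2.items()})
        heapq.heappush(heap, (p1 + p2, next(counter), merged))
    return heap[0][2]

# Frequent symbols receive short codewords; rare symbols receive long ones.
print(huffman_code({"A": 0.5, "B": 0.25, "C": 0.15, "D": 0.10}))
# {'A': '0', 'B': '10', 'D': '110', 'C': '111'} -- average length 1.75 bits/symbol
```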

10. CHANNEL CODING FUNDAMENTALS (ERROR CONTROL CODES)

10.1 General Nature of Channel Coding

(i) Objective of channel coding is to protect the source (or source-encoded) data against errors introduced
by noise and other types of distortion encountered on the channel.

(ii) This is achieved by introducing deterministic redundancy at the transmitter which can be exploited at
the receiver to detect and/or correct any errors introduced during transmission.

(iii) The form of channel coding employed must be matched to the types of errors likely to be encountered
on the channel, e.g. random or burst errors.

(iv) The more that is known of the channel characteristics, the more accurately can the channel encoding
scheme be designed.

There are 2 basic types of error control codes (ECCs), i.e. “block” codes and “convolutional” codes.

10.2 Block and Convolutional Codes
(a) Block Codes
• Encoder accepts information in successive k-bit blocks;
• Encoder adds (n-k) redundant bits, derived from logical operations on the k information bits, to form
n-bit codewords;
• Encoder is zero-memory since the successive k-bit blocks are independent.

(b) Convolutional Codes

• Encoder accepts information bits as a continuous sequence;

• Encoder operates on the incoming sequence using a “sliding window” of length m bits;

• Encoder needs a memory of length m as the successive m bits are dependent.

    (a) Block Encoding:          k-BIT BLOCK  →  BLOCK ENCODER  →  n-BIT BLOCK

    (b) Convolutional Encoding:  INPUT BIT-STREAM  →  ENCODER (m-BIT MEMORY)  →  OUTPUT BIT-STREAM

10.3 Block Code Definitions
(a) Hamming Weight (wt (c)): number of non-zero elements in codeword c.
(b) Hamming Distance (d): number of locations in which 2 codewords ci and cj differ.
(c) Minimum Hamming Distance (dmin): smallest Hamming Distance between any pair of codewords in Code C.

Example: Given block code C to encode 2 bits/codeword, where


c1 = 0000, c2 = 1100, c3 = 0011, c4 = 1111
wt (c1) = 0, wt (c2) = wt (c3) = 2, wt (c4) = 4
d (c1, c2) = d (c1, c3) = d (c2, c4) = d (c3, c4) = 2;   d (c1, c4) = d (c2, c3) = 4
Hence, dmin for code C = 2.

(d) Notation: block codes are described by (n, k) or (n, k, dmin); e.g. for example above, (4, 2) or (4, 2, 2).
(e) Code Rate: or efficiency, R = (k/n); e.g. for example above R= 2/4.
(f) Error Detection and Correction Power: in general, an (n, k, dmin) linear block code can
(i) detect e errors, if and only if dmin ≥ e + 1, or
(ii) correct t errors, if and only if dmin ≥ 2t + 1,
(iii) subject to the overall constraint that dmin ≥ e + t + 1.
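A minimal sketch that checks the dmin of the example code above (codewords given as bit strings):

```python
def hamming_weight(codeword):
    """Number of non-zero elements in a codeword."""
    return sum(bit == "1" for bit in codeword)

def hamming_distance(c1, c2):
    """Number of positions in which two equal-length codewords differ."""
    return sum(a != b for a, b in zip(c1, c2))

code = ["0000", "1100", "0011", "1111"]          # the (4, 2, 2) example code above
d_min = min(hamming_distance(a, b)
            for i, a in enumerate(code) for b in code[i + 1:])
print([hamming_weight(c) for c in code])         # [0, 2, 2, 4]
print(d_min)                                     # 2
```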
10.4 Examples of Simple Block Codes
10.4.1 Non-Redundant (NR) Code
• Map the 4 possible information states into normal binary code, i.e.

        State 0  →  00
        State 1  →  01       Set of codewords = Code C
        State 2  →  10
        State 3  →  11

• Minimum distance = 1; thus, any single error in any codeword converts it to another valid codeword.

10.4.2 Single Parity Check (SPC) Code

• Take the previous NR code, and compute even-1s (XOR) parity checks (PCs):

        INFO    PC (REDUNDANCY)
        00      0
        01      1        Set of 3-bit codewords
        10      1
        11      0

• Minimum distance = 2; therefore, some error control potential

10.4.2.1 Error Detection by SPC
(i) Single Error Detection (SED)
    e.g.  TX codeword     :  011
          RX codeword     :  010   (single error in the last digit)
          Recalculated PC :  011
• Recalculated and received PCs differ; therefore, an error is detected.
• No correction possible since, with a single error, TX codeword could have been 011, 110 or 000

(ii) Triple Error Detection (Inversion)


    e.g.  TX codeword     :  011
          RX codeword     :  100   (all three digits inverted)
          Recalculated PC :  101

• Again, the PCs differ and errors are detected.
• No correction possible, since the RX codeword could arise from the triple-error situation
shown, or from single errors in codewords 110, 101 or 000.

(iii) Double Errors


• All double errors are missed.
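A minimal sketch of SPC encoding and error detection (bit strings are used for clarity):

```python
def spc_encode(info_bits):
    """Append an even-parity check bit to the information bits, e.g. '01' -> '011'."""
    parity = sum(int(b) for b in info_bits) % 2
    return info_bits + str(parity)

def spc_check(codeword):
    """Return True if the received word passes the even-parity check (no error detected)."""
    return sum(int(b) for b in codeword) % 2 == 0

tx = spc_encode("01")       # '011'
rx = "010"                  # single error in the last digit
print(spc_check(tx))        # True  -- parity satisfied
print(spc_check(rx))        # False -- single error detected (but not correctable)
print(spc_check("101"))     # True  -- a double error in '011' would be missed
```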
• Comparing the NR and SPC codes diagrammatically:

    NR code (all four 2-bit words are valid):

        00 (V)    01 (V)    10 (V)    11 (V)

    SPC code (eight 3-bit words, of which four are valid):

        000 (V)    011 (V)    101 (V)    110 (V)
        001 (NV)   010 (NV)   100 (NV)   111 (NV)

    V  = Valid codeword
    NV = Non-valid codeword

For the SPC, there are 4 possible NV codewords; if any of these is received at the
RX, a transmission error is detected.

10.4.3 Hamming Single Error Correcting (SEC) Code
• Systematic synthesis of a code to correct single errors

(i) Encoding
• Assume codeword length = n; write the binary numbers 1 to n in sequence, i.e.

        Number (1 to n)    Binary form    Powers of 2
        1                  0001           2^0 = b1
        2                  0010           2^1 = b2
        3                  0011           m1
        4                  0100           2^2 = b3
        5                  0101           m2
        6                  0110           m3
        7                  0111           m4
        8                  1000           2^3 = b4
        :                  :              :

Put PCs (b) in positions corresponding to powers of 2; otherwise, information digits (m).

• To compute the PCs, take the binary column corresponding to each power of 2 and XOR all
information digits for which there is a ‘1’ in that column, e.g. for n = 7:

        b1 = m1 ⊕2 m2 ⊕2 m4
        b2 = m1 ⊕2 m3 ⊕2 m4
        b3 = m2 ⊕2 m3 ⊕2 m4

• Code rate R = k/n = 4/7


(ii) Decoding
• Steps are as follows:
(a) Recalculate PCs from received information digits;
(b) Compare (⊕2), digit-by-digit, received and recalculated PCs to give the “syndrome”;
(c) An all-zero syndrome indicates no errors;
(d) For a non-zero syndrome, reading in the reverse sense gives the binary location of the single error within the
codeword;
(e) Double/triple/etc errors will give rise to incorrect decoding.

Example: for n=7

                        b1   b2   m1   b3   m2   m3   m4

    TX codeword          1    1    1    1    1    1    1
    Error sequence       0    0    0    0    0    1    0
    RX codeword          1    1    1    1    1    0    1

    Recalculated PCs     1    0    0
    Syndrome             0    1    1

• The reversed syndrome is 110 = 6, i.e. the error is in the 6th codeword position, m3.
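A minimal sketch of this (7, 4) Hamming encoder/decoder, following the bit ordering and parity equations above:

```python
def hamming74_encode(m):
    """Encode information bits (m1, m2, m3, m4) as (b1, b2, m1, b3, m2, m3, m4),
    with the parity checks in the power-of-2 positions."""
    m1, m2, m3, m4 = m
    b1 = m1 ^ m2 ^ m4
    b2 = m1 ^ m3 ^ m4
    b3 = m2 ^ m3 ^ m4
    return [b1, b2, m1, b3, m2, m3, m4]

def hamming74_decode(r):
    """Correct a single error (if present) and return the 4 information bits."""
    b1, b2, m1, b3, m2, m3, m4 = r
    s1 = b1 ^ m1 ^ m2 ^ m4            # received PCs compared with recalculated PCs
    s2 = b2 ^ m1 ^ m3 ^ m4
    s3 = b3 ^ m2 ^ m3 ^ m4
    position = s1 + 2 * s2 + 4 * s3   # syndrome, read in reverse, gives the error location
    r = list(r)
    if position:
        r[position - 1] ^= 1          # flip the erroneous digit (positions are 1-based)
    return [r[2], r[4], r[5], r[6]]

tx = hamming74_encode([1, 1, 1, 1])   # -> [1, 1, 1, 1, 1, 1, 1]
rx = tx[:]
rx[5] ^= 1                            # single error in position 6 (m3), as in the example
print(hamming74_decode(rx))           # [1, 1, 1, 1] -- the error has been corrected
```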

10.4.4 Repetition Code
• Information transmitted several times, e.g.

        TX INFORMATION    1101      MAJORITY VOTE
        RX word 1         0101      0101
        RX word 2         1001      ??01
        RX word 3         1111      1101
        RX word 4         1100      1101

  (each RX word contains a single error; the right-hand column shows the majority-vote
  result after each stage)

• All single errors in a given digit position are corrected in 3 repetitions.

• Generally:
(i) m repetitions allow (m − 1) errors to be detected;
(ii) m repetitions allow ⌊(m − 1)/2⌋ errors to be corrected, where ⌊ ⌋ means “integer part of”.

• Efficiency is low since rate R = 1/m
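A minimal sketch of majority-vote decoding for the example above (words as bit strings):

```python
def majority_vote(received_words):
    """Decode a repetition code: in each bit position, take the value that occurs
    in more than half of the received copies (ties resolve to '0' here)."""
    decoded = []
    for position_bits in zip(*received_words):
        ones = sum(int(b) for b in position_bits)
        decoded.append("1" if ones > len(position_bits) / 2 else "0")
    return "".join(decoded)

# The four received copies from the example above, each with a single error.
print(majority_vote(["0101", "1001", "1111", "1100"]))   # '1101' -- the TX information
```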

10.4.5 Array Code
• A 2-dimensional code, e.g. an (l × m) block of information digits with horizontal parity
  checks on the rows, vertical parity checks on the columns, and a “check-on-checks” digit
  completing the array.

• 2-D extension of SPC code where codeword length = (l+1) x (m+1)


• Code rate R = (l × m) / [ (l + 1) × (m + 1) ]

• Allows correction of single errors since failed horizontal and vertical PCs will indicate co-ordinates of error.

• Useful for low-error environments where information is naturally formatted in 2-D, e.g. multi-track digital tape.
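A minimal sketch of the 2-D parity construction and single-error location (the 2 × 3 information block is an arbitrary example):

```python
def array_encode(info_rows):
    """Append a horizontal parity bit to each row, then a row of vertical parity bits
    (whose last element is the check-on-checks), using even parity throughout."""
    rows = [row + [sum(row) % 2] for row in info_rows]
    checks = [sum(col) % 2 for col in zip(*rows)]
    return rows + [checks]

def locate_single_error(block):
    """Return (row, column) of a single error, or None if every parity check passes."""
    bad_rows = [i for i, row in enumerate(block) if sum(row) % 2]
    bad_cols = [j for j, col in enumerate(zip(*block)) if sum(col) % 2]
    if bad_rows and bad_cols:
        return bad_rows[0], bad_cols[0]
    return None

block = array_encode([[1, 0, 1], [0, 1, 1]])   # 2 x 3 block of information digits
block[1][2] ^= 1                               # introduce a single error
print(locate_single_error(block))              # (1, 2) -- the failed row and column checks
```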

