
Communications Theory

(Couch book)

Fig1

If the channel were ideal, the received signal would be exactly the same as the transmitted one


What we get at the sink is not necessarily what we sent at the source.
We need to apply techniques to counteract the noise

The transmitter shapes the message to produce a signal suitable for the channel (shape it for the
characteristics of the channel). E.g. in telephony, pressure is changed to electricity; for digital signals we use
sampling and quantisation, compression and encoding, and interleaving.
The channel is the medium used to transmit our signal (a pair of wires (twisted pair), coax (with the ground
being the shielding), a band of frequencies (wireless), light (fibre optic)).
The receiver tries to reconstruct the message.
The sink is the final destination of the message (any device that can read the message)

Signals can be discrete, continuous, or a mix of both.


Most systems use discrete/digital signals

Data
Data can be either analogue or digital (voice, a file)
We can capture digital data and convert it to analogue,
or sample the analogue data and convert it to digital
Digital data can either be transmitted directly as a digital signal or modulated into an analogue signal

Our goal is to transmit the information whilst having minimal deterioration of the signal

Some constraints are the allowable transmission energy (e.g. you do not want to drain the battery of
a mobile phone, even though we would like to use as much power as possible), the available bandwidth
(more bandwidth allows a higher bit rate in bps, but the wider the bandwidth the more noise you capture),
and cost (to build and to run)

SNR for analogue (the higher the signal is above the noise the better, measured in dB)
Probability of bit error for digital (bit error rate), which is more of a statistical measure

The information sent from a digital source when the jth message is transmitted is given by Eqn1
The rarer a message is, the more information it carries
It depends on the likelihood of sending a message, not on its possible interpretation

The information changes from message to message since the Pj are not equal

The average information, considering all the possible messages that can be sent, is:


This is called Entropy
Eqn2

Ex1
Since the words all have the same probability we need 10-bit words, and thus no compression is possible.
If they were not equiprobable, some words would need more than 10 bits and others fewer
Ex1continued
Maximum entropy occurs when all the probabilities are equal
Source efficiency is Eq3
Entropy gives us the minimum average number of bits needed to encode a symbol; if we go below the
entropy we might not be able to distinguish between the symbols
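As an illustration (not from the notes), a short Python sketch of these definitions; the example probabilities are made up:

```python
import math

def entropy(probs):
    """Average information H = -sum_j P_j log2(P_j), in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]   # assumed source statistics
H = entropy(probs)                  # 1.75 bits/symbol
H_max = math.log2(len(probs))       # 2 bits: the equiprobable (maximum) case
efficiency = H / H_max              # source efficiency, 0.875 here
print(H, H_max, efficiency)
```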

The rate is the entropy divided by time: Eq4


These apply to digital sources. Analogue sources are more complex; however, we can
approximate them using digital sources

Properties of Entropy
Entropy is 0 when all probabilities are 0 except one (e.g. an all-white image: only white has probability 1, the others 0)

When we have two events Eqn5


If independent, H(X, Y) = H(X) + H(Y) Eqn6
If two images are frames of an animation, the two images have a dependency, and thus we can do
compression in video.
If they were independent we could not do that

Any change towards equalising the probabilities increases H

Conditional probability Eqn7


Conditional entropy is the average of the entropy of y for each value of x, weighted according to the
probability of getting that particular x Eqn8

Data Rate
How fast can we send our bits?
The most important factor is the bandwidth: the bigger the bandwidth the better.
It also depends on the number of levels of the signal: with 2 bits per symbol we get 4 messages, with 4 bits
16 messages, etc. - fewer symbols but more information per symbol.
It also depends on the quality of the channel (in wireless, if far away we need to hold each symbol for longer so
that it is read correctly, thus a slower bit rate)

Theoretical maximum bit rate is given by Nyquist formula Eqn9


(this is never reached since in practice we have noise)
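A hedged sketch of the Nyquist bound C = 2B log2(M) in Python; the bandwidth and level count are assumed values:

```python
import math

def nyquist_rate(bandwidth_hz, levels):
    """Noiseless maximum bit rate: C = 2 B log2(M) bits/s."""
    return 2 * bandwidth_hz * math.log2(levels)

# e.g. a 3 kHz channel with binary (M = 2) signalling -> 6000 bits/s
print(nyquist_rate(3000, 2))
# with M = 4 levels the same channel gives 12000 bits/s
print(nyquist_rate(3000, 4))
```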

Ex2

Assume that we have only the signal and white Gaussian noise (white noise is noise whose power
spectral density is the same at all frequencies)
Shannon Eqn10
C gives us the limit and we need to keep our rate below C so that the probability of error
approaches 0
This is the ideal case
(Using coding techniques (e.g. CRC) we can reduce the required signal-to-noise ratio; we can also use
shielding, higher power, higher bandwidth, or cooling)
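A small Python sketch of the Shannon limit C = B log2(1 + S/N), with assumed numbers:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B log2(1 + S/N) in bits/s for an AWGN channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

snr_db = 30.0                       # assumed SNR of 30 dB
snr = 10 ** (snr_db / 10)           # linear SNR = 1000
print(shannon_capacity(3000, snr))  # ~29.9 kbit/s on a 3 kHz channel
```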

Ex3

Ex4

Transmission Impairment
Attenuation, Distortion, Noise
Attenuation is due to resistance, a loss of energy along the medium. Thus we use amplifiers Fig2, which
will have automatic gain control.
Distortion - our channel acts as a bandpass filter Fig3. The propagation speed differs with frequency and the
filter affects different frequencies differently.
Noise - to combat noise we apply signal processing and coding techniques; it is caused
by thermal noise, induced noise, crosstalk, and impulse noise

Signals and noise


Periodic Signals
Periodic
Completes a pattern within a measuring time frame
Repeats the pattern over subsequent identical periods Eqn11 Typical of analogue signals

Aperiodic
Does not exhibit a pattern or cycle that repeats over time Typical of digital signals

We can analyse the signals in the time domain and frequency domain

Random and deterministic


A random signal cannot be described with a mathematical function; an example is noise.
We use statistical models (the probability density function, determined by the characteristics of
the noise)

Deterministic signals can be completely described mathematically (we easily know what the signal will
look like and we can describe it using mathematics)

Causal and non causal


Causal Signal
Zero for all negative time (typical for communication signals)

Non-Causal signal
Has non-zero values in both positive and negative time

Anti-causal
Has zero value for all positive values of time

Time invariant system


The behaviour of the system does not depend on time
If we apply an input now and the same input later, the same output is generated (shifted by the
same amount of time)
If x(t) produces y(t), then x(t - delta t) produces y(t - delta t)
Typical of communication systems
Fourier Transform
We need to find the frequencies present in a waveform
This is easy if we have one sinusoidal wave (just find the period and take its inverse)
Most signals are complex (they contain more than one frequency and are not periodic, thus we use the
Fourier transform)

Definition: The Fourier transform of a waveform is Eqn12

(Since our channel is a bandpass filter, in practice we won't integrate from -infinity to infinity)

This is a complex function that has Eqn13

We are interested in the magnitude(because we are looking at the signal to noise ratio)

The time can be obtained from the spectrum using the inverse Fourier transform Eqn14

Note: All physical waveforms in practice are Fourier transformable!!

Fourier Transform Properties


Symmetry
If x(t) is real, then X(-f) = X*(f)
(Spectral symmetry of real signals)
The magnitude spectrum is even about the origin
|X(-f)| = |X(f)|
Phase spectrum is odd about the origin
Eqn15

Time and frequency scaling Eqn16

Linearity Eqn17

Time delay Eqn18 (shift in time)

Conjugation Eqn19

Complex signal frequency Eqn20 (shift in frequency)

Parseval's Theorem
This is given by: Eqn21

If x1(t) = x2(t) = x(t), we get Rayleigh's energy theorem Eqn22

This gives an alternative way of finding the energy, using the frequency-domain description

Energy Spectral Density


Defined by: Eqn 23
Units J/Hz

The total normalised energy is the area under the energy spectral density (this gives the normalised
energy)
(typically we want to see how the energy is balanced amongst the spectral components)

Dirac Delta Function


Defined by: Eqn25
x(a) is continuous at a = 0
a can be time or frequency

Another definition is Eqn26


It is not a true function
It is treated as a generalised function
Fig4

Shifting property Eqn27

The equivalent integral of the delta function is:


Eqn28

- + or - used as needed
- Assumes that the function is even
- Can be verified using the Fourier transform

Unit step function is: Eqn29

Since delta(lambda) is 0 except at lambda = 0, we get Eqn30

Epsilon -> 0, gives Eqn 31

Spectrum of a sinusoid
Find the spectrum of a sinusoidal signal having frequency f0 and peak value A
Ex5

Let Eqn32 denote a single rectangular pulse


Eqn33
Fig6
Sa(.) denotes the sampling function Sa(x) = sin(x)/x
Eqn34
The spectrum is obtained by taking the Fourier transform of
Eqn 35
Fig7

Using the duality theorem => spectrum of Sa(x) pulse is a rectangle


Eqn36
Replacing T by 2W we get the Fourier transform pair
Eqn37
-where W is the absolute bandwidth in Hertz
If the pulses are offset, the spectra will be complex
-We lose symmetry
The rectangular pulse is important as it represents binary data!!
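A quick numerical check of this pair in Python (numpy's np.sinc(x) is sin(pi x)/(pi x), so T*np.sinc(f*T) is the rectangular-pulse spectrum); the pulse width is an assumed value:

```python
import numpy as np

T = 1e-3                             # assumed pulse width of 1 ms
f = np.linspace(-5/T, 5/T, 1001)     # frequency axis
X = T * np.sinc(f * T)               # spectrum of rect(t/T): T Sa(pi f T)

# the first spectral null sits at f = 1/T (1 kHz here)
print(abs(X[np.argmin(np.abs(f - 1/T))]))   # ~0
```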
Convolution
(peaks at maximum similarity)

Definition:
The convolution of a waveform w1(t) with a waveform w2(t) to produce a third waveform w3(t) is:
Eqn40
If discontinuous waveforms are convolved, we can use the equivalent integral
Eqn41

Steps involved:
Reverse one of the signals, e.g. w2, giving w2(-lambda)
Time-shift w2 by t seconds, giving w2(-(lambda - t))
Multiply the result by w1 to obtain the integrand, then integrate over lambda (a numerical sketch follows Ex6 below)
Ex6
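A numerical sketch of these steps using numpy (np.convolve implements the discrete sum; scaling by dt approximates the integral); the pulse widths are assumed:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)
w1 = np.where(t < 0.5, 1.0, 0.0)      # rectangular pulse, width 0.5 s
w2 = np.where(t < 0.5, 1.0, 0.0)      # identical second pulse

# discrete convolution scaled by dt approximates the convolution integral
w3 = np.convolve(w1, w2) * dt
t3 = np.arange(len(w3)) * dt

print(w3.max())             # ~0.5: triangle peak, at the point of
print(t3[np.argmax(w3)])    # maximum overlap/similarity (t = 0.5 s)
```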

Power Spectral Density


Useful to describe how the power of a signal and of noise is affected by devices - for example filters

Definition:
The power spectral density (PSD) for a deterministic waveform is Eqn42
-where wT(t) <> WT(f) and Pw(f) is in Watts/Hz

Notes
-PSD is always a real non negative function of frequency
-PSD is not sensitive to the phase spectrum of w(t)

The normalised power can be calculated as Eqn43


This is the area under the PSD curve
(The power fluctuates, and that is why we use the normalised power)

Autocorrelation function
Definition:
The autocorrelation of a real waveform is Eqn44

Moreover, the PSD and the autocorrelation function are Fourier transform pairs Eqn45
-Called the Wiener-Khinchin theorem

PSD can be calculated by calculating the autocorrelation function and taking the Fourier Transform
Normalised power P=Rw(0)
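A numerical illustration of this route in Python: the PSD computed directly from |W(f)|^2 matches the FFT of the (circular) autocorrelation, and R(0) recovers the normalised power; the test tone is assumed:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1, 1/fs)
w = np.cos(2*np.pi*50*t)                    # assumed test waveform

W = np.fft.fft(w)
psd_direct = np.abs(W)**2 / (len(w)*fs)     # periodogram-style PSD estimate

# autocorrelation route (circular): R(tau) <-> PSD (Wiener-Khinchin)
R = np.fft.ifft(np.abs(W)**2).real / len(w)
psd_from_R = np.fft.fft(R).real / fs

print(np.allclose(psd_direct, psd_from_R))  # True: transform pair
print(R[0])                                 # 0.5 = normalised power of the cosine
```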

Orthogonal Functions
Definition:
-Functions Eqn46 are said to be orthogonal with respect to each other over the interval a < t < b if
Eqn47

-Moreover, if the functions in set Eqn48 are orthogonal, then they also satisfy Eqn49

-Eqn50 is called the Kronecker delta function


-If Kn = 1 for all n, then the Eqn51 are orthonormal functions
Orthogonal Series
Let w(t) be a practical signal that we wish to represent over a < t < b
We obtain an equivalent orthogonal series representation by using the theorem:
-w(t) can be represented over interval (a, b) by the series Eqn52
-where the orthogonal coefficients are given by Eqn53
-and the range of n is over the integer values that correspond to the subscripts that were used to
denote the orthogonal functions in the complete orthogonal set

Fourier series
A type of orthogonal series
-Functions used are either sinusoids or complex exponential functions

The complex Fourier series uses the orthogonal functions Eqn54


-where n ranges over all possible integer values, w0 = 2pi/T0 (Eqn55), and T0 = (b - a) (Eqn56) is the interval
over which the series is valid

Theorem
-A physical waveform may be represented over the interval a < t < a + T0 by the complex
exponential Fourier series
Eqn 57
-where the complex coefficients are:
Eqn58
and w0 = 2pif0 = 2pi/T0
If w(t) is periodic with period T0, this Fourier representation is valid over all time
Notes
-a is arbitrary (we can start from wherever needed); usually a = 0 or a = -T0/2
-f0 is the fundamental frequency
-nf0 is the nth harmonic
-c0 is the DC value

Properties
-If w(t) is real => Eqn59
-If w(t) is real and even (w(t) = w(-t)) => Im[cn] = 0
-If w(t) is real and odd (w(-t) = -w(t)) => Re[cn] = 0
-Parseval's theorem Eqn60
-Complex Fourier series coefficients of a real waveform are related to the quadrature Fourier series
coefficients by Eqn61
-Complex Fourier series coefficients of a real waveform are related to the polar Fourier series
coefficients by Eqn 62

Distortion of signals
Amplitude distortion
-If the amplitude response is not flat

Phase distortion
-If the phase response is not linear

Real life scenarios


-Audio transmission
The human ear is sensitive to amplitude distortion
(a change in phase is equivalent to a change in frequency; the bandwidth is small)
-Video Applications
The human eye is more sensitive to delay - smearing of object edges
(If the amplitude changes a little you will not even notice it)
Sampling Theorem
Any physical waveform may be represented over the interval -infinity < t < infinity by:
Eqn63
Where Eqn64

and fs is a parameter that is assigned a value greater than 0. Moreover, if w(t) is band limited to B
Hz and fs >= 2B, the equation becomes the sampling function representation, with Eqn 65 [an = w(n/fs)]

The minimum sampling rate to reconstruct a band-limited waveform without errors is Eqn 66
This is called the Nyquist frequency
Let us use N sample values to reconstruct a band limited waveform
-Assume we want only the interval symb2
-The sampling function series can be truncated to include only N of the symb1 functions that have
their peaks in symb2. Eqn67

This will produce a weighted sum of time-delayed (sin x) / x waveforms, with [an = w(n/fs)] Eqn68
Fig8
Fig9
(When you sum the components together you get something that resembles the original waveform)
The sample values can be:


-Stored to reconstruct wave later
-Transmitted over channel and reconstructed at receiver

Each sample value is then multiplied by the appropriate (sin x)/x (sinc) function

The samples are added to give the original waveform

The minimum number of samples needed is Eqn69

We have N orthogonal functions in the algorithm
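A minimal Python sketch of this reconstruction, assuming a 30 Hz tone sampled at fs = 200 Hz (np.sinc(x) = sin(pi x)/(pi x)):

```python
import numpy as np

fs = 200.0                                  # sampling rate, >= 2B
n = np.arange(-100, 101)                    # sample indices
an = np.cos(2*np.pi*30*n/fs)                # samples a_n = w(n/fs) of a 30 Hz tone

t = np.linspace(-0.1, 0.1, 1001)            # reconstruction instants
# w(t) = sum_n a_n sinc(fs t - n): weighted, time-delayed (sin x)/x pulses
w_hat = sum(a * np.sinc(fs*t - k) for a, k in zip(an, n))

print(np.max(np.abs(w_hat - np.cos(2*np.pi*30*t))))  # small reconstruction error
```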

Discrete Fourier transform


The discrete Fourier transform (DFT) is defined by: Eqn70

The inverse DFT (IDFT) is defined by: Eqn 71

The fast Fourier transform (FFT) is normally used to reduce the computation time (using butterfly
operations)

The DFT can be used to compute the continuous Fourier transform (therefore we can better approximate our
analogue continuous signals)

Computing CFT with DFT

This uses 3 concepts


-Windowing(look at a window from the whole waveform)
-Sampling
-Periodic sample generation

The time waveform w(t) is first windowed to have a finite number of samples, N
Eqn72
The Fourier transform of this new waveform is:
Eqn73

Fig 9

(A window is just a square pulse; sampling then multiplies the windowed waveform by a train of delta functions)

We can approximate the CFT by using a finite series to represent the integral Eqn74
-Where t = k delta t, f = n/T, dt = delta t, and delta t = T/N

Thus Eqn75

Sample values used in DFT are x(k) = w(k.delta t)

Note that since exponent is periodic in n it follows that X(n) is periodic


-Only first N values are needed
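A hedged Python sketch of the approximation X(n/T) ~ delta t * DFT{x(k)}, using an assumed 1 s rectangular pulse inside a 10 s window:

```python
import numpy as np

N, T = 1024, 10.0                  # N samples over a window of T seconds
dt = T / N
t = np.arange(N) * dt
x = np.where(t < 1.0, 1.0, 0.0)    # windowed waveform: 1 s rectangular pulse

X = dt * np.fft.fft(x)             # CFT approximation at frequencies f = n/T
f = np.fft.fftfreq(N, dt)

print(abs(X[0]))                         # ~1.0, the pulse area (exact CFT at f = 0)
print(abs(X[np.argmin(abs(f - 1.0))]))   # ~0 at the 1 Hz spectral null
```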

Effects of computing CFT with DFT


Leakage
-Caused by windowing in the time domain
Like convolving the spectrum of the unwindowed function with the spectrum of the window
function in the frequency domain
-Spreads the spectrum of the frequency components
-Reduced by increasing T, or by using different window shapes

Aliasing
-The spectrum of a sampled waveform consists of replicas of the spectrum of the unsampled signal about
harmonics of fs
If fs < 2B, where B is the highest significant frequency in the unsampled signal, we will have
aliasing errors
-Decreased with a higher fs or a presampling low-pass filter

Picket-fence Effect
-An N-point DFT cannot resolve components any closer than delta f = 1/T spacing
-Improved by increasing T
If the data length is T0 <= T, we can extend T by adding 0s (zero padding). This reduces delta f

Notes:
-Delta t must satisfy the Nyquist sampling condition
-T must give the desired frequency resolution
-Number of data points N = T/(delta t)
-The DFT can also be used to find the coefficients of the complex Fourier series

Introduction to Modulation
We need to transmit more than one signal at a time, and to distinguish each signal from the others

Introduction
Modulation is a process of adding information (signal), m(t), to a carrier signal

It is needed mainly for transmission of data


The channel can be bandpass
We need to shift the signal into the band of interest
Modulation allows multiplexing onto a single channel
We shift to a higher frequency where there is more bandwidth and we can use smaller antennas (higher
frequencies mean more available bandwidth and smaller antennas)

Looking at a sinusoid Eqn76


We can vary 3 parameters to change the shape: amplitude, frequency, and phase angle
(In wireless, amplitude modulation is problematic because the amplitude is not stable)
(Phase tends to be more stable)

Complex Envelope Representation


Any physical bandpass waveform can be represented by:
Eqn77

g(t) is the complex envelope of v(t) and Eqn80

Equivalent representations:
Eqn78
And
Eqn79

Where Ex7
The waveforms are all baseband waveforms and, except for g(t), are all real
The mapping from g(t) to v(t) is a low-pass-to-bandpass transformation

The complex envelope of an AM signal is Eqn81


-Where Ac is a constant (power level) and m(t) is the modulating signal

The AM signal is given by Eqn82

Eqn83 is the in-phase component x(t) of the complex envelope


-Corresponds to real envelope |g(t)| when m(t) >= -1

Fig11
(To recover the message from the modulated signal, you only need a diode)

Fig12

(The limit: as the modulation amplitude increases, the envelope may cross zero and overlap, causing
distortion; the envelope can therefore only decrease until it reaches 0)

If m(t) has a positive peak value of +1 and a negative peak of -1, the signal is 100% modulated
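A small Python sketch of an AM waveform and its sine-wave modulation efficiency; the carrier, message, and modulation depth are assumed values:

```python
import numpy as np

fs, fc, fm = 100_000, 10_000, 1_000    # assumed sample, carrier, message rates (Hz)
t = np.arange(0, 0.005, 1/fs)
Ac, mu = 1.0, 0.8                      # 80% sine-wave modulation depth
m = mu * np.sin(2*np.pi*fm*t)          # m(t), peaks at +-0.8

s = Ac * (1 + m) * np.cos(2*np.pi*fc*t)   # AM: real envelope Ac(1 + m(t))

# sine-wave modulation efficiency: <m^2>/(1 + <m^2>), at most 1/3 for mu = 1
eff = (mu**2 / 2) / (1 + mu**2 / 2)
print(eff)                             # ~0.24: most power stays in the carrier
```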

Percentage of positive modulation of an AM signal


Eqn84

Percentage of negative modulation on AM signal


Eqn85

Overall modulation percentage is


Eqn86

The normalised average power of an AM signal is:


Eqn87

If modulation has no dc level, <m(t)> = 0


(Simple amplitude modulation is not power efficient)
Modulation efficiency is the percentage of total power of modulated signal that carries information

In AM, only sidebands carry information


Eqn88
(Never 100%; however, the electronics are simple and thus very cheap)

Fig13

(The spectrum now contains two sidebands, therefore in amplitude modulation we
need double the message bandwidth)

The spectrum of an AM signal is:


Eqn89

Therefore the AM spectrum is easy to obtain


- It is a translated version of the modulation spectrum plus delta function of the carrier

Bandwidth is twice that of the modulation bandwidth


(and more energy to transmit the signal)

(Efficiency is low because we are wasting a lot of power in the carrier)

Double-Sideband Suppressed Carrier


DSB-SC is an AM signal in which the carrier has been suppressed
Eqn90
-m(t) has zero dc

Spectrum now does not have the delta functions at +-fc


Eqn91

% modulation = infinite (no carrier)


Efficiency is 100% (same reason)
A product detector is needed (the electronics are slightly more complicated; we can no longer use a simple
diode detector)

If modulating signal is a sum of sinusoids, we get two frequency bands


Fig 13

Demodulation in ideal channel


Eqn 92

This will be the received signal r(t)

Multiplying by cos(wc t) gives:


Eqn 93

This gives
Eqn 94
(The first term is our wanted signal; the cos(2wc t) term is unwanted and we can filter it out using a low-pass
filter)
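A runnable Python sketch of this product detection for DSB-SC, with assumed frequencies; scipy's Butterworth low-pass removes the 2fc term:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, fc, fm = 100_000, 10_000, 500          # assumed rates (Hz)
t = np.arange(0, 0.02, 1/fs)
m = np.cos(2*np.pi*fm*t)                   # message
r = m * np.cos(2*np.pi*fc*t)               # received DSB-SC, ideal channel

v = r * np.cos(2*np.pi*fc*t)               # product detector output:
                                           # 0.5 m(t) + 0.5 m(t) cos(2 wc t)
b, a = butter(5, (2*fm) / (fs/2))          # low-pass well below 2fc
m_hat = 2 * filtfilt(b, a, v)              # recovered message, gain-corrected

print(np.max(np.abs(m_hat - m)))           # small recovery error
```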

(To further improve we can remove one of the frequencies)


Single Sideband
(Single sideband is attractive because one sideband alone carries all the information we need)
Definition: An upper single sideband signal has a zero-valued spectrum for |f| < fc. A lower single
sideband signal has a zero-valued spectrum for |f| > fc

Advantage is the bandwidth needed is the same as the modulating signal

An SSB signal is obtained using the envelope


Eqn 95
This gives the SSB waveform
Eqn96

Mhat(t) is the Hilbert transform of m(t)


Eqn97

H(f) = F[h(t)] is a -90 degree phase-shift network


Eqn 98

Fig 15

The normalised average power is:


Eqn99

But Eqn100

Thus
Eqn 101

which represents the modulating signal multiplied by a gain factor

The normalised peak envelope power is Eqn 102

Photo1

Vestigial Sideband
DSB takes a lot of bandwidth but SSB is expensive to implement

A compromise is VSB (used for example in TV)


Suppresses part of one of the sidebands (relaxing the filter requirements)

The VSB signal is given by


Eqn 103 s(t) is a DSB signal

Spectrum is Eqn 104

Can be recovered using product detection or envelope detection (if large carrier is used)

Fig16

Angle Modulation
The complex envelope is given by: Eqn 105

The real envelope R(t) = |g(t)| = Ac is constant, and the phase theta(t) is a linear function of m(t)

The angle-modulated signal is: Eqn 106

Special cases of angle modulation are:


-Phase Modulation
-Frequency Modulation
For PM: Eqn 107, Dp is the phase sensitivity of the modulator (it varies the phase according to the modulating
signal)
For FM: Eqn 108, Df is the frequency deviation constant

Fig 17

If bandpass signal is: Eqn109

Then, instantaneous frequency is


Eqn 110

for FM this is Eqn 111

FM Modulation
We have a variation about fc which is directly proportional to m(t)

Fig 18

We have a constant envelope -> constant power level Pav =0.5 Ac2

Increasing amplitude of modulating signal will increase deviation


-Increase the bandwidth required!!
-Different from AM
Frequency modulation index is given by: Eqn 112, B is the bandwidth of m(t)

Similarly for PM, peak phase modulation is Eqn 113

Spectra of Angle-Modulated Signals


Spectrum is given by Eqn 114

Note that g(t) is a nonlinear function of m(t) => no general formula for G(f) in terms of M(f) exists

Evaluation has to be done per case

Ex8

g(t) can be represented by a Fourier series Eqn 115

Where Eqn116
After some maths Eqn117

Jn(beta) is a Bessel function of the first kind of nth order. We can evaluate it using tables or MATLAB

Fig19

Properties:
Jn(beta) = J-n(beta), n even
Jn(beta) = -J-n(beta), n odd

For beta << 1:
J0(beta) approx = 1
J1(beta) approx = beta/2
Jn(beta) approx = 0, n >= 2

For arbitrary beta, Eqn 118

Taking the Fourier of g(t) gives Eqn 119

This result in spectrum Fig 20

Note that the discrete carrier term (at fc) is proportional to |J0(beta)| => it depends on beta and fm

Bandwidth depends on beta and fm

It can be shown that 98% of the total power is within


BT = 2(beta + 1)B

This provides a rule-of-thumb expression called Carson's rule


-An easy way to compute bandwidth requirements
-Other definitions such as the 3-dB bandwidth need full spectrum evaluation
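Carson's rule as a one-line Python helper; the broadcast-FM numbers in the example are standard values used only for illustration:

```python
def carson_bandwidth(peak_deviation_hz: float, message_bw_hz: float) -> float:
    """Carson's rule: BT = 2(beta + 1)B, with beta = deltaF / B."""
    beta = peak_deviation_hz / message_bw_hz
    return 2 * (beta + 1) * message_bw_hz

# broadcast FM: 75 kHz peak deviation, 15 kHz audio -> BT = 180 kHz
print(carson_bandwidth(75e3, 15e3))
```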

Narrowband Angle Modulation


If the theta(t) variations are small, the envelope g(t) = Ac e^(j theta(t)) can be approximated using a Taylor series
-Use only the first terms

Given Eqn120
The expression for the modulated signal becomes Eqn 121

(The spectrum of cos(wct) is one delta function)

The spectrum is: Eqn122

Binary Modulated Bandpass Signal


The modulating signal, m(t), is digital

Amplitude shift keying (ASK) consists of switching a carrier on and off


-produces a unipolar binary signal
-Called On-Off Keying (OOK)
-Signal is: Eqn123
-Envelope is: Eqn124
-Bandwidth needed is 2B

Fig 21
Fig 22

Binary-Phase Shift keying


Fig25

The BPSK signal is: Eqn 125


-m(t) is a polar signal
-Assume m(t) has peaks of +-1
-Expanding the equation gives: Fig23

Eqn 126
-Since cos(x) and sin(x) are even and odd functions respectively
Eqn 127

The pilot carrier term is set by the peak deviation: delta theta = Dp

Digital modulation index is defined as: Eqn 128


- 2 delta theta is the maximum peak-to-peak phase deviation during Ts
- Note: for binary signalling, Ts = Tb (Tb is the bit time)
The pilot carrier term depends on Dp, where delta theta = Dp for m(t) = +-1
- Small Dp => large amplitude for the carrier compared to the data
- But we need to maximise the data term
- This can be achieved by setting delta theta = Dp = 90 degrees = pi/2 rad
- This gives h = 1
- The BPSK signal becomes Eqn 129
- Equivalent to DSB-SC signalling with a polar data waveform

Fig24
(BPSK is better for transmission than OOK)

Differential PSK
PSK signal cannot be detected incoherently

A partially coherent method can be used


- Phase reference for current interval is provided by a delayed version of the signal

Fig26
(Instead of a low-pass filter we can use a matched filter, which gives you the maximum signal-to-noise
ratio (maximum power transfer - the shape of the filter is matched to the shape of the signal))

Differential decoding is provided by one-bit delay and multiplier

If data on BPSK is differentially encoded, we can reconstruct it at the output

Integrate-and-dump circuit matched filter is needed (dump to flush the integrator)

Bandpass filter impulse response is Eqn 130

Frequency-Shift Keying
Can be generated by switching between two different frequencies
- Discontinuous at switching times
Eqn 131
A continuous-phase FSK signal is obtained by feeding the data signal into a frequency modulator

This gives: Eqn 132
Although m(t) is discontinuous at the switching times, theta(t) is continuous since we have an integral

FSK bandwidth is 2B + (f2 - f1)


Fig 27
(bandwidth is hefty)

Multilevel Modulation
We can have more than two modulation levels

We can generate this from a serial binary input stream using a DAC
Fig 28

Example: If l = 2 => the number of levels is M = 2^l = 4


Symbol rate (baud) is D = R/l = 0.5R, R = 1/Tb
Quadrature PSK
PM transmitter with M=4 levels generates M-ary PSK(MPSK)

A plot gives 4 points, one for each value, corresponding to the 4 phases that theta can take

Two possible sets of g(t) are shown

Suppose the possible values at the DAC output are -3, -1, +1 and +3 V

These can correspond to 0, 90, 180 and 270 degrees, or 45, 135, 225, and 315 degrees (the voltage levels
are represented by phases)

Fig 29

MPSK
MPSK can be obtained using 2 quadrature carriers modulated by the x and y components of the
complex envelope Eqn 133

Permitted values of x and y are : Eqn 134

Quadrature Amplitude Modulation


QAM constellations are not restricted to lie on a circle like MPSK

Signal is: Eqn135


Where Eqn 136

Example with 16 symbols Fig 30


Eqn137
D = R/l; h1(t) is the pulse shape used for each symbol

Channels
White Noise
Fig31

The additive noise channel


- Has zero mean
- Models all noise in the channel

White noise is defined as having a constant power spectral density


Eqn138

1/2 indicates that 1/2 of power is on the positive frequency spectrum and the other 1/2 on the
negative

Assumes infinite bandwidth (only theoretical !!!)

Autocorrelation function gives:


Eqn 140

Samples at different times are uncorrelated


Fig32

Gaussian Noise
The distribution at any time instant is Gaussian

Fig33

Results from Central Limit Theorem (CLT)

In communications we assume Additive White Gaussian Noise (AWGN)

Has a real input x and a real output y

The conditional distribution is Gaussian


Eqn141

The channel has a continuous input and output but is discrete in time

It is called the additive white Gaussian channel


(Represents all of our channels both wired and wireless and we use it a lot especially when
considering the analogue domain)

The Gaussian Channel


Consider a continuous input x(t)

Received signal is y(t) = x(t) + n(t)

The average power of a transmission of length T is: Eqn142

Received signal contains noise

The magnitude is quantified by the noise spectral density, N0 (the noise can be represented by its spectral
density)

Let us transmit a set of N real numbers {xn}, n = 1, ..., N, in a single time interval T

Let the signal be a weighted combination of orthonormal basis functions phin(t)

Eqn143

Where Eqn144

Eqn 145
Fig 34

No noise case -> yn = xn

White Gaussian noise n(t) adds scalar noise nn to the estimate yn

This noise is Gaussian: nn ~ Normal(0, N0/2)

The power constraint Eqn146

Limits signal amplitudes xn

The bandwidth of a continuous channel is Eqn 148

Where Nmax is the maximum number of orthonormal functions produced in T

Use of a real continuous channel with bandwidth W, noise spectral density N0, and power P is
equivalent to N / T = 2W uses/s

-Gaussian channel noise level Eqn 149

Signal power constraints Eqn 150

Eb /N0
Assume that we encode the system to transmit binary source bits at a rate R bits/s

How can we compare two systems having different Rs and powers?


- Transmitting at higher R is good (R is rate)
- Needing less power is good too! (especially if portable devices)

We measure the rate-compensated SNR by the ratio of the power per source bit to N0

Eqn 151

Eb / N0 is used to compare coding schemes in AWGN

Capacity
The capacity for the continuous channel is Eqn152
Analysing this:
- Assume that we have a fixed power constraint
- What is the best bandwidth to use?
- Let W0 = P/N0 (the bandwidth for which SNR = 1)
- Then C/W0 = (W/W0) log2(1 + W0/W)
- As W grows, the capacity tends to W0 log2 e
- It is better to transmit at low SNR over a large bandwidth (spare spectrum)
- Otherwise at high SNR over a narrow bandwidth (most used)
- Spectrum is a limited resource and needs sharing
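A short Python sketch of this trade-off; P/N0 is an assumed value, and C is evaluated from C = W log2(1 + P/(N0 W)):

```python
import numpy as np

P_over_N0 = 1e4                       # assumed P/N0 (Hz), so W0 = 10 kHz
W0 = P_over_N0                        # bandwidth at which SNR = 1

for W in [0.1*W0, W0, 10*W0, 100*W0]:
    C = W * np.log2(1 + W0 / W)       # capacity in bits/s
    print(f"W = {W:>9.0f} Hz  ->  C = {C:>9.0f} bit/s")

print(W0 * np.log2(np.e))             # wideband limit: W0 log2(e) ~ 14427 bit/s
```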

Discrete Memoryless Channel


We have an input alphabet X(n), and output alphabet Y(n), and a set of conditional probability
distributions P(y | x) (probability of y given we sent x)

The transition probability is given in matrix form Eqn 153

The probability of the output is
Eqn 154
- where px is the probability distribution over the input

Binary Symmetric Channel


X(n) = {0, 1} and Y(n) = {0, 1} (with some probability we might have an error: instead of a 1 we receive a 0)
f is the probability of having an error
P(y = 0 | x = 0) = 1-f
P(y = 0 | x =1) =f
P(y = 1 | x = 0) = f
P(y = 1 | x =1) = 1-f

Fig 35

Let us consider a binary symmetric channel with f = 0.1


- This means that 10% of the bits are flipped
- See image below

Fig 36

Assume an input x comes from a probability set X

We obtain a joint probability set XY in which random variables x and y have joint distribution:

P(x, y) = P(y | x) P(x)


Question is if we received a symbol y, what was x?

Using Bayes Theorem we can write the posterior distribution

Eqn 155

Example
- Consider a binary symmetric channel with f = 0.2
- Let Px = {p0 = 0.75, p1 = 0.25} (p0 is the probability of a 0 and p1 the probability of a 1)
- Assume we receive a 1

Ex 9
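A hedged Python sketch of this posterior computation, working the example with the stated numbers:

```python
f = 0.2                     # crossover probability of the BSC
p0, p1 = 0.75, 0.25         # prior input distribution

# received y = 1; Bayes: P(x | y=1) is proportional to P(y=1 | x) P(x)
num1 = (1 - f) * p1         # 0.8 * 0.25 = 0.20
num0 = f * p0               # 0.2 * 0.75 = 0.15

post1 = num1 / (num0 + num1)
print(post1)                # ~0.571: the received 1 was more likely a 1
```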

Binary Erasure Channel


X(n) = {0, 1} (sending set) and Y(n) = {0, ?, 1} (receiving set) (e.g. in connections where routers drop packets)
P(y = 0 | x = 0) = 1 - f
P(y = 0 | x = 1) = 0
P(y = ? | x = 0) = f
P( y = ? | x = 1) = f
P(y = 1 | x = 0) = 0
P(y=1 | x=1) = 1 - f

Optimal input distribution of binary symmetric channel is uniform (due to channel symmetry)

Similarly, symmetry exists in the binary erasure channel -> optimal input distribution is uniform

Capacity is equal to:


C = 1 - f bits/channel use

Note:
- In binary symmetric channel, receiver has no knowledge on which bits are flipped
- In binary erasure channel, receiver knows which symbols are erased
- If transmitter is informed we can get the 1-f limit

Eye Diagram
Fig 38
The effect of channel filtering and noise can be seen by observing the received line code on an oscilloscope

Multiple sweeps are taken, each triggered by a clock signal, with a sweep width slightly larger than
Tb

Assessment of quality

Noise margin of system is given by the height of the opening

Sensitivity to timing error is given by slope of open eye

Timing error allowed by sampler is given by width inside the eye

Solutions to Improve Channel Performance


Use more reliable components (lower noise figure)

Remove any disturbance - such as air turbulence


Use lower-order modulation schemes - wider bit times

Use higher power for signal

Cool the circuitry (thermal noise)

Coding - error detection and correction (increase the number of bits)

Noise in Analogue Systems


Introduction
For systems with additive noise channels, received signal is : y(t) = x(t) + n(t)

For bandpass systems with transmission bandwidth BT

Eqn 156

Where gT(t) is the composite envelope at input (gs(t) + gn(t))

Eqn 157

Comparing Baseband Signals


Need common measurement criterion
- For analogue systems this is Ps (received power) divided by the white noise power in a bandwidth
equal to that of the modulating signal
- That is Eqn 158
- Ps is the power of the AM, DSB-SC, or FM signal at the receiver input
- B is the bandwidth of the baseband (modulating) signal
Eqn 159
BT is the bandwidth for the bandpass signal at receiver input

AM Systems with Product Detection


AM system with coherent detection

Fig37

The complex envelope of an AM signal is Eqn 160


- where Ac is the amplitude level and m(t) is the modulating signal

Complex envelope with noise becomes


Eqn 161

Output for the product detection is:


Eqn 162
- DC voltage which occurs because of the discrete AM carrier
- Detected modulation
- Detected noise
(if we don't use signal components close to DC, the Ac term can be removed)

The output SNR is given by:


Eqn 163

The input signal power Ps is:


Eqn 164
- Assuming that there is no DC on the modulating waveform Eqn 165 (SNR at the output)

Input SNR is:

Eqn 166 (SNR at the input)

Thus: Eqn 167

If we have 100% sine wave modulation -> Eqn 168


- This gives (S/N)out/(S/N)in = 2/3

We can also evaluate in terms of (S/N)baseband by substituting for Ps Eqn 169

For 100% sine wave modulation this is 1/3


- The AM system is 4.8 dB worse than a baseband system (a normal baseband signal without modulation)
that uses the same amount of signal power, due to the power in the carrier (we lose power)

AM Systems with Envelope Detection


Detector produces KRT(t) at output
- K is a proportional constant

For signal plus noise:


Eqn170

The power is
Eqn171
(With large SNR, yn (the noise) is smaller than Ac, therefore the last term can be neglected since it becomes
very small)

For large (S/N)in -> (yn/Ac)2 << 1, thus


Eqn 172

This gives us:


Eqn173
- Note: For large (S/N)in, this performance is identical to the product detector

Performance falls for small (S/N)in


- Detector output is: Eqn 174
- For (S/N)in < 1: Eqn 175
- That is: for a Gaussian noise channel we have Rayleigh-distributed noise and a signal term
multiplied by a random noise factor
- The latter has more impact on signal corruption

We have a threshold effect


-(S/N)out is very small when (S/N)in is < 1

AM signal with (S/N) << 1


Fig 39

Envelope detector performs badly for small (S/N)in


- Rarely noticed in practice
- AM users are interested only in good broadcast stations with (S/N)out ~ 25 dB
- In this range the envelope detector performs like the product detector
Note: Envelope detector is cheap and does not need a coherent reference!
- Used almost exclusively in AM broadcast receivers

Note: Product detector might be better option for Weak AM stations or AM data transmission systems

DSB-SC Systems
This is an AM signal in which the discrete carrier term has been removed
- Corresponding to infinite percent AM

m(t) is recovered by using coherent detection


Eqn176

Therefore SNR is
Eqn177
- The noise performance of DSB-SC is the same as for baseband signalling systems, but it needs twice the
bandwidth: BT = 2B

SSB Systems
The IF bandwidth is equal to B

Complex envelope is: Eqn 178


+ for USSB and -for LSSB

Total received signal is:


Eqn 179

The output of the product detector yields:


Eqn180

This gives an SNR:


Eqn181

Hilbert transform
Eqn182
Eqn183
Eqn184

The input powers is:


Eqn185

Input noise is N0B which gives us


Eqn186

SSB is equivalent to baseband signalling


- Both in noise performance and bandwidth needs

Note: DSB, SSB, and baseband all have a similar SNRout

PM Systems
Phase detector can be used to recover the modulation signal on a PM signal

Fig40
Complex envelope of PM signal is:

Eqn187
Dp is the phase sensitivity of phase modulator (rad/V)

At the detector input, the complex signal plus noise is


Eqn188
- For Gaussian noise, Rn(t) is Rayleigh distributed and thetan(t) is uniformly distributed

Phase detector output is


Eqn189
- K is the gain constant of detector

For large (S/N)in, ThetaT(t) can be approximated using the vector diagram

Fig 41

For Ac >> Rn(t):


Eqn190

Assuming no phase modulation present


Eqn 191
- Where yn(t) = Rn(t) sin Thetan(t)

Shows that unmodulated carrier suppresses the noise at output (quieting effect) when (S/N)in >> 1

When phase modulation is present


- Thetas(t) is deterministic and thus a constant for a given t
- Eqn192

Relevant part of detector output is


Eqn 193

The PSD of n0 is
Eqn194
- The PSD of the bandpass input noise is
- N0/2 in the passband of the IF filter
- Zero outside the passband

Fig42

Receiver output is a filtered version of r0(t)


- S0(t) is within the passband
- n0(t) is band limited
Eqn195

The noise power of receiver is Eqn196

This gives an output SNR of Eqn197

But Dp = Betap / Vp
- Betap is the PM index and Vp is the peak value of m(t)

Replacing Dp in the SNR output equation gives:


Eqn198
SNR at the input is:
Eqn199

Using Carson's rule:


Eqn200

Thus: Eqn201

The output to input SNR ratio is:


Eqn202

Expressing this in terms of the baseband system


Eqn203
- The improvement of the PM system over the baseband one depends on the amount of phase deviation used
- That is: increase Betap
- The maximum value of Betap m(t)/Vp can be taken to be pi
- For sinusoidal modulation:
Eqn204

FM Systems
Similar to PM, but the detector output is proportional to dThetaT/dt

Complex envelope of FM signal is


Eqn205

White noise is added to this signal

Output of detector is proportional to the derivative of the phase at input


Eqn206
- Recall: Eqn207

For FM, useful signal is:


Eqn208

and noise:
Eqn209
- Assuming (S/N)in >> 1

Because of the derivative component, the PSD of noise is different from PM case

Eqn210

Fig 43

Receiver output is a low pass version of r0(t)

Noise power of the filtered noise is


Eqn 211

Output SNR becomes:


Eqn212

Now Df/2piB = Betaf /Vp, giving


Eqn213

SNR at input is
Eqn214
Output to input SNR relation is:
Eqn215

In terms of equivalent baseband SNR:


Eqn216
Eqn217

For sinusoid Eqn 218


- Indicates an improvement if we increase Betaf
- However this increases the bandwidth => a decrease in (S/N)in
- To improve we can use a better FM discriminator (improved electronics)
- This affects the threshold
- For further improvement we apply pre-emphasis of the higher frequencies of m(t) at the transmitter
input and de-emphasis at the receiver output
- This stems from the parabolic shape of the PSD of the detected noise
(Trying to remove the parabolic characteristic)

Pulse Code Modulation


(Digitising the signal)

Definition - Pulse Code Modulation (PCM) is essentially analogue-to-digital conversion where the
information contained in the instantaneous samples of an analogue signal is represented by digital
words in a serial bit stream

If the digital words have n binary digits, we will have a set of M = 2^n unique code words
- Each code word represents an amplitude level

Note that an analogue signal can take an infinite number of levels -> the digital code word
represents the value closest to the actual sampled value
- Called quantisation
(more bits give more accuracy)

Advantages of PCM
Relatively inexpensive digital circuitry

PCM signals coming from any analogue source can be merged with data signals and transmitted on
a common system (using, for example, TDM)

PCM waveform can be regenerated at repeaters


- Useful for long distance communications
- A noisy input can cause bit errors in the regenerated signal

Performance in noise is superior to that of analogue system


- Error probability can be further reduced through coding

Main disadvantage - Much wider bandwidth

PCM signal is generated through:


- Sampling
- Quantizing
- Encoding

Fig 44
Quantization
Assuming that we have 8 levels, we get:
Fig 45

The steps in the quantiser shown are all equal -> uniform (they can be made unequal, giving more resolution
in some parts)

Error is introduced as we have finite levels


- Difference between analogue signal and output
- Peak value is 1/2 of the quantiser step size
- Similar to rounding off error

PCM signal is obtained by encoding each quantised PAM signal into a digital word

We can use for example Gray Coding


- Provides only one bit change for each step
- Helps because single bit errors at the receiver cause minimal error in the recovered analogue signal (unless
it is a sign bit)

In example we used binary code words

We can represent analogue samples using digital words that have a base different from 2
- Called multi-level signal
- Advantage: need less bandwidth
- Disadvantage: more complex circuitry

PCM signal is a nonlinear function of input


- Spectrum is not directly related to the spectrum of the analogue signal

Bandwidth of binary PCM depends on bit rate and waveform pulse shape used for data
representation

Let bit rate R = nfs


- n = number of bits in the PCM code word, fs is the sampling rate

For no aliasing, we want fs >= 2B


- where B is bandwidth of analogue signal

Using dimensionality theorem, bandwidth of binary PCM is bounded by: Eqn 219

Bandwidth of PCM signals


The minimum is obtained only when the (sin x)/x pulse shape is used to obtain the PCM waveform
- In most cases a rectangular pulse shape is used -> more bandwidth is necessary

Exact bandwidth depends on choice of line coding and pulse shaping

For example we can select a unipolar NRZ, a polar NRZ, or a bipolar RZ PCM waveform.
- Typical of cheap circuits

The null bandwidth is the reciprocal of the pulse width: 1/Tb = R for binary signalling

Fig47

Therefore, for rectangular pulses, first null bandwidth is


- BPCM = R = nfs
Results for fs = 2B are given in next slide

Note: Dimensionality theorem gives us a lower bound -BPCM >= nB


- PCM bandwidth is significantly larger than that of the analogue signal
- Example: if n = 3, the PCM bandwidth will be at least 3 times wider
- If the bandwidth is less, because of improper filtering or a system with poor frequency response, the filtered
pulses are elongated

Fig 49
(the more levels the better the performance, more dBs but the bandwidth required becomes larger,
accuracy at the expense of bandwidth)

Effects of Noise
Recovered signal will be corrupted by noise

Two main effects:


- Quantizing noise
- Caused by quantiser
- Bit errors
- Caused by noise in the channel and improper filtering

We also need sufficient bandwidth limitations - using anti-aliasing filter - and fast sampling - to have
negligible aliasing of noise

Recall that the bit error rate, Pe, depends on the energy per bit of signal, Eb, and on N0/2

PCM Performance in Noise


We need the peak signal to average noise ratio for the analogue input

The analogue sample xk is obtained at sample time t = kTs

This is quantised to Q(xk) - a value from the M levels

The sample Q(xk) is coded into the n-bit PCM word (ak1, ak2, ..., akn)

For polar systems, aks take values -1 or +1

Fig 50

For binary coding, assume PCM code words are related to samples by:
Eqn220

If the nth sample is (+1, +1, ..., +1), the sample value is:


Eqn221

The sum of this finite series is:


Eqn222

Received analogue signal is yk = xk + nk

Peak signal power to average noise power is:


Eqn223

Nk is made up of two uncorrelated effects


Eqn224

Thus Eqn225

For a normally distributed signal, the quantising noise is uniformly distributed


- That is, in the interval (-delta/2, delta/2)
- delta is the step size

With M = 2^n = 2V/delta:
Eqn226

Noise power because of bit error is:


Eqn227

Recovered analogue sample is reconstructed from the received PCM signal

The received nth sample is (bk1, bk2, ..., bkn), giving:


Eqn 228

If we have errors, bkj != akj, hence Eqn 229


- bkj and bkl, are two bits in received PCM word at different positions.
akj and bkl are transmitted and received bits in two different positions

Encoding results in bits that are independent if j != l

Therefore Eqn 230

If we evaluate the averages, we get Eqn 231

Thus: Eqn232

This becomes Eqn233

But 2^n = M

The signal-to-noise ratio is


Eqn234

The curves for different M are given in next slide

Fig51

(S/N)out as a function of Pe and M

For Pe < 1 / (4M2) main noise contribution is quantisation noise

For Pe > 1 / (4M2) main contribution is channel noise

The (S/N) found is the peak value


- Would like to find the average signal-to-noise ratio

Xk is uniformly distributed from -V to V


Eqn235
Thus:
Eqn236
Channel coding can correct some errors: Pe -> 0

If we assume no errors and no ISI, then:


Eqn237 & 238

The assumption is that the peak-to-peak level of the analogue waveform at the input of the PCM encoder is set to
the design level of the quantiser

If peak input exceeds design peak V we get flat-tops near the peak values - overload noise
- Produces unwanted harmonic content

Normal operation generates random noise


- If input is not sufficiently large we get deterioration of SNR

If input level is a relatively small value compared to the design level -> error values become not equally likely
from sample to sample
- Granular noise
- Reduced by increasing levels or non-uniform quantiser(Fig 52)

A near-constant input analogue waveform can generate oscillating sample outputs - hunting noise
- Causes a sinusoidal tone at 1/2 fs
- Reduced by filtering or by ensuring no vertical step at the constant value

Example(Bk pg 151)
Assume voice over telephone occupies the bandwidth 300 - 3400 Hz. The signal is converted to PCM for
transmission. Let us oversample at 8 ksamples/s. If each sample value has 8 bits, the bit rate is:

R = (8 ksamples/s)(8 bits/sample) = 64 kbits/s

Minimum absolute bandwidth of this signal is:


(B)min = 1/2R = 32kHz

The first null bandwidth is


BPCM=R=64kHz

Thus we need 64 kHz to transmit the digital PCM signal. The peak signal-to-quantisation-noise power ratio is:

(S/N)pk out = 3(2^8)^2 = 52.9 dB
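These numbers can be reproduced with a few lines of Python (a sketch of the book's example, not additional theory):

```python
import math

fs, n = 8_000, 8                    # 8 ksamples/s, 8 bits/sample
R = fs * n                          # bit rate: 64 kbit/s
B_min = R / 2                       # 32 kHz, absolute minimum (sinc pulses)
B_null = R                          # 64 kHz, first-null (rectangular pulses)

snr_pk = 3 * (2**n) ** 2            # peak S/N = 3 M^2, with M = 2^n levels
print(R, B_min, B_null)
print(10 * math.log10(snr_pk))      # ~52.9 dB
```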

Noise in Digital System


Introduction
Consider the block diagram of a general binary communication system

Fig53

For baseband signalling


- The receiver processing circuits include low-pass filtering plus amplification (because the signal was
transmitted through a channel and has thus been attenuated)

For bandpass signalling (ex. OOK, BPSK, FSK)


- Receiver consists of superheterodyne receiver made up of mixer, IF amplifier, and a detector

The analogue baseband waveform is sampled at t = t0 + nT, giving samples r0(t0 + nT)


- These go to a comparator which produces the binary serial waveform - symbol1
(Low-pass filters are used. This is to produce a more robust circuit at lower frequencies)
Error probabilities for Binary Signalling
Let T represent the duration to transmit one bit

The transmitted signal over an interval (0, T) is:


Eqn 239

At the receiver, the binary signal plus noise results in a baseband analogue waveform:
Eqn240

- R01(t) is the output signal corrupted with noise for a 1


- R02(t) is the output signal corrupted with noise for a 0

The signal is sampled at t0 during the bit interval

For matched-filter processing circuit, t0 is normally T

Resulting sample is:

Eqn241

r0(t0) is a random variable with a continuous distribution (noise corrupted the signal)

Assume we can evaluate PDFs of r0 = r01 and r0 = r02


- Conditional PDFs, as they depend on the transmitted symbol

The conditional PDFs are


Fig 54
- Actual shape depends on characteristics of channel noise, filter, detector circuit, and type of
binary signals

From figure assume that the polarity of the circuits is such that for a clean signal:
r0>= VT gives a binary 1
r0<VT gives a binary 0

When noise is present, errors can happen:


- An error occurs if r0 < VT and 1 was sent Eqn242 (shaded area to the left)
- Similarly we have an error if r0 >= VT and 0 was sent Eqn243 (shaded area to the right)

The BER is
Eqn 244

Recall from probability theory that:


Eqn245

Thus the general BER equation of a binary system


Eqn246

P(s1 sent) and P(s2 sent) are source statistics


- In most cases they are equally likely (1/2)

Results in Gaussian Noise


Assume channel noise has zero mean and is a wide-sense stationary Gaussian process
Furthermore, receiver processing circuits are linear except for threshold device

Therefore we got a Gaussian process at output

Note:
- For baseband signalling the assumption is valid as we have linear filters with some gain
- For bandpass signalling, the superheterodyne receiver is linear
- Yet, if we use AGC or limiters, the receiver becomes nonlinear (depending on the distance from the
transmitter)

- If nonlinear detector is used (ex envelope detector), output noise is not Gaussian

For linear case:


- Recall: r0 = s0 + n0
- n0(t) = n0 is a zero-mean Gaussian random variable and s0(t) = s0 is a constant depending
on the signal Eqn 247
- These constants are known for a given receiver with known signalling wave shapes s1(t) and
s2(t)
This means that r0 is a Gaussian random variable having a mean value of s01 or s02
- Labelled mr01 and mr02 in the PDFs

We have two conditional PDFs:


Eqn 248
Eqn249 is the average of the output noise

Using equal input probabilities


Eqn250

Let lambda = -(r0 - s01) / sigma0 in 1st integral and lambda = (r0 - s02) / sigma0 in 2nd

Eqn 251

or

Eqn 252

This means that appropriate selection of VT can reduce the probability of error

To find this we need to evaluate: dPe / dVT = 0 (To find the optimal VT)

Eqn 253

Using Leibniz's rule:


Eqn 254
Or
Eqn 255

This means that:


Eqn 256

Therefore for minimum Pe: Eqn257

Thus at the optimal threshold the BER is:


Eqn258 (We can further reduce the error by using a filter that maximises the signal-to-noise ratio)
White Noise and Matched-Filter Reception
We can further reduce BER through optimisation of the receiving filter

This means that we need a filter that maximises Eqn259


- sd2(t0) is the instantaneous power of the difference signal

The linear filter which maximises the instantaneous output signal power compared to the average noise
power (sigma0^2 = n0^2(t)) is the matched filter

Definition:
- The matched filter is a linear filter that maximises (S/N)out = s02(t0)/n02(t) and has a transfer
function given by EQn260
- where S(f) is the Fourier transform of the known input signal s(t) which has duration T seconds.
Pn(f) is the PSD of the input noise, t0 is the sampling time, and K is an arbitrary real non-zero
constant

For white noise this simplifies to Eqn 261, Fig55

In this case we want the matched filter to be matched to sd(t)

Impulse response of filter for binary signalling is:


Eqn 262

The output peak SNR obtained from filter is:


Eqn 263
- where Ed is the difference signal energy at receiver input Eqn 264

Therefore, for binary signalling corrupted by white Gaussian noise, the BER is: Eqn 265
- This is valid for matched-filter reception and using the optimum threshold
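A hedged Python sketch of this result, Pe = Q(sqrt(Ed/(2 N0))), specialised to the unipolar and polar cases derived later (Ed = 2Eb and Ed = 4Eb respectively):

```python
import math

def Q(z: float) -> float:
    """Gaussian tail probability Q(z) = 0.5 erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def ber_matched_filter(eb_over_n0_db: float, scheme: str) -> float:
    g = 10 ** (eb_over_n0_db / 10)          # Eb/N0 as a linear ratio
    if scheme == "unipolar":                # Ed = 2 Eb -> Q(sqrt(Eb/N0))
        return Q(math.sqrt(g))
    if scheme == "polar":                   # Ed = 4 Eb -> Q(sqrt(2 Eb/N0))
        return Q(math.sqrt(2 * g))
    raise ValueError(scheme)

print(ber_matched_filter(10.0, "polar"))     # ~3.9e-6 at Eb/N0 = 10 dB
print(ber_matched_filter(10.0, "unipolar"))  # ~7.8e-4: 3 dB worse
```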

Coloured Noise and Matched-Filter Reception


We can modify the results obtained to evaluate the BER for coloured noise

A pre-whitening filter needs to be added before the processing units

Fig56

Transfer function of pre-whitening filter is Eqn 266

Noise at filter output appears as white

Implies that now the previous theory applies

Matched filter in processing circuits has to match the filtered wave shapes
Eqn 267

where Eqn 268

The signal output from the pre-whitening filter will spread beyond the T second interval

Two types of degradation occur:


- Signal energy occurring beyond T is not used by matching filter to maximise the output signal
- Portion of previous signals that fall in the current interval will produce ISI

Effects can be reduced if duration of original signalling intervals is made less than T so that
spreading will remain within T

Unipolar Signalling
Fig57

We have two baseband signalling waveforms

Eqn 269

Let us assume that we have a low-pass filter with unity gain (H(f))

Assume filter bandwidth = B > 2/T


- Approximately preserves signal shape
- Noise will be reduced

Noise power at filter output -> Eqn270

Set to optimum threshold -> Eqn271

The BER is: Eqn272

If we apply a matched filter we get: Eqn 273


- Assuming sampling time t0 = T
- Energy in difference signal is A2T
Energy for binary 1 = A^2T; energy for binary 0 = 0; average energy per bit = A^2T/2

For rectangular pulse shape, matched filter is an integrator

Optimum threshold is Eqn 274

We normally express BER in terms of Eb/N0


- Gives indication of average energy needed to send 1 bit over a white noise channel
- Used to compare different signalling methods

Fig 58

Polar Signalling
Fig 58

Baseband signalling waveform is:


Eqn 275

The optimum threshold is now VT = 0

BER using low-pass filter is Eqn 276

For matched filter implementation:


- Energy of difference signal is Ed = (2A)2T
- Average bit energy is now Eb=A2T
- BER is Eqn 277
(Double the distance between the two therefore double the performance)
(Easier to distinguish)

Polar has 3dB advantage with respect to unipolar

Bipolar Signalling
Fig59

For NRZ
- Binary 1s are given by alternating positive and negative values Eqn 278
- Binary 0s are given by a zero level Eqn 279

Similar to unipolar but we need two thresholds +VT and - VT

Under AWGN, the BER is Eqn 280

This gives:
Eqn281

Using calculus we find the optimum VT for minimum BER to be: Eqn282

For low error systems, VT ~= A / 2

This gives a BER of Eqn 283

For receiver with low-pass filter and Eqn284


Eqn285

Using matched filter, SNR is Eqn 286

For bipolar NRZ, the energy in the difference signal is Ed = A^2T = 2Eb

The BER is Eqn 287

For bipolar RZ, Ed=A2T/4=2Eb => BER formula remains the same

BER of bipolar is 3/2 that of unipolar

Coherent Detection of Bandpass Signals


For On-Off Keying, signal is given by
Eqn 288

Product detector can be used


Fig61
The mixer translates the signal to IF, where high gain can be achieved before detection

The noisy signal is shown:


Fig62
The bandpass noise is given by:
Eqn 289
PSD is Eqn 290, Thetan is uniformly distributed and independent of Thetac

On-Off Keying
The filter at the receiver can either be a low-pass filter or a matched filter

Case 1-low-pass filter with DC gain of unity


- Assume equivalent bandwidth of filter is B >= 2/T
- To preserve envelope of OOK signal

Baseband analogue output is:


Eqn 291

Noise power is: Eqn 292

Optimum threshold VT = A/2

The BER becomes: Eqn 293


- Note: equivalent bandpass bandwidth of receiver is Bp = 2B

Case 2- Matched Filter

Energy in difference signal is:


Eqn 294

Therefore, the BER is: Eqn 295

Here s1(t) has a rectangular envelope => matched filter is an integrator

Optimum threshold is Eqn 296


- When fc >> R, this gives VT=AT/2

Performance is the same as for baseband unipolar signalling

Binary-Phase-Shift Keying
BPSK is an antipodal signal with Eqn 297

Fig 63

Case 1 -Low-pass filter with unity gain

Baseband analogue output is Eqn 298

Optimum threshold is VT = 0

The BER is: Eqn 299


- Note: Comparing BPSK with OOK
- On peak envelope power for a given N0 => 6dB gain(less signal power)
- On average signal power => 3dB advantage (average power of OOK is 3dB below PEP)

Eqn 300
Case 2 - Matched filter

The energy in the difference signal at receiver input is: Eqn 301

Therefore, BER is Eqn 302


- where average Eb=A2T/2 and VT=0

Note:
- Performance is same as for baseband polar signalling
- Is 3dB better than OOK

Frequency-Shift Keying
Coherent detection using two product detectors

Fig 64 <- one is the PSD of s1 whilst the other is that of s2

The signals are: Eqn 303


- Frequency shift is 2deltaF =f1-f2

Case 1 - Low-pass filter with unity DC gain


- Equivalent filter bandwidth = 2/T <= B < deltaF

The LPF must act like a dual bandpass filter - one centred at f1 and the other at f2, with Bp = 2B

Noise consists of two narrowband components


Eqn 304

Filtering is possible because 2deltaF > 2B

The input signal and noise is:


Eqn 305
r(t) = r1(t) + r2(t)
- Noise power is Eqn 306

The baseband analogue output is Eqn 307

The baseband noise processes are independent

Output noise is:


Eqn 308

Therefore BER is Eqn 309

Comparing FSK with BPSK and OOK:


- On peak envelope power, FSK needs 3 dB more power than BPSK and 3 dB less than OOK (same
Pe)
- On average power, FSK is 3 dB worse than BPSK and equivalent to OOK

Case 2 -Matched filter

Energy in difference signal is:


Eqn 310

or Eqn 311
Consider 2deltaF = f1 - f2 = n/(2T) = nR/2
- Integral term goes to zero
- Required for s1(t) and s2(t) to be orthogonal

If (f1 - f2) >> R, s1(t) and s2(t) are nearly orthogonal


- Value of integral is negligible compared to A2T

If one or both conditions are valid, Ed=A2T

BER is given by: Eqn 312


- Average energy per bit is A2T/2

Notes
- Performance is equivalent to OOK (matched)
- 3dB below BPSK

Coherent detection need coherent reference


- Obtained from input signal => noisy

Circuits more complex than non-coherent

Non-Coherent Detection of Bandpass Signals


BER analysis of non-coherent receivers is more complex

However, circuits to implement the receivers are simpler than the coherent ones

Note that BPSK cannot be detected with a non-coherent receiver.

On-Off Keying
Fig 65

Noise at filter output is bandlimited Gaussian noise

Signal plus noise is:


Eqn 313

Let filter bandwidth be Bp >= transmission bandwidth of OOK signal

For a 1 -> Eqn 314 we get:


Eqn 315

For a 0 -> s2(t) = 0

Eqn 316

BER is: Eqn 317


Assuming signals are equally likely

When we send s2(t) we receive r2(t)


The PDF of the envelope is Rayleigh distributed

Output of detector is r0 = R = r02

The PDF when we have only noise is:


Eqn 318
Sigma2 is the variance of the noise at input of detector

Therefore -> sigma2 = (N0 / 2) (2Bp) = N0Bp

When we send s1(t) we receive r1(t)

n(t) is a Gaussian process -> in-phase baseband component A + x(t) is also Gaussian process
- But mean is A not zero!

Output of detector is r0 = R = r01

For this sinusoid plus noise, we get


Eqn 319
- Rician PDF , with I0 being the modified Bessel function of the first kind and zero order
Eqn320

The two conditional PDFs result in:


Fig 66
Note: for A/sigma >> 1 the maximum occurs at r0 = A
Also for A/sigma >> 1 we get a Gaussian shape (as A increases)

The BER equation becomes:


Eqn 321

Optimum VT for A / sigma >> 1 is VT~= A / 2

The integral containing the Bessel function cannot be evaluated in closed form

We can approximate Eqn 322, valid for z >> 1

Then: Eqn 323

Since A / sigma >> 1, the integrand is negligible except for values of r0 in the neighbourhood of A
- Lower limit can thus be replaced by -infinity and sqrt(r0 / (2 pi sigma^2 A)) by 1/sqrt(2 pi sigma^2) Eqn 324

BER becomes
Eqn 325

Eqn 326

Using Eqn 327 for z >> 1, this becomes


Eqn 328

Approximation for BER is:

Eqn 329
- Average bit energy is Eb = A^2T/4, sigma^2 = N0Bp
- R = 1/T is the bit rate of OOK signal

BER depends on filter bandwidth. Minimum Bp = R
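Putting the pieces of this derivation together, a minimal sketch (assuming VT = A/2, the Gaussian approximation of the Rician PDF when a 1 is sent, and the Rayleigh tail when a 0 is sent; whether this matches Eqn 329 exactly depends on its form, and the names are mine):

import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2))

def pe_ook_noncoherent(ebn0, bp_over_r=1.0):
    # sigma^2 = N0*Bp and Eb = A^2*T/4  =>  A^2/sigma^2 = 4*(Eb/N0)/(Bp/R)
    a2_s2 = 4 * ebn0 / bp_over_r
    p_miss = qfunc(np.sqrt(a2_s2) / 2)   # '1' sent: Rician ~ Gaussian, tail below VT = A/2
    p_false = np.exp(-a2_s2 / 8)         # '0' sent: Rayleigh tail above VT = A/2
    return 0.5 * (p_miss + p_false)

for db in (8, 10, 12, 14):
    print(f"Eb/N0 = {db} dB  Pe ~ {pe_ook_noncoherent(10**(db/10)):.2e}")

Widening Bp above R (bp_over_r > 1) raises the noise variance and hence the BER, which is why Bp = R is the minimum noted above.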


Frequency-Shift Keying
Input consists of FSK signal plus noise (PSD = N0 / 2)

Fig 67

Filter bandwidth = Bp

For no noise, output of summing junction is:


- r0(t) = +A for binary 1
- r0(t) = -A for binary 0

Given this symmetry, and since the noise in the upper and lower channels is statistically similar => VT = 0

PDF of r0(t) conditioned on s1 and that conditioned on s2 are mirror images: f(r0 | s1) = f(-r0 | s2)

The BER is:


Eqn 330

r0(t) is positive when the channel output vU(t) is larger than vL(t)

Thus: Pe = P(vU > vL | s2)

When receiving a zero plus noise, output of the upper bandpass filter is only Gaussian noise
- In this case, output of the upper envelope detector vU is noise that has a Rayleigh distribution Eqn 337
- where sigma^2 = N0Bp

vL has a Rician distribution
- We now have a sinusoid plus noise, thus:
Eqn 338

We now obtain:
Eqn 339

Calculating the inner integral gives:


Eqn 340

Evaluating the integral gives:


Eqn 341
- Average energy per bit: Eb = A^2T/2
- sigma^2 = N0Bp

Notes:
- OOK and FSK are equivalent on an Eb/N0 basis
- For error performance, non-coherent FSK needs ~1dB more Eb/N0 than coherent FSK for Pe <= 10^-4
- In practice, most FSK receivers are non-coherent

Differential Phase-Shift Keying


PSK signals cannot be detected incoherently
Yet, a partially coherent method can be applied
- Phase reference for the current interval is obtained from a delayed version of the signal in the previous
interval

Fig 68

(One gives simpler circuitry than the other)

If we have a BPSK signal with no noise


- r0(t0) is positive if the present data bit and the previous one are of the same sense
- r0(t0) is negative if the present data bit and the previous one differ

The signalling technique of transmitting differentially encoded BPSK is known as DPSK

Assumptions
- Additive input noise is white and Gaussian
- Phase perturbation varies slowly (ensure constant phase reference)
- Transmitter carrier oscillator is stable

BER of the suboptimal demodulator for large SNR and BT > 2/T is:
Eqn 341
- For typical values (e.g. BT = 3/T and Eb/N0 = 10), BER is approx. Eqn 342
- Therefore, the performance is similar to OOK and FSK

For an optimum DPSK receiver, BER is Eqn 343

Plot is shown Fig 69

Comparing performance of BPSK and DPSK with optimum demodulation -> for the same Pe, DPSK
needs ~1dB more Eb/N0 than BPSK (at Pe ~= 10^-4)

DPSK does not need carrier synchronisation


(Removes the need for a carrier-recovery local oscillator in the receiver; even if the performance is
slightly worse, the receiver is cheaper)
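The delay-and-compare idea is easy to see in a minimal sketch (this uses one common encoding convention, d_k = m_k XOR d_{k-1}; polarity conventions vary):

def diff_encode(bits, d0=0):
    # d_k = m_k XOR d_{k-1}: a 1 flips the phase, a 0 repeats it
    out, prev = [], d0
    for m in bits:
        prev ^= m
        out.append(prev)
    return out

def diff_decode(symbols, d0=0):
    # compare each symbol with the previous one: m_k = d_k XOR d_{k-1}
    out, prev = [], d0
    for d in symbols:
        out.append(d ^ prev)
        prev = d
    return out

msg = [1, 0, 1, 1, 0, 0, 1]
enc = diff_encode(msg)
inverted = [1 - d for d in enc]      # a 180-degree carrier-phase ambiguity flips every symbol
print(diff_decode(enc))              # recovers msg
print(diff_decode(inverted, d0=1))   # still recovers msg - no absolute phase reference needed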

Quadrature PSK
QPSK is a multilevel signalling technique

Has 4 phase levels per symbol


- Two bits are transmitted in a signalling interval

The QPSK signal is
Eqn 344
The A factors carry the bit data (one on the cosine and one on the sine)

The relevant input noise is


Eqn 345

QPSK is equivalent to two BPSK signals


- One using the cosine carrier and the other the sine

The signal can be detected using the coherent receiver below


Fig70

Note that we have an IQ detector

(2 bits form a symbol, so at the output we need to change from parallel back to serial)


(for the 90° phase shift: to keep the circuit symmetric we normally apply +45° on one arm and -45° on
the other to match the paths)

Upper and Lower channels are BPSK receivers

Hence the BER is the same as for a BPSK system


Eqn 346

Note that for same bit rate R, bandwidths of BPSK and QPSK signals are NOT the same
- QPSK has half the bandwidth of BPSK for a given R
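A minimal simulation sketch of this IQ structure (parameter values are assumed; noise-free, so it only demonstrates that the two product detectors separate the cosine and sine channels):

import numpy as np

rng = np.random.default_rng(0)
nsym, fc, sps = 200, 4.0, 32                   # symbols, carrier cycles/symbol, samples/symbol (assumed)
bits = rng.integers(0, 2, 2 * nsym)
i_bits, q_bits = bits[0::2], bits[1::2]        # serial-to-parallel: 2 bits per symbol
ai, aq = 2.0*i_bits - 1, 2.0*q_bits - 1        # polar levels +/-1

t = np.arange(nsym * sps) / sps                # time in symbol units
i_wave, q_wave = np.repeat(ai, sps), np.repeat(aq, sps)
s = i_wave * np.cos(2*np.pi*fc*t) - q_wave * np.sin(2*np.pi*fc*t)

# IQ detector: product detectors + average over each symbol (acts as the LPF), threshold at 0
zi = (s * np.cos(2*np.pi*fc*t)).reshape(nsym, sps).mean(axis=1)
zq = (-s * np.sin(2*np.pi*fc*t)).reshape(nsym, sps).mean(axis=1)
rec = np.empty_like(bits)
rec[0::2], rec[1::2] = (zi > 0).astype(int), (zq > 0).astype(int)  # parallel-to-serial
print("bit errors:", int(np.sum(rec != bits)))

Choosing an integer number of carrier cycles per symbol makes the cosine-sine cross-terms average to zero exactly, so each arm behaves as an independent BPSK receiver.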

Bandwidth of pi/4 QPSK is identical to QPSK


- For same BER, differentially detected pi/4 QPSK needs about 3dB more Eb/N0 than BPSK
- Coherent detection gives same BER performance

Minimum-Shift Keying
MSK is equivalent to QPSK, except that the data on the x(t) and y(t) quadrature modulation components
are offset and the equivalent data pulse shape is the positive part of a cosine function (not rectangular)
- PSD rolls off faster!

Optimal receiver is similar to QPSK but matched filter with cosine pulse shape is needed

Thus probability of bit error for MSK is identical to QPSK

Can also be detected using FM-type detectors


- Gives suboptimal detection with BER similar to FSK

Comparison
Fig71
(Polar is the best, Unipolar is the worst and Bipolar is in the middle, for Baseband)
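For the bandpass schemes above, the standard BER expressions can be tabulated side by side; a minimal sketch (these are the usual textbook formulas, stated on an average Eb/N0 basis):

import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))

ebn0_db = np.array([4, 6, 8, 10, 12])
g = 10.0 ** (ebn0_db / 10)

rows = {
    "BPSK (coherent)":        Q(np.sqrt(2 * g)),
    "Coherent FSK / OOK":     Q(np.sqrt(g)),
    "DPSK (optimum)":         0.5 * np.exp(-g),
    "Non-coherent FSK / OOK": 0.5 * np.exp(-g / 2),
}
print("Eb/N0 (dB):", ebn0_db)
for name, pe in rows.items():
    print(f"{name:24s} " + "  ".join(f"{v:.1e}" for v in pe))

The 3dB gaps (BPSK vs coherent OOK/FSK) and the roughly 1dB coherent/non-coherent penalties quoted earlier can be read off directly from the printed table.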

Symbol Error and Bit Error for Multi-level Signalling


Simple closed-form formulas for BER are difficult to obtain
- Can be obtained by simulation or measurements

However in some cases, simple formulas for upper bounds on the probability of symbol error can
be found

The symbol error bound for MPSK (AWGN) is:


Eqn 347
- Bound becomes tight as M and Eb/N0 increase
(symbol error rate and bit error rate are not the same, since a symbol carries more than 1 bit)
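As a hedged illustration, the common nearest-neighbour approximation Ps ~= 2Q(sqrt(2Es/N0) sin(pi/M)) can be evaluated directly (Eqn 347 may be stated in a slightly different form):

import numpy as np
from scipy.special import erfc

Q = lambda x: 0.5 * erfc(x / np.sqrt(2))

def mpsk_symbol_error(ebn0_db, M):
    k = np.log2(M)                        # bits per symbol
    es_n0 = k * 10.0 ** (ebn0_db / 10)    # Es = k * Eb
    return 2 * Q(np.sqrt(2 * es_n0) * np.sin(np.pi / M))

for M in (4, 8, 16):
    ps = mpsk_symbol_error(10, M)
    # with Gray coding, a symbol error usually causes a single bit error: Pb ~= Ps / log2(M)
    print(f"M={M:2d}  Ps ~ {ps:.2e}  Pb ~ {ps/np.log2(M):.2e} (Gray-coded)")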

For MQAM signalling (AWGN), symbol error bound is:


Eqn 348
- Efficiency factor for M is -4dB for 16-QAM, -6dB for 32-QAM, -8.5dB for 64-QAM, etc.
The probabilities are related by: Eqn 349
- M = 2^k

In low error conditions (Pe < 10^-3), the erroneous symbol selected is usually a nearest neighbour on the
constellation (at higher error rates, errors would fall outside the nearest neighbourhood)

The BER in this case would be close to the lower bound!

Fig 72 Source http://www.dsplog.com/2007/11/06/symbol-error-rate-for-4-qam/

Fig 73 Source https://commons.wikimedia.org/wiki/File:4qam_constellation_noisy_sigma03_color.png

Synchronization
We have 3 levels of synchronisation in digital communications systems
- Bit synchronization
- Frame or Word synchronisation
- Carrier synchronisation (oscillator with carrier frequency)

Bit synchronization is needed for clocking the sample-and-hold and the matched filter

All BER results assume that bit sync and carrier sync are noise-free
- If these are noisy, we get larger Pe

Frame or word sync is required by some systems to partition the serial data into digital words or
bytes

In other systems, block coding or convolutional coding is used. Framing allows for error detection
and correction

Frame sync is needed in TDM systems

Higher level synchronisation can be required

Carrier sync is required when we have coherent detection

(performance curves will be given in the exam and the relevant equations will be shown on them)

Source Coding
(How we are representing the data before we send it)
(Try to reduce the size of the data since we have high sampling rate to make it digital)

Introduction
Source data needs to be represented with good fidelity and sampling rate

This leads to a number of bits, example


- ASCII data -> 8 bits per character
- Audio signal sample -> 16 bits
- RGB colour pixel -> 24 bits

Results in a channel rate which is too high

Solution: Coding
- Reduce channel rate required
- May introduce some level of loss and distortion
- Ruled by Entropy
(We remove any unnecessary data, looks into perceived content)
(sometimes we need lossless compression, which looks into statistical redundancies, example
when we compress a document)

Fig 74

Source coder is needed to transform the source information into a coded sequence whose values
are taken from a code alphabet (known at both ends of the comms medium)

Similarly the decoder will estimate the source signal from the received coded sequence
- This may contain transmission errors

Types of source coding


- Lossless
- Has discrete values which are reproduced without error (ex low no of bits for common letters
(ex a))
- Examples: executables, text data
- Lossy
- Sequence can be continuous or discrete and allows distortion
- Examples: audio, speech, images, video(images and video are sometimes lossless for
example in medical applications)

Fig 75

We have two possible optimisations here:

Minimise the channel rate


- Given an acceptable distortion level (can be zero (for lossless))

Minimise the distortion


- Given a channel rate

Thus in coding we have a tradeoff between rate and distortion!!


- Curves can be plotted and operating point depends on application, acceptable quality, and
available channel

Coding results in compression if we have statistical redundancy

If symbols are equally probable -> no compression is possible

If symbols do not have the same probability of occurrence, we can assign different code lengths to
the symbols
- Shorter codes for the more frequent ones
- Longer codes to the rare symbols

We need models that capture the statistical redundancy in the source

We can also reduce irrelevant data


- Data can be more precise than needed
- We can have oversampling
- Data can be quantised differently

A model of the intended receiver is required

A suitable transform can help us to better classify the data

Basic steps:
- Change representation
- Quantization
- Code assignment

Fig76

Optimal Prefix Code


We want to find a prefix code with the shortest average codeword length for a given finite alphabet

We can sort the symbols in the alphabet such that p1 >= p2 >= p3 >= ... >= pM


Find a set of positive integers li such that the average code length Eqn 350

is minimised, subject to the Kraft inequality Eqn 351

Can have more than one solution!!!

Huffman Coding
Consider the following example:
- Let symbols A, B, C, D, E, F have probabilities of transmission of 0.3, 0.2, 0.1, 0.15, 0.05, 0.2
- Using simple binary or Gray code assignment:

Source Symbol  Probability  Binary Code  Gray Code
A              0.3          000          000
B              0.2          001          001
C              0.1          010          011
D              0.15         011          010
E              0.05         100          110
F              0.2          101          100

(For more compression we need to go to the lossy)


(We will not do lossy in this unit)

The algorithm constructs the prefix code starting from the last bits of the least probable symbols
- Arrange source symbols in descending order of probability
- Create a new source with one less symbol by combining the last two symbols and assigning the bits 0
and 1 to the pair
- Add the two probabilities to replace the previous two
- Select the two lowest probabilities in the new list and combine, again assigning the two bits
- Repeat this combination and bit assignment until the probability is 1
- Encode each original symbol into the binary sequence generated by the combinations, with the
first combination being the LSB

Fig 77

Note:
Eqn 352
lfixed = 3
Eqn 353

H <= lHuff (if the average length were lower than H, some code words would not be uniquely decodable)

Resulting Huffman code is not unique (if we change the bit assignment we get a different, equally valid code)
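A minimal sketch of the algorithm on the example alphabet above (the bit assignment at each merge is arbitrary, so the code words may differ from Fig 77 while the lengths match):

import heapq
from math import log2

probs = {"A": 0.3, "B": 0.2, "C": 0.1, "D": 0.15, "E": 0.05, "F": 0.2}

# heap of (probability, tiebreak, {symbol: code-so-far}); repeatedly combine the two
# smallest, prepending one bit to each half, until a single tree remains
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
count = len(heap)
while len(heap) > 1:
    p1, _, c1 = heapq.heappop(heap)
    p2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + c for s, c in c1.items()}
    merged.update({s: "1" + c for s, c in c2.items()})
    heapq.heappush(heap, (p1 + p2, count, merged))
    count += 1

code = heap[0][2]
avg_len = sum(probs[s] * len(c) for s, c in code.items())
entropy = -sum(p * log2(p) for p in probs.values())
print(code)
print(f"H = {entropy:.3f} bits, average Huffman length = {avg_len:.2f}, fixed length = 3")

For these probabilities the average length works out to 2.45 bits against an entropy of about 2.41 bits, consistent with H <= lHuff < lfixed.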

Run-length Coding
Simple scheme that converts a string of repeated characters into one with fewer characters

The format is Fig 78


- where S is a special character indicating that compression follows, X is the repeated character,
and C is the count
- Example $6000000037 -> $6S0737
- Note: S is not part of original data set
- Can be applied to images (black and white)
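A minimal sketch of this S-X-C scheme (it assumes single-character counts, i.e. runs shorter than 10, and only compresses runs of 4 or more; both thresholds are my choice):

def rle_encode(data, marker="S", min_run=4):
    # emit marker + repeated char + count for runs of min_run or more
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        run = j - i
        if run >= min_run:
            out.append(f"{marker}{data[i]}{run}")
        else:
            out.append(data[i] * run)
        i = j
    return "".join(out)

print(rle_encode("$6000000037"))  # -> $6S0737, matching the example above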

Binary Run-length Coding


Use k bits to represent runs of 0s
Runs of 1s are not transmitted (when we stop the count it means that there is a 1)
Separate consecutive 1s by a count of zero (removed at the receiver)
- Example
- Let k = 4 bits, encode the following bit pattern

Fig 79
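A minimal encoder sketch (the handling of runs longer than 2^k - 1 via an all-ones "continuation" word is an assumed convention, and trailing zeros with no terminating 1 are simply ignored here):

def binary_rle_encode(bits, k=4):
    # each k-bit word = number of 0s before the next 1; a count of 0 separates
    # consecutive 1s; the all-ones word is assumed to mean "run continues"
    max_run = (1 << k) - 1
    words, run = [], 0
    for b in bits:
        if b == 0:
            run += 1
            if run == max_run:
                words.append(max_run)
                run = 0
        else:
            words.append(run)
            run = 0
    return "".join(format(w, f"0{k}b") for w in words)

print(binary_rle_encode([0]*9 + [1, 1] + [0]*3 + [1], k=4))
# 9 zeros then 1 -> 1001; zero-length run then 1 -> 0000; 3 zeros then 1 -> 0011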

Performance of Run-length Coding


Worst case scenario:
- Alternating 1s and 0s
- We waste k-1 bits, as we represent each transition by k bits

Best case scenario:


- No transition
- Compression ratio is (2^k - 1)/k

Golomb Coding
We can potentially have an infinite number of symbols (therefore Huffman coding is impractical here)

Use variable-length coding for better performance

Divide number by a fixed value and transmit quotient and remainder


Symbol value (n) = qm + r (q is the quotient)(m is the divisor)
Where 0 <= r < m
- q is coded in unary
- r is coded as a fixed prefix code

n = qm + r is represented using:
Fig 81

- where ^r is the fixed prefix for r

Example:
- Take m = 6
- Let us encode the following symbols

Fig 82
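A minimal encoder sketch (unary written as q ones followed by a zero, and the fixed prefix for r taken as a truncated binary code - both common conventions, though others exist):

def golomb_encode(n, m):
    # n = q*m + r: q in unary (q ones then a zero), r in truncated binary
    q, r = divmod(n, m)
    b = m.bit_length()
    cutoff = (1 << b) - m          # the first 'cutoff' remainders use b-1 bits
    unary = "1" * q + "0"
    if r < cutoff:
        rem = format(r, f"0{b-1}b") if b > 1 else ""
    else:
        rem = format(r + cutoff, f"0{b}b")
    return unary + rem

for n in (0, 3, 7, 14):
    print(n, golomb_encode(n, 6))

With m a power of two the remainder becomes plain fixed-length binary and the scheme reduces to Rice coding.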

How to choose m?

Let the probability of 0 be p, then the probability of 1 is 1-p


The probability of having the run 0^n 1 is p^n (1-p)

The optimal order of the code is:


Eqn 354

Example: let p = 63/64

Eqn 355

Using graphs

Assuming m = 5

Fig 83

Input  Output
00000  1
00001  0111
0001   0110
001    010
01     001
1      000

Using this code:

Input:

Fig84

Eqn 356

Taking m = 5 as an example and assuming that the probability of 0 is p

Eqn 357

Notes:
- Useful when one symbol is more likely than the others
- Need to find the best order (training or learning)

Error Resilience
(We are going to reintroduce redundancy: we correct errors statistically by adding controlled
redundancy back into the bit stream, for example repeating the data / sending it twice)

Introduction
Communication over a channel is prone to errors

We need to reduce the effects of errors

Can be done by
- Automatic Repeat Request
- Need a feedback channel
- When parity errors are detected, receiver requests transmitter to resend the data block
- Forward Error Correction
- No feedback channel
- Data is encoded such that receiver can detect and correct errors
- Adds redundancy to the data stream

Forward Error Correction increases bandwidth requirements

Classified into
- Block codes
- Maps k input binary symbols into n output binary symbols
- Coder is a memoryless system
- n > k, such as adding parity bits
- Denoted by (n, k) and has a code rate of R = k / n
- Convolutional codes
- Accepts k binary symbols and outputs n binary symbols
- n outputs depend on the current k inputs plus v previous ones -> needs memory
- R = k / n

Definitions
Hamming weight of code word is the number of 1 bits in the word
- Example: 1011011 has a Hamming weight of 5

Hamming distance is the number of bit positions in which two code words differ
- Denoted by: d
- Example: for code words 101101 and 110111, d = 3

A received code word can be checked for errors

Some errors can be detected and corrected if d >= s + t + 1, where s is the number of detectable errors
and t is the number that can be corrected (s >= t).

Block Codes
General code word is: i1 i2 i3 ... ik p1 p2 p3 ... pr
- k is the number of information bits
- r is the number of parity bits
- n = k + r is the total length of the (n, k) code

Such a block code is called systematic

Equivalent codes have parity bits interleaved between information bits

More parity bits => more redundancy => more detection and correction capability => more
bandwidth necessary!
Linear Block Codes
We use modulo 2 arithmetic on {0, 1}
- 0 + 0 = 0, 0 + 1 = 1 + 0 = 1, 1 + 1 = 0
- 0*0=0, 0*1 = 1*0 = 0, 1*1 = 1

A code is linear if, given code_word_a, code_word_b in S, then


code_word_a + code_word_b in S
Example: if (01110), (11001) in S then
01110 + 11001 = 10111 in S

If C is an (n, k) code, there exist code words g0, g1, ..., gk-1 that form a basis for the code
considered as a vector space over F2

Every code word c in C has a unique expression


Eqn 358

Therefore an (n, k) linear code has 2^k code words, where a code word represents 1 of 2^k
messages

In matrix form Eqn 359

G is the generator matrix

Associated with a block code generator G we have a matrix H called the parity check matrix

If c is a code word, then cH^T = 0


- Code word is orthogonal to each row of H => GH^T = 0

H is an (n-k) x n matrix

Example: Assume a (7, 4) code generated by:

Example 1

Sometimes it is desirable to have the original data appear explicitly in the code word

Called Systematic encoding

We need a generator matrix that allows this


- Perform row reductions and column reordering on G until we get the identity matrix
- We can write G = [P | Ik]
- Ik is the k x k identity matrix and P is k x (n - k)
- Therefore c = mG = m[P | Ik] = [mP | m]

When G is systematic,
- H = [In-k | -P^T]; for binary codes H = [In-k | P^T]

Theorem
- Let a linear block code C have a parity check matrix H. The minimum distance of C is equal to the
smallest positive number of columns of H which are linearly dependent
Proof
- Let the columns of H be d0, d1, ..., dn-1; then for any code word c
- c0d0 + c1d1 + ... + cn-1dn-1 = 0
- Let c be a codeword of smallest weight, w = w(c); its w non-zero positions select w columns of H that
sum to zero, so w columns are linearly dependent and no fewer can be

Thus the bound on the distance of a code is: dmin <= n-k+1
- This is because H has n-k linearly independent rows
- It follows that any set of n-k+1 columns of H must be linearly dependent

The syndrome of a received vector is:


s = rH^T

For a code word the syndrome = 0

Syndrome is independent of code word:


r = c + e, gives: s = (c + e)H^T = eH^T

If 2 error vectors, e and e', have the same syndrome


- Then eH^T = e'H^T, so (e + e')H^T = 0
- Therefore the two error vectors must differ by a code word

Hamming Code
Block code having distance of 3

Given that d >= 2t + 1 => t = 1, a single error can be detected and corrected; up to d-1 errors can be
detected

Only certain (n,k) are allowed:


(n, k) = (2^m - 1, 2^m - 1 - m)
m is an integer >= 2

- This gives (3,1), (7,4), (15,11), (31,26), ...


- Parity bits = n - k = m
- Note: As m increases, R -> 1

H consists of all non-zero binary m-tuples

Syndrome indicates which column of H corresponds to error position

Note: some error patterns of weight d or more go undetected (those equal to code words give a zero syndrome)
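A minimal (7, 4) sketch of encoding plus syndrome correction (this particular P is an assumed arrangement; any H whose columns run through all the non-zero 3-tuples works):

import numpy as np

# systematic (7,4) Hamming code, G = [P | I4]
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([P, np.eye(4, dtype=int)])        # 4 x 7 generator
H = np.hstack([np.eye(3, dtype=int), P.T])      # 3 x 7 parity check, G @ H.T = 0 (mod 2)

m = np.array([1, 0, 1, 1])
c = m @ G % 2                                   # code word

e = np.zeros(7, dtype=int); e[5] = 1            # inject a single-bit error
r = (c + e) % 2
s = r @ H.T % 2                                 # syndrome

# the syndrome equals the column of H at the error position
pos = next(j for j in range(7) if np.array_equal(H[:, j], s))
r[pos] ^= 1                                     # flip the flagged bit
print("corrected:", np.array_equal(r, c))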

Maximum Likelihood Detection


Decoder chooses the most likely code word given the received vector r

Maximise P(r | c)

In Binary Symmetric Channel, crossover probability p < 1/2


- Maximum likelihood decoding coincides with minimum distance decoding (smallest Hamming
distance)

Sometimes our interest is in message bit decoding


- Receiver will decode each message bit separately
- Decodes the ith message symbol mi as 0 or 1 depending on: Eqn 351
Array Decoding
Assume we sent c0 and the error pattern is e0; the received vector is: r = c0 + e0

Receiver has to estimate the most likely code word and most likely error

These are related by: r = ^c + ^e (the estimated code word plus the estimated error pattern)

The syndrome s = rH^T = eH^T
- This limits the possible error patterns
- Using linear algebra, a solution e1 gives a set of possible solutions S = e1 + C := {e1 + c}
- Maximum likelihood decoder estimates the ^c closest to r
- Equivalent to finding the ^e with smallest Hamming weight in S

Sets having the form a + C, a in F2^n, are called co-sets

Using linear algebra, we can show that there is a one-to-one correspondence between
co-sets and syndromes

Co-sets of a code partition the space F2^n

Consider the (6, 3) block code with generator matrix: Ex 6


From the table, the leftmost column holds the vector with the least Hamming weight in each
co-set
- Called co-set leaders

If we have more than one vector with minimum weight, selection of the co-set leader
is done arbitrarily from these

Such table is called standard array

To decode
- Compute the syndrome
- Identify co-set leader
- Estimate ^c = r + ^e

Ex 7

Notes:
- When decoding we only need first and last columns
- If redundancies are large we get a huge array - impractical

Performance of Array Decoding

Decoder will make correct decision iff the actual error pattern is a co-set leader

For a binary symmetric channel we can easily find the probability of code word error

Assuming the (6, 3) code discussed previously:

Eqn 360
- p is the crossover probability

A block code detects an error iff e is not itself a code word other than the all-zero one.

If the weight distribution is {Ai}, 0 <= i <= n

Probability of undetected error is Eqn 361

Using the (6, 3) code:


A0 = 1, A1 = A2 = 0, A3 = 4, A4 = 3, A5 = A6 = 0
(Ai counts the code words with i ones, read off the code words in the first row of the array)

Thus:
Eqn 362
Cyclic codes

Given a vector c = (c0, c1, ..., cn-1)

A new code word is formed by the cyclic shift of c

c' = (ca, ca+1, ..., cn-1, c0, ..., ca-1), 0 < a <= n-1

A code is cyclic iff every cyclic shift c' of c belongs to C

Consider the generator matrix


Matrix 1

We get code words:

Ex 8

Generator polynomial of any (n, k) cyclic code has the properties:


- g(x) has degree = n-k
- g(x) | x^n + 1
- That is, g(x) divides x^n + 1
- A polynomial over F2 of degree <= n-1 is a code polynomial iff it can be
expressed in the form c(x) = m(x)g(x) for some polynomial m(x) of degree <= k - 1

- From the example, (7, 3) => n-k = 4

g(x) | x^7 + 1 since x^7 + 1 = (x^4 + x^3 + x^2 + 1)(x^3 + x^2 + 1)


etc
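This divisibility is easy to verify with carry-less arithmetic; a minimal sketch (polynomials held as integer bit masks, bit i = coefficient of x^i; names are mine):

def gf2_mul(a, b):
    # carry-less multiplication of GF(2) polynomials held as integer bit masks
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

g = 0b11101    # x^4 + x^3 + x^2 + 1
h = 0b1101     # x^3 + x^2 + 1
print(bin(gf2_mul(g, h)))   # 0b10000001 = x^7 + 1, confirming g(x) | x^7 + 1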

BCH Codes

Are a family of binary cyclic codes


- Trade-off between dimensions and minimum distance

Definition:
- Let n be a divisor of 2^p - 1, p is an integer >= 3
- The finite field containing F2 has size 2^p (Galois Field GF(2^p))
- Let gamma be a primitive element of GF(2^p) and Eqn
- Alpha has order n
- A t-error correction BCH code is the set of all binary n-tuples that lie in the null-
space of the (t x n) matrix

Note: If N denotes the null-space of the matrix over GF(2^p), then the BCH code is the intersection
C = N ∩ F2^n.

BCH Codes

Has minimum distance d >= 2t + 1

Number of information bits k >= n - pt

Can be described in terms of generator polynomial


- This is the smallest degree polynomial that has zeros alpha, alpha^3, ...,
alpha^(2l+1), ..., alpha^(2t-1)

Example
- Single error correcting BCH having n = 15 has g(x) equal to the minimal polynomial of alpha (for GF(16), e.g. g(x) = x^4 + x + 1)
(continued on paper)
