

Digital Communications
Instructor: Bakhtiar Ali
Lecture # 2

Plan for Today

Review of Previous Lecture
1.2 Classification Of Signals
1.3 Spectral Density
1.4 Autocorrelation
1.5 Random Signals


Review of Previous Lecture


Why Digital Communication?

Easy to regenerate the distorted signal
  Regenerative repeaters along the transmission path can detect a digital signal and retransmit a new, clean (noise-free) signal
  These repeaters prevent the accumulation of noise along the path
  This is not possible with analog communication systems

Two-state (finite-state) signal representation
  The input to a digital system is in the form of a sequence of bits (binary or M-ary)

Immunity to distortion and interference
  Digital communication is rugged in the sense that it is more immune to channel noise and distortion

Why Digital Communication?

Hardware is more flexible
  Digital hardware implementation is flexible and permits the use of microprocessors, mini-processors, digital switching and VLSI
  Shorter design and production cycle

Low cost
  The use of LSI and VLSI in the design of components and systems has resulted in lower cost

Easier and more efficient to multiplex several digital signals
  Digital multiplexing techniques - Time and Code Division Multiple Access - are easier to implement than analog techniques such as Frequency Division Multiple Access

Can combine different signal types - data, voice, text, etc.
  Data communication in computers is digital in nature, whereas voice communication between people is analog in nature


Why Digital Communication?

The two types of communication are difficult to combine over the same medium in the analog domain.
  Using digital techniques, it is possible to combine both formats for transmission through a common medium

Can use packet switching

Encryption and privacy techniques are easier to implement

Better overall performance
  Digital communication is inherently more efficient than analog in realizing the exchange of SNR for bandwidth
  Digital signals can be coded to yield extremely low error rates and high fidelity as well as privacy

3. Disadvantages
  Requires reliable synchronization
  Requires A/D conversions at a high rate
  In general, requires larger bandwidth

4. Performance Criteria
  Probability of error or Bit Error Rate


Performance Metrics

Analog Communication Systems
  Metric is fidelity: we want \hat{m}(t) \approx m(t)
  SNR is typically used as the performance metric

Digital Communication Systems
  Metrics are data rate (R bps) and probability of bit error, P_b = p(\hat{b} \neq b)
  The possible symbols are already known at the receiver
  Without noise, distortion or synchronization problems, we would never make bit errors

A small simulation sketch of the bit-error metric follows below.
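As an added illustration (not part of the original slides), the bit-error probability P_b = p(\hat{b} \neq b) can be estimated by simulation. The sketch below assumes BPSK over an AWGN channel, with a hypothetical Eb/N0 of 6 dB; these choices are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_bits = 100_000          # number of bits to simulate (arbitrary choice)
ebno_db = 6.0             # assumed Eb/N0 in dB (hypothetical value)

bits = rng.integers(0, 2, n_bits)           # source bits b
symbols = 2 * bits - 1                      # BPSK mapping: 0 -> -1, 1 -> +1
noise_std = np.sqrt(1 / (2 * 10 ** (ebno_db / 10)))
received = symbols + noise_std * rng.normal(size=n_bits)

bits_hat = (received > 0).astype(int)       # detected bits b_hat
pb_est = np.mean(bits_hat != bits)          # estimate of P_b = p(b_hat != b)
print(f"Estimated bit error probability: {pb_est:.4f}")
```

With no noise (noise_std = 0), every detected bit matches the transmitted bit, which is the point made on the slide.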


Goals in Communication System Design

To maximize transmission rate, R


To maximize system utilization, U
To minimize bit error rate, Pe
To minimize required systems bandwidth, W
To minimize system complexity, Cx
To minimize required power, Eb/No


Main Points

Transmitters modulate analog messages, or bits in the case of a DCS, for transmission over a channel.

Receivers recreate signals or bits from the received signal (mitigating channel effects).

The performance metric for analog systems is fidelity; for digital systems it is the bit rate and error probability.

Comparative Analysis of Analog and Digital Communication


Analog Communication: Transmitter and Receiver

[Block diagram: each message signal is modulated and the modulated signals are combined by an FDM multiplexer at the transmitter; the combined signal passes over the wireless channel; at the receiver a demultiplexer (tuner) separates the channels and individual demodulators recover the messages.]

Digital Communication: Transmitter

[Block diagram: an analog input is converted to bits by an analog-to-digital converter; the bits are source encoded, encrypted, multiplexed with bits from other channels, channel encoded, scrambled, mapped from bits to symbols and pulse modulated into a baseband waveform, and finally bandpass modulated to produce the digital bandpass waveform.]


Digital Communication: Receiver

[Block diagram: the received digital bandpass waveform is demodulated to a digital baseband waveform; equalization, timing recovery and symbol-to-bit mapping recover the bits, which are then channel decoded, descrambled, demultiplexed (to this and other channels), decrypted and source decoded; a digital-to-analog converter produces the analog output.]

Digital Signal Nomenclature

Information Source

Discrete output values, e.g. a keyboard
Analog signal source, e.g. the output of a microphone

Character

A member of an alphanumeric/symbol set (A to Z, 0 to 9)
Characters can be mapped into a sequence of binary digits using one of the standardized codes such as
  ASCII: American Standard Code for Information Interchange
  EBCDIC: Extended Binary Coded Decimal Interchange Code


Digital Signal Nomenclature

Digital Message
  Messages constructed from a finite number of symbols; e.g., printed language consists of 26 letters, 10 numbers, a space and several punctuation marks. Hence a text is a digital message constructed from about 50 symbols.
  A Morse-coded telegraph message is a digital message constructed from two symbols, Mark and Space.

M-ary
  A digital message constructed with M symbols.

Digital Waveform
  Current or voltage waveform that represents a digital symbol.

Bit Rate
  Actual rate at which information is transmitted per second.

Digital Signal Nomenclature

Baud Rate
  Refers to the rate at which the signaling elements are transmitted, i.e. the number of signaling elements (symbols) per second. The relation between bit rate and baud rate is illustrated in the sketch below.

Bit Error Rate
  The probability that one of the bits is in error, or simply the probability of error.
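As an added note (not on the original slide), for M-ary signaling each signaling element carries log2(M) bits, so bit rate and baud rate are related by R = (baud rate) * log2(M). A minimal sketch with hypothetical example values:

```python
import math

def bit_rate(baud_rate: float, m: int) -> float:
    """Bit rate R in bits/s for M-ary signaling at the given baud rate."""
    return baud_rate * math.log2(m)

# Hypothetical example: 2400 symbols/s with 16-ary signaling
print(bit_rate(2400, 16))  # 9600.0 bits/s
```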


1.2 Classification Of Signals


1. Deterministic and Random Signals

A signal is deterministic if there is no uncertainty with respect to its value at any time.

Deterministic waveforms are modeled by explicit mathematical expressions, for example:

  x(t) = 5\cos(10t)

A signal is random if there is some degree of uncertainty before the signal actually occurs.

Random waveforms (random processes), when examined over a long period, may exhibit certain regularities that can be described in terms of probabilities and statistical averages.

2. Periodic and Non-periodic Signals

A signal x(t) is called periodic in time if there exists a constant T_0 > 0 such that

  x(t) = x(t + T_0)    for  -\infty < t < \infty        (1.2)

where t denotes time and T_0 is the period of x(t).


3. Analog and Discrete Signals

An analog signal x(t) is a continuous function of time; that is, x(t) is uniquely defined for all t.

A discrete signal x(kT) is one that exists only at discrete times; it is characterized by a sequence of numbers defined for each time kT, where
  k is an integer
  T is a fixed time interval.

4. Energy and Power Signals

The performance of a communication system depends on the received signal energy; higher-energy signals are detected more reliably (with fewer errors) than lower-energy signals.

x(t) is classified as an energy signal if, and only if, it has nonzero but finite energy (0 < E_x < \infty) for all time, where:

  E_x = \lim_{T \to \infty} \int_{-T/2}^{T/2} x^2(t) \, dt        (1.7)

An energy signal has finite energy but zero average power.

Signals that are both deterministic and non-periodic are classified as energy signals. A numerical sketch of this energy computation is given below.
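As an added sketch (not in the original slides), Eq. (1.7) can be approximated numerically for a rectangular pulse, a simple deterministic, non-periodic energy signal:

```python
import numpy as np

# Approximate E_x = ∫ x²(t) dt (Eq. 1.7) for a unit-amplitude rectangular
# pulse of width 2 s -- a finite-energy, zero-average-power signal.
dt = 1e-4
t = np.arange(-10, 10, dt)                  # finite window standing in for T → ∞
x = np.where(np.abs(t) <= 1.0, 1.0, 0.0)    # rectangular pulse

energy = np.sum(x**2) * dt                  # ≈ 2 joules (finite, nonzero)
avg_power = energy / (t[-1] - t[0])         # → 0 as the window grows

print(f"Energy ≈ {energy:.3f}, average power over window ≈ {avg_power:.4f}")
```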


4. Energy and Power Signals

Power is the rate at which energy is delivered.

A signal is defined as a power signal if, and only if, it has finite but nonzero power (0 < P_x < \infty) for all time, where:

  P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^2(t) \, dt        (1.8)

A power signal has finite average power but infinite energy.

As a general rule, periodic signals and random signals are classified as power signals. A numerical sketch follows below.
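As an added sketch (not in the original slides), Eq. (1.8) can be approximated for the sinusoid x(t) = 5 cos(10t) used earlier; its exact average power is 5^2/2 = 12.5:

```python
import numpy as np

# Approximate P_x = (1/T) ∫ x²(t) dt (Eq. 1.8) for x(t) = 5 cos(10 t),
# a periodic (hence power) signal whose exact average power is 12.5.
dt = 1e-4
T = 100.0                                    # long averaging window standing in for T → ∞
t = np.arange(-T / 2, T / 2, dt)
x = 5 * np.cos(10 * t)

avg_power = np.sum(x**2) * dt / T            # ≈ 12.5 (finite, nonzero)
print(f"Average power ≈ {avg_power:.3f}")
```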

5. The Unit Impulse Function

The Dirac delta function \delta(t), or impulse function, is an abstraction: an infinitely large amplitude pulse, with zero pulse width and unity weight (area under the pulse), concentrated at the point where its argument is zero.

  \int_{-\infty}^{\infty} \delta(t) \, dt = 1                        (1.9)
  \delta(t) = 0   for  t \neq 0                                      (1.10)
  \delta(t) is unbounded at t = 0                                    (1.11)

Sifting or Sampling Property

  \int_{-\infty}^{\infty} x(t) \, \delta(t - t_0) \, dt = x(t_0)     (1.12)

A numerical illustration of the sifting property follows below.
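As an added illustration (not in the original slides), the sifting property can be demonstrated numerically by replacing \delta(t) with a narrow, unit-area rectangular pulse:

```python
import numpy as np

# Approximate the sifting property ∫ x(t) δ(t - t0) dt = x(t0)  (Eq. 1.12)
# using a narrow unit-area rectangular pulse in place of δ(t).
dt = 1e-5
t = np.arange(-5, 5, dt)
x = 5 * np.cos(10 * t)                      # example signal from earlier in the lecture

t0 = 0.3
width = 1e-3
delta_approx = np.where(np.abs(t - t0) <= width / 2, 1.0 / width, 0.0)  # area = 1

sift = np.sum(x * delta_approx) * dt
print(f"∫ x(t) δ(t - t0) dt ≈ {sift:.4f},  x(t0) = {5 * np.cos(10 * t0):.4f}")
```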


1.3 Spectral Density

The spectral density of a signal characterizes the distribution of the signal's energy or power in the frequency domain.

This concept is particularly important when considering filtering in communication systems, where the signal and noise at the filter output must be evaluated.

The energy spectral density (ESD) or the power spectral density (PSD) is used in the evaluation.

1. Energy Spectral Density (ESD)

Energy spectral density describes the signal energy per unit bandwidth, measured in joules/hertz.

Represented as \psi_x(f), it is the squared magnitude spectrum:

  \psi_x(f) = |X(f)|^2                                                                    (1.14)

According to Parseval's theorem, the energy of x(t) is:

  E_x = \int_{-\infty}^{\infty} x^2(t) \, dt = \int_{-\infty}^{\infty} |X(f)|^2 \, df     (1.13)

Therefore:

  E_x = \int_{-\infty}^{\infty} \psi_x(f) \, df                                           (1.15)

The energy spectral density is symmetrical in frequency about the origin, so the total energy of the signal x(t) can also be expressed as:

  E_x = 2 \int_{0}^{\infty} \psi_x(f) \, df                                               (1.16)

A numerical check of Parseval's theorem is sketched below.
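As an added sketch (not in the original slides), Parseval's theorem (1.13) can be checked with a discrete Fourier transform; the scaling below assumes a sampled approximation of the continuous-time integrals:

```python
import numpy as np

# Numerical check of Parseval's theorem (Eq. 1.13) for a rectangular pulse.
dt = 1e-3
t = np.arange(-10, 10, dt)
x = np.where(np.abs(t) <= 1.0, 1.0, 0.0)      # energy signal

time_energy = np.sum(x**2) * dt               # ∫ x²(t) dt ≈ 2

X = np.fft.fft(x) * dt                        # sampled approximation of X(f)
df = 1.0 / (len(t) * dt)                      # frequency resolution
esd = np.abs(X)**2                            # ψ_x(f) = |X(f)|²
freq_energy = np.sum(esd) * df                # ∫ |X(f)|² df

print(f"Time-domain energy ≈ {time_energy:.4f}, frequency-domain energy ≈ {freq_energy:.4f}")
```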


2. Power Spectral Density (PSD)

The power spectral density (PSD) function G_x(f) of the periodic signal x(t) is a real, even, and nonnegative function of frequency that gives the distribution of the power of x(t) in the frequency domain.

The PSD is represented as:

  G_x(f) = \sum_{n=-\infty}^{\infty} |C_n|^2 \, \delta(f - n f_0)                                       (1.18)

whereas the average power of a periodic signal x(t) is represented as:

  P_x = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t) \, dt = \sum_{n=-\infty}^{\infty} |C_n|^2            (1.17)

Using the PSD, the average normalized power of a real-valued signal is represented as:

  P_x = \int_{-\infty}^{\infty} G_x(f) \, df = 2 \int_{0}^{\infty} G_x(f) \, df                         (1.19)

A small numerical sketch of Eq. (1.17) follows below.
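As an added sketch (not in the original slides), for x(t) = 5 cos(10t) the only nonzero Fourier series coefficients are C_{\pm 1} = 5/2, so Eq. (1.17) predicts P_x = 2(5/2)^2 = 12.5; the code below checks this against a direct time average over one period:

```python
import numpy as np

# Check Eq. (1.17) for x(t) = 5 cos(10 t): P_x = Σ |C_n|² = 2 * (5/2)² = 12.5.
T0 = 2 * np.pi / 10                          # fundamental period
dt = T0 / 10_000
t = np.arange(-T0 / 2, T0 / 2, dt)
x = 5 * np.cos(10 * t)

power_time = np.sum(x**2) * dt / T0          # (1/T0) ∫ x²(t) dt
power_coeffs = 2 * (5 / 2) ** 2              # Σ |C_n|² with C_{±1} = 5/2

print(f"Time average ≈ {power_time:.4f}, from Fourier coefficients = {power_coeffs}")
```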

1.4 Autocorrelation
1. Autocorrelation of an Energy Signal

Correlation is a matching process; autocorrelation refers to the matching of a signal with a delayed version of itself.

The autocorrelation function of a real-valued energy signal x(t) is defined as:

  R_x(\tau) = \int_{-\infty}^{\infty} x(t) \, x(t + \tau) \, dt    for  -\infty < \tau < \infty        (1.21)

The autocorrelation function R_x(\tau) provides a measure of how closely the signal matches a copy of itself as the copy is shifted \tau units in time.

R_x(\tau) is not a function of time; it is only a function of the time difference \tau between the waveform and its shifted copy. A numerical sketch of Eq. (1.21) is given below.

1. Autocorrelation of an Energy Signal

The autocorrelation function of a real-valued energy signal has the following properties:

  R_x(\tau) = R_x(-\tau)                              symmetrical in \tau about zero

  |R_x(\tau)| \le R_x(0)  for all \tau                maximum value occurs at the origin

  R_x(\tau) \leftrightarrow \psi_x(f)                 autocorrelation and ESD form a Fourier transform pair, as designated by the double-headed arrow

  R_x(0) = \int_{-\infty}^{\infty} x^2(t) \, dt       value at the origin is equal to the energy of the signal

2. Autocorrelation of a Power Signal

The autocorrelation function of a real-valued power signal x(t) is defined as:

  R_x(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x(t) \, x(t + \tau) \, dt    for  -\infty < \tau < \infty     (1.22)

When the power signal x(t) is periodic with period T_0, the autocorrelation function can be expressed as:

  R_x(\tau) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x(t) \, x(t + \tau) \, dt    for  -\infty < \tau < \infty                   (1.23)

A short numerical sketch of Eq. (1.23) follows below.
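As an added sketch (not in the original slides), applying Eq. (1.23) to x(t) = 5 cos(10t) gives R_x(\tau) = (25/2) cos(10\tau), which the code below confirms numerically:

```python
import numpy as np

# Approximate R_x(τ) = (1/T0) ∫ x(t) x(t + τ) dt  (Eq. 1.23) for x(t) = 5 cos(10 t).
# Analytically, R_x(τ) = 12.5 cos(10 τ).
T0 = 2 * np.pi / 10
dt = T0 / 10_000
t = np.arange(-T0 / 2, T0 / 2, dt)
x = 5 * np.cos(10 * t)

def autocorr_periodic(tau: float) -> float:
    return np.sum(x * 5 * np.cos(10 * (t + tau))) * dt / T0

for tau in (0.0, 0.1, np.pi / 10):
    print(f"R_x({tau:.3f}) ≈ {autocorr_periodic(tau):.3f}, "
          f"exact = {12.5 * np.cos(10 * tau):.3f}")
```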


2. Autocorrelation of a Power Signal

The autocorrelation function of a real-valued periodic signal has the following properties, similar to those of an energy signal:

  R_x(\tau) = R_x(-\tau)                                         symmetrical in \tau about zero

  |R_x(\tau)| \le R_x(0)  for all \tau                           maximum value occurs at the origin

  R_x(\tau) \leftrightarrow G_x(f)                               autocorrelation and PSD form a Fourier transform pair

  R_x(0) = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} x^2(t) \, dt      value at the origin is equal to the average power of the signal

1.5 Random Signals


1. Random Variables

All useful message signals appear random; that is, the receiver does not know, a priori, which of the possible waveforms has been sent.

Let a random variable X(A) represent the functional relationship between a random event A and a real number.

The (cumulative) distribution function F_X(x) of the random variable X is given by

  F_X(x) = P(X \le x)                          (1.24)

Another useful function relating to the random variable X is the probability density function (pdf):

  p_X(x) = \frac{dF_X(x)}{dx}                  (1.25)


1.5.1.1 Ensemble Averages

  m_X = E\{X\} = \int_{-\infty}^{\infty} x \, p_X(x) \, dx

  E\{X^2\} = \int_{-\infty}^{\infty} x^2 \, p_X(x) \, dx

  \mathrm{var}(X) = E\{(X - m_X)^2\} = \int_{-\infty}^{\infty} (x - m_X)^2 \, p_X(x) \, dx

  \mathrm{var}(X) = E\{X^2\} - (E\{X\})^2

The first moment of a probability distribution of a random variable X is called the mean value m_X, or expected value, of the random variable X.

The second moment of a probability distribution is the mean-square value of X.

Central moments are the moments of the difference between X and m_X, and the second central moment is the variance of X.

Variance is equal to the difference between the mean-square value and the square of the mean. A small numerical sketch of these ensemble averages is given below.
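As an added sketch (not in the original slides), the relations above can be checked with sample estimates for a Gaussian random variable; the mean 2 and variance 9 below are hypothetical example values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample estimates of the ensemble averages for X ~ N(mean=2, variance=9).
x = rng.normal(loc=2.0, scale=3.0, size=1_000_000)

mean = np.mean(x)                    # first moment m_X = E{X}
mean_square = np.mean(x**2)          # second moment E{X²}
variance = np.mean((x - mean)**2)    # second central moment var(X)

# Check var(X) = E{X²} - (E{X})²
print(f"m_X ≈ {mean:.3f}, E{{X²}} ≈ {mean_square:.3f}, var(X) ≈ {variance:.3f}")
print(f"E{{X²}} - m_X² ≈ {mean_square - mean**2:.3f}")
```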

2. Random Processes

A random process X(A, t) can be viewed as a function of two variables: an event A and time.


1.5.2.1 Statistical Averages of a Random Process

A random process whose distribution functions are continuous can be described statistically with a probability density function (pdf).

A partial description consisting of the mean and autocorrelation function is often adequate for the needs of communication systems.

Mean of the random process X(t):

  E\{X(t_k)\} = \int_{-\infty}^{\infty} x \, p_{X_k}(x) \, dx = m_X(t_k)        (1.30)

Autocorrelation function of the random process X(t):

  R_X(t_1, t_2) = E\{X(t_1) X(t_2)\}                                            (1.31)

1.5.2.2 Stationarity

A random process X(t) is said to be stationary in the strict sense if none of its statistics are affected by a shift in the time origin.

A random process is said to be wide-sense stationary (WSS) if two of its statistics, its mean and its autocorrelation function, do not vary with a shift in the time origin:

  E\{X(t)\} = m_X = \text{a constant}          (1.32)
  R_X(t_1, t_2) = R_X(t_1 - t_2)               (1.33)


1.5.2.3 Autocorrelation of a Wide-Sense Stationary Random Process

For a wide-sense stationary process, the autocorrelation function is only a function of the time difference \tau = t_1 - t_2:

  R_X(\tau) = E\{X(t) X(t + \tau)\}    for  -\infty < \tau < \infty        (1.34)

Properties of the autocorrelation function of a real-valued wide-sense stationary process are:

  1. R_X(\tau) = R_X(-\tau)                      symmetrical in \tau about zero
  2. |R_X(\tau)| \le R_X(0)  for all \tau        maximum value occurs at the origin
  3. R_X(\tau) \leftrightarrow G_X(f)            autocorrelation and power spectral density form a Fourier transform pair
  4. R_X(0) = E\{X^2(t)\}                        value at the origin is equal to the average power of the signal

1.5.3. Time Averaging and Ergodicity

When a random process belongs to a special class, known as an ergodic process, its time averages equal its ensemble averages.

The statistical properties of such processes can be determined by time averaging over a single sample function of the process.

A random process is ergodic in the mean if

  m_X = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} X(t) \, dt                         (1.35)

It is ergodic in the autocorrelation function if

  R_X(\tau) = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} X(t) \, X(t + \tau) \, dt    (1.36)

A small simulation illustrating ergodicity in the mean is sketched below.
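As an added sketch (not in the original slides), for an ergodic process such as a zero-mean white Gaussian process the time average (1.35) of a single long realization matches the ensemble average taken across many realizations:

```python
import numpy as np

rng = np.random.default_rng(2)

# Compare the time average of one long sample function (Eq. 1.35) with the
# ensemble average across many realizations at a single time instant,
# for a zero-mean white Gaussian process (an ergodic example).
n_samples = 100_000
x_single = rng.normal(loc=0.0, scale=1.0, size=n_samples)        # one realization over time
ensemble = rng.normal(loc=0.0, scale=1.0, size=10_000)           # many realizations at one t

time_average = np.mean(x_single)        # ≈ m_X
ensemble_average = np.mean(ensemble)    # ≈ m_X

print(f"Time average ≈ {time_average:.4f}, ensemble average ≈ {ensemble_average:.4f}")
```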


1.5.4. Power Spectral Density and Autocorrelation

A random process X(t) can generally be classified as a power signal having a power spectral density (PSD) G_X(f).

Principal features of PSD functions:

  1. G_X(f) \ge 0                                            and is always real valued
  2. G_X(f) = G_X(-f)                                        for X(t) real-valued
  3. G_X(f) \leftrightarrow R_X(\tau)                        PSD and autocorrelation form a Fourier transform pair
  4. P_X = \int_{-\infty}^{\infty} G_X(f) \, df = R_X(0)     relationship between average normalized power and PSD

A numerical check of property 4 follows below.
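As an added sketch (not in the original slides), property 4 can be checked by estimating G_X(f) with a periodogram for one realization of a white Gaussian process and comparing its integral with the time-averaged power R_X(0); the sampling rate and process variance are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Check property 4: P_X = R_X(0) = ∫ G_X(f) df, using a periodogram
# estimate of the PSD for one realization of a white Gaussian process.
n = 65_536
fs = 1.0                                     # assumed sampling rate (1 sample/s)
x = rng.normal(scale=2.0, size=n)            # zero-mean process, power E{X²} = 4

power_time = np.mean(x**2)                   # R_X(0) estimated by time averaging

X = np.fft.fft(x)
psd = (np.abs(X)**2) / (n * fs)              # periodogram estimate of G_X(f)
df = fs / n                                  # frequency resolution
power_freq = np.sum(psd) * df                # ∫ G_X(f) df

print(f"R_X(0) ≈ {power_time:.4f}, ∫ G_X(f) df ≈ {power_freq:.4f}")
```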

Assignment # 1: Due date: Sept. 24, 2014

Problems: 1.1, 1.2, 1.4, 1.6, 1.7, 1.8 and 1.9
