
Chapter 2:

Signal Representation
Aveek Dutta
Assistant Professor
Department of Electrical and Computer Engineering
University at Albany
Spring 2018

Images and equations adapted from:


Digital Communications - John G. Proakis and Masoud Salehi, 5/e. Copyright The McGraw-Hill Companies
Fundamentals of Communication Systems - Michael P. Fitz. Copyright The McGraw-Hill Companies
Recap of Lecture 1
● Signals are time domain functions
○ Converted to binary streams (Sampling and Quantization)
○ Add redundancy (Coding Theory)
○ Map bits to waveforms (Modulation - Analog or Digital). Sometimes called upconversion
■ A higher-frequency waveform (the carrier) carries the lower-frequency information signal
○ Pulse shaping, filtering, etc. - Mostly to conform to mandated spectrum
○ Use amplifiers and antenna to convert electrical signals to EM waves
○ Signals occupy bandwidth (Hz) - Thanks to Joseph Fourier.
● Waveforms are distorted by channel noise, which is not deterministic
○ Use statistical models to approximate noise
○ The ratio of Signal Power to Noise Power is called SNR (Signal to Noise Ratio). Unit is dB
○ Goal is to design systems that maximize SNR
● Receivers perform the inverse operations of the transmitter
○ Antenna converts EM to electrical signals
○ Condition incoming signal (Equalization)
○ Demod, Decode, Interpolate to reproduce the transmitted signal
○ Key metric is BER (Bit Error Rate)
● The higher the SNR, the lower the BER, while maximizing Bits/sec/Hz (Spectral Efficiency)
Background
[Figure: three example channels - Channel Type 1, Channel Type 2, Channel Type 3]

Signals can be:


● Deterministic or Random
● Periodic or Aperiodic
● Complex or Real
● Continuous or Discrete Time

Common Signals
Fourier Transform

Image from: http://complextoreal.com/wp-content/uploads/2012/12/Chap1.pdf

Read These - even if you think you know FT


1. https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/
2. http://complextoreal.com/wp-content/uploads/2012/12/Chap1.pdf

Fourier Transform Pair
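For reference, this is the standard Fourier transform pair in the convention used throughout these slides (frequency f in Hz):

```latex
% Fourier transform pair (frequency in Hz)
X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt
\qquad \Longleftrightarrow \qquad
x(t) = \int_{-\infty}^{\infty} X(f)\, e^{\,j 2\pi f t}\, df
```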
2.1 - Bandpass and Lowpass Signals
● A bandpass signal, xc(t), is a signal whose
one-sided energy spectrum is both:
○ centered at a nonzero frequency fc
○ not extending down to zero frequency (DC)

● A real-valued bandpass signal, x(t), has Hermitian symmetry X(-f) = X*(f), from
which we conclude that |X(-f)| = |X(f)| and ∠X(-f) = -∠X(f).
○ In other words, for real x(t), the magnitude of X(f) is even and the phase is odd

● A lowpass or baseband signal has its spectrum located around the zero
frequency (DC)
● Also define the +ve and -ve parts of the spectrum:

  X+(f) = X(f) u-1(f) and X-(f) = X(f) u-1(-f), where u-1(f) is the unit step

  Therefore, X(f) = X+(f) + X-(f)

● For a real signal x(t), since X(f) is Hermitian, we have X-(f) = X+*(-f).
Low pass equivalent of a bandpass signal
● Say there exists a signal x+(t), corresponding to the signal x(t) but with just the
+ve spectrum X+(f):

  X+(f) = X(f) u-1(f)   (only consider the +ve frequencies)

  x+(t) = ½ x(t) + (j/2) x̂(t),   where x̂(t) is the Hilbert transform of x(t)

● xl(t) is the lowpass equivalent of the bandpass signal, whose spectrum is
defined by Xl(f) = 2X+(f + f0)
○ By the Modulation Theorem of the FT (x(t) e^(j2πf0t) ⇌ X(f - f0)), this corresponds to

  xl(t) = 2 x+(t) e^(-j2πf0t) = [x(t) + j x̂(t)] e^(-j2πf0t)

Definition: The real and imaginary parts of xl(t) are called the in-phase
component and the quadrature component of x(t), respectively, and are
denoted by xi(t) and xq(t), i.e., xl(t) = xi(t) + j xq(t).

Solving for x(t) and x̂(t):

  x(t) = xi(t) cos(2πf0t) - xq(t) sin(2πf0t)
  x̂(t) = xi(t) sin(2πf0t) + xq(t) cos(2πf0t)

Equivalently, x(t) = Re[xl(t) e^(j2πf0t)]
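A minimal NumPy/SciPy sketch of these relations (the slides reference MATLAB; Python is used here), assuming an illustrative carrier f0 and message tone. scipy.signal.hilbert returns the analytic signal x(t) + j·x̂(t) = 2x+(t), from which the lowpass equivalent and the I-Q components follow:

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000, 1_000                 # sample rate and carrier (Hz) - illustrative values
t = np.arange(0, 0.05, 1/fs)
x = np.cos(2*np.pi*200*t) * np.cos(2*np.pi*f0*t)   # a simple bandpass signal

analytic = hilbert(x)                    # x(t) + j*x_hat(t) = 2*x_plus(t)
xl = analytic * np.exp(-2j*np.pi*f0*t)   # lowpass equivalent x_l(t)
xi, xq = xl.real, xl.imag                # in-phase and quadrature components

# Reconstruct the bandpass signal: x(t) = Re[x_l(t) e^(j 2 pi f0 t)]
x_rec = np.real(xl * np.exp(2j*np.pi*f0*t))
print(np.allclose(x, x_rec))             # True
```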


Mod - Demod

[Figure: modulator and demodulator block diagrams built from the I-Q equations above, and an equivalent complex-baseband structure]
Visualization of Baseband Signal
Example

Energy of signals
● The energy of the signal x(t) is given by

  Ex = ∫ |x(t)|² dt = ∫ |X(f)|² df   (Parseval)

● Energy of the one-sided spectrum:

  Ex+ = ∫ |X+(f)|² df = Ex / 2

● Energy of the lowpass equivalent signal:

  Exl = ∫ |Xl(f)|² df = 4 ∫ |X+(f)|² df = 2Ex,   i.e., Ex = ½ Exl
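A quick numerical check of Ex = ½Exl under the same illustrative setup as before (approximate, since the Hilbert transform of a finite record has edge effects):

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000, 1_000
t = np.arange(0, 0.1, 1/fs)
x = np.cos(2*np.pi*150*t) * np.cos(2*np.pi*f0*t)   # bandpass test signal
xl = hilbert(x) * np.exp(-2j*np.pi*f0*t)           # lowpass equivalent

Ex  = np.sum(np.abs(x)**2)  / fs                   # energy of x(t)
Exl = np.sum(np.abs(xl)**2) / fs                   # energy of x_l(t)
print(Ex, Exl / 2)                                 # approximately equal
```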
Cross-correlation
● The inner product of two vectors (signals):

  ⟨x, y⟩ = ∫ x(t) y*(t) dt

● Hence, in terms of the lowpass equivalents, we can also write

  ⟨x, y⟩ = ½ Re[⟨xl, yl⟩]

● Let's prove that. (Hint: write x(t) = Re[xl(t)e^(j2πf0t)] and note that the double-frequency terms integrate to zero.)
● The complex quantity ρxl,yl is called the cross-correlation coefficient, given by

  ρxl,yl = ⟨xl, yl⟩ / √(Exl Eyl),   and for real signals ρx,y = ⟨x, y⟩ / √(Ex Ey) = Re[ρxl,yl]

● Two signals are orthogonal if their inner product (and hence ρ) is zero. If
ρxl,yl = 0 then ρx,y = 0.
○ Orthogonality in baseband implies orthogonality in passband, but not vice versa.
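A sketch verifying ⟨x, y⟩ = ½Re[⟨xl, yl⟩] numerically, with two illustrative bandpass signals sharing the same carrier:

```python
import numpy as np
from scipy.signal import hilbert

fs, f0 = 10_000, 1_000
t = np.arange(0, 0.1, 1/fs)
x = np.cos(2*np.pi*100*t) * np.cos(2*np.pi*f0*t)
y = np.sin(2*np.pi*100*t) * np.cos(2*np.pi*f0*t)

xl = hilbert(x) * np.exp(-2j*np.pi*f0*t)           # lowpass equivalents
yl = hilbert(y) * np.exp(-2j*np.pi*f0*t)

ip_pass = np.sum(x * y) / fs                       # <x, y>   (real passband)
ip_base = np.sum(xl * np.conj(yl)) / fs            # <x_l, y_l> (complex baseband)
print(ip_pass, 0.5 * np.real(ip_base))             # approximately equal
```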
2.2 Signal Space
● Inner Product:

  ⟨v1, v2⟩ = Σi v1i v2i*

● Unit Vector: ⟨v, ei⟩ is the projection of the vector v onto the unit vector ei

● The L2-norm is simply the length of a vector:

  ‖v‖ = √⟨v, v⟩ = (Σi |vi|²)^½

● Two vectors are orthonormal iff they are orthogonal and each vector has
a unit norm
● The same vector concepts apply to signals as well, with the summation replaced
by an integral: ⟨x, y⟩ = ∫ x(t) y*(t) dt
Orthonormal Bases
● Let B = {v1, v2, v3, ..., vk} be a set of linearly independent unit vectors
that are orthogonal to each other
○ B is called an orthonormal set (orthogonal and normalized)

● Then any arbitrary vector X in the subspace V spanned by B can be
represented as a linear combination of the orthonormal basis vectors,
each multiplied by a constant: X = c1·v1 + c2·v2 + ... + ck·vk
○ Think about the R3 Euclidean coordinate system we know
○ Therefore, vk · X = ck (how?)
● Try it yourself (a NumPy check appears after this list):
○ Given two vectors v1 = {⅓ , ⅔ , ⅔}T and v2 = {⅔ , ⅓ , -⅔}T, check whether these form an orthonormal set
○ Given two vectors v1 = {⅗ , ⅘}T and v2 = {-⅘ , ⅗}T, what is the coefficient vector c for the vector X = {9 , -2}T?

● Orthonormal bases are good for creating coordinate systems for signals
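A NumPy check for the two exercises above (the arithmetic is elementary, so the printed values can be verified by hand):

```python
import numpy as np

# Exercise 1: do v1, v2 form an orthonormal set?
v1 = np.array([1/3, 2/3, 2/3])
v2 = np.array([2/3, 1/3, -2/3])
print(np.dot(v1, v2))                              # 0.0 -> orthogonal
print(np.linalg.norm(v1), np.linalg.norm(v2))      # 1.0, 1.0 -> unit norm

# Exercise 2: coefficients c_k = <v_k, X> for the orthonormal basis {v1, v2}
v1 = np.array([3/5, 4/5])
v2 = np.array([-4/5, 3/5])
X  = np.array([9.0, -2.0])
c  = np.array([np.dot(v1, X), np.dot(v2, X)])
print(c)                                           # [ 3.8 -8.4]
```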
Gram-Schmidt Orthogonalization Method
● Construct a set of orthonormal vectors from a set of n-dimensional vectors vi
● The first orthonormal vector is simply u1 = v1 / ‖v1‖
○ In other words, u1 is the orthonormal basis for the subspace with span(v1)
● Find the unit vector orthogonal to u1 such that

  u2 = (v2 - proj_u1(v2)) / ‖v2 - proj_u1(v2)‖   (subtract the projection of v2 on v1, then normalize)

○ In other words, u1, u2 is the orthonormal basis for the subspace with span(v1, v2) = span(u1, u2)
● Similarly, for a 3-dimensional span,

  u3 = (v3 - proj_u1(v3) - proj_u2(v3)) / ‖v3 - proj_u1(v3) - proj_u2(v3)‖   (subtract the projection of v3 on the plane span(v1, v2), then normalize)

● Since ⟨v1, v2⟩ = |v1||v2| cos θ and v1 = |v1| u1,

  proj_v1(v2) = |v2| cos θ · u1 = (⟨v1, v2⟩ / |v1|) u1 = ⟨u1, v2⟩ u1
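A minimal sketch of the procedure for vectors (assumes the inputs are linearly independent):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormal basis for span(vectors): subtract projections, then normalize."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(u, v) * u for u in basis)   # remove components along u1..u(k-1)
        basis.append(w / np.linalg.norm(w))
    return basis

u1, u2 = gram_schmidt([np.array([1.0, 1.0]), np.array([2.0, 0.0])])
print(np.dot(u1, u2))        # ~0  -> orthogonal
print(np.linalg.norm(u1))    # 1.0 -> normalized
```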
GSOM for signals
● Construct a set of orthonormal waveforms from a given set of finite-energy
signals {sm(t)} [no. of bases N ≤ M (waveforms)]
● The first basis is the same as for vectors, except normalized using the energy:

  ɸ1(t) = s1(t) / √E1,   where E1 = ∫ s1²(t) dt

● The second basis is obtained by projecting s2(t) onto ɸ1(t) and subtracting
(projection: c21 = ∫ s2(t) ɸ1(t) dt), then normalizing:

  γ2(t) = s2(t) - c21 ɸ1(t),   ɸ2(t) = γ2(t) / √Eγ2

● The kth orthonormal basis is given by ɸk(t) = γk(t) / √Eγk, where

  γk(t) = sk(t) - Σ(i=1..k-1) cki ɸi(t),   cki = ∫ sk(t) ɸi(t) dt
Example 2.2-3

[Figure: M-dimensional signal waveforms {sm(t)} decomposed onto N orthonormal basis waveforms]
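A discrete-time sketch in the spirit of this example, using two illustrative rectangular pulses (integrals become sums divided by fs):

```python
import numpy as np

fs = 1_000
t  = np.arange(0, 1, 1/fs)
s1 = np.where(t < 0.5, 1.0, 0.0)          # pulse on [0, 0.5)
s2 = np.ones_like(t)                      # pulse on [0, 1)

phi1 = s1 / np.sqrt(np.sum(s1**2) / fs)   # first basis: normalize by energy
c21  = np.sum(s2 * phi1) / fs             # projection of s2 onto phi1
g2   = s2 - c21 * phi1                    # remove the phi1 component
phi2 = g2 / np.sqrt(np.sum(g2**2) / fs)   # second basis

print(np.sum(phi1 * phi2) / fs)           # ~0 -> orthogonal
print(np.sum(phi1**2) / fs)               # ~1 -> unit energy
```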


Signal Constellation
● Once we have constructed the set of orthonormal waveforms {ɸn(t)}, we can
express the M signals {sm(t)} as linear combinations of the {ɸn(t)}:

  sm(t) = Σ(n=1..N) smn ɸn(t),   where smn = ⟨sm, ɸn⟩ = ∫ sm(t) ɸn*(t) dt

● Therefore, each signal can be represented by a vector sm = (sm1, sm2, ..., smN),
and the collection of M vectors in an N-dimensional signal space is called a constellation

[Figure: vector-to-signal (synthesis) and signal-to-vector (analysis) block diagrams]

Example
2.3 - Random Variables
● Bernoulli: P(X = 1) = p, P(X = 0) = 1 - p
● Binomial - the sum of n independent Bernoulli trials with parameter p:

  P(X = k) = C(n, k) p^k (1 - p)^(n-k),   k = 0, 1, ..., n

● Uniform on [a, b]: f(x) = 1/(b - a) for a ≤ x ≤ b, and 0 otherwise

● Gaussian with mean m and variance σ²:

  f(x) = (1/√(2πσ²)) e^(-(x - m)²/2σ²)

○ A Gaussian random variable with m = 0 and σ = 1 is called a standard normal. A function
closely related to the Gaussian random variable is the Q-function:

  Q(x) = ∫(x..∞) (1/√(2π)) e^(-t²/2) dt

○ Properties of the Q-function: Q(0) = ½, Q(-x) = 1 - Q(x), Q(∞) = 0

● Complementary Error Function: Q(x) = ½ erfc(x/√2)
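A small sketch of the Q-function via the erfc relation on this slide:

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / np.sqrt(2))

print(qfunc(0.0))                 # 0.5
print(qfunc(-1.0) + qfunc(1.0))   # 1.0, since Q(-x) = 1 - Q(x)
print(qfunc(3.0))                 # ~1.35e-3
```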


2.7 Random Processes
● Let (Ω, F, P) be a probability space. A real random process, N(ω, t), with ω ∈
Ω, is a single-valued function or mapping from Ω to real-valued functions of
an index-set variable t.
○ The function N(ω1, t) for a fixed ω1 is called a sample path of the random process

● If N(t) is a Gaussian random process, then one sample of this process, N(ts),
is completely characterized by the PDF

  f(n) = (1/√(2πσ²)) e^(-(n - m)²/2σ²)

where m = E[N(ts)] and σ² = E[(N(ts) - m)²]
Zero Mean Gaussian Process
● The most common source of noise in communication systems is thermal
noise, which has a zero mean
● If N(t) is a Gaussian random process, then two samples of this process,
N(t1) and N(t2), are completely characterized by the joint (zero-mean, jointly
Gaussian) PDF, specified by:

  sample variance (zero mean): σ²N(ti) = E[N²(ti)]

  correlation coefficient: ρ = E[N(t1) N(t2)] / (σN(t1) σN(t2))

● The (auto)correlation function is defined by

  RN(t1, t2) = E[N(t1) N(t2)]

● The variance and correlation coefficient can be expressed in terms of the
autocorrelation function: σ²N(ti) = RN(ti, ti) and ρ = RN(t1, t2) / √(RN(t1, t1) RN(t2, t2))
Stationarity
● If the statistical description of a random process does not change over
time, it is said to be a stationary random process
○ Equivalently, the density function describing M samples of the random process is independent of
time shifts in the sample times, i.e.,

  f(N(t1), ..., N(tM)) = f(N(t1 + t0), ..., N(tM + t0))

for any value of M and t0

● V. IMP - the statistical description of any two samples
taken from a stationary random process is a function
only of the time difference between the two sample times

● This implies E[N(t)] is constant (zero for the zero-mean processes considered here) and σ²N(ti) = σ²N is a constant

● Also, RN(0, t1) = RN(t0, t1 + t0) for any t0

○ This implies that the correlation function is
essentially a function of only one variable:

  RN(t1, t2) = RN(τ) = E[N(t1) N(t1 - τ)],   where τ = t1 - t2
Frequency domain
● In the frequency domain, consider the FT of the process observed over a
measurement interval Tm (follows from the definition of the FT):

  XTm(f) = ∫(-Tm/2 .. Tm/2) N(t) e^(-j2πft) dt

● Since the FT is a complex random function and the energy grows unbounded
as the measurement interval Tm gets large, we measure the power spectral density instead:

  SN(f) ≈ |XTm(f)|² / Tm

● The average PSD of the random process is given by

  SN(f) = lim(Tm→∞) E[|XTm(f)|²] / Tm
Frequency Domain
● The PSD of a stationary random process is given by the Wiener-Khinchin
theorem:

  SN(f) = F{RN(τ)} = ∫ RN(τ) e^(-j2πfτ) dτ

● Proof: substitute the definition of XTm(f) into E[|XTm(f)|²]/Tm and use stationarity

● Total Average Power:

  PN = RN(0) = ∫ SN(f) df

● Two useful results: SN(f) is real, nonnegative, and even for real N(t); and |RN(τ)| ≤ RN(0)
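A numerical illustration of the theorem under assumed parameters: white noise through a one-pole (AR(1)) filter gives a stationary process whose averaged periodogram roughly matches the FT of its estimated autocorrelation (both sides are finite-data estimates, so agreement is approximate):

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a, L = 0.9, 65_024                      # AR(1) coefficient; L divisible by 127
x = lfilter([1.0], [1.0, -a], rng.normal(size=L))   # stationary AR(1) process

# Left side: FT of the (estimated, truncated) autocorrelation R_N(tau)
lags = 64
R = np.array([np.mean(x[:L-m] * x[m:]) for m in range(lags)])
R_sym = np.concatenate([R[::-1], R[1:]])            # R(-tau) = R(tau)
S_wk = np.real(np.fft.fft(np.fft.ifftshift(R_sym))) # center tau = 0 first

# Right side: averaged periodogram  E|X_Tm(f)|^2 / Tm
segs = x.reshape(-1, 2*lags - 1)
S_pg = np.mean(np.abs(np.fft.fft(segs))**2, axis=0) / (2*lags - 1)
print(S_wk[:3])
print(S_pg[:3])                                     # roughly equal
```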


AWGN
● The PSD of AWGN (additive white Gaussian noise) is

  SW(f) = N0/2 for all f,   where N0 = kT (k: Boltzmann's constant, T: temperature)

● The autocorrelation function of AWGN is

  RW(τ) = (N0/2) δ(τ)

○ i.e., any two distinct samples of AWGN are uncorrelated (and, being Gaussian, independent)

● More on simulating AWGN and its properties using MATLAB (a Python sketch follows below)
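The slide points to MATLAB; here is a hedged Python equivalent with an illustrative N0: sampling white noise of one-sided PSD N0 at rate fs gives i.i.d. Gaussians of variance (N0/2)·fs, and the averaged periodogram sits flat at N0/2:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, N0 = 1e6, 1e-6          # sample rate (Hz) and noise PSD (W/Hz) - illustrative

# Discrete-time AWGN: i.i.d. Gaussian samples with variance (N0/2)*fs
n = rng.normal(0.0, np.sqrt(N0/2 * fs), size=100_000)
print(np.mean(n), np.var(n))                     # ~0 and ~N0*fs/2 = 0.5

# Averaged periodogram: flat at N0/2 = 5e-7 across the whole band
seg = n.reshape(100, 1000)
psd = np.mean(np.abs(np.fft.fft(seg, axis=1))**2, axis=0) / (1000 * fs)
print(psd.mean(), psd.std())                     # ~5e-7, small spread
```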


Linear systems and Random Process
● Assume the filter input is W(t), the filter impulse response is hR(t), and
the filter output is N(t):

  N(t) = ∫ hR(τ) W(t - τ) dτ

● The random process that results from linear time-invariant filtering of a
stationary Gaussian random process is also a stationary Gaussian process, with

  SN(f) = |HR(f)|² SW(f)

● The average power of the noise at the output of the filter:

  σ²N = RN(0) = ∫ |HR(f)|² SW(f) df = (N0/2) ∫ |HR(f)|² df   (for an AWGN input)
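A sketch checking the output-power formula for an assumed FIR filter hR (normalized fs = 1): the variance of the filtered noise should match (N0/2)∫|HR(f)|² df:

```python
import numpy as np
from scipy.signal import lfilter, freqz

rng = np.random.default_rng(1)
fs, N0 = 1.0, 2.0                                  # normalized rate; N0/2 = 1
w = rng.normal(0, np.sqrt(N0/2 * fs), 1_000_000)   # white Gaussian input W(t)

b = np.ones(8) / 8                                 # simple FIR averaging filter h_R
n = lfilter(b, 1, w)                               # filtered noise N(t)

f, H = freqz(b, 1, worN=4096, whole=True, fs=fs)
pred = N0/2 * np.mean(np.abs(H)**2)                # (N0/2) * integral |H_R(f)|^2 df
print(n.var(), pred)                               # both ~0.125
```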
2.9 BP and LP Random Processes
● For a bandpass process, the PSD is located around the frequencies ±f0, and for a
lowpass process the PSD is located around zero frequency
○ The bandpass processes considered here are real, zero-mean, stationary processes; hence the I-Q components are also zero-mean, jointly stationary processes

● Also define a lowpass equivalent process Xl(t) = Xi(t) + jXq(t) (Xi and Xq are both zero mean):

  X(t) = Xi(t) cos(2πf0t) - Xq(t) sin(2πf0t)

● The PSD of Xi and Xq is given by

  SXi(f) = SXq(f) = SX(f - f0) + SX(f + f0) for |f| ≤ f0 (and zero otherwise)

See Example 2.9-1

● If the lowpass-equivalent spectrum is symmetric around
f = 0, then SX(f + f0) and SX(f - f0) are the same, i.e., the
I and Q components of the bandpass random process are
uncorrelated (and, being jointly Gaussian, independent
processes)
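A closing sketch under assumed parameters: bandpass Gaussian noise with a PSD symmetric about f0 (white noise through a symmetric FIR bandpass) yields I and Q components whose sample cross-correlation is near zero at several lags:

```python
import numpy as np
from scipy.signal import firwin, lfilter, hilbert

rng = np.random.default_rng(3)
fs, f0 = 10_000, 2_500
t = np.arange(0, 10, 1/fs)

# Bandpass Gaussian process: white noise through an FIR bandpass centered at f0
b = firwin(201, [f0 - 500, f0 + 500], pass_zero=False, fs=fs)
x = lfilter(b, 1, rng.normal(size=t.size))

# Lowpass equivalent and its I/Q components
xl = hilbert(x) * np.exp(-2j*np.pi*f0*t)
xi, xq = xl.real, xl.imag

for m in (0, 5, 10):                                   # a few lags
    print(np.corrcoef(xi[:len(xi)-m], xq[m:])[0, 1])   # all ~0 -> uncorrelated
```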
