
INTRODUCTION TO DIGITAL SIGNAL PROCESSING

And
Review of Fourier Series, Fourier Transform


WHAT IS DSP?

DSP, or Digital Signal Processing, as the term suggests, is the
processing of signals by digital means. A signal in this context can
mean a number of different things. Historically the origins of signal
processing are in electrical engineering, and a signal here means
an electrical signal carried by a wire or telephone line, or perhaps
by a radio wave. More generally, however, a signal is a stream of
information representing anything from stock prices to data from a
remote-sensing satellite.

ANALOG AND DIGITAL SIGNALS

In many cases, the signal is initially in the form of an analog
electrical voltage or current, produced for example by a
microphone or some other type of transducer. In some situations
the data is already in digital form - such as the output from the
readout system of a CD (compact disc) player. An analog signal
must be converted into digital (i.e. numerical) form before DSP
techniques can be applied. An analog electrical voltage signal, for
example, can be digitized using an integrated electronic circuit
(IC) device called an analog-to-digital converter or ADC. This
generates a digital output in the form of a binary number whose
value represents the electrical voltage input to the device.
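As a rough illustration of the idea, the short Python sketch below (not from the original notes; the sample rate, bit depth and input waveform are arbitrary choices) samples a simulated analog voltage and rounds each sample to the nearest level of a 3-bit converter, producing the integer codes an ADC would output.

    import numpy as np

    fs = 1000          # assumed sampling rate, Hz
    bits = 3           # assumed ADC resolution: 2**3 = 8 levels
    full_scale = 1.0   # assumed input range: -1 V .. +1 V

    t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of "analog" time axis
    analog = 0.8 * np.sin(2 * np.pi * 300 * t)     # example input voltage

    levels = 2 ** bits
    step = 2 * full_scale / levels                 # quantization step size
    codes = np.clip(np.round(analog / step), -levels // 2, levels // 2 - 1).astype(int)

    print(codes[:10])          # integer (binary) codes the converter outputs
    print(codes[:10] * step)   # the voltages those codes represent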






SIGNAL PROCESSING

Signals commonly need to be processed in a variety of ways. For
example, the output signal from a transducer may well be
contaminated with unwanted electrical "noise". The electrodes
attached to a patient's chest when an ECG is taken measure tiny
electrical voltage changes due to the activity of the heart and
other muscles. The signal is often strongly affected by "mains
pickup" due to electrical interference from the mains supply.
Processing the signal using a filter circuit can remove or at least
reduce the unwanted part of the signal. Increasingly nowadays
the filtering of signals to improve signal quality or to extract
important information is done by DSP techniques rather than by
analog electronics.
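As an illustrative sketch of this kind of digital filtering (not part of the original notes; the 50 Hz mains frequency, sample rate, notch width and the crude ECG-like test signal are all assumptions), the Python fragment below uses SciPy's notch-filter design to suppress mains pickup:

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 500.0                      # assumed sampling rate, Hz
    t = np.arange(0, 2.0, 1 / fs)

    # Crude stand-in for an ECG trace: a slow "heartbeat" plus 50 Hz mains pickup.
    heartbeat = np.sin(2 * np.pi * 1.2 * t)
    mains = 0.5 * np.sin(2 * np.pi * 50.0 * t)
    measured = heartbeat + mains

    # Design a narrow notch at 50 Hz and apply it forwards and backwards
    # (filtfilt) so the filtering itself adds no phase distortion.
    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
    cleaned = filtfilt(b, a, measured)

    print("interference power before:", np.var(measured - heartbeat).round(3))
    print("interference power after: ", np.var(cleaned - heartbeat).round(3))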

DEVELOPMENT OF DSP

The development of digital signal processing dates from the
1960s with the use of mainframe digital computers for number-
crunching applications such as the Fast Fourier Transform (FFT),
which allows the frequency spectrum of a signal to be computed
rapidly. These techniques were not widely used at that time,
because suitable computing equipment was available only in
universities and other scientific research institutions.

DIGITAL SIGNAL PROCESSORS

The introduction of microprocessors in the 1970s and 1980s
resulted in a wide range of applications of DSP.
With the increasing use of DSP techniques in industry and
elsewhere, special-purpose microprocessors with architectures
designed specifically for applications in the field of DSP were
introduced. Such microprocessors are termed Digital Signal
Processors (DSPs). DSPs are programmable devices capable of
carrying out millions of instructions per second, making them
suitable for real-time applications.

APPLICATION OF DSP

DSP technology nowadays finds application in almost all spheres
of life. Some of the more important areas are:
Telephony (particularly mobile phones)
IC Technology
Medicine
Multimedia (Image and Speech Processing)
Data Compression
Entertainment Electronics






















WHAT ARE SIGNALS AND SYSTEMS?


Signals and systems are the two most fundamental concepts in
digital signal processing, as well as in many other disciplines.
Signals are patterns of variation of physical quantities such as
temperature, pressure, voltage, brightness, etc.

Systems operate on signals to produce new signals. Microphones
and loudspeakers, for example, are systems: a microphone
converts air pressure into electrical current and a speaker
converts electrical current into air pressure.

Before going into the details of signals and systems we will review
some mathematical tools that handle signals.
A signal can be viewed from two different standpoints:
1. The time domain.
2. The frequency domain.
The one which we are most used to is the time domain. This is
like the trace on an oscilloscope, where the vertical deflection is
the signal's amplitude and the horizontal deflection is the time
variable.
The second representation is the frequency domain. This is like
the trace on a spectrum analyzer, where the horizontal deflection
is the frequency variable and the vertical deflection is the signal's
amplitude at that frequency.
Any given signal can be fully described in either of these domains.
We can go between the two by using a tool called the Fourier
Transform.
Why the frequency domain?
Now that we know what the frequency domain is, we have to
wonder why we are interested in all this extra work.
Depending on what we want to do with the signal, one domain
tends to be simpler than the other, so rather than getting tied up in
mathematics with a time domain signal we might take it across to
the frequency domain where the mathematics is a lot simpler.
FOURIER SERIES
To solve many engineering problems, we need to know the
response of a Linear Time Invariant system to some input signal.
If the input signal can be broken up into simple signals and we
know how the system responds to these simple signals, then we
can predict how the system will behave to our input.
A Linear Time Invariant (LTI) system is one that:
1. Is unaffected by time. That is, if you perform an experiment
on Monday to find the system's response to a sine wave, you
will get the same result if you do the experiment again on
Wednesday.
2. Is linear. Given two input signals ax and cx that produce the
two output signals by and dy respectively, the system is linear
if, and only if, the input signal ax + cx produces the output
signal by + dy.

More formally:

Time invariance: if the input x(t) produces the output y(t), then
the delayed input x(t − t₀) produces the delayed output y(t − t₀).

In a linear system: if the input ax(t) produces the output by(t)
and the input cx(t) produces the output dy(t), then the input
ax(t) + cx(t) produces the output by(t) + dy(t).

Thus a linear system satisfies both the rules of superposition
(additivity) and homogeneity (scaling).
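A minimal numerical check of these two properties, assuming a simple three-point moving-average system (my choice of example, not one from the notes):

    import numpy as np

    def system(x):
        """A 3-point moving average: a simple linear, time-invariant system."""
        return np.convolve(x, np.ones(3) / 3.0, mode="full")

    rng = np.random.default_rng(0)
    x1, x2 = rng.standard_normal(20), rng.standard_normal(20)
    a, c = 2.0, -0.7

    # Linearity: the response to a*x1 + c*x2 equals a*y1 + c*y2.
    lhs = system(a * x1 + c * x2)
    rhs = a * system(x1) + c * system(x2)
    print("linear:", np.allclose(lhs, rhs))

    # Time invariance: delaying the input by 5 samples just delays the output by 5.
    y_delayed = system(np.concatenate([np.zeros(5), x1]))
    y_then_delay = np.concatenate([np.zeros(5), system(x1)])
    print("time-invariant:", np.allclose(y_delayed, y_then_delay))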

Therefore anything that can break a signal down into its
constituent parts would be very useful. One such tool is the
Fourier series
With the exception of some mathematical curiosities, any periodic
signal of period T can be expanded into a trigonometric series of
sine and cosine functions, as long as it obeys the following
conditions:
1. f(t) has a finite number of maxima and minima within T,
2. f(t) has a finite number of discontinuities within T, and
3. f(t) is absolutely integrable over one period, i.e.

   ∫[0,T] |f(t)| dt < ∞

Together these conditions mean that you are able to calculate the
area under the graph over one period: a signal that violates them
(for example a power signal growing to infinity) is not absolutely
integrable, and condition 3 simply states this requirement
explicitly.
If all of these are true then the signal can be represented as:

   f(t) = a₀/2 + Σ[n=1→∞] ( aₙ cos(2πnt/T) + bₙ sin(2πnt/T) )

and the coefficients are:

   aₙ = (2/T) ∫[t₀, t₀+T] f(t) cos(2πnt/T) dt

   bₙ = (2/T) ∫[t₀, t₀+T] f(t) sin(2πnt/T) dt

   a₀ = (2/T) ∫[t₀, t₀+T] f(t) dt

In practice we do not have to have an infinite number of
HARMONICS to construct the original signal f(t). We can
construct f(t) with a finite number of HARMONICS with an
acceptable error. Consider for example the reconstructed signal
of a sawtooth waveform shown in the figure:
- The first signal has only two harmonics, part (a)
- The second signal has five harmonics, part (b)
- The third signal has ten harmonics, part (c)
- The original signal is at the bottom, part (d)






















As you can see by the time you have ten harmonics, the signal is
virtually identical to the original.
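The same behaviour can be reproduced numerically. The Python sketch below (an illustrative fragment, not from the notes; the sawtooth shape and period are arbitrary) evaluates the coefficient integrals above by numerical integration and reconstructs the waveform with 2, 5 and 10 harmonics:

    import numpy as np

    T = 1.0
    t = np.linspace(0.0, T, 2001)
    f = t / T                    # one period of a sawtooth rising from 0 to 1

    def coeff(n, kind):
        """Evaluate a_n or b_n numerically from the integral formulas above."""
        basis = np.cos if kind == "a" else np.sin
        return (2.0 / T) * np.trapz(f * basis(2 * np.pi * n * t / T), t)

    def partial_sum(n_harmonics):
        y = np.full_like(t, coeff(0, "a") / 2.0)          # the a0/2 term
        for n in range(1, n_harmonics + 1):
            y += coeff(n, "a") * np.cos(2 * np.pi * n * t / T)
            y += coeff(n, "b") * np.sin(2 * np.pi * n * t / T)
        return y

    for n_h in (2, 5, 10):
        err = np.mean(np.abs(partial_sum(n_h) - f))
        print(f"{n_h:2d} harmonics: mean reconstruction error {err:.4f}")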

The other snag is that the signal must be periodic, and few real-
world signals are truly periodic. We can get around this by
artificially changing a non-periodic signal (defined over a time T)
so that it becomes periodic, e.g. a pulse signal can be repeated to
give the appearance of a periodic signal.

For example, from the non-periodic input signal f(t) we can
produce a periodic signal of period T₀.









The Fourier series can also be represented in a complex form
which is more compact and is more convenient when dealing with
complex signals.

If we make use of the identity e^(jx) = cos x + j sin x and define

   cₙ = (aₙ − j bₙ)/2

we get the complex form of the Fourier series to be

   f(t) = Σ[n=−∞→∞] cₙ exp(jωₙt),   where ωₙ = 2πn/T

FOURIER TRANSFORM

The Fourier Transform is a generalization of the Fourier series.
Strictly speaking it only applies to continuous and aperiodic
functions, but the use of the impulse function allows discrete
signals to be handled as well. The Fourier Transform is defined as

   F(ω) = ∫[−∞, ∞] f(t) exp(−jωt) dt

The inverse transform is defined as

   f(t) = (1/2π) ∫[−∞, ∞] F(ω) exp(jωt) dω

The Fourier Transform F(ω) of f(t) is also known as the
SPECTRUM of f(t). Since the spectrum of a real signal is in
general complex, the amplitude |F(ω)| is called the AMPLITUDE
SPECTRUM and the phase ∠F(ω) is called the PHASE SPECTRUM
of f(t).
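In practice the spectrum of a sampled signal is usually estimated with the FFT. The sketch below (illustrative only; the test signal, its length and the sample rate are arbitrary assumptions) computes the amplitude and phase spectra of a short sequence:

    import numpy as np

    fs = 128                     # assumed sampling rate, Hz
    n = np.arange(256)
    # Test signal: a 10 Hz cosine plus a smaller 30 Hz cosine with a phase shift.
    x = np.cos(2 * np.pi * 10 * n / fs) + 0.5 * np.cos(2 * np.pi * 30 * n / fs + np.pi / 4)

    X = np.fft.rfft(x)                         # spectrum at discrete frequencies
    freqs = np.fft.rfftfreq(len(n), d=1 / fs)

    amplitude = np.abs(X)                      # amplitude spectrum |F|
    phase = np.angle(X)                        # phase spectrum (angle of F)

    for k in np.argsort(amplitude)[-2:]:       # the two strongest components
        print(f"{freqs[k]:5.1f} Hz  amplitude {amplitude[k]:6.1f}  phase {phase[k]: .2f} rad")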



Consider a square pulse (gate function) of unit height on the
interval −T/2 < t < T/2, as shown in the figure.









Applying the Fourier Transform we get

   F(ω) = ∫[−∞, ∞] f(t) exp(−jωt) dt

But since the pulse is zero everywhere except in the range
−T/2 < t < T/2, we can write the equation as

   F(ω) = ∫[−T/2, T/2] exp(−jωt) dt
        = [ exp(−jωt)/(−jω) ]  evaluated from t = −T/2 to t = T/2

   F(ω) = ( exp(jωT/2) − exp(−jωT/2) ) / (jω)

or

   F(ω) = T sin(ωT/2)/(ωT/2) = T sinc(ωT/2) = T sinc(x)

where x = ωT/2.


























Amplitude and Phase spectrum of the single Gate Function



Consider now the rectangular pulse shown in figure



f(t) = 1 for 0 < t < T, and f(t) = 0 otherwise.
The Fourier Transform is given by

   F(ω) = T exp(−jωT/2) sinc(ωT/2)

The amplitude and phase spectrum are shown in the figure.
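As a quick check of this closed-form result (a sketch under the assumption T = 1, not taken from the notes), the fragment below numerically integrates the defining transform over the pulse and compares it with T exp(−jωT/2) sinc(ωT/2):

    import numpy as np

    T = 1.0
    t = np.linspace(0.0, T, 4001)            # the pulse is 1 on (0, T)
    omega = np.linspace(-40.0, 40.0, 801)

    # Direct numerical evaluation of F(w) = integral over the pulse of exp(-j w t) dt.
    F_numeric = np.array([np.trapz(np.exp(-1j * w * t), t) for w in omega])

    # Closed form: F(w) = T exp(-j w T/2) sinc(wT/2), with sinc(x) = sin(x)/x.
    # np.sinc is the normalised sinc sin(pi x)/(pi x), so its argument is divided by pi.
    F_closed = T * np.exp(-1j * omega * T / 2) * np.sinc(omega * T / 2 / np.pi)

    print("max |difference|:", np.max(np.abs(F_numeric - F_closed)))
    print("amplitude at w = 0:", np.abs(F_closed[len(omega) // 2]))   # equals T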





















































Properties of Fourier Transform





DISCRETE TIME SIGNALS

A discrete-time signal is a sequence of numbers (real or
complex). such a sequence represents the variation of some
physical quantity as a function of a discrete-time index "n".
For instance, the number sequence (0, 1, 2, 3, etc.) can be
described as x(n) = n for 0 < n <3, and it equals 0 for other values
of n or equivalently, in graphical form as shown here. Notice that
the integer label "n" can have both positive and negative values,
namely n ranges from minus infinity to plus infinity.


FINITE ENERGY AND FINITE POWER SIGNALS

Since signals represent physical quantities, they are subject to
various physical constraints, such as having finite energy.

A signal x(n) is said to have finite energy when the sum of the
squared magnitudes of x(n), for n ranging from minus infinity to
plus infinity, is finite:

   E = Σ[n=−∞→∞] |x(n)|² < ∞

Another common physical constraint is boundedness. When
physical quantities exceed their prescribed bounds the results are
often catastrophic: bridges collapse because of excess stress,
electrical and electronic components fail because of excess
voltage or current, volatile substances ignite or explode, etc. A
signal x(n) is called bounded if there exists a positive number A
such that the absolute value of x of n does not exceed A for all n.

From a mathematical standpoint it is sometimes convenient to
extend the notions of finite energy and boundedness by
introducing a broader family of finite power signals. A signal x(n)
has finite power when the average of the squared magnitudes of
x(n) over the range −N to N remains finite as N goes to infinity:

   P = lim[N→∞] (1/(2N+1)) Σ[n=−N→N] |x(n)|² < ∞

Every finite-energy signal is bounded, and every bounded signal
has finite power.



Finite energy signals represent transient phenomena: such
signals have the property that x(n) converges to zero as n goes to
either plus or minus infinity. On the other hand, finite power
signals represent persistent phenomena, for which x(n) does not
decay with time. Strictly speaking, persistent signals cannot exist
in our energy-bounded world. Nevertheless, they offer a
convenient mathematical model for many real life signals such as
ocean waves, air temperature, power line voltage, sunspot
activity, etc., all of which exhibit essentially constant power over
prolonged periods of time.


- Finite energy = transient phenomena:
  x(n) decays with time,

     lim x(n) = 0  as n → ±∞

- Finite power = persistent phenomena:
  x(n) does not decay with time

EXAMPLES

Finite Energy Finite Power
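For concreteness, here is a small Python sketch (the two signals are my own illustrative choices, not necessarily those in the original figure): a decaying exponential behaves as a finite-energy signal, while a sinusoid behaves as a finite-power signal.

    import numpy as np

    N = 10_000
    n = np.arange(-N, N + 1)

    # Decaying exponential (zero for n < 0): a finite-energy, transient signal.
    x_energy = np.zeros(len(n))
    x_energy[n >= 0] = 0.9 ** n[n >= 0].astype(float)

    # Sinusoid: bounded, finite power, but infinite energy (persistent signal).
    x_power = np.cos(2 * np.pi * 0.05 * n)

    energy = lambda x: np.sum(np.abs(x) ** 2)
    power = lambda x: np.sum(np.abs(x) ** 2) / (2 * N + 1)

    print(f"exponential: energy {energy(x_energy):.3f}, power {power(x_energy):.6f}")
    print(f"sinusoid:    energy {energy(x_power):.1f}, power {power(x_power):.3f}")
    # The exponential's energy converges to 1/(1 - 0.81) ~ 5.26 and its power tends to
    # zero; the sinusoid's energy grows without bound as N grows, while its power
    # settles near 1/2.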




A discrete-time sinusoidal signal is always given by the
expression

   x(n) = A cos(2πf₀n + φ)

where A is a positive number called the amplitude, f₀ (with
magnitude bounded by one half) is called the frequency, and φ
(with magnitude bounded by π) is called the phase shift.

The product ω₀ = 2πf₀ is known as the angular (radian) frequency.

ALIASING OF SINUSOIDAL SIGNALS

If x₁(n) = cos(2πf₀n) and x₂(n) = cos(2π(f₀ + k)n), then it can be
seen that

   x₁(n) = x₂(n)   for all n,

where k is any integer (positive or negative). For this reason, we
always restrict the frequency of a sinusoid to values in the range
|f₀| ≤ 1/2.

Summary for the discrete-time sinusoidal signal
x(n) = A cos(2πf₀n + φ):
Amplitude: A
Frequency: f₀, with |f₀| ≤ 1/2
Phase shift: φ, with |φ| ≤ π
Angular (radial) frequency: ω₀ = 2πf₀
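A short numerical demonstration of this aliasing (with illustrative values f₀ = 0.1 and k = 1 chosen here, not in the notes):

    import numpy as np

    n = np.arange(32)
    f0, k = 0.1, 1                            # any integer k gives the same samples

    x1 = np.cos(2 * np.pi * f0 * n)
    x2 = np.cos(2 * np.pi * (f0 + k) * n)     # frequency shifted by a whole integer

    print(np.allclose(x1, x2))                # True: the two sequences are identical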

EULER'S IDENTITY

A very important analytical tool is Euler's identity, which relates
sinusoids to complex exponentials. Mathematically,

   e^(jθ) = cos θ + j sin θ

Periodic and Aperiodic Signals

A discrete signal x(n) is periodic if

   x(n + N) = x(n)   for all n

for some positive integer N. Otherwise it is aperiodic.

The period of the periodic signal is the smallest integer N that
satisfies this relation.

Some Fundamental Sequences

The unit sample:   δ(n) = 1 for n = 0, and 0 otherwise

The unit step:     u(n) = 1 for n ≥ 0, and 0 otherwise

The exponential sequence is defined as

   x(n) = aⁿ

We can write the unit step in terms of the unit sample as

   u(n) = Σ[k=0→∞] δ(n − k)

Similarly, the unit sample can be expressed in terms of the unit
step as

   δ(n) = u(n) − u(n − 1)
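Both relations are easy to verify numerically over a finite index range; the following sketch (illustrative only) does so.

    import numpy as np

    n = np.arange(-10, 11)
    delta = (n == 0).astype(float)          # unit sample
    u = (n >= 0).astype(float)              # unit step

    # u(n) as a (truncated) sum of shifted unit samples delta(n - k), k = 0, 1, 2, ...
    u_from_delta = np.zeros_like(u)
    for k in range(0, 25):
        u_from_delta += (n == k).astype(float)
    print(np.allclose(u, u_from_delta))     # True on this index range

    # delta(n) = u(n) - u(n - 1): the first difference of the unit step.
    delta_from_u = u - np.concatenate(([0.0], u[:-1]))
    print(np.allclose(delta, delta_from_u)) # True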



The Unit Impulse Sequence




















[Figure: stem plot of the unit sample sequence, amplitude versus
time index n]




The exponential sequence
















Periodic and Aperiodic Sequences

A discrete signal x(n) is said to be periodic if for some positive
integer N,

x(n) = x(n+N)
otherwise it is an Aperiodic Signal.

If x1(n) is a periodic sequence with period N1 and x2(n) is a
periodic sequence with period N2, then the sum sequence

   x(n) = x1(n) + x2(n)

will always be periodic with the fundamental period

   N = (N1 N2) / gcd(N1, N2)

where gcd(N1, N2) means the greatest common divisor of N1 and
N2. The same is true for the product, but there the fundamental
period may be smaller.
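A quick numerical illustration (with example periods N1 = 4 and N2 = 6 chosen here, not in the notes):

    import numpy as np
    from math import gcd

    N1, N2 = 4, 6
    n = np.arange(0, 120)

    x1 = np.cos(2 * np.pi * n / N1)           # period N1
    x2 = np.cos(2 * np.pi * n / N2)           # period N2
    x = x1 + x2

    N = N1 * N2 // gcd(N1, N2)                # predicted fundamental period: lcm = 12
    print("predicted period:", N)
    print("x(n) == x(n + N) for all n tested:",
          np.allclose(x[: len(n) - N], x[N:]))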


SIGNAL MANIPULATION

1. Transformation of the independent variable
Sequences are often altered by modifying the index n as
follows
y(n) = x (f(n))
where f(n) is some function of n.


2. Shifting
This is the transformation defined by f(n) = n − n₀, so that
y(n) = x(n − n₀). x(n) is shifted to the right by n₀ samples if n₀ is
positive (this is referred to as a delay) and is shifted to the left if
n₀ is negative (referred to as an advance).
Thus y(n) = x(n − n₀)

3. Reversal
This transformation is given by f(n) = −n and simply flips the
signal x(n) with respect to n. Thus
   y(n) = x(−n)

4. Time Scaling
This transformation is defined by f(n) = Mn or f(n) = n/N,
where M and N are positive integers. In the case of f(n) = Mn,
the sequence is formed by taking every Mth sample of x(n) (this
operation is known as down-sampling). With f(n) = n/N the
sequence y(n) = x(f(n)) is defined as

   y(n) = x(n/N)  for n = 0, ±N, ±2N, . . .
        = 0       otherwise

This operation is called up-sampling.

All the above operations are shown in the accompanying
diagram.




Figure showing various operations on a discrete signal
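These index transformations translate directly into array operations. The sketch below (illustrative; the test sequence and the values n₀ = 2, M = 2, N = 3 are arbitrary) shows shifting, reversal, down-sampling and up-sampling:

    import numpy as np

    x = np.array([1, 2, 3, 4, 5, 6], dtype=float)   # x(n) for n = 0..5

    # Shifting by n0 = 2 (delay): y(n) = x(n - 2); zeros enter from the left.
    n0 = 2
    delayed = np.concatenate([np.zeros(n0), x])[: len(x)]

    # Reversal: y(n) = x(-n) (here simply the sequence flipped end to end).
    reversed_x = x[::-1]

    # Down-sampling by M = 2: keep every Mth sample, y(n) = x(Mn).
    M = 2
    down = x[::M]

    # Up-sampling by N = 3: y(n) = x(n/N) when n is a multiple of N, zero otherwise.
    N = 3
    up = np.zeros(len(x) * N)
    up[::N] = x

    print("delayed :", delayed)
    print("reversed:", reversed_x)
    print("down    :", down)
    print("up      :", up)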
Addition, Multiplication and Scaling


Addition:               y(n) = x1(n) + x2(n),   −∞ < n < ∞
Multiplication:         y(n) = x1(n) · x2(n),   −∞ < n < ∞
Scaling by a factor c:  y(n) = c x(n),          −∞ < n < ∞





Signal Decomposition

The unit sample function δ(n) may be used to decompose an
arbitrary discrete signal x(n) into a sum of weighted and shifted
unit sample functions as follows:

   x(n) = . . . + x(−1)δ(n+1) + x(0)δ(n) + x(1)δ(n−1) + x(2)δ(n−2) + . . .

This may be written as

   x(n) = Σ[k=−∞→∞] x(k) δ(n − k)

where each term x(k)δ(n − k) in the sum is a signal that has the
amplitude x(k) at time n = k and a value of zero for all other
values of n. This decomposition is the discrete-time version of the
SIFTING property for continuous-time signals.
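A short numerical check of this decomposition over a finite index range (illustrative only):

    import numpy as np

    n = np.arange(-5, 6)
    x = np.random.default_rng(1).integers(-3, 4, size=len(n)).astype(float)

    # Rebuild x(n) as a sum of weighted, shifted unit samples x(k) * delta(n - k).
    reconstructed = np.zeros_like(x)
    for i, k in enumerate(n):
        delta_shifted = (n == k).astype(float)    # delta(n - k)
        reconstructed += x[i] * delta_shifted

    print(np.allclose(x, reconstructed))          # True: the decomposition is exact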
