CONTENTS

1 Introduction to Sampling and Aliasing
   1.1 Objective
   1.2 Theoretical Introduction
   1.3 Computer Exploration
      1.3.1 Procedure
      1.3.2 Example
   1.4
   1.5 Tasks

2 Discrete-Time Signals
   2.1 Objective
   2.2 Theoretical Introduction
   2.3 Computer Exploration
      2.3.1 Example
   2.4
   2.5 Analysis

3 Discrete-Time Systems and Convolution
   3.1 Objective
   3.2 Theoretical Introduction
   3.3 Computer Exploration
      3.3.1 Procedure
      3.3.2 Example
   3.4
   3.5 Analysis

4 DTFT, DFT, and FFT
   4.1 Objective
   4.2 Theoretical Introduction
      4.2.1 Definitions
   4.3 Computer Exploration
      4.3.1 Procedure
      4.3.2 Example
   4.4
   4.5 Analysis

5
   5.1 Objective
   5.2 Theoretical Introduction
      5.2.2 Reconstruction
   5.3 Computer Exploration
      5.3.1 Procedure
      5.3.2 Example
   5.4
   5.5 Analysis

6 The Z-Transform
   6.1 Objective
   6.2 Theoretical Introduction
      6.2.1 Definition
      6.2.4 Inverse Z-transform
   6.3 Computer Exploration
      6.3.1 Procedure
      6.3.2 Example
   6.4
   6.5 Analysis

7 Digital Filtering

8 Digital Filtering
   8.1 Objective
   8.2 Theoretical Introduction
      8.2.1 Design Objective
   8.3 Computer Exploration
      8.3.1 Example
   8.4
   8.5 Analysis

9
   9.1 General Information
   9.2 Project Description
   9.3 Significance
   9.4 Specifications
   9.5
   9.6 Resources
   9.7 Report Sections
   9.8 Report Requirements
   9.9 Assessment

10 Appendix
LIST OF FIGURES

1.1 The first plot shows xa(t) and the next three plots show the effect of sampling at fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, and fs3 = 1. The first two sampling frequencies meet the sampling theorem requirement; the third is less than twice the maximum frequency, and the reconstructed signal is an aliased version of the original at a lower frequency.

2.1 Discrete-time signals generated using MATLAB: x(n) = 2δ(n+3) + δ(n+2) − δ(n) − 3δ(n−4) for −5 ≤ n ≤ 5 (upper left); the sequence x(n) = {1, 2, 4, 5, 9, −3, −2, −1, 4} generated using a sum of delayed δ(n) (upper right); the complex exponential x(n) = e^((0.2+j0.3)n) (bottom left); and a sinusoid at 30 Hz in Gaussian noise (bottom right).

3.1 Example of a discrete system that smooths an input signal and improves the signal-to-noise ratio. The underlying signal is a 10 Hz sinusoid sampled at 300 Hz which is corrupted by Gaussian noise; the output is a smooth, delayed version of the input.

4.1 Example of calculating the DFT using the FFT. Time- and frequency-domain representations of xa(t) = 3 sin(πt) + 2 sin(5πt); the magnitude of the DFT has two peaks, at 0.5 Hz and 2.5 Hz. The next plot shows the DFT of the same signal corrupted by Gaussian noise; the final plot shows the FFT in dB scale, where the effect of the finite rectangular window is visible.

5.1 The first plot shows xa(t) and the next three plots show the effect of sampling at fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, and fs3 = 2. The first two sampling frequencies meet the sampling theorem requirement; the third is less than twice the maximum frequency, and the reconstructed signal is an aliased version of the original at a lower frequency.

5.2 The effect of sampling at fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, and fs3 = 2 in the frequency domain. The first two sampling frequencies meet the sampling theorem requirement; the third is less than twice the maximum frequency.

6.1 The z-plane of H(z) = 0.006143 / (1 − 1.87834z^−1 + 0.975156z^−2).

6.2 H(z) = 0.006143 / (1 − 1.87834z^−1 + 0.975156z^−2). We can see that the system is a digital resonator filter which has a peak at ω = 0.1π.

8.2 Result of the lowpass filtering operation. The lowpass filter smooths the effects of quantization noise. This filter was implemented with filtfilt (no delay of the output with respect to the input) and is therefore noncausal.
CHAPTER 1
Introduction to Sampling and Aliasing
1.1 Objective
The objective of this lab is to explore the concept of sampling and reconstruction of analog
signals. Specifically, we will simulate the sampling process of an analog signal using MATLAB,
investigate the effect of sampling in the time domain, and introduce the concept of aliasing.
In a later lab, after studying the DTFT and DFT, we will revisit this topic and explore these
two concepts in the frequency domain.
1.2 Theoretical Introduction
The signals that we encounter in the real world are mostly analog signals, which vary
continuously in time and amplitude. These signals can be processed using analog filters.
Analog filters are implemented using electrical networks containing active and passive circuit
elements. In many cases, however, we are interested in processing real world analog signals
digitally. In order to process analog signals using digital hardware such as digital signal
processors (DSPs), one needs to convert the analog signals into a digital form which is suitable
for digital hardware. The conversion of an analog signal to a digital form is performed by an
analog-to-digital converter (ADC), which produces a stream of binary numbers from analog
signals, that is, it takes samples of the analog signal and digitizes the amplitude at these
discrete times. It is for this reason that a digital signal is said to be discrete in time and
discrete in amplitude. Prior to the ADC conversion, an analog filter called the prefilter or
antialiasing filter is applied to the analog signal in order to prevent an effect known as aliasing,
which we will explore in this laboratory experiment. When the signal is in a digital form, it can
be processed by a digital signal processor (DSP). After performing the digital signal processing,
the signal must be converted back into an analog signal. This is done by a digital-to-analog
converter (DAC) and an analog postfilter which smooths out the staircase waveform.
In order to do digital signal processing we must first perform all of these other tasks. It may
appear that DSP is more complicated than ASP. We must therefore ask ourselves the question:
Why does anybody want to process the signal digitally? The answer to this question is
that DSP provides many advantages over ASP. The primary reasons for using DSP are
programmability, reliability, accuracy, availability, and cost of the digital hardware. Systems
using the DSP approach are programmable and can be tested or reprogrammed to perform a different
DSP task easily. Also, DSP operations are based solely on additions and multiplications, which
leads to very stable processing (stability independent of temperature, etc.). Furthermore, DSP
operations can easily be modified in real-time by simple programming changes or reloading of
registers.
1.2.1
In this lab we focus our attention on the process of sampling and how to avoid the so-called
problem of aliasing. During sampling, an analog signal x(t) is periodically measured every T
seconds; that is, time is discretized in units of the sampling interval or sampling period T:
t = nT,    n = 0, 1, 2, ...

where T is the sampling period. The inverse of T is called the sampling frequency, that is,
the number of samples per second:

fs = 1/T
When we sample an analog signal we must make sure that we are taking enough samples
(sampling fast enough) so that the samples are a good representation of the original analog
signal. In some cases, when the sampling frequency fs is not fast enough the samples taken
may not represent the original analog signal. The potential confusion of the original signal
with another of a different frequency is known as aliasing. Aliasing can be avoided if one
satisfies the conditions of the sampling theorem.
12
The sampling theorem provides a quantitative answer to the question of how to choose
the sampling time interval T or the sampling frequency fs.
Sampling Theorem 1 For accurate representation of a signal x(t) by its time samples
x(nT), two conditions must be met: 1) The signal x(t) must be bandlimited, that is, its
frequency content must be limited to frequencies up to some maximum frequency
fmax and no frequencies beyond that, and 2) the sampling rate fs must be chosen to be at least
twice the maximum frequency fmax, that is,

fs ≥ 2fmax
According to the sampling theorem, before sampling we must make sure the signal is
bandlimited (this is the function of the analog prefilter) and that the sampling frequency is
at least twice the maximum frequency.
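The folding effect behind aliasing can be checked numerically. The sketch below is in Python rather than the lab's MATLAB, with an illustrative choice of f = 2.5 Hz and fs = 4 Hz (not the frequencies used later in this lab): a sinusoid above fs/2 produces exactly the same samples as its lower-frequency alias.

```python
import math

f = 2.5    # signal frequency in Hz; above fs/2, so the sampling theorem is violated
fs = 4.0   # sampling frequency in Hz (fs < 2*f)

# Folding: the sampled sinusoid is indistinguishable from one at |f - round(f/fs)*fs|
f_alias = abs(f - fs * round(f / fs))   # 1.5 Hz

for n in range(40):
    t = n / fs   # sample times t = nT with T = 1/fs
    assert abs(math.cos(2 * math.pi * f * t)
               - math.cos(2 * math.pi * f_alias * t)) < 1e-9
```

When fs ≥ 2f, the folded frequency computed this way equals f itself, so no distinct lower-frequency alias exists and the samples identify the original sinusoid uniquely.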
1.3 Computer Exploration
In order to explore the concepts of sampling and aliasing we can perform a MATLAB
simulation where we create an analog signal (simulated), take samples at different frequencies,
and observe the effect of fs.
1.3.1 Procedure
1.3.2 Example
As an example, let's follow this procedure to simulate the effect of sampling the analog signal:

xa(t) = 3 sin(πt) + 2 sin(5πt)

This signal contains two frequency components, at f1 = 1/2 Hz and f2 = 2.5 Hz. We explore the
effect of sampling and aliasing by sampling xa(t) with three different sampling frequencies:
fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, and fs3 = 1. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency and therefore we should expect to see aliasing. Figure 1.1
shows the results of the MATLAB simulation.
Figure 1.1: The first plot shows xa(t) and the next three plots show the effect of sampling at
fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, and fs3 = 1. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency. We can see how in this case the reconstructed signal is
an aliased version of the original at a lower frequency.
1.4

f1 = 0.5;                % f1 = 1/2 Hz
f2 = 2.5;                % f2 = 2.5 Hz
T  = 1/1000; N = 1000;
n  = 0:10*N; t = n*T;
x  = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*f2.*t);
subplot(4,1,1);
h = plot(t,x); ylabel('x(t)');
%==================================================
% Sampling at fs = 10*fmax = 10*f2
%==================================================
fs = 10*f2;              % Sampling Frequency
T  = 1/fs;               % Sampling Period
n  = 0:10*fs;            % Plot 10 seconds
k  = n*T;                % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(4,1,2);
h = plot(t,x,':',k,xs,'r.'); ylabel('x(nT)');
%==================================================
% Sampling at fs = 2.5*fmax = 2.5*f2
%==================================================
fs = 2.5*f2;             % Sampling Frequency
T  = 1/fs;               % Sampling Period
n  = 0:10*fs;            % Plot 10 seconds
k  = n*T;                % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(4,1,3);
h = plot(t,x,':',k,xs,'r.'); ylabel('x(nT)');
%==================================================
% Sampling at fs = 1
%==================================================
fs = 1;                  % Sampling Frequency
T  = 1/fs;               % Sampling Period
n  = 0:10*fs;            % Plot 10 seconds
k  = n*T;                % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
xa = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*0.5*t);  % aliased version: 2.5 Hz folds to 0.5 Hz
subplot(4,1,4);
h = plot(t,x,':',t,xa,'k',k,xs,'r.');
xlabel('Time, s');
ylabel('x(nT)');
1.5 Tasks
CHAPTER 2
Discrete-Time Signals
2.1 Objective
In this laboratory we will create MATLAB functions to efficiently implement the most common
discrete-time signals such as the unit sample sequence, the unit step sequence, real- and
complex-valued exponential sequences, and sinusoidal sequences. Furthermore, we will explore
the concept of unit sample synthesis.
2.2 Theoretical Introduction
A discrete-time signal is a sequence of numbers. In this lab manual we will denote a
continuous-time signal as x(t), where t represents the independent time variable. We will use
x(n) to denote a discrete-time signal, where n is the time index. We can use MATLAB to implement
any finite-duration sequence by a row vector of appropriate values. A vector, however, does
not carry any information about the sample position n. For this reason we need two
vectors to represent a signal x(n): one for x and another for n. For example, the sequence
x(n) = {2, 3, 5, -2, 9, 7}, where 5 is the sample at time zero (n = 0), can
be represented in MATLAB by:

n = [-2,-1,0,1,2,3]; x = [2, 3, 5, -2, 9, 7];
There are a few discrete-time signals that we encounter very often. For this reason, it
is convenient to create MATLAB functions that can be used to implement these important
sequences.
19
2.2.1

The unit sample sequence, δ(n), can be implemented using the logical relation n == 0 as
follows:¹

function [x,n] = ImpSeq(n0,n1,n2)
%ImpSeq: Generates x(n) = delta(n-n0); n1 <= n <= n2
% [x,n] = ImpSeq(n0,n1,n2)
n = [n1:n2]; x = [(n-n0) == 0];
2.2.2

Similarly, the unit step sequence u(n) can be implemented using the logical relation n >= 0:

function [x,n] = StepSeq(n0,n1,n2)
%StepSeq: Generates x(n) = u(n-n0); n1 <= n <= n2
% [x,n] = StepSeq(n0,n1,n2)
n = [n1:n2]; x = [(n-n0) >= 0];
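For readers following along without MATLAB, the two generators can be sketched in Python (a hypothetical port; the snake_case names are our own, not part of the lab code):

```python
def imp_seq(n0, n1, n2):
    """x(n) = delta(n - n0) for n1 <= n <= n2; returns (x, n) like ImpSeq."""
    n = list(range(n1, n2 + 1))
    x = [1 if k - n0 == 0 else 0 for k in n]
    return x, n

def step_seq(n0, n1, n2):
    """x(n) = u(n - n0) for n1 <= n <= n2; returns (x, n) like StepSeq."""
    n = list(range(n1, n2 + 1))
    x = [1 if k - n0 >= 0 else 0 for k in n]
    return x, n

x, n = imp_seq(0, -2, 2)   # x is [0, 0, 1, 0, 0]: a single 1 at n = 0
```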
2.2.3

The MATLAB operator for exponentiation, .^, can be used to implement a real exponential
sequence. For example, to implement x(n) = (0.8)^n, 0 ≤ n ≤ 10, we can use:

n = [0:10];
x = (0.8).^n;

In the case of complex-valued sequences we must use the MATLAB function exp. For
example, to generate x(n) = e^((3+j2)n), 0 ≤ n ≤ 10, we write:
n = [0:10];
x = exp((3+2j)*n);

¹ The implementation is based on the code given by V. K. Ingle and J. G. Proakis in Digital Signal
Processing Using MATLAB, BookWare Companion Series. The MATLAB code can be downloaded from the
book's website.

2.2.4
It is also important to create MATLAB functions to perform operations on the basic sequences
described above. For instance, we must be able to multiply, add, perform time shifts, folding,
even and odd decompositions, sample summations (sum), sample products (prod), etc. An
important fact is that any arbitrary sequence x(n) can be synthesized as a weighted sum of
delayed and scaled unit sample sequences, that is,

x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)

This result will be used to develop the concept of convolution, and the response of a linear
time-invariant (LTI) system to an arbitrary input.
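The synthesis formula can be verified directly for any finite sequence. The Python sketch below (illustrative, reusing the example sequence from the start of this chapter) rebuilds x(n) from weighted, delayed unit samples:

```python
def imp_seq(n0, n1, n2):
    # delta(n - n0) over n1 <= n <= n2
    return [1 if k == n0 else 0 for k in range(n1, n2 + 1)]

# The finite sequence x(n) over -2 <= n <= 3 from Section 2.2
x = [2, 3, 5, -2, 9, 7]
n1, n2 = -2, 3

# Synthesize x(n) = sum_k x(k) * delta(n - k)
synth = [0] * len(x)
for k, xk in zip(range(n1, n2 + 1), x):
    delta = imp_seq(k, n1, n2)
    synth = [s + xk * d for s, d in zip(synth, delta)]

assert synth == x   # the weighted sum of impulses reproduces the sequence
```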
2.3 Computer Exploration
In this section we show some examples of how to use the basic signals to create more complex
ones. Furthermore, we explore the concept of how to synthesize any signal as a weighted sum
of delayed and scaled unit sample sequences.
2.3.1 Example
Figure 2.1: Plots showing discrete-time signals generated using MATLAB. The upper left plot
corresponds to x(n) = 2δ(n + 3) + δ(n + 2) − δ(n) − 3δ(n − 4), −5 ≤ n ≤ 5, the upper right
plot is the sequence x(n) = {1, 2, 4, 5, 9, −3, −2, −1, 4} generated using a sum of delayed
δ(n). The bottom left plot is a complex exponential x(n) = e^((0.2+j0.3)n), −10 ≤ n ≤ 10, and
the bottom right corresponds to a cosine at 30 Hz in Gaussian noise with zero mean and unit
variance.
2.4

%==================================================
% Generate and Plot x2(n)
%==================================================
n = [-2:10];
x = 1*ImpSeq(0,-2,10) + 2*ImpSeq(1,-2,10) ...
  + 4*ImpSeq(2,-2,10) + 5*ImpSeq(3,-2,10) ...
  + 9*ImpSeq(4,-2,10) - 3*ImpSeq(5,-2,10) ...
  - 2*ImpSeq(6,-2,10) - 1*ImpSeq(7,-2,10) ...
  + 4*ImpSeq(8,-2,10);
subplot(2,2,2); stem(n,x);
xlabel('Time Index, n'); box off;
%==================================================
% Generate and Plot x3(n)
%==================================================
n     = [-10:10];
alpha = 0.2 + 0.3j;            % exponent of x3(n) = e^(alpha*n), see Figure 2.1
x     = real(exp(alpha*n));
subplot(2,2,3); stem(n,x);
xlabel('Time Index, n'); box off;
%==================================================
% Generate and Plot x4(n)
%==================================================
n  = [-10:10];                 % Time Index
fs = 12*30;                    % Sampling Frequency
x  = randn(1,length(n));       % Gaussian Noise (zero mean, unit variance)
ns = sin(2*pi*30*n.*1/fs);     % Signal
xn = x+ns;                     % Signal + Noise
subplot(2,2,4);
stem(n,xn); xlabel('Time Index, n'); box off;
2.5 Analysis
CHAPTER 3
Discrete-Time Systems and Convolution
3.1 Objective
3.2 Theoretical Introduction
LTI systems are very important in practice because there is a well-developed mathematical
theory that enables us to analyze, design, and study these systems in great detail. In particular,
we can completely characterize LTI systems by their impulse response. If we have an LTI
system and we want to predict what the system does, the only thing we need to do is to input
an impulse and record the output (the impulse response). Once we have the impulse response,
we can use the convolution sum to find the output of the system for an arbitrary input.
3.2.1

Let's try to find an expression for the output of an LTI system for an arbitrary input. Since
we are not making any assumptions about the input sequence x(n), we must find a way to
express this sequence in terms of some other sequence for which we do know the output. By
definition, the output of an LTI system L[·] to a unit sample is the impulse response, denoted
h(n). We can, therefore, express the arbitrary input signal x(n) using δ(n − k) as our basis
functions. This is convenient because we know the output of the system for a single impulse.

The procedure to develop an expression for the output of the system to an arbitrary input
is as follows:

1. Goal: we want to find an expression for y(n) for any input signal x(n).

2. Given: we know that the output of the system when the input is an impulse is h(n):

y(n) = L[δ(n)] = h(n)
3. Since we know what the output is for an impulse δ(n), we just have to represent the
input signal x(n) using the impulses as our basis functions:

x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)

y(n) = L[ Σ_{k=−∞}^{∞} x(k) δ(n − k) ] = Σ_{k=−∞}^{∞} x(k) L[δ(n − k)] = Σ_{k=−∞}^{∞} x(k) h(n − k)

where we have used the fact that the system is linear and time-invariant.

The expression we obtained for the output y(n) is called the linear-convolution sum and
is denoted by y(n) = x(n) ∗ h(n).
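The convolution sum translates directly into code. A minimal Python sketch of the double sum (illustrative; MATLAB's conv performs the same computation):

```python
def convolve(x, h):
    """y(n) = sum_k x(k) * h(n - k) for finite-length sequences.

    The output length is len(x) + len(h) - 1, as with MATLAB's conv."""
    y = [0.0] * (len(x) + len(h) - 1)
    for k, xk in enumerate(x):       # each input sample x(k) ...
        for m, hm in enumerate(h):   # ... launches a scaled, delayed copy of h
            y[k + m] += xk * hm
    return y

# A unit impulse as input reproduces h(n), exactly as the derivation requires
assert convolve([1], [3.0, 2.0, 1.0]) == [3.0, 2.0, 1.0]
```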
3.2.2

There are several definitions for stability. In general we can say that a system is stable if
bounded input signals produce bounded output responses (BIBO). In terms of the impulse
response, an LTI system is stable if its impulse response is absolutely summable, that is,

Σ_{n=−∞}^{∞} |h(n)| < ∞

Another important concept is causality. We say that a system is causal if the output at
time n0 depends only on the inputs at time n0 and before, but not on future values of the
input. In general, only causal systems can be implemented in real-time. However, in cases
where the system is not completely causal we may still be able to implement it in a
close-to-real-time fashion by using a buffer.
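For example, the causal exponential h(n) = a^n u(n) is absolutely summable exactly when |a| < 1, in which case the sum is the geometric series 1/(1 − |a|). A quick numerical check (an illustrative sketch that truncates the infinite sum):

```python
a = 0.5   # pole of h(n) = a^n u(n); |a| < 1, so this system is BIBO stable
N = 200   # truncation point; the discarded tail a^N is negligible here

partial = sum(abs(a) ** n for n in range(N))

# Geometric series: sum_{n>=0} |a|^n = 1 / (1 - |a|) = 2.0 for a = 0.5
assert abs(partial - 1.0 / (1.0 - abs(a))) < 1e-12
```

With a value such as a = 1.5 the partial sums grow without bound, matching the instability of that impulse response.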
3.3 Computer Exploration
In this section we show some examples of how to use the concept of convolution in a practical
sense. In particular, we can see how a system with a very simple impulse response can be
used as a filter to smooth noisy signals.
3.3.1 Procedure
3.3.2 Example
As an example, let's generate a sinusoidal signal with a frequency of 10 Hz, sampled at 300 Hz,
which is corrupted by zero-mean, unit-variance Gaussian noise. Next, we generate a sequence
h(n) = (1/8)[u(n) − u(n − 8)], that is, a constant impulse response. Using convolution we filter
the noisy input signal x(n) with the filter characterized by h(n). Finally, we plot the results
and experiment using different numbers of filter coefficients and shapes of h(n).
In Figure 3.1 we show the output of the system presented in the previous example. Notice
that the use of this filter with a very simple impulse response can help to reduce the Gaussian
noise. The output of the system is less noisy than the original sequence. This system is called
a moving-average filter, because the output is an average, that is,

y(n) = (1/8)[x(n) + x(n−1) + x(n−2) + x(n−3) + x(n−4) + x(n−5) + x(n−6) + x(n−7)]

In this case all the coefficients are constant and equal to 1/8. Notice the relationship between
this expression and convolution. We can rewrite y(n) as

y(n) = (1/8) Σ_{k=0}^{7} x(n − k) = Σ_{k=0}^{7} h(k) x(n − k) = x(n) ∗ h(n)
This equation is the convolution sum of a finite impulse response (FIR) filter of order 7. The
problem in digital filter design is how we go about choosing these h(n) coefficients so that
we can filter out the undesired noise present in the input signal. We are usually interested
in filtering out some particular frequencies while leaving the others unchanged; therefore we
need techniques that allow us to find specific coefficients to remove specific frequencies. This
design is done in the frequency domain. We will revisit this topic once we have introduced
the concepts of the DTFT, DFT, and FFT.
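The noise-reduction effect can be reproduced outside MATLAB. The Python sketch below is an illustrative re-creation of this example with the standard library's random generator; the seed and the integer delay compensation of 3 samples are our own choices (the filter's true group delay is 3.5 samples). It checks that the 8-point moving average brings the noisy signal closer to the clean sinusoid:

```python
import math
import random

random.seed(1)
fs, f, N = 300, 10, 150
x  = [math.cos(2 * math.pi * f * n / fs) for n in range(N)]   # clean 10 Hz sinusoid
xn = [v + 0.25 * random.gauss(0.0, 1.0) for v in x]           # add Gaussian noise

h = [1.0 / 8.0] * 8   # constant impulse response of the moving-average filter

# y(n) = sum_{k=0}^{7} h(k) * xn(n - k), computed where all terms exist
y = [sum(h[k] * xn[n - k] for k in range(8)) for n in range(7, N)]

# Squared error against the clean sinusoid, before and after filtering
err_in  = sum((xn[n] - x[n]) ** 2 for n in range(7, N))
err_out = sum((y[i] - x[7 + i - 3]) ** 2 for i in range(len(y)))
assert err_out < err_in   # the averaged output tracks the sinusoid more closely
```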
Figure 3.1: Example of a discrete system that smooths an input signal and improves the
signal-to-noise ratio. The underlying signal is a 10 Hz sinusoid sampled at 300 Hz which is
corrupted by Gaussian noise. Notice the output is a smooth, delayed version of the input.
3.4

N  = 150;                    % Number of samples
n  = 1:N;                    % Time index
fs = 300;                    % Sampling frequency
T  = 1/fs;                   % Sampling period
f  = 10;                     % Frequency of sinusoid
x  = cos(2*pi*f*n.*T);
a  = 0.25;                   % Noise amplitude
ns = a*randn(1,length(x));   % Gaussian Noise
xn = x+ns;                   % Signal in noise
%==================================================
% Generate the impulse response
%==================================================
h = 1/8*stepseq(0,0,7);
%==================================================
% Filter noisy signal using h(n), xf(n) = xn(n)*h(n)
%==================================================
xf = conv(xn,h);             % Filtered output
%==================================================
% Plotting of results
%==================================================
h1 = plot(n*T, xn, (1:length(xf))*T, xf);
axis tight; box off;
31
3.5 Analysis
CHAPTER 4
DTFT, DFT, and FFT
4.1 Objective
In this laboratory we will introduce the tools to perform Fourier analysis in discrete time,
namely the Discrete-Time Fourier Transform (DTFT), the Discrete Fourier Transform (DFT),
and an efficient algorithm to compute the DFT called the Fast Fourier Transform (FFT).
4.2 Theoretical Introduction
We have seen already how any signal can be decomposed as a weighted sum of delayed unit
sample impulses. We saw that representing the input signal using impulses as our basis
function was useful in the development of the convolution sum.
Most signals of practical interest can be decomposed into a sum of sinusoidal signal
components. These decompositions are extremely important in the analysis and design of
LTI systems because complex exponentials (sinusoids) are eigenfunctions of LTI systems. This
means that the response of an LTI system to a sinusoidal input signal is another sinusoid of
the same frequency but of different amplitude and phase.
Frequency analysis involves the resolution of the signal into its frequency components
(just as the resolution of light into its different colors). When we decompose a signal into its
frequency components we are doing frequency analysis. The opposite problem, reconstructing
the original signal from its frequency components, is known as frequency synthesis. The term
spectrum is used to refer to the frequency content of a signal. The process of obtaining
the spectrum of deterministic signals, signals for which we have a mathematical equation to
represent them, is called frequency or spectral analysis. On the other hand, the process of
determining the spectrum of the signals we encounter in practice, for which we do not have a
mathematical formula, is called spectral estimation.
4.2.1 Definitions
DTFT

If x(n) is absolutely summable, then its discrete-time Fourier transform (DTFT) is given by:

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

and the inverse transform is

x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

DFT

The discrete Fourier transform (DFT) evaluates the DTFT of a finite sequence at specific,
equally spaced points:
X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N},    k = 0, 1, 2, ..., N−1

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j2πkn/N},    n = 0, 1, 2, ..., N−1

The fast Fourier transform (FFT) is not really a new transform; it is just a very efficient
algorithm (actually there are several FFT algorithms) that is used to compute the DFT.
MATLAB provides a function called fft to compute the DFT of a vector x.
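The two formulas can be checked against each other with a direct O(N²) evaluation. A Python sketch using only the standard library (illustrative; real code would call MATLAB's fft or an equivalent library routine):

```python
import cmath

def dft(x):
    """X(k) = sum_n x(n) e^{-j 2 pi k n / N}, evaluated directly."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """x(n) = (1/N) sum_k X(k) e^{j 2 pi k n / N}."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

x = [1.0, 2.0, 4.0, 5.0]
xr = idft(dft(x))   # round trip recovers the original sequence
assert all(abs(a - b) < 1e-9 for a, b in zip(x, xr))
```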
4.2.2

The above definition of the DFT allows us to compute the DTFT of a signal x(n) at N equally
spaced values. Sometimes this is a very coarse sampling of the DTFT. If we are interested in
evaluating the DTFT at more than N frequencies, we can lengthen the original sequence x(n)
by appending zeros at the end. This procedure for increasing the computational frequency
resolution is called zero-padding. It is important to realize that even though we are computing
the DFT at more points, we are not increasing the physical frequency resolution, which depends
on the length of our window.
Any time we compute the DFT of an infinite sequence x(n) we must first truncate x(n)
into a finite sequence. This operation can be modelled as a multiplication in the time domain
by a rectangular window. Since multiplication in the time domain corresponds to convolution
in the frequency domain, the effect of this operation is a convolution of the original spectrum
of x(n) with a sinc function. The distortion introduced by the window causes
the original spectrum to have artificial sidelobes, which correspond to the sidelobes of the
sinc function. As we increase the length of our window we improve the frequency resolution,
and the artifacts caused by the rectangular window become less significant. Depending on the
application we may decide to use a window other than the rectangular one. In particular, there
are many other windows which reduce the sidelobe leakage at the expense of increasing the
mainlobe width. Some of the most important windows are the hanning, hamming, blackman,
blackman-harris, and triangular windows. MATLAB provides direct implementations
for all of these.
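Zero-padding's effect can be stated precisely: padding a length-N sequence to length LN makes every L-th DFT bin coincide with an original bin, while the bins in between merely sample the same DTFT more densely. A Python sketch of this property, using a direct DFT and illustrative values:

```python
import cmath

def dft(x):
    # Direct O(N^2) DFT: X(k) = sum_n x(n) e^{-j 2 pi k n / N}
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [3.0, 1.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0]   # length N = 8
xp = x + [0.0] * 24                               # zero-padded to M = 32 (L = 4)

X, Xp = dft(x), dft(xp)

# Every 4th bin of the padded DFT coincides with the original DFT; the extra
# bins only sample the same DTFT more densely. The physical resolution is
# still set by the original window length N = 8.
assert all(abs(X[k] - Xp[4 * k]) < 1e-9 for k in range(8))
```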
4.3 Computer Exploration
The most basic application of the DFT (or FFT) is to approximate the DTFT. Given a
signal x(n), we are interested in performing frequency analysis on it, that is, calculating the
spectral content (spectrum) of the signal. Next, we outline a simple procedure for a computer
exploration to investigate the FFT as a tool for spectral analysis, and then show an example
of how to perform this task.
4.3.1 Procedure
4.3.2 Example
As an example of calculating the DFT using the FFT, we generated the following figure. The
plot shows the time- and frequency-domain representations of a signal xa(t) = 3 sin(πt) +
2 sin(5πt) sampled at five times its maximum frequency (fs = 12.5 Hz). We can see how the
magnitude of the DFT has two peaks, at 0.5 Hz and 2.5 Hz, as we expected. In the next
plot we show the DFT of the same signal, but corrupted by Gaussian noise. Notice how we
can still identify the two frequency components even though the signal is distorted. The final
plot shows the FFT on a dB scale, where the effect of the finite rectangular window is visible.
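The peak locations can be checked outside MATLAB as well; the NumPy sketch below samples the same two-tone signal at fs = 5·f2 = 12.5 Hz, as in the MATLAB listing of Section 4.4, and locates the two largest FFT peaks. It is a sketch only: the 125-sample record length is our choice so that both tones fall on exact DFT bins.

```python
import numpy as np

f1, f2 = 0.5, 2.5             # tone frequencies, Hz
fs = 5 * f2                   # sampling frequency (12.5 Hz), as in the listing
N = 125                       # 10 seconds of samples
t = np.arange(N) / fs
x = 3 * np.sin(2 * np.pi * f1 * t) + 2 * np.sin(2 * np.pi * f2 * t)

X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(N, d=1 / fs)

# The two largest spectral peaks sit at the two tone frequencies
top2 = sorted(freqs[np.argsort(X)[-2:]])
print(top2)
```

The stronger 0.5 Hz component also shows the larger peak magnitude, consistent with the amplitudes 3 and 2 in xa(t).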
Figure 4.1: Example of calculating the DFT using the FFT. Left column: time-domain
signals; right column: FFT magnitudes. Top: xa(t) = 3 sin(πt) + 2 sin(5πt) sampled at
fs = 12.5 Hz, with peaks at 0.5 Hz and 2.5 Hz. Middle: the same signal corrupted by
Gaussian noise; the two frequency components are still identifiable. Bottom: the FFT
magnitude on a dB scale, showing the effect of the finite rectangular window.
4.4
Below we show the MATLAB code used to generate the figure:

f1 = 0.5;                                     % 1/2 Hz component
f2 = 2.5;                                     % 2.5 Hz component
fs = 5*f2;                                    % Sampling Frequency
T  = 1/fs;                                    % Sampling Period
n  = 0:10*fs;                                 % Plot 10 seconds
k  = n*T;                                     % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(3,2,1);
h = plot(k,xs); set(h,'Markersize',15);
ylabel('x(nT)'); box off;
axis tight;
%==================================================
% FFT of original sequence
%==================================================
subplot(3,2,2);
X = abs(fft(xs));                             % FFT Magnitude
%==================================================
% Signal with noise and FFT
%==================================================
xn = xs + 1.75*randn(1,length(xs));
subplot(3,2,3);
h = plot(k,xn); set(h,'Markersize',15);
ylabel('xn(nT)'); box off;
axis tight;
subplot(3,2,4);
X = abs(fft(xn));                             % FFT Magnitude
%==================================================
% FFT plotted in dB scale
%==================================================
xn = xs + 0*randn(1,length(xs));
subplot(3,2,5);
h = plot(k,xn); set(h,'Markersize',15);
ylabel('xn(nT)'); box off;
axis tight;
subplot(3,2,6);
X = 20*log10(abs(fft(xn)));                   % FFT Magnitude in dB
4.5
Analysis
CHAPTER
5
Sampling and Aliasing
5.1
Objective
In this laboratory we will revisit the topic of sampling and aliasing. We will discuss the effects
of sampling using Fourier transforms. Looking at the spectra of sampled signals will enable
us to get a better understanding of the concept of aliasing.
5.2
Theoretical Introduction
An ideal sampler instantaneously measures the analog signal x(t) at the sampling instants
t = nT . We can consider the output of the sampling process to be an analog signal which
consists of a linear superposition of impulses occurring at the sampling times. In this model,
each impulse is weighted by the corresponding sample value, that is
x_s(t) δ(t − nT) = x(nT) δ(t − nT)

x_s(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nT) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)
In practical sampling, each sample must be held constant for a short period of time for the A/D
converter to accurately convert the sample to digital form. This process can be mathematically
modelled substituting the impulses by rectangular pulses of time duration T ,
x_p(t) = Σ_{n=−∞}^{∞} x(nT) p(t − nT)
5.2.1
Spectrum of the Sampled Signal
The spectrum of the sampled signal can be obtained by finding the Fourier transform of x_s(t),

X_s(f) = ∫_{−∞}^{∞} x_s(t) e^{−j2πft} dt = Σ_{n=−∞}^{∞} x(nT) ∫_{−∞}^{∞} δ(t − nT) e^{−j2πft} dt

X_s(f) = Σ_{n=−∞}^{∞} x(nT) e^{−j2πfnT}

We see that the spectrum of the sampled signal is the DTFT of x(nT). Furthermore, we
see that X_s(f) is a periodic function of f with period f_s, that is, X_s(f + f_s) = X_s(f). It
can also be written as

X_s(f) = (1/T) Σ_{m=−∞}^{∞} X(f − m f_s)
This equation indicates that the spectrum of a sampled signal is equal to a scaled version of
the original analog spectrum periodically replicated at intervals of the sampling rate f_s. We
see that if x(t) is bandlimited to some maximum frequency f_max, the replicas are separated
from each other by a distance d = f_s − 2f_max. Therefore, if we are interested in the replicas
not overlapping each other, we require that d ≥ 0, that is f_s ≥ 2f_max, which is the
requirement to avoid aliasing introduced in the first laboratory. We see that aliasing occurs
when the replicated spectra overlap, d < 0, since this overlapping results in a distortion of the
original spectrum.
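The folding produced by the overlapping replicas can be captured in a few lines. The sketch below (Python; the helper name `alias` is ours, not a standard function) computes the apparent frequency of a real tone after sampling:

```python
def alias(f, fs):
    """Frequency in [0, fs/2] at which a real tone of f Hz appears
    after sampling at fs Hz (folding into the Nyquist interval)."""
    f = f % fs                 # the spectrum replicates every fs
    return min(f, fs - f)      # real spectra fold about fs/2

# fs = 6.25 Hz keeps a 2.5 Hz tone intact (fs > 2*fmax)...
print(alias(2.5, 6.25))   # 2.5
# ...but at fs = 2 Hz the 2.5 Hz tone masquerades as 0.5 Hz
print(alias(2.5, 2.0))    # 0.5
```

This matches the example later in this chapter, where sampling at fs3 = 2 Hz makes the 2.5 Hz component appear at 0.5 Hz.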
5.2.2
Reconstruction
If the signal is bandlimited and f_s is large enough that the replicas do not overlap, that is,
f_s ≥ 2f_max, then the portion of the spectrum X_s(f) that lies within the Nyquist interval
[−f_s/2, f_s/2] will be identical to the original analog spectrum X(f) up to a scale constant,

T X_s(f) = X(f),   −f_s/2 ≤ f ≤ f_s/2

5.3
Computer Exploration
As we saw in the first laboratory, in order to explore the concepts of sampling and aliasing
we can perform a MATLAB simulation where we create an analog signal (simulated), take
samples at different frequencies, and observe the effect of fs . In the first lab, we saw the effect
of sampling in the time domain. Now we explore this concept in the frequency domain by
using the DFT/FFT.
5.3.1
Procedure
5.3.2
Example
As an example, let's follow this procedure to simulate the effect of sampling the analog signal
we already saw in the first lab:

x_a(t) = 3 sin(πt) + 2 sin(5πt)
This signal contains two frequency components at f1 = 1/2 and f2 = 2.5 Hz. We explore the
effect of sampling and aliasing by sampling xa (t) with three different sampling frequencies:
fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, fs3 = 2. The first two sampling frequencies
meet the sampling theorem requirement. On the other hand, the third sampling frequency
is less than twice the maximum frequency and therefore we should expect to see aliasing.
Figure 5.1 shows the results of the MATLAB simulation in the time domain. Figure 5.2 shows
the frequency domain representation of the signals. We can see how in the first two cases,
when the sampling theorem requirements were met, the spectrum of the sampled signal that
lies within the Nyquist interval represents the original frequency content. On the other hand,
we see that when we sampled the signal with a rate less than twice the maximum frequency,
the spectrum is different.
Figure 5.1: The first plot shows xa (t) and the next three plots show the effect of sampling at
fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, fs3 = 2. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency. We can see how in this case the signal reconstructed is
an aliased version of the original at a lower frequency.
Figure 5.2: The three plots show the effect of sampling at fs1 = 10fmax = 25, fs2 = 2.5fmax =
6.25, fs3 = 2 in the frequency domain. The first two sampling frequencies meet the sampling
theorem requirement. On the other hand, the third sampling frequency is less than twice the
maximum frequency. The aliasing effect is evident in the third plot.
5.4
f1 = 1/2;                     % f1 = 1/2 Hz
f2 = 2.5;                     % f2 = 2.5 Hz
%==================================================
% Densely sampled version of the "analog" signal
%==================================================
T = 1/1000; N = 1000;
n = 0:10*N;
t = n*T;
x = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*f2.*t);
figure(1); subplot(4,1,1);
h = plot(t,x);
%==================================================
% Sampling at fs = 10*fmax = 10*f2
%==================================================
fs = 10*f2;                   % Sampling Frequency
T  = 1/fs;                    % Sampling Period
n  = 0:10*fs;                 % Plot 10 seconds
k  = n*T;                     % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
figure(1); subplot(4,1,2);
h = plot(t,x,':',k,xs,'r.');
set(h,'Markersize',15); ylabel('x(nT)');
figure(2); subplot(3,1,1);
X = abs(fft(xs));             % FFT Magnitude
%==================================================
% Sampling at fs = 2.5*fmax = 2.5*f2
%==================================================
fs = 2.5*f2;                  % Sampling Frequency
T  = 1/fs;
n  = 0:10*fs;
k  = n*T;
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
figure(1); subplot(4,1,3);
h = plot(t,x,':',k,xs,'r.');
set(h,'Markersize',15); ylabel('x(nT)');
figure(2); subplot(3,1,2);
X = abs(fft(xs));             % FFT Magnitude
%==================================================
% Sampling at fs = 2 (below the Nyquist rate)
%==================================================
fs = 2;                       % Sampling Frequency
T  = 1/fs;
n  = 0:10*fs;
k  = n*T;
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
xa = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*0.5*t);   % aliased version of x(t)
figure(1); subplot(4,1,4);
h = plot(t,x,':',t,xa,'k',k,xs,'r.');
set(h,'Markersize',15);
xlabel('Time, s'); ylabel('x(nT)');
figure(2); subplot(3,1,3);
X = abs(fft(xs));             % FFT Magnitude
5.5
Analysis
CHAPTER
6
The Z-Transform
6.1
Objective
In this laboratory we will introduce the Z-transform. This transform is a tool for the analysis,
design, and implementation of digital filters. We will also introduce the concept of system
transfer function, which is defined in terms of the z-transform of the impulse response.
6.2
Theoretical Introduction
Transform techniques such as the DTFT or the Z-transform are important tools in the analysis
and design of linear time-invariant (LTI) systems. The DTFT allows us to represent a
discrete-time signal in terms of complex exponentials. We also saw that this transform enables
us to study LTI systems in the frequency domain by taking the DTFT of the system's impulse
response sequence, h(n), obtaining the frequency response function H(e^{jω}). The frequency
response function enables us to study the sinusoidal steady-state response, and the response
to any arbitrary signal x(n) for which the DTFT is defined, by evaluating the inverse DTFT
of X(e^{jω})H(e^{jω}).
In this chapter we introduce a new transform, the Z-transform. The Z-transform can be
considered an extension of the DTFT for two reasons: 1) it provides another domain in which
a larger class of sequences and systems can be analyzed, and 2) it can be used to analyze
transient system responses (not only steady-state) and systems with initial conditions.
6.2.1
Definition
X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} = Σ_{n=−∞}^{∞} x(n) r^{−n} e^{−jωn},   z = r e^{jω}

We can easily see that the Z-transform reduces to the DTFT for r = 1; that is, evaluating
the Z-transform on the unit circle z = e^{jω} provides information about the frequency spectrum:

X(z)|_{z=e^{jω}} = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
6.2.2
Causality and Stability
The concepts of causality and stability can be redefined in terms of the Z-transform.
Causal signals (right-sided) are characterized by ROCs that are outside the maximum pole
circle. Anticausal signals have ROCs that are inside the minimum pole circle. Mixed signals
have ROCs that are an annular region between two circles, with the poles that lie inside the
inner circle contributing causally and the poles that lie outside the outer circle contributing
anticausally.
Stability can be also characterized in the z-domain in terms of the ROC. A necessary and
sufficient condition for the stability of a signal x(n) is that the ROC of the corresponding
Z-transform contains the unit circle.
For a system to be simultaneously stable and causal, it is necessary that all its poles lie
inside the unit circle.
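As a quick numerical illustration of the pole criterion, the sketch below (Python/NumPy rather than MATLAB) checks the poles of the resonator transfer function used later in this chapter:

```python
import numpy as np

# Denominator of the resonator used in Section 6.3.2:
# H(z) = 0.006143 / (1 - 1.87834 z^-1 + 0.975156 z^-2)
den = [1.0, -1.87834, 0.975156]
poles = np.roots(den)

# A causal system is stable iff every pole lies inside the unit circle
stable = bool(np.all(np.abs(poles) < 1))
print(np.abs(poles), stable)
```

The pole magnitudes come out just below 1, so the system is stable as a causal system, but only barely, which is what gives the resonator its sharp peak.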
6.2.3
Finding Z-transforms
Finding the Z-transform of a sequence involves applying the definition and finding the ROC.
For example, given the finite sequence

x(n) = {1, 2, 5, 8, 0, 1}

with the time origin at the sample of value 5, the Z-transform is

X(z) = z^2 + 2z + 5 + 8z^{−1} + z^{−3}

with the ROC being the entire z-plane except z = 0 and z = ∞.
Useful Infinite Geometric Series
Two infinite series are very useful in finding Z-transforms:
Σ_{n=0}^{∞} x^n = 1 + x + x^2 + x^3 + ... = 1/(1 − x),   |x| < 1

Σ_{n=1}^{∞} x^n = x + x^2 + x^3 + ... = x/(1 − x),   |x| < 1
Examples
As an example of how to use the previous equations, let's find the Z-transform of

x(n) = (0.5)^n u(n) = {1, 0.5, 0.5^2, 0.5^3, ...}

Its Z-transform is:

X(z) = Σ_{n=−∞}^{∞} (0.5)^n u(n) z^{−n} = Σ_{n=0}^{∞} (0.5)^n z^{−n} = Σ_{n=0}^{∞} (0.5 z^{−1})^n = 1/(1 − 0.5z^{−1}),   |z| > 0.5

A time signal x(n) is uniquely determined by its Z-transform together with its ROC.
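A numerical sanity check of this result, sketched in Python: inside the ROC, the partial sums of the geometric series should converge to the closed form. The test point z = 2 is an arbitrary choice with |z| > 0.5.

```python
import numpy as np

z = 2.0                        # any point with |z| > 0.5 lies in the ROC
n = np.arange(200)
partial = np.sum((0.5 / z) ** n)     # truncated sum of (0.5 z^-1)^n
closed = 1 / (1 - 0.5 / z)           # closed-form X(z) = 1/(1 - 0.5 z^-1)
print(partial, closed)
```

Outside the ROC (|z| < 0.5) the terms grow and the sum diverges, which is why the ROC must always accompany the closed-form expression.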
6.2.4
Inverse Z-transform
The definition of the inverse Z-transform requires evaluating a complex contour
integral. In general, we do not use the definition to find the inverse Z-transform; instead, we
use partial fraction expansions. The central idea is that when X(z) is a rational function of
z^{−1}, we can express it as a sum of simple first-order factors via the partial fraction expansion,
which can then be inverted by inspection.
6.3
Computer Exploration
MATLAB provides several functions that are very useful for working in the z-domain. In this
laboratory we will focus on the functions zplane, residuez, and freqz.
6.3.1
Procedure
1. Given a Z-transform, plot the poles and the zeros in the z-plane.
2. Given a transfer function for a system, plot the magnitude and phase responses.
3. Given a transfer function, perform a partial fraction expansion.
6.3.2
Example
Given

H(z) = 0.006143 / (1 − 1.87834z^{−1} + 0.975156z^{−2})

let's plot the poles and zeros in the z-plane, plot the magnitude and phase response, and
perform a partial fraction expansion.
Figures 6.1 and 6.2 show the z-plane and frequency response of H(z). From the frequency
response we can see that the system is a digital resonator filter which has a peak at ω = 0.1π
rad/sample. From the z-plane we can see that the system has complex poles close to the unit
circle. Using the residuez function we can find the partial fraction expansion and the exact
location of the zeros and poles.
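SciPy offers a direct counterpart to MATLAB's residuez; the sketch below expands the resonator and verifies that the first-order terms reproduce H(z) at an arbitrary test point z0 (our choice, away from the poles):

```python
import numpy as np
from scipy.signal import residuez

b = [0.006143]
a = [1.0, -1.87834, 0.975156]
r, p, k = residuez(b, a)        # residues, poles, direct terms

# Verify the expansion at an arbitrary point:
# H(z) = sum_i r_i / (1 - p_i z^-1)   (k is empty here since len(b) < len(a))
z0 = 1.5 + 0.5j
H_direct = b[0] / (1 - 1.87834 / z0 + 0.975156 / z0 ** 2)
H_pf = np.sum(r / (1 - p / z0))
print(abs(H_direct - H_pf))
```

Each first-order term r_i/(1 − p_i z^{−1}) inverts by inspection to r_i p_i^n u(n), which is exactly how the expansion is used to recover h(n).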
Figure 6.1: Pole-zero plot of H(z) = 0.006143 / (1 − 1.87834z^{−1} + 0.975156z^{−2}).
Figure 6.2: Magnitude and phase response of H(z) = 0.006143 / (1 − 1.87834z^{−1} +
0.975156z^{−2}). We can see that the system is a digital resonator filter which has a peak at
ω = 0.1π rad/sample.
6.4
num = [0.006143]; den = [1 -1.87834 0.975156];
%===================================================
% Plot the poles and zeros
%===================================================
figure; zplane(num,den);
%===================================================
% Plot the magnitude and phase response
%===================================================
figure; freqz(num,den); box off;
%===================================================
% Partial Fraction Expansion
%===================================================
[r,p,k] = residuez(num,den)
6.5
Analysis
1. Given X(z) = 1 / (1 − 0.9z^{−1}), |z| > 0.9, find the inverse Z-transform x(n).
2. Given H(z) = (6 + z^{−1}) / (1 − 0.25z^{−2}), find the poles and zeros, plot them in the
z-plane, and plot the frequency response. What type of filter is this: lowpass, highpass,
or bandpass?
3. Given a moving average filter with impulse response h(n) = u(n) − u(n − M), find the
poles and zeros, plot them in the z-plane, and plot the frequency response for 3 different
values of M. What type of filter is this: lowpass, highpass, or bandpass? What is the
cut-off frequency in each case? What is the effect of increasing M?
4. Given H(z) = (1 − 1.25z^{−1}) / ((1 + 4z^{−2})(1 − 0.81z^{−2})), find the poles and
zeros, plot them in the z-plane, and plot the frequency response. Notice that H(z) has
poles and zeros outside the unit circle; find another system G(z) such that |G(e^{jω})| =
|H(e^{jω})| by reflecting all the poles and zeros inside the unit circle. Plot the poles and
zeros of the new system G(z) in the z-plane and verify that both systems have the same
magnitude response. A system with all its poles and zeros inside the unit circle is said
to be a minimum-phase system. Given any system H(z), we can find a minimum-phase
system with the same magnitude response; why is this important?
CHAPTER
7
Digital Filtering
7.1
Objective
In this laboratory we will investigate the concept of digital filtering. Emphasis will be placed
on the specification of these filters and on computer-aided design (CAD). Specifically, we will
learn how to correctly specify the desired filter characteristics and how to use MATLAB to
obtain the filter coefficients.
7.2
Theoretical Introduction
Filters are frequency-selective systems; that is, the magnitude and phase response of these
systems is a function of the input frequency. However, in the area of DSP we often use the
terms filter and system interchangeably.
In previous labs we introduced the concept of a moving average filter in the time domain.
Specifically, we saw that a system with an impulse response of the form h(n) = (1/M)(u(n) −
u(n − M)) could be used to remove high frequency components (smooth), and that M controls
how much smoothing is done. We also saw that the output of any system is given by the
convolution of the input with the impulse response. In this particular case, the output of the
system, that is, the result of the filtering operation, is given by

y(n) = (1/M) x(n) + (1/M) x(n − 1) + (1/M) x(n − 2) + ... + (1/M) x(n − M + 1)

y(n) = Σ_{k=0}^{M−1} (1/M) x(n − k)
The impulse response of the filter doesn't have to be a constant; a more general
expression would be

y(n) = Σ_{k=0}^{M−1} b_k x(n − k)

where b_0, b_1, ..., b_{M−1} is the set of filter coefficients. Notice that b_k = h(k), that is,

Σ_{k=0}^{M−1} b_k x(n − k) = Σ_{k=0}^{M−1} h(k) x(n − k)

We see that in this case the filter coefficients are equal to the filter's impulse response. We
can also recognize the output as being the convolution of the input with the impulse response.
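The identity between FIR filtering and convolution is easy to see numerically; below is a small NumPy sketch with an M = 5 moving average (the step input is an arbitrary illustrative choice):

```python
import numpy as np

M = 5
b = np.ones(M) / M            # moving-average coefficients b_k = h(k) = 1/M

x = np.array([0., 0., 10., 10., 10., 10., 10.])
# y(n) = sum_{k=0}^{M-1} b_k x(n-k): convolution of input and impulse response
y = np.convolve(x, b)[:len(x)]
print(y)
```

The abrupt step in the input is turned into a gradual ramp, which is exactly the smoothing (high-frequency removal) described above.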
7.2.1
Design Objective
When designing digital filters, the objective is to choose the filter coefficients b_k so that the
system removes the undesired frequencies while keeping the desired ones. This problem
is very difficult to solve in the time domain. In the design of frequency-selective filters, the
desired filter characteristics are specified in the frequency domain in terms of the desired
magnitude and phase response of the filter. Once the filter is fully specified, the objective is
to determine the coefficients that provide the desired frequency response specification.
7.2.2
Filter Classification
Filters can be classified into finite-impulse response (FIR) and infinite-impulse response (IIR)
filters.
FIR Filters
An FIR filter has an impulse response h(n) that extends only over a finite time interval and
is zero beyond that interval, that is,

h(n) = [h_0, h_1, ..., h_M, 0, 0, 0, ...]

where M is referred to as the filter order. The impulse response coefficients h_0, h_1, ..., h_M
are referred to by various names, such as filter coefficients, filter weights, filter taps, etc. As we
already saw before, the output of an FIR filter can be written as

y(n) = Σ_{k=0}^{M} h(k) x(n − k)

We can see that the output of an FIR filter is a weighted average of the present input sample
x(n) and the past M samples x(n − 1), x(n − 2), ..., x(n − M). FIR filters are referred to as
non-recursive filters because the output depends only on the inputs and not on previous
outputs.
IIR Filters
An IIR filter has an impulse response h(n) of infinite duration. The expression for the output
of a causal IIR filter is

y(n) = Σ_{k=0}^{∞} h(k) x(n − k)

In general, IIR filters are recursive; that is, the output of the system depends not only on the
inputs but also on previous outputs, for example

y(n) = y(n − 1) + x(n)
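The recursion above can be written out directly or handed to a standard filtering routine; the Python/SciPy sketch below shows both for y(n) = y(n − 1) + x(n), which simply accumulates the input:

```python
import numpy as np
from scipy.signal import lfilter

# y(n) = y(n-1) + x(n): numerator b = [1], denominator a = [1, -1]
x = np.array([1., 2., 3., 4.])
y = lfilter([1.0], [1.0, -1.0], x)

# The same system written as an explicit recursion on previous outputs
y_rec = np.zeros_like(x)
prev = 0.0
for i, xi in enumerate(x):
    y_rec[i] = prev + xi
    prev = y_rec[i]
print(y, y_rec)
```

Although the difference equation has only two terms, the impulse response is infinite (h(n) = u(n)), which is why a recursive structure can realize an IIR response with finitely many coefficients.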
7.3
Computer Exploration
MATLAB provides several functions that are very useful for designing and simulating digital
filters. In this laboratory we will focus on the functions filter, filtfilt, ellipord, ellip, cheby1,
cheby2, butter, etc.
7.3.1
Example
Let's write two MATLAB functions implementing a lowpass and a highpass filter that allow us
to choose the cutoff frequency and the type of filter we want to implement. The inputs to these
functions are the samples of the signal, the sampling frequency, and the desired type of filter
(causal, noncausal, elliptic, etc.). The outputs are the filtered signal and the filter coefficients.
The following figure shows an example of using the lowpass filter to filter out the high
frequencies of a biomedical signal corrupted by quantization noise. The sampling frequency
of the input signal is 125 Hz and we would like to filter out frequencies beyond 15 Hz. The
next example shows the effect of filtering the same signal with a highpass filter to remove the
low-frequency components and eliminate the signal trend (remove frequencies below 0.1 Hz).
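The lowpass example can be sketched with SciPy's filtfilt, which mirrors the MATLAB function of the same name; the Butterworth design and the synthetic test tones below are our own illustrative choices, not the lab's LowPass function:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 125.0                         # sampling frequency from the example, Hz
fc = 15.0                          # desired cutoff, Hz
b, a = butter(4, fc / (fs / 2))    # 4th-order Butterworth lowpass

t = np.arange(0, 4, 1 / fs)
clean = np.sin(2 * np.pi * 2 * t)              # in-band component
x = clean + 0.5 * np.sin(2 * np.pi * 40 * t)   # plus out-of-band "noise"

# filtfilt runs the filter forward and then backward: zero phase, noncausal
y = filtfilt(b, a, x)
```

The forward-backward pass doubles the attenuation and cancels the phase delay, which is why the filtered output lines up in time with the raw signal in the figures below.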
Figure 7.1: Magnitude (dB) and phase (degrees) response of the lowpass filter as a function
of frequency (Hz).
Figure 7.2: Result of the lowpass filtering operation. Notice that the lowpass filter smooths
the effects of quantization noise. Notice also that this filter was implemented with filtfilt (no
delay of the output with respect to the input) and therefore it is a noncausal filter.
Figure 7.3: Result of the highpass filtering operation. Notice that the highpass filter eliminated
the signal trend.
7.4
Below we show the MATLAB code used to implement the lowpass and highpass functions:

function [y,n] = LowPass(x,fsa,fca,fta,cfa,pfa);
%LowPass: Lowpass filter
%
%   [y,n] = LowPass(x,fs,fc,ft,cf,pf)
%
%   x    Input signal
%   fs   Sampling frequency (Hz), default 125
%   fc   Cutoff frequency (Hz), default fs/4
%   ft   Filter type, default 1 (elliptic)
%   cf   Causal flag: 1 = causal, 2 = noncausal (default)
%   pf   Plot flag: 0 = none (default), 1 = screen
%   y    Filtered signal
%   n    Filter order
%
%   The noncausal implementation filters the data in the forward
%   direction; the filtered sequence is then reversed and run back
%   through the filter.
%
%   Example: lowpass filter the ICP signal with an elliptic filter with
%   cutoff frequency fs/4 Hz. This will filter out the high frequency
%   components.
%
%      load ICP;
%      [y,n] = LowPass(icp,fs,fs/4,1,2,1);
%
%   Version 1.00 MA

%=====================================================================
% Process function arguments
%=====================================================================
if nargin<1 | nargin>6,
    help LowPass;
    return;
    end;

fs = 125;                               % Default sampling frequency, Hz
if exist('fsa') & ~isempty(fsa), fs = fsa; end;
fc = fs/4;                              % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca), fc = fca; end;
ft = 1;                                 % Default filter type
if exist('fta') & ~isempty(fta), ft = fta; end;
cf = 2;                                 % Default - noncausal
if exist('cfa') & ~isempty(cfa), cf = cfa; end;
pf = 0;                                 % Default - no plotting
if nargout==0,
    pf = 1;
    end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
    end;

%=========================================================================
% Process Inputs
%=========================================================================
x  = x(:);
NX = length(x);
k  = 1:NX;

%=========================================================================
% LowPass Filtering
%=========================================================================
wlp = fc/(fs/2);                        % Normalized cutoff frequency
Rs  = 40;                               % Stopband attenuation, dB
Rp  = 0.5;                              % Passband ripple, dB
Ws  = wlp*1.2;                          % Stopband edge
Wp  = wlp*0.8;                          % Passband edge
if ft==1,                               % Elliptic filter
    [n,Wn] = ellipord(Wp,Ws,Rp,Rs);
    [B,A]  = ellip(n,Rp,Rs,Wn);
    if cf==1,                           % causal
        y = filter(B,A,x);
    else                                % non-causal
        y = filtfilt(B,A,x);
        end;
elseif ft==2,                           % Butterworth filter
    [n,Wn] = buttord(Wp,Ws,Rp,Rs);
    if cf==1,                           % causal
        [B,A] = butter(n,Wn);
        y     = filter(B,A,x);
    else                                % non-causal
        [B,A] = butter(n,Wn);
        y     = filtfilt(B,A,x);
        end;
else                                    % FIR (Blackman window) filter
    n = min([500 floor(NX/4)]);
    if rem(n,2),                        % Make n even so B has odd length
        n = n + 1;
        end;
    B = fir1(n,wlp,blackman(n+1),'noscale');
    B = B/sum(B);
    A = 1;
    y = conv(B,x);
    y = y(n/2 + (1:NX));
    ci = [1:(n/2) (NX+1-n/2):NX];       % Fix up the edge samples
    for c1 = 1:length(ci),
        in    = ci(c1);
        xi    = max(1,in-n/2):min(NX,in+n/2);
        bi    = (n/2+1) + (max(1-in,-n/2):min(NX-in,n/2));
        y(in) = sum(x(xi).*B(bi))/sum(B(bi));
        end;
    end;

%=========================================================================
% Plotting
%=========================================================================
if pf==1,
    close all;
    figureset(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    title('Raw Signal and Lowpass Filtered Signal');
    xlabel('Time,s');
    ylabel('Amplitude');
    legend('Raw Signal', 'Lowpass Filtered');
    box off; axisset;
    end;

%=========================================================================
% Take care of outputs
%=========================================================================
if nargout==0,
    clear y n;
    end;
function [y,n] = HighPass(x,fsa,fca,cfa,pfa);
%HighPass: Highpass filter
%
%   [y,n] = HighPass(x,fs,fc,cf,pf)
%
%   x    Input signal
%   fs   Sampling frequency (Hz), default 125
%   fc   Cutoff frequency (Hz), default fs/4
%   cf   Causal flag: 1 = causal, 2 = noncausal (default)
%   pf   Plot flag: 0 = none (default), 1 = screen
%   y    Filtered signal
%   n    Filter order
%
%   The causal implementation uses only the present and previous values
%   of the input. The noncausal implementation filters the data in the
%   forward direction, and the filtered sequence is then reversed and
%   run back through the filter.
%
%   Example: highpass filter the ICP signal with cutoff frequency
%   0.5 Hz. This will filter out the low frequency components.
%
%      load ICP;
%      [y,n] = HighPass(icp,fs,0.5,2,1);
%
%   Version 1.00 MA

%=======================================================================
% Process function arguments
%=======================================================================
if nargin<1 | nargin>5,
    help HighPass;
    return;
    end;

fs = 125;                               % Default sampling frequency, Hz
if exist('fsa') & ~isempty(fsa), fs = fsa; end;
fc = fs/4;                              % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca), fc = fca; end;
cf = 2;                                 % Default flag - noncausal
if exist('cfa') & ~isempty(cfa), cf = cfa; end;
pf = 0;                                 % Default - no plotting
if nargout==0,
    pf = 1;
    end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
    end;

%=======================================================================
% Process Inputs
%=======================================================================
x  = x(:);
NX = length(x);
k  = 1:NX;

%=======================================================================
% HighPass Filtering
%=======================================================================
whp = fc/(fs/2);                        % Normalized cutoff frequency
Rs  = 40;                               % Stopband attenuation, dB
Rp  = 0.5;                              % Passband ripple, dB
Wp  = whp*1.2;                          % Passband edge
Ws  = whp*0.8;                          % Stopband edge
[n,Wn] = ellipord(Wp,Ws,Rp,Rs);
[B,A]  = ellip(n,Rp,Rs,Wn,'high');      % Elliptic highpass design
if cf==1,                               % causal
    y = filter(B,A,x);
else                                    % non-causal
    y = filtfilt(B,A,x);
    end;

%=======================================================================
% Plotting
%=======================================================================
if pf==1,
    figure(1);
    figureset(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    title('Raw Signal and Highpass Filtered Signal');
    xlabel('Time,s');
    ylabel('Amplitude');
    legend('Raw Signal', 'Highpass Filtered');
    box off; axisset;
    figure(2);
    figureset(2);
    freqz(B,A, 512, fs);
    end;

%=======================================================================
% Take care of outputs
%=======================================================================
if nargout==0, clear y n; end;
7.5
Analysis
Download the signals ECGNoisy60Hz, ECGQuantization, ECGBaselineDrift, and ECGNoiseCombined from the class website. Use MATLAB to perform the following computer
exploration.
1. Load and plot the signal ECGNoisy60Hz in MATLAB. Use the FFT to evaluate the
frequency spectrum. Notice the noise present at 60 Hz. Use MATLAB to design a filter
to eliminate this problem. Plot and compare the initial signal and the filtered signal.
2. Repeat the procedure with ECGQuantization. In this case the signal is severely affected
by quantization noise. Use MATLAB to design a filter to eliminate this noise and show
the results.
3. Repeat the procedure with ECGBaselineDrift. In this case the signal is severely affected
by baseline drift due to patient movement. Use MATLAB to design a filter to eliminate
this noise and show the results.
4. Repeat the procedure with ECGNoiseCombined. The signal contains all the above
types of noise combined. Use MATLAB to design a system to eliminate this noise and
show the results.
76
CHAPTER
8
Digital Filtering
77
8.1
Objective
In this laboratory we will investigate the concept of digital filtering. Emphasis will be placed
in the specification of these filters and in computer-aided (CAD) design. Specifically, we will
learn how to correctly specify the desired filter characteristics, and how to use MATLAB to
obtain the filter coefficients.
8.2
Theoretical Introduction
Filters are frequency selective systems, that is, the magnitude and phase response of these
systems id a function of the input frequency. However, in the area of DSP we often use the
terms of filter and system interchangeably.
In previous labs we introduced the concept of a moving average filter in the time domain.
Specifically, we saw that a system with an impulse response of the form h(n) =
1
(u(n)
u(n )), could be used to remove high frequency components (smooth), and that controls
how much smoothing is done. We also saw that the output of any system is given by the
convolution of the input with the impulse response. In this particular case, the output of the
system, that is, the result of the filtering operation, is given by
y(n) =
1
1
1
1
x(n) + x(n 1) + x(n 2) + . . . + x(n )
y(n) =
k=
X
k=0
1
x(n k)
The impulse response of the filter doesnt have to be a constant, that is, a more general
78
expression would be
M
1
X
bk x(n k)
k=0
where b0 , b1 , . . . , bM 1 is the set of filter coefficients coefficients. Notice that bk = h(k), that is
M
1
X
k=0
bk x(n k) =
M
1
X
h(k)x(n k)
k=0
We see that in this case the filter coefficients are equal to the filters impulse response. We
can also recognize the output as being the convolution of the input with the impulse response.
8.2.1
Design Objective
When designing digital filters, the objective is how to choose the filter coefficients bk so that the
system removes the undesired frequencies while keeping the desired frequencies. This problem
is very difficult to solve in the time domain. In the design of frequencyselective filters, the
desired filter characteristics are specified in the frequency domain in terms of the desired
magnitude and phase response of the filter. Once the filter is fully specified, the objective is
to determine the coefficients that provide the desired frequency response specification.
8.2.2
Filters can be classified into finiteimpulse response (FIR) or infiniteimpulse response (IIR)
filters.
79
FIR Filters
An FIR filter has an impulse response h(n) that extends only over a finite time interval and
it is zero beyond that interval, that is
h(n) = [h0 , h1 , . . . , hM , 0, 0, 0, . . .]
where M is referred as the filter order. The impulse response coefficients h0 , h1 , . . . , hM , 0, 0, 0, . . .
are referred by various names, such as filter coefficients, filter weights, filter taps, etc. As we
already saw before, the output of an FIR filter can be written as
M
X
h(k)x(n k)
k=0
We can see that the output of an FIR filter is a weighted average of the present input sample
x(n) and the past M samples x(n 1), x(n 2), . . . , x(n M ). FIR filters are referred as
non-recursive filters because the output only depends on the inputs and not on the previous
outputs.
IIR Filters
An IIR filter has an impulse response h(n) of infinite duration. The expression for the output
of a causal IIR filter is
y(n) =
h(k)x(n k)
k=0
In general, IIR filters are recursive, that is, the output of the system depends not only on the
inputs but also on the previous outputs, for example
y(n) = y(n 1) + x(n)
80
8.3
Computer Exploration
MATLAB provides several functions very useful to design and simulate digital filters. In this
laboratory we will focus on the functions filter, filtfilt, ellipord, ellip, cheby1, cheby2, butter,
etc .
8.3.1
Example
Lets write two MATLAB functions to implement a lowpass and a highpass filter that allow us
to choose the cutoff frequency and types of filter we want to implement. The inputs to these
functions are the samples of the signal, the sampling frequency, and the desired type of filter
(causal, noncausal, elliptic, etc). The outputs are the filtered signal and the filter coefficients.
The following figure shows an example of using the lowpass filter to filter out the high
frequencies of a biomedical signal corrupted by quantization noise. The sampling frequency
of the input signal is 125 Hz and we would like to filter out frequencies beyond 15 Hz. The
next example shows the effect of filtering the same signal with a highpass filter to remove the
lowfrequency components and eliminate the signal trend (remove frequencies below 0.1 Hz).
81
Magnitude (dB)
50
0
50
100
10
20
30
Frequency (Hz)
40
50
60
10
20
30
Frequency (Hz)
40
50
60
Phase (degrees)
0
100
200
300
400
82
Raw Signal
Lowpass Filtered
15
Amplitude
14.5
14
13.5
13
12.5
422
422.2
422.4
422.6
422.8
Time,s
423
423.2
Figure 8.2: Result of the lowpass filtering operation. Notice that the lowpass filter smooths
the effects of quantization noise. Notice also that this filter was implemented with filtfilt (no
delay of the output with respect to the input) and therefore it is a noncausal filter.
83
25
Raw Signal
Highpass Filtered
20
Amplitude
15
10
5
0
5
10
200
400
600
Time,s
800
1000
1200
Figure 8.3: Result of the highpass filtering operation. Notice that the highpass filter eliminated
the signal trend.
84
8.4
Below we show the MATLAB code used to implement the lowpass and highpass funtions:
function [y,n] = LowPass(x,fsa,fca,fta,cfa,pfa);
%LowPass: Lowpass filter
%
%   [y,n] = LowPass(x,fs,fc,ft,cf,pf)
%
%   x    Input signal
%   fs   Sampling frequency (Hz). Default = 125
%   fc   Cutoff frequency (Hz). Default = fs/4
%   ft   Filter type (1 = elliptic, 2 = Butterworth, 3 = FIR). Default = 1
%   cf   Causality flag (1 = causal, 2 = noncausal). Default = 2
%   pf   Plot flag (0 = none, 1 = screen). Default = 0
%
%   y    Filtered signal
%   n    Filter order
%
%   In the noncausal case the data is filtered in the forward direction;
%   the filtered sequence is then reversed and run back through the filter.
%
%   Example: Lowpass filter the ICP signal with a noncausal filter with
%   cutoff frequency fs/4 Hz. This will filter out the high frequency
%   components:
%
%      load ICP;
%      [y,n] = LowPass(icp,fs,fs/4,1,2,1);
%
%   Version 1.00 MA
%
%=====================================================================
% Process function arguments
%=====================================================================
if nargin<1 | nargin>6,
    help LowPass;
    return;
end;

fs = 125;                   % Default sampling rate, Hz
fc = fs/4;                  % Default cutoff frequency, Hz
ft = 1;                     % Default filter type - elliptic
cf = 2;                     % Default causality flag - noncausal
pf = 0;                     % Default - no plotting
if nargout==0,
    pf = 1;
end;
% Override the defaults with any arguments that were specified (only the
% pf block survived in the printed listing; the others follow its pattern)
if exist('fsa') & ~isempty(fsa), fs = fsa; end;
if exist('fca') & ~isempty(fca), fc = fca; end;
if exist('fta') & ~isempty(fta), ft = fta; end;
if exist('cfa') & ~isempty(cfa), cf = cfa; end;
if exist('pfa') & ~isempty(pfa), pf = pfa; end;

%=========================================================================
% Process Inputs
%=========================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;
NX = LD;                    % Signal length

%=========================================================================
% LowPass Filtering
%=========================================================================
nf  = fs/2;                 % Nyquist frequency, Hz
wlp = fc/nf;                % Normalized cutoff frequency
if ft==1,                   % Elliptic filter
    Rs = 40;                % Stopband attenuation, dB
    Ws = wlp*1.2;           % Stopband edge
    Wp = wlp*0.8;           % Passband edge
    Rp = 0.5;               % Passband ripple, dB
    [N,Wn] = ellipord(Wp,Ws,Rp,Rs);   % (design calls reconstructed)
    [B,A]  = ellip(N,Rp,Rs,Wn);
    if cf==1,               % causal
        y = filter(B,A,x);
    else                    % non-causal
        y = filtfilt(B,A,x);
    end;
elseif ft==2,               % Butterworth filter
    [N,Wn] = buttord(wlp*0.8,wlp*1.2,0.5,40);   % (reconstructed)
    if cf==1,               % causal
        [B,A] = butter(N,Wn);
        y = filter(B,A,x);
    else                    % non-causal
        [B,A] = butter(N,Wn);
        y = filtfilt(B,A,x);
    end;
else                        % FIR filter (zero phase, noncausal)
    N = min([500 floor(NX/4)]);
    if rem(N,2),            % Make N even
        N = N + 1;
    end;
    B = fir1(N,wlp,blackman(N+1),'noscale');  % length(B) should be odd
    B = B/sum(B);           % Unity gain at DC
    A = 1;                  % FIR filter: denominator is 1
    y = conv(B,x);
    y = y(N/2 + (1:NX));    % Remove the group delay
    ci = [1:(N/2) (NX+1-N/2):NX];   % Indices of the edge samples
    for c1 = 1:length(ci),
        in    = ci(c1);
        xi    = max(1,in-N/2):min(NX,in+N/2);
        bi    = (N/2+1) + (max(1-in,-N/2):min(NX-in,N/2));
        y(in) = sum(x(xi).*B(bi))/sum(B(bi));  % Renormalize at the edges
    end;
end;
n = N;                      % Filter order (returned)

%=========================================================================
% Plotting
%=========================================================================
if pf==1,
    close all;
    figureset(1);
    h = plot(k./fs,x,'b',k./fs,y,'r');
    title('Raw Signal and Lowpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Lowpass Filtered');
    box off;
    axisset;
end;

%=========================================================================
% Take care of outputs
%=========================================================================
if nargout==0,
    clear('y','n');
end;
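The FIR branch of the listing above (zero-phase convolution, group-delay removal, and renormalization of the partially-overlapped window at the edges) can be mirrored in Python/NumPy as a cross-check. All parameter values here are illustrative assumptions, not the lab's:

```python
import numpy as np
from scipy import signal

def fir_lowpass(x, fs, fc, order=100):
    """Zero-phase FIR lowpass, mirroring the MATLAB listing's FIR branch."""
    nx = len(x)
    n = order + (order % 2)                    # make the order even
    b = signal.firwin(n + 1, fc, window='blackman', fs=fs)
    b = b / b.sum()                            # unity gain at DC
    y = np.convolve(b, x)
    y = y[n//2 : n//2 + nx]                    # remove the group delay
    # near the edges only part of b overlaps x, so renormalize there
    h = n // 2
    for i in list(range(h)) + list(range(nx - h, nx)):
        lo, hi = max(0, i - h), min(nx, i + h + 1)
        bseg = b[(h + lo - i):(h + hi - i)]    # overlapping taps only
        y[i] = np.dot(x[lo:hi], bseg) / bseg.sum()
    return y

fs = 125.0
t = np.arange(0, 2, 1/fs)
x = 1.0 + np.sin(2*np.pi*3*t) + 0.3*np.sin(2*np.pi*50*t)
y = fir_lowpass(x, fs, fc=15.0)                # keeps DC and the 3 Hz term
```

The edge loop is the point of the exercise: a plain convolution either shortens the output or contaminates the ends with zeros, while renormalizing by the sum of the overlapping taps keeps the DC gain of the edge samples at unity.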
function [y,n] = HighPass(x,fsa,fca,cfa,pfa);
%HighPass: Highpass filter
%
%   [y,n] = HighPass(x,fs,fc,cf,pf)
%
%   x    Input signal
%   fs   Sampling frequency (Hz). Default = 125
%   fc   Cutoff frequency (Hz). Default = fs/4
%   cf   Causality flag (1 = causal, 2 = noncausal). Default = 2
%   pf   Plot flag (0 = none, 1 = screen). Default = 0
%
%   y    Filtered signal
%   n    Filter order
%
%   The causal implementation uses only the present and previous values
%   of the input. The noncausal implementation filters the data in the
%   forward direction, and the filtered sequence is then reversed and
%   run back through the filter; y is the time-aligned result.
%
%   Example: Highpass filter the ICP signal with a noncausal filter with
%   cutoff frequency 0.5 Hz. This will filter out the low frequency
%   components and eliminate the signal trend:
%
%      load ICP;
%      [y,n] = HighPass(icp,fs,0.5,2,1);
%
%   Version 1.00 MA
%
%=======================================================================
% Process function arguments
%=======================================================================
if nargin<1 | nargin>5,     % HighPass takes at most five arguments
    help HighPass;
    return;
end;

fs = 125;                   % Default sampling rate, Hz
fc = fs/4;                  % Default cutoff frequency, Hz
cf = 2;                     % Default flag - noncausal
pf = 0;                     % Default - no plotting
if nargout==0,
    pf = 1;
end;
if exist('fsa') & ~isempty(fsa), fs = fsa; end;
if exist('fca') & ~isempty(fca), fc = fca; end;
if exist('cfa') & ~isempty(cfa), cf = cfa; end;
if exist('pfa') & ~isempty(pfa), pf = pfa; end;

%=======================================================================
% Process Inputs
%=======================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;

%=======================================================================
% HighPass Filtering
%=======================================================================
nf  = fs/2;                 % Nyquist frequency, Hz
whp = fc/nf;                % Normalized cutoff frequency
Rs  = 40;                   % Stopband attenuation, dB
Wp  = whp*1.2;              % Passband edge
Ws  = whp*0.8;              % Stopband edge
Rp  = 0.5;                  % Passband ripple, dB
[N,Wn] = ellipord(Wp,Ws,Rp,Rs);   % (design calls reconstructed)
[B,A]  = ellip(N,Rp,Rs,Wn,'high');
n = N;                      % Filter order (returned)
if cf==1,                   % causal
    y = filter(B,A,x);
else                        % non-causal
    y = filtfilt(B,A,x);
end;

%=======================================================================
% Plotting
%=======================================================================
if pf==1,
    figure(1);
    figureset(1);
    h = plot(k./fs,x,'b',k./fs,y,'r');
    title('Raw Signal and Highpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Highpass Filtered');
    box off;
    axisset;
    figure(2);
    figureset(2);
    freqz(B,A,512,fs);
end;

%=======================================================================
% Take care of outputs
%=======================================================================
if nargout==0, clear('y','n'); end;
8.5
Analysis
Download the signals ECGNoisy60Hz, ECGQuantization, ECGBaselineDrift, and ECGNoiseCombined from the class website. Use MATLAB to perform the following computer
exploration.
1. Load and plot the signal ECGNoisy60Hz in MATLAB. Use the FFT to evaluate the
frequency spectrum. Notice the noise present at 60 Hz. Use MATLAB to design a filter
to eliminate this problem. Plot and compare the initial signal and the filtered signal.
2. Repeat the procedure with ECGQuantization. In this case the signal is severely affected
by quantization noise. Use MATLAB to design a filter to eliminate this noise and show
the results.
3. Repeat the procedure with ECGBaselineDrift. In this case the signal is severely affected
by baseline drift due to patient movement. Use MATLAB to design a filter to eliminate
this noise and show the results.
4. Repeat the procedure with ECGNoiseCombined. The signal contains all the above
types of noise combined. Use MATLAB to design a system to eliminate this noise and
show the results.
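As a sketch of how task 1 might be approached, consider a synthetic stand-in signal (the class signals are not reproduced here). One reasonable design is a narrow IIR notch centered at 60 Hz, applied with zero phase, with the FFT used to compare the spectra before and after. The sketch is in Python/SciPy; the same steps carry over to MATLAB (iirnotch, filtfilt, fft):

```python
import numpy as np
from scipy import signal

fs = 500.0                                   # assumed sampling rate, Hz
t = np.arange(0, 5, 1/fs)
ecg = signal.sawtooth(2*np.pi*1.2*t)         # crude stand-in for an ECG
x = ecg + 0.8*np.sin(2*np.pi*60*t)           # add 60 Hz interference

b, a = signal.iirnotch(60.0, 30.0, fs=fs)    # narrow notch at 60 Hz, Q = 30
y = signal.filtfilt(b, a, x)                 # zero-phase application

# FFT spectra before and after: the 60 Hz line should collapse while the
# low-frequency ECG content is preserved.
f = np.fft.rfftfreq(len(x), 1/fs)
X = np.abs(np.fft.rfft(x))
Y = np.abs(np.fft.rfft(y))
k60 = np.argmin(np.abs(f - 60.0))            # FFT bin nearest 60 Hz
kfund = np.argmin(np.abs(f - 1.2))           # bin of the stand-in fundamental
```

A high Q keeps the notch narrow so that nearby ECG harmonics pass through almost untouched; lowering Q widens the notch and trades signal fidelity for robustness to drift in the interference frequency.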
CHAPTER
9
Project: Design of an Automatic Beat Detection Algorithm
9.1
General Information
9.2
Project Description
Automatic beat detection algorithms have many clinical applications including pulse
oximetry, cardiac arrhythmia detection, and cardiac output monitoring. Most of these
algorithms have been developed by medical device companies and are proprietary.
Thus, researchers who wish to investigate pulse contour analysis must rely on manual
annotations or develop their own algorithms.
The objective of this project is to design an automatic detection algorithm for
intracranial pressure (ICP) signals that locates the first peak following each heart beat.
This is called the percussion peak in ICP signals.
Development of automatic detection algorithms is an active area of research.
9.3
Significance
The unavailability of robust detection algorithms for pressure signals has, at least
partially, prevented researchers from fully conducting beat-by-beat analysis. Current
methods of ICP signal analysis are primarily based on time- or frequency-domain metrics such as mean, standard deviation, and spectral power at the heart rate
frequency. Few investigators have analyzed variations in the beat-level morphology of
the pressure signals because detection algorithms that can automatically identify each
of the beat components are generally unavailable.
Many researchers manually annotate desired components of physiologic pressure signals
because detection algorithms for these signals are not widely available. This approach
is labor-intensive, subjective, expensive, and can only be used on short signal segments.
There are numerous current and potential applications for pressure beat detection
algorithms. Many pulse oximeters perform beat detection as part of the signal processing
necessary to estimate oxygen saturation, but these algorithms are proprietary and cannot
be used in other applications. Systolic peak detection is necessary for some measures
of baroreflex sensitivity. Identification of the pressure components is necessary for some
methods that assess the interaction between respiration and beat-by-beat ventricular
parameters and the modulation effects of respiration on left ventricular size and stroke
volume. Detection is a necessary task when analyzing arterial compliance and the
pressure pulse contour. Beat-to-beat morphology analysis of ICP also requires robust
automatic detection.
9.4
Specifications
x    Input signal
fs   Sampling frequency (Hz)
pf   Plot flag
fi
9.5
9.6
Resources
A paper entitled: An Automatic Beat Detection Algorithm for Pressure Signals published
in IEEE Transactions of Biomedical Engineering has been posted on the course website.
This paper describes a general detection algorithm that can be used in pressure signals
(not only ICP).
9.7
Report Sections
Abstract: Concisely state what was done, how it was done, principal results, and their
significance. The abstract should contain the most critical information of the paper.
Introduction: State what the problem is specifically, the significance of finding a solution
to the problem, and the work that other researchers have done on this problem. If you
have a long report, the last paragraph in this section should describe the organization
of the rest of the paper.
Methodology: In short, how you did what you did. This section should include an explanation of the methods you used, a description of the data set, and an account of how the data was collected or where you obtained it.
Results: What are the results of applying your method? This section should be strictly factual, stating only how well your model performed, the outcome of your hypothesis tests, etc. It should not include your interpretation or ideas; just the facts.
Discussion: What did you learn from the results listed in the earlier section? If the results were different from what you (or the reader) would expect, try to explain why. If you have ideas for further research, this is where you should describe those ideas.
Conclusion: This section should summarize your main discoveries or findings from the project.
9.8
Report Requirements
The paper should be written for someone who understands the key concepts and methods covered in this class. You may assume the reader is a graduate of an engineering program.
The reports must conform to IEEE requirements for journal papers.
Do not include code or raw data that you've written for the project.
Avoid passive sentence construction. If you don't like using first person pronouns ("I"), you can often use "this paper" or "this report" as the subject of sentences. For example, "This paper describes an analysis of..." instead of "An analysis of ... is described" or "I describe an analysis of ...".
If English is not your native language, have someone at the Writing Center review your report for organization, style, and grammar.
The report must be in final submission format and must use the LaTeX or MS Word stylesheet.
The name of the course (Signals and Systems) and term (Spring 2003) should be listed as part of the author affiliation. Something like, "This work was completed as part of a course project for Signals and Systems at Portland Community College during spring term of 2005." would be appropriate.
Do not list yourself as a member of the IEEE unless you really are a member.
Label your axes in all the figures.
Describe the figures in words using a caption below the figure.
Be sure to use the IEEE format for the caption.
Tables: Remember to use units. The captions go above the tables.
Include relevant citations.
Use review articles to avoid a lengthy literature search.
Each reference number should be enclosed in square brackets.
Do not begin a sentence with a reference number.
The final report must be submitted in electronic form (via e-mail). MS Word, LaTeX,
postscript, or PDF are all acceptable.
9.9
Assessment
Format: Does the report adhere to the IEEE format and requirements listed above?
Grammar: Is the report written in past tense (it should be)? Does the report use the terms "I" or "you" inappropriately? Were there many grammar or spelling errors?
Organization: Is the report well organized? Are the section headings appropriate and
clear?
Clarity of Writing: Was the report clearly written? Could I understand what was done
and why after reading it?
Scope: Was the project of sufficient scope for the class?
Abstract: Does the abstract give an accurate and concise summary of the report?
Significance: Does the report explain the significance of the project?
Objectives: Are the project objectives clearly specified in the introduction?
Methodology: Were the methods and algorithms used appropriate for the data and
project objectives?
Results: Were the results sufficient? Were they clearly stated? Was a table or plot used
to display the results appropriately?
Discussion: Are the results discussed? Were there any surprises and, if so, were ideas
about the reasons for the surprises given? Was the significance of the results explained?
Citations: Were appropriate citations made to previous work?
CHAPTER
10
Appendix
10.1
10.2
10.3
IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 52, NO. 10, OCTOBER 2005
Abstract: Beat detection algorithms have many clinical applications including pulse oximetry, cardiac arrhythmia detection, and cardiac output monitoring. Most of these algorithms have been developed by medical device companies and are proprietary. Thus, researchers who wish to investigate pulse contour analysis must rely on manual annotations or develop their own algorithms. We designed an automatic detection algorithm for pressure signals that locates the first peak following each heart beat. This is called the percussion peak in intracranial pressure (ICP) signals and the systolic peak in arterial blood pressure (ABP) and pulse oximetry (SpO2) signals. The algorithm incorporates a filter bank with variable cutoff frequencies, spectral estimates of the heart rate, rank-order nonlinear filters, and decision logic. We prospectively measured the performance of the algorithm compared to expert annotations of ICP, ABP, and SpO2 signals acquired from pediatric intensive care unit patients. The algorithm achieved a sensitivity of 99.36% and positive predictivity of 98.43% on a dataset consisting of 42,539 beats.
Index Terms: Arterial blood pressure (ABP), component detection, intracranial pressure (ICP), pressure beat detection, pulse contour analysis, pulse oximetry (SpO2).
I. INTRODUCTION
The unavailability of robust detection algorithms for pressure signals has, at least partially, prevented researchers from
fully conducting beat-by-beat analysis. Current methods of ICP
signal analysis are primarily based on time- or frequency-domain metrics such as mean, standard deviation, and spectral
power at the heart rate frequency [7]. Few investigators have
analyzed variations in the beat-level morphology of the pressure signals because detection algorithms that can automatically
identify each of the beat components are generally unavailable.
Many researchers manually annotate desired components of
physiologic pressure signals because detection algorithms for
these signals are not widely available. This approach is labor-intensive, subjective, expensive, and can only be used on short
signal segments.
There are numerous current and potential applications for
pressure beat detection algorithms. Many pulse oximeters perform beat detection as part of the signal processing necessary
to estimate oxygen saturation, but these algorithms are proprietary and cannot be used in other applications. Systolic
peak detection is necessary for some measures of baroreflex
sensitivity [8][10]. Identification of the pressure components
is necessary for some methods that assess the interaction between respiration and beat-by-beat ventricular parameters and
the modulation effects of respiration on left ventricular size
and stroke volume [11]. Detection is a necessary task when
analyzing arterial compliance and the pressure pulse contour
[12]. Beat-to-beat morphology analysis of ICP also requires
robust automatic detection.
Fig. 4. Block diagram showing the architecture and individual stages of the new detection algorithm for peak component detection in pressure signals.
TABLE I
ALGORITHM PSEUDOCODE
Fig. 5. Example illustrating the output of some of the stages performed by the
detector during peak component detection in pressure signals.
C. Preprocessing Stages
The preprocessing stage consists of three bandpass filters.
The first filter removes the trend and eliminates high frequency
noise. The resulting signal is used to estimate the heart rate
which, in turn, is used to determine the cutoff frequency of the
other two bandpass filters. The second filter further attenuates
high frequency components and passes only frequencies that
are less than 2.5 times the heart rate. The output of this filter
only contains one cycle per heart contraction and eliminates
enough high frequency power to ensure the signal derivative is
not dominated by high frequency noise. The third bandpass filter
detrends the signal by eliminating frequencies below half the
minimum expected heart rate and slightly smoothes the signal
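The heart-rate-adaptive idea described above can be sketched as follows. This is not the paper's implementation: the band limits, filter order, and the 2.5x multiple below are either taken from the text or are assumed for illustration (Python/SciPy):

```python
import numpy as np
from scipy import signal

def estimate_heart_rate(x, fs, hr_band=(0.5, 3.5)):
    """Dominant frequency (Hz) inside a plausible heart-rate band."""
    f, pxx = signal.welch(x, fs=fs, nperseg=min(len(x), 1024))
    band = (f >= hr_band[0]) & (f <= hr_band[1])
    return f[band][np.argmax(pxx[band])]

fs = 125.0
t = np.arange(0, 30, 1/fs)
# stand-in for an ICP signal: 1.5 Hz (90 bpm) pulse plus a harmonic
icp = np.sin(2*np.pi*1.5*t) + 0.2*np.sin(2*np.pi*6*t)

hr = estimate_heart_rate(icp, fs)
# second stage: pass only frequencies below 2.5x the estimated heart
# rate, so the output has roughly one cycle per contraction
sos = signal.butter(4, 2.5*hr, btype='low', fs=fs, output='sos')
one_cycle = signal.sosfiltfilt(sos, icp)
```

Tying the cutoff to the estimated heart rate is what lets a fixed algorithm cope with patients whose rates differ by a factor of two or more, which is the point of the variable-cutoff filter bank.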
TABLE II
SENSITIVITY AND POSITIVE PREDICTIVITY OF THE DETECTION ALGORITHM FOR ICP, ABP, AND ECG SIGNALS. THE TABLE SHOWS THE SE AND +P RESULTS FOR ACCEPTANCE INTERVALS OF 8.0, 16.0, 24.0, AND 48.0 MS. THESE RESULTS USED THE EXPERT MANUAL ANNOTATIONS (DT) ON 42 539 BEATS RANDOMLY SELECTED FROM A PEDIATRIC INTENSIVE CARE UNIT PATIENT POPULATION. THE SEGMENTS INCLUDED REGIONS OF SEVERE ARTIFACT.
TABLE III
ALGORITHM'S SENSITIVITY AND POSITIVE PREDICTIVITY VALIDATED AGAINST TWO EXPERTS' MANUAL ANNOTATIONS OF 2300 BEATS OF RANDOMLY SELECTED ICP SIGNALS FOR ACCEPTANCE INTERVALS (AI) OF 16.0 AND 24.0 MS. THE TABLE SHOWS THE ALGORITHM'S PERFORMANCE (AD) AGAINST THE TWO EXPERTS (DT AND JM), AND THE CONSISTENCY OF THE EXPERTS BETWEEN THEMSELVES.
TABLE IV
ALGORITHM'S SENSITIVITY AND POSITIVE PREDICTIVITY VALIDATED AGAINST TWO EXPERTS' MANUAL ANNOTATIONS OF 2179 BEATS OF RANDOMLY SELECTED ABP SIGNALS FOR ACCEPTANCE INTERVALS (AI) OF 16.0 AND 24.0 MS. THE TABLE SHOWS THE ALGORITHM'S PERFORMANCE (AD) AGAINST THE TWO EXPERTS (DT AND JM), AND THE CONSISTENCY OF THE EXPERTS BETWEEN THEMSELVES.
TABLE V
ALGORITHM'S SENSITIVITY AND POSITIVE PREDICTIVITY VALIDATED AGAINST TWO EXPERTS' MANUAL ANNOTATIONS OF 2649 BEATS OF RANDOMLY SELECTED SPO2 SIGNALS FOR ACCEPTANCE INTERVALS (AI) OF 16.0 AND 24.0 MS. THE TABLE SHOWS THE ALGORITHM'S PERFORMANCE (AD) AGAINST THE TWO EXPERTS (DT AND JM), AND THE CONSISTENCY OF THE EXPERTS BETWEEN THEMSELVES.
The sensitivity (Se) indicates the percentage of true beats that were correctly detected by the algorithm. The positive predictivity (+P) indicates the percentage of beat detections which were labeled as such by the expert.
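These two measures can be computed from matched annotations as follows. The greedy nearest-neighbor matching within the acceptance interval is an assumption for illustration, not necessarily the paper's exact pairing rule (Python):

```python
import numpy as np

def score(detections, annotations, acceptance_s=0.016):
    """Match detections (s) to expert annotations (s) within an
    acceptance interval; return (sensitivity, positive predictivity)."""
    ann = np.sort(np.asarray(annotations, dtype=float))
    used = np.zeros(len(ann), dtype=bool)
    tp = 0
    for d in sorted(detections):
        free = np.where(~used)[0]          # annotations not yet matched
        if len(free) == 0:
            break
        j = free[np.argmin(np.abs(ann[free] - d))]
        if abs(ann[j] - d) <= acceptance_s:
            used[j] = True                 # true positive: within interval
            tp += 1
    fn = len(ann) - tp                     # expert beats that were missed
    fp = len(detections) - tp              # detections with no true beat
    return tp / (tp + fn), tp / (tp + fp)

# toy example: four expert beats, three detections, one outside 16 ms
se, pp = score([1.005, 2.020, 3.000], [1.0, 2.0, 3.0, 4.0])
```

Note that Se and +P answer different questions (missed beats versus false alarms), which is why both must be reported, as in Tables II-V.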
C. Algorithm Assessment
The algorithm was validated prospectively against expert annotated detections generated by two different experts on ICP, ABP, and SpO2 signals. The performance of the algorithm was first assessed on the randomly chosen segments without taking into consideration whether they contained portions of significant artifact. After an expert manually classified each minute as normal, corrupted, or absent, the algorithm performance was assessed using each expert's manual annotations as the true peaks on the normal and corrupted segments. The algorithm was developed using pressure signals from different patients than those used for performance assessment. The assessment was measured only once without any parameter tuning.
IV. RESULTS
Table II reports the algorithm's sensitivity and positive predictivity for the different pressure signals and acceptance intervals of 8.0, 16.0, 24.0, and 48.0 ms. These are based on one expert's manual annotations for all 42 539 beats including segments classified as normal, corrupted, and absent. Tables III–V report the algorithm's sensitivity and positive predictivity on ICP, ABP, and SpO2 signals, respectively. These tables show the algorithm's performance (AD) compared with two different experts (DT & JM) on segments classified as normal or corrupted. The inter-expert agreement is also reported with DT used as the true peaks. The algorithm's average sensitivity on the 42 539 beats is 99.36%, with a positive predictivity of 98.43% for an acceptance interval of 16 ms.
Fig. 6. Illustrative example showing an ICP signal and the percussion peaks (P1) identified by the two experts and the detection algorithm. In this case both experts and the algorithm were in perfect agreement.
Fig. 7. Illustrative example showing an ICP signal and the percussion peaks (P1) identified by the two experts and the detection algorithm. Again both experts and the algorithm were in perfect agreement despite the changing morphology and the different character from the signal shown in Fig. 6.
A. Results
The results show that the algorithm is nearly as accurate as the experts are with one another. Figs. 6 and 7 show examples of ICP percussion peaks. Note that the signal morphology in Fig. 7 is considerably different from Fig. 6. Fig. 8 shows some examples when the algorithm detected different peaks than the experts in a SpO2 signal. Note that this segment is corrupted by clipping artifact and the algorithm continued to identify peaks (over detection). When clipping occurs, the algorithm tries to interpolate and perform component detection so as to minimize the interbeat interval variability. Experts did not try to interpolate in segments where the signal was absent due to device saturation. This reduces the algorithm's reported sensitivity and positive predictivity. Fig. 8 also shows a missed peak after the clipped region. In general, regions where artifact occurs have a slight effect on normal beats that are close. This occurs because the artifacts can affect the rank filters' baseline and, therefore, the estimated relative amplitude and estimated slope.
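A rank-order filter of the kind mentioned above can be illustrated as a sliding-window percentile that tracks the running baseline of a pulsatile signal. The window length and percentile here are illustrative choices, not the paper's (Python/SciPy):

```python
import numpy as np
from scipy.ndimage import percentile_filter

fs = 125.0
t = np.arange(0, 10, 1/fs)
x = np.sin(2*np.pi*1.5*t) + 0.5*t      # pulsatile signal plus drift

# 2-second window, 10th percentile: follows the lower envelope (the
# running baseline) rather than the beat peaks
baseline = percentile_filter(x, percentile=10, size=int(2*fs))
detrended = x - baseline               # beats measured above the baseline
```

Because a percentile is insensitive to a few extreme samples, this baseline is far more robust to spikes and clipping than a moving average, which is exactly why artifact only perturbs it slightly, as the text describes.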
TABLE VII
FUNCTION PSEUDOCODE: ESTIMATEHEARTRATE
pressure detection algorithms. Validation databases with manually annotated beats by human experts are needed in order to
provide reproducible and comparable performance assessment
of pressure detection algorithms.
Our validation dataset is publicly available at http://
bsp.pdx.edu to provide other developers annotated examples that can be used to validate their beat detection algorithms.
Nonetheless, we caution developers and users about the risk
of validation databases. If developers use these datasets for
development, the performance is favorably biased by the tuning
and algorithm design that occurs during development. These
algorithms may have worse performance when applied prospectively to new datasets. Although validation databases contain
a large number of annotated peaks, detection algorithms can still
be favorably tuned to the common cardiac physiology of the
patient population, which is often a narrow subgroup that has
been targeted for their common pathologies. Ideally, validation
should be performed prospectively by a third party on data
that is unavailable to developers. Some progress toward this
higher standard of performance has been achieved through the
Computers and Cardiology challenges. Independent third-party
validation of algorithms on proprietary data with standardized
performance measures would significantly advance the quality
of detection algorithms as a whole.
VI. CONCLUSION
We described a new automatic beat detection algorithm that can be used to detect the percussion component in ICP signals and the systolic peak in ABP and SpO2 signals. Although there is a substantial body of literature describing QRS detection algorithms, there are almost no published descriptions or assessments of pressure detection algorithms. These algorithms are needed for many applications and research.
Our algorithm consists of several stages. It relies on the estimated heart rate to choose the cutoff frequencies used by the preprocessing bandpass filters and to aid the discrimination of false
negatives and false positives on the interbeat-interval decision
logic stage. It uses three bandpass filters to eliminate drift and attenuate high frequency noise. It uses nonlinear rank order filters
for peak detection and decision logic. The algorithm was validated prospectively (the validation dataset was not available during algorithm development). The algorithm was run only once on the dataset and achieved a sensitivity of 99.36% and a positive predictivity of 98.43% when compared with expert manual annotations of ICP, ABP, and SpO2 signals from the CSL Database (OHSU).
We also described a validation dataset and the CSL Database of the Doernbecher Children's Hospital (Oregon Health & Science University). This validation dataset is publicly available as a standard database for algorithm validation.
APPENDIX
The following tables provide the pseudocode of the functions used by the pressure detector algorithm.
TABLE VIII
FUNCTION PSEUDOCODE: IBICORRECT
ACKNOWLEDGMENT
The authors wish to acknowledge the support of the Northwest Health Foundation and the Doernbecher Children's Hospital Foundation.
REFERENCES
[1] B.-U. Köhler, C. Henning, and R. Orglmeister, "The principles of software QRS detection," IEEE Eng. Med. Biol. Mag., vol. 21, no. 1, pp. 42–57, Jan.-Feb. 2002.
[2] L. Antonelli, W. Ohley, and R. Khamlach, "Dicrotic notch detection using wavelet transform analysis," in Proc. 16th Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 2, 1994, pp. 1216–1217.
[3] P. F. Kinias and M. Norusis, "A real time pressure algorithm," Comput. Biol. Med., vol. 11, p. 211, 1981.
[4] M. Aboy, J. McNames, and B. Goldstein, "Automatic detection algorithm of intracranial pressure waveform components," in Proc. 23rd Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 3, 2001, pp. 2231–2234.
[5] M. Aboy, C. Crespo, J. McNames, and B. Goldstein, "Automatic detection algorithm for physiologic pressure signal components," in Proc. 24th Int. Conf. IEEE Engineering in Medicine and Biology Society and Biomedical Engineering Society, vol. 1, 2002, pp. 196–197.
[6] E. G. Caiani, M. Turiel, S. Muzzupappa, A. Porta, G. Baselli, S. Cerutti, and A. Malliani, "Evaluation of respiratory influences on left ventricular function parameters extracted from echocardiographic acoustic quantification," Physiolog. Meas., vol. 21, pp. 175–186, 2000.
[7] J. D. Doyle and P. W. S. Mark, "Analysis of intracranial pressure," J. Clin. Monitoring, vol. 8, no. 1, pp. 81–90, 1992.
[8] G. Parati, M. Di Rienzo, and G. Mancia, "How to measure baroreflex sensitivity: From the cardiovascular laboratory to daily life," J. Hypertension, vol. 18, pp. 7–19, 2000.
[9] M. Di Rienzo, P. Castiglioni, G. Mancia, A. Pedotti, and G. Parati, "Advances in estimating baroreflex function," IEEE Eng. Med. Biol. Mag., vol. 20, no. 2, pp. 25–32, Mar./Apr. 2001.
[10] M. Di Rienzo, G. Parati, P. Castiglioni, R. Tordi, G. Mancia, and A. Pedotti, "Baroreflex effectiveness index: An additional measure of baroreflex control of heart rate in daily life," Am. J. Physiol. Regulatory, Integrative, Comparative Physiol., vol. 280, pp. R744–R751, 2001.
[11] E. G. Caiani, M. Turiel, S. Muzzupappa, A. Porta, L. P. Colombo, and G. Baselli, "Noninvasive quantification of respiratory modulation on left ventricular size and stroke volume," Physiolog. Meas., vol. 23, pp. 567–580, 2002.
[12] G. E. McVeigh, C. W. Bratteli, C. M. Alinder, S. Glasser, S. M. Finkelstein, and J. N. Cohn, "Age-related abnormalities in arterial compliance identified by pressure contour analysis," Hypertension, vol. 33, pp. 1392–1398, 1999.
[13] W. Holsinger, K. Kempner, and M. Miller, "A QRS preprocessor based on digital differentiation," IEEE Trans. Biomed. Eng., vol. BME-18, pp. 212–217, 1971.
[14] M.-E. Nygards and J. Hulting, "An automated system for ECG monitoring," Comput. Biomed. Res., vol. 12, pp. 181–202, 1979.
[15] M. Okada, "A digital filter for the QRS complex detection," IEEE Trans. Biomed. Eng., vol. BME-26, pp. 700–703, 1979.
[16] J. Fraden and M. Neumann, "QRS wave detection," Med. Biol. Eng. Comput., vol. 18, pp. 125–132, 1980.
[17] J. Pan and W. Tompkins, "A real-time QRS detection algorithm," IEEE Trans. Biomed. Eng., vol. BME-32, pp. 230–236, 1985.
[18] G. Friesen, T. Jannett, M. Jadallah, S. Yates, S. Quint, and H. Nagle, "A comparison of the noise sensitivity of nine QRS detection algorithms," IEEE Trans. Biomed. Eng., vol. 37, no. 1, pp. 85–98, Jan. 1990.
[19] F. Gritzali, "Toward a generalized scheme for QRS detection in ECG waveforms," Signal Process., vol. 15, pp. 183–192, 1988.
[20] P. Hamilton and W. Tompkins, "Quantitative investigation of QRS detection rules using the MIT/BIH arrhythmia database," IEEE Trans. Biomed. Eng., vol. BME-33, pp. 1157–1165, 1986.
[21] P. Hamilton and W. Tompkins, "Adaptive matched filtering for QRS detection," in Proc. Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, 1988, pp. 147–148.
[22] V. Afonso, W. Tompkins, T. Nguyen, and S. Luo, "ECG beat detection using filter banks," IEEE Trans. Biomed. Eng., vol. 46, no. 2, pp. 192–202, Feb. 1999.
[23] S. Kadambe, R. Murray, and G. Boudreaux-Bartels, "Wavelet transform-based QRS complex detector," IEEE Trans. Biomed. Eng., vol. 46, no. 7, pp. 838–848, Jul. 1999.
[24] W. W. Nichols and M. F. O'Rourke, McDonald's Blood Flow in Arteries: Theoretical, Experimental and Clinical Principles, 4th ed. London, U.K.: Arnold, 1998.
[25] B. North, "Intracranial pressure monitoring," in Head Injury, P. Reilly and R. Bullock, Eds. London, U.K.: Chapman & Hall, 1997, pp. 209–216.
[26] M. H. Hayes, Statistical Digital Signal Processing and Modeling. New York: Wiley, 1996.
[27] MIT-BIH ECG Database. Massachusetts Inst. Technol., Cambridge. [Online]. Available: http://ecg.mit.edu
[28] M. Aboy. (2002) Instructions for Labeling Segments Used on the Evaluation of the BSP-Automatic Pressure Detection Algorithm. Portland State University. [Online]. Available: http://bsp.pdx.edu
[29] (1998) ANSI/AAMI EC57: Testing and Reporting Performance Results of Cardiac Rhythm and ST Segment Measurement Algorithms. AAMI Recommended Practice/American National Standard. [Online]. Available: http://www.aami.org
Daniel Tsunami, photograph and biography not available at the time of publication.
Mateo Aboy (M'98) received the double B.S.
degree (magna cum laude) in electrical engineering
and physics from Portland State University (PSU),
Portland, OR, in 2002. In 2004, he received the M.S.
degree (summa cum laude) in electrical and computer engineering from PSU and the M.Phil (DEA)
degree from the University of Vigo (ETSIT-Vigo),
Vigo, Spain, where he is working towards the Ph.D.
degree in the Signal Theory and Communications
Department.
Since September 2000, he has been a research
member of the Biomedical Signal Processing Laboratory (PSU). He has been
with the Electronics Engineering Technology Department at Oregon Institute
of Technology, Portland, since 2005. His primary research interest is statistical
signal processing.
Mr. Aboy is a lifetime honorary member of the Golden-Key Honor Society, a
past Chapter President of HKN (International Electrical Engineering Honor Society), and past Corresponding Secretary of TBP (National Engineering Honor
Society).
Miles S. Ellenby received the B.E. and M.E. degrees in electrical engineering from the University of Illinois, Urbana-Champaign, in 1986 and 1987, respectively.
He received the M.D. degree from the University of
Chicago in 1990.
He did his Pediatrics residency (1991–1994) at the Children's Hospital of Philadelphia, followed by a year as chief resident (1994–1995). He did a fellowship in Pediatric Critical Care Medicine in the combined University of California/San Francisco Children's Hospital of Oakland program (1995–1998). He joined the faculty
at Oregon Health & Science University in 2000, where he is now an Assistant
Professor of Pediatrics. He specializes in critical care medicine with a special
interest in the care of children with congenital heart disease. His research
interests are in cardiovascular monitoring, complex systems analysis and
biological signals processing.
Communications
Adaptive Modeling and Spectral Estimation of
Nonstationary Biomedical Signals Based on Kalman
Filtering
Mateo Aboy*, Oscar W. Márquez, James McNames, Roberto Hornero,
Tran Trong, and Brahm Goldstein
Abstract: We describe an algorithm to estimate the instantaneous power spectral density (PSD) of nonstationary signals. The algorithm is based on a dual Kalman filter that adaptively generates an estimate of the autoregressive model parameters at each time instant. The algorithm exhibits better PSD tracking performance in nonstationary signals than classical nonparametric methodologies, and does not assume local stationarity of the data. Furthermore, it provides better time-frequency resolution, and is robust to model mismatches. We demonstrate its usefulness by a sample application involving PSD estimation of intracranial pressure (ICP) signals from patients with traumatic brain injury (TBI).
Index Terms: Intracranial pressure, Kalman filter, linear models, spectral estimation, traumatic brain injury.
I. INTRODUCTION
Currently, power spectral density (PSD) estimation of physiologic signals is performed predominantly using classical techniques based on the fast Fourier transform (FFT). Nonparametric methods such as the periodogram and its improvements (i.e., Bartlett's, Welch's, and Blackman-Tukey's methodologies [1]–[4]) are based on the idea of estimating the autocorrelation sequence of a random process from measured data, and then taking the FFT to obtain an estimate of the power spectrum. The two main advantages of these techniques are their computational efficiency, due to the numerical efficiency of the FFT algorithm, and that they do not make any assumptions about the process except for its stationarity. This makes them the methodology of choice, particularly in situations where long data records need to be analyzed and there is no clear model for the process. Furthermore, the availability of long data records enables one to improve their statistical properties by averaging or smoothing. However, these techniques have some limitations. They require stationarity of the segments studied, do not work
Manuscript received March 1, 2004; revised January 2, 2005. A previous version of this paper was presented at the IEEE-EMBS 2004 conference. This work was supported in part by the Northwest Health Foundation, in part by the Doernbecher Children's Hospital Foundation, and in part by the Thrasher Research Fund. Asterisk indicates corresponding author.
*M. Aboy is with the Department of Electronics Engineering Technology, Oregon Institute of Technology, Portland, OR 97201 USA. He is also with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201 USA (e-mail: mateoaboy@ieee.org).
O. W. Márquez is with the Signal Theory and Communications Department, ETSI-Telecomunicación, University of Vigo, 36310 Vigo, Spain, EU.
J. McNames is with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201 USA.
R. Hornero is with the Department of Signal Theory and Communications, ETSI-Telecomunicación, University of Valladolid, 47011 Valladolid, Spain, EU.
T. Trong is with the Department of Biomedical Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Portland, OR 97206 USA.
B. Goldstein is with the Complex Systems Laboratory, Department of Pediatrics, Oregon Health and Science University, Portland, OR 97201 USA.
Digital Object Identifier 10.1109/TBME.2005.851465
well for short data records, and have limited frequency resolution. Since physiologic signals are nonstationary in nature, these techniques are applied following the methodology of the short-time Fourier transform (STFT), where nonparametric methods are applied to short overlapping segments which are assumed to be stationary. This approach also has its limitations: it imposes a piecewise stationary model on the data and, since local stationarity requires the analysis segments to be short in duration, it has limited time-frequency resolution.
Time-frequency resolution can be improved by using parametric methods of PSD estimation. The parametric approach is based on modeling the signal under analysis as a realization of a particular stochastic process and estimating the model parameters from its samples. In the absence of a priori knowledge about how the process is generated, parametric PSD estimation is generally performed assuming an autoregressive (AR) model [4]. This is a popular assumption for several reasons: 1) many natural signals such as speech, music, or seismic signals have an underlying autoregressive structure; 2) in general, any signal (not necessarily AR in nature) can be modeled as an AR process if a sufficiently large model order is selected; 3) the all-pole structure of AR models enables good spectral peak matching, which makes them good candidates for situations where we are more interested in the spectral peaks than the valleys; and 4) estimation of the model parameters involves the solution of a linear system of equations, which can be solved efficiently. Even though parametric PSD estimation can improve the frequency resolution, the current techniques for PSD estimation based on AR models (i.e., the autocorrelation, covariance, modified covariance, and Burg's methods [5], [6]) assume stationarity. To analyze nonstationary signals they must also assume the signal is locally piecewise stationary.
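Point 4 above, that stationary AR fitting reduces to a linear solve, can be sketched with the Yule-Walker equations. This is a minimal illustration of the stationary case, not the paper's adaptive estimator, and the signal parameters are invented:

```python
# Yule-Walker sketch: fit AR(p) coefficients for the model
# x(n) = sum_k a_k x(n-k) + w(n) by solving the Toeplitz system R a = r.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_yule_walker(x, p):
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocorrelations r(0), ..., r(p)
    r = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(p + 1)])
    return solve_toeplitz(r[:p], r[1 : p + 1])

# Simulate a stationary AR(2) process with known coefficients
rng = np.random.default_rng(0)
true_a = np.array([0.75, -0.5])
w = rng.standard_normal(20000)
x = np.zeros_like(w)
for n in range(2, len(x)):
    x[n] = true_a[0] * x[n - 1] + true_a[1] * x[n - 2] + w[n]

a_hat = ar_yule_walker(x, 2)
print(a_hat)  # should be close to [0.75, -0.5]
```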
We describe a methodology to estimate the time-varying AR model parameters of nonstationary signals using an adaptive Kalman filter. This methodology produces instantaneous estimates of the PSD, improves time-frequency resolution, and enables nonstationary spectral analysis in situations where data records are too short and the locally stationary model does not work well. The reliability of the algorithm was tested with synthetic data generated from different models (AR, MA, ARMA, and harmonic), and with real data from physiologic pressure signals. Following the description of this methodology, we demonstrate its usefulness with a sample application involving PSD estimation of intracranial pressure (ICP) signals from patients with traumatic brain injury (TBI).
II. METHODS
The adaptive Kalman filter algorithm we propose for instantaneous PSD estimation assumes an underlying autoregressive structure of the data. We chose an underlying AR model structure because of its intrinsic generality and peak-matching capabilities. These are important properties for the analysis of physiologic signals, since we are usually more interested in estimating the frequencies at which the formants (peaks) occur than the valleys. Starting from this assumption, we modeled a given physiologic signal with a recursion of the form
$$x(n) = \sum_{k=1}^{p} a_k\, x(n-k) + w(n) \qquad (1)$$

where $\{a_k\}_{k=1}^{p}$ are the AR model parameters and $w(n)$ is the driving noise process.
In state-space form, the signal model uses the state vector

$$\mathbf{x}(n) = [\,x(n),\ x(n-1),\ \ldots,\ x(n-p+1)\,]^T \qquad (3)$$

and the time-varying transition matrix

$$A(n) = \begin{bmatrix}
a_1(n) & a_2(n) & \cdots & a_{p-1}(n) & a_p(n) \\
1 & 0 & \cdots & 0 & 0 \\
0 & 1 & \cdots & 0 & 0 \\
\vdots & & \ddots & & \vdots \\
0 & 0 & \cdots & 1 & 0
\end{bmatrix} \qquad (4)$$

so that

$$\mathbf{x}(n) = A(n-1)\,\mathbf{x}(n-1) + \mathbf{w}(n), \qquad y(n) = H\,\mathbf{x}(n) + v(n) \qquad (5)$$

where the observation vector is

$$H = [\,1\ 0\ \cdots\ 0\,]. \qquad (6)$$

To track the time-varying coefficients, the next sample is modeled as

$$x(n+1) = \sum_{k=1}^{p} a_k(n)\, x(n-k+1) + q(n) \qquad (7)$$

and the parameter vector $\mathbf{a}(n) = [\,a_1(n), \ldots, a_p(n)\,]^T$ evolves according to

$$\mathbf{a}(n) = \Phi\,\mathbf{a}(n-1) + \mathbf{e}(n). \qquad (8)$$

The parameter-tracking Kalman filter then iterates

$$z(n) = x(n+1) \qquad (9)$$
$$\hat{z}(n) = \mathbf{x}^T(n)\,\hat{\mathbf{a}}(n|n-1) \qquad (10)$$
$$\hat{\mathbf{a}}(n|n) = \hat{\mathbf{a}}(n|n-1) + K(n)\,[\,z(n) - \hat{z}(n)\,] \qquad (11)$$
$$P(n|n-1) = \Phi\,P(n-1|n-1)\,\Phi^T + Q_e(n) \qquad (12)$$
$$\Delta(n) = \mathbf{x}^T(n)\,P(n|n-1)\,\mathbf{x}(n) \qquad (13)$$
$$K(n) = P(n|n-1)\,\mathbf{x}(n)\,[\,\Delta(n) + \lambda(n)\,]^{-1} \qquad (14)$$
$$P(n|n) = [\,I - K(n)\,\mathbf{x}^T(n)\,]\,P(n|n-1) \qquad (15)$$

where $\hat{\mathbf{a}}(n|n-1) = \Phi\,\hat{\mathbf{a}}(n-1|n-1)$ is the best estimate of the state before incorporating the observation at time $n$.
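As a rough illustration of this recursion, the sketch below tracks a time-varying AR(1) coefficient with a scalar-observation Kalman filter under a random-walk parameter model (Phi taken as the identity). The noise variances and the test signal are invented tuning choices, not values from the paper:

```python
# Adaptive Kalman sketch: track time-varying AR(p) coefficients from a
# scalar observation, using a random-walk state model for the parameters.
import numpy as np

def kalman_ar_track(x, p, q_var=1e-4, r_var=1.0):
    n = len(x)
    a = np.zeros(p)              # coefficient estimate a_hat(n|n)
    P = np.eye(p)                # parameter error covariance
    Q = q_var * np.eye(p)        # process (random-walk) noise covariance
    A = np.zeros((n, p))
    for t in range(p, n):
        h = x[t - p : t][::-1]               # regressor [x(t-1), ..., x(t-p)]
        P = P + Q                            # time update (Phi = I)
        k = P @ h / (h @ P @ h + r_var)      # Kalman gain
        a = a + k * (x[t] - h @ a)           # measurement update
        P = (np.eye(p) - np.outer(k, h)) @ P
        A[t] = a
    return A

# AR(1) signal whose coefficient jumps from 0.5 to 0.9 halfway through
rng = np.random.default_rng(1)
n = 6000
a_true = np.where(np.arange(n) < n // 2, 0.5, 0.9)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a_true[t] * x[t - 1] + rng.standard_normal()

A = kalman_ar_track(x, p=1)
print(A[n // 2 - 10, 0], A[-1, 0])  # should sit near 0.5, then near 0.9
```

Because the update is recursive, the coefficient estimate (and hence the PSD estimate built from it) is available at every sample, without segmenting the data.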
The PSD of a stationary AR process follows from the model parameters as

$$P_x(e^{j\omega}) = \frac{|b(0)|^2}{\left|\,1 - \sum_{k=1}^{p} a_k\, e^{-jk\omega}\,\right|^2}. \qquad (16)$$

If $b(0)$ and $\{a_k\}_{k=1}^{p}$ can be estimated from data, then we can form an estimate of the power spectrum of a stationary process as

$$\hat{P}_x(e^{j\omega}) = \frac{|\hat{b}(0)|^2}{\left|\,1 - \sum_{k=1}^{p} \hat{a}_k\, e^{-jk\omega}\,\right|^2}. \qquad (17)$$

Substituting the instantaneous estimates produced by the Kalman filter yields the time-varying PSD estimate

$$\hat{P}_x(e^{j\omega}, n) = \frac{|\hat{b}(0, n)|^2}{\left|\,1 - \sum_{k=1}^{p} \hat{a}_k(n)\, e^{-jk\omega}\,\right|^2}. \qquad (18)$$
Fig. 1. Representative results of the first comparative study between a nonparametric methodology (Welch's) and the proposed DKF PSD estimator. (a) Plot shows an example of the 10-s ICP segment (light grey) and the 2-s subsegment used for this simulation. (b) Plot of the 2-s subsegment highlighted in (a). (c) Welch's PSD (dark) and DKF PSD (light) estimates corresponding to the 10-s segment. (d) Welch's PSD (dark) and DKF PSD (light) estimates on the 2-s subsegment. The thin line corresponds to Welch's estimate based on the 10-s segment. (e) Welch's PSD (dark) and DKF PSD (light) estimates in the 10-s segment with y-axis in dB scale. (f) Welch's PSD (dark) and DKF PSD (light) estimates in the 2-s subsegment with y-axis in dB scale. The dotted line is the Welch estimate based on the 10-s segment.
In practice these estimates are computed efficiently with the FFT:

$$\hat{P}_x(e^{j\omega}, n)_{KM} = \frac{1}{\left|\,\mathrm{FFT}[\,\bar{a}(n)\,]\,\right|^2} \qquad (19)$$

where

$$\bar{a}(n) = [\,1,\ -\hat{\mathbf{a}}(n)^T\,]^T. \qquad (20)$$
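Under the sign convention used here (the recursion $x(n) = \sum_k a_k x(n-k) + w(n)$), the FFT evaluation of (19)-(20) can be sketched as below; the model order, coefficients, and FFT length are illustrative:

```python
# Evaluate an AR power spectrum on a frequency grid by zero-padding the
# coefficient vector [1, -a_1, ..., -a_p] and taking one FFT.
import numpy as np

def ar_psd_fft(a_hat, b0=1.0, nfft=512):
    abar = np.zeros(nfft)
    abar[0] = 1.0
    abar[1 : len(a_hat) + 1] = -np.asarray(a_hat, dtype=float)
    return (b0 ** 2) / np.abs(np.fft.rfft(abar)) ** 2

psd = ar_psd_fft([0.9])   # AR(1) with a pole near dc: a lowpass spectrum
print(psd[0] > psd[-1])   # True: power is concentrated at low frequencies
```

Zero-padding to `nfft` points simply refines the frequency grid; the spectrum itself is fixed by the p coefficients.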
III. RESULTS
We tested the reliability of the instantaneous PSD estimation algorithm with synthetic data generated from different models (AR, MA, ARMA, and harmonic), and with real data from physiologic pressure signals. In the following, we demonstrate its usefulness with a sample application involving PSD estimation of ICP signals from patients with traumatic brain injury (TBI).
Fig. 2. Representative results of the second comparative study between a nonparametric methodology (Welch's) and the proposed PSD estimator based on the DKF. (a) ICP segment during a period of intracranial hypertension (ICP > 25 mmHg) and the reduction in mean ICP after mechanical hyperventilation (approximately > 800 s). (b) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation). In the spectrogram, we can clearly see the cardiac components around 2 Hz and the respiratory component (0.1–0.55 Hz). In the respiratory component, we can note a period of spontaneous breathing (approximately 0–225 s), and the period of mechanical hyperventilation (after approximately 225 s). (c) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation) generated using the nonparametric PSD estimator with a window of 15 s. (d) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation) generated using the instantaneous PSD estimate based on the DKF. The time resolution is 1 sample. We can appreciate a much better frequency resolution in this case. (e) PSD plot showing the instantaneous PSD estimates (thin light lines) before and after the intervention and their averages (thick lines). The average before the change is shown in grey and the average after the change is the black thick line.
B. Comparative Studies
We compared PSD estimates obtained with the proposed Kalman PSD estimation algorithm with those generated by classical nonparametric estimation techniques. For the purposes of this paper, Welch's method was used as the representative nonparametric PSD estimation methodology.
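For concreteness, the baseline can be reproduced with scipy.signal.welch. The synthetic "pressure-like" signal below (a 2-Hz cardiac-band tone plus a weaker respiratory-band tone and noise) is an invented stand-in for the ICP data:

```python
# Welch PSD of a synthetic two-tone signal: the dominant spectral peak
# should appear near the 2-Hz "cardiac" component.
import numpy as np
from scipy import signal

fs = 125.0                                  # hypothetical sample rate (Hz)
t = np.arange(0, 10, 1 / fs)                # a 10-s segment
x = (np.sin(2 * np.pi * 2.0 * t)            # cardiac-band tone
     + 0.3 * np.sin(2 * np.pi * 0.3 * t)    # respiratory-band tone
     + 0.1 * np.random.default_rng(2).standard_normal(len(t)))

f, pxx = signal.welch(x, fs=fs, nperseg=256)
print(f[np.argmax(pxx)])                    # peak near 2 Hz
```

Note the frequency-bin width here is fs/nperseg (about 0.49 Hz), which illustrates the resolution limit the segment length imposes on the nonparametric estimate.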
The first comparison was aimed at determining the quality of the PSD estimates of a nearly stationary ICP signal. The PSD of the signal was estimated with Welch's PSD estimator and with the proposed Kalman PSD estimator. PSDs of locally stationary 10-s segments obtained from ICP signals were estimated using both methodologies. Then, 2-s subsegments from these 10-s segments were selected and both methodologies were applied to estimate the PSD corresponding