
Computer Explorations in Discrete-Time Signals and Systems

and Digital Signal Processing Using MATLAB


A Laboratory Manual
Dr. Mateo Aboy
October 22, 2005

SUMMARY TABLE OF CONTENTS

1  Introduction to Sampling and Aliasing                       10
2  Discrete-Time Signals                                       18
3  Discrete-Time Systems and Convolution                       25
4  DTFT, DFT, and FFT                                          33
5  Sampling and Aliasing                                       42
6  The Z-Transform                                             52
7  Digital Filtering                                           62
8  Digital Filtering                                           77
9  Project: Design of an Automatic Beat Detection Algorithm    92
10 Appendix                                                    99

CONTENTS

1 Introduction to Sampling and Aliasing                        10
  1.1 Objective                                                11
  1.2 Theoretical Introduction                                 11
      1.2.1 Sampling and aliasing                              12
  1.3 Computer Exploration                                     13
      1.3.1 Procedure                                          13
      1.3.2 Example                                            14
  1.4 Example MATLAB code                                      16
  1.5 Tasks                                                    17

2 Discrete-Time Signals                                        18
  2.1 Objective                                                19
  2.2 Theoretical Introduction                                 19
      2.2.1 Unit Sample Sequence                               20
      2.2.2 Unit Step Sequence                                 20
      2.2.3 Real and Complex-Valued Exponential Sequences      20
      2.2.4 Other Functions and Operations                     21
  2.3 Computer Exploration                                     21
      2.3.1 Example                                            21
  2.4 Example MATLAB code                                      23
  2.5 Analysis                                                 24

3 Discrete-Time Systems and Convolution                        25
  3.1 Objective                                                26
  3.2 Theoretical Introduction                                 26
      3.2.1 Development of the Convolution Sum                 27
      3.2.2 Stability and Causality                            28
  3.3 Computer Exploration                                     28
      3.3.1 Procedure                                          29
      3.3.2 Example                                            29
  3.4 Example MATLAB code                                      31
  3.5 Analysis                                                 32

4 DTFT, DFT, and FFT                                           33
  4.1 Objective                                                34
  4.2 Theoretical Introduction                                 34
      4.2.1 Definitions                                        35
      4.2.2 Zero Padding, Windowing and Other Considerations   36
  4.3 Computer Exploration                                     37
      4.3.1 Procedure                                          37
      4.3.2 Example                                            38
  4.4 Example MATLAB code                                      40
  4.5 Analysis                                                 41

5 Sampling and Aliasing                                        42
  5.1 Objective                                                43
  5.2 Theoretical Introduction                                 43
      5.2.1 Spectra of Sampled Signals                         43
      5.2.2 Reconstruction                                     44
  5.3 Computer Exploration                                     45
      5.3.1 Procedure                                          45
      5.3.2 Example                                            45
  5.4 Example MATLAB code                                      49
  5.5 Analysis                                                 51

6 The Z-Transform                                              52
  6.1 Objective                                                53
  6.2 Theoretical Introduction                                 53
      6.2.1 Definition                                         53
      6.2.2 Causality and Stability                            54
      6.2.3 Finding Z-transforms Analytically                  55
      6.2.4 Inverse Z-transform                                56
  6.3 Computer Exploration                                     57
      6.3.1 Procedure                                          57
      6.3.2 Example                                            57
  6.4 Example MATLAB code                                      60
  6.5 Analysis                                                 61

7 Digital Filtering                                            62

8 Digital Filtering                                            77
  8.1 Objective                                                78
  8.2 Theoretical Introduction                                 78
      8.2.1 Design Objective                                   79
      8.2.2 FIR and IIR Filters                                79
  8.3 Computer Exploration                                     81
      8.3.1 Example                                            81
  8.4 Example MATLAB code                                      85
  8.5 Analysis                                                 91

9 Project: Design of an Automatic Beat Detection Algorithm     92
  9.1 General Information                                      93
  9.2 Project Description                                      93
  9.3 Significance                                             93
  9.4 Specifications                                           94
  9.5 Development and Test                                     95
  9.6 Resources                                                95
  9.7 Report Sections                                          95
  9.8 Report Requirements                                      96
  9.9 Assessment                                               97

10 Appendix                                                    99
  10.1 Appendix I: IEEE-EMBS Detection Paper                   99
  10.2 Appendix II: IEEE-TBME Detection Paper                  99
  10.3 Appendix III: IEEE-TBME Kalman Filter Paper             99

LIST OF FIGURES

1.1 The first plot shows xa(t) and the next three plots show the effect of
    sampling at fs1 = 10 fmax = 25, fs2 = 2.5 fmax = 6.25, and fs3 = 1. The
    first two sampling frequencies meet the sampling theorem requirement. On
    the other hand, the third sampling frequency is less than twice the
    maximum frequency; in this case the reconstructed signal is an aliased
    version of the original at a lower frequency.                          15

2.1 Plots showing discrete-time signals generated using MATLAB. The upper
    left plot corresponds to x(n) = 2δ(n+3) + δ(n+2) − δ(n) − 3δ(n−4),
    −5 ≤ n ≤ 5; the upper right plot is the sequence
    x(n) = [1, 2, 4, 5, 9, −3, −2, −1, 4] generated using a sum of delayed
    δ(n). The bottom left plot is a complex exponential
    x(n) = e^((−0.2+j0.3)n), −10 ≤ n ≤ 10, and the bottom right corresponds
    to a cosine at 30 Hz in Gaussian noise with zero mean and unit
    variance.                                                              22

3.1 Example of a discrete system that smooths an input signal and improves
    the signal-to-noise ratio. The underlying signal is a 10 Hz sinusoid
    sampled at 300 Hz which is corrupted by Gaussian noise. Notice the
    output is a smooth, delayed version of the input.                      30

4.1 Example of calculating the DFT using the FFT. The plot shows the time-
    and frequency-domain representations of a signal
    xa(t) = 3 sin(πt) + 2 sin(5πt) sampled at twice the maximum frequency
    (fs = 5 Hz). We can see how the magnitude of the DFT has two peaks, at
    0.5 Hz and 2.5 Hz, as we expected. In the next plot we show the DFT of
    the same signal, but corrupted by Gaussian noise. Notice how we can
    still identify the two frequency components even though the signal is
    distorted. The final plot shows the FFT in dB scale. We can see the
    effect of having a finite rectangular window.                          39

5.1 The first plot shows xa(t) and the next three plots show the effect of
    sampling at fs1 = 10 fmax = 25, fs2 = 2.5 fmax = 6.25, and fs3 = 2. The
    first two sampling frequencies meet the sampling theorem requirement. On
    the other hand, the third sampling frequency is less than twice the
    maximum frequency; in this case the reconstructed signal is an aliased
    version of the original at a lower frequency.                          47

5.2 The three plots show the effect of sampling at fs1 = 10 fmax = 25,
    fs2 = 2.5 fmax = 6.25, and fs3 = 2 in the frequency domain. The first
    two sampling frequencies meet the sampling theorem requirement. On the
    other hand, the third sampling frequency is less than twice the maximum
    frequency. The aliasing effect is evident in the third plot.           48

6.1 z-plane of H(z) = 0.006143 / (1 − 1.87834 z^−1 + 0.975156 z^−2).       58

6.2 Magnitude and phase response of
    H(z) = 0.006143 / (1 − 1.87834 z^−1 + 0.975156 z^−2). We can see that
    the system is a digital resonator filter which has a peak at
    ω = 0.1π.                                                              59

8.1 Frequency response of the lowpass filter.                              82

8.2 Result of the lowpass filtering operation. Notice that the lowpass
    filter smooths the effects of quantization noise. Notice also that this
    filter was implemented with filtfilt (no delay of the output with
    respect to the input) and therefore it is a noncausal filter.          83

8.3 Result of the highpass filtering operation. Notice that the highpass
    filter eliminated the signal trend.                                    84

CHAPTER 1

Introduction to Sampling and Aliasing

1.1 Objective

The objective of this lab is to explore the concept of sampling and reconstruction of analog
signals. Specifically, we will simulate the sampling process of an analog signal using MATLAB,
investigate the effect of sampling in the time domain, and introduce the concept of aliasing.
In a later lab, after studying the DTFT and DFT, we will revisit this topic and explore these
two concepts in the frequency domain.

1.2 Theoretical Introduction

The signals that we encounter in the real world are mostly analog signals, which vary
continuously in time and amplitude. These signals can be processed using analog filters.
Analog filters are implemented using electrical networks containing active and passive circuit
elements. In many cases, however, we are interested in processing real world analog signals
digitally. In order to process analog signals using digital hardware such as digital signal
processors (DSPs), one needs to convert the analog signals into a digital form suitable
for digital hardware. The conversion of an analog signal to a digital form is performed by an
analog-to-digital converter (ADC), which produces a stream of binary numbers from analog
signals, that is, it takes samples of the analog signal and digitizes the amplitude at these
discrete times. It is for this reason that a digital signal is said to be discrete in time and
discrete in amplitude. Prior to the ADC conversion, an analog filter called the prefilter or
antialiasing filter is applied to the analog signal in order to prevent an effect known as aliasing,
which we will explore in this laboratory experiment. When the signal is in a digital form, it can
be processed by a digital signal processor (DSP). After performing the digital signal processing,
the signal must be converted back into an analog signal. This is done by a digital-to-analog
converter (DAC) and an analog postfilter which smooths out the staircase waveform.
In order to do digital signal processing we must first perform other tasks, and it may appear
that DSP is more complicated than analog signal processing (ASP). We must therefore ask ourselves the question:
Why does anybody want to process the signal digitally? The answer to this question is
that DSP provides many advantages over ASP. The primary reasons for using DSP are
programmability, reliability, accuracy, availability, and cost of the digital hardware. Systems
using the DSP approach can be programmed, and can be tested or reprogrammed to perform a different
DSP task easily. Also, DSP operations are based solely on additions and multiplications, which
lead to very stable processing (stability independent of temperature, etc.). Furthermore, DSP
operations can easily be modified in real-time by simple programming changes or reloading of
registers.

1.2.1 Sampling and aliasing

In this lab we focus our attention on the process of sampling and how to avoid the so-called
problem of aliasing. During sampling, an analog signal x(t) is periodically measured every T
seconds, that is, time is discretized in units of the sampling interval or sampling period T :
t = nT,    n = 0, 1, 2, ...

where T is the sampling period. The inverse of T is called the sampling frequency, that is,
the number of samples per second:

fs = 1/T

When we sample an analog signal we must make sure that we are taking enough samples
(sampling fast enough) so that the samples are a good representation of the original analog
signal. In some cases, when the sampling frequency fs is not fast enough the samples taken
may not represent the original analog signal. The potential confusion of the original signal
with another of a different frequency is known as aliasing. Aliasing can be avoided if one
satisfies the conditions of the sampling theorem.


The sampling theorem provides a quantitative answer to the question of how to choose
the sampling time interval T or the sampling frequency fs.
Sampling Theorem 1 For accurate representation of a signal x(t) by its time samples
x(nT ), two conditions must be met: 1) The signal x(t) must be bandlimited, that is, its
frequency contents must be limited to contain frequencies up to some maximum frequency
fmax and no frequencies beyond that, and 2) the sampling rate fs must be chosen to be at least
twice the maximum frequency fmax , that is,
fs ≥ 2 fmax
According to the sampling theorem, before sampling we must make sure the signal is
bandlimited (this is the function of the analog prefilter) and that the sampling frequency is
at least twice the maximum frequency.
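The folding behind aliasing can be checked numerically. The manual's examples use MATLAB; the short sketch below is plain Python (not part of the original manual; `alias_frequency` is a helper introduced here) that computes the apparent frequency of a sampled tone, which always lands in the band [0, fs/2].

```python
def alias_frequency(f, fs):
    """Apparent frequency (Hz) of a tone at f Hz when sampled at fs Hz.

    The sampled tone is indistinguishable from a tone at the returned
    frequency, the distance from f to the nearest multiple of fs.
    """
    return abs(f - fs * round(f / fs))

# fs = 25 Hz satisfies fs >= 2*fmax = 5 Hz: both components survive.
print(alias_frequency(0.5, 25.0))   # 0.5
print(alias_frequency(2.5, 25.0))   # 2.5
# fs = 1 Hz violates the theorem: the 2.5 Hz component appears at 0.5 Hz.
print(alias_frequency(2.5, 1.0))    # 0.5
```

This is the same folding the third subplot of the upcoming example exhibits: the 2.5 Hz component of the test signal shows up at 0.5 Hz when fs = 1 Hz.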

1.3 Computer Exploration

In order to explore the concepts of sampling and aliasing we can perform a MATLAB
simulation where we create an analog signal (simulated), take samples at different frequencies,
and observe the effect of fs .

1.3.1 Procedure

1. Simulate an analog signal with different frequency components.


2. Take samples at different sampling frequencies.
3. Plot the results.


1.3.2 Example

As an example, let's follow this procedure to simulate the effect of sampling the analog signal:

xa(t) = 3 sin(πt) + 2 sin(5πt)

This signal contains two frequency components at f1 = 1/2 Hz and f2 = 2.5 Hz. We explore the
effect of sampling and aliasing by sampling xa(t) with three different sampling frequencies:
fs1 = 10 fmax = 25, fs2 = 2.5 fmax = 6.25, and fs3 = 1. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency, and therefore we should expect to see aliasing. Figure 1.1
shows the results of the MATLAB simulation.



Figure 1.1: The first plot shows xa(t) and the next three plots show the effect of sampling at
fs1 = 10 fmax = 25, fs2 = 2.5 fmax = 6.25, and fs3 = 1. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency. We can see how in this case the reconstructed signal is
an aliased version of the original at a lower frequency.


1.4 Example MATLAB code

Below we show the MATLAB code used to generate this example:


%==================================================
% Simulated Analog Signal: f1 = 1/2 Hz, f2 = 2.5 Hz
%==================================================
f1 = 1/2;                 % f1 = 1/2 Hz
f2 = 2.5;                 % f2 = 2.5 Hz
T  = 1/1000; N = 1000; n = 0:10*N; t = n*T;
x  = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*f2.*t);
subplot(4,1,1);
h = plot(t,x); ylabel('x(t)'); box off;

%==================================================
% Sampling at fs = 10*fmax = 10*f2
%==================================================
fs = 10*f2;               % Sampling Frequency
T  = 1/fs;                % Sampling Period
n  = 0:10*fs;             % Plot 10 seconds
k  = n*T;                 % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(4,1,2);
h = plot(t,x,':',k,xs,'r.'); ylabel('x(nT)');

%==================================================
% Sampling at fs = 2.5*fmax = 2.5*f2
%==================================================
fs = 2.5*f2;              % Sampling Frequency
T  = 1/fs;                % Sampling Period
n  = 0:10*fs;             % Plot 10 seconds
k  = n*T;                 % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(4,1,3);
h = plot(t,x,':',k,xs,'r.'); ylabel('x(nT)');

%==================================================
% Sampling at fs = 1   (fs < 2*fmax = 2*f2)
%==================================================
fs = 1;                   % Sampling Frequency
T  = 1/fs;                % Sampling Period
n  = 0:10*fs;             % Plot 10 seconds
k  = n*T;                 % Time Index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
xa = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*0.5*t);  % 2.5 Hz component aliased to 0.5 Hz
subplot(4,1,4);
h = plot(t,x,':',t,xa,'k',xs,'r.');
xlabel('Time, s');
ylabel('x(nT)');
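As a cross-check on the aliasing shown in the last subplot (outside the manual's MATLAB, in plain Python): when fs is below 2 fmax, the 2.5 Hz component produces exactly the same samples as a lower-frequency sine. The sketch uses fs = 2 Hz as an added assumption, since there the 2.5 Hz sine is sample-for-sample identical to a nonzero 0.5 Hz sine; the fs = 1 Hz case above folds the same component to 0.5 Hz.

```python
import math

# Assumption: fs = 2 Hz chosen for a nondegenerate illustration
# (still below 2*fmax = 5 Hz, so aliasing occurs; 2.5 Hz - fs = 0.5 Hz).
fs = 2.0
for n in range(40):
    t = n / fs
    high = math.sin(2 * math.pi * 2.5 * t)  # the 2.5 Hz component of xa(t)
    low  = math.sin(2 * math.pi * 0.5 * t)  # its 0.5 Hz alias
    assert abs(high - low) < 1e-9
print("2.5 Hz and 0.5 Hz sines agree at every sample taken at fs = 2 Hz")
```

No finite set of samples taken at this rate can distinguish the two analog signals, which is why the reconstructed waveform appears at the lower frequency.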


1.5 Tasks

Use MATLAB to perform the following computer exploration:


1. Simulate the continuous-time signal xa(t) = cos(πt) + cos(3πt) + cos(5πt) + cos(7πt) by
sampling at a high sampling frequency (fs = 50 fmax).
2. Plot, observe, and describe the effect of sampling xa(t) with fs = 10 fmax.
3. Plot, observe, and describe the effect of sampling xa(t) with fs = 5 fmax.
4. Plot, observe, and describe the effect of sampling xa(t) with fs = 2 fmax.
5. Plot, observe, and describe the effect of sampling xa(t) with fs = fmax without a prefilter.
6. Plot, observe, and describe the effect of sampling xa(t) with fs = fmax and an ideal prefilter.


CHAPTER 2

Discrete-Time Signals

2.1 Objective

In this laboratory we will create MATLAB functions to efficiently implement the most common
discrete-time signals such as the unit sample sequence, the unit step sequence, real and
complex-valued exponential sequences, and sinusoidal sequences. Furthermore, we will explore
the concept of unit sample synthesis.

2.2 Theoretical Introduction

A discrete-time signal is a sequence of numbers. In this lab manual we will denote a
continuous-time signal as x(t), where t represents the independent time variable. We will use x(n) to
denote a discrete-time signal, where n is the time index. We can use MATLAB to implement
any finite-duration sequence by a row vector of appropriate values. A vector, however, does
not have any information about the sample position n. For this reason we need to use two
vectors to represent a signal x(n): one for x and another for n. For example, a sequence
x(n) = {2, 3, 5, −2, 9, 7}, where the underlined value represents the sample at time zero, can
be represented in MATLAB by:

n = [-2,-1,0,1,2,3]; x = [2, 3, 5, -2, 9, 7];
In the cases where the sample position information is not needed we can represent a signal
using the vector x alone.
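The same two-vector idea can be mirrored outside MATLAB. The Python sketch below is not from the manual (`sample_at` is a hypothetical helper added here): it stores values and positions in parallel lists and looks a sample up by its time index.

```python
# Parallel lists: x holds the sample values, n the sample positions.
n = [-2, -1, 0, 1, 2, 3]
x = [2, 3, 5, -2, 9, 7]

def sample_at(x, n, n0):
    """Return x(n0), or 0 when n0 falls outside the stored support."""
    return x[n.index(n0)] if n0 in n else 0

print(sample_at(x, n, 0))    # 5, the sample at time zero
print(sample_at(x, n, -2))   # 2, the first stored sample
print(sample_at(x, n, 7))    # 0, outside the support
```

Returning 0 outside the stored range reflects the convention that a finite-duration sequence is zero wherever it is not explicitly defined.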

There are a few discrete-time signals that we encounter very often. For this reason, it
is convenient to create MATLAB functions that can be used to implement these important
sequences.


2.2.1 Unit Sample Sequence

The unit sample sequence δ(n) can be implemented using the logical relation n==0, as
follows:¹
function [x,n] = ImpSeq(n0,n1,n2)
%ImpSeq: Generates x(n) = delta(n-n0); n1 <= n <= n2
%
% [x,n] = ImpSeq(n0,n1,n2)
%
n = [n1:n2]; x = [(n-n0) == 0];

2.2.2 Unit Step Sequence


function [x,n] = StepSeq(n0,n1,n2)
%StepSeq: Generates x(n) = u(n-n0); n1 <= n <= n2
%
% [x,n] = StepSeq(n0,n1,n2)
%
n = [n1:n2]; x = [(n-n0) >= 0];

2.2.3 Real and Complex-Valued Exponential Sequences

The MATLAB operator for exponentiation can be used to implement a real exponential
sequence. For example, to implement x(n) = (0.8)^n, 0 ≤ n ≤ 10, we can use:

n = [0:10];  x = (0.8).^n;

In the case of complex-valued sequences we must use the MATLAB function exp. For
example, to generate x(n) = e^((3+j2)n), 0 ≤ n ≤ 10, we write:

n = [0:10];  x = exp((3+2j)*n);

¹ The implementation is based on the code given by V. K. Ingle and J. G. Proakis in Digital Signal
Processing Using MATLAB, BookWare Companion Series. The MATLAB code can be downloaded from the
book's website.

2.2.4 Other Functions and Operations

It is also important to create MATLAB functions to perform operations on the basic functions
described above. For instance, we must be able to multiply, add, perform time shifts, folding,
even and odd decompositions, sample summations (sum), sample products (prod), etc. An
important fact is that any arbitrary sequence x(n) can be synthesized as a weighted sum of
delayed and scaled unit sample sequences, that is,

x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)

This result will be used to develop the concept of convolution, and the response of a linear
time-invariant (LTI) system to an arbitrary input.
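The synthesis identity above is easy to verify numerically. The sketch below is plain Python rather than the manual's MATLAB: it rebuilds a finite sequence as a weighted sum of shifted unit samples.

```python
def delta(n, k=0):
    """Unit sample sequence delta(n - k): 1 at n == k, 0 elsewhere."""
    return 1 if n == k else 0

x = [1, 2, 4, 5, 9, -3, -2, -1, 4]        # x(n) for n = 0, ..., 8
# x(n) = sum_k x(k) * delta(n - k), restricted to the finite support
synthesized = [sum(x[k] * delta(n, k) for k in range(len(x)))
               for n in range(len(x))]
print(synthesized == x)   # True
```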

2.3 Computer Exploration

In this section we show some examples of how to use the basic signals to create more complex
ones. Furthermore, we explore the concept of how to synthesize any signal as a weighted sum
of delayed and scaled unit sample sequences.

2.3.1 Example

As an example, let's generate and plot the following sequences:


1. x(n) = 2δ(n + 3) + δ(n + 2) − δ(n) − 3δ(n − 4), −10 ≤ n ≤ 10
2. x(n) = [1, 2, 4, 5, 9, −3, −2, −1, 4] using a sum of δ(n)
3. x(n) = e^((−0.2+j0.3)n), −10 ≤ n ≤ 10.
4. A cosine at 30 Hz in Gaussian noise with zero mean and unit variance.


Figure 2.1: Plots showing discrete-time signals generated using MATLAB. The upper left plot
corresponds to x(n) = 2δ(n + 3) + δ(n + 2) − δ(n) − 3δ(n − 4), −5 ≤ n ≤ 5; the upper right
plot is the sequence x(n) = [1, 2, 4, 5, 9, −3, −2, −1, 4] generated using a sum of delayed
δ(n). The bottom left plot is a complex exponential x(n) = e^((−0.2+j0.3)n), −10 ≤ n ≤ 10, and
the bottom right corresponds to a cosine at 30 Hz in Gaussian noise with zero mean and unit
variance.


2.4 Example MATLAB code


%==================================================
% Generate and Plot x1(n)
%==================================================
n = [-10:10];
x = 2*impseq(-3,-10,10)+1*impseq(-2,-10,10) ...
- impseq(0,-10,10) - 3*impseq(4,-10,10);
subplot(2,2,1); stem(n,x);
xlabel(Time Index, n); box off;

%==================================================
% Generate and Plot x2(n)
%==================================================
n = [-2:10];
x = 1*impseq(0,-2,10)+ 2*impseq(1,-2,10) ...
+ 4*impseq(2,-2,10)+ 5*impseq(3,-2,10) ...
+ 9*impseq(4,-2,10)- 3*impseq(5,-2,10) ...
- 2*impseq(6,-2,10)- 1*impseq(7,-2,10) ...
+ 4*impseq(8,-2,10);
subplot(2,2,2); stem(n,x);
xlabel(Time Index, n); box off;

%==================================================
% Generate and Plot x3(n)
%==================================================
n

= [-10:10]; alpha = -0.2+0.3*j;

= real(exp(alpha*n));

subplot(2,2,3); stem(n,x);
xlabel(Time Index, n); box off;

%==================================================
% Generate and Plot x4(n)
%==================================================
n

= [-10:10];

% Time Index

fs

= 12*30;

% Sampling Frequency

ns

= 0.2*randn(1,length(n)); % Gaussian Noise

= sin(2*pi*30*n.*1/fs);

% Signal

xn

= x+ns;

% Signal + Noise

subplot(2,2,4);
stem(n,xn); xlabel(Time Index, n); box off;


2.5 Analysis

Use MATLAB to perform the following computer exploration:


1. Generate the real-valued signal x(n) = a^n, −10 ≤ n ≤ 10, for at least four different
values of a. Consider the cases where a is positive, negative, between −1 and 1, etc.
Plot the results as a subplot.
2. Generate the complex-valued signal x(n) = e^((σ+jω)n) for different values of σ and ω. Plot
the real and imaginary parts, and the magnitude and phase.
3. Generate a cosine waveform at 125 Hz. Use a sampling frequency fs of 15 kHz.
4. Using the cosine sequence generated in the previous part, add Gaussian noise to it and
plot the results.


CHAPTER 3

Discrete-Time Systems and Convolution

3.1 Objective

In this laboratory we investigate discrete-time systems. In particular, we will focus our attention
on a subclass of systems known as linear time-invariant (LTI) systems. We will see that in the
case of an LTI system we can write an expression for the output of the system to an arbitrary
input. Using the concepts of impulse response and convolution, we will introduce the concept
of digital filtering.

3.2 Theoretical Introduction

A discrete-time system can be modelled mathematically as a transformation, a function, or
an operator that takes an input sequence and produces an output sequence, that is,

y(n) = T[x(n)]

In the context of DSP we say that a system processes an input signal and produces an output
signal. In general, systems are divided into two broad classes: linear and nonlinear. Linear
systems are those that obey the principle of superposition,

L[a1 x1(n) + a2 x2(n)] = a1 L[x1(n)] + a2 L[x2(n)]

Linear systems can be further subdivided into two classes: time-invariant and time-variant
(or time-dependent). A linear system in which an input-output pair is invariant to a time
shift n is called a linear time-invariant (LTI) system.

LTI systems are very important in practice because there is a well developed mathematical
theory that enables us to analyze, design, and study these systems in great detail. In particular,
we can completely characterize LTI systems by their impulse response. If we have an LTI


system and we want to predict what the system does, the only thing we need to do is to input
an impulse and record the output (the impulse response). Once we have the impulse response,
we can use the convolution sum to find the output of the system for an arbitrary input.

3.2.1 Development of the Convolution Sum

Let's try to find an expression for the output of an LTI system for an arbitrary input. Since
we are not making any assumptions about the input sequence x(n), we must find a way to
express this sequence in terms of some other sequence for which we do know what the output
is. By definition, the output of an LTI system L[·] to a unit sample is the impulse response,
denoted h(n). We can, therefore, express the arbitrary input signal x(n) using δ(n − k) as our
basis functions. This is convenient because we know the output of the system for a single
impulse. The procedure to develop an expression for the output of the system to an arbitrary
input is as follows:
1. Goal: we want to find an expression for y(n) for any input signal x(n).

2. Given: we know that the output of the system when the input is an impulse is h(n):

   y(n) = L[x(n)] = L[δ(n)] = h(n)

3. Since we know what the output is for an impulse δ(n), we just have to represent the
input signal x(n) using the impulses as our basis functions:

   x(n) = Σ_{k=−∞}^{∞} x(k) δ(n − k)

4. Since the system is time invariant, we know that

   L[δ(n − k)] = h(n − k)

5. Therefore, we can express the output of the system as

   y(n) = L[x(n)] = L[ Σ_{k=−∞}^{∞} x(k) δ(n − k) ] = Σ_{k=−∞}^{∞} x(k) L[δ(n − k)] = Σ_{k=−∞}^{∞} x(k) h(n − k)

where we have used the fact that the system is linear and time-invariant.

The expression we obtained for the output y(n) is called the linear convolution sum and
is denoted by y(n) = x(n) ∗ h(n).
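The derivation translates directly into code. The sketch below is plain Python, not the manual's MATLAB (`convolve` is a helper written for this note): it evaluates the linear convolution sum for finite-length sequences, where inputs of lengths N and M yield an output of length N + M − 1.

```python
def convolve(x, h):
    """Linear convolution sum: y(n) = sum_k x(k) * h(n - k)."""
    y = [0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):   # h(n - k) is zero outside its support
                y[n] += x[k] * h[n - k]
    return y

print(convolve([1, 2, 3], [1, 1]))   # [1, 3, 5, 3]
```

MATLAB's built-in conv performs the same computation; writing out the double loop makes the sum's index bookkeeping explicit.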

3.2.2 Stability and Causality

In linear system theory we are interested in systems that are stable. There are several
definitions of stability. In general, we can say that a system is stable if bounded input signals
produce bounded output responses (BIBO stability). In terms of the impulse response, an
LTI system is stable if its impulse response is absolutely summable, that is,

Σ_{n=−∞}^{∞} |h(n)| < ∞

Another important concept is causality. We say that a system is causal if the output at
time n0 depends only on the inputs at time n0 and before, but not on future values of the
input. In general, only causal systems can be implemented in real time. However, in cases
where the system is not completely causal we may still be able to implement it in a
close-to-real-time fashion by using a buffer.
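The absolute-summability condition can be probed numerically for the one-parameter family h(n) = aⁿ u(n). The Python sketch below is not from the manual and is a finite partial-sum heuristic, not a proof: it flags the geometric impulse response as stable for |a| < 1, where the sum converges to 1/(1 − |a|), and unstable for a growing response.

```python
def is_bibo_stable(a, terms=10_000, bound=1e6):
    """Heuristic check: partial sums of |a|**n stay bounded for |a| < 1."""
    total = 0.0
    for n in range(terms):
        total += abs(a) ** n
        if total > bound:
            return False   # partial sums blow up: not absolutely summable
    return True

print(is_bibo_stable(0.8))   # True  (sum converges to 1/(1 - 0.8) = 5)
print(is_bibo_stable(1.1))   # False (impulse response grows without bound)
```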

3.3 Computer Exploration

In this section we show some examples of how to use the concept of convolution in a practical
sense. In particular, we can see how a system with a very simple impulse response can be
used as a filter to smooth noisy signals.

3.3.1 Procedure

1. Generate a sinusoidal signal in Gaussian noise.
2. Generate a sequence for the impulse response h(n).
3. Using convolution, filter the noisy input signal x(n) with the filter characterized by h(n).
4. Plot the results and experiment using different numbers of filter coefficients and shapes
of h(n).

3.3.2 Example

As an example, let's generate a sinusoidal signal with a frequency of 10 Hz, sampled at 300 Hz, which is corrupted by zero-mean, unit-variance Gaussian noise. Next, we generate a sequence h(n) = (1/8)[u(n) − u(n − 8)], that is, a constant impulse response. Using convolution we filter the noisy input signal x(n) with the filter characterized by h(n). Finally, we plot the results and experiment using different numbers of filter coefficients and shapes of h(n).

In Figure 3.1 we show the output of the system presented in the previous example. Notice that the use of this filter with a very simple impulse response can help to reduce the Gaussian noise. The output of the system is less noisy than the original sequence. This system is called a moving-average filter, because the output is an average, that is,

   y(n) = (1/8)[x(n) + x(n − 1) + x(n − 2) + x(n − 3) + x(n − 4) + x(n − 5) + x(n − 6) + x(n − 7)]

In this case all the coefficients are constant and equal to 1/8. Notice the relationship between this expression and convolution. We can rewrite y(n) as

   y(n) = Σ_{k=0}^{7} (1/8) x(n − k) = (1/8) Σ_{k=0}^{7} x(n − k)

In general, the coefficients don't have to be constant:

   y(n) = Σ_{k=0}^{7} h(k) x(n − k) = Σ_{k=0}^{7} x(k) h(n − k)

This equation is the convolution sum of a finite impulse response (FIR) filter of order 7. The problem in digital filter design is how to choose the h(n) coefficients so that we can filter out the undesired noise present in the input signal. We are usually interested in filtering out some particular frequencies while leaving the others unchanged; therefore we need techniques that allow us to find specific coefficients to remove specific frequencies. This design is done in the frequency domain. We will revisit this topic once we have introduced the concepts of DTFT, DFT, and FFT.
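The noise-reducing effect of the moving average is easy to verify numerically. A plain-Python sketch (our own, independent of the MATLAB listing below): filtering white Gaussian noise with an M-point average reduces its variance by roughly a factor of M.

```python
import random

def moving_average(x, M=8):
    """FIR moving average y(n) = (1/M) sum_{k=0}^{M-1} x(n-k), zeros before n=0."""
    return [sum(x[n - k] for k in range(M) if n - k >= 0) / M
            for n in range(len(x))]

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
smoothed = moving_average(noise)

var = lambda v: sum(s * s for s in v) / len(v)
# Averaging M roughly independent samples divides the noise variance by about M.
print(var(noise), var(smoothed))
```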
Figure 3.1: Example of a discrete system that smoothes an input signal and improves the
signal to noise ratio. The underlying signal is a 10 Hz sinusoid sampled at 300 Hz which is
corrupted by Gaussian noise. Notice the output is a smooth delayed version of the input.

3.4 Example MATLAB code

%==================================================
% Generate sinusoidal signal corrupted by noise
%==================================================
N  = 150;                  % Number of samples
n  = 1:N;                  % Time index
fs = 300;                  % Sampling frequency
T  = 1/fs;                 % Sampling period
f  = 10;                   % Frequency of sinusoid
x  = cos(2*pi*f*n.*T);     % Clean sinusoidal signal
a  = 0.25;                 % Noise standard deviation
ns = a*randn(1,length(x)); % Gaussian noise
xn = x+ns;                 % Signal in noise

%==================================================
% Generate the impulse response
%==================================================
h  = 1/8*stepseq(0,0,7);

%==================================================
% Filter noisy signal using h(n), xf(n) = xn(n)*h(n)
%==================================================
xf = conv(xn,h);           % Filtered output

%==================================================
% Plotting of results
%==================================================
h1 = plot(n*T, xn, (1:length(xf))*T, xf);
axis tight; box off;


3.5 Analysis

Use MATLAB to perform the following computer exploration:

1. Download the ICP signal from the class website. Plot the signal in the time domain (fs = 125 Hz).

2. Filter the signal using an impulse response of the form h(n) = u(n) − u(n − M), where M is a user-specified parameter. Filter the signal and plot the results for different values of M. Comment on the results.

3. MATLAB has two functions that can be used to implement filters by providing the coefficients: one of them is called filter and the other filtfilt. Using the MATLAB help, repeat the first experiment using both functions. Comment on the differences. Is a filter implemented using filtfilt a causal system?

4. Use the moving average filter to smooth the ICP and eliminate the high-frequency effects due to quantization.

5. Use the moving average filter to eliminate frequency components higher than the fundamental component. Try to filter the ICP signal in such a way that the filtered signal will be as sinusoidal as possible.

6. Use the moving average filter to eliminate the ICP trend.

CHAPTER 4

DTFT, DFT, and FFT

4.1 Objective

In this laboratory we will introduce the tools to perform Fourier analysis in discrete time, namely the Discrete-Time Fourier Transform (DTFT), the Discrete Fourier Transform (DFT), and an efficient algorithm to compute the DFT called the Fast Fourier Transform (FFT).

4.2 Theoretical Introduction

We have already seen how any signal can be decomposed as a weighted sum of delayed unit sample impulses. We saw that representing the input signal using impulses as our basis functions was useful in the development of the convolution sum.

Fourier analysis consists of decomposing a signal as a sum of sinusoids (or complex exponentials). Depending on whether we are working in continuous time or discrete time, and whether we are working with periodic or nonperiodic signals, there are different Fourier tools to perform this decomposition. In continuous time, we have the Fourier Series (FS) and the Fourier Transform (FT) for working with periodic and nonperiodic signals, respectively. Analogously, in discrete time we have the Discrete Fourier Series (DFS) and the Discrete-Time Fourier Transform (DTFT).

Most signals of practical interest can be decomposed into a sum of sinusoidal signal components. These decompositions are extremely important in the analysis and design of LTI systems because complex exponentials (sinusoids) are eigenfunctions of LTI systems. This means that the response of an LTI system to a sinusoidal input signal is another sinusoid of the same frequency but of different amplitude and phase.
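The eigenfunction property is easy to verify numerically. A plain-Python sketch (our own check, using the 8-point moving average from the previous chapter as the LTI system): the steady-state response to cos(ωn) is |H(e^{jω})| cos(ωn + ∠H(e^{jω})).

```python
import math, cmath

h = [1 / 8.0] * 8                    # 8-point moving average as the LTI system
w = 2 * math.pi * 10 / 300           # digital frequency of a 10 Hz tone at fs = 300 Hz

# Frequency response H(e^{jw}) = sum_k h(k) e^{-jwk}
H = sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

x = [math.cos(w * n) for n in range(400)]
# FIR output, skipping the first len(h) samples (the transient)
y = [sum(h[k] * x[n - k] for k in range(len(h))) for n in range(len(h), len(x))]

# Predicted steady state: same frequency, amplitude |H|, phase shift angle(H)
pred = [abs(H) * math.cos(w * n + cmath.phase(H)) for n in range(len(h), len(x))]
err = max(abs(a - b) for a, b in zip(y, pred))
print(abs(H), err)   # err is at round-off level
```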

Frequency analysis involves the resolution of the signal into its frequency components (just as the resolution of light into its different colors). When we decompose a signal into its frequency components we are doing frequency analysis. The opposite problem, reconstructing the original signal from its frequency components, is known as frequency synthesis. The term spectrum is used to refer to the frequency content of a signal. The process of obtaining the spectrum of deterministic signals, signals for which we have a mathematical equation to represent them, is called frequency or spectral analysis. On the other hand, the process of determining the spectrum of the signals we encounter in practice, for which we do not have a mathematical formula, is called spectral estimation.

4.2.1 Definitions

DTFT

If x(n) is absolutely summable, then its discrete-time Fourier transform (DTFT) is given by:

   X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

The inverse discrete-time Fourier transform (IDTFT) of X(e^{jω}) is given by:

   x(n) = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω

DFS, DFT, and FFT

From the definition of the DTFT we can see that its computation involves summing an infinite number of terms. Furthermore, even though we are analyzing discrete-time signals, the DTFT is a function of the continuous frequency variable ω. For these two reasons, the DTFT is not a numerically computable transform. The discrete Fourier series (DFS) and the discrete Fourier transform (DFT) provide us with a mechanism for numerically computing the DTFT at specific points.

The discrete Fourier transform of a signal x(n) is defined as

   X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πkn/N},   k = 0, 1, 2, . . . , N − 1

This formula shows how to transform a sequence of length L ≤ N into a sequence of frequency samples X(k) of length N. These samples are obtained by evaluating the DTFT X(e^{jω}) at a set of N equally spaced discrete frequencies. From X(k) we can reconstruct the original sequence x(n) by using the inverse discrete Fourier transform (IDFT):

   x(n) = (1/N) Σ_{k=0}^{N−1} X(k) e^{j2πkn/N},   n = 0, 1, 2, . . . , N − 1

The fast Fourier transform (FFT) is not really a new transform; it is just a very efficient algorithm (actually, there are several FFT algorithms) used to compute the DFT. MATLAB provides a function called fft to compute the DFT of a vector x.
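The DFT definition can be implemented directly, which is useful for checking what the FFT computes. A plain-Python sketch of the definition (our own helper; the FFT returns the same values, only faster):

```python
import cmath

def dft(x):
    """X(k) = sum_{n=0}^{N-1} x(n) e^{-j 2 pi k n / N}, straight from the definition."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A complex exponential at bin 3 puts all of its energy into X(3).
N = 16
x = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]
X = dft(x)
print([round(abs(Xk)) for Xk in X])   # 16 at k = 3, 0 elsewhere
```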

4.2.2 Zero Padding, Windowing, and Other Considerations

The above definition of the DFT allows us to compute the DTFT of a signal x(n) at N equally spaced values. Sometimes this is a very coarse sampling of the DTFT. If we are interested in evaluating the DTFT at more than N frequencies, we can lengthen the original sequence x(n) by appending zeros at the end. This procedure for increasing the computational frequency resolution is called zero-padding. It is important to realize that even though we are computing the DFT at more points, we are not increasing the physical frequency resolution, which depends on the length of our window.
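This point can be demonstrated numerically: zero-padding only samples the same underlying DTFT on a denser grid. A plain-Python sketch (our own check, using a direct DFT): every 4th bin of a 4x zero-padded spectrum coincides exactly with the unpadded spectrum.

```python
import cmath, math

def dft_mag(x):
    """Magnitude of the DFT computed directly from the definition."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N)]

x = [math.cos(2 * math.pi * 0.3 * n) for n in range(16)]   # 16-sample window
padded = x + [0.0] * 48                                    # zero-pad to 64 points

# Both DFTs sample the SAME DTFT of the 16-sample window; padding only
# makes the frequency grid 4x denser, it does not sharpen the spectrum.
X16, X64 = dft_mag(x), dft_mag(padded)
print(max(abs(X64[4 * k] - X16[k]) for k in range(16)))    # ~0
```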

Any time we compute the DFT of an infinite sequence x(n) we must first truncate x(n) into a finite sequence. This operation can be modelled as a multiplication in the time domain by a rectangular window. Since multiplication in the time domain corresponds to convolution in the frequency domain, the effect of this operation is a convolution of the original spectrum of x(n) with a sinc function. The distortion introduced by the window causes the original spectrum to have artificial sidelobes, which correspond to the sidelobes of the sinc function. As we increase the length of our window we improve the frequency resolution, and the artifacts caused by the rectangular window become less significant. Depending on the application we may decide to use a window other than the rectangular one. In particular, there are many other windows which reduce the sidelobe leakage at the expense of increasing the mainlobe width. Some of the most important windows are the Hanning, Hamming, Blackman, Blackman-Harris, and triangular windows. MATLAB provides direct implementations for all of these.
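These windows are simple tapers that can be written in one line. As an illustration (our own sketch, using the symmetric Hann formula; MATLAB's hanning uses a slightly different endpoint convention):

```python
import math

def hann(N):
    """Symmetric Hann window w(n) = 0.5 - 0.5 cos(2 pi n / (N-1)), n = 0..N-1."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * n / (N - 1)) for n in range(N)]

w = hann(9)
# The taper rises smoothly from 0 at the edges to 1 in the middle, which
# lowers the sidelobes of the rectangular window at the price of a wider mainlobe.
print([round(v, 3) for v in w])   # [0.0, 0.146, 0.5, 0.854, 1.0, 0.854, 0.5, 0.146, 0.0]
```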

4.3 Computer Exploration

The most basic application of the DFT (or FFT) is to approximate the DTFT. Given a signal x(n) we are interested in performing frequency analysis on it, that is, calculating the spectral content (spectrum) of the signal. Next, we outline a simple computer exploration procedure to investigate the FFT as a tool for spectral analysis, and then show an example of how to perform this task.

4.3.1 Procedure

1. Generate a signal composed of several different frequency components.

2. Estimate the amplitude spectrum using the FFT.

3. Zero-pad to the next power of 2 and perform the FFT again.

4. Change the window length and repeat the experiments.

5. Change the type of window and repeat the experiment.

4.3.2 Example

As an example of calculating the DFT using the FFT we generated the following figure. The plot shows the time- and frequency-domain representations of a signal xa(t) = 3 sin(πt) + 2 sin(5πt) sampled at twice the maximum frequency (fs = 5 Hz). We can see how the magnitude of the DFT has two peaks, at 0.5 Hz and 2.5 Hz, as we expected. In the next plot we show the DFT of the same signal, but corrupted by Gaussian noise. Notice how we can still identify the two frequency components even though the signal is distorted. The final plot shows the FFT in dB scale. We can see the effect of having a finite rectangular window.

Figure 4.1: Example of calculating the DFT using the FFT. The plot shows the time- and frequency-domain representations of a signal xa(t) = 3 sin(πt) + 2 sin(5πt) sampled at twice the maximum frequency (fs = 5 Hz). We can see how the magnitude of the DFT has two peaks, at 0.5 Hz and 2.5 Hz, as we expected. In the next plot we show the DFT of the same signal, but corrupted by Gaussian noise. Notice how we can still identify the two frequency components even though the signal is distorted. The final plot shows the FFT in dB scale. We can see the effect of having a finite rectangular window.


4.4 Example MATLAB code

Below we show the MATLAB code used to generate this example:

%==================================================
% Sinusoidal signal with two frequencies
%==================================================
f1 = 0.5;                      % 0.5 Hz component
f2 = 2.5;                      % 2.5 Hz component
fs = 5*f2;                     % Sampling frequency
T  = 1/fs;                     % Sampling period
n  = 0:10*fs;                  % Plot 10 seconds
k  = n*T;                      % Time index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
subplot(3,2,1);
h = plot(k,xs); set(h,'Markersize',15);
ylabel('x(nT)'); box off;
axis tight;

%==================================================
% FFT of original sequence
%==================================================
subplot(3,2,2);
X = abs(fft(xs));              % FFT magnitude
f = (1:length(X))*fs/length(X);% Frequency axis
stem(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

%==================================================
% Signal with noise and FFT
%==================================================
xn = xs + 1.75*randn(1,length(xs));
subplot(3,2,3);
h = plot(k,xn); set(h,'Markersize',15);
ylabel('xn(nT)'); box off; axis tight;
subplot(3,2,4);
X = abs(fft(xn));              % FFT magnitude
f = (1:length(X))*fs/length(X);% Frequency axis
stem(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

%==================================================
% FFT plotted in dB scale
%==================================================
xn = xs + 0*randn(1,length(xs));
subplot(3,2,5);
h = plot(k,xn); set(h,'Markersize',15);
ylabel('xn(nT)'); box off; axis tight;
subplot(3,2,6);
X = 20*log10(abs(fft(xn)));    % FFT magnitude in dB
f = (1:length(X))*fs/length(X);% Frequency axis
plot(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

4.5 Analysis

Use MATLAB to perform the following computer exploration:

1. Perform the FFT of the same signal given in the example but using a different window length. Plot, and comment on the results.

2. Repeat the experiment using a different window type. How do the results obtained with this window compare with the ones using the rectangular window?

3. Generate 512 samples of a pure sinusoid and perform the FFT. Can you observe the effect of the window? Why?

4. Perform the FFT on the impulse response used in the previous lab, h(n) = u(n) − u(n − 10). Comment on the results.

5. Implement the DFT directly using the definition and compare its performance (speed of execution) with the FFT algorithm.

6. Since convolution in the time domain equals multiplication in the frequency domain, we could calculate the convolution of two sequences by first computing the FFT of each of them, multiplying them in the frequency domain, and then performing an inverse DFT. Create a function that uses this procedure to compute convolution.

CHAPTER 5

Sampling and Aliasing

5.1 Objective

In this laboratory we will revisit the topic of sampling and aliasing. We will discuss the effects of sampling using Fourier transforms. Looking at the spectra of sampled signals will enable us to get a better understanding of the concept of aliasing.

5.2 Theoretical Introduction

An ideal sampler instantaneously measures the analog signal x(t) at the sampling instants t = nT. We can consider the output of the sampling process to be an analog signal which consists of a linear superposition of impulses occurring at the sampling times. In this model, each impulse is weighted by the corresponding sample value, that is, x(t)δ(t − nT) = x(nT)δ(t − nT), so that

   xs(t) = x(t) Σ_{n=−∞}^{∞} δ(t − nT) = Σ_{n=−∞}^{∞} x(nT) δ(t − nT)

In practical sampling, each sample must be held constant for a short period of time for the A/D converter to accurately convert the sample to digital form. This process can be mathematically modelled by substituting for the impulses rectangular pulses p(t) of time duration τ ≪ T:

   xp(t) = Σ_{n=−∞}^{∞} x(nT) p(t − nT)

5.2.1 Spectra of Sampled Signals

The spectrum of the sampled signal can be obtained by finding the Fourier transform of xs(t):

   Xs(f) = ∫_{−∞}^{∞} xs(t) e^{−j2πft} dt

   Xs(f) = ∫_{−∞}^{∞} [ Σ_{n=−∞}^{∞} x(nT) δ(t − nT) ] e^{−j2πft} dt = Σ_{n=−∞}^{∞} x(nT) ∫_{−∞}^{∞} δ(t − nT) e^{−j2πft} dt

   Xs(f) = Σ_{n=−∞}^{∞} x(nT) e^{−j2πfnT}

We see that the spectrum of the sampled signal is the DTFT of x(nT). Furthermore, we see that Xs(f) is a periodic function of f with period fs, that is, Xs(f + fs) = Xs(f):

   Xs(f) = (1/T) Σ_{m=−∞}^{∞} X(f − m fs)

This equation indicates that the spectrum of a sampled signal is equal to a scaled version of the original analog spectrum periodically replicated at intervals of the sampling rate fs. We see that if x(t) is bandlimited to some maximum frequency fmax, the replicas are separated from each other by a distance d = fs − 2fmax. Therefore, if we are interested in the replicas not overlapping each other, we require that d ≥ 0, that is, fs ≥ 2fmax, which is the requirement to avoid aliasing introduced in the first laboratory. We see that aliasing occurs when the replicated spectra overlap, d < 0, since this overlapping results in a distortion of the original spectrum.
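When the replicas do overlap, a component at frequency f appears folded into the Nyquist interval. A plain-Python sketch of this folding rule (alias is our own hypothetical helper, not a MATLAB function):

```python
def alias(f, fs):
    """Frequency in [0, fs/2] at which a sampled tone of frequency f appears."""
    f = f % fs                     # the spectrum replicates every fs
    return fs - f if f > fs / 2 else f

# The 2.5 Hz component of the lab example, sampled at fs = 2 Hz,
# folds onto the same frequency as the 0.5 Hz component:
print(alias(2.5, 2.0))   # -> 0.5
print(alias(0.5, 2.0))   # -> 0.5
```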

5.2.2 Reconstruction

If the signal is bandlimited and fs is large enough so the replicas do not overlap, that is, fs ≥ 2fmax, then the portion of the spectrum Xs(f) that lies within the Nyquist interval [−fs/2, fs/2] will be identical to the original analog spectrum X(f) up to a scale constant:

   T Xs(f) = X(f),   −fs/2 ≤ f ≤ fs/2

5.3 Computer Exploration

As we saw in the first laboratory, in order to explore the concepts of sampling and aliasing
we can perform a MATLAB simulation where we create an analog signal (simulated), take
samples at different frequencies, and observe the effect of fs . In the first lab, we saw the effect
of sampling in the time domain. Now we explore this concept in the frequency domain by
using the DFT/FFT.

5.3.1 Procedure

1. Simulate an analog signal with different frequency components.

2. Take samples at different sampling frequencies.

3. Plot the results.

4. Take the FFT of the samples at the different sampling frequencies.

5. Plot the spectrum within the Nyquist interval and observe the frequency content of the sampled signal.

5.3.2 Example

As an example, let's follow this procedure to simulate the effect of sampling the analog signal we already saw in the first lab:

   xa(t) = 3 sin(πt) + 2 sin(5πt)

This signal contains two frequency components, at f1 = 1/2 Hz and f2 = 2.5 Hz. We explore the effect of sampling and aliasing by sampling xa(t) with three different sampling frequencies: fs1 = 10fmax = 25 Hz, fs2 = 2.5fmax = 6.25 Hz, and fs3 = 2 Hz. The first two sampling frequencies meet the sampling theorem requirement. On the other hand, the third sampling frequency is less than twice the maximum frequency and therefore we should expect to see aliasing. Figure 5.1 shows the results of the MATLAB simulation in the time domain. Figure 5.2 shows the frequency domain representation of the signals. We can see how in the first two cases, when the sampling theorem requirements were met, the spectrum of the sampled signal that lies within the Nyquist interval represents the original frequency content. On the other hand, we see that when we sampled the signal at a rate less than twice the maximum frequency, the spectrum is different.

Figure 5.1: The first plot shows xa (t) and the next three plots show the effect of sampling at
fs1 = 10fmax = 25, fs2 = 2.5fmax = 6.25, fs3 = 2. The first two sampling frequencies meet
the sampling theorem requirement. On the other hand, the third sampling frequency is less
than twice the maximum frequency. We can see how in this case the signal reconstructed is
an aliased version of the original at a lower frequency.


Figure 5.2: The three plots show the effect of sampling at fs1 = 10fmax = 25, fs2 = 2.5fmax =
6.25, fs3 = 2 in the frequency domain. The first two sampling frequencies meet the sampling
theorem requirement. On the other hand, the third sampling frequency is less than twice the
maximum frequency. The aliasing effect is evident in the third plot.


5.4 Example MATLAB code

Below we show the MATLAB code used to generate this example:

%==================================================
% Simulated Analog Signal: f1 = 1/2 Hz, f2 = 2.5 Hz
%==================================================
f1 = 1/2;                      % f1 = 1/2 Hz
f2 = 2.5;                      % f2 = 2.5 Hz
T  = 1/1000;                   % "Analog" time resolution
N  = 1000;
n  = 0:10*N;
t  = n*T;
x  = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*f2.*t);
figure(1); subplot(4,1,1);
h = plot(t,x); ylabel('x(t)'); box off;

%==================================================
% Sampling at fs = 10*fmax = 10*f2
%==================================================
fs = 10*f2;                    % Sampling frequency
T  = 1/fs;                     % Sampling period
n  = 0:10*fs;                  % Plot 10 seconds
k  = n*T;                      % Time index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
figure(1); subplot(4,1,2);
h = plot(t,x,':',k,xs,'r.');
set(h,'Markersize',15); ylabel('x(nT)');

figure(2); subplot(3,1,1);
X = abs(fft(xs));              % FFT magnitude
f = (1:length(X))*fs/length(X);% Frequency axis
stem(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

%==================================================
% Sampling at fs = 2.5*fmax = 2.5*f2
%==================================================
fs = 2.5*f2;                   % Sampling frequency
T  = 1/fs;                     % Sampling period
n  = 0:10*fs;                  % Plot 10 seconds
k  = n*T;                      % Time index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
figure(1); subplot(4,1,3);
h = plot(t,x,':',k,xs,'r.');
set(h,'Markersize',15); ylabel('x(nT)');

figure(2); subplot(3,1,2);
X = abs(fft(xs));              % FFT magnitude
f = (1:length(X))*fs/length(X);% Frequency axis
stem(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

%==================================================
% Sampling at fs = 2  (fs < 2*fmax = 2*f2)
%==================================================
fs = 2;                        % Sampling frequency
T  = 1/fs;                     % Sampling period
n  = 0:10*fs;                  % Plot 10 seconds
k  = n*T;                      % Time index
xs = 3*sin(2*pi*f1.*k) + 2*sin(2*pi*f2.*k);
xa = 3*sin(2*pi*f1.*t) + 2*sin(2*pi*0.5*t); % Aliased reconstruction (2.5 Hz -> 0.5 Hz)
figure(1); subplot(4,1,4);
h = plot(t,x,':',t,xa,'k',k,xs,'r.');
set(h,'Markersize',15);
xlabel('Time, s'); ylabel('x(nT)');

figure(2); subplot(3,1,3);
X = abs(fft(xs));              % FFT magnitude
f = (1:length(X))*fs/length(X);% Frequency axis
stem(f(1:end/2),X(1:length(X)/2)); box off; axis tight;

5.5 Analysis

Use MATLAB to perform the following computer exploration:

1. Sample xa(t) = sin(10000πt) + sin(30000πt) + sin(50000πt) + sin(70000πt) at a rate of 40 kHz.

2. Plot, observe, and describe the effect of sampling xa(t) without the use of any prefilter in the frequency domain.

3. Filter the signal xa(t) with an ideal lowpass filter with cutoff frequency at 30 kHz. Plot, observe, and describe the effect of sampling after using this filter in the frequency domain.

4. Filter the signal with an ideal lowpass filter with fc = 20 kHz (half the sampling frequency). Plot, observe, and describe the effect of sampling after using this filter in the frequency domain.

5. Filter the signal with a practical filter with a flat response up to 20 kHz, and with an attenuation of 40 dB/octave beyond 20 kHz. Plot, observe, and describe the effect of sampling after using this filter in the frequency domain.

6. Compare the results obtained with the ideal and the practical filter.

CHAPTER 6

The Z-Transform

6.1 Objective

In this laboratory we will introduce the Z-transform. This transform is a tool for the analysis, design, and implementation of digital filters. We will also introduce the concept of the system transfer function, which is defined in terms of the Z-transform of the impulse response.

6.2 Theoretical Introduction

Transform techniques such as the DTFT or the Z-transform are important tools in the analysis and design of linear time-invariant (LTI) systems. The DTFT allows us to represent a discrete-time signal in terms of complex exponentials. We also saw that this transform enabled us to study LTI systems in the frequency domain by taking the DTFT of the system's impulse response sequence, h(n), obtaining the frequency response function, H(e^{jω}). The frequency response function enabled us to study the sinusoidal steady-state response, and the response to any arbitrary signal x(n) for which the DTFT is defined, by evaluating the inverse DTFT of X(e^{jω})H(e^{jω}).

In this chapter we introduce a new transform, the Z-transform. The Z-transform can be considered an extension of the DTFT for two reasons: 1) it provides another domain in which a larger class of sequences and systems can be analyzed, and 2) it can be used to analyze transient system responses (not only steady state) and systems with initial conditions.

6.2.1 Definition

The Z-transform of a discrete-time signal x(n) is defined as the power series

   X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n}

where z is a complex variable, z = a + jb = re^{jω}. Since the Z-transform is an infinite power series, it only exists for those values of z for which this series converges. The problem of finding the region of convergence (ROC) of X(z) is equivalent to determining the range of values of r for which the sequence x(n)r^{−n} is absolutely summable. Using the fact that z = re^{jω}, we can express the Z-transform as

   X(z)|_{z=re^{jω}} = Σ_{n=−∞}^{∞} [x(n) r^{−n}] e^{−jωn}

We can easily see that the Z-transform reduces to the DTFT for r = 1; that is, the evaluation of the Z-transform on the unit circle z = e^{jω} provides information about the frequency spectrum:

   X(z)|_{z=e^{jω}} = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
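For a finite sequence this identity can be checked numerically: evaluating X(z) at z = e^{jω} must equal the DTFT sum at that frequency. A plain-Python sketch (our own check, on an arbitrary finite sequence):

```python
import cmath

x = [1.0, 2.0, 5.0, 8.0, 0.0, 1.0]            # finite sequence starting at n = 0

def X_of_z(z):
    """Z-transform X(z) = sum_n x(n) z^{-n} of the finite sequence above."""
    return sum(xn * z ** (-n) for n, xn in enumerate(x))

w = 0.7                                        # an arbitrary digital frequency
dtft = sum(xn * cmath.exp(-1j * w * n) for n, xn in enumerate(x))

# On the unit circle z = e^{jw}, the Z-transform reduces to the DTFT.
print(abs(X_of_z(cmath.exp(1j * w)) - dtft))   # ~0
```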

6.2.2 Causality and Stability

The concepts of causality and stability can be redefined in terms of the Z-transform.

Causal (right-sided) signals are characterized by ROCs that are outside the maximum pole circle. Anticausal signals have ROCs that are inside the minimum pole circle. Mixed signals have ROCs that are an annular region between two circles, with the poles that lie inside the inner circle contributing causally and the poles that lie outside the outer circle contributing anticausally.

Stability can also be characterized in the z-domain in terms of the ROC. A necessary and sufficient condition for the stability of a signal x(n) is that the ROC of the corresponding Z-transform contains the unit circle.

For a system to be simultaneously stable and causal, it is necessary that all its poles lie strictly inside the unit circle in the z-plane, that is,

   max_i |p_i| < 1

A signal or system can also be simultaneously stable and anticausal. In this case, all its poles must lie strictly outside the unit circle:

   min_i |p_i| > 1

Stability is more important in DSP than causality, since it is required in order to avoid numerically divergent computations.
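The pole test is easy to apply numerically. A plain-Python sketch (our own check, using the denominator of the resonator example in Section 6.3.2; the poles of the quadratic denominator come from the quadratic formula):

```python
import cmath, math

# Denominator 1 - 1.87834 z^{-1} + 0.975156 z^{-2}, i.e. z^2 - 1.87834 z + 0.975156
b, c = -1.87834, 0.975156
disc = cmath.sqrt(b * b - 4 * c)
poles = [(-b + disc) / 2, (-b - disc) / 2]

# A causal LTI system is stable iff every pole lies strictly inside |z| = 1.
print([abs(p) for p in poles])            # complex-conjugate pair, |p| = sqrt(c)
print(max(abs(p) for p in poles) < 1)     # True: stable and causal
```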

6.2.3 Finding Z-transforms Analytically

Finding the Z-transform of a sequence involves applying the definition and finding the ROC. For example, given

   x(n) = {1, 2, 5, 8, 0, 1}

(with the time origin at the sample of value 5), the Z-transform is

   X(z) = z^2 + 2z + 5 + 8z^{−1} + z^{−3}

with the ROC being the entire z-plane except z = 0 and z = ∞.

Useful Infinite Geometric Series

Two infinite series are very useful in finding Z-transforms:

   Σ_{n=0}^{∞} x^n = 1 + x + x^2 + x^3 + . . . = 1/(1 − x)

which converges for |x| < 1 and diverges otherwise; and

   Σ_{n=1}^{∞} x^n = x + x^2 + x^3 + . . . = x/(1 − x)
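Both closed forms are easy to sanity-check numerically by summing enough terms. A plain-Python sketch (our own check):

```python
def geom_sum(x, n_terms=200):
    """Partial sum of sum_{n=0}^{inf} x^n; converges to 1/(1-x) for |x| < 1."""
    return sum(x ** n for n in range(n_terms))

x = 0.5
print(geom_sum(x), 1 / (1 - x))        # both ~2
# Dropping the n = 0 term gives the series starting at n = 1:
print(geom_sum(x) - 1, x / (1 - x))    # both ~1
```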

Examples

As an example of how to use the previous equations, let's find the Z-transform of

   x(n) = (0.5)^n u(n) = {1, 0.5, 0.5^2, 0.5^3, . . . }

Its Z-transform is:

   X(z) = Σ_{n=−∞}^{∞} (0.5)^n u(n) z^{−n} = Σ_{n=0}^{∞} (0.5)^n z^{−n} = Σ_{n=0}^{∞} (0.5z^{−1})^n = 1/(1 − 0.5z^{−1}),   |0.5z^{−1}| < 1 ⇔ |z| > 0.5

Notice that the Z-transform of the anticausal signal x(n) = −(0.5)^n u(−n − 1) is

   X(z) = 1/(1 − 0.5z^{−1}),   |z| < 0.5

so a time signal x(n) is uniquely determined only by the Z-transform X(z) together with its ROC.

6.2.4 Inverse Z-transform

The definition of the inverse Z-transform requires the evaluation of a complex contour integral. In general, we do not use the definition to find the inverse Z-transform. Instead, we use partial fraction expansions. The central idea is that when X(z) is a rational function of z^{−1}, we can express it as a sum of simple first-order factors using the partial fraction expansion, which can then be inverted by inspection.
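The idea can be sketched on a small example of our own choosing (not one from the manual): X(z) = 1/((1 − 0.5z⁻¹)(1 − 0.25z⁻¹)) expands into two first-order terms whose residues follow from the cover-up rule, and each term inverts by inspection to r·pⁿu(n).

```python
p1, p2 = 0.5, 0.25               # poles of X(z) = 1 / ((1 - p1 z^-1)(1 - p2 z^-1))

# Cover-up rule: r_i = (1 - p_i z^-1) X(z) evaluated at z = p_i
r1 = 1 / (1 - p2 / p1)           # -> 2.0
r2 = 1 / (1 - p1 / p2)           # -> -1.0

# Each first-order term inverts by inspection: r/(1 - p z^-1) <-> r p^n u(n)
x = [r1 * p1 ** n + r2 * p2 ** n for n in range(5)]

# Cross-check against the difference equation implied by the combined
# denominator 1 - 0.75 z^-1 + 0.125 z^-2, driven by a unit impulse.
y = []
for n in range(5):
    y.append((1.0 if n == 0 else 0.0)
             + 0.75 * (y[n - 1] if n >= 1 else 0.0)
             - 0.125 * (y[n - 2] if n >= 2 else 0.0))
print(x)
print(y)   # same sequence
```

This is the same decomposition MATLAB's residuez performs numerically.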

6.3 Computer Exploration

MATLAB provides several functions that are very useful for working in the z-domain. In this laboratory we will focus on the functions zplane, residuez, and freqz.

6.3.1 Procedure

1. Given a Z-transform, plot the poles and the zeros in the z-plane.

2. Given a transfer function for a system, plot the magnitude and phase responses.

3. Given a transfer function, perform a partial fraction expansion.

6.3.2 Example

Given a transfer function such as

   H(z) = 0.006143 / (1 − 1.87834z^{−1} + 0.975156z^{−2})

let's plot the poles and zeros in the z-plane, plot the magnitude and phase response, and perform a partial fraction expansion.

Figures 6.1 and 6.2 show the z-plane and frequency response of H(z). From the frequency response we can see that the system is a digital resonator filter which has a peak at the normalized frequency ω = 0.1π rad/sample. From the z-plane we can see that the system has complex poles close to the unit circle. Using the residuez function we can find the partial fraction expansion and the exact location of the zeros and poles.

Figure 6.1: z-plane of H(z) = 0.006143 / (1 − 1.87834z^{−1} + 0.975156z^{−2}).

Figure 6.2: Magnitude and phase response of H(z) = 0.006143 / (1 − 1.87834z^{−1} + 0.975156z^{−2}). We can see that the system is a digital resonator filter which has a peak at ω = 0.1π rad/sample.

6.4 Example MATLAB code

Below we show the MATLAB code used to generate this example:

%===================================================
% Plot the Poles and Zeros in the Z-Plane
%===================================================
num = [0.006143]; den = [1 -1.87834 0.975156];
figure; zplane(num,den); box off;

%===================================================
% Plot the Magnitude and Phase Response
%===================================================
figure; freqz(num,den); box off;

%===================================================
% Partial Fraction Expansion
%===================================================
[r,p,k] = residuez(num,den)


6.5

Analysis

Use MATLAB to perform the following computer exploration:


1. Plot the poles and zeros in the z-plane of the filter H(z) =

1
, |z|
10.9z 1

0.9. Plot the

frequency response. What type of filter is this: lowpass, highpass, or bandpass?


2. An FIR filter is described by the I/O equation y(n) = x(n) x(n 4). Find the poles
and zeros, plot them in the z-plane, and plot the frequency response. What type of filter
is this: lowpass, highpass, or bandpass?
3. Given H(z) =

6+z 1
.
10.25z 2

Find the poles and zeros, plot them in the z-plane, and plot

the frequency response. What type of filter is this: lowpass, highpass, or bandpass?
4. Given a moving average filter with impulse response h(n) = u(n) u(n ). Find the
poles and zeros, plot them in the z-plane, and plot the frequency response for 3 different
values of . What type of filter is this: lowpass, highpass, or bandpass? What is the
cut-off frequency in each case? What is the effect of increasing ?
5. Given H(z) = (1 - 1.25z^-1)/((1 + 4z^-2)(1 - 0.81z^-2)), find the poles and zeros, plot them in the
z-plane, and plot the frequency response. Notice that H(z) has poles and zeros outside the unit
circle. Find another system G(z) such that |G(z)| = |H(z)| by reflecting all the poles
and zeros inside the unit circle. Plot the poles and zeros of the new system G(z) in
the z-plane and verify that both systems have the same magnitude response. A system
with all its poles and zeros inside the unit circle is said to be a minimum-phase system.
Given any system H(z) we can find a minimum-phase system with the same magnitude
response. Why is this important?
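
The same two-command workflow applies to each of the exercises above; as a hedged sketch, exercise 1 can be started as follows (the variable names are illustrative):

```matlab
% Exercise 1: H(z) = 1/(1 - 0.9 z^-1)
num = 1;                    % numerator coefficients
den = [1 -0.9];             % denominator coefficients
figure; zplane(num,den);    % pole at z = 0.9 inside the unit circle
figure; freqz(num,den);     % magnitude is largest near omega = 0
```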


CHAPTER 7

Digital Filtering

7.1 Objective

In this laboratory we will investigate the concept of digital filtering. Emphasis will be placed
on the specification of these filters and on computer-aided design (CAD). Specifically, we will
learn how to correctly specify the desired filter characteristics, and how to use MATLAB to
obtain the filter coefficients.

7.2 Theoretical Introduction

Filters are frequency-selective systems, that is, the magnitude and phase response of these
systems is a function of the input frequency. However, in the area of DSP we often use the
terms filter and system interchangeably.

In previous labs we introduced the concept of a moving average filter in the time domain.
Specifically, we saw that a system with an impulse response of the form h(n) = (1/L)(u(n) - u(n-L))
could be used to remove high frequency components (smooth), and that L controls
how much smoothing is done. We also saw that the output of any system is given by the
convolution of the input with the impulse response. In this particular case, the output of the
system, that is, the result of the filtering operation, is given by

y(n) = (1/L)x(n) + (1/L)x(n-1) + (1/L)x(n-2) + ... + (1/L)x(n-L+1)

y(n) = Σ_{k=0}^{L-1} (1/L) x(n-k)
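
The moving average sum above maps directly onto MATLAB's filter function, with the numerator vector b holding the L equal weights of 1/L; a minimal sketch (L = 5 is an arbitrary choice):

```matlab
L = 5;                  % filter length (controls the amount of smoothing)
b = ones(1,L)/L;        % L equal weights of 1/L
x = randn(1,100);       % example input
y = filter(b, 1, x);    % y(n) is the average of the last L input samples
```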

The impulse response of the filter doesn't have to be a constant, that is, a more general
expression would be

y(n) = b0 x(n) + b1 x(n-1) + ... + b(M-1) x(n-M+1) = Σ_{k=0}^{M-1} b_k x(n-k)

where b0, b1, ..., b(M-1) is the set of filter coefficients. Notice that b_k = h(k), that is

Σ_{k=0}^{M-1} b_k x(n-k) = Σ_{k=0}^{M-1} h(k) x(n-k)

We see that in this case the filter coefficients are equal to the filter's impulse response. We
can also recognize the output as being the convolution of the input with the impulse response.
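
The identity b_k = h(k) is easy to verify numerically: filtering a unit impulse with coefficients b returns the coefficients themselves. A minimal sketch (the coefficient values are arbitrary):

```matlab
b     = [0.2 0.3 0.3 0.2];      % arbitrary FIR coefficients
delta = [1 zeros(1,9)];         % unit impulse
h     = filter(b, 1, delta);    % impulse response of the FIR filter
% h(1:4) equals b; the remaining samples are zero
```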

7.2.1 Design Objective

When designing digital filters, the objective is to choose the filter coefficients b_k so that the
system removes the undesired frequencies while keeping the desired frequencies. This problem
is very difficult to solve in the time domain. In the design of frequency-selective filters, the
desired filter characteristics are specified in the frequency domain in terms of the desired
magnitude and phase response of the filter. Once the filter is fully specified, the objective is
to determine the coefficients that provide the desired frequency response specification.

7.2.2 FIR and IIR Filters

Filters can be classified into finite-impulse response (FIR) or infinite-impulse response (IIR)
filters.


FIR Filters

An FIR filter has an impulse response h(n) that extends only over a finite time interval and
is zero beyond that interval, that is

h(n) = [h0, h1, ..., hM, 0, 0, 0, ...]

where M is referred to as the filter order. The impulse response coefficients h0, h1, ..., hM
are referred to by various names, such as filter coefficients, filter weights, filter taps, etc. As we
already saw before, the output of an FIR filter can be written as

y(n) = h0 x(n) + h1 x(n-1) + h2 x(n-2) + ... + hM x(n-M) = Σ_{k=0}^{M} h(k) x(n-k)

We can see that the output of an FIR filter is a weighted average of the present input sample
x(n) and the past M samples x(n-1), x(n-2), ..., x(n-M). FIR filters are referred to as
non-recursive filters because the output depends only on the inputs and not on the previous
outputs.
IIR Filters

An IIR filter has an impulse response h(n) of infinite duration. The expression for the output
of a causal IIR filter is

y(n) = Σ_{k=0}^{∞} h(k) x(n-k)

In general, IIR filters are recursive, that is, the output of the system depends not only on the
inputs but also on the previous outputs, for example

y(n) = y(n-1) + x(n)
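
The recursive example above (an accumulator) can be run with filter by moving the y(n-1) term to the left-hand side, which gives coefficients b = 1 and a = [1 -1]:

```matlab
x = ones(1,5);              % constant input
y = filter(1, [1 -1], x);   % implements y(n) = y(n-1) + x(n)
% y is the running sum of the input: [1 2 3 4 5]
```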


FIR vs. IIR Filters

FIR filters are used in filtering problems where there is a requirement for a linear-phase
characteristic within the passband of the filter. If there is no requirement for a linear-phase
characteristic, either an IIR or an FIR filter may be used. In general, IIR filters require fewer
filter coefficients to meet a specific magnitude response requirement.
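
The order savings of IIR designs can be checked directly in MATLAB. As an illustrative sketch (the specification values are assumptions, not part of the lab), compare the elliptic order returned by ellipord with the FIR order estimated by firpmord for the same band edges and tolerances:

```matlab
fs = 125;                            % assumed sample rate, Hz
Rp = 0.5; Rs = 40;                   % passband ripple and stopband attenuation, dB
[Niir,Wn] = ellipord(10/(fs/2), 15/(fs/2), Rp, Rs);   % elliptic IIR order
dev  = [(10^(Rp/20)-1)/(10^(Rp/20)+1)  10^(-Rs/20)];  % ripple specs as fractions
Nfir = firpmord([10 15], [1 0], dev, fs);             % equiripple FIR order estimate
% Niir is typically far smaller than Nfir for the same specification
```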

7.3 Computer Exploration

MATLAB provides several functions that are very useful for designing and simulating digital
filters. In this laboratory we will focus on functions such as filter, filtfilt, ellipord, ellip,
cheby1, cheby2, and butter.

7.3.1 Example

Let's write two MATLAB functions to implement a lowpass and a highpass filter that allow us
to choose the cutoff frequency and the type of filter we want to implement. The inputs to these
functions are the samples of the signal, the sampling frequency, and the desired type of filter
(causal, noncausal, elliptic, etc.). The outputs are the filtered signal and the filter coefficients.

The following figure shows an example of using the lowpass filter to filter out the high
frequencies of a biomedical signal corrupted by quantization noise. The sampling frequency
of the input signal is 125 Hz and we would like to filter out frequencies beyond 15 Hz. The
next example shows the effect of filtering the same signal with a highpass filter to remove the
low-frequency components and eliminate the signal trend (remove frequencies below 0.1 Hz).
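
The lowpass call used for this example can be sketched directly with toolbox functions; this fragment (an illustrative sketch, not the full LowPass function listed in the next section) designs a zero-phase elliptic lowpass at 15 Hz for fs = 125 Hz:

```matlab
fs = 125; fc = 15;               % sample rate and cutoff frequency, Hz
Wp = fc/(fs/2); Ws = 1.2*Wp;     % normalized band edges (assumed transition width)
[N,Wn] = ellipord(Wp, Ws, 0.5, 40);
[B,A]  = ellip(N, 0.5, 40, Wn);
% y = filtfilt(B, A, x);         % zero-phase (noncausal) filtering of signal x
```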

Figure 7.1: Frequency response of the lowpass filter (magnitude in dB, top; phase in degrees,
bottom; frequency axis 0-60 Hz).


Figure 7.2: Result of the lowpass filtering operation. Notice that the lowpass filter smooths
the effects of quantization noise. Notice also that this filter was implemented with filtfilt (no
delay of the output with respect to the input) and therefore it is a noncausal filter.


Figure 7.3: Result of the highpass filtering operation. Notice that the highpass filter eliminated
the signal trend.


7.4 Example MATLAB code

Below we show the MATLAB code used to implement the lowpass and highpass functions:
function [y,N] = LowPass(x,fsa,fca,fta,cfa,pfa);
%LowPass: Lowpass filter
%
%   [y,N] = LowPass(x,fs,fc,ft,cf,pf)
%
%   x    Input signal
%   fs   Signal sample rate (Hz). Default=125 Hz
%   fc   Cutoff frequency (Hz). Default=fs/4 Hz
%   ft   Type: 1=Elliptic (default), 2=Butterworth,
%        3=FIR based on Blackman window, 4=Minimum ringing
%   cf   Causality flag: 1=causal, 2=noncausal (default) for ft=1,2
%   pf   Plot flag: 0=none (default), 1=screen
%
%   y    Filtered signal
%   N    Order of the filter
%
%   Filters the input signal x with a cutoff frequency fc using an
%   elliptic or Butterworth filter. The lowpass filter can be causal
%   or noncausal. The causal implementation uses only the present and
%   previous values to determine the filter's output y, and therefore
%   it is physically realizable for real-time processing. The noncausal
%   implementation filters the data in the forward direction, and the
%   filtered sequence is then reversed and run back through the filter.
%   The result has precisely zero phase distortion and magnitude
%   modified by the square of the filter's magnitude response.
%
%   Example: Filter the raw intracranial pressure signal using an
%   elliptic lowpass filter with zero phase (noncausal) and with
%   cutoff frequency fs/4 Hz. This will filter out the high frequency
%   components (frequencies above fs/4 Hz) and smooth the data.
%
%      load ICP;
%      [y,N] = LowPass(icp,fs,fs/4,1,2,1);
%
%   Version 1.00 MA
%
%   See also HighPass, filter, filtfilt, ellip, and butter.

%=====================================================================
% Process function arguments
%=====================================================================
if nargin<1 | nargin>6,
    help LowPass;
    return;
end;

fs = 125;                              % Default sampling rate, Hz
if exist('fsa') & ~isempty(fsa),
    fs = fsa;
end;

fc = fs/4;                             % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca),
    fc = fca;
end;

ft = 1;                                % Default filter type
if exist('fta') & ~isempty(fta),
    ft = fta;
end;

cf = 2;                                % Default causality flag
if exist('cfa') & ~isempty(cfa),
    cf = cfa;
end;

pf = 0;                                % Default - no plotting
if nargout==0,                         % Plot if no output arguments
    pf = 1;
end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end;

%=====================================================================
% Process Inputs
%=====================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;

%=====================================================================
% LowPass Filtering
%=====================================================================
nf  = fs/2;                            % Nyquist frequency, Hz
wlp = fc/nf;                           % Normalized cutoff frequency
Ws  = wlp*1.2;                         % Stopband edge
Wp  = wlp*0.8;                         % Passband edge
Rp  = 0.5;                             % Passband ripple, dB
Rs  = 40;                              % Stopband attenuation, dB

%---------------------------------------------------------------------
% Elliptic Filter (ft==1)
%---------------------------------------------------------------------
if ft==1,
    if cf==1,                          % causal
        [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
        [B,A]  = ellip(N, Rp, Rs, Wn);
        y      = filter(B,A,x);
    else                               % non-causal
        [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
        [B,A]  = ellip(N, Rp, Rs, Wn);
        y      = filtfilt(B,A,x);
    end;

%---------------------------------------------------------------------
% Butterworth Filter (ft==2)
%---------------------------------------------------------------------
elseif ft==2,
    if cf==1,                          % causal
        [N,Wn] = buttord(Wp,Ws,Rp,Rs);
        [B,A]  = butter(N,Wn);
        y      = filter(B,A,x);
    else                               % non-causal
        [N,Wn] = buttord(Wp,Ws,Rp,Rs);
        [B,A]  = butter(N,Wn);
        y      = filtfilt(B,A,x);
    end;

%---------------------------------------------------------------------
% FIR filter based on Blackman window (ft==3)
%---------------------------------------------------------------------
elseif ft==3,
    NX = length(x);
    N  = min([500 floor(NX/4)]);
    if rem(N,2),                       % Make N even (so B has odd length)
        N = N + 1;
    end;
    B  = fir1(N,wlp,blackman(N+1),'noscale');
    B  = B/sum(B);
    A  = 1;
    y  = conv(B,x);
    y  = y(N/2 + (1:NX));
    ci = [1:(N/2) (NX+1-N/2):NX];      % Eliminate start-up transients
    for c1 = 1:length(ci),
        in    = ci(c1);
        xi    = max(1,in-N/2):min(NX,in+N/2);
        bi    = (N/2+1) + (max(1-in,-N/2):min(NX-in,N/2));
        y(in) = sum(x(xi).*B(bi))/sum(B(bi));
    end;
end;

%=====================================================================
% Plotting
%=====================================================================
if pf==1,
    close all;
    figure(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    set(h,'LineWidth',1.5);
    title('Raw Signal and Lowpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Lowpass Filtered');
    box off;
    if ft~=3 & ft~=4,
        figure(2);
        freqz(B,A,512,fs);
    end;
end;

%=====================================================================
% Take care of outputs
%=====================================================================
if nargout==0,
    clear('y','N');
end;

function [y,N] = HighPass(x,fsa,fca,cfa,pfa);
%HighPass: Highpass filter
%
%   [y,N] = HighPass(x,fs,fc,cf,pf)
%
%   x    Input signal
%   fs   Signal sample rate (Hz). Default=125 Hz
%   fc   Cutoff frequency (Hz). Default=fs/4 Hz
%   cf   Causality flag: 1=causal, 2=noncausal (default)
%   pf   Plot flag: 0=none (default), 1=screen
%
%   y    Filtered signal
%   N    Order of the filter
%
%   Filters the input signal x with a cutoff frequency fc using an
%   elliptic filter. The highpass filter can be causal or noncausal.
%   The causal implementation uses only the present and previous values
%   to determine the filter's output y, and therefore it is physically
%   realizable for real-time processing. The noncausal implementation
%   filters the data in the forward direction, and the filtered sequence
%   is then reversed and run back through the filter; y is the time
%   reverse of the output of the second filtering operation. The result
%   has precisely zero phase distortion and magnitude modified by the
%   square of the filter's magnitude response.
%
%   Example: Filter the raw intracranial pressure signal using a
%   highpass filter with zero phase (noncausal) and with cutoff
%   frequency 0.5 Hz. This will filter out the low frequency
%   components (frequencies below 0.5 Hz) and detrend the data:
%
%      load ICP;
%      [y,N] = HighPass(icp,fs,0.5,2,1);
%
%   Version 1.00 MA
%
%   See also LowPass, filter, filtfilt, ellip, and butter.

%=======================================================================
% Process function arguments
%=======================================================================
if nargin<1 | nargin>5,
    help HighPass;
    return;
end;

fs = 125;                              % Default sampling rate, Hz
if exist('fsa') & ~isempty(fsa),
    fs = fsa;
end;

fc = fs/4;                             % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca),
    fc = fca;
end;

cf = 2;                                % Default causality flag
if exist('cfa') & ~isempty(cfa),
    cf = cfa;
end;

pf = 0;                                % Default - no plotting
if nargout==0,                         % Plot if no output arguments
    pf = 1;
end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end;

%=======================================================================
% Process Inputs
%=======================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;

%=======================================================================
% HighPass Filtering
%=======================================================================
nf  = fs/2;                            % Nyquist frequency, Hz
whp = fc/nf;                           % Normalized cutoff frequency
Wp  = whp*1.2;                         % Passband edge
Ws  = whp*0.8;                         % Stopband edge
Rp  = 0.5;                             % Passband ripple, dB
Rs  = 40;                              % Stopband attenuation, dB

if cf==1,                              % causal
    [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
    [B,A]  = ellip(N, Rp, Rs, Wn, 'high');
    y      = filter(B,A,x);
else                                   % non-causal
    [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
    [B,A]  = ellip(N, Rp, Rs, Wn, 'high');
    y      = filtfilt(B,A,x);
end;

%=======================================================================
% Plotting
%=======================================================================
if pf==1,
    figure(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    title('Raw Signal and Highpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Highpass Filtered');
    box off;
    figure(2);
    freqz(B,A,512,fs);
end;

%=======================================================================
% Take care of outputs
%=======================================================================
if nargout==0,
    clear('y','N');
end;

7.5 Analysis

Download the signals ECGNoisy60Hz, ECGQuantization, ECGBaselineDrift, and
ECGNoiseCombined from the class website. Use MATLAB to perform the following computer
exploration.
1. Load and plot the signal ECGNoisy60Hz in MATLAB. Use the FFT to evaluate the
frequency spectrum. Notice the noise present at 60 Hz. Use MATLAB to design a filter
to eliminate this problem. Plot and compare the initial signal and the filtered signal.
2. Repeat the procedure with ECGQuantization. In this case the signal is severely affected
by quantization noise. Use MATLAB to design a filter to eliminate this noise and show
the results.
3. Repeat the procedure with ECGBaselineDrift. In this case the signal is severely affected
by baseline drift due to patient movement. Use MATLAB to design a filter to eliminate
this noise and show the results.
4. Repeat the procedure with ECGNoiseCombined. The signal contains all the above
types of noise combined. Use MATLAB to design a system to eliminate this noise and
show the results.
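
For task 1, one reasonable approach (a sketch under assumed design values; the file and variable names are assumptions about the downloaded data) is a narrow Butterworth bandstop centered at 60 Hz, using the 125 Hz sampling rate from the earlier example:

```matlab
fs = 125;                                    % assumed sample rate, Hz
load ECGNoisy60Hz;                           % assumed to define the signal vector x
[B,A] = butter(2, [58 62]/(fs/2), 'stop');   % 2nd-order bandstop around 60 Hz
y = filtfilt(B, A, x);                       % zero-phase filtering
% Compare the FFT magnitudes of x and y to confirm the 60 Hz component is gone
```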


CHAPTER 8

Digital Filtering

8.1 Objective

In this laboratory we will investigate the concept of digital filtering. Emphasis will be placed
on the specification of these filters and on computer-aided design (CAD). Specifically, we will
learn how to correctly specify the desired filter characteristics, and how to use MATLAB to
obtain the filter coefficients.

8.2 Theoretical Introduction

Filters are frequency-selective systems, that is, the magnitude and phase response of these
systems is a function of the input frequency. However, in the area of DSP we often use the
terms filter and system interchangeably.

In previous labs we introduced the concept of a moving average filter in the time domain.
Specifically, we saw that a system with an impulse response of the form h(n) = (1/L)(u(n) - u(n-L))
could be used to remove high frequency components (smooth), and that L controls
how much smoothing is done. We also saw that the output of any system is given by the
convolution of the input with the impulse response. In this particular case, the output of the
system, that is, the result of the filtering operation, is given by

y(n) = (1/L)x(n) + (1/L)x(n-1) + (1/L)x(n-2) + ... + (1/L)x(n-L+1)

y(n) = Σ_{k=0}^{L-1} (1/L) x(n-k)

The impulse response of the filter doesn't have to be a constant, that is, a more general
expression would be

y(n) = b0 x(n) + b1 x(n-1) + ... + b(M-1) x(n-M+1) = Σ_{k=0}^{M-1} b_k x(n-k)

where b0, b1, ..., b(M-1) is the set of filter coefficients. Notice that b_k = h(k), that is

Σ_{k=0}^{M-1} b_k x(n-k) = Σ_{k=0}^{M-1} h(k) x(n-k)

We see that in this case the filter coefficients are equal to the filter's impulse response. We
can also recognize the output as being the convolution of the input with the impulse response.

8.2.1 Design Objective

When designing digital filters, the objective is to choose the filter coefficients b_k so that the
system removes the undesired frequencies while keeping the desired frequencies. This problem
is very difficult to solve in the time domain. In the design of frequency-selective filters, the
desired filter characteristics are specified in the frequency domain in terms of the desired
magnitude and phase response of the filter. Once the filter is fully specified, the objective is
to determine the coefficients that provide the desired frequency response specification.

8.2.2 FIR and IIR Filters

Filters can be classified into finite-impulse response (FIR) or infinite-impulse response (IIR)
filters.


FIR Filters

An FIR filter has an impulse response h(n) that extends only over a finite time interval and
is zero beyond that interval, that is

h(n) = [h0, h1, ..., hM, 0, 0, 0, ...]

where M is referred to as the filter order. The impulse response coefficients h0, h1, ..., hM
are referred to by various names, such as filter coefficients, filter weights, filter taps, etc. As we
already saw before, the output of an FIR filter can be written as

y(n) = h0 x(n) + h1 x(n-1) + h2 x(n-2) + ... + hM x(n-M) = Σ_{k=0}^{M} h(k) x(n-k)

We can see that the output of an FIR filter is a weighted average of the present input sample
x(n) and the past M samples x(n-1), x(n-2), ..., x(n-M). FIR filters are referred to as
non-recursive filters because the output depends only on the inputs and not on the previous
outputs.
IIR Filters

An IIR filter has an impulse response h(n) of infinite duration. The expression for the output
of a causal IIR filter is

y(n) = Σ_{k=0}^{∞} h(k) x(n-k)

In general, IIR filters are recursive, that is, the output of the system depends not only on the
inputs but also on the previous outputs, for example

y(n) = y(n-1) + x(n)


FIR vs. IIR Filters

FIR filters are used in filtering problems where there is a requirement for a linear-phase
characteristic within the passband of the filter. If there is no requirement for a linear-phase
characteristic, either an IIR or an FIR filter may be used. In general, IIR filters require fewer
filter coefficients to meet a specific magnitude response requirement.

8.3 Computer Exploration

MATLAB provides several functions that are very useful for designing and simulating digital
filters. In this laboratory we will focus on functions such as filter, filtfilt, ellipord, ellip,
cheby1, cheby2, and butter.

8.3.1 Example

Let's write two MATLAB functions to implement a lowpass and a highpass filter that allow us
to choose the cutoff frequency and the type of filter we want to implement. The inputs to these
functions are the samples of the signal, the sampling frequency, and the desired type of filter
(causal, noncausal, elliptic, etc.). The outputs are the filtered signal and the filter coefficients.

The following figure shows an example of using the lowpass filter to filter out the high
frequencies of a biomedical signal corrupted by quantization noise. The sampling frequency
of the input signal is 125 Hz and we would like to filter out frequencies beyond 15 Hz. The
next example shows the effect of filtering the same signal with a highpass filter to remove the
low-frequency components and eliminate the signal trend (remove frequencies below 0.1 Hz).

Figure 8.1: Frequency response of the lowpass filter (magnitude in dB, top; phase in degrees,
bottom; frequency axis 0-60 Hz).


Figure 8.2: Result of the lowpass filtering operation. Notice that the lowpass filter smooths
the effects of quantization noise. Notice also that this filter was implemented with filtfilt (no
delay of the output with respect to the input) and therefore it is a noncausal filter.


Figure 8.3: Result of the highpass filtering operation. Notice that the highpass filter eliminated
the signal trend.


8.4 Example MATLAB code

Below we show the MATLAB code used to implement the lowpass and highpass functions:
function [y,N] = LowPass(x,fsa,fca,fta,cfa,pfa);
%LowPass: Lowpass filter
%
%   [y,N] = LowPass(x,fs,fc,ft,cf,pf)
%
%   x    Input signal
%   fs   Signal sample rate (Hz). Default=125 Hz
%   fc   Cutoff frequency (Hz). Default=fs/4 Hz
%   ft   Type: 1=Elliptic (default), 2=Butterworth,
%        3=FIR based on Blackman window, 4=Minimum ringing
%   cf   Causality flag: 1=causal, 2=noncausal (default) for ft=1,2
%   pf   Plot flag: 0=none (default), 1=screen
%
%   y    Filtered signal
%   N    Order of the filter
%
%   Filters the input signal x with a cutoff frequency fc using an
%   elliptic or Butterworth filter. The lowpass filter can be causal
%   or noncausal. The causal implementation uses only the present and
%   previous values to determine the filter's output y, and therefore
%   it is physically realizable for real-time processing. The noncausal
%   implementation filters the data in the forward direction, and the
%   filtered sequence is then reversed and run back through the filter.
%   The result has precisely zero phase distortion and magnitude
%   modified by the square of the filter's magnitude response.
%
%   Example: Filter the raw intracranial pressure signal using an
%   elliptic lowpass filter with zero phase (noncausal) and with
%   cutoff frequency fs/4 Hz. This will filter out the high frequency
%   components (frequencies above fs/4 Hz) and smooth the data.
%
%      load ICP;
%      [y,N] = LowPass(icp,fs,fs/4,1,2,1);
%
%   Version 1.00 MA
%
%   See also HighPass, filter, filtfilt, ellip, and butter.

%=====================================================================
% Process function arguments
%=====================================================================
if nargin<1 | nargin>6,
    help LowPass;
    return;
end;

fs = 125;                              % Default sampling rate, Hz
if exist('fsa') & ~isempty(fsa),
    fs = fsa;
end;

fc = fs/4;                             % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca),
    fc = fca;
end;

ft = 1;                                % Default filter type
if exist('fta') & ~isempty(fta),
    ft = fta;
end;

cf = 2;                                % Default causality flag
if exist('cfa') & ~isempty(cfa),
    cf = cfa;
end;

pf = 0;                                % Default - no plotting
if nargout==0,                         % Plot if no output arguments
    pf = 1;
end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end;

%=====================================================================
% Process Inputs
%=====================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;

%=====================================================================
% LowPass Filtering
%=====================================================================
nf  = fs/2;                            % Nyquist frequency, Hz
wlp = fc/nf;                           % Normalized cutoff frequency
Ws  = wlp*1.2;                         % Stopband edge
Wp  = wlp*0.8;                         % Passband edge
Rp  = 0.5;                             % Passband ripple, dB
Rs  = 40;                              % Stopband attenuation, dB

%---------------------------------------------------------------------
% Elliptic Filter (ft==1)
%---------------------------------------------------------------------
if ft==1,
    if cf==1,                          % causal
        [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
        [B,A]  = ellip(N, Rp, Rs, Wn);
        y      = filter(B,A,x);
    else                               % non-causal
        [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
        [B,A]  = ellip(N, Rp, Rs, Wn);
        y      = filtfilt(B,A,x);
    end;

%---------------------------------------------------------------------
% Butterworth Filter (ft==2)
%---------------------------------------------------------------------
elseif ft==2,
    if cf==1,                          % causal
        [N,Wn] = buttord(Wp,Ws,Rp,Rs);
        [B,A]  = butter(N,Wn);
        y      = filter(B,A,x);
    else                               % non-causal
        [N,Wn] = buttord(Wp,Ws,Rp,Rs);
        [B,A]  = butter(N,Wn);
        y      = filtfilt(B,A,x);
    end;

%---------------------------------------------------------------------
% FIR filter based on Blackman window (ft==3)
%---------------------------------------------------------------------
elseif ft==3,
    NX = length(x);
    N  = min([500 floor(NX/4)]);
    if rem(N,2),                       % Make N even (so B has odd length)
        N = N + 1;
    end;
    B  = fir1(N,wlp,blackman(N+1),'noscale');
    B  = B/sum(B);
    A  = 1;
    y  = conv(B,x);
    y  = y(N/2 + (1:NX));
    ci = [1:(N/2) (NX+1-N/2):NX];      % Eliminate start-up transients
    for c1 = 1:length(ci),
        in    = ci(c1);
        xi    = max(1,in-N/2):min(NX,in+N/2);
        bi    = (N/2+1) + (max(1-in,-N/2):min(NX-in,N/2));
        y(in) = sum(x(xi).*B(bi))/sum(B(bi));
    end;
end;

%=====================================================================
% Plotting
%=====================================================================
if pf==1,
    close all;
    figure(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    set(h,'LineWidth',1.5);
    title('Raw Signal and Lowpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Lowpass Filtered');
    box off;
    if ft~=3 & ft~=4,
        figure(2);
        freqz(B,A,512,fs);
    end;
end;

%=====================================================================
% Take care of outputs
%=====================================================================
if nargout==0,
    clear('y','N');
end;

function [y,N] = HighPass(x,fsa,fca,cfa,pfa);
%HighPass: Highpass filter
%
%   [y,N] = HighPass(x,fs,fc,cf,pf)
%
%   x    Input signal
%   fs   Signal sample rate (Hz). Default=125 Hz
%   fc   Cutoff frequency (Hz). Default=fs/4 Hz
%   cf   Causality flag: 1=causal, 2=noncausal (default)
%   pf   Plot flag: 0=none (default), 1=screen
%
%   y    Filtered signal
%   N    Order of the filter
%
%   Filters the input signal x with a cutoff frequency fc using an
%   elliptic filter. The highpass filter can be causal or noncausal.
%   The causal implementation uses only the present and previous values
%   to determine the filter's output y, and therefore it is physically
%   realizable for real-time processing. The noncausal implementation
%   filters the data in the forward direction, and the filtered sequence
%   is then reversed and run back through the filter; y is the time
%   reverse of the output of the second filtering operation. The result
%   has precisely zero phase distortion and magnitude modified by the
%   square of the filter's magnitude response.
%
%   Example: Filter the raw intracranial pressure signal using a
%   highpass filter with zero phase (noncausal) and with cutoff
%   frequency 0.5 Hz. This will filter out the low frequency
%   components (frequencies below 0.5 Hz) and detrend the data:
%
%      load ICP;
%      [y,N] = HighPass(icp,fs,0.5,2,1);
%
%   Version 1.00 MA
%
%   See also LowPass, filter, filtfilt, ellip, and butter.

%=======================================================================
% Process function arguments
%=======================================================================
if nargin<1 | nargin>5,
    help HighPass;
    return;
end;

fs = 125;                              % Default sampling rate, Hz
if exist('fsa') & ~isempty(fsa),
    fs = fsa;
end;

fc = fs/4;                             % Default cutoff frequency, Hz
if exist('fca') & ~isempty(fca),
    fc = fca;
end;

cf = 2;                                % Default causality flag
if exist('cfa') & ~isempty(cfa),
    cf = cfa;
end;

pf = 0;                                % Default - no plotting
if nargout==0,                         % Plot if no output arguments
    pf = 1;
end;
if exist('pfa') & ~isempty(pfa),
    pf = pfa;
end;

%=======================================================================
% Process Inputs
%=======================================================================
x  = x(:);
LD = length(x);
k  = 1:LD;

%=======================================================================
% HighPass Filtering
%=======================================================================
nf  = fs/2;                            % Nyquist frequency, Hz
whp = fc/nf;                           % Normalized cutoff frequency
Wp  = whp*1.2;                         % Passband edge
Ws  = whp*0.8;                         % Stopband edge
Rp  = 0.5;                             % Passband ripple, dB
Rs  = 40;                              % Stopband attenuation, dB

if cf==1,                              % causal
    [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
    [B,A]  = ellip(N, Rp, Rs, Wn, 'high');
    y      = filter(B,A,x);
else                                   % non-causal
    [N,Wn] = ellipord(Wp, Ws, Rp, Rs);
    [B,A]  = ellip(N, Rp, Rs, Wn, 'high');
    y      = filtfilt(B,A,x);
end;

%=======================================================================
% Plotting
%=======================================================================
if pf==1,
    figure(1);
    h = plot(k./fs, x, 'b', k./fs, y, 'r');
    title('Raw Signal and Highpass Filtered Signal');
    xlabel('Time, s');
    ylabel('Amplitude');
    legend('Raw Signal','Highpass Filtered');
    box off;
    figure(2);
    freqz(B,A,512,fs);
end;

%=======================================================================
% Take care of outputs
%=======================================================================
if nargout==0,
    clear('y','N');
end;

8.5 Analysis

Download the signals ECGNoisy60Hz, ECGQuantization, ECGBaselineDrift, and
ECGNoiseCombined from the class website. Use MATLAB to perform the following computer
exploration.
1. Load and plot the signal ECGNoisy60Hz in MATLAB. Use the FFT to evaluate the
frequency spectrum. Notice the noise present at 60 Hz. Use MATLAB to design a filter
to eliminate this problem. Plot and compare the initial signal and the filtered signal.
2. Repeat the procedure with ECGQuantization. In this case the signal is severely affected
by quantization noise. Use MATLAB to design a filter to eliminate this noise and show
the results.
3. Repeat the procedure with ECGBaselineDrift. In this case the signal is severely affected
by baseline drift due to patient movement. Use MATLAB to design a filter to eliminate
this noise and show the results.
4. Repeat the procedure with ECGNoiseCombined. The signal contains all the above
types of noise combined. Use MATLAB to design a system to eliminate this noise and
show the results.


CHAPTER 9

Project: Design of an Automatic Beat Detection Algorithm

9.1 General Information

Title: Automatic Beat Detection Algorithm for Intracranial Pressure Signals.


Demonstration Due Date: Last week of class.
Report Due Date: Finals week.
Report Guidelines: IEEE Transactions (7 pages maximum, 4 pages nominal).
Project Type: Research and Development.

9.2 Project Description

Automatic beat detection algorithms have many clinical applications including pulse
oximetry, cardiac arrhythmia detection, and cardiac output monitoring. Most of these
algorithms have been developed by medical device companies and are proprietary.
Thus, researchers who wish to investigate pulse contour analysis must rely on manual
annotations or develop their own algorithms.
The objective of this project is to design an automatic detection algorithm for
intracranial pressure (ICP) signals that locates the first peak following each heart beat.
This is called the percussion peak in ICP signals.
Development of automatic detection algorithms is an active area of research.

9.3 Significance

The unavailability of robust detection algorithms for pressure signals has, at least
partially, prevented researchers from fully conducting beat-by-beat analysis. Current
methods of ICP signal analysis are primarily based on time- or frequency-domain
metrics such as mean, standard deviation, and spectral power at the heart rate
frequency. Few investigators have analyzed variations in the beat-level morphology of
the pressure signals because detection algorithms that can automatically identify each
of the beat components are generally unavailable.
Many researchers manually annotate desired components of physiologic pressure signals
because detection algorithms for these signals are not widely available. This approach
is labor-intensive, subjective, expensive, and can only be used on short signal segments.
There are numerous current and potential applications for pressure beat detection
algorithms. Many pulse oximeters perform beat detection as part of the signal processing
necessary to estimate oxygen saturation, but these algorithms are proprietary and cannot
be used in other applications. Systolic peak detection is necessary for some measures
of baroreflex sensitivity. Identification of the pressure components is necessary for some
methods that assess the interaction between respiration and beat-by-beat ventricular
parameters and the modulation effects of respiration on left ventricular size and stroke
volume. Detection is a necessary task when analyzing arterial compliance and the
pressure pulse contour. Beat-to-beat morphology analysis of ICP also requires robust
automatic detection.

9.4 Specifications

Design a DSP system to perform automatic beat detection in ICP signals.

Function structure: fi = PressureDetect(x,fs,pf);

x    Input signal
fs   Signal sample rate (Hz). Default = 125 Hz
pf   Plot flag: 0 = none (default), 1 = screen
fi   Percussion peak (P1) indices, in samples

9.5 Development and Test

A composite ICP signal containing examples of different ICP morphologies is provided on the course website. This signal will be used for algorithm development and validation.
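A hedged starting point for the required interface is sketched below. It matches the specified signature but implements only a naive detector (bandpass smoothing, a percentile amplitude threshold, and a refractory period); the filter band, the 60th-percentile threshold, and the 0.25 s refractory value are illustrative assumptions, and butter, filtfilt, and prctile are toolbox functions.

```matlab
function fi = PressureDetect(x, fs, pf)
%PRESSUREDETECT  Skeleton matching the specified interface (not a full
%   solution): locate candidate percussion peaks (P1) in an ICP signal.
%   x: input signal; fs: sample rate in Hz (default 125); pf: plot flag.
if nargin < 2 || isempty(fs), fs = 125; end
if nargin < 3 || isempty(pf), pf = 0;  end

% 1) Detrend and smooth so roughly one maximum per beat remains.
[b, a] = butter(4, [0.5 10]/(fs/2), 'bandpass');   % assumed band
xs = filtfilt(b, a, x(:));

% 2) Keep local maxima above a rank-based amplitude threshold.
fi = find(xs(2:end-1) > xs(1:end-2) & xs(2:end-1) >= xs(3:end)) + 1;
fi = fi(xs(fi) > prctile(xs, 60));                 % assumed percentile

% 3) Enforce a refractory period between detections (assumed 0.25 s).
fi = fi([true; diff(fi) > 0.25*fs]);

if pf
    plot((0:numel(x)-1)/fs, x); hold on;
    plot((fi-1)/fs, x(fi), 'rv');
    xlabel('Time (s)'); ylabel('Pressure'); hold off;
end
end
```

A complete solution would add heart-rate estimation and interbeat-interval logic as described in the reference paper posted on the course website.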

9.6 Resources

A paper entitled "An Automatic Beat Detection Algorithm for Pressure Signals," published in IEEE Transactions on Biomedical Engineering, has been posted on the course website. This paper describes a general detection algorithm that can be used on pressure signals (not only ICP).

9.7 Report Sections

Abstract: Concisely state what was done, how it was done, the principal results, and their significance. The abstract should contain the most critical information of the paper.

Introduction: State specifically what the problem is, the significance of finding a solution to the problem, and the work that other researchers have done on this problem. If you have a long report, the last paragraph in this section should describe the organization of the rest of the paper.

Methodology: In short, how you did what you did. This section should include an explanation of the methods you used, a description of the data set, and a description of how the data was collected or where you obtained it.

Results: What the results of applying your method are. This should be strictly factual, stating only how well your model performed, the outcome of your hypothesis tests, etc. It should not include your interpretation or ideas; just the facts.

Discussion: What you learned from the results listed in the previous section. If the results were different from what you (or the reader) would expect, try to explain why. If you have ideas for further research, this is where you should describe them.

Conclusion: This section should summarize the main discoveries or findings from the project.

9.8 Report Requirements

The paper should be written for someone who understands the key concepts and methods covered in this class. You may assume the reader is a graduate of an engineering program.

The report must conform to IEEE requirements for journal papers.

Do not include code or raw data that you've written for the project.

Avoid passive sentence construction. If you don't like using first-person pronouns ("I"), you can often use "this paper" or "this report" as the subject of sentences. For example, "This paper describes an analysis of..." instead of "An analysis of ... is described" or "I describe an analysis of ...".

If English is not your native language, have someone at the Writing Center review your report for organization, style, and grammar.

The report must be in final submission format and must use the LaTeX or MS Word stylesheet.

The name of the course (Signals and Systems) and the term should be listed as part of the author affiliation. Something like "This work was completed as part of a course project for Signals and Systems at Portland Community College during spring term of 2005." would be appropriate.

Do not list yourself as a member of the IEEE unless you really are a member.

Label the axes in all figures. Describe each figure in words using a caption below the figure, and be sure to use the IEEE format for captions.

Tables: Remember to use units. Captions go above tables.

Include relevant citations. Use review articles to avoid a lengthy literature search. Each reference number should be enclosed in square brackets. Do not begin a sentence with a reference number.

The final report must be submitted in electronic form (via e-mail). MS Word, LaTeX, PostScript, or PDF are all acceptable.

9.9 Assessment

Format: Does the report adhere to the IEEE format and requirements listed above?

Grammar: Is the report written in past tense (it should be)? Does the report use the terms "I" or "you" inappropriately? Were there many grammar or spelling errors?

Organization: Is the report well organized? Are the section headings appropriate and clear?

Clarity of Writing: Was the report clearly written? Could I understand what was done and why after reading it?

Scope: Was the project of sufficient scope for the class?

Abstract: Does the abstract give an accurate and concise summary of the report?

Significance: Does the report explain the significance of the project?

Objectives: Are the project objectives clearly specified in the introduction?

Methodology: Were the methods and algorithms used appropriate for the data and project objectives?

Results: Were the results sufficient? Were they clearly stated? Was a table or plot used to display the results appropriately?

Discussion: Are the results discussed? Were there any surprises and, if so, were ideas about the reasons for the surprises given? Was the significance of the results explained?

Citations: Were appropriate citations made to previous work?

CHAPTER 10
Appendix

10.1 Appendix I: IEEE-EMBS Detection Paper

10.2 Appendix II: IEEE-TBME Detection Paper

10.3 Appendix III: IEEE-TBME Kalman Filter Paper

IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 52, NO. 10, OCTOBER 2005

An Automatic Beat Detection Algorithm for Pressure Signals

Mateo Aboy*, Member, IEEE, James McNames, Senior Member, IEEE, Tran Thong, Fellow, IEEE, Daniel Tsunami, Miles S. Ellenby, and Brahm Goldstein, Associate Member, IEEE

Abstract: Beat detection algorithms have many clinical applications including pulse oximetry, cardiac arrhythmia detection, and cardiac output monitoring. Most of these algorithms have been developed by medical device companies and are proprietary. Thus, researchers who wish to investigate pulse contour analysis must rely on manual annotations or develop their own algorithms. We designed an automatic detection algorithm for pressure signals that locates the first peak following each heart beat. This is called the percussion peak in intracranial pressure (ICP) signals and the systolic peak in arterial blood pressure (ABP) and pulse oximetry (SpO2) signals. The algorithm incorporates a filter bank with variable cutoff frequencies, spectral estimates of the heart rate, rank-order nonlinear filters, and decision logic. We prospectively measured the performance of the algorithm compared to expert annotations of ICP, ABP, and SpO2 signals acquired from pediatric intensive care unit patients. The algorithm achieved a sensitivity of 99.36% and positive predictivity of 98.43% on a dataset consisting of 42,539 beats.

Index Terms: Arterial blood pressure (ABP), component detection, intracranial pressure (ICP), pressure beat detection, pulse contour analysis, pulse oximetry (SpO2).

I. INTRODUCTION

AUTOMATIC beat detection algorithms are essential for many types of biomedical signal analysis and patient monitoring. This type of analysis is most often applied to the electrocardiogram (ECG) signal, in which one or more of its components is detected automatically. Although many detection algorithms have been developed for ECG signals [1], there are only a few publications that describe algorithms to detect features in pressure signals [2]-[5]. Since pressure detection algorithms are necessary for most types of pulse oximeters and devices that monitor cardiac output, most of these algorithms have been developed by medical device companies and are proprietary. This forces researchers to either manually annotate short segments or implement their own semi-automatic algorithms that lack the performance, generality, and robustness of modern detection algorithms for ECG signals [6]. Most of these semi-automatic algorithms for pressure signals have not been rigorously validated or published.

We describe an automatic detection algorithm that identifies the time-location of the percussion component in intracranial pressure (ICP) and the systolic peak in ABP and SpO2 signals. The algorithm is designed for subjects without significant cardiac dysrhythmias. In Sections I-A through I-D, we describe the clinical relevance of pressure beat detection algorithms, give an overview of detection algorithms, and describe the beat components common to pressure signals. Section II describes the detection algorithm in detail, including pseudocode to implement the different modules. Section III describes the validation database, benchmark parameters, and the performance criteria. Section IV reports the results of the performance assessment, and Section V discusses the algorithm's performance, limitations, and computational efficiency.

Manuscript received October 30, 2003; revised November 14, 2004. This work was supported in part by the Thrasher Research Foundation, in part by the Northwest Health Foundation, and in part by the Doernbecher Children's Hospital Foundation. Asterisk indicates corresponding author.

*M. Aboy is with the Electronics Engineering Technology Department, Oregon Institute of Technology, Portland, OR 97229 USA and also with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering at Portland State University, 1900 SW 4th Ave., Portland, OR 97201 USA (e-mail: mateoaboy@ieee.org).

J. McNames and D. Tsunami are with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering at Portland State University, Portland, OR 97201 USA.

T. Thong is with the Department of Biomedical Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Portland, OR 97201 USA.

M. S. Ellenby and B. Goldstein are with the Complex Systems Laboratory in the Department of Pediatrics, Oregon Health and Science University, Portland, OR 97201 USA.

Digital Object Identifier 10.1109/TBME.2005.855725

A. Clinical Significance

The unavailability of robust detection algorithms for pressure signals has, at least partially, prevented researchers from
fully conducting beat-by-beat analysis. Current methods of ICP
signal analysis are primarily based on time- or frequency-domain metrics such as mean, standard deviation, and spectral
power at the heart rate frequency [7]. Few investigators have
analyzed variations in the beat-level morphology of the pressure signals because detection algorithms that can automatically
identify each of the beat components are generally unavailable.
Many researchers manually annotate desired components of physiologic pressure signals because detection algorithms for these signals are not widely available. This approach is labor-intensive, subjective, expensive, and can only be used on short signal segments.
There are numerous current and potential applications for
pressure beat detection algorithms. Many pulse oximeters perform beat detection as part of the signal processing necessary
to estimate oxygen saturation, but these algorithms are proprietary and cannot be used in other applications. Systolic
peak detection is necessary for some measures of baroreflex
sensitivity [8][10]. Identification of the pressure components
is necessary for some methods that assess the interaction between respiration and beat-by-beat ventricular parameters and
the modulation effects of respiration on left ventricular size
and stroke volume [11]. Detection is a necessary task when
analyzing arterial compliance and the pressure pulse contour
[12]. Beat-to-beat morphology analysis of ICP also requires
robust automatic detection.

0018-9294/$20.00 2005 IEEE

Fig. 1. Common architecture of detection algorithms. A preprocessing stage emphasizes the desired components and a decision stage performs the actual component detection.

B. Overview of Beat Detection Algorithms

Most physiologic signal detection algorithms can be divided into two stages. As shown in Fig. 1, a preprocessing stage emphasizes the desired components in order to maximize the signal-to-noise ratio (SNR), and a decision stage decides whether an incoming peak is a true component based on a user-specified threshold. This architecture has been employed in most ECG detection algorithms. The preprocessing stage traditionally relies on signal derivatives and digital filters [13]-[21]. Recent algorithms use wavelets and filter banks for preprocessing [22], [23].
C. Pressure Pulse Morphology

Fig. 2. Example of an ICP pulse showing the percussion peak (P1), dichrotic peak (P3), and dichrotic notch in a low-pressure ICP signal. Note that the tidal peak (P2) is absent in this case, and the P1 component has the highest amplitude.

The pulse morphology of ABP and SpO2 signals is well known and consists of a systolic peak, dichrotic notch, and dichrotic peak [24]. ICP has a similar pulse morphology, but often has a third peak. The three peaks common to ICP signals are the percussion (P1), tidal (P2), and dichrotic (P3) peaks. In this paper, we refer to the percussion (ICP) and systolic (ABP and SpO2) peaks as P1. The valley between P2 and P3 in ICP signals is termed the dichrotic notch, and corresponds to the dichrotic notch in arterial blood pressure. The P1 component is a sharp peak with fairly constant amplitude. In low-pressure ICP signals, the P1 component has the highest amplitude. The P2 component is more variable and is not always present in low-pressure ICP signals. Fig. 2 shows an example of a low-pressure ICP signal and its components. In high-pressure ICP signals, the P2 component is always present and usually has the highest amplitude. Fig. 3 shows an example of a high-pressure ICP signal. The physiology underlying the ICP pulse morphology and its components is reviewed in [25].
D. Differences Between ECG and Pressure Signals

Pressure signals have a different time-domain morphology and spectral density than ECG signals. Since most of the ECG signal power is in the 10-25 Hz range, almost all QRS detection algorithms use a bandpass filter with these cutoff frequencies in the preprocessing stage to reduce out-of-band noise. These algorithms combine the filter operation with another transformation, such as the signal derivative or the dyadic wavelet transform, to exploit the large slope and high frequency content of the QRS complex. This transformation generates a feature signal in which QRS complexes can be detected by a simple threshold.

Since pressure signals are more sinusoidal and less impulsive than ECG signals, most of the signal power is in a lower frequency range that includes the fundamental frequency, typically from 0.7-3.5 Hz in humans. Thus, preprocessing and decision logic that rely on the impulsive shape of the QRS complex to improve detection accuracy are inappropriate for pressure signals and can reduce accuracy.

Fig. 3. Example of an ICP pulse showing the percussion peak (P1), tidal peak (P2), and dichrotic peak (P3) in a high-pressure ICP signal. Note that the tidal peak (P2) has the highest amplitude in this case, and the dichrotic notch is absent. These are characteristic features of a high-pressure ICP pulse morphology.

II. ALGORITHM DESCRIPTION AND THEORY

A. Algorithm Overview

Fig. 4 shows a block diagram of our detection algorithm. The pressure signal is preprocessed by three bandpass elliptic filters with different cutoff frequencies. The output of the first bandpass filter is used to estimate the heart rate based on the estimated power spectral density (PSD). The estimated heart rate is then used to calculate the cutoff frequencies of the other two filters. Peak detection and decision logic are based on rank-order (percentile-based) nonlinear filters that incorporate relative amplitude and slope information to coarsely estimate the percussion and systolic peak components (P1). A nearest neighbor algorithm combines information extracted from the relative amplitude and slope. Finally, the interbeat-interval stage uses this classification together with the estimated heart rate to make the final classification and detection of signal components. Since detection is made on the filtered signal, a second nearest neighbor algorithm finds the peaks in the raw signal that are closest to the detected components. Fig. 5 shows an example illustrating the output of some of the stages performed by the detector during peak component detection. Table I lists the pseudocode for this algorithm.

Fig. 4. Block diagram showing the architecture and individual stages of the new detection algorithm for peak component detection in pressure signals.

Table I. Algorithm pseudocode.

Fig. 5. Example illustrating the output of some of the stages performed by the detector during peak component detection in pressure signals.
B. Maxima Detection

Peak detection is performed at several stages in the algorithm. It is first used to detect all peaks in the raw signal prior to any preprocessing. Peak detection is also employed on each data partition of the filtered signal to find the relative amplitudes of the P1 component candidates, and on the inflection points. The pseudocode for this function is shown in Table VI.
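The maxima-detection step can be sketched in a few lines of vectorized MATLAB. This illustrates only the role played by the pseudocode in Table VI, not the authors' exact routine:

```matlab
function pk = DetectMaxima(x)
%DETECTMAXIMA  Return the indices of all local maxima of the vector x.
%   A sample is a maximum if it exceeds its left neighbor and is at
%   least as large as its right neighbor (handles flat-topped peaks).
x  = x(:);
pk = find(x(2:end-1) > x(1:end-2) & x(2:end-1) >= x(3:end)) + 1;
end
```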

C. Preprocessing Stages

The preprocessing stage consists of three bandpass filters. The first filter removes the trend and eliminates high frequency noise. The resulting signal is used to estimate the heart rate which, in turn, is used to determine the cutoff frequencies of the other two bandpass filters. The second filter further attenuates high frequency components and passes only frequencies that are less than 2.5 times the heart rate. The output of this filter contains only one cycle per heart contraction and eliminates enough high frequency power to ensure the signal derivative is not dominated by high frequency noise. The third bandpass filter detrends the signal by eliminating frequencies below half the minimum expected heart rate and slightly smoothes the signal with an upper cutoff frequency equal to 10 times the estimated heart rate.
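The three-filter bank can be sketched as below, assuming elliptic designs via ellip; the filter orders, ripple, attenuation, and the first filter's band edges are illustrative assumptions, not the paper's parameters.

```matlab
% Sketch of the three-filter preprocessing stage (parameters assumed).
fs    = 125;    % sample rate, Hz
hr    = 1.5;    % estimated heart rate, Hz (from the spectral stage)
hrMin = 0.7;    % minimum expected heart rate, Hz

% Filter 1: detrend and remove high-frequency noise (fixed band, assumed).
[b1, a1] = ellip(4, 0.5, 40, [0.5 20]/(fs/2));

% Filter 2: pass only frequencies below 2.5 x heart rate.
[b2, a2] = ellip(4, 0.5, 40, [0.5 2.5*hr]/(fs/2));

% Filter 3: detrend below half the minimum heart rate, smooth above 10 x hr.
[b3, a3] = ellip(4, 0.5, 40, [hrMin/2 10*hr]/(fs/2));
```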
D. Spectral Heart Rate Estimation

In this stage, the pressure signal is partitioned and the power spectral density (PSD) of each segment is estimated. For the results reported here, we used the Blackman-Tukey method of spectral estimation. In general, any of the standard methods of spectral estimation could be used. The algorithm uses a harmonic PSD technique that combines the spectral components at the fundamental and its harmonics according to (1), where a limiting constant ensures that the power of the harmonics added to the fundamental does not exceed the power at the fundamental by more than a fixed factor. Table VII lists the pseudocode for this function. The harmonic PSD technique combines the power of the fundamental and harmonic components. This technique has two main benefits: 1) it is less sensitive to signal morphology than traditional PSD estimates because it accounts for variations in the power distribution among harmonic frequencies, and 2) it achieves better frequency resolution of the lower harmonics by leveraging the relatively better resolution at the harmonic frequencies [26].
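The heart-rate estimation step can be illustrated with a plain Welch periodogram standing in for the Blackman-Tukey and harmonic-combination details; the surrogate signal and window length here are assumptions, and the physiologic search band follows the 0.7-3.5 Hz range stated earlier.

```matlab
% Sketch: estimate heart rate as the strongest in-band PSD component.
fs = 125;
t  = (0:30*fs-1)/fs;
x  = sin(2*pi*1.4*t) + 0.1*randn(size(t));   % surrogate pressure signal

[pxx, f] = pwelch(x, 8*fs, [], [], fs);      % Welch PSD estimate
band = (f > 0.7) & (f < 3.5);                % physiologic HR range, Hz
[~, i] = max(pxx .* band(:));                % strongest in-band peak
hr = f(i);                                   % estimated heart rate, Hz
```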
E. Peak Detection and Decision Logic

The detector uses nonlinear filters based on ranks for peak detection and decision logic. After preprocessing, a rank filter detects the peaks in each signal partition above the 60th percentile using a running window of 10 s. Since the signal has been detrended and smoothed, most of these peaks correspond to the P1 signal components. In the case of high-pressure ICP signals, P2 components are usually misclassified as P1 at this stage. Another rank filter applied to the derivative signal detects all maxima above the 90th percentile. These peaks correspond to the points of maximum slope, the signal inflection points.

The decision logic calculates the interbeat intervals of the detected candidate components. Whenever the detector has missed a component (false negative), the interbeat interval has an impulse which exceeds 1.75 times the estimated beat period. In the cases where the detector has over-detected a component (false positive), the impulse is negative, showing an interbeat interval (IBI) less than 0.75 times the estimated beat period. Since missed and over-detected components create impulses in the interbeat series, this stage uses median-based filters to remove this impulsive noise. These detection errors can be easily located by applying a simple set of thresholds to the residual signal, i.e., the difference between the IBI series and the filter output.
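The running rank-order threshold can be sketched as follows. The surrogate signal and the explicit loop are for illustration only; prctile is a Statistics Toolbox function, and the 60th percentile and 10 s window follow the description above.

```matlab
% Sketch: keep local maxima above a running 60th-percentile threshold.
fs = 125;
t  = (0:60*fs-1)/fs;
xs = sin(2*pi*1.3*t)';                         % surrogate smoothed signal
pk = find(xs(2:end-1) > xs(1:end-2) & xs(2:end-1) >= xs(3:end)) + 1;

win = 10*fs;                                   % 10 s running window
thr = zeros(size(xs));
for n = 1:numel(xs)
    lo = max(1, n - win/2);
    hi = min(numel(xs), n + win/2);
    thr(n) = prctile(xs(lo:hi), 60);           % running 60th percentile
end
cand = pk(xs(pk) > thr(pk));                   % candidate P1 components
```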
F. Nearest Neighbor Decision Logic

This stage combines slope and beat amplitude information to decide whether a peak in the smoothed signal is a valid P1. These two metrics are combined by using a simple nearest neighbor algorithm. The inputs to this stage are two arrays containing the time locations of inflection points (slope maxima) and the candidate peak components obtained using the rank filter. The nearest neighbor algorithm locates each candidate component that immediately follows each inflection point. This selects the peaks that meet the relative amplitude requirement and that are immediately preceded by a large slope, which eliminates components with higher amplitude than P1 in high-pressure waves. In the case of low-frequency waves, P1 usually has the highest amplitude, but the amplitude of P2 components may be above the 60th percentile threshold. These are also eliminated in this stage.
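The nearest-neighbor pairing described above amounts to selecting, for each inflection point, the first candidate peak that follows it. A sketch with toy index values:

```matlab
% Sketch: pair each inflection point with the next candidate peak.
infl = [ 90 210 340]';       % inflection-point indices (toy values)
cand = [100 120 225 350]';   % candidate peak indices (toy values)

sel = zeros(size(infl));
for k = 1:numel(infl)
    nxt = cand(cand > infl(k));        % candidates after this inflection
    if ~isempty(nxt), sel(k) = nxt(1); end
end
sel = unique(sel(sel > 0));            % drop misses and duplicates
```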
G. IBI Classification Logic

After the candidate peaks have passed the relative amplitude and slope criteria in the previous stage, the final classification is performed based on the interbeat intervals (IBI) of the time series containing the candidate peaks. Assuming subjects do not have significant arrhythmias, the number of false positives and false negatives can be reduced by imposing time constraints on the IBI series. As mentioned earlier, whenever the detector has missed a peak, the interbeat interval has an impulse which exceeds twice the estimated beat period. In the cases where the detector has over-detected, the impulse is in the opposite direction and is less than half the estimated beat period.

This stage calculates the first difference of the peak-to-peak interval series. It then searches the time series for instances where the interbeat distance is less than 0.75 times the median IBI. This is considered an over-detection, and is removed from the candidate time series. This stage then searches for cases where the IBI is greater than 1.75 times the median IBI, which are considered missed peaks. To correct this, the algorithm searches the initial maxima time series obtained before preprocessing and adds the component that minimizes the interbeat variability. This process is repeated until all the candidate components fall within the expected range or the maximum number of allowed corrections is reached.

Finally, two rank-order filters at the 90th and 10th percentiles are applied to the IBI series in order to detect the locations of possible missed detections and over-detections that were within the accepted heart rate limits.
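The IBI screening rule can be sketched directly from the thresholds given above (0.75 and 1.75 times the median IBI); the detection series here is a toy example, not real data.

```matlab
% Sketch: flag over-detections and missed beats in a detection series fi.
fi  = [1 100 200 230 300 500 600]';    % detected peak indices (toy values)
ibi = diff(fi);                         % interbeat intervals, samples
m   = median(ibi);

over   = find(ibi < 0.75*m);            % candidate false positives
missed = find(ibi > 1.75*m);            % candidate false negatives
```

Over-detections would be deleted from fi, while each missed interval would be repaired by inserting the pre-preprocessing maximum that minimizes the IBI variability, as described above.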
III. METHODS

A. Validation Database and Manual Annotation

Several standard databases are available for the evaluation of QRS detection algorithms. These include the MIT-BIH, AHA, and CSE databases [27]. Presently, there are no benchmark databases available to assess the performance of pressure detection algorithms on ICP, ABP, or SpO2. There are two free databases of blood pressure waveforms, Physionet and Eurobavar, but neither of these has manually annotated pressure components.

We assessed the performance of our algorithm on ICP, ABP, and SpO2 signals acquired from the Pediatric Intensive Care Unit (PICU) at Doernbecher Children's Hospital, Oregon Health & Science University. The signals were acquired by a data acquisition system in the Complex Systems Laboratory (CSL) and are part of the CSL database. The patient population for this study was limited to subjects admitted for traumatic brain injury, sepsis, and cardiac conditions. The sampling rate was 125 Hz and the resolution was 8 bits (256 levels). Although this sampling rate is not sufficient for some types of cardiac arrhythmia analysis, it is adequate for pressure pulse contour analysis and the other applications listed earlier.

A total of 42,539 beats were selected using a random number generator from a population of 210 patients (60 TBI, 60 sepsis, and 90 cardiac). Two patients from each group were randomly selected. A 60 minute record was then randomly chosen from the entire recording available for each patient.

Table II. Sensitivity and positive predictivity of the detection algorithm for ICP, ABP, and SpO2 signals. The table shows the Se and +P results for acceptance intervals of 8.0, 16.0, 24.0, and 48.0 ms. These results used the expert manual annotations (DT) on 42,539 beats randomly selected from a pediatric intensive care unit patient population. The segments included regions of severe artifact.

Table III. Algorithm's sensitivity and positive predictivity validated against two experts' manual annotations of 2300 beats of randomly selected ICP signals for acceptance intervals (AI) of 16.0 and 24.0 ms. The table shows the algorithm's performance (AD) against the two experts (DT and JM), and the consistency of the experts between themselves.

Table IV. Algorithm's sensitivity and positive predictivity validated against two experts' manual annotations of 2179 beats of randomly selected ABP signals for acceptance intervals (AI) of 16.0 and 24.0 ms. The table shows the algorithm's performance (AD) against the two experts (DT and JM), and the consistency of the experts between themselves.

Table V. Algorithm's sensitivity and positive predictivity validated against two experts' manual annotations of 2649 beats of randomly selected SpO2 signals for acceptance intervals (AI) of 16.0 and 24.0 ms. The table shows the algorithm's performance (AD) against the two experts (DT and JM), and the consistency of the experts between themselves.
One expert performed manual annotations for all six records. Each record was divided into nonoverlapping segments of 1 minute duration. The expert visually classified each segment as normal, corrupted, or absent. A normal segment was defined as a segment in which the noise corrupting the signal was not abnormal, in the sense that the corrupting noise is typically present for the specific waveform in a critical care environment. Examples of this type of noise are baseline drift, amplitude modulation with respiration, power-line interference, and morphology changes. A corrupted segment was defined as a segment in which the signal contains substantial artifact that prevents standard analysis methods from being effective. Examples include device saturation (clipping) and external perturbation of the sensor (catheter movement by nurse or patient). Segments in which the signal was lost (constant) for more than 10 s were classified as absent. Instructions for classifying segments and examples are available in [28].

Once the segments were classified, the expert manually labeled every beat in all six records (42,539 beats). A second expert manually annotated 7128 beats of the normal and corrupted segments.
B. Benchmark Parameters

Following the guidelines proposed by the Association for the Advancement of Medical Instrumentation (AAMI), two benchmark parameters were used to assess the algorithm's performance: sensitivity and positive predictivity [29]. Sensitivity (Se) and positive predictivity (+P) are defined as

Se = TP / (TP + FN)    (2)

+P = TP / (TP + FP)    (3)

where TP is the number of true positives, FN the number of false negatives, and FP the number of false positives. The sensitivity indicates the percentage of true beats that were correctly detected by the algorithm. The +P indicates the percentage of beat detections which were labeled as such by the expert.

C. Algorithm Assessment

The algorithm was validated prospectively against annotated detections generated by two different experts on ICP, ABP, and SpO2 signals. The performance of the algorithm was first assessed on the randomly chosen segments without taking into consideration whether they contained portions of significant artifact. After an expert manually classified each minute as normal, corrupted, or absent, the algorithm performance was assessed using each expert's manual annotations as the true peaks on the normal and corrupted segments. The algorithm was developed using pressure signals from different patients than those used for performance assessment. The assessment was measured only once without any parameter tuning.
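The benchmark computation of (2) and (3) can be sketched by matching detections to annotations within the acceptance interval; the index values below are toy data for illustration only.

```matlab
% Sketch: count TP, FN, FP by matching detections to annotations within
% an acceptance interval, then compute Se and +P (toy index values).
fs  = 125;
ai  = round(0.016*fs);                 % 16 ms acceptance interval, samples
ann = [100 225 350 480]';              % expert annotations (samples)
det = [101 226 480 600]';              % algorithm detections (samples)

TP = sum(arrayfun(@(a) any(abs(det - a) <= ai), ann));   % matched beats
FN = numel(ann) - TP;                                    % missed beats
FP = numel(det) - sum(arrayfun(@(d) any(abs(ann - d) <= ai), det));

Se = 100 * TP / (TP + FN);             % sensitivity, percent
PP = 100 * TP / (TP + FP);             % positive predictivity, percent
```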
IV. RESULTS

Table II reports the algorithm's sensitivity and positive predictivity for the different pressure signals and acceptance intervals of 8.0, 16.0, 24.0, and 48.0 ms. These are based on one expert's manual annotations for all 42,539 beats, including segments classified as normal, corrupted, and absent. Tables III-V report the algorithm's sensitivity and positive predictivity on ICP, ABP, and SpO2 signals, respectively. These tables show the algorithm's performance (AD) compared with two different experts (DT and JM) on segments classified as normal or corrupted. The inter-expert agreement is also reported, with DT used as the true peaks. The algorithm's average sensitivity on the 42,539 beats is 99.36%, with a positive predictivity of 98.43% for an acceptance interval of 16 ms.

Fig. 6. Illustrative example showing an ICP signal and the percussion peaks (P1) identified by the two experts and the detection algorithm. In this case both experts and the algorithm were in perfect agreement.

Fig. 7. Illustrative example showing an ICP signal and the percussion peaks (P1) identified by the two experts and the detection algorithm. Again both experts and the algorithm were in perfect agreement despite the changing morphology and the different character than the signal shown in Fig. 6.


V. DISCUSSION

A. Results
The results show that the algorithm is nearly as accurate as the experts are with one another. Figs. 6 and 7 show examples of ICP percussion peaks. Note that the signal morphology in Fig. 7 is considerably different from that in Fig. 6. Fig. 8 shows examples in which the algorithm detected different peaks than the experts in a SpO2 signal. Note that this segment is corrupted by clipping artifact and the algorithm continued to identify peaks (over-detection). When clipping occurs, the algorithm tries to interpolate and performs component detection that minimizes the interbeat-interval variability. The experts did not try to interpolate in segments where the signal was absent due to device saturation. This reduces the algorithm's reported sensitivity and positive predictivity. Fig. 8 also shows a missed peak after the clipped region. In general, regions where artifact occurs have a slight effect on nearby normal beats. This occurs because the artifacts can affect the rank filters' baseline and, therefore, the estimated relative amplitude and estimated slope.
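The rank filters mentioned above are sliding-window order statistics: each output sample is a chosen rank of the samples in a window centered on it, so a low rank tracks the baseline underneath peaks and artifact. A toy sketch of the idea (the window length and rank fraction are illustrative, not the paper's values):

```python
def rank_filter(x, window, q):
    """Sliding-window rank-order filter: for each sample, output the
    order statistic at fraction q (0 <= q <= 1) of the sorted window."""
    half = window // 2
    y = []
    for i in range(len(x)):
        w = sorted(x[max(0, i - half):i + half + 1])
        # pick the q-fraction order statistic of the (possibly truncated) window
        y.append(w[min(len(w) - 1, int(q * len(w)))])
    return y

x = [0, 0, 10, 0, 0, 0, 9, 0, 0]   # flat baseline with two isolated spikes
baseline = rank_filter(x, 5, 0.2)  # low-rank output ignores the spikes
```

Because the low-rank output discards the largest samples in each window, isolated spikes do not leak into the baseline estimate, which is why sustained artifact (rather than single spikes) is what perturbs it.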

Since the data were sampled at 125 Hz and there were several regions with clipping, we chose an acceptance interval of 16 ms (two sample periods at 125 Hz). We expect that similar or better performance would be obtained on signals sampled at a higher rate.
B. Algorithm Limitations and Computational Efficiency
Most stages are computationally efficient enough to implement in a nearly real-time block processing architecture.
However, the IBI-based decision logic stage eliminates all the
candidate components which do not meet timing requirements
and adds components that minimize the IBI variability. This
stage is computationally inefficient because it requires several
searching and sorting operations. This is exacerbated by the
repeated passes through this step until no further corrections
are made. If the number of allowed corrections is not limited,
the algorithm may continue this indefinitely. For the results
reported here we limited the number of corrections to 5 times
the initial number of false detections.
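The iteration cap described above can be sketched as a simple loop guard: each pass removes (or inserts) candidate beats to reduce IBI variability, and the loop stops either when a pass makes no corrections or when the budget is exhausted. The sketch below is schematic; `ibi_correct_pass` is a toy stand-in for the paper's IBICORRECT decision logic (it only drops beats that create implausibly short intervals), not the real function:

```python
def median(v):
    s = sorted(v)
    return s[len(s) // 2]

def ibi_correct_pass(beats):
    """One correction pass: drop the first beat that creates an
    interbeat interval shorter than half the median IBI.
    (Toy stand-in for the paper's IBICORRECT logic.)"""
    ibi = [b - a for a, b in zip(beats, beats[1:])]
    thr = 0.5 * median(ibi)
    for i, d in enumerate(ibi):
        if d < thr:
            return beats[:i + 1] + beats[i + 2:], 1  # one correction made
    return beats, 0

def ibi_correct(beats, max_corrections):
    """Repeat correction passes until none are made or the budget runs out."""
    done = 0
    while done < max_corrections:
        beats, n = ibi_correct_pass(beats)
        if n == 0:
            break
        done += n
    return beats

# Beats at ~0.8 s spacing with two spurious detections squeezed in
beats = [0.0, 0.8, 1.0, 1.6, 2.4, 2.5, 3.2]
clean = ibi_correct(beats, max_corrections=10)
```

Without the `max_corrections` guard, a pathological segment could keep triggering insertions and deletions forever, which is exactly the runaway behavior the paper bounds at 5 times the initial false-detection count.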
C. Validation Databases
Although there are several standard databases available for
the evaluation of QRS detection algorithms, there are no benchmark databases presently available to assess the performance of


IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 52, NO. 10, OCTOBER 2005

Fig. 8. Example showing a SpO2 signal and the systolic peak (SBP) identified by the two experts and the detection algorithm. In this case the experts and the algorithm labeled different peaks in the regions of artifact.
TABLE VI
FUNCTION PSEUDOCODE: DETECTMAXIMA

TABLE VII
FUNCTION PSEUDOCODE: ESTIMATEHEARTRATE

pressure detection algorithms. Validation databases with manually annotated beats by human experts are needed in order to
provide reproducible and comparable performance assessment
of pressure detection algorithms.
Our validation dataset is publicly available at http://bsp.pdx.edu to provide other developers with annotated examples that can be used to validate their beat detection algorithms.
Nonetheless, we caution developers and users about the risks of validation databases. If developers use these datasets for development, the performance is favorably biased by the tuning and algorithm design that occurs during development. These algorithms may have worse performance when applied prospectively to new datasets. Although validation databases contain a large number of annotated peaks, detection algorithms can still be favorably tuned to the common cardiac physiology of the patient population, which is often a narrow subgroup that has been targeted for its common pathologies. Ideally, validation should be performed prospectively by a third party on data that is unavailable to developers. Some progress toward this higher standard of performance has been achieved through the Computers in Cardiology challenges. Independent third-party validation of algorithms on proprietary data with standardized performance measures would significantly advance the quality of detection algorithms as a whole.
VI. CONCLUSION
We described a new automatic beat detection algorithm that can be used to detect the percussion component in ICP signals and the systolic peak in ABP and SpO2 signals. Although there is a substantial body of literature describing QRS detection algorithms, there are almost no published descriptions or assessments of pressure detection algorithms. These algorithms are needed for many applications and research.
Our algorithm consists of several stages. It relies on the estimated heart rate to choose the cutoff frequencies used by the preprocessing bandpass filters and to aid the discrimination of false negatives and false positives in the interbeat-interval decision logic stage. It uses three bandpass filters to eliminate drift and attenuate high-frequency noise. It uses nonlinear rank-order filters for peak detection and decision logic. The algorithm was validated prospectively (the validation dataset was not available during algorithm development). The algorithm was run only once on the dataset and achieved a sensitivity of 99.36% and a positive predictivity of 99.43% when compared with expert manual annotations of ICP, ABP, and SpO2 signals from the CSL Database (OHSU).
We also described a validation dataset and the CSL Database of the Doernbecher Children's Hospital (Oregon Health & Science University). This validation dataset is publicly available as a standard database for algorithm validation.

APPENDIX
The following tables provide the pseudocode of the functions used by the pressure detector algorithm.

TABLE VIII
FUNCTION PSEUDOCODE: IBICORRECT

ACKNOWLEDGMENT
The authors wish to acknowledge the support of the Northwest Health Foundation and the Doernbecher Children's Hospital Foundation.

REFERENCES

[1] B.-U. Köhler, C. Henning, and R. Orglmeister, "The principles of software QRS detection," IEEE Eng. Med. Biol. Mag., vol. 21, no. 1, pp. 42-57, Jan.-Feb. 2002.
[2] L. Antonelli, W. Ohley, and R. Khamlach, "Dicrotic notch detection using wavelet transform analysis," in Proc. 16th Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 2, 1994, pp. 1216-1217.
[3] P. F. Kinias and M. Norusis, "A real time pressure algorithm," Comput. Biol. Med., vol. 11, pp. 211211, 1981.
[4] M. Aboy, J. McNames, and B. Goldstein, "Automatic detection algorithm of intracranial pressure waveform components," in Proc. 23rd Int. Conf. IEEE Engineering in Medicine and Biology Society, vol. 3, 2001, pp. 2231-2234.
[5] M. Aboy, C. Crespo, J. McNames, and B. Goldstein, "Automatic detection algorithm for physiologic pressure signal components," in Proc. 24th Int. Conf. IEEE Engineering in Medicine and Biology Society and Biomedical Engineering Society, vol. 1, 2002, pp. 196-197.
[6] E. G. Caiani, M. Turiel, S. Muzzupappa, A. Porta, G. Baselli, S. Cerutti, and A. Malliani, "Evaluation of respiratory influences on left ventricular function parameters extracted from echocardiographic acoustic quantification," Physiolog. Meas., vol. 21, pp. 175-186, 2000.
[7] J. D. Doyle and P. W. S. Mark, "Analysis of intracranial pressure," J. Clin. Monitoring, vol. 8, no. 1, pp. 81-90, 1992.
[8] G. Parati, M. Di Rienzo, and G. Mancia, "How to measure baroreflex sensitivity: From the cardiovascular laboratory to daily life," J. Hypertension, vol. 18, pp. 7-19, 2000.
[9] M. Di Rienzo, P. Castiglioni, G. Mancia, A. Pedotti, and G. Parati, "Advances in estimating baroreflex function," IEEE Eng. Med. Biol. Mag., vol. 20, no. 2, pp. 25-32, Mar./Apr. 2001.
[10] M. Di Rienzo, G. Parati, P. Castiglioni, R. Tordi, G. Mancia, and A. Pedotti, "Baroreflex effectiveness index: An additional measure of baroreflex control of heart rate in daily life," Am. J. Physiol. Regulatory, Integrative, Comparative Physiol., vol. 280, pp. R744-R751, 2001.
[11] E. G. Caiani, M. Turiel, S. Muzzupappa, A. Porta, L. P. Colombo, and G. Baselli, "Noninvasive quantification of respiratory modulation on left ventricular size and stroke volume," Physiolog. Meas., vol. 23, pp. 567-580, 2002.
[12] G. E. McVeigh, C. W. Bratteli, C. M. Alinder, S. Glasser, S. M. Finkelstein, and J. N. Cohn, "Age-related abnormalities in arterial compliance identified by pressure contour analysis," Hypertension, vol. 33, pp. 1392-1398, 1999.
[13] W. Holsinger, K. Kempner, and M. Miller, "A QRS preprocessor based on digital differentiation," IEEE Trans. Biomed. Eng., vol. BME-18, pp. 212-217, 1971.
[14] M.-E. Nygards and J. Hulting, "An automated system for ECG monitoring," Comput. Biomed. Res., vol. 12, pp. 181-202, 1979.
[15] M. Okada, "A digital filter for the QRS complex detection," IEEE Trans. Biomed. Eng., vol. BME-26, pp. 700-703, 1979.
[16] J. Fraden and M. Neumann, "QRS wave detection," Med. Biol. Eng. Comput., vol. 18, pp. 125-132, 1980.
[17] J. Pan and W. Tompkins, "A real-time QRS detection algorithm," IEEE Trans. Biomed. Eng., vol. BME-32, pp. 230-236, 1985.
[18] G. Friesen, T. Jannett, M. Jadallah, S. Yates, S. Quint, and H. Nagle, "A comparison of the noise sensitivity of nine QRS detection algorithms," IEEE Trans. Biomed. Eng., vol. 37, no. 1, pp. 85-98, Jan. 1990.
[19] F. Gritzali, "Toward a generalized scheme for QRS detection in ECG waveforms," Signal Process., vol. 15, pp. 183-192, 1988.
[20] P. Hamilton and W. Tompkins, "Quantitative investigation of QRS detection rules using the MIT/BIH arrhythmia database," IEEE Trans. Biomed. Eng., vol. BME-33, pp. 1157-1165, 1986.
[21] P. Hamilton and W. Tompkins, "Adaptive matched filtering for QRS detection," in Proc. Annu. Int. Conf. IEEE Engineering in Medicine and Biology Society, 1988, pp. 147-148.
[22] V. Afonso, W. Tompkins, T. Nguyen, and S. Luo, "ECG beat detection using filter banks," IEEE Trans. Biomed. Eng., vol. 46, no. 2, pp. 192-202, Feb. 1999.


[23] S. Kadambe, R. Murray, and G. Boudreaux-Bartels, "Wavelet transform-based QRS complex detector," IEEE Trans. Biomed. Eng., vol. 46, no. 7, pp. 838-848, Jul. 1999.
[24] W. W. Nichols and M. F. O'Rourke, McDonald's Blood Flow in Arteries: Theoretical, Experimental and Clinical Principles, 4th ed. London, U.K.: Arnold, 1998.
[25] B. North, "Intracranial pressure monitoring," in Head Injury, P. Reilly and R. Bullock, Eds. London, U.K.: Chapman & Hall, 1997, pp. 209-216.
[26] M. H. Hayes, Statistical Digital Signal Processing and Modeling. New York: Wiley, 1996.
[27] MIT-BIH ECG Database. Massachusetts Inst. Technol., Cambridge. [Online]. Available: http://ecg.mit.edu
[28] M. Aboy. (2002) Instructions for Labeling Segments Used on the Evaluation of the BSP-Automatic Pressure Detection Algorithm. Portland State University. [Online]. Available: http://bsp.pdx.edu
[29] (1998) ANSI/AAMI EC57: Testing and Reporting Performance Results of Cardiac Rhythm and ST Segment Measurement Algorithms. AAMI Recommended Practice/American National Standard. [Online]. Available: http://www.aami.org

Tran Thong (S'70-M'76-SM'82-F'89) received the B.S.E.E. degree from Illinois Institute of Technology, Chicago, in 1972, and the M.S.E. and M.A. (EE) degrees in 1973 and 1974 and the Ph.D. degree in electrical engineering in 1975 from Princeton University, Princeton, NJ.
He worked at Bell Laboratories, Litton Industries, General Electric Company, and Tektronix Inc. From 1993
to 2001, he was the Engineering Vice-President of
Micro Systems Engineering, Inc., a Biotronik company, where he was responsible for the development
of pacemakers and implantable defibrillators. Since 1990, he has been an Adjunct Professor of Electrical and Computer Engineering in the OGI School of
Science & Engineering, Oregon Health & Science University, Beaverton. In
2002, he joined the Department of Biomedical Engineering of the OGI School
of Science and Engineering as an Assistant Professor. He has published over 50
articles and conference papers, and holds 23 U.S. patents. His current research is
directed toward cardiac rhythm management and biomedical signal processing.
Dr. Thong is a member of Sigma Xi, Tau Beta Pi, and Eta Kappa Nu.

Daniel Tsunami, photograph and biography not available at the time of publication.
Mateo Aboy (M'98) received the double B.S.
degree (magna cum laude) in electrical engineering
and physics from Portland State University (PSU),
Portland, OR, in 2002. In 2004, he received the M.S.
degree (summa cum laude) in electrical and computer engineering from PSU and the M.Phil (DEA)
degree from the University of Vigo (ETSIT-Vigo),
Vigo, Spain, where he is working towards the Ph.D.
degree in the Signal Theory and Communications
Department.
Since September 2000, he has been a research
member of the Biomedical Signal Processing Laboratory (PSU). He has been
with the Electronics Engineering Technology Department at Oregon Institute
of Technology, Portland, since 2005. His primary research interest is statistical
signal processing.
Mr. Aboy is a lifetime honorary member of the Golden-Key Honor Society, a
past Chapter President of HKN (International Electrical Engineering Honor Society), and past Corresponding Secretary of TBP (National Engineering Honor
Society).

James McNames (M'99-SM'03) received the B.S.


degree in electrical engineering from California Polytechnic State University, San Luis Obispo, in 1992.
He received M.S. and Ph.D. degrees in electrical engineering from Stanford University, Stanford, CA, in
1995 and 1999, respectively.
He has been with the Electrical and Computer Engineering Department at Portland State University,
Portland, OR since 1999, where he is currently an Associate Professor. He has published over 90 journal
and conference papers. His primary research interest
is statistical signal processing with biomedical applications.
He founded the Biomedical Signal Processing (BSP) Laboratory
(bsp.pdx.edu) in fall 2000. The mission of the BSP Laboratory is to advance the art and science of extracting clinically significant information from
physiologic signals. Members of the BSP Laboratory primarily focus on
clinical projects in which the extracted information can help physicians make
better critical decisions and improve patient outcome.

Miles S. Ellenby received the bachelor's (1986) and master's (1987) degrees in electrical engineering from the University of Illinois, Urbana-Champaign.
He received the M.D. degree from the University of
Chicago in 1990.
He did his pediatrics residency (1991-1994) at the Children's Hospital of Philadelphia, followed by a year as chief resident (1994-1995). He did a fellowship in Pediatric Critical Care Medicine in the combined University of California/San Francisco Children's Hospital of Oakland program (1995-1998). He joined the faculty
at Oregon Health & Science University in 2000, where he is now an Assistant
Professor of Pediatrics. He specializes in critical care medicine with a special
interest in the care of children with congenital heart disease. His research
interests are in cardiovascular monitoring, complex systems analysis and
biological signals processing.

Brahm Goldstein (A'99) received the B.S. degree


in biological science from Northwestern University,
Evanston, IL, in 1977 and the M.D. degree from the
State University of New York (SUNY) Upstate Medical Center at Syracuse in 1981. His clinical training
included residency in pediatrics at the University
of California at Los Angeles (UCLA) Medical
Center and fellowships in pediatric cardiology and
pediatric critical care medicine at Childrens Hospital and Massachusetts General Hospital, Boston,
respectively.
After serving as a faculty member at the Harvard Medical School,
Cambridge, MA, and at the University of Rochester School of Medicine
and Dentistry, Rochester, NY, he joined the Oregon Health & Science
University, Portland, where he now is Professor of Pediatrics and Director of
the Pediatric Clinical Research Office. His research interests include the study
of heart rate variability and the acquisition and analysis of biomedical signals
in critical illness and injury (brain injury and septic shock in particular). He
formed the Complex Systems Laboratory (www.ohsuhealth/dch/complex) in
1998 to study complex disease states in critically ill and injured children.
Dr. Goldstein is a diplomate of the American Board of Pediatrics and its sub-board of pediatric critical care medicine.

IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 52, NO. 8, AUGUST 2005

1485

Communications

Adaptive Modeling and Spectral Estimation of Nonstationary Biomedical Signals Based on Kalman Filtering

Mateo Aboy*, Óscar W. Márquez, James McNames, Roberto Hornero, Tran Thong, and Brahm Goldstein
Abstract: We describe an algorithm to estimate the instantaneous power spectral density (PSD) of nonstationary signals. The algorithm is based on a dual Kalman filter that adaptively generates an estimate of the autoregressive model parameters at each time instant. The algorithm exhibits superior PSD tracking performance on nonstationary signals compared with classical nonparametric methodologies, and it does not assume local stationarity of the data. Furthermore, it provides better time-frequency resolution and is robust to model mismatches. We demonstrate its usefulness by a sample application involving PSD estimation of intracranial pressure (ICP) signals from patients with traumatic brain injury (TBI).
Index Terms: Intracranial pressure, Kalman filter, linear models, spectral estimation, traumatic brain injury.

I. INTRODUCTION
Currently, power spectral density (PSD) estimation of physiologic signals is performed predominantly using classical techniques based on the fast Fourier transform (FFT). Nonparametric methods such as the periodogram and its improvements (i.e., Bartlett's, Welch's, and Blackman-Tukey's methodologies [1]-[4]) are based on the idea of estimating the autocorrelation sequence of a random process from measured data, and then taking the FFT to obtain an estimate of the power spectrum. The two main advantages of these techniques are their computational efficiency, due to the numerical efficiency of the FFT algorithm, and that they make no assumptions about the process except for its stationarity. This makes them the methodology of choice, particularly in situations where long data records need to be analyzed and there is no clear model for the process. Furthermore, the availability of long data records enables one to improve their statistical properties by averaging or smoothing. However, these techniques have some limitations. They require stationarity of the segments studied, do not work
Manuscript received March 1, 2004; revised January 2, 2005. A previous version of this paper was presented at the IEEE-EMBS 2004 conference. This work was supported in part by the Northwest Health Foundation, in part by the Doernbecher Children's Hospital Foundation, and in part by the Thrasher Research Fund. Asterisk indicates corresponding author.
*M. Aboy is with the Department of Electronics Engineering Technology, Oregon Institute of Technology, Portland, OR 97201 USA. He is also with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201 USA (e-mail: mateoaboy@ieee.org).
O. W. Márquez is with the Signal Theory and Communications Department, ETSI-Telecomunicación, University of Vigo, 36310 Vigo, Spain, EU.
J. McNames is with the Biomedical Signal Processing Laboratory, Department of Electrical and Computer Engineering, Portland State University, Portland, OR 97201 USA.
R. Hornero is with the Department of Signal Theory and Communications, ETSI-Telecomunicación, University of Valladolid, 47011 Valladolid, Spain, EU.
T. Thong is with the Department of Biomedical Engineering, OGI School of Science and Engineering, Oregon Health and Science University, Portland, OR 97206 USA.
B. Goldstein is with the Complex Systems Laboratory, Department of Pediatrics, Oregon Health and Science University, Portland, OR 97201 USA.
Digital Object Identifier 10.1109/TBME.2005.851465

well for short data records, and have limited frequency resolution. Since
physiologic signals are nonstationary in nature, these techniques are
applied following the methodology of the short-time Fourier transform
(STFT), where nonparametric methods are applied to short overlapping
segments which are assumed to be stationary. This approach also has its limitations. It imposes a piecewise stationary model on the data and, since local stationarity requires the analysis segments to be short in duration, it has limited time-frequency resolution.
Time-frequency resolution can be improved by using parametric
methods of PSD estimation. The parametric approach is based on
modeling the signal under analysis as a realization of a particular
stochastic process and estimating the model parameters from its
samples. In the absence of a priori knowledge about how the process
is generated, parametric PSD is generally performed assuming an
autoregressive (AR) model [4]. This is a popular assumption for
several reasons: 1) many natural signals such as speech, music or
seismic signals have an underlying autoregressive structure; 2) in
general, any signalnot necessarily AR in naturecan be modeled
as an AR process if a sufciently large model order is selected; 3)
the all-pole structure of AR enables for good spectral peak matching,
which makes it a good model candidate for situations where we are
more interested in the spectral peaks than valleys; and 4) estimation
of the model parameters involves the solution of a linear system of
equations, which can be solved efciently. Even though parametric
PSD can improve the frequency resolution, the current techniques
for PSD estimation based on AR models (i.e., autocorrelation, covariance, modied convariance, and Burgs methods [5], [6]) assume
stationarity. To analyze nonstationary signals they must also assume
the signal is locally piecewise stationary.
We describe a methodology to estimate the time-varying AR model parameters of nonstationary signals using an adaptive Kalman filter. This methodology produces instantaneous estimates of the PSD, improves time-frequency resolution, and enables nonstationary spectral analysis in situations where data records are too short and the locally stationary model does not work well. The reliability of the algorithm was
tested with synthetic data generated from different models (AR, MA,
ARMA, and harmonic), and with real data from physiologic pressure
signals. Following the description of this methodology, we demonstrate
its usefulness by a sample application involving PSD estimation of intracranial pressure signals (ICP) from patients with traumatic brain injury (TBI).
II. METHODS
The adaptive Kalman filter algorithm we propose for instantaneous PSD estimation assumes an underlying autoregressive structure in the data. We chose an underlying AR model structure because of its intrinsic generality and peak matching capabilities. These are important properties for the analysis of physiologic signals, since we are usually more interested in estimating the frequencies at which the formants (peaks) occur than the valleys. Starting from this assumption, we modeled a given physiologic signal with a recursion of the form

x(n) = \sum_{k=1}^{p} a_k x(n-k) + w(n)    (1)

where x(n) is the physiologic signal under analysis at instant n, \{a_k\}_{k=1}^{p} are the model parameters, \{x(n-k)\}_{k=1}^{p} are delayed samples of the signal, and w(n) is assumed to be an independent, normally distributed random sequence with zero mean. Equation (1) can be generalized by allowing the model coefficients \{a_k\}_{k=1}^{p} to be time-variant. The estimation problem within the context of nonstationary processes leads naturally to the discrete Kalman filter (DKF) [7]-[9].
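Equation (1) can be simulated directly: pick coefficients {a_k} and drive the recursion with white Gaussian noise. A minimal sketch (the AR(2) coefficients below are illustrative values chosen inside the stability region, not taken from the paper):

```python
import random

def simulate_ar(a, n, seed=0):
    """Generate n samples of x(n) = sum_k a[k-1] * x(n-k) + w(n),
    with w(n) zero-mean, unit-variance white Gaussian noise."""
    rng = random.Random(seed)
    p = len(a)
    x = [0.0] * p  # zero initial conditions
    for _ in range(n):
        w = rng.gauss(0.0, 1.0)
        # x[-1 - k] is x(n-1-k), so a[k] multiplies x(n-(k+1))
        x.append(sum(a[k] * x[-1 - k] for k in range(p)) + w)
    return x[p:]

# Stable AR(2): poles at 0.9 * exp(+/- j*pi/4)
a = [2 * 0.9 * (0.5 ** 0.5), -0.81]
x = simulate_ar(a, 1000)
```

Because the poles lie inside the unit circle, the generated sequence remains bounded; this kind of synthetic signal is what one would use to check an AR parameter estimator against a known ground truth.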
In order to use the DKF, we must have a signal model in state-space form, in which the state of the system evolves as a first-order difference equation and must be estimated from noisy observations. The general form of the state-space model for the linear DKF is given by [8], [10]

\mathbf{x}(n) = A(n-1)\mathbf{x}(n-1) + \mathbf{w}(n)
\mathbf{y}(n) = H(n)\mathbf{x}(n) + \mathbf{v}(n)    (2)

where \mathbf{x}(n) is the state of the system, A(n-1) is the transition or system matrix, H(n) is the observation matrix, \mathbf{y}(n) is the vector of observations, and \mathbf{w}(n) and \mathbf{v}(n) are zero-mean white Gaussian noise processes representing system and observation noise, respectively. The system noise and the observation noise are assumed to be independent. If the problem can be formulated in state space according to (2), and if we know A(n-1), H(n), and the covariance matrices of \mathbf{w}(n) and \mathbf{v}(n), then we can use the DKF to estimate the state of the system optimally according to the Kalman recursion.

A. Signal Model in State-Space

Since our signal model (1) is a pth-order difference equation, we can transform it to a system of difference equations by defining the state of the system as the p-dimensional vector

\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-p+1)]^T.    (3)

This enables us to rewrite (1) as a first-order difference equation with time-varying model parameters and to create a state-space model for the DKF

A(n) = \begin{bmatrix} a_1(n) & a_2(n) & \cdots & a_{p-1}(n) & a_p(n) \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{bmatrix}    (4)

\mathbf{x}(n) = A(n-1)\mathbf{x}(n-1) + \mathbf{w}(n)
y(n) = H\mathbf{x}(n) + v(n).    (5)

The measurement matrix is H = (1\ 0\ \ldots\ 0). However, in order for this state-space model to be useful, we need a way to estimate the vector of time-varying coefficients a(n) = (a_1(n)\ a_2(n)\ \ldots\ a_p(n))^T corresponding to the first row of the transition matrix A(n) at time n.

B. Dual Kalman Filter

The vector of time-varying coefficients that makes up the first row of the transition matrix A(n) can also be estimated using a DKF. This is referred to as a dual Kalman filter, that is, two DKFs working in parallel to estimate the model parameters and the state of the system [11]-[13]. The estimation of the model parameters a(n) can be formulated in state space as follows:

a(n) = \Phi a(n-1) + e(n)
x(n+1) = \mathbf{x}^T(n) a(n) + q(n)    (6)

where \Phi is a user-specified diagonal matrix with entries (\phi_{ij})_{i=j} corresponding to the correlation between a(n) and a(n-1), which control the adaptation speed and frequency tracking capabilities of the algorithm. For biomedical signals, where the model parameters change slowly, values close to 1 work well (e.g., 0.995). In the case of (\phi_{ij})_{i=j} = 1, the system equation becomes a(n) = a(n-1) + e(n). This is a simple Markov process in which the vector of coefficients evolves following a random walk. The adaptation speed is controlled by the covariance of e(n); there is a tradeoff between high adaptation speed (fast tracking) and the variance of the estimates. The measurement equation of the model, x(n+1) = \mathbf{x}^T(n) a(n) + q(n), implements a linear predictor in which the signal at time n+1 is estimated from the p previous values according to

x(n+1) = \mathbf{x}^T(n) a(n) + q(n) = \sum_{k=1}^{p} a_k(n) x(n-k+1) + q(n).    (7)

In the state-space formulation given by (6), the model parameters a(n) become state variables. The optimum linear estimate of the state a(n) can be computed recursively according to

\hat{a}(n|n-1) = \Phi \hat{a}(n-1|n-1)    (8)
z(n) = x(n+1)    (9)
\hat{z}(n) = \mathbf{x}^T(n) \hat{a}(n|n-1)    (10)
\hat{a}(n|n) = \hat{a}(n|n-1) + K(n)\,[z(n) - \hat{z}(n)]    (11)
P(n|n-1) = \Phi P(n-1|n-1) \Phi^T + Q_e(n)    (12)
\Lambda(n) = \mathbf{x}^T(n) P(n|n-1) \mathbf{x}(n)    (13)
K(n) = P(n|n-1) \mathbf{x}(n) [\Lambda(n) + Q_q(n)]^{-1}    (14)
P(n|n) = [I - K(n) \mathbf{x}^T(n)] P(n|n-1)    (15)

where \hat{a}(n|n-1) = \Phi\hat{a}(n-1|n-1) is the best estimate of the state (i.e., the AR model parameters) without incorporating the observation at time n, based only on the model structure we imposed on the evolution of a(n) (prediction), and \hat{z}(n) = \mathbf{x}^T(n)\hat{a}(n|n-1) is the best estimate of the measurement z(n) = x(n+1) based on the model. The optimum estimate of the state at time n incorporating the measurement at time n is given by \hat{a}(n|n) = \hat{a}(n|n-1) + K(n)[z(n) - \hat{z}(n)], which is composed of two terms: the best estimate of the state without the measurement at time n, and a weighted difference between the observation at time n and the best estimate of this observation (correction). The weighting factor K(n) is calculated optimally following the Kalman recursion and is referred to as the Kalman gain [7], [8], [10].
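For a single coefficient (p = 1) the recursion (8)-(15) reduces to scalars, which makes the structure easy to see. The sketch below tracks the coefficient of an AR(1) signal; Φ, Q_e, and Q_q are set to illustrative values, and the code is a schematic of the parameter filter only (the second filter of the dual pair, which re-estimates the signal state, is omitted):

```python
import random

def track_ar1_coefficient(x, phi=1.0, Qe=1e-5, Qq=1.0):
    """Scalar Kalman filter for a time-varying AR(1) coefficient:
    state a(n), measurement x(n+1) = x(n)*a(n) + q(n)."""
    a_est, P = 0.0, 1.0
    for n in range(len(x) - 1):
        a_pred = phi * a_est                 # (8)  prediction
        P_pred = phi * P * phi + Qe          # (12) predicted covariance
        z = x[n + 1]                         # (9)  measurement
        z_hat = x[n] * a_pred                # (10) predicted measurement
        lam = x[n] * P_pred * x[n]           # (13) innovation variance (state part)
        K = P_pred * x[n] / (lam + Qq)       # (14) Kalman gain
        a_est = a_pred + K * (z - z_hat)     # (11) correction
        P = (1.0 - K * x[n]) * P_pred        # (15) covariance update
    return a_est

# Synthetic AR(1) signal with a known coefficient of 0.9
rng = random.Random(1)
a_true = 0.9
x = [0.0]
for _ in range(3000):
    x.append(a_true * x[-1] + rng.gauss(0.0, 1.0))
a_hat = track_ar1_coefficient(x)
```

With a small Q_e the filter behaves like a slowly forgetting recursive estimator and settles near the true coefficient; raising Q_e speeds up tracking at the cost of noisier estimates, which is the tradeoff described in the text.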

C. Instantaneous PSD Estimation

The theoretical power spectrum of a pth-order stationary autoregressive process is given by

P_x(e^{j\omega}) = \frac{|b(0)|^2}{\left|1 - \sum_{k=1}^{p} a_k e^{-jk\omega}\right|^2}.    (16)

If b(0) and \{a_k\}_{k=1}^{p} can be estimated from data, then we can form an estimate of the power spectrum of a stationary process as

\hat{P}_x(e^{j\omega}) = \frac{|\hat{b}(0)|^2}{\left|1 - \sum_{k=1}^{p} \hat{a}_k e^{-jk\omega}\right|^2}.    (17)

Since the dual Kalman filter we propose provides estimates of \{a_k\}_{k=1}^{p} at each time instant, the nonstationary power spectrum is given by

\hat{P}_x(e^{j\omega}, n) = \frac{|\hat{b}(0, n)|^2}{\left|1 - \sum_{k=1}^{p} \hat{a}_k(n) e^{-jk\omega}\right|^2}.    (18)

Therefore, the nonstationary PSD given the instantaneous estimates of the model parameters a(n) can be computed according to

\hat{P}_x(e^{j\omega}, n)_{KM} = \frac{1}{|\mathrm{FFT}[\bar{a}(n)]|^2}    (19)

\bar{a}(n) = [1, -a(n)^T]^T.    (20)

Fig. 1. Representative results of the first comparative study between a nonparametric methodology (Welch's) and the proposed DKF PSD estimator. (a) Plot shows an example of the 10-s ICP segment (light grey) and the 2-s subsegment used for this simulation. (b) Plot of the 2-s subsegment highlighted in (a). (c) Welch's PSD (dark) and DKF PSD (light) estimates corresponding to the 10-s segment. (d) Welch's PSD (dark) and DKF PSD (light) estimates on the 2-s subsegment. The thin line corresponds to Welch's estimate based on the 10-s segment. (e) Welch's PSD (dark) and DKF PSD (light) estimates in the 10-s segment with y-axis in dB scale. (f) Welch's PSD (dark) and DKF PSD (light) estimates in the 2-s subsegment with y-axis in dB scale. The dotted line is the Welch estimate based on the 10 s.
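Given the coefficient estimates at a time instant, (18) is a direct evaluation of the AR transfer function on a frequency grid, which is what (19)'s FFT of [1, -a^T] computes. A sketch with illustrative AR(2) coefficients whose poles sit at angle 0.3π rad/sample (b(0) taken as 1; none of these values are from the paper):

```python
import cmath
import math

def ar_psd(a, n_freq=512):
    """Evaluate P(e^{jw}) = 1 / |1 - sum_k a_k e^{-jkw}|^2
    on n_freq points over [0, pi), with b(0) assumed to be 1."""
    freqs = [math.pi * i / n_freq for i in range(n_freq)]
    psd = []
    for w in freqs:
        den = 1.0 - sum(a[k] * cmath.exp(-1j * (k + 1) * w)
                        for k in range(len(a)))
        psd.append(1.0 / abs(den) ** 2)
    return freqs, psd

# AR(2) with poles at 0.95 * exp(+/- j*0.3*pi): spectral peak near 0.3*pi
r, w0 = 0.95, 0.3 * math.pi
a = [2 * r * math.cos(w0), -r * r]
freqs, psd = ar_psd(a)
w_peak = freqs[psd.index(max(psd))]
```

Because the poles are close to the unit circle, the denominator nearly vanishes at the pole angle and the estimated spectrum shows a sharp peak there, which is the peak-matching property the paper cites as a reason for choosing an AR model.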

III. RESULTS
We tested the reliability of the instantaneous PSD estimation algorithm with synthetic data generated from different models (AR, MA, ARMA, and harmonic), and with real data from physiologic pressure signals. In the following, we demonstrate its usefulness by a sample application involving PSD estimation of ICP signals from patients with traumatic brain injury (TBI).

A. Subjects and Material


This study included ICP signals from patients with significant head injuries who were admitted to the pediatric intensive care unit at Doernbecher Children's Hospital. ICP was monitored continuously using a ventricular catheter or parenchymal fiber-optic pressure transducer (Integra NeuroCare, Integra LifeSciences, Plainsboro, NJ). The ICP monitor was connected to a Philips Merlin CMS patient monitor (Philips, Best, The Netherlands) that sampled the ICP signals at 125 Hz. An HP-UX workstation automatically acquired these signals through a serial data network, and they were stored in files on CD-ROM. Patients were managed according to the standards of care in the pediatric intensive care unit at Doernbecher Children's Hospital. The data acquisition protocol was reviewed and approved by the Institutional Review Board at Oregon Health and Science University, and the requirement for informed consent was waived.


Fig. 2. Representative results of the second comparative study between a nonparametric methodology (Welch's) and the proposed PSD estimator based on the DKF. (a) ICP segment during a period of hypertension (ICP > 25 mmHg) and the reduction in mean ICP after mechanical hyperventilation (approximately 800 s). (b) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation). In the spectrogram, we can clearly see the cardiac components around 2 Hz and the respiratory component (0.1-0.55 Hz). In the respiratory component, we can note a period of spontaneous breathing (approximately 0-225 s) and the period of mechanical hyperventilation (approximately > 225 s). (c) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation) generated using the nonparametric PSD estimator with a window of 15 s. (d) Spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation) generated using the instantaneous PSD estimate based on the DKF. The time resolution is 1 sample. We can appreciate a much better frequency resolution in this case. (e) PSD plot showing the instantaneous PSD estimates (thin light lines) before and after the intervention and their average (thick lines). The average before the change is shown in grey and the average after the change is the black thick line.
B. Comparative Studies

We compared PSD estimates obtained with the proposed Kalman PSD estimation algorithm with those generated by classical nonparametric estimation techniques. For the purposes of this paper, Welch's method was used as the representative nonparametric PSD estimator.

The first comparison was aimed at determining the quality of the PSD estimates of a nearly stationary ICP signal. PSDs of locally stationary 10-s segments obtained from ICP signals were estimated with Welch's PSD estimator and with the proposed Kalman PSD estimator. Then, 2-s subsegments from these 10-s segments were selected, and both methodologies were applied to estimate the PSD corresponding to the 2-s subsegments. The objective of this study was to compare the accuracy of the PSD estimates produced by both methodologies in the 2-s subsegments as estimates of the PSDs obtained from the 10-s segments.
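The frequency-resolution penalty the nonparametric estimator pays on a 2-s subsegment can be illustrated with a short sketch. This is not the authors' code; the ICP-like synthetic signal and the use of SciPy's Welch estimator are assumptions made purely for illustration:

```python
import numpy as np
from scipy import signal

fs = 125.0                        # ICP sampling rate used in the study (Hz)
rng = np.random.default_rng(0)
t10 = np.arange(0, 10, 1 / fs)    # 10-s locally stationary segment
# Hypothetical ICP-like signal: ~2 Hz cardiac plus ~0.3 Hz respiratory component.
x10 = (np.sin(2 * np.pi * 2.0 * t10)
       + 0.5 * np.sin(2 * np.pi * 0.3 * t10)
       + 0.1 * rng.standard_normal(t10.size))
x2 = x10[:int(2 * fs)]            # 2-s subsegment of the same record

# Welch with the maximum possible window length (a single window per segment),
# as in the study, since this gives Welch's best frequency resolution.
f10, p10 = signal.welch(x10, fs=fs, nperseg=x10.size)
f2, p2 = signal.welch(x2, fs=fs, nperseg=x2.size)

df10 = f10[1] - f10[0]            # bin spacing = fs / N = 0.1 Hz for 10 s
df2 = f2[1] - f2[0]               # bin spacing = 0.5 Hz for 2 s: 5x coarser
```

The 2-s estimate cannot resolve spectral features closer than about 0.5 Hz, which is exactly the limitation this first study quantifies on real ICP segments.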
The second study was aimed at comparing the time-frequency resolution of the spectrograms generated using both methodologies. For this purpose, a set of nonstationary ICP signals was used. We selected ICP segments from patients who were undergoing a period of intracranial hypertension (nonstationary conditions) and to whom mechanical hyperventilation was applied as a first therapy to reduce elevated ICP. We compared both methodologies at the task of determining, based exclusively on the ICP spectrogram, the time instant at which the hyperventilation intervention started and the change in respiratory frequency that it produced.
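The localization problem probed by this second study can be sketched numerically. The synthetic signal below is a hypothetical stand-in for the patient data, and SciPy's spectrogram stands in for the nonparametric estimator; neither is taken from the paper:

```python
import numpy as np
from scipy import signal

fs = 125.0
t = np.arange(0, 450, 1 / fs)     # 450 s of synthetic data
# Hypothetical respiratory component that jumps from 0.25 Hz to 0.5 Hz at
# t = 225 s, mimicking the onset of mechanical hyperventilation.
f_resp = np.where(t < 225, 0.25, 0.5)
x = np.sin(2 * np.pi * np.cumsum(f_resp) / fs) + np.sin(2 * np.pi * 2.0 * t)

# 15-s window with 50% overlap: each spectrogram column summarizes 15 s of
# signal, so the intervention time is localized only to within one window.
nwin = int(15 * fs)
f, tt, S = signal.spectrogram(x, fs=fs, nperseg=nwin, noverlap=nwin // 2)

band = (f > 0.1) & (f < 0.7)      # respiratory band
col = np.argmin(np.abs(tt - 350)) # a column well after the intervention
print(f[band][np.argmax(S[band, col])])   # dominant respiratory frequency
```

Reading the dominant respiratory-band frequency column by column recovers the rate change, but only at the coarse time grid set by the window hop, which is the limitation the DKF estimator is designed to remove.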

In Fig. 1, we show representative results of the first comparative study between a nonparametric methodology (Welch's) and the proposed PSD estimator based on the DKF. For the purposes of this study, Welch's method was always used with the maximum possible window length, since this yields the best frequency resolution. Fig. 1(a) shows the 10-s ICP segment (light grey) and the 2-s subsegment used in this simulation. Fig. 1(b) shows a plot of the 2-s subsegment highlighted in Fig. 1(a). In Fig. 1(c), we show Welch's PSD (dark) and the DKF PSD (light) estimates in the 10-s segment, and in Fig. 1(d) we show the results of both methodologies on the 2-s segment. The dotted line corresponds to the Welch estimate based on the 10-s segment. Fig. 1(e) and Fig. 1(f) show Welch's PSD (dark) and the DKF PSD (light) estimates on a dB scale. In Fig. 1(f), the dotted line corresponds to the Welch estimate based on the 10-s segment. From these results, we can conclude that the PSD estimate generated by our proposed algorithm has better frequency resolution. We also note that the estimate based on the 2-s segment using the Kalman PSD algorithm has even better frequency resolution than Welch's PSD estimate based on the 10-s segment. Observing the plots on a dB scale, we can also see that the Kalman PSD estimate does not have sidelobes caused by windowing effects and is smoother.
Results from the second comparative study are shown in Fig. 2. In this figure, we show representative results of the comparison between a nonparametric PSD estimator and the proposed PSD estimator based on the DKF, which consisted of comparing the time-frequency resolution of the two methods on nonstationary signals. Fig. 2(a) shows the ICP segment during a period of hypertension (ICP > 25 mmHg) and the reduction in mean ICP after mechanical hyperventilation (approximately 800 s). In Fig. 2(b), we show the spectrogram of the ICP signal centered around the time of therapeutic intervention (hyperventilation) using a 40-s window. Examining this spectrogram, we can see the cardiac component around 2 Hz and the respiratory component in the range of 0.1-0.55 Hz. In the evolution of the respiratory component, we can note a period of spontaneous breathing (approximately 0-225 s) and a period of mechanical hyperventilation (>225 s). Fig. 2(c) shows the spectrogram of the ICP signal centered around the time of therapeutic intervention generated using nonparametric PSD estimation with a window of 15 s with 50% overlap, which enables us to locate the time instant of the therapeutic intervention with a time resolution of 15 s. In Fig. 2(d), we show the spectrogram of the ICP signal centered around the time of therapeutic intervention generated using the instantaneous PSD estimate based on the DKF. In this case, the time resolution is 1 sample, i.e., 1/fs s, where fs is the sampling frequency. Note the higher time and frequency resolution of the Kalman PSD estimate. This enables us to know the precise instant at which the therapeutic intervention occurred and to determine how much the respiratory rate was changed. The change in respiratory rate can be calculated from the plot in Fig. 2(e), which shows the instantaneous PSD estimates (thin light lines) before and after the intervention, the average before the change (grey thick line), and the average after the change (black thick line).
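Once the time-varying AR coefficients and driving-noise variance are available at a given sample (in the paper they come from the DKF; below they are simply assumed), the instantaneous PSD follows from the AR transfer function. A minimal sketch, with illustrative AR(2) coefficients standing in for the Kalman-tracked ones:

```python
import numpy as np

def ar_psd(a, sigma2, fs, nfft=512):
    """AR spectrum P(f) = sigma2 / |A(e^{j 2 pi f / fs})|^2 on [0, fs/2],
    where A(z) = 1 - a1 z^-1 - ... - ap z^-p (prediction form)."""
    A = np.fft.rfft(np.concatenate(([1.0], -np.asarray(a, dtype=float))), n=nfft)
    f = np.fft.rfftfreq(nfft, d=1 / fs)
    return f, sigma2 / np.abs(A) ** 2

# Example: AR(2) coefficients placing a resonance near 2 Hz at fs = 125 Hz,
# roughly where the cardiac component of the ICP signals appears.
fs = 125.0
f0, r = 2.0, 0.98                 # pole frequency (Hz) and pole radius
a = [2 * r * np.cos(2 * np.pi * f0 / fs), -r ** 2]
f, P = ar_psd(a, sigma2=1.0, fs=fs)
```

Evaluating this at every sample, with the coefficients updated by the tracker, yields one spectrogram column per sample, which is what gives the DKF estimate its 1-sample time resolution.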
Based only on this comparative study, we cannot claim that the DKF is universally better than all other existing PSD estimation methods on all nonstationary signals. Further studies are needed to determine in which situations the DKF PSD estimator performs better than the nonparametric techniques, and what its limitations are. As pointed out in the introduction, due to their computational efficiency, nonparametric PSD estimators are well suited for situations where we need to analyze long data records. In such cases, especially if we do not need very precise time resolution, we can increase the window length to improve the frequency resolution of the nonparametric estimates, as long as we do not violate the stationarity assumption. However, in situations where the signals are nonstationary and short, and we need good time-frequency resolution, the instantaneous DKF technique we proposed here may be useful.
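For readers who want to experiment, the coefficient-tracking half of such a scheme can be sketched with a single Kalman filter that treats the AR coefficients as a random-walk state and the past samples as the observation vector. This is a simplified stand-in for the dual filter described in the paper, and the noise variances q and r_ are assumed known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Time-varying AR(2) process: the coefficients switch halfway through,
# mimicking a nonstationary signal.
N, p = 4000, 2
a_true = np.where(np.arange(N)[:, None] < N // 2,
                  [1.6, -0.8], [0.4, -0.5])
x = np.zeros(N)
for n in range(p, N):
    x[n] = a_true[n] @ x[n - p:n][::-1] + 0.1 * rng.standard_normal()

# Weight Kalman filter: state w = AR coefficients with a random-walk model,
# observation x[n] = h @ w + noise, with h = [x[n-1], x[n-2]].
w = np.zeros(p)                 # coefficient estimate
P = np.eye(p)                   # state covariance
q, r_ = 1e-4, 0.01              # process / measurement noise (assumed known)
est = np.zeros((N, p))
for n in range(p, N):
    h = x[n - p:n][::-1]
    P = P + q * np.eye(p)                       # time update (random walk)
    k = P @ h / (h @ P @ h + r_)                # Kalman gain
    w = w + k * (x[n] - h @ w)                  # measurement update
    P = P - np.outer(k, h) @ P
    est[n] = w
print(est[N // 2 - 10], est[-1])  # estimates near each regime's coefficients
```

The random-walk state model lets the filter follow the coefficient jump without assuming piecewise stationarity, which is the property the comparative studies above exploit.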
IV. CONCLUSION

The authors described an algorithm to perform instantaneous AR modeling and spectral estimation on nonstationary signals using dual Kalman filters, and demonstrated its potential applicability and usefulness by means of two comparative studies, one on simulated signals and another involving PSD estimation in ICP signals from patients with TBI. The proposed algorithm was compared with Welch's method of PSD estimation. Similar results were obtained when the DKF PSD estimator was compared against other standard nonparametric methods such as the Blackman-Tukey method or the modified periodogram. Based on this preliminary study, we conclude that the DKF estimator is able to track changes in the PSD better than a moving-window technique, and exhibits good time-frequency resolution when compared with benchmark nonparametric PSD techniques at the task of estimating the PSD of very short, nonstationary data records. Furthermore, the proposed method does not assume a piecewise stationary model for the data.
REFERENCES

[1] M. Bartlett, "Smoothing periodograms from time series with continuous spectra," Nature (London), vol. 161, no. 8, pp. 686-687, 1948.
[2] P. Welch, "The use of fast Fourier transform for estimation of power spectra: a method based on time averaging over short modified periodograms," IEEE Trans. Audio Electroacoust., vol. 12, pp. 70-73, 1967.
[3] R. Blackman and J. Tukey, The Measurement of Power Spectra. New York: Dover, 1958.
[4] D. G. Manolakis, V. K. Ingle, and S. M. Kogon, Statistical and Adaptive Signal Processing: Spectral Estimation, Signal Modeling, Adaptive Filtering and Array Processing. New York: McGraw-Hill, 2000.
[5] J. Burg, "Maximum entropy spectral analysis," Ph.D. dissertation, Dept. Elect. Eng., Stanford Univ., Stanford, CA, 1975.
[6] S. Kay, Modern Spectral Estimation. Englewood Cliffs, NJ: Prentice-Hall, 1988.
[7] R. Kalman, "A new approach to linear filtering and prediction problems," Trans. ASME, J. Basic Eng., vol. 82, pp. 35-45, 1960.
[8] M. S. Grewal and A. P. Andrews, Kalman Filtering: Theory and Practice Using MATLAB. New York: Wiley, 2001.
[9] T. Kailath, A. H. Sayed, and B. Hassibi, Linear Estimation. Englewood Cliffs, NJ: Prentice-Hall, 2000.
[10] A. Gelb, J. Kasper, R. Nash, C. Price, and A. Sutherland, Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974.
[11] E. A. Wan and A. T. Nelson, "Neural dual extended Kalman filtering: applications in speech enhancement and monaural blind signal separation," in Proc. 1997 Neural Networks for Signal Processing VII, 1997, pp. 466-475.
[12] E. A. Wan and A. T. Nelson, "Removal of noise from speech using the dual EKF algorithm," in Proc. 1998 IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 1, 1998, pp. 381-384.
[13] A. T. Nelson and E. A. Wan, "A two-observation Kalman framework for maximum-likelihood modeling of noisy time series," in Proc. IEEE Int. Joint Conf. Neural Networks, vol. 3, 1998, pp. 2489-2494.

S-ar putea să vă placă și