
AGC

DSP
Professor A G Constantinides 1
A Prediction Problem
Problem: Given a sample set of a stationary process

$$\{x[n], x[n-1], x[n-2], \ldots, x[n-M]\}$$

predict the value of the process some time into the future as

$$\hat{x}[n+m] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M])$$

The function $f$ may be linear or non-linear. We concentrate only on linear prediction functions.
A Prediction Problem
Linear prediction dates back to Gauss in the 18th century.
It is extensively used in DSP theory and applications (spectrum analysis, speech processing, radar, sonar, seismology, mobile telephony, financial systems, etc.)
The difference between the predicted and actual value at a specific point in time is called the prediction error.
A Prediction Problem
The objective of prediction is: given the data, to select the linear function that minimises the prediction error.
The Wiener approach examined earlier may be cast into a predictive form in which the desired signal to follow is the next sample of the given process.
Forward & Backward
Prediction
If the prediction is written as

$$\hat{x}[n] = f(x[n-1], x[n-2], \ldots, x[n-M])$$

then we have a one-step forward prediction.
If the prediction is written as

$$\hat{x}[n-M] = f(x[n], x[n-1], x[n-2], \ldots, x[n-M+1])$$

then we have a one-step backward prediction.
Forward Prediction Problem
The forward prediction error is then

$$e_f[n] = x[n] - \hat{x}[n]$$

Write the prediction equation as

$$\hat{x}[n] = \sum_{k=1}^{M} w[k]\, x[n-k]$$

and, as in the Wiener case, we minimise the second-order norm of the prediction error.
Forward Prediction Problem
Thus the solution accrues from

$$J = \min_{\mathbf{w}} E\{(e_f[n])^2\} = \min_{\mathbf{w}} E\{(x[n] - \hat{x}[n])^2\}$$

Expanding we have

$$J = \min_{\mathbf{w}} \left[\, E\{(x[n])^2\} - 2E\{x[n]\hat{x}[n]\} + E\{(\hat{x}[n])^2\} \,\right]$$

Differentiating with respect to the weight vector we obtain

$$\frac{\partial J}{\partial w_i} = -2E\left\{x[n]\,\frac{\partial \hat{x}[n]}{\partial w_i}\right\} + 2E\left\{\hat{x}[n]\,\frac{\partial \hat{x}[n]}{\partial w_i}\right\}$$
Forward Prediction Problem
However

$$\frac{\partial \hat{x}[n]}{\partial w_i} = x[n-i]$$

and hence

$$\frac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\{\hat{x}[n]\,x[n-i]\}$$

or

$$\frac{\partial J}{\partial w_i} = -2E\{x[n]\,x[n-i]\} + 2E\left\{\sum_{k=1}^{M} w[k]\,x[n-k]\,x[n-i]\right\}$$
Forward Prediction Problem
On substituting with the corresponding correlation sequences we have

$$\frac{\partial J}{\partial w_i} = -2\,r_{xx}[i] + 2\sum_{k=1}^{M} w[k]\, r_{xx}[i-k]$$

Set this expression to zero for minimisation to yield

$$\sum_{k=1}^{M} w[k]\, r_{xx}[i-k] = r_{xx}[i], \quad i = 1, 2, 3, \ldots, M$$
Forward Prediction Problem
These are the Normal Equations (or Wiener-Hopf, or Yule-Walker equations) structured for the one-step forward predictor.
In this specific case it is clear that we need only know the autocorrelation properties of the given process to determine the predictor coefficients.
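As a concrete sketch, the normal equations can be set up and solved numerically. The following fragment is an illustration, not part of the original notes: it estimates the autocorrelation sequence from data, forms the Toeplitz matrix of $r_{xx}[i-k]$ values, and solves for the predictor weights. The AR(1) test signal and all names are assumptions for the example.

```python
import numpy as np

def forward_predictor(x, M):
    """Solve the normal equations sum_k w[k] r_xx[i-k] = r_xx[i],
    i = 1..M, for the one-step forward predictor weights."""
    N = len(x)
    # biased autocorrelation estimate r_xx[0..M]
    r = np.array([np.dot(x[:N - m], x[m:]) / N for m in range(M + 1)])
    # Toeplitz autocorrelation matrix, element (i, k) = r_xx[i - k]
    R = np.array([[r[abs(i - k)] for k in range(M)] for i in range(M)])
    w = np.linalg.solve(R, r[1:])
    return w, r

# Assumed test signal: AR(1) process x[n] = 0.9 x[n-1] + v[n]
rng = np.random.default_rng(0)
v = rng.standard_normal(10000)
x = np.zeros_like(v)
for n in range(1, len(v)):
    x[n] = 0.9 * x[n - 1] + v[n]

w, r = forward_predictor(x, 2)   # w[0] should be near 0.9, w[1] near 0
```

For this process the optimal order-2 predictor is $w = [0.9,\, 0]$, so the estimate recovers the AR coefficient from the data alone, as the slide states.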
Forward Prediction Filter
Set

$$a_M[m] = \begin{cases} 1, & m = 0 \\ -w[m], & m = 1, \ldots, M \\ 0, & m > M \end{cases}$$

and rewrite the earlier expression as

$$\sum_{m=0}^{M} a_M[m]\, r_{xx}[m-k] = \begin{cases} P_M, & k = 0 \\ 0, & k = 1, 2, \ldots, M \end{cases}$$

where $P_M$ is the minimum prediction-error power.
These equations are sometimes known as the augmented forward prediction normal equations.
Forward Prediction Filter
The prediction error is then given as

$$e_f[n] = \sum_{m=0}^{M} a_M[m]\, x[n-m]$$

This is an FIR filter, known as the prediction-error filter

$$A(z) = 1 + a_M[1]\,z^{-1} + a_M[2]\,z^{-2} + \cdots + a_M[M]\,z^{-M}$$
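The prediction-error filter is just an FIR convolution with the coefficients $a_M[m]$. A minimal sketch (the AR(1) example and the names below are assumptions, not from the notes): when the predictor matches the process, the error output recovers the white driving noise.

```python
import numpy as np

def prediction_error(x, w):
    """e_f[n] = sum_{m=0}^{M} a_M[m] x[n-m] with a_M[0] = 1 and
    a_M[m] = -w[m] for m >= 1 (zero initial conditions)."""
    a = np.concatenate(([1.0], -np.asarray(w)))   # prediction-error filter
    return np.convolve(x, a)[:len(x)]             # direct FIR convolution

# Assumed AR(1) example: with the matched predictor w = [0.9],
# the error e_f[n] equals the driving white noise v[n] for n >= 1.
rng = np.random.default_rng(1)
v = rng.standard_normal(5000)
x = np.zeros_like(v)
for n in range(1, len(v)):
    x[n] = 0.9 * x[n - 1] + v[n]

e = prediction_error(x, [0.9])
```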
Backward Prediction Problem
In a similar manner, for the backward prediction case we write

$$e_b[n] = x[n-M] - \hat{x}[n-M]$$

and

$$\hat{x}[n-M] = \sum_{k=1}^{M} \tilde{w}[k]\, x[n-k+1]$$

where we assume that the backward predictor filter weights are different from the forward case.
Backward Prediction Problem
Thus, on comparing the forward and backward formulations with the Wiener least squares conditions, we see that the desired signal is now

$$x[n-M]$$

Hence the normal equations for the backward case can be written as

$$\sum_{m=1}^{M} \tilde{w}[m]\, r_{xx}[m-k] = r_{xx}[M+1-k], \quad k = 1, 2, 3, \ldots, M$$
Backward Prediction Problem
This can be slightly adjusted as

$$\sum_{m=1}^{M} \tilde{w}[M+1-m]\, r_{xx}[k-m] = r_{xx}[k], \quad k = 1, 2, 3, \ldots, M$$

On comparing this equation with the corresponding forward case, it is seen that the two have the same mathematical form and

$$w[m] = \tilde{w}[M+1-m], \quad m = 1, 2, \ldots, M$$

or equivalently

$$\tilde{w}[m] = w[M+1-m], \quad m = 1, 2, \ldots, M$$
Backward Prediction Filter
I.e. the backward prediction-error filter has the same weights as the forward case, but reversed:

$$A_b(z) = a_M[M] + a_M[M-1]\,z^{-1} + a_M[M-2]\,z^{-2} + \cdots + z^{-M}$$

This is a significant result, from which many properties of efficient predictors ensue.
Observe that the ratio of the backward prediction-error filter to the forward prediction-error filter is allpass.
This yields the lattice predictor structures. More on this later.
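The allpass property is easy to check numerically: with real coefficients, reversing the coefficient order gives $A_b(z) = z^{-M}A_f(z^{-1})$, so $|A_b| = |A_f|$ on the unit circle. A short sketch, with coefficient values assumed purely for illustration:

```python
import numpy as np

# Forward prediction-error filter A_f(z) = 1 + a[1]z^-1 + ... + a[M]z^-M;
# the backward filter uses the same coefficients in reverse order.
a = np.array([1.0, -0.5, 0.2, 0.1])   # assumed example coefficients

w = np.linspace(0, np.pi, 256)
u = np.exp(-1j * w)                   # u = z^{-1} on the unit circle
Af = np.polyval(a[::-1], u)           # A_f = sum_m a[m] u^m
Ab = np.polyval(a, u)                 # A_b = sum_m a[M-m] u^m
T = Ab / Af                           # allpass ratio: |T| = 1 everywhere
```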
Levinson-Durbin
Solution of the Normal Equations
The Durbin algorithm solves the following:

$$\mathbf{R}_m \mathbf{w}_m = \mathbf{r}_m$$

where the right-hand side is a column of autocorrelation values, as in the normal equations.
Assume we have a solution for

$$\mathbf{R}_k \mathbf{w}_k = \mathbf{r}_k, \quad 1 \le k \le m$$

where

$$\mathbf{r}_k = [r_1, r_2, r_3, \ldots, r_k]^T$$
Levinson-Durbin
For the next iteration the normal equations can be written as

$$\mathbf{R}_{k+1}\,\mathbf{w}_{k+1} = \mathbf{r}_{k+1}$$

where

$$\mathbf{R}_{k+1} = \begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix}, \qquad \mathbf{r}_{k+1} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$$

and $\mathbf{J}_k$ is the $k$-order counteridentity (exchange) matrix.
Set

$$\mathbf{w}_{k+1} = \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix}$$
Levinson-Durbin
Multiply out to yield

$$\mathbf{z}_k = \mathbf{R}_k^{-1}\left(\mathbf{r}_k - \alpha_k \mathbf{J}_k \mathbf{r}_k^*\right)$$

Note that

$$\mathbf{R}_k^{-1}\mathbf{J}_k = \mathbf{J}_k \left(\mathbf{R}_k^*\right)^{-1}$$

Hence

$$\mathbf{z}_k = \mathbf{w}_k - \alpha_k \mathbf{J}_k \mathbf{w}_k^*$$

I.e. the first $k$ elements of $\mathbf{w}_{k+1}$ are adjusted versions of the previous solution.
Levinson-Durbin
The last element $\alpha_k$ follows from the second equation of

$$\begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix} \begin{bmatrix} \mathbf{z}_k \\ \alpha_k \end{bmatrix} = \begin{bmatrix} \mathbf{r}_k \\ r_{k+1} \end{bmatrix}$$

i.e.

$$\alpha_k = \frac{1}{r_0}\left(r_{k+1} - \mathbf{r}_k^T \mathbf{J}_k \mathbf{z}_k\right)$$
Levinson-Durbin

The parameters $\alpha_k$ are known as the reflection coefficients.
These are crucial from the signal processing point of view.
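A minimal real-valued implementation of the recursion described above, with the implicit equation for $\alpha_k$ solved explicitly (the example autocorrelation values are assumptions, and the result is checked against a direct solve):

```python
import numpy as np

def durbin(r, M):
    """Levinson-Durbin recursion for the order-M normal equations
    (real data).  At each step the new weight vector is
    w_{k+1} = [w_k - alpha_k * J_k w_k ; alpha_k]."""
    w = np.array([r[1] / r[0]])                    # order-1 predictor
    alphas = [w[0]]
    for k in range(1, M):
        rk = r[1:k + 1]                            # r_k = [r_1, ..., r_k]
        alpha = (r[k + 1] - rk[::-1] @ w) / (r[0] - rk @ w)
        w = np.append(w - alpha * w[::-1], alpha)  # adjust, then append alpha_k
        alphas.append(alpha)
    return w, np.array(alphas)

# Assumed example autocorrelation values, checked against a direct solve
r = np.array([1.0, 0.7, 0.4, 0.2])
M = 3
R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
w, alphas = durbin(r, M)
```

The recursion costs $O(M^2)$ operations instead of the $O(M^3)$ of a general solver, which is the practical point of exploiting the Toeplitz structure.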
Levinson-Durbin

The Levinson algorithm solves the problem

$$\mathbf{R}\mathbf{y} = \mathbf{b}$$

In the same way as for Durbin, we keep track of the solutions to the problems

$$\mathbf{R}_k \mathbf{y}_k = \mathbf{b}_k, \quad 1 \le k \le m$$
Levinson-Durbin

Thus, assuming $\mathbf{w}_k$, $\mathbf{y}_k$ to be known at the $k$-th step, we solve at the next step the problem

$$\begin{bmatrix} \mathbf{R}_k & \mathbf{J}_k \mathbf{r}_k^* \\ \mathbf{r}_k^T \mathbf{J}_k & r_0 \end{bmatrix} \mathbf{y}_{k+1} = \begin{bmatrix} \mathbf{b}_k \\ b_{k+1} \end{bmatrix}$$
Levinson-Durbin

Where

$$\mathbf{y}_{k+1} = \begin{bmatrix} \mathbf{u}_k \\ v_k \end{bmatrix}, \qquad \mathbf{u}_k = \mathbf{R}_k^{-1}\left(\mathbf{b}_k - v_k \mathbf{J}_k \mathbf{r}_k^*\right) = \mathbf{y}_k - v_k \mathbf{J}_k \mathbf{w}_k^*$$

Thus

$$v_k = \frac{b_{k+1} - \mathbf{r}_k^T \mathbf{J}_k \mathbf{y}_k}{r_0 - \mathbf{r}_k^T \mathbf{w}_k^*}$$
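A sketch of a Levinson solver for a general right-hand side in the real case, carrying the Durbin predictor solution along as the slides describe; the names and test values are assumptions, checked against a direct solve:

```python
import numpy as np

def levinson(r, b):
    """Levinson recursion for R y = b, with R the symmetric Toeplitz
    matrix built from r[0..M-1] (real data).  The Durbin predictor
    solution w_k is carried along at each step."""
    M = len(b)
    w = np.array([r[1] / r[0]])              # order-1 Durbin solution
    y = np.array([b[0] / r[0]])              # order-1 Levinson solution
    for k in range(1, M):
        rk = r[1:k + 1]
        denom = r[0] - rk @ w
        v = (b[k] - rk[::-1] @ y) / denom    # last element v_k
        y = np.append(y - v * w[::-1], v)    # first k elements adjusted
        if k < M - 1:                        # advance w for the next step
            alpha = (r[k + 1] - rk[::-1] @ w) / denom
            w = np.append(w - alpha * w[::-1], alpha)
    return y

# Assumed example: solve R y = b and compare with a direct solve
r = np.array([1.0, 0.7, 0.4])
b = np.array([1.0, 0.5, 0.25])
R = np.array([[r[abs(i - j)] for j in range(3)] for i in range(3)])
y = levinson(r, b)
```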
Lattice Predictors
Return to the lattice case. We write

$$T(z) = \frac{A_b(z)}{A_f(z)}$$

or

$$T(z) = \frac{a_M[M] + a_M[M-1]\,z^{-1} + a_M[M-2]\,z^{-2} + \cdots + z^{-M}}{1 + a_M[1]\,z^{-1} + a_M[2]\,z^{-2} + \cdots + a_M[M]\,z^{-M}}$$
Lattice Predictors
The above transfer function is allpass of order M.
It can be thought of as the reflection coefficient of a cascade of lossless transmission lines, or acoustic tubes.
In this sense it can furnish a simple algorithm for the estimation of the reflection coefficients.
We start with the observation that the transfer function can be written in terms of another allpass filter embedded in a first-order allpass structure.
Lattice Predictors
This takes the form

$$T_M(z) = \frac{\gamma_1 + z^{-1}\,T_{M-1}(z)}{1 + \gamma_1\,z^{-1}\,T_{M-1}(z)}$$

where $\gamma_1$ is to be chosen to make $T_{M-1}(z)$ of degree $(M-1)$.
From the above we have

$$z^{-1}\,T_{M-1}(z) = \frac{T_M(z) - \gamma_1}{1 - \gamma_1\,T_M(z)}$$
Lattice Predictors
And hence

$$z^{-1}\,T_{M-1}(z) = \frac{(a_M[M] - \gamma_1) + (a_M[M-1] - \gamma_1 a_M[1])\,z^{-1} + \cdots + (1 - \gamma_1 a_M[M])\,z^{-M}}{(1 - \gamma_1 a_M[M]) + (a_M[1] - \gamma_1 a_M[M-1])\,z^{-1} + \cdots + (a_M[M] - \gamma_1)\,z^{-M}}$$

where the reduced-order coefficients are

$$a_{M-1}[r] = \frac{a_M[r] - \gamma_1\,a_M[M-r]}{1 - \gamma_1\,a_M[M]}$$

Thus, for a reduction in the order, the constant term in the numerator, which is also equal to the coefficient of the highest term in the denominator, must be zero.
Lattice Predictors
This requirement yields

$$\gamma_1 = a_M[M]$$

The realisation structure is:

[Figure: first-order allpass lattice section realising $T_M(z)$ from $T_{M-1}(z)$, a delay $z^{-1}$, and the multiplier $\gamma_1$]
Lattice Predictors
There are many rearrangements that can be made of this structure, through the use of signal flow graphs.
One such rearrangement would be to reverse the direction of signal flow for the lower path.
This would yield the standard lattice structure as found in several textbooks (viz. the inverse lattice).
The lattice structure and the above development are intimately related to the Levinson-Durbin algorithm.
Lattice Predictors
The form of lattice presented is not the usual approach to the Levinson algorithm, in that we have developed the inverse filter.
Since the denominator of the allpass is also the denominator of the AR process, the procedure can be seen as an AR-coefficient-to-lattice-structure mapping.
For the lattice-to-AR-coefficient mapping we follow the opposite route, i.e. we construct the allpass and read off its denominator.
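Both mappings can be sketched directly from the order-reduction step derived earlier: `ar_to_lattice` peels off $\gamma = a_M[M]$ stage by stage, and `lattice_to_ar` inverts the deflation. The example polynomial is an assumption for the round-trip check.

```python
import numpy as np

def ar_to_lattice(a):
    """AR-to-lattice (step-down): peel off gamma = a_M[M] at each stage,
    then deflate via a_{M-1}[r] = (a_M[r] - g*a_M[M-r]) / (1 - g**2)."""
    a = np.asarray(a, dtype=float)
    gammas = []
    while len(a) > 1:
        g = a[-1]
        gammas.append(g)                            # highest-order stage first
        a = ((a - g * a[::-1]) / (1 - g * g))[:-1]  # a[0] stays 1, last drops out
    return gammas

def lattice_to_ar(gammas):
    """Lattice-to-AR (step-up): invert the deflation, lowest stage first."""
    a = np.array([1.0])
    for g in gammas:
        a = np.append(a, 0.0)
        a = a + g * a[::-1]
    return a

# Round trip on an assumed example polynomial A_2(z) = 1 - 0.9 z^-1 + 0.2 z^-2
gs = ar_to_lattice([1.0, -0.9, 0.2])
a2 = lattice_to_ar(gs[::-1])      # rebuild from the lowest stage upwards
```

For this polynomial the two stages give reflection coefficients 0.2 and -0.75, and the step-up reconstruction returns the original coefficients.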
PSD Estimation
It is evident that if the prediction error is white, then the input PSD multiplied by the squared magnitude response of the prediction-error filter yields a constant.
Therefore the input PSD is determined.
Moreover, the inverse prediction-error filter gives us a means to generate the process as the output from the filter when the input is white noise.
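Numerically, the model PSD follows directly: if the error is white with power $P$, then $S_x(\omega) = P / |A_M(e^{j\omega})|^2$. A sketch with an assumed AR(1) polynomial:

```python
import numpy as np

# If the prediction error is white with power P, then
# S_x(w) |A_M(e^{jw})|^2 = P, so S_x(w) = P / |A_M(e^{jw})|^2,
# and 1/A_M(z) driven by white noise regenerates the process.
a = np.array([1.0, -0.9])          # assumed A_1(z) for an AR(1) process
P = 1.0                            # assumed white-error power
w = np.linspace(0, np.pi, 128)
A = np.polyval(a[::-1], np.exp(-1j * w))   # A(e^{jw}) = sum_m a[m] e^{-jwm}
S = P / np.abs(A) ** 2             # model-based (AR) PSD estimate
```

At $\omega = 0$ this gives $1/(1-0.9)^2 = 100$ and at $\omega = \pi$ it gives $1/1.9^2$, the familiar low-pass AR(1) spectrum.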
