
“State Estimation Using the Kalman Filter”
Thomas F. Edgar
Department of Chemical Engineering
University of Texas
Austin, TX 78712

Thomas F. Edgar (UT-Austin) Kalman Filter Virtual Control Book 12/06 1


Outline
• Introduction
• Basic Statistics for Linear Dynamic Systems
• State Estimation
• Kalman Filter – Algorithm and Properties



Control with Limited/Noisy Measurements

(1) Some variables may not be measurable in real time.

(2) Noise in the instruments and in the process may give erroneous data for control purposes.

Solution: Use Kalman Filter



Random Variables

Examples: turbulent flow; a temperature sensor in a boiling liquid.

Mean (expected) value of a random variable x:

x̄ = E[x] ≜ ∫₋∞^∞ x p(x) dx

where p(x) is the probability density function and E[·] is the expected-value operator.
Definition of Variance and Covariance

var(x) ≜ E[(x − x̄)²] = ∫₋∞^∞ (x − x̄)² p(x) dx

[var(x)]^(1/2) = σ  (standard deviation)

For two random variables, x and y, the degree of their dependence is indicated by the covariance:

cov(x, y) ≜ E[(x − x̄)(y − ȳ)]
cov(x, y) = E[xy] − x̄ȳ
cov(x, x) = var(x) = E[x²] − (x̄)²
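These definitions are easy to check numerically. A minimal NumPy sketch (the signals below are made-up illustrative data, not from the slides):

```python
import numpy as np

# Hypothetical noisy signals: x is a temperature-like reading, y is a
# correlated signal derived from it (purely illustrative data).
rng = np.random.default_rng(0)
x = 25.0 + rng.normal(0.0, 1.0, 1000)        # mean ~25, sigma ~1
y = 2.0 * x + rng.normal(0.0, 0.5, 1000)

x_bar = x.mean()                              # sample mean, E[x]
var_x = ((x - x_bar) ** 2).mean()             # var(x) = E[(x - x_bar)^2]
cov_xy = ((x - x_bar) * (y - y.mean())).mean()

# cov(x, y) = E[xy] - x_bar*y_bar, the identity above
assert np.isclose(cov_xy, (x * y).mean() - x_bar * y.mean())
```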



Correlation Coefficient
ρ(x, y) ≜ cov(x, y) / [var(x) var(y)]^(1/2)

−1 ≤ ρ ≤ +1;  ρ → 1: highly correlated;  ρ → 0: uncorrelated

Extension to the multivariable case, the covariance matrix:

Pᵢⱼ = cov(xᵢ, xⱼ)

P ≜ E[(x − x̄)(x − x̄)ᵀ]

P = Pᵀ,  P ≥ 0,  pᵢᵢ = var(xᵢ)



State Estimation
Objective: using (filtered) data, reconstruct values for unmeasured state variables.

Definitions:  mean  x̄ = (1/N) Σ xᵢ
              variance  σ² = (1/N) Σ (xᵢ − x̄)²

If σ² is large there is lots of scatter, and a single data point is unreliable.

Example: two measurements of equal reliability, x_p and x_r:

x_f = (x_p + x_r) / 2

More generally, for data point “p” and data point “r” (actually data vectors):

e_p = x_p − x
e_r = x_r − x
State Estimation (cont’d)
Error vectors → error covariance matrices:

P = E[e_p e_pᵀ]   (elements Pᵢⱼ = E[e_pi e_pj]);   similarly, R = E[e_r e_rᵀ]

Combine x_p and x_r to find x_f:

x_f = (I − F)x_p + F x_r = x_p + F(x_r − x_p)

e_f = (I − F)e_p + F e_r      F: weighting matrix

The covariance matrix of e_f is

H = P − FP − PFᵀ + F(P + R)Fᵀ

(no correlation between e_r and e_p)

State Estimation (cont’d)

Select F so that Σᵢ E[e_fi²] is a minimum (least-squares estimate, or minimum-variance estimate):

F_opt = P(P + R)⁻¹

Scalar example:

(a) P = q², R = q²  →  F = 1/2

(b) P ≫ R  →  F = 1  (select x_r)
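The blending rule and its scalar special case can be sketched in NumPy (all matrix and vector values here are illustrative assumptions):

```python
import numpy as np

# Optimal blending F = P (P + R)^{-1} of two estimates x_p and x_r
# whose error covariances are P and R (values chosen for illustration).
P = np.diag([4.0, 1.0])       # covariance of prediction error e_p
R = np.diag([1.0, 1.0])       # covariance of measurement error e_r
x_p = np.array([1.0, 2.0])
x_r = np.array([1.5, 2.5])

F = P @ np.linalg.inv(P + R)
x_f = x_p + F @ (x_r - x_p)   # minimum-variance combination

# Scalar check from case (a): P = R = q^2 gives F = 1/2
q2 = 3.0
assert np.isclose(q2 / (q2 + q2), 0.5)
```

In the second component P equals R, so F = 1/2 and the two estimates are averaged, exactly as in the equal-reliability example.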



State Estimation (cont’d)

For dynamic systems, we have two sources of information:

(x_p) (1) the state equation (and previous state estimates)

(x_r) (2) new measurements at time step k (but # measurements < # states)

x(k) = A(k−1)x(k−1) + G(k−1)w(k−1)

linear dynamic system; w = process noise; x: state vector (n×1)



State Estimation (cont’d)
y(k) = Cx(k) + v(k)

y: measured variables (ℓ×1, ℓ ≤ n);  v(k): instrument noise

We wish to update x̂(k) (the estimate of the states) from inaccurate or unknown initial conditions on x(k) and measurements corrupted by noise.

The Kalman filter equation:

x̂(k) = A(k−1)x̂(k−1) + K(k)[y(k) − C A(k−1)x̂(k−1)]

The bracketed term is y − ŷ, the difference between the measurement and its estimate ŷ = C A(k−1)x̂(k−1).

N.B. if we can’t measure xᵢ(k), then xᵢ(0) is unknown.
State Estimation (cont’d)
Define covariance matrices:

Q (n×n): covariance of w(k), the process-noise vector (white)
R (ℓ×ℓ): covariance of v(k), the instrument-noise vector (white)

Q, R are usually diagonal matrices that can be “tuned” (variances needed).

w, v are uncorrelated white noises (characteristics defined by mean and variance);
Markov sequence: p(w(k+1) | w(k)) = p(w(k+1))

Define the state error covariance

P = E[(x_p − x)(x_p − x)ᵀ]      x: true value

and the state estimate covariance

H = E[(x̂ − x)(x̂ − x)ᵀ]
State Estimation (cont’d)

Given x̂(k−1), an unbiased (minimum-variance) estimate is

x_p(k) = A(k−1)x̂(k−1)

Covariance matrix:

P(k) = E[(x_p − x(k))(x_p − x(k))ᵀ]
     = E{[A(k−1)(x̂(k−1) − x(k−1)) − G(k−1)w(k−1)] [A(k−1)(x̂(k−1) − x(k−1)) − G(k−1)w(k−1)]ᵀ}
     = A(k−1)H(k−1)A(k−1)ᵀ + G(k−1)Q(k−1)G(k−1)ᵀ

State Estimation (cont’d)
Adjust the measurement equation (to obtain a square system):

y1(k) = C1x(k) + v1(k)      (n×1 = (n×n)(n×1) + n×1, vs. y = Cx + v with ℓ×1 = (ℓ×n)(n×1) + ℓ×1)

C1 = [ C ]      D: dummy matrix so that C1 is square
     [ D ]

cov(v1) = R1 = [ R   0  ]      γ → ∞  (bad or missing information
               [ 0   γI ]       on n − ℓ components of x)



State Estimation (cont’d)
Inverting,

x(k) = C1⁻¹[y1(k) − v1(k)]

estimate:  x_r(k) = C1⁻¹y1(k)

cov[x_r − x(k)] = C1⁻¹R1(k)(C1⁻¹)ᵀ

Example: C = [1 0]; select D = [0 1] to make C1 invertible.

R = 1,   R1 = [ 1  0 ]      N large
              [ 0  N ]

R1⁻¹ = [ 1   0   ]      note N⁻¹ → 0
       [ 0   N⁻¹ ]

potential for roundoff error


State Estimation (cont’d)
Combine x_r and x_p to minimize the variance:

x̂(k) = x_p(k) + F(k)[x_r(k) − x_p(k)]

(F_opt = P(P + R)⁻¹ from the earlier analysis)

x̂(k) = A(k−1)x̂(k−1) + F(k)[C1⁻¹y1(k) − A(k−1)x̂(k−1)]

Actually,

F_opt(k) = P(k)[P(k) + C1⁻¹R1(k)(C1⁻¹)ᵀ]⁻¹
         = P(k)[C1ᵀR1⁻¹(k)C1P(k) + I]⁻¹C1ᵀR1⁻¹(k)C1



State Estimation (cont’d)
Examine

C1ᵀR1⁻¹C1 = [Cᵀ Dᵀ] [ R⁻¹   0   ] [ C ] = CᵀR⁻¹C      (as γ → ∞)
                    [ 0    γ⁻¹I ] [ D ]

The covariance of the composite estimate becomes

H(k) = [P(k)CᵀR⁻¹(k)C + I]⁻¹P(k)      (n×n matrix inverse)

By rearrangement and a matrix identity (reducing the size of the inverse to be computed to ℓ×ℓ):

H(k) = P(k) − P(k)Cᵀ[CP(k)Cᵀ + R]⁻¹CP(k)
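The equivalence of the two forms of H(k) can be verified numerically; a sketch with arbitrary illustrative matrices (n = 3, ℓ = 1, values chosen at random):

```python
import numpy as np

# Check that the n x n inverse form
#   H = (P C^T R^{-1} C + I)^{-1} P
# equals the form needing only an l x l inverse
#   H = P - P C^T (C P C^T + R)^{-1} C P
n, l = 3, 1
rng = np.random.default_rng(1)
A = rng.normal(size=(n, n))
P = A @ A.T + n * np.eye(n)          # symmetric positive definite
C = rng.normal(size=(l, n))
R = np.array([[2.0]])

H1 = np.linalg.inv(P @ C.T @ np.linalg.inv(R) @ C + np.eye(n)) @ P
H2 = P - P @ C.T @ np.linalg.inv(C @ P @ C.T + R) @ C @ P
assert np.allclose(H1, H2)
```

The second form is preferred in the algorithm below because, when ℓ ≪ n, inverting an ℓ×ℓ matrix is much cheaper than an n×n one.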
Kalman Filter Algorithm
1. Assume x̂(0), H(0); select Q, R.

2. P(1) = A(0)H(0)Aᵀ(0) + G(0)QGᵀ(0)

3. K(1) = P(1)Cᵀ[CP(1)Cᵀ + R]⁻¹      (ℓ×ℓ inverse)

4. x̂(1) = A(0)x̂(0) + K(1)[y(1) − CA(0)x̂(0)]

Update H(1) = P(1) − K(1)CP(1), then return to step 2 to generate P(2), K(2), …

N.B. if A, G are not functions of k, P(k) and K(k) can be generated ahead of time.
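The steps above can be sketched as a short NumPy routine; the scalar random-walk model at the end is an illustrative assumption, not an example from the slides:

```python
import numpy as np

def kalman_filter(A, G, C, Q, R, x_hat0, H0, ys):
    """Run steps 2-4 of the algorithm above for each measurement in ys.

    A, G, C are held constant here, so (per the N.B.) P(k) and K(k)
    could equally well be precomputed offline.
    """
    x_hat, H = x_hat0, H0
    estimates = []
    for y in ys:
        P = A @ H @ A.T + G @ Q @ G.T                     # step 2
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)      # step 3 (l x l inverse)
        x_hat = A @ x_hat + K @ (y - C @ A @ x_hat)       # step 4
        H = P - K @ C @ P                                  # covariance update
        estimates.append(x_hat)
    return np.array(estimates), H

# Tiny example: a scalar random walk observed in noise.
A = np.array([[1.0]]); G = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
ys = [np.array([1.0]), np.array([1.2]), np.array([0.9])]
xs, H = kalman_filter(A, G, C, Q, R, np.array([0.0]), np.array([[1.0]]), ys)
```

As the slides note for the scalar case, each update shrinks the estimate covariance H below its prior value.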


Kalman Filter Extensions

With a known input u, the state equation becomes

x(k) = A(k−1)x(k−1) + B(k−1)u(k−1) + G(k−1)w(k−1)

and the Kalman filter becomes

x̂(k) = A(k−1)x̂(k−1) + B(k−1)u(k−1) + K(k){y(k) − C[A(k−1)x̂(k−1) + B(k−1)u(k−1)]}

Recursive update for P:

P(k) = P(k−1) − P(k−1)Cᵀ[CP(k−1)Cᵀ + R]⁻¹CP(k−1)
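One step of the filter with a known input can be sketched as follows; the position/velocity model and all numerical values are illustrative assumptions:

```python
import numpy as np

# With a known input u(k-1), the prediction A x_hat + B u replaces
# A x_hat in both the propagation and the innovation term.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # e.g. position/velocity model
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])               # only position is measured
Q = 0.01 * np.eye(2)
R = np.array([[0.5]])
G = np.eye(2)

x_hat = np.zeros(2)
H = np.eye(2)
u = np.array([1.0])                      # known control input at k-1
y = np.array([0.2])                      # measurement at k

P = A @ H @ A.T + G @ Q @ G.T            # propagate covariance
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
x_pred = A @ x_hat + B @ u               # input enters the prediction
x_hat = x_pred + K @ (y - C @ x_pred)    # input also enters the innovation
```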



Properties of Kalman Filter
(a) It provides an unbiased estimate:

E[x̃(t)] = 0  ∀ t > 0

(b) P(t) is the covariance matrix of x̃:

P = E[x̃ x̃ᵀ]

P(t) is found by solution of a Riccati equation and does not depend on y(t), so it can be calculated a priori.

Note that J(t) = Σᵢ₌₁ⁿ E[x̃ᵢ²(t)]  (total variance) = Σᵢ₌₁ⁿ Pᵢᵢ(t) = tr(P)



Properties of Kalman Filter (cont’d)
(c) The Kalman filter is a linear, minimum-variance estimator: a linear o.d.e. relates x̂ to y(t). For non-white (colored) noise, the optimal estimator is not necessarily linear.

(d) For long times (t → ∞):

K(t) → K,  P(t) → P  (steady-state Riccati equation)

so the gain matrix does not have to be updated.

(e) Note the similarity to the LQP (the Kalman filter uses an initial condition).
Properties of Kalman Filter (cont’d)

(f) Extension of the Kalman filter to nonlinear systems involves successive linearization of the state equation.

(g) Q, R are difficult to estimate a priori, but can be used as design parameters (their relative values are what matter).

(h) Large Q ⇒ large K (implies the process noise is large);

large R (small Q) ⇒ small K (implies the measurement noise is large).

