a Department of Chemical and Biological Engineering, University of Wisconsin, Madison, United States
b Department of Chemical and Biomolecular Engineering, Ohio State University, United States
Received 6 March 2006; received in revised form 18 May 2006; accepted 22 May 2006
Available online 21 July 2006
Abstract
This paper provides an overview of currently available methods for state estimation of linear, constrained and nonlinear systems. The following
methods are discussed: Kalman filtering, extended Kalman filtering, unscented Kalman filtering, particle filtering, and moving horizon estimation.
The current research literature on particle filtering and moving horizon estimation is reviewed, and the advantages and disadvantages of these
methods are presented. Topics for new research are suggested that address combining the best features of moving horizon estimation and particle
filters.
© 2006 Elsevier Ltd. All rights reserved.
Keywords: State estimation; Particle filtering; Moving horizon estimation
1. Introduction
For the purposes of this paper we consider the following discrete time dynamic system

x(k + 1) = F(x(k), u(k)) + G(x(k), u(k)) w(k)    (1a)

y(k) = h(x(k)) + v(k)    (1b)
Corresponding author.
E-mail address: rawlings@engr.wisc.edu (J.B. Rawlings).
0098-1354/$ - see front matter © 2006 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compchemeng.2006.05.031
in which
x(k) is the state of the system at time t(k). The initial value,
x(0), is a random variable with a given density;
u(k) is the system input at time t(k) (assumed to be a zero-order hold
over the time interval [t(k), t(k + 1)]);
w(k) and v(k) are sequences of independent random variables, called process and measurement noises, respectively,
with time-invariant densities;
F(x(k), u(k)) is a (possibly) nonlinear system model. F may be
the solution to a first principles, differential equation model;
G(x(k), u(k)) is a full column rank matrix (this condition
is required for uniqueness of the conditional density to be
defined later);
y(k) is the system measurement or observation at time t(k);
h is a (possibly) nonlinear function of x(k).
The state estimation problem is to determine an estimate of
the state x(T) given the chosen model structure and a sequence of
noisy observations (measurements) of the system, Y(T) := {y(0),
. . ., y(T)}. As might be expected from such a fundamental
J.B. Rawlings, B.R. Bakshi / Computers and Chemical Engineering 30 (2006) 1529–1541
problem statement, state estimation has found diverse application in science and engineering over many years. In the stochastic
setting chosen here, the conditional density of the state given the
measurements, px|Y (x(T)|Y(T)) is the natural statistical distribution of interest, and the state estimation problem is essentially
solved if we can find this distribution. The complete conditional density is difficult to calculate exactly, however, except
for well-known simple systems, such as when F and G are
linear, and w and v are normally distributed. In this case the
conditional density is also Gaussian with mean and covariance
provided by the well-known Kalman filter. When F and G are
nonlinear, however, the conditional density is not Gaussian, and
obtaining a complete solution is generally impractical. Moreover, when state estimation is used as part of a feedback control
system, the state estimator must meet other requirements. The
estimate must be found during the available sample time of the
system as each measurement becomes available. The on-line
requirements provide further limitations on what is achievable
in state estimation. In this review, we consider many of the methods for solving this problem, including Kalman filtering (KF),
extended Kalman filtering (EKF), unscented Kalman filtering
(UKF), particle filtering (PF), and moving horizon estimation
(MHE).
Although the Kalman filter is the optimal state estimator for
unconstrained, linear systems subject to normally distributed
state and measurement noise, many physical systems exhibit
nonlinear dynamics and have states subject to hard constraints,
such as nonnegative concentrations or pressures. Hence Kalman
filtering is no longer directly applicable. As a result, many different types of nonlinear state estimators have been proposed;
Daum (2005) provides a highly readable and tutorial summary of
many of these methods, and Soroush (1998) provides a review
with a focus on applications in process control. We focus our
attention on techniques that formulate state estimation in a probabilistic setting, that is, both the model and the measurement
are potentially subject to random disturbances. Such techniques
include the extended Kalman filter, moving horizon estimation,
Bayesian estimation, and Gaussian sum approximations. In this
probabilistic setting, state estimators attempt to reconstruct the
conditional density px|Y (x(T)|Y(T)). In many applications, however, the
entire density is not required; rather, a single point estimate of
the state is of most interest. One question that arises, then, is
which point estimate is most appropriate for this use. Two obvious choices for the point estimate are the mean and the mode of
the conditional density. For asymmetric distributions, Fig. 1(a)
demonstrates that these estimates are generally different. Additionally, if this distribution is multi-modal as is Fig. 1(b), then
the mean may place the state estimate in a region of low probability. Clearly, the mode is a more desirable estimate in such
cases.
For nonlinear systems, the conditional density is generally
asymmetric and potentially multi-modal. Such systems are not
pathological cases. On the contrary, in this paper we include a
multi-modal example in Section 4 that requires only a single,
isothermal chemical reaction with second-order kinetics. This
example is based on the work in (Haseltine & Rawlings, 2005),
which derives some simple conditions that lead to the formation
Fig. 1. Comparison of mean and mode as candidate point estimates for (a)
asymmetric and (b) multi-modal densities.
of multiple modes in the conditional density for systems tending to a steady state. Bakshi and coworkers show examples with
simple continuous stirred-tank reactor (CSTR) models of chemical reactions that produce multi-modal conditional densities
(Chen, Bakshi, Goel, & Ungarala, 2004). Alspach and Sorenson
(1972) and references contained within, Gordon, Salmond, and
Smith (1993), and Chaves and Sontag (2002) have proposed
other examples in which multiple modes arise in the conditional
density. Gaussian sum approximations (Alspach & Sorenson,
1972) offer one method for addressing the formation of multiple modes in the conditional density for unconstrained systems. Current Bayesian estimation methods (Blviken, Acklam,
Christopherson, & Strdal, 2001; Chen, Ungarala, Bakshi, &
Goel, 2001; Gordon et al., 1993; Spall, 2003) offer another
means for addressing multiple modes, but these methods propose estimation of the mean rather than the mode. Gordon et
al. (1993) suggest using continuous density estimation techniques to estimate the mode of the conditional density. Silverman
(1986) demonstrates via a numerical example that the number
of samples required to reconstruct a point estimate within a
given relative error increases exponentially with the dimensionality of the state, so we expect continuous density estimation
may be applicable only to systems with low-dimensional state
vectors.
The basic formulation of a Bayesian solution to estimation in
nonlinear dynamic systems has existed for at least four decades
(Ho & Lee, 1964). The use of sequential Monte Carlo (SMC)
or particle filtering methods for solving this task can be traced
back to the late sixties (Handschin & Mayne, 1969). Until
recently, however, this formulation was not practical due to the
computational challenges posed by multi-dimensional Bayesian
integration and the need for on-line or sequential processing.
The challenge of solving Bayesian integration problems is
x̂⁻(k + 1) = A x̂(k) + B u(k)    (2)

P⁻(k + 1) = A P(k) Aᵀ + G Q Gᵀ    (3)

x̂⁻(0) = x̄0,    (4)

P⁻(0) = Q0    (5)

x̂(k) = x̂⁻(k) + L(k)(y(k) − C x̂⁻(k))    (6)

P(k) = P⁻(k) − L(k) C P⁻(k),   L(k) = P⁻(k) Cᵀ (C P⁻(k) Cᵀ + R)⁻¹    (7)
in which L(k) is the filter gain. For the linear case, every density in
sight is normal, the mean is equal to the mode for every density,
and the issue of which statistical property to use for the point
estimate does not arise. Example IV-A illustrates these results.
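The linear filtering recursion of Eqs. (2)–(7) is short enough to state directly in code. The sketch below is a minimal NumPy implementation of one measurement update followed by one prediction; the function and variable names are ours, not from the paper:

```python
import numpy as np

def kalman_step(xhat_prior, P_prior, y, u, A, B, C, G, Q, R):
    """One Kalman filter measurement update followed by one prediction."""
    # Filter gain: L(k) = P-(k) C' (C P-(k) C' + R)^-1
    L = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
    # Measurement update of the conditional mean and covariance
    xhat = xhat_prior + L @ (y - C @ xhat_prior)
    P = P_prior - L @ C @ P_prior
    # Prediction to the next sample time
    xhat_prior_next = A @ xhat + B @ u
    P_prior_next = A @ P @ A.T + G @ Q @ G.T
    return xhat, P, xhat_prior_next, P_prior_next
```

For a linear model with normal noise, this recursion propagates the exact conditional density, since that density remains Gaussian at every sample time.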
x̂⁻(k + 1) = F(x̂(k), u(k))

P⁻(k + 1) = A(k) P(k) A(k)ᵀ + G(k) Q G(k)ᵀ

x̂⁻(0) = x̄0,   P⁻(0) = Q0

L(k) = P⁻(k) C(k)ᵀ (C(k) P⁻(k) C(k)ᵀ + R)⁻¹

x̂(k) = x̂⁻(k) + L(k)(y(k) − h(x̂⁻(k)))

P(k) = P⁻(k) − L(k) C(k) P⁻(k)

in which

A(k) = ∂F(x, u)/∂x,   C(k) = ∂h(x)/∂x

and all partial derivatives are evaluated at x̂(k) and u(k), and
G(k) = G(x̂(k), u(k)). The densities of w, v and x0 are assumed
to be normal. Many variations on the same theme have been
proposed such as the iterated EKF and the second-order EKF
(Gelb, 1974, pp. 32, 190–192). Of the nonlinear filtering methods, the EKF method has received the most attention due to
its relative simplicity and demonstrated effectiveness in handling some nonlinear systems. Examples of implementations
include estimation for the production of silicon/germanium alloy
films (Middlebrooks, 2001), polymerization reactions (Prasad,
Schley, Russo, & Bequette, 2002), and fermentation processes
(Gudi, Shah, & Gray, 1994). However, the EKF is at best an
ad hoc solution to a difficult problem, and hence there exist
many pitfalls to the practical implementation of EKFs (see, for
example, Wilson, Agarwal, & Rippin, 1998). These problems
include the inability to accurately incorporate physical state
constraints and the naive use of linearization of the nonlinear
model.
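As a concrete illustration of the recursion above, here is a minimal EKF step. The finite-difference Jacobians stand in for the analytical derivatives A(k) and C(k), and all names are ours; a production implementation would normally use exact derivatives:

```python
import numpy as np

def ekf_step(F, h, xhat_prior, P_prior, y, u, G, Q, R, eps=1e-6):
    """One EKF measurement update and prediction, linearizing at the estimate."""
    def jac(f, x):
        # Forward-difference Jacobian; only a stand-in for analytical derivatives
        fx = np.atleast_1d(f(x))
        J = np.zeros((fx.size, x.size))
        for i in range(x.size):
            dx = np.zeros_like(x); dx[i] = eps
            J[:, i] = (np.atleast_1d(f(x + dx)) - fx) / eps
        return J
    C = jac(h, xhat_prior)                    # C(k) = dh/dx at the prior estimate
    L = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
    xhat = xhat_prior + L @ (np.atleast_1d(y) - np.atleast_1d(h(xhat_prior)))
    P = P_prior - L @ C @ P_prior
    A = jac(lambda x: F(x, u), xhat)          # A(k) = dF/dx at the updated estimate
    Gk = G(xhat, u)
    return F(xhat, u), A @ P @ A.T + Gk @ Q @ Gk.T, xhat, P
```

Note that the covariance recursion sees only the linearized model, which is exactly the naive linearization criticized above: if the true conditional density is strongly non-Gaussian, the (x̂, P) pair may badly misrepresent it.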
Until recently, few properties regarding the stability and convergence of the EKF had been proven. Recent publications
present bounded estimation error and exponential convergence
arguments for the continuous and discrete EKF forms given
detectability, small initial estimation error, small noise terms,
and no model error (Reif, Günther, Yaz, & Unbehauen, 1999,
2000; Reif & Unbehauen, 1999).
However, depending on the system, the bounds on initial estimation error and noise terms may be unrealistic. Also, initial
estimation error may result in bounded estimate error but not
exponential convergence, as illustrated by Chaves and Sontag
(2002).
Julier and Uhlmann (2004) summarize the status of the EKF
as follows:
The extended Kalman filter is probably the most widely used
estimation algorithm for nonlinear systems. However, more
than 35 years of experience in the estimation community has
shown that it is difficult to implement, difficult to tune, and only
reliable for systems that are almost linear on the time scale of
the updates.
We seem to be making a transition from a previous era in
which new approaches to nonlinear filtering were criticized as
overly complex because the EKF works, to a new era in which
researchers are demonstrating ever simpler examples in which
the EKF fails completely. The unscented Kalman filter is one of
the methods developed specifically to overcome the problems
caused by the naive linearization used in the EKF.
The unscented Kalman filter propagates a set of deterministically chosen sigma points through the nonlinear model and reconstructs the statistics required by the filter from weighted sums over all i of the transformed points, for example Σi wi (ŷi − ȳ)(ŷi − ȳ)ᵀ for the covariance of the predicted measurement. These sample statistics then replace the linearization-based terms, such as CP⁻ and (R + CP⁻Cᵀ)⁻¹, in the standard gain and covariance update formulas, with the gain playing the role of L = E((x − x̂)(y − ŷ)ᵀ|Y) E((y − ŷ)(y − ŷ)ᵀ|Y)⁻¹.
1 Note that this idea is fundamentally different from the idea of particle filtering,
which is discussed subsequently. The sigma points are chosen deterministically,
for example as points on a selected covariance contour ellipse or a simplex. The
particle filtering points are chosen by random sampling.
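The deterministic sigma-point construction the footnote describes can be sketched as follows. This is the basic symmetric unscented transform; the scaling parameter κ and the function names are our choices, and UKF variants differ in exactly these details:

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate a mean/covariance pair through f via 2n+1 symmetric sigma points."""
    n = mean.size
    # Columns of the matrix square root define the sigma-point spread
    S = np.linalg.cholesky((n + kappa) * cov)
    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Transform each sigma point and recombine by weighted sums
    ys = np.array([np.atleast_1d(f(s)) for s in sigma])
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear f this transform reproduces the exact propagated mean and covariance; its appeal is that for nonlinear f it captures second-order effects without computing any Jacobians.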
pX|Y (X(T)|Y(T)) = pY|X (Y(T)|X(T)) pX (X(T)) / pY (Y(T))    (8)

in which

pY|X (Y(T)|X(T)) = ∏_{j=0}^{T} py|x (y(j)|x(j)),

pX (X(T)) = px(0) (x(0)) ∏_{j=1}^{T} px|x (x(j)|x(j − 1))

We also have

py|x (y(j)|x(j)) = pv (y(j) − h(x(j)))

px|x (x(j + 1)|x(j)) = pw (x(j + 1) − F(x(j), u(j)))
Substituting these results into Eq. (8), and noting that
pY (Y(T)) does not depend on the decision variables X(T), the
maximization problem may be stated as

max_{X(T)} px(0) (x(0)) ∏_{j=0}^{T−1} pw (w(j)) ∏_{j=0}^{T} pv (y(j) − h(x(j)))

or, equivalently, in terms of the negative log densities Lx(0) = −ln px(0), Lw = −ln pw and Lv = −ln pv, as the minimization

min_{X(T)} Lx(0) (x(0)) + Σ_{j=0}^{T−1} Lw (w(j)) + Σ_{j=0}^{T} Lv (y(j) − h(x(j)))    (9)
Restricting attention to the most recent N measurements, and summarizing the influence of the earlier data through an arrival cost VT−N, gives the moving horizon approximation

min_{X(T−N:T)} VT−N (x(T − N)) + Σ_{j=T−N}^{T−1} Lw (w(j)) + Σ_{j=T−N}^{T} Lv (y(j) − h(x(j)))    (10)
approaches stem from the property that, in a deterministic setting (no state or measurement noise), MHE is an asymptotically
stable observer as long as the arrival cost is underbounded.
One simple way of estimating the arrival cost, therefore, is to
implement a uniform prior. Computationally, a uniform prior
corresponds to not penalizing deviations of the initial state from
the prior estimate.
For nonlinear systems, Tenny and Rawlings (2002) estimate
the arrival cost by approximating the constrained, nonlinear
system as an unconstrained, linear time-varying system and
applying the corresponding filtering and smoothing schemes.
They conclude that the smoothing scheme is superior to the
filtering scheme because the filtering scheme induces oscillations in the state estimates due to unnecessary propagation of
initial error. The assumption here is that the conditional density is well approximated by a multivariate normal. The problem with this assumption, of course, is that nonlinear systems
may exhibit a multi-modal conditional density. Haseltine and
Rawlings (2002) demonstrate that approximating the arrival cost
with the smoothing scheme in the presence of multiple local
optima may skew all future estimates. If global optimization is
implementable in real time, approximating the arrival cost with
a uniform prior and making the estimation horizon reasonably
long is preferable to an approximate multivariate normal arrival
cost.
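To make the structure of Eq. (10) concrete, the sketch below solves the horizon problem for the special case of a linear model with G = I, no input, Gaussian noise, and a uniform prior (no arrival-cost term), in which case Eq. (10) reduces to a least-squares problem. The function name, the square-root weighting, and the stacked construction are our choices, not the authors' code:

```python
import numpy as np

def linear_mhe(ys, A, C, Qinv_sqrt, Rinv_sqrt, N):
    """Minimize sum |w(j)|^2_Qinv + sum |y(j) - C x(j)|^2_Rinv over the last
    N+1 measurements with a uniform prior on x(T-N) (no arrival cost).
    Decision variables: z = [x(T-N), w(T-N), ..., w(T-1)]."""
    ys = ys[-(N + 1):]
    n = A.shape[0]
    nz = n * (N + 1)                      # x(T-N) plus N process noises
    # Phi[j] maps z to x(T-N+j) through the model x(j+1) = A x(j) + w(j)
    Phi = np.zeros((N + 1, n, nz))
    Phi[0][:, :n] = np.eye(n)
    for j in range(N):
        Phi[j + 1] = A @ Phi[j]
        Phi[j + 1][:, n * (j + 1):n * (j + 2)] += np.eye(n)
    rows, rhs = [], []
    for j in range(N + 1):                # measurement residuals
        rows.append(Rinv_sqrt @ C @ Phi[j])
        rhs.append(Rinv_sqrt @ ys[j])
    for j in range(N):                    # process-noise penalties
        Wsel = np.zeros((n, nz))
        Wsel[:, n * (j + 1):n * (j + 2)] = np.eye(n)
        rows.append(Qinv_sqrt @ Wsel)
        rhs.append(np.zeros(n))
    z, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    # Reconstruct the smoothed state trajectory over the horizon
    return np.array([Phi[j] @ z for j in range(N + 1)])
```

In the constrained or nonlinear settings the review discusses, the same objective is handed to a (possibly global) NLP solver instead, and the quality of the arrival-cost approximation decides how much information from the discarded data survives.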
3.5. Particle filtering
Unlike most other nonlinear filtering methods, including
those described earlier, particle filtering does not assume a fixed
shape of any density, but approximates the densities of interest
via samples or particles

p̂(x(t)) = Σ_{i=1}^{np} qi(t) δ(x(t) − xi(t))    (13)

in which np is the number of particles or samples in the approximation, xi is the sample location and qi is the sample weight.
Thus PF can capture the time-varying nature of distributions
commonly encountered in nonlinear dynamic problems, and
any moment can be calculated from the samples. Furthermore,
this sampling based approach can solve the estimation problem in a recursive manner without resorting to model approximation. The posterior at time T may be written recursively
based on prior knowledge of the system, px|Y (x(T)|Y(T − 1)),
and the current information of the process, py|x (y(T)|x(T)) or
likelihood

px|Y (x(T)|Y(T)) ∝ py|x (y(T)|x(T)) px|Y (x(T)|Y(T − 1))    (11)

The two terms on the right hand side of Eq. (11) may be
further manipulated as follows.

px|Y (x(T)|Y(T − 1)) = ∫ px|x (x(T)|x(T − 1)) px|Y (x(T − 1)|Y(T − 1)) dx(T − 1)    (12)

Expectations with respect to a density p may be approximated from samples drawn from it,

E[f(x)] = ∫ f(x) p(x) dx ≈ (1/N) Σ_{i=1}^{N} f(xi)    (15)

Eq. (15) requires samples from the posterior, which are often
difficult to obtain since the posterior may have unusual shapes
and may lack a convenient closed-form representation. Consequently, it is common to write Eq. (15) as

E[f(x)] = ∫ f(x) (p(x)/π(x)) π(x) dx ≈ (1/N) Σ_{i=1}^{N} qi f(xi)    (16)

in which

qi = p(xi)/π(xi)    (17)

is the weight function and {xi} are samples drawn from the
importance function, π(x). This formulation permits convenient
sampling from a known distribution, π(x), and relaxes the need
to draw samples from the true posterior distribution, p(x). Also,
any pairs of samples (particles) and weights, {xi, qi}, contain
information about the relevant distribution. A basic requirement
of the importance function is that its support should include
the support of the true distribution (Geweke, 1989). Moreover,
having f(xi)q(xi) roughly equal for all particles ensures precise
estimates.
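Eqs. (15)–(17) are easy to demonstrate numerically. In the sketch below the target density p is a standard normal, the importance function π is a broader normal (so its support covers that of p), and E[x²] = 1 is recovered from weighted samples; all of the specific choices are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: standard normal. Importance function pi: normal with std 2, so
# the support of pi includes the support of p, as required.
def p(x):
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def pi_(x):
    return np.exp(-0.5 * (x / 2) ** 2) / (2 * np.sqrt(2 * np.pi))

xi = rng.normal(0.0, 2.0, size=100_000)   # samples drawn from pi
qi = p(xi) / pi_(xi)                      # weights, Eq. (17)
est = np.mean(xi**2 * qi)                 # Eq. (16) with f(x) = x^2
```

Here est should be close to E[x²] = 1 under the target density, even though no sample was ever drawn from p itself, which is the whole point of the importance-function formulation.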
A computationally efficient and recursive solution to Eqs.
(12)–(14) is provided by sequential Monte Carlo (SMC) sampling. The recursive approach of SMC is depicted graphically in Fig. 2. Information available at time T − 1 includes
the particles and weights, (xi(T − 1), qi(T − 1)), which represent the posterior at time T − 1, px|Y (x(T − 1)|Y(T − 1)). Bayes'
rule is applied recursively by passing each sample through
the state equation, Eq. (1), to obtain samples corresponding
to the prior at time T, px|Y (x(T)|Y(T − 1)). This prediction step
utilizes information about process dynamics and model accuracy without making any assumptions about the nature of the
dynamics and shape or any other characteristic of the distributions. Once the measurement y(T) is available, it can be
used to recursively update the previous weights by the following equation (Arulampalam, Maskell, Gordon, & Clapp,
2002).
qi(T) ∝ qi(T − 1) · [ p(y(T)|xi(T)) p(xi(T)|xi(T − 1)) ] / π(xi(T)|xi(T − 1), y(T))    (18)

Resampling then replaces the weighted representation

p̂x(x) = Σ_{i=1}^{np} qi δ(x − ai)

by a new, equally weighted set of particles āi drawn from it,

p̂x(x) = Σ_{i=1}^{np} q̄i δ(x − āi)
The resampled density is clearly not the same as the original sampled density. It is likely that we have moved many of
the new samples to places where the original density has large
values. But by resampling in the fashion described here, we
have not introduced bias into the estimates (Gelfand & Smith,
1990).
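For the common "bootstrap" choice of importance function, π(x(T)|x(T − 1), y(T)) = p(x(T)|x(T − 1)), the weight update of Eq. (18) reduces to multiplication by the likelihood. The sketch below implements one such SMC step with multinomial resampling after every measurement; the scalar Gaussian model and all names are our choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf_step(particles, weights, y, F, h, q_std, r_std):
    """One bootstrap SMC step: propagate, reweight by likelihood, resample."""
    # Prediction: pass each particle through the state equation plus noise
    particles = F(particles) + rng.normal(0.0, q_std, size=particles.shape)
    # Update: bootstrap form of Eq. (18), q_i(T) prop. to q_i(T-1) p(y|x_i)
    lik = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
    weights = weights * lik + 1e-300      # tiny floor guards against underflow
    weights = weights / weights.sum()
    # Multinomial resampling back to equal weights
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)
```

Running this step on a simple scalar model such as x(k + 1) = 0.9x(k) + w(k), y(k) = x(k) + v(k) shows the resampled cloud tracking the state; it also exposes the degeneracy discussed above when the initial particles miss the true state entirely.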
Degeneracy may also appear due to little overlap between the
prior and likelihood, which may be due to a poor initial guess or
large unmodeled changes in the system. Methods for addressing
these challenges include the hybrid use of particle filtering with
EKF or empirical Bayes methods as described in more detail
and illustrated by (Chen et al., 2004; Lang, Goel, & Bakshi,
2006).
The resulting algorithm is fully recursive and computationally efficient since the sampling-based approach avoids integration for obtaining the moments at each time step. The recursive
nature implies that solving a nonlinear optimization problem
in a moving window or approximating the prior by the type
of methods necessary for MHE are not required. Furthermore,
SMC does not rely on restrictive assumptions about the nature
of the error or prior distributions and models, making it broadly
applicable.
3.6. Estimating covariances from data
All of the techniques described in this review depend on
knowing densities of the disturbances to the process and
measurement, pw (w) and pv (v). In process control applications, these are never known and must be obtained from
operating data. This need has been addressed by numerous researchers in control and identification starting with
the classic approaches of Mehra and Belanger (Belanger,
1974; Mehra, 1970). Obtaining better disturbance statistics
from data remains a topic of current research (Valappil &
Georgakis, 2000). Odelson and coworkers provide a recent
review of the classical methods and suggest some new
improvements (Odelson, Lutz, & Rawlings, 2006; Odelson,
Rajamani, & Rawlings, 2006; Rajamani, Rawlings, & Qin,
2006).
We next present two tutorial examples to illustrate some of
the issues discussed in this review.
Fig. 4. Conditional density of state vs. time, before (P⁻(k)) and after (P(k))
measurement y(k) = x1(k). Analytical solution from Kalman filtering.
4. Examples
4.1. Linear system and estimating conditional density
Consider the linear dynamic system
x(k + 1) = Ax(k) + Bu(k) + Gw(k)
y(k) = Cx(k) + v(k)
in which w and v are zero mean, normally distributed with
covariances Q and R, and the initial state is distributed as
x(0) ∼ N(x̄(0), Q0). We choose the model parameters as follows:

A = [ 0.7  0.3 ],   B = I2,   C = [ 1  0 ],   G = I2
    [ 0.3  0.7 ]

Q0 = [ 1.75  1.25 ],   Q = 0.1 I2,   R = 0.01
     [ 1.25  1.75 ]
Notice we are measuring only the first state. The system is
observable and we can reconstruct the second state from the measurements. We examine the state evolution and the conditional
density until k = 2 with the following input and measurement
sequences.
x̄(0) = [1, 1]ᵀ,   u(0) = [7, 2]ᵀ,   u(1) = [7, 1]ᵀ,   u(2) = [7, 1]ᵀ

y(0) = 3,   y(1) = 9,   y(2) = 13
Fig. 9. Stochastic evolution of states and the particle filter mean estimates in the
batch reactor.
Resampling is carried out after every measurement. A poor initial guess of the states (for example x̂(0) = [0.1, 4.5]ᵀ with
Q0 = Q) leads to divergence of the particle filter. To avoid
the divergence a broad initial spread of the particles can be
chosen.
Fig. 10 shows the formation of multiple peaks for the probability density p(x(2)|y(0), y(1), y(2)) as tracked by the particles at
t = 0.2. However, we can see that the particles are concentrated
at a few discrete locations rather than being spread out. This
impoverishment is also illustrated in the distribution of particles
at time t = 0.8 in Fig. 11. The multiple peaks finally disappear
at time t = 1.5 as seen in Fig. 12. The mean estimate using the
particle filter does not converge to the actual state, however, as
ẋ = f(x) = [ −2kPA² ],   k = 0.16
           [   kPA²  ]

y = [ 1  1 ] x,   x(0) = [1, 4.5]ᵀ
The total pressure is measured. The state and the measurements are corrupted by Gaussian noises with covariances
Q = diag(0.001², 0.001²) and R = 0.1², respectively. The discretization time is Δt = t(k + 1) − t(k) = 0.1.
The particle filter with SMC sampling was used with 1000
particles for estimating the states evolving as shown in Fig. 9.
P̂A and P̂B denote the weighted mean estimates of the particles.
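A sketch of this computation follows, assuming an explicit Euler discretization of the reactor model dPA/dt = −2kPA², dPB/dt = kPA² with k = 0.16, Δt = 0.1, the stated noise levels, 1000 particles, and a broad initial particle spread; the seed, spread magnitude, and horizon length are our choices:

```python
import numpy as np

rng = np.random.default_rng(2)
k, dt, n_p = 0.16, 0.1, 1000          # rate constant, time step, particle count

def F(x):
    # Explicit Euler step of dPA/dt = -2k PA^2, dPB/dt = k PA^2
    pa, pb = x[..., 0], x[..., 1]
    return np.stack([pa - 2 * k * pa**2 * dt, pb + k * pa**2 * dt], axis=-1)

q_std, r_std = 0.001, 0.1             # noise levels stated in the text
x = np.array([1.0, 4.5])              # initial state as read from the example
# Broad initial spread of particles, as the text recommends (magnitude ours)
particles = x + rng.normal(0.0, 0.5, (n_p, 2))
for step in range(15):
    x = F(x) + rng.normal(0.0, q_std, 2)          # stochastic state evolution
    y = x.sum() + rng.normal(0.0, r_std)          # total pressure measurement
    particles = F(particles) + rng.normal(0.0, q_std, (n_p, 2))
    w = np.exp(-0.5 * ((y - particles.sum(axis=1)) / r_std) ** 2)
    w = w / w.sum()
    particles = particles[rng.choice(n_p, n_p, p=w)]  # resample every step
# After resampling, the weighted mean reduces to a plain particle mean
pa_hat, pb_hat = particles.mean(axis=0)
```

Because only the total pressure is measured, the particle cloud is pinned along PA + PB ≈ y while the individual states are resolved only through the dynamics, which is what makes this example prone to the multi-modality and sample impoverishment discussed above.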
Doucet, A., Godsill, S., & Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197–208.
Ferrari-Trecate, G., Mignone, D., & Morari, M. (2002). Moving horizon estimation for hybrid systems. IEEE Transactions on Automatic Control, 47(10), 1663–1676.
Gatzke, E., & Doyle, F. J. (2002). Use of multiple models and qualitative knowledge for on-line moving horizon disturbance estimation and fault diagnosis. Journal of Process Control, 12(2), 339–352.
Gelb, A. (Ed.). (1974). Applied optimal estimation. Cambridge, Massachusetts:
The M.I.T. Press.
Gelfand, A., & Smith, A. (1990). Sampling based approaches to calculating marginal densities. Journal of the American Statistical Association, 85, 398–408.
Geweke, J. (1989, November). Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57(6), 1317–1339.
Goel, P., Lang, L., & Bakshi, B. R. (2005, January). Sequential Monte Carlo
in Bayesian inference for dynamic models: An overview. In Proceedings of
International Workshop/Conference on Bayesian Statistics and its Applications, Co-sponsored by International Society for Bayesian Analysis.
Goodwin, G. C., De Dona, J. A., Seron, M. A., & Zhuo, X. W. (2005). Lagrangian duality between constrained estimation and control. Automatica, 41, 935–944.
Goodwin, G. C., Haimovich, H., Quevedo, D. E., & Welsh, J. S. (2005, September). A moving horizon approach to networked control system design. IEEE Transactions on Automatic Control, 49(9), 1427–1445.
Gordon, N., Salmond, D., & Smith, A. (1993, April). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F: Radar and Signal Processing, 140(2), 107–113.
Gudi, R., Shah, S., & Gray, M. (1994). Multirate state and parameter estimation in an antibiotic fermentation with delayed measurements. Biotechnology and Bioengineering, 44, 1271–1278.
Handschin, J. E., & Mayne, D. Q. (1969). Monte Carlo techniques to estimate the conditional expectation in multistage nonlinear filtering. International Journal of Control, 9(5), 547–559.
Haseltine, E. L., & Rawlings, J. B. (2002, August). A critical evaluation of extended Kalman filtering and moving horizon estimation. TWMCC, Department of Chemical Engineering, University of Wisconsin-Madison, Tech. Rep. 2002-03.
Haseltine, E. L., & Rawlings, J. B. (2005, April). Critical evaluation of extended Kalman filtering and moving horizon estimation. Industrial and Engineering Chemistry Research, 44(8), 2451–2460 [Online]. Available: http://pubs.acs.org/journals/iecred/.
Ho, Y. C., & Lee, R. C. K. (1964). A Bayesian approach to problems in stochastic estimation and control. IEEE Transactions on Automatic Control, 9(5), 333–339.
Julier, S., & Uhlmann, J. (2002, August). Author's reply. IEEE Transactions on Automatic Control, 47(8), 1408–1409.
Julier, S. J., & Uhlmann, J. K. (2004, March). Unscented filtering and nonlinear estimation. Proceedings of the IEEE, 92(3), 401–422.
Julier, S. J., & Uhlmann, J. K. (2004, December). Corrections to "Unscented filtering and nonlinear estimation". Proceedings of the IEEE, 92(12), 1958.
Julier, S. J., Uhlmann, J. K., & Durrant-Whyte, H. F. (2000, March). A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Transactions on Automatic Control, 45(3), 477–482.
Kim, I., Liebman, M., & Edgar, T. (1991). A sequential error-in-variables method for nonlinear dynamic systems. Computers and Chemical Engineering, 15(9), 663–670.
Kong, A., Liu, J. S., & Wong, W. H. (1994, March). Sequential imputations and Bayesian missing data problems. Journal of the American Statistical Association, 89(425), 278–288.
Lang, L., Goel, P. K., & Bakshi, B. R. (2006, January). A smoothing based
method to improve performance of sequential Monte Carlo estimation under
poor initial guess. In Proceedings of Chemical Process Control 7.
Lefebvre, T., Bruyninckx, H., & De Schutter, J. (2002, August). Comment on "A new method for the nonlinear transformation of means and covariances in filters and estimators". IEEE Transactions on Automatic Control, 47(8), 1406–1408.
Liebman, M., Edgar, T., & Lasdon, L. (1992). Efficient data reconciliation and estimation for dynamic processes using nonlinear programming techniques. Computers and Chemical Engineering, 16(10/11), 963–986.
Meadows, E. S., Muske, K. R., & Rawlings, J. B. (1993, June). Constrained state estimation and discontinuous feedback in model predictive control. In Proceedings of the 1993 European Control Conference (pp. 2308–2312).
Mehra, R. (1970). On the identification of variances and adaptive Kalman filtering. IEEE Transactions on Automatic Control, 15(12), 175–184.
Mhamdi, A., Helbig, A., Abel, O., & Marquardt, W. (1996). Newton-type receding horizon control and state estimation. In Proceedings of the 1996 IFAC World Congress (pp. 121–126).
Michalska, H., & Mayne, D. Q. (1995). Moving horizon observers and observer-based control. IEEE Transactions on Automatic Control, 40(6), 995–1006.
Middlebrooks, S. A. (2001). Modelling and control of silicon and germanium thin film chemical vapor deposition. Ph.D. dissertation, University of Wisconsin-Madison.
Moraal, P. E., & Grizzle, J. W. (1995). Observer design for nonlinear systems with discrete-time measurements. IEEE Transactions on Automatic Control, 40(3), 395–404.
Muske, K. R., & Rawlings, J. B. (1995). Nonlinear moving horizon state estimation. In R. Berber (Ed.), Methods of model based process control (pp. 349–365). Dordrecht, The Netherlands: Kluwer, Ser. NATO advanced study institute series: E Applied Sciences 293.
Nørgaard, M., Poulsen, N. K., & Ravn, O. (2000). New developments in state estimation for nonlinear systems. Automatica, 36, 1627–1638.
Odelson, B. J., Lutz, A., & Rawlings, J. B. (2006, May). The autocovariance least-squares methods for estimating covariances: Application to model-based control of chemical reactors. IEEE Control Systems Technology, 14(3), 532–541.
Odelson, B. J., Rajamani, M. R., & Rawlings, J. B. (2006, February). A new autocovariance least-squares method for estimating noise covariances. Automatica, 42(2), 303–308 [Online]. Available: http://www.elsevier.com/locate/automatica.
Prasad, V., Schley, M., Russo, L. P., & Bequette, B. W. (2002). Product property and production rate control of styrene polymerization. Journal of Process Control, 12(3), 353–372.
Rajamani, M. R., Rawlings, J. B., & Qin, S. J. (2006). Equivalence of MPC
disturbance models identified from data. In Proceedings of Chemical Process
Control 7.
Ramamurthi, Y., Sistu, P., & Bequette, B. (1993). Control-relevant dynamic data reconciliation and parameter estimation. Computers and Chemical Engineering, 17(1), 41–59.
Rao, C. V. (2000). Moving horizon strategies for the constrained monitoring and control of nonlinear discrete-time systems. Ph.D. dissertation, University of Wisconsin-Madison.
Rao, C. V., & Rawlings, J. B. (2000). Nonlinear moving horizon estimation. In F. Allgöwer & A. Zheng (Eds.), Nonlinear model predictive control: Vol. 26 (pp. 45–69). Basel: Birkhäuser, Ser. Progress in systems and control theory.
Rao, C. V., & Rawlings, J. B. (2002, January). Constrained process monitoring: Moving-horizon approach. AIChE Journal, 48(1), 97–109.
Rao, C. V., Rawlings, J. B., & Lee, J. H. (2001). Constrained linear state estimation: a moving horizon approach. Automatica, 37(10), 1619–1628.
Rao, C. V., Rawlings, J. B., & Mayne, D. Q. (2003, February). Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Transactions on Automatic Control, 48(2), 246–258.