2014 Spring Meeting & 10th Global Congress on Process Safety
(81f) Improved Estimator Insensitivity to Outliers, Measurement Drift, and Noise
Reza Asgharzadeh Shishavan and John D. Hedengren
Brigham Young University
350R CB, Provo, UT 84602
Three common problems encountered with industrial data are outliers, measurement drift, and measurement noise. Gross error detection is commonly used to eliminate outliers, while estimators are employed to reconstruct actual process values from noisy or drift-prone measurements. Common approaches include filtered bias updates, Kalman filtering, adaptive observers, inferential calculations for predicted values, and Moving Horizon Estimation (MHE). A problem with many of these approaches is that state and parameter estimates shift when the data are corrupted. The focus of this paper is the design of a novel MHE estimator and Nonlinear Model Predictive Controller (NMPC) framework that is less sensitive to the three classes of corrupt data described above (outliers, drift, and noise) [1].
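To make that sensitivity concrete, the minimal Python sketch below (not taken from the paper; the filter gain, prediction, and data are illustrative assumptions) shows how a simple filtered bias update lets a single outlier pull the corrected estimate away from the true process value:

```python
import numpy as np

# Filtered bias update: the bias is nudged toward the measurement-prediction
# residual at each step; a single outlier shifts the bias and corrupts the
# corrected estimates that follow.
alpha = 0.3                    # filter gain (illustrative)
prediction = 50.0              # constant model prediction of the process value
measurements = np.array([50.2, 49.8, 50.1, 80.0, 50.0, 49.9])  # 80.0 is an outlier

bias = 0.0
for k, y_meas in enumerate(measurements):
    bias += alpha * (y_meas - (prediction + bias))
    corrected = prediction + bias
    print(f"step {k}: measurement = {y_meas:5.1f}, corrected estimate = {corrected:6.2f}")
# The corrected estimate jumps toward the outlier at step 3 and only
# gradually returns toward the true value of about 50.
```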
This novel formulation is a modified ℓ1-norm objective for estimation and control with a simultaneous optimization approach. The advantage of this form over conventional ℓ1-norm, squared-error, or ℓ2-norm objectives is the inclusion of a dead-band that improves outlier rejection, reduces sensitivity to measurement drift, and suppresses unnecessary state or parameter adjustments in response to noise. The solution method expresses this objective with inequality constraints and slack variables so that the ℓ1-norm can be applied in nonlinear optimization based controllers and estimators.
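As a sketch of how a dead-band ℓ1-norm objective can be expressed with inequality constraints and slack variables (the notation here is illustrative and not necessarily that of the paper), let y_k^meas denote the measurements, y_k the model values, db the dead-band width, and e_k^U, e_k^L non-negative slack variables:

```latex
\[
\begin{aligned}
\min_{y,\; e^U,\; e^L} \quad & \sum_{k} \left( e_k^U + e_k^L \right) \\
\text{subject to} \quad & e_k^U \ge y_k^{\mathrm{meas}} - y_k - \tfrac{db}{2} \\
& e_k^L \ge y_k - y_k^{\mathrm{meas}} - \tfrac{db}{2} \\
& e_k^U \ge 0, \qquad e_k^L \ge 0 \\
& 0 = f(\dot{x}, x, y, p, u) \quad \text{(model and constraint equations)}
\end{aligned}
\]
```

At the optimum, e_k^U + e_k^L = max(0, |y_k^meas − y_k| − db/2), so residuals inside the dead-band incur no penalty, while residuals outside it are penalized linearly rather than quadratically, which limits the influence of outliers and suppresses adjustments to noise.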
Additionally, this paper offers details on the simultaneous optimization of the objective function and model equations for solving nonlinear parameter estimation and control problems. Simultaneous optimization decreases the computational time by several orders of magnitude for some problems with moderate-sized models but a relatively large number of decision variables [2-5]. Sequential optimization approaches, where the model and objective function are solved separately in a series of iterations, may be better suited to large-scale models such as those that result from distributed parameter systems described by partial differential and algebraic equations [6]. Depending on the size of the model and the number of decision variables, either approach may be more suitable for solving complex, large-scale industrial problems that involve multiple variables and require rapid convergence.
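As one possible illustration of these options (not the implementation used in the paper), the sketch below uses the GEKKO Python interface to APMonitor, which exposes both solution strategies and a dead-band ℓ1-norm objective through configuration options; the first-order model, synthetic data, and tuning values are assumptions made only for the example:

```python
import numpy as np
from gekko import GEKKO

m = GEKKO(remote=False)
t = np.linspace(0, 10, 51)
m.time = t

# Unknown parameters to be estimated (illustrative first-order process)
K = m.FV(value=1.0, lb=0.1, ub=10.0);   K.STATUS = 1
tau = m.FV(value=5.0, lb=0.5, ub=50.0); tau.STATUS = 1

u = m.Param(value=1.0)        # step input
y = m.CV(value=0.0)
y.FSTATUS = 1                 # use measurements for this variable
y.MEAS_GAP = 0.5              # dead-band width for the l1-norm objective
m.Equation(tau * y.dt() == -y + K * u)

# Synthetic noisy measurements with one injected outlier (illustrative only)
ymeas = 2.0 * (1.0 - np.exp(-t / 3.0)) + np.random.normal(0, 0.05, len(t))
ymeas[25] += 3.0
y.value = ymeas

m.options.EV_TYPE = 1         # 1 = l1-norm objective, 2 = squared-error objective
m.options.IMODE = 5           # simultaneous dynamic estimation (MHE)
# m.options.IMODE = 8 selects the sequential (shooting) estimation mode instead
m.solve(disp=False)
print('Estimated K =', K.value[0], ', tau =', tau.value[0])
```

Switching IMODE from 5 to 8 keeps the same model and objective but solves it sequentially, and switching EV_TYPE from 1 to 2 replaces the dead-band ℓ1-norm with a squared-error objective, mirroring the kind of side-by-side comparisons described below for the distillation example.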
A typical industrial distillation problem is illustrated and solved using the two techniques: first with the proposed simultaneous method with a dead-band, and second with a sequential shooting optimization approach. The ℓ1-norm, ℓ2-norm, Kalman filter, and other estimators are also compared to demonstrate improved insensitivity to outliers, noise, and measurement drift. The computational time is compared for both cases to demonstrate the trade-offs of the proposed approaches.
References