(486g) Bias Detection and Identification Using Historical Data
Measurement bias is one type of gross error that can be caused by many sources, such as poorly calibrated or malfunctioning instruments. Several model-based approaches have been proposed for bias detection and identification; they work by comparing the actual operation of the plant with that predicted by a mathematical model using statistical hypothesis tests. Good surveys of these techniques are available in the books by Narasimhan and Jordache (2000) and Romagnoli and Sánchez (2000).
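As a minimal illustration of the model-based approach, the sketch below implements the classical global test: the balance residuals of the measured variables are tested against a chi-square limit. The balance matrix A, the covariance SIGMA, and the measured values are illustrative assumptions, not data from the works cited.

```python
# Minimal sketch of the classical global test for gross error detection.
# Assumes linear balance constraints A @ x = 0 and a known measurement
# covariance SIGMA; all numbers below are illustrative.
import numpy as np
from scipy import stats

def global_test(y, A, SIGMA, alpha=0.05):
    """Chi-square hypothesis test on the balance residuals r = A @ y."""
    r = A @ y                           # residuals of the balance equations
    V = A @ SIGMA @ A.T                 # covariance of the residuals
    gamma = r @ np.linalg.solve(V, r)   # ~ chi2(m) when no gross error is present
    limit = stats.chi2.ppf(1.0 - alpha, df=A.shape[0])
    return gamma, limit, gamma > limit

# Example: one node with flow balance y1 - y2 - y3 = 0.
A = np.array([[1.0, -1.0, -1.0]])
SIGMA = np.diag([0.5, 0.5, 0.5])
y = np.array([10.0, 4.0, 3.0])          # imbalance of 3 units suggests a bias
gamma, limit, flagged = global_test(y, A, SIGMA)
print(f"gamma = {gamma:.2f}, limit = {limit:.2f}, gross error: {flagged}")
```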
To avoid biased estimates of the process variables, other strategies incorporate the non-ideality of the data distribution into the formulation of the data reconciliation problem. Random and gross errors are thus removed simultaneously, based on their probability distributions. This is usually accomplished by combining nonlinear programming with the maximum likelihood principle, after the error distribution has been suitably characterized (Arora and Biegler, 2001; Wang and Romagnoli, 2003).
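A minimal sketch of this robust reconciliation idea follows, using a Welsch-type objective as a stand-in for the redescending estimators of Arora and Biegler (2001); the balance matrix, standard deviations, and tuning constant c are illustrative assumptions.

```python
# Minimal sketch of robust data reconciliation: minimize a bounded
# (Welsch-type) likelihood-derived objective subject to the balances.
# This illustrates the idea only; it is not the estimator of the cited works.
import numpy as np
from scipy.optimize import minimize

def reconcile(y, A, sigma, c=2.0):
    def objective(x):
        e = (y - x) / sigma                        # standardized adjustments
        return np.sum(0.5 * c**2 * (1.0 - np.exp(-(e / c) ** 2)))

    cons = {"type": "eq", "fun": lambda x: A @ x}  # balances A @ x = 0
    return minimize(objective, x0=y, constraints=cons, method="SLSQP").x

A = np.array([[1.0, -1.0, -1.0]])
sigma = np.array([0.5, 0.5, 0.5])
y = np.array([10.0, 4.0, 3.0])                     # one flow possibly biased
print(reconcile(y, A, sigma))                      # reconciled estimates
```

Because the objective levels off for large standardized errors, a biased measurement receives essentially zero weight and no longer smears its error over the remaining estimates.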
The D-statistic (Hotelling's T²) is widely used in Statistical Process Control to reliably detect an out-of-control status, but by itself it offers no assistance as a fault identification tool. Different strategies have been proposed to calculate the contribution of each process variable to the inflated statistic; they work either in the original or in the latent variable space.
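For reference, the statistic and its control limit can be computed as in the sketch below; the training data are synthetic, and the chi-square limit is the usual large-sample approximation.

```python
# Minimal sketch of the Hotelling T^2 (D) statistic used in SPC.
# Mean and covariance are estimated from synthetic in-control data;
# the chi-square control limit is a common large-sample approximation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # historical in-control data
mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))

def t2(x):
    d = x - mu
    return d @ S_inv @ d                      # T^2 = (x - mu)' S^-1 (x - mu)

limit = stats.chi2.ppf(0.99, df=X.shape[1])   # 99% control limit (approx.)
x_new = np.array([0.1, 3.5, -0.2])            # second variable shifted
print(t2(x_new) > limit)                      # True -> out-of-control signal
```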
A straightforward method to decompose the D-statistic as a unique sum of per-variable contributions, called OSS (Original Space Strategy), was recently developed by Alvarez et al. (2007). This decomposition was successfully applied to detect and identify biases in steady-state processes (Sánchez et al., 2008). Later on, Alvarez et al. (2008) proposed a new strategy to estimate the influence of a given variable on the final value of the inflated statistic. In this approach, the contribution of each variable is measured in terms of the distance between the current observation and its Nearest In Control Neighbour (NICN).
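The sketch below illustrates both ideas under stated assumptions: the split c_i = d_i (S⁻¹d)_i is one standard way to write T² as a unique sum over variables and stands in for the OSS formula, which is not reproduced here; and, lacking the authors' NICN search, the nearest in-control neighbour is approximated by the point where the segment from the observation to the mean crosses the control ellipsoid.

```python
# Minimal sketch of contribution analysis. The decomposition below is
# one standard way to split T^2 into a unique sum over variables; it
# illustrates the OSS idea but is not claimed to be the exact formula
# of Alvarez et al. (2007). The NICN is approximated by the point where
# the segment from the observation to the mean meets the T^2 = limit shell.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # synthetic in-control data
mu = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
limit = stats.chi2.ppf(0.99, df=3)

def oss_like(x):
    d = x - mu
    return d * (S_inv @ d)                    # terms sum exactly to T^2

def nicn_like(x):
    d = x - mu
    t2_val = d @ S_inv @ d
    if t2_val <= limit:                       # already in control
        return np.zeros_like(d)
    neighbour = mu + d * np.sqrt(limit / t2_val)   # on the control ellipsoid
    return np.abs(x - neighbour)              # per-variable distances

x_new = np.array([0.1, 3.5, -0.2])
print(oss_like(x_new), oss_like(x_new).sum()) # sum recovers T^2(x_new)
print(nicn_like(x_new))                       # largest entry points to the bias
```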
In this work, the detection and identification capabilities of the monitoring technique presented by Alvarez et al. (2008) are compared with those of the most commonly used gross error detection and identification techniques on a set of benchmark problems. Results indicate that the technique succeeds in identifying single and multiple biases and addresses three issues paramount to practical implementation in commercial software: robustness, uncertainty, and efficiency.
References
Alvarez, R.; Brandolin, A.; Sánchez, M. (2007) On the Variable Contributions to the D-statistic. Chemom. Intell. Lab. Syst., 88, 189-196.
Alvarez, R.; Brandolin, A.; Sánchez, M.; Puigjaner, L. (2008) A Nearest In Control Neighbour Based Method to Estimate Variable Contributions to the Hotelling's Statistic. Proceedings of 2008 AIChE Annual Meeting, Philadelphia, USA, November 16-21.
Arora, N.; Biegler, L. (2001) Redescending estimators for data reconciliation and parameter estimation. Comp. & Chem. Engng., 25, 1585-1599.
Narasimhan, S.; Jordache, C. (2000) Data Reconciliation and Gross Error Detection; Gulf Publishing Company: Houston.
Romagnoli, J.; Sánchez M. (2000) Data Processing and Reconciliation for Chemical Process Operations; Academic Press: San Diego.
Sánchez, M.; Alvarez, R.; Brandolin, A. (2008) A MSPC procedure for bias identification in steady state processes. AIChE Journal, 54 (8), 2082-2088.
Wang, D.; Romagnoli, J. (2003) A framework for robust data reconciliation based on a generalized objective function. Ind. Eng. Chem. Res., 42, 3075-3084.