2013 AIChE Annual Meeting
(201a) Simultaneous Dynamic Data Reconciliation and Model Updating for Kinetic Parameters Determination of Biomass Pyrolysis Using NSPSO Bio-Inspired Optimization Algorithm
Biomass pyrolysis is a thermochemical conversion process of both industrial and ecological importance. From the design and operation of industrial biomass conversion systems to reactor modeling, understanding the kinetics of solid-state pyrolysis is essential. Biomass pyrolysis consists of a complicated system of multiple reactions, and it is not easy to determine the reaction mechanism in the detail necessary to formulate the reaction kinetics. Thus, several simplified models have been suggested. One of these models lumps the complicated multiple reactions together as a single first-order or nth-order reaction. Thermal decomposition experiments on lignocellulosic biomass materials and on their components separated by an enzymatic process were carried out to generate experimental data supporting the development of a kinetic model describing the pyrolysis process. A kinetic analysis of biomass pyrolysis is usually conducted by measuring either a thermogravimetric (TGA) curve or a differential thermogravimetric (DTG) curve.
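As an illustration of the lumped kinetic description above, the following is a minimal sketch of a single pseudo-nth-order reaction under a linear heating program; the parameter values (A, Ea, n, T0, beta) are placeholders for illustration only, not the values estimated in this work.

```python
# Lumped pseudo-nth-order pyrolysis model: dalpha/dt = A*exp(-Ea/RT)*(1-alpha)^n
# with a linear heating program T = T0 + beta*t (illustrative parameters only).
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

def dalpha_dt(t, alpha, A, Ea, n, T0, beta):
    """Rate of conversion for a single lumped pseudo-nth-order reaction."""
    T = T0 + beta * t                      # heating program, K
    k = A * np.exp(-Ea / (R * T))          # Arrhenius rate constant, 1/s
    return k * (1.0 - alpha) ** n

# Illustrative (hypothetical) parameter values
A, Ea, n = 1.0e10, 1.5e5, 1.5              # 1/s, J/mol, reaction order
T0, beta = 300.0, 10.0 / 60.0              # K, K/s (10 K/min)

sol = solve_ivp(dalpha_dt, (0.0, 3600.0), [0.0],
                args=(A, Ea, n, T0, beta), dense_output=True)
# 1 - alpha approximates the normalized TGA mass-loss curve.
```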
Usually, process measurements do not reveal complete and correct information about the process state. This is due to the limited number of process quantities accessible to measuring instruments, the consequent reduction in the reliability of the gathered data, and the unavoidable measurement errors caused by imperfect instrumentation, measuring devices, sampling equipment, sensor positioning, and signal processing or conversion.
Data reconciliation (DR) is a filtering technique that improves the reliability and accuracy of process data; it reduces the impact of random errors and biases and generates estimates consistent with the process model by forcing them to satisfy material and energy balance constraints. This technique also makes it possible to estimate unmeasured variables under favorable observability conditions. Data reconciliation usually employs a weighted least-squares objective function and is based on the assumption that random errors are normally distributed with zero mean and known variance. However, the weighted least-squares objective function can lead to incorrect estimation and severely biased reconciliation when gross errors are present. To make the estimation insensitive to gross errors, robust approaches have been proposed. Comparison of the least-squares estimator with robust M-estimators shows that the latter are superior in detecting outliers and in robustness. However, conventional robust DR may converge to local solutions in nonlinear DR problems because of the discontinuous and nonconvex properties of the robust M-estimator. Dynamic data reconciliation (DDR) is a natural extension of DR to dynamic processes and provides a set of reconciled estimates over a time horizon. DDR combines state, parameter, and unknown-input estimation, and it is typically formulated as a dynamic optimization problem constrained by a process model.
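To make the contrast concrete, the sketch below compares the weighted least-squares penalty used in conventional DR with a robust Huber-type M-estimator penalty (one common choice among the robust estimators mentioned above); the tuning constant c and the measurement standard deviations sigma are illustrative assumptions. In a DR problem either penalty would be minimized subject to the balance or model constraints.

```python
# Weighted least-squares vs. robust Huber M-estimator penalties on standardized
# residuals; the tuning constant c = 1.345 is a conventional illustrative value.
import numpy as np

def wls_penalty(y_meas, y_rec, sigma):
    """Weighted least-squares objective: sensitive to gross errors."""
    r = (y_meas - y_rec) / sigma
    return np.sum(r ** 2)

def huber_penalty(y_meas, y_rec, sigma, c=1.345):
    """Huber M-estimator: quadratic for small residuals, linear for outliers."""
    r = np.abs(y_meas - y_rec) / sigma
    quad = 0.5 * r ** 2
    lin = c * r - 0.5 * c ** 2
    return np.sum(np.where(r <= c, quad, lin))
```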
In the present work, a maximum likelihood method for the dynamic measurements and the kinetic model of the pyrolysis process is formulated as the objective function for the simultaneous solution of the dynamic data reconciliation and model updating problems. The method of maximum likelihood provides a basis for many estimation techniques, and its advantages are: a) it has a sound intuitive foundation, the underlying concept being that the best estimate of a parameter is the one that gives the highest probability of obtaining the observed set of measurements; b) the least-squares method and various approaches such as combining errors or calculating weighted averages can be derived or justified in terms of the maximum likelihood approach; c) the method is general enough that most problems are amenable to a straightforward application, even in cases where other techniques become difficult, and such inelegant but conceptually simple approaches often provide useful results where there is no easy alternative. Given a set of observations from a population with a known probability distribution function, the maximum likelihood method provides a solution to the estimation problem. A common problem of experimental design is to determine in advance whether a proposed experimental configuration will achieve a specified precision; accordingly, the experimental results were used to estimate the errors in the parameters.
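As a minimal sketch of this idea, under the assumption of independent Gaussian measurement errors with known variances the maximum-likelihood objective for simultaneous DDR and model updating reduces to a weighted least-squares criterion over the time horizon, with the kinetic model embedded in the prediction; the names theta, y_meas, times, and model below are illustrative, not the notation of the original work.

```python
# Negative log-likelihood for simultaneous dynamic data reconciliation and
# model updating, assuming independent Gaussian errors with known variances.
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, times, y_meas, sigma, model):
    """model(theta, times) integrates the kinetic model and returns the
    predicted (reconciled) trajectory on the same time grid."""
    y_pred = model(theta, times)
    r = (y_meas - y_pred) / sigma
    return 0.5 * np.sum(r ** 2)            # additive constants dropped

# Example use (theta0 is an initial parameter guess):
# theta_hat = minimize(neg_log_likelihood, theta0,
#                      args=(times, y_meas, sigma, model)).x
```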
Using experiments in an effective way to analyze and optimize a given process is a problem that deserves more attention in the engineering community. Frequently, experiments are performed first and the measured data are analyzed afterwards. In contrast to this strategy, statistical methods such as Design of Experiments should be used to plan experimental designs and to perform sets of well-selected experiments that give the most informative combination of the given factors.
The selection of an appropriate experimental design depends on the objectives, the most important of which are: providing a satisfactory distribution of information throughout the region of interest, ensuring a good fit between the model and the real process, employing a minimum number of experimental runs, minimizing the complexity of the design of experiments (DOE) calculation, allowing for a determination of model adequacy, and being robust to errors in the settings of the experimental variables. Alphabetic optimal experimental designs are common methods for reducing parameter uncertainty through planned experimentation. The four most common are A-, D-, E-, and G-optimal designs. In the context of experimental design for decision variables computed from estimated models, these four alphabetic designs are defined as follows: D-optimal designs minimize the determinant of the decision-variable covariance matrix, A-optimal designs minimize the sum of the variances of the individual decision variables, E-optimal designs minimize the largest eigenvalue of the decision-variable covariance matrix, and G-optimal designs minimize the maximum variance of the predicted responses. In this work, D-optimal designs were selected and the algorithm was programmed in Matlab®.
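For illustration, the sketch below selects a D-optimal subset of candidate runs by maximizing the determinant of the information matrix J^T J, which is equivalent to minimizing the determinant of the (linearized) covariance matrix; the exhaustive subset search and the precomputed sensitivity rows are simplifying assumptions for a small candidate set, not the Matlab® implementation used in this work.

```python
# D-optimality criterion: choose the n_runs candidate experiments whose
# sensitivity rows maximize det(J^T J) (exhaustive search, illustration only).
import numpy as np
from itertools import combinations

def d_optimal_subset(J_candidates, n_runs):
    """J_candidates: (n_candidates, n_params) sensitivity rows, one per run."""
    best, best_det = None, -np.inf
    for idx in combinations(range(J_candidates.shape[0]), n_runs):
        J = J_candidates[list(idx), :]
        det = np.linalg.det(J.T @ J)       # D-optimality criterion
        if det > best_det:
            best, best_det = idx, det
    return best, best_det
```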
Thermogravimetric and differential scanning calorimetry (DSC) curves at different heating programs in an inert atmosphere were used for the thermal decomposition of the biomass components. From these experiments, the variation of mass versus time was obtained. In addition, the chemical composition of lignin in the biomass, the chemical composition of cellulose in the biomass, and the heating rate were manipulated according to the D-optimal experimental design.
Pseudo-first-order and pseudo-nth-order reaction schemes were employed in the modeling. The kinetic parameters were estimated using a bio-inspired algorithm for multi-objective optimization: Non-Dominated Sorting Particle Swarm Optimization (NSPSO). Particle swarm optimization is inspired by the flocking behavior of birds, is very simple to simulate, and has been found to be quite efficient in handling single-objective optimization (SOO) problems. The simplicity and efficiency of PSO motivated researchers to apply it to multi-objective optimization (MOO) problems. A multi-objective optimization problem is defined as the maximization or minimization of several objectives subject to equality and inequality constraints. Several works extend the particle swarm optimization (PSO) method to handle multi-objective optimization problems; among these algorithms, NSPSO is based on the same non-dominated sorting concept used in NSGA-II.
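The following is a minimal global-best PSO sketch for the single-objective case (e.g., minimizing the negative log-likelihood above); NSPSO extends this scheme with the non-dominated sorting of NSGA-II to handle several objectives. Swarm size, inertia weight, and acceleration constants are illustrative assumptions, not the settings used in this work.

```python
# Minimal global-best PSO for bound-constrained single-objective minimization.
import numpy as np

def pso(objective, lb, ub, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))    # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()                # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                      # keep within bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```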
In this paper, SOO is used to obtain the kinetic parameters from the D-optimal experiment, and MOO is used to obtain the parameter confidence intervals. Model validation was carried out with the Fisher-Snedecor test for all categories. Application of the Fisher-Snedecor test requires selecting a level of significance and accounting for the Type II error: the probability of accepting that there is no difference between the variances of each type of experiment when in reality there is a difference. The test then compares the variances of each category in two parts: the first verifies the accuracy of the experimentation, and the second the quality of the model.
If the first test is not satisfied but the second one is, a generalized behavior model is obtained. If both tests are satisfied, it can be said that the model represents common knowledge; however, if both tests fail, it cannot be said that the model is invalidated (the model is considered validated by default). A sketch of the underlying variance-ratio test is given below.
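The sketch below shows the elementary building block of the two-part Fisher-Snedecor (F) procedure described above: a one-sided comparison of a variance ratio against the critical F value. The degrees of freedom and the significance level alpha are illustrative assumptions; in the first part the ratio compares replicate experimental variances, and in the second part the model residual variance is compared against the experimental variance.

```python
# One-sided Fisher-Snedecor variance-ratio test (building block, illustration).
import scipy.stats as st

def f_test(var_num, var_den, dof_num, dof_den, alpha=0.05):
    """Return True if the variance ratio does not exceed the critical F value."""
    F = var_num / var_den
    F_crit = st.f.ppf(1.0 - alpha, dof_num, dof_den)
    return F <= F_crit
```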
We found that DDR and model updating (parameter estimation) can be performed simultaneously while validating the model with the Fisher-Snedecor test; with this strategy, a generalized model with three pseudo-nth-order reactions was obtained. Moreover, the lignin TGA curves made it possible to identify a fourth component, which was added to the kinetic model. The four-reaction pseudo-nth-order model was also generalized. Simultaneous DDR and model updating reduces the confidence intervals of the kinetic parameters compared with performing data reconciliation and model updating separately. Model estimation by NSPSO is a suitable strategy because of the straightforward implementation and the quality of the estimates, which was confirmed by the statistical tests.