2025 AIChE Annual Meeting

(644e) Deep-Learning-Aided Modifier Adaptation for Real-Time Optimization

Authors

Calvin Tsay, Imperial College London
Luis Ricardez-Sandoval, University of Waterloo
Deep learning allows functions, and their gradients, to be approximated to high accuracy owing to the universal approximation property of deep neural networks (DNNs) [1,2]. Modifier adaptation (MA) [3] is a real-time optimization (RTO) [4] method used to optimize process economics online. A drawback of MA is that it requires plant gradients to make first-order model corrections; these are difficult to acquire in practice, as they call for potentially time-consuming plant perturbations that delay the optimization procedure [5].
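For context, standard MA corrects the model-based optimization problem with zeroth- and first-order modifier terms. A sketch in the notation of [3] (the exact formulation used in this work may differ), where u denotes the inputs, u_k the current operating point, \phi and g_i the model cost and constraints, and \phi_p and g_{p,i} their plant counterparts:

\min_{u} \; \phi(u) + (\lambda_k^{\phi})^{\top} u \quad \text{s.t.} \quad g_i(u) + \varepsilon_k^{i} + (\lambda_k^{i})^{\top}(u - u_k) \le 0,

with modifiers

\varepsilon_k^{i} = g_{p,i}(u_k) - g_i(u_k), \quad \lambda_k^{i} = \nabla g_{p,i}(u_k) - \nabla g_i(u_k), \quad \lambda_k^{\phi} = \nabla \phi_p(u_k) - \nabla \phi(u_k).

The plant gradients \nabla\phi_p and \nabla g_{p,i} are precisely the quantities that are costly to estimate by perturbation.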

Process historians are commonplace in today’s chemical industry, and they contain a wealth of data that often goes unused. In this work, we present the use of backpropagated gradients computed from DNNs trained on historical steady-state data, which therefore require no explicit gradient data for training or prediction. Backpropagation, the dominant method of training DNNs, applies the chain rule to compute derivatives of the loss function with respect to the network parameters [6]; it is conveniently implemented in most machine learning libraries [7]. We apply the same principle to the trained network to extract output-input gradients efficiently. Our method, which we call DNN-MA, is shown to retain provable convergence to the plant optimum, a key feature of MA. The resulting DNN-MA scheme is a grey-box modelling approach, since it makes data-driven adjustments to an uncertain process model; accordingly, it balances the efficiency of data-driven modelling techniques with the interpretability of mechanistic modelling.
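As an illustration of how such gradients can be extracted, the following minimal PyTorch [7] sketch backpropagates through a trained surrogate to obtain output-input sensitivities at the current operating point (the architecture, dimensions, and variable names here are illustrative, not those used in this work):

import torch

# Illustrative surrogate DNN mapping process inputs u (dimension 3) to a
# steady-state plant output y (e.g., a cost or constraint value).
model = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
# ... `model` would be trained here on historical steady-state (u, y) pairs ...

u_k = torch.tensor([1.0, 0.5, 2.0], requires_grad=True)  # current operating point
y = model(u_k).squeeze()  # predicted steady-state output at u_k
# Backpropagate through the trained network to obtain dy/du at u_k; these
# sensitivities stand in for the plant gradients in the MA modifiers.
(grad_u,) = torch.autograd.grad(y, u_k)

No gradient data are needed at any point: training uses only input-output pairs, and the gradient estimate comes from differentiating the trained network.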

DNN-MA is demonstrated on analogous integrated and intensified reactor-separator systems [8], where it is shown to reconcile the plant and model optima in the presence of model mismatch. The case studies show better economics and constraint satisfaction when using the intensified system together with DNN-MA. Further, process intensification and DNN-MA are observed to work in tandem, as both accelerate the convergence of the plant to its true optimum (i.e., a faster closed-loop response). The proposed method shows how recorded historical data can be leveraged to address epistemic uncertainty and improve performance in model-based optimization, especially for intensified systems.

[1] Hornik, K. (1991). Approximation Capabilities of Multilayer Feedforward Networks. Neural Netw. 4(2), 251–257.

[2] Guliyev, N.J., Ismailov, V.E. (2018). Approximation capability of two hidden layer feedforward neural networks with fixed weights. Neurocomputing 316, 262–278.

[3] Marchetti, A., Chachuat, B., Bonvin, D. (2009). Modifier-Adaptation Methodology for Real-Time Optimization. Ind. Eng. Chem. Res. 48(13), 6022–6033.

[4] Darby, M.L., Nikolaou, M., Jones, J., Nicholson, D. (2011). RTO: An overview and assessment of current practice. J. Process Control 21(6), 874–884.

[5] Patrón, G.D., Ricardez-Sandoval, L. (2023). Directional modifier adaptation based on input selection for real-time optimization. Comput. Chem. Eng. 177, 108351.

[6] Rumelhart, D.E., Hinton, G.E., Williams, R.J. (1986). Learning representations by back-propagating errors. Nature 323, 533–536.

[7] Paszke, A., et al. (2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. Adv. Neural Inf. Process. Syst. 32.

[8] Baldea, M. (2015). From process integration to process intensification. Comput. Chem. Eng. 81, 104–114.