2025 AIChE Annual Meeting

(328f) Reinforcement Learning for Process Control: Past Achievements and Future Directions

Author

Jong Min Lee - Presenter, Seoul National University

For decades, the curse of dimensionality in dynamic programming has been a major barrier to applying optimal control to systems with uncertain or complex dynamics. Motivated by this challenge, Prof. Jay H. Lee and I began, more than twenty years ago, to investigate a class of algorithms now broadly known as reinforcement learning (RL) or approximate dynamic programming. With recent advances in machine learning, highlighted by AlphaGo’s landmark success in 2016, RL has gained significant attention in fields such as computer science and operations research, spurring the development of a wide range of new algorithms. Systematic research on how RL can be applied effectively to chemical process control and optimization, however, remains limited. Key questions persist: Which RL algorithms are most suitable for process systems? What challenges arise when integrating RL into industrial settings? RL holds particular promise for controlling large-scale systems under uncertainty, where first-principles models are difficult to construct but high-fidelity digital twins are available. In this talk, I will discuss algorithmic advances, comparative evaluations of RL methods for process control, and practical challenges such as safety, stability, and interpretability. I will also explore future research directions, including offline RL approaches suited to safety-critical systems, reward functions tailored to the characteristics of process dynamics, and the interpretation of learned policies and value functions to support operator decision-making in real-world applications.
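For context on the dimensionality argument, the obstacle can be stated through the standard Bellman optimality equation (generic textbook notation, not anything specific to the talk):

\[
V^*(s) \;=\; \max_{a \in \mathcal{A}} \Big[\, r(s,a) \;+\; \gamma \sum_{s' \in \mathcal{S}} P(s' \mid s, a)\, V^*(s') \,\Big].
\]

Solving this exactly requires a sweep over every state in \(\mathcal{S}\): for a process state with \(n\) continuous variables discretized into \(m\) levels each, \(|\mathcal{S}| = m^n\) grows exponentially in \(n\). RL and approximate dynamic programming sidestep this by replacing the table for \(V^*\) (or the policy) with a parameterized approximator fitted from simulated or operational data.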
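As a concrete, if simplified, illustration of the offline ADP/RL theme, the sketch below implements fitted Q-iteration on a toy first-order process. Everything in it (the surrogate dynamics, the quadratic cost, the random-forest regressor, and all parameter values) is an illustrative assumption, not material from the talk: the learner receives only a fixed batch of transitions, standing in for historical plant data or digital-twin rollouts, and never interacts with the system while learning.

    # Minimal sketch of offline (batch) fitted Q-iteration.
    # All dynamics, costs, and parameters below are invented for illustration.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    gamma = 0.95                          # discount factor
    actions = np.linspace(-1.0, 1.0, 5)   # discretized control moves

    def step(x, u):
        """Toy first-order process with noise: drive level x toward setpoint 0."""
        x_next = 0.9 * x + 0.5 * u + 0.05 * rng.standard_normal()
        cost = x_next**2 + 0.1 * u**2     # quadratic stage cost
        return x_next, cost

    # Fixed batch of transitions (stands in for historical plant data;
    # no further interaction with the system during learning).
    X, U, C, Xn = [], [], [], []
    x = rng.uniform(-2, 2)
    for _ in range(4000):
        u = rng.choice(actions)           # exploratory batch-collection policy
        x_next, c = step(x, u)
        X.append(x); U.append(u); C.append(c); Xn.append(x_next)
        x = x_next if abs(x_next) < 3 else rng.uniform(-2, 2)
    X, U, C, Xn = map(np.array, (X, U, C, Xn))

    # Fitted Q-iteration: repeatedly regress the Bellman backup.
    q_model = None
    for it in range(20):
        if q_model is None:
            targets = C                   # first pass: Q ~ immediate cost
        else:
            # min over actions of the current Q estimate at the next state
            q_next = np.stack([
                q_model.predict(np.column_stack([Xn, np.full_like(Xn, a)]))
                for a in actions
            ], axis=1)
            targets = C + gamma * q_next.min(axis=1)
        q_model = RandomForestRegressor(n_estimators=30, random_state=0)
        q_model.fit(np.column_stack([X, U]), targets)

    # Greedy policy extracted from the learned Q-function.
    def policy(x):
        q_vals = q_model.predict(
            np.column_stack([np.full(len(actions), x), actions]))
        return actions[np.argmin(q_vals)]

    print("u(x=1.5) =", policy(1.5))      # should push the level back toward 0

The same pattern generalizes: the regression model can be swapped for any function approximator suited to the process, and the greedy extraction in the last step is precisely where the interpretability questions about learned policies and value functions raised in the abstract arise.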