2025 AIChE Annual Meeting
(613d) Reinforcement Learning-Driven Process Intensification Synthesis – Application to Reaction/Separation Systems
In this work, we introduce a reinforcement learning (RL)-driven process intensification synthesis approach centered on the Generalized Modular Framework (GMF), a phenomena-based representation framework built on total Gibbs free energy driving forces. These thermodynamic driving forces comprise two terms: the first approximates the change in moles of a component, while the second estimates the derivative of the total Gibbs free energy, accounting for the separation driving force and the extent of reaction. GMF aggregates phenomenological modules to capture the fundamental mass and heat transfer in process systems while intensifying toward the thermodynamic/kinetic limits.

An RL-driven GMF synthesis approach is then presented to intelligently explore optimal modularization and intensification opportunities. The RL algorithm begins by reading in the maximum allowable number of GMF modules (or initial feasible designs, if available). Inlet-outlet stream matrices are created to translate the phenomenological flowsheet into RL observations. The RL agent uses a Deep Q-Network (DQN) trained with an experience replay memory to improve learning stability. The design structures generated by the RL agent are optimized using the GMF process synthesis model, and the resulting optimization objectives serve as rewards that refine the RL policy.

The integration of RL and GMF can significantly expedite the discovery of innovative designs by combining the RL agent's ability to explore the combinatorial design space with the GMF's phenomena-based representation for generating innovative design solutions. The efficacy of the proposed approach is demonstrated in two case studies: (i) binary separation of benzene and toluene, compared against conventional distillation systems; and (ii) membrane-assisted reaction for modular hydrogen production.
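The abstract does not give implementation details for the agent, so the following is only a minimal illustrative sketch of the described loop: an inlet-outlet stream matrix flattened into an observation vector, a Q-function trained with epsilon-greedy action selection and experience replay. For brevity the "deep" network is replaced by a linear Q-function in pure NumPy; the observation encoding, buffer sizes, and toy reward are all hypothetical, not the authors' actual GMF coupling.

```python
import random
from collections import deque
import numpy as np

# Sketch only: a linear Q-function stands in for the DQN, and the
# inlet-outlet stream matrix encoding below is a hypothetical example.

class ReplayBuffer:
    """Experience replay memory: stores transitions, samples minibatches."""
    def __init__(self, capacity=1000):
        self.buf = deque(maxlen=capacity)

    def push(self, s, a, r, s2, done):
        self.buf.append((s, a, r, s2, done))

    def sample(self, batch_size):
        batch = random.sample(list(self.buf), batch_size)
        s, a, r, s2, d = map(np.array, zip(*batch))
        return s, a, r, s2, d

    def __len__(self):
        return len(self.buf)

class QAgent:
    """Epsilon-greedy agent with a linear Q-function, trained by
    one-step TD updates on replayed minibatches."""
    def __init__(self, obs_dim, n_actions, lr=0.01, gamma=0.95, eps=0.1):
        self.W = np.zeros((obs_dim, n_actions))   # linear Q weights
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_actions = n_actions
        self.memory = ReplayBuffer()

    def q(self, s):
        return s @ self.W

    def act(self, s):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return int(np.argmax(self.q(s)))

    def train_step(self, batch_size=16):
        if len(self.memory) < batch_size:
            return
        s, a, r, s2, d = self.memory.sample(batch_size)
        # One-step TD target; terminal transitions bootstrap to zero.
        target = r + self.gamma * (1 - d) * self.q(s2).max(axis=1)
        pred = self.q(s)[np.arange(batch_size), a]
        td = target - pred
        # Gradient step on 0.5 * td^2, only for the actions taken.
        for i in range(batch_size):
            self.W[:, a[i]] += self.lr * td[i] * s[i]

# Hypothetical observation: a 2x2 inlet-outlet stream matrix, flattened.
stream_matrix = np.array([[1.0, 0.0],
                          [0.0, 1.0]])
obs = stream_matrix.flatten()
```

In the actual approach, the reward for each transition would come from the GMF process synthesis model's optimization objective rather than the fixed toy values used here, and the replayed minibatches decorrelate successive flowsheet modifications, which is the stability benefit the abstract attributes to experience replay.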