Reliable kinetic models are essential in chemical reaction engineering. However, developing such models often requires assumptions—such as reaction orders or mechanisms—that may not hold universally. Traditional first-principles methods can fail when real systems deviate from textbook conditions, while purely data-driven methods (e.g., feedforward neural networks) can fit complex behaviors but frequently lack interpretability. Hence, balancing flexibility in data fitting with scientifically meaningful insight remains a major challenge.
Kolmogorov–Arnold Networks (KANs) [1] have emerged in the scientific machine learning community as promising tools that offer both flexibility and transparency. In contrast to standard multilayer perceptrons (MLPs), which apply fixed activation functions, KANs learn their activation functions during training. This approach enables them to capture highly nonlinear relationships with fewer parameters, while retaining a structure that can be readily inspected and analyzed. In fields where understanding the physical mechanism is critical, such as chemical reaction engineering, this interpretability is a substantial advantage over more opaque "black-box" models.
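The core idea can be illustrated with a minimal sketch: the network output is a sum of learned univariate functions of each input, here parameterized as coefficients over a fixed polynomial basis and fitted by least squares. (Actual KANs use B-spline bases on every edge trained by gradient descent; this toy version only conveys the "learn the activation, not just the weights" principle, and all names and values below are illustrative.)

```python
import numpy as np

# Toy KAN-style model: one learnable univariate function per input,
# summed at the output. Each function is a linear combination of a
# small polynomial basis whose coefficients are fitted by least squares.

rng = np.random.default_rng(0)
x1 = rng.uniform(-1.0, 1.0, 200)
x2 = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * x1) + x2**2           # additive target, KAN-friendly

def basis(x, degree=5):
    """Polynomial basis phi_k(x) = x^k for k = 0..degree."""
    return np.vander(x, degree + 1, increasing=True)

# Design matrix stacks the basis for each input; solving the least-
# squares problem "learns" both univariate functions at once.
Phi = np.hstack([basis(x1), basis(x2)])
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ coef
print("max abs residual:", np.abs(y_hat - y).max())
```

Because the model is a sum of explicit univariate functions, each learned curve can be plotted and compared against candidate physical laws, which is the interpretability property exploited in this work.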
In this study, we investigate the ability of KANs to discover reaction rate expressions from noisy time-series data generated by batch stirred tank reactor simulations. Because reaction rates are not directly observable, and KANs are known to lose effectiveness when trained directly on noisy targets [2], we first employ a Neural Ordinary Differential Equation (Neural ODE) model [3] to fit the reactor's concentration profiles over time. This allows us to infer smoothed concentration gradients even when measurement noise is significant. We then use these gradients, which represent the reaction rates, to train a KAN model that learns the functional relationship between species concentrations, temperature, and the reaction rate.
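The pipeline can be sketched on a toy second-order batch reaction A → products with rate r = k·CA², using values chosen purely for illustration. A polynomial fit stands in for the Neural ODE (both serve only to produce a smooth, differentiable concentration profile from noisy data), and a log–log regression stands in for the KAN regression step:

```python
import numpy as np

# Toy pipeline: smooth noisy concentrations, differentiate to get the
# unobservable rate, then regress rate against concentration.
rng = np.random.default_rng(1)
k_true, CA0 = 0.5, 1.0                         # illustrative values
t = np.linspace(0.0, 4.0, 200)
CA = CA0 / (1.0 + k_true * CA0 * t)            # analytic batch profile
CA_noisy = CA + rng.normal(0.0, 0.01, t.size)  # measurement noise

# Step 1: fit a smooth profile to the noisy data (Neural ODE stand-in).
p = np.polyfit(t, CA_noisy, deg=5)

# Step 2: differentiate the smooth fit to recover the reaction rate.
t_in = np.linspace(0.5, 3.5, 100)              # avoid fit-edge artifacts
CA_s = np.polyval(p, t_in)
rate = -np.polyval(np.polyder(p), t_in)        # r = -dCA/dt

# Step 3: regress log r on log CA; the slope is the apparent order n.
n_hat, log_k_hat = np.polyfit(np.log(CA_s), np.log(rate), deg=1)
print(f"recovered order n ~ {n_hat:.2f}, k ~ {np.exp(log_k_hat):.2f}")
```

In the actual framework, step 3 is performed by a KAN rather than a fixed-form regression, so the rate law's functional form does not have to be assumed in advance.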
We assess our framework using batch reactor simulations of a chemical reaction with known kinetics. The results demonstrate that KANs can uncover the underlying kinetics even in the presence of noise. Moreover, the learned functional dependencies match the known kinetics, reinforcing the viability of this method. These findings show that KANs can align data-driven modeling with established physical laws while bypassing the need for potentially restrictive assumptions. They also suggest that this approach may extend to discovering more complex mechanisms solely from batch experiments, especially where direct measurement of reaction rates is not feasible.
[1] Liu, Ziming, et al. "KAN: Kolmogorov-Arnold Networks." arXiv preprint arXiv:2404.19756 (2024).
[2] Shen, Haoran, et al. "Reduced Effectiveness of Kolmogorov-Arnold Networks on Functions with Noise." ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2025.
[3] Chen, Ricky T. Q., et al. "Neural Ordinary Differential Equations." Advances in Neural Information Processing Systems 31 (2018).