2025 Spring Meeting and 21st Global Congress on Process Safety

(33e) Autonomous PID Tuning for Industrial Process Control: A Surrogate Model and Agent-Based Approach

Authors

Jose Romagnoli, Louisiana State University
As industries transition toward the digitalization and interconnectedness of Industry 4.0, the availability of vast amounts of process data opens new opportunities for optimizing industrial control systems. Traditional Proportional-Integral-Derivative (PID) controllers often require manual tuning to maintain optimal performance in the face of changing process conditions. This paper presents an automated and adaptive method for PID tuning that leverages historical closed-loop data and machine learning to create a data-driven approach that can continuously evolve over time.

At the core of this method is the use of historical process data to train a plant surrogate model, which accurately mimics the behavior of the real system under various operating conditions. This model allows for safe and efficient exploration of control strategies without interfering with live operations. Once the surrogate model is constructed, a reinforcement learning (RL) agent interacts with it to learn the optimal control policy. This agent is trained to respond dynamically to the current state of the plant, which is defined by a comprehensive set of variables, including operating conditions, system disturbances, and other relevant measurements.
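To make the surrogate-modeling step concrete, the sketch below fits a one-state linear surrogate to synthetic "historical" closed-loop samples. Everything here is an illustrative assumption, not the paper's model: the true plant is a stand-in first-order system, and a least-squares fit replaces whatever learned model class the method actually uses.

```python
import random

# Stand-in for the real plant behind the historical data:
# x[k+1] = 0.9*x[k] + 0.1*u[k] (coefficients are illustrative).
random.seed(0)
data = []
x = 0.0
for k in range(200):
    u = random.uniform(-1, 1)        # excitation recorded by the historian
    x_next = 0.9 * x + 0.1 * u
    data.append((x, u, x_next))
    x = x_next

# Least-squares fit of the surrogate x[k+1] ~ a_hat*x[k] + b_hat*u[k]
# via the 2x2 normal equations.
sxx = sum(x * x for x, u, y in data)
sxu = sum(x * u for x, u, y in data)
suu = sum(u * u for x, u, y in data)
sxy = sum(x * y for x, u, y in data)
suy = sum(u * y for x, u, y in data)
det = sxx * suu - sxu * sxu
a_hat = (sxy * suu - suy * sxu) / det
b_hat = (suy * sxx - sxy * sxu) / det

def surrogate_step(x, u):
    """Predict the next state; safe to query offline, unlike the live plant."""
    return a_hat * x + b_hat * u
```

The key property this illustrates is the one the abstract relies on: once fitted from logged data, `surrogate_step` can be queried freely for policy exploration without touching live operations.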

By integrating RL into the tuning process, the system can adapt to a wide range of scenarios without manual intervention. The RL agent learns to adjust the PID controller parameters based on the evolving state of the system, optimizing performance metrics such as stability, response time, and energy efficiency. After the training phase, the agent is deployed online to monitor the real-time state of the plant. If significant deviations or disturbances are detected, the RL agent makes real-time adjustments to the PID controller, ensuring that the process remains optimized under the new conditions.
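The train-on-surrogate loop can be sketched as below. The abstract does not specify the RL algorithm, so a crude random search over PI gains stands in for the policy update; the surrogate dynamics, reward (negative sum of squared tracking errors), and all gains are illustrative assumptions.

```python
import random

def rollout(kp, ki, setpoint=1.0, steps=100):
    """Simulate a PI loop on a stand-in surrogate x[k+1] = 0.9*x + 0.1*u;
    return the negative sum of squared tracking errors as the reward."""
    x, integral, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = setpoint - x
        integral += e
        u = kp * e + ki * integral     # PI control action
        x = 0.9 * x + 0.1 * u          # surrogate step (no live plant needed)
        cost += e * e
    return -cost

# Stand-in for the RL update: random search over (Kp, Ki),
# keeping any candidate that improves the surrogate-evaluated reward.
random.seed(1)
best = (1.0, 0.0)                      # initial, untuned gains
best_r = rollout(*best)
for _ in range(500):
    cand = (best[0] + random.gauss(0, 0.3),
            max(0.0, best[1] + random.gauss(0, 0.1)))
    r = rollout(*cand)
    if r > best_r:
        best, best_r = cand, r
```

A real deployment would replace the random search with a state-conditioned policy (so gains respond to detected disturbances), but the structure is the same: every candidate tuning is scored on the surrogate, never on the running process.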

One of the unique advantages of this approach is its ability to continuously update and refine the surrogate model and RL agent over time. As the plant operates, real-time data is collected and integrated into the historical dataset, allowing the models to adapt to any long-term changes in the process. This continuous learning capability makes the system highly resilient and scalable, ensuring optimal performance even in the face of new and unforeseen operating conditions.
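The continuous-update idea above can be sketched as a rolling refit: stream live samples into the historical dataset and periodically refit the surrogate on a recent window so it tracks process drift. The drift event, window size, and refit schedule below are all illustrative assumptions, not values from the paper.

```python
import random

def fit(data):
    """Least-squares fit of x[k+1] ~ a*x[k] + b*u[k] (2x2 normal equations)."""
    sxx = sum(x * x for x, u, y in data); sxu = sum(x * u for x, u, y in data)
    suu = sum(u * u for x, u, y in data); sxy = sum(x * y for x, u, y in data)
    suy = sum(u * y for x, u, y in data)
    det = sxx * suu - sxu * sxu
    return (sxy * suu - suy * sxu) / det, (suy * sxx - sxy * sxu) / det

random.seed(2)
history = []
a_hat, b_hat = 0.9, 0.1               # surrogate trained on old data
a_true, x = 0.9, 0.0
for k in range(1, 401):
    if k == 200:
        a_true = 0.8                  # long-term process change (e.g. fouling)
    u = random.uniform(-1, 1)
    x_next = a_true * x + 0.1 * u     # live plant measurement
    history.append((x, u, x_next))    # fold real-time data into the dataset
    if k % 50 == 0:
        a_hat, b_hat = fit(history[-50:])   # periodic refit on a recent window
    x = x_next
```

After the simulated drift, the rolling refit pulls the surrogate coefficient from 0.9 to the plant's new value of 0.8; the retrained RL agent would then be tuned against the updated model rather than the stale one.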

By combining data-driven modeling with reinforcement learning, this method provides a robust, adaptive, and automated solution for PID tuning in modern industrial environments. The approach not only reduces the need for manual tuning and oversight but also maximizes the use of available process data, aligning with the principles of Industry 4.0. As industrial systems become increasingly complex and data-rich, such methods hold significant potential for improving process efficiency, reliability, and sustainability.