2025 AIChE Annual Meeting

(206d) Parallel Variation Operator Strategy for Multi-Objective Optimization of Chemical Processes

Authors

Kyle Camarda, University of Kansas
In chemical engineering, multi-objective optimization (MOO) frequently arises when processes are designed with both economic and environmental objectives. Deterministic approaches such as the ε-constraint method are often too computationally expensive for such problems. In this work, we present a Parallel Variation Operator Strategy (PVOS) aimed at improving the performance of commonly applied genetic algorithms while retaining the computational efficiency needed to find the full set of Pareto-optimal solutions to process design problems. This strategy applies multiple crossover operators, specifically Simulated Binary Crossover and Single-Point Crossover, to the same set of selected parents and distributes them across multiple processor cores. The crossover operators run concurrently, reducing the overall computational time per generation. The offspring from each crossover operator are combined with the original parent population, and survivors are selected using non-dominated sorting and crowding distance procedures.
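The sketch below illustrates one possible reading of a single PVOS generation: both crossover operators act on the same parent pool in separate processes, and the merged population is handed to a survival routine. The function names (sbx_crossover, single_point_crossover, pvos_generation) and the evaluate/survival placeholders are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of one PVOS generation: two crossover operators applied
# concurrently to the same parents, offspring merged with the parents, and
# survivors chosen by a supplied NSGA-II-style survival routine.
import numpy as np
from concurrent.futures import ProcessPoolExecutor


def sbx_crossover(parents, eta=15, seed=0):
    """Simulated Binary Crossover on consecutive (real-coded) parent pairs."""
    rng = np.random.default_rng(seed)
    children = parents.copy()
    for i in range(0, len(parents) - 1, 2):
        p1, p2 = parents[i], parents[i + 1]
        u = rng.random(p1.shape)
        beta = np.where(u <= 0.5,
                        (2 * u) ** (1 / (eta + 1)),
                        (1 / (2 * (1 - u))) ** (1 / (eta + 1)))
        children[i] = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
        children[i + 1] = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return children


def single_point_crossover(parents, seed=1):
    """Single-point crossover on consecutive parent pairs."""
    rng = np.random.default_rng(seed)
    children = parents.copy()
    n_var = parents.shape[1]
    for i in range(0, len(parents) - 1, 2):
        cut = rng.integers(1, n_var)  # cut point between decision variables
        children[i, cut:] = parents[i + 1, cut:]
        children[i + 1, cut:] = parents[i, cut:]
    return children


def pvos_generation(parents, evaluate, survival):
    """Apply both operators to the same parents on separate cores, then select.

    `evaluate` and `survival` are placeholders for the objective evaluation and
    the non-dominated sorting / crowding distance survival described above.
    """
    with ProcessPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(sbx_crossover, parents),
                   pool.submit(single_point_crossover, parents)]
        offspring = np.vstack([f.result() for f in futures])
    merged = np.vstack([parents, offspring])  # parents + offspring of both operators
    return survival(merged, evaluate(merged), n_survive=len(parents))
```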

We validate the proposed method on a suite of 21 well-known benchmark problems and compare it against the NSGA-II implementation available in the Pymoo library. Pareto-optimality is assessed using the Hypervolume, Inverted Generational Distance, Generational Distance, and Spacing metrics, with consistent parameter settings and 30 random seeds to account for stochastic variability and ensure a fair comparison. Results indicate that PVOS achieves faster convergence and enhanced diversity, demonstrated by an average reduction of nearly 50% in the number of generations required for convergence compared to the classic NSGA-II. The approach also saves computational time, with 17 of the 21 test problems exhibiting reduced execution times.
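As a minimal sketch of how such a baseline comparison can be set up with Pymoo, the snippet below runs NSGA-II over repeated seeds and scores the result with the Hypervolume and Inverted Generational Distance indicators. The benchmark problem ("zdt1"), population size, generation budget, and reference point are illustrative choices, not the settings used in the study.

```python
# Baseline NSGA-II runs with Pymoo, repeated over 30 seeds, scored with HV and IGD.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.indicators.hv import HV
from pymoo.indicators.igd import IGD
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")        # one example benchmark problem
pf = problem.pareto_front()          # known Pareto front, used by the IGD indicator

hv_values, igd_values = [], []
for seed in range(30):               # 30 seeds to account for stochastic variability
    res = minimize(problem, NSGA2(pop_size=100), ("n_gen", 200), seed=seed)
    hv_values.append(HV(ref_point=np.array([1.1, 1.1]))(res.F))
    igd_values.append(IGD(pf)(res.F))

print(f"HV  mean over 30 seeds: {np.mean(hv_values):.4f}")
print(f"IGD mean over 30 seeds: {np.mean(igd_values):.4f}")
```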

This enhancement is expected to be especially evident in complex search landscapes, where varied crossover strategies can explore the solution space more effectively. Beyond demonstrating computational efficiency and superior search quality, this work contributes a new perspective on operator parallelization in evolutionary algorithms. The findings suggest that parallelizing variation operators can deliver both performance gains and a more thorough exploration of multi-objective search spaces. Future research could extend this parallel strategy to other evolutionary algorithms, apply it to more complex or large-scale problems, and integrate additional crossover or mutation schemes.