2025 AIChE Annual Meeting

(469e) Understanding the Solution of Optimization Problems Via Explainable Artificial Intelligence

Authors

Ilias Mitrai - Presenter, University of Minnesota
Jiyong Lee, The University of Texas at Austin

In this work, we propose a novel framework to understand and explain the solution of optimization problems using explainable artificial intelligence and graph neural networks. Decision-making problems arise in a wide range of applications such as process design, control, production scheduling, and planning. Over the last decades, significant effort has been devoted to reducing the burden of formulating and solving these problems by developing standardized mathematical programming formulations and general-purpose solvers and algorithms for a wide range of problem classes. Despite this progress, explaining and understanding the optimal solution returned by a solver is usually a time-consuming task, since it requires extensive domain knowledge.

Mathematical optimization is usually treated as a black box: for a given problem formulation and parameters, it provides a solution. However, the solution can be counterintuitive, and practitioners might not trust it. This has motivated the development of several approaches for explaining the solution of an optimization problem. Traditionally, shadow prices (or multipliers) and active constraints have been used to analyze the effect of constraints on the optimal solution [1]. Recent approaches either focus on “what-if” analysis using so-called counterfactual explanations in tandem with inverse optimization and large language models [2-6], aim to identify a subset of the variables and constraints that affect the solution [7], or compare the solution with previously obtained ones [8]. Despite this progress, these approaches either apply only to certain classes of optimization problems or require repeatedly solving the problem under consideration, which is not tractable for online applications. In general, the solution of an optimization problem depends on the parameters, the constraints, the objective, and the domains of the variables; thus, an explanation of the solution must consider all these aspects simultaneously.
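As a concrete point of reference for this classical analysis, the following minimal sketch (our illustration, not part of the proposed framework) extracts shadow prices and active constraints from a small linear program using SciPy's HiGHS-based linprog interface; the problem data are arbitrary:

    # Classical sensitivity analysis for an LP: shadow prices (dual values)
    # measure how the optimal objective changes per unit relaxation of a
    # constraint, and zero slack identifies the active constraints.
    import numpy as np
    from scipy.optimize import linprog

    # min c^T x  s.t.  A_ub x <= b_ub,  x >= 0  (illustrative data)
    c = np.array([-3.0, -5.0])
    A_ub = np.array([[1.0, 0.0],
                     [0.0, 2.0],
                     [3.0, 2.0]])
    b_ub = np.array([4.0, 12.0, 18.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")

    print("optimal x:         ", res.x)
    print("shadow prices:     ", res.ineqlin.marginals)   # duals of A_ub x <= b_ub
    print("active constraints:", np.isclose(res.ineqlin.residual, 0.0))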

In this work, we propose a novel approach to explain the solution of an optimization problem by combining explainable artificial intelligence and graph neural networks. Specifically, for a given optimization problem, we seek to identify a subset of the variables, constraints, and parameters that play a crucial role in determining the optimal solution.

  • To achieve this, the optimization problem is first represented as a bipartite graph whose nodes are the variables and constraints of the problem and whose edges capture the presence of a variable in a constraint. Such graph representations have been used extensively for learning to accelerate optimization algorithms using machine learning [9-11]. Under this representation, an optimization problem is described by three matrices: the adjacency matrix A, which captures the structural coupling between the variables and constraints; the node feature matrix V, which captures information about the variables (such as bounds and domain) and constraints (equality, inequality, convexity); and the edge feature matrix E, which captures the numerical value of each parameter in a constraint (see the first code sketch after this list).
  • This representation is used as the input to a graph neural network (GNN) model that predicts the value of the objective function at the optimal solution while accounting for the structural and functional coupling of the variables and constraints (second sketch below). For training the GNN, we use supervised learning on a generated distribution of optimization problems (varying the number of variables, the number of constraints, and the values of the parameters). The features are the three matrices A, V, and E, and the label is the value of the objective function at the optimal solution.
  • Given the trained GNN and the graph representation of an optimization problem, we use explainable artificial intelligence (xAI) algorithms [12,13] to identify a subgraph, i.e., a subset of nodes (variables and constraints), edges (parameters), and features (variable bounds and domains), that plays a crucial role in the GNN’s prediction of the optimal objective value (third sketch below). We use the identified variables, parameters, and constraints as an explanation for the optimal solution. We note that this approach can be applied to generic optimization problems, and once the GNN is trained, explaining the prediction for a new optimization problem is faster (on the order of one second) than solving the problem itself.
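To make the first bullet concrete, here is a minimal NumPy sketch of the bipartite representation for a standard-form linear program min c^T x s.t. A x <= b, lb <= x <= ub; the specific feature choices below are illustrative assumptions, not necessarily the exact encoding used in this work:

    import numpy as np

    def lp_to_bipartite(c, A, b, lb, ub):
        """Bipartite variable-constraint graph of an LP (illustrative encoding)."""
        n_con, n_var = A.shape
        # Adjacency matrix A: variable i is linked to constraint j iff A[j, i] != 0.
        adj = (A != 0).astype(float)                       # (n_con, n_var)
        # Variable-node features V: objective coefficient and bounds.
        var_feats = np.stack([c, lb, ub], axis=1)          # (n_var, 3)
        # Constraint-node features: right-hand side and a type flag (1 = inequality).
        con_feats = np.stack([b, np.ones(n_con)], axis=1)  # (n_con, 2)
        # Edge features E: the coefficient of variable i in constraint j.
        edge_feats = A.copy()                              # (n_con, n_var)
        return adj, var_feats, con_feats, edge_feats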
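The second bullet can be sketched with a small bipartite message-passing model in plain PyTorch; the architecture below is an assumption for illustration, and the actual model may differ:

    import torch
    import torch.nn as nn

    class BipartiteGNN(nn.Module):
        """One round of message passing on the variable-constraint graph,
        followed by pooling to a scalar prediction of the optimal objective."""
        def __init__(self, var_dim=3, con_dim=2, hidden=64):
            super().__init__()
            self.var_in = nn.Linear(var_dim, hidden)
            self.con_in = nn.Linear(con_dim, hidden)
            self.msg_vc = nn.Linear(hidden, hidden)   # variable -> constraint
            self.msg_cv = nn.Linear(hidden, hidden)   # constraint -> variable
            self.readout = nn.Sequential(
                nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, edge_feats, var_feats, con_feats):
            # edge_feats is the (n_con, n_var) coefficient matrix E; its nonzero
            # pattern doubles as the adjacency matrix A.
            hv = torch.relu(self.var_in(var_feats))               # (n_var, hidden)
            hc = torch.relu(self.con_in(con_feats))               # (n_con, hidden)
            hc = hc + torch.relu(self.msg_vc(edge_feats @ hv))    # vars -> cons
            hv = hv + torch.relu(self.msg_cv(edge_feats.T @ hc))  # cons -> vars
            g = torch.cat([hv.mean(0), hc.mean(0)])               # pool both sides
            return self.readout(g).squeeze()

Training then amounts to minimizing, for example, the mean-squared error between the model's prediction and the optimal objective value over the generated distribution of problem instances.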
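For the third bullet, a simplified GNNExplainer-style [12] procedure learns a mask over the edges so that the masked graph reproduces the trained model's prediction while remaining sparse; high mask values flag the parameters (edges) most important to the prediction. This is a schematic rendering, not the exact procedure used in this work:

    import torch

    def explain_edges(model, edge_feats, var_feats, con_feats, steps=200, lam=0.05):
        """Learn an edge mask in [0, 1] balancing fidelity and sparsity."""
        model.eval()
        for p in model.parameters():          # freeze the trained GNN
            p.requires_grad_(False)
        with torch.no_grad():
            target = model(edge_feats, var_feats, con_feats)
        logits = torch.zeros_like(edge_feats, requires_grad=True)
        opt = torch.optim.Adam([logits], lr=0.05)
        for _ in range(steps):
            mask = torch.sigmoid(logits)
            pred = model(edge_feats * mask, var_feats, con_feats)
            loss = (pred - target) ** 2 + lam * mask.mean()  # fidelity + sparsity
            opt.zero_grad(); loss.backward(); opt.step()
        return torch.sigmoid(logits).detach()  # high values = important edges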

We apply the proposed approach to a case study considering the real-time operation of a continuous production system. A dynamic optimization problem is solved to determine the optimal operation of the system such that the demand is satisfied. We train a GNN by varying the demand profile, the initial condition and flowrate, and the time horizon, i.e., the GNN is trained on optimization problems with different numbers of variables, constraints, and parameters. Applying xAI to the graph representation of these problems shows that the most important constraints for predicting the optimal objective value are the ramping constraints, which limit how fast the production rate can change. In addition, xAI reveals that the solution depends on the initial state of the system (as expected from domain knowledge) and on the upper bound (a feature) of the inlet flowrate. Overall, the proposed approach can be applied to generic optimization problems and offers a new way to identify the variables, constraints, and parameters that most strongly affect the optimal solution.
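For concreteness, a ramping constraint of the kind identified by the explainer can be written as follows (the notation here is ours, not taken from the talk):

    |P_{t+1} - P_t| \le \Delta P^{\max},   t = 0, \ldots, T - 1,

where P_t is the production rate in period t and \Delta P^{\max} is the maximum allowed change between consecutive periods.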

References

[1]. Greenberg, H.J., 1993. How to analyze the results of linear programs—Part 2: Price interpretation. Interfaces, 23(5), pp.97-114.

[2]. Kurtz, J., Birbil, Ş.İ. and Hertog, D.D., 2024. Counterfactual explanations for linear optimization. arXiv preprint arXiv:2405.15431.

[3]. Korikov, A. and Beck, J.C., 2021. Counterfactual explanations via inverse constraint programming. In 27th International Conference on Principles and Practice of Constraint Programming (CP 2021) (pp. 35-1). Schloss Dagstuhl–Leibniz-Zentrum für Informatik.

[4]. Korikov, A. and Beck, J.C., 2023. Objective-based counterfactual explanations for linear discrete optimization. In International Conference on Integration of Constraint Programming, Artificial Intelligence, and Operations Research (pp. 18-34). Cham: Springer Nature Switzerland.

[5]. Forel, A., Parmentier, A. and Vidal, T., 2023, July. Explainable data-driven optimization: From context to decision and back again. In International Conference on Machine Learning (pp. 10170-10187). PMLR.

[6]. Chen, H., Constante-Flores, G.E. and Li, C., 2024. Diagnosing infeasible optimization problems using large language models. INFOR: Information Systems and Operational Research, 62(4), pp.573-587.

[7]. Rathi, T., Gupta, R., Pinto, J.M. and Zhang, Q., 2024. Enhancing explainability of stochastic programming solutions via scenario and recourse reduction. Optimization and Engineering, 25(2), pp.795-820.

[8]. Aigner, K.M., Goerigk, M., Hartisch, M., Liers, F. and Miehlich, A., 2024, March. A framework for data-driven explainability in mathematical optimization. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 19, pp. 20912-20920).

[9]. Cappart, Q., Chételat, D., Khalil, E.B., Lodi, A., Morris, C. and Veličković, P., 2023. Combinatorial optimization and reasoning with graph neural networks. Journal of Machine Learning Research, 24(130), pp.1-61.

[10]. Chen, Z., Liu, J., Wang, X., Lu, J. and Yin, W., 2022. On representing linear programs by graph neural networks. arXiv preprint arXiv:2209.12288.

[11]. Mitrai, I. and Daoutidis, P., 2024. Taking the human out of decomposition-based optimization via artificial intelligence, Part I: Learning when to decompose. Computers & Chemical Engineering, 186, p.108688.

[12]. Ying, Z., Bourgeois, D., You, J., Zitnik, M. and Leskovec, J., 2019. GNNexplainer: Generating explanations for graph neural networks. Advances in neural information processing systems, 32.

[13]. Luo, D., Cheng, W., Xu, D., Yu, W., Zong, B., Chen, H. and Zhang, X., 2020. Parameterized explainer for graph neural network. Advances in neural information processing systems, 33, pp.19620-19631.