(468c) A Decision Rule Framework for Explainable Optimization
Several approaches have been proposed in the literature to enhance the interpretability of optimization solutions. Bertsimas and Stellato [3] introduced an approach that trains decision trees on large datasets of solved optimization instances, explicitly linking specific scenarios to near-optimal solutions. Although intuitive, their approach typically incurs significant computational overhead and has been validated primarily on smaller-scale problems. Another strategy, described by Goerigk and Hartisch [7], embeds the decision tree directly into the optimization model, so that the optimization itself determines the tree’s structure. While inherently interpretable, this approach generally relies on greedy heuristics to mitigate the computational expense of constructing the tree, thereby sacrificing solution quality. Both methods are also limited by tree depth, which restricts their ability to capture the full range of potential optimal solutions in larger, more complex problems.
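To make the strategy-learning idea concrete, the following is a minimal Python sketch in the spirit of [3], not the authors' implementation: the toy LP, the active-set labels, and all names are illustrative assumptions. Many instances of a parametric LP are solved offline, each is labeled by its optimal active set, and a shallow decision tree is trained to map parameters to strategies.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
c = np.array([-3.0, -2.0])              # fixed objective: maximize 3*x1 + 2*x2
A = np.array([[1.0, 1.0], [1.0, 0.0]])  # fixed constraint matrix

X_train, y_train = [], []
for _ in range(500):
    b = rng.uniform(1.0, 5.0, size=2)   # sampled right-hand side (the "scenario")
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
    strategy = tuple(np.isclose(A @ res.x, b))  # label: set of tight constraints
    X_train.append(b)
    y_train.append(str(strategy))

tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
# Each root-to-leaf path of the fitted tree is an if-then rule on b that
# selects a (near-)optimal strategy; these rules are the explanation
# offered to the practitioner.
```

Note that the explanation quality is bounded by the tree depth, which is exactly the limitation discussed above.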
Complementing these decision tree-based techniques, recent advances include argumentation frameworks, fuzzy inference systems, and scenario-based methods. Argumentation frameworks leverage abstract argumentation to represent scheduling problems and generate structured natural-language explanations, clarifying precisely why certain schedules are feasible or efficient by associating optimality criteria with argumentation constructs [5]. Fuzzy inference approaches establish a qualitative relationship between problem parameters and optimal solutions through fuzzy clustering and rule learning, providing linguistically meaningful explanations of decision sensitivity to operational parameters [6]. Additionally, scenario clustering and recourse reduction techniques have emerged to enhance the explainability of stochastic programming solutions. These methods group scenarios by the similarity of their recourse decisions and select representative scenarios, thereby simplifying model complexity and highlighting the critical factors that drive decision-making under uncertainty [11]. Despite these advances, such methods are often domain-specific, computationally intensive at scale, and typically limited to local insights or simplified representations.
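As one concrete illustration of the scenario-based strand, the sketch below clusters scenarios by their recourse decisions and keeps one representative per cluster. It is a toy two-stage model with invented numbers in the spirit of [11], not that paper's method.

```python
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
demands = rng.uniform(2.0, 10.0, size=(200, 2))  # sampled demand scenarios

def recourse(d):
    # Toy second stage: cover demand d at minimum cost (y >= d, y >= 0).
    res = linprog(c=[1.0, 3.0], A_ub=-np.eye(2), b_ub=-d,
                  bounds=[(0, None)] * 2)
    return res.x

Y = np.array([recourse(d) for d in demands])     # recourse decision per scenario
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Y)

# One representative scenario per cluster: the scenario whose recourse is
# closest to the cluster centroid; this reduced set drives the explanation.
reps = []
for k in range(4):
    idx = np.where(km.labels_ == k)[0]
    dist = np.linalg.norm(Y[idx] - km.cluster_centers_[k], axis=1)
    reps.append(idx[np.argmin(dist)])
```

The practitioner then reasons about a handful of representative scenarios rather than hundreds, at the cost of a simplified picture of the uncertainty.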
Concurrently, recent chatbot frameworks, including OptiGuide [10], RouteExplainer [8], and OptiChat [4], provide interactive, natural-language counterfactual explanations. When asked a “what-if” query, these chatbots adjust the relevant parameters, re-solve the optimization model, and comment on the resulting solution quality. However, these explanations remain local, i.e., they explain why one solution is better than another in a specific instance but fail to offer a global understanding of the model. To the best of our knowledge, there is still a need for a framework that is domain-agnostic and scalable, delivers high-quality interpretable solutions, and provides a comprehensive, global interpretation of the model’s solution.
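The loop behind these counterfactual explanations is simple to sketch; the toy production LP and parameter names below are our own illustrative assumptions, not taken from any of these tools. The chatbot perturbs the queried parameter, re-solves, and compares objectives.

```python
from scipy.optimize import linprog

def profit(capacity):
    # Toy production LP: maximize 3*x1 + 5*x2 subject to a shared capacity.
    res = linprog(c=[-3.0, -5.0], A_ub=[[1.0, 2.0]], b_ub=[capacity],
                  bounds=[(0, None)] * 2)
    return -res.fun

base, counterfactual = profit(10.0), profit(12.0)
print(f"Raising capacity from 10 to 12 changes profit by {counterfactual - base:+.2f}")
# The answer justifies one counterfactual instance; it says nothing about
# how the solution behaves across the whole parameter space.
```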
To address this gap, we propose a framework that leverages Linear Decision Rules (LDRs) [2] to interpret the solutions of Linear Programming (LP) models. By expressing each decision variable as an affine function of the model parameters, LDRs yield highly interpretable expressions that reveal how variables depend on parameters, thus providing a global interpretation of the model’s solution. We also develop a software package that automatically transforms original LPs into their LDR formulations, capturing both primal and dual representations [9]. This package efficiently solves the transformed models, quantifies the quality of the LDR approximation by computing the gap between the primal and dual objective values, and outputs explicit, interpretable LDR expressions for targeted decision variables. These expressions support both global and local sensitivity analyses.
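To illustrate the core idea (this is not our package’s API; the toy demand model, variable names, and sampled scenarios are assumptions), the sketch below restricts a recourse decision to an affine function of the uncertain parameter, enforces feasibility on sampled scenarios, and prints the resulting rule. The primal-dual gap computation mentioned above is omitted here for brevity.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
xis = rng.uniform(1.0, 3.0, size=100)        # sampled uncertain demands

y0, Ycoef = cp.Variable(), cp.Variable()     # affine rule y(xi) = y0 + Ycoef*xi
cost, cons = 0, []
for xi in xis:
    y = y0 + Ycoef * xi                      # production decision in scenario xi
    cons += [y >= xi, y >= 0]                # satisfy demand in every scenario
    cost += 2.0 * y                          # unit production cost of 2
cp.Problem(cp.Minimize(cost / len(xis)), cons).solve()

print(f"y(xi) = {y0.value:.2f} + {Ycoef.value:.2f} * xi")
# For this toy model the optimum is y(xi) = 0.00 + 1.00 * xi: the rule
# itself is the global explanation, i.e., produce roughly one unit per
# unit of demand, valid across the whole sampled parameter range.
```

Unlike a counterfactual answer for a single instance, the fitted coefficients describe how the decision responds to the parameter everywhere, which is what enables both global and local sensitivity analyses.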
To enhance usability and bridge the gap between optimization models and practitioners, we integrate this package into OptiChat, our group’s natural-language dialogue system. OptiChat guides users through model interpretation, infeasibility diagnosis, sensitivity analysis, information retrieval, modification evaluation, and counterfactual explanation generation. By combining LDR-based interpretability with interactive dialogue, our approach empowers practitioners to gain deeper insights, build trust, and confidently apply optimization models in strategic decision-making.
References:
[1] Plamen P Angelov, Eduardo A Soares, Richard Jiang, Nicholas I Arnold, and Peter M Atkinson. Explainable artificial intelligence: an analytical review. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 11(5):e1424, 2021.
[2] Aharon Ben-Tal and Arkadi Nemirovski. Robust solutions of uncertain linear programs. Operations Research Letters, 25(1):1–13, 1999.
[3] Dimitris Bertsimas and Bartolomeo Stellato. The voice of optimization. Machine Learning, 110(2):249–277, 2021.
[4] Hao Chen, Gonzalo Esteban Constante-Flores, Krishna Sri Ipsit Mantri, Sai Madhukiran Kompalli, Akshdeep Singh Ahluwalia, and Can Li. Optichat: Bridging optimization models and practitioners with large language models. arXiv preprint arXiv:2501.08406, 2025.
[5] Kristijonas Čyras, Dimitrios Letsios, Ruth Misener, and Francesca Toni. Argumentation for explainable scheduling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2752–2759, 2019.
[6] Tewodros L Deneke, Ricardo H Dunia, and Michael Baldea. Explainable optimal solutions using fuzzy inference. In 2024 American Control Conference (ACC), pages 51–55. IEEE, 2024.
[7] Marc Goerigk and Michael Hartisch. A framework for inherently interpretable optimization models. European Journal of Operational Research, 310(3):1312–1324, 2023.
[8] Daisuke Kikuta, Hiroki Ikeuchi, Kengo Tajiri, and Yuusuke Nakano. Routeexplainer: An explanation framework for vehicle routing problem. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 30–42. Springer, 2024.
[9] Daniel Kuhn, Wolfram Wiesemann, and Angelos Georghiou. Primal and dual linear decision rules in stochastic and robust optimization. Mathematical Programming, 130:177–209, 2011.
[10] Beibin Li, Konstantina Mellou, Bo Zhang, Jeevan Pathuri, and Ishai Menache. Large language models for supply chain optimization. arXiv preprint arXiv:2307.03875, 2023.
[11] Tushar Rathi, Rishabh Gupta, Jose M Pinto, and Qi Zhang. Enhancing explainability of stochastic programming solutions via scenario and recourse reduction. Optimization and Engineering, 25(2):795–820, 2024.