(468f) Derivative-Free Optimization Using Inexact ADMM
In view of the foregoing limitations, we propose an efficient and theoretically convergent DFO algorithm for a class of high-dimensional problems with separable objective functions. To solve these problems efficiently, we employ distributed optimization [5]: the monolithic problem is decomposed into manageable subproblems that are solved in parallel by dedicated solvers under a coordination mechanism. The resulting formulation is the minimization of an additively separable non-convex function of continuous variables subject to linear equality constraints, in a derivative-free setting.
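In symbols, the targeted problem class can be sketched as follows (the notation is ours; the abstract does not fix symbols), with each f_i non-convex, possibly non-smooth, and accessible only through function evaluations:

```latex
\min_{x_1,\ldots,x_N}\ \sum_{i=1}^{N} f_i(x_i)
\qquad \text{subject to} \qquad \sum_{i=1}^{N} A_i x_i = b .
```

Consensus and sharing formulations [5] arise as special cases of this coupling through particular choices of the matrices A_i.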
Our proposed method uses the alternating direction method of multipliers (ADMM) [5] as the foundational framework for distributed optimization. Departing from the traditional single-loop ADMM, we employ a two-level structure adopted from [6,7]: the inner level sequentially performs the ADMM updates, while the outer level drives introduced slack variables to zero using the method of multipliers. This two-level structure ensures asymptotic convergence to an approximate stationary point of the problem without restrictions on the structure of the linear constraint matrices, making it suitable for consensus optimization and complicated sharing problems among subsystems. In addition, each subproblem is solved inexactly by a convergence-guaranteed derivative-free trust-region solver [8] suitable for both smooth and non-smooth problems. Each subproblem is thereby solved to within a suboptimality tolerance that decreases across iterations, which significantly reduces the computational cost of exact updates. We establish the theoretical convergence of the proposed approach to an approximate solution and demonstrate its effectiveness on numerical examples as well as practical engineering problems.
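The overall loop can be sketched as below. This is a minimal illustration under our own assumptions, not the authors' implementation: a toy two-block problem, Nelder-Mead as a stand-in for the trust-region DFO solver of [8], and simple tolerance and penalty schedules chosen only to make the sketch run.

```python
import numpy as np
from scipy.optimize import minimize

# Toy separable problem (illustrative only, not from the abstract):
#   min  f1(x) + f2(y)   s.t.   A x + B y = b
def f1(x):
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.abs(x))  # non-smooth term

def f2(y):
    return np.sum(np.cos(y)) + np.sum(y ** 2)

A, B, b = np.eye(2), -np.eye(2), np.zeros(2)

def dfo_solve(obj, x0, tol):
    """Inexact derivative-free subproblem solve; Nelder-Mead stands in
    for the trust-region DFO solver of [8]."""
    return minimize(obj, x0, method="Nelder-Mead",
                    options={"xatol": tol, "fatol": tol}).x

x, y = np.zeros(2), np.zeros(2)
z = np.zeros(2)    # slack variable, driven to zero by the outer level
lam = np.zeros(2)  # outer multiplier for the constraint z = 0
beta = 1.0         # outer penalty parameter

for outer in range(20):
    w = np.zeros(2)                # inner ADMM multiplier
    rho = 2.0 * beta               # inner penalty parameter
    tol = 1.0 / (outer + 1) ** 2   # decreasing inexactness tolerance

    for inner in range(50):        # inner level: ADMM on A x + B y + z = b
        aug = lambda r: w @ r + 0.5 * rho * (r @ r)
        x = dfo_solve(lambda v: f1(v) + aug(A @ v + B @ y + z - b), x, tol)
        y = dfo_solve(lambda v: f2(v) + aug(A @ x + B @ v + z - b), y, tol)
        # z-minimization is an unconstrained quadratic -> closed form
        z = -(lam + w + rho * (A @ x + B @ y - b)) / (beta + rho)
        r = A @ x + B @ y + z - b
        w = w + rho * r            # inner dual update
        if np.linalg.norm(r) < tol:
            break

    lam = lam + beta * z           # outer method-of-multipliers step
    beta *= 2.0                    # tighten the penalty on the slack
    if np.linalg.norm(z) < 1e-8:
        break

print("x:", x, " y:", y, " ||Ax+By-b||:", np.linalg.norm(A @ x + B @ y - b))
```

In the authors' method, the tolerance and penalty schedules follow the two-level analyses of [6,7], and the derivative-free subproblem solves use the trust-region framework of [8]; the schedules and solver above are placeholders.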
References
[1] Conn, A. R., Scheinberg, K., & Vicente, L. N. (2009). Introduction to derivative-free optimization. Society for Industrial and Applied Mathematics.
[2] Larson, J., Menickelly, M., & Wild, S. M. (2019). Derivative-free optimization methods. Acta Numerica, 28, 287-404.
[3] Boukouvala, F., & Ierapetritou, M. G. (2014). Derivative-free optimization for expensive constrained problems using a novel expected improvement objective function. AIChE Journal, 60(7), 2462-2474.
[4] Rios, L. M., & Sahinidis, N. V. (2013). Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3), 1247-1293.
[5] Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends® in Machine Learning, 3(1), 1-122.
[6] Sun, K., & Sun, X. A. (2023). A two-level distributed algorithm for nonconvex constrained optimization. Computational Optimization and Applications, 84(2), 609-649.
[7] Tang, W., & Daoutidis, P. (2022). Fast and stable nonconvex constrained distributed optimization: the ELLADA algorithm. Optimization and Engineering, 23(1), 259-301.
[8] Garmanjani, R., Júdice, D., & Vicente, L. N. (2016). Trust-region methods without using derivatives: worst case complexity and the nonsmooth case. SIAM Journal on Optimization, 26(4), 1987-2011.