2023 AIChE Annual Meeting

(206c) Evaluating Lower Bounds in Global Optimization Via Cheaper Sampling of Convex Relaxations

Authors

Khan, K. - Presenter, McMaster University
Bonhoff, F., RWTH Aachen
Several chemical engineering applications demand global optimization of nonconvex process models, including safety verification and determination of thermodynamic equilibria. Typical deterministic methods for global optimization proceed by generating bounds on the unknown objective value at a global minimum and progressively improving these bounds. In this setting, lower bounds are typically obtained by minimizing convex relaxations of the original objective function, and this minimization is often carried out using a gradient-based nonlinear programming solver. However, if these convex relaxations involve parametric ODEs, embedded optimization problems, or other complications, then furnishing subgradients for convex minimization may be difficult. Two years ago, at the 2021 AIChE Annual Meeting, we presented the first approach [1] for computing guaranteed lower bounds tractably by derivative-free sampling of the convex relaxations, under minimal assumptions and without any gradient/subgradient evaluations at all. For a convex relaxation of n variables, that approach takes (2n+1) samples and assembles them into a new dedicated finite difference formula for bounding. Roughly, these samples are taken at the center of the considered subdomain and at perturbations of this center in each positive and each negative coordinate direction.
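
For intuition, the sketch below lays out this (2n+1)-point sampling pattern, along with a one-variable affine underestimator that can be assembled from three such samples using convexity alone. The function names, the choice of perturbing the midpoint all the way to the box faces, and the one-dimensional formula are illustrative assumptions only; the general n-dimensional formulas and their step-size analysis are developed in [1].

```python
import numpy as np

def sample_points(x_lo, x_hi):
    """Return the (2n+1) sample points described above: the midpoint of the
    box [x_lo, x_hi], plus a perturbation of that midpoint in each positive
    and each negative coordinate direction (here, out to the box faces; the
    step-size choices analyzed in [1] may differ)."""
    x_lo = np.asarray(x_lo, dtype=float)
    x_hi = np.asarray(x_hi, dtype=float)
    mid = 0.5 * (x_lo + x_hi)
    points = [mid]
    for i in range(mid.size):
        step = 0.5 * (x_hi[i] - x_lo[i])
        for sign in (+1.0, -1.0):
            p = mid.copy()
            p[i] += sign * step
            points.append(p)
    return points                       # 2n + 1 points in total

def affine_underestimator_1d(f, a, b):
    """One-variable illustration (n = 1): from the three samples f(a), f(m),
    f(b) of a convex function f on [a, b], with m the midpoint, return
    (slope, value) describing the affine function x -> value + slope*(x - m),
    which underestimates f on all of [a, b] by convexity.
    (Illustrative only; not the general formula of [1].)"""
    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    slope = (fb - fa) / (b - a)         # centered finite-difference slope
    offset = 0.5 * (fa + fb) - fm       # nonnegative, since f is convex
    return slope, fm - offset
```

For example, for f(x) = x**2 on [0, 2], affine_underestimator_1d returns slope 2 and value 0, i.e., the underestimator 2(x - 1), which lies below x**2 everywhere on [0, 2].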

This presentation shows that, surprisingly, we can get by with far fewer samples. We show how to obtain guaranteed affine underestimators and/or constant lower bounds for a convex function of n variables on a box domain using only (n+2) derivative-free samples of that function, assembled into new dedicated finite difference formulas for bounding. This new result roughly halves the computational effort of bound construction for large n, and is minimal in the sense that no such guarantee can be achieved using only (n+1) samples; in one dimension, for example, two samples of a convex function cannot rule out an arbitrarily deep kink between them. Like our previous underestimators [1], the new sampling-based underestimators inherit the second-order pointwise convergence [2] of the sampled convex relaxations, and thereby converge rapidly to the original convex relaxations as the subdomain of interest shrinks. Our formulas are readily adapted to incorporate known bounds on any noise or error in each evaluation of the sampled convex function. Implications are discussed, and we present numerical results in which the new underestimators are deployed within branch-and-bound frameworks for global optimization.
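
To indicate how such sampled bounds are consumed downstream, the following is a minimal best-bound-first branch-and-bound sketch over box domains. The driver and the names objective and lower_bound are hypothetical rather than our implementation; lower_bound stands in for any routine, such as a sampling-based formula above, that returns a guaranteed lower bound on the objective over a sub-box.

```python
import heapq
import numpy as np

def branch_and_bound(objective, lower_bound, x_lo, x_hi, tol=1e-4, max_nodes=10000):
    """Hypothetical best-bound-first branch-and-bound driver over the box
    [x_lo, x_hi].  objective(x) evaluates the original function (used here
    only to update the incumbent upper bound at box midpoints), and
    lower_bound(lo, hi) must return a guaranteed lower bound on the objective
    over the sub-box [lo, hi], e.g. from derivative-free sampling of a convex
    relaxation.  Returns the incumbent upper bound and the best lower bound
    among the remaining nodes."""
    x_lo = np.asarray(x_lo, dtype=float)
    x_hi = np.asarray(x_hi, dtype=float)
    incumbent = objective(0.5 * (x_lo + x_hi))             # initial upper bound
    heap = [(lower_bound(x_lo, x_hi), 0, x_lo, x_hi)]      # (bound, tiebreak, lo, hi)
    tiebreak = 1
    for _ in range(max_nodes):
        if not heap:
            break
        node_lbd, _, lo, hi = heapq.heappop(heap)
        if incumbent - node_lbd <= tol:                     # optimality gap closed
            return incumbent, min(node_lbd, incumbent)
        i = int(np.argmax(hi - lo))                         # bisect widest coordinate
        cut = 0.5 * (lo[i] + hi[i])
        hi_left, lo_right = hi.copy(), lo.copy()
        hi_left[i], lo_right[i] = cut, cut
        for child_lo, child_hi in ((lo, hi_left), (lo_right, hi)):
            incumbent = min(incumbent, objective(0.5 * (child_lo + child_hi)))
            child_lbd = lower_bound(child_lo, child_hi)
            if child_lbd < incumbent:                       # otherwise fathom the child
                heapq.heappush(heap, (child_lbd, tiebreak, child_lo, child_hi))
                tiebreak += 1
    best_remaining = min((node[0] for node in heap), default=incumbent)
    return incumbent, best_remaining
```

Tighter sampled lower bounds translate directly into earlier fathoming of sub-boxes in such a loop, which is where the reduced per-node sampling cost pays off.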

References

[1] Y. Song, H. Cao, C. Mehta, and K. A. Khan, Bounding convex relaxations of process models from below by tractable black-box sampling, Computers & Chemical Engineering, 153 (2021). doi:10.1016/j.compchemeng.2021.107413

[2] A. Bompadre and A. Mitsos, Convergence rate of McCormick relaxations, Journal of Global Optimization, 52:1–28 (2012). doi:10.1007/s10898-011-9685-2