2023 AIChE Annual Meeting
(206c) Evaluating Lower Bounds in Global Optimization Via Cheaper Sampling of Convex Relaxations
This presentation shows that, surprisingly, we can get by with far fewer samples than our previous approach [1] required. We show how to obtain guaranteed affine underestimators and/or constant lower bounds of a convex function of n variables on a box domain using only (n+2) derivative-free samples of that function, assembled via new dedicated finite-difference formulas for bounding. This new result approximately halves the computational effort of bound construction for large n, and is minimal in the sense that the same outcome cannot be achieved with only (n+1) samples. Like our previous underestimators [1], our new sampling-based underestimators inherit the second-order pointwise convergence [2] of the sampled convex relaxations, and thereby converge rapidly to the original convex relaxations as the subdomain of interest shrinks. Our formulas are readily adapted to incorporate known bounds on any noise or error in each evaluation of the sampled convex function. Implications are discussed, and we present numerical results in which these new underestimators are deployed within branch-and-bound frameworks for global optimization.
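The (n+2)-sample formulas themselves are presented in the talk and are not reproduced here. For orientation only, below is a minimal Python sketch of an earlier-style (2n+1)-sample construction in the spirit of [1], assuming f is convex on the box and sampled exactly; the function names and the test function are illustrative, not from the source. The idea is that convexity forces every subgradient of f at the box midpoint to lie, componentwise, between the sampled one-sided secant slopes, so taking the centred slope and subtracting the accumulated half-widths of those intervals yields a guaranteed affine underestimator.

```python
import numpy as np

def affine_underestimator(f, xL, xU):
    """Guaranteed affine underestimator of a convex function f on the box
    [xL, xU], built from 2n+1 derivative-free samples: the box midpoint m
    and the two face midpoints m +/- h_i * e_i in each coordinate.

    Returns (c0, c, m) with f(x) >= c0 + c @ (x - m) for all x in the box.
    Validity: every subgradient s of f at m satisfies, componentwise,
        (f(m) - f(m - h_i e_i))/h_i <= s_i <= (f(m + h_i e_i) - f(m))/h_i,
    so the centred-difference slope c_i is within half that interval's
    width of s_i, and the accumulated 'slack' repairs the mismatch.
    """
    xL, xU = np.asarray(xL, float), np.asarray(xU, float)
    m = 0.5 * (xL + xU)                       # box midpoint
    h = 0.5 * (xU - xL)                       # coordinate half-widths
    fm = f(m)
    n = m.size
    c = np.empty(n)
    slack = 0.0
    for i in range(n):
        step = np.zeros(n)
        step[i] = h[i]
        fp, fmi = f(m + step), f(m - step)    # face-midpoint samples
        c[i] = (fp - fmi) / (2.0 * h[i])      # centred secant slope
        slack += 0.5 * (fp + fmi - 2.0 * fm)  # nonnegative, by convexity
    return fm - slack, c, m

def constant_lower_bound(c0, c, m, xL, xU):
    """Minimize the affine underestimator c0 + c @ (x - m) over the box."""
    h = 0.5 * (np.asarray(xU, float) - np.asarray(xL, float))
    return c0 - np.abs(c) @ h

# Illustrative convex test function (not from the source):
f = lambda x: x[0] ** 2 + abs(x[1] - 0.3)
c0, c, m = affine_underestimator(f, [-1.0, -1.0], [1.0, 1.0])
print(constant_lower_bound(c0, c, m, [-1.0, -1.0], [1.0, 1.0]))  # valid bound <= 0
```

Two remarks on this sketch: the slack term behaves like a second divided difference, so it shrinks with the square of the box width, consistent with the second-order convergence cited from [2]; and, in the spirit of the abstract's remark on noisy evaluations, a known per-sample error bound can be absorbed by enlarging the slack term accordingly.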
References
[1] Y. Song, H. Cao, C. Mehta, and K. A. Khan, Bounding convex relaxations of process models from below by tractable black-box sampling, Computers & Chemical Engineering, 153:107413 (2021). doi:10.1016/j.compchemeng.2021.107413
[2] A. Bompadre and A. Mitsos, Convergence rate of McCormick relaxations, Journal of Global Optimization, 52:1–28 (2012). doi:10.1007/s10898-011-9685-2