(449b) A New Portfolio of Relaxations for Global Optimization of NLPs and MINLPs
Factorable programming techniques have been used in the past to construct convex relaxations of general global optimization problems [1, 2, 3]. Often, these convex relaxations are further relaxed to polyhedral relaxations, thus facilitating outer approximation through efficient and reliable linear programming (LP) techniques [4, 5, 6]. In addition to LP relaxations, a nonlinear programming (NLP) relaxation can be used for bounding. The use of hybrid LP and NLP relaxations has been found beneficial for global optimization, especially in cases for which a problem becomes convex after branching at a given node of the branch-and-bound search tree [7].
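As a standard textbook illustration of the factorable approach (not an example taken from this paper), a bilinear term \(w = xy\) with \(x \in [x^L, x^U]\) and \(y \in [y^L, y^U]\) can be replaced by its McCormick envelope, which is both convex and polyhedral:

\[
\begin{aligned}
w &\ge x^L y + y^L x - x^L y^L, &\qquad w &\ge x^U y + y^U x - x^U y^U,\\
w &\le x^U y + y^L x - x^U y^L, &\qquad w &\le x^L y + y^U x - x^L y^U.
\end{aligned}
\]

A factorable code decomposes a general nonconvex expression into such elementary terms and relaxes each one, yielding an LP relaxation of the original problem.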
In recent work, we incorporated mixed-integer linear programming (MILP) relaxations into the portfolio of relaxations of the BARON software [8]. MILP relaxations often yield much better lower bounds and feasible integer solutions, thus significantly reducing the number of nodes of the search tree for MINLPs. In this paper, we further extend BARON's portfolio of relaxations to include piecewise approximations of NLPs and MINLPs. Nonconvex functions can be approximated by piecewise linear outer-estimators, a process that leads to MILP relaxations even for purely continuous NLPs [9]. The main challenge in this context is balancing the complexity and tightness of a relaxation: MILP and NLP relaxations are tighter than their LP counterparts, but this increased tightness comes with increased, often prohibitively expensive, relaxation times.

We present techniques for the automatic incorporation of piecewise linear relaxations into the portfolio of relaxations in BARON [4, 5, 6]. Key to our approach is the incorporation of learning mechanisms that balance the accuracy and cost of relaxations. This process is dynamic, resulting in different types of relaxations being used for different problems, with the corresponding decision made at run time. We find that this approach is advantageous for NLPs and MINLPs with difficult continuous nonconvex components. We report extensive computational results with the new portfolio of relaxations in BARON on problems from a collection of publicly available test sets.
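As a rough, hypothetical sketch of the piecewise-linear idea (illustrative only, not BARON's actual implementation), the snippet below generates the breakpoint data that a standard convex-combination ("lambda") MILP formulation of a univariate piecewise-linear estimator would use; the example function, interval, and segment count are made up for illustration.

```python
import math

def pwl_breakpoints(f, lo, hi, n):
    """Evenly spaced breakpoints b_0..b_n on [lo, hi] and the values f(b_i).

    In a lambda-formulation MILP one would then write
        x = sum_i lam_i * b_i,   y = sum_i lam_i * f(b_i),
        sum_i lam_i = 1,  lam_i >= 0,  SOS2 condition on lam,
    and, to obtain a valid relaxation rather than a mere approximation,
    shift the interpolant down/up by a bound on the interpolation error
    on each segment (or relax each segment with its convex/concave envelope).
    """
    bs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return bs, [f(b) for b in bs]

if __name__ == "__main__":
    # Hypothetical example: y = x*sin(x) on [0, 4] with 8 segments.
    breaks, vals = pwl_breakpoints(lambda x: x * math.sin(x), 0.0, 4.0, 8)
    for b, v in zip(breaks, vals):
        print(f"breakpoint {b:.2f} -> value {v:.4f}")
```

Finer breakpoint grids tighten the resulting MILP relaxation but add binary or SOS2 structure, which is precisely the accuracy-versus-cost trade-off the learning mechanisms described above are meant to manage.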
References