CORS / Optimization Days
HEC Montréal, May 29-31, 2023
BOII Blackbox Optimization II
May 31, 2023 01:30 PM – 03:10 PM
Location: Transat (yellow)
Chaired by Christophe Tribes
4 Presentations
-
01:30 PM - 01:55 PM
Scalable Bayesian optimization methods
Bayesian optimization (BO) is a class of global optimization methods that aims to solve expensive blackbox optimization problems efficiently. BO iteratively constructs conditional Gaussian processes and uses them to guide the optimization process.
In this talk, we first present a scalable BO approach that efficiently exploits, when available, additional information on the composite structure of the blackbox optimization problem. Our scalable BO approach shows excellent performance compared to classical BO approaches, and a sensitivity analysis with respect to the dimension and the structure of the blackbox problem confirms its good potential.
The second part of this talk describes another improved BO method, called TREGO (trust-region-like efficient global optimisation). TREGO alternates between regular BO steps and local steps within a trust region. A set of experiments reveals that TREGO consistently outperforms regular BO and is competitive with other state-of-the-art blackbox optimization methods.
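The global/local alternation at the heart of TREGO can be sketched in a few lines. The sketch below is schematic only: uniform random sampling stands in for the GP-based acquisition optimization of the actual method, and the function name `tregolike_minimize` and all parameter values are hypothetical.

```python
import random

def tregolike_minimize(f, lb, ub, budget=200, seed=0):
    # Schematic TREGO-style loop: alternate a global step over the full
    # box with a local step inside a trust region around the incumbent.
    # Random candidates stand in for the GP acquisition optimization.
    rng = random.Random(seed)
    x_best = [(l + u) / 2 for l, u in zip(lb, ub)]
    f_best = f(x_best)
    radius = max(u - l for l, u in zip(lb, ub)) / 4  # initial trust region
    for it in range(budget):
        if it % 2 == 0:  # global step over the whole domain
            cand = [rng.uniform(l, u) for l, u in zip(lb, ub)]
        else:            # local step inside the trust region, clipped to bounds
            cand = [min(max(xi + rng.uniform(-radius, radius), l), u)
                    for xi, l, u in zip(x_best, lb, ub)]
        f_cand = f(cand)
        if f_cand < f_best:
            x_best, f_best = cand, f_cand
            if it % 2 == 1:
                radius *= 2.0  # expand trust region on local success
        elif it % 2 == 1:
            radius *= 0.5      # shrink it on local failure
    return x_best, f_best
```

The classic trust-region update (expand on success, shrink on failure) is what lets the local phase refine the incumbent once the global phase has found a promising basin.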
-
01:55 PM - 02:20 PM
Sequential stochastic blackbox optimization
This work considers stochastic optimization problems in which the objective function values can only be computed by a blackbox corrupted by random noise following an unknown distribution. The method developed is based on sequential stochastic optimization (SSO): the original problem is decomposed into a sequence of subproblems. Each subproblem is solved using a zeroth-order version of a sign stochastic gradient descent with momentum algorithm (ZO-Signum) with an increasingly fine precision. This decomposition allows a good exploration of the space while maintaining the efficiency of the algorithm once it gets close to the solution. Under a Lipschitz continuity assumption on the blackbox, a convergence rate in expectation is derived for the ZO-Signum algorithm. If, moreover, the blackbox is locally convex around its minima, an almost sure convergence rate may be given for the SSO algorithm. Finally, numerical experiments are conducted to compare the SSO algorithm with other state-of-the-art algorithms and to demonstrate its competitiveness.
-
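The core mechanism described above can be illustrated with a minimal zeroth-order sign-SGD sketch. This is not the authors' implementation: the function name `zo_signum`, the decaying step schedule (a crude stand-in for the increasingly fine precision of the SSO sequence), and all parameter values are illustrative assumptions.

```python
import math
import random

def zo_signum(f, x0, iters=500, lr=0.5, mu=1e-3, beta=0.9, seed=0):
    # Schematic zeroth-order sign-SGD with momentum (in the spirit of
    # ZO-Signum): estimate a directional derivative from two blackbox
    # values along a random unit direction, accumulate momentum, then
    # step by the SIGN of each momentum coordinate with a decaying step.
    rng = random.Random(seed)
    x = list(x0)
    m = [0.0] * len(x)
    for t in range(iters):
        u = [rng.gauss(0.0, 1.0) for _ in x]
        norm = math.sqrt(sum(ui * ui for ui in u)) or 1.0
        u = [ui / norm for ui in u]
        # two-point finite-difference estimate of the directional derivative
        g = (f([xi + mu * ui for xi, ui in zip(x, u)])
             - f([xi - mu * ui for xi, ui in zip(x, u)])) / (2.0 * mu)
        m = [beta * mi + (1.0 - beta) * g * ui for mi, ui in zip(m, u)]
        step = lr / math.sqrt(t + 1.0)  # increasingly fine steps over time
        x = [xi - step * ((mi > 0) - (mi < 0)) for xi, mi in zip(x, m)]
    return x
```

Using only the sign of the momentum makes each step insensitive to the noise magnitude in the gradient estimate, which is what makes this family of methods attractive for noisy blackboxes.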
02:20 PM - 02:45 PM
Tabu Search Hyperparameters Tuning with Blackbox Optimization
The performance and behaviour of any metaheuristic algorithm depend largely on its hyperparameters. These parameters need to be chosen carefully to achieve the best results, and their impact is even greater on large, real-world-sized problems. The best values for these hyperparameters cannot be identified by hand through trial and error. Research shows that hyperparameter tuning is a nontrivial task and that efficient methods are required to obtain the best possible results. We present how blackbox optimization can help choose tabu search parameters efficiently in a physician scheduling problem. We tune these parameters with the Mesh Adaptive Direct Search (MADS) algorithm, which uses no derivative information. Experiments have been conducted and their results will be presented.
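The tuning setup above treats solver quality as a blackbox function of the hyperparameters and minimizes it with a derivative-free direct search. As a rough illustration, the sketch below uses a simplified coordinate pattern search as a stand-in for full MADS (no random poll directions or mesh machinery); the function `pattern_search` and its parameters are hypothetical, and in practice `obj` would run the tabu search and return a score.

```python
def pattern_search(obj, x0, step=1.0, min_step=1e-3, budget=200):
    # Simplified coordinate pattern search (a stand-in for MADS):
    # poll +/- each coordinate at the current step size, accept any
    # improvement, and halve the step after an unsuccessful poll.
    x = list(x0)
    fx = obj(x)
    evals = 1
    while step > min_step and evals < budget:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = obj(cand)
                evals += 1
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine after a failed poll
    return x, fx
```

Since each `obj` evaluation means one full tabu search run, the evaluation budget, not the search overhead, dominates the tuning cost.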
-
02:45 PM - 03:10 PM
Improving NOMAD with quadratic programming solver
Our presentation will discuss improvements in the resolution of constrained blackbox optimization (BBO) problems using the mesh adaptive direct search (Mads) algorithm implemented in the Nomad software package. Mads alternates between Search and Poll steps to generate trial points. A quadratic model search uses a subset of all previously obtained blackbox evaluations to build function models. This search requires solving an optimization sub-problem, which is currently done with the Mads algorithm itself, but on some specific problems the results are not satisfactory. In particular, the COCO platform BBOB-constrained Sphere problem in 10 dimensions with one linear inequality constraint has a single known optimum, yet Mads converges very slowly to the sub-problem optimum.
This work focuses on replacing Mads with a Quadratic Programming (QP) solver for the sub-problem. The presentation will describe what happens when solving the Sphere problem, then present one candidate QP solver to replace Mads in the search step and its implementation in Nomad. Preliminary results showing the benefits of using a QP solver will also be given.
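The sub-problem in question has the standard QP shape: minimize a quadratic model over simple constraints. As an illustration only (not the candidate solver used in Nomad), a box-constrained QP can be handled with projected gradient descent; the function `box_qp` and its parameters are hypothetical, and the matrix is assumed symmetric positive definite.

```python
def box_qp(Q, c, lb, ub, iters=2000, lr=None):
    # Minimize 0.5 * x^T Q x + c^T x over a box via projected gradient
    # descent -- an illustrative stand-in for a dedicated QP solver on
    # the quadratic model sub-problem (Q assumed symmetric positive
    # definite, so the problem is convex).
    n = len(c)
    if lr is None:
        # crude safe step size from a row-sum bound on Q's spectral norm
        lr = 1.0 / max(sum(abs(Q[i][j]) for j in range(n)) for i in range(n))
    x = [min(max(0.0, lb[i]), ub[i]) for i in range(n)]  # feasible start
    for _ in range(iters):
        g = [sum(Q[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        # gradient step, then project back onto the box
        x = [min(max(x[i] - lr * g[i], lb[i]), ub[i]) for i in range(n)]
    return x
```

A dedicated QP solver exploits this structure directly, which is why it can reach the sub-problem optimum far faster than a general-purpose direct search such as Mads.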