Journées de l'optimisation 2017

HEC Montréal, May 8-10, 2017

1st Canadian Healthcare Optimization Workshop (CHOW)

HEC Montréal, May 10-11, 2017


HEC Montréal, May 8-11, 2017


TB2 Derivative-Free Optimization

May 9, 2017, 10:30 – 12:10

Room: Gérard-Parizeau

Chaired by Nadir Amaioua

4 presentations

  • 10:30 - 10:55

    Parameter tuning: Runge-Kutta case study

    • Charles Audet, Presenter, Polytechnique Montréal

    The Runge-Kutta class of iterative methods is designed to approximate solutions of a system of ordinary differential equations (ODEs). The second-order class of Runge-Kutta methods is determined by a system of 3 nonlinear equations and 4 unknowns, and includes the modified-Euler and mid-point methods. The fourth-order class is determined by a system of 8 nonlinear equations and 10 unknowns. This work formulates the question of identifying good values of these 8 parameters for a given family of ODEs as a blackbox optimization problem. The objective is to determine the parameter values that minimize the overall error produced by a Runge-Kutta method on a training set of ODEs. Numerical experiments are conducted using the NOMAD direct-search optimization solver.
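To illustrate the blackbox formulation, here is a minimal sketch for the simpler second-order family, whose one free parameter a recovers the mid-point (a = 1/2) and modified-Euler (a = 1) methods. The test ODE, the error measure, and the coarse grid search standing in for NOMAD are all assumptions for illustration, not the authors' actual setup.

```python
import math

def rk2_step(f, t, y, h, a):
    # Generic second-order Runge-Kutta step parameterized by a:
    # a = 1/2 gives the mid-point method, a = 1 the modified-Euler method.
    k1 = f(t, y)
    k2 = f(t + a * h, y + a * h * k1)
    return y + h * ((1 - 1 / (2 * a)) * k1 + (1 / (2 * a)) * k2)

def objective(a, n_steps=50):
    # Blackbox objective: worst-case error of the RK2 method with
    # parameter a on the nonlinear test ODE y' = -y^2, y(0) = 1,
    # whose exact solution is 1 / (1 + t).
    if a <= 0:
        return float("inf")
    t, y, h = 0.0, 1.0, 1.0 / n_steps
    err = 0.0
    for _ in range(n_steps):
        y = rk2_step(lambda t_, y_: -y_ * y_, t, y, h, a)
        t += h
        err = max(err, abs(y - 1.0 / (1.0 + t)))
    return err

# Stand-in for the NOMAD solver: a coarse grid search over a.
best_a = min((a / 100 for a in range(10, 201)), key=objective)
```

In the actual work this objective would be evaluated over a training set of ODEs and minimized by NOMAD rather than by enumeration.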

  • 10:55 - 11:20

    Order-based error for managing ensembles of surrogates in derivative-free optimization

    • Sebastien Le Digabel, Presenter, Polytechnique Montréal
    • Bastien Talgorn, Université McGill
    • Charles Audet, Polytechnique Montréal
    • Michael Kokkolaras, Université McGill

    We investigate surrogate-assisted strategies for derivative-free optimization using the mesh adaptive direct search (MADS) blackbox optimization algorithm. In particular, we build an ensemble of surrogate models to be used within the search step of MADS, and examine different methods for selecting the best model for the problem at hand. To do so, we introduce an order-based error tailored to surrogate-based search. We report computational experiments for analytical benchmark problems and engineering design applications. Results demonstrate that different metrics may result in different model choices and that the use of order-based metrics improves performance.
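A sketch of what an order-based error can look like: since direct-search methods such as MADS only compare candidate points, a surrogate can be judged by how often it misorders pairs of cached blackbox values. The pairwise metric and toy data below are illustrative assumptions, not the exact metric of the paper.

```python
from itertools import combinations

def order_error(f_true, f_model):
    # Order-based error: fraction of point pairs whose relative ordering
    # the surrogate gets wrong. Only the ranking matters, not the values,
    # which suits comparison-driven methods like MADS.
    pairs = list(combinations(range(len(f_true)), 2))
    wrong = sum(
        1 for i, j in pairs
        if (f_true[i] - f_true[j]) * (f_model[i] - f_model[j]) < 0
    )
    return wrong / len(pairs)

# Two hypothetical ensemble members evaluated at the same cached points:
f_true = [3.0, 1.0, 4.0, 1.5, 5.0]
good   = [2.9, 1.2, 4.1, 1.4, 5.2]   # preserves the true ordering
bad    = [1.0, 3.0, 2.0, 4.0, 0.5]   # scrambles it
# Select the ensemble member with the smallest order-based error.
best = min([good, bad], key=lambda m: order_error(f_true, m))
```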

  • 11:20 - 11:45

    Handling infeasibility in blackbox optimization using supervised classification

    • Stéphane Jacquet, Presenter

    Blackbox optimization problems, where the objective function and the constraints have unknown analytic expressions, lead to multiple difficulties such as no access to gradients and long CPU times. Moreover, since the functions can sometimes be given by simulations or experiments, some of the computations can crash and give unreliable results. The MADS algorithm deals with constrained blackbox optimization problems. Since its introduction in 2006, it has seen several improvements in the management of constraints. However, binary constraints are currently managed the same way as the other constraints. Considering the lack of information given by binary constraints, they would benefit from a specific treatment.
    This presentation proposes a way to manage binary constraints using tools from supervised classification. Our work includes the case with a single constraint, which will be binary, since it offers a way to manage the case when simulations or experiments crash.
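One minimal way supervised classification can exploit a binary constraint is to learn, from past evaluations, whether the simulation is likely to crash at a candidate point, and skip the expensive blackbox call when a crash is predicted. The 1-nearest-neighbour rule and the hidden crash rule below are assumptions chosen for illustration, not the method of the talk.

```python
import math

def nearest_label(x, history):
    # 1-nearest-neighbour classifier: predict the binary constraint
    # ("simulation crashes" = 0, "simulation runs" = 1) from the
    # outcomes of past evaluations.
    nearest = min(history, key=lambda p: math.dist(x, p[0]))
    return nearest[1]

# Cached evaluations as (point, binary outcome) pairs. Here the hidden
# rule is that the simulation crashes whenever x[0] + x[1] > 1, which
# the optimizer does not know.
history = [((0.1, 0.2), 1), ((0.3, 0.3), 1),
           ((0.9, 0.8), 0), ((0.7, 0.6), 0)]

def should_evaluate(x):
    # Skip the expensive blackbox when the classifier predicts a crash.
    return nearest_label(x, history) == 1
```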

  • 11:45 - 12:10

    A new variable selection strategy for the parallel space decomposition in derivative-free optimization

    • Nadir Amaioua, Presenter, Polytechnique Montréal
    • Charles Audet, Polytechnique Montréal
    • Sebastien Le Digabel, Polytechnique Montréal

    The current parallel space decomposition of the Mesh Adaptive Direct Search algorithm (PSD-MADS) is an asynchronous parallel method that uses a simple generic strategy to decompose a problem into subproblems of smaller dimension. The present work explores new strategies for selecting the subset of variables defining the subproblems to be explored in parallel. These strategies are based on ranking the variables using statistical tools to determine the most influential ones. The statistical approach improves the decomposition of the problem into smaller, more relevant subproblems. This work aims to improve the use of available processors.
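As a toy sketch of statistical variable ranking for space decomposition: score each variable by the magnitude of its correlation with the objective over the cache of past evaluations, and let the top-ranked variables define the next low-dimensional subproblem. Correlation-based scoring and the toy data are assumptions for illustration; the talk's actual statistical tools may differ.

```python
def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def rank_variables(points, values, k):
    # Rank variables by |correlation| between each coordinate and the
    # objective; the k most influential variables define the subproblem.
    n_vars = len(points[0])
    scores = [abs(pearson([p[i] for p in points], values))
              for i in range(n_vars)]
    order = sorted(range(n_vars), key=lambda i: -scores[i])
    return order[:k]

# Toy cache: the objective depends strongly on x0, weakly on x2, and
# not at all on x1, so {x0, x2} should define the subproblem.
points = [(0.0, 0.9, 0.1), (0.2, 0.1, 0.3), (0.5, 0.7, 0.2),
          (0.8, 0.3, 0.6), (1.0, 0.5, 0.4)]
values = [10 * p[0] + p[2] for p in points]
subproblem_vars = rank_variables(points, values, k=2)
```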