JOPT2025

HEC Montreal, 12 — 14 May 2025


Derivative-free optimization II

May 14, 2025 03:50 PM – 05:30 PM

Location: Accra (Yellow)

Chaired by Théo Denorme

3 Presentations

  • 03:50 PM - 04:15 PM

    Porifera: Cellular solid- and finite element-based blackboxes

    • Paul Patience, presenter, Polytechnique Montréal
    • Bruno Blais, Polytechnique Montréal
    • Charles Audet, GERAD - Polytechnique Montréal

    The field of blackbox optimization is rife with algorithms, but
    open-source blackboxes on which to test them are scarce.
    Here we present Porifera, a family of blackboxes consisting of finite
    element simulations on geometries involving cellular solids.
    Porifera includes the Porifera-Compression problem, which consists of
    a 3D domain with rectangular bounds filled with a TPMS-like (triply
    periodic minimal surface) cellular solid described by the problem's
    input variables.
    A vertical compression is applied to the top surface of the domain and
    information about the resulting shape is returned, including the solid
    fraction of the cellular solid.
    Various simulation parameters, such as the mesh resolution and the
    possibly nonconvex hull of the domain, can be customized in a
    configuration file, so Porifera-Compression gives rise to an infinite
    family of blackboxes and corresponding surrogates.
    Porifera is difficult to solve because the type of TPMS is described
    by a categorical variable; the meshing step can fail, especially for
    complex geometries; and the simulation can fail to converge.
    We solve an instance of Porifera-Compression with various blackbox
    optimization algorithms, including NOMAD, to illustrate its use, and
    we present performance, data, and accuracy profiles to demonstrate
    its difficulty.
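
    As an illustration of the blackbox setting described above, here is a
    minimal Python sketch of the kind of wrapper an optimizer such as
    NOMAD would call; the executable name, input format, and output
    parsing below are assumptions for illustration, not Porifera's actual
    interface.

        import subprocess

        def porifera_compression(x):
            """Hypothetical wrapper around one Porifera-Compression run.

            Writes the input variables to a file, launches the (assumed)
            simulation executable, and parses the objective from stdout.
            Meshing or convergence failures surface as a nonzero return
            code and are reported as an infinite objective value.
            """
            with open("input.txt", "w") as f:
                f.write(" ".join(str(v) for v in x))
            result = subprocess.run(["./porifera_compression", "input.txt"],
                                    capture_output=True, text=True)
            if result.returncode != 0:  # meshing or solver failure
                return float("inf")
            return float(result.stdout.split()[0])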

  • 04:15 PM - 04:40 PM

    Optimization over a Trained Neural Network using Attack Techniques

    • Pierre-Yves Bouchet, presenter, Polytechnique Montréal
    • Thibaut Vidal, CIRRELT & SCALE-AI Chair in Data-Driven Supply Chains, Polytechnique Montréal

    This work solves optimization problems of the form "maximize
    f(Phi(x))", where f is a smooth function and Phi is an already
    trained Neural Network (NN), using tools from NN attacks. The first
    part of the talk shows that the mathematics behind an NN attack
    allows the design of an iterative optimization algorithm which,
    roughly speaking, efficiently backpropagates the gradient of f
    through the NN Phi. In other words, for all x, a well-chosen NN
    attack on Phi at x provides an ascent direction for the function
    f(Phi(·)) at x. The second part of the talk discusses a so-called
    safeguard procedure that we use when the NN attack fails. This
    safeguard is inspired by derivative-free optimization (DFO)
    algorithms, which are usually slow but possess asymptotic guarantees.
    Finally, the third part of the talk shows that our resulting
    algorithm inherits the theoretical properties of DFO and that, in our
    numerical experiments, it outperforms both algorithms based purely on
    DFO and algorithms based only on gradient backpropagation.
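
    A minimal PyTorch sketch of the core idea in the first part of the
    talk, assuming a trained network phi and a smooth scalar objective f
    (both placeholders): a single FGSM-style attack step backpropagates
    the gradient of f through phi and yields an ascent direction for
    f(Phi(·)). FGSM is used here as a representative attack, not
    necessarily the one the authors employ, and the safeguard procedure
    is the talk's contribution and is not reproduced.

        import torch

        def attack_ascent_step(phi, f, x, alpha=0.01):
            """One FGSM-style ascent step on f(phi(x)).

            phi: trained neural network (treated as fixed), f: smooth
            scalar objective, x: current point, alpha: step length.
            """
            x = x.detach().requires_grad_(True)
            y = f(phi(x))  # forward pass through the trained NN
            y.backward()   # backpropagate the gradient of f through phi
            with torch.no_grad():
                # signed-gradient step, as in the FGSM attack
                return x + alpha * x.grad.sign()

    Iterating this step, and falling back to a derivative-free poll
    whenever it fails to increase f(phi(x)), mirrors the
    attack-plus-safeguard structure described in the abstract.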

  • 04:40 PM - 05:05 PM

    Adaptive Direct Search algorithms for constrained optimization

    • Théo Denorme, presenter, Polytechnique Montréal
    • Charles Audet, GERAD - Polytechnique Montréal
    • Youssef Diouane, Polytechnique Montréal
    • Sébastien Le Digabel, GERAD, Polytechnique Montréal
    • Christophe Tribes, Polytechnique Montréal

    This talk addresses the minimization of a nonsmooth objective function subject to general nonsmooth constraints, where the analytical form of the problem may be unavailable, leading to a blackbox optimization setting. The proposed method is a derivative-free optimization approach, suited for situations where derivative information is either inaccessible or costly to obtain. Convergence analyses of directional direct search methods generally require a sufficient decrease in the objective function value or require that trial points belong to a space discretization known as the mesh. Both approaches have their advantages. The sufficient decrease condition allows for more flexibility in the placement of trial points, as they do not need to be projected onto the mesh.
    The mesh-based approach ensures that no trial point that improves the current solution is wasted, even if the improvement is only marginal.
    In this talk, we present a new direct search algorithm named Adaptive Direct Search (ADS). To accept a trial point, ADS introduces a new criterion that is based neither on sufficient decrease in the objective function nor on a mesh, but rather on a new set called the "esh". This new set allows flexible placement of trial points while retaining the theoretical benefits of existing direct search algorithms. Moreover, ADS generalizes practical instances of MADS, a well-established mesh-based direct search method. Numerical results will be presented to illustrate the performance of the method.
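
    For context, here is a minimal sketch of a classical mesh-based
    direct search (coordinate search with simple decrease), the family
    that MADS belongs to and that ADS generalizes; the esh-based
    acceptance criterion itself is the contribution of the talk and is
    not reproduced here.

        import numpy as np

        def coordinate_search(f, x0, delta=1.0, tol=1e-6, max_iter=1000):
            """Classical mesh-based direct search.

            Polls the 2n signed coordinate directions scaled by the mesh
            size delta; accepts any simple decrease (trial points lie on
            the mesh by construction) and halves delta when the poll
            fails.
            """
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
            for _ in range(max_iter):
                if delta < tol:
                    break
                improved = False
                for d in directions:
                    trial = x + delta * d  # trial point on the current mesh
                    ft = f(trial)
                    if ft < fx:  # simple decrease, no sufficient-decrease test
                        x, fx, improved = trial, ft, True
                        break
                if not improved:
                    delta /= 2  # refine the mesh
            return x, fx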
