JOPT2025
HEC Montreal, 12 — 14 May 2025

Derivative-free optimization II
May 14, 2025 03:45 PM – 05:25 PM
Location: Accra (Yellow)
Chaired by Sébastien Le Digabel
3 Presentations
-
03:45 PM - 04:10 PM
Porifera: Cellular solid- and finite element-based blackboxes
The field of blackbox optimization is rife with algorithms, but scarce
in open-source blackboxes on which to test them.
Here we present Porifera, a family of blackboxes consisting of finite
element simulations on geometries involving cellular solids.
Porifera includes the Porifera-Compression problem, consisting of a 3D
domain with rectangular bounds filled with a TPMS-like (triply periodic
minimal surface) cellular solid described by the input variables of
the problem.
A vertical compression is applied to the top surface of the domain and
information about the resulting shape is returned, including the solid
fraction of the cellular solid.
Various simulation parameters, such as the mesh resolution and the
possibly nonconvex hull of the domain, can be customized in a
configuration file, so Porifera-Compression gives rise to an infinite
family of blackboxes and corresponding surrogates.
Porifera is difficult to solve because the kind of TPMS is described
by a categorical variable; the meshing step can fail, especially for
complex geometries; and the simulation can fail to converge.
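These failure modes make Porifera a blackbox with hidden constraints: some evaluations return no objective value at all. The sketch below is purely illustrative (the toy objective, the TPMS names, and both function names are invented here, not part of Porifera); it only shows that a solver must discard failed evaluations rather than trust them.

```python
import math
import random

def toy_blackbox(tpms_kind, x, y):
    """Hypothetical stand-in for a Porifera-style blackbox: returns a
    scalar objective, or None when the simulated meshing/solve 'fails'
    (a hidden constraint), as the abstract describes."""
    if x < 0.0 or y < 0.0:  # pretend meshing fails in this region
        return None
    base = {"gyroid": 1.0, "schwarz": 1.2, "diamond": 0.8}[tpms_kind]
    return base * (x - 1.0) ** 2 + (y - 2.0) ** 2

def random_search(n_iter=200, seed=0):
    """Naive random search over the mixed categorical + continuous
    space; failed evaluations are simply discarded."""
    rng = random.Random(seed)
    best, best_val = None, math.inf
    for _ in range(n_iter):
        cand = (rng.choice(["gyroid", "schwarz", "diamond"]),
                rng.uniform(-1.0, 3.0), rng.uniform(-1.0, 4.0))
        val = toy_blackbox(*cand)
        if val is not None and val < best_val:
            best, best_val = cand, val
    return best, best_val
```

A full solver such as NOMAD handles such failures natively, e.g. via its extreme-barrier treatment of hidden constraints.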
We solve an instance of Porifera-Compression with various blackbox
optimization algorithms, including NOMAD, to show how it is used, and
also present performance, data and accuracy profiles to demonstrate
its difficulty.
-
04:10 PM - 04:35 PM
Blackbox Optimization for Loss Minimization in Power Distribution Networks using Feeder Reconfiguration
Modern power distribution networks (DNs) incorporate a growing number of active distribution network (ADN) technologies, such as distributed energy resources (DERs) and remotely activated switches. A DN is naturally unbalanced due to multi-phase, highly fluctuating demand, and DERs, which can introduce bi-directional power flow, amplify this phase imbalance, reducing system reliability and efficiency. The proposed network topology reconfiguration method uses tie and sectionalizing switches to minimize power losses in a three-phase unbalanced DN equipped with DERs. Strict, practical feasibility of the solution is ensured by using a high-accuracy load-flow simulator and by formulating the problem as a blackbox optimization (BBO) problem, solved with the NOMAD software package. To circumvent the computational burden of BBO, combinatorial-optimization-inspired algorithms are introduced and adapted to the DN context, namely the variable neighbourhood search (VNS) metaheuristic and the branch-and-bound (BB) framework. VNS incorporates a random component, potentially leading to faster progress toward a good solution. BB approximates the mechanisms underlying the exact branch-and-bound method used in mixed-integer programming, though its performance remains highly sensitive to the choice of initial point. Consequently, three methods combining a standalone BBO problem, a VNS, and a BB are tested and compared; using each optimization technique as a warm start for the next yields steady improvement in solution quality. These methods are tested on the modified IEEE 34-bus and 136-bus test feeders, both integrating DERs. The final solution typically results in a network topology that differs from the initial configuration, and the power losses are considerably diminished, illustrating the direct impact of combining local generation and network reconfiguration on DN efficiency.
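The warm-start chaining described above, where each technique seeds the next so that solution quality never degrades, can be sketched on a toy switch-configuration problem. Everything below is an invented stand-in (`toy_loss` replaces the load-flow simulator, `vns_like` and `one_flip_descent` are crude caricatures of the actual VNS and BBO stages):

```python
import random

def toy_loss(config):
    """Hypothetical stand-in for the load-flow simulator: maps a binary
    switch configuration to a scalar 'power loss' (here, a Hamming
    distance to an arbitrary target configuration)."""
    target = (1, 0, 1, 1, 0, 1)
    return sum(a != b for a, b in zip(config, target))

def vns_like(f, x0, iters=50, seed=0):
    """Crude VNS-flavoured search: try random k-flip neighbours with
    growing k, resetting k to 1 whenever an improvement is found."""
    rng = random.Random(seed)
    x, k = list(x0), 1
    for _ in range(iters):
        y = list(x)
        for i in rng.sample(range(len(x)), k):
            y[i] ^= 1
        if f(tuple(y)) < f(tuple(x)):
            x, k = y, 1
        else:
            k = min(k + 1, len(x))
    return tuple(x)

def one_flip_descent(f, x0):
    """Greedy one-flip local descent, standing in for a final BBO
    polishing stage; accepts only strict improvements."""
    x, improved = list(x0), True
    while improved:
        improved = False
        for i in range(len(x)):
            y = list(x)
            y[i] ^= 1
            if f(tuple(y)) < f(tuple(x)):
                x, improved = y, True
    return tuple(x)

# Chain the stages: each output warm-starts the next stage, so the
# loss is non-increasing along the chain.
x0 = (0, 0, 0, 0, 0, 0)
x1 = vns_like(toy_loss, x0)
x2 = one_flip_descent(toy_loss, x1)
assert toy_loss(x2) <= toy_loss(x1) <= toy_loss(x0)
```

Because both stages accept only strict improvements, chaining them can never worsen the incumbent, which is the property the abstract exploits.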
-
04:35 PM - 05:00 PM
Optimization over Trained Neural Network using Attack Techniques
This work solves optimization problems of the form "maximize f(Phi(x))", where f is a smooth function and Phi is an already-trained neural network (NN), using tools from NN attacks. The first part of the talk shows that the mathematics behind an NN attack allow us to design an iterative optimization algorithm which, roughly speaking, efficiently backpropagates the gradient of f through the NN Phi. In other words, for every x, a well-chosen NN attack on Phi at x provides an ascent direction for the function f(Phi(·)) at x. The second part of the talk then discusses a so-called safeguard procedure that we use when the NN attack fails. This safeguard is inspired by derivative-free optimization (DFO) algorithms, which are usually slow but possess asymptotic guarantees. Finally, the third part of the talk shows that the resulting algorithm inherits the theoretical guarantees of DFO and that, in our numerical experiments, it outperforms both algorithms based purely on DFO and algorithms based only on gradient backpropagation.
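The backpropagation step described in the first part can be sketched on a toy two-neuron "trained" network; the weights, the outer objective f, and all names below are hypothetical illustrations, not the authors' implementation. The point is only that the backward pass of an attack computes J_Phi(x)^T grad_f(Phi(x)), which is an ascent direction for f(Phi(·)):

```python
import math

# A tiny fixed "trained network" Phi: R^2 -> R^2, one tanh layer.
# Weights, biases, and the outer objective f are all invented here.
W = [[0.5, -0.3], [0.8, 0.2]]
b = [0.1, -0.2]

def phi(x):
    return [math.tanh(W[i][0] * x[0] + W[i][1] * x[1] + b[i])
            for i in range(2)]

def f(y):
    """Smooth outer objective to maximize."""
    return y[0] * y[1]

def grad_f(y):
    return [y[1], y[0]]

def ascent_direction(x):
    """Plain chain rule, i.e. the backward pass an NN attack computes:
    returns J_Phi(x)^T * grad_f(Phi(x)), an ascent direction for
    f(Phi(.)) at x."""
    z = [W[i][0] * x[0] + W[i][1] * x[1] + b[i] for i in range(2)]
    gy = grad_f([math.tanh(zi) for zi in z])
    gz = [gy[i] * (1.0 - math.tanh(z[i]) ** 2) for i in range(2)]
    return [gz[0] * W[0][j] + gz[1] * W[1][j] for j in range(2)]

# A small step along the returned direction increases f(Phi(x)).
x = [0.3, -0.1]
d = ascent_direction(x)
x_step = [xi + 1e-3 * di for xi, di in zip(x, d)]
assert f(phi(x_step)) > f(phi(x))
```

When this gradient is zero or the attack fails, the abstract's DFO-inspired safeguard takes over.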