JOPT2025

HEC Montreal, 12–14 May 2025

Learning-Enhanced Optimization

May 14, 2025 01:20 PM – 03:00 PM

Location: Associations étudiantes (Green)

Chaired by Taha Varol

4 Presentations

  • 01:20 PM - 01:45 PM

    A neuro-symbolic approach to learn how to solve hard (serious) puzzles from solutions.

    • Thomas Schiex, presenter, INRAE
    • Marianne Defresne, KU Leuven
    • Sophie Barbe, TBI/INRAE

    In some settings, the precise definition of a discrete optimization problem is not completely known, and only historical solutions with associated arbitrary contextual information are available. To generate new high-quality solutions for new contextual information, it becomes useful to learn a mapping from contextual information to formulations of discrete optimization problems. A formulation produced by this learned generator can then be analyzed, and completed with side constraints or additional criteria, before being solved to produce a high-quality solution that optimizes the learned problem and satisfies the added properties. In this talk, I will present a neuro-symbolic generative architecture that combines an arbitrary neural architecture with a dedicated loss to produce the formulation of a discrete optimization problem as a "Graphical Model" (a family of mathematical models that encompasses deterministic models such as Boolean satisfiability and constraint programming, but also stochastic models such as Bayesian networks and Markov random fields). This architecture offers scalable training, avoiding calls to NP-complete problem solvers during training. NP-complete solvers are still needed at inference time, however, if guarantees in terms of constraint satisfaction or optimality are required. The architecture can be used to learn how to play logical grid games (Sudoku, Futoshiki, ...) and has been used in practice to learn how to design new molecules (proteins) from experimentally tested molecules, with applications in health and biotechnology.
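
    The solver-free training idea can be illustrated with a toy sketch (an assumed setup, not the authors' actual architecture or loss): the pairwise cost table of a two-variable graphical model is fitted to observed solutions of a hidden "not-equal" rule using a pseudo-likelihood-style loss, in which each variable's conditional distribution given the other is pushed toward the observed value, so no NP-complete solver is ever called during training.

```python
import math

D = 3  # toy domain size; real puzzles (Sudoku, ...) have larger domains
# Observed "solutions": every pair (a, b) with a != b satisfies the hidden rule
solutions = [(a, b) for a in range(D) for b in range(D) if a != b]

# Pairwise cost table C[a][b] of a tiny graphical model over two variables;
# in the talk's setting, a neural network would emit such tables from context
C = [[0.0] * D for _ in range(D)]

def conditional(costs):
    # Boltzmann conditional over one variable given the other: p(v) ~ exp(-cost_v)
    m = min(costs)
    w = [math.exp(-(c - m)) for c in costs]
    s = sum(w)
    return [x / s for x in w]

lr = 0.5
for _ in range(200):
    for a, b in solutions:
        # Pseudo-likelihood-style gradient step: raise each observed value's
        # conditional probability given the other variable -- no solver needed
        p1 = conditional([C[v][b] for v in range(D)])  # P(x1 = v | x2 = b)
        p2 = conditional([C[a][v] for v in range(D)])  # P(x2 = v | x1 = a)
        for v in range(D):
            C[v][b] -= lr * ((1.0 if v == a else 0.0) - p1[v])
            C[a][v] -= lr * ((1.0 if v == b else 0.0) - p2[v])
```

    After training, C[a][a] is markedly higher than C[a][b] for a != b, i.e. the cost table has recovered the "not-equal" constraint from solutions alone; at inference, such a table can be handed to an exact solver when satisfaction or optimality guarantees are required.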

  • 01:45 PM - 02:10 PM

    Neural Heuristics for Mathematical Optimization via Value Function Approximation

    • Justin Dumouchelle, presenter, University of Toronto
    • Rahul Mihir Patel, University of Toronto
    • Esther Julien, TU Delft
    • Jannis Kurtz, University of Amsterdam
    • Merve Bodur, The University of Edinburgh
    • Elias Khalil, University of Toronto

    Mathematical optimization is an invaluable framework for modeling and solving
    decision-making problems with many successes in single-level deterministic
    problems (e.g., mixed-integer linear or nonlinear optimization). However, many
    real-world problems require accounting for uncertainty or the reaction of another
    agent. Paradigms such as stochastic programming, bilevel optimization, and
    robust optimization can model these situations but are much slower to solve than
    their deterministic counterparts, especially when discrete decisions must be
    made. This work demonstrates how a single framework based on value function
    approximation and optimizing over trained neural networks can be adapted to all
    three domains. Empirically, we find solutions of similar, and in some cases
    significantly better, quality than state-of-the-art algorithms in each field, often
    within a fraction of the running time. The datasets and three frameworks,
    Neur2SP (NeurIPS'22), Neur2RO (ICLR'24), and Neur2BiLO (NeurIPS'24), are
    open-sourced for further research.
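
    A minimal sketch of the value-function-approximation idea (an assumed toy setup, not the Neur2SP/Neur2RO/Neur2BiLO code): in a two-stage stochastic program, the expected second-stage value is replaced by a surrogate fitted to first-stage decisions, and the surrogate-augmented objective is then optimized over the first-stage choices. A linear least-squares surrogate stands in for the trained neural network here.

```python
import itertools

# Toy two-stage stochastic program: open facilities x (binary), pay fixed
# costs now, then pay a shortage penalty once the demand scenario is revealed
fixed_cost = [3.0, 5.0, 4.0]
capacity = [4.0, 7.0, 5.0]
scenarios = [6.0, 9.0, 12.0, 8.0]  # equally likely demands
PENALTY = 2.0

def expected_Q(x):
    # Exact expected recourse value E[Q(x, d)] (cheap here, expensive in general)
    cap = sum(c * xi for c, xi in zip(capacity, x))
    return sum(PENALTY * max(0.0, d - cap) for d in scenarios) / len(scenarios)

# --- Learning step: fit a surrogate of E[Q] on first-stage decisions.
# Here: linear least squares via the normal equations; the talk's frameworks
# train a neural network and embed it in a MIP instead.
train = [list(x) for x in itertools.product([0, 1], repeat=3)]
feats = [x + [1.0] for x in train]
ys = [expected_Q(x) for x in train]
n = len(feats[0])
AtA = [[sum(f[i] * f[j] for f in feats) for j in range(n)] for i in range(n)]
Aty = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n)]

def solve(A, b):  # Gauss-Jordan elimination for the normal equations
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    m = len(M)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(m):
            if r != col:
                f = M[r][col] / M[col][col]
                for c in range(col, m + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][m] / M[i][i] for i in range(m)]

w = solve(AtA, Aty)
surrogate = lambda x: sum(wi * fi for wi, fi in zip(w, list(x) + [1.0]))

# --- Optimization step: minimize first-stage cost + surrogate recourse value
best_x = min(itertools.product([0, 1], repeat=3),
             key=lambda x: sum(c * xi for c, xi in zip(fixed_cost, x))
             + surrogate(x))
```

    On this instance the surrogate recommends opening all three facilities, which covers every demand scenario; the true optimum is slightly cheaper, illustrating that the surrogate trades exactness for speed.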

  • 02:10 PM - 02:35 PM

    Reinforcement and Imitation Learning for Real-Time Train Trajectory Re-Optimization

    • Max Bourgeat, presenter, Polytechnique Montréal
    • Quentin Cappart, CIRRELT
    • Antoine Legrain, GERAD - Polytechnique Montréal

    Rail transport is a key lever for reducing carbon emissions, offering a more sustainable alternative to individual cars and trucks for both passenger and freight transportation. However, the increased adoption of rail services is hindered by disruptions such as network failures and the resulting uncertainty in arrival times. While mathematical solvers are widely used in the railway industry for train scheduling and real-time re-optimization during disruptions, their performance deteriorates as the scale and complexity of the network grows.

    In this work, we propose a hybrid approach that combines reinforcement learning (RL) and imitation learning (IL) to accelerate train trajectory re-optimization. Our method leverages world models to create a compact and actionable representation of the railway environment, enabling efficient long-term planning. To facilitate coordination between multiple trains, we integrate a communication mechanism based on Transformers, allowing agents to exchange information and adapt to changing network conditions. Additionally, we use imitation learning from a mathematical solver as the expert, enabling the model to replicate the solver’s decision patterns while significantly speeding up the decision-making process.

    This hybrid approach aims to combine the reliability of mathematical optimization with the adaptability and speed of artificial intelligence, offering a scalable solution to improve railway efficiency and reduce the impact of disruptions.
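
    The imitation-learning step can be sketched in stripped-down form (an illustrative setup; the actual system uses world models and Transformer-based communication): a simple dispatching rule stands in for the mathematical-solver expert, and a perceptron is trained by behavior cloning to reproduce its priority decisions, after which it answers in constant time instead of re-solving.

```python
# Toy disruption states: (delay_A, delay_B) in minutes, on a coarse grid.
# The "expert" stands in for the mathematical solver: it gives track
# priority (action 1) to train A when A is strictly more delayed.
states = [(a, b) for a in range(0, 31, 3) for b in range(0, 31, 3)]
expert = {s: (1 if s[0] > s[1] else 0) for s in states}

# Behavior cloning with a perceptron: replicate the solver's decisions
w = [0.0, 0.0]
bias = 0.0

def policy(s):
    return 1 if w[0] * s[0] + w[1] * s[1] + bias > 0 else 0

for _ in range(5000):  # train until the expert is matched on every state
    mistakes = 0
    for s in states:
        err = expert[s] - policy(s)
        if err:
            mistakes += 1
            w[0] += err * s[0]
            w[1] += err * s[1]
            bias += err
    if mistakes == 0:
        break
```

    After cloning, the policy agrees with the expert on every training state and on nearby unseen delays (e.g. it prioritizes train A for (20, 5) and train B for (5, 20)); the real pipeline additionally coordinates many trains through Transformer-based communication.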

  • 02:35 PM - 03:00 PM

    Scalable Primal Heuristics Using Graph Neural Networks for Combinatorial Optimization

    • Furkan Canturk, Ozyegin University
    • Taha Varol, presenter, HEC Montreal
    • Reyhan Aydogan, Ozyegin University
    • Okan Orsan Ozener, Ozyegin University

    By examining the patterns of solutions obtained for various instances, one can gain
    insights into the structure and behavior of combinatorial optimization (CO) problems
    and develop efficient algorithms for solving them. Machine learning techniques, especially
    Graph Neural Networks (GNNs), have shown promise in parametrizing and automating
    this laborious design process. The inductive bias of GNNs allows for learning solutions
    to mixed-integer programming (MIP) formulations of constrained CO problems with a
    relational representation of decision variables and constraints. The trained GNNs can be
    leveraged with primal heuristics to construct high-quality feasible solutions to CO problems
    quickly. However, current GNN-based end-to-end learning approaches have limitations for
    scalable training and generalization on larger-scale instances; therefore, they have been
    mostly evaluated over small-scale instances. Addressing this issue, our study builds on
    supervised learning of optimal solutions to downscaled instances of given large-scale
    CO problems. We introduce several improvements to a recent GNN model for CO to
    generalize to instances of a larger scale than those used in training. We also propose a
    two-stage primal heuristic strategy based on uncertainty quantification to automatically
    configure how the solution search relies on the predicted decision values. Our models
    can generalize to 16x upscaled instances of five commonly benchmarked CO problems.
    Unlike existing GNN-based CO approaches, whose performance regresses as the scale of
    problems increases, the CO pipelines using our models offer an incremental performance
    improvement relative to CPLEX. The proposed uncertainty-based primal heuristics provide
    6-75% better optimality gap values and 45-99% better primal gap values for the 16x
    upscaled instances and bring an immense speedup in obtaining high-quality solutions. All
    these gains are achieved through a computationally efficient modeling approach without
    sacrificing solution quality.
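
    The confidence-based fixing idea behind such primal heuristics can be sketched on a toy knapsack (the probabilities below are mocked stand-ins for GNN predictions, and the thresholds are illustrative): variables predicted with high confidence are fixed, and the search only explores the uncertain remainder.

```python
import itertools

# Toy knapsack instance
values = [100, 1, 50, 2, 30, 3, 20, 4]
weights = [1, 10, 2, 9, 3, 8, 4, 7]
CAP = 10
n = len(values)

# Mock per-variable probabilities P(x_i = 1); in the paper's pipeline these
# come from a trained GNN, with thresholds configured from its uncertainty
prob = [0.98, 0.03, 0.95, 0.05, 0.70, 0.30, 0.60, 0.40]
HI, LO = 0.9, 0.1

fixed = {i: 1 for i in range(n) if prob[i] > HI}
fixed.update({i: 0 for i in range(n) if prob[i] < LO})
free = [i for i in range(n) if i not in fixed]

def total(x):
    # Objective value of assignment x, or -1 if it violates the capacity
    wsum = sum(weights[i] for i in range(n) if x[i])
    return sum(values[i] for i in range(n) if x[i]) if wsum <= CAP else -1

# Primal heuristic: search only over the uncertain (free) variables
best_heur = -1
for bits in itertools.product([0, 1], repeat=len(free)):
    x = [fixed.get(i, 0) for i in range(n)]
    for i, bit in zip(free, bits):
        x[i] = bit
    best_heur = max(best_heur, total(x))

# Exact brute force over all 2^n assignments, for reference
best_exact = max(total(x) for x in itertools.product([0, 1], repeat=n))
```

    Here the reduced search over 2^4 instead of 2^8 assignments still reaches the exact optimum because the confident predictions happen to agree with it; in general, the quality of the fixing drives the trade-off that the two-stage strategy tunes automatically.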
