Journées de l'optimisation 2019

HEC Montréal, May 13-15, 2019


MD10 Optimization based on Learning

May 13, 2019, 15:30 - 17:10

Room: TD Assurance Meloche Monnex

Chaired by Adrien Rimélé

4 presentations

  • 15:30 - 15:55

    A novel method based on the TLBO algorithm for influence maximization in social networks

    • Salman Kimiagari, presenter, TRU
    • Azam Taghizadeh, Shiraz University

    The Influence Maximization Problem (IMP) is to find an initial seed set in a social network graph that maximizes the spread of influence, an important component of viral marketing. The problem is NP-hard, and the original greedy algorithm is less efficient than expected because it requires many Monte-Carlo simulations to identify each influential node. We propose a model for the IMP based on Teaching-Learning-Based Optimization (TLBO), applying this algorithm to the IMP for the first time. To assess the proposed model, we test it on real datasets and show that it is more time-efficient.
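To illustrate why the greedy baseline is expensive, here is a minimal, purely illustrative sketch of Monte-Carlo spread estimation under the independent-cascade model (the abstract does not specify the authors' diffusion model; the graph, probability `p`, and trial count below are placeholders):

```python
import random

def simulate_spread(graph, seeds, p=0.1, trials=1000, seed=0):
    """Monte-Carlo estimate of the expected spread of a seed set under
    the independent-cascade model (all parameters illustrative)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    # each newly active node gets one chance to
                    # activate each inactive neighbour, w.p. p
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# tiny illustrative graph (adjacency lists); greedy seed selection
# calls a routine like this once per candidate node per round,
# which is the Monte-Carlo bottleneck mentioned above
g = {0: [1, 2], 1: [3], 2: [3], 3: []}
est = simulate_spread(g, {0}, p=0.5)
```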

  • 15:55 - 16:20

    Deep inverse optimization

    • Yongcong Tan, presenter, Université Concordia
    • Daria Terekhov, Concordia University
    • Andrew Delong

    Given a set of observations generated by an optimization process, the goal of inverse optimization is to determine likely parameters of that process. We cast inverse optimization as a form of deep learning. Our method, called deep inverse optimization, is to unroll an iterative optimization process and then use backpropagation to learn parameters that generate the observations. We demonstrate that by backpropagating through the interior point algorithm we can learn the coefficients determining the cost vector and the constraints, independently or jointly, for both non-parametric and parametric linear programs, starting from one or multiple observations. With this approach, inverse optimization can leverage concepts and algorithms from deep learning.

    Keywords: inverse optimization, deep learning, interior point
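The unrolling idea can be illustrated on a toy problem. The sketch below unrolls plain gradient descent on f(x) = 0.5·‖x‖² − cᵀx (whose minimizer is x = c) and differentiates through the unrolled iterations by hand to recover c from an observed solution. This is an illustrative stand-in for the paper's interior-point unrolling, not the authors' implementation:

```python
import numpy as np

def forward(c, x0, eta=0.1, steps=50):
    """Unrolled gradient descent on f(x) = 0.5*||x||^2 - c.x,
    whose exact minimizer is x* = c."""
    x = x0
    for _ in range(steps):
        x = x - eta * (x - c)        # gradient step: grad f = x - c
    return x

def learn_c(x_obs, eta=0.1, steps=50, lr=0.5, iters=200):
    """Recover c by gradient descent on ||x_K - x_obs||^2,
    backpropagating through the unrolled loop. The recursion is
    x_{k+1} = (1-eta)*x_k + eta*c, so dx_K/dc = 1 - (1-eta)**steps."""
    c = np.zeros_like(x_obs)
    dxK_dc = 1 - (1 - eta) ** steps
    for _ in range(iters):
        x_K = forward(c, np.zeros_like(x_obs), eta, steps)
        grad_c = 2 * (x_K - x_obs) * dxK_dc   # chain rule
        c = c - lr * grad_c
    return c

x_obs = np.array([1.0, -2.0, 3.0])   # "observed" optimal solution
c_hat = learn_c(x_obs)
# re-running the unrolled solver with c_hat reproduces x_obs
```

In the paper's setting an automatic-differentiation framework takes the place of the hand-derived derivative, which is what lets the same recipe scale to interior-point iterations and jointly learned constraints.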

  • 16:20 - 16:45

    Learning graph elimination orderings

    • Defeng Liu, presenter, Polytechnique Montréal
    • Mathieu Tanneau, Polytechnique Montréal
    • Andrea Lodi, Polytechnique Montréal

    Solving symmetric sparse linear systems typically relies on Cholesky factorization, which can suffer from significant fill-in. Fill-in can be reduced by re-ordering the rows and columns of the matrix to be factored. We propose a learning framework for learning efficient ordering heuristics.

    Keywords: sparse matrix, chordal extension, machine learning

  • 16:45 - 17:10

    Learning a storage policy for e-commerce warehouses

    • Adrien Rimélé, presenter, CIRRELT
    • Philippe Grangier, Element AI
    • Louis-Martin Rousseau, Polytechnique Montréal
    • Michel Gamache, Polytechnique Montréal
    • Michel Gendreau, Polytechnique Montréal

    We consider an Amazon-type warehouse where a fleet of robots stores and retrieves shelves of objects to fulfill customers’ orders. To adapt to changing demand, one can dynamically modify the storage allocation of a shelf, with the goal of improving overall access time for future requests. We model this problem as a stochastic dynamic program and propose a solution approach that learns a good storage policy using reinforcement learning on an associated partially observable MDP.
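As a purely illustrative example of learning a storage policy by reinforcement, the sketch below runs tabular Q-learning on a toy one-step version of slot assignment (the slots, access times, and demand classes are invented; the authors' partially observable model is far richer):

```python
import random

def q_learn_storage(access_time, demand_prob, episodes=5000,
                    alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning on a toy one-step storage problem: each
    episode, a shelf of a given demand class returns and must be
    assigned a slot; the cost is its request frequency times the
    slot's access time. Everything here is illustrative."""
    rng = random.Random(seed)
    n_slots = len(access_time)
    classes = list(demand_prob)
    Q = {(s, a): 0.0 for s in classes for a in range(n_slots)}
    for _ in range(episodes):
        s = rng.choice(classes)
        if rng.random() < eps:                       # explore
            a = rng.randrange(n_slots)
        else:                                        # exploit
            a = max(range(n_slots), key=lambda k: Q[(s, k)])
        r = -demand_prob[s] * access_time[a]
        Q[(s, a)] += alpha * (r - Q[(s, a)])         # one-step update
    return Q

access_time = [1.0, 2.0, 5.0]            # near -> far slots
demand_prob = {"hot": 0.9, "cold": 0.1}  # request frequencies
Q = q_learn_storage(access_time, demand_prob)
best_hot = max(range(3), key=lambda a: Q[("hot", a)])
```

After training, the learned policy places frequently requested ("hot") shelves in the nearest slot, which is the qualitative behaviour a good storage policy should exhibit.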
