CORS / Optimization Days (CORS-JOPT2023)

HEC Montréal, May 29-31, 2023

NOJII: Numerical optimization and linear algebra with Julia II

May 29, 2023 03:30 PM – 05:10 PM

Location: Transat (yellow)

Chaired by Tangi Migot

3 Presentations

  • 03:30 PM - 03:55 PM

    Hyperplane Arrangements and Matroids in Complementarity Problems: The B-Differential of the Componentwise Minimum

    • Baptiste Plaquevent-Jourdain, presenter, Inria Paris - Université de Sherbrooke - Sorbonne Université
    • Jean-Pierre Dussault, Université de Sherbrooke
    • Jean Charles Gilbert, Inria Paris

    Systems of smooth nonlinear equations of the form H(x) = 0 can often be
    solved by Newton's method. When the equations involve nonsmooth
    functions, that algorithm no longer applies. The semismooth Newton
    method is a possible remedy: it yields fast local convergence,
    provided that every Jacobian in the B-differential of H at the
    solution under consideration is nonsingular.

    An example of such a nonsmooth vector function H arises when
    complementarity problems are reformulated as nonsmooth equations
    via the componentwise minimum function. This talk describes
    the content and properties of the B-differential of the componentwise
    minimum of two affine vector functions.

    Describing and computing this B-differential is a problem with
    connections to linear algebra, convex analysis, and discrete geometry
    (hyperplane arrangements), the latter being the approach chosen for the
    numerical computation. We emphasize the role of duality and matroid
    circuits in these links, and show how insight gained from the theory
    serves the numerical side.

    The presentation will show significant improvements over the
    state-of-the-art approach of Rada and Černý for solving this
    combinatorial problem numerically.
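
    To make the reformulation concrete, the following is a minimal sketch
    (not the authors' implementation) of a semismooth Newton iteration on
    H(x) = min(Ax + a, Bx + b), the componentwise-minimum reformulation of
    the complementarity conditions Ax + a ≥ 0, Bx + b ≥ 0 with
    (Ax + a)_i (Bx + b)_i = 0 for each i. The Jacobian selected at each
    iteration is one element of the B-differential.

        using LinearAlgebra

        function semismooth_newton(A, a, B, b, x; tol = 1e-10, maxit = 50)
            for _ in 1:maxit
                u, v = A * x + a, B * x + b
                H = min.(u, v)                 # componentwise minimum
                norm(H) <= tol && return x
                # Select one element of the B-differential: row i of A where
                # u_i <= v_i, row i of B otherwise (ties broken arbitrarily).
                J = similar(A)
                for i in eachindex(u)
                    J[i, :] = u[i] <= v[i] ? A[i, :] : B[i, :]
                end
                x = x - J \ H                  # Newton step with the selected Jacobian
            end
            return x
        end

        # Tiny example: x >= 0, x - 1 >= 0, x * (x - 1) = 0 has solution x = 1.
        A, a = ones(1, 1), [0.0]
        B, b = ones(1, 1), [-1.0]
        semismooth_newton(A, a, B, b, [5.0])   # returns [1.0]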

  • 03:55 PM - 04:20 PM

    PartiallySeparableNLPModels.jl: A Julia framework for partitioned quasi-Newton optimization

    • Paul Raynaud, presenter, GERAD and Polytechnique Montréal
    • Dominique Orban, GERAD - Polytechnique Montréal
    • Jean Bigeon, Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, F-38000 Grenoble, France

    In optimization applications, objectives and constraints often have a partially-separable structure: they are a sum of so-called element functions that depend on a small number of variables.
    It is well known that twice-differentiable functions with a sparse Hessian are partially-separable.
    We present a framework based on the JuliaSmoothOptimizers ecosystem to model and detect partially-separable problems, and to efficiently evaluate their derivatives by informing AD engines of their structure.
    We also implement partitioned trust-region quasi-Newton methods in which the Hessian approximations are likewise informed by the partially-separable structure.
    Such methods were proposed in the 1980s and have since rivaled limited-memory quasi-Newton methods in efficiency.
    Our implementations feature novel aspects, including the automatic detection of convexity and its impact on the choice of quasi-Newton approximation, and the use of limited-memory approximations for element functions that depend on a sufficiently large number of variables. The latter consumes significantly less memory than the dense matrices used in the 1980s and 1990s.
    Our numerical results illustrate the performance of a number of variants on a standard problem collection and demonstrate the viability of our methods for problems in which element functions depend on either a small or a substantial number of variables.
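
    As an illustration of the structure being exploited, here is a minimal
    sketch (with hypothetical names, not the PartiallySeparableNLPModels.jl
    API) of a partially-separable function f(x) = sum_i f_i(x[idx_i]) stored
    as element functions plus index sets, with the gradient assembled element
    by element; a partitioned quasi-Newton method would likewise maintain one
    small Hessian approximation per element.

        using ForwardDiff

        # Hypothetical container, for illustration only.
        struct PartiallySeparable{F}
            elements::Vector{F}           # element functions f_i
            indices::Vector{Vector{Int}}  # variables each f_i depends on
        end

        evaluate(f::PartiallySeparable, x) =
            sum(fi(x[idx]) for (fi, idx) in zip(f.elements, f.indices))

        function gradient(f::PartiallySeparable, x)
            g = zero(x)
            for (fi, idx) in zip(f.elements, f.indices)
                # each element gradient is small; AD only sees the variables it uses
                g[idx] .+= ForwardDiff.gradient(fi, x[idx])
            end
            return g
        end

        # Extended Rosenbrock objective: n - 1 element functions of 2 variables each.
        n = 10
        fi(u) = 100 * (u[2] - u[1]^2)^2 + (1 - u[1])^2
        f = PartiallySeparable(fill(fi, n - 1), [[i, i + 1] for i in 1:n-1])
        gradient(f, ones(n))   # zero at the minimizer x = ones(n)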

  • 04:20 PM - 04:45 PM

    Quadratic Regularization Optimizer in Low Precision for Deep Neural Networks: Implementation and Numerical Experience

    • Farhad Rahbarnia, presenter, GERAD Student
    • Dominique Orban, GERAD - Polytechnique Montréal
    • Paul Raynaud, GERAD
    • Nathan Allaire, GERAD
    • Dominique Monet, GERAD

    We propose a Julia framework to automatically port a deterministic optimization method to the stochastic setting for training deep neural networks (DNNs). We illustrate the process using the quadratic regularization method R2, which is a variant of the gradient method with adaptive step size. We report on numerical experiments using the stochastic R2 method on classification networks in low-precision arithmetic.
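
    As a rough illustration, the sketch below shows a deterministic
    quadratic-regularization iteration of the R2 type: a gradient step
    s = -g/σ whose regularization parameter σ is adapted from the ratio of
    actual to predicted decrease. The parameter names and update rules are
    standard choices, not necessarily those of the authors' implementation;
    a stochastic variant would replace the exact function and gradient
    values with mini-batch estimates.

        using LinearAlgebra

        function r2_sketch(f, ∇f, x; σ = 1.0, η1 = 1e-4, η2 = 0.9, γ = 2.0,
                           tol = 1e-6, maxit = 1000)
            for _ in 1:maxit
                g = ∇f(x)
                norm(g) <= tol && break
                s = -g / σ                       # minimizes gᵀs + (σ/2)‖s‖²
                pred = norm(g)^2 / (2σ)          # decrease predicted by the model
                ρ = (f(x) - f(x + s)) / pred     # actual vs. predicted decrease
                ρ >= η1 && (x = x + s)           # accept the step if it decreases f enough
                σ = ρ >= η2 ? σ / γ : (ρ >= η1 ? σ : γ * σ)   # adapt the regularization
            end
            return x
        end

        r2_sketch(x -> sum(abs2, x), x -> 2x, [3.0, -4.0])   # converges to [0.0, 0.0]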
