JOPT2017, HEC Montréal, May 8-11, 2017
Optimization Days 2017, HEC Montréal, May 8-10, 2017
1st Canadian Healthcare Optimization Workshop (CHOW), HEC Montréal, May 10-11, 2017
TD3 Methodological Developments
May 9, 2017 03:30 PM – 05:10 PM
Location: Marie-Husny
4 Presentations
-
03:30 PM - 03:55 PM
Benders decomposition for binary programs
We adapt the work of Zou, Ahmed and Sun (2016) on Benders decomposition for stochastic problems to pure binary programs with no specific structure. The resulting algorithm uses standard Benders cuts derived from the linear relaxation of the subproblem, and adds combinatorial Benders cuts from the binary subproblem both when it is infeasible and when it is solved to optimality. Automatic decomposition strategies are presented, and computational experiments are carried out on instances from the literature. The results suggest that performance is highly sensitive to the decomposition strategy.
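As a rough illustration of the kind of cut mentioned above (our own sketch, not the authors' implementation), the following builds a combinatorial Benders "no-good" cut that excludes a given master assignment of the binary variables; the coefficient-dictionary representation is an assumption made for the example.

# Sketch: combinatorial Benders ("no-good") cut for a binary master solution.
# Given an assignment y_bar (dict: variable name -> 0/1), the cut
#   sum_{j: y_bar[j]==0} y_j + sum_{j: y_bar[j]==1} (1 - y_j) >= 1
# forbids exactly that assignment. Rewritten as a^T y >= b, it has
# a_j = +1 where y_bar[j]==0, a_j = -1 where y_bar[j]==1,
# and b = 1 - (number of variables fixed to 1).

def combinatorial_benders_cut(y_bar):
    """Return (coeffs, rhs) for the no-good cut cutting off y_bar."""
    coeffs = {j: (1 if val == 0 else -1) for j, val in y_bar.items()}
    rhs = 1 - sum(1 for val in y_bar.values() if val == 1)
    return coeffs, rhs

def violated(coeffs, rhs, y):
    """Check whether a candidate 0/1 point y violates the cut a^T y >= rhs."""
    return sum(coeffs[j] * y[j] for j in coeffs) < rhs

if __name__ == "__main__":
    y_bar = {"y1": 1, "y2": 0, "y3": 1}
    coeffs, rhs = combinatorial_benders_cut(y_bar)
    assert violated(coeffs, rhs, y_bar)                            # removes y_bar itself
    assert not violated(coeffs, rhs, {"y1": 1, "y2": 1, "y3": 1})  # other points survive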
-
03:55 PM - 04:20 PM
Using variable aggregation and Benders decomposition for solving large-scale extended formulations
Many optimization problems involve high-level strategic decisions, such as the location and/or dimensioning of facilities or devices, together with operational decisions on the usage of these facilities. Moreover, these decisions often have to be taken for multiple demand sets over time, or in an uncertain setting where multiple scenarios must be considered. A large number of variables (and constraints) is therefore often necessary to formulate the problem. Although more compact formulations sometimes exist, their linear relaxations usually provide much weaker lower bounds, or they require problem-specific cutting planes to be solved efficiently. In recent years, much research has focused on strong extended formulations of combinatorial optimization problems.
These large-scale models remain intractable for traditional solvers, but Benders decomposition has gained attention as successful applications have been reported. An alternative to these large-scale models is to use more compact formulations, often based on variable aggregation. We propose an intermediate strategy, applicable to a large class of problems, that projects the extended formulation onto the space of aggregated variables within a Benders decomposition scheme.
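To make the notion of variable aggregation concrete, the sketch below (an illustration under our own assumptions, not the authors' scheme) aggregates a ">=" constraint of an extended formulation over time-indexed copies of nonnegative variables; the aggregation is exact when the coefficients within a group coincide and otherwise yields a valid relaxation.

# Sketch: aggregating a ">=" constraint over groups of nonnegative variables.
# Disaggregated variables are keyed by (facility, period); the aggregated
# variable X[facility] stands for the sum over periods of x[facility, period].
# Using the max coefficient of each group keeps the constraint valid.

from collections import defaultdict

def aggregate_ge_constraint(coeffs, rhs):
    """coeffs: dict {(group, period): c}; returns (agg_coeffs, rhs, exact)."""
    groups = defaultdict(list)
    for (group, _period), c in coeffs.items():
        groups[group].append(c)
    agg_coeffs = {g: max(cs) for g, cs in groups.items()}
    exact = all(min(cs) == max(cs) for cs in groups.values())
    return agg_coeffs, rhs, exact

if __name__ == "__main__":
    # Toy capacity constraint of a two-period facility-location model.
    coeffs = {("f1", 1): 3.0, ("f1", 2): 3.0, ("f2", 1): 5.0, ("f2", 2): 4.0}
    agg, rhs, exact = aggregate_ge_constraint(coeffs, rhs=10.0)
    print(agg, rhs, exact)   # {'f1': 3.0, 'f2': 5.0} 10.0 False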
-
04:20 PM - 04:45 PM
Fast Approximate Solution of Very Large Linear Programs
It is often useful to find an approximate solution to a very large linear program quickly, e.g., to screen many possible solution options or to provide an advanced start for an accurate solver. I present a new method that optimizes a combined objective function including a quadratic penalty term for constraint violations. The movement direction is based on a simultaneous projection method (constraint consensus) and on a quadratic approximation of the combined objective surface. The algorithm is concurrent and uses random initial points scattered over a gradually shrinking box around the current best solution. Extensive empirical results show that the method often provides good approximate solutions in seconds for models that require hours of computation when solved accurately by commercial LP solvers.
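A minimal sketch of the two ingredients named above, under our own simplifying assumptions: a quadratic penalty measuring infeasibility of A x <= b, and a movement direction obtained by averaging the per-constraint projections (a simplified reading of constraint consensus, not the author's implementation, which also uses a quadratic model of the penalized objective).

# Sketch: quadratic penalty and one simplified constraint-consensus step
# for linear constraints A x <= b.

import numpy as np

def penalty(A, b, c, x, mu):
    """Combined objective: c^T x + mu * sum of squared constraint violations."""
    viol = np.maximum(A @ x - b, 0.0)
    return c @ x + mu * np.dot(viol, viol)

def consensus_direction(A, b, x):
    """Average the projections of x onto each violated constraint's boundary."""
    residuals = A @ x - b                              # > 0 means violated
    moves, n_violated = np.zeros_like(x), 0
    for a_i, r_i in zip(A, residuals):
        if r_i > 1e-9:
            moves += -r_i / np.dot(a_i, a_i) * a_i     # feasibility vector
            n_violated += 1
    return moves / n_violated if n_violated else np.zeros_like(x)

if __name__ == "__main__":
    A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])   # x+y<=1, x>=0, y>=0
    b = np.array([1.0, 0.0, 0.0])
    c = np.array([-1.0, -2.0])
    x = np.array([2.0, 2.0])                                # infeasible start
    for _ in range(20):
        x = x + consensus_direction(A, b, x)
    print(x, penalty(A, b, c, x, mu=10.0))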
-
04:45 PM - 05:10 PM
On a convex resource allocation problem with nested lower and upper constraints
We study a convex resource allocation problem in which lower and upper bounds are imposed on partial sums of allocations. This model is linked to a large variety of applications, including production planning, lot sizing, speed optimization, stratified sampling, support vector machines, portfolio management, and telecommunications.
We introduce a gradient-free divide-and-conquer algorithm that uses monotonicity arguments to generate valid bounds from the recursive calls and to eliminate linking constraints based on information from the sub-problems. These principles are quite unusual: the algorithm does not rely on greedy steps and scaling, or even on flow propagation, as is often the case for this family of problems. It also does not require strict convexity or differentiability, and it improves upon the best known complexity for this problem, producing a solution to the integer version (or an epsilon-approximate solution to the continuous version) in linearithmic time as a function of the problem size.
Our experimental analyses confirm the practical performance of the method, which produces optimal solutions for problems with up to one million variables in a few seconds. Promising applications to the support vector ordinal regression problem in machine learning are also investigated.
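To fix ideas on the problem the talk studies (the model only; this brute-force enumeration is not the authors' linearithmic divide-and-conquer algorithm), a tiny integer instance with nested prefix-sum bounds can be stated and checked as follows; the instance data are illustrative assumptions.

# Sketch: brute-force reference solver for a tiny integer instance of the
# resource allocation problem with nested (prefix-sum) lower/upper bounds:
#   minimize   sum_i f_i(x_i)
#   subject to lo[k] <= x_1 + ... + x_k <= up[k]   for k = 1..n,
#              x_i integer in [0, cap_i].

from itertools import product, accumulate

def brute_force(fs, caps, lo, up):
    best_x, best_val = None, float("inf")
    for x in product(*(range(c + 1) for c in caps)):
        prefix = list(accumulate(x))                     # nested partial sums
        if all(l <= p <= u for l, p, u in zip(lo, prefix, up)):
            val = sum(f(xi) for f, xi in zip(fs, x))
            if val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

if __name__ == "__main__":
    fs = [lambda v: (v - 2) ** 2, lambda v: 0.5 * v ** 2, lambda v: abs(v - 1)]
    caps = [4, 4, 4]
    lo = [0, 1, 5]          # lower bounds on the nested prefix sums
    up = [3, 4, 5]          # upper bounds (last pair fixes the total at 5)
    print(brute_force(fs, caps, lo, up))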