Journées de l'optimisation 2014
Including an Industrial Optimization Day
HEC Montréal, May 5–7, 2014 (JOPT2014)
TB7 Programmation par contraintes 3 / Constraint Programming 3
May 6, 2014, 10:30 – 12:10
Room: TD Assurance Meloche Monnex
Chaired by John Chinneck
4 presentations

10:30 – 10:55
A Variable Ordering Heuristic within a Parallel GA Approach for Constraint Satisfaction Problems
A Constraint Satisfaction Problem (CSP) consists of a finite set of variables with finite domains and a finite set of constraints restricting the values the variables can take. Solving a CSP means finding a complete assignment of values to all the variables such that every constraint is satisfied. Many real-life applications involving constraints, such as scheduling and planning, can be efficiently solved as CSPs using a backtrack search algorithm in which constraint propagation reduces the size of the search space. While some attempts have been made to tackle CSPs with Genetic Algorithms (GAs), the latter approach suffers from a weak crossover operator. To overcome this limitation, we propose a variable ordering heuristic together with a novel crossover operator, and integrate both into a parallel architecture. This new system can solve hard problem instances, as validated by experimental tests on CSPs randomly generated with the RB model as well as on instances taken from Lecoutre's CSP library. Through these tests we demonstrate that our method is superior to the known GA-based techniques for CSPs, and that it competes with the efficient MAC-based Abscon 109 solver. Finally, additional experiments show that our parallel architecture has an anytime property and can solve CSPs in real time, returning a solution whose quality (number of satisfied constraints) depends on the time allocated for computation.
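To illustrate the backtrack-with-propagation baseline the abstract refers to (not the authors' GA method), here is a minimal Go sketch of backtracking with forward checking and a min-domain variable ordering heuristic on a small not-equal CSP; the types, names, and example instance are all invented for illustration:

```go
package main

import "fmt"

// A tiny binary CSP: integer domains and pairwise "not equal"
// constraints between neighbouring variables (graph-colouring style).
type CSP struct {
	domains    [][]int // domains[v] = candidate values for variable v
	neighbours [][]int // neighbours[v] = variables constrained with v
}

// selectVar picks the unassigned variable with the smallest remaining
// domain (the classic min-domain heuristic, used here as a stand-in).
func selectVar(assigned []bool, domains [][]int) int {
	best, bestSize := -1, 1<<30
	for v, d := range domains {
		if !assigned[v] && len(d) < bestSize {
			best, bestSize = v, len(d)
		}
	}
	return best
}

// solve runs backtracking with forward checking: after tentatively
// assigning a value, it prunes that value from neighbouring domains
// and backtracks if a domain becomes empty.
func solve(c CSP, assigned []bool, assignment []int, nAssigned int) bool {
	if nAssigned == len(c.domains) {
		return true
	}
	v := selectVar(assigned, c.domains)
	for _, val := range c.domains[v] {
		saved := make([][]int, len(c.domains)) // pruned domains to restore
		ok := true
		for _, u := range c.neighbours[v] {
			if assigned[u] {
				if assignment[u] == val {
					ok = false
				}
				continue
			}
			saved[u] = c.domains[u]
			pruned := []int{}
			for _, x := range c.domains[u] {
				if x != val {
					pruned = append(pruned, x)
				}
			}
			if len(pruned) == 0 {
				ok = false // wipe-out: val cannot work
			}
			c.domains[u] = pruned
		}
		if ok {
			assigned[v], assignment[v] = true, val
			if solve(c, assigned, assignment, nAssigned+1) {
				return true
			}
			assigned[v] = false
		}
		for u, d := range saved { // undo pruning before the next value
			if d != nil {
				c.domains[u] = d
			}
		}
	}
	return false
}

func main() {
	// 3-colour a triangle (0,1,2) plus a pendant vertex 3.
	c := CSP{
		domains:    [][]int{{0, 1, 2}, {0, 1, 2}, {0, 1, 2}, {0, 1, 2}},
		neighbours: [][]int{{1, 2}, {0, 2}, {0, 1, 3}, {2}},
	}
	assignment := make([]int, 4)
	fmt.Println(solve(c, make([]bool, 4), assignment, 0), assignment)
}
```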

10:55 – 11:20
Parallel Depth-bounded Discrepancy Search
Search strategies such as Limited Discrepancy Search (LDS) and Depth-bounded Discrepancy Search (DDS) find solutions faster than a standard Depth-First Search (DFS) when provided with good value selection heuristics. We propose a parallelization of DDS: Parallel Depth-bounded Discrepancy Search (PDDS). This parallel search strategy visits the nodes of the search tree in the same order as the centralized version of the algorithm. The algorithm provides intrinsic load-balancing: pruning a branch of the search tree affects each worker's workload equally. It is based on an implicit assignment of leaves to workers, which allows the workers to operate without communication during the search. We present a theoretical analysis of DDS and PDDS and show that PDDS scales to multiple thousands of workers. We run experiments on a massively parallel supercomputer to solve an industrial problem and improve on the best known solution.
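The property the abstract describes, workers covering the centralized visit order with no communication, can be sketched in Go as follows. This is a simplified discrepancy-ordered enumeration of a binary tree's leaves with a modulo partition of ranks among workers; it is not the actual PDDS algorithm (in particular, it ignores DDS's depth bound), and all names are invented:

```go
package main

import "fmt"

// discrepancies counts the deviations from the value heuristic in a
// leaf of a depth-d binary tree, where bit i of leaf is the branch
// taken at depth i (0 = the heuristic's preferred choice).
func discrepancies(leaf, depth int) int {
	n := 0
	for i := 0; i < depth; i++ {
		n += (leaf >> i) & 1
	}
	return n
}

// ddsOrder lists the leaves of a depth-d tree grouped by increasing
// discrepancy count: the order in which a discrepancy-based search
// reaches them (simplified; real DDS also bounds the depth at which
// discrepancies may occur).
func ddsOrder(depth int) []int {
	var order []int
	for k := 0; k <= depth; k++ {
		for leaf := 0; leaf < 1<<depth; leaf++ {
			if discrepancies(leaf, depth) == k {
				order = append(order, leaf)
			}
		}
	}
	return order
}

// workerLeaves gives worker w its share of the visit order without any
// communication: the leaf ranks congruent to w modulo nWorkers. Every
// worker can compute its share independently, and together the workers
// cover the leaves in the same global order as the centralized search.
func workerLeaves(order []int, w, nWorkers int) []int {
	var mine []int
	for rank, leaf := range order {
		if rank%nWorkers == w {
			mine = append(mine, leaf)
		}
	}
	return mine
}

func main() {
	order := ddsOrder(3)
	fmt.Println(order)                     // leaves grouped by discrepancy count
	fmt.Println(workerLeaves(order, 0, 2)) // worker 0's slice of that order
}
```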

11:20 – 11:45
A Constraint Programming Approach to the Minimum Connected Dominating Set Problem
A dominating set of an undirected graph is a subset S of its vertices such that every vertex of the graph is either in S or adjacent to a vertex in S. Finding a minimum-size connected dominating set is NP-hard but has important applications in communication and computer networks, especially as a virtual backbone in wireless networks. An iterative probing strategy solving the problem by integer programming was recently proposed. We present an alternative to this approach using Constraint Programming. We propose a global constraint for handling connectivity, and a propagator that uses a lower bound on the Steiner number obtained by performing a breadth-first search on each connected component of the dominating set under construction. We provide comparative empirical results.
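To make the two defining conditions concrete (this is only a feasibility checker for the definitions in the abstract, not the proposed global constraint or propagator), here is a hypothetical Go sketch that verifies, via breadth-first search, whether a vertex set is a connected dominating set:

```go
package main

import "fmt"

// isConnectedDominatingSet checks whether the vertex set S both
// dominates the graph (every vertex is in S or adjacent to S) and
// induces a connected subgraph, the latter via a BFS restricted to S.
func isConnectedDominatingSet(adj [][]int, S []int) bool {
	n := len(adj)
	inS := make([]bool, n)
	for _, v := range S {
		inS[v] = true
	}
	// Domination: every vertex is in S or has a neighbour in S.
	for v := 0; v < n; v++ {
		if inS[v] {
			continue
		}
		dominated := false
		for _, u := range adj[v] {
			if inS[u] {
				dominated = true
				break
			}
		}
		if !dominated {
			return false
		}
	}
	// Connectivity of the subgraph induced by S: BFS from S[0] must
	// reach every vertex of S using only edges inside S.
	if len(S) == 0 {
		return n == 0
	}
	seen := make([]bool, n)
	queue := []int{S[0]}
	seen[S[0]] = true
	count := 0
	for len(queue) > 0 {
		v := queue[0]
		queue = queue[1:]
		count++
		for _, u := range adj[v] {
			if inS[u] && !seen[u] {
				seen[u] = true
				queue = append(queue, u)
			}
		}
	}
	return count == len(S)
}

func main() {
	// A path 0-1-2-3-4: {1,2,3} is a minimum connected dominating set,
	// while {1,3} dominates the path but is not connected.
	adj := [][]int{{1}, {0, 2}, {1, 3}, {2, 4}, {3}}
	fmt.Println(isConnectedDominatingSet(adj, []int{1, 2, 3})) // true
	fmt.Println(isConnectedDominatingSet(adj, []int{1, 3}))    // false
}
```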

11:45 – 12:10
Experiments in Using Google's Go Language for Optimization Research
New optimization algorithms that do not take parallel execution into account are handicapped, since multicore machines are now everywhere, including on desktop PCs. For a recent project I searched for a programming language with three characteristics: (i) easy to program, (ii) simple facilities for dealing with parallelism, and (iii) fast compilation and execution. Google's free Go language (see golang.org) seemed to fit the bill. I report on experiments in using Go to program an experimental optimization heuristic.
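As a taste of the parallelism facilities the talk alludes to, here is a small hypothetical Go sketch (not the speaker's heuristic) that splits the evaluation of candidate solutions across goroutines with the standard `sync` package; the function names and the quadratic stand-in objective are invented for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// bestOf evaluates candidate solutions in parallel, one goroutine per
// worker over a contiguous chunk, and returns the minimizer and its
// objective value. A mutex guards the shared incumbent.
func bestOf(candidates []float64, nWorkers int, f func(float64) float64) (float64, float64) {
	var mu sync.Mutex
	var wg sync.WaitGroup
	bestX, bestVal := candidates[0], f(candidates[0])
	chunk := (len(candidates) + nWorkers - 1) / nWorkers
	for w := 0; w < nWorkers; w++ {
		lo, hi := w*chunk, (w+1)*chunk
		if hi > len(candidates) {
			hi = len(candidates)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(part []float64) {
			defer wg.Done()
			x, val := part[0], f(part[0]) // local best for this chunk
			for _, c := range part[1:] {
				if v := f(c); v < val {
					x, val = c, v
				}
			}
			mu.Lock() // merge into the shared incumbent
			if val < bestVal {
				bestX, bestVal = x, val
			}
			mu.Unlock()
		}(candidates[lo:hi])
	}
	wg.Wait()
	return bestX, bestVal
}

func main() {
	f := func(x float64) float64 { return (x - 3) * (x - 3) } // minimum at x = 3
	xs := []float64{0, 1, 2, 3, 4, 5, 6, 7}
	x, v := bestOf(xs, 4, f)
	fmt.Println(x, v) // 3 0
}
```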