Some people say Constraint Programming is the Holy Grail of programming: the user simply describes the problem in terms of variables, constraints, and possibly an objective function to minimize, and the solver finds a solution. This is almost magical. Let me illustrate it with the Job-Shop scheduling

problem. A set of activities must be scheduled on different machines. Each machine can execute only one activity at a time, and precedence constraints must hold between some activities. The objective is to schedule all the activities so that the overall duration of the schedule is minimized. This problem is very hard to solve, especially if you want to write a dedicated algorithm for it. With Constraint Programming, you can describe it at a high level: you just introduce some variables representing the start times of the activities, state the precedence constraints and the resource constraints, and that's it. The solver finds a solution for you.
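To make this concrete, here is a minimal sketch of such a model in plain Python. It replaces a real CP solver with brute-force enumeration over start times, and the instance data (activity names, machine assignments, durations, horizon) is invented purely for illustration:

```python
from itertools import product

# Hypothetical toy instance: activity -> (machine, duration).
acts = {"a": ("m1", 2), "b": ("m2", 3), "c": ("m1", 2)}
precedences = [("a", "b")]  # "a" must finish before "b" starts

HORIZON = 8  # start times range over 0..HORIZON

def feasible(starts):
    # Precedence constraints: end of x must not exceed start of y.
    for x, y in precedences:
        if starts[x] + acts[x][1] > starts[y]:
            return False
    # Resource constraints: activities on the same machine must not overlap.
    names = list(acts)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            x, y = names[i], names[j]
            if acts[x][0] == acts[y][0]:
                if not (starts[x] + acts[x][1] <= starts[y] or
                        starts[y] + acts[y][1] <= starts[x]):
                    return False
    return True

best = None
for combo in product(range(HORIZON + 1), repeat=len(acts)):
    starts = dict(zip(acts, combo))
    if feasible(starts):
        makespan = max(starts[x] + acts[x][1] for x in acts)
        if best is None or makespan < best[0]:
            best = (makespan, starts)

print(best)  # minimal makespan (5 for this instance) and one optimal schedule
```

A real CP solver expresses the same model declaratively and searches far more cleverly, but the variables (start times), the precedence constraints, and the no-overlap resource constraints are exactly the ingredients listed above.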

What is really nice with Constraint Programming is that you have a very expressive language to describe your problem. You can really compare it with playing Lego: every type of brick corresponds to a different type of constraint that can be reused in many different problems. But how does it work? First, the user introduces some variables. Every variable has a set of possible values that we call its domain. Then there are also the constraints of your model. A constraint is actually an algorithm in charge of removing impossible values. Let me illustrate this with a concept that sudoku players are very familiar with: the all-different constraint. Every variable must take a different value. As we can see, some deductions can be made to remove impossible values.
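The simplest such deduction can be sketched in a few lines of Python: whenever a variable's domain shrinks to a single value, that value becomes impossible for every other variable under the all-different constraint. The variable names and domains below are an invented sudoku-row-style example:

```python
def all_different_propagate(domains):
    """Naive propagation for an all-different constraint: whenever a
    variable's domain becomes a singleton, remove that value from
    every other domain, and repeat until nothing changes."""
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                v = next(iter(dom))
                for other, odom in domains.items():
                    if other != var and v in odom:
                        odom.discard(v)
                        changed = True
    return domains

# Three cells of a sudoku row (hypothetical example):
# x is already fixed to 1, y can be 1 or 2, z can be 1, 2 or 3.
domains = {"x": {1}, "y": {1, 2}, "z": {1, 2, 3}}
all_different_propagate(domains)
print(domains)  # {'x': {1}, 'y': {2}, 'z': {3}}
```

Real solvers use much stronger filtering algorithms for all-different, but this naive fixpoint loop already shows a constraint acting as an algorithm that removes impossible values.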

Then, to find a solution, the solver explores a search tree. But it won't try every possible combination of values from the domains; that would be far too inefficient. Instead, the solver prunes the search tree by letting the constraints remove impossible values. As soon as a domain becomes empty, there is no solution in that branch, so we can backtrack.
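The search scheme just described (branch on a variable, propagate, backtrack on an empty domain) can be sketched as follows; this is a simplified illustration with a single all-different constraint, not how any particular solver implements it:

```python
import copy

def propagate(domains):
    # All-different propagation: singleton values are removed
    # from every other domain, until a fixpoint is reached.
    changed = True
    while changed:
        changed = False
        for var, dom in domains.items():
            if len(dom) == 1:
                v = next(iter(dom))
                for other in domains:
                    if other != var and v in domains[other]:
                        domains[other].discard(v)
                        changed = True
    # Report failure as soon as some domain becomes empty.
    return all(domains[v] for v in domains)

def solve(domains):
    if not propagate(domains):
        return None                      # empty domain: dead end, backtrack
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}
    # Branch on the first unfixed variable, trying each value in turn.
    var = next(v for v, d in domains.items() if len(d) > 1)
    for value in sorted(domains[var]):
        child = copy.deepcopy(domains)
        child[var] = {value}
        sol = solve(child)
        if sol is not None:
            return sol
    return None

sol = solve({"x": {1, 2}, "y": {1, 2}, "z": {1, 2, 3}})
print(sol)  # {'x': 1, 'y': 2, 'z': 3}
```

Each branching decision triggers propagation, so whole subtrees of impossible combinations are pruned without ever being visited.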

In my lab we develop a Constraint Programming solver called OscaR. We do research to improve the efficiency of OscaR; if you like algorithms, you will surely love this line of research. We also try to improve the user experience of modeling problems. For instance, we are currently investigating how to distribute the computation over hundreds of processors, so that problems are solved very fast in a way that is almost transparent to the user. We also use OscaR to solve challenging problems. One cool problem we recently worked on is traffic engineering.

As you know, the Internet is a large connected network. But just as with road traffic, links can become congested because their capacity is limited. The traffic engineering problem is to decide, for every traffic demand, which path it should follow so that everything runs smoothly. We tackled this problem efficiently using OscaR and Constraint Programming. If you want to know more about Constraint Programming and the OscaR technology, feel free to click on the link below.
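To give a flavor of the traffic engineering problem, here is a toy sketch in Python. The network, capacities, demand volumes, and candidate paths are all invented for illustration, and brute-force enumeration stands in for a real solver; the objective of minimizing the worst link utilization is one common choice, not necessarily the one used in the work mentioned above:

```python
from itertools import product

# Hypothetical toy network: link -> capacity.
capacity = {"AB": 10, "AC": 10, "CB": 10}

# demand -> (volume, candidate paths, each path a list of links).
demands = {
    "d1": (8, [["AB"], ["AC", "CB"]]),  # 8 units from A to B
    "d2": (6, [["AB"], ["AC", "CB"]]),  # 6 units from A to B
}

best = None
names = list(demands)
# Try every assignment of a candidate path to each demand.
for choice in product(*(range(len(demands[d][1])) for d in names)):
    load = {l: 0 for l in capacity}
    for d, i in zip(names, choice):
        vol, paths = demands[d]
        for link in paths[i]:
            load[link] += vol
    # Keep only assignments that respect every link capacity,
    # and among those minimize the worst link utilization.
    if all(load[l] <= capacity[l] for l in capacity):
        worst = max(load[l] / capacity[l] for l in capacity)
        if best is None or worst < best[0]:
            best = (worst, dict(zip(names, choice)))

print(best)  # (worst utilization, chosen path index per demand)
```

Routing both demands over the direct link would overload it, so the best assignment splits them across the two paths, a deduction a CP solver makes through constraints rather than enumeration.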

I suppose it is correct to say that the reduction (an equivalence proof, in this case) from an NP-complete problem into a MIP is the most common one found in papers about computer science and optimization.

I could not find a documentation page on the OscaR Bitbucket page nor on Professor Schaus's homepage. Is the documentation available?