Error Function Learning with Interpretable Compositional Networks for Constraint-Based Local Search

Florian Richoux, Jean-François Baffier

In Constraint Programming, constraints are usually represented as predicates allowing or forbidding combinations of values. However, some Constraint-Based Local Search algorithms exploit a finer representation: error functions. By associating with each constraint type a function evaluating the quality of an assignment, error functions extend the expressiveness of the regular Constraint Satisfaction Problem / Constrained Optimization Problem formalisms. This comes at a heavy price: it makes problem modeling significantly harder, since one must provide a set of error functions that are not always easy to define. Here, we propose a method to automatically learn an error function corresponding to a constraint, given a function deciding whether assignments are valid. This is, to the best of our knowledge, the first attempt to automatically learn error functions for hard constraints. Our method learns error functions in a supervised fashion, trying to reproduce the Hamming distance, using a variant of neural networks we call Interpretable Compositional Networks, which yields interpretable results, unlike regular artificial neural networks. We run experiments on five different constraints to show the method's versatility. Experiments show that functions learned on small dimensions scale to high dimensions, outputting a perfect or near-perfect Hamming distance for most tested constraints.
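The distinction between a predicate and an error function can be illustrated with a classic hand-crafted example for the AllDifferent constraint (a standard textbook error function, not one learned by the method described here):

```python
from collections import Counter

def all_different_predicate(assignment):
    """Predicate view: True iff all variables take distinct values.
    It says only whether the constraint is satisfied."""
    return len(set(assignment)) == len(assignment)

def all_different_error(assignment):
    """Hand-crafted error function for AllDifferent: the number of surplus
    occurrences of each value, i.e. how many variables must change value.
    It is 0 exactly when the predicate holds, giving local search a gradient
    toward valid assignments instead of a binary accept/reject signal."""
    counts = Counter(assignment)
    return sum(c - 1 for c in counts.values())

print(all_different_error([1, 2, 3, 4]))  # 0 -> assignment is valid
print(all_different_error([1, 1, 2, 2]))  # 2 -> two variables must change
```

A local search solver can minimize such an error function directly, whereas the predicate alone offers no guidance on which neighboring assignment is "less wrong".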
