# Pipes, Clojure and Choco (part 5: Choco)

This is the fifth in a series of posts that explore **constraint
programming**. So far we have solved the problem with a “direct”
search and with a smarter, hand-rolled search.

Both those solutions were written from scratch, which was fun, but, as good engineers, we would also like to know if an existing library can do the job.

This kind of problem - a search over a discrete space (there are no fractional tiles), where the solution is restricted by certain conditions - is known as “constraint programming over an integer domain”. Searching around for a suitable Java library, I found Choco (in the near future Java will have a relevant standard, based on JSR 331 - and I think that Choco will provide the default implementation).

A small aside on Choco - they seem to be updating their site, and I can’t find very much (like, no API docs or Maven repository). So I downloaded the source, which is built using Maven, and generated the site locally (for the Javadoc API) and installed the libraries in my local Maven repository (cache). Also, some tests fail, so compile with tests disabled.

We can call Java libraries from Clojure, and the syntax is easy to read, so I won’t dwell on that here. Instead, I will try to give an overview of what we need to do to use Choco. The aim is to understand Choco better so that we have a clearer idea of how it might be used elsewhere.

## A Model and Variables

Variables are the values that Choco will find; they are grouped together into
a model. We will use one variable for each cell - the value of each variable
(when known) is the value of the bit pattern (typically called `mask` in the
code) for the pipes at that point.
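The bit-pattern idea can be sketched in Java. Note that the concrete bit values below are assumptions for illustration only - the real constants were defined in the earlier articles:

```java
// Sketch of the mask encoding: one bit per pipe direction.
// These particular bit values are hypothetical, not taken from the series.
public class Mask {
    public static final int TOP = 1, RIGHT = 2, BOTTOM = 4, LEFT = 8;

    // True if the mask has a pipe in the given direction.
    public static boolean hasBit(int mask, int bit) {
        return (mask & bit) != 0;
    }

    public static void main(String[] args) {
        int elbow = TOP | RIGHT;                  // a tile joining top and right
        System.out.println(hasBit(elbow, TOP));   // true
        System.out.println(hasBit(elbow, LEFT));  // false
    }
}
```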

Because Choco provides an abstract, generalised interface, it is simplest to
associate a variable with *every* cell, even if the cell’s value is fixed.
This avoids the need to filter for special cases in the constraints below, but
means that, in this article, references to masks and tiles will include *all*
possible values (unlike earlier articles, where they described only the
available values).

We still need to specify that some cells have fixed values - that the
associated “variables” are, in fact, constants. This is done by declaring an
`IntegerConstantVariable` where appropriate:

```
(defn variable-map [n masks cells]
  (let [all-values (int-array masks)]
    (apply merge
      (for [i (range n), j (range n)]
        {[i j] (if-let [value (cells [i j])]
                 (IntegerConstantVariable. value)
                 (IntegerVariable. (str "[" i "," j "]") all-values))}))))
```

Note how, when a free variable is declared, we specify a set of possible values. Also, I have stored all the variables in a map for easy access by coordinate in the code below.

Choco provides no relationship between variables - it doesn’t “know” that they are on a grid, for example. All that must be provided by us.

## Cardinality

The next constraint to add (after the allowed ranges of values, implicit in the variable declarations above) is cardinality: the idea that there are a fixed number of each of the different tiles. So if we use a tile for one cell, we cannot also use it for another.

```
(defn cardinality [tiles lo hi]
  (int-array (for [i (range lo (inc hi))] (get tiles i 0))))

(defn constrain-cardinality [model tiles variables]
  (let [lo (apply min (keys tiles))
        hi (apply max (keys tiles))
        c (cardinality tiles lo hi)
        v (into-array IntegerVariable (vals variables))]
    (.addConstraint model
      (Choco/globalCardinality v (ints c) (ints c) (int lo)))))
```

If the code above seems over-complex, it’s because the Java API is quite general:

```
/**
 * Given an array of variables vars, the constraint ensures that the number
 * of occurrences of the value i among the variables is between low[i - min]
 * and up[i - min]. Note that the length of low and up should be
 * max - min + 1 (min is the minimal value over all variables, and max the
 * maximal value over all variables).
 */
public static Constraint globalCardinality(
    IntegerVariable[] vars, int[] low, int[] up, int offset)
```

For our use `low` and `up` are identical (also, the `min` in the comment
refers to the `offset` parameter).
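The indexing convention can be made concrete with a small sketch - this checker is my own illustration of what `globalCardinality` enforces, not part of Choco:

```java
// Illustration of the globalCardinality contract: the count of value v is
// checked against low[v - offset] and up[v - offset].
public class Cardinality {
    public static boolean satisfied(int[] vars, int[] low, int[] up, int offset) {
        int[] counts = new int[low.length];
        for (int v : vars) counts[v - offset]++;          // index by v - offset
        for (int i = 0; i < counts.length; i++)
            if (counts[i] < low[i] || counts[i] > up[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        int[] vars = {5, 6, 5};           // values drawn from {5, 6, 7}
        int[] exact = {2, 1, 0};          // require two 5s, one 6, no 7s
        System.out.println(satisfied(vars, exact, exact, 5));  // true
    }
}
```

Passing the same array for `low` and `up`, as our Clojure code does, pins each count exactly.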

## Edges

Pipes cannot “leave” the edges of the puzzle. We can add this constraint by excluding tiles which would have such pipes from the variables along the edges of the puzzle:

```
(defn constrain-edge [model to-ij n bit masks variables]
  (let [excluded (int-array (filter #(bits % bit) masks))]
    (doall (map (fn [variable]
                  (.addConstraint model (Choco/notMember variable excluded)))
                (map variables (for [k (range n)] (to-ij k)))))))

(defn constrain-edges [model n masks variables]
  (let [m (dec n)]
    (constrain-edge model (fn [k] [0 k]) n left masks variables)
    (constrain-edge model (fn [k] [m k]) n right masks variables)
    (constrain-edge model (fn [k] [k 0]) n top masks variables)
    (constrain-edge model (fn [k] [k m]) n bottom masks variables)))
```

Note that the approach above, excluding values, works correctly on corner cells. We could also have specified the constraint in an inclusive way, but then corners would have needed special treatment.
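The filter inside `constrain-edge` is the heart of this constraint - in Java it would look something like the sketch below (bit values are again illustrative, not the series’ real constants):

```java
import java.util.Arrays;

// Java equivalent of the filter in constrain-edge: keep only the masks that
// have a pipe crossing the given edge, so they can be excluded from the
// cells along that edge.
public class Edges {
    public static int[] excluded(int[] masks, int bit) {
        return Arrays.stream(masks).filter(m -> (m & bit) != 0).toArray();
    }

    public static void main(String[] args) {
        int left = 8;                       // hypothetical bit for "pipe to the left"
        int[] masks = {1, 9, 8, 6};
        // only masks 9 and 8 have the left bit set:
        System.out.println(Arrays.toString(excluded(masks, left)));  // [9, 8]
    }
}
```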

## Neighbours

A fixed cell constrains neighbouring cells in the puzzle (because pipes in both must join together). Choco supports “binary relations” which test whether two values are consistent via table lookup. These can provide the constraints we need if we construct appropriate tables - one for vertical neighbours, one for horizontal - and then specify that they hold for each pair of neighbours:

```
(defn constrain-pairs [model n lo hi masks relation? variables swap]
  (let [relation
        (Choco/makeBinRelation
          (int-array [lo lo]) (int-array [hi hi])
          (java.util.ArrayList.
            (map int-array
              (for [m1 masks, m2 masks :when (relation? m1 m2)] [m1 m2])))
          true)]
    (doseq [[v1 v2]
            (map #(map variables %)
                 (for [i (range (dec n)), j (range n)]
                   (map swap [[i j] [(inc i) j]])))]
      (.addConstraint model (Choco/relationPairAC v1 v2 relation)))))

(defn consistent [our-mask our-bit nbr-mask nbr-bit]
  (= (no-bits our-mask our-bit) (no-bits nbr-mask nbr-bit)))

(defn make-relation? [bit1 bit2]
  (fn [mask1 mask2] (consistent mask1 bit1 mask2 bit2)))

(defn swap [[a b]] [b a])

(defn constrain-neighbours [model n lo hi masks tiles variables]
  (constrain-pairs
    model n lo hi masks (make-relation? right left) variables identity)
  (constrain-pairs
    model n lo hi masks (make-relation? bottom top) variables swap))
```

In `constrain-pairs` the `doseq` is written to generate horizontal
neighbours, but applying `swap` switches the coordinates to vertical
neighbours; the `filter apply-and` drops pairs of variables where one cell is
already defined.
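The consistency test, and the table of allowed pairs handed to `makeBinRelation`, can be sketched in Java (again with illustrative bit values): two neighbouring masks agree when either both have a pipe across their shared edge or neither does.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the neighbour-consistency relation and its lookup table.
public class Neighbours {
    // True when the pipe across the shared edge is present in both masks
    // or absent from both.
    public static boolean consistent(int ourMask, int ourBit, int nbrMask, int nbrBit) {
        return ((ourMask & ourBit) != 0) == ((nbrMask & nbrBit) != 0);
    }

    // All (m1, m2) pairs that satisfy the relation - the table that a
    // binary relation constraint would consult.
    public static List<int[]> table(int[] masks, int bit1, int bit2) {
        List<int[]> pairs = new ArrayList<>();
        for (int m1 : masks)
            for (int m2 : masks)
                if (consistent(m1, bit1, m2, bit2)) pairs.add(new int[]{m1, m2});
        return pairs;
    }

    public static void main(String[] args) {
        int right = 2, left = 8;  // hypothetical direction bits
        // mask 2 pipes right, mask 8 pipes left - they join:
        System.out.println(consistent(2, right, 8, left));  // true
        // mask 2 pipes right, mask 1 has no left pipe - mismatch:
        System.out.println(consistent(2, right, 1, left));  // false
    }
}
```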

## Results

Once the constraints above are provided, we can invoke a “solver” (provided by Choco) that will provide a solution to the model. And, assuming it succeeds, we can then copy the contents of the variables into a puzzle.

Obviously there are some overheads involved in this process, and in constructing the initial constraints, but that is the price you pay for using a general library. The important question is whether these “one-off” costs are repaid by more efficient iterations when solving the problem.

So we need to run the benchmarks. Here are the results, compared to the direct search (direct time in ms, Choco time in ms, the ratio Choco/direct):

Clues | 4x4 | 5x5 | 6x6 | 7x7 | 8x8 | 9x9 |
---|---|---|---|---|---|---|
20% | 0.1, 1, 7x | 0.7, 2, 3x | 4, 3, 0.8x | - | - | - |
30% | 0.1, 1, 9x | 0.3, 2, 4x | 2, 2, 1x | 8, 4, 0.5x | - | - |
40% | 0.1, 1, 14x | 0.2, 1, 7x | 0.6, 2, 3x | 3, 3, 1x | 11, 5, 0.5x | - |
50% | 0.1, 1, 16x | 0.1, 1, 9x | 0.3, 2, 9x | 0.8, 3, 3x | 2, 3, 1x | 3, 5, 1x |

(There are some changes to the direct times compared to the previous table, particularly on larger puzzles. This seems to be due to “noise” from the variations in the puzzles used, but since the same puzzles are used for each algorithm within the benchmark these do not affect the comparison within the table.)

As we saw with the “smart” search, adding more intelligence to the search does not seem to improve times - the lower overheads of direct search allow it to explore more options in less time.

It is also interesting to ask how Choco compares to the smart search (smart time in ms, Choco time in ms, the ratio Choco/smart):

Clues | 4x4 | 5x5 | 6x6 | 7x7 | 8x8 | 9x9 |
---|---|---|---|---|---|---|
20% | 0.5, 1, 2x | 1, 2, 1x | 4, 3, 0.8x | - | - | - |
30% | 0.5, 1, 2x | 1, 2, 2x | 3, 2, 1x | 6, 4, 0.7x | - | - |
40% | 0.4, 1, 3x | 1, 1, 2x | 2, 2, 1x | 3, 3, 0.8x | 7, 5, 0.8x | - |
50% | 0.3, 1, 3x | 1, 1, 2x | 1, 2, 2x | 2, 3, 1x | 4, 3, 0.7x | 8, 5, 0.7x |

Here, things are much closer. In general the hand-rolled smart routine is faster on smaller problems, which you might expect from the lower initialisation costs. And the Java code in Choco may give it the edge on larger problems with more iterations.

Unfortunately, we don’t have a good way to test larger puzzles: our examples become ambiguous even with 50% clues and the concentric circles are susceptible to systematic bias (I tried to avoid this with the initial algorithms by shuffling child nodes, but there is no way to do the same for Choco).

Having said the above, here are the times for concentric puzzles:

Algorithm | 6x6 | 8x8 | 10x10 | 12x12 |
---|---|---|---|---|
direct | 90ms | 500ms (0.5s) | 10,000ms (10s) | 300,000ms (5m) |
smart | 80ms | 1,000ms (1s) | 40,000ms (40s) | 2,000,000ms (30m) |
choco | 70ms | 80ms (0.1s) | 1,400ms (1.4s) | 90,000ms (1.5m) |

And for larger puzzles with 50% clues (single puzzle, same across all algorithms):

Algorithm | 10x10 | 11x11 | 12x12 | 13x13 | 14x14 | 15x15 | 20x20 | 25x25 | 30x30 |
---|---|---|---|---|---|---|---|---|---|
direct | 100ms | 200ms | 500ms | 500ms | 2,000ms (2s) | 500,000ms (8m) | - | - | - |
smart | 200ms | 80ms | 30ms | 70ms | 1,000ms (1s) | 1,000ms (1s) | 300ms (0.3s) | 500ms (0.5s) | 200,000ms (3m) |
choco | 200ms | 100ms | 50ms | 50ms | 200ms (0.2s) | 400ms (0.4s) | 20,000ms (20s) | 1,000ms (1s) | 300,000ms (5m) |

The per-puzzle noise is clearly significant and given all the qualifications and limitations mentioned above I am not motivated to look deeper. But, as ever, latest code is in the repo, so feel free to investigate yourself!

## Conclusions

We’ve covered a lot of ground in these articles.

If you’re new to Clojure you hopefully have a better idea of how it can be
used. For me, now that I am used to the syntax, it feels like “Python in
Java”, with the added advantage that lazy sequences are much easier to use
than Python’s `yield` (I’ve tried programming in Lisps many times, but this is
the first time it has felt useful and productive - it may be just experience,
finally, but it feels like lazy sequences are the critical difference).

If you’re new to Clojure *and* functional programming then I suspect I have
moved too fast. Sorry - it’s impossible to explain everything in the time I
have available.

If you’re new to Choco then I hope you have a better idea of the way that it
works. One thing I have *not* done is show the range of constraints available
- I’d suggest reading through the API docs for the `Choco` class to get a
better feel for that. And there is much I still don’t understand - I haven’t
tried different solvers, and I have no idea what other approaches the more
complex parts of Choco will support.

More generally, we’ve also seen that Choco didn’t offer much of an advantage over a hand-written solution - at least here, when such an approach was reasonable. Another important lesson is that direct (relatively - it still uses some “obvious” constraints) search was sufficient for the initial problem.

That’s the end (I think) of this series. Unless Scott writes a Prolog version (hint!). Thanks for reading - I hope you learnt as much as I did.

Andrew

## Related Posts

- Pipes, Clojure and Choco - Optimisation
- Pipes, Clojure and Choco (part 3: Smart Search)
- An ORM for C?
- Pipes, Clojure and Choco (part 2: Direct Search)
- Choco and Regular Expressions
- Pipes, Clojure and Choco (part 1: Limits of Laziness)
- Pipes, Clojure and Choco (part 4: Benchmarking)
- How to Average Complex Responses
- CMake with Check Unit Tests
- Javascript Graphics w/ Raphael