Solving Calibron 12, a very hard wooden block puzzle — Part 2

In Part 1 of this series, we showed how a solution to the puzzle could be represented simply by the sequence of 12 pieces in the order we place them, given an agreed-upon strategy of where to place pieces. We then said a 1 GHz computer could brute-force all ~10¹² possible sequences in about 33 minutes, but only if we could check that a single sequence was a valid solution in 1 CPU cycle. Unfortunately, doing that requires converting our linear sequence into a 2D geometric representation, which will almost certainly take longer than that!

In this part, we will show how the 2D geometry problem can be solved relatively quickly (though not in constant time). We will also use the same fit-checking function to prune large swathes of the search tree, which, when combined with a smart value-ordering strategy, finds a valid solution after trying fewer than 2000 possibilities, taking just tens of milliseconds of real time!

Two Geometry Questions

We need to do 2D geometry to answer two questions. First, we need to convert the sequence of pieces into physical locations on the board. Previously, we said that the placement strategy is to always place the next piece in the top-most, left-most empty corner. How do we find that corner? Second, once we’ve figured out all the physical locations of the pieces, how do we know if they form a valid solution or not?

Let’s address the second problem first. Just like in Part 1, we’ll borrow some insight from how a human would try to solve this puzzle. Say you’ve chosen the corner where you want to place your next piece. Depending on the size of the remaining “hole”, the piece you choose may or may not fit. In a complete 12-piece solution, every piece must fit! Conversely, in every invalid solution, at least one piece will not fit. So to check if a solution is valid, we simply make sure that every piece we place fits. (We’ll describe explicitly how we do this when we discuss the data structure below.) A nice side effect of this is that if, at any point, we find a piece that doesn’t fit, we can stop trying sequences that involve that piece in that location! E.g. if the 4th piece in a sequence doesn’t fit, we don’t need to try placing the remaining 5th to 12th pieces in the sequence. This is called pruning and it can give a massive reduction in search time.

Corner Finding

How do we find the top-most, left-most empty corner of a (partially-filled) board? More importantly, how do we do it quickly? Let’s get the obviously inefficient answers out of the way. Any algorithm that allocates and then fills a 2D array of size 56 * 56 would make the whole program way too slow because we’d have to do 56² = 3136 operations every time we place a piece! Ideally, the algorithm would only look at each placed piece once, of which there are never more than 12.

The first hint is to recognize that our placement strategy always puts pieces next to one another, so we can think of the currently placed pieces as all merged together into one blob (okay fine, a polygon). So to find the corner, we don’t need to look at all the placed pieces, just their perimeter. And because our placement strategy always puts a piece under and to the right of some corner, we only care about the bottom and right edges of that perimeter, and the vertices between them (we are looking for a vertex after all). Finally, realize that tracking just the horizontal bottom edges is enough, since they already contain all the vertices and the other edges can be inferred from that.

Horizontal edges can be defined by 3 properties:

  • y (vertical position)
  • x1 (left end)
  • x2 (right end)

At any time, to find the top-most, left-most corner (vertex), we simply need to find the top-most, left-most edge! We define an Edge class and implement the Comparable interface, so that edges naturally sort this way. Then we keep track of the edges in a data structure and update it every time we place a piece.
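
For concreteness, here is a minimal sketch of what such an Edge class could look like (the field and method names here are mine, not necessarily the ones used in the final code):

// A horizontal bottom edge of the filled region. Edges sort by vertical
// position first, then by left end, so the smallest element is always
// the top-most, left-most edge.
class Edge implements Comparable<Edge> {
    final int y;   // vertical position (0 = top of the board)
    final int x1;  // left end
    final int x2;  // right end

    Edge(int y, int x1, int x2) {
        this.y = y;
        this.x1 = x1;
        this.x2 = x2;
    }

    int width() {
        return x2 - x1;
    }

    @Override
    public int compareTo(Edge other) {
        if (this.y != other.y) return Integer.compare(this.y, other.y);
        return Integer.compare(this.x1, other.x1);
    }
}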

Updating the Edges

How do we update the structure? Consider the initial state. When the board is empty, there is just one edge at the very top (y = 0), spanning the full width (x1 = 0, x2 = 56). We place a piece in the top-left corner, at (0,0). This alters the perimeter. Say that piece has height 5 and width 10. The previous edge needs to be replaced by two new (shorter) edges:

  • (y = 5, x1 = 0, x2 = 10)
  • (y = 0, x1 = 10, x2 = 56)

In general, every time we place a piece, we remove the edge with the minimal (y, x1) value (comparing y first, then x1) and add two new edges. Exceptions:

  1. If the new piece’s width is equal to the old edge’s width, then only one edge is added.
  2. If the new piece is wider than the old edge, then the piece doesn’t fit! (This is always true, even if we haven’t reached the right-most edge of the board: because we are always looking at the top-most edge, whatever lies to the right of it has a lower bottom edge, so the already-placed material there “overhangs” and blocks the placement.) So a simple width check is all that’s needed to see if the piece fits!

One final edge case (no pun intended): if two pieces are placed next to each other, such that their bottom edges are at the same vertical position, then our data structure will contain those two edges. But we should really treat them as one edge, otherwise rule #2 above would reject a piece for not fitting, when it really does! We will fix this by merging together these edges, either on insert or on removal from the structure.

Choosing the Right Data Structure

The right or wrong data structure can make or break your algorithm. The ideal data structure for this problem is the binary heap, specifically a min-heap (Java provides this in java.util.PriorityQueue). A min-heap is a tree that always keeps the smallest element at the root, so finding that element, i.e. peek(), takes constant time. Removing the smallest element is logarithmic time, as is inserting new elements. To fix the “final edge case,” we will use peek() to merge edges of the same height on removal.
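
Putting these rules together, a rough sketch of the insert-and-check routine might look like the following. This is a simplified illustration, not the exact code from the repo; the Board wrapper, the SIZE constant, and the bottom-of-board check are my own additions.

import java.util.PriorityQueue;

class Board {
    static final int SIZE = 56; // the board is 56 x 56

    // Tries to place a piece of the given width and height at the
    // top-most, left-most corner. Returns true and updates the edges
    // if it fits, false otherwise.
    static boolean insertRect(PriorityQueue<Edge> edges, int width, int height) {
        Edge edge = edges.poll(); // the top-most, left-most edge

        // "Final edge case": merge any neighboring edge that sits at the
        // same height and is directly adjacent to the right.
        while (!edges.isEmpty()
                && edges.peek().y == edge.y
                && edges.peek().x1 == edge.x2) {
            Edge next = edges.poll();
            edge = new Edge(edge.y, edge.x1, next.x2);
        }

        // Rule #2: a piece wider than the available edge can't fit.
        if (width > edge.width()) {
            return false;
        }
        // Extra safety check (my addition): don't run past the bottom.
        if (edge.y + height > SIZE) {
            return false;
        }

        // Bottom edge of the newly placed piece.
        edges.add(new Edge(edge.y + height, edge.x1, edge.x1 + width));
        // Rule #1: only add the leftover part of the old edge if there is one.
        if (width < edge.width()) {
            edges.add(new Edge(edge.y, edge.x1 + width, edge.x2));
        }
        return true;
    }
}

Note that when a piece doesn’t fit, the queue has already been modified by the poll; in my program this is fine because each call works on a copy of the edge structure (see the footnote at the end).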

That’s it for the geometry problem! In my code, this is all implemented in the insertRect(edges, rect) method. This method finds the corner, checks if the piece fits, and updates the structure, all in O(log(E)) run-time¹, where the number of edges E ≤ the number of pieces (12). Given the various edge cases and exceptions, I decided to write unit tests for this method. This was well worth it, because it caught one bug early (I had swapped some parameters), and I didn’t encounter any more bugs the rest of the way.

“It’s not brute force, it’s a recursive backtracking search!”

Now that we’re pretty confident of our strategy, all that’s left is to write the rest of the search code.

Remember how we said a brute force solution looks like a nested loop? Well, writing a 12-level nested loop would be pretty unwieldy. Instead, we’ll use recursion. This is a common pattern when generating combinatoric results. Recursion also makes it easier to apply pruning on each level, which is what separates this from a dumb, brute force search. 🙂

The pseudo-code might look like this:

search():

    // Base case
    if no pieces remain:
        return true // solved!

    for all remaining pieces:
        insert piece

        if the piece fits:
            // recursively try placing the next piece

            if search() is true:
                 return true // solved!

        // backtrack
        remove piece // try another

    // no more options
    return false
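
In Java, using the Edge and insertRect sketches from earlier, the recursion might be rendered roughly like this. The Piece class, the placed flag, and the copy-per-call backtracking are my own framing (see the footnote below for why a copy is made on every call):

import java.util.List;
import java.util.PriorityQueue;

class Solver {
    static class Piece {
        final int w, h;
        boolean placed;
        Piece(int w, int h) { this.w = w; this.h = h; }
    }

    static boolean search(List<Piece> pieces, PriorityQueue<Edge> edges) {
        // Base case: every piece has been placed (and fit), so we're done.
        if (pieces.stream().allMatch(p -> p.placed)) {
            return true; // solved!
        }

        for (Piece p : pieces) {
            if (p.placed) continue;

            // Try both orientations (for a square piece the second try
            // is redundant but harmless).
            int[][] orientations = { { p.w, p.h }, { p.h, p.w } };
            for (int[] dims : orientations) {
                // Work on a copy of the edges so backtracking is trivial.
                PriorityQueue<Edge> copy = new PriorityQueue<>(edges);
                if (Board.insertRect(copy, dims[0], dims[1])) {
                    p.placed = true;
                    // Recursively try placing the next piece.
                    if (search(pieces, copy)) {
                        return true; // solved!
                    }
                    p.placed = false; // backtrack: try another piece
                }
            }
        }
        return false; // no more options at this level
    }
}

The initial call seeds the queue with the single full-width edge (y = 0, x1 = 0, x2 = 56) described earlier, along with the list of the 12 pieces.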

Here is my full implementation, written in Java: https://github.com/zAlbee/calibron12.

This implementation takes about 12 ms and 1,910 backtracks to find the solution on a Core i7 processor from 2017, which beats the fastest documented time I could find on the Internet. The next best was 30 seconds/400K attempts with an Excel spreadsheet (in 2013)!

Final thoughts:

  • The order in which you try pieces matters: sorting by largest piece first finds a solution in 1,910 backtracks, while the reverse sort takes 1,001,773 backtracks, more than 500 times as many! This is known as value-ordering, and tuning it is a common way to find solutions faster in constraint programming (see the sketch after this list). You can think of our puzzle as a constraint satisfaction problem (CSP) that tries to assign the correct values (piece and orientation) to 12 variables (locations). Variable-ordering is another possible optimization that we didn’t try.
  • If we make the search keep going after finding a solution, the program actually prints 8 solutions. Upon closer inspection, they are all simply rotations (4) and mirrors (2) of each other. This takes 5,485,425 backtracks, regardless of value-ordering.
  • Pruning helps massively! 5,485,425 is more than 350,000 times smaller than the brute force search space of 1,961,990,553,600.
  • A naive program with a ridiculously large combinatoric search space (like the 10⁴⁵ from Part 1) will NEVER be fast enough, no matter how fast computers get.
  • Algorithms can always benefit from human insight — the danger of AI taking over programming jobs is overstated. 😉
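
As referenced in the first bullet, value-ordering can be as simple as sorting the piece list by area before the search starts. A minimal sketch, reusing the hypothetical Piece class from the search sketch above:

import java.util.Comparator;
import java.util.List;

class Ordering {
    // Largest-piece-first value ordering: sort by area, descending.
    static void largestFirst(List<Solver.Piece> pieces) {
        pieces.sort(Comparator.comparingInt((Solver.Piece p) -> p.w * p.h).reversed());
    }
}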

¹ While the insertRect() function only takes O(log(E)) time, in my implementation, placing any piece still takes linear O(E) time. When we backtrack, we need to reverse any updates we made to our state, including the edge data structure. I did not implement a way to do this efficiently; instead, I make a copy of the edge structure before every call to insertRect(), which takes time linear in the number of edges. Sadly, this dwarfs the nice logarithmic time of the binary heap structure. But with such a small number of edges (≤ 12), this doesn’t matter much. The final program is plenty fast already! Making it faster is left as an exercise for the reader.

Posted by Albert Choi

Senior Software Architect