You know you're a programmer when...

Last night I was trying to decide where to plant some flowers. There were some restrictions - for one thing, I didn't want any plant to be hidden behind a taller one. The morning glory had to go next to the railing, so it would have something to climb. Identical plants should be adjacent, but similar colors shouldn't be. And something conspicuous should probably go in front, so no one would trample the flowerbed by accident. Obviously this was a constraint problem, and one too hard to solve in my head. So, trowel in hand, I set about figuring out how to solve it by machine.

It is a valuable habit for a programmer to hand any problem over to a program at the first sign of difficulty. Or at least it's valuable when you're sitting at the keyboard and already know how to solve the problem. Unfortunately I've never actually done constraint programming, and nothing in the garden proved helpful. So I was still trying to figure out how to explain the problem to a computer when I noticed it had gotten dark, and I had only planted a few of the flowers.

Real programmers can build constraint solvers out of dirt and weeds. I, on the other hand, got to plant my flowers in the dark.
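For what it's worth, a bed this small can simply be brute-forced. Below is a sketch in Python with invented plant data (the names, heights, and colors are all assumptions, not the actual flowers), treating the bed as a single back-to-front row:

```python
from itertools import permutations

# All plant data (name, height, color) is invented for illustration.
plants = [
    ("morning glory", 3, "blue"),
    ("marigold", 1, "orange"),
    ("marigold", 1, "orange"),
    ("zinnia", 2, "red"),
]

def valid(row):
    names = [n for n, _, _ in row]
    heights = [h for _, h, _ in row]
    colors = [c for _, _, c in row]
    # No plant hidden behind a taller one: heights must not grow toward the front.
    if any(heights[i] < heights[i + 1] for i in range(len(row) - 1)):
        return False
    # The morning glory goes at the back, next to the railing.
    if names[0] != "morning glory":
        return False
    # Identical plants must be adjacent (their positions form one contiguous run).
    for name in set(names):
        spots = [i for i, n in enumerate(names) if n == name]
        if spots[-1] - spots[0] != len(spots) - 1:
            return False
    # Similar colors shouldn't touch (between distinct plants).
    for i in range(len(row) - 1):
        if names[i] != names[i + 1] and colors[i] == colors[i + 1]:
            return False
    return True

solutions = {row for row in permutations(plants) if valid(row)}
```

With these four plants there is exactly one valid arrangement, found in well under a trowel's worth of time.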

1 comment:

  1. Constraint programming isn't difficult or complicated. It's basically combinatoric search with branch pruning. Most of the cleverness comes in your choice of pruning. If you've ever competed in any programming contests, or solved many of the typical problems they pose, then you'll be familiar with the general pattern:

    * A set of elements from which you can generate combinations and permutations

    * A test that determines success: either a boolean function, or an extremal condition such as minimizing or maximizing some scalar function
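The general pattern can be sketched as a recursive permutation search with pruning. The function and parameter names here are illustrative, not standard:

```python
# Recursively enumerate permutations of `items`, pruning any partial
# arrangement that already fails the test (branch pruning).
def search(items, partial_ok, complete_ok, chosen=()):
    if not items:
        return [chosen] if complete_ok(chosen) else []
    results = []
    for i, item in enumerate(items):
        candidate = chosen + (item,)
        if partial_ok(candidate):  # prune: skip this whole subtree on failure
            rest = items[:i] + items[i + 1:]
            results.extend(search(rest, partial_ok, complete_ok, candidate))
    return results

# Example constraint: adjacent elements must differ by more than 1.
ok = lambda seq: all(abs(a - b) > 1 for a, b in zip(seq, seq[1:]))
found = search([1, 2, 3, 4], ok, ok)
```

Because the constraint applies to every prefix, failing partial arrangements can be discarded without enumerating their completions.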

    Combinations and permutations are, for most problems, easily produced with one or two recursive routines, though some more complicated producers are easier to write in a language like C# that offers an iterator construct. Consider, for example, generating all possible binary tree shapes for a given node count: the logic is easier to reason about with iterators than without. However, reducing tree operations to RPN operations (RPN is simply a postorder traversal) usually makes such tree permutations easier to create.
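The binary-tree-shapes example can be sketched with Python generators, which play the same role as the iterator construct mentioned above:

```python
# Yield every binary tree shape with n nodes, encoded as nested
# (left, right) tuples, with None for the empty tree.
def tree_shapes(n):
    if n == 0:
        yield None
        return
    for left_size in range(n):  # split n - 1 remaining nodes between subtrees
        for left in tree_shapes(left_size):
            for right in tree_shapes(n - 1 - left_size):
                yield (left, right)
```

The number of shapes for each node count follows the Catalan numbers: 1, 1, 2, 5, 14, 42, and so on.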

    When maximizing or minimizing, an obvious pruning suggests itself: keep track of the best score found so far, and whenever a partial solution reaches it (assuming a monotonically increasing or decreasing fitness function), discard that subtree of the search space.
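A minimal branch-and-bound sketch of that pruning, using an invented toy objective (order items so that the sum of adjacent differences is smallest): the cost only grows as a partial solution is extended, so any partial cost at or above the best complete cost can be discarded.

```python
# Find the ordering of distinct numbers minimizing the sum of
# absolute differences between adjacent elements.
def best_order(items):
    best = {"cost": float("inf"), "order": None}

    def extend(chosen, cost):
        if cost >= best["cost"]:  # prune: this subtree cannot improve the best
            return
        if len(chosen) == len(items):
            best["cost"], best["order"] = cost, chosen
            return
        for x in items:
            if x not in chosen:
                step = abs(chosen[-1] - x) if chosen else 0
                extend(chosen + (x,), cost + step)

    extend((), 0)
    return best["order"], best["cost"]
```

The monotonicity assumption matters: if extending a partial solution could ever lower its cost, this cutoff would discard valid answers.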

    For other conditions, careful analysis of the boolean function often reveals a certain order of pruning to excise larger subsets first; or some logical inferences may lead to similar early pruning.

    And finally, caching function calls (a.k.a. memoizing, a.k.a. dynamic programming) can be useful for search trees that decompose into overlapping, near-identical sub-problems.
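As a sketch, counting the binary tree shapes from earlier is exactly such a case: the same sub-counts recur throughout the recursion, so caching them collapses an exponential search into a small polynomial one.

```python
from functools import lru_cache

# Count binary tree shapes by node count, caching each sub-count so
# every subproblem is computed only once.
@lru_cache(maxsize=None)
def count_shapes(n):
    if n == 0:
        return 1  # the empty tree
    # Split the n - 1 remaining nodes between the left and right subtrees.
    return sum(count_shapes(k) * count_shapes(n - 1 - k) for k in range(n))
```

Without the cache the call tree for `count_shapes(30)` would be astronomically large; with it, thirty cached values suffice.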


It's OK to comment on old posts.