### A rant on recursion exercises

Introductions to functional programming, and especially Lisp, traditionally have a lot of exercises involving recursion on lists. Students are expected to reimplement standard functions like `map` and `filter` and `reverse`, and sometimes dubious ones like `flatten`. This makes a certain amount of sense: if you're trying to teach recursion and lists, why not cover both at once? But I think it does more harm than good.

For one thing, it makes functional programming look bad. How many students have learned from such material that Lisp is about recursion (and strangely named `car` and `cdr` operators), without any hint of why it's so highly regarded, or of what else it can do? How many have learned (or had their prejudice reinforced) that functional programming is a silly academic game with no relevance to actual programming?

It also teaches students bad habits. Experienced functional programmers follow the heuristic: if you're recursing on lists, you should probably be using some higher-order function instead. Even if it's one of the awkward ones like `fold` or `mappend`/`mapcat`/`concatMap`. (Awkward in that code using them is often harder to read than code using `map` or `filter`, though not as hard as plain recursion.) But beginners are taught to do things the hard way.
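To make the heuristic concrete, here's a sketch in Python rather than in any of the languages under discussion (the function names are mine): the exercise-book recursion next to the higher-order versions an experienced programmer would reach for.

```python
# A sketch of the heuristic (names are mine): exercise-book
# recursion versus the higher-order equivalents.
from functools import reduce

def my_map_recursive(f, xs):
    # The exercise-book way: explicit recursion on the rest of the list.
    if not xs:
        return []
    return [f(xs[0])] + my_map_recursive(f, xs[1:])

def my_map(f, xs):
    # The higher-order way: let map do the recursion.
    return list(map(f, xs))

def total(xs):
    # Even the "awkward" fold hides the recursion entirely.
    return reduce(lambda acc, x: acc + x, xs, 0)
```

The recursive version isn't wrong, but everything it spells out (the base case, the head/tail split) is exactly what `map` and `fold` already abstract away.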

Some recursion exercises also operate on multiple levels of list structure, such as `flatten` or a multilevel `reverse`. Which is odd, because arbitrarily nested lists are rare in practice (except in code, which has its own complexities). These exercises teach students to recurse on both car and cdr, when they should instead be learning that lists are recursive only in the cdr. (ML and Haskell are spared this problem, as their type systems cannot readily conceive of arbitrarily nested lists. This is an advantage of restrictive static typing I hadn't considered.)
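The contrast between the two recursion shapes, sketched in Python with nested Python lists standing in for nested Lisp lists (the representation is my choice, not the exercises'):

```python
# flatten recurses in two directions; reverse recurses in one.
def flatten(x):
    # Recurses on both the head and the tail: each element may itself
    # be a list, so the recursion follows car and cdr at once.
    if not isinstance(x, list):
        return [x]
    return [leaf for item in x for leaf in flatten(item)]

def my_reverse(xs):
    # Recurses only on the tail (the cdr), which is the shape
    # ordinary flat lists actually have.
    if not xs:
        return []
    return my_reverse(xs[1:]) + [xs[0]]
```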

One of the most important lessons to learn about recursion is how widely applicable it is: it's not just for lists, or for recursive data structures; it's for any problem that can be broken down into smaller problems. Teaching recursion only on lists obscures this, leading students to think it's a specialized tool for one data structure, rather than one of the most general tools for creating algorithms.
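Two examples of my own choosing, in Python, where no recursive data structure appears anywhere: the recursion mirrors how the problem decomposes, not how any list is built.

```python
# Recursion without recursive data: the subproblems come from the
# problem itself, not from a data structure.
def gcd(a, b):
    # Euclid: the problem (a, b) reduces to the smaller problem (b, a mod b).
    return a if b == 0 else gcd(b, a % b)

def bisect_sqrt(n, lo=0.0, hi=None, eps=1e-9):
    # Bisection: recursively halve the interval known to contain sqrt(n).
    if hi is None:
        hi = max(n, 1.0)
    mid = (lo + hi) / 2
    if hi - lo < eps:
        return mid
    if mid * mid > n:
        return bisect_sqrt(n, lo, mid, eps)
    return bisect_sqrt(n, mid, hi, eps)
```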

It would be more natural to teach structural recursion on trees — directory trees, for instance, or at any rate something that naturally has tree structure. (Not search trees, because students who aren't yet comfortable with recursion won't understand why anyone would want such a structure. And not expression trees, because code-as-data is a lot for beginners to swallow, and it teaches them that functional languages are only for writing compilers for functional languages.) Non-structural recursion could be taught with the same sort of not-obviously-recursive problems that are used to teach dynamic programming.
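A toy version of the directory-tree idea, in Python, with nested dicts standing in for directories and integers for file sizes (the tree and the sizes are invented for illustration):

```python
# Structural recursion on a tree: dicts are directories,
# integers are file sizes.
def total_size(entry):
    if isinstance(entry, dict):
        # A directory's size is the sum of its children's sizes:
        # the recursion follows the tree's own structure.
        return sum(total_size(child) for child in entry.values())
    return entry  # a file: just its size

tree = {"src": {"main.py": 120, "util.py": 80}, "README": 30}
```

Unlike a search tree, nothing here needs motivating: directories obviously contain directories, so the recursive case writes itself.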

(Prompted by some Clojure exercises which exhibit some of these problems.)

1. The Joy language, which is a functional concatenative language without local names, has a very nice set of recursion combinators, which I list here from the manual. By using these, one can write arbitrary recursive algorithms without explicit (that is, by-name) recursion anywhere. Brackets around arguments indicate (zero-argument) functions to be executed:

`linrec : [P] [T] [R1] [R2] -> ...`
Executes P. If that yields true, executes T. Else executes R1, recurses, executes R2.

`tailrec : [P] [T] [R1] -> ...`
Executes P. If that yields true, executes T. Else executes R1, recurses.

`binrec : [P] [T] [R1] [R2] -> ...`
Executes P. If that yields true, executes T. Else uses R1 to produce two intermediates, recurses on both, then executes R2 to combine their results.

`genrec : [B] [T] [R1] [R2] -> ...`
Executes B. If that yields true, executes T. Else executes R1 and then [[B] [T] [R1] [R2] genrec] R2.

`condlinrec : [ [C1] [C2] .. [D] ] -> ...`
Each [Ci] is of the form [[B] [T]] or [[B] [R1] [R2]]. Tries each B. If one yields true and there is just a [T], executes T and exits; if there are [R1] and [R2], executes R1, recurses, executes R2. Subsequent cases are ignored. If no B yields true, [D] is used. It is of the form [[T]] or [[R1] [R2]]. For the former, executes T; for the latter, executes R1, recurses, executes R2.

`primrec : X [I] [C] -> R`
Executes I to obtain an initial value R0. For integer X uses increasing positive integers to X, combines by C for new R. For aggregate X uses successive members and combines by C for new R.

`treerec : T [O] [C] -> ...`
T is a tree. If T is a leaf, executes O. Else executes [[O] [C] treerec] C.
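To get a feel for what `linrec` and `binrec` compute, here is a rough rendering of my own in Python. It discards Joy's stack discipline entirely, so the calling convention (each quoted program becomes a function of an explicit value) is my invention, not Joy's semantics.

```python
# Approximate Python renderings of linrec and binrec (calling
# conventions invented for illustration; Joy's are stack-based).
def linrec(p, t, r1, r2, x):
    if p(x):
        return t(x)
    return r2(x, linrec(p, t, r1, r2, r1(x)))

def binrec(p, t, r1, r2, x):
    if p(x):
        return t(x)
    a, b = r1(x)
    return r2(binrec(p, t, r1, r2, a), binrec(p, t, r1, r2, b))

# factorial as a linrec: P = "zero?", T = "one", R1 = "decrement",
# R2 = "multiply the saved n by the recursive result"
def fact(n):
    return linrec(lambda n: n == 0, lambda n: 1,
                  lambda n: n - 1, lambda n, rest: n * rest, n)

# naive fibonacci as a binrec: split n into subproblems n-1 and n-2
def fib(n):
    return binrec(lambda n: n < 2, lambda n: n,
                  lambda n: (n - 1, n - 2), lambda a, b: a + b, n)
```

Even translated this way, a call like `fact`'s is nearly as long as writing the recursion out by hand.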

2. I have trouble reading calls to these combinators — they're like fold, but worse. Part of this is just the confusingness of any unfamiliar combinator, and part of it is their complexity (how many combinators take four arguments, all different?), but I think part of the problem is that they don't hide much. Their calls are almost as long as the explicit recursion they replace, but less transparent because the structure is determined by argument order instead of sequence and explicit conditionals. I'm not sure how much this would improve with familiarity.

IIRC some of the examples (which I can't find now) for linrec show off a little-known feature of stack languages: functional arguments can communicate by leaving values on the stack, without the combinator having to know about it. This sounds like it might make combinators more powerful; I don't know how often it works in practice.

WRT teaching, though, these are too complicated for beginners. Even fold is too complicated for beginners.

It's OK to comment on old posts.