Orthodox Otter

Here's another new Lisp: Perry Metzger's Otter is a sketch of a case-sensitive Lisp-1 with short names and a little extra syntax and a module system and regular expressions and a cute animal mascot. Hygienic macros and a fancy compiler are planned but have been put off. It's supposed to be radical and heretical.

Radical? Heretical? Quite the opposite! There are a lot of new Lisps nowadays, and most of them include most of Otter's features (except maybe the mascot). There's hardly anything unusual here, except maybe the distaste for improper lists (a mistake which I also made for a while) and the choice of ! for setf (which seems obvious in retrospect - I will steal it now kthx). The rest of Otter is a reflection of what most Lispers consider the right way forward. It's not heretical, it's orthodox!

It's hard to detect these changes from the inside, but it seems to me this is a new development. Ten years ago, there was not so much agreement on what new Lisps should look like, nor on their desirability. The old orthodoxy was to cling to the existing dialects, as the language was perceived to be under threat. Remember when everyone was defending Common Lisp, except the Schemers, who claimed their language wasn't even a Lisp? But nowadays most Lispers don't think of the language as threatened. Instead we look forward to its future growth, and we aren't afraid to let a hundred flowers bloom.

But Otter probably won't be one of them. That presentation, from last August, seems to be the only available information on it. There's still nothing on the website but a picture of some sea otters. Sketching new lisps is fun (well, I think so, and evidently Perry does too) and easy, especially when you have an orthodoxy to guide you. Implementing them, and solving all the problems that surface in the process, is a lot of work, and prone to being put off forever. I suspect that's what happened to Otter.

But where one person can sketch a Lisp, others can build one. And we should. Never in the history of Lisp (or any programming language, IMO) have we had such an appealing orthodoxy. It is an irony not surprising in the Lisp community that that orthodoxy is now a language that does not actually exist; you cannot use it. But it is the sort of language that makes you want to. The popularity of new Lisps, and the extent of agreement on their features, suggest that it will have many incarnations in the years to come.

And now I'd better go work on mine...

For want of a rant

For my day job, I hack C++. A great many rants could begin this way, and after a week of long days debugging crashes in deployed software, I'm tempted to write one. Why do so many programmers insist on using a low-level language for everything, whether it's needed or not?

But you've heard that rant before, and maybe even written it yourself; you know how it goes. So I won't bother. Besides, I'm not sure I agree with it. It's easy to blame any problems with C++ on the language's low level, but is that really where they come from?

One of the crashes I encountered this week was a buffer overflow, due to careless use of strcat. A classic case of C++'s unsafe, low-level data structures, right? Except that C++ does have real strings, and there was no reason not to use them. We just didn't use the tools the language gave us.

Part of the application I deal with at work is in Java, and crashes in that part are never hard to debug, because they always come with handy stack traces. The problem with C++ isn't that it crashes - that could hardly be prevented. It's how little information it reports when it crashes. But this is not C++'s fault. The information (except for symbols) needed for a stack trace is already there in the crashing image; it's just not normally reported. One must go to the trouble of using a debugger to extract it. This is a place where a small change in an operating system could make a big difference in a language's usability. On a system which generated stack traces by default when native code crashed, C++ would not be nearly as hard to debug.

Of course C++ is unsafe, and therefore sometimes clobbers the necessary information (as it did in that buffer overflow). And its limited expressiveness often means more code and more bugs than in a higher-level language. But not all of its troubles are a result of low language level. And some of them aren't even its fault.

For my day job, I hack C++. And it's really frustrating. It doesn't even give me a good excuse for a rant!

Taboo words

Most arguments are not a result of failure to communicate, and most communication problems are not a result of vaguely or multiply defined terms. But for those that are, Eliezer Yudkowsky suggests a solution: declare the poorly defined word taboo, and try to communicate without it. This is obvious, of course (at least in retrospect), but Eliezer points out what is not so obvious: avoiding a word can be useful even in the absence of an argument, to expose confusion that is masked by familiar terms.

Maybe it's just an illusion of familiarity, but it seems to me that the subject of programming languages has more than its share of such confusion. We talk about formal, unambiguous languages, but the language we do it in is rather sloppy. Many important words are defined so vaguely, or have so many disparate meanings, that they serve less to convey information than to encourage the listener to imagine whatever meaning they please. This is not a huge problem when talking to someone who knows what you mean, but between people with less shared context it's a fount of unenlightening arguments. So I think Eliezer's taboo trick could be useful.

Take object-oriented, for instance. It has a wide range of meanings, so regardless of which one you mean, you can be reasonably confident that your audience will misunderstand. And it's emotionally charged, so those misunderstandings will become arguments. Better to avoid it. You might fall back on words like encapsulation and polymorphism - but both of those are also ambiguous, and in ways that conceal controversies! Clarity is harder than it sounds.

Of course it would be unbearably tedious to say everything with rigorous precision. But much of our vocabulary is pointlessly vague, and I think it creates a lot of confusion with no gain of convenience. So take this as a challenge. Can you say what you have to say about programming languages without using those common words that have no common meaning? How much can you say without mentioning syntax, or functional, or type?

RSS is a dangerous drug

I didn't understand the point of RSS until I started using a feed reader about six months ago. Then I saw what I'd been missing. It automates something I did not realize I spent effort on: keeping track of what I haven't read yet. Doing that by hand - visiting sites periodically, having lots of tabs open, trying to remember what I haven't looked at recently - is time-consuming and unreliable. So it's a perfect candidate for automation. Handing it over to a machine means I don't forget to check a site, or lose track of something when I get distracted and don't finish reading. It means I can be confident I won't miss anything. It means I can read even more blogs!

Sometimes this is not a good thing. I often feel compelled to read every post of every feed I subscribe to, and I have to remind myself otherwise. And I'm always tempted to subscribe to anything that looks remotely interesting. But if I subscribe to more feeds than I can handle, or don't read them for a while, the unread posts pile up. I haven't checked my feeds for the last week, and I now have 528 items to read. For a compulsive reader like me, an ever-growing list of interesting things to read is dangerously addictive. Fun, yes. But tyrannical.

OK, it's down to 348 now. But this feels like work!

Multiple kinds of type in one language

It's not just different languages that distinguish types in different ways. It's pretty common for one implementation of one language to have several type mechanisms serving different purposes. For instance, a typical optimized Lisp will have:

  1. Multiple representations of variables (integer, float, and tagged pointer), for efficient unboxed arithmetic
  2. Tags distinguishing pointers from fixnums (and sometimes characters, nil, and immediate floats)
  3. Dynamically typed objects
  4. A sublanguage for describing sets of objects, especially inferred or declared static types.

The first two are not called "type" in Lisp, but they would be in a lower-level language. The last two are both called "type" even though they are quite different things. Perhaps surprisingly, this doesn't cause much confusion in practice.
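Mechanism 2, tagging, is the easiest to sketch. Here's a toy version in Python (the tag layout and helper names are made up for illustration, not any particular implementation's): the low bits of each word say what it is, so small integers live in the word itself, unboxed.

```python
# Toy low-tagging scheme: the low two bits of each word are a type
# tag, so fixnums need no heap cell. (Tag values and names invented.)
FIXNUM_TAG  = 0b00
POINTER_TAG = 0b01

def tag_fixnum(n):
    return (n << 2) | FIXNUM_TAG

def fixnump(w):
    return (w & 0b11) == FIXNUM_TAG

def untag_fixnum(w):
    return w >> 2

def fixnum_add(a, b):
    # Because the fixnum tag is all zeros, tagged addition is just
    # machine addition -- no untagging needed.
    assert fixnump(a) and fixnump(b)
    return a + b

assert untag_fixnum(fixnum_add(tag_fixnum(2), tag_fixnum(3))) == 5
assert not fixnump(0b101)   # a word carrying the pointer tag
```

Giving the fixnum tag all zero bits is the usual choice precisely so that arithmetic can work directly on tagged words.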

An ML implementation might have:

  1. The same multiple representations of variables
  2. The same tagged pointers (they make garbage collection simpler, even if they're not actually used)
  3. Dynamically tagged objects, to allow sum types
  4. Those famous static types, which are the only one of these mechanisms that's normally called "type" in ML.

The first three mechanisms are basically the same as in Lisp, despite the languages' seemingly opposite approaches to type. Where they differ is in how they describe types - and of course in what static types they allow. I find it particularly amusing that ML sum types use the same mechanism as dynamic type. It's not called that, and it's not used the same way, but ML data is partly dynamically typed.
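To see that concretely, here's a Python sketch of the runtime representation such a datatype might get (the layout is illustrative, not any particular compiler's), for a hypothetical datatype shape = Circle of real | Rect of real * real:

```python
# The runtime shape of an ML sum type is a tag plus fields -- the
# same mechanism as a dynamically typed object, just checked
# statically. (Constructor and function names are invented.)

def circle(radius):
    return ("Circle", radius)

def rect(width, height):
    return ("Rect", width, height)

def area(shape):
    # An ML 'case shape of ...' compiles to a dispatch on the runtime
    # tag -- exactly a dynamic type check, under another name.
    tag = shape[0]
    if tag == "Circle":
        (_, r) = shape
        return 3.141592653589793 * r * r
    else:
        (_, w, h) = shape
        return w * h

assert area(rect(2.0, 3.0)) == 6.0
```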

Smalltalk's small syntax

Smalltalk gets a lot of attention for its object-orientedness, which is fine, but I've already learned those lessons, thanks. What I think deserves more attention is its syntax. Much of what is distinctive about Smalltalk doesn't come from its semantics but from its unusually simple, clear syntax. There are several things language designers could learn from it.

Simple infix

In the perennial argument between infix and prefix syntax, almost everyone assumes infix is much more complicated. Smalltalk shows it needn't be. You can read most Smalltalk code with only three kinds of infix expressions, one postfix, and a few other constructs. There are no special features to support infix, because everything is infix, even ordinary user-defined messages.

It's still more complicated than S-expressions, but not by much. If you like infix or dislike superfluous parentheses, it doesn't look like a bad deal.
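To show how little grammar that is, here's a toy Python evaluator for a Smalltalk-like expression language (drastically cut down, with an invented selector set): unary messages bind tightest, then binary, then keyword, each left to right, with no precedence table anywhere.

```python
import re

# Tokens: integers, keyword selectors like "max:", unary selectors
# like "negated", and binary operators. (Toy grammar, invented names.)
TOKENS = re.compile(r"\d+|[a-z]+:|[a-z]+|[+\-*]")

def evaluate(src):
    toks = TOKENS.findall(src)
    pos = 0

    def peek():
        return toks[pos] if pos < len(toks) else None

    def advance():
        nonlocal pos
        tok = toks[pos]
        pos += 1
        return tok

    def primary():
        return int(advance())

    def unary():
        # Unary messages bind tightest: "3 negated" is -3.
        value = primary()
        while peek() is not None and peek().isalpha():
            selector = advance()
            if selector == "negated":
                value = -value
            else:
                raise ValueError("unknown unary message " + selector)
        return value

    def binary():
        # Binary messages are all one precedence level, left to
        # right: "2 + 3 * 4" is (2 + 3) * 4.
        value = unary()
        while peek() in ("+", "-", "*"):
            op = advance()
            arg = unary()
            if op == "+":
                value += arg
            elif op == "-":
                value -= arg
            else:
                value *= arg
        return value

    def keyword():
        # Keyword messages bind loosest: "2 max: 3 + 4" is 2 max: 7.
        value = binary()
        while peek() is not None and peek().endswith(":"):
            selector = advance()
            arg = binary()
            if selector == "max:":
                value = max(value, arg)
            else:
                raise ValueError("unknown keyword message " + selector)
        return value

    return keyword()

assert evaluate("3 negated") == -3
assert evaluate("2 + 3 * 4") == 20   # left to right, no precedence table
assert evaluate("2 max: 3 + 4") == 7
```

Note that 2 + 3 * 4 is 20: Smalltalk buys its simplicity partly by refusing to give * a higher precedence than +.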

Keyword arguments

First, an implementation note: the simplest way to handle keyword arguments is to consider them part of the function's name, and have foo(bar=1) be syntax for foo_bar(1). A different set of keyword args calls a different function: foo(baz=1) is foo_baz(1). This approach doesn't work for large numbers of independent optional arguments, but most of the time it's simple and efficient. Calling a function with keyword arguments becomes equivalent to calling one with positional arguments. There's no overhead for passing or processing a table of keyword args. The downside is that you can't do things like pass keyword arguments to an unknown function: (lambda (f) (f :reverse t :silent-error nil)). On the other hand, you can pass functions that take keyword arguments as if they took positional arguments - because they do.
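That scheme is easy to sketch. Here it is in Python (define, call, and REGISTRY are invented for illustration, not a real API): the keyword set is folded into the function's name, so the call itself passes plain positional arguments.

```python
# Keyword arguments as part of the function's name: foo(bar=1) is
# resolved to the function registered as foo_bar, and foo(baz=1) to
# the entirely separate function foo_baz. (All names invented.)
REGISTRY = {}

def define(name, keywords, fn):
    # fn takes its arguments positionally, in sorted keyword order.
    REGISTRY[(name,) + tuple(sorted(keywords))] = fn

def call(name, **kwargs):
    # The keyword set selects the function. In a compiler this lookup
    # happens at compile time; no keyword table exists at runtime.
    keys = tuple(sorted(kwargs))
    fn = REGISTRY[(name,) + keys]
    return fn(*(kwargs[k] for k in keys))

define("foo", ("bar",), lambda bar: ("foo_bar", bar))
define("foo", ("baz",), lambda baz: ("foo_baz", baz))

assert call("foo", bar=1) == ("foo_bar", 1)  # foo(bar=1) -> foo_bar(1)
assert call("foo", baz=2) == ("foo_baz", 2)  # foo(baz=1) -> foo_baz(1)
```

Two different keyword sets really are two unrelated functions; once the name is resolved, the arguments travel positionally.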

Smalltalk takes this approach thoroughly. All arguments except the receiver are keyword arguments, so all message sends use keywords, and therefore all methods have names like value:with:. Rather than having method definitions define multiple names (one for each combination of optional keyword arguments), Smalltalk just makes all keyword arguments required. If you want optional ones, you have to define each combination separately. That's a bit of an annoyance (and probably discourages people from using optional arguments) but overall this system is easy to understand, easy to implement, and easy to use.

Is it verbose? Yes, a bit. But it's a form of verbosity that lends itself well to self-explanatory code. For people like me, who don't know the language very well, Smalltalk can be surprisingly easy to read, because nearly every argument has a helpful label beside it. It might be slightly longer than in a keywordless language, but it's longer in the right place.

Multiple infix

All these keyword arguments mean that many message sends are multiple infix: arguments alternate with pieces of the operator. This sounds hard to read, like C's ternary operator: a ? b : c. But it's not actually a problem. This is puzzling: if multiple infix isn't hard to parse, why is the ternary operator so unreadable? Is it because ? and : are so short and easily overlooked next to their arguments? Because the ternary operator is the only multiple infix operator in C, and rare, so you don't look for it? Because many programmers forget its precedence and swaddle it in parentheses? Is it only unreadable when it is used in inconveniently large expressions, as seems to happen a lot?

Easy lambda

Lambda isn't common when it has a six-letter name, but if it's shorter it is much easier to use casually. Smalltalk uses syntax to abbreviate its lambdas, and it's hard to beat them for terseness, or ubiquity. It also calls them blocks to be less intimidating - remember, this was originally a language for children - and it works: Smalltalkers don't seem to have a lot of trouble learning the mysteries of anonymous functions.

Surprisingly, Smalltalk doesn't abbreviate function call. But lambda is very useful even when calling is verbose, because it's much more common to pass a function as an argument than to accept one. It's especially useful when it's as short as Smalltalk's. The easy lambda alleviates much of the pressure of not having macros. It easily handles three of the most common uses of macros: binding constructs, control structures, and laziness. It's not transparent like macros, and it introduces a distracting asymmetry in expressions like foo and: [bar], but it's a lot better than nothing, and some people prefer the explicit lambdas, at least when they're only two characters long.
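For instance, foo and: [bar] short-circuits only because the block delays bar. The same trick, rendered in Python (my_and and noisy are invented names, not Smalltalk's):

```python
# Short-circuit 'and' as a plain function: the second operand is a
# zero-argument function (a block), so it is evaluated only when
# needed -- laziness by explicit lambda, no macro required.

def my_and(a, b_thunk):
    return b_thunk() if a else a

calls = []
def noisy(x):
    calls.append(x)
    return x

assert my_and(False, lambda: noisy(True)) is False  # block never runs
assert calls == []
assert my_and(True, lambda: noisy(True)) is True    # now it does
assert calls == [True]
```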

Unfortunately, the use of square brackets doesn't mix well with deeply nested parentheses, because of the difficulty of keeping the parentheses matched when editing. Smalltalk doesn't have a lot of parentheses, so this is only a minor problem, and it wouldn't be a problem at all in a structure editor, but it means this approach to lambda doesn't mix well with S-expressions. The brackets are also sometimes easy to overlook, due to their similarity to parentheses. Maybe a different syntax would be better, or even a short name, like, say, λ.

The bad parts

I don't like how Smalltalk handles variable declarations - surely a form of internal define (infix, of course) would be better. I don't like that it requires special syntax for constructing arrays and hashtables. No one likes its syntax for putting definitions in files, which is so clumsy it makes working without the browser practically impossible. There is no need for other languages to repeat these things. But the core of Smalltalk's syntax, its blocks and messages, is clear and convenient and above all simple. If you're designing a syntax, you could do much worse than imitate Smalltalk.

The true meaning of type

I missed something in my previous post on the many meanings of type. Despite the profusion of seemingly incompatible senses, there is one that is consistent across languages. In every language I know of, type is the difference between an integer and a string.

Some languages, like Bliss and Forth, have no such distinction. Everyone agrees in calling them untyped. Of course they still handle integers and strings, but the language doesn't know which is which. They're distinguished only by how they're used (hence the alternate name operator-typed).

Most languages do distinguish, of course, but they do it in completely different ways. In each language, the mechanism it uses is called type. And this is where the confusion starts. In each language community, the word type becomes attached to the mechanism that language uses, and then to other uses of that mechanism - even those that have nothing to do with distinguishing integers from strings. Eventually users come to believe their sense is the real one, and that all the others must be bastardizations of it. And if they realize how different the senses are, they are tempted to think that they have nothing in common, and that type means nothing at all.

But all these senses reflect different answers to the same underlying problem. There is a common meaning to type across languages, and it is not any of the languages' senses. It is the ill-defined problem of having multiple types of data in one language: integers, floats, functions, arrays, lists, hashtables, strings.

Simple destructive quicksort

Programming languages have become more expressive over time. Consider quicksort. Hoare's original version was quite difficult to understand - IIRC it wasn't even recursive. Nowadays the two-line quicksort is an icon of Haskell:

qs [] = []
qs (x:xs) = qs (filter (< x) xs) ++ [x] ++ qs (filter (>= x) xs)

Unfortunately it's also an icon of unrealistic examples - it makes too many passes over the lists, and conses far too much, so it isn't all that quick. The more efficient sorts don't make such impressive examples - especially the destructive ones, whose Haskell incarnations are infested with intimidating monads.

Here's a gem from the Pitmanual: a reasonably clear destructive quicksort in fifteen lines.

Date: 9 September 1981 18:44-EDT
From: George J. Carrette <GJC>
To:   KMP
cc:   JAR

I think that the QUICKSORT properly coded is even shorter and neater than
the BUBBLESORT. It is also better for showing students since it is
naturally recursive, divide&conquer.

(defmacro rplacd&pop (x &optional y)
  `(rplacd (prog1 ,x (setq ,x (cdr ,x))) ,y))

(defun quicksort (list judge)
  (if (not (null list))
      (do ((proof (rplacd&pop list))
           (goats ())
           (sheep ()))
          ((null list)
           (nconc (quicksort goats judge)
                  proof
                  (quicksort sheep judge)))
        (if (funcall judge (car list) (car proof))
            (setq goats (rplacd&pop list goats))
            (setq sheep (rplacd&pop list sheep))))))

rplacd&pop captures the idea of destructively removing the first element of a list. It sounds like a nice example, since it uses a macro to express a by-reference operation in a language without by-reference parameters. But it's not really good for showing students, because it's not safe - it evaluates x twice and its subforms (if any) thrice. (Update 6 April: it uses setq, not setf, so it's ok in Maclisp, because it only works on variables. Even in Common Lisp it would only be a problem when used with a symbol-macro.) I don't know how to do it right in Maclisp, but in Common Lisp it requires the sort of macrology that inspires fear and loathing:

(defmacro rplacd&pop (x &optional y &environment e)
  (multiple-value-bind (temps forms svars writeform readform)
                       (get-setf-expansion x e)
    (let ((cell (gensym)))
      `(let* (,@(mapcar #'list temps forms)
              (,cell ,readform)
              (,(car svars) (cdr ,cell)))
         ,writeform
         (rplacd ,cell ,y)
         ,cell))))

This might be a good example of how to write setf-like macros, which is a confusing corner of Common Lisp that certainly deserves some examples. But it's no way to write a clear quicksort!

Of course, the simplest way for a language to make a realistic quicksort simple is to predefine partition. Is that cheating? I think it could be useful often enough to belong in a standard library. But truly simplifying quicksort to a one-liner will require some advances in expressiveness, so we can write something closer to the natural definition: partition on the first element and recurse on both halves.
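Here's roughly what a predefined partition buys, sketched in Python (partition is hand-rolled here, since Python's standard library doesn't predefine one): quicksort becomes nearly the natural definition, and makes one pass per level instead of two.

```python
def partition(pred, xs):
    """Split xs into (matching, non-matching) in a single pass."""
    yes, no = [], []
    for x in xs:
        (yes if pred(x) else no).append(x)
    return yes, no

def quicksort(xs):
    # The natural definition: partition on the first element and
    # recurse on both halves. (Still consing, but one pass, not two.)
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    less, more = partition(lambda x: x < pivot, rest)
    return quicksort(less) + [pivot] + quicksort(more)

assert quicksort([3, 1, 4, 1, 5, 9, 2, 6]) == [1, 1, 2, 3, 4, 5, 6, 9]
```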