Abstract algebra with imperfect models

One more post before the end of the year:

Haskell users tend to be mathematically inclined, so it's no surprise that they should notice a mathematical application of typeclasses: they can represent abstract algebraic structures. You only get one model of the algebraic structure per type (potentially annoying for common structures like groups), but where applicable this makes a great (for some values of "great") way to show off your mathematical prowess. Indeed, some of the standard typeclasses are intended as abstract algebraic structures: Monad and MonadPlus, at least.

There are probably quite a few Haskellers who have not only noticed this, but wished it were taken further. For instance, why isn't / defined like this?

(/) :: (Field a) => a -> a -> a

(Really mathematical Haskellers may also wish the return type was Maybe a.)

Isn't Num actually Ring, and Fractional Field?

Not quite. Integer is indeed a ring, but Int isn't, because of overflow. Ratio is a field, but Float isn't, because of rounding. Algebraic structures are useful only because it's possible to reason from their axioms and apply the conclusions to the models, and that doesn't work when the models don't actually satisfy those axioms. The reason Monad works is that its standard instances do correctly model it. The others are stuck with names less proudly abstract but less misleading (not to mention less opaque to non-mathematicians).
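
The rounding problem is easy to see concretely: floating-point addition isn't even associative, so the ring axioms fail outright. A minimal sketch in C++, for concreteness (the particular constants are just an illustration):

```cpp
#include <cassert>

// Rounding breaks associativity, one of the ring axioms,
// so fixed-precision floats can't model a ring (or field).
inline bool add_is_associative(double a, double b, double c) {
    return (a + b) + c == a + (b + c);
}
```

With a = 0.1, b = 0.2, c = 0.3 the two groupings differ in the last bit; with small integers, which doubles represent exactly, they agree.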

Of course, none of these models are really correct, because they all have the limitation of finite memory. But that's a less intrusive limitation than fixed-precision numbers. "Less intrusive" is a disappointingly vague description; I suspect the real difference is that finite memory afflicts virtually every operation in the language, while overflow and rounding are specific to certain types. So while assuming the monad laws work on lists, for example, will cause problems in some circumstances, it won't cause any more problems, because your programs would fail anyway in those cases. In contrast, it's pretty easy to write code that will work for any ring, but not for Int.

Defending against nonexistent problems

When do good programmers write bad code? There are a great many ways to do that, of course, and most of them aren't very illuminating. But there are some common syndromes that account for a lot of poor code. One is when the programmers are writing in a language they don't know very well. They can't use its less guessable features, and since they aren't sure what its pitfalls are, they guess, and end up defending against ones it doesn't actually have. For example, I often see C++ code like this:

if (somepointer)
    delete somepointer;

This is unnecessarily explicit - delete is defined to do nothing if its argument is null, precisely so you don't have to check. (And every implementation I've ever used has done this correctly.) It should be just:

delete somepointer;

But someone new to C++ won't know that unless they've learned it explicitly. Until then, a reasonable programmer may continue to defend against the obvious mistake of deleting null.

Programmers who switch between related languages are prone to infelicities like this:

FILE *f;
f = fopen(filename, "r");

This is quite reasonable in C, because there might be (or might previously have been) something between these two lines - and in most dialects of C, declarations have to precede statements, so you can't always combine the two. C++, like recent C dialects, lacks this annoying restriction, but a C programmer won't necessarily know that.

They should know better than to read lines from a file like this, though:

while (!feof(somefile)) {
  memset(line, 0, MAXLINELENGTH);
  if (fgets(line, MAXLINELENGTH, somefile)) {
    //do something with line...
  }
}
instead of the shorter equivalent:

while (fgets(line, MAXLINELENGTH, somefile)) {
  /* do something with line... */
}

It's easy to understand why someone might write the longer version. The C standard library is full of functions which are difficult or impossible to use safely (gets, for instance), or which require unnecessarily tedious checks when calling them (malloc, free). Someone familiar with C, but not with the details of fgets, might reasonably doubt it would work correctly without the explicit feof and memset. (And once you've learned to fear uninitialized buffers, it may be easier to memset everything than to think about whether it's necessary.) But fgets is actually well designed. It always null-terminates the buffer, and its return value is well-behaved at EOF. Neither of these defensive additions is necessary.

In fact, the shorter version of that loop actually has slightly better error behavior: if some error (loss of network connectivity?) prevents fgets from reading the file, but doesn't cause EOF, the longer code might loop forever, while the shorter one will stop. So in this case the defenses have made things worse, although it's unlikely to matter. I suppose the truly paranoid answer would be to call ferror too...
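
For the record, a sketch of that truly paranoid loop: keep fgets in control, then use ferror afterwards to distinguish a clean end-of-file from a read error (the buffer size and helper name here are my own):

```cpp
#include <cstdio>

enum { MAXLINELENGTH = 1024 };  // assumed, as in the examples above

// Let fgets control the loop; afterwards, ferror distinguishes
// a clean end-of-file from a read error.
bool read_all_lines(FILE* somefile) {
    char line[MAXLINELENGTH];
    while (fgets(line, MAXLINELENGTH, somefile)) {
        // do something with line...
    }
    return !ferror(somefile);  // true on clean EOF, false on error
}
```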

I'm not complaining that the authors don't know the language they're writing in. It would be nice if they did, but in practice it's often necessary for people to work in languages they don't know very well, and in that situation they'll inevitably work around limitations in their knowledge the same way they work around limitations in the language. This will usually be more efficient than scouring the documentation for shortcuts that might exist. Most of the time their code will still work, and if they keep using the language, they'll learn the details soon.

But sometimes they're unwilling to learn. I've seen all three of these bugs in C++ code written by experienced C++ programmers - people who should really know the commonly used parts of the language by now. Most programmers will simply learn these features when they're pointed out, but some don't want to. They find excuses (sometimes rather contrived ones) to discount the new information and keep programming the way they were. I don't understand what's going on here. Are they accustomed to feeling confident in their skills, and don't want to disrupt the situation by venturing into unfamiliar territory? But how does someone become a programmer without being generally curious about the subject?

A combinator for a familiar operation

While playing with an interpreter, I noticed I had several functions that iterated some operation until they reached a fixed point, the way macroexpand repeats macroexpand-1 until its return value is eq to the original form. [1] You can capture this behaviour for any function by a simple combinator:

iterate-until-fixed f x =
  let x' = f x in
    if (eq x' x)
      x
      (iterate-until-fixed f x')

(I used eq instead of some other equality test because it avoids the difficulty of comparing arbitrary or circular structures.)
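
The same combinator translates directly into any language with first-class functions; a C++ sketch, using == where the original uses eq (so it's only appropriate for cheaply comparable values):

```cpp
// Apply f repeatedly until the result stops changing, then
// return the fixed point. Uses == instead of eq, so this
// sketch is only safe for simple, cheaply comparable values.
template <typename F, typename T>
T iterate_until_fixed(F f, T x) {
    for (;;) {
        T next = f(x);
        if (next == x) return x;
        x = next;
    }
}
```

For example, iterating integer halving reaches the fixed point 0.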

I was going to ask if anyone had a better name for this function, but iterate-until-fixed is not bad. Unfortunately it's not as useful as I thought. Most of these iterated operations (macroexpansion, many optimizations, etc.) are most useful when they're recursively applied to subforms, not just to the top level. This is much uglier, and it depends on the specific language, so it doesn't make a very reusable combinator. The language dependence can be parameterized out by taking a map-subforms function as an argument, which makes it a little less ugly:

;map-subforms f exp -> another similar exp
recurse-until-fixed f map-subforms form =
  let results = map-subforms
                  (λ x → recurse-until-fixed f map-subforms x)
                  (iterate-until-fixed f form) in
    if (every eq results form)
      form
      (recurse-until-fixed f map-subforms results)

This has some holes - it doesn't handle environments, and the every eq test doesn't work for forms with non-top-level subforms (e.g. let). But (given a suitable map-subforms) it makes a nice simple macroexpand-all:

macroexpand-all exp =
  recurse-until-fixed macroexpand-1 map-subforms exp

[1] In Common Lisp, macroexpand returns a second value to indicate whether it did anything, for historical reasons: in some ancient Lisps, macroexpansion could side-effect forms, so mere equality didn't mean nothing had changed. The side effects were only done for performance reasons, and are not present in any modern Lisp, so the second return value is not necessary. It is helpful at the REPL, though.

You can never be too sure

Programmers can be compulsively paranoid. Yesterday I encountered some code that was very careful about closing a file once it was done with it:

FILE *fp = NULL;
while (a loop which is worth a post in its own right) {
  fp = fopen(somefile, "r");
  if (fp) {
    //Read from fp...
    fclose(fp);
    fp = NULL;
  }
  if (fp) {
    fclose(fp);
    fp = NULL;
  }
}
if (fp) {
  fclose(fp);
  fp = NULL;
}

There were no breaks or anything else that might evade the first fclose. But the extra fcloses were separated by a page or two of other cr^Hode, so it wasn't entirely obvious that they weren't necessary. And this was in a long-running process, so it wouldn't do to leak file descriptors. The author was just being careful! Yeah, that's it.

If closing a file requires that much care, what about opening one? A lot of programs just charge recklessly ahead and read from a file once they've opened it. Not this one. Instead it did this:

FILE *fp = fopen(filename, "r");
if (fp) {
  //But what if the file doesn't exist?
  struct stat stats;
  if (!stat(filename, &stats)) {
    //OK, read from fp...
  } else
    fprintf(stderr, "Error: stat failed for %s\n", filename);
}

Yes, it used stat to verify the existence of a file it had just opened. I have no idea what the author thought this was supposed to accomplish.

It should come as no surprise that despite this excruciating attention to imaginary problems, this code was infested with real ones.

Arbitrary overloading isn't always confusing

I have always accepted the conventional wisdom about overloading: that while it's very nice for operations that are conceptually identical, overloading two unrelated operations, like + for catenation, is just an invitation to mistakes. But I have to revise this opinion, because there's a conspicuous counterexample in C++. It uses << and >> for both shifting and I/O, and I've never found this confusing. Why?

I suspect what's going on here is that the shifting and I/O operators are seldom used together, or even in similar contexts. I/O code doesn't look anything like bit-crunching code, so opportunities for confusion are few and far between. The problem with + may be that addition and concatenation have a vague semantic similarity (which is what inspired the overloading in the first place), which leads them to appear in similar contexts. (It seems to me that it's much easier to be uncertain about whether some expression is a number or a string than whether it's an integer or a stream.) It probably doesn't help that they're both very common operations found in all sorts of code. And programmers rely on the associativity of both operations, so the non-associativity of the overloaded combination is not just a theoretical problem. But C++'s shift-or-stream operators show that overloading unrelated operations isn't intrinsically confusing. It depends on how they're used.
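
The two meanings can even appear in one expression without confusing anyone; a trivial sketch (the function name is mine):

```cpp
#include <sstream>
#include <string>

// << as both bit shift and stream insertion in one expression.
// The parentheses are needed: without them, the left-associative
// << would stream n and k separately instead of shifting.
std::string shift_and_stream(int n, int k) {
    std::ostringstream out;
    out << (n << k);  // inner <<: shift; outer <<: I/O
    return out.str();
}
```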

Not that I'm advocating arbitrary overloading. But maybe we shouldn't be too paranoid about it - which is good because it turns up surprisingly often. Some familiar operations are traditionally overloaded in semantically dubious ways, including good old +. Integer and (fixed-precision) floating-point arithmetic have different mathematical and practical properties, because one rounds and the other doesn't. Theoretically they should be separate, as they are in OCaml, but most languages unify them anyway. Is this a problem? Maybe a little, when someone forgets they're doing floating-point. But for the most part the two (or more) +es coexist fine.

But addition and catenation don't coexist so well, while shifting and I/O do. What's the pattern here?

The cost of naive scheduling

The other day one of my coworkers expressed alarm at the condition of one of his servers: it was running at 100% CPU usage! Obviously, he said, something was wrong. I resisted the urge to say that what was wrong was that it was a Windows machine. He's a longtime Windows user, after all. And like a lot of Windows users, he thinks of a busy processor as a bad thing, a sign of a machine about to fail. On any other platform, it's a normal situation. Of course there's work to do!

But my coworker was right. Full processor usage is a bad sign on a Windows machine, because it performs remarkably badly under load. Where a loaded Unix machine simply feels slower, a loaded Windows machine is unpredictably unresponsive. Why?

I understand the Windows scheduler is pretty simpleminded - it simply runs the highest-priority available process, and doesn't try very hard to schedule fairly or prevent starvation. The strange thing is that Windows has had this problem for years, even though there are many well-known solutions. Is there some other goal that conflicts with good scheduling?

There seems to be a similar problem with I/O. One process reading a lot of files can starve all the others - even of virtual memory, because their page faults are not given higher priority than ordinary I/O. As it happens, one of the programs I maintain at work is memory-intensive and I/O-bound, so when it's running, other processes' memory gets paged out, and they can wait a long time for it to be paged back in. The result is to make my 2 GHz machine feel slower than my pocket calculator.

My coworker felt the same way about his server. He tried to stop the offending services, but the machine was so unresponsive that his Remote Desktop session was repeatedly disconnected before he could do so. I think he eventually gave up and walked over to the server room to hit the power switch. But as it turned out, there was nothing wrong with the machine. Only with the Windows scheduler.

Is there a better name for generic cons?

Any serious programming language has a variety of collections in its library. Most reduce the resulting complexity by overloading the common operations for multiple types. But few include cons among the generic operations. This is partly because it's unnatural for many types - only the functional collections like lists and trees support it efficiently. But it may also be a problem that the operation has no obvious name.

Dylan calls it add, which sounds like arithmetic. Clojure calls it conj, for conjoin, which is unfortunately confusable with the complex conjugate function. C++ has push, but that's implicitly imperative. Other possibilities: extend, augment (or aug?), with, or maybe a more abstract symbol like &. Does anyone have a better idea?

Mmm, tar!

Yossi Kreinin calls it Fun at the Turing tar-pit. It's another form of programming candy: the nastier a language, the more fun it is to solve simple problems in it.

I’ll tell you why it happens. When you write code in a full-featured programming language, clearly you can do a lot of useful things. Because, like, everybody can. So the job has to be quite special to give you satisfaction; if the task is prosaic, and most of them are, there’s little pride you’re going to feel. But if the language is crippled, it’s a whole different matter. “Look, a loop using templates!” Trivial stuff becomes an achievement, which feels good. I like feeling good.

It's just like a game - overcoming obstacles is fun! And a perverse language, like a low-level one, is a great source of obstacles. Artificial, unnecessary obstacles, sure, but that's fine - it means you don't have to feel bad about being defeated by them. And when you do overcome them, you have the reward of getting something done, so it's easy to forget the obstacles were of your own creation. Programming makes a great game!

But it's more than a game, or it should be. There's nothing wrong with solving puzzles for puzzles' sake, but programming has the additional reward of doing something useful. The trouble is that there are many puzzles available in any program, and only some of them make progress toward whatever the program is supposed to do. The "real" problems are not necessarily more fun, and are often harder to appreciate, than the obvious ones created by inadequate tools. This tarry candy may taste good, and provide something to chew on, but it's not very filling, not by itself. Here's a puzzle that matters: how can I, and the many other programmers in the same boat, find the pleasure of more interesting problems, rather than that of solving easy ones over and over?

Brute logic

There may be a risk to pursuing elegance too much: you can learn to ignore solutions that, while straightforward, are not particularly interesting. I recently added a feature I'd wanted for years to a program I maintain. The implementation didn't require any clever insights, or fall naturally out of the right view of the problem, nor was there anything particularly ugly about it. It was just ten or so lines of unexciting logic, with no surprises and no cleverness. So even though I knew this feature would be handy, I put it off because I didn't see a good way to do it. But it didn't require a good way; the obvious one was fine. If I were less hung up on elegance, I'd have written it years ago.

This is not the first time I've had this problem. To ignore anything that's not beautiful or interesting may be a good guide for hard problems, but for easy ones it's paralysing. Sometimes you just have to write the code.

Efficiency above all else

I hack C++ at work. When it comes to collections, this means using the Standard Template Library, which can be frustrating, because STL was designed with efficiency in mind, and convenience decidedly not in mind. For instance, many of STL's collection operations are defined not on collections but on pairs of iterators, which are to arbitrary collections as pointers are to arrays. That's so STL can handle arrays in the familiar, efficient way, but the extra abstraction makes them even more inconvenient than pointers. Iterator code is full of things like if (find(collection.begin(), collection.end(), x) != collection.end()) ..., but there are no versions that simply take collections instead, even though they're easy to define and they make for much clearer code. That's just not considered important in C++ culture.
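
Those collection-taking versions really are trivial to define; here's a sketch of the membership test (the name contains is my own, not STL's):

```cpp
#include <algorithm>

// A convenience wrapper over the iterator-pair interface:
// does the collection contain x?
template <typename Collection, typename T>
bool contains(const Collection& c, const T& x) {
    return std::find(c.begin(), c.end(), x) != c.end();
}
```

With it, the example above shrinks to if (contains(collection, x)) ....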

Sometimes the obsession with efficiency makes for some strange omissions. STL includes a stack class, but it doesn't support pop. It has a pop method, but that just removes the top element; it doesn't actually return it, which is almost always what you want. What's going on? The STL documentation explains:

One might wonder why pop() returns void, instead of value_type. That is, why must one use top() and pop() to examine and remove the top element, instead of combining the two in a single member function? In fact, there is a good reason for this design. If pop() returned the top element, it would have to return by value rather than by reference: return by reference would create a dangling pointer. Return by value, however, is inefficient: it involves at least one redundant copy constructor call. Since it is impossible for pop() to return a value in such a way as to be both efficient and correct, it is more sensible for it to return no value at all and to require clients to use top() to inspect the value at the top of the stack.

This is a peculiar inversion: stack doesn't have the most useful form of pop, not because the language is too weak to express it, but because it's too powerful. Since C++ has types with copy constructors, which might be inefficient to return from pop, nothing can be returned - not even the primitive types which can be returned efficiently, and which (especially pointers) are most common in collections anyway.

OK, stack really ought to just support both operations. Just as all the functions accepting pairs of iterators should be overloaded to accept collections as well. But these are matters of convenience, not efficiency, so in C++ they don't matter.
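
For what it's worth, the pop most callers want is a three-line wrapper over the existing interface; a sketch (it pays the copy the STL design is avoiding, and the exception-safety caveat from the quoted rationale still applies):

```cpp
#include <stack>

// The pop almost everyone wants: remove the top element
// and return it. Costs a copy, which is what the STL avoids.
template <typename T>
T pop_top(std::stack<T>& s) {
    T v = s.top();
    s.pop();
    return v;
}
```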

Better Rebol info

My Google-fu is weak. Rebol's standard documentation may not be good, but there are better alternatives, and I didn't find them. Fortunately commenters supplied links and hints on what to search for, leading to at least:

  • A better tutorial that shows off Rebol's GUI library. How many languages can do one-liners with GUIs? How many language designers even consider the possibility, despite its obvious value?
  • Some articles on Rebol's semantics, including a metacircular interpreter. They're not entirely clear but they may be the best available.
  • ...and even a surviving copy of Joe Marshall's Rebol-to-Scheme compiler. (But that's for Rebol 1.0; the semantics have changed since.)

If I'm reading this right, Rebol's bind (which is implicit in evaluation) makes closures, but it doesn't store the environment once per function - it stores it once per word! That presumably makes func (λ) more expensive, but allows lexical scoping of code that's passed around without being compiled first. Which is very common in Rebol - one of the main ways to extend the language is to define interpreters that operate on blocks. Because blocks are self-evaluating, there's no syntactic overhead to this - it's just like FEXPRs. Rebol appears to have actual FEXPRs too, via unevaluated arguments, but they're not needed much.

Next (when I get around to it): investigating the libraries.

Why I haven't looked at Rebol much

I looked at Rebol a few years ago, but lost interest pretty quickly. Recently commenter Edoc recommended it, so I took another look, and remembered why I lost interest last time.

The first problem is that the docs are frustratingly introductory. Everything I can find is aimed at explaining the language to beginners, with lots of repetition and very little precision. Would it be so hard to explain the language kernel in a way someone who knows languages can understand? The closest I can find to a description of the semantics is the overview of evaluation and one entry in the FAQ. There are grandiose statements about how different Rebol is from other languages, and how it's supposed to take over the world, but not much explanation.

Okay, even from the beginner documentation you can learn something. Rebol is intended for making little DSLs - so, like Lisp, it separates the syntax from the language. Programs are defined in terms of trees, not text, and the trees are easily available to interpreters. But Rebol also tries to preserve a familiar appearance of programs - infix, and without too many parentheses. The usual way to do this is to keep the trees as abstract as practical, and have the parser transform complex syntax into a simple underlying representation. Rebol does the opposite. It preserves most of the syntax in its parse trees, so the parser is simple but the interpreter is complicated. In Rebol, 2 * sin :x reads as a four-element sequence: [2 * sin :x]. The advantage is that code closely resembles its source text; the disadvantage is that even parsed code is rather complicated and awkward to operate on.

The parser is not actually all that simple, because Rebol has syntax for a great many data types. In addition to the four kinds of symbols and three kinds of sequences used for programs, there are a variety of other types with syntax. The complete list is:

  • Word (symbol; 4 kinds): foo: bar :baz 'error-out-of-metasyntactic-variables
  • Block (vector): [1 two [three] "IV"]
  • Paren (same as Block, but different eval semantics): (alpha [beta * gamma] delta)
  • Refinement (generalized field accessor): /x
  • Path (field access expression): a/b/c
  • Integer: -55 16'777'216
  • "Decimal" (actually binary floating point; there's nothing decimal about them): 10.0 2,718'281'828'4 6.023E23
  • String: "Hello,^(line)world!" {Hello, world!}
  • Binary: #{F00DF00DF00D}
  • Email: user@example.com
  • File (pathname): %/etc/passwd %rebol%20stuff\test.r
  • Issue: #123-456-789 (how is this usefully different from String?)
  • Tag: <a href="http://www.rebol.com">
  • URL: file:///etc/passwd
  • Character: #";" #"字"
  • Date: 2008-07-26 26-Jul-2008 7/26/2008
  • Time: 20:51:33
  • Tuple (of 3 or more small integers): 1.2.3 192.168.1.1
  • Money: EUR$0.02 $0.03
  • Pair (2-D point): 1024x768

There are only a few types that don't have syntax:

  • Logic (boolean): true false yes no on off (these are constants, not syntax)
  • None (nil): none (constant)
  • List
  • Hash (hashtable)
  • Image (bitmap)

Curiously, much of the syntax is defined by special kinds of tokens rather than with special characters. It's not just numbers, but dates, times, and even pairs. This makes it a bit hard to tell how a token will be parsed, and thus whether an unusual symbol needs to be escaped. It seems to me that Rebol goes a bit overboard on this. I'm not sure why dates or email addresses need to be handled by the parser rather than by the few operations that use them. The syntactic support is slightly convenient for some small programs, but I have a hard time believing it helps with larger ones.

Of course a language's semantics are more important than its syntax, and it's tricky to figure them out from a tutorial. The obvious alternative is to read the source - but here's the other reason I stopped looking at Rebol: it's not open-source. “REBOL/Core is free and will always be free”, says the license page, but they only mean gratis. Quite frustrating to someone who might understand it by looking at eval, or to anyone considering using the language. Joe Marshall (who used to work on Rebol) apparently wrote a Rebol-to-Scheme compiler, but the actual code seems to have vanished from the face of the Internet. (Update: Here's a copy. Thanks, anonymous!) Fortunately, bits of a Rebol interpreter survive in Joe's talk on the difficulties of doing tail-call optimization. From these, and from the documentation, I think I understand Rebol's eval now, though not with enough confidence to explain it formally - go read that talk if you want to know.

The odd thing is that it operates on sequences, so it has to parse as it evaluates. The two operations which consume arguments (assignments and function calls) evaluate as many as they need from the rest of the sequence, not knowing where an expression ends until they've evaluated it. (This is why tail-call optimization is hard: you can't tell whether you're in a tail context until the last moment.) After evaluating, if there's nothing left, then you just did a tail call, and its result is the value of the block. If there is something left, and it begins with an infix operator, then what you just evaluated was its first argument, so you evaluate another argument and call the infix operator. (This means all infix operators are left-associative.) This is what it takes to interpret unparenthesised prefix with variable arity, but it's certainly awkward.

Here's something disturbing: Rebol 1.0 had lexical scope, but Rebol 2.0 doesn't. Blocks are just self-evaluating, and it's up to functions (or should I say FEXPRs?) like if to evaluate them in some environment. (However, this article suggests words know something about their environment.) I don't know how assignment handles nested scopes. It is worrying, especially in a poorly specified language with no reference implementation, to see what appears to be a well-known mistake, and the possibility of several others. I don't trust my intuition when evaluating a language, but I'm beginning to get the impression that Rebol has a lot of cute features on a rather shaky foundation.

Trying to reconstruct a language from introductory documentation is a frustrating task, so I'll stop here. Unfortunately I've exhausted my patience before learning much about its library or how it works in practice, so I still don't know what the good details were that Edoc was talking about. Maybe some other time.

Array mapped hash tries and the nature of data structures

Clojure provides several immutable collection types: ordinary lists, associative arrays, and "a hash map and vector both based upon array mapped hash tries". Array mapped hash tries? What are those?

The short answer is that they're hashtables whose bodies are tries (prefix trees) made of bitmapped sparse arrays. ("Array mapped" refers, rather confusingly, to the last part.) The long answer can be found in two of Phil Bagwell's papers: Fast and Space Efficient Trie Searches and Ideal Hash Trees.

Nowadays most programmers are used to thinking of hashtables as a primitive data structure, but they're not as simple as they look. Ordinary hashtables combine three ideas in their implementation: arrays, hashing, and reprobing. Arrays are the most space-efficient collection type, so naturally you want to use them for everything, but they have the limitation that they can only be indexed with (a contiguous range of) integers. Hashing fixes this - it turns anything into an integer, suitable for indexing arrays or whatever data structure you please, at the cost of introducing collisions. Reprobing deals with those collisions, without the inefficiency of chaining. The combination has the coveted trait of fast constant-time lookup, so almost everyone uses it, but it's not the only way to build a hashtable.

Each of those three concepts can be used independently of the others. In particular, hashes can be used to index other structures than arrays. This is what Clojure does. Rather than backing its hashtables with (reprobed) arrays, it backs them with tries, treating the hashcode as a sequence of 5-bit integers. Tries are less efficient than arrays, of course, but they have the advantage of nondestructive update in log time. Clojure's collections are immutable, so efficient nondestructive update is a must.

(Mini-rant: I refuse to call it "persistence". There are plenty of other words for this - "nondestructive", "functional", "incremental". Do we have to appropriate a term with a well-established meaning that is both important and likely to be used in the same context?)

Tries have a singularly confusing name, which may explain how little attention they get. But they also have a space problem: each node may have a significant number of children, which must be accessed quickly, so an array is the obvious choice. But most nodes have only a few children, so arrays would waste a lot of space. Bagwell's (and Clojure's) solution is to use bitmapped sparse arrays. Each node of the tree has a bitmap showing which children are present - and then a small array containing only those entries. For example, in a base-32 trie, a node with children 1, 5, and 11 would be represented by a bitmap 100000100010 and a 3-element array containing pointers to the children. This represents a 32-element sparse array in only 4 words! It's not even inefficient to index into, as you can determine the index of an entry by counting the 1 bits below it in the bitmap. This is supported in hardware on most architectures - but not, alas, on x86.

But even with hardware support, these tries are considerably slower than ordinary hashtables - they're fast log time, not fast constant time. There is an unavoidable price to nondestructive updates: they require keeping more information around than destructive ones. We functional programmers are used to thinking of state as a valuable (if hazardous) optimization tool, but we often mistake the reason. The main performance benefit of side effects isn't from the obvious operations like nconc, that use state to recycle old objects and slightly reduce allocation. Much more important is the freedom they give us, to use different data structures and different algorithms, and to express our programs in different ways.

Proglangers and conlangers

Yesterday I used the obscure word "proglanger", meaning someone who designs programming languages. It's so rare that for the moment Google knows no other occurrences, but its meaning is probably obvious. Like "natlang", it comes from the slang of conlangers: people who construct languages for fun, like Tolkien. (People who make up whole languages don't hesitate to make up words.) Inventing human languages is obviously related to inventing programming languages (and there is some overlap between the two communities), so I want to think there are lessons to be learned from it, but... I can't think of any. Even from linguistics it's disappointingly hard to learn anything that applies to proglangs.

Okay, one habit from conlanging helps me in proglanging: When I encounter a strange new language (natural or not), I see it as a source of fun ideas, not mysterious foreign gibberish. This gives me a rather exploitative attitude to new languages, but it probably makes me learn languages faster. But this doesn't impress the part of me that wants to see one of my hobbies inform another.

Learning by translating

A couple of years ago (see how long it takes me to blog this?) I translated two programs from languages I didn't know to one I did. The destination language in both cases was one of my pet Lisps. I was translating code into it to test its expressiveness, which was painfully instructive: there were a number of places where my language couldn't do what I wanted, and some of them were hard to fix. But most of what I learned from the exercise had nothing to do with my language.

One of the programs was in obfuscated Perl. It (ab)used enough obscure parts of the language that I didn't translate it properly. That's fine. I learned a lot of Perl in the process.

The other program was in Groovy, a sort of functional Java. It's a nice example because it involves networking, XML, and a graphical user interface, so it tests a language's libraries in ways most toy examples don't. It was probably chosen to make Groovy look good, and it does - but my Lisp version came out about half the size of the Groovy one. Unfortunately for my bragging rights, this wasn't because my language has greater expressive power. It's because Groovy uses Java's libraries, and their interfaces are limited to things that can be used from Java (archaic Java, in many cases). Groovy is a much more expressive language than Java, but the libraries take no advantage of that expressiveness, so half of the program was spent working around limitations Groovy doesn't have. If Groovy had its own libraries, its program would be about as small as the Lisp one.

Well, maybe not quite as small. The Lisp translation had the unfair advantage that I invented many of the libraries as I went, so they were particularly convenient to that program. And since I didn't implement them, I may have missed some ill-definednesses. I don't think they were unrealistic, but in reality the program would probably be a bit longer than the one I came up with. Whenever you can reimagine your language to fit the task at hand, translation will flatter it. This can be useful - there is nothing as encouraging to a proglanger as seeing your language beat Perl at text processing, Unix at file management, and Haskell at data structure crunching. I've done all of these, and let me tell you, it feels good! But it doesn't mean much.

I didn't learn as much Groovy as I did Perl, probably because the Groovy program did not depend on many obscure details of the language. I treated it as a functional language with Java syntax, and that was all I needed to know to understand it. The Perl program was harder, of course; I had to learn a good deal of syntax to see through the line noise to the underlying structure, which was not as exotic as it looked. Like APL, Perl gets much of its apparent strangeness from its abbreviated syntax; the language behind the syntax is only a little weird. But in this case there were more layers of language-dependent obfuscation...

This isn't a bad way to learn a language, as long as you enjoy being puzzled. Just make sure the program you're translating is a bad one. Code that makes its meaning clear won't help you, because you'll understand it without having to think about how it works. For once, that's a bad thing.


An anonymous commenter says the tail-recursive loop in my previous post would be clearer with loop. I agree in principle: recursion is more general than needed, so a loop should be clearer. Should be. But Common Lisp is not especially great at expressing iteration.

(loop for c = (peek-char nil stream eof-error-p)
  while (whitespacep c)
  do (read-char stream)
  finally (return c))

I'm one of those “knee-jerk anti-loopists”, so maybe I'm reading this with unfriendly eyes, but it doesn't look any clearer to me. It saves a line of explicit recursion, but spends one to explicitly return the result (but maybe I just don't know how to use loop; is there a better way?), which makes the control flow a little odd. Plus I don't like having to parse a language that isn't S-expressions!

What about ordinary do?

(do ((c (peek-char nil stream eof-error-p) (peek-char nil stream eof-error-p)))
  ((not (whitespacep c)) c)
  (read-char stream eof-error-p))

As usual, do has more non-form parentheses than I'd like, but it would be considerably shorter if it weren't for the hideous repetition of (peek-char nil stream eof-error-p). This is an unfortunate side-effect of a feature (not stepping variables with no step-form) I rarely (never?) use - if I wanted a non-stepping variable, I'd use let. In my opinion do would be much more useful if it handled the common case instead, and made a two-element binding clause (var step) equivalent to (var step step). One could also reduce the mystery parentheses a bit (and increase generality) at the cost of only a little verbosity, by making the end-test a standalone form:

(defmacro until (test &body result-forms)
  "Terminate a loop if TEST evaluates to true."
  `(when ,test (return (progn ,@result-forms))))

(do ((c (peek-char nil stream eof-error-p)))
  (until (not (whitespacep c)) c)
  (read-char stream eof-error-p))

That's not so bad.

I'm not the first Common Lisper to complain about the lack of iteration facilities. Jonathan Amsterdam's answer is iterate, which is basically loop with parentheses and more features - including one, finding, for writing this sort of loop. Here's the (untested) iterate version:

(iter (for c next (peek-char nil stream eof-error-p))
  (finding c such-that (not (whitespacep c)))
  (read-char stream eof-error-p))

It would be a two-liner, except that (finding ... such-that (complement #'whitespacep)) doesn't work - that shortcut only works if the right side is a (function ...) form.

That functional bit is a reminder that I'm going about this the wrong way. None of these loops approaches the clarity of the functional way. If the sequence functions worked on streams, reading characters as needed, we could just do something like:

(find-if (complement #'whitespacep) stream)

and unread the character afterward, if we got one. Or, if peeking returns a derived stream which delays removing each character until the next one is needed:

(find-if (complement #'whitespacep) (peeking stream))

These functional conveniences are less general than loop or iterate, but that's okay. Their lack of generality makes the common loops much simpler. Too bad they don't exist.
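For what it's worth, the peeking idea is easy to sketch. Here's a hypothetical Python version (Peekable, read1, and unread are invented names, not a real library): a one-character pushback buffer lets a functional search leave the found character on the stream, like peek-char does.

```python
import io

class Peekable:
    """Character stream with one-character pushback. Invented for
    illustration; not any standard API."""

    def __init__(self, stream):
        self.stream = stream
        self.buf = None

    def read1(self):
        if self.buf is not None:
            c, self.buf = self.buf, None
            return c
        return self.stream.read(1)

    def unread(self, c):
        self.buf = c

def find_if(pred, s):
    """Like CL's FIND-IF over a Peekable: consume characters failing
    PRED, return the first match (or None) and leave it unread."""
    while True:
        c = s.read1()
        if not c:
            return None
        if pred(c):
            s.unread(c)   # leave the found character on the stream
            return c

s = Peekable(io.StringIO("   \t hello"))
print(find_if(lambda ch: not ch.isspace(), s))  # h
print(s.read1())                                # h (still on the stream)
```

The whole whitespace-skipping loop collapses into one call, with the pushback doing the work of "unread the character afterward".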

Common Lisp skips whitespace, and skips whitespace

It's well known to Lispers that Common Lisp's library is remarkably complete in some places, and remarkably incomplete in others. I ran into both cases yesterday. I needed to see what was behind some whitespace, so I did the obvious thing:

(defun skip-whitespace (stream &optional eof-error-p)
  "Skip leading whitespace on STREAM, and return the first
non-whitespace character without removing it."
  (let ((c (peek-char stream eof-error-p)))
    (cond ((whitespacep c) (read-char stream eof-error-p)
                           (skip-whitespace stream eof-error-p))
          (t c))))

There are two problems with this code. The first is that whitespacep is missing. Did I get the name wrong? No, it simply doesn't exist. Common Lisp has graphic-char-p, alpha-char-p, digit-char-p, alphanumericp, standard-char-p, upper-case-p, lower-case-p, and even both-case-p (which is not quite what it sounds like), but nothing for whitespace. And you can't even easily assemble it out of the available predicates.

Oh well, it's easy to fake.

(defun whitespacep (c)
  (member c '(#\Space #\Tab #\Return #\Newline)))

The second problem was a mysterious type error: peek-char didn't like *standard-input*; it wanted a character or a boolean instead. Huh?

If I had bothered to read the Hyperspec (or even look closely at the argument list Slime displayed), I would have known that peek-char's first argument isn't the stream, it's the whitespace setting. The function I wanted is already built in to the language: (peek-char t stream) skips whitespace and returns the first non-whitespace character. Obviously I'm not the first person to want this. peek-char may not be the most obvious place to put this feature, but this is one of those places where Common Lisp is almost absurdly complete. It may not be able to identify whitespace characters, but it knows what to do with them.

Edit: fixed an uninteresting third problem: I forgot to pass eof-error-p to peek-char.

No value

Slime (the Emacs mode for Common Lisp) is nothing if not helpful. It autoindents, it completes symbols, it shows backtraces, it fontifies output. It even helpfully points out when an expression returned no values:

CL-USER> (values)
; No value

This is missing the point. Almost all CL functions can find something useful to return — even those, like rplacd or write, that are called purely for their side effects usually return one of their arguments, just to make them easier to insert into existing code. Functions producing zero values usually do so not because they have nothing to return, but because they're intended to be used at the REPL, and they don't want to clutter up the output with an uninteresting value. But Slime's helpful comment defeats their efforts. Observe:

CL-USER> (describe 'values)
VALUES is an external symbol in #<PACKAGE "COMMON-LISP">.
Its associated name (as in FUNCTION-LAMBDA-EXPRESSION) is VALUES.
The function's arguments are:  (&REST VALUES)
Function documentation:
  Return all arguments, in order, as values.
On Fri, Mar 24, 2006 07:42:39 AM EST it was compiled from:
  Created: Friday, April 15, 2005 09:57:55 AM EDT

; No value

How is this an improvement on a meaningless NIL?

(Hey, look, a logical pathname! I didn't know anyone used those.)


In a rant about naming conventions, Yossi Kreinin praises the way Lisp does multi-word names:

I think that the best naming convention out there is the Lisp one: case-insensitive-dash-separated. It just doesn’t get any better:

  • You never have to hit Shift or Caps Lock in the middle of a name, which makes typing easier. This is especially important for names because you use auto-completion with names. Auto-completion requires you to press things like Ctrl-Space or Alt-Slash or Ctrl-P. Together with another Shift needed for the name to be completed correctly, auto-completion is much more likely to cause repetitive strain injury.
  • You never have to think about the case. Figuring out the case of a letter in a case-sensitive naming convention can be non-trivial; more on that later.
  • The dash-as-a-separator-convention is used in English, so your names look natural.

I'm not so fond of this convention. Well, it is nice to read; I can't think of anything more natural. And case-irrelevance is nice. (If your favourite lisp stopped case-folding one day and became case-sensitive in practice as well as theory, would you even notice?) There's just one problem: those hyphens take space.

How much?

(let ((syms 0) (chars 0) (hyphens 0))
  (do-symbols (s (find-package :cl))
    (incf syms)
    (incf chars (length (symbol-name s)))
    (incf hyphens (count #\- (symbol-name s))))
  (format nil "~S symbols, ~S chars (avg. ~2,2f)~%~
             ~S hyphens, ~2,2f% hyphens (avg. ~2,2f)~%"
    syms chars (/ chars syms 1.0) hyphens
    (/ hyphens chars 0.01) (/ hyphens syms 1.0)))

"978 symbols, 11271 chars (avg. 11.52)
875 hyphens, 7.76% hyphens (avg. .89)"

Hmm. Not as bad as I thought. 7.7% sounds like a lot, but most of those hyphens are in rarely-used names; the common ones are short. The big expression above has only four hyphens (plus the #\-), so it's not a huge problem. On the other hand, standard library names (even in CL) tend to be short; user code is often full of rather long names. Let's see...

(defun slurp (filename)
  "Read the contents of a file as a string."
  (with-open-file (s filename)
    (let* ((len (file-length s))
           (str (make-string len)))
      (do ((i 0 (1+ i)))
          ((>= i len) str)
        (setf (schar str i) (read-char s t)))))

(defun count-hyphens (filename)
  (let ((s (slurp filename)))
    (format t "~2,2f% of ~S chars~%"
            (/ (count #\- s) (length s) 0.01) (length s))))

(count-hyphens "asdf-install/installer.lisp")
3.13% of 12210 chars
(count-hyphens "edit-modes/python.el")
3.13% of 90348 chars   ;coincidence
(count-hyphens "keywiz.el")
2.88% of 10230 chars
(count-hyphens "tetris.el")
4.54% of 22458 chars
(count-hyphens "edit-modes/slime/swank.lisp")
2.86% of 191438 chars

There's a lot of variation (partly because some files are diluted with lots of comments, which I didn't try to strip out), but evidently hyphens are still significant. Tetris.el has a lot because it has a lot of names with short words, like tetris-top-left-x. Elisp in general suffers (in total length as well as hyphens) because it has no module system, so every name needs a module prefix. (If you want to make a language's code shorter, this is the first thing to worry about: make sure users don't have to defend against name collisions.) But even in CL, hyphens take around 3% of the code. That's about half as much as parentheses.

Maybe this isn't directly a problem. People probably don't look at the hyphens even as much as they look at parentheses, so what does it matter how many there are? But they still lengthen lines, making the eyes move farther, and increasing the chance of linebreaks. When one character accounts for 3% of your code, it's hard to argue that it doesn't matter. But as with parentheses, it's even harder to do anything about it, since all the alternatives are harder to read. So we use hyphens. But I still don't feel altogether good about them.

Update: It seems some people think I'm bashing Lisp on the grounds that hyphenated names make its code 3% longer. Relax; of course this is not a big problem, and I like Lisp, or I wouldn't be worrying about its naming conventions. And I prefer hyphens to every alternative, because they're so much more readable. But this readability isn't free, and at 3% the cost (in screen space) is surprisingly high. I don't have a solution, but it's nice to be aware of the problem. I will take CamelCase a little more seriously now.

Clearly illicit frobnication

I hack C++ at work. In addition to the usual difficulties of writing in a low-level language with a buggy compiler, this means I have to write in a style other C++ speakers will understand. Occasionally I forget, and write as I would in a functional language. The other day I wrote some code like this:

if (constraint == (frobnicated ? MUST_NOT_FROBNICATE : MUST_FROBNICATE))
  die("illicit frobnication");

Someone found it hard to read, so I changed it to make the logic very explicit:

if (frobnicated && constraint == MUST_NOT_FROBNICATE
    || !frobnicated && constraint == MUST_FROBNICATE)
  die("illicit frobnication");

On reflection I agreed it was easier to read, even though it was longer. Why?

Maybe it's because the ternary operator is so awkward in C++. It's often hard to parse, so it's not used much, so readers of C++ don't expect to see it, especially not as an argument to ==, and especially not in the control expression of another conditional. And since they don't expect to see it, it's harder to read. Is this easier in a different language?

(when (eq constraint (if frobnicated 'must-not-frobnicate 'must-frobnicate))
  (error "illicit frobnication"))

In my opinion that's a small improvement in syntax, but none in semantics. It's still confusing. So my initial impression was wrong. This isn't about C++ after all; it's the logic that's confusing.

What if the test is something other than equality? ISTM this variation is a little less confusing, despite being more general:

(when (member (if frobnicated 'must-not-frobnicate 'must-frobnicate)
              constraints)
  (error "illicit frobnication"))

I think the real problem is the pseudo-Boolean nature of the constraints. It's hard to read MUST_NOT_FROBNICATE in a Boolean expression without thinking the NOT is the negation operator and it's involved in the expression somehow. (Of course it is, but not directly enough that it helps to think of it.) That's only a minor problem, but when it's combined with a conditional expression and a comparison of Booleans, the total confusion is enough to slow the reader down, or to overwhelm the less logic-minded.

We can get rid of the whole thing easily enough. What if the constraint were a function instead of an enumeration?

(unless (funcall allow-frobnication frobnicated)
  (error "illicit frobnication"))

That's much simpler, and IMO easier to read too. Another victory for functional style! Unfortunately it moves some complexity (especially in C++) to the three allow-frobnication functions. And to many C++ speakers it's odd enough to be much more confusing than the original version. In this case expressiveness is limited by what the readers understand, not by what the language does.
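To make the comparison concrete, here's the predicate version sketched in Python (the constraint names follow the post; the function bodies are my guesses at the obvious semantics):

```python
# Each constraint is a function from the frobnication state to
# "is this allowed?" - replacing the pseudo-Boolean enumeration.

def must_frobnicate(frobnicated):
    return frobnicated

def must_not_frobnicate(frobnicated):
    return not frobnicated

def may_frobnicate(frobnicated):
    return True

def check(allow_frobnication, frobnicated):
    if not allow_frobnication(frobnicated):
        raise ValueError("illicit frobnication")

check(must_frobnicate, True)    # fine
check(may_frobnicate, False)    # fine
try:
    check(must_not_frobnicate, True)
except ValueError as e:
    print(e)                    # illicit frobnication
```

The confusing comparison of Booleans disappears entirely; the cost is three tiny functions instead of one enumeration.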

Pretty fonts

I would just like to point out that Book Antiqua is a very pretty font. The sort of font you could set books in, even. Or blogs.

So is Gill Sans.

Infix lambda

I was playing with syntax today, specifically terse lambdas. After a brief dalliance with Smalltalk-like blocks with a Haskell-like -> separator, I dumped the square brackets and was left with an infix operator for lambda:

successor = (a → a + 1)
fib = memoize (n → if (n < 2) n (fib (n - 1) + fib (n - 2)))
apropos term = filter id
                 (map (f → if (find-substring term (docstring f))
                              (name f))
                      all-functions)

I like it because it's terse without requiring any special parentheses, and it minimizes the difference between an expression and a function abstracting that expression. Like most infix operators, it gets harder to read as the expression gets bigger, but for small expressions it's great - it approaches the brevity of combinators without requiring gymnastics to convert an existing expression to a function. It's also a perfect example of the value of infix: the operator itself serves to delimit the argument list, saving a pair of parentheses.

I vaguely remember having heard of an infix lambda operator before, but casual googling does not reveal it - the results for both "infix lambda" and "lambda infix" are dominated by accidental juxtapositions. Do any of my fearsomely knowledgeable readers know whether and where this has been done before?

I haven't bothered to implement it - I know what it does; I'm trying to figure out what it's like to read. So far I've only found one significant problem: in a language which also has infix define, I'm tempted to read the first argument as the name of the function. Which is just fine, but I tend to confuse the λ and named-λ versions.

Consider (→ body). It's obviously a nice short way to write a thunk: (λ () body). But (f g → body) could be either (λ (f g) body) or (named-λ f (g) body). Neither λ nor named-λ is more right than the other, and there's no reason to choose between them - they are both frequently useful, so it's fine to have both. How about → for λ and ⇒ for named-λ, since the double arrow resembles =?

There's also the define/defun question: for (named-λ f (a b) body), do you write (f a b ⇒ body) or (f (a b) ⇒ body)? In prefix syntax, I prefer a separate λ-list: I find (defun f (x) ...) easier to read than (define (f x) ...), because the function name is not mixed in with the semantically different argument names. But in infix I would rather dispose of the parentheses and write the Haskell-like f x = .... This pattern holds for non-defining forms too: I like named-λ to separate the name from the λ-list, but this just looks silly in infix.

The define style also has the advantage that it generalizes nicely to recursive bindings of non-functions:

endless x = (l ⇒ (lcons x l))

Now, does that look familiar? It's the infix version of SRFI-31's rec!

Update 7 Sep: It turns out C# 3.0 has infix lambda, and calls it =>.

Arc gets I/O right

In any significant program, a great deal of code is spent on two activities that have very little intrinsic interest: I/O and error handling. This makes them good targets for language design. If a language can make these two areas easier, it can remove a significant part of the accidental complexity of programs. Conversely, a language in which either is difficult is a language in which effort is wasted on uninteresting problems.

This is where Arc is good. Many parts of it are hacky or just not there yet, but its I/O library is strikingly convenient. There's a lot there worth seeing and maybe stealing. In particular:

  • The whole-file I/O routines, starting with readfile, are a great idea - usually what you want in a file is to read the whole thing. There should be one (file?) for reading the whole file as a string too. (The point here is not that they read the whole file in one piece, but that they handle the opening and closing - all they need is the filename.)
  • infile and outfile are nice and simple. Opening files for reading and writing are quite different operations, and there's no reason to do both with the same function. Getting rid of the mode argument also makes the functions easier to compose. This can go further: the remaining binary, text and append arguments should be separate functions too.
  • Using tostring is much simpler than having an optional stream argument to every output routine. For instance, (tostring (prall thingies "We have ") (when (no done?) (pr " and more coming!"))). Dynamic variables are a perfectly reasonable way to pass arguments, especially those like streams that tend to be used in dynamic scope - so why don't I feel good about this? Are there any cases it handles badly?
  • Printing several values separated by spaces is a common operation (especially in throwaway programs) but it needs a good name. I like prs better than what I was using (prv, for "print values") although it might conflict with other functions like "print to stream".
  • prall is another function I often want, although it feels unwieldy to me, possibly because the second argument is printed before the first. The prefix argument also seems out of place.
  • prf is (format t ...), plus it does string interpolation in case you prefer that. Where's the non-printing format (well, probably fmt in Arc)? This is definitely common enough to deserve a function, not just (tostring (prf ...)).
  • There are some other perlesque string operations - tokens and such. You can never have enough of these (even in Perl).
  • I'm not sure if w/bars is useful (for what, debugging?) but it's an interesting kind of output wrapper - it captures write operations, not characters. How about a similar wrapper that indents every line?
  • It's not an I/O routine, but ellipsize handles a common problem (truncating long strings and adding an ellipsis) that would often go unhandled without it.
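Two of these conveniences - whole-file reads and tostring - are easy to approximate in any language. Here's a hedged sketch in Python (readfile_string is an invented name, and Arc's actual functions differ in detail):

```python
import contextlib
import io

def readfile_string(name):
    """Read a whole file as a string: just the filename, no explicit
    open/close - the point of Arc's readfile family."""
    with open(name) as f:
        return f.read()

def tostring(thunk):
    """Run THUNK with standard output captured, and return what it
    printed - the analogue of Arc's tostring, done here with a
    dynamically scoped redirection of stdout."""
    out = io.StringIO()
    with contextlib.redirect_stdout(out):
        thunk()
    return out.getvalue()

print(tostring(lambda: print("We have 3 thingies", end="")))
```

Note how tostring composes with any output routine for free, because the stream is passed dynamically rather than as an explicit argument to every call.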

I/O is really a subset of a more general category, glue, which probably accounts for more complexity than everything else put together. And it's all accidental complexity, so it should be a target for libraries. Unfortunately I don't think there are any general ways to reduce it, because there are so many different transformations that have to be done. But simplifying I/O addresses the most common kind of glue. Remarkably, considering its age and importance, this area is not solved. There are still considerable improvements to be found, and Arc is finding some of them.

I wish I could find similar improvements in error handling.

A peculiar deficiency

Internet Explorer 7 doesn't support HTML's q element. This is rather annoying, as I've been using it heavily for marking up quotes - on browsers that do support it, it produces nice language-sensitive quote marks. I could understand this if it were an ancient proto-IE, but why does the latest version not support an element that's been in the standard since 1999?

I'll have to go edit old posts and replace <q>...</q> with "...". Of course, Blogger's search helpfully ignores HTML tags, so I can't find them that way...

The other advantage of Wikipedia

Wikipedia is faster than Google, for the common class of Web searches where you already know the name of the thing you're looking for: "tell me about Ipomoea". When you look something up by googling it, Google gives you a list of results (often including the Wikipedia page), but then you have to click on a link to get to the actual page. When you type in the URL of a Wikipedia page, it takes you directly to the result - and most of the time it's a good one.

Google's I'm Feeling Lucky feature is just as fast, but less specific. It takes you to the most popular or important page on the search term, which is often not the most informative - it might be someone selling it, or a story about it, or something named after it. None of these are as much help as even a mediocre Wikipedia page.

Of course Wikipedia URLs are a bit long to type. Fortunately Mozilla (including Firefox) has a feature to make it easier: you can teach it commands that expand to URLs, so "w Proteobacteria" takes you to http://en.wikipedia.org/wiki/Proteobacteria, and "g carnot cycle" asks Google, which refers you to Wikipedia, of course. This feature is intended for searches, but it also works for many other purposes. Until I wrote this paragraph, I thought Firefox didn't have this feature; I've been exploiting URL autocompletion instead. That's awkward enough that I wonder if it's any faster than using Google; I may have just thought so because it's more interesting. Blogging can be educational.

Curiously, when you don't know the name of what you're looking for, Wikipedia's search is not much help. In my experience Google, even without the help of a site:en.wikipedia.org restriction, is more likely to find the right Wikipedia page than Wikipedia's own search is. It's that bad.

Scrolling vs. pointing

Scrolling is one of the most common operations in most user interfaces, so it's not surprising that we have special hardware for it. Scrollwheels (and trackballs, the two-dimensional equivalent) are now common on mice, and on some other devices such as Blackberries. But this is a fairly recent phenomenon. Fifteen years ago, almost nobody had hardware for scrolling.

This is significant because it affects user interface design. We're accustomed to having dedicated pointing devices, and consequently our designs assume that pointing is easy. (And when this assumption is wrong, as it usually is on laptops, our interfaces can become quite clunky.) But we're still getting used to having dedicated scrolling hardware, so we design as though we didn't. We waste screen space on scrollbars, even though users rarely use them. (I only use mine when my scroll ball is gummed up.) We try to avoid having to scroll, because we think it's cumbersome. (And it still is, even with hardware, but it's not quite as bad as we're accustomed to.) We eschew spatial navigation in favor of more abstract interfaces, because we don't expect to have a good way to express movement in space.

Standard scrollwheels do have some annoying problems: their quantum is large, so it's impossible to scroll slowly or smoothly with them. This could be overcome with small changes to the hardware, or entirely in software by smoothing the scrolling. And they require repetitive movements to scroll long distances, which may be an RSI risk. Maybe something like a joystick would be better.

Any interface designer knows to optimize the most common operations, and we should remember that one way to do this is to use what the hardware gives us. If scrolling can be expressed only clumsily, via pointing, we should minimize it. Now that we have hardware for it, shouldn't we exploit it?

Comments on Blogger

Has anyone in the programming blogosphere complained to Blogger about the paucity of tags allowed in comments? Anyone discussing programming is likely to want code and pre, and anyone discussing pretty much anything may want blockquote from time to time. The restriction to b, i and a greatly discourages conversations (at least about programming) in comments. And isn't that what blogs are for?

There's a small issue with pre: long lines will disrupt page layout, especially when comments are formatted narrow. This is possible with plain old text if you use very long words, but it's much more likely to happen by accident with pre. But couldn't the comment software just check for long lines while it's checking for forbidden tags?

WordPress allows these elements. Why doesn't Blogger?

While I'm complaining about Blogger, I should complain about something else that's annoyed me: the small size of their textareas, both for posts and comments. I can see how the small comment area might help discourage long comments, but couldn't the posting area be a little bigger? Maybe twenty lines instead of fifteen?

Not that I really care. I write most of my posts in Emacs.

Edit: How could I forget Blogger's most annoying feature? By default, posts appear with the wrong timestamp: it's the time you created the post, not the time you published it. If you don't finish a post right away, you will accidentally backdate it, unless you remember to edit the date. This is one of the reasons I write my posts in Emacs instead.

Remapping keys

There are two keys on standard keyboards for which I have no use: Caps Lock and Insert. Both are intended to bring a slight convenience to a rare operation (typing in all caps or replacing fixed-length data), and both break keyboard input by entering a nonobvious mode where keys do not do what you expect. This wouldn't be a problem except that both keys are located next to frequently used ones, so it's easy to hit them by accident.

Well, not always by accident. I've remapped Caps Lock to Control on every machine I use regularly. This works great, except when I try to use someone else's machine. Then I routinely find myself typing something like C-d to delete a character, being confused to find the character still there, and then watching my subsequent typing come out in all caps. This is especially embarrassing when I do it on Windows, where C-d wouldn't have worked even if I'd used the right key.

I've also tried swapping parentheses with square brackets in my emacs (using keyboard-translate), but this is even more confusing, because it doesn't apply in other software on the same machine, let alone on others. (I've been using it for weeks and haven't gotten used to it.) And even when it works, its benefit is questionable. The square-bracket keys are far enough from the home keys that they're not terribly easy to type - maybe no easier than the regular parentheses. A number of Lispers recommend this swap, but I haven't found it helpful, even for Lisp.

I haven't remapped Insert, because it's not in as valuable a place as Caps Lock, and my home keyboard doesn't even have it. Mac keyboards use it for Help instead, which is only slightly less annoying when you hit it by mistake, and still not frequently used enough to deserve a key. Maybe it's comforting to beginners, but I suspect beginners would find an Undo key much more comforting, as long as it worked reliably. There must be many other common operations worthy of a key (although I can't think of any right now; someone help me). So do we have to waste hardware on annoyances like Caps Lock and Insert?

Learning to touch-type (finally)

Confession time: I can't touch-type. I've never bothered to learn, and when I try, I make so many mistakes that I give up and go back to visual typing. (And by the way, I mean visual typing, not hunt-and-peck. I know where the keys are; I just can't hit them reliably by touch alone.) In this I'm like most typists - touch typing is an ideal more aspired to than practiced.

But many people more productive than me highly recommend it. So today I googled up a nice typing-practice program, and I'm amazed at how fast I'm learning. My unconscious knowledge outstrips my conscious knowledge of it, so after less than an hour of practice, I already find my fingers correctly typing things while I'm still trying to remember where the keys are. This suggests I actually touch-type more than I'm aware of, and have only a little kinesthesia to learn. Well, that and using the right fingers, and keeping my wrists straight. I should have done this years ago.

Better late than never. I touch-typed this post.

You know you're a programmer when...

Last night I was trying to decide where to plant some flowers. There were some restrictions - for one thing, I didn't want any plant to be hidden behind a taller one. The morning glory had to go next to the railing, so it would have something to climb. Identical plants should be adjacent, but similar colors shouldn't be. And something conspicuous should probably go in front, so no one would trample the flowerbed by accident. Obviously this was a constraint problem, and one too hard to solve in my head. So, trowel in hand, I set about figuring out how to solve it by machine.

It is a valuable habit for a programmer to hand any problem over to a program at the first sign of difficulty. Or at least it's valuable when you're sitting at the keyboard and already know how to solve the problem. Unfortunately I've never actually done constraint programming, and nothing in the garden proved helpful. So I was still trying to figure out how to explain the problem to a computer when I noticed it had gotten dark, and I had only planted a few of the flowers.
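Back at a keyboard, the problem yields to brute force. Here's a minimal sketch in Python — the particular plants, heights, and colors are invented for illustration, and the bed is simplified to a single row running from the railing to the front:

```python
# Brute-force sketch of the flowerbed constraint problem.
# Plants, heights, and colors are invented for illustration.
from itertools import permutations

plants = {  # name: (height in cm, color)
    "morning glory": (60, "blue"),
    "marigold":      (25, "orange"),
    "zinnia":        (30, "red"),
    "petunia":       (20, "purple"),
}

def ok(row):  # row runs from the railing (back) to the front
    heights = [plants[p][0] for p in row]
    colors  = [plants[p][1] for p in row]
    return (row[0] == "morning glory"                     # needs the railing to climb
            and heights == sorted(heights, reverse=True)  # nothing hidden behind a taller plant
            and all(c1 != c2 for c1, c2 in zip(colors, colors[1:])))  # similar colors apart

solutions = [row for row in permutations(plants) if ok(row)]
print(solutions)
```

With only a handful of plants, checking every permutation is instant; a real constraint solver only earns its keep when the bed gets bigger.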

Real programmers can build constraint solvers out of dirt and weeds. I, on the other hand, got to plant my flowers in the dark.

A heap of possibilities

I needed a priority queue a few months ago, and didn't have a predefined one, so I used a sorted list, since n was always small. But that won't work for many applications. Yesterday I wanted a pq for a best-first search with a potentially large n. Obviously a heap was the answer. But the modest trouble of implementing one was enough to discourage me from trying that search. It wasn't in the standard library, so it was too much trouble to bother with.

Heaps are quite useful, and quite stereotyped, so they make good candidates for a standard library. I think they (like most data structures) would be most convenient as a heap type which supports the same operations as other collections, but with the particular performance characteristics of heaps. In addition, it's probably a good idea to expose the two internal reheap operations (adding and removing) so they can be reused for variations like limited-size heaps, or heapifying an existing array as in heapsort or heap-select.

And they should be ordinary destructive heaps, not functional ones. The motivation for heaps is performance, and heaps that require consing can't compete with those built on arrays. The flexibility functional heaps offer isn't worth the performance cost, because heaps are almost always used linearly.
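As a sketch of the interface described above — a destructive array heap with its two internal reheap operations exposed for reuse (the names here are mine, not from any standard library):

```python
# A destructive binary min-heap on a plain array, with the two
# internal reheap operations exposed. Names are illustrative.

def sift_up(a, i, less):
    """Restore the heap property after a[i] may have become too small."""
    while i > 0:
        parent = (i - 1) // 2
        if less(a[i], a[parent]):
            a[i], a[parent] = a[parent], a[i]
            i = parent
        else:
            break

def sift_down(a, i, less, end=None):
    """Restore the heap property after a[i] may have become too large.
    END limits the heap to a prefix of the array, as heapsort needs."""
    if end is None:
        end = len(a)
    while True:
        child = 2 * i + 1
        if child >= end:
            break
        if child + 1 < end and less(a[child + 1], a[child]):
            child += 1                      # pick the smaller child
        if less(a[child], a[i]):
            a[i], a[child] = a[child], a[i]
            i = child
        else:
            break

def add(a, x, less):
    a.append(x)
    sift_up(a, len(a) - 1, less)

def pop(a, less):
    top, last = a[0], a.pop()
    if a:
        a[0] = last
        sift_down(a, 0, less)
    return top

def heapify(a, less):
    """Turn an existing array into a heap in place, as in heapsort."""
    for i in range(len(a) // 2 - 1, -1, -1):
        sift_down(a, i, less)
```

With the sift operations public, the variations come cheap: heapify plus repeated sift_down over a shrinking prefix is heapsort, and an add that pops first gives a limited-size heap.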

When you have heaps, some fancy algorithms become much easier. The search I wanted yesterday, for example (in pseudo-Lisp):

(defun a* (expand min-cost done? init)
  "Best-first heuristic graph search. Starting from INIT,
find the lowest-cost node that satisfies DONE?, where
EXPAND is a function on a node that returns a list of
its neighbors, and MIN-COST is a function from a node to a
lower bound on the cost of any node reachable from it."
  (def pq (heap (o* < min-cost)))
  (def (step x)
    (if (done? x)
      x
      (do (each (expand x) (add! _ pq))
          (aif (pop! pq)
            (step it)))))
  (step init))

If you really want to make fancy algorithms easy, you could include functions like that one in the standard library. The greatest strength of functional programming (in the high-order sense) is the ability to abstract algorithms; we might as well use it.
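For comparison, the same search over Python's standard heapq module — a sketch mirroring the pseudo-Lisp above (no closed set or path reconstruction, and the counter is a tiebreaker so the heap never has to compare nodes directly):

```python
# Best-first search over a heap, mirroring the pseudo-Lisp sketch.
import heapq
from itertools import count

def best_first(expand, min_cost, done, init):
    """Starting from INIT, find the lowest-cost node satisfying DONE,
    where EXPAND returns a node's neighbors and MIN_COST is a lower
    bound on the cost of any node reachable from a node."""
    tie = count()
    pq = [(min_cost(init), next(tie), init)]
    while pq:
        _, _, x = heapq.heappop(pq)
        if done(x):
            return x
        for n in expand(x):
            heapq.heappush(pq, (min_cost(n), next(tie), n))
    return None

# Toy usage: reach 20 from 1 by repeatedly adding 1 or doubling.
print(best_first(lambda n: [n + 1, n * 2],
                 lambda n: abs(20 - n),
                 lambda n: n == 20,
                 1))
```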

How not to keep a to-do list

One of the main functions of a feed reader, as I use it, is a to-do list: it keeps track of what items I haven't read yet. This is an affordance that I find useful, but it's not really intended by the authors of feed readers. This leads to some annoying deficiencies.

Google Reader only remembers items for a month. After that, it automatically deletes them, whether you've read them or not. This makes sense if you use the reader for browsing recent news, but not if you use it for remembering to read every post. For a to-do list, it is difficult to imagine a misfeature worse than automatically forgetting items.

I've lost some 30 unread items in the last week. Some were unread because they were uninteresting. Some were unread because they were so interesting that I was not done with them after one reading. Some were unread because I simply hadn't gotten around to them yet. I don't know how many of each there were, because Google Reader won't even show me what's expired. This invisibility makes the expiration misfeature far more annoying than it would otherwise be. I have lost part of my to-do list, and I don't even know what I lost.

I like Google Reader's portability - I can read my feeds from any machine. But I do almost all of my reading from one machine, so maybe I should find a reader that doesn't forget things. I can do that just fine by myself.

Faster than an arrow

Speaking of Emacs keys - I have lately found myself using C-n and C-p more and more often, instead of the arrow keys. These chords seem like they'd be more work, since they require pressing two keys instead of one, but the fingers don't have to move as far from home, so it's faster. In addition, I frequently use them in combination with C-a or C-e, in which case I already have a finger on Control and they only cost one additional keypress.

I don't use C-f and C-b much, possibly because I usually want to move by more than one character, so I use commands like M-f and M-b instead. When I do want to move by less than a word (e.g. to insert a missing space in the middle of a word) I usually want to do so repeatedly, so moving a hand to the arrow keys is less of a burden. I mostly use C-f and C-b on the rare occasions when I only want to move by one or two characters. There are fewer alternatives for vertical movement, so I use C-n and C-p more often.

Learning emacs the fun way

Welcome to keywiz
There are currently 366 commands.

Only 366, yet I still don't know most of them...

Keywiz is a game for learning emacs commands. This is something like the second best way to help users learn an interface. (The best is to unobtrusively show them what commands are available, as menus do, but that's hard to do for hundreds of commands.) It's not a great game - it's obnoxiously modal, its difficulty isn't variable, and it rudely stops play after two minutes. Nor is it thorough - it appears to only cover global keybindings, so it misses mode-specific goodies and anything requiring M-x. All it does is give you the names and docstrings of functions bound to keys, and ask you to type the key sequence. But as a training tool it's better than any documentation ever written.

Insert spaces or tabs to next defined tab-stop column.

Well, that's easy. Tab.

Incorrect.  The correct answer is: M-i

There's another kind of tab? Ohhh, it's the manual indent, not autoindent. Hey, I needed that yesterday!

Display information about CODING-SYSTEM.

That's guessable. C-h c, right?

Wrong.  The correct answer is: C-h C, <f1> C, <help> C

Case-sensitivity, how do I hate thee...

Do incremental search backward.

At last, one I know!

Indeed.  The answer is: C-r

Time's up
You made 4 points.

Ouch. I've been using emacs for years, and I still only know a tiny fraction of it. Maybe if I play a few more games...

Via Marco Baringer (probably) on Cliki.

Lisp without lists

Any significant program uses collections heavily, so most languages have several collection types - usually at least one sequence type and one dictionary type. Many languages have more, but library effort is limited, so every language omits some useful collections. As long as there are good alternatives, it's not a big problem.

Ruby doesn't have lists. RLisp borrows Ruby's data structures, so it doesn't have lists either. But it's a Lisp dialect! How can you have a Lisp without lists?

Pretty easily, it turns out. Despite their historical importance, lists don't have much to do with Lisp being Lisp. Their most important use is to represent S-expressions - but they aren't the only data structure that can do that. The important feature of S-expressions isn't that they read as lists - it's that they read as something simple and flexible and easy to work with. Vectors have all of these advantages, so RLisp programs are made of vectors instead of lists.

This is not traditional, but it's not a bad idea. Sure, cons and cdr are more expensive, and those are common operations on code, but performance of code manipulation is rarely a problem. And vectors are smaller than lists, which recovers some speed, and makes it cheaper to keep code around for debuggability. So I think other new Lisps should consider using vectors instead of lists.
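Python makes the point by accident: its "lists" are really vectors, and code represented as nested vectors is no harder to walk than code represented as conses. A sketch (the mini-evaluator and its forms are invented for illustration):

```python
# S-expressions as nested vectors: a toy arithmetic evaluator,
# invented for illustration. Python lists are vectors, not conses.

def ev(expr):
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr              # destructuring a vector: car and cdr, cheaply
    vals = [ev(a) for a in args]
    if op == "+":
        return sum(vals)
    if op == "*":
        result = 1
        for v in vals:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

print(ev(["+", 1, ["*", 2, 3], 4]))  # → 11
```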

I don't mean to suggest that lists aren't important. They are the functional sequence data structure, and every language that intends to support functional programming should have them. But RLisp is in its infancy, and it's forgivable if it doesn't have lists yet.

On the other hand, there's no excuse for its convention of writing close parentheses on their own lines. They look so lonely! More to the point, they waste space, separating the lines that have actual code. This is just as annoying in Lisp as it is in C, and there's no reason to do it in Lisp. Let indentation show program structure, and leave the parentheses at the ends of lines, regardless of whether they delimit vectors or lists.

Alternating lists

I complained about the awkwardness of processing alternating lists, such as in (let (var1 form1 var2 form2) ...), because ordinary collection operations don't understand them. But this is easy to fix. Arc's assignment operator uses an alternating list: (= a 1 b 2) is (do (= a 1) (= b 2)), just like setf in Common Lisp. So Arc has a function pair to convert (a 1 b 2) to ((a 1) (b 2)). A look through the Arc source shows it's always used with some sort of map, so there's still some repetition to remove. But it reduces the added difficulty of processing alternating lists to one function call. The inverse operation is easily accomplished with some sort of mappend. So the difficulty of processing alternating lists is not really a significant problem.
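Both operations are trivial to sketch in Python — pair, following Arc's function of that name, and a chain-based inverse (the name unpair is mine):

```python
# Converting between alternating lists and lists of pairs.
from itertools import chain

def pair(xs):
    """['a', 1, 'b', 2] -> [('a', 1), ('b', 2)], like Arc's pair."""
    return list(zip(xs[::2], xs[1::2]))

def unpair(pairs):
    """The inverse: flatten pairs back into an alternating list."""
    return list(chain.from_iterable(pairs))

print(pair(["a", 1, "b", 2]))        # → [('a', 1), ('b', 2)]
print(unpair([("a", 1), ("b", 2)]))  # → ['a', 1, 'b', 2]
```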

My other objection was that they're hard to read. But alternating lists turn up in other places: keyword arguments, plists, #s(...) syntax - anywhere you need to write a dictionary in sexprs. I don't find any of these especially hard to read, so I suspect my discomfort with alternating let is just unfamiliarity. I suppose I should get used to it and stop complaining.

The sound of logic

Turning arbitrary data into sound or pictures can be an easy way of getting insight into it, because human brains are so good at picking out patterns in those forms. It can also be fun, as seen in Doug McIlroy's talk about the history of computing at Bell Labs:

It was customary for computer operators, for the benefit of computer operators, to put a loudspeaker on the low bit of some register on the machine, and normally the operator would just hear kind of white noise. But if you got into a loop, suddenly the machine would scream, and this signal could be used to tell the operator "oh, the machine's in a loop. Go stop it and go on to the next job." I remember feeding them an Ackermann's function routine once.

These days registers change a bit too fast for that. But I still like to listen in on a machine. I listen to the disk rattle, and miss it when I'm using a remote machine. I often wish I had something more informative to listen to or look at. Not predigested data like the CPU graph. Just raw data, so I can accidentally notice what's going on, without having to think about it. Page faults, system calls, memory allocation, stack height - things that vary with what the machine is doing, but not in any simple way. Things best interpreted not by conscious thought, but by unconscious pattern recognition, because you have hardware for that.

RLisp and language integration

Now that the importance of libraries is widely appreciated, most new languages make a point of having good ones. Usually they get them the easy way: they borrow someone else's, by being closely integrated with their host languages. At a minimum this usually means sharing the same data and being able to call code of the host language. That's all you need to borrow libraries, and it works fine. But integration can go further.

RLisp is a Lisp integrated into Ruby. The two languages already have virtually the same data model, so that part is easy. Calling Ruby code has a small complication: Ruby is based on message send, not on function call. RLisp deals with this by supporting both. In addition to the ordinary funcall form, it has a send function, for sending Ruby messages. And there's syntax to make it convenient: [a + b] reads as (send a '+ b).

Since most RLisp calls are to Ruby methods, send is actually much more common than regular funcall. So much so that my first impression, on reading some RLisp code, was that they were backward: the Form Without A Name should be send, and funcall should be explicit. This is not the traditional Lisp way, but is it a bad way?

defun would have to define methods instead of functions, which would also make it easier for Ruby to call RLisp. It would be tempting to impose Ruby's restrictions on Lisp names, to minimize the hassles when calling back and forth. The main annoyance would be the loss of high-order-ness: ordinary methods would no longer be suitable as arguments for high-order functions, which would require an explicit lambda (or an η-expanding function macro - or could this be implicit?) and funcall. But that's no worse than in Common Lisp - or more precisely, in Smalltalk or Ruby.

The resulting language would be unusually closely integrated with Ruby, by borrowing its namespaces and even its Form Without A Name from the host language. In fact, it wouldn't differ much from Ruby except in syntax. But since the motivation for RLisp appears to be "Ruby with macros", this similarity is not a bad thing. If RLisp became essentially an S-expression syntax for Ruby, maybe with the rough corners cleaned up, that would probably increase its utility, because it would be easier for users to move between the two languages.

One could make this even easier by further integration: defining a syntax for RLisp which looked exactly like Ruby, but read as S-expressions. The result would be a dialect of Ruby which happened to support macros. Not what either Matz or taw have in mind, perhaps, but it might be the easiest way to add macros to Ruby.

What was the problem with internal define in Arc?

According to Paul Graham, an early version of Arc had internal define, but there was a problem:

In a language with implicit local variables and macros, you're always tripping over unexpected lexical contours. You don't want to create new lexical contours without announcing it. But a lot of macros that don't look like blocks in the call expand into blocks. So we provided a second block operator, called justdo, which was like do but didn't create a new lexical contour (i.e. it is Common Lisp progn), and this is what you were supposed to use in macroexpansions.

The trouble was, I kept forgetting and using do instead. And I was thereby writing utilities with the worst sort of bug: the kind that might not show up for years, and only then in someone else's code.

Huh? My own Lisp dialect also has internal define, and I haven't had a problem with unexpected contours. Neither have thousands of Schemers. (They might point out the disagreement over define in let-syntax, and the difficulty of writing macros that expand into multiple defines, but I don't think anyone counts these among the biggest problems with the language.) I haven't even had trouble remembering to use justdo in my Lisp, possibly because it has the more distinctive name splice. What are these problems I'm supposed to be having?

I'm also surprised by how complicated the implementation was said to be:

We wrote a hideously complicated interpreter that allowed local variables to be declared this way. The ugliness of this code worried me: ugly things are generally a bad idea.

I suspect they just implemented it the wrong way. You can implement internal define quite simply by defining begin (including implicit begin!) as a macro which transforms define to letrec*. Most Schemes do it this way, and my own Lisp does too, more consistently: basic special forms like lambda and begin are actually macros over more primitive ones. It feels a little odd at first, but once you stop expecting to write in primitives all the time, it's quite comfortable. You get internal define and other macro-based luxuries without compromising the simplicity of the language kernel.
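The transformation is simple enough to sketch on S-expressions represented as nested lists — Python standing in for the macroexpander (the representation and names are invented for illustration; a real expander would also insist that the defines come first in the body):

```python
# Sketch of begin-as-macro: a body containing internal defines is
# rewritten into a letrec*, shown on sexprs as nested Python lists.

def expand_body(body):
    """[(define x e) ... rest] -> (letrec* ((x e) ...) rest)"""
    bindings, rest = [], []
    for form in body:
        if isinstance(form, list) and form and form[0] == "define":
            bindings.append([form[1], form[2]])   # (name expr)
        else:
            rest.append(form)
    if not bindings:
        return ["begin"] + rest                   # nothing to hoist
    return ["letrec*", bindings] + rest

print(expand_body([["define", "x", 1],
                   ["define", "y", ["+", "x", 1]],
                   ["+", "x", "y"]]))
# → ['letrec*', [['x', 1], ['y', ['+', 'x', 1]]], ['+', 'x', 'y']]
```

Apply this wherever a body has an implicit begin — lambda, let, the branches of cond — and internal define falls out of a few lines of macrology, with no interpreter support at all.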

My experience with internal define has been almost entirely positive. This is so different from Graham and Morris' that I wonder if they were really doing something else. Now, how did they describe it?

In Arc we were planning to let users declare local variables implicitly, just by assigning values to them.

Were they actually creating variables by assignment? This is well known not to work, and for reasons having nothing to do with macros — ask a Python user about global. The Arc designers couldn't be repeating this old mistake. Could they?

You know you think about usability too much when...

Last night I dreamed I was hurriedly trying to type something into a cellphone. It had a clever keyboard layout, designed to minimize the number of presses for common letters at the expense of rare ones, so ETAONRISH were just one press each. Fortunately I never had to type Q.

So far, so good. There was just one little problem: it didn't match the keys. They were labeled with the standard layout, where the common letter S takes four presses. The phone worked around this problem by showing the captions on an onscreen keyboard, for the convenience of beginners like me.

Unfortunately the ones it showed belonged to a different clever layout - one that tried to reduce keystrokes by spreading them more uniformly across all twelve keys. (This doesn't make a lot of sense, but I was asleep, okay?) So I was reduced to guessing, while both keyboards helpfully misled me, and the efficient layout was for naught.

Remember those easy, carefree flying dreams? I don't have those anymore. Instead I dream about difficulties controlling my flight. This is what happens when you think about usability problems too much. You start to dream bad user interfaces.


While I'm talking about new Lisps, I should mention Clojure, a new Lisp with close JVM integration and fancy concurrency support. I've been meaning to write something in it and post about the experience, but it may be a while before I get around to that. So here's the quick summary:

Clojure has the usual features of modern Lisps: lisp-1-ness, case-sensitivity (because it's hard to do case-insensitivity right), distinguishing () from nil from |nil|, pure symbols, shorter names, modules. It's quite clean, although Lispers should watch out for the renamings (familiar names like cons and do aren't what you expect). There is destructuring in binding constructs, and lambda is really case-lambda. Its Lispy core looks quite good (although as I said, I haven't used it much yet), and it closely resembles the orthodox modern Lisp.

Except for one thing: most data is immutable. Clojure aims at the most popular open problem in languages today: safe concurrency. Like Erlang, it tries to provide state only in forms that are easier to use safely in concurrent programs. Clojure has three: mutable special variables (since their bindings are per-thread), software transactional memory, and reactive message-passing. (See those pages for explanations and examples. There's also a wiki with more examples.) What it doesn't have is the ordinary mutation we take for granted in other Lisps. There's no setf, no mutable arrays or hashtables, no push.

Fortunately there's a lot of careful support for immutable collections, including syntax for [vectors] and {maps}, and generic iteration (even on Java collections!). It includes some useful functions that are often forgotten, such as mapcat and range. Clojure may not have state, but it tries to do the alternatives well enough that you don't feel the lack.

The ultimate alternative to language limitations is an FFI. Clojure is implemented in Java and runs on the JVM, so its FFI takes the form of extensive Java integration: nil is Java null, some interfaces are supported (in both directions!), and it's very easy to call Java code. There are some downsides to staying so close to Java: there's no tail-call optimization, and since threads are Java threads, their number is limited (potentially annoying, in a concurrent language). The upside is that it inherits all sorts of functionality from Java - the VM, massive libraries, even mutable data if you need it. This goes a long way toward explaining the completeness and high quality of the implementation.

There's something I'm wondering about, though. How do you pronounce Clojure? Same as "closure"? Or with /ʤ/ as in "Java"?

Orthodox Otter

Here's another new Lisp: Perry Metzger's Otter is a sketch of a case-sensitive Lisp-1 with short names and a little extra syntax and a module system and regular expressions and a cute animal mascot. Hygienic macros and a fancy compiler are planned but have been put off. It's supposed to be radical and heretical.

Radical? Heretical? Quite the opposite! There are a lot of new Lisps nowadays, and most of them include most of Otter's features (except maybe the mascot). There's hardly anything unusual here, except maybe the distaste for improper lists (a mistake which I also made for a while) and the choice of ! for setf (which seems obvious in retrospect - I will steal it now kthx). The rest of Otter is a reflection of what most Lispers consider the right way forward. It's not heretical, it's orthodox!

It's hard to detect these changes from the inside, but it seems to me this is a new development. Ten years ago, there was not so much agreement on what new Lisps should look like, nor on their desirability. The old orthodoxy was to cling to the existing dialects, as the language was perceived to be under threat. Remember when everyone was defending Common Lisp, except the Schemers, who claimed their language wasn't even a Lisp? But nowadays most Lispers don't think of the language as threatened. Instead we look forward to its future growth, and we aren't afraid to let a hundred flowers bloom.

But Otter probably won't be one of them. That presentation, from last August, seems to be the only available information on it. There's still nothing on the website but a picture of some sea otters. Sketching new lisps is fun (well, I think so, and evidently Perry does too) and easy, especially when you have an orthodoxy to guide you. Implementing them, and solving all the problems that surface in the process, is a lot of work, and prone to being put off forever. I suspect that's what happened to Otter.

But where one person can sketch a Lisp, others can build one. And we should. Never in the history of Lisp (or any programming language, IMO) have we had such an appealing orthodoxy. It is an irony not surprising in the Lisp community that that orthodoxy is now a language that does not actually exist; you cannot use it. But it is the sort of language that makes you want to. The popularity of new Lisps, and the extent of agreement on their features, suggest that it will have many incarnations in the years to come.

And now I'd better go work on mine...

For want of a rant

For my day job, I hack C++. A great many rants could begin this way, and after a week of long days debugging crashes in deployed software, I'm tempted to write one. Why do so many programmers insist on using a low-level language for everything, whether it's needed or not?

But you've heard that rant before, and maybe even written it yourself; you know how it goes. So I won't bother. Besides, I'm not sure I agree with it. It's easy to blame any problems with C++ on the language's low level, but is that really where they come from?

One of the crashes I encountered this week was a buffer overflow, due to careless use of strcat. A classic case of C++'s unsafe, low-level data structures, right? Except that C++ does have real strings, and there was no reason not to use them. We just didn't use the tools the language gave us.

Part of the application I deal with at work is in Java, and crashes in that part are never hard to debug, because they always come with handy stack traces. The problem with C++ isn't that it crashes - that could hardly be prevented. It's how little information it reports when it crashes. But this is not C++'s fault. The information (except for symbols) needed for a stack trace is already there in the crashing image; it's just not normally reported. One must go to the trouble of using a debugger to extract it. This is a place where a small change in an operating system could make a big difference in a language's usability. On a system which generated stack traces by default when native code crashed, C++ would not be nearly as hard to debug.

Of course C++ is unsafe, and therefore sometimes clobbers the necessary information (as it did in that buffer overflow). And its limited expressiveness often means more code and more bugs than in a higher-level language. But not all of its troubles are a result of low language level. And some of them aren't even its fault.

For my day job, I hack C++. And it's really frustrating. It doesn't even give me a good excuse for a rant!