Specification vs. documentation (vs. Kernel)

Language specs are the most prestigious form of language documentation, and to many language designers the most familiar, so people describing new languages sometimes do so in the form of a spec. They write of what implementations “should” and “must not” do, as if there were any implementations, and of portability, as if there were any code to port, and with a language lawyer's opaque rigor, as if there were any language lawyers to defend against.

This is a waste, of course. Unless you're standardizing a language that's already popular enough to have multiple implementations, you should be writing documentation of what the existing implementation does, not a specification of what implementations should do. And if it's a research language, your audience is researchers, not users, so you should focus on elucidating your new ideas, not on thoroughly documenting the mundane parts of the language.

This is what I thought when I read the Kernel spec. But after only three years, there are some eighteen other implementations of vau-based languages, and at least two (klisp and klink) aim to comply with the Kernel spec. Fexpr lisps are so simple that a full implementation is within the scope of a casual personal project. The other objections may still apply, but at least Kernel has multiple implementations to standardize.
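To make the "casual personal project" claim concrete, here is a minimal sketch (in Python, not Kernel — the names `f_eval` and `f_if` are mine, and this elides Kernel's actual environment and matching machinery) of the core vau idea: operatives receive their operands unevaluated, along with the caller's environment, so even `if` is just an ordinary binding rather than a special form wired into the evaluator.

```python
# A toy fexpr-style evaluator: the whole core fits in a few lines
# because there are no special forms, only operatives and applicatives.

def f_eval(expr, env):
    if isinstance(expr, str):               # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):          # literal: self-evaluating
        return expr
    op = f_eval(expr[0], env)
    if getattr(op, "operative", False):
        return op(expr[1:], env)            # operands passed UNevaluated
    args = [f_eval(e, env) for e in expr[1:]]
    return op(*args)                        # ordinary applicative call

def operative(fn):                          # mark a function as operative
    fn.operative = True
    return fn

@operative
def f_if(operands, env):                    # 'if' defined in the library,
    test, then, alt = operands              # not in the evaluator
    return f_eval(then if f_eval(test, env) else alt, env)

env = {"if": f_if, "+": lambda a, b: a + b, "x": 1}
# e.g. f_eval(["if", "x", ["+", 1, 2], 0], env)  → 3
```

Real Kernel adds first-class environments, `$vau` for constructing operatives at run time, and `wrap`/`unwrap` to convert between operatives and applicatives, but the evaluator stays about this small — which is why so many hobby implementations exist.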

Research cultures

John Regehr identifies some virtues of the culture of operating systems research, which are lacking in the culture of programming languages research:

The best argument is a working system. The more code, and the more results, the better. Something that is clearly a toy isn’t convincing. It is not necessary to build an abstract model, conduct a user study, prove soundness, prove correctness, or show any kind of asymptotic bound. In fact, if you want to do these things it may be better to do them elsewhere.

Yes. A working system, especially one that can support diverse applications without major pain, shows that you got everything important approximately right. A proof only shows that you got one thing right, and it's easy to be mistaken about whether that thing is important. So a working system can be stronger evidence that you got it right than a proof.

The style of exposition is simple and direct; this follows from the previous point. I have seen cases where a paper from the OS community and a paper from the programming languages community are describing almost exactly the same thing (probably a static analyzer) but the former paper is super clear whereas the latter is incredibly difficult to figure out.

I too find operating systems papers easier to read than language papers — and I'm from the languages community, so this isn't just a matter of familiarity. Systems researchers write as if they're trying to communicate, but language researchers write as if they're trying to make their results look like fancy academic research. This is a common failure mode (since researchers are evaluated partly on how difficult their work looks), and somehow it's become part of the standard style in language research, leading well-meaning writers to produce impenetrable papers.

It wasn't always so. Why has systems research kept these virtues while language research has descended into theory and impenetrability? How can this be reversed?