Don't I like to hack Lisp?

I've noticed a disturbing pattern in my motivation to play with Lisp implementations. Now and then I fall prey to a new idea and start writing one, and it goes fast at first. I dash off the basic interpreter, perhaps an experimental representation of environments, a new macro system, modules, primitives... And then I get to the point where the kernel is basically working, and I need to write enough library - macros, mostly - to get some simple programs running.

And I stop. I like macrology. I like rebuilding the foundations of a language. I like seeing programs running on a tool I built myself. But as soon as I switch from writing the language to writing in the language, I lose interest.

This applies to implementations in an existing Lisp, but even more to ones in C. I'll happily spend all day writing a garbage collector, even though I know it's an unoriginal waste of time. But when I start building library infrastructure, an understudied area where I could make a difference, I get bored.

Could it be that I actually enjoy low-level hacking? It is superficially rewarding, because it presents a constant supply of easy problems to solve, and they can be solved in familiar ways. It's programming candy - the fun of debugging without the pesky intellectual challenges. In C I can enjoy constant victories over my tools, whereas when growing a Lisp I'm confronted more directly with the problems. And they're sufficiently ill-defined and hard to think about that I recoil and go do something else instead.

I am not a good programmer when I prefer irrelevant problems to those I want to solve. I suspect this is a common tendency, and it may explain some of the puzzling lack of enthusiasm for more expressive languages. Who wants to face new challenges when you can keep solving the ones you know?

Fire and water and Frink

For the first time, Frink has failed me: it doesn't know the specific heat of water!

OK, that's easy:

water_heat = calorie / gram / kelvin

But Frink doesn't know the heat of vaporization of water! And that takes actual looking up! How will I ever show why water is better than liquid nitrogen for extinguishing fires?

OK, that's still easy:

water_heat = calorie / gram / K
water_vap = 2260 kJ/kg
steam_heat_cp = 2.080 J / g / K //at constant pressure


N2_vap = 5.56 kJ / (28 g) //199 kJ/kg
GN2_heat_cp = 1.04 J / g / K
LN2_temp = 77 K
LN2_density = 0.808 water

tank_temp = Celsius[20]
fire_temp = Fahrenheit[500] //The fire is hotter than this,
//but most of the coolant won't get so hot.

water_cooling = water * ((Celsius[100] - tank_temp) water_heat + water_vap + (fire_temp - Celsius[100]) steam_heat_cp)
LN2_cooling = LN2_density * (N2_vap + (fire_temp - LN2_temp) GN2_heat_cp)

water_cooling, for those who haven't been following along in Frink, is 2.9 GPa. (Yes, pascals: energy over volume is pressure.) LN2_cooling is 544 MPa. So despite its higher storage temperature, water absorbs more than five times as much heat as an equal volume of liquid nitrogen, because of its enormous heat of vaporization. And its lower molecular weight means it produces a larger volume of gas, and displaces more air. Not to mention it's cheap and storable. There's a reason we fight fire with water.
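For readers without Frink at hand, the same arithmetic can be checked in plain Python. This is a rough sketch using the constants from the snippets above, working in J/g and g/cm³ so the volumetric results come out in J/cm³ (which is the same as MJ/m³, i.e. MPa):

```python
# Rough check of the Frink calculation above, in SI-ish units.
# Heats are J/g, temperatures in kelvin, densities in g/cm^3,
# so volumetric energies come out in J/cm^3 = MJ/m^3 = MPa.

calorie = 4.184                 # J
water_heat = calorie            # J/(g K), specific heat of liquid water
water_vap = 2260.0              # J/g, heat of vaporization
steam_heat_cp = 2.080           # J/(g K), steam at constant pressure

n2_vap = 5560.0 / 28.0          # J/g (5.56 kJ/mol over 28 g/mol)
gn2_heat_cp = 1.04              # J/(g K)
ln2_temp = 77.0                 # K
ln2_density = 0.808             # g/cm^3 (0.808 times water)

tank_temp = 20.0 + 273.15                    # 20 C
fire_temp = (500.0 - 32.0) * 5 / 9 + 273.15  # 500 F
boil = 100.0 + 273.15

# Water: heat to boiling, vaporize, then heat the steam. Density is 1 g/cm^3.
water_cooling = ((boil - tank_temp) * water_heat
                 + water_vap
                 + (fire_temp - boil) * steam_heat_cp)

# LN2: vaporize, then heat the gas from 77 K to fire temperature.
ln2_cooling = ln2_density * (n2_vap + (fire_temp - ln2_temp) * gn2_heat_cp)

print(water_cooling)                 # ~2928 J/cm^3, i.e. ~2.9 GPa
print(ln2_cooling)                   # ~544 J/cm^3, i.e. ~544 MPa
print(water_cooling / ln2_cooling)   # ~5.4
```

The point Frink makes effortlessly - carrying the units through so the result is visibly an energy density - has to be tracked by hand in the comments here.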

In the course of this, I noticed something odd: the Fahrenheit and Celsius functions are their own inverses. They determine which operation to do from the dimensions of their input. This is a use of dynamic dimension-checking that I hadn't thought of, but it doesn't give me a warm fuzzy feeling.
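To make the self-inverse trick concrete, here is a hypothetical sketch in Python - not Frink's actual implementation - of a conversion function that picks its direction from the input's dimensions. A bare number is read as degrees Celsius and converted to an absolute temperature; a quantity already carrying temperature dimensions is converted back:

```python
# Hypothetical sketch of dimension-dispatched conversion, in the
# spirit of Frink's Celsius[] function. The Quantity class and its
# single "K" unit tag are invented for illustration; a real unit
# system would track full dimensions.

from dataclasses import dataclass

@dataclass
class Quantity:
    value: float
    unit: str

def Celsius(x):
    # Dispatch on the dimensions of the argument:
    if isinstance(x, Quantity) and x.unit == "K":
        return x.value - 273.15           # kelvin -> degrees C (bare number)
    return Quantity(x + 273.15, "K")      # degrees C -> kelvin (a temperature)

t = Celsius(20)       # Quantity(value=293.15, unit='K')
print(Celsius(t))     # ~20.0 -- the function is its own inverse
```

Whether the convenience is worth the ambiguity is exactly the question: the function's meaning depends on a runtime dimension check, which is the part that fails to produce the warm fuzzy feeling.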

Quick and easy physical calculations, however, do. Especially when the language makes them simple enough that they work on the first try, as this one did.

It's bloat all the way down

High-level languages are wonderful to program in, but they do offend my aesthetic sense in one way: their implementations are complicated. All those valuable features - libraries, garbage collection, runtime compilation, continuations and so on - take complexity and space, even for programs that don't use them. They're worth it, of course. But they are an affront to perfectionism, because they bloat the distributed forms of programs.

It's tempting to believe that low-level languages don't have this problem, that they're an efficient paradise, where there is no overhead but what a program inflicts on itself, where everything is possible, even if nothing is easy. It is a myth, of course. Brian Raiter's tiny ELF executables show that there's bloat even in trivial C and assembly programs, because of how code is packaged. By stripping out most of this bloat, and then abusing ELF, he managed to shrink a trivial C executable by 98%.

This is impressive, but depressing, because it shows that even at this level, there is arbitrary waste. And so it is everywhere - in hardware, in network protocols, in the problems computers are used to solve. No level of abstraction is a bloatless Utopia, but fortunately we can pretend they are, because the imperfections of one level don't greatly affect the level above.

(Via Randy Owens, who pointed out that one of Brian's tricks, overlapping data, is also used by some very small viruses.)

Chris Smith on type

Chris Smith has a good article on static and dynamic typing. It is clear about the terminological confusion:

I realize that may sound ridiculous; but this theme will recur throughout this article.  Dynamic and static type systems are two completely different things, whose goals happen to partially overlap.

It does overstate the confusion in a few places (weak and strong typing do have widely accepted definitions), but in general it's very good until the last third, when it falls into the very trap it warns about: supposing that all type systems have the single goal of proving correctness. That is not a goal of dynamic typing at all, and it's not even a major goal of static typing for most users, because the properties ordinary type systems prove are rarely the ones programmers care about. The main point of static typing, at least for me, is not to prove anything, but to find bugs. I don't care if the type system proves that a particular bug doesn't exist, because that's rarely information I can use. But when it locates a bug - or even suspects one - that is valuable, because it saves time. Proofs may help find bugs by narrowing the search space, but the proofs themselves are not why we use typecheckers. We use them to find bugs faster.