Provocative titles work. When a famous type theorist like Benjamin Pierce gives a talk titled Types Considered Harmful, it gets my attention. (Eventually. Years later. After someone links me to it.) This talk describes a contract-checking system, which wouldn't be very interesting in its own right, title or no, but it also includes a bunch of insights about typechecking. Here are the good bits:
First (and this is not new), why does type inference find bugs? It's not because there's anything special about types per se, but simply because it entails enough analysis that inconsistencies in the program are likely to produce contradictions.
- Attempting to prove any nontrivial theorem about your program will expose lots of bugs
- The particular choice of theorem makes little difference!
- Typechecking is good because it proves lots and lots of little theorems about your program
(“Proving theorems” sounds scary, but if you describe it as “checking properties” it means the same thing and is much less intimidating.)
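Here's the flavor of those little theorems in Haskell (my example, not Pierce's). The buggy version confuses a count with a sum; the typechecker rejects it not because it knows what `average` means, but because the inconsistency produces a contradiction between `Int` and `[Double]`:

```haskell
-- A buggy draft would be rejected statically:
--   average xs = sum (length xs) / fromIntegral (length xs)
-- because `length xs :: Int`, but `sum` expects a list.
-- The checker proved a small theorem: "the argument of sum is a list."

average :: [Double] -> Double
average xs = sum xs / fromIntegral (length xs)

main :: IO ()
main = print (average [1, 2, 3])  -- prints 2.0
```

No property of "averageness" was checked, yet a real bug was caught as a side effect of enough analysis happening.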
Second, it acknowledges an oft-ignored cost of static typing: it makes languages much more complicated.
Fancy types make the programmer's head hurt
- Type systems – especially very precise ones – force programmers to think rigorously about what they are doing
- This is good... up to a point!
- Do we want languages where a PhD* is required to understand the library documentation?
* two PhDs for Haskell
Fancy types make my head hurt
- Complex type systems can lead to complex language definitions
- Easy to blow the overall complexity budget
The notion of a complexity budget ought to be more widely used. In addition to the limitations of designer and implementor effort, there's a limit to how much users will learn about a language. Once the language's complexity reaches this limit, additional features are not very useful, because the users who are supposed to benefit from them don't know about them.
Third, it has a useful interpretation of the relationship between typechecking, contract checking (in the particular form described in the talk), and tests:
- Types ~ General but weak theorems (usually checked statically)
- Contracts ~ General and strong theorems, checked dynamically for particular instances that occur during regular program operation
- Unit tests ~ Specific and strong theorems, checked quasi-statically on particular “interesting instances”
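The three levels can be seen side by side on one toy function (the function and all names here are mine, not from the talk):

```haskell
-- Type: general but weak. It holds for every call, but all it says
-- is "Double in, Double out" -- nothing about correctness.
-- Contract: general and strong, but checked dynamically, on whatever
-- instances actually occur at run time.
checkedSqrt :: Double -> Double
checkedSqrt x
  | x < 0                  = error "contract violated: negative input"
  | abs (r * r - x) > 1e-9 = error "contract violated: result not a square root"
  | otherwise              = r
  where r = sqrt x

-- Unit test: specific and strong. It fully pins down one
-- "interesting instance" and is checked quasi-statically, at test
-- time rather than in production.
testSqrt :: Bool
testSqrt = checkedSqrt 9 == 3

main :: IO ()
main = print testSqrt  -- prints True
```

The same specification ("the result squared gives back the input") gets weaker or stronger guarantees depending on how generally and how early it is checked.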
These dimensions (generality, strength, and when the check happens) are easy to reuse:
- A dynamic typecheck is specific (it checks only one case), weak (all it proves is that the value has the right type), and checked dynamically.
- The purity of core Haskell is another general and weak theorem — or rather assumption, as it's not checked at all.
- Traditional assertions are specific, of variable strength, and checked dynamically.
- Model checkers prove general strong theorems statically.
- The “typeful” style often used in ML and Haskell, where more distinctions are moved into types, is an attempt to make the typechecker's weak theorems stronger.
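A minimal sketch of that typeful style (my illustration, with invented unit types): wrapping raw numbers in `newtype`s upgrades the weak theorem "this is a `Double`" into the stronger "this is specifically a distance, not a duration", so mixing them up becomes a static contradiction.

```haskell
-- Zero-cost wrappers: the distinctions exist only in the types.
newtype Meters          = Meters Double          deriving Show
newtype Seconds         = Seconds Double         deriving Show
newtype MetersPerSecond = MetersPerSecond Double deriving Show

speed :: Meters -> Seconds -> MetersPerSecond
speed (Meters d) (Seconds t) = MetersPerSecond (d / t)

main :: IO ()
main = print (speed (Meters 100) (Seconds 10))
-- Swapping the arguments, speed (Seconds 10) (Meters 100),
-- is rejected at compile time.
```

The typechecker still proves only its usual little theorems; the typeful style works by moving more of the program's meaning into positions where those theorems can see it.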
The title is not just for provocation. It's also a summary of the talk's apparent thesis. Pierce doesn't say this explicitly in any of the slides, so maybe I'm reading my own conclusion into them, but he seems to be saying: proving theorems about programs is good, but having no choice about which theorems to prove is bad. What if the theorem that's forced on you is hard to prove, or easy to prove but not by the methods the checker knows? What if it's not even true, or shouldn't be true? A typechecker (or any obligatory checker) is “considered harmful” because it chooses the theorems for you, and you can't refuse it when it chooses poorly.