Vesa Karvonen defends static sums:
In this particular case, we have functions whose behavior depends on the values of their arguments. But the sets of argument values that invoke the different behavior also have disjoint sets of types. So, one can treat the functions as if their behavior would depend on the types of their arguments. What part of this kind of reasoning do you disagree with?
One can treat them as if their behavior depends on their types, but that's not how it actually works, and many changes can expose the difference. If we turn off the typechecker and run on dynamic semantics, the static sum still works. In a type system where the types aren't disjoint, it still works. If a weakness in the type system allows passing inL with inR's type, then the value, not the type, controls the result. Only in very restricted conditions does the isomorphism tell the whole story.
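For concreteness, here is a rough sketch of the encoding in question, in Haskell rather than Karvonen's SML; the names and the simplified two-handler interface are my own:

    -- A value-level "static sum": which handler runs is decided by which
    -- injection built the value, but each injection also gives the result
    -- a different static type, so it reads like dispatch on the type.

    inL :: a -> (a -> c, b -> d) -> c
    inL x (f, _) = f x          -- the value inL chooses the left handler

    inR :: b -> (a -> c, b -> d) -> d
    inR y (_, g) = g y          -- the value inR chooses the right handler

    plusOne :: Int -> Int
    plusOne n = n + 1

    -- Same pair of handlers, different result types:
    demoL :: Int
    demoL = inL 3 (plusOne, not)     -- runs plusOne

    demoR :: Bool
    demoR = inR True (plusOne, not)  -- runs not

Turn off the typechecker and these definitions run exactly the same way; nothing in the dispatch ever consults a type.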
This is a general weakness in reasoning by isomorphism. Two things may be equivalent in a circumscribed context, but that doesn't make them equivalent in general. In programming languages, it's easy to extend semantics in ways that expose implementation details, so this happens a lot. This doesn't make isomorphism useless; it only means one shouldn't apply it as freely as in math.
Correction: You can apply it just as freely as in math, of course. For some reason it seems more tempting in programming to overestimate an isomorphism's scope. But maybe I only think that because I don't do enough math to know better.
Compare typeclasses, which really do allow overloading on static type: unlike the value-level encoding, they stop working when the static types change, and they keep working when the values change. A static sum built with a typeclass might be equivalent to the vanilla Hindley-Milner version, but it extends completely differently.
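A minimal sketch of what that typeclass version might look like, with illustrative names and simplified so both branches carry the same payload type:

    {-# LANGUAGE TypeFamilies #-}

    -- Here the injection lives only in the static type; the type checker
    -- picks the instance, and with it the result type.

    newtype InL a = InL a
    newtype InR a = InR a

    class StaticSum t where
      type Res t c d
      match :: t x -> (x -> c, x -> d) -> Res t c d

    instance StaticSum InL where
      type Res InL c d = c
      match (InL x) (f, _) = f x

    instance StaticSum InR where
      type Res InR c d = d
      match (InR x) (_, g) = g x

    -- Change the tag type and a different instance (and result type) is
    -- chosen; change the value inside and nothing about dispatch changes.
    demoL :: String
    demoL = match (InL (3 :: Int)) (show, (> 0))   -- String

    demoR :: Bool
    demoR = match (InR (3 :: Int)) (show, (> 0))   -- Bool

Here the dispatch survives any change of value but not a change of tag type, which is the opposite of how the value-level encoding extends.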