It's not just different languages that distinguish types in different ways. It's pretty common for one implementation of one language to have several type mechanisms serving different purposes. For instance, a typical optimized Lisp will have:
- Multiple representations of variables (integer, float, and tagged pointer), for efficient unboxed arithmetic
- Tags distinguishing pointers from fixnums (and sometimes characters, nil, and immediate floats)
- Dynamically typed objects
- A sublanguage for describing sets of objects, especially inferred or declared static types.
The first two are not called "type" in Lisp, but they would be in a lower-level language. The last two are both called "type" even though they are quite different things. Perhaps surprisingly, this doesn't cause much confusion in practice.
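The pointer-vs-fixnum tagging in the second bullet is easy to sketch. The Rust below is my own illustration, assuming a one-bit tag in the low bit; real Lisps vary in tag width and in which values get which tags:

```rust
// Low-bit tagging, sketched: fixnums are shifted left and get a 1
// in the low bit; pointers are word-aligned, so their low bit is
// naturally 0 and they can be stored unchanged.
#[derive(Debug, PartialEq)]
enum Decoded {
    Fixnum(i64),
    Pointer(usize),
}

fn tag_fixnum(n: i64) -> usize {
    ((n as usize) << 1) | 1 // shift left one bit, set the tag bit
}

fn decode(word: usize) -> Decoded {
    if word & 1 == 1 {
        // Arithmetic right shift restores the sign of negative fixnums.
        Decoded::Fixnum((word as i64) >> 1)
    } else {
        Decoded::Pointer(word)
    }
}

fn main() {
    println!("{:?}", decode(tag_fixnum(42)));
    println!("{:?}", decode(0x1000)); // an aligned address decodes as a pointer
}
```

The payoff is that a fixnum fits in a machine word with no heap allocation, and the garbage collector can tell at a glance which words are pointers it must trace.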
An ML implementation might have:
- The same multiple representations of variables
- The same tagged pointers, which simplify garbage collection even when nothing else consults them
- Dynamically tagged objects, to allow sum types
- Those famous static types, which are the only one of these mechanisms that's normally called "type" in ML.
The first three mechanisms are basically the same as in Lisp, despite the languages' seemingly opposite approaches to type. Where they differ is in how they describe types - and of course in what static types they allow. I find it particularly amusing that ML sum types use the same mechanism as dynamic type. It's not called that, and it's not used the same way, but ML data is partly dynamically typed.
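The tag-plus-payload representation behind sum types is easy to see in Rust (standing in here for ML; the names and types below are my own illustration, not from any particular implementation):

```rust
// A sum type: one static type whose values each carry a runtime
// tag saying which variant they are, plus that variant's payload.
#[derive(Debug)]
enum Shape {
    Circle(f64),
    Rect(f64, f64),
}

// Pattern matching dispatches on the variant's runtime tag -- the
// same move a dynamically typed language makes when it checks an
// object's type before operating on it.
fn area(s: &Shape) -> f64 {
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect(w, h) => w * h,
    }
}

fn main() {
    let shapes = [Shape::Circle(1.0), Shape::Rect(2.0, 3.0)];
    for s in &shapes {
        println!("{:?} has area {}", s, area(s));
    }
}
```

Statically, `area` takes a `Shape` and nothing else; dynamically, it still has to inspect a tag to learn which shape it was given. That inspection is the "partly dynamically typed" part.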