“Type” is one of the most overloaded words in computer science. It has at least six important senses:
- The representation of a variable, in low-level languages.
- The class or constructor of a value, in most dynamically typed languages.
- Any set of values, in Lisp.
- A protocol or interface an object implements, in Smalltalk.
- An intended meaning of values, in static typing.
- A theorem about a program, in type theory.
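Two of these senses can be observed side by side in a dynamic language. A minimal Python sketch (using `collections.abc.Sized` as a stand-in for the protocol/interface sense):

```python
from collections.abc import Sized

value = [1, 2, 3]

# Class-of-value sense: what type() reports at runtime.
print(type(value))               # <class 'list'>

# Protocol sense: anything implementing __len__ counts as Sized,
# regardless of its class.
print(isinstance(value, Sized))  # True
print(isinstance("abc", Sized))  # True: str also implements __len__
print(isinstance(3, Sized))      # False: int has no __len__
```

Note that the two senses classify values along different axes: `[1, 2, 3]` and `"abc"` have different classes but satisfy the same protocol.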
Each of these is the one true sense of “type” to some people, and absurd to others, even though the first five are obviously related.
Many misunderstandings come from conflating different senses of “type”. For example, some static typists say “static typechecking eliminates runtime type errors”. If this means “static theorem checking eliminates runtime theorem errors”, it's trivially true. But “runtime type error” usually means an operation applied to a value of the wrong class, and under that interpretation the claim is blatantly wrong.
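To make the class-of-value reading concrete, here is a minimal Python sketch (the function name `double` is made up for illustration):

```python
def double(x):
    return x * 2

# No error: str happens to support * with an int.
print(double("hi"))     # hihi

# A runtime type error in the class sense: NoneType is the
# wrong class of value for the * operation.
try:
    double(None)
except TypeError as e:
    print("runtime type error:", e)
```

Static checking does rule out many errors of this kind, but languages with downcasts or other escape hatches (Java's `ClassCastException`, for instance) still surface wrong-class errors at runtime, which is why the two readings of the claim come apart.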
My advice is to avoid the word “type” when you want to be understood, unless you know how your audience will interpret it.