I've noticed an odd argument from the mathematical side of the Haskell/ML community: that languages like Lisp have their semantics tangled up with a particular class of implementations. Operations like eq, mutation, and dynamic typing are supposed to be examples of this. Here's an instance from Tim Sweeney:
But be careful with expectations of extensible types features, metareflection, and dynamic typing. They conspire globally to counter type soundness and universal properties that are desirable for program verification and concurrency.
Taken to their logical conclusion, you get things like LISP, SmallTalk, C# with LINQ, and lose sight of the extensional meaning of your code as it's intractably intertwined with its metareflective representation, pointer identities, the uncheckable exceptions it might throw, its interaction with macros and syntax extensions which may transform it into something entirely different, etc.
This sounds absurd at first, but I think there's something to it. Lisp semantics are explicitly in terms of values - that is, heap objects. They have identity and type and slots; they can even be mutated. This is a reasonable way to define a language, but it's not the only way, nor the most abstract. ML semantics are in terms of sum types, whose representation as objects is only an implementation detail. Object operations like eq are considered unnatural because they don't make sense on that level.
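To make the contrast concrete, here's a minimal Haskell sketch (my example, with made-up names, not from anything above): a sum type whose meaning is purely structural. Equality is derived from the structure, and pointer identity simply isn't observable - there is no eq to ask whether two values are the same heap object.

```haskell
-- A sum type defined by its cases, not by any heap representation.
data Shape = Circle Double | Square Double
  deriving (Eq, Show)

-- Structural equality: two separately constructed values compare equal.
-- Whether they happen to share storage is invisible at this level.
same :: Bool
same = Circle 1.0 == Circle 1.0

main :: IO ()
main = print same  -- prints True
```

Nothing in the language lets you distinguish the two Circle values, which is exactly what makes the representation an implementation detail.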
This is a difference of abstraction level, just like the difference between physical memory and object memory. It's just a less useful one. Abstracting from words to objects hides a lot of irrelevant, error-prone detail. Abstracting from objects to type-based values doesn't hide much, if anything. Sometimes it even requires more work (to deal with explicit tagging), so it's not obviously beneficial. The added abstraction is still claimed to improve analyzability, but it isn't helping with expressiveness.
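Here's what that extra tagging work looks like, in another small Haskell sketch (the type and names are mine): a heterogeneous list that a dynamically typed Lisp would write directly requires an explicit wrapper type, and dispatching on the "dynamic type" becomes an explicit pattern match.

```haskell
-- Explicit tagging: each element must be wrapped in a constructor
-- naming its type, where Lisp would just mix the values freely.
data Value = I Int | S String
  deriving (Eq, Show)

mixed :: [Value]
mixed = [I 1, S "two", I 3]

-- What Lisp gets from dynamic typing (typecase, typep) is here an
-- explicit match on the tags we wrote ourselves.
describe :: Value -> String
describe (I n) = "int " ++ show n
describe (S s) = "string " ++ s

main :: IO ()
main = mapM_ (putStrLn . describe) mixed
```

The tags buy static checking of the match's exhaustiveness, but the wrapping and unwrapping is work the dynamically typed program never does.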
It's easy to become accustomed to abstraction being valuable, and to expect that more is always better. But it's only better when it hides something irrelevant or usefully increases generality. It's easy to have pointless abstraction in programs, and it's possible in languages too.