A few months ago Joe Marshall did a series of posts about how computer science has affected his view of math, beginning with its surprising lack of rigor: unlike programming languages, math notation is often ambiguous.
Check out these two equations:
By the first equation, Q is obviously some sort of function (the quantile function). H therefore operates on a function. By the second equation, the argument to H, which is called x, is used as the argument to f, which is the probability distribution function. f maps numbers to numbers, so x is a number. The types don't match.
This type error is precisely why this notation works. We can write Q to mean both a function and its result (or, to put it differently, both a quantity and a function relating that quantity to something else) because the types differentiate them. It's just like overloading in programming languages, but it's more pervasive, it's not declared in advance, and one of the overloaded meanings isn't even a function.
Indeed, if both Qs were functions, or even if Q returned another function instead of a number, this overloading wouldn't work. It would be too easy to confuse the two. But since their types are so different, we can safely overload arbitrary variables with the functions that compute them, and only programmers will be confused.
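The same type-directed trick can be made explicit in a language that supports overloading. Here is a minimal Python sketch (the name H and the placeholder computations are invented for illustration, not taken from the post) in which a single name dispatches on whether its argument is a number or a function:

```python
from collections.abc import Callable
from functools import singledispatch

# Hypothetical sketch: one name H overloaded on the type of its argument,
# so H(a number) and H(a function) coexist -- the trick the math notation
# pulls off implicitly.

@singledispatch
def H(arg):
    raise TypeError(f"no overload of H for {type(arg).__name__}")

@H.register(int)
@H.register(float)
def _(x):
    # H of a number: a placeholder numeric computation
    return 2 * x

@H.register(Callable)
def _(q):
    # H of a function: probe the function, then apply the numeric H
    return H(q(0.5))

print(H(3.0))          # numeric overload: 6.0
print(H(lambda p: p))  # function overload: H(0.5) = 1.0
```

Because a number and a function can never be confused at runtime, the dispatch is unambiguous, just as the mathematician's overloading is.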
Programmers are paranoid about ambiguous notation, because computers can't be trusted to resolve it correctly. Mathematicians are much less wary, because humans handle ambiguity better.
I think we can learn something from the mathematicians here. Language designers sometimes reject new overloadings for fear they'd be confusing, even if they wouldn't pose a problem for compilers. For example, overloading arithmetic operations to work on collections or functions sounds scary — wouldn't it be hard to tell what was being added and what was just being mapped or composed? But similarly wide-ranging overloadings in mathematical notation don't cause much trouble, and indeed these specific overloadings exist and work fine in APL and J. When a new construct looks confusing to humans but not to machines, that may be a sign that it's just unfamiliar.
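As a sketch of how such a "scary" overloading can stay unambiguous, here is a hypothetical Python wrapper (not from the post) that lifts + to work pointwise on functions, roughly in the spirit of APL and J:

```python
# Hypothetical sketch: overloading + on functions, the kind of overloading
# the paragraph above says sounds confusing. (f + g)(x) = f(x) + g(x).

class Fn:
    """Wrap a function so arithmetic on it means pointwise arithmetic."""
    def __init__(self, f):
        self.f = f

    def __call__(self, x):
        return self.f(x)

    def __add__(self, other):
        # Adding a function: pointwise sum. Adding a number: add it
        # to every result. The types tell us which is which.
        if isinstance(other, Fn):
            return Fn(lambda x: self.f(x) + other.f(x))
        return Fn(lambda x: self.f(x) + other)

square = Fn(lambda x: x * x)
double = Fn(lambda x: 2 * x)

h = square + double      # x^2 + 2x, built without mentioning x
print(h(3))              # 9 + 6 = 15
print((square + 1)(4))   # 16 + 1 = 17
```

As in the mathematical case, there is no real ambiguity: whether + means numeric addition or pointwise lifting is fully determined by the types of its operands.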