>> Heh. If you mean "why isn't it some random symbol?" then the answer is
>> "you can only use symbols for *binary* functions, not *unary* functions
>> such as NOT".
>
> How do you negate an integral or floating point value, then?
Haskell possesses exactly *one* unary operator: unary minus. (Because,
let's face it, not having this would just be annoying.) This is
hard-wired into the language spec, however. You cannot define new unary
operators.
The existence of the unary minus operator causes the following glitch:
(5+) is a function.
(5-) is a function.
(5*) is a function.
(5/) is a function.
(+5) is a function.
(-5) is a number.
(*5) is a function.
(/5) is a function.
Spot the odd one out. (There's a library function named "subtract" so
that you can write "subtract 5" instead of "(-5)" to get a function.)
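To make the glitch concrete, here's a small sketch you can load into GHCi (the names addFive/halve/minusFive are just illustrative):

```haskell
-- Sections: binary operators partially applied on either side.
addFive :: Int -> Int
addFive = (+ 5)         -- \x -> x + 5

halve :: Double -> Double
halve = (/ 2)           -- \x -> x / 2

-- (- 5) would be the *number* negative five, so use subtract instead:
minusFive :: Int -> Int
minusFive = subtract 5  -- \x -> x - 5

main :: IO ()
main = do
  print (addFive 10)    -- 15
  print (halve 9)       -- 4.5
  print (minusFive 12)  -- 7
```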
And it *is* unary minus, not part of the numeric literal syntax; for
example "-x" works just as well as "-5" does.
(Saying "-x" is equivalent to saying "negate x". In particular, since
unary minus is a syntax quirk, you can't define it directly. You define
"negate" directly, and the compiler does the rest. Also, just for
giggles, the default implementation of binary minus is "x - y = x +
(negate y)"...)
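You can see that default in action with a toy Num instance (Cents is a made-up type for illustration): define negate but *not* (-), and binary minus still works, because the class default fills it in as x + negate y.

```haskell
-- A toy numeric type: we deliberately omit (-) from the instance.
newtype Cents = Cents Int deriving (Show, Eq)

instance Num Cents where
  Cents a + Cents b = Cents (a + b)
  Cents a * Cents b = Cents (a * b)
  negate (Cents a)  = Cents (negate a)
  abs (Cents a)     = Cents (abs a)
  signum (Cents a)  = Cents (signum a)
  fromInteger n     = Cents (fromInteger n)
  -- no (-) here: the default "x - y = x + negate y" kicks in

main :: IO ()
main = do
  print (Cents 500 - Cents 125)         -- Cents 375, via the default (-)
  let x = Cents 125 in print (-x)       -- unary minus desugars to negate
```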
So there - aren't you sorry you asked?
You may also have noticed that "sin -5" doesn't work. That's because
it's interpreted as *binary* minus, i.e. "sin - 5", which obviously
doesn't type-check. And *that* is why you have to say "sin (-5)".
Annoying, isn't it?
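A quick demonstration of that parse, assuming the usual Prelude sin:

```haskell
main :: IO ()
main = do
  -- print (sin -5)       -- rejected: parsed as (sin) - (5), a type error
  print (sin (-5))        -- parenthesised unary minus works
  print (sin (negate 5))  -- equivalent, with no special syntax at all
```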
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*