POV-Ray : Newsgroups : povray.off-topic : My hypothesis : Re: My hypothesis
  Re: My hypothesis  
From: Orchid XP v8
Date: 5 Sep 2011 15:29:16
Message: <4e65230c@news.povray.org>
On 05/09/2011 07:45 PM, Warp wrote:
> Orchid XP v8<voi### [at] devnull>  wrote:
>>     insert :: Ord x =>  x ->  MinHeap x ->  MinHeap x
>>     insert x' (Leaf        ) = Node x' Leaf Leaf
>>     insert x' (Node x h0 h1) = Node (min x x') (insert (max x x') h1) h0
>
>    This is a good example of where it becomes confusing. Even after
> studying it for a few minutes I can't really figure out what is it
> that it's doing.

Hmm, OK. Apparently I've spent longer doing this than I thought...

Is the Java translation any easier to grok? Or is that equally baffling?

> (Or, more precisely *how* it's doing it. The 'insert'
> name makes it obvious what is it that it does, but the code doesn't make
> it at all clear how.)

Well, yeah, in most languages you can take a stab at what code does just 
from the identifier names.

>    Maybe the problem is that in an imperative language the different
> situations would be more explicitly expressed with an 'if...then...else'
> block. Here it's not clear at all what is it that the code is expressing.

I can rephrase it to

   insert x' h =
     case h of
       Leaf         -> Node x' Leaf Leaf
       Node x h0 h1 -> Node (min x x') (insert (max x x') h1) h0

if you prefer. I don't suppose that's any more enlightening though.
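
Maybe it helps to see it once more with the underlying data declaration 
(which wasn't quoted above, so I'm restating it here) and a comment on 
each step:

```haskell
data MinHeap x = Leaf | Node x (MinHeap x) (MinHeap x)

insert :: Ord x => x -> MinHeap x -> MinHeap x
insert x' Leaf           = Node x' Leaf Leaf   -- empty heap: make a singleton node
insert x' (Node x h0 h1) =
  Node (min x x')              -- the smaller of the two values stays at the root;
       (insert (max x x') h1)  -- the larger one gets pushed down into h1;
       h0                      -- and the subtrees swap sides, so repeated
                               -- inserts alternate between the two children
```

So the heap invariant (the minimum is always at the root) is maintained 
by `min`/`max`, and the subtree swap is a cheap way of keeping the tree 
roughly balanced.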

The irony is that "case" is a more powerful version of "if/then/else". 
(The latter is strictly a special case of the former.) Haskell's 
"case" is like what other languages have, on steroids. On the other 
hand, if/then/else is pretty self-explanatory, whereas a case-expression 
isn't quite so immediately obvious.

The fact that you can write [what appear to be] multiple definitions of 
the same function as a short-cut to a case-expression isn't the most 
intuitive thing. Until somebody tells you that's what it means.
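
A tiny (made-up) example, unrelated to the heap code: these two 
definitions are exactly equivalent, and the compiler desugars the first 
into the second.

```haskell
-- Multiple equations, one per pattern...
describe :: Int -> String
describe 0 = "zero"
describe _ = "something else"

-- ...are just shorthand for a single case-expression:
describe' :: Int -> String
describe' n =
  case n of
    0 -> "zero"
    _ -> "something else"
```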

>    Of course it doesn't help that the syntax is quite uncommon, and the
> meaning of the different operators isn't clear.

Sure. You can find strange operators in any language; I guess one of the 
biggest things about Haskell is that its syntax just plain /isn't like/ 
anything else.

C, C++, C#, Java, JavaScript, Pascal and a half-dozen other languages 
all have virtually identical function-call syntax, and (apart from 
Pascal) the rest of the basic guts of the language have only mild 
syntactic differences. So if you know any of those languages, you can 
take a stab at reading any of the others. Haskell is utterly different. 
It doesn't even resemble anything else. (Except other obscure languages 
like ML.)

> (For instance, it's unclear
> whether the ' is some kind of operator, and if it is, what exactly its role
> is. If it isn't an operator but somehow just part of the variable name, it's
> highly unusual.)

It's not an operator, merely part of the variable name. It's a de facto 
Haskell idiom; if you have a variable named foo, the new version of it 
is named foo'. (If you have more than two versions, it's probably better 
to number them, although you do see people write foo'' and even foo'''.) 
Apparently it's supposed to look like the mathematical "prime" symbol 
(which is more usually used for derivatives, actually).
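
For example, this (contrived) snippet is perfectly legal; x' is just a 
second, completely independent variable:

```haskell
-- x' is an ordinary name; the prime has no special meaning to the compiler.
bump :: Int -> Int
bump x = let x' = x + 1 in x'
```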

I'll grant you that one's /highly/ non-obvious. I could perhaps have 
called it y instead of x' for increased clarity. That's only one of your 
points, though.

>    The lack of separators is also confusing to someone who is accustomed to
> programming languages that use separators.

Sure. You do see a lot of people look at an expression like

   foo x y + z bar

and think to themselves "now what the hell does that mean?" Knowing the 
barest bit of Haskell, possible options include:

- foo(x, y + z, bar)
- foo(x, y) + z(bar)
- foo(x, y, +, z, bar)

It's also common for beginners' Haskell code to contain more brackets 
than a typical Lisp snippet, simply because it's so hard at first to 
tell how an expression will parse.
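
For the record, the actual answer is the second option: function 
application binds tighter than any infix operator, so it always parses 
as (foo x y) + (z bar). Sketched out with made-up definitions for foo, 
z and bar:

```haskell
-- Hypothetical definitions, purely to show how the expression parses:
foo :: Int -> Int -> Int
foo a b = a * b

z :: Int -> Int
z = negate

bar :: Int
bar = 10

-- Application binds tighter than (+), so this is (foo 2 3) + (z bar).
result :: Int
result = foo 2 3 + z bar
```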



You apparently can't see far enough into the syntax to realise this, but 
I think perhaps part of the problem is the whole symmetry of it all.

The syntax for /inspecting/ data is identical to the syntax for 
/constructing/ data. Which makes the language very regular and 
beautiful, but it also perhaps makes it rather baffling to anybody 
trying to figure out whether you're inspecting or constructing stuff.

The syntax for a tuple /type/ is identical to the syntax for a tuple 
/value/. Compare:

   ('1', 2, 3.4, "five") :: (Char, Int, Double, String)

Thing on the left is data. Thing on the right is a type signature. Now 
consider

   (x, y, z) :: (x, y, z)

This perfectly valid Haskell expression has three value variables on the 
left, and three (unrelated but identically named) type variables on the 
right.

The exact same thing happens with list values and list types.
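
Spelled out, the list version of the same pun looks like this; the 
square brackets build a list value on the left of the :: and a list 
type on the right:

```haskell
xs :: [Int]     -- [Int] here is a type: "list of Int"
xs = [1, 2, 3]  -- [1, 2, 3] here is a value, built with the same brackets
```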

Here, the very symmetry of the language is arguably somewhat baffling. 
Patterns look like expressions. Types look like values. Once you know 
Haskell modestly well, you realise that patterns /only/ go in certain 
places, and everything else is an expression. Types /only/ go in certain 
places, and everything else is a value. But until you reach that point, 
the language design certainly isn't helping you much.



PS. I just discovered that you can create local operator names in the 
same way that you create local variables. So you can make a function 
where the same operator name changes its meaning on each recursive call 
to the function. How evil is that?
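
A minimal sketch of the sort of thing I mean (a simpler variation, with 
a made-up function name): a local (+) that shadows the global one, so 
that inside this one definition, + means subtraction.

```haskell
-- Inside the let, (+) is rebound to subtraction; the change is purely local.
sneaky :: Int -> Int -> Int
sneaky a b = let (+) = (-) in a + b
```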

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*

