POV-Ray : Newsgroups : povray.off-topic : Re: Why is Haskell interesting?
From: Darren New
Date: 26 Feb 2010 12:15:47
Message: <4b8801c3@news.povray.org>
Invisible wrote:
> Haskell has automatic type inference, for example.

Lots of languages have this a little. In C# you can say
   var x = new Vector2(0,3);
and x gets the type Vector2.  Automatic casting in languages is a similar 
thing, methinks.

Certainly not on the scale at which Haskell does it, though.
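
For comparison (my sketch, not from the thread): C#'s `var` only infers from the initializer on the same line, while GHC infers types across whole definitions with no annotations written anywhere.

```haskell
-- No type signatures anywhere; GHC infers
--   swapPair :: (a, b) -> (b, a)
--   process  :: (Num a, Ord a) => [(a, b)] -> [(b, a)]
-- by propagating constraints through the whole definition.
swapPair (x, y) = (y, x)

process pairs = map swapPair (filter (\(n, _) -> n > 0) pairs)

main :: IO ()
main = print (process [(1, "a"), (-2, "b"), (3, "c")])
```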

> This pre-supposes that you have type-safe containers in the first place. 
> In most OO languages, you just create a container of "Object", which can 
> then contain anything. 

This hasn't been true for a decade or more. :-) Just because Java sucks, 
don't think every statically typed language sucks the same way.

> More recently, a couple of OO languages have introduced this feature 
> they call "generics". Eiffel is the first language I'm aware of that did 
> it, although I gather Java has it too now.

Java kinda sorta has it. It's *really* a container full of Objects, with 
syntactic sugar to wrap and unwrap them. (Similarly, "inner classes" are 
just regular classes with munged up names.)

> The difference is that Eiffel makes this seem like some highly complex 
> uber-feature that only a few people will ever need to use, in special 
> circumstances. 

No, Eiffel's complexity is due to inheritance. If you have an array of 
Animals, can you put a Dog in it? If you have an array of Dogs, can you pass 
it to a routine that expects an array of Animals? Can you return an array of 
Dogs from a routine declared to return an array of Animals or vice versa? If 
X multiply inherits from Y and Z, can you assign an array of X to an array of Y?

It's not the complexity of the feature as much as its interactions with 
other features of the type system.

> It seems that templates can also be used to do things which are 
> impossible with generics (e.g., making the implementation *different* 
> for certain special cases). So I'm not sure why people compare them at 
> all, because they're so totally different.

They get compared because it's as close as C++ has to having real generics, 
and C++ aficionados don't want to admit they don't have generics. ;-)

> Another thing Haskell has is algebraic data types. Other languages could 
> have this. It probably doesn't play well with the OO method though. In 
> fact, let me expand on that.

It's interesting that Eiffel is explicitly an attempt to build abstract data 
types on top of OOP, in a sense. Hence the invariants, preconditions, etc.

> Haskell with its algebraic data types assumes that the data you're 
> trying to model has a fixed, finite set of possibilities, which you can 
> describe once and for all. 

Integers are finite?
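
To be fair to both sides, the "fixed, finite" part is the set of *constructors*, not the set of *values*. Two constructors can describe infinitely many values (my example):

```haskell
-- Two constructors, infinitely many values: the shape of the data is
-- fixed and finite, the set of values it describes is not.
data Nat = Zero | Succ Nat deriving (Eq, Show)

toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n

main :: IO ()
main = print (toInt (Succ (Succ (Succ Zero))))
```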

> Now consider the OOP case. You write an abstract "tree" class with 
> concrete "leaf" and "branch" subclasses. Each time you want to do 
> something new with the tree, you write an abstract method in the tree 
> class and add half of the implementation to each of the two concrete 
> subclasses.

I've never seen it done that way. :-)

> For a binary tree, or an abstract syntax tree, or any number of other 
> simple, rigidly-defined data structures you could envisage, this 
> approach is actually much better. 

It depends how complex your tree is vs how complex your operations. If your 
data is simple and your operations are complex, you're going to get 
different trade-offs than if your data is complex and your operations are 
simple. If a tree could have 300 different types of nodes, but you'll only 
have 3 operations, which is better?  Then invert that.

As you say, it depends what kind of extensibility you want. Add 50 new kinds 
of nodes. Add 50 new operations. OOP was envisioned as adding new kinds of 
things. New kinds of cars to a traffic simulation, new kinds of bank 
accounts to a financial analysis program, etc.

Other than that, I'm not sure I really understood the "using the ADT 
approach" bit.

> - If you have a limited number of structures which you want to process 
> in an unlimited number of ways, ADTs work best.
> 
> - If you have an unlimited number of structures which you want to 
> process in a limited number of ways, a class hierarchy works best.

Yes, that. Well, not even a "limited number of ways."  The point of OOP is 
that you can add new structures *without* changing existing structures *at 
all*.
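
For concreteness, the ADT half of that trade-off looks like this (my sketch):

```haskell
-- The node types are fixed, but each new operation is one new
-- top-level function, with no edits to any existing code.
data Tree = Leaf Int | Branch Tree Tree

total :: Tree -> Int
total (Leaf n)     = n
total (Branch l r) = total l + total r

depth :: Tree -> Int
depth (Leaf _)     = 1
depth (Branch l r) = 1 + max (depth l) (depth r)

-- Adding a new *constructor*, by contrast, would force a change in
-- every function above -- the mirror image of the OOP situation.
main :: IO ()
main = do
  let t = Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3))
  print (total t, depth t)
```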

> The one small glitch is that Haskell demands that all types are known at 
> compile-time. Which means that doing OO-style stuff where types are 
> changed dynamically at run-time is a little tricky. Haskell 98 (and the 
> newly-released Haskell 2010) can't do it, but there are 
> widely-implemented extensions that can. It gets kinda hairy though.

It also means you have to recompile everything whenever you change a type. 
Which kind of sucks when it takes several days to compile the system.
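
For what it's worth, one of those widely implemented extensions looks like this (my sketch; ExistentialQuantification is the GHC flag):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A heterogeneous container in the OO style: each element's concrete
-- type is hidden, leaving only the operations the wrapper promises.
data Showable = forall a. Show a => MkShowable a

describe :: Showable -> String
describe (MkShowable x) = show x

main :: IO ()
main = mapM_ (putStrLn . describe)
             [MkShowable (42 :: Int), MkShowable "hi", MkShowable True]
```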

> (i.e., you left off the "break" 
> statement, so a whole bunch of code gets executed when you didn't want 
> it to).

This is one that C# finally fixed right. Every branch of a switch has to end 
with "break" or "goto". You're allowed to "goto" another case label, so you 
can emulate "falling through."  But even then, you can rearrange cases or 
insert another case anywhere in the list without breaking any existing 
cases. With a fallthrough, if you add a case without realizing the previous 
case falls through, you just broke the previous case.
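
Haskell's case expression sidesteps the same trap a different way (my sketch): alternatives are independent, so reordering or inserting one can't silently change its neighbours, and GHC warns about missing cases under -Wall.

```haskell
data Color = Red | Green | Blue

-- Each alternative is self-contained: no break, no fallthrough,
-- and inserting or reordering cases can't break an adjacent one.
signal :: Color -> String
signal c = case c of
  Red   -> "stop"
  Green -> "go"
  Blue  -> "decorate"

main :: IO ()
main = putStrLn (signal Green)
```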

> In Haskell, case expressions work on *everything*. 

Pattern matching is indeed cool in some set of circumstances.

> which is obviously a hell of a lot longer. 

Yet, oddly enough, will work on things that *aren't lists*. That's the 
point. :-)
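
E.g. (my example), the same mechanism handles any algebraic type, nested as deep as you like:

```haskell
-- Pattern matching is not list-specific: one case expression can
-- inspect a Maybe, a pair inside it, and a literal, all at once.
label :: Maybe (String, Int) -> String
label m = case m of
  Nothing     -> "(none)"
  Just (w, 0) -> w ++ " (unused)"
  Just (w, n) -> w ++ " x" ++ show n

main :: IO ()
main = putStrLn (label (Just ("spam", 3)))
```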

> According to "the OO way", the only thing you should 
> ever do to objects is call methods on them, not inspect them for type 
> information or substructure.

Yep. And that's because you can substitute a new object and still use the 
same code. Your pattern match wouldn't work at all if you said "Hmmm, I'd 
like to use that same function, only with a set instead."
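
To be fair, Haskell's answer to that substitution problem is type classes rather than dispatch (my sketch; Foldable is the relevant class here): the same function runs on a list or a set unchanged.

```haskell
import qualified Data.Set as Set   -- containers package, ships with GHC
import Data.Foldable (foldl')

-- Abstracting over the container: totalOf works for any Foldable,
-- so switching from a list to a set needs no change to this code.
totalOf :: (Foldable t, Num a) => t a -> a
totalOf = foldl' (+) 0

main :: IO ()
main = do
  print (totalOf [1, 2, 3 :: Int])
  print (totalOf (Set.fromList [1, 2, 3 :: Int]))
```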

> But something like, say, JavaScript (which isn't really OO in the first 
> place) could certainly make use of ADTs and/or pattern matching, I think.

Huh? Javascript is about as OO as Smalltalk is. There's nothing in 
Javascript that is *not* an object.

> Another big deal about Haskell is how functions are first-class. The 
> fact that functions are pure makes this a really big win, but even 
> without pure functions, it's still useful. (Smalltalk implements it, for 
> example.) It doesn't immediately sound very interesting, but what it 
> enables you to do is write a loop once, in a library somewhere, and pass 
> in the "loop body" as a function.

Lots of languages have this, just so ya know. :-)
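
The "loop body as argument" idea in a couple of lines (my sketch):

```haskell
-- The "loop" is written once; the body is an ordinary argument.
forEach :: [a] -> (a -> b) -> [b]
forEach xs body = map body xs

main :: IO ()
main = print (forEach [1, 2, 3 :: Int] (\n -> n * n))
```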

> Other languages have done this. Smalltalk is the obvious example. 

Smalltalk doesn't have first-class functions. It has first-class blocks, 
which are closures (or continuations, more precisely). You're not passing a 
function to #do:. You're passing a stack frame.

> What Smalltalk doesn't do, and Haskell does, is make combining small 
> functions into larger functions trivial as well. 

That's because Smalltalk doesn't have functions. It has methods. And blocks. 
And continuations.
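
For reference, the Haskell version of "combining small functions" is a single operator (my example):

```haskell
import Data.Char (toLower)

-- (.) glues small functions into a bigger one; the pipeline is itself
-- an ordinary function value that can be passed around or composed again.
slug :: String -> String
slug = map toLower . filter (/= ' ')

main :: IO ()
main = putStrLn (slug "Why Is Haskell Interesting")
```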

> All of this *could* be put into Smalltalk, or any other OO language for 
> that matter. (Whether it would be efficient is a whole other matter...)

Not... really.  The difference is that Haskell functions work on data, while 
Smalltalk only has objects. In other words, you can't invoke a function 
without knowing what object that "function" is a method of. (This is also 
the case for Javascript, btw.)

C# has lambdas and anonymous methods and stuff like that. It also has 
"delegates", which are lists of pointers to object/method pairs (or just 
methods, if it's a static/class method that doesn't need an instance).

> I'm not aware of any mainstream languages that have this yet. 

Several languages have implemented this as libraries. Generally not for 
threading but for database stuff.
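
Assuming the feature in question is software transactional memory (the usual reading of this thread), GHC's library version looks roughly like this (my sketch, using the stm package):

```haskell
import Control.Concurrent.STM

-- The transfer runs atomically: no other thread can observe the state
-- between the two writes -- database-transaction semantics in memory.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  x <- readTVarIO a
  y <- readTVarIO b
  print (x, y)
```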

> (How long did it take Java to do Generics?) 

It wasn't that generics took a long time. It's that the owners of the 
language didn't want them, for whatever reason. Then, once Java was popular, 
it was impossible to change the JVM classfile format without killing what 
little momentum Java still had, so they had to be shoehorned in. See above.

I.e., it took Java a long time to do Generics *because* people cared about 
Java. ;-)

I mean, Python has explicitly said "We're going to not change anything in 
the language spec for 2 to 3 years until everyone else writing Python 
compilers/interpreters catches up."

It's easy to innovate when you have the only compiler.

> The pace of change is staggering. Lots of new and very cool 
> stuff happens in Haskell first. (No doubt it will eventually all be 
> stolen by lesser languages so that people still don't use Haskell itself.)

That's not hard to do when you have no user base you have to support.

-- 
Darren New, San Diego CA, USA (PST)
   The question in today's corporate environment is not
   so much "what color is your parachute?" as it is
   "what color is your nose?"

