Newsgroups: povray.off-topic
  Re: Why is Haskell interesting?  
From: Orchid XP v8
Date: 26 Feb 2010 17:41:15
Message: <4b884e0b$1@news.povray.org>
>> Haskell has automatic type inference, for example.
> 
> Lots of languages have this a little. In C#

Just out of curiosity, just how new *is* C#?
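
For anyone who hasn't seen it, this is roughly what "automatic type
inference" means in Haskell — a minimal sketch (names made up); no
signatures are written, yet everything is fully statically typed, and the
types in the comments are what GHC works out on its own:

```haskell
-- GHC infers: addPairs :: Num a => [(a, a)] -> [a]
addPairs xs = map (\(a, b) -> a + b) xs

-- GHC infers: swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)
```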

> This hasn't been true for a decade or more. :-) Just because Java sucks, 
> don't think every statically typed language sucks the same way.

...there are statically-typed OO languages which aren't Java? [Or 
Eiffel, which nobody ever uses.]

C++ is statically-typed, but even Warp will agree it's not a 
specifically OO language; it's more of a pragmatic hybrid of styles.

>> More recently, a couple of OO languages have introduced this feature 
>> they call "generics". Eiffel is the first language I'm aware of that 
>> did it, although I gather Java has it too now.
> 
> Java kinda sorta has it. It's *really* a container full of Objects, with 
> syntactic sugar to wrap and unwrap them.

Oh dears.

> (Similarly, "inner classes" are just regular classes with munged up names.)

Yeah, that's what I would have expected.

>> The difference is that Eiffel makes this seem like some highly complex 
>> uber-feature that only a few people will ever need to use, in special 
>> circumstances. 
> 
> No, Eiffel's complexity is due to inheritance.
> 
> It's not the complexity of the feature as much as its interactions with 
> other features of the type system.

All I know is that Eiffel makes it seem like this really exotic feature 
that you "shouldn't need to use" under normal circumstances, and if you 
find yourself using it, you've probably designed your program wrong.

Even C++ templates make it seem like a pretty big deal.

Haskell, on the other hand, makes it trivial.
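
For example, a container parameterised over its element type costs one
extra token, and nothing about it feels exotic. (A made-up sketch, not
anything from a standard library:)

```haskell
-- A stack of any element type whatsoever.
data Stack a = Stack [a]
  deriving (Eq, Show)

push :: a -> Stack a -> Stack a
push x (Stack xs) = Stack (x : xs)

-- Popping an empty stack yields Nothing rather than an error.
pop :: Stack a -> Maybe (a, Stack a)
pop (Stack [])       = Nothing
pop (Stack (x : xs)) = Just (x, Stack xs)
```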

> It's interesting that Eiffel is explicitly an attempt to make algebraic 
> data systems on top of OOP, in a sense. Hence the invariants, 
> preconditions, etc.

Uh... what?

>> Haskell with its algebraic data types assumes that the data you're 
>> trying to model has a fixed, finite set of possibilities, which you 
>> can describe once and for all. 
> 
> Integers are finite?

They are if they're fixed-precision. ;-)

In fact, if you want to be really anal, if they're stored on a computer 
that exists in the physical world, they're finite, since the amount of 
matter [and energy] in the observable universe is finite. But I'm sure 
that's not what you meant.

>> Now consider the OOP case. You write an abstract "tree" class with 
>> concrete "leaf" and "branch" subclasses. Each time you want to do 
>> something new with the tree, you write an abstract method in the tree 
>> class and add half of the implementation to each of the two concrete 
>> subclasses.
> 
> I've never seen it done that way. :-)

Oh really? So how would you do it then?

> It depends how complex your tree is vs how complex your operations.

I could have sworn I just said that. ;-)

> As you say, it depends what kind of extensibility you want. Add 50 new 
> kinds of nodes. Add 50 new operations. OOP was envisioned as adding new 
> kinds of things. New kinds of cars to a traffic simulation, new kinds of 
> bank accounts to a financial analysis program, etc.

Indeed. You do NOT want to design one giant ADT that represents every 
possible kind of bank account, and write huge monolithic functions that 
understand how to process every possible kind of bank account. It's 
non-extensible, and all the code relating to one particular type of 
account is scattered across the different account processing functions, 
so it's the wrong way to factor the problem.

My point is that there are major classes of problems which *are* the 
other way around, and for that ADTs win. For example, the parse tree of 
an expression. You are almost never going to add new types of node to 
that, but you *are* going to be adding new processing passes all the 
time. For this, Haskell's ADT + functional approach factors the problem 
"the other way", and that works out better for this kind of problem.

And when it doesn't work out right, you use interfaces and get the 
essence of the OO solution.
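
To make the parse-tree case concrete, here's a minimal sketch (my own
toy example): a closed set of node types, an open-ended set of passes.
Adding a new pass is one new function; no existing code changes.

```haskell
-- The node types, fixed once and for all.
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Eq, Show)

-- Pass 1: evaluate the expression.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b

-- Pass 2: pretty-print. Added later without touching 'eval' or 'Expr'.
pretty :: Expr -> String
pretty (Lit n)   = show n
pretty (Add a b) = "(" ++ pretty a ++ " + " ++ pretty b ++ ")"
pretty (Mul a b) = "(" ++ pretty a ++ " * " ++ pretty b ++ ")"
```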

>> - If you have a limited number of structures which you want to process 
>> in an unlimited number of ways, ADTs work best.
>>
>> - If you have an unlimited number of structures which you want to 
>> process in a limited number of ways, a class hierarchy works best.
> 
> Yes, that.

LOL!

> Well, not even a "limited number of ways."  The point of OOP 
> is that you can add new structures *without* changing existing 
> structures *at all*.

And I was pointing out that for each thing you want to do to a 
structure, you have to add new methods up and down the inheritance hierarchy.

Ultimately, if you have a kind of structure that you need to perform a 
bazillion operations on, what you'll hopefully do is define a set of 
"fundamental" operations as methods of the structure itself, and define 
all the complex processing somewhere else, in terms of these more 
fundamental operations. (If you take this to the extreme, you obviously 
end up with dumb data objects, which usually isn't good OO style.)
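
In Haskell you get much the same factoring with a type class: a handful
of fundamental methods, and the complex processing written once against
the class. A made-up sketch (the names are mine, not a real library):

```haskell
-- The "fundamental operations" live in the class...
class Container f where
  empty   :: f a
  insert  :: a -> f a -> f a
  toListC :: f a -> [a]

-- ...and the complex processing is defined once, outside, for
-- every Container at the same time.
sizeC :: Container f => f a -> Int
sizeC = length . toListC

newtype ListBox a = ListBox [a]

instance Container ListBox where
  empty                 = ListBox []
  insert x (ListBox xs) = ListBox (x : xs)
  toListC (ListBox xs)  = xs
```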

>> The one small glitch is that Haskell demands that all types are known 
>> at compile-time. Which means that doing OO-style stuff where types are 
>> changed dynamically at run-time is a little tricky. Haskell 98 (and 
>> the newly-released Haskell 2010) can't do it, but there are 
>> widely-implemented extensions that can. It gets kinda hairy though.
> 
> It also means you have to recompile everything whenever you change a 
> type. Which kind of sucks when it takes several days to compile the system.

Are there systems in existence which actually take that long to compile?

Anyway, Haskell extensions allow you to do runtime dynamic dispatch. 
That means you don't have to recompile your code. [Although you will 
have to relink to add in the new types you want to handle, etc.] It's 
just that it pisses the type checker off, which means you can't do 
things you'd normally expect to be able to do.

(In particular, the basic ExistentialQuantification extension allows 
typesafe upcasts, but utterly prohibits *downcasts*, which could be a 
problem...)

>> (i.e., you left off the "break" statement, so a whole bunch of code 
>> gets executed when you didn't want it to).
> 
> This is one that C# finally fixed right.

Oh thank God...

> Pattern matching is indeed cool in some set of circumstances.

Aye.

(Jay. Kay. El...)

>> which is obviously a hell of a lot longer. 
> 
> Yet, oddly enough, will work on things that *aren't lists*. That's the 
> point. :-)

Well, you can pattern match on the size of the container (presuming you 
have a polymorphic "size" function to get this). It won't be quite as 
neat though, obviously.

Secondly, there's an experimental extension to Haskell called "view 
patterns" which allows you to create a sort of user-defined pattern 
matching. That fixes this exact problem. (And a couple of others.)

> Yep. And that's because you can substitute a new object and still use 
> the same code. Your pattern match wouldn't work at all if you said 
> "Hmmm, I'd like to use that same function, only with a set instead."

Yes, in general pattern matching is used for implementing the "methods" 
for operating on one specific type. (Although of course in Haskell they 
aren't methods, they're just normal functions.) Your polymorphic stuff 
is then implemented with function calls, not pattern matching. But see 
my previous comments.

> Huh? Javascript is about as OO as Smalltalk is. There's nothing in 
> Javascript that is *not* an object.

1. Smalltalk has classes. JavaScript does not.

2. Smalltalk has encapsulation. JavaScript does not.

>> Another big deal about Haskell is how functions are first-class.
> 
> Lots of languages have this, just so ya know. :-)

Yeah. Smalltalk, JavaScript, Tcl after a fashion. C and C++ have 
function pointers, and you can sort-of kludge the same kind of 
techniques as found in Haskell. (But you wouldn't. It would be 
horrifyingly inefficient.)

>> Other languages have done this. Smalltalk is the obvious example. 
> 
> Smalltalk doesn't have first-class functions. It has first class blocks, 
> which are closures.

I won't pretend to comprehend what the difference is, but sure.

>> What Smalltalk doesn't do, and Haskell does, is make combining small 
>> functions into larger functions trivial as well. 
> 
> That's because Smalltalk doesn't have functions. It has methods. And 
> blocks. And continuations.

Plausibly.
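
Just to illustrate the "combining small functions" point, a sketch of
mine: each piece is trivial on its own, and (.) glues them together with
zero ceremony.

```haskell
-- Three small functions...
trimZeros :: [Int] -> [Int]
trimZeros = filter (/= 0)

double :: [Int] -> [Int]
double = map (* 2)

total :: [Int] -> Int
total = sum

-- ...assembled into one bigger function by plain composition.
process :: [Int] -> Int
process = total . double . trimZeros
```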

>> All of this *could* be put into Smalltalk, or any other OO language 
>> for that matter. (Whether it would be efficient is a whole other 
>> matter...)
> 
> Not... really.  The difference is that Haskell functions work on data, 
> while Smalltalk only has objects. In other words, you can't invoke a 
> function without knowing what object that "function" is a method of. 
> (This is also the case for Javascript, btw.)

Erm... no, not really.

   function foo(x) { return x * 2; }

   var bar = foo;

   var x = bar(5);   // x is now 10

No objects here, no methods, just assigning a function to a variable and 
calling it later. You *can* make a function an object method, but you do 
not *have* to. JavaScript is like "meh, whatever, dude. I'm green."

> C# has lambdas and anonymous expressions and stuff like that. It also 
> has "delegates", which is a list of pointers to object/method pairs (or 
> just methods, if it's a static/class method that doesn't need an instance).

Eiffel has... uh... "agents"? Which are basically GUI callback 
functions, but we wouldn't want to have actual functions, would we?

>> I'm not aware of any mainstream languages that have this yet. 
> 
> Several languages have implemented this as libraries. Generally not for 
> threading but for database stuff.

Yeah. Databases have been transactional for decades, and many languages 
can access a database. But STM is all internal to the program, and 
implements multi-way blocking and conditional branching and all kinds of 
craziness.

I'd point you to the paper I just wrote about implementing it using 
locks... but then you'd have my real name. The long and short of it is, 
it boils down to running the transaction *as if* there are no other 
transactions, but not actually performing any real writes. Then, when 
we've discovered exactly what the transaction wants to do to the central 
store, we take all the locks in sorted order. So the essence of the 
whole thing is that by simulating the transaction from beginning to end, 
we solve the old "I don't know what I need to lock until I've already 
locked stuff" problem. (And make all the locking stuff the library 
author's problem, not yours.)
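
For reference, this is what it looks like from the user's side — a
minimal sketch using the stock Control.Concurrent.STM API that ships
with GHC; all the locking machinery described above lives behind
'atomically':

```haskell
import Control.Concurrent.STM

-- A composite transaction: both writes commit together or not at all.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

-- Run one transfer and report the final balances.
demo :: IO (Int, Int)
demo = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  (,) <$> readTVarIO a <*> readTVarIO b
```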

>> (How long did it take Java to do Generics?) 
> 
> It wasn't that generics took a long time. It's that the owners of the 
> language didn't want them, for whatever reason.

Heh, nice.

> I mean, Python has explicitly said "We're going to not change anything 
> in the language spec for 2 to 3 years until everyone else writing Python 
> compilers/interpreters catch up."
> 
> It's easy to innovate when you have the only compiler.

Haskell has a motto: "Avoid success at all costs." The meaning, of 
course, is that if Haskell suddenly became crazy-popular, we wouldn't 
be able to arbitrarily change it from time to time. (In reality, that 
point was reached long ago.)

And just FYI, there *is* more than one Haskell compiler. Actually, there 
are several - it's just that only one has a big enough team working on 
it that they produce a production-ready product. [There are people who 
*get paid money* to write GHC.] All the others are PhD projects, or 
hobbyists. At one time there were several viable production compilers, 
but most of them have gone dormant due to the dominance of GHC.

>> The pace of change is staggering. Lots of new and very cool stuff 
>> happens in Haskell first.
> 
> That's not hard to do when you have no user base you have to support.

Heh. Better not tell that to Galois.com, Well-Typed.com, the authors of 
the 1,000+ packages in the Hackage DB, or the likes of Facebook, AT&T, 
Barclays Analytics or Linspire who all apparently use Haskell internally.

Sure, Haskell is no Java, C# or PHP. But that's not to say that *nobody* 
is using it...

See below for some outdated blurb:

http://www.haskell.org/haskellwiki/Haskell_in_industry

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*

