POV-Ray : Newsgroups : povray.off-topic : Why is Haskell interesting?
  Why is Haskell interesting? (Message 8 to 17 of 87)  
From: Orchid XP v8
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 16:59:35
Message: <4b884447@news.povray.org>
Warp wrote:

>   Thus you will be able to write things like:
> 
>     for(auto iter = v.begin(); iter != v.end(); ++iter) ...
> 
>   The compiler can see what is the return type of v.begin(), so there's no
> need for the programmer to know it. Hence he can just specify 'auto' as the
> variable type.

That's a nice touch. Presumably this also means that if the type of "v" 
changes, you don't have to manually change the type of "iter" to match 
any more...?
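For comparison, Haskell's type inference gives you that same effect everywhere, not just in declarations. A toy sketch (my own names):

```haskell
-- None of the bindings below needs a type annotation. If the type of
-- xs changes (say, to [Double]), everything still compiles unchanged;
-- the compiler simply re-infers the types.
xs :: [Int]
xs = [1, 2, 3, 4]

total = sum xs          -- inferred :: Int
doubled = map (* 2) xs  -- inferred :: [Int]
```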

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Warp
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 17:19:45
Message: <4b884901@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> Warp wrote:

> >   Thus you will be able to write things like:
> > 
> >     for(auto iter = v.begin(); iter != v.end(); ++iter) ...
> > 
> >   The compiler can see what is the return type of v.begin(), so there's no
> > need for the programmer to know it. Hence he can just specify 'auto' as the
> > variable type.

> That's a nice touch. Presumably this also means that if the type of "v" 
> changes, you don't have to manually change the type of "iter" to match 
> any more...?

  Right.

-- 
                                                          - Warp



From: Orchid XP v8
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 17:41:15
Message: <4b884e0b$1@news.povray.org>
>> Haskell has automatic type inference, for example.
> 
> Lots of languages have this a little. In C#

Just out of curiosity, just how new *is* C#?

> This hasn't been true for a decade or more. :-) Just because Java sucks, 
> don't think every statically typed language sucks the same way.

...there are statically-typed OO languages which aren't Java? [Or 
Eiffel, which nobody ever uses.]

C++ is statically-typed, but even Warp will agree it's not a 
specifically OO language; it's more of a pragmatic hybrid of styles.

>> More recently, a couple of OO languages have introduced this feature 
>> they call "generics". Eiffel is the first language I'm aware of that 
>> did it, although I gather Java has it too now.
> 
> Java kinda sorta has it. It's *really* a container full of Objects, with 
> syntactic sugar to wrap and unwrap them.

Oh dears.

> (Similarly, "inner classes" are just regular classes with munged up names.)

Yeah, that's what I would have expected.

>> The difference is that Eiffel makes this seem like some highly complex 
>> uber-feature that only a few people will ever need to use, in special 
>> circumstances. 
> 
> No, Eiffel's complexity is due to inheritance.
> 
> It's not the complexity of the feature as much as its interactions with 
> other features of the type system.

All I know is that Eiffel makes it seem like this really exotic feature 
that you "shouldn't need to use" under normal circumstances, and if you 
find yourself using it, you've probably designed your program wrong.

Even C++ templates make it seem like a pretty big deal.

Haskell, on the other hand, makes it trivial.
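In Haskell, parametric polymorphism isn't even a named feature you switch on; it's what you get by default. A trivial sketch (my own toy functions):

```haskell
-- "Generics" with no extra syntax: these work for any types at all.
swapPair :: (a, b) -> (b, a)
swapPair (x, y) = (y, x)

-- Return the first element of a list, or a default if it's empty.
firstOr :: a -> [a] -> a
firstOr def []  = def
firstOr _ (x:_) = x
```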

> It's interesting that Eiffel is explicitly an attempt to make algebraic 
> data systems on top of OOP, in a sense. Hence the invariants, 
> preconditions, etc.

Uh... what?

>> Haskell with its algebraic data types assumes that the data you're 
>> trying to model has a fixed, finite set of possibilities, which you 
>> can describe once and for all. 
> 
> Integers are finite?

They are if they're fixed-precision. ;-)

In fact, if you want to be really anal, if they're stored on a computer 
that exists in the physical world, they're finite, since the amount of 
matter [and energy] in the observable universe is finite. But I'm sure 
that's not what you meant.

>> Now consider the OOP case. You write an abstract "tree" class with 
>> concrete "leaf" and "branch" subclasses. Each time you want to do 
>> something new with the tree, you write an abstract method in the tree 
>> class and add half of the implementation to each of the two concrete 
>> subclasses.
> 
> I've never seen it done that way. :-)

Oh really? So how would you do it then?

> It depends how complex your tree is vs how complex your operations.

I could have sworn I just said that. ;-)

> As you say, it depends what kind of extensibility you want. Add 50 new 
> kinds of nodes. Add 50 new operations. OOP was envisioned as adding new 
> kinds of things. New kinds of cars to a traffic simulation, new kinds of 
> bank accounts to a financial analysis program, etc.

Indeed. You do NOT want to design one giant ADT that represents every 
possible kind of bank account, and write huge monolithic functions that 
understand how to process every possible kind of bank account. It's 
non-extensible, and all the code relating to one particular type of 
account is scattered across the different account processing functions, 
so it's the wrong way to factor the problem.

My point is that there are major classes of problems which *are* the 
other way around, and for that ADTs win. For example, the parse tree of 
an expression. You are almost never going to add new types of node to 
that, but you *are* going to be adding new processing passes all the 
time. For this, Haskell's ADT + functional approach factors the problem 
"the other way", and that works out better for this kind of problem.

And when it doesn't work out right, you use interfaces and get the 
essence of the OO solution.
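To make the comparison concrete, here's roughly what the ADT side of the tree example looks like (a sketch, my own names): one closed type, and every new processing pass is just another ordinary function, with no change to the type itself.

```haskell
-- A binary tree as an algebraic data type: a fixed, closed set of shapes.
data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- Each new "pass" is just another function over the type.
size :: Tree a -> Int
size (Leaf _)     = 1
size (Branch l r) = size l + size r

depth :: Tree a -> Int
depth (Leaf _)     = 1
depth (Branch l r) = 1 + max (depth l) (depth r)

-- Adding a third operation later requires no change to Tree at all.
flatten :: Tree a -> [a]
flatten (Leaf x)     = [x]
flatten (Branch l r) = flatten l ++ flatten r
```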

>> - If you have a limited number of structures which you want to process 
>> in an unlimited number of ways, ADTs work best.
>>
>> - If you have an unlimited number of structures which you want to 
>> process in a limited number of ways, a class hierarchy works best.
> 
> Yes, that.

LOL!

> Well, not even a "limited number of ways."  The point of OOP 
> is that you can add new structures *without* changing existing 
> structures *at all*.

And I was pointing out that for each thing you want to do to a 
structure, you have to add new methods up and down the inheritance hierarchy.

Ultimately, if you have a kind of structure that you need to perform a 
bazillion operations on, what you'll hopefully do is define a set of 
"fundamental" operations as methods of the structure itself, and define 
all the complex processing somewhere else, in terms of these more 
fundamental operations. (If you take this to the extreme, you obviously 
end up with dumb data objects, which usually isn't good OO style.)

>> The one small glitch is that Haskell demands that all types are known 
>> at compile-time. Which means that doing OO-style stuff where types are 
>> changed dynamically at run-time is a little tricky. Haskell 98 (and 
>> the newly-released Haskell 2010) can't do it, but there are 
>> widely-implemented extensions that can. It gets kinda hairy though.
> 
> It also means you have to recompile everything whenever you change a 
> type. Which kind of sucks when it takes several days to compile the system.

Are there systems in existence which actually take that long to compile?

Anyway, Haskell extensions allow you to do runtime dynamic dispatch. 
That means you don't have to recompile your code. [Although you will 
have to relink to add in the new types you want to handle, etc.] It's 
just that it pisses the type checker off, which means you can't do 
things you'd normally expect to be able to do.

(In particular, the basic ExistentialQuantification extension allows 
typesafe upcasts, but utterly prohibits *downcasts*, which could be a 
problem...)

>> (i.e., you left off the "break" statement, so a whole bunch of code 
>> gets executed when you didn't want it to).
> 
> This is one that C# finally fixed right.

Oh thank God...

> Pattern matching is indeed cool in some set of circumstances.

Aye.

(Jay. Kay. El...)

>> which is obviously a hell of a lot longer. 
> 
> Yet, oddly enough, will work on things that *aren't lists*. That's the 
> point. :-)

Well, you can pattern match on the size of the container (presuming you 
have a polymorphic "size" function to get this). It won't be quite as 
neat though, obviously.

Secondly, there's actually an experimental extension to Haskell called 
"view patterns" which allows you to create sort-of "user-defined pattern 
matching". This fixes this exact problem. (And a couple of others.)

> Yep. And that's because you can substitute a new object and still use 
> the same code. Your pattern match wouldn't work at all if you said 
> "Hmmm, I'd like to use that same function, only with a set instead."

Yes, in general pattern matching is used for implementing the "methods" 
for operating on one specific type. (Although of course in Haskell they 
aren't methods, they're just normal functions.) Your polymorphic stuff 
is then implemented with function calls, not pattern matching. But see 
my previous comments.

> Huh? Javascript is about as OO as Smalltalk is. There's nothing in 
> Javascript that is *not* an object.

1. Smalltalk has classes. JavaScript does not.

2. Smalltalk has encapsulation. JavaScript does not.

>> Another big deal about Haskell is how functions are first-class.
> 
> Lots of languages have this, just so ya know. :-)

Yeah. Smalltalk, JavaScript, Tcl after a fashion. C and C++ have 
function pointers, and you can sort-of kludge the same kind of 
techniques as found in Haskell. (But you wouldn't. It would be 
horrifyingly inefficient.)

>> Other languages have done this. Smalltalk is the obvious example. 
> 
> Smalltalk doesn't have first-class functions. It has first class blocks, 
> which are closures.

I won't pretend to comprehend what the difference is, but sure.

>> What Smalltalk doesn't do, and Haskell does, is make combining small 
>> functions into larger functions trivial as well. 
> 
> That's because Smalltalk doesn't have functions. It has methods. And 
> blocks. And continuations.

Plausibly.
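By "trivial" I mean things like this, where (.) glues small functions into bigger ones (toy example):

```haskell
-- Two small functions...
doubleAll :: [Int] -> [Int]
doubleAll = map (* 2)

keepEven :: [Int] -> [Int]
keepEven = filter even

-- ...combined into a larger one with a single composition operator.
-- Reads right-to-left: first keep the evens, then double them.
pipeline :: [Int] -> [Int]
pipeline = doubleAll . keepEven
```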

>> All of this *could* be put into Smalltalk, or any other OO language 
>> for that matter. (Whether it would be efficient is a whole other 
>> matter...)
> 
> Not... really.  The difference is that Haskell functions work on data, 
> while Smalltalk only has objects. In other words, you can't invoke a 
> function without knowing what object that "function" is a method of. 
> (This is also the case for Javascript, btw.)

Erm... no, not really.

   function foo(x) {...}

   var bar = foo;

   var x = bar(5);

No objects here, no methods, just assigning a function to a variable and 
using it later. You *can* make a function an object method, but you do 
not *have* to. JavaScript is like "meh, whatever, dude. I'm green."

> C# has lambdas and anonymous expressions and stuff like that. It also 
> has "delegates", which is a list of pointers to object/method pairs (or 
> just methods, if it's a static/class method that doesn't need an instance).

Eiffel has... uh... "agents"? Which are basically GUI callback 
functions, but we wouldn't want to have actual functions, would we?

>> I'm not aware of any mainstream languages that have this yet. 
> 
> Several languages have implemented this as libraries. Generally not for 
> threading but for database stuff.

Yeah. Databases have been transactional for decades, and many languages 
can access a database. But STM is all internal to the program, and 
implements multi-way blocking and conditional branching and all kinds of 
craziness.

I'd point you to the paper I just wrote about implementing it using 
locks... but then you'd have my real name. The long and short of it is, 
it boils down to running the transaction *as if* there are no other 
transactions, but not actually performing any real writes. Then, when 
we've discovered exactly what the transaction wants to do to the central 
store, we take all the locks in sorted order. So the essence of the 
whole thing is that by simulating the transaction from beginning to end, 
we solve the old "I don't know what I need to lock until I've already 
locked stuff" problem. (And make all the locking stuff the library 
author's problem, not yours.)
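In code, the classic example looks something like this (a sketch using the standard stm library; the names are mine):

```haskell
import Control.Concurrent.STM

-- Transfer between two "accounts" as a single atomic transaction:
-- either both writes happen or neither, with no locks in user code.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry  -- block until another transaction changes 'from'
    else do
      writeTVar from (balance - amount)
      modifyTVar' to (+ amount)
```

That `retry` is the multi-way blocking bit: the transaction parks until something it read gets written by another transaction, then automatically reruns.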

>> (How long did it take Java to do Generics?) 
> 
> It wasn't that generics took a long time. It's that the owners of the 
> language didn't want them, for whatever reason.

Heh, nice.

> I mean, Python has explicitly said "We're going to not change anything 
> in the language spec for 2 to 3 years until everyone else writing Python 
> compilers/interpreters catch up."
> 
> It's easy to innovate when you have the only compiler.

Haskell has a motto: "Avoid success at all costs." The meaning of this, 
of course, is that if Haskell suddenly became crazy-popular, we wouldn't 
be able to arbitrarily change it from time to time. (In reality, this 
point has already long ago been reached.)

And just FYI, there *is* more than one Haskell compiler. Actually, there 
are several - it's just that only one has a big enough team working on 
it that they produce a production-ready product. [There are people who 
*get paid money* to write GHC.] All the others are PhD projects, or 
hobbyists. At one time there were several viable production compilers, 
but most of them have gone dormant due to the dominance of GHC.

>> The pace of change is staggering. Lots of new and very cool stuff 
>> happens in Haskell first.
> 
> That's not hard to do when you have no user base you have to support.

Heh. Better not tell that to Galois.com, Well-Typed.com, the authors of 
the 1,000+ packages in the Hackage DB, or the likes of Facebook, AT&T, 
Barclays Analytics or Linspire who all apparently use Haskell internally.

Sure, Haskell is no Java, C# or PHP. But that's not to say that *nobody* 
is using it...

See below for some outdated blurb:

http://www.haskell.org/haskellwiki/Haskell_in_industry

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 18:10:41
Message: <4b8854f1@news.povray.org>
Orchid XP v8 wrote:
>>> Haskell has automatic type inference, for example.
>>
>> Lots of languages have this a little. In C#
> 
> Just out of curiosity, just how new *is* C#?

Like 7 or 8 years. Newer versions are, of course, younger.

> ...there are statically-typed OO languages which aren't Java? [Or 
> Eiffel, which nobody ever uses.]

You're trolling here, right?

> All I know is that Eiffel makes it seem like this really exotic feature 
> that you "shouldn't need to use" under normal circumstances, and if you 
> find yourself using it, you've probably designed your program wrong.

I think you've misread that, since one of the primary purposes in the design 
of Eiffel was to provide a language where doing this tricky thing is right.

> Haskell, on the other hand, makes it trivial.

As long as you don't have inheritance, it *is* pretty trivial.

>> Integers are finite?
> They are if they're fixed-precision. ;-)

Oh. I thought Haskell had bigints. Nevermind.

>> I've never seen it done that way. :-)
> Oh really? So how would you do it then?

Generally with the same class for nodes regardless of whether they're leaves 
or not.

> My point is that there are major classes of problems which *are* the 
> other way around, and for that ADTs win. 

Yep.

> For example, the parse tree of 
> an expression. You are almost never going to add new types of node to 
> that, but you *are* going to be adding new processing passes all the 
> time. 

I'd disagree with this, but OK. I get your point. I just think this specific 
example is probably wrong.

>> It also means you have to recompile everything whenever you change a 
>> type. Which kind of sucks when it takes several days to compile the 
>> system.
> 
> Are there systems in existence which actually take that long to compile?

Yep. Hell, it takes about 3 hours to compile the toolchain and 3 hours to 
compile Qt on my machine, and that's just C and C++ without anything 
sophisticated going on.

>> Huh? Javascript is about as OO as Smalltalk is. There's nothing in 
>> Javascript that is *not* an object.
> 
> 1. Smalltalk has classes. JavaScript does not.

Yes, it does.  Not in exactly the same way, mind. What do you think
    var d = new Date()
is doing there?

> 2. Smalltalk has encapsulation. JavaScript does not.

Yes, it does.  It's just harder and pointless.

>> Smalltalk doesn't have first-class functions. It has first class 
>> blocks, which are closures.
> 
> I won't pretend to comprehend what the difference is, but sure.

A function is a piece of code. A block is a reference to a piece of code 
that when evaluated returns a closure.  It's the same difference as between 
a class and an instance, or a lambda and a closure.

> Erm... no, not really.
> 
>   function foo(x) {...}
> 
>   var bar = foo;
> 
>   var x = bar(5);
> 
> No objects here, 

Bzzzt. You just don't know javascript very well. What object do you get when 
foo or bar references "this"?

> Eiffel has... uh... "agents"? Which are basically GUI callback 
> functions, but we wouldn't want to have actual functions, would we?

I don't remember Eiffel well enough to remember that bit.

> Yeah. Databases have been transactional for decades, and many languages 
> can access a database. But STM is all internal to the program, and 
> implements multi-way blocking and conditional branching and all kinds of 
> craziness.

Right. And it has been implemented in database access code. Like, when you 
have "cloud" services. I think Google's cloud processing arguably uses STM. 
I'm pretty sure Erlang's Mnesia database works that way too.

> Heh. Better not tell that to Galois.com, Well-Typed.com, the authors of 
> the 1,000+ packages in the Hackage DB, or the likes of Facebook, AT&T, 
> Barclays Analytics or Linspire who all apparently use Haskell internally.

I think you missed "have to" there.

> Sure, Haskell is no Java, C# or PHP. But that's not to say that *nobody* 
> is using it...

I didn't say nobody is using it. I said you don't have to support it.

-- 
Darren New, San Diego CA, USA (PST)
   The question in today's corporate environment is not
   so much "what color is your parachute?" as it is
   "what color is your nose?"



From: Tim Attwood
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 21:16:17
Message: <4b888071$1@news.povray.org>
> Assume Haskell doesn't have the "0xABC" kind of syntax for hex literals. 
> Could you add that with Haskell, or TH?

Haskell supports hex, octal and exponent floats. (0xff, 0o377, 0.255e+3)



From: Darren New
Subject: Re: Why is Haskell interesting?
Date: 26 Feb 2010 21:43:41
Message: <4b8886dd$1@news.povray.org>
Tim Attwood wrote:
>> Assume Haskell doesn't have the "0xABC" kind of syntax for hex 
>> literals. Could you add that with Haskell, or TH?
> 
> Haskell supports hex, octal and exponent floats. (0xff, 0o377, 0.255e+3)

Pretend it doesn't.  Could you add it?  Can you make a literal syntax for 
"list of exactly two elements" that looks like [[-{ alpha / beta }-]] or 
something?

-- 
Darren New, San Diego CA, USA (PST)
   The question in today's corporate environment is not
   so much "what color is your parachute?" as it is
   "what color is your nose?"



From: Orchid XP v8
Subject: Re: Why is Haskell interesting?
Date: 27 Feb 2010 04:53:01
Message: <4b88eb7d$1@news.povray.org>
Darren New wrote:

> Assume Haskell doesn't have the "0xABC" kind of syntax for hex literals. 
> Could you add that with Haskell, or TH?

You could write a function that converts an ASCII hex string into a 
number, and then pass the string to that. So you end up saying

   if x == hex "0xABC" then...

or similar.
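Such a function is only a few lines (a sketch using Numeric.readHex from the standard library):

```haskell
import Numeric (readHex)

-- Runtime hex parsing of the kind described above: strip an optional
-- "0x" prefix and parse the rest as hexadecimal digits.
hex :: String -> Integer
hex s = case readHex (dropPrefix s) of
          [(n, "")] -> n
          _         -> error ("hex: bad literal " ++ s)
  where
    dropPrefix ('0':'x':rest) = rest
    dropPrefix rest           = rest
```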

If you use TH instead, you can write a "splice":

   if x == $(hex "0xABC") then ...

This is more typing, but the conversion to hex now happens at 
compile-time, not runtime. (It's plausible that calling a function with 
a constant will get executed at compile-time anyway, but not guaranteed. 
TH guarantees it. And if it errors, it errors at compile-time.)

Alternatively you can use the new "quasi-quoting" feature:

   if x == [$hex| 0xABC |] then ...

Notice the lack of quote marks. Quasi-quoting is really intended for 
where you want to write a big long data literal, but it's too wordy. For 
example, rather than writing

   x = Expr_Define
         (Expr_Function (Name_Literal "Sinc") [Expr_Var (Name_Literal "x")])
         (Expr_BinOp BinOp_Divide
           (Expr_Function (Name_Literal "Sin") [Expr_Var (Name_Literal "x")])
           (Expr_Var (Name_Literal "x")))

(assuming I even nested all those brackets right!), you write an 
expression parser, and then do

   x = [$parser| Sinc(x) = Sin(x) / x |]

and it generates the same thing, at compile-time. As well as generating 
expressions, you can also use it to generate patterns for pattern 
matching. (Your parser of course has to distinguish between expression 
variables and Haskell variables somehow...)

However, there is no way in Haskell to make it so that some arbitrary 
new string can be used as a literal, anywhere in the program. You have 
to tell the compiler what function to use to parse this stuff, one way 
or another.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Why is Haskell interesting?
Date: 27 Feb 2010 05:01:19
Message: <4b88ed6f@news.povray.org>
>> Just out of curiosity, just how new *is* C#?
> 
> Like 7 or 8 years. Newer versions are, of course, younger.

...I feel old now. :-(

>> ...there are statically-typed OO languages which aren't Java? [Or 
>> Eiffel, which nobody ever uses.]
> 
> You're trolling here, right?

Can you come up with names?

As you can imagine, I don't really follow this stuff very closely, but 
as far as I'm aware, there aren't very many "mainstream" languages out 
there.

>> All I know is that Eiffel makes it seem like this really exotic 
>> feature that you "shouldn't need to use" under normal circumstances, 
>> and if you find yourself using it, you've probably designed your 
>> program wrong.
> 
> I thnk you've misread that, since one of the primary purposes in the 
> design of Eiffel was to provide a language where doing this tricky thing 
> is right.
> 
>> Haskell, on the other hand, makes it trivial.
> 
> As long as you don't have inheritance, it *is* pretty trivial.

The syntax seemed overly complex and intimidating to me. The actual rules 
for type compatibility are fairly complicated, but seem to intuitively 
"do what you'd expect", so that's not too much of a problem.

>>> Integers are finite?
>> They are if they're fixed-precision. ;-)
> 
> Oh. I thought Haskell had bigints. Nevermind.

It does. Currently powered by the GMP. (Apparently they're trying to 
change that due to licensing issues or something...)

My point is that even bigints are, technically, finite if executed on a 
physical machine. But, like I said, I'm sure that's not what you meant. ;-)

>>> I've never seen it done that way. :-)
>> Oh really? So how would you do it then?
> 
> Generally with the same class for nodes regardless of whether they're 
> leaves or not.

Right... so if leaves contain a datum and branches don't... Oh, I 
suppose you just use a null-pointer instead or something?

Well anyway, I've never seen anyone implement it that way, but I guess 
you could.

>> For example, the parse tree of an expression. You are almost never 
>> going to add new types of node to that, but you *are* going to be 
>> adding new processing passes all the time. 
> 
> I'd disagree with this, but OK. I get your point. I just think this 
> specific example is probably wrong.

I wrote a program to interpret lambda calculus expressions. You know how 
many kinds of expression there are? 3. Exactly 3. By the formal 
definition of "lambda expression". There is always 3, and there will 
never be more than 3.

The list of things I might want to *do* to a lambda expression is of 
course huge. I might want to find all its free variables, or rename all 
variables to be unique, or reduce it to normal form, or just determine 
whether it *is* in normal form, or find the maximum nesting depth, or 
convert it into a string for display, or...
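For concreteness, the type and one such pass (a sketch; the constructor names are mine):

```haskell
import Data.List (nub)

-- The three and only three kinds of lambda expression:
-- variables, abstractions, and applications.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr

-- One of the many possible passes: collect the free variables.
freeVars :: Expr -> [String]
freeVars (Var x)   = [x]
freeVars (Lam x e) = filter (/= x) (freeVars e)
freeVars (App f a) = nub (freeVars f ++ freeVars a)
```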

>> Are there systems in existence which actually take that long to compile?
> 
> Yep. Hell, it takes about 3 hours to compile the toolchain and 3 hours 
> to compile Qt on my machine, and that's just C and C++ without anything 
> sophisticated going on.

Hmm, interesting.

I've also heard people whine that Darcs is "too slow". Which puzzles me, 
given that every operation you perform with it takes, like, 0.02 seconds 
or something.

>>> Huh? Javascript is about as OO as Smalltalk is. There's nothing in 
>>> Javascript that is *not* an object.
>>
>> 1. Smalltalk has classes. JavaScript does not.
> 
> Yes, it does.  Not in exactly the same way, mind. What do you think
>    var d = new Date()
> is doing there?

It's running a function that fills out a bunch of fields in the object. 
You can completely *change* those fields afterwards. Or you can fill out 
the fields manually. An object can be given new fields and have its 
methods changed at any time. Sure, it's an object. But it does not have 
a well-defined class.

>> 2. Smalltalk has encapsulation. JavaScript does not.
> 
> Yes, it does.  It's just harder and pointless.

In Smalltalk, object fields are accessible only from inside the object. 
This is not the case in JavaScript, and AFAIK it is not possible to make 
it the case.

>>> Smalltalk doesn't have first-class functions. It has first class 
>>> blocks, which are closures.
>>
>> I won't pretend to comprehend what the difference is, but sure.
> 
> A function is a piece of code. A block is a reference to a piece of code 
> that when evaluated returns a closure.  It's the same difference as 
> between a class and an instance, or a lambda and a closure.

What's a closure?

>> Erm... no, not really.
>>
>>   function foo(x) {...}
>>
>>   var bar = foo;
>>
>>   var x = bar(5);
>>
>> No objects here, 
> 
> Bzzzt. You just don't know javascript very well. What object do you get 
> when  foo or bar references "this"?

I have no idea. (And no obvious way of finding out...)

>> Eiffel has... uh... "agents"? Which are basically GUI callback 
>> functions, but we wouldn't want to have actual functions, would we?
> 
> I don't remember Eiffel well enough to remember that bit.

It wasn't in the original spec. They added it later, when they figured 
out that they didn't want to do the whole Java thing with a bazillion 
interfaces. I don't recall off the top of my head how it works.

> I didn't say nobody is using it. I said you don't have to support it.

Ah, right. Well, don't tell that to the Industrial Haskell Group. ;-)

(Does anyone "have to" support Java? Or C for that matter? I can imagine 
that some C compiler vendor might have paying customers to support, but 
that doesn't stop the designers of C changing the spec...)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: Why is Haskell interesting?
Date: 27 Feb 2010 12:55:19
Message: <4b895c87$1@news.povray.org>
Orchid XP v8 wrote:
>   if x == hex "0xABC" then...

That doesn't count.

>   if x == $(hex "0xABC") then ...

That's getting closer.

> Alternatively you can use the new "quasi-quoting" feature:
>   if x == [$hex| 0xABC |] then ...

That's fairly close, yes.

> However, there is no way in Haskell to make it so that some arbitrary 
> new string can be used as a literal, anywhere in the program. 

LISP handles it by (IIRC) passing each token to the "read macros" and seeing 
if any of them modify it, so it has to be distinguishable somehow. FORTH 
handles it by literally letting you read the input stream, as well as calling 
a specific function when an unparsable word is encountered. So in FORTH, a 
literal works mostly like your quasi-quoting scheme, except there are no magic 
characters at the front or end to say "hey, this is quoted." You'd just write
    blah blah hex 0xABC blah blah
and the "hex" function would run at compile time and read the input stream to 
determine what comes next. So the function that creates string literals is the 
double-quote character. Integers are parsed by having nobody know wtf they 
are, so you invoke the thing that says "what's this token" and the integer 
parser comes back and says "Hey, I know how to compile that!"

Even Erlang has a mechanism to pass the parse tree through a number of 
routines, each of which takes a parse tree and returns a new parse tree. It 
isn't quite as flexible as LISP or FORTH, but it lets you add parse-time 
features pretty easily; it works most like your splice.

> You have 
> to tell the compiler what function to use to parse this stuff, one way 
> or another.

Yeah, but you shouldn't be putting it inline in the stream. You should be 
able to say "anything with < on the front and > at the back should parse as 
an XML tag."

-- 
Darren New, San Diego CA, USA (PST)
   The question in today's corporate environment is not
   so much "what color is your parachute?" as it is
   "what color is your nose?"



From: Darren New
Subject: Re: Why is Haskell interesting?
Date: 27 Feb 2010 13:41:15
Message: <4b89674b$1@news.povray.org>
Orchid XP v8 wrote:
> Can you come up with names?

Delphi. Objective-C. Ada. D. Simula. C#. Visual Basic.

Hell, even Fortran has been OO for the last 10 years.

> As you can imagine, I don't really follow this stuff very closely, but 
> as far as I'm aware, there aren't very many "mainstream" languages out 
> there.

So you having heard of it is what makes it "mainstream"? ;-)

>> As long as you don't have inheritance, it *is* pretty trivial.
> 
> The syntax seemed overly complex and intimidating to me.

http://en.wikipedia.org/wiki/Eiffel_%28programming_language%29#Genericity

Seems to be as straightforward as I remember. Pretty much the simplest form 
of Generics out there.

>>>> Integers are finite?
>>> They are if they're fixed-precision. ;-)
>>
>> Oh. I thought Haskell had bigints. Nevermind.
> 
> It does. Currently powered by the GMP. (Apparently they're trying to 
> change that due to licensing issues or something...)
> 
> My point is that even bigints are, technically, finite if executed on a 
> physical machine. But, like I said, I'm sure that's not what you meant. ;-)

But they're not bounded in the language. They aren't fixed precision. 
They're just not infinite precision. There's no number you can give me that 
says "they work up to this value, but not any higher."

You of all folks here should understand the difference between bounded, 
unbounded, and infinite, and the distinction between the language and its 
invocation on any one machine.

> Right... so if leaves contain a datum and branches don't... Oh, I 
> suppose you just use a null-pointer instead or something?

Or a flag. Or a leaf that inherits from a branch. Or a union-like structure. 
Or, for an N-ary tree, a list of children that happens to be empty. (After 
all, a binary tree is just an N-ary tree with restrictions on what it can hold.)

> Well anyway, I've never seen anyone implement it that way, but I guess 
> you could.

Yeah, but you wouldn't want to in Haskell.

> I wrote a program to interpret lambda calculus expressions. You know how 
> many kinds of expression there are? 3. Exactly 3. By the formal 
> definition of "lambda expression". There is always 3, and there will 
> never be more than 3.

Sure. But that's why lambda expressions were invented in the first place. 
That's like looking at a Turing machine and saying "See? No need for 
object-oriented features."

http://blogs.msdn.com/ericlippert/archive/2010/02/04/how-many-passes.aspx

Lots of passes, with about half of them being needed only for new features 
in the parse tree (compared to V1 of C#, for example). In a real compiler, 
it's nowhere near as clearcut. :-)

For example,

"""
Then we run a pass that transforms expression trees into the sequence of 
factory method calls necessary to create the expression trees at runtime.

Then we run a pass that rewrites all nullable arithmetic into code that 
tests for HasValue, and so on.
"""

Neither of those passes makes sense before you've added the feature to the 
parse tree data as well.

> I've also heard people whine that Darcs is "too slow". Which puzzles me, 
> given that every operation you perform with it takes, like, 0.02 seconds 
> or something.

You're not building real systems.

> It's running a function that fills out a bunch of fields in the object. 
> You can completely *change* those fields afterwards. Or you can fill out 
> the fields manually. An object can be given new fields and have its 
> methods changed at any time. Sure, it's an object. But it does not have 
> a well-defined class.

And you can do all that in Smalltalk too. But OK, Javascript has less of a 
"class" concept than Smalltalk does. I'd say neither is as "classy" as a 
language without code modification like Java or C++.

> In Smalltalk, object fields are accessible only from inside the object. 

Not really true. You can use reflection-type stuff to get to them, such as 
in the debugger.

> This is not the case in JavaScript, and AFAIK it is not possible to make 
> it the case.

Yes, it is. You use closures instead. It's more painful, so nobody bothers.

http://www.devx.com/getHelpOn/10MinuteSolution/16467/0/page/5

Ugly, but if you really need it, you can do it. Basically, you create a new 
object within the current object and only let the current object hold a 
reference to it.

> What's a closure?

It's the value that a lambda expression returns. A "new" statement returns 
an instance, a lambda expression returns a closure.

>> Bzzzt. You just don't know javascript very well. What object do you 
>> get when  foo or bar references "this"?
> 
> I have no idea. (And no obvious way of finding out...)

Hint: It's called "window".

function foo() {...}

is the same as

window["foo"] = function(){...}

> It wasn't in the original spec. They added it later, when they figured 
> out that they didn't want to do the whole Java thing with a bazillion 
> interfaces. 

Oh yeah. They made a big deal of it at the time. I remember.

>> I didn't say nobody is using it. I said you don't have to support it.
> Ah, right. Well, don't tell that to the Industrial Haskell Group. ;-)

OK. But the language designers don't count that as "success" and don't mind 
breaking it?  I guess if you're using Haskell you just keep the version 
you're using around as long as you need it.

> (Does anyone "have to" support Java? 

Sure.

-- 
Darren New, San Diego CA, USA (PST)
   The question in today's corporate environment is not
   so much "what color is your parachute?" as it is
   "what color is your nose?"




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.