If your answer is "it's not", you can probably stop reading now.
I've been thinking about the things that make Haskell interesting. Of
course, the main one is that it's a 100% functional language (as opposed
to an OOP/FP hybrid like Objective Caml), so if you want to see what
functional programming is like, Haskell is the place to go. But on
reflection, Haskell has several interesting and/or useful attributes
which actually have little or nothing to do with functional programming.
Haskell has automatic type inference, for example. You don't *need* a
functional language for that; you could totally do it in a normal
language. It's just that, historically, nobody has. (I guess the sort of
people who think about computer programs mathematically also tend to be
the sort of people who realise that things like type signatures can be
derived algorithmically.)
Everybody thinks that a programming language has to either have explicit
types (Pascal, C, C++, Java, Eiffel...) or no static typing at all
(Smalltalk, Tcl, JavaScript, every scripting language known to
mankind...) Haskell demonstrates that there is a Third Way: static
typing, but with the types implicit rather than explicit.
Obviously, there are times when you NEED to be explicit - because
the correct answer is ambiguous, because you want improved performance,
whatever. There are also times when it's just "good style" to be
explicit; top-level functions should usually have type signatures. But
you don't have to write a signature for every single damned expression
and variable you use in the entire program; the compiler will figure it out.
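For example (just a sketch, with names of my own invention):

addPair (x, y) = x + y            -- inferred: Num a => (a, a) -> a
greeting = "Hello, " ++ "world"   -- inferred: [Char]

-- A top-level function gets a signature as good style, not necessity:
twice :: (a -> a) -> a -> a
twice f x = f (f x)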
Another thing Haskell has is parametric polymorphism. If you write some
code that moves data around, it shouldn't *matter* what the hell the
type of the data is. I mean, if you don't "do" anything with it other
than move it, what does it matter? So, for example, it should be
possible to write a function that joins two containers together,
regardless of what kind of element data they contain. (Obviously the
type of the containers themselves matters, but not the type of whatever
they contain.)
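For example, here is roughly how the standard (++) function joins two
lists - I've given it my own name here - and notice that the element
type "a" is never constrained:

join :: [a] -> [a] -> [a]
join []     ys = ys
join (x:xs) ys = x : join xs ys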
This pre-supposes that you have type-safe containers in the first place.
In most OO languages, you just create a container of "Object", which can
then contain anything. If you particularly want a container to contain,
say, only Customer objects, you usually have to implement this yourself,
by hand.
More recently, a couple of OO languages have introduced this feature
they call "generics". Eiffel is the first language I'm aware of that did
it, although I gather Java has it too now. What it allows you to do is
write a class parameterised by another class; the typical case being a
container class parameterised by the class of element it contains. (In
Eiffel, integers and floats are classes, unlike the Java brokenness.)
The difference is that Eiffel makes this seem like some highly complex
uber-feature that only a few people will ever need to use, in special
circumstances. Haskell makes it trivial. Defining a parametric type
requires barely a few keystrokes more than defining the same thing with
no parameters.
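To illustrate (a toy example of mine):

-- A stack that can only ever hold Ints...
data IntStack = IntStack [Int]

-- ...and the same thing parameterised by element type. That is the
-- entire cost of going generic:
data Stack a = Stack [a]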
C++ has templates, and people often compare this to generics. I'm not
sure why. As far as I can tell, templates are not even remotely similar to
generics.
- Generics are an extension to the type system which allow the type
checker to handle more sophisticated type signatures.
- Templates are, as best as I can tell, preprocessor macros on steroids.
They automate the process of creating a new, separate, class for each
combination of parameter values. It's like copy & pasting each time you
want a new container class, except the template preprocessor automates
it for you. This is clearly very much more maintainable, but not the
same as extending the type checker.
It seems that templates can also be used to do things which are
impossible with generics (e.g., making the implementation *different*
for certain special cases). So I'm not sure why people compare them at
all, because they're so totally different.
Another thing Haskell has is algebraic data types. Other languages could
have this. It probably doesn't play well with the OO method though. In
fact, let me expand on that.
The OOP notion of classes, inheritance, substitution and so forth
basically assumes a model where you can always extend the current class
hierarchy. For example, you initially decide what (conceptual) properties
every widget must have and write that in an abstract class. Thereafter,
you can always add new kinds of widget. (Or even specific kinds of
widget, of course.) It's a system that works fairly well.
Haskell with its algebraic data types assumes that the data you're
trying to model has a fixed, finite set of possibilities, which you can
describe once and for all. If that sounds stupid and limiting, consider
a binary tree. Every node in a binary tree is either a leaf node or a
branch node. You will never, ever, need to add a new kind of node. (If
you did, it wouldn't be a *binary tree*. It would be something else.)
So the data structure of a binary tree can be described once and for
all. However, the operations you can perform on it are extensible; you
can always add new operations.
Now consider the OOP case. You write an abstract "tree" class with
concrete "leaf" and "branch" subclasses. Each time you want to do
something new with the tree, you write an abstract method in the tree
class and add half of the implementation to each of the two concrete
subclasses.
In Haskell, you write one line of code which says "a tree is either a
leaf containing an item, or a branch containing two subtrees". You then
write as many functions as you like, which handle the leaf and branch
cases together, in one place (not split between three places).
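Like so (a minimal sketch):

-- "A tree is either a leaf containing an item, or a branch
-- containing two subtrees."
data Tree x = Leaf x | Branch (Tree x) (Tree x)

-- Each new operation handles both cases together, in one place:
size :: Tree x -> Int
size (Leaf _)     = 1
size (Branch l r) = size l + size r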
For a binary tree, or an abstract syntax tree, or any number of other
simple, rigidly-defined data structures you could envisage, this
approach is actually much better. If you're writing a compiler or
interpreter, you don't *need* the abstract syntax tree to be
extensible. But you probably *do* need to be able to add new program
passes and code manipulation functions all the time. (And letting the
data structure be parameterised is probably Very Freaking Useful. Part
of the compiler might use strings for names, while another part might
use some more sophisticated structure representing fully-qualified
names, for example.)
Now let's flip this and look at widgets again. Using the ADT approach,
you'd need a giant "widget" type capable of representing any possible
widget. And functions that all know how to handle any possible widget.
And the code for handling a specific widget type is scattered between
all the widget-handling functions.
You're never ever going to need to add new functions for handling "any
possible widget" (although you might want one for handling a *specific*
kind of widget). But you very likely *will* want to add new types of
widget. But, uh, you can't.
In summary:
- If you have a limited number of structures which you want to process
in an unlimited number of ways, ADTs work best.
- If you have an unlimited number of structures which you want to
process in a limited number of ways, a class hierarchy works best.
Haskell, of course, has to go one better: Haskell has what it calls
"classes", but really they're "interfaces". And you can add them after
the fact. Which means you can grab a bunch of unrelated data types
(possibly that you didn't even write) and create a single interface for
processing them all.
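Something like this, say (a made-up interface):

class Describable a where
  describe :: a -> String

-- Unrelated types - possibly ones we didn't write - all acquire the
-- same interface, after the fact:
instance Describable Bool where
  describe b = "the boolean " ++ show b

instance Describable Int where
  describe n = "the integer " ++ show n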
The one small glitch is that Haskell demands that all types are known at
compile-time. Which means that doing OO-style stuff where types are
changed dynamically at run-time is a little tricky. Haskell 98 (and the
newly-released Haskell 2010) can't do it, but there are
widely-implemented extensions that can. It gets kinda hairy though.
Hand in hand with ADTs come case expressions. Now, other languages
have case statements, but they tend to be quite feeble. In most
languages they only work on integers; I *think* Pascal lets you use 'em
on enumerations [but I'm not really sure]. And in C-like languages,
control can fall through from one case into the next. While this is no doubt
occasionally useful, most of the time it's just a source of
infuriatingly hard to find bugs (i.e., you left off the "break"
statement, so a whole bunch of code gets executed when you didn't want
it to).
In Haskell, case expressions work on *everything*. (Everything whose
internal structural details are in scope, anyway. These internal details
can be hidden from client code.) What is more, they use "pattern
matching". This powerful feature allows you to do things like determine
whether a tree has a particular shape and pluck several deeply-buried
substructures out of it, all in a single transparent one-liner. For example,
case list of
  [[x], [y,_], [z,_,_]] -> foo x y z
This checks whether "list" contains a list with exactly 3 elements, each
of which is a sublist of length 1, 2, 3 (respectively) - and, if all
this is the case, it fishes out the first element of each sublist and
feeds all the data to the "foo" function. All in one easily-readable line
of code. The alternative in Java would be something like
if (list.size() == 3)
{
    if (list.get(0).size() == 1)
    {
        int x = list.get(0).get(0);
        if (list.get(1).size() == 2)
        {
            int y = list.get(1).get(0);
            if (list.get(2).size() == 3)
            {
                int z = list.get(2).get(0);
                foo(x, y, z);
            }
        }
    }
}
which is obviously a hell of a lot longer. What is more, if your Haskell
case expression has multiple cases in it (which it usually does),
because of the declarative form of the patterns, the compiler can
rearrange the code so that each test need only be performed once, and
the compiled code can proceed with the minimum number of branch
instructions. Such things are far, far harder to do with the
deeply-nested forest of if-statements above. (Not to mention that the
expressions in the if-statements might actually alter the program's
state, so changing their order may alter the program!)
Again, this probably doesn't gel too well with OOP, which states that
each data structure ("object") should be responsible for its own
internal substructure. In other words, you shouldn't *need* to fish out
deeply-buried data; the intervening objects in the chain should be
doing this for you. According to "the OO way", the only thing you should
ever do to objects is call methods on them, not inspect them for type
information or substructure.
But something like, say, JavaScript (which isn't really OO in the first
place) could certainly make use of ADTs and/or pattern matching, I think.
Another big deal about Haskell is how functions are first-class. The
fact that functions are pure makes this a really big win, but even
without pure functions, it's still useful. (Smalltalk implements it, for
example.) It doesn't immediately sound very interesting, but what it
enables you to do is write a loop once, in a library somewhere, and pass
in the "loop body" as a function.
A trivial example is a "fold". A fold takes some container (e.g., a
list) and repeatedly applies a binary function to it. The canonical
example is
sum = foldr (+) 0
This says that to calculate the sum of a list, you fold it using the "+"
function, and with a starting value of 0. (This also means that the sum
of a completely empty list is 0.) It is obvious that
product = foldr (*) 1
but there are plenty of other uses too. For example, the (++) function
joins two lists together, and
concat = foldr (++) []
turns a list-of-lists into a single giant list. Other examples include
minimum = foldr1 min
maximum = foldr1 max
which find the minimum or maximum of a list using the binary "min" and
"max" functions. (The "fold1" function demands that the list be
non-empty, and doesn't require a start value.) I could go on all day,
but it turns out almost all uniform list operations can be implemented
as a fold of some kind.
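Even "map" itself is a fold, if you squint. (Sketched under my own name
to avoid clashing with the Prelude:)

myMap :: (a -> b) -> [a] -> [b]
myMap f = foldr (\x rest -> f x : rest) []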
Other languages have done this. Smalltalk is the obvious example. It has
#do: and #collect: which are approximately equivalent to Haskell's
"map", #select: and #reject: which are Haskell's "filter", and many
more. Of course, being an object-oriented language, Smalltalk makes all
these methods work nicely for all container types, not just lists.
(Haskell hasn't copied this trick yet. At least, not in the *standard*
libraries. It's not difficult to change container, but it's tricky to
write code that works generically for any container.)
The other thing about first-class functions is that in both Haskell and
Smalltalk, it is utterly trivial to spawn a new thread:

forkIO foo    (Haskell)
[foo] fork    (Smalltalk)
What Smalltalk doesn't do, and Haskell does, is make combining small
functions into larger functions trivial as well. Haskell has anonymous
"lambda functions", it has the function composition operator, it has
curried functions, it has the dollar operator, and so on and so forth.
All of this *could* be put into Smalltalk, or any other OO language for
that matter. (Whether it would be efficient is a whole other matter...)
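To give a taste of what I mean (trivial made-up definitions):

addOne = (+ 1)              -- a curried section
double = (* 2)              -- another one
both   = double . addOne    -- function composition
result = map both [1, 2, 3]           -- [4, 6, 8]
main   = print $ filter (> 4) result  -- the dollar operator; prints [6,8]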
The final thing that's interesting about Haskell is the rate at which it
changes. Haskell (or rather, GHC) implemented Software Transactional
Memory 5 years ago. I'm not aware of any mainstream languages that have
this yet. GHC added Data Parallel Haskell 2 years ago, and the busy PhD
researchers are still hard at work getting it into a usable state.
(There's even an experimental GPU backend already.) Around the same
time, Type Families were added. (How long did it take Java to do
Generics?) The pace of change is staggering. Lots of new and very cool
stuff happens in Haskell first. (No doubt it will eventually all be
stolen by lesser languages so that people still don't use Haskell itself.)
Invisible <voi### [at] devnull> wrote:
> Of course, the main one is that it's a 100% functional language
I thought that a 100% functional language has no mutable data structures
nor functions with side-effects.
> C++ has templates, and people often compare this to generics. I'm not
> sure why. As far as I can tell, templates are not even remotely similar to
> generics.
Yeah, templates are a lot more powerful and versatile.
> - Templates are, as best as I can tell, preprocessor macros on steroids.
> They automate the process of creating a new, separate, class for each
> combination of parameter values. It's like copy & pasting each time you
> want a new container class, except the template preprocessor automates
> it for you. This is clearly very much more maintainable, but not the
> same as extending the type checker.
Templates are not used only to generate classes, but also functions and
basic types. Template parameters can also be of almost any type (classes,
basic types, other templates, etc.) Also, templates can be recursive, which
is rather huge.
In short, templates allow compile-time polymorphism (as opposed to runtime
polymorphism in most OO and functional languages).
--
- Warp
>> Of course, the main one is that it's a 100% functional language
>
> I thought that a 100% functional language has no mutable data structures
> nor functions with side-effects.
Technically, Haskell is such a language. Technically, a Haskell program
itself doesn't produce any side-effects, it just constructs a data
structure and passes it to the Haskell Runtime System. The Haskell
Runtime System is what performs these side-effects.
That may sound like splitting hairs, since any compiled, runnable
Haskell program consists of your source code AND the Runtime System,
running as a single OS entity. In practical terms, you can "in effect"
mutate things in Haskell, and cause other side-effects. (This is of
course quite deliberate.)
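You can actually see this in the language: IO actions are ordinary
first-class values, and nothing "happens" until the RTS runs the one
called main. A small sketch:

-- A list of actions. Constructing this list performs nothing at all.
actions :: [IO ()]
actions = [putStrLn "one", putStrLn "two"]

-- The RTS executes whatever action "main" evaluates to.
main :: IO ()
main = sequence_ (reverse actions)   -- prints "two" then "one"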
Most "functional" languages allow you to perform side-effects anywhere,
any time you like. It is up to you to not screw this up. Haskell is
genuinely different. All side-effects are tightly constrained and
controlled, and the system provides strong guarantees about them. This
is why I feel it is justified to describe Haskell as "purely functional".
(That and the fact that it clearly has no OO features, no relational
features, no logical features, etc. It's definitely not mixed with any
other paradigm.)
>> C++ has templates, and people often compare this to generics. I'm not
>> sure why. As far as I can tell, templates are not even remotely similar to
>> generics.
>
> Yeah, templates are a lot more powerful and versatile.
This was my conclusion also.
> Templates are not used only to generate classes, but also functions and
> basic types.
Yes, I was simplifying. (The paragraph was already bigger than I would
have liked, given the simplicity of the basic point I was attempting to
express.)
> In short, templates allow compile-time polymorphism (as opposed to runtime
> polymorphism in most OO and functional languages).
Seems like a nice way to sum it up.
I also forgot to mention Template Haskell. This GHC extension allows you
to run arbitrary Haskell code inside the compiler. It can [in principle]
do anything normal Haskell code does - read files, open network sockets,
etc. But it can also do things to the compiler - most obviously, it can
generate source code and feed it into the compiler.
You could use this to (for example) read a text file containing a BNF
grammar and generate the source code for a parser for it. But then, you
could do that without TH; just save the source code as a text file and
then compile it as a separate step. The interesting part is that TH can
query the compiler. For example, it can read the description of a
datatype and write some boilerplate code for it. (But hey, if you need
to write boilerplate, doesn't that just mean that your language sucks?)
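To give a flavour, here is a minimal sketch (the splice has to be used
from a different module):

{-# LANGUAGE TemplateHaskell #-}
module MakeHello where

import Language.Haskell.TH

-- Generates the declaration:  hello = "Hello, world!"
mkHello :: Q [Dec]
mkHello = do
  body <- [| "Hello, world!" |]
  return [ValD (VarP (mkName "hello")) (NormalB body) []]

Another module can then write $(mkHello) at the top level and use
"hello" as if it had been typed in by hand.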
C has cpp, which (as far as I know) is very primitive and not
Turing-complete. C++ has templates, which are quite sophisticated.
Haskell has TH, which is obviously Turing-complete. I have no idea how
Lisp macros work, so I can't compare. (I believe Lisp macros can
generate code at runtime, however, which TH definitely cannot do.)
Invisible wrote:
> Haskell has automatic type inference, for example.
Lots of languages have this a little. In C# you can say
var x = new Vector2(0,3);
and x gets the type Vector2. Automatic casting in languages is a similar
thing, methinks.
Certainly not to the large scale that Haskell does it.
> This pre-supposes that you have type-safe containers in the first place.
> In most OO languages, you just create a container of "Object", which can
> then contain anything.
This hasn't been true for a decade or more. :-) Just because Java sucks,
don't think every statically typed language sucks the same way.
> More recently, a couple of OO languages have introduced this feature
> they call "generics". Eiffel is the first language I'm aware of that did
> it, although I gather Java has it too now.
Java kinda sorta has it. It's *really* a container full of Objects, with
syntactic sugar to wrap and unwrap them. (Similarly, "inner classes" are
just regular classes with munged up names.)
> The difference is that Eiffel makes this seem like some highly complex
> uber-feature that only a few people will ever need to use, in special
> circumstances.
No, Eiffel's complexity is due to inheritance. If you have an array of
Animals, can you put a Dog in it? If you have an array of Dogs, can you pass
it to a routine that expects an array of Animals? Can you return an array of
Dogs from a routine declared to return an array of Animals or vice versa? If
X multiply inherits from Y and Z, can you assign an array of X to an array of Y?
It's not the complexity of the feature as much as its interactions with
other features of the type system.
> It seems that templates can also be used to do things which are
> impossible with generics (e.g., making the implementation *different*
> for certain special cases). So I'm not sure why people compare them at
> all, because they're so totally different.
They get compared because it's as close as C++ has to having real generics,
and C++ aficionados don't want to admit they don't have generics. ;-)
> Another thing Haskell has is algebraic data types. Other languages could
> have this. It probably doesn't play well with the OO method though. In
> fact, let me expand on that.
It's interesting that Eiffel is explicitly an attempt to make algebraic data
systems on top of OOP, in a sense. Hence the invariants, preconditions, etc.
> Haskell with its algebraic data types assumes that the data you're
> trying to model has a fixed, finite set of possibilities, which you can
> describe once and for all.
Integers are finite?
> Now consider the OOP case. You write an abstract "tree" class with
> concrete "leaf" and "branch" subclasses. Each time you want to do
> something new with the tree, you write an abstract method in the tree
> class and add half of the implementation to each of the two concrete
> subclasses.
I've never seen it done that way. :-)
> For a binary tree, or an abstract syntax tree, or any number of other
> simple, rigidly-defined data structures you could envisage, this
> approach is actually much better.
It depends how complex your tree is vs how complex your operations. If your
data is simple and your operations are complex, you're going to get
different trade-offs than if your data is complex and your operations are
simple. If a tree could have 300 different types of nodes, but you'll only
have 3 operations, which is better? Then invert that.
As you say, it depends what kind of extensibility you want. Add 50 new kinds
of nodes. Add 50 new operations. OOP was envisioned as adding new kinds of
things. New kinds of cars to a traffic simulation, new kinds of bank
accounts to a financial analysis program, etc.
Other than that, I'm not sure I really understood what you're talking about
in terms of "using the ADT approach" bit.
> - If you have a limited number of structures which you want to process
> in an unlimited number of ways, ADTs work best.
>
> - If you have an unlimited number of structures which you want to
> process in a limited number of ways, a class hierarchy works best.
Yes, that. Well, not even a "limited number of ways." The point of OOP is
that you can add new structures *without* changing existing structures *at
all*.
> The one small glitch is that Haskell demands that all types are known at
> compile-time. Which means that doing OO-style stuff where types are
> changed dynamically at run-time is a little tricky. Haskell 98 (and the
> newly-released Haskell 2010) can't do it, but there are
> widely-implemented extensions that can. It gets kinda hairy though.
It also means you have to recompile everything whenever you change a type.
Which kind of sucks when it takes several days to compile the system.
> (i.e., you left off the "break"
> statement, so a whole bunch of code gets executed when you didn't want
> it to).
This is one that C# finally fixed right. Every branch of a switch has to end
with "break" or "goto". You're allowed to "goto" another case label, so you
can emulate "falling through." But even then, you can rearrange cases or
insert another case anywhere in the list without breaking any existing
cases. With a fallthrough, if you add a case without realizing the previous
case falls through, you just broke the previous case.
> In Haskell, case expressions work on *everything*.
Pattern matching is indeed cool in some set of circumstances.
> which is obviously a hell of a lot longer.
Yet, oddly enough, will work on things that *aren't lists*. That's the
point. :-)
> According to "the OO way", the only thing you should
> ever do to objects is call methods on them, not inspect them for type
> information or substructure.
Yep. And that's because you can substitute a new object and still use the
same code. Your pattern match wouldn't work at all if you said "Hmmm, I'd
like to use that same function, only with a set instead."
> But something like, say, JavaScript (which isn't really OO in the first
> place) could certainly make use of ADTs and/or pattern matching, I think.
Huh? Javascript is about as OO as Smalltalk is. There's nothing in
Javascript that is *not* an object.
> Another big deal about Haskell is how functions are first-class. The
> fact that functions are pure makes this a really big win, but even
> without pure functions, it's still useful. (Smalltalk implements it, for
> example.) It doesn't immediately sound very interesting, but what it
> enables you to do is write a loop once, in a library somewhere, and pass
> in the "loop body" as a function.
Lots of languages have this, just so ya know. :-)
> Other languages have done this. Smalltalk is the obvious example.
Smalltalk doesn't have first-class functions. It has first class blocks,
which are closures (or continuations, more precisely). You're not passing a
function to #do:. You're passing a stack frame.
> What Smalltalk doesn't do, and Haskell does, is make combining small
> functions into larger functions trivial as well.
That's because Smalltalk doesn't have functions. It has methods. And blocks.
And continuations.
> All of this *could* be put into Smalltalk, or any other OO language for
> that matter. (Whether it would be efficient is a whole other matter...)
Not... really. The difference is that Haskell functions work on data, while
Smalltalk only has objects. In other words, you can't invoke a function
without knowing what object that "function" is a method of. (This is also
the case for Javascript, btw.)
C# has lambdas and anonymous expressions and stuff like that. It also has
"delegates", which is a list of pointers to object/method pairs (or just
methods, if it's a static/class method that doesn't need an instance).
> I'm not aware of any mainstream languages that have this yet.
Several languages have implemented this as libraries. Generally not for
threading but for database stuff.
> (How long did it take Java to do Generics?)
It wasn't that generics took a long time. It's that the owners of the
language didn't want them, for whatever reason. Then, once Java was popular,
it was impossible to change the JVM classfile format without killing what
little momentum Java still had, so they had to be shoehorned in. See above.
I.e., it took Java a long time to do Generics *because* people cared about
Java. ;-)
I mean, Python has explicitly said "We're going to not change anything in
the language spec for 2 to 3 years until everyone else writing Python
compilers/interpreters catch up."
It's easy to innovate when you have the only compiler.
> The pace of change is staggering. Lots of new and very cool
> stuff happens in Haskell first. (No doubt it will eventually all be
> stolen by lesser languages so that people still don't use Haskell itself.)
That's not hard to do when you have no user base you have to support.
--
Darren New, San Diego CA, USA (PST)
The question in today's corporate environment is not
so much "what color is your parachute?" as it is
"what color is your nose?"
Invisible wrote:
> (I believe Lisp macros can
> generate code at runtime, however, which TH definitely cannot do.)
I don't think macros are still around at runtime. What they *can* do is read
user input at compile time, so you can (for example) make up new kinds of
literals.
Assume Haskell doesn't have the "0xABC" kind of syntax for hex literals.
Could you add that with Haskell, or TH?
--
Darren New, San Diego CA, USA (PST)
The question in today's corporate environment is not
so much "what color is your parachute?" as it is
"what color is your nose?"
Darren New <dne### [at] sanrrcom> wrote:
> Invisible wrote:
> > Haskell has automatic type inference, for example.
> Lots of languages have this a little. In C# you can say
> var x = new Vector2(0,3);
> and x gets the type Vector2. Automatic casting in languages is a similar
> thing, methinks.
The next C++ standard will introduce something similar: the 'auto' type.
The idea is that since the compiler can deduce the type of the variable in
most situations, then why should the programmer have to specify it explicitly?
Thus you will be able to write things like:
for(auto iter = v.begin(); iter != v.end(); ++iter) ...
The compiler can see what the return type of v.begin() is, so there's no
need for the programmer to know it. Hence he can just specify 'auto' as the
variable type.
This will be more than just a convenience shortcut, though. For example,
it allows doing something like this:
template<typename Container>
void foo(const Container& container)
{
    auto element = container[0];
    // ...
}
The type of the element may differ from container to container, and if
you need a variable of that type, previously there was no fool-proof way
of doing that. (The usual way around this is to define a typedef inside the
container named "value_type" which defines the type of the elements, but
the 'auto' keyword will eliminate the need for that.)
--
- Warp
"Invisible" <voi### [at] devnull> wrote in message
news:4b87d781@news.povray.org...
>
> If your answer is "it's not", you can probably stop reading now.
I know it's interesting to *me* because I'd never heard of it before I
started reading this newsgroup. Always like learning something new... :-)
--
Jack
Warp wrote:
> Thus you will be able to write things like:
>
> for(auto iter = v.begin(); iter != v.end(); ++iter) ...
>
> The compiler can see what the return type of v.begin() is, so there's no
> need for the programmer to know it. Hence he can just specify 'auto' as the
> variable type.
That's a nice touch. Presumably this also means that if the type of "v"
changes, you don't have to manually change the type of "iter" to match
any more...?
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 <voi### [at] devnull> wrote:
> Warp wrote:
> > Thus you will be able to write things like:
> >
> > for(auto iter = v.begin(); iter != v.end(); ++iter) ...
> >
> > The compiler can see what the return type of v.begin() is, so there's no
> > need for the programmer to know it. Hence he can just specify 'auto' as the
> > variable type.
> That's a nice touch. Presumably this also means that if the type of "v"
> changes, you don't have to manually change the type of "iter" to match
> any more...?
Right.
--
- Warp
>> Haskell has automatic type inference, for example.
>
> Lots of languages have this a little. In C#
Just out of curiosity, just how new *is* C#?
> This hasn't been true for a decade or more. :-) Just because Java sucks,
> don't think every statically typed language sucks the same way.
...there are statically-typed OO languages which aren't Java? [Or
Eiffel, which nobody ever uses.]
C++ is statically-typed, but even Warp will agree it's not a
specifically OO language; it's more of a pragmatic hybrid of styles.
>> More recently, a couple of OO languages have introduced this feature
>> they call "generics". Eiffel is the first language I'm aware of that
>> did it, although I gather Java has it too now.
>
> Java kinda sorta has it. It's *really* a container full of Objects, with
> syntactic sugar to wrap and unwrap them.
Oh dears.
> (Similarly, "inner classes" are just regular classes with munged up names.)
Yeah, that's what I would have expected.
>> The difference is that Eiffel makes this seem like some highly complex
>> uber-feature that only a few people will ever need to use, in special
>> circumstances.
>
> No, Eiffel's complexity is due to inheritance.
>
> It's not the complexity of the feature as much as its interactions with
> other features of the type system.
All I know is that Eiffel makes it seem like this really exotic feature
that you "shouldn't need to use" under normal circumstances, and if you
find yourself using it, you've probably designed your program wrong.
Even C++ templates make it seem like a pretty big deal.
Haskell, on the other hand, makes it trivial.
> It's interesting that Eiffel is explicitly an attempt to make algebraic
> data systems on top of OOP, in a sense. Hence the invariants,
> preconditions, etc.
Uh... what?
>> Haskell with its algebraic data types assumes that the data you're
>> trying to model has a fixed, finite set of possibilities, which you
>> can describe once and for all.
>
> Integers are finite?
They are if they're fixed-precision. ;-)
In fact, if you want to be really anal, if they're stored on a computer
that exists in the physical world, they're finite, since the amount of
matter [and energy] in the observable universe is finite. But I'm sure
that's not what you meant.
>> Now consider the OOP case. You write an abstract "tree" class with
>> concrete "leaf" and "branch" subclasses. Each time you want to do
>> something new with the tree, you write an abstract method in the tree
>> class and add half of the implementation to each of the two concrete
>> subclasses.
>
> I've never seen it done that way. :-)
Oh really? So how would you do it then?
> It depends how complex your tree is vs how complex your operations.
I could have sworn I just said that. ;-)
> As you say, it depends what kind of extensibility you want. Add 50 new
> kinds of nodes. Add 50 new operations. OOP was envisioned as adding new
> kinds of things. New kinds of cars to a traffic simulation, new kinds of
> bank accounts to a financial analysis program, etc.
Indeed. You do NOT want to design one giant ADT that represents every
possible kind of bank account, and write huge monolithic functions that
understand how to process every possible kind of bank account. It's
non-extensible, and all the code relating to one particular type of
account is scattered across the different account processing functions,
so it's the wrong way to factor the problem.
My point is that there are major classes of problems which *are* the
other way around, and for that ADTs win. For example, the parse tree of
an expression. You are almost never going to add new types of node to
that, but you *are* going to be adding new processing passes all the
time. For this, Haskell's ADT + functional approach factors the problem
"the other way", and that works out better for this kind of problem.
And when it doesn't work out right, you use interfaces and get the
essence of the OO solution.
>> - If you have a limited number of structures which you want to process
>> in an unlimited number of ways, ADTs work best.
>>
>> - If you have an unlimited number of structures which you want to
>> process in a limited number of ways, a class hierarchy works best.
>
> Yes, that.
LOL!
> Well, not even a "limited number of ways." The point of OOP
> is that you can add new structures *without* changing existing
> structures *at all*.
And I was pointing out that for each thing you want to do to a
structure, you have to add new methods up and down the inheritance hierarchy.
Ultimately, if you have a kind of structure that you need to perform a
bazillion operations on, what you'll hopefully do is define a set of
"fundamental" operations as methods of the structure itself, and define
all the complex processing somewhere else, in terms of these more
fundamental operations. (If you take this to the extreme, you obviously
end up with dumb data objects, which usually isn't good OO style.)
>> The one small glitch is that Haskell demands that all types are known
>> at compile-time. Which means that doing OO-style stuff where types are
>> changed dynamically at run-time is a little tricky. Haskell 98 (and
>> the newly-released Haskell 2010) can't do it, but there are
>> widely-implemented extensions that can. It gets kinda hairy though.
>
> It also means you have to recompile everything whenever you change a
> type. Which kind of sucks when it takes several days to compile the system.
Are there systems in existence which actually take that long to compile?
Anyway, Haskell extensions allow you to do runtime dynamic dispatch.
That means you don't have to recompile your code. [Although you will
have to relink to add in the new types you want to handle, etc.] It's
just that it pisses the type checker off, which means you can't do
things you'd normally expect to be able to do.
(In particular, the basic ExistentialQuantification extension allows
typesafe upcasts, but utterly prohibits *downcasts*, which could be a
problem...)
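For the curious, the sort of thing I mean (a toy example):

{-# LANGUAGE ExistentialQuantification #-}

-- Any value with a Show instance can be boxed up...
data Showable = forall a. Show a => MkShowable a

-- ...and the box itself can be shown (the "upcast" direction)...
instance Show Showable where
  show (MkShowable x) = show x

-- ...but nothing can ever recover the original Int or Bool again:
stuff :: [Showable]
stuff = [MkShowable (42 :: Int), MkShowable True, MkShowable "hi"]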
>> (i.e., you left off the "break" statement, so a whole bunch of code
>> gets executed when you didn't want it to).
>
> This is one that C# finally fixed right.
Oh thank God...
> Pattern matching is indeed cool in some set of circumstances.
Aye.
(Jay. Kay. El...)
>> which is obviously a hell of a lot longer.
>
> Yet, oddly enough, will work on things that *aren't lists*. That's the
> point. :-)
Well, you can pattern match on the size of the container (presuming you
have a polymorphic "size" function to get this). It won't be quite as
neat though, obviously.
Secondly, there's actually an experimental extension to Haskell called
"view patterns" which allow you to create sort-of "user defined pattern
matching". This fixes this exact problem. (And a couple of others.)
> Yep. And that's because you can substitute a new object and still use
> the same code. Your pattern match wouldn't work at all if you said
> "Hmmm, I'd like to use that same function, only with a set instead."
Yes, in general pattern matching is used for implementing the "methods"
for operating on one specific type. (Although of course in Haskell they
aren't methods, they're just normal functions.) Your polymorphic stuff
is then implemented with function calls, not pattern matching. But see
my previous comments.
> Huh? Javascript is about as OO as Smalltalk is. There's nothing in
> Javascript that is *not* an object.
1. Smalltalk has classes. JavaScript does not.
2. Smalltalk has encapsulation. JavaScript does not.
>> Another big deal about Haskell is how functions are first-class.
>
> Lots of languages have this, just so ya know. :-)
Yeah. Smalltalk, JavaScript, Tcl after a fashion. C and C++ have
function pointers, and you can sort-of cludge the same kind of
techniques as found in Haskell. (But you wouldn't. It would be
horrifyingly inefficient.)
>> Other languages have done this. Smalltalk is the obvious example.
>
> Smalltalk doesn't have first-class functions. It has first class blocks,
> which are closures.
I won't pretend to comprehend what the difference is, but sure.
>> What Smalltalk doesn't do, and Haskell does, is make combining small
>> functions into larger functions trivial as well.
>
> That's because Smalltalk doesn't have functions. It has methods. And
> blocks. And continuations.
Plausibly.
>> All of this *could* be put into Smalltalk, or any other OO language
>> for that matter. (Whether it would be efficient is a whole other
>> matter...)
>
> Not... really. The difference is that Haskell functions work on data,
> while Smalltalk only has objects. In other words, you can't invoke a
> function without knowing what object that "function" is a method of.
> (This is also the case for Javascript, btw.)
Erm... no, not really.
function foo(x) { return x + 1; }
var bar = foo;
var x = bar(5);
No objects here, no methods, just assigning a function to a variable and
using it later. You *can* make a function an object method, but you do
not *have* to. JavaScript is like "meh, whatever, dude. I'm green."
> C# has lambdas and anonymous expressions and stuff like that. It also
> has "delegates", which is a list of pointers to object/method pairs (or
> just methods, if it's a static/class method that doesn't need an instance).
Eiffel has... uh... "agents"? Which are basically GUI callback
functions, but we wouldn't want to have actual functions, would we?
>> I'm not aware of any mainstream languages that have this yet.
>
> Several languages have implemented this as libraries. Generally not for
> threading but for database stuff.
Yeah. Databases have been transactional for decades, and many languages
can access a database. But STM is all internal to the program, and
implements multi-way blocking and conditional branching and all kinds of
craziness.
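For instance, here is roughly what the classic bank-account example
looks like (the names are mine):

import Control.Concurrent.STM

-- Move money between two balances atomically. "retry" blocks the
-- whole transaction until the source balance changes.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry
    else do
      writeTVar from (balance - amount)
      toBal <- readTVar to
      writeTVar to (toBal + amount)

You run it with "atomically (transfer a b 10)", and no other thread can
ever observe the intermediate state.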
I'd point you to the paper I just wrote about implementing it using
locks... but then you'd have my real name. The long and short of it is,
it boils down to running the transaction *as if* there are no other
transactions, but not actually performing any real writes. Then, when
we've discovered exactly what the transaction wants to do to the central
store, we take all the locks in sorted order. So the essence of the
whole thing is that by simulating the transaction from beginning to end,
we solve the old "I don't know what I need to lock until I've already
locked stuff" problem. (And make all the locking stuff the library
author's problem, not yours.)
>> (How long did it take Java to do Generics?)
>
> It wasn't that generics took a long time. It's that the owners of the
> language didn't want them, for whatever reason.
Heh, nice.
> I mean, Python has explicitly said "We're going to not change anything
> in the language spec for 2 to 3 years until everyone else writing Python
> compilers/interpreters catch up."
>
> It's easy to innovate when you have the only compiler.
Haskell has a motto: "Avoid success at all costs." The meaning of this,
of course, is that if Haskell suddenly became crazy-popular, we wouldn't
be able to arbitrarily change it from time to time. (In reality, this
point has already long ago been reached.)
And just FYI, there *is* more than one Haskell compiler. Actually, there
are several - it's just that only one has a big enough team working on
it that they produce a production-ready product. [There are people who
*get paid money* to write GHC.] All the others are PhD projects, or
hobbyists. At one time there were several viable production compilers,
but most of them have gone dormant due to the dominance of GHC.
>> The pace of change is staggering. Lots of new and very cool stuff
>> happens in Haskell first.
>
> That's not hard to do when you have no user base you have to support.
Heh. Better not tell that to Galois.com, Well-Typed.com, the authors of
the 1,000+ packages in the Hackage DB, or the likes of Facebook, AT&T,
Barclays Analytics or Linspire who all apparently use Haskell internally.
Sure, Haskell is no Java, C# or PHP. But that's not to say that *nobody*
is using it...
See below for some outdated blurb:
http://www.haskell.org/haskellwiki/Haskell_in_industry
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*