Orchid XP v8 wrote:
> Warp wrote:
>> It's just that with lazy evaluation even if you *do* execute all
>> possible
>> code paths that doesn't guarantee that their result is actually
>> calculated.
>> Thus code coverage might not have the same degree of guarantee as with
>> other languages.
>
> Depends on what you do with the results.
Heh, and I'm in such a hurry to reply that I completely miss one of the
most important points.
No matter how buggy your Haskell code is, it *cannot* segfault. It
cannot access uninitialised or dangling references. It cannot corrupt
global variables. It cannot be thread-unsafe. It cannot cause other,
unrelated parts of your program to malfunction. You do not need to
test for these bugs because they cannot exist in the first place.
(Unfortunately most or all of those guarantees vanish as soon as you
start performing I/O. This perhaps explains why Haskell programs usually
contain the tiniest amount of explicit I/O possible...)
So even if some code path wasn't adequately tested, the worst thing your
program can do is
A) Give the caller the wrong answer.
B) Throw an exception.
C) Lock up and loop forever.
D) Eat all of your RAM.
Exceptions can be caught and logged if desired. It's possible to
annotate them to get some idea where they're coming from. You can use
the GHCi debugger to make it automatically drop you into debug mode when
an exception is thrown. If your program segfaults, there's not much you
can do about it. It just says "segmentation fault" and stops, and that's it.
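For example, here's a minimal sketch of catching an exception thrown
from pure code. (The name safeHead is just something I made up for this
sketch.)

import Control.Exception (SomeException, try, evaluate)

-- Force the (pure) result far enough to trigger any exception, and catch it.
safeHead :: [Int] -> IO (Either SomeException Int)
safeHead xs = try (evaluate (head xs))

main :: IO ()
main = do
  result <- safeHead []          -- head [] throws an exception
  case result of
    Left  e -> putStrLn ("caught: " ++ show e)
    Right x -> print x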
The other problems are the same as you'd have in any other language.
(And they're roughly as difficult to fix as anywhere else.) Admittedly
the last two problems are somewhat more common in Haskell than in other
languages. (E.g., it's quite easy to write a function that *should* work
for an infinite input, but doesn't due to a subtle oversight on your part.)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Post a reply to this message
Orchid XP v8 <voi### [at] dev null> wrote:
> No matter how buggy your Haskell code is, it *cannot* segfault. It
> cannot access uninitialised or dangling references. It cannot corrupt
> global variables. It cannot be thread-unsafe. It cannot cause other,
> unrelated parts of your program to malfunction. You do not need to
> test for these bugs because they cannot exist in the first place.
You mean there aren't mutable arrays in Haskell?
--
- Warp
Post a reply to this message
Warp wrote:
> Orchid XP v8 <voi### [at] dev null> wrote:
>> No matter how buggy your Haskell code is, it *cannot* segfault. It
>> cannot access uninitialised or dangling references. It cannot corrupt
>> global variables. It cannot be thread-unsafe. It cannot cause other,
>> unrelated parts of your program to malfunction. You do not need to
>> test for these bugs because they cannot exist in the first place.
>
> You mean there aren't mutable arrays in Haskell?
You're fascinated by mutable arrays, aren't you? :-)
OK, let me rephrase:
No matter how buggy your Haskell code is, there are a whole bunch of
Extremely Bad Things that cannot happen --- UNLESS your function uses
mutable state and/or performs I/O operations. (Or calls external C
functions. Obviously.)
Happy now? :-P
Haskell does indeed have mutable arrays. And (thread-safe) mutable
variables of several kinds. There's even a somewhat clunky mutable
hashtable. (Well, let's face it, who would want an *immutable* hashtable??)
You will note that what Haskell *does not* have is mutable global
variables. So even if you have mutable state, it cannot be a global
variable. You must manually pass it to any function that wants it. That
limits the damage somewhat.
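To illustrate, here's a minimal sketch of mutable state done the Haskell
way. (bump is a made-up name; the point is that the reference must be
handed to it explicitly.)

import Data.IORef

-- bump can only touch the counter because it was explicitly given one.
bump :: IORef Int -> IO ()
bump counter = modifyIORef' counter (+1)

main :: IO ()
main = do
  counter <- newIORef 0           -- created locally; there is no global to corrupt
  bump counter
  bump counter
  readIORef counter >>= print     -- prints 2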
(Of course C can have global mutable state, and Haskell can talk to C.
But that's not really a Haskell property...)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Post a reply to this message
Orchid XP v8 <voi### [at] dev null> wrote:
> > You mean there aren't mutable arrays in Haskell?
> You're fascinated by mutable arrays, aren't you? :-)
I suppose I just can't get over the mentality that arrays are the
fastest possible data containers for random access, that random access
is required by very many algorithms, that many algorithms require
modifying the data in the array, and that an array is one of the most
memory-efficient ways of storing data (in certain circumstances there
are even more space-efficient ways, but those are rather exceptional
cases).
Many programming languages seem to detest arrays because they are not
very dynamic (it's very hard to increase the size of an array in an
efficient way, and alternative O(1) random access containers which are
easy to resize have an annoying minimum overhead, which is bad for very
small arrays). Of course their mutability (immutable arrays are not very
useful, after all) also makes certain things difficult in a language which
tries to give guarantees about safety and correctness.
I'm curious: Were mutable arrays a later addition to the Haskell language?
Did it start as a pure functional language, but mutable arrays were added
later because pure functionality is *not* the silver bullet?-)
--
- Warp
Post a reply to this message
Warp <war### [at] tag povray org> wrote:
> I'm curious: Were mutable arrays a later addition to the Haskell language?
> Did it start as a pure functional language, but mutable arrays were added
> later because pure functionality is *not* the silver bullet?-)
Possibly, yes. Like monadic IO, which was added later. You may ask how IO was
done before.
Think of Haskell doing pure functional processing and producing an output list
of results, with an outer program receiving the list and doing the proper,
ugly, impure IO over it, and you'll have a somewhat accurate picture. Of
course, relying on an external program doesn't do the ego any good... :)
Or so I heard from Simon Peyton Jones in his excellent "Tackling the Awkward
Squad" paper:
http://research.microsoft.com/~simonpj/papers/marktoberdorf/
Post a reply to this message
>> You're fascinated by mutable arrays, aren't you? :-)
>
> I suppose I just can't get over the mentality that arrays are the
> fastest possible data containers for random access, that random access
> is required by very many algorithms, that many algorithms require
> modifying the data in the array, and that an array is one of the most
> memory-efficient ways of storing data (in certain circumstances there
> are even more space-efficient ways, but those are rather exceptional
> cases).
All valid points of course. (Indeed, in languages like Pascal, you'd be
forgiven for thinking that arrays are the *only* container type.)
Oddly enough, I don't often find myself wanting to use arrays. I guess
this is just an artifact of the kinds of programs I tend to write. If I
was doing lots of image processing or DSP or something then I'd almost
certainly want arrays. But many of the programs I write involve parsing
some text input to generate a small parse tree, processing that tree
according to some highly complex algorithm, and writing the result back
out. There isn't much call for an array in there. (Although I commonly
want dictionaries, which you could conceivably implement as hash tables
instead of trees.)
> I'm curious: Were mutable arrays a later addition to the Haskell language?
> Did it start as a pure functional language, but mutable arrays were added
> later because pure functionality is *not* the silver bullet?-)
I wasn't there, but as I understand it, Haskell has had mutable arrays
for a very long time now. (E.g., they are explicitly mentioned in the
current language standard document, circa 1998.)
Monadic I/O was definitely a later addition; Haskell used to have a much
clunkier I/O system based on infinite lazy lists. (A Haskell
"program" takes an infinite list of I/O responses as input, and returns
an infinite list of I/O requests as output. If that sounds confusing -
it is. And it's very easy to get wrong!)
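A toy reconstruction of the idea, for the curious. These are NOT the
real pre-Haskell-1.3 Request/Response types (I've made up simplified
stand-ins), but the shape is right:

-- Made-up stand-ins for the old Request/Response types, just to show the idea.
data Request  = GetLine | PutLine String
data Response = Line String | OK

-- The "program" maps a lazy list of responses to a list of requests.
program :: [Response] -> [Request]
program resps =
  [ PutLine "What's your name?"
  , GetLine
  , PutLine (case resps !! 1 of
               Line name -> "Hello, " ++ name
               _         -> "Hello, stranger")
  ]

-- In the old days the runtime played interpreter; here we just feed in
-- canned responses and print the requests that come out.
main :: IO ()
main = mapM_ showRequest (program [OK, Line "Warp"])
  where
    showRequest GetLine     = putStrLn "request: GetLine"
    showRequest (PutLine s) = putStrLn ("request: PutLine " ++ show s)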
As for pure functional programming... it has many attractive properties.
It is self-evident that a complete program that is *totally* pure is
formally equivalent to a null program, so that isn't very useful.
Different functional languages solve this problem in different ways:
- SQL, while not technically a "functional" language, solves it by being
a special-purpose language whose job is to return results to the
caller, who then actually "does" stuff with them. (Oh, and by having DML
statements.)
- Lisp does it by being impure. (I.e., if you want to mutate something,
you just mutate it. But you're supposed to "avoid" doing that too much.)
- Clean does it using an interesting concept known as "uniqueness
typing". (Basically works like Lisp, except the type system enforces
some constraints.)
- Haskell uses monads instead. (It's a simple way to structure I/O, and
turns out to be useful for many other things too -- see the little
example after this list.)
- I have no idea what Erlang does...
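To make the monad bullet concrete, here's the classic tiny example of
monadic I/O in Haskell; do-notation sequences the effects explicitly:

main :: IO ()
main = do
  putStrLn "What's your name?"
  name <- getLine                  -- an effect, sequenced explicitly
  putStrLn ("Hello, " ++ name)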
Silver bullet? No. Useful? Well, I think so. But then, I'm biased. ;-)
Post a reply to this message
Orchid XP v8 <voi### [at] dev null> wrote:
> One of the nice properties of Haskell is that since a function can have
> no side effects, you "only" need to check that it produces the correct
> output for every valid input.
Wait a minute... Exactly how do you create a random number generator in
Haskell?
A random number generator function is, by definition, a function which
returns different values every time it's called (with the exact same
parameters, e.g. none), and to do that it must remember its state from
the previous call.
Not being able to create a RNG function would be rather restrictive in
many applications...
--
- Warp
Post a reply to this message
Warp wrote:
> Wait a minute... Exactly how do you create a random number generator in
> Haskell?
>
> Not being able to create a RNG function would be rather restrictive in
> many applications...
You write a function that takes a PRNG state and returns a random number
and a new PRNG state.
(That is, if you want *pseudo*-random numbers. If you're after truly
random numbers, you'll obviously need to perform some kind of I/O to get
them, since true randomness comes only from physical sources.)
This is a common idiom in Haskell: write a function that takes an
initial state and returns an answer together with a new, modified state.
In fact, there's even a standard monad for automating this, so you don't
have to pass the state around by hand all the time.
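In fact the standard System.Random library follows exactly this shape.
A minimal sketch:

import System.Random (StdGen, mkStdGen, random)

main :: IO ()
main = do
  let g0       = mkStdGen 42                  -- a fixed seed, so the run is reproducible
      (x, g1)  = random g0 :: (Int, StdGen)   -- each call returns a value...
      (y, _g2) = random g1 :: (Int, StdGen)   -- ...plus a new generator state
  print (x, y)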
Post a reply to this message
Invisible <voi### [at] dev null> wrote:
> You write a function that takes a PRNG state and returns a random number
> and a new PRNG state.
Where do you store this state, given that you can't assign it to anything?
One common idiom in imperative languages to write an RNG function is,
for example in C++:

namespace
{
    unsigned seed = 0;

    unsigned getRandom(unsigned maxValue)
    {
        seed = <calculate a new seed using seed itself>;
        return seed % maxValue;
    }
}
Of course you could do it like this:

unsigned getRandom(unsigned seed)
{
    return <calculate a new seed using seed itself>;
}
But then the calling code would have to keep that seed somewhere. This
can be a real burden if the same RNG stream needs to be used in
different parts of the code (i.e. basically all the different parts
would have to share the same seed).
The idiom used in the POV-Ray SDL is that there's a seed identifier
which internally contains the seed value for that stream. However, the
rand() function *modifies* the contents of the seed identifier in order
to store the newest seed.
I don't understand how it could be done if modifying the identifier
was prohibited.
--
- Warp
Post a reply to this message
>> You write a function that takes a PRNG state and returns a random number
>> and a new PRNG state.
>
> Where do you store this state, given that you can't assign it to anything?
It's not so hard.
get_random :: Seed -> (Int, Seed)

foo :: Seed -> (Int, Seed)
foo s0 =
  let
    (x1,s1) = get_random s0
    (x2,s2) = get_random s1
    (x3,s3) = get_random s2
  in ((x1 + x2 + x3) `div` 3, s3)

bar :: Seed -> ([Int], Seed)
bar s0 =
  let
    (y1,s1) = foo s0
    (y2,s2) = foo s1
    (y3,s3) = foo s2
  in ([y1, y2, y3], s3)
Here "foo" grabs three random numbers and returns their arithmetic mean.
(And also returns the final PRNG seed, in case anybody else wants it.)
Basically you create a new variable for each version of the seed value.
(Similarly, "bar" calls "foo" three times to generate three averaged
(and hence roughly bell-shaped rather than uniformly distributed) random
numbers and returns them in a list, along with the new seed.)
> But then the calling code would have to keep that seed somewhere. This
> can be a real burden if the same RNG stream would need to be used in
> different parts of the code.
Yes. This does quickly become tedious if you need random numbers all
over the place. In principle each function that uses it must take an
initial seed value, and return a new seed value as part of its result.
And the callers must remember to use the correct version of the state
each time.
(E.g., imagine if I make a typo and foo actually returns s0 instead of
s3. Now the "random" number generator appears to always return the same
result!)
But basically, it works.
You can improve it by using a state monad. This basically means that the
seed is passed from place to place invisibly, behind the scenes (so you
can't accidentally use the wrong seed).
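Here's a minimal sketch of what that looks like, using
Control.Monad.State from the mtl package. (The constants in next_seed
are made up purely for illustration.)

import Control.Monad.State

type Seed = Int

-- A made-up linear congruential step, purely for illustration.
next_seed :: Seed -> Seed
next_seed s = (1103515245 * s + 12345) `mod` 2147483648

get_random_s :: State Seed Int
get_random_s = do
  s <- gets next_seed    -- compute the next seed from the current one
  put s                  -- store it; no s0/s1/s2 threading to get wrong
  return s

foo_s :: State Seed Int
foo_s = do
  x1 <- get_random_s
  x2 <- get_random_s
  x3 <- get_random_s
  return ((x1 + x2 + x3) `div` 3)

main :: IO ()
main = print (evalState foo_s 42)   -- run the computation with initial seed 42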
Alternatively, you can go down the mutable state road. On the face of
it, this would still require you to pass around a reference to the
mutable state, because Haskell doesn't have mutable global variables. So
some hacking is required there.
Post a reply to this message