From: Invisible
Subject: Quote of the day
Date: 2 Jul 2012 09:43:07
Message: <4ff1a56b@news.povray.org>
I found the following quote on LinkedIn (of all places):

"Which financial firms (buyside or/and sellside) are currently actively 
using functional programming (FP) languages like F#, OCaml, Lisp, Scheme 
or Haskell?"

"I keep hearing buzz around this space. I think most of Investment Banks 
have have at last research initiatives ongoing.

OCaml (and F#) look to be much more promising in the realm of writing 
production systems de novo. Haskell isn't deterministic in its return 
time (it's Lazy so it may come back in 5 secs or may take 5 hours - and 
you cant really predict up front which - this is fatal for use in eg 
pricing libraries)"

You heard it here folks - Haskell "isn't deterministic". :-D

While obviously this statement is false (you /can/ predict run-time, 
it's just harder than you might imagine), the assertion that "this is 
fatal" is probably bang-on.



From: nemesis
Subject: Re: Quote of the day
Date: 2 Jul 2012 14:00:01
Message: <web.4ff1e1792e017d32773c9a3e0@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> I found the following quote on LinkedIn (of all places):
>
> "Which financial firms (buyside or/and sellside) are currently actively
> using functional programming (FP) languages like F#, OCaml, Lisp, Scheme
> or Haskell?"
>
> "I keep hearing buzz around this space. I think most of Investment Banks
> have have at last research initiatives ongoing.
>
> OCaml (and F#) look to be much more promising in the realm of writing
> production systems de novo. Haskell isn't deterministic in its return
> time (it's Lazy so it may come back in 5 secs or may take 5 hours - and
> you cant really predict up front which - this is fatal for use in eg
> pricing libraries)"
>
> You heard it here folks - Haskell "isn't deterministic". :-D
>
> While obviously this statement is false (you /can/ predict run-time,
> it's just harder than you might imagine), the assertion that "this is
> fatal" is probably bang-on.

was that our OCaml/F# friend Jon Harrop? ;)



From: Orchid Win7 v1
Subject: Re: Quote of the day
Date: 2 Jul 2012 16:03:46
Message: <4ff1fea2$1@news.povray.org>
On 02/07/2012 06:59 PM, nemesis wrote:

> was that our OCaml/F# friend Jon Harrop? ;)

Well, it's certainly /possible/... I didn't actually check.

Even so, it has a ring of inevitable truth to it...



From: Warp
Subject: Re: Quote of the day
Date: 3 Jul 2012 00:40:39
Message: <4ff277c7@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> OCaml (and F#) look to be much more promising in the realm of writing 
> production systems de novo. Haskell isn't deterministic in its return 
> time (it's Lazy so it may come back in 5 secs or may take 5 hours - and 
> you cant really predict up front which - this is fatal for use in eg 
> pricing libraries)"

> You heard it here folks - Haskell "isn't deterministic". :-D

> While obviously this statement is false (you /can/ predict run-time, 
> it's just harder than you might imagine), the assertion that "this is 
> fatal" is probably bang-on.

  I don't understand at which point Haskell decides to stop laziness and
start actually computing results. It cannot be lazy forever because it
would just run out of memory sooner or later (probably even with algorithms
that would normally require O(1) memory in an imperative language).

-- 
                                                          - Warp



From: Invisible
Subject: Re: Quote of the day
Date: 3 Jul 2012 06:49:21
Message: <4ff2ce31@news.povray.org>
On 03/07/2012 05:40 AM, Warp wrote:

>    I don't understand at which point Haskell decides to stop laziness and
> start actually computing results. It cannot be lazy forever.

This is an important question, and one which I haven't seen really 
explained properly anywhere. The only time I've seen it explained is 
when people try to write complicated mathematical equations which 
rigorously expound the rules... These are not the sort of things your 
average Joe Programmer is likely to comprehend. And comprehending this 
stuff is really rather important if you want to write code with good 
performance!



The rules aren't especially complicated. They just don't appear to be 
written down anywhere in an easily digestible form.

1. Execution starts from "main".

2. I/O actions execute sequentially, in the order written.

3. Any arguments to an I/O action get executed when the I/O action 
executes. (Usually.)

4. If there is a conditional branch in the I/O sequence, the condition 
is evaluated so we can decide which branch to take.

Together, these rules tell us when I/O actions execute, and when that 
causes pure expressions to execute. When a pure expression executes, we 
walk down the expression tree in a certain order, recursively executing 
the sub-expressions.
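
To see rules 1-3 in action, here's a minimal sketch of a toy program:

   main = do
     let x = sum [1 .. 1000000]   -- builds a thunk; nothing is computed yet
     putStrLn "Starting..."       -- rule 2: I/O actions run in the order written
     print x                      -- rule 3: print needs x, so the sum happens now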

For the most part, there are two rules to pure code:

1. If the result of a function is another expression, execute that 
expression.

2. If the result of a function depends on a conditional branch, execute 
the conditional so we can decide which branch to take.

An example may help to clarify:

   or1 False False = False
   or1 False True  = True
   or1 True  False = True
   or1 True  True  = True

   or2 False x = x
   or2 True  x = True

The first function /obviously/ implements the truth-table of the 
logical-OR function. It might not be immediately obvious, but for any 
True/False combination, or2 produces the same result as or1. (Try it!)

Given this, you might think that these are two equivalent ways of 
writing the same function. But actually, they are not. or1 takes a 
conditional branch based on the values of /both arguments/. Therefore, 
whenever or1 is called, /both arguments/ will be computed. or2, on the 
other hand, takes a conditional branch based on /one/ argument only. If 
that argument is False, then the second argument will be returned (and 
hence immediately computed). However, if the first argument is True, 
then the second argument will /never/ be computed.
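
You can see the difference directly in GHCi (a sketch, relying on the 
fact that evaluating "undefined" throws an error):

   or1 True undefined   -- error: or1 pattern-matches on both arguments
   or2 True undefined   -- True - the second argument is never looked at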

Hopefully that gives you some idea how pure code implicitly defines 
execution order. Admittedly it's quite subtle; even the experts get this 
stuff wrong occasionally. Arguably that's Haskell's greatest weakness. (Even 
some of the original language architects seem ambivalent about whether 
default-lazy was the right choice.)

As well as conditional constructs, there are various ways that you can 
/force/ stuff to execute Right Now, rather than at some later time. The 
main one is the "seq" function. seq is magical, in that it doesn't 
"need" its first argument, and yet the language spec guarantees that it 
will always be executed. For example,

   length list = work 0 list
     where
       work n (head : tail) = work (n+1) tail
       work n []            = n

This counts the number of elements in a list. However, what it does is 
fill "n" with a giant unevaluated expression that looks like (((0+1)+1)+1)+... And 
then, right at the end, it returns n, forcing this giant expression to 
be executed. If the input list is large enough, this easily overflows 
the stack.

To fix this, we need to do two things. First, we create a local variable 
for the new, updated count:

   work (n+1) tail

becomes

   let n1 = n+1 in work n1 tail

By itself, this does nothing. But now we can apply the seq function:

   let n1 = n+1 in n1 `seq` work n1 tail

This /forces/ n1 to be executed /before/ the recursive call to work is 
executed. Since this happens every time round the loop, n always ends up 
holding an evaluated integer.
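
Putting the pieces together, the fixed function reads:

   length list = work 0 list
     where
       work n (head : tail) = let n1 = n+1 in n1 `seq` work n1 tail
       work n []            = n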

In fact, now the compiler can /see/ that n is always evaluated, and it 
starts applying optimisations like treating n as a normal 
machine-integer rather than a heap object, producing a tight loop like a 
Java or C programmer would write.
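
As an aside, GHC's BangPatterns extension offers a more compact spelling 
of the same strictness. This is just a sketch, equivalent to the seq 
version above:

   {-# LANGUAGE BangPatterns #-}

   length list = work 0 list
     where
       work !n (head : tail) = work (n+1) tail   -- the bang forces n on every call
       work !n []            = n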

seq is used for other reasons; for example, when doing parallel 
programming, you /want/ the parallel threads to compute stuff, even 
though they don't "do" anything with the result. (You may remember the 
magical "par" function forces something to be evaluated in parallel, 
much like "seq" forces it in series.)
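
A minimal sketch of that, assuming the "parallel" package (its 
Control.Parallel module provides par and pseq):

   import Control.Parallel (par, pseq)

   sumBoth :: Integer
   sumBoth = a `par` (b `pseq` a + b)
     where
       a = sum [1 .. 5000000]          -- sparked off to be evaluated in parallel
       b = sum [5000001 .. 10000000]   -- evaluated on this thread, then combined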

It's also important to remember that values are not simply "evaluated" 
or "unevaluated". It's not black and white, it's shades of grey. Each 
individual heap object is either black or white; a given expression's 
result might consist of hundreds of individual heap objects, hence the 
shades of grey. Anything atomic like an integer or a Bool is black or 
white; things like lists or strings or whatever have shades of grey.

(Actually, saying that, it's not inconceivable that an /arbitrary 
precision/ integer might consist of more than one heap object...)
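
For example, in GHCi (a sketch; "undefined" stands in for the part that 
never gets evaluated):

   let xs = 1 : 2 : undefined
   take 2 xs    -- [1,2] - only the first two cells are ever forced
   length xs    -- error - the whole spine is needed, undefined included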



Actually, having written Logic Box, one of the next projects I was 
thinking about attacking was writing an interactive simulator that shows 
you how the computer runs Haskell code. I actually ended up with a mild 
case of analysis-paralysis, but I think it's tractable. Would that be 
something that interests you? (Or anybody else here...?)



From: andrel
Subject: Re: Quote of the day
Date: 3 Jul 2012 15:27:56
Message: <4FF347BC.7030308@gmail.com>
On 3-7-2012 6:40, Warp wrote:
> Invisible <voi### [at] devnull> wrote:
>> OCaml (and F#) look to be much more promising in the realm of writing
>> production systems de novo. Haskell isn't deterministic in its return
>> time (it's Lazy so it may come back in 5 secs or may take 5 hours - and
>> you cant really predict up front which - this is fatal for use in eg
>> pricing libraries)"

Wow, just wow. Let me rephrase: 'you should never use a sorting 
algorithm because the runtime depends on the input'.

>> You heard it here folks - Haskell "isn't deterministic". :-D

Deterministic is not the right word, he meant unpredictable. And even 
that is wrong (see also Andrew's reply). I know of only one language that 
is unpredictable and that is Dijkstra's guarded command language. OTOH I 
don't know if anybody ever made an implementation of that.

>> While obviously this statement is false (you /can/ predict run-time,
>> it's just harder than you might imagine), the assertion that "this is
>> fatal" is probably bang-on.
>
>    I don't understand at which point Haskell decides to stop laziness and
> start actually computing results. It cannot be lazy forever because it
> would just run out of memory sooner or later (probably even with algorithms
> that would normally require O(1) memory in an imperative language).

No. Although you could write a program that needs an infinite amount of 
space. Such a program would make as much sense as a program in an 
imperative language that has an infinite loop.

Laziness basically means that you don't start computing things when you 
don't know if you are ever going to need the result. It is just the way 
you traverse your statespace in search of a solution. If we cut some 
corners we might say that it is a depth-first search. So it only starts 
working on a branch once the branch on the left (or the right, for Hebrew 
and Arabic speaking persons) is exhausted. For every specification and every 
input there might be another way of going through your statespace that 
is faster for that particular problem. But that is not the point. If you 
know the optimal strategy you can always force Haskell to use it. Yet, 
if you don't know it, lazy evaluation is in general a good strategy (at 
least on a single-threaded machine).


-- 
tip: do not run in an unknown place when it is too dark to see the 
floor, unless you prefer to not use uppercase.



From: Orchid Win7 v1
Subject: Re: Quote of the day
Date: 3 Jul 2012 16:05:30
Message: <4ff3508a$1@news.povray.org>
> Wow, just wow. Let me rephrase: 'you should never use a sorting
> algorithm because the runtime depends on the input'.

And yet, everybody prefers the unpredictable quicksort over the 
perfectly consistent mergesort, even though on paper they're both 
O(n log n) on average...

>>> You heard it here folks - Haskell "isn't deterministic". :-D
>
> Deterministic is not the right word, he meant unpredictable. And even
> that is wrong (see also Andrews reply).

Yeah, most things a computer program does are /deterministic/. This is 
not the same as /predictable/ at all.

(In some sense, whether a program ever halts at all is "unpredictable": 
even though a program is completely /deterministic/, whether it halts is 
provably not /computable/.)

From a practical perspective, if you have no idea whatsoever how 
efficient or not the code you just wrote is going to be... you have a 
problem.

> I know of only one language that
> is unpredictable and that is Dijkstra's guarded command language. OTOH I
> don't know if anybody ever made an implementation of that.

OOC, do you have a reference for that?

>> I don't understand at which point Haskell decides to stop laziness and
>> start actually computing results. It cannot be lazy forever because it
>> would just run out of memory sooner or later (probably even with
>> algorithms
>> that would normally require O(1) memory in an imperative language).
>
> No. Although you could write a program that needs an infinite amount of
> space. Such a program would make as much sense as a program in an
> imperative language that has an infinite loop.

It's true though that a lazy language can unexpectedly change the 
complexity class of your algorithm if you're not paying attention.

For example, counting the number of nodes in a list /should/ be O(n) in 
time and O(1) in space. However, the "obvious" thing to write is

   length (x:xs) = 1 + length xs
   length []     = 0

This gives the correct answer, but is unfortunately also O(n) in 
space - which is very silly, of course. The "correct" solution is the 
much less obvious function

   length = count 0
     where
       count n (x:xs) = let n' = n+1 in n' `seq` count n' xs
       count n []     = n

This is actually O(1) in space, like you'd expect.

On the other hand, the "obvious" way to implement the Fibonacci numbers 
[as if any computer program ever in the history of mankind will ever 
/need/ to look up a Fibonacci number for any purpose other than 
micro-benchmarks] is:

   fib 0 = 1
   fib 1 = 1
   fib n = fib (n-1) + fib (n-2)

This unfortunately takes exponential time. But the trixy incantation

   fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

is only O(n). And it works exactly /because/ of laziness. Otherwise, 
this would be an infinite loop which would never end.
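
To see it working (a quick sketch):

   take 10 fibs   -- [1,1,2,3,5,8,13,21,34,55]
   fibs !! 30     -- 1346269, using ~30 additions instead of millions of calls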

(There are similar incantations for the prime numbers and for Pascal's 
triangle.)
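
For instance, rough sketches of those two (the primes one is the famous 
"unfaithful sieve" - simple, but nowhere near as fast as a real Sieve of 
Eratosthenes):

   primes = sieve [2..]
     where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

   pascal = [1] : map (\row -> zipWith (+) (0:row) (row ++ [0])) pascal

   take 10 primes   -- [2,3,5,7,11,13,17,19,23,29]
   take 5 pascal    -- [[1],[1,1],[1,2,1],[1,3,3,1],[1,4,6,4,1]]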



From: Orchid Win7 v1
Subject: Re: Quote of the day
Date: 3 Jul 2012 16:07:14
Message: <4ff350f2$1@news.povray.org>
On 03/07/2012 09:05 PM, Orchid Win7 v1 wrote:

> This unfortunately takes exponential time. But the trixy incantation
>
> fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
>
> is only O(n). And it works exactly /because/ of laziness. Otherwise,
> this would be an infinite loop which would never end.

...as opposed to those infinite loops that end after a while. o_O



From: andrel
Subject: Re: Quote of the day
Date: 4 Jul 2012 11:38:17
Message: <4FF46365.6060706@gmail.com>
On 3-7-2012 22:05, Orchid Win7 v1 wrote:
>> I know of only one language that
>> is unpredictable and that is Dijkstra's guarded command language. OTOH I
>> don't know if anybody ever made an implementation of that.
>
> OOC, do you have a reference for that?

A reference for not knowing something???

For Dijkstra's GCL I think the most obvious source is "A Discipline of 
Programming". I think he even explains in there why it is essential that 
the execution is non-deterministic.

