On 12-Sep-08 15:37, Invisible wrote:
>>> Really? I mean, it's a little perplexing that the computer is
>>> actually able to unravel it without getting stuck, but once you
>>> understand what the machine is doing internally, it's not so hard.
>>>
>> You mean that you not only understand how it works, but it even
>> helped you understand how compilers work?
>
> Well I mean *notionally*, the way Haskell executes is fairly simple. In
> the case of the magic tail-chasing fibs definition, fibs starts out as a
> list where some list cells are defined, and some haven't been computed
> yet. And the ones that haven't been computed yet refer to the ones that
> have. (If it were the other way round, you WOULD have a problem.) As you
> ask for list cells, they get computed, one by one, until you get the one
> you want.
>
> It's a nice example of how lazy evaluation allows you to program in
> unusual ways, but it's probably not terrifically practical unless you
> only need a handful of Fibonacci numbers.
>
That is not really an answer to the question. OTOH it does show that the
Haskell example can also be used to explain how the Haskell compiler (or
whatever it is) works.
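For concreteness, the "magic tail-chasing fibs" being discussed is presumably the classic self-referential definition (the exact code is not quoted in the thread, so this is the standard form, sketched here as an assumption):

```haskell
-- The self-referential Fibonacci list. Later cells are defined in
-- terms of earlier cells, which lazy evaluation has already forced
-- by the time they are needed; demanding cell n forces cells 0..n-1.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

Asking for `fibs !! k` computes exactly the cells up to index k, one by one, which is the "as you ask for list cells, they get computed" behaviour described above.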
As an aside, the Haskell evaluation needs much more explanation than the
C code's execution does. That is of course because all processors have
instruction sets that are modelled on imperative languages (and vice
versa). For most programmers that seems to imply that imperative
languages are more natural, perhaps even more fundamental.
I personally think imperative languages are also perceived as more
natural because of the way maths is taught. Teachers tell you to do this
and then do that. That imposes a sense of direction on the process that
is not actually there. It does, however, make it easy to understand
things like x=x+1. To such an extent that I have never seen any student
object to such silliness.
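To illustrate the contrast (a sketch, not from the thread): read imperatively, x=x+1 is an instruction to update x; read as a mathematical equation, as Haskell reads it, it defines x in terms of itself and forcing such an x would never terminate. The maths-style reading only makes sense once the "new" value gets its own name:

```haskell
-- Imperatively, x = x + 1 means "increment x". As an equation it is
-- unsatisfiable over the integers; in Haskell, `x = x + 1` is a
-- recursive definition whose evaluation diverges. Giving the new
-- value a distinct name recovers a sensible equation: x' = x + 1.
step :: Integer -> Integer
step x = x + 1   -- x' = step x, a definition, not an update

main :: IO ()
main = print (step 41)  -- 42
```

The function name `step` is purely illustrative.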