scott wrote:
>> A true Haskeller would define a notionally infinite list which
>> notionally contains the entire state of the system at every future
>> point in time. Examining the elements of this list causes the
>> integration to actually be performed, and assuming you let go of the
>> start of the list, the GC will delete each state after it has been
>> calculated.
>
> What's that like on performance though? What would it be like if your
> state block consisted of hundreds of 4D vectors, and the system was
> being updated at 1000 Hz? Would the GC be noticeable?
Erm... without trying it, I couldn't say off the top of my head. Notice
though that since this is "the normal way" to do stuff in Haskell, the
compiler designers will be basing all their design decisions on the fact
that people are going to be writing code this way.
I would say that for *vast* numbers of vectors, you might need to move
over to using mutable arrays (uses less space and less GC time). But
without benchmarking it I couldn't tell you exactly how much of a
difference it makes. (I would also suspect that if you have "hundreds"
of vectors you probably need random access to them too...)
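To make the "infinite list of states" idiom concrete, here's a minimal sketch. The state type, the `integrate` step and the `dt` constant are made up for illustration; the real simulation would obviously be bigger:

```haskell
-- A notionally infinite list of every future state of the system.
-- 'State' and 'integrate' are placeholders, not real simulation code.
type State = (Double, Double)       -- e.g. position and velocity

integrate :: State -> State
integrate (x, v) = (x + v * dt, v)  -- trivial Euler step
  where dt = 0.001

-- All future states, computed lazily as the list is examined:
states :: [State]
states = iterate integrate (0, 1)

-- Consuming the list head-first means the GC can reclaim each state
-- as soon as it has been used:
main :: IO ()
main = mapM_ print (take 3 states)
```

Because nothing holds on to the head of `states`, each cell becomes garbage as soon as the next one is demanded.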
>> (You can't really speak of assigning several values to the same
>> variable "one after the other" because [pure] Haskell functions don't
>> have any notion of "time".)
>
> Ah ok I see, so it's like every variable is defined as "const" in
> the C++ sense, you can't modify it.
Well, kinda, yeah.
It's like, a function can only have one body. Nobody would find that
surprising. Like in C, you can't write a function, and then later change
that function to something else. (While we're on the subject, in Haskell
a "function name" is just a normal variable whose value happens to be a
function...)
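For example (names made up, but the point is the author's): a function definition is just a variable bound to a function value, and you can pass it around like any other value:

```haskell
-- A "function name" is just an ordinary variable bound to a function:
double :: Int -> Int
double = \x -> 2 * x          -- same thing as writing:  double x = 2 * x

-- Since functions are plain values, they can be passed as arguments:
applyTwice :: (a -> a) -> a -> a
applyTwice f x = f (f x)

main :: IO ()
main = print (applyTwice double 3)   -- prints 12
```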
A basic property of Haskell code [and any "pure" functional language] is
that if you take any variable and replace all occurrences of that
variable with the expression that defines its value, the meaning of the
program is unchanged. [Although possibly efficiency is reduced.]
Notionally, this is how you "execute" such a program.
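A tiny made-up example of that substitution property:

```haskell
-- Referential transparency: replacing a variable with its defining
-- expression never changes what the program means.
area :: Double
area = let r = 2 + 3
       in 3.14159 * r * r

-- Substituting r's definition everywhere gives the same answer
-- (though here we compute 2 + 3 twice, so it may be less efficient):
area' :: Double
area' = 3.14159 * (2 + 3) * (2 + 3)

main :: IO ()
main = print (area == area')   -- prints True
```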
>> animate state = do
>>     display state
>>     let new_state = integrate state
>>     animate new_state
>
> I assume that the compiler is clever enough to not keep on the stack any
> data that is not needed, ie old versions of state that are no longer
> needed. If you attempted that method in C++ you would fill up the RAM
> very quickly ;-)
Er, yes, in C++ (or most "normal" programming languages) the above loop
would be an *absurdly* bad idea. In Haskell, this is the normal way to
achieve looping, and hence the compiler goes to extreme lengths to
optimise it well.
(If "state" is small enough, it may even get permanently allocated into
CPU registers and updated in-place. It depends on exactly what it
contains... If it's two floats like in the examples, then probably yes.
If it's a set of 200 vectors, obviously no...)
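Stripped to its skeleton, the `animate` pattern is just a tail call carrying a small state, something like this (the counting loop is invented for illustration; a real `animate` would do IO each step):

```haskell
-- Tail recursion as a loop: each recursive call replaces the current
-- stack frame, so the "stack" never grows.
loop :: Int -> Double -> Double
loop 0 acc = acc
loop n acc = acc `seq` loop (n - 1) (acc + 1.0)
  -- 'seq' forces the accumulator each step so unevaluated
  -- thunks don't pile up in the heap.

main :: IO ()
main = print (loop 1000000 0)   -- a million iterations, constant space
```

With optimisation on, GHC typically turns this into a register-updating loop, which is the "updated in-place" behaviour described above.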
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*