triple_r wrote:
> It's likely, but they both slow down quickly since, for an n x n grid, the time
> increases at a rate of O(n^3). I'm running a version right now that's 6th
> order in space and time, and it takes about four hours or so for about 10000
> frames at 1000x1000 resolution.
I guess the O(n^3) applies not to an increased "tank" size, but to an
increased resolution (which then also requires a finer resolution of
time, I guess - doubling the spatial resolution quadruples the number of
cells and halves the stable time step, hence the cubic growth), right?
> Wait a minute--no #while? No arrays? Are you using textures or what? Sounds
> like great material for an obfuscated POV contest, but maybe it's really
> straightforward.
Indeed. The algorithm should also be perfectly suited for a cellular
automaton (ah well, in a way that water-tank-like thing *is* a cellular
automaton, albeit one with a very high number of states...). There's
only a slight bit of obfuscation in it because...
- The math isn't as obvious when you do it with textures.
- So far I've used a more naive approach than what you propose,
explicitly keeping track of both pressure and velocity (2 dimensions)
separately, yet side by side in the same frame - hence the 3:1 aspect
ratio - not realizing that the velocity can be inferred from the
difference between previous frames. (Then again, can it? After all, that
difference is an overlay of velocity influences in two dimensions...
except of course if you process only a single dimension per step.) This
of course requires some weird hacks to properly superimpose the various
dimensions; a rough sketch of the scheme follows below.
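In case it helps, here's what that explicit scheme boils down to,
written as a Python/numpy sketch rather than the actual texture trickery
(grid size, damping value and all names are made up for illustration):

  import numpy as np

  n    = 256                 # grid size, arbitrary for illustration
  c    = 0.5                 # Courant number c*dt/dx; keep below 1/sqrt(2)
  damp = 0.999               # small damping against quantization noise

  p  = np.zeros((n, n))      # scalar "potential" field
  vx = np.zeros((n, n - 1))  # "flux" across vertical cell edges
  vy = np.zeros((n - 1, n))  # "flux" across horizontal cell edges

  def step():
      # the flux is driven by the potential difference across each edge...
      vx[:] = damp * vx + c * (p[:, 1:] - p[:, :-1])
      vy[:] = damp * vy + c * (p[1:, :] - p[:-1, :])
      # ...and the potential changes with the net flux into each cell
      div = np.zeros_like(p)
      div[:, :-1] += vx
      div[:, 1:]  -= vx
      div[:-1, :] += vy
      div[1:, :]  -= vy
      p[:] += c * div

Poke p once (say, p[n//2, n//2] = 1.0) and call step() per frame to
watch circular waves spread from the center.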
> So to get the next frame, read the last two as a texture. Add twice the last
> frame minus the second-to-last frame. Then add (c*dt/dx)^2 multiplied by the
> sum of the last frame shifted in every direction by a pixel minus 4 times the
> last frame.
>
> Render?
Something along these lines, yes. Except that I'm not deep enough into
"official" math to know what a "Laplacian" is - I just derived the
iteration approach from basics. I should also note that it's not an
exact simulation of water waves, but rather an idealized medium in which
waves propagate by straightforward interaction of a scalar "potential"
field and a 2d-vector "flux" field (or whatever you want to call them),
without any other effects to mess things up - except for a small damping
factor I introduced to keep quantization noise from building up. And, as
I mentioned, so far I explicitly store the "flux" field instead of
referring to older frames. (My first approach actually computed each
component separately, using a three-frame cycle for potential, X-flux
and Y-flux, but it was darn ugly to watch ;-) I also figured (correctly)
that a combined-frame approach would speed things up, especially since
parsing time was a significant factor.)
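Just to make sure I understood you correctly: in numpy terms your
two-frame recipe would read something like the sketch below, and the
shifted-sum-minus-4-times term is, I gather, exactly that discrete
"Laplacian". (np.roll wraps around at the borders, which a shifted
texture wouldn't, but the math is the point.)

  import numpy as np

  def next_frame(last, prev, k2):
      # k2 = (c*dt/dx)^2; last and prev are the two stored frames
      lap = (np.roll(last, 1, 0) + np.roll(last, -1, 0) +
             np.roll(last, 1, 1) + np.roll(last, -1, 1)
             - 4.0 * last)    # last frame shifted in every direction,
                              # minus 4 times the last frame
      return 2.0 * last - prev + k2 * lap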
I was indeed wondering whether there would be ways to streamline the
algorithm. I also wonder whether doing multiple iteration steps per
frame (by overlaying more copies of the last two frames) might speed
things up, as parsing (and probably image loading and saving) still eats
a good deal of the time, and there's possibly other significant overhead
involved. After all, we're talking about ~40 fps.
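If that pans out, several iteration steps per rendered frame would
simply mean composing the update with itself before anything is written
to disk - with next_frame from the sketch above, something like:

  def advance(last, prev, k2, steps=4):
      # run several iteration steps, render only the final frame
      for _ in range(steps):
          last, prev = next_frame(last, prev, k2), last
      return last, prev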
Ah, one last note: I guess OpenEXR should be the recommended output
format for this application of POV-Ray, since floating-point precision
would avoid the quantization noise mentioned above.