Newsgroups: povray.binaries.images
Subject: Re: Dropsnow Macro Examples (52kb)
From: Tim Nikias
Date: 15 Dec 2004 13:00:26
Message: <41c07bba@news.povray.org>
> That one looks quite good.  How do you sample the surfaces - do you just
> trace the whole bounding box area or is it some adaptive process?  How
> well does it work with some wind (i.e. with the snow coming a bit from
> the side and sticking to the vertical parts)?

Haven't followed the progress of this project, eh? ;-)

First, a macro generates a camera which creates an orthographic view onto
the object with the direction intended to be used for the surface sampling.
This view is plain black/white where the object is white.

Then, I do a pretest. Given the orthographic view, I divide the area into
cells and check each one for a crossing from white to black along its edges.
If that fails, I check the center of the cell in case the entire cell is
enclosed within the object. This step is done by using the image as the
basis for a heightfield and applying trace() along the edges of the cells. I
know that I could probably save a little parsing time on this if I'd reuse
the edge-tests for neighbouring cells, but then again, this step is really
the smallest step in the entire chain and not really worth the work.
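In rough Python pseudo-logic (the real macro is pure POV-Ray SDL, and these names are made up for illustration), the cell pretest amounts to something like this:

```python
# Sketch of the cell pretest: the orthographic render is treated as a
# boolean mask (True = object), the area is split into square cells,
# and a cell stays active if its edges cross between object and
# background, or if its centre lies inside the object (i.e. the whole
# cell is enclosed).

def active_cells(mask, cell):
    h, w = len(mask), len(mask[0])
    active = []
    for cy in range(0, h, cell):
        for cx in range(0, w, cell):
            # sample along the four edges of the cell
            edge = []
            for i in range(cell):
                xi = min(cx + i, w - 1)
                yi = min(cy + i, h - 1)
                x1 = min(cx + cell - 1, w - 1)
                y1 = min(cy + cell - 1, h - 1)
                edge += [mask[cy][xi], mask[y1][xi],
                         mask[yi][cx], mask[yi][x1]]
            crossing = any(edge) and not all(edge)  # white/black transition
            centre = mask[min(cy + cell // 2, h - 1)][min(cx + cell // 2, w - 1)]
            if crossing or centre:
                active.append((cx, cy))
    return active
```

Cells that are entirely background fail both tests and get skipped by all later sampling.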

Once I've figured out which cells are active, I trace() the object using a
type of Fibonacci distribution, which spreads the samples much more evenly
than plain discrete u-v samples would: it doesn't create such obvious
patterns.
The trace() just passes through the entire object, collecting every surface
along the way. All of these are saved into one giant file, along with the
surface normal found at each sample.
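To sketch the idea of the distribution (again in Python, just for illustration), sample i of n gets u = frac(i * phi) and v = (i + 0.5) / n, which scatters points far more evenly over a cell than a regular grid:

```python
# Fibonacci-style lattice over the unit square: successive samples are
# offset by the fractional golden ratio, which avoids the visible
# row/column patterns of a plain discrete u-v grid.

GOLDEN = (5 ** 0.5 - 1) / 2  # fractional part of the golden ratio

def fibonacci_lattice(n):
    return [((i * GOLDEN) % 1.0, (i + 0.5) / n) for i in range(n)]
```

Each (u, v) pair then becomes one ray through the active cell; since trace() itself stops at the first intersection, the ray is restarted just past each hit to pick up every surface along the way.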

Then, I apply some selection macros, which remove samples on surfaces that
are too steep (just a simple angle test between the sample normal and a given
direction), or that lie inside/outside a given bounding box or object. A
trickier macro checks, for each sample, how much of the sky can be seen when
looking towards a given direction. A certain percentage is required to keep
the sample; otherwise, the macro assumes that the particle couldn't have
gotten there and removes it. Hence I get snow-free areas on the CSG object
(but not on the trees, where there's obviously lots of sky to be seen).
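As a rough Python sketch of two of those selection tests (names invented for illustration; occluded() stands in for a trace() call):

```python
import math

# Steepness test: reject samples whose normal deviates from a reference
# direction by more than a maximum angle.
def is_flat_enough(normal, up, max_angle_deg):
    cos_a = (sum(a * b for a, b in zip(normal, up))
             / (math.hypot(*normal) * math.hypot(*up)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return angle <= max_angle_deg

# Sky test: keep a sample only if at least min_fraction of the rays
# shot towards the sky escape the scene; occluded(p, d) reports whether
# a ray from p along direction d hits anything.
def sees_enough_sky(sample, sky_rays, occluded, min_fraction):
    clear = sum(1 for d in sky_rays if not occluded(sample, d))
    return clear / len(sky_rays) >= min_fraction
```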

And finally, to calculate how high the cylindrical part of a sample may be,
I trace the upper hemispherical view and check how close the object is. If
enough of the samples are close enough, the particle grows in y-direction to
simulate wall-stacking behaviour.
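In the same Python pseudo-logic (invented names again, and the linear growth rule is just one plausible mapping from hit fraction to height), the stacking test looks roughly like:

```python
# Wall-stacking sketch: shoot rays over the upper hemisphere; if enough
# of them hit geometry within a given distance, grow the particle's
# cylindrical part in y.  hit_distance(p, d) stands in for trace(),
# returning the distance to the first surface or None for a miss.
def stack_height(sample, hemi_rays, hit_distance, close, base_h, max_h, need):
    near = sum(1 for d in hemi_rays
               if (dist := hit_distance(sample, d)) is not None and dist < close)
    frac = near / len(hemi_rays)
    if frac < need:
        return base_h            # too little nearby geometry: no stacking
    # grow linearly from base height to the maximum (assumes need < 1)
    return base_h + (max_h - base_h) * (frac - need) / (1.0 - need)
```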

Note that I use *many* trace() calls for all this, mainly because, with
meshes, functions are not really an option (and since I'm no pro with
functions, this suits me well). :-)

And as you can see, depending on how many samples the hemispherical test
uses and at what distance the hemisphere still affects the particle, some
particles grow simply because lots of hemisphere samples happen to hit a
very tiny branch. Like I said, increasing the sampling and decreasing the
distance would help, but would raise parsing time even further. And since
the stacking effect is really only needed and visible near the trunk, I can
use a simpler tree for all those tests and get away with it. :-)

> Note parsing speed is not so much a question of the CPU but of general
> system performance.

I've always thought that POV-Ray relies heavily on the CPU; it's *the*
driving force. Of course, if the entire system is slow and has few resources
to offer, a CPU can only do *that* much. I guess that *I* would probably
notice the biggest difference when switching to a faster CPU. Of course, the
data bus and I/O transfers between CPU, hard drive, RAM etc. can slow things
down a lot if picked improperly, but would that really affect the render
*that* much? Honest question, not rhetorical.

-- 
"Tim Nikias v2.0"
Homepage: <http://www.nolights.de>


