  Re: Radiosity code question #3  
From: Jim McElhiney
Date: 26 Jun 2003 09:50:41
Message: <3EFAFA12.668916EF@excite.com>
Christoph Hormann wrote:

> Jim McElhiney wrote:
> >
> > [...]
> >
> > The 1600 samples were created by a fairly sophisticated program
> > (given the apparent simplicity of the problem), which I might, or might
> > not, be able to find kicking around.  I spent MANY days on it.  (really!)
> > Basically, it tries to meet the following criteria:
> > [...]
>
> That complies well with the observations that have been made with the
> sample set, namely that it is fairly good at certain count values but
> quite limited for values in between.  When you use 'normal on' in
> radiosity (I think this was added by Nathan) it is not guaranteed that the
> first N samples are actually used when you use 'count N'.

It is certainly designed to work well for any count over 50, but it
is built around the idea that it should always use indices 1..N.
Using anything else would certainly give problems with the current
rad_data.  It would be better to rotate the whole set of points, then
take 1..N, rather than taking a "slanted" sample from the set.
Or, put in a whole different system, of course.
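
To make the indexing point concrete, here is a toy C fragment (the names
are made up, it is not the actual rad_data code):  the table is built so
that any prefix is well distributed, so 'count N' should read it as a
prefix; a strided ("slanted") subset was never designed to be uniform.

#include <stdio.h>

#define RAD_SET_SIZE 1600

typedef struct { double x, y, z; } VEC;

static VEC rad_samples[RAD_SET_SIZE];   /* stand-in for the real table */

/* intended use: the first N entries form a good hemisphere sampling */
static const VEC *sample_prefix(int i)
{
    return &rad_samples[i];             /* i in 0..N-1 */
}

/* "slanted" use: striding through the table picks a subset that was
   never designed to be uniform, hence the problems at odd counts     */
static const VEC *sample_strided(int i, int n)
{
    return &rad_samples[i * (RAD_SET_SIZE / n)];
}

int main(void)
{
    int n = 100, i;
    printf("count %d: prefix uses indices 0..%d;  strided picks:", n, n - 1);
    for (i = 0; i < 5; i++)
        printf(" %d", (int)(sample_strided(i, n) - rad_samples));
    printf(" ...\n");
    return 0;
}
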
One school of thought is that any area sampling, such as anti-aliasing
pixel subsampling, should use a simple function which gives you
new points one by one, the first one a bit off centre, and each subsequent
one at the centroid of the largest remaining hole (an effect which I
think Hammersley and Halton points approximate).  I looked around
for an algorithm to generate this sequence on the fly, but no joy.
I think there is a one-dimensional solution using splits based on
the golden ratio, but I never got further with it for 2D.  (I'm confident
a solution for a rectangle could be adapted to the hemisphere).
So, I just generated it once and saved the results.  The big issue
in the choice of algorithm is that it be very fast, if it's done for every
point.  But there isn't really a need to do it for every point.
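
For what it's worth, the one-dimensional golden-ratio idea is easy to
show; this little C sketch just prints x_n = frac(n/phi), and each new
point lands in (roughly) the largest gap left by the previous ones, so
every prefix stays well spread over [0,1).  Carrying that property over
to 2D and the hemisphere is the part I never solved.

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double inv_phi = 2.0 / (1.0 + sqrt(5.0));   /* 1/phi = 0.6180... */
    int n;

    for (n = 1; n <= 12; n++)
        printf("point %2d : %.4f\n", n, fmod(n * inv_phi, 1.0));
    return 0;
}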

Interestingly, other implementations simply use an MxN grid of samples
(polar coordinates).  I've sometimes wondered whether some of
the smoothness they get is because of this:  there is sample-to-sample
coherence of the sampling directions, so a given object (say a long
thin object ending somewhere nearby) tends to be
either in, or out, of many adjacent samples, rather than jumping
in and out from sample to sample as a random sampling gives.
I call this "artificial smoothness".  Who's to say it's a bad thing?
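
If it helps to picture it, here is a rough sketch (my guess at the
general shape of such code, not any particular implementation) of a
fixed MxN polar grid of hemisphere directions.  Because every gather
reuses the same grid, adjacent sample points see the scene through the
same set of directions, which is where the coherence comes from.

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define THETA_STEPS 4    /* elevation rings   */
#define PHI_STEPS  16    /* azimuth divisions */

int main(void)
{
    int i, j;
    for (i = 0; i < THETA_STEPS; i++) {
        /* cosine-distributed elevation: each ring covers roughly the
           same amount of projected solid angle                       */
        double cos_theta = sqrt(1.0 - (i + 0.5) / THETA_STEPS);
        double sin_theta = sqrt(1.0 - cos_theta * cos_theta);
        for (j = 0; j < PHI_STEPS; j++) {
            double phi = 2.0 * M_PI * (j + 0.5) / PHI_STEPS;
            printf("% .4f % .4f % .4f\n",
                   sin_theta * cos(phi), sin_theta * sin(phi), cos_theta);
        }
    }
    return 0;
}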



> Another
> observation I made was that the sample distribution has a 'rim' structure
> at low counts (a lot of samples at the same elevation/theta values, this
> could be caused by the way you solved the 'horizon problem').
>

Yes, this is on purpose.  It should be harmless as long as the count
is high enough that those samples are not overweighting the horizon.
(at least 50;  I assumed people would be using 100+ if they were
interested in good-looking output).
The artifacts caused by leaving these out could probably be solved
other ways (other implementors just use extremely high sample counts),
but they are a big problem and some effort to avoid them should
be considered.  The "pencil on table" test is a good one.  The higher
the angle of your lowest-angle samples, the lower your max effective
radius has to be to compensate.  The low angle can be low for all
scenes without causing problems, whereas the max effective radius
is always something that has to be tuned, so one could consider the
sample-low-angles fix to be a more robust one.  I spent a long
time trying to find the best tradeoff point for those low angles.
A better solution might be to include some even lower-angled points,
but underweight their results appropriately at low sample counts.
I'm sure there are even better solutions out there.
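
One possible way of doing that underweighting, purely as a sketch (this
is not what the rad_data generator actually does), is to keep some
near-horizon directions in the set but scale each sample's contribution
by cos(theta), so grazing samples cannot dominate the average at low
counts:

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } VEC;      /* z is along the surface normal */

/* weighted average of the illumination carried back by each sample */
static double gather_average(const VEC *dir, const double *illum, int count)
{
    double sum = 0.0, wsum = 0.0;
    int i;
    for (i = 0; i < count; i++) {
        double w = dir[i].z;                 /* cos(theta); 0 at the horizon */
        if (w < 0.0) w = 0.0;                /* ignore anything below the surface */
        sum  += w * illum[i];
        wsum += w;
    }
    return (wsum > 0.0) ? sum / wsum : 0.0;
}

int main(void)
{
    VEC dir[3]      = { {0.0, 0.0, 1.0}, {0.9987, 0.0, 0.05}, {0.6, 0.0, 0.8} };
    double illum[3] = { 1.0, 10.0, 1.0 };    /* the grazing sample is very bright */
    printf("weighted gather = %.4f\n", gather_average(dir, illum, 3));
    return 0;
}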

>
> The changes that will be in next Megapov concerning this are:
>
> - alternative custom (specified in the script) or generated (Halton
> sequence) sample sets.  The latter is already available in MLPov:
>
> http://martial.rameaux.free.fr/mael/mlpov083.html
>

Sounds like a good idea.
(I can't get to that link right now, despite being in France, but I'll try later.)
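
For reference, the core of a Halton generator is tiny; something along
these lines (bases 2 and 3, mapped to cosine-weighted hemisphere
directions; how MLPov actually maps the sequence to directions I don't
know) is all it takes:

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* radical inverse of n in the given base: the core of the Halton sequence */
static double radical_inverse(int n, int base)
{
    double inv_base = 1.0 / base, f = inv_base, result = 0.0;
    while (n > 0) {
        result += f * (n % base);
        n      /= base;
        f      *= inv_base;
    }
    return result;
}

/* i-th hemisphere direction (z along the normal), cosine weighted */
static void halton_hemisphere(int i, double *x, double *y, double *z)
{
    double u = radical_inverse(i + 1, 2);    /* skip index 0 */
    double v = radical_inverse(i + 1, 3);
    double phi = 2.0 * M_PI * u;
    double cos_theta = sqrt(1.0 - v);
    double sin_theta = sqrt(v);

    *x = sin_theta * cos(phi);
    *y = sin_theta * sin(phi);
    *z = cos_theta;
}

int main(void)
{
    double x, y, z;
    int i;
    for (i = 0; i < 8; i++) {
        halton_hemisphere(i, &x, &y, &z);
        printf("%d  % .4f % .4f % .4f\n", i, x, y, z);
    }
    return 0;
}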

>
> - optional random rotation of the sample set around the normal vector.

Yes, I tried this myself.  Probably a good idea.
It reduced the amount of "artificial smoothness" further:  the scenes
were probably more accurate on average, but quicker renderings
sometimes looked worse, due to increased blotchiness.
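
The rotation itself is cheap, since the sample set lives in the local
frame with z along the normal; one random azimuth per gather is enough.
A sketch (illustration only, not the MegaPov code):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } VEC;

/* rotate v around the local z axis (the surface normal) by angle a */
static VEC rotate_about_normal(VEC v, double a)
{
    VEC r;
    double c = cos(a), s = sin(a);
    r.x = c * v.x - s * v.y;
    r.y = s * v.x + c * v.y;
    r.z = v.z;                      /* elevation is unchanged */
    return r;
}

int main(void)
{
    VEC dir = { 0.6, 0.0, 0.8 };
    double angle;
    VEC rot;

    srand((unsigned)time(NULL));
    angle = 2.0 * M_PI * (rand() / (RAND_MAX + 1.0));
    rot = rotate_about_normal(dir, angle);
    printf("% .4f % .4f % .4f\n", rot.x, rot.y, rot.z);
    return 0;
}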

>
>
> >
> > Note, if you have artifacts in a picture for which you want very high
> > quality output, it is 99% certainty that the problem is a bug in the
> > software (sorry) or the choice of other parameters, and <1% chance
> > you really need more samples.
>
> You need quite a lot of samples when you have strong local variation of
> brightness i.e. small and very bright objects.  Of course it is quite
> possible that there is a bug somewhere so it needs more samples to produce
> smooth results than really necessary.

You're right, sometimes you want to crank the
count way up, go away for the weekend, and have a really nice picture
at the end.  Stochastic sampling isn't really the right way to solve this
problem, but, since we're talking about how to improve a system that
is based on it, you might as well allow it to crunch away when you want
it to.
Bear in mind that when I set the 1600 limit, all of this was new to everyone,
and no one was really prepared for the huge increase in rendering times.
A higher limit (or, better yet, no limit), is definitely in order.

>
>
> I tested your suggestion about ot_index and it seems to make some
> difference but not much while the difference in speed can be quite
> significant (more than twice the time in one test).  It could of course be
> that i just tested with the wrong kind of scene and settings.

The effect of the bug that I would expect is as follows:  many samples
are getting stored in the ot_ tree at too small a node size, so are not
getting found when you need them.  Therefore, they aren't getting
included in the averages, except when your sample point is very
close to them.  Whether or not they are included is
dependent on which side of the caching tree node's boundary you
are on, so the old sample drops out of the contribution suddenly between
two pixels while its weight is significant, causing discontinuities.  You would
expect these discontinuities in lighting to align with the scene axes, and
to be more pronounced at low nearest_count values and high
values of low_error_factor.
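
A toy model of the failure mode (hypothetical; the real ot_*.c code is
organised differently):  the depth at which a sample is filed should
follow from its effective radius, and filing it a level or two too deep
puts it in a node too small for its region of influence, so a lookup
from just across that node's boundary never sees it.

#include <stdio.h>

/* depth such that the node is still at least as large as the radius;
   node size halves with each level, level 0 spans size 1.0 here      */
static int node_depth_for_radius(double radius)
{
    int depth = 0;
    double size = 1.0;
    while (size * 0.5 >= radius && depth < 20) {
        size *= 0.5;
        depth++;
    }
    return depth;
}

int main(void)
{
    double radius = 0.1;
    printf("correct depth : %d\n", node_depth_for_radius(radius));
    /* the bug in effect: stored deeper than intended, e.g. two levels
       down, so the node is smaller than the sample's region of influence */
    printf("too-deep depth: %d\n", node_depth_for_radius(radius) + 2);
    return 0;
}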

The quick hack just forces everything to be stored in a bigger node,
so more nodes are looked at when looking for things to average in.
This does take a little more time, since the tree is traversed a lot.
But, I'm kind of surprised that the speed got that much slower.
Probably a small part of it is the tree traversal per se, and a larger
part of it is the execution of the function (ra_average_near) which calculates
the weight and averages it in.
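
In terms of the toy model above, the hack is just a clamp on the
storage depth (again hypothetical, not the actual patch):

#include <stdio.h>

#define MAX_STORE_DEPTH 4   /* hypothetical floor on node size */

/* never file a sample deeper (i.e. in a smaller node) than this */
static int clamped_depth(int computed_depth)
{
    return (computed_depth > MAX_STORE_DEPTH) ? MAX_STORE_DEPTH
                                              : computed_depth;
}

int main(void)
{
    printf("computed depth 7 -> stored at depth %d\n", clamped_depth(7));
    printf("computed depth 2 -> stored at depth %d\n", clamped_depth(2));
    return 0;
}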


Rgds
Jim

