Subject: Son of YMFWR: extremely low error_bound
From: Xplo Eristotle
Date: 28 Sep 2000 21:23:37
Message: <39D3EFCF.7E78890@unforgettable.com>
While working on a scene, I noticed that I seemed to be getting tiny
patches of really bright illumination where they didn't belong. I tried
a few radiosity tweaks, but nothing made them go away, so I opened up my
Arnold disks scene and made all sorts of radical changes to the
radiosity settings. Completely by accident, I discovered an interesting
set of values even MORE radical than the current "Arnold" settings.

Below are two views of the disks, rendered with different radiosity
settings. The one on the left uses what I believe to be fairly typical
"Arnold" settings:

		brightness 1
		count 100
		recursion_limit 1
		gray_threshold .1
		error_bound .3
		pretrace_start .01
		pretrace_end .01

The one on the right uses my tweaked settings:

		brightness 1
		count 200
		recursion_limit 1
		gray_threshold .1
		error_bound .02
		nearest_count 1
		adc_bailout 1
		pretrace_start 1
		pretrace_end 1
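
In case anyone wants to paste these in: both lists go inside a radiosity
block in global_settings. Here's the tweaked set in context (a sketch
typed from memory; depending on your MegaPOV build, you may also need
its #version unofficial MegaPov line at the top of the scene):

		global_settings {
			radiosity {
				brightness 1
				count 200
				recursion_limit 1
				gray_threshold .1
				error_bound .02
				nearest_count 1
				adc_bailout 1
				pretrace_start 1	// 1 = skip the presampling pass entirely
				pretrace_end 1
			}
		}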

(The titling font is mine. Neat, huh?)

As you can see, the one on the right has smoother gradation in the
shadows; the Arnold settings produce a dark shadow with a definite cutoff.

Render time was four times longer with the new settings, but some crude
preliminary testing shows that for complex scenes (read: "almost all of
them"), the new settings can be significantly faster, depending on how
they're tuned.

I don't pretend to know why these settings work, but I can take a stab
at a guess.

Standard MegaPOV radiosity seems to work by taking a whole lot of crappy
samples and interpolating between them; this produces a viable overall
lighting solution, but the interpolation process misses all the little
details, which is why it does a poor job of shading.

"Arnold" radiosity compensates for this by setting a low error_bound of
.3 or so. According to the docs, this means that the algorithm won't
allow more than about 30% error in the sampling, and this seems to be
accurate enough to pick out small shadows. As some people have noticed,
though, this can produce dark artifacts. Also, standard "Arnold"
radiosity uses the default nearest_count setting of 6 (or was it 10? one
of those); the nearest_count has to be this high to get good results,
but finding that many low-error samples to blend together can take a
long time, which is why Arnoldish renders can take so long.

The new "uber-Arnold" (or alternately "ELE", for "Extremely Low Error")
settings avoid artifacting by selecting only the best, most accurate
samples. As long as your count is high enough (more on that below), you
don't need a nearest_count higher than 1, because your samples are
already so good that they don't need averaging with other good samples;
this can provide a substantial speed benefit. Also, there's no need to
presample the scene, and in fact doing so has no benefit that I've been
able to see (aside from being able to progressively mosaic-view your
radiosity) and just wastes the time these settings would otherwise save.
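
If you do want that progressive mosaic preview back for test renders,
something along these lines should restore it, at the cost of the
presample time. The exact fractions are guesses on my part; the pretrace
values are fractions of the render size, so the preview starts coarse
and sharpens:

		pretrace_start .08	// first preview pass at 8% of render size
		pretrace_end .01	// refine the mosaic down to 1% before the real trace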

There is one drawback to ELE radiosity, though: for reasons I don't
really understand, it's very RAM intensive. I've had a number of test
renders roll over and die because the radiosity ate all my free memory,
and even for those of you who have huge amounts of swap space, what time
you gain in faster radiosity might well be lost to intensive drive thrashing.

I'd like some of the other people in here who've experimented with
"Arnold" radiosity to play with the new settings and see what happens.

Some notes on tuning:

- count should be 100 at minimum; 200-300 is even better. Setting count
lower than 100 will probably result in unacceptable blotching and speckling.

- nearest_count, pretrace, and adc_bailout should stay where they are.
(The adc_bailout of 1 provides some minor additional smoothing with no
significant increase in render time. Other than that, it has no visible
effect, and I'm not sure what it was originally supposed to do.)

- For best results, start with an error_bound of .1 and decrease it
until RAM consumption starts to be a problem. Of course, this requires a
lot of time-wasting test renders; if you have a lot of free RAM after
parsing (say, roughly 100 MB or more), you can probably jump to a really
low setting right away.

- An error_bound of .1 is only suitable for dirty test renders; .01
should give pleasing results, and .001 is good for those lengthy final
renders.

- To reduce RAM consumption and increase render speed, either reduce
count (but not below 100) or increase error_bound (but not above .1).
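
Putting the notes together, a final-render block might look something
like this (a sketch, not gospel; tune count and error_bound against your
free RAM as described above):

		global_settings {
			radiosity {
				brightness 1
				count 300	// 200-300 for quality; never below 100
				recursion_limit 1
				gray_threshold .1
				error_bound .001	// .01 for quick pleasing results, .001 for finals
				nearest_count 1	// leave as-is
				adc_bailout 1	// leave as-is
				pretrace_start 1	// no presampling
				pretrace_end 1
			}
		}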


Attachment: 'arncomp.png' (68 KB), a side-by-side comparison of the two renders.