Radiosity Voodoo - Volume 1: How Radiosity Really Works
From: clipka
Subject: Radiosity Voodoo - Volume 1: How Radiosity Really Works
Date: 31 Dec 2008 11:25:01
Message: <web.495b9c647f59727a483cfa400@news.povray.org>
I guess it's time to sum up a few things I have learned both during
experimenting with POV-Ray radiosity, and digging through (and tampering with)
the source code. So here comes volume 1 of my "black magic" tome. I fear it's
still as flawed as the 3.6 radiosity code was (I confess I didn't bother to
read it again after hacking it in), but it may give you some insight already:


    1. How Radiosity Really Works
    -----------------------------


The Basics:

Radiosity is intended to replace POV-Ray's classic material-based approach of
modeling ambient illumination with a more realistic geometry-based model, which
takes into account diffusely reflected (or possibly emitted) light from other
objects, giving non-uniform ambient illumination.

Although it is correct to speak of ambient illumination here, I will avoid that
term because of its ambiguity in the POV-Ray radiosity world, and instead speak
of "indirect illumination".

The very basic concept is that, in order to get the indirect illumination for a
certain point, you would shoot a vast number of rays in virtually every
direction, checking how much light comes from whatever object each ray happens
to hit.

As a straightforward implementation of this concept would lead to incredibly
long render times, it is of course not feasible without optimizations.
Furthermore, there are additional details to consider.
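
Before going into those details, here is roughly what switching radiosity on
looks like in a scene file. This is only a minimal sketch: an empty settings
block simply enables the feature with its defaults, and the individual keywords
are discussed in the sections below.

    global_settings {
      radiosity {
        // an empty block enables radiosity with all settings at their defaults
      }
    }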


Radiosity Samples:

The main optimization is to not compute the indirect illumination for every
point encountered, but to take only representative "samples", and to determine
the actual illumination for any specific point by interpolating between nearby
samples. This can be done because in the real world this indirect illumination
typically changes only very gradually.

In POV-Ray terms, this interpolation is called "re-use" of samples.
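
The amount of blending can be influenced from the scene file. The following is
only an illustrative sketch of the two main knobs, not a recommendation for
particular values; the geometric criteria behind re-use are described in the
next section.

    global_settings {
      radiosity {
        nearest_count 10   // how many nearby samples are blended into the result (default 5)
        error_bound 0.5    // lower values re-use samples over a smaller area (default 1.8)
      }
    }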


Re-Use:

Whether a sample can be re-used for a certain point in question depends on a
variety of geometric factors, the main ones being:

* The overall distance to nearby geometric features: This comes as a byproduct
of the shooting of rays for the sample, and is simply an average of the
distances the rays have travelled before hitting an object. A "harmonic mean" is
used (the inverse of the arithmetic mean of the inverses) so that nearby objects
have a higher influence on this value (see the small sketch at the end of this
section).

* The direction and distance to the nearest object: This comes as a byproduct of
the shooting of rays as well. A sample is deemed less re-usable for points in or
near this direction.

* The curvature of the object: This is measured by the difference in the normal
vectors of the sample and the point in question.

From all these factors, a weight is computed for the sample, and all samples
are then interpolated according to their weights.

(In practice of course, there are several mechanisms in place to quickly filter
out samples that cannot possibly have a nonzero weight for a given point.)
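
To illustrate the harmonic mean with a hypothetical example (this is just a
scene-language sketch of the math, not the actual source code): for ray
distances of 1, 1, 1 and 10 units, the arithmetic mean would be 3.25, while the
harmonic mean stays close to the nearby hits.

    // hypothetical illustration of the harmonic mean of four ray distances
    #declare HarmonicMean4 = function(d1, d2, d3, d4) { 4 / (1/d1 + 1/d2 + 1/d3 + 1/d4) }
    #debug concat("harmonic mean: ", str(HarmonicMean4(1, 1, 1, 10), 0, 3), "\n")
    // prints 1.290 - the single distant hit barely raises the result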


Test Ray Pattern:

Shooting rays in virtually every direction is, of course, practically
impossible. Therefore, only a limited number of rays can be shot, and their
directions should be chosen carefully. The algorithm that chooses these
directions should ideally satisfy the following criteria:

* The algorithm should be fast. Doing things the stupid way but fast may
actually be more clever than spending time on finding the best way.

* The rays should be biased towards the normal, to account for the fact that
light coming in at shallower angles contributes less to the overall brightness
of a surface. This could be modeled by multiplying the brightness of each ray
with a corresponding correction term, but modeling it instead by biasing the
distribution of the rays is much more elegant, because it does without the
correction term (which would be a trigonometric function and therefore quite
expensive in terms of computing time), and also does not waste time on shooting
many rays in directions that contribute little to the total brightness.

* Aside from this bias, the rays should be distributed very uniformly; first, to
avoid wasting computing time on rays that happen to be almost identical and
therefore are likely to contribute the same brightness and color; and second, to
prevent parts of the scene from having a higher impact on the computed
brightness (because they happen to receive more rays) than others.

* The algorithm should provide a way to control the total number of rays shot.

* The algorithm should be non-random, to be suitable for animations.

(For certain reasons it would be desirable to have some random element in the
distribution of rays; however, this typically collides with several of the above
criteria and is therefore not done in POV-Ray.)

POV-Ray uses a precomputed table of directions specially designed to meet all of
the above. However, the maximum number of rays shot is limited by the size of
this table, which contains "only" 1600 entries.
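
In the scene file, the number of rays shot per sample is set with the count
keyword; a minimal sketch (the value is only illustrative):

    global_settings {
      radiosity {
        count 400   // rays shot per sample; cannot exceed the 1600 table entries
      }
    }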


Light Sources:

Radiosity significantly changes how lighting works, and allows objects
themselves to act as light sources.

POV-Ray combines radiosity with its classic lighting model. This allows
beginners to first experiment with it by just "pimping up" classically-lit
scenes with radiosity a bit. However, radiosity can also be used stand-alone
(although this sacrifices soft highlights unless one is familiar with other
advanced techniques), by making objects themselves emit light. Details on how to
do this are mentioned later; a minimal sketch follows below.
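
As a first impression, here is a minimal sketch of an object acting as a light
source under radiosity, using 3.6 syntax where a high ambient finish value is
treated as emitted light (the shape and values are only illustrative):

    // a glowing ceiling panel that lights the scene via radiosity alone
    box {
      <-1, 2.99, -1>, <1, 3, 1>
      pigment { rgb 1 }
      finish { ambient 3 diffuse 0 }   // high ambient = light emitted by the surface
    }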


Recursions:

When POV-Ray starts taking radiosity samples, there is a chicken-and-egg
problem: To determine the indirect illumination at a sample point, it needs to
shoot rays and determine how much light comes from those directions - which in
turn may depend on the indirect illumination there, which cannot be computed
right now due to a current lack of (completed) samples.

POV-Ray solves this problem in a pragmatic way, by doing a limited-depth
recursion, keeping track of a separate set of samples for each level of depth.

The levels of depth can be seen as some kind of "generations" (although the
individual generations are not computed in sequential order, but rather as
needed):

When shooting rays for a first-generation sample, the world is assumed to be
devoid of any indirect illumination, and any object hit is assumed to be
illuminated only by classic lighting, maybe reflect or refract, and maybe emit
light itself.

When shooting rays for a second-generation sample, the world is assumed to be
indirectly illuminated as described by the first-generation samples.

When shooting rays for a third-generation sample, the world is assumed to be
indirectly illuminated as described by the second-generation samples - and so
on, until the desired number of generations has been reached.

(Note that these "generations" do *not* coincide with radiosity pre-trace
passes.)
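
The number of generations is what the recursion_limit keyword controls; again
just a minimal sketch with an illustrative value:

    global_settings {
      radiosity {
        recursion_limit 2   // how many "generations" of indirect light are computed
      }
    }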


Pre-Trace:

The intention of the radiosity sampling pre-trace is to take all samples that
the final render may ever need, so that no new samples have to be taken during
the final render, as this may cause visible "jumps" in color or brightness
between areas rendered before and after a new sample was taken.

To make sure the collection of samples is dense enough, by default the pre-trace
actually takes more samples than it would normally deem necessary. This is
achieved by running multiple passes, and collecting additional samples on each
pass until it finds a certain minimum number of samples "within reach" of each
spot it examines.

To make sure that the distribution of samples is comparatively homogeneous, the
pre-trace starts with a very coarse pass, and gradually increases in resolution.
In addition, the pre-sampling uses a good deal of jitter to avoid any clear
patterns in the sampling distribution.

The pre-trace may not always be successful at achieving these goals, so
sometimes taking new samples during the final trace cannot be totally avoided.
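
The resolution of the first and last pre-trace pass is controlled with
pretrace_start and pretrace_end; a minimal sketch with illustrative values:

    global_settings {
      radiosity {
        pretrace_start 0.08   // first, coarse pass: sample spacing of about 8% of the image size
        pretrace_end   0.01   // final, fine pass: sample spacing of about 1% of the image size
        always_sample off     // rely on pre-trace samples during the final render where possible
      }
    }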


--- (End of Volume 1) ---



From: Christoph Hormann
Subject: Re: Radiosity Voodoo - Volume 1: How Radiosity Really Works
Date: 21 Jan 2009 02:29:50
Message: <4976ceee$1@news.povray.org>
clipka wrote:

> I guess it's time to sum up a few things I have learned both during
> experimenting with POV-Ray radiosity, and digging through (and tampering with)
> the source code. So here comes volume 1 of my "black magic" tome. I fear it's
> still as flawed as the 3.6 radiosity code was (I confess I didn't bother to
> read it again after hacking it in), but it may give you some insight already:
> 
> [...]

This is a very comprehensive description of radiosity workings in
POV-Ray, but I think you are mistaken about one central point: The
radiosity pretrace will not be able to (and was never designed to) take
all samples necessary during pretrace in a real-life scene. The main
reason is that a scene with edges and corners in the geometry (i.e.
surfaces with infinite curvature) would require an infinite sample
density at these corners, and since the final render pass practically
always shoots camera rays that have not yet been traced during pretrace
(anti-aliasing, jitter), you will need additional samples in the render.
To avoid this in practice you would have to raise the error_bound during
the final trace by a huge amount (possibly adaptively, as is possible in
the latest MegaPOV).

-- Christoph



From: clipka
Subject: Re: Radiosity Voodoo - Volume 1: How Radiosity Really Works
Date: 21 Jan 2009 12:35:00
Message: <web.49775c3f8195ec9abdc576310@news.povray.org>
Christoph Hormann <chr### [at] gmxde> wrote:
> This is a very comprehensive description of radiosity workings in
> POV-Ray, but I think you are mistaken about one central point: The
> radiosity pretrace will not be able to (and was never designed to) take
> all samples necessary during pretrace in a real-life scene.

It may not have been *designed* to do so, but for good quality it *must* do so
(at least to some good degree). Otherwise you get lots of artifacts everywhere
near new samples taken during final render, because some nearby pixels were
computed earlier, but would have been influenced by the new sample if it had
been there already.

There are a few things to note here:

- When talking about samples *needed*, I mean needed by the *algorithm*, not by
*theoretical* considerations.

- The algorithm in POV is designed so that it will enforce a certain minimum
effective radius of samples, so in this sense it will never *need* an infinite
number of samples.

- I'm advocating nothing more than *good* coverage, not a *perfect* one.

There's a point to be made here though, and it gives me an idea of how to
automatically get a pretrace that is exhaustive enough but not too slow:

Samples taken during the final render that affect only a single pixel probably
don't hurt anyone. Artifacts appear only where samples are taken that, by the
algorithm, should also affect other pixels already calculated.

So there must be a way to use this fact to come up with a criterion for when to
stop pretracing even though sample coverage still appears to be incomplete.

Note however that just stopping the pretrace at the final render's pixel size
doesn't suffice: in areas seen from a very shallow angle, a spot that still
needs samples may be "thin" enough on screen in one direction to slip through a
pixel-sized pretrace, but still "wide" enough in the other direction to do harm.

So the radiosity tracing code should report the "apparent" size of the largest
sample gathered for a pretrace ray (or some value directly enough related to
it), and if the pretrace finds that it no longer needs to gather "oversized"
samples, then it is (with a certain probability) safe to leave sampling in that
area to the final trace.

(I *love* having inspiring arguments with you guys!)



From: Thomas de Groot
Subject: Re: Radiosity Voodoo - Volume 1: How Radiosity Really Works
Date: 5 Feb 2009 03:19:31
Message: <498aa113$1@news.povray.org>
Just found this now. I am going to consult this in detail. Excellent work. 
Thank you very much indeed.

Thomas



From: clipka
Subject: Re: Radiosity Voodoo - Volume 1: How Radiosity Really Works
Date: 6 Feb 2009 11:35:00
Message: <web.498c65fd8195ec9abdc576310@news.povray.org>
"Thomas de Groot" <tDOTdegroot@interDOTnlANOTHERDOTnet> wrote:
> Just found this now. I am going to consult this in detail. Excellent work.
> Thank you very much indeed.

Well, I guess I need to post a revised version; some of the observations &
guesses turned out wrong, as I found out by trying to optimize the code
accordingly.

For example, to my great dismay the low error factor can *not* be used for the
same effect as the nearest reuse count when always_sample is off.

I still don't really understand *why* that is though (or, to be precise, I do
understand why it can make *some* difference, but I have no clue why it gives
the results it does).


