POV-Ray : Newsgroups : povray.beta-test : Radiosity status : Re: Radiosity status
From: nemesis
Date: 26 Dec 2008 22:55:01
Message: <web.4955a65ba9104c11180057960@news.povray.org>
Warp <war### [at] tagpovrayorg> wrote:
> The biggest problem with "radiosity" in POV-Ray
> has always been that there's no way to distribute it because if you make
> different instances render different parts of the program, the "radiosity"
> lighting will be calculated differently in them, and a visible difference
> will appear between the image sections rendered by different instances.

It seems this problem is related to the fact that the calculation only takes into
account things in sight, right?  Like the problem I mentioned before of having
to recalculate things if the scene is shot from another angle, even with a
loaded rad file.

Ward's paper mentions his algorithm is meant for view independence, but only to
an extent.  Here's what he says on the subject, from page 2:

http://radsite.lbl.gov/radiance/papers/sg88/paper.html (the gifs have diagrams)

"For the sake of efficiency, indirect illuminance should not be recalculated at
each pixel, but should instead be averaged over surfaces from a small set of
computed values.  Computing each value might require many samples, but the
number of values would not depend on the number of pixels, so high resolution
images could be produced efficiently.  Also, since illuminance does not depend
on view, the value could be reused for many images."
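The averaging Ward describes can be sketched as a simple lazy cache: reuse a nearby stored illuminance value when one exists, and only fire the expensive hemisphere sampling when it does not.  This is just a toy sketch of the idea, not POV-Ray's or Radiance's actual implementation; the class name, the flat-list storage (Radiance uses an octree) and the plain distance test (the real error metric also weighs surface normals and harmonic distance) are all my simplifications:

```python
import math

class IrradianceCache:
    """Toy sketch of Ward-style illuminance caching (simplified: a flat
    list with a distance test instead of an octree and error metric)."""

    def __init__(self, max_reuse_dist):
        self.samples = []                  # (position, value) pairs
        self.max_reuse_dist = max_reuse_dist

    def lookup(self, pos):
        """Return the nearest cached value, or None if too far away."""
        best, best_d = None, self.max_reuse_dist
        for p, value in self.samples:
            d = math.dist(pos, p)
            if d < best_d:
                best, best_d = value, d
        return best

    def shade(self, pos, compute_indirect):
        cached = self.lookup(pos)
        if cached is not None:
            return cached                  # reuse: no new rays traced
        value = compute_indirect(pos)      # expensive hemisphere sampling
        self.samples.append((pos, value))  # store for later pixels/frames
        return value
```

The point of the quote is visible here: the number of stored samples depends on the scene and the reuse radius, not on the number of pixels, so a higher-resolution render mostly hits the cache.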

So far, so good:  pixel and view independence mean reusing the same saved
calculations for any resolution and also from any camera angle and position.
But in the next paragraph, the latter is compromised:

"How can we benefit from a view-independent calculation in the inherently
view-dependent world of ray tracing?  We do not wish to limit or burden the
geometric model with illuminance information, as required by the surface
discretization of the radiosity method.  By the same token, we do not wish to
take view-independence too far, calculating illuminance on surfaces that play
no part in the desired view.  Instead, we would like to take our large sample
of rays only when and where it is necessary for the accurate computation of an
image, storing the result in a separate data structure that puts no constraints
on the surface geometry."

He's a performance-conscious guy and did tackle Kajiya's path tracing method on
the first page.

It seems this feature, while a nice optimization trick, is what prevents one
from reusing a fully saved rad file to render the scene from a different angle
without taking any more samples.  It is probably also the root cause of not
being able to simply break an image into sections, render them separately and
merge the final renders into a single image later:  each instance does not know
it should be taking samples from areas not seen in its particular view.
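The seam problem can be illustrated with a hypothetical toy model (none of this is POV-Ray code; the "cache" here is just a deterministic stand-in for the sample sequence each instance lays down as it shades its own pixels):

```python
import random

def render_tile(pixels, seed=42):
    """Hypothetical toy model of one render instance: it lazily lays down
    indirect-illumination samples only at the pixels it actually shades,
    so its cache contents depend on where its traversal started."""
    rng = random.Random(seed)  # same scene, same settings for everyone
    cache = {}
    for p in pixels:
        # stand-in for an expensive sampling pass whose result depends on
        # the sample sequence taken so far by *this* instance
        cache[p] = rng.random()
    return cache

# One instance rendering the whole image vs. two instances splitting it:
full = render_tile(range(0, 8))
left = render_tile(range(0, 4))
right = render_tile(range(4, 8))

# The left tile happens to agree with the full render at its first pixel,
print(left[0] == full[0])    # True
# but the right tile starts a fresh sample sequence at pixel 4, so its
# cached value there differs from the full render's: a seam at the boundary.
print(full[4] == right[4])   # False
```

Even with identical scenes and settings, the instances' caches diverge because each one only ever sampled its own view of the image.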

Perhaps this optimization was powerful then, but it is not quite as fitting in
an era of multicore processors and parallel rendering.  Would it be possible to
simply remove this view-dependent constraint and allow for full view
independence in the current implementation?  How badly would performance drop?

