  Re: Radiosity  
From: Jim McElhiney
Date: 28 Jun 2003 11:41:40
Message: <3EFDB736.F5B7421E@excite.com>
Mike Andrews wrote:

>
>
> Sorry Warp, I believe he does have a valid point, even if he is not
> expressing himself very well.
>
> In render.cpp, function Start_Tracing_Radiosity_Preview(), line about 1022:
> Thus random noise is introduced into the radiosity trace. And it is
> different for each frame of an animation. Some scenes show it more than
> others, but it is there.
>
> When I compile custom versions I always remove this jitter ...

Hi.

(BTW, I'm the guy who wrote the radiosity subsystem way back when)

There are many problems with getting the radiosity solution to be
smooth from frame to frame in an animation, of which this is only
one, and not the biggest.
(Without the jitter, at least the same places would be tried in each
frame's pre-pass for static-camera scenes, which is a plus, but it is
no help for moving cameras or for sample points on moving objects.)
The biggest problem with keeping the radiosity samples from frame
to frame is this:
- if you use the same old samples from frame to frame, the lighting
won't change with the moving objects in the animation;
- if you don't, you will get very different sample locations from frame
to frame, which will make the lighting shades flicker.  No matter how
low the error bound, in practice you will get some error, and it will be
different in each frame.

The theoretically correct solution to this, I speculate, is to extend the
ray sampling from 3D to 4D.  Right now each sample has a location
and a range of effectivity in X, Y, and Z; it could also have one in time.
The octree structure would become a 16-tree (hexadecatree?).
This part is easy: I had already taken a 2D search algorithm and
extended it to 3D.  The software to try this isn't really all that hard;
the problem is coming up with a rule for the range of time over which
a sample is good, based on the measured magnitude of the effect of
the moving objects on it.  X, Y, and Z are symmetrical in this respect,
but time isn't.
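
Just to make that concrete, here is a rough sketch of what such a
space-plus-time sample and a 16-tree node might look like.  Purely
hypothetical; this is not code from the POV-Ray source:

    // Hypothetical sketch only -- not POV-Ray's actual data structures.
    // An irradiance sample valid over a spatial range and a time interval.
    struct Sample4D
    {
        double x, y, z;        // sample location
        double radius;         // spatial range of effectivity, as in the 3D cache
        double t_centre;       // centre of the time interval (e.g. a frame number)
        double t_half_range;   // +/- range in time over which the sample is usable
        double illum[3];       // gathered diffuse illumination (RGB)
    };

    // The octree would become a 16-tree: splitting each of the four axes in
    // two gives 2^4 = 16 children per node instead of the octree's 2^3 = 8.
    struct Node16
    {
        Node16 *children[16];  // one child per (x, y, z, t) half-space combination
        // ... per-node sample list, bounding information, etc.
    };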

I speculate as follows:
First, you'd have to have every varying thing in the animation expressed
as a function of time that could be evaluated at arbitrary (random)
times.  This is possible, but a pretty major change if there is software
that makes decisions on the fly as it goes from frame to frame.  I'm not
an expert on the things you can do during an animation, so I can't
really say how hard this would be.  Object sort buffers would be one
obvious problem point, as would anything else in the way of per-frame
overhead.  My suggested solution would require sampling through time,
so T would have to vary from ray to ray.  That might be a show-stopper
right there, but let's continue:
- You'd do a very large number of rays per sample location, sampling
in all directions (as now) as well as randomly across time.
- Rays which were affected by time variance would be tagged, for
example, "I hit a moving object".  (There are lots of other things that
could happen too.)
- You would calculate the variance and sensitivity of the gathered and
cached sample with respect to the time-variant objects, e.g., "this
sample of 8000 rays would not vary by more than 1% if no moving
objects had been hit".
- From here, you might try different things: if the variance due to
those objects were high, subdivide the rays by time and create two
samples.  Or split it into three: the range before we hit the moving
object, the range during, and the range afterwards.  Recurse till the
change in contribution is negligible.  (A rough sketch follows this list.)
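
A rough C++ sketch of that last step; shoot_rays() and the
moving-object tag are my own placeholders, not anything in the
POV-Ray source:

    #include <utility>
    #include <vector>

    struct RayResult
    {
        double brightness;   // light gathered along this ray
        bool   hit_moving;   // tagged: "I hit a moving object"
    };

    // Assumed helper: shoot 'count' rays in random directions at random
    // times within [t0, t1] and return what they gathered.
    std::vector<RayResult> shoot_rays(double t0, double t1, int count);

    // Gather a sample over [t0, t1]; if too much of its light comes from
    // moving objects, split the interval and make one cached sample per half.
    void build_time_samples(double t0, double t1, int count,
                            std::vector<std::pair<double, double> > &intervals)
    {
        std::vector<RayResult> rays = shoot_rays(t0, t1, count);

        double total = 0.0, from_moving = 0.0;
        for (const RayResult &r : rays)
        {
            total += r.brightness;
            if (r.hit_moving)
                from_moving += r.brightness;
        }

        // Crude sensitivity metric: the fraction of the gathered light
        // that came from time-variant objects.
        double sensitivity = (total > 0.0) ? from_moving / total : 0.0;

        if (sensitivity < 0.01 || (t1 - t0) < 1.0)
            intervals.push_back(std::make_pair(t0, t1));   // good enough as one sample
        else
        {
            double mid = 0.5 * (t0 + t1);                  // recurse on each half
            build_time_samples(t0, mid, count, intervals);
            build_time_samples(mid, t1, count, intervals);
        }
    }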

So, in the end, you might have an irradiance cache sample that
was good to within an error of 0.30 if you were at location
(4,5,6) +/- 1, and a time of frame=17 +/- 4.
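
The reuse test for such a sample would then just be a range check in
space and in time, something like this (hypothetical snippet):

    #include <cmath>

    // Is the query point within the sample's spatial range, and the query
    // frame within its time range?
    bool sample_covers(double sx, double sy, double sz, double radius,
                       double t_centre, double t_half_range,
                       double px, double py, double pz, double frame)
    {
        double dx = px - sx, dy = py - sy, dz = pz - sz;
        return std::sqrt(dx * dx + dy * dy + dz * dz) <= radius
            && std::fabs(frame - t_centre) <= t_half_range;
    }

    // The sample above, (4,5,6) +/- 1 at frame 17 +/- 4, would pass this
    // test for any point within distance 1 of (4,5,6) during frames 13..21.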

The calculation time for the first frame would be huge, but it would
be much faster after that!

So far as I know, no one has ever tried this.  There might be an
academic paper in it if someone made it work.

A totally different approach might be to calculate the samples all
at the beginning, without any of the animation-dependent objects
(like an empty room), then approximate the incremental effects of
the objects and composite their change in light contribution into
the system.  It's not clear in my own mind how this would work, but I
have a hunch that this is the type of solution an animation studio
would follow, since it would likely be practical enough for
realistic-looking scenes (a relatively coherent space with some static
and some moving objects and characters in it), and it would scale up
well and parallelize well.
This technique has been used before for light sources:  they composite
perfectly.  That is:  you can do a radiosity calculation separately for
each lamp, then on the fly (in real time) composite the final lighting
values and render the scene with any combination of lights turned on.
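
The compositing itself is trivial because light transport is linear;
roughly like this (illustrative only, nothing like this exists in
POV-Ray):

    #include <cstddef>
    #include <vector>

    struct RGB { double r, g, b; };

    // irradiance[lamp][i]: radiosity solution for lamp 'lamp' at sample point i.
    // weight[lamp]:        0 for "off", 1 for "on", or any dimmer setting.
    std::vector<RGB> composite(const std::vector< std::vector<RGB> > &irradiance,
                               const std::vector<double> &weight)
    {
        std::size_t points = irradiance.empty() ? 0 : irradiance[0].size();
        std::vector<RGB> out(points, RGB{0.0, 0.0, 0.0});
        for (std::size_t lamp = 0; lamp < irradiance.size(); ++lamp)
            for (std::size_t i = 0; i < points; ++i)
            {
                out[i].r += weight[lamp] * irradiance[lamp][i].r;
                out[i].g += weight[lamp] * irradiance[lamp][i].g;
                out[i].b += weight[lamp] * irradiance[lamp][i].b;
            }
        return out;
    }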

At the less aggressive end of the spectrum, you could calculate all
the radiosity samples as you do now, along with some sort of metric
of how sensitive each one is to animation-variant objects (for
example, samples where over 5% of the rays shot hit a moving
object).  Those sensitive samples would be deleted after each frame,
and all other samples would be kept from frame to frame.
This one (unlike the other approaches above) might actually be
mostly doable, at least for some simpler classes of animation, like
moving objects and a moving camera in a largely coherent setting.
It would be hard to calculate the sensitivity for some kinds of things
that vary in an animation (a light gets turned off?), but fairly easy
for others.  It might have to be specified directly in the scene file by
the animator.
Quick and easy to prototype, relatively speaking.
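
The per-frame pruning step could be as simple as this (hypothetical
names, just to show the shape of it):

    #include <algorithm>
    #include <vector>

    struct CachedSample
    {
        // ... position, illumination, ranges, etc. ...
        double moving_hit_fraction;   // fraction of this sample's rays that hit a moving object
    };

    // Throw away samples that are too sensitive to moving objects; keep
    // the rest for the next frame.
    void prune_after_frame(std::vector<CachedSample> &cache, double threshold = 0.05)
    {
        cache.erase(std::remove_if(cache.begin(), cache.end(),
                                   [threshold](const CachedSample &s)
                                   { return s.moving_hit_fraction > threshold; }),
                    cache.end());
    }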

Jim


