POV-Ray : Newsgroups : povray.programming : Re: A box with no lights.
  Re: A box with no lights.  
From: Nathan Kopp
Date: 27 Jan 1999 00:41:40
Message: <36AEA792.E89CF5AB@Kopp.com>
Not to be too blunt, but: "been there, done that".  It's a good idea (or at least
I thought so... I thought it was a great idea, in fact), but it doesn't work
very well in practice.  I'll give some background and details.  First, the
background.  I'm working on adding photon mapping to POV-Ray to simulate
reflective and refractive caustics.  A guy named Henrik Wann Jensen was the first
to implement the photon map idea.  Photon mapping is a way of storing information
from a backwards ray-tracing step in a tree structure called a kd-tree.  You
store color/brightness, location, and direction of the 'photon' packet.  Then,
during the rendering phase, you use the n-closest photons (within a specified
maximum radius) just as you would use light from a light source.
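To make that concrete, here is a minimal sketch (in Python, purely illustrative; POV-Ray itself is C, and all names here are my own) of storing photon records and estimating irradiance from the n-closest photons within a maximum radius:

```python
import math

# Illustrative photon record: position and incoming direction of the
# 'photon' packet, plus the flux it carries (one float for brevity).
class Photon:
    def __init__(self, position, direction, power):
        self.position = position    # (x, y, z) hit point
        self.direction = direction  # incoming direction
        self.power = power          # flux carried

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def estimate_irradiance(photons, point, n_closest, max_radius):
    """Average the n-closest photons within max_radius of 'point'."""
    nearby = sorted(
        (p for p in photons if dist2(p.position, point) <= max_radius ** 2),
        key=lambda p: dist2(p.position, point),
    )[:n_closest]
    if not nearby:
        return 0.0
    # Density estimate: total gathered flux over the disc spanned by
    # the farthest gathered photon.
    r2 = max(dist2(p.position, point) for p in nearby)
    return sum(p.power for p in nearby) / (math.pi * r2) if r2 > 0 else 0.0
```

This brute-force search is only for illustration; the whole point of the kd-tree discussed below is to make the n-closest query fast.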

Here's the URL for his web page:
http://www.gk.dtu.dk/home/hwj/

I thought it would be good to extend this from reflection & refraction to a
monte-carlo approach to compute the entire radiance for the scene (at least all
of the indirect illumination).  I even implemented it, which wasn't too
difficult since I already had all of the backwards ray-tracing stuff mostly
done for reflective & refractive caustics.

Jensen does use a global photon map to aid in indirect illumination
calculations... I'll explain that later.

Ok... details:

Steve wrote:
> 
> I think this misses the point entirely.  The biggest problem with POVs
> so-called "radiosity" (this is badly named, since the word "radiosity" has
> been traditionally connected with some other totally different algorithm of
> which I won't go into here)  is that it still relies on AMBIENT values entered
> by the user.

I agree!  But I think the current system (the basic concept has been well-tested
in other systems) is good and just needs a few adjustments.

> My suggestion is this:
> When radiosity is turned on, it automatically makes all ambient = 0.0 on all
> surfaces (those who have been flagged to not "require it" by the user)

Not necessary.  If the user wants to totally eliminate the regular ambient,
they should set "ambient_light" to zero in the global settings.  Of course,
this wouldn't work with the current system, but that could be changed.

> I suggest tracing hords of rays out of the light sources. Then storing all
> these intersection points.

Hordes is right!  You'd be amazed at how many you'd need to get a good image.
How many terabytes of RAM do you have again?  ;-)
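A toy sketch of that emission pass (my own illustration, not anyone's actual code): shoot rays in uniform random directions from a point light and store each hit with an equal share of the light's power.  The `trace` callback is a stand-in for a real scene-intersection test.

```python
import math
import random

def uniform_sphere_direction(rng):
    """Uniform random unit vector (uniform z, uniform azimuth)."""
    z = rng.uniform(-1.0, 1.0)
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(theta), r * math.sin(theta), z)

def emit_photons(light_pos, light_power, n_photons, trace, rng=None):
    """Emit n_photons from a point light; store (hit, direction, power)."""
    rng = rng or random.Random(0)
    stored = []
    for _ in range(n_photons):
        d = uniform_sphere_direction(rng)
        hit = trace(light_pos, d)          # returns a hit point or None
        if hit is not None:
            # Each stored photon carries an equal share of the flux.
            stored.append((hit, d, light_power / n_photons))
    return stored
```

Even this toy makes the storage problem visible: every stored hit is a record, and the per-photon power shrinks as 1/n, so you need enormous n before the averages stop being noisy.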

Again, I want to emphasize that I thought this would be a great idea, but when
I tried it, it just didn't work as well as planned.

> There
> should be alot of points stored where things are changing, and little stored
> where things are not.  Use adaptive mechanisms to control the density of these
> points in certain regions of the scene.  High density around shadow boundaries
> and the like; low density around flat surfaces that are flatly lit.

This may be possible, but it would take a lot of programming. (You'd want to do
a PhD thesis on it!)  Some of these details for reducing the number of points
needed by using adaptive densities and other techniques might make this a
feasible system, but it would not be trivial to implement.

> You will thus have a large database of points that have information about
> light coming _at_ them.   Then during the trace of the image, this information
> is used in the same way that regular light-source information is used.

Yes.  This is how the photon map model works.  But you need LOTS of points to
get a good result.  Too few points, when coupled with the monte-carlo approach,
leads to VERY splotchy results.  (And by number of points, I mean 200+ need
to be averaged at each intersection.)

> How? --->  Store these database points in an octree.  During the regular
> tracing pass,  select the nth-closest points out of the database for a given
> pixel intersection.

One good way to store 3d points for quick access to the n-closest is a balanced
kd-tree.  Other octree structures might work, too... if there are any that would
work very well, let me know, since it might speed up the photon-mapping code.
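For reference, a compact balanced kd-tree with an n-closest query might look like the sketch below (illustrative only, not the photon-mapping code; I'm using dictionaries for nodes and a max-heap keyed on negative squared distance):

```python
import heapq

def build_kdtree(points, depth=0):
    """Balanced 3D kd-tree: split on the median, cycling axes."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def n_closest(node, query, n, heap=None):
    """Return the n points nearest to 'query', nearest first."""
    if heap is None:
        heap = []
    if node is None:
        return [p for _, p in sorted(heap, key=lambda e: -e[0])]
    d2 = sum((a - b) ** 2 for a, b in zip(node["point"], query))
    if len(heap) < n:
        heapq.heappush(heap, (-d2, node["point"]))
    elif d2 < -heap[0][0]:
        heapq.heapreplace(heap, (-d2, node["point"]))
    axis, split = node["axis"], node["point"][node["axis"]]
    near, far = ((node["left"], node["right"])
                 if query[axis] < split else (node["right"], node["left"]))
    n_closest(near, query, n, heap)
    # Descend the far side only if the splitting plane is closer than
    # the current worst candidate -- this pruning is where the speedup
    # over a brute-force scan comes from.
    if len(heap) < n or (query[axis] - split) ** 2 < -heap[0][0]:
        n_closest(far, query, n, heap)
    return [p for _, p in sorted(heap, key=lambda e: -e[0])]
```

With n in the hundreds per shading point, as mentioned above, this query runs millions of times per image, which is why its constant factor matters so much.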

> Favor those who have the most similar normal directions.

This might not be good... it could introduce a bias to the rendering and lead to
inaccurate results.  You could favor some, but you'd want to do so in accordance
with the surface's BRDF.

> Use this information as if it is regular
> light-source information.  Ignore ambient values altogethor.

Yes!!!  However, I think that the data gathered from the current "radiosity"
sampling technique could be used in a better way, so that ambient could be
ignored and direction could be utilized.  I'll work on it soon, but right now
I need to do more work on the photon mapping stuff (I'm doing a directed
study for school).

> 1. Totally removes this ambiguous, user-defined "ambient value" from running
> the game.  Which in turn, removes the annoying "pastyness" from scenes that
> ambient gives you.

This is a plus.

> 2. Is totally, 100%, preprocessed before the first pixel is even rendered.
> Essentially, not slowing down the tracing process at all! No new rays are
> traced during final pass!

Not totally true.  You still need to query the database (which would be bigger
than you think).  This can be quite time-consuming, even with a well-balanced
octree (or kd-tree in my implementation).

Also, you'll still have to do work to average the many photons each time you
want to figure out how much light is hitting an object.

> 3. Has all the powerful simulation effects that monte-carlo gives you.

I don't like monte-carlo.  Too noisy.  (And too slow if you want to reduce the
noise.)  Some monte-carlo is good, of course... but I like jitter better than
pure monte-carlo.  :-)
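A tiny one-dimensional illustration of the difference: jittered (stratified) sampling keeps the randomness but guarantees even coverage of the domain, which is why it looks less noisy than pure monte-carlo.

```python
import random

def pure_monte_carlo(n, rng):
    """n independent uniform samples -- can clump anywhere."""
    return [rng.random() for _ in range(n)]

def jittered(n, rng):
    """One random sample inside each of n equal strata -- never clumps."""
    return [(i + rng.random()) / n for i in range(n)]
```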

> 4. Any level of bounce recursion can be calculated in any scene in a very
> simple and elegant way.  (Take a genuine interest in this post and I will let
> the secret out.)

This is true.

Like I said earlier, I implemented a global indirect lighting solution using
photon maps.  I tested it on a cornell-box scene.  Normally, the scene would
take about 50 seconds to render.  With my photon-mapping solution, it took
7 minutes and 50 seconds to render.  :-(  Much of this time was spent tracing
'hordes' of rays from the single light source.  Probably around 20 megabytes
were used for the photon database.  And the result was very splotchy and just
plain ugly.  Then, I rendered it with POV's radiosity feature.  The result
looked nice and took under two minutes to render.  That scene eventually
became my 'box with no lights' scene.

So... how does Jensen use photon maps to aid in indirect 'radiosity'
illumination?  He uses a very low-density global photon map, and uses the
directions stored in it to direct the samples shot when doing a POV-Ray-type
"radiosity" calculation.  This lets you shoot fewer samples per point without
a loss in image quality.  That, in turn, lets you shoot samples from more
points, producing better overall image quality with the same total number
of samples.
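A very rough sketch of that idea follows; the azimuth-binning scheme is my own illustration, not Jensen's actual method.  The point is just to bias the sampled directions toward where the sparse global map says indirect light arrived from.

```python
import math
import random

def sample_direction_from_photons(photon_dirs, n_bins=8, rng=None):
    """Draw a hemisphere direction biased toward stored photon azimuths."""
    rng = rng or random.Random(0)
    # Histogram the incoming-photon azimuths; the +1 per bin keeps
    # every direction reachable even with no photons in a bin.
    counts = [1] * n_bins
    for (x, y, z) in photon_dirs:
        phi = math.atan2(y, x) % (2.0 * math.pi)
        counts[int(phi / (2.0 * math.pi) * n_bins) % n_bins] += 1
    # Pick a bin with probability proportional to its photon count.
    pick, acc = rng.uniform(0.0, sum(counts)), 0.0
    for i, c in enumerate(counts):
        acc += c
        if pick <= acc:
            break
    # Uniform azimuth within the chosen bin, cosine-weighted elevation.
    phi = (i + rng.random()) * 2.0 * math.pi / n_bins
    z = math.sqrt(rng.random())
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)
```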

Again, I want to emphasize that I still think this could be a viable idea.
However, there are many things (primarily database size, creation time, and
search time) that need to be addressed before it will work well.

-Nathan

