Subject: Re: A box with no lights.
From: Steve
Date: 28 Jan 1999 18:15:35
Message: <36b0de3a.4165903@news.povray.org>
On Wed, 27 Jan 1999 00:43:46 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:


>
>Here's the URL for his web page:
>http://www.gk.dtu.dk/home/hwj/
>

The kd-tree sounds important.  Do any of Jensen's theses explain it in
detail?  I find his images with just the caustics absolutely beautiful.


>
>I thought it would be good to extend this from reflection & refraction to a
>monte-carlo approach to compute the entire radiance for the scene (at least
>all of the indirect illumination).  I even implemented it, which wasn't too
>difficult since I already had all of the backwards ray-tracing stuff mostly
>done for reflective & refractive caustics.
>

This is interesting.  I'll admit that the ideas I posted are only one possible
approach.  The other way I have been trying to keep secret; I intend to
introduce its results in my master's thesis.  You can store photon maps in the
database, yes.  But another way is to store "contributing points."  These are
like dim light sources on surfaces.  Each contains complex information about
light _leaving_ that point.  Normally, this would be a function defined over
the hemisphere around the surface normal where the point resides.  To get
refraction, you need only extend the function to the full sphere of
directions.  The problem, then, is having to trace more rays during the
regular ray-tracing pass.
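
To make that concrete, here is a rough C sketch of what one of these
"contributing points" might look like.  Everything here is hypothetical (the
names and the simple direction-bin table are my own, not anything from
POV-Ray or Jensen):

    /* A "contributing point": a dim secondary light source sitting on a
     * surface.  Light _leaving_ the point is tabulated over direction
     * bins covering the sphere (a hemisphere suffices for opaque,
     * non-refractive surfaces). */
    #define THETA_BINS  8   /* polar-angle bins */
    #define PHI_BINS   16   /* azimuthal-angle bins */

    typedef struct {
        double pos[3];      /* location on the surface */
        double normal[3];   /* surface normal at that location */
        /* RGB radiance leaving the point, per direction bin */
        double exitant[THETA_BINS][PHI_BINS][3];
    } CONTRIB_POINT;

During the regular ray-tracing pass you would look up the bin matching the
direction from the contributing point to the shading point, just as you would
query a regular light source.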


>
>I agree!  But I think the current system (the basic concept has been
>well-tested
>in other systems) is good and just needs a few adjustments.
>

What could we call POV's technique?  Name-wise, I mean.  Radiosity is not a
good word for it.  Is it "distributed ray tracing"?


>
>Not necessary.  If the user wants to totally eliminate the regular ambient,
>they should set "ambient_light" to zero in the global settings.  Of course,
>this wouldn't work with the current system, but that could be changed.
>

Good idea.


>> I suggest tracing hordes of rays out of the light sources. Then storing all
>> these intersection points.
>
>Hordes is right!  You'd be amazed at how many you'd need to get a good image.
>How many terabytes of RAM do you have again?  ;-)
>

This is another aspect of my thesis.  Not all directions out of a point light
source give light to the visible scene.  You can drive this with importance
sampling.  Importance is a serious issue, though, since you run into
pathological scenes.  For example, the viewpoint is turned towards a mirror
that reflects the entire scene.  Or the janitor has turned on a light in a
closet in a 4-story office building.  How does the light get to a viewpoint a
floor up? :)   I say let the user suffer for trying to "trick the renderer."
Create an algorithm that can 'solve' any bizarre scene... but don't guarantee
its speed.


>
>Again, I want to emphasize that I thought this would be a great idea, but
>when I tried it, it just didn't work as well as planned.
>

I'll get to this below, don't worry.



>> There
>> should be a lot of points stored where things are changing, and little
>> stored where things are not.  Use adaptive mechanisms to control the
>> density of these points in certain regions of the scene.  High density
>> around shadow boundaries and the like; low density around flat surfaces
>> that are flatly lit.
>
>This may be possible, but it would take a lot of programming.  (You'd want
>to do a PhD thesis on it!)  Some of these details for reducing the number of
>points needed by using adaptive densities and other techniques might make
>this a feasible system, but it would not be trivial to implement.
>

It's a matter of sample-and-replace.  Solving this problem by storing a bigger
database is redundant.  If the emission within some solid angle is
homogeneous, replace all of those similar samples with a single averaged
sample, in both importance and direction.  If some directions are giving you
near-zero importance, remove them and recast them in a more "important"
direction.  Your database size should remain the same in the end: you will
have a _better_, not a _bigger_, database.  To set this process in motion,
don't shoot rays from the light sources randomly, but on a rectangular
lattice.  Subdivision is then possible; this is sampling, after all.  (Maybe I
should be emailing this to you privately! )
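
Here is a rough C sketch of the merge step, just to show the shape of the
idea.  The structure and names are mine alone; I assume each emission sample
is simply a unit direction plus an importance weight:

    #include <math.h>

    typedef struct {
        double dir[3];      /* unit emission direction */
        double importance;  /* estimated contribution to the image */
        int    alive;       /* 0 once merged away */
    } EMIT_SAMPLE;

    /* Merge pairs of samples whose directions nearly coincide (dot
     * product above cos_tol) and whose importances differ by less than
     * imp_tol; the survivor becomes their average. */
    static void merge_similar(EMIT_SAMPLE *s, int n,
                              double cos_tol, double imp_tol)
    {
        int i, j, k;
        for (i = 0; i < n; i++) {
            if (!s[i].alive) continue;
            for (j = i + 1; j < n; j++) {
                double dot, len;
                if (!s[j].alive) continue;
                dot = s[i].dir[0]*s[j].dir[0] + s[i].dir[1]*s[j].dir[1]
                    + s[i].dir[2]*s[j].dir[2];
                if (dot < cos_tol) continue;
                if (fabs(s[i].importance - s[j].importance) > imp_tol)
                    continue;
                /* average direction and importance, then renormalize */
                for (k = 0; k < 3; k++)
                    s[i].dir[k] = 0.5 * (s[i].dir[k] + s[j].dir[k]);
                len = sqrt(s[i].dir[0]*s[i].dir[0] +
                           s[i].dir[1]*s[i].dir[1] +
                           s[i].dir[2]*s[i].dir[2]);
                for (k = 0; k < 3; k++)
                    s[i].dir[k] /= len;
                s[i].importance = 0.5 * (s[i].importance + s[j].importance);
                s[j].alive = 0;
            }
        }
    }

The samples freed by merging are exactly the ones you would recast toward the
"important" directions, so the database stays the same size.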


>> You will thus have a large database of points that have information about
>> light coming _at_ them.  Then during the trace of the image, this
>> information is used in the same way that regular light-source information
>> is used.
>
>Yes.  This is how the photon map model works.  But you need LOTS of points
>to get a good result.  Too few points, when coupled with the monte-carlo
>approach, leads to VERY splotchy results.  (And by number of points, I mean
>200+ need to be averaged at each intersection.)
>


:)
There is an elegant way to get rid of this splotchiness.  It's called a
"contributing point network."  I'll email details later when I get the time.

Also, consider the output of a ray-tracer.  For true color it's 8 bits per
color channel, giving you at most 256 shades per channel.  For a scene
containing a single light source with channels not exceeding 1.0 in
brightness, there is a theoretical upper limit on the number of points worth
averaging.  Is this limit an average of 256 points?  Will more points in your
average change the image at all?  Think about it.
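
The arithmetic behind that hunch (my own back-of-the-envelope figuring,
nothing from Jensen): with 8 bits per channel the smallest displayable step
is 1/255, so once one more averaged sample can no longer move the result by
half a step, the pixel is frozen.  A tiny C check:

    #include <stdio.h>

    int main(void)
    {
        double step = 1.0 / 255.0;  /* one 8-bit channel step */
        int n;
        for (n = 1; n <= 1024; n *= 2) {
            /* worst case: averaging one more sample of 1.0 into a
             * running average of 0.5 shifts it by 0.5 / (n + 1) */
            double delta = 0.5 / (n + 1);
            printf("n = %4d   max shift = %f\n", n, delta);
        }
        printf("half a quantization step = %f\n", step / 2.0);
        return 0;
    }

The worst-case shift falls below half a step right around n = 255, which
lines up nicely with the 256-point guess.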



>One good way to store 3d points for quick access to the n-closest is a
>balanced kd-tree.  Other octree structures might work, too... if there are
>any that would work very well, let me know, since it might speed up the
>photon-mapping code.
>

Yes, balancing is always better.  This is a tree question.  Averaging the
positions of all the points in the scene gives you a point that can be
considered a sort of "geometric middle" of the scene.  Averaging along any
one or two of the coordinates (x, y, z) begins to subdivide the scene across
planar and linear boundaries.  You can see how an octree forms almost
automatically.
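
To make the tree picture concrete, here is a bare-bones kd-tree build in C:
sort the points on one coordinate, take the median as the node, and recurse
on the two halves with the next axis.  This is my own sketch with made-up
names (Jensen's photon map actually splits on the axis of greatest extent),
not anything from POV-Ray:

    #include <stdlib.h>

    typedef struct KDNODE {
        double *pt;                   /* the (x,y,z) point at this node */
        struct KDNODE *left, *right;
    } KDNODE;

    static int g_axis;                /* axis used by the comparator */

    static int cmp_pts(const void *a, const void *b)
    {
        const double *pa = *(double * const *)a;
        const double *pb = *(double * const *)b;
        if (pa[g_axis] < pb[g_axis]) return -1;
        if (pa[g_axis] > pb[g_axis]) return  1;
        return 0;
    }

    /* Sort on one axis, take the median point as the node, recurse on
     * the halves with the next axis.  The resulting tree is balanced. */
    static KDNODE *kd_build(double **pts, int n, int axis)
    {
        KDNODE *node;
        int mid = n / 2;
        if (n == 0) return NULL;
        g_axis = axis;
        qsort(pts, n, sizeof *pts, cmp_pts);
        node = malloc(sizeof *node);
        node->pt    = pts[mid];
        node->left  = kd_build(pts, mid, (axis + 1) % 3);
        node->right = kd_build(pts + mid + 1, n - mid - 1, (axis + 1) % 3);
        return node;
    }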


>> Favor those who have the most similar normal directions.
>
>This might not be good... it could introduce a bias to the rendering and lead
>to inaccurate results.  You could favor some, but you'd want to do so in
>accordance
>with the surface's BRDF.
>

Well, consider the edge of a cube.  Two points on opposite sides of the edge
will have normals that deviate by 90 degrees.  They are very close together,
yet possibly receiving totally different amounts of light.
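
A simple guard along these lines is to store the surface normal with each
photon and reject, during gathering, any photon whose normal disagrees too
much with the shading normal.  A sketch (hypothetical names, not POV-Ray
code):

    /* Returns 1 if the photon's stored normal np is within about 25
     * degrees of the shading normal ns (cos 25 deg ~ 0.9), so photons
     * lying just across a cube edge are excluded from the average. */
    static int normals_agree(const double np[3], const double ns[3])
    {
        double dot = np[0]*ns[0] + np[1]*ns[1] + np[2]*ns[2];
        return dot > 0.9;
    }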


>
>Yes!!!  However, I think that the data gathered from the current "radiosity"
>sampling technique could be used in a better way, so that ambient could be
>ignored and direction could be utilized.  I'll work on it soon, but right now
>I need to do more work on the photon mapping stuff (I'm doing a directed
>study for school).
>

Yes.  But you will find that the user has to enter a "brightness factor";
there is no way around this using nothing but sampling.  Consider what
happens when you average the samples.  This is not so bad, I think, just
something to keep in mind.  You should definitely investigate it.


>> 2. Is totally, 100%, preprocessed before the first pixel is even rendered.
>> Essentially, not slowing down the tracing process at all! No new rays are
>> traced during final pass!
>
>Not totally true.  You still need to query the database (which would be
>bigger
>than you think).  This can be quite time-consuming, even with a well-balanced
>octree (or kd-tree in my implementation).
>Also, you'll still have to do work to average the many photons each time you
>want to figure out how much light is hitting an object.
>

Well... yes, that.  How much slower is the database stuff, anyway?  It seems
it could be potentially staggering.


>> 3. Has all the powerful simulation effects that monte-carlo gives you.
>
>I don't like monte-carlo.  Too noisy.  (And too slow if you want to reduce
>the
>noise.)  Some monte-carlo is good, of course... but I like jitter better than
>pure monte-carlo.  :-)
>

Isn't it true that, on a theoretical level, you are computing a version of
monte carlo as soon as you trace rays out of the light sources?  It is
somewhat like saying all these algorithms are different manifestations of
the same equation.


>> 4. Any level of bounce recursion can be calculated in any scene in a very
>> simple and elegant way.  (Take a genuine interest in this post and I will
>> let the secret out.)
>
>This is true.
>

Not so fast. :)   You may be thinking of bounces of the same photon.  I am
talking about something totally different: the fact that every intersection
point in the path of a multiply-reflected photon potentially illuminates
every intersection point on all the paths of all the other photons traced.
The recursive nature of this boggles the mind.  But I assure you this is
attainable, and elegantly at that.  I will elaborate only over email.


>Like I said earlier, I implemented a global indirect lighting solution using
>photon maps.  I tested it on a cornell-box scene.  Normally, the scene would
>take about 50 seconds to render.  With my photon-mapping solution, it took
>7 minutes and 50 seconds to render.  :-(  Much of this time was spent tracing
>'hordes' of rays from the single light source.  Probably around 20 megabytes
>were used for the photon database.  And the result was very splotchy and just
>plain ugly.  Then, I rendered it with POV's radiosity feature.  The result
>looked nice and took under two minutes to render.  That scene eventually
>became my 'box with no lights' scene.
>

Using a 20 meg database on a scene that simple should produce results that far
exceed the capabilities of any output device on any computer.  The objects are
big, nice, and round; it's not like you had a bonsai tree in your Cornell box.
You could place light sources into a scene that simple by hand and get
fascinating results.  20 megs?  I think the randomness of the rays leaving
the light source is introducing the noise.  Honestly.  Try an even
distribution and let me know what happens.
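
By "even distribution" I mean something like the rectangular lattice from
earlier in this post: stratify the sphere of directions so each lattice cell
gets exactly one slightly-jittered ray.  A quick C sketch (names are mine; I
assume a point light, and drand48() for the jitter):

    #include <math.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Fill dirs[rows*cols][3] with unit vectors, one per lattice cell.
     * Sampling uniformly in cos(theta) keeps the directions uniform
     * over the sphere's surface area. */
    static void lattice_directions(double (*dirs)[3], int rows, int cols)
    {
        int i, j;
        for (i = 0; i < rows; i++) {
            for (j = 0; j < cols; j++) {
                double u   = (i + drand48()) / rows;  /* -> cos(theta) */
                double v   = (j + drand48()) / cols;  /* -> phi */
                double ct  = 1.0 - 2.0 * u;
                double st  = sqrt(1.0 - ct * ct);
                double phi = 2.0 * M_PI * v;
                double *d  = dirs[i * cols + j];
                d[0] = st * cos(phi);
                d[1] = st * sin(phi);
                d[2] = ct;
            }
        }
    }

The jitter keeps the lattice structure itself from showing up as aliasing,
while the stratification kills most of the noise that pure random shooting
gives you.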


>So... how does Jensen use photon maps to aid in indirect 'radiosity'
>illumination?  He uses a very low-density global photon map, and uses the
>directions stored in it to direct the samples shot when doing a POV-Ray-type
>"radiosity" calculation.  This allows you to shoot fewer samples without a
>loss in image quality.  But that allows you to shoot the samples from more
>points, producing a better overall image quality with the same overall number
>of samples.
>

Good.  Jensen has also realized the importance of replacement.  It seems to
keep coming up: first in the database-size problem, and here again in the
number of samples.


--------------
Steve Horn

