  Re: A box with no lights.  
From: Steve
Date: 26 Jan 1999 19:24:34
Message: <36ae5475.242493870@news.povray.org>
On Sun, 24 Jan 1999 03:43:46 +0200, "Margus Ramst" <mar### [at] peakeduee> wrote:

>Well, knowing only the general principles of POV's radiosity system, I'm not
>sure whether this is a good idea but...
>Maybe shoot out a small number of sample rays (count/10 or something) at
>every pixel and add this to the value calculated by the conventional method.
>Perhaps make this an object-level option.
>Since features usually represented by bump maps would (in real life) not
>have great effect on the shape of the object, the weight of the colors
>calculated by this pixel-by-pixel method could be small, so the
>statistical errors resulting from using few sample rays would (in theory) be
>negligible.
>I can make really long sentences, n'est-ce pas?
>
>Margus.
>


I think this misses the point entirely.  The biggest problem with POV's
so-called "radiosity" (badly named, since "radiosity" has traditionally
referred to a totally different algorithm, which I won't go into here) is
that it still relies on AMBIENT values entered by the user.  POV's radiosity
uses the already-entered ambient values to set the overall brightness of a
scene, so that a scene with radiosity will not have its "overall brightness"
changed from the same scene with radiosity turned off.  To see proof of this,
set ambient to 0.0 on all your surfaces and notice that radiosity no longer
has any effect on the rendered scene!
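
To make that concrete, here is a minimal sketch (my own illustration, not
POV-Ray's actual source, and all names are mine) of a shader in which the
gathered indirect term only enters through the user's ambient value, which
is exactly the behavior described above:

/* Illustrative only: if the indirect (radiosity) estimate is scaled by the
   surface's user-supplied ambient value, then ambient = 0.0 silences
   radiosity entirely. */
struct Colour { double r, g, b; };

Colour shade(Colour direct,    /* light-source contribution        */
             Colour gathered,  /* diffuse interreflection estimate */
             double ambient)   /* user-supplied ambient level      */
{
    Colour out;
    out.r = direct.r + ambient * gathered.r;
    out.g = direct.g + ambient * gathered.g;
    out.b = direct.b + ambient * gathered.b;
    return out;
}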

My suggestion is this:
When radiosity is turned on, it automatically sets ambient = 0.0 on all
surfaces (except those the user has flagged otherwise).
I suggest tracing hordes of rays out of the light sources and storing all of
the intersection points.  Then trace rays out of these intersection points,
randomly at first, but in a way that favors those contributions to the
visible image.  Then, once the contributing points have their emitted light
comfortably focused into the scene, replace this large database of points
with a database of points in the visible image, where the points lie on the
surfaces and also carry information about the directions from which the
various light is arriving.  This information may need to be stored as a 3D
function over an interval of phi and theta directions deviating from the
surface normal of each point.  This could be achieved using wavelet
descriptions of the functions about the given directions.  There should be a
lot of points stored where things are changing, and few stored where things
are not.  Use adaptive mechanisms to control the density of these points in
certain regions of the scene: high density around shadow boundaries and the
like, low density around flat surfaces that are flatly lit.
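
A rough sketch of that forward pass, with the data layout, ray count and
helper names all of my own choosing (this is not existing POV-Ray code, and
a full version would also bounce the rays on, importance-sample toward the
visible image, and adapt the point density as described above):

#include <vector>
#include <random>
#include <functional>
#include <cmath>

struct Vec3 { double x, y, z; };

/* One stored intersection: where light arrived, from which direction, and
   how much.  A fuller version would keep the whole phi/theta distribution
   (e.g. as wavelet coefficients) instead of one incident direction. */
struct LightPoint {
    Vec3 position;
    Vec3 normal;
    Vec3 incoming_dir;
    Vec3 flux;
};

/* Uniform random direction on the unit sphere. */
static Vec3 random_direction(std::mt19937& rng)
{
    const double kPi = 3.14159265358979323846;
    std::uniform_real_distribution<double> uz(-1.0, 1.0);
    std::uniform_real_distribution<double> ua(0.0, 2.0 * kPi);
    double z = uz(rng), t = ua(rng), r = std::sqrt(1.0 - z * z);
    return { r * std::cos(t), r * std::sin(t), z };
}

/* The caller supplies the scene's intersection test; it returns true on a
   hit and fills in the hit point and surface normal. */
using TraceFn = std::function<bool(const Vec3& origin, const Vec3& dir,
                                   Vec3& hit, Vec3& normal)>;

/* Shoot hordes of rays out of one light source and store every hit. */
void scatter_from_light(const Vec3& light_pos, const Vec3& light_colour,
                        int ray_count, const TraceFn& trace,
                        std::vector<LightPoint>& db)
{
    std::mt19937 rng(12345);
    for (int i = 0; i < ray_count; ++i) {
        Vec3 dir = random_direction(rng);
        Vec3 hit, normal;
        if (trace(light_pos, dir, hit, normal))
            db.push_back({ hit, normal, dir, light_colour });
    }
}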

You will thus have a large database of points that have information about
light coming _at_ them.   Then during the trace of the image, this information
is used in the same way that regular light-source information is used.  

How? --->  Store these database points in an octree.  During the regular
tracing pass, select the n closest points from the database for a given
pixel intersection, favoring those with the most similar normal directions.
Use the information from the database, taking into account any translational
offsets from those n closest points.  (You can kind of imagine how this
would be done: consider the function as defined over an umbrella around the
database point; another point close to this central umbrella point will see
parts of the umbrella differently.)  Use this information as if it were
regular light-source information.  Ignore ambient values altogether.
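
A hedged sketch of that lookup step, reusing the LightPoint layout from the
previous sketch, with the octree query replaced by a brute-force sort and a
weighting formula of my own choosing (the umbrella/wavelet reconstruction of
the incoming directions is left out):

#include <vector>
#include <algorithm>
#include <cstddef>

struct Vec3 { double x, y, z; };

struct LightPoint {
    Vec3 position;
    Vec3 normal;
    Vec3 incoming_dir;
    Vec3 flux;
};

static double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static double dist2(const Vec3& a, const Vec3& b)
{
    Vec3 d = { a.x - b.x, a.y - b.y, a.z - b.z };
    return dot(d, d);
}

/* Estimate the light arriving at point p with surface normal n from the
   n_closest nearest database points; the result is then handed to the
   ordinary shading code as if it were a light source.  A real version
   would pull candidates from the octree instead of sorting everything. */
Vec3 gather(const std::vector<LightPoint>& db, const Vec3& p, const Vec3& n,
            std::size_t n_closest)
{
    std::vector<const LightPoint*> near;
    near.reserve(db.size());
    for (const LightPoint& lp : db)
        near.push_back(&lp);

    std::size_t k = std::min(n_closest, near.size());
    std::partial_sort(near.begin(),
                      near.begin() + static_cast<std::ptrdiff_t>(k),
                      near.end(),
                      [&p](const LightPoint* a, const LightPoint* b)
                      { return dist2(a->position, p) <
                               dist2(b->position, p); });

    /* Weight each neighbour by how well its normal agrees with ours
       ("favoring those with the most similar normal directions"). */
    Vec3 sum = { 0.0, 0.0, 0.0 };
    double wsum = 0.0;
    for (std::size_t i = 0; i < k; ++i) {
        double w = std::max(0.0, dot(near[i]->normal, n));
        sum.x += w * near[i]->flux.x;
        sum.y += w * near[i]->flux.y;
        sum.z += w * near[i]->flux.z;
        wsum += w;
    }
    if (wsum > 0.0) {
        sum.x /= wsum;
        sum.y /= wsum;
        sum.z /= wsum;
    }
    return sum;
}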

Here are the pluses:
1. It totally removes the ambiguous, user-defined "ambient value" from
running the game, which in turn removes the annoying "pastiness" that
ambient gives scenes.
2. It is totally, 100% preprocessed before the first pixel is even rendered,
so it essentially doesn't slow down the tracing process at all!  No new rays
are traced during the final pass!
3. It has all the powerful simulation effects that Monte Carlo gives you.
4. Any level of bounce recursion can be calculated in any scene in a very
simple and elegant way.  (Take a genuine interest in this post and I will
let the secret out.)


Questions?

--------------
Steve Horn

