POV-Ray : Newsgroups : povray.programming : Re: A box with no lights.
  Re: A box with no lights. (Message 1 to 10 of 21)
From: Margus Ramst
Subject: Re: A box with no lights.
Date: 23 Jan 1999 20:43:42
Message: <36aa7ace.0@news.povray.org>
Well, knowing only the general principles of POV's radiosity system, I'm not
sure whether this is a good idea but...
Maybe shoot out a small number of sample rays (count/10 or something) at
every pixel and add this to the value calculated by the conventional method.
Perhaps make this an object-level option.
Since features usually represented by bump maps would (in real life) not
have a great effect on the shape of the object, the weight of the colors
calculated by this pixel-by-pixel method could be small, so the
statistical errors resulting from using few sample rays would (in theory) be
negligible.
I can make really long sentences, n'est-ce pas?

Margus.

Nathan Kopp wrote in message <36AA03C8.26C55316@Kopp.com>...
>Margus Ramst wrote:
>>
>> BTW, talking of improvements to POV's radiosity, couldn't bump maps be
>> taken into consideration when calculating the direction of the sample
>> rays? Right now, normal perturbation is not visible in radiosity-only
>> areas.
>
>I'm not sure.  The reason it could be difficult is that POV doesn't shoot
>sample rays at every point (for obvious speed reasons).  Then, the amount
>of light gathered at the various points is interpolated between them.  But
>interpolating the light gathered doesn't factor in the surface normal.  I'm
>trying to figure out a way to do it (so you can get phong & specular
>highlights, too).  Suggestions would be appreciated, but we probably want
>to discuss it in povray.programming instead of here.
>
>-Nathan



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 24 Jan 1999 00:53:34
Message: <36AAB5EF.A1F019CC@Kopp.com>
I think you'd find a lot more noise in the image (and it would probably be
a lot slower).  What I was thinking was this:

Right now, when POV takes samples, it averages them and saves just the
color.  It could, however, save the direction/brightness of each sample.
Then, instead of just doing a weighted-average of nearby points to
interpolate in-between colors, you would reuse the individual samples.
With knowledge of both direction and brightness, you could calculate
diffuse, phong, and specular components.  This might be considerably slower
(100+ samples per intersection), but probably not a great deal slower, since
you would be reusing samples instead of tracing them.  Going through a 'for'
loop 100 times doing a few dot products each time is not _that_ time
consuming.

I would keep the current version as an option, though, for speed reasons.
I also think it is important to generalize radiosity a little bit more, so
that it works for reflections (like the furry cat picture in the latest
IRTC) and allows more recursion depth.

Right now I'm working on the photon map stuff, but this is interesting
enough for me to play with it too.  :-)

-Nathan


Margus Ramst wrote:
> 
> Well, knowing only the general principles of POV's radiosity system, I'm not
> sure whether this is a good idea but...
> Maybe shoot out a small number of sample rays (count/10 or something) at
> every pixel and add this to the value calculated by the conventional method.
> Perhaps make this an object-level option.
> Since features usually represented by bump maps would (in real life) not
> have a great effect on the shape of the object, the weight of the colors
> calculated by this pixel-by-pixel method could be small, so the
> statistical errors resulting from using few sample rays would (in theory) be
> negligible.
> I can make really long sentences, n'est-ce pas?
> 
> Margus.



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 24 Jan 1999 00:54:48
Message: <36AAB639.4C56BE0C@Kopp.com>
Nathan Kopp wrote:
> 
> Right now I'm working on the photon map stuff, but this is interesting
> enough for me to play with it too.  :-)
> 

But I'd encourage other people to also try stuff!  I probably won't get to
this for at least a week.

-Nathan



From: Margus Ramst
Subject: Re: A box with no lights.
Date: 24 Jan 1999 13:23:00
Message: <36ab6504.0@news.povray.org>
Actually, I soon realized that I had more or less described blurry
reflections already incorporated into Wyzard's patch and the Suprpatch;
using standard radiosity and a little bit of very blurry reflectiveness
_might_ give the desired effect. I'll experiment with this.
Yes, it's slow, but the method you describe would still require a high
number of sample points to give a fairly accurate representation of the
normal map.
Incorporating diffuse and specular properties into radiosity calculations
would still be useful, of course.

Margus.

Nathan Kopp wrote in message <36AAB5EF.A1F019CC@Kopp.com>...
>I think you'd find a lot more noise in the image (and it would probably be
>a lot slower).  What I was thinking was this:
>
>Right now, when POV takes samples, it averages them and saves just the
>color.  It could, however, save the direction/brightness of each sample.
>Then, instead of just doing a weighted-average of nearby points to
>interpolate in-between colors, you would reuse the individual samples.
>With knowledge of both direction and brightness, you could calculate
>diffuse, phong, and specular components.  This might be considerably slower
>(100+ samples per intersection), but probably not a great deal slower, since
>you would be reusing samples instead of tracing them.  Going through a 'for'
>loop 100 times doing a few dot products each time is not _that_ time
>consuming.
>
>I would keep the current version as an option, though, for speed reasons.
>I also think it is important to generalize radiosity a little bit more, so
>that it works for reflections (like the furry cat picture in the latest
>IRTC) and allows more recursion depth.
>
>Right now I'm working on the photon map stuff, but this is interesting
>enough for me to play with it too.  :-)
>
>-Nathan



From: Margus Ramst
Subject: Re: A box with no lights.
Date: 24 Jan 1999 16:36:11
Message: <36ab924b.0@news.povray.org>
Dang! Of course, radiosity doesn't work on reflections. Should have read
your post more carefully. I think it needs to become a pre-rendering step
for this to work. Is this difficult? I don't know.

Margus

Margus Ramst wrote in message <36ab6504.0@news.povray.org>...
>Actually, I soon realized that I had more or less described blurry
>reflections already incorporated into Wyzard's patch and the Suprpatch;
>using standard radiosity and a little bit of very blurry reflectiveness
>_might_ give the desired effect. I'll experiment with this.
>Yes, it's slow, but the method you describe would still require a high
>number of sample points to give a fairly accurate representation of the
>normal map.
>Incorporating diffuse and specular properties into radiosity calculations
>would still be useful, of course.
>
>Margus.
>



From: Nieminen Mika
Subject: Re: A box with no lights.
Date: 25 Jan 1999 05:37:45
Message: <36ac4979.0@news.povray.org>
Margus Ramst <mar### [at] peakeduee> wrote:
: Dang! Of course, radiosity doesn't work on reflections.

  I wonder what's the reason for this.

-- 
main(i){char*_="BdsyFBThhHFBThhHFRz]NFTITQF|DJIFHQhhF";while(i=
*_++)for(;i>1;printf("%s",i-70?i&1?"[]":" ":(i=0,"\n")),i/=2);} /*- Warp. -*/



From: Steve
Subject: Re: A box with no lights.
Date: 26 Jan 1999 19:24:34
Message: <36ae5475.242493870@news.povray.org>
On Sun, 24 Jan 1999 03:43:46 +0200, "Margus Ramst" <mar### [at] peakeduee> wrote:

>Well, knowing only the general principles of POV's radiosity system, I'm not
>sure whether this is a good idea but...
>Maybe shoot out a small number of sample rays (count/10 or something) at
>every pixel and add this to the value calculated by the conventional method.
>Perhaps make this an object-level option.
>Since features usually represented by bump maps would (in real life) not
>have a great effect on the shape of the object, the weight of the colors
>calculated by this pixel-by-pixel method could be small, so the
>statistical errors resulting from using few sample rays would (in theory) be
>negligible.
>I can make really long sentences, n'est-ce pas?
>
>Margus.
>


I think this misses the point entirely.  The biggest problem with POV's
so-called "radiosity" (this is badly named, since the word "radiosity" has
traditionally referred to a totally different algorithm which I won't go
into here) is that it still relies on AMBIENT values entered by the user.
POV's radiosity uses the already-entered ambient values to check the
overall brightness of a scene, so a scene with radiosity will not have its
overall brightness changed from the same scene with radiosity turned off.
To see proof of this, set ambient to 0.0 on all your surfaces and notice
that radiosity no longer has any effect on the rendered scene!!

My suggestion is this:
When radiosity is turned on, it automatically sets ambient = 0.0 on all
surfaces (except those the user has flagged as not requiring it).
I suggest tracing hordes of rays out of the light sources, then storing all
these intersection points.  Then trace rays out of these intersection points,
randomly at first, but in a way that favors those contributions to the visible
image.  Then, when the contributing points have their emitted light
comfortably focused into the scene, replace this large database of points
with a database of points in the visible image, where the points are on the
surfaces and also contain information about the directions the various
light is coming in from.  This information may need to be stored in a 3D
function over an interval of phi- and theta-directions deviating from the
surface normal, for each point.  This could be achieved using
wavelet descriptions of the functions about the given directions.  There
should be a lot of points stored where things are changing, and few stored
where things are not.  Use adaptive mechanisms to control the density of these
points in certain regions of the scene: high density around shadow boundaries
and the like; low density around flat surfaces that are flatly lit.

You will thus have a large database of points that have information about
light coming _at_ them.   Then during the trace of the image, this information
is used in the same way that regular light-source information is used.  

How? --->  Store these database points in an octree.  During the regular
tracing pass, select the n closest points out of the database for a given
pixel intersection.  Favor those that have the most similar normal directions.
Use the information from the database, taking into account the translational
offsets of the n closest points.  (You can kind of imagine how this
would be done.  Consider the function as defined over an umbrella around the
database point.  Another point close to this central umbrella point will see
parts of the umbrella differently.)  Use this information as if it were regular
light-source information.  Ignore ambient values altogether.

Here are the pluses:
1. Totally removes this ambiguous, user-defined "ambient value" from running
the game.  Which in turn removes the annoying "pastiness" from scenes that
ambient gives you.
2. Is totally, 100%, preprocessed before the first pixel is even rendered.
Essentially, not slowing down the tracing process at all! No new rays are
traced during final pass!
3. Has all the powerful simulation effects that monte-carlo gives you.
4. Any level of bounce recursion can be calculated in any scene in a very
simple and elegant way.  (Take a genuine interest in this post and I will let
the secret out.)


Questions?

--------------
Steve Horn



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 27 Jan 1999 00:41:40
Message: <36AEA792.E89CF5AB@Kopp.com>
Not to be too blunt, but: "been there done that".  It's a good idea (or at least
I thought so... I thought it was a great idea, in fact), but it doesn't work
very well in practice.  I'll give some background and details.  First, the
background.  I'm working on adding photon mapping to POV-Ray to simulate
reflective and refractive caustics.  A guy named Henrik Wann Jensen was the first
to implement the photon map idea.  Photon mapping is a way of storing information
from a backwards ray-tracing step in a tree structure called a kd-tree.  You
store color/brightness, location, and direction of the 'photon' packet.  Then,
during the rendering phase, you use the n-closest photons (within a specified
maximum radius) just as you would use light from a light source.

Here's the URL for his web page:
http://www.gk.dtu.dk/home/hwj/
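As a rough illustration of the gathering step described above, the photon-map radiance estimate might be sketched like this (the names are invented; a real implementation would pull the n-closest photons out of a kd-tree instead of scanning a flat array):

```c
#include <math.h>

static const double kPi = 3.14159265358979323846;

/* A stored 'photon': position, travel direction, and power,
 * roughly as in Jensen's photon map. */
typedef struct {
    double pos[3];
    double dir[3];   /* direction the photon was travelling */
    double power;
} Photon;

/* Irradiance estimate at a surface point: sum the photons that
 * landed within 'max_radius', weighting each by the cosine with the
 * surface normal, and divide by the area of the gathering disc. */
double estimate_irradiance(const Photon *p, int count,
                           const double point[3], const double normal[3],
                           double max_radius)
{
    double sum = 0.0;
    double r2 = max_radius * max_radius;
    for (int i = 0; i < count; i++) {
        double dx = p[i].pos[0] - point[0];
        double dy = p[i].pos[1] - point[1];
        double dz = p[i].pos[2] - point[2];
        if (dx*dx + dy*dy + dz*dz > r2) continue;  /* outside disc */
        /* The photon travels toward the surface, so flip its direction
         * before taking the cosine with the normal. */
        double cosw = -(p[i].dir[0]*normal[0] +
                        p[i].dir[1]*normal[1] +
                        p[i].dir[2]*normal[2]);
        if (cosw > 0.0) sum += p[i].power * cosw;
    }
    return sum / (kPi * r2);   /* divide by gathering-disc area */
}
```

The result is then used during rendering just like incoming light from an ordinary light source, as the post describes.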

I thought it would be good to extend this from reflection & refraction to a
monte-carlo approach to compute the entire radiance for the scene (at least all
of the indirect illumination).  I even implemented it, which wasn't too
difficult since I already had all of the backwards ray-tracing stuff mostly
done for reflective & refractive caustics.

Jensen does use a global photon map to aid in indirect illumination
calculations... I'll explain that later.

Ok... details:

Steve wrote:
> 
> I think this misses the point entirely.  The biggest problem with POVs
> so-called "radiosity" (this is badly named, since the word "radiosity" has
> been traditionally connected with some other totally different algorithm of
> which I won't go into here)  is that it still relies on AMBIENT values entered
> by the user.

I agree!  But I think the current system (the basic concept has been well-tested
in other systems) is good and just needs a few adjustments.

> My suggestion is this:
> When radiosity is turned on, it automatically makes all ambient = 0.0 on all
> surfaces (those who have been flagged to not "require it" by the user)

Not necessary.  If the user wants to totally eliminate the regular ambient,
they should set "ambient_light" to zero in the global settings.  Of course,
this wouldn't work with the current system, but that could be changed.

> I suggest tracing hordes of rays out of the light sources. Then storing all
> these intersection points.

Hordes is right!  You'd be amazed at how many you'd need to get a good image.
How many terabytes of RAM do you have again?  ;-)

Again, I want to emphasize that I thought this would be a great idea, but when
I tried it it just didn't work as well as planned.

> There
> should be a lot of points stored where things are changing, and few stored
> where things are not.  Use adaptive mechanisms to control the density of these
> points in certain regions of the scene.  High density around shadow boundaries
> and the like; low density around flat surfaces that are flatly lit.

This may be possible, but it would take a lot of programming. (You'd want to do
a PhD thesis on it!)  Some of these details for reducing the number of points
needed by using adaptive densities and other techniques might make this a
feasible system, but it would not be trivial to implement.

> You will thus have a large database of points that have information about
> light coming _at_ them.   Then during the trace of the image, this information
> is used in the same way that regular light-source information is used.

Yes.  This is how the photon map model works.  But you need LOTS of points to
get a good result.  Too few points, when coupled with the monte-carlo approach,
leads to VERY splotchy results.  (And by number of points, I mean 200+ need
to be averaged at each intersection.)

> How? --->  Store these database points in an octree.  During the regular
> tracing pass,  select the nth-closest points out of the database for a given
> pixel intersection.

One good way to store 3d points for quick access to the n-closest is a balanced
kd-tree.  Other octree structures might work, too... if there are any that would
work very well, let me know, since it might speed up the photon-mapping code.
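A minimal balanced kd-tree with nearest-neighbour search, along the lines mentioned here, might be sketched as follows (illustrative only: the real photon-map code stores photons rather than bare points, balances in place, and finds the n closest rather than a single nearest):

```c
#include <stdlib.h>
#include <string.h>
#include <float.h>

/* Minimal 3-D kd-tree node. */
typedef struct KdNode {
    double pt[3];
    struct KdNode *left, *right;
} KdNode;

static int cmp_axis;   /* axis used by the comparator (sketch only) */
static int cmp_pts(const void *a, const void *b)
{
    double d = ((const double *)a)[cmp_axis] - ((const double *)b)[cmp_axis];
    return (d > 0) - (d < 0);
}

/* Build a balanced tree by splitting at the median, cycling axes. */
KdNode *kd_build(double (*pts)[3], int n, int depth)
{
    if (n <= 0) return NULL;
    cmp_axis = depth % 3;
    qsort(pts, n, sizeof pts[0], cmp_pts);
    int mid = n / 2;
    KdNode *node = malloc(sizeof *node);
    memcpy(node->pt, pts[mid], sizeof node->pt);
    node->left  = kd_build(pts, mid, depth + 1);
    node->right = kd_build(pts + mid + 1, n - mid - 1, depth + 1);
    return node;
}

static double dist2(const double a[3], const double b[3])
{
    double s = 0.0;
    for (int k = 0; k < 3; k++) s += (a[k]-b[k]) * (a[k]-b[k]);
    return s;
}

/* Recursive nearest-neighbour search: descend toward the query, then
 * only visit the far side if the splitting plane is closer than the
 * best distance found so far. */
void kd_nearest(const KdNode *node, const double q[3], int depth,
                const KdNode **best, double *best_d2)
{
    if (!node) return;
    double d2 = dist2(node->pt, q);
    if (d2 < *best_d2) { *best_d2 = d2; *best = node; }
    int axis = depth % 3;
    double diff = q[axis] - node->pt[axis];
    const KdNode *nearer  = diff < 0 ? node->left  : node->right;
    const KdNode *farther = diff < 0 ? node->right : node->left;
    kd_nearest(nearer, q, depth + 1, best, best_d2);
    if (diff * diff < *best_d2)           /* prune the far subtree */
        kd_nearest(farther, q, depth + 1, best, best_d2);
}
```

The pruning test is what makes the query cheap: whole subtrees are skipped whenever the splitting plane lies farther away than the best candidate so far.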

> Favor those who have the most similar normal directions.

This might not be good... it could introduce a bias to the rendering and lead to
inaccurate results.  You could favor some, but you'd want to do so in accordance
with the surface's BRDF.

> Use this information as if it is regular
> light-source information.  Ignore ambient values altogether.

Yes!!!  However, I think that the data gathered from the current "radiosity"
sampling technique could be used in a better way, so that ambient could be
ignored and direction could be utilized.  I'll work on it soon, but right now
I need to do more work on the photon mapping stuff (I'm doing a directed
study for school).

> 1. Totally removes this ambiguous, user-defined "ambient value" from running
> the game.  Which in turn removes the annoying "pastiness" from scenes that
> ambient gives you.

This is a plus.

> 2. Is totally, 100%, preprocessed before the first pixel is even rendered.
> Essentially, not slowing down the tracing process at all! No new rays are
> traced during final pass!

Not totally true.  You still need to query the database (which would be bigger
than you think).  This can be quite time-consuming, even with a well-balanced
octree (or kd-tree in my implementation).

Also, you'll still have to do work to average the many photons each time you
want to figure out how much light is hitting an object.

> 3. Has all the powerful simulation effects that monte-carlo gives you.

I don't like monte-carlo.  Too noisy.  (And too slow if you want to reduce the
noise.)  Some monte-carlo is good, of course... but I like jitter better than
pure monte-carlo.  :-)

> 4. Any level of bounce recursion can be calculated in any scene in a very
> simple and elegant way.  (Take a genuine interest in this post and I will let
> the secret out.)

This is true.

Like I said earlier, I implemented a global indirect lighting solution using
photon maps.  I tested it on a cornell-box scene.  Normally, the scene would
take about 50 seconds to render.  With my photon-mapping solution, it took
7 minutes and 50 seconds to render.  :-(  Much of this time was spent tracing
'hordes' of rays from the single light source.  Probably around 20 megabytes
were used for the photon database.  And the result was very splotchy and just
plain ugly.  Then, I rendered it with POV's radiosity feature.  The result
looked nice and took under two minutes to render.  That scene eventually
became my 'box with no lights' scene.

So... how does Jensen use photon maps to aid in indirect 'radiosity'
illumination?  He uses a very low-density global photon map, and uses the
directions stored in it to direct the samples shot when doing a POV-Ray-type
"radiosity" calculation.  This allows you to shoot fewer samples without a
loss in image quality.  But that allows you to shoot the samples from more
points, producing a better overall image quality with the same overall number
of samples.
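Jensen's trick of letting the photon map direct the gathering rays boils down to importance sampling. A toy sketch of the selection step (hypothetical names, not Jensen's or POV-Ray's actual code): bucket the stored photon directions by the power they carry, then draw each new sample ray from a bucket chosen in proportion to that power.

```c
/* Pick a direction bucket with probability proportional to weight[i],
 * where weight[i] is the photon power that arrived from bucket i.
 * 'u' is a uniform random number in [0,1). */
int sample_bucket(const double *weight, int n, double u)
{
    double total = 0.0;
    for (int i = 0; i < n; i++) total += weight[i];

    /* Walk the cumulative distribution until we pass the target. */
    double target = u * total;
    double run = 0.0;
    for (int i = 0; i < n; i++) {
        run += weight[i];
        if (target < run) return i;
    }
    return n - 1;   /* guard against rounding at u -> 1 */
}
```

Directions that carried a lot of photon power get sampled often, so fewer gathering rays are wasted on dark parts of the hemisphere.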

Again, I want to emphasize that I still think this could be a viable idea.
However, there are many things (primarily database size, creation time, and
search time) that need to be worked out before it will work well.

-Nathan



From: Nieminen Mika
Subject: Re: A box with no lights.
Date: 27 Jan 1999 07:40:35
Message: <36af0943.0@news.povray.org>
Steve  <hor### [at] osuedu> wrote:
: To see proof of this, set ambient to 0.0 on all your
: surfaces and notice that radiosity no longer has any effect on the
: rendered scene!!

  I don't understand why this is so bad.
  Of course if I don't want ambient light on an object, I set it to 0. If
regardless of this the object still is illuminated by ambient light (for
the radiosity calculations), then it doesn't work as I expected and wanted.
  If you say "the ambient light is not illuminating this object" then it
means that.

-- 
main(i){char*_="BdsyFBThhHFBThhHFRz]NFTITQF|DJIFHQhhF";while(i=
*_++)for(;i>1;printf("%s",i-70?i&1?"[]":" ":(i=0,"\n")),i/=2);} /*- Warp. -*/



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 27 Jan 1999 10:14:49
Message: <36AF2DE9.AD1636FD@Kopp.com>
But... when you turn on radiosity, the ambient light setting has two effects on
an object.  First, it determines to what extent radiosity will be used. 
Second, when computing radiosity, it acts like the old boring ambient at the
deepest depth of the gathering process.  This double effect is confusing and
requires tweaking to get things to look right, which is generally not a good
thing.
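The double role described here can be illustrated with a toy model (this is not POV-Ray's actual code; the function and its parameters are invented for illustration):

```c
/* Toy model of the double role of 'ambient' under radiosity:
 * the same per-object value both scales the gathered indirect light
 * and serves as the plain-ambient fallback once the gathering
 * recursion bottoms out. */
double surface_brightness(double ambient, double diffuse_light,
                          double gathered, int depth, int max_depth)
{
    if (depth >= max_depth)
        /* role 2: old boring ambient at the deepest bounce */
        return ambient + diffuse_light;

    /* role 1: ambient also decides how much radiosity contributes */
    return ambient * gathered + diffuse_light;
}
```

With ambient set to 0.0 the gathered term drops out entirely, which is exactly the behavior Steve pointed out earlier in the thread.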

For a physically accurate (and subsequently most realistic) rendering, the
extent to which indirect lighting contributes to the scene should be computed
automatically, not specified by the user.  The user is just guessing and would
probably get it wrong.

Of course, I don't want to ditch the ambient setting completely.  I still  want
the ability to add light to my scenes via a bright ambient object.

-Nathan

Nieminen Mika wrote:
> 
>   I don't understand why this is so bad.
>   Of course if I don't want ambient light on an object, I set it to 0. If
> regardless of this the object still is illuminated by ambient light (for
> the radiosity calculations), then it doesn't work as I expected and wanted.
>   If you say "the ambient light is not illuminating this object" then it
> means that.
>




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.