  Re: A box with no lights. (Message 11 to 20 of 21)  
From: Steve
Subject: Re: A box with no lights.
Date: 28 Jan 1999 16:29:48
Message: <36b0d61a.2086321@news.povray.org>
On 27 Jan 1999 07:40:35 -0500, Nieminen Mika <war### [at] cctutfi> wrote:

>Steve  <hor### [at] osuedu> wrote:
>: To see proof of this, turn the ambient on all your
>: surfaces down to 0.0 and notice that radiosity no longer has any effect on
>: the rendered scene!!
>
>  I don't understand why this is so bad.
>


It's bad because indirect illumination is then not ADDITIVE to an object's
shading, but merely a MULTIPLIED factor on the already existing ambient
coefficient.  So an ambient of zero will "mask the effects" of radiosity
totally.  Radiosity calculations should _replace_ the ambient value, not be
modulated by it.  It's just like Nathan said: the user still GUESSES what the
brightness of a scene is, rather than radiosity figuring it out for you,
which is what it should do.
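To put it in code terms, here is a minimal one-channel sketch in plain C (my
own simplified names, not POV-Ray's actual shading source), with 'indirect'
standing for whatever the radiosity pass computed at the point:

    #include <stdio.h>

    /* Hypothetical, simplified shading sums, only to illustrate the point. */
    static double shade_current(double ambient, double diffuse,
                                double direct, double indirect)
    {
        /* today: the radiosity result only scales the ambient coefficient,
           so ambient == 0 wipes the indirect term out completely */
        return ambient * indirect + diffuse * direct;
    }

    static double shade_proposed(double diffuse, double direct, double indirect)
    {
        /* proposed: the radiosity result replaces ambient as an additive term */
        return indirect + diffuse * direct;
    }

    int main(void)
    {
        printf("ambient 0.0: current %.2f, proposed %.2f\n",
               shade_current(0.0, 0.6, 0.5, 0.3),
               shade_proposed(0.6, 0.5, 0.3));
        return 0;
    }

With ambient at 0.0 the first sum loses the indirect light entirely; that is
exactly the masking I mean.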

See, ambient was meant to simulate indirect illumination in a scene.
Radiosity actually calculates it.  Do you see why one should then replace the
other? 

Ambient is also meant to be a "self-illuminating factor."  This is where it
gets its name.  A surface that emits light on its own will be "ambient"
because it will have some "ambience."  In reality, no surface really does
this.  To simulate something like a fluorescent tube ceiling light, large
ambient values can be used on a cylinder.

------------
Steve Horn



From: Steve
Subject: Re: A box with no lights.
Date: 28 Jan 1999 18:15:35
Message: <36b0de3a.4165903@news.povray.org>
On Wed, 27 Jan 1999 00:43:46 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:


>
>Here's the URL for his web page:
>http://www.gk.dtu.dk/home/hwj/
>

The kd-tree sounds important.  Do any of Jensen's theses explain them in
detail?  I find his images with the caustics alone absolutely beautiful.


>
>I thought it would be good to extend this from reflection & refraction to a
>monte-carlo approach to compute the entire radiance for the scene (at least
>all of the indirect illumination).  I even implemented it, which wasn't too
>difficult since I already had all of the backwards ray-tracing stuff mostly
>done for reflective & refractive caustics.
>

This is interesting.  I'll admit that the ideas I posted are only one possible
way.  The other way I have been trying to keep secret.  I intend to introduce
its results in my master's thesis.  You can store photon maps in the database,
yes.  But another way is to store "contributing points."  These are like dim
light sources on surfaces.  Each contains detailed information about light
_leaving_ that point.  Normally, this would be a definition of a function
over the hemisphere of the normal wherein it resides.  To get refraction, you
need only extend the function over the full 360 degrees, or the sphere of
angles.  The problem, then, is having to trace more rays during the regular
ray-tracing pass.


>
>I agree!  But I think the current system (the basic concept has been
>well-tested
>in other systems) is good and just needs a few adjustments.
>

What could we call POV's technique?  Name-wise, I mean.  Radiosity is not a
good word.  Is it "distributive ray tracing"?


>
>Not necessary.  If the user wants to totally eliminate the regular ambient,
>they should set "ambient_light" to zero in the global settings.  Of course,
>this wouldn't work with the current system, but that could be changed.
>

Good idea.


>> I suggest tracing hordes of rays out of the light sources. Then storing all
>> these intersection points.
>
>Hordes is right!  You'd be amazed at how many you'd need to get a good image.
>How many terabytes of RAM do you have again?  ;-)
>

This is another aspect of my thesis.  Not all directions out of a point light
source give light to the visible scene.  You can drive this with importance.
Importance is a serious aspect, since you run into pathological scenes.  For
example, the viewpoint is turned towards a mirror that reflects the entire
scene.  Or the janitor has turned a light on in a closet in a 4-story office
building.  How does the light get to the viewpoint a floor up? :)   I say let
the user suffer for trying to "trick the renderer."  Create an algorithm that
can 'solve' any bizarre scene... but don't guarantee its speed.


>
>Again, I want to emphasize that I thought this would be a great idea, but
>when I tried it, it just didn't work as well as planned.
>

I'll get to this below, don't worry.



>> There should be a lot of points stored where things are changing, and
>> little stored where things are not.  Use adaptive mechanisms to control
>> the density of these points in certain regions of the scene.  High density
>> around shadow boundaries and the like; low density around flat surfaces
>> that are flatly lit.
>
>This may be possible, but it would take a lot of programming. (You'd want to
>do a PhD thesis on it!)  Some of these details for reducing the number of
>points needed by using adaptive densities and other techniques might make
>this a feasible system, but it would not be trivial to implement.
>

It's a matter of sample-and-replace.  Solving this problem by storing a bigger
database is redundant.  If the emission within some angular neighborhood is
homogeneous, replace all these similar samples with a single sample that is
an average, both in importance and in direction.  If you have directions
giving you near-zero importance, remove them, and recast them in a more
"important" direction.  Your database size should remain the same in the end.
You will have a _better_, not a _bigger_, database.  To set this process in
motion, don't shoot rays from the light sources randomly, but in a rectangular
lattice.  Subdivision is now possible; this is sampling, after all.  (Maybe I
should be emailing this to you privately!)
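As a rough illustration of the lattice idea (my own sketch, not code from any
existing patch), the emission directions can live on a regular theta/phi grid,
so the "adjacent samples" to compare and merge are simply neighboring cells:

    #include <math.h>
    #include <stdio.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define N_THETA 32
    #define N_PHI   64

    typedef struct { double x, y, z; } Vec3;

    /* Direction for lattice cell (i, j): theta in (0, pi), phi in [0, 2*pi).
       Cell centers are used so no sample lands exactly on a pole. */
    static Vec3 lattice_dir(int i, int j)
    {
        double theta = M_PI * (i + 0.5) / N_THETA;
        double phi   = 2.0 * M_PI * (j + 0.5) / N_PHI;
        Vec3 d = { sin(theta) * cos(phi), sin(theta) * sin(phi), cos(theta) };
        return d;
    }

    int main(void)
    {
        /* one ray per cell; neighbors in the grid are the "adjacent samples" */
        for (int i = 0; i < N_THETA; i++)
            for (int j = 0; j < N_PHI; j++) {
                Vec3 d = lattice_dir(i, j);
                /* trace_from_light(d) would go here */
                (void)d;
            }
        printf("%d lattice directions\n", N_THETA * N_PHI);
        return 0;
    }

Note that a plain theta/phi grid bunches directions near the poles; weighting
each cell by sin(theta), or spacing the rows by equal steps in cos(theta),
evens that out.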


>> You will thus have a large database of points that have information about
>> light coming _at_ them.   Then during the trace of the image, this
>> information is used in the same way that regular light-source information
>> is used.
>
>Yes.  This is how the photon map model works.  But you need LOTS of points
>to get a good result.  Too few points, when coupled with the monte-carlo
>approach, leads to VERY splotchy results.  (And by number of points, I mean
>200+ need to be averaged at each intersection.)
>


:)
There is an elegant way to get rid of this splotchiness.  It's called a
"contributing point network."  I'll email details later when I get the time.

Also, consider the output of a ray-tracer.  For true color it's 8 bits per
color channel, giving you a maximum of 256 shades per channel.  For a scene
containing a single light source with channels not exceeding 1.0 in
brightness, there is an upper theoretical limit on the number of points
averaged.  Is this limit the average of 256 points?  Will more points in your
average change the image?  Think about it.



>One good way to store 3d points for quick access to the n-closest is a
>balanced kd-tree.  Other octree structures might work, too... if there are
>any that would work very well, let me know, since it might speed up the
>photon-mapping code.
>

Yes, balancing is always better.  This is a tree question.  Averaging all the
points in the scene will give you a point that can be considered a sort of
"geometric middle" of the scene.  Averaging any one or two of the components
(x, y, z) will begin to subdivide the scene across planar and linear
boundaries.  You can begin to see how an octree forms automatically.


>> Favor those who have the most similar normal directions.
>
>This might not be good... it could introduce a bias to the rendering and
>lead to inaccurate results.  You could favor some, but you'd want to do so
>in accordance with the surface's BRDF.
>

Well, consider the edge of a cube.  Two points on different sides of an edge
will have normals that deviate by 90 degrees.  They are very close, but
possibly receiving totally different amounts of light.


>
>Yes!!!  However, I think that the data gathered from the current "radiosity"
>sampling technique could be used in a better way, so that ambient could be
>ignored and direction could be utilized.  I'll work on it soon, but right now
>I need to do more work on the photon mapping stuff (I'm doing a directed
>study for school).
>

Yes.  But you will find out that the user has to enter a "brightness factor."
There is no way to get around this using nothing but sampling.   Consider
averaging the samples.  This is not so bad, I think. Just something to keep in
mind.  You should definitely investigate.


>> 2. Is totally, 100%, preprocessed before the first pixel is even rendered.
>> Essentially, not slowing down the tracing process at all! No new rays are
>> traced during final pass!
>
>Not totally true.  You still need to query the database (which would be
>bigger than you think).  This can be quite time-consuming, even with a
>well-balanced octree (or kd-tree in my implementation).
>Also, you'll still have to do work to average the many photons each time you
>want to figure out how much light is hitting an object.
>

Well... yes, that.  How much slower is the database stuff anyway?  It seems
it could be potentially staggering.


>> 3. Has all the powerful simulation effects that monte-carlo gives you.
>
>I don't like monte-carlo.  Too noisy.  (And too slow if you want to reduce
>the noise.)  Some monte-carlo is good, of course... but I like jitter better
>than pure monte-carlo.  :-)
>

Isn't it true that on a theoretical level, you are computing a version of
monte carlo as soon as you trace rays out of the light sources?  Somewhat like
saying all these algorithms are different manifestations of the same equation?


>> 4. Any level of bounce recursion can be calculated in any scene in a very
>> simple and elegant way.  (Take a genuine interest in this post and I will
>> let the secret out.)
>
>This is true.
>

Not so fast. :)   You may be considering bounces on the same photon.  I am
talking about something totally different, such as the fact that all
intersection points in the path of a multiply-reflected photon potentially
illuminate every intersection point on all the paths of all the other photons
traced.  The recursive nature of this boggles the mind.  But I assure you
this is attainable, and elegantly at that.  I will elaborate only over email.


>Like I said earlier, I implemented a global indirect lighting solution using
>photon maps.  I tested it on a cornell-box scene.  Normally, the scene would
>take about 50 seconds to render.  With my photon-mapping solution, it took
>7 minutes and 50 seconds to render.  :-(  Much of this time was spent tracing
>'hordes' of rays from the single light source.  Probably around 20 megabytes
>were used for the photon database.  And the result was very splotchy and just
>plain ugly.  Then, I rendered it with POV's radiosity feature.  The result
>looked nice and took under two minutes to render.  That scene eventually
>became my 'box with no lights' scene.
>

Using a 20 meg database on a scene that simple should produce results that far
exceed the capabilities of any output device on any computer.  The objects are
big, nice, and round.  It's not like you had a bonsai tree in your Cornell
box.  You can stick light sources into a scene that simple by hand and get
fascinating results.  20 megs?  I think the randomness of light out of the
light source is introducing the noise.  Honestly.  Try an even distribution.
Let me know what happens.


>So... how does Jensen use photon maps to aid in indirect 'radiosity'
>illumination?  He uses a very low-density global photon map, and uses the
>directions stored in it to direct the samples shot when doing a POV-Ray-type
>"radiosity" calculation.  This allows you to shoot fewer samples without a
>loss in image quality.  But that allows you to shoot the samples from more
>points, producing a better overall image quality with the same overall number
>of samples.
>

Good.  Jensen has also realized the importance of replacement.  This keeps
coming up, both in the database-size problem and here again in the number of
samples.


--------------
Steve Horn



From: Ronald L  Parker
Subject: Re: A box with no lights.
Date: 28 Jan 1999 19:33:20
Message: <36b0fe75.43327563@news.povray.org>
On Thu, 28 Jan 1999 23:24:08 GMT, hor### [at] osuedu (Steve )
wrote:

>The kd-tree sounds important.  Do any of Jensen's theses explain them in
>detail?  I find his images with the caustics alone absolutely beautiful.

I believe he picked it up from someone else.  A web search on kd-tree
turns up a few references and even some nice tutorials.  I have some
bookmarks I could send you, but they're on my other machine; email me
if you're interested.  I found a nice implementation of kd-trees in a
program called "ranger."  I sent Nathan a copy of it, but again it's
on the other machine.

>(Maybe I should be
>emailing this to you privately! )  

Please don't.  I'm rather enjoying reading along.  (I was gonna do my
own implementation of photon maps before Nathan picked it up.)

>There is an elegant way to get rid of this splotchiness.  It's called a
>"contributing point network."  I'll email details later when I get the time.

Could you CC me?  Also, as you noticed from Jensen's images, the
splotchiness tends to go away if you don't visualize the photon
map directly.  Maybe I missed something, but I don't recall that
he had millions of photons stored.  I thought it was somewhat 
fewer.

>Yes, balancing is always better.  This is a tree question.  Averaging all the
>points in the scene will give you a point that can be considered a sort of
>"geometric middle" of the scene.  Averaging any one or two of the components
>(x, y, z) will begin to subdivide the scene across planar and linear
>boundaries.  You can begin to see how an octree forms automatically.

The kd-tree is like an octree, but it only splits along one dimension
at a time.  The code I mailed Nathan takes a predefined array of
points, splits it along the median of the dimension with the greatest
(range? variance? I don't remember) and then subdivides the halves
until it reaches the desired leaf size.  Obviously, this generates
a perfectly balanced tree every time, and since you don't need the
tree until after you generate all the data, the postprocessing is
just fine.
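A naive sketch of that kind of construction (written from memory, not the
actual ranger code, and using a full qsort at each level instead of a
linear-time median selection):

    #include <stdlib.h>

    typedef struct { double p[3]; } Point;
    typedef struct KdNode {
        Point pt;                    /* the median point stored at this node */
        int   axis;                  /* 0, 1 or 2: dimension it splits on    */
        struct KdNode *left, *right;
    } KdNode;

    static int g_axis;
    static int cmp_axis(const void *a, const void *b)
    {
        double d = ((const Point *)a)->p[g_axis] - ((const Point *)b)->p[g_axis];
        return (d > 0) - (d < 0);
    }

    /* Pick the axis with the greatest spread, split at the median, recurse. */
    static KdNode *kd_build(Point *pts, int n)
    {
        if (n <= 0) return NULL;

        int axis = 0;
        double best = -1.0;
        for (int a = 0; a < 3; a++) {
            double lo = pts[0].p[a], hi = pts[0].p[a];
            for (int i = 1; i < n; i++) {
                if (pts[i].p[a] < lo) lo = pts[i].p[a];
                if (pts[i].p[a] > hi) hi = pts[i].p[a];
            }
            if (hi - lo > best) { best = hi - lo; axis = a; }
        }

        g_axis = axis;
        qsort(pts, n, sizeof(Point), cmp_axis);

        int mid = n / 2;
        KdNode *node = malloc(sizeof *node);
        node->pt    = pts[mid];
        node->axis  = axis;
        node->left  = kd_build(pts, mid);
        node->right = kd_build(pts + mid + 1, n - mid - 1);
        return node;
    }

Since the median is taken at every level, the tree comes out balanced no
matter how the points are distributed.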

>Well, consider the edge of a cube.  Two points on different sides of an edge
>will have normals that deviate by 90 degrees.  They are very close, but
>possibly receiving totally different amounts of light.

True.  I'm pretty sure Jensen's formulas take this into account.

>Isn't it true that on a theoretical level, you are computing a version of
>monte carlo as soon as you trace rays out of the light sources?  Somewhat like
>saying all these algorithms are different manifestations of the same equation?

All of the algorithms are attempting to solve the rendering equation,
yes.  Whether photon maps are the same as monte carlo is a question
for the people who make up the definitions.

>>So... how does Jensen use photon maps to aid in indirect 'radiosity'
>>illumination?  He uses a very low-density global photon map, and uses the
>>directions stored in it to direct the samples shot when doing a POV-Ray-type
>>"radiosity" calculation. 

I'm not sure this is entirely correct, Nathan.  You might want to read
that part again.  My understanding was that he combined the nearby
photons with more traditional methods to create a close approximation
without actually having to fire any additional rays for diffuse
surfaces.  I could be wrong, though.  It's been a couple of months
since I read it. :)



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 29 Jan 1999 09:59:05
Message: <36B1CD32.5DDE7F67@Kopp.com>
Ronald L. Parker wrote:
> 
> The kd-tree is like an octree, but it only splits along one dimension
> at a time.  The code I mailed Nathan takes a predefined array of
> points, splits it along the median of the dimension with the greatest
> (range? variance? I don't remember) and then subdivides the halves
> until it reaches the desired leaf size.  Obviously, this generates
> a perfectly balanced tree every time, and since you don't need the
> tree until after you generate all the data, the postprocessing is
> just fine.

My current implementation uses mean-split balancing instead of median-
split.  This saves time during the balancing phase (no full-fledged
sorting required), but requires a little bit more memory (although
not more than ranger).  With the median-split, you don't really
need left/right pointers in the tree... searching the tree is like
doing a binary search on an array.
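Roughly, what I mean by that array layout (a from-memory sketch with made-up
names, not code from my patch or from ranger): with a median split, children
of node i can live at 2i and 2i+1, so walking down to a leaf touches about
log2(n) nodes, just like a binary search.

    typedef struct {
        float pos[3];
        short axis;            /* splitting dimension, or -1 for a leaf */
    } Photon;

    /* photons[1..n] stored heap-style: children of node i are 2i and 2i+1. */
    static int kd_locate_leaf(const Photon *photons, int n,
                              const float query[3])
    {
        int i = 1;
        while (2 * i <= n) {
            const Photon *node = &photons[i];
            if (node->axis < 0) break;                /* reached a leaf     */
            int child = 2 * i;                        /* left child         */
            if (query[node->axis] > node->pos[node->axis])
                child++;                              /* go right instead   */
            if (child > n) break;
            i = child;
        }
        return i;
    }

A real n-closest query also has to back up and look at the far side of a
splitting plane whenever that plane is closer than the worst photon found so
far, but the descent itself is that simple.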

> >Well, consider the edge of a cube.  Two points on different sides of an edge
> >will have normals that deviate by 90 degrees.  They are very close, but
> >possibly receiving totally different amounts of light.
> 
> True.  I'm pretty sure Jensen's formulas take this into account.

Maybe, but I don't think he mentioned it in the paper.  He does store
the direction the light came from, but I don't think he stores the
surface normal where it hit.  This may be a good thing to do, though.

> I'm not sure this is entirely correct, Nathan.  You might want to read
> that part again.  My understanding was that he combined the nearby
> photons with more traditional methods to create a close approximation
> without actually having to fire any additional rays for diffuse
> surfaces.  I could be wrong, though.  It's been a couple of months
> since I read it. :)

Jensen has a paper on this topic called "Importance Driven Path Tracing
using the Photon Map".  From what it sounds like in the paper, he still
has to sample rays from diffuse surfaces, but the photon map just adds
appropriate importance to directions of higher contribution.

-Nathan



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 29 Jan 1999 10:25:28
Message: <36B1D360.6B67FA57@Kopp.com>
Steve wrote:
> 
> What could we call POV's technique?  Name-wise, I mean.  Radiosity is not a
> good word.  Is it "distributive ray tracing"?
> 

Yes.  POV's radiosity is a modified type of distributive (monte-carlo) ray
tracing.  Some distributive ray tracers shoot the sample rays at every point
(to do blurry reflections, for instance), but this is really slow.  Also,
the samples are usually chosen using the surface's BRDF... POV uses the
same lambertian-only pdf for all of its samples (so even shiny surfaces
tend to get a diffuse-surface look).

POV averages those samples and stores the color that it gets.  Then, instead
of always sampling, it sometimes interpolates between points already in its
cache.
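For reference, the lambertian (cosine-weighted) hemisphere sample looks
roughly like this (a generic sketch, not POV's actual sampling code); the pdf
is cos(theta)/pi:

    #include <math.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double x, y, z; } Vec3;

    /* Cosine-weighted direction in the local frame where the surface normal
       is +z: pick a point on the unit disk, project it up onto the sphere. */
    static Vec3 sample_lambertian(void)
    {
        double u1 = (double)rand() / RAND_MAX;
        double u2 = (double)rand() / RAND_MAX;
        double r   = sqrt(u1);                 /* radius on the unit disk  */
        double phi = 2.0 * M_PI * u2;
        Vec3 d;
        d.x = r * cos(phi);
        d.y = r * sin(phi);
        d.z = sqrt(1.0 - u1);                  /* cos(theta)               */
        return d;
    }

The direction comes out in a local frame with the normal along +z, so it still
has to be rotated into the surface's own frame before tracing.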

> 
> It's a matter of sample-and-replace.  Solving this problem by storing a
> bigger database is redundant.  If the emission within some angular
> neighborhood is homogeneous, replace all these similar samples with a single
> sample that is an average, both in importance and in direction.  If you have
> directions giving you near-zero importance, remove them, and recast them in
> a more "important" direction.  Your database size should remain the same in
> the end.  You will have a _better_, not a _bigger_, database.  To set this
> process in motion, don't shoot rays from the light sources randomly, but in
> a rectangular lattice.  Subdivision is now possible; this is sampling, after
> all.  (Maybe I should be emailing this to you privately!)

I like the idea of replacing similar samples with a single sample.  This,
of course, requires finding similar samples in the database, but that
might not be too time-consuming.  And the savings in database size could be
very considerable!

> There is an elegant way to get rid of this splotchiness.  It's called a
> "contributing point network."  I'll email details later when I get the time.

I'm very interested.  I look forward to hearing about it.  :-)

> Also, consider the output of a ray-tracer.  For true color it's 8 bits per
> color channel, giving you a maximum of 256 shades per channel.  For a scene
> containing a single light source with channels not exceeding 1.0 in
> brightness, there is an upper theoretical limit on the number of points
> averaged.  Is this limit the average of 256 points?  Will more points in your
> average change the image?  Think about it.

If the points are not very uniform, the first 256 points you gather could be
very dim... then points 257-350 could be twice as bright as the others.  This
would change the average.  Of course, if this were the case I would try a
better sampling technique to avoid such problems.

> Yes.  But you will find out that the user has to enter a "brightness factor."
> There is no way to get around this using nothing but sampling.   Consider
> averaging the samples.  This is not so bad, I think. Just something to keep in
> mind.  You should definitely investigate.

I think the 'brightness_factor' can be removed and a better averaging of
samples than the current technique could be used.  I'm not sure if it will
actually work the way I want it to, though.

> Well... yes, that.  How much slower is the database stuff anyway?  It seems
> it could be potentially staggering.

For a reasonable-sized photon map in a balanced kd-tree and only averaging
50-100 points per intersection, it's not too bad.  The search time, like
any other binary tree, is approximately on the order of log(n), so that
is good (n=number of photons in tree).

> Not so fast. :)   You may be considering bounces on the same photon.  I am
> talking about something totally different, such as the fact that all
> intersection points in the path of a multiply-reflected photon potentially
> illuminate every intersection point on all the paths of all the other photons
> traced.  The recursive nature of this boggles the mind.  But I assure you
> this is attainable, and elegantly at that.  I will elaborate only over email.

I look forward to hearing more.

> fascinating results.  20 megs?  I think the randomness of light out of the
> light source is introducing the noise.  Honestly.  Try an even distribution.
> Let me know what happens.

I did use an even distribution from the light source.  (My initial attempts
did use random sampling, which I quickly abandoned.) But once I hit an
object, I had to use random sampling, which brought the noise back.  I agree
that with such a large database, the results should have been much better.
Maybe there's a bug in my code (now that's unthinkable!!!).

-Nathan



From: Ron Parker
Subject: Re: A box with no lights.
Date: 29 Jan 1999 12:07:39
Message: <36b1eadb.0@news.povray.org>
On Fri, 29 Jan 1999 10:01:06 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:
>My current implementation uses mean-split balancing instead of median-
>split.  This saves time during the balancing phase (no full-fledged
>sorting required), but requires a little bit more memory (although
>not more than ranger).  With the median-split, you don't really
>need left/right pointers in the tree... searching the tree is like
>doing a binary search on an array.

The code I sent you does median-split balancing without doing a full
sort at each phase, and I think finding the median of a group of points 
is still O(n).  Don't ask me to explain the algorithm, though.  I just
got Knuth for Christmas and I haven't gotten that far yet. :) The end 
result is that each leaf node is actually fairly unsorted.
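For the curious, a selection-style partition (which is the kind of thing I
imagine ranger does, though I haven't checked) finds that median in expected
linear time without ever sorting either half:

    typedef struct { double p[3]; } Point;

    static void swap_pts(Point *a, Point *b) { Point t = *a; *a = *b; *b = t; }

    /* Rearranges pts[0..n-1] so that pts[k] holds the element that would land
       at index k if the array were sorted on 'axis'; everything to the left
       of k is <= pts[k], everything to the right is >= pts[k].  Expected O(n),
       and both halves stay unsorted, which is all a median split needs. */
    static void select_kth(Point *pts, int n, int k, int axis)
    {
        int lo = 0, hi = n - 1;
        while (lo < hi) {
            double pivot = pts[k].p[axis];
            int i = lo, j = hi;
            while (i <= j) {
                while (pts[i].p[axis] < pivot) i++;
                while (pts[j].p[axis] > pivot) j--;
                if (i <= j) { swap_pts(&pts[i], &pts[j]); i++; j--; }
            }
            if (j < k) lo = i;
            if (k < i) hi = j;
        }
    }

    /* median split along 'axis':  select_kth(pts, n, n / 2, axis); */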

>Jensen has a paper on this topic called "Importance Driven Path Tracing
>using the Photon Map".  From what it sounds like in the paper, he still
>has to sample rays from diffuse surfaces, but the photon map just adds
>appropriate importance to directions of higher contribution.

That's an older paper.  While it has some good ideas that you need to
know when you get to the later papers, the actual technique he uses
in the later paper is vastly different than just firing a few rays.
Unless I completely misread it, that is. :)



From: Steve
Subject: Re: A box with no lights.
Date: 3 Feb 1999 06:18:23
Message: <36b81f05.479577132@news.povray.org>
On Fri, 29 Jan 1999 10:27:28 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:


>
>I like the idea of replacing similar samples with a single sample.  This,
>of course, requires you finding similar samples in the database, but that
>might not be too time consuming.  And the savings in database size could be
>very considerable!
>

This is a two-dimensional database over the directions out of the light
source, theta/phi angles and all.  You check adjacent samples for similarity;
there is no searching.


>> There is an elegant way to get rid of this splotchiness.  It's called a
>> "contributing point network."  I'll email details later when I get the
>> time.
>
>I'm very interested.  I look forward to hearing about it.  :-)
>


A contributing point is a point stored like a light source on a surface.  So
it's emitting light rather than gathering it, as in a photon map.

A contributing point network has some bizarre, thesis-worthy properties.  For
instance, in general, it gets SMALLER in MORE COMPLEX scenes.  If you have a
scene that is a light source in a sphere, the network size is maximally small,
of course.  But also, if your scene is a maze, then the size begins to
minimize.  There is some sort of strange middle ground of scene complexity
where the network size maximizes.  You can throw your intuition out the
window. :)

Consider a linear database of contributing points in a scene.  Now consider
storing visibility information about these points.  For N points, there are
(N^2-N)/2 "mutual connections" amongst the points.  If you want to store
reciprocal information across a link there will be twice as many connections,
call them "exclusive connections"... there will be N^2-N.  (For N = 1000
points, that is already 499,500 mutual connections.)  These are just details,
though; we are mostly concerned with the fact that there are O(N^2)
connections for N points.  Don't worry, though: the above formulas give the
theoretical MAXIMUM number of connections.  In real scenes we will neither
trace this many rays nor store visibility information in a database this
large.

How? -->  In real life there are no surfaces that are 100% diffusive.  This
is so obvious.  Think of a flat wall.  Does one point on a plane illuminate
another on the same plane?  It very well shouldn't.  This is a good thing,
especially when building this network, for if we have a lot of highly glossy
surfaces, and especially mirrors, the network will be built with near-magical
speed.  By the grace of glossiness, contributing points can have a "maximum
angular" emission without screwing up the simulation too much.  This means
that contributing points emit light not like an omnilight, but as little
spotlights with large emission angles for diffuse surfaces, and small angles
for glossies and mirrors.

We will consider exclusive connections between the points, mostly because we
have to in order to get the benefits of compaction and optimization.  So we
are dealing with (N^2-N).  The database starts as "upper nodes."  There will
be one for each point, N total.  We can use the points themselves.  Each upper
node contains two linked lists; one of them is "potential," the other is
"confirmed."  We begin by backculling (dot product) all the points against
each other.  We do this twice for each test.  For points A & B, the first test
checks the position of A against the maximum emission angle of B.  The second
checks the position of B against the 180-degree angle about the normal at A.
We want to exhaust our first point, A, first.  Those points failing the
backcull get their address stored in the "confirmed" list with their elements
marked "invisible."  Those not failing go into the "potential" list.  Then,
after exhausting all the tests from A, we trace rays only between A and the
points in the "potential" list.  After confirming visibility (or invisibility)
with these potentials, we move them into the "confirmed" list, marked visible
or invisible depending on what happened with the traced ray.  Thus we have
saved tracing rays by backculling.
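In code, the two backcull tests would look something like this (a hypothetical
sketch with made-up names, not anything I have actually implemented yet):

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    typedef struct {
        Vec3   pos;
        Vec3   normal;         /* unit surface normal at the point          */
        double cos_max_emit;   /* cosine of the maximum emission half-angle */
    } CPoint;

    static Vec3 sub(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
        return r;
    }
    static double dot(Vec3 a, Vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }
    static Vec3 unit(Vec3 v)
    {
        double l = sqrt(dot(v, v));
        Vec3 r = { v.x / l, v.y / l, v.z / l };
        return r;
    }

    /* Returns 1 if A might receive light from B (goes on the "potential" list
       and still needs a ray traced), 0 if the pair can be confirmed invisible
       without tracing anything. */
    static int potential_link(const CPoint *a, const CPoint *b)
    {
        Vec3 b_to_a = unit(sub(a->pos, b->pos));

        /* Test 1: is A inside B's maximum emission cone? */
        if (dot(b->normal, b_to_a) < b->cos_max_emit)
            return 0;

        /* Test 2: is B in front of A's surface, i.e. within the 180-degree
           angle about the normal at A? */
        if (dot(a->normal, sub(b->pos, a->pos)) <= 0.0)
            return 0;

        return 1;
    }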

At this point we have a large list under A's upper node that contains every
single contributing point in the scene, except itself.  Total = (N-1).  We
scan the list and find out whether there are more visibles or more invisibles.
We remove the LARGER group from the list entirely, keeping the smaller group
as the constituents of the list.  We then mark the list "visibles" or
"invisibles" accordingly.  We now automatically know that all other points in
the scene are the opposite case of the points in the confirmed list.  Thus we
have minimized the size of the network by storing only what's necessary.

We continue this process.  In all cases, we have been building up the lists in
the other upper nodes too, if you know what I mean.  So we never trace the
same ray twice, or backcull the same pair of points twice.

Eventually, we have a confirmed list for every contributing point. We can
freely transport light between them to any level of recursion desired.

Imagine now a scene that is a sphere with a light source inside.  All the
nodes will have null lists that say "here are all the invisible points."
There are none!  So we know from this that all points are visible to each
other.  Nice network.  Now consider a horribly complex maze.  Notice that any
given point "sees" only a small fraction of the total number of contributing
points.  The lists will be small visible lists.  What kind of scene has a
maximum network size?  There is indeed a funny middle ground.  Perhaps two
planes facing each other with a light source in between?  Do you want to
write a formal proof, though?  I intend to, eventually.  I'm thinking a
tetrahedron with a light source inside; what about you guys? :)



>> Also, consider the output of a ray-tracer.  For true color it's 8 bits per
>> color channel, giving you a maximum of 256 shades per channel.  For a scene
>> containing a single light source with channels not exceeding 1.0 in
>> brightness, there is an upper theoretical limit on the number of points
>> averaged.  Is this limit the average of 256 points?  Will more points in
>> your average change the image?  Think about it.
>
>If the points are not very uniform, the first 256 points you gather could be
>very dim... then points 257-350 could be twice as bright as the others.  This
>would change the average.  Of course, if this were the case I would try a
>better sampling technique to avoid such problems.
>

This is a problem of knowing which points make big changes to another point
in the scene and which don't.  But how can you know ahead of time without
actually calculating transport?  You don't!  There are things that you do know
ahead of time, though:

1. Points that are directly lit make big fat changes in the scene
illumination.
2. Points that are secondarily lit make marginal changes in the scene
illumination.
3. Points that are tertiarily lit blah blah... etc.

4. Points that are blocked from each other's view don't make any changes at
all.

Did somebody say contributing point network?  What kind of information can we
get from this thing?  Can we ask for the closest point, ask for its visibles,
then calculate?  I think we might be able to!

What does secondarily lit mean?  That means the point is not directly lit
by any light source.  So what does tertiarily lit mean?  (Not visible to any
directly-lit points OR light sources.)  So what does bisecondarily lit mean?
(Not visible to... blah blah.)  So what does n-arily lit mean?  (Not visible
to n-2, n-3, n-4, etc. lit sources.)  Have we discovered a way to find how the
light from the janitor's closet works its way to an office a floor up?  *gasp*
I think Archimedes put it best when he said EUREKA!


>> Yes.  But you will find out that the user has to enter a "brightness
>> factor."  There is no way to get around this using nothing but sampling.
>> Consider averaging the samples.  This is not so bad, I think.  Just
>> something to keep in mind.  You should definitely investigate.
>
>I think the 'brightness_factor' can be removed and a better averaging of
>samples than the current technique could be used.  I'm not sure if it will
>actually work the way I want it to, though.
>

Hmmm... I think there is a conservation-of-energy problem here.  How much does
that one point take up in square meters on that surface over there?  We can't
know without tracing out of the light source.


>> Not so fast. :)   You may be considering bounces on the same photon.  I am
>> talking about something totally different, such as the fact that all
>> intersection points in the path of a multiply-reflected photon potentially
>> illuminate every intersection point on all the paths of all the other
>> photons traced.  The recursive nature of this boggles the mind.  But I
>> assure you this is attainable, and elegantly at that.  I will elaborate
>> only over email.
>
>I look forward to hearing more.
>


If you know the "absolute" recursion level (as described above) then you know
exactly how the transport works between them.  You are already calculating
the transport from one bounce to the next.  A network can be created at any
stage.  Using this information, you have a bias on the absolute levels and you
can focus and continue having the photon search around the maze, so to speak.

>
>I did use an even distribution from the light source.  (My initial attempts
>did use random sampling, which I quickly abandoned.) But once I hit an
>object, I had to use random sampling, which brought the noise back.  I agree
>that with such a large database, the results should have been much better.
>Maybe there's a bug in my code (now that's unthinkable!!!).
>

Tell me what you mean by random sampling. What are you sampling exactly?
Thanks. 
------------
Steve



From: Ron Parker
Subject: Re: A box with no lights.
Date: 3 Feb 1999 09:01:34
Message: <36b856be.0@news.povray.org>
On Wed, 03 Feb 1999 11:27:40 GMT, Steve <hor### [at] osuedu> wrote:

I really like all of the theory; it certainly seems to make sense.
One question, though: where does the initial set of points come 
from?

>Imagine now a scene that is a sphere with a light source inside.  All the
>nodes will have null lists that say "here are all the invisible points."
>There are none!  So we know from this that all points are visible to each
>other.  Nice network.  Now consider a horribly complex maze.  Notice that any
>given point "sees" only a small fraction of the total number of contributing
>points.  The lists will be small visible lists.  What kind of scene has a
>maximum network size?  There is indeed a funny middle ground.  Perhaps two
>planes facing each other with a light source in between?  Do you want to
>write a formal proof, though?  I intend to, eventually.  I'm thinking a
>tetrahedron with a light source inside; what about you guys? :)

I'm thinking of two barely-intersecting (merged) diffuse spheres with the light 
source halfway between their centers.  Each point is visible to roughly half 
of the other points, and all are visible to the light source.  Still not 
more complex than two planes, though, unless you consider that there are 
fewer glancing angles involved.



From: Nathan Kopp
Subject: Re: A box with no lights.
Date: 3 Feb 1999 10:37:05
Message: <36B86D87.1AF041F2@Kopp.com>
That is a very interesting idea!  Have you done any testing on it yet?  I'm
interested in how well it performs.  Are you considering adding it to POV
eventually?  If so, we should (at some future time) discuss how photon maps
and the contributing network can work together (unless, of course, the
contributing network works so well that it replaces the photon maps... I
won't feel too bad if that happens).

Steve wrote:
> 
> Tell me what you mean by random sampling. What are you sampling exactly?
> Thanks.

I was speaking of the beams shot out of the light source into the scene.
Originally, I chose a random theta/phi combination (based on a good PDF
which was supposed to lead to uniform distribution over the area of the
unit sphere).  But I was also speaking of splitting those beams up when
they hit a diffuse surface.  Sample beams would then be chosen over the
unit hemisphere based on the surface normal.  This time, the PDF favored
rays based on the surface's BRDF.
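Roughly, the recipe I mean is the standard one (sketching it from memory here,
not pasting from my patch):

    #include <math.h>
    #include <stdlib.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    typedef struct { double x, y, z; } Vec3;

    /* Uniform direction over the unit sphere: cos(theta) uniform in [-1,1],
       phi uniform in [0, 2*pi).  Picking theta itself uniformly would bunch
       the beams up at the poles. */
    static Vec3 uniform_sphere_dir(void)
    {
        double u = (double)rand() / RAND_MAX;
        double v = (double)rand() / RAND_MAX;
        double cos_theta = 1.0 - 2.0 * u;
        double sin_theta = sqrt(1.0 - cos_theta * cos_theta);
        double phi = 2.0 * M_PI * v;
        Vec3 d = { sin_theta * cos(phi), sin_theta * sin(phi), cos_theta };
        return d;
    }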

-Nathan



From: Steve
Subject: Re: A box with no lights.
Date: 3 Feb 1999 15:31:03
Message: <36b8ae72.516299357@news.povray.org>
On Wed, 03 Feb 1999 10:38:47 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:

>That is a very interesting idea!  Have you done any testing on it yet?  I'm
>interested in how well it performs.  Are you considering adding it to POV
>eventually?  If so, we should (at some future time) discuss how photon maps
>and the contributing network can work together (unless, of course, the
>contributing network works so well that it replaces the photon maps... I
>won't feel too bad if that happens).
>

I'm an old DOS programmer, admittedly.  POV is a Win95 app.  I'm not very
confident when programming Windows apps.  For one thing, I'm not sure about
how to use POV_Alloc().  I need to get the amount of free extended memory,
too.  This would mean more than just writing the code; I need to learn the
details of Win apps.  And I also need to become comfortable with Borland 5.0,
meaning a book on it would be nice.  This is why patches written by me aren't
springing up in binaries.programming.


>Steve wrote:
>> 
>> Tell me what you mean by random sampling. What are you sampling exactly?
>> Thanks.
>
>I was speaking of the beams shot out of the light source into the scene.
>Originally, I chose a random theta/phi combination (based on a good PDF
>which was supposed to lead to uniform distribution over the area of the
>unit sphere).  But I was also speaking of splitting those beams up when
>they hit a diffuse surface.  Sample beams would then be chosen over the
>unit hemisphere based on the surface normal.  This time, the PDF favored
>rays based on the surface's BRDF.
>

I see.  It sounds like you are taking an entirely random distribution of rays
and focusing them in peaks of the BRDF of a given point.  So you want to shoot
more rays where the BRDF is high.  This is initially a good idea, and I can
see why noise would result.  Try an even distribution, and let the natural
light level of the rays do the dirty work.  You will get a smaller number of
"important" rays, though.  The question is: is this a bad thing?

For very tight BRDFs, use cut-off angles, wherein you don't shoot any rays
outside the bright, central "cone."  This would give you an artificial way to
get more important rays emitted into the scene, without resorting to a
randomizer.

I hear some BRDFs are a simultaneous combination of a large specular peak with
a small PDF in all directions.  Perhaps separating the two while distributing
is a good idea?

-------------
Steve



