POV-Ray : Newsgroups : povray.programming : Re: A box with no lights.
From: Steve
Date: 3 Feb 1999 06:18:23
Message: <36b81f05.479577132@news.povray.org>
On Fri, 29 Jan 1999 10:27:28 -0500, Nathan Kopp <Nat### [at] Koppcom> wrote:


>
>I like the idea of replacing similar samples with a single sample.  This,
>of course, requires you finding similar samples in the database, but that
>might not be too time consuming.  And the savings in database size could be
>very considerable!
>

This is a two-dimensional database out of the light source, indexed by theta/phi angles.  You check adjacent samples for similarity, so there is no searching.


>> There is an elegant way to get rid of this splotchiness.  It's called a
>> "contributing point network."  I'll email details later when I get the time.
>
>I'm very interested.  I look forward to hearing about it.  :-)
>


A contributing point is a point stored like a light source on a surface.  So it's emitting light rather than gathering it, the reverse of a photon map. 

 A contributing point network has some bizarre, thesis-worthy properties.  For instance, in general, it gets SMALLER in MORE COMPLEX scenes.  If your scene is a light source inside a sphere, the network size is maximally small, of course.  But if your scene is a maze, the size also begins to minimize.  There is some strange middle ground of scene complexity where the network size maximizes.  You can throw your intuition out the window. :)

Consider a linear database of contributing points in a scene.  Now consider storing visibility information about these points.  For N points, there are (N^2-N)/2 "mutual connections" amongst the points.  If you want to store reciprocal information across a link, there will be twice as many connections; call them "exclusive connections": there will be N^2-N.  These are just details, though.  We are mostly concerned with the fact that there are O(N^2) connections for N points.  Don't worry, though: the formulas above give the theoretical MAXIMUM number of connections.  In real scenes we will neither trace this many rays nor store visibility information in a database this large.
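To make the counts concrete, here is a tiny sketch of the two formulas above (the function names are mine, not from any POV-Ray source):

```python
def mutual_connections(n):
    """Unordered point pairs: (n^2 - n) / 2, i.e. n-choose-2."""
    return (n * n - n) // 2

def exclusive_connections(n):
    """Ordered pairs, storing reciprocal info per direction: n^2 - n."""
    return n * n - n

# For 100 contributing points:
print(mutual_connections(100))     # 4950
print(exclusive_connections(100))  # 9900
```

So even a modest 100-point scene has a worst case of 9900 directed links, which is why the backculling below matters.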

How? -->  In real life there are no surfaces that are 100% diffusive.  This is obvious.  Think of a flat wall.  Does one point on a plane illuminate another point on the same plane?  It very well shouldn't.  This is a good thing, especially when building this network, for if we have a lot of highly glossy surfaces, and especially mirrors, the network will be built with near-magical speed.  By the grace of glossiness, contributing points can have a "maximum angular" emission without screwing up the simulation too much.  This means that contributing points emit light not like an omnilight, but as little spotlights: large emission angles for diffuse surfaces, and small angles for glossies and mirrors.
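The "maximum angular emission" test is just a cone check against the surface normal.  A minimal sketch, assuming unit-length normals (the function and parameter names are hypothetical, not from POV-Ray):

```python
import math

def within_emission_cone(emitter_pos, emitter_normal, receiver_pos, half_angle_deg):
    """True if receiver_pos lies inside the emitter's spotlight cone.

    emitter_normal must be unit length.  half_angle_deg is the cone
    half-angle: large for diffuse surfaces, small for glossy surfaces
    and mirrors.
    """
    # Direction from emitter to receiver.
    d = tuple(r - e for r, e in zip(receiver_pos, emitter_pos))
    length = math.sqrt(sum(c * c for c in d))
    if length == 0.0:
        return False  # coincident points never illuminate each other
    # Cosine of the angle between the normal and the direction.
    cos_angle = sum(dc * nc for dc, nc in zip(d, emitter_normal)) / length
    return cos_angle >= math.cos(math.radians(half_angle_deg))
```

Note that a coplanar receiver (90 degrees off the normal) fails any cone narrower than a full hemisphere, which is exactly the "one point on a plane shouldn't illuminate another on the same plane" property.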

We will consider exclusive connections between the points, mostly because we have to in order to get the benefits of compaction and optimization.  So we are dealing with (N^2-N).  The database starts as "upper nodes," one for each point, N total.  We can use the points themselves.  Each upper node contains two linked lists, one "potential" and one "confirmed."  We begin by backculling (dot product) all the points against each other.  We do this twice for each test.  For points A & B, the first test checks the position of A against the maximum emission angle of B.  The second checks the position of B against the 180-degree angle about the normal at A.  We want to exhaust our first point, A, first.  Points failing the backcull get their address stored in the "confirmed" list with their elements marked "invisible."  Those not failing go into the "potential" list.  Then, after exhausting all the tests from A, we trace rays only between A and the points in the "potential" list.  After confirming visibility (or invisibility) with these potentials, we move them into the "confirmed" list, marked visible or invisible depending on what happened with the traced ray.  Thus we have saved tracing rays by backculling.
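The per-node procedure above could be sketched like this.  It's a sketch under my own assumptions: `trace_ray` stands in for the renderer's real visibility test, and the two predicates stand in for the dot-product tests; none of these names come from any actual implementation:

```python
def build_node_lists(a, points, trace_ray, passes_cone_test, passes_hemisphere_test):
    """Build A's confirmed visibility map, tracing rays only where
    backculling cannot already rule a point out.

    trace_ray(a, b)              -> True if an unobstructed ray exists
    passes_cone_test(b, a)       -> is A inside B's maximum emission cone?
    passes_hemisphere_test(a, b) -> is B above the hemisphere about A's normal?
    """
    confirmed = {}   # point -> 'visible' | 'invisible'
    potential = []
    for b in points:
        if b is a:
            continue
        if passes_cone_test(b, a) and passes_hemisphere_test(a, b):
            potential.append(b)
        else:
            confirmed[b] = 'invisible'   # backculled: no ray needed
    for b in potential:                  # only these points cost a ray trace
        confirmed[b] = 'visible' if trace_ray(a, b) else 'invisible'
    return confirmed
```

The savings come entirely from the first loop: every point that fails a dot-product test skips the (expensive) ray trace in the second loop.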

At this point we have a large list under A's upper node that contains every single contributing point in the scene except itself: (N-1) total.  We scan the list and find out whether there are more visibles or more invisibles.  We remove the LARGER group from the list entirely, keeping the smaller group as the constituents of the list.  We then mark the list "visibles" or "invisibles" accordingly.  We now automatically know that all other points in the scene are the opposite case of the points in the confirmed list.  Thus we have minimized the size of the network by storing only what's necessary.
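The compaction step is simple to state in code.  A minimal sketch, continuing the hypothetical `confirmed` map from above:

```python
def compact_confirmed(confirmed):
    """Keep only the smaller of the visible/invisible groups; every
    point absent from the stored list is implicitly the opposite case.

    confirmed maps point -> 'visible' | 'invisible'.
    Returns a (label, points) pair for the node's compacted list.
    """
    visible = [p for p, v in confirmed.items() if v == 'visible']
    invisible = [p for p, v in confirmed.items() if v == 'invisible']
    if len(visible) <= len(invisible):
        return 'visibles', visible
    return 'invisibles', invisible
```

For the sphere scene below, every node's invisible group is empty, so each node stores the label "invisibles" and an empty list, which is the maximally small network.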

We continue this process.  In all cases we have been building up the lists in the other upper nodes too, if you know what I mean.  So we never trace the same ray twice, or backcull the same points twice.

Eventually, we have a confirmed list for every contributing point. We can
freely transport light between them to any level of recursion desired.

Imagine now a scene that is a sphere with a light source inside.  All the nodes will have null lists that say "here are all the invisible points."  There are none!  So we know from this that all points are visible to each other.  Nice network.  Now consider a horribly complex maze.  Notice that any given point "sees" only a small fraction of the total number of contributing points, so the lists will be small visible lists.  What kind of scene has a maximum network size?  There is indeed a funny middle ground.  Perhaps two planes facing each other with a light source in between?  Do you want to write a formal proof, though?  I intend to, eventually.  I'm thinking a tetrahedron with a light source inside; what about you guys? :)



>> Also, consider the output of a ray-tracer.  For true-color it's 8 bits per
>> color channel, giving you a maximum of 256 shades of color.  For a scene
>> containing a single light source with channels not exceeding 1.0 in
>> brightness, there is an upper theoretical limit on the number of points
>> averaged.  Is this limit the average of 256 points?  Will more points in
>> your average change the image?  Think about it.
>
>If the points are not very uniform, the first 256 points you gather could be
>very dim... then points 257-350 could be twice as bright as the others. This
>would change the average.  Of course, if this were the case I would try a
>better sampling technique to avoid such problems.
>

This is a problem of knowing which points make big changes to another point in the scene and which don't.  But how can you know ahead of time without actually calculating transport?  You don't!  There are things that you do know ahead of time, though:

1. Points that are directly lit make big fat changes in the scene
illumination.
2. Points that are secondarily lit make marginal changes in the scene
illumination.
3. Points that are tertiarily lit blah blah... etc.
4. Points that are blocked from each other's view don't make any changes at
all.

Did somebody say contributing point network?  What kind of information can we
get from this thing?  Can we ask for the closest point, ask for its visibles,
then calculate?  I think we might be able to!

What does secondarily lit mean?  It means the point is not directly lit by any light source.  So what does tertiarily lit mean?  (Not visible to any directly-lit points OR light sources.)  So what does bisecondarily lit mean?  (Not visible to blah blah.)  So what does n-iarily lit mean?  (Not visible to any light sources, nor to points lit at orders n-2, n-3, n-4, etc.)  Have we discovered a way to find how the light from the janitor's closet works its way to an office a floor up?  *gasp*  I think Archimedes put it best when he said EUREKA!
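These "orders of lighting" fall straight out of a breadth-first walk over the visibility network.  A sketch under my own assumptions (the network is represented here as a plain dict from each point to its visible points; the names are mine):

```python
from collections import deque

def lighting_levels(visibles, directly_lit):
    """Assign each reachable point its illumination order:
    1 = directly lit, 2 = secondarily lit (visible to a level-1
    point), 3 = tertiarily lit, and so on.

    visibles maps each point to the points it can see.
    """
    level = {p: 1 for p in directly_lit}
    queue = deque(directly_lit)
    while queue:
        p = queue.popleft()
        for q in visibles.get(p, ()):
            if q not in level:           # first discovery = lowest order
                level[q] = level[p] + 1
                queue.append(q)
    return level
```

The janitor's-closet example is just a long chain in this walk: light reaches the office at whatever order the breadth-first search discovers it.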


>> Yes.  But you will find out that the user has to enter a "brightness
>> factor."  There is no way to get around this using nothing but sampling.
>> Consider averaging the samples.  This is not so bad, I think.  Just
>> something to keep in mind.  You should definitely investigate.
>
>I think the 'brightness_factor' can be removed and a better averaging of
>samples than the current technique could be used.  I'm not sure if it will
>actually work the way I want it to, though.
>

Hmmm... I think there is a conservation-of-energy problem here.  How many square meters does that one point take up on that surface over there?  We can't know without tracing out of the light source.


>> Not so fast. :)  You may be considering bounces of the same photon.  I am
>> talking about something totally different, such as the fact that all
>> intersection points in the path of a multiply-reflected photon potentially
>> illuminate every intersection point on all the paths of all the other
>> photons traced.  The recursive nature of this boggles the mind.  But I
>> assure you this is attainable, and elegantly at that.  I will elaborate
>> only over email.
>
>I look forward to hearing more.
>


If you know the "absolute" recursion level (as described above), then you know exactly how the transport works between points.  You are already calculating the transport from one bounce to the next.  A network can be created at any stage.  Using this information, you have a bias on the absolute levels, and you can focus and keep the photon searching around the maze, so to speak.

>
>I did use an even distribution from the light source.  (My initial attempts
>did use random sampling, which I quickly abandoned.) But once I hit an
>object, I had to use random sampling, which brought the noise back.  I agree
>that with such a large database, the results should have been much better.
>Maybe there's a bug in my code (now that's unthinkable!!!).
>

Tell me what you mean by random sampling.  What are you sampling, exactly?
Thanks.
------------
Steve

