Subject: Re: A box with no lights.
From: Ronald L. Parker
Date: 28 Jan 1999 19:33:20
Message: <36b0fe75.43327563@news.povray.org>
On Thu, 28 Jan 1999 23:24:08 GMT, hor### [at] osuedu (Steve )
wrote:

>The kd-tree sounds important.  Do any of Jensen's theses explain it in
>detail?   I found his images with the caustics alone absolutely beautiful.

I believe he picked it up from someone else.  A web search on kd-tree
turns up a few references and even some nice tutorials.  I have some
bookmarks I could send you, but they're on my other machine; email me
if you're interested.  I found a nice implementation of kd-trees in a
program called "ranger."  I sent Nathan a copy of it, but again it's
on the other machine.

>(Maybe I should be
>emailing this to you privately! )  

Please don't.  I'm rather enjoying reading along.  (I was gonna do my
own implementation of photon maps before Nathan picked it up.)

>There is an elegant way to get rid of this splotchiness.  It's called a
>"contributing point network."  I'll email details later when I get the time. 

Could you CC me?  Also, as you noticed from Jensen's images, the
splotchiness tends to go away if you don't visualize the photon
map directly.  Maybe I missed something, but I don't recall that
he had millions of photons stored.  I thought it was somewhat 
fewer.

>Yes, balancing is always better.   This is a tree question.  Averaging all the
>points in the scene will give you a point which can be considered a sort
>of "geometric middle" of the scene.  Averaging any one or two of the (x,y,z)
>components will begin to subdivide the scene across planar and linear
>boundaries.   You can begin to see how an octree forms automatically.

The kd-tree is like an octree, but it only splits along one dimension
at a time.  The code I mailed Nathan takes a predefined array of
points, splits it along the median of the dimension with the greatest
spread (range? variance? I don't remember which), and then subdivides the halves
until it reaches the desired leaf size.  Obviously, this generates
a perfectly balanced tree every time, and since you don't need the
tree until after you generate all the data, the postprocessing is
just fine.
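
In rough C, the build looks something like this.  The structure and the
names (Point, build_kdtree, LEAF_SIZE) are mine, not the actual "ranger"
code, and the "greatest spread" axis pick is my best recollection:

/* Sketch of a median-split kd-tree build over an array of points. */

#include <stdlib.h>

#define LEAF_SIZE 8

typedef struct { float p[3]; } Point;

static int sort_axis;                      /* axis used by the comparator */

static int cmp_axis(const void *a, const void *b)
{
    float d = ((const Point *)a)->p[sort_axis] - ((const Point *)b)->p[sort_axis];
    return (d > 0) - (d < 0);
}

/* Recursively split pts[0..n-1] in place.  Each call sorts the subarray
   along the axis with the greatest range and recurses on the two halves,
   so the resulting (implicit) tree comes out balanced every time. */
static void build_kdtree(Point *pts, int n)
{
    int i, axis = 0;
    float lo[3], hi[3];

    if (n <= LEAF_SIZE)
        return;                            /* small enough; this is a leaf */

    /* find the bounding box of this subset */
    for (i = 0; i < 3; i++)
        lo[i] = hi[i] = pts[0].p[i];
    for (i = 1; i < n; i++) {
        int k;
        for (k = 0; k < 3; k++) {
            if (pts[i].p[k] < lo[k]) lo[k] = pts[i].p[k];
            if (pts[i].p[k] > hi[k]) hi[k] = pts[i].p[k];
        }
    }

    /* split along the dimension with the greatest range */
    if (hi[1] - lo[1] > hi[axis] - lo[axis]) axis = 1;
    if (hi[2] - lo[2] > hi[axis] - lo[axis]) axis = 2;

    sort_axis = axis;
    qsort(pts, n, sizeof(Point), cmp_axis);       /* median ends up at pts[n/2] */

    build_kdtree(pts, n / 2);                     /* left half  */
    build_kdtree(pts + n / 2 + 1, n - n / 2 - 1); /* right half */
}

Sorting the whole subarray with qsort is overkill (a proper median select
would do), but it keeps the sketch short, and since the map is only built
once after all the photons are stored it hardly matters.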

>Well, consider the edge of a cube.  Two points on different sides of an edge
>will have normals that deviate by 90 degrees.  They are very close, but
>possibly receiving totally different amounts of light.

True.  I'm pretty sure Jensen's formulas take this into account.
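
One cheap guard (my own habit, I don't claim it's exactly Jensen's
formula) is to reject gathered photons whose stored incident direction
comes from behind the surface you're shading, so the two faces of that
edge don't pollute each other; flattening the search sphere into a disc
along the normal helps too.  The filter is just:

/* Hypothetical filter used while gathering nearby photons at a point
   with surface normal n.  I'm assuming dir stores the direction the
   photon was travelling when it hit; flip the test if your convention
   is the opposite. */

typedef struct { float pos[3], dir[3], power[3]; } Photon;

static float dot3(const float a[3], const float b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static int photon_usable(const Photon *ph, const float n[3])
{
    return dot3(ph->dir, n) < 0.0f;   /* arrived from the front side */
}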

>Isn't it true that on a theoretical level, you are computing a version of
>Monte Carlo as soon as you trace rays out of the light sources?  Somewhat like
>saying all these algorithms are different manifestations of the same equation?

All of the algorithms are attempting to solve the rendering equation,
yes.  Whether photon maps are the same as Monte Carlo is a question
for the people who make up the definitions.
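
For reference (I'm writing this from memory, so check Kajiya's paper for
the exact notation), the equation they're all chasing is

  L_o(x,\omega_o) = L_e(x,\omega_o)
      + \int_\Omega f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,
        (n \cdot \omega_i)\, d\omega_i

and ray tracing, path tracing, radiosity, and photon maps are just
different ways of approximating that integral.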

>>So... how does Jensen use photon maps to aid in indirect 'radiosity'
>>illumination?  He uses a very low-density global photon map, and uses the
>>directions stored in it to direct the samples shot when doing a POV-Ray-type
>>"radiosity" calculation. 

I'm not sure this is entirely correct, Nathan.  You might want to read
that part again.  My understanding was that he combined the nearby
photons with more traditional methods to create a close approximation
without actually having to fire any additional rays for diffuse
surfaces.  I could be wrong, though.  It's been a couple of months
since I read it. :)
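
The part I do remember is the basic estimate at a diffuse surface: grab
the k nearest photons, sum their power, and divide by the area of the
disc they span.  Something like this, reusing the Photon struct from the
sketch above (k, r2, and the diffuse reflectance rho are my placeholders,
not Jensen's code):

#define PI 3.14159265f

/* Rough radiance estimate at a diffuse hit point: average the power of
   the k nearest photons over the disc of squared radius r2 that contains
   them, then apply a Lambertian BRDF. */
static void radiance_estimate(const Photon *nearest, int k, float r2,
                              const float rho[3], float L[3])
{
    int i, c;
    float inv_area = 1.0f / (PI * r2);     /* 1 / (pi * r^2) */

    L[0] = L[1] = L[2] = 0.0f;
    for (i = 0; i < k; i++)
        for (c = 0; c < 3; c++)
            L[c] += nearest[i].power[c];

    /* flux / (pi r^2) gives irradiance; Lambertian BRDF is rho / pi */
    for (c = 0; c < 3; c++)
        L[c] *= (rho[c] / PI) * inv_area;
}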

