POV-Ray : Newsgroups : povray.advanced-users : <no subject>
From: Jay Fox
Subject: <no subject>
Date: 12 May 2009 16:05:00
Message: <web.4a09d5bdc23a6d9d92e869d0@news.povray.org>
For the last few years, the sampling method used for focal blur has really
bugged me. I have two main problems with it. The first, and easier to
understand, is that the "aperture" is not round. It's at best hexagonal, and at
worst, square. This can be easily fixed by using a "sunflower" distribution, a
topic for another day.
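
(For the impatient, here's roughly what I have in mind by a "sunflower" distribution; a quick C++ sketch of the Vogel spiral, my own code, not anything from the POV-Ray source:)

#include <cmath>
#include <cstdio>
#include <vector>

struct Point2 { double x, y; };

// Fill the unit disc almost uniformly by placing sample i at radius
// sqrt((i+0.5)/n) and angle i * golden_angle. The sqrt keeps the area
// density uniform; the golden angle keeps samples from lining up.
std::vector<Point2> vogel_disc(int n)
{
    const double pi = 3.14159265358979323846;
    const double golden_angle = pi * (3.0 - std::sqrt(5.0)); // ~2.39996 rad
    std::vector<Point2> pts;
    pts.reserve(n);
    for (int i = 0; i < n; ++i)
    {
        double r     = std::sqrt((i + 0.5) / n);
        double theta = i * golden_angle;
        pts.push_back({ r * std::cos(theta), r * std::sin(theta) });
    }
    return pts;
}

int main()
{
    for (const Point2& p : vogel_disc(19))
        std::printf("% .4f % .4f\n", p.x, p.y);
}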

The second problem, which is easier to see but perhaps harder to understand, is
undersampling. They say a picture is worth a thousand words, so here's my
picture:

/********** BEGIN sample.pov file ***********/
#include "colors.inc"

// A flat, self-lit letter "E": a cube with two notches cut out,
// then squashed nearly flat.
#declare E = difference {
  box {<-1.0, -1.0, -1.0>, < 1.0,  1.0,  1.0>}
  box {<-0.5, -0.6, -1.1>, < 1.1, -0.2,  1.1>}  // lower notch
  box {<-0.5,  0.2, -1.1>, < 1.1,  0.6,  1.1>}  // upper notch
  texture {pigment {color White}
    finish {diffuse 0 ambient 1}}  // fully ambient, so no light source needed
  scale <0.12, 0.15, 0.0001>
};

object {E scale 1 translate <0, 0.8, 0>}    // E at the focal plane, in focus

object {E scale 0.5 translate <0, -0.4, 5>} // E halfway to the camera, badly blurred

camera {
  location <0, 0, 10>
  direction -3*z
  look_at <0, 0, 0>
  focal_point <0, 0, 0>
  aperture 2
  blur_samples 2000
  confidence 1-pow(0.5,52)  // just shy of 1
  variance pow(0.5,52)      // tiny, but not the special case 0
}
/*********** END sample.pov file ************/

I bring this up because the issue has come up at least once before and was
dismissed a bit too hastily:
http://news.povray.org/povray.programming/thread/%3Cst8m53-nt.ln1%40raf256.com%3E/

The simple explanation is this: with a sufficiently high confidence and a
sufficiently low variance, POV-Ray initially takes 19 samples. The sample
variance is computed and used to decide whether to continue sampling. In the
scene above, there are many pixels where all 19 of those samples hit the black
background. The sample variance for those pixels is zero, so no confidence or
variance settings (short of the nearly useless "variance 0", which forces
every pixel to use the maximum number of samples) will cause additional
samples to be taken. To see the correct result, modify my code, changing the
variance to 0. Warning: it will run for a long, long time.
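
To make that failure mode concrete, here's a toy C++ model of the stopping
rule (a simplified stand-in for the real confidence/variance test, using one
gray component and a plain variance threshold):

#include <cstdio>

int main()
{
    const int    min_samples = 19;                  // initial batch described above
    const int    max_samples = 2000;                // blur_samples from the scene
    const double variance_threshold = 1.0 / 128.0; // POV-Ray's default variance

    double sum = 0.0, sum_sq = 0.0;
    int n = 0;
    while (n < max_samples)
    {
        double c = 0.0; // every sample hits the black background
        sum    += c;
        sum_sq += c * c;
        ++n;
        if (n < min_samples) continue;
        double mean = sum / n;
        double sample_variance = sum_sq / n - mean * mean;
        // With all-black samples, sample_variance is exactly 0, so this
        // test passes on the very first check and we stop at n == 19;
        // no threshold short of 0 can force more samples.
        if (sample_variance < variance_threshold) break;
    }
    std::printf("stopped after %d samples\n", n);
}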

Now that you can "see" the problem, how do we fix it? Start with a thought
experiment. For any given pixel, the "true" color is the average of the colors
of every ray inside a "focal cone": the cone that starts at the circular (or
hexagonal, or square) aperture, passes through the focal point, and continues
on to infinity.

Let's assume that a hypothetical object intrudes into this cone. It might be the
corner of a box that clips the surface of the cone, or a cylinder that cuts all
the way through and clips the surface twice, or an object fully embedded in the
cone that does not clip the cone surface at all. It could even be a union of
several objects, embedded and/or clipping.

The interesting thing about this hypothetical object is that a certain
percentage of all rays cast will intersect it. This gives us a probability that
a randomly selected sample will hit the intruding object. Let's call this
probability p.

We don't know what p is. However, after having taken 19 samples, none of which
happened to hit the intruding object, we can say that the likelihood that any
further sample will hit it is 1/21. This isn't quite a true probability, but we
can use it like one. In other words, we can use a "sample p" of 1/21. This is
Laplace's rule of succession, which I always remember by the "sunrise problem".
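
For reference, the rule of succession says that after n trials with s
occurrences of an event, the estimated probability of the event on the next
trial is (s+1)/(n+2). Here n = 19 and s = 0, so the estimate is
(0+1)/(19+2) = 1/21.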

So, after sampling 19 rays, we can say that 20/21 of all future rays will hit
"known" surfaces, which have the known sample variance we already calculate
today. The other 1/21 will hit unknown surfaces, which have some unknown
variance. Let's assume for pessimism's sake that the unknown object (or union
of objects) has a texture with colors uniformly distributed across the unit
color-cube.

Then for each component (r, g, b, a), the mean squared deviation of a uniformly
distributed component value from the sample mean k (that's just the integral of
(u-k)^2 for u from 0 to 1) is 1/3*((1-k)^3 - (-k)^3) = k^2 - k + 1/3, with k
being the sample mean of r, g, b, or a.

Putting it all together, assuming we've taken n samples so far:
unknown_variance  = sum over k in {r,g,b,a} of (mean(k)^2 - mean(k) + 1/3)
adjusted_variance = (unknown_variance + (n+1)*sample_variance)/(n+2)
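
As a sanity check, here are those two formulas as a small self-contained C++
sketch (untested, my own naming, not existing POV-Ray code):

#include <cstdio>

struct Color { double c[4]; }; // r, g, b, a components in [0,1]

// unknown_variance = sum over components of (mean^2 - mean + 1/3)
double unknown_variance(const Color& mean)
{
    double v = 0.0;
    for (double k : mean.c)
        v += k * k - k + 1.0 / 3.0;
    return v;
}

// adjusted_variance = (unknown_variance + (n+1)*sample_variance) / (n+2)
double adjusted_variance(const Color& mean, double sample_variance, int n)
{
    return (unknown_variance(mean) + (n + 1) * sample_variance) / (n + 2);
}

int main()
{
    Color black = {{0.0, 0.0, 0.0, 0.0}};
    // 19 all-black samples: sample_variance is 0, yet the adjusted
    // variance stays positive ((4/3)/21 ~= 0.063), so sampling continues.
    std::printf("adjusted = %f\n", adjusted_variance(black, 0.0, 19));
}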

I plan to implement a test run of this algorithm, but I'm bringing it up here
to ask: is this already an active area of research? If so, am I needlessly
duplicating existing work?

A final word. You might be thinking: what's the point? Don't we now have to
take a bunch of samples even if they all come back the same color? Doesn't that
defeat the purpose of the confidence/variance system?

Remember, with each new sample, the effect of that unknown_variance term
diminishes. If every ray comes back black because there really is no intruder,
we'll still get out pretty early. On the other hand, if we are sampling a
complex object, the unknown_variance will eventually fade away, but the
sample_variance will STILL BE HIGH. So the confidence/variance scheme will
still do its job: finish quickly in areas of low "noise", and take lots of
samples where the noise is high.
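
To put numbers on it: for an all-black pixel the unknown_variance is
4*(1/3) = 4/3, so with a sample variance of zero the adjusted variance is
(4/3)/(n+2). Against POV-Ray's default variance of 1/128, that drops below the
threshold once n reaches about 169; still an early exit compared to, say,
blur_samples 2000, while sampling the empty region enough times to be
reasonably confident it really is empty.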



From: Jay Fox
Subject: Re: <no subject>
Date: 12 May 2009 16:15:00
Message: <web.4a09d7ce6f92e856d92e869d0@news.povray.org>
I apologize for the lack of a subject. Every time I tried to edit my message, I
had to log in from scratch (what, no cookies?), and I would lose the title and
contents. I had to edit in Notepad and copy-paste each time I previewed, but I
forgot to add the title back in. I realize this will make this message easy to
lose track of. Is there a way to change the title after it's been posted?


