POV-Ray : Newsgroups : povray.beta-test : Povray Progress
From: clipka
Subject: Povray Progress
Date: 8 Apr 2009 11:25:01
Message: <web.49dcc127f6c14a48f3b4f0@news.povray.org>
Ah, yeeeees!

Now, with a good deal of *systematic* (and well-documented) cheating, it looks
like we're closing back in on POV 3.6 performance, at the same quality, minus
some of the artifacts, and minus the ugliness of the code...

Maybe there's even more to gain here.

The key to success turns out to lie in sample cache performance - specifically,
in the balance between "false positives" (samples returned by a cache lookup
but found unsuitable for re-use for various geometric reasons) and "false
negatives" (samples that could be re-used but aren't found by the cache
lookup).

Obviously, "false negatives" are costly, because they cause additional samples
to be taken when actually there's enough around already - and, as we all know,
taking samples is the most expensive thing about radiosity.

Wrong.

As it seems, "false positives" are actually much worse. Sure, processing a
sample returned by cache lookup just to find out that it cannot be used right
here after all - due to scene geometry issues - costs a good deal less than
taking another sample. But it typically happens something like 5,000 times more
often!
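
To make that concrete, here's a minimal sketch in hypothetical C++ (none of
these names exist in the actual source): the octree lookup returns candidate
samples, a geometric test rejects the unsuitable ones as "false positives",
and only a complete miss forces a new - expensive - sample to be taken.

#include <cstdint>
#include <vector>

struct Sample
{
    double pos[3];      // where the sample was taken
    double normal[3];   // surface normal at that point (unit length)
    double radius;      // validity radius estimated when sampling
};

struct LookupStats
{
    std::uint64_t reused = 0;          // candidates that passed the geometry test
    std::uint64_t falsePositives = 0;  // candidates rejected after lookup
    std::uint64_t newSamples = 0;      // lookups that forced a fresh sample
};

// Hypothetical geometric suitability test: proximity plus normal agreement.
static bool GeometricallyUsable(const Sample& s, const double p[3],
                                const double n[3])
{
    double d2 = 0.0, dot = 0.0;
    for (int i = 0; i < 3; ++i)
    {
        double d = p[i] - s.pos[i];
        d2  += d * d;
        dot += n[i] * s.normal[i];
    }
    return (d2 <= s.radius * s.radius) && (dot >= 0.9);
}

// candidates = whatever the octree lookup returned for point p.
// Returns true if at least one cached sample could be re-used.
static bool TryReuse(const std::vector<const Sample*>& candidates,
                     const double p[3], const double n[3], LookupStats& stats)
{
    bool usable = false;
    for (const Sample* s : candidates)
    {
        if (GeometricallyUsable(*s, p, n))
        {
            ++stats.reused;
            usable = true;
        }
        else
            ++stats.falsePositives;  // cheap, but happens extremely often
    }
    if (!usable)
        ++stats.newSamples;          // expensive: trace a whole new sample
    return usable;
}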

So why not tune the whole radiosity process a good deal towards "false
negatives" instead of "false positives"? The answer is one single word:

Artifacts!

For a perfectly artifact-free shot, you need a *zero* "false negatives" approach
- otherwise it is impossible to properly "blur" the samples together, and
instead hard "cutoffs" are seen at the boundaries between octree cells. You
don't want that. We've seen exactly that happening in beta.29 and earlier.

The good news, however, is that this doesn't matter that much for deeper
recursion levels: artifacts at those levels are only "seen" by the radiosity
algorithm itself - which, by design, has very blurry vision.

So obviously this is the way to go - not in the seemingly haphazard fashion of
3.6, but in a controlled one, which even allows for much better tuning.
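
Sketched as hypothetical code (again, nothing like the actual implementation),
the "controlled" part could be as simple as making the octree search radius a
function of the recursion depth:

// Wide search at the top level: find every reusable sample (zero false
// negatives, no visible cell cutoffs), at the price of testing lots of
// unusable candidates. Deeper levels, whose artifacts get blurred away
// by the radiosity pass itself, can search more narrowly and accept
// some false negatives in exchange for far fewer candidate tests.
static double LookupRadiusFactor(int depth)
{
    if (depth == 0)
        return 2.0;               // generous: never miss a reusable sample
    return 1.0 / (1.0 + depth);   // tighter with every recursion level
}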


So, what's the bottom line of all this? Well, nothing actually - just spreading
my enthusiasm about getting radiosity back to full gear.

We can't expect identical performance behavior to 3.6 for all scenes - but on
average, what I have here right now seems to be a close match. I'll try some
more tweaking to see if it can be improved even further.



From: Carlo C 
Subject: Re: Povray Progress
Date: 9 Apr 2009 03:40:01
Message: <web.49dda52cce34f70755df90d60@news.povray.org>
"clipka" <nomail@nomail> wrote:
> [...]

Another question, clipka. :-)
Do you think it would be useful to rework "count", like Megapov does - that is,
to offer the possibility of *count > 1600* (with the sample statement)?
Or do you think that is not necessary?


--
Carlo



From: clipka
Subject: Re: Povray Progress
Date: 9 Apr 2009 06:50:01
Message: <web.49ddd2d3ce34f707b06defeb0@news.povray.org>
"Carlo C." <nomail@nomail> wrote:
> Another question, clipka. :-)
> Do you think it would be useful to rework "count", like Megapov does - that is,
> to offer the possibility of *count > 1600* (with the sample statement)?
> Or do you think that is not necessary?

I think the question is moot, and that the whole fixed-count approach is a
dead-end road. My vision is adaptive sampling, based both on the brightness
actually encountered (like the adaptive algorithms used for area lights or
anti-aliasing) and on hints from the scene designer (like the "portals" in
MCPov).
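
In hypothetical code (the convergence criterion here is just an illustration,
not a worked-out design), the brightness-driven part might look like this:

#include <cmath>
#include <functional>

// takeSample stands in for tracing one radiosity sample ray.
// Keep sampling until an additional sample no longer moves the running
// brightness estimate by more than the given relative tolerance.
static double AdaptiveGather(const std::function<double()>& takeSample,
                             unsigned minCount, unsigned maxCount,
                             double tolerance)
{
    double sum = 0.0, mean = 0.0;
    for (unsigned n = 1; n <= maxCount; ++n)
    {
        double prev = mean;
        sum += takeSample();
        mean = sum / n;
        if (n >= minCount && std::fabs(mean - prev) <= tolerance * mean)
            break;   // estimate has converged: stop sampling
    }
    return mean;
}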

As far as fixed-count sampling goes, I think it makes sense to ditch the
current sampling pattern altogether, and instead implement an algorithm that
computes a suitable pattern at startup based on the chosen count - which would
then have essentially no upper limit.

Unfortunately, there does not seem to be any documentation about how the current
fixed sampling pattern was generated.
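
Just to make the idea concrete, here's a hypothetical startup-time generator -
not the 3.6 pattern, obviously, since nobody seems to know how that one was
built - using a low-discrepancy (Halton) sequence to lay out cosine-weighted
hemisphere directions for an arbitrary count:

#include <cmath>
#include <vector>

struct Dir { double x, y, z; };

// Radical inverse in the given prime base (Halton sequence).
static double RadicalInverse(unsigned i, unsigned base)
{
    double f = 1.0, r = 0.0;
    while (i > 0)
    {
        f /= base;
        r += f * (i % base);
        i /= base;
    }
    return r;
}

// Cosine-weighted directions over the hemisphere around +z; any count works.
static std::vector<Dir> MakeSamplePattern(unsigned count)
{
    const double kPi = 3.14159265358979323846;
    std::vector<Dir> dirs;
    dirs.reserve(count);
    for (unsigned i = 0; i < count; ++i)
    {
        double u = RadicalInverse(i + 1, 2);   // in [0,1)
        double v = RadicalInverse(i + 1, 3);   // in [0,1)
        double r = std::sqrt(u);               // cosine weighting
        double phi = 2.0 * kPi * v;
        dirs.push_back({ r * std::cos(phi), r * std::sin(phi),
                         std::sqrt(1.0 - u) });
    }
    return dirs;
}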


However, there are still a number of things that rank higher on the radiosity
agenda: load_file/save_file, for instance, which is still out of order, and
some old kludges in the algorithm that I want to get rid of.



From: Carlo C 
Subject: Re: Povray Progress
Date: 9 Apr 2009 08:15:01
Message: <web.49dde5f4ce34f70755df90d60@news.povray.org>
"clipka" <nomail@nomail> wrote:
> "Carlo C." <nomail@nomail> wrote:
> > Another question, clipka. :-)
> > You think that is useful to work around a "count", like Megapov, to understand,
> > to offer the possibility *count > 1600* (with sample statement)?
> > Or you think that it is not necessary?
>
> I think that the question is moot, and that the whole fixed-count approach is a
> dead-end road. My vision is an adaptive sampling, based on both the brightness
> actually encountered (like the adaptive algorithms used for area lights or
> anti-aliasing) and hints from the scene designer (like the "portals" in MCPov).
>
> As far as fixed-count sampling goes, I think it makes sense to ditch the currend
> sampling pattern altogether, and instead implement an algorithm that computes a
> suitable pattern at startup based on the chosen count - which then would
> basically have no upper limit.
>
> Unfortunately, there does not seem to be any documentation about how the current
> fixed sampling pattern was generated.
>
>
> However, there are still a number of things that rank higher on the radiosity
> agenda; load_file/save_file for instance, which is still out of order; and some
> old kludges in the algorithm I want to get rid of.

An exhaustive answer, as usual.
Thanks.

--
Carlo


