Newsgroups: povray.beta-test
Subject: Re: focal blur only using HexGrid2Size?
From: clipka
Date: 11 Jan 2013 16:37:54
Message: <50f08632@news.povray.org>
On 11.01.2013 18:34, John wrote:
> I'd like to know more about
> your algorithm - do you have a publication or paper describing what you've done?
> -John

You mean the statistical anti-aliasing mode?

I'm not an academic, so no publications or papers here. But the idea is 
easily outlined:

- The existing anti-aliasing (or more generally, oversampling) modes in 
POV-Ray use a geometric grid of sub-samples, which can cause moiré 
pattern artifacts; jitter helps a bit, but especially in the case of 
adaptive anti-aliasing it is not a cure-all, because the random 
sub-sample offsets are limited. A stochastic oversampling approach, as 
implemented in POV-Ray's focal blur with high sample counts, might be 
better suited here, and indeed does a good job in practice of avoiding 
those moiré patterns.
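
To illustrate the difference (this is just my own C++ sketch, not 
POV-Ray source code; all names are invented), the two strategies differ 
only in where the sub-pixel sample positions come from:

// Sketch only -- not POV-Ray source code. Contrasts grid-with-jitter
// sub-sample placement with fully stochastic placement for one pixel.
#include <cstdlib>
#include <vector>

struct SubSample { double x, y; };   // offsets within the pixel, in [0,1)

static double Rand01() { return std::rand() / (RAND_MAX + 1.0); }

// Regular n x n grid with limited jitter: the jitter never moves a
// sample out of its grid cell, so residual regularity between pixels
// can still interfere with regular image content (moiré patterns).
std::vector<SubSample> GridSamples(int n, double jitter)
{
    std::vector<SubSample> s;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            s.push_back({ (i + 0.5 + jitter * (Rand01() - 0.5)) / n,
                          (j + 0.5 + jitter * (Rand01() - 0.5)) / n });
    return s;
}

// Fully stochastic placement: every position is drawn independently,
// so there is no residual grid left to interfere with the image content.
std::vector<SubSample> StochasticSamples(int count)
{
    std::vector<SubSample> s;
    for (int i = 0; i < count; ++i)
        s.push_back({ Rand01(), Rand01() });
    return s;
}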

- POV-Ray's focal blur oversampling algorithm also gives a simple, 
elegant answer to the question, "how many samples do we need to shoot?": 
We need to shoot enough rays that we can be sufficiently confident that 
our estimate of a pixel's color is sufficiently precise. Basic 
statistical tests give us well-defined metrics for this.
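
As a sketch of what such a metric could look like (again my own 
illustration, not the actual POV-Ray code): keep running statistics of 
the samples taken so far and stop once the standard error of the 
estimated mean drops below a chosen tolerance.

// Sketch of a confidence-based stopping rule -- not POV-Ray source.
// Welford's algorithm keeps a numerically stable running mean/variance;
// sampling stops once the standard error of the mean is small enough.
#include <cmath>

class RunningStats
{
public:
    void Add(double x)
    {
        ++n;
        double delta = x - mean;
        mean += delta / n;
        m2   += delta * (x - mean);
    }
    long   Count()    const { return n; }
    double Mean()     const { return mean; }
    double Variance() const { return (n > 1) ? m2 / (n - 1) : 0.0; }
    // Standard error of the estimated mean; shrinks as 1/sqrt(n).
    double StdError() const { return (n > 1) ? std::sqrt(Variance() / n) : 1e30; }
private:
    long   n = 0;
    double mean = 0.0, m2 = 0.0;
};

// Shoot rays until we are "sufficiently confident": here, until the
// standard error is below `tolerance`, bounded by min/max sample counts.
// ShootRay is any callable returning one sample's value (e.g. brightness).
template<typename ShootRay>
double EstimatePixel(ShootRay shoot, double tolerance,
                     long minSamples, long maxSamples)
{
    RunningStats stats;
    while (stats.Count() < maxSamples &&
           (stats.Count() < minSamples || stats.StdError() > tolerance))
        stats.Add(shoot());
    return stats.Mean();
}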

- When viewing an image, the observer doesn't /per se/ notice pixels 
that are computed wrong due to insufficient oversampling; they are only 
noticed because they somehow don't match the neighboring pixels. 
Therefore, for a visually pleasing result, we should also trigger 
additional oversampling based on the neighboring pixels.

- Therefore, instead of requiring that we can confidently estimate a 
pixel's mean color within a given precision, we require that we could 
confidently estimate the mean color of the pixel's /neighborhood/ 
(including the pixel itself). (We're a bit biased towards the center 
pixel of the neighborhood, though: if we're not yet confident enough 
about the neighborhood of a pixel, we'll always sample the center pixel; 
oversampling of the other pixels in the neighborhood is taken care of 
when we examine their own local neighborhoods.) Of course, when it comes 
to actually determining the resulting color, we only use the samples 
taken for each single pixel.
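
A sketch of this neighborhood test (my own illustration under the 
assumptions stated in the comments, not POV-Ray's actual code): pool the 
statistics of a pixel and its neighbors for the confidence decision, but 
take further samples only for the center pixel, whose own samples also 
determine its final color.

// Sketch of the neighborhood-driven criterion -- not POV-Ray source.
// Each pixel keeps its own sample statistics; the decision to take more
// samples for a pixel uses the pooled statistics of the pixel plus its
// four edge-adjacent neighbors, but its color uses only its own samples.
#include <vector>

struct PixelStats
{
    long   n = 0;
    double sum = 0.0, sumSq = 0.0;
    void   Add(double x) { ++n; sum += x; sumSq += x * x; }
    double Mean() const  { return n ? sum / n : 0.0; }
};

// True if the mean of the pooled neighborhood samples is already known
// precisely enough (standard error of the pooled mean <= tolerance).
bool NeighborhoodConfident(const std::vector<PixelStats>& img,
                           int width, int height, int x, int y,
                           double tolerance)
{
    static const int dx[5] = { 0, -1, 1,  0, 0 };
    static const int dy[5] = { 0,  0, 0, -1, 1 };
    long n = 0; double sum = 0.0, sumSq = 0.0;
    for (int k = 0; k < 5; ++k)
    {
        int px = x + dx[k], py = y + dy[k];
        if (px < 0 || px >= width || py < 0 || py >= height)
            continue;                       // clip neighborhood at image edges
        const PixelStats& p = img[py * width + px];
        n += p.n; sum += p.sum; sumSq += p.sumSq;
    }
    if (n < 2)
        return false;                       // not enough data to judge yet
    double mean = sum / n;
    double var  = (sumSq - n * mean * mean) / (n - 1);
    return var / n <= tolerance * tolerance;
}

// Refinement: whenever a pixel's neighborhood is not yet confident, add
// more samples to that (center) pixel only; its neighbors receive their
// extra samples when their own neighborhoods are examined.
template<typename ShootRay>
void RefinePixel(std::vector<PixelStats>& img, int width, int height,
                 int x, int y, double tolerance, long maxExtra, ShootRay shoot)
{
    for (long i = 0; i < maxExtra &&
         !NeighborhoodConfident(img, width, height, x, y, tolerance); ++i)
        img[y * width + x].Add(shoot());    // sample only the center pixel
}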

- Currently, "the pixel's neighborhood" is defined as the pixel itself 
plus the four immediately adjacent pixels, but the method could also be 
applied to a larger region. It may also make sense to assign different 
weights to the samples in a neighborhood depending on the distance to 
the center pixel; this is currently not done either.
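
For completeness, a tiny sketch of the neighborhood definition as 
described above, with a hypothetical distance-based weighting noted as a 
commented alternative (the weight values are my invention; as said, this 
is not done at present):

// Sketch -- not POV-Ray source. The neighborhood as described: the pixel
// itself plus the four edge-adjacent pixels, all contributing equally.
struct NeighborOffset { int dx, dy; double weight; };

static const NeighborOffset kNeighborhood[5] = {
    {  0,  0, 1.0 },   // center pixel
    { -1,  0, 1.0 },   // left
    {  1,  0, 1.0 },   // right
    {  0, -1, 1.0 },   // above
    {  0,  1, 1.0 },   // below
};

// Hypothetical variant (not implemented, per the text): down-weight the
// neighbors, e.g. weight = 0.5, so the center pixel dominates the pooled
// statistics in the confidence test.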

