When I blur a sphere, it looks like HexGrid2Size is used regardless of
camera.Blur_Samples (see tracepixel.cpp). That is, far enough out of focus I see
only the first hexagonal pattern, whose center is defined by static const Vector2d
HexGrid2[HexGrid2Size]. I would expect to see the 19- and 37-point grids for
blur_samples >= 19 and 37 respectively. With blur_samples 7 it looks correct,
i.e. it's using HexGrid2Size. I don't think the huge-ish sphere distance makes
any difference, i.e. you can see this with closer objects. I see this in
3.7.0RC6 Unofficial (Universal 64 bit) Dec 14 2012, but I think the official 3.6
Linux build does it too. Here's a sample scene:
#include "shapes.inc"
#include "colors.inc"

global_settings { assumed_gamma 1.0 }

camera {
  perspective
  location <0, 0, 0>
  look_at <0, 1000, 0>
  angle 30/60/60
  aperture 1.1
  blur_samples 37          // see tracepixel.cpp for the 7, 19, 37 HexGrids
  focal_point <0, 9100, 0> // out-of-focus
}

sphere { <0, 100000, 0>, 1
  texture {
    pigment { checker color Red }
  }
}

light_source { <0, 99990, 0>, <1,1,1>*10000 }
On 10/01/2013 at 18:57, John wrote:
> When I blur a sphere, it looks like HexGrid2Size is used regardless of
> camera.Blur_Samples (see tracepixel.cpp). That is, far enough out-of-focus I see
> only the first hexagonal pattern with a center defined by static const Vector2d
> HexGrid2[HexGrid2Size]. I would expect to see the 19 and 37 grids for
> blur_samples >= 19 and 37 respectively. With blur_samples 7, it looks correct,
> i.e. it's using HexGrid2Size. I don't think the huge-ish sphere distance makes
> any difference, i.e. you can see this with closer objects. I see this in
> 3.7.0RC6 Unofficial (Universal 64 bit) Dec 14 2012, but I think official 3.6
> Linux does it too. Here's a sample
I just tested your scene (with 7, 19 & 37, using the latest 3.7 code)... the
images are different, so it seems OK.
The grid for 7 is a classical hexagonal grid, diameter 3.
The grid for 19 is the grid of 7 as a first level, a second level of 6 points
between the 6 outer points, and finally another set of 6 intermediate points
(so a classical hexagonal grid, diameter 5).
The grid for 37 is a hexagonal grid, diameter 7.
(They follow the progression of the centered hexagonal numbers: 1, 7, 19, 37;
next would be 61, 91, 127, 169.)
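For reference, the centered hexagonal numbers follow the formula 3*n*(n+1) + 1,
where n is the number of rings around the center point; here is a tiny
standalone C++ check, purely as an illustration and not part of any POV-Ray
source:

#include <cstdio>

// Centered hexagonal numbers: 1, 7, 19, 37, 61, 91, 127, 169, ...
// H(n) = 3*n*(n+1) + 1, where n is the number of rings around the center.
static int CenteredHexagonal(int n)
{
    return 3 * n * (n + 1) + 1;
}

int main()
{
    for (int n = 0; n <= 7; ++n)
        std::printf("%d ", CenteredHexagonal(n)); // prints: 1 7 19 37 61 91 127 169
    std::printf("\n");
    return 0;
}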
Your scene has the sphere really far out of focus, so it may be that only the
inner points of the 19 and 37 grids actually hit the sphere, and that looks
like the grid of 7.
Try it with:

  focal_point <0, 80000, 0> // less out-of-focus
}

sphere { <0, 100000, 0>, 1
  texture {
    pigment { checker color Red scale 1/3 }
  }
}
It might also help to specify a minimum number of samples:
blur_samples 37, 37
I like the suggestion that maybe I'm only seeing the inner 7, but shouldn't I
be able to find an amount of defocus that also reveals the 19 and 37 grids? I
can see the square at blur_samples 4, for example.
Being out of focus is the whole idea: in my original scene I'm looking at a
parabolic mirror with a secondary-mirror obstruction, and I get the rings
(donuts) you'd see in a real reflecting telescope that is out of focus. But as
I defocus further, the ring separates into 6 blobs, not 19 and not 37, no
matter how I set blur_samples, focal point, distance, and the size of my sphere.
On 10.01.2013 at 18:57, John wrote:
> When I blur a sphere, it looks like HexGrid2Size is used regardless of
> camera.Blur_Samples (see tracepixel.cpp). That is, far enough out-of-focus I see
> only the first hexagonal pattern with a center defined by static const Vector2d
> HexGrid2[HexGrid2Size]. I would expect to see the 19 and 37 grids for
> blur_samples >= 19 and 37 respectively. With blur_samples 7, it looks correct,
> i.e. it's using HexGrid2Size. I don't think the huge-ish sphere distance makes
> any difference, i.e. you can see this with closer objects. I see this in
> 3.7.0RC6 Unofficial (Universal 64 bit) Dec 14 2012, but I think official 3.6
> Linux does it too. Here's a sample
The problem is that with the background being uniform, and your object
very out-of-focus, POV-Ray's adaptive focal blur algorithm gets derailed.
Here's what happens for each pixel in general:
- POV-Ray does not always shoot the specified maximum number of sample
rays. Instead, it shoots just an initial subset of them (7 in this case)
to see how it fares, examining statistical properties of the sampled
color values.
- If the samples vary significantly, POV-Ray shoots another batch of samples
(6 in this case), then takes another look at the statistical properties.
- If the statistics still indicate that more samples are needed, this process
is repeated with further batches (of 6, 4, 4, 4, 4 and 2 samples) until they
indicate that enough samples have been taken (see the sketch below).
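In rough pseudo-C++ terms the loop described above looks something like the
sketch below. This is only an illustration of the behavior just described, not
the actual tracepixel.cpp code; the two helper functions and all names are
stand-ins.

#include <vector>

// Rough sketch of the adaptive focal-blur sampling just described. NOT the
// actual tracepixel.cpp code; the helpers below stand in for POV-Ray's real
// ray tracing and its statistical test.
struct Color { double r, g, b; };

// Stand-in: trace one jittered focal-blur ray for this pixel.
Color ShootFocalBlurSample(int x, int y);

// Stand-in: do the samples already agree closely enough to stop?
bool SamplesAreConfident(const std::vector<Color>& samples);

Color SamplePixel(int x, int y, int maxSamples /* e.g. 37 */)
{
    // Batch sizes as described above: 7 initial samples, then 6, 6, 4, 4, 4, 4, 2
    // (cumulative totals 7, 13, 19, 23, 27, 31, 35, 37).
    static const int batch[] = { 7, 6, 6, 4, 4, 4, 4, 2 };

    std::vector<Color> samples;
    for (int b = 0; b < 8 && (int)samples.size() < maxSamples; ++b)
    {
        for (int i = 0; i < batch[b] && (int)samples.size() < maxSamples; ++i)
            samples.push_back(ShootFocalBlurSample(x, y));

        // If the samples already agree, stop early. This is exactly what goes
        // wrong in the scene above: when none of the first 7 rays hits the
        // sphere, all 7 samples are identical (black) and no further batch is
        // shot for that pixel.
        if (SamplesAreConfident(samples))
            break;
    }

    // The pixel color is the average of the samples actually taken.
    Color sum = { 0.0, 0.0, 0.0 };
    for (const Color& c : samples) { sum.r += c.r; sum.g += c.g; sum.b += c.b; }
    double n = (double)samples.size();
    return Color{ sum.r / n, sum.g / n, sum.b / n };
}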
And this is what happens in your special case:
- POV-Ray shoots the initial batch of 7 sample rays, which are spread
apart so much that at most one (but typically none) of the rays can hit
the sphere.
- If one ray does hit the sphere, POV-Ray will find a huge disagreement
between this sample and the others, and will most likely shoot the
specified maximum of 37 rays (and still be dissatisfied with the
result). These are the regions where you see images of the sphere.
- However, if none of the initial 7 rays hit the sphere, POV-Ray will
happily assume that the whole region shows only pitch black darkness,
and decide to not waste any more computing time on this pixel. These are
the regions where there /should/ be images of the sphere visible but aren't.
This is a problem that affects any scene with small, significantly
out-of-focus structures among otherwise comparatively uniform areas.
Obviously this problem could easily be solved by forcing POV-Ray to take
more samples initially, so that it has a fair chance of hitting the
sphere in the first attempt. Unfortunately, POV-Ray 3.6 did not provide
for such a mechanism.
This is where POV-Ray 3.7's two-parameter version of the blur_samples
setting comes in:
blur_samples MIN, MAX
where MIN specifies the minimum number of samples to take (actually
POV-Ray may effectively use a slightly higher value), and MAX specifies
the traditional parameter, the maximum number of samples (again POV-Ray
may effectively use a slightly higher value).
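Reusing the types and helper declarations from the earlier sketch (still purely
hypothetical, not the real implementation), the MIN parameter amounts to
suppressing the early exit until at least MIN samples have been taken; and
because samples are shot in fixed batches, the effective minimum can end up
slightly above the requested value:

// Hypothetical variant of the SamplePixel() sketch above for blur_samples MIN, MAX.
// Reuses Color, ShootFocalBlurSample() and SamplesAreConfident() from that sketch.
Color SamplePixelMinMax(int x, int y, int minSamples, int maxSamples)
{
    static const int batch[] = { 7, 6, 6, 4, 4, 4, 4, 2 };
    std::vector<Color> samples;
    for (int b = 0; b < 8 && (int)samples.size() < maxSamples; ++b)
    {
        for (int i = 0; i < batch[b] && (int)samples.size() < maxSamples; ++i)
            samples.push_back(ShootFocalBlurSample(x, y));

        // The only difference: no early exit until at least minSamples have been
        // taken, so a uniform-looking first batch can no longer end the sampling
        // prematurely.
        if ((int)samples.size() >= minSamples && SamplesAreConfident(samples))
            break;
    }
    Color sum = { 0.0, 0.0, 0.0 };
    for (const Color& c : samples) { sum.r += c.r; sum.g += c.g; sum.b += c.b; }
    double n = (double)samples.size();
    return Color{ sum.r / n, sum.g / n, sum.b / n };
}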
Another approach would be to take into account neighboring pixels: If
there's a huge difference between neighboring pixels, there's reason to
examine both pixels more closely (and in the aftermath of this, yet more
neighboring pixels may need to be re-examined as well.) But that's not
available in POV-Ray yet. (I did some very promising experiments to use
this as a generic anti-aliasing mechanism though, so it's very likely to
find its way into some later 3.7.x release.)
Thanks for the explanation and the info from Le Forgeron. Indeed using the
min,max form does give me a better approximation. I'd like to know more about
your algorithm - do you have a publication or paper describing what you've done?
-John
On 11.01.2013 at 18:34, John wrote:
> I'd like to know more about
> your algorithm - do you have a publication or paper describing what you've done?
> -John
You mean the statistical anti-aliasing mode?
I'm not an academic, so no publications or papers here. But the idea is
easily outlined:
- The existing anti-aliasing (or more generally, oversampling) modes in
POV-Ray use a geometric grid of sub-samples, which can cause moiré pattern
artifacts; jitter helps a bit, but especially in the case of adaptive
anti-aliasing it is not a cure-all, because the random sub-sample offsets are
limited. A stochastic oversampling approach, as implemented in POV-Ray's focal
blur with high sample counts, might be better suited here, and in practice it
does a good job of avoiding those moiré patterns.
- POV-Ray's focal blur oversampling algorithm also gives a simple, elegant
answer to the question "how many samples do we need to shoot": we need to
shoot enough rays that we can be sufficiently confident that our estimate of a
pixel's color is sufficiently precise. Basic statistical tests give us
well-defined metrics for this.
- When viewing an image, the observer doesn't /per se/ notice pixels
that are computed wrong due to insufficient oversampling; they're only
noticed because they somehow don't match the neighboring pixels.
Therefore, for a visually pleasing result we should trigger additional
oversampling also based on neighboring pixels.
- Therefore, instead of requiring that we can confidently estimate a pixel's
mean color within a given precision, we require that we could confidently
estimate the mean color of the pixel's /neighborhood/ (including itself).
(We're a bit biased towards the center pixel of the neighborhood, though: if
we're not yet confident enough about a pixel's neighborhood, we'll always
sample the center pixel; oversampling of the other pixels in the neighborhood
is taken care of when we examine their own local neighborhoods.) Of course,
when it comes to actually determining the resulting color, we only use the
samples taken for each single pixel. (A rough sketch of this follows below.)
- Currently, "the pixel's neighborhood" is defined as the pixel itself
plus the four immediately adjacent pixels, but the method could also be
applied to a larger region. It may also make sense to assign different
weights to the samples in a neighborhood depending on the distance to
the center pixel; this is currently not done either.
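In very rough terms (again just an illustration: every name here is made up
and the actual confidence test is glossed over), the per-pixel criterion might
be sketched like this:

#include <vector>

// Loose sketch of the neighborhood-based oversampling criterion described
// above. Not actual POV-Ray code; all names and the confidence test are
// placeholders for illustration only.
struct Color { double r, g, b; };

struct PixelSamples
{
    std::vector<Color> samples; // samples taken for this pixel only
};

// Stand-ins for the real machinery:
Color TraceJitteredSample(int x, int y);
bool  NeighborhoodMeanIsConfident(const std::vector<const PixelSamples*>& hood);

// Oversample pixel (x, y) until the mean color of its neighborhood (itself
// plus the four adjacent pixels) can be estimated with enough confidence.
// Only the center pixel receives additional samples here; the neighbors get
// theirs when they are the center of their own neighborhoods.
void OversamplePixel(PixelSamples grid[], int width, int height,
                     int x, int y, int maxSamples)
{
    PixelSamples& center = grid[y * width + x];

    std::vector<const PixelSamples*> hood;
    hood.push_back(&center);
    if (x > 0)          hood.push_back(&grid[y * width + (x - 1)]);
    if (x < width - 1)  hood.push_back(&grid[y * width + (x + 1)]);
    if (y > 0)          hood.push_back(&grid[(y - 1) * width + x]);
    if (y < height - 1) hood.push_back(&grid[(y + 1) * width + x]);

    while ((int)center.samples.size() < maxSamples &&
           !NeighborhoodMeanIsConfident(hood))
    {
        center.samples.push_back(TraceJitteredSample(x, y));
    }
    // The final color of (x, y) is later computed from center.samples only,
    // as described above.
}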
clipka <ano### [at] anonymousorg> wrote:
> Obviously this problem could easily be solved by forcing POV-Ray to take
> more samples initially, so that it has a fair chance of hitting the
> sphere in the first attempt. Unfortunately, POV-Ray 3.6 did not provide
> for such a mechanism.
>
> This is where POV-Ray 3.7's two-parameter version of the blur_samples
> setting comes in:
>
> blur_samples MIN, MAX
> ... (etc.)...
This is indeed a welcome change. I've had a similar problem with my recent
underwater scene (e.g. tiny spheres, very close to the camera, against a more
or less uniform background). In 3.6.2, the '7-objects-only syndrome' is very
apparent--and adding more blur samples just makes the 7 spheres cleaner and
more obvious. :-/
I haven't yet had the chance to run my scene with 3.7RC6, but I'm looking
forward to doing so. Thanks for tackling this issue! Kudos.
"Kenneth" <kdw### [at] gmailcom> wrote:
> clipka <ano### [at] anonymousorg> wrote:
>
> > Obviously this problem could easily be solved by forcing POV-Ray to take
> > more samples initially, so that it has a fair chance of hitting the
> > sphere in the first attempt. Unfortunately, POV-Ray 3.6 did not provide
> > for such a mechanism.
> >
> > This is where POV-Ray 3.7's two-parameter version of the blur_samples
> > setting comes in:
> >
> > blur_samples MIN, MAX
> > ... (etc.)...
>
> This is indeed a welcome change. I've had a similar problem with my recent
> underwater scene (e.g. tiny spheres, very close to the camera, with a more or
> less uniform background.) In 3.62, the '7-objects-only-syndrome' is very
> apparent--and adding more blur samples just makes the 7 spheres cleaner and more
> obvious. :-/
>
> I haven't yet had the chance to run my scene with 3.7RC6, but am looking forward
> to doing so. Thanks for tackling this issue! Kudos.
The aperture synthesized by focal_blur in 3.7 using the min,max form is good
enough to see coma aberration in a parabolic mirror! For the work I'm doing in
optical modeling, I just need to understand what happens in focal_blur when
things start to break down in extreme situations. Basically, when you start to
see a hexagonal structure, or lots of little hexagons, you've gone too far. I
don't know whether this could be solved by using more HexGrids.