I noticed that if I make a height field like this...
height_field
{
  function nPoints, nPoints { fn_X(x,0,y) }
  smooth
}
as you increase "nPoints" the normals get messed up somehow (easily
visible if you have specular or reflection enabled), both with and
without "smooth". Check the attached images: notice how the grid texture
gets better with nPoints at 1000, but then the normals get screwed up.
Is it a bug?
Attachment: image1.png (602 KB)
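For anyone who wants to reproduce it: a self-contained scene along these
lines shows the same effect (the exact function shouldn't matter; this
one is just an arbitrary stand-in for my fn_X):

#declare nPoints = 1000;
#declare fn_X = function(x, y, z) { 0.5 + 0.25*sin(10*x)*cos(10*z) }

camera { location <0.5, 1.3, -0.9> look_at <0.5, 0.4, 0.5> }
light_source { <2, 4, -2> rgb 1 }

height_field {
  function nPoints, nPoints { fn_X(x, 0, y) }
  smooth
  pigment { checker rgb 0.1, rgb 0.9 scale 0.05 }  // the "grid texture"
  finish { specular 0.8 }  // artifacts show up around the highlight
}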
"scott" <sco### [at] scottcom> schreef in bericht
news:4ba0aea8@news.povray.org...
> I noticed that if I make a height field like this...
>
> height_field
> {
>   function nPoints, nPoints { fn_X(x,0,y) }
>   smooth
> }
>
> as you increase "nPoints" the normals get messed up somehow (easily
> visible if you have specular or reflection enabled), both with and
> without "smooth". Check the attached images: notice how the grid
> texture gets better with nPoints at 1000, but then the normals get
> screwed up. Is it a bug?
I am probably wrong, but IMO those are not the normals (no normals are
defined here), just the detail of the grid becoming smaller. Have you
tried nPoints=10000?
Thomas
> I am probably wrong, but IMO those are not the normals (no normals are
> defined here)
I only said that because the problem seems to really show up around the
specular highlight rather than in the texturing. Presumably POV needs to
somehow calculate the normals internally?
> Have you tried nPoints=10000?
Same problem, just at a smaller scale (see the attached "smooth" version).
IME with mesh modellers this appearance indicates incorrectly calculated
normals (this type of pattern certainly shouldn't be visible with
triangles the size of a few pixels). Anyway, the image seems to get
*worse* with more elements; shouldn't it be getting better?
Attachment: image2.png (238 KB)
"scott" <sco### [at] scottcom> schreef in bericht
news:4ba0d4c3@news.povray.org...
> Same problem, just at a smaller scale (see the attached "smooth"
> version). IME with mesh modellers this appearance indicates incorrectly
> calculated normals (this type of pattern certainly shouldn't be visible
> with triangles the size of a few pixels). Anyway, the image seems to
> get *worse* with more elements; shouldn't it be getting better?
I see what you mean. This goes beyond my capabilities. I suppose that the
function is generating a kind of mesh-like object, but is that comparable
to a *real* mesh, I wonder? Time for the experts to have their say :-)
Thomas
> I noticed that if I make a height field like this...
>
> height_field
> {
>   function nPoints, nPoints { fn_X(x,0,y) }
>   smooth
> }
>
> as you increase "nPoints" the normals get messed up somehow (easily
> visible if you have specular or reflection enabled), both with and
> without "smooth". Check the attached images: notice how the grid
> texture gets better with nPoints at 1000, but then the normals get
> screwed up. Is it a bug?
I found the problem. Height_field values *and normals* are stored as
16-bit integers internally at parse time, even if you specify a function
pattern as I did. Thus when zooming in (or scaling up) you will see the
discrete nature of these values, the same problem you get if you use an
image. I did a quick hack in the source to use floating point rather than
integers, and it seems to fix this problem at the cost of memory usage.
Thomas de Groot wrote:
> "scott" <sco### [at] scottcom> schreef in bericht
> news:4ba0d4c3@news.povray.org...
>> Same problem, just at a smaller scale (see the attached "smooth"
>> version). IME with mesh modellers this appearance indicates incorrectly
>> calculated normals (this type of pattern certainly shouldn't be visible
>> with triangles the size of a few pixels). Anyway, the image seems to
>> get *worse* with more elements; shouldn't it be getting better?
>
> I see what you mean. This goes beyond my capabilities. I suppose that
> the function is generating a kind of mesh-like object, but is that
> comparable to a *real* mesh, I wonder? Time for the experts to have
> their say :-)
I suspect an aliasing issue with the Y coordinates.
POV-Ray has three limitations applying to this context:
(1) Smooth height fields use precomputed surface normals, which are
stored with a precision of 3x16 bits. I don't suspect that this has much
of an influence though, as this still corresponds to a resolution of
roughly 0.01 degrees, which shouldn't make any visible difference in
brightness.
(2) The surface normals are not precomputed from the original source,
but from the Y coordinate data as stored in the height field's internal
data structure. As this data structure holds the Y coordinate values
using 16-bit integers, giving a precision of no more than 1/65535,
terracing effects may occur, which will obviously "kill" the normal
smoothing algorithm. At a resolution of 10000 by 10000 samples, this
will happen wherever the slope is less than about 15% (10000/65535).
Worse yet: as the surface normal depends not on the absolute Y
coordinates but rather on coordinate differences, the normals' precision
is governed not by the absolute precision of the Y coordinates, but by
their precision /relative/ to the Y coordinate differences. That is,
aliasing problems will kick in much
earlier; for instance, even at a resolution of just 1000 by 1000
samples, a slope of 15% would theoretically give "raw" coordinate
difference values of 9.83025 (65535*0.15/1000), which due to aliasing of
the absolute Y values will in practice result in raw Y coordinate
difference values alternating between 9 and 10; at the default height
field "aspect ratio" of 1:1:1, this translates to Y coordinate
difference values of approx. 0.000137 and 0.000153 respectively (9/65535
and 10/65535), which in turn correspond to normal angles of approx. 7.82
and 8.68 degrees (atan(1000*9/65535) and atan(1000*10/65535)), respectively.
So at a resolution of 1000x1000 samples, the normals on a 15% slope will
exhibit "quantization noise" with an amplitude of 0.86 degrees, which
may already cause visible artifacts in highlights (especially since this
type of artifact is likely to exhibit a certain banding structure). At
shallower slopes the noise only gets a little bit worse, but it doesn't
get much better at steeper slopes either: at a slope of 100% (45
degrees), for instance, the noise is still at approx. 0.437 degrees.
It appears to me that the noise roughly follows the formula
0.5*(1+cos(slope_angle*2))*(image_resolution/65535), with the result
giving the normal jitter in radians. I guess someone mathematically
inclined will even be able to show why that is - I'm not in that mood
myself right now :-).
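(It does in fact drop out of basic calculus: the normal angle for a raw
difference value v is theta = atan(N*v/65535), with N the sample
resolution, so d(theta)/dv = (N/65535) / (1 + (N*v/65535)^2)
= cos(theta)^2 * (N/65535) = 0.5*(1+cos(2*theta))*(N/65535); a
quantization step of 1 in v therefore jitters the normal by exactly the
amount the formula gives.)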
(3) The function is not evaluated directly by the height field code, but
instead first sampled and stored as a function image; for function
images, a similar limitation applies: again, these happen to be limited
to a depth of 16 bits (per channel, in the case of pigment functions).
As far as workarounds go, I currently know of none, except for using a
totally different primitive: if you don't need a solid, the parametric
primitive may be the way to go, using f(u,v)=u and f(u,v)=v for the x and
z coordinates respectively. If you absolutely need a solid, you may want
to try the isosurface primitive, using f(x,y,z)=y-g(x,z) (with g being
your own height function) and the default threshold of 0. Or you can use
SDL to create a mesh primitive from the function.
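Roughly along these lines; an untested sketch, with g standing in for
your own height function, and the accuracy / precompute / max_gradient
values being guesses that would need tuning (use one object or the
other, they occupy the same unit box):

#declare g = function(x, z) { 0.5 + 0.25*sin(10*x)*cos(10*z) }

// Parametric: x and z are passed straight through, and the height stays
// at full float precision instead of being sampled into 16-bit storage.
parametric {
  function { u }, function { g(u, v) }, function { v }
  <0, 0>, <1, 1>
  contained_by { box { <0, 0, 0>, <1, 1, 1> } }
  accuracy 0.001
  precompute 15 x,y,z
  pigment { rgb 0.9 }
}

// Isosurface: a solid bounded above by the same surface
// (threshold defaults to 0).
isosurface {
  function { y - g(x, z) }
  contained_by { box { <0, 0, 0>, <1, 1, 1> } }
  max_gradient 4
  pigment { rgb 0.9 }
}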
> As far as workarounds go, I currently know of none, except for using a
> totally different primitive: if you don't need a solid, the parametric
> primitive may be the way to go, using f(u,v)=u and f(u,v)=v for the x
> and z coordinates respectively. If you absolutely need a solid, you may
> want to try the isosurface primitive, using f(x,y,z)=y-g(x,z) (with g
> being your own height function) and the default threshold of 0. Or you
> can use SDL to create a mesh primitive from the function.
Thanks for the detailed analysis; after a rough look through the height
field source I assumed something similar was happening - I couldn't
believe it when I saw the normals were stored as 16-bit integers! I guess
the code was written in a time when every byte counted. The code seems to
be written in such a way that changing it to use floating point numbers
is straightforward (I guess the original author suspected that someone
might want to change it in the future).
I had used an isosurface originally but it was painfully slow to render.
I'll take a look at the parametric primitive; I always forget about that
one. Thanks.
> I had used an isosurface originally but it was painfully slow to render.
> I'll take a look at the parametric primitive; I always forget about that
> one. Thanks.
OK, scrap that: parametric is *really* slow. I'm going to make a mesh2
using a macro...
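Something like this, probably (an untested sketch; g is a placeholder
height function, and smooth shading would still need a normal_vectors
block built the same way as the vertices):

#declare g = function(x, z) { 0.5 + 0.25*sin(10*x)*cos(10*z) }

// Build a mesh2 over the unit square from height function Fn,
// with N cells per side. Vertices are stored as full-precision floats,
// which is the whole point.
#macro HFMesh2(Fn, N)
  mesh2 {
    vertex_vectors {
      (N+1)*(N+1)
      #local J = 0;
      #while (J <= N)
        #local I = 0;
        #while (I <= N)
          , <I/N, Fn(I/N, J/N), J/N>
          #local I = I + 1;
        #end
        #local J = J + 1;
      #end
    }
    face_indices {
      2*N*N
      #local J = 0;
      #while (J < N)
        #local I = 0;
        #while (I < N)
          #local P = J*(N+1) + I;  // lower-left vertex of this grid cell
          , <P, P+1, P+N+1>, <P+1, P+N+2, P+N+1>
          #local I = I + 1;
        #end
        #local J = J + 1;
      #end
    }
  }
#end

// Parse time and memory grow with N^2, so start moderate:
object { HFMesh2(g, 300) pigment { rgb 0.9 } }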
scott wrote:
> Thanks for the detailed analysis; after a rough look through the height
> field source I assumed something similar was happening - I couldn't
> believe it when I saw the normals were stored as 16-bit integers! I
> guess the code was written in a time when every byte counted.
As indicated in my post, the normals' precision isn't the bottleneck;
it's the 16-bit limit on the values from which the normals are
computed.
As for memory consumption, with POV-Ray we're still living in a time
where every byte does count. I recently tried to render an (admittedly
pathological) scene and failed because POV-Ray would have had to generate
more radiosity samples than my physical memory (6 GB) could hold. Trying
to stabilize the system after it had started swapping was no fun, with
every mouse click or keyboard press literally taking minutes to be
processed >_<
... and that was even with a custom POV-Ray version that featured a
smaller radiosity sample memory footprint than the standard beta...
> The code seems to be written in such a way that changing it to use
> floating point numbers is straightforward (I guess the original author
> suspected that someone might want to change it in the future).
Or maybe he just had particularly good programming habits. Or someone
already changed it from bytes to shorts. Or the original author wanted
to provide a simple way to change the type on other platforms where a
short int might be something other than 16 bits - after all, POV-Ray has
been developed with such portability issues in mind.
scott wrote:
> OK, scrap that: parametric is *really* slow. I'm going to make a mesh2
> using a macro...
Did you try using the precompute keyword for the parametric?