  Re: image based normal maps?  
From: clipka
Date: 21 Jan 2019 10:33:06
Message: <5c45e632$1@news.povray.org>
On 21.01.2019 at 13:05, Kenneth wrote:

> There's a fundamental difference in appearance between box 1 and boxes 2 and 3.
> The normals produced from the function are... layered(?), or *something*. It's
> as if there are no 'blends' between the three color channels' normals-- unlike
> the smooth blend in the left box. I don't know if this is expected behavior or
> not.

The "plateau" effect is indeed to be expected: When using a pattern in a 
`normal` statement, the "raw" pattern is interpreted as a height value. 
When instead using a pattern in a pigment (as you do in NORM_FUNCTION), 
the pattern value is first translated into a colour value according to a 
colour map, and then the three channels of the colour value are 
translated (as per your function definition) into a scalar again, and 
then /that/ value is used as the height value.
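
Schematically, the two code paths look like this (the scale value is made 
up, and the `.gray` at the end is just a stand-in for whatever 
channel-combining expression your actual function uses):

  // (1) pattern used directly in a normal: raw pattern value = height
  normal { bozo 0.5 scale 0.2 }

  // (2) pattern wrapped in a pigment-based function:
  //     raw value -> colour map -> colour -> scalar -> height
  #declare NORM_FUNCTION = function { pigment { bozo scale 0.2 } }
  normal { function { NORM_FUNCTION(x, y, z).gray } 0.5 }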

Now if the pigment has no colour map associated, then the "raw" pattern 
value is interpreted as a greyscale value, and the subsequent 
translation from colour to scalar simply undoes that conversion. But some 
patterns - and `bozo` is one of them - have default colour maps associated.

In the case of `bozo` this colour map has some ranges of "raw" pattern 
values where the colour (and thus the height value computed from it) 
remains constant, which correspond to the "plateaus".
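
If I remember the docs correctly, that default map is equivalent to 
something like this - constant colour across each range, with a hard 
step at the boundaries:

  color_map {
    [0.0, 0.4 color rgb <1,1,1> color rgb <1,1,1>]  // plateau
    [0.4, 0.6 color rgb <0,1,0> color rgb <0,1,0>]  // plateau
    [0.6, 0.8 color rgb <0,0,1> color rgb <0,0,1>]  // plateau
    [0.8, 1.0 color rgb <1,0,0> color rgb <1,0,0>]  // plateau
  }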


> But the strangest thing is that the *function's* normals react to the light
> only in 90-degree quadrants. In other words, with the light at 45-degrees up and
> to the right, the normals are a BLEND of two strict 90-degree effects
> superimposed, one horizontal, the other vertical, with no 'blend' inbetween.
> (This is more easily seen in an animation, with the light tracing an arc from
> horizon to horizon.)

This, too, can be explained:

When a pattern is plugged into a normal, the perturbed normal is computed 
by first sampling the pattern at four points in 3D space, computing from 
those samples the gradient of the pattern in 3D space, and from that 
computing how to perturb the normal. The four samples are always taken 
at fixed offsets relative to the intersection point, distributed in a 
tetrahedral arrangement.

If you have a pattern with plateaus, and the distances between those 
plateaus (the slope regions) are narrow compared to the dimensions of 
the sampling tetrahedron, the four sampling points will frequently all 
land on plateaus, and you'll end up with a very limited set of possible 
computed gradient directions (and hence resulting normal orientations).
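
You can influence the size of that sampling tetrahedron with the 
`accuracy` keyword in the normal block, e.g.:

  normal {
    function { NORM_FUNCTION(x, y, z).gray } 0.5
    accuracy 0.005  // smaller than the default = smaller tetrahedron
  }

(Though as explained below, that alone won't save you with `bozo`'s 
default colour map.)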


With a pattern that has at least /some/ slope regions, you can work 
around this by increasing the scaling factor in the pigment inside the 
function by some factor Q, and compensating by scaling the normal by the 
inverse of that factor, 1/Q.
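
As a rough sketch (Q, the colour map and the scale values are all just 
placeholders):

  #declare Q = 8;
  #declare NORM_FUNCTION = function {
    pigment {
      bozo
      // plateaus at 0 and 1, with a narrow but finite slope in between
      color_map { [0.45 rgb 0] [0.55 rgb 1] }
      scale 0.2 * Q   // enlarge the pattern by Q ...
    }
  }
  normal {
    function { NORM_FUNCTION(x, y, z).gray } 0.5
    scale 1/Q         // ... and shrink the normal back by 1/Q
  }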

With `bozo` you're out of luck though: Its default colour map consists 
exclusively of plateaus, and the slopes are infinitely narrow. So as you 
increase the precision of the normal computations, the slopes will 
eventually vanish entirely, and you'll end up with a seemingly flat 
surface (because perturbed normals don't cast shadows). So the only way 
around this is to override the pattern's default colour map.
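
For example, a plain greyscale ramp in place of the default map already 
does the trick (scale again just a placeholder):

  #declare NORM_FUNCTION = function {
    pigment {
      bozo
      color_map { [0 rgb 0] [1 rgb 1] }  // smooth ramp, no plateaus
      scale 0.2
    }
  }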

