From: William F Pokorny
Subject: Re: Documenting wrinkles normal pattern bias. v3.7/v3.8.
Date: 5 May 2021 07:38:00
Message: <60928398$1@news.povray.org>
On 5/5/21 6:22 AM, Bald Eagle wrote:
>> Might we implement a normals pattern which allows users to lean/bend
>> normals toward a light source as a new normal block pattern?
>
> Sure (?) - would grabbing the light source position and applying the shear_trans
> with the normalized inverse position, and using a normal {function {}} (like I
> _just_ did for Mike H's fake shadow) not do it?
>
A thought, but I'm thinking simpler at the moment. A weighted normalized
vector addition to the incoming normal in the direction of the point
light(1).
(1) - Partly because in adding the ability to pass a vector to represent
the point light, the same 'eventual keyword parsing' could be used to
specify an arbitrary axis of rotation for other types of normal
perturbation patterns I have banging around in my head.
> Add that (normalized) vector to whatever existing normal(s) and then
> renormalize?
>
William F Pokorny <ano### [at] anonymous org> wrote:
> A thought, but I'm thinking simpler at the moment. A weighted normalized
> vector addition to the incoming normal in the direction of the point
> light(1).
Yep - that's the revised version that I came up with during lunch, after I
scratched my initial suggestion.
Normalize the normal vector.
Normalize the Light source vector.
Have the normal function take a parameter that gets clamped to 0-1.
Then interpolate over the line segment between the actual normal and the
light-source vector, based on that parameter. No angles, no atan2, no matrix
transforms.
"Bald Eagle" <cre### [at] netscape net> wrote:
> William F Pokorny <ano### [at] anonymous org> wrote:
>
> > A thought, but I'm thinking simpler at the moment. A weighted normalized
> > vector addition to the incoming normal in the direction of the point
> > light(1).
And of course that's vastly easier to do in source, since POV-Ray functions
return scalar values, not vectors.
https://wiki.povray.org/content/Reference:Function_Pattern
Maybe there's a clever way to do this in SDL using the color channels and
average {} or some other method...
From: William F Pokorny
Subject: Re: Documenting wrinkles normal pattern bias. v3.7/v3.8.
Date: 6 May 2021 10:11:50
Message: <6093f926$1@news.povray.org>
On 5/5/21 7:00 PM, Bald Eagle wrote:
> "Bald Eagle" <cre### [at] netscape net> wrote:
>> William F Pokorny <ano### [at] anonymous org> wrote:
>>
>>> A thought, but I'm thinking simpler at the moment. A weighted normalized
>>> vector addition to the incoming normal in the direction of the point
>>> light(1).
>
> And of course that's vastly easier to do in source, since POV-Ray functions
> return scalar values not vectors.
> https://wiki.povray.org/content/Reference:Function_Pattern
>
> Maybe there's a clever way to do this in SDL using the color channels and
> average {} or some other method...
>
:-) Re: "vastly easier" - I had the thought, "not with my C++ skills..."
Not useful to most, but as mentioned elsewhere, my povr branch has
functions packing/unpacking three 21 bit / two 32 bit values into the
passed-around double's 'space'. I think 21 bits each for x, y, z is enough
for most normals work.
This works today all in the functional space, but I guess we could open up
2x/3x value paths into the map-pattern and normal perturbation spaces via
double values. With normal perturbation it more or less directly fits as
a new perturbation pattern(1); with scalar map patterns it might be
limited to specific patterns.
I'll have to let that thought cook for a bit. :-)
Bill P.
William F Pokorny <ano### [at] anonymous org> wrote:
> :-) Re: "vastly easier" - I had the thought, "not with my C++ skills..."
I was stating that from the perspective that such would be "an algorithmic
process" rather than a function {}.
What you're proposing seems awfully similar to how slope and aoi already work,
no? Perhaps there's a way to piggyback on those in SDL, or repurpose that
already-written source code for the normal perturbation?
Thinking about this some more today reminded me that I am not at all clear about
how a scalar function perturbs a normal vector. Maybe this got covered in the
quilted pattern thread - I will have to look. It would be nice if this was
covered in the docs somehow.
It also caused me to wonder if there is a way to define a pattern in SDL that
has an rgb output, such that the .r .g and .b could be used. The average {}
pattern?
If so, then could we in theory have "vector functions" by rolling 3 pigment
{function {}} statements into an average{} block, and then using function
{pigment{}} ?
"Bald Eagle" <cre### [at] netscape net> wrote:
> William F Pokorny <ano### [at] anonymous org> wrote:
>
> > :-) Re: "vastly easier" - I had the thought, "not with my C++ skills..."
>
> I was stating that from the perspective that such would be "an algorithmic
> process" rather than a function {}.
>
> What you're proposing seems awfully similar to how slope and aoi already work,
> no? Perhaps there's a way to piggyback on those in SDL, or repurpose that
> already-written source code for the normal perturbation?
>
> Thinking about this some more today reminded me that I am not at all clear about
> how a scalar function perturbs a normal vector. Maybe this got covered in the
> quilted pattern thread - I will have to look. It would be nice if this was
> covered in the docs somehow.
>
> It also caused me to wonder if there is a way to define a pattern in SDL that
> has an rgb output, such that the .r .g and .b could be used. The average {}
> pattern?
> If so, then could we in theory have "vector functions" by rolling 3 pigment
> {function {}} statements into an average{} block, and then using function
> {pigment{}} ?
Yes, you are right; something like that is possible. But patterns can only
return values from 0 to 1, so it must be done with pigments only.
Here's one way of doing it:
// ===== 1 ======= 2 ======= 3 ======= 4 ======= 5 ======= 6 ======= 7
#version 3.7;
#include "Gaussian_Blur.inc"
#declare FnX = function { x*z };
#declare FnY = function { y - 0.2 };
#declare FnZ = function { y + z };
#declare Fn = function { FunctionsPigmentRGB(FnX, FnY, FnZ) };
#declare vP = Fn(0.5, 0.6, 0.3);
#debug "\n"
#debug concat("<", vstr(3, vP, ", ", 0, -1), ">")
#debug "\n\n"
#error "Finished"
// ===== 1 ======= 2 ======= 3 ======= 4 ======= 5 ======= 6 ======= 7
For this to be more useful, one can create one or more functions that wrap
around the FnX, FnY and FnZ functions in order to map their return values into
the 0 to 1 interval, and one or more functions to apply to the components
returned from the Fn vector function in order to map them from the 0 to 1
interval back into the desired intervals.
IIRC somebody once made some macros that did something like this.
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
From: William F Pokorny
Subject: Re: Documenting wrinkles normal pattern bias. v3.7/v3.8.
Date: 7 May 2021 08:13:15
Message: <60952edb@news.povray.org>
On 5/6/21 1:46 PM, Bald Eagle wrote:
> William F Pokorny <ano### [at] anonymous org> wrote:
>
>> :-) Re: "vastly easier" - I had the thought, "not with my C++ skills..."
>
> I was stating that from the perspective that such would be "an algorithmic
> process" rather than a function {}.
>
> What you're proposing seems awfully similar to how slope and aoi already work,
> no? Perhaps there's a way to piggyback on those in SDL, or repurpose that
> already-written source code for the normal perturbation?
>
Me thinking aloud...
Patterns for map use always have access to the raw normal and the
perturbed normal as part of the ray surface intersection work as well as
the active involved ray. We calculate a scalar value based upon
intersection x,y,z. This value calculation can be an inbuilt pattern or
a function.
The normal perturbation patterns are passed the raw normal by reference
as a vector which is then perturbed before returning to the calling ray
surface intersection code. The code also has access to intersection
position.
Both scalar value patterns and normal perturbation patterns have access
to all the other pattern settings - the controls, the knobs. Those are
fixed from parse time.
What I'm thinking about is 21 bit 3d vectors (in a double's space) as a
function calculated scalar value to a 'special' scalar value pattern.
One which perturbs the perturbed normal before returning always 0.0 as
the map value. No new overall mechanism would be needed and this
'special' nperturb 'map' pattern would be used as the first pattern in
an average with a zero weight.
There are probably ten things wrong with my thinking, but I know one
issue is the intersection usually comes into the pattern as a constant
pointer. Meaning with the usual set up I cannot update the perturbed
normal vector as I'd like to do with the 3x 21 bit encoded function value.
> Thinking about this some more today reminded me that I am not at all clear about
> how a scalar function perturbs a normal vector. Maybe this got covered in the
> quilted pattern thread - I will have to look. It would be nice if this was
> covered in the docs somehow.
>
We've discussed it some here and once in private emails. The "pyramid"
of scalar value samples and the reason for the accuracy keyword/setting.
The reason you are perhaps not clear on how it works is that the code itself
has some funky values and scaling in it not documented with comments.
It's a pyramid of 4 samples (hence that type of normal bias), but why the
extra stuff is there I'm not clear. And I wonder too how well the accuracy
works should the same material be variously scaled in a single scene.
> It also caused me to wonder if there is a way to define a pattern in SDL that
> has an rgb output, such that the .r .g and .b could be used. The average {}
> pattern?
> If so, then could we in theory have "vector functions" by rolling 3 pigment
> {function {}} statements into an average{} block, and then using function
> {pigment{}} ?
>
Like Tor Olav, I've played some with this, and you can pass vector
information around this way, but I've found it slow. I think it's likely
all you can do in standard POV-Ray releases. The vector interface, where
it exists with functions, leans on / is tangled with the parser code.
Bill P.
William F Pokorny <ano### [at] anonymous org> wrote:
Following along in /source/core/material/normal.cpp
> Patterns for map use always have access to the raw normal and the
> perturbed normal as part of the ray surface intersection work as well as
> the active involved ray. We calculated a scalar value based upon
> intersection x,y,z. This value calculation can be an inbuilt pattern or
> a function.
Seems like (most of) the internal functions and values at play are
EPoint
TPoint
Tnormal
Intersection
Layer_Normal
Warp_Normal ()
Warp_EPoint ()
Pyramid_Vect []
> The normal perturbation patterns are passed the raw normal by reference
> as a vector which is then perturbed before returning to the calling ray
> surface intersection code. The code also has access to intersection
> position.
>
> Both scale value patterns and normal perturbation patterns have access
> to all the other pattern settings - the controls, the knobs. Those are
> fixed from parse time.
>
> What I'm thinking about is 21 bit 3d vectors (in a doubles' space) as a
> function calculated scalar value to a 'special' scalar value pattern.
> One which perturbs the perturbed normal before returning always 0.0 as
> the map value. No new overall mechanism would be needed and this
> 'special' nperturb 'map' pattern would be used as the first pattern in
> an average with a zero weight.
>
> There are probably ten things wrong with my thinking, but I know one
> issue is the intersection usually comes into the pattern as a constant
> pointer. Meaning with the usual set up I cannot update the perturbed
> normal vector as I'd like to do with the 3x 21 bit encoded function value.
const Intersection *Inter
> We've discussed it some about and once in private emails. The "pyramid"
> of scalar value samples and the reason for the accuracy keyword/setting.
>
> The reason you are perhaps not clear on how it works is the code itself
> has some funky values and scaling in it not documented with comments.
> It's a pyramid of 4 samples (why that type of normal's bias) but why the
> extra stuff I'm not clear. And I wonder too how well the accuracy works
> should the same material be variously scaled in a single scene.
Not sure about the accuracy, and not clear why the pyramid is "biased".
I had problems using the internal sum () function, but adding the 4 pyramid
vectors by hand gave me <0, 0, 0> (out to 8 dec places) so I'm not sure what the
bias would be.
From: William F Pokorny
Subject: Re: Documenting wrinkles normal pattern bias. v3.7/v3.8.
Date: 8 May 2021 08:02:00
Message: <60967db8$1@news.povray.org>
On 5/7/21 6:54 PM, Bald Eagle wrote:
> William F Pokorny <ano### [at] anonymous org> wrote:
>
> Following along in /source/core/material/normal.cpp
>
As I was thinking aloud, I was considering what the base normal
perturbation pattern would see in normal.cpp and what the base scalar
value patterns would see in pattern.cpp.
You're right, more is available up the call chain. I have some code
partly done adding a new pattern to normal.cpp where I now pass more
information down to the base pattern. I plan to test the 'pass a 3x 21
bit vector' idea as an explicit normal perturbation via a function. We'll see.
Side tracked at the moment on some questions which popped to the surface
as I worked on the code.
>
> Not sure about the accuracy, and not clear why the pyramid is "biased".
> I had problems using the internal sum () function, but adding the 4 pyramid
> vectors by hand gave me <0, 0, 0> (out to 8 dec places) so I'm not sure what the
> bias would be.
>
Bias comes from the fact it's a pyramid of four scalar value evaluations
about the center intersection point (the evaluation point, i.e. EPoint) and
not a minimum of 8 samples (a cube, dual pyramid, dual +, or ...), which I
believe necessary for 'better balanced' sampling(1). The pyramid is
being used for performance reasons, I'd bet - and maybe we continue to
use it for this reason(2).
(1) - The reality is there are also biases coming in from shapes on raw
normal. From isosurfaces, for example, where the raw normals are
calculated with three + offsets for a 'leaning pyramid' with the EPoint
at the 'pyramid top'. My belief today is in isosurfaces this is done to
get the inside/outside surface normals pointing in the right direction
with respect to the ray/surface intersection (the at zero value), but
maybe that thinking is off?
(2) - What I've not done is look at how large the bias typically is by
coding up alternatives and measuring it! A complication is results will
be affected by the accuracy setting because often during pattern
perturbations / turbulence the 3D gradients about the EPoint are not at
all constant.
Bill P.
Following along at:
qtpovray-3.80.1/source/core/material/normal.cpp
> Patterns for map use always have access to the raw normal and the
> perturbed normal as part of the ray surface intersection work as well as
> the active involved ray. We calculated a scalar value based upon
> intersection x,y,z. This value calculation can be an inbuilt pattern or
> a function.
Presumably that's this last bit starting at line 873:
{
    shared_ptr<SlopeBlendMap> slopeMap =
        dynamic_pointer_cast<SlopeBlendMap>(Tnormal->Blend_Map);

    Warp_Normal(Layer_Normal, Layer_Normal, Tnormal,
                Test_Flag(Tnormal, DONT_SCALE_BUMPS_FLAG));

    // TODO FIXME - two magic fudge factors
    Amount = Tnormal->Amount * -5.0; /*fudge factor*/
    Amount *= 0.02 / Tnormal->Delta; /* NK delta */

    /* warp the center point first - this is the last warp */
    Warp_EPoint(TPoint, EPoint, Tnormal);

    for (i = 0; i <= 3; i++)
    {
        P1 = TPoint + (DBL)Tnormal->Delta * Pyramid_Vect[i]; /* NK delta */
        value1 = Do_Slope_Map(Evaluate_TPat(Tnormal, P1, Intersection, ray,
                                            Thread), slopeMap.get());
        Layer_Normal += (value1 * Amount) * Pyramid_Vect[i];
    }

    UnWarp_Normal(Layer_Normal, Layer_Normal, Tnormal,
                  Test_Flag(Tnormal, DONT_SCALE_BUMPS_FLAG));
}
> The normal perturbation patterns are passed the raw normal by reference
> as a vector which is then perturbed before returning to the calling ray
> surface intersection code. The code also has access to intersection
> position.
const TNORMAL *Tnormal, Vector3d& normal, Intersection *Inter
> There are probably ten things wrong with my thinking, but I know one
> issue is the intersection usually comes into the pattern as a constant
> pointer. Meaning with the usual set up I cannot update the perturbed
> normal vector as I'd like to do with the 3x 21 bit encoded function value.
Hmmm.
Is there a point after that where you have direct access to the intersection /
normal?
Maybe you can do what you want off to the side, in parallel, and set a flag.
Then later in the code, if the flag is set, directly overwrite the
intersection/normal. I guess it all depends on the order of how things happen
and what state things are in throughout all those steps.
> We've discussed it some about and once in private emails. The "pyramid"
> of scalar value samples and the reason for the accuracy keyword/setting.
Well, I just dug all of that up and looked it over again, and I'm not sure what
the specific evidence is that supports the assertion that a bias exists. When I
add up all of the vectors in the pyramid, I get a vector sum of <0, 0, 0> out to
8 places. Whatever those values are (presumably carefully selected pairs of
angles from y : 109.5 deg I'm guessing/approximating) they were carefully chosen
to give a result that is centered at the origin.
I haven't unraveled the code to the point where I can see how a scalar value
gets applied differently to each of those 4 vectors in order to give a net
change in the normal, but following things like pointers etc. can be
challenging, especially when the code is written by someone else and the
comments are too sparse and terse.
Anyway, here's my visual for the vector pyramid.