From: William F Pokorny
Subject: Re: v3.8b2. height_field input values at 0.0 not clean.
Date: 18 Feb 2023 05:40:25
Message: <63f0ab19$1@news.povray.org>
On 2/17/23 20:49, Tor Olav Kristensen wrote:
>> Aside: You can use the 3 term select depending upon a boolean result in
>> the first term test by doing something like:
>>
>> select(1-(2*((x<0.0) | (x>1.0))), 0, 1)
>> ...
> How about just this:
>
> select(
> -((x < 0.0) | (1.0 < x)),
> 0,
> 1
> )
>
> - or this:
>
> select(
> -((0.0 <= x) & (x <= 1.0)),
> 1,
> 0
> )
:-)
Very likely OK in practice, and cleaner in form than my three term select.
What spooks me some is that -0 and +0 are real things in the IEEE
floating point standard and as supported by C++. If a C++ coder has
thought to test for -0 < 0, they can.
Bill P.
William F Pokorny <ano### [at] anonymousorg> wrote:
> What spooks me some is that -0 and +0 are real things in the IEEE
> floating point standard and as supported by C++. If a C++ coder has
> thought to test for -0 < 0, they can.
Which raises the question about how the IEEE 754 gets implemented in POV-Ray's
source code, and if a user can test for -0 through SDL.
I actually just watched:
https://www.youtube.com/watch?v=p8u_k2LIZyo
yesterday (it was in the sidebar when watching yesbird's animation), and they
went over some interesting points, and was also wondering if we could implement
something like this in POV-Ray SDL, and source, and if you'd find it useful or
are already using it in povr.
- BW
From: William F Pokorny
Subject: Re: v3.8b2. height_field input values at 0.0 not clean.
Date: 18 Feb 2023 12:49:43
Message: <63f10fb7$1@news.povray.org>
On 2/18/23 08:33, Bald Eagle wrote:
> William F Pokorny <ano### [at] anonymousorg> wrote:
>
>> What spooks me some is that -0 and +0 are real things in the IEEE
>> floating point standard and as supported by C++. If a C++ coder has
>> thought to test for -0 < 0, they can.
>
> Which raises the question about how the IEEE 754 gets implemented in POV-Ray's
> source code, and if a user can test for -0 through SDL.
So tempted to answer I don't know(a)... ;-)
For the floating point standard we mostly get what the C++ compilers
give us, depending upon the options used while compiling. Excepting where
the POV-Ray source has hard coded behavior which is not strictly
compliant, or which in days gone by was intended to handle things in
some more compliant manner(b).
As for testing for -0 via SDL, I cannot think of any direct method...
On my development compiles using g++ 11.3 and the -ffast-math flag, I
can code:
#declare negZero = -0.0;
#declare posZero = +0.0;
#declare hmmZero = 0.0/-1.0;
#debug concat("-0.0 shows up as : ",str(-0.0,0,-1)," \n")
#debug concat("+0.0 shows up as : ",str(+0.0,0,-1)," \n")
#debug concat("Var negZero shows up as : ",str(negZero,0,-1)," \n")
#debug concat("Var posZero shows up as : ",str(posZero,0,-1)," \n")
#debug concat("Var hmmZero shows up as : ",str(hmmZero,0,-1)," \n")
#error "\nStopping at end of parsing\n"
and see as a result:
-0.0 shows up as : -0.000000
+0.0 shows up as : 0.000000
Var negZero shows up as : -0.000000
Var posZero shows up as : 0.000000
Var hmmZero shows up as : -0.000000
So maybe build up strings and do a string compare?
Aside: The C++ value comparison operators mirrored in SDL will - by
default - ignore the sign of zero.
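Untested, but a sketch of that string-compare idea might look like the
following. The macro name IsNegZero is just something I made up here; it
leans on str() emitting the leading '-' as shown above.
// Parse-time check for negative zero via a string compare.
// Sketch only - not in any release.
#macro IsNegZero(Val)
    ((Val = 0) & (strcmp(substr(str(Val,0,-1),1,1),"-") = 0))
#end
#if (IsNegZero(0.0/-1.0))
    #debug "0.0/-1.0 parses as negative zero here.\n"
#else
    #debug "0.0/-1.0 shows no sign here.\n"
#end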
>
> I actually just watched:
>
> Fast Inverse Square Root — A Quake III Algorithm
> https://www.youtube.com/watch?v=p8u_k2LIZyo
>
> yesterday (it was in the sidebar when watching yesbird's animation), and they
> went over some interesting points, and was also wondering if we could implement
> something like this in POV-Ray SDL, and source, and if you'd find it useful or
> are already using it in povr.
>
Cool old stuff!
As for fast, clever algorithms... I've tried some tricks in povr and
looked over quite a few, though not the particular one in the video.
Most come with somewhat noisy behavior compared to standards-compliant
code. The noise is difficult to swallow given the end benefit(c) -
fast floor() and ceil() equivalents, for example.
Bill P.
(a) I don't know it all - about anything. Where I do know a little,
there's almost never a simple, complete answer.
(b) The radiosity code to this day has some complicated configurations
related to the floating point math, for example. Unsure how much of this
is used or needed these days.
(c) The povr fork, while coding up new functions, did implement single
and double precision floating point flavors / options for many functions,
anywhere the single precision code was significantly faster on my i3
hardware - at the expense of accuracy. Often single float accuracy in
functions is OK depending on - stuff.
Measuring and tuning for performance is REALLY difficult these days
given the hardware realities. For core functionality, the tuning job is
best left to the compiler folks as a near hard rule.
William F Pokorny <ano### [at] anonymousorg> wrote:
>
> Just something I happened to see while looking into other height_field
> questions of late.
>
> The HF zero (y as image/fnct evaluated) and z HF result should cleanly
> show up no matter scaling! At best it is today noisy.
>
> Looks to be an issue back through v3.7 stable at least. I think given
> the noise it's likely some numerical and/or bounding issue rather than
> the actual HF mesh. We'll see.
>
[Running v3.8.0 beta 1 in Windows 10]
I ran a bunch of animation tests of
height_field{
function 500,500 {0}
.....
....while changing various values. Here are some results:
If the HF is given a solid color pigment, I don't see any speckles or odd
coincident-surface problems at all.
But if I give it
pigment{ gradient y color_map{[0 rgb 0][1 red 10]}}
I do see the speckles.
Apparently, the color_map is repeating from the top of its red color, but from
'below' the HF.
I also ran some tests while slightly varying the function value itself, and also
used min_extent/max_extent to see what values they would return. The results are
a bit odd, to say the least:
(using the gradient y color_map):
function 500,500 {0} has the speckles
MIN_EXT = <0.0000000000, -0.0000000023, 0.0000000000> note minus sign for y
MAX_EXT = <0.0000000000, -0.0000000023, 0.0000000000>
function 500,500 { 0 + .0000152} has the speckles.
The resulting height_field size:
MIN_EXT = <0.0000000000, -0.0000000023, 0.0000000000> -- same as above
MAX_EXT = <0.0000000000, -0.0000000023, 0.0000000000>
function 500,500 { 0 + .0000153} shows no speckles at all -- but with an abrupt
change in the y value:
MIN_EXT = <0.0000000000, 0.0000022865, 0.0000000000>
MAX_EXT = <0.0000000000, 0.0000022865, 0.0000000000>
---------
Now, if I change the function additions to subtractions:
function 500,500 { 0 - .0000152} has the speckles.
MIN_EXT = <0.0000000000, -0.0000000023, 0.0000000000> -- same as addition
MAX_EXT = <0.0000000000, -0.0000000023, 0.0000000000> -- ditto
function 500,500 { 0 - .0000152} -- The planar HF jumps up to y=1 (almost!) as I
kind of expected... but no speckles at all again, which was surprising.
MIN_EXT = <0.0000000000, 0.9999847412, 0.0000000000> -- y is not quite 1.0
MAX_EXT = <0.0000000000, 0.9999847412, 0.0000000000>
So from these stats and results, my *guess* is that there could be a very slight
'bias'(?) in the HF creation code-- or function-to-HF process? -- whereby the HF
is not actually from 0-1, but (0 minus a small value) to (1 minus the small
value.) OR, the same with the color_map mechanism. Although, the abrupt 'jumps'
between .0000152 and .0000153 are puzzling.
The tests were interesting at least!
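In case anyone wants to repeat the measurements, the extent readouts come
from a couple of #debug lines, roughly like this (a simplified sketch, not
my exact scene):
#declare HF =
    height_field{
        function 500,500 {0}
        pigment{ gradient y color_map{[0 rgb 0][1 red 10]}}
    }
#declare MIN_EXT = min_extent(HF);
#declare MAX_EXT = max_extent(HF);
#debug concat("MIN_EXT = <", vstr(3, MIN_EXT, ", ", 0, 10), ">\n")
#debug concat("MAX_EXT = <", vstr(3, MAX_EXT, ", ", 0, 10), ">\n")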
"Kenneth" <kdw### [at] gmailcom> wrote:
A typo, so sorry.
> function 500,500 { 0 - .0000152} -- The planar HF jumps up to y=1 (almost!) as I
> kind of expected... but no speckles at all again, which was surprising.
> MIN_EXT = <0.0000000000, 0.9999847412, 0.0000000000> -- y is not quite 1.0
> MAX_EXT = <0.0000000000, 0.9999847412, 0.0000000000>
That last function line should have been:
function 500,500 { 0 - .0000153}
From: William F Pokorny
Subject: Re: v3.8b2. height_field input values at 0.0 not clean.
Date: 18 Feb 2023 16:46:21
Message: <63f1472d$1@news.povray.org>
On 2/18/23 14:50, Kenneth wrote:
> So from these stats and results, my *guess* is that there could be a very slight
> 'bias'(?) in the HF creation code-- or function-to-HF process? -- whereby the HF
> is not actually from 0-1, but (0 minus a small value) to (1 minus the small
> value.) OR, the same with the color_map mechanism. Although, the abrupt 'jumps'
> between .0000152 and .0000153 are puzzling
Interesting results, and I think on the right track for some of what is
unique about height_field bounding. What you see with the sudden large
jump on that last subtraction is the out-of-bounds ramp wave value re-map.
As for the smaller jumps: internally, and probably due to the original
image_map-only usage, the max 3d size we can have is 2^16 a side -
HF_VAL is an unsigned short int. Vertically this means we are working at
a best resolution of 1/2^16 steps = 0.00001525... This lines up with
the values where you see the abrupt change in extents, and that's kinda cool.
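As a quick sanity check of that alignment (just parse-time arithmetic,
nothing from the HF code itself):
// 1/2^16 = 0.0000152587890625 sits between Kenneth's two test values,
// which is why .0000152 still falls in the lowest 16 bit step while
// .0000153 lands in the next step up.
#declare Step16 = 1/pow(2,16);
#debug concat("1/2^16 = ", str(Step16,0,10), "\n")
#debug concat("0.0000152 < 1/2^16 : ", str(0.0000152 < Step16, 0, 0), "\n")
#debug concat("0.0000153 > 1/2^16 : ", str(0.0000153 > Step16, 0, 0), "\n")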
The bounds tracking is done at least in part in terms of doubles during
calculation, where a value of HFIELD_OFFSET (currently 0.001) is
subtracted from the lowest value per side and added to the greatest
value on the top side. After the calculation in doubles, the result is
converted back to HF_VAL.
Because that 0.001 isn't a multiple of the 16 bits of resolution, I
believe there must be some value snapping going on during the double to
HF_VAL conversion.
That's as far as I got before bailing out.
In looking at your results I'm not at all sure why the max extent values
for x and z are all zero? The HF max extent should be 1 or larger for x
and z (always I think if not scaled) ?
Bill P.
William F Pokorny <ano### [at] anonymousorg> wrote:
>
> In looking at your results I'm not at all sure why the max extent values
> for x and z are all zero? The HF max extent should be 1 or larger for x
> and z (always I think if not scaled) ?
Yes, that surprised me as well. My HF scale was at <1,1,1>. Odd results indeed.
"Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmailcom> wrote:
> William F Pokorny <ano### [at] anonymousorg> wrote:
> >...
> > I used the 4 term because I think it reads a little cleaner when setting
> > up a boolean test in the first term which can only return a zero or one
> > - the negative action is never used as Bill W said.
>
> I agree..
>
>
> > Aside: You can use the 3 term select depending upon a boolean result in
> > the first term test by doing something like:
> >
> > select(1-(2*((x<0.0) | (x>1.0))), 0, 1)
> >...
>
> How about just this:
>
> select(
> -((x < 0.0) | (1.0 < x)),
> 0,
> 1
> )
>
> - or this:
>
> select(
> -((0.0 <= x) & (x <= 1.0)),
> 1,
> 0
> )
But for that check select() isn't needed.
This should be sufficient:
((0.0 <= x) & (x <= 1.0))
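E.g. it can be used directly in a user defined function (illustration only;
the name InUnitX is just something I made up):
// Returns 1.0 when x is inside [0, 1] and 0.0 otherwise - the same
// result as the select() variants above.
#declare InUnitX = function(x, y, z) { (0.0 <= x) & (x <= 1.0) }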
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 2/17/23 20:49, Tor Olav Kristensen wrote:
> >> Aside: You can use the 3 term select depending upon a boolean result in
> >> the first term test by doing something like:
> >>
> >> select(1-(2*((x<0.0) | (x>1.0))), 0, 1)
> >> ...
> > How about just this:
> >
> > select(
> > -((x < 0.0) | (1.0 < x)),
> > 0,
> > 1
> > )
> >
> > - or this:
> >
> > select(
> > -((0.0 <= x) & (x <= 1.0)),
> > 1,
> > 0
> > )
>
> :-)
>
> Very likely OK in practice, and cleaner in form than my three term select.
>
> What spooks me some is that -0 and +0 are real things in the IEEE
> floating point standard and as supported by C++. If a C++ coder has
> thought to test for -0 < 0, they can.
Now that you mention it, I think that I've actually run into a problem with
negative zero in a version of POV-Ray.
IIRC I got a different result when I used select() from what I got when I used <
or > in an #if statement.
So yes, it is scary.
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
William F Pokorny <ano### [at] anonymousorg> wrote:
> In looking at your results I'm not at all sure why the max extent values
> for x and z are all zero? The HF max extent should be 1 or larger for x
> and z (always I think if not scaled) ?
This got me curious as well.
So I just threw some quick lines of code together.
The hf looks fine to me. Typo in Ken's code?
#declare delta_hf = 1/pow (2, 16);
#debug concat ("1/pow (2, 16) = ", str (delta_hf, 0, 8), "\n")
#declare HF = height_field {
function 800, 800 { 1 }
}
#declare Min = min_extent (HF);
#declare Max = max_extent (HF);
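(Guessing at the omitted print statements - something like this, with vstr()
formatting the vectors:)
#debug concat ("Heightfield min_extent = ", vstr (3, Min, ", ", 0, 8), "\n")
#debug concat ("Heightfield max_extent = ", vstr (3, Max, ", ", 0, 8), "\n")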
-0.0 shows up as : -0.000000
+0.0 shows up as : 0.000000
Var negZero shows up as : -0.000000
Var posZero shows up as : 0.000000
Var hmmZero shows up as : -0.000000
select () interprets -0 as zero
sgn () interprets -0 as zero
Ternary interprets -0 as zero
#if interprets -0 as zero
1/pow (2, 16) = 0.00001526
Heightfield min_extent = = 0.00000000, 0.99998474, 0.00000000
Heightfield max_extent = = 1.00000000, 0.99998480, 1.00000000
Changing the function result to 0 yields:
Heightfield min_extent = = 0.00000000, -0.00000002, 0.00000000
Heightfield max_extent = = 1.00000000, 0.00000002, 1.00000000