From: Andreas Kaiser
Subject: Two BBox calculation bugs in CSG and Quadric?
Date: 19 Aug 2019 16:28:11
Message: <rdulle5su2vkqmah9jbpmf4tvhgh81l9gp@4ax.com>
See CSG::Compute_BBox() in CSG.cpp and Quadric::Compute_BBox(Vector3d&
ClipMin, Vector3d& ClipMax) in Quadric.cpp:
...
            Make_BBox_from_min_max(BBox, NewMin, NewMax);
 
            /* Beware of bounding boxes too large. */

            if((BBox.size[X] > CRITICAL_LENGTH) ||
               (BBox.size[Y] > CRITICAL_LENGTH) ||
               (BBox.size[Z] > CRITICAL_LENGTH))
                Make_BBox(BBox, -BOUND_HUGE/2, -BOUND_HUGE/2, -BOUND_HUGE/2,
                          BOUND_HUGE, BOUND_HUGE, BOUND_HUGE);

CRITICAL_LENGTH is defined as 1.0e06, BOUND_HUGE/2 as 1.0e10 in both
cases.

This code will never shrink/limit the resulting BBox as the comment
above might suggest.
Instead it will 'blow up' all dimensions of a BBox if just one of its
dimensions exceeds CRITICAL_LENGTH (which is itself still far smaller
than the resulting dimensions).
 
I have no idea what the original intention might have been.
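
To put numbers on it, here is a tiny stand-alone illustration of the
effect with stand-in definitions (the values follow the post above;
Make_BBox is assumed to take a corner plus edge lengths):

    // Stand-alone sketch only; the constants and the corner+lengths
    // convention for Make_BBox are assumptions based on the snippet above.
    #include <cstdio>

    static const double CRITICAL_LENGTH = 1.0e6;
    static const double BOUND_HUGE      = 2.0e10;  // so BOUND_HUGE/2 == 1.0e10

    struct BBox { double lower[3]; double size[3]; };

    void Make_BBox(BBox& b, double x, double y, double z,
                   double lx, double ly, double lz)
    {
        b.lower[0] = x;  b.lower[1] = y;  b.lower[2] = z;
        b.size[0]  = lx; b.size[1]  = ly; b.size[2]  = lz;
    }

    int main()
    {
        // A box that is long in X only: 2e6 x 1 x 1.
        BBox box = { { -1.0e6, 0.0, 0.0 }, { 2.0e6, 1.0, 1.0 } };

        if (box.size[0] > CRITICAL_LENGTH ||
            box.size[1] > CRITICAL_LENGTH ||
            box.size[2] > CRITICAL_LENGTH)
            Make_BBox(box, -BOUND_HUGE/2, -BOUND_HUGE/2, -BOUND_HUGE/2,
                      BOUND_HUGE, BOUND_HUGE, BOUND_HUGE);

        // Prints "size = 2e+10 2e+10 2e+10": every axis is now 20000x the
        // CRITICAL_LENGTH threshold that triggered the "limiting" branch.
        std::printf("size = %g %g %g\n", box.size[0], box.size[1], box.size[2]);
        return 0;
    }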



From: William F Pokorny
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 19 Aug 2019 17:41:17
Message: <5d5b177d$1@news.povray.org>
On 8/19/19 4:28 PM, Andreas Kaiser wrote:
> 
> see CSG::Compute_BBox() in CSG.cpp and Quadric::Compute_BBox(Vector3d&
> ClipMin, Vector3d& ClipMax) in Quadric.cpp:
> ...
>              Make_BBox_from_min_max(BBox, NewMin, NewMax);
>   
>              /* Beware of bounding boxes too large. */
> 
>              if((BBox.size[X] > CRITICAL_LENGTH) ||
>                 (BBox.size[Y] > CRITICAL_LENGTH) ||
>                 (BBox.size[Z] > CRITICAL_LENGTH))
>                  Make_BBox(BBox, -BOUND_HUGE/2, -BOUND_HUGE/2,
> -BOUND_HUGE/2, BOUND_HUGE, BOUND_HUGE, BOUND_HUGE);
> 
> CRITICAL_LENGTH is defined as 1.0e06, BOUND_HUGE/2 as 1.0e10 in both
> cases.
> 
> This code will never shrink/limit the resulting BBox like the comment
> above might suggest.
> It will 'blow up' instead all dimensions of a BBox if just one of its
> dimensions exceeds CRITICAL_LENGTH (which is still smaller than the
> resulting dimension(s)).
>   
> I have no idea what the original intention might have been.
>

That bit of code came in with v3.0, but I have no idea why. I 'think' it 
might be doing what's intended...

The aim isn't to shrink the bounding box, but rather to make it so
large, if it is already >1e6 on any side, that it is seen as an infinite
object and not included in normal bounding. BOUND_HUGE is larger than
MAX_DISTANCE (1e7). I say this without verifying that this is the real
behavior, though.

Bill P.



From: Andreas Kaiser
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 20 Aug 2019 20:33:08
Message: <243plethedulo5vg5go1otoj4vf4va5opc@4ax.com>
On Mon, 19 Aug 2019 17:41:16 -0400, William F Pokorny wrote:

>On 8/19/19 4:28 PM, Andreas Kaiser wrote:
>>...
>
>That bit of code came in with v3.0, but I have no idea why. I 'think' it 
>might be doing what's intended...
>
>The aim isn't to shrink the bounding box, but rather to make it so 
>large, if already >1e6 on any side, that it is seen as an infinite 
>object and not included in normal bounding. BOUND_HUGE is larger than 
>MAX_DISTANCE (1e7). I say this without verifying that this the real 
>behavior though.
>
>Bill P.

I don't remember anything in the code where it might help to set a big
but finite AABB to +/- infinity.
Actually this turns off BBox testing for the corresponding object.

What should be done (and is done most often) is to clip all
coordinates of a BBox to +/-BOUND_HUGE/2.
IMHO this should be done in Make_BBox(...) always, not in the caller's
code.



From: William F Pokorny
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 21 Aug 2019 06:09:48
Message: <5d5d186c$1@news.povray.org>
On 8/20/19 8:33 PM, Andreas Kaiser wrote:
> On Mon, 19 Aug 2019 17:41:16 -0400, William F Pokorny wrote:
> 
...
>>
>> That bit of code came in with v3.0, but I have no idea why. I 'think' it
>> might be doing what's intended...
>>
>> The aim isn't to shrink the bounding box, but rather to make it so
>> large, if already >1e6 on any side, that it is seen as an infinite
>> object and not included in normal bounding. BOUND_HUGE is larger than
>> MAX_DISTANCE (1e7). I say this without verifying that this the real
>> behavior though.
>>
>> Bill P.
> 
> I don't remember anything in the code where it might help to set a big
> but finite AABB to +/- infinity.

Yes, same here, but this looks to me to be the intent of that code.

> Actually this turns off BBox testing for the corresponding object.
> 

I was thinking this too when I responded, but it's perhaps not entirely
true from 3.7 onward.

It's the case that changes in 3.7/3.8 leave a last-level bounding test
around the object even when bounding is turned off by -mb or +mb<count>.
I have a code branch - now some years old - which re-enables the 3.6 and
prior behavior of -mb. In other words, my branch really disables the
innermost bounding box test in a way similar to 3.6.

I'm wondering this morning whether that last level of bounding box test
is turned off for infinite objects. Though I have that patch, I just
don't remember the behavior of the code well enough to know without
investigation. If that innermost bounding test still gets done, it might
be that this code hack is - in 3.7/3.8 - of no value in any case.


> What should be done (and is done most often) is to clip all
> coordinates of a BBox to +/-BOUND_HUGE/2.
> IMHO this should be done in Make_BBox(...) always, not in the caller's
> code.
> 

I agree with you and take the code to which you point to be a hack of
some sort to get around some bounding-based issue. Unfortunately, there
is no central library of test cases for past problem scenes. I don't
myself have access to the Perforce code control system, so I don't even
know who introduced the code. We can only remove the hack and see
whether anything breaks.

I work mostly with my own version of POV-Ray. I'll put it on my list to
create a branch which removes these hacks before I build my next
personal POV-Ray version. I suppose some test cases aimed at triggering
the code hack are in order too. I'm pretty certain I have nothing in my
personal test case collection which will trigger this code.

Lastly, I'd suggest we open up a GitHub issue. Whether a bug or
intentional, it seems to me the code related to CRITICAL_LENGTH should
be investigated.

Bill P.



From: William F Pokorny
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 26 Aug 2019 08:24:31
Message: <5d63cf7f$1@news.povray.org>
On 8/21/19 6:09 AM, William F Pokorny wrote:
> On 8/20/19 8:33 PM, Andreas Kaiser wrote:
...
>>
>> I don't remember anything in the code where it might help to set a big
>> but finite AABB to +/- infinity.
> 
> Yes, same here, but this looks to me to be the intent of that code.
> 
I've confirmed this is the behavior for the CSG code with the test scene
attached to the GitHub issue.

>> Actually this turns off BBox testing for the corresponding object.
>>
> 
> I was thinking this too when I responded, but it's perhaps not entirely 
> true in 3.7 onward.
> 
> It's the case changes in 3.7/3.8 leave a last level bounding test around 
> the object even when bounding is off by -mb or +mb<count>. I have code 
> branch - now some years old - which re-enables the 3.6 and prior like 
> behavior of -mb. In other words, my branch really disables the inner 
> most bounding box test in a way similar to 3.6.
> 
> I'm wondering this morning whether that last level of bounding box test 
> is off for infinite objects? Though I have that patch, I just don't 
> remember the behavior of the code well enough to know without 
> investigation. If that inner most bounding test still gets done, it 
> might be this code hack is - in 3.7/3.8 - of no value in any case.
> 

In some quick tests it looks like when an object is infinite it's not
tripping the bounding tests avoided by my patch branch. Given this is at
least sometimes true, there is some potential for this code to be fixing
or addressing some issue. In other words, I've seen enough already to
ignore the rambling above.

...
> 
> Lastly, I'd suggest we open up a github issue. Whether a bug or 
> intentional, seems to me the code related to CRITICAL_LENGTH should be 
> investigated.
> 

See: https://github.com/POV-Ray/povray/issues/379



From: Andreas Kaiser
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 31 Aug 2019 17:09:31
Message: <i1jlmel8tnouoe3qi8p7tcugbbcm9vd86t@4ax.com>
On Mon, 26 Aug 2019 08:24:31 -0400, William F Pokorny wrote:

>...
>See: https://github.com/POV-Ray/povray/issues/379

Thanks Bill for creating the issue.

I assume it was the limited accuracy of float (used in the BBox code)
which caused some problems: there are only about 7 significant decimal
digits.
E.g. when you translate a BBox over a large distance it literally
loses its bits.
Also, adding some +/-EPSILON to the bounds no longer works when the
BBox's size is already 1.0e05.
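
A tiny stand-alone demonstration of both effects (plain C++, nothing
POV-Ray specific):

    #include <cstdio>

    int main()
    {
        // float carries roughly 7 significant decimal digits.
        float coord = 1.0f;
        float moved = coord + 1.0e8f;   // translate the coordinate far away
        float back  = moved - 1.0e8f;   // and move it back again
        std::printf("%g\n", back);      // prints 0, not 1: the bit is gone

        // A small pad added to a large bound is simply absorbed:
        float bound = 1.0e5f;
        std::printf("%d\n", (bound + 1.0e-3f) == bound);  // prints 1 (true)
        return 0;
    }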

Generally, the current POV code could be improved in such situations.
E.g. look at void Box::Compute_BBox(): it assigns its DBL vectors to
the float vectors of the BBox without checking and might lose almost
all of its significant digits that way.

bool Inside_BBox(const Vector3d& point, const BoundingBox& bbox):
it doesn't handle the implicit '+/-BOUND_HUGE means +/-Infinity'
convention at all.

The recurring recalculation of an object's BBox while parsing the scene
file makes things worse and is completely superfluous:
- do some consecutive rotations and the resulting AABB will grow each
time
- translate over some large distance and the bits are gone.

An object's AABB is used in the tracing stage only; IMHO POV-Ray
should simply compose all transformations during the parsing stage and
then compute the AABB once.
This could help a lot to keep AABBs as correct and as close to their
objects as possible.
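
A small stand-alone sketch of the growth effect (2D and nothing
POV-Ray specific, for brevity): re-deriving the AABB after each of two
45 degree rotations of a unit box yields a box of side 2, while
composing the rotations into a single 90 degree transform and computing
the AABB once gives back side 1.

    #include <cmath>
    #include <cstdio>

    // Axis-aligned box given by half-extents about the origin.
    struct AABB { double hx, hy; };

    // AABB of an axis-aligned box rotated by 'angle' radians about the origin.
    AABB RotateAABB(const AABB& b, double angle)
    {
        double c = std::fabs(std::cos(angle)), s = std::fabs(std::sin(angle));
        return { c * b.hx + s * b.hy, s * b.hx + c * b.hy };
    }

    int main()
    {
        const double deg45 = std::atan(1.0);          // pi/4
        AABB unit = { 0.5, 0.5 };                     // unit box, side 1

        // AABB recomputed after each 45 degree step: it grows both times.
        AABB stepwise = RotateAABB(RotateAABB(unit, deg45), deg45);

        // One composed 90 degree rotation of the original box.
        AABB composed = RotateAABB(unit, 2.0 * deg45);

        std::printf("stepwise side: %g\n", 2.0 * stepwise.hx);   // prints 2
        std::printf("composed side: %g\n", 2.0 * composed.hx);   // prints 1
        return 0;
    }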



From: William F Pokorny
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 2 Sep 2019 11:40:51
Message: <5d6d3803$1@news.povray.org>
On 8/31/19 5:09 PM, Andreas Kaiser wrote:
> On Mon, 26 Aug 2019 08:24:31 -0400, William F Pokorny wrote:
> 
...
>>
>> See: https://github.com/POV-Ray/povray/issues/379
> 
> Thanks Bill for creating the issue.
> 
> I assume it was the limited accuracy of float (used in BBox code)
> which caused some problems: There are only 7 significant digits.
> E.g. when you translate a BBox over a large distance it literally
> loses its bits.
> Also adding some +/-EPSILON to the bounds when its size is 1.0e05
> already no longer works.

Yes, thanks, conversion to float is a good candidate reason for this
special treatment. I wonder why it's only applied to CSG and the quadric.
Is it because the quadric can be part of a CSG and there's something about
quadric bounding making the CRITICAL_LENGTH code necessary for it alone...?

> 
> Generally current POV code could be improved in such situations.
> E.g look at void Box::Compute_BBox(): It assigns its DBL vectors to
> the float vectors of BBox without checking and might lose almost all
> of its significant digits that way.
> 
> bool Inside_BBox(const Vector3d& point, const BoundingBox& bbox):
> It doesn't handle the implicit +/-BOUND_HUGE means +/-Infinity at all.
> 
> The recurring recalcution of an object's BBox while parsing the scene
> file makes things worse and is completely superfluous:
> - do some consecutive rotations and the resulting AABB will grow each
> time
> - translate over some large distances and off are the bits.
> 
> An object's AABB is used in the tracing stage only, IMHO POV-Ray
> should simply compose all transformations during the parsing stage and
> then compute the AABB once.
> This could help a lot to keep AABBs as correct and close to their
> objects as possible.
> 

While it's work that happened before I was as deep in the source code,
the bounding box code was substantially re-worked from v3.7 to v3.8. A
number of issues were fixed while re-factoring the code. The work was
mostly, or entirely, Christoph's (clipka's), I believe.

With the little I 'really' understand about the two bounding methods, I
agree with some of it, disagree with some, and am unsure what you mean
in parts. Certainly there is room for improvement.

My current opinion is that accuracy issues due to bounding follow from
accuracy issues with our ray -> shape/surface solvers. Today the
practical scene limits, due to ray intersection accuracy, set the
working range to roughly >1e-2 up to maybe <1e5. Though one can do
better or worse depending on many factors.

Further, numerical errors in bounding (excluding the extreme crash-test
kind) live in a different numerical domain. They matter less to the
'visual' result than errors happening in the normalized or
somewhat-normalized domain of the ray - surface/shape equations.

Numerical errors in the transforms / inverse transforms matter more, but
still not as much to the 'visual' result as the ray-surface
intersections.

With bounding, larger is OK/safe. Missing intersections during bounds
testing is of course bad, and there is a little of that going on of
which we are aware in v3.7 and v3.8. It's one of a tangle of issues that
are part of the https://github.com/POV-Ray/povray/pull/358 effort.

The idea of accumulating transforms before calculating the AABBs has
merit I think, though I don't see it being that simply done. Usually,
not too many transforms after the primitive is created, we are into CSG,
and optimal AABBs at that level don't, to me, look easy - excepting some
cases. Better, rather than optimal, AABBs perhaps.

Aside: We've been looking a little at bits of the bbox code due to a
v3.7 to v3.8 performance degradation:
https://github.com/POV-Ray/povray/issues/363. Something for the 'for
what that's worth' basket - maybe my understanding of the bounding
mechanism will deepen some as a result. I'll keep what you've said in
mind as I thrash around.

Bill P.



From: Bald Eagle
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 2 Sep 2019 18:35:00
Message: <web.5d6d9804c6c13c7b4eec112d0@news.povray.org>
William F Pokorny <ano### [at] anonymousorg> wrote:

> > Generally current POV code could be improved in such situations.
> > E.g look at void Box::Compute_BBox(): It assigns its DBL vectors to
> > the float vectors of BBox without checking and might lose almost all
> > of its significant digits that way.

I haven't dug down to _that_ level of the source code, so forgive me if this is
a naive question, but does POV-Ray make use of "extended precision" floats?

https://en.wikipedia.org/wiki/Extended_precision


> While it's work that happened before I was as deep in the source code,
> the bounding box code was substantially re-worked v37 to v38. A number
> of issues were fixed while re-factoring the code. The work mostly, or
> all, Christoph's (clipka's) I believe.

> My current opinion is accuracy issues due bounding follow accuracy
> issues with our ray -> shape/surface solvers. Today the practical scene
> limits, due ray intersection accuracy, set the working range to >1e-2 to
> maybe <1e5. Though, one can do better or worse depending on many factors.

Perhaps there's a way to track the min/max ranges and report on them in the
scene statistics?  It might help in debugging scenes, and interpreting user
feedback when there are avoidable problems simply due to scale.
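
For what it's worth, a toy sketch of what such tracking could look like
(SceneExtent and its methods are invented for illustration, not existing
POV-Ray statistics code):

    #include <algorithm>
    #include <cstdio>
    #include <limits>

    // Hypothetical tracker fed with the corner coordinates of every finite
    // object AABB during parsing, then reported with the render statistics.
    struct SceneExtent
    {
        double lo = std::numeric_limits<double>::max();
        double hi = std::numeric_limits<double>::lowest();

        void Note(double v) { lo = std::min(lo, v); hi = std::max(hi, v); }

        void Report() const
        { std::printf("Scene coordinate range: %g .. %g\n", lo, hi); }
    };

    int main()
    {
        SceneExtent extent;
        for (double v : { -250.0, 0.001, 1.5e6 })   // sample corner coordinates
            extent.Note(v);
        extent.Report();                            // -250 .. 1.5e+06
        return 0;
    }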


> The idea of accumulating transforms before calculating the AABBs has
> merit I think, though I don't see it that simply done. Usually not too
> many transforms after the primitive is created we are into CSG and
> optimal AABBs at that level don't, to me, look easy - excepting some
> cases. Better over optimal AABBs perhaps.

Now that I have enough experience, and have asked enough newbie questions, I can
properly envision that CSG code tangle.  eek.   A naive question might be
whether or not a primitive could be internally/virtually/temporarily translated
to the origin and that "metadata" stored somehow.  Then the composed transform
matrix could be applied, and perhaps a modified ray/object intersection test
could be done in a domain where the float errors wouldn't mess everything up...

Sort of an automated version of your suggestion here:
http://news.povray.org/povray.newusers/message/%3C5bfac735%241%40news.povray.org%3E/#%3C5bfac735%241%40news.povray.org%3E

"Today, you can 'sometimes' clean up many of the artifacts by
scaling the entire scene up (or down) by 100 or 1000x."



From: William F Pokorny
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 3 Sep 2019 08:59:45
Message: <5d6e63c1@news.povray.org>
On 9/2/19 6:30 PM, Bald Eagle wrote:
> 
> William F Pokorny <ano### [at] anonymousorg> wrote:
> 
>>> Generally current POV code could be improved in such situations.
>>> E.g look at void Box::Compute_BBox(): It assigns its DBL vectors to
>>> the float vectors of BBox without checking and might lose almost all
>>> of its significant digits that way.
> 
> I haven't dug down to _that_ level of the source code, so forgive me if this is
> a naive question, but does POV-Ray make use of "extended precision" floats?
> 
> https://en.wikipedia.org/wiki/Extended_precision
> 

Not in any of the mainstream versions. My solver patch branch:

   https://github.com/wfpokorny/povray/tree/fix/polynomialsolverAccuracy

supports a PRECISE_FLOAT macro mechanism for the common solver code.
It's expensive. Even long double, which on my i3 is still hardware
backed, was something like +150% slower IIRC. More detail on these and
128-bit float experiments is posted elsewhere.
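
The general shape of such a build-time switch might look like the
following; this is an illustration only, not the actual patch-branch
code, and SolverFloat / evaluate_poly are made-up names:

    /* Illustration only. The common solver code is written against one  */
    /* alias so its precision can be switched at compile time, e.g. with */
    /* -DPRECISE_FLOAT="long double".                                    */
    #ifndef PRECISE_FLOAT
    #define PRECISE_FLOAT double
    #endif

    typedef PRECISE_FLOAT SolverFloat;

    /* Horner evaluation carried out entirely in the selected precision. */
    static SolverFloat evaluate_poly(const SolverFloat c[], int order,
                                     SolverFloat x)
    {
        SolverFloat r = c[order];
        for (int i = order - 1; i >= 0; --i)
            r = r * x + c[i];
        return r;
    }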

My first concern with the comment to which you attached your question
isn't the potential loss of accuracy on any particular conversion, given
our hard-coded +/-1e7 range limit, but that we are doing all these
double-to-single conversions, which are fast opcodes but not free.

There is a storage saving to using single floats and - in theory - a
potential to fit more single floats in any one SIMD register for
potentially faster math. However, I think due to other v3.7/v3.8 code
changes the SIMD aspect is mostly not happening in practice even on
machine-targeted compiles. Such gains mostly don't happen with
machine-generic compiles in any case.

A valid first accuracy concern - not the AABBs growing and growing
under rotations concern - is, I think, that we don't today accumulate as
many transforms (stored as doubles IIRC) as possible before updating the
bounding boxes, but rather - often enough - apply them as they come,
each update seeing double-to-float conversions and potential value
snapping. Andreas's suggestion is to do the bounding box update once
against a 'final' transform, and I agree this is likely better.

> 
...
>> issues with our ray -> shape/surface solvers. Today the practical scene
>> limits, due ray intersection accuracy, set the working range to >1e-2 to
>> maybe <1e5. Though, one can do better or worse depending on many factors.
> 
> Perhaps there's a way to track the min/max ranges and report on them in the
> scene statistics?  It might help in debugging scenes, and interpreting user
> feedback when there are avoidable problems simply due to scale.
> 

I have thought about the parsing + bounding process creating a least
enclosing environment box which users can access. The exposure I see is
more than just the final scene scale: it's that users can today use
intermediate definitions or transformations which corrupt the accuracy
of the final rendered representation, mostly without notice.

> 
>> The idea of accumulating transforms before calculating the AABBs has
>> merit I think, though I don't see it that simply done. Usually not too
>> many transforms after the primitive is created we are into CSG and
>> optimal AABBs at that level don't, to me, look easy - excepting some
>> cases. Better over optimal AABBs perhaps.
> 
> Now that I have enough experience, and have asked enough newbie questions, I can
> properly envision that CSG code tangle.  eek.   A naive question might be
> whether or not a primitive could be internally/virtually/temporarily translated
> to the origin and that "metadata" stored somehow.  Then the composed transform
> matrix could be applied, and perhaps a modified ray/object intersection test
> could be done in a domain where the float errors wouldn't mess everything up...
> 

What you describe sometimes happens today and sometimes not.
Numerically, for the solvers, I'd like it to happen more often, but what
that really means for some shapes - sphere_sweeps, for example - isn't
all that clear. Further, there are other trade-offs in play. On my list
is a completely different implementation for spheres using a normalized
representation so I can do some comparisons.

The new solver approach discussed elsewhere might support trade-offs
traditional solvers don't - normalization is potentially less important
numerically with the new approach. It's complicated - so much so that I
daily doubt what I can really do - especially given my personal C++
impediment. Some recent ideas potentially break from recent POV-Ray code
direction too, which means I'm not sure what will happen longer term.

> Sort of an automated version of your suggestion here:
> http://news.povray.org/povray.newusers/message/%3C5bfac735%241%40news.povray.org%3E/#%3C5bfac735%241%40news.povray.org%3E
> 
> "Today, you can 'sometimes' clean up many of the artifacts by
> scaling the entire scene up (or down) by 100 or 1000x."
> 

I now know the scene scaling is two-edged. You can get into as many
issues scaling up as scaling down. It just happens, I think, given the
asymmetric nature of our current practical range, and that people tend
to create data about zero, that scaling up often better centers scenes
for POV-Ray's current numerical condition.

Bill P.



From: Andreas Kaiser
Subject: Re: Two BBox calculation bugs in CSG and Quadric?
Date: 5 Sep 2019 16:57:25
Message: <9mr2nepumrmdn2s125hj5dhl33rs2q4g34@4ax.com>
On Tue, 3 Sep 2019 08:59:44 -0400, William F Pokorny wrote:

>...

I will add deferral of the transformation and AABB calculations next,
although I am working on something different right now.

I will also write down some issues with the current use and intersection
calculation of AABBs.

Meanwhile, here is a good read (note: really old but still valid):
search the web for "Goldberg, What Every Computer Scientist Should Know
About Floating-Point Arithmetic". The web page at Oracle looked broken
for me in Chrome with respect to the equations but was fine in Firefox.
There are a lot of PDF versions as well.

You don't need to follow every equation; just read over them and get an
idea of what can go wrong with floating-point calculations and why. Most
important are rounding errors and cancellation during subsequent
calculations, for which it has some good examples.
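
For a two-minute taste before reading the paper, here is a minimal
example of rounding followed by cancellation (plain C++, nothing
POV-Ray specific):

    #include <cstdio>

    int main()
    {
        float big = 1.0e8f;

        // 32 survives: 100000032 is exactly representable as a float.
        float a = big + 32.0f;
        std::printf("%g\n", a - big);   // prints 32

        // 3 is below half a unit in the last place at 1e8, so it is
        // rounded away before the subtraction can ever recover it.
        float b = big + 3.0f;
        std::printf("%g\n", b - big);   // prints 0
        return 0;
    }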

You'll wonder how well POV-Ray actually works :D
And we'll improve it to work even better ;)


