Subject: Re: Two BBox calculation bugs in CSG and Quadric?
From: William F Pokorny
Date: 3 Sep 2019 08:59:45
Message: <5d6e63c1@news.povray.org>
On 9/2/19 6:30 PM, Bald Eagle wrote:
> 
> William F Pokorny <ano### [at] anonymousorg> wrote:
> 
>>> Generally current POV code could be improved in such situations.
>>> E.g look at void Box::Compute_BBox(): It assigns its DBL vectors to
>>> the float vectors of BBox without checking and might lose almost all
>>> of its significant digits that way.
> 
> I haven't dug down to _that_ level of the source code, so forgive me if this is
> a naive question, but does POV-Ray make use of "extended precision" floats?
> 
> https://en.wikipedia.org/wiki/Extended_precision
> 

Not in any of the mainstream versions. My solver patch branch:

   https://github.com/wfpokorny/povray/tree/fix/polynomialsolverAccuracy

supports a PRECISE_FLOAT macro mechanism for the common solver code. 
It's expensive: even long double, which on my i3 is still hardware 
backed, was something like +150% slower IIRC. More detail on these and 
128-bit float experiments is posted elsewhere.
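
For the curious, the mechanism is nothing fancy - roughly along these 
lines (illustrative only; the real names and plumbing are in the branch):

   // Pick the float type the common solver code runs at, at compile time.
   #ifndef PRECISE_FLOAT
   #define PRECISE_FLOAT double          // default: plain double
   #endif

   typedef PRECISE_FLOAT PRECISE_DBL;    // type used inside the solvers

   // Build with e.g. -DPRECISE_FLOAT="long double" (or a 128 bit float
   // type where the compiler supports one) to trade speed for accuracy.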

My first concern with the comment to which you attached your question 
isn't the potential loss of accuracy on any particular conversion, given 
our hard-coded +-1e7 range limit, but that we are doing all these double 
to single conversions, which are fast opcodes but not free.
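
That said, if one did want to guard the sort of assignment Andreas points 
at in Box::Compute_BBox(), a conservative conversion which can only grow 
the box is cheap - a sketch with invented names, not actual POV-Ray code:

   #include <cfloat>
   #include <cmath>

   // Sketch only: narrow double extents to single precision BBox storage,
   // rounding outward so the AABB can never shrink past the true box.
   struct BBoxSingle { float lowerLeft[3]; float size[3]; };

   void AssignBBox(BBoxSingle& bb, const double lo[3], const double hi[3])
   {
       for (int i = 0; i < 3; ++i)
       {
           // Nudge one ULP toward -inf/+inf after the narrowing cast so
           // rounding error can only enlarge the box, never clip it.
           float flo = std::nextafter(static_cast<float>(lo[i]), -FLT_MAX);
           float fhi = std::nextafter(static_cast<float>(hi[i]),  FLT_MAX);
           bb.lowerLeft[i] = flo;
           bb.size[i]      = fhi - flo;
       }
   }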

There is a storage savings to using single floats and - in theory - a 
potential to fit more single floats in any one SIMD register for 
potentially faster math. However, I think due to other v37/v38 code 
changes the SIMD aspect is mostly not happening in practice, even on 
machine-targeted compiles. Such gains mostly don't happen with 
machine-generic compiles in any case.

A valid first accuracy concern - not an AABBs-growing-with-each-rotation 
concern - is, I think, that we don't today accumulate as many transforms 
(stored as doubles IIRC) as possible before updating the bounding boxes, 
but rather - often enough - apply them as they come, each update seeing 
double-to-float conversions and potential value snapping. Andreas's 
suggestion is to do the bounding box update once against a 'final' 
transform, and I agree this is likely better.
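
In sketch form the 'accumulate first' idea is just: fold the queued 
transforms into one double-precision composite, push the object-space 
corners through it once, and only then narrow to single precision. Types 
below are stand-ins, not POV-Ray's:

   #include <algorithm>
   #include <array>

   using Vec3 = std::array<double,3>;
   using Mat4 = std::array<std::array<double,4>,4>;

   // Apply an affine 4x4 (row-major, column-vector convention) to a point.
   Vec3 TransformPoint(const Mat4& m, const Vec3& p)
   {
       return { m[0][0]*p[0] + m[0][1]*p[1] + m[0][2]*p[2] + m[0][3],
                m[1][0]*p[0] + m[1][1]*p[1] + m[1][2]*p[2] + m[1][3],
                m[2][0]*p[0] + m[2][1]*p[1] + m[2][2]*p[2] + m[2][3] };
   }

   // One AABB update from the composite transform: all math in doubles,
   // a single double->float narrowing at the very end.
   void UpdateBBox(const Mat4& composite, const Vec3& lo, const Vec3& hi,
                   float outLower[3], float outSize[3])
   {
       Vec3 mn = { 1e30, 1e30, 1e30 }, mx = { -1e30, -1e30, -1e30 };
       for (int c = 0; c < 8; ++c)           // eight corners of the box
       {
           Vec3 p = TransformPoint(composite,
                                   { (c & 1) ? hi[0] : lo[0],
                                     (c & 2) ? hi[1] : lo[1],
                                     (c & 4) ? hi[2] : lo[2] });
           for (int i = 0; i < 3; ++i)
           {
               mn[i] = std::min(mn[i], p[i]);
               mx[i] = std::max(mx[i], p[i]);
           }
       }
       for (int i = 0; i < 3; ++i)
       {
           outLower[i] = static_cast<float>(mn[i]);
           outSize[i]  = static_cast<float>(mx[i] - mn[i]);
       }
   }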

> 
...
>> issues with our ray -> shape/surface solvers. Today the practical scene
>> limits, due ray intersection accuracy, set the working range to >1e-2 to
>> maybe <1e5. Though, one can do better or worse depending on many factors.
> 
> Perhaps there's a way to track the min/max ranges and report on them in the
> scene statistics?  It might help in debugging scenes, and interpreting user
> feedback when there are avoidable problems simply due to scale.
> 

I have thought about the parsing + bounding process creating a 
least-enclosing environment box which users can access. The exposure I 
see is more than the final scene scale: users can today use intermediate 
original definitions or transformations which corrupt the accuracy of 
the final rendered representation, mostly without notice.
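
The mechanics would be simple enough - fold each finite object's box into 
one running environment box during parse/bounding and report its extent 
with the statistics. Sketch only, invented names:

   // Sketch: fold every finite object's AABB into a scene-wide extent so
   // the statistics could warn when a scene strays outside the
   // comfortable working range.
   struct EnvBox
   {
       double lo[3] = {  1e30,  1e30,  1e30 };
       double hi[3] = { -1e30, -1e30, -1e30 };
   };

   void FoldObject(EnvBox& env, const double objLo[3], const double objHi[3])
   {
       for (int i = 0; i < 3; ++i)
       {
           if (objLo[i] < env.lo[i]) env.lo[i] = objLo[i];
           if (objHi[i] > env.hi[i]) env.hi[i] = objHi[i];
       }
   }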

> 
>> The idea of accumulating transforms before calculating the AABBs has
>> merit I think, though I don't see it that simply done. Usually not too
>> many transforms after the primitive is created we are into CSG and
>> optimal AABBs at that level don't, to me, look easy - excepting some
>> cases. Better over optimal AABBs perhaps.
> 
> Now that I have enough experience, and have asked enough newbie questions, I can
> properly envision that CSG code tangle.  eek.   A naive question might be
> whether or not a primitive could be internally/virtually/temporarily translated
> to the origin and that "metadata" stored somehow.  Then the composed transform
> matrix could be applied, and perhaps a modified ray/object intersection test
> could be done in a domain where the float errors wouldn't mess everything up...
> 

What you describe sometimes happens today and sometimes not. Numerically, 
for the solvers, I'd like it to happen more often, but what that really 
means for some shapes - sphere_sweeps, for example - isn't all that 
clear. Further, there are other trade-offs in play. On my list is a 
completely different implementation for spheres using a normalized 
representation so I can do some comparisons.
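
For a sphere the normalized version is the familiar unit sphere at the 
origin: transform the ray into object space with the stored inverse 
transform, solve there where the quadratic coefficients stay near one, 
then map the hit back out. A rough sketch, not POV-Ray's sphere code:

   #include <array>
   #include <cmath>
   #include <optional>

   using Vec3 = std::array<double,3>;

   static double Dot(const Vec3& a, const Vec3& b)
   {
       return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
   }

   // Ray assumed already transformed into object space, where the shape
   // is the canonical unit sphere centered at the origin. Returns the
   // nearest non-negative t; the caller maps the hit back to world space.
   std::optional<double> IntersectUnitSphere(const Vec3& origin, const Vec3& dir)
   {
       const double a = Dot(dir, dir);
       const double b = 2.0 * Dot(origin, dir);
       const double c = Dot(origin, origin) - 1.0;   // radius is exactly 1
       const double disc = b*b - 4.0*a*c;
       if (disc < 0.0)
           return std::nullopt;
       const double s  = std::sqrt(disc);
       double t0 = (-b - s) / (2.0*a);
       double t1 = (-b + s) / (2.0*a);
       if (t1 < 0.0)
           return std::nullopt;
       return (t0 >= 0.0) ? t0 : t1;
   }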

The new solver approach discussed elsewhere might support trade-offs 
traditional solvers don't - normalization is potentially less important 
numerically with the new approach. It's complicated - so much so I daily 
doubt what I can really do - especially given my personal C++ 
impediment. Some recent ideas potentially break from recent POV-Ray code 
direction too, which means I'm not sure what will happen longer term.

> Sort of an automated version of your suggestion here:
>
> http://news.povray.org/povray.newusers/message/%3C5bfac735%241%40news.povray.org%3E/#%3C5bfac735%241%40news.povray.org%3E
> 
> "Today, you can 'sometimes' clean up many of the artifacts by
> scaling the entire scene up (or down) by 100 or 1000x."
> 

I now know the scene scaling is two-edged. You can get into as many 
issues scaling up as down. It just happens, I think, given the 
asymmetric nature of our current practical range, and that people tend 
to create data about zero, that scaling up often better centers scenes 
for POV-Ray's current numerical condition.
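
A toy way to see part of the asymmetry: the absolute spacing between 
adjacent representable values grows with magnitude, so any fixed epsilon 
in the intersection code behaves very differently at 1e-2 than at 1e5 or 
near the 1e7 bounding limit. Nothing POV-Ray specific here, just a demo:

   #include <cmath>
   #include <cstdio>

   int main()
   {
       // Gap to the next representable value ("ulp") at a few magnitudes,
       // in both double and single precision.
       const double mags[] = { 1e-2, 1.0, 1e2, 1e5, 1e7 };
       for (double m : mags)
       {
           double dUlp = std::nextafter(m, 2.0*m) - m;
           float  f    = static_cast<float>(m);
           float  fUlp = std::nextafter(f, 2.0f*f) - f;
           std::printf("%9.0e  double ulp %.3e  single ulp %.3e\n",
                       m, dUlp, static_cast<double>(fUlp));
       }
       return 0;
   }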

Bill P.

