On 7/10/22 15:27, jceddy wrote:
> Can I inverse-transform the point being tested, run the minimum-distance
> calculation against the original data to get a minimum-distance*VECTOR* from
> the test point to the original mesh, then just apply the object's transformation
> matrix to that vector before measuring its length?
I've never attempted a complete survey, but I'd say this is the more
common approach inside POV-Ray's code for various calculations, though
it's not universal.
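The mechanics of the approach in the question can be sketched as below.
This is illustrative Python/NumPy, not POV-Ray code; the helper names
(transform_point, transform_vector) and the toy "mesh" are assumptions
made up for the sketch. The key detail is that the re-transformed
minimum-distance result is a *vector*, so only the linear 3x3 part of
the matrix applies, not the translation.

```python
# Sketch of the inverse-transform approach (illustrative, not POV-Ray API).
import numpy as np

def transform_point(m, p):
    """Apply a 4x4 affine transform to a point (includes translation)."""
    return (m @ np.append(p, 1.0))[:3]

def transform_vector(m, v):
    """Apply only the linear 3x3 part to an offset/direction vector."""
    return m[:3, :3] @ v

# A rigid transform: rotate 90 degrees about Z, then translate by (5,2,0).
M = np.array([[0.0, -1.0, 0.0, 5.0],
              [1.0,  0.0, 0.0, 2.0],
              [0.0,  0.0, 1.0, 0.0],
              [0.0,  0.0, 0.0, 1.0]])
M_inv = np.linalg.inv(M)

# "Mesh" vertices in object space, and a test point in world space.
mesh_obj = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
p_world = np.array([6.0, 4.0, 0.0])

# 1. Inverse-transform the test point into object space.
p_obj = transform_point(M_inv, p_world)
# 2. Minimum-distance vector against the original (untransformed) data.
diffs = mesh_obj - p_obj
v_obj = diffs[np.argmin(np.einsum('ij,ij->i', diffs, diffs))]
# 3. Re-transform that vector (linear part only) and measure its length.
d = np.linalg.norm(transform_vector(M, v_obj))

# Cross-check against a brute-force world-space computation.
mesh_world = np.array([transform_point(M, v) for v in mesh_obj])
d_ref = min(np.linalg.norm(mv - p_world) for mv in mesh_world)
assert abs(d - d_ref) < 1e-12
```

One caveat worth noting: this gives the world-space length of the
object-space-closest vector. For rigid transforms (and uniform scaling)
that is also the world-space minimum, but under non-uniform scaling the
object-space minimizer need not be the world-space minimizer.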
---
The numerical issues Cousin Ricky suggested as applying to the inverse
transform in particular(a) are very complicated on the whole. That's
especially true in POV-Ray's code base, where inconsistent numerical
approaches were employed over the years.
The parsing stage today flattens 'some' transforms for some shapes;
spheres and sphere sweeps come to mind. I believe this kind of transform
flattening was done to try to help performance(b). This flattening
crashes into the base v3.7/v3.8/v4.0 issue of pull request:
https://github.com/POV-Ray/povray/pull/358
Though, the discussion thread for that pull request goes on to look at
additional numerical issues, including fundamental limits at the end
related to loss of accuracy due to the length of rays and the polynomial
representations for the ray-surface intersections.
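The ray-length accuracy limit is easy to demonstrate. The sketch below
(my own toy numbers, not from the pull request) intersects a ray with a
unit sphere via the usual quadratic, once with the sphere a long way
from the ray origin and once close to it. Far away, the r^2 term is
swallowed entirely when forming |c-o|^2 - r^2 in doubles, and the
computed hit depth can be off by the entire radius.

```python
import math

def hit_depth(dist, r):
    """Near root of t^2 - 2*hb*t + cterm = 0 for a ray through the
    center of a sphere of radius r at distance dist along the ray."""
    hb = dist                   # d . (c - o), with d a unit vector
    cterm = dist * dist - r * r # |c - o|^2 - r^2; loses r^2 when dist is huge
    disc = hb * hb - cterm
    return hb - math.sqrt(disc)

# Sphere far from the ray origin: the radius vanishes in the arithmetic.
t_far = hit_depth(1.0e8, 1.0)
err = abs(t_far - (1.0e8 - 1.0))   # error on the order of the radius

# Same sphere with the ray origin moved close (an object-local setup):
t_local = hit_depth(10.0, 1.0)     # exact: 9.0
```

This is the flavor of problem that working in a local/object numerical
space mitigates: the same formula, evaluated with small coordinates,
returns the exact answer.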
We can work to reduce the exposure to numerical issues in ray tracing,
but we can never eliminate them. All the more so given the need to use
fast floating-point hardware for good performance.
Bill P.
(a) - The span and locality of references for many 'object'
representations are effectively better with inverse transformations,
where the working numerical space is more robust. It can be less robust
too, though I think that's less common in practice. In other words, it
doesn't matter if the inverse transform / result re-transform is
somewhat inaccurate for planet X, if all the local calculations on and
near planet X are more accurate. This is even more nearly true where
planet X exists in 'effective' isolation in the global numerical space
of the scene.
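The planet X point comes down to how double-precision spacing grows with
magnitude. A quick sketch (my own illustration, not from the thread):

```python
import math

# Spacing between adjacent doubles near 1.0 vs near a far-away coordinate.
eps_local = math.ulp(1.0)    # ~2.2e-16
eps_far = math.ulp(1.0e8)    # ~1.5e-8, about eight decimal digits coarser

# A nanometer-scale surface offset survives near the local origin...
offset = 1.0e-9
near = 1.0 + offset          # distinct from 1.0
# ...but is rounded away entirely at planet-X distances.
far = 1.0e8 + offset         # exactly equal to 1.0e8
```

So a calculation carried out in a local frame can resolve detail that
simply does not exist in the representable grid out at the global
coordinates.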
(b) - The issue pull request 358 is trying to address is the unfortunate
introduction of a kind of minimum-intersection-depth filter in the
"global" numerical space in v3.7+, which results in the trimming or
elimination of shapes from ray-surface intersection calculations. In
povr - where this issue is fixed in a way different from the actual #358
pull request - I've got sphere point cloud test cases which now run
30-40% slower! Why? Well, povr isn't ignoring 30-40% of the ray->sphere
calculations. Makes me wonder what portion of the performance gains from
the parser flattening/eliminating of transforms was fool's gold when
measured.