Subject: Re: Old media blob issue leading to a look at sturm / polysolve.
From: William F Pokorny
Date: 21 May 2018 10:39:20
Message: <5b02da18$1@news.povray.org>
On 05/20/2018 05:09 PM, clipka wrote:
> Am 20.05.2018 um 18:10 schrieb William F Pokorny:
> 
>> My position is still the difference isn't large enough to be of concern,
>> but if so, maybe we should more strongly consider going to >0.0 or some
>> very small values for all shapes. Doing it only in blob.cpp doesn't make
>> sense to me.
> 
> I guess the reason I changed it only for blobs was because there I
> identified it as a problem - and I might not have been familiar enough
> with the shapes stuff to know that the other shapes have a similar
> mechanism.
> 
> After all, why would one expect such a mechanism if there's the "global"
> MIN_ISECT_DEPTH?

Understood. Only recently have the root filtering and bounding mechanisms 
come into some focus for me, and I'm certain I still don't understand it 
all. There are also the two higher-level bounding mechanisms, which are 
tangled up in the root/intersection handling. I'm starting to get the 
brain-itch that the current bounding is not always optimal for 'root finding.'

> 
> Technically, I'm pretty sure the mechanism should be disabled for SSLT
> rays in all shapes.
> 

OK. I agree from what I see. Work toward >0.0, I guess - though I have a 
feeling there is likely an effective, good-performance 'limit' ahead of 
>0.0.

> 
>> Aside: MIN_ISECT_DEPTH doesn't exist before 3.7. Is it's creation
>> related to the SSLT implementation?
> 
> No, it was already there before I joined the team. The only relation to
> SSLT is that it gets in the way there.
> 
> My best guess would be that someone tried to pull the DEPTH_TOLERANCE
> mechanism out of all the shapes, and stopped with the work half-finished.
> 
> 
> BTW, one thing that has been bugging me all along about the
> DEPTH_TOLERANCE and MIN_ISECT_DEPTH mechanisms (and other near-zero
> tests in POV-Ray) is that they're using absolute values, rather than
> adapting to the overall scale of stuff. That should be possible, right?
> 

Sure, but... not in POV-Ray anytime soon, is my answer. There are too 
many magic values, or defined values like EPSILON, used for differing - or 
effectively differing - purposes. Remember, for example, that one of the 
recent changes I made to the regula-falsi method used within polysolve() 
was to use the ray value domain universally - instead of a mix of 
polynomial values and ray values - for the relative & absolute error stops.

Root / intersection work is sometimes done in a normalized space(1) and 
sometimes not, etc. The pool of issues, questions and possibilities in 
which I'm mentally drowning is already deep - and I'm far from seeing to 
the bottom. We can often scale scenes up for better results because 
POV-Ray's code has long been numerically biased to 0+ (1e-3 to 1e7) (2). 
Near term, I think we should work toward something zero-centered (1e-7 to 
1e7).
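To make the absolute-versus-relative point above concrete - purely 
hypothetical names, and not code I'm proposing as-is - the difference is 
roughly this:

  #include <algorithm>
  #include <cmath>

  // What we effectively do now: a fixed, absolute cutoff
  // (the MIN_ISECT_DEPTH / DEPTH_TOLERANCE sort of magic value;
  //  the 1e-4 here is illustrative only).
  inline bool TooShallowAbs(double depth)
  {
      return depth < 1e-4;
  }

  // A scale-adapted cutoff: the tolerance follows the magnitude of the
  // values involved ('sceneScale' stands in for the overall scale of stuff).
  inline bool TooShallowRel(double depth, double sceneScale, double relTol = 1e-10)
  {
      return depth < relTol * std::max(1.0, std::fabs(sceneScale));
  }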

I'd like first to get to where we've got a more accurate polysolve() 
against which any changes to other solvers / specialized shape solvers 
can be tested. After that we can perhaps work to get all the solvers & 
solver-variant code buried in shape code to double / 'DBL' accuracy.

Underneath everything solver / intersection wise is double accuracy, 
which is only 15 decimal digits. The best double value step 
(DBL_EPSILON) off 1.0 is 2.22045e-16. Plus we use fast-math, which 
degrades that accuracy further.
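For reference, those numbers come straight from the compiler's own 
limits; a throwaway check:

  #include <cfloat>
  #include <cstdio>

  int main()
  {
      // Typically prints: DBL_DIG = 15, DBL_EPSILON = 2.22045e-16
      std::printf("DBL_DIG = %d, DBL_EPSILON = %g\n", DBL_DIG, DBL_EPSILON);
      return 0;
  }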

Aside: I coded up a 128-bit polysolve(), but it was very slow (+10x) and 
the compiler feature doesn't look to be supported broadly enough as a 
standard to be part of POV-Ray. A near-term idea interesting to me - one 
Dick Balaska, I think, touched upon recently in another thread - is 
coding up a version of polysolve() we'd ship using 'long double'. Long 
double is only guaranteed to be 64 bits (plain double), but in practice 
today folks would mostly get 80 bits rather than 64 - and sometimes more. 
It would need to be an option alongside the default 64-bit double, as it 
would certainly be slower - but being generally hardware backed, the 
performance hit should be well under the +10x degrade I see for 128 bits 
on my Intel processor. For most it would be an improvement to 18 decimal 
digits and an epsilon step of 1.0842e-19 off 1.0. Worthwhile...?
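For flavor only - a precision-templated helper such as a 'long double' 
polysolve() variant might lean on; PRECISE_FLOAT is just a name I'm 
making up here:

  #include <cfloat>
  #include <cstdio>

  // Horner evaluation templated on the working precision. Instantiated
  // with long double, each rounding step is ~1.08e-19 instead of ~2.22e-16.
  template <typename PRECISE_FLOAT>
  PRECISE_FLOAT EvalPoly(const PRECISE_FLOAT* c, int order, PRECISE_FLOAT t)
  {
      PRECISE_FLOAT v = c[order];
      for (int i = order - 1; i >= 0; --i)
          v = v * t + c[i];
      return v;
  }

  int main()
  {
      // On x86 this typically reports 18 digits and 1.0842e-19.
      std::printf("LDBL_DIG = %d, LDBL_EPSILON = %Lg\n", LDBL_DIG, LDBL_EPSILON);

      const long double c[] = { -2.0L, 0.0L, 1.0L };   // t^2 - 2
      std::printf("p(sqrt(2)) ~= %Lg\n",
                  EvalPoly<long double>(c, 2, 1.4142135623730950488L));
      return 0;
  }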

Bill P.

(1) Normalizing & the inverse, and transforms & the inverse, all degrade 
accuracy too.

(2) From the days of single floats - where the scale-up would have 
worked less well at hiding numerical issues (i.e. coincident surface 
noise) because the non-normalized / global accuracy was then only 
something like 6 decimal digits, or 1e-3 to +1e3.
