v3.8 Clean up TODOs. f_superellipsoid() / shadow cache. (Messages 11 to 13 of 13)
From: William F Pokorny
Subject: Re: v3.8 Clean up TODOs. f_superellipsoid() / shadow cache.
Date: 19 Apr 2020 18:13:06
Message: <5e9cccf2$1@news.povray.org>
On 4/18/20 4:12 PM, jr wrote:
> hi,
> 
> William F Pokorny <ano### [at] anonymousorg> wrote:
>> ...
>> Trick helps enough, I wonder if some other inbuilts could benefit from a
>> float over double option too.
> 
> does the .. cost of extra speed, in context, matter so much?  asking because
> (and perhaps I'm completely off-track) only today there was a post (by user 'guarnio')
> where the problem is/was the range of float not being enough.
> 

Not trying to be flippant, but I think it does when it does, and doesn't 
when it doesn't. It's a judgement call.

Whether the scale and range of a scene make accuracy an issue always 
depends on the accuracy you have available.

With functions and isosurfaces, the speed of even very fast inbuilt 
functions matters, because you usually want to combine them with other 
functions to build whatever you're after. The performance of all those 
functions mixed together mathematically is what can quickly get out of 
hand, to the point of being practically unusable performance-wise.

With functions and isosurfaces, we already have an object with 
user-variable accuracy via the accuracy value passed, which is often 
much less than 7-8 digits (I typically use 0.0005). I've done some 
limited testing of the isosurface solver, and - partly due to the types 
of functional input - it cannot deliver more than 6-7 digits of accuracy 
as a rule, sometimes less. With other object types and solvers you can 
get up into the 11-12 digit range, though often less. All at doubles.
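
To make that concrete, below is a minimal sketch - not POV-Ray's actual 
solver, just plain bisection in C++, with a bisect() and test function I 
made up for illustration - of how a user accuracy value like 0.0005 
bounds the precision of the returned root long before double precision 
itself becomes the limit:

#include <cmath>
#include <cstdio>
#include <functional>

// Shrink a bracketing interval [lo, hi] around a root of f until it is
// narrower than the user-supplied 'accuracy', then return its midpoint.
double bisect(const std::function<double(double)>& f,
              double lo, double hi, double accuracy)
{
    // assumes f(lo) and f(hi) have opposite signs (a bracketed root)
    while (hi - lo > accuracy) {
        double mid = 0.5 * (lo + hi);
        if (std::signbit(f(lo)) == std::signbit(f(mid)))
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main()
{
    // f(t) = t*t - 2, root at sqrt(2); accuracy 0.0005 -> ~3-4 digits
    auto f = [](double t) { return t * t - 2.0; };
    double root = bisect(f, 1.0, 2.0, 0.0005);
    std::printf("root ~ %.6f  (sqrt(2) = %.6f)\n", root, std::sqrt(2.0));
}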

Relatedly, I believe in going after better performance continually in 
software tools - otherwise you're on the slippery slope to poky. :-)

Bill P.


From: jr
Subject: Re: v3.8 Clean up TODOs. f_superellipsoid() / shadow cache.
Date: 21 Apr 2020 03:55:00
Message: <web.5e9ea6551e176347827e2b3e0@news.povray.org>
hi,

William F Pokorny <ano### [at] anonymousorg> wrote:
> On 4/18/20 4:12 PM, jr wrote:
> > ... the problem is/was the range of float not being enough.
>
> Not trying to be flippant, but I think it does when it does, and doesn't
> when it doesn't. It's a judgement call.
>
> Whether the scale and range of a scene make accuracy an issue always
> depends on the accuracy you have available.

naively, I'd assumed some kind of upgrade/development "policy" that sees all
floats replaced with doubles, in time.

> ...
> Relatedly, I believe in going after better performance continually in
> software tools - otherwise you're on the slippery slope to poky. :-)

hmm, I probably "sit on the fence" on that.  eg agree with you when it's a
compiler or other s/ware which has to take h/ware developments into account, but
kind of disagree for, say, programs not tied to h/ware, like 'sed'.


regards, jr.


From: William F Pokorny
Subject: Re: v3.8 Clean up TODOs. f_superellipsoid() / shadow cache.
Date: 22 Apr 2020 08:30:34
Message: <5ea038ea@news.povray.org>
On 4/21/20 3:52 AM, jr wrote:
> hi,
> 
...
>>
>> Whether the scale and range of a scene make accuracy an issue always
>> depends on the accuracy you have available.
> 
> naively, I'd assumed some kind of upgrade/development "policy" that sees all
> floats replaced with doubles, in time.

Maybe. I'm not aware of any such policy, but I'm not a core developer.

The code base is internally mostly at double floats. There are a few 
places, like bounding and color management, where single floats get used 
- in the former to save storage, I think, and in the latter because the 
additional accuracy is of no practical value (to color results at 
least). Moving these to doubles is on 'my' list to look at.

For povr, in the continuous pattern wave modification code, I recently 
moved a few pattern stored values from singles to doubles. Partly to 
avoid the type conversions, but mostly because my grand plan is to flesh 
out the function/pattern code so the interplay between functions and 
patterns is as seamless as it can be. I didn't want functions modified 
by a wave modifier to be getting single float parameters - in a way not 
visible to the user - when the reasonable assumption is that everything 
is at double floats.
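
As a tiny illustration of that invisible precision loss (hypothetical 
struct names, not POV-Ray's actual types): a parameter stored at single 
precision has its error baked in before any double-precision function 
code ever sees it.

#include <cstdio>

struct PatternF { float  frequency; };   // stored at single precision
struct PatternD { double frequency; };   // stored at double precision

int main()
{
    PatternF pf { 0.1f };   // float keeps ~7 significant digits
    PatternD pd { 0.1 };    // double keeps ~15-16 significant digits

    // Both widen to double at the point of use, but the float's error
    // is already locked in and invisible to the user.
    std::printf("stored as float : %.17f\n",
                static_cast<double>(pf.frequency));
    std::printf("stored as double: %.17f\n", pd.frequency);
}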

>> ...
>> Relatedly, I believe in going after better performance continually in
>> software tools - otherwise you're on the slippery slope to poky. :-)
> 
> hmm, I probably "sit on the fence" on that.  eg agree with you when it's a
> compiler or other s/ware which has to take h/ware developments into account, but
> kind of disagree for, say, programs not tied to h/ware, like 'sed'.
> 

I'm with you, I think. I failed to be clear (I 'was' too flippant :-)). 
I am pushing for continual performance testing, and especially an 
unwillingness to accept much slowdown due to changes over time without 
compensating improvements somewhere.

What has happened - intentionally or not - in moving POV-Ray from v3.7 
to v3.8, with the generic architecture compile shipped with Linux 
distributions, is a 30-40% slowdown with certain common types of scenes.

https://github.com/POV-Ray/povray/issues/363

This is after running down a lot of issues, like dynamic casts in the 
ray tracing code, to recover the performance seen in the benchmark scene.

Part of the problem is that the benchmark scene covers only a small 
slice of POV-Ray's functionality, and for a long time it was mostly all 
that got run for performance testing.

I believe, too, that too many times we said a change is only a 1 or 2% 
slowdown... Do enough of those in a year and you are well on the way to 
poky by year end. Each 1 or 2% slowdown is measured relative to 
then-current performance, so they compound. Many later changes, if 
judged against January 1st performance, might have been rejected out of 
hand as too much of a slowdown.
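
As a rough worked example (my numbers, not measured): a dozen 
independent "only 2%" slowdowns compound to 1.02^12, or about 1.27 - 
roughly a 27% hit on the year.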

Aside: The GNU build methodology supports a code-marking method for 
hardware-optimized versions of functions that get picked/set at 'load 
time' depending upon your particular hardware or certain hardware 
capabilities. Both compiler-optimized and hand-optimized code can be 
implemented in this way. Yes, this is a reason my personal povr version 
is headed to a GNU-only build(1) process. I want to play with this 
capability in povr proper.
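
If I'm reading the mechanism right, this is GCC's function 
multi-versioning (ifunc-based, so it needs GCC plus glibc on x86). A 
minimal sketch, with a made-up dot() function for illustration: the 
compiler emits one clone per listed target, and the dynamic loader picks 
the best match for the running CPU at load time.

#include <cstddef>

// One clone is emitted per target; selection happens at load time.
__attribute__((target_clones("avx2", "sse4.2", "default")))
double dot(const double* a, const double* b, std::size_t n)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += a[i] * b[i];   // each clone auto-vectorized for its ISA
    return sum;
}

int main()
{
    double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
    return dot(a, b, 4) == 20.0 ? 0 : 1;   // 4+6+6+4 = 20
}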

Bill P.

(1) - Our current vector template class looks to be somewhat in the way 
of best 'compiler' hardware optimization...

