POV-Ray : Newsgroups : povray.general : World size limits? : Re: World size limits?
  Re: World size limits?  
From: Le Forgeron
Date: 2 Jan 2010 10:41:11
Message: <4b3f6917@news.povray.org>
On 02/01/2010 15:34, David Given wrote:
> On 02/01/10 13:02, Thorsten Froehlich wrote:
>> On 02.01.10 12:57, David Given wrote:
>>> I'm trying to model an astronomical situation, with everything to scale.
>>> As this is vaguely scientific, I'm trying to use standard units
>>> throughout, which means that some of my coordinates are rather large.
>> <http://wiki.povray.org/content/Knowledgebase:Language_Questions_and_Tips#Topic_24>
> 
> Thanks, but I'm not sure whether this is actually the problem --- I've
> tried scaling my entirely model down by a factor of 1000 (so that I'm
> now working with kilometres rather than metres) and everything behaves
> much better. This suggests that it's related to absolute magnitude
> rather than precision (because that hasn't changed).
> 
> In addition, if I carefully ease an object over the edge of the boundary
> I can see it getting clipped. I wouldn't expect precision errors to
> behave with, well, such precision!
> 
> Can Povray be built to use long doubles instead of doubles?
> 
Your problem is that there are two kinds of precision, and magnitude
ties them together. Your scene provides coordinates with magnitude U
and precision u. But some objects are transformed to a unit object
internally, and the result is transformed back to your scene's
dimensions once solved. For instance, the length of a full cone is
always 1.0 inside the solver, and that solver has a finite precision
limit of its own. When the result returns to scene space, the solver's
error is scaled back up by the magnitude U.

So reducing the magnitude in the scene does in fact have an impact on
the precision. For an astronomical system, you would do better to use
thousands of kilometres as the unit (Earth circumference ≈ 40,
radius ≈ 40/(2*PI) ≈ 6.37).

In fact, you'd do better to use the AU directly (1 AU = 149.60×10^9 m),
which still leaves the Oort cloud at its 50,000 AU radius!
Sedna has quite a small orbit by comparison, with its 928 AU aphelion.

Changing to long double instead of double: the issue is that not only
must the type be changed in the structures (easy), but every math
function call must be replaced with the corresponding long double
function: cosl instead of cos, sqrtl instead of sqrt, and so on,
everywhere. (Otherwise the computation itself would still run at 64
bits while merely storing the result in a 128-bit slot. **) And long
double is not portable, at least not backward to some old architectures
still supported. That's a lot of code to change, for a very marginal
gain, if any.

Arithmetic on long double may also be significantly slower than on
double, especially on 32-bit architectures. (You would fill the CPU
cache faster as well, which means a very real loss of performance,
sooner.) Do you know of any FPU with 160-bit units? (Assuming the same
64→80 storage ratio: 128→160?) Graphics cards have wide buses, but
they usually pump more than one float at a time; they rarely deal with
such precision either.

**: Beware: even when using long double, you might get a compiler that
handles it as a plain double (from the size and precision point of view).

PS: the industry currently seems to hesitate between binary128 (the
straightforward extension of IEEE-754, with a 1/15/113
sign/exponent/significand split, where 64-bit is 1/11/53 and 32-bit is
1/8/24) and a new format from the 2008 revision: decimal128 (which
keeps decimal rounding exact, as needed for accounting; its limit is a
coefficient of 10^34-1, i.e. 34 decimal digits).
And to make things worse, decimal128 has two representations: a simple
binary-integer significand, and a densely packed decimal significand.

Wait & See!

