_theCardinal <_the### [at] yahoocom> wrote:
> "But double precision is actually 64bit."
> To be technical the number of bits used for a double is implementation
> dependent. The requirement is simply that a float <= double. It is up to
> compiler to decide how to interpret that. Using double in lieu of float
> simply indicates the desire for additional precision - not the requirement
> (in C and C++). Hence it is impossible in general to say povray is using
> 64 bits. See: The C++ Programming Language (TCPL) 74-75.
That may be true *in theory*. In practice it's not compiler-dependent
but hardware-dependent: since practically all existing hardware (including
non-Intel hardware) uses the IEEE 754 64-bit double-precision floating point
format, that's what all compilers use too. It wouldn't make sense to use
anything else.
So yes, you can say *for sure* that POV-Ray uses 64-bit floating point
numbers. I challenge you to name a computer on which POV-Ray runs and where
'double' is not 64 bits long.
> Compilers may have more than a few techniques to simulate 64 bit computation
> on a 32 bit architecture, but I am not experienced enough in compiler design
> to state them within reasonable doubt.
You have a serious misconception about FPUs and double-precision floating
point numbers.
Compilers don't need to simulate anything: Intel FPUs have supported 64-bit
floating point numbers in hardware probably since the 8087. That has nothing
to do with the register size of the CPU, as the FPU is quite independent of it.
> Its worth noting that the time lost
> in doing 2 ops instead of 1 is easily regained in shifting from 1-2
> processors to an array of processors, so this is not a concern provided the
> utilization of the array is sufficiently high.
1) There's no need to do "2 ops instead of 1" when speaking about 64-bit
floating point numbers.
2) Even if it were, it's not possible to perform floating point arithmetic
with a larger floating point type by simply doing 2 operations instead of
the regular 1.
--
- Warp