scott <sco### [at] scottcom> wrote:
> Try a simple 1-line scene like this:
> sphere{ <0,0,1> .4 scale X pigment{color rgb 10}}
> And increase X until the sphere disappears, on my system X=2e7 doesn't
> render.
AFAIK that's not because of precision, but because of an explicit limit
in the source code for such things.
--
- Warp
Warp wrote:
>> And increase X until the sphere disappears, on my system X=2e7 doesn't
>> render.
>
> AFAIK that's not because of precision, but because of an explicit limit
> in the source code for such things.
That is so indeed.
>> And increase X until the sphere disappears, on my system X=2e7 doesn't
>> render.
>
> AFAIK that's not because of precision, but because of an explicit limit
> in the source code for such things.
Is the purpose of the explicit limit to keep precision errors from
becoming visible?
Why is the limit so low? 2e7 is absolutely tiny compared to the 64-bit
floating-point range of roughly 1e308 (which presumably POV uses).
scott wrote:
> Why is the limit so low? 2e7 is absolutely tiny compared to the 64-bit
> floating-point range of roughly 1e308 (which presumably POV
> uses).
Personally, I have the impression that the rationale behind most of
these limits has been lost in obscurity by now.
>>> And increase X until the sphere disappears, on my system X=2e7 doesn't
>>> render.
>>
>> AFAIK that's not because of precision, but because of an explicit limit
>> in the source code for such things.
>
> Is the purpose of the explicit limit to hide precision errors becoming
> visible?
>
> Why is the limit so low? 2e7 is absolutely tiny compared to the 64-bit
> floating-point range of roughly 1e308 (which presumably POV
> uses).
>
>
If you have a value of about 1e100, then the precision of the last
representable digit would be about 1e84 to 1e85. This is a serious loss
of precision.
Alain
Alain <aze### [at] qwertyorg> wrote:
> If you have a value of about 1e100, then the precision of the last
> representable digit would be about 1e84 to 1e85. This is a serious loss
> of precision.
Note that a double-precision floating-point number (which is 64 bits
in size) has only 53 bits of precision in its mantissa. That's approximately
15 digits of precision in base 10. (In other words, if you try to store
a number with more than 15 significant decimal digits into such a
floating-point value, the lower ones will just be lost.)
I assume 1e7 was chosen as half of that.
--
- Warp
> Alain <aze### [at] qwertyorg> wrote:
>> If you have a value of about 1e100, then the precision of the last
>> representable digit would be about 1e84 to 1e85. This is a serious loss
>> of precision.
>
> Note that a double-precision floating-point number (which is 64 bits
> in size) has only 53 bits of precision in its mantissa. That's approximately
> 15 digits of precision in base 10. (In other words, if you try to store
> a number with more than 15 significant decimal digits into such a
> floating-point value, the lower ones will just be lost.)
>
> I assume 1e7 was chosen as half of that.
>
Probably. It looks like a reasonable cutoff point when you consider that,
during the calculations, you need to take square roots as well as squares
and cubes.
Just multiplying two floats can double the number of significant digits.
Alain
Alain wrote:
>> I assume 1e7 was chosen as half of that.
>>
> Probably. It looks like a reasonable cutoff point when you consider that,
> during the calculations, you need to get square roots as well as squares
> and cubes.
> Just a multiplication of two floats can double the number of digits.
Although this is the case, multiplications do not affect the precision
of floating-point computations too much. The same goes for square roots.
Note that even though a double-precision floating-point number can hold
numbers with only 15 /significant/ digits, it can still hold values much
larger than 1e+15 (albeit at lower precision).
The most troublesome operations in the realm of floating-point numbers
are actually subtractions (or additions of values with opposite sign,
for that matter): for instance, with operands carrying 15 significant
digits, 1.000 - 0.999 gives 0.001 with only 12 significant digits of
real information left.
clipka <ano### [at] anonymousorg> wrote:
> Although this is the case, multiplications do not affect the precision
> of floating-point computations too much.
Of course it doesn't affect the precision of floating-point computations.
Double-precision floating-point numbers always have 53 bits of
precision; that doesn't change.
However, when multiplying two 15-digit numbers, the result will have
30 significant digits of information, and of those the 15 least significant
will be lost. Thus the result will not be exact.
As a concrete example:
0.123456789012345 * 0.123456789012345 =
0.01524157875323866912056239902
When stored into a 'double', the result will be rounded to something like
0.0152415787532387
--
- Warp
Warp wrote:
> Of course it doesn't affect the precision of floating point computations.
> Double-precision floating point numbers will always have 53 bits of
> precision. that doesn't change.
>
> However, when multiplying two 15-digit numbers, the result will have
> 30 significant digits of information, and from those the 15 least significant
> will be lost. Thus the result will not be exact.
Well, it will be just about as exact as the input numbers:
> 0.123456789012345 * 0.123456789012345 =
> 0.01524157875323866912056239902
That's actually more like:
( 0.123456789012345e0 +/- 1.0e-15 )
* ( 0.123456789012345e0 +/- 1.0e-15 )
= 0.01524157875323866912056239902e0
+/- 0.123456789012345e-15
+/- 0.123456789012345e-15
+ 1.0e-30
= 0.1524157875323866912056239902e-1
+/- 2.46913578024690e-16
+ 1.0e-30
which again fits neatly within the original relative precision. No harm
done by storing it into a double-precision floating point again (the
number format even provides a slightly higher precision than the
multiplication result could ever have, given that the operands were
double-precision floats as well).