> floor(A) Floor of A. Returns the largest integer less than or equal
> to A. Rounds down toward negative infinity.
>
> And you can use A - int(A) to see if the fractional part is > 0.5
Or just
floor( A + 0.5 )
(Although this will cause -3.5 to be rounded to -3, which I'm not sure is
correct.)
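A quick sketch in Python (POV-Ray SDL isn't directly runnable here, but `math.floor` behaves the same as SDL's `floor`) confirms the -3.5 behavior:

```python
import math

def round_half_up(a):
    # round-half-up via floor: exact halves always move toward +infinity
    return math.floor(a + 0.5)

print(round_half_up(2.5))   # 3
print(round_half_up(-3.5))  # -3, not -4: halves go up, not away from zero
print(round_half_up(-3.6))  # -4
```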
- Slime
[ http://www.slimeland.com/ ]
Slime <fak### [at] email address> wrote:
> floor( A + 0.5 )
> (Although this will cause -3.5 to be rounded to -3, which I'm not sure is
> correct.)
Why would rounding it to -4 be more correct? -3.5 is equally close to
either value.
--
- Warp
> Why would rounding it to -4 be more correct? -3.5 is equally close to
> either value.
There's an arbitrary rule that you round up for .5, according to my third
grade teacher. =) I just mentioned that in case it's something someone cares
about. It doesn't bother me because I've never encountered a practical
situation where it mattered.
However, I've always assumed the reason for the rule was that the original
number might have been truncated from something higher than the .5 value,
like .53. I suppose this *could* be a practical consideration if your input
is from a user who might have truncated the number.
- Slime
[ http://www.slimeland.com/ ]
Unrelated to this topic in particular, but related to rounding. There's
a curious anecdote about rounding in FPUs:
As we all know, FPUs naturally have a limited number of bits to
represent floating-point values. This, of course, means that if the
result of an operation would need more binary digits than fit in an
FPU register, the lowest-order bits are simply dropped.
Now this of course raises the question of what to do with the
least-significant bit when this happens. The most correct way would be,
of course, to round the number properly, so the least-significant bit
gets a rounded value depending on the even-less-significant bits of
the calculation which were dropped. (And in fact, other bits may
be changed because of this rounding as well.)
But this requires quite a lot of extra logic. It would mean that the
FPU has to actually calculate the value with at least a few bits more
precision than the size of an FPU register, and round accordingly.
This would require a lot more logic, make the FPU a lot more complicated
and expensive, increase power requirements, increase heat production, etc.
For this reason most FPUs simply clamp those bits away, period, without
any kind of rounding. It's the most cost-effective thing to do, and the
error produced is, after all, very small. However, this still introduces
small rounding errors with lengthy calculations which use and reuse
values from earlier calculations.
In some FPU (I really can't remember which) they tried something a bit
different: Round the value up or down *randomly*. In other words, instead
of calculating extra bits of the result, just randomly assume that the
value needs to be rounded up or down.
Curiously, when they tested this with applications where the rounding
errors caused by the regular clamping method were significant, the random
method caused much smaller rounding errors.
The reason is rather logical: When you consistently clamp the lowest
bits away, you are introducing a bias to the results: All results will
be rounded towards zero, and with lengthy calculations all results will
start slowly drifting because of this. However, with random rounding the
bias is averaged away. Assuming that approximately half of the results
would indeed have to be rounded down and the rest up, the random rounding
produces, with lengthy calculations, a result which is much closer to the
correct one.
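The effect is easy to reproduce in a software model (Python here, not an actual FPU): quantize each partial sum to a coarse step, once by chopping toward zero and once by rounding up or down at random, weighted by the dropped fraction. The step size and iteration count below are arbitrary choices for the demonstration:

```python
import math, random

STEP = 0.01  # pretend our "register" only resolves multiples of 0.01

def truncate(x):
    # chop toward zero: the dropped fraction is simply lost
    return math.trunc(x / STEP) * STEP

def stochastic(x):
    # round down or up at random, weighted by the dropped fraction,
    # so the *expected* value is unbiased
    lo = math.floor(x / STEP)
    frac = x / STEP - lo
    return (lo + (1 if random.random() < frac else 0)) * STEP

random.seed(1)
t = s = 0.0
for _ in range(100_000):
    t = truncate(t + 0.001)    # each increment is below the resolution...
    s = stochastic(s + 0.001)

print(t)  # 0.0 -- truncation loses every single increment
print(s)  # close to the true sum, 100.0
```

The truncating accumulator never moves at all, because every increment falls entirely in the dropped bits, while the random version lands within a fraction of a percent of the true total.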
--
- Warp
"Warp" <war### [at] tag povray org> wrote in message
news:486340c6@news.povray.org...
> Unrelated to this topic in particular, but related to rounding. There's
> a curious anecdote about rounding in FPUs:
>
> ... most FPUs simply clamp those bits away, period, without
> any kind of rounding.
> ...
> In some FPU ... they ... round the value up or down *randomly*.... the
> rounding
> errors caused by the regular clamping method were significant, the random
> method caused much smaller rounding errors.
> ...
> Assuming that approximately half of the results
> would indeed have to be rounded down and the rest up, the random rounding
> produces, with lengthy calculations, a result which is much closer to the
> correct one.
But a result which is not necessarily the same each time you calculate the
answer using the same dataset as input.
Regards,
Chris B.
Chris B <nom### [at] nomail com> wrote:
> But a result which is not necessarily the same each time you calculate the
> answer using the same dataset as input.
You should never rely on getting an exact result when calculating with
floating point numbers anyways.
--
- Warp
"Warp" <war### [at] tag povray org> wrote in message
news:48635c1a@news.povray.org...
> Chris B <nom### [at] nomail com> wrote:
>> But a result which is not necessarily the same each time you calculate
>> the
>> answer using the same dataset as input.
>
> You should never rely on getting an exact result when calculating with
> floating point numbers anyways.
>
No, but, a computer that doesn't give the same results when you run an
algorithm twice in a row would be hell to debug. So you'd expect 'if
(1/3=1/3)' to always give the same answer, not true half the time and false
the other half.
Regards,
Chris B.
Chris B wrote:
> "Warp" <war### [at] tag povray org> wrote in message
> news:48635c1a@news.povray.org...
>> Chris B <nom### [at] nomail com> wrote:
>>> But a result which is not necessarily the same each time you calculate
>>> the
>>> answer using the same dataset as input.
>> You should never rely on getting an exact result when calculating with
>> floating point numbers anyways.
>>
>
> No, but, a computer that doesn't give the same results when you run an
> algorithm twice in a row would be hell to debug. So you'd expect 'if
> (1/3=1/3)' to always give the same answer, not true half the time and false
> the other half.
>
> Regards,
> Chris B.
>
>
I've got another idea: force the LSB to 1. If it was 0 the result is
rounded up. If it was 1 the result is rounded down.
This emulates the randomness in a deterministic way.
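This idea is known as rounding to odd. A Python sketch on a decimal grid (a real FPU would apply it to the binary significand; the `step` parameter is just this model's stand-in for register precision):

```python
import math

def round_to_odd(x, step):
    # quantize x to a multiple of `step`; if anything was dropped,
    # pick the neighbouring multiple whose quotient is odd
    # (LSB 0 -> round up, LSB 1 -> round down)
    q = x / step
    lo = math.floor(q)
    if q == lo:                       # already exact: nothing to round
        return lo * step
    return (lo if lo % 2 else lo + 1) * step

print(round_to_odd(0.43, 0.1))  # 0.5  (4 is even, so round up to 5)
print(round_to_odd(0.57, 0.1))  # 0.5  (5 is odd, so round down)
```

Like the random method it sends roughly half of the inexact results up and half down, but the same input always produces the same output, which addresses the reproducibility concern above.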
--
You know you've been raytracing too long when...
you start thinking up your own "You know you've been raytracing too long
when..." sigs (I did).
-Johnny D
Johnny D
Chris B <nom### [at] nomail com> wrote:
> No, but, a computer that doesn't give the same results when you run an
> algorithm twice in a row would be hell to debug. So you'd expect 'if
> (1/3=1/3)' to always give the same answer, not true half the time and false
> the other half.
You'd also expect (0.1*10.0 == 1.0) to give true, but it probably doesn't.
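For what it's worth, on IEEE-754 doubles the single product `0.1 * 10.0` happens to round back to exactly `1.0`, so accumulation shows the point more reliably (Python sketch, since Python uses the platform's doubles):

```python
total = 0.0
for _ in range(10):
    total += 0.1          # 0.1 has no exact binary representation

print(total)              # 0.9999999999999999
print(total == 1.0)       # False
print(0.1 * 10.0 == 1.0)  # True -- this one happens to round exactly
```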
--
- Warp
>> But a result which is not necessarily the same each time you calculate
>> the
>> answer using the same dataset as input.
>
> You should never rely on getting an exact result when calculating with
> floating point numbers anyways.
But don't a number of video games, RTS's in particular, rely on getting
exactly the same results when given the same input? I believe the IEEE
standard specifies deterministic rounding (
http://en.wikipedia.org/wiki/IEEE_754#Rounding_floating-point_numbers ),
which is probably why they work. Round-to-even has basically the same
properties as the random method you gave.
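Round-half-to-even (banker's rounding) is also what Python's built-in `round` does, which makes the half-way behavior easy to see:

```python
# exact halves go to the nearest *even* integer, so the
# half-way cases don't all drift in the same direction
print(round(0.5))  # 0
print(round(1.5))  # 2
print(round(2.5))  # 2
print(round(3.5))  # 4
```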
- Slime
[ http://www.slimeland.com/ ]