Subject: Re: rounding
From: Warp
Date: 26 Jun 2008 03:09:59
Message: <486340c6@news.povray.org>
Unrelated to this topic in particular, but related to rounding:
there's a curious anecdote about rounding in FPUs.

  As we all know, FPUs naturally have a limited number of bits with
which to represent floating point values. This, of course, means that
if the result of an operation would need more binary digits than fit
in an FPU register, the lowest bits are simply dropped.
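
  For instance, in Python (whose floats are IEEE 754 doubles with a
53-bit significand), you can watch a bit fall off the bottom:

    # 10**16 + 1 needs 54 significant bits, one more than a double
    # holds, so the added 1 falls below the last stored bit and is
    # lost:
    print((1e16 + 1) - 1e16)    # prints 0.0, not 1.0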

  Now this of course raises the question of what to do with the
least-significant bit when this happens. The most correct way would
be, of course, to round the number properly, so that the
least-significant bit gets a value which depends on the
even-less-significant bits of the calculation that were dropped. (And
in fact, other kept bits may change as well, because the rounding can
carry upwards through them.)
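
  A small sketch of what that proper rounding looks like, using
Python to emulate keeping only a few fractional bits (the tiny 3- and
2-bit formats are just for illustration):

    # Round binary 0.10011 (= 0.59375) to 3 fractional bits. The
    # dropped bits "11" are worth more than half of the last kept
    # bit, so 0.100 is rounded up to 0.101 (= 0.625).
    ulp = 2**-3                           # value of the last kept bit
    print(round(0.59375 / ulp) * ulp)     # 0.625

    # The carry can ripple through all the kept bits: 0.111
    # (= 0.875) rounded to 2 fractional bits becomes 1.00.
    print(round(0.875 / 2**-2) * 2**-2)   # 1.0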

  But this requires quite a lot of extra logic: the FPU has to
actually calculate the value with at least a few bits more precision
than fits in an FPU register, and then round accordingly. That makes
the FPU more complicated and expensive, increases its power
requirements, increases heat production, and so on.

  For this reason most FPUs simply truncate those bits away, period,
without any kind of rounding. It's the most cost-effective thing to
do, and the error produced by any single operation is, after all,
very small. However, it still introduces small rounding errors into
lengthy calculations which use and reuse values from earlier
calculations.

  In some FPU (I really can't remember which one) they tried
something a bit different: round the value up or down *randomly*. In
other words, instead of calculating extra bits of the result, just
randomly decide whether the value gets rounded up or down.

  Curiously, when they tested this with applications where the
rounding errors caused by the regular truncation method were
significant, the random method produced much smaller errors.

  The reason is rather logical: when you consistently truncate the
lowest bits away, you introduce a bias into the results: every value
is rounded towards zero, so over a lengthy calculation the results
slowly drift in that direction. With random rounding, however, the
bias averages away: assuming that roughly half of the results would
indeed have to be rounded down and the other half up, the individual
errors largely cancel out, and a lengthy calculation ends up much
closer to the correct result.
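
  This is easy to demonstrate in software. Here's a minimal Python
sketch (a toy model, not real FPU behaviour: the 12-bit format and
the 100000-step loop are arbitrary choices) which accumulates random
values under both schemes and compares them with a full-precision
sum:

    import math
    import random

    BITS = 12                # fractional bits kept in the toy format
    SCALE = 1 << BITS

    def truncate(x):
        # Drop every bit below the last kept one (round towards
        # zero), i.e. the "just drop the low bits" method.
        return math.trunc(x * SCALE) / SCALE

    def random_round(x):
        # Keep BITS fractional bits, but decide the last kept bit
        # with a coin flip instead of looking at the dropped bits.
        return (math.trunc(x * SCALE) + random.randint(0, 1)) / SCALE

    exact = trunc_sum = rand_sum = 0.0
    for _ in range(100_000):
        step = random.random()            # next intermediate value
        exact += step                     # full double precision
        trunc_sum = truncate(trunc_sum + step)
        rand_sum = random_round(rand_sum + step)

    # Truncation drifts steadily downwards (about -12 here: half an
    # ulp lost per step on average), while the coin flip typically
    # stays within a small fraction of a unit of the exact sum.
    print("truncation error  :", trunc_sum - exact)
    print("random-round error:", rand_sum - exact)

  The coin flip works because the dropped parts of the results
average out to about half an ulp; it has no bias in one particular
direction, so the individual errors mostly cancel instead of
accumulating.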

-- 
                                                          - Warp

