Warp wrote:
> Thorsten Froehlich <tho### [at] trfde> wrote:
>> In fact, the integer division is slow due to the iterative algorithm (in
>> hardware!) needed to execute it. It would in fact be slower for most 64 bit
>> integers assuming the numbers you divide are sufficiently big (outside 32
>> bit range) because the time an integer division takes is not constant but
>> rather depends on the number of one- and zero-bits with any reasonable
>> processor released in the last three decades.
>
> Are you sure it's an iterative algorithm at transistor level?
Yes. I don't have a good reference at hand, but a very quick search turns up
<http://www.intel.com/technology/itj/2006/volume10issue02/art01_Intro_to_Core_Duo/p03_improved_cores.htm>.
Just search for "IDIV" in that document and note the remark about the
algorithm being iterative and employing an "early exit", which accounts
for the variable execution time (i.e. it depends on the pattern of zeros and
ones in the input). Given that this describes Intel's most modern CPU core,
the implications should be clear.
Of course, all this is explained much better in books about computer
hardware (there are plenty of those around). The good ones present several
variations of the algorithms in detail. In essence, however, the algorithms
remain rather similar to a binary version of the common decimal "by hand"
long division method we all once learned in school. As you may recall, that
"algorithm" is also iterative ;-)
Thorsten