scott wrote:
>> Because when you're talking about thousands of dollars as the smallest
>> increment, that wastes space.
>
> Oh I see, so it's like a base-10 float?
Not exactly. It's packed BCD: two decimal digits per byte. The exponent (where
the decimal point sits) is declared at the source code level but isn't stored
in memory. The generated code is more like "take that number that has two
decimal places and add it to this number here that has four decimal places."
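Roughly, the layout looks like this. A minimal sketch in C (real formats such
as COBOL's COMP-3 also tack a sign nibble on the end, which I'm leaving out,
and I'm assuming an even digit count):

    #include <stdio.h>
    #include <stdint.h>

    /* Pack the digits of an unsigned value two per byte, high digit in
     * the upper nibble. The scale (digits after the point) is never
     * stored; the caller just knows it -- that's the "declared at the
     * source code level" part. */
    static void pack_bcd(uint64_t value, uint8_t *buf, int ndigits)
    {
        for (int i = ndigits / 2 - 1; i >= 0; i--) {
            uint8_t lo = value % 10; value /= 10;
            uint8_t hi = value % 10; value /= 10;
            buf[i] = (uint8_t)((hi << 4) | lo);
        }
    }

    int main(void)
    {
        uint8_t buf[3];
        pack_bcd(123456, buf, 6);     /* 1234.56 with an implied scale of 2 */
        for (int i = 0; i < 3; i++)
            printf("%02X ", buf[i]);  /* prints: 12 34 56 */
        printf("\n");
        return 0;
    }

Six digits, three bytes, and nothing in memory says where the decimal point
goes.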
>> And usually (at least in modern standards) these numbers go up to
>> something like 38 digits. So why would an "int" be better than packed
>> decimal in terms of processing speed?
>
> It would avoid the need for custom hardware, but obviously it works out
> faster per $ to get the non-standard CPUs if that's what they actually use.
They're really only non-standard compared to x86. Most mainframe CPUs have had
decimal math in hardware for years. When you're talking about 38-digit numbers
with assorted numbers of digits after the decimal point, trying to do all that
in pure machine-register-size binary integer operations is slow. You don't
want slow.
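To make the "slow" concrete: a 64-bit register tops out around 19 decimal
digits, so a 38-digit value takes several machine words, and merely lining up
the decimal points before an add means a multiply-with-carry pass over every
word. A sketch (the limb layout and widths here are my own, purely for
illustration):

    #include <stdio.h>
    #include <stdint.h>

    #define LIMBS 5   /* 5 x 32 bits -- headroom for 38 decimal digits */

    /* Multiply a multi-word value by 10: a multiply plus carry
     * propagation across every limb. Aligning a scale-2 value with a
     * scale-4 value costs two of these passes; decimal hardware does
     * the same alignment as a plain digit shift. */
    static void mul10(uint32_t limb[LIMBS])
    {
        uint64_t carry = 0;
        for (int i = 0; i < LIMBS; i++) {
            uint64_t t = (uint64_t)limb[i] * 10u + carry;
            limb[i] = (uint32_t)t;
            carry   = t >> 32;
        }
    }

    int main(void)
    {
        uint32_t x[LIMBS] = { 1234, 0, 0, 0, 0 };  /* 12.34, scale 2 */
        mul10(x); mul10(x);                        /* 123400, scale 4 */
        printf("%u\n", x[0]);  /* now addable to a scale-4 value */
        return 0;
    }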
Or, to phrase it another way... You need to print 30 million phone bills
with itemized phone calls today. How much time are you going to spend
dividing by ten in order to be able to print the cost of each call, vs the
storage space you waste by storing the call times and costs as BCD instead
of binary?
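For the printing side, compare the two inner loops. Another sketch (the dollar
amount is made up):

    #include <stdio.h>
    #include <stdint.h>

    /* Binary route: peeling each printed digit off an integer costs a
     * divide (or the compiler's multiply-by-reciprocal trick) -- per
     * digit, per line item, per bill. */
    static void print_binary(uint64_t v)
    {
        char tmp[24]; int n = 0;
        do { tmp[n++] = (char)('0' + v % 10); v /= 10; } while (v);
        while (n) putchar(tmp[--n]);
    }

    /* BCD route: the digits are already decimal, so printing is a
     * nibble mask and a shift per byte -- no division anywhere. */
    static void print_bcd(const uint8_t *buf, int nbytes)
    {
        for (int i = 0; i < nbytes; i++) {
            putchar('0' + (buf[i] >> 4));
            putchar('0' + (buf[i] & 0x0F));
        }
    }

    int main(void)
    {
        uint8_t call_cost[2] = { 0x01, 0x95 };  /* $1.95 as digits 0195 */
        print_binary(195);        putchar('\n');  /* three divides */
        print_bcd(call_cost, 2);  putchar('\n');  /* zero divides */
        return 0;
    }

Three divides for a three-digit amount doesn't sound like much until you
multiply it by every line item on 30 million bills.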
--
Darren New, San Diego CA, USA (PST)
C# - a language whose greatest drawback
is that its best implementation comes
from a company that doesn't hate Microsoft.