scott wrote:
> Why not just store the total pence/cents in an int?
Because when you're talking about thousands of dollars as the smallest
increment, that wastes space. Because when you add dollars to pennies to
fractional pennies, you need to do the scaling anyway. Because when you
print out the number, you wind up doing a whole bunch of divide-by-10s
anyway, and lots and lots of financial processing involves printing out
numbers.
And usually (at least in modern standards) these numbers go up to something
like 38 digits. So why would an "int" be better than packed decimal in terms
of processing speed?
Anyway, yes, that's basically what this is doing, except with appropriate
scaling. Your question is like asking "why not just store the floating
point number without the exponent?"
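Here's a rough sketch of the idea in Python (my own made-up example, using
the standard decimal module as a stand-in for packed decimal): each value
carries a significand plus a base-10 exponent, so dollars, pennies, and
fractional pennies line up without any manual conversion.

    from decimal import Decimal, getcontext

    # Roughly the ~38 significant digits modern decimal standards allow.
    getcontext().prec = 38

    # Each value is a significand plus a base-10 exponent -- the
    # "scaling" above -- so mixed granularities align automatically.
    dollars = Decimal("1250")      # whole dollars
    pennies = Decimal("0.07")      # cents
    accrual = Decimal("0.000125")  # fractional pennies

    total = dollars + pennies + accrual
    print(total)             # 1250.070125

    # Printing just walks the stored decimal digits; there's no
    # binary-to-decimal conversion loop like there is for an int.
    print(f"${total:,.2f}")  # $1,250.07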
The oldest mainframe I worked on had a "scientific" unit (the floating-point
coprocessor) and a "business" unit (the decimal math coprocessor). The
business unit had things like scatter/gather memory moves, packed decimal,
and the equivalent of BASIC's "print using" (which I think COBOL called an
Edit field).
> Anyway, I thought banks worked internally to fractions of a pence/cent
> and only rounded for statements etc?
Depends what you're doing. Interest is probably in fractions of a cent.
Statements are to the penny. Taxes are to the dollar. Etc.
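On that last point, a quick sketch (made-up numbers, same decimal module;
I'm assuming banker's rounding here, though actual rounding rules vary):

    from decimal import Decimal, ROUND_HALF_EVEN

    accrued = Decimal("1047.38625")  # hypothetical accrued interest

    # The same amount, rounded to each context's granularity:
    interest  = accrued.quantize(Decimal("0.0001"), ROUND_HALF_EVEN)  # fraction of a cent
    statement = accrued.quantize(Decimal("0.01"), ROUND_HALF_EVEN)    # to the penny
    taxes     = accrued.quantize(Decimal("1"), ROUND_HALF_EVEN)       # to the dollar

    print(interest, statement, taxes)  # 1047.3862 1047.39 1047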
--
Darren New, San Diego CA, USA (PST)
C# - a language whose greatest drawback
is that its best implementation comes
from a company that doesn't hate Microsoft.