On 31/07/2006 at 18:43, Mike Sobers shared his insights:
> Warp <war### [at] tagpovrayorg> wrote:
>> Mike Sobers wrote:
>> How the heck could it do that?
>
> Well, I don't know. I'm not an OS programming expert. Maybe it's not
> possible, but in any case it's probably unlikely that an OS would be
> designed to operate this way, since dual 32-bit processors would be more
> efficient, I would think. Mathematically, you could use the upper 32 bits
> of a 64-bit register simultaneously with the lower 32 bits by
> shifting the information in the register upward. That way one 64-bit
> operation could accomplish two 32-bit operations. Whether this is useful
> in a larger programming sense, I don't know, but I suspect not; otherwise
> programmers would take advantage of it. While 64-bit generally means more
> _precision_ in the calculations, a program could utilize the extra
> capacity available in each 64-bit operation to accomplish two 32-bit
> operations at the same time.
>
> Your question was "why do some people think it could be twice as fast?".
> It's because some people understand math and binary operations, but not
> necessarily how computer architecture is designed around them. That's why
> the question was asked in the first place: a lot of us have a lot to
> learn about what advantages the new higher-precision hardware/software will
> provide.
>
> Mike
>
>
>
You can have more precision in integer calculations. You can do a 64-bit add in
one operation, while you need to emulate it on a 32-bit CPU. BUT when you do
floating-point operations, both 32-bit and 64-bit CPUs use 64-bit FPUs; you may
gain a little speed while transmitting the operands and retrieving the results,
but it's pretty slim. Normally, you do hundreds, even thousands, of times more FP
calculations than INT ones.
--
Alain
-------------------------------------------------
To the world you may be one person, but to one
person you may be the world.