Warp wrote:
> The source code has to
> know the endianness of the target system in order to be able to write
> bytes in the proper order.
> Which would be slower than writing the bytes in the proper order in
> the first place...
I'm not sure how these two jibe. Either you're processing the multi-byte
objects in native order and writing them out in a different order, or you're
writing them without processing them at all. In either of those cases, I
don't see compiler-generated code being slower than programmer-generated code.
The only way I can see it being slower is if there are actually two different
sets of code, depending on what order things are in, such that (for example)
one branch adds two numbers using a plain "+" and the other does some magic to
add 0xA000 and 0xA000 and get 0x4001 automatically, without ever treating
them as native binary integers. Is that what you're referring to?
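
For concreteness, here's a rough C sketch (my own illustration, nothing from
any actual implementation) of the two approaches I mean: add in native order
and swap once at write time, versus keep the values in byte-swapped form and
do the "magic" add on them directly. Both end up producing 0x4001 from the
0xA000 + 0xA000 example above, since 0xA000 byte-swapped is 0x00A0 = 160,
and 160 + 160 = 320 = 0x0140, which byte-swapped is 0x4001.

  #include <stdint.h>
  #include <stdio.h>

  /* Swap the two bytes of a 16-bit value. */
  static uint16_t swap16(uint16_t v)
  {
      return (uint16_t)((v >> 8) | (v << 8));
  }

  /* "Magic" add of two already-byte-swapped 16-bit values: convert to
     native order, add, convert back.  0xA000 + 0xA000 -> 0x4001. */
  static uint16_t swapped_add(uint16_t a, uint16_t b)
  {
      return swap16((uint16_t)(swap16(a) + swap16(b)));
  }

  int main(void)
  {
      uint16_t a = 0x00A0, b = 0x00A0;            /* native-order values   */
      uint16_t wire = swap16((uint16_t)(a + b));  /* add natively, swap once */

      printf("native add, then swap: 0x%04X\n", (unsigned)wire);  /* 0x4001 */
      printf("magic swapped add:     0x%04X\n",
             (unsigned)swapped_add(0xA000, 0xA000));              /* 0x4001 */
      return 0;
  }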
--
Darren New / San Diego, CA, USA (PST)
"That's pretty. Where's that?"
"It's the Age of Channelwood."
"We should go there on vacation some time."