Invisible schrieb:
> Captain Jack wrote:
>
>> I remember having written a game on a portable Unix machine (with a
>> Motorola 68010 processor) and being amazed at what happened with my
>> save files when I moved to my first DOS machine (with an 80386
>> processor); I hadn't ever had to deal with byte order before that. In
>> that case, the save data was relatively small, so I re-wrote it to
>> save in ASCII printable characters, which solved that problem. :)
>
> It still makes me sad that Intel chose to store bytes in the wrong order
> all those years ago...
Define "wrong" in this context...
As a matter of fact, the only situation where byte ordering can be
defined as "right" or "wrong" with irrefutable arguments is serial
transmission, and there it depends on the native bit ordering of the
physical layer: if the physical layer sends each byte starting with the
least significant bit, then consistency demands sending multi-byte
values starting with the least significant byte, so that overall the
least significant bit of the multi-byte value goes out first;
conversely, if the physical layer transmits the most significant bit of
each byte first, the same reasoning mandates sending the most
significant byte first. There are other arguments for and against both
little and big endian, but none as compelling as serial transmission.
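To illustrate what I mean, here's a quick sketch I threw together (just
an illustration, not taken from any actual protocol spec): it prints the
order in which the individual bits of a 16-bit value would leave a link
that sends each byte LSB-first, for both byte orders.

#include <stdio.h>
#include <stdint.h>

/* Print the bits of a two-byte buffer in the order a LSB-first link
 * would transmit them: byte by byte, least significant bit first. */
static void show_wire_order(const uint8_t bytes[2], const char *label)
{
    printf("%s:", label);
    for (int byte = 0; byte < 2; byte++)      /* bytes leave in memory order */
        for (int bit = 0; bit < 8; bit++)     /* each byte leaves LSB first  */
            printf(" %d", (bytes[byte] >> bit) & 1);
    printf("\n");
}

int main(void)
{
    uint16_t value = 0x0001;                  /* only overall bit 0 is set   */

    uint8_t little[2] = { value & 0xFF, value >> 8 };  /* little-endian layout */
    uint8_t big[2]    = { value >> 8, value & 0xFF };  /* big-endian layout    */

    /* Little-endian: the very first bit on the wire is the value's LSB,
     * so bit significance increases monotonically across the whole word. */
    show_wire_order(little, "little-endian + LSB-first");

    /* Big-endian: the first bit on the wire is bit 8 of the value, then
     * bits 9..15, and only afterwards bits 0..7 - an inconsistent order. */
    show_wire_order(big, "big-endian + LSB-first");

    return 0;
}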
It so happens that the Intel format is actually doing it "right" in this
respect: AFAIK two of the most important serial interfaces - RS-232 and
Ethernet - both transmit each byte least significant bit first.
So in this sense the "network byte ordering" used for multi-octet data
in most Internet standards is actually a crappy convention, as the bits
of multi-byte data will be transmitted in an inconsistent order.
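For the record, this is the standard htonl() call doing its job (again
just my own little sketch, nothing more): on a little-endian host it
reverses the bytes to produce big-endian "network byte order", which is
exactly why the overall bit order on a LSB-first link ends up
non-monotonic in significance.

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl(); on Windows it lives in winsock2.h */

int main(void)
{
    uint32_t host_value = 0x11223344;
    uint32_t net_value  = htonl(host_value);  /* big-endian network byte order */

    /* Inspect the bytes in memory order, i.e. the order they would be
     * handed to the link layer. */
    const uint8_t *p = (const uint8_t *)&net_value;
    printf("bytes as they would hit the wire: %02x %02x %02x %02x\n",
           p[0], p[1], p[2], p[3]);           /* 11 22 33 44 regardless of host */

    return 0;
}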
BTW, Intel is not the only company that preferred the little-endian convention.