>> When you read data from a file, you read the first byte first, and
>> the last byte last. Therefore, the first byte should be the MSB.
>
> Seriously: Why? Just because we Westerners write numbers this way round?
Because the most significant digit is the most important piece of
information. It's "most significant".
>> And now we have the spectacle
>> of cryptosystems and so forth designed with each data block being
>> split into octets and reversed before you can process it...
>
> That is not because little-endian would be wrong, but because the
> cryptosystems usually happen to be specified to use big-endian input
> and output.
Erm, NO.
This happens because most cryptosystems are (IMHO incorrectly) specified
to *not* use big-endian encoding.
This means that if I want to implement such a system, I have to waste
time and effort turning all the numbers backwards before I can process
them, and turning them back the right way around again afterwards. It
also means that when a paper says 0x3847275F, I can't tell whether they
actually mean 3847275F hex, or whether they really mean 5F274738 hex,
which is a completely different number.
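For illustration, here's a minimal sketch in C of the reversal step I'm
complaining about (the name swap32 is mine, not from any particular
spec). Fed 0x3847275F, it produces 0x5F274738:

#include <inttypes.h>
#include <stdio.h>

/* Reverse the bytes of a 32-bit word: the "turning the numbers
 * backwards" step an implementer must insert whenever the spec and
 * the host disagree on byte order. */
static uint32_t swap32(uint32_t x)
{
    return ((x & 0x000000FFu) << 24) |
           ((x & 0x0000FF00u) <<  8) |
           ((x & 0x00FF0000u) >>  8) |
           ((x & 0xFF000000u) >> 24);
}

int main(void)
{
    /* Prints 5F274738. */
    printf("%08" PRIX32 "\n", swap32(0x3847275Fu));
    return 0;
}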
> Wouldn't it be more convenient to start with the least significant
> digit, and stop the transmission once you have transmitted the most
> significant nonzero digit? If you did that the other way round, you'd
> have no way to know how long the number will be, and won't be able to
> determine the actual value of each individual digit until after the
> transmission.
If you start from the least significant digit, you *still* can't
determine the final size of the number until all digits are received.
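To make that concrete, a minimal sketch in C (the names and signatures
are mine): both digit orders can be accumulated incrementally as digits
arrive, and neither tells you the total length before the stream ends:

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

/* Least-significant-digit first: each digit's place value is known
 * the moment it arrives, but the total length only when the stream
 * ends. */
static uint64_t from_lsd(const uint8_t *d, size_t n, uint64_t base)
{
    uint64_t value = 0, place = 1;
    for (size_t i = 0; i < n; i++) {
        value += d[i] * place;   /* digit i contributes d[i] * base^i */
        place *= base;
    }
    return value;
}

/* Most-significant-digit first: also incremental, by reweighting
 * everything received so far before adding the new digit. */
static uint64_t from_msd(const uint8_t *d, size_t n, uint64_t base)
{
    uint64_t value = 0;
    for (size_t i = 0; i < n; i++)
        value = value * base + d[i];
    return value;
}

int main(void)
{
    uint8_t lsd[] = {5, 0, 3};   /* 305 sent least digit first */
    uint8_t msd[] = {3, 0, 5};   /* 305 sent most digit first  */
    /* Both print 305. */
    printf("%" PRIu64 " %" PRIu64 "\n",
           from_lsd(lsd, 3, 10), from_msd(msd, 3, 10));
    return 0;
}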