Darren New wrote:
> I'm thinking of two numbers. The first starts with a 7. The second
> starts with a 2. Which is bigger?
Conversely, I'm thinking of two numbers. One ends with 3. The other ends
with 5. Which is bigger?
clipka wrote:
> Invisible schrieb:
>
>> Think about it. If you store the textual representation of a number
>> in ASCII, the most significant digit comes first. But if Intel has
>> their way, the most significant byte comes... last? And if you have to
>> split a large number into several machine words, unless you number
>> those words backwards, you get monstrosities such as
>>
>> 04 03 02 01 08 07 06 05 0C 0B 0A 09
>
> That only happens if, for some obscure reason, you try to interpret a
> /character sequence/ as an integer value. Which it isn't.
>
> Directly interpreting ASCII representations of numbers is made
> problematic in multiple ways anyway:
I'm not talking about storing numbers as ASCII. I'm talking about
storing a number such as 0x0102030405060708090A0B0C. Any sane person
would store 0x01, followed by 0x02, followed by 0x03, and so on. But
Intel and a few like them have decided to instead muddle up the ordering.
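To make the disagreement concrete, here is a minimal sketch (in Python, purely for illustration) of what an x86-style machine does: split the 96-bit value into three 32-bit words and store each word with its least significant byte first. The resulting byte sequence is exactly the "monstrosity" quoted above.

```python
import struct

# The 96-bit number 0x0102030405060708090A0B0C split into three
# 32-bit machine words, each stored little-endian (as on x86).
words = [0x01020304, 0x05060708, 0x090A0B0C]
raw = b"".join(struct.pack("<I", w) for w in words)

print(raw.hex(" "))  # 04 03 02 01 08 07 06 05 0c 0b 0a 09
```

Swapping `"<I"` for `">I"` (big-endian) yields the "sane" ordering 01 02 03 ... 0C instead.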
Invisible schrieb:
>>> When you read data from a file, you read the first byte first, and
>>> the last byte last. Therefore, the first byte should be the MSB.
>>
>> Seriously: Why? Just because we Westerners write numbers this way
>> round?
>
> Because the most significant digit is the most important piece of
> information. It's "most significant".
Right. And in rhetoric they teach you to save the most important
argument for the end ("last but not least"). So?
It's not that I don't see those arguments. But they're not compelling
enough to declare big-endian "right" and little-endian "wrong". You may
call one of them "more intuitive [for us Westerners]", but that's as far
as it gets.
> Erm, NO.
>
> This happens because most cryptosystems are (IMHO incorrectly) specified
> to *not* use big-endian encoding.
Whatever: My point is that the fault is not in the little-endians, nor
in the big-endians, but just an inevitable consequence of the existence
of both, and the fault can be assigned to either side for not having
"given in".
> This means that if I want to implement such a system, I have to waste
> time and effort turning all the numbers backwards before I can process
> them, and turning them back the right way around again afterwards. It
> also means that when a paper says 0x3847275F, I can't tell whether they
> actually mean 3847275F hex, or whether they really mean 5F274738 hex,
> which is a completely different number.
0x3847275F is, in any case, 3847275F hex - although you'll indeed need
to know the endianness to tell whether it is 38;47;27;5F or 5F;27;47;38.
If some paper doesn't answer this question while it is of any relevance,
it is a bad paper.
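The distinction being drawn here can be shown in a couple of lines (a Python sketch, not from the original discussion): the *value* 0x3847275F is unambiguous, but its in-memory byte layout depends on endianness.

```python
import struct

n = 0x3847275F  # the value is the same either way

big = struct.pack(">I", n)     # big-endian byte layout
little = struct.pack("<I", n)  # little-endian byte layout

print(big.hex())     # 3847275f
print(little.hex())  # 5f274738
```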
>> Wouldn't it be more convenient to start with the least significant
>> digit, and stop the transmission once you have transmitted the most
>> significant nonzero digit? If you did that the other way round, you'd
>> have no way to know how long the number will be, and won't be able to
>> determine the actual value of each individual digit until after the
>> transmission.
>
> If you start from the least digit, you *still* can't determine the final
> size of the number until all digits are received.
But you can already store the digits at their final position if you know
the /maximum/ size. If you do it the other way round, you'll need to
shift the whole thing after the transmission if the maximum size is not
used.
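The placement argument above can be sketched as follows (a hypothetical receiver in Python; the protocol and `max_len` are assumptions for illustration). Receiving the least significant digit first, each digit can be written straight into its final slot in a fixed-size buffer, with no shifting needed when the number is shorter than the maximum.

```python
def receive_lsd_first(digits, max_len=8):
    """Place digits received least-significant-first directly into
    their final positions in a buffer of known maximum size."""
    buf = [0] * max_len
    for pos, d in enumerate(digits):   # pos 0 = least significant
        buf[max_len - 1 - pos] = d     # final position known immediately
    return buf

# Digits of 275, transmitted least significant first: 5, 7, 2.
print(receive_lsd_first([5, 7, 2]))  # [0, 0, 0, 0, 0, 2, 7, 5]
```

With most-significant-first transmission, the first digit's final slot depends on the total digit count, which is only known at the end.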
Invisible wrote:
> Are you by any chance a native of that country that decided that neither
> day/month/year nor year/month/day was a good idea and so adopted
> month/day/year?
I'm pretty sure we inherited that one from you guys. We just never went back
and fixed it. ;-)
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Darren New wrote:
> Invisible wrote:
>> Are you by any chance a native of that country that decided that
>> neither day/month/year nor year/month/day was a good idea and so
>> adopted month/day/year?
>
> I'm pretty sure we inherited that one from you guys. We just never went
> back and fixed it. ;-)
Ah, right. That would explain why you lot use month/day/year and we use
day/month/year. Oh, wait a sec...
Invisible wrote:
> Darren New wrote:
>
>> I'm thinking of two numbers. The first starts with a 7. The second
>> starts with a 2. Which is bigger?
>
> Conversely, I'm thinking of two numbers. One ends with 3. The other ends
> with 5. Which is bigger?
Yes. But I'm not the one claiming one is right and the other is wrong. :-)
You're making my point for me.
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Darren New schrieb:
> Invisible wrote:
>> As far as I know, even in languages that are usually written from
>> right to left, the most significant digit is still written "first" and
>> the least written "last".
>
> I'm pretty sure that in Arabic, where numbers come from, the LSB is
> written first (i.e., Arabic, being right-to-left, writes digits in the
> same order as we do).
Yup, apparently so. Quote from the 'Pedia:
"The numerals are arranged with their lowest value digit to the right,
with higher value positions added to the left. This arrangement was
adopted identically into the numerals as used in Europe. The Latin
alphabet runs from left to right, unlike the Arabic alphabet. Hence,
from the point of view of the reader, numerals in western texts are
written with the highest power of the base first whereas numerals in
Arabic texts are written with the lowest power of the base first"
(http://en.wikipedia.org/wiki/Arabic_numerals#Adoption_in_Europe)
So Arabs (who can claim to be closer to the invention of the whole
positional smash) would probably consider the Intel format to be more
intuitive.
Invisible wrote:
> Darren New wrote:
>> Invisible wrote:
>>> Are you by any chance a native of that country that decided that
>>> neither day/month/year nor year/month/day was a good idea and so
>>> adopted month/day/year?
>>
>> I'm pretty sure we inherited that one from you guys. We just never
>> went back and fixed it. ;-)
>
> Ah, right. That would explain why you lot use month/day/year and we use
> day/month/year. Oh, wait a sec...
You didn't read the second sentence, right?
--
Darren New, San Diego CA, USA (PST)
I ordered stamps from Zazzle that read "Place Stamp Here".
Invisible schrieb:
>
>> And since it's all imaginary anyway, and there is *NO ORDER* to bytes
>> in a computer, it's all just mathematical relationships between powers
>> of a polynomial and addresses, any argument you make against LSB vs
>> MSB you can also make against writing polynomials one way or the other.
>
> Are you by any chance a native of that country that decided that neither
> day/month/year nor year/month/day was a good idea and so adopted
> month/day/year?
Now you're trying to retreat into ridicule.
Are /your/ fellow people using YYYY-MM-DD (the only really consistent
format, unless one also reverses the digit order within each element),
except on forms laid out that way?
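One practical sense in which YYYY-MM-DD is the consistent choice (an illustrative aside with made-up dates, not from the thread): it is the only one of the three orderings in which a plain lexicographic string sort coincides with chronological order.

```python
iso = ["2009-10-26", "2009-01-05", "2008-12-31"]  # YYYY-MM-DD
mdy = ["10/26/2009", "01/05/2009", "12/31/2008"]  # MM/DD/YYYY

print(sorted(iso))  # chronological: ['2008-12-31', '2009-01-05', '2009-10-26']
print(sorted(mdy))  # not chronological: ['01/05/2009', '10/26/2009', '12/31/2008']
```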
"Invisible" <voi### [at] dev null> wrote in message
news:4ae5cee1$1@news.povray.org...
> I'm not talking about storing numbers as ASCII. I'm talking about storing
> a number such as 0x0102030405060708090A0B0C. Any sane person would store
> 0x01, followed by 0x02, followed by 0x03, and so on.
I would never store my numbers like that, bec-- wait a minute... then I'm
not sane?
Funny, the voices in my head seem to think I'm okay.
*giggle*
Sorry, returning you to your regularly scheduled discussion now...