From: scott
Subject: Re: Programming langauges
Date: 26 Oct 2009 11:55:08
Message: <4ae5c65c$1@news.povray.org>
> When you read data from a file, you read the first byte first, and the 
> last byte last.

OK...

> Therefore, the first byte should be the MSB.

That's not a very good argument.

Little-endian rulez!

It just seems way more natural, if you are writing any sort of processing 
with bits/bytes etc., that array index zero should be the LSB.
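
One way to see the appeal: with little-endian storage, byte i of an 
integer always carries weight 256^i, however wide the integer is. A 
minimal C sketch of that relationship (the helper name is illustrative, 
not from any particular library):

    #include <stdint.h>
    #include <stdio.h>

    /* Reassemble an integer from little-endian bytes: index i has weight
       256^i, so the loop is uniform for any width up to 8 bytes. */
    static uint64_t from_le_bytes(const uint8_t *b, size_t n)
    {
        uint64_t value = 0;
        for (size_t i = 0; i < n; i++)
            value |= (uint64_t)b[i] << (8 * i);
        return value;
    }

    int main(void)
    {
        uint8_t b[] = { 0x78, 0x56, 0x34, 0x12 };  /* little-endian 0x12345678 */
        printf("%llX\n", (unsigned long long)from_le_bytes(b, 4));  /* 12345678 */
        return 0;
    }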


From: clipka
Subject: Re: Programming langauges
Date: 26 Oct 2009 11:59:14
Message: <4ae5c752$1@news.povray.org>
Invisible wrote:
>>> It still makes me sad that Intel chose to store bytes in the wrong 
>>> order all those years ago...
>>
>> Define "wrong" in this context...
> 
> When you read data from a file, you read the first byte first, and the 
> last byte last. Therefore, the first byte should be the MSB.

Seriously: why? Just because we Westerners write numbers this way round?

Note that this digit ordering has its drawbacks; as a practical example, 
have a look at any data table: while the text columns are usually 
left-adjusted, numbers are always right-adjusted, because otherwise they 
wouldn't line up.


> But no, Intel decided that this would be too easy.

It wasn't Intel who made that decision. As mentioned before, they were 
not the only ones to follow this path.


> And now we have the spectacle
> of cryptosystems and so forth designed with each data block being split 
> into octets and reversed before you can process it...

That is not because little-endian is wrong, but because the 
cryptosystems usually happen to be specified with big-endian input and 
output (owing to the fact that big-endian has become the quasi-standard 
in Internet protocols). They would work just as well with little-endian 
values - but then the big-endians would have to do the byte reversal, or 
the algorithms would simply be incompatible.

So you can just as well blame this problem on the people who kept 
insisting on big-endian byte ordering.
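
To make the cost concrete: the reversal either camp complains about is a 
fixed byte swap at the input/output boundary, and it is its own inverse. 
A hedged C sketch (the function name is illustrative, not from any 
particular crypto spec):

    #include <stdint.h>

    /* Swap a 32-bit word between little- and big-endian byte order.
       The swap is its own inverse, so the same routine serves whichever
       camp has to do the reversal: swap32(0x01020304) == 0x04030201. */
    static uint32_t swap32(uint32_t x)
    {
        return  (x >> 24)
             | ((x >>  8) & 0x0000FF00u)
             | ((x <<  8) & 0x00FF0000u)
             |  (x << 24);
    }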


>> It so happens that Intel format is actually doing it "right" in this 
>> respect: AFAIK two of the most important serial interfaces - RS232 and 
>> Ethernet - both transmit each byte starting with the least significant 
>> bit first.
> 
> Well then that would be pretty dubious as well. (AFAIK, MIDI does it the 
> correct way around.)

Again: why would that be dubious? The only reason I can think of is the 
convention of how we write numbers, but that's just a convention, and - 
as shown above - not even an ideal one.

Wouldn't it be more convenient to start with the least significant 
digit, and stop the transmission once you have transmitted the most 
significant nonzero digit? If you did that the other way round, you'd 
have no way of knowing how long the number would be, and wouldn't be 
able to determine the actual value of each individual digit until after 
the transmission.
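
This is, in fact, how some real variable-length integer encodings work; 
a hedged sketch in the style of LEB128, which emits seven value bits per 
byte, least significant group first, with the high bit saying "more 
follows":

    #include <stdint.h>
    #include <stddef.h>

    /* LEB128-style encoding: least significant 7-bit group first; the
       high bit of each byte flags that more groups follow.  Transmission
       stops right after the most significant nonzero group. */
    static size_t encode_uleb128(uint64_t value, uint8_t *out)
    {
        size_t n = 0;
        do {
            uint8_t byte = value & 0x7F;
            value >>= 7;
            if (value != 0)
                byte |= 0x80;          /* more groups follow */
            out[n++] = byte;
        } while (value != 0);
        return n;                      /* 1 byte for 0..127, 2 up to 16383, ... */
    }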


>> BTW, Intel is not the only company that preferred little-endian 
>> convention.
> 
> Indeed, the 6502 did it decades ago. Still doesn't make it right.

And your arguments so far don't make it wrong. Unconventional maybe (in 
the most literal sense) - but not wrong.

I think the terms "little-endians" and "big-endians" for the adherents 
of each philosophy are aptly chosen (see "Gulliver's Travels" by 
Jonathan Swift).


From: Invisible
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:03:16
Message: <4ae5c844$1@news.povray.org>
scott wrote:

> That's not a very good argument.
> 
> It just seems way more natural, if you are writing any sort of processing 
> with bits/bytes etc., that array index zero should be the LSB.

...and this *is* a good argument??



Think about it. If you store the textual representation of a number in 
ASCII, the most significant digit comes first. But if Intel has their 
way, the most significant byte comes... last? And if you have to split a 
large number into several machine words, unless you number those words 
backwards, you get monstrosities such as

   04 03 02 01 08 07 06 05 0C 0B 0A 09

instead of

   01 02 03 04 05 06 07 08 09 0A 0B 0C

as it should be.

There's no two ways about it. Little endian = backwards.
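
For what it's worth, the "monstrosity" is simply what a byte-by-byte 
memory dump of three little-endian 32-bit words looks like; a small C 
sketch that reproduces it (assuming it runs on a little-endian host):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Three 32-bit words, taken in reading order from
           0x0102030405060708090A0B0C. */
        uint32_t words[3] = { 0x01020304, 0x05060708, 0x090A0B0C };
        const uint8_t *p = (const uint8_t *)words;

        /* On a little-endian machine this prints:
           04 03 02 01 08 07 06 05 0C 0B 0A 09 */
        for (size_t i = 0; i < sizeof words; i++)
            printf("%02X ", p[i]);
        printf("\n");
        return 0;
    }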


From: Invisible
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:10:47
Message: <4ae5ca07$1@news.povray.org>
>> When you read data from a file, you read the first byte first, and 
>> the last byte last. Therefore, the first byte should be the MSB.
> 
> Seriously: why? Just because we Westerners write numbers this way round?

Because the most significant digit is the most important piece of 
information. It's "most significant".

>> And now we have the spectacle
>> of cryptosystems and so forth designed with each data block being 
>> split into octets and reversed before you can process it...
> 
> That is not because little-endian is wrong, but because the 
> cryptosystems usually happen to be specified with big-endian input and 
> output.

Erm, NO.

This happens because most cryptosystems are (IMHO incorrectly) specified 
to *not* use big-endian encoding.

This means that if I want to implement such a system, I have to waste 
time and effort turning all the numbers backwards before I can process 
them, and turning them back the right way around again afterwards. It 
also means that when a paper says 0x3847275F, I can't tell whether they 
actually mean 3847275F hex, or whether they really mean 5F274738 hex, 
which is a completely different number.

> Wouldn't it be more convenient to start with the least significant 
> digit, and stop the transmission once you have transmitted the most 
> significant nonzero digit? If you did that the other way round, you'd 
> have no way of knowing how long the number would be, and wouldn't be 
> able to determine the actual value of each individual digit until 
> after the transmission.

If you start from the least digit, you *still* can't determine the final 
size of the number until all digits are received.


From: clipka
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:14:26
Message: <4ae5cae2$1@news.povray.org>
scott wrote:

> Little-endian rulez!
> 
> It just seems way more natural, if you are writing any sort of processing 
> with bits/bytes etc., that array index zero should be the LSB.

I must say that I see benefits in both conventions.

For instance, the arbitration scheme used on a CAN bus mandates that 
bits are sent highest bit first, so that the numerical value of the data 
identifier directly governs the message priority (lower data ID = higher 
message priority).
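
A hedged sketch of why MSB-first matters there: arbitration resolves at 
the first bit position where two IDs differ, with a 0 bit (dominant) 
beating a 1 (recessive), so the numerically lower ID wins. The function 
below is illustrative, not real CAN driver code:

    #include <stdint.h>

    /* CAN-style arbitration in miniature: compare two 11-bit IDs from
       the most significant bit down; a dominant 0 beats a recessive 1,
       so the lower ID (= higher priority) survives the first difference. */
    static uint16_t arbitrate(uint16_t id_a, uint16_t id_b)
    {
        for (int bit = 10; bit >= 0; bit--) {
            int a = (id_a >> bit) & 1;
            int b = (id_b >> bit) & 1;
            if (a != b)
                return a < b ? id_a : id_b;    /* dominant (0) wins */
        }
        return id_a;                           /* identical IDs */
    }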


From: Darren New
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:17:56
Message: <4ae5cbb4$1@news.povray.org>
Invisible wrote:
> As far as I know, even in languages that are usually written from right 
> to left, the most significant digit is still written "first" and the 
> least written "last".

I'm pretty sure that in Arabic, where our numerals come from, the LSB is 
written first (i.e., Arabic, written right to left, puts digits in the 
same order as we do).

The benefit of MSB first is that it's in "readable" order. The benefit of 
LSB first is that you can easily find the LSB in case you want to move the 
bytes from a long to a short, for example.
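
The long-to-short case can be made concrete: on a little-endian machine 
the low-order bytes of a wide integer sit at the same address as the 
integer itself, so narrowing needs no address arithmetic. A hedged C 
sketch (the printed result assumes a little-endian host):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        uint32_t wide = 0x12345678;
        uint16_t narrow;

        /* On a little-endian machine the least significant 16 bits live
           at the same address as the full value, so no offset is needed. */
        memcpy(&narrow, &wide, sizeof narrow);

        printf("%04X\n", narrow);   /* prints 5678 on a little-endian host */
        return 0;
    }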

I saw one discussion that printed hex dumps with the addresses incrementing 
for the ASCII part and decrementing for the numeric part, with the address 
in a column down the middle, which made reading little endian numbers trivial.

And since it's all imaginary anyway - there is *NO ORDER* to bytes in a 
computer, just mathematical relationships between powers of a polynomial 
and addresses - any argument you make against LSB vs MSB you can also 
make against writing polynomials one way or the other.

-- 
   Darren New, San Diego CA, USA (PST)
   I ordered stamps from Zazzle that read "Place Stamp Here".


From: Darren New
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:19:08
Message: <4ae5cbfc$1@news.povray.org>
Invisible wrote:
> Think about it. If you store the textual representation of a number in 
> ASCII, the most significant digit comes first. 

But that's *wrong*.  Line up a column of numbers to be added up. How do you 
do it?  You right-justify the numbers, yes?

-- 
   Darren New, San Diego CA, USA (PST)
   I ordered stamps from Zazzle that read "Place Stamp Here".


From: Darren New
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:21:04
Message: <4ae5cc70$1@news.povray.org>
Invisible wrote:
> Because the most significant digit is the most important piece of 
> information. It's "most significant".

Only if you know how many digits it has.

I'm thinking of two numbers. The first starts with a 7. The second starts 
with a 2. Which is bigger?

-- 
   Darren New, San Diego CA, USA (PST)
   I ordered stamps from Zazzle that read "Place Stamp Here".


From: Invisible
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:26:27
Message: <4ae5cdb3@news.povray.org>
Darren New wrote:

> And since it's all imaginary anyway, and there is *NO ORDER* to bytes in 
> a computer, it's all just mathematical relationships between powers of a 
> polynomial and addresses, any argument you make against LSB vs MSB you 
> can also make against writing polynomials one way or the other.

Are you by any chance a native of that country that decided that neither 
day/month/year nor year/month/day was a good idea and so adopted 
month/day/year?


From: clipka
Subject: Re: Programming langauges
Date: 26 Oct 2009 12:29:03
Message: <4ae5ce4f$1@news.povray.org>
Invisible wrote:

> Think about it. If you store the textual representation of a number in 
> ASCII, the most significant digit comes first. But if Intel has their 
> way, the most significant byte comes... last? And if you have to split a 
> large number into several machine words, unless you number those words 
> backwards, you get monstrosities such as
> 
>   04 03 02 01 08 07 06 05 0C 0B 0A 09

That only happens if, for some obscure reason, you try to interpret a 
/character sequence/ as an integer value. Which it isn't.

Directly interpreting ASCII representations of numbers is problematic in 
multiple ways anyway:

- The ASCII representation has no fixed width; the sequence "42" may 
represent the value 4*10+2, but it may just as well be part of a much 
bigger value, such as 4*1e55 + 2*1e54 + something.

- You may encounter digit grouping characters such as "," (or, depending 
on language, even "." or what-have-you).

- The numerical base does not match anything remotely suitable for 
digital representation.

- Half of each byte (actually even a few fractions of a bit more) just 
contains redundant overhead.


> instead of
> 
>   01 02 03 04 05 06 07 08 09 0A 0B 0C
> 
> as it should be.
> 
> There's no two ways about it. Little endian = backwards.

No - you've got the whole thing wrong. Technically, little- and 
big-endian are "upwards" and "downwards", in the sense that the physical 
memory address lines and data lines are orthogonal to one another:

    01
    02
    03
    04
    05
    06
    07
    08
    09
    ...

If you need to group these, you'll get:

    01020304
    05060708
    090A0B0C
    ...

or

    04030201
    08070605
    0C0B0A09
    ...

neither of which has any inherent inconsistency.
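
A small C sketch of the two groupings, reading the same byte column as 
32-bit words both ways (purely illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint8_t b[12] = { 0x01, 0x02, 0x03, 0x04, 0x05, 0x06,
                                0x07, 0x08, 0x09, 0x0A, 0x0B, 0x0C };

        for (int w = 0; w < 3; w++) {   /* big-endian grouping: 01020304 ... */
            uint32_t v = ((uint32_t)b[4*w]   << 24) | ((uint32_t)b[4*w+1] << 16)
                       | ((uint32_t)b[4*w+2] <<  8) |  (uint32_t)b[4*w+3];
            printf("%08X\n", v);
        }
        for (int w = 0; w < 3; w++) {   /* little-endian grouping: 04030201 ... */
            uint32_t v = ((uint32_t)b[4*w+3] << 24) | ((uint32_t)b[4*w+2] << 16)
                       | ((uint32_t)b[4*w+1] <<  8) |  (uint32_t)b[4*w];
            printf("%08X\n", v);
        }
        return 0;
    }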

