On 02.08.2010 23:44, Stephen wrote:
> On 02/08/2010 10:27 PM, clipka wrote:
>>
>> Actually the terms "big-endian" and "little-endian" are only confusing
>> as long as you don't know about their etymology. If you do, it's
>> actually pretty easy to remember that "big-endian" is, of course, the
>> byte ordering that /starts/ with the "big end".
>
> I thought it was all about which way you opened your boiled egg.
Exactly. With the people being named according to the end where they
opened it and, hence, /started eating/ it. I.e., "big-endians" /started/
at the egg's big end.
(I must confess I probably wouldn't have learned which is which to this
day, if it wasn't for this background knowledge.)
On 02/08/2010 11:30 PM, clipka wrote:
> (I must confess I probably wouldn't have learned which is which to this
> day, if it wasn't for this background knowledge.)
I read the book.
--
Best Regards,
Stephen
clipka wrote:
> byte ordering that /starts/ with the "big end".
But that's my point. "Starts" and "ends" is meaningless. The MSB is either
at the lowest address or it's at the highest address. The only reason we
think of "starts" and "ends" that way is that we write from left to right.
I suspect "big endian" and "little endian" would be confusing to those
speaking Hebrew, Arabic, or whatever version of Chinese goes from top to
bottom.
I mean, if the terms were in Chinese, they'd be "highendian" and
"lowendian", and you'd have a heck of a time guessing what that's supposed
to mean.
> or just vice versa. Yeah, sure, eggheads :-)
Well, there's arguments for both that make sense. But it's really quite
arbitrary exactly because there *are* arguments for both.
--
Darren New, San Diego CA, USA (PST)
C# - a language whose greatest drawback
is that its best implementation comes
from a company that doesn't hate Microsoft.
On 03.08.2010 02:20, Darren New wrote:
> clipka wrote:
>> byte ordering that /starts/ with the "big end".
>
> But that's my point. "Starts" and "ends" is meaningless. The MSB is
> either at the lowest address or it's at the highest address. The only
> reason we think of "starts" and "ends" that way is that we write from
> left to right. I suspect "big endian" and "little endian" would be
> confusing to those speaking Hebrew, Arabic, or whatever version of
> Chinese goes from top to bottom.
Huh?
Processors /do/ have a "native" ordering of the memory words: The
typical direction of program flow. While this may theoretically differ
between processors just like the writing direction does between
languages (though I've never seen a processor so far that would
decrement its program counter, but maybe some might use Gray code
addresses), it still defines a unique logical "start" and "end" - just
like there is a unique logical "start" and "end" in each written text
(though in some writing systems you may have to guess that start and end
from the content or context, and of course you'll have the occasional
sample where start and end are intentionally ambiguous for artistic
reasons).
It is "top" and "bottom", "left" and "right" that are meaningless in the
context of memory layout, but "start" and "end" are absolutely not: They
might be different between processors, too, but they're always
unambiguous and meaningful.
> I mean, if the terms were in Chinese, they'd be "highendian" and
> "lowendian", and you'd have a heck of a time guessing what that's
> supposed to mean.
Not really - note that we don't call them "left-endian" or
"right-endian" either.
They're called "big-endian" and "little-endian" not because a particular
byte is at the "biggest" (or "smallest") address, but because the
"biggest" byte (the MSB) or the "smallest" byte (the LSB) comes first in
memory.
/And/ these particular terms are used because they had already been used
in a different context and... well, let's put it this way: It wasn't the
byte ordering zealots who suggested them. Abso-bloody-lutely not. I guess
they were rather pissed off when the terms first came up in this use.
Well, those who knew the story behind it, that is.
>> or just vice versa. Yeah, sure, eggheads :-)
>
> Well, there's arguments for both that make sense. But it's really quite
> arbitrary exactly because there *are* arguments for both.
... which, as a matter of fact, is exactly why the terms "big-endian"
and "little-endian" so beautifully hit home.
clipka wrote:
> It is "top" and "bottom", "left" and "right" that are meaningless in the
> context of memory layout, but "start" and "end" are absolutely not:
If you have a machine whose instructions are all 32 bits long, asking the
order of bytes within a machine word might not make much sense.
But OK, I'll grant you that "low address" chronologically comes before "high
address" for at least some instructions, generally speaking.
> They're called "big-endian" and "little-endian" not because a particular
> byte is at the "biggest" (or "smallest") address, but because the
> "biggest" byte (the MSB) or the "smallest" byte (the LSB) comes first in
> memory.
I know that. I was arguing about "first" in memory, not "biggest" address.
Indeed, I'd say "bigger address" makes more sense / is less ambiguous than
"first address". But all the memory is there all the time. To call one
memory address "before" another, you have to have some chronological
ordering of memory addresses. Which, as you pointed out, comes from the
auto-incrementing types of instructions, including the implicit
auto-increment of the PC.
> ... which, as a matter of fact, is exactly why the terms "big-endian"
> and "little-endian" so beautifully hit home.
Yep.
--
Darren New, San Diego CA, USA (PST)
C# - a language whose greatest drawback
is that its best implementation comes
from a company that doesn't hate Microsoft.
On 02/08/2010 17:46, Invisible wrote:
> Insert-sort is an O(N^2) algorithm. It takes O(N) time to work out where
> the next item needs to go, and it takes O(N) time to move all the data
> down one slot in order to make room to insert the new item. And you have
> to do this sequence of operations exactly N times. Hence, O(N^2).
> Now, suppose I built a machine containing special RAM chips, where
> there's a special signal I can send that copies the contents of every
> memory register in a specified range into the next register along.
> Assuming all these shifts happen concurrently, I just made shifting an
> O(1) operation. (!!)
Congratulations,
that's the difference between using a linked list to store your data and
an indexed array.
>
> Depending on how wide your array slots are, you might need to perform
> several hardware-level shifts. But since the slot size is by definition
> fixed, that's still O(1).
If, instead of a single value, the cells of your array also contain a
pointer to the next value in the list, you can sort it without moving the
values, updating only the "next" index field.
That approach trades memory (an additional explicit next-index value for
each cell) for better performance.
Notice: the values to sort might be much bigger than the next-index
field, which only needs to be able to hold an index up to the size of the
array, in terms of cells.
--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Le_Forgeron <lef### [at] free fr> wrote:
> If instead of a single value, the cells of your array contains also a
> pointer to the next value in the list, you could sort it without moving
> the value, and only updating the "next" index field.
How would you find the proper place for the value faster than O(n)?
Performing insertion sort on a linked list is still O(n^2) even though
the insertion itself can be done in constant time. That's because finding
the point of insertion takes O(n) time.
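A quick way to convince yourself (a hypothetical counting harness, reusing
the index-linked cells from the previous post): feed the sort
already-sorted input, which is the worst case when searching from the
head, and count the comparisons. Every splice is O(1), yet the totals grow
like n^2/2:

#include <stdio.h>

#define MAXN (1 << 12)

static int nxt[MAXN];  /* nxt[i] = index of the next cell in sorted order */

/* Linked-list insertion sort over the values 0..n-1, fed in ascending
   order (cell i simply holds the value i).  Returns the number of list
   steps spent finding the insertion points, counting the check that
   ends each walk. */
static long count_comparisons(int n)
{
    int head = -1;
    long cmp = 0;

    for (int i = 0; i < n; i++) {
        if (head == -1 || i < head) {
            nxt[i] = head;
            head = i;
        } else {
            int j = head;
            while (nxt[j] != -1 && nxt[j] < i) {
                j = nxt[j];
                cmp++;
            }
            cmp++;  /* count the check that ended the walk */
            nxt[i] = nxt[j];
            nxt[j] = i;
        }
    }
    return cmp;
}

int main(void)
{
    for (int n = 512; n <= MAXN; n *= 2)
        printf("n = %5d   comparisons = %9ld   n^2/2 = %9ld\n",
               n, count_comparisons(n), (long)n * n / 2);
    return 0;
}

The counts track n^2/2 as n doubles, so the overall sort stays quadratic
no matter how cheap the splice is.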
--
- Warp