Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>>> Just because the 16-bit operations are performed on pairs of 8-bit
>>> registers that doesn't make it any less of a 16-bit operation.
>
>> OK. I guess we're just disagreeing about whether that ability makes a
>> CPU an "8-bit CPU" or a "16-bit CPU".
>
> IMO "16-bit CPU" means "can perform most calculations on 16-bit values
> with single opcodes".
I'm pretty sure the 8080 at least was not like that.
Hmmm... A quick google shows opcodes like "add hl,bc" and such, as well
as some subtracts, but many more opcodes for 8-bit than 16-bit ops. (And
some extra prefix codes for IX and IY, yes.) The accumulator, for
example, was 8 bits, and (IIRC) you couldn't load a two-byte address into
a two-byte pointer register unless it was an absolute address.
The 8080 had no such opcodes at all, from what I can see (and what I
remember). I probably stopped programming in assembler before the Z80
was widespread enough that you could just rely on it being there instead
of an 8080. :-)
> Could you calculate eg. additions and subtractions using 16-bit values
> with single opcodes?
Nope.
OK, so you're saying a 16-bit CPU has a 16-bit ALU? I'm not sure how
wide the Z80's ALU was. I wouldn't be surprised if "sbc hl,bc" (the
Z80's 16-bit subtract) was calculated with two runs through the ALU.
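That idea of a 16-bit result built from two 8-bit ALU passes can be sketched in Python. This is a hypothetical model of the technique, not a claim about the Z80's actual internals:

```python
def add16_via_8bit_alu(hl, bc):
    """Model a 16-bit add (like the Z80's "add hl,bc") performed as
    two passes through an 8-bit ALU, chaining the carry between them."""
    low = (hl & 0xFF) + (bc & 0xFF)                              # first pass: low bytes
    high = ((hl >> 8) & 0xFF) + ((bc >> 8) & 0xFF) + (low >> 8)  # second pass: high bytes plus carry
    return ((high & 0xFF) << 8) | (low & 0xFF), high >> 8        # (16-bit result, carry-out flag)
```

Either way the programmer sees a single opcode; how many ALU passes it takes is an implementation detail.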
Anyway, as I said, I think at this point we're just discussing what one
wants to call the CPU, without adding any additional information to it.
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Warp wrote:
> andrel <a_l### [at] hotmailcom> wrote:
>> Most people here might have consulted wikipedia or similar before
>> posting.
>
> When I write like this, I get accused of bullying.
I think it's because you write that way more often than most do. Doing
anything once isn't bullying. :-)
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Orchid XP v8 <voi### [at] devnull> wrote:
> I mean, hell, I use electricity every single day. I have *no clue* who
> figured out that it exists though... why is that a problem?
How did the world survive before computers, cellphones and frozen pizza?
--
- Warp
"Warp" <war### [at] tagpovrayorg> wrote in message
news:487a65de@news.povray.org...
> somebody <x### [at] ycom> wrote:
> > "Warp" <war### [at] tagpovrayorg> wrote
> > > Just because the 16-bit operations are performed on pairs of 8-bit
> > > registers that doesn't make it any less of a 16-bit operation. The
> > > crucial thing is that you can perform a 16-bit operation with *one*
> > > single opcode.
> > It doesn't work like that. Otherwise, we should call x86 architecture 64
> > bits, 128 bits or even higher.
> I was talking about *all* the ALU operations, such as addition,
> subtraction, etc.
Then, if there are *some* opcodes that operate on 8 bits, does that make
it an 8-bit CPU?
I think your (unconventional) definition is arbitrary and unworkable.
somebody <x### [at] ycom> wrote:
> I think your (unconventional) definition is arbitrary and unworkable.
So is the "traditional" definition because it says absolutely nothing
about the processor nor the computer architecture. It's inconsistent too
(e.g. it calls the Z80 an "8-bit processor" because it has an 8-bit data
bus, but a Pentium 4 is called a "32-bit processor" even though, AFAIK,
it has a 64-bit data bus; there are also 32-bit processors with even
wider data buses, and they are still called "32-bit").
At least "the natural word size of the processor" (which is really what
I'm talking about) is a much better definition. It's more descriptive and
useful (because it tells what kind of integer arithmetic you can perform
with the processor most efficiently).
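That efficiency point can be illustrated with a toy multi-precision add, a Python sketch not tied to any particular CPU: arithmetic wider than the natural word size costs one ALU pass per word.

```python
def add_wide(a, b, word_bits, n_words):
    """Add two values limb by limb on a machine whose natural word size
    is word_bits; returns (result, carry_out, alu_passes)."""
    mask = (1 << word_bits) - 1
    result = carry = passes = 0
    for i in range(n_words):
        s = ((a >> (i * word_bits)) & mask) + ((b >> (i * word_bits)) & mask) + carry
        result |= (s & mask) << (i * word_bits)  # place this limb of the sum
        carry = s >> word_bits                   # propagate carry to the next limb
        passes += 1
    return result, carry, passes
```

A 16-bit add is one pass on a 16-bit machine but two on an 8-bit one, which is exactly the distinction the "natural word size" definition captures.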
--
- Warp
On 13-Jul-08 23:21, Darren New wrote:
> Warp wrote:
>> andrel <a_l### [at] hotmailcom> wrote:
>>> Most people here might have consulted wikipedia or similar before
>>> posting.
>>
>> When I write like this, I get accused of bullying.
>
> I think it's because you write that way more often than most do. Doing
> anything once isn't bullying. :-)
>
But I did it before. ;)
Explanation for Warp: I knew this was on the edge, or might be perceived
by people as such. I thought it was on the right side of that edge for a
number of reasons (although this feels a bit like explaining a joke,
generally not something I like to do often). If you look at what I wrote
(including the part that you so carelessly snipped without using
ellipses), I did not say he should; I said others would have done
differently and that I applauded him for it. You could read that as
sarcasm or irony or whatever (and you'd be right), but it was followed
by a reference to another post that I knew he'd read this morning. So,
taken out of context, and especially in your abbreviated version, one
could perceive it as bullying, but I knew Andrew would not see it that
way (assuming he would be able to remember who I was over this short
timespan).
BTW, I think the 21:46 post was much worse, but I think I got away with
that too. Anyway, I know from experience that it is quite hard to insult
Andrew (at least for me), and I don't think I ever managed that. Though
sometimes onlookers may have had to hold their breath. Of course my
record with you is not so good, but that is mutual. ;)
Warp wrote:
> So is the "traditional" definition because it says absolutely nothing
> about the processor nor the computer architecture.
I think the real problem is that CPUs have gotten complex enough that
there's no longer a single dimension along which you measure "size."
It's like arguing over whether something is "CISC" or "RISC".
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Darren New <dne### [at] sanrrcom> wrote:
> I think the real problem is that CPUs have gotten complex enough that
> there's no longer a single dimension along which you measure "size."
> It's like arguing over whether something is "CISC" or "RISC".
I don't think there's any confusion about that. In a typical RISC
processor each opcode has exactly the same size, and a fixed number of
bits in the opcode are allocated for specific things. (This really
limits the total number of commands (disregarding their parameters inside
the opcode), making it a truly reduced instruction set.)
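The fixed-field idea can be sketched with a hypothetical 32-bit encoding. The field widths here are made up for illustration (loosely MIPS-like), not taken from any processor in this thread:

```python
def decode(word):
    """Decode a fixed-width 32-bit instruction word whose fields always
    sit at the same bit positions: 6-bit opcode, three 5-bit registers."""
    return {
        "opcode": (word >> 26) & 0x3F,  # bits 31..26
        "rd":     (word >> 21) & 0x1F,  # bits 25..21
        "rs1":    (word >> 16) & 0x1F,  # bits 20..16
        "rs2":    (word >> 11) & 0x1F,  # bits 15..11
    }
```

With only 6 opcode bits you get at most 64 instructions, which is the "reduced" part being described.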
--
- Warp
Warp wrote:
> I don't think there's any confusion about that. In a typical RISC
> processor each opcode has exactly the same size, and a fixed number of
> bits in the opcode are allocated for specific things.
I don't think that's sufficient to make it a RISC processor. That would
mean both the PDP-11 and the X-560 were RISC processors. The X-560 had
built-in instructions for COBOL data types, string manipulation (aka
block moves/compares/character set and case conversions/etc),
instructions that would do things like push a word on a stack whose
pointer was in a particular register and set the condition bits to stack
full/empty/almost full/almost empty, etc. Yet it had 7 bits of opcode,
one "indirect" bit, four bits of register ID, then either three bits of
index register and 17 bits of address, or 20 bits of absolute
(immediate) data. Straightforward enough that I can still remember
how the opcodes were laid out after 20 years of not using it. Pretty
much all the microcoded CISC machines were like that, especially those
expected to be programmed in assembler.
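The layout described above adds up to a fixed 32-bit word either way (7+1+4+3+17 = 7+1+4+20 = 32), which is the point: fixed fields alone don't make a RISC. A Python sketch, assuming the fields are packed MSB-first (the actual ordering isn't stated in the post):

```python
def decode_x560(word, immediate_form=False):
    """Sketch of the described 32-bit layout: 7-bit opcode, 1 indirect
    bit, 4-bit register, then either 3-bit index + 17-bit address,
    or 20 bits of immediate data.  MSB-first packing is an assumption."""
    fields = {
        "opcode":   (word >> 25) & 0x7F,  # bits 31..25
        "indirect": (word >> 24) & 0x1,   # bit 24
        "reg":      (word >> 20) & 0xF,   # bits 23..20
    }
    if immediate_form:
        fields["immediate"] = word & 0xFFFFF       # 20-bit immediate
    else:
        fields["index"]   = (word >> 17) & 0x7     # 3-bit index register
        fields["address"] = word & 0x1FFFF         # 17-bit address
    return fields
```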
You'd have to talk about addressing modes, pipelines, generality of
registers, etc. Sure, the original RISC processors had a very simple
model so they could fit more registers, but I think we've gone past that
now. What you describe might be true of *typical* RISC processors and
untrue of *typical* CISC processors, but I think everything's complex
enough now that you need to measure things on multiple dimensions for
it to make any sense.
This one's actually pretty interesting:
http://arstechnica.com/cpu/4q99/risc-cisc/rvc-1.html
(Which has an interesting statistic that might explain why people don't
code languages for memory efficiency any more: """To help you wrap your
mind around the situation, consider the fact that in 1977, 1MB of DRAM
cost about $5,000. By 1994, that price had dropped to under $6 (in 1977
dollars).""")
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Darren New wrote:
> Almost everyone calls the processor by the number of bits on the data
> bus, fwiw, when talking about this stuff. The 8088 was an 8-bit
> processor and the 8086 was a 16-bit processor even though they were
> 100% software compatible.
I thought they called it the native register size?
Most registers in modern x86 chips are 32 bit or 64 bits, so they're 32
or 64 bit CPUs.
I first heard about the 8088/8086 duo reading something in Intel's
literature, and I'm pretty sure they stated that both were 16 bit chips,
even though the 8088 had the castrated data bus.
Et cetera.
...Chambers