Darren New <dne### [at] sanrrcom> wrote:
> > Just because the 16-bit operations are performed on pairs of 8-bit
> > registers that doesn't make it any less of a 16-bit operation.
> OK. I guess we're just disagreeing about whether that ability makes a
> CPU an "8-bit CPU" or a "16-bit CPU".
IMO "16-bit CPU" means "can perform most calculations on 16-bit values
with single opcodes". There are examples of true 8-bit processors where
you really are limited to 8-bit operations (and if you want to calculate
on larger numbers you have to use at least two opcodes, the second one
consuming the carry flag set by the first).
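That two-opcode carry chain can be sketched in Python. Here `add8` models a single hypothetical 8-bit add-with-carry opcode; the function names are mine for illustration, not real mnemonics:

```python
def add8(a, b, carry_in=0):
    """Model one 8-bit add-with-carry opcode: returns (result, carry_out)."""
    total = a + b + carry_in
    return total & 0xFF, total >> 8

def add16_on_8bit_cpu(x, y):
    """A 16-bit add done as two 8-bit opcodes chained through the carry flag."""
    lo, carry = add8(x & 0xFF, y & 0xFF)    # first opcode: low bytes
    hi, _ = add8(x >> 8, y >> 8, carry)     # second opcode: high bytes + carry
    return (hi << 8) | lo

print(hex(add16_on_8bit_cpu(0x12FF, 0x0001)))  # 0x1300
```

A CPU with true 16-bit registers does the same work in one opcode; the 8-bit machine needs the explicit carry hand-off between the two halves.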
> I mean, I've used mainframes with "string" type opcodes that would
> operate on 1KBytes at a time. That wouldn't really make them a KByte
> CPU. :-)
The question is whether its registers were 1 kB in size or not.
Just because you can fill 1 MB of memory with zeros using a single opcode
doesn't tell you anything about the bit size of the registers or how the
CPU uses them.
> > I understand "8-bit" to mean "has 8-bit registers, and you can only
> > perform an 8-bit operation with a single opcode, because registers can
> > only hold 8 bits of data". Likewise for any other bitsize.
> Well, the 6502 had 16-bit absolute jumps, IIRC. I wouldn't call it a
> 16-bit CPU.
Could you calculate e.g. additions and subtractions on 16-bit values
with single opcodes?
--
- Warp
andrel wrote:
> FYI we are talking about the Apple II without which the whole concept of
> a personal computer that anyone could buy would not have existed. OK
> another computer would have done it a few years later, but that is not
> the point.
OOC, when did this happen? I have a sneaking suspicion it might have
been before I was born...
> It feels a bit like someone working with genetic material everyday and
> then say: 'Mendel, never heard of that guy, or was it a girl?'.
Still not seeing why it's important for such a person to have heard of
Mendel...
I mean, hell, I use electricity every single day. I have *no clue* who
figured out that it exists though... why is that a problem?
> Anyway,
> if we compiled a list of entertaining books about the history of
> computers and surrounding science, is there any chance you'd read them?
Um... I'm going to walk round the Science Museum tomorrow? Does that count?
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>>> Just because the 16-bit operations are performed on pairs of 8-bit
>>> registers that doesn't make it any less of a 16-bit operation.
>
>> OK. I guess we're just disagreeing about whether that ability makes a
>> CPU an "8-bit CPU" or a "16-bit CPU".
>
> IMO "16-bit CPU" means "can perform most calculations on 16-bit values
> with single opcodes".
I'm pretty sure the 8080 at least was not like that.
Hmmm... A quick google shows opcodes like "add hl,bc" and such, as well
as some subtracts, but many more opcodes for 8-bit than 16-bit ops. (And
some extra prefix codes for IX and IY, yes.) The accumulator, for
example, was 8 bits, and (IIRC) you couldn't load a 2-byte address into
a two-byte pointer register unless it was an absolute address.
The 8080 had no such opcodes at all, from what I can see (and what I
remember). I probably stopped programming in assembler before the Z80
was widespread enough you could just rely on it being there instead of
an 8080. :-)
> Could you calculate e.g. additions and subtractions using 16-bit values
> with single opcodes?
Nope.
OK, so you're saying a 16-bit CPU has a 16-bit ALU? I'm not sure how
wide the Z80's ALU was. I wouldn't be surprised if "sub bc,hl" was
calculated with two runs thru the ALU.
Anyway, as I said, I think at this point we're just discussing what one
wants to call the CPU, without adding any additional information to it.
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Warp wrote:
> andrel <a_l### [at] hotmailcom> wrote:
>> Most people here might have consulted wikipedia or similar before
>> posting.
>
> When I write like this, I get accused of bullying.
I think it's because you write that way more often than most do. Doing
anything once isn't bullying. :-)
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Orchid XP v8 <voi### [at] devnull> wrote:
> I mean, hell, I use electricity every single day. I have *no clue* who
> figured out that it exists though... why is that a problem?
How did the world survive before computers, cellphones and frozen pizza?
--
- Warp
"Warp" <war### [at] tagpovrayorg> wrote in message
news:487a65de@news.povray.org...
> somebody <x### [at] ycom> wrote:
> > "Warp" <war### [at] tagpovrayorg> wrote
> > > Just because the 16-bit operations are performed on pairs of 8-bit
> > > registers that doesn't make it any less of a 16-bit operation. The
> > > crucial thing is that you can perform a 16-bit operation with *one*
> > > single opcode.
> > It doesn't work like that. Otherwise, we should call x86 architecture 64
> > bits, 128 bits or even higher.
> I was talking about *all* the ALU operations, such as addition,
> subtraction, etc.
Then, if there are *some* opcodes that operate on 8 bits, does that make
it an 8-bit CPU?
I think your (unconventional) definition is arbitrary and unworkable.
somebody <x### [at] ycom> wrote:
> I think your (unconventional) definition is arbitrary and unworkable.
So is the "traditional" definition, because it says absolutely nothing
about the processor or the computer architecture. It's inconsistent too
(e.g. it calls the Z80 an "8-bit processor" because it has an 8-bit data
bus, but a Pentium 4 is called a "32-bit processor" even though, AFAIK,
it has a 64-bit data bus; there are also 32-bit processors with even
wider data buses, and they are still called "32-bit").
At least "the natural word size of the processor" (which is really what
I'm talking about) is a much better definition. It's more descriptive and
useful (because it tells what kind of integer arithmetic you can perform
with the processor most efficiently).
--
- Warp
On 13-Jul-08 23:21, Darren New wrote:
> Warp wrote:
>> andrel <a_l### [at] hotmailcom> wrote:
>>> Most people here might have consulted wikipedia or similar before
>>> posting.
>>
>> When I write like this, I get accused of bullying.
>
> I think it's because you write that way more often then most do. Doing
> anything once isn't bullying. :-)
>
But I did it before. ;)

Explanation for Warp: I knew this was on the edge, or might be perceived
by people as such. I thought it was on the right side of that edge for a
number of reasons (although this feels a bit like explaining a joke,
generally not something I like to do often). If you look at what I wrote
(including the part that you so carelessly snipped without using
ellipses), I did not say he should; I said others would have done
differently and that I applauded him for it. You could read that as
sarcasm or irony or whatever (and you'd be right), but it was followed
by a reference to another post that I knew he'd read this morning. So,
taken out of context and especially in your abbreviated version, one
could perceive it as bullying, but I knew Andrew would not see it that
way (assuming he would be able to remember who I was over this short
timespan).

BTW I think the 21:46 post was much worse, but I think I got away with
that too. Anyway, I know from experience that it is quite hard to insult
Andrew (at least for me), and I don't think I ever managed it. Though
sometimes onlookers may have had to hold their breath. Of course my
record with you is not so good, but that is mutual. ;)
Warp wrote:
> So is the "traditional" definition because it says absolutely nothing
> about the processor nor the computer architecture.
I think the real problem is that CPUs have gotten complex enough that
there's no longer a single dimension along which you measure "size".
It's like arguing over whether something is "CISC" or "RISC".
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
Darren New <dne### [at] sanrrcom> wrote:
> I think the real problem is that CPUs have gotten complex enough that
> there's no longer a single dimension along which you measure "size".
> It's like arguing over whether something is "CISC" or "RISC".
I don't think there's any confusion about that. In a typical RISC
processor every opcode has exactly the same size, and a fixed number of
bits in the opcode are allocated for specific fields. (This really
limits the total number of instructions (disregarding the parameters
encoded inside the opcode), making it a truly reduced instruction set.)
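That fixed-width layout can be illustrated with a toy encoder/decoder in Python. The field layout below (8-bit opcode plus three 5-bit register fields in a 32-bit word) is invented for illustration, not taken from any real ISA:

```python
# Toy fixed-width 32-bit instruction word:
#   [31..24] opcode  [23..19] rd  [18..14] rs1  [13..9] rs2  [8..0] unused
def encode(opcode, rd, rs1, rs2):
    assert opcode < 256 and max(rd, rs1, rs2) < 32
    return (opcode << 24) | (rd << 19) | (rs1 << 14) | (rs2 << 9)

def decode(word):
    return ((word >> 24) & 0xFF,   # opcode
            (word >> 19) & 0x1F,   # rd
            (word >> 14) & 0x1F,   # rs1
            (word >> 9) & 0x1F)    # rs2

word = encode(0x2A, 3, 1, 2)
print(decode(word))  # (42, 3, 1, 2)
```

Every instruction is exactly 4 bytes, and with only 8 opcode bits the set is bounded at 256 distinct operations; that hard bound is the "reduced" constraint described above.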
--
- Warp