Warp wrote:
> Actually the Z80 has 16-bit registers, 16-bit memory addressing and
> a 16-bit ALU (don't believe wikipedia's lies about calling the Z80 an
> "8-bit processor"). But anyways.
I'm pretty sure that's not correct. Granted, it's been decades since I
did Z80 assembler. It had a 16-bit ALU for the addressing, for the most
part, but the registers were definitely 8-bit registers. Some of the
opcodes would pair them up into an address or some such, but you'd be
taking two registers to do it. Not unlike the "AX = AH:AL" sort of thing
the x86 series does.
Actually, I take that back: didn't the Z80 add some IX and IY registers
the 8080 didn't have, or something? The 8080 was 8-bit; the Z80 might
have had *some* 16-bit registers.
Almost everyone calls the processor the number of bits on the data bus,
fwiw, when talking about this stuff. The 8088 was an 8-bit processor
and the 8086 was a 16-bit processor even tho they were 100% software
compatible.
--
Darren New / San Diego, CA, USA (PST)
Helpful housekeeping hints:
Check your feather pillows for holes
before putting them in the washing machine.
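A quick C sketch of the "AX = AH:AL" pairing Darren describes: one 16-bit
register viewed as two overlapping 8-bit halves. The union layout below
assumes a little-endian host and is only meant to illustrate the idea, not
to model how real x86 hardware stores its registers.

    #include <stdint.h>
    #include <stdio.h>

    typedef union {
        uint16_t ax;                         /* the full 16-bit register      */
        struct { uint8_t al, ah; } h;        /* AL = low byte, AH = high byte */
    } ax_reg;

    int main(void)
    {
        ax_reg r;
        r.h.ah = 0x12;
        r.h.al = 0x34;
        printf("AX = %04X\n", r.ax);         /* prints "AX = 1234" on little-endian */
        return 0;
    }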
Darren New <dne### [at] san rr com> wrote:
> I'm pretty sure that's not correct. Granted, it's been decades since I
> did Z80 assembler. It had a 16-bit ALU for the addressing, for the most
> part, but the registers were definitely 8-bit registers. Some of the
> opcodes would pair them up into an address or some such, but you'd be
> taking two registers to do it. Not unlike the "AX = AH:AL" sort of thing
> the x86 series does.
It's exactly what the x86 does, and that's why the x86 series is a
16-bit processor (up to the 386).
With the z80 you can have 16-bit literals, perform 16-bit ALU operations
(such as additions, subtractions, shifts, etc.), and you can address the
entire memory with one single 16-bit register. I don't understand what's
*not* 16-bit about the z80.
Just because the 16-bit operations are performed on pairs of 8-bit
registers that doesn't make it any less of a 16-bit operation. The
crucial thing is that you can perform a 16-bit operation with *one*
single opcode. You can also load a 16-bit value into such a pair with
one single opcode. This means the opcode is a 16-bit one.
If the z80 is not a 16-bit processor, then neither is the 80286.
> Almost everyone calls the processor the number of bits on the data bus,
> fwiw, when talking about this stuff.
How is that even useful? It might tell something about the speed at which
the processor can handle data, but it doesn't tell anything about the
processor itself.
I understand "8-bit" to mean "has 8-bit registers, and you can only
perform an 8-bit operation with a single opcode, because registers can
only hold 8 bits of data". Likewise for any other bitsize.
Btw, didn't the 386 usually have a 16-bit data bus? The 386 is still
a 32-bit processor, though.
--
- Warp
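A toy C model of the point Warp is making: HL and DE are each a pair of
8-bit registers, yet ADD HL,DE behaves as a single 16-bit operation, with
the carry between the two halves handled inside the instruction. The struct
and helper names below are made up for illustration, not taken from any
real emulator.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { uint8_t hi, lo; } pair;

    static uint16_t as_u16(pair p)      { return (uint16_t)((p.hi << 8) | p.lo); }
    static pair     as_pair(uint16_t v) { return (pair){ (uint8_t)(v >> 8), (uint8_t)v }; }

    /* One conceptual "opcode": ADD HL,DE as a single 16-bit addition.  The
     * carry from the low byte into the high byte happens inside the operation,
     * not in the programmer's code.                                           */
    static pair add_hl_de(pair hl, pair de)
    {
        return as_pair((uint16_t)(as_u16(hl) + as_u16(de)));
    }

    int main(void)
    {
        pair hl = as_pair(0x12FF);
        pair de = as_pair(0x0001);
        hl = add_hl_de(hl, de);               /* carry ripples from L into H */
        printf("HL = %04X\n", as_u16(hl));    /* prints "HL = 1300"          */
        return 0;
    }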
Orchid XP v8 wrote:
> Can somebody find out the typical MIPS and FLOPS for the following:
>
> - Commodore 64 (6510 @ ~1 MHz)
Most instructions took from 2 to 5 clock cycles, so I'd venture to say
that the 6510 ran between .2 and .5 MIPS. Floating point is implemented
in software, and since the processor didn't have a hardware multiply
(and shifts were one bit at a time), it probably took dozens of machine
cycles for addition/subtraction and hundreds for multiplication and
division. I doubt that it ever got much past ten kiloflops, and
probably averaged lower than that.
Regards,
John
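For what it's worth, John's estimate checks out with simple arithmetic:
instructions per second is just the clock rate divided by cycles per
instruction. A throwaway C sketch (the 1 MHz clock is the usual C64
ballpark figure, not an exact value):

    #include <stdio.h>

    /* Clock / cycles-per-instruction for the 2..5 cycle range quoted above. */
    int main(void)
    {
        const double clock_hz = 1.0e6;
        for (int cycles = 2; cycles <= 5; cycles++)
            printf("%d cycles/instruction -> %.2f MIPS\n",
                   cycles, clock_hz / cycles / 1.0e6);
        return 0;
    }

This prints 0.50, 0.33, 0.25, and 0.20 MIPS, matching the .2 to .5 range.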
Orchid XP v8 wrote:
> Right. Suddenly integer-only algorithms seem like a Big Deal. ;-)
There were games on the Apple (6502-based) which had seven versions of
any given sprite graphic so that they wouldn't have to be shifted in
order to display them on the screen. 8-bit game programmers learned
much about squeezing every last drop of performance out of limited speed
and memory.
One of the best C64 games was called Pinball Construction Set, which had
the pinball moving with apparently natural motion and reflecting off of
barriers of any angle. I should have looked at the code to see how they
pulled it off.
Regards,
John
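A rough C sketch of the pre-shifting trick John describes: instead of
bit-shifting the sprite at draw time, store several pre-shifted copies and
pick one by the x-coordinate's sub-byte offset. Real Apple II hi-res packs
7 pixels per byte (hence the seven versions); this toy uses plain 8-pixel
bytes and 8 copies to keep the arithmetic obvious.

    #include <stdint.h>
    #include <stdio.h>

    #define SPRITE_BYTES 2   /* width of one sprite row, in bytes           */
    #define SHIFTS       8   /* one pre-shifted copy per sub-byte x offset  */

    /* Shift one 16-pixel row right by 'shift' bits into a row one byte wider.
     * Pixels are stored MSB-first, as on most bitmapped 8-bit displays.      */
    static void preshift_row(const uint8_t row[SPRITE_BYTES],
                             uint8_t out[SPRITE_BYTES + 1], int shift)
    {
        uint32_t bits = ((uint32_t)row[0] << 16) | ((uint32_t)row[1] << 8);
        bits >>= shift;                      /* move the pixels right        */
        out[0] = (uint8_t)(bits >> 16);
        out[1] = (uint8_t)(bits >> 8);
        out[2] = (uint8_t)bits;
    }

    int main(void)
    {
        const uint8_t sprite_row[SPRITE_BYTES] = { 0xF0, 0x0F };  /* test pattern */
        uint8_t shifted[SHIFTS][SPRITE_BYTES + 1];

        /* Done once, up front; the per-frame draw code then just copies
         * shifted[x % 8] instead of shifting bits on a CPU that can only
         * shift one bit at a time.                                          */
        for (int s = 0; s < SHIFTS; s++)
            preshift_row(sprite_row, shifted[s], s);

        for (int s = 0; s < SHIFTS; s++)
            printf("shift %d: %02X %02X %02X\n",
                   s, shifted[s][0], shifted[s][1], shifted[s][2]);
        return 0;
    }

So yes, the copies cost memory; the trade is a classic space-for-speed one.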
andrel wrote:
> http://e-tradition.net/bytes/6502/6502_instruction_set.html
I had the whole instruction set, and the opcodes, memorized at one point
in life. There were less than 160 of them to remember, so it wasn't hard.
During the 1985-87 time frame, I wrote a word processor, in assembler,
for my C64; I entered most of it as machine code as I went along. It
worked quite well (given that it only had to send Epson-compatible
formatting codes for things like italics, bold face, and so on); I wrote
a short novel using it.
Regards,
John
John VanSickle <evi### [at] hotmail com> wrote:
> One of the best C64 games was called Pinball Construction Set, which had
> the pinball moving with apparently natural motion and reflecting off of
> barriers of any angle. I should have looked at the code to see how they
> pulled it off.
Well, have you seen the best C64 demos?
--
- Warp
John VanSickle wrote:
> There were games on the Apple (6502-based) which had seven versions of
> any given sprite graphic so that they wouldn't have to be shifted in
> order to display them on the screen. 8-bit game programmers learned
> much about squeezing every last drop of performance out of limited speed
> and memory.
Wouldn't having 7 copies of the same data eat more memory?
Did it actually store 7 copies, or just precompute them?
Also... Apple made a 6502-based product??
> One of the best C64 games was called Pinball Construction Set, which had
> the pinball moving with apparently natural motion and reflecting off of
> barriers of any angle. I should have looked at the code to see how they
> pulled it off.
The equations for simple 2D acceleration and reflection are fairly easy,
and probably implementable in fixed-point arithmetic. The *hard* part
about physical simulations is that they usually involve a huge number of
items; *one* marble isn't too big a deal.
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
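For illustration, a minimal C sketch of the kind of fixed-point ball
physics Orchid means: 8.8 fixed point (high byte whole pixels, low byte
fraction), constant gravity, and reflection off axis-aligned walls.
Arbitrary-angle barriers need a dot product on top of this, but the number
format stays the same; all the constants here are invented.

    #include <stdint.h>
    #include <stdio.h>

    typedef int32_t fix;                    /* 8.8 fixed point in a wider int  */
    #define FIX(x)    ((fix)((x) * 256))    /* pixels -> 8.8 fixed point       */
    #define TO_PIX(x) ((int)((x) >> 8))     /* 8.8 fixed point -> whole pixels */

    int main(void)
    {
        fix x  = FIX(10), y  = FIX(0);      /* position                        */
        fix vx = FIX(2),  vy = FIX(0);      /* velocity, in pixels per tick    */
        const fix gravity = FIX(0.25);      /* added to vy on every tick       */
        const fix floor_y = FIX(100), wall_x = FIX(150);

        for (int tick = 0; tick < 120; tick++) {
            vy += gravity;                              /* accelerate          */
            x  += vx;                                   /* integrate position  */
            y  += vy;
            if (y > floor_y) { y = floor_y; vy = -vy; } /* bounce off floor    */
            if (x > wall_x)  { x = wall_x;  vx = -vx; } /* bounce off wall     */
            if (tick % 20 == 0)
                printf("t=%3d  x=%3d  y=%3d\n", tick, TO_PIX(x), TO_PIX(y));
        }
        return 0;
    }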
John VanSickle wrote:
> I had the whole instruction set, and the opcodes, memorized at one point
> in life. There were less than 160 of them to remember, so it wasn't hard.
>
> During the 1985-87 time frame, I wrote a word processor, in assembler,
> for my C64, I entered most of it as machine code as I went along. It
> worked quite well (given that it only had to send Epson-compatible
> formatting codes for things like italics, bold face, and so on); I wrote
> a short novel using it.
Jesus! o_O
I just wrote the assembly on a piece of paper, and when the program was
properly finished, I'd do the "assembling" part by hand. (I.e., open my
dad's book and leaf through the op-code table.)
Eventually I tired of this, and wrote my own assembler.
*cough*
Well OK - wrote my own program to look up op-codes anyway. I typed the
whole op-code table into the computer (remember DATA statements?) and
wrote a program that does a trivial linear search to find the op-code
for the mnemonic I typed in. It was astonishingly slow, actually... hmm...
I didn't know much about algorithms back then. Give me a break! I was
only 11...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
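A small C sketch of the lookup Orchid describes, plus the cheap fix: keep
the table sorted by mnemonic and binary-search it instead of scanning every
entry. The handful of 6502 opcodes below (immediate or implied addressing)
is just enough to show the idea.

    #include <stdio.h>
    #include <string.h>

    typedef struct { const char *mnemonic; unsigned char opcode; } entry;

    /* Kept sorted by mnemonic so binary search works. */
    static const entry table[] = {
        { "ADC", 0x69 }, { "AND", 0x29 }, { "CLC", 0x18 },
        { "LDA", 0xA9 }, { "LDX", 0xA2 }, { "NOP", 0xEA }, { "RTS", 0x60 },
    };
    enum { N = sizeof table / sizeof table[0] };

    /* The "DATA statement" approach: scan every row until one matches. */
    static int linear_lookup(const char *m)
    {
        for (int i = 0; i < N; i++)
            if (strcmp(table[i].mnemonic, m) == 0) return table[i].opcode;
        return -1;
    }

    /* Same table, O(log n) comparisons instead of O(n). */
    static int binary_lookup(const char *m)
    {
        int lo = 0, hi = N - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            int cmp = strcmp(m, table[mid].mnemonic);
            if (cmp == 0) return table[mid].opcode;
            if (cmp < 0) hi = mid - 1; else lo = mid + 1;
        }
        return -1;
    }

    int main(void)
    {
        printf("LDA -> %02X (linear), %02X (binary)\n",
               linear_lookup("LDA"), binary_lookup("LDA"));
        return 0;
    }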
>> - Commodore 64 (6510 @ ~1 MHz)
>
> Most instructions took from 2 to 5 clock cycles, so I'd venture to say
> that the 6510 ran between .2 and .5 MIPS. Floating point is implemented
> in software, and since the processor didn't have a hardware multiply
> (and shifts were one bit at a time), it probably took dozens of machine
> cycles for addition/subtraction and hundreds for multiplication and
> division. I doubt that it ever got much past ten kiloflops, and
> probably averaged lower than that.
I recall there was no multiplication or division (in fact, I have the
listing somewhere for a program that does repeated addition to achieve
multiplication), but I'd forgotten about the lack of arbitrary bit
shifts. (Not that it would matter for integer multiplication...)
Yes, definitely hundreds of instructions for floating-point arithmetic!
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
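For anyone curious, here is a C sketch of the shift-and-add routine a 6502
multiply typically uses instead of repeated addition: it needs only
additions and single-bit shifts (which the 6510 does have, via
ASL/LSR/ROL/ROR), so an 8x8-bit multiply takes at most eight add/shift
steps rather than up to 255 additions. This is the generic technique, not
the particular listing Orchid mentions.

    #include <stdint.h>
    #include <stdio.h>

    /* Shift-and-add: for each set bit of b, add the correspondingly shifted
     * copy of a.  At most 8 iterations for 8-bit operands.                   */
    static uint16_t mul_shift_add(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;
        uint16_t addend = a;
        while (b) {
            if (b & 1)            /* low bit set: add current partial product */
                result += addend;
            addend <<= 1;         /* one-bit shift left  (like ASL/ROL)       */
            b >>= 1;              /* one-bit shift right (like LSR)           */
        }
        return result;
    }

    /* The repeated-addition version: up to 255 iterations, much slower.      */
    static uint16_t mul_repeated_add(uint8_t a, uint8_t b)
    {
        uint16_t result = 0;
        while (b--)
            result += a;
        return result;
    }

    int main(void)
    {
        printf("123*45 = %d (shift-and-add) = %d (repeated add)\n",
               mul_shift_add(123, 45), mul_repeated_add(123, 45));
        return 0;
    }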
"Warp" <war### [at] tag povray org> wrote
> Just because the 16-bit operations are performed on pairs of 8-bit
> registers that doesn't make it any less of a 16-bit operation. The
> crucial thing is that you can perform a 16-bit operation with *one*
> single opcode.
It doesn't work like that. Otherwise, we should call the x86 architecture
64-bit, 128-bit, or even higher.