POV-Ray : Newsgroups : povray.off-topic : Need for speed
Need for speed (Message 9 to 18 of 168)
From: andrel
Subject: Re: Need for speed
Date: 13 Jul 2008 07:43:36
Message: <4879EAA1.7090008@hotmail.com>
On 13-Jul-08 13:10, Orchid XP v8 wrote:
>>> That's true. But assuming we want, say, a normal "double precision" 
>>> floating point number, how many clock cycles would you estimate it 
>>> takes to operation on? A dozen? A hundred?
>>
>>   A lot. I don't believe *any* existing program for those processors
>> does double precision floating point calculations.
> 
> You're probably right about that. (Just moving 8 bytes around has to 
> take a minimum of 8 instructions, before you *do* anything to those 
> bytes.) Just wanted to make it a like-for-like comparison. ;-)
8 cycles to read, 8 to write, and some more to fetch all the read and 
write opcodes, plus some overhead.

> 
>>   As he said, I don't think the term FLOPS even applies if floating point
>> calculations are done in software instead of in hardware.
> 
> Floating-point operations per second. Does it matter *how* it does them? 
> Surely the important point is how many of 'em it can do.
It does. Multiplication is much slower than addition. Some operations' 
timing also depends on the specific bit patterns and overflows 
encountered during processing. Best case and worst case can easily 
differ by a factor of 2 or more for a single operation. So which time 
are you going to use?
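
The data-dependence is easy to see in the classic shift-and-add multiply, 
the kind of routine software floating point on these chips ultimately 
rests on. A minimal C sketch -- the work counter is illustrative, not 
real 6502/Z80 cycle timing:

```c
#include <stdint.h>

/* Shift-and-add multiply, the standard software routine on CPUs
 * without a hardware multiplier.  Every iteration pays for a shift
 * and a bit test; each *set* bit in the multiplier pays for an
 * extra add.  So the running time depends on the bit patterns of
 * the operands, which is why best case and worst case differ. */
uint16_t shift_add_mul(uint8_t a, uint8_t b, int *work)
{
    uint16_t product = 0;
    uint16_t addend = a;
    *work = 0;
    for (int i = 0; i < 8; i++) {
        (*work)++;              /* shift + test, every iteration */
        if (b & 1) {
            product += addend;  /* extra add, only for set bits */
            (*work)++;
        }
        addend <<= 1;
        b >>= 1;
    }
    return product;
}
```

Multiplying by 0x01 does 9 units of work here while multiplying by 0xFF 
does 16 -- same operation, nearly double the time, purely from the data.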
> 
>>>> Both had a variable instruction set that took a variable amount of 
>>>> cycles to execute, and therefore the number of instructions processed 
>>>> depended on the program and especially on the addressing modes used.
>>
>>> I thought this was true for *all* processors?
>>
>>   No. The idea with RISC processors is that each opcode has the same size
>> and takes exactly 1 clock cycle to execute.
> 
> Interesting. I was under the impression that processors such as the 
> Pentium can execute multiple instructions in parallel, and therefore 
> several instructions can reach the "completed" stage in a single given 
> clock cycle, but that each individual instruction still takes multiple 
> cycles from start to finish.
> 
Yes? The number of FLOPS is given by the manufacturer for the optimal 
case of completely filled pipelines, so effectively 1 operation finished 
per cycle per core (for an arbitrary value of "core"). Benchmarks use a 
more typical case, yet the pipelines still contribute a good deal.



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 07:59:48
Message: <4879ee34$1@news.povray.org>
>> You're probably right about that. (Just moving 8 bytes around has to 
>> take a minimum of 8 instructions, before you *do* anything to those 
>> bytes.) Just wanted to make it a like-for-like comparison. ;-)
> 
>   Actually the Z80 has 16-bit registers, 16-bit memory addressing and
> a 16-bit ALU (don't believe wikipedia's lies about calling the Z80 an
> "8-bit processor"). But anyways.

OK. Well I was actually thinking more about the 6502. I don't know much 
about the Z80...

>> Floating-point operations per second. Does it matter *how* it does them? 
>> Surely the important point is how many of 'em it can do.
> 
>   I think it becomes a bit fuzzy if it's done in software, because then
> it becomes a question of how optimized that software is.

Well OK. But you would have thought that various "best case" numbers 
wouldn't differ by huge factors. (Now, if you wanted a *precise* 
number... no, that would be rather arbitrary.)

>> Interesting. I was under the impression that processors such as the 
>> Pentium can execute multiple instructions in parallel, and therefore 
>> several instructions can reach the "completed" stage in a single given 
>> clock cycle, but that each individual instruction still takes multiple 
>> cycles from start to finish.
> 
>   When calculating MIPS it doesn't matter how many clock cycles it takes
> for one opcode to be fetched and passed through the entire pipeline and
> executed.

This is true. I was just making a side-comment that I didn't think that 
*any* processor could complete one entire instruction in just 1 clock 
cycle...

>> I'm only trying to figure out "how many zeros" are in the number, if you 
>> see what I mean...
> 
>> Is it 10 MIPS? 100? 1,000? 1,000,000??
> 
>   The wikipedia article about the subject has some numbers.

Apparently, yes. (I'm damn *sure* I checked that article and didn't find 
any numbers... But they're there now.)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Darren New
Subject: Re: Need for speed
Date: 13 Jul 2008 12:51:14
Message: <487a3282$1@news.povray.org>
Warp wrote:
>   Actually the Z80 has 16-bit registers, 16-bit memory addressing and
> a 16-bit ALU (don't believe wikipedia's lies about calling the Z80 an
> "8-bit processor"). But anyways.

I'm pretty sure that's not correct.  Granted, it's been decades since I 
did Z80 assembler. It had a 16-bit ALU for the addressing, for the most 
part, but the registers were definitely 8-bit registers. Some of the 
opcodes would pair them up into an address or some such, but you'd be 
taking two registers to do it. Not unlike the "AX = AH:AL" sort of thing 
the x86 series does.
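
That pairing is plain arithmetic, and a couple of lines of C show it. 
The variable names are illustrative -- this is the combine/split, not 
real register access:

```c
#include <stdint.h>

/* The "AX = AH:AL" trick: two 8-bit values combined into one
 * 16-bit value, and split back apart.  This is exactly how H and
 * L form HL on the Z80, or B and C form BC. */
uint16_t pair(uint8_t hi, uint8_t lo)
{
    return (uint16_t)(((uint16_t)hi << 8) | lo);
}

void unpair(uint16_t v, uint8_t *hi, uint8_t *lo)
{
    *hi = (uint8_t)(v >> 8);
    *lo = (uint8_t)(v & 0xFF);
}
```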

Actually, I take that back, didn't the Z80 add some IX and IY registers 
the 8080 didn't have or something?  The 8080 was 8 bit; the Z80 might 
have had *some* 16-bit registers.

Almost everyone names the processor after the number of bits on the data 
bus, fwiw, when talking about this stuff.  The 8088 was an 8-bit 
processor and the 8086 was a 16-bit processor, even though they were 
100% software compatible.

-- 
Darren New / San Diego, CA, USA (PST)
  Helpful housekeeping hints:
   Check your feather pillows for holes
    before putting them in the washing machine.



From: Warp
Subject: Re: Need for speed
Date: 13 Jul 2008 13:30:59
Message: <487a3bd3@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> I'm pretty sure that's not correct.  Granted, it's been decades since I 
> did Z80 assembler. It had a 16-bit ALU for the addressing, for the most 
> part, but the registers were definitely 8-bit registers. Some of the 
> opcodes would pair them up into an address or some such, but you'd be 
> taking two registers to do it. Not unlike the "AX = AH:AL" sort of thing 
> the x86 series does.

  It's exactly what the x86 does, and that's why the x86 series is a
16-bit processor (up to the 386).

  With the z80 you can have 16-bit literals, perform 16-bit ALU operations
(such as additions, subtractions, shifts, etc.), you can address the entire
memory with one single 16-bit register, etc. I don't understand what's *not*
16-bit about the z80.

  Just because the 16-bit operations are performed on pairs of 8-bit
registers doesn't make them any less 16-bit operations. The
crucial thing is that you can perform a 16-bit operation with *one*
single opcode. You can also load a 16-bit value into such a pair with
one single opcode. This means the opcode is a 16-bit one.

  If the z80 is not a 16-bit processor, then neither is the 80286.

> Almost everyone calls the processor the number of bits on the data bus, 
> fwiw, when talking about this stuff.

  How is that even useful? It might tell something about the speed at which
the processor can handle data, but it doesn't tell anything about the
processor itself.

  I understand "8-bit" to mean "has 8-bit registers, and you can only
perform an 8-bit operation with a single opcode, because registers can
only hold 8 bits of data". Likewise for any other bitsize.

  Btw, didn't the 386 usually have a 16-bit data bus? The 386 is still
a 32-bit processor, though.

-- 
                                                          - Warp



From: John VanSickle
Subject: Re: Need for speed
Date: 13 Jul 2008 13:38:29
Message: <487a3d95$1@news.povray.org>
Orchid XP v8 wrote:
> Can somebody find out the typical MIPS and FLOPS for the following:
> 
> - Commodore 64 (6510 @ ~1 MHz)

Most instructions took from 2 to 5 clock cycles, so I'd venture to say 
that the 6510 ran between 0.2 and 0.5 MIPS.  Floating point was 
implemented in software, and since the processor didn't have a hardware 
multiply (and shifts were one bit at a time), it probably took dozens of 
machine cycles for addition/subtraction and hundreds for multiplication 
and division.  I doubt that it ever got much past ten kiloflops, and 
probably averaged lower than that.
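
The arithmetic behind that estimate, spelled out (the figures are the 
estimates above, not measurements):

```c
/* Back-of-envelope MIPS: clock rate divided by average cycles per
 * instruction, scaled to millions.  A ~1 MHz 6510 at 2-5 cycles
 * per instruction lands between 0.2 and 0.5 MIPS. */
double mips(double clock_hz, double cycles_per_insn)
{
    return clock_hz / cycles_per_insn / 1e6;
}
```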

Regards,
John



From: John VanSickle
Subject: Re: Need for speed
Date: 13 Jul 2008 13:49:30
Message: <487a402a@news.povray.org>
Orchid XP v8 wrote:

> Right. Suddenly integer-only algorithms seem like a Big Deal. ;-)

There were games on the Apple (6502-based) which had seven versions of 
any given sprite graphic so that they wouldn't have to be shifted in 
order to display them on the screen.  8-bit game programmers learned 
much about squeezing every last drop of performance out of limited speed 
and memory.
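
The pre-shifting trick can be sketched in a few lines of C. Assume a 
1-bit-deep sprite row 8 pixels wide: storing the 7 shifted variants plus 
the unshifted one lets the renderer index by `x & 7` instead of shifting 
at draw time. The layout here is illustrative, not any particular game's:

```c
#include <stdint.h>

/* Pre-compute the 8 horizontal shift positions of one sprite row.
 * out[s] holds the row shifted right by s pixels, spread across
 * two bytes (the pair of screen bytes the sprite straddles). */
void preshift_row(uint8_t row, uint16_t out[8])
{
    for (int s = 0; s < 8; s++)
        out[s] = (uint16_t)((uint16_t)(row << 8) >> s);
}
```

The memory cost is 8 copies instead of 1; the win is that the 6502's 
one-bit-at-a-time shifts never run in the inner drawing loop.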

One of the best C64 games was called Pinball Construction Set, which had 
the pinball moving with apparently natural motion and reflecting off of 
barriers of any angle.  I should have looked at the code to see how they 
pulled it off.

Regards,
John



From: John VanSickle
Subject: Re: Need for speed
Date: 13 Jul 2008 13:54:51
Message: <487a416b@news.povray.org>
andrel wrote:

> http://e-tradition.net/bytes/6502/6502_instruction_set.html

I had the whole instruction set, and the opcodes, memorized at one point 
in life.  There were fewer than 160 of them to remember, so it wasn't hard.

During the 1985-87 time frame, I wrote a word processor, in assembler, 
for my C64; I entered most of it as machine code as I went along.  It 
worked quite well (given that it only had to send Epson-compatible 
formatting codes for things like italics, bold face, and so on); I wrote 
a short novel using it.

Regards,
John



From: Warp
Subject: Re: Need for speed
Date: 13 Jul 2008 13:56:13
Message: <487a41bd@news.povray.org>
John VanSickle <evi### [at] hotmailcom> wrote:
> One of the best C64 games was called Pinball Construction Set, which had 
> the pinball moving with apparently natural motion and reflecting off of 
> barriers of any angle.  I should have looked at the code to see how they 
> pulled it off.

  Well, have you seen the best C64 demos?

-- 
                                                          - Warp



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 14:53:06
Message: <487a4f12$1@news.povray.org>
John VanSickle wrote:

> There were games on the Apple (6502-based) which had seven versions of 
> any given sprite graphic so that they wouldn't have to be shifted in 
> order to display them on the screen.  8-bit game programmers learned 
> much about squeezing every last drop of performance out of limited speed 
> and memory.

Wouldn't having 7 copies of the same data eat more memory?

Did it actually store 7 copies, or just precompute them?

Also... Apple made a 6502-based product??

> One of the best C64 games was called Pinball Construction Set, which had 
> the pinball moving with apparently natural motion and reflecting off of 
> barriers of any angle.  I should have looked at the code to see how they 
> pulled it off.

The equations for simple 2D acceleration and reflection are fairly easy, 
and probably implementable in fixed-point arithmetic. The *hard* part 
about physical simulations is that they usually involve a huge number of 
items; *one* marble isn't too big a deal.
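
A minimal sketch of that claim in 8.8 fixed point (integer pixels in the 
high byte, fraction in the low byte) -- constants and layout are made up 
for illustration, not taken from the actual game:

```c
#include <stdint.h>

#define FP(x) ((int16_t)((x) * 256))   /* convert to 8.8 fixed point */

typedef struct { int16_t pos, vel; } axis_t;

/* One axis, one frame: accelerate, integrate, and reflect off a
 * wall at max_pos.  Only adds, subtracts, and a negation --
 * nothing an 8-bit CPU can't do quickly for a single ball. */
void step(axis_t *a, int16_t gravity, int16_t max_pos)
{
    a->vel += gravity;                              /* accelerate  */
    a->pos += a->vel;                               /* integrate   */
    if (a->pos > max_pos) {                         /* reflect     */
        a->pos = (int16_t)(max_pos - (a->pos - max_pos));
        a->vel = (int16_t)-a->vel;
    }
}
```

Run once per frame per axis; an arbitrary-angle barrier just means 
reflecting the velocity about the barrier normal instead of an axis.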

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 14:55:54
Message: <487a4fba$1@news.povray.org>
John VanSickle wrote:

> I had the whole instruction set, and the opcodes, memorized at one point 
> in life.  There were less than 160 of them to remember, so it wasn't hard.
> 
> During the 1985-87 time frame, I wrote a word processor, in assembler, 
> for my C64; I entered most of it as machine code as I went along.  It 
> worked quite well (given that it only had to send Epson-compatible 
> formatting codes for things like italics, bold face, and so on); I wrote 
> a short novel using it.

Jesus! o_O

I just wrote the assembly on a piece of paper, and when the program was 
properly finished, it'd do the "assembling" part by hand. (I.e., open my 
dad's book and leaf through the op-code table.)

Eventually I tired of this, and wrote my own assembler.

*cough*

Well OK - wrote my own program to look up op-codes anyway. I typed the 
whole op-code table into the computer (remember DATA statements?) and 
wrote a program that did a trivial linear search to find the op-code 
for the mnemonic I typed in. It was astonishingly slow, actually... hmm...
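
For the record, the lookup described is just this (table abbreviated to 
a few real 6502 opcodes; sorting the table and binary-searching, or even 
ordering the DATA by rough frequency, would have fixed the slowness):

```c
#include <string.h>

/* Linear search through a mnemonic -> opcode table, one entry at
 * a time: O(n) per lookup, hence the slowness against the full
 * 6502 table.  Only a handful of entries shown here. */
struct op { const char *mnemonic; int opcode; };

static const struct op table6502[] = {
    { "LDA", 0xA9 },  /* LDA immediate */
    { "STA", 0x8D },  /* STA absolute  */
    { "JMP", 0x4C },  /* JMP absolute  */
    { "RTS", 0x60 },
};

int find_opcode(const char *name)
{
    for (size_t i = 0; i < sizeof table6502 / sizeof table6502[0]; i++)
        if (strcmp(table6502[i].mnemonic, name) == 0)
            return table6502[i].opcode;
    return -1;  /* not found */
}
```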

I didn't know much about algorithms back then. Give me a break! I was 
only 11...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*


Post a reply to this message


Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.