POV-Ray : Newsgroups : povray.off-topic : Need for speed
  Need for speed (Message 1 to 10 of 168)  
From: Orchid XP v8
Subject: Need for speed
Date: 13 Jul 2008 04:52:13
Message: <4879c23d$1@news.povray.org>
Can somebody find out the typical MIPS and FLOPS for the following:

- Commodore 64 (6510 @ ~1 MHz)
- ZX Spectrum (Z80 @ 3.5 MHz)
- Pentium I @ 66 MHz
- Pentium II @ 233 MHz
- Pentium III @ 500 MHz
- Pentium IV @ 4.0 GHz
- Intel Core 2 Quad @ 3.0 GHz

Surprisingly, Google fails to yield any useful data. I can find out the 
*clock speed*, FSB speed, cache size, and numerous other details of 
these CPUs, plus various benchmark results, but not MIPS and FLOPS counts...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: andrel
Subject: Re: Need for speed
Date: 13 Jul 2008 05:22:04
Message: <4879C975.1040409@hotmail.com>
On 13-Jul-08 10:52, Orchid XP v8 wrote:
> Can somebody find out the typical MIPS and FLOPS for the following:
> 
> - Commodore 64 (6510 @ ~1 MHz)
> - ZX Spectrum (Z80 @ 3.5 MHz)
> - Pentium I @ 66 MHz
> - Pentium II @ 233 MHz
> - Pentium III @ 500 MHz
> - Pentium IV @ 4.0 GHz
> - Intel Core 2 Quad @ 3.0 GHz
> 
> Surprisingly, Google fails to yield any useful data. I can find out the 
> *clock speed*, FSB speed, cache size, and numerous other details of 
> these CPUs, plus various benchmark results, but not MIPS and FLOPS 
> counts...
> 

Neither the 6510 nor the Z80 had a floating-point processor; floating 
point was done in software. FLOPS is not really defined for them, 
because the cost depends on which operation is performed.

Both had variable-length instructions that took a variable number of 
cycles to execute, and therefore the number of instructions processed 
depended on the program, and especially on the addressing modes used. 
I'd say that although the MIPS rate is not very well defined, on 
average it may be on the order of 1/3rd of the clock speed for the 
65xx and 1/5th-1/4th for the Z80. Some info at:
http://e-tradition.net/bytes/6502/6502_instruction_set.html
http://wikiti.denglend.net/index.php?title=Z80_Instruction_Set
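As a sketch of that estimate (the instruction mix below is an illustrative assumption, not measured data; real per-instruction cycle counts are in the tables linked above):

```python
# Rough MIPS estimate for a CPU whose instructions take a variable
# number of cycles: weight each cycle cost by how often instructions
# with that cost occur. The mix below is a made-up 6502-like example.

def estimated_mips(clock_hz, instruction_mix):
    """instruction_mix: list of (cycles, fraction) pairs summing to 1."""
    avg_cycles = sum(cycles * fraction for cycles, fraction in instruction_mix)
    return clock_hz / avg_cycles / 1e6

# Hypothetical mix: 2-cycle implied/register ops, 3-4 cycle memory ops,
# and a tail of 6-cycle indexed/indirect accesses.
mix = [(2, 0.35), (3, 0.30), (4, 0.25), (6, 0.10)]
print(estimated_mips(1_000_000, mix))  # ~0.31 MIPS at 1 MHz, i.e. roughly clock/3
```

With a heavier share of slow addressing modes the same formula lands nearer clock/4 or clock/5, which is where the Z80 guess comes from.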



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 05:33:13
Message: <4879cbd9$1@news.povray.org>
andrel wrote:

> Neither the 6510 nor the Z80 had a floating point processor. Floating 
> point was in software.

That's true. But assuming we want, say, a normal "double precision" 
floating-point number, how many clock cycles would you estimate it 
takes to operate on one? A dozen? A hundred?

> Both had a variable instruction set that took a variable amount of 
> cycles to execute and therefore the number of instructions processed 
> depended on the program and especially on the addressing modes used.

I thought this was true for *all* processors?

(Of course, unlike modern processors, cache effects are not present.)

> although the MIPS rate is not very well defined, on average it 
> may be in the order of 1/3rd of the clock speed for 65xx and 1/5th-1/4th 
> for Z80.

Sounds roughly right. (For the 65xx anyway - I have a manual somewhere 
that lists all the opcodes and addressing modes...)

So that gives us, very approximately,

- C64 = 1.0 MHz / 3 = 0.333 MIPS.
- ZX Spectrum = 3.5 MHz / 4 = 0.875 MIPS.

So each is giving us probably a few hundred thousand complete opcodes 
executed every second.

Now, anybody have any clue "how big" the numbers are for less ancient CPUs?

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Warp
Subject: Re: Need for speed
Date: 13 Jul 2008 05:55:44
Message: <4879d11f@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> andrel wrote:

> > Neither the 6510 nor the Z80 had a floating point processor. Floating 
> > point was in software.

> That's true. But assuming we want, say, a normal "double precision" 
> floating point number, how many clock cycles would you estimate it takes 
> to operate on one? A dozen? A hundred?

  A lot. I don't believe *any* existing program for those processors
does double precision floating point calculations.

  As he said, I don't think the term FLOPS even applies if floating point
calculations are done in software instead of in hardware.

> > Both had a variable instruction set that took a variable amount of 
> > cycles to execute and therefore the number of instructions processed 
> > depended on the program and especially on the addressing modes used.

> I thought this was true for *all* processors?

  No. The idea with RISC processors is that each opcode has the same size
and takes exactly 1 clock cycle to execute. (Ok, granted, practical RISC
processors do have some opcodes which take more than 1 clock cycle to
execute because it would simply be a physical impossibility to perform
the operation in 1, but the vast majority are executed in 1.)

> Now, anybody have any clue "how big" the numbers are for less ancient CPUs?

  For Intel processors it depends a lot on the executed program and the
processor. With the 486 you might get something close if you divide the
clock rate by 1.5 (or something like that). With the Pentium and newer
it becomes very complicated (because the newer Pentiums have wacky
things like parallel pipelines and out-of-order execution).

-- 
                                                          - Warp



From: andrel
Subject: Re: Need for speed
Date: 13 Jul 2008 06:04:36
Message: <4879D36C.4020304@hotmail.com>
On 13-Jul-08 11:33, Orchid XP v8 wrote:
> andrel wrote:
> 
>> Neither the 6510 nor the Z80 had a floating point processor. Floating 
>> point was in software.
> 
> That's true. But assuming we want, say, a normal "double precision" 
> floating point number, how many clock cycles would you estimate it takes 
> to operate on one? A dozen? A hundred?

My estimate would be that adding two floating-point numbers would take 
around 50 cycles and multiplication more like 100-150, but I could be 
an order of magnitude wrong.
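Taking those admittedly rough figures at face value gives an order-of-magnitude software-FLOPS number for a ~1 MHz machine (the cycle counts are just the guesses above, nothing more):

```python
# Implied software "FLOPS" of a ~1 MHz 8-bit CPU, using the rough
# guesses above: ~50 cycles per add, ~100-150 (say 125) per multiply.
clock_hz = 1_000_000
cycles_per_add = 50
cycles_per_mul = 125

adds_per_second = clock_hz / cycles_per_add   # ~20,000 additions/s
muls_per_second = clock_hz / cycles_per_mul   # ~8,000 multiplications/s
print(adds_per_second, muls_per_second)
```

So tens of kiloFLOPS at best, against megaFLOPS for even early hardware FPUs.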

> 
>> Both had a variable instruction set that took a variable amount of 
>> cycles to execute and therefore the number of instructions processed 
>> depended on the program and especially on the addressing modes used.
> 
> I thought this was true for *all* processors?

Not for RISC, and only partially for state-of-the-art processors.

> 
> (Of course, unlike modern processors, cache effects are not present.)
> 
>> although the MIPS rate is not very well defined, on average it may be 
>> in the order of 1/3rd of the clock speed for 65xx and 1/5th-1/4th for 
>> Z80.
> 
> Sounds roughly right. (For the 65xx anyway - I have a manual somewhere 
> that lists all the opcodes and addressing modes...)
> 
> So that gives us, very approximately,
> 
> - C64 = 1.0 MHz / 3 = 0.333 MIPS.
> - ZX Spectrum = 3.5 MHz / 4 = 0.875 MIPS.

please don't use 3 significant digits.

> 
> So each is giving us probably a few hundred thousand complete opcodes 
> executed every second.
> 
> Now, anybody have any clue "how big" the numbers are for less ancient CPUs?
> 

http://en.wikipedia.org/wiki/Million_instructions_per_second



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 07:10:06
Message: <4879e28e$1@news.povray.org>
>> That's true. But assuming we want, say, a normal "double precision" 
>> floating point number, how many clock cycles would you estimate it takes 
>> to operate on one? A dozen? A hundred?
> 
>   A lot. I don't believe *any* existing program for those processors
> does double precision floating point calculations.

You're probably right about that. (Just moving 8 bytes around has to 
take a minimum of 8 instructions, before you *do* anything to those 
bytes.) Just wanted to make it a like-for-like comparison. ;-)

>   As he said, I don't think the term FLOPS even applies if floating point
> calculations are done in software instead of in hardware.

Floating-point operations per second. Does it matter *how* it does them? 
Surely the important point is how many of 'em it can do.

>>> Both had a variable instruction set that took a variable amount of 
>>> cycles to execute and therefore the number of instructions processed 
>>> depended on the program and especially on the addressing modes used.
> 
>> I thought this was true for *all* processors?
> 
>   No. The idea with RISC processors is that each opcode has the same size
> and takes exactly 1 clock cycle to execute.

Interesting. I was under the impression that processors such as the 
Pentium can execute multiple instructions in parallel, and therefore 
several instructions can reach the "completed" stage in a single given 
clock cycle, but that each individual instruction still takes multiple 
cycles from start to finish.

>> Now, anybody have any clue "how big" the numbers are for less ancient CPUs?
> 
>   For Intel processors it depends a lot on the executed program and the
> processor. With the 486 you might get something close if you divide the
> clockrate with 1.5 (or something like that). With the Pentium and newer
> it becomes very complicated (because the newer Pentiums have whacky things
> like parallel pipelines and out-of-order execution).

I'm only trying to figure out "how many zeros" are in the number, if you 
see what I mean...

Is it 10 MIPS? 100? 1,000? 1,000,000??

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 07:11:54
Message: <4879e2fa@news.povray.org>
>> That's true. But assuming we want, say, a normal "double precision" 
>> floating point number, how many clock cycles would you estimate it 
>> takes to operate on one? A dozen? A hundred?
> 
> My estimate would be that adding two floating-point numbers would take 
> around 50 cycles and multiplication more like 100-150, but I could be 
> an order of magnitude wrong.

Right. Suddenly integer-only algorithms seem like a Big Deal. ;-)

>> So that gives us, very approximately,
>>
>> - C64 = 1.0 MHz / 3 = 0.333 MIPS.
>> - ZX Spectrum = 3.5 MHz / 4 = 0.875 MIPS.
> 
> please don't use 3 significant digits.

But my very next statement is

>> So each is giving us probably a few hundred thousand complete opcodes 
>> executed every second.

;-)

> http://en.wikipedia.org/wiki/Million_instructions_per_second

Ah. Now this looks like the kind of data I'm after...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Warp
Subject: Re: Need for speed
Date: 13 Jul 2008 07:36:18
Message: <4879e8b1@news.povray.org>
Orchid XP v8 <voi### [at] devnull> wrote:
> >> That's true. But assuming we want, say, a normal "double precision" 
> >> floating point number, how many clock cycles would you estimate it takes 
> >> to operate on one? A dozen? A hundred?
> > 
> >   A lot. I don't believe *any* existing program for those processors
> > does double precision floating point calculations.

> You're probably right about that. (Just moving 8 bytes around has to 
> take a minimum of 8 instructions, before you *do* anything to those 
> bytes.) Just wanted to make it a like-for-like comparison. ;-)

  Actually the Z80 has 16-bit registers, 16-bit memory addressing and
a 16-bit ALU (don't believe wikipedia's lies about calling the Z80 an
"8-bit processor"). But anyways.

> >   As he said, I don't think the term FLOPS even applies if floating point
> > calculations are done in software instead of in hardware.

> Floating-point operations per second. Does it matter *how* it does them? 
> Surely the important point is how many of 'em it can do.

  I think it becomes a bit fuzzy if it's done in software, because then
it becomes a question of how optimized that software is. One
implementation might calculate floating-point operations twice as fast
as another because of better optimizations, but that doesn't tell you
anything about the FLOPS of the *processor architecture*. Calculating
the theoretical and practical maximum software FLOPS for a given
non-FPU processor could be next to impossible.

> >>> Both had a variable instruction set that took a variable amount of 
> >>> cycles to execute and therefore the number of instructions processed 
> >>> depended on the program and especially on the addressing modes used.
> > 
> >> I thought this was true for *all* processors?
> > 
> >   No. The idea with RISC processors is that each opcode has the same size
> > and takes exactly 1 clock cycle to execute.

> Interesting. I was under the impression that processors such as the 
> Pentium can execute multiple instructions in parallel, and therefore 
> several instructions can reach the "completed" stage in a single given 
> clock cycle, but that each individual instruction still takes multiple 
> cycles from start to finish.

  When calculating MIPS it doesn't matter how many clock cycles it takes
for one opcode to be fetched and passed through the entire pipeline and
executed. What matters is the throughput. In other words, as the very
acronym says, how many instructions the processor can execute per second
(not how long it takes for one single instruction to be completely processed).

  The throughput of most RISC processors is, at least theoretically, 1 clock
cycle per instruction (except for the few instructions which require more).
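The latency-versus-throughput point can be sketched numerically (the pipeline depth and instruction count here are illustrative values, not figures for any real CPU):

```python
# Latency vs. throughput in an idealized pipeline: each instruction
# takes `depth` cycles from fetch to retire, but once the pipeline is
# full, one instruction retires per cycle (assuming no stalls).

def total_cycles(n_instructions, depth):
    # The first instruction needs `depth` cycles; every later one
    # retires one cycle after its predecessor.
    return depth + (n_instructions - 1)

depth = 5              # hypothetical 5-stage RISC pipeline
n = 1_000_000
print(total_cycles(n, depth) / n)  # ~1.000004 cycles/instruction on average
```

So even though each individual instruction takes 5 cycles end to end, the sustained rate is effectively one instruction per clock.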

> I'm only trying to figure out "how many zeros" are in the number, if you 
> see what I mean...

> Is it 10 MIPS? 100? 1,000? 1,000,000??

  The wikipedia article about the subject has some numbers.

-- 
                                                          - Warp



From: andrel
Subject: Re: Need for speed
Date: 13 Jul 2008 07:43:36
Message: <4879EAA1.7090008@hotmail.com>
On 13-Jul-08 13:10, Orchid XP v8 wrote:
>>> That's true. But assuming we want, say, a normal "double precision" 
>>> floating point number, how many clock cycles would you estimate it 
>>> takes to operate on one? A dozen? A hundred?
>>
>>   A lot. I don't believe *any* existing program for those processors
>> does double precision floating point calculations.
> 
> You're probably right about that. (Just moving 8 bytes around has to 
> take a minimum of 8 instructions, before you *do* anything to those 
> bytes.) Just wanted to make it a like-for-like comparison. ;-)
8 cycles to read the bytes, 8 to write them, and some more to fetch 
all the read and write opcodes, plus some overhead.
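To put an illustrative number on that, using the standard 6502 cycle counts (LDA absolute and STA absolute are 4 cycles each, opcode and operand fetches included), even a fully unrolled copy of an 8-byte value costs dozens of cycles before any arithmetic happens:

```python
# Cycle cost of copying 8 bytes on a 6502 using an unrolled sequence of
# LDA absolute / STA absolute pairs (4 cycles each, standard 6502 timing).
LDA_ABS_CYCLES = 4
STA_ABS_CYCLES = 4
n_bytes = 8

copy_cycles = n_bytes * (LDA_ABS_CYCLES + STA_ABS_CYCLES)
print(copy_cycles)  # 64 cycles just to move a "double" around in memory
```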

> 
>>   As he said, I don't think the term FLOPS even applies if floating point
>> calculations are done in software instead of in hardware.
> 
> Floating-point operations per second. Does it matter *how* it does them? 
> Surely the important point is how many of 'em it can do.
It does. Multiplication is much slower than addition. Some operations' 
timing also depends on the specific bit patterns and overflows 
encountered during processing. Best case and worst case could easily 
differ by a factor of 2 or more for a single operation. So which time 
are you going to use?
> 
>>>> Both had a variable instruction set that took a variable amount of 
>>>> cycles to execute and therefore the number of instructions processed 
>>>> depended on the program and especially on the addressing modes used.
>>
>>> I thought this was true for *all* processors?
>>
>>   No. The idea with RISC processors is that each opcode has the same size
>> and takes exactly 1 clock cycle to execute.
> 
> Interesting. I was under the impression that processors such as the 
> Pentium can execute multiple instructions in parallel, and therefore 
> several instructions can reach the "completed" stage in a single given 
> clock cycle, but that each individual instruction still takes multiple 
> cycles from start to finish.
> 
Yes. The FLOPS figures are given by the manufacturer for the optimal 
case of completely filled pipelines, so effectively 1 operation 
finished per cycle per core (for an arbitrary value of "core"). 
Benchmarks use a more typical workload, yet the pipelines still make a 
good contribution.



From: Orchid XP v8
Subject: Re: Need for speed
Date: 13 Jul 2008 07:59:48
Message: <4879ee34$1@news.povray.org>
>> You're probably right about that. (Just moving 8 bytes around has to 
>> take a minimum of 8 instructions, before you *do* anything to those 
>> bytes.) Just wanted to make it a like-for-like comparison. ;-)
> 
>   Actually the Z80 has 16-bit registers, 16-bit memory addressing and
> a 16-bit ALU (don't believe wikipedia's lies about calling the Z80 an
> "8-bit processor"). But anyways.

OK. Well I was actually thinking more about the 6502. I don't know much 
about the Z80...

>> Floating-point operations per second. Does it matter *how* it does them? 
>> Surely the important point is how many of 'em it can do.
> 
>   I think it becomes a bit fuzzy if it's done in software, because then
> it becomes a question of how optimized that software is.

Well OK. But you would have thought that various "best case" numbers 
wouldn't differ by huge factors. (Now, if you wanted a *precise* 
number... no, that would be rather arbitrary.)

>> Interesting. I was under the impression that processors such as the 
>> Pentium can execute multiple instructions in parallel, and therefore 
>> several instructions can reach the "completed" stage in a single given 
>> clock cycle, but that each individual instruction still takes multiple 
>> cycles from start to finish.
> 
>   When calculating MIPS it doesn't matter how many clock cycles it takes
> for one opcode to be fetched and passed through the entire pipeline and
> executed.

This is true. I was just making a side-comment that I didn't think that 
*any* processor could complete one entire instruction in just 1 clock 
cycle...

>> I'm only trying to figure out "how many zeros" are in the number, if you 
>> see what I mean...
> 
>> Is it 10 MIPS? 100? 1,000? 1,000,000??
> 
>   The wikipedia article about the subject has some numbers.

Apparently, yes. (I'm damn *sure* I checked that article and didn't find 
any numbers... But they're there now.)

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.