  Re: Mac Plus vs AMD Dual Core  
From: Darren New
Date: 26 Oct 2007 00:46:16
Message: <47217118$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>>>   Integral types which support unlimited precision can *not* be as efficient
>>> as CPU-register-sized integers
> 
>> Sure they can. I've used many computers where the unbounded arithmetic 
>> was just as fast as the bounded arithmetic when you stayed within bounds.
> 
>   The only way for that is that the CPU has explicit support. 

Right. Which is why, a few months ago, I was arguing that the last few 
generations of processors have been targeted at C code, rather than C 
code being particularly well suited to general-purpose processors.

Processors already do this for floating point: the hardware flags the 
overflow by itself. I'm not sure why they no longer do it for integers, 
other than that C doesn't support it, so it never gets put in.

Of course, if your register size is small enough that usably-sized ints 
already take multi-instruction operations (e.g., you're on a 6502 CPU), 
you might as well go all the way to unlimited precision, since it's all 
the same carry-propagation work anyway.
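Something like this, as a sketch in C of the 6502 situation: once a 
number is wider than a register, every add is already a carry chain 
(ADC on the 6502), and letting the chain run over however many limbs 
the number happens to need is the same loop with a variable length.

#include <stddef.h>
#include <stdint.h>

/* Add two little-endian multi-limb numbers of the same length.
   Returns the final carry so the caller can grow the result by one
   limb if needed; that is the "unbounded" part. */
uint32_t multiword_add(uint32_t *sum, const uint32_t *a,
                       const uint32_t *b, size_t limbs)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < limbs; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;
        sum[i] = (uint32_t)t;
        carry  = (uint32_t)(t >> 32);
    }
    return carry;
}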

>   AFAIK x86 processors do not have such a feature. The only thing an integer
> overflow will do is to set a flag, which can then be checked with additional
> opcodes, at the cost of additional clock cycles.

Right. But you hadn't qualified your statement to x86 processors.
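For what it's worth, here's what those additional opcodes come out to in 
practice. A sketch using GCC/Clang's __builtin_add_overflow (a compiler 
extension, not standard C), which on x86 compiles to essentially an add 
followed by a jo:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/* The add sets the overflow flag for free; reacting to it is the
   extra conditional branch being described above. */
static int checked_add(int a, int b)
{
    int result;
    if (__builtin_add_overflow(a, b, &result)) {  /* add + jo, in effect */
        fprintf(stderr, "integer overflow: %d + %d\n", a, b);
        abort();
    }
    return result;
}

int main(void)
{
    printf("%d\n", checked_add(1, 2));
    printf("%d\n", checked_add(INT_MAX, 1));  /* aborts here */
    return 0;
}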

>   This would require even more CPU support than simply checking for a
> register-sized integer overflow, as you would have to check if the value
> goes out of arbitrarily-set boundaries (instead of just overflowing or
> underflowing over). In other words, you would have to be able to tell
> the CPU, at no clock cycle cost, "if this register ever gets a value
> smaller than this or larger than this, throw an exception". I have hard
> time believing any existing CPU has such a feature.

The Burroughs B-series, for example. Since the CPU also checked array 
bounds, it had that sort of math built in. I'm not sure whether it's 
still "existing", but IBM had versions of this in production not too 
long ago. iAPX-yadda-mumble, or something like that.

Modern processors running Ada do add an explicit check, yes. That can 
be less efficient, unless you declare the variable as a modular type, 
which then wraps around the way C's unsigned integers do.
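In C terms, the difference comes down to something like this (just a 
sketch; the helper names and the 0 .. 359 range are made up for 
illustration):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Roughly what "type Heading is range 0 .. 359;" costs you:
   a compare and branch on every assignment. */
static int32_t heading_assign(int32_t value)
{
    if (value < 0 || value > 359) {
        fprintf(stderr, "Constraint_Error: %d not in 0 .. 359\n",
                (int)value);
        abort();   /* Ada would raise an exception here */
    }
    return value;
}

/* Roughly what "type Counter is mod 2**32;" costs you: nothing.
   It just wraps, like C's unsigned arithmetic. */
static uint32_t counter_add(uint32_t c, uint32_t n)
{
    return c + n;
}

int main(void)
{
    printf("%d\n", (int)heading_assign(270));
    printf("%u\n", (unsigned)counter_add(0xFFFFFFFFu, 1));  /* wraps to 0 */
    printf("%d\n", (int)heading_assign(400));               /* traps */
    return 0;
}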

On the other hand, if I'm writing the code for a weapon, I'd rather have 
it throw an exception than suddenly negate the amount it decides it 
needs to rotate to fire towards the enemy. ;-)

>   With normal CPUs the compiler would have to put a comparison and a
> conditional jump after each single operation done to the variable, at the
> cost of additional clock cycles. 

I expect a compiler can optimize a lot of this away, but yes.  You can 
probably go a long way by hoisting tighter bounds checks to the 
beginning of a routine, for example, to prove that nothing in the 
middle of a formula can go out of range.
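Something along these lines, as a sketch (the 1000 bounds and the 
polynomial are made up for the example): check the inputs once at the 
top, and then every operation inside the formula is provably in range 
and needs no per-operation check.

#include <assert.h>
#include <stdint.h>

/* Evaluate a*x*x + b*x + c with one up-front range check instead of a
   check after every operation.  With all inputs in [-1000, 1000], the
   largest possible intermediate is about 1.001e9, comfortably inside
   int32_t, so nothing in the formula itself can overflow. */
int32_t poly_eval(int32_t a, int32_t b, int32_t c, int32_t x)
{
    assert(a >= -1000 && a <= 1000);
    assert(b >= -1000 && b <= 1000);
    assert(c >= -1000 && c <= 1000);
    assert(x >= -1000 && x <= 1000);

    return a * x * x + b * x + c;
}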

>   Unless you give me an understandable and logical explanation of how these
> kinds of checks could be implemented without making the code slower, I just
> cannot believe it's possible.

As I said in the previous post, I've used a number of processors where 
that's the case.

>   Sure, there are some other CPUs which do, but in practical terms that
> doesn't help the average programmer too much.

I wasn't disputing the assertion that x86 architectures need extra 
instructions for this. I was disputing the claim that it's impossible, 
in general and by definition, for multi-precision math traps to have no 
overhead.

I would also dispute that the default should be "do the wrong thing 
fast". I would argue that the *default* should be "do the right thing, 
and let me tell you where it's too slow."  Premature optimization and 
all that, you know.

C isn't the way it is because that design is efficient. C is the way it 
is because the compiler had to fit in 12K. The reason C has ++ and -- is 
that they mapped onto the auto-increment addressing hardware of the 
machines C was first implemented on, not that they're particularly good 
for programming.

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?

