Newsgroups: povray.off-topic
Subject: Re: Mac Plus vs AMD Dual Core
From: Darren New
Date: 24 Oct 2007 20:47:08
Message: <471fe78c@news.povray.org>
Warp wrote:
>   Having automatic unbounded arithmetic types is fine as long as you don't
> care about efficiency.

Ah yes. The "it's better to be fast than correct" philosophy. It serves 
Microsoft so well, after all. ;-)

>   Integral types which support unlimited precision can *not* be as efficient
> as CPU-register-sized integers

Sure they can. I've used many computers where the unbounded arithmetic 
was just as fast as the bounded arithmetic when you stayed within bounds.
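
Think of the usual Lisp/Smalltalk tagged-fixnum trick. A hand-wavy C 
sketch of the idea (names made up, signed-shift pedantry ignored) where 
in-range arithmetic is still basically one machine add:

   #include <limits.h>

   /* Low bit 1 marks a small integer packed right into the word;
    * low bit 0 would mark a pointer to a heap bignum (slow path,
    * stubbed out here).  While values stay in range, an add is a
    * couple of shifts, one machine add and a compare -- about as
    * cheap as the bounded version. */
   typedef long value;

   #define FIXNUM_MAX   (LONG_MAX >> 1)
   #define FIXNUM_MIN   (LONG_MIN >> 1)
   #define IS_FIXNUM(v) ((v) & 1)
   #define FIX(n)       (((value)(n) << 1) | 1)
   #define UNFIX(v)     ((v) >> 1)

   extern value bignum_add(value a, value b);  /* imaginary slow path */

   value add(value a, value b)
   {
       if (IS_FIXNUM(a) && IS_FIXNUM(b)) {
           long r = UNFIX(a) + UNFIX(b);
           if (r >= FIXNUM_MIN && r <= FIXNUM_MAX)
               return FIX(r);                   /* fast, common case */
       }
       return bignum_add(a, b);
   }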

> unless you explicitly tell to the compiler
> that "this variable will always stay within these limits", in which case
> you are already stuck with the same limitation as the C integral variables.

Nope. Ada, for example, lets you specify range constraints on variables 
and then actually *enforces* them, rather than just failing in bizarre 
ways like leaving you with the sum of two positives being negative. And 
Ada isn't exactly known for being inefficient.
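
That bizarre failure, in C terms (signed overflow is formally undefined 
behaviour, but on typical two's-complement hardware it just quietly 
wraps around):

   #include <limits.h>
   #include <stdio.h>

   int main(void)
   {
       int a = INT_MAX, b = 1;
       /* Two positive numbers; the "sum" prints as a large negative
        * one instead of producing an error. */
       printf("%d + %d = %d\n", a, b, a + b);
       return 0;
   }

An Ada range-constrained integer would raise Constraint_Error at that 
point instead of carrying on with garbage.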

> If you don't tell the compiler those limits and it has to make sure that
> in case of overflow it switches to unlimited precision, then those integral
> types simply cannot be as fast as the bounded ones. It's just physically
> impossible.  If nothing else, the compiler will have to add an overflow
> check after each single operation done with those integers, thus adding
> clock cycles and code size (filling code caches faster).

Errr, no. I've used plenty of CPUs that provide traps on overflow, so 
the normal case runs at full speed; when a value does overflow, it traps 
out and switches to the slower bignums rather than giving a wrong answer.
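
On machines without such a trap, the software-checked version looks 
roughly like this (GCC/Clang builtin; promote_and_add is a made-up 
stand-in for the arbitrary-precision slow path, which in real life would 
hand back a bignum object rather than a long):

   /* One add plus one almost-never-taken, easily-predicted branch.
    * A CPU that traps on overflow gets rid of even the branch. */
   extern long promote_and_add(long a, long b);  /* imaginary slow path */

   long add_with_fallback(long a, long b)
   {
       long r;
       if (!__builtin_add_overflow(a, b, &r))
           return r;                  /* normal case: stays fast */
       return promote_and_add(a, b);  /* overflow: switch to bignums */
   }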

>   I'm also sure that being forced to prepare for unlimited precision math
> makes many compiler optimizations impossible (which would be possible with
> register-sized integers).

Maybe. I would think this is more a CPU design issue than anything. If 
the trap carried enough information, I bet you could handle it in the 
compiler.

>   (Also, in the general case a compiler cannot deduce by examining a piece
> of code that a variable will never have a value outside certain boundaries.
> I'm certain this kind of check would be equivalent to the halting problem.)

Yah, probably. On the other hand, you expect humans to be able to do 
this? If the compiler can't figure it out, neither can the programmer.

-- 
   Darren New / San Diego, CA, USA (PST)
     Remember the good old days, when we
     used to complain about cryptography
     being export-restricted?

