POV-Ray : Newsgroups : povray.off-topic : Interesting performance paper
  Re: Interesting performance paper  
From: Invisible
Date: 14 Jun 2010 05:24:52
Message: <4c15f564$1@news.povray.org>
Darren New wrote:
> Kind of like what Warp talks about w.r.t. cache lines, except with VM 
> pages.

It's all just different levels of the memory hierarchy.

It used to be that you had a block of RAM, all accessible in O(1) time. 
Now you have virtual memory (which pretends to be RAM, but is orders of 
magnitude slower), maybe three levels of cache (which pretend to be 
RAM, but are orders of magnitude faster - except if the data lives in 
another core's cache, in which case it's *slower* than RAM again due to 
coherence issues), and you could regard the CPU registers themselves as 
the ultimate level-0 "cache".

The sad fact is, processors are now 1000x faster than they used to be, 
but RAM is still roughly the same speed. And that means that if you 
don't want your expensive new CPU to just spend 1000x more cycles doing 
nothing, you need to have this memory hierarchy. And *that* means that 
the time complexity of your algorithm is now *irrelevant*. The *only* 
thing that matters now is cache behaviour.

As the paper says, a stupid O(n^2) algorithm could be *faster* than a 
sophisticated algorithm with O(log n) or even O(1) time-complexity, all 
due to cache effects that shouldn't be there in the first place. Decades 
of research and development ruined. Now we have no way to predict 
performance, because it depends on cache behaviour, which intimately 
depends on the specifics of a particular implementation, not on the 
abstract properties of the algorithm used.

Possibly the worst part is that, unless you program in some horribly 
low-level language, you have no control over any of this.


