>> As the paper says, a stupid O(n^2) algorithm could be *faster* than a
>> sophisticated algorithm with O(log n) or even O(1) time-complexity,
>> all due to cache effects that shouldn't be there in the first place.
>> Decades of research and development ruined.
>
> You cannot just take a mathematical paper and expect it to
> work as described on a computer; you never have been able to.
Back in the days of the 8-bit home computer, memory access really was a
constant-time operation that always took 1 clock cycle. Unfortunately it
seems that the larger a memory bank is, the slower it is to access. What
this means is that a 2.2 GHz CPU only behaves like a 2.2 GHz CPU while
your working set stays tiny. Try to random-access a large chunk of data
and the program suddenly slows down to the speed of the RAM bus.
I guess the only consolation is that my CPU's L3 cache is probably
bigger than the entire address space of the 6502...
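
Just to put rough numbers on that, here is a little C sketch of my own
(nothing from the paper - the array size, the xorshift shuffle and the
POSIX clock_gettime() timer are all just arbitrary choices for the
demo). It sums the same array twice, once front to back and once in a
shuffled order. Once the array is far bigger than the L3 cache, the
shuffled pass is typically many times slower even though both loops do
exactly the same amount of arithmetic - and that same mechanism is what
can let a "stupid" sequential algorithm beat a clever pointer-chasing
one, as in the quote above.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Tiny xorshift64 PRNG, just so the shuffle doesn't depend on the
 * quality of rand() on any particular platform. */
static unsigned long long rng_state = 88172645463325252ULL;
static unsigned long long xorshift64(void)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

/* Wall-clock seconds via POSIX clock_gettime(). */
static double seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

int main(void)
{
    const size_t n = (size_t)1 << 25;  /* 32 Mi ints = 128 MiB of data,
                                          bigger than a typical L3 */
    int    *data  = malloc(n * sizeof *data);
    size_t *order = malloc(n * sizeof *order);
    if (!data || !order)
        return 1;

    for (size_t i = 0; i < n; i++) {
        data[i]  = (int)i;
        order[i] = i;
    }

    /* Fisher-Yates shuffle of the index array, to force random access. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j   = (size_t)(xorshift64() % (i + 1));
        size_t tmp = order[i];
        order[i]   = order[j];
        order[j]   = tmp;
    }

    long long sum = 0;

    double t0 = seconds();
    for (size_t i = 0; i < n; i++)     /* sequential: the prefetcher wins  */
        sum += data[i];
    double t1 = seconds();
    for (size_t i = 0; i < n; i++)     /* shuffled: ~one cache miss per int */
        sum += data[order[i]];
    double t2 = seconds();

    printf("sequential %.3f s, shuffled %.3f s (sum %lld)\n",
           t1 - t0, t2 - t1, sum);
    free(data);
    free(order);
    return 0;
}

Compile with something like "cc -O2 cachebench.c" (older glibc needs
-lrt for clock_gettime) and compare the two times; shrink n if 384 MB
of allocations is too much for your machine.
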
> Sure, but it's always been like that and always will, there's nothing
> you can do about it.
It wasn't always like that - but, as you say, there's nothing you can do
about it.
>> Possibly the worst part is that, unless you program in some horribly
>> low-level language, you have no control over any of this.
>
> If your language/compiler is clever enough to switch around the order
> and storage of your code, then it should be clever enough to optimise
> for the memory structure of the machine.
Sure. But if it isn't, you can't do anything about it. You can't rewrite
the compiler, after all...
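
Since we're on the subject, here's the kind of thing I mean - a toy C
example of my own, not something from this thread. It's the same matrix
sum written with the loops in both orders. C stores the matrix row by
row, so the first version walks memory sequentially while the second
jumps a whole row's worth of bytes on every access and misses the cache
almost every time. A compiler is perfectly entitled to interchange
those loops, and some do at their highest optimisation levels, but you
can't rely on it - so in practice you end up writing the cache-friendly
order by hand.

#include <stdio.h>

#define DIM 4096                    /* 4096 x 4096 doubles = 128 MiB */

static double matrix[DIM][DIM];

/* Row order: the inner loop walks along one row, which is contiguous
 * in memory, so nearly every access is a cache hit. */
static double sum_row_order(void)
{
    double s = 0.0;
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            s += matrix[i][j];
    return s;
}

/* Column order: the inner loop jumps DIM * sizeof(double) bytes per
 * step, so nearly every access is a cache miss. Same arithmetic, same
 * result, very different running time. */
static double sum_column_order(void)
{
    double s = 0.0;
    for (int j = 0; j < DIM; j++)
        for (int i = 0; i < DIM; i++)
            s += matrix[i][j];
    return s;
}

int main(void)
{
    for (int i = 0; i < DIM; i++)
        for (int j = 0; j < DIM; j++)
            matrix[i][j] = 1.0;

    printf("row order:    %f\n", sum_row_order());
    printf("column order: %f\n", sum_column_order());
    return 0;
}

Build it at around -O2 and time the two functions separately (or reuse
the clock_gettime() timer from the sketch above); the row-order version
is usually several times faster even though both return exactly the
same number.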