> As the paper says, a stupid O(n^2) algorithm could be *faster* than a
> sophisticated algorithm with O(log n) or even O(1) time-complexity, all
> due to cache effects that shouldn't be there in the first place. Decades
> of research and development ruined.
What you are referring to are mathematical papers; they do not consider
actual implementation on a machine, nor are they meant to. Computers have
always had different amounts of memory with different speeds and access
patterns, and there are plenty of papers that do take this into account.
You cannot just take a mathematical paper and expect it to work as
described on a real computer; you never have been able to.
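To make the point concrete, here is a sketch of my own (not from any of
the papers being discussed): both loops below are O(n) and compute the
same sum, but the array version walks contiguous memory while the list
version chases pointers through nodes laid out in shuffled order. On
typical hardware the second runs several times slower, purely because of
the cache.

/* Sketch: two O(n) traversals over the same values.  The contiguous
 * array is cache-friendly; the linked list, with its nodes deliberately
 * shuffled in memory, is cache-hostile.  Sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000

struct node { long long value; struct node *next; };

int main(void)
{
    long long *arr = malloc(N * sizeof *arr);
    struct node *pool = malloc(N * sizeof *pool);
    struct node **perm = malloc(N * sizeof *perm);
    long long sum;
    clock_t t;
    int i;

    if (!arr || !pool || !perm) return 1;

    for (i = 0; i < N; i++) {
        arr[i] = i;
        perm[i] = &pool[i];
    }
    /* Shuffle the node order so that following ->next jumps around
     * memory instead of walking it sequentially. */
    srand(1);
    for (i = N - 1; i > 0; i--) {
        int j = (int)(((double)rand() / ((double)RAND_MAX + 1.0)) * (i + 1));
        struct node *tmp = perm[i]; perm[i] = perm[j]; perm[j] = tmp;
    }
    for (i = 0; i < N; i++) {
        perm[i]->value = i;
        perm[i]->next = (i + 1 < N) ? perm[i + 1] : NULL;
    }

    /* Sequential pass over a contiguous array. */
    t = clock();
    for (sum = 0, i = 0; i < N; i++)
        sum += arr[i];
    printf("array: sum=%lld time=%.3fs\n", sum,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    /* Same work, but every step is a dependent load at a random address. */
    t = clock();
    sum = 0;
    for (struct node *p = perm[0]; p != NULL; p = p->next)
        sum += p->value;
    printf("list:  sum=%lld time=%.3fs\n", sum,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    free(arr); free(pool); free(perm);
    return 0;
}

Both sums come out identical; only the access pattern differs, and that
is where the time goes.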
> Now we have no way to predict performance, because it depends on cache
> behaviour, which intimately depends on the specifics of a particular
> implementation, not on the abstract properties of the algorithm used.
Sure, but it has always been like that and always will be; there's
nothing you can do about it.
> Possibly the worst part is that, unless you program in some horribly
> low-level language, you have no control over any of this.
If your language/compiler is clever enough to rearrange the order and
storage of your code, then it should be clever enough to optimise for
the memory hierarchy of the machine.
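For what it's worth, here is the kind of rewrite being talked about (my
sketch, not taken from any particular compiler): both loops sum the same
row-major matrix, but the second walks it column by column and so touches
a new cache line on nearly every access. Loop interchange, which
optimising compilers can perform, turns the slow form into the fast one.

/* Sketch: the same sum computed in two traversal orders over a
 * row-major matrix.  Swapping the i and j loops (loop interchange) is
 * exactly the kind of memory-aware transformation a compiler can apply.
 * Sizes are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ROWS 4096
#define COLS 4096

int main(void)
{
    double *m = malloc((size_t)ROWS * COLS * sizeof *m);
    double sum;
    clock_t t;
    int i, j;

    if (!m) return 1;
    for (i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            m[(size_t)i * COLS + j] = 1.0;

    /* Row-major order: consecutive accesses hit consecutive addresses,
     * so each cache line is used in full before being evicted. */
    t = clock();
    for (sum = 0, i = 0; i < ROWS; i++)
        for (j = 0; j < COLS; j++)
            sum += m[(size_t)i * COLS + j];
    printf("row-major:    sum=%.0f time=%.3fs\n", sum,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    /* Column-major order: each access jumps a whole row ahead, so
     * nearly every access misses the cache. */
    t = clock();
    for (sum = 0, j = 0; j < COLS; j++)
        for (i = 0; i < ROWS; i++)
            sum += m[(size_t)i * COLS + j];
    printf("column-major: sum=%.0f time=%.3fs\n", sum,
           (double)(clock() - t) / CLOCKS_PER_SEC);

    free(m);
    return 0;
}

Compile without aggressive optimisation to see the gap; at higher levels
a compiler may interchange the loops itself, which rather proves the point.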