Invisible wrote:
> Of course, unless you're programming in assembly, this isn't much help.
Or if your compiler (or library) is smart enough. For example, C's memset(),
which just assigns a specific value to a range of locations (say, to zero out
an array), can often be coded to use instructions that say "I'm going to
clobber this whole cache line, so don't bother reading it first." I guess that
counts as writing in assembly if you consider the authors of the routine, but
you don't need to write any assembly yourself to use it.
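
Something like this minimal sketch, just to show the call from the C side
(whether it actually gets lowered to cache-line-clobbering or streaming
stores depends entirely on your libc and compiler):

    #include <stdio.h>
    #include <string.h>

    #define N 4096

    int main(void)
    {
        static int xyz[N];

        /* One call zeroes the whole array.  A good memset implementation
           may use wide or non-temporal stores that overwrite entire cache
           lines without reading them in first -- no assembly needed on
           the caller's side. */
        memset(xyz, 0, sizeof xyz);

        printf("%d\n", xyz[0]);   /* use the array so the call isn't elided */
        return 0;
    }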
> I don't know much about how CPU caches
> work, but I imagine having the data scattered over several memory pages
> isn't very cache-friendly.
I don't think it's a matter of being spread over several pages, but a
matter of having small pieces (much smaller than a cache line) being
randomly accessed.
I.e., when you read xyz[37] as an integer in C (that's the 38th element
of an array of integers), the cache may pull in
xyz[35], xyz[36], ..., xyz[39] in one memory fetch. If you're walking
sequentially through the array, this is helpful. If you're jumping all
over, it isn't helpful (although maybe not harmful, depending on what
the access just evicted). If you're using linked lists, even going
sequentially will make the cache unhelpful, because node 37 isn't stored
next to node 38 in memory.
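
To make that concrete, here's a rough sketch (the sum_array/sum_list
helpers are mine, purely for illustration): the array walk gets several
elements per cache-line fill, while the list walk chases pointers to
individually malloc'd nodes and can miss the cache on every hop.

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next;
    };

    /* Sequential walk over a plain array: xyz[37] really is next to
       xyz[38], so one cache-line fill serves several iterations. */
    long sum_array(const int *xyz, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; i++)
            total += xyz[i];
        return total;
    }

    /* Walk over a linked list: node 37 need not be anywhere near node 38
       in memory, so even an in-order traversal can miss on each step. */
    long sum_list(const struct node *head)
    {
        long total = 0;
        for (const struct node *p = head; p != NULL; p = p->next)
            total += p->value;
        return total;
    }

    int main(void)
    {
        enum { N = 1000 };
        int *xyz = malloc(N * sizeof *xyz);
        struct node *head = NULL;

        /* Build the list back to front; in a real program the nodes would
           be allocated over time and end up scattered across the heap. */
        for (int i = N - 1; i >= 0; i--) {
            struct node *p = malloc(sizeof *p);
            p->value = i;
            p->next = head;
            head = p;
            xyz[i] = i;
        }

        printf("array sum: %ld  list sum: %ld\n",
               sum_array(xyz, N), sum_list(head));
        return 0;
    }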
--
Darren New / San Diego, CA, USA (PST)