Nicolas Alvarez wrote:
> Well, you can't move objects around in memory to compact, that is true.
> However, nothing stops a memory allocator from keeping lots of metadata
> internally to find free memory quickly.
Sure. If you're always willing to (for example) allocate powers of two
for memory, and you keep a free list for each power of two, then you can
allocate and free pretty quickly. The problem is that this wastes even
more space than a compacting GC does, and eventually you still have to do
some management to coalesce adjacent free blocks.
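A minimal sketch of that power-of-two scheme (all names here are illustrative, not from any real allocator): requests round up to the next power of two, each size class keeps its own free list, so both alloc and free are O(1) pushes and pops, and the internal fragmentation is plain to see:

```c
#include <stddef.h>

#define NUM_CLASSES 12          /* size classes: 2^4 .. 2^15 bytes */
#define MIN_SHIFT   4

typedef struct block { struct block *next; } block_t;

static block_t *free_lists[NUM_CLASSES];   /* one free list per class */
static unsigned char arena[1 << 16];       /* toy backing store */
static size_t arena_top = 0;

/* Round a request up to its size-class index. */
static int size_class(size_t n) {
    int c = 0;
    size_t size = (size_t)1 << MIN_SHIFT;
    while (size < n) { size <<= 1; c++; }
    return c;
}

/* Bytes actually reserved for an n-byte request: a 65-byte
 * request burns a whole 128-byte block. */
size_t reserved_bytes(size_t n) {
    return (size_t)1 << (MIN_SHIFT + size_class(n));
}

void *po2_alloc(size_t n) {
    int c = size_class(n);
    if (free_lists[c]) {                   /* O(1): pop from this class */
        block_t *b = free_lists[c];
        free_lists[c] = b->next;
        return b;
    }
    size_t sz = (size_t)1 << (MIN_SHIFT + c);
    if (arena_top + sz > sizeof arena) return NULL;
    void *p = arena + arena_top;           /* carve fresh arena space */
    arena_top += sz;
    return p;
}

void po2_free(void *p, size_t n) {         /* O(1): push onto the list */
    int c = size_class(n);
    block_t *b = p;
    b->next = free_lists[c];
    free_lists[c] = b;
}
```

Note there's no coalescing at all here, which is exactly the deferred management cost: a freed 16-byte block can never serve a 128-byte request.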
> How do filesystems know what empty blocks are available to save a new
> file? A linear search from the beginning of the disk?
It's a lot easier when you can fragment your allocations. If all your
allocations are one page in size, yeah, it's also pretty easy to go fast. :-)
Put it this way: there's no technique a "manual" non-compacting allocator
could use to make allocation faster than a compacting GC scheme could,
because the compacting GC could simply adopt that same allocation
technique itself, then do a GC if it ran out of memory.
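And after a compaction, the compacting GC's common case is hard to beat anyway: all free space is one contiguous run, so allocation is just a pointer bump. A sketch (illustrative names, not any real collector):

```c
#include <stddef.h>

static unsigned char heap[1 << 16];   /* toy GC heap */
static size_t bump = 0;               /* everything below is live/compacted */

void *gc_alloc(size_t n) {
    n = (n + 7) & ~(size_t)7;         /* keep 8-byte alignment */
    if (bump + n > sizeof heap)
        return NULL;                  /* a real GC would collect+compact here */
    void *p = heap + bump;            /* the whole allocation: one add */
    bump += n;
    return p;
}
```

Two back-to-back allocations land adjacent in memory, which is also friendlier to the cache than chasing free lists.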
But don't argue with *me*. Go look up the research papers on the
efficiency of manual and automatic memory management. I already posted a
list once.
The most interesting recent one I found points out that if you take
typical Java programs, find the objects whose pointers never get passed
outside the immediate scope, and add bytecodes to "manually" free them,
the program runs even faster and uses even less memory.
So, basically, if you detect that you *could* stack-allocate it, and
then do, you get the best of both worlds, with very few "I could have
stack allocated it but didn't" situations.
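The idea translates directly to C (names and numbers here are just illustration, not from the paper): when the pointer never leaves the scope, the heap version's malloc/free pair, the analogue of those added "free" bytecodes, can become a plain stack allocation that costs nothing to reclaim.

```c
#include <stdlib.h>

typedef struct { double x, y; } point_t;

/* Heap version: what a naive translation does, plus the
 * explicit free the paper's added bytecodes would supply. */
double dist2_heap(double x, double y) {
    point_t *p = malloc(sizeof *p);
    if (!p) return -1.0;
    p->x = x; p->y = y;
    double d = p->x * p->x + p->y * p->y;
    free(p);                  /* pointer never escaped, so this is safe */
    return d;
}

/* Stack version: same object, but it lives in the frame and
 * vanishes on return -- no allocator traffic at all. */
double dist2_stack(double x, double y) {
    point_t p = { x, y };
    return p.x * p.x + p.y * p.y;
}
```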
--
Darren New / San Diego, CA, USA (PST)
"That's pretty. Where's that?"
"It's the Age of Channelwood."
"We should go there on vacation some time."