Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>
>>> The only other alternative is that it performs memory defragmentation
>>> before freeing the objects.
>
>> No. It usually performs memory defragmentation as a concurrent part
>> of freeing the objects.
>
> If it performs memory defragmentation, that means that memory does get
> fragmented (and thus need the defragmentation in the first place).

You're oversimplifying. You asked whether I have an efficient way to
avoid fragmentation. The answer is "yes".
> Your
> original claim was that there's *no* fragmentation at all and that's why
> freeing a group of objects is faster.

No. My claim was that that's why the whole system is faster overall. I
take it from this that you haven't yet read the pages I linked to. Did
you not understand the description, or the explanation of why it's more
efficient than individual allocations and deallocations?
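For anyone following along, the idea is roughly this. Below is a deliberately simplified sketch of a semispace (copying) collector, not any particular system's implementation; all names and sizes are mine, and a real collector would trace live objects from roots instead of being handed them. The point it illustrates: live objects get copied into the empty half of the heap, so the dead ones vanish with no per-object work and the survivors come out compacted, which is what lets allocation stay a cheap pointer bump.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy semispace heap: two halves, allocate by bumping a pointer in
 * the active half, collect by copying live objects into the other. */
#define HALF 1024

typedef struct {
    uint8_t space[2][HALF];
    int active;      /* which half we allocate from */
    size_t used;     /* bump pointer into the active half */
} Heap;

static void *gc_alloc(Heap *h, size_t n) {
    n = (n + 7) & ~(size_t)7;            /* keep 8-byte alignment */
    if (h->used + n > HALF)
        return NULL;                     /* would trigger a real GC */
    void *p = h->space[h->active] + h->used;
    h->used += n;
    return p;
}

/* Copy the live objects (given explicitly here for simplicity) into
 * the other half-space and switch to it. Everything not copied is
 * garbage, and it disappears with zero per-object work. */
static void gc_collect(Heap *h, void **live, size_t *sizes, size_t nlive) {
    int to = 1 - h->active;
    size_t used = 0;
    for (size_t i = 0; i < nlive; i++) {
        size_t n = (sizes[i] + 7) & ~(size_t)7;
        memcpy(h->space[to] + used, live[i], sizes[i]);
        live[i] = h->space[to] + used;   /* update the "root" */
        used += n;
    }
    h->active = to;
    h->used = used;  /* survivors are now contiguous: no fragmentation */
}
```

After a collection the next allocation continues from the end of the compacted survivors, so there are never any holes to search for.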
But yes, Bruno is right. It doesn't make any sense to discuss the
benefits and flaws of the details of various systems before you learn
how they work.
> So it's faster only under certain conditions, not all?

See the other post. Every algorithm general enough to handle all usage
patterns will be inefficient for some specific pattern. If you only ever
create files and never delete or move them, an i-node listing individual
blocks is inefficient compared to just recording the start sector and
length. If you only ever allocate and never free, or you always allocate
and free in LIFO order, GC is a net loss compared to using a stack. If
nothing you allocate dynamically ever has an embedded pointer to
something else, reference counting isn't too bad. Even bubble-sort is
efficient with few enough items in your list.
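The LIFO case is easy to see in code. Here's a minimal bump-pointer arena, my own sketch rather than any specific system's allocator: allocation is just a pointer increment, and "freeing" the whole group of objects is a single reset, which is the baseline GC has to compete with in that pattern.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-size arena: no per-object bookkeeping, no free
 * list, so nothing inside the arena can fragment. */
typedef struct {
    uint8_t buf[4096];
    size_t used;
} Arena;

static void *arena_alloc(Arena *a, size_t n) {
    n = (n + 7) & ~(size_t)7;          /* keep 8-byte alignment */
    if (a->used + n > sizeof a->buf)
        return NULL;                    /* out of space */
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

static void arena_reset(Arena *a) {
    a->used = 0;                        /* free every object at once */
}
```

Every allocation is a couple of instructions, and the group-free is O(1) regardless of how many objects were allocated.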
--
Darren New / San Diego, CA, USA (PST)
Just because you find out you are
telepathic, don't let it go to your head.