POV-Ray : Newsgroups : povray.off-topic : A tale of two cities : Re: A tale of two cities
From: Invisible
Date: 14 Mar 2012 12:07:59
Message: <4f60c25f$1@news.povray.org>
>> You seem to suggest that there's 16 bytes of overhead /per object/. I'm
>> saying I'm used to there being a fixed number of bytes of overhead,
>> regardless of the number of objects. That's O(N) vs O(1).
>
>    That's obviously impossible. The only way to achieve that would be
> to allocate an array of objects (by value).
>
>    If you allocate individual objects, the allocator has to know where they
> are and if those memory blocks are free or not. At least if the allocator
> ever wants to reuse freed memory blocks.

If you're using garbage collection, it's quite possible - and, indeed, 
usual. Each time the GC collects the garbage, it also defragments the 
free space. You therefore need only store a pointer to the next free 
byte of memory. To allocate, just increment this pointer by the size of 
the block allocated. Allocation is now O(1) in time and space.
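The bump-pointer scheme described above can be sketched in a few lines. (A toy illustration only, with invented names; a real collector would also compact live objects back into the arena and handle alignment.)

```cpp
#include <cstddef>
#include <cstdint>

// Toy bump allocator: the entire allocator state is one pointer into a
// contiguous arena. A compacting GC can use this because collection
// defragments the arena, so free space is always one contiguous run.
class BumpAllocator {
public:
    explicit BumpAllocator(std::size_t capacity)
        : arena_(new std::uint8_t[capacity]),
          next_(arena_), end_(arena_ + capacity) {}
    ~BumpAllocator() { delete[] arena_; }

    // O(1) in time and space: just advance the pointer.
    void* allocate(std::size_t n) {
        if (next_ + n > end_) return nullptr;  // arena exhausted
        void* p = next_;
        next_ += n;
        return p;
    }

private:
    std::uint8_t* arena_;
    std::uint8_t* next_;
    std::uint8_t* end_;
};
```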

What you're saying is that C++ (and any other language with manual 
memory management) can't do it this way, since there would be no way to 
move stuff around without breaking all the things trying to point to it. 
So free space becomes fragmented, and you start needing O(N) storage to 
keep track of it - and, presumably, O(N) to allocate and deallocate stuff...
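By contrast, an allocator that reuses freed blocks in place needs per-block bookkeeping. A minimal free-list sketch (fixed-size blocks, invented names, no thread safety, not how any particular malloc actually works):

```cpp
#include <cstddef>
#include <vector>

// Toy fixed-size-block pool: a free list threaded through the blocks
// themselves. The bookkeeping (one link per free block) scales with the
// number of blocks, unlike the single pointer of a bump allocator.
class FreeListPool {
public:
    FreeListPool(std::size_t block_size, std::size_t count)
        : block_size_(block_size < sizeof(void*) ? sizeof(void*) : block_size),
          storage_(block_size_ * count) {
        for (std::size_t i = 0; i < count; ++i)
            push(storage_.data() + i * block_size_);
    }

    void* allocate() {                     // pop the head of the free list
        if (!head_) return nullptr;
        void* p = head_;
        head_ = *static_cast<void**>(head_);
        return p;
    }

    void deallocate(void* p) { push(p); }  // push back onto the free list

private:
    void push(void* p) {
        *static_cast<void**>(p) = head_;
        head_ = p;
    }
    std::size_t block_size_;
    std::vector<char> storage_;
    void* head_ = nullptr;
};
```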

>>>>>      The array doesn't necessarily have to be fixed in size.
>>>
>>>> It does if it's static.
>>>
>>>     You don't have to use a static array.
>
>> Sure. But then you're dynamically allocating again.
>
>    You are allocating *an array* dynamically, not individual objects.
> You are making like a couple of 'news' rather than thousands or millions.

Right, OK. You seem to be saying essentially the same thing as I am - 
dynamic allocation is OK, just try not to individually allocate millions 
of blocks.
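The "a couple of news rather than millions" point can be illustrated like so (Particle and the function name are made-up examples):

```cpp
#include <cstddef>
#include <vector>

struct Particle { double x, y, z; };

// One dynamic allocation holding a million objects by value...
std::vector<Particle> make_pool(std::size_t n) {
    return std::vector<Particle>(n);   // a single contiguous heap block
}
// ...instead of a million separate `new Particle` calls, each of which
// carries its own per-block allocator bookkeeping.
```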

>> Right. I think I know how to redesign to avoid that. Should make things
>> a bit more coherent...
>
>    If you are creating a dynamic binary tree, there's no avoiding memory
> allocation.

I meant that I think I can avoid copying bits of tree around while I'm 
building the tree.

> (One of the very few cases where you can avoid it is with
> things like a heap. While a heap is a dynamic binary tree, it can be
> implemented as a single array rather than each object being allocated
> separately. In fact, the C++ standard library offers heap creation and
> managing functions.)

Yeah, I believe I've read about that. (Heaps in arrays, that is.) Which 
property of a heap is it that makes this possible?
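(If I remember right, it's the shape property: a binary heap is a *complete* tree, so the nodes pack into an array with the children of index i at 2i+1 and 2i+2, and no pointers are needed at all.) The standard-library functions mentioned can be used like this:

```cpp
#include <algorithm>
#include <vector>

// A heap lives in a plain contiguous array: the children of the element
// at index i sit at indices 2*i+1 and 2*i+2, so there are no per-node
// pointers and no per-node allocations.
std::vector<int> heap_demo() {
    std::vector<int> v = {3, 1, 4, 1, 5, 9, 2, 6};
    std::make_heap(v.begin(), v.end());   // arrange as a max-heap in place
    std::pop_heap(v.begin(), v.end());    // move the max (9) to the back
    v.pop_back();                         // ...and discard it
    v.push_back(7);
    std::push_heap(v.begin(), v.end());   // restore the heap after inserting 7
    return v;                             // v.front() is now the max, 7
}
```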

>    A possible approach is that the nodes themselves do not manage the
> objects they are pointing to, but rather the data container module
> itself. This is, for example, what std::set does (in other words,
> the nodes in a std::set do not manage the other objects they are
> pointing to, but it's the std::set class itself that takes care of
> allocating and deleting them as needed.) Of course this requires a
> careful design as well, in order to avoid leaks or double deletions.
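The "container owns the nodes" pattern can be sketched with something much simpler than std::set, say a singly linked list. (A toy, not std::set's actual implementation; the careful-design part shows up as the deleted copy operations, which guard against double deletion.)

```cpp
#include <cstddef>

// Toy singly linked list in which the container, not the nodes,
// manages every allocation: the miniature version of the pattern above.
class IntList {
    struct Node { int value; Node* next; };
    Node* head_ = nullptr;
    std::size_t size_ = 0;

public:
    void push_front(int v) {
        head_ = new Node{v, head_};   // the list allocates the nodes...
        ++size_;
    }
    ~IntList() {                      // ...and deletes them, exactly once
        while (head_) {
            Node* n = head_;
            head_ = head_->next;
            delete n;
        }
    }
    std::size_t size() const { return size_; }
    int front() const { return head_->value; }

    IntList() = default;
    IntList(const IntList&) = delete;            // guard against double
    IntList& operator=(const IntList&) = delete; // deletion of nodes
};
```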

Yeah, with manual memory management, almost /everything/ requires 
careful thought and design. You want to make damned sure you know what 
you're trying to implement before you implement it...


