POV-Ray : Newsgroups : povray.off-topic : A tale of two cities Server Time
29 Jul 2024 20:27:00 EDT (-0400)
  A tale of two cities (Message 46 to 55 of 105)  
From: Warp
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:50:18
Message: <4f60be3a@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> >> Heh, really? Most languages I work with can allocate new objects in O(1)
> >> time and using O(1) memory for bookkeeping. I'm not used to dynamic
> >> allocation being anything to worry about.
> >
> >    Last time I checked, 16 bytes is O(1) memory.

> You seem to suggest that there's 16 bytes of overhead /per object/. I'm 
> saying I'm used to there being a fixed number of bytes of overhead, 
> regardless of the number of objects. That's O(N) vs O(1).

  That's obviously impossible. The only way to achieve that would be
to allocate an array of objects (by value).

  If you allocate individual objects, the allocator has to know where they
are and if those memory blocks are free or not. At least if the allocator
ever wants to reuse freed memory blocks.

> >>>> Mmm, I hadn't even thought of using an /array/. I suppose the upper
> >>>> bound on the tree size is statically known, so why not?
> >>>
> >>>     The array doesn't necessarily have to be fixed in size.
> >
> >> It does if it's static. Either that or the source I'm reading is
> >> incorrect...
> >
> >    You don't have to use a static array.

> Sure. But then you're dynamically allocating again.

  You are allocating *an array* dynamically, not individual objects.
You are making just a couple of calls to 'new' rather than thousands or millions.

> Right. I think I know how to redesign to avoid that. Should make things 
> a bit more coherent...

  If you are creating a dynamic binary tree, there's no avoiding memory
allocation. (One of the very few cases where you can avoid it is with
things like a heap. While a heap is a dynamic binary tree, it can be
implemented as a single array rather than each object being allocated
separately. In fact, the C++ standard library offers heap creation and
managing functions.)

  A possible approach is that the nodes themselves do not manage the
objects they are pointing to, but rather the data container module
itself. This is, for example, what std::set does (in other words,
the nodes in a std::set do not manage the other objects they are
pointing to, but it's the std::set class itself that takes care of
allocating and deleting them as needed.) Of course this requires a
careful design as well, in order to avoid leaks or double deletions.

-- 
                                                          - Warp



From: Warp
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:52:05
Message: <4f60bea4@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> While this is true, it is not a problem specific to reference counting, 
> but one that is inherent in /any/ attempt at automatic object lifetime 
> management.

  It's not a problem for a garbage collector.

  And btw, cyclic references are not the only problematic situation with
reference counting. There are cases where objects may be deleted too early,
while there's still code using them.

-- 
                                                          - Warp



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:05:36
Message: <4f60c1d0@news.povray.org>
Am 14.03.2012 16:37, schrieb Invisible:

>> And then you'd also expect MS to give away VS for free (the real thing,
>> with the tools to write the real powerful software, not just crappy
>> hello world stuff), yet for some reason they don't.
>
> Um... So how is the pro version actually different from VS Express?

Last time I checked (though I must confess that's years ago) one 
difference was that the Express editions had quite limited flexibility 
regarding the project settings, to an extent that I didn't even bother 
trying them out further.

>> I think there's another reason why VS is so good. I'm convinced they're
>> not trying to make it easy for /anyone/ to write code - they're trying
>> to make it easy for /themselves/. And make extra money by selling the
>> toolset.
>
> Quite possibly. ;-)
>
> Then again, Valve released their editing tools, and (from what I hear)
> they are really, really hard to use...

Maybe that's because the development cycle in game engines is faster 
than that in general-purpose programming languages, so it wouldn't have 
paid off to develop better tools?

Remember that VS has been developed over how long now? Two decades?



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:07:59
Message: <4f60c25f$1@news.povray.org>
>> You seem to suggest that there's 16 bytes of overhead /per object/. I'm
>> saying I'm used to there being a fixed number of bytes of overhead,
>> regardless of the number of objects. That's O(N) vs O(1).
>
>    That's obviously impossible. The only way to achieve that would be
> to allocate an array of objects (by value).
>
>    If you allocate individual objects, the allocator has to know where they
> are and if those memory blocks are free or not. At least if the allocator
> ever wants to reuse freed memory blocks.

If you're using garbage collection, it's quite possible - and, indeed, 
usual. Each time the GC collects the garbage, it also defragments the 
free space. You therefore need only store a pointer to the next free 
byte of memory. To allocate, just increment this pointer by the size of 
the block allocated. Allocation is now O(1) in time and space.

What you're saying is that C++ (and any other language with manual 
memory management) can't do it this way, since there would be no way to 
move stuff around without breaking all the things trying to point to it. 
So free space becomes fragmented, and you start needing O(N) storage to 
keep track of it - and, presumably, O(N) to allocate and deallocate stuff...

>>>>>      The array doesn't necessarily have to be fixed in size.
>>>
>>>> It does if it's static.
>>>
>>>     You don't have to use a static array.
>
>> Sure. But then you're dynamically allocating again.
>
>    You are allocating *an array* dynamically, not individual objects.
> You are making just a couple of calls to 'new' rather than thousands or millions.

Right, OK. You seem to be saying essentially the same thing as I am - 
dynamic allocation is OK, just try not to individually allocate millions 
of blocks.

>> Right. I think I know how to redesign to avoid that. Should make things
>> a bit more coherent...
>
>    If you are creating a dynamic binary tree, there's no avoiding memory
> allocation.

I meant that I think I can avoid copying bits of tree around while I'm 
building the tree.

> (One of the very few cases where you can avoid it is with
> things like a heap. While a heap is a dynamic binary tree, it can be
> implemented as a single array rather than each object being allocated
> separately. In fact, the C++ standard library offers heap creation and
> managing functions.)

Yeah, I believe I've read about that. (Heaps in arrays, that is.) Which 
property of a heap is it that makes this possible?

>    A possible approach is that the nodes themselves do not manage the
> objects they are pointing to, but rather the data container module
> itself. This is, for example, what std::set does (in other words,
> the nodes in a std::set do not manage the other objects they are
> pointing to, but it's the std::set class itself that takes care of
> allocating and deleting them as needed.) Of course this requires a
> careful design as well, in order to avoid leaks or double deletions.

Yeah, with manual memory management, almost /everything/ requires 
careful thought and design. You want to make damned sure you know what 
you're trying to implement before you implement it...



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:14:41
Message: <4f60c3f1$1@news.povray.org>
Am 14.03.2012 16:52, schrieb Warp:
> clipka<ano### [at] anonymousorg>  wrote:
>> While this is true, it is not a problem specific to reference counting,
>> but one that is inherent in /any/ attempt at automatic object lifetime
>> management.
>
>    It's not a problem for a garbage collector.

Last time I heard, reference counting /is/ a garbage collection mechanism.

>    And btw, cyclic references are not the only problematic situation with
> reference counting. There are cases where objects may be deleted too early,
> while there's still code using them.

I can't think of any reason why that could possibly happen, provided 
that the reference counting is implemented in a thread-safe manner.



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:24:20
Message: <4f60c634$1@news.povray.org>
Am 14.03.2012 17:07, schrieb Invisible:
>>> You seem to suggest that there's 16 bytes of overhead /per object/. I'm
>>> saying I'm used to there being a fixed number of bytes of overhead,
>>> regardless of the number of objects. That's O(N) vs O(1).
>>
>> That's obviously impossible. The only way to achieve that would be
>> to allocate an array of objects (by value).
>>
>> If you allocate individual objects, the allocator has to know where they
>> are and if those memory blocks are free or not. At least if the allocator
>> ever wants to reuse freed memory blocks.
>
> If you're using garbage collection, it's quite possible - and, indeed,
> usual. Each time the GC collects the garbage, it also defragments the
> free space. You therefore need only store a pointer to the next free
> byte of memory. To allocate, just increment this pointer by the size of
> the block allocated. Allocation is now O(1) in time and space.

So far, so good. But how does the GC know how long each chunk of 
garbage-to-collect is?



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:29:32
Message: <4f60c76c$1@news.povray.org>
>> If you're using garbage collection, it's quite possible - and, indeed,
>> usual. Each time the GC collects the garbage, it also defragments the
>> free space. You therefore need only store a pointer to the next free
>> byte of memory. To allocate, just increment this pointer by the size of
>> the block allocated. Allocation is now O(1) in time and space.
>
> So far, so good. But how does the GC know how long each chunk of
> garbage-to-collect is?

The GC engine needs to know quite a bit about the objects it's scanning 
- how big they are, where their pointer fields are, etc. It also usually 
needs access to each thread's stack, to see what local variables are 
pointing to objects. And so on and so forth. There's a /reason/ why it's 
usually hard-coded into the language run-time system rather than being 
an add-on library.



From: Kevin Wampler
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:33:08
Message: <4f60c844$1@news.povray.org>
On 3/14/2012 9:14 AM, clipka wrote:
>> And btw, cyclic references are not the only problematic situation with
>> reference counting. There are cases where objects may be deleted too
>> early,
>> while there's still code using them.
>
> I can't think of any reason why that could possibly happen, provided
> that the reference counting is implemented in a thread-safe manner.

Imagine you've implemented a tree where each node is an object which has 
a "delete" method that removes the node from the tree.  If you're not 
careful the node could end up removing all reference-counted pointers 
to itself (since "this" isn't reference counted) and be reclaimed while 
still running its own delete method.



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:37:18
Message: <4f60c93e$1@news.povray.org>
Am 14.03.2012 17:29, schrieb Invisible:
>>> If you're using garbage collection, it's quite possible - and, indeed,
>>> usual. Each time the GC collects the garbage, it also defragments the
>>> free space. You therefore need only store a pointer to the next free
>>> byte of memory. To allocate, just increment this pointer by the size of
>>> the block allocated. Allocation is now O(1) in time and space.
>>
>> So far, so good. But how does the GC know how long each chunk of
>> garbage-to-collect is?
>
> The GC engine needs to know quite a bit about the objects it's scanning
> - how big they are, where their pointer fields are, etc. It also usually
> needs access to each thread's stack, to see what local variables are
> pointing to objects. And so on and so forth. There's a /reason/ why it's
> usually hard-coded into the language run-time system rather than being
> an add-on library.

You didn't quite /exactly/ answer my question :-P

My point here being that without any per-object memory overhead in 
addition to the raw data, you can't do any dynamic memory management at 
all. Whether that overhead is accounted for in the language run-time or 
an added library is quite irrelevant in that respect.



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:44:09
Message: <4f60cad9@news.povray.org>
>>> So far, so good. But how does the GC know how long each chunk of
>>> garbage-to-collect is?
>>
>> The GC engine needs to know quite a bit about the objects it's scanning
>> - how big they are, where their pointer fields are, etc. It also usually
>> needs access to each thread's stack, to see what local variables are
>> pointing to objects. And so on and so forth. There's a /reason/ why it's
>> usually hard-coded into the language run-time system rather than being
>> an add-on library.
>
> You didn't quite /exactly/ answer my question :-P
>
> My point here being that without any per-object memory overhead in
> addition to the raw data, you can't do any dynamic memory management at
> all. Whether that overhead is accounted for in the language run-time or
> an added library is quite irrelevant in that respect.

OK. Well, typically each object has some overhead that tells you which 
class it belongs to. The GC engine is usually able to go off that 
information, so you're arguably not adding any additional cost there; 
each object just gets one pointer wider.

Whether a static allocator can match that or not is open to debate. In 
particular, I'd imagine a static allocator might have O(N) cost per 
/free block/, whereas a typical GC implementation doesn't need to track 
free blocks at all...




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.