POV-Ray : Newsgroups : povray.off-topic : A tale of two cities
  A tale of two cities (Message 41 to 50 of 105)
From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:34:37
Message: <4f60ba8d$1@news.povray.org>
On 14.03.2012 15:38, Invisible wrote:
> On 14/03/2012 02:36 PM, clipka wrote:
>> On 14.03.2012 10:55, Invisible wrote:
>>
>>>> Note that the dispose is gone for good. Shared pointers use reference
>>>> counters to accomplish this feat.
>>>
>>> Every time I hear somebody say this, a part of me wants to scream "you
>>> /do/ understand that reference counting doesn't actually work, right?"
>>
>> Which is because...?
>
> Allow me to rephrase: Reference counting works if and only if the node
> reference graph is acyclic.
>
> Sometimes it's feasible to ensure that this is the case. Sometimes it is
> not.

While this is true, it is not a problem specific to reference counting, 
but one that is inherent in /any/ attempt at automatic object lifetime 
management.

That's why C++11 also provides weak pointers. They're not a cure-all for 
such problems, but they do help in some situations (such as when the 
cyclic references come from pointers to parent objects in a tree).



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:37:05
Message: <4f60bb21$1@news.povray.org>
> the recommended minimum for that maximum is 65536.

Irony...

> Seriously, I'm pretty sure MS uses Visual Studio themselves in-house. In
> a serious number of seriously large projects. Knowing that every second
> a programmer is busy ranting about the development tools is one second
> that costs money.

Well, yeah, there is that. If it's actually true...

> And then you'd also expect MS to give away VS for free (the real thing,
> with the tools to write the real powerful software, not just crappy
> hello world stuff), yet for some reason they don't.

Um... So how is the pro version actually different from VS Express?

> I think there's another reason why VS is so good. I'm convinced they're
> not trying to make it easy for /anyone/ to write code - they're trying
> to make it easy for /themselves/. And make extra money by selling the
> toolset.

Quite possibly. ;-)

Then again, Valve released their editing tools, and (from what I hear) 
they are really, really hard to use...



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:39:30
Message: <4f60bbb2$1@news.povray.org>
>> Allow me to rephrase: Reference counting works if and only if the node
>> reference graph is acyclic.
>>
>> Sometimes it's feasible to ensure that this is the case. Sometimes it is
>> not.
>
> While this is true, it is not a problem specific to reference counting,
> but one that is inherent in /any/ attempt at automatic object lifetime
> management.

And automatic garbage collection is the only general-purpose solution to 
this problem. Unfortunately, it's also very expensive (in time and often 
in space).

> That's why C++11 also provides weak pointers. They're not a cure-all for
> such problems, but they do help in some situations (such as when the
> cyclic references come from pointers to parent objects in a tree).

I'm not even sure what the semantics of a "weak pointer" would be in the 
absence of automatic memory management...



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:42:49
Message: <4f60bc79$1@news.povray.org>
On 14.03.2012 10:42, Invisible wrote:

> It seems either they implemented everything twice (once with, and once
> without generics), or it's actually legal to not specify type parameters
> and then they default to Object. Which, either way, is highly
> counter-intuitive...

It's actually that they /first/ implemented the stuff as non-generics 
(because Java didn't have them back then), and then added a generics 
implementation later when they discovered that generics were a good 
thing to have after all (retaining the old non-generics implementation 
for backward compatibility).



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:45:44
Message: <4f60bd28$1@news.povray.org>
On 14/03/2012 03:42 PM, clipka wrote:
> On 14.03.2012 10:42, Invisible wrote:
>
>> It seems either they implemented everything twice (once with, and once
>> without generics), or it's actually legal to not specify type parameters
>> and then they default to Object. Which, either way, is highly
>> counter-intuitive...
>
> It's actually that they /first/ implemented the stuff as non-generics
> (because Java didn't have them back then), and then added a generics
> implementation later when they discovered that generics were a good
> thing to have after all (retaining the old non-generics implementation
> for backward compatibility).

Backwards compatibility is pretty evil, eh? ;-)



From: Warp
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:50:18
Message: <4f60be3a@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> >> Heh, really? Most languages I work with can allocate new objects in O(1)
> >> time and using O(1) memory for bookkeeping. I'm not used to dynamic
> >> allocation being anything to worry about.
> >
> >    Last time I checked, 16 bytes is O(1) memory.

> You seem to suggest that there's 16 bytes of overhead /per object/. I'm 
> saying I'm used to there being a fixed number of bytes of overhead, 
> regardless of the number of objects. That's O(N) vs O(1).

  That's obviously impossible. The only way to achieve that would be
to allocate an array of objects (by value).

  If you allocate individual objects, the allocator has to know where they
are and if those memory blocks are free or not. At least if the allocator
ever wants to reuse freed memory blocks.

> >>>> Mmm, I hadn't even thought of using an /array/. I suppose the upper
> >>>> bound on the tree size is statically known, so why not?
> >>>
> >>>     The array doesn't necessarily have to be fixed in size.
> >
> >> It does if it's static. Either that or the source I'm reading is
> >> incorrect...
> >
> >    You don't have to use a static array.

> Sure. But then you're dynamically allocating again.

  You are allocating *an array* dynamically, not individual objects.
You are making only a couple of calls to 'new' rather than thousands or millions.

> Right. I think I know how to redesign to avoid that. Should make things 
> a bit more coherent...

  If you are creating a dynamic binary tree, there's no avoiding memory
allocation. (One of the very few cases where you can avoid it is with
things like a heap. While a heap is a dynamic binary tree, it can be
implemented as a single array rather than each object being allocated
separately. In fact, the C++ standard library offers heap creation and
managing functions.)

  A possible approach is that the nodes themselves do not manage the
objects they are pointing to, but rather the data container module
itself. This is, for example, what std::set does (in other words,
the nodes in a std::set do not manage the other objects they are
pointing to, but it's the std::set class itself that takes care of
allocating and deleting them as needed.) Of course this requires a
careful design as well, in order to avoid leaks or double deletions.

-- 
                                                          - Warp



From: Warp
Subject: Re: A tale of two cities
Date: 14 Mar 2012 11:52:05
Message: <4f60bea4@news.povray.org>
clipka <ano### [at] anonymousorg> wrote:
> While this is true, it is not a problem specific to reference counting, 
> but one that is inherent in /any/ attempt at automatic object lifetime 
> management.

  It's not a problem for a garbage collector.

  And btw, cyclic references are not the only problematic situation with
reference counting. There are cases where objects may be deleted too early,
while there's still code using them.

-- 
                                                          - Warp



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:05:36
Message: <4f60c1d0@news.povray.org>
On 14.03.2012 16:37, Invisible wrote:

>> And then you'd also expect MS to give away VS for free (the real thing,
>> with the tools to write the real powerful software, not just crappy
>> hello world stuff), yet for some reason they don't.
>
> Um... So how is the pro version actually different from VS Express?

Last time I checked (though I must confess that's years ago) one 
difference was that the Express editions had quite limited flexibility 
regarding the project settings, to an extent that I didn't even bother 
trying them out further.

>> I think there's another reason why VS is so good. I'm convinced they're
>> not trying to make it easy for /anyone/ to write code - they're trying
>> to make it easy for /themselves/. And make extra money by selling the
>> toolset.
>
> Quite possibly. ;-)
>
> Then again, Valve released their editing tools, and (from what I hear)
> they are really, really hard to use...

Maybe that's because the development cycle in game engines is faster 
than that in general-purpose programming languages, so it wouldn't have 
paid off to develop better tools?

Remember that VS has been developed over how long now? Two decades?



From: Invisible
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:07:59
Message: <4f60c25f$1@news.povray.org>
>> You seem to suggest that there's 16 bytes of overhead /per object/. I'm
>> saying I'm used to there being a fixed number of bytes of overhead,
>> regardless of the number of objects. That's O(N) vs O(1).
>
>    That's obviously impossible. The only way to achieve that would be
> to allocate an array of objects (by value).
>
>    If you allocate individual objects, the allocator has to know where they
> are and if those memory blocks are free or not. At least if the allocator
> ever wants to reuse freed memory blocks.

If you're using garbage collection, it's quite possible - and, indeed, 
usual. Each time the GC collects the garbage, it also defragments the 
free space. You therefore need only store a pointer to the next free 
byte of memory. To allocate, just increment this pointer by the size of 
the block allocated. Allocation is now O(1) in time and space.

What you're saying is that C++ (and any other language with manual 
memory management) can't do it this way, since there would be no way to 
move stuff around without breaking all the things trying to point to it. 
So free space becomes fragmented, and you start needing O(N) storage to 
keep track of it - and, presumably, O(N) to allocate and deallocate stuff...

>>>>>      The array doesn't necessarily have to be fixed in size.
>>>
>>>> It does if it's static.
>>>
>>>     You don't have to use a static array.
>
>> Sure. But then you're dynamically allocating again.
>
>    You are allocating *an array* dynamically, not individual objects.
> You are making like a couple of 'news' rather than thousands or millions.

Right, OK. You seem to be saying essentially the same thing as I am - 
dynamic allocation is OK, just try not to individually allocate millions 
of blocks.

>> Right. I think I know how to redesign to avoid that. Should make things
>> a bit more coherent...
>
>    If you are creating a dynamic binary tree, there's no avoiding memory
> allocation.

I meant, I think I can avoid copying bits of tree around while I'm 
building the tree.

> (One of the very few cases where you can avoid it is with
> things like a heap. While a heap is a dynamic binary tree, it can be
> implemented as a single array rather than each object being allocated
> separately. In fact, the C++ standard library offers heap creation and
> managing functions.)

Yeah, I believe I've read about that. (Heaps in arrays, that is.) Which 
property of a heap is it that makes this possible?

>    A possible approach is that the nodes themselves do not manage the
> objects they are pointing to, but rather the data container module
> itself. This is, for example, what std::set does (in other words,
> the nodes in a std::set do not manage the other objects they are
> pointing to, but it's the std::set class itself that takes care of
> allocating and deleting them as needed.) Of course this requires a
> careful design as well, in order to avoid leaks or double deletions.

Yeah, with manual memory management, almost /everything/ requires 
careful thought and design. You want to make damned sure you know what 
you're trying to implement before you implement it...



From: clipka
Subject: Re: A tale of two cities
Date: 14 Mar 2012 12:14:41
Message: <4f60c3f1$1@news.povray.org>
On 14.03.2012 16:52, Warp wrote:
> clipka<ano### [at] anonymousorg>  wrote:
>> While this is true, it is not a problem specific to reference counting,
>> but one that is inherent in /any/ attempt at automatic object lifetime
>> management.
>
>    It's not a problem for a garbage collector.

Last time I heard, reference counting /is/ a garbage collection mechanism.

>    And btw, cyclic references are not the only problematic situation with
> reference counting. There are cases where objects may be deleted too early,
> while there's still code using them.

I can't think of any reason why that could possibly happen, provided 
that the reference counting is implemented in a thread-safe manner.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.