On 3/14/2012 8:52, Warp wrote:
> And btw, cyclic references are not the only problematic situation with
> reference counting. There are cases where objects may be deleted too early,
> while there's still code using them.
And your reference counts have to be approximately as wide as your address space.
Smalltalk back in the 64K days used a 5-bit reference count. If you ever had
more than 30 references, it just wouldn't get collected until the mark/sweep
collector ran.
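That sticky-count scheme is easy to sketch. This is a hypothetical illustration, not the actual Smalltalk code: a tiny count that saturates at its maximum and is then never decremented, leaving the object for the mark/sweep pass.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of a small "sticky" reference count: the count saturates at
// its maximum and is never touched again, so over-shared objects are
// left for the mark/sweep collector to reclaim.
struct Object {
    static constexpr uint8_t STICKY = 31;  // 2^5 - 1, a 5-bit field
    uint8_t rc = 0;

    void retain() {
        if (rc < STICKY) ++rc;   // saturate instead of overflowing
    }
    // Returns true when the object can be freed immediately.
    bool release() {
        if (rc == STICKY) return false;  // stuck: only mark/sweep frees it
        return --rc == 0;
    }
};
```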
--
Darren New, San Diego CA, USA (PST)
People tell me I am the counter-example.
On 3/14/2012 10:10, clipka wrote:
> Then again, how would any other GC prevent this kind of thing happening?
Because real GC engines scan the stack for references. The very fact that A
is going to return into a member function of B is enough to keep B alive.
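The stack-scanning idea can be sketched in a few lines. A minimal, hypothetical model (names are mine, not from any real collector): treat a list of words as the thread's stack, and mark any heap object some word points at.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of conservative stack scanning: any word on the stack whose
// value equals a heap object's address keeps that object alive, with
// no per-assignment bookkeeping at all.
struct Obj { bool marked = false; };

// Scan a range of words (standing in for the thread's stack) and mark
// every heap object that some word points at.
void scan_roots(const std::vector<uintptr_t>& stack_words,
                const std::vector<Obj*>& heap) {
    for (uintptr_t w : stack_words)
        for (Obj* o : heap)
            if (w == reinterpret_cast<uintptr_t>(o))
                o->marked = true;   // reachable from the stack: keep it
}
```

A real collector scans the actual machine stack and uses a cheaper address test, but the effect is the same: anything a live activation record can still see survives the collection.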
--
Darren New, San Diego CA, USA (PST)
People tell me I am the counter-example.
On 3/14/2012 10:52, clipka wrote:
> But doesn't the "this-pointer tracking" in a reference-tracking GC approach
> incur approximately that very same penalty?
No, because it's not counting references. Much of the efficiency of GC over
RC comes from eliminating the bookkeeping you'd otherwise do on every
pointer assignment. The GC runs, sees a reference to B on the stack, and
avoids discarding B. C++ programs don't have access to the stack per se, so
they can't do that sort of thing.
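The per-assignment cost is easy to make concrete. An illustrative sketch (my own toy types, not any particular runtime): a counted pointer has to touch a reference count on every assignment, which is exactly the work a tracing GC skips.

```cpp
#include <cassert>

// Toy counted object: 'updates' tallies how many times any reference
// count was touched, to show the cost of counting per assignment.
struct Counted {
    int refs = 0;
    static inline int updates = 0;
};

// Every assignment must incref the new target and decref the old one.
struct CountedPtr {
    Counted* p = nullptr;
    void assign(Counted* q) {
        if (q) { ++q->refs; ++Counted::updates; }  // incref new target
        if (p) { --p->refs; ++Counted::updates; }  // decref old target
        p = q;
    }
};
```

With raw pointers under a tracing GC, both assignments below would be plain stores; here each one costs one or two count updates (plus, in a thread-safe RC like std::shared_ptr, those updates are atomic).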
--
Darren New, San Diego CA, USA (PST)
People tell me I am the counter-example.
On 3/14/2012 8:37, Invisible wrote:
> Um... So how is the pro version actually different from VS Express?
Some of them include a bunch of stuff you don't need if you're only working
by yourself, for example.
--
Darren New, San Diego CA, USA (PST)
People tell me I am the counter-example.
On 15/03/2012 03:40 AM, Darren New wrote:
> On 3/14/2012 8:03, Invisible wrote:
>> Heh, really? Most languages I work with can allocate new objects in O(1)
>> time and using O(1) memory for bookkeeping. I'm not used to dynamic
>> allocation being anything to worry about.
>
> Yeah, it's the *de*allocation that takes a while.
...but not as long as figuring out /what/ to deallocate. ;-)
The other fun thing is that few people have built concurrent GC engines.
At least with manual memory management, one thread doesn't usually block
other threads from running.
Invisible <voi### [at] devnull> wrote:
> The other fun thing is that few people have built concurrent GC engines.
> At least with manual memory management, one thread doesn't usually block
> other threads from running.
A concurrent compacting GC sounds to me like a very hard problem.
If the GC moves objects around in RAM, it has to make sure that no code
is modifying the object while the GC is moving it. How it could achieve
that efficiently, I have no idea.
--
- Warp
On 15/03/2012 02:42 PM, Warp wrote:
> A concurrent compacting GC sounds to me like a very hard problem.
Everybody seems to think they know how it could be done... and yet,
nobody has written the code that does it.
> If the GC moves objects around in RAM, it has to make sure that no code
> is modifying the object while the GC is moving it. How does it achieve
> that efficiently, I have no idea.
Well, Haskell currently does things like keep a separate heap for each
processor core. But that doesn't guarantee there are no pointers from
one heap to another, so you still gotta be careful. What you would
probably do is "mark" each object somehow, before you go about moving
it. Trouble is, that adds the overhead of testing whether each object is
marked every single time you want to access any object...
I'm sure it can be implemented somehow. The question is how complicated
it would be, and how much overhead it would add.
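One known way to pay that per-access overhead (my example, not something the post claims) is a Brooks-style forwarding pointer: every object carries a pointer to its current copy, initially itself, and every access indirects through it, so mutators see a moved object without stopping.

```cpp
#include <cassert>

// Sketch of a Brooks-style forwarding pointer. Each object points at
// "itself" until the collector moves it; afterwards the old copy
// forwards to the new one.
struct Obj {
    Obj* forward;   // current copy of this object
    int value;
    explicit Obj(int v) : forward(this), value(v) {}
};

// Every read/write goes through this barrier -- the per-access cost.
Obj* deref(Obj* o) { return o->forward; }

// The collector "moves" an object by copying it to new space and
// installing a forwarding pointer in the old copy.
Obj* evacuate(Obj* old_copy, Obj* new_space) {
    *new_space = *old_copy;
    new_space->forward = new_space;
    old_copy->forward = new_space;
    return new_space;
}
```

The barrier is one extra load per access, which is exactly the kind of constant overhead being weighed here.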
Am 15.03.2012 15:42, schrieb Warp:
> Invisible<voi### [at] devnull> wrote:
>> The other fun thing is that few people have built concurrent GC engines.
>> At least with manual memory management, one thread doesn't usually block
>> other threads from running.
>
> A concurrent compacting GC sounds to me like a very hard problem.
> If the GC moves objects around in RAM, it has to make sure that no code
> is modifying the object while the GC is moving it. How does it achieve
> that efficiently, I have no idea.
If the GC was part of the OS, maybe something could be done with page
faulting.
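You don't strictly need the OS's cooperation beyond the usual memory-protection calls. A POSIX-only sketch of the trick (assumed setup, not any real collector's code): protect a page, and let the first mutator access trap into a handler, which is where a GC would finish scanning or fixing up that page before unprotecting it.

```cpp
#include <csignal>
#include <cstddef>
#include <sys/mman.h>
#include <unistd.h>

static void* g_page = nullptr;
static volatile sig_atomic_t g_faulted = 0;

// Fault handler: in a real GC, this is where the page would be fixed
// up (objects scanned or relocated) before access is re-enabled.
static void on_fault(int, siginfo_t*, void*) {
    g_faulted = 1;
    mprotect(g_page, static_cast<size_t>(sysconf(_SC_PAGESIZE)),
             PROT_READ | PROT_WRITE);
}

// Map a fresh page and protect it so the next access traps.
char* make_guarded_page() {
    long psz = sysconf(_SC_PAGESIZE);
    g_page = mmap(nullptr, static_cast<size_t>(psz), PROT_NONE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    struct sigaction sa = {};
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = on_fault;
    sigaction(SIGSEGV, &sa, nullptr);
    return static_cast<char*>(g_page);
}
```

After the handler unprotects the page, the faulting instruction is simply retried, so the mutator never knows the barrier was there.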
On 15/03/2012 04:14 PM, clipka wrote:
> If the GC was part of the OS, maybe something could be done with page
> faulting.
Indeed. Darren found a paper about this very idea a while back. Some
guys patched the Linux kernel to do interesting stuff in this direction.
And of course, if GC was in the OS, then you wouldn't have this
situation of "the GC engine for program X isn't actually using this RAM
right now, but because it's reserved it from the OS, the GC engine for
program Y can't use it"...
On 3/15/2012 2:09, Invisible wrote:
> least with manual memory management, one thread doesn't usually block other
> threads from running.
You would be surprised at the number of implementations of malloc() and
free() that assume you're single-threaded, and therefore need a global lock
around every call to be safe.
Also, in a language that supports threads in the first place (like Erlang),
the heaps are per-thread, so that's OK.
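The per-thread-heap idea is simple to sketch. A toy illustration (hypothetical names; real allocators are far more involved): give each thread its own bump-pointer arena via `thread_local`, so allocation never takes a lock.

```cpp
#include <cstddef>

// Toy per-thread bump allocator: each thread carves allocations out of
// its own arena, so there is no shared state and hence no locking.
constexpr std::size_t ARENA_SIZE = 1 << 16;

struct Arena {
    alignas(std::max_align_t) char buf[ARENA_SIZE];
    std::size_t used = 0;

    void* alloc(std::size_t n) {
        // Round up so every allocation stays suitably aligned.
        constexpr std::size_t A = alignof(std::max_align_t);
        n = (n + A - 1) & ~(A - 1);
        if (used + n > ARENA_SIZE) return nullptr;  // arena exhausted
        void* p = buf + used;
        used += n;
        return p;
    }
};

// One arena per thread: allocations from different threads can never
// contend with each other.
thread_local Arena tls_arena;

void* tl_alloc(std::size_t n) { return tls_arena.alloc(n); }
```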
--
Darren New, San Diego CA, USA (PST)
People tell me I am the counter-example.