Darren New <dne### [at] sanrrcom> wrote:
> Warp wrote:
> > Manually closing the file handle when it's not needed anymore is no
> > better than manual memory management: There's always the danger of
> > a leak. Functions may be exited at unexpected places, etc.
> Of course, all these objections are fairly irrelevant when the OS is
> actually designed to help the GC system.
The OS may help the GC system when we are talking about OS resources.
However, it can't help when we are talking about resources within the
program itself. (For the sake of example, let's assume that the program
uses its own "file handles" instead of the ones provided by the system.
It doesn't need to be something related to physical files per se.)
> For example, nobody complains "UNIX signals are bad, because if you
> don't catch them, your process might have files open when it exits."
The problem is not leaving file handles open when the program exits.
The problem is opening more and more file handles without closing the
old ones: In most systems at some point the OS will put a limit to this
and refuse to give any more file handles. At this point the program will
malfunction, and finding and solving the cause can be quite difficult.
Usually this is a sign of a bug: Some code somewhere is "leaking".
Not all unused file handles are being closed.
With GC, however, object destruction is delayed, so it might not be
a bug per se but simply a consequence of the GC.
> No, the OS closes the files for you.
It can't close them if it sees that they are in use: Some live objects
are holding the file handles (because these objects have not been
properly destroyed).
> In a system that actually actively supports GC, a program trying to
> access a file handle that someone else has open (and locked) will cause
> a GC in that program to finalize those resources, freeing up the lock,
> in exactly the same way that a modern desktop OS will page out inactive
> portions of one process to free up memory for other processes.
It was not a question of accessing locked files. It was a question of
leaking open file handles: At some point the OS may not give out any
more of them.
Of course one thing the OS might try is to tell the GC to make a sweep
to see if that frees any file handles (does any OS actually do this?).
However, as I said, this will work only with system resources.
> And again, there are pretty trivial techniques to prevent this from
> being a problem. At the top of your main loop, you do a quick GC to
> scavenge anything you accidentally freed.
This assumes that the main loop is advanced before the amount of leaked
resources grows too large. It can alleviate the problem, but it's not a
guarantee.
(And if the main loop is executed very often, e.g. thousands of times
per second, wouldn't calling the GC on each iteration cause overhead?)
> And of course anything that allocated stuff on the heap has this sort of
> problem anyway, unless you wind up essentially writing your own GC anyway.
There are alternatives to GC which call destructors immediately when
the objects go out of scope. For example, if I'm not completely mistaken,
Objective-C uses reference counting.
Someone mentioned that in C# it's possible to specify that an object
should be destroyed immediately when it goes out of scope (while regular
GC handles everything else). That sounds like a good solution to me:
you get the best of both worlds.
> > So, basically, garbage collection takes care of memory management,
> > often at the cost of reducing the possibility of the user implementing
> > automatic management of resources other than memory. These resources may
> > be system resources, or resources in the program itself. GC seems to
> > completely disregard them.
> Only naive GC. Of course, sophisticated GC needs support throughout the
> system. Some systems have this.
There's no way the OS can know about the program's internal resources
even if it cooperates with the GC engine to handle system resources.
> Other systems don't have this sort of problem at all, in that files are
> memory resources just like "in-memory" structures, so going out of scope
> deletes the file just like you'd expect.
Don't stick to the file handles in particular. It was just an
easy-to-understand example.
--
- Warp