POV-Ray : Newsgroups : povray.off-topic : A tale of two cities
  A tale of two cities (Message 96 to 105 of 105)  
From: Darren New
Subject: Re: A tale of two cities
Date: 16 Mar 2012 14:33:36
Message: <4f638780$1@news.povray.org>
On 3/16/2012 4:41, Warp wrote:
> Darren New<dne### [at] sanrrcom>  wrote:
>> You would be surprised at the number of implementations of malloc() and
>> free() that assume you're single-threaded and require locks.
>
>    Care to give an example (in a modern OS)? At least Gnu libc uses locking
> malloc() and free() (and it's one of the reasons for their slowness).

I think you just answered the question.

Maybe I phrased my statement wrong. Lots of implementations of malloc() and 
free() share one heap among multiple threads, so they either need to lock 
internally or need to be used from a single-threaded program. Hence they 
assume you're single-threaded within the context of malloc()/free(), because 
they've acquired a global lock.  I.e., yes, you do get contention between 
threads even with manual memory management.
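The situation can be sketched with a toy allocator in Python (the LockedArena class is hypothetical and purely illustrative - a real malloc manages raw memory, not a free list of block indices): every allocation and free on the shared heap has to take the one global lock, and that is where the cross-thread contention comes from.

```python
import threading

class LockedArena:
    """Toy sketch of a heap shared between threads. Every malloc/free
    must take the single global lock, hence threads contend on it."""
    def __init__(self, nblocks):
        self._lock = threading.Lock()
        self._free = list(range(nblocks))   # indices of free blocks

    def malloc(self):
        with self._lock:                    # all threads serialize here
            return self._free.pop() if self._free else None

    def free(self, block):
        with self._lock:
            self._free.append(block)

arena = LockedArena(4)
held = [arena.malloc() for _ in range(4)]
assert arena.malloc() is None               # heap exhausted
arena.free(held[0])
assert arena.malloc() == held[0]            # freed block gets reused
```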

-- 
Darren New, San Diego CA, USA (PST)
   People tell me I am the counter-example.



From: Warp
Subject: Re: A tale of two cities
Date: 16 Mar 2012 14:39:18
Message: <4f6388d6@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> On 3/16/2012 1:52, Invisible wrote:
> > On 15/03/2012 11:17 PM, Darren New wrote:
> >> On 3/15/2012 9:20, Invisible wrote:
> >>> And of course, if GC was in the OS, then you wouldn't have this
> >>> situation of
> >>
> >> You also wouldn't need finalizers.
> >
> > Yeah you would. The OS doesn't magically "know" that a specific object is
> > holding a specific external resource.

> If your OS is garbage-collected, it's not an "external" resource, now is it?

  I think that the idea was that RAM is not the only resource an
object can own, and having to manually free such a resource is error-prone
and takes us back to the days of C programming.

-- 
                                                          - Warp



From: Darren New
Subject: Re: A tale of two cities
Date: 16 Mar 2012 15:29:10
Message: <4f639486$1@news.povray.org>
On 3/16/2012 11:39, Warp wrote:
>    I think that the idea was that RAM is not the only resource that an
> object could own, and having to manually free such resource is error-prone
> and going back to the days of C programming.

Right. In a GCed operating system, why would you own anything other than 
GCed resources? You don't need file handles. You don't need IPC ports. You 
don't need window handles. Etc. Look at Smalltalk - nothing had finalizers, 
because it had its own OS. Or Hermes. Or Eros. (Eros and Hermes, for 
example, treat all your disk space as just one big virtual memory address 
space. There are no files. You just store data in your variables, and serve 
it over IPC ports.)

About the only thing you'd need is some way of closing a socket when the 
connection got GCed, in which case you could have a process that closes the 
socket when its local IPC connections all get closed, and you just talk 
through that process. That's basically how Erlang manages it.
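A rough sketch of that one remaining cleanup hook, in Python (Connection and FakeSocket are made-up names, with FakeSocket standing in for the real, non-GCed OS socket): the only finalizer-like thing needed is closing the external socket once the GC-managed handle becomes unreachable.

```python
import gc
import weakref

class FakeSocket:
    """Hypothetical stand-in for a real OS socket (the non-GCed resource)."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class Connection:
    """GC-managed handle; when it is collected, the socket gets closed."""
    def __init__(self, sock):
        self._sock = sock
        # Close the underlying socket once this handle is unreachable.
        weakref.finalize(self, sock.close)

sock = FakeSocket()
conn = Connection(sock)
assert not sock.closed
del conn        # last reference dies
gc.collect()    # CPython frees it promptly anyway; collect() for portability
assert sock.closed
```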

-- 
Darren New, San Diego CA, USA (PST)
   People tell me I am the counter-example.



From: Warp
Subject: Re: A tale of two cities
Date: 16 Mar 2012 15:38:14
Message: <4f6396a5@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> Right. In a GCed operating system, why would you own anything other than 
> GCed resources?

  I think you are talking about system resources. A program may use other
resources than simply system resources. If objects are never finalized,
how exactly are you going to free those resources? Manually?

  Even then, sometimes even *memory* handling could require finalizers.
The quintessential example would be if you wanted to implement a
copy-on-write mechanism. In that case you need to "retain" and "release"
a reference count each time an object takes hold of, or drops, a shared
block of memory. Without finalizers that becomes difficult.

  You could have manual "retain" and "release" calls, but then we are
back to square one, i.e. manual memory management, which is laborious,
error-prone, and hard to make exception-safe.
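The retain/release scheme can be sketched in Python (SharedBlock and CowString are invented names; a real implementation would sit on raw memory): a write copies the block only when the count says it is shared, and the manual release() at the end is exactly the step a finalizer would automate.

```python
class SharedBlock:
    """Hypothetical shared block of memory with a manual reference count."""
    def __init__(self, data):
        self.data = list(data)
        self.refcount = 1

class CowString:
    """Made-up copy-on-write string: copies share one block until written."""
    def __init__(self, data):
        self._block = SharedBlock(data)

    def copy(self):
        # "Retain": share the block, just bump the count.
        other = CowString.__new__(CowString)
        other._block = self._block
        self._block.refcount += 1
        return other

    def write(self, i, ch):
        # Copy-on-write: detach only if someone else still holds the block.
        if self._block.refcount > 1:
            self._block.refcount -= 1            # "release" the shared block
            self._block = SharedBlock(self._block.data)
        self._block.data[i] = ch

    def release(self):
        # The manual step a finalizer would automate - forget it, and leak.
        self._block.refcount -= 1

a = CowString("abc")
b = a.copy()
assert a._block is b._block          # shared, no data copied yet
b.write(0, "x")
assert a._block is not b._block      # the write forced a private copy
assert a._block.data == list("abc")
assert b._block.data == list("xbc")
```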

-- 
                                                          - Warp



From: Darren New
Subject: Re: A tale of two cities
Date: 16 Mar 2012 23:56:50
Message: <4f640b82$1@news.povray.org>
On 3/16/2012 12:38, Warp wrote:
>    I think you are talking about system resources. A program may use other
> resources than simply system resources.

Like what?

> The quintessential example would be if you wanted to implement a
> copy-on-write mechanism.

That has nothing to do with finalizers. In advanced systems, that sort of 
stuff isn't something you write in the application code either, any more 
than worrying about taking things out of the B-tree is something a SQL 
programmer worries about when deleting a row.

And yes, at IBM, when they were working in NIL (the precursor to Hermes), 
they ported a large and complex system from a single-machine implementation 
to run on a distributed hot-failover cluster, changing nothing but the 
compiler - just like you could with SQL code.

-- 
Darren New, San Diego CA, USA (PST)
   People tell me I am the counter-example.



From: Warp
Subject: Re: A tale of two cities
Date: 17 Mar 2012 02:29:46
Message: <4f642f5a@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> On 3/16/2012 12:38, Warp wrote:
> >    I think you are talking about system resources. A program may use other
> > resources than simply system resources.

> Like what?

  "When the last reference to this sprite dies, remove it from the screen."

  "When the last reference to this timer dies, remove it from the runloop."

  And so on. (Actual real-life examples.)

> > The quintessential example would be if you wanted to implement a
> > copy-on-write mechanism.

> That hasn't anything to do with finalizers. In advanced systems, that sort 
> of stuff isn't something you write in the application code, either, any more 
> than worrying about taking things out of the B-tree is something a SQL 
> programmer worries about when deleting a row.

  Sure. If that's the principle, then every program you could ever want
to write is rather simple: Just something like "do_what_i_want();"

  A system cannot offer *everything* that a programmer might ever want.
At some level a feature has to be implemented. If you want CoW, that has
to be implemented somewhere. It cannot just magically work out of nowhere.

-- 
                                                          - Warp



From: Darren New
Subject: Re: A tale of two cities
Date: 17 Mar 2012 13:06:58
Message: <4f64c4b2$1@news.povray.org>
On 3/16/2012 23:29, Warp wrote:
> Darren New<dne### [at] sanrrcom>  wrote:
>> On 3/16/2012 12:38, Warp wrote:
>>>     I think you are talking about system resources. A program may use other
>>> resources than simply system resources.
>
>> Like what?
>
>    "When the last reference to this sprite dies, remove it from the screen."

I will point you to Smalltalk, which had no trouble doing things like this.

>    "When the last reference to this timer dies, remove it from the runloop."

I will point you at Smalltalk, which had no trouble doing things like this. :-)

>    And so on. (Actual real-life examples.)

Again, you seem to be assuming that the OS isn't garbage collected. If you 
just GCed the timer, why would it still be firing events? If you just GCed 
the sprite, why would it be still drawing on the screen? What's going to 
refresh it?

Now, granted, if you have hardware sprites that don't actually stop drawing 
when you write over them, you'd need something in the OS to handle that, but 
that's again the OS's job to regulate shared hardware.

>    A system cannot offer *everything* that a programmer might ever want.
> At some level a feature has to be implemented. If you want CoW, that has
> to be implemented somewhere. It cannot just magically work out of nowhere.

Sure. And my point is that if you're writing an OS designed for languages 
that are GCed, then that sort of thing belongs in either the compiler or the 
OS. Just like if you're writing a database designed for multiple 
applications to access it at once, you don't just go "well, leave the 
locking out and let the applications themselves worry about that, because 
you have to implement it somewhere."

No, if you don't implement GC in the OS, you need finalizers to tell the OS 
that you're done with a resource. If you *do* implement it in the OS, the OS 
knows you're done with the resource because it knows there aren't any more 
references to the resource.

That said, your idea of COW is an interesting case. I remember you talking 
about it before. And I'll grant that it's not the sort of thing that's 
trivial to do without keeping track of how many references there are to the 
object. But I'd rather see this as something like a different type of class, 
rather than taking advantage of a more global functionality designed to 
bypass limitations in the OS. In other words, your COW doesn't really need 
finalizers. It needs a way of knowing how many references there are to your 
writable block, i.e. whether it's already shared. Modern OSes clearly 
already support copy-on-write semantics (leading to the OOM killer, for 
example), so it's not obvious that we aren't solving this particular 
problem at the wrong level of the architecture.

Now, granted, I think having a mechanism whereby you can mark a particular 
class as (say) having no circular references and needing reference-counted 
GC could be useful. Maybe that would buy you something in various cases 
like your COW, or in other circumstances like network sockets, where you're 
necessarily talking to something that can't be garbage collected and you 
want it released as soon as possible. But mostly it's still the sort of 
thing that should go in the compiler so everyone can use it.

-- 
Darren New, San Diego CA, USA (PST)
   "Oh no! We're out of code juice!"
   "Don't panic. There's beans and filters
    in the cabinet."



From: Warp
Subject: Re: A tale of two cities
Date: 17 Mar 2012 13:40:00
Message: <4f64cc70@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> >    "When the last reference to this sprite dies, remove it from the screen."

> I will point you to Smalltalk, which had no trouble doing things like this.

> >    "When the last reference to this timer dies, remove it from the runloop."

> I will point you at Smalltalk, which had no trouble doing things like this. :-)

  And exactly how does Smalltalk know what to do if you don't tell it?
You have to be able to tell it somehow. It cannot guess it by magic.

> >    And so on. (Actual real-life examples.)

> Again, you seem to be assuming that the OS isn't garbage collected. If you 
> just GCed the timer, why would it still be firing events?

  Because you have told the runloop to fire events with that timer at
regular intervals. You have to tell the timer/runloop to stop doing that.
The runloop owns the timer, so there's at least one pointer pointing to
it until you explicitly tell the runtime to stop it.

  Same goes for sprites in environments where you tell the runtime
(which might be e.g. a custom game engine) "this object is placed here":
the runtime owns the object and it will be there until you tell it to
drop it.

  How do you tell it to drop them when the last reference to them dies?
With a destructor/finalizer.

> No, if you don't implement GC in the OS, you need finalizers to tell the OS 
> that you're done with a resource. If you *do* implement it in the OS, the OS 
> knows you're done with the resource because it knows there aren't any more 
> references to the resource.

  But not everything is a system resource.

> That said, your idea of COW is an interesting case. I remember you talking 
> about it before. And I'll grant that it's not the sort of thing that's 
> trivial to do without keeping track of how many references there are to the 
> object. But I'd rather see this as something like a different type of class, 
> rather than taking advantage of a more global functionality designed to 
> bypass limitations in the OS. In other words, your COW doesn't really need 
> finalizers. It needs a way of knowing how many references there are to your 
> writable block, whether it's already shared.

  I can't think of any other way of knowing whether an object is being
shared than either using deterministic scope-bound reference counting, or
running a GC sweep, which would presumably be extremely heavy if done too
often.

  Why can't RAII *and* automatic GC be supported in the same language?
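For what it's worth, Python is one existence proof that the two can coexist: `with` blocks give deterministic, scope-bound cleanup in the RAII spirit, while a tracing cycle collector runs underneath. A minimal sketch (the Resource class is hypothetical):

```python
class Resource:
    """Hypothetical resource with deterministic, scope-bound cleanup --
    the RAII half -- in a language that also has a tracing GC."""
    def __init__(self):
        self.open = True
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.open = False    # runs at scope exit, even on exceptions
        return False         # don't swallow exceptions

with Resource() as r:
    assert r.open
assert not r.open            # released deterministically, no GC sweep needed
```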

-- 
                                                          - Warp



From: Darren New
Subject: Re: A tale of two cities
Date: 17 Mar 2012 13:56:07
Message: <4f64d037@news.povray.org>
On 3/17/2012 10:40, Warp wrote:
>>>     "When the last reference to this timer dies, remove it from the runloop."
>> I will point you at Smalltalk, which had no trouble doing things like this. :-)
>    And exactly how does Smalltalk know what to do if you don't tell it?
> You have to be able to tell it somehow. It cannot guess it by magic.

The timer doesn't fire if you've garbage-collected it.

>    Because you have told the runloop to fire events with that timer on
> regular intervals. You have to tell the timer/runloop to stop doing that.
> The runloop owns the timer, so there's at least one pointer pointing to
> it until you explicitly tell the runtime to stop it.

Well, sure. So? I'm not following why this is a problem. If you want the 
timer to stop, you stop it.

>    Same goes for sprites in environments where you tell to the runtime
> (which might be eg. a custom game engine) "this object is placed here":
> The runtime owns the object and it will be there until you tell it to
> drop it.
>
>    How do you tell it to drop them when the last reference to them dies?
> With a destructor/finalizer.

You just told me the last reference will not go away, because it's in the 
runloop or the game engine, right? I'm not following.

It sounds like what you're saying is you want a type of object to keep 
reference counts, so when you dispose of an object, it can have an action 
other than finalizing other objects?  I.e., you don't want the timer garbage 
collected, but you want the timer to keep track of how many references to it 
exist from objects other than the owner, so it can be stopped when the last 
user gets collected?

In that case, you use weak references. The runloop would hold a weak 
reference to the timer, and when the last user of that timer gets collected, 
the timer gets collected out from under the runloop. Now, granted, you might 
want the timer or sprite to disappear before the next GC, if that's what 
you're talking about, but again that's not an appropriate task for a 
finalizer that only runs during GC in the first place.
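That weak-reference arrangement can be sketched with Python's standard weakref module (RunLoop and Timer here are hypothetical toy classes): the runloop never keeps a timer alive by itself, so the timer vanishes once its last user drops it.

```python
import gc
import weakref

class Timer:
    """Hypothetical timer object."""
    def __init__(self, interval):
        self.interval = interval

class RunLoop:
    """Hypothetical runloop that holds its timers only weakly, so it
    never keeps a timer alive on its own."""
    def __init__(self):
        self._timers = []

    def add(self, timer):
        self._timers.append(weakref.ref(timer))

    def live_timers(self):
        timers = []
        for ref in self._timers:
            timer = ref()
            if timer is not None:   # dead references simply drop out
                timers.append(timer)
        return timers

loop = RunLoop()
t = Timer(10)
loop.add(t)
assert len(loop.live_timers()) == 1
del t               # the last *user* reference dies...
gc.collect()        # (CPython frees it immediately; collect() for safety)
assert loop.live_timers() == []     # ...and the runloop lets it go
```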

The other question is whether you want the timer to keep alive the objects 
whose methods it invokes when the timer fires. In C++, you can delete the 
invoked object out from under the timer, but you can't do that in a GCed 
language. If you said "Run xyz.pdq() every ten seconds", then the xyz 
instance is going to hang around so it can be run, so asking how to stop the 
timer when nobody is run by it doesn't even make sense, from that point of 
view.

Do you want the fact that the sprite is on the screen to keep the sprite 
object alive? Or are you really saying "I want to use scope to keep track of 
when to start and stop various processes"?

>    Why can't RAII *and* automatic GC be supported in the same language?

I think not so much RAII as reference counting. I fully support having weak 
references as well as reference-counted objects. (I think actually that 
Python supports both.) Reference-counted objects are high overhead compared 
to GCed objects, tho, so unless there's really a reason you promptly need 
them to free their resources, you probably want to avoid declaring your 
class that way. And if you manage to get a circular loop of 
reference-counted objects, your reference counting is going to be screwed up 
anyway, so all the more reason to mark reference-counted classes as special 
- you can have the compiler check that no reference-counted class can 
transitively point to an instance of itself.
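The cycle problem is easy to demonstrate in CPython, which uses exactly this hybrid: reference counting plus a tracing collector for the cycles that reference counting can't handle (Node is a made-up class):

```python
import gc
import weakref

class Node:
    """Made-up class; each node holds a strong reference to the other."""
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other = b
b.other = a                 # a circular loop of strong references

freed = []
weakref.finalize(a, freed.append, "a freed")

del a, b                    # refcounts stay nonzero: pure refcounting leaks
assert freed == []          # ...so nothing has been freed yet
gc.collect()                # the tracing cycle collector breaks the loop
assert freed == ["a freed"]
```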

-- 
Darren New, San Diego CA, USA (PST)
   "Oh no! We're out of code juice!"
   "Don't panic. There's beans and filters
    in the cabinet."



From: Invisible
Subject: Re: A tale of two cities
Date: 30 Mar 2012 05:10:31
Message: <4f757887$1@news.povray.org>
Now there's interesting. I had a go with NetBeans on my laptop while I 
was in Switzerland, and it didn't keep giving me random build failures 
for no apparent reason. And while it was still slow, it wasn't 
unacceptably unresponsive. All of which is interesting, because when I 
originally tried it out, it was running on a more powerful PC. 
(Admittedly in a VM, but it's using hardware virtualisation, and no 
other applications seemed unduly slow.)

Also, it appears that NetBeans has wired-in support for Git, Mercurial 
and Subversion. Obviously my source control system of choice is not 
supported, largely because nobody has ever heard of it. I did try to use 
Git though. I /presume/ it's recording my changes, because damned if I 
can find any way of, you know, /looking at/ the change history. :-P Just 
to be confusing, NetBeans keeps its own session history as well, so if 
you just accidentally edited the wrong file or something, you can 
quickly pull up the last few diffs and revert them.

Is there some kind of tool you can use to /actually see/ what's in a Git 
repository? Because NetBeans isn't being very helpful here.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.