  Re: the POV-ray SDL--similar to Java and/or Python?  
From: Darren New
Date: 14 Sep 2006 15:25:42
Message: <4509acb6$1@news.povray.org>
Warp wrote:
>   GC was supposed to be faster than a RC method.

That's called "Mark and sweep."  It's also fairly slow if you don't 
tend to generate a lot of garbage, because every collection still 
traces all the live objects whether or not there's much to reclaim. 
Mark-and-sweep and reference counting are basically the two oldest, 
most primitive approaches to garbage collection, and 25 years of 
research has improved things tremendously. Even the generational GC 
that MS uses is a dozen years old.
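
For concreteness, here's a minimal mark-and-sweep sketch in C++ 
(hypothetical Object type and explicit heap list; a real collector 
discovers roots and pointers itself):

#include <vector>

// Toy mark-and-sweep. "Object", "children", and the heap list are
// stand-ins for what a real collector tracks internally.
struct Object {
    bool marked = false;
    std::vector<Object*> children;  // outgoing references
};

void mark(Object* obj) {
    if (obj == nullptr || obj->marked) return;
    obj->marked = true;                 // visit each live object once
    for (Object* child : obj->children)
        mark(child);
}

void sweep(std::vector<Object*>& heap) {
    for (auto it = heap.begin(); it != heap.end(); ) {
        if ((*it)->marked) {
            (*it)->marked = false;      // reset for the next cycle
            ++it;
        } else {
            delete *it;                 // unreachable: reclaim it
            it = heap.erase(it);
        }
    }
}

Note where the time goes: mark touches every live object and sweep 
touches every allocation, so a collection that finds little garbage 
does nearly all of that work for nearly no reclaimed memory.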

Warp wrote:
>   And how is that faster than simply freeing the dead objects?
> The original point was that GC is much faster than freeing each
> object individually.

Well, it's faster than freeing each one individually *and* doing the 
math to maintain the reference counters.
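
To see where that math goes, here's a bare-bones sketch of an 
intrusive refcounted pointer (hypothetical; real code would reach 
for std::shared_ptr or boost::intrusive_ptr):

#include <atomic>

// The point is what every copy and every destruction costs:
// an increment or a decrement, plus a branch.
template <typename T>
class RefPtr {
    T* p_;
public:
    explicit RefPtr(T* p) : p_(p) { if (p_) ++p_->refs; }
    RefPtr(const RefPtr& o) : p_(o.p_) { if (p_) ++p_->refs; } // count up
    RefPtr& operator=(const RefPtr&) = delete; // omitted to keep this short
    ~RefPtr() {
        if (p_ && --p_->refs == 0)  // count down
            delete p_;              // last owner frees the object
    }
    T* operator->() const { return p_; }
};

struct Node {
    std::atomic<int> refs{0};  // intrusive counter (atomic if shared across threads)
    int value = 0;
};

Every copy, parameter pass, and scope exit executes that counter 
arithmetic; a tracing collector pays nothing at any of those points.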

> Moving the dead objects around in memory before freeing
> them isn't going to help.

You don't move the dead objects around, any more than a disk defrag 
copies the contents of unallocated sectors out of the way of the file 
being moved. You only move the live objects around. If you have many 
more dead objects than live ones, it's a win, especially if new 
allocations are sped up by not having to avoid clobbering live objects 
(i.e., sped up by not having to walk a free-space chain looking for a 
sufficiently large hole, because after compaction the free space is 
one contiguous block). Figuring out how to identify dead objects 
without actually looking at them at all is challenging but mostly solved.
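
To make the allocation-speed point concrete, here's a sketch of 
allocating out of a compacted heap (toy globals, not any particular 
runtime): with all free space in one contiguous region, allocation 
is a pointer bump.

#include <cstddef>
#include <cstdint>

// "heap" and HEAP_SIZE are hypothetical; a real runtime manages
// regions and triggers a collection when space runs out.
constexpr std::size_t HEAP_SIZE = 1 << 20;
alignas(16) static std::uint8_t heap[HEAP_SIZE];
static std::size_t bump = 0;

void* gc_alloc(std::size_t n) {
    n = (n + 15) & ~std::size_t(15);           // round up for alignment
    if (bump + n > HEAP_SIZE) return nullptr;  // real GC: collect, retry
    void* p = heap + bump;
    bump += n;                                 // one add, no list walk
    return p;
}

Compare a first-fit malloc, which has to walk a chain of free blocks 
looking for one that's big enough.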

If you're looking for a description with reference to C++, try
ftp://gatekeeper.research.compaq.com/pub/DEC/SRC/research-reports/SRC-102.pdf
which is probably about as extensive and detailed as you need. Note 
that this paper, too, is 15 years old, and people haven't stopped 
researching.

Note that [Zorn 92] in the citations discusses actual measurements of 
actual efficiency. If you don't want to do the googling, it's here:
http://www.cs.ubc.ca/local/reading/proceedings/spe91-95/spe/vol23/issue7/spe836.pdf
This paper is also 15 years old, so 15 years ago we already knew how 
to make GC more efficient than manual management, including manual 
management with reference counting. Note that cfrac was written using 
reference counting, and when the researcher changed it to use GC, the 
runtime went from 215 seconds to 175 seconds. And that's conservative 
GC, meaning the collector gets no help from the language, which isn't 
the norm for GCed languages.
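
For concreteness, "conservative" means the collector scans raw memory 
for anything that looks like a pointer, Boehm-style, rather than being 
told where the pointers are. A sketch of the idea, with hypothetical 
heap_objects and marked sets standing in for a real collector's data 
structures:

#include <cstdint>
#include <set>

static std::set<std::uintptr_t> heap_objects;  // live allocation addresses
static std::set<std::uintptr_t> marked;

// Treat every word in [lo, hi) as a potential pointer.
void scan_stack(const std::uintptr_t* lo, const std::uintptr_t* hi) {
    for (const std::uintptr_t* p = lo; p < hi; ++p)
        if (heap_objects.count(*p))  // looks like a heap address?
            marked.insert(*p);       // conservatively keep it alive
}

An integer that merely happens to look like a heap address keeps that 
object alive; that's the price of getting no help from the language.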

Wow! Look!  Actual Data!  But don't mind me, because I don't know 
anything about the topic at all, no. Because, ya know, I just make stuff 
up without ever having read any research in the field I'm talking about, 
thinking that if I don't understand it already, it must not be important 
enough to learn about.

Of course, the performance relative to explicit management depends in 
part on the algorithm your versions of new and delete use. What 
algorithm are you comparing against? Do you even know what algorithm 
your libraries use, or its order of efficiency? Have you tried 
replacing it with a faster one?
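
One way to answer that last question empirically: replace the global 
operator new/delete and measure. A sketch (this one just counts calls 
and forwards to malloc/free; linking in a genuinely different 
allocator is the same exercise):

#include <cstdio>
#include <cstdlib>
#include <new>

static unsigned long g_allocs = 0;

// Replace the global throwing operator new.
void* operator new(std::size_t n) {
    ++g_allocs;
    if (void* p = std::malloc(n ? n : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept { std::free(p); }

int main() {
    int* x = new int(42);
    delete x;
    std::printf("allocations so far: %lu\n", g_allocs);
}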

http://gcc.gnu.org/ml/gcc/2002-08/msg00577.html
Wow. Linus himself saying "The gcc allocators have just always been 
bad." So what are you comparing your manual-allocation efficiency to?

-- 
   Darren New / San Diego, CA, USA (PST)
     Just because you find out you are
     telepathic, don't let it go to your head.

