  Re: jr's large csv and dictionary segfault in the povr fork.  
From: William F Pokorny
Date: 26 Mar 2023 09:47:08
Message: <64204cdc$1@news.povray.org>
On 3/26/23 05:55, jr wrote:
> wondering whether it's possible we're "out-running" the (C++) garbage collector,
> and when the macro is called, it then sees "inappropriate" memory on occasion?

There isn't a garbage collector in C++/C. Memory is allocated when 
needed and released when no longer needed.

Your question, though, is still a good one. It takes considerable time to 
walk through all the allocations and free them, more or less unwinding 
all the allocations(a). It should be that nothing new parser related 
proceeds until the memory is freed, but...

Today the memory free-up and re-allocation happens in a big way when we 
move from frame to frame in an animation, because one parsing thread goes 
away(a) and another gets created for the next frame. Parsing itself is 
always single threaded, unlike most other parts of POV-Ray, so we should 
not see multi-threading issues per se.

What I suspect too is that we are perhaps sometimes seeing not quite (or 
perhaps incorrectly) initialized new parser memory that still contains 
data from the previous parser thread. That could explain why, once we see 
fail points, they sometimes repeat the same fail signature for a while.

Aside: I've gotten another two complete povr animation passes through 
with those changes to foreach.inc. Magic, but still real magic! FWIW. :-)

Bill P.

(a) - Back in my working years we were using a large, internally 
developed, interactive tool. On its conversion to C++ we got frustrated 
because it took forever to exit the application, as the memory was 
painstakingly released bit by bit. The developers solved the problem by 
intentionally crashing out of the application and letting the OS clean 
up the process's memory! ;-)

Anyhow. There is a performance cost to maintaining a, sort of, minimum 
memory footprint over time (as there is for garbage-collected memory 
management when it kicks in). I've wondered how much time we burn on 
memory management alone. Plus C++, because it tends to allocate as 
needed, ends up with bits and pieces of things scattered all over 
physical memory, where it would be much better for performance if 
related memory were allocated (or re-allocated) in big contiguous 
blocks. Newer C++ versions have features intended to help with this 
memory fragmentation issue. Ah, whatever, I guess. All still well down 
on my todo / to-play-with list.



Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.