Re: RAM consumption increasing steadily. Rendering 100k images
From: clipka
Date: 7 May 2016 14:18:36
Message: <572e317c@news.povray.org>
On 07.05.2016 at 10:32, Jaime Vives Piqueres wrote:

>> I wouldn't be surprised if the memory leaks turned out to be related
>> not so much to /what/ you populate your scene with, but /how/.
> 
>   First, let's see if I understand this correctly: in an animation where
> there is exactly the same number of objects, and only some objects move,
> the memory consumption should stay stable from the first frame, shouldn't it?
> 
>   If so, I see a little increase from frame to frame in a test I did:
> over 1000 frames, the memory increased by some 300K. And in this special
> case, there is only /what/, no /how/... the movement was calculated
> beforehand with the Bullet physics playground, so there are no
> calculations, no functions, no macros, no splines -- only a different
> include for each frame with the same objects using different matrix values.

Au contraire -- loading 1000 different include files, one for each
frame, is a /remarkable/ "how"!

It would be interesting to know whether you see the same memory increase
if all frames include the same file.

Because, suppose POV-Ray keeps some record on each and every include
file it has seen since its start, and doesn't forget between frames; in
that case an increase in memory consumption is /exactly/ what we'd have
to expect.

And as a matter of fact POV-Ray for Windows /does/ have a mechanism that
may do exactly that, and intentionally so: Ever noticed how, if you try
to render an include file, POV-Ray offers to instead render the POV
file that you had previously rendered and which includes that file?
Obviously it needs to do /some/ bookkeeping for that stunt.
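A minimal sketch of that kind of bookkeeping -- purely hypothetical, not POV-Ray's actual code -- shows why memory would grow with every distinct include file ever seen:

```python
# Hypothetical bookkeeping: remember which scene file included which
# include file, and never clear the table between frames.
# All names here are illustrative, not POV-Ray internals.
include_to_scene = []   # naive list of (include file, scene file) pairs

def register_include(include_path: str, scene_path: str) -> None:
    """Record the association; entries are never removed."""
    include_to_scene.append((include_path, scene_path))

# One distinct include file per frame: the table grows by one entry
# every frame, so memory use rises linearly over the animation.
for frame in range(1000):
    register_include(f"frame_{frame:04}.inc", "scene.pov")

print(len(include_to_scene))   # 1000 entries after 1000 frames
```

If all frames included the same file instead, a mechanism that deduplicates entries would stay flat -- which is exactly what the suggested experiment would distinguish.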

300K per 1000 frames, that would be 300 bytes per frame, which doesn't
sound too unreasonable for that case: Suppose the mechanism stores the
absolute paths; then a typical file name may have a length of, say, 70
characters. Now suppose the mechanism is part of the code that uses
UTF-16 for encoding, requiring at least 2 bytes per character; then
you'll need 140 bytes per filename. Now consider that the mechanism
needs to associate include file names with scene file names, and imagine
that it naively stores a straightforward list of filename pairs (include
file vs. scene file), and you would have 280 bytes of "payload" data stored
for each include file ever seen. An overhead of a few pointers will
easily get you from there to 300 bytes.
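The estimate above can be written out as a quick back-of-envelope calculation; the path length, UTF-16 encoding, and pair layout are the speculative assumptions from the paragraph above, not measured POV-Ray behaviour:

```python
# Back-of-envelope check of the per-frame memory estimate.
# All figures are assumptions from the discussion, not measured values.

PATH_LENGTH = 70    # assumed typical absolute path length, in characters
UTF16_BYTES = 2     # UTF-16 needs at least 2 bytes per character

bytes_per_filename = PATH_LENGTH * UTF16_BYTES   # 140 bytes per filename
bytes_per_pair = 2 * bytes_per_filename          # include + scene file: 280 bytes
pointer_overhead = 300 - bytes_per_pair          # ~20 bytes left for a few pointers

observed_growth = 300_000   # ~300K over the whole test
frames = 1000
print(observed_growth // frames)   # 300 bytes per frame
print(bytes_per_pair)              # 280 bytes of "payload" per include file
```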

