On 05.12.2016 at 18:23, Bald Eagle wrote:
> Probably somewhat related to:
>
> http://news.povray.org/povray.windows/thread/%3Cweb.56edd77df9a19c455e7df57c0%40news.povray.org%3E/
>
> When I begin a render on a large size resolution image (10240 x 7680), POV-Ray
> seems to "hiccup" - it pauses, I get the spinny torus thing, the title bar says
> "[unresponsive]" and then it kicks in, I get the usual messages, and the parsing
> continues as usual. This seems odd to me and less related to the delayed stopping,
> because the rendering hasn't started yet - because the parsing hasn't even begun
> yet.
>
> Has anyone else experienced this behaviour?
This is perfectly normal behaviour from a certain file size onward.

There are various data structures in POV-Ray that hold image data:
- The image buffer.
- The state file.
- The final image.
The image buffer is essentially a chunk of memory where POV-Ray
assembles the render results into a coherent image as they trickle in
from the render threads in somewhat random order. It can be viewed as a
kind of staging area for the image data, to get it into the right order
for processing it into the final image.
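The staging role of the buffer can be sketched in a few lines of Python (a hypothetical illustration, not POV-Ray's actual internals): results arrive in arbitrary order, and each one is simply filed into its final position.

```python
import random

# Hypothetical sketch: render threads deliver finished pixels in
# essentially random order; the image buffer files each one into its
# final position so the data ends up in a coherent order.
WIDTH, HEIGHT = 8, 4
buffer = [[None] * WIDTH for _ in range(HEIGHT)]

# Simulate results trickling in out of order from the render threads.
results = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
random.shuffle(results)

for x, y in results:
    buffer[y][x] = (x, y)  # stand-in for the computed pixel colour

# Once every slot is filled, the buffer holds a coherent image,
# ready to be processed into the final image file.
assert all(p is not None for row in buffer for p in row)
```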
By default the image buffer is kept in memory for the sake of speedy
random access, as POV-Ray needs to do a lot of jumping around in it,
which isn't exactly the thing you want to do on a hard drive. However,
this requires that the entire image buffer (which requires 20(*) bytes
per pixel) fits inside the CPU's address space alongside the scene data.
On a 32-bit machine, this would typically limit the image size to just a
little over 12k x 12k pixels, and even less for complex scenes. (Even on
64-bit machines, having the scene plus image buffer exceed the physical
memory limit may not be a good idea.)
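To put numbers on the original poster's case: at the quoted 20 bytes per pixel, a 10240 x 7680 buffer already needs about 1.5 GiB, and a back-of-the-envelope check (a sketch, assuming roughly 3 GiB of usable 32-bit address space) lands near that 12k x 12k figure.

```python
import math

BYTES_PER_PIXEL = 20  # figure quoted above; the exact layout may vary

# The 10240 x 7680 image from the original post:
buffer_bytes = 10240 * 7680 * BYTES_PER_PIXEL
print(buffer_bytes)  # 1572864000 bytes, i.e. about 1.46 GiB

# Largest square image fitting in ~3 GiB of usable 32-bit address
# space, ignoring the scene data that must fit alongside it:
side = math.isqrt(3 * 2**30 // BYTES_PER_PIXEL)
print(side)  # a little over 12k pixels per side
```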
Therefore, if the image buffer size exceeds a somewhat arbitrary limit,
POV-Ray will fall back to keeping the image buffer on hard disk.
Since the required file capacity is known beforehand, and it is also
expected that all of it will be filled with data, POV-Ray grows the
corresponding file to the desired size right from the start, so that it
doesn't have to worry about adjusting the size as it writes data to
random positions within the file. The drawback is that the operation to
grow a file to a large size can take a few moments on Windows machines
-- and probably also on other machines if the file system doesn't
support sparse files.
And that's exactly the symptom you're seeing.
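The one-step growth can be reproduced with a short sketch (hypothetical code, not what POV-Ray does internally): `ftruncate()` bumps the file's logical size immediately, and whether that returns instantly or stalls depends on whether the file system can leave the new range sparse.

```python
import os
import tempfile

# Pre-grow a scratch file to the full buffer size in one step, the
# way a file-backed image buffer might be allocated up front.
size = 10240 * 7680 * 20  # 20 bytes per pixel, as quoted above

fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, size)        # logical size jumps at once...
    print(os.path.getsize(path))  # 1572864000
finally:
    os.close(fd)
    os.remove(path)

# ...but on file systems without sparse-file support the OS must
# zero-fill the whole range here, which is the pause observed
# before parsing even starts.
```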
The state file is a separate beast: It is used exclusively for render
abort/continue, and is essentially a log of internal messages passed
between the front- and back-end, which includes render results, but also
other status information relevant to re-create POV-Ray's status in a
continue scenario.
The state file starts out at a size of zero, and grows from there as
needed. And while it also effectively writes out image data to the hard
disk like a file-backed image buffer, it does so in a sequential
fashion, which is much easier on the system's file I/O load.
BTW, creation of the state file can be disabled using the `-CC` option.
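The sequential nature of that log can be sketched like so (hypothetical message format; the real front-end/back-end protocol differs):

```python
import json
import tempfile

# Hypothetical sketch of a state-file-style log: every message is
# appended in arrival order; nothing is written to random offsets.
messages = [
    {"type": "options", "width": 10240, "height": 7680},
    {"type": "pixel_block", "x": 0, "y": 0, "data": "..."},
    {"type": "pixel_block", "x": 32, "y": 0, "data": "..."},
]

with tempfile.TemporaryFile("w+") as log:
    for msg in messages:
        log.write(json.dumps(msg) + "\n")  # strictly sequential I/O
    log.seek(0)
    replayed = [json.loads(line) for line in log]

# In a continue scenario, replaying the log re-creates the
# renderer's state, including the render results so far.
assert replayed == messages
```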
The final image file is yet another thing: This is where the image data
ultimately ends up, ready to be loaded by other applications, but
only as soon as image computation has completed and all the image data
has been assembled in the image buffer.
Mental notes to myself:
- On 64-bit systems it is probably a good idea to always keep the image
buffer in memory. Even if it exceeds the physical memory size, the
accesses should be sporadic and localized enough to not displace the
scene data from physical memory (which would result in fatal thrashing),
and while the file-backed image buffer absolutely positively requires
file I/O for each access, holding the image buffer in virtual memory
will give the operating system a fighting chance to avoid at least a
portion of those file I/O operations. Allowing the user to choose
file-backed mode may be useful in fringe cases, but this can easily be
achieved by choosing a higher default for the setting while still
allowing it to be set lower.
- Growing the file-backed image buffer to the final size in a single
instant is probably a bad idea (at least on file systems that don't
support sparse files): Sure, if we grew the file incrementally, the
operating system might have to perform about the same workload, just
spread out over time; however, this is mainly file I/O workload, which
wouldn't stop the render threads from jogging along -- provided they
have been started already.
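The first note's "keep it in virtual memory" idea can be sketched with an anonymous memory mapping (a hypothetical demo, not POV-Ray code): the OS pager backs the pages, so cold regions can be paged out lazily instead of the application issuing explicit file I/O on every access.

```python
import mmap

BYTES_PER_PIXEL = 20
width, height = 1024, 768  # small demo size

# Anonymous mapping: demand-paged virtual memory managed by the OS,
# which may page out cold regions rather than forcing the application
# to perform explicit file I/O for every pixel access.
buf = mmap.mmap(-1, width * height * BYTES_PER_PIXEL)

def write_pixel(x, y, payload):
    # Hypothetical pixel writer: place one record at its final offset.
    offset = (y * width + x) * BYTES_PER_PIXEL
    buf[offset:offset + BYTES_PER_PIXEL] = payload

write_pixel(3, 2, b"\x01" * BYTES_PER_PIXEL)
first_byte = buf[(2 * width + 3) * BYTES_PER_PIXEL]
buf.close()
print(first_byte)  # 1
```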
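And the second note's incremental growth can be sketched the same way (again hypothetical): extending the file row by row spreads the zero-fill cost over the render instead of paying it all before parsing begins.

```python
import os
import tempfile

ROW_BYTES = 10240 * 20  # one row of the large image, 20 bytes/pixel

# Hypothetical alternative: grow the buffer file one row at a time
# as rows complete, spreading the growth cost over the render.
fd, path = tempfile.mkstemp()
try:
    for row in range(4):  # just a few rows for the demo
        os.ftruncate(fd, (row + 1) * ROW_BYTES)
    final_size = os.path.getsize(path)
finally:
    os.close(fd)
    os.remove(path)

print(final_size)  # 819200
```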