POV-Ray : Newsgroups : povray.advanced-users : Problem rendering large image : Re: Problem rendering large image
  Re: Problem rendering large image  
From: Le Forgeron
Date: 19 Oct 2012 05:30:11
Message: <50811da3$1@news.povray.org>
On 18/10/2012 19:40, Warp wrote:
> Jos leys <jos### [at] pandorabe> wrote:
>> When attempting to render a 6000*8000 pixel image (V 3.7.0 RC6 msvc10.win64), I
>> get the following message :
> 
>> Intermediate image storage backing file write failed at creation.
>> Failed to start render: Cannot access data in file.
> 
> I'm guessing that the file is hitting the magic 2-gigabyte line, but
> I'm not sure why that should matter, especially since you are using
> the 64-bit version of POV-Ray. (Which filesystem are you using? The
> only one that has a 2-gigabyte limit is FAT32, but it would be really
> strange if your system were using that. Could still be a possibility.)
> 
> Someone who has more detailed knowledge about the internal working of
> the intermediate file should be able to give a better answer.
> 

Hitting the 2-gigabyte line should normally produce the other message:
Intermediate image storage backing file write/seek failed at creation.

(That is the code path doing the lseek64/SEEK_SET call to reach the tail
of the file at creation: combined with the write that follows, it
allocates the full file on disk at creation time. You do not want the
write to fail later, after 72 hours of rendering, because the disk is
full ;-) (unless the OS handles it with a hollow/sparse file, which
Windows does not, IIRC).)

The message "Intermediate image storage backing file write failed at
creation." appears when the actual write of 3 integers (well, wider
than "int"; currently it is a "long") fails, or rather: write() should
return the number of bytes actually written, and it does not match the
requested size.

To sum up (oversimplifying the actual code):

long long last_position;
// value is about height * width * 5 * sizeof(COLC);
// it might get a bit more due to rounding of the blocking factor.
// It is about 2 400 000 000 in this case.
lseek64(the_file_descriptor, last_position, SEEK_SET);

long info[3];
info[0] = something; // the data structure size for a single pixel in the cache
info[1] = more;      // the width
info[2] = yet;       // the height
if (write(the_file_descriptor, &info[0], 3 * sizeof(long)) !=
    3 * sizeof(long)) { /* Your Failure */ }

Any clue why it might fail on Windows?
Is the lseek deferred until actual usage (which the write provides)?
A packing issue with the array of long? (But even if the written data
were bogus, we asked for X bytes to be written, so we should get X
bytes written, no more, no less. At worst, X could be shorter than the
coverage of info[]; or do you see a way to access bytes outside the
scope of info[]?)
Or was the call to write interrupted? Windows actually has to allocate
the whole file on disk now: if the allocation mechanism is slow, it can
take a lot of time, as it has to allocate and write all the needed
sectors; that is a write of a 2.4 GB file here. Windows is not known to
handle such a request (for a nearly empty file) as lightly as Linux.
(Depending on the actual filesystem, Linux might just fake it with a
hollow/sparse file, whereas Windows will actually materialize the plain
file right now: with a write to disk at 50 MB/s (my reference for hard
disk I/O speed), that is about 45 seconds of optimal I/O... maybe more,
as there are also the underlying filesystem structures to update.)

Is there a limit on Windows to the file size that a user can create?
(The man page for write mentions the RLIMIT_FSIZE resource limit.)


