Subject: Re: Maximum Resolution of Renders?
From: Le Forgeron
Date: 29 Sep 2010 13:25:09
Message: <4ca37675$1@news.povray.org>
On 29/09/2010 16:23, fillibar wrote:
> Does anyone know if there is an effective maximum to the resolution you can
> render at?
> 
> Background of question:
> I am rendering some very high-object images as well as some high-detail ones at
> times and would like to up the resolution significantly. I added a 12800x10240
> resolution that has been beneficial but the ultimate goal would be a 41000x41000
> image. However each time I have attempted to render that Pov-Ray has crashed on
> me. Even when using a single, simple test object (a plain white sphere that
> should use a single pixel). So I think this is running into a program limitation
> at present.

Sort of.
The issue seems to be in image/image.cpp, in Image::Create (w=41000,
h=41000, t unknown, f=true).
It turns out that it allocates RGBT_Float (t) as a MemoryRGBFTImage,
which does not even trigger the try/catch but segfaults directly.

A FILE_MAPPED_IMAGE_ALLOCATOR option looks possible, but it is not
active by default (configure does not know about it), and in beta 38 at
least, the relevant code does not compile when CPPFLAGS is set to
-DFILE_MAPPED_IMAGE_ALLOCATOR .

> 8GB RAM
> 16GB Virtual Memory
> 
> 

An in-memory RGBFT_Float image of 41000 x 41000 needs at least 5
floats (single precision) per pixel; that's a bit large for the poor
default implementation based on vector.
The iterator runs into trouble: even as a single memory block with
direct access, it would already need more than 31 gigabytes
(assuming a float of 4 bytes)...
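
For what it's worth, a minimal sketch of that size computation, done
entirely in size_t (the 5 channels and the 4-byte float are assumptions
matching the RGBFT layout above):

#include <cstddef>
#include <cstdio>

int main()
{
    // 41000 x 41000 pixels, 5 channels (R,G,B,F,T), 4 bytes per float
    size_t bytes = static_cast<size_t>(41000) * 41000 * 5 * sizeof(float);
    printf("%zu bytes = %.1f GB\n", bytes, bytes / (1024.0 * 1024.0 * 1024.0));
    // prints: 33620000000 bytes = 31.3 GB
    return 0;
}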

In the following code, replacing the (4) with (o) gives a nice process
using 31.3 GB.
But g++ complains if the line is written as "size_t o=41000*41000*5":

a.cpp: In function ‘int main()’:
a.cpp:9: warning: integer overflow in expression



#include <cstdio>
#include <vector>
using namespace std;

typedef vector<float, allocator<float> > PixelContainer;

int main()
{
    PixelContainer a;
    size_t o = 41000;   // build the element count in size_t, step by step,
    o *= 41000;         // so no intermediate result overflows a 32-bit int
    o *= 5;
    a.resize(4);        // replace 4 with o to really allocate ~31.3 GB
    printf("vector: %zu\nSizeof %zu\n", a.max_size(), sizeof(size_t));
    return 0;
}
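
(On this linux64 box that should print a max_size of 0x3FFFFFFFFFFFFFFF,
i.e. 4611686018427387903, and a sizeof(size_t) of 8; those are the limits
used in the PS below.)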


PS: arithmetic on unsigned int (which is what the RGBFTImage creator
uses) will never be promoted to size_t by the compiler itself. And that
does not prevent an overflow later: 41000² * 5 * 4 = 0x0007D3E87D00
(a 35-bit value, well past what 32 bits can hold), with the low word
"D3E87D00" being, as a signed 32-bit number, hugely negative.
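
A minimal sketch of that wraparound, outside the POV-Ray sources (the
variable names here are made up):

#include <cstdio>

int main()
{
    unsigned int w = 41000, h = 41000;
    // every operand is at most unsigned int, so the product is computed
    // modulo 2^32; the compiler never promotes it to size_t on its own
    unsigned int count = w * h * 5;   // true value 8405000000 wraps around
    printf("wrapped: %u = 0x%08X\n", count, count);
    // prints: wrapped: 4110032704 = 0xF4FA1F40
    return 0;
}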

In fact the resize parameter value is 0x01F4FA1F40.
With 5 being signed, the truncated low word may be taken as a signed
value (well, "F4FA1F40" is negative as a signed 32-bit word), which
probably leads, via sign extension, to requesting a resize to
0xFFFFFFFFF4FA1F40.
That's just too far for vector, as max_size is 0x3FFFFFFFFFFFFFFF.
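
And if that wrapped word does transit through a signed 32-bit variable
somewhere (an assumption; this only reproduces the speculation above in
isolation), the sign extension is easy to show:

#include <cstddef>
#include <cstdio>

int main()
{
    int n = static_cast<int>(0xF4FA1F40u);  // negative as a signed 32-bit int
    size_t s = n;                           // sign-extended when widened to 64 bits
    printf("0x%016zX\n", s);
    // prints: 0xFFFFFFFFF4FA1F40 -- well past vector's max_size
    return 0;
}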

Or does resizing to a negative size just empty the vector, leaving no
iterator? (Difficult to tell with -O3, and fastmath just removes all the
checks.)


PS2: I'm running on linux64 with a lot of memory (but not enough for
31.3 GB; at least I survive).

