In reply to news:39D1EF0C.943C7518@enst.fr...
| Gilles Tran wrote:
| >
| > Does it mean that a good strategy when rendering a large radiosity
| > image likely to cause "out of memory" problems is to render it in
| > batches?
| > +ER300 +C
| > +ER600 +C
| > +ER900 +C
| > etc.
| >
| > Each successive partial render picks up the image where the previous
| > one left off (solution not tested?).
| >
| No, because in that case either you didn't save the radiosity data
| and you'll see it in the final image, or the data will be reloaded
| each time and the last render will use as much memory as the whole
| render would have...
Seemed okay when I tested Gilles' idea. Memory remained constant across
all three partial renders. I also used save_file to keep the radiosity
info and passed it to the next segment with load_file. Of course you
can't start out with +C if a previous test render has already been done.
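For anyone who wants to try it, the setup was roughly along these lines
(binary name, file names and sizes are just examples; save_file and
load_file go in the radiosity block in MegaPOV, so check your version's
docs):

    megapov +Iscene.pov +Oscene.tga +W1200 +H900 +ER300
    megapov +Iscene.pov +Oscene.tga +W1200 +H900 +ER600 +C
    megapov +Iscene.pov +Oscene.tga +W1200 +H900 +ER900 +C

with something like this in the scene, swapping save_file for load_file
once the first pass has written the data:

    global_settings {
      radiosity {
        save_file "scene.rad"    // pass 1: write the gathered samples
        // load_file "scene.rad" // passes 2 and 3: reuse them instead
      }
    }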
The only problem was with post_process find_edges, which apparently
applies the change to each successive segment using only the first
segment. Same with depth; soft_glow seemed okay, though, and posterize
was almost right (not perfect). Those were the only ones I checked.
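The tests were along these lines (a MegaPOV post_process block in
global_settings; I'm writing the find_edges parameters from memory, so
treat the numbers as placeholders and check the MegaPOV docs for the
real parameter list):

    global_settings {
      post_process {
        // depth/normal/colour thresholds, line width, blend,
        // line colour -- placeholder values only
        find_edges 0.1, 0.1, 0.2, 1, 0, <0, 0, 0>
      }
    }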
The attached image shows what I mean.
I didn't see any mention of post_process not working right with partial
renders; maybe this is one of the reasons it didn't make it into POV-Ray
3.5.
Bob
Attachments:
Download 'radiosityagain2.jpg' (5 KB)
