Newsgroups: povray.general
Subject: Re: Which would be more efficient?
From: Alain
Date: 18 Aug 2004 20:22:07
Message: <4123f2af$1@news.povray.org>
Stefan Viljoen <rylan@ shared his thoughts with us on 18/08/2004 12:53:

>Hi guys
>
>I am having some problems with some of my scenes taking absurdly long to
>render (months). I suspect this is mostly due to stupid / inexperienced
>scene design (I am new to Pov).
>
>My question: how do you people optimise offscreen objects / meshes? I.e. parts
>of your scene that are not in the camera's field of view - a plane I suppose
>would make no difference, but what about complicated isosurfaces and so
>forth extending "around" the camera? Even just a bit?
>
>For example, is there any advantage to differencing an isosurface when it
>goes "offscreen" to prevent calculations being done for the missing piece,
>or is this in fact precisely the wrong thing to do?
>
>As far as I can reason out, POV does take "offscreen" stuff into account, since
>you can for example see reflections of offscreen objects onscreen (i.e. in
>the camera's viewfield).
>
>I am aware that I could precisely size stuff to fit exactly in the camera
>viewfield, but my specific problem is that I need a certain "part" of an
>isosurface and I do not have the mathematical ability to isolate only that
>part, for example. So I translate it until I have the "part" I want in the
>camera's viewfield. The isosurface is not much bigger than the camera's
>view angle (say about two units in its "flat" (screen plane) axis).
>
>Or is this exactly the wrong approach?
>
>My problem is that I see incredible scenes all the time that render literally
>thousands of times faster than my uncomplicated scenes, and I would
>desperately like to optimise my trace times.
>
>Thanks!
>  
>
Off-field objects don't take any time during the render unless they are
made visible by a reflection or refraction: no direct rays are shot at
them. They do, however, take some parse time.

When using isosurfaces, try to make the containing shape as tight as
possible. If that shape is too large, POV-Ray needlessly evaluates many
samples that lie outside the actual isosurface. Usually you use a box or
a sphere, but in some cases a torus, cone or another primitive can be
more efficient. Make a transparent copy of the contained_by shape in
your test renders to see how tight your containment really is. Check the
max_gradient, and set it as low as possible, as long as it doesn't cause
black spots in your shape.
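
For example, a tightly contained isosurface with a see-through copy of
its container could look like this; the function, box size and
max_gradient below are made-up values for illustration, not taken from
your scene:

// Minimal test scene - every name and number here is only an example.
#include "functions.inc"   // for f_noise3d

camera { location <0, 1.5, -4> look_at <0, 0.25, 0> }
light_source { <10, 20, -15> color rgb 1 }

#declare Bumpy =
  isosurface {
    function { y - f_noise3d(x*3, 0, z*3)*0.5 }
    // keep the container as tight around the surface as you can
    contained_by { box { <-2, -0.1, -2>, <2, 0.6, 2> } }
    // lower this until black holes appear, then back off a little
    max_gradient 2.5
  }

object { Bumpy pigment { color rgb <0.8, 0.7, 0.6> } }

// See-through copy of the contained_by box, to judge how tight it is.
box {
  <-2, -0.1, -2>, <2, 0.6, 2>
  pigment { color rgbt <1, 0, 0, 0.85> }
}
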
Do your test renders with only a pigment and no other texture
components, remove all filter and transmit, and use only one
light_source. Use a lower quality setting to speed up composition and
placement tests.
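
One convenient way to switch between test and final textures is a flag
at the top of the scene file; the names and values below are only an
example of the idea:

#declare TestMode = on;   // set to off for the final render

#if (TestMode)
  // Plain pigment only: no normal, finish, filter or transmit.
  #declare Rock_Tex = texture { pigment { color rgb 0.6 } }
#else
  #declare Rock_Tex =
    texture {
      pigment { granite color_map { [0 color rgb 0.3] [1 color rgb 0.8] } }
      normal { bumps 0.4 scale 0.2 }
      finish { specular 0.3 reflection 0.1 }
    }
#end

// Test passes can also use a lower command-line quality, e.g. +Q3
// (no shadows, reflections or media); render the final image at +Q9.
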
If you know that an object can only be seen in a reflection (it is not
visible in a test render without reflections), give it a simplified
texture.
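
For instance (again only a sketch), a backdrop that is only ever seen
in a mirror can get by with a bare pigment:

// Mirror in front of the camera.
plane {
  z, 5
  pigment { color rgb 0 }
  finish { reflection 0.9 }
}

// Backdrop behind the camera, visible only in the mirror:
// a bare pigment is enough, no normal or fancy finish needed.
plane {
  z, -20
  pigment { checker color rgb 0.2, color rgb 0.9 scale 2 }
}
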
Lower max_trace_level a little and adjust adc_bailout to compensate.
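
In global_settings that could look like the following; the numbers are
only an example (the defaults are max_trace_level 5 and adc_bailout
1/255):

global_settings {
  max_trace_level 4   // fewer reflection/refraction bounces than the default 5
  adc_bailout 1/64    // give up sooner on rays that contribute very little
}
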
When using an area_light, use adaptive to optimize the shadow
evaluation. Don't replace the area_light with an array of point lights:
the many lights will take much more time to evaluate and will probably
produce more banding.
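
A typical adaptive area light looks something like this; the location,
size and sample counts are placeholders:

light_source {
  <200, 300, -150>
  color rgb 1
  // A 40x40 unit light, sampled on up to a 9x9 grid.
  area_light <40, 0, 0>, <0, 0, 40>, 9, 9
  adaptive 1   // subdivide the samples only where the shadow edge needs it
  jitter       // break up banding that remains in the soft shadows
}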

Alain


