Jim Holsenback <jho### [at] povrayorg> wrote:
> On 03/17/2011 02:01 PM, Trevor G Quayle wrote:
> > A full scene sample test rendering of my crevice grime experimentations with
> > textures, focal blur and lighting (lightdome of course...). The grime texture
> > is layered over the base texture with transparency. I always have difficulty
> > getting a smooth fading transition with transparency.
> >
> > Object height: 200
> > Normal angle: 85
> > Control depth: 10
> > Surface offset: +0.1
> > Resolution: 600 (based on bounding box diagonal of ~314)
> >
> > -tgq
>
> There seems to be a great deal of accuracy in this example. How tough is
> it to tune from one case to the next? Nice job btw ...
Thanks.
As noted in the thread, I found a way to convert my data to a pigment (save to
array -> export to df3 file -> use the density_file pattern function). This
speeds up the render time considerably, since I no longer have to render
100,000 mesh 'pixels', each intersected with the object. So really the majority
of the time is parse time, which is a function of the input parameters.
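
For anyone curious, the df3 export step looks roughly like this. This is a
minimal sketch, assuming the grime values are 0..1 floats in an N x N x N
array called Grime; the array name and grid size are illustrative, not my
actual code:

#declare N = 64;                       // illustrative grid size
#fopen DF3 "grime.df3" write
// df3 header: three 16-bit big-endian grid dimensions
#write (DF3, uint16be N, uint16be N, uint16be N)
// 8-bit voxel data, x varying fastest, then y, then z
#declare Z = 0;
#while (Z < N)
  #declare Y = 0;
  #while (Y < N)
    #declare X = 0;
    #while (X < N)
      #write (DF3, uint8 min(255, int(Grime[X][Y][Z]*255 + 0.5)))
      #declare X = X + 1;
    #end
    #declare Y = Y + 1;
  #end
  #declare Z = Z + 1;
#end
#fclose DF3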
Generally speaking, increasing the resolution has a squaring effect on parse
time, while increasing the number of sample traces per normal (which I didn't
specify here, but I typically use 16 or 32) has a direct linear effect.
Adjusting Normal Angle, Control Depth and Surface Offset will have some effect
on render time depending on the mesh (actually, currently they have very
little, as I have not implemented any optimization of the sampling, so each
successful surface trace runs all sample traces; I am planning to add adaptive,
recursive sampling to at least this aspect).
So for tuning, it is really a function of the parameters I am using, the main
one being Resolution: a higher Resolution gives more accurate (finer) results.
Ideally you want the density cells to be at least as small as, if not smaller
than, your image pixels to remove any blocky effect (trilinear interpolation of
the density file helps with this as well, though). Second to this is the number
of traces set per normal (how many sampling traces are rotated around each
normal).
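
For reference, using the resulting file as a layered grime texture looks
something like this (a minimal sketch; the object and base texture names and
the grime colour are illustrative). Keeping the colour constant and fading
only the transmit channel in the color_map is what helps give a smooth
transparency transition:

#declare BBMin = min_extent(Elephant);
#declare BBMax = max_extent(Elephant);
object {
  Elephant
  texture { BaseTexture }
  texture {
    pigment {
      density_file df3 "grime.df3"
      interpolate 1   // trilinear interpolation to smooth the cells
      color_map {
        [0.0 rgbt <0.10, 0.08, 0.06, 1.0>]  // no grime: fully transparent
        [1.0 rgbt <0.10, 0.08, 0.06, 0.2>]  // deep crevice: nearly opaque
      }
      // the density file occupies the unit cube, so stretch it over the object
      scale (BBMax - BBMin)
      translate BBMin
    }
  }
}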
So tuning the results is really a matter of how fast you can run test renders.
I usually bump down the quality settings to start.
However, once you get a feel for how the parameters affect the results, tuning
becomes somewhat intuitive based on the model's detailing, the relative scale
of the model, and what you want to achieve. E.g. if you want to catch more
shallow details like the back of the elephant, you would increase the normal
angle towards 90; if you want to catch larger crevices or holes, you would
increase the control depth.
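
As a purely hypothetical sketch of what the interface might end up looking
like (the macro name and parameter order are invented for illustration; the
values are the settings listed at the top of this thread):

// hypothetical: object, normal angle, control depth, surface offset,
// resolution, traces per normal
CreviceGrime(Elephant, 85, 10, 0.1, 600, 16)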
This has been fun and educational to develop. I hope to release a macro for
others to give a whirl soon, once I am satisfied with the level of development.
Hopefully at some time in the future I can work on source-coding it for
possible future releases, if there is interest.
-tgq