Darren New wrote:
> Color me unimpressed. Maybe it's because I'm not an expert, but some of
> the sub-surface scattering stuff is the only stuff that looks
> particularly good to me. Balanced against most of their proud gallery
> being obnoxiously grainy, I don't see it as a win just from the photos.
>
> Is it possible to automatically know when a scene is good enough? Or
> does it take human intervention to say "ok, stop now and move on to the
> next frame"?
For animations this is a show-stopper. Picture quality *must* be
consistent from frame to frame, and that rules out any perceptible
degree of graininess. Letting an unbiased renderer run until the grain
is gone is impractical, because it requires a human to monitor the
render and to judge quality consistently from one frame to the next.
The only way an unbiased renderer could be used in animation work is to
render the first frame of each shot, decide on an acceptable quality
level, allow that much time for every subsequent frame, and hope that
the movement of some object or the camera doesn't raise the time
requirement significantly.
(And if you want grain for some reason, other renderers, and
post-processors too, can supply it in a way that is much easier to control.)
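The calibrate-then-budget workflow described above can be sketched in a
few lines. This is a toy illustration, not any real renderer's API; the
function names and the progressive-pass model are my own assumptions:

```python
import time

def render_pass(frame):
    # Hypothetical stand-in for one progressive refinement pass of an
    # unbiased renderer; a real pass would accumulate more samples.
    time.sleep(0.01)

def calibrate_budget(first_frame, passes_needed):
    # Render the first frame of the shot to the quality a human signed
    # off on, and record how long that took.
    start = time.monotonic()
    for _ in range(passes_needed):
        render_pass(first_frame)
    return time.monotonic() - start

def render_shot(frames, budget_seconds):
    # Give every remaining frame the same wall-clock budget, and hope
    # that object or camera motion doesn't make later frames converge
    # more slowly than the calibration frame did.
    for frame in frames:
        deadline = time.monotonic() + budget_seconds
        while time.monotonic() < deadline:
            render_pass(frame)

budget = calibrate_budget(0, passes_needed=5)
render_shot(range(1, 4), budget)
```

The weakness is exactly the one noted above: the budget is only valid
for frames whose cost of convergence matches the calibration frame.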
Ray-tracing and z-buffering deliver consistent quality from frame to
frame, which is why animators use those rendering algorithms. Pixar's
renderer uses a z-buffering architecture, combined with ray-tracing for
certain situations; their documentation says that the one real drawback
of ray-tracing is that the entire scene must fit in memory (which for
Pixar's work is a show-stopper, since their scenes can use insane
amounts of data). To this I'd add that z-buffering handles displacement
mapping much more efficiently than ray-tracing does.
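The displacement point can be illustrated with a toy sketch. In a
REYES-style z-buffer pipeline, displacement is just a local pass over
freshly diced vertices before they are shaded and hidden-surface
resolved; a ray tracer instead has to be able to intersect the
displaced geometry, which typically means building and holding
acceleration structures for it. The dicing and displacement code below
is my own simplification, not Pixar's algorithm:

```python
import math

def dice(patch, n):
    # Tessellate a bilinear patch (given by four corner points) into an
    # (n+1) x (n+1) grid of vertices -- a toy stand-in for REYES-style
    # dicing into micropolygons.
    p00, p10, p01, p11 = patch
    grid = []
    for j in range(n + 1):
        v = j / n
        for i in range(n + 1):
            u = i / n
            grid.append(tuple(
                (1 - u) * (1 - v) * a + u * (1 - v) * b
                + (1 - u) * v * c + u * v * d
                for a, b, c, d in zip(p00, p10, p01, p11)))
    return grid

def displace(grid, height):
    # Displacement is a single local pass over the diced vertices,
    # applied just before z-buffering; nothing global is rebuilt.
    return [(x, y, z + height(x, y)) for x, y, z in grid]

flat = dice(((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)), n=4)
bumpy = displace(flat, lambda x, y: 0.1 * math.sin(math.pi * x))
```

Each bucket can dice, displace, shade, and discard its micropolygons in
turn, which is what keeps the memory cost bounded.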
Regards,
John