> Is it still ray-tracing? You only get an image that is based on a ray-traced
> one. But if you end up with what you want, then OK.
It's not raytracing, indeed; it's post-processing. The post-processing
included in MegaPOV (for example) isn't raytracing either. The purpose of
post-processing is precisely to allow transformations of the already-rendered
image, ideally using more data than the image itself (like depth or normals),
but it happens AFTER rendering, which makes it fast and flexible.
If we are discussing render-time effects, fine, but then let's not call it
POST-processing. Just call them 'effects' (and limit them to things
that *cannot* be done in post-processing).
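To make the distinction concrete, here is a minimal sketch of the kind of pass
I mean (in Python/NumPy, not POV-Ray or MegaPOV code): it takes an
already-rendered RGB image plus a per-pixel depth buffer and blends in fog as a
pure post-process, never touching the raytracer. The parameters and the
synthetic input data are made up for illustration.

import numpy as np

def depth_fog(rgb, depth, fog_color=(0.7, 0.8, 0.9), density=0.05):
    """Blend a fog colour into an already-rendered image.

    rgb   : float array of shape (H, W, 3), values in [0, 1]
    depth : float array of shape (H, W), distance from the camera
    """
    # Classic exponential fog: the farther the pixel, the more fog.
    fog_amount = 1.0 - np.exp(-density * depth)      # (H, W)
    fog_amount = fog_amount[..., np.newaxis]         # (H, W, 1)
    fog = np.asarray(fog_color, dtype=rgb.dtype)
    return rgb * (1.0 - fog_amount) + fog * fog_amount

if __name__ == "__main__":
    # Synthetic data standing in for the renderer's output and depth pass.
    h, w = 120, 160
    rgb = np.random.rand(h, w, 3)
    depth = np.linspace(1.0, 50.0, h * w).reshape(h, w)
    foggy = depth_fog(rgb, depth)
    print(foggy.shape, float(foggy.min()), float(foggy.max()))

Note that nothing here depends on how the image was rendered; the whole effect
runs on the output buffers alone, which is exactly what makes it fast.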
Nevertheless, simultaneous output of the rendered image and of depth, normal, ...
data would allow extremely interesting things to be done (besides being easy to
implement).
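As one illustration of what a normal pass would allow, here is a rough sketch,
again in Python/NumPy and entirely hypothetical, of a toon-style outline built
from the normals alone: an edge is drawn wherever neighbouring normals disagree
strongly, something that is hard to recover from the colour image by itself.

import numpy as np

def normal_outlines(normals, threshold=0.3):
    """Detect silhouette/crease edges from a per-pixel normal buffer.

    normals : float array of shape (H, W, 3), unit normals per pixel
    Returns a boolean mask that is True where an outline should be drawn.
    """
    # Dot product between each pixel's normal and its right/down neighbour;
    # a low dot product means the surface orientation changes sharply.
    dot_x = np.sum(normals[:, :-1] * normals[:, 1:], axis=-1)
    dot_y = np.sum(normals[:-1, :] * normals[1:, :], axis=-1)

    edges = np.zeros(normals.shape[:2], dtype=bool)
    edges[:, :-1] |= dot_x < (1.0 - threshold)
    edges[:-1, :] |= dot_y < (1.0 - threshold)
    return edges

if __name__ == "__main__":
    # Synthetic normal buffer: a flat plane with a sharp fold in the middle.
    h, w = 100, 100
    normals = np.zeros((h, w, 3))
    normals[:, : w // 2] = (0.0, 0.0, 1.0)
    normals[:, w // 2 :] = (0.0, 1.0, 0.0)
    print("outline pixels:", int(normal_outlines(normals).sum()))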
(BTW, now that I think of it, being able to simultaneously output images with
different settings would be great too, e.g. rendering an image both with and
without media while doing the intersection calculations only once; of course,
that gets complicated...)
Fabien.