Gilles Tran wrote:
>
> Very possibly, Mick just meant that certain renderers (and 3D hardware) now
> support a form of displacement mapping that is associated with on-the-fly
> subdivision/tesselation/whatever, so that one can use low poly meshes and
> "shape" them with maps. See for instance:
> http://www.edharriss.com/tutorials/tutorial_xsi_text_displacement_map/extrusion.htm
> http://www.rhythm.com/~ivan/dispMap.html
I know what subdivision and displacement mapping are, but that's
completely different in a raytracer than in a scanline renderer. What's
commonly advertised as 'subpixel displacement mapping' simply means not
drawing the large triangles on the screen directly, but subdividing and
displacing them until they are smaller than the pixels of the resulting
image, and only then drawing them.
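Roughly, the scanline approach amounts to something like the following
C++ sketch. This is only an illustration: project(), displaceHeight()
and rasterize() are placeholder stand-ins for the renderer's actual
camera transform, displacement map lookup and triangle drawing.

#include <cmath>
#include <algorithm>

struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Placeholder camera: a simple pinhole projection onto an 800x600 image.
Vec2 project(const Vec3 &p)
{
    double d = (p.z != 0.0) ? p.z : 1e-6;
    return { 400.0 + 400.0 * p.x / d, 300.0 + 400.0 * p.y / d };
}

// Placeholder displacement map: a procedural bump instead of an image.
double displaceHeight(const Vec3 &p)
{
    return 0.05 * std::sin(10.0 * p.x) * std::cos(10.0 * p.y);
}

Vec3 displace(const Vec3 &p, const Vec3 &n)
{
    double h = displaceHeight(p);
    return { p.x + n.x * h, p.y + n.y * h, p.z + n.z * h };
}

// Stub; a scanline renderer would shade and draw the micro-triangle here.
void rasterize(const Vec3 &, const Vec3 &, const Vec3 &) {}

double screenEdge(const Vec3 &a, const Vec3 &b)
{
    Vec2 pa = project(a), pb = project(b);
    return std::hypot(pa.x - pb.x, pa.y - pb.y);
}

// Subdivide until every edge projects to less than a pixel, then displace
// the corners and draw the resulting micro-triangle.  The depth cap only
// guards against degenerate cases near the camera plane.
void renderDisplaced(const Vec3 &a, const Vec3 &b, const Vec3 &c,
                     const Vec3 &n, int depth = 0)
{
    double e = std::max({ screenEdge(a, b), screenEdge(b, c), screenEdge(c, a) });
    if (e < 1.0 || depth > 20) {
        rasterize(displace(a, n), displace(b, n), displace(c, n));
        return;
    }
    Vec3 ab = { (a.x+b.x)/2, (a.y+b.y)/2, (a.z+b.z)/2 };
    Vec3 bc = { (b.x+c.x)/2, (b.y+c.y)/2, (b.z+c.z)/2 };
    Vec3 ca = { (c.x+a.x)/2, (c.y+a.y)/2, (c.z+a.z)/2 };
    renderDisplaced(a, ab, ca, n, depth + 1);
    renderDisplaced(ab, b, bc, n, depth + 1);
    renderDisplaced(ca, bc, c, n, depth + 1);
    renderDisplaced(ab, bc, ca, n, depth + 1);
}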
In a raytracer you have to choose a subdivision depth a priori. You can
try to do this adaptively depending on the distance from the camera, but
that won't work for indirect rays (reflections etc.). Also note that,
depending on how much memory you reserve for caching the subdivision
data, you may end up subdividing the same triangle more than once.
In the end an already displaced mesh (as long as it fits into memory,
of course) will probably always be faster in a raytracer than
displacement at render time.
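For comparison, baking the displacement into the mesh is just a single
pass over the (already subdivided) vertices before tracing starts; a
minimal C++ sketch, assuming a simple vertex/normal mesh layout and the
same kind of placeholder height function:

#include <vector>
#include <cmath>

struct Vec3 { double x, y, z; };

struct Mesh {
    std::vector<Vec3> vertices;
    std::vector<Vec3> normals;   // one normal per vertex
    // triangle indices omitted
};

// Placeholder displacement lookup; a real scene would sample an image map.
double displaceHeight(const Vec3 &p)
{
    return 0.05 * std::sin(10.0 * p.x) * std::cos(10.0 * p.y);
}

// Bake the displacement into the vertex positions once, before tracing.
// The subdivision level is whatever the mesh was built with, i.e. it is
// fixed a priori, and afterwards no per-ray subdivision work is needed.
void bakeDisplacement(Mesh &m)
{
    for (std::size_t i = 0; i < m.vertices.size(); ++i) {
        double h = displaceHeight(m.vertices[i]);
        m.vertices[i].x += m.normals[i].x * h;
        m.vertices[i].y += m.normals[i].y * h;
        m.vertices[i].z += m.normals[i].z * h;
    }
}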
> Other renderers (such as C4D) only displace the existing geometry so that
> any necessary subdivision has to be done (selectively if possible) before
> displacement by the user.
This is already possible in POV:
http://jgrimbert.free.fr/pov/patch/tessel/index.html
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 21 Mar. 2004 _____./\/^>_*_<^\/\.______