From my limited understanding, they usually work by building a Z-buffer of
the scene and then sampling the Z-buffer at each pixel to determine which
object is visible there and the normal of that object's surface at that
point. Standard lighting techniques then work out how the surface is lit,
and that information is passed to the shader, which computes the correct
color for the pixel. (In MAX, information such as the surface normal is also
sent, so the shader can make use of it too — for example, a
normal-dependent texture such as X-Ray.)
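The visibility step described above can be sketched in a few lines. This is a toy illustration, not MAX's actual renderer: each "object" here is just a flat rectangle at a constant depth, and all the names (`rasterize`, `obj_buffer`, and so on) are made up for the example.

```python
# Toy Z-buffer sketch: for every pixel, keep the smallest depth seen so far
# plus which object produced it, so a later shading pass can look up the
# visible surface at that pixel.

W, H = 8, 8
INF = float("inf")

z_buffer   = [[INF for _ in range(W)] for _ in range(H)]
obj_buffer = [[None for _ in range(W)] for _ in range(H)]  # "object channel"

def rasterize(obj_id, x0, y0, x1, y1, depth):
    """Write a flat rectangle into the buffers, keeping the nearest surface."""
    for y in range(y0, y1):
        for x in range(x0, x1):
            if depth < z_buffer[y][x]:      # nearer than anything so far?
                z_buffer[y][x] = depth
                obj_buffer[y][x] = obj_id

rasterize("far_quad",  0, 0, 8, 8, depth=10.0)  # background object
rasterize("near_quad", 2, 2, 6, 6, depth=5.0)   # overlaps, but closer

# Where the quads overlap, the nearer one owns the pixel.
print(obj_buffer[3][3])   # near_quad
print(obj_buffer[0][0])   # far_quad
```

Shading would then run once per pixel using whatever surface `obj_buffer` says is visible there, which is why the per-pixel normal is available to pass along to the shader.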
The nice thing about scanline renderers is that the Z-buffer can be saved
for use as a depth map for a SIS (single-image stereogram). In MAX this lets
you do a lot of things post-render, since it also gives direct access to the
Object channel, Effects ID channel, Normal channel and un-scaled color
channel. An example would be using the Object channel to apply a Video Post
effect such as Glow to just one object.
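The per-object idea can be sketched like this. To be clear, this is not MAX's Video Post implementation — the "glow" here is just a flat brightness boost with no blur, and the function and buffer names are invented for the example; the point is only that a saved object channel lets an effect be masked to one object after rendering.

```python
# Hypothetical post-render effect: brighten only the pixels whose saved
# object-channel entry matches a chosen object ID.

def apply_glow(color, obj_channel, target_id, boost=0.5):
    """Boost brightness (clamped to 1.0) only where obj_channel == target_id."""
    out = []
    for row_c, row_o in zip(color, obj_channel):
        out.append([min(1.0, c + boost) if o == target_id else c
                    for c, o in zip(row_c, row_o)])
    return out

# A tiny grayscale "render" and its matching object channel.
color = [[0.2, 0.2, 0.2],
         [0.2, 0.2, 0.2]]
objs  = [[1, 2, 2],
         [1, 1, 2]]

glowed = apply_glow(color, objs, target_id=2)
print(glowed[0])   # [0.2, 0.7, 0.7]
```

The same masking trick works with the other saved channels — e.g. the Normal channel for view-dependent recoloring, or the Z-buffer itself as the stereogram depth map mentioned above.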
--
Lance
The Zone
http://come.to/the.zone