The idea behind scanline rendering is completely different from
raytracing. In raytracing the program "shoots" rays from the camera
towards the scene and sees what they hit (this allows, among other
things, the objects to be mathematical functions, not necessarily
polygons).
Scanline rendering takes the opposite approach (what follows is a very
simplified explanation; the actual process used by current scanline
renderers is a bit more complicated):
The scene consists entirely of polygons. Each vertex of each polygon is
projected onto the viewing plane (the "screen"), turning the 3D vertices
into 2D points (with depth information).
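
As a rough sketch (assuming a simple camera at the origin looking along
the z axis at a viewing plane at distance d; all names here are
illustrative, not from any particular renderer), projecting one vertex
could look like this in C:

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

/* Screen-space result: x,y on the viewing plane, z kept as depth. */
typedef struct { double x, y, z; } Projected;

Projected project(Vec3 v, double d)
{
    Projected p;
    p.x = v.x * d / v.z;   /* the perspective divide */
    p.y = v.y * d / v.z;
    p.z = v.z;             /* depth is kept for z-buffering and shading */
    return p;
}

int main(void)
{
    Vec3 v = { 1.0, 2.0, 4.0 };
    Projected p = project(v, 1.0);
    printf("screen: (%g, %g)  depth: %g\n", p.x, p.y, p.z);
    return 0;
}
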
Then the polygons are drawn as if they were just 2D polygons on screen
(of course texturing and lighting take into account the depth
information and the normal vectors of the vertices).
Hidden surface removal is (usually, but not necessarily) achieved with
a z-buffering algorithm.
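
The z-buffer idea itself fits in a few lines. In this minimal sketch
(the buffer sizes and names are made up for illustration) a pixel is
drawn only if it is nearer than whatever has already been stored at
that position:

#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static double       zbuffer[WIDTH * HEIGHT];
static unsigned int framebuffer[WIDTH * HEIGHT];

/* Clear the depth buffer to "infinitely far" before each frame. */
void clear_zbuffer(void)
{
    int i;
    for (i = 0; i < WIDTH * HEIGHT; i++)
        zbuffer[i] = DBL_MAX;
}

/* Plot one pixel only if it is closer than what is already there. */
void plot_pixel(int x, int y, double z, unsigned int color)
{
    int i = y * WIDTH + x;
    if (z < zbuffer[i])
    {
        zbuffer[i]     = z;       /* remember the new nearest depth */
        framebuffer[i] = color;
    }
}
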
Drawing 2D polygons on screen (even taking into account the depth
information) is a lot faster than raytracing.
Current 3D acceleration cards use purely scanline rendering.
If you have heard the term "perspective correct texture mapping", it's
very closely related to scanline rendering: it's an algorithm that is
needed when drawing the polygons in order to get correct textures.
A full explanation is beyond this short text, but the sketch below
shows the core trick.
Raytracing doesn't need this to get correct texturing, since the
raytracing algorithm itself "automatically" samples the correct color
in the texture.
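
Very roughly (and with purely illustrative names), the trick is that
u/z, v/z and 1/z vary linearly across the polygon in screen space while
u and v themselves do not, so the rasterizer interpolates the former
and divides back per pixel:

/* Texture one horizontal span from x0 to x1, given the texture
   coordinates (u,v) and depths z at both endpoints. */
void textured_span(int x0, int x1,
                   double u0, double v0, double z0,
                   double u1, double v1, double z1)
{
    int x;
    for (x = x0; x <= x1; x++)
    {
        double t   = (x1 == x0) ? 0.0 : (double)(x - x0) / (x1 - x0);
        double uoz = (1 - t) * (u0 / z0) + t * (u1 / z1);   /* u/z is linear */
        double voz = (1 - t) * (v0 / z0) + t * (v1 / z1);   /* v/z is linear */
        double ooz = (1 - t) * (1.0 / z0) + t * (1.0 / z1); /* 1/z is linear */
        double u   = uoz / ooz;  /* divide back to the true coordinates */
        double v   = voz / ooz;
        /* ...look up the texel at (u, v) and plot the pixel... */
        (void)u; (void)v;
    }
}
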
--
char*i="b[7FK@`3NB6>B:b3O6>:B:b3O6><`3:;8:6f733:>::b?7B>:>^B>C73;S1";
main(_,c,m){for(m=32;c=*i++-49;c&m?puts(""):m)for(_=(
c/4)&7;putchar(m),_--?m:(_=(1<<(c&3))-1,(m^=3)&3););} /*- Warp -*/