POV-Ray : Newsgroups : povray.general : Scanline : Re: Scanline
  Re: Scanline  
From: Warp
Date: 12 Apr 2002 10:59:54
Message: <3cb6f66a@news.povray.org>
Corey Woodworth <cdw### [at] mpinetnet> wrote:
> Ok, I know what raytracing is, and I know what radiosity is

  Do you?
  Well, you don't say it, but from the "tone of voice" of that sentence it
seems probable that you think that radiosity is a rendering technique comparable
to raytracing and scanline-rendering.
  For some reason this seems to be quite a common misconception. Many
people even talk about "radiosity renderers" as if they were programs which
use "radiosity" to render the image.

  No. Radiosity is not a rendering technique at all. Radiosity is an algorithm
to calculate the lighting of surfaces (eg. by creating light maps). The
radiosity algorithm in itself does not generate an image, but just creates
an internal data structure which tells how surfaces are illuminated.
  Once you have this information, you have to use some *rendering technique*
in order to get the final image (using the lighting information). Usually
the rendering technique used to get the final image is scanline-rendering,
but raytracing is also commonly used (the latter when accurate
reflections and refractions are needed besides the global illumination).
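To make the distinction concrete, here is a minimal sketch (not POV-Ray code; the patch values are made-up toy numbers) of the classic radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j, solved by simple Jacobi iteration. Note what it produces: per-patch lighting values, not pixels.

```python
def solve_radiosity(E, rho, F, iterations=50):
    """Return per-patch radiosity values B -- a lighting data structure,
    not an image. A renderer must still turn these into pixels."""
    n = len(E)
    B = list(E)  # start from pure emission
    for _ in range(iterations):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# Two patches facing each other: patch 0 emits light, patch 1 does not.
E   = [1.0, 0.0]   # emission
rho = [0.5, 0.5]   # reflectance
F   = [[0.0, 0.4],  # form factor: fraction of light leaving i reaching j
       [0.4, 0.0]]
B = solve_radiosity(E, rho, F)
```

The unlit patch ends up with a nonzero radiosity purely from bounced light, which is exactly the inter-reflection information a renderer would then use.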

  Also: Radiosity is just *one* algorithm to calculate global illumination
(ie. the inter-reflection of light between surfaces). There are others as well.
For example, POV-Ray uses a stochastic Monte Carlo sampling method suitable for
raytracing mathematical surfaces (the radiosity algorithm is suitable only
for polygons). POV-Ray could not use the radiosity algorithm because it
supports many primitives other than just polygons.
  A third example of a global illumination algorithm is photon mapping (POV-Ray
does *not* support this for global illumination). Photon mapping is also
suitable for raytracing mathematical surfaces.
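The stochastic sampling idea can be sketched generically (this is an illustration, not POV-Ray's actual implementation): estimate the indirect light at a point by shooting random rays over the hemisphere above it and averaging what comes back. The constant-sky `incoming_radiance` stand-in and the sample count are assumptions.

```python
import math
import random

def incoming_radiance(direction):
    # Stand-in for "trace a ray and see how bright the scene is there".
    # Here: a constant-radiance 'sky' of 1.0 in every direction.
    return 1.0

def sample_hemisphere():
    # Cosine-weighted random direction (z is 'up' along the normal);
    # this importance-samples Lambert's cosine term.
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)
    phi = 2 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1 - u1))

def estimate_diffuse_light(samples=2000):
    # With cosine-weighted sampling the estimator is just the mean of
    # the sampled radiances (constant factors folded away for brevity).
    total = 0.0
    for _ in range(samples):
        total += incoming_radiance(sample_hemisphere())
    return total / samples
```

Nothing here requires the surface to be a polygon; all it needs is the ability to intersect rays with the scene, which is why this approach fits a raytracer of mathematical surfaces.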

> but what is scanline rendering? What are the pros and cons of it etc?

  Scanline-rendering is in some sense the opposite of raytracing.
  In raytracing you "shoot" rays from the camera, through the projection plane,
and see what they hit. That is, in a sense you start from the camera and
go towards the scene.
  In scanline-rendering the direction is the opposite: You calculate the
projection of the scene on the projection plane by projecting the scene
towards the camera. In a sense you "move" the scene towards the camera until
it hits the projection plane.
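The two directions can be contrasted in a few lines of toy Python (the camera at the origin looking down +z, with the projection plane at a made-up distance d, is an assumption for illustration):

```python
def project_point(p, d=1.0):
    """Scanline direction: move a scene point onto the projection plane."""
    x, y, z = p
    return (x * d / z, y * d / z)   # perspective divide

def primary_ray(sx, sy, d=1.0):
    """Raytracing direction: a unit ray from the camera through the
    projection-plane point (sx, sy)."""
    length = (sx * sx + sy * sy + d * d) ** 0.5
    return (sx / length, sy / length, d / length)
```

Same geometry, opposite traversal: one maps scene points to the plane, the other marches from the plane out into the scene.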

  There's of course a catch in the latter method: You can only project
individual points onto the projection plane in a feasible way (projecting
mathematical surfaces would be just way too difficult).
  This is why scanline-rendering is almost exclusively limited to polygons.
You can only project the vertex points of the polygons onto the projection
plane. Then you "fill" the 2D-projections of these polygons.
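That "fill" step can be sketched for a single, already-projected triangle (a minimal version with integer vertex coordinates assumed for simplicity): for each horizontal scanline, find where the edges cross it and fill the pixels between the crossings.

```python
def fill_triangle(verts):
    """Yield (x, y) pixel coordinates covered by a 2D triangle."""
    ys = [y for _, y in verts]
    for y in range(min(ys), max(ys) + 1):
        xs = []
        for i in range(3):
            (x0, y0), (x1, y1) = verts[i], verts[(i + 1) % 3]
            if (y0 <= y < y1) or (y1 <= y < y0):  # edge crosses scanline
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) == 2:
            for x in range(round(min(xs)), round(max(xs)) + 1):
                yield (x, y)

pixels = set(fill_triangle([(0, 0), (4, 0), (0, 4)]))
```

The name "scanline rendering" comes from exactly this outer loop: the image is produced one horizontal scanline at a time.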

  This is where one of the advantages of scanline-rendering kicks in: Speed.
  Projecting points onto a plane and then filling 2D polygons is extremely
fast. If you don't use any more complicated algorithms, you can do
this for millions or even billions of polygons per second on modern computers.

  Of course getting just flat-colored polygons is not very rewarding. In order
to get a decent 3D image you need at least a simple lighting model as well as
texturing.
  Both things can be done in a rather fast way. There are many different
lighting models for polygons, such as Gouraud and Phong shading, which are
rather fast to calculate (especially with dedicated hardware). The same thing
applies to texturing. Even though perspective-correct texturing needs some
processing power to be real-time, current 3D hardware can do it pretty
quickly (as we can see in 3D games).
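The cheapness of Gouraud shading, for instance, comes from computing lighting only at the vertices and then linearly interpolating it across each scanline span; a toy sketch (the intensities and coordinates are made-up values):

```python
def gouraud_span(x0, c0, x1, c1):
    """Linearly interpolate vertex intensities c0..c1 over pixels x0..x1.
    No per-pixel lighting is evaluated -- just a cheap lerp."""
    if x0 == x1:
        return [(x0, c0)]
    return [(x, c0 + (c1 - c0) * (x - x0) / (x1 - x0))
            for x in range(x0, x1 + 1)]

span = gouraud_span(0, 0.2, 4, 1.0)
```

Each pixel costs one multiply-add, which is why such models map so well onto dedicated hardware.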

  So summarizing:

  Pros:
  - Speed!
  - Dedicated hardware.

  Cons:
  - Supports only polygons (and in more advanced algorithms surfaces which can
    be polygonized on the fly, for example NURBS surfaces).
  - Reflections, refractions and shadows are very complicated to calculate,
    and often limited (eg. usually you can't get multiple interreflections).

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -

