Subject: Re: Scanline
From: Corey Woodworth
Date: 12 Apr 2002 23:21:40
Message: <3cb7a444$1@news.povray.org>
"Warp" <war### [at] tagpovrayorg> wrote in message
news:3cb6f66a@news.povray.org...
> Corey Woodworth <cdw### [at] mpinetnet> wrote:
> > Ok, I know what raytracing is, and I know what radiosity is
>
>   Do you?
>   Well, you don't say it, but from the "tone of voice" of that sentence it
> is probable that you think that radiosity is a rendering technique
> comparable to raytracing and scanline-rendering.
>   For some reason this seems to be a quite common misconception. Many
> people even talk about "radiosity renderers" as if they were programs
> which use "radiosity" to render the image.
>
>   No. Radiosity is not a rendering technique at all. Radiosity is an
> algorithm to calculate the lighting of surfaces (eg. by creating light
> maps). The radiosity algorithm in itself does not generate an image, but
> just creates an internal data structure which tells how surfaces are
> illuminated.
>   Once you have this information, you have to use some *rendering
> technique* in order to get the final image (using the lighting
> information). Usually the rendering technique used to get the final image
> is scanline-rendering, but also raytracing is commonly used (the latter is
> used when accurate reflections and refractions are needed besides the
> global illumination).

Yeah, I knew what radiosity was, although my paragraph didn't sound like it.
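
So, restating it in toy pseudo-Python just to check myself (every name
here is made up; this is not how POV-Ray or any real renderer is
organized):

    def render_with_global_illumination(surfaces, lights, camera,
                                        solve_lighting, rendering_technique):
        # Phase 1: radiosity (or any GI algorithm) computes *lighting*
        # only, e.g. a light map per surface. No pixels come out of this.
        light_maps = [solve_lighting(s, surfaces, lights) for s in surfaces]
        # Phase 2: an actual *rendering technique* (scanline-rendering or
        # raytracing) uses the stored lighting to produce the final image.
        return rendering_technique(surfaces, light_maps, camera)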

>   Also: Radiosity is just *one* algorithm to calculate global illumination
> (ie. the inter-reflection of light between surfaces). There are others as
> well. For example POV-Ray uses a stochastic monte-carlo sampling method
> suitable for raytracing mathematical surfaces (the radiosity algorithm is
> suitable only for polygons). POV-Ray could not use the radiosity algorithm
> because it uses many other primitives than just polygons.
>   A third example of a global illumination algorithm is photon mapping
> (POV-Ray does *not* support this for global illumination). Photon mapping
> is also suitable for raytracing mathematical surfaces.

Ooh, this I did NOT know. Pretty interesting stuff.
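
If I'm following the monte-carlo part, the idea is roughly this (a
scalar-intensity toy with made-up names, nothing like POV-Ray's actual
implementation, and the estimator's normalization constants are dropped):

    import math, random

    def random_hemisphere_dir(normal):
        # Pick a uniformly random direction on the hemisphere around the
        # surface normal (simple rejection sampling; fine for a sketch).
        while True:
            d = [random.uniform(-1.0, 1.0) for _ in range(3)]
            n2 = sum(c * c for c in d)
            if 0.0 < n2 <= 1.0:
                d = [c / math.sqrt(n2) for c in d]
                if sum(a * b for a, b in zip(d, normal)) > 0.0:
                    return d

    def indirect_light(point, normal, trace, samples=64):
        # Stochastic estimate of indirect illumination at a point: shoot
        # random rays over the hemisphere and average what they bring back.
        # `trace(point, dir)` is assumed to return a scalar intensity.
        total = 0.0
        for _ in range(samples):
            d = random_hemisphere_dir(normal)
            cos_theta = sum(a * b for a, b in zip(d, normal))
            total += trace(point, d) * cos_theta
        return total / samples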

> > but what is scanline rendering? What are the pros and cons of it etc?
>
>   Scanline-rendering is in some sense the opposite of raytracing.
>   In raytracing you "shoot" rays from the camera, through the projection
> plane, and see what they hit. That is, in a sense you start from the
> camera and go towards the scene.
>   In scanline-rendering the direction is the opposite: You calculate the
> projection of the scene on the projection plane by projecting the scene
> towards the camera. In a sense you "move" the scene towards the camera
> until it hits the projection plane.

This is what I was lookin' for, an explanation of what exactly it does.
Thanks.
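
For anyone else reading along, the two opposite directions look
something like this in toy Python (a pinhole camera at the origin
looking down -z; all the helper names are mine, not POV-Ray's):

    import math

    def primary_ray(px, py, width, height, fov_deg=60.0):
        # Raytracing: start at the camera and shoot a ray *out* through
        # pixel (px, py) on the projection plane, then see what it hits.
        aspect = width / height
        half = math.tan(math.radians(fov_deg) / 2.0)
        x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect
        y = (1.0 - 2.0 * (py + 0.5) / height) * half
        return (0.0, 0.0, 0.0), (x, y, -1.0)  # ray origin, ray direction

    def project_vertex(v, width, height, fov_deg=60.0):
        # Scanline: go the other way -- take a point already *in* the
        # scene and project it onto the plane (the perspective divide).
        aspect = width / height
        half = math.tan(math.radians(fov_deg) / 2.0)
        x, y, z = v  # camera-space coordinates, z < 0 in front of camera
        sx = (x / (-z) / (half * aspect) + 1.0) * width / 2.0
        sy = (1.0 - y / (-z) / half) * height / 2.0
        return sx, sy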

>   There's of course a catch in the latter method: You can only project
> individual points onto the projection plane in a feasible way (projecting
> mathematical surfaces would be just way too difficult).
>   This is why scanline-rendering is almost exclusively limited to
> polygons. You can only project the vertex points of the polygons onto the
> projection plane. Then you "fill" the 2D-projections of these polygons.
>
>   This is where one of the advantages of scanline-rendering kicks in:
> Speed.
>   Projecting points onto a plane and then filling 2D polygons is extremely
> fast. If you don't use any other more complicated algorithms, you can do
> this to millions or even billions of polygons per second on modern
> computers.

I had always known that POV used mathematical shapes instead of collections
of vertices (spheres are REALLY spheres), but I had never realized that
this impacted the way scenes were rendered. I just thought it was a better
but slower way to model.
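
It also clicked for me why the polygon restriction exists: the "fill"
step is the easy part. Something like this toy scanline fill (integer
rows, no clipping, no z-buffer, names all mine):

    def fill_triangle(p0, p1, p2, set_pixel):
        # The vertices are already projected to 2D; walk the triangle one
        # scanline (row) at a time and fill the span between its edges.
        (x0, y0), (x1, y1), (x2, y2) = sorted([p0, p1, p2],
                                              key=lambda p: p[1])
        def edge_x(ya, xa, yb, xb, y):
            # x of the edge (xa,ya)-(xb,yb) at scanline y
            return xa if yb == ya else xa + (xb - xa) * (y - ya) / (yb - ya)
        for y in range(int(y0), int(y2) + 1):
            xa = edge_x(y0, x0, y2, x2, y)             # the long edge
            xb = (edge_x(y0, x0, y1, x1, y) if y < y1  # upper short edge
                  else edge_x(y1, x1, y2, x2, y))      # lower short edge
            for x in range(int(min(xa, xb)), int(max(xa, xb)) + 1):
                set_pixel(x, y)

Calling it with three projected vertices and a set_pixel callback fills
the polygon's 2D footprint; that inner loop is the kind of thing
dedicated hardware chews through at enormous rates.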

>   Of course getting just flat-colored polygons is not very rewarding. In
> order to get a decent 3D image you need at least a simple lighting model
> as well as texturing.
>   Both things can be done in a rather fast way. There are many different
> lighting models for polygons, such as gouraud and phong shading, which are
> rather fast to calculate (especially with dedicated hardware). The same
> thing applies to texturing. Even though perspective-correct texturing
> needs some processing power for it to be real-time, current 3D-hardware
> can do it pretty quickly (as we can see in 3D games).
>
>   So summarizing:
>
>   Pros:
>   - Speed!
>   - Dedicated hardware.
>
>   Cons:
>   - Supports only polygons (and in more advanced algorithms surfaces which
can
>     be polygonized on the fly, for example NURBS surfaces).
>   - Reflections, refractions and shadows are very complicated to
calculate,
>     and often limited (eg. usually you can't get multiple
interreflection).
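
The gouraud bit makes sense now too. As I understand it, the lighting is
only evaluated at the vertices and then just interpolated along each
scanline, which is why it's so cheap. A toy scalar version of that idea
(my names, not any real API):

    def lambert(normal, light_dir):
        # Diffuse term, computed once per *vertex*: N . L clamped at zero.
        return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

    def gouraud_span(x_left, i_left, x_right, i_right):
        # Across a scanline span the per-vertex intensities are just
        # linearly interpolated -- no lighting math per pixel at all.
        for x in range(int(x_left), int(x_right) + 1):
            t = ((x - x_left) / (x_right - x_left)
                 if x_right > x_left else 0.0)
            yield x, i_left + t * (i_right - i_left)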

Thanks for the in-depth post! :)  It explained everything to me.

Corey

