POV-Ray : Newsgroups : povray.general : Scanline

From: Warp
Subject: Re: Scanline
Date: 12 Apr 2002 10:31:35
Message: <3cb6efc6@news.povray.org>
TinCanMan <Tin### [at] hotmailcom> wrote:
>> > What are the pros and cons of it etc?
> pros=positives
> cons=negatives

  If "pro" is the opposite of "con", then "progress" is the opposite of
"congress".

-- 
#macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
[1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
-1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -



From: Corey Woodworth
Subject: Re: Scanline
Date: 12 Apr 2002 10:43:58
Message: <3cb6f2ae$1@news.povray.org>

news:8pgdbucq5da356fd18t6kpn19ukhgn81ur@4ax.com...
> On Fri, 12 Apr 2002 07:15:49 -0400, "Corey Woodworth" <cdw### [at] mpinetnet>
> wrote:
> > Ok, I know what raytracing is, and I know what radiosity is, but what is
> > scanline rendering?
>
> http://www.google.com/search?q=scanline+rendering :-)
>
> With every next question from you I'm starting to think that your famouse
> funny wishlist wasn't a joke ;-)

Well, I'm not suggesting it be added to POV this time :) I was just curious
because all the big-bucks programs use it, so it has to have some merit.

> > What are the pros and cons of it etc?
>
> Whould you like write more understable english ? For 'pros' my dictionary
> shows 'prosaic', for 'cons' it shows 'consanguinity'. acronymfinder.com not
> helped.

It's fairly common English here in the USA. Pros are the upsides and cons are
the downsides.

> ABX

Corey



From: Tom Melly
Subject: Re: Scanline
Date: 12 Apr 2002 10:51:05
Message: <3cb6f459$1@news.povray.org>
"Corey Woodworth" <cdw### [at] mpinetnet> wrote in message
news:3cb6f2ae$1@news.povray.org...
>
> Well, I'm not suggesting it be added to POV this time :) I was just curious
> because all the big-bucks programs use it, so it has to have some merit.
>

AFAIK, speed. End of story.



From: ABX
Subject: Re: Scanline
Date: 12 Apr 2002 10:54:21
Message: <e4tdbu4mhj8pbipoigougqomo1l86568f9@4ax.com>
On Fri, 12 Apr 2002 10:48:53 -0400, "Corey Woodworth" <cdw### [at] mpinetnet> wrote:
> It's fairly common English here in the USA.

So you asked about wady i zalety ("cons and pros" in Polish)? :-)

ABX



From: Warp
Subject: Re: Scanline
Date: 12 Apr 2002 10:59:54
Message: <3cb6f66a@news.povray.org>
Corey Woodworth <cdw### [at] mpinetnet> wrote:
> Ok, I know what raytracing is, and I know what radiosity is

  Do you?
  Well, you don't say it, but from the "tone of voice" of that sentence it
sounds as if you think that radiosity is a rendering technique comparable
to raytracing and scanline-rendering.
  For some reason this seems to be quite a common misconception. Many
people even talk about "radiosity renderers" as if they were programs which
use "radiosity" to render the image.

  No. Radiosity is not a rendering technique at all. Radiosity is an algorithm
for calculating the lighting of surfaces (e.g. by creating light maps). The
radiosity algorithm in itself does not generate an image; it just creates
an internal data structure which describes how surfaces are illuminated.
  Once you have this information, you have to use some *rendering technique*
in order to get the final image (using the lighting information). Usually
the rendering technique used to get the final image is scanline-rendering,
but raytracing is also commonly used (the latter when accurate reflections
and refractions are needed in addition to the global illumination).

  Also: Radiosity is just *one* algorithm for calculating global illumination
(i.e. the inter-reflection of light between surfaces). There are others as well.
For example, POV-Ray uses a stochastic Monte Carlo sampling method suitable for
raytracing mathematical surfaces (the radiosity algorithm is suitable only
for polygons). POV-Ray could not use the radiosity algorithm because it uses
many primitives other than just polygons.
  A third example of a global illumination algorithm is photon mapping (POV-Ray
does *not* support this for global illumination). Photon mapping is also
suitable for raytracing mathematical surfaces.
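
  As a rough illustration of the stochastic idea (a minimal C++ sketch, not
POV-Ray's actual code): at a point on a surface, shoot a number of random rays
over the hemisphere above it and average the light they bring back.

// Illustrative only: a Monte Carlo estimate of the indirect light arriving
// at a surface point.  Everything here is a simplification.
#include <cstdlib>
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Crude uniform direction on the hemisphere around n (rejection sampling).
static Vec3 random_hemisphere_direction(Vec3 n)
{
    for (;;) {
        Vec3 d { 2.0*rand()/RAND_MAX - 1.0,
                 2.0*rand()/RAND_MAX - 1.0,
                 2.0*rand()/RAND_MAX - 1.0 };
        double len2 = dot(d, d);
        if (len2 > 1.0 || len2 == 0.0) continue;        // outside the unit ball
        double len = std::sqrt(len2);
        d = Vec3{ d.x/len, d.y/len, d.z/len };
        return dot(d, n) >= 0.0 ? d : Vec3{ -d.x, -d.y, -d.z };
    }
}

// 'incoming' stands in for whatever traces a ray into the scene and returns
// the light arriving from that direction (the expensive, recursive part).
double indirect_light(Vec3 point, Vec3 normal, int samples,
                      const std::function<double(Vec3, Vec3)>& incoming)
{
    double sum = 0.0;
    for (int i = 0; i < samples; ++i)
        sum += incoming(point, random_hemisphere_direction(normal));
    return sum / samples;                               // Monte Carlo average
}

  The more samples you take, the less noisy the estimate gets; that is the
usual trade-off with this family of algorithms.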

> but what is scanline rendering? What are the pros and cons of it etc?

  Scanline-rendering is in some sense the opposite of raytracing.
  In raytracing you "shoot" rays from the camera, through the projection plane,
and see what they hit. That is, in a sense you start from the camera and
go towards the scene.
  In scanline-rendering the direction is the opposite: You calculate the
projection of the scene on the projection plane by projecting the scene
towards the camera. In a sense you "move" the scene towards the camera until
it hits the projection plane.
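
  To make the two directions concrete, here is a minimal C++ sketch (a toy
pinhole camera with the projection plane at z = d and the camera at the
origin; illustrative only, not any real renderer's code):

#include <cmath>

struct Vec3 { double x, y, z; };

// Raytracing direction: start at the camera and build a ray that passes
// through the pixel's position (px, py) on the projection plane.
Vec3 ray_through_pixel(double px, double py, double d)
{
    Vec3 dir { px, py, d };
    double len = std::sqrt(dir.x*dir.x + dir.y*dir.y + dir.z*dir.z);
    return Vec3{ dir.x/len, dir.y/len, dir.z/len };   // normalized direction
}

// Scanline direction: start from a point already in the scene and project
// it back onto the same plane (the classic perspective divide).
void project_point(const Vec3& p, double d, double& sx, double& sy)
{
    sx = p.x * d / p.z;          // assumes p.z > 0, i.e. in front of the camera
    sy = p.y * d / p.z;
}

  The first function is followed by an expensive search for whatever the ray
hits; the second needs nothing but a couple of multiplications and divisions
per point, which is where the speed difference comes from.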

  There's of course a catch in the latter method: You can only project
individual points onto the projection plane in a feasible way (projecting
mathematical surfaces would be just way too difficult).
  This is why scanline-rendering is almost exclusively limited to polygons.
You can only project the vertex points of the polygons onto the projection
plane. Then you "fill" the 2D-projections of these polygons.
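
  Filling the 2D projection of a triangle, one scanline at a time, can look
roughly like this (again just an illustrative C++ sketch, not production
rasterizer code):

#include <algorithm>
#include <cmath>

struct Point2 { double x, y; };

// x coordinate of the edge (a,b) at height y (assumes a.y != b.y).
static double edge_x(Point2 a, Point2 b, double y)
{
    return a.x + (b.x - a.x) * (y - a.y) / (b.y - a.y);
}

// Fill the already-projected triangle (v0,v1,v2); 'plot' colors one pixel.
void fill_triangle(Point2 v0, Point2 v1, Point2 v2, void (*plot)(int, int))
{
    // Sort the vertices so that v0.y <= v1.y <= v2.y.
    if (v1.y < v0.y) std::swap(v0, v1);
    if (v2.y < v0.y) std::swap(v0, v2);
    if (v2.y < v1.y) std::swap(v1, v2);

    for (int y = (int)std::ceil(v0.y); y < (int)std::ceil(v2.y); ++y) {
        // The edge v0-v2 spans the whole height; the other edge is v0-v1
        // below the middle vertex and v1-v2 above it.
        double xa = edge_x(v0, v2, y);
        double xb = (y < v1.y) ? edge_x(v0, v1, y) : edge_x(v1, v2, y);
        if (xb < xa) std::swap(xa, xb);
        for (int x = (int)std::ceil(xa); x < (int)std::ceil(xb); ++x)
            plot(x, y);
    }
}

  Notice that after the projection there is nothing here but a couple of
divisions per scanline and a tight inner loop over pixels; that simplicity is
exactly what makes it fast and easy to put into hardware.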

  This is where one of the advantages of scanline-rendering kicks in: Speed.
  Projecting points onto a plane and then filling 2D polygons is extremely
fast. If you don't use any other more complicated algorithms, you can do
this for millions or even billions of polygons per second on modern computers.

  Of course getting just flat-colored polygons is not very rewarding. In order
to get a decent 3D image you need at least a simple lighting model as well as
texturing.
  Both things can be done rather quickly. There are many different
lighting models for polygons, such as Gouraud and Phong shading, which are
rather fast to calculate (especially with dedicated hardware). The same thing
applies to texturing. Even though perspective-correct texturing needs some
processing power to be real-time, current 3D hardware can do it pretty
quickly (as we can see in 3D games).
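
  The core trick of Gouraud shading, sketched below (illustrative C++, not how
any particular renderer or GPU implements it): evaluate the lighting only at
the vertices, then merely interpolate it across the polygon.

#include <algorithm>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Simple diffuse term at one vertex (normal n and light direction l are
// assumed to be normalized).
double vertex_intensity(Vec3 n, Vec3 l)
{
    return std::max(0.0, dot(n, l));
}

// Along a scanline the per-pixel "lighting" is just a linear blend of the
// intensities already computed at the span's two endpoints.
double pixel_intensity(double i_left, double i_right, double t)  // t in [0,1]
{
    return i_left + (i_right - i_left) * t;
}

  Per pixel this is nothing more than a blend (and, in incremental form, a
single addition), which is why it was feasible in software long before 3D
hardware existed and is trivial for the hardware now.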

  So summarizing:

  Pros:
  - Speed!
  - Dedicated hardware.

  Cons:
  - Supports only polygons (and in more advanced algorithms surfaces which can
    be polygonized on the fly, for example NURBS surfaces).
  - Reflections, refractions and shadows are very complicated to calculate,
    and often limited (e.g. usually you can't get multiple interreflection).

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -



From: Apache
Subject: Re: Scanline
Date: 12 Apr 2002 15:06:34
Message: <3cb7303a$1@news.povray.org>
prostipation = !constipation

--
Apache
http://geitenkaas.dns2go.com/experiments/
apa### [at] yahoocom



From: Mahalis
Subject: Re: Scanline
Date: 12 Apr 2002 16:48:26
Message: <3cb7481a$1@news.povray.org>
Other way around ;-)
"Warp" <war### [at] tagpovrayorg> wrote in message news:3cb6efc6@news.povray.org...
> TinCanMan <Tin### [at] hotmailcom> wrote:
> >> > What are the pros and cons of it etc?
> > pros=positives
> > cons=negatives
>
>   If "pro" is the opposite of "con", then "progress" is the opposite of
> "congress".
>
> --
> #macro M(A,N,D,L)plane{-z,-9pigment{mandel L*9translate N color_map{[0rgb x]
> [1rgb 9]}scale<D,D*3D>*1e3}rotate y*A*8}#end M(-3<1.206434.28623>70,7)M(
> -1<.7438.1795>1,20)M(1<.77595.13699>30,20)M(3<.75923.07145>80,99)// - Warp -



From: Corey Woodworth
Subject: Re: Scanline
Date: 12 Apr 2002 23:21:40
Message: <3cb7a444$1@news.povray.org>
"Warp" <war### [at] tagpovrayorg> wrote in message
news:3cb6f66a@news.povray.org...
> Corey Woodworth <cdw### [at] mpinetnet> wrote:
> > Ok, I know what raytracing is, and I know what radiosity is
>
>   Do you?
>   Well, you don't say it, but from the "tone of voice" of that sentence it
> sounds as if you think that radiosity is a rendering technique comparable
> to raytracing and scanline-rendering.
>   For some reason this seems to be quite a common misconception. Many
> people even talk about "radiosity renderers" as if they were programs which
> use "radiosity" to render the image.
>
>   No. Radiosity is not a rendering technique at all. Radiosity is an
> algorithm for calculating the lighting of surfaces (e.g. by creating light
> maps). The radiosity algorithm in itself does not generate an image; it
> just creates an internal data structure which describes how surfaces are
> illuminated.
>   Once you have this information, you have to use some *rendering technique*
> in order to get the final image (using the lighting information). Usually
> the rendering technique used to get the final image is scanline-rendering,
> but raytracing is also commonly used (the latter when accurate reflections
> and refractions are needed in addition to the global illumination).

Yeah, I knew what radiosity was, although my paragraph didn't sound like it.

>   Also: Radiosity is just *one* algorithm for calculating global illumination
> (i.e. the inter-reflection of light between surfaces). There are others as
> well. For example, POV-Ray uses a stochastic Monte Carlo sampling method
> suitable for raytracing mathematical surfaces (the radiosity algorithm is
> suitable only for polygons). POV-Ray could not use the radiosity algorithm
> because it uses many primitives other than just polygons.
>   A third example of a global illumination algorithm is photon mapping
> (POV-Ray does *not* support this for global illumination). Photon mapping is
> also suitable for raytracing mathematical surfaces.

Ooh, this I did NOT know. Pretty interesting stuff.

> > but what is scanline rendering? What are the pros and cons of it etc?
>
>   Scanline-rendering is in some sense the opposite of raytracing.
>   In raytracing you "shoot" rays from the camera, through the projection
> plane, and see what they hit. That is, in a sense you start from the camera
> and go towards the scene.
>   In scanline-rendering the direction is the opposite: You calculate the
> projection of the scene on the projection plane by projecting the scene
> towards the camera. In a sense you "move" the scene towards the camera
> until it hits the projection plane.

This is what I was lookin' for, an explanation of what exactly it does.
Thanks.

>   There's of course a catch in the latter method: You can only project
> individual points onto the projection plane in a feasible way (projecting
> mathematical surfaces would be just way too difficult).
>   This is why scanline-rendering is almost exclusively limited to polygons.
> You can only project the vertex points of the polygons onto the projection
> plane. Then you "fill" the 2D-projections of these polygons.
>
>   This is where one of the advantages of scanline-rendering kicks in: Speed.
>   Projecting points onto a plane and then filling 2D polygons is extremely
> fast. If you don't use any other more complicated algorithms, you can do
> this for millions or even billions of polygons per second on modern
> computers.

I had always known that POV used mathematical shapes instead of collections
of vertices (spheres are REALLY spheres), but I had never realized that this
impacted the way scenes were rendered. I just thought it was a better but
slower way to model.

>   Of course getting just flat-colored polygons is not very rewarding. In
> order to get a decent 3D image you need at least a simple lighting model as
> well as texturing.
>   Both things can be done rather quickly. There are many different
> lighting models for polygons, such as Gouraud and Phong shading, which are
> rather fast to calculate (especially with dedicated hardware). The same
> thing applies to texturing. Even though perspective-correct texturing needs
> some processing power to be real-time, current 3D hardware can do it pretty
> quickly (as we can see in 3D games).
>
>   So summarizing:
>
>   Pros:
>   - Speed!
>   - Dedicated hardware.
>
>   Cons:
>   - Supports only polygons (and in more advanced algorithms surfaces which
>     can be polygonized on the fly, for example NURBS surfaces).
>   - Reflections, refractions and shadows are very complicated to calculate,
>     and often limited (e.g. usually you can't get multiple interreflection).

Thanks for the in-depth post! :) It explained everything to me.

Corey



From: JRG
Subject: Re: Scanline
Date: 13 Apr 2002 04:16:04
Message: <3cb7e944@news.povray.org>

> On Fri, 12 Apr 2002 12:42:43 +0100, "Tom Melly" <tom### [at] tomandlucouk> wrote:
> > Heh - it's actually fairly common
> > pro - good points
> > con - bad points
>
> The meaning somehow I guessed :-)


It should be Latin: "pro" and "contra".


--
Jonathan.

Home: http://digilander.iol.it/jrgpov



From: Warp
Subject: Re: Scanline
Date: 13 Apr 2002 16:11:52
Message: <3cb89108@news.povray.org>
Corey Woodworth <cdw### [at] mpinetnet> wrote:
> I had always known that POV used mathematical shapes instead of collections
> of vertices (spheres are REALLY spheres), but I had never realized that this
> impacted the way scenes were rendered. I just thought it was a better but
> slower way to model.

  Note that rendering a sphere is in no way slower than rendering a bunch
of triangles (in fact, raytracing a sphere is usually a lot faster).
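
  A quick illustration of why (a C++ sketch of the standard ray-sphere test,
not POV-Ray's source): the intersection is just the positive root of one
quadratic, with no mesh of triangles involved at all.

#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b) { return Vec3{ a.x-b.x, a.y-b.y, a.z-b.z }; }

// Returns the nearest positive hit distance along the ray, or -1 on a miss.
// 'dir' is assumed to be normalized.
double intersect_sphere(Vec3 origin, Vec3 dir, Vec3 center, double radius)
{
    Vec3 oc = sub(origin, center);
    double b = dot(oc, dir);                 // half of the usual "2b" term
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0.0) return -1.0;             // no real root: the ray misses
    double t = -b - std::sqrt(disc);         // try the nearer root first
    if (t < 0.0) t = -b + std::sqrt(disc);   // ray started inside the sphere
    return t >= 0.0 ? t : -1.0;
}

  A mesh approximation of the same sphere would need many triangle tests (or a
bounding structure around them) to answer the same question, and would still
only be an approximation of the true surface.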

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -


