Scanline rendering in POV-Ray (Messages 31 to 40 of 96)
From: Lutz Kretzschmar
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 17:45:55
Message: <5ahndv47k8grn3i7i58026jhmrmhp9q53p@4ax.com>
Hi Ray Gardener, you recently wrote in povray.general:

> ... Don't worry; I'm not asking the POV-Team to do anything.
Ray, Thorsten is not posting on behalf of the POV-Team, those are
purely his own thoughts. 

- Lutz
  email : lut### [at] stmuccom
  Web   : http://www.stmuc.com/moray



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 18:05:28
Message: <MPG.194577cdcf464339897fe@news.povray.org>
Another point to make is that most games don't use raytracing because,
until relatively recently, the hardware needed to implement a true
raytracer on a graphics card would have been roughly three times the
size of a normal card. The only one I have even heard rumored to have
been made is used in movie production, cost tens of thousands of
dollars to buy, and was nearly the size of a laptop's entire
motherboard. With the number of components we could fit into a new
card, the result would be no bigger than an existing NVidia card.
However, the development time needed to produce the chip and make sure
it is stable, plus having to pray that game companies used to the
cheats supplied through OpenGL and DirectX will actually use it, makes
the odds of anyone seeing such a card on the market any time soon very
unlikely. The irony is that in many cases a game has to upload large
chunks of additional geometry and pre-made textures every few seconds,
while a true raytrace engine would only have to upload terrain geometry
and those textures you couldn't produce with procedural systems (which
for most games would be maybe 10% of them...). If you couldn't get a
jump in speed and frame rate from having 90% of your data remain
in place throughout most of the game, you may as well not bother
trying. lol And most experts believe that, short of a major increase in
the size or speed of data transfers, the latest generation of cards is
close to the physical limit of what can be improved.

Hmm. Maybe there is a market for a true raytracing-based card after 
all...

Not that any of the above addresses the central issue, which, as everyone 
else has said, is that there is not likely to be a major improvement in 
speed or memory use from what you are planning to do, or at least none 
(in the case of memory) that couldn't be achieved by swapping out whatever 
POV-Ray doesn't specifically need at any given moment.

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 19:05:46
Message: <3edbd84a@news.povray.org>
>   There's nothing in raytracing that *forces* you to use these extra
> features. If it's enough to get the same result as you would get with
> scanline-rendering, you can use simple raycasting (ie. don't calculate
> any reflected nor refracted rays) without shadows. This way the necessity
> of having all the objects in memory at once is removed in the exact same
> way as in scanline-rendering.

Not quite. Hidden surface removal is accomplished in
raytracing by taking the nearest intersection. Unless an
eye ray is tested against the other objects/bboxes along its
direction, you cannot be sure that the intersection you found
is the nearest one.
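
A rough sketch of what I mean (illustrative C++ with a toy sphere
primitive; none of this is POV-Ray's actual code): even in plain
raycasting, every object has to be consulted before the smallest t can
be trusted.

#include <cmath>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 o, d; };                     // origin, unit direction

struct Sphere {
    Vec3 c; double r;
    // Returns the hit distance along the ray, or infinity on a miss.
    double Intersect(const Ray& ray) const {
        Vec3 oc = { ray.o.x - c.x, ray.o.y - c.y, ray.o.z - c.z };
        double b    = oc.x*ray.d.x + oc.y*ray.d.y + oc.z*ray.d.z;
        double q    = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - r*r;
        double disc = b*b - q;
        if (disc < 0.0) return std::numeric_limits<double>::infinity();
        double t = -b - std::sqrt(disc);
        return (t > 0.0) ? t : std::numeric_limits<double>::infinity();
    }
};

// Hidden surface removal by nearest intersection: the eye ray is tested
// against every object (or at least every bounding box); only then do we
// know which surface is actually visible at this pixel.
int NearestHit(const Ray& eye, const std::vector<Sphere>& scene, double& tNear)
{
    tNear = std::numeric_limits<double>::infinity();
    int nearest = -1;
    for (size_t i = 0; i < scene.size(); ++i) {
        double t = scene[i].Intersect(eye);
        if (t < tNear) { tNear = t; nearest = int(i); }
    }
    return nearest;                             // -1 means the ray escaped
}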

Even if it were so, POV-Ray doesn't give
you the choice. It retains all the geometry regardless.



>   1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
> with a raytracing algorithm than with a scanline-rendering algorithm.

Is it? With an (arbitrary) mesh, you need to determine the intersection
of a ray with some kind of octree. This means knowing the ray-plane
intersection algorithm and how to create, populate, and search octrees.
Then you need to forward the ray to intersect the particular triangle
in the mesh, and get its ST coords. You may also need to check other
parts of the octree to be sure your intersection is nearest the eye.

In scanline, you just go through the triangles in the mesh
and draw them. ST coords are easy because you get them
from vertex and edge interpolation. If one does not have
a triangle rasterizer handy, however, I can see how
the raytracing approach can be desirable.

I would characterize the relative difficulties as being
the tasks involved in adding new primitive types.
In raytracing, one must provide an intersection function.
In scanline, one must provide a tessellator. Having
written both a simple raytracer and scanliner, I
would say both consume about equal development time.
For the heightfield primitive, however, it was
challenging getting the last row/column to intersect
correctly -- the slab system and DDA traversal are
prone to easy-to-make off-by-one errors. For scanline,
it was just a matter of pulling triangles.
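
Schematically, the two obligations look something like this (an
illustrative C++ interface only; the names are invented and this is
neither my renderer's nor POV-Ray's actual code):

#include <vector>

struct Vec3     { double x, y, z; };
struct Ray      { Vec3 o, d; };
struct Triangle { Vec3 v0, v1, v2; double s0, t0, s1, t1, s2, t2; };

// What each approach asks of a new primitive type:
class Primitive {
public:
    virtual ~Primitive() {}

    // Raytracing: return the nearest hit distance along 'ray',
    // or a negative value if the ray misses the primitive.
    virtual double Intersect(const Ray& ray) const = 0;

    // Scanline: tessellate into triangles (with ST coords) for the
    // rasterizer, at whatever level of detail the caller requests.
    virtual void Tessellate(std::vector<Triangle>& out, int lod) const = 0;
};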



>   Also, even if full raytracing done, keeping the whole scene geometry
> in memory at once is not strictly necessary.
>   If the bounding box of a mesh object is never hit, there's no reason
> to even load that mesh into memory. It should be possible to develop
> a load-at-demand scheme, where meshes are loaded only when needed. It
> could also be a cache scheme, where meshes which have not been hit in
> longest time can be freed from memory.

Not true in full raytracing, unless you know in advance
the direction of every secondary ray. An object behind
the camera, for instance, can appear reflected in another
object in front of the camera. The trouble is, you don't
know if an object is never hit until you try to hit it,
hence you must bring it into memory.

Granted, some culling cases are possible. A reflective
flat opaque sphere, for example, can only cast secondary rays
in a half-sphere centered around the point that is
nearest to the eye. If no other objects lie nearer
than the plane parallel to the half-sphere's flat part,
and no other objects cast secondary rays, then those
objects behind the plane can be unloaded. The cull
search goes up exponentially with additional
reflective objects, however.



> > In a scanline renderer, each object is considered
> > only once, so an object only exists in memory
> > while being drawn.
>
>   This has the drawback that if a hundred triangles cover a certain
> pixel, that pixel is recalculated a hundred times (including texturing
> and lighting).

Not true. Texture and lighting can be deferred in scanline rendering
until after the z-buffer is populated. One needs a larger z-buffer, however.
In fact, I will implement this in my existing renderer to see
what the performance increase is, since my z-buffer already has
the necessary structure.
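
Roughly what I plan to try, sketched in illustrative C++ (the field
names are placeholders, not my actual z-buffer layout):

#include <limits>
#include <vector>

// Each pixel keeps the shading inputs of the nearest fragment seen so far,
// instead of a finished color; texturing and lighting run once per pixel
// at the end, after depth sorting has settled.
struct DeferredPixel {
    double z;               // depth of the nearest fragment so far
    double s, t;            // its texture coordinates
    double nx, ny, nz;      // its surface normal
    int    material;        // which texture/colormap to apply later (-1 = none)
};

class DeferredZBuffer {
public:
    DeferredZBuffer(int w, int h)
      : width(w),
        pixels(size_t(w) * h,
               DeferredPixel{ std::numeric_limits<double>::infinity(),
                              0.0, 0.0, 0.0, 0.0, 0.0, -1 }) {}

    // Per fragment during rasterization: depth test only, no shading.
    void Write(int x, int y, const DeferredPixel& frag) {
        DeferredPixel& p = pixels[size_t(y) * width + x];
        if (frag.z < p.z) p = frag;
    }

    // Once after all triangles are drawn: shade each surviving fragment.
    template <class ShadeFn>
    void Resolve(ShadeFn shade) {
        for (size_t i = 0; i < pixels.size(); ++i)
            if (pixels[i].material >= 0) shade(pixels[i]);  // texture + light once
    }

private:
    int width;
    std::vector<DeferredPixel> pixels;
};

The rasterizer calls Write() per covered fragment and Resolve() once at
the end, so texturing cost is paid per visible pixel rather than per
fragment drawn.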

Ray



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 19:07:59
Message: <3edbd8cf$1@news.povray.org>
> Ray, Thorsten is not posting on behalf of the POV-Team, those are
> purely his own thoughts.

Okay. I wasn't sure, because his sig mentioned
he was on the POV-Team.

Ray



From: Ray Gardener
Subject: Deferred texturing benchmarks in scanline renderer
Date: 2 Jun 2003 22:10:25
Message: <3edc0391$1@news.povray.org>
I finished benchmarking my new z buffer
which defers texturing and lighting until
after all the pixels have been depth sorted.
Each render was preceded with an srand(0) call
to ensure that all random-number-dependent
shading generated the exact same triangles.
Times for deferred rendering include iterating
through the light cache and computing the
final appearance of the scene. The scene was
a typical landscape using a ~500 x 500 pixel
heightfield. These results are preliminary
and I will be doing more tests as time goes on.

For a window size of 900 x 250, the difference
in speed was small, between 2 and 8 seconds on jobs
that ran between 1 and 2 minutes depending on LOD.
The texturing code in this case is very simple,
being just a colormap lookup; it represents a
small percentage of the total render pipeline.

Increasing the window size to 1800 x 500
changed things quite a bit:

    900 x 250        1800 x 500
  --------------   ---------------
   2m10s normal     6m05s normal
   2m02s deferred   5m35s deferred
      8s better       30s better

The time savings scaled proportionally
with the window size, but the render
time scaled less, resulting in greater
percentage time savings at the larger size.
With more screen pixels, the benefits of
deferred texturing increase.
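(In percentage terms that is roughly 8s out of 130s, about 6%, at the
smaller size, versus 30s out of 365s, about 8%, at the larger one.)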

I will be updating the Leveller Viewer
to version 2.3, which includes the scanline
renderer, for those who wish to evaluate
this particular scanline renderer implementation
with their own data.

Ray



From: Ray Gardener
Subject: Wrapping up
Date: 3 Jun 2003 00:19:02
Message: <3edc21b6$1@news.povray.org>
I thank everyone for their input.
I am documenting the scanline work at
http://www.daylongraphics.com/products/leveller/render/


Ray Gardener
Daylon Graphics Ltd.
"Heightfield modeling perfected"
http://www.daylongraphics.com



From: Warp
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 07:30:35
Message: <3edc86db@news.povray.org>
Ray Gardener <ray### [at] daylongraphicscom> wrote:
> Not quite. Hidden surface removal is accomplished in
> raytracing by using the nearest intersection. If an
> eye ray does not intersect other objects/bboxes along its vector,
> it cannot be sure that it has hit the nearest one.

  You didn't understand what I meant.

  Your scanline algorithm renders all meshes separately, one by one.
There's nothing in raytracing which would disallow this: You can first
raytrace one mesh and then another (using the same z-buffer method for
surface removal as in scanline rendering). Of course this means that the
same pixels need to be raytraced more than once, but that's exactly what
the scanline method is doing (except that with two meshes the raytracer
calculates each pixel twice, but the scanline method can calculate
many pixels tens of times, depending on the meshes).
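
  In code, the idea is roughly this (only a sketch; the names are
invented and the bodies omitted):

#include <limits>
#include <vector>

struct Ray  { double ox, oy, oz, dx, dy, dz; };
struct Hit  { double t; /* plus whatever shading data you keep */ };
struct Mesh {
    bool Load();                                  // read geometry from disk
    void Unload();                                // free it again
    bool Intersect(const Ray&, Hit&) const;       // nearest hit within this mesh
};

// Raytrace one mesh at a time into a shared z-buffer.  Only one mesh is
// resident at a time, exactly as in mesh-by-mesh scanline rendering; the
// z-buffer resolves visibility between meshes.
void RenderMeshByMesh(std::vector<Mesh>& meshes,
                      const std::vector<Ray>& eyeRays,   // one ray per pixel
                      std::vector<double>& zbuf)         // one depth per pixel
{
    zbuf.assign(eyeRays.size(), std::numeric_limits<double>::infinity());
    for (Mesh& mesh : meshes) {
        if (!mesh.Load()) continue;
        for (size_t p = 0; p < eyeRays.size(); ++p) {
            Hit hit;
            if (mesh.Intersect(eyeRays[p], hit) && hit.t < zbuf[p]) {
                zbuf[p] = hit.t;
                // store/overwrite this pixel's shading data here
            }
        }
        mesh.Unload();
    }
}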

> Even if it were so, POV-Ray doesn't give
> you the choice. It retains all the geometry regardless.

  The original question was memory usage of the raytracing and scanline
algorithms.

>>   1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
>> with a raytracing algorithm than with a scanline-rendering algorithm.

> Is it? With a (arbitrary) mesh, you need to determine the intersection
> of a ray with some kind of octree. This means knowing the ray-plane
> intersection algorithm and how to create, populate, and search octrees.
> Then you need to forward the ray to intersect the particular triangle
> in the mesh, and get its ST coords. You may also need to check other
> parts of the octree to be sure your intersection is nearest the eye.

  If the ray hits a triangle in a nearer octree node, there's absolutely
no way that a triangle in a farther octree node would be nearer. The octree
optimization works in such a way that the first time the ray hits a triangle,
it's surely the closest one.
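
  To illustrate the traversal order (just a sketch; the names are
invented, the ray/triangle test is not shown, and a hit is only accepted
while it lies inside the current node, since a triangle can straddle
several nodes):

#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

struct Ray { double o[3], d[3]; };        // assume non-zero direction components

struct OctreeNode {
    double lo[3], hi[3];                  // axis-aligned bounds
    bool   leaf = true;
    std::vector<int> triangles;           // filled for leaves
    OctreeNode* child[8] = {};            // filled for interior nodes
};

// Not shown: the usual ray/triangle test, returning t or -1 on a miss.
double IntersectTriangle(const Ray& ray, int triIndex);

// Slab test: does the ray cross the node, and over which [tEnter, tExit]?
bool HitBox(const OctreeNode& n, const Ray& r, double& tEnter, double& tExit) {
    tEnter = 0.0;
    tExit  = std::numeric_limits<double>::infinity();
    for (int a = 0; a < 3; ++a) {
        double t0 = (n.lo[a] - r.o[a]) / r.d[a];
        double t1 = (n.hi[a] - r.o[a]) / r.d[a];
        if (t0 > t1) std::swap(t0, t1);
        tEnter = std::max(tEnter, t0);
        tExit  = std::min(tExit,  t1);
    }
    return tEnter <= tExit;
}

// Visit children nearest-first; the first accepted hit is the global nearest,
// because the children a ray crosses cover disjoint, increasing t ranges.
double Traverse(const OctreeNode& node, const Ray& ray) {
    double tEnter, tExit;
    if (!HitBox(node, ray, tEnter, tExit)) return -1.0;

    if (node.leaf) {
        double best = -1.0;
        for (int tri : node.triangles) {
            double t = IntersectTriangle(ray, tri);
            if (t >= tEnter && t <= tExit && (best < 0.0 || t < best)) best = t;
        }
        return best;
    }

    std::vector<std::pair<double, const OctreeNode*>> order;   // (tEnter, child)
    for (int i = 0; i < 8; ++i) {
        double e, x;
        if (node.child[i] && HitBox(*node.child[i], ray, e, x))
            order.push_back(std::make_pair(e, node.child[i]));
    }
    std::sort(order.begin(), order.end());
    for (size_t i = 0; i < order.size(); ++i) {
        double t = Traverse(*order[i].second, ray);
        if (t >= 0.0) return t;                                // nearest hit found
    }
    return -1.0;
}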

> In scanline, you just go through the triangles in the mesh
> and draw them.

  If your mesh has 100 million triangles and only 10 of them are visible (so
that the rest are hidden behind them), you don't want to do that, unless of
course render time doesn't matter.
  If you want any speed, you will need to apply a spatial tree optimization
to your scanline rendering algorithm as well. AFAIK commercial high-end
renderers do this.

  Besides, you say "draw them" as if it were trivial (or at least more trivial
than calculating the intersection of a ray and a triangle). I tend to
disagree.

> ST coords are easy because you get them
> from vertex and edge interpolation.

  You need perspective-correct interpolation (linear interpolation will
give you awful results), which is less trivial.
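
  The standard trick, sketched in illustrative C++: interpolate s/w, t/w
and 1/w linearly in screen space, then divide back per pixel.

// Perspective-correct interpolation of ST coords along a screen-space span.
// Only s/w, t/w and 1/w vary linearly in screen space; the true s,t are
// recovered per pixel by dividing them back out.
struct Vertex {
    double x, y;    // screen position
    double w;       // clip-space w (for a standard projection, the view depth)
    double s, t;    // texture coordinates
};

void InterpolateSpan(const Vertex& a, const Vertex& b, int steps) {
    for (int i = 0; i <= steps; ++i) {
        double f = (steps == 0) ? 0.0 : double(i) / steps;

        // Naive linear interpolation (wrong under perspective):
        //   double s = a.s + f * (b.s - a.s);

        double invW   = (1.0 - f) / a.w + f / b.w;
        double sOverW = (1.0 - f) * (a.s / a.w) + f * (b.s / b.w);
        double tOverW = (1.0 - f) * (a.t / a.w) + f * (b.t / b.w);

        double s = sOverW / invW;      // perspective-correct
        double t = tOverW / invW;
        (void)s; (void)t;              // ...sample the texture with s,t here
    }
}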

> Not true in full raytracing, unless you know in advance
> the direction of every secondary ray.

  You didn't understand this either. Please read my text again.

  You don't need to load meshes to memory, you only need to know their
bounding box. When a ray hits the bounding box, you load the mesh. If a
ray never hits the bounding box of a mesh, that mesh is never loaded.
Understand?
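
  Sketched in code (the class and loader names are invented, just to
show the shape of it):

#include <memory>
#include <string>
#include <utility>

struct Ray;                                            // as elsewhere
struct Mesh { bool Intersect(const Ray&, double& t) const; };
struct BBox { bool Hit(const Ray&) const; };

Mesh* LoadMeshFromFile(const std::string& path);       // loader, not shown

// A proxy that knows only its bounding box until a ray actually hits it.
class LazyMesh {
public:
    LazyMesh(const BBox& box, std::string file)
      : bounds(box), path(std::move(file)) {}

    bool Intersect(const Ray& ray, double& t) {
        if (!bounds.Hit(ray)) return false;            // cheap reject; never loaded
        if (!mesh) mesh.reset(LoadMeshFromFile(path)); // first hit: load on demand
        return mesh && mesh->Intersect(ray, t);
        // (a cache could also drop 'mesh' again if it hasn't been hit lately)
    }

private:
    BBox bounds;
    std::string path;
    std::unique_ptr<Mesh> mesh;                        // empty until needed
};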

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 09:10:01
Message: <web.3edc9d6bc13294582ff34a90@news.povray.org>
Patrick Elliott wrote:
>With the number of components we could fit into a new card, the result
>would be no bigger than an existing NVidia card. However, the development
>time needed to produce the chip and make sure it is stable, plus having to
>pray that game companies used to the cheats supplied through OpenGL and
>DirectX will actually use it, makes the odds of anyone seeing such a card
>on the market any time soon very unlikely.

Cheats? Was there ever a 30fps algorithm free of them? :-)

>The irony is that in many cases a game has to upload large chunks of
>additional geometry and pre-made textures every few seconds, while a true
>raytrace engine would only have to upload terrain geometry and those
>textures you couldn't produce with procedural systems (which for most
>games would be maybe 10% of them...)

Why won't a raytracing card have to load new geometry regularly? It's not
necessarily going to know ahead of time which objects will be needed during
a game, and they are not all going to fit in memory at the same time for a
modern game.

Procedural textures will have to be calculated by the graphics card, which
surely has enough to do already (if you get the CPU to do it and send it to
the graphics card, you might as well send an image). In any case, I thought
procedural textures were not specific to raytracing?

Whilst I'm sure that, given sufficient time, you could reproduce 90% (or
100%) of game image textures with procedural textures, the question surely
is whether it is worth the extra time taken to generate the textures in the
first place, every frame. I can especially see problems producing things
like text at speed.



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 3 Jun 2003 10:11:58
Message: <3edcacae$1@news.povray.org>
>  If the ray hits a triangle in a nearer octree node, there's absolutely
> no way that a triangle in a farther octree node would be nearer.

Well, there's that word, "if". If it doesn't hit
the nearer octree node, then another one has to be tested.


>   If you want any speed, you will need to apply a spatial tree optimization
> to your scanline rendering algorithm as well. AFAIK commercial high-end
> renderers do this.

Yes, I agree that's a given. Using POV-Ray as
infrastructure is good in this respect,
since it has raycasting with which to help build
such trees. Games of course do this (BSP, portals, etc.).
I think the attraction of scanline in film was
that many scenes are in rooms, where setting
up spatial culls is less demanding.


>  You need perspective-correct interpolation (linear interpolation will
> give you awful results), which is less trivial.

True, except for micropolygons; since those are at or below pixel size,
the error from linear interpolation is negligible. My particular prototype
does draw macropolygons, so I will need to do that. In REYES, it's not
necessary.


>   You don't need to load meshes to memory, you only need to know their
> bounding box. When a ray hits the bounding box, you load the mesh. If a
> ray never hits the bounding box of a mesh, that mesh is never loaded.
> Understand?

Yes, I see what you're saying. So you would
load the bboxes of the geometry by themselves,
and then they would fill with the actual
respective geometry as initial hits occurred.
Yes, that's doable. You eventually wind
up with only the necessary geometry.
For some reason I thought you meant trying to
load/unload. Sorry about that.

I guess a developer might reason that, worst case,
you need to have enough memory for all objects,
so why bother adding the (albeit slight) overhead
of doing the first-time hit test. But I can see
this being a win on distributed or shared resource
systems when rendering movies or different view
angles of stills, because statistically more RAM
would be available to other tasks. POV would
definitely benefit, because when it animates
it reparses the .pov file for each frame. So as
you zoom in on a scene, less and less geometry
would get loaded (assuming no reflected rays that
wind up hitting the off-camera objects, at least).

I don't want to get into a scanline vs. raytracing
debate (except to collect the pros and cons of
each approach) because I want to augment POV-Ray, not replace
its raytracer. Hybrid renderers offer a compelling
wealth of strengths. Maybe a person doesn't need
to scanline anything most of the time, but
it's nice to know that he could if he wanted to.
For me, it's about having choice. I don't want
to invest lots of time learning POV and then
be forced to always raytrace when scanlining
might offer a big speed benefit on a particular job
or various parts of a job.

Thanks,
Ray



From: Ray Gardener
Subject: Raytracing individual objects
Date: 3 Jun 2003 13:16:05
Message: <3edcd7d5$1@news.povray.org>
> You can first
> raytrace one mesh and then another (using the same z-buffer method for
> surface removal as in scanline rendering).

Yes, that's true. That can be an efficient way
in some cases, like when a heightfield presents
a slim or small profile. What I particularly
like is that it provides a fine-grained way
to mix scanlining and raytracing.

One difficulty I have with doing that is that
sometimes it's easier to drive a shader using
some stepped coordinate system of the primitive rather
than the world locations a raytracer returns.
For example, when scanlining a heightfield,
I do it cell by cell, so rocks can be distributed
based on how likely a cell is to be occupied.
With raytracing, I have to derive the cell
coords, and then maintain some kind of map
to keep track of which cell was painted with what.
I realize it's not good form to write primitive-specific
shaders, but they can be much easier to do sometimes,
especially when prototyping.
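
Deriving the cell from the hit point is simple enough; it's the
bookkeeping around it that gets tedious. A sketch (illustrative C++;
the field names are made up, not my actual code):

#include <cmath>
#include <map>

// Map a world-space hit point back to the heightfield cell it lies in, so a
// primitive-specific shader can be driven by cell coordinates instead of the
// raw world location the raytracer returns.
struct Heightfield {
    double originX, originZ;     // world position of cell (0,0)
    double cellSize;             // world-space size of one cell
    int    cellsX, cellsZ;       // grid dimensions
};

struct CellCoord {
    int cx, cz;
    bool operator<(const CellCoord& o) const {
        return cx != o.cx ? cx < o.cx : cz < o.cz;
    }
};

CellCoord WorldToCell(const Heightfield& hf, double wx, double wz) {
    int cx = int(std::floor((wx - hf.originX) / hf.cellSize));
    int cz = int(std::floor((wz - hf.originZ) / hf.cellSize));
    if (cx < 0) cx = 0; else if (cx >= hf.cellsX) cx = hf.cellsX - 1;
    if (cz < 0) cz = 0; else if (cz >= hf.cellsZ) cz = hf.cellsZ - 1;
    return CellCoord{ cx, cz };
}

// The bookkeeping part: remember what each cell was painted with so hits
// landing in the same cell on later rays shade consistently.
std::map<CellCoord, int> cellPaint;    // cell -> rock/texture id, for example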

Displacement shading is also easier when scanlining.
At least, I haven't figured an easy way to do it
when raytracing.

Hmm.. this thread has been a good exploration
of the pros and cons of the two rendering techniques.
I'll make a digest of them and add it to my website.

Ray



