Scanline rendering in POV-Ray (Message 27 to 36 of 96)
From: ingo
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 16:08:09
Message: <Xns938EE1B6C217Bseed7@povray.org>
in news:as7ndvkenr4ku0uienvc66tu2jmckhvcol@4ax.com Lutz Kretzschmar wrote:

> It is extremely hard to tessellate (triangulate) an
> arbitrary CSG object.

Just for my understanding: every time I see a statement similar to
this I wonder, does it mean building a CSG object from POV-Ray
primitives and then tessellating it, or tessellating the "POV-Ray
primitives" to a certain level and then doing the CSG (as I saw it in
a very old version of 3D Studio)?

Ingo



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 16:25:15
Message: <3edbb2ab$1@news.povray.org>
> > By Mr. Froehlich's own admission, raytracing speed doesn't compete with
> > scanlining until you have billions of objects, so there's definitely
> > room for exploration under that limit.
>
> That is not what I said!  We were talking about video game realtime usage
> using triangles.  And some fuzzy argument about the film industry that you
> mentioned.

I stand corrected. A billion triangles, then.
I would imagine there's also room for exploration
under that limit.

Ray



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 16:40:38
Message: <3edbb646$1@news.povray.org>
> I think it would be a great feature to have in POV-Ray, be it simply
> for 3D hardware supported previewing :-) So, I would definitely
> support any undertaking you do.

Thanks, I appreciate that.

> And I know you have a good grip on the depth of the task, but I would
> like to reply to the above quoted paragraph that it is not quite that
> simple, at least as far as POV-Ray's primary modelling paradigm (CSG)
> is concerned. It is extremely hard to tessellate (triangulate) an
> arbitrary CSG object. I have seen OpenGL code that renders CSGs using
> stencil buffers and such, but I'm not sure if that actually works for
> complex, nested CSG objects made out of more than just a couple of
> boxes and spheres.
> I evaluated a lot of the same things when thinking about how to
> display a scene in Moray....

Well, CSG is undoubtedly a challenge.
As long as "extremely hard" doesn't
wind up becoming "intractable" I'm
willing to forge ahead. In the reverse
direction, I would love to see Moray
display POV CSG objects.

In REYES, which tessellates down to
micropolygons, one can consider the
3D location of the polygon to be
inside or outside another primitive, and
thus cull it at render time. So all the
primitives comprising the CSG object
need to be in memory (not a heavy requirement)
and they must be iterated and volume tested
during micropolygon rendering (possibly heavy).

That also suggests a brute force solution
to obtaining triangle meshes from a CSG object:
Tessellate to micropolygons, insert the visible
polys as points into a 3D point cloud, and
then convert the point cloud into a mesh.
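
A minimal C++ sketch of the volume test both ideas rely on. All the
type names below are illustrative assumptions, not POV-Ray or REYES
source:

    struct Vec3 { double x, y, z; };

    // Any primitive that can answer an inside/outside query.
    struct Primitive {
        virtual bool contains(const Vec3& p) const = 0;
        virtual ~Primitive() = default;
    };

    struct Sphere : Primitive {
        Vec3 center; double radius;
        Sphere(Vec3 c, double r) : center(c), radius(r) {}
        bool contains(const Vec3& p) const override {
            double dx = p.x - center.x, dy = p.y - center.y,
                   dz = p.z - center.z;
            return dx*dx + dy*dy + dz*dz <= radius*radius;
        }
    };

    // CSG difference A - B: a point is inside if it is in A but not B.
    struct Difference : Primitive {
        const Primitive& a; const Primitive& b;
        Difference(const Primitive& a_, const Primitive& b_)
            : a(a_), b(b_) {}
        bool contains(const Vec3& p) const override {
            return a.contains(p) && !b.contains(p);
        }
    };

    // Render-time cull: keep a micropolygon only if its center survives
    // the CSG volume test. The same test classifies points for the
    // point-cloud-to-mesh idea above.
    bool keepMicropolygon(const Vec3& center, const Primitive& csg) {
        return csg.contains(center);
    }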

Ray



From: Warp
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 16:46:26
Message: <3edbb7a2@news.povray.org>
Ray Gardener <ray### [at] daylongraphicscom> wrote:
> In a raytracer, all of the scene's geometry must
> be retained in memory because secondary rays due
> to reflection, refraction, shadows, etc. could
> be aimed anywhere, thus random access to the geometry
> database must be possible.

  There are several things here which are not completely accurate.

  The reason given above for why all geometry needs to be retained
in memory relates exactly to what raytracing can do *more* than
scanline-rendering (i.e. reflections, refractions and shadow-testing).
  There's nothing in raytracing that *forces* you to use these extra
features. If it's enough to get the same result as you would get with
scanline-rendering, you can use simple raycasting (i.e. calculating
no reflected or refracted rays) without shadows. This removes the
necessity of having all the objects in memory at once in exactly the
same way as in scanline-rendering.
  (If you want to use scanline-type reflections and shadows, there's
nothing in raytracing which would prevent using the same techniques.)
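
  A bare-bones C++ sketch of that raycasting loop (the names are
illustrative assumptions, not POV-Ray internals):

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };
    struct Ray  { Vec3 o, d; };   // origin and unit-length direction

    // Nearest positive hit distance of a ray against a sphere, if any.
    std::optional<double> hitSphere(const Ray& r, const Vec3& c,
                                    double rad) {
        Vec3 oc{r.o.x - c.x, r.o.y - c.y, r.o.z - c.z};
        double b    = oc.x*r.d.x + oc.y*r.d.y + oc.z*r.d.z;
        double cc   = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - rad*rad;
        double disc = b*b - cc;
        if (disc < 0.0) return std::nullopt;
        double t = -b - std::sqrt(disc);
        return t > 0.0 ? std::optional<double>(t) : std::nullopt;
    }

    // Per pixel: take the closest hit over all objects, shade it
    // locally (texture plus direct lighting), and stop. No reflected,
    // refracted, or shadow rays are spawned, so the renderer needs no
    // more of the scene per pixel than a scanline pass does.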

  Of course one would ask: if the result is the same as with
scanline-rendering, then why not do it with scanline-rendering?
  There are two reasons:
  1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
with a raytracing algorithm than with a scanline-rendering algorithm.
  2. With very complex meshes raytracing can be even faster (because
triangles which are not visible are automatically skipped due to
octree optimizations).

  Also, even if full raytracing is done, keeping the whole scene geometry
in memory at once is not strictly necessary.
  If the bounding box of a mesh object is never hit, there's no reason
to even load that mesh into memory. It should be possible to develop
a load-on-demand scheme, where meshes are loaded only when needed. It
could also be a cache scheme, where meshes which have not been hit for
the longest time can be freed from memory.
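
  A sketch of how such a load-on-demand mesh proxy could look in C++
(a hypothetical design, not an existing POV-Ray feature):

    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Triangle { /* vertices, normals, ST coords... */ };

    class LazyMesh {
        std::string file_;   // where the full mesh lives on disk
        std::unique_ptr<std::vector<Triangle>> tris_;  // empty until hit
    public:
        explicit LazyMesh(std::string file) : file_(std::move(file)) {}

        // Called only after the always-resident bounding box is hit.
        const std::vector<Triangle>& triangles() {
            if (!tris_) {
                tris_ = std::make_unique<std::vector<Triangle>>();
                // ... parse file_ and fill *tris_ here ...
            }
            return *tris_;
        }

        // Cache eviction: a mesh not hit for the longest time is freed;
        // its bounding box stays, so it can be reloaded if hit later.
        void evict() { tris_.reset(); }
    };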

> In a scanline renderer, each object is considered
> only once, so an object only exists in memory
> while being drawn.

  This has the drawback that if a hundred triangles cover a certain
pixel, that pixel is recalculated a hundred times (including texturing
and lighting).
  In raytracing, the closest intersection point for the pixel is
calculated, and texturing and lighting are then computed for that
triangle only.

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -



From: Lutz Kretzschmar
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 17:45:55
Message: <5ahndv47k8grn3i7i58026jhmrmhp9q53p@4ax.com>
Hi Ray Gardener, you recently wrote in povray.general:

> ... Don't worry; I'm not asking the POV-Team to do anything.
Ray, Thorsten is not posting on behalf of the POV-Team; those are
purely his own thoughts.

- Lutz
  email : lut### [at] stmuccom
  Web   : http://www.stmuc.com/moray



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 18:05:28
Message: <MPG.194577cdcf464339897fe@news.povray.org>
Another point to make is that most games don't use raytracing because,
until relatively recently, the hardware architecture needed to
implement a true raytracer on a graphics card would have required
something three times the size of a normal card. There is only one I
have even heard rumored to have been made; it is used in movie
production, cost tens of thousands of dollars to buy, and was nearly
the size of a laptop's entire motherboard.

With the number of components we could now fit onto a new card, the
result would be no bigger than an existing NVidia card. However, the
development time needed to produce the chip, make sure it was stable,
and then pray that game companies used to the cheats supplied through
OpenGL and DirectX will actually use it makes the odds of anyone
seeing such a card on the market any time soon very unlikely. The
irony is that in many cases a game has to upload large chunks of
additional geometry and pre-made textures every few seconds, while a
true raytracing engine would only have to upload terrain geometry and
those textures you couldn't produce with procedural systems (which for
most games would be maybe 10% of them...). If you couldn't get a jump
in speed and frame rate from having 90% of your data remain in place
throughout most of the game, you may as well not even bother trying.
And most experts believe that, short of a major increase in the size
or speed of data transfer, the latest generation of cards is close to
the physical limit of what can be improved.

Hmm. Maybe there is a market for a true raytracing based card after 
all...

Not that any of the above addresses the central issue, which, as
everyone else has said, is that there is not likely to be a major
improvement in speed or memory use from what you are planning to do,
or at least none that, in the case of memory, couldn't be accomplished
by trading out data that POV-Ray doesn't specifically need at any
given moment.

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 19:05:46
Message: <3edbd84a@news.povray.org>
>   There's nothing in raytracing that *forces* you to use these extra
> features. If it's enough to get the same result as you would get with
> scanline-rendering, you can use simple raycasting (ie. don't calculate
> any reflected nor refracted rays) without shadows. This way the necessity
> of having all the objects in memory at once is removed in the exact same
> way as in scanline-rendering.

Not quite. Hidden surface removal is accomplished in
raytracing by taking the nearest intersection. Unless an
eye ray is tested against the other objects/bboxes along its vector,
it cannot be known that the hit found is the nearest one.

Even if it were so, POV-Ray doesn't give
you the choice. It retains all the geometry regardless.



>   1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
> with a raytracing algorithm than with a scanline-rendering algorithm.

Is it? With an (arbitrary) mesh, you need to determine the intersection
of a ray with some kind of octree. This means knowing the ray-plane
intersection algorithm and how to create, populate, and search octrees.
Then you need to forward the ray to intersect the particular triangle
in the mesh, and get its ST coords. You may also need to check other
parts of the octree to be sure your intersection is nearest the eye.

In scanline, you just go through the triangles in the mesh
and draw them. ST coords are easy because you get them
from vertex and edge interpolation. If one does not have
a triangle rasterizer handy, however, I can see how
the raytracing approach can be desirable.
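
To make the comparison concrete, here is a minimal C++ sketch of the
scanline side, where the ST coords fall out of plain edge/barycentric
interpolation with no octree or ray-plane tests (all names are
illustrative):

    #include <algorithm>
    #include <cmath>

    struct Vert { double x, y, s, t; };  // screen position + ST coords

    // Rasterize one triangle (assumes counter-clockwise winding).
    void rasterize(const Vert v[3], int width, int height) {
        // Twice the signed area of triangle (a, b, p).
        auto edge = [](const Vert& a, const Vert& b, double px,
                       double py) {
            return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
        };
        double area = edge(v[0], v[1], v[2].x, v[2].y);
        if (area == 0.0) return;  // degenerate triangle
        int minX = std::max(0, (int)std::floor(
                       std::min({v[0].x, v[1].x, v[2].x})));
        int maxX = std::min(width - 1, (int)std::ceil(
                       std::max({v[0].x, v[1].x, v[2].x})));
        int minY = std::max(0, (int)std::floor(
                       std::min({v[0].y, v[1].y, v[2].y})));
        int maxY = std::min(height - 1, (int)std::ceil(
                       std::max({v[0].y, v[1].y, v[2].y})));
        for (int y = minY; y <= maxY; ++y)
            for (int x = minX; x <= maxX; ++x) {
                double w0 = edge(v[1], v[2], x + 0.5, y + 0.5) / area;
                double w1 = edge(v[2], v[0], x + 0.5, y + 0.5) / area;
                double w2 = 1.0 - w0 - w1;
                if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // outside
                double s = w0*v[0].s + w1*v[1].s + w2*v[2].s;
                double t = w0*v[0].t + w1*v[1].t + w2*v[2].t;
                // z-test and shading with (s, t) go here.
                (void)s; (void)t;
            }
    }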

I would characterize the relative difficulties as being
the tasks involved in adding new primitive types.
In raytracing, one must provide an intersection function.
In scanline, one must provide a tesselator. Having
written both a simple raytracer and scanliner, I
would say both consume about equal development time.
For the heightfield primitive, however, it was
challenging getting the last row/column to intersect
correctly -- the slab system and DDA traversal are
prone to easy-to-make off-by-one errors. For scanline,
it was just a matter of pulling triangles.



>   Also, even if full raytracing done, keeping the whole scene geometry
> in memory at once is not strictly necessary.
>   If the bounding box of a mesh object is never hit, there's no reason
> to even load that mesh into memory. It should be possible to develop
> a load-at-demand scheme, where meshes are loaded only when needed. It
> could also be a cache scheme, where meshes which have not been hit in
> longest time can be freed from memory.

Not true in full raytracing, unless you know in advance
the direction of every secondary ray. An object behind
the camera, for instance, can appear reflected in another
object in front of the camera. The trouble is, you don't
know if an object is never hit until you try to hit it,
hence you must bring it into memory.

Granted, some culling cases are possible. A reflective
flat opaque sphere, for example, can only cast secondary rays
in a half-sphere centered around the point that is
nearest to the eye. If no other objects lie nearer
than the plane parallel to the half-sphere's flat part,
and no other objects cast secondary rays, then those
objects behind the plane can be unloaded. The cull
search goes up exponentially with additional
reflective objects, however.



> > In a scanline renderer, each object is considered
> > only once, so an object only exists in memory
> > while being drawn.
>
>   This has the drawback that if a hundred triangles cover a certain
> pixel, that pixel is recalculated a hundred times (including texturing
> and lighting).

Not true. Texture and lighting can be deferred in scanline rendering
until after the z-buffer is populated. One needs a larger z-buffer, however.
In fact, I will implement this in my existing renderer to see
what the performance increase is, since my z-buffer already has
the necessary structure.
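
A short C++ sketch of such a deferred scheme (the fragment layout is
an illustrative assumption):

    #include <limits>
    #include <vector>

    // One entry per screen pixel: just enough shading inputs to
    // texture and light the surviving surface later.
    struct Fragment {
        float z = std::numeric_limits<float>::infinity();
        float s = 0.0f, t = 0.0f;               // ST coords
        float nx = 0.0f, ny = 0.0f, nz = 1.0f;  // normal for lighting
        int   material = -1;  // -1: nothing covers this pixel
    };

    // Pass 1, per triangle fragment: depth test only, no shading.
    // A pixel covered by a hundred triangles still shades only once.
    inline void depthTest(std::vector<Fragment>& buf, int pixel,
                          const Fragment& f) {
        if (f.z < buf[pixel].z) buf[pixel] = f;
    }

    // Pass 2, once after all geometry is drawn: texture and light
    // each covered pixel exactly once.
    void shadeDeferred(std::vector<Fragment>& buf) {
        for (Fragment& f : buf) {
            if (f.material < 0) continue;
            // colormap lookup with (f.s, f.t), then lighting with the
            // stored normal, goes here.
        }
    }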

Ray



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 2 Jun 2003 19:07:59
Message: <3edbd8cf$1@news.povray.org>
> Ray, Thorsten is not posting on behalf of the POV-Team, those are
> purely his own thoughts.

Okay. I wasn't sure, because his sig mentioned
he was on the POV-Team.

Ray



From: Ray Gardener
Subject: Deferred texturing benchmarks in scanline renderer
Date: 2 Jun 2003 22:10:25
Message: <3edc0391$1@news.povray.org>
I finished benchmarking my new z-buffer,
which defers texturing and lighting until
after all the pixels have been depth sorted.
Each render was preceded with an srand(0) call
to ensure that all random-number-dependent
shading generated exactly the same triangles.
Times for deferred rendering include iterating
through the light cache and computing the
final appearance of the scene. The scene was
a typical landscape using a roughly 500 x 500
pixel heightfield. These results are preliminary,
and I will be doing more tests as time goes on.

For a window size of 900 x 250, the difference
in speed was small, between 2 and 8 seconds on jobs
that ran between 1 and 2 minutes depending on LOD.
The texturing code in this case is very simple,
being just a colormap lookup; it represents a
small percentage of the total render pipeline.

Increasing the window size to 1800 x 500
changed things quite a bit:

    900 x 250        1800 x 500
  --------------   ---------------
   2m10s normal     6m05s normal
   2m02s deferred   5m35s deferred
      8s better       30s better

The time savings scaled proportionally
with the window size, but the render
time scaled less, resulting in greater
percentage time savings at the larger size.
With more screen pixels, the benefits of
deferred texturing increase.

I will be updating the Leveller Viewer
to version 2.3, including the scanline
renderer, for those who wish to evaluate
this particular scanline renderer implementation
with their own data.

Ray



From: Ray Gardener
Subject: Wrapping up
Date: 3 Jun 2003 00:19:02
Message: <3edc21b6$1@news.povray.org>
I thank everyone for their input.
I am documenting the scanline work at
http://www.daylongraphics.com/products/leveller/render/


Ray Gardener
Daylon Graphics Ltd.
"Heightfield modeling perfected"
http://www.daylongraphics.com



