POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray : Re: Scanline rendering in POV-Ray
From: Warp
Date: 3 Jun 2003 07:30:35
Message: <3edc86db@news.povray.org>
Ray Gardener <ray### [at] daylongraphicscom> wrote:
> Not quite. Hidden surface removal is accomplished in
> raytracing by using the nearest intersection. If an
> eye ray does not intersect other objects/bboxes along its vector,
> it cannot be sure that it has hit the nearest one.

  You didn't understand what I meant.

  Your scanline algorithm renders all meshes separately, one by one.
There's nothing in raytracing which disallows this: you can first
raytrace one mesh and then another, using the same z-buffer method for
hidden-surface removal as in scanline rendering. Of course this means that
some pixels get raytraced more than once, but that's exactly what
the scanline method does too (except that with two meshes the raytracer
calculates each overlapping pixel twice, while the scanline method can
calculate some pixels tens of times, depending on the meshes).
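The point can be made concrete with a small sketch (illustrative code, not from any renderer): two meshes rendered completely independently, by whatever method, can be merged through one shared z-buffer, exactly as a scanline renderer does.

```python
# Sketch: per-mesh renders composited through a shared z-buffer.
# All names and data here are invented for illustration.
import math

WIDTH, HEIGHT = 4, 4

def composite(frame, zbuf, mesh_render):
    """Merge one mesh's (color, depth) samples into the shared buffers."""
    for (x, y), (color, depth) in mesh_render.items():
        if depth < zbuf[y][x]:      # nearer than anything drawn so far
            zbuf[y][x] = depth
            frame[y][x] = color

frame = [[None] * WIDTH for _ in range(HEIGHT)]
zbuf = [[math.inf] * WIDTH for _ in range(HEIGHT)]

# Two overlapping "renders": mesh B is nearer at the shared pixel (1,1).
mesh_a = {(1, 1): ("A", 5.0), (2, 1): ("A", 5.0)}
mesh_b = {(1, 1): ("B", 2.0)}

composite(frame, zbuf, mesh_a)
composite(frame, zbuf, mesh_b)
# The shared pixel ends up with mesh B's color; the other keeps mesh A's.
```

Note that `composite` never needs to know how the per-mesh samples were produced; raytracing each mesh in isolation would feed it exactly as well as scanline rasterization.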

> Even if it were so, POV-Ray doesn't give
> you the choice. It retains all the geometry regardless.

  The original question was memory usage of the raytracing and scanline
algorithms.

>>   1. It's a lot simpler. Rendering and texturing a mesh is a lot simpler
>> with a raytracing algorithm than with a scanline-rendering algorithm.

> Is it? With a (arbitrary) mesh, you need to determine the intersection
> of a ray with some kind of octree. This means knowing the ray-plane
> intersection algorithm and how to create, populate, and search octrees.
> Then you need to forward the ray to intersect the particular triangle
> in the mesh, and get its ST coords. You may also need to check other
> parts of the octree to be sure your intersection is nearest the eye.

  If the ray hits a triangle in a nearer octree node, there's absolutely
no way that a triangle in a farther octree node could be nearer. The octree
optimization works in such a way that the first triangle the ray hits is
guaranteed to be the closest one.
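A minimal sketch of that guarantee (not POV-Ray's actual code; the node and lookup structures are invented): if nodes are visited in front-to-back order along the ray, the search can stop at the first node that yields a hit.

```python
# Sketch: front-to-back node traversal with early exit.
# A real octree would also have to confirm the hit distance lies inside
# the node's extent, since triangles can straddle nodes; omitted here.

def first_hit(nodes, ray_hits_triangle):
    """nodes: list of (entry_t, triangles), sorted front-to-back
    along the ray. Returns (t, triangle) of the closest hit, or None."""
    for entry_t, triangles in nodes:
        best = None
        for tri in triangles:
            t = ray_hits_triangle(tri)
            if t is not None and (best is None or t < best[0]):
                best = (t, tri)
        if best is not None:
            return best          # early exit: nothing farther can be nearer
    return None

# Demo with fake intersection distances.
hits = {"t1": None, "t2": 3.0, "t3": 7.0}
nodes = [(0.5, ["t1"]), (2.0, ["t2"]), (6.0, ["t3"])]
result = first_hit(nodes, hits.get)   # stops before ever testing t3's node
```

The triangle in the last node is never even tested, which is where the speedup over "just go through the triangles" comes from.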

> In scanline, you just go through the triangles in the mesh
> and draw them.

  If your mesh has 100 million triangles and only 10 of them are visible
(the rest being hidden behind them), you don't want to do that, unless
render time simply doesn't matter.
  If you want any speed, you will need to apply a spatial tree optimization
to your scanline rendering algorithm as well. AFAIK commercial high-end
renderers do exactly this.

  Besides, you say "draw them" as if it were trivial (or at least more
trivial than calculating the intersection of a ray and a triangle). I tend
to disagree.

> ST coords are easy because you get them
> from vertex and edge interpolation.

  You need perspective-correct interpolation (plain linear interpolation
in screen space gives you awful results), which is not so trivial.
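A sketch of the distinction (illustrative code, standard textbook formula): across a screen-space span, an attribute such as a texture coordinate s must be interpolated as s/z together with 1/z and then divided back; interpolating s directly is only correct when depth is constant.

```python
# Sketch: linear vs. perspective-correct interpolation of a texture
# coordinate s between two edge endpoints at depths z0 and z1.

def lerp(a, b, t):
    return a + (b - a) * t

def perspective_correct(s0, z0, s1, z1, t):
    """Interpolate s at screen-space fraction t, correcting for depth."""
    inv_z = lerp(1.0 / z0, 1.0 / z1, t)       # 1/z interpolates linearly
    s_over_z = lerp(s0 / z0, s1 / z1, t)      # so does s/z
    return s_over_z / inv_z

# Endpoints at depths 1 and 4: halfway across the *screen* span is
# nowhere near halfway through *texture* space.
linear = lerp(0.0, 1.0, 0.5)                          # naive: 0.5
correct = perspective_correct(0.0, 1.0, 1.0, 4.0, 0.5)  # 0.2
```

With a 4:1 depth ratio the naive and correct answers differ by a factor of 2.5 at mid-span, which is why screen-space linear interpolation visibly warps textures.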

> Not true in full raytracing, unless you know in advance
> the direction of every secondary ray.

  You didn't understand this either. Please read my text again.

  You don't need to load meshes to memory, you only need to know their
bounding box. When a ray hits the bounding box, you load the mesh. If a
ray never hits the bounding box of a mesh, that mesh is never loaded.
Understand?
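A minimal sketch of that lazy-loading scheme (the class and helper names are invented for illustration): only the bounding box stays resident; the full geometry is loaded the first time a ray actually hits the box.

```python
# Sketch: bounding-box proxy that defers loading the mesh geometry.

class LazyMesh:
    def __init__(self, bbox, path):
        self.bbox = bbox        # (min_corner, max_corner), always in memory
        self.path = path        # where the full geometry lives on disk
        self._mesh = None       # geometry not loaded yet

    def intersect(self, ray, ray_hits_box, load_mesh):
        if not ray_hits_box(ray, self.bbox):
            return None                     # never hit, never loaded
        if self._mesh is None:
            self._mesh = load_mesh(self.path)   # first box hit: load now
        return self._mesh.intersect(ray)

# Demo with stand-in box test and loader: only the hitting ray loads.
loaded = []

def load_mesh(path):
    loaded.append(path)
    class _Mesh:                            # stand-in for real geometry
        def intersect(self, ray):
            return 2.5
    return _Mesh()

def ray_hits_box(ray, bbox):
    return ray == "hits"                    # stand-in for a real slab test

mesh = LazyMesh(bbox=((0, 0, 0), (1, 1, 1)), path="big.mesh")
miss = mesh.intersect("misses", ray_hits_box, load_mesh)  # no load happens
hit = mesh.intersect("hits", ray_hits_box, load_mesh)     # loads exactly once
```

A mesh whose bounding box is never hit by any ray costs only the few bytes of its box, which is the memory argument being made above.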

-- 
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}//  - Warp -


