  Perhaps a "process_geometry" keyword  
From: Ray Gardener
Date: 4 Jun 2003 16:31:54
Message: <3ede573a@news.povray.org>
> POV textures *are* procedural. Image maps are relatively rarely used.

Sorry, I meant 'procedural' in the user-definable sense.
POV-Ray's procedural textures are predefined.

To be fair, I thought I remembered reading
somewhere that POV 3.5 lets textures use
user-defined functions, although going
through the docs again just now I can't
find a reference to it. There's definitely
no displacement shading, at any rate,
except with isosurfaces (more below).
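
If user-definable texture functions are in
there, I'd expect them to look something like
a function pattern driving a pigment. A rough
sketch from memory (the exact syntax may well
be off):

  pigment
  {
     // User-defined pattern (sketch only; syntax from memory).
     function { 0.5 + 0.5*sin(10*x)*cos(10*z) }
     color_map
     {
        [0 color rgb 0]
        [1 color rgb 1]
     }
  }

But even if that works, it only drives the
surface color; it doesn't move the surface.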


> Uh, you may want to look at the POVMAN patch, which lets you use
> Renderman shaders on top of the existing procedural texture system.

Thanks. It's a great hack, but it appears
limited to surface shading. Using POVMAN with
scanlining would make displacement shading easy.



> You like procedural geometry but don't like isosurfaces?

Well, isosurfaces have a sampling function
interface analogous to regular shading in
RenderMan. For some shapes this is
preferable -- e.g., a sphere is just
x^2 + y^2 + z^2 - r^2. But for other shapes,
determining from (x,y,z) whether the point
is inside or outside gets complicated.
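
To make that concrete, the sphere case is
about as simple as an isosurface gets --
roughly like this (a sketch; the container
and max_gradient values are just guesses):

  isosurface
  {
     // Implicit unit sphere: inside where the function is negative.
     function { x*x + y*y + z*z - 1 }
     contained_by { sphere { 0, 1.1 } }
     max_gradient 2   // a guess; POV warns if it turns out too low
     pigment { color rgb 1 }
  }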

Gritz had a good explanation in one of his
RenderMan papers -- he used drawing a line
on a surface as an example. One way is to
iterate over all points (x,y) on the surface
and test whether each point lies on the
line. This amounts to hit testing within
the rectangle formed by the line's endpoints
and thickness. Another way to draw the line
is to just rasterize that rectangle directly.
The latter approach is also more intuitive
for most people, because it parallels the
way artists draw in the real world: if you
want to draw a line, you touch a pen to one
point and drag it to the other.
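
In POV terms the two approaches to that line
might look roughly like this (a hypothetical
sketch; the numbers are arbitrary):

  // Hit-test approach: sample points and ask whether each one
  // lies on a line from <0,0> to <1,0> with thickness 0.1.
  // The function returns 1 on the line and 0 elsewhere.
  #declare OnLine = function(x, y, z)
  { select(abs(y) - 0.05, 1, 0) * select(x, 0, select(x - 1, 1, 0)) }

  // Rasterize-the-rectangle approach: just state the shape directly.
  box { <0, -0.05, 0>, <1, 0.05, 0.01> }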

I like having both approaches available.
For the particular set of shaders I'm currently
working on, I find using procedural geometry
easier.

POV-Ray might benefit from a procedural
geometry keyword, with the option of either
emitting the geometry to the scene's object
tree (so it is raytraced along with the
other objects) or rendering it immediately
into a z-buffer. In fact, I may take this
approach, since the scanliner can reside
outside POV-Ray. I'd have to let POV pass
information about the objects (for shading),
lights, and camera to it. So one would have
something like this in an example script:

  global_settings
  {
     geometry_processing
     {
        enable=true

        overrides
        {
           raytrace=false   // If true, force processors to add
                            // the geometry they create to the object
                            // tree for raytracing.

           scanline=false   // If true, processors always render
                            // immediately into the zbuffer.

           scanline_method=triangle
             // With 'triangle', processors may emit macropolygons.
             // With 'reyes', they are forced to use the REYES
             // algorithm and dice everything to micropolygons.
        }
     }
  }
  // Zbuffer allocates and initializes here if necessary.

  ...

  height_field
  {
     png "hf.png"
     process_geometry "landscape_detail" true
  }

There would be a DLL (on Win32) named landscape_detail.dll
that would be passed the object, along with references
to the zbuffer, the object tree, and the camera and
lighting information. The DLL could then create whatever
geometry it wanted, adding it to the tree or rasterizing it
into the zbuffer (the bool arg after the DLL name indicates which).

The geometry_processing keyword would also be available
outside global_settings, so that one could easily change
how specific sections of a script are rendered.
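
For instance (all of this syntax being hypothetical,
of course), one could imagine pushing a single object
through the scanliner while the rest of the scene is
raytraced as usual:

  // Hypothetical: per-object override of the global settings,
  // forcing this object's processor output into the zbuffer.
  height_field
  {
     png "hf.png"
     geometry_processing { overrides { scanline=true } }
     process_geometry "landscape_detail" true
  }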

Ray

