POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray
From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 05:25:01
Message: <web.3eddb93bc1329458541c87100@news.povray.org>
Patrick Elliott wrote:

>True enough, but in this case I mean 'cheat' as in faking real geometry
>like boxes and spheres using triangles. I consider it a cheat because it
>takes advantage of the existing hardware capability to produce something
>that only looks real by using so many triangles that you can't place
>anything else in the scene with it. That is the nature of most cheats,
>something that looks good enough for your purpose, but doesn't really
>reproduce the result accurately.

You can exactly reproduce a mathematical box with 12 triangles. The sphere
is a better example, but where does this approach end? You end up with a
plethora of "fundamental" objects (including triangles!) to avoid
impersonating one geometry with another. This is great for many purposes
(see: POV), but I assert that it's not too good for games - being able to
manufacture anything out of a single shape (especially one with the
projection properties of a triangle) is valuable.
Nobody is going to bother with perfect spheres if conventional
triangle-based characters slow to a comparative crawl.
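
To put numbers on it, here is an untested sketch in POV syntax: the mesh
below is exactly the surface of box { 0, 1 }, just stored as 36 vertices
instead of two corner points.

  mesh {
    // -z and +z faces
    triangle { <0,0,0>, <1,0,0>, <1,1,0> } triangle { <0,0,0>, <1,1,0>, <0,1,0> }
    triangle { <0,0,1>, <1,1,1>, <1,0,1> } triangle { <0,0,1>, <0,1,1>, <1,1,1> }
    // -x and +x faces
    triangle { <0,0,0>, <0,1,0>, <0,1,1> } triangle { <0,0,0>, <0,1,1>, <0,0,1> }
    triangle { <1,0,0>, <1,1,1>, <1,1,0> } triangle { <1,0,0>, <1,0,1>, <1,1,1> }
    // -y and +y faces
    triangle { <0,0,0>, <0,0,1>, <1,0,1> } triangle { <0,0,0>, <1,0,1>, <1,0,0> }
    triangle { <0,1,0>, <1,1,1>, <0,1,1> } triangle { <0,1,0>, <1,1,0>, <1,1,1> }
    pigment { rgb 0.8 }
  }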

As for real geometry, I have yet to see a perfect triangle, sphere or box
in the real world :-)

>True, but some things don't need to be fed back in continually, and
>those based on primitives would take less room to store, meaning you can
>leave more of them in memory than normal. This should cut in half the
>amount of stuff you have to cram into the card in each frame, possibly
>more.

I don't see why a raytracing card can retain an object but a scanline card
cannot, *if* both know in advance that an object will persist from frame to
frame (which they most likely won't).

Looking at typical objects that appear in games, a fast triangle is very
valuable. There once was a game that built characters from ellipsoids, but
they too are approximations and the concept could not obviously benefit
from improving system speed. Objects in games are often not simple spheres.

>Think some new cards may use them, but the same issue existed for them as
>for a true card based engine, the methods used to produce them were
>simply too complex to 'fit' in the existing architecture.

I thought some modern cards implement things like pixel shaders, and even
small shading languages, which fit in nicely. I don't think that procedural
textures (or textures generated on the fly) are unique to raytracing, or
are intrinsically harder in scanline than raytracing (or vice versa). I've
seen some raytracers that rely entirely on image mapping.

>Kind of hard to say, since the option isn't exactly common, or, if it
>does exist, is not used, from what I have seen.

I think some games companies are still getting used to programmable shaders
on cards (and also have the problem that on some platforms, a lot of cards
are not modern enough to have them). Perhaps that's another point - a new
raytracing card for games would have to be compatible with already-released
games, or nobody would buy it. It would need to support the existing OpenGL
or DirectX interfaces.

Procedural textures (or programmable textures on these cards) are
increasingly useful, but they do take time to compute and hence seem mainly
restricted to special effects. When you need to output 800x600 pixel frames
at >24fps, almost every trick becomes useful.



From: Ken
Subject: Re: The Meaning of POV-Ray
Date: 4 Jun 2003 08:56:03
Message: <3EDDECBF.276AB833@pacbell.net>
Ray Gardener wrote:

> What are POV's goals, actually? Does the
> POV-Team have a specific mission statement
> in mind, or do they incrementally review
> and adjust the code on a as-things-crop-up basis?
> Is POV a renderer (method unimportant) or
> a raytracer? Is the goal to produce graphics
> or specifically to raytrace?

I think you may be surprised to learn that the goal of the POV-Team is not
so much POV-Ray itself as much as it is the programming challenges involved
in making it, which are many and varied. The core developers of the program
are either professional programmers who work on it for fun in their spare
time or are computer science students working on it to help develop their
programming skills. That we as a public get to enjoy the fruits of their
labor is secondary to that goal. With that in mind, the development model
of POV-Ray is not clearly defined and is pretty much at the whim of the
developers and whatever tickles their fancy at the time. For a commercial
application that would be a disaster. For a hobbyist's toy and student
learning tool, it is an amazing success.

The above is based solely on my own personal observations and does not
necessarily represent the views and opinions of the POV-Team.

-- 
Ken Tyler



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 10:45:48
Message: <cjameshuff-947AFC.09383004062003@netplex.aussie.org>
In article <web.3eddb93bc1329458541c87100@news.povray.org>,
 "Tom York" <tom### [at] compsocmanacuk> wrote:

> You can exactly reproduce a mathematical box with 12 triangles. The sphere
> is a better example, but where does this approach end? You end up with a
> plethora of "fundamental" objects (including triangles!) to avoid
> impersonating one geometry with another. This is great for many purposes
> (see: POV), but I assert that it's not too good for games - being able to
> manufacture anything out of a single shape (especially one with the
> projection properties of a triangle) is valuable.

And why is being able to manufacture things out of many shapes worse 
than only having one shape to use?
(Actually, more than one shape, at least in OpenGL: points, lines, 
line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads, 
quad-strip, polygon. And utilities to make disks, cylinders, spheres, 
other quadrics, NURBSs, etc...though those are tessellated. GLUT even 
includes a *teapot* primitive!) 


> Nobody is going to bother with perfect spheres if conventional
> triangle-based characters slow to a comparative crawl.

Nobody will use the faster option if the slower option is slower? Or 
nobody will use the raytracing card if it is slower, even if it gives 
better quality?


> As for real geometry, I have yet to see a perfect triangle, sphere or box
> in the real world :-)

Well, a triangle doesn't model anything in the real world; you need at 
least 4 of them to get a decent approximation of a real-world object, 
and you don't see many tetrahedrons lying around. There are many things 
that a box models quite closely, to a level that would be invisible on a 
typical camera image with comparable resolution.


> I don't see why a raytracing card can retain an object but a scanline card
> cannot, *if* both know in advance that an object will persist from frame to
> frame (which they most likely won't).

Both can, and there have been ways to do so for quite some time. The 
difference is that a few thousand primitives can be stored in the space 
of a few-thousand-triangle mesh.


> Looking at typical objects that appear in games, a fast triangle is very
> valuable. There once was a game that built characters from ellipsoids, but
> they too are approximations and the concept could not obviously benefit
> from improving system speed. Objects in games are often not simple spheres.

Marathon 1, an old Mac first-person shooter, had some left-over bits and 
pieces for a sphere renderer in its code...wasn't ever used in the game, 
though, and this predated 3D accelerator cards for home computers. But 
why couldn't a game benefit from a sphere primitive? With procedural 
shaders, you could do a lot with a sphere, like virtually any kind of 
energy bolt or blast effect. With more complex CSG available, you could 
build a complex room with primitives and procedural shaders and still 
have space available on the card for character meshes and skins.
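
As a rough, untested POV-style sketch (not card shader code), a single 
sphere plus a purely procedural pigment already gives a soft-edged, 
self-lit blast:

  sphere { 0, 1
    pigment {
      spherical  // pattern value: 1 at the center, falling to 0 at the surface
      color_map {
        [0.0 rgbt <1.0, 0.3, 0.0, 1.0>]  // fully transparent at the edge
        [0.6 rgbt <1.0, 0.6, 0.1, 0.3>]
        [1.0 rgbt <1.0, 1.0, 0.8, 0.0>]  // bright, opaque core
      }
    }
    finish { ambient 1 diffuse 0 }  // self-luminous, like an energy effect
    hollow
  }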


> I think some games companies are still getting used to programmable shaders
> on cards (and also have the problem that on some platforms, a lot of cards
> are not modern enough to have them). Perhaps that's another point - a new
> raytracing card for games would have to be compatible with already-released
> games, or nobody would buy it. It would need to support the existing OpenGL
> or DirectX interfaces.

That is the primary problem: R&D would cost money and resources that 
could be used on a better but more conventional card, and at release 
there would be nobody ready to make use of the card, and no guarantee 
that there ever would be. It requires a card supporting those features, 
software to use the new features of the card, and game designers to use 
the new features of the software.


> Procedural textures (or programmable textures on these cards) are
> increasingly useful, but they do take time to compute and hence seem mainly
> restricted to special effects. When you need to output 800x600 pixel frames
> at >24fps, almost every trick becomes useful.

There is no limit on how long they can take, but that doesn't mean they 
are too slow to be useful. Dedicated hardware should be able to evaluate 
procedural textures extremely quickly, more quickly than an image map if 
it has to drag the image data back from main memory. Here you have that 
size issue again: image maps are big, and video card memory is limited, 
so things often have to be shuffled between video card memory and main 
system memory, which is surprisingly slow.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Tom York
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 13:00:02
Message: <web.3ede2562c1329458bd7f22250@news.povray.org>
Christopher James Huff wrote:
>And why is being able to manufacture things out of many shapes worse
>than only having one shape to use?
>(Actually, more than one shape, at least in OpenGL: points, lines,
>line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
>quad-strip, polygon.

Complexity (and so cost) of the card. And the primitives that the 3D API
gives you are not necessarily the same as the primitives that get passed to
the card (for instance, DirectX now deals in pixel-sized triangles instead
of points, and seems to have done away with the 2D DirectDraw stuff).

>And utilities to make disks, cylinders, spheres,
>other quadrics, NURBSs, etc...though those are tessellated. GLUT even
>includes a *teapot* primitive!)

The card isn't going to see any of these.

>Nobody will use the faster option if the slower option is slower? Or
>nobody will use the raytracing card if it is slower, even if it gives
>better quality?

Nobody will use the raytracing card for games if the quality gain is
insufficient given the speed drop. I (emph.) assume that there will be a
speed drop because I have seen many real-time scanline-based engines that
didn't use a 3D card. I have seen one real-time raytracer, and that was one
of the highly hand-optimised demos that used to be popular in the '90s. The
resolution was very low, the framerate was very low, and the reflection at
that resolution was indistinguishable from a reflection map. I would be
very happy for someone to prove me wrong with a realtime raytracer that can
compete on equal terms with a good realtime scanline renderer (in software,
of course - no 3D accelerator).

>Well, a triangle doesn't model anything in the real world; you need at
>least 4 of them to get a decent approximation of a real-world object,
>and you don't see many tetrahedrons lying around. There are many things
>that a box models quite closely, to a level that would be invisible on a
>typical camera image with comparable resolution.

I don't see a single box convincing anyone nowadays. You must use groups of
them, just as you must use groups of triangles to resemble anything useful.
And triangles certainly have their uses - what about terrain?
I think the ability to deform/split up objects in realtime using a triangle
mesh has quite a few advantages in games. Can you explode a box? Not as
easily as you can explode a few triangles. If you model an object using
more and more complex primitives, you necessarily have problems if at some
point you want to treat the object as a collection of smaller items.

>Both can, and there have been ways to do so for quite some time.

In a game, an object may be removed at almost any time, either due to player
action directly or to something else. Surely unpredictable, especially
given trends towards destroyable/interacting scenery.

>The
>difference is that a few thousand primitives can be stored in the space
>of a few-thousand-triangle mesh.

But who's going to construct a game with thousands of sphere or box
primitives but no triangles? Room games maybe, but games in the open or set
in space? Surely you're not proposing the isosurface as a practical
realtime primitive shape? :-)

>Marathon 1, an old Mac first-person shooter, had some left-over bits and
>pieces for a sphere renderer in its code...wasn't ever used in the game,
>though, and this predated 3D accelerator cards for home computers. But
>why couldn't a game benefit from a sphere primitive? With procedural
>shaders, you could do a lot with a sphere, like virtually any kind of
>energy bolt or blast effect. With more complex CSG available, you could
>build a complex room with primitives and procedural shaders and still
>have space available on the card for character meshes and skins.

Yes, the game I mentioned also predated 3D cards (part of the reason they
were impelled to try ellipsoids, perhaps). For things like energy bolts, a
player will usually have insufficient time to see the difference between a
sphere and a couple of crossed texture-mapped polygons (or a more
complicated model). For blast effects, I have seen textures mapped to a
sphere used as a kind of blast, and it generally looks terrible IMO. The
edge is too sharp, too uniform (same problem as in Quake 2-style blasts,
which were done with a sort of simple polygonal explosion model). I think
some sort of volumetric method would be far better here. With polygons, you
can have the procedural shader and a flexible type that doesn't have to
enforce spherical or elliptical symmetry, or be a closed surface.

For CSG applied to rooms, it would need to be as fast as the BSP-style
methods used for static geometry at the moment (although I guess BSPs are
not used for interactive scenery).

>That is the primary problem: R&D would cost money and resources that
>could be used on a better but more conventional card, and at release
>there would be nobody ready to make use of the card, and no guarantee
>that there ever would be. It requires a card supporting those features,
>software to use the new features of the card, and game designers to use
>the new features of the software.

Yes. I think you could get away with introducing a new card, even if nobody
used the new features, but it would have to support existing games,
performing at least as well as the conventional cards. This would be
difficult, particularly since it would inevitably be more expensive.

>There is no limit on how long they can take, but that doesn't mean they
>are too slow to be useful. Dedicated hardware should be able to evaluate
>procedural textures extremely quickly, more quickly than an image map if
>it has to drag the image data back from main memory.

Why? The procedural must be calculated using a (probably) user-specified
formula for every pixel that uses it. The image map must certainly be
projected every pixel (a single unchanging operation), but the
time-consuming step of actually acquiring the bitmap from system memory
hopefully occurs only once for a scene.

>Here you have that
>size issue again: image maps are big, and video card memory is limited,
>so things often have to be shuffled between video card memory and main
>system memory, which is surprisingly slow.

I think 3D card time is more limited. Bitmaps have the advantage that
texture loading time is independent of surface texture complexity. A flat
grey texture will take no more time to apply than a weathered texture with
"hello world" scrawled on it of the same resolution. Obviously, there's
going to be a significant difference with a procedural approach. Procedural
shaders are certainly going to be useful for particular effects, but I
don't believe they're going to dominate any time soon, if ever.



From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 14:26:57
Message: <3ede39f1@news.povray.org>
> To be clear: you mean your company would sell your modified POV ?

No, it would be free, with source code also.

Ray



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 14:54:07
Message: <MPG.1947edee2d8eda16989816@news.povray.org>
In article <web.3ede2562c1329458bd7f22250@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Christopher James Huff wrote:
> >And utilities to make disks, cylinders, spheres,
> >other quadrics, NURBSs, etc...though those are tessellated. GLUT even
> >includes a *teapot* primitive!)
> 
> The card isn't going to see any of these.
> 
But it will see the triangles that make them up, which take lots of 
space.

> >Nobody will use the faster option if the slower option is slower? Or
> >nobody will use the raytracing card if it is slower, even if it gives
> >better quality?
> 
> Nobody will use the raytracing card for games if the quality gain is
> insufficient given the speed drop. I (emph.) assume that there will be a
> speed drop because I have seen many real-time scanline-based engines that
> didn't use a 3D card. I have seen one real-time raytracer, and that was one
> of the highly hand-optimised demos that used to be popular in the '90s. The
> resolution was very low, the framerate was very low, and the reflection at
> that resolution was indistinguishable from a reflection map. I would be
> very happy for someone to prove me wrong with a realtime raytracer that can
> compete on equal terms with a good realtime scanline renderer (in software,
> of course - no 3D accelerator).
> 
POV-Ray has a built-in example of real-time raytracing. It is small, but 
then you are dealing with an engine that is running on top of an OS and 
can't take full advantage of the hardware, since it will 'never' have 
100% total access to the processor. A card based one would likely be far 
more optimized, support possible speed improvements that don't exist in 
POV-Ray and have complete access to the full power of the chip running 
it. This would be slower why?

> I think the ability to deform/split up objects in realtime using a triangle
> mesh has quite a few advantages in games. Can you explode a box?

That is a point, but nothing prevents you from making explodable objects 
from triangles. In fact, the increase in available memory from using 
primitives for those things that are not going to undergo such a change 
means you can use even more triangles and make the explosion even more 
realistic. Current AGP technology is reaching its limits as to how much 
you can shove through the door and use. Short of a major redesign of both 
the cards and the motherboards, simply adding more memory or a faster 
chip isn't going to cut it.

> >The
> >difference is that a few thousand primitives can be stored in the space
> >of a few-thousand-triangle mesh.
> 
> But who's going to construct a game with thousands of sphere or box
> primitives but no triangles? Room games maybe, but games in the open or set
> in space? Surely you're not proposing the isosurface as a practical
> realtime primitive shape? :-)
> 
Again. Why would anyone design one that 'only' supported such primitives? 
That's like asking why POV-Ray supports meshes if we all think primitives 
are so great. You use what is appropriate for the circumstances. If you 
want a space ship that explodes into a hundred fragments use a mesh, if 
you want one that gets sliced in half by a beam weapon, then use a mesh 
along the cut line and primitives where it makes sense. Duh!
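
An untested sketch of what the beam-weapon cut amounts to with CSG (the 
names are made up, of course):

  #declare Hull = sphere { 0, 1 scale <3, 1, 1> pigment { rgb 0.7 } }
  // Two halves of the same primitive, drifting apart -- no re-meshing.
  difference   { object { Hull } plane { y, 0 } translate  0.2*y }  // upper half
  intersection { object { Hull } plane { y, 0 } translate -0.2*y }  // lower half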

> Yes. I think you could get away with introducing a new card, even if nobody
> used the new features, but it would have to support existing games,
> performing at least as well as the conventional cards. This would be
> difficult, particularly since it would inevitably be more expensive.
> 
Well, that kind of describes most of the new cards that come out. lol 
Yes, it would need compatibility with the previous systems, but that 
isn't exactly an impossibility.

> >There is no limit on how long they can take, but that doesn't mean they
> >are too slow to be useful. Dedicated hardware should be able to evaluate
> >procedural textures extremely quickly, more quickly than an image map if
> >it has to drag the image data back from main memory.
> 
> Why? The procedural must be calculated using a (probably) user-specified
> formula for every pixel that uses it. The image map must certainly be
> projected every pixel (a single unchanging operation), but the
> time-consuming step of actually acquiring the bitmap from system memory
> hopefully occurs only once for a scene.
> 
There is nothing to prevent using a second chip dedicated to processing 
such things and having it drop the result into a block of memory to be 
used like a 'normal' bitmap. This assumes that the speed increase gained 
by building the rendering engine into the card wouldn't offset the time 
cost of the procedural texture. In any case, there are ways around this 
issue, especially if such methods turn out to already be in use on the 
newer DirectX cards.


-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 15:21:52
Message: <cjameshuff-FDF7C4.14143604062003@netplex.aussie.org>
In article <web.3ede2562c1329458bd7f22250@news.povray.org>,
 "Tom York" <tom### [at] compsocmanacuk> wrote:

> Christopher James Huff wrote:
> >And why is being able to manufacture things out of many shapes worse
> >than only having one shape to use?
> >(Actually, more than one shape, at least in OpenGL: points, lines,
> >line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
> >quad-strip, polygon.
> 
> Complexity (and so cost) of the card. And the primitives that the 3D API
> gives you are not necessarily the same as the primitives that get passed to
> the card (for instance, DirectX now deals in pixel-sized triangles instead
> of points, and seems to have done away with the 2D DirectDraw stuff).


> >And utilities to make disks, cylinders, spheres,
> >other quadrics, NURBSs, etc...though those are tessellated. GLUT even
> >includes a *teapot* primitive!)
> 
> The card isn't going to see any of these.

True, as I said, they were reduced to triangles. The API could probably 
hook up to built-in primitives if they were available, though.


> Nobody will use the raytracing card for games if the quality gain is
> insufficient given the speed drop. I (emph.) assume that there will be a
> speed drop because I have seen many real-time scanline-based engines that
> didn't use a 3D card. I have seen one real-time raytracer, and that was one
> of the highly hand-optimised demos that used to be popular in the '90s. The
> resolution was very low, the framerate was very low, and the reflection at
> that resolution was indistinguishable from a reflection map. I would be
> very happy for someone to prove me wrong with a realtime raytracer that can
> compete on equal terms with a good realtime scanline renderer (in software,
> of course - no 3D accelerator).

Real-time involves several simplifications in computation and geometry, 
which does push the advantage to scanline. But do you need 200FPS? If 
the raytracing card is *fast enough*, its advantages could outweigh 
those of a scanline card.
(There was some RTRT...Real Time Ray-Tracing...demo called something 
like Heaven 7, you may want to look at it. It was quite impressive on my 
PC.)


> I don't see a single box convincing anyone nowadays. You must use groups of
> them, just as you must use groups of triangles to resemble anything useful.
> And triangles certainly have their uses - what about terrain?

I didn't say they were useless. Triangles are definitely useful for 
things like terrain, though I wonder if a hardware isosurface solver 
could compete...
And the advantage of a box is memory: one box takes up less memory than 
one triangle. You *can* use groups of them, nobody has said you would be 
limited to single primitives.


> I think the ability to deform/split up objects in realtime using a triangle
> mesh has quite a few advantages in games. Can you explode a box? Not as
> easily as you can explode a few triangles. If you model an object using
> more and more complex primitives, you necessarily have problems if at some
> point you want to treat the object as a collection of smaller items.

There was a POV-Ray include that used CSG to "explode" objects. If that 
method wasn't suitable, an animated mesh would probably be the best way 
to go. Again, I'm not saying meshes are useless, just that there are 
often better things to use.


> >Both can, and there have been ways to do so for quite some time.
> 
> In a game, an object may be removed at almost any time, either due to player
> action directly or to something else. Surely unpredictable, especially
> given trends towards destroyable/interacting scenery.

This is pretty much irrelevant. I just said you can add and remove 
geometry. What you are talking about would be more like switching between 
different models for different frames of the animation.


> But who's going to construct a game with thousands of sphere or box
> primitives but no triangles? Room games maybe, but games in the open or set
> in space? Surely you're not proposing the isosurface as a practical
> realtime primitive shape? :-)

Who said no triangles? "More primitives" != "Ban triangles"!


> Yes, the game I mentioned also predated 3D cards (part of the reason they
> were impelled to try ellipsoids, perhaps). For things like energy bolts, a
> player will usually have insufficient time to see the difference between a
> sphere and a couple of crossed texture-mapped polygons (or a more
> complicated model).

With a raytracing engine, the sphere would likely be faster than the 
crossed polygons.


> For blast effects, I have seen textures mapped to a
> sphere used as a kind of blast, and it generally looks terrible IMO. The
> edge is too sharp, too uniform (same problem as in Quake 2-style blasts,
> which were done with a sort of simple polygonal explosion model). I think
> some sort of volumetric method would be far better here. With polygons, you
> can have the procedural shader and a flexible type that doesn't have to
> enforce spherical or elliptical symmetry, or be a closed surface.

I was not talking about texture-mapped spheres. I was specifically 
talking about using procedural shaders: something volumetric or based on 
angle of incidence to the surface, or something more like the glow patch.


> Why? The procedural must be calculated using a (probably) user-specified
> formula for every pixel that uses it. The image map must certainly be
> projected every pixel (a single unchanging operation), but the
> time-consuming step of actually acquiring the bitmap from system memory
> hopefully occurs only once for a scene.

You're assuming all the texture maps and models fit in the card memory. 
Yes, when the image map is already local, using it would be faster than 
all but the simplest shaders. That doesn't mean procedural shaders are 
too slow. If a specific shader is too slow, you use a faster one, maybe 
resorting to an image map if it will do the job and there is memory for 
it.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Ray Gardener
Subject: Perhaps a "process_geometry" keyword
Date: 4 Jun 2003 16:31:54
Message: <3ede573a@news.povray.org>
> POV textures *are* procedural. Image maps are relatively rarely used.

Sorry, I meant 'procedural' in the user-definable sense.
POV-Ray's procedural textures are predefined.

To be fair, I thought I remembered reading
somewhere that POV 3.5 let textures use
user-defined functions, although just going
through the docs again I can't find
references to that. There's definitely
no displacement shading, at any rate,
except with isosurfaces (more below).


> Uh, you may want to look at the POVMAN patch, which lets you use
> Renderman shaders on top of the existing procedural texture system.

Thanks. It's a great hack, but it appears
limited to surface shading. Using POVMAN with
scanlining would make displacement shading easy.



> You like procedural geometry but don't like isosurfaces?

Well, isosurfaces have a sampling function interface
analogous to regular shading in Renderman. For some
shapes, this is preferable -- e.g., a sphere is
just x^2 + y^2 + z^2 = r^2. But for other shapes,
determining from (x,y,z) whether the point is
inside or outside gets complicated.
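
(For instance -- an untested POV 3.5
sketch -- the sphere case is just:

  isosurface {
    function { x*x + y*y + z*z }  // inside where the function is below threshold
    threshold 1                   // equals 1 on the unit sphere
    contained_by { box { -1.1, 1.1 } }
    pigment { rgb 0.8 }
  }

but a function that tidy is the
exception rather than the rule.)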

Gritz had a good explanation in one of his
Rman papers -- he used drawing a line on a
surface as an example. One way is to iterate
all points (x,y) on the surface and determine
if a given point is on the line. This amounts
to hit testing within the rectangle formed
by the line's endpoints and thickness.
Another way to draw the line is to just
rasterize the rectangle directly. The latter
approach is also more intuitive for most
people, because it parallels the way
artists draw in the real world. If you want
to draw a line, you touch a pen to one point,
and drag the pen to the other point.

I like having both approaches available.
For the particular set of shaders I'm currently
working on, I find using procedural geometry
easier.

POV-Ray might benefit from including a
procedural geometry keyword, which would
have an option of emitting the geometry
to the scene's object tree (for raytracing
along with the other objects) or to be
rendered immediately using a z buffer.
In fact, I may take this approach, since
the scanliner can reside outside POV-Ray.
I'd have to let POV pass information about
objects (for shading), lights, and camera.
So one would have something like this
in an example script:

  global_settings
  {
     geometry_processing
     {
        enable=true

        overrides
        {
           raytrace=false   // if true, force processors
                            // to add created objects to object tree
                            // for raytracing.

           scanline=false   // If true, processors always
                            // render immediately into zbuffer.

           scanline_method=triangle
             // Force processors to allow macropolygons.
             // If method is 'reyes', then they are forced
             // to use REYES algorithm and dice to micropolygons.
        }
     }
  }
  // Zbuffer allocates and initializes here if necessary.

  ...

  height_field
  {
     png "hf.png"
     process_geometry "landscape_detail" true
  }

There would be a DLL (on win32) named landscape_detail.dll
that would be passed the object, along with a reference
to the zbuffer, the object tree, and cam/lighting.
The DLL could then proceed to create whatever geometry
it wanted, adding it to the tree or rasterizing it
into the zbuffer (the bool arg after the DLL name indicates which).

The geometry_processing keyword would also be available
outside global_settings, so that one could easily change
the rendering manner for sections of a script in specific ways.

Ray



From: Christopher James Huff
Subject: Re: Perhaps a "process_geometry" keyword
Date: 4 Jun 2003 17:21:25
Message: <cjameshuff-86A2CC.16140904062003@netplex.aussie.org>
In article <3ede573a@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> > POV textures *are* procedural. Image maps are relatively rarely used.
> 
> Sorry, I meant 'procedural' in the user-definable sense.
> POV-Ray's procedural textures are predefined.

No...they are quite user-defined. The system is a bit more structured 
than a shader programming language, but still extremely flexible, and 
far from anything like predefined textures.


> To be fair, I thought I remembered reading
> somewhere that POV 3.5 let textures use
> user-defined functions, although just going
> through the docs again I can't find
> references to that.

You can use functions as patterns, and pigments as vector functions. By 
combining three functions with the average pattern, you can specify a 
function for each color channel of a pigment, though this isn't as 
flexible or convenient as a full shading language. Adding such a 
language would certainly be possible, though.
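
A minimal, untested sketch of that trick:

  // One user-defined function per color channel, recombined with the
  // average pattern. Each entry contributes 1/3, hence the *3 in the colors.
  pigment {
    average
    pigment_map {
      [1 function { 0.5 + 0.5*sin(4*x) } color_map { [0 rgb 0] [1 rgb <3,0,0>] }]
      [1 function { 0.5 + 0.5*cos(4*y) } color_map { [0 rgb 0] [1 rgb <0,3,0>] }]
      [1 function { abs(sin(2*x*y))    } color_map { [0 rgb 0] [1 rgb <0,0,3>] }]
    }
  }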


> There's definitely
> no displacement shading, at any rate,
> except with isosurfaces (more below).

If "displacement shading" means displacement done at render-time, then 
no, there isn't. It could be added without requiring a scanline renderer 
however, in fact there's probably a few people working on it.
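
The nearest current equivalent is the isosurface route Ray mentioned. 
An untested sketch, with an arbitrary bump function:

  // A sphere whose radius is perturbed by a pattern function at
  // render time -- displacement in effect, if not in name.
  #declare Bumps = function { pattern { bozo scale 0.3 } }
  isosurface {
    function { sqrt(x*x + y*y + z*z) - 1 - 0.1*Bumps(x, y, z) }
    max_gradient 2
    contained_by { sphere { 0, 1.3 } }
    pigment { rgb 0.8 }
  }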


> Well, isosurfaces have a sampling function interface
> analogous to regular shading in Renderman. For some
> shapes, this is preferable -- e.g., a sphere is
> just x^2 + y^2 + z^2. But for other shapes,
> determining from (x,y,z) whether the point is
> inside or outside gets complicated.

For insideness testing, just use the inside() function...coming up with 
a custom function for it is overkill. I don't know what this has to do 
with isosurfaces though. If you're talking about the difficulty of 
making specific shapes with isosurfaces, there are functions and macros 
to make it more convenient.
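
A minimal, untested sketch of what I mean (the object is arbitrary):

  #declare Thing = difference {
    sphere { 0, 1 }
    box { <0, -2, -2>, <2, 2, 2> }
  }
  // inside() handles the insideness test, even for CSG -- no
  // hand-written function needed.
  #if (inside(Thing, <-0.5, 0, 0>))
    #debug "the point is inside\n"
  #end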

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Ray Gardener
Subject: Raytracing displacement shading
Date: 4 Jun 2003 18:19:53
Message: <3ede7089$1@news.povray.org>
> Sounds like you're talking about something like this:
> http://www.cs.utah.edu/~bes/papers/height/paper.html
>
> There is a landscape example with the equivalent of 1,000,000,000,000
> triangles. And instead of generating and discarding millions (or
> billions) of unused microtriangles, it generates them as needed.


That is very neat. It has some restrictions on
how the displacements can be expressed, but it still
offers a compelling option over bumpmapping.

Would you know if this algorithm is being considered
for inclusion in POV-Ray?

Thanks,
Ray


