Scanline rendering in POV-Ray (Message 61 to 70 of 96)
From: Ray Gardener
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 14:26:57
Message: <3ede39f1@news.povray.org>
> To be clear: you mean your company would sell your modified POV ?

No, it would be free, with source code also.

Ray



From: Patrick Elliott
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 14:54:07
Message: <MPG.1947edee2d8eda16989816@news.povray.org>
In article <web.3ede2562c1329458bd7f22250@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Christopher James Huff wrote:
> >And utilities to make disks, cylinders, spheres,
> >other quadrics, NURBSs, etc...though those are tessellated. GLUT even
> >includes a *teapot* primitive!)
> 
> The card isn't going to see any of these.
> 
But it will see the triangles that make them up, which takes lots of 
space.

> >Nobody will use the faster option if the slower option is slower? Or
> >nobody will use the raytracing card if it is slower, even if it gives
> >better quality?
> 
> Nobody will use the raytracing card for games if the quality gain is
> insufficient given the speed drop. I (emph.) assume that there will be a
> speed drop because I have seen many real-time scanline-based engines that
> didn't use a 3D card. I have seen one real-time raytracer, and that was one
> of the highly hand-optimised demos that used to be popular in the '90s. The
> resolution was very low, the framerate was very low, and the reflection at
> that resolution was indistinguishable from a reflection map. I would be
> very happy for someone to prove me wrong with a realtime raytracer that can
> compete on equal terms with a good realtime scanline renderer (in software,
> of course - no 3D accelerator).
> 
POV-Ray has a built-in example of real-time raytracing. It is small, but 
then you are dealing with an engine that is running on top of an OS and 
can't take full advantage of the hardware, since it will 'never' have 
100% access to the processor. A card-based one would likely be far 
more optimized, support speed improvements that don't exist in 
POV-Ray, and have complete access to the full power of the chip running 
it. This would be slower why?

> I think the ability to deform/split up objects in realtime using a triangle
> mesh has quite a few advantages in games. Can you explode a box?

That is a point, but nothing prevents you from making explodable objects 
from triangles. In fact, the memory freed up by using primitives for 
those things that are not going to undergo such a change means you can 
use even more triangles and make the explosion even more realistic. 
Current AGP technology is reaching its limits as to how much you can 
shove through the door and use. Short of a major redesign of both the 
cards and the motherboards, simply adding more memory or a faster chip 
isn't going to cut it.

> >The
> >difference is that a few thousand primitives can be stored in the space
> >of a few-thousand-triangle mesh.
> 
> But who's going to construct a game with thousands of sphere or box
> primitives but no triangles? Room games maybe, but games in the open or set
> in space? Surely you're not proposing the isosurface as a practical
> realtime primitive shape? :-)
> 
Again: why would anyone design one that 'only' supported such primitives? 
That's like asking why POV-Ray supports meshes if we all think primitives 
are so great. You use what is appropriate for the circumstances. If you 
want a spaceship that explodes into a hundred fragments, use a mesh; if 
you want one that gets sliced in half by a beam weapon, then use a mesh 
along the cut line and primitives where they make sense. Duh!

> Yes. I think you could get away with introducing a new card, even if nobody
> used the new features, but it would have to support existing games,
> performing at least as well as the conventional cards. This would be
> difficult, particularly since it would inevitably be more expensive.
> 
Well, that kind of describes most of the new cards that come out. lol 
Yes, it would need compatibility with the previous systems, but that 
isn't exactly an impossibility.

> >There is no limit on how long they can take, but that doesn't mean they
> >are too slow to be useful. Dedicated hardware should be able to evaluate
> >procedural textures extremely quickly, more quickly than an image map if
> >it has to drag the image data back from main memory.
> 
> Why? The procedural must be calculated using a (probably) user-specified
> formula for every pixel that uses it. The image map must certainly be
> projected every pixel (a single unchanging operation), but the
> time-consuming step of actually acquiring the bitmap from system memory
> hopefully occurs only once for a scene.
> 
There is nothing to prevent using a second chip dedicated to processing 
such things and having it drop the result into a block of memory to be 
used like a 'normal' bitmap. That objection also assumes the speed 
increase gained by building the rendering engine into the card wouldn't 
offset the time cost of the procedural texture. In any case, there are 
ways around this issue, especially if such methods turn out to already 
be in use on the newer DirectX cards.


-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}



From: Christopher James Huff
Subject: Re: Scanline rendering in POV-Ray
Date: 4 Jun 2003 15:21:52
Message: <cjameshuff-FDF7C4.14143604062003@netplex.aussie.org>
In article <web.3ede2562c1329458bd7f22250@news.povray.org>,
 "Tom York" <tom### [at] compsocmanacuk> wrote:

> Christopher James Huff wrote:
> >And why is being able to manufacture things out of many shapes worse
> >than only having one shape to use?
> >(Actually, more than one shape, at least in OpenGL: points, lines,
> >line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
> >quad-strip, polygon.
> 
> Complexity (and so cost) of the card. And the primitives that the 3D API
> gives you are not necessarily the same as the primitives that get passed to
> the card (for instance, DirectX now deals in pixel-sized triangles instead
> of points, and seems to have done away with the 2D DirectDraw stuff).


> >And utilities to make disks, cylinders, spheres,
> >other quadrics, NURBSs, etc...though those are tessellated. GLUT even
> >includes a *teapot* primitive!)
> 
> The card isn't going to see any of these.

True, as I said, they were reduced to triangles. The API could probably 
hook up to built-in primitives if they were available, though.


> Nobody will use the raytracing card for games if the quality gain is
> insufficient given the speed drop. I (emph.) assume that there will be a
> speed drop because I have seen many real-time scanline-based engines that
> didn't use a 3D card. I have seen one real-time raytracer, and that was one
> of the highly hand-optimised demos that used to be popular in the '90s. The
> resolution was very low, the framerate was very low, and the reflection at
> that resolution was indistinguishable from a reflection map. I would be
> very happy for someone to prove me wrong with a realtime raytracer that can
> compete on equal terms with a good realtime scanline renderer (in software,
> of course - no 3D accelerator).

Real-time involves several simplifications in computation and geometry, 
which does push the advantage to scanline. But do you need 200FPS? If 
the raytracing card is *fast enough*, its advantages could outweigh 
those of a scanline card.
(There was some RTRT...Real Time Ray-Tracing...demo called something 
like Heaven 7; you may want to look at it. It was quite impressive on my 
PC.)


> I don't see a single box convincing anyone nowadays. You must use groups of
> them, just as you must use groups of triangles to resemble anything useful.
> And triangles certainly have their uses - what about terrain?

I didn't say they were useless. Triangles are definitely useful for 
things like terrain, though I wonder if a hardware isosurface solver 
could compete...
And the advantage of a box is memory: one box takes up less memory than 
one triangle. You *can* use groups of them, nobody has said you would be 
limited to single primitives.
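
To put rough numbers on that point (an illustrative sketch; the float 
counts are approximate and ignore per-object overhead):

  // a box primitive stores two corner vectors: 6 floats
  box { <-1,-1,-1>, <1,1,1> }

  // the same cube as a mesh needs 12 triangles * 3 vertices
  // = 36 vectors = 108 floats (first two triangles shown)
  mesh {
    triangle { <-1,-1,-1>, <1,-1,-1>, <1,1,-1> }
    triangle { <-1,-1,-1>, <1,1,-1>, <-1,1,-1> }
    // ...ten more triangles for the remaining faces
  }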


> I think the ability to deform/split up objects in realtime using a triangle
> mesh has quite a few advantages in games. Can you explode a box? Not as
> easily as you can explode a few triangles. If you model an object using
> more and more complex primitives, you necessarily have problems if at some
> point you want to treat the object as a collection of smaller items.

There was a POV-Ray include that used CSG to "explode" objects. If that 
method wasn't suitable, an animated mesh would probably be the best way 
to go. Again, I'm not saying meshes are useless, just that there are 
often better things to use.
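
A minimal sketch of the CSG-slicing idea (not the actual include 
mentioned above; 3.5 syntax, with an arbitrary test object): clip the 
object with planes and push the pieces apart:

  #declare Thing = sphere { 0, 1 }

  union {
    // left half, pushed away from the cut
    intersection { object { Thing } plane {  x, 0 } translate -0.3*x }
    // right half, pushed the other way
    intersection { object { Thing } plane { -x, 0 } translate  0.3*x }
    pigment { rgb <1, 0.6, 0.2> }
  }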


> >Both can, and there have been ways to do so for quite some time.
> 
> In a game, an object may be removed at almost any time, either due to player
> action directly or to something else. Surely unpredictable, especially
> given trends towards destroyable/interacting scenery.

This is pretty much irrelevant. I just said you can add and remove 
geometry. What you are talking about would be more like switching 
between different models for different frames of the animation.


> But who's going to construct a game with thousands of sphere or box
> primitives but no triangles? Room games maybe, but games in the open or set
> in space? Surely you're not proposing the isosurface as a practical
> realtime primitive shape? :-)

Who said no triangles? "More primitives" != "Ban triangles"!


> Yes, the game I mentioned also predated 3D cards (part of the reason they
> were impelled to try ellipsoids, perhaps). For things like energy bolts, a
> player will usually have insufficient time to see the difference between a
> sphere and a couple of crossed texture-mapped polygons (or a more
> complicated model).

With a raytracing engine, the sphere would likely be faster than the 
crossed polygons.


> For blast effects, I have seen textures mapped to a
> sphere used as a kind of blast, and it generally looks terrible IMO. The
> edge is too sharp, too uniform (same problem as in Quake 2-style blasts,
> which were done with a sort of simple polygonal explosion model). I think
> some sort of volumetric method would be far better here. With polygons, you
> can have the procedural shader and a flexible type that doesn't have to
> enforce spherical or elliptical symmetry, or be a closed surface.

I was not talking about texture-mapped spheres. I was specifically 
talking about using procedural shaders: something volumetric or based on 
angle of incidence to the surface, or something more like the glow patch.


> Why? The procedural must be calculated using a (probably) user-specified
> formula for every pixel that uses it. The image map must certainly be
> projected every pixel (a single unchanging operation), but the
> time-consuming step of actually acquiring the bitmap from system memory
> hopefully occurs only once for a scene.

You're assuming all the texture maps and models fit in the card memory. 
Yes, when the image map is already local, using it would be faster than 
all but the simplest shaders. That doesn't mean procedural shaders are 
too slow. If a specific shader is too slow, you use a faster one, maybe 
resorting to an image map if it will do the job and there is memory for 
it.
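
As a concrete example of that trade-off in SDL terms (the "wood.png" 
file here is hypothetical):

  // procedural: evaluated per intersection; costs computation,
  // no bitmap memory
  pigment {
    wood turbulence 0.3
    color_map { [0 rgb <0.6,0.4,0.2>] [1 rgb <0.3,0.2,0.1>] }
  }

  // image map: one lookup per intersection; costs memory for the bitmap
  pigment { image_map { png "wood.png" } }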

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Ray Gardener
Subject: Perhaps a "process_geometry" keyword
Date: 4 Jun 2003 16:31:54
Message: <3ede573a@news.povray.org>
> POV textures *are* procedural. Image maps are relatively rarely used.

Sorry, I meant 'procedural' in the user-definable sense.
POV-Ray's procedural textures are predefined.

To be fair, I thought I remembered reading
somewhere that POV 3.5 let textures use
user-defined functions, although just going
through the docs again I can't find
references to that. There's definitely
no displacement shading, at any rate,
except with isosurfaces (more below).
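
For instance, a sphere can be 'displaced' at
render time with an isosurface; a minimal
sketch using the standard functions.inc:

  #include "functions.inc"

  isosurface {
    // f_sphere is the sphere's distance function; subtracting a
    // noise term displaces the surface at render time
    function { f_sphere(x, y, z, 1) - 0.15*f_noise3d(4*x, 4*y, 4*z) }
    max_gradient 2.5
    contained_by { box { -1.3, 1.3 } }
    pigment { rgb <0.7, 0.7, 0.8> }
  }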


> Uh, you may want to look at the POVMAN patch, which lets you use
> Renderman shaders on top of the existing procedural texture system.

Thanks. It's a great hack, but it appears
limited to surface shading. Using POVMAN with
scanlining would make displacement shading easy.



> You like procedural geometry but don't like isosurfaces?

Well, isosurfaces have a sampling function interface
analogous to regular shading in Renderman. For some
shapes, this is preferable -- e.g., a sphere is
just x^2 + y^2 + z^2. But for other shapes,
determining from (x,y,z) whether the point is
inside or outside gets complicated.
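
In isosurface terms, the sphere case is just:

  isosurface {
    function { x*x + y*y + z*z }   // inside where f < 1, outside where f > 1
    threshold 1                    // surface where f = 1: a unit sphere
    contained_by { box { -1.1, 1.1 } }
    pigment { rgb 1 }
  }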

Gritz had a good explanation in one of his
Rman papers -- he used drawing a line on a
surface as an example. One way is to iterate
over all points (x,y) on the surface and
determine whether each point is on the line.
This amounts to hit testing within the
rectangle formed by the line's endpoints
and thickness.
Another way to draw the line is to just
rasterize the rectangle directly. The latter
approach is also more intuitive for most
people, because it parallels the way
artists draw in the real world. If you want
to draw a line, you touch a pen to one point,
and drag the pen to the other point.

I like having both approaches available.
For the particular set of shaders I'm currently
working on, I find using procedural geometry
easier.

POV-Ray might benefit from including a
procedural geometry keyword, which would
have the option of emitting the geometry
to the scene's object tree (for raytracing
along with the other objects) or rendering
it immediately using a z-buffer.
In fact, I may take this approach, since
the scanliner can reside outside POV-Ray.
I'd have to let POV pass information about
objects (for shading), lights, and camera.
So one would have something like this
in an example script:

  global_settings
  {
     geometry_processing
     {
        enable=true

        overrides
        {
           raytrace=false   // if true, force processors
                            // to add created objects to object tree
                            // for raytracing.

           scanline=false   // If true, processors always
                            // render immediately into zbuffer.

           scanline_method=triangle
             // Force processors to allow macropolygons.
             // If method is 'reyes', then they are forced
             // to use REYES algorithm and dice to micropolygons.
        }
     }
  }
  // Zbuffer allocates and initializes here if necessary.

  ...

  height_field
  {
     png "hf.png"
     process_geometry "landscape_detail" true
  }

There would be a DLL (on win32) named landscape_detail.dll
that would be passed the object, along with a reference
to the zbuffer, the object tree, and cam/lighting.
The DLL could then proceed to create whatever geometry
it wanted, adding it to the tree or rasterizing it
into the zbuffer (the bool arg after the DLL name indicates which).

The geometry_processing keyword would also be available
outside global_settings, so that one could easily change
the rendering manner for sections of a script in specific ways.

Ray



From: Christopher James Huff
Subject: Re: Perhaps a "process_geometry" keyword
Date: 4 Jun 2003 17:21:25
Message: <cjameshuff-86A2CC.16140904062003@netplex.aussie.org>
In article <3ede573a@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> > POV textures *are* procedural. Image maps are relatively rarely used.
> 
> Sorry, I meant 'procedural' in the user-definable sense.
> POV-Ray's procedural textures are predefined.

No...they are quite user-defined. The system is a bit more structured 
than a shader programming language, but still extremely flexible, and 
far from anything like predefined textures.


> To be fair, I thought I remembered reading
> somewhere that POV 3.5 let textures use
> user-defined functions, although just going
> through the docs again I can't find
> references to that.

You can use functions as patterns, and pigments as vector functions. By 
combining three functions with the average pattern, you can specify a 
function for each color channel of a pigment, though this isn't as 
flexible or convenient as a full shading language. Adding such a 
language would certainly be possible, though.
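
A minimal sketch of that trick, assuming POV-Ray 3.5 syntax (the three 
channel functions here are arbitrary examples):

  #declare fR = function { 0.5 + 0.5*sin(pi*x) }
  #declare fG = function { y - floor(y) }
  #declare fB = function { 0.5 }

  pigment {
    average
    pigment_map {
      // each entry drives one channel; the <3,0,0> etc. compensate
      // for the 1/3 weighting that average applies
      [1 function { fR(x, y, z) } color_map { [0 rgb 0] [1 rgb <3,0,0>] }]
      [1 function { fG(x, y, z) } color_map { [0 rgb 0] [1 rgb <0,3,0>] }]
      [1 function { fB(x, y, z) } color_map { [0 rgb 0] [1 rgb <0,0,3>] }]
    }
  }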


> There's definitely
> no displacement shading, at any rate,
> except with isosurfaces (more below).

If "displacement shading" means displacement done at render-time, then 
no, there isn't. It could be added without requiring a scanline renderer 
however, in fact there's probably a few people working on it.


> Well, isosurfaces have a sampling function interface
> analogous to regular shading in Renderman. For some
> shapes, this is preferable -- e.g., a sphere is
> just x^2 + y^2 + z^2. But for other shapes,
> determining from (x,y,z) whether the point is
> inside or outside gets complicated.

For insideness testing, just use the inside() function...coming up with 
a custom function for it is overkill. I don't know what this has to do 
with isosurfaces though. If you're talking about the difficulty of 
making specific shapes with isosurfaces, there are functions and macros 
to make it more convenient.
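
For example, as a parse-time test (the object here is arbitrary):

  #declare Cut = difference {
    sphere { 0, 1 }
    box { 0, 2 }   // carve away the all-positive octant
  }

  #if (inside(Cut, <-0.5, 0, 0>))
    #debug "point is inside the carved sphere\n"
  #else
    #debug "point is outside\n"
  #end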

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Ray Gardener
Subject: Raytracing displacement shading
Date: 4 Jun 2003 18:19:53
Message: <3ede7089$1@news.povray.org>
> Sounds like you're talking about something like this:
> http://www.cs.utah.edu/~bes/papers/height/paper.html
>
> There is a landscape example with the equivalent of 1,000,000,000,000
> triangles. And instead of generating and discarding millions (or
> billions) of unused microtriangles, it generates them as needed.


That is very neat. It has some restrictions on
how the displacements can be expressed, but it
still offers a compelling option over bumpmapping.

Would you know if this algorithm is being considered
for inclusion in POV-Ray?

Thanks,
Ray



From: Ray Gardener
Subject: Benchmarking
Date: 4 Jun 2003 18:35:06
Message: <3ede741a@news.povray.org>
> Ah, look, one billion triangles rendered in 43 hours* for 1080 Kpixels.
> Scaling your previously reported result of 16.8 million triangles drawn in
> 12 minutes as 275 Kpixels, your scanline method would take over 47 hours.
> And their scene render time includes global illumination and shadows.
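
(Checking that scaling: 12 min × (10^9 / 16.8×10^6) ≈ 11.9 hours for the
triangle count alone, and ×(1080/275) ≈ ×3.9 for the pixel count gives
roughly 47 hours.)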

That scene I submitted did generate a lot of
geometry that was ultimately occluded by nearer geometry.
Here is another scene,
http://www.daylongraphics.com/products/leveller/render/testimg_001.jpg

which took 67 seconds.

When rendering, after a few seconds, the image looks like this:
http://www.daylongraphics.com/products/leveller/render/testimg_004.jpg

Sometimes it's nice to scanline to see the object draw
that way, instead of having the render window draw
line by line (or mosaic into smaller and smaller squares).

Has anyone implemented displacement shading in POV-Ray
using that algorithm? If not, I'd volunteer.


Ray



From: Christopher James Huff
Subject: Re: Raytracing displacement shading
Date: 4 Jun 2003 18:39:38
Message: <cjameshuff-4B5D53.17322304062003@netplex.aussie.org>
In article <3ede7089$1@news.povray.org>,
 "Ray Gardener" <ray### [at] daylongraphicscom> wrote:

> That is very neat. It has some restrictions on
> how the displacements can be expressed, but still,
> offers a compelling option over bumpmapping.

As the paper mentions, it is more restrictive than REYES, but is the 
same type used in Maya, so in practice it may not be as restrictive as 
it first seems.


> Would you know if this algorithm is being considered
> for inclusion in POV-Ray?

In a near-term official version, it is unlikely. 3.5 is near the end of 
life for this code-base; there are plans for a redesign and rewrite. 
There are several patch writers interested in it, though, and the POV 
Team is aware of it. I'm going to try to implement it in my own 
raytracer; if I succeed I will get to work on a POV patch. This and 
Schaufler and Jensen's point geometry raytracing algorithm both look 
quite interesting. (In fact, I've been wondering if the point geometry 
rendering could be extended with a similar feature, generating 
additional displaced points on the fly.)

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Francois Labreque
Subject: Re: The Meaning of POV-Ray
Date: 4 Jun 2003 19:58:28
Message: <3EDE86E1.8080600@videotron.ca>
Program ended abnormally on 6/4/03 4:32 AM, Due to a catastrophic Gilles
Tran error:
> But do I want better, faster, artefact-free
> global illumination and volumetrics? Programmable shaders? Full HDRI
> support? A usable atmospheric (clouds and sky) model? You bet!
> 
> 

In your hands, these weapons would be deadly.

;)

-- 
/*Francois Labreque*/#local a=x+y;#local b=x+a;#local c=a+b;#macro P(F//
/*    flabreque    */L)polygon{5,F,F+z,L+z,L,F pigment{rgb 9}}#end union
/*        @        */{P(0,a)P(a,b)P(b,c)P(2*a,2*b)P(2*b,b+c)P(b+c,<2,3>)
/*   videotron.ca  */}camera{orthographic location<6,1.25,-6>look_at a }



From: Ray Gardener
Subject: Re: The Meaning of POV-Ray
Date: 5 Jun 2003 03:49:38
Message: <3edef612$1@news.povray.org>
> ... The core developers of the program
> are either professional programmers who work on it for fun in their spare
> time or are computer science students working on it to help develop their
> programming skills. That we as a public get to enjoy the fruits of their
> labor is secondary to that goal. With that in mind, the development model
> of POV-Ray is not clearly defined and is pretty much at the whim of the
> developers and whatever tickles their fancy at the time.

Hmm... I understand, but what I now wonder is
what happens when a critical mass of users have
made large investments in using POV-Ray.

Speaking hypothetically, let's say that all copies of POV-Ray
somehow vanished, along with any source code versions, and the
developers got tired of the whole thing as well. Essentially,
no more POV-Ray.

But there would be terabytes of POV-Ray scene files still left,
and tons of POV-Ray experience in its user base. So how much
time would elapse before someone would recreate it? I would
guess that a new POV-Team would spring up in a few days.

It's a weird extreme scenario, but I think it demonstrates
that at a certain point, a program's existence isn't
confined tightly to its development team. Intermediate
scenarios (e.g., the POV-Team decides on a radical change
that breaks half of all scene files, but another group instantly
springs up forking POV-Ray to preserve compatibility, or
the backlash is so strong the POV-Team changes their minds)
would also demonstrate the same effect. It's similar to
what people notice about political leaders: yes, the people
at the top technically have absolute power, but in practice,
they do not.

Let's imagine another case: an outsider develops a really
popular patch. Nearly everyone loves it, but (again, hypothetically)
the POV-Team doesn't. What would happen? The code would be
forked and the majority of the user base would follow
whatever group managed the fork.

Here's a really disturbing thought: Like they did with
C/C++ and web browsing, Microsoft clones POV-Ray and
does the usual embrace and extend. They do a good enough
job that most of the user base eventually migrates all of its
files to use MS-POV (arrgh, I actually get queasy just
thinking about it :). As the years go by, the POV-Team is
left maintaining an increasingly incompatible and
not-as-powerful program. To most users, whatever code
the POV-Team is maintaining is no longer POV-Ray, because
they're not using that code anymore.

I'm not trying to sound weird or anything, but I
thought it would be interesting to ponder the
deeper nature of project ownership when the project
has reached a critical mass of users and legacy data.
I would say that it really does rest with the users; the
developers basically have no choice but to keep
doing what the users want, or risk having their
leadership given to others.

Ray



