POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray : Re: Scanline rendering in POV-Ray
  Re: Scanline rendering in POV-Ray  
From: Tom York
Date: 4 Jun 2003 13:00:02
Message: <web.3ede2562c1329458bd7f22250@news.povray.org>
Christopher James Huff wrote:
>And why is being able to manufacture things out of many shapes worse
>than only having one shape to use?
>(Actually, more than one shape, at least in OpenGL: points, lines,
>line-loop, line-strip, triangles, triangle-strip, triangle-fan, quads,
>quad-strip, polygon.

Complexity (and so cost) of the card. And the primitives that the 3D API
gives you are not necessarily the same as the primitives that get passed to
the card (for instance, DirectX now deals in pixel-sized triangles instead
of points, and seems to have done away with the 2D DirectDraw stuff).

>And utilities to make disks, cylinders, spheres,
>other quadrics, NURBSs, etc...though those are tessellated. GLUT even
>includes a *teapot* primitive!)

The card isn't going to see any of these.

>Nobody will use the faster option if the slower option is slower? Or
>nobody will use the raytracing card if it is slower, even if it gives
>better quality?

Nobody will use the raytracing card for games if the quality gain is
insufficient given the speed drop. I *assume* that there will be a
speed drop because I have seen many real-time scanline-based engines that
didn't use a 3D card. I have seen one real-time raytracer, and that was one
of the highly hand-optimised demos that used to be popular in the '90s. The
resolution was very low, the framerate was very low, and the reflection at
that resolution was indistinguishable from a reflection map. I would be
very happy for someone to prove me wrong with a realtime raytracer that can
compete on equal terms with a good realtime scanline renderer (in software,
of course - no 3D accelerator).
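To give a concrete sense of the per-pixel arithmetic a raytracer pays (a minimal sketch, all names hypothetical): even the cheapest primitive test, ray vs. sphere, costs a quadratic solve per pixel, per object tested.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Nearest positive hit distance along a unit-length ray, or None.

    A raytracer repeats work like this for every pixel and every object
    it tests, which is why software raytracers struggle to match
    scanline engines at playable resolutions.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c  # 'a' term is 1 for a unit-length direction
    if disc < 0.0:
        return None         # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

# A ray down the z-axis hits a unit sphere centred at z=5 at t=4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```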

>Well, a triangle doesn't model anything in the real world, you need at
>least 4 of them to get a decent approximation of a real-world object,
>and you don't see many tetrahedrons lying around. There are many things
>that a box models quite closely, to a level that would be invisible on a
>typical camera image with comparable resolution.

I don't see a single box convincing anyone nowadays. You must use groups of
them, just as you must use groups of triangles to resemble anything useful.
And triangles certainly have their uses - what about terrain?
I think the ability to deform/split up objects in realtime using a triangle
mesh has quite a few advantages in games. Can you explode a box? Not as
easily as you can explode a few triangles. If you model an object using
more and more complex primitives, you necessarily have problems if at some
point you want to treat the object as a collection of smaller items.
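The triangle-mesh advantage here can be sketched in a few lines (hypothetical code, not from any shipping engine): each triangle is already an independent unit, so "exploding" a mesh is just giving every triangle its own velocity.

```python
import math

def explode(triangles, blast_point, speed):
    """Turn each triangle of a mesh into an independent fragment.

    Every triangle gets a velocity directed away from the blast point --
    the kind of per-primitive split that is awkward when the object is a
    single box or sphere primitive.
    """
    fragments = []
    for tri in triangles:
        # Fragment flies from the blast point through its centroid.
        cx = sum(v[0] for v in tri) / 3.0
        cy = sum(v[1] for v in tri) / 3.0
        cz = sum(v[2] for v in tri) / 3.0
        dx = cx - blast_point[0]
        dy = cy - blast_point[1]
        dz = cz - blast_point[2]
        length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
        velocity = (speed * dx / length,
                    speed * dy / length,
                    speed * dz / length)
        fragments.append((tri, velocity))
    return fragments
```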

>Both can, and there have been ways to do so for quite some time.

In a game, an object may be removed at almost any time, either directly due
to player action or to something else. This is surely unpredictable,
especially given the trend towards destroyable/interacting scenery.

>The
>difference is that a few thousand primitives can be stored in the space
>of a few-thousand-triangle mesh.

But who's going to construct a game with thousands of sphere or box
primitives but no triangles? Room games maybe, but games in the open or set
in space? Surely you're not proposing the isosurface as a practical
realtime primitive shape? :-)

>Marathon 1, an old Mac first-person shooter, had some left-over bits and
>pieces for a sphere renderer in its code...wasn't ever used in the game,
>though, and this predated 3D accelerator cards for home computers. But
>why couldn't a game benefit from a sphere primitive? With procedural
>shaders, you could do a lot with a sphere, like virtually any kind of
>energy bolt or blast effect. With more complex CSG available, you could
>build a complex room with primitives and procedural shaders and still
>have space available on the card for character meshes and skins.

Yes, the game I mentioned also predated 3D cards (part of the reason they
were impelled to try ellipsoids, perhaps). For things like energy bolts, a
player will usually have insufficient time to see the difference between a
sphere and a couple of crossed texture-mapped polygons (or a more
complicated model). For blast effects, I have seen textures mapped to a
sphere used as a kind of blast, and it generally looks terrible IMO. The
edge is too sharp, too uniform (same problem as in Quake 2-style blasts,
which were done with a sort of simple polygonal explosion model). I think
some sort of volumetric method would be far better here. With polygons, you
can have the procedural shader and a flexible type that doesn't have to
enforce spherical or elliptical symmetry, or be a closed surface.

For CSG applied to rooms, it would need to be as fast as the BSP-style
methods used for static geometry at the moment (although I guess BSPs are
not used for interactive scenery).
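For comparison, the speed of the BSP approach comes from how cheap each query is once the static tree is built. A minimal sketch (hypothetical names, two half-spaces standing in for a real level):

```python
class BSPNode:
    """One splitting plane of a BSP tree: n . p = d divides space."""
    def __init__(self, normal, d, front=None, back=None, leaf=None):
        self.normal, self.d = normal, d
        self.front, self.back = front, back
        self.leaf = leaf  # payload for leaf nodes (e.g. a room name)

def locate(node, point):
    """Walk down to the leaf containing `point` -- O(tree depth) per
    query, which is why BSPs are so fast for *static* level geometry
    (and why rebuilding them for moving scenery is the hard part)."""
    while node.leaf is None:
        side = sum(n * p for n, p in zip(node.normal, point)) - node.d
        node = node.front if side >= 0.0 else node.back
    return node.leaf

# Two rooms split by the plane x = 0.
root = BSPNode(normal=(1.0, 0.0, 0.0), d=0.0,
               front=BSPNode(None, 0.0, leaf="east"),
               back=BSPNode(None, 0.0, leaf="west"))
print(locate(root, (2.0, 1.0, 0.0)))  # east
```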

>That is the primary problem: R&D would cost money and resources that
>could be used on a better but more conventional card, and at release
>there would be nobody ready to make use of the card, and no guarantee
>that there ever would be. It requires a card supporting those features,
>software to use the new features of the card, and game designers to use
>the new features of the software.

Yes. I think you could get away with introducing a new card, even if nobody
used the new features, but it would have to support existing games,
performing at least as well as the conventional cards. This would be
difficult, particularly since it would inevitably be more expensive.

>There is no limit on how long they can take, but that doesn't mean they
>are too slow to be useful. Dedicated hardware should be able to evaluate
>procedural textures extremely quickly, more quickly than an image map if
>it has to drag the image data back from main memory.

Why? The procedural texture must be calculated, using a (probably)
user-specified formula, for every pixel that uses it. The image map must
certainly be projected for every pixel too (a single unchanging operation),
but the time-consuming step of actually acquiring the bitmap from system
memory hopefully occurs only once per scene.
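The trade-off can be sketched in a few lines (hypothetical code: a checker formula and a nearest-neighbour lookup standing in for real shaders and texture units):

```python
def checker(u, v, scale=8):
    """Procedural texture: a formula re-evaluated from scratch at every
    pixel -- per-pixel cost grows with the formula's complexity."""
    return 255 if (int(u * scale) + int(v * scale)) % 2 == 0 else 0

def sample(bitmap, u, v):
    """Image map: the bitmap upload is paid for once; after that, every
    pixel is just an address calculation and a memory fetch, no matter
    how elaborate the image itself is."""
    h, w = len(bitmap), len(bitmap[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return bitmap[y][x]
```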

>Here you have that
>size issue again: image maps are big, and video card memory is limited,
>so things often have to be shuffled between video card memory and main
>system memory, which is surprisingly slow.

I think 3D card processing time is the more limited resource. Bitmaps have
the advantage that texture loading time is independent of surface texture
complexity: a flat grey texture will take no more time to apply than a
same-resolution weathered texture with "hello world" scrawled on it.
Obviously, there's
going to be a significant difference with a procedural approach. Procedural
shaders are certainly going to be useful for particular effects, but I
don't believe they're going to dominate any time soon, if ever.


