  Re: Scanline rendering in POV-Ray  
From: Patrick Elliott
Date: 4 Jun 2003 14:54:07
Message: <MPG.1947edee2d8eda16989816@news.povray.org>
In article <web.3ede2562c1329458bd7f22250@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Christopher James Huff wrote:
> >And utilities to make disks, cylinders, spheres,
> >other quadrics, NURBSs, etc...though those are tessellated. GLUT even
> >includes a *teapot* primitive!)
> 
> The card isn't going to see any of these.
> 
But it will see the triangles that make them up, and those take a lot of 
space.
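
To make that concrete, here's a rough back-of-envelope sketch in C (the 
structs are my own made-up illustration, not POV-Ray's or any card's 
actual format) comparing an analytic sphere against even a modest 
tessellated one:

/* Storage cost of a sphere primitive vs. a tessellated version.
   Hypothetical layouts, for the size comparison only. */
#include <stdio.h>

typedef struct { float x, y, z; } Vec3;

typedef struct { Vec3 center; float radius; } Sphere;  /* 16 bytes */

int main(void) {
    /* A modest tessellation: 32 stacks x 32 slices, position plus
       normal per vertex (24 bytes), ignoring index data entirely. */
    unsigned verts = 32 * 32;
    unsigned mesh_bytes = verts * 2 * sizeof(Vec3);
    printf("sphere primitive: %u bytes\n", (unsigned)sizeof(Sphere));
    printf("tessellated mesh: %u bytes (%ux larger)\n",
           mesh_bytes, mesh_bytes / (unsigned)sizeof(Sphere));
    return 0;
}

Even ignoring the index data, the mesh is three orders of magnitude 
larger, and every triangle of it has to cross the bus.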

> >Nobody will use the faster option if the slower option is slower? Or
> >nobody will use the raytracing card if it is slower, even if it gives
> >better quality?
> 
> Nobody will use the raytracing card for games if the quality gain is
> insufficient given the speed drop. I (emph.) assume that there will be a
> speed drop because I have seen many real-time scanline-based engines that
> didn't use a 3D card. I have seen one real-time raytracer, and that was one
> of the highly hand-optimised demos that used to be popular in the '90s. The
> resolution was very low, the framerate was very low, and the reflection at
> that resolution was indistinguishable from a reflection map. I would be
> very happy for someone to prove me wrong with a realtime raytracer that can
> compete on equal terms with a good realtime scanline renderer (in software,
> of course - no 3D accelerator).
> 
POV-Ray has a built-in example of real-time raytracing. It is small, but 
then you are dealing with an engine that runs on top of an OS and 
can't take full advantage of the hardware, since it will 'never' have 
100% access to the processor. A card-based one would likely be far 
more optimized, support speed improvements that don't exist in 
POV-Ray, and have complete access to the full power of the chip running 
it. Why would that be slower?
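
And the inner loop such a card would hard-wire is not exotic. Here is a 
minimal sketch in C (my own toy code, nothing to do with POV-Ray's 
actual engine) of the per-pixel ray-sphere test, which is exactly the 
kind of kernel dedicated silicon eats for breakfast:

/* Toy per-pixel raytrace kernel: one ray-sphere test per pixel,
   drawn as ASCII. Every name here is hypothetical. */
#include <math.h>
#include <stdio.h>

typedef struct { double x, y, z; } Vec;

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Returns distance to the nearest hit, or -1 for a miss.
   Assumes dir has unit length. */
static double hit_sphere(Vec orig, Vec dir, Vec center, double r) {
    Vec oc = { orig.x - center.x, orig.y - center.y, orig.z - center.z };
    double b = dot(oc, dir);
    double disc = b * b - (dot(oc, oc) - r * r);
    if (disc < 0) return -1;
    double t = -b - sqrt(disc);
    return t > 0 ? t : -1;
}

int main(void) {
    Vec cam = { 0, 0, -3 }, center = { 0, 0, 0 };
    for (int y = 0; y < 24; y++) {          /* tiny "frame buffer" */
        for (int x = 0; x < 48; x++) {
            Vec dir = { (x - 24) / 24.0, (12 - y) / 12.0, 1 };
            double len = sqrt(dot(dir, dir));
            dir.x /= len; dir.y /= len; dir.z /= len;
            putchar(hit_sphere(cam, dir, center, 1.0) > 0 ? '#' : '.');
        }
        putchar('\n');
    }
    return 0;
}

Every pixel in that loop is independent of every other, which is 
precisely the kind of work dedicated parallel hardware is good at.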

> I think the ability to deform/split up objects in realtime using a triangle
> mesh has quite a few advantages in games. Can you explode a box?

That is a point, but nothing prevents you from making explodable objects 
from triangles. In fact, the memory freed up by using primitives for 
the things that are not going to undergo such a change means you can 
use even more triangles and make the explosion even more realistic. 
Current AGP technology is reaching its limits as to how much you can 
shove through the door and use. Short of a major redesign of both the 
cards and the motherboards, simply adding more memory or a faster chip 
isn't going to cut it.
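
A quick back-of-envelope, assuming AGP 8x's theoretical peak of roughly 
2.1 GB/s: re-sending a one-million-triangle scene every frame at, say, 
32 bytes per vertex is 3 * 1,000,000 * 32 = 96 MB per frame, or about 
5.8 GB/s at 60 fps. That's nearly triple what the bus can deliver even 
in theory, while a sphere the card intersects directly never has to 
cross the bus as triangles at all.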

> >The
> >difference is that a few thousand primitives can be stored in the space
> >of a few thousand triangle mesh.
> 
> But who's going to construct a game with thousands of sphere or box
> primitives but no triangles? Room games maybe, but games in the open or set
> in space? Surely you're not proposing the isosurface as a practical
> realtime primitive shape? :-)
> 
Again: why would anyone design one that 'only' supported such primitives? 
That's like asking why POV-Ray supports meshes if we all think primitives 
are so great. You use what is appropriate for the circumstances. If you 
want a space ship that explodes into a hundred fragments, use a mesh; if 
you want one that gets sliced in half by a beam weapon, then use a mesh 
along the cut line and primitives where they make sense. Duh!
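
Something like the following is all I mean by mixing the two; the types 
are entirely hypothetical, not any real card's or engine's format:

/* A mixed scene list: each object is either an analytic primitive
   or a mesh, chosen per object. Illustrative types only. */
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

typedef enum { OBJ_SPHERE, OBJ_BOX, OBJ_MESH } ObjKind;

typedef struct {
    ObjKind kind;
    union {
        struct { Vec3 center; float radius; } sphere;  /* 16 bytes  */
        struct { Vec3 min, max; } box;                 /* 24 bytes  */
        struct { Vec3 *verts; unsigned *tris;          /* arbitrary */
                 unsigned nverts, ntris; } mesh;
    } u;
} Object;

int main(void) {
    /* The hull that never breaks stays a cheap primitive... */
    Object hull = { OBJ_SPHERE, .u.sphere = { {0, 0, 0}, 5.0f } };
    /* ...and only the section that must shatter along the cut line
       pays the mesh's memory cost (vertex data omitted here). */
    Object cut  = { OBJ_MESH,   .u.mesh   = { NULL, NULL, 0, 0 } };
    (void)hull; (void)cut;
    return 0;
}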

> Yes. I think you could get away with introducing a new card, even if nobody
> used the new features, but it would have to support existing games,
> performing at least as well as the conventional cards. This would be
> difficult, particularly since it would inevitably be more expensive.
> 
Well, that kind of describes most of the new cards that come out. lol. 
Yes, it would need to be compatible with previous systems, but that 
isn't exactly an impossibility.

> >There is no limit on how long they can take, but that doesn't mean they
> >are too slow to be useful. Dedicated hardware should be able to evaluate
> >procedural textures extremely quickly, more quickly than an image map if
> >it has to drag the image data back from main memory.
> 
> Why? The procedural must be calculated using a (probably) user-specified
> formula for every pixel that uses it. The image map must certainly be
> projected every pixel (a single unchanging operation), but the
> time-consuming step of actually acquiring the bitmap from system memory
> hopefully occurs only once for a scene.
> 
There is nothing to prevent using a second chip dedicated to processing 
such things and having it drop the result into a block of memory to be 
used like a 'normal' bitmap. That assumes, of course, that the speed 
increase gained by building the rendering engine into the card wouldn't 
already offset the time cost of the procedural texture. In any case, 
there are ways around this issue, especially if such methods turn out 
to already be in use on the newer DirectX cards.
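
To pin down what I mean by dropping the result into a block of memory: 
the formula runs once per texel up front, and the renderer then samples 
the result like any image map. A toy version in C (a checker pattern as 
my stand-in for a user-supplied formula; none of this is a real card's 
API):

/* "Second chip" bakes a procedural into an ordinary bitmap;
   the "rendering chip" then does plain memory reads. */
#include <stdio.h>

#define W 8
#define H 8

static unsigned char baked[H][W];

/* Second chip: evaluate the formula once per texel, up front. */
static void bake_checker(void) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++)
            baked[y][x] = ((x ^ y) & 1) ? 255 : 0;
}

/* Rendering chip: per-pixel lookup is now just a memory read.
   Expects u, v in [0, 1). */
static unsigned char sample(double u, double v) {
    return baked[(int)(v * H) % H][(int)(u * W) % W];
}

int main(void) {
    bake_checker();
    printf("sample(0.1, 0.1) = %d\n", sample(0.1, 0.1));
    printf("sample(0.2, 0.1) = %d\n", sample(0.2, 0.1));
    return 0;
}

After the bake, the per-pixel cost is identical to the 'normal bitmap' 
path the card already has.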


-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}

