POV-Ray : Newsgroups : povray.general : Scanline rendering in POV-Ray : Re: Scanline rendering in POV-Ray
  Re: Scanline rendering in POV-Ray  
From: Patrick Elliott
Date: 5 Jun 2003 17:04:45
Message: <MPG.19495df84baf0efe98981a@news.povray.org>
In article <web.3edf071ec1329458541c87100@news.povray.org>, 
tom### [at] compsocmanacuk says...
> Speed and complexity (hence cost). Current cards (at the high end for games)
> can cost a couple of hundred dollars. Any simplification that can be made
> can save cost, and having a card that doesn't need to switch over from
> using triangles to drawing boxes halfway through a scene is good for
> efficiency.
Cost will always be high in the first generation of any new technology. This is a 
given. As for the card changing how it works when it goes from a mesh 
to a box, any implementation would optimize this, assuming of course that 
calculating a single point on the surface of a triangle defined 
by three points is 'really' that incredibly different from calculating a 
point on the surface of a sphere or box. They both require very similar 
calculations, so I am not sure how you would get a major change in speed 
from whatever minor transition actually happens.

> I don't think a raytracer on a card
> designed for realtime game use has to solve exactly the same problems as a
> top-level raytracer for non-realtime use. In the same way that the Quake
> engine and 3D Studio aren't kin.
> 
Obviously, however, neither you nor I know exactly what set of features or 
capabilities would really need or make sense to be supported, or even 
necessarily how. All I have is POV-Ray as an example of a top-level 
raytracer, and stripped down to the bare bones it might be able to produce 
decent real-time performance by itself at 640x480 (maybe even at higher 
resolutions). However, you would have to severely strip it down to do so. 
As it is, most of the time is spent parsing the SDL, which on a 3D card 
would be replaced by feeding it much more specific data that wouldn't 
require parsing. Even the bounding boxes could be pre-defined and 
supplied as part of the object. If you eliminate the #1 biggest time 
waster and then optimize for those features most useful in a real-time 
game... It's not like there are a lot of examples of stuff like this 
people have written. I think there were a few that used such predefined 
objects and pre-calculated information to do real-time demos on the 
Apple IIgs in the 80s, and they managed a frame rate good enough for a 
simple game on a 2.5 MHz system. Are you honestly telling me that someone 
couldn't do a thousand times better on even a 500 MHz machine, let alone a 
1 GHz one? And that is just running it as software.
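As a rough illustration of the "no parsing" idea (the record name and layout here are entirely hypothetical, just something a host might hand a card), a fixed-layout object record with its bounding box attached turns scene upload into a plain memory copy instead of a tokenize-and-parse pass:

```c
#include <string.h>

/* Hypothetical fixed-layout record the host could hand the card
   directly, in place of text SDL that would need parsing. */
typedef struct {
    float center[3];
    float radius;
    float bbox_min[3];   /* bounding box supplied as part of the */
    float bbox_max[3];   /* object, rather than computed on load  */
} SphereRecord;

/* "Uploading" the scene is then just a memcpy into card memory;
   no parsing pass happens at all. */
static size_t upload(unsigned char *card_mem, const SphereRecord *objs, size_t n) {
    memcpy(card_mem, objs, n * sizeof(SphereRecord));
    return n * sizeof(SphereRecord);
}
```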

The key issues are development time, cost of the product, and 
backward compatibility. The last item is the only one that 
would bury any attempt made right now to do it.

> >Well, that kind of describes most of the new cards that come out. lol
> >Yes, it would need compatibility with the previous systems, but that
> >isn't exactly an impossibility.
> 
> It is more difficult when you have completely changed the philosophy behind
> the card, but want it to remain compatible with the previous philosophy.
> You don't agree?
> 
Not necessarily. OpenGL's structure for storing objects differs from POV-
Ray's SDL, for example, but the underlying data is more or less identical. 
You need a converter only because no native support exists to load such 
an object. If you designed a new card, you would likely be 
incorporating the ability to support the same structures previous 
cards already use. There is no practical reason not to do so.

> >There is nothing to prevent using a second chip dedicated to processing
> >such things and having it drop the result into a block of memory to be
> >used like a 'normal' bitmap. This assumes that the speed increase gained
> >by building the rendering engine into the card wouldn't offset the time
> >cost of the procedural texture. In any case, there are ways around this
> >issue, especially if such methods turn out to already be in use on the
> >newer DirectX cards.
> 
> Then aren't you going to lose the advantage of generating textures on the
> card? If I generate a bitmap by procedure or by artist and subsequent
> loading, I must still store it. Newer cards do procedural shading on a
> pixel as it's rendered (or so I thought), so no extra storage is required.
> 
This contradicts your previous suggestion that using such 
procedural systems is somehow tied to complexity and that a bitmap has added 
advantages. If that were true, then there would be no reason not to simply 
generate a bitmap from the procedural texture and use it. Now you say 
this isn't needed, since newer cards already do what I said..?
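Just to pin down what "generate a bitmap from the procedural texture" would mean: evaluate the procedure once per texel, store the results, and from then on the card samples it like any normal bitmap. A checker pattern stands in here for a real procedural texture; the function and its parameters are purely illustrative:

```c
/* Bake a procedural pattern (a checker stand-in) into a w*h array of
   single 8-bit texels. After this runs once, the card can treat `tex`
   as an ordinary bitmap and never re-run the procedure. */
static void bake_checker(unsigned char *tex, int w, int h, int cells) {
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            int cx = x * cells / w;   /* which checker cell we are in */
            int cy = y * cells / h;
            tex[y * w + x] = ((cx + cy) & 1) ? 255 : 0;
        }
}
```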

-- 
void main () {

    call functional_code()
  else
    call crash_windows();
}


