In article <3ea29335@news.povray.org>, sim### [at] gaussschule-bs de says...
> Thorsten Froehlich wrote:
> > In article <3ea1d418$1@news.povray.org> , Simon Adameit
> > <sim### [at] gaussschule-bs de> wrote:
> >
> >
> >>regarding your last comment, how compares raytracing with other
> >>techniques when complexity grows?
> >
> >
> > In short:
> >
>
> Thanks for your detailed explanation.
>
> > So, as you can see (if you managed to read on to here), if the scene
> > complexity rises, ray-tracing gets more economical, even for things like
> > triangle meshes.
> >
>
> Unfortunately complexity is limited by memory.
>
>
It should be noted that, with games at least, some rendering engines
combine features of scanline and raytracing. In general, the difference
comes down to accuracy and effects. A triangle-based sphere needs to
somehow gain triangles the closer you get to it, so it will never be as
accurate as a raytraced one. Obvious of course, but the same issue exists
with 'any' mesh you use. The other factor is that you either use the
hardware acceleration or calculate nearly everything in your program
anyway, since the cards can't do complex things like refraction,
reflection, etc. They just slap triangles onto the screen and slap a
texture over them. Some older cards couldn't even do that right. lol
Because of this, a lot of games speed things up when they have huge
meshes by z-buffering them internally into planes. In other words, they
chop the existing meshes into layers that will get sent to the card, use
something like a raytracer's bounding to eliminate any triangles that you
won't see, and then dump the rest to the card, or, lacking a card, through
a scanline system that just draws everything and maps a texture onto it
(which is more or less what Doom did). If games don't do this, then it is
quite silly, since even if the cards themselves perform something similar,
there is, as you say, a distinct limit to how much junk you can hand even
the best OpenGL or DirectX card before you run out of memory, especially
since you also have to dump huge textures to the card as well. On a
modern machine you could very nearly produce a frame rate as good as or
better than any 3D card by using pure raytracing and procedural textures,
where you only used meshes when needed, and even that isn't a 'major'
slowdown.
The only real issue is structuring the data in a way that lets you load
it fast. Some level of persistence of objects from frame to frame, so
that you only parsed 'new' objects or transitional information, would
pretty well vaporize that issue. Considering that high-poly action scenes
in stuff like the XBox's Yu-Gi-Oh: War of the Roses game load so insanely
slowly that, from what I hear, people turn them off, even POVRay's
existing per-frame parse time would probably 'load' the needed animation
frames faster by simply generating each frame individually. ;) I would
say that no matter how much they improve the cards themselves, the main
issue is now becoming how to get the bloody data to them in the first
place, and the way they work actually makes that significantly harder to
do, no matter what the frame rate per poly-count now is. ;) lol
But that gets slightly off topic. ;) Knowing how poly-based systems work,
I suspect that the in-program views use scanline, but any such tool that
claims to be able to export the result will use more conventional
raytracing to produce the final image. A few even say something to that
effect in some obscure corner of the docs, and the main problem is
almost always the obvious mesh nature of the objects when seen too
closely, or just plain at the wrong angle....
--
void main () {
call functional_code()
else
call crash_windows();
}