In article <3ee5829b$1@news.povray.org>,
"Thorsten Froehlich" <tho### [at] trf de> wrote:
> No, the problem is, you cannot optimise the scanline algorithm further than
> what I outlined (at least not bringing down the complexity). In fact, the
> complexity I outlined holds for the first implementations 30 or so years ago
> and it still holds in the 3d hardware accelerators of today. The main
> benefit of the scanline algorithm is that you can make the constant time
> factor very, very small compared to ray-tracing.
You could probably do something somewhat similar to bounding...do a test
render of the bounding box of a group of triangles. If the box would
have been drawn, fully render the individual triangles; if it would not
have been drawn, ignore them. Probably not as efficient as in
raytracing, but it could work...I don't see a way to get it to work with
hardware-accelerated rendering though, unless there's a way to check
whether the depth buffer would have been modified without actually
modifying it. You could save/render/compare/restore, but that would
probably take longer, especially at large resolutions.
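The bounding-box test described above could be sketched roughly like this in software. All the names here (DepthBuffer, would_be_visible, render_group) are made up for illustration, not from POV-Ray or any real renderer; the point is just that the visibility test reads the depth buffer without writing it, so no save/restore pass is needed:

```python
# Sketch of the "test render the bounding box first" idea: before
# rasterizing a group of triangles, test their screen-space bounding
# rectangle against the depth buffer in read-only mode. If no sample
# of the box would pass the depth test, skip the whole group.

class DepthBuffer:
    def __init__(self, width, height, far=1.0):
        self.width, self.height = width, height
        self.z = [[far] * width for _ in range(height)]

    def would_be_visible(self, x0, y0, x1, y1, depth):
        """Read-only test: would drawing a quad covering the pixel
        rectangle [x0,x1) x [y0,y1) at the given depth write any
        sample? Nothing is modified, so no save/restore is needed."""
        for y in range(max(0, y0), min(self.height, y1)):
            for x in range(max(0, x0), min(self.width, x1)):
                if depth < self.z[y][x]:   # closer than stored depth
                    return True
        return False

    def fill_rect(self, x0, y0, x1, y1, depth):
        """Normal (writing) depth-only rasterization of a rectangle."""
        for y in range(max(0, y0), min(self.height, y1)):
            for x in range(max(0, x0), min(self.width, x1)):
                if depth < self.z[y][x]:
                    self.z[y][x] = depth


def render_group(db, bbox, triangles, draw_triangle):
    """Cull a whole triangle group with one cheap bounding-box test.
    bbox holds the screen rectangle plus the box's *nearest* depth,
    so the test is conservative: it can only skip groups that are
    definitely hidden. Returns the number of triangles drawn."""
    x0, y0, x1, y1, near_depth = bbox
    if not db.would_be_visible(x0, y0, x1, y1, near_depth):
        return 0                          # entire group occluded
    for tri in triangles:
        draw_triangle(db, tri)
    return len(triangles)
```

For example, after filling the buffer with a large occluder at depth 0.2, a group whose bounding box sits behind it at depth 0.5 is rejected with a single rectangle scan instead of per-triangle rasterization.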
> Yes, it does. The problem is, the one million to one difference of the
> constant factors. And due to the simplicity of the scanline algorithm, it
> is, compared to ray-tracing, much easier to implement it in hardware (you
> need only a few thousand gates; most of the logic on today's accelerator
> chips is used for texture and geometry computations).
It can also be done easily with integer math, while raytracing requires
floating point...even 64 bits isn't enough for practical fixed-point
raytracing. This keeps the logic simple, cutting development costs and
making higher speeds easier to attain.
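A rough illustration of why 64 bits runs out so quickly (this sketch and its Q32.32 split are my own example, not from the post): a ray-sphere intersection computes a discriminant from products of squared world coordinates, and with 32 integer plus 32 fractional bits a single multiply of moderately large coordinates already saturates the integer range:

```python
# Q32.32 fixed point: 32 integer bits, 32 fractional bits, in a
# 64-bit word. Squaring a coordinate of 100000 units needs 34
# integer bits for the result (10^10 > 2^33), so even one multiply
# in a ray-sphere discriminant overflows.

FRAC_BITS = 32
ONE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float to Q32.32."""
    return int(round(x * ONE))

def fixed_mul(a, b, bits=64):
    """Q32.32 multiply with the result clamped to a `bits`-wide
    signed word, as fixed-width hardware would. Returns the clamped
    product and an overflow flag."""
    full = (a * b) >> FRAC_BITS
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, full)), not (lo <= full <= hi)

# Small values are fine: 2.5 * 4.0 == 10.0 exactly in Q32.32.
prod, ov = fixed_mul(to_fixed(2.5), to_fixed(4.0))

# But squaring a coordinate of 100000 units overflows outright,
# before the discriminant is even assembled.
x = to_fixed(100000.0)
sq, overflowed = fixed_mul(x, x)
```

Scanline rasterization avoids this because after projection it only interpolates screen-space quantities with known, small ranges, which is exactly where fixed point shines.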
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/