Right, well here's a question I bet none of you have ever been asked
before... Why is raytracing so slow?
OK, so on the surface I know it probably sounds daft, but think about it...
They have graphics cards that can render polygon meshes in realtime - meshes
with just downright silly numbers of polygons in them - so why does it take
a raytracer a few whole *seconds* to render a single sphere with 1 light
source? OK, OK, because it's in software, not hardware, I know... but I
played Quake II for years without hardware acceleration. (Poor deprived
student... *sniff*) So I say again, WHY is raytracing slower?
Obvious answer #1 is that the results are higher quality. For something
like Quake (insert Half Life, Unreal, Alien vs Predator, etc. instead if you
prefer) all the game needs to do is work out where each polygon fits on the
screen, calculate lighting (without shadows - at least *I* have never seen
a game with shadows!) and map a texture onto it. With hardware acceleration
it usually does texture smoothing too. And perhaps even edge antialiasing.
But essentially, the overall lighting of the map is precomputed radiosity (at
fairly low res). Now, what does a raytracer do that means a graphics card
can map a dozen textures onto several hundred polygons and animate it at 30
FPS while a raytracer takes several seconds for a scene with a single
sphere?
The only answer I've ever come across (and correct me if this is no longer
true) is that ray intersection tests take up as much as 90% of the
calculation time. (3D hardware uses Z-buffers and elaborate coherence
techniques to ensure it can operate on just the polygons that are visible on
each line, if I'm not mistaken.) So if you wanted to make a raytracer go
faster, reducing the number of ray intersection tests would be the place to
start, yes?
Is that or is that not what POV-Ray's vista buffer and light buffers are
designed to do? And would I be right in thinking that these don't apply to
reflection and refraction?
Thanks.
Andrew.
PS. If the above text doesn't appear to have a particular POINT to it...
what can I say? I need more sleep!
In article <3ee33411@news.povray.org>,
"Andrew Coppin" <orp### [at] btinternetcom> wrote:
> But essentially, the overall lighting of the map is precomputed radiosity (at
> fairly low res). Now, what does a raytracer do that means a graphics card
> can map a dozen textures onto several hundred polygons and animate it at 30
> FPS while a raytracer takes several seconds for a scene with a single
> sphere?
It actually simulates the light travelling through the scene. And modern
hardware can render a sphere + light source at a decent framerate.
> The only answer I've ever come across (and correct me if this is no longer
> true) is that ray intersection tests take up as much as 90% of the
> calculation time.
They can. Texturing calculations also account for a lot, especially with
the more complex procedural textures, but the main reason raytracing is
slower is that it traces the rays of light through the scene instead of
projecting triangles onto an image plane and "painting" them onto the
image with an image map stretched across them.
> (3D hardware uses Z-buffers and elaborate coherence
> techniques to ensure it can operate on just the polygons that are visible on
> each line, if I'm not mistaken.) So if you wanted to make a raytracer go
> faster, reducing the number of ray intersection tests would be the place to
> start, yes?
Correct. You can also reduce the time taken for intersections, for
example by replacing a slow isosurface with a mesh. And you can bound
expensive-to-calculate objects with simpler objects like boxes or
spheres: if the ray doesn't hit the bounding shape, you know it can't
hit the real shape so you can just skip those computations. You can then
bound a group of these bounding shapes with another single bounding
shape, ending up with a tree structure that eliminates a great number of
unnecessary calculations. POV builds such a structure automatically,
though manual bounding can still give better results in some cases.
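A minimal sketch of that early-reject idea in Python (the function names and the scene representation are mine for illustration, not POV-Ray's internals):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Cheap ray/sphere test: does |origin + t*direction - center| = radius
    have a solution with t >= 0?"""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # the ray's line misses the sphere entirely
    root = math.sqrt(disc)
    # Accept a hit only if an intersection lies in front of the ray origin.
    return (-b - root) / (2.0 * a) >= 0.0 or (-b + root) / (2.0 * a) >= 0.0

def intersect(origin, direction, obj):
    """Run the expensive intersection test only when the cheap bounding
    sphere cannot rule the object out."""
    if not ray_hits_sphere(origin, direction,
                           obj["bound_center"], obj["bound_radius"]):
        return None  # bounding sphere missed: skip the costly test
    return obj["expensive_test"](origin, direction)
```

A tree of such bounds works the same way: when a ray misses a node's bounding shape, the entire group of objects inside it is skipped with a single test.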
> Is that or is that not what POV-Ray's vista buffer and light buffers are
> designed to do? And would I be right in thinking that these don't apply to
> reflection and refraction?
Correct. These techniques are useful when you have a lot of rays from a
known location. Reflections and refractions spawn rays from anywhere in
the scene, so they can't be optimized in this way. The bounding tree
works no matter where the ray comes from, though.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
The simple answer is that raytracing has a large default overhead.
That is, any scene with any object will take a *minimum* of time, which
is relatively large.
However, while the rendering time of a 3D card increases linearly with
the number of polygons, raytracing time grows about logarithmically
(supposing we have some bounding box hierarchy).
That is, rendering 20000 triangles with a 3D card will take roughly twice
the time required to render 10000 triangles. However, raytracing 20000
spheres takes only a few percent longer than raytracing 10000 spheres
(you can try it with POV-Ray if you want).
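Back-of-the-envelope arithmetic behind that claim, assuming an idealised balanced bounding hierarchy where per-ray work is proportional to tree depth (a sketch of the scaling argument, not POV-Ray's actual code):

```python
import math

def per_ray_cost(n):
    """Intersection tests per ray in an ideal balanced bounding tree:
    roughly proportional to the tree depth, log2(n)."""
    return math.log2(n)

# Doubling the scene from 10000 to 20000 spheres adds only a few percent
# of per-ray work, whereas a linear algorithm would double it.
increase = per_ray_cost(20000) / per_ray_cost(10000) - 1.0
print(f"~{increase:.0%} more work for twice the objects")
```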
--
plane{-x+y,-1pigment{bozo color_map{[0rgb x][1rgb x+y]}turbulence 1}}
sphere{0,2pigment{rgbt 1}interior{media{emission 1density{spherical
density_map{[0rgb 0][.5rgb<1,.5>][1rgb 1]}turbulence.9}}}scale
<1,1,3>hollow}text{ttf"timrom""Warp".1,0translate<-1,-.1,2>}// - Warp -
In article <3ee33411@news.povray.org> , "Andrew Coppin"
<orp### [at] btinternetcom> wrote:
> Why is raytracing so slow?
It is not slow. You have to understand a very important area of computer
science - the analysis of algorithms - to see why. Actually, it isn't
too difficult if you paid attention in the advanced math you had in the last
few years of high school:
The comparison of ray-tracing with scanline rendering is a bit too complex
for introducing the general concept, so I will pick a well-described example:
sorting algorithms. Sorting is a very well studied problem in computer
science, and one that can be described mathematically very well too. You
can read about it in any book about algorithms.
It is possible to show that you can sort any sequence of n elements in a
maximum of n lg n steps (this is oversimplified; see the book "Introduction
to Algorithms" for a detailed discussion). Now, there is a common and
popular algorithm called quicksort, for which you can always find an input
sequence that takes n*n steps to get sorted. So why is everybody using
quicksort if it takes so much longer (for a big n) than the best
solution possible? Well, the worst-case runtime of an algorithm does not
necessarily say anything relevant about its average running time. And in
the case of quicksort that average is still n lg n.
It can even be shown that you will always need at least n lg n steps to sort
a sequence if you need to compare elements in that sequence (that is, if you
use sorting algorithms from the family of comparison sorts). But does this
mean you cannot sort any sequence of elements in less than n lg n steps?
No, you just have to use more information than just saying you have
"elements". For example, if you know your elements are integers from 1 to n,
there are algorithms which sort them in n steps!
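A sketch of that last point in Python: when the elements are known to be integers in a fixed range, a counting sort needs no element-to-element comparisons at all and runs in a number of steps proportional to n (this is the classic non-comparison sort alluded to; the function name is mine):

```python
def counting_sort(seq, max_value):
    """Sort integers in the range 1..max_value without comparing elements:
    one counting pass over the input, one pass over the counters."""
    counts = [0] * (max_value + 1)
    for x in seq:
        counts[x] += 1  # one step per element
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)  # emit each value count times
    return result

print(counting_sort([3, 1, 2, 3, 1], 3))  # -> [1, 1, 2, 3, 3]
```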
> OK, so on the surface I know it probably sounds daft, but think about it...
> They have graphics cards that can render polygon meshes in realtime - meshes
> with just downright silly numbers of polygons in them - so why does it take
> a raytracer a few whole *seconds* to render a single sphere with 1 light
> source? OK, OK, because it's in software, not hardware, I know... but I
> played Quake II for years without hardware acceleration. (Poor deprived
> student... *sniff*) So I say again, WHY is raytracing slower?
It has nothing to do with software or hardware at all. The fundamental
property, the runtime bounds, cannot be changed. But there are two more
things you need to know that I didn't mention above.
Every step needed to sort takes time. And, obviously, how long a step takes
depends on a lot of things. In fact, depending on the algorithm, a
step can be relatively slow or fast.
Another thing I did not mention so far is that you cannot use average
runtime to compare algorithms whose elements are dissimilar - or, more
complex still, whose runtime is a combination of several dissimilar
elements. And this is why your assumption that ray-tracing "is slower",
based on looking at a simple scene, is misguided.
For (simple) ray-tracing you have to test each of the n objects in the
scene for all the m pixels, so the runtime to render will be about m*n. For
scanline rendering you have n objects which consist of t triangles which
consist of p pixels, which itself is m/q, where q is the reciprocal of each
triangle's share of the overall image (one that counts all *drawn* triangle
pixels, not just the visible triangle pixels; you need to draw pixels to
determine their visibility, usually many pixels over each other, and then
select the closest pixel). So the runtime to render will be about n*t*p.
Notice something? For ray-tracing the render-time is directly proportional
to the number of pixels in the image. But for scanline rendering it is not.
On the other hand, for ray-tracing the number of objects is just one part of
the equation.
So, to say anything about these two algorithms, we do need the constant
factors, and this is where your "is slower" statement can be shown to be
relative. That is, we say c1*m*n and c2*n*t*m/q. Let's say q is a
constant factor, i.e. each triangle contributes 1/10000 of the image, so
c2*n*t*m/10000, which is simply c2*n*t*m. Further, we assume every object
consists of (very few) triangles, say 10. So we get, after eliminating
constants, c2*n*m.
See something about the runtime now? By setting some constants we were
able to make the two equations comparable! And they now both take the same
time. But wait, there are these annoying constant factors. It turns out
that c2 is much smaller than c1. So does this show ray-tracing will always
be slower? Well, yes and no. It shows that the simple algorithm I presented
as "ray-tracing" will always be slower. Let's pick up at the number of
objects in our ray-tracing runtime estimation again. Do we really need to
test each and every object each and every time?
No! There are various ways to optimise finding the closest object.
Let's consider a very simple algorithm: assume all objects are approximately
evenly distributed over a specific volume in the scene. So subdivide the
space in the scene into cubes (like a grid on paper, but 3D), with an
average of q objects in each cube. So you have r*r*r = n/q cubes.
Now, to find an intersection you only need to follow a line (the ray)
through these cubes, and only the cubes it touches need to be checked. The
longest line through the space consisting of r*r*r cubes crosses sqrt(r*r*3)
cubes, which in turn is proportional to n^(1/3), as we, again, ignore
constant factors (because they don't change their constness).
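The scaling in that last step can be checked numerically. Sticking with the notation above (n objects, q per cube, so r*r*r = n/q cubes) and dropping the sqrt(3) constant, a ray crosses on the order of r cubes - a quick sketch, with q chosen arbitrarily:

```python
def cubes_per_ray(n, q=8):
    """Order of cubes a ray can cross in an r*r*r grid holding n objects,
    q per cube on average: r = (n/q)**(1/3), ignoring constant factors."""
    return (n / q) ** (1.0 / 3.0)

# 64x more objects means only 4x more cubes along a ray.
for n in (1_000, 8_000, 64_000):
    print(n, round(cubes_per_ray(n), 1))
```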
So now the ray-tracing runtime is c2*m*sqrt(n^(1/3)) = c2*m*n^(1/6). If you
compare this to the scanline rendering runtime of c1*m*n, do you notice what
will happen, even if you assume the ray-tracing constant c2 = 1000000 and
the scanline constant c1 = 1? You simply pick an n, that is a number of
objects, big enough and the runtime of the ray-tracing algorithm will be
smaller than that of the scanline algorithm.
The problem is, for the constants I suggested, you need a really big n. So
only if you have a really large number of objects will ray-tracing be
faster.
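To put a number on "a really big n": reading those figures as the ray tracer's constant being a million times the rasteriser's (my interpretation of the constants above, so treat it as a hedged sketch), the crossover point where c*m*n^(1/6) drops below m*n can be solved directly:

```python
def crossover_n(ray_constant=1_000_000.0, exponent=1.0 / 6.0):
    """Smallest n with ray_constant * n**exponent < 1 * n,
    i.e. n**(1 - exponent) > ray_constant."""
    return ray_constant ** (1.0 / (1.0 - exponent))

n = crossover_n()
print(f"ray tracing wins beyond roughly {n:.2g} objects")
# With the n^(1/3) traversal cost from the correction downthread,
# the crossover climbs to about 1e9 objects.
print(f"{crossover_n(exponent=1.0 / 3.0):.2g}")
```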
Analysing algorithms means defining a function for their runtime and then
comparing the growth of those functions. And as you know from school, just
looking at one, or even many, samples of a function will not tell you much,
if anything, about its growth!
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
In article <3ee46215@news.povray.org> , "Thorsten Froehlich"
<tho### [at] trfde> wrote:
> c2*m*sqrt(n^(1/3)) = c2*m*n^(1/6)
Oops, this should be c2*m*sqrt((n^(1/3))^2*3) and thus c2*m*n^(1/3), but it
does not change anything.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
Wow. Now that was an explicit and long answer. Nice job! :-)
--
Tim Nikias v2.0
Homepage: http://www.digitaltwilight.de/no_lights
Email: Tim### [at] gmxde
Hey Thorsten
[Really long and nice answer]
Thank you for that wonderful explanation! It really helped me to understand
the subject properly.
Florian
> However, while the rendering time of a 3D card increases linearly with
> the number of polygons, raytracing time grows about logarithmically
> (supposing we have some bounding box hierarchy).
Mmm... that's a very interesting observation... I gotta try this...
Andrew.
OK, OK, let me try again...
Comparing a raytracer to scanline rendering is not really a like-for-like
comparison. They work in very different ways. All I was trying to get at is
1) for a scene complex enough to make up part of, say, a computer game,
software raytracing takes a lot longer than hardware scanline rendering, and
2) what can you do to a raytracer to make it go faster?
Personally, I have yet to create a texture complex enough for it to have any
measurable effect on render time. But throw in a shape that's difficult to
do intersection tests on - isosurface, text, or even just a torus - and the
render time goes up by miles. (Also turning on reflection or refraction.)
Lighting also slows things down more than would seem reasonable - I presume
due to the extra shadow tests (which require - yes - more intersection
tests).
But anyway... I too have spent hours pondering the enigma of sorting lists.
(I don't have access to the books with all the answers, so I have to
reinvent the wheel.) It's a toughie, innit?
Thanks.
Andrew.
PS. I downloaded the POV-Ray sources once... I still have no friggin idea
what a Sturmian root solver is!!! Oh well...
> It actually simulates the light travelling through the scene. And modern
> hardware can render a sphere + light source at a decent framerate.
Yeah, I know how a raytracer works - I've built one ;-)
I'd be interested to see one go in realtime... (Actually, some of my Chaos
Pendulum renders nearly had POV-Ray in realtime... I presume its preview
window isn't designed with realtime performance in mind. It goes way faster
if I switch it off!)
> They can. Texturing calculations also account for a lot, especially with
> the more complex procedural textures, but the main reason raytracing is
> slower is that it traces the rays of light through the scene instead of
> projecting triangles onto an image plane and "painting" them onto the
> image with an image map stretched across them.
I've yet to see texturing have any significant effect on my renders...
Complex object geometry slows it to a crawl, as does excessive lighting, and
media (which is a whole OTHER kettle of fish - you could do media
simulations without a raytracer). But texturing has never seemed to have any
effect at all. (Until you turn on AA, or reflection/refraction, but these
are due to other effects.)
> Correct.
Yay!
> POV builds such a structure automatically,
> though manual bounding can still give better results in some cases.
...such as building polyhedrons out of planes using CSG? (I do this all the
time... *is* there any other way to build polyhedrons in POV?!?)
> > Is that or is that not what POV-Ray's vista buffer and light buffers are
> > designed to do? And would I be right in thinking that these don't apply
to
> > reflection and refraction?
>
> Correct. These techniques are useful when you have a lot of rays from a
> known location. Reflections and refractions spawn rays from anywhere in
> the scene, so they can't be optimized in this way. The bounding tree
> works no matter where the ray comes from, though.
The other night I convinced myself that voxels were the answer to all our
problems, and I was about to post a message demanding that it be implemented
immediately in the name of speed. Fortunately, I just happened to engage my
brain first. And I realised that it's actually a pretty lame idea after all.
Glad I didn't post that suggestion!
Andrew.