POV-Ray : Newsgroups : povray.general : Amapi is free : Re: Amapi is free
  Re: Amapi is free  
From: Thorsten Froehlich
Date: 20 Apr 2003 04:57:21
Message: <3ea260f1@news.povray.org>
In article <3ea1d418$1@news.povray.org> , Simon Adameit 
<sim### [at] gaussschule-bsde>  wrote:

> Regarding your last comment, how does ray-tracing compare with other
> techniques as complexity grows?

In short:

With scanline rendering you have to draw every object potentially
intersecting the view frustum in order to determine its visibility.  That
is, the bounding volumes a scene contains are only good for determining
whether objects (actually, their bounding volumes) intersect the view
frustum.  Then you have to transform and draw those objects.  As objects
can only consist of polygons, you have to transform all those points (or
build the mesh on the fly, which is even more work), and then start
drawing the triangles.  It may turn out that only a tiny fraction of those
triangles is actually visible: the more objects the scene contains, the
more of the pixels drawn end up hidden behind others.

With ray-tracing you can use bounding everywhere. Once you have identified
the object with the closest bounding volume, you calculate the intersection.
If there is none, pick the object with the second closest bounding volume
(which you already have as you found it while searching for the closest one)
and so on. And there are a lot of ways to easily determine which bounding
volumes are closest.  Certain algorithms allow storage of bounding
information such that you will have an almost sorted list of only the
bounding volumes within the view frustum.  Or you use other data structures
to represent the scene.
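As a rough sketch of that search order (all names are illustrative; the post gives no code, and this assumes spherical bounding volumes and a pre-sorted list):

```python
def hit_sphere(origin, direction, center, radius):
    """Ray vs. bounding-sphere test; 'direction' is assumed normalized."""
    oc = [c - o for c, o in zip(center, origin)]
    proj = sum(a * b for a, b in zip(oc, direction))      # closest approach along the ray
    dist2 = sum(a * a for a in oc) - proj * proj          # squared ray-to-center distance
    return proj > 0 and dist2 <= radius * radius

def first_hit(origin, direction, volumes, intersect_object):
    # 'volumes' holds (object, center, radius) tuples, pre-sorted by
    # distance from the viewpoint -- the "almost sorted list" described above.
    for obj, center, radius in volumes:
        if hit_sphere(origin, direction, center, radius):
            t = intersect_object(obj, origin, direction)  # exact test, only on a hit
            if t is not None:
                return obj, t
    return None  # the ray missed every bounding volume
```

The point of the structure: the (expensive) exact object intersection is only ever computed for bounding volumes the ray actually enters, and usually only for the first one or two.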

So, you end up with something like this for scanline rendering:

Let's assume you have a scene with 50 million triangles within the view
frustum in one frame (which is about as many triangles as scanline
rendering hardware can draw per second today - and I am talking about
realistic numbers, not the theoretical maximum marketing people list
somewhere).

Let's further assume we have a screen with about one megapixel, say
1152*870 pixels (a format not too common today, but it was years ago).

So, on average 50 triangles contribute to each pixel.  In fact, assuming
an average triangle size of 10 pixels, there are 500 triangle fragments
per pixel, but let's ignore that number for now - the 50 triangles are
already bad enough.  So, for every one of those triangles you will have to
perform at least one point transformation (assuming every triangle shares
two points with other triangles - in practice they share fewer than two on
average), which is a matrix multiplication plus a transformation into
screen coordinates (to get your triangles from 3D to 2D).

Let's assume this computation takes 20 nanoseconds (ns), which is
realistic on today's hardware.  So, 50 * 20 ns = 1000 ns for the 50
triangles of one pixel - and thus one second for all 50 million triangles.
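Checking that arithmetic, with the figures taken straight from the estimate above:

```python
triangles = 50_000_000   # triangles in the view frustum
transform_ns = 20        # one matrix multiply + screen projection
pixels = 1_000_000       # ~one megapixel screen

per_pixel = triangles // pixels            # 50 triangles per pixel
per_pixel_ns = per_pixel * transform_ns    # 1000 ns of transforms per pixel
frame_s = triangles * transform_ns / 1e9   # 1.0 s for the whole frame
```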

And for ray-tracing, using only simple algorithms, without (!!!) advanced
algorithms that do better than plain bounding volumes, you get:

Assume that the 50 million triangles belong to 1000 objects with 50000
triangles each.  So we end up with 1000 bounding volumes, which will take
no more than 1000 * lg 1000 = (about) 10000 steps to sort by distance from
the viewpoint.  Let's assume every step takes 10 ns (probably it takes
less), so the sort takes 100000 ns.  Let's also ignore that it took
another 100 ns or so per bounding volume to determine its distance from
the viewpoint.  Thus, the sorting takes 1/10000 of a second, and we will
ignore it for now because the really important computation follows.
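The same numbers in code - note that 100,000 ns works out to 1/10000 of a second:

```python
import math

n = 1000                       # bounding volumes to sort
steps = n * math.log2(n)       # ~9966 comparison steps, "about 10000"
sort_ns = 10_000 * 10          # 10 ns per step -> 100,000 ns
sort_s = sort_ns / 1e9         # 1e-4 s, i.e. 1/10000 of a second
```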

Compute the intersection of a ray with the bounding volume.  Let's assume
for simplicity that this volume is a sphere, and that the intersection
test takes 20 ns on average (which is fairly realistic).  But how many
tests will we have to compute on average?  This is the really important
point, and it depends on the size of the objects.  Remember we assumed
before that every triangle covers about 10 visible pixels?  Let's now also
assume that objects have a volume, that is, they have two sides.  So we
end up with about 50000 triangles per object * 10 pixels per triangle =
500000 pixels per object, and half of those are not facing us, thus
250000 pixels is the average size of an object.  This yields about 250
objects overlapping each pixel.  Thus, it takes about four intersection
tests with bounding volumes to find the first one that is intersected
(fewer, actually, as bounding volumes won't be optimal).  Let's assume we
always miss the first object and have to continue, doubling that to 8
tests on average: 20 ns * 8 = 160 ns of bounding-volume intersection
tests.  Taking the scanline renderer's 1000 ns per pixel as the budget to
beat, that leaves about 840 ns for finding the actual intersection with
one of the 50000 triangles.  This is possible because the triangles are
again separated into several bounding volumes.  I am not going to go into
detail here, but you can find many algorithms on the internet.  Let's
assume the bounding hierarchy splits the space into even boxes, 16 * 16 *
16 = 4096 of them; this yields on average 12 triangles per box, and we can
walk the boxes following a 3D line (which costs us essentially nothing).
Assuming an intersection test takes 30 ns on average (triangle
intersection is a bit more complex, but allows early bailout if no
intersection can occur), we can do 840 ns / 30 ns = 28 intersection tests
per pixel.
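Putting that per-pixel budget together, again with the figures from above:

```python
bv_tests = 8       # bounding-sphere tests per pixel (pessimistic)
bv_ns = 20         # cost per sphere test
tri_ns = 30        # cost per triangle test
budget_ns = 1000   # the scanline renderer's per-pixel cost

bv_total = bv_tests * bv_ns                    # 160 ns on bounding volumes
tri_tests = (budget_ns - bv_total) // tri_ns   # 28 triangle tests fit in the rest
boxes = 16 ** 3                                # 4096 grid cells
tris_per_box = 50_000 / boxes                  # ~12 triangles per cell
```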

In conclusion, taking just the transformations, and ignoring the fact that
the scanline renderer will have to test 500 million pixel fragments for
visibility against the z-buffer, ray-tracing will already be faster at
finding the intersection - without having to perform those 500 million
comparisons, which take at least 1/5 of a second and generate 1.9 GB of
memory traffic, plus another 1.9 GB for storing the pixels and overwriting
them whenever a closer one is found, plus the 0.8 GB it takes to read the
triangles.  That gives a total of 4.6 GB.  Texturing and so on isn't even
included.

For the plain pixel writes, ray-tracing will only generate 4 MB of memory
traffic.  The bounding-volume tests will require 8 spheres * 16 bytes
(position plus radius of the sphere) per pixel, plus 28 triangles * 36
bytes (three points); together that yields only about 1.1 GB of memory
traffic, roughly a quarter of what the scanline renderer needed.
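Recomputing those traffic figures: 8 * 16 + 28 * 36 = 1136 bytes per pixel, which over a megapixel screen comes to about 1.1 GB.

```python
scanline_gb = 1.9 + 1.9 + 0.8        # z-tests + pixel writes + triangle reads
pixels = 1_152 * 870                  # ~one megapixel screen
per_pixel_b = 8 * 16 + 28 * 36        # sphere tests + triangle tests = 1136 bytes
ray_gb = pixels * per_pixel_b / 1e9   # ~1.14 GB, about a quarter of 4.6 GB
```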

So, as you can see (if you managed to read this far), as scene complexity
rises, ray-tracing becomes more economical, even for things like triangle
meshes.

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org


