In article <3ea07c18$1@news.povray.org>,
Simon Adameit <sim### [at] gaussschule-bs de> wrote:
> > No, here you are wrong. It simply happens that CSG is harder to do if you
> > don't use plain ray-tracing.
>
> I think even if people used plain raytracing, they wouldn't use CSG as
> their primary modelling method.
Most modellers use polygon representations to provide a real-time
preview of the scene and faster, lower-quality scanline rendering, so
of course when they raytrace they use the same data. CSG of primitives
is generally faster and uses less memory, but the primitives are
harder to preview and scanline render; you generally have to
tessellate them first. If they used pure raytracing, they would use
CSG of primitives unless meshes were absolutely necessary, since that
would be the most efficient way to do the job.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
Christopher James Huff wrote:
> In article <3ea07c18$1@news.povray.org>,
> Simon Adameit <sim### [at] gaussschule-bs de> wrote:
>
>
>>>No, here you are wrong. It simply happens that CSG is harder to do if you
>>>don't use plain ray-tracing.
>>
>>I think even if people used plain raytracing, they wouldn't use CSG as
>>their primary modelling method.
>
>
> Most modellers use polygon representations to provide a real-time
> preview of the scene and faster, lower-quality scanline rendering, so
> of course when they raytrace they use the same data. CSG of primitives
> is generally faster and uses less memory, but the primitives are
> harder to preview and scanline render; you generally have to
> tessellate them first. If they used pure raytracing, they would use
> CSG of primitives unless meshes were absolutely necessary, since that
> would be the most efficient way to do the job.
>
It's just that most of the time meshes are absolutely necessary.
Primitive CSG is too limited in the kinds of objects you can create,
so it would be used, but probably not much. I think the reason CSG is
used so much in POV is that it's easy to create objects this way using
the SDL. And pure raytracing doesn't mean that you are not able to
model and preview in real-time.
In article <3ea19c47@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
> And pure raytracing doesn't mean that you are not able to model and
> preview in real-time.
Well, if the preview excludes everything that makes ray-tracing what
it is (most importantly the recursive part of the process), then sure
- but then you can just as well use other approaches, which will
actually produce better results more quickly for everything but the
most complex scenes.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
Thorsten Froehlich wrote:
> In article <3ea19c47@news.povray.org> , Simon Adameit
> <sim### [at] gaussschule-bs de> wrote:
>
>
>>And pure raytracing doesn't mean that you are not able to model and
>>preview in real-time.
>
>
> Well, if the preview excludes everything (most importantly the recursive
> part of the process) that makes ray-tracing what it is, sure, but then you
> can just as well use other approaches which will actually produce better
> results more quickly for everything but the most complex scenes.
>
But is it still pure raytracing? ;-)
Regarding your last comment, how does raytracing compare with other
techniques as complexity grows?
In article <3ea1d418$1@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
> Regarding your last comment, how does raytracing compare with other
> techniques as complexity grows?
In short:
With scanline rendering you have to draw every object potentially
intersecting the view frustum in order to determine its visibility.
That is, the bounding volumes a scene contains are only good for
determining whether objects (actually, their bounding volumes)
intersect the view frustum. Then you have to transform and draw those
objects. As objects can only consist of polygons, you have to
transform all those points (or build the mesh on the fly, which is
even more work), and then start drawing the triangles. It may turn out
that only a tiny fraction of those triangles is really visible: the
more objects are in the scene, the more of the pixels drawn will in
fact be hidden behind others.
With ray-tracing you can use bounding everywhere. Once you have
identified the object with the closest bounding volume, you calculate
the intersection. If there is none, you pick the object with the
second-closest bounding volume (which you already have, as you found
it while searching for the closest one), and so on. There are many
ways to cheaply determine which bounding volumes are closest: certain
algorithms store the bounding information such that you get an almost
sorted list of only the bounding volumes within the view frustum, or
you can use other data structures to represent the scene.
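The near-to-far traversal described above can be sketched like this.
This is only a minimal illustration of the idea: the `Sphere` type,
the ray layout, and the per-object exact test are my assumptions, not
code from any particular renderer, and the sketch simply accepts the
first hit found in bounding-volume order (real bounding volumes can
overlap, so production code needs a little more care).

```python
import math
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float
    cy: float
    cz: float
    r: float

def hit_sphere(s, ox, oy, oz, dx, dy, dz):
    """Distance to the nearest ray/sphere hit, or None (origin o, unit direction d)."""
    lx, ly, lz = s.cx - ox, s.cy - oy, s.cz - oz
    tca = lx * dx + ly * dy + lz * dz             # center projected onto the ray
    d2 = lx * lx + ly * ly + lz * lz - tca * tca  # squared distance of center from ray
    if d2 > s.r * s.r:
        return None
    t = tca - math.sqrt(s.r * s.r - d2)           # nearer of the two intersections
    return t if t > 0 else None

def closest_hit(objects, ox, oy, oz, dx, dy, dz):
    """Try bounding spheres in near-to-far order; accept the first exact hit."""
    # Sort the candidates once by squared distance of their bounding volume.
    ordered = sorted(objects, key=lambda o: (o["bound"].cx - ox) ** 2
                                          + (o["bound"].cy - oy) ** 2
                                          + (o["bound"].cz - oz) ** 2)
    for obj in ordered:
        if hit_sphere(obj["bound"], ox, oy, oz, dx, dy, dz) is None:
            continue                      # the ray misses this bounding volume
        t = obj["intersect"](ox, oy, oz, dx, dy, dz)  # exact per-object test
        if t is not None:
            return obj, t                 # first hit in this order is taken as visible
    return None, None
```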
So you end up with something like this for scanline rendering:
Let's assume you have a scene with 50 million triangles within the
view frustum in one frame (which is about as many triangles as
scanline rendering hardware can draw per second today - and I am
talking about realistic numbers, not the theoretical maximum marketing
people list somewhere). Let's further assume we have a screen with
about one megapixel, e.g. 1152*870 pixels (a resolution not too common
today, but it was a few years ago). So on average 50 triangles
contribute to each pixel. In fact, assuming an average triangle size
of 10 pixels, there are 500 triangle-pixels per pixel, but let's
ignore that number for now - the 50 triangles are already bad enough.
So for every triangle you will have to perform at least one
transformation (assuming every triangle has two points in common with
other triangles - this isn't the case, they will have fewer than two
in common on average), which is a matrix multiplication plus a
transformation into screen coordinates (to get your triangles from 3D
to 2D).
Let's assume this computation takes 20 nanoseconds (ns), which is
realistic on hardware today. So 50 * 20 ns = 1000 ns for the 50
triangles in a pixel, and thus one second for all 50 million
triangles.
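The back-of-the-envelope numbers above can be reproduced mechanically;
every constant below is an assumption stated in the text, not a
measurement:

```python
# Scanline estimate: transform cost alone for one frame.
triangles = 50_000_000           # triangles inside the view frustum
pixels = 1152 * 870              # "about one megapixel"
transform_ns = 20                # assumed cost of one vertex transform

tris_per_pixel = triangles / pixels               # ~50 triangles touch each pixel
per_pixel_ns = tris_per_pixel * transform_ns      # ~1000 ns of transforms per pixel
total_seconds = triangles * transform_ns * 1e-9   # one transform per triangle

print(round(tris_per_pixel), round(per_pixel_ns), total_seconds)
```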
And for ray-tracing, using only simple algorithms, without (!!!)
advanced algorithms that do better than plain bounding volumes, you
get:
Assume the 50 million triangles belong to 1000 objects with 50000
triangles each. So we end up with 1000 bounding volumes, which will
take no more than 1000 * lg 1000 = (about) 10000 steps to sort by
distance from the viewpoint. Let's assume every step takes 10 ns
(probably it takes less), so the sort takes 100000 ns. Add another
100 ns or so per volume to determine its distance from the viewpoint,
and all of this takes about 1/5000 of a second - we will ignore it for
now, because the really important computation follows.
Compute the intersection of a ray with the bounding volume. Let's
assume for simplicity that this volume is a sphere, and that the
intersection test takes 20 ns on average (which is fairly realistic).
But how many will we have to compute on average? This is the really
important point, and it depends on the size of the objects. Remember
we assumed before that every triangle covers about 10 visible pixels?
Let's now also assume that objects have a volume, that is, they have
two sides. So we end up with about 50000 triangles per object * 10
pixels per triangle = 500000 pixels per object, and half of them are
not facing us, thus 250000 pixels is the average size of an object.
This yields about 250 objects overlapping each pixel, so on average
one in four of the 1000 bounding volumes covers any given pixel, and
it takes about four intersection tests with bounding volumes to find
the first one that is intersecting (fewer actually, as bounding
volumes won't be optimal). Let's assume we always miss the object
inside the first intersecting volume and have to continue to the next,
doubling the count to eight: 20 ns * 8 = 160 ns of bounding-volume
tests per pixel. That leaves about 840 ns of the scanline renderer's
per-pixel budget for finding the actual intersection with one of the
50000 triangles. This is possible because the triangles are again
separated into several bounding volumes. I am not going to go into
detail here, but you can find many algorithms on the internet. Let's
assume the bounding hierarchy splits the space into a uniform grid of
16 * 16 * 16 = 4096 boxes; this yields on average 12 triangles per
box, and we can walk the boxes along a 3D line (which costs us
essentially nothing). Assuming an intersection test takes 30 ns on
average (triangle intersection is a bit more complex, but allows early
bailout if no intersection can occur), we can do 840 ns / 30 ns = 28
intersection tests per pixel.
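The ray-tracing side of the estimate checks out the same way (again,
all constants are the text's assumptions):

```python
import math

# Ray-tracing estimate with plain bounding volumes, per the text.
objects = 1000
tris_per_object = 50_000
step_ns, sphere_ns, tri_ns = 10, 20, 30
pixels = 1_000_000               # "about one megapixel", as above

# One-time sort of the bounding volumes by distance: n * lg n steps.
sort_ns = objects * math.log2(objects) * step_ns   # roughly 100,000 ns per frame

# Visible coverage: 10 pixels per triangle, half the triangles facing away.
pixels_per_object = tris_per_object * 10 // 2      # 250,000 pixels per object
depth = objects * pixels_per_object // pixels      # ~250 volumes over each pixel

bound_tests = 8                                    # ~4 to find a hit, doubled for one miss
budget_ns = 1000 - bound_tests * sphere_ns         # 840 ns left of the per-pixel budget
tri_tests = budget_ns // tri_ns                    # affordable triangle tests per pixel

print(depth, budget_ns, tri_tests)
```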
In conclusion, just counting the transformations, and ignoring the
fact that the scanline renderer will have to test 500 million pixel
candidates for visibility by comparing each against the z-buffer,
ray-tracing will already be faster at finding the intersection -
without having to perform those 500 million comparisons, which take
1/5 of a second for sure and generate 1.9 GB of memory traffic, plus
another 1.9 GB for storing the pixels and overwriting them when a
closer one is found, plus the 0.8 GB it takes to read the triangles.
That gives a total of 4.6 GB; texturing and so on isn't even included.
For the plain drawing, ray-tracing will only generate 4 MB of memory
traffic. The bounding-volume tests will require 8 spheres * 16 bytes
(position plus radius of the sphere) per pixel, plus 28 triangles * 36
bytes (three points each), yielding roughly 1.6 GB of memory traffic -
about a third of what the scanline renderer needed.
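The traffic comparison can be modelled the same way. Note that the
4-byte depth value below is my assumption (the text does not state
one), so the reconstructed totals land near, rather than exactly on,
the figures quoted above:

```python
GB = 1e9

# Scanline side: 500 million z-buffer tests, assuming 4-byte depth values.
z_read_gb = 500e6 * 4 / GB       # ~2 GB of depth reads for the comparisons
z_write_gb = 500e6 * 4 / GB      # up to ~2 GB of writes when closer pixels land
tri_read_gb = 0.8                # figure quoted above for streaming the triangles

# Ray-tracing side, per pixel: 8 bounding spheres plus 28 candidate triangles.
bytes_per_pixel = 8 * 16 + 28 * 36         # sphere = 16 bytes, triangle = 36 bytes
rt_gb = bytes_per_pixel * 1_000_000 / GB   # for about a megapixel of rays

print(round(z_read_gb + z_write_gb + tri_read_gb, 1), bytes_per_pixel, round(rt_gb, 2))
```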
So, as you can see (if you managed to read this far), as scene
complexity rises, ray-tracing gets more economical, even for things
like triangle meshes.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
Thorsten Froehlich wrote:
> In article <3ea1d418$1@news.povray.org> , Simon Adameit
> <sim### [at] gaussschule-bs de> wrote:
>
>
>>Regarding your last comment, how does raytracing compare with other
>>techniques as complexity grows?
>
>
> In short:
>
Thanks for your detailed explanation.
> So, as you can see (if you managed to read on to here), if the scene
> complexity rises, ray-tracing gets more economical, even for things like
> triangle meshes.
>
Unfortunately, complexity is limited by memory.
Simon Adameit wrote:
> Thorsten Froehlich wrote:
>> In article <3ea1d418$1@news.povray.org> , Simon Adameit
>> <sim### [at] gaussschule-bs de> wrote:
>>
>>
>>>Regarding your last comment, how does raytracing compare with other
>>>techniques as complexity grows?
>>
>>
>> In short:
>>
>
> Thanks for your detailed explanation.
>
>> So, as you can see (if you managed to read on to here), if the scene
>> complexity rises, ray-tracing gets more economical, even for things like
>> triangle meshes.
>>
>
> Unfortunately complexity is limited by memory.
I'm not familiar with rendering algorithms, but aren't there rendering
systems available (e.g. Pixar's RenderMan, which is a hybrid renderer)
which are able to stream the input so that memory is not the limiting
factor?
Andreas
--
http://www.render-zone.com
Andreas Kreisig wrote:
>
> I'm not familiar with rendering algorithms, but aren't there rendering
> systems available (e.g. Pixar's RenderMan, which is a hybrid renderer)
> which are able to stream the input so that memory is not the limiting
> factor?
>
> Andreas
>
Pixar's RenderMan uses the REYES algorithm. AFAIK it is much like
scanline rendering, but it dices objects into grids of sub-pixel
micropolygons, which makes detailed displacement mapping possible. Due
to its nature, though, it cannot render realistic shadows, reflections,
and refractions.
There are also some approaches to memory-coherent raytracing (normal
raytracing always requires access to any part of the scene), but from
what I know they only work when you don't have complex lighting like
radiosity or hundreds of light sources.
Complexity in raytracing can also be achieved with things other than
meshes, such as isosurfaces for detailed landscapes.
In article <3ea29431@news.povray.org> , Andreas Kreisig <and### [at] gmx de>
wrote:
> I'm not familiar with rendering algorithms, but aren't there rendering
> systems available (e.g. Pixar's RenderMan, which is a hybrid renderer)
> which are able to stream the input so that memory is not the limiting
> factor?
Of course you can always parallelize either process, but random memory
access does not scale well at all. Memory access is the limit even for
ray-tracing, and for scanline rendering hardware it is the reason why
graphics memory is much faster than main system memory. But there are
limits to memory speed, which are physical and lie somewhere around
50-100 times what is possible today (because the signal can't travel
faster than light). So those absolute limits will be reached really
soon for classic integrated circuits; already the high-frequency
characteristics of memory bus signals cause a lot of problems...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <3ea29335@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
>> In short:
>
> Thanks for your detailed explanation.
Actually, I was serious that this was the short version. In reality the
whole issue is even more complicated, and very hard to describe, let alone
predict exactly :-(
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org