Thorsten Froehlich wrote:
> In article <3ea1d418$1@news.povray.org> , Simon Adameit
> <sim### [at] gaussschule-bs de> wrote:
>
>
>>Regarding your last comment, how does raytracing compare with other
>>techniques when complexity grows?
>
>
> In short:
>
Thanks for your detailed explanation.
> So, as you can see (if you managed to read on to here), if the scene
> complexity rises, ray-tracing gets more economical, even for things like
> triangle meshes.
>
Unfortunately, complexity is limited by memory.
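The trade-off being discussed can be illustrated with a toy cost model (my own sketch, not from the thread): without an acceleration structure a ray must be tested against every object, while with a bounding-volume hierarchy a ray typically visits only O(log n) nodes, which is why ray-tracing scales well as scene complexity grows.

```python
import math

def brute_force_tests(n_objects):
    """Without an acceleration structure, each ray is tested
    against every object in the scene."""
    return n_objects

def bvh_tests(n_objects, branching=2):
    """With a bounding-volume hierarchy, a ray typically visits
    on the order of log(n) nodes on its way to the nearest hit."""
    return max(1, math.ceil(math.log(n_objects, branching)))

for n in (1_000, 1_000_000):
    print(f"{n:>9} objects: {brute_force_tests(n):>9} vs {bvh_tests(n):>3} tests")
```

A thousandfold increase in object count costs a thousand times more work for the brute-force approach, but only about ten extra node visits per ray with the hierarchy.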
Simon Adameit wrote:
> Thorsten Froehlich wrote:
>> In article <3ea1d418$1@news.povray.org> , Simon Adameit
>> <sim### [at] gaussschule-bs de> wrote:
>>
>>
>>>Regarding your last comment, how does raytracing compare with other
>>>techniques when complexity grows?
>>
>>
>> In short:
>>
>
> Thanks for your detailed explanation.
>
>> So, as you can see (if you managed to read on to here), if the scene
>> complexity rises, ray-tracing gets more economical, even for things like
>> triangle meshes.
>>
>
> Unfortunately complexity is limited by memory.
I'm not familiar with rendering algorithms, but aren't there any rendering
systems available (e.g. Pixar's RenderMan, which is a hybrid renderer) which
are able to pipe the input so that memory is not the limiting factor?
Andreas
--
http://www.render-zone.com
Andreas Kreisig wrote:
>
> I'm not familiar with rendering algorithms, but aren't there any rendering
> systems available (e.g. Pixar's RenderMan, which is a hybrid renderer) which
> are able to pipe the input so that memory is not the limiting factor?
>
> Andreas
>
Pixar's RenderMan uses the REYES algorithm. AFAIK it is much like
scanline rendering, but it dices objects into grids of sub-pixel
micropolygons, which makes detailed displacement mapping possible.
Due to its nature, though, it cannot render realistic shadows, reflections,
and refractions.
There are also some approaches to memory-coherent raytracing (normal
raytracing always requires access to any part of the scene), but from
what I know they only work when you don't have complex lighting like
radiosity or hundreds of light sources.
But complexity in raytracing can also be achieved with things other than
meshes, such as isosurfaces for detailed landscapes.
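The split-and-dice idea behind REYES can be sketched roughly like this (a toy illustration of the principle only, not Pixar's actual algorithm; `max_grid` and `shading_rate` are made-up parameters):

```python
# Toy sketch of the REYES "split and dice" idea: a primitive is split
# until its estimated on-screen size is small enough, then diced into a
# grid of micropolygons roughly one per sub-pixel shading sample.

def split_and_dice(screen_size, max_grid=16, shading_rate=1.0):
    """Return how many micropolygon grids a primitive of the given
    screen size (in pixels along one edge) ends up diced into."""
    if screen_size / shading_rate <= max_grid:
        return 1                      # small enough: dice into one grid
    half = screen_size / 2.0          # otherwise split into four quadrants
    return 4 * split_and_dice(half, max_grid, shading_rate)

print(split_and_dice(16))   # fits directly into one grid -> 1
print(split_and_dice(64))   # split twice, 4 x 4 grids -> 16
```

Because each grid is bounded, shaded, and discarded before the next one is touched, the renderer never needs the whole diced scene in memory at once, which is what makes the streaming behavior discussed in this thread possible.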
In article <3ea29431@news.povray.org> , Andreas Kreisig <and### [at] gmx de>
wrote:
> I'm not familiar with rendering algorithms, but aren't there any rendering
> systems available (e.g. Pixar's RenderMan, which is a hybrid renderer) which
> are able to pipe the input so that memory is not the limiting factor?
Of course you can always parallelize either process, but random memory
access does not scale well at all. Memory access is the limit even for
ray-tracing, and for scanline rendering hardware, memory access is the
reason why graphics memory is much faster than main system memory. But
there are limits to memory speed, which are physical and lie somewhere
around 50-100 times what is possible today (because a signal can't
travel faster than light). So those absolute limits will be reached quite
soon for classic integrated circuits; and already the high-frequency
characteristics of memory bus signals cause a lot of problems...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
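The speed-of-light argument above can be checked with some back-of-envelope arithmetic (my own, not from the post): during one clock cycle at multi-GHz frequencies, a signal can cross only a few centimetres even in the ideal case.

```python
# How far can any signal travel during one clock cycle?
# Upper bound: the speed of light in vacuum.
c = 299_792_458.0                            # metres per second

for freq_ghz in (0.4, 3.0, 10.0):
    cycle_time = 1.0 / (freq_ghz * 1e9)      # seconds per clock cycle
    distance_cm = c * cycle_time * 100.0     # centimetres per cycle
    print(f"{freq_ghz:5.1f} GHz -> at most {distance_cm:6.2f} cm per cycle")
```

At 3 GHz the round trip to a memory chip a few centimetres away already eats a meaningful fraction of a cycle, which is why bus frequencies cannot simply keep climbing.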
In article <3ea29335@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
>> In short:
>
> Thanks for your detailed explanation.
Actually, I was serious that this was the short version. In reality the
whole issue is even more complicated, and very hard to describe, let alone
predict exactly :-(
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <3ea2a059@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
> But complexity in raytracing can also be achieved with things other than
> meshes, such as isosurfaces for detailed landscapes.
Actually, landscapes are one of the few things for which extremely efficient
algorithms exist that are very good at limiting complexity using dynamic
level-of-detail methods. Many of these were invented because the military
needed flight simulators* that were realistic long before hardware could
solve the problem without clever algorithms...
Thorsten
* Realism was reached by the mid-1980s, or at least that was when it was
presented at SIGGRAPH for the first time.
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
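A crude sketch of the dynamic level-of-detail idea (purely illustrative; real terrain LOD schemes in flight simulators are far more refined, and `base_resolution` and `lod_distance` are made-up parameters):

```python
# Distance-based level-of-detail pick for terrain tiles: far-away tiles
# get coarse meshes, nearby ones get fine meshes, so total triangle count
# stays roughly constant no matter how large the terrain is.

def pick_lod(distance, base_resolution=1024, lod_distance=100.0):
    """Halve the tile resolution every time the viewer distance doubles."""
    res = base_resolution
    while distance > lod_distance and res > 16:
        res //= 2
        distance /= 2.0
    return res

print(pick_lod(50.0))     # close tile: full resolution
print(pick_lod(1600.0))   # far tile: heavily decimated
```

The key property is that the work per frame depends on what the viewer can resolve, not on the total size of the dataset, which is exactly how complexity gets limited.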
Simon Adameit wrote:
> Pixar's RenderMan uses the REYES algorithm. AFAIK it is much like
> scanline rendering, but it dices objects into grids of sub-pixel
> micropolygons, which makes detailed displacement mapping possible.
> Due to its nature, though, it cannot render realistic shadows,
> reflections, and refractions.
As far as I know, the latest PRMan release can do both: REYES-based rendering
and raytracing. It uses selective raytracing, which is only enabled when
it is needed.
Andreas
--
http://www.render-zone.com
Thorsten Froehlich wrote:
> In article <3ea29431@news.povray.org> , Andreas Kreisig <and### [at] gmx de>
> wrote:
>
>> I'm not familiar with rendering algorithms, but aren't there any
>> rendering systems available (e.g. Pixar's RenderMan, which is a hybrid
>> renderer) which are able to pipe the input so that memory is not the
>> limiting factor?
>
> Of course you can always parallelize either process, but random memory
> access does not scale well at all. Memory access is the limit even for
> ray-tracing, and for scanline rendering hardware, memory access is the
> reason why graphics memory is much faster than main system memory.
Well, I mean render engines which don't have to parse the whole scene. They
can do it in parts, and when one part is rendered it leaves the pipe (and so
frees its memory). I don't know how they realize this, but I read an
article about it where they rendered test scenes with an astronomical
number of vertices or triangles.
Andreas
--
http://www.render-zone.com
Thorsten Froehlich wrote:
>
> > But complexity in raytracing can also be achieved with things other than
> > meshes, such as isosurfaces for detailed landscapes.
>
> Actually, landscapes are one of the few things for which extremely efficient
> algorithms exist that are very good at limiting complexity using dynamic
> level-of-detail methods. Many of these were invented because the military
> needed flight simulators* that were realistic long before hardware could
> solve the problem without clever algorithms...
Reducing the problem of generating realistic terrain display to the
requirements of 'flight simulator like' applications (meaning low
resolution, large area, possibly realtime display of heightfield data)
does not cover this problem sufficiently. As I have mentioned elsewhere
in this thread, terrain geometry is a very good example of a problem where
procedural geometry can clearly be superior to the classical hand-made
mesh approach, because terrain geometry is very complex but can be fairly
well described algorithmically.
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 28 Feb. 2003 _____./\/^>_*_<^\/\.______
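The point about algorithmically described terrain can be illustrated with the classic midpoint-displacement method, a simple procedural generator that yields arbitrarily detailed profiles from a handful of parameters (an illustrative sketch, not code from the thread):

```python
import random

def midpoint_displacement(steps=8, roughness=0.5, seed=42):
    """Generate a 1-D fractal terrain profile: repeatedly insert midpoints
    between neighbours and perturb each one by a random offset whose
    amplitude shrinks every iteration."""
    random.seed(seed)                 # fixed seed: reproducible terrain
    heights = [0.0, 0.0]
    scale = 1.0
    for _ in range(steps):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + random.uniform(-scale, scale)
            refined.extend([a, mid])
        refined.append(heights[-1])
        heights = refined
        scale *= roughness            # finer detail, smaller bumps
    return heights

profile = midpoint_displacement()
print(len(profile))   # 2**8 + 1 = 257 samples from 3 parameters
```

A whole landscape is described by three numbers and a seed rather than by megabytes of hand-made mesh data, which is exactly the trade-off procedural geometry offers.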
In article <3ea29335@news.povray.org>, sim### [at] gaussschule-bs de says...
> Thorsten Froehlich wrote:
> > In article <3ea1d418$1@news.povray.org> , Simon Adameit
> > <sim### [at] gaussschule-bs de> wrote:
> >
> >
> >>Regarding your last comment, how does raytracing compare with other
> >>techniques when complexity grows?
> >
> >
> > In short:
> >
>
> Thanks for your detailed explanation.
>
> > So, as you can see (if you managed to read on to here), if the scene
> > complexity rises, ray-tracing gets more economical, even for things like
> > triangle meshes.
> >
>
> Unfortunately complexity is limited by memory.
>
>
It should be noted that, with games at least, some rendering engines
combine features of scanline and raytrace. In general, the difference is
often accuracy and effects. A triangle-based sphere needs to somehow gain
triangles the closer you get to it, so it will never be as accurate as a
raytraced one. Obvious of course, but the same issue exists with 'any'
mesh you use. The other factor is that you either use the hardware
acceleration, or calculate nearly everything in your program anyway, since
the cards can't do complex things like refraction, reflection, etc. They
just slap triangles on the screen and slap a texture over them. Some older
cards couldn't even do this right. lol
Because of this, a lot of games speed things up when they have huge meshes
by z-buffering them internally into planes. In other words, they chop up the
existing meshes into layers that will get sent to the card, use something
like a raytracer's bounding to eliminate any triangles that you won't see,
and then dump the rest to the card, or lacking a card, through a scanline
system that just draws everything and maps a texture to it (which is more
or less what Doom did). If games don't do this, then it is quite
silly, since even if the cards themselves perform something similar,
there is, as you say, a distinct limit to how much junk you can hand even
the best OpenGL or DirectX card before you run out of memory, especially
since you also have to dump huge textures to the card as well. On a
modern machine you could very nearly produce a frame rate as good as or
better than any 3D card by using pure raytracing and procedural textures,
where you only used meshes when needed, and even that isn't a 'major'
slowdown.
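The 'raytracer-style bounding' test mentioned here can be sketched as a simple axis-aligned box overlap check (an illustrative toy, not any engine's actual code; the view volume is simplified to a box rather than a true frustum):

```python
# Cull whole mesh chunks before sending them to the card: discard any
# chunk whose axis-aligned bounding box lies outside the view volume.

def aabb_visible(box_min, box_max, view_min, view_max):
    """True if the two axis-aligned boxes overlap on every axis."""
    return all(bmin <= vmax and bmax >= vmin
               for bmin, bmax, vmin, vmax in zip(box_min, box_max,
                                                 view_min, view_max))

view = ((-10, -10, 0), (10, 10, 100))        # simplified view volume
chunks = {
    "near":   ((-1, -1, 5), (1, 1, 7)),      # in front of the camera
    "behind": ((-1, -1, -20), (1, 1, -5)),   # behind the camera
}
visible = [name for name, (lo, hi) in chunks.items()
           if aabb_visible(lo, hi, view[0], view[1])]
print(visible)   # only the chunk in front of the camera survives
```

One cheap box test per chunk can reject thousands of triangles at once, which is why the bounding pass pays for itself long before the card sees any geometry.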
The only real issue is structuring the data in a way that lets you load
it fast. Some level of persistence of objects from frame to frame, so you
only parsed 'new' objects or transitional information, would pretty
well vaporize that issue. Considering that high-poly action scenes in
stuff like the Xbox's Yu-Gi-Oh: War of the Roses game load so insanely
slowly, from what I hear, that people turn them off, generating each
animation frame individually with POV-Ray, even with its existing
per-frame parse time, would probably take less time than 'loading' the
frames does now. ;) I would say that no matter how much they improve
the cards themselves, the main issue is now becoming how to get the
bloody data to them in the first place, and the way they work actually
makes that significantly harder to do, no matter what the frame rate per
poly-count now is. ;) lol
But that gets slightly off topic. ;) Knowing how poly-based systems work,
I suspect that the in-program views use scanline, but any such tool that
claims to be able to export the result will use more conventional
raytracing to produce a final result. A few even say something to that
effect in some obscure corner of the docs, but since the main problem is
almost always the obvious mesh nature of its objects when seen too
closely, or just plain at the wrong angle....
--
void main () {
call functional_code()
else
call crash_windows();
}