In article <3ea2a059@news.povray.org> , Simon Adameit
<sim### [at] gaussschule-bs de> wrote:
> But complexity in raytracing can also be achieved with other things than
> meshes, like isosurfaces for detailed landscapes.
Actually, landscapes are one of the few things for which extremely efficient
algorithms exist that are very good at limiting complexity using dynamic
level-of-detail methods. Many of these were invented because the military
needed flight simulators* that were realistic long before hardware could
solve the problem without clever algorithms...
Thorsten
* Realism was reached by the mid-1980s, or at least that was when it was
presented at SIGGRAPH for the first time.
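The dynamic level-of-detail idea above can be sketched in a few lines: pick the coarsest tessellation level whose projected geometric error stays below a screen-space tolerance. All constants and the error-halving rule here are illustrative assumptions, not taken from any particular simulator:

```python
def lod_level(distance, base_error=16.0, screen_tolerance=2.0, max_level=8):
    # Each finer level is assumed to halve the geometric error of a
    # terrain tile.  Pick the coarsest level whose projected error stays
    # under the screen-space tolerance (in pixels) -- the core idea
    # behind dynamic terrain LOD schemes.
    level = 0
    error = base_error
    while level < max_level:
        projected = error / max(distance, 1e-6)  # crude perspective scaling
        if projected <= screen_tolerance:
            break
        error /= 2.0
        level += 1
    return level

# Nearby terrain gets more detail than distant terrain:
near = lod_level(distance=1.0)
far = lod_level(distance=100.0)
```

Because the chosen level depends only on viewing distance and tolerance, most of the terrain can stay very coarse at any given moment, which is how these schemes limit complexity.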
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
Simon Adameit wrote:
> Pixar's RenderMan uses the REYES algorithm. AFAIK it's much like
> scanlining, but it divides the objects into a grid of sub-pixel
> micropolygons, which makes detailed displacement mapping possible.
> But due to its nature it cannot render realistic shadows, reflections
> and refractions.
As far as I know, the latest PRMan release can do both: REYES-based rendering
and raytracing. It uses selective raytracing, which will only be enabled when
it is needed.
Andreas
--
http://www.render-zone.com
Thorsten Froehlich wrote:
> In article <3ea29431@news.povray.org> , Andreas Kreisig <and### [at] gmx de>
> wrote:
>
>> I'm not familiar with rendering algorithms, but aren't there any
>> rendering systems available (e.g. Pixar's RenderMan, which is a hybrid
>> renderer) which are able to pipe the input so that memory is not the
>> limiting factor?
>
> Of course you can always parallelize either process, but random memory
> access does not scale well at all. Memory access is the limit even for
> ray-tracing, and for scanline rendering hardware, memory access is the
> reason why graphics card memory is much faster than main system memory.
Well, I mean render engines which don't have to parse the whole scene. They
can do it in parts, and when one part is rendered it leaves the pipe (and so
frees the memory). I don't know how they realize this, but I read an
article about it, and they rendered test scenes with an astronomical number
of vertices or triangles.
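A minimal sketch of the "render in parts, free the memory" idea described above, in the style of bucket (tile) rendering. All function names here are placeholders, not any real renderer's API:

```python
def render_in_buckets(screen_tiles, generate_geometry, shade):
    # Geometry for each screen tile is generated on demand, rendered,
    # and then discarded, so the whole scene never has to sit in memory
    # at once -- only one bucket's worth of geometry is live at a time.
    image = {}
    for tile in screen_tiles:
        geometry = generate_geometry(tile)   # parse/dice only this tile
        image[tile] = shade(geometry)        # rasterize/shade it
        del geometry                         # free before the next tile
    return image

# Toy usage: "geometry" is just a list of unit weights, "shading" sums them.
tiles = [(x, y) for x in range(2) for y in range(2)]
result = render_in_buckets(tiles,
                           generate_geometry=lambda t: [1] * 1000,
                           shade=sum)
```

Peak memory is then proportional to the densest single bucket rather than to the whole scene, which is how such renderers handle "astronomical" triangle counts.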
Andreas
--
http://www.render-zone.com
Thorsten Froehlich wrote:
>
> > But complexity in raytracing can also be achieved with other things than
> > meshes, like isosurfaces for detailed landscapes.
>
> Actually, landscapes are one of the few things for which extremely efficient
> algorithms exist that are very good at limiting complexity using dynamic
> level-of-detail methods. Many of these were invented because the military
> needed flight simulators* that were realistic long before hardware could
> solve the problem without clever algorithms...
Reducing the problem of generating realistic terrain display to the
requirements of 'flight simulator like' applications (meaning low
resolution, large area, possibly realtime display of heightfield data)
does not cover this problem sufficiently. As I have mentioned elsewhere
in this thread, terrain geometry is a very good example of a problem where
procedural geometry can clearly be superior to the classical hand-made
mesh approach, because terrain geometry is very complex but can be fairly
well described algorithmically.
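One classic algorithmic description of terrain is midpoint displacement: start from a few control points and repeatedly insert jittered midpoints, halving the jitter at each level. A minimal one-dimensional sketch (all parameters are illustrative):

```python
import random

def midpoint_displacement(levels, roughness=0.5, seed=42):
    # 1D midpoint displacement: between every pair of neighbouring
    # heights, insert a midpoint offset by a random amount, then shrink
    # the offset amplitude for the next refinement level.
    random.seed(seed)
    heights = [0.0, 0.0]
    amplitude = 1.0
    for _ in range(levels):
        refined = []
        for a, b in zip(heights, heights[1:]):
            mid = (a + b) / 2.0 + random.uniform(-amplitude, amplitude)
            refined += [a, mid]
        refined.append(heights[-1])
        heights = refined
        amplitude *= roughness
    return heights

profile = midpoint_displacement(levels=6)
```

The point count doubles per level while the description stays a handful of lines, which is exactly the "very complex but fairly well described algorithmically" property of terrain.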
Christoph
--
POV-Ray tutorials, include files, Sim-POV,
HCR-Edit and more: http://www.tu-bs.de/~y0013390/
Last updated 28 Feb. 2003 _____./\/^>_*_<^\/\.______
In article <3ea29335@news.povray.org>, sim### [at] gaussschule-bs de says...
> Thorsten Froehlich wrote:
> > In article <3ea1d418$1@news.povray.org> , Simon Adameit
> > <sim### [at] gaussschule-bs de> wrote:
> >
> >> Regarding your last comment, how does raytracing compare with other
> >> techniques when complexity grows?
> >
> > In short:
>
> Thanks for your detailed explanation.
>
> > So, as you can see (if you managed to read on to here), if the scene
> > complexity rises, ray-tracing gets more economical, even for things like
> > triangle meshes.
>
> Unfortunately complexity is limited by memory.
It should be noted that, with games at least, some rendering engines
combine features of scanline and raytrace. In general, the difference is
often accuracy and effects. A triangle-based sphere somehow needs to gain
triangles the closer you get to it, so it will never be as accurate as a
raytraced one. Obvious, of course, but the same issue exists with 'any'
mesh you use. The other factor is that you either use the hardware
acceleration or calculate nearly everything in your program anyway, since
the cards can't do complex things like refraction, reflection, etc. They
just slap triangles on the screen and slap a texture over them. Some older
cards couldn't even do this right. lol
Because of this, a lot of games speed things up when they have huge meshes
by z-buffering them internally into planes. In other words, they chop the
existing meshes into layers that will get sent to the card, use something
like a raytracer's bounding to eliminate any triangles that you won't see,
and then dump the rest to the card, or, lacking a card, through a scanline
system that just draws everything and maps a texture to it (which is more
or less what Doom did). If games don't do this, it is quite silly, since
even if the cards themselves perform something similar, there is, as you
say, a distinct limit to how much junk you can hand even the best OpenGL
or DirectX card before you run out of memory, especially since you also
have to dump huge textures to the card as well. On a modern machine you
could very nearly produce a frame rate as good as or better than any 3D
card by using pure raytrace and procedural textures, where you only used
meshes when needed, and even that isn't a 'major' slowdown.
The only real issue is structuring the data in a way that lets you load it
fast. Some level of persistence of objects from frame to frame, so you
only parsed 'new' objects or transitional information, would pretty well
vaporize that issue. Considering how the high-poly action scenes in stuff
like the XBox's Yu-Gi-Oh: War of the Roses game load so insanely slowly,
from what I hear, that people turn them off, even POV-Ray's existing
per-frame parse time for the same scene would probably 'load' the needed
animation frames faster by generating each one individually. ;) I would
say that no matter how much they improve the cards themselves, the main
issue is now becoming how to get the bloody data to them in the first
place, and the way they work actually makes that significantly harder to
do, no matter what the frame rate per poly-count now is. ;) lol
But that gets slightly off topic. ;) Knowing how poly-based systems work,
I suspect that the in-program views use scanline, but any such tool that
claims to be able to export the result will use more conventional
raytracing to produce the final result. A few even say something to that
effect in some obscure corner of the docs, but the main problem is almost
always the obvious mesh nature of the objects when seen too closely, or
just plain at the wrong angle....
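The "raytracer-style bounding to eliminate triangles you won't see" step mentioned above can be sketched as a conservative bounding-sphere test against a view cone. This is a simplification of real frustum culling, and every name and constant here is illustrative:

```python
import math

def visible(center, radius, eye, view_dir, half_angle):
    # Conservative visibility test: keep an object unless its bounding
    # sphere lies entirely outside a cone around the view direction.
    # Objects rejected here never have their triangles sent to the card.
    dx = [c - e for c, e in zip(center, eye)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist <= radius:
        return True                       # the eye is inside the sphere
    # Angle between the view direction and the sphere's center...
    dot = sum(d * v for d, v in zip(dx, view_dir)) / dist
    angle = math.acos(max(-1.0, min(1.0, dot)))
    # ...widened by the sphere's angular radius, compared to the cone.
    return angle - math.asin(min(1.0, radius / dist)) <= half_angle

# A sphere straight ahead is kept; one behind the viewer is culled.
ahead = visible((0, 0, 10), 1.0, (0, 0, 0), (0, 0, 1), math.radians(30))
behind = visible((0, 0, -10), 1.0, (0, 0, 0), (0, 0, 1), math.radians(30))
```

Real engines use per-plane frustum tests and hierarchies of such bounds, but the principle is the same: reject whole groups of triangles with one cheap test each.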
--
void main () {
call functional_code()
else
call crash_windows();
}
In article <3ea2c1e7@news.povray.org> , Andreas Kreisig <and### [at] gmx de>
wrote:
> Don't know how they realize this but I read an
> article about that and they rendered test scenes with an astronomic number
> of verts or triangels.
Of course, if only a fraction is visible...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <3EA2C68E.E34CE2DA@gmx.de> , Christoph Hormann
<chr### [at] gmx de> wrote:
> Reducing the problem of generating realistic terrain display to the
> requirements of 'flight simulator like' applications (meaning low
> resolution, large area, possibly realtime display of heightfield data)
Actually, today the terrain will be down to 1 meter or better resolution for
flight simulators. But as it is created using aerial photographs, airborne
radar and satellites, it is always two-dimensional data with height
information. While this does not cover 100% of the Earth's surface, it
covers well over 99.9%...
> does not cover this problem sufficiently. As i have mentioned elsewhere
> in this thread terrain geometry is a very good example for a problem where
> procedural geometry can clearly be superior to the classical hand made
> mesh approach because terrain geometry is very complex but can be fairly
> well described algorithmically.
Yes, artificial geometry that creates true 3D surfaces will be easier this
way, but given all the advantages of terrain stored as a height field for
simulations of real terrain, it is common to combine that information with
true 3D information only as needed. Even for rendering, when calculating
artificial geometry at properly spaced points, scanline rendering will
outperform ray-tracing unless the ray-tracing algorithm handles the terrain
as a special case.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org
In article <3ea260f1@news.povray.org>,
"Thorsten Froehlich" <tho### [at] trf de> wrote:
> So, as you can see (if you managed to read on to here), if the scene
> complexity rises, ray-tracing gets more economical, even for things like
> triangle meshes.
Scanline rendering has some advantages, though. For one, the entire scene
doesn't have to exist at the same time: for example, the renderer could
draw each blade of grass in a lawn as it is generated, instead of
creating a mesh in RAM. Raytracing, being image-order instead of
object-order, can't do this... at least not as efficiently. The blades
of grass would have to be regenerated for every ray tested. A smart
bounding scheme could greatly reduce the number of blades to generate
and test, but it is still a huge amount of work.
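The streaming advantage described above can be sketched with a toy depth buffer: each procedurally generated "blade" is rasterized and immediately forgotten, so memory stays constant no matter how many blades are drawn. The names and the 1-pixel-tall screen are illustrative simplifications:

```python
import random

def stream_blades(n_blades, width, seed=7):
    # Object-order rendering sketch: generate a blade, z-test it into
    # the framebuffer, discard it.  Nothing accumulates per blade, so
    # a billion blades cost the same memory as one.
    random.seed(seed)
    depth = [float("inf")] * width   # 1-pixel-tall "screen" for brevity
    color = [0] * width
    for blade_id in range(n_blades):
        x = random.randrange(width)          # generate one blade...
        z = random.uniform(1.0, 100.0)
        if z < depth[x]:                     # ...depth-test it...
            depth[x] = z
            color[x] = blade_id
        # ...and forget it before generating the next one.
    return depth, color

depth, color = stream_blades(n_blades=10000, width=32)
```

A raytracer, by contrast, would need every blade (or a way to regenerate it) available whenever any ray might hit it, which is why the trick doesn't transfer cheaply.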
When you require lots of geometry and realistic lighting, the situation
is quite a bit different...there are many cases where this trick will
just not work. For realism, you really can't get much better than a pure
raytracer.
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/
Christopher James Huff wrote:
> When you require lots of geometry and realistic lighting, the situation
> is quite a bit different...there are many cases where this trick will
> just not work. For realism, you really can't get much better than a pure
> raytracer.
A pure raytracer with GI or another type of indirect lighting.
--
http://www.render-zone.com
In article <3ea3cfe9@news.povray.org>,
Andreas Kreisig <and### [at] gmx de> wrote:
> > When you require lots of geometry and realistic lighting, the situation
> > is quite a bit different...there are many cases where this trick will
> > just not work. For realism, you really can't get much better than a pure
> > raytracer.
>
> A pure raytracer with GI or another type of indirect lighting.
GI can be done with pure raytracing.
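As a sketch of that point, ambient occlusion is about the simplest GI-like quantity you can compute with nothing but rays: shoot random rays over the hemisphere and average what comes back. Here `occluded` stands in for a real ray-scene intersection test, and all names are illustrative:

```python
import math
import random

def ambient_occlusion(point, normal, occluded, samples=256, seed=3):
    # Monte Carlo gather with pure raytracing: fire `samples` random
    # hemisphere rays from `point` and count how many are blocked.
    random.seed(seed)
    hits = 0
    for _ in range(samples):
        # Uniform direction on the sphere via rejection sampling...
        while True:
            d = [random.uniform(-1, 1) for _ in range(3)]
            n2 = sum(c * c for c in d)
            if 0 < n2 <= 1:
                break
        d = [c / math.sqrt(n2) for c in d]
        # ...flipped into the hemisphere around the normal.
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-c for c in d]
        if occluded(point, d):
            hits += 1
    return 1.0 - hits / samples

# Open sky: no ray is ever blocked, so the point is fully lit.
open_sky = ambient_occlusion((0, 0, 0), (0, 0, 1), lambda p, d: False)
```

Full GI just extends the same gather recursively (trace where the ray lands and gather again), which is why it needs no machinery beyond ray-scene intersection.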
--
Christopher James Huff <cja### [at] earthlink net>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tag povray org
http://tag.povray.org/