From: Simon Adameit
Subject: Direct Ray Tracing of Displacement Mapped Triangles
Date: 18 Apr 2003 18:44:59
Message: <3ea07feb@news.povray.org>
I just read this paper and found it very interesting, as this could
perhaps be implemented in POV:
http://www.cs.utah.edu/~bes/papers/height/
I also found some other papers about this topic, but they described
either something like isosurfaces or required things that are surely not
going to be implemented in POV, like memory-coherent ray tracing.
In article <3ea07feb@news.povray.org>,
Simon Adameit <sim### [at] gaussschule-bsde> wrote:
> I just read this paper and found it very interesting, as this could
> perhaps be implemented in POV:
>
> http://www.cs.utah.edu/~bes/papers/height/
>
> I also found some other papers about this topic, but they described
> either something like isosurfaces or required things that are surely not
> going to be implemented in POV, like memory-coherent ray tracing.
Neat...I was talking about something similar to this in a thread on
scanline vs. ray tracing renderers, about generating grass on the fly,
creating the blades only when testing them against a ray...I didn't think
it was practical, but maybe it is after all. Render-time subdivision of
meshes is something I've been interested in for a while, though this is
more sophisticated than any ideas I've come up with.
I'm not too sure of the usefulness, though. I mean, memory is really
cheap, especially compared to CPU power, and there has to be some
render-speed penalty for this. On the other hand, this technique could be
adapted to make grass and other plant life that could stretch even today's
memory capacity, even with tricks like duplicating patches. And it has
the advantage of only generating the triangles that are actually tested
against rays, which could make it faster overall.
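Roughly what I have in mind, as a minimal C++ sketch (not POV-Ray source;
LazyGrassPatch, Triangle and the hit() test are made-up placeholders for
illustration): a patch that only builds its blade triangles the first time
a ray actually queries it, and caches them afterwards.

    // Hypothetical sketch: lazy, on-demand generation of patch geometry.
    #include <cstddef>
    #include <optional>
    #include <vector>

    struct Vec3 { double x, y, z; };
    struct Triangle { Vec3 a, b, c; };
    struct Ray { Vec3 origin, dir; };

    // Placeholder; a real tracer would return hit distance, normal, etc.
    bool hit(const Triangle&, const Ray&) { return false; }

    class LazyGrassPatch {
    public:
        explicit LazyGrassPatch(std::size_t blades) : blades_(blades) {}

        // Called from the intersection code; geometry is built on first use.
        bool intersect(const Ray& ray) {
            if (!triangles_) generate();
            for (const Triangle& t : *triangles_)
                if (hit(t, ray)) return true;
            return false;
        }

    private:
        void generate() {
            triangles_.emplace();
            triangles_->reserve(blades_ * 2);  // e.g. two triangles per blade
            // ...procedural blade construction would go here...
        }

        std::size_t blades_;
        std::optional<std::vector<Triangle>> triangles_;  // empty until first ray
    };

Patches that no ray ever reaches would never be expanded into triangles at
all, which is where the memory saving would come from.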
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 23 Apr 2003 13:48:20
Message: <3ea6d1e3@news.povray.org>
Christopher James Huff wrote:
> I'm not too sure of the usefulness, though. I mean, memory is really
> cheap,
>
Well...
Ever tried to render a several-million-triangle mesh with POV-Ray?
If you want to trace topography data, you run out of memory much faster
than you think.
Staying with that example, a 1-million-triangle topography of a planet
does not look very good unless you add some "artificial" complexity like
subdivision surfaces. Or, maybe, something like what these people describe
as the "addition of large amounts of geometric complexity into models".
Wolfgang
From: Thorsten Froehlich
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 23 Apr 2003 17:15:50
Message: <3ea70286@news.povray.org>
In article <3ea6d1e3@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde>
wrote:
> Ever tried to render a several-million-triangle mesh with POV-Ray?
> If you want to trace topography data, you run out of memory much faster
> than you think.
Well, then you had better get a system with a 64-bit processor, or use a
height field. If a desktop-level system with a 32-bit processor can't
handle that amount of data, that is hardly a problem of POV-Ray...
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde
Visit POV-Ray on the web: http://mac.povray.org
In article <3ea6d1e3@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde>
wrote:
> Ever tried to render a several-million-triangle mesh with POV-Ray?
No, but it shouldn't be a huge problem given enough RAM. A few million
should stay well within the capabilities of 32-bit systems. If memory
space is restricted, this algorithm could be very useful.
> If you want to trace topography data, you run out of memory much faster
> than you think.
Why? What makes topography data inherently more memory-consuming than
other meshes?
> Staying with that example, a 1-million-triangle topography of a planet
> does not look very good unless you add some "artificial" complexity like
> subdivision surfaces. Or, maybe, something like what these people describe
> as the "addition of large amounts of geometric complexity into models".
Adding that complexity doesn't require doing it at render time, at the
expense of CPU time that could be used for actual rendering.
Besides, why would you use a 1-million-triangle mesh of an entire planet
when you are close enough to see geometry that can't be represented with
that mesh?
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 08:47:52
Message: <3eaa7ff7@news.povray.org>
Thorsten Froehlich wrote:
> In article <3ea6d1e3@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde>
> wrote:
>
>> Ever tried to render a several-million-triangle mesh with POV-Ray?
>> If you want to trace topography data, you run out of memory much faster
>> than you think.
>
> Well, then you had better get a system with a 64-bit processor,
>
Oh, do you have one for me?
> or use a height field.
>
Correct.
But what I need is actually a height-SPHERE, so I'm back to a plain mesh.
> If a desktop-level system with a 32-bit processor can't
> handle that amount of data, that is hardly a problem of POV-Ray...
>
Well, in some way it IS, because one could imagine an algorithm which
uses less memory (at the expense of CPU time), but only for the
specialized problem of a height sphere.
But I agree: the fact that a genuine mesh does not fit into RAM is
not a POV-Ray bug, because I see little chance of significantly reducing
the RAM consumption of a genuine mesh (after looking at the POV code).
Wolfgang
From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 09:59:00
Message: <3eaa90a3@news.povray.org>
>> Ever tried to render a several-million-triangle mesh with POV-Ray?
>
> No, but it shouldn't be a huge problem given enough RAM. A few million
> should stay well within the capabilities of 32-bit systems. If memory
> space is restricted, this algorithm could be very useful.
>
The amount of consumed memory _IS_ the problem.
Each triangle consumes quite a lot of memory.
(Rendering 1 million triangles consumes about 180 MB of RAM; at least that
is what I just measured.)
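For what it's worth, ~180 bytes per triangle is roughly what you get if
each smooth triangle carries its own double-precision vertices and normals.
A back-of-the-envelope illustration in C++ (this is NOT POV-Ray's actual
internal layout, just a plausible one):

    // Back-of-the-envelope only; not POV-Ray's real data structures.
    #include <cstddef>
    #include <cstdio>

    struct Vec3 { double x, y, z; };        // 24 bytes

    struct SmoothTriangle {
        Vec3 p1, p2, p3;                    // three vertices = 72 bytes
        Vec3 n1, n2, n3;                    // three normals  = 72 bytes
        Vec3 normal;                        // face normal    = 24 bytes
        // a real tracer adds texture pointers, bounding data, ...
    };

    int main() {
        const std::size_t n = 1000000;
        std::printf("%zu bytes per triangle, %.0f MB for %zu triangles\n",
                    sizeof(SmoothTriangle),
                    n * double(sizeof(SmoothTriangle)) / (1024.0 * 1024.0), n);
        // 168 bytes each here, i.e. ~160 MB per million triangles --
        // the same order of magnitude as the 180 MB measured above.
        return 0;
    }

Sharing vertices and normals between triangles (as mesh2 does) helps, but
the measured 180 bytes suggests per-triangle bookkeeping and bounding data
still dominate.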
>> If you want to trace topography data, you run out of memory much faster
>> than you think.
>
> Why? What makes topography data inherently more memory-consuming than
> other meshes?
>
The problem is that in order to get a nice image of topography data,
it needs to have a very fine grid. That requires either:
- a huge number of triangles (a rough count is sketched below),
- an algorithm which adds subdivision surfaces or something similar to
  produce a decent image (the fine details won't be actual topography
  but will _look_ nice) -- but I repeat myself, or
- some other trick?
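To put a rough number on the first point (assuming a regular
latitude/longitude grid with two triangles per cell; the numbers are only
an illustration):

    // Rough count only; assumes a regular lat/long grid, 2 triangles/cell.
    #include <cstdio>

    int main() {
        const long rows = 720, cols = 1440;   // 0.25 degree spacing
        const long total = rows * cols * 2;   // triangles on the full sphere
        std::printf("%ld triangles for the visible hemisphere\n", total / 2);
        // ~1.04 million already at quarter-degree resolution; halving the
        // grid spacing quadruples the count.
        return 0;
    }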
>> Staying with that example, a 1-million-triangle topography of a planet
>> does not look very good unless you add some "artificial" complexity like
>> subdivision surfaces. Or, maybe, something like what these people describe
>> as the "addition of large amounts of geometric complexity into models".
>
> Adding that complexity doesn't require doing it at render time, at the
> expense of CPU time that could be used for actual rendering.
>
?! You mean I should buy 256 GB of RAM?
Furthermore, adding the complexity at render time will effectively be
faster because we save so much parse time:
One million triangles of topography data showing the visible half of
a planet, traced at 800x600 full quality, two light sources, no
anti-aliasing:
Time For Parse: 0 hours 4 minutes 36.0 seconds (276 seconds)
Time For Trace: 0 hours 0 minutes 9.0 seconds (9 seconds)
That's a ratio of roughly 30:1 (!)
> Besides, why would you use a 1-million-triangle mesh of an entire planet
> when you are close enough to see geometry that can't be represented with
> that mesh?
>
First of all, there may be reasons: it is hard to know which triangles
are needed for reflections. (I mean: ever seen the hollow (culled) back
face of a planet in the reflective surface of a spacecraft?)
And then, maybe you are not aware of how many triangles you need
for a decent landscape...
Of course, one could use meshes with different grid sizes for different
camera distances, but that brings other problems (holes in the surface,
lots of meshes for camera flights).
The easiest solution for the mentioned problem would be to implement a
height sphere for POV-Ray using only about 2 bytes per triangle (storing
the data as a 16-bit spherically-mapped height field).
But the more general solution would be some support for auto-generated
"artificial" complexity in meshes (added at render time).
Or, maybe, support for "mesh textures" for primitive objects (i.e.
height fields on top of a sphere, cylinder/cone, or torus).
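To make the height-sphere idea concrete, here is a minimal C++ sketch of
the storage side only (hypothetical, not an existing POV-Ray feature; the
HeightSphere name and its interface are made up, and an actual intersection
test would still have to march the ray across the grid, as in the paper):

    // Hypothetical 16-bit spherically-mapped height field ("height sphere").
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    class HeightSphere {
    public:
        HeightSphere(std::size_t rows, std::size_t cols,
                     double base_radius, double height_scale)
            : rows_(rows), cols_(cols),
              base_radius_(base_radius), height_scale_(height_scale),
              samples_(rows * cols, 0) {}          // 2 bytes per sample

        // Surface radius at latitude theta in [0,pi], longitude phi in
        // [0,2*pi). Nearest sample for brevity; real code would interpolate.
        double radius(double theta, double phi) const {
            const double kPi = 3.14159265358979323846;
            std::size_t i = static_cast<std::size_t>(theta / kPi * (rows_ - 1));
            std::size_t j = static_cast<std::size_t>(phi / (2.0 * kPi) * (cols_ - 1));
            return base_radius_
                 + height_scale_ * samples_[i * cols_ + j] / 65535.0;
        }

        std::uint16_t& at(std::size_t i, std::size_t j) {
            return samples_[i * cols_ + j];
        }

    private:
        std::size_t rows_, cols_;
        double base_radius_, height_scale_;
        std::vector<std::uint16_t> samples_;       // the whole geometry
    };

    // A 2048x4096 grid (~16 million implied triangles) is only 16 MB here.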
Wolfgang
In article <3eaa90a3@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde>
wrote:
> The amount of consumed memory _IS_ the problem.
> Each triangle consumes quite a lot of memory.
> (Rendering 1 million triangles consumes about 180 MB of RAM; at least that
> is what I just measured.)
And I've seen 256 MB modules for less than $30; 1 GB is well within the
reach of a serious hobbyist. A 1 GB system could handle several million
triangles, more or less depending on how efficiently the mesh can be
stored.
> > that could be used for actual rendering.
> >
> ?! You mean I should buy 256 GB of RAM?
No...just enough to hold what you need to render, and use some
intelligence in setting up the scene. A high-resolution mesh of an
entire planet is ridiculous when you are viewing it from orbit. If you
are close enough to see details, you don't need anything close to the
entire planet.
> Furthermore, adding the complexity at render time will effectively be
> faster because we save so much parse time:
You assume there is no way to increase loading speed. In your planet
example, a low-res planet mesh would take little time to parse, and a
high-res landscape height field would load much faster than an
equivalent mesh, because it involves reading a binary image file instead
of parsing a scene description. A binary mesh format would make loading
high-res meshes faster still.
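For instance, something along these lines (a hypothetical packed format,
sketched in C++ only to show why it loads quickly; load_mesh and the
record layout are assumptions, not an existing POV-Ray feature):

    // Hypothetical packed binary mesh reader (not an existing POV-Ray format).
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Vertex { float x, y, z; };
    struct Face   { std::uint32_t a, b, c; };

    bool load_mesh(const char* path,
                   std::vector<Vertex>& verts, std::vector<Face>& faces) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return false;
        std::uint32_t nv = 0, nf = 0;
        bool ok = std::fread(&nv, sizeof nv, 1, f) == 1 &&
                  std::fread(&nf, sizeof nf, 1, f) == 1;
        if (ok) {
            verts.resize(nv);
            faces.resize(nf);
            ok = std::fread(verts.data(), sizeof(Vertex), nv, f) == nv &&
                 std::fread(faces.data(), sizeof(Face), nf, f) == nf;
        }
        std::fclose(f);
        return ok;  // two bulk array reads replace tokenising millions of floats
    }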
> First of all, there may be reasons: it is hard to know which triangles
> are needed for reflections. (I mean: ever seen the hollow (culled) back
> face of a planet in the reflective surface of a spacecraft?)
It's not going to matter. You aren't going to be able to tell the
difference between a high-res and a low-res planet mesh when seen
directly, and certainly not when you are viewing a reflection of it on a
ship.
> And then, maybe you are not aware of how many triangles you need
> for a decent landscape...
Quite a few, but not enough to be a problem. I did mention the example of
grass and other plants, though, which could probably benefit from this.
> The easiest solution for the mentioned problem would be to implement a
> height sphere for POV-Ray using only about 2 bytes per triangle (storing
> the data as a 16-bit spherically-mapped height field).
It's unnecessary, and not the easiest solution, but it is possible. It
might also be possible to add some optimizations to improve speed.
> But the more general solution would be some support for auto-generated
> "artificial" complexity in meshes (added at render time).
Auto-generated mesh complexity is not a bad idea, but doing it at render
time involves inefficiencies that could make it slower than just refining
the mesh once and storing the higher-resolution version. On the other
hand, it only refines the parts of the mesh where it is needed...so in
some cases, it could be faster.
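Lazy 1-to-4 subdivision would give exactly that behaviour. A rough C++
sketch under assumed names (TriNode and displace() are placeholders, and
this midpoint scheme is only an illustration, not the paper's exact
algorithm): a triangle is refined only when a ray actually reaches it, and
the result is cached for later rays.

    // Sketch: refine a triangle into four children on demand, cache result.
    #include <array>
    #include <memory>

    struct Vec3 {
        double x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    };

    // Placeholder for the displacement lookup (height/displacement map).
    Vec3 displace(const Vec3& p) { return p; }

    struct TriNode {
        std::array<Vec3, 3> v;
        std::array<std::unique_ptr<TriNode>, 4> kids;  // empty until needed

        void refine() {                   // call only when a ray gets here
            if (kids[0]) return;          // already refined: reuse the cache
            Vec3 m01 = displace((v[0] + v[1]) * 0.5);
            Vec3 m12 = displace((v[1] + v[2]) * 0.5);
            Vec3 m20 = displace((v[2] + v[0]) * 0.5);
            kids[0] = make_child(v[0], m01, m20);
            kids[1] = make_child(m01, v[1], m12);
            kids[2] = make_child(m20, m12, v[2]);
            kids[3] = make_child(m01, m12, m20);
        }

        static std::unique_ptr<TriNode> make_child(Vec3 a, Vec3 b, Vec3 c) {
            auto n = std::make_unique<TriNode>();
            n->v = {a, b, c};
            return n;
        }
    };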
> Or, maybe, support for "mesh textures" for primitive objects (i.e.
> height fields on top of a sphere, cylinder/cone, or torus).
There are macros that make spherical and cylindrical height fields,
though not toroidal or cubical ones.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/
From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 15:24:57
Message: <3eaadd08@news.povray.org>
Christopher James Huff wrote:
>> > that could be used for actual rendering.
>> >
>> ?! You mean I should buy 256 GB of RAM?
>
> No...just enough to hold what you need to render, and use some
> intelligence in setting up the scene.
>
Just tell me why I should use "intelligence" to do complicated
viewport and culling calculations (think of animations), which require
a separate mesh include file for each frame, if the problem could be
dealt with in an easier and (as I think) more elegant way.
> A high-resolution mesh of an
> entire planet is ridiculous when you are viewing it from orbit.
>
(You need 1 million triangles for the visible half of a planet from orbit
(with the height scaled by some factor to make it look more interesting)
when its diameter fills the screen height. Medium-sized, I guess...)
> If you are close enough to see details, you don't need anything close
> to the entire planet.
>
Correct. Did I doubt that?
> A binary mesh format would make loading
> high-res meshes faster still.
>
...and make them smaller on the HD.
>> But the more general solution would be some support for auto-generated
>> "artificial" complexity in meshes (added at render time).
>
> Auto-generated mesh complexity is not a bad idea, but doing it at render
> time involves inefficiencies that could make it slower than just refining
> the mesh once and storing the higher-resolution version.
>
There are three major advantages:
- it may be faster than mesh2 because we save parse time;
- it uses less memory while not requiring the user to do complicated
  viewport and grid-size calculations. This means that if you use a very
  deep scene (flying along a valley), it would produce nice scenery from
  a low/medium-resolution mesh with a constant grid size;
- and, as you mentioned:
> it only refines the parts of the mesh where it is needed...
>> Or, maybe, support for "mesh textures" for primitive objects (i.e.
>> height fields on top of a sphere, cylinder/cone, or torus).
>
> There are macros that make spherical and cylindrical height fields,
> though not toroidal or cubical ones.
>
IIRC, these macros just create a mesh from the data.
And a cube is not needed; one can use six height fields instead.
Wolfgang
In article <3eaadd08@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde>
wrote:
> > No...just enough to hold what you need to render, and use some
> > intelligence in setting up the scene.
> >
> Just tell me why I should use "intelligence" to do complicated
> viewport and culling calculations (think of animations), which require
> a separate mesh include file for each frame, if the problem could be
> dealt with in an easier and (as I think) more elegant way.
I never mentioned viewport calculations or culling. It doesn't require
huge amounts of work...just don't use high-resolution meshes where
low-res meshes are adequate.
> > A high-resolution mesh of an
> > entire planet is ridiculous when you are viewing it from orbit.
> (You need 1 million triangles for the visible half of a planet from orbit
> (with the height scaled by some factor to make it look more interesting)
> when its diameter fills the screen height. Medium-sized, I guess...)
> > A binary mesh format would make loading
> > high-res meshes faster still.
> ...and make them smaller on the HD.
Really, who cares about file size? It is only an issue when transferring
files. I view it as simply a side effect of using a format more
convenient for fast loading.
> There are three major advantages:
> - it may be faster than mesh2 because we save parse time;
But if the goal is faster parsing, there are much simpler and more
effective ways to accomplish it.
> - it uses less memory while not requiring the user to do complicated
>   viewport and grid-size calculations.
I've never suggested the user should have to do that.
> This means that if you use a very deep scene (flying along a valley),
> it would produce nice scenery from a low/medium-resolution mesh with a
> constant grid size;
Which could be done just as well before rendering.
> - and, as you mentioned:
> > it only refines the parts of the mesh where it is needed...
This seems to be the main advantage. Subdividing and displacing a big
mesh ahead of time could take a lot of CPU time, and limiting it to the
needed areas would not be easy, so you would store more triangles than
necessary. My main point is that the memory saving seems to be more of a
side benefit, if the technique really can give a speed benefit (time/CPU
is much more costly than memory or storage).
> > There are macros that make spherical and cylindrical height fields,
> > though not toroidal or cubical ones.
> >
> IIRC, these macros just create a mesh from the data.
> And a cube is not needed; one can use six height fields instead.
I'm not really sure what a cube version would be defined as, but it
wouldn't be very useful if it were simply six planar height fields. Maybe
something more like the spherical height field, just using a cube as the
base shape. I actually can't think of any use for it...it would probably
be better to just implement subdivision/displacement for meshes.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/