POV-Ray : Newsgroups : povray.unofficial.patches : Direct Ray Tracing of Displacement Mapped Triangles
  Direct Ray Tracing of Displacement Mapped Triangles (Message 7 to 16 of 46)  
From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 09:59:00
Message: <3eaa90a3@news.povray.org>
>> Ever tried to render a several-million-triangle mesh with POVRay?
> 
> No, but it shouldn't be a huge problem given enough RAM. A few million
> should stay well within the capabilities of 32 bit systems. If memory
> space is restricted, this algorithm could be very useful.
> 
The amount of consumed memory _IS_ the problem. 
Each triangle consumes quite a lot of memory. 
(Rendering 1 million triangles consumes 180 MB RAM, at least that is 
what I just measured.)
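As a back-of-envelope check of those figures (the geometry breakdown below is an assumption for illustration, not POV-Ray's actual memory layout):

```python
# Back-of-envelope check: 180 MB for 1 million triangles works out to
# ~189 bytes per triangle.  The breakdown below is an assumed layout,
# not POV-Ray's actual one.
MB = 1024 * 1024
per_triangle = 180 * MB / 1_000_000
print(f"{per_triangle:.0f} bytes per triangle")  # 189

# A smooth triangle needs at least 3 vertices + 3 normals; with 32-bit
# floats that is 6 vectors * 3 components * 4 bytes = 72 bytes of raw
# geometry.  The rest would be bounding boxes, pointers and per-object
# bookkeeping under this assumption.
raw_geometry = 6 * 3 * 4
print(f"overhead: {per_triangle - raw_geometry:.0f} bytes per triangle")  # 117
```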

>> If you want to trace topography data you run out of memory much faster
>> than you think.
> 
> Why? What makes topography data inherently more memory consuming than
> other meshes?
> 
The problem is that in order to get a nice image of topography data 
you need a very fine grid, which requires either 
- a huge number of triangles, 
- an algorithm which adds subdivision surfaces or something similar 
  to produce a decent image (the fine details won't be actual topography 
  but will _look_ nice) -- but I repeat myself, or 
- some other trick?

>> Staying at that example, a 1-million-triangle topography of a planet
>> does not look very good unless you add some "artificial" complexity like
>> subdivision surfaces. Or, maybe, something these people described as
>> "addition of large amounts of geometric complexity into models".
> 
> This doesn't require doing it at render time, at the expense of CPU time
> that could be used for actual rendering.
>
?! You mean I should buy 256 GB of RAM?
Furthermore, adding the complexity at render time will effectively be 
faster because we save so much parse time: 

One million triangles topography data showing the visible half of 
a planet traced at 800x600 full quality, two light sources, no 
anti-aliasing: 

Time For Parse:    0 hours  4 minutes  36.0 seconds (276 seconds)
Time For Trace:    0 hours  0 minutes   9.0 seconds (9 seconds)

That's a ratio of 30 : 1 (!)

> Besides, why would you use a 1 million triangle mesh of an entire planet
> when you are close enough to see geometry that can't be represented with
> that mesh?
> 
First of all, there may be reasons: It is hard to know which triangles 
are needed for reflections. (I mean: have you ever seen the hollow (culled) 
back face of a planet on the reflective surface of a spacecraft?)

And then, maybe you are not aware of how many triangles you need 
for a decent landscape...
Of course, one could use meshes with different grid sizes for different 
camera distances, which brings other problems (holes in the surface, 
lots of meshes for camera flights). 

The easiest solution for the mentioned problem would be to implement a 
height sphere for POVRay using only 2 bytes per triangle (storing the 
data as 16 bit spherically-mapped height field). 
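A sketch of that idea in Python, assuming a simple linear height mapping and illustrative function names (this is not an existing POV-Ray feature):

```python
import math

# "Height sphere" sketch: store elevations as a 16-bit spherically
# mapped height field (2 bytes per sample) instead of full triangles.
# The grid layout and linear height mapping are illustrative assumptions.

def sample_height(data, width, height, lon, lat):
    """Look up a 16-bit height sample for longitude/latitude in radians."""
    u = int((lon / (2 * math.pi)) % 1.0 * width) % width
    v = min(int((lat / math.pi + 0.5) * height), height - 1)
    return data[v * width + u]

def displaced_point(data, width, height, lon, lat, radius, scale):
    """Point on the displaced sphere: base radius plus scaled 16-bit height."""
    h = sample_height(data, width, height, lon, lat) / 65535.0
    r = radius * (1.0 + scale * h)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

# Example: full-height sample on a tiny 4x2 grid, radius 1, 50% scale:
data = [65535] * (4 * 2)
print(displaced_point(data, 4, 2, 0.0, 0.0, 1.0, 0.5))  # (1.5, 0.0, 0.0)

# A 2048x1024 grid costs 2048*1024*2 bytes = 4 MB, versus hundreds of MB
# for an equivalent mesh at the ~180 bytes/triangle figure quoted above.
```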

But the more general solution would be some support for auto-generated 
"artificial" complexity in meshes (added at render time). 

Or, maybe, support for "mesh textures" for primitive objects (i.e. 
height fields on top of sphere, cylinder/cone, torus)

Wolfgang



From: Christopher James Huff
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 13:49:04
Message: <cjameshuff-FFDE23.13501026042003@netplex.aussie.org>
In article <3eaa90a3@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde> 
wrote:

> The amount of consumed memory _IS_ the problem. 
> Each triangle consumes quite a lot of memory. 
> (Rendering 1 million triangles consumes 180 MB RAM, at least that is 
> what I just measured.)

And I've seen 256MB modules for less than $30, 1GB is well within the 
reach of a serious hobbyist. A 1GB system could handle several million 
triangles, more or less depending on how efficiently the mesh can be 
stored.


> > that could be used for actual rendering.
> >
> ?! You mean I should buy 256 GB of RAM?

No...just enough to hold what you need to render, and use some 
intelligence in setting up the scene. A high resolution mesh of an 
entire planet is ridiculous when you are viewing it from orbit. If you 
are close enough to see details, you don't need anything close to the 
entire planet.


> Furthermore, adding the complexity at render-time will effectively be 
> faster because we save such a lot of parse time: 

You assume there is no way to increase loading speed. In your planet 
example, a low-res planet mesh would take little time to parse, and a 
high-res landscape height field would be much faster to load than an 
equivalent mesh, because it involves opening a binary image file instead 
of parsing a scene description. A binary mesh format would make loading 
high-res meshes faster.
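A minimal Python sketch of why a binary format loads faster: each vertex is a fixed-size record read directly, with no tokenizing. The format here (a count plus packed 32-bit floats) is made up for illustration, not an actual POV-Ray file format:

```python
import struct, io

# Toy binary mesh format: a 32-bit vertex count followed by packed
# little-endian 32-bit float triples.  Reading is a fixed-size memory
# copy per record, unlike parsing a textual mesh2 declaration.

def write_mesh(f, vertices):
    f.write(struct.pack("<I", len(vertices)))
    for v in vertices:
        f.write(struct.pack("<3f", *v))

def read_mesh(f):
    (n,) = struct.unpack("<I", f.read(4))
    return [struct.unpack("<3f", f.read(12)) for _ in range(n)]

buf = io.BytesIO()
write_mesh(buf, [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
buf.seek(0)
mesh = read_mesh(buf)
print(mesh[1])  # (1.0, 0.0, 0.0)
```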


> First of all, there may be reasons: It is hard to know which triangles 
> are needed for reflections. (I mean: have you ever seen the hollow (culled) 
> back face of a planet on the reflective surface of a spacecraft?)

It's not going to matter. You aren't going to be able to tell the 
difference between a high-res and low-res planet mesh when seen 
directly, certainly not when you are viewing a reflection of it on a 
ship.


> And then, maybe you are not aware of how many triangles you need 
> for a decent landscape...

Quite a few. Not enough to be a problem. I have mentioned the example of 
grass and other plants though, which could probably benefit from it.


> The easiest solution for the mentioned problem would be to implement a 
> height sphere for POVRay using only 2 bytes per triangle (storing the 
> data as 16 bit spherically-mapped height field). 

It's unnecessary, and not the easiest solution, but it is possible. It 
might be possible to add some additional optimizations to improve speed.


> But the more general solution would be some support for auto-generated 
> "artificial" complexity in meshes (added at render time). 

Auto-generated mesh complexity is not a bad idea, but doing it at render 
time involves inefficiencies that could make it slower than just doing 
it on the mesh and storing the higher resolution version. On the other 
hand, it only does it to the parts of the mesh where it is needed...so 
in some cases, it could be faster.


> Or, maybe, support for "mesh textures" for primitive objects (i.e. 
> height fields on top of sphere, cylinder/cone, torus)

There are macros that make spherical and cylindrical height fields. Not 
toroidal or cubical ones, though.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 15:24:57
Message: <3eaadd08@news.povray.org>
Christopher James Huff wrote:

>> > that could be used for actual rendering.
>> >
>> ?! You mean I should buy 256 GB of RAM?
> 
> No...just enough to hold what you need to render, and use some
> intelligence in setting up the scene. 
>
Just tell me why I should use "intelligence" to do complicated 
viewport and culling calculations (think of animations), which require 
a separate mesh include file for each frame, if the problem could be 
dealt with in an easier and (as I think) more elegant way?

> A high resolution mesh of an
> entire planet is ridiculous when you are viewing it from orbit. 
>
(You need 1 million triangles for the visible half of a planet from orbit 
(height is scaled by some factor to make it look more interesting) when 
you look at it at diameter = screen height. Medium-sized, I guess...)

> If you are close enough to see details, you don't need anything close 
> to the entire planet.
> 
Correct. Did I doubt that?

> A binary mesh format would make loading
> high-res meshes faster.
> 
...and smaller on HD. 

>> But the more general solution would be some support for auto-generated
>> "artificial" complexity in meshes (added at render time).
> 
> Auto-generated mesh complexity is not a bad idea, but doing it at render
> time involves inefficiencies that could make it slower than just doing
> it on the mesh and storing the higher resolution version. 
>
There are 3 major advantages: 
- It may be faster than mesh2 because we save parse time. 
- It uses less memory while not requiring the user to do complicated 
  viewport and grid size calculations. 
  This means that if you use a very deep scene (fly along a valley), 
  it would produce nice scenery from a low/med-resolution mesh 
  with a constant grid size. 
- And, as you mentioned: 
> it only does it to the parts of the mesh where it is needed...
 
>> Or, maybe, support for "mesh textures" for primitive objects (i.e.
>> height fields on top of sphere, cylinder/cone, torus)
> 
> There are macros that make spherical and cylindrical height fields. Not
> toroidal or cubical ones, though.
> 
IIRC these macros just create a mesh from the data. 
And a cube is not needed: one can use 6 height fields instead. 

Wolfgang



From: Christopher James Huff
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 16:57:22
Message: <cjameshuff-73CA31.16583226042003@netplex.aussie.org>
In article <3eaadd08@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde> 
wrote:

> > No...just enough to hold what you need to render, and use some
> > intelligence in setting up the scene. 
> >
> Just tell me why I should use "intelligence" to do complicated 
> viewport and culling calculations (think of animations), which require 
> a separate mesh include file for each frame, if the problem could be 
> dealt with in an easier and (as I think) more elegant way?

I never mentioned viewport calculations or culling. It doesn't require 
huge amounts of work...just don't use high resolution meshes where low 
res meshes are adequate.


> > A high resolution mesh of an
> > entire planet is ridiculous when you are viewing it from orbit. 
> (You need 1 million triangles for the visible half of a planet from orbit 
> (height is scaled by some factor to make it look more interesting) when 
> you look at it at diameter = screen height. Medium-sized, I guess...)


> > A binary mesh format would make loading
> > high-res meshes faster.
> ...and smaller on HD. 

Really, who cares about file size? It is only an issue when transferring 
files. I view it as simply a side effect of using a format more 
convenient for fast loading.


> There are 3 major advantages: 
> - may be faster than mesh2 because we save parse time. 

But if the goal is faster parsing, there are much simpler and more 
effective ways to accomplish it.


> - uses less memory while not requiring the user to do complicated 
>   viewport and grid size calculations. 

I've never suggested the user should have to do that.


>   This means, if you use a very deep scene (fly along a valley), 
>   it would produce nice scenery from a low/med-resolution mesh 
>   with constant grid size. 

Which could be done just as well before rendering.


> - And, as you mentioned: 
> > it only does it to the parts of the mesh where it is needed...

This seems to be the main advantage. Subdividing and displacing a big 
mesh could take a lot of CPU time, and limiting it to the needed areas 
would not be easy, so you would store more triangles than necessary. My 
main point is that memory use seems to be more of a side benefit, if it 
really can give a speed benefit. (time/CPU is much more costly than 
memory or storage)


> > There are macros that make spherical and cylindrical height fields. Not
> > toroidal or cubical ones, though.
> > 
> IIRC these macros just create a mesh from the data. 
> And a cube is not needed: one can use 6 height fields instead. 

I'm not really sure what a cube would be defined as, but it wouldn't be 
very useful if it were simply 6 planar height fields. Maybe something 
more like the spherical height field, just using a cube as the base 
shape. I actually can't think of any use for it...it would probably be 
better to just implement subdivision/displacement for meshes.
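A minimal Python sketch of one such subdivision/displacement step, assuming a toy height function and a single fixed normal (a real implementation would interpolate per-vertex normals):

```python
# One step of the subdivide-and-displace idea: split a triangle at its
# edge midpoints into four, then push each new vertex along the surface
# normal by a height function.  The fixed normal and the height function
# are illustrative assumptions.

def midpoint(a, b):
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def subdivide(tri):
    """Split one triangle into four via its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def displace(tri, normal, height):
    """Move each vertex of `tri` along `normal` by height(vertex)."""
    return tuple(tuple(p + height(v) * n for p, n in zip(v, normal))
                 for v in tri)

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tris = subdivide(tri)
print(len(tris))  # 4

# Displace the subdivided patch with a toy height function (z += x*y):
bumpy = [displace(t, (0.0, 0.0, 1.0), lambda v: v[0] * v[1]) for t in tris]
```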

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Thorsten Froehlich
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 17:01:15
Message: <3eaaf39b$1@news.povray.org>
In article <3eaa7ff7@news.povray.org> , Wolfgang Wieser <wwi### [at] gmxde>  
wrote:

>> Well, then you had better get a system with a 64-bit processor,
>>
> Oh, have one for me?

Sure, it costs only about 400 Euros more than an Aldi PC ;-)
<http://store.sun.com/catalog/doc/BrowsePage.jhtml?cid=85825&parentId=48612>

> But, I agree: The fact that a genuine mesh does not fit into RAM is
> not a POVRay bug because I see little chance to significantly reduce
> genuine mesh RAM consumption (after looking at the POV code).

Indeed, there is little that can be done about it.  And it already uses
"only" 32-bit floats...

    Thorsten



From: Simon Adameit
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 17:35:25
Message: <3eaafb9d@news.povray.org>
Christopher James Huff wrote:
> In article <3eaadd08@news.povray.org>, Wolfgang Wieser <wwi### [at] gmxde> 
> wrote:
> 
> 
> This seems to be the main advantage. Subdividing and displacing a big 
> mesh could take a lot of CPU time, and limiting it to the needed areas 
> would not be easy, so you would store more triangles than necessary. My 
> main point is that memory use seems to be more of a side benefit, if it 
> really can give a speed benefit. (time/CPU is much more costly than 
> memory or storage)
> 

There has to be a reason why the Reyes algorithm is still used ;-)
The problem is that if you hit a memory limit, there often is not much 
you can do about it besides buying more memory; with time/CPU you can at 
least wait. And it's not only the geometry that has big memory 
requirements, but also radiosity, photon mapping, etc.



From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 17:50:49
Message: <3eaaff38@news.povray.org>
Thorsten Froehlich wrote:

> In article <3eaa7ff7@news.povray.org> , Wolfgang Wieser <wwi### [at] gmxde>
> wrote:
> 
>>> Well, then you had better get a system with a 64-bit processor,
>>>
>> Oh, have one for me?
> 
> Sure, it costs only about 400 Euros more than an Aldi PC ;-)
><http://store.sun.com/catalog/doc/BrowsePage.jhtml?cid=85825&parentId=48612>
>
:) Nice, but...

>The Sun Blade[tm] 150 workstation is an affordable, full-featured, 64-bit
>workstation with a 550/650-MHz UltraSPARC[R] IIi processor, up to 2 GB of
>memory
>
Oh, just up to 2 GB of RAM?
The issue was to be able to use more than 4 GB...

Wolfgang



From: Wolfgang Wieser
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 18:10:55
Message: <3eab03ee@news.povray.org>
Christopher James Huff wrote:

>> Just tell me why I should use "intelligence" to do complicated
>> viewport and culling calculations (think of animations), which require
>> a separate mesh include file for each frame, if the problem could be
>> dealt with in an easier and (as I think) more elegant way?
> 
> I never mentioned viewport calculations or culling. It doesn't require
> huge amounts of work...just don't use high resolution meshes where low
> res meshes are adequate.
> 
The alternative is to use 100 million triangles. 
95% will be useless, but the foreground needs the fine grid. 

Either smart "intelligent" mixed-resolution meshes with (at least 
primitive) viewport culling or a very fine mesh is required. 
OR, subdivision at render time. 
Anything else?

>> > A binary mesh format would make loading
>> > high-res meshes faster.
>> ...and smaller on HD.
> 
> Really, who cares about file size? It is only an issue when transferring
> files. I view it as simply a side effect of using a format more
> convenient for fast loading.
> 
When rendering films, file size gets interesting, especially if you 
need a separate mesh for each frame. And when rendering the film in 
a distributed environment the issue is transferring the meshes. 

But that's not the major issue we're talking about here. 

>>   This means, if you use a very deep scene (fly along a valley),
>>   it would produce nice scenery from a low/med-resolution mesh
>>   with constant grid size.
> 
> Which could be done just as well before rendering.
> 
Which results in a 100 million triangle mesh.
Or requires some "intelligent" mixed-resolution mesh and triangle culling. 
AND it requires knowledge of the camera position which means that a 
separate mesh is needed for each frame. 

Oh dear... we're back at the beginning. 

I don't know if you have ever tried to put a POV camera in a topography-mesh 
valley and looked at all the ugly triangles in the foreground. 
The only solution (without patching POV-Ray) I see is doing some really 
non-trivial calculations on the input topography data, extracting a 
mesh with a fine grid in the foreground and a larger grid in the background. 
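A sketch of such a grid-extraction rule in Python: coarsen the sample step with camera distance so triangle edges subtend a roughly constant screen angle. All constants and names are illustrative, not derived from POV-Ray:

```python
# Choose a power-of-two grid step so that (cell size / distance) stays
# near a target screen angle: fine grid in the foreground, coarse grid
# in the background.  base_cell and target_edge_angle are made-up values.
def grid_step(distance, base_cell, target_edge_angle=0.002):
    """Coarsening factor for topography samples at a given camera distance."""
    if distance <= 0:
        return 1
    ideal = target_edge_angle * distance / base_cell
    step = 1
    while step * 2 <= ideal:
        step *= 2
    return step

# For a 10 m base grid: full resolution near the camera, then doubling.
for d in (1_000.0, 10_000.0, 80_000.0):
    print(d, grid_step(d, 10.0))  # steps 1, 2, 16
```

The catch, as noted above, is that this depends on the camera position, so for an animation the extraction must be redone per frame.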

Wolfgang



From: Christopher James Huff
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 26 Apr 2003 19:11:32
Message: <cjameshuff-49BC15.19124526042003@netplex.aussie.org>
In article <3eaafb9d@news.povray.org>,
 Simon Adameit <sim### [at] gaussschule-bsde> wrote:

> There has to be a reason why the reyes algorithm is still used ;-)

Not with raytracing. As far as I know, Reyes is limited to scanline only.

-- 
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: chr### [at] tagpovrayorg
http://tag.povray.org/



From: Thorsten Froehlich
Subject: Re: Direct Ray Tracing of Displacement Mapped Triangles
Date: 27 Apr 2003 05:10:34
Message: <3eab9e8a@news.povray.org>
In article <3eaaff38@news.povray.org> , Wolfgang Wieser <wwi### [at] gmxde>  
wrote:

> Oh, just up to 2 GB of RAM?
> The issue was to be able to use more than 4 GB...

Since when is a system limited by the amount of physical memory? - It has a
40 GB hard disk in the standard configuration, so you can use 39 GB as swap
space!  Or do you expect them to allow you to put 4000 Euros worth of RAM
into an entry-level system? ;-)

If you want more RAM and have the necessary pocket money to spend, I would
recommend one of these systems:

<http://www-132.ibm.com/content/home/store_IBMPublicUSA/en_US/eServer/pSeries/pSeries.html>
<http://store.sun.com/catalog/doc/BrowsePage.jhtml?cid=48620&parentId=26829>

;-)

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.