  decreasing mesh memory usage. (Message 5 to 14 of 14)  
From: Gilles Tran
Subject: Re: decreasing mesh memory usage.
Date: 28 Oct 2003 12:59:20
Message: <3f9eae78$1@news.povray.org>

news:3f9ea208@news.povray.org...
> I'm trying to create as large a mesh as possible with my available
> memory. I have questions about what may help.

People are going to give you very precise answers about memory use and
meshes, but in my experience, for practical purposes, you can fit a lot of
big meshes in 1 GB (which is what I have). For instance, in a scene I'm
working on, I have around 400-500 MB worth of different meshes and
textures, and I'm rendering this at 4800*6400 with radiosity. Note that
mesh size is not only a RAM issue, but also a parsing time one: scenes
that take several minutes to parse limit your own development time.

Here are some memory saving ideas if the computer starts swapping:

- some meshes (typically man-made objects rather than organic ones) can
benefit from symmetry and repetition. For instance, a car mesh can be made
of a half-car, one wheel/tyre and the remaining non-symmetrical pieces.
The full car is then made of the half-car and its mirror, 4 copies of the
wheel and of the other pieces. The wheel itself can be created as a
quarter of the object. Stuff like bolts and small details can also benefit
a lot from instancing, provided that you know where to put them (see the
sketch after this list).

- not all objects must be full-res and not all parts of an object deserve to
be full-res. Reserve the full res for the main visible parts of the main
objects and use smaller res for everything else. Modellers can also help you
to reduce the poly count. In Rhino, I fine tune the poly count for each
object part according to its importance and visibility in the final image.
Low res versions can also be handy during tests and for tricks like what
follows:

- render in several passes. Purists will certainly throw things at me, but
nothing (apart from a clear conscience and reflection-crazy scenes)
prevents you from adding some objects at a later stage, by rendering them
separately and then cutting and pasting the partial renders where they
belong in the pic. If you have low-res versions of the other objects, you
can even use them so that they participate in the reflections, radiosity
etc. shown in the partial render. In one case, I rendered the main object
over a simplified background (but with the right shapes, colors and
lighting) and then pasted it, using a mask, into the picture rendered
without it. It was a big time and RAM saver.
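
As a small illustration of the instancing idea above (a minimal sketch
with made-up geometry and positions, not from an actual scene): a declared
mesh is stored once, and every object { } reference to it shares the same
triangle data, so the extra copies cost very little memory.

    // Stand-in geometry; a real half-car would have thousands of triangles.
    #declare HalfCar = mesh {
        triangle { <0, 0, 0>, <2, 0, 0>, <2, 1, 0> }
        triangle { <0, 0, 0>, <2, 1, 0>, <0, 1, 0> }
    }
    #declare Wheel = mesh {
        triangle { <0, 0, 0>, <0.3, 0, 0>, <0.3, 0.3, 0> }
    }

    // The full car: the half-car plus its mirror image, and four
    // references to the single wheel mesh. Each extra instance costs
    // little more than a transformation.
    union {
        object { HalfCar }
        object { HalfCar scale <-1, 1, 1> }          // mirrored second half
        object { Wheel translate <-1.2, 0.3,  1.9> }
        object { Wheel translate < 1.2, 0.3,  1.9> }
        object { Wheel translate <-1.2, 0.3, -1.9> }
        object { Wheel translate < 1.2, 0.3, -1.9> }
        pigment { rgb <0.7, 0.1, 0.1> }
    }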

Of course, this is all scene-dependent ultimately.

G.

-- 

**********************
http://www.oyonale.com
**********************
- Graphic experiments
- POV-Ray and Poser computer images
- Posters



From: Shay
Subject: Results
Date: 28 Oct 2003 14:27:01
Message: <3f9ec305$1@news.povray.org>
"Christoph Hormann" <chr### [at] gmxde> wrote in message
news:rgl### [at] tritonimagicode...
|
| Why not try it out?
|

OK, tried it out.

My first idea, a separate mesh for each texture, was a failure. One mesh
with 2500 vertices, 250 textures, and 1500 face and texture indices had a
peak memory usage of 608,867 bytes. A group of 250 meshes, each with 10
vertices and 6 face indices, had a peak memory usage of 698,799 bytes. The
mesh without textures used 337,203 bytes and the texture definitions used
an additional 144,640 for a total of 481,843. So, one big mesh used
127,024 bytes for texture lists and indices, and the group of small meshes
used (for the purposes of the test) 216,956 bytes, more than 70% more!!

Removing normals did help a little, although I was surprised that the
number of entries in the normal_indices had *no* effect. The number of
entries in the normal_vectors did. The above mesh with a normal defined
for each vertex used 367,203 bytes. The mesh with only 1500 normal
vectors used 355,203 bytes.
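
For reference, this is roughly the mesh2 layout being compared (a tiny
sketch with placeholder data, not the actual test mesh), showing where the
texture_list, the per-face texture indices, and the normal_vectors /
normal_indices sit:

    mesh2 {
        vertex_vectors {
            4,
            <0, 0, 0>, <1, 0, 0>, <1, 1, 0>, <0, 1, 0>
        }
        normal_vectors {
            1,                    // fewer normals than vertices is allowed
            <0, 0, -1>
        }
        texture_list {
            2,
            texture { pigment { rgb <1, 0, 0> } }
            texture { pigment { rgb <0, 0, 1> } }
        }
        face_indices {
            2,
            <0, 1, 2>, 0,         // trailing number indexes texture_list
            <0, 2, 3>, 1
        }
        normal_indices {
            2,
            <0, 0, 0>, <0, 0, 0>  // all faces reuse normal 0
        }
    }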

 -Shay



From: Shay
Subject: Re: decreasing mesh memory usage.
Date: 28 Oct 2003 15:31:33
Message: <3f9ed225@news.povray.org>
"Gilles Tran" <tra### [at] inapginrafr> wrote in message
news:3f9eae78$1@news.povray.org...

| scenes that take several minutes to parse limit your own
| development time.

I have an advantage here in that I already know the shape of what I am
developing, so adjustments in that area will not be required. I am
writing my code to produce dummies at the same time it produces a
high-resolution mesh, so this will help me a lot when it comes to
texturing and positioning the scene.
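
(For illustration, one hypothetical way to structure that kind of
dual-output code, with made-up names and geometry: a macro emits either a
cheap stand-in or the full geometry depending on a single switch, so
textures and placement can be worked out on the dummies.)

    #declare UseDummies = on;

    #macro Part(UseDummy)
        #if (UseDummy)
            box { <0, 0, 0>, <1, 1, 0.05> }  // stand-in, same footprint
        #else
            mesh {
                // the full-resolution triangles would be generated here
                triangle { <0, 0, 0>, <1, 0, 0>, <1, 1, 0> }
                triangle { <0, 0, 0>, <1, 1, 0>, <0, 1, 0> }
            }
        #end
    #end

    object {
        Part(UseDummies)
        pigment { rgb 0.8 }
        translate <0, 1, 3>
    }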

|
| - some meshes (typically man-made objects rather than
| organic ones) can benefit from symmetry and repetition.

Yes, I am fortunate in that my model has a lot of symmetry and
repetition. It does, however, also have an incredible number of distinct
pieces.

|
| - Modellers can also help you to reduce the poly count.



Well, I'd use a modeler if I could, but I can't stand losing control
over each and every one of my vertices. I am writing new code which is
very stingy with vertices, so I doubt that anything could be reduced
anyway.

|
| - render in several passes.

After the disappointing results of my tests, I think that this will be
the best way to go. A real advantage of this method is that, once my
render is complete, it will be simple to assemble a full-resolution mesh
with no compromises. I may not be able to render it, but at least I'll
have the complete model.

Thank you,
 -Shay



From: Thorsten Froehlich
Subject: Re: Results
Date: 29 Oct 2003 08:59:33
Message: <3f9fc7c5$1@news.povray.org>
In article <3f9ec305$1@news.povray.org> , "Shay" <sah### [at] simcopartscom> wrote:

> Removing normals did help a little, although I was surprised that the
> number of entries in the normal_indices had *no* effect.

This is to be expected and not surprising at all if you think about it...

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Thorsten Froehlich
Subject: Re: decreasing mesh memory usage.
Date: 29 Oct 2003 09:01:47
Message: <3f9fc84b$1@news.povray.org>
In article <3f9ea4c0$1@news.povray.org> , "Shay" <sah### [at] simcopartscom> wrote:

> I'm not sure what you mean. I'm talking about my 360 or so megs of RAM.
> If there is a way in which I could make use of my 2Gig swap partition,
> then I would like to know about that as well.

Ever since Windows 95, virtual memory has been available to applications
in Windows without any restrictions.  It is nothing new and nothing an
application program or user has to bother about at all if you don't care
about speed...

    Thorsten

____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trfde

Visit POV-Ray on the web: http://mac.povray.org



From: Shay
Subject: Re: decreasing mesh memory usage.
Date: 29 Oct 2003 09:09:25
Message: <3f9fca15$1@news.povray.org>
"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3f9fc84b$1@news.povray.org...
|
| Ever since Windows 95, virtual memory has been available
| to applications in Windows without any restrictions.  It
| is nothing new and nothing an application program or
| user has to bother about at all if you don't care
| about speed...
|

Is this true in Linux as well?

 -Shay



From: Shay
Subject: Re: Results
Date: 29 Oct 2003 09:31:07
Message: <3f9fcf2b$1@news.povray.org>
"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3f9fc7c5$1@news.povray.org...
|
| This is to be expected and not surprising at all if you think
| about it...
|

I wondered in my first post whether each vertex instance had a normal
vector regardless of whether its face was flat or not. I guess that this
is the case. The good news is that if, by some mechanism, it were
otherwise, it would have been a real pain to take advantage of that fact
anyway.

Thank you everyone for your help. My picture is proceeding nicely.

 -Shay



From: Warp
Subject: Re: decreasing mesh memory usage.
Date: 29 Oct 2003 10:13:06
Message: <3f9fd901@news.povray.org>
Shay <sah### [at] simcopartscom> wrote:
> Is this true in Linux as well?

  What do you think the swap partition is for?

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -



From: Shay
Subject: Re: decreasing mesh memory usage.
Date: 29 Oct 2003 10:25:18
Message: <3f9fdbde@news.povray.org>
"Warp" <war### [at] tagpovrayorg> wrote in message
news:3f9fd901@news.povray.org...
| Shay:
|   Is this true in Linux as well?
|
| Warp:
|   What do you think the swap partition is for?

Thanks, I'll take that as a yes.

Honestly, I don't know d!(k about these types of things. I knew that the
swap partition *could* do this. That's why I gave myself a 2G swap
partition when I set up my hard disk. I just wasn't sure whether it
required some action on my or a program's part in order to make this
happen. I've exceeded the RAM on a Win2000 computer here at work a few
times, and the computer just stopped cold and required a restart after
all of the programs had been killed.

I do have a reasonably fast hard drive at home, so I guess I've got 2G
of memory to play with as long as I can wait for the render.

Thank you,
 -Shay



From: Warp
Subject: Re: decreasing mesh memory usage.
Date: 29 Oct 2003 10:53:34
Message: <3f9fe27e@news.povray.org>
Shay <sah### [at] simcopartscom> wrote:
> I just wasn't sure whether it
> required some action on my or a program's part in order to make this
> happen.

  Nope.

  In DOS it did require that, but that's about it. Any *real* operating
system will have a decent memory management system which, with the aid of
the CPU itself, will automatically swap memory pages from RAM to the HD
and vice versa using a more or less complicated algorithm (the basic idea
being that the oldest/least used memory pages are written to the HD when
the OS starts running out of RAM, and the most recent/most used pages are
kept in RAM).
  This happens completely automatically and transparently to the process.
The process will just see that you have e.g. 2 gigabytes of memory even
though in reality you only have 256 megs of physical RAM.

  (The same memory management system will, among other things, also disallow
processes from accessing memory not reserved for them, again with the aid
of the CPU.)

-- 
#macro N(D)#if(D>99)cylinder{M()#local D=div(D,104);M().5,2pigment{rgb M()}}
N(D)#end#end#macro M()<mod(D,13)-6mod(div(D,13)8)-3,10>#end blob{
N(11117333955)N(4254934330)N(3900569407)N(7382340)N(3358)N(970)}//  - Warp -


