From: Kenneth
Subject: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 16:25:01
Message: <web.5a6b9c29b3013f23a47873e10@news.povray.org>
[My apologies beforehand; this is probably going to turn into a series of
inter-related questions.]

In POV-Ray's current structure, there are two 'stages' to any render: the
parsing and the rendering. (Although I don't know if those are what is called
the 'front end' and 'back end', in the current use of those words.)

It's still not truly clear to me which aspects of the program are 'evaluated'
during those particular stages, in regard to parsing times *and* the best use
of RAM memory-- particularly for *duplicated* elements in a scene (objects,
textures, etc.). For example, I do know that image_maps, height_fields and
triangle meshes can be pre-#declared before their (repeated) use, to save memory
-- they are then 'instantiated' in the scene repeatedly, with very little if any
additional memory overhead. But some objects are not-- I think isosurfaces and
parametric objects would be examples; they are evaluated during the rendering
stage(?). For other elements like typical textures/pigments/finishes, I don't
know the answer, nor do I have a clear idea as to what other kinds of elements
might fall into the parsing vs. rendering category... or which can be
instantiated and which cannot.

A first basic question: Does pre-#declaring *anything* in POV-Ray (or any other
programming language, for that matter) cause it to be evaluated only once and
instantiated later (excepting isosurfaces and ....)?  It's quite difficult to
set up a meaningful experiment to test this question, as there are just too many
elements and permutations to consider.

A simple example would be...

// the elements are pre-#declared here...
#declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
#declare B = box{0,1}

#declare C = 1;
#while(C <= 100000)
object{B texture TEX translate 1.1*C*x}
#declare C = C + 1;
#end

// ...versus NO pre-#declared elements
#declare C = 1;
#while(C <= 100000)
box{0,1
    texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
    translate 1.1*C*x
   }
#declare C = C + 1;
#end

Is there a difference (of any kind) between using one vs. the other?

To 'muddy the waters' a bit (or maybe not?), add something simple like a random
scale to the texture in the first-example #while loop (just a typical
'conceptual' example of a change). Does this cause the texture itself to be
re-evaluated every time, and/or to require more memory?

#declare S = seed(123);
#declare C = 1;
#while(C <= 100000)
object{B texture {TEX scale .5 + .5*rand(S)} translate 1.1*C*x}
#declare C = C + 1;
#end



From: Stephen
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 17:26:17
Message: <5a6bab09$1@news.povray.org>
On 26/01/2018 21:22, Kenneth wrote:
> A simple example would be...
> 
> // the elements are pre-#declared here...
> #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> #declare B = box{0,1}
> 
> #declare C = 1;
> #while(C <= 100000)
> object{B texture TEX translate 1.1*C*x}
> #declare C = C + 1;
> #end
> 
> // ...versus NO pre-#declared elements
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
>      texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
>      translate 1.1*C*x
>     }
> #declare C = C + 1;
> #end

Being a smarty pants, I was going to direct you to the command-line option
+gsfile (I only learned about it in the last year or two).
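For reference, that would look something like this on the command line (if I
remember the switches right -- scene.pov and stats.txt are just placeholder
names, so double-check the exact syntax):

povray +Iscene.pov +W800 +H600 +GSstats.txt

which should send the statistics text stream, peak memory line included, to
stats.txt, so you can compare two runs.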
But I ran into a problem. I created two files, one with:

#declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse  .7}}
#declare B = box{0,1}

#declare C = 1;
#while(C <= 100000)
object{B texture {TEX} translate 1.1*C*x}
#declare C = C + 1;
#end

And got
----------------------------------------------------------------------------
Shadow Ray Tests:            239987   Succeeded:                     0
----------------------------------------------------------------------------
Peak memory used:         289583104 bytes
----------------------------------------------------------------------------

But with:

#declare C = 1;
#while(C <= 100000)
box{0,1
     texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
     translate 1.1*C*x
    }
#declare C = C + 1;
#end

There was no Peak memory message.



I hope that makes sense.
Using Win7, POV-Ray 3.7.

-- 

Regards
     Stephen



From: Stephen
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 17:29:36
Message: <5a6babd0@news.povray.org>
On 26/01/2018 22:26, Stephen wrote:
> I hope that makes sense.

Because it doesn't to me.

-- 

Regards
     Stephen



From: Stephen
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 17:46:06
Message: <5a6bafae@news.povray.org>
On 26/01/2018 22:29, Stephen wrote:
> On 26/01/2018 22:26, Stephen wrote:
>> I hope that makes sense.
> 
> Because it doesn't to me.
> 

A couple more tests.
Sometimes you get the stats, sometimes you don't.

The file what I used. ;-)

Line breaks on paste, sorry


// PoVRay 3.7 Scene File " ... .pov"
// author:  ...
// date:    ...
//----------------------------------------------------
#version 3.7;
global_settings{ assumed_gamma 1.0 }
#default{ finish{ ambient 0.1 diffuse 0.9 }}
//-----------------------------------------------
#include "colors.inc"
#include "textures.inc"
#include "glass.inc"
#include "metals.inc"
#include "golds.inc"
#include "stones.inc"
#include "woods.inc"
#include "shapes.inc"
#include "shapes2.inc"
#include "functions.inc"
#include "math.inc"
#include "transforms.inc"
//-----------------------------------------------
// camera ---------------------------------------
#declare Camera_0 = camera {/*ultra_wide_angle*/ angle 75      // front view
                             location  <0.0 , 1.0 ,-3.0>
                             right     x*image_width/image_height
                             look_at   <0.0 , 1.0 , 0.0>}

camera{Camera_0}

// sun ------------------------
light_source{<-1500,2500,-2500> color White}
// sky ----------------------------------------
sky_sphere { pigment { gradient <0,1,0>
                        color_map { [0.00 rgb <1.0,1.0,1.0>]
                                    [0.30 rgb <0.0,0.1,1.0>]
                                    [0.70 rgb <0.0,0.1,1.0>]
                                    [1.00 rgb <1.0,1.0,1.0>]
                                  }
                        scale 2
                      } // end of pigment
            } //end of skysphere
// fog ------------------------------------------------

fog{fog_type   2
     distance   50
     color      White
     fog_offset 0.1
     fog_alt    2.0
     turbulence 0.8}
// ground --------------------------------------
plane{ <0,1,0>, 0
        texture{ pigment{ color rgb <0.825,0.57,0.35>}
                 normal { bumps 0.75 scale 0.025  }
                 finish { phong 0.1 }
               } // end of texture
      } // end of plane
//--------------------------------------------------------------------------
//---------------------------- objects in scene
//--------------------------------------------------------------------------



  /*

#declare C = 1;
#while(C <= 100000)
box{0,1
     texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
     translate 1.1*C*x
    }
#declare C = C + 1;
#end

  */

// /*

  #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}

// Line break above  ***************************

#declare B = box{0,1}

#declare C = 1;
#while(C <= 100000)
object{B texture {TEX} translate 1.1*C*x}
#declare C = C + 1;
#end


//   */

-- 

Regards
     Stephen



From: Alain
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 20:54:50
Message: <5a6bdbea$1@news.povray.org>
On 18-01-26 at 16:22, Kenneth wrote:
> [My apologies beforehand; this is probably going to turn into a series of
> inter-related questions.]
> 
> In POV-Ray's current structure, there are two 'stages' to any render: the
> parsing, and the rendering. ( Although, I don't know if those are called the
> 'front end' and 'back end', in the current use of those words.)
> 
> It's still not truly clear to me which aspects of the program are 'evaluated'
> during those particular stages, in regards to parsing times *and* the best use
> of RAM memory-- particularly for *duplicated* elements in a scene (objects,
> textures, etc. etc.) For example, I do know that image_maps, height_fields and
> triangle meshes can be pre-#declared before their (repeated) use, to save memory
> -- they are then 'instantiated' in the scene repeatedly, with very little if any
> additional memory overhead. But some objects are not-- I think isosurfaces and
> parametric objects would be examples. They are evaluated during the rendering
> stage(?)  For other elements like typical textures/pigments/finishes, I don't
> know the answer, nor do I have a clear idea as to what other kinds of elements
> might fall into the parsing vs. rendering category... or which can be
> instantiated and which cannot.
> 
> A first basic question: Does pre-#declaring *anything* in POV-Ray (or any other
> programming language, for that matter) cause it to be evaluated only once and
> instantiated later (excepting isosurfaces and ....)?  It's quite difficult to
> set up a meaningful experiment to test this question, as there are just too many
> elements and permutations to consider.
> 
> A simple example would be...
> 
> // the elements are pre-#declared here...
> #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> #declare B = box{0,1}
> 
> #declare C = 1;
> #while(C <= 100000)
> object{B texture {TEX} translate 1.1*C*x}
> #declare C = C + 1;
> #end
> 
> // ...versus NO pre-#declared elements
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
>      texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
>      translate 1.1*C*x
>     }
> #declare C = C + 1;
> #end

Both will use the same amount of memory, but the first way will parse 
faster.

> 
> Is there a difference (of any kind) between using one vs. the other?
> 
> To 'muddy the waters' a bit (or  maybe not?), add something simple like a random
> scale to the texture, in the first-example #while loop (just a typical
> 'conceptual' example of a change.) Does this cause the texture itself to be
> re-evaluated every time, and/or to require more memory?
> 
> #declare S = seed(123);
> #declare C = 1;
> #while(C <= 100000)
> object{B texture {TEX scale .5 + .5*rand(S)} translate 1.1*C*x}
> #declare C = C + 1;
> #end

It will use more memory as each texture will also get a transform matrix 
attached to it.

> 
> 
> 

Instantiating only works with meshes and image files.
height_fields are really just a specialized kind of mesh, defined by some 
image or function.

Declaring an object and using it many times gets a new copy made for each 
instance. It can parse faster, but uses the same amount of memory.

In your example of many objects sharing a single texture, you can save a 
*LOT* of memory by grouping them into a union and applying the texture 
to the whole union at once:
union{
  #declare C = 1;
  #while(C <= 100000)
  object{B translate 1.1*C*x}
  #declare C = C + 1;
  #end
texture { TEX }
}

This way, you only have a single texture instead of 100000 whole textures.

If the texture is altered for each object, then you can't use the union 
to optimize it.



From: Kenneth
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 21:15:01
Message: <web.5a6be01fa3012ab7a47873e10@news.povray.org>
Stephen <mca### [at] aolcom> wrote:

>
> ...And got
> ----------------------------------------------------------------------------
> Shadow Ray Tests:            239987   Succeeded:                     0
> ----------------------------------------------------------------------------
> Peak memory used:         289583104 bytes
> ----------------------------------------------------------------------------
>
> But with:
>
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
>      texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
>      translate 1.1*C*x
>     }
> #declare C = C + 1;
> #end
>
> There was no Peak memory message.
>

From time to time, I've noticed the same 'missing' peak memory message. I have
no idea why that happens. If I recall(??), this was mentioned in some other post
during the last three months-- or maybe not...?



From: Bald Eagle
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 21:20:00
Message: <web.5a6be0f6a3012ab75cafe28e0@news.povray.org>
"Kenneth" <kdw### [at] gmailcom> wrote:

> In POV-Ray's current structure, there are two 'stages' to any render: the
> parsing, and the rendering.

POV-Ray needs to read the instructions you give for creating the scene (parsing)
before it then shoots the rays and does the calculations for the color value of
each pixel in the image (render).


>( Although, I don't know if those are called the
> 'front end' and 'back end', in the current use of those words.)

Front end would be the POV-Ray editor and the render window.  Back end would be
all the stuff that goes on "behind the scenes".

> It's still not truly clear to me which aspects of the program are 'evaluated'
> during those particular stages, in regards to parsing times *and* the best use
> of RAM memory-- particularly for *duplicated* elements in a scene (objects,
> textures, etc. etc.) For example, I do know that image_maps, height_fields and
> triangle meshes can be pre-#declared before their (repeated) use, to save memory
> -- they are then 'instantiated' in the scene repeatedly, with very little if any
> additional memory overhead.

Well, the image_map just gets read in and then used as a pigment pattern where
needed. The height field is sorta the same thing - just geometric coordinates
instead of color values.
I believe the mesh geometry data takes up a block of memory once, and then each
instantiation carries whatever texture overhead you add to it.
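Roughly like this, as a sketch (the single-triangle mesh is just a stand-in for
a big mesh, and the pigments for whatever textures you attach):

#declare Tri = mesh { triangle { <0,0,0>, <1,0,0>, <0,1,0> } }

object { Tri pigment { rgb <1,0,0> } }                // mesh data parsed once
object { Tri translate x*2 pigment { rgb <0,0,1> } }  // this copy shares the bulk of the mesh data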

> But some objects are not-- I think isosurfaces and
> parametric objects would be examples. They are evaluated during the rendering
> stage(?)  For other elements like typical textures/pigments/finishes, I don't
> know the answer, nor do I have a clear idea as to what other kinds of elements
> might fall into the parsing vs. rendering category... or which can be
> instantiated and which cannot.

I _think_ the isosurface and parametric objects can be declared as objects,
which get evaluated at parse time, and then you can just use them like anything
else.

Your timing is interesting with this, as I was thinking about some of this over
the last few days with regard to include files.
I think that the texture info takes up a block of memory, and then each object
you attach it to needs whatever reference information, but I believe the only
time you use up another memory block is when you declare a new texture
identifier.
So I think if you do object {Whatever texture {Texture1 rotate x*90}}, that's
better on memory than #declare Texture2 = texture {Texture1 rotate x*90} then
object {Whatever texture {Texture2}}
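I.e., as a sketch of the two ways (Whatever and Texture1 being whatever object
and texture you already have lying around -- I haven't measured the actual
memory numbers):

// inline modification -- no new texture identifier is declared
object { Whatever texture { Texture1 rotate x*90 } }

// versus declaring a second identifier first
#declare Texture2 = texture { Texture1 rotate x*90 }
object { Whatever texture { Texture2 } }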


>
> A simple example would be...
>
> // the elements are pre-#declared here...
> #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> #declare B = box{0,1}
>
> #declare C = 1;
> #while(C <= 100000)
> object{B texture TEX translate 1.1*C*x}
> #declare C = C + 1;
> #end
>
> // ...versus NO pre-#declared elements
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
>     texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
>     translate 1.1*C*x
>    }
> #declare C = C + 1;
> #end
>
> Is there a difference (of any kind) between using one vs. the other?

AFAIK, not with regard to memory usage.

> To 'muddy the waters' a bit (or  maybe not?), add something simple like a random
> scale to the texture, in the first-example #while loop (just a typical
> 'conceptual' example of a change.) Does this cause the texture itself to be
> re-evaluated every time, and/or to require more memory?
>
> #declare S = seed(123);
> #declare C = 1;
> #while(C <= 100000)
> object{B texture (TEX scale .5 + .5*rand(S)} translate 1.1*C*x}
> #declare C = C + 1;
> #end

I think so due to the way it gets "attached" to the object...



From: Kenneth
Subject: Re: parse vs. render stages, and best use of memory
Date: 26 Jan 2018 21:40:01
Message: <web.5a6be5dca3012ab7a47873e10@news.povray.org>
Alain <kua### [at] videotronca> wrote:
>
> In your example of many objects sharing a single texture, you can save a
> *LOT* of memory by grouping them into a union and applying the texture
> to the whole union at once:
> union{
>   #declare C = 1;
>   #while(C <= 100000)
>   object{B translate 1.1*C*x}
>   #declare C = C + 1;
>   #end
> texture { TEX }
> }
>
> This way, you only have a single texture instead of 100000 whole textures.
>

A good idea; I didn't take that into account. (Another example of the 'endless
permutations' to consider!) But my own example was purposely designed with
100000 same-textured items that are then individually moved (or scaled, or
rotated, or...)-- as an example of how that single 'change' may or may not
impact parse time and memory usage. And this particular object/texture/loop
example is just one of hundreds of similar situations that might arise while
coding a scene. It's not clear to me what (if any) impact such 'minor' changes
might have.

BTW, another example would be MEDIA. I *think* it's evaluated during the render
stage rather than the parse stage-- negating any particular advantage of
pre-#declaring such a media object beforehand to save memory, when it's used
multiple times in a scene. But I don't know for sure.
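Just to make that concrete, I mean something along these lines (a rough,
untested sketch):

#declare M = media { emission 0.5 density { spherical } }

sphere { 0, 1 hollow interior { media { M } } }
sphere { 0, 1 hollow interior { media { M } } translate x*3 }

i.e. one declared media used in several objects -- does the #declare actually
buy anything here, memory-wise?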



From: Thomas de Groot
Subject: Re: parse vs. render stages, and best use of memory
Date: 27 Jan 2018 02:45:42
Message: <5a6c2e26@news.povray.org>
On 27-1-2018 2:56, Alain wrote:
> Instantiating only works with meshes and image files.
> height_fields are really just a specialized kind of mesh, defined by some 
> image or function.
> 
> Declaring an object and using it many times gets a new copy made for each 
> instance. It can parse faster, but uses the same amount of memory.
> 

Another interesting example in this vein can be found here:

http://www.econym.demon.co.uk/holetut/index.htm

-- 
Thomas



From: clipka
Subject: Re: parse vs. render stages, and best use of memory
Date: 28 Jan 2018 07:32:22
Message: <5a6dc2d6@news.povray.org>
On 26.01.2018 at 22:22, Kenneth wrote:

> A first basic question: Does pre-#declaring *anything* in POV-Ray (or any other
> programming language, for that matter) cause it to be evaluated only once and
> instantiated later (excepting isosurfaces and ....)?  It's quite difficult to
> set up a meaningful experiment to test this question, as there are just too many
> elements and permutations to consider.

Maybe it's worth first clearing up the term "evaluate".

That term is normally only used when talking about /expressions/, e.g.
`2*3+4` - more specifically, /constant/ expressions, i.e. expressions
that have no unspecified variables. For example, `2*3+x` can only be
evaluated if x is a concrete value. Such expressions may contain
functions, but again all parameters of such functions must be constant
expressions themselves, otherwise it cannot be evaluated.
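A trivial SDL sketch of what I mean:

#declare A = 2*3 + 4;                 // constant expression: evaluated at parse time, A = 10
#declare F = function(x) { 2*3 + x }  // cannot be evaluated yet -- x is unspecified
#declare B = F(4);                    // now it can be: B = 10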

You can describe and/or interpret geometric objects and textures as
functions, but not as constant expressions. So technically you cannot
"evaluate" them in the above sense, unless you want to know something
about the object or function for a given point in space (e.g. test
whether that particular point is inside or outside a given object, or
compute a given texture's colour for that particular point in space).

Thus objects and textures can only ever be "evaluated" during the render
phase (the functions `trace`, `inside` and `eval_pigment` are special
cases which I won't discuss here), and I suspect you probably mean
something else entirely.


Maybe what you mean is something I'd call "creating an instance" of, or
just "instantiating", an object: Reserving a block of memory as required
for a primitive of that particular type, and filling in the values
required later during the render stage. This may include a few
precomputed values, but for most primitives it is just the data
specified in the SDL code. (In contrast to most rendering engines,
POV-Ray does not first need to tessellate, i.e. create a mesh from, the
primitives.)

Such instantiation of an object /always/ happens during the parse stage,
namely in the following situations:

- Whenever you use any of the primitive keywords (`sphere`, `box`,
`mesh`, etc.), an object is instantiated from scratch as per the
parameters specified in the SDL code. This is also true for CSG compound
objects: They can be considered a special type of primitive that is
comprised of multiple elements which in turn are also primitives.

- Whenever you use the `object` keyword, an object is instantiated as a
copy of whatever you specify inside.

What happens to such an instance depends on the context:

- When used in a `#declare` or `#local` statement, the object instance
is stored in that variable.

- When used in a CSG statement, the object instance effectively becomes
incorporated into the CSG you're about to instantiate.

- When used in the scene itself, the object instance is inserted into
the scene.

Only objects inserted into the scene (either directly or by being part
of a CSG that is inserted into the scene) continue to "live" past the
end of the parse stage.
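To put the above into a contrived sketch:

#declare B = box { 0, 1 }     // instance created and stored in the variable B

union {
  object { B }                // a copy of B, incorporated into the CSG
  sphere { <2,0,0>, 0.5 }     // a primitive instantiated from scratch, also part of the CSG
}                             // the whole union is inserted into the scene

object { B translate x*5 }    // another copy of B, inserted into the scene directly

Of these, only the union and the last object make it past the parse stage; the
instance stored in B itself does not.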

The same is essentially true for textures, pigments and interiors.

Whenever an object instance is inserted into the scene, some additional
instantiations may also happen. For instance, if a primitive does not
have an explicit texture, a texture will be instantiated as a copy of
either the containing CSG's texture or the default texture.


As a special case, some elements that are notorious for requiring lots
of memory (such as mesh or blob) are implemented in such a way that they
share the bulk (but not all) of their data across copied instances.

I'd have to take a deep look into the code to say which scene elements
do share portions of their data; the only thing I can say off the top of
my head is that there's still quite some room for improvement there.


> // the elements are pre-#declared here...
> #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> #declare B = box{0,1}
> 
> #declare C = 1;
> #while(C <= 100000)
> object{B texture TEX translate 1.1*C*x}
> #declare C = C + 1;
> #end

(If I'm not mistaken you need curly braces around `TEX` when using it.)

> // ...versus NO pre-#declared elements
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
>     texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
>     translate 1.1*C*x
>    }
> #declare C = C + 1;
> #end
> 
> Is there a difference (of any kind) between using one vs. the other?

Yes: The version /with/ pre-declared elements actually takes up /more/
memory during parsing (namely the space required to hold the declared
element). On the upside, it will parse faster.


> To 'muddy the waters' a bit (or  maybe not?), add something simple like a random
> scale to the texture, in the first-example #while loop (just a typical
> 'conceptual' example of a change.) Does this cause the texture itself to be
> re-evaluated every time, and/or to require more memory?
> 
> #declare S = seed(123);
> #declare C = 1;
> #while(C <= 100000)
> object{B texture (TEX scale .5 + .5*rand(S)} translate 1.1*C*x}
> #declare C = C + 1;
> #end

No change in memory consumption there: Even without the transformation
of the texture a copy is currently created.


A while ago I had made an attempt to avoid copying of textures wherever
possible, but that turned out to be more complicated than expected, most
notably because when transforming an object the texture is also
implicitly transformed accordingly. Thus, at least portions of the
texture (namely any transformation information) have to be copied in
most cases anyway. So for now that endeavour has been shelved.
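A quick sketch of that implicit transformation, just for illustration:

#declare TEX = texture { pigment { gradient y } }

box { 0, 1 texture { TEX } translate x*5 }
// the gradient moves along with the box, so the copy of TEX attached to this
// box has to carry its own transformation data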



