On 18-01-26 at 16:22, Kenneth wrote:
> [My apologies beforehand; this is probably going to turn into a series of
> inter-related questions.]
>
> In POV-Ray's current structure, there are two 'stages' to any render: the
> parsing, and the rendering. ( Although, I don't know if those are called the
> 'front end' and 'back end', in the current use of those words.)
>
> It's still not truly clear to me which aspects of the program are 'evaluated'
> during those particular stages, in regards to parsing times *and* the best use
> of RAM memory-- particularly for *duplicated* elements in a scene (objects,
> textures, etc. etc.) For example, I do know that image_maps, height_fields and
> triangle meshes can be pre-#declared before their (repeated) use, to save memory
> -- they are then 'instantiated' in the scene repeatedly, with very little if any
> additional memory overhead. But some objects are not-- I think isosurfaces and
> parametric objects would be examples. They are evaluated during the rendering
> stage(?) For other elements like typical textures/pigments/finishes, I don't
> know the answer, nor do I have a clear idea as to what other kinds of elements
> might fall into the parsing vs. rendering category... or which can be
> instantiated and which cannot.
>
> A first basic question: Does pre-#declaring *anything* in POV-Ray (or any other
> programming language, for that matter) cause it to be evaluated only once and
> instantiated later (excepting isosurfaces and ....)? It's quite difficult to
> set up a meaningful experiment to test this question, as there are just too many
> elements and permutations to consider.
>
> A simple example would be...
>
> // the elements are pre-#declared here...
> #declare TEX = texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> #declare B = box{0,1}
>
> #declare C = 1;
> #while(C <= 100000)
> object{B texture {TEX} translate 1.1*C*x}
> #declare C = C + 1;
> #end
>
> // ...versus NO pre-#declared elements
> #declare C = 1;
> #while(C <= 100000)
> box{0,1
> texture{pigment{gradient y} finish{ambient .1 diffuse .7}}
> translate 1.1*C*x
> }
> #declare C = C + 1;
> #end
Both will use the same amount of memory, but the first way will parse
faster.
>
> Is there a difference (of any kind) between using one vs. the other?
>
> To 'muddy the waters' a bit (or maybe not?), add something simple like a random
> scale to the texture, in the first-example #while loop (just a typical
> 'conceptual' example of a change.) Does this cause the texture itself to be
> re-evaluated every time, and/or to require more memory?
>
> #declare S = seed(123);
> #declare C = 1;
> #while(C <= 100000)
> object{B texture {TEX scale .5 + .5*rand(S)} translate 1.1*C*x}
> #declare C = C + 1;
> #end
It will use more memory, as each texture copy will also get a transform
matrix attached to it.
>
>
>
Instantiating only works with meshes and image files.
A height_field is really just a specialized kind of mesh, defined by an
image or a function.
Declaring an object and using it many times gets a new copy made for
each instance. It can parse faster, but uses the same amount of memory.
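For example, a #declared mesh can be placed many times with almost no extra memory, because the triangle data itself is shared between instances (a minimal sketch in the same style as the loops above):

// the triangle data is stored only once...
#declare M = mesh{
  triangle{ <0,0,0>, <1,0,0>, <0,1,0> }
  triangle{ <0,0,0>, <0,1,0>, <0,0,1> }
}
// ...and each instance only adds a reference plus a transform
#declare C = 1;
#while(C <= 100000)
  object{M translate 1.1*C*x}
  #declare C = C + 1;
#end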
In your example of many objects sharing a single texture, you can save a
*LOT* of memory by grouping them into a union and applying the texture
to the whole union at once:
union{
  #declare C = 1;
  #while(C <= 100000)
    object{B translate 1.1*C*x}
    #declare C = C + 1;
  #end
  texture { TEX }
}
This way, you only have a single texture instead of 100000 whole textures.
If the texture is altered for each object, then you can't use the union
to optimize it.
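If a handful of distinct scales is acceptable instead of a fully random one, a possible middle ground (my own sketch, not something tested here) is to split the objects into a few unions, one per texture variant, so each variant is stored only once:

#declare V = 0;
#while(V < 4)  // four texture variants
  union{
    #declare C = 1;
    #while(C <= 25000)
      object{B translate 1.1*(V*25000 + C)*x}
      #declare C = C + 1;
    #end
    texture{TEX scale .5 + .125*V}  // one texture per union
  }
  #declare V = V + 1;
#end

You lose the per-object randomness, but only 4 textures exist in memory instead of 100000.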