On 2021-05-17 12:58 PM (-4), Alain Martel wrote:
>
>> At the moment the macros are made for POV-Ray v3.7,
>> but they should also work in v3.8.
>
> If a macro is made for version 3.7, then, it will run in version 3.8.
> To my knowledge, anything that works in version 3.3 to 3.7 will run in 3.8.
That's not always the case. See, for example:
https://news.povray.org/5cb13ce4%40news.povray.org
Also, where do you find POV-Ray 3.3?
On 5/20/21 2:24 PM, Tor Olav Kristensen wrote:
> Try this:
>
> #declare SomeVectors2D =
>     array[5] {
>         < 1, 5>,
>         <-2, -2>,
>         < 6, 3>,
>         < 0, 2>,
>         < 3, -1>
>     };
> #declare vMean2D = Mean(SomeVectors2D);
Ah, OK! I was thinking only in terms of sets of floats. Still arrays,
but arrays of vectors.
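For reference, here is a minimal Mean macro in that spirit (my own sketch; the
actual macro in the include files may well differ). The same loop handles
floats and vectors, because the promotion rules make the addition read
identically:

#macro Mean(Values)
    #local N = dimension_size(Values, 1);
    #local Sum = 0;
    #local I = 0;
    #while (I < N)
        // Vector promotion lets this accumulate floats or vectors alike.
        #local Sum = Sum + Values[I];
        #local I = I + 1;
    #end
    (Sum / N)
#end

With the SomeVectors2D array above it yields <1.6, 1.4>.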
---
With the introduction of mixed arrays in v3.8, I guess that, so long as we
are willing to work in terms of the largest defined vector and to take the
unspecified components of shorter vectors as zero, things will work out.

There is an exposure if someone drops a float in the wrong place. We get two
different answers with a set-up like the following: the Mean of SomeVectorsA
is OK if you take the float as being promoted to a 4-component vector, but
that of SomeVectorsB is not, because at the time of the addition the parser
doesn't know that the max vector size in the mixed array is four.
#declare SomeVectorsA =
    array mixed[5] {
        < 1, 5>,
        <-2, -2, 0, -1>,
        < 6, 3>,
        < 0, 2, 0, +2>,
        0.123
    }
#declare SomeVectorsB =
    array mixed[5] {
        0.123,
        < 1, 5>,
        <-2, -2, 0, -1>,
        < 6, 3>,
        < 0, 2, 0, +2>
    }
...
Mean <1.0246,1.6246,0.0246,0.2246,0.0000> SomeVectorsA
Mean <1.0246,1.6246,0.0000,0.2000,0.0000> SomeVectorsB
Would it be enough to recommend against mixed array usage? Or might we test
for mixed arrays in some way and stop? Though, I have no idea how to do such
a thing at the moment.
Aside 1: If someone drops a non-numeric thing into a mixed array the
parser appropriately generates an error.
Aside 2: If someone uses a color, the length always goes to the full
five-component color vector. It's possible with colors to mix lengths of
vector specification, even in the traditional non-mixed array case.
rgb 1, // promoted to 1,1,1,0,0
color 1, // warning. promoted to 1,1,1,1,1
rgb< 0, 2>, // promoted to five components. Trailing 0s
rgb< 3, -1, 1> // Same
rgb< 1, 2, 3, 4> // Warning and 4th component used. 5th 0.
rgbf<1, 2, 3, 4> // 4th used, no warning. 5th 0.
Maybe the behavior is all just what it is - user beware?
Bill P.
William F Pokorny <ano### [at] anonymous org> wrote:
> On 5/20/21 2:24 PM, Tor Olav Kristensen wrote:
> > Try this:
> >
> > #declare SomeVectors2D =
> >     array[5] {
> >         < 1, 5>,
> >         <-2, -2>,
> >         < 6, 3>,
> >         < 0, 2>,
> >         < 3, -1>
> >     };
> > #declare vMean2D = Mean(SomeVectors2D);
>
> Ah, OK! I was thinking only in terms of sets of floats. Still arrays,
> but arrays of vectors.
>
> ---
> With the introduction of mixed arrays in v3.8, I guess, so long as we
> are willing to work in terms of the largest defined vector and take the
> unspecified components of shorter vectors as being zero, things will
> work out.
>
> There is an exposure if someone drops in a float in the wrong place. We
> get two different answers with a set up like the following where the
> Mean of the SomeVectorsA is OK if you take the float as being promoted
> to a 4 component vector, but that of SomeVectorsB is not because at the
> time of the addition the parser doesn't know the max vector size in the
> mixed array is four.
>
> #declare SomeVectorsA =
>     array mixed[5] {
>         < 1, 5>,
>         <-2, -2, 0, -1>,
>         < 6, 3>,
>         < 0, 2, 0, +2>,
>         0.123
>     }
> #declare SomeVectorsB =
>     array mixed[5] {
>         0.123,
>         < 1, 5>,
>         <-2, -2, 0, -1>,
>         < 6, 3>,
>         < 0, 2, 0, +2>
>     }
> ...
> Mean <1.0246,1.6246,0.0246,0.2246,0.0000> SomeVectorsA
> Mean <1.0246,1.6246,0.0000,0.2000,0.0000> SomeVectorsB
>
> Would it be enough to recommend against mixed array usage? Or might we
> test for mixed arrays in some way and stop, though, I have no idea how
> to do such a thing at the moment.
>
> Aside 1: If someone drops a non-numeric thing into a mixed array the
> parser appropriately generates an error.
>
> Aside 2: If someone uses color the length always goes to the full five
> color component vector. It's possible with colors to mix lengths of
> vector specification - even in the traditional non-mixed array case.
>
> rgb 1, // promoted to 1,1,1,0,0
> color 1, // warning. promoted to 1,1,1,1,1
> rgb< 0, 2>, // promoted to five components. Trailing 0s
> rgb< 3, -1, 1> // Same
> rgb< 1, 2, 3, 4> // Warning and 4th component used. 5th 0.
> rgbf<1, 2, 3, 4> // 4th used, no warning. 5th 0.
>
> Maybe behavior all just what it is - user beware?
When I see this, I'm thinking that users who try to calculate the mean or
variance of an array containing mixed vectors deserve the messy result that
they may get.
My opinion is that POV-Ray's automatic vector promotion is a design flaw. It
allows users to write code that is confusing and unclear, and it creates
some hard-to-find errors.
Now I'm tempted to rewrite those macros so that they fail immediately if the
array contains anything other than floats. Perhaps it is best not to mention
that the mean() and variance() macros can be used with anything other than an
array of floats.
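For what it's worth, a sketch of such a guard (the macro name and its use of
#error are my own invention) could exploit the automatic promotion itself.
Note that it cannot catch 5-D vectors or colors, which this method cannot
tell apart from floats:

#macro Assert_FloatsOnly(Values)
    #local I = 0;
    #while (I < dimension_size(Values, 1))
        #local T1 = Values[I] + 1;
        #local T2 = Values[I] + 2;
        // For a float, every promoted component changes between T1 and T2;
        // for a 2-, 3- or 4-D vector at least one padded component stays
        // zero in both sums, so one of the comparisons comes out equal.
        #if (((<0,0,0> + T1).z = (<0,0,0> + T2).z) |
             ((<0,0,0,0> + T1).t = (<0,0,0,0> + T2).t) |
             ((<0,0,0,0,0> + T1).transmit = (<0,0,0,0,0> + T2).transmit))
            #error concat("Element ", str(I, 0, 0), " is not a float.\n")
        #end
        #local I = I + 1;
    #end
#end

Calling Assert_FloatsOnly(SomeVectorsA) from the earlier post would then stop
the parse at the first 2-D element.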
We cannot, and should not, try to anticipate and warn or guard against all
the crazy things that users may do with macros. That would make them big,
clumsy, and hard to read and understand.
If we keep the macros like they are now, then yes; we should have a user beware
warning.
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
"Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmail com> wrote:
> When I see this, I'm thinking that users who try to calculate the mean or
> variance of an array containing mixed vectors deserve the messy result that
> they may get.
>
> My opinion is that POV-Ray's automatic vector promotion is a design flaw. It
> allows users to write code that is confusing and unclear, and it creates
> some hard-to-find errors.
I would agree that some features are more problematic than others, especially
for newer users. Is there a way to add, multiply, or test for [nonexistent]
vector components with dot-notation in order to purposefully invoke the vector
promotion and force everything to be the largest vector size (5?) from the
start?
[Just as a related aside, what would be useful is a way to perform typical
mathematical functions like cosine on vectors and get a vector result.]
> We can not, and should not, try to anticipate and warn/guard against all the
> crazy things that users may do with macros. That would make them big, clumsy and
> hard to read and understand.
True; however, one can have a scene containing all the cases one might think
are tricky, or at least not obvious, and that allows plenty of room for doing
all sorts of things that have no proper place in a macro.
Macros, functions, and the documentation are the (overly brief) textbooks of
POV-Ray. Scene files are the lecture and recitation notes.
In engineering, the rule of thumb for how tight a bolt should be is, "Torque it
down until it snaps, then back it off a half a turn." That's valuable knowledge
to have which is easy to say and scribble on a blackboard, or show in a
demonstration. However, it's probably not going to make it into the textbook.
> If we keep the macros like they are now, then yes; we should have a user beware
> warning.
This is always the most difficult and painful part of writing real-world code.
- Bill
William F Pokorny <ano### [at] anonymous org> wrote:
> On 5/19/21 3:55 PM, Tor Olav Kristensen wrote:
> > I have also rewritten a few of the macros slightly and added some new ones.
> >
>
> I like how macros using functions of transforms & inverse transforms
> ended up. :-) Those, I didn't expect you to change and yet what you did
> looks to me to be a clever approach to handling a macro's dynamically
> defined functions.
=))
The global namespace for functions has troubled me for many years. Local
variables inside macros holding functions do not really work the way they
IMHO should.
Now I'll go through some of my other old macros to see if the same trick can be
applied in any of them to get rid of "named functions".
I also wonder if "hiding" functions inside an array or a dictionary is a good
and robust way to avoid function related name clashes. But I guess we will still
have to struggle with name clashes related to the function parameters.
> I'll have to spend more time thinking about it and the actual inner
> workings. I suspect you've gotten to something near like what happens
> with prod and sum internally with the vm.
Sorry, but I don't understand what you mean here. I've never studied the
internals of that vm.
>...
> I've recently moved the vector analysis macros in math.inc to a
> vectoranalysis.inc include - which itself won't be a 'core' include.
> Thinking the array statistics a better fit in a similar
> arraystatistics.inc include - again now not a 'core' include.
I agree with you, neither the vector analysis macros nor the array statistics
macros belong in a core include file. They are for users with special needs.
> With your vectors.inc, on the other hand, I'm starting to lean toward it
> being a 'core' include file. Need to resolve collisions with math.inc
> macros as a start.
I'm happy to hear that :)
I've now added some documentation for vectors.inc in the Github repository.
> Aside: Instead of moving some of the contents of arraycoupleddf3s.inc to
> arrays.inc, I went the other way! I moved the ARRAY_WriteDF3 macro into
> arraycoupleddf3s.inc alongside the ARRAY_ReadDF3 macro already therein
> along with some other DF3 related macros. I dropped too the ARRAY_
> prefixes. I now plan to treat arraycoupleddf3s.inc as a core include
> file. I believe reading and writing DF3s should be doable from SDL as
> 'POV-Ray' is shipped.
Yes, that is a sensible move.
Btw.:
Have you considered using sum() in the FnctNxNx9ArrayEntryToDBL() macro in
arraycoupleddf3s.inc ?
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
"Bald Eagle" <cre### [at] netscape net> wrote:
> "Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmail com> wrote:
>
> > When I see this, I'm thinking that users who try to calculate the mean or
> > variance of an array containing mixed vectors deserve the messy result that
> > they may get.
> >
> > My opinion is that POV-Ray's automatic vector promotion is a design flaw. It
> > allows users to write code that is confusing and unclear, and it creates
> > some hard-to-find errors.
>
> I would agree that some features are more problematic than others, especially
> for newer users. Is there a way to add, multiply, or test for [nonexistent]
> vector components with dot-notation in order to purposefully invoke the vector
> promotion and force everything to be the largest vector size (5?) from the
> start?
I don't think that it is possible to investigate the number of components in a
vector without it being promoted automatically or having POV-Ray halt on an
error.
It would have been nice to have some functionality like with Python's
try-except-else-finally statements.
But I assume that would be a huge task to undertake in POV-Ray. So perhaps
we could implement something like #ifdef (v0.z) instead.
> [Just as a related aside, what would be useful is a way to perform typical
> mathematical functions like cosine on vectors and get a vector result.]
I take it that you would like to be able to write things like cos(v0) and
then have the cos function applied to all of the components of the vector. If
so, I second that. But only if it is done with built-in functionality. I
don't like the macros we currently have in the include files for such things.
To see how I've provided a similar functionality in my scikit-vectors Python
library:
- Look at the cells 'In [26]' and 'Out[26]' in this file:
https://github.com/t-o-k/scikit-vectors/blob/master/skvectors/doc/Using_a_Vector_Class.pdf
- and at the cells from 'In [48]' to 'Out[52]' in this file:
https://github.com/t-o-k/scikit-vectors/blob/master/skvectors/doc/Using_a_Fundamental_Vector_Class.pdf
>...
> Macros, functions, and the documentation are the (overly brief) textbooks of
> POV-Ray. Scene files are the lecture and recitation notes.
Good analogy.
> In engineering, the rule of thumb for how tight a bolt should be is, "Torque it
> down until it snaps, then back it off a half a turn." That's valuable knowledge
> to have which is easy to say and scribble on a blackboard, or show in a
> demonstration. However, it's probably not going to make it into the textbook.
Funny :) I haven't heard that one before.
> > If we keep the macros like they are now, then yes; we should have a user beware
> > warning.
>
> This is always the most difficult and painful part of writing real-world code.
Yes, you are right about that.
I hate writing documentation. I suspect that one of the reasons for this is
that I struggle very much to be concise and precise with rich, varied and
good language, and at the same time to use correct terms for the topic (or
profession?).
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
"Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmail com> wrote:
>
> I don't think that it is possible to investigate the number of components in a
> vector without it being promoted automatically or having POV-Ray halt on an
> error.
It is possible to do certain tests. For example, this macro tests to see
whether or not v_Padding has a .t component. The macro is ad hoc, but perhaps
a more general macro can be written. I have not found a way to distinguish
between a scalar and a vector, though.
----------[BEGIN CODE]---------
// Convert input to a 4-D vector. If the input is a 2-D or 3-D vector, then
// use the .y component for .t.
#macro Caption__Get_padding (v_Padding)
    #local caption_Promoted = <0, 0, 0, 0> + v_Padding;

    // See whether .t exists:
    #local caption_Test1 = v_Padding + 1;
    #local caption_Test2 = v_Padding + 2;

    < caption_Promoted.x,
      caption_Promoted.y,
      caption_Promoted.z,
      // Test whether .t exists:
      ( (<0, 0, 0, 0> + caption_Test1).t = (<0, 0, 0, 0> + caption_Test2).t?
        caption_Promoted.y: caption_Promoted.t
      )
      // N.B. Although a scalar tests incorrectly as having a .t component, it
      // doesn't matter, because the returned 4-D vector is the same either way.
    >
#end
-----------[END CODE]----------
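For example, tracing through a few inputs:

Caption__Get_padding (<1, 2>)       // yields <1, 2, 0, 2>
Caption__Get_padding (<1, 2, 3>)    // yields <1, 2, 3, 2>
Caption__Get_padding (<1, 2, 3, 4>) // yields <1, 2, 3, 4>
Caption__Get_padding (5)            // yields <5, 5, 5, 5>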
"Cousin Ricky" <rickysttATyahooDOTcom> wrote:
> "Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmail com> wrote:
> >
> > I don't think that it is possible to investigate the number of components in a
> > vector without it being promoted automatically or having POV-Ray halt on an
> > error.
>
> It is possible to do certain tests. For example, this macro tests to see
> whether or not v_Padding has a .t component. This macro is ad hoc, but perhaps
> a more general macro can be written. I have not found a way to distinguish
> between a scalar and a macro, though.
>
> ----------[BEGIN CODE]---------
> // Convert input to a 4-D vector. If the input is a 2-D or 3-D vector, then
> // use the .y component for .t.
> #macro Caption__Get_padding (v_Padding)
> #local caption_Promoted = <0, 0, 0, 0> + v_Padding;
>
> // See whether .t exists:
> #local caption_Test1 = v_Padding + 1;
> #local caption_Test2 = v_Padding + 2;
>
> < caption_Promoted.x,
> caption_Promoted.y,
> caption_Promoted.z,
> // Test whether .t exists:
> ( (<0, 0, 0, 0> + caption_Test1).t = (<0, 0, 0, 0> + caption_Test2).t?
> caption_Promoted.y: caption_Promoted.t
> )
> // N.B. Although a scalar tests incorrectly as having a .t component, it
> // doesn't matter, because the returned 4-D vector is the same either way.
> >
> #end
> -----------[END CODE]----------
Hi Ricky
It took me a while to figure out what is going on there.
You are exploiting the automatic vector promotion "feature".
That's clever sorcery!
Here's my take on it:
#macro VectorDimTest(v0)
    #local vZ3D = <0, 0, 0>;
    #local vZ4D = <0, 0, 0, 0>;
    #local vZ5D = <0, 0, 0, 0, 0>;
    #local v1 = v0 + 1;
    #local v2 = v0 + 2;
    #local D2 = ((vZ3D + v1).z = (vZ3D + v2).z);
    #local D3 = ((vZ4D + v1).t = (vZ4D + v2).t);
    #local D4 = ((vZ5D + v1).transmit = (vZ5D + v2).transmit);
    #if (D2)
        #debug "2D"
    #else
        #if (D3)
            #debug "3D"
        #else
            #if (D4)
                #debug "4D"
            #else
                #debug "5D or scalar"
            #end // if
        #end // if
    #end // if
#end // VectorDimTest
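For example:

VectorDimTest(<1, 2>)       // prints "2D"
VectorDimTest(<1, 2, 3>)    // prints "3D"
VectorDimTest(<1, 2, 3, 4>) // prints "4D"
VectorDimTest(0.5)          // prints "5D or scalar"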
--
Tor Olav
http://subcube.com
https://github.com/t-o-k
On 5/21/21 4:09 PM, Tor Olav Kristensen wrote:
> I also wonder if "hiding" functions inside an array or a dictionary is a good
> and robust way to avoid function related name clashes. But I guess we will still
> have to struggle with name clashes related to the function parameters.
>
RE: name clashes on function parameters.
---
Note: In v3.8, with optional macro parameters, parameter names can collide.
This is an additional reason I'm chasing the no-(a-z,0-9,_)-identifiers
checking in the povr branch.
//---
#version 3.8;

#local Filename = "File99";

#macro Mcr00(_string, optional Filename)
    //#ifndef(local.Filename)
    #ifndef(Filename)
        #local Filename = "File00";
    #end
    #debug concat("00 File name: ",Filename," String: ",_string,"\n")
#end

#macro Mcr01(_string, optional _filename)
    //#ifndef(local._filename)
    #ifndef(_filename)
        #local Filename = "File01";
    #else
        #local Filename = _filename;
    #end
    #debug concat("01 File name: ",Filename," String: ",_string,"\n")
#end

Mcr00("abc",Filename)
Mcr01("abc",Filename)
Mcr00("xyz",Wildname) // oops
Mcr01("xyz",Wildname)

#error "Stop early"
//---
RE: Dictionary of functions.
---
I suspect it is useful. When functions are dynamic from macro call to macro
call, as is the case when passing the transform - and when the function is
not used inside the macro multiple times in a loop, say - I think your recent
approach is likely the best way to go.

Where the function internal to the macro could be compiled and kept for
future use, I've been wondering if we should not, in v3.8 onward, be testing
for the function's definition in the global dictionary space, compiling it
only if it is not already compiled, and thereafter using the global
identifier pointing to the compiled code. Something like:
#version 3.8;

#macro Mcr02(_string,_a,_b,_c)
    #ifndef(global.Fn00_Macro2)
        #declare Fn00_Macro2 = function { x+y+z }
        #debug "Compiled Fn00_Macro2\n"
    #end
    #debug concat("02 ",_string," ",str(Fn00_Macro2(_a,_b,_c),6,4),"\n")
#end

Mcr02("Call 0",1,2,3)
Mcr02("Call 1",4,5,6)
Mcr02("Call 2",7,8,9)

#error "Stop early"
>
>> I'll have to spend more time thinking about it and the actual inner
>> workings. I suspect you've gotten to something near like what happens
>> with prod and sum internally with the vm.
> Sorry, but I don't understand what you mean here. I've never studied the
> internals of that vm.
>
I suppose it's that I'm not completely sure I do either! :-)

In looking at a performance question from Bald Eagle, where he had used prod
- as I recall - it seemed like the sum and prod functions were doing parser
expression work and compilation on each call in the 'loop.' I take your code
to be doing something similar. As in, no call to already-compiled vm code
from a function ID. Beyond that, there are details I don't fully understand.
>
...
>
> Btw.:
> Have you considered using sum() in the FnctNxNx9ArrayEntryToDBL() macro in
> arraycoupleddf3s.inc ?
I had not.

Aside 1: The sum and prod functions are suspect performance-wise (see the
related comments just prior).
Aside 2: I'm considering dropping the FnctNxNx9ArrayEntryToDBL() from
the core arraycoupleddf3s.inc file as the povr branch has more capable
inbuilt functions for nearly identical functionality. Perhaps it'll be
code retained as a non-core macro / include. We'll see.
Bill P.
"Tor Olav Kristensen" <tor### [at] TOBEREMOVEDgmail com> wrote:
> Hi Ricky
>
> It took me a while to figure out what is going on there.
> You are exploiting the automatic vector promoting "feature".
Well, that's what I was implying, and it seemed like you "knew" about that
already.
> > > I don't think that it is possible to investigate the number of components
> > > in a vector without it being promoted automatically ...
I know that clipka gave me an example that stuck, of which I use variants all
the time:
http://news.povray.org/povray.documentation.inbuilt/message/%3Cweb.59398df47cb8f4dcc437ac910%40news.povray.org%3E/#%3Cweb.59398df47cb8f4dcc437ac910%40news.povray.org%3E
> Here's my take on it:
>
>
> #macro VectorDimTest(v0)
>
> #local vZ3D = <0, 0, 0>;
> #local vZ4D = <0, 0, 0, 0>;
> #local vZ5D = <0, 0, 0, 0, 0>;
> #local v1 = v0 + 1;
> #local v2 = v0 + 2;
> #local D2 = ((vZ3D + v1).z = (vZ3D + v2).z);
> #local D3 = ((vZ4D + v1).t = (vZ4D + v2).t);
> #local D4 = ((vZ5D + v1).transmit = (vZ5D + v2).transmit);
>
> #if (D2)
> #debug "2D"
> #else
> #if (D3)
> #debug "3D"
> #else
> #if (D4)
> #debug "4D"
> #else
> #debug "5D or scalar"
> #end // if
> #end // if
> #end // if
>
> #end // VectorDimTest
Both of your macros are pretty nice :)
It's early and I'm still on First Coffee, but I'm fuzzily thinking that maybe
there's a way to disambiguate a 5D vector and a scalar by adding and/or
multiplying with vectors containing different signs and testing the result?
My brain is also giving me the impression that binary math and things like (I
said _like_) bitmasks and xor might offer inspiration for a solution.