Subject: Re: Why only a +1 clamp with VAngle, VAngleD macros in math.inc?
From: William F Pokorny
Date: 9 May 2021 09:31:42
Message: <6097e43e$1@news.povray.org>
On 5/9/21 2:22 AM, Tor Olav Kristensen wrote:
> William F Pokorny <ano### [at] anonymousorg> wrote:
>> On 5/6/21 10:58 AM, Tor Olav Kristensen wrote:
>>> ...
>> ...
>>> Anyway the way that macro calculates the angle is not the best numerically.
>>> I would rather suggest something like this:
>>>
>>>
>>> #macro AngleBetweenVectors(v1, v2)
>>>
>>>       #local v1n = vnormalize(v1);
>>>       #local v2n = vnormalize(v2);
>>>
>>>       (2*atan2(vlength(v1n - v2n), vlength(v1n + v2n)))
>>>
>>> #end // macro AngleBetweenVectors
>>>
>>
>> Interesting, thanks. I'll play with it. There will be four sqrt calls
>> with this vs two so I expect it's a slower approach.
> 
> I prefer better accuracy over speed. If we need this to go faster, we could
> compile it into POV-Ray and make it a new command; e.g. vangle().
> 
> 
>> Reminds me I captured fast approximation atan2 code I was thinking of
>> making an inbuilt function, but I've not gotten to it...
>>
>> And that thought reminds me of my plans to turn many macros into
>> "munctions"(1) calling new faster, inbuilt functions. Done only a tiny
>> bit of that so far.
>>
>> (1) - Yes, thinking about creating a munctions.inc file breaking out
>> what are really macro wrappers to functions into that file to make clear
>> what they are. Maybe all as M_ prefixed so my current F_dents() would
>> become M_dents(). Angle perhaps then M_Angle.
> 
> This sounds interesting. Can you please explain more ?

Unsure how much you've been following my povr branch posts...

For starters, over the last year or two I've replaced most of the existing 
'odd shape' inbuilt functions, the ones for which functions.inc defines the 
interface and documents the usage, with more generic ones. As related work 
I've also extended and cleaned up the function<->pattern interface; new 
function-related wave modifiers, for example.
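
For anyone not familiar with the function<->pattern interface being referred 
to, it is the stock mechanism where a user-defined function drives a pattern. 
A minimal sketch, plain POV-Ray and nothing povr-specific:

// A user function driving a pigment through the function pattern.
#declare F_bands = function (x, y, z) { 0.5 + 0.5*sin(10*x) }

plane {
    y, 0
    pigment {
        function { F_bands(x, y, z) }
        color_map { [0 color rgb 0] [1 color rgb 1] }
    }
}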

On the idea of munctions.
-------------------------
One old macro that has always been a "munction" is Supertorus() in 
shapes.inc, but in the povr branch it now calls a new inbuilt function, 
f_supertorus(), and is kept for backward compatibility. New users should 
call f_supertorus() directly.
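
As a rough sketch of the wrapper pattern - made-up macro name, and wrapping 
the stock f_torus() from functions.inc so the sketch runs in plain POV-Ray:

#include "functions.inc"

// Sketch only. The real shapes.inc Supertorus() macro has more parameters;
// the point is just that the macro is a thin wrapper and the math lives in
// an inbuilt f_*() function.
#macro M_TorusShape(MajorR, MinorR)
    isosurface {
        function { f_torus(x, y, z, MajorR, MinorR) }
        contained_by {
            box { <-(MajorR+MinorR), -MinorR, -(MajorR+MinorR)>,
                  < (MajorR+MinorR),  MinorR,  (MajorR+MinorR)> }
        }
        max_gradient 1.1
    }
#end

object { M_TorusShape(1.0, 0.25) pigment { rgb 1 } }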

---
We've long had simple "munctions" that look like inbuilt capability such as:

#declare clip = function (x,y,z) {min(z,max(x,y))}

which in povr I changed to

#declare F_clip = function (x,y,z) {min(z,max(x,y))}

and I'm now leaning toward calling it M_clip, because I want to make clear 
that today it is a munction and not an inbuilt 'clip' keyword.

Longer term, F_clip/M_clip is educational in being simple, and it will be 
left as is even once a proper inbuilt 'clip' implementation exists, so the 
include files can carry test cases comparing M_clip results to 'clip' ones.
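
A sketch of what such a test case might look like (hypothetical - there is 
no inbuilt 'clip' yet, so only the munction side is checked against a 
hand-computed value):

#declare M_clip = function (x, y, z) { min(z, max(x, y)) }

// Hand-computed expectation: min(0.5, max(-0.3, 0.7)) = 0.5
#declare Got = M_clip(-0.3, 0.7, 0.5);
#if (abs(Got - 0.5) > 1e-10)
    #error "M_clip test case failed\n"
#end
// A second check comparing an inbuilt clip(-0.3, 0.7, 0.5) against Got
// would go here once that keyword exists.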

> Are there any benefits from using these instead of new built in commands ?

Sometimes. What munctions can do today that is not at all doable with 
inbuilts of the straight keyword kind, or with the inbuilt functions, is 
pre-condition a larger set of stuff - stuff being data / partial solutions, 
or the particular configuration and interoperation of a collection of 
functions(1).
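
A trivial sketch of that kind of pre-conditioning (made-up names throughout): 
the macro does parse-time work - normalizing a vector, folding constants - 
and hands back a function with those results baked in. An inbuilt function on 
its own has no such set-up stage.

#macro M_DirectionalRamp(Dir, Freq)
    // Parse-time pre-conditioning: done once, before the function exists.
    #local DirN   = vnormalize(Dir);
    #local Dx     = DirN.x;
    #local Dy     = DirN.y;
    #local Dz     = DirN.z;
    #local TwoPiF = 2*pi*Freq;
    // The returned function has the pre-computed values folded in as constants.
    function (x, y, z) { 0.5 + 0.5*sin(TwoPiF*(x*Dx + y*Dy + z*Dz)) }
#end

#declare FnRamp = M_DirectionalRamp(<1, 2, 0>, 3)
#debug concat("FnRamp(0.1, 0, 0) = ", str(FnRamp(0.1, 0, 0), 0, 6), "\n")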

When you say inbuilt command, there are flavors. For performance, inbuilt is 
'almost always' - but not always - better.

I've gotten somewhat comfortable adding inbuilt functions, patterns and 
shapes and the associated keywords.

When you talk about something like 'acos', there is an implementation in the 
parser and a parallel implementation in the vm to create/run the compiled 
acos code. Not all things in the parser are in the vm and vice versa. Things 
only in the parser cannot, as a rule, be used with the vm, whereas the vm 
things mostly (all?) work during parsing.
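
In SDL terms the split looks like this; 'acos' happens to live on both sides, 
while, say, string handling is parser-only:

// 'acos' exists both in the parser and in the function vm, so both work:
#declare A_parser = acos(0.5);                     // parser implementation
#declare A_vmfn   = function (Val) { acos(Val) }   // vm implementation
#debug concat("parser: ", str(A_parser, 0, 6),
              "   vm: ", str(A_vmfn(0.5), 0, 6), "\n")

// Parser-only things - directives like #switch, string functions like
// strlen() - cannot appear inside a function {} block.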

Though 'acos'-like implementations might often be best, I don't understand 
the code well enough to pull off these sorts of dual keywords at the moment. 
In any case, I'd probably go after only a few things in this way, such as the 
'munctions' even, odd, max3, min3, sind,... (Adding the constant 'e' is 
another thought I think I can work through today - just not done.)
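
Those are the simple one-liner sort; sketches below, not necessarily the 
exact math.inc bodies:

#declare max3 = function (a, b, c) { max(a, max(b, c)) }
#declare min3 = function (a, b, c) { min(a, min(b, c)) }
#declare sind = function (a) { sin(radians(a)) }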

I can implement inbuilt functions that help simplify, speed up, or replace 
many effective 'munctions.' I've been working largely in this direction of 
late.

---
(1) Aside: I want to get to the point where we can tell whether a 
function is being run from the parser (single thread) or during 
rendering so we can precondition functions too. I have some ideas/notes 
around thread ids and such, but I've not worked out how to really do it.

Even today in f_ridged_mf() there is an allocation of memory which I believe 
should be done only by pre-conditioning during parsing(1a). Function 
equivalents of ripples, waves, wood, etc. need a parse-time pre-conditioning 
/ set up capability too, to work best.

(1a) Depending on how threaded rendering work hits f_ridged_mf() today, I 
suspect we are sometimes getting minor memory leaks. In this case, though, 
with respect to functionality / results, I think it harmless.

> 
> 
>> Something I've done too with the inbuilt function changes is allow
>> alternative calculations for the same thing. Compiled, such conditionals are
>> not that expensive. Most frequently these are single-float options, where
>> those are much faster, but there are different methods too. For one thing, I
>> can bang the alternatives against each other as a way to test!
> 
> Are you using this functionality to perform speed tests ?
> 

Sometimes, yes.

Aside: I've had several false starts looking at integrating the work you 
and, I believe, mostly Alain Ducharme did years ago with quaternions.inc - as 
inbuilt capability of some kind - but never gotten very far with the effort. 
Always so much on that todo list - and I'm not that smart, so it goes 
slowly. :-)

Bill P.

