Newsgroups: povray.beta-test.binaries
From: William F Pokorny
Subject: Function / pattern issues. Updated f_ellipsoid(). New f_lame,...
Date: 15 May 2020 09:20:06
Message: <5ebe9706$1@news.povray.org>
References. Eight previous posts:

http://news.povray.org/povray.beta-test.binaries/thread/%3C5ead9027%40news.povray.org%3E/

and

http://news.povray.org/povray.beta-test.binaries/thread/%3C5eb4b9c6%241%40news.povray.org%3E/

Updating the inbuilt f_ellipsoid() function. Adding new f_lame() and 
f_polyhedron() functions.

The f_ellipsoid function was one of those with its threshold / roots up 
at 1.0. It also specified the scaling as inverse values. It's now a 0.0 
centered ellipsoid function with the expected a,b,c scaling. I also brought 
out the exponent value to get some superellipsoid functionality on 
the cheap. Two of the latter sort of super forms are shown in the top two 
rows of the attached image.

The new f_lame (1) and f_polyhedron functions are ways to create n-faced 
objects by passing in axis-aligned gradients (and/or using x,y,z). The 
latter is a subset of the former's results, but faster if you only want 
flat faces. Results from f_polyhedron are in the lower left and f_lame in 
the lower right of the attached image, with the same 5 input gradients.

(1) Yes, the e should be accented in f_lame, but I wimped out on having 
that character in the code. Interesting to me is that Lamé's work seems to 
be underneath a lot of the super__oid stuff.

In all but the f_polyhedron implementation, I'm also playing with the 
ability for the user to specify additional internal sqrt()s to reduce the 
gradients. So long as the equations are set up as "<calculated> - 1.0 = 
0.0", this works well in my testing. It becomes another performance 
trade-off to use over just increasing max_gradient. As important, it's a 
method to bring the gradients down toward 1.0, where the inbuilt function 
values play well with other functions and patterns.
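
To illustrate the idea, a rough sketch in plain SDL (not the povr internals):

// For a surface written as "<calculated> - 1.0 = 0.0", wrapping the
// calculated part in sqrt() leaves the root unchanged, since calc = 1
// exactly when sqrt(calc) = 1, while the slope near the surface is
// roughly halved (d sqrt(calc)/d calc = 1/2 at calc = 1).
#declare fnCalc = function (x, y, z) { x*x/(0.3*0.3) + y*y/(0.9*0.9) + z*z/(0.6*0.6) }
#declare IsoRaw  = isosurface { function { fnCalc(x, y, z) - 1.0 } contained_by { sphere { 0, 1 } } }
// Same surface, but it tolerates a lower max_gradient setting.
#declare IsoSqrt = isosurface { function { sqrt(fnCalc(x, y, z)) - 1.0 } contained_by { sphere { 0, 1 } } }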

------- f_ellipsoid
Parameters: x, y, z
Six extra parameters required:

1. If 0, calculations are done with double floats; otherwise with single 
floats. The latter is often a lot faster with not much loss in accuracy.

2. x scale

3. y scale

4. z scale

5. The pow() exponent to use. For a proper ellipsoid use 2.0. For an 
octahedron use 1.0. For an inward curvature from the axes use values < 
1.0. For rounded-box to rounded results use values > 2.0. The exponent 
here not being locked at 2.0 offers a subset of results like those of the 
superellipsoid at less compute expense. (See the example after the notes 
below.)

6. The number of times to take the sqrt() internally to reduce the 
gradient. 1 is a good starting point. Usually more than 2 is not worth it. 
0 works fine for very round results.

Notes. Previously f_ellipsoid() only worked for isosurfaces with a 
threshold of 1.0, and the axis scaling was inverted. In other words, it 
was not a general function, though it was relatively fast. Changed in povr 
May 14, 2020 to align with standard root/threshold behavior. Further, 
extended to pass the exponent, as this enables some f_superellipsoid 
results at a much lower compute cost.
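
For example, a minimal sketch following the parameter order above, with 
scales borrowed from the snippets further down (add max_gradient as needed):

// Proper ellipsoid: semi-axes 0.3, 0.9, 0.6, exponent 2.0, single-float
// math and one internal sqrt() to tame the gradient.
#declare IsoEllip = isosurface {
     function { f_ellipsoid(x, y, z, 1, 0.3, 0.9, 0.6, 2.0, 1) }
     contained_by { box { <-0.35, -0.95, -0.65>, <0.35, 0.95, 0.65> } }
}
// Same scales with exponent 1.0 for the octahedron-like form.
#declare IsoOcta = isosurface {
     function { f_ellipsoid(x, y, z, 1, 0.3, 0.9, 0.6, 1.0, 1) }
     contained_by { box { <-0.35, -0.95, -0.65>, <0.35, 0.95, 0.65> } }
}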

---------- f_polyhedron
Parameters: x, y, z passed as x, y, z or as f_gradient()s on arbitrary 
axes. 13 extra parameters required:

1. Count of input gradients. 3 to use only x,y,z (a minimal sketch 
follows the example below).

2-13. Up to 12 input axis gradients. See the f_gradient function, but any 
method of supplying values is allowed.

Notes. When using x,y,z or f_gradient() the gradients are projected onto 
normalized input vectors. This is useful because it often allows for 
some normalization of the scaling.

#declare normScale = 3/(2*<additional_gradients>)
f_polyhedron(x*normScale,y,z*normScale,5,
              f_gradient(x,y,z,-1,0,1)*normScale,0,0,0,
              0,0,0,0,
              0,0,0,0)
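
And a guess at the simplest case: count 3 to use only the x,y,z axes, with 
the unused gradient slots left at 0 as in the example above.

#declare IsoAxes = isosurface {
     function { f_polyhedron(x, y, z, 3,
             0,0,0,0,
             0,0,0,0,
             0,0,0,0)
     }
     contained_by { sphere { 0, 2 } }
}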

------------ f_lame
Parameters:
1. Number of input gradients.

2. If 0, calculations are done as doubles; otherwise as single floats, 
trading some accuracy for performance. Here, whether floats help depends a 
lot on the exponents. With mixed exponents of 0.6 to 1.6 on 5 gradients: 
31.019 -> 28.882, a -6.89% change going to single floats.

3. The number of times to take the sqrt() internally to reduce the 
gradient. 1 is a good starting point. Usually more than 2 is not worth it. 
0 works fine for very round results.

4-11. One to eight input gradient values.

12-19. One to eight matching pow() exponent values.

Notes. If all exponents are 1, look at the more efficient f_polyhedron 
alternative. Generally, exponents < 1.0 introduce an inward / concave 
curvature; 1.0 gives flat faces; > 1.0 gives an outward / convex 
curvature. See the f_polyhedron comments on normalized scaling, as x,y,z 
are likewise projected onto normalized vectors. A minimal sketch follows.
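
A rough sketch of a minimal call, assuming (as in the snippets below) that 
unused gradient and exponent slots are simply zero-filled: only x,y,z as 
the 3 gradients with a shared 2.5 exponent, which by the exponent notes 
above should give a convex, rounded-box-like form.

#declare IsoLame = isosurface {
     function { f_lame(3, 1, 1,
             x, y, z, 0, 0, 0, 0, 0,
             2.5, 2.5, 2.5, 0, 0, 0, 0, 0)
     }
     contained_by { box { -1.2, 1.2 } }
}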

----------------- Code snippets
#declare Iso99 = isosurface {
     function { f_ellipsoid(x,y,z,1,0.3,0.9,0.6,4.4,1) }
....

#declare nrmScl = 3/7;
#declare Iso99 = isosurface {
     function { f_polyhedron(x*nrmScl,y*2,z*nrmScl,5,
             f_gradient(x,y*2,z,-1,0,-1)*nrmScl,
             f_gradient(x,y*2,z,-1,0,+1)*nrmScl,0,0,
             0,0,0,0,
             0,0,0,0)
     }

#declare nrmScl = 3/7;
#declare Iso99 = isosurface {
     function { f_lame(5,1,2,
             x*nrmScl,y*2,z*nrmScl,
             f_gradient(x,y*2,z,-1,0,-1)*nrmScl,
             f_gradient(x,y*2,z,-1,0,+1)*nrmScl,0,0,0,
             0.7,1.0,1.3,1.6,0.6,0,0,0)
     }

Lastly, I'm not sure I've mentioned anywhere the addition of the povr 
f_gradient() inbuilt used here. It replaces the three fixed-axis, 
pattern-wrapped ones in functions.inc with one which has 3 additional 
parameters to specify the axis onto which to project (now like the pattern 
version). This opens up another transform-ish capability for all 
functions, though I admit it's a new knob I've not played with much yet.
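
For instance, a guess at usage based only on the snippets above, with the 
evaluation point first and the projection axis last:

// Assumed signature: point components, then the three components of the
// axis onto which to project. Here a gradient along the <1, 0, 1>
// diagonal, wrapped for use as a pigment function.
#declare fnDiag = function (x, y, z) { f_gradient(x, y, z, 1, 0, 1) }
#declare pigDiag = pigment { function { fnDiag(x, y, z) } color_map { [0 rgb 0] [1 rgb 1] } }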

Bill P.


Attachment: 'storyfellipse_fpolyhedon_flame.png' (74 KB)
