clipka <ano### [at] anonymousorg> wrote:
> Instead, I want to provide users with a dedicated syntax for cases where
> they know that the supplied function _already_ satisfies that form, so
> that the parser and render engine can exploit this property.
Right, sorry - I probably just confused the issue by speculating on a tenuous
similarity.
> You know,
> something like:
>
> #declare MyFn1 = function(x,y,z) { ... }
> #declare MyFn2 = function(x,y,z) { ... }
> ...
> #declare MyFnN = function(x,y,z) { ... }
>
> #declare MyFn = function {
> MyFn1(x,y,z bounded_by{box{ ... }}) +
> MyFn2(x,y,z bounded_by{box{ ... }}) +
> ...
> MyFnN(x,y,z bounded_by{box{ ... }})
> }
>
> except that this particular syntax would neither be reasonably easy to
> parse, nor do I consider it reasonably pretty.
Well, I was thinking that the way color maps and splines are defined, you could
just do something like
#declare MyFunctions = boundedf {
(x,y,z), pow(x,2) + pow(y,2) + pow(z,2), sphere { ...},
(x,y,z), -(pow(x,3) + pow(y,1) + pow(z,3)), box { ...},
(u,v,z), MyFn1, box { ...},
(u,v,t), MyFn2, box { ...},
(Theta, Phi), MyParametricFn, sphere { ...}
}
That way you don't need to have separate code blocks, retype "function" and
"bounded_by" and their associated braces every time, etc.
"boundedf" implies it's a function bounded by something already known.
It would be a nice feature to be able to use some mixture of (, [, and { to
define functions, as it always helps me differentiate between different parts of
large equations, and keep track of key terms - but that's "related to but
separate from" the present issue.
Am 23.04.2016 um 15:12 schrieb Christian Froeschlin:
> On 23.04.2016 8:19, Le_Forgeron wrote:
>
>> Could you move the bounded_by in the initial declare ?
>
> This would seem more logical to me too. I don't even think
> a new keyword is needed since the presence of the bounded_by
> block is obvious to the parser. So having a bounding object
> becomes an attribute that any function object may have (and
> which might later be exploited in other context too).
It would also be my own preference, if it wasn't for the technical
problems I'd expect with this approach.
> However as far as I understand it the problem technically
> would be in the need to analyze the compound function expression
> to determine that it is indeed just a sum of functions with
> attached bounding object.
That's one of the problems.
The other is that I would need to attach the bounding information to the
function object in the first place.
You see, the whole function thing in POV-Ray is (at least to me) a black
box of equally black magic. The less I need to intrude there, the better.
(Of course that doesn't need to limit your brainstorming. Maybe the
issues happen to be less of an obstacle than I currently expect.)
On 04/22/2016 08:02 PM, clipka wrote:
> Folks, I need your help.
>
> I'm planning on a feature, but can't think of a good syntax to go with
> it. Here's the deal:
>
> Some of you may be aware that isosurfaces can be used as a replacement
> for blobs, to get some added flexibility, such as adding toroids or
> boxes to the set of elements, or tweaking the way the elements fuse with
> each other. Unfortunately this comes at a huge performance penalty.
>
> Part of this difference in performance is because POV-Ray's blob
> implementation makes use of the fact that the individual components have
> a limited range of influence, employing a bounding hierarchy to bypass
> computation of components that are too far away to make any difference.
> Isosurfaces do not provide such a mechanism.
>
> I want to change this.
...
>
I'm still finishing a move to a new city and apartment as I try my hand
at retirement, but as I play quite a bit with isosurfaces, saw your
post, and have some initial thoughts more on the internals than syntax I
guess.
- I have often enough wished for a more flexible contained_by mechanism
for isosurfaces than the sphere / box container shapes we have today so
I welcome any effort to improve it.
- We can today use shape functions as in "(F_Stem(x,y,z)>=0)" or
functions testing whether inside or outside an object with select() to
limit the evaluation of other functions. I sometimes use left side
multiplication too because it is quicker to code though not sure if the
VM really bails early. I've played with isosurfaces defined by inside
tests on objects in the past too.
The point is that the performance limiter on implementing container/range
methods for isosurfaces very quickly becomes the fact that there is no
internal optimization of the evaluation of the container objects. Any
scheme to enable many bounded_by objects in functions, I believe, should
look to make rapid inside evaluations of all those container shapes. It
was not clear to me from your proposal whether this would be the case; I
believe all the containers would have to be evaluated for each
evaluation step the isosurface code takes.
- Suppose obvious, but containers internal to the isosurface can
themselves introduce discontinuities / gradient issues in the overall
function. Inside a single isosurface they'd need to be used with care.
Bill P.
Am 23.04.2016 um 17:09 schrieb William F Pokorny:
> I'm still finishing a move to a new city and apartment as I try my hand
> at retirement, but as I play quite a bit with isosurfaces, saw your
> post, and have some initial thoughts more on the internals than syntax I
> guess.
Your brainstorming contribution is much appreciated.
> - I have often enough wished for a more flexible contained_by mechanism
> for isosurfaces than the sphere / box container shapes we have today so
> I welcome any effort to improve it.
I'll put this thought somewhere at the back of my head, and let it
ferment for a while.
> - We can today use shape functions as in "(F_Stem(x,y,z)>=0)" or
> functions testing whether inside or outside an object with select() to
> limit the evaluation of other functions.
While the select() statement does give some boost by cutting short the
evaluation of sub-expressions, a dedicated mechanism for "blob-like" use
cases could do even better.
Imagine a blob-ish isosurface comprised of, say, 1000 elements, with any
given point in space influenced by at most, say, 5 of them.
A naive function would require the complete evaluation of 1000
sub-expressions, as well as 999 additions, each time the compound
function is to be evaluated.
A function wrapping each sub-expression in a select() could cut that
down to complete evaluation of only 5 sub-expressions, but would
introduce an additional 6000 comparisons, 5000 logical operators and
1000 branches (in the most straightforward implementation that is; using
nested switch() expressions instead of logical operators might instead
give you some guesstimated 2000 comparisons, no logical ops, and a
guesstimated 2000 branches). You'd also still be left with the 999
additions. Also, all of this would happen in a virtual machine without
JIT-compilation. So the workload would still grow linearly with the
number of component functions.
A dedicated mechanism, on the other hand, could make use of some
space-partitioning structure like a KD-tree, to further trim down the
workload to a mere log2(1000)=10 comparisons and branches in native
machine code, followed by 5 complete evaluations of sub-expressions,
mixed with 4 additions in native machine code. So the workload would
primarily be determined by how much "overlap" you have between component
functions, while their total number would be almost irrelevant.
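As a rough illustration of why such a dedicated mechanism scales with local overlap rather than with N, here is a deliberately crude 1-D sketch in Python. A real implementation would use a 3-D hierarchy such as a KD-tree; the class and parameter names here are invented:

```python
class GriddedSum:
    """Toy 1-D stand-in for the proposed space-partitioning lookup: each
    component function carries a bounding interval, and a precomputed grid
    maps a query point to just the components whose bounds overlap its
    cell. Per-evaluation cost then depends on local overlap, not on the
    total component count N."""
    def __init__(self, components, x_lo, x_hi, cells=64):
        # components: list of (x_min, x_max, fn) triples
        self.x_lo, self.step = x_lo, (x_hi - x_lo) / cells
        self.grid = [[] for _ in range(cells)]
        for lo, hi, fn in components:
            i0 = max(0, int((lo - x_lo) / self.step))
            i1 = min(cells - 1, int((hi - x_lo) / self.step))
            for i in range(i0, i1 + 1):
                self.grid[i].append((lo, hi, fn))

    def __call__(self, x):
        i = min(len(self.grid) - 1, max(0, int((x - self.x_lo) / self.step)))
        # Only components registered in this cell are even looked at.
        return sum(fn(x) for lo, hi, fn in self.grid[i] if lo <= x <= hi)

# Two overlapping components, bounded to [0,1] and [0.5,2] respectively:
f = GriddedSum([(0.0, 1.0, lambda x: 1.0), (0.5, 2.0, lambda x: 2.0)], 0.0, 2.0)
print(f(0.25), f(0.75), f(1.5))  # 1.0 3.0 2.0
```

With 1000 components of small extent, each call would still only touch the handful registered in one cell, which is the gist of the KD-tree argument above.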
> I sometimes use left side
> multiplication too because it is quicker to code though not sure if the
> VM really bails early. I've played with isosurfaces defined by inside
> tests on objects in the past too.
To the best of my knowledge, POV-Ray's function VM does not use
short-circuit evaluation anywhere, except in switch() expressions.
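For reference, the branch-choosing behavior of select() that makes the wrapping trick work at all can be modeled in Python. This is only a sketch; the bounding test and field below are invented for illustration, and the real function VM obviously differs in its details:

```python
def select3(a, if_neg, if_nonneg):
    # Toy model of POV-Ray's 3-argument select(): yields the second
    # argument when a < 0, the third otherwise. The branches are passed
    # as callables here so that only the chosen branch is evaluated.
    return if_neg() if a < 0 else if_nonneg()

# Hypothetical bounded component: the "expensive" field is evaluated
# only inside a unit sphere around the origin; outside it contributes 0.
def bounded_field(x, y, z):
    bound = x*x + y*y + z*z - 1.0      # negative means inside the sphere
    return select3(bound,
                   lambda: 1.0 - (x*x + y*y + z*z),   # expensive part
                   lambda: 0.0)

print(bounded_field(0.0, 0.0, 0.0))   # 1.0  (inside: field evaluated)
print(bounded_field(2.0, 0.0, 0.0))   # 0.0  (outside: field skipped)
```

Even with the expensive part skipped, the bounding expression itself still has to be evaluated once per component per call, which is exactly the linear cost discussed above.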
> The point is that the performance limiter on implementing container/range
> methods for isosurfaces very quickly becomes the fact that there is no
> internal optimization of the evaluation of the container objects. Any
> scheme to enable many bounded_by objects in functions, I believe, should
> look to make rapid inside evaluations of all those container shapes. It
> was not clear to me from your proposal whether this would be the case; I
> believe all the containers would have to be evaluated for each
> evaluation step the isosurface code takes.
The main speedup provided by bounding shapes is actually not from
allowing to make a quick first guess on insideness or intersection tests
for an _individual_ object, but rather from providing coordinate ranges
to compile a (very sophisticated) spatial "lookup table" of objects,
which in turn allows to make a quick first guess across _all_ objects at
once, eliminating most objects without even performing individual
bounded_by tests on them.
When intersection or insideness tests need to be performed for a large
number of N objects, per-object bounded_by tests can only reduce the
workload from a*N to b*N with b<a. Lookup structures based on the
bounded_by coordinate ranges, on the other hand, can reduce the workload
to as little as c*log(N) (often with c<=b, but for sufficiently large N
that's of little relevance anyway).
> - Suppose obvious, but containers internal to the isosurface can
> themselves introduce discontinuities / gradient issues in the overall
> function. Inside a single isosurface they'd need to be used with care.
It should indeed be flagged as a caveat; but it would raise a
max_gradient warning, so I think that's ok.
On 04/23/2016 01:40 PM, clipka wrote:
> Am 23.04.2016 um 17:09 schrieb William F Pokorny:
>
>> I'm still finishing a move to a new city and apartment as I try my hand
>> at retirement, but as I play quite a bit with isosurfaces, saw your
>> post, and have some initial thoughts more on the internals than syntax I
>> guess.
>
> Your brainstorming contribution is much appreciated.
>
>> - I have often enough wished for a more flexible contained_by mechanism
>> for isosurfaces than the sphere / box container shapes we have today so
>> I welcome any effort to improve it.
>
> I'll put this thought somewhere at the back of my head, and let it
> ferment for a while.
>
Yes, I suppose this outermost container is peripherally related to what
you are working toward with bounded functions, and perhaps more like the
bounding Bald Eagle was mentioning.
I've played with isosurface sphere_sweep equivalents(1) and they work -
slowly. The gradients are somewhat steep - which is very often tolerable
if the container is tight to the shape, so to speak - but today the
container for such things often ends up very much larger by volume, due
to having only spheres and boxes.
I don't believe the function bounding being suggested is going to help
with such isosurfaces. It is at least not apparent to me how I could
employ it. Perhaps I could though if this new method could associate
many containers - as in say a spline of spheres/ranges - with one function?
(1) - One axis must contain no folds.
Of course we often struggle with the more complex bounding methods (i.e.
sphere_sweep), so perhaps we're asking for trouble by introducing more
complex bounding to isosurfaces! ;-)
Aside: Hmm. Wondering... Did we ever wrap up that sphere_sweep bounding
bug someone found as a git bug or otherwise fix it? This one:
http://news.povray.org/povray.bugreports/thread/%3Cweb.5692847b8bcbbc2219f71b680%40news.povray.org%3E/
>> - We can today use shape functions as in "(F_Stem(x,y,z)>=0)" or
>> functions testing whether inside or outside an object with select() to
>> limit the evaluation of other functions.
>
> While the select() statement does give some boost by cutting short the
> evaluation of sub-expressions, a dedicated mechanism for "blob-like" use
> cases could do even better.
>
> Imagine a blob-ish isosurface comprised of, say, 1000 elements, with any
> given point in space influenced by at most, say, 5 of them.
>
> A naive function would require the complete evaluation of 1000
> sub-expressions, as well as 999 additions, each time the compound
> function is to be evaluated.
>
> A function wrapping each sub-expression in a select() could cut that
> down to complete evaluation of only 5 sub-expressions, but would
> introduce an additional 6000 comparisons, 5000 logical operators and
> 1000 branches (in the most straightforward implementation that is; using
> nested switch() expressions instead of logical operators might instead
> give you some guesstimated 2000 comparisons, no logical ops, and a
> guesstimated 2000 branches). You'd also still be left with the 999
> additions. Also, all of this would happen in a virtual machine without
> JIT-compilation. So the workload would still grow linearly with the
> number of component functions.
>
> A dedicated mechanism, on the other hand, could make use of some
> space-partitioning structure like a KD-tree, to further trim down the
> workload to a mere log2(1000)=10 comparisons and branches in native
> machine code, followed by 5 complete evaluations of sub-expressions,
> mixed with 4 additions in native machine code. So the workload would
> primarily be determined by how much "overlap" you have between component
> functions, while their total number would be almost irrelevant.
>
>
>> I sometimes use left side
>> multiplication too because it is quicker to code though not sure if the
>> VM really bails early. I've played with isosurfaces defined by inside
>> tests on objects in the past too.
>
> To the best of my knowledge, POV-Ray's function VM does not use
> short-circuit evaluation anywhere, except in switch() expressions.
>
>
>> The point is that the performance limiter on implementing container/range
>> methods for isosurfaces very quickly becomes the fact that there is no
>> internal optimization of the evaluation of the container objects. Any
>> scheme to enable many bounded_by objects in functions, I believe, should
>> look to make rapid inside evaluations of all those container shapes. It
>> was not clear to me from your proposal whether this would be the case; I
>> believe all the containers would have to be evaluated for each
>> evaluation step the isosurface code takes.
>
> The main speedup provided by bounding shapes is actually not from
> allowing to make a quick first guess on insideness or intersection tests
> for an _individual_ object, but rather from providing coordinate ranges
> to compile a (very sophisticated) spatial "lookup table" of objects,
> which in turn allows to make a quick first guess across _all_ objects at
> once, eliminating most objects without even performing individual
> bounded_by tests on them.
>
> When intersection or insideness tests need to be performed for a large
> number of N objects, per-object bounded_by tests can only reduce the
> workload from a*N to b*N with b<a. Lookup structures based on the
> bounded_by coordinate ranges, on the other hand, can reduce the workload
> to as little as c*log(N) (often with c<=b, but for sufficiently large N
> that's of little relevance anyway).
>
Thanks much for the detail. I think I mostly follow it and the plan
sounds very promising.
Expect you are already aware of it, but to be certain, Christoph Hormann
did quite a bit of isosurface optimization work/study some years back.
See the isosurface speed-up items on the page at:
http://www.imagico.de/pov/tools_en.php
I did wake up this morning wondering more about the interaction of internal
function bounding with the isosurface mechanism itself.
I don't see any issue for bounded sub-functions used in patterns. There
the function will be passed a point in space and return a value.
In isosurfaces, though, there is that search inside the outer container
along the ray for threshold value(s) defining shape surface(s). My
thinking is still cloudy, but I'm wondering if this could lead to additional
iso-search work due to the internal container boundaries interacting at
a distance from any intended shape(s)?
Guess thinking some rays might travel through multiple adjacent search
ranges causing differing sets of functions to be evaluated along the ray
in a way that is noisier. Ah, maybe though this is just the same
gradient concern but away from the shape surfaces. For large numbers of
functions have to think we'd still come out ahead.
>
>> - Suppose obvious, but containers internal to the isosurface can
>> themselves introduce discontinuities / gradient issues in the overall
>> function. Inside a single isosurface they'd need to be used with care.
>
> It should indeed be flagged as a caveat; but it would raise a
> max_gradient warning, so I think that's ok.
>
Le 24/04/2016 17:19, William F Pokorny a écrit :
> I don't believe the function bounding being suggested is going to help with such
> isosurfaces. It is at least not apparent to me how I could employ it. Perhaps I could
> though if this new method could associate many containers - as in say a spline of
> spheres/ranges - with one function?
Now thinking about it: what is the gain of one function being the sum of bounded
boxed functions, compared to a multitude of distinct functions which are bounded
by their own box?
Wouldn't it duplicate the union{} for isosurfaces?
On 04/24/2016 11:36 AM, Le_Forgeron wrote:
> Le 24/04/2016 17:19, William F Pokorny a écrit :
>
>> I don't believe the function bounding being suggested is going to help with such
>> isosurfaces. It is at least not apparent to me how I could employ it. Perhaps I could
>> though if this new method could associate many containers - as in say a spline of
>> spheres/ranges - with one function?
>
> Now thinking about it: what is the gain of one function being the sum of bounded
> boxed functions, compared to a multitude of distinct functions which are bounded
> by their own box?
>
> Wouldn't it duplicate the union{} for isosurfaces?
>
I think I did a poor job of describing what I was thinking while
thinking aloud...
With the sphere sweep iso, the one function is based upon a spline
function along which an f_sphere is swept(1).
What I'd like - or think I'd like - in cases like this is a way to
better bound this one function. To that end I was thinking about being
able to specify a set of spheres/ranges to bound the function, roughly
following the same spline but larger, as a collection of bounding
regions which I presumed would fit nicely in any new search scheme.
If the point in space being tested is inside any of the spheres in this
collection of bounding spheres, the one function would be evaluated.
This needs a many-bounding-shapes-to-one-function mechanism.
The gain or aim would be evaluating the one function a good deal less
than happens today - or would happen if I had access to only one
container shape for the function.
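A sketch of that many-containers-to-one-function idea in Python, just to pin down the point-in-any-container test; all names are invented, and a straight line of spheres stands in for a real spline:

```python
def bound_by_spheres(spheres, fn):
    """Hypothetical many-containers-to-one-function binding: `fn` is
    evaluated only when the query point lies inside at least one of the
    bounding spheres laid out along the sweep's spline; elsewhere the
    component is known to contribute nothing."""
    def bounded(x, y, z):
        if any((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 <= r * r
               for (cx, cy, cz), r in spheres):
            return fn(x, y, z)
        return 0.0
    return bounded

# Bounding spheres sampled along a (here trivial, straight) spline:
spheres = [((t * 0.5, 0.0, 0.0), 0.6) for t in range(5)]
field = bound_by_spheres(spheres, lambda x, y, z: 1.0)
print(field(1.0, 0.0, 0.0), field(1.0, 5.0, 0.0))  # 1.0 0.0
```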
I think what you are suggesting is setting up multiple f_sphere/blobbing
functions already placed following the spline, each with their own
bounding shape? This would probably work too, I guess, with some work on
how to distribute the spheres/blobs ahead of time along the spline so as
to look right. This approach would have the significant advantage that the
iso sphere sweep could fold every which way with no issues. Maybe this
is the better way to go, given internal bounding...
Aside: some more thinking aloud. I've not seen it stated outright in this
thread, but to be most general we would also like to be able to distort -
with passed coordinates - both the functions and containers/ranges
together. Or is the thinking that we'd set the sub-function containers up
to handle any distortion, as is the case with the outer contained_by shape
today?
Bill P.
(1)- Lying a bit as there were derivative adjustments too, to keep the
sphere rounder as it was swept.
Am 24.04.2016 um 17:19 schrieb William F Pokorny:
> I've played with isosurface sphere_sweep equivalents(1) and they work -
> slowly. The gradients are somewhat steep - which is very often tolerable
> if the container is tight to the shape, so to speak - but today the
> container for such things often ends up very much larger by volume, due
> to having only spheres and boxes.
>
> I don't believe the function bounding being suggested is going to help
> with such isosurfaces. It is at least not apparent to me how I could
> employ it. Perhaps I could though if this new method could associate
> many containers - as in say a spline of spheres/ranges - with one function?
No, I don't expect much of a speedup for sphere_sweep equivalents
either, unless they have a lot of segments.
This thing I'm planning on is specifically intended for blob-like
isosurfaces.
> Aside: Hmm. Wondering... Did we ever wrap up that sphere_sweep bounding
> bug someone found as a git bug or otherwise fix it? This one:
>
>
http://news.povray.org/povray.bugreports/thread/%3Cweb.5692847b8bcbbc2219f71b680%40news.povray.org%3E/
Not that I remember.
> Expect you are already aware of it, but to be certain, Christoph Hormann
> did quite a bit of isosurface optimization work/study some years back.
> See the isosurface speed-up items on the page at:
> http://www.imagico.de/pov/tools_en.php
No, I wasn't aware of that. This looks like an interesting approach as
well. Entirely orthogonal to my idea though.
> I did wake up this morning wondering more about the interaction of internal
> function bounding with the isosurface mechanism itself.
>
> I don't see any issue for bounded sub-functions used in patterns. There
> the function will be passed a point in space and return a value.
>
> In isosurfaces, though, there is that search inside the outer container
> along the ray for threshold value(s) defining shape surface(s). My
> thinking is still cloudy, but I'm wondering if this could lead to additional
> iso-search work due to the internal container boundaries interacting at
> a distance from any intended shape(s)?
Provided the bounding information isn't a lie (i.e. provided each
bounding shape includes the entire volume for which the corresponding
component function is non-zero), this will have no impact on the
Isosurface algorithm whatsoever (aside from speeding it up, obviously)
-- the algorithm doesn't do anything magic with the function, it just
evaluates it plenty of times with different parameter values. So
whatever speeds up the function evaluation without giving different
results is fair game, and unless the bounding information is a lie
that's exactly what the mechanism will do.
> Guess thinking some rays might travel through multiple adjacent search
> ranges causing differing sets of functions to be evaluated along the ray
> in a way that is noisier. Ah, maybe though this is just the same
> gradient concern but away from the shape surfaces. For large numbers of
> functions have to think we'd still come out ahead.
The algorithm will just replace additions of zero with no additions at
all, while retaining all other addends.
There is a theoretical possibility that careless implementation of the
mechanism could lead to the addends being summed up in a different order
at different points in space, which indeed would have some potential for
wrecking continuity if a particularly large number of non-zero addends
were involved, but I consider that a purely academic issue; in such a
borderline pathological case, the isosurface would probably be suffering
from precision limit noise anyway even with stable addend ordering.
Besides, I expect stable ordering of addends to be virtually inevitable
except maybe for a pathologically over-optimized implementation.
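The order-dependence of floating-point addition that this concern rests on is easy to demonstrate:

```python
# The same three addends summed in two different orders: in the first,
# the 1.0 is absorbed into 1e16 before the large terms cancel; in the
# second, the large terms cancel first and the 1.0 survives.
a = sum([1.0, 1e16, -1e16])   # -> 0.0
b = sum([1e16, -1e16, 1.0])   # -> 1.0
print(a, b)
```

With only a handful of non-zero addends of comparable magnitude, as in the blob-like case, the discrepancy stays down in the last bits, which is why stable ordering makes this a non-issue in practice.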
Am 24.04.2016 um 17:36 schrieb Le_Forgeron:
> Now thinking about it: what is the gain of one function being the sum of bounded
> boxed functions, compared to a multitude of distinct functions which are bounded
> by their own box?
>
> Wouldn't it duplicate the union{} for isosurfaces ?
Not at all.
Just think of blobs: as far as the underlying maths goes, a blob is just
a special kind of isosurface, and its underlying function is indeed the
sum of multiple component functions, each of which has a limited volume
of influence, and the algorithm makes use of bounding; so in a blob you
have exactly the situation in question.
Now note that a blob comprised of a single element (and thus a single
component function) is spherical in shape. If you'd union{} multiple
such single-element blobs together, you'd get a boring set of (possibly
overlapping) spheres -- an entirely different shape than if you add the
components up together in a single blob.
The difference is that if you use a union, you're combining the results
of a threshold operation applied to individual elements, whereas in a
blob (and also in an isosurface based on a sum of functions) you're
applying a threshold operation to a combination of those elements.
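The distinction can be made concrete with a tiny 1-D Python model (the bump functions and the threshold are arbitrary stand-ins):

```python
# Two "density" bumps, each peaking at 0.8 -- below the threshold of 1.0.
f1 = lambda x: max(0.0, 0.8 - abs(x - 0.0))
f2 = lambda x: max(0.0, 0.8 - abs(x - 0.5))

def union_inside(fns, x, threshold=1.0):
    # union{} style: threshold each component separately, then combine.
    return any(f(x) >= threshold for f in fns)

def blob_inside(fns, x, threshold=1.0):
    # blob / summed-isosurface style: combine first, threshold the sum.
    return sum(f(x) for f in fns) >= threshold

# Halfway between the bumps, neither component alone reaches the
# threshold, but their sum does: the components fuse rather than
# merely overlapping.
print(union_inside([f1, f2], 0.25))  # False
print(blob_inside([f1, f2], 0.25))   # True
```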
On 04/24/2016 01:23 PM, clipka wrote:
> Am 24.04.2016 um 17:19 schrieb William F Pokorny:
>
>> Aside: Hmm. Wondering... Did we ever wrap up that sphere_sweep bounding
>> bug someone found as a git bug or otherwise fix it? This one:
>>
>>
http://news.povray.org/povray.bugreports/thread/%3Cweb.5692847b8bcbbc2219f71b680%40news.povray.org%3E/
>
> Not that I remember.
>
>
Thanks.
Still getting settled post move, but I'll go back and take a look today.
I had the idea then that an explicit bounded_by might help as a workaround,
but I didn't try it. I'll look at the source code too, but as you know,
I'm a hack coder...
In any case, I'll open a bug on github for it.
Bill P.