Newsgroups: povray.general
Subject: Re: Request for Syntax Suggestions
From: clipka
Date: 23 Apr 2016 13:40:13
Message: <571bb37d$1@news.povray.org>
On 23.04.2016 at 17:09, William F Pokorny wrote:

> I'm still finishing a move to a new city and apartment as I try my
> hand at retirement, but I play quite a bit with isosurfaces, saw your
> post, and have some initial thoughts, more on the internals than the
> syntax I guess.

Your brainstorming contribution is much appreciated.

> - I have often enough wished for a more flexible contained_by mechanism
> for isosurfaces than the sphere / box container shapes we have today so
> I welcome any effort to improve it.

I'll put this thought somewhere at the back of my head, and let it
ferment for a while.

> - We can today use shape functions as in "(F_Stem(x,y,z)>=0)", or
> functions testing whether we are inside or outside an object, with
> select() to limit the evaluation of other functions.

While the select() statement does give some boost by cutting short the
evaluation of sub-expressions, a dedicated mechanism for "blob-like" use
cases could do even better.

Imagine a blob-ish isosurface comprised of, say, 1000 elements, with any
given point in space influenced by at most, say, 5 of them.

A naive function would require the complete evaluation of 1000
sub-expressions, as well as 999 additions, each time the compound
function is to be evaluated.
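
For illustration, here's a minimal sketch of what such a naive compound
function looks like in SDL (the component function, its positions and
radii are all made up, and three components stand in for the 1000):

  // hypothetical blob-style component of unit radius centred at
  // (cx,cy,cz); it is evaluated in full at every sample point,
  // no matter how far away that point is
  #declare F_Comp = function(x, y, z, cx, cy, cz)
    { pow(max(0, 1 - (pow(x-cx,2) + pow(y-cy,2) + pow(z-cz,2))), 2) }

  isosurface {
    // every component contributes to every evaluation:
    // N complete evaluations plus N-1 additions per sample
    function { 0.5 - ( F_Comp(x,y,z, -1, 0, 0)
                     + F_Comp(x,y,z,  0, 0, 0)
                     + F_Comp(x,y,z,  1, 0, 0) ) }
    contained_by { box { <-2.5,-1.5,-1.5>, <2.5,1.5,1.5> } }
    max_gradient 4
    pigment { rgb 1 }
  }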

A function wrapping each sub-expression in a select() could cut that
down to complete evaluation of only 5 sub-expressions, but would
introduce an additional 6000 comparisons, 5000 logical operators and
1000 branches (in the most straightforward implementation, that is;
using nested select() expressions instead of logical operators might
give you some guesstimated 2000 comparisons, no logical ops, and a
guesstimated 2000 branches). You'd also still be left with the 999
additions. Also, all of this would happen in a virtual machine without
JIT-compilation. So the workload would still grow linearly with the
number of component functions.
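
A corresponding sketch of the select()-wrapped variant (again with
made-up components; select(A,B,C) returns B where A < 0 and C
otherwise, and evaluates only the branch it takes), which would go into
the compound function above in place of F_Comp:

  // squared distance to a hypothetical component centre
  #declare R2 = function(x, y, z, cx, cy, cz)
    { pow(x-cx,2) + pow(y-cy,2) + pow(z-cz,2) }

  // the density term is evaluated only inside the unit radius, but the
  // cheap distance test itself still runs for every single component,
  // as do the additions in the compound function
  #declare F_CompSel = function(x, y, z, cx, cy, cz)
    { select(R2(x,y,z, cx,cy,cz) - 1,          // inside the radius?
             pow(1 - R2(x,y,z, cx,cy,cz), 2),  // yes: evaluate the density
             0) }                              // no: contribute nothing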

A dedicated mechanism, on the other hand, could make use of some
space-partitioning structure like a KD-tree, to further trim down the
workload to a mere log2(1000) ≈ 10 comparisons and branches in native
machine code, followed by 5 complete evaluations of sub-expressions,
mixed with 4 additions in native machine code. So the workload would
primarily be determined by how much "overlap" you have between component
functions, while their total number would be almost irrelevant.


> I sometimes use left-side
> multiplication too because it is quicker to code, though I'm not sure
> if the VM really bails early. I've played with isosurfaces defined by
> inside tests on objects in the past too.

To the best of my knowledge, POV-Ray's function VM does not use
short-circuit evaluation anywhere, except in select() expressions.
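
In other words, multiplying by a cheap 0/1 gate is quick to write, but
the gated expression still gets evaluated in full; only select()
actually skips the untaken branch. A sketch (both functions below are
made up for illustration):

  // a cheap 0/1 gate (comparisons yield 0 or 1) and a costlier term
  #declare F_Gate = function(x, y, z)
    { (abs(x) < 1) * (abs(y) < 1) * (abs(z) < 1) }
  #declare F_Costly = function(x, y, z)
    { sin(10*x)*sin(10*y)*sin(10*z) + x*x + y*y + z*z - 1 }

  // '*' does not short-circuit: F_Costly runs even where the gate is 0
  #declare F_Mult = function(x, y, z)
    { F_Gate(x,y,z) * F_Costly(x,y,z) }

  // select() evaluates only the chosen branch, so F_Costly is skipped
  // wherever the gate is 0 (the 1 is just a harmless "outside" value)
  #declare F_Sel = function(x, y, z)
    { select(F_Gate(x,y,z) - 0.5, 1, F_Costly(x,y,z)) }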


> The point is that the performance limiter on implementing
> container/range methods for isosurfaces very quickly becomes the fact
> that there is no internal optimization of the evaluation of the
> container objects. Any scheme to enable many bounded_by objects in
> functions should, I believe, look to make rapid inside evaluations of
> all those container shapes. It was not clear to me from your proposal
> whether this would be the case. I believe all the containers would
> have to be evaluated for each evaluation step the isosurface code
> takes.

The main speedup provided by bounding shapes actually does not come
from allowing a quick first guess on insideness or intersection tests
for an _individual_ object, but rather from providing coordinate ranges
from which to compile a (very sophisticated) spatial "lookup table" of
objects, which in turn allows a quick first guess across _all_ objects
at once, eliminating most objects without even performing individual
bounded_by tests on them.

When intersection or insideness tests need to be performed for a large
number N of objects, per-object bounded_by tests can only reduce the
workload from a*N to b*N with b<a. Lookup structures based on the
bounded_by coordinate ranges, on the other hand, can reduce the workload
to as little as c*log(N) (often with c<=b, but for sufficiently large N
that's of little relevance anyway).


> - I suppose it's obvious, but containers internal to the isosurface
> can themselves introduce discontinuities / gradient issues in the
> overall function. Inside a single isosurface they'd need to be used
> with care.

It should indeed be flagged as a caveat; but since it would raise a
max_gradient warning, I think that's OK.
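
For instance (a contrived sketch), a component whose density has not
yet dropped to zero at the cut-off radius leaves a step there, and the
solver will report that the gradient it found exceeds the declared
max_gradient:

  // falls smoothly to 0 at r = 1, so cutting it off there is harmless
  #declare F_Smooth = function(x, y, z)
    { pow(max(0, 1 - (x*x + y*y + z*z)), 2) }

  // still 0.5 at r = 1, then drops to 0: a step in the compound function
  #declare F_Stepped = function(x, y, z)
    { select(x*x + y*y + z*z - 1,
             1 - 0.5*(x*x + y*y + z*z),
             0) }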


