POV-Ray : Newsgroups : povray.general : Minimum Distance Function : Re: Minimum Distance Function
  Re: Minimum Distance Function  
From: jceddy
Date: 7 Jul 2022 10:55:00
Message: <web.62c6f34d6fb4e448a6ee740d5d51d79c@news.povray.org>
> So... I think you are saying you've added a new f_mindist() function
> with the appropriate hook in functions.inc - or a stand alone hook in
> the SDL? And also a new pattern calling the same mindist code?

I actually implemented it as a "special" function, the way the pattern function
is implemented. So in the scene it looks like:

#declare fn = function { minimum_distance { ObjectIdentifier } };

> I have an interest in the former work more than the latter as have on my
> wish list to expose an f_trace() capability in my povr branch play which
> is roughly equivalent to the SDL's trace() function. I'd like to see how
> you handled the internal code hooks for tracing, inside tests (surface
> handling?) and pointers to the objects to trace.

I'm sure you could do a trace function similar to the way I did the
minimum_distance one. I am currently working on learning the parser a bit better
so I can more easily come up with a general set of arguments. I was thinking I'd
like to be able to do something like:

#declare fn = function {
  minimum_distance {
    method brute_force
    quality 2
  }
};

or similar, and then possibly implement brute_force, simulated_annealing, and
gradient_descent methods with appropriate arguments.
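For illustration, here's a rough Python sketch of what a simulated_annealing method could do conceptually. Everything here is a stand-in I made up for the example, not anything from the POV-Ray source: the "object" is a parametric unit sphere, and the annealing schedule is arbitrary.

```python
import math, random

# Hypothetical stand-in object: a unit sphere at the origin, parameterized
# by spherical angles. A real implementation would sample the actual shape.
def surface(theta, phi):
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def anneal_min_distance(p, steps=20000, seed=1):
    """Estimate min distance from point p to the surface by annealing
    over the surface parameters (illustrative schedule, not tuned)."""
    rng = random.Random(seed)
    theta = rng.uniform(0, math.pi)
    phi = rng.uniform(0, 2 * math.pi)
    cur = best = dist(p, surface(theta, phi))
    for i in range(steps):
        temp = (1.0 - i / steps) + 1e-4        # linear cooling, floor at 1e-4
        t2 = theta + rng.gauss(0, temp)        # proposal scaled by temperature
        p2 = phi + rng.gauss(0, temp)
        d2 = dist(p, surface(t2, p2))
        # Always accept downhill; accept uphill with Metropolis probability.
        if d2 < cur or rng.random() < math.exp(-(d2 - cur) / temp):
            theta, phi, cur = t2, p2, d2
            best = min(best, cur)
    return best
```

For a query point at (3, 0, 0), the true minimum distance to the unit sphere is exactly 2, and the annealer converges close to that.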

> Official development is stalled again since mid 2021, so while a pull
> request would be one handy way I could look at your code, it's unlikely
> to be acted on any time soon - in any 'official' capacity.

I had seen that development had been done as recently as 2021, which gave me
hope that it was moving forward again! XD

I have been using POV long enough that I am used to the punctuated equilibrium
model of development. This is the second time I have tried to look into the
source; the previous time was, I believe, before the C++ implementation (there
was a pre-C++ era, right? I'm not just imagining it?), and I had no idea what
was going on. I am having a lot more success navigating it now.

> There is in all v3.8 branches a potential pattern hook for all shapes.
> Today it's implemented only for blobs(b) and the isosurface(b1) itself.
> Extending it to other simple shapes like spheres would be easy, but it's
> not been done. I think because it's hard to see the benefit when you can
> represent simpler shapes directly with functions for isosurfaces and
> these simpler functions can be used as patterns already. Using
> 'patterns' in isosurface functions comes with significant overhead - and
> in the official POV-Ray releases there are bugs and usage issues too.

I actually did my first pass at this implementation by creating a
minimum_distance pattern, and then using that inside a function. As you allude
to here, that was not satisfactory... my code is still messy, though, and I have
"helper" functions in the pattern class that I am calling from the built-in
function. I need to clean all that up, but will probably work on that today and
push it up to my fork on GitHub.

One thing I am curious about, that I haven't dug into yet, is whether I can get
more information about the object being passed to the function and possibly
enable performance improvements based on that. I started down this path because
I have a mesh object I am working with that I want this functionality for... in
the case where the "top level" object is a mesh, and I could get access to the
mesh data, I have a feeling I could improve this quite a bit for that specific
use case.
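For what it's worth, here's the kind of thing I mean, as a rough Python sketch (the mesh format and function names are hypothetical, not POV-Ray internals): with access to the triangle data you can compute the exact point-to-triangle distance directly, using the standard closest-point-on-triangle construction, instead of probing the object blindly.

```python
import math

# Small tuple-vector helpers so the sketch is self-contained.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def along(a, d, t): return tuple(x + t * y for x, y in zip(a, d))

def closest_point_on_triangle(p, a, b, c):
    """Standard Voronoi-region case analysis for the closest point."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region A
    bp = sub(p, b)
    d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return along(a, ab, d1 / (d1 - d3))        # edge AB
    cp = sub(p, c)
    d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return along(a, ac, d2 / (d2 - d6))        # edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return along(b, sub(c, b),
                     (d4 - d3) / ((d4 - d3) + (d5 - d6)))  # edge BC
    denom = va + vb + vc                           # face interior
    v, w = vb / denom, vc / denom
    return tuple(a[i] + ab[i] * v + ac[i] * w for i in range(3))

def mesh_min_distance(p, triangles):
    """Exact min distance by brute force over (a, b, c) vertex triples."""
    return min(math.dist(p, closest_point_on_triangle(p, *tri))
               for tri in triangles)
```

A spatial index (BVH or grid) over the triangles would then replace the brute-force loop for large meshes, but even the brute-force version gives an exact answer instead of a sampled one.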

> Truly knowing 'the' mindist for all 3D shapes is likely impossible to
> any reasonable cost(c). This of course says nothing about whether a
> mindist functionality is useful - a somewhat fuzzy one would be.

Yes, the performance of finding the minimum_distance with *very* good precision
slows things down tremendously. Been doing a lot of testing, though, and it
seems that in a lot of cases you can get a "good enough" solution without it
being *too* painful (of course that is subjective).

For a bunch of objects I've been testing on, just shooting rays out uniformly at
45-degree increments in polar coordinates (so 26 ray samples) gets you nearly as
good a solution as a "smarter" method that might still end up taking longer only
to gain you something like a 0.01% improvement in reality.
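To make that concrete, here's a little Python sketch of the sampling scheme: the 26 directions are every nonzero combination of -1/0/+1 per axis, normalized, which is exactly the 45-degree increments. The sphere is a stand-in object I picked so the example is self-contained and the intersection test is trivial; POV-Ray would trace whatever object you pass in.

```python
import math
from itertools import product

def ray_directions():
    """The 26 unit directions at 45-degree increments in polar coordinates."""
    dirs = []
    for d in product((-1, 0, 1), repeat=3):
        if d != (0, 0, 0):
            n = math.sqrt(sum(c * c for c in d))
            dirs.append(tuple(c / n for c in d))
    return dirs

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to a stand-in sphere, or None on a miss.
    Nearest root only; assumes the origin is outside the sphere."""
    oc = tuple(c - o for c, o in zip(center, origin))
    b = sum(o * d for o, d in zip(oc, direction))
    disc = b * b - (sum(o * o for o in oc) - radius * radius)
    if disc < 0:
        return None
    t = b - math.sqrt(disc)
    return t if t >= 0 else None

def sampled_min_distance(origin, center, radius):
    """Min over the 26 ray samples -- an upper bound on the true distance."""
    hits = [t for t in (ray_sphere(origin, d, center, radius)
                        for d in ray_directions()) if t is not None]
    return min(hits) if hits else None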

> I'm interested in what you're trying for possible use/adaptation with
> stuff I'm trying. :-)

I'll work on this more today, and be back later with a link to my github fork.
