povray.general : Minimum Distance Function (Messages 11 to 20 of 39)
From: jr
Subject: Re: Minimum Distance Function
Date: 8 Jul 2022 05:30:00
Message: <web.62c7f8816fb4e4481be3cd4b6cde94f1@news.povray.org>
hi,

"jceddy" <jce### [at] gmailcom> wrote:
> ...
> > > Yeah, a "developer's guide to patching povray" would be great. 😁
> > >
> > > I could possibly post some details about how to add keywords and get them
> > > processed, but it would only really be useful to folks that have a compiler and
> > > know how to use it.

that sounds like a useful article for the wiki, imo, including a "worked
example" perhaps?  <https://wiki.povray.org/content/Main_Page>


regards, jr.



From: William F Pokorny
Subject: Re: Minimum Distance Function
Date: 8 Jul 2022 05:57:49
Message: <62c7ff9d$1@news.povray.org>
On 7/7/22 17:57, jceddy wrote:
> 
>> I'll work on this more today, and be back later with a link to my github fork.
> 
> My fork is over here: https://github.com/jceddy/povray
> 
> If you search for "minimum" you are going to find most of my changes.
> 
> You can create a minimum distance function with:
> 
> #declare fn = function { minimum_distance { object } };
> 
> The minimum distance pattern allows a little more control over the simulated
> annealing step, but I am currently working on additional changes to expose all
> of the relevant tweakable parameters.
> 
> At any rate, if you take a look at how I am using Find_Intersection in the
> MinimumDistanceSolver class, that is basically the core trace() functionality
> that you had mentioned you were interested in exposing via a function.
> 
> 

Thanks. Is this minimum_distance work the only thing changed in that 
fork - or is there (or will there be) other stuff changed in it too?

I implemented my own special function / new parser keyword in 
'pattern_modifiers' a couple years back in povr. I wasn't thinking about 
that approach for an f_trace(), but maybe it's a way to go...

In a post following this one you were talking about recognizing the mesh 
object for special handling. It's a shape with its own internal 
bounding. Tracing of rays(a) should be relatively fast with meshes - FWIW.

Bill P.

(a) - A thought that popped into my head yesterday is that if your ray
origin point is very close to an object's surface, you can get fooled for
mindist due to the closest root/intersection being inside the minimum
intersection depth for the particular shape/surface - those intersections
are typically thrown out (or are never found, depending upon the solver
used). In practice this issue might not come up that much. The issue is
tangled up with the isosurface (~parametric~) accuracy setting, which is
already useless beyond about 1e-6 as a soft rule. Most folks run with an
accuracy of something more like 1e-3 or a little less, and most shapes
support smaller depths than that (blobs in official POV-Ray versions are
at a quite large 1e-2). An advantage of "proximity/density" approaches
based upon inside tests only is that they don't have this exposure to
unstable ray-traced results near surfaces. Anyhow. Maybe this is a worry
for later, should you see near-surface noise/issues with your ray-traced
approach.
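
To make the near-surface caveat concrete, here is a minimal stand-alone
sketch; the DEPTH_TOLERANCE value and the root list are invented for
illustration, not taken from any shape's actual code. When roots below a
shape's minimum intersection depth are discarded, a ray that starts within
that depth of a surface reports the next root out (or nothing at all)
instead of the true nearest one.

// Sketch only: how a minimum intersection depth filter can hide the nearest
// root when the ray origin sits very close to a surface. The tolerance and
// root values below are made up for illustration.
#include <iostream>
#include <vector>

int main()
{
    const double DEPTH_TOLERANCE = 1e-3;   // hypothetical per-shape minimum depth

    // Sorted ray-surface intersection depths along one ray; the origin is
    // only ~1e-5 away from the nearest surface.
    std::vector<double> roots = { 1e-5, 0.75, 2.0 };

    double nearest = -1.0;                 // -1 means "no valid intersection"
    for (double t : roots)
        if (t >= DEPTH_TOLERANCE)          // roots inside the tolerance are discarded
        {
            nearest = t;
            break;
        }

    // Prints 0.75, not the true nearest distance of 1e-5.
    std::cout << "nearest accepted root: " << nearest << "\n";
    return 0;
}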



From: jceddy
Subject: Re: Minimum Distance Function
Date: 8 Jul 2022 16:40:00
Message: <web.62c895df6fb4e448864166f75d51d79c@news.povray.org>
> You can specify objects and object identifiers in SDL...there are ways to hook
> into the parser so that you can get an "object pointer" when an object
> definition or object identifier shows up in the scene. It may be that it's
> possible given the object pointer to figure out that the top level object is a
> mesh, and get the mesh data. I will definitely be looking into that at some
> point. (My priority at the moment is cleaning up the code and implementing an
> easy way to tweak calls to the minimum distance function in SDL).
>
> It is unclear whether that helps with performance of a minimum distance
> function, but one thing it *would* be useful for is testing, to evaluate how
> accurate various methods are. Of course, I can also evaluate that stuff using
> other methods, as well, since I know the mesh definition "offline" already.
>

So today, I verified that you can, indeed, get at the underlying Mesh data if
the object *is a* Mesh. So, in that case, given the Vertex/Triangle data in the
Mesh, along with the transformation matrix (also easy to get), you should be
able to implement a fast and accurate minimum-distance calculation by applying
the inverse of the object's transformation matrix to the test point (to get the
point in the same coordinates as the original mesh data...I THINK???), and then
doing some math.
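
A rough sketch of that check, using stand-in types rather than POV-Ray's
actual class names (which differ): given a pointer to the parsed object, a
dynamic_cast tells you whether it really is a mesh and, if so, hands back
the mesh-specific data.

// Sketch of the "is this object actually a mesh?" test, with illustrative
// stand-in types - not POV-Ray's real object hierarchy.
#include <vector>

struct Vec3 { double x, y, z; };

struct ObjectBase { virtual ~ObjectBase() = default; };

struct MeshObject : ObjectBase
{
    std::vector<Vec3> vertices;
    // ... triangle indices, transform, etc.
};

// Returns the mesh-specific data when the parsed object is a mesh, else null.
const MeshObject* AsMesh(const ObjectBase* obj)
{
    return dynamic_cast<const MeshObject*>(obj);
}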

Unfortunately the math required to do the calculation quickly is not all that
simple...it requires building a KD-Tree from the vertex data, doing some nearest
neighbor calculations, zeroing in on the nearest triangle, then calculating the
nearest point on that triangle.
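
The "nearest point on that triangle" step is the only genuinely fiddly bit
of geometry. Below is a self-contained sketch of the standard
closest-point-on-triangle test (the barycentric region walk described in
Ericson's Real-Time Collision Detection); the Vec3 type and function names
are illustrative, not POV-Ray's.

// Sketch: closest point on triangle ABC to point P, via vertex/edge/face
// region tests. Illustrative types only.
#include <cmath>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 ClosestPointOnTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
    double d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0.0 && d2 <= 0.0) return a;                  // vertex region A

    Vec3 bp = sub(p, b);
    double d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0.0 && d4 <= d3) return b;                   // vertex region B

    double vc = d1 * d4 - d3 * d2;
    if (vc <= 0.0 && d1 >= 0.0 && d3 <= 0.0)                // edge region AB
        return add(a, mul(ab, d1 / (d1 - d3)));

    Vec3 cp = sub(p, c);
    double d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0.0 && d5 <= d6) return c;                   // vertex region C

    double vb = d5 * d2 - d1 * d6;
    if (vb <= 0.0 && d2 >= 0.0 && d6 <= 0.0)                // edge region AC
        return add(a, mul(ac, d2 / (d2 - d6)));

    double va = d3 * d6 - d5 * d4;
    if (va <= 0.0 && (d4 - d3) >= 0.0 && (d5 - d6) >= 0.0)  // edge region BC
        return add(b, mul(sub(c, b), (d4 - d3) / ((d4 - d3) + (d5 - d6))));

    double denom = 1.0 / (va + vb + vc);                    // interior of the face
    return add(a, add(mul(ab, vb * denom), mul(ac, vc * denom)));
}

double PointTriangleDistance(Vec3 p, Vec3 a, Vec3 b, Vec3 c)
{
    Vec3 d = sub(p, ClosestPointOnTriangle(p, a, b, c));
    return std::sqrt(dot(d, d));
}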

The most exciting part of this is that I think that it may be possible to create
a fast and accurate minimum-distance function for each shape supported by pov
(with the possible exceptions of isosurface and blob), and then combine them
such that you can also calculate it for objects defined by CSG operations. (i.e.
for an intersection, calculate the minimum distance for each object in the
intersection first, then do a little logic based on whether the nearest point on
one object is inside the other, etc., to figure out which one to keep for the
intersected object). Been doing some drawings on napkins to suss it out.
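
For reference, the usual shortcut with signed distances (negative inside,
positive outside) combines the per-object values with min/max. That is
exact in many configurations but only a bound where the nearest point on
one component falls inside the other, which is exactly the case the extra
per-operation logic above has to handle. A tiny sketch:

// Sketch: common min/max combination rules for per-object signed distances.
// Exact in many cases, otherwise only a bound near edges created by the
// CSG operation.
#include <algorithm>

double SdfUnion(double d1, double d2)        { return std::min(d1, d2); }
double SdfIntersection(double d1, double d2) { return std::max(d1, d2); }
double SdfDifference(double d1, double d2)   { return std::max(d1, -d2); } // d1 minus d2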

Anyway...fun stuff.

My *specific* use case is for a simple Mesh object, maybe with some
transformations applied, so probably the next thing I will work on is pulling in
the necessary operations to support that (like a kdtree class, etc.)

Now I need to figure out where in the code the classes fit...figuring out where
to put my code is the hardest part...

P.S. Last night/this morning I was dreaming up a plugin architecture that could
possibly allow developers to expose their own C++-backed functionality in
POV-Ray without requiring them to actually learn their way around the POV-Ray
source.



From: jceddy
Subject: Re: Minimum Distance Function
Date: 8 Jul 2022 17:30:00
Message: <web.62c8a0f66fb4e448864166f75d51d79c@news.povray.org>
> Thanks. Is this minimum_distance work the only thing changed in that
> fork - or is there (or will there be) other stuff changed in it too?

There should only be changes in there to support the minimum_distance stuff,
though some of that is potentially general-purpose math as well.

> In a post following this one you were talking about recognizing the mesh
> object for special handling. It's a shape with its own internal
> bounding. Tracing of rays(a) should be relatively fast with meshes - FWIW.

Yes, I am figuring this all out today. :)

Given that for basically everything other than blob and isosurface there is some
more straightforward way to figure out the minimum distance, it may turn out
that a bunch of the stuff I did for estimating it isn't necessary.

I put up a post about it before I realized there were other responses here
already.



From: jceddy
Subject: Re: Minimum Distance Function
Date: 9 Jul 2022 12:10:00
Message: <web.62c9a7ca6fb4e448864166f75d51d79c@news.povray.org>
> So today, I verified that you can, indeed, get at the underlying Mesh data if
> the object *is a* Mesh. So, in that case, given the Vertex/Triangle data in the
> Mesh, along with the transformation matrix (also easy to get), you should be
> able to implement a fast and accurate minimum-distance calculation by applying
> the inverse of the object's transformation matrix to the test point (to get the
> point in the same coordinates as the original mesh data...I THINK???), and then
> doing some math.

Been working on the math. Added a couple of things to the mesh's data that can
be computed and cached as needed:

1) A kd-tree of the vertex data (makes it easy to quickly query the nearest mesh
vertex to a point).
2) A list of triangles for each vertex index, so once you query the nearest
vertex, you can immediately get the list of triangles that share that vertex.

Working on the code to check the minimum distance for each triangle touching
the nearest vertex...that should be the minimum distance to the mesh.
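
A minimal sketch of those two cached structures, with made-up names: a
nearest-vertex query (a linear scan standing in for the kd-tree) plus the
per-vertex triangle lists. The candidate triangles it returns would then go
through a point-to-triangle distance routine like the one sketched a few
messages back. One caveat: on coarse meshes the triangle containing the
true closest point does not always touch the nearest vertex, so a real
query may want to widen the candidate set.

// Sketch of the cached lookup structures; illustrative names, not POV-Ray's
// Mesh internals. Assumes BuildAdjacency() has run and verts is non-empty.
#include <cstddef>
#include <limits>
#include <vector>

struct Vec3 { double x, y, z; };
struct Tri  { int v0, v1, v2; };                 // indices into the vertex array

struct MeshCache
{
    std::vector<Vec3>             verts;
    std::vector<Tri>              tris;
    std::vector<std::vector<int>> trisOfVertex;  // vertex index -> incident triangles

    void BuildAdjacency()
    {
        trisOfVertex.assign(verts.size(), {});
        for (int t = 0; t < static_cast<int>(tris.size()); ++t)
        {
            trisOfVertex[tris[t].v0].push_back(t);
            trisOfVertex[tris[t].v1].push_back(t);
            trisOfVertex[tris[t].v2].push_back(t);
        }
    }

    // Triangles touching the vertex nearest to p. Linear scan here; the
    // cached kd-tree replaces this in the real thing.
    const std::vector<int>& CandidateTriangles(Vec3 p) const
    {
        std::size_t best = 0;
        double bestD2 = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < verts.size(); ++i)
        {
            double dx = p.x - verts[i].x, dy = p.y - verts[i].y, dz = p.z - verts[i].z;
            double d2 = dx * dx + dy * dy + dz * dz;
            if (d2 < bestD2) { bestD2 = d2; best = i; }
        }
        return trisOfVertex[best];
    }
};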

The next thing will be handling transformations...I realized this morning that I
can't just apply an inverse transform to the point being tested...this won't
work for scale and shear transformations. I think I will need to actually
generate and cache a version of the mesh data with the object transformations
applied, and then use THAT mesh data for calculating minimum distance.
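
A small sketch of that caching idea, assuming a plain affine transform
(3x3 matrix plus translation) and my own minimal types: push every vertex
through the object's transform once whenever the transform changes, then
run all distance queries against the cached world-space copy.

// Sketch: build a world-space copy of the vertex data by applying the
// object's affine transform (world = M * local + t) to each vertex.
// Rebuild only when the transform changes.
#include <vector>

struct Vec3   { double x, y, z; };
struct Affine { double m[3][3]; Vec3 t; };

Vec3 TransformPoint(const Affine& a, Vec3 p)
{
    return { a.m[0][0] * p.x + a.m[0][1] * p.y + a.m[0][2] * p.z + a.t.x,
             a.m[1][0] * p.x + a.m[1][1] * p.y + a.m[1][2] * p.z + a.t.y,
             a.m[2][0] * p.x + a.m[2][1] * p.y + a.m[2][2] * p.z + a.t.z };
}

std::vector<Vec3> TransformedVertices(const std::vector<Vec3>& local, const Affine& a)
{
    std::vector<Vec3> world;
    world.reserve(local.size());
    for (const Vec3& v : local)
        world.push_back(TransformPoint(a, v));
    return world;
}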

Anyway, one step at a time.

Also, I am thinking about changing "minimum_distance" to "proximity", as it
feels perhaps more succinct? Yea or nay?



From: jceddy
Subject: Re: Minimum Distance Function
Date: 10 Jul 2022 15:30:00
Message: <web.62cb280b6fb4e448864166f75d51d79c@news.povray.org>
> The next thing will be handling transformations...I realized this morning that I
> can't just apply an inverse transform to the point being tested...this won't
> work for scale and shear transformations. I think I will need to actually
> generate and cache a version of the mesh data with the object transformations
> applied, and then use THAT mesh data for calculating minimum distance.
>

So I got the code to calculate minimum distance using nearest-vertex from a
kd-tree working. At least I think so...I have some tiny artefacts that I think
might be a result of a bug in the code, but might also just be a result of the
kinds of tolerance issues that were mentioned above.

I added code to the underlying Mesh object in povray for calculating a
transformed copy of the vertex data, and verified that that seems to be
working...but now thinking there might be an easier solution. Like...

Can I inverse-transform the point being tested, run the minimum-distance
calculation against the original data to get a minimum-distance *VECTOR* from
the test point to the original mesh, then just apply the object's transformation
matrix to that vector before measuring its length?
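
For what it's worth: for an affine transform T(x) = M*x + t the offsets
satisfy T(p) - T(q) = M*(p - q), so transforming the distance vector by the
matrix alone (as a direction, with no translation) does give the correct
world-space distance to that particular mesh point. The catch is that under
non-uniform scale or shear the point that is nearest in object space need
not be the nearest one in world space, so the result can overestimate the
true minimum - which would explain a cached transformed copy of the mesh
behaving differently. A small numerical check of the offset identity, with
my own minimal types and arbitrary values:

// Sketch: length(M * (p - q)) equals the distance between T(p) and T(q),
// up to floating-point rounding, because the translation cancels in p - q.
#include <cmath>
#include <iostream>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double len(Vec3 v)         { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// 3x3 linear part (scale/shear/rotate) applied to a direction - no translation.
static Vec3 mulM(const double m[3][3], Vec3 v)
{
    return { m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
             m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
             m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z };
}

int main()
{
    const double M[3][3] = { { 2.0, 0.5, 0.0 },   // arbitrary scale + shear
                             { 0.0, 1.0, 0.0 },
                             { 0.0, 0.0, 3.0 } };
    const Vec3 t = { 10.0, -4.0, 7.0 };           // translation cancels in p - q

    Vec3 p = { 1.0, 2.0, 3.0 };                   // test point (object space)
    Vec3 q = { 0.5, 1.5, 2.0 };                   // nearest mesh point (object space)

    // (1) transform the offset vector by the linear part only, then measure
    double viaVector = len(mulM(M, sub(p, q)));

    // (2) transform both points (including translation), then measure
    Vec3 Mp = mulM(M, p), Mq = mulM(M, q);
    Vec3 Tp = { Mp.x + t.x, Mp.y + t.y, Mp.z + t.z };
    Vec3 Tq = { Mq.x + t.x, Mq.y + t.y, Mq.z + t.z };
    double viaPoints = len(sub(Tp, Tq));

    std::cout << viaVector << " vs " << viaPoints << "\n";  // agree to rounding
    return 0;
}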



From: Cousin Ricky
Subject: Re: Minimum Distance Function
Date: 10 Jul 2022 20:14:52
Message: <62cb6b7c@news.povray.org>
On 2022-07-10 15:27 (-4), jceddy wrote:
> 
> So I got the code to calculate minimum distance using nearest-vertex from a
> kd-tree working. At least I think so...I have some tiny artefacts that I think
> might be a result of a bug in the code, but might also just be a result of the
> kinds of tolerance issues that were mentioned above.
> 
> I added code to the underlying Mesh object in povray for calculating a
> transformed copy of the vertex data, and verified that that seems to be
> working...but now thinking there might be an easier solution. Like...
> 
> Can I inverse-transform the point being tested, run the minimum-distance
> calculation against the original data to get a minimum-distance *VECTOR* from
> the test point to the original mesh, then just apply the object's transformation
> matrix to that vector before measuring its length?

It's been almost 40 years, so I don't remember why, but I seem to recall
learning in university that inverse transformations take a heavy toll on
numeric precision.



From: William F Pokorny
Subject: Re: Minimum Distance Function
Date: 11 Jul 2022 06:28:24
Message: <62cbfb48$1@news.povray.org>
On 7/10/22 15:27, jceddy wrote:
> Can I inverse-transform the point being tested, run the minimum-distance
> calculation against the original data to get a minimum-distance *VECTOR* from
> the test point to the original mesh, then just apply the object's transformation
> matrix to that vector before measuring its length?

I've never attempted a complete survey, but I'd say this is the more common
approach inside POV-Ray's code for various calculations, though it's not
universal.

---

The numerical issues Cousin Ricky suggested as applying to the inverse
transform in particular(a) are very complicated on the whole. That's especially
true in POV-Ray's code base, where over the years inconsistent numerical
approaches were employed.

The parsing stage today flattens 'some' transforms for some shapes. 
Spheres and sphere sweeps come to mind. This kind of xfrm flattening I 
believe was done to try and help performance(b). This flattening crashes 
into the base v3.7/v3.8/v4.0 issue of pull request:

https://github.com/POV-Ray/povray/pull/358

Though the discussion thread for that pull req goes on to look at additional
numerical issues, including fundamental limits at the end related to loss of
accuracy due to the length of rays and the polynomial representations for the
ray-surface intersections.

We can work to reduce the exposure to numerical issues in ray tracing, but we
can never eliminate them. That's all the more the case due to needing to use
fast floating-point hardware for good performance.

Bill P.

(a) - The span and locality of references for many 'object' representations
are effectively better with inverse transformations, where the working
numerical space is more robust. It can be less robust too, but I think that is
less common in practice. In other words, it doesn't matter if the inverse
transform / result-re-transform is somewhat inaccurate for planet X, if all
the local calculations on and near planet X are more accurate. This is much
less of a lie where planet X exists in 'effective' isolation in the global
numerical space of the scene.

(b) - The issue pull 358 is trying to address is the unfortunate introduction
of a kind of minimum intersection depth filter in the "global" numerical space
in v3.7+, which results in the trimming or elimination of shapes from
ray-surface intersection calculations. In povr - where this issue is fixed in
a way different from the actual #358 pull req - I've got sphere point cloud
test cases which now run 30-40% slower! Why? Well, povr isn't ignoring 30-40%
of the ray->sphere calculations. Makes me wonder what portion of the
performance gains from the parser flattening/eliminating of transforms was
fool's gold when measured.



From: jceddy
Subject: Re: Minimum Distance Function
Date: 11 Jul 2022 11:25:00
Message: <web.62cc3fec6fb4e448864166f75d51d79c@news.povray.org>
> It's been almost 40 years, so I don't remember why, but I seem to recall
> learning in university that inverse transformations take a heavy toll on
> numeric precision.

Well, based on my tests, this seems to be the case.

When I inverse-transform the test point, compute a minimum-distance vector to
the original mesh data, then transform that direction vector back to take the
length, the result is slightly off from the version where I keep a parallel
copy of the mesh data around that I recalculate by transforming all the
vertices whenever the mesh's Trans changes.

The difference seems small, but the difference in the result is stark when you
actually run the ray tracer. The version where I inverse-transform the test
point, then transform the result vector, has all kinds of ugly holes. The
version where I carry around a transformed copy of the mesh data looks a lot
better, but it still has some small artefacts that aren't there with my old
ray-sampling method (although it is quite a bit faster). I think those
remaining artefacts might be due to simple precision issues, and I may be able
to mitigate them by introducing a "zero" factor that is a bit larger than zero
(like 1e-11, or maybe a bit larger than that even) to use in the calculation.
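
A tiny sketch of that kind of tolerance test, using 1e-11 as a placeholder
default:

// Sketch: compare against a small tolerance instead of exact zero when
// classifying near-degenerate results. The 1e-11 default is a placeholder.
#include <cmath>

inline bool NearlyZero(double x, double eps = 1e-11)
{
    return std::fabs(x) < eps;
}

inline bool NearlyEqual(double a, double b, double eps = 1e-11)
{
    return std::fabs(a - b) < eps;
}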



From: Bald Eagle
Subject: Re: Minimum Distance Function
Date: 11 Jul 2022 13:35:00
Message: <web.62cc5ed46fb4e4481f9dae3025979125@news.povray.org>
"jceddy" <jce### [at] gmailcom> wrote:

> When I inverse-transform the test point, compute a minimum-distance vector to
> the original mesh data, then transform that direction vector back to take the
> length, the result is slightly off from the version where I keep a parallel
> copy of the mesh data around that I recalculate by transforming all the
> vertices whenever the mesh's Trans changes.

What if instead of re-transforming the mesh data, you transform the reference
point to measure the length?
Then it's only ever one transform in either direction.

I'm only guessing, based on what I'm envisioning you doing...



