  Re: sphere slicing problem  
From: William F Pokorny
Date: 15 Oct 2019 17:53:59
Message: <5da63ff7$1@news.povray.org>
On 10/15/19 10:26 AM, jr wrote:
...
> 
>> This means for
>> you, you'll have both coincident (sometimes wrongly ordered)
>> intersections and ones less and less likely to align with the
>> orthographic camera rays as the sampled surface becomes parallel to the
>> ray. Such rays, if they hit, would be near tangents to the surface - the
>> shortest of these tends to get filtered in many objects too for a few
>> reasons.
> 
> and that's the core problem I guess.  when the surface is seen along the "thin"
> edge.  </sigh>
> 
> could you please elaborate on "coincident (sometimes wrongly ordered)
> intersections"?  with my (admittedly) limited understanding I would have thought
> that a box/sphere intersection must be the same as a sphere/box.  in the macro
> I'd "oversized" the slice (box) in the plane to avoid coincident surfaces
> at/near the bounding box.
> 

.........then the troll-of-inside_tests jumps from under the bridge... :-)

I can elaborate some, but I don't understand it all myself. What exactly 
happens depends on a lot: scene scale, which objects (and so which 
intersection code), which solvers, and the fact that the inside tests 
are often coded differently than the intersection tests for performance 
- and more.

When there are coincident or near-coincident surfaces, POV-Ray can get 
confused about the actual order (or existence) of intersections. I don't 
know gaming engines, but others have said the problem is similar to the 
z-buffer problem in gaming. So, I was speaking of this fundamental 
internal issue when I said "sometimes wrongly ordered." It has nothing 
to do with the order of objects in your intersection - which shouldn't 
matter to a first order. Though I'd not be too surprised if sometimes 
it did.

---
Starting a little ahead of your question: when we talk about coincident 
surfaces in these newsgroups we really mean three things - truly 
coincident surfaces, as in a box face overlapping a box face; 
intersecting surfaces, where one surface flows through another; and, 
sometimes, patches or effective surface patches sharing all or part of 
an edge.

Where you want to see red, you have the second type - intersecting surfaces.

Ray tracing runs in a numerically noisy environment. Where we hit the 
coincident-surface issue, it means we have surfaces and intersections 
all within some ever-changing (ray to ray, and more) numerically noisy 
window/tolerance.
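
To make the first type concrete, here's a hypothetical minimal scene 
(not your macro) where the cutting box's face at x=1 exactly coincides 
with the outer box's face; intersections on that shared face fall 
inside the noise window and tend to render as ray-to-ray speckling:

  // Hypothetical sketch - the face at x=1 is shared by both boxes.
  difference {
    box { <-1,-1,-1>, <1,1,1> pigment { rgb <1,0,0> } }
    box { < 0,-1,-1>, <1,1,1> pigment { rgb <0,0,1> } } // coincident face
  }

The usual cure is the one you describe - oversize the cutting object a 
little past every shared face, e.g. <0,-1.001,-1.001>,<1.001,1.001,1.001>.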

I'm not going to try to enumerate the particular situations where your 
approach works and where it doesn't with respect to coincident surfaces 
and numerical noise; other than to say, sometimes you get the result 
you want and sometimes not.

I believe there are two issues in your approach. I think +am3 is able to 
help with the coincident-surfaces type, but not with the type where 
things got so small they are sitting between our camera rays(a).

(a) - Hmm, you might get some improvement too by moving to larger 
renders for this reason: ray spacing relative to the scene gets smaller 
as the render gets larger.
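
For a rough feel, with made-up numbers: an orthographic camera spanning 
2 scene units at +W800 puts adjacent rays 2/800 = 0.0025 units apart; 
at +W1600 that halves to 0.00125, so a slice half as thin can still 
catch a ray.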

> 
>> (1) - And perhaps help more with more aggressive options (more than +am3).
> 
> ?? do you mean a smaller aa threshold?
> 

Maybe. I was thinking more about AA depth (+r<n>), though, and IIRC 
+am3 has a confidence setting of some kind. I've played with the new AA 
method 3, but not much. It can be very slow. When I was working on those 
mazes, I had a situation where +am3 was running fine and I changed the 
ground color. Suddenly the run times went up 100-fold - as if the 
adaptive mechanism was suddenly off. Maybe a bug, maybe just the 
behavior - I've not gotten back to look.

Aside: Method 3's ability to home in on detail missed by the original 
camera rays also makes it good at enhancing bugs/artefacts! I recommend 
using it when you are trying to pick up tiny details, and method 2 
otherwise.
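
By way of a sketch (untested; the file name and values are placeholders, 
not a recommendation), the knobs I mean are along the lines of:

  povray +Iscene.pov +W1600 +H1200 +AM3 +A0.05 +R4 +AC0.95

where +AM3 selects method 3, +A sets the threshold, +R<n> the depth, 
and +AC - if I remember the option right - the confidence setting 
mentioned above.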

> 
> yes, for my purposes, 'eval_pigment' doesn't do.  as I wrote in reply to BE, a
> variant of 'trace' would be nice; perhaps reversing the normal and using it as a
> camera ray to get the colour.  it would be useful, even given the provisos you
> mention, because it would work on an in-situ object as is.
> 

Hmm. Another bit of code I've hardly used. I have it in my head that 
eval_pigment is passed a pigment and returns the raw pigment value at 
the x,y,z of the intersection. No normals, no other factors - but maybe 
that's sort of your point? You want something which evaluates - maybe - 
overlapped textures on an object at a point?
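
From memory (untested - the gradient pigment here is just a stand-in), 
I picture its use as:

  #include "functions.inc"  // provides the eval_pigment() macro
  #declare P = pigment { gradient x colour_map { [0 rgb 0] [1 rgb <1,0,0>] } }
  #declare C = eval_pigment(P, <0.25, 0, 0>);  // raw colour at that point
  #debug concat("rgb = ", vstr(3, <C.red, C.green, C.blue>, ", ", 0, 4), "\n")

No lighting, no normal - just the pattern evaluated at the point.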

We do have the user-defined functional camera in v3.8. If we can capture 
the normal and intersections, it should be possible to turn these into 
object-facing camera rays along the reverse of the raw surface normal.

Still glossing over lots of detailed issues about what folks 'really' 
want vs what we'd really get.
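
For the trace() half, a rough, untested sketch (the sphere and start 
point are arbitrary):

  #declare Obj  = sphere { 0, 1 }
  #declare Norm = <0,0,0>;
  // trace() returns the hit point and fills Norm with the surface
  // normal; Norm stays <0,0,0> on a miss.
  #declare Hit  = trace(Obj, <5, 0.3, 0>, <-1, 0, 0>, Norm);
  #if (vlength(Norm) > 0)
    #declare Eye = Hit + 0.01 * Norm;  // back off the surface a touch
    #declare Dir = -Norm;              // probe ray aimed back at Hit
  #end

Feeding Eye/Dir pairs like that into the v3.8 user-defined camera is 
the part I've glossed over.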

>> What are you really trying to do - and for what inputs and outputs?
> 
> :-)  my current "motivation" is creating .. fodder for my shiny, new program.
> it reads three DF3s (with the RGB components) and outputs a VRML PROTO with that
> data as a 'PointSet'.
> 
> inputs - arbitrary ("pretty" :-)) objects (or whole, small-ish scenes).
> 

:-) I didn't completely follow what you're trying with VRML. I don't 
know it. If you get something going, fire off a news post or two. I'll 
keep thinking about the edge detection; maybe something will pop into my 
head worth trying.

I'll mention that some folks were creating cartoon images on the fly 
from scenes years ago. Prior work which might be worth looking over. I 
think most approached it with multiple renders, but... it was a long 
time ago.

Aside: I play with df3s and have downloaded several of your utilities 
with the intent of giving them a whirl. Not gotten to it... :-(

Bill P.

