William F Pokorny <ano### [at] anonymousorg> wrote:
> On 10/15/19 10:26 AM, jr wrote:
> >> This means for
> >> you, you'll have both coincident (sometimes wrongly ordered)
> >> intersections and ones less and less likely to align with the
> >> orthographic camera rays as the sampled surface becomes parallel to the
> >> ray. Such rays, if they hit, would be near tangents to the surface - the
> >> shortest of these tends to get filtered in many objects too for a few
> >> reasons.
> > and that's the core problem I guess. when the surface is seen along the "thin"
> > edge. </sigh>
> > could you please elaborate on "coincident (sometimes wrongly ordered)
> > intersections"? with my (admittedly) limited understanding I would have thought
> > that a box/sphere intersection must be the same as a sphere/box. in the macro
> > I'd "oversized" the slice (box) in the plane to avoid coincident surfaces
> > at/near the bounding box.
> .........then the troll-of-inside_tests jumps from under the bridge... :-)
> I can elaborate some, but I don't understand it all. What exactly
> happens depends on much: scene scale, which objects (and so which
> intersection code), which solvers; the inside tests are often coded
> differently from the intersection tests for performance - and so on.
> When there are coincident or near coincident surfaces POV-Ray can get
> confused about the actual order (or existence) of intersections. I don't
> know gaming engines, but others have said the problem is similar to the
> z buffer problem in gaming. So. I was speaking of this fundamental
> internal issue when I said "sometimes wrongly ordered." Nothing to do
> with your intersection object order - which shouldn't matter to a 'first
> order.' Though, I'd not be too surprised if sometimes it did.
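for reference, the oversizing trick from my macro is the usual dodge for the coincident-surface case Bill describes. a minimal SDL sketch (object names and the 0.001 margin are mine, pick one suited to the scene scale):

```pov
// Coincident-surface dodge: push the cutting box's faces past the
// target's faces so no two surfaces sit exactly on top of each other.
#declare Unit = box { -1, 1 }
difference {
  object { Unit }
  // oversized by 0.001 on every side that would otherwise coincide;
  // the x=0 face is meant to cut, so it stays put
  box { <0, -1.001, -1.001>, <1.001, 1.001, 1.001> }
}
```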
yes. not knowing the innards of POV-Ray, object order _should_ not matter, but
there's always the occasional .. stab of doubt (particularly when things aren't
going well. :-)).
> Where you want to see red, you have the second type - intersecting surfaces.
wondering whether[*] making the slice have an interior with explicit ior etc
might make a difference.
[*] will try later.
> I believe there are two issues in your approach. I think +am3 is able to
> help with the coincident-surfaces type, but not the 'things got so
> small they are sitting between our camera rays'(a) type.
> (a) - Hmm, you might get some improvement too by moving to larger
> renders for this reason. Ray spacing relative to the scene is smaller
> the larger the render.
will give this a try too, later (Wednesdays usually busy with RL).
the current render outputs 256x256, will check whether doubling makes a visual
difference.
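as a sanity check on the ray-spacing point (my own arithmetic, not Bill's numbers): for an orthographic camera spanning S scene units across W pixels, adjacent primary rays are S/W units apart, so doubling the render halves the gap a thin feature must straddle:

```python
def ray_spacing(scene_span, width_px):
    """Distance between adjacent primary rays for an orthographic
    camera spanning `scene_span` units across `width_px` pixels."""
    return scene_span / width_px

# hypothetical numbers: a 2-unit-wide view at 256 vs 512 pixels -
# going to 512 halves the spacing, so thin edges are twice as likely
# to be straddled by at least one ray
```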
> >> (1) - And perhaps help more with more aggressive options (more than +am3).
> > ?? do you mean a smaller AA threshold?
> Maybe. I was thinking more, though, about AA depth (+r<n>), and IIRC
> +am3 has a confidence setting of some kind. I've played with the new AA
> method 3, but not much. It can be very slow.
I've never used confidence, and usually stay away from the recursion limit.
still, much to think about, interplay of options etc, will tinker with those.
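for later tinkering, an illustrative v3.8 command line pulling those options together (the specific values are starting-point guesses, not recommendations):

```
povray +Iscene.pov +W512 +H512 +A0.05 +AM3 +R4 +AC0.99
```

+A is the AA threshold, +AM3 the new stochastic method, +R the depth, and +AC the confidence setting Bill mentions.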
> > yes, for my purposes, 'eval_pigment' doesn't do. as I wrote in reply to BE, a
> > variant of 'trace' would be nice; perhaps reversing the normal and use it as a
> > camera ray to get the colour. it would be useful, even given the provisos you
> > mention, because it would work on an in-situ object as is.
> Hmm. Another bit of code I've hardly used. I have it in my head
> eval_pigment is passed a pigment and it returns the raw pigment value at
> the x,y,z of the intersection. No normals no other factors, but maybe
> that's sort of your point? You want something which evaluates - maybe -
> overlapped textures on an object at a point?
sure. the "problem" I have with eval_pigment is that it works irrespective of
lighting, which is unhelpful (imo).
you're right, evaluating a point on a (textured) surface is indeed what I'm
after; a feature/function sorely missing from POV-Ray, cf the feedback available
(for cameras etc) in 'hgpovray'.
> We do have the user defined functional camera in v38. If we can capture
> the normal and intersections, it should be that these can be turned into
> object-facing camera rays - the reverse of the raw surface normal.
I do not even understand the concept, I guess. (when I first heard "mesh
camera" I'd naively thought it'd be like an insect's compound eye view. :-))
> >> What are you really trying to do - and for what inputs and outputs?
> > :-) my current "motivation" is creating .. fodder for my shiny, new program.
> > it reads three DF3s (with the RGB components) and outputs a VRML PROTO with that
> > data as a 'PointSet'.
> > inputs - arbitrary ("pretty" :-)) objects (or whole, small-ish scenes).
> :-) I didn't completely follow what you're trying with VRML. I don't know
> it. If you get something going, fire off a news post or two. I'll keep
> thinking about the edge detection, maybe something will pop into my head
> worth trying.
basically, "export" RGB DF3 data to a custom (VRML) node, making it available as
a 'PointSet' (node type). the idea is to take some object made with POV-Ray,
scan it into DF3, export it, and view/peruse at leisure in some virtual world;
my pet project/ambition is to fashion a gallery with various .. exhibits.
I posted a grayscale version of 'spiral.df3' a month or so ago (cf BE) but
cannot find it now. :-( will in the coming days start a new off-topic thread
on the subject.
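for anyone following along, a sketch of the DF3-to-PointSet idea (function names and layout are mine, not my actual program; assumes the usual DF3 layout of a 6-byte big-endian header followed by x-fastest voxels):

```python
import struct

def read_df3(data: bytes):
    """Parse a DF3 density file: three big-endian uint16s (x, y, z sizes),
    then x*y*z voxels of 1, 2, or 4 bytes each, size inferred from the
    remaining length, x varying fastest."""
    nx, ny, nz = struct.unpack(">HHH", data[:6])
    body = data[6:]
    n = nx * ny * nz
    bpv = len(body) // n                       # bytes per voxel: 1, 2, or 4
    fmt = {1: "B", 2: "H", 4: "I"}[bpv]
    vox = struct.unpack(">%d%s" % (n, fmt), body[:n * bpv])
    return (nx, ny, nz), vox

def df3_rgb_to_pointset(r, g, b, dims, threshold=0):
    """Emit a VRML97 PointSet from three parallel voxel grids (one per
    RGB component), dropping voxels at or below `threshold`."""
    nx, ny, nz = dims
    pts, cols = [], []
    peak = max(max(r), max(g), max(b)) or 1    # normalise colours to 0..1
    i = 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):                # x fastest, matching DF3 order
                if max(r[i], g[i], b[i]) > threshold:
                    pts.append("%d %d %d" % (x, y, z))
                    cols.append("%.3f %.3f %.3f"
                                % (r[i] / peak, g[i] / peak, b[i] / peak))
                i += 1
    return ("Shape { geometry PointSet {\n"
            "  coord Coordinate { point [ %s ] }\n"
            "  color Color { color [ %s ] }\n"
            "} }" % (", ".join(pts), ", ".join(cols)))
```

wrapping the Shape in a PROTO is then just string templating on top.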
> Will mention some were creating cartoon images on the fly from scenes
> years ago. Prior work which might be worth looking over. I think most
> approached it with multiple renders, but...long time ago.
maybe one of the other "old timers" reads this _and_ can remember. </grin>
> Aside: I play with df3s and have downloaded several of your utilities
> with the intent of giving them a whirl. Not gotten to it... :-(
:-) (when (if!) you do, I really could do with some feedback. (he said,