hi,
"Bald Eagle" <cre### [at] netscapenet> wrote:
> ...
> With regard to the slicing problem:
> I'm wondering if you can use/abuse the orthographic camera itself to get what
> you want.
> I came across this when Ton posted a link to FL's site.
> see:
> http://www.f-lohmueller.de/pov_tut/scale_model/s_mod_150e.htm
>
> Maybe with some sort of no_shadow or double_illuminate keywords, you could get
> it to light the surface of that interior "slice".
nice.
had a very cursory look at the FL page, not sure I understood but using the
direction vector is .. interesting, and I'll give this a try.
already using no_shadow but will see if double_illuminate (on the slice) makes a
difference.
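for reference, a minimal, untested sketch of how I picture the FL-style rig: an orthographic camera stepped along the y axis, relying on geometry behind the camera plane being clipped (the frame size and slice positions are made-up values):

```
// Hypothetical sketch of the Lohmueller-style slicing rig: an
// orthographic camera stepped along the y axis.  Geometry behind the
// camera plane is clipped, which produces the "slice" effect.
#declare Ypos = -1 + 2*clock;   // slice position, animated via the clock
camera {
  orthographic
  location <0, Ypos, 0>
  sky      z                    // avoid a degenerate up vector when looking along y
  look_at  <0, Ypos - 1, 0>     // looking straight down
  right    x*2                  // frame width (assumed scene extent)
  up       z*2                  // frame height
}
```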
thank you very much + "bless yr little cotton socks". :-)
regards, jr.
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 10/15/19 10:26 AM, jr wrote:
> ...
> >
> >> This means for
> >> you, you'll have both coincident (sometimes wrongly ordered)
> >> intersections and ones less and less likely to align with the
> >> orthographic camera rays as the sampled surface becomes parallel to the
> >> ray. Such rays, if they hit, would be near tangents to the surface - the
> >> shortest of these tends to get filtered in many objects too for a few
> >> reasons.
> >
> > and that's the core problem I guess. when the surface is seen along the "thin"
> > edge. </sigh>
> >
> > could you please elaborate on "coincident (sometimes wrongly ordered)
> > intersections"? with my (admittedly) limited understanding I would have thought
> > that a box/sphere intersection must be the same as a sphere/box. in the macro
> > I'd "oversized" the slice (box) in the plane to avoid coincident surfaces
> > at/near the bounding box.
> >
>
> .........then the troll-of-inside_tests jumps from under the bridge... :-)
>
> I can elaborate some, but I don't understand it all. What exactly
> happens depends on much. Scene scale, what objects and so what
> intersection code, which solvers, the inside tests are often coded
> differently for performance relative to the intersection tests -and
> more.
ok, understood.
> When there are coincident or near coincident surfaces POV-Ray can get
> confused about the actual order (or existence) of intersections. I don't
> know gaming engines, but others have said the problem is similar to the
> z buffer problem in gaming. So. I was speaking of this fundamental
> internal issue when I said "sometimes wrongly ordered." Nothing to do
> with your intersection object order - which shouldn't matter to a 'first
> order.' Though, I'd not be too surprised if sometimes it did.
yes. not knowing the innards of POV-Ray, object order _should_ not matter, but
there's always the occasional .. stab of doubt (particularly when things aren't
going well. :-)).
> ...
> Where you want to see red, you have the second type - intersecting surfaces.
wondering whether[*] making the slice have an interior with explicit ior etc
might make a difference.
[*] will try later.
> ...
> I believe there are two issues in your approach. I think +am3 is able to
> help with the coincident surfaces type, but not the 'things got so
> small' they are sitting between our camera rays(a) type.
>
> (a) - Hmm, you might get some improvement too by moving to larger
> renders for this reason. Ray spacing relative to the scene is smaller
> the larger the render.
will give this a try too, later (Wednesdays usually busy with RL).
the current render outputs 256x256, will check whether doubling makes a visual
difference.
> >> (1) - And perhaps help more with more aggressive options (more than +am3).
> >
> > ?? do you a mean smaller aa threshold?
>
> Maybe. I was though thinking more about AA depth (+r<n>) and IIRC +am3
> has a confidence setting of some kind. I've played with the new AA
> method 3, but not much. It can be very slow.
I've never used confidence, and usually stay away from the recursion limit.
still, much to think about, interplay of options etc, will tinker with those
too. thanks.
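for the record, the sort of settings meant here, as an .ini fragment (the exact values are guesses to experiment with, not recommendations):

```
; hypothetical .ini fragment - more aggressive settings for AA method 3
Width=512
Height=512
Antialias=On
Sampling_Method=3          ; equivalent to +am3
Antialias_Threshold=0.05   ; equivalent to +a0.05
Antialias_Depth=4          ; equivalent to +r4
Antialias_Confidence=0.99  ; used by the stochastic method 3 (+ac)
```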
> > yes, for my purposes, 'eval_pigment' doesn't do. as I wrote in reply to BE, a
> > variant of 'trace' would be nice; perhaps reversing the normal and use it as a
> > camera ray to get the colour. it would be useful, even given the provisos you
> > mention, because it would work on an in-situ object as is.
>
> Hmm. Another bit of code I've hardly used. I have it in my head
> eval_pigment is passed a pigment and it returns the raw pigment value at
> the x,y,z of the intersection. No normals no other factors, but maybe
> that's sort of your point? You want something which evaluates - maybe -
> overlapped textures on an object at a point?
sure. the "problem" I have with eval_pigment is that it works irrespective of
lighting, which is unhelpful (imo).
you're right, evaluating a point on a (textured) surface is indeed what I'm
after; a feature/function sorely missing from POV-Ray, cf the feedback available
(for cameras etc) in 'hgpovray'.
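to illustrate the objection, a small untested sketch ('P' is a made-up pigment):

```
#include "functions.inc"  // standard include that provides eval_pigment
// eval_pigment returns the raw pigment colour at a point; lighting,
// finish and normals play no part - which is exactly the limitation.
#declare P = pigment { gradient y colour_map { [0 rgb 0] [1 rgb 1] } }
#declare C = eval_pigment(P, <0, 0.25, 0>);
#debug concat("raw colour: ", vstr(5, C, ", ", 0, 4), "\n")
```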
> We do have the user defined functional camera in v38. If we can capture
> the normal and intersections, it should be possible these can be turned
> into object-facing camera rays - the reverse of the raw surface normal.
I do not even understand the concept, I guess. (when I first heard "mesh
camera" I'd naively thought it'd be like an insect's compound eye view. :-))
> ...
> >> What are you really trying to do - and for what inputs and outputs?
> >
> > :-) my current "motivation" is creating .. fodder for my shiny, new program.
> > it reads three DF3s (with the RGB components) and outputs a VRML PROTO with that
> > data as a 'PointSet'.
> >
> > inputs - arbitrary ("pretty" :-)) objects (or whole, small-ish scenes).
> >
>
> :-) I didn't completely follow what you're trying with VRML. I don't know
> it. If you get something going, fire off a news post or two. I'll keep
> thinking about the edge detection, maybe something will pop into my head
> worth trying.
basically, "export" RGB DF3 data to a custom (VRML) node, making it available as
a 'PointSet' (node type). the idea is to take some object made with POV-Ray,
scan it into DF3, export it, and view/peruse at leisure in some virtual world;
my pet project/ambition is to fashion a gallery with various .. exhibits.
I posted a grayscale version of 'spiral.df3' a month or so ago (cf BE) but
cannot find it now. :-( will in the coming days start a new off-topic thread
on the subject.
> Will mention some were creating cartoon images on the fly from scenes
> years ago. Prior work which might be worth looking over. I think most
> approached it with multiple renders, but...long time ago.
maybe one of the other "old timers" reads this _and_ can remember. </grin>
> Aside: I play with df3s and have downloaded several of your utilities
> with the intent of giving them a whirl. Not gotten to it... :-(
:-) (when (if!) you do, I really could do with some feedback. (he said,
selfishly))
regards, jr.
"jr" <cre### [at] gmailcom> wrote:
> William F Pokorny <ano### [at] anonymousorg> wrote:
> > On 10/15/19 10:26 AM, jr wrote:
> > ...
> > I believe there are two issues in your approach. I think +am3 is able to
> > help with the coincident surfaces type, but not the 'things got so
> > small' they are sitting between our camera rays(a) type.
> >
> > (a) - Hmm, you might get some improvement too by moving to larger
> > renders for this reason. Ray spacing relative to the scene is smaller
> > the larger the render.
>
> will give this a try too, later (Wednesdays usually busy with RL).
> the current render outputs 256x256, will check whether doubling makes a visual
> difference.
> > >> (1) - And perhaps help more with more aggressive options (more than +am3).
I added 'double_illuminate' to both objects in the intersection (the "slice"),
as per BE's suggestion, and a transparent 'interior_texture' to the box, and
found that helped some. I then doubled the resolution of the render (to
512x512), and tried "better" values for antialias_{confidence,depth,threshold},
again, small improvements. I've posted an animation of it in p.b.misc, same
subject. as can be seen, Bill P's advice re the near parallel (to camera) rays
was v accurate -- unfortunately. I want to avoid going down the multi-pass
approach suggested, given that, in the end, the colour would still be
"generated" by eval_pigment. so I'll use the rig as is now, and accept that
I'll have to drop resolution in some instances or avoid some shape(s)
altogether, and will have a look at the camera (cf BE Lohmueller tip) when I
find the .. energy. meanwhile new ideas/insights would be appreciated.
regards, jr.
Le 14/10/2019 à 19:20, Bald Eagle a écrit :
>
> "jr" <cre### [at] gmailcom> wrote:
>
>> does not for me. :-( I get solid (filled) circles, not rings.
>
> Ah.
>
>> that may not always be an option. the idea is to slice arbitrary objects
>> (suitably scaled + positioned). the object may not always be my own work
>> though.
>
> Sounds like that ex-post facto texturing lots of people would like...
what about then, using cutaway_texture ?
http://wiki.povray.org/content/Reference:Cutaway_Textures
I have a feeling it was made for these people.
>
> try using 2 "clipped_by" objects instead of an intersection?
>
> I really never use that command - so maybe others have advice.
>
>
Le_Forgeron <jgr### [at] freefr> wrote:
> what about then, using cutaway_texture ?
>
> http://wiki.povray.org/content/Reference:Cutaway_Textures
>
> I have a feeling it was made for these people.
Aha. That looks very nice indeed. I'm not sure if I ever knew about that one,
or it just got lost in the noise. Thanks very much for pointing that one out -
another nice feature that could probably use an example scene just to show it
off and make it more memorable.
Thanks, Jerome! :)
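in that spirit, a minimal, untested sketch of what such an example scene might look like (shapes and colours are arbitrary):

```
// Hypothetical minimal cutaway_texture scene: the box carving the
// sphere carries no texture of its own, so the cut faces inherit the
// sphere's texture instead of the default.
difference {
  sphere { 0, 1 pigment { rgb <1, 0, 0> } }
  box { <0, -2, -2>, <2, 2, 2> }   // untextured cutting object
  cutaway_texture
}
camera { location <3, 2, -4> look_at 0 }
light_source { <10, 20, -10> rgb 1 }
```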
hi,
Le_Forgeron <jgr### [at] freefr> wrote:
> Le 14/10/2019 à 19:20, Bald Eagle a écrit :
> > "jr" <cre### [at] gmailcom> wrote:
> >> does not for me. :-( I get solid (filled) circles, not rings.
> > Ah.
> >> that may not always be an option. the idea is to slice arbitrary objects
> >> (suitably scaled + positioned). the object may not always be my own work
> >> though.
> > Sounds like that ex-post facto texturing lots of people would like...
>
> what about then, using cutaway_texture ?
hey, thanks. like Bald Eagle, I had not seen this "object modifier" before.
alas, adding it seems to make no discernible difference to what I now get. in
accordance with the reference, I changed the "slice" to:
box { p1, p2
  hollow no_reflection no_shadow
  texture { pigment { srgbt 1 } }
  interior_texture { pigment { srgbt 1 } }
}
and use it in:
intersection {
  object { O }
  object { S }
  cutaway_texture
  double_illuminate
}
with "tight" antialias settings in the .ini this gives reasonably good results.
however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
P identified when he asked "what happens with a box". I'm currently wondering
whether using a perspective camera for "difficult" shapes mightn't be better.
and for shapes like spheres + tori I may get mileage out of lowering the vertical
resolution (ie thicker slices).
> I have a feeling it was made for these people.
and in reply to Bald Eagle:
the Lohmueller tip is confusing. haven't tried yet but the documentation is
quite clear that the direction vector is not used with orthographic cameras.
have you come across working code which utilises the "trick"?
regards, jr.
"jr" <cre### [at] gmailcom> wrote:
> adding it seems to make no discernible difference to what I now get.
I don't think he was suggesting it would - he was just commenting on my "ex post
facto" differencing comment. That's a whole different story.
> however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
> P identified when he asked "what happens with a box". I'm currently wondering
> whether using a perspective camera for "difficult" shapes mightn't be better.
> and for shapes like spheres + tori I may get mileage out of lowering the vertical
> resolution (ie thicker slices).
This almost makes me wonder if you can do some sort of weird wide-angle /
panoramic render and then re-map it with, for lack of a better understanding
and proper vocabulary, an anamorphic camera.
> the Lohmueller tip is confusing. haven't tried yet but the documentation is
> quite clear that the direction vector is not used with orthographic cameras.
> have you come across working code which utilises the "trick"?
I believe you can just make a simple animation of a camera getting closer and
closer to a shape with a gradient pigment along the object-camera axis. Maybe
slightly off-axis, so that you can see the effect better.
The camera will start to exclude all of the parts of the object "behind" it, and
you'll effectively get a slicing effect.
I've done this plenty of times by accident when setting up scenes with the
orthographic camera.
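the experiment described might be set up roughly like this (an untested sketch; positions and colours are arbitrary):

```
// Hypothetical sketch of the accidental-slicing experiment: a camera
// advances toward a sphere each frame; geometry behind the camera
// plane drops out, producing the slicing effect.
camera {
  location <0.2, 0.1, -2 + 4*clock>   // slightly off-axis, moving with the clock
  look_at  <0, 0, 0>
}
light_source { <5, 10, -10> rgb 1 }
sphere { 0, 1
  // gradient along the object-camera (z) axis, stretched so the
  // pattern spans the sphere exactly once
  pigment {
    gradient z
    colour_map { [0 rgb <1, 0, 0>] [1 rgb <0, 0, 1>] }
    scale 2 translate -z
  }
}
```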
hi,
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "jr" <cre### [at] gmailcom> wrote:
> > adding it seems to make no discernible difference to what I now get.
>
> I don't think he was suggesting it would - he was just commenting on my "ex post
> facto" differencing comment. That's a whole different story.
</blush> misread. apologies.
> > however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
> > P identified when he asked "what happens with a box". I'm currently wondering
> > whether using a perspective camera for "difficult" shapes mightn't be better.
> > and for shapes like spheres + tori I may get milage our of lowering the vertical
> > resolution (ie thicker slices).
>
> This almost makes me wonder if you can do some sort of weird wide-angle /
> panoramic render and then re-map it with --- for lack of a better understanding
> and proper vocabulary - an anamorphic camera
I can see what you mean but that's way beyond my .. pay grade.
ideally there'd be a simple way of, say, using a (slightly) offset perspective
camera, and relating that angle to some compensating factor to approximate the
orthographic equivalent (a bit like an effect using a matrix to slant/skew an
object, then reversing that).
> > the Lohmueller tip is confusing. haven't tried yet but the documentation is
> > quite clear that the direction vector is not used with orthographic cameras.
> > have you come across working code which utilises the "trick"?
> I believe you can just make a simple animation of a camera getting closer and
> closer to a shape with a gradient pigment along the object-camera axis. Maybe
> slightly off-axis, so that you can see the effect better.
> The camera will start to exclude all of the parts of the object "behind" it, and
> you'll effectively get a slicing effect.
> I've done this plenty of times by accident when setting up scenes with the
> orthographic camera.
:-) yes, I too have found myself .. inside, unexpectedly. anyway, the
"remainder" would still need to be cut/sliced, so using a box to intersect
looks unavoidable.
regards, jr.
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> ... This means for
> you, you'll have both coincident (sometimes wrongly ordered)
> intersections and ones less and less likely to align with the
> orthographic camera rays as the sampled surface becomes parallel to the
> ray. Such rays, if they hit, would be near tangents to the surface - the
> shortest of these tends to get filtered in many objects too for a few
> reasons.
the image posted in p.b.misc shows a lo-res Y-axis scan, orthographic camera on
the left, perspective on the right. and the 'surface becomes parallel' thing is
clearly where expected on the left. I cannot fathom though why, for the
perspective camera, the problem manifests above "the bulge" instead of at the
centre; the cameras in both are stationed directly above the sphere.
I now think that if there is a relatively simple and accurate way of
calculating camera parameters for both camera types such that the object is
seen at exactly the same size/position, one could perhaps merge the results.
any thought(s) welcome.
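one relation that might help, as an untested sketch ('W' and 'D' are placeholder numbers): an orthographic camera with `right x*W` frames a width of W, and a perspective camera at distance D from the slice plane frames that same width when its angle is 2*atan((W/2)/D).

```
// Hypothetical framing-match sketch: choose the perspective angle so
// the slice plane is framed exactly as by an orthographic camera of
// width W.
#declare W   = 2.0;                          // orthographic frame width (assumed)
#declare D   = 5.0;                          // camera distance above the slice plane
#declare Ang = 2 * degrees(atan2(W/2, D));   // matching horizontal view angle
camera {
  perspective
  location <0, D, 0>
  sky      z
  look_at  <0, 0, 0>
  angle    Ang
}
```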
I also find the patterns appearing in many of the sliced rings interesting, the
regularity of the spacing etc holds information -- I just can't .. parse it. :-(
if anyone's interested I can post the frame/slice images.
regards, jr.
On 11/1/19 10:16 AM, jr wrote:
> hi,
>
> William F Pokorny <ano### [at] anonymousorg> wrote:
>> ... This means for
>> you, you'll have both coincident (sometimes wrongly ordered)
>> intersections and ones less and less likely to align with the
>> orthographic camera rays as the sampled surface becomes parallel to the
>> ray. Such rays, if they hit, would be near tangents to the surface - the
>> shortest of these tends to get filtered in many objects too for a few
>> reasons.
>
> the image posted in p.b.misc shows a lo-res Y-axis scan, orthographic camera on
> the left, perspective on the right. and the 'surface becomes parallel' thing is
> clearly where expected on the left. I cannot fathom though why for the
> perspective camera the problem manifests above "the bulge" instead of the
> centre; cameras in both stationed directly above the sphere.
It's the same, sampled-surface being mostly parallel to the rays, issue.
The results are about what I'd expect. With the perspective camera the
rays are not parallel to the y axis, but some rays are still essentially
running parallel/tangent to the parts of the sphere causing that upper
bulge.
>
> I now think that if, perhaps, there is a relatively simple and accurate way of
> calculating camera parameters for both camera types such that the object is seen
> in exactly the same size/position, perhaps one could merge the results. any
> thought(s) welcome.
>
Maybe... I had the thought while traveling that it might be possible to
get further for your simple sphere with my hard_object patch by creating
a thin skin for the sphere - an aim for that pattern was to create the
peeling-paint isosurface skins more or less automatically(1).
(1) - The issue immediately hit was performance. The inside test is
really slow for some objects, and for all shapes it slows dramatically as
the input objects get more complex. With large CSGs, for example, it
looks to be just trundling through all the shapes.
Performance aside, while the hard_object pattern handles convex shapes
really well, it struggles with a shape's concave parts. It can get noisy
/ fill in crevices.
> I also find the patterns appearing in many of the sliced rings interesting, the
> regularity of the spacing etc holds information -- I just can't .. parse it. :-(
> if anyone's interested I can post the frame/slice images.
...
Perhaps there are reasons for them, but I doubt the usefulness of anything found.
Bill P.