On 14/10/2019 at 19:20, Bald Eagle wrote:
>
> "jr" <cre### [at] gmailcom> wrote:
>
>> does not for me. :-( I get solid (filled) circles, not rings.
>
> Ah.
>
>> that may not always be an option. the idea is to slice arbitrary objects
>> (suitably scaled + positioned). the object may not always be my own work
>> though.
>
> Sounds like that ex-post facto texturing lots of people would like...
What about using cutaway_texture, then?
http://wiki.povray.org/content/Reference:Cutaway_Textures
I have a feeling it was made for these people.
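For instance, a minimal sketch (Thing standing for any pre-textured object):

difference {
  object { Thing }                 // keeps its own texture
  box { <0, -2, -2>, <2, 2, 2> }   // untextured cutting box
  cutaway_texture                  // cut surfaces take Thing's texture
}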
>
> try using 2 "clipped_by" objects instead of an intersection?
>
> I really never use that command - so maybe others have advice.
>
>
Post a reply to this message
Le_Forgeron <jgr### [at] freefr> wrote:
> what about then, using cutaway_texture ?
>
> http://wiki.povray.org/content/Reference:Cutaway_Textures
>
> I have a feeling it was made for these people.
Aha. That looks very nice indeed. I'm not sure if I ever knew about that one,
or if it just got lost in the noise. Thanks very much for pointing that one out -
another nice feature that could probably use an example scene just to show it
off and make it more memorable.
Thanks, Jerome! :)
Post a reply to this message
hi,
Le_Forgeron <jgr### [at] freefr> wrote:
> On 14/10/2019 at 19:20, Bald Eagle wrote:
> > "jr" <cre### [at] gmailcom> wrote:
> >> does not for me. :-( I get solid (filled) circles, not rings.
> > Ah.
> >> that may not always be an option. the idea is to slice arbitrary objects
> >> (suitably scaled + positioned). the object may not always be my own work
> >> though.
> > Sounds like that ex-post facto texturing lots of people would like...
>
> what about then, using cutaway_texture ?
hey, thanks. like Bald Eagle, I had not seen this "object modifier" before.
alas, adding it seems to make no discernible difference to what I now get. in
accordance with the reference, I changed the "slice" to:
box { p1, p2
  hollow no_reflection no_shadow
  texture { pigment { srgbt 1 } }
  interior_texture { pigment { srgbt 1 } }
}
and use it in:
intersection {
  object { O }
  object { S }
  cutaway_texture
  double_illuminate
}
with "tight" antialias settings in the .ini this gives reasonably good results.
however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
P identified when he asked "what happens with a box". I'm currently wondering
whether using a perspective camera for "difficult" shapes mightn't be better.
and for shapes like spheres + tori I may get mileage out of lowering the vertical
resolution (i.e. thicker slices).
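("tight" meaning something along these lines in the .ini; the exact values are
illustrative:)

; antialiasing settings
Antialias=on
Sampling_Method=2
Antialias_Threshold=0.01
Antialias_Depth=4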
> I have a feeling it was made for these people.
and in reply to Bald Eagle:
the Lohmueller tip is confusing. haven't tried yet but the documentation is
quite clear that the direction vector is not used with orthographic cameras.
have you come across working code which utilises the "trick"?
regards, jr.
Post a reply to this message
"jr" <cre### [at] gmailcom> wrote:
> adding it seems to make no discernible difference to what I now get.
I don't think he was suggesting it would - he was just commenting on my "ex post
facto" differencing comment. That's a whole different story.
> however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
> P identified when he asked "what happens with a box". I'm currently wondering
> whether using a perspective camera for "difficult" shapes mightn't be better.
> and for shapes like spheres + tori I may get mileage out of lowering the vertical
> resolution (i.e. thicker slices).
This almost makes me wonder if you can do some sort of weird wide-angle /
panoramic render and then re-map it with - for lack of a better understanding
and proper vocabulary - an anamorphic camera.
> the Lohmueller tip is confusing. haven't tried yet but the documentation is
> quite clear that the direction vector is not used with orthographic cameras.
> have you come across working code which utilises the "trick"?
I believe you can just make a simple animation of a camera getting closer and
closer to a shape with a gradient pigment along the object-camera axis. Maybe
slightly off-axis, so that you can see the effect better.
The camera will start to exclude all of the parts of the object "behind" it, and
you'll effectively get a slicing effect.
I've done this plenty of times by accident when setting up scenes with the
orthographic camera.
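Something like this, as a minimal sketch (an animation with clock running 0 to
1; all values illustrative):

// an orthographic camera advancing through a sphere; geometry behind
// the camera plane is excluded, so each frame shows a deeper "slice"
camera {
  orthographic
  location <0, 0, -1.5 + 3*clock>  // passes through the sphere as clock runs
  look_at <0, 0, 2>
}
sphere { 0, 1
  pigment { gradient z color_map { [0 rgb <1,0,0>] [1 rgb <0,0,1>] } }
  finish { ambient 1 }
}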
Post a reply to this message
hi,
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "jr" <cre### [at] gmailcom> wrote:
> > adding it seems to make no discernible difference to what I now get.
>
> I don't think he was suggesting it would - he was just commenting on my "ex post
> facto" differencing comment. That's a whole different story.
</blush> misread. apologies.
> > however, _the_ problem is the "pathological" case of parallel-to-camera, as Bill
> > P identified when he asked "what happens with a box". I'm currently wondering
> > whether using a perspective camera for "difficult" shapes mightn't be better.
> > and for shapes like spheres + tori I may get mileage out of lowering the vertical
> > resolution (i.e. thicker slices).
>
> This almost makes me wonder if you can do some sort of weird wide-angle /
> panoramic render and then re-map it with --- for lack of a better understanding
> and proper vocabulary - an anamorphic camera
I can see what you mean but that's way beyond my .. pay grade.
ideally there'd be a simple way of, say, using a (slightly) offset perspective
camera, and relating that angle to some compensating factor to approximate the
orthographic equivalent (a bit like an effect using a matrix to slant/skew an
object, then reversing that).
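(the slant/skew analogy in SDL, as a minimal sketch; O and S are illustrative:)

// skew x by y with a matrix, then undo it with the inverse matrix
#declare S = 0.25;  // shear amount
object { O
  matrix <1, 0, 0,  S, 1, 0,  0, 0, 1,  0, 0, 0>   // x' = x + S*y
  matrix <1, 0, 0, -S, 1, 0,  0, 0, 1,  0, 0, 0>   // inverse restores x
}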
> > the Lohmueller tip is confusing. haven't tried yet but the documentation is
> > quite clear that the direction vector is not used with orthographic cameras.
> > have you come across working code which utilises the "trick"?
> I believe you can just make a simple animation of a camera getting closer and
> closer to a shape with a gradient pigment along the object-camera axis. Maybe
> slightly off-axis, so that you can see the effect better.
> The camera will start to exclude all of the parts of the object "behind" it, and
> you'll effectively get a slicing effect.
> I've done this plenty of times by accident when setting up scenes with the
> orthographic camera.
:-) yes, I too have found myself .. inside, unexpectedly. anyway, the
"remainder" would still need to be cut/sliced, so using a box to
intersect seems unavoidable.
regards, jr.
Post a reply to this message
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> ... This means for
> you, you'll have both coincident (sometimes wrongly ordered)
> intersections and ones less and less likely to align with the
> orthographic camera rays as the sampled surface becomes parallel to the
> ray. Such rays, if they hit, would be near tangents to the surface - the
> shortest of these tends to get filtered in many objects too for a few
> reasons.
the image posted in p.b.misc shows a lo-res Y-axis scan, orthographic camera on
the left, perspective on the right. and the 'surface becomes parallel' thing is
clearly where expected on the left. I cannot fathom though why, for the
perspective camera, the problem manifests above "the bulge" instead of at the
centre; in both cases the camera is stationed directly above the sphere.
I now think that if there is a relatively simple and accurate way of
calculating camera parameters for both camera types such that the object is
seen at exactly the same size/position, one could perhaps merge the results.
any thought(s) welcome.
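a sketch of the kind of calculation I have in mind (W, Ang, D are illustrative;
the sizes match exactly only at the look_at plane):

// frame a horizontal width W at the look_at plane with a perspective camera
#declare W   = 4;                          // width to frame
#declare Ang = 10;                         // perspective camera angle, degrees
#declare D   = (W/2)/tan(radians(Ang/2));  // distance giving width W at look_at
camera { perspective location <0, 0, -D> look_at 0 angle Ang }
// the orthographic equivalent, showing the same width at any depth:
// camera { orthographic location <0, 0, -D> look_at 0 right x*W up y*W }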
I also find the patterns appearing in many of the sliced rings interesting;
the regularity of the spacing etc. holds information -- I just can't .. parse it. :-(
if anyone's interested I can post the frame/slice images.
regards, jr.
Post a reply to this message
On 11/1/19 10:16 AM, jr wrote:
> hi,
>
> William F Pokorny <ano### [at] anonymousorg> wrote:
>> ... This means for
>> you, you'll have both coincident (sometimes wrongly ordered)
>> intersections and ones less and less likely to align with the
>> orthographic camera rays as the sampled surface becomes parallel to the
>> ray. Such rays, if they hit, would be near tangents to the surface - the
>> shortest of these tends to get filtered in many objects too for a few
>> reasons.
>
> the image posted in p.b.misc shows a lo-res Y-axis scan, orthographic camera on
> the left, perspective on the right. and the 'surface becomes parallel' thing is
> clearly where expected on the left. I cannot fathom though why, for the
> perspective camera, the problem manifests above "the bulge" instead of at the
> centre; in both cases the camera is stationed directly above the sphere.
It's the same issue: the sampled surface being mostly parallel to the rays.
The results are about what I'd expect. With the perspective camera the
rays are not parallel to the y axis, but some rays are still essentially
running parallel/tangent to the parts of the sphere causing that upper
bulge.
>
> I now think that if there is a relatively simple and accurate way of
> calculating camera parameters for both camera types such that the object is
> seen at exactly the same size/position, one could perhaps merge the results.
> any thought(s) welcome.
>
Maybe... I had the thought while traveling that it might be possible to get
further for your simple sphere with my hard_object patch, by creating a
thin skin for the sphere - an aim for that pattern was to create the
peeling-paint isosurface skins more or less automatically(1).
(1) - The issue immediately hit was performance. The inside test is
really slow for some objects, and for all shapes it slows dramatically as
the input objects get more complex. With large CSGs, for example, it
looks to be just trundling through all the shapes.
Performance aside, while the hard_object pattern handles convex shapes
really well, it struggles with a shape's concave parts. It can get noisy
/ fill in crevices.
> I also find the patterns appearing in many of the sliced rings interesting;
> the regularity of the spacing etc. holds information -- I just can't .. parse it. :-(
> if anyone's interested I can post the frame/slice images.
...
There are perhaps reasons behind them, but I doubt the usefulness of anything found.
Bill P.
Post a reply to this message
On 11/1/19 1:39 PM, William F Pokorny wrote:
> On 11/1/19 10:16 AM, jr wrote:
...
>>
>> I now think that if there is a relatively simple and accurate way of
>> calculating camera parameters for both camera types such that the
>> object is seen at exactly the same size/position, one could perhaps
>> merge the results. any thought(s) welcome.
>>
>
> Maybe...
...
Woke up this morning with a thought. If you're willing to render each
sample as three frames, your idea might fly using two perspective cameras.
While in general keeping whatever setup you are using for best results:
use a perspective camera looking down in the first frame of a sample; a
perspective camera looking up in the second frame; and in the last frame,
use the results of the first two sample frames as planar pigments, which
themselves would be used in a user-defined pigment applied to a plane,
with no lights but an ambient 1 finish and an orthographic camera.
In frame 3/3 of each sample in the scan, the user-defined pigment would
be something like:
#declare PigmentMaxUpDownPerspective = pigment {
  user_defined {
    function { max(FnUp(x,y,z).r, FnDown(x,y,z).r) },
    function { max(FnUp(x,y,z).g, FnDown(x,y,z).g) },
    function { max(FnUp(x,y,z).b, FnDown(x,y,z).b) },
    ,
  }
}
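And the rest of frame 3/3 would be roughly (a sketch of the prose above; the
0..1 view square is illustrative):

// unlit plane carrying the combined pigment, viewed orthographically
plane { z, 0
  pigment { PigmentMaxUpDownPerspective }
  finish { ambient 1 diffuse 0 }
}
camera {
  orthographic
  location <0.5, 0.5, -1>
  look_at <0.5, 0.5, 0>
  right x   // square unit-sized frame where the planar pigments live
  up y
}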
There is planar-position distortion due to the perspective camera. You
would want to keep the angle small(1) to limit this. Further, for each
sample the up / down cameras should be equidistant from the middle of
each sample's slice. In other words, the camera positions should be
based upon the middle of each sample slice.
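As a sketch, for one sample's down-looking camera (SliceBottom, SliceTop
and CamDist are made-up names):

// camera placed relative to the middle of the current sample's slice
#declare SliceMid = (SliceBottom + SliceTop)/2;
camera {
  perspective
  sky z                                // view runs along y, so re-aim the sky vector
  location <0, SliceMid + CamDist, 0>
  look_at <0, SliceMid, 0>
  angle 5                              // small angle to limit planar distortion
}
// the up-looking camera mirrors this from <0, SliceMid - CamDist, 0>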
Aside: a user-defined perspective camera looking simultaneously up and
down is an option too - say, alternating rows at twice the vertical
height - and could be rendered in one frame. You'd have to be willing to
scale down vertically only, with some post-render program/process.
Bill P.
(1) - Small perspective camera angles will tend to work against you by
making the rays more parallel - more like the orthographic camera - so
some balancing is required. Small perspective camera angles also require
the camera to be further away. So, I'll also mention there are accuracy
issues, especially with higher-order polynomial shapes, when the camera
rays start far away. The accuracy issue is improved if you use my updated
solver branch, but it is still significantly present. The sphere is
essentially order 2, so with it you're OK accuracy-wise even at large
camera distances.
Post a reply to this message
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 11/1/19 1:39 PM, William F Pokorny wrote:
> > On 11/1/19 10:16 AM, jr wrote:
> ...
> >>
> >> I now think that if there is a relatively simple and accurate way of
> >> calculating camera parameters for both camera types such that the
> >> object is seen at exactly the same size/position, one could perhaps
> >> merge the results. any thought(s) welcome.
> >
> > Maybe...
> ...
>
> Woke up this morning with a thought.
:-)
> If you're willing to render each sample as three frames, your idea might
> fly using two perspective cameras. While in general keeping whatever
> setup you are using for best results: use a perspective camera looking
> down in the first frame of a sample; a perspective camera looking up in
> the second frame; and in the last frame, use the results of the first
> two sample frames as planar pigments, which themselves would be used in
> a user-defined pigment applied to a plane, with no lights but an ambient
> 1 finish and an orthographic camera.
> In frame 3/3 of each sample in the scan, the user-defined pigment would
> be something like:
> #declare PigmentMaxUpDownPerspective = pigment {
>   user_defined {
>     function { max(FnUp(x,y,z).r, FnDown(x,y,z).r) },
>     function { max(FnUp(x,y,z).g, FnDown(x,y,z).g) },
>     function { max(FnUp(x,y,z).b, FnDown(x,y,z).b) },
>     ,
>   }
> }
while I can see/appreciate the concept, I'll need to think about how to approach
this. I have never used more than one camera at a time, for instance. and then
there's the .. function voodoo. :-) FnUp/FnDown are what? the contents of the
slice wrapped somehow in a function?
> There is planar-position distortion due to the perspective camera. You
> would want to keep the angle small(1) to limit this.
the opposite of what I've been doing so far. I use(d) the angle and location to
fine-tune the shape to fill the frame.
> Further, for each sample the up / down cameras should be equidistant
> from the middle of each sample's slice. In other words, the camera
> positions should be based upon the middle of each sample slice.
another thing different. until now I have used the lowest point, but will
incorporate this as a first change.
> Aside: a user-defined perspective camera looking simultaneously up and
> down is an option too - say, alternating rows at twice the vertical
> height - and could be rendered in one frame. You'd have to be willing
> to scale down vertically only, with some post-render program/process.
open to ideas. more function voodoo, I bet. :-)
> Bill P.
> (1) - Small perspective camera angles will tend to work against you by
> making the rays more parallel - more like the orthographic camera - so
> some balancing is required. Small perspective camera angles also require
> the camera to be further away. So, I'll also mention there are accuracy
> issues, especially with higher-order polynomial shapes, when the camera
> rays start far away. The accuracy issue is improved if you use my
> updated solver branch, but it is still significantly present. The sphere
> is essentially order 2, so with it you're OK accuracy-wise even at large
> camera distances.
while I've never compiled a patched version of POV-Ray, I'm willing to give it a
try (with a little hand-holding).
regards, jr.
Post a reply to this message
On 11/3/19 8:56 AM, jr wrote:
> hi,
>
> William F Pokorny <ano### [at] anonymousorg> wrote:
...
>
> open to ideas. more function voodoo, I bet. :-)
>
:-) Ah sorry, I aimed my response toward doing as much in POV-Ray as
possible given you seemed averse to intersecting the shapes with solid
pigments and using a stand-alone edge detection program.
To just try the "render perspective camera above, below, and combine"
idea: do one set of frames with the camera above and stick those frames
in a directory aboveFramesDir, and similarly for a camera-below scan.
You're already using your own image->df3 program. Read in both the above
frame and below frame images and take the max r,g,b found before you
write the df3 sample position. If the method works, refinements in
position etc. can wait until you find you need them. It might be that the
basic idea doesn't work for some reason I don't see(1)...
Bill P.
(1) - There are issues where any scan approach will run into
complications beyond those being considered presently. Interior textures
different than outside textures for objects scanned, for example.
Post a reply to this message