On 11/1/19 1:39 PM, William F Pokorny wrote:
> On 11/1/19 10:16 AM, jr wrote:
...
>>
>> I now think that if, perhaps, there is a relatively simple and
>> accurate way of
>> calculating camera parameters for both camera types such that the
>> object is seen
>> in exactly the same size/position, perhaps one could merge the
>> results. any
>> thought(s) welcome.
>>
>
> Maybe...
...
Woke up this morning with a thought. If you're willing to render each
sample as three frames, your idea might fly using two perspective cameras.
Keeping whatever set-up you're already using for best results:
Use a perspective camera looking down in the first frame of a sample; a
perspective camera looking up in the second frame; and in the last frame
use the results of the first two frames as planar pigments, which
themselves feed a user-defined pigment applied to a plane with no lights
but an ambient 1 finish and an orthographic camera.
In frame 3/3 of each sample in the scan the user defined pigment would
be something like:
#declare PigmentMaxUpDownPerspective = pigment {
    user_defined {
        function { max(FnUp(x,y,z).red,   FnDown(x,y,z).red)   },
        function { max(FnUp(x,y,z).green, FnDown(x,y,z).green) },
        function { max(FnUp(x,y,z).blue,  FnDown(x,y,z).blue)  },
        ,   // filter and transmit slots left empty
    }
}
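The rest of the frame 3 scene would be little more than the following
sketch - it assumes FnUp and FnDown are pigment functions of the two
slice renders, mapped onto the unit square:

// Frame 3: no light sources; the plane is self-lit via ambient 1.
plane { z, 0
  texture {
    pigment { PigmentMaxUpDownPerspective }
    finish  { ambient 1 diffuse 0 }
  }
}
camera {
  orthographic
  location <0.5, 0.5, -1>   // image maps cover <0,0>..<1,1> in x,y
  look_at  <0.5, 0.5,  0>
  right x
  up y
}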
There is planar-position distortion due to the perspective cameras. You
would want to keep the angle small(1) to limit this. Further, for each
sample the up / down cameras should be equidistant from the middle of
each sample's slice. In other words, the camera positions should be
based upon the middle of each sample slice.
Aside: A user-defined camera looking up and down simultaneously - say,
alternating rows at twice the vertical height - is an option too, and
could be rendered in one frame. You'd have to be willing to scale the
result down vertically with some post-render program/process.
Bill P.
(1) - Small perspective camera angles will tend to work against you by
being more parallel - more like the orthographic camera - so some
balancing is required. Small perspective camera angles also require the
camera to be further away. So, I'll also mention there are accuracy
issues, especially with higher-order polynomial shapes, when the camera
rays start far away. The accuracy issue is improved if you use my updated
solver branch, but is still significantly present. The sphere is
essentially order 2, so with it you're OK accuracy-wise even at large
distances from the camera.
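For a feel for the numbers, the distance follows directly from the
angle. To frame a slice of width SliceW - a placeholder for whatever
your scan uses:

#declare CamAngle = 5;  // degrees; smaller is more parallel, but further away
#declare CamDist  = (SliceW/2) / tan(radians(CamAngle/2));

A 5 degree angle already puts the camera about 11.5 slice-widths away.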
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 11/1/19 1:39 PM, William F Pokorny wrote:
> > On 11/1/19 10:16 AM, jr wrote:
> ...
> >>
> >> I now think that if, perhaps, there is a relatively simple and
> >> accurate way of
> >> calculating camera parameters for both camera types such that the
> >> object is seen
> >> in exactly the same size/position, perhaps one could merge the
> >> results. any
> >> thought(s) welcome.
> >
> > Maybe...
> ...
>
> Woke up this morning with a thought.
:-)
> If you're willing to render each
> sample as three frames, your idea might fly using two perspective cameras.
> Keeping whatever set-up you're already using for best results:
> Use a perspective camera looking down in the first frame of a sample; a
> perspective camera looking up in the second frame; and in the last frame
> use the results of the first two frames as planar pigments, which
> themselves feed a user-defined pigment applied to a plane with no lights
> but an ambient 1 finish and an orthographic camera.
> In frame 3/3 of each sample in the scan the user defined pigment would
> be something like:
> #declare PigmentMaxUpDownPerspective = pigment {
>     user_defined {
>         function { max(FnUp(x,y,z).red,   FnDown(x,y,z).red)   },
>         function { max(FnUp(x,y,z).green, FnDown(x,y,z).green) },
>         function { max(FnUp(x,y,z).blue,  FnDown(x,y,z).blue)  },
>         ,   // filter and transmit slots left empty
>     }
> }
while I can see/appreciate the concept, I'll need to think about how to approach
this. I have never used more than one camera at a time, for instance. and then
there's the .. function voodoo. :-) FnUp/FnDown are what? the contents of the
slice wrapped somehow in a function?
> There is planar-position distortion due to the perspective cameras. You
> would want to keep the angle small(1) to limit this.
the opposite of what I've been doing so far. I use(d) the angle and location to
fine-tune the shape to fill the frame.
> Further, for each
> sample the up / down cameras should be equidistant from the middle of
> each sample's slice. In other words, the camera positions should be
> based upon the middle of each sample slice.
another thing different. until now I have used the lowest point, but will
incorporate this as a first change.
> Aside: A user-defined camera looking up and down simultaneously - say,
> alternating rows at twice the vertical height - is an option too, and
> could be rendered in one frame. You'd have to be willing to scale the
> result down vertically with some post-render program/process.
open to ideas. more function voodoo, I bet. :-)
> Bill P.
> (1) - Small perspective camera angles will tend to work against you by
> being more parallel - more like the orthographic camera - so some
> balancing is required. Small perspective camera angles also require the
> camera to be further away. So, I'll also mention there are accuracy
> issues, especially with higher-order polynomial shapes, when the camera
> rays start far away. The accuracy issue is improved if you use my
> updated solver branch, but is still significantly present. The sphere is
> essentially order 2, so with it you're OK accuracy-wise even at large
> distances from the camera.
while I've never compiled a patched version of POV-Ray, I'm willing to give it a
try (with a little hand-holding).
regards, jr.
On 11/3/19 8:56 AM, jr wrote:
> hi,
>
> William F Pokorny <ano### [at] anonymousorg> wrote:
...
>
> open to ideas. more function voodoo, I bet. :-)
>
:-) Ah sorry, I aimed my response toward doing as much in POV-Ray as
possible, given you seemed averse to intersecting the shapes with solid
pigments and using a stand-alone edge detection program.
To just try the 'render perspective camera above, below and combine'
idea, do one set of frames with the camera above and stick those frames
in a directory aboveFramesDir, and similarly for a camera-below scan.
You're already using your own image->df3 program. Read in both the
above-frame and below-frame images and take the max r,g,b found before
you write the df3 sample position. If the method works, refinements in
position etc. can wait until you find you need them. Might be the basic
idea doesn't work for some reason I don't see(1)...
Bill P.
(1) - Any scan approach will run into complications beyond those being
considered presently - interior textures different than the outside
textures of scanned objects, for example.
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 11/3/19 8:56 AM, jr wrote:
> > William F Pokorny <ano### [at] anonymousorg> wrote:
> ...
> > open to ideas. more function voodoo, I bet. :-)
>
> :-) Ah sorry, I aimed my response toward doing as much in POV-Ray as
> possible, given you seemed averse to intersecting the shapes with solid
> pigments and using a stand-alone edge detection program.
yes, SDL as much as possible/feasible. not so much "averse" as .. ignorant of,
ie I have not had cause to use such s/wares before. if you know of a suitable
utility which will process a set of images like the ones I'm likely to
encounter, please recommend.
> To just try the 'render perspective camera above, below and combine'
> idea, do one set of frames with the camera above and stick those frames
> in a directory aboveFramesDir, and similarly for a camera-below scan.
>
> You're already using your own image->df3 program. Read in both the
> above-frame and below-frame images and take the max r,g,b found before
> you write the df3 sample position. If the method works, refinements in
> position etc. can wait until you find you need them. Might be the basic
> idea doesn't work for some reason I don't see(1)...
yes, 'df3util' can import PNG, either as grayscale or _one_ of RGBA, so I don't
see/use colours at that stage. :-( still, I'd like to understand better the
way you outlined in the previous post, ie using a third cam to post-process the
two perspective cams slices (iiuc).
> (1) - Any scan approach will run into complications beyond those being
> considered presently - interior textures different than the outside
> textures of scanned objects, for example.
another can of worms. </sigh> ;-)
regards, jr.
On 11/3/19 1:31 PM, jr wrote:
> hi,
>
> William F Pokorny <ano### [at] anonymousorg> wrote:
>
> yes, 'df3util' can import PNG, either as grayscale or _one_ of RGBA, so I don't
> see/use colours at that stage. :-( still, I'd like to understand better the
> way you outlined in the previous post, ie using a third cam to post-process the
> two perspective cams slices (iiuc).
>
...
What I had in mind was to create two pigments which are planar image
maps of say sliceDown00.png and sliceUp00.png, respectively. Call these
PigDown and PigUp. (You might need to rotate/scale to align with the
final orthographic camera - max_extent() can grab the input image x,y
size)
Wrap these two pigments in functions (only way they'll get used):
#declare FnDown = function { pigment { PigDown } }
#declare FnUp = function { pigment { PigUp } }
Use these in the user_defined pigment, PigmentMaxUpDownPerspective, I
mentioned previously, and use that pigment with ambient 1 or emission 1
(diffuse 0, no lights) to texture a plane. Place this plane in front of
an orthographic camera and render to get your real sample, taking the
max r,g,b values for each pixel from the down and up slices. This third
render is your slice00 sample. Here we are using POV-Ray as an image
processing tool - something at which it's very good, if not as fast as
dedicated image processing tools.
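For instance - a sketch assuming the slice renders are PNGs named as
above:

#declare PigDown = pigment { image_map { png "sliceDown00.png" once interpolate 2 } }
#declare PigUp   = pigment { image_map { png "sliceUp00.png" once interpolate 2 } }
// max_extent() on an image-mapped pigment returns the image resolution,
// handy for matching the orthographic camera's right/up ratio to the
// input's aspect ratio:
#declare Res = max_extent(PigDown);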
FYI: there might be typos in my SDL; I didn't set up a small scene as I
often do.
Bill P.
So, just thinking out loud about this a little more:
If much of the issue arises from the angle between the camera and the
surface, I'm wondering what sort of improvement there would be if the
scene were done with an orthographic camera, but set off to the side by
some angle. Then you'd get an oblique view of the slice, and the
ray-object intersections the solver would have to deal with wouldn't hit
the same issues. Then you take that render and scale it to stretch it
back out and faux-undo the angled view. Perhaps if it only required a
few degrees of offset, it might not introduce significant distortion.
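In SDL the tilted set-up might amount to no more than the following
sketch - the Tilt value and camera placement are illustrative only:

#declare Tilt = 5;  // degrees off the slice normal
camera {
  orthographic
  location <0, 10, 0>   // above the slice plane
  look_at  <0, 0, 0>
  sky z
  rotate x*Tilt         // tip the whole camera off the vertical
}

The render would then be stretched by a factor of 1/cos(Tilt) along the
tilt axis afterwards to undo the foreshortening.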
hi,
William F Pokorny <ano### [at] anonymousorg> wrote:
> On 11/3/19 1:31 PM, jr wrote:
> > yes, 'df3util' can import PNG, either as grayscale or _one_ of RGBA, so I don't
> > see/use colours at that stage. :-( still, I'd like to understand better the
> > way you outlined in the previous post, ie using a third cam to post-process the
> > two perspective cams slices (iiuc).
> ...
> What I had in mind was to create two pigments which are planar image
> maps of say sliceDown00.png and sliceUp00.png, respectively. Call these
> PigDown and PigUp. (You might need to rotate/scale to align with the
> final orthographic camera - max_extent() can grab the input image x,y
> size)
>
> Wrap these two pigments in functions (only way they'll get used):
>
> #declare FnDown = function { pigment { PigDown } }
> #declare FnUp = function { pigment { PigUp } }
>
> Use these in the user_defined pigment, PigmentMaxUpDownPerspective, I
> mentioned previously, and use that pigment with ambient 1 or emission 1
> (diffuse 0, no lights) to texture a plane. Place this plane in front of
> an orthographic camera and render to get your real sample, taking the
> max r,g,b values for each pixel from the down and up slices. This third
> render is your slice00 sample.
a fair amount of work + organising, but (gut feeling) up + down combined ought
to catch 100% of the info (perhaps even at slightly less insane quality
settings than used now. :-)). I need to work out the real detail[*] but think
two template scenes and three ini files should cover it.
[*] pen + paper. :-)
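for the record, I'd guess each ini file needs little more than the
following - names and numbers all placeholders:

; scan_down.ini - drives the camera-above pass
Input_File_Name=scan_down.pov
Output_File_Name=aboveFramesDir/sliceDown
Initial_Frame=1
Final_Frame=64
Width=256
Height=256

with the matching template scene using frame_number to place its camera.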
> Here we are using POV-Ray as an image
> processing tool - something at which it's very good, if not as fast as
> dedicated image processing tools.
lack of speed is a drawback during development, but hey..
> FYI: there might be typos in my SDL; I didn't set up a small scene as I
> often do.
I'm glad you're giving such detailed advice + feedback, and will probably need
more in a few days' time. :-) thanks.
regards, jr.
hi,
"Bald Eagle" <cre### [at] netscapenet> wrote:
> So, just thinking out loud about this a little more,
>
> If much of the issue arises from the angle between the camera and the
> surface, I'm wondering what sort of improvement there would be if the
> scene were done with an orthographic camera, but set off to the side by
> some angle.
won't work unfortunately. either the object is in view, or not.
> ...
regards, jr.
"jr" <cre### [at] gmailcom> wrote:
> William F Pokorny <ano### [at] anonymousorg> wrote:
> > ...
> > What I had in mind was to create two pigments which are planar image
> > maps of say sliceDown00.png and sliceUp00.png, respectively. Call these
> > PigDown and PigUp. (You might need to rotate/scale to align with the
> > final orthographic camera - max_extent() can grab the input image x,y
> > size)
> >
> > Wrap these two pigments in functions (only way they'll get used):
> >
> > #declare FnDown = function { pigment { PigDown } }
> > #declare FnUp = function { pigment { PigUp } }
ok, a first basic test confirms the approach works. thanks. it seems that in
the combined pigment, the lines are less smooth than in the two source images.
need to explore this over the coming days. did I say "thank you"? :-)
regards, jr.
"jr" <cre### [at] gmailcom> wrote:
> "jr" <cre### [at] gmailcom> wrote:
> > William F Pokorny <ano### [at] anonymousorg> wrote:
> > > ...
> > > What I had in mind was to create two pigments ...
>
> ok, a first basic test confirms the approach works. ...
that test was done with two images, names hardwired. while listening to BBC
Radio 6 Music celebrating a 'Chemical Brothers' album, I managed to get ..
stuff done. :-) cannot be bothered to render a full rotation of the resulting
DF3s, so have posted a VRML version[*]. next (in the coming days) I'll see how
it goes with a couple of complex shapes. happy days..
[*] in p.b.misc again.
regards, jr.