Hi,
When the POV-Ray ray tracer projects a 3D movement into a 2D
image sequence, the mapping between two images is what is called a
"projective transformation". This projective transform is well known in
the literature, and is called a "homography". It has the form:
(up*w)   (m00 m01 m02) (u)
(vp*w) = (m10 m11 m12) (v)
( w  )   (m20 m21 m22) (1)
where (u,v) are the coordinates of a point in the first image, and
(up,vp) those of its correspondent in the second image, in homogeneous
notation. This projective transformation has 9 parameters (mij), but
you can consider that it has only 8, once they are normalized (one of
the (mij) parameters is forced to the value 1.0).
These 8 independent parameters, which allow a 3D movement to be
projected into 2D images, are linked to the 6 degrees of freedom in 3D
space, via the 3 Euler rotation angles and the 3 translations. The
(mij) parameters are thus linked to:
(Tx, Ty, Tz), the 3 translations in image pixels, and
(Rx, Ry, Rz), the (yaw, pitch, roll) Euler angles in degrees.
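To make the notation concrete, here is a small Python sketch (mine, not
part of the original post) that applies a 3x3 homography to an image
point, including the division by w that distinguishes a projective
transform from an affine one:

```python
def apply_homography(m, u, v):
    """Map the image point (u, v) through the 3x3 homography m."""
    # Homogeneous multiplication: (up*w, vp*w, w) = m . (u, v, 1)
    up_w = m[0][0] * u + m[0][1] * v + m[0][2]
    vp_w = m[1][0] * u + m[1][1] * v + m[1][2]
    w = m[2][0] * u + m[2][1] * v + m[2][2]
    # Perspective division recovers the inhomogeneous coordinates
    return up_w / w, vp_w / w

# A pure translation by (5, 7) pixels, written as a homography
shift = [[1, 0, 5],
         [0, 1, 7],
         [0, 0, 1]]
print(apply_homography(shift, 3.0, 4.0))  # -> (8.0, 11.0)
```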
When someone like me uses POV-Ray's scripting capabilities, the
language offers the following directives to represent the movement:
translate <Tx, Ty, Tz> and
rotate <Rx, Ry, Rz>
in 3D space. But these directives only account for 6 parameters of the
projective transform, and I miss the 2 remaining "skewness" parameters.
<http://stackoverflow.com/questions/13206220/3d-skew-transformation-matrix-along-one-coordinate-axis>
This is well illustrated in the above document. When I consider a
movement in 3D space, projected into a 2D sequence of images, I must
consider two other parameters (Sx,Sy), in the (x,y) image coordinates,
that are linked to the (x,y) skewness. My question is: how are those
skew angles expressed in the POV-Ray scripting language? There should
be a
skew <Sx, Sy>
directive that would render the skew component of the camera movement.
I can't figure out by myself how these 2 skew movements are represented
in the POV-Ray scripting language directives. It must be linked to the
way the perspective is generated in the camera properties.
Can somebody tell me how to generate those parameter movements with the
POV-Ray language? I'm confused, and probably not very clear, but maybe
someone else has thought about this problem before me.
Many thanks in advance for your help...
Best regards,

<http://eureka.atari.org/>

I think what you want is just called "shear".
http://www.f-lohmueller.de/pov_tut/trans/trans_450e.htm
Most times when people refer to "skew" it's more of a corkscrew type of
deformation, typically encountered 50 times when trying to get a single
straight 2x4 out of a pallet of lumber at a Big Box store. :-)

Hi,
Bald Eagle writes:
> I think what you want is just called "shear".
>
> http://www.f-lohmueller.de/pov_tut/trans/trans_450e.htm
>
> Most times when people refer to "skew" it's more of a corkscrew type of
> deformation, typically encountered 50 times when trying to get a single
> straight 2x4 out of a pallet of lumber at a Big Box store. :-)
This "shear" transformation is very interesting! It's a 3D
transformation in space. We are very close to an answer to the problem.
I'm speaking of a 2D projective transformation in the image plane, the
one that relates two images of the same scene viewed from different
viewpoints. Here are two real successive aerial images taken with the
same camera:
<http://eureka.atari.org/perspective.gif>
They are linked by the 8 projective transformation parameters:
Tx = 30.6 pixels
Ty = 22.7 pixels
Tz = 13.5 pixels
(Rx, Ry, Rz) are the (yaw, pitch, roll) Euler angles in 3D space, and
(Tx, Ty, Tz) the 3 translations in 3D space. The two supplementary
parameters, "shear" or "skew" (Sx, Sy), are two perspective angles.
The global "Correlation = 83.7%" coefficient indicates a good match.
The corresponding mosaic is shown to illustrate the geometric transform.
Is the "shear" transform applicable to the POV-Ray camera properties?
I mean, "translate <Tx, Ty, Tz>" and "rotate <Rx, Ry, Rz>" are
applicable to the camera, to render the movement of the camera. Can I
apply these two "matrix" transforms to modify the perspective?
Thanks a lot for the very positive help =)
Best regards,

<http://eureka.atari.org/>


Francois LE COAT <lec### [at] atariorg> wrote:
> This "Shear" transformation is very interesting ! It's a 3D
> transformation in space. We are very close to an answer to the problem.
>
> I'm speaking of a 2D projective transformation in the image plane, the
> one that relates two images of the same scene viewed from different
> viewpoints. Here are two real successive aerial images taken with the
> same camera:
Very interesting; it looks like some aerial photogrammetry you're
working on.
> The two supplementary
> parameters, "Shear" or "Skew" (Sx, Sy) are two perspective angles.
> The global "Correlation = 83.7%" coefficient tells about good matching.
> Is the "Shear" transform applicable to the POVRay camera properties ?
Yes, you can apply a matrix transform to the camera.
> I mean, "translate <Tx, Ty, Tz>" and "rotate <Rx, Ry, Rz>" are
> applicable to the camera properties, to render movement of the camera.
> Can I apply these two "matrix" transforms to modify the perspective ?
I'd have to say yes, even though I have yet to work out all of the
particulars in detail.
It looks to me like what you want to do is apply a shear in one
direction, and a shear perpendicular to that.
So, looking at Friedrich Lohmueller's page: when you add that 0.5 into
the matrix, it equates to a shear of a certain angle. FL says it's 30
degrees, but I thought it would be calculated as atan(0.5/1) = 26.565
degrees.
Do the same to a different axis, and that ought to get you the "skew"
you're looking for.
Looking down along the y axis, I think that might be:
matrix < 1,   0, 0,
         0.5, 1, 0.5,
         0,   0, 1,
         0,   0, 0 >
Maybe this is closer to what you need?
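As a quick numeric check of that angle (my own sketch, not from the
thread): the shear factor s and the shear angle are related by
angle = atan(s), so a factor of 0.5 tilts the axis by about 26.565
degrees, not 30:

```python
import math

def shear_angle_deg(s):
    """Angle in degrees by which a unit axis tilts under shear factor s."""
    return math.degrees(math.atan(s))

print(round(shear_angle_deg(0.5), 3))  # -> 26.565
```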

On 04.10.2018 at 22:30, Francois LE COAT wrote:
> When the POV-Ray ray tracer projects a 3D movement into a 2D
> image sequence, the mapping between two images is what is called a
> "projective transformation".
[...]
You're conflating a couple of things, so let me try to clarify:
(1) The matrix you've posted and described is for a 2D-to-2D projective
transformation; that would be of no use in a 3D animation, as you can't
just transform one output image into another: depth is already lost at
that point. The only projective transformations of interest would be
3D-to-3D (to change the scene)...:
(x')   1  (m00 m01 m02 m03) (x)
(y') = -- (m10 m11 m12 m13) (y)
(z')   w  (m20 m21 m22 m23) (z)
(1 )      (m30 m31 m32 m33) (1)
... or 3D-to-2D (for the camera; not sure if the term "projective
transformation" would be correct there):
(u)   1  (m00 m01 m02 m03) (x)
(v) = -- (m10 m11 m12 m13) (y)
(1)   w  (m20 m21 m22 m23) (z)
                           (1)
(2) The `translate`, `rotate`, `scale` and `matrix` 3D-to-3D
transformations in POV-Ray are not /projective/ transformations, but
rather /affine/ transformations:
(x')   (m00 m01 m02 m03) (x)
(y') = (m10 m11 m12 m13) (y)
(z')   (m20 m21 m22 m23) (z)
                         (1)
(Technically, `scale` and `rotate` are actually not affine but linear,
but the implementation uses affine matrices throughout.)
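The practical difference can be seen in a small Python sketch (mine,
assuming row-major 3x4 matrices): an affine transform needs no
perspective division, only a matrix-vector product plus a translation
column:

```python
def affine_3d(m, p):
    """Apply a 3x4 affine matrix m (three rows) to the 3D point p."""
    x, y, z = p
    # No division by w: the implicit last row is (0 0 0 1), so w == 1
    return tuple(r[0] * x + r[1] * y + r[2] * z + r[3] for r in m)

# A pure translation by (1, 2, 3)
m = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3]]
print(affine_3d(m, (0.0, 0.0, 0.0)))  # -> (1.0, 2.0, 3.0)
```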
(3) The camera projection in POV-Ray does /not/ use a 3D-to-2D matrix
transformation as shown above, but rather transforms each individual ray
according to a potentially more complex formula, allowing POV-Ray to
support more camera projections than just a simple pinhole camera. Even
the pinhole camera model is implemented that way.
(4) At no point does POV-Ray perform transformations from one frame of
an animation to the next. Instead, each frame of an animation is created
"from scratch" according to the "recipe" provided by the user in the
form of the scene file script, varying the recipe depending on a "clock"
parameter. In the creation of each frame, affine 3D-to-3D
transformations are used to specify the orientation, size, position and
shearing of the components.
Now that we've got that out of the way:
POV-Ray's primary transformations are more than sufficient to exploit
all degrees of freedom: besides `translate` and `rotate` there is also
`scale`, which can be combined with `rotate` to produce arbitrary
shearing transformations.
Note that in POV-Ray transformations are instructions, not parameters;
thus you can chain multiple instructions, such as applying a rotation,
then a scaling, then another rotation.
Since it may be a major PITA to create a shearing transformation with
just `translate`, `rotate` and `scale`, POV-Ray provides a fourth
transformation type, `matrix`, which gives you direct control over the
transformation matrix. (Again, this is an instruction, and it chains
with the other transformations.)
To make life even easier, the include file "transforms.inc" distributed
with POVRay provides a macro to specify a pure shearing transformation,
`Shear_Trans(X,Y,Z)`.
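If I read the macro's intent correctly (treat this as my assumption, not
a statement from the thread), Shear_Trans(X, Y, Z) builds the matrix
that sends the three unit axes to the vectors X, Y and Z. A Python
sketch of that idea:

```python
def shear_trans(X, Y, Z):
    """Matrix, stored as three rows, sending unit axes x, y, z to X, Y, Z."""
    return [list(X), list(Y), list(Z)]

def apply(m, p):
    # A point maps to p.x*X + p.y*Y + p.z*Z, as with POV-Ray's matrix rows
    return tuple(sum(p[i] * m[i][j] for i in range(3)) for j in range(3))

# Shear: x and z axes stay put, the y axis tilts toward x by 0.5
m = shear_trans((1, 0, 0), (0.5, 1, 0), (0, 0, 1))
print(apply(m, (0, 1, 0)))  # the unit y vector lands on (0.5, 1, 0)
```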
Theoretically it would be possible to implement a `shear` transformation
in the language itself, but there's no genuine need for it, thanks to
the macro; and unlike the other primary transformations (`translate`,
`rotate` and `scale`) there is no shape that would benefit from
implementing special handling for it. (For example, the sphere shape is
defined by its center and radius; `translate` can be implemented by
changing the center, `scale` (if uniform) can be implemented by changing
the radius, and `rotate` can be ignored entirely.)

Hi,
clipka writes:
> The matrix you've posted and described is for a 2D-to-2D projective
> transformation; that would be of no use in a 3D animation, as you
> can't just transform one output image into another, as depth is
> already lost at that point.
The projective transformation that I posted is useful in computer vision
to determine how the camera moves (the egomotion), knowing the images
that are observed. If you can determine the motion of a planar surface
located in the image, this allows you to recover the camera movements.
There are two simplifying hypotheses: either the scene is static and the
camera is moving, or the scene moves and the camera is static. If the
scene and the camera are both moving, you have to segment static and
dynamic objects in the scene, in order to evaluate the camera motion.
I'm not really speaking of image synthesis, but rather of computer
vision. Image synthesis with POV-Ray allows me to represent experimental
results, for instance in this video made with the help of POV-Ray:
<http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
I've determined the movements of the head, and represented them in VR...
The goal of this research is to measure the camera movements as
precisely as possible, knowing the contents of an image sequence.
Knowing that, you can build trajectories, almost like a GPS would do it.
You may have heard about an algorithm called "Simultaneous Localization
and Mapping" (SLAM) helping robots to know their position and
environment.
I've noted the "Shear_Trans(X,Y,Z)" macro, which will be useful in my
work.
Thanks for your explanations.
Regards,

<http://eureka.atari.org/>

Hi,
Bald Eagle writes:
> It looks to me like what you want to do is apply a shear in one
> direction, and a shear perpendicular to that.
>
> So, looking at Friedrich Lohmueller's page, when you add that 0.5 in to
> the matrix, that equates to a shear of a certain angle. FL says it's 30
> degrees, but I thought it would be calculated as atan(0.5/1) = 26.565.
> Do the same to a different axis, and that ought to get you the "skew"
> you're looking for.
>
> Looking down along the y axis, I think that might be:
>
> matrix < 1,   0, 0,
>          0.5, 1, 0.5,
>          0,   0, 1,
>          0,   0, 0 >
>
> Maybe this is closer to what you need?
Now, with the image matching method that I developed, I compute 8
parameters in 3D space:
Tx : translation along the x axis
Ty : translation along the y axis
Tz : translation along the z axis
Rx : pitch angle along the x axis
Ry : yaw angle along the y axis
Rz : roll angle along the z axis
Sx : skew angle along the x axis
Sy : skew angle along the y axis
To render the computed projective transform I use the POVRay code :
"
matrix < 1,                          0, 0,
         atan(Sx*3.141592654/180.0), 1, 0,
         0,                          0, 1,
         0,                          0, 0 >
matrix < 1, atan(Sy*3.141592654/180.0), 0,
         0, 1,                          0,
         0, 0,                          1,
         0, 0,                          0 >
rotate <Rx,Ry,Rz>
translate <Tx,Ty,Tz>
"
<Tx,Ty,Tz> are expressed in pixels. <Rx,Ry,Rz> and <Sx,Sy> are expressed
in degrees. This gives the following result, following the head's
movements:
<http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
The interest of the method is that it can evaluate large movements. It
works!
Thanks for your help,
Best regards,

<http://eureka.atari.org/>

Francois LE COAT <lec### [at] atariorg> wrote:
This is wonderful! I'm glad you succeeded in working out the details for
your specialized application. And thanks for reporting back; it's always
a pleasure to see the successes of interesting projects such as yours. :)
If I may ask, do you think your work is close enough to something like:
http://kevinkarsch.com/publications/sa11-lowres.pdf
https://kevinkarsch.com/?page_id=445
such that, given a 2D photo, 3-dimensional parameters of objects in it
could be extrapolated from it?
Perhaps you may find Dr. Karsch's computer vision research projects, and
his source code, helpful in developing your own methods more quickly. I
myself have been keeping an occasional eye on his work for several years
now.
I like your little yellow smiley face; I did a similar animated face a
few years back. :)
I hope to see more of your work; it looks like you are lucky enough to
work on a very fun and enjoyable project.
All the best.
Bill Walker
> The interest with the method, is to evaluate large movements. It works !
>
> Thanks for your help,
>
> Best regards,
>
> 
> <http://eureka.atari.org/>

Hi,
Bald Eagle writes:
> Francois LE COAT wrote:
> This is wonderful! I'm glad you succeeded in working out the details
> for your specialized application. And thanks for reporting back; it's
> always a pleasure to see the successes of interesting projects such as
> yours. :)
I've updated the video
<http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4> because in the
POV-Ray transformations I shouldn't have used the arctangent:
"
matrix < 1,                          0, 0,
         atan(Sx*3.141592654/180.0), 1, 0,
         0,                          0, 1,
         0,                          0, 0 >
matrix < 1, atan(Sy*3.141592654/180.0), 0,
         0, 1,                          0,
         0, 0,                          1,
         0, 0,                          0 >
rotate <Rx,Ry,Rz>
translate <Tx,Ty,Tz>
"
but rather the tangent:
"
matrix < 1,                         0, 0,
         tan(Sx*3.141592654/180.0), 1, 0,
         0,                         0, 1,
         0,                         0, 0 >
matrix < 1, tan(Sy*3.141592654/180.0), 0,
         0, 1,                         0,
         0, 0,                         1,
         0, 0,                         0 >
rotate <Rx,Ry,Rz>
translate <Tx,Ty,Tz>
"
for the skew along the (x,y) axes. That means the <Sx,Sy> skew (or
shear) angles in degrees are now rendered correctly. The video is almost
the same, because tangent and arctangent are similar for small skew (or
shear) angles.
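A quick numeric check of that correction (my own sketch, not from the
post): the off-diagonal shear entry for an angle is tan(angle), since a
shear by that angle displaces a unit axis by tan(angle); atan only
happens to be close for small angles:

```python
import math

def shear_entry(angle_deg):
    """Correct off-diagonal matrix entry for a shear of angle_deg degrees."""
    return math.tan(math.radians(angle_deg))

for a in (1.0, 5.0, 26.565):
    correct = shear_entry(a)
    mistaken = math.atan(math.radians(a))  # what the first version computed
    print(a, round(correct, 4), round(mistaken, 4))
```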
> If I may ask, do you think your work is close enough to something like:
> http://kevinkarsch.com/publications/sa11-lowres.pdf
> https://kevinkarsch.com/?page_id=445
> such that given a 2D photo, 3-dimensional parameters of objects in it
> could be extrapolated from it?
> Perhaps you may find Dr. Karsch's computer vision research projects,
> and his source code, helpful in developing your own methods more
> quickly. I myself have been keeping an occasional eye on his work for
> several years now.
I'm trying to model the camera motion when the camera is moving, and the
dominant motion when the camera is static. The camera is static in this
video, so I model the head's movement. I'm not modelling the contents of
the scene, but rather the dominant motion. I'm modelling the optical
flow, meaning a field of motion vectors between images. I'm not
modelling pixels, which is what POV-Ray does extremely well, indeed.
The web page you're showing reminds me of results from neural networks
I've seen recently, trying to model scenes and synthesize them...
> I like your little yellow smiley face; I did a similar animated face a
> few years back. :)
It's a smiley. I also like it because it looks like the Pac-Man
character :)
> I hope to see more of your work; it looks like you are lucky enough to
> work on a very fun and enjoyable project.
>
> All the best.
>
> Bill Walker
You're welcome. You gave me good help, with your knowledge of advanced
POV-Ray usage!
>> The interest of the method is that it can evaluate large movements.
>> It works!
Well, these are preliminary results...
Thanks for your help,
Best regards,

<http://eureka.atari.org/>

Hi,
Bald Eagle writes:
> I hope to see more of your work; it looks like you are lucky enough to
> work on a very fun and enjoyable project.
>
> All the best.
>
> Bill Walker
I made a web page to explain how to approximate the 2D transformations
of the images, in order to model the 3D position of the camera in space:
<http://hebergement.u-psud.fr/lecoat/demoweb/camera_localization.html>
I used a video sequence extracted from the <http://www.ipol.im/> web
site, and took the first image of the sequence as a reference position,
to approximate the movement of the following ones. So I evaluated the
position in space, and rendered it with POV-Ray. Then, whenever the
camera is too far from the reference position and the correlation falls
below 50%, I take a new reference image.
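That reference-switching loop can be sketched like this (my own
illustration; the correlate() matcher is hypothetical and stands in for
the real image-matching step):

```python
def localize(frames, correlate, threshold=0.5):
    """Estimate a pose per frame against a reference image, and take a
    new reference once the correlation drops below `threshold` (50%)."""
    reference = frames[0]  # first image is the initial reference
    poses = []
    for frame in frames[1:]:
        score, pose = correlate(reference, frame)  # hypothetical matcher
        if score < threshold:
            reference = frame  # camera drifted too far: renew reference
        poses.append(pose)
    return poses

# Toy matcher: frames are numbers, correlation decays with their distance
demo = lambda ref, f: (max(0.0, 1.0 - abs(f - ref) / 4.0), f - ref)
print(localize([0, 1, 2, 3, 4, 5], demo))  # -> [1, 2, 3, 1, 2]
```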
All along the video sequence, the 3D localization of the camera is
estimated from the image contents of the sequence, and the position is
represented with the POV-Ray transformations. 8 parameters are used:
<Tx,Ty,Tz> translations in pixels, <Rx,Ry,Rz> pitch, yaw and roll
rotations in degrees, and <Sx,Sy> skew (or shear) angles in degrees.
Here is the video <http://www.youtube.com/watch?v=vfFDIQj4ZAA> of the 3D
camera localization result, and the original sequence that I used:
<http://ipolvideo.ipol.im:5000/209/arc/A1/A0CA7D639E3A6266AF7A26D3F2C887/input.mp4>
I'm happy that it works, because the movement of the camera is very
large =)
You're welcome to comment ...
Thanks again for your help.
Best regards,

<http://eureka.atari.org/>