POV-Ray : Newsgroups : povray.advanced-users : simulating camera output using calibration matrix
From: ahrivera
Subject: simulating camera output using calibration matrix
Date: 25 Mar 2003 01:25:10
Message: <web.3e7ff5478d8d2a7b3ff2087f0@news.povray.org>
Hi,

I want to simulate the output of a real camera using POV-Ray.  My approach
is to determine the calibration matrix of the real camera:

[ fx a*fx cx ]
[ 0  fy   cy ]
[ 0  0    1  ]

Here,
fx = mx*f
fy = my*f
where 1/mx and 1/my are the pixel dimensions.

Usually a = 0.
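As far as I understand it, this matrix maps a point in camera coordinates to
homogeneous pixel coordinates.  Here is a minimal sketch of that mapping in
Python with numpy (the numbers are arbitrary placeholders, just to make it
run):

import numpy as np

# Arbitrary placeholder intrinsics, not from a real camera
fx, fy = 1000.0, 1000.0      # focal lengths in pixels (mx*f, my*f)
cx, cy = 640.0, 480.0        # principal point
a = 0.0                      # skew, usually 0

K = np.array([[fx, a * fx, cx],
              [0.0,    fy, cy],
              [0.0,   0.0, 1.0]])

Xc = np.array([0.1, -0.05, 2.0])     # a point in camera coordinates
Xi = K @ Xc                          # homogeneous image coordinates
u, v = Xi[0] / Xi[2], Xi[1] / Xi[2]  # pixel coordinates
print(u, v)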

From what I've read, fx/fy gives me the aspect ratio of the image.  What I
don't know is how to extract the field of view and the focal length from
this matrix.

I want to use only the calibration matrix information to render the image.
I don't want to assume anything about the focal length and the pixel
dimensions because in the case of my camera, I don't have that information.

Has anybody seen work related to this?  Or does anybody have any input on
how to approach the problem?

I will greatly appreciate your comments.
Thanks,
Alexis



From: ahrivera
Subject: Re: simulating camera output using calibration matrix
Date: 1 Apr 2003 00:15:06
Message: <web.3e891fdbf57d918e3ff2087f0@news.povray.org>
Just in case somebody needs to do this again someday, here's how I did it.
I tried it with a couple of cases and it seemed to work.  I'm still testing
it, though.

Any comments will be welcomed.
Alexis

PS.
Simulating camera output in POVRay using intrinsic and extrinsic camera
parameters
------------

For now, check out how to calibrate the camera using the Camera Calibration
Toolbox for Matlab: http://www.vision.caltech.edu/bouguetj/calib_doc/.
The tutorial on that page shows how to use the software.  But here are some
pointers on how to improve the calibration results:

Use at least 20 images.
Analyze the errors and remove from the project the images with large
reprojection errors.
It's very important that you undistort the images first.  The model below
doesn't simulate any lens distortion.
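If you want to pull the calibration results into a script, something along
these lines should work.  The file name Calib_Results.mat and the field
names are assumptions based on the toolbox output I have seen, so check them
against your own results:

import numpy as np
from scipy.io import loadmat

calib   = loadmat('Calib_Results.mat')  # file saved by the toolbox (assumed)
fc      = calib['fc'].ravel()           # focal lengths in pixels, [fc(1) fc(2)]
cc      = calib['cc'].ravel()           # principal point, [cc(1) cc(2)]
alpha_c = calib['alpha_c'].item()       # skew coefficient, usually 0
# The per-image extrinsics are stored as rotation vectors omc_i and
# translation vectors Tc_i; rodrigues(omc_i) in Matlab (or cv2.Rodrigues in
# Python) turns omc_i into the rotation matrix Rc_ext used below.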

Below, I explain how to map the calibration matrix and the extrinsic
parameters to a POVRay camera model.

Given the calibration matrix:

[ fc(1)  alpha_c*fc(1)  cc(1) ]
[ 0              fc(2)  cc(2) ]
[ 0                  0      1 ]

Where,

fc(1) = m1 * f
fc(2) = m2 * f
1/m1 is the horizontal pixel dimension
1/m2 is the vertical pixel dimension
f is the camera focal length
alpha_c encodes the angle between the x and y sensor axes; usually it is zero.
(cc(1), cc(2)) are the coordinates of the principal point.

Given the rotation matrix Rc_ext and the translation vector Tc_ext from the
calibration tool, a world point Xw is mapped to image coordinates as
follows:

Xc = Rc_ext*Xw + Tc_ext;
Xi = [fc(1) 0 cc(1); 0 fc(2) cc(2); 0 0 1]*Xc;
Xi = Xi/Xi(3)
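For concreteness, here is that projection as a small Python/numpy sketch.
The calibration numbers below are arbitrary placeholders, not values from a
real calibration:

import numpy as np

# Placeholder intrinsics and extrinsics, just to make the sketch run
fc, cc, alpha_c = np.array([1000.0, 1000.0]), np.array([650.0, 470.0]), 0.0
K = np.array([[fc[0], alpha_c * fc[0], cc[0]],
              [  0.0,           fc[1], cc[1]],
              [  0.0,             0.0,   1.0]])
Rc_ext = np.eye(3)                    # extrinsic rotation
Tc_ext = np.array([0.0, 0.0, 5.0])    # extrinsic translation

Xw = np.array([0.2, -0.1, 0.0])       # a point in world coordinates
Xc = Rc_ext @ Xw + Tc_ext             # world -> camera coordinates
Xi = K @ Xc                           # camera -> homogeneous pixel coords
Xi = Xi / Xi[2]                       # normalize: (u, v, 1)
print(Xi[:2])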

However, in POVRay the principal point is the center of the image, not cc.
So we need to adjust the camera position so that the point cc ends up at the
center of the image.  This is done by translating the image plane.

We derive the camera position as follows: Let,

H be the image height
W be the image width
PO be the principal point offset given by:

[ 0 0 (cc(1) - W/2 + 1)/fc(1) ]
[ 0 0 (cc(2) - H/2 + 1)/fc(2) ] * Tc_ext
[ 0 0 0                       ]


The camera position p is given by the following formula:

p = -inv(Rc_ext)*(PO+Tc_ext)
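As a sanity check, here is the same step in Python/numpy, again with
arbitrary placeholder values for the calibration data and a 1280x960 image:

import numpy as np

# Placeholder values (same as the sketch above)
fc, cc = np.array([1000.0, 1000.0]), np.array([650.0, 470.0])
Rc_ext, Tc_ext = np.eye(3), np.array([0.0, 0.0, 5.0])
W, H = 1280, 960                       # image width and height in pixels

# Principal point offset PO: only the last column of the 3x3 matrix is
# non-zero, so PO picks up Tc_ext(3) scaled by those two entries
M = np.zeros((3, 3))
M[0, 2] = (cc[0] - W / 2 + 1) / fc[0]
M[1, 2] = (cc[1] - H / 2 + 1) / fc[1]
PO = M @ Tc_ext

# Camera position in world coordinates
p = -np.linalg.inv(Rc_ext) @ (PO + Tc_ext)
print(p)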

Now, we can use the following transformation to map a point Xw in world
coordinates to a point Xi in the image plane.

Xc = Rc_ext*Xw+ PO+Tc_ext;
Xi = [fc(1) 0 img_center_width; 0 fc(2) img_center_height; 0 0 1]*Xc;
Xi = Xi/Xi(3)+[offset;0]
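And here is the adjusted mapping as a sketch, with the same placeholder
values as above.  I am assuming the image centre (W/2, H/2) for
img_center_width and img_center_height, and I have left the final [offset;0]
term out of the sketch:

import numpy as np

# Placeholder values (same as above)
fc, cc = np.array([1000.0, 1000.0]), np.array([650.0, 470.0])
Rc_ext, Tc_ext = np.eye(3), np.array([0.0, 0.0, 5.0])
W, H = 1280, 960

# Principal point offset, as derived above
PO = np.array([(cc[0] - W / 2 + 1) / fc[0] * Tc_ext[2],
               (cc[1] - H / 2 + 1) / fc[1] * Tc_ext[2],
               0.0])

# Intrinsics with the principal point moved to the image centre
K_centered = np.array([[fc[0],   0.0, W / 2],
                       [  0.0, fc[1], H / 2],
                       [  0.0,   0.0,   1.0]])

Xw = np.array([0.2, -0.1, 0.0])       # same world point as before
Xc = Rc_ext @ Xw + PO + Tc_ext        # adjusted world -> camera mapping
Xi = K_centered @ Xc
Xi = Xi / Xi[2]
print(Xi[:2])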

The POVRay camera model will look like the following:

// The resolution and fov relate to the intrinsic camera parameters
#declare size_width = 1280;
#declare size_height = 960;
#declare resolution_x = 1/m1;
#declare resolution_y = 1/m2;
#declare focal_length =  1;
#declare fov_angle =
  abs(min(degrees(2*atan2(resolution_x*size_width, 2*focal_length)), 179));

// The camera position is related to the extrinsic camera parameters.  The
// camera position also needs to be adjusted so that the principal point cc
// is the center of the image; see the derivation above for how to determine
// the camera position.
//---
#declare camera_position = < p1, p2, p3 >;

// The up and viewing direction are the row elements of the rotation matrix
// Rc_ext.  The minus sign is important, but I don't know why we need it.
//----
#declare up_direction = -vnormalize(< r21, r22, r23 >);
#declare viewing_direction = vnormalize(< r31, r32, r33 >);

// Sometimes, we want to adjust the image plane so it lies on the origin of
// the coordinate system or some distance behind it.  We do this by moving
// the camera position along the viewing direction
//---
#declare adjusted_camera_position =
        camera_position;  //+(viewing_direction*focal_length);
camera {
        right   <-(size_width*resolution_x)/(size_height*resolution_y), 0, 0>
        // the minus sign converts from a left-handed to a right-handed system
        sky     up_direction
        angle   fov_angle

        //focus parameters
        //aperture      aperture_value
        //focal_point   <0,0,0>
        //blur_samples   100
        //confidence    .7
        //variance      1/100

        location        adjusted_camera_position
        look_at         adjusted_camera_position+viewing_direction*100
}
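To avoid filling the numbers in by hand, a small helper like the one below
can print the #declare lines from the calibration data.  It is only a
sketch: the function name and the placeholder inputs at the bottom (image
size, pixel dimensions, intrinsics and extrinsics) are mine, so substitute
your own values:

import numpy as np

def povray_camera_declares(fc, cc, Rc_ext, Tc_ext, W, H, m1, m2):
    # Field of view from the sensor width and focal length (mirrors the
    # fov_angle formula in the scene file above)
    res_x, res_y = 1.0 / m1, 1.0 / m2
    focal_length = 1.0
    fov = abs(min(np.degrees(2 * np.arctan2(res_x * W, 2 * focal_length)), 179))

    # Principal point offset and camera position, as derived above
    PO = np.array([(cc[0] - W / 2 + 1) / fc[0] * Tc_ext[2],
                   (cc[1] - H / 2 + 1) / fc[1] * Tc_ext[2],
                   0.0])
    p = -np.linalg.inv(Rc_ext) @ (PO + Tc_ext)

    # Up and viewing directions from the rows of Rc_ext
    up = -Rc_ext[1] / np.linalg.norm(Rc_ext[1])
    view = Rc_ext[2] / np.linalg.norm(Rc_ext[2])

    print("#declare camera_position   = <%g, %g, %g>;" % tuple(p))
    print("#declare up_direction      = <%g, %g, %g>;" % tuple(up))
    print("#declare viewing_direction = <%g, %g, %g>;" % tuple(view))
    print("#declare fov_angle         = %g;" % fov)

# Placeholder example values, not from a real calibration
povray_camera_declares(fc=np.array([1000.0, 1000.0]),
                       cc=np.array([650.0, 470.0]),
                       Rc_ext=np.eye(3), Tc_ext=np.array([0.0, 0.0, 5.0]),
                       W=1280, H=960, m1=1000.0, m2=1000.0)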


