Hi all,
Oh well... I know it is confusing... I'm not really sure how to present
all of this clearly, as the software is quite complicated...
Let's start from the beginning. I'm going to test a vision-based
positioning system, both in simulation and in a real-world environment.
Past experience with this system suggests it is quite accurate. For now,
I have to develop an augmented reality application, which uses this
positioning system to obtain the user's location in the room.
The simulation is meant to test the accuracy of the system in an "ideal"
environment, because the system hasn't been tested properly before. But
surprisingly, the result of the simulation is very inaccurate.
Before doing the test, the camera has to be calibrated so that the
software collects the lens properties of the camera. These properties
drive the complicated matrix transformations that calculate the camera
position from the markers. The coordinates of each marker (the 4
centroid coordinates of the 4 regions in the corners of each target) are
also supplied to the software so that it can calculate those
transformations.
Therefore (as I am the one who knows the system best), I think the
camera definition is not the major problem (maybe I'm wrong). And after
posting (and looking at) these two images, I think the problem is the
markers' aspect ratio. As Tim Nikias suggested, the povray unit
shouldn't be the problem. With the markers placed at the same "distance"
from the camera, the one generated by povray really looks distorted
(taller and thinner), and since the target dimensions have to be
supplied to the software, I think this is the major problem.
Here is the camera definition:
camera
{
  location <61.5, 74.55, -200>
  direction <0, 0, 1.7526>
  look_at <61.5, 74.55, 0>
}
The output image is rendered at 384x288 (the same aspect ratio as the
default povray camera). And here is the texture-mapping code, which
simulates a marker printed on an A4-sized sheet of paper:
box
{
  <0, 0, 0>, <21, 29.7, 0.01>
  pigment
  {
    image_map { png target_file once interpolate 2 map_type 0 }
    scale <21, 29.7, 0.01>
  }
}
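In case it helps anyone spot the problem: one way I could rule out an
aspect-ratio mismatch is to pin the camera's right and up vectors
explicitly instead of relying on the defaults. This is just a sketch of
my camera with those vectors added (the image_width/image_height
identifiers assume POV-Ray 3.5 or later; on older versions one would
hard-code right 4/3*x for a 384x288 render):

camera
{
  location <61.5, 74.55, -200>
  direction <0, 0, 1.7526>
  // force square pixels: aspect ratio follows the render resolution
  right x * image_width / image_height  // 384/288 = 4/3
  up y
  look_at <61.5, 74.55, 0>
}

If the marker still renders taller and thinner with this camera, the
distortion presumably comes from somewhere other than the camera's
aspect ratio.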
Ignoring the complicated positioning software for a moment: did I do
anything wrong in simulating an A4-sized sheet of paper and the texture
mapping?
Thanks for your attention, and sorry for the confusing post before. I
hope I have made the issue clearer now.
Regards
Colin