

I'm trying to do a sort of rough photogrammetric conversion of points on the
screen to true 3D coordinates with the same apparent position.
Assume an orthographic camera.
Let's say that I have a line segment extending from the top left of an image to
the lower right. If I draw a line perpendicular to this, then I can use this
new line as an axis for rotation.
If I rotate a copy of the line segment around this axis, it should visually
remain in line with the original line segment, but the ends would appear to
contract.
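The foreshortening step can be sketched numerically. This is a minimal illustration assuming an orthographic camera looking down the z axis and a rotation axis through the segment's midpoint; the function and variable names are mine, not from any particular API.

```python
import math

def rotate_about_perpendicular(p0, p1, theta):
    """Rotate the 2D segment p0-p1 (lying in the screen plane z = 0) by theta
    about the in-plane axis perpendicular to it through its midpoint.
    Returns the rotated endpoints as 3D points (x, y, z)."""
    mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
    half = math.hypot(p1[0] - p0[0], p1[1] - p0[1]) / 2
    # unit direction of the segment in the screen plane
    ux, uy = (p1[0] - p0[0]) / (2 * half), (p1[1] - p0[1]) / (2 * half)
    c, s = math.cos(theta), math.sin(theta)
    # each endpoint slides toward the midpoint in-plane and gains a z offset
    e0 = (mx - ux * half * c, my - uy * half * c, -half * s)
    e1 = (mx + ux * half * c, my + uy * half * c, +half * s)
    return e0, e1

p0, p1 = (0.0, 10.0), (10.0, 0.0)   # top left to lower right
e0, e1 = rotate_about_perpendicular(p0, p1, math.radians(60))
```

Under orthographic projection the rotated endpoints still project onto the original line, but the apparent length shrinks by cos(theta), which is the "contraction" described above.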
If I scale the copy of the line segment up to the estimated true size of the
original line segment (which represents the view of a longer segment extending
through the plane of the screen), then when I rotate it the proper amount, it
should line up exactly with the original line segment, but the endpoints will
now have the proper z-coordinates. We'll call this the z-buffer axis.
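The scale-then-rotate step reduces to a small formula: if the on-screen segment has projected length l and its true 3D length is estimated as L (with L >= l), the rotation that makes the scaled copy's projection coincide with the original satisfies cos(theta) = l / L, and the endpoints pick up depths of -/+ (L / 2) * sin(theta) about the midpoint. A sketch, with names of my own choosing:

```python
import math

def endpoint_depths(projected_len, true_len):
    """Return the z offsets the two endpoints gain after the rotation."""
    cos_t = projected_len / true_len
    sin_t = math.sqrt(1.0 - cos_t * cos_t)   # take theta in [0, pi/2]
    dz = (true_len / 2.0) * sin_t
    return -dz, +dz

# Example: a segment that appears 10 units long on screen but whose true
# length is estimated at 20 units.
z0, z1 = endpoint_depths(10.0, 20.0)
```

As a consistency check, the projected length and the depth difference should satisfy l^2 + (z1 - z0)^2 = L^2.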
If I take the negative reciprocal of the original line segment's slope, I get a
perpendicular line. This should correspond to all of the points in the plane
that share the same 3D z coordinate. If I rotate this perpendicular line
around the z-buffer axis, it should cross all of the points in the image that
would have the same z-value if the scene were actually 3D. We'll call this the
xy line.
And if I take the cross product of this line and the z-buffer axis, I should
get the up vector.
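Here is one reading of the cross-product step as a sketch: u is the segment's unit direction in the screen plane, a is its in-plane perpendicular (the negative-reciprocal slope), and t is u tilted out of the screen by theta, i.e. the z-buffer axis direction. The cross product of a and t is then a unit vector orthogonal to both, which plays the role of the up vector. All of these names are my own assumptions about the setup.

```python
import math

def cross(p, q):
    """Cross product of two 3D vectors."""
    return (p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0])

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

theta = math.radians(60)
u = (1.0, 0.0, 0.0)                          # segment direction in the screen plane
a = (0.0, 1.0, 0.0)                          # in-plane perpendicular (slope -1/m)
t = (math.cos(theta), 0.0, math.sin(theta))  # u tilted out of the screen
up = cross(a, t)
```

Since a and t are orthogonal unit vectors, up comes out unit length with no extra normalization.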
If I then take points on the image and apply the same rotational transform to
them as I did to get the z-buffer axis, and then rotate around that axis by the
same amount as I did to get the xy line, then I should get a coordinate that,
if scaled to coincide with its original position, will give me its 3D
z-coordinate.
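The whole recipe collapses to a short calculation under one big assumption the description implies: every image point lies in the single tilted plane containing the un-foreshortened segment. With an orthographic camera down the z axis, a screen point q then recovers depth as z(q) = ((q - midpoint) . u) * tan(theta), where u is the segment's in-plane unit direction and cos(theta) = l / L as before. A sketch, with illustrative names:

```python
import math

def recover_z(q, p0, p1, true_len):
    """Depth of screen point q, assuming it lies in the tilted plane of the
    segment p0-p1 whose estimated true 3D length is true_len."""
    mx, my = (p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    proj_len = math.hypot(dx, dy)
    ux, uy = dx / proj_len, dy / proj_len
    cos_t = proj_len / true_len
    tan_t = math.sqrt(1.0 - cos_t * cos_t) / cos_t
    along = (q[0] - mx) * ux + (q[1] - my) * uy   # signed distance along u
    return along * tan_t

# Sanity check: the segment's own endpoints should get symmetric depths.
p0, p1 = (0.0, 10.0), (10.0, 0.0)
z_end = recover_z(p1, p0, p1, 20.0)
```

Points on the xy line through the midpoint come out at z = 0, and depth grows linearly with distance along the segment direction, which matches the rotate-then-scale intuition above.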
Does this make sense, or am I going to be chasing my tail in circles on this?

