  Re: Important information for camera placement and keystone distortion problems  
From: Hermann Voßeler
Date: 30 Apr 2002 20:38:25
Message: <3CCF36D6.6010602@webcon.de>
Vic,

 > Unfortunately (due to my poor English), I don't know the English
 > counterpart of all the usual expressions in this field. For
 > example: I've learned the "keystone distortion" expression from
 > this thread. Can I learn convergence too? ;-))
 >
:-) I often have the same problem. You probably noticed this
when reading the texts on my StereoPOV webpage.

If I understand your first posting right, you didn't mean
"convergence", but the stereoscopic base interpreted as a vector.
Convergence measures how much the eyes are turned inward to look
at a given object. If we shoot a (virtual) ray from each eye through
the corresponding image element of the stereoscopic pair, then
the two rays usually will be convergent, i.e. they intersect at the
location where the original object is/was. And at this location you
will perceive the 3D image of the object.

That means that for near objects there is much convergence of the
"viewing rays", and for distant objects there is less convergence.
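The relation between distance and convergence can be made concrete with a small sketch (my own illustration, not part of StereoPOV; the 65 mm eye spacing is the average figure used below):

```python
import math

def convergence_angle_deg(distance_m, eye_spacing_m=0.065):
    """Angle between the two viewing rays for an object at the given distance."""
    return math.degrees(2 * math.atan((eye_spacing_m / 2) / distance_m))

near = convergence_angle_deg(0.5)    # object 0.5 m away: about 7.44 degrees
far = convergence_angle_deg(100.0)   # object 100 m away: about 0.037 degrees
```

As the distance goes to infinity, the angle goes to zero, which is exactly the "hard limit" discussed next: the rays become parallel, and anything beyond parallel would be divergence.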

With convergence, there is one hard limit: for objects at infinite
distance, the viewing rays are parallel ("they intersect at
infinity"). With natural stereoscopic viewing, there will never be
divergence of the viewing rays.

This limit is very hard. Even if there is a little tolerance,
people viewing stereoscopic images won't have much fun if their
eyes are forced to diverge in order to see the image.


Maybe I should summarize some important rules for
getting easy-to-view stereoscopic images.

Hard limits:
(1) Vertical alignment: Corresponding image elements must be
     presented at exactly the same height.
(2) No divergence: The spacing of image elements for objects
     at infinite distance must not be larger than the average
     human eye spacing (65 mm). This spacing is often called the
     "maximal on-screen deviation".

"Should be" rules:
(3) You should try to avoid too-large "depth contrasts" or
     "depth gaps". The overall depth of field should not be too large.
(4) You should try to arrange your objects behind a well-defined
     stereoscopic window. Objects placed in front of the window
     ("off screen") should not touch the image borders.
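Rule (2) can be checked numerically: given the physical display width and the image resolution, the pixel disparity of infinitely distant objects converts into millimetres on screen. A sketch (my own illustration; the pixel and display figures are made-up examples):

```python
def on_screen_deviation_mm(pixel_disparity, image_width_px, display_width_mm):
    """Physical separation on screen of two corresponding image elements."""
    return pixel_disparity * display_width_mm / image_width_px

# The same 50-pixel disparity at infinity, in a 1024-pixel-wide image:
monitor = on_screen_deviation_mm(50, 1024, 340)    # ~16.6 mm on a small monitor: fine
screen = on_screen_deviation_mm(50, 1024, 1800)    # ~87.9 mm on a 1.8 m screen: forces divergence
```

The same image can satisfy the hard limit on one device and break it on a larger one, which is the point made further below about images being linked to physical display dimensions.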



Vic wrote:
 > You must have much more experience in this field according to your
 > postings. The human eyes and brain can compensate large keystone
 > distortions in some cases. In other cases a small distortion causes
 >  the eyes not to converge. What is the important difference between
 >  the situations?
 >

In many cases there is a large range of tolerance when breaking the
mentioned rules or limits. In my opinion, the key concept for
understanding this tolerance is the "pregnance" (Prägnanz) of the
image, i.e. how striking and easily recognizable it is.
If you present a visually striking, eye-catching subject,
there will be much tolerance. If the things you show are
complicated (visually, I mean), difficult to recognize etc., then
already going a little bit beyond the mentioned rules will cause
problems for a lot of people.
If you nicely arrange your objects near the stereoscopic window,
give them a well-balanced distribution in space etc., then your
stereo image will "work" on virtually every device (monitor,
anaglyph print, slide projection...) and you can be sure everyone
will say "WOW".
If you want to show architecture, large landscapes, or mountains in
their real dimensions and proportions, you have to be much more
precise.

It is well known that we stereo enthusiasts have almost infinite
tolerance with respect to stereo images. We can fuse (= see in 3D)
almost everything, even 2 beer bottles posed side by side on a table
:-)


Harold wrote
 >> Stereo photographers rarely need to resort to complex equations,
 >> the 1:30 rule is more than enough unless extreme lens focal
 >> lengths are involved.
 >

Vic wrote
 > But the resulting images are adjusted when mounted. In raytracing,
 > my goal is to do this adjustment by setting the camera properly.
 >
....
 > I couldn't cause my eyes to diverge, because stereo_base measured
 > in display metric did not exceed 6.55 cm. Conclusion: The physical
 > dimensions of the display are important when displaying stereo
 > images, because stereo_base can easily exceed 6.55 cm in real-world
 > (monitor) metrics.
 >

This is especially an issue when making stereo films (or videos).
In most cases it is practically impossible to readjust the film
after the shot, so the camera has to be constructed such that
it gets the window adjustment right from the beginning. This is often
done by shifting the optics inward, so the optical axes converge.
(This again is often denoted by the term "convergence".)
The intersection point of the optical axes defines where the
stereoscopic window will be. Note that the image planes stay parallel.

This is the background for why I built a similar behaviour into
my StereoPOV patch. Stereo images indeed are linked to specific
physical display dimensions. (Of course, there is some tolerance
for viewing them on a display with differing dimensions; see the
discussion above about the tolerance for breaking the rules.)

If a stereo image is *optimally* suited for a slide projection
screen of 1.80 m width, it cannot be scaled arbitrarily. If you
make it larger, you will get divergence. If you make it smaller,
you will get less depth (i.e. objects at larger distances will
look "flat"). You can compensate by readjusting the images, though.
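The effect of rescaling is linear, so it is easy to quantify (my own sketch; it assumes the image was tuned so that objects at infinity sit exactly at the 65 mm limit on the 1.80 m screen):

```python
def scaled_deviation_mm(optimal_width_m, new_width_m, deviation_mm=65.0):
    """On-screen deviation after rescaling an image tuned for optimal_width_m."""
    return deviation_mm * new_width_m / optimal_width_m

too_big = scaled_deviation_mm(1.8, 2.5)     # ~90 mm: the eyes must diverge
too_small = scaled_deviation_mm(1.8, 0.34)  # ~12 mm: distant objects look flat
```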

So, for my StereoPOV patch, I defined the camera such that
the stereoscopic window is identical with the "natural window"
defined by the image plane, i.e. defined by the "up" and "right"
vectors. So, if you set the "up" and "right" vectors to the
size of your final presentation device, you will get the correct
maximal on-screen deviation, i.e. the correct spacing of 65 mm
for objects at infinite distance.

It turned out that some users were rather confused by this
behaviour of my patch. In normal 2D imaging, the image size is
not that important: "up" and "right" there only define the
aspect ratio. Especially scenes like the "skyvase"
mentioned by Harold Baize caused problems.
So I need to introduce some means to readjust the distance
of the stereoscopic window. I am considering adding a second
keyword besides "stereo_base" in the next version of StereoPOV
(which will be POV 3.5-based, if possible). What has to be done
internally is to evenly scale the whole camera while retaining
the position and look_at.
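What this rescaling would amount to internally can be sketched with plain vectors (a hypothetical illustration, not actual StereoPOV code): the location stays fixed and the viewing direction keeps its orientation, while the direction, up, and right vectors are all scaled by the same factor, so the rendered 2D image is unchanged but the image plane, and with it the stereoscopic window, moves.

```python
def scale_camera(camera, s):
    """Scale direction/up/right by s; location (and hence look_at) untouched.

    The field of view depends only on the ratios of these vectors, so the 2D
    image stays identical -- only the image plane / window distance changes.
    """
    scaled = dict(camera)
    for key in ("direction", "up", "right"):
        scaled[key] = tuple(s * c for c in camera[key])
    return scaled

cam = {"location": (0, 0, 0),
       "direction": (0, 0, 1),    # image plane 1 unit in front of the camera
       "up": (0, 1, 0),
       "right": (4/3, 0, 0)}
far_window = scale_camera(cam, 5)  # same picture, window now 5 units away
```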


Hermann


