POV-Ray : Newsgroups : povray.unofficial.patches : the next StereoPOV
  the next StereoPOV  
From: Ichthyostega
Date: 7 Aug 2002 15:20:08
Message: <web.3d517219dbea275172f2eb850@news.povray.org>
(I think I should post this here as well,
 because discussion of patches mostly takes place here)
 -----------------------

Hi all,

as promised, I am now working on the POV 3.5 based release of
my Stereo-Tracing-Patch. After a first review of the new sources,
it seems to be less work than I had feared. THANKS TO THE POV-TEAM!
(especially for not including post-processing :-P )


The feedback from users (I got much more feedback than I expected
for such an exotic patch) did not uncover any bugs I didn't already
know about. But it showed that using my stereoscopic cameras can be
difficult, especially when trying to trace existing scenes not
specifically designed for real-3D viewing.
So I want to introduce some new parameters or "convenience shortcuts"
for the cameras. By this I mean parameters along the lines of POV-Ray's
"look_at": it adjusts the camera vectors at parse time, but doesn't
alter the internal workings of the camera.

I want to make a proposal here and hope to get some further suggestions.


_EXPLANATION_

StereoPOV 0.1 was based on POV 3.1g. It is here:
http://www.geocities.com/StereoPOV/

In principle, it is very easy to create stereoscopic images with
standard POV-Ray: simply shift the camera laterally by a small
amount and trace a second image. If you later want to make, e.g.,
slides out of these images, you normally have to crop the two half-
images in an image manipulation program in order to set a
stereoscopic window. In most cases this can be done intuitively,
by looking at the result (cross-eyed view, wall-eyed view, LCD
shutter glasses...).
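A minimal sketch of this naive two-pass approach in standard POV-Ray
SDL (the scene coordinates, the StereoBase value and the Side switch
are purely illustrative; you would render the file twice, flipping
the sign of Side between the passes):

```pov
// naive stereo pair with standard POV-Ray:
// render once with Side = -1 (left eye), once with Side = +1 (right eye)
#declare Side       = -1;
#declare StereoBase = 0.4;   // eye separation in scene units (example value)

camera {
  location <0, 1, -6> + x*Side*StereoBase/2   // shift the camera laterally
  look_at  <0, 1,  0>
}
```

The stereoscopic window is then set afterwards by cropping the two
renders, as described above.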

StereoPOV comes in where this simple approach has its limitations:
- It computes both half-images simultaneously and shares
  lighting and texturing computations between the two halves.
- This sharing ensures that both sides get the same results.
  This is especially important for small, anti-aliased structures
  and enables the use of area lights with jitter.
- Stereoscopy is built into the cameras; this allows for non-standard
  camera types specifically designed for creating stereoscopic output.
  The cameras are defined in such a way that the resulting images are
  already adjusted for viewing on a screen of a given size.


The last point seems to cause the trouble, because it forces
the user to consider the dimensions and locations with which the
objects will appear in the intended, final presentation
of the image.

All cameras in StereoPOV conform to the following rule:
the user explicitly sets a stereo_base. The stereoscopic
window is located in the image plane, i.e. at (location + direction),
and its size is given by the up and right vectors.
StereoPOV makes *no* assumptions about the relation
of 1 POV-Ray unit to the real-world eye distance
of the viewer. It is up to the user to use sensible units
and parameters.
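To illustrate the rule, a StereoPOV camera might be declared like this
(the exact placement of the stereo_base keyword inside the camera block
is my assumption based on the description above, and all values are
examples, not recommendations):

```pov
camera {
  perspective
  location  <0, 1.6, -8>
  direction <0, 0, 1>     // stereo window lies in this image plane
  up        y*0.75        // window height in scene units...
  right     x*1.0         // ...and window width
  stereo_base 0.065       // eye distance, in the same scene units
}
```

Here the stereoscopic window is a 1.0 x 0.75 rectangle, one unit in
front of the camera; whether 0.065 is a sensible base depends entirely
on what real-world size one unit is meant to represent.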


_PROPOSAL_

In order to ease the use and selection of the right camera
parameters, I want to propose the following extensions:

(a) a heuristic to guess the stereo_base, somewhat like
    the "1:30" rule or - better - the "1/f" rule (empirical
    rules well known and proven in stereoscopy).
    I am not quite sure at this point, because the
    named rules seem to implicitly contain a relation
    to fixed units (format of the film in the camera, focal
    length, size and aspect ratio of a "normal" projection
