From: Ichthyostega
Subject: the next StereoPOV
Date: 5 Aug 2002 11:05:08
Message: <web.3d4e9317dbea275172f2eb850@news.povray.org>
Hi all,

as proposed, I am now working on the POV 3.5 based release of
my stereo-tracing patch. After a first review of the new sources,
it seems to be less work than I had feared. THANKS TO THE POV-TEAM!
(especially for not including post-processing :-P )


The feedback from users (I got much more feedback than I expected
for such an exotic patch) did not uncover any bugs I didn't already
know about. But it showed that using my stereoscopic cameras can be
difficult, especially when trying to trace existing scenes not
specifically designed for real 3D viewing.
So I want to introduce some new parameters or "convenience shortcuts"
for the cameras. By this I mean parameters of the sort of POV-Ray's
"look_at": they adjust the camera vectors at parse time, but don't
alter the internal working of the camera.

I want to make a proposal and hope to get some further suggestions.


_EXPLANATION_

StereoPOV 0.1 was based on POV 3.1g. It is here:
http://wwww.geocities.com/StereoPOV/

In principle, it is very easy to create stereoscopic images with
standard POV-Ray. Simply shift the camera laterally by a small
amount and trace a second image. If you later want to make e.g.
slides out of these images, you normally have to crop the two half-
images in an image manipulation program in order to set a
stereoscopic window. In most cases, this can be done intuitively,
by looking at the result (cross-eyed view, wall-eyed view, LCD
shutter glasses...).
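
A minimal sketch of this manual approach with the standard camera
(the eye separation and all other numbers are purely illustrative):

  // LEFT half-image: camera shifted half a freely chosen stereo base
  camera {
    location  <-0.03, 1, -5>              // 0.06 units total eye separation
    direction 2*z                         // viewing direction kept parallel
    right     x*image_width/image_height
    up        y
  }
  // RIGHT half-image: render the scene again with location <0.03, 1, -5>,
  // then crop both half-images to set the stereoscopic window.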

StereoPOV comes in where this simple approach has its limitations.
- It computes both half-images simultaneously and shares
  lighting and texturing computations between both halves.
- The latter ensures that both sides get the same results.
  This is especially important for small, anti-aliased structures
  and enables the use of area lights with jitter.
- Stereoscopy is built into the cameras; this allows for non-standard
  camera types specifically designed for creating stereoscopic output.
  The cameras are defined in such a way that the resulting images are
  already adjusted for viewing on a screen of a given size.


The last point seems to be what causes the trouble, because it
forces the user to consider the dimensions and locations at which
the objects shall show up with respect to the intended final
presentation of the image.

All cameras in StereoPOV conform to the following rule:
The user explicitly sets a stereo_base. The stereoscopic
window is located in the image plane, i.e. at (location + direction),
and it has the size given by the up and right vectors.
StereoPOV makes *no* assumptions about the relation of
1 POV-Ray unit to the real-world eye distance of the
viewer. It is up to the user to use sensible units
and parameters.
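
As an illustration of this rule, a camera along these lines
(stereo_base is written inside the camera block here just for
illustration; all numbers are placeholders) puts the stereoscopic
window 3 units in front of the viewer:

  camera {
    location  <0, 1.6, 0>     // the viewer's eye position
    direction 3*z             // window lies 3 units ahead, in the image plane
    right     1.33*x          // window is 1.33 units wide ...
    up        y               // ... and 1 unit high
    stereo_base 0.065         // eye distance, chosen here in metre-like units
  }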


_PROPOSAL_

In order to ease the use and selection of the right camera
parameters, I want to propose the following extensions:

(a) A heuristic to guess the stereo_base, somewhat like
    the "1:30" or - better - the "1/f" rule (empirical
    rules well known and proven in stereoscopy); see the
    sketch after this list.
    I am not quite sure about this point, because the
    named rules seem to implicitly contain a relation
    to fixed units (format of the film in the camera, focal
    length, size and aspect ratio of a "normal" projection
    screen, characteristic dimensions of the human perceptual
    system). I don't think this is wrong, but I am wary
    of introducing a dependence on a certain metric.

(b) Parameter window_distance: "this is the distance
    in POV-Ray units from camera position (location-
    vector) to where my stereoscopic window will be".

(c) Parameter screen_width: "my final presentation
    screen shall have this width measured in POV-Ray
    units".


The following combinations will be possible:

no parameter:    normal (mono) image rendered.

window_distance: scale the whole camera so that
                 direction = window_distance.
                 Use heuristics to guess stereo_base
                 for this window_distance

screen_width:    Use heuristics to guess stereo_base
                 for the length of the direction vector.
                 Then fix this stereo_base and scale
                 the whole universe (including camera)
                 to match the given screen_width.

stereo_base:     explicitly set stereo_base, no
                 further adjustments (i.e. window
                 distance given by the direction vector)

stereo_base,
window_distance: scale the whole camera so that
                 direction = window_distance.
                 Use the given stereo_base.

stereo_base,
screen_width:    Fix this base and scale the whole
                 universe to match screen_width.

window_distance,
screen_width:    scale camera to get direction=window_distance.
                 Use heuristics to guess stereo_base
                 for this window_distance.
                 Then fix this stereo_base and scale
                 whole universe to match screen_width

stereo_base,
window_distance,
screen_width:    scale camera to get direction=window_distance.
                 Use the given stereo_base as fixed.
                 Scale whole universe to match screen_width.
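
To give an impression of the intended usage, a camera along these
lines (keyword spelling and placement are part of this proposal,
not final syntax) would leave the stereo_base to the heuristic and
let the patch rescale the scene to the requested screen width:

  camera {
    location  <0, 1.6, -4>
    look_at   <0, 1, 0>
    window_distance 3   // stereoscopic window 3 units in front of the camera
    screen_width    4   // final presentation screen is 4 POV-Ray units wide
    // stereo_base is guessed by the heuristic for window_distance 3, then
    // the whole universe is scaled so the window matches screen_width 4
  }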


This is only a draft. My intention is to cover many
very different use cases. You have certainly noticed that the
implementation is simple and each parameter always has
the same behaviour.
Do you think this proposal makes sense?


Ichthyostega
(Hermann Vosseler)

