  Announce: StereoPOV 0.1 (Message 1 to 8 of 8)  
From: Hermann Voßeler
Subject: Announce: StereoPOV 0.1
Date: 7 Apr 2002 15:13:46
Message: <3CB09859.1000706@web.de>
Some weeks ago I told you that I have been working on a patch
for tracing stereoscopic images with POV-Ray. I asked if there
was interest. There was, and several people contacted me via
email too. So I made an unofficial POV-Ray version out of my patch.

You will find documentation, source code, a Windows executable
and some demo images at the following locations:

http://www.geocities.com/stereopov/
http://212.224.43.114/StereoPOV/

(the second is sort of a mirror, but it also serves the
   high-res *.jps images.)

This is an *unofficial* compile, based on POV-Ray 3.1g.
I consider it beta software: "works for me", with some known bugs.
I am very interested in any feedback, suggestions, bug reports etc.
(And: as I am not a native speaker, my English is rather clumsy :-) )


--------------------------------------------------------------------
What is it about?
*Stereoscopy* is a method of creating images that deliver a real 3D
depth impression. It utilizes our natural ability to view with two
eyes and gain an immediate depth sensation. Common ("flat", "mono")
images, on the contrary, only simulate depth by means like
perspective, shading, focal blur and atmosphere.
It is very easy to create stereoscopic images. The downside is: we
always need some viewing device like lens stereoscopes, colored or
polarizing spectacles etc. As an exception to this rule, there exist
two "free viewing" techniques ("cross-eye" and "wall-eye"). With some
experience and training, they permit a quick look at a stereo pair in
3D, but this causes some eye strain.

It is very easy to create stereoscopic images with POV-Ray as well.
But the key feature of the patch presented here is the ability to
render the two half-images in a single raytracing pass, whilst sharing
the results of lighting, texture and radiosity calculations.

Five built-in camera types are "stereoscopically enabled":
perspective, orthogonal, fisheye, cylindrical and spherical
wide-angle, the latter being a new addition specifically designed to
create full-range stereoscopic images of fisheye type.
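
For illustration, a minimal stereo camera setup (stereo_base is the
keyword added by this patch; everything else is standard POV-Ray
syntax, and the numbers are only placeholders):

------------------------------------------------------------
camera{
   perspective              // one of the five stereo-enabled types
   location <0, 1, -5>
   look_at  <0, 1,  0>
   right 4/3*x
   stereo_base 0.065        // eye separation, e.g. 65 mm in real-world units
   }
------------------------------------------------------------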


Why did I write it?
The primary reason is: I need it for my own projects. My main focus is
to create high-quality images for 3D slide projection. Stereoscopy is
very demanding in terms of quality and -- moreover -- several of the
tricks of computer graphics don't work so well in real 3D. So it is
good to be able to go into the core rendering engine and make things
work exactly as needed.
This means I am planning to do further development. The first thing to
do will be porting to 3.5, of course.


Follow-up to p.unofficial.patches.

-- Hermann



From: Edward Coffey
Subject: Re: Announce: StereoPOV 0.1
Date: 8 Apr 2002 09:18:00
Message: <3CB19A19.6000001@alphalink.X.remove.this.X.com.au>


> Some weeks ago I told you that I have been working on a patch
> for tracing stereoscopic images with POV-Ray. I asked if there
> was interest. There was, and several people contacted me via
> email too. So I made an unofficial POV-Ray version out of my patch.


Could this be extended to speed up the rendering of fly-throughs of
static (or near-static) scenes?  Rather than two slightly different
views of a scene, an arbitrary number, each slightly different from
the one before?  Or would it only be useful where every frame is shot
from a fairly similar perspective, rather than simply being similar to
the preceding one?



From: Nicolas Calimet
Subject: Re: Announce: StereoPOV 0.1
Date: 8 Apr 2002 10:20:03
Message: <3CB1A712.AAA5361E@free.fr>
> Could this be extended to speed up the rendering of fly-throughs of
> static (or near-static) scenes?  Rather than two slightly different
> views of a scene, an arbitrary number, each slightly different from
> the one before?

	This is somewhat what I did to simulate a camera motion blur
effect (that is: the camera is moving with its simulated shutter open
for a while) in my yet-another-not-so-useful-patch-of-POV  ;o)  There
the camera is slightly displaced along a linear vector, as is its
look_at vector.  The requested number of frames is averaged on a
per-pixel basis to try to speed up the whole image calculation, pretty
much how focal blur works.

	So the answer is most probably "yes" in this case.

	BTW, that's an interesting y-a-n-s-u-p-o-P  :o)

	- N.C.



From: Edward Coffey
Subject: Re: Announce: StereoPOV 0.1
Date: 9 Apr 2002 09:37:27
Message: <3CB2F028.2050404@alphalink.X.remove.this.X.com.au>
Nicolas Calimet wrote:

> 	BTW, that's an interesting y-a-n-s-u-p-o-P  :o)


Sorry, I'm lost :?)



From: Hermann Voßeler
Subject: Re: Announce: StereoPOV 0.1
Date: 11 Apr 2002 18:35:20
Message: <3CB60D91.5080007@webcon.de>

  >> Some weeks ago I told you that I have been working on a patch for
  >> tracing stereoscopic images with POV-Ray. I asked if there was
  >> interest. There was, and several people contacted me via email
  >> too. So I made an unofficial POV-Ray version out of my patch.

Edward Coffey wrote:
  > Could this be extended to speed up the rendering of fly-throughs
  > of static (or near-static) scenes? Rather than two slightly
  > different views of a scene, an arbitrary number, each slightly
  > different from the one before? Or would it only be useful where
  > every frame is shot from a fairly similar perspective, rather
  > than simply being similar to the preceding one?

The stereoscopic baseline is always parallel to the "right" vector.
My "StereoCache" data structure exploits this fact: when one pixel is
traced, we know that the corresponding pixel in the other half-image
will be on the same line (row). And by using the camera geometry and
the depth of the intersection found, I can predict the exact pixel
where it will be needed. I then store the reusable lighting and
texturing data at this pixel. So every pixel has to check only a
single memory location to see if there is cached data.
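
Schematically, for the perspective case (a simplified sketch: eyes
shifted by +-b/2 along "right", both projecting onto the shared window
in the image plane at distance d):

\[
x_L = -\frac{b}{2} + \Bigl(X + \frac{b}{2}\Bigr)\frac{d}{Z}, \qquad
x_R = +\frac{b}{2} + \Bigl(X - \frac{b}{2}\Bigr)\frac{d}{Z}
\]
\[
p = x_R - x_L = b\Bigl(1 - \frac{d}{Z}\Bigr)
\]

The parallax p of a point at depth Z is independent of X and Y: zero
in the image plane (Z = d), positive behind it, negative in front.
Scaling p from window units to pixels gives the column offset into the
other half-image where the cached data will be needed.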

So the answer is: it could be extended to fly-throughs only if the
camera motion is strictly collinear with the "right" vector.

---
BTW: As the standard fisheye camera doesn't fulfill this
prerequisite, it cannot use the StereoCache. For the same reason it
produces so-called "height errors" and is not well suited for stereo.
Because of this I invented a new fisheye-like camera type
(ultra_wide_angle 2) that fulfills this condition, and thus can use
the StereoCache and doesn't produce "height errors".
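
Selecting it should look something like this (only a sketch; see the
included documentation for the exact syntax):

------------------------------------------------------------
camera{
   ultra_wide_angle 2       // the new stereo-capable, fisheye-like type
   location 0
   direction z
   up y
   right 4/3*x
   stereo_base 0.065
   }
------------------------------------------------------------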


Hermann



From: Harold Baize
Subject: Re: Announce: StereoPOV 0.1
Date: 16 Apr 2002 14:38:38
Message: <3cbc6fae@news.povray.org>
Thank you for making the StereoPOV patch. I have
not had any time to use it yet. I'll send some feedback
when I do.

Harold




From: Sam Van Oort
Subject: Re: Announce: StereoPOV 0.1
Date: 11 May 2002 09:39:11
Message: <3cdd1eff@news.povray.org>
The stereo patch is VERY useful.  One question though: what is a
"good" stereo_base value for normal eyes?

I can't seem to get it to "pop out" when I set the values.



From: Hermann Vosseler
Subject: Re: Announce: StereoPOV 0.1
Date: 13 May 2002 16:04:37
Message: <3CE01A1E.1000904@web.de>
Sam Van Oort wrote:
 > The stereo patch is VERY useful.  One question though: what is a
 > "good" stereo_base value for normal eyes?
 >
 > I can't seem to get it to "pop out" when I set the values.

Thank you, I am always glad if it's useful for someone :-)

Meanwhile it has turned out that the handling of the stereo window
can be confusing for some scenes/camera setups. There were some
threads on povray.general:
"How can I shift the image plane?"
"Scenes and rendered images of this problem"
"Stereoscopy"


To summarize:
I defined the stereoscopic camera in StereoPOV in a way that it
produces a sort of "natural window" located in the image plane. So all
objects *behind* the image plane will be behind the screen, regardless
of the setting of the stereo_base.
Now this happens to be confusing if the size of the camera is not
related to the size of the objects in the scene. This often is the
case, because in "normal" (mono, 2D) mode this whole issue is
irrelevant; so e.g. with some of the example scenes of POV-Ray you may
run into problems.
I myself didn't notice these problems, because I am in the habit of
using real-world units in my scenes.
So in the next release (based on 3.5 code, after the official final
release of 3.5 is out), I plan to introduce a sort of "convenience
shortcut" to adjust the distance to the stereo window.


For now, consider the following example:


------------------------------------------------------------

camera{
   location 0               // eye midpoint at the origin
   direction z              // image plane (and stereo window) 1 unit away
   up y                     // window height: 1 unit
   right 4/3*x              // window width: 4/3 units
   stereo_base -0.065       // 65 mm eye separation; negative for cross-eyed viewing
   }

light_source{ <-2,3,-2>
              colour rgb 1
            }

// checkered ground plane half a unit below the camera
plane{ y, -0.5
   texture{
      pigment{ checker
               color red 1 green 1 blue 1
               color red 0 green 1 blue 0
      }
   }
}

// sphere centered 0.8 units away: nearer than the window at z=1
sphere{ 0.8*z, 0.3
   texture{ pigment{ colour rgb 0.8 } } }
------------------------------------------------------------

The image plane is defined by the camera location and direction.
Hence, because the stereoscopic window lies in the image plane, the
window will be centered at <0,0,1> and will be 1 unit high and 1.33
units wide. Because the sphere is nearer than this window, it will
show up "off screen", i.e. pop out in front of the window.

(BTW: I used a negative stereo_base for cross-eyed viewing.)
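
A quick sanity check with the off-axis parallax formula
\(p = b\,(1 - d/Z)\) (a simplified model of this stereo window): the
sphere's center sits at depth \(Z = 0.8\), the window at \(d = 1\),
and \(|b| = 0.065\), so

\[
p = 0.065\Bigl(1 - \frac{1}{0.8}\Bigr) \approx -0.016
\]

window units: a negative parallax, i.e. in front of the screen, as
described.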


Here I used units oriented at real-world units (1 unit = 1 m),
but that is not required.
If you are unsure how to choose the stereo_base, try 1/30 of the
distance to the nearest/main object as a starting point
(the so-called "1:30 rule").
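
For example (placeholder numbers; only stereo_base is specific to
this patch):

------------------------------------------------------------
#declare NearestDist = 3;        // distance to the nearest object (hypothetical)

camera{
   location 0
   direction z
   up y
   right 4/3*x
   stereo_base NearestDist/30    // "1:30 rule": base = distance / 30
   }
------------------------------------------------------------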


Hope this helps.
Please tell me if it was confusing, or
if you need further help with setting up the camera.

Regards,
Hermann Vosseler


