Apache wrote:
> I'm interested! I suppose you're talking about those two-colored images? I
> don't like crossing my eyes during a 30 minutes povray animation, you
> know.... :-)
>
Hi Apache,
yes and no -- :-)
Since I started this patch for my own use, I didn't build in
anaglyphic (two-colored) output. The main use for me is to make
screenshots for slides and then look at them in a stereo viewer, or
to project them in full color with a stereo projector and polarizing
spectacles.
I.e. I want to get high quality / high resolution.
So I focused on the problem that making two independent renders
doubles the time needed. To ameliorate this, I built in a
"stereo cache" data structure in order to share lighting/texturing and
radiosity(!) calculations between corresponding pixels in both half
images. This was a bit tricky, but rather successful: already in
fairly simple scenes with a bit of texturing and some area lights, I
achieve rendering times of about 140% (compared to 200% for two
independent renders).
Another feature I focused on was being able to use some of the
non-standard camera types stereoscopically. With my patch, you can
use (besides the perspective camera) orthographic, fisheye, a
cylindrical type and a newly designed "spherical wide-angle".
As far as stills are concerned, I simply convert the output to JPG,
rename it *.JPS and point a JPS viewer (e.g. DepthCharge) at the file;
that way I can choose between "wall-eyed", "cross-eyed", red/green,
red/blue or interlaced (for LCD shutter glasses) viewing methods.
About making animations -- I must confess -- I didn't think.
But one could imagine hooking into the output_line() call and
postprocessing to -- say red/green -- on the fly. Would this
be of some help for you? Or does someone know a better method?
Anyway, I will package it all together, write a short documentation,
render some demo images and put this on a web page in a few days, so
you can have a look at it. Stay tuned!
Hermann