Stephen <mca### [at] aolDOTcom> wrote:
> On 11/07/2010 6:02 PM, AS wrote:
> > Stereoscopy has at last reached the mainstream of mass media, a technique in
> > which POV-Ray has been shining for a long time.
>
> What do you mean? "Shining"
Surely he's a great fan of your anaglyphs! :) Really, I don't know anyone else
who's been povving in stereo... :)
There are pros and cons to both methods (anaglyphs and freeviewed stereo pairs),
and I'll just go with the cons (reminds me of Lisp):
* anaglyph loses color info
* anaglyph requires glasses that aren't readily available (and my cheap
hand-made cellophane pair doesn't really offer a marvelous experience)
* very few people are able to free-view cross-view or parallel-view stereograms
without eye strain, or without getting frustrated when they can't get it
in short: very few people can actually watch 3D content, for lack of glasses,
patience, skill, or whatever.
I don't think 3D TVs with glasses will ever become popular. It'll take a while
for displays with built-in parallax barriers to become common and affordable,
beginning with Nintendo's 3DS I guess... that's when 3D will take off and
holograms will be the next big thing... :)
that said, it would make life so much easier if there were a built-in parameter
in the camera, something like:
camera { perspective location -5*z look_at 0 stereoscopic .6 }
which would produce two renders: one from <-.3,0,-5> and another from <.3,0,-5> :)
automatically joining the two renders into one image according to some
visualization scheme would be nice, but it seems YouTube and others expect two
separate frames, to be combined by the user agent as the viewer chooses, not preset...
yes, I know, it could be done with a macro. Heck, even NURBS or a whole
raytracer could be done with a macro, but that doesn't mean they should be...
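For what it's worth, here's a minimal sketch of such a macro, assuming the scene
is rendered twice with an Eye value of -1 (left) or +1 (right); StereoCamera and
the Eye convention are made up for this example, not anything from the standard
includes:

// renders one half of a stereo pair, depending on Eye (-1 or +1)
#macro StereoCamera(Location, LookAt, Separation, Eye)
  camera {
    perspective
    location Location + Eye*(Separation/2)*x  // shift by half the eye distance
    look_at LookAt  // both eyes aimed at the same point ("toe-in")
  }
#end

#ifndef (Eye) #declare Eye = -1; #end  // default to the left eye
StereoCamera(-5*z, 0, .6, Eye)

Eye defaults to the left view; the right one can then be requested per render,
e.g. with Declare=Eye=1 on the command line. Note that keeping look_at fixed
makes the two cameras converge (toe-in); shifting LookAt by the same x offset
would give parallel axes instead, which avoids the keystoning that toe-in
introduces.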