POV-Ray : Newsgroups : povray.general : Omni-directional Stereo Content Server Time
2 May 2024 04:53:33 EDT (-0400)
  Omni-directional Stereo Content (Message 31 to 40 of 54)  
From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 6 Mar 2016 18:35:01
Message: <web.56dcbe96976329954e8811590@news.povray.org>
I started from Bill's mesh camera .pov file, commented out the mesh camera part, and added a simple camera{spherical}:
http://www.clodo.it/host/images/0f258e33fe64385e5f9b465a13644a9295ee3118.png
Tested on a Rift DK2: perfect (but 2D).

Then I converted it to POV-Ray 3.6:
http://pastebin.com/Yx74HQ3i
- converted srgbft to rgb
- converted the text "internal 1" to "ttf "timrom.ttf""
- commented out #version 3.7
- removed the mesh directives

Adding Paul Bourke's camera syntax and rendering with patched POV-Ray 3.6.1:
left eye:
http://www.clodo.it/host/images/1f2635881f4bab3d5ae82b805aa032435e8b588c.png
right eye:
http://www.clodo.it/host/images/1a2fb3332d9ae97699138b7e4debff075ea70d33.png

Combined into a single 4096x4096 image (left eye on top, right eye on bottom) and tested with the Rift DK2.

In my previous message I said that Paul Bourke's method is perfect: not true, sorry.
The stereo image above, generated with Paul's method, has a correct aspect (X, Y, Z in the right places), but when I look straight up toward <0,+inf,0>
it seems to render a spiral at infinity (like an Escher spiral), on both the Y+ and Y- 'horizons'.
This seems to be the same issue that occurs with the mesh camera method, the "deformed leg" of the X.
Sorry, I don't know how to explain it better.

The default 2D camera{spherical} doesn't have this issue; it's perfect.

So, the situation IMHO:
0- Default camera{spherical}: 2D only, but perfect.
1- Bill's mesh camera method: Y spiral issue, aspect-ratio restrictions, other undetermined issues.
2- Paul Bourke's stereo spherical method: Y spiral issue.
3- clipka's function-based user-defined camera method: can anyone build a .pov example to test? Thanks.



From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 6 Mar 2016 18:55:01
Message: <web.56dcc300976329954e8811590@news.povray.org>
My brainstorming method:

The default camera{spherical} renders pixel columns covering angles from 0 to 360 degrees. OK.
Alter the current spherical behaviour by adding a single parameter, "eye-offset" (generally +/-0.065/2).
For each pixel, if "eye-offset" is not 0 (for compatibility):
 rotate the vector <-eye_offset,0,0> about the Y axis by the same angle as the pixel column currently being rendered,
 and add it to the camera 'location' for the current pixel computation.
If anyone can suggest how I could hack tracepixel.cpp, I can try it, if anyone thinks it can work. I don't know whether it would also produce the Y spiral issue.



From: William F Pokorny
Subject: Re: Omni-directional Stereo Content
Date: 7 Mar 2016 07:27:59
Message: <56dd73cf$1@news.povray.org>
On 03/06/2016 04:41 PM, Clodo wrote:
>> Would you please render, view and
>> let us know if the result OK or not?
>
>
> Your .pov in attachment, rendered with square 2048x2048 resolution:
> http://www.clodo.it/host/images/5f9811c6314aefa13bc7f619424fd0537f2b9483.png
>
> Rotate 90 CW with an external image:
> http://www.clodo.it/host/images/ec5b3235955cef0cbab6707b4a54259dcbd111dc.png
>
> and played in VR headset (spherical environment, stereo top/bottom) look almost
> ok in front view, but
> looking vertically up, there are issue:
> http://www.clodo.it/host/images/252fbce0fdc3188a7e1a2fedd7da7ba75e0471bc.jpg
> look the "leg" of the X, it's deformed and inverted between eyes.

Thanks for testing. First, am I correct that the top/bottom vs 
left/right in the stereo VR world refers to the placement of the left 
and right eye images and not to the intended orientation for head 
movement? Think the answer is yes, but want to be sure.

The pull apart up and down with the X doesn't surprise me. What I 
believe is happening is that at the top and bottom we are fully seeing 
the pupil offset in the ODS approximation. This approximation is not 
noticed where things are all grey, but we see it clearly on the leg of 
the X, which happens to be at the top pole.

It seems to me the ODS scheme will work reasonably well at the equator, 
so to speak, and less well the further one or both eyes tilt off it, 
for single top/bottom images. It relates to what Alain said in another 
post in this thread: to really do things cleanly you pretty much have 
to render another image/frame once one or both eyes move off the 
existing image's horizontal/pupil equator.

That said, what I've got no clue about is what games might be getting 
played in the VR hardware itself; perhaps I missed some existing 
adjustment when reading the PDF? I can, for example, imagine a scheme 
which forfeits the 3D effect at top and bottom to prevent the visual 
pull apart.

Do you have other golden images where there is a thin shape like the leg 
of the X cross at or very near straight up or down? Might clue us in to 
what special handling gets done at the poles - if any.

>
> Sorry but it's difficult to understand if the 3D effect is correct in your
> example, and i'm fail to render other scenes.
> I took the "spherical.pov" sample from Povray distribution.
> standard camera{spherical} render (2048x2048):
> http://www.clodo.it/host/images/4ecb53b74a7ffdb39369f14fd2664aeb7c6aa817.png
> Simply comment the camera, adding your code (final .pov on pastebin:
> http://pastebin.com/ajyHzH1E )
> render:
> http://www.clodo.it/host/images/69f31e143d340ce078a07a349108b0466271b868.png
>
> Also my scene with your mesh camera render totally unexpected big pixels:
> http://www.clodo.it/host/images/160d46699c5da84daa11d06362b36856173497a7.jpg
>
>
> All tests rendered with version 3.7.0.msvc10.win64, the official stable.

The spherical.pov scene is set up with the camera location at a y of 10, 
whereas the mesh camera as defined is at the origin. Something I should 
have added as a note in the header is that the usual available camera {} 
transformations do not work with the mesh camera. You can move the mesh 
camera to the same location as the spherical camera in spherical.pov by 
changing the line:

  mesh { Mesh00 rotate x*(360*(i/ImageHeight))}

to

  mesh { Mesh00 rotate x*(360*(i/ImageHeight)) translate <0,10,0> }

This will eliminate the big pixels - at 0,0 the camera is sometimes at 
or in the scene's plane.

This leaves us with the fact that spherical.pov is set up with shapes on 
the X,Z plane, Y up, which doesn't match the ODSC mesh camera's fixed 
X+ up, so the result will be oriented differently than the spherical 
camera result.

>
> --------------------------------
> In the mean-time, i compiled the patch of Paul Bourke
> http://paulbourke.net/stereographics/povcameras/
> with old PovRay 3.6.1 sources, with my scene with floating spheres,
> settings zero-parallax far away (1000), IPD 0.065 and test in with VR headset.
> For me, it's perfect, projection, scale and 3D effect.
>
> I still have issue to porting it to POV-Ray 3.7, i don't know how to convert
> VLinComb3, Ray->Initial etc.
>
> --------------------------------
> The new function-based user-defined camera available in 3.7.1 alpha look
> interesting, but i'm totally newbie about using it to render a spherical stereo.
>

Yes, cool feature for sure & not just for stereo implementations. I too 
would need to think for a bit on how to code up the ODS camera with it.

Bill P.



From: William F Pokorny
Subject: Re: Omni-directional Stereo Content
Date: 7 Mar 2016 07:37:19
Message: <56dd75ff$1@news.povray.org>
On 03/07/2016 07:27 AM, William F Pokorny wrote:
...

I took a quick look at the Google PDF again with respect to the X leg 
pull apart at the pole, and I see this note:

"Objects appearing directly above or below the camera should remain at 
least 3.5m from the camera (relative to an IPD of 6.5 cm)."

I placed the text much closer (1m?) and I think the "rule" above is 
precisely about minimizing the distortion of shapes at the poles.

Bill P.



From: Stephen
Subject: Re: Omni-directional Stereo Content
Date: 7 Mar 2016 08:50:41
Message: <56dd8731$1@news.povray.org>
On 3/7/2016 12:27 PM, William F Pokorny wrote:
> Thanks for testing. First, am I correct that the top/bottom vs
> left/right in the stereo VR world refers to the placement of the left
> and right eye images and not to the intended orientation for head
> movement? Think the answer is yes, but want to be sure.

I've wondered about that too. So I googled it.
One article I read said that Top/Bottom formatting MUST be used with 
progressive (720p and 1080p) HD video formats exclusively.
SbS formatting MUST be used with interlaced (1080i) HD video formats 
exclusively.

https://opticalflow.wordpress.com/2010/09/19/side-by-side-versus-top-and-bottom-3d-formats/



-- 

Regards
     Stephen



From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 7 Mar 2016 14:35:00
Message: <web.56ddd767976329954e8811590@news.povray.org>
> Thanks for testing. First, am I correct that the top/bottom vs
> left/right in the stereo VR world refers to the placement of the left
> and right eye images and not to the intended orientation for head
> movement? Think the answer is yes, but want to be sure.

Yes.

Stephen is right, but in practice side-by-side is the most frequently used format,
even with progressive video.

Some 3D videos out there (a separate image for each eye, not anaglyph of course)
are built
with the left eye on top and the right eye on the bottom (generally called top-bottom or
up-down),
others with the left eye on the left and the right eye on the right (generally called
side-by-side).

I think there are only 'human' reasons for this, no scientific/optimization
reason, also because they are almost all progressive.
There is mixed use and no de facto standard. Every video player has options
for switching formats.

The headsets are physically side-by-side of course;
people capture footage from the headset and publish it on YouTube (random example:
https://www.youtube.com/watch?v=A7q3mY0iNOQ),
other people view it and think side-by-side is a more "natural" format for
3D videos.
Capture hardware, like the GoPro 3D, has its lenses side by side, and it feels more
"natural" to view the captured footage in the same format.

The major headache: videos and images don't store this kind of information, and it
can't be detected easily.
For example, Oculus (the major market player in VR headsets) asks people to provide a
.txt file with the same name as the video, containing JSON describing the format.
Docs:
https://support.oculus.com/hc/en-us/articles/205453088-Watching-your-videos-in-Oculus-Video
Other docs say it's better to append a suffix _TB or _LR to the filename:
https://support.oculus.com/hc/en-us/articles/204401983-Viewing-Your-360-videos-in-Oculus-Video-

Personally, I prefer top/bottom.
In a panorama, users more frequently look around horizontally, not
vertically, so an aspect ratio with more width and less height is better IMHO.
And, for convenience, I put the images top/bottom to obtain an almost square image,
more friendly for thumbnail previews.



Bill, I will reply as soon as I have understood your message well. Thanks for your
patience.



From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 8 Mar 2016 14:30:00
Message: <web.56df278a976329954e8811590@news.povray.org>
> It adds a new function-based user-defined camera:
>
>     camera {
>       user_defined
>       location {
>         FUNCTION, // x-coordinate of ray origin
>         FUNCTION, // y-coordinate of ray origin
>         FUNCTION  // z-coordinate of ray origin
>       }
>       direction {
>         FUNCTION, // x-coordinate of ray direction
>         FUNCTION, // y-coordinate of ray direction
>         FUNCTION  // z-coordinate of ray direction
>       }
>       CAMERA_MODIFIERS
>     }
>
> where each FUNCTION takes the screen coordinates as parameters, ranging
> from -0.5 (left/bottom) to 0.5 (right/top).

Hi, I'm missing something about the syntax here...

camera {
    user_defined
    location {
        function (sx,sy) { sx*sy },
        function (sx,sy) { sx*sy },
        function (sx,sy) { sx*sy }
        }
    direction {
        function (sx,sy) { sx*sy },
        function (sx,sy) { sx*sy },
        function (sx,sy) { sx*sy }
    }
}

gives a
Parse Error: Missing { after 'location', ( found instead
on both 3.7.1-alpha binaries from your message, and also on my updated GitHub
clone. Currently lost debugging Parse_User_Defined_Camera...
Has anyone tried the user_defined camera and can post an example? Thanks.



From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 8 Mar 2016 17:00:00
Message: <web.56df4a1d976329954e8811590@news.povray.org>
I wrote directly into the POV-Ray sources an adaptation of the formulas from the ODS
document in the first post of this thread.
I added simple code to render both eyes directly into a single image.

      // Feature missing: original direction ignored (front)

   // Maybe params:
   DBL ipd = 0.065;
   int mode = 4; // 0: nostereo, 1: left, 2: right, 3: side-by-side, 4: top/bottom

   // Convert the x coordinate to be a DBL from 0 to 1.
   x0 = x / width;

   // Convert the y coordinate to be a DBL from 0 to 1.
   y0 = y / height;

   int eye = 0;
   if (mode == 0)
   {
    eye = 0;
   }
   else if (mode == 1)
   {
    eye = -1;
   }
   else if (mode == 2)
   {
    eye = +1;
   }
   else if (mode == 3)
   {
    if (x0 < 0.5) // Left eye on Left
    {
     x0 *= 2;
     eye = -1;
    }
    else // Right eye on Right
    {
     x0 -= 0.5;
     x0 *= 2;
     eye = +1;
    }
   }
   else if (mode == 4)
   {
    if (y0 < 0.5) // Left eye on Top
    {
     y0 *= 2;
     eye = -1;
    }
    else // Right eye on Bottom
    {
     y0 -= 0.5;
     y0 *= 2;
     eye = +1;
    }
   }


   DBL pi = M_PI;

   DBL theta = x0 * 2 * pi - pi;
   DBL phi = pi / 2 - y0*pi;

   DBL scale = eye * ipd / 2;

   ray.Origin[0] = cameraLocation[0] + cos(theta) * scale;
   ray.Origin[1] = cameraLocation[1] + 0;
   ray.Origin[2] = cameraLocation[2] + sin(theta) * scale;

   ray.Direction[0] = sin(theta) * cos(phi);
   ray.Direction[1] = sin(phi);
   ray.Direction[2] = -cos(theta) * cos(phi);

   if (useFocalBlur)
    JitterCameraRay(ray, x, y, ray_number);

   InitRayContainerState(ray, true);


Bill's test scene rendered with IPD 0.065 and mode 4 (top/bottom) with the
above code:

http://www.clodo.it/host/images/0cfb243a2018ff0d3e668329da99756626c95594.png

The projection and the 3D effect seem correct on the headset. The Z axis is inverted;
that doesn't matter, it's easy to fix.
The big issue is the same one that the "Bill P. mesh camera" and the
"Paul Bourke POV-Ray 3.6 patch" also have:
near the poles on the Y axis, a spiral effect.

http://www.clodo.it/host/images/42c1548d0620db0c5e54324e6ad6cd23ee86f8fb.png

Tested with two of the more popular image/video players for the Oculus headset: Virtual
Desktop & MaxVR. No differences.



From: clipka
Subject: Re: Omni-directional Stereo Content
Date: 9 Mar 2016 04:50:37
Message: <56dff1ed$1@news.povray.org>
Am 08.03.2016 um 20:29 schrieb Clodo:

> Hi, i'm missing something here about syntax...
> 
> camera {
>     user_defined
>     location {
>         function (sx,sy) { sx*sy },
>         function (sx,sy) { sx*sy },
>         function (sx,sy) { sx*sy }
>         }
>     direction {
>         function (sx,sy) { sx*sy },
>         function (sx,sy) { sx*sy },
>         function (sx,sy) { sx*sy }
>     }
> }
> 
> give a
> Parse Error: Missing { after 'location', ( found instead

Sorry, my bad, I should have explained the syntax in more detail.

Like in the `parametric` shape, the functions' parameter list is fixed,
and cannot be changed. Use

    function { EXPRESSION }

and use `x` and `y` or, alternatively, `u` and `v` to reference the
parameters, e.g.

    camera {
      user_defined
      location {
        function { sin(x) }
        function { sin(y) }
        function { 1 }
      }
      direction {
        function { 0 }
        function { 1 }
        function { -10 }
      }
    }



From: Clodo
Subject: Re: Omni-directional Stereo Content
Date: 9 Mar 2016 08:20:00
Message: <web.56e02265976329954e8811590@news.povray.org>
Thanks clipka.


Quick benchmark: scenes/camera/spherical.pov, 4096x2048, AA 0.3, no stereo
(tested 3 times each).
All three methods generate exactly the same image.
----------------
Default camera{spherical}: 22 seconds
----------------
With my C implementation of ODS, directly in tracepixel.cpp (
http://pastebin.com/fJ0Z978Q ), mode 0 (no-stereo): 19 seconds
----------------
With the user-defined camera below: 22 seconds

#declare ipd = 0.065;
#declare eye = 0; // 0: No-Stereo, 1: Left, 2: Right
camera {
      user_defined
      location {
        function { cos((x+0.5) * 2 * pi - pi)*ipd/2*eye }
        function { 0 }
        function { sin((x+0.5) * 2 * pi - pi)*ipd/2*eye }
      }
      direction {
        function { sin((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
        function { sin(pi / 2 - (1-(y+0.5))*pi) }
        function { cos((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
      }
    }
---------------

clipka, my C implementation can render both side-by-side and top-bottom
directly into one image.
I was unable to do the same thing with the user_defined camera.
Besides the need to declare/compute the common variables theta/phi in
each vector component (I can't find a way to define them outside the functions),
I can't find the right syntax to inject an #if.
For example
function { #if(x<0.5) x #else 1/x #end }
doesn't work...
Am I missing something about the syntax, or isn't it possible? Thanks for any feedback.




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.