On 3/7/2016 12:27 PM, William F Pokorny wrote:
> Thanks for testing. First, am I correct that the top/bottom vs
> left/right in the stereo VR world refers to the placement of the left
> and right eye images and not to the intended orientation for head
> movement? Think the answer is yes, but want to be sure.
I've wondered about that too. So I googled it.
One article I read said that Top/Bottom formatting MUST be used with
progressive (720p and 1080p) HD video formats exclusively.
SbS formatting MUST be used with interlaced (1080i) HD video formats
exclusively.
https://opticalflow.wordpress.com/2010/09/19/side-by-side-versus-top-and-bottom-3d-formats/
--
Regards
Stephen
> Thanks for testing. First, am I correct that the top/bottom vs
> left/right in the stereo VR world refers to the placement of the left
> and right eye images and not to the intended orientation for head
> movement? Think the answer is yes, but want to be sure.
Yes.
Stephen is right, but side-by-side is actually the most frequently used format,
even with progressive video.
Some 3D videos out there (a separate image for each eye, not anaglyph of course)
are built with the left eye on top and the right eye on the bottom (generally
called top-bottom or up-down), others with the left eye on the left and the
right eye on the right (generally called side-by-side).
I think the reasons for the choice are purely 'human' ones, with no
scientific or optimization rationale, also because they are almost all progressive.
Usage is mixed and there is no de-facto standard. Every video player has
options for switching formats.
The headsets are physically side-by-side, of course;
people capture footage from a headset and publish it on YouTube (random example:
https://www.youtube.com/watch?v=A7q3mY0iNOQ),
other people view it and think side-by-side is a more "natural" format for
3D videos.
Capture hardware, like the GoPro 3D, has its lenses side by side, and it feels
more "natural" to view the captured footage in the same format.
The major headache: video and image files don't store this kind of information,
and it can't easily be detected.
For example, Oculus (the major player in the VR headset market) asks people to
provide a .txt file with the same name as the video, containing JSON that
describes the format. Docs:
https://support.oculus.com/hc/en-us/articles/205453088-Watching-your-videos-in-Oculus-Video
Other docs say it's better to append a _TB or _LR suffix to the filename:
https://support.oculus.com/hc/en-us/articles/204401983-Viewing-Your-360-videos-in-Oculus-Video-
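As a toy illustration of that suffix convention (a hypothetical helper in Python, not part of any official Oculus API; real players may use other heuristics):

```python
def stereo_format_from_name(filename):
    """Guess the stereo layout from an Oculus-style filename suffix.

    Illustrative only: mirrors the _TB / _LR naming convention from the
    Oculus docs linked above.
    """
    stem = filename.rsplit(".", 1)[0].upper()  # drop the extension
    if stem.endswith("_TB"):
        return "top-bottom"
    if stem.endswith("_LR"):
        return "side-by-side"
    return "unknown"

assert stereo_format_from_name("demo_TB.mp4") == "top-bottom"
assert stereo_format_from_name("clip_lr.mp4") == "side-by-side"
```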
Personally, I prefer top/bottom.
In a panorama, users more often look around horizontally than vertically, so an
aspect ratio with more width and less height is better, imho.
And, for convenience, I stack the images top/bottom to obtain an almost square
image that is friendlier for thumbnail previews.
Bill, I will reply as soon as I fully understand your message. Thanks for your
patience.
> It adds a new function-based user-defined camera:
>
> camera {
> user_defined
> location {
> FUNCTION, // x-coordinate of ray origin
> FUNCTION, // y-coordinate of ray origin
> FUNCTION // z-coordinate of ray origin
> }
> direction {
> FUNCTION, // x-coordinate of ray direction
> FUNCTION, // y-coordinate of ray direction
> FUNCTION // z-coordinate of ray direction
> }
> CAMERA_MODIFIERS
> }
>
> where each FUNCTION takes the screen coordinates as parameters, ranging
> from -0.5 (left/bottom) to 0.5 (right/top).
Hi, I'm missing something here about the syntax...
camera {
user_defined
location {
function (sx,sy) { sx*sy },
function (sx,sy) { sx*sy },
function (sx,sy) { sx*sy }
}
direction {
function (sx,sy) { sx*sy },
function (sx,sy) { sx*sy },
function (sx,sy) { sx*sy }
}
}
gives a
Parse Error: Missing { after 'location', ( found instead
on both 3.7.1-alpha binaries from your message, and also on my updated GitHub
clone. Currently lost debugging Parse_User_Defined_Camera...
Has anyone tried the user_defined camera and can post an example? Thanks.
I wrote an adaptation of the formulas from the ODS document in the first post
of this thread directly into the POV-Ray sources.
I also added simple code to render both eyes directly into a single image.
// Feature missing: the original direction is ignored (front is assumed).
// Maybe parameters:
DBL ipd = 0.065;
int mode = 4; // 0: no-stereo, 1: left, 2: right, 3: side-by-side, 4: top/bottom

// Convert the x coordinate to a DBL from 0 to 1.
x0 = x / width;
// Convert the y coordinate to a DBL from 0 to 1.
y0 = y / height;

int eye = 0;
if (mode == 0)
{
    eye = 0;
}
else if (mode == 1)
{
    eye = -1;
}
else if (mode == 2)
{
    eye = +1;
}
else if (mode == 3)
{
    if (x0 < 0.5) // Left eye on left
    {
        x0 *= 2;
        eye = -1;
    }
    else // Right eye on right
    {
        x0 -= 0.5;
        x0 *= 2;
        eye = +1;
    }
}
else if (mode == 4)
{
    if (y0 < 0.5) // Left eye on top
    {
        y0 *= 2;
        eye = -1;
    }
    else // Right eye on bottom
    {
        y0 -= 0.5;
        y0 *= 2;
        eye = +1;
    }
}

DBL pi = M_PI;
DBL theta = x0 * 2 * pi - pi;
DBL phi = pi / 2 - y0 * pi;
DBL scale = eye * ipd / 2;

ray.Origin[0] = cameraLocation[0] + cos(theta) * scale;
ray.Origin[1] = cameraLocation[1] + 0;
ray.Origin[2] = cameraLocation[2] + sin(theta) * scale;

ray.Direction[0] = sin(theta) * cos(phi);
ray.Direction[1] = sin(phi);
ray.Direction[2] = -cos(theta) * cos(phi);

if (useFocalBlur)
    JitterCameraRay(ray, x, y, ray_number);

InitRayContainerState(ray, true);
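For what it's worth, the per-pixel mapping can also be checked outside POV-Ray. Below is a minimal standalone sketch (Python, purely illustrative; `ods_ray` is a hypothetical helper mirroring the constants and conventions of the snippet above):

```python
import math

def ods_ray(x0, y0, eye, ipd=0.065):
    """Map normalized screen coords (0..1) to an ODS ray.

    eye: -1 = left, +1 = right, 0 = no stereo.
    Mirrors the C snippet above (same theta/phi convention).
    """
    theta = x0 * 2 * math.pi - math.pi   # longitude, -pi .. pi
    phi = math.pi / 2 - y0 * math.pi     # latitude, +pi/2 (top) .. -pi/2 (bottom)
    scale = eye * ipd / 2                # half the interpupillary distance
    origin = (math.cos(theta) * scale, 0.0, math.sin(theta) * scale)
    direction = (math.sin(theta) * math.cos(phi),
                 math.sin(phi),
                 -math.cos(theta) * math.cos(phi))
    return origin, direction

# The direction has unit length by construction (sin^2 + cos^2 twice over),
# so no normalization pass is needed:
_, d = ods_ray(0.25, 0.5, eye=-1)
assert abs(sum(c * c for c in d) - 1.0) < 1e-9
```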
Bill's test scene rendered with IPD 0.065 and mode 4 (top/bottom) using the
above code:
http://www.clodo.it/host/images/0cfb243a2018ff0d3e668329da99756626c95594.png
The projection and the 3D effect look correct on the headset. The Z axis is
inverted, but that doesn't matter; it's easy to fix.
The big issue is the same one that the "Bill P. mesh camera" and
"Paul Bourke's POV-Ray 3.6 patch" also have:
near the poles on the Y axis, a spiral effect.
http://www.clodo.it/host/images/42c1548d0620db0c5e54324e6ad6cd23ee86f8fb.png
Tested with two of the most popular image/video players for the Oculus headset:
Virtual Desktop & MaxVR. No differences.
On 08.03.2016 at 20:29, Clodo wrote:
> Hi, I'm missing something here about the syntax...
>
> camera {
> user_defined
> location {
> function (sx,sy) { sx*sy },
> function (sx,sy) { sx*sy },
> function (sx,sy) { sx*sy }
> }
> direction {
> function (sx,sy) { sx*sy },
> function (sx,sy) { sx*sy },
> function (sx,sy) { sx*sy }
> }
> }
>
> gives a
> Parse Error: Missing { after 'location', ( found instead
Sorry, my bad, I should have explained the syntax in more detail.
Like in the `parametric` shape, the functions' parameter list is fixed,
and cannot be changed. Use
function { EXPRESSION }
and use `x` and `y` or, alternatively, `u` and `v` to reference the
parameters, e.g.
camera {
user_defined
location {
function { sin(x) }
function { sin(y) }
function { 1 }
}
direction {
function { 0 }
function { 1 }
function { -10 }
}
}
Thanks clipka.
Quick benchmark: scenes/camera/spherical.pov, 4096x2048, AA 0.3, no-stereo
(tested 3 times each).
All three methods generate the exact same image.
----------------
Default camera{spherical}: 22 seconds
----------------
With my C implementation of ODS, directly in tracepixel.cpp (
http://pastebin.com/fJ0Z978Q ), mode 0 (no-stereo): 19 seconds
----------------
With the below user-defined camera: 22 seconds
#declare ipd = 0.065;
#declare eye = 0; // 0: No-Stereo, 1: Left, 2: Right
camera {
  user_defined
  location {
    function { cos((x+0.5) * 2 * pi - pi)*ipd/2*eye }
    function { 0 }
    function { sin((x+0.5) * 2 * pi - pi)*ipd/2*eye }
  }
  direction {
    function { sin((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
    function { sin(pi / 2 - (1-(y+0.5))*pi) }
    function { cos((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
  }
}
---------------
clipka, my C implementation can render both side-by-side and top-bottom
directly into one image.
I was unable to do the same with the user_defined camera.
Besides needing to declare/compute the common variables theta/phi in each of
the three vector components (I can't find a way to define them outside),
I can't find the right syntax to inject an #if.
For example
function { #if(x<0.5) x #else 1/x #end }
doesn't work...
Am I missing something about the syntax, or is it not possible? Thanks for any feedback.
On 03/08/2016 04:56 PM, Clodo wrote:
> The big issue is the same one that the "Bill P. mesh camera" and
> "Paul Bourke's POV-Ray 3.6 patch" also have:
> near the poles on the Y axis, a spiral effect.
>
> http://www.clodo.it/host/images/42c1548d0620db0c5e54324e6ad6cd23ee86f8fb.png
>
> Tested with two of more popular image/video player for Oculus Headset: Virtual
> Desktop & MaxVR . No differences.
>
On your modification of source code for this camera - well done!
Interesting to me too is that the green cylinder behind the Y+ character
at the pole in your image is 10m away - much more than the 3.5m the
Google ODS PDF recommends at the poles - and no distortion is apparent
for that shape to my eyes.
I suppose the distortion is in fact there at those distances too, but
given the image resolution and typical environments (ground/sky) at larger
distances, we often won't see it.
The ODS "rule" of >=3.5m of open space at the poles is certainly
inconvenient given typical human dimensions and our indoor environments.
A light fixture on a ceiling, a patterned throw rug on the floor, or
walking through a doorway are all problematic examples. :-)
Bill P.
On 03/09/2016 08:17 AM, Clodo wrote:
>
> Thanks clipka.
>
>
> Quick benchmark: scenes/camera/spherical.pov, 4096x2048, AA 0.3, no-stereo
> (tested 3 times each).
> All three methods generate the exact same image.
> ----------------
> Default camera{spherical}: 22 seconds
> ----------------
> With my C implementation of ODS, direct in tracepixel.cpp (
> http://pastebin.com/fJ0Z978Q ), mode 0 (no-stereo): 19 seconds
> ----------------
> With the below user-defined camera: 22 seconds
>
> #declare ipd = 0.065;
> #declare eye = 0; // 0: No-Stereo, 1: Left, 2: Right
> camera {
> user_defined
> location {
> function { cos((x+0.5) * 2 * pi - pi)*ipd/2*eye }
> function { 0 }
> function { sin((x+0.5) * 2 * pi - pi)*ipd/2*eye }
> }
> direction {
> function { sin((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
> function { sin(pi / 2 - (1-(y+0.5))*pi) }
> function { cos((x+0.5) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
> }
> }
> ---------------
>
> clipka, my C implementation can render both side-by-side and top-bottom
> directly into one image.
> I was unable to do the same with the user_defined camera.
> Besides needing to declare/compute the common variables theta/phi in each of
> the three vector components (I can't find a way to define them outside),
> I can't find the right syntax to inject an #if.
> For example
> function { #if(x<0.5) x #else 1/x #end }
> doesn't work...
> Am I missing something about the syntax, or is it not possible? Thanks for any feedback.
>
>
>
A functional camera implementation - again, well done.
There is a select() function which I think will work for you.
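For reference, POV-Ray's 3-argument select(A, B, C) returns B when A is negative and C otherwise, which is enough to replace the #if branch inside a function. A minimal Python model of that behavior (illustrative only, not POV-Ray code):

```python
def select(a, b, c):
    """Minimal model of POV-Ray's 3-argument select(): b if a < 0, else c."""
    return b if a < 0 else c

# The branch function { #if(x<0.5) x #else 1/x #end } becomes
# select(x - 0.5, x, 1/x) in plain function syntax:
assert select(0.2 - 0.5, 0.2, 1 / 0.2) == 0.2           # left branch
assert abs(select(0.8 - 0.5, 0.8, 1 / 0.8) - 1.25) < 1e-12  # right branch
```

Note that in the user_defined camera the horizontal coordinate already runs from -0.5 to 0.5, so select(x, LEFT_EXPR, RIGHT_EXPR) picks LEFT_EXPR on the left half of the image with no extra offset.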
Bill P.
Sorry, a mistake:
#declare eye = 0; // 0: No-Stereo, 1: Left, 2: Right
is wrong; the correct version is
#declare eye = 0; // 0: No-Stereo, -1: Left, +1: Right
Anyway, I still think the above user-defined camera can serve as a proof of
concept for the great new camera type.
But imho ODS (used by the upcoming VR market) deserves a C implementation,
with user-friendly parameters.
Maybe the current 'spherical' camera can be extended with ODS parameters,
keeping backward compatibility when they are absent, without the need for a new
camera type.
Writing a full ODS user_defined camera, handling rendering of both eyes in the
different formats, and also the starting location and direction, could result
in a huge, unreadable macro.
Just my two cents.
> There is a select() function which I think will work for you.
Thanks!
// user_defined camera, ODS, both eyes in one image side-by-side.
#declare ipd = 0.065;
#declare cameraLocationX = 0;
#declare cameraLocationY = 10;
#declare cameraLocationZ = 0;
camera {
  user_defined
  location {
    function { cameraLocationX + cos(select(x,(x+0.5)*2,(x*2)) * 2 * pi - pi)*ipd/2*select(x,-1,1) }
    function { cameraLocationY }
    function { cameraLocationZ + sin(select(x,(x+0.5)*2,(x*2)) * 2 * pi - pi)*ipd/2*select(x,-1,1) }
  }
  direction {
    function { sin(select(x,(x+0.5)*2,(x*2)) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
    function { sin(pi / 2 - (1-(y+0.5))*pi) }
    function { cos(select(x,(x+0.5)*2,(x*2)) * 2 * pi - pi) * cos(pi / 2 - (1-(y+0.5))*pi) }
  }
}
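The select() remap can be sanity-checked outside POV-Ray; a literal Python translation of the expressions (illustrative only, with x as the screen coordinate in -0.5..0.5):

```python
# Literal translation of select(x, (x+0.5)*2, x*2) from the SDL above:
def remap(x):                    # x: screen coordinate, -0.5 .. 0.5
    return (x + 0.5) * 2 if x < 0 else x * 2

def eye(x):                      # select(x, -1, 1): left half -> left eye
    return -1 if x < 0 else 1

# Each half of the side-by-side image covers the full 0..1 range again:
assert remap(-0.5) == 0.0 and remap(-0.25) == 0.5   # left eye half
assert remap(0.0) == 0.0 and remap(0.25) == 0.5     # right eye half
assert eye(-0.1) == -1 and eye(0.1) == 1
```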
Render:
https://www.clodo.it/host/images/b6175498afef1f961cfed2d2dba2c5b5df515f39.png