POV-Ray : Newsgroups : povray.advanced-users : Facebook 3D posts

Facebook 3D posts (Message 1 to 10 of 41)
From: Mike Horvath
Subject: Facebook 3D posts
Date: 17 Jun 2021 23:30:46
Message: <60cc1366$1@news.povray.org>
On Facebook you can create 3D posts using an image and a depth map.

Described here:

https://www.facebook.com/help/414295416095269

A couple of questions.

Has anyone created a script to generate these?

Does depth increase linearly? How deep are white and black supposed to 
be? Do these follow geometric rules?

Thanks.


From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 17 Jun 2021 23:35:23
Message: <60cc147b@news.povray.org>
On 6/17/2021 11:30 PM, Mike Horvath wrote:
> Do these follow geometric rules?

I mean, does depth follow inverse square law or something?


Mike


From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 00:10:57
Message: <60cc1cd1$1@news.povray.org>
On 6/17/2021 11:30 PM, Mike Horvath wrote:
> On Facebook you can create 3D posts using an image and a depth map.
> 
> Described here:
> 
> https://www.facebook.com/help/414295416095269
> 
> A couple of questions.
> 
> Has anyone created a script to generate these?
> 
> Does depth increase linearly? How deep are white and black supposed to 
> be? Do these follow geometric rules?
> 
> Thanks.


Thinking about it, blackness should be the inverse tangent of the 
distance of an object from the camera. But what about the scale? How 
much is 1 unit? I'm guessing the scale should be somehow related to the 
distance between the viewer's eyes. It's also possible that the Facebook 
plugin fudges all these numbers somehow in completely arbitrary ways.
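
In code terms, something like this is what I have in mind (purely a guess;
EYEDIST stands in for the eye separation, and the value is made up):

///////////////////////////////////////////////////////////////////////

// Map a distance DIST in [0, infinity) to a gray value in [0, 1):
// atan2(DIST, EYEDIST) runs from 0 to pi/2, and dividing by pi/2
// normalizes it.
#declare EYEDIST = 0.065; // made-up scale: ~6.5 cm eye separation, in scene units

#declare depth_to_gray = function(DIST) { atan2(DIST, EYEDIST) / (pi/2) }

///////////////////////////////////////////////////////////////////////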


Mike


From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 00:20:00
Message: <web.60cc1df793de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/17/2021 11:30 PM, Mike Horvath wrote:
> > Do these follow geometric rules?
>
> I mean, does depth follow inverse square law or something?
>
>
> Mike

Hi Mike,

I recommend you have a look at this article:

http://paulbourke.net/reconstruction/depthmap2/

Hope it will help.

Pascal


From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 00:39:28
Message: <60cc2380$1@news.povray.org>
On 6/18/2021 12:16 AM, BayashiPascal wrote:
> Mike Horvath <mik### [at] gmailcom> wrote:
>> On 6/17/2021 11:30 PM, Mike Horvath wrote:
>>> Do these follow geometric rules?
>>
>> I mean, does depth follow inverse square law or something?
>>
>>
>> Mike
> 
> Hi Mike,
> 
> I recommend you have a look at this article:
> 
> http://paulbourke.net/reconstruction/depthmap2/
> 
> Hope it will help.
> 
> Pascal
> 
> 
> 

Interesting. The pigment in that example is clipped and normalized, 
however. Wouldn't it make more sense geometrically to extend the pigment 
to infinity?


Mike


From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 01:25:00
Message: <web.60cc2cfd93de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/18/2021 12:16 AM, BayashiPascal wrote:
> > Mike Horvath <mik### [at] gmailcom> wrote:
> >> On 6/17/2021 11:30 PM, Mike Horvath wrote:
> >>> Do these follow geometric rules?
> >>
> >> I mean, does depth follow inverse square law or something?
> >>
> >>
> >> Mike
> >
> > Hi Mike,
> >
> > I recommend you have a look at this article:
> >
> > http://paulbourke.net/reconstruction/depthmap2/
> >
> > Hope it will help.
> >
> > Pascal
> >
> >
> >
>
> Interesting. The pigment in that example is clipped and normalized,
> however. Wouldn't it make more sense geometrically to extend the pigment
> to infinity?
>
>
> Mike

Do you mean the *gradient* is clipped and normalized? If so, that is done so
it can be used as an input to the color_map (which afaik only takes values in
[0, 1]). If not, I don't understand your comment.
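
Something like this, I mean (my own paraphrase of the idea, not the exact
script from the article):

// Clip a planar depth gradient to [gradmin, gradmax] and rescale it to
// [0, 1] so it can feed a color_map; min/max clamp values outside the range.
#declare clipped_scaled_gradient =
	function(x, y, z, gradx, grady, gradz, gradmin, gradmax) {
		min(1, max(0, (x*gradx + y*grady + z*gradz - gradmin)
			/ (gradmax - gradmin)))
	}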

Pascal


From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 01:35:20
Message: <60cc3098$1@news.povray.org>
On 6/18/2021 1:19 AM, BayashiPascal wrote:
>> Interesting. The pigment in that example is clipped and normalized,
>> however. Wouldn't it make more sense geometrically to extend the pigment
>> to infinity?
>>
>>
>> Mike
> 
> Do you mean the *gradient* is clipped and normalized? If so, that is done so
> it can be used as an input to the color_map (which afaik only takes values in
> [0, 1]). If not, I don't understand your comment.
> 
> Pascal
> 
> 

I mean the gradient function is limited to a minimum and maximum range. It 
should more realistically start at the camera and extend to infinity. 
(But also be scaled somehow to the scale of the scene.)

Here is my attempt, though I don't think it is working.

///////////////////////////////////////////////////////////////////////

#declare CAMERAPOS    = <3,3,3>;
#declare CAMERALOOKAT = <0,0,0>;
#declare CAMERAFRONT  = vnormalize(CAMERALOOKAT - CAMERAPOS);
#declare CAMERAFRONTX = CAMERAFRONT.x;
#declare CAMERAFRONTY = CAMERAFRONT.y;
#declare CAMERAFRONTZ = CAMERAFRONT.z;

#declare my_gradient = function(x, y, z, gradx, grady, gradz)
{
	atan(x * gradx + y * grady + z * gradz)/(pi/2)
}

#declare Muns_depth_pigment = pigment
{
	function
	{
//		clipped_scaled_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ, DEPTHMIN, DEPTHMAX)
		my_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ)
	}
	color_map
	{
		[0 color rgb <1,1,1>]
		[1 color rgb <0,0,0>]
	}
	translate CAMERAPOS
}

///////////////////////////////////////////////////////////////////////
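
One possible culprit, now that I look at it: with plain atan(), a point just
1 unit in front of the camera already maps to 0.5, so everything farther away
gets squeezed into the upper half of the range, and points behind the camera
go negative. A variant with an explicit scale knob (SCALE below is an
arbitrary value I made up) would spread things out, and uses atan2() in case
plain atan() is the problem:

///////////////////////////////////////////////////////////////////////

// SCALE is the distance that maps to mid-gray (0.5); atan2(d, SCALE)/(pi/2)
// still runs from 0 toward 1 as d goes from the camera to infinity.
// max(0, ...) clamps anything behind the camera to white instead of
// letting it go negative.
#declare SCALE = 5;

#declare my_scaled_gradient = function(x, y, z, gradx, grady, gradz)
{
	atan2(max(0, x * gradx + y * grady + z * gradz), SCALE) / (pi/2)
}

///////////////////////////////////////////////////////////////////////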

Mike


From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 02:55:00
Message: <web.60cc425493de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/18/2021 1:19 AM, BayashiPascal wrote:
> >> Interesting. The pigment in that example is clipped and normalized,
> >> however. Wouldn't it make more sense geometrically to extend the pigment
> >> to infinity?
> >>
> >>
> >> Mike
> >
> > Do you mean the *gradient* is clipped and normalized? If so, that is done so
> > it can be used as an input to the color_map (which afaik only takes values in
> > [0, 1]). If not, I don't understand your comment.
> >
> > Pascal
> >
> >
>
> I mean the gradient function is limited to a minimum and maximum range. It
> should more realistically start at the camera and extend to infinity.
> (But also be scaled somehow to the scale of the scene.)
>
> Here is my attempt, though I don't think it is working.
>
> ///////////////////////////////////////////////////////////////////////
>
> #declare CAMERAPOS    = <3,3,3>;
> #declare CAMERALOOKAT = <0,0,0>;
> #declare CAMERAFRONT  = vnormalize(CAMERALOOKAT - CAMERAPOS);
> #declare CAMERAFRONTX = CAMERAFRONT.x;
> #declare CAMERAFRONTY = CAMERAFRONT.y;
> #declare CAMERAFRONTZ = CAMERAFRONT.z;
>
> #declare my_gradient = function(x, y, z, gradx, grady, gradz)
> {
>  atan(x * gradx + y * grady + z * gradz)/(pi/2)
> }
>
> #declare Muns_depth_pigment = pigment
> {
>  function
>  {
> //  clipped_scaled_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ, DEPTHMIN, DEPTHMAX)
>   my_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ)
>  }
>  color_map
>  {
>   [0 color rgb <1,1,1>]
>   [1 color rgb <0,0,0>]
>  }
>  translate CAMERAPOS
> }
>
> ///////////////////////////////////////////////////////////////////////
>
> Mike

I don't see any problem with your solution. It really just depends on how you
(or the tool you use) interpret the generated depth values.

I've never made such images for Facebook, and have no idea how their plugin
works. Googling a little about it brought me to this article:
https://techcrunch.com/2018/06/07/how-facebooks-new-3d-photos-work/
containing a link to the research paper behind the plugin:
http://visual.cs.ucl.ac.uk/pubs/instant3d/
I had a quick look at it but had neither the courage nor the motivation to read
it entirely. However, I noticed that on page 5 they say: "However, this
did not achieve good results, because, as we learned, many depth
maps are normalized using unknown curves. We tried a variety of
other classes of global transformations, ...". This makes me think they're
reconstructing an approximate 3D mesh, pleasant to the eye but not realistic,
in order to cope with any kind of input, noise, and extrapolation from even a
single image. After all, their goal was probably to make a user-friendly
plugin rather than an accurate one.

In conclusion, it probably doesn't really matter what kind of depth encoding you
use if you plan to use it on Facebook! Maybe the best approach is to experiment...

I wrote the script on Paul Bourke's site while working with 3D scanners, where
depth linearly encoded either the planar or the spherical distance from the
camera within a range. You can also set DEPTHMIN to 0.0 and DEPTHMAX to an
arbitrarily big value to get a depth map from the camera to (kind of) infinity.
That's why they are variables and not hard-coded in my script.
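
For example, something like this (a sketch only, reusing the CAMERA*
declarations from your script; the DEPTHMAX value is arbitrary):

#declare DEPTHMIN = 0.0;    // distance rendered as white
#declare DEPTHMAX = 1000.0; // arbitrarily big, so the map reaches (kind of) infinity

#declare depth_pigment = pigment
{
	function
	{
		clipped_scaled_gradient(x, y, z,
			CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ,
			DEPTHMIN, DEPTHMAX)
	}
	color_map
	{
		[0 color rgb <1,1,1>]
		[1 color rgb <0,0,0>]
	}
	translate CAMERAPOS
}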

Pascal


From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 03:11:44
Message: <60cc4730$1@news.povray.org>
On 6/18/2021 2:51 AM, BayashiPascal wrote:
> In conclusion, it probably doesn't really matter what kind of depth encoding you
> use if you plan to use it on Facebook! Maybe the best approach is to experiment...

Interesting.

I wonder if there is a tool that works with stereo images and is less 
ambiguous?

But it might not matter.


Mike


From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 03:50:00
Message: <web.60cc4f0993de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/18/2021 2:51 AM, BayashiPascal wrote:
> > In conclusion, it probably doesn't really matter what kind of depth encoding you
> > use if you plan to use it on Facebook! Maybe the best approach is to experiment...
>
> Interesting.
>
> I wonder if there is a tool that works with stereo images and is less
> ambiguous?
>
> But it might not matter.
>
>
> Mike

This field is called 'photogrammetry'... and I warn you, it's a rabbit hole (I
work on it every day)!!

Maybe you can have a look at the Wikipedia page first if you don't know about
it:
https://en.wikipedia.org/wiki/Photogrammetry

There are tons of techniques and tools, each with its particular advantages,
drawbacks, and use cases...

Pascal

