From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 02:55:00
Message: <web.60cc425493de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/18/2021 1:19 AM, BayashiPascal wrote:
> >> Interesting. The pigment in that example is clipped and normalized,
> >> however. Wouldn't it make more sense geometrically to extend the pigment
> >> to infinity?
> >>
> >>
> >> Mike
> >
> > Do you mean the *gradient* is clipped and normalized? If so, it is done in
> > order to use it as an entry of the color_map (which afaik only takes values in
> > [0, 1]). If not, I don't understand your comment.
> >
> > Pascal
> >
> >
>
> I mean the gradient function has a minimum and maximum range. It
> should more realistically start at the camera and extend to infinity.
> (But also be scaled somehow to the scale of the scene.)
>
> Here is my attempt, though I don't think it is working.
>
> ///////////////////////////////////////////////////////////////////////
>
> #declare CAMERAPOS    = <3,3,3>;
> #declare CAMERALOOKAT = <0,0,0>;
> #declare CAMERAFRONT  = vnormalize(CAMERALOOKAT - CAMERAPOS);
> #declare CAMERAFRONTX = CAMERAFRONT.x;
> #declare CAMERAFRONTY = CAMERAFRONT.y;
> #declare CAMERAFRONTZ = CAMERAFRONT.z;
>
> #declare my_gradient = function(x, y, z, gradx, grady, gradz)
> {
>  atan(x * gradx + y * grady + z * gradz)/(pi/2)
> }
>
> #declare Muns_depth_pigment = pigment
> {
>  function
>  {
> //  clipped_scaled_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ, DEPTHMIN, DEPTHMAX)
>   my_gradient(x, y, z, CAMERAFRONTX, CAMERAFRONTY, CAMERAFRONTZ)
>  }
>  color_map
>  {
>   [0 color rgb <1,1,1>]
>   [1 color rgb <0,0,0>]
>  }
>  translate CAMERAPOS
> }
>
> ///////////////////////////////////////////////////////////////////////
>
> Mike

I don't see any problem with your solution. It really just depends on how you
(or the tool you use) interpret the generated depth values.
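
For what it's worth, your atan encoding already reaches (kind of) infinity: a
planar depth d in front of the camera maps to atan(d)/(pi/2), which is 0 at the
camera, 0.5 at d = 1, and tends to 1 at infinity. It just spends most of the
[0, 1] range on depths near the camera and compresses everything far away,
which may or may not be what the Facebook plugin expects.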

I've never made such images for Facebook, and have no idea how their plugin
works. Googling a little about it brought me to this article:
https://techcrunch.com/2018/06/07/how-facebooks-new-3d-photos-work/
containing a link to the research paper behind the plugin:
http://visual.cs.ucl.ac.uk/pubs/instant3d/
I had a quick look at it but had neither the courage nor the motivation to read it
entirely. However, I noticed that on page 5 they say: "However, this
did not achieve good results, because, as we learned, many depth
maps are normalized using unknown curves. We tried a variety of
other classes of global transformations, ...". Which makes me think they're
reconstructing an approximate 3D mesh, pleasant to the eye but not realistic,
in order to cope with any kind of input, noise, and extrapolation from even a
single image. After all, their goal was probably to make a user-friendly plugin
rather than an accurate one.

Conclusion: it probably doesn't really matter what kind of depth encoding you
use if you plan to use it on Facebook! Maybe the best approach is to experiment...

I wrote the script on Paul Bourke's site while working with 3D scanners,
where depth encoded linearly either the planar or the spherical distance from the
camera within a given range. You can also set DEPTHMIN to 0.0 and DEPTHMAX to an
arbitrarily big value to get a depth map from the camera to (kind of) infinity.
That's why they are variables and not hard-coded in my script.
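
From memory, the planar variant boils down to something like this (a sketch,
not the exact code from the site, with placeholder values for DEPTHMIN and
DEPTHMAX; the min/max pair plays the role of the clipping):

///////////////////////////////////////////////////////////////////////

// Linear depth along the camera axis, clipped to [DEPTHMIN, DEPTHMAX]
// and normalized to [0, 1] so it can feed a color_map.
#declare DEPTHMIN = 0.0;
#declare DEPTHMAX = 10.0;

#declare clipped_scaled_gradient =
 function(x, y, z, gradx, grady, gradz, gradmin, gradmax)
 {
  min(1, max(0,
   (x * gradx + y * grady + z * gradz - gradmin) / (gradmax - gradmin)))
 }

///////////////////////////////////////////////////////////////////////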

Pascal



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 03:11:44
Message: <60cc4730$1@news.povray.org>
On 6/18/2021 2:51 AM, BayashiPascal wrote:
> Conclusion: it probably doesn't really matter what kind of depth encoding you
> use if you plan to use it on Facebook! Maybe the best approach is to experiment...

Interesting.

I wonder if there is a tool that works with stereo images that is less 
ambiguous?

But it might not matter.


Mike



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 03:50:00
Message: <web.60cc4f0993de9b14a3e088d5e0f8c582@news.povray.org>
Mike Horvath <mik### [at] gmailcom> wrote:
> On 6/18/2021 2:51 AM, BayashiPascal wrote:
> > Conclusion: it probably doesn't really matter what kind of depth encoding you
> > use if you plan to use it on Facebook! Maybe the best approach is to experiment...
>
> Interesting.
>
> I wonder if there is a tool that works with stereo images that is less
> ambiguous?
>
> But it might not matter.
>
>
> Mike

This field is called 'photogrammetry'... and it's a rabbit hole, I warn you (I
work on it every day)!!

Maybe you can have a look at the Wikipedia page first if you don't know about it:
https://en.wikipedia.org/wiki/Photogrammetry

There are tons of techniques and tools, each with its own advantages,
drawbacks, and use cases...

Pascal



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 18 Jun 2021 09:24:34
Message: <60cc9e92$1@news.povray.org>
Here is a tutorial on how to generate a depth map from stereo images:

https://www.docs.opencv.org/master/dd/d53/tutorial_py_depthmap.html

However, the accuracy does not look very good. I suspect there is also a 
bias in accuracy in one (the horizontal?) direction.
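
(If I understand the method correctly, a rectified stereo pair gives depth as
Z = f*B/d, with f the focal length in pixels, B the baseline between the two
cameras, and d the horizontal disparity in pixels. A one-pixel error in d then
shifts Z by roughly Z^2/(f*B), which would explain both why accuracy degrades
quickly with distance and why the bias sits in the horizontal direction:
that's the direction the matching search runs along.)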

I may have to do this for a few of my scenes, though, as overriding all 
of my textures just to create a depth map is a serious PITA. (There's no 
easy way to do it since povray SDL is not OOP.)
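
For the record, the closest thing to a global override I know of is #default,
but it only applies to objects that don't declare a texture of their own, so
every explicitly textured object still needs hand editing. Something like:

///////////////////////////////////////////////////////////////////////

// Untextured objects pick this up; explicitly textured objects keep
// their own texture, which is why a full depth pass is such a pain.
#default
{
 texture
 {
  pigment { Muns_depth_pigment } // e.g. the depth pigment from upthread
  finish { ambient 1 diffuse 0 } // self-lit, so lighting can't pollute the depth
 }
}

///////////////////////////////////////////////////////////////////////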


Mike



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 19 Jun 2021 15:43:59
Message: <60ce48ff$1@news.povray.org>
Two examples:

https://www.facebook.com/groups/fansofpovray/permalink/4181681918565652/

https://www.facebook.com/groups/fansofpovray/permalink/4185621304838380/

You need to have an FB account and be a member of the group in order to 
see them (I think).

One turned out okay; the other was a complete failure.


Mike



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 19 Jun 2021 16:42:52
Message: <60ce56cc$1@news.povray.org>
On 6/19/2021 3:43 PM, Mike Horvath wrote:
> the other was a complete failure.
> 

Well, not a total failure, but it came out pretty bad.


Mike



From: Bald Eagle
Subject: Re: Facebook 3D posts
Date: 26 Jun 2021 15:40:00
Message: <web.60d7824f93de9b141f9dae3025979125@news.povray.org>
"BayashiPascal" <bai### [at] gmailcom> wrote:

> This field is called 'photogrammetry'... and it's a rabbit hole, I warn you (I
> work on it every day)!!

Have you implemented any of that work in POV-Ray?  I think even some of the
basics would encourage others to learn more about this intriguing field!



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 26 Jun 2021 22:00:00
Message: <web.60d7db9293de9b14a3e088d5e0f8c582@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "BayashiPascal" <bai### [at] gmailcom> wrote:
>
> > This field is called 'photogrammetry'... and it's a rabbit hole, I warn you (I
> > work on it every day)!!
>
> Have you implemented any of that work in POV-Ray?  I think even some of the
> basics would encourage others to learn more about this intriguing field!

I did, but not in POV-Ray. POV-Ray is a rendering tool: it takes as input a
representation of a 3D scene and outputs a 2D image. Photogrammetry does the
opposite: it takes 2D image(s) as input and outputs a representation of the
corresponding 3D scene. So I don't see how one could implement photogrammetry
in POV-Ray; the two serve opposite purposes.

In this thread, Mike Horvath was initially asking how to create a depth
image using POV-Ray. The depth image is the input of the photogrammetry
algorithm used by Facebook, and would otherwise be acquired using, for example,
a 3D scanner. This depth image can be generated as described in the link I gave
earlier. But this is really just about generating an input for a certain type of
photogrammetry algorithm, not about performing photogrammetry itself.

Let me know if I misunderstood your question; I'll be happy to correct my reply.

Pascal



From: Bald Eagle
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 08:20:00
Message: <web.60d86ce893de9b141f9dae3025979125@news.povray.org>
> I did, but not in POV-Ray. POV-Ray is a rendering tool: it takes as input a
> representation of a 3D scene and outputs a 2D image. Photogrammetry does the
> opposite: it takes 2D image(s) as input and outputs a representation of the
> corresponding 3D scene. So I don't see how one could implement photogrammetry
> in POV-Ray; the two serve opposite purposes.

Yes, but we don't always start with a blank slate and code a scene in a
vacuum.  We have all sorts of "input data" that we can operate on with
POV-Ray's SDL.  And so sometimes we need to take a 2D
image, reverse the process to obtain the relevant 3D information, and then use
that to render a new 2D image.

(1)
Take for example Francois LE COAT's very interesting work:
http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/

Forgive me if there are a lot of things I don't understand, don't properly
recall, or have wrong, but it seems to me that an aerial photograph of a
landscape has a lot of perspective distortion, and part of the photogrammetry
process would be correcting for that.

(2)
If I wanted to take a photograph of a piece of writing paper lying on a table
and use the portion of the image with the paper in it as an image_map, I'd
likely have a horribly non-rectangular area to cope with.  Presumably there are
photogrammetry-related tricks to deduce what sort of transformation matrix I
would need to apply in order to mimic reorienting the paper to be perpendicular
to the camera, and have straight edges with 90-degree corners.
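
(If I have the terminology right, that un-warping is a planar homography: a
3x3 matrix H, defined up to scale, acting on homogeneous pixel coordinates as
[x' y' w]^T = H [x y 1]^T, with the rectified pixel at (x'/w, y'/w). Mapping
the paper's four corners to the corners of a true rectangle pins down its 8
degrees of freedom. The divide by w makes it non-affine, so POV-Ray's 12-value
matrix transform can't express it; the rectification would have to happen in
an image tool before the image_map stage.)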

(3)
Let me preface this one with some information that will help you understand what
I'm talking about:

http://imcs.dvfu.ru/lib.int/docs/Programming/Graphics/Academic Press Graphics Gems Ii 1995.pdf
(use the Wayback Machine to get the PDF; pg. 181 (pg. 208 of the PDF))

View correlation.
"To combine computer-generated objects into a photographic scene, it is
necessary to render the objects from the same point of view as was used
to make the photo."


Rendering Synthetic Objects into Legacy Photographs
Kevin Karsch, Varsha Hedau, David Forsyth, Derek Hoiem
University of Illinois at Urbana-Champaign
{karsch1,vhedau2,daf,dhoiem}@uiuc.edu
https://web.archive.org/web/20190211102945/http://kevinkarsch.com/publications/sa11-lowres.pdf

or:
https://www.popsci.com/technology/article/2011-10/new-program-slips-super-accurate-false-images-existing-photographs

https://www.kevinkarsch.com/?page_id=445

https://arxiv.org/abs/2001.00986


There are plenty of examples of people creating POV-Ray renderings based on real
things - things they often only have photographs of.

But then I was thinking that there are probably actual POV-Ray renderings where
the source used to generate them has been lost, due to the code never being
posted, a HDD crash, or just getting - lost.  People might want to
recreate a scene, and having some basic information about the size and placement
of the objects in the scene would greatly speed up the writing of a new scene
file.

I know that I have done some work to recreate some of the documentation images
for things like isosurfaces, and didn't have the code for those images.   I had
to make educated guesses.  I "knew" the probable size of the object, or at least
its relative scale, and then I just needed to place the camera and light source
in the right place to get the image to look the same.   But determining where
the camera and light source are seems to me to be something that could be
calculated using "photogrammetric image cues" and well-established equations.

Let's take a photograph of a room.  It likely has tables and chairs and windows,
and these all have right angles and typical sizes.  It seems to me that there
might be a way to use photogrammetry to compute the 3D points of the corners and
rapidly generate a basic set of vectors for the proper sizing and placement of
everything in a basic rendered version of that same room.
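
(One concrete cue of that sort, if I recall it correctly: extend two sets of
parallel, mutually perpendicular room edges in the photo and they meet at two
vanishing points u and v. Measured relative to the image center, for square
pixels those satisfy u.v + f^2 = 0, so the focal length falls out as
f = sqrt(-u.v), and with f known the edge directions and corner positions can
be worked back out.)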

I know that they have cell phone apps that can generate a floor plan of your
house just by snapping a few photos of the rooms from different angles.

Also, the "augmented reality" apps for cell phones.



I think there are a lot of cool things that could be done with POV-Ray in the
"opposite direction" of what people normally do, and the results could be very
interesting, very educational, and give rise to tools that would probably find
everyday use for generating new scenes, and perhaps serve to attract new
users.

It was just my thought that if you do this "every day", you would have the
level of understanding to quickly implement at least some of the basics, whereas
I'd probably spend the next few months chasing my tail and tying equations up
into Gordian knots before I finally understood what I was doing wrong.



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 09:55:00
Message: <web.60d8825893de9b14a3e088d5e0f8c582@news.povray.org>
> ...

Thanks for the clarification. I'll reply within a few days.

Pascal


