POV-Ray : Newsgroups : povray.advanced-users : Facebook 3D posts (Messages 12 to 21 of 41)
From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 19 Jun 2021 15:43:59
Message: <60ce48ff$1@news.povray.org>
Two examples:

https://www.facebook.com/groups/fansofpovray/permalink/4181681918565652/

https://www.facebook.com/groups/fansofpovray/permalink/4185621304838380/

You need to have an FB account and be a member of the group in order to 
see them (I think).

One turned out okay; the other was a complete failure.


Mike



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 19 Jun 2021 16:42:52
Message: <60ce56cc$1@news.povray.org>
On 6/19/2021 3:43 PM, Mike Horvath wrote:
> the other was a complete failure.
> 

Well, not a total failure, but it came out pretty bad.


Mike



From: Bald Eagle
Subject: Re: Facebook 3D posts
Date: 26 Jun 2021 15:40:00
Message: <web.60d7824f93de9b141f9dae3025979125@news.povray.org>
"BayashiPascal" <bai### [at] gmailcom> wrote:

> This field is called 'photogrammetry' ... and it's a rabbit hole, I warn you (I
> work on it every day) !!

Have you implemented any of that work in POV-Ray?  I think even some of the
basics would encourage others to learn more about this intriguing field!



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 26 Jun 2021 22:00:00
Message: <web.60d7db9293de9b14a3e088d5e0f8c582@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "BayashiPascal" <bai### [at] gmailcom> wrote:
>
> > This field is called 'photogrammetry' ... and it's a rabbit hole, I warn you (I
> > work on it every day) !!
>
> Have you implemented any of that work in POV-Ray?  I think even some of the
> basics would encourage others to learn more about this intriguing field!

I did, but not in POV-Ray. POV-Ray is a rendering tool: it takes as input a
representation of a 3D scene and outputs a 2D image. Photogrammetry does the
opposite: it takes 2D image(s) as input and outputs a representation of the
corresponding 3D scene. So I don't see how one could implement photogrammetry
in POV-Ray; they serve two opposite purposes.

In this thread, Mike Horvath was initially asking how to create a depth image
using POV-Ray. The depth image is the input to the photogrammetry algorithm
used by Facebook, and would otherwise be acquired using a 3D scanner, for
example. This depth image can be generated as described in the link I gave
earlier. But this is really just about generating an input for a certain type
of photogrammetry algorithm, not about performing photogrammetry itself.
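
One common way to do it in SDL is roughly the following (a minimal sketch,
not necessarily the exact method from that link; CamPos, MaxDepth and MyScene
are placeholders to adapt to your own scene, and any explicit textures on the
geometry must be overridden):

// Depth-map pass: encode distance from the camera as brightness,
// with shading disabled so the lights cannot pollute the values.
#declare CamPos   = <0, 2, -5>;  // must match your camera's location
#declare MaxDepth = 20;          // distance at which the map reaches black

#declare DepthTex = texture {
  pigment {
    spherical                          // value 1 at the origin, 0 at radius 1
    color_map { [0 rgb 0] [1 rgb 1] }  // far = black, near = white
    scale MaxDepth                     // stretch the unit falloff to MaxDepth
    translate CamPos                   // center the falloff on the camera
  }
  // emission needs POV-Ray 3.7; use ambient 1 instead in older versions
  finish { emission 1 diffuse 0 ambient 0 }  // self-lit, unaffected by lights
}

object { MyScene texture { DepthTex } }  // hypothetical scene geometry

Note that this encodes radial distance from the camera rather than true
planar z-depth, which is usually close enough for this kind of input.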

Let me know if I misunderstood your question; I'll be happy to correct my reply.

Pascal



From: Bald Eagle
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 08:20:00
Message: <web.60d86ce893de9b141f9dae3025979125@news.povray.org>
> I did, but not in POV-Ray. POV-Ray is a rendering tool: it takes as input a
> representation of a 3D scene and outputs a 2D image. Photogrammetry does the
> opposite: it takes 2D image(s) as input and outputs a representation of the
> corresponding 3D scene. So I don't see how one could implement photogrammetry
> in POV-Ray; they serve two opposite purposes.

Yes, but we don't always start with a blank slate and start coding a scene in a
vacuum.  We have all sorts of "input data" that we can use POV-Ray's SDL
language to perform operations upon.  And so sometimes we need to take a 2D
image, reverse the process to obtain the relevant 3D information, and then use
that to render a new 2D image.

(1)
Take for example Francois LE COAT's very interesting work:
http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/

Forgive me if there are a lot of things I don't understand, properly recall, or
have wrong, but it seems to me that an aerial photograph of a landscape has a
lot of perspective distortion, and part of the photogrammetry process would be
correcting for that.

(2)
If I wanted to take a photograph of a piece of writing paper lying on a table
and use the portion of the image with the paper in it as an image_map, I'd
likely have a horribly non-rectangular area to cope with.  Presumably there are
photogrammetry-related tricks to deduce what sort of transformation matrix I
would need to apply in order to mimic reorienting the paper to be perpendicular
to the camera, and have straight edges with 90-degree corners.
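
If I understand the textbook treatment, the standard tool here is a planar
homography: the photographed quad and the rectified rectangle are related by

    \begin{pmatrix} x' \\ y' \\ 1 \end{pmatrix} \sim
    H \begin{pmatrix} x \\ y \\ 1 \end{pmatrix},
    \qquad H \in \mathbb{R}^{3 \times 3} \text{ defined up to scale (8 unknowns)},

so the four corner correspondences supply eight equations - exactly enough to
solve for H and recover the rectifying transform.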

(3)
Let me preface this one with some information that will help you understand what
I'm talking about:

http://imcs.dvfu.ru/lib.int/docs/Programming/Graphics/Academic Press Graphics Gems Ii 1995.pdf
(use the Wayback Machine to get the PDF; p. 181, which is p. 208 of the PDF)

View correlation.
"To combine computer-generated objects into a photographic scene, it is
necessary to render the objects from the same point of view as was used
to make the photo."
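
In equation form (the standard pinhole model, if I have it right), view
correlation amounts to recovering the unknowns on the right-hand side of

    s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} =
    K \, [\, R \mid \mathbf{t} \,]
    \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
    \qquad
    K = \begin{pmatrix} f & 0 & c_x \\ 0 & f & c_y \\ 0 & 0 & 1 \end{pmatrix},

i.e. the intrinsics K and the pose (R, t), from known correspondences between
points in the photo and points in the scene.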


Rendering Synthetic Objects into Legacy Photographs
Kevin Karsch Varsha Hedau David Forsyth Derek Hoiem
University of Illinois at Urbana-Champaign
{karsch1,vhedau2,daf,dhoiem}@uiuc.edu
https://web.archive.org/web/20190211102945/http://kevinkarsch.com/publications/sa11-lowres.pdf

or:
https://www.popsci.com/technology/article/2011-10/new-program-slips-super-accurate-false-images-existing-photographs

https://www.kevinkarsch.com/?page_id=445

https://arxiv.org/abs/2001.00986


There are plenty of examples of people creating POV-Ray renderings based on real
things - things they often only have photographs of.

But then I was thinking that there are probably actual POV-Ray renderings, but
somehow the source to generate those renderings has been lost due to the code
never being posted, an HDD crash, or just getting - lost.  People might want to
recreate a scene, and having some basic information about the size and placement
of the objects in the scene would greatly speed up the writing of a new scene
file.

I know that I have done some work to recreate some of the documentation images
for things like isosurfaces, and didn't have the code for those images.   I had
to make educated guesses.  I "knew" the probable size of the object, or at least
its relative scale, and then I just needed to place the camera and light source
in the right place to get the image to look the same.   But determining where
the camera and light source are seems to me to be something that could be
calculated using "photogrammetric image cues" and well-established equations.

Let's take a photograph of a room.  It likely has tables and chairs and windows,
and these all have right angles and typical sizes.  It seems to me that there
might be a way to use photogrammetry to compute the 3D points of the corners and
rapidly generate a basic set of vectors for the proper sizing and placement of
everything in a basic rendered version of that same room.
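
(From what I've read, those right angles are exactly what makes this
tractable: the vanishing points v_i of three mutually orthogonal edge
directions satisfy

    \mathbf{v}_i^{\top} \, \omega \, \mathbf{v}_j = 0 \quad (i \neq j),
    \qquad \omega = (K K^{\top})^{-1},

and with square pixels and the principal point at the image center, one such
constraint is already enough to recover the focal length; the directions
K^{-1} v_i then give the camera's orientation, all before a single corner is
triangulated.)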

I know that they have cell phone apps that can generate a floor plan of your
house just by snapping a few photos of the rooms from different angles.

Also, the "augmented reality" apps for cell phones.



I think there are a lot of cool things that could be done with POV-Ray in the
"opposite direction" of what people normally do, and the results could be very
interesting, very educational, and give rise to tools that would probably find
everyday usage for generating new scenes, and perhaps serve to attract new
users.

It was just my thought that if you do this "every day", you would have the
level of understanding to quickly implement at least some of the basics, whereas
I'd probably spend the next few months chasing my tail and tying equations up
into Gordian knots before I finally understood what I was doing wrong.



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 09:55:00
Message: <web.60d8825893de9b14a3e088d5e0f8c582@news.povray.org>
> ...

Thanks for the clarifications. I'll reply to you within a few days.

Pascal



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 13:44:34
Message: <60d8b902$1@news.povray.org>
On 6/27/2021 8:19 AM, Bald Eagle wrote:
> (1)
> Take for example Francois LE COAT's very interesting work:
>
> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
> 
> Forgive me if there are a lot of things I don't understand, properly recall, or
> have wrong, but it seems to me that an aerial photograph of a landscape has a
> lot of perspective distortion, and part of the photogrammetry process would be
> correcting for that.
> 

Unless you use an orthographic camera. But IIRC they only work well for 
extremely short distances.
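
Something along these lines (a minimal sketch; the extents are placeholder
values):

camera {
  orthographic
  location <0, 50, 0>  // directly above the terrain
  sky z                // roll reference, since we are looking straight down
  up z*30              // parallel-projection window is 30x30 POV units,
  right x*30           //   independent of the camera's height
  look_at <0, 0, 0>
}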


Mike



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 27 Jun 2021 13:53:34
Message: <60d8bb1e$1@news.povray.org>
On 6/27/2021 8:19 AM, Bald Eagle wrote:
> There are plenty of examples of people creating POV-Ray renderings based on real
> things - things they often only have photographs of.
> 
> But then I was thinking that there are probably actual POV-Ray renderings, but
> somehow the source to generate those renderings has been lost due to the code
> never being posted, an HDD crash, or just getting - lost.  People might want to
> recreate a scene, and having some basic information about the size and placement
> of the objects in the scene would greatly speed up the writing of a new scene
> file.
> 
> I know that I have done some work to recreate some of the documentation images
> for things like isosurfaces, and didn't have the code for those images.   I had
> to make educated guesses.  I "knew" the probable size of the object, or at least
> its relative scale, and then I just needed to place the camera and light source
> in the right place to get the image to look the same.   But determining where
> the camera and light source are seems to me to be something that could be
> calculated using "photogrammetric image cues" and well-established equations.
> 
> Let's take a photograph of a room.  It likely has tables and chairs and windows,
> and these all have right angles and typical sizes.  It seems to me that there
> might be a way to use photogrammetry to compute the 3D points of the corners and
> rapidly generate a basic set of vectors for the proper sizing and placement of
> everything in a basic rendered version of that same room.
> 
> I know that they have cell phone apps that can generate a floor plan of your
> house just by snapping a few photos of the rooms from different angles.
> 
> Also, the "augmented reality" apps for cell phones.
> 

Can (most likely) already be done using general-purpose external tools. 
At what stage is POV-Ray useful/necessary?


Mike



From: Francois LE COAT
Subject: Re: Facebook 3D posts
Date: 28 Jun 2021 15:30:26
Message: <60da2352$1@news.povray.org>
Hi,

Bald Eagle writes:
> Take for example Francois LE COAT's very interesting work:
> 
> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
> 
> Forgive me if there are a lot of things I don't understand, properly recall, or
> have wrong, but it seems to me that an aerial photograph of a landscape has a
> lot of perspective distortion, and part of the photogrammetry process would be
> correcting for that.

For the time being, I'm experimenting with modelling trajectories in 3D...

	<https://www.youtube.com/watch?v=uQEo7fD0GAo>

...but it is difficult work, because the public images given by NASA are
not of good quality. If the Ingenuity helicopter flying on Mars had been
driven with my trajectory modelling, it would have crashed. But that is what
I'm measuring, with the input data I could collect. This is strange!
I have no choice but to improve my computations further.

Best regards,

-- 

<http://eureka.atari.org/>



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 3 Jul 2021 23:10:00
Message: <web.60e125fa93de9b14a3e088d5e0f8c582@news.povray.org>
@baldeagle, sorry for the late reply.

> Yes, but we don't always start with a blank slate and start coding a scene in a
> vacuum.  We have all sorts of "input data" that we can use POV-Ray's SDL
> language to perform operations upon.  And so sometimes we need to take a 2D
> image, reverse the process to obtain the relevant 3D information, and then use
> that to render a new 2D image.

Yes, I agree.

> Forgive me if there are a lot of things I don't understand, properly recall, or
> have wrong, but it seems to me that an aerial photograph of a landscape has a
> lot of perspective distortion, and part of the photogrammetry process would be
> correcting for that.
> If I wanted to take a photograph of a piece of writing paper lying on a table
> and use the portion of the image with the paper in it as an image_map, I'd
> likely have a horribly non-rectangular area to cope with.  Presumably there are
> photogrammetry-related tricks to deduce what sort of transformation matrix I
> would need to apply in order to mimic reorienting the paper to be perpendicular
> to the camera, and have straight edges with 90-degree corners.

Yes, you're right.

> Let me preface this one with some information that will help you understand what
> I'm talking about:

Thanks for the links.

> There are plenty of examples of people creating POV-Ray renderings based on real
> things - things they often only have photographs of.

Yes, that's exactly one of the things I'm doing at work. (I've used
photogrammetry to create 3D models of archeological artifacts for a few
years, and I'm now working on the development of photogrammetric algorithms
applied to AI.)

> But then I was thinking that there are probably actual POV-Ray renderings, but
> somehow the source to generate those renderings has been lost due to the code
> never being posted, an HDD crash, or just getting - lost.  People might want to
> recreate a scene, and having some basic information about the size and placement
> of the objects in the scene would greatly speed up the writing of a new scene
> file.

Sure, that's a legitimate motivation.

> I know that I have done some work to recreate some of the documentation images
> for things like isosurfaces, and didn't have the code for those images.   I had
> to make educated guesses.  I "knew" the probable size of the object, or at least
> its relative scale, and then I just needed to place the camera and light source
> in the right place to get the image to look the same.   But determining where
> the camera and light source are seems to me to be something that could be
> calculated using "photogrammetric image cues" and well-established equations.
> Let's take a photograph of a room.  It likely has tables and chairs and windows,
> and these all have right angles and typical sizes.  It seems to me that there
> might be a way to use photogrammetry to compute the 3D points of the corners and
> rapidly generate a basic set of vectors for the proper sizing and placement of
> everything in a basic rendered version of that same room.

Yes, and no. Let's take the example of the coordinates of a corner of your
table. First, you have to know that it is a corner of the table. It can be
specified manually by the user, or found by automatic detection, which works
more or less well depending on the image.

Once you have the 2D coordinates in the image, in order to get the 3D
coordinates you actually need several images (and to identify that particular
corner in each of them) plus the intrinsic and extrinsic parameters of the
camera. If you have none of these, you can try algorithms like the one used
by Facebook, and you can see what kind of result to expect from Mike's tests.
If you have only some of them, there are other solutions, depending on what
you have and giving more or less accurate results. These are all very
complicated and I'm not going to explain them here.

Then, even if you have the 3D coordinates of the corners, how do you know
which corner relates to which other corner? You, as a human, can guess it by
looking at the picture, but if you expect something entirely automated, you
have to use or implement yet other very complex techniques. And let's say
you've found your way up to an approximate 3D model of the geometry (setting
aside the fact that you'll get one big mesh for the whole scene; if you want
to split it into logical entities or get POV-Ray primitives, you'll have to
find a way to identify and convert them) - next comes the texture.

Textures are influenced by the lights (ambient and local), the camera sensor,
the other textures through radiosity, even the eye of each individual (of
which I'm well aware, having deuteranopia)... As for finding the light
position in a real environment, I hope you understand by now that it's barely
feasible in a fully automated way. Even if your input image is a simple
rendered image with a single point light_source and no radiosity, would you
expect to guess the position from the shadows? Again, for you as a human,
with a high-level comprehension of the scene you're looking at, that makes
sense; but for an algorithm it really doesn't. Consider this simple question:
if you see a cylinder with a dark side on the left and a bright side on the
right, is it because the light was on the right, or because the cylinder
happens to have such a texture that it looks that way even when lit from the
left?
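
To give an idea of why the camera parameters matter so much, here is the
textbook linear triangulation step (a sketch, assuming two views with known
projection matrices P and P'): a corner X seen at (u, v) in one image
satisfies

    u \, (\mathbf{p}^{3\top}\mathbf{X}) - \mathbf{p}^{1\top}\mathbf{X} = 0,
    \qquad
    v \, (\mathbf{p}^{3\top}\mathbf{X}) - \mathbf{p}^{2\top}\mathbf{X} = 0,

where p^{iT} is the i-th row of P, and likewise for (u', v') and P'.
Stacking the four equations gives A X = 0, whose least-squares solution is
the singular vector of A for the smallest singular value. Without the
intrinsic and extrinsic parameters there is no P at all, hence no system to
solve.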

> I know that they have cell phone apps that can generate a floor plan of your
> house just by snapping a few photos of the rooms from different angles.

Yes, and from my experience these apps are all clumsy and inaccurate at best.
I know there are a lot of mesmerizing videos on YouTube about such apps. To
me, these are just clickbait and commercials. When you have to do it every
day under real conditions, looking for professional-level accuracy, it takes
a lot of experience and a lot of preprocessing and postprocessing work, which
are rarely mentioned in those videos.

> Also, the "augmented reality" apps for cell phones.

Augmented reality is a bit different in my opinion. While some applications
may create content that seems to be integrated into the 3D scene, they
actually all work at the 2D level, adding content directly to the displayed
image, possibly using some 3D cues from the underlying image. But I've never
worked with augmented reality techniques, so I may be wrong.

> I think there are a lot of cool things that could be done with POV-Ray in the
> "opposite direction" of what people normally do, and the results could be very
> interesting, very educational, and give rise to tools that would probably find
> everyday usage for generating new scenes, and perhaps serve to attract new
> users.
> It was just my thought that if you do this "every day", you would have the
> level of understanding to quickly implement at least some of the basics, whereas
> I'd probably spend the next few months chasing my tail and tying equations up
> into Gordian knots before I finally understood what I was doing wrong.

In a sense that's what I'm doing, having convinced my employer that we could
use a combination of POV-Ray and photogrammetry to help solve a certain AI
problem. But it's already been 1.5 years of R&D and it is still a work in
progress, so when it comes to "quickly implementing some of the basics", I
hope you now understand that even the basics are far from simple and
definitely not quick to implement. I'm really sorry, but writing tutorials on
the subject would be far too big a project for me to embark on (just replying
to your message took me a week!).

If you want to try photogrammetry yourself, I would first recommend using
already available software, with modest expectations about the results,
instead of trying to implement anything yourself. I've used Agisoft Metashape
(https://www.agisoft.com/) a lot and recommend it. On the free, open-source
side, have a look at Meshroom (https://alicevision.org/#meshroom). If you
really want to implement something yourself, Open3D could be a good starting
point, and they have some tutorials available
(http://www.open3d.org/docs/latest/tutorial/Basic/index.html).

Hope that helps!

Pascal



