POV-Ray : Newsgroups : povray.advanced-users : Facebook 3D posts
  Facebook 3D posts (Message 21 to 30 of 41)
From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 3 Jul 2021 23:10:00
Message: <web.60e125fa93de9b14a3e088d5e0f8c582@news.povray.org>
@baldeagle, sorry for the late reply.

> Yes, but we don't always start with a blank slate and start coding a scene in a
> vacuum.  We have all sort of "input data" that we can use for POV-Ray's SDL
> language to perform operations upon.  And so sometimes we need to take a 2D
> image, reverse the process to obtain the relevant 3D information, and then use
> that to render a new 2D image.

Yes, I agree.

> Forgive me if there's a lot of things I don't understand, properly recall, or
> have wrong, but it seems to me that an aerial photograph of a landscape has a
> lot of perspective distortion, and part of the photogrammetry process would be
> correcting for that.
> If I wanted to take a photograph of a piece of writing paper lying on a table
> and use the portion of the image with the paper in it as an image_map, I'd
> likely have a horribly non-rectangular area to cope with.  Presumably there are
> photogrammetry-related tricks to deduce what sort of transformation matrix I
> would need to apply in order to mimic reorienting the paper to be perpendicular
> to the camera, and have straight edges with 90-degree corners.

Yes, you're right.

> Let me preface this one with some information that will help you understand what
> I'm talking about:

Thanks for the links.

> There are plenty of examples of people creating POV-Ray renderings based on real
> things - things they often only have photographs of.

Yes, that's exactly one of the things I'm doing at work. (I've used
photogrammetry to create 3D models of archaeological artifacts for a few years,
and I'm now working on the development of photogrammetric algorithms applied to
AI.)

> But then I was thinking that there are probably actual POV-Ray renderings, but
> somehow the source to generate those renderings have been lost due to the code
> never being posted, a HDD crash, or just getting - lost.  People might want to
> recreate a scene, and having some basic information about the size and placement
> of the objects in the scene would greatly speed up the writing of a new scene
> file.

Sure, that's a legitimate motivation.

> I know that I have done some work to recreate some of the documentation images
> for things like isosurfaces, and didn't have the code for those images.   I had
> to make educated guesses.  I "knew" the probable size of the object, or at least
> its relative scale, and then I just needed to place the camera and light source
> in the right place to get the image to look the same.   But determining where
> the camera and light source are seems to me to be something that could be
> calculated using "photogrammetric image cues" and well-established equations.
> Let's take a photograph of a room.  It likely has tables and chairs and windows,
> and these all have right angles and typical sizes.  It seems to me that there
> might be a way to use photogrammetry to compute the 3D points of the corners and
> rapidly generate a basic set of vectors for the proper sizing and placement of
> everything in a basic rendered version of that same room.

Yes, and no. Let's take the example of the coordinates of a corner of your
table. First, you have to know that it is a corner of the table. It can be
specified manually by the user, or found by automatic detection, which works
more or less well depending on the image. Once you have the 2D coordinates in
the image, in order to get the 3D coordinates you actually need several images
(and to identify that particular corner in each of them) plus the intrinsic and
extrinsic parameters of the camera. If you have none of these, you can try
algorithms like the one used by Facebook, and you can see from Mike's tests what
kind of result to expect. If you have only part of them, there are other
solutions, depending on what you have and giving more or less accurate results.
These are all very complicated and I'm not going to explain them here.

Then, even if you have the 3D coordinates of the corners, how do you know which
corner relates to which other corner? You, as a human, can guess it by looking
at the picture, but if you expect something entirely automated, you have to use
or implement yet more very complex techniques. Then, let's say you've found your
way to an approximate 3D model of the geometry (setting aside the fact that
you'll get one big mesh for the whole scene; if you want to split it into
logical entities or get POV-Ray primitives, you'll have to find a way to
identify and convert them), next comes the texture. Textures are influenced by
the lights (ambient and local), the camera sensor, each other through radiosity,
even the eye of each individual viewer (of which I'm well aware, having
deuteranopia)...

About finding the light position: in a real environment, I hope you understand
by now that it's barely feasible in a fully automated way. Even if your input
image is a simple rendered image with a single point light_source and no
radiosity, would you expect to guess the position from the shadows? Again, for
you as a human, with a high-level comprehension of the scene you're looking at,
it makes sense, but for an algorithm it really doesn't. Consider this simple
question: if you see a cylinder with a dark side on the left and a bright side
on the right, is it because the light was on the right, or because the cylinder
happens to have such a texture that it looks that way even when lit from the
left?
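
Just to make the role of those camera parameters concrete, here is the simplest
possible sketch of the triangulation step (Python + NumPy, toy numbers, nothing
taken from my actual work): one corner seen in two images whose 3x4 projection
matrices P = K [R | t] are already known. Real pipelines add calibration,
feature matching, outlier rejection and bundle adjustment on top of this.

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two views."""
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations in the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A X = 0: the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # back to Euclidean coordinates

# Toy example: made-up intrinsics, one camera at the origin, a second one
# translated along x, both seeing the point <0, 0, 5>.
K  = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.0, 0.0, 5.0, 1.0])
x1, x2 = P1 @ X_true, P2 @ X_true
print(triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2]))   # ~ [0, 0, 5]

Without K, R and t (or several images), there simply isn't enough information in
that system of equations, which is why the one-photo "magic" apps have to guess
so much.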

> I know that they have cell phone apps that can generate a floor plan of your
> house just by snapping a few photos of the rooms from different angles.

Yes, and in my experience these apps are all clumsy and inaccurate at best. I
know there are a lot of mesmerizing videos on YouTube about such apps. To me,
these are just clickbait and commercials. When you have to do it every day
under real conditions, looking for professional-level, accurate results, it
takes a lot of experience, preprocessing and postprocessing work, which is
rarely spoken of in those videos.

> Also, the "augmented reality" apps for cell phones.

Augmented reality is a bit different, in my opinion. While some applications may
create content that seems to be integrated into the 3D scene, they actually all
work at the 2D level, adding content directly to the displayed image, possibly
using some 3D cues from the underlying image. But I've never worked with
augmented reality techniques, so I may be wrong.

> I think there are a lot of cool things that could be done with POV-Ray in the
> "opposite direction" of what people normally do, and the results could be very
> interesting, very educational, and give rise to tools that would probably find
> everyday usage for generating new scenes, and perhaps serve to attract new
> users.
> It was just my thought that if you do this "every day", that you would have the
> level of understanding to quickly implement at least some of the basics, whereas
> I'd probably spend the next few months chasing my tail and tying equations up
> into Gordian knots before I finally understood what I was doing wrong.

In a sense that's what I'm doing, having convinced my employer that we could use
a combination of POV-Ray and photogrammetry to help solve a certain AI problem.
But it's already been 1.5 years of R&D and it is still a WIP, so when it comes
to "quickly implement some of the basics", I hope you now understand that even
the basics are far from simple and definitely not quick to implement. I'm really
sorry, but writing tutorials on the subject would be way too big a project for
me to embark on (just replying to your message took me a week!).

If you want to try photogrammetry yourself, I would first recommend using
already available software, with modest expectations about the results, instead
of trying to implement anything yourself. I've used Agisoft Metashape
(https://www.agisoft.com/) a lot and recommend it. On the free, open-source
side, have a look at Meshroom (https://alicevision.org/#meshroom). If you really
want to implement something yourself, Open3D could be a good starting point, and
they have some tutorials available
(http://www.open3d.org/docs/latest/tutorial/Basic/index.html).

Hoping that will help!

Pascal



From: Thomas de Groot
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 02:20:13
Message: <60e1531d@news.povray.org>
Thanks for that - very detailed - explanation. It reminds me of two 
things (or rather three):

1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
ago, based on a photograph by Sabine Weiss in 1953. It was just done by
trial and error of course, and a hell of a challenge with all kinds of
assumptions. Nothing to do with photogrammetry of course, but I
appreciate your caveats about any "easy magic" ;-).

2) I have seen on a couple of occasions (on TV) archaeologists take a lot
of photographs of an object, from all kinds of angles, and later combine
those into a 3D model (with software of course). I saw that done in
particular on the terracotta army in China. Closer to home, geology
students recently used a drone to photograph the walls of a quarry in the
same manner, and assembled them into a 3D model of the quarry. Fascinating
stuff, and relatively cheap to implement, especially for students, I
understood.

3) I have been interested in archaeology for most of my life and so came
quite early across the use of photogrammetry there. If I remember
correctly, it was used by Unesco during the construction of the Assouan
Dam in Egypt to move the Abou Simbel temple to a higher position. I was a
reader of 'Archeologia' at the time.

-- 
Thomas



From: Mike Horvath
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 05:09:22
Message: <60e17ac2$1@news.povray.org>
On 7/4/2021 2:20 AM, Thomas de Groot wrote:
> 2) I have seen on a couple of occasions (on TV), archaeologists make a 
> lot of photographs of an object, under all kind of angles, and later 
> combine those into a 3d model (with software of course). I saw that done 

> students recently used a drone to photograph the walls of a quarry in 
> the same manner, and assembled them into a 3d model of the quarry. 
> Fascinating stuff, and relatively cheap to implement, especially for 
> students I understood.
> 

I wonder *how* cheap it really is. Probably *not* cheap, in terms of the 
work and expertise involved.


Mike



From: Thomas de Groot
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 08:22:20
Message: <60e1a7fc$1@news.povray.org>
On 4-7-2021 at 11:09, Mike Horvath wrote:
> On 7/4/2021 2:20 AM, Thomas de Groot wrote:
>> 2) I have seen on a couple of occasions (on TV), archaeologists make a 
>> lot of photographs of an object, under all kind of angles, and later 
>> combine those into a 3d model (with software of course). I saw that 

>> geology students recently used a drone to photograph the walls of a 
>> quarry in the same manner, and assembled them into a 3d model of the 
>> quarry. Fascinating stuff, and relatively cheap to implement, 
>> especially for students I understood.
>>
> 
> I wonder *how* cheap it really is. Probably *not* cheap, in terms of the 
> work and expertise involved.
> 
I don't really know. What I understood, in particular from the Chinese
example, was that the hardware came 'cheap', as only a perfectly ordinary
digital camera was needed and no sophisticated laser-controlled equipment,
plus a simple stepladder to get around the statues. The same applied to
the students with their drone. Concerning the software, I have no idea.
The results looked good, however, and I doubt that they had any
particularly high expertise in the matter.

-- 
Thomas



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 08:40:00
Message: <web.60e1ab4893de9b14a3e088d5e0f8c582@news.povray.org>
> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
> trial and error of course, and a hell of a challenge with all kind of
> assumptions. Nothing to do with photogrammetry of course, but I
> appreciate your caveats about any "easy magic" ;-).

I can't recall it from the name, nor find it with Google. Would you have a
link? I would certainly enjoy seeing it again, as always with your scenes :-)

> 2) I have seen on a couple of occasions (on TV), archaeologists make a
> lot of photographs of an object, under all kind of angles, and later
> combine those into a 3d model (with software of course). I saw that done
>   in particular on the terracotta army in China. Closer to home, geology
> students recently used a drone to photograph the walls of a quarry in
> the same manner, and assembled them into a 3d model of the quarry.
> Fascinating stuff, and relatively cheap to implement, especially for
> students I understood.

Yes, it's heavily used in archaeology, though generally only for the most
important pieces, as it is very time consuming and sometimes challenging. You
will probably enjoy this talk by an archaeologist about their struggles to
produce models of obsidian artifacts:
https://www.youtube.com/watch?v=g0YaWDrl5qI
I didn't mention it in my previous post, but any reflective or transparent
texture is also an immediate 'no go'. It completely confuses all current
algorithms.

> 3) I have been interested in archaeology for most of my life and so came
> quite early across the use of photogrammetry there. If I remember well,
> it was used by Unesco during the construction of the Assouan Dam in
> Egypt to move the Abou Simbel temple to a higher position. I was a
> reader of 'Archeologia' at the time.

I had planned to become a paleobiologist until I entered university, where I was
reoriented toward computer science. That was probably wise advice, but I always
wonder what would have become of that other me. So I was really happy when I had
the chance to work for archaeologists a few years ago.



From: BayashiPascal
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 08:45:00
Message: <web.60e1acb093de9b14a3e088d5e0f8c582@news.povray.org>
Thomas de Groot <tho### [at] degrootorg> wrote:
> On 4-7-2021 at 11:09, Mike Horvath wrote:
> > On 7/4/2021 2:20 AM, Thomas de Groot wrote:
> >> 2) I have seen on a couple of occasions (on TV), archaeologists make a
> >> lot of photographs of an object, under all kind of angles, and later
> >> combine those into a 3d model (with software of course). I saw that
>
> >> geology students recently used a drone to photograph the walls of a
> >> quarry in the same manner, and assembled them into a 3d model of the
> >> quarry. Fascinating stuff, and relatively cheap to implement,
> >> especially for students I understood.
> >>
> >
> > I wonder *how* cheap it really is. Probably *not* cheap, in terms of the
> > work and expertise involved.
> >
> I don't really know. What I understood, in particular from the Chinese
> example, was that the hardware came 'cheap' as only a perfectly common
> digital camera was needed and no sophisticated laser-controlled stuff.
> And a simple stepladder in addition, to get around the statues. The same
> applied for the students with their drone. Concerning the software, I
> have no idea. The results looked good however, and I doubt that they had
> any particularly high expertise in the matter.
>
> --
> Thomas


You can indeed produce good results with a standard camera. The software itself,
if not freeware, is quite expensive, but if they were students it was probably
provided by their university, and that's surely not much compared to a
university budget. As for expertise, if they were students, that very probably
means they had teachers with a high level of expertise to guide them.



From: Bald Eagle
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 10:15:00
Message: <web.60e1c22493de9b141f9dae3025979125@news.povray.org>
"BayashiPascal" <bai### [at] gmailcom> wrote:
> @baldeagle, sorry for the late reply.

No apologies necessary, as it's a complicated topic, and I'm sure you have
PLENTY to keep you busy IRL.

> Yes, and no. Let's take the example of the coordinates of the corner of your
> table. First, you have to know that's a corner of the table. It can be specified
> by the user manually, or done by automatic detection which works more or less
> depending on the image. Once you have the 2D coordinates in the image, in order
> to get the 3D coordinates you actually need several images (and identify that
> particular corner in each of them) and the intrinsic and extrinsic parameters of
> the camera.

This is certainly what I would have expected from my (all too brief) readings of
the photogrammetry literature, which is why I was so surprised when I came
across the work of Kevin Karsch.  It seems like "magic" - certainly very bold
claims - and I was wondering if
a) you could look deeper into his work, with your education and experience in
the area, and see how much of what he claims to accomplish is "embellished",
and
b) if he and his colleagues/coworkers/students have already done a lot of the
puzzling out and heavy lifting, perhaps his work might provide you with tools
you're not yet familiar with that could help in your own work.

I certainly didn't want to send you "off into the weeds" without thinking that
there might be some personal/professional benefit for you in the process.

> Then, even if you have your
> 3D coordinates of corners, how do you know which corner relates to which other
> corner ?

Dr. Karsch apparently has most of that figured out, and even has the code
(mostly .js) posted on his site.  He welcomes anyone to get it up and running on
a server again, and if we have a web guru amongst us, I can envision using his
work as a basis for better understanding what he did, converting some of that
JavaScript to SDL / source code, and perhaps even spurring some further
development of a POV-Ray modeler.

I am not worried about the texture, as that is separate from the geometry.

> About finding the
> light position, in a real environment I hope you understand now that's barely
> feasible in a fully automated way.

Again, this is part of the series of bold claims that Karsch makes and
apparently demonstrates in his papers, software, and videos.  (He does mention
that a database is used, and references an open source DB)


> Yes, and from my experience these apps are all clumsy and inaccurate at best. I
> know there are a lot of mesmerizing videos on Youtube about such apps. To me,
> these are just clickbait and commercials. When you have to do it every days
> under real conditions and looking for professional level accurate results, it
> takes a lot of experience, preprocessing and postprocessing work, which are
> rarely spoken of in those videos.

Right - "the more you know"...  And I'm fine with a lot of what I've come across
in the past few years being declared to be fantastic claims that mislead the
reader/viewer.

> In a sense that's what I'm doing, having convinced my employer that we could use
> a combination of POV-Ray and photogrammetry to help solving a certain AI
> problem. But it's already been 1.5 years of R&D and still a WIP, so when it
> comes to "quickly implement some of the basics", I hope you now understand that
> even the basics are far from simple and definitely not quick to implement.

Certainly.  I understand that, and was in no way expecting a tutorial of any
kind.

> I'm
> really sorry but writing tutorials on the subject would be a way too big project
> for me to embark on (just replying to your message took me a week!).


Besides the explicitly photogrammetric aspect of the above, where the first step
is acquiring key vertices of the geometry in the image, I was thinking it would
be much simpler, more straightforward, and immediately useful if any POV-Ray
work were limited to what to do with vertex data already in the user's
possession.

What if I took the uv coordinates or the interpolated <x,y,z> coordinates from
the screen position-finder macro

http://news.povray.org/povray.binaries.images/thread/%3C5baddad1%40news.povray.org%3E/?mtop=424795
or the modifications to the screen.inc macros
http://news.povray.org/povray.binaries.scene-files/thread/%3C4afccd8a%241%40news.povray.org%3E/

and worked from there, without having to worry about all the messy data
extraction from a photograph?   I think the basic geometric transformations
would be instructive and useful all on their own.
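
For instance, going from a screen coordinate back to a point in the scene is
just a ray-plane intersection once you assume a camera model. Here's a rough
sketch (Python/NumPy rather than SDL, with simplified, assumed camera
conventions - not how the screen.inc macros actually do it, so treat it as
pseudocode):

import numpy as np

def screen_to_world(cam_loc, cam_dir, cam_right, cam_up, sx, sy,
                    plane_normal, plane_d):
    """Cast the ray through screen point (sx, sy), with (0, 0) at the image
    centre and sx, sy in [-0.5, 0.5], and intersect it with the plane
    plane_normal . P = plane_d (POV-Ray's plane convention)."""
    ray_dir = cam_dir + sx * cam_right + sy * cam_up
    denom = plane_normal @ ray_dir
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = (plane_d - plane_normal @ cam_loc) / denom
    if t < 0:
        return None                      # plane is behind the camera
    return cam_loc + t * ray_dir

# Example: camera at <0, 2, -5>, tilted slightly downward, 4:3 aspect ratio.
# The point picked at the image centre is recovered on the floor plane y = 0.
loc   = np.array([0.0, 2.0, -5.0])
direc = np.array([0.0, -0.3, 1.0])
right = np.array([1.33, 0.0, 0.0])
up    = np.array([0.0, 1.0, 0.0])
print(screen_to_world(loc, direc, right, up, 0.0, 0.0,
                      np.array([0.0, 1.0, 0.0]), 0.0))   # ~ <0, 0, 1.67>

The same handful of vectors, run the other way, is all the "projection" there
is, so even just that would make a nice worked example.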

I mean, you're already familiar with the issues regarding the FieldCam macro.
http://news.povray.org/povray.binaries.images/thread/%3C5fb22ee0%40news.povray.org%3E/?mtop=432106

I mean, I was able to measure and guess when I was recreating this challenging
piece:
https://secureservercdn.net/45.40.146.28/ik7.9d0.myftpupload.com/wp-content/uploads/revslider/home-part/hero_part_with_circles.png

But perhaps there is a way to embed specially colored pixels into an image at
key reference points, like they do with steganography?  (I'm thinking of
testing, and of future recovery of scene geometry if it is lost.)  Then a scan
for vertices could be done on the raw pixel data, and it would be even easier.
Also, a user could mark up a copy of the photo with an image editor before a
macro processed it.
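
Something along these lines (Python, with a made-up marker colour and file name,
and assuming Pillow is available - just to illustrate the idea):

import numpy as np
from PIL import Image

MARKER = (255, 0, 255)        # a reserved, "impossible" magenta

# Load the marked-up image and find every pixel of exactly the marker colour.
img = np.array(Image.open("scene_with_markers.png").convert("RGB"))
ys, xs = np.where(np.all(img == MARKER, axis=-1))
for x, y in zip(xs, ys):
    print(f"marker at pixel <{x}, {y}>")

The printed pixel coordinates could then be fed to an SDL macro (or to the
screen-to-world transform above) instead of eyeballing them.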

Just some ideas for what we could do with existing POV-Ray technology and some
educated application of matrix transforms.


Thanks for your caveats and opinions.   Hopefully some of what I posted can help
you in your own work.  I am always interested in hearing about what challenges
you experience in trying to solve your own geometric problems - it's an
interesting field.

- Bill



From: Thomas de Groot
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 10:59:51
Message: <60e1cce7$1@news.povray.org>
On 4-7-2021 at 14:36, BayashiPascal wrote:
>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
>> trial and error of course, and a hell of a challenge with all kind of
>> assumptions. Nothing to do with photogrammetry of course, but I
>> appreciate your caveats about any "easy magic" ;-).
> 
> I can't recall it from the name, neither find it with Google. Would you have a
> link ? I would certainly enjoy seeing it again, as usual with your scenes :-)
> 
It was made for one of the TC-RTC Challenges. Since Stephen and I closed
that down, it has also disappeared from the web.

I shall repost the image with some comments I wrote at the time; I hope I
can find it again in my archives...

>> 2) I have seen on a couple of occasions (on TV), archaeologists make a
>> lot of photographs of an object, under all kind of angles, and later
>> combine those into a 3d model (with software of course). I saw that done
>>    in particular on the terracotta army in China. Closer to home, geology
>> students recently used a drone to photograph the walls of a quarry in
>> the same manner, and assembled them into a 3d model of the quarry.
>> Fascinating stuff, and relatively cheap to implement, especially for
>> students I understood.
> 
> Yes, it's heavily used in archaeology, generally only for the most important
> pieces as it is very time consuming, and sometime challenging. You will probably
> enjoy this speech of an archaeologist talking about their struggles to produce
> models of obsidian artifacts:
> https://www.youtube.com/watch?v=g0YaWDrl5qI
> I haven't mention it in my previous post but, any reflective or transparent
> texture is also an immediate 'no go'. It completely confuses all current
> algorithms.
> 
Yes, I can understand that!

>> 3) I have been interested in archaeology for most of my life and so came
>> quite early across the use of photogrammetry there. If I remember well,
>> it was used by Unesco during the construction of the Assouan Dam in
>> Egypt to move the Abou Simbel temple to a higher position. I was a
>> reader of 'Archeologia' at the time.
> 
> I had plan to become a paleobiologist until I entered university where I've been
> reoriented toward computer science. This has probably been a wise advice but I
> always wonder what would have become that other me. So, I've been really happy
> when I had the chance to work for archaeologists a few years ago.
> 
Who knows? I wonder sometimes about my life in an alternate universe ;-]
But life sometimes has a high degree of serendipity.

-- 
Thomas



From: Thomas de Groot
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 11:32:43
Message: <60e1d49b$1@news.povray.org>
On 4-7-2021 at 14:36, BayashiPascal wrote:
>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
>> trial and error of course, and a hell of a challenge with all kind of
>> assumptions. Nothing to do with photogrammetry of course, but I
>> appreciate your caveats about any "easy magic" ;-).
> 
> I can't recall it from the name, neither find it with Google. Would you have a
> link ? I would certainly enjoy seeing it again, as usual with your scenes :-)
> 
There is this:

http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553

-- 
Thomas



From: Thomas de Groot
Subject: Re: Facebook 3D posts
Date: 4 Jul 2021 11:46:26
Message: <60e1d7d2$1@news.povray.org>
On 4-7-2021 at 17:32, Thomas de Groot wrote:
> On 4-7-2021 at 14:36, BayashiPascal wrote:
>>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
>>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
>>> trial and error of course, and a hell of a challenge with all kind of
>>> assumptions. Nothing to do with photogrammetry of course, but I
>>> appreciate your caveats about any "easy magic" ;-).
>>
>> I can't recall it from the name, neither find it with Google. Would 
>> you have a
>> link ? I would certainly enjoy seeing it again, as usual with your 
>> scenes :-)
>>
> There is this:
>
> http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553
>
There is also this. A later version:

http://news.povray.org/povray.binaries.images/thread/%3C5b7a6cb0%40news.povray.org%3E/?ttop=428926&toff=150&mtop=424347

-- 
Thomas



