  Re: Facebook 3D posts  
From: Bald Eagle
Date: 4 Jul 2021 10:15:00
Message: <web.60e1c22493de9b141f9dae3025979125@news.povray.org>
"BayashiPascal" <bai### [at] gmailcom> wrote:
> @baldeagle, sorry for the late reply.

No apologies necessary, as it's a complicated topic, and I'm sure you have
PLENTY to keep you busy IRL.

> Yes, and no. Let's take the example of the coordinates of the corner of your
> table. First, you have to know that it's a corner of the table. It can be specified
> by the user manually, or done by automatic detection, which works more or less well
> depending on the image. Once you have the 2D coordinates in the image, in order
> to get the 3D coordinates you actually need several images (and identify that
> particular corner in each of them) and the intrinsic and extrinsic parameters of
> the camera.
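
Just to make sure I'm following the math: once each calibrated photo contributes a
camera location and a ray through the identified corner, that "several images plus
intrinsic and extrinsic parameters" step seems to reduce to intersecting those rays.
Here's a minimal SDL sketch of only that last step; every input is an assumption on
my part, since the calibration itself has to come from elsewhere:

// Closest-approach "triangulation" of two camera rays.
// P1, P2 = camera locations; D1, D2 = ray directions through the same
// identified corner in each image.  These have to come from the
// calibrated cameras; they are placeholders here.
#macro Intersect_Two_Rays(P1, D1, P2, D2)
    #local W  = P1 - P2;
    #local AA = vdot(D1, D1);
    #local BB = vdot(D1, D2);
    #local CC = vdot(D2, D2);
    #local DD = vdot(D1, W);
    #local EE = vdot(D2, W);
    #local Denom = AA*CC - BB*BB;    // approaches 0 as the rays become parallel
    #local S1 = (BB*EE - CC*DD) / Denom;
    #local S2 = (AA*EE - BB*DD) / Denom;
    // midpoint of the shortest segment joining the two rays
    ((P1 + S1*D1) + (P2 + S2*D2)) / 2
#end

// e.g.  #declare Corner = Intersect_Two_Rays(CamA_Loc, CamA_Dir, CamB_Loc, CamB_Dir);

With more than two views, I suppose you'd average (or least-squares) the pairwise
results rather than trust any single pair.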

This is certainly what I would have expected from my (all too brief) readings of
the photogrammetry literature, which is why I was so surprised when I came
across the work of Kevin Karsch.  It seems like "magic" - certainly very bold
claims, and I was wondering if
a) you could look deeper into his work, with your education and experience in
the area and see how much of what he claims to accomplish is "embellished"
and
b) if he and his colleagues/coworkers/students have done a lot of the puzzling
out and heavy lifting already, then perhaps he might provide you with tools that
you're not yet familiar with and that would help you in your own work.

I certainly didn't want to send you "off into the weeds" without thinking that
there might be some personal/professional benefit for you in the process.

> Then, even if you have your
> 3D coordinates of corners, how do you know which corner relates to which other
> corner ?

Dr Karsch apparently has most of that figured out, and even has the code (mostly
.js) posted on his site.  He welcomes anyone willing to get it up and running on a
server again, and if we have a web guru amongst us, I can envision using his
work as a basis for better understanding what he did, converting some of that
javascript to SDL / source code, and perhaps even spurring some further
development of a POV-Ray modeler.

I am not worried about the texture, as that is separate from the geometry.

> About finding the
> light position, in a real environment I hope you understand now that it's barely
> feasible in a fully automated way.

Again, this is part of the series of bold claims that Karsch makes and
apparently demonstrates in his papers, software, and videos.  (He does mention
that a database is used, and references an open-source DB.)


> Yes, and from my experience these apps are all clumsy and inaccurate at best. I
> know there are a lot of mesmerizing videos on Youtube about such apps. To me,
> these are just clickbait and commercials. When you have to do it every day
> under real conditions, looking for professional-level accurate results, it
> takes a lot of experience, preprocessing and postprocessing work, which are
> rarely mentioned in those videos.

Right - "the more you know"...  And I'm fine with a lot of what I've come across
in the past few years being declared to be fantastic claims that mislead the
reader/viewer.

> In a sense that's what I'm doing, having convinced my employer that we could use
> a combination of POV-Ray and photogrammetry to help solve a certain AI
> problem. But it's already been 1.5 years of R&D and still a WIP, so when it
> comes to "quickly implement some of the basics", I hope you now understand that
> even the basics are far from simple and definitely not quick to implement.

Certainly.  I understand that, and was in no way expecting a tutorial of any
kind.

> I'm
> really sorry but writing tutorials on the subject would be a way too big project
> for me to embark on (just replying to your message took me a week!).


Besides the explicitly photogrammetric aspect of the above, where the first step
is acquiring key vertices of the geometry in the image, I was thinking it would
be much simpler, more straightforward, and immediately useful if any POV-Ray work
were limited to what to do with vertex data already in the user's possession.

What if I took the uv coordinates or the interpolated <x,y,z> coordinates from
the screen position-finder macro

http://news.povray.org/povray.binaries.images/thread/%3C5baddad1%40news.povray.org%3E/?mtop=424795
or the modifications to the screen.inc macros
http://news.povray.org/povray.binaries.scene-files/thread/%3C4afccd8a%241%40news.povray.org%3E/

and worked from there, without having to worry about all the messy data
extraction from a photograph?   I think the basic geometric transformations
would be instructive and useful all on their own.
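
For instance (and this is only a sketch, with the camera values, the plane, and the
screen position below all being placeholders of my own), going the other direction,
from a normalized screen position back out into the scene, only needs the camera
vectors and trace():

// From a normalized screen position (U,V in 0..1, V measured upward from
// the bottom edge) to a world-space ray for a plain perspective camera.
// These camera vectors are placeholders; they must match the scene camera.
#declare CamLoc   = <0, 1.5, -4>;
#declare CamDir   = z;
#declare CamRight = x*image_width/image_height;
#declare CamUp    = y;

#macro Screen_To_Ray(U, V)
    vnormalize(CamDir + (U - 0.5)*CamRight + (V - 0.5)*CamUp)
#end

// Recover a 3-D point by intersecting the ray with known geometry,
// e.g. a (made-up) table-top plane:
#declare TableTop = plane { y, 0.75 }
#declare Norm = <0, 0, 0>;
#declare Hit  = trace(TableTop, CamLoc, Screen_To_Ray(0.62, 0.40), Norm);
#if (vlength(Norm) > 0)    // trace() sets Norm to <0,0,0> on a miss
    sphere { Hit, 0.02 pigment { rgb <1, 0, 0> } }   // mark the recovered point
#end

If I've read them right, that's essentially the same mapping the screen
position-finder macros use, just run in reverse.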

I mean, you're already familiar with the issues regarding the FieldCam macro.
http://news.povray.org/povray.binaries.images/thread/%3C5fb22ee0%40news.povray.org%3E/?mtop=432106

I mean, I was able to measure and guess when I was recreating this challenging
piece:
https://secureservercdn.net/45.40.146.28/ik7.9d0.myftpupload.com/wp-content/uploads/revslider/home-part/hero_part_with_circles.png

But perhaps there is a way to embed specially colored pixels into an image at key
reference points, like they do with steganography?  (I'm thinking of testing, and
of future recovery of the scene geometry if it is lost.)  Then a scan for vertices
could be done on the raw pixel data, and it would be even easier.  Also, a
user could mark a copy of the photo with an image editor before a macro
processed it.
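
A rough sketch of what that scan could look like on the POV-Ray side (the file name,
marker color, tolerance, and pixel dimensions are all invented for illustration):

#include "functions.inc"    // for the eval_pigment() macro

// Step across the image_map's unit square and report every pixel whose
// color matches the embedded "marker" color.
#declare Photo     = pigment { image_map { png "marked_photo.png" once } }
#declare Marker    = <1, 0, 1>;         // pure magenta as the key color
#declare Tolerance = 0.02;
#declare W = 320;
#declare H = 240;                       // pixel size of the source image

#declare PY = 0;
#while (PY < H)
    #declare PX = 0;
    #while (PX < W)
        // image_map covers the unit square in x-y; sample the pixel centre
        #declare C = eval_pigment(Photo, <(PX + 0.5)/W, (PY + 0.5)/H, 0>);
        #if (vlength(<C.red, C.green, C.blue> - Marker) < Tolerance)
            #debug concat("marker pixel at ", str(PX, 0, 0), ", ", str(PY, 0, 0), "\n")
        #end
        #declare PX = PX + 1;
    #end
    #declare PY = PY + 1;
#end

Slow in a #while loop for a big image, of course, but fine for a handful of
reference points, and a user-marked copy of the photo would work the same way.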

Just some ideas for what we could do with existing POV-Ray technology and some
educated application of matrix transforms.


Thanks for your caveats and opinions.   Hopefully some of what I posted can help
you in your own work.  I am always interested in hearing about what challenges
you experience in trying to solve your own geometric problems - it's an
interesting field.

- Bill

