POV-Ray : Newsgroups : povray.advanced-users : Facebook 3D posts : Re: Facebook 3D posts
From: BayashiPascal
Date: 10 Jul 2021 22:10:00
Message: <web.60ea51d793de9b14a3e088d5e0f8c582@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> ...

Hi again Bill,
I'll only reply about the paper of Dr Karsch and leave the reply to your other
comments for another (hypothetical) day. Not that they don't interest me, not at
all, it's just a time problem...

> a) you could look deeper into his work, with your education and experience in
> the area and see how much of what he claims to accomplish is "embellished"

I've read the paper and had a look at the website of Dr Karsch.
The results introduced in the paper are indeed very good, and certainly not
embellished in any way. You just have to understand the limitations of their
method. It relies heavily on user guidance to provide sufficient information
about the scene geometry, lighting, interaction between the scene and the added
objects, as well as user guidance for correction and supplementation of
automatic estimates. It produces a very coarse representation of the scene
essentially limited to planes, and any more complex object must be modeled with
an external 3D modeling tool and imported, or manually segmented for occlusion
surfaces. It is not applicable to all kinds of scenes. It produces qualitatively
good results rather than quantitatively accurate ones.
The only point I would argue with is their claim that a novice can obtain
professional results with a few annotations. That may be true for cases where
their method works well, but reality is probably a little more complex for cases
where it doesn't. For example, about the weights in equation (1), they say the
"user can also modify these weights depending on the confidence of their manual
source estimates". This looks to me like it needs a bit of expertise to know
when and how to modify these weights. The same goes for how to split the scene
into planes, choose the type and position of the lights, or the material of the
planes (they speak about selecting reflecting surfaces, for example). A simple
scene like the one in figure 3 is surely straightforward, but one like figure 6
or 7 doesn't look so. These are the kind of things I call 'unspoken and
time-consuming pre/post-processing requiring experience'.
About using their work with POV-Ray: their implementation is done using
LuxRender, so at the very least it would take some refactoring to adapt it to
POV-Ray. It's also basically a method to calculate rendering parameters for
objects composited a posteriori into an existing 2D image. It doesn't seem to
match your expectation of recreating the scene geometry, if I understand
correctly. Parts of their method could be reused, like the calculation of the
camera parameters from vanishing points. But rather than deconstruct their work
and look for interesting bits, I would personally look directly into the
relevant papers about those bits. If their coarse scene made of planes is enough
for you, that could also be reused, but there too you would need to deconstruct
part of their work and refactor it to your needs (also, it is unclear to me
whether they really calculate down to the 3D coordinates of the planes; it seems
to me they don't need to in their method). This may not be easy work, and it
would discard the most valuable part of their method, so there again I would
probably rather go for my own implementation from scratch.
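By the way, the calibration from vanishing points is a classical result, not specific to their paper. As a rough sketch (my own, not their code), assuming a known principal point (often taken as the image centre) and square pixels, the focal length can be recovered from the vanishing points of two orthogonal scene directions, since the back-projected rays through the vanishing points must themselves be orthogonal:

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    """Estimate the focal length (in pixels) from the vanishing points
    v1, v2 of two orthogonal scene directions.

    Assumes square pixels and a known principal point. The rays
    (v - p, f) back-projected through each vanishing point are parallel
    to the scene directions, so orthogonality gives
    (v1 - p) . (v2 - p) + f^2 = 0.
    """
    dx1, dy1 = v1[0] - principal_point[0], v1[1] - principal_point[1]
    dx2, dy2 = v2[0] - principal_point[0], v2[1] - principal_point[1]
    dot = dx1 * dx2 + dy1 * dy2
    if dot >= 0:
        # The vanishing points must lie on opposite sides of the
        # principal point for orthogonal directions.
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)

# Hypothetical example: 640x480 image, principal point at the centre.
f = focal_from_vanishing_points((940.0, 240.0), (-20.0, 240.0), (320.0, 240.0))
```

With a third orthogonal vanishing point one can also recover the camera rotation (each normalized back-projected ray gives one column of the rotation matrix), which is the part most directly reusable for setting up a matching POV-Ray camera.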

> b) if he and his colleagues/coworkers/students have done a lot of the puzzling
> out and heavy lifting already, then perhaps he might provide you with tools that
> would help you in your own work that you might not be familiar with.

About my own work, my goal is quite different (fully automated reconstruction of
highly accurate geometry and texture of a single object from photographs), so
this paper won't help. But it was an interesting read anyway and may be relevant
in a future project. Thank you.

Hope we'll have other opportunities to speak about it in the future! :-)



Copyright 2003-2021 Persistence of Vision Raytracer Pty. Ltd.