Thomas de Groot <tho### [at] degrootorg> wrote:
> On 4-7-2021 at 17:32, Thomas de Groot wrote:
> > On 4-7-2021 at 14:36, BayashiPascal wrote:
> >>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
> >>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
> >>> trial and error of course, and a hell of a challenge with all kind of
> >>> assumptions. Nothing to do with photogrammetry of course, but I
> >>> appreciate your caveats about any "easy magic" ;-).
> >>
> >> I can't recall it from the name, neither find it with Google. Would
> >> you have a
> >> link ? I would certainly enjoy seeing it again, as usual with your
> >> scenes :-)
> >>
> > There is this:
> >
> >
> > http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553
> >
> >
> There is also this. A later version:
>
>
> http://news.povray.org/povray.binaries.images/thread/%3C5b7a6cb0%40news.povray.org%3E/?ttop=428926&toff=150&mtop=424347
>
> --
> Thomas
Thanks. A very faithful reproduction of the original, very nice work :-)
"Bald Eagle" <cre### [at] netscapenet> wrote:
....
To be honest, I haven't read the link you've provided about the work of Dr
Karsch, for lack of time. But I'll try to do so and come back to you this week.
Pascal
On 5-7-2021 at 05:30, BayashiPascal wrote:
> Thomas de Groot <tho### [at] degrootorg> wrote:
>> On 4-7-2021 at 17:32, Thomas de Groot wrote:
>>> On 4-7-2021 at 14:36, BayashiPascal wrote:
>>>>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
>>>>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
>>>>> trial and error of course, and a hell of a challenge with all kind of
>>>>> assumptions. Nothing to do with photogrammetry of course, but I
>>>>> appreciate your caveats about any "easy magic" ;-).
>>>>
>>>> I can't recall it from the name, neither find it with Google. Would
>>>> you have a
>>>> link ? I would certainly enjoy seeing it again, as usual with your
>>>> scenes :-)
>>>>
>>> There is this:
>>>
>>>
>>> http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553
>>>
>>>
>> There is also this. A later version:
>>
>>
>> http://news.povray.org/povray.binaries.images/thread/%3C5b7a6cb0%40news.povray.org%3E/?ttop=428926&toff=150&mtop=424347
>>
>> --
>> Thomas
>
> Thanks. A very faithful reproduction of the original, very nice work :-)
>
>
At the time of building, I was surprised by the perspective problems I
had to tackle. I naively assumed it would be easy, but of course the
lens and its orientation had introduced all kinds of distortions into
the photograph. I had also assumed that walls are straight and that
streets are level... ;-)
--
Thomas
Thomas de Groot <tho### [at] degrootorg> wrote:
> On 5-7-2021 at 05:30, BayashiPascal wrote:
> > Thomas de Groot <tho### [at] degrootorg> wrote:
> >> On 4-7-2021 at 17:32, Thomas de Groot wrote:
> >>> On 4-7-2021 at 14:36, BayashiPascal wrote:
> >>>>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
> >>>>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
> >>>>> trial and error of course, and a hell of a challenge with all kind of
> >>>>> assumptions. Nothing to do with photogrammetry of course, but I
> >>>>> appreciate your caveats about any "easy magic" ;-).
> >>>>
> >>>> I can't recall it from the name, neither find it with Google. Would
> >>>> you have a
> >>>> link ? I would certainly enjoy seeing it again, as usual with your
> >>>> scenes :-)
> >>>>
> >>> There is this:
> >>>
> >>>
> >>> http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553
> >>>
> >>>
> >> There is also this. A later version:
> >>
> >>
> >> http://news.povray.org/povray.binaries.images/thread/%3C5b7a6cb0%40news.povray.org%3E/?ttop=428926&toff=150&mtop=424347
> >>
> >> --
> >> Thomas
> >
> > Thanks. A very faithful reproduction of the original, very nice work :-)
> >
> >
>
> At the time of building I was surprised about the perspective problems I
> had to tackle. I naively assumed that would be easy but of course the
> lens and its orientation had introduced all kinds of deformations to the
> photograph. Also assuming that walls are straight, as well as streets
> are level... ;-)
>
> --
> Thomas
Surely!
If you remember my work on Escher's House of Stairs, you'll also understand that
I'm well aware of the amount of work it must have taken you to achieve this
scene. :-)
On 06/07/2021 at 05:54, BayashiPascal wrote:
> Thomas de Groot <tho### [at] degrootorg> wrote:
>> On 5-7-2021 at 05:30, BayashiPascal wrote:
>>> Thomas de Groot <tho### [at] degrootorg> wrote:
>>>> On 4-7-2021 at 17:32, Thomas de Groot wrote:
>>>>> On 4-7-2021 at 14:36, BayashiPascal wrote:
>>>>>>> 1) You may remember my POV-Ray scene 'Paris la nuit' a couple of years
>>>>>>> ago, based on a photograph by Sabine Weiss in 1953. It was just done by
>>>>>>> trial and error of course, and a hell of a challenge with all kind of
>>>>>>> assumptions. Nothing to do with photogrammetry of course, but I
>>>>>>> appreciate your caveats about any "easy magic" ;-).
>>>>>>
>>>>>> I can't recall it from the name, neither find it with Google. Would
>>>>>> you have a
>>>>>> link ? I would certainly enjoy seeing it again, as usual with your
>>>>>> scenes :-)
>>>>>>
>>>>> There is this:
>>>>>
>>>>>
>>>>> http://news.povray.org/povray.binaries.images/thread/%3C4d8dea6b%40news.povray.org%3E/?mtop=359553
>>>>>
>>>>>
>>>> There is also this. A later version:
>>>>
>>>>
>>>> http://news.povray.org/povray.binaries.images/thread/%3C5b7a6cb0%40news.povray.org%3E/?ttop=428926&toff=150&mtop=424347
>>>>
>>>> --
>>>> Thomas
>>>
>>> Thanks. A very faithful reproduction of the original, very nice work :-)
>>>
>>>
>>
>> At the time of building I was surprised about the perspective problems I
>> had to tackle. I naively assumed that would be easy but of course the
>> lens and its orientation had introduced all kinds of deformations to the
>> photograph. Also assuming that walls are straight, as well as streets
>> are level... ;-)
>>
>> --
>> Thomas
>
> Surely !
> If you remember my work on Escher's house of stairs, you'll also understand I'm
> well aware of the amount of work it must have taken you to achieve this scene.
> :-)
>
Indeed! That is a brilliant piece of work.
--
Thomas
"Bald Eagle" <cre### [at] netscapenet> wrote:
> ...
Hi again Bill,
I'll only reply about Dr Karsch's paper and leave the reply to your other
comments for another (hypothetical) day. Not that they don't interest me, not
at all; it's just a matter of time...
> a) you could look deeper into his work, with your education and experience in
> the area and see how much of what he claims to accomplish is "embellished"
I've read the paper and had a look at the website of Dr Karsch.
The results introduced in the paper are indeed very good, and certainly not
embellished in any way. You just have to understand the limitations of their
method. It relies heavily on user guidance to provide sufficient information
about the scene geometry, lighting, interaction between the scene and the added
objects, as well as user guidance for correction and supplementation of
automatic estimates. It produces a very coarse representation of the scene,
essentially limited to planes, and any more complex object must be modeled with
an external 3D modeling tool and imported, or manually segmented for occlusion
surfaces. It is not applicable to all kinds of scenes, and it produces
qualitatively good results rather than quantitatively accurate ones.
The only point I would argue with is their claim that a novice can obtain
professional results with a few annotations. That may be true for cases where
their method works well, but reality is probably a little more complex for cases
where it doesn't. For example, about the weights in equation (1), they say the
"user can also modify these weights depending on the confidence of their manual
source estimates". To me, knowing when and how to modify these weights requires
a bit of expertise. The same goes for how to split the scene into planes, choose
the type and position of the lights, or the material of the planes (they mention
selecting reflective surfaces, for example). A simple scene like the one in
figure 3 is surely straightforward, but one like figure 6 or 7 doesn't look so.
These are the kind of things I call 'unspoken and time-consuming
pre/post-processing requiring experience'.
About using their work with POV-Ray: their implementation is done with
LuxRender, so at the very least it would take some refactoring to adapt it to
POV-Ray. It is also essentially a method to calculate rendering parameters for
objects composited a posteriori into an existing 2D image. It doesn't seem to
match your expectation of recreating the scene geometry, if I understand
correctly. Parts of their method could be reused, like the calculation of camera
parameters from vanishing points, but rather than deconstructing their work to
look for interesting bits, I would personally look directly into the relevant
papers about those bits. If their coarse scene made of planes is enough for you,
that could also be reused, but there too you would need to deconstruct part of
their work and refactor it to your needs (it is also unclear to me whether they
really calculate the 3D coordinates of the planes; it seems to me they don't
need them in their method). This would not be easy work, and it would discard
the most valuable part of their method, so there again I would probably rather
go for my own implementation from scratch.
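As an aside, the vanishing-point step mentioned above can be sketched compactly. This is the generic textbook relation (pinhole camera, zero skew, square pixels, known principal point), not the paper's actual implementation; the function name and the synthetic numbers are mine:

```python
import math

def focal_from_vanishing_points(v1, v2, principal_point):
    # For vanishing points v1, v2 of two ORTHOGONAL scene directions, a
    # pinhole camera with zero skew, square pixels and principal point c
    # satisfies (v1 - c) . (v2 - c) + f^2 = 0, so the focal length in
    # pixels is f = sqrt(-(v1 - c) . (v2 - c)).
    cx, cy = principal_point
    dot = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal "
                         "directions for this principal point")
    return math.sqrt(-dot)

# Synthetic check: a camera with f = 1000 px and principal point
# (320, 240) sends the orthogonal directions (1,0,1) and (1,0,-1)
# to the vanishing points (1320, 240) and (-680, 240).
f = focal_from_vanishing_points((1320, 240), (-680, 240), (320, 240))
```

If the principal point is unknown, a third vanishing point of a mutually orthogonal direction is needed to solve for it as well; the papers on single-view calibration cover that case.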
> b) if he and his colleagues/coworkers/students have done a lot of the puzzling
> out and heavy lifting already, then perhaps he might provide you with tools that
> would help you in your own work that you might not be familiar with.
About my own work, my goal is quite different (fully automated reconstruction of
highly accurate geometry and texture of a single object from photographs), so
this paper won't help. But it was an interesting read anyway and may be relevant
in a future project. Thank you.
Hope we'll have other opportunities to speak about it in the future! :-)
Pascal
Hi,
> Bald Eagle writes:
>> Take for example Francois LE COAT's very interesting work:
>>
>> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
>>
>> Forgive me if there's a lot of things I don't understand, properly recall, or
>> have wrong, but it seems to me that an aerial photograph of a landscape has a
>> lot of perspective distortion, and part of the photogrammetry process would be
>> correcting for that.
>
> For the time being, I'm experimenting to model trajectories in 3D.
> But it is a difficult work, because public images given by the NASA
> are not of good quality. If the Ingenuity helicopter flying on Mars was
> driven with my trajectory modelling, it would have crashed. But it is
> what I'm measuring, with input data I could collect. This is strange!
> I have no other possibility than to improve my computations further on.
For the 9th flight over Mars, I collected better images from the
surface. To understand what I'm computing, here is another work
of mine...
(1) <https://twitter.com/Astro_Aure/status/1414699163569246210>
You can see a cursor pointing at Ingenuity's location, moving on
the map and following the helicopter's motion. What I would like to do
is compute this in space, with "translate <Tx, Ty, Tz>" in 3 dimensions.
(2) <https://www.youtube.com/watch?v=MT5SqhJ0Jio>
I could also "rotate <Rx, Ry, Rz>" and "shear <Sx, Sy, 0>", but that is
not working for the moment. And I rendered this with Persistence Of Vision.
The algorithm I'm using is probably close to what is used on Mars,
because it is constructed to avoid the drift from the embedded IMU.
It's not viewed from the top (like (1)) but from the surface of Mars.
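For readers unfamiliar with chaining such transforms, here is a minimal sketch in Python of how a rotate/translate sequence combines into one 4x4 matrix applied to a trajectory point. It only illustrates the general mechanics under common conventions (column vectors, right-handed rotation, x += Sx*z shear); POV-Ray's own left-handed conventions and its actual shear semantics may differ, so treat the helper names and conventions here as my assumptions:

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product a @ b
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def rotate_y(degrees):
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def shear(sx, sy):
    # x += sx*z, y += sy*z: one plausible reading of "shear <Sx, Sy, 0>"
    return [[1, 0, sx, 0], [0, 1, sy, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def chain(*transforms):
    # Transforms listed in the order they act (as in POV-Ray scene syntax);
    # with column vectors, each later transform is left-multiplied.
    combined = [[float(i == j) for j in range(4)] for i in range(4)]
    for m in transforms:
        combined = mat_mul(m, combined)
    return combined

def apply(m, p):
    # Apply a 4x4 transform to a 3D point (homogeneous coordinate 1)
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# Rotate 90 degrees about y, then lift by 2 units:
# (1, 0, 0) -> (0, 0, -1) -> (0, 2, -1)
m = chain(rotate_y(90.0), translate(0.0, 2.0, 0.0))
```

One combined matrix per video frame would be enough to place the helicopter model along the reconstructed trajectory.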
Best regards,
--
<http://eureka.atari.org/>
Hi,
>> Bald Eagle writes:
>>> Take for example Francois LE COAT's very interesting work:
>>>
>>> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
>>>
>>> Forgive me if there's a lot of things I don't understand, properly recall, or
>>> have wrong, but it seems to me that an aerial photograph of a landscape has a
>>> lot of perspective distortion, and part of the photogrammetry process would be
>>> correcting for that.
>>
>> For the time being, I'm experimenting to model trajectories in 3D.
>> But it is a difficult work, because public images given by the NASA
>> are not of good quality. If the Ingenuity helicopter flying on Mars was
>> driven with my trajectory modelling, it would have crashed. But it is
>> what I'm measuring, with input data I could collect. This is strange!
>> I have no other possibility than to improve my computations further on.
>
> For the 9th flight over Mars, I collected better images from the
> surface. To understand what I'm computing, here is another work
> from mine...
>
> (1) <https://twitter.com/Astro_Aure/status/1414699163569246210>
>
> You can see a cursor pointing at Ingenuity's localization, moving on
> the map, following the helicopter's motion. What I would like to do, is
> to compute this in space, with "translate <Tx, Ty, Tz>" in 3 dimensions.
>
> (2) <https://www.youtube.com/watch?v=MT5SqhJ0Jio>
>
> I could also "rotate <Rx, Ry, Rz>" and "shear <Sx, Sy, 0>" but it is not
> working for the moment. And I rendered this with Persistence Of Vision.
> The algorithm I'm using is probably close to what is used on Mars,
> because it is constructed to avoid the drift from the embedded IMU.
> It's not viewed from the top (like (1)) but from the surface of Mars.
For the 10th flight over Mars, Ingenuity followed an elaborate trajectory...
<https://www.youtube.com/watch?v=QiF9VJJamkE>
NASA planned to take color pictures of remarkable points on the ground.
Best regards,
--
<http://eureka.atari.org/>
Hi,
>>> Bald Eagle writes:
>>>> Take for example Francois LE COAT's very interesting work:
>>>>
>>>> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
>>>>
>>>> Forgive me if there's a lot of things I don't understand, properly
>>>> recall, or
>>>> have wrong, but it seems to me that an aerial photograph of a
>>>> landscape has a
>>>> lot of perspective distortion, and part of the photogrammetry
>>>> process would be
>>>> correcting for that.
>>>
>>> For the time being, I'm experimenting to model trajectories in 3D.
>>> But it is a difficult work, because public images given by the NASA
>>> are not of good quality. If the Ingenuity helicopter flying on Mars was
>>> driven with my trajectory modelling, it would have crashed. But it is
>>> what I'm measuring, with input data I could collect. This is strange!
>>> I have no other possibility than to improve my computations further on.
>>
>> For the 9th flight over Mars, I collected better images from the
>> surface. To understand what I'm computing, here is another work
>> from mine...
>>
>> (1) <https://twitter.com/Astro_Aure/status/1414699163569246210>
>>
>> You can see a cursor pointing at Ingenuity's localization, moving on
>> the map, following the helicopter's motion. What I would like to do, is
>> to compute this in space, with "translate <Tx, Ty, Tz>" in 3 dimensions.
>>
>> (2) <https://www.youtube.com/watch?v=MT5SqhJ0Jio>
>>
>> I could also "rotate <Rx, Ry, Rz>" and "shear <Sx, Sy, 0>" but it is not
>> working for the moment. And I rendered this with Persistence Of Vision.
>> The algorithm I'm using is probably close to what is used on Mars,
>> because it is constructed to avoid the drift from the embedded IMU.
>> It's not viewed from the top (like (1)) but from the surface of Mars.
>
> For 10th flight over Mars, Ingenuity followed an elaborate trajectory...
>
>
> NASA planned to take color pictures of remarkable points on ground.
I've been busy with the 11th flight over planet Mars yesterday, August
14th, after the Ingenuity helicopter flew on August 4th...
<https://www.youtube.com/watch?v=50fccs79W1A>
The separate raw images were released by NASA at this location:
<https://mars.nasa.gov/mars2020/multimedia/raw-images/?af=HELI_NAV,HELI_RTE#raw-images>
<https://twitter.com/RevesdEspace/status/1426147951693402114>
on August 13th. I worked on this video. There is a delay between the
day of the flight and the final result here. But this video comes
from a very distant location in space!
Best regards,
--
<http://eureka.atari.org/>
Hi,
>>>> Bald Eagle writes:
>>>>> Take for example Francois LE COAT's very interesting work:
>>>>>
>>>>> http://news.povray.org/povray.advanced-users/thread/%3Cweb.5bb77cec1f36de80c437ac910%40news.povray.org%3E/
>>>>>
>>>>> Forgive me if there's a lot of things I don't understand, properly recall, or
>>>>> have wrong, but it seems to me that an aerial photograph of a landscape has a
>>>>> lot of perspective distortion, and part of the photogrammetry process would be
>>>>> correcting for that.
>>>>
>>>> For the time being, I'm experimenting to model trajectories in 3D.
>>>> But it is a difficult work, because public images given by the NASA
>>>> are not of good quality. If the Ingenuity helicopter flying on Mars was
>>>> driven with my trajectory modelling, it would have crashed. But it is
>>>> what I'm measuring, with input data I could collect. This is strange!
>>>> I have no other possibility than to improve my computations further on.
>>>
>>> For the 9th flight over Mars, I collected better images from the
>>> surface. To understand what I'm computing, here is another work
>>> from mine...
>>>
>>> (1) <https://twitter.com/Astro_Aure/status/1414699163569246210>
>>>
>>> You can see a cursor pointing at Ingenuity's localization, moving on
>>> the map, following the helicopter's motion. What I would like to do, is
>>> to compute this in space, with "translate <Tx, Ty, Tz>" in 3 dimensions.
>>>
>>> (2) <https://www.youtube.com/watch?v=MT5SqhJ0Jio>
>>>
>>> I could also "rotate <Rx, Ry, Rz>" and "shear <Sx, Sy, 0>" but it is not
>>> working for the moment. And I rendered this with Persistence Of Vision.
>>> The algorithm I'm using is probably close to what is used on Mars,
>>> because it is constructed to avoid the drift from the embedded IMU.
>>> It's not viewed from the top (like (1)) but from the surface of Mars.
>>
>> For 10th flight over Mars, Ingenuity followed an elaborate trajectory...
>>
>>
>> NASA planned to take color pictures of remarkable points on ground.
>
> I've been concerned by 11th flight over planet Mars yesterday August
> 14th, after Ingenuity helicopter happened August 4th...
>
>
> RAW separated images were released by the NASA at the location:
>
> <https://mars.nasa.gov/mars2020/multimedia/raw-images/?af=HELI_NAV,HELI_RTE#raw-images>
>
>
> <https://twitter.com/RevesdEspace/status/1426147951693402114>
>
> August 13th. I worked on this video. It takes a delay between the
> day of the flight, and final result here. But this video comes
> from a very distant location in space!
<https://twitter.com/Astro_Aure/status/1455284564499341319>
This is interesting because the frame rate is higher, at 7 images/s, so the
reconstructed 3D localization of the helicopter is now more precise...
<https://www.youtube.com/watch?v=OkclJd3Fmv0>
Ingenuity flew on Oct. 24th, and the mission is not yet fully accomplished!
Best regards,
--
<http://eureka.atari.org/>