Subject: Re: Optical Inertia
From: Francois LE COAT
Date: 18 Oct 2024 20:20:12
Message: <6712fb3c$1@news.povray.org>
Hi,

Bald Eagle writes:
> Francois LE COAT wrote:
>> I have the first reference image, and the second one, from which I
>> determine the optical flow. That means the integer displacement of
>> every pixel from one image to the other. That gives a vector field
>> that can be approximated by a global projective transformation, i.e.
>> eight parameters in rotation and translation that match the two
>> images best. The quality of the images' matching is measured with the
>> correlation between the first image and the second, (projectively)
>> transformed.
> 
> So, you have this vector field that sort of maps the pixels in the
> second frame to the pixels in the first frame.
> And then somehow the projection matrix is calculated from that.
> 
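To make that concrete, here is a minimal sketch of fitting one global
homography to a dense flow field. It uses Python with OpenCV;
calcOpticalFlowFarneback and findHomography are stand-ins chosen for
illustration, not necessarily the pipeline discussed in this thread:

    import cv2
    import numpy as np

    # Two consecutive frames (hypothetical file names).
    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    # Dense optical flow: one (dx, dy) displacement per pixel.
    flow = cv2.calcOpticalFlowFarneback(img1, img2, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Turn the field into correspondences (x, y) -> (x + dx, y + dy),
    # subsampled to keep the robust fit fast.
    h, w = img1.shape
    step = 8
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    src = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    dst = src + flow[::step, ::step].reshape(-1, 2)

    # One global projective transformation fitted to the whole field.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    print(H)

The resulting 3x3 matrix H is defined up to scale, which is where the
eight free parameters mentioned above come from.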
>> The disparity field, which is an integer displacement between
>> images, is linked to depth in the image (the 3rd dimension) by an
>> inverse relation (depth = base / disparity). That means we can
>> evaluate the image's depth from a continuous video stream.
> 
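Following that formula, a toy numeric sketch (the numbers are
hypothetical; the focal length is folded into "base", as in the usual
stereo relation Z = f*B/d):

    import numpy as np

    # depth = base / disparity, as quoted above.  "base" folds in the
    # focal length (Z = f*B/d in the classic stereo formula); the
    # values are illustrative, not from the actual experiment.
    base = 700.0 * 0.12                            # focal (px) * baseline (m)
    disparity = np.array([48.0, 24.0, 12.0, 6.0])  # pixel displacements
    depth = base / disparity                       # halving disparity doubles depth
    print(depth)                                   # [ 1.75  3.5  7.  14. ]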
> Presumably when an object gets closer, the image gets "scaled up" in
> the frame, and you use that to calculate the distance of the object
> from the camera.
> 
>>> I'm also wondering if you could create a 3D rendering with the data
>>> you're extracting, and maybe a 2D orthographic overhead map of the
>>> scene that the drones are flying through, mapping the position of
>>> the drones in the forest.
> 
>> Things are not so perfect that we can achieve what you're wishing
>> for, given the state of the art...
> 
> Well, I'm just thinking that you must have an approximate idea of
> where each tree is, given that you calculate a projection matrix and
> know something about the depth.  So I was just wondering if, given
> that information, you could simply place a cylinder of the
> approximate diameter and at the right depth.
> 
>> It has been a long effort before I could show some promising
>> results. Now I'm happy to share it with a larger audience.
>> Unfortunately all the good people who worked with me are not here to
>> appreciate it. That's why I'm surprised by your interest =)
> 
> Well, I have been interested in the fundamentals of photogrammetry
> for quite some time.  And I have been following your work for the
> last several years, hoping to learn how to create such projection
> matrices and apply them.
> 
> https://news.povray.org/povray.advanced-users/thread/%3C5be592ea%241%40news.povray.org%3E/
> 
> I don't work in academia or the graphics industry, so I only have
> what free time I can devote to independently learning this stuff on
> my own.
> 
> Even if I were to simply use a POV-Ray scene, where I rendered two
> images with different camera locations, then I'm assuming that I
> could calculate a vector field and a projection matrix. (Something
> simple like cubes, spheres, and cylinders.)
> 
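If you try that, the matching-quality measure quoted at the top
(correlation between the first image and the projectively transformed
second one) can be sketched as follows; img1, img2 and the homography
H are assumed inputs, e.g. from the earlier sketch:

    import cv2
    import numpy as np

    def projective_match_quality(img1, img2, H):
        # H is the 3x3 homography mapping img1 coordinates to img2
        # coordinates (e.g. from cv2.findHomography).  Sketch only.
        h, w = img1.shape[:2]
        # warpPerspective forward-maps through the given matrix, so
        # the inverse of H brings img2 back into img1's frame.
        warped = cv2.warpPerspective(img2, np.linalg.inv(H), (w, h))
        a = img1.astype(np.float64) - img1.mean()
        b = warped.astype(np.float64) - warped.mean()
        # Normalized cross-correlation: near 1.0 means a good match.
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))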
> Given the projection matrix and one of the two renders, would I then
> have the necessary and sufficient information to write a .pov scene
> to recreate the render from scratch?
> 
> - BW

I understand your question. The problem is that I'm far from
reconstructing a 3D scene from the monocular information I have at the
moment. I know a company which is doing this sort of application,
called Stereolabs...

	<https://www.stereolabs.com/>

I'm far from their perfect 3D acquisition process. And it is obtained
with two cameras. I only have one camera, and a video stream that I
didn't acquire myself. Is there interest, and are there applications?
I'm not at that step in my work.

I know that similar monocular image processing has been used on planet
Mars, because the helicopter only had one piloting camera, for reasons
of weight and on-board constraints.

The main goal at this point of the work is to show that we could
eventually do the same job with many cameras, or with only one.
But I'm far from obtaining results similar to those of elaborate
systems like the Stereolabs ZED camera, for instance.

That is already being done perfectly with a stereoscopic system...

Do you understand? Thanks for your attention.

Best regards,

-- 

<https://eureka.atari.org/>

