Subject: Re: Optical Inertia
From: Bald Eagle
Date: 18 Oct 2024 17:45:00
Message: <web.6712d6cc4e45f5111f9dae3025979125@news.povray.org>
Francois LE COAT <lec### [at] atariorg> wrote:

> I have the first reference image and the second one, from which I
> determine the optical flow. That is the integer displacement of every
> pixel from one image to the other. It gives a vector field that can
> be approximated by a global projective transformation: eight
> parameters in rotation and translation that match the two images
> best. The quality of the match is measured by the correlation between
> the first image and the (projectively) transformed second one.

So, you have this vector field that sort of maps the pixels in the second frame
to the pixels in the first frame, and then the projection matrix is somehow
calculated from that.
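
Just to make sure I follow, here is roughly how I would try that step myself in
Python with NumPy and OpenCV (my own guess at it, not your code), taking the
flow as an HxWx2 array where flow[y, x] = (dx, dy) sends pixel (x, y) of the
first frame to (x+dx, y+dy) in the second:

import numpy as np
import cv2

def homography_from_flow(flow, samples=5000, seed=0):
    """Fit one global projective transformation to a dense flow field."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)
    dst = src + flow.reshape(-1, 2).astype(np.float32)

    # Subsample the per-pixel correspondences so RANSAC stays cheap.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(src), size=min(samples, len(src)), replace=False)
    H, inliers = cv2.findHomography(src[idx], dst[idx], cv2.RANSAC, 3.0)

    # H is 3x3 but only defined up to scale, hence the eight free parameters.
    return H / H[2, 2]

cv2.findHomography with RANSAC is just the first tool that comes to mind; I
have no idea whether that resembles how you actually fit the eight parameters.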

> The disparity field, which is the integer displacement between images, is
> linked to depth in the image (the 3rd dimension) by an inverse relation
> (depth = base/disparity). That means we can evaluate the image's depth
> from a continuous video stream.

Presumably when an object gets closer, the image gets "scaled up" in the frame,
and you use that to calculate the distance of the object from the camera.
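
So the inverse relation itself would be something as trivial as this, I suppose
(the array name and the lumped "base" factor are just my placeholders):

import numpy as np

def depth_from_disparity(disparity, base):
    """depth = base / disparity, leaving zero-disparity pixels at infinity."""
    depth = np.full(disparity.shape, np.inf)
    moving = np.abs(disparity) > 1e-6
    depth[moving] = base / np.abs(disparity[moving])
    return depth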

> > I'm also wondering if you could create a 3D rendering with the data you're
> > extracting, and maybe a 2D orthographic overhead map of the scene that the
> > drones are flying through, mapping the position of the drones in the forest.

> Things are not so perfect that we can achieve what
> you're wishing for, given the state of the art...

Well, I'm just thinking that you must have an approximate idea of where each
tree is, given that you calculate a projection matrix and know something about
the depth.  So I was just wondering if, given that information, you could simply
place a cylinder of approximately the right diameter at the right depth for each one.
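
Purely hypothetically, and assuming each tree came out of your pipeline as an
(x, depth, diameter) triple in camera coordinates (the tuple layout, the units,
and the fixed trunk height are all my invention), I was picturing something like:

# Write placeholder trunks to an include file for a .pov scene.
trees = [(-2.0, 8.0, 0.4), (1.5, 12.0, 0.6), (3.0, 5.5, 0.3)]  # (x, depth, diameter)

with open("trunks.inc", "w") as f:
    for x, z, diameter in trees:
        # y is up here, so each trunk runs from y=0 to an arbitrary y=6.
        f.write(f"cylinder {{ <{x:.2f}, 0, {z:.2f}>, <{x:.2f}, 6, {z:.2f}>, {diameter/2:.2f}\n")
        f.write("  pigment { color rgb <0.4, 0.3, 0.2> } }\n")

The 2D overhead map would then just be the same (x, z) pairs plotted
orthographically.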

> It has been a long effort before I could show some promising results. Now
> I'm happy to share it with a larger audience. Unfortunately, all the
> good people who worked with me are not here to appreciate it. That's
> why I'm surprised by your interest =)

Well, I have been interested in the fundamentals of photogrammetry for quite
some time.  And I have been following your work for the last several years,
hoping to learn how to create such projection matrices and apply them.

https://news.povray.org/povray.advanced-users/thread/%3C5be592ea%241%40news.povray.org%3E/

I don't work in academia or the graphics industry, so I only have whatever free
time I can devote to learning this stuff independently.

Even if I were simply to use a POV-Ray scene (something simple like cubes,
spheres, and cylinders) and render two images from different camera locations,
I'm assuming that I could then calculate a vector field and a projection matrix.
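
Concretely, I imagine that experiment would boil down to something like this
(frame1.png and frame2.png are just my names for the two renders, and I'm
assuming OpenCV's Farneback routine is an acceptable stand-in for however you
compute the flow):

import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Dense flow: flow[y, x] = (dx, dy), i.e. where pixel (x, y) of the first
# render appears to have moved to in the second render.
flow = cv2.calcOpticalFlowFarneback(img1, img2, None, 0.5, 3, 15, 3, 5, 1.2, 0)

H = homography_from_flow(flow)  # the sketch from earlier in this post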

Given the projection matrix and one of the two renders, would I then have the
necessary and sufficient information to write a .pov scene to recreate the
render from scratch?
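
I realise code won't answer that, but I could at least measure how much of the
second render a single homography recovers: warp it back onto the first one and
take the correlation, the same quality measure you mentioned. My (possibly
wrong) understanding is that one homography only accounts for a planar scene or
a pure camera rotation, so anything with real depth variation would need more
than that.

import numpy as np
import cv2

def warp_correlation(img1, img2, H):
    """Correlation between frame 1 and frame 2 mapped back through H."""
    h, w = img1.shape[:2]
    # H maps frame-1 pixels to frame-2 pixels, so invert it to pull frame 2 back.
    warped = cv2.warpPerspective(img2, np.linalg.inv(H), (w, h))
    return np.corrcoef(img1.ravel().astype(float),
                       warped.ravel().astype(float))[0, 1]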

- BW

