Francois LE COAT <lec### [at] atariorg> wrote:
> It is possible to perceive the relief (in depth) of a scene when we
> have at least two different viewpoints of it. Here is a new example with
> a drone flying in the middle of a forest of trees; we process the
> video stream from its onboard camera...
>
> <https://www.youtube.com/watch?v=WJ20EBM3PTc>
>
> When the two views of the same scene are distant in space, we speak
> of "spatial disparity". In the present case, the two viewpoints are
> distant in time, and we then speak of "temporal disparity". The
> distinction is whether the two images of the same scene are acquired
> simultaneously or at different times. We can still perceive the relief
> in depth in this case, with a single camera and its continuous video
> stream.
Francois,
Although I believe that I understand the general idea of what you're doing in
your work, it's a bit difficult to fully grasp the details from the video.
I'm assuming that you're taking the first frame as a reference image, and then
reorienting the second frame to optimize the registration.
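If so, I imagine the registration step looking something like the sketch
below (purely my guess: I'm assuming ORB features and a RANSAC homography
in OpenCV, which may not be what you actually use):

import cv2
import numpy as np

# My guess at the registration step: align frame2 onto the reference
# frame1 using ORB features and a RANSAC homography (both assumptions).
def register(frame1, frame2):
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Homography that re-orients frame2 into the reference geometry
    H, inliers = cv2.findHomography(pts2, pts1, cv2.RANSAC, 3.0)
    h, w = frame1.shape[:2]
    return cv2.warpPerspective(frame2, H, (w, h)), pts1, pts2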
Then you're using OpenCV to do some sort of photogrammetry: generating a
projection matrix from the 2D images, and extracting/back-calculating 3D
data from that matrix for everything in the frame.
Then you move on to the next pair (2nd frame + 3rd frame) and repeat the
process.
Is this correct?
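To make my guess concrete, here is roughly the per-pair pipeline I have in
mind, sketched with plain OpenCV calls (I'm assuming the camera intrinsic
matrix K is known from calibration, and that pts1/pts2 are matched pixel
coordinates such as those from the registration step above):

import cv2
import numpy as np

# Essential matrix from the matched points, relative pose, then
# triangulation -- my reading of the "projection matrix" step.
def pair_to_structure(pts1, pts2, K):
    E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                   method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    # Projection matrices: K[I|0] for the reference view, K[R|t]
    # for the second view.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    pts3d = (pts4d[:3] / pts4d[3]).T   # back to Euclidean coordinates
    return pts3d, R, t

# Then slide along the stream: pair (1,2), pair (2,3), and so on.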
I'm also wondering if you could create a 3D rendering from the data you're
extracting, and maybe a 2D orthographic overhead map of the scene the
drone is flying through, mapping its position in the forest.
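For instance, the per-pair rotations and translations could be chained
into a rough top-down trajectory, something like this hypothetical
matplotlib sketch:

import numpy as np
import matplotlib.pyplot as plt

# Chain the per-pair (R, t) estimates into a camera trajectory and plot
# it top-down (x-z plane).  Monocular reconstruction only gives relative
# scale, and I'm assuming recoverPose's convention x2 = R @ x1 + t.
def plot_overhead(pair_poses):
    R_w, t_w = np.eye(3), np.zeros((3, 1))
    centers = [np.zeros(3)]
    for R, t in pair_poses:
        t_w = R @ t_w + t                         # compose poses
        R_w = R @ R_w
        centers.append((-R_w.T @ t_w).ravel())    # camera centre in world
    xs = [c[0] for c in centers]
    zs = [c[2] for c in centers]
    plt.plot(xs, zs, "o-")
    plt.axis("equal")
    plt.xlabel("x (relative units)")
    plt.ylabel("z (relative units)")
    plt.title("Drone path, top-down")
    plt.show()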
As always, I am very interested in understanding the details of what you're
doing.
This is great work, and I hope you are properly recognized for your
achievements.
- BW