POV-Ray : Newsgroups : povray.advanced-users : Optical Inertia : Re: Optical Inertia
  Re: Optical Inertia  
From: Francois LE COAT
Date: 21 Nov 2024 10:15:42
Message: <673f4e9e$1@news.povray.org>
Hi,

Here is another result...

> Bald Eagle writes:
>> Even if I were to simply use a POV-Ray scene, where I rendered two
>> images with different camera locations - then I'm assuming that I
>> could calculate a vector field and a projection matrix. (something
>> simple like cubes, spheres, and cylinders)
>>
>> Given the projection matrix and one of the two renders, would I then
>> have the necessary and sufficient information to write a .pov scene
>> to recreate the render from scratch?
>>
>> - BW
> 
> For the moment, the work on depth from monocular vision is not
> advanced enough to recreate the visible scene. Vision with two or
> more cameras gives a much better result for 3D reconstruction of
> scenes.
> 
> Let us recall the starting point of this thread... We redid the
> experiment of Hernan Badino, who walks with a camera on his head:
> 
> Hernan determines his 2D ego-motion in the x-y plane from
> corresponding interest points that persist in the video stream. That
> means he calculates the projection matrix of the movement to deduce
> translations in the ground plane. With time integration, this gives
> him the trajectory.
> 
> We do almost the same, but I work with OpenCV's optical flow rather
> than interest points. And my motion model is 3D, yielding 8
> parameters in rotation and translation that I can use in Persistence
> of Vision.
> 
> I hope you follow... I'm reconstructing the 3D movement, and I find
> that it gives "temporal disparity", that is, depth from motion.
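
The 8-parameter motion model mentioned in the quote is commonly the
projective (homographic) transform. Here is a minimal Python sketch,
assuming the textbook parameterization (the exact convention used in
the code behind this work is not shown in the post):

```python
def projective_warp(x, y, p):
    """Map an image point (x, y) through the 8-parameter projective
    motion model p = (a1, ..., a8):

        x' = (a1*x + a2*y + a3) / (1 + a7*x + a8*y)
        y' = (a4*x + a5*y + a6) / (1 + a7*x + a8*y)

    The identity motion is p = (1, 0, 0, 0, 1, 0, 0, 0); a3 and a6
    act as image-plane translations."""
    a1, a2, a3, a4, a5, a6, a7, a8 = p
    w = 1.0 + a7 * x + a8 * y
    return (a1 * x + a2 * y + a3) / w, (a4 * x + a5 * y + a6) / w
```

Fitting these eight parameters to the dense optical-flow field (for
instance by least squares) yields the "dominant projective movement"
of the whole image.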

An instrumented motorcycle rides on the track of a speed circuit. By
approximating the optical flow (OpenCV's DIS algorithm) with the
dominant projective movement, we determine the translations in the
ground plane, plus roll and yaw: that is, the trajectory, expressed
by the projective parameters (Tx, Tz, Ry, Rz).

<https://www.youtube.com/watch?v=-QLJ2ke9mN8>
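
The per-frame parameters become a trajectory by time integration, as
described above. A hedged sketch, assuming simple planar
dead-reckoning with yaw (the names tx, tz, ry and their units are
illustrative, not taken from the actual code):

```python
import math

def integrate_trajectory(steps):
    """Dead-reckon a ground-plane trajectory from per-frame increments
    (tx, tz, ry): lateral and forward translation in the vehicle frame
    and yaw increment in radians.  Returns world-frame (x, z) points."""
    x = z = yaw = 0.0
    path = [(x, z)]
    for tx, tz, ry in steps:
        yaw += ry
        # Rotate the body-frame translation into the world frame.
        x += tx * math.cos(yaw) - tz * math.sin(yaw)
        z += tx * math.sin(yaw) + tz * math.cos(yaw)
        path.append((x, z))
    return path
```

Integration accumulates drift, which is why some form of periodic
reset (see below, the correlation threshold) matters in practice.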

Image data comes from the publication:

Bastien Vincke, Pauline Michel, Abdelhafid El Ouardi, Bruno Larnaudie,
Sergio Rodriguez, Abderrahmane Boubezoul (Dec. 2024). Real Track
Experiment Dataset for Motorcycle Rider Behavior and Trajectory
Reconstruction. Data in Brief, Vol. 57, 111026.

The instrumented motorcycle makes a complete lap of the track. The
correlation threshold between successive images is set at 90%; when
the correlation falls below it, the calculation of the projective
dynamic model is reset.
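
One plausible reading of that reset rule, as a sketch (the helper
names and the representation of frames as flat intensity lists are
hypothetical, for illustration only):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity
    sequences, in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def reset_indices(frames, threshold=0.90):
    """Indices where the projective dynamic model would be reset:
    whenever the correlation between successive images falls below
    the threshold (90% in the post)."""
    resets = [0]
    for i in range(1, len(frames)):
        if ncc(frames[i - 1], frames[i]) < threshold:
            resets.append(i)
    return resets
```

In the real pipeline the correlation would be computed on full images
(or on the flow-compensated prediction), but the thresholding logic is
the same.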

Best regards,

-- 

<https://eureka.atari.org/>

