Hi,
I have a webcam on my Vaio ... I've moved my head in front of the
camera, extracted the pose of the head with projective transform
parameters, and re-injected those parameters into POV-Ray in order to
move a "virtual head" rendered with it. This gives the video:
<http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
Has anybody already seen this kind of demonstration result?
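For readers who want to try something similar, here is a minimal sketch of
the idea, not the author's actual pipeline: OpenCV can estimate the
projective transform (homography) between a reference webcam frame and the
current frame, then decompose it into candidate rotations and translations.
The intrinsics K and the planar-face approximation are assumptions made
only for this illustration.

# Hedged sketch (not the author's code): estimate an 8-dof projective
# transform between a reference frame and the current frame, then decompose
# it into candidate rotation/translation pairs. Assumes the face is roughly
# planar and that the camera intrinsics K are known.
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # hypothetical intrinsics, 640x480 webcam
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def head_pose(ref_gray, cur_gray):
    """Return the homography H and candidate (R, t) decompositions."""
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    _, Rs, Ts, _ = cv2.decomposeHomographyMat(H, K)  # several possible poses
    return H, Rs, Ts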
Best regards,
--
<http://hebergement.u-psud.fr/lecoat/>
Hmmmmmm. Not that I can remember. It looks like other motion-capture things
I've seen before, but I'm guessing it isn't the same method.
And I can't say I've seen motion capture done with POV-Ray either. Or is
this exactly that?
Great effect, and especially interesting if more than one thing could be tracked
(head+hands+??) at the same time.
Bob
Francois LE COAT <lec### [at] atariorg> wrote:
>
> I have a webcam on my Vaio ... I've moved my head in front of the
> camera, extracted the pose of the head with projective transform
> parameters, and re-injected those parameters into POV-Ray in order to
> move a "virtual head" rendered with it. This gives the video:
>
> <http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
>
> Has anybody already seen this kind of demonstration result?
> --
> <http://hebergement.u-psud.fr/lecoat/>
On 11/09/2016 11:30 AM, Francois LE COAT wrote:
> Hi,
>
> I have a webcam on my Vaio ... I've moved my head in front of the
> camera, extracted the pose of the head with projective transform
> parameters, and re-injected those parameters into POV-Ray in order to
> move a "virtual head" rendered with it. This gives the video:
>
> <http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
>
> Has anybody already seen this kind of demonstration result?
>
> Best regards,
>
Neat! I do not recall seeing this sort of camera-to-model coupling
being accomplished with POV-Ray.
Bill P.
Hi,
William F Pokorny writes:
> Francois LE COAT wrote:
>> I have a webcam on my Vaio ... I've moved my head in front of the
>> camera, extracted the pose of the head with projective transform
>> parameters, and re-injected those parameters into POV-Ray in order to
>> move a "virtual head" rendered with it. This gives the video:
>>
>> <http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
>>
>> Has anybody already seen this kind of demonstration result?
>>
> Neat! I do not recall seeing this sort of camera-to-model coupling
> being accomplished with POV-Ray.
The processing is only based on computer vision and image processing,
with one webcam. I obtained the parameters of the movement with image
registration and an original projective model. I don't think this
result can be obtained without the POV-Ray scripting capabilities.
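To make that scripting link concrete, here is a hedged sketch of one way the
recovered pose could be handed to POV-Ray; the file and identifier names
(head_pose.inc, HeadPose, VirtualHead) are invented for the illustration and
are not taken from the author's scene.

# Hedged sketch of the POV-Ray side of the coupling: write the recovered
# rotation matrix R (3x3) and translation t (3-vector) into an include file
# as a POV-Ray "matrix" transform. Handedness/axis conventions between the
# vision frame and POV-Ray's left-handed frame are glossed over here.
def write_pov_pose(path, R, t):
    # POV-Ray's matrix keyword takes 12 values: the three rotation rows
    # followed by the translation row.
    vals = list(R[0]) + list(R[1]) + list(R[2]) + list(t)
    with open(path, "w") as f:
        f.write("#declare HeadPose = transform { matrix <%s> };\n"
                % ", ".join("%f" % v for v in vals))

# The scene itself then only needs something like:
#   #include "head_pose.inc"
#   object { VirtualHead transform HeadPose }
# with one frame rendered per webcam image (e.g. an animation loop over the
# saved include files).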
Best regards,
--
<http://hebergement.u-psud.fr/lecoat/>
Hi,
Francois LE COAT writes:
> William F Pokorny writes:
>> Francois LE COAT wrote:
>>> I have a webcam on my Vaio ... I've moved my head in front of the
>>> camera, extracted the pose of the head with projective transform
>>> parameters, and re-injected those parameters into POV-Ray in order to
>>> move a "virtual head" rendered with it. This gives the video:
>>>
>>> <http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
>>>
>>> Has anybody already seen this kind of demonstration result?
>>>
>> Neat! I do not recall seeing this sort of camera-to-model coupling
>> being accomplished with POV-Ray.
>
> The processing is only based on computer vision and image processing,
> with one webcam. I obtained the parameters of the movement with image
> registration and an original projective model. I don't think this
> result can be obtained without the POV-Ray scripting capabilities.
I have done the same experiment again, and it gives this video ...
<https://www.youtube.com/watch?v=an_9_BFjAEg>
This is an eight-degrees-of-freedom (8-dof) correspondence of images.
In the video, the projective transform is shown. There are also the
mosaics of the 3D head tracking, and the reconstruction of the motion
with POV-Ray. I also displayed the optical flow between successive
images. The head pose is obtained even for very large movements!
I added some information to the video, so that you can understand ...
I hope it helps =)
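As an aside for anyone wanting to reproduce an 8-dof alignment between
successive frames, a common off-the-shelf approach (not necessarily the
author's own registration method) is OpenCV's ECC maximization with a
homography motion model:

# Hedged sketch: 8-dof image registration between two successive grayscale
# frames using ECC maximization with a homography motion model. A standard
# technique, sketched here for illustration only.
import cv2
import numpy as np

def register_8dof(prev_gray, cur_gray):
    warp = np.eye(3, dtype=np.float32)                 # start from the identity
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(prev_gray, cur_gray, warp,
                                   cv2.MOTION_HOMOGRAPHY, criteria, None, 5)
    return warp                                        # 3x3 projective transform

# Warping the previous frame with the result and outlining its border over the
# current frame gives the kind of frame-to-frame mosaic shown in the video.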
Best regards,
--
François LE COAT
<https://hebergement.universite-paris-saclay.fr/lecoat>
Francois LE COAT <lec### [at] atariorg> wrote:
> I have done the same experiment again, and it gives this video ...
>
> <https://www.youtube.com/watch?v=an_9_BFjAEg>
Wow. That's _really_ close agreement between frames! I love the bordering of
the successive frames to show the perspective adjustment.
You've been at this a LONG time, and your persistence and dedication show
and have paid off well.
Have you added more details of the process on your website?
Do you think this would work well to take maps and satellite images and place
them into register?
Thanks again so much for sharing your excellent work. :)
Bill
Hi,
Bald Eagle writes:
> Francois LE COAT wrote:
>> I have done the same experiment again, and it gives this video ...
>>
>> <https://www.youtube.com/watch?v=an_9_BFjAEg>
>
> Wow. That's _really_ close agreement between frames! I love the bordering
> of the successive frames to show the perspective adjustment.
>
> You've been at this a LONG time, and your persistence and dedication show
> and have paid off well.
>
> Have you added more details of the process on your website?
> Do you think this would work well to take maps and satellite images and
> place them into register?
>
> Thanks again so much for sharing your excellent work. :)
>
> Bill
There's a web page of the experiment, with another image dataset, at:
<https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/camera_localization.html>
The observed scene is different, but the correspondence of images is
performed under the same conditions, with the exact same method. I'm
currently writing a scientific article to explain all of this.
Thanks for your interest, and your kind help with POV-Ray transforms.
Best regards,
--
François LE COAT
<https://hebergement.universite-paris-saclay.fr/lecoat>
Hi,
> William F Pokorny writes:
>> Francois LE COAT wrote:
>>> I have a webcam on my Vaio ... I've moved my head in front of the
>>> camera, extracted the pose of the head with projective transform
>>> parameters, and re-injected those parameters into POV-Ray in order to
>>> move a "virtual head" rendered with it. This gives the video:
>>>
>>> <http://hebergement.u-psud.fr/lecoat/camera_fixe.mp4>
>>>
>>> Has anybody already seen this kind of demonstration result?
>>>
>> Neat! I do not recall seeing this sort of camera-to-model coupling
>> being accomplished with POV-Ray.
>
> The processing is only based on computer vision and image processing,
> with one webcam. I obtained the parameters of the movement with image
> registration and an original projective model. I don't think this
> result can be obtained without the POV-Ray scripting capabilities.
A web page was made to illustrate the head-pose experiment...
When we want to determine the movement, there are two simplifying
hypotheses. Either the camera is fixed and the observed scene is moving,
or the camera moves and the scene is static. In the general case, both
the camera and the scene are moving, and it is necessary to segment the
static and dynamic elements of what is observed. Here the camera is
fixed, and it observes the person who sits in front of the computer and
moves. The goal of the experiment is to reconstruct the visible relief,
by "monocular depth"...
<https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/profondeur.html>
That is to say, we obtain the depth (inversely proportional to the
disparity) by measuring the optical flow with a single camera, though
this measurement is classically made by binocular vision (for the
disparity).
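A hedged sketch of that idea, for illustration only (the scale factor is
arbitrary and the camera geometry is ignored): dense optical flow stands in
for the stereo disparity, and depth is taken as inversely proportional to
its magnitude.

# Hedged sketch of "monocular depth" from optical flow: the per-pixel flow
# magnitude plays the role of the stereo disparity, and depth is taken as
# inversely proportional to it. Real depth would need calibration; the scale
# and epsilon below are arbitrary illustration values.
import cv2
import numpy as np

def depth_from_flow(prev_gray, cur_gray, scale=1.0, eps=1e-3):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    disparity = np.linalg.norm(flow, axis=2)   # flow magnitude per pixel
    return scale / (disparity + eps)           # larger motion => nearer point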
Best regards,
--
<https://hebergement.universite-paris-saclay.fr/lecoat>