povray.macintosh : [Q] POV-Ray in command line (Message 18 to 27 of 37)
From: jr
Subject: Re: [Q] POV-Ray in command line
Date: 30 Jan 2021 11:00:01
Message: <web.601581d3b3a5582679819d980@news.povray.org>
hi,

Francois LE COAT <lec### [at] atariorg> wrote:
> ...
> >> ...
> >> The rendering of the POV-Ray synthesis images is done in "real time".
> >> It depends on past data, and future parameters are unknown. I have a
> >> POV-Ray script into which I substitute series of float numbers, which
> >> produces 2000 different scenes, one after the other.
> >> ...
> Well, I have a model "pacman.pov" from which I generate "pacman_mod.pov",
> substituting the parameters at step n. Then I render this script,
> producing "pacman.png", and I move "pacman.png" to "pac%04d.png" with n.
> ...

I found the web page helpful, though in the video I find it difficult to
reconcile the movement of the indicator/pacman with the changes in orientation
wrt the images on the left (cf ~0:25, ~1:14).  concluding thought(s): in reply
to BayashiPascal you write "The issue here is just to make POV-Ray quiet while
I'm rendering."  with respect, I disagree.  the issue, I think, even not
knowing the particulars, is work-flow[*]: work with POV-Ray and its animation
provisions, rather than launching thousands of instances.  hope you will find
a satisfactory solution.


regards, jr.


[*] in my head/naively I'd collect the sets of eight values in a text file
(CSV style), use a few lines of 'awk' or such to convert that to an .inc file
containing an array, then run an "animation" where the scene simply displays
the current video frame and the indicator gets rendered "on top".  a rough
sketch follows.
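
e.g. (untested, names illustrative; 'params.csv' holding one line of eight
comma-separated values per step):

  awk -F, 'BEGIN { print "#declare Params = array[2000][8] {" }
           NR>1  { printf ",\n" }
                 { printf "  {%s,%s,%s,%s,%s,%s,%s,%s}",$1,$2,$3,$4,$5,$6,$7,$8 }
           END   { print "\n}" }' params.csv > params.inc

then render once as an animation (Initial_Frame=1 Final_Frame=2000 in the
ini) and index the array with the built-in frame_number, e.g.
'#declare Tx = Params[frame_number-1][0];'.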



From: BayashiPascal
Subject: Re: [Q] POV-Ray in command line
Date: 30 Jan 2021 21:05:01
Message: <web.60160f37b3a558266396ca3e0@news.povray.org>
Hi,

Thank you very much for the web page; it looks like a very interesting
research project.

> POV-Ray is totally appropriate to show what I'm doing, because it can
> represent the eight parameters I'm obtaining from the camera movement.

Sure, POV-Ray can do that perfectly. What I meant was that another rendering
engine could do it as well, while avoiding the problem you face with POV-Ray.
For example, a graphics library integrated into whatever you're using to
launch the POV-Ray instances would avoid creating those 2000 external
processes. In a previous comment you wrote "pac%04d.png"; maybe you're using
the C programming language to generate the POV-Ray scripts and launch their
rendering? In that case, using a graphics library such as OpenGL to render
the images directly in the C program, instead of going through POV-Ray,
would give you the same result while avoiding the problem you've encountered.

But I still do not completely understand your constraints, so I shall not
speculate further.

Anyway, good luck with your research! :-)



Francois LE COAT <lec### [at] atariorg> wrote:
> Hi,
>
> To explain what I'm doing, I've made a web page that is not yet finished:
>
> <https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/temporal_disparity.html>
>
> POV-Ray is totally appropriate to show what I'm doing, because it can
> represent the eight parameters I'm obtaining from the camera movement.
>
> I obtain the following eight:
>
> - Tx horizontal translation
> - Ty vertical translation
> - Tz depth translation
> - Rx pitch angle
> - Ry yaw angle
> - Rz roll angle
> - Sx horizontal shear angle
> - Sy vertical shear angle
>
> and POV-Ray can represent them all. This has already been discussed in
> <news://povray.advanced-users>, because I'm modelling the 3D motion.
>
> The issue here is just to make POV-Ray quiet while I'm rendering...
>
> BayashiPascal writes:
> > Trying to guess what you're doing: the execution of the 2000 renderings
> > is automated in some way, but you're getting the data used to create the
> > rendering script in real time, one image after the other, waiting for
> > new data to render the next image, and thus you don't know the new
> > parameters in advance. Am I right?
> >
> > If that's the case, and if the rendering could be delayed, you could
> > wait until you have acquired the whole data for the 2000 images, and
> > implement a solution as we've suggested in previous posts, rendering
> > the images at the end of acquisition. But maybe you need to render each
> > image as soon as its data are acquired, and use the rendered image to
> > acquire the next data?
> >
> > Your video really sparks my curiosity. I'm also working on a project
> > using POV-Ray, real-world data and depth images. Would you mind telling
> > us a little more about what you're doing? It looks like some kind of 3D
> > reconstruction from data acquired by a drone?
> >
> > I also understand that finding a solution to the finder problem, which
> > I have no idea of, may be more practical for your use case, but, like
> > jr, I still believe there may be a workaround. The image you render
> > looks simple, and given the real-time constraints you seem to have
> > (during acquisition, rendering, or post-processing of the rendered
> > images), maybe POV-Ray is simply not the appropriate tool for your use
> > case?
> >
> > Hoping to be helpful,
> > Pascal
> >
> > Francois LE COAT wrote:
> >> jr writes:
> >>> Francois LE COAT wrote:
> >>>> ...
> >>>> The rendering of the POV-Ray synthesis images is done in "real
> >>>> time". It depends on past data, and future parameters are unknown.
> >>>> I have a POV-Ray script into which I substitute series of float
> >>>> numbers, which produces 2000 different scenes, one after the other.
> >>>>
> >>>> I can't launch POV-Ray once to produce 2000 images. I must launch
> >>>> it 2000 times, to render 2000 images. ...
> >>>> The question is how to configure POV-Ray with the command line, so
> >>>> that it is quiet? ...
> >>>
> >>> looking at the ini you posted earlier, am I correct in assuming that
> >>> the generating "POV-Ray script" produces 2000 scene files named
> >>> 'pacman_mod.pov'?
> >>
> >> Well, I have a model "pacman.pov" from which I generate
> >> "pacman_mod.pov", substituting the parameters at step n. Then I render
> >> this script, producing "pacman.png", and I move "pacman.png" to
> >> "pac%04d.png" with n.
> >>
> >>> also, since I cannot believe that you'd invoke a(ny) program 2000
> >>> times manually, the script must somehow tell the 2nd from the 23rd
> >>> run.  do you run the whole lot from within another script?  (o/wise
> >>> how do you prevent overwriting 'pacman.png'?)
> >>
> >> It results in 2000 files, from "pac0001.png" to "pac2000.png", over
> >> 2000 steps. I have launched POV-Ray 2000 times, knowing the parameters
> >> from steps 1 to n. At each step n I have new parameters, but I don't
> >> know step n+1.
> >>
> >>> are you free to modify said POV-Ray script?  then, for instance, you
> >>> could change it to generate an array and include that from your
> >>> scene, using frame_number as index; though there'd likely be other,
> >>> more efficient ways.  (I also assume that the newly calculated data
> >>> only depend on the previous.)
> >>
> >> I can't generate an array of parameters, because I only know them
> >> partially. I'm drawing a trajectory step by step, and I can't predict
> >> the future.
> >>
> >>> anyway, I'm fairly certain that the "problem" can be addressed w/out
> >>> resorting to compiling a new program.  :-)
> >>
> >> It's the simplest solution. If there's no command-line option to make
> >> POV-Ray quiet, I can build a macOS/MacPorts version that will be
> >> quiet.
>
> Thanks for your help.
>
> Regards,
>
> --
> <http://eureka.atari.org/>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 31 Jan 2021 08:14:10
Message: <6016ad22$1@news.povray.org>
Hi,

I had a talk with Bald Eagle, who is present in this newsgroup, about
POV-Ray's transformations, in order to render all eight parameters I'm
obtaining. I could also use OpenGL, because it may be more appropriate for
"real time". But POV-Ray is more realistic, and will become more and more
convenient for "real-time" photographic rendering.
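
In POV-Ray scene language, the eight parameters map onto transformations
roughly as follows (a sketch only; "Indicator" is an illustrative
identifier, and the shear convention shown is an assumption, not
necessarily what Bald Eagle and I settled on):

  object { Indicator
    // shear (Sx, Sy in degrees): x picks up a y component and vice versa
    matrix <1,                tan(radians(Sy)), 0,
            tan(radians(Sx)), 1,                0,
            0,                0,                1,
            0,                0,                0>
    rotate <Rx, Ry, Rz>        // pitch, yaw, roll, in degrees
    translate <Tx, Ty, Tz>     // estimated camera translation
  }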

The web page that I mentioned will be modified, because it doesn't yet
present everything I wanted to explain about what I'm doing.

All this is written in C/C++ using POV-Ray and OpenCV, but I also use shell
scripts, mainly `tcsh`. I make extensive use of the macOS MacPorts
environment, which provides `xv`, ImageMagick, `ffmpeg`, etc., and of
course POV-Ray and the OpenCV library, regularly updated.

None of this could have been developed without a Unix environment. It also
works under GNU/Linux, with the same tools.
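
The per-step loop looks roughly like this (a simplified sh sketch of what
the tcsh scripts do; "substitute" is a hypothetical stand-in for the
parameter-substitution step). On a Unix-style build such as the MacPorts
one, the -D, -V and -GA switches (display off, verbose off, all console
streams off) should give the quiet rendering discussed earlier in this
thread:

  #!/bin/sh
  n=1
  while [ $n -le 2000 ]; do
      # generate pacman_mod.pov from pacman.pov and the step-n parameters
      ./substitute pacman.pov $n > pacman_mod.pov
      # render quietly: no preview window, no progress, no console text
      povray +Ipacman_mod.pov +Opacman.png +FN -D -V -GA
      # keep each frame under a numbered name
      mv pacman.png "$(printf 'pac%04d.png' "$n")"
      n=$((n + 1))
  done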

BayashiPascal writes:
> Thank you very much for the web page; it looks like a very interesting
> research project.
>
> Sure, POV-Ray can do that perfectly. What I meant was that another
> rendering engine could do it as well, while avoiding the problem you face
> with POV-Ray. For example, a graphics library integrated into whatever
> you're using to launch the POV-Ray instances would avoid creating those
> 2000 external processes. ...
>
> But I still do not completely understand your constraints, so I shall
> not speculate further.
>
> Anyway, good luck with your research! :-)
> ...

Thanks for your help.

Regards,

-- 

<http://eureka.atari.org/>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 18 Feb 2021 11:45:17
Message: <602e999d$1@news.povray.org>
Hi,

I've completed the web page I mentioned. The image processing is applied to
a drone flying in the Vosges a few days ago. I was thinking of applying the
same computations to the flight of Ingenuity on Mars...

<https://en.wikipedia.org/wiki/Mars_Helicopter_Ingenuity>

The autonomous drone is landing today, and we'll have video sequences =)

Francois LE COAT writes:
> I had a talk with Bald Eagle, who is present in this newsgroup, about
> POV-Ray's transformations, in order to render all eight parameters I'm
> obtaining. I could also use OpenGL, because it may be more appropriate
> for "real time". But POV-Ray is more realistic, and will become more and
> more convenient for "real-time" photographic rendering.
> ...
> None of this could have been developed without a Unix environment. It
> also works under GNU/Linux, with the same tools.
> ...

Thanks for your help.

Regards,

-- 

<http://eureka.atari.org/>



From: Bald Eagle
Subject: Re: [Q] POV-Ray in command line
Date: 18 Feb 2021 13:30:01
Message: <web.602eb1e2b3a558261f9dae300@news.povray.org>
Francois LE COAT <lec### [at] atariorg> wrote:
> Hi,
>
> I've completed the web page I mentioned. The image processing is applied
> to a drone flying in the Vosges a few days ago. I was thinking of
> applying the same computations to the flight of Ingenuity on Mars...
>
> <https://en.wikipedia.org/wiki/Mars_Helicopter_Ingenuity>

Nice.
I modeled a drone propeller like that ... 7 years ago?

<http://news.povray.org/povray.binaries.images/attachment/%3Cweb.53dd26749e0d00ba5e7df57c0%40news.povray.org%3E/propeller2_dragonfly.png?ttop=432863&toff=950>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 18 Feb 2021 13:49:31
Message: <602eb6bb$1@news.povray.org>
Hi,

Bald Eagle writes:
> Francois LE COAT wrote:
>> I've completed the web page I mentioned. The image processing is
>> applied to a drone flying in the Vosges a few days ago. I was thinking
>> of applying the same computations to the flight of Ingenuity on Mars...
>>
>> <https://en.wikipedia.org/wiki/Mars_Helicopter_Ingenuity>
>
> Nice.
> I modeled a drone propeller like that ... 7 years ago?
>
> <http://news.povray.org/povray.binaries.images/attachment/%3Cweb.53dd26749e0d00ba5e7df57c0%40news.povray.org%3E/propeller2_dragonfly.png?ttop=432863&toff=950>

Great =)

The goal is to model the trajectory and the visible relief from a simple
video, using the images from Ingenuity's camera, as in:

<https://hebergement.universite-paris-saclay.fr/lecoat/demoweb/temporal_disparity.html>

The video is <https://www.youtube.com/watch?v=MzWu7zwdJSk>. Do you
remember that we talked about translate <Tx,Ty,Tz>, rotate <Rx,Ry,Rz> and
shear (or skew) angles <Sx,Sy,0> for the camera? From those we can deduce
the trajectory and the monocular depth.

You'll see what it gives on Mars, if you haven't already seen it in a
forest, with a drone, in winter ... It is spectacular :-)

Thanks for your help.

Best regards,

-- 

<http://eureka.atari.org/>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 16 Mar 2021 12:17:07
Message: <6050da03$1@news.povray.org>
Hi,

> Bald Eagle writes:
>> Nice.
>> I modeled a drone propeller like that ... 7 years ago?
>
> Great =)
>
> The goal is to model the trajectory and the visible relief from a simple
> video, using the images from Ingenuity's camera. ... Do you remember
> that we talked about translate <Tx,Ty,Tz>, rotate <Rx,Ry,Rz> and shear
> (or skew) angles <Sx,Sy,0> for the camera? From those we can deduce the
> trajectory and the monocular depth.
>
> You'll see what it gives on Mars, if you haven't already seen it in a
> forest, with a drone, in winter ... It is spectacular :-)

I recently worked a little further on the trajectory of the drone...

     <https://www.youtube.com/watch?v=3PdUvGDCbQc>

Instead of using only Ry (yaw) and Tz (translation), I also used Tx
(translation) and Rz (roll) to reconstruct the trajectory. I couldn't use
Ty (translation) and Rx (pitch), because the result doesn't look like a
valid camera displacement; I have no real explanation for this.

But the shape of the drone's trajectory is looking better ...
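
Schematically, the trajectory is integrated step by step from the
per-frame parameters; roughly (a simplified planar form, assuming Ry is
the heading increment and <Tx,Tz> the translation in camera coordinates;
my actual axis conventions and scale handling differ):

  \theta_n = \theta_{n-1} + Ry_n
  x_n = x_{n-1} + Tx_n \cos\theta_n - Tz_n \sin\theta_n
  z_n = z_{n-1} + Tx_n \sin\theta_n + Tz_n \cos\theta_n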

Thanks for your help.

Best regards,

-- 

<http://eureka.atari.org/>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 30 May 2021 11:00:01
Message: <60b3a871$1@news.povray.org>
Hi,

>> The goal is to model the trajectory and the visible relief from a
>> simple video, using the images from Ingenuity's camera. ...
>
> I recently worked a little further on the trajectory of the drone...
>
> <https://www.youtube.com/watch?v=3PdUvGDCbQc>
>
> Instead of using only Ry (yaw) and Tz (translation), I also used Tx
> (translation) and Rz (roll) to reconstruct the trajectory. ...
>
> But the shape of the drone's trajectory is looking better ...

I worked on the sixth flight of the "Ingenuity" helicopter on Mars...

	<https://www.youtube.com/watch?v=pKUAsuXF6EA>

Unfortunately, there was an incident with the flight timestamp data, and
the video sources from NASA aren't very good for processing :-(
I hope we will get a color video from the second onboard camera :-)

Thanks for your help.

Best regards,

-- 

<http://eureka.atari.org/>



From: Francois LE COAT
Subject: Re: [Q] POV-Ray in command line
Date: 26 Jun 2021 09:51:19
Message: <60d730d7$1@news.povray.org>
Hi,

> I worked on the sixth flight of the "Ingenuity" helicopter on Mars...
>
> <https://www.youtube.com/watch?v=pKUAsuXF6EA>
>
> Unfortunately, there was an incident with the flight timestamp data, and
> the video sources from NASA aren't very good for processing :-(
> I hope we will get a color video from the second onboard camera :-)

If you read NASA's status update about the 8th flight of Ingenuity:

<https://mars.nasa.gov/technology/helicopter/status/308>

you will understand that there was no color-camera acquisition for the 7th
and 8th flights on Mars. This was due to the incident on the 6th flight, a
conflict between the acquisitions of the two onboard cameras. Let's hope
that NASA has fixed the timestamping problem for the subsequent flights of
the helicopter, and that we will get a color video from Mars.

Thanks for your help.

Best regards,

-- 

<http://eureka.atari.org/>



From: BayashiPascal
Subject: Re: [Q] POV-Ray in command line
Date: 27 Jun 2021 06:40:00
Message: <web.60d85509b3a55826a3e088d5e0f8c582@news.povray.org>
Francois LE COAT <lec### [at] atariorg> wrote:
> Hi,
>
> If you read NASA's status update about the 8th flight of Ingenuity:
>
> <https://mars.nasa.gov/technology/helicopter/status/308>
>
> you will understand that there was no color-camera acquisition for the
> 7th and 8th flights on Mars. This was due to the incident on the 6th
> flight, a conflict between the acquisitions of the two onboard cameras.
> Let's hope that NASA has fixed the timestamping problem for the
> subsequent flights of the helicopter, and that we will get a color video
> from Mars.
>
> Thanks for your help.
>
> Best regards,
>
> --
> <http://eureka.atari.org/>


Well done! :-)
I too hope there will be color in the next videos.
Your system seems to work quite well. Do you have the data for the actual
trajectory, and can you quantify the accuracy of your reconstructed
trajectory? Are you working directly with the Ingenuity team, or just
using their public data?
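
(For instance, if reference telemetry were available, one common measure
would be the root-mean-square error between the reconstructed positions
\hat{p}_n and the reference positions p_n, after aligning the two
trajectories:

  RMSE = \sqrt{ \frac{1}{N} \sum_{n=1}^{N} \| \hat{p}_n - p_n \|^2 }

This is just a suggestion of mine; I don't know what data NASA publishes.)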

Good luck going forward :-)

Pascal



