Hello,
I am evaluating whether it would be possible to simulate the trajectory of a
Braitenberg vehicle with POV. A BraitBot (type 2) is essentially a vehicle with
two light sensors each connected to one of two motors. The speed of motor
rotation is wholly dependent on the light intensity received by the sensor. We
have a working prototype here where the light sensors are being replaced with a
360 degree camera.
What I need is to be able to model a robot with the camera on it in POV and move
the 'bot through a POV scene. Where the 'bot goes is a function of the light
values received by the camera.
I therefore need routines for the transformation camera picture -> new
coordinates. My query is the following: the camera picture covers 360 degrees
and will be split into 8 regions, so I need to be able to capture the camera
output into a programming structure, such as an array, analyse it and return a
new coordinate.
The obvious way of doing it would be to make a raytrace, store the picture in a
file, get a Linux task to analyse that file and return the coordinates, and then
set POV up for another raytrace. Is there any way of capturing the camera input
and linking in external routines more elegantly?
Thanks
Hans
"ibhd" <han### [at] ibhdorancom> wrote in message
news:web.4821b68b85d176678fd0f2f30@news.povray.org...
Hi Hans,
Your question stimulated the following thoughts, none of which are based on
anything I've tried, so I can't guarantee any of it will work, but maybe it
will inspire further thought/discussion.
If you set up a camera with 360 degree coverage and render at a resolution
of 1x8, then you may be able to get an image where POV-Ray averages out the
regions for you, so long as you can find appropriate camera and antialiasing
settings that match the Linux algorithm you wish to use for the real thing.
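For instance (untested, and purely a guess at suitable settings) the camera
might be something like:

  camera {
    spherical
    angle 360, 180            // full horizontal sweep
    location <0, 0.5, 0>      // in practice: the 'bot's current position
    look_at  <0, 0.5, 1>      // in practice: along the 'bot's current heading
  }

rendered with +W8 +H1 so that each pixel covers one 45 degree slice (I've
assumed here that the 8 regions are spread around the horizon).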
You should be able to set up an animation that retrieves the previous 1x8
pixel image as a pigment. You can use the eval_pigment function to determine
the brightness in each of the 8 regions/pixels and work out the next camera
position, writing it to a CSV file.
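In SDL that part might look roughly like the following. This is completely
untested; the file names (braitbot_prev.png, bot_state.txt, bot_path.csv), the
8-pixels-across orientation and the motor rule at the end are all just
placeholders for whatever your real controller does.

  #include "functions.inc"          // provides the eval_pigment() macro

  // The 8x1 panorama written out by the previous frame.
  #declare Panorama = pigment {
    image_map { png "braitbot_prev.png" }
  }

  // Average brightness of each of the 8 regions (one pixel per region).
  #declare Sensor = array[8]
  #declare I = 0;
  #while (I < 8)
    #declare C = eval_pigment(Panorama, <(I + 0.5)/8, 0.5, 0>);
    #declare Sensor[I] = (C.red + C.green + C.blue)/3;
    #declare I = I + 1;
  #end

  // Read the position/heading saved by the previous frame
  // (on frame 1 you would seed this file rather than read it).
  #fopen StateIn "bot_state.txt" read
  #read (StateIn, OldX, OldZ, Heading)
  #fclose StateIn

  // Placeholder type-2 cross-coupled motor rule.
  #declare Left  = Sensor[4] + Sensor[5] + Sensor[6] + Sensor[7];
  #declare Right = Sensor[0] + Sensor[1] + Sensor[2] + Sensor[3];
  #declare Heading = Heading + (Left - Right)*10;   // degrees of turn this frame
  #declare Step    = (Left + Right)*0.05;           // distance moved this frame
  #declare NewPos  = <OldX, 0, OldZ> + vrotate(z*Step, y*Heading);

  // Save the new state and append a CSV row describing the path.
  #fopen StateOut "bot_state.txt" write
  #write (StateOut, NewPos.x, ",", NewPos.z, ",", Heading, "\n")
  #fclose StateOut
  #fopen Path "bot_path.csv" append
  #write (Path, NewPos.x, ",", NewPos.z, "\n")
  #fclose Path

The camera for the current frame would then be placed at NewPos, pointing
along the new heading.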
Having run through the animation once to output the path that the robot/camera
would take, you could then do a second animation at 640x480 for use by a human
observer.
Regards,
Chris B.
ibhd enlightened us on 2008-05-07 10:02 -->
It may be possible to do everything inside POV-Ray. By using the trace function
and eval_pigment you could get the "light" intensity in various directions. You
calculate the direction to move for the next frame, save it to a file, and render
the frame. On the next frame, you read back the location and direction saved by
the previous frame and repeat the process.
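Something along these lines, completely untested, with a stand-in scene (the
object names, positions and the single ScenePig pigment are just placeholders):

  #include "functions.inc"               // for eval_pigment()

  // Stand-in scene: in practice a union of your real objects.
  #declare Scene = union {
    plane { y, 0 pigment { rgb 0.2 } }
    sphere { <0, 1, 20>, 5 pigment { rgb 1 } finish { ambient 1 } } // broad "light"
  }
  #declare ScenePig = pigment { rgb 1 }  // you'd need the pigment of the hit object

  #declare BotPos = <0, 0.5, 0>;
  #declare Intensity = array[8]
  #declare I = 0;
  #while (I < 8)
    #declare Dir  = vrotate(z, y*(I*45));   // 8 directions, 45 degrees apart
    #declare Norm = <0, 0, 0>;
    #declare Hit  = trace(Scene, BotPos, Dir, Norm);
    #if (vlength(Norm) > 0)                 // trace found a surface this way
      #declare C = eval_pigment(ScenePig, Hit);
      #declare Intensity[I] = (C.red + C.green + C.blue)/3;
    #else
      #declare Intensity[I] = 0;
    #end
    #declare I = I + 1;
  #end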
You may need to make 100 traces for this to work.
It works best if the lights are broad, like surfaces with a high ambient. It
can't work with a point_light unless it is close to a surface.
In the same scene, you can have several such robots, some of which carry a
light or are bright themselves.
Alain
"Alain" <ele### [at] netscapenet> wrote in message
news:48239703$1@news.povray.org...
> It may be possible to do everything inside POV-Ray. By using the trace
> function and eval_pigment you could get the "light" intensity in various
> directions.
Hi Alain,
I don't think this would work well enough to emulate a real device, mainly
because the eval_pigment function works on pigments rather than on the amount
of light coming off a surface in the direction of the camera. You'd have a
devil of a job taking any account of reflection, transparency, lighting in
the scene or the object's ambient setting (which is part of the finish
rather than the pigment).
Also the 'trace' function works on an object that you pass it. If you pass
it a union of all objects in the scene you'd still have a problem working
out which object it hit and therefore which pigment to use. Otherwise you'd
have to trace every component or sub-component that had a separate pigment
and work out which point of contact is closest to the camera.
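For what it's worth, that "trace every component" variant might look something
like this (untested; Ball, Floor and their pigments are hypothetical stand-ins
for the separately-pigmented parts of the scene):

  #include "functions.inc"

  // Two hypothetical, separately-pigmented scene components.
  #declare Ball_P  = pigment { rgb <1, 1, 0.5> }
  #declare Floor_P = pigment { checker rgb <0,0,0>, rgb <1,1,1> }
  #declare Ball  = sphere { <0, 1, 10>, 2 pigment { Ball_P } }
  #declare Floor = plane { y, 0 pigment { Floor_P } }

  #declare Objs = array[2] { Ball, Floor }
  #declare Pigs = array[2] { Ball_P, Floor_P }

  #declare CamPos = <0, 0.5, 0>;           // the 'bot/camera position
  #declare Dir    = z;                     // one of the 8 sample directions

  // Trace each component and keep the pigment of the closest hit.
  #declare BestDist = 1e30;
  #declare BestCol  = color rgb <0, 0, 0>;
  #declare I = 0;
  #while (I < dimension_size(Objs, 1))
    #declare Norm = <0, 0, 0>;
    #declare Hit  = trace(Objs[I], CamPos, Dir, Norm);
    #if (vlength(Norm) > 0)
      #declare D = vlength(Hit - CamPos);
      #if (D < BestDist)
        #declare BestDist = D;
        #declare BestCol  = eval_pigment(Pigs[I], Hit);
      #end
    #end
    #declare I = I + 1;
  #end

It would still suffer from all of the lighting problems above, of course.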
Regards,
Chris B.
Chris B wrote:
> "Alain" <ele### [at] netscapenet> wrote in message
> news:48239703$1@news.povray.org...
>> ibhd nous illumina en ce 2008-05-07 10:02 -->
>>> Hello,
>>>
>>> I am evaluating whether it would be possible to simulate the trajectory
>>> of a
>>> Braitenberg vehicle with POV. A BraitBot (type 2) is essentially a
>>> vehicle with
>>> two light sensors each connected to one of two motors. The speed of motor
>>> rotation is wholly dependent on the light intensity received by the
>>> sensor. We
>>> have a working prototype here where the light sensors are being replaced
>>> with a
>>> 360 degree camera.
>>>
>>> What I need is to be able to model a robot with the camera on it in POV
>>> and move
>>> the 'bot through a POV scene. Where the 'bot goes is a function of the
>>> light
>>> values received by the camera.
>>>
>>> I therefore need routines for the transformation camera-picture -> new
>>> coordinates. My query is the following. The camera picture is 360 degrees
>>> which
>>> will be split into 8 regions -> I need to be able to capture the camera
>>> output
>>> into a programming structure, such as an array, analyse it and return a
>>> new
>>> coordinate.
>>>
>>> The obvious way of doing it would be to make a raytrace, store the
>>> picture in a
>>> file get a linux task to analyse that file and then return the
>>> coordinates and
>>> set pov up for another raytrace. Is there any way of capturing the camera
>>> input
>>> and linking external routines more elegantly?
>>>
>>> Thanks
>>>
>>> Hans
>>>
>>>
>>>
>> It may be possible to do everything inside POV-Ray. By using the trace
>> function and eval_pigment you could get the "light" intensity in various
>> directions.
>
> Hi Alain,
>
> I don't think this would work well enough to emulate a real device. Mainly
> because the eval_pigment function works on pigments rather than the amount
> of light coming off a surface in the direction of the camera. You'd have a
> devil of a job taking any account of reflection, transparency, lighting in
> the scene or the object's ambient setting (which is part of the finish
> rather than the pigment).
>
> Also the 'trace' function works on an object that you pass it. If you pass
> it a union of all objects in the scene you'd still have a problem working
> out which object it hit and therefore which pigment to use. Otherwise you'd
> have to trace every component or sub-component that had a separate pigment
> and work out which point of contact is closest to the camera.
>
> Regards,
> Chris B.
>
>
Both methods are equally fast.
Or he can group the objects into two groups and trace each group. Pick the
group whose hit is closer, split it, trace each half, and so on until you're
left with just one object.
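An untested sketch of that halving idea as a pair of macros (all the names
here are mine, not from any standard include file):

  #macro GroupUnion(Objs, From, To)
    union {
      #local J = From;
      #while (J <= To)
        object { Objs[J] }
        #local J = J + 1;
      #end
    }
  #end

  // Returns the index of the object in Objs[From..To] hit first along Dir
  // from Pos, or -1 if nothing in that range is hit.
  #macro FirstHit(Objs, From, To, Pos, Dir)
    #local Result = -1;
    #if (From = To)
      #local N = <0, 0, 0>;
      #local H = trace(Objs[From], Pos, Dir, N);
      #if (vlength(N) > 0) #local Result = From; #end
    #else
      #local Mid = floor((From + To)/2);
      #local GA = GroupUnion(Objs, From, Mid)
      #local GB = GroupUnion(Objs, Mid + 1, To)
      #local NA = <0, 0, 0>;
      #local NB = <0, 0, 0>;
      #local HA = trace(GA, Pos, Dir, NA);
      #local HB = trace(GB, Pos, Dir, NB);
      #if (vlength(NA) > 0 & (vlength(NB) = 0 | vlength(HA - Pos) <= vlength(HB - Pos)))
        #local Result = FirstHit(Objs, From, Mid, Pos, Dir);
      #else
        #if (vlength(NB) > 0)
          #local Result = FirstHit(Objs, Mid + 1, To, Pos, Dir);
        #end
      #end
    #end
    Result
  #end

  // e.g.  #declare Which = FirstHit(Objs, 0, dimension_size(Objs, 1) - 1, BotPos, Dir);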
--
You know you've been raytracing too long when...
you start thinking up your own "You know you've been raytracing too long
when..." sigs (I did).
-Johnny D
Johnny D