"ibhd" <han### [at] ibhdoran com> wrote in message
news:web.4821b68b85d176678fd0f2f30@news.povray.org...
> Hello,
>
> I am evaluating whether it would be possible to simulate the trajectory of
> a Braitenberg vehicle with POV. A BraitBot (type 2) is essentially a
> vehicle with two light sensors, each connected to one of two motors. The
> speed of motor rotation is wholly dependent on the light intensity
> received by the sensor. We have a working prototype here where the light
> sensors are being replaced with a 360-degree camera.
>
> What I need is to be able to model a robot with the camera on it in POV
> and move the 'bot through a POV scene. Where the 'bot goes is a function
> of the light values received by the camera.
>
> I therefore need routines for the transformation camera picture -> new
> coordinates. My query is the following: the camera picture covers 360
> degrees and will be split into 8 regions, so I need to be able to capture
> the camera output into a programming structure, such as an array, analyse
> it and return a new coordinate.
>
> The obvious way of doing it would be to make a raytrace, store the picture
> in a file, get a Linux task to analyse that file, and then return the
> coordinates and set POV up for another raytrace. Is there any way of
> capturing the camera input and linking external routines more elegantly?
>
> Thanks
>
> Hans
>
Hi Hans,
Your question stimulated the following thoughts. None of them are based on
anything I've tried, so I can't guarantee any of it will work, but maybe it
will inspire further thought or discussion.
If you set up a camera with 360-degree coverage and render at a resolution
of 1x8 (one pixel per region), you may be able to get an image where
POV-Ray averages out the regions for you, so long as you can find camera
and antialiasing settings that match the Linux algorithm you wish to use
for the real thing.
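To make the idea concrete, here is a rough Python stand-in for the averaging that rendering a 360-degree panorama down to 8 pixels would effectively perform: each output pixel is the mean of one 45-degree slice of the scan. The function name and the sample values are made up for illustration; this is not POV-Ray code.

```python
# Sketch of the region averaging: each of the 8 output "pixels" is the
# mean of one 45-degree slice of a single panoramic scan line.

def region_brightness(row, regions=8):
    """Average a row of grayscale samples into `regions` equal bins."""
    width = len(row) // regions
    return [sum(row[i * width:(i + 1) * width]) / width
            for i in range(regions)]

# Hypothetical 360-sample scan line (one sample per degree),
# dim on the left half, bright on the right half.
scan = [0.1] * 180 + [0.9] * 180
print(region_brightness(scan))
```

With antialiasing settings chosen to match, the renderer itself would hand you these 8 averages directly as pixel values.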
You should be able to set up an animation that retrieves the previous
frame's 1x8-pixel image as a pigment. You can then use the eval_pigment
function to determine the brightness of each of the 8 regions/pixels, work
out the next camera position, and write it to a CSV file.
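The per-frame update itself (sensor readings in, new pose out, one CSV line per frame) could look roughly like the Python sketch below. It wires each wheel to the sensors on its own side (Braitenberg type 2a) and advances the pose with differential-drive kinematics. The function names, gains, and the region-to-sensor mapping are assumptions for illustration, not POV-Ray APIs; in the scene file itself the equivalent arithmetic would be done in SDL with the eval_pigment results and #fwrite.

```python
import math

def step(x, y, heading, regions, gain=1.0, wheel_base=0.2, dt=0.1):
    """One Braitenberg type-2a update from 8 region brightnesses.

    Assumes regions 0-3 cover the left half of the panorama and 4-7 the
    right half; each wheel is driven by the sensors on its own side.
    """
    left_sensor = sum(regions[:4]) / 4
    right_sensor = sum(regions[4:]) / 4
    v_left = gain * left_sensor           # ipsilateral wiring (type 2a)
    v_right = gain * right_sensor
    v = (v_left + v_right) / 2            # forward speed
    omega = (v_right - v_left) / wheel_base  # turn rate
    heading += omega * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return x, y, heading

# One frame with more light on the right: type 2a steers the 'bot away
# from the brighter side.
regions = [0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9]
x, y, h = step(0.0, 0.0, 0.0, regions)
print(f"{x:.4f},{y:.4f},{h:.4f}")  # the CSV line for the next render
```

Swapping the wiring (left sensors to right wheel and vice versa) would give the type 2b "aggression" behaviour instead.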
Having run through the animation once to output the path that the
robot/camera would take, you could then do a second animation at 640x480
for use by a human observer.
Regards,
Chris B.