  Re: pov runtime interface with external software  
From: Jan Dvorak
Date: 9 May 2008 15:18:46
Message: <4824a396$1@news.povray.org>
Chris B wrote:
> "Alain" <ele### [at] netscapenet> wrote in message 
> news:48239703$1@news.povray.org...
>> ibhd wrote on 2008-05-07 10:02:
>>> Hello,
>>>
>>> I am evaluating whether it would be possible to simulate the trajectory
>>> of a Braitenberg vehicle with POV. A BraitBot (type 2) is essentially a
>>> vehicle with two light sensors, each connected to one of two motors. The
>>> speed of motor rotation is wholly dependent on the light intensity
>>> received by the sensor. We have a working prototype here where the light
>>> sensors are being replaced with a 360-degree camera.
>>>
>>> What I need is to be able to model a robot with the camera on it in POV
>>> and move the 'bot through a POV scene. Where the 'bot goes is a function
>>> of the light values received by the camera.
>>>
>>> I therefore need routines for the transformation camera picture -> new
>>> coordinates. My query is the following. The camera picture covers 360
>>> degrees and will be split into 8 regions, so I need to be able to
>>> capture the camera output into a programming structure, such as an
>>> array, analyse it and return a new coordinate.
>>>
>>> The obvious way of doing it would be to run a raytrace, store the
>>> picture in a file, get a Linux task to analyse that file, and then
>>> return the coordinates and set POV up for another raytrace. Is there any
>>> way of capturing the camera output and linking external routines more
>>> elegantly?
>>>
>>> Thanks
>>>
>>> Hans
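[Editorial note: once per-sector intensities are available, the "where the
'bot goes" part is only a few lines of SDL. Below is a minimal sketch of a
single type-2 (crossed wiring) update step; every identifier in it (Gain,
WheelBase, dt, IntensityL, IntensityR, BotPos, Heading) is a made-up
placeholder, not part of Hans's setup or of POV-Ray itself.

// One type-2 Braitenberg update step (a sketch; all names are
// placeholders). Crossed wiring: the right sensor drives the left
// motor and vice versa, so the vehicle turns towards the light.
#declare Gain       = 2.0;     // sensor-to-motor gain
#declare WheelBase  = 0.5;     // distance between the two wheels
#declare dt         = 0.1;     // simulated time per step
#declare IntensityL = 0.8;     // sensor readings, e.g. from the
#declare IntensityR = 0.3;     // trace/eval_pigment loop further down
#declare BotPos     = <0, 0, 0>;
#declare Heading    = 0;       // heading around the y axis, in degrees

#declare SpeedL = Gain * IntensityR;
#declare SpeedR = Gain * IntensityL;

// Differential drive: the speed difference turns the vehicle, the
// average speed moves it forward along its heading.
#declare Heading = Heading + degrees((SpeedR - SpeedL) / WheelBase * dt);
#declare BotPos  = BotPos
  + vrotate(z, y * Heading) * ((SpeedL + SpeedR) / 2 * dt);

Since each parse starts from scratch, the state (BotPos, Heading) would
have to be carried from frame to frame with #fopen/#write/#read, or be
recomputed from the clock variable.]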
>> It may be possible to do everything inside POV-Ray. By using the trace
>> function and eval_pigment, you could get the "light" intensity in
>> various directions.
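[Editorial note: a minimal sketch of that idea, sampling the 8 regions Hans
mentioned. The Wall geometry and WallPigment are invented placeholders;
note that the pigment has to be declared separately, because eval_pigment()
takes a pigment identifier and cannot read the pigment back off an object.

#include "functions.inc"  // provides the eval_pigment() macro

// Placeholder scene: a hollow room whose inner surface carries the
// pigment we want to sample.
#declare WallPigment = pigment {
  gradient x
  color_map { [0 rgb 0] [1 rgb 1] }
}
#declare Wall = difference {
  box { <-10, 0, -10>, <10, 5, 10> }
  box { < -9, -1,  -9>, < 9, 6,  9> }
  pigment { WallPigment }
}

#declare BotPos  = <0, 1, 0>;   // sensor position inside the room
#declare Sectors = 8;           // 360-degree view, split into 8 regions
#declare Norm    = <0, 0, 0>;   // filled in by trace(); <0,0,0> = miss

#declare S = 0;
#while (S < Sectors)
  #declare Dir = vrotate(z, y * S * 360 / Sectors);
  #declare Hit = trace(Wall, BotPos, Dir, Norm);
  #if (vlength(Norm) > 0)       // the ray hit the wall in this sector
    #declare C = eval_pigment(WallPigment, Hit);
    // crude "intensity": the average of the colour channels
    #debug concat("sector ", str(S, 0, 0), ": ",
                  str((C.red + C.green + C.blue) / 3, 0, 3), "\n")
  #end
  #declare S = S + 1;
#end

As Chris points out below, this samples the pigment only; it sees nothing
of the actual illumination, reflections or finish settings.]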
> 
> Hi Alain,
> 
> I don't think this would work well enough to emulate a real device,
> mainly because the eval_pigment function works on pigments rather than on
> the amount of light coming off a surface in the direction of the camera.
> You'd have a devil of a job taking any account of reflection,
> transparency, lighting in the scene or the object's ambient setting
> (which is part of the finish rather than the pigment).
> 
> Also, the 'trace' function works on an object that you pass it. If you
> pass it a union of all the objects in the scene, you'd still have a
> problem working out which object it hit and therefore which pigment to
> use. Otherwise you'd have to trace every component or sub-component that
> has a separate pigment and work out which point of contact is closest to
> the camera.
> 
> Regards,
> Chris B. 
> 
> 
Both methods (tracing one big union and tracing every component
separately) are equally fast.
Or he can group the objects into two groups and trace each group. Pick
the group with the closer intersection, split it, trace each half, and
so on until only one object is left.
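[Editorial note: a sketch of how that splitting could look, assuming the
scene's objects are kept in an array. Everything here, from the Objs array
to the HitDist and GroupOf macros, is invented for illustration.

// Distance from Origin to the first hit on Obj along Dir,
// or a huge number if the ray misses.
#macro HitDist(Obj, Origin, Dir)
  #local N = <0, 0, 0>;
  #local P = trace(Obj, Origin, Dir, N);
  #local D = 1e6;
  #if (vlength(N) > 0)
    #local D = vlength(P - Origin);
  #end
  D
#end

// Builds a union of Objs[From] .. Objs[To - 1].
#macro GroupOf(Objs, From, To)
  #if (To - From = 1)
    object { Objs[From] }
  #else
    union {
      #local I = From;
      #while (I < To)
        object { Objs[I] }
        #local I = I + 1;
      #end
    }
  #end
#end

// Four placeholder objects; the ray below first hits Objs[1].
#declare Objs = array[4]
#declare Objs[0] = sphere { <-2, 1, 4>, 1 }
#declare Objs[1] = sphere { < 0, 1, 6>, 1 }
#declare Objs[2] = box { <-1, 0, 8>, <1, 2, 9> }
#declare Objs[3] = sphere { < 2, 1, 4>, 1 }

#declare Origin = <0, 1, 0>;
#declare Dir    = z;

// Split the candidates in two, keep the half with the closer hit,
// repeat until a single object remains.
#declare Lo = 0;
#declare Hi = 4;
#while (Hi - Lo > 1)
  #declare Mid    = floor((Lo + Hi) / 2);
  #declare Left   = GroupOf(Objs, Lo, Mid)
  #declare Right  = GroupOf(Objs, Mid, Hi)
  #declare DLeft  = HitDist(Left, Origin, Dir);
  #declare DRight = HitDist(Right, Origin, Dir);
  #if (DLeft <= DRight)
    #declare Hi = Mid;
  #else
    #declare Lo = Mid;
  #end
#end
#debug concat("closest hit is on Objs[", str(Lo, 0, 0), "]\n")

If every object misses, the loop still narrows down to index 0, so a real
version would first check the full set with a single trace. With the index
known, the matching pigment can be looked up in a parallel array and fed
to eval_pigment as in the earlier sketch.]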
-- 
You know you've been raytracing too long when...
you start thinking up your own "You know you've been raytracing too long 
when..." sigs (I did).
		-Johnny D

