This Slashdot article
http://slashdot.org/article.pl?sid=03/09/10/1734256&mode=thread&tid=152&tid=185&tid=188&tid=97
got me thinking that I would like to experiment with these cameras that use
multiple locations per frame. Does anyone know whether this can be done? I'm
not thinking of playing with camera normals, but rather of completely
changing the camera configuration based on the rendering position. Is there
a variable that indicates the current rendering coordinate? I figure I could
manipulate the camera based on an x,y screen coordinate (if one exists).
Just an idea.

David.
On Thu, 11 Sep 2003 16:21:20 -0700, "David Newman" <dan### [at] ctscom> wrote:
> This Slashdot article
> http://slashdot.org/article.pl?sid=03/09/10/1734256&mode=thread&tid=152&tid=185&tid=188&tid=97
> got me thinking that I would like to experiment with these cameras that use
> multiple locations per frame. Does anyone know whether this can be done? I'm
> not thinking of playing with camera normals, but rather of completely
> changing the camera configuration based on the rendering position. Is there
> a variable that indicates the current rendering coordinate? I figure I could
> manipulate the camera based on an x,y screen coordinate (if one exists).
In current POV-Ray this is only possible with workarounds. For example, you
can first render the scene to an image and then use that image for mapping,
with warped texturing and distorted uv coordinates, on a flat grid of
triangles in a mesh placed in front of an orthographic camera. MegaPOV 1.1
will have a built-in user-defined camera type that can be used to send rays
in any direction, so distortions of any kind can be introduced. See the
examples in:
http://news.povray.org/nq155v8t8alc9mtvokun6h9qsfc5me5nd6%404ax.com
http://news.povray.org/sh255vomna91r0d9htauhkuhl3p2harpcv%404ax.com
http://news.povray.org/3ea9707d%40news.povray.org
ABX
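
Below is a minimal sketch of the two-pass workaround described above, in
plain POV-Ray SDL. It assumes the scene has already been rendered once with
an ordinary camera and saved as "pass1.png", and that the output is square;
the file name, the Warp macro, the grid resolution and the particular
distortion are illustrative choices for this sketch, not anything prescribed
in the thread. The second pass simply re-projects the first image through a
uv-distorted mesh placed in front of an orthographic camera.

// Second pass of the two-pass workaround: re-project a previously
// rendered image ("pass1.png" is an assumed file name) through a
// uv-distorted flat mesh seen by an orthographic camera.
// Render square, e.g. +W512 +H512, so the 1x1 window fits the frame.

camera {
  orthographic
  location <0, 0, -1>
  look_at  <0, 0, 0>
  right    x*1
  up       y*1
}

// Illustrative distortion: where in the source image each grid point
// looks up its colour. Kept mild so lookups stay inside the image.
#macro Warp(U, V)
  #local R2 = pow(U-0.5, 2) + pow(V-0.5, 2);
  #local F  = 1 - 0.3*R2;
  <0.5 + (U-0.5)*F, 0.5 + (V-0.5)*F>
#end

#declare N = 40;   // grid resolution; higher means a smoother warp

mesh {
  #declare I = 0;
  #while (I < N)
    #declare J = 0;
    #while (J < N)
      #declare U0 = I/N;      #declare V0 = J/N;
      #declare U1 = (I+1)/N;  #declare V1 = (J+1)/N;
      // Two triangles per grid cell; the vertices form a regular grid,
      // the uv_vectors carry the distorted lookup into the source image.
      triangle {
        <U0-0.5, V0-0.5, 0>, <U1-0.5, V0-0.5, 0>, <U1-0.5, V1-0.5, 0>
        uv_vectors Warp(U0,V0), Warp(U1,V0), Warp(U1,V1)
      }
      triangle {
        <U0-0.5, V0-0.5, 0>, <U1-0.5, V1-0.5, 0>, <U0-0.5, V1-0.5, 0>
        uv_vectors Warp(U0,V0), Warp(U1,V1), Warp(U0,V1)
      }
      #declare J = J + 1;
    #end
    #declare I = I + 1;
  #end
  texture {
    uv_mapping
    pigment { image_map { png "pass1.png" interpolate 2 } }
    finish { ambient 1 diffuse 0 }   // show the image as-is, unlit
  }
}

Raising N smooths the warp at the cost of parse time, and any distortion can
be plugged into Warp. With the user-defined camera mentioned for MegaPOV 1.1,
a comparable distortion could presumably be written directly as a function of
the screen position, without the intermediate render; the posts linked above
show examples of that approach.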