POV-Ray : Newsgroups : povray.newusers : Question about Camera Geometry
Question about Camera Geometry (Message 1 to 10 of 11)
From: callendorph
Subject: Question about Camera Geometry
Date: 16 Jul 2008 13:00:01
Message: <web.487e28bcc5fa8d3175ee02920@news.povray.org>
Hello,

So I have read through the documentation on the camera object in POV-Ray, and I
have a few questions about how it creates the image-plane geometry. From the
diagram in the documentation, it seems the <direction> vector controls the
"focal length" of the camera, where the <location> vector is the focal point.
The length of the <direction> vector can be used to set the camera's angle of
view. The documentation's figure suggests it does this by assuming an image
plane that is 1 unit by 1 unit, such that the following camera:

camera {
  location <0,0,0>
  direction <0,0,0.5>
}

should make a camera with a field of view of 90 degrees. Now, if I am correct so
far, then POV-Ray will create the <right> and <up> vectors to satisfy the
default aspect ratio. For an output image of 640x480, this would imply that each
(square) pixel has a side of (1 unit / 640), or about 0.0016 units/pixel.
However, from other computations that I am doing, it seems I am making an error
in my calculation of the pixel width of the image. Does this seem correct, or am
I doing something wrong?
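The geometry described here can be checked numerically. This is a sketch in
Python (not POV-Ray source), assuming the usual pinhole relation
fov = 2*atan((|right|/2) / |direction|):

```python
import math

# Horizontal field of view implied by a direction vector of a given
# length and an image plane |right| units wide (pinhole model).
def horizontal_fov_deg(direction_len, right_len):
    return math.degrees(2 * math.atan2(right_len / 2, direction_len))

# A 1-unit-wide image plane at distance 0.5 gives a 90-degree FoV.
print(horizontal_fov_deg(0.5, 1.0))  # 90.0
```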

Thanks,
~Carl


Post a reply to this message

From: Tim Attwood
Subject: Re: Question about Camera Geometry
Date: 16 Jul 2008 15:40:57
Message: <487e4ec9$1@news.povray.org>
> Should make a camera with a field of view of 90 degrees. Now, if I am
> correct so far, then Povray will create the <right> and <up> vectors to
> satisfy the default aspect ratio. For an output image of 640x480, This
> would imply that each pixel (square pixel) will have a side that is
> (1unit/640) or 0.0016 units/pixel. However, from other computation that
> I am doing, it seems I am making an error with my calculation of the
> pixel width of the image. Does this seems correct or am I doing
> something wrong?

The default right vector is 1.33*x to match a common aspect ratio, but
it doesn't change automatically if you use a different aspect ratio.
I like to use
right     x*image_width/image_height
in my cameras just because of that.

This also means that the default horizontal FoV angle is 67.38 degrees
or so. If you want to control the horizontal FoV angle with the direction
length, just set the aspect ratio with "up" instead of with "right".

camera { // a camera with 90 degree horizontal FoV
   location <0,0,0>
   direction 0.5*z
   right x
   up 0.75*y
}
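The 67.38 figure can be reproduced with the same pinhole relation, taking the
default right length as 4/3 and the default direction length as 1 (my own
arithmetic as a sketch, not POV-Ray source):

```python
import math

# fov = 2*atan((|right|/2) / |direction|), with right = 4/3, direction = 1.
fov = math.degrees(2 * math.atan((4 / 3) / 2 / 1))
print(round(fov, 2))  # 67.38
```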


Post a reply to this message

From: alphaQuad
Subject: Re: Question about Camera Geometry
Date: 17 Jul 2008 01:50:01
Message: <web.487edc15d14244a6850dfab90@news.povray.org>
"callendorph" <cal### [at] esolarcom> wrote:
> [...] Now, if I am correct so far, then Povray will create the <right> and
> <up> vectors to satisfy the default aspect ratio. For an output image of
> 640x480, This would imply that each pixel (square pixel) will have a side
> that is (1unit/640) or 0.0016 units/pixel. However, from other computation
> that I am doing, it seems I am making an error with my calculation of the
> pixel width of the image. Does this seems correct or am I doing something
> wrong?

Look forward with a FOV of 90:

camera {
  location <0,0,0>
  direction <0,0,0.5>
  angle 90
}

The right vector is used to set angle: if you input 40 degrees, right needs
to have a length of 1, or the FOV is not 40. I determined this from the docs
and my planetarium work.

The default aspect is 1.333333. This can get really confusing on a monitor
with a physical aspect of 1.333333 but viewing a resolution with a 1.25
aspect, i.e. 1280x1024: circles are not round.

Example:

camera {
  up y
  right x*image_width/image_height
  angle 40

  location -z*100
  look_at <0,0,0>
}

This is not actually 40 degrees (as determined by stellar angles) unless
right has a length of 1.

Or take full scripted control of the camera matrix, such as targeting a known
point and vcross-ing a camera matrix together. This allows scripted control of
a camera z-roll, and leaves right with a length of 1. It can still be done ten
other ways, but if you intend to invert the matrix, you'd end up with something
close to this:

camera { perspective
  location campos
  right <cameramatrix[0],cameramatrix[1],cameramatrix[2]>
  up <cameramatrix[4],cameramatrix[5],cameramatrix[6]>*(image_height/image_width)
  direction <cameramatrix[8],cameramatrix[9],cameramatrix[10]>*(image_height/image_width)
  angle camang
}
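The cross-product ("vcross") construction described here can be written out in
plain Python for checking; the function names are my own, and the handedness
and roll conventions are left to the caller:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Build unit right/up/direction vectors for a camera at campos aimed at
# target; a z-roll can be applied by rotating world_up beforehand.
def camera_basis(campos, target, world_up=(0.0, 1.0, 0.0)):
    direction = normalize(tuple(t - c for t, c in zip(target, campos)))
    right = normalize(cross(world_up, direction))
    up = cross(direction, right)
    return right, up, direction

r, u, d = camera_basis((0.0, 0.0, -100.0), (0.0, 0.0, 0.0))
# right = +x, up = +y, direction = +z: an orthonormal basis
```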


Post a reply to this message

From: alphaQuad
Subject: Re: Question about Camera Geometry
Date: 17 Jul 2008 02:05:00
Message: <web.487ee0bdd14244a6850dfab90@news.povray.org>
Also, the help file seems incomplete when it says right is used to set angle:
right's length is used IN RELATION to the up and eye (direction) vector
lengths, or something to that effect.


Post a reply to this message

From: callendorph
Subject: Re: Question about Camera Geometry
Date: 17 Jul 2008 12:50:01
Message: <web.487f77b6d14244a675ee02920@news.povray.org>
"alphaQuad" <alp### [at] earthlinknet> wrote:
> the right vector is used to set angle.
> if you input 40 degrees, right needs to be length of 1 or fov is not 40
> [...]
> OR full scripted control of camera matrix, such as targeting a known point
> and vcross-ing a camera matrix. [...]

I understand what you are saying in terms of controlling the camera's field of
view with angle, and controlling it with a camera matrix transform, but the
field of view of the camera is only a means to an end for me. What I really
want to quantify is the width of a single pixel in an image of a known
resolution. It seems to me that I can use the direction vector to control the
"focal length" of the camera, and the right/up vectors to create the
appropriate field of view. Now, the thing I am having trouble understanding is
how to convert this image plane into the width and height of the image in
POV-Ray standard units. From this I should be able to use the resolution to
calculate how big a pixel is in terms of these "POV-Ray units." Does this make
sense? Do the right and up vectors actually control the size of this image
plane? If so, then I should be able to set

camera {
  location <0,0,0>
  right 1*x
  up (image_height/image_width)*y
  direction 0.5*z
}

This, in theory, would give me an image plane with dimensions of 1 unit x 0.75
units for an aspect ratio of 1.333. From there, I should be able to divide the
width (1 unit) by 640 and get the "pixel width" of each pixel in the image.
Does it work this way, or am I completely missing the mark?
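A minimal sketch of this division, assuming the image plane really is
|right| = 1 unit wide:

```python
# One pixel's width on a 1-unit-wide image plane rendered at 640 px across.
plane_width_units = 1.0
image_width_px = 640
pixel_width = plane_width_units / image_width_px
print(pixel_width)  # 0.0015625, i.e. about 0.0016 units/pixel
```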

Thanks for all your help,
~Carl


Post a reply to this message

From: alphaQuad
Subject: Re: Question about Camera Geometry
Date: 17 Jul 2008 19:45:01
Message: <web.487fd902d14244a66d912ba0@news.povray.org>
"callendorph" <cal### [at] esolarcom> wrote:

> ... Now the thing I am having trouble understanding is
> how to convert this image plane into width and height of the image in Povray
> standard units. From this I should be able to use the resolution to calculate
> how big a pixel is in terms of these "Povray units." Does this make sense?
> Does the right and up vector actually control the size of this image plane?

I think this will help. I can calc the points on the xy screen by understanding
the sine law and a world-to-screen transformation.

There are 9000 stars in the catalog; do I set up all the stars, or just the
ones in frame? All of them, if I can't calc the frame. This is where
invertmatrix must be used.

// cx and cy are the screen centre (half width, half height in pixels)
// focal length in pixels for a FoV of a degrees
#macro FOL(a)
  #local L = radians((180 - a) / 2);
  #local L = asa_sin(pi/2, cx, L);  // angle-side-angle sine-rule solver
  L
#end

// project a camera-space point to screen coordinates
#macro world2screen(_v, fol)
  #local s = <int((fol / _v.z) * _v.x), int((fol / _v.z) * _v.y)>;
  s
#end

where worldmatrix is cameramatrix inverted:

#local ab = pop_matrix4(worldmatrix, _v);
#if ((ab.z > 0.0) | BackStars)
  // faster rendering than adding 640.512 in w2screen
  #local av = world2screen(ab, FOL(camang));
  #if (((av.x < cx) & (av.x > -cx) & (av.y < cy) & (av.y > -cy)) | BackStars)

av.x and av.y are the screen points; see the POV planetarium for the full code:
http://home.earthlink.net/~openuniverse/
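The FOL/world2screen pair can be ported to Python for checking. asa_sin is from
the poster's library; here I substitute the equivalent closed form
fol = cx / tan(angle/2), which is an assumption on my part:

```python
import math

# Focal length in pixels for a horizontal FoV of angle_deg degrees and a
# half-screen-width of cx pixels (closed form standing in for asa_sin).
def focal_length_px(angle_deg, cx):
    return cx / math.tan(math.radians(angle_deg) / 2)

# Project a camera-space point (x, y, z) to integer screen coordinates
# relative to the screen centre, mirroring the world2screen macro.
def world2screen(v, fol):
    x, y, z = v
    return (int(fol / z * x), int(fol / z * y))

fol = focal_length_px(90, 320)             # 90-degree FoV, 640 px wide
print(world2screen((1.0, 0.5, 2.0), fol))  # (160, 80)
```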

Here is the camera proof: the left image has a 1.25 aspect ratio and the right
image 1.333333. Notice the lower-right corner. The width FoV is the same, and
both have round circles, drawn 1-to-1 around the rendered earth by a paint
program.

Oops, I'll have to post it in the images group.
aQ


Post a reply to this message

From: callendorph
Subject: Re: Question about Camera Geometry
Date: 17 Jul 2008 20:40:00
Message: <web.487fe5f6d14244a675ee02920@news.povray.org>
"alphaQuad" <alp### [at] earthlinknet> wrote:
> I think this will help, I can calc the points on the xy screen, by understanding
> a sin law and world to screen transformation.

Yes - this makes sense. Actually, this is exactly what I am trying to do, only
I am doing the computation part of things in Matlab. Thanks for pointing me in
the right direction.

~Carl


Post a reply to this message

From: alphaQuad
Subject: Re: Question about Camera Geometry
Date: 18 Jul 2008 00:10:00
Message: <web.48801722d14244a66d912ba0@news.povray.org>
"callendorph" <cal### [at] esolarcom> wrote:
> "alphaQuad" <alp### [at] earthlinknet> wrote:
> > I think this will help, I can calc the points on the xy screen, by understanding
> > a sin law and world to screen transformation.
>
> Yes - this makes sense. Actually this is exactly what I am trying to do only I
> am doing the computation part of things in Matlab. Thanks for pointing me in
> the right direction.
>
> ~Carl

Hmm, all of this made me realize that I never actually rolled the camera. Had
I, I might have seen missing frame stars, since I forgot to invert after
developing the z-roll feature as the final orbit-cam step. Updated once again.
406393


Post a reply to this message

From: Alain
Subject: Re: Question about Camera Geometry
Date: 18 Jul 2008 14:36:00
Message: <4880e290$1@news.povray.org>
callendorph wrote on 2008-07-17 12:47:

> [...] What I really want to quantify, is the width of a single pixel in an
> image of a known resolution. [...] From here then, I should be able to
> divide the width (1unit) by 640 and get the "pixel width" of each pixel in
> the image. Does it work this way, or am I completely missing the mark?
The width in units of a pixel at a given resolution depends on at least two
things: the field of view AND the distance of the object from the camera. The
distance between an object and the camera changes from object to object, and
even within a given object; in any image you can have one pixel that covers
0.001 units next to one that covers 100 units, or even more.
What you can calculate is the angular width of a pixel.
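This angular width is itself not simply fov/width for a pinhole camera: it
varies across the frame. A sketch, with helper names of my own:

```python
import math

# Angular width, in degrees, of pixel column `col` in a frame that is
# width_px pixels across with horizontal FoV fov_deg (pinhole model).
def pixel_angular_width_deg(col, width_px, fov_deg):
    half = math.tan(math.radians(fov_deg) / 2)
    edge = lambda c: math.atan((2 * c / width_px - 1) * half)
    return math.degrees(edge(col + 1) - edge(col))

# Centre pixels subtend a larger angle than edge pixels.
print(pixel_angular_width_deg(320, 640, 90))  # centre column
print(pixel_angular_width_deg(0, 640, 90))    # leftmost column
```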

-- 
Alain
-------------------------------------------------
I believe that banking institutions are more dangerous to our liberties than
standing armies. Already they have raised up a monied aristocracy that has
set the government at defiance. The issuing power should be taken from the
banks and restored to the people to whom it properly belongs.
Thomas Jefferson


Post a reply to this message

From: callendorph
Subject: Re: Question about Camera Geometry
Date: 18 Jul 2008 16:30:00
Message: <web.4880fc4ed14244a675ee02920@news.povray.org>
Alain <ele### [at] netscapenet> wrote:
> The width in unit of a pixel at a given resolution depends on at least two
> things: The field of view AND the distance of the object from the camera. As the
> distance between any object and the camera change from object to object, and
> even for a given object, in any image you can have 1 pixel that covers 0.001
> unit next to one that covers 100 units, or even more.
> What you can calculate is the angular width of a pixel.
>
> --
> Alain

I agree that perspective is going to change the actual number of units that a
pixel corresponds to for each object in the field (closer objects look bigger
than far-away objects), but that isn't really the question I am asking. Your
last statement is right, but the issue is that to use the perspective
transformation in a stereo algorithm, you need to know what distance the
angular width corresponds to in the image plane. This value changes only based
on the parameters of the camera, not what you are imaging. From the last few
exchanges with alphaQuad I figured out what I need to know: basically, the
pixel width is related to the focal length of the camera, not just right and
up. (Thanks, aQ.) When I get a chance I may try submitting a patch to the
documentation for camera that may help make things more clear. Or perhaps
everyone else understands and I'm just slow... :-P

Thanks,
~Carl


Post a reply to this message


Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.