From: Bill Brehm
Subject: simulating lens distortion with normal and function in 3.5
Date: 24 Feb 2002 05:44:37
Message: <3c78c415$1@news.povray.org>
Hi,
I've asked before about simulating lens distortion. I got some helpful
pointers, but I'm stuck again.
A mathematician working at another company came up with the following
additions to my camera definition and sent it to me. But I cannot reach him
now, so I'm hoping someone here can help me understand.
The distortion has the correct form, i.e., a square is distorted to the
correct shape. But the image is also magnified and I need to control that
too. Even when I set the factors to zero, there is magnification.
So my questions are:
1. What does lens_dist() return? A float? A vector? I think it's returning a
single float value based on the X, Y coordinates passed in.
2. What is the Z parameter there for? Is it needed for the normal?
3. How exactly does a normal work when applied to a camera to distort the
image? What is a normal? Is it a function or pattern that has one value for
each X, Y coordinate? How does that bend the ray? Is there any diagram that
shows how it works? I would think that a normal would need two values at
each coordinate, so it knows how much and in which direction to bend the
ray.
Thanks,
Bill
PS: I have tried to find this info in the help, but couldn't.
// lens distortion model
// i' = i + a * i * sqrt(i^2 + j^2) + b * i^3 * sqrt(i^2 + j^2)
// j' = j + a * j * sqrt(i^2 + j^2) + b * j^3 * sqrt(i^2 + j^2)
// lens distortion implementation
#declare lens_dist = function(x, y, z, a, b) {
(x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 +
(y + a * y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
}
#declare a_factor = 0.65;
#declare b_factor = 0.1;
#declare HFOV = 5.00;
#declare VFOV = HFOV * 3 / 4;
#declare cameraheight = 10.000;
camera {
orthographic
location <0, 0, 0>
right <HFOV, 0, 0>
up <0, VFOV, 0>
sky <0, 1, 0>
direction <0, 0, cameraheight>
look_at <0, 0, 1>
translate <0, 0, -cameraheight>
normal{function {lens_dist(x, y, z, a_factor, b_factor)} }
}
#local mx = 9;
#local my = 7;
#local ix = 0;
#while(ix < mx)
#local iy = 0;
#while(iy < my)
cylinder {
<0,0,0>, <0,0,0.010>, 0.25
pigment {color rgb <0, 0, 0>}
translate <(ix - ((mx - 1) / 2))* 0.8, (iy - ((my - 1) / 2))* 0.8, 0>
}
#local iy = iy + 1;
#end
#local ix = ix + 1;
#end
From: Dennis Miller
Subject: Re: simulating lens distortion with normal and function in 3.5
Date: 24 Feb 2002 15:04:47
Message: <3c79475f$1@news.povray.org>
This code shows up completely black under 3.5....
I don't have the technical answers, but I have used normals in camera
statements for great effect; the result is sort of like a 3D filter on the
camera lens. It changes the surface properties of the lens, in essence.
Here's a little example: nothing but a sky sphere in here. Animation
possibilities are endless!
global_settings { assumed_gamma 2.2 }
#include "colors.inc"
camera {
location <-5, 6,-58>
look_at < 0, 5.5, 0>
angle 35
normal {
gradient <1,0,1>
normal_map {
[.1 agate 30]
[.3 marble 50 ]
[ .6 wood 70 ]
[ .9 crackle 50 ] } } }
light_source { <10,10,-30> rgb 1.5 }
sky_sphere
{ pigment
{gradient y color_map { [0.0 color blue 0.6] [1.0 color rgb 1] } } }
Best,
D.
"Bill Brehm" <bbr### [at] netzeronet> wrote in message
news:3c78c415$1@news.povray.org...
> Hi,
>
> I've asked before about simulating lens distortion. I got some helpful
> pointers, but I'm stuck again.
>
> A mathematician working at another company came up with the following
> additions to my camera definition and sent it to me. But I cannot reach
him
> now, so I'm hoping someone here can help me understand.
>
> The distortion has the correct form, i.e., a square is distorted to the
> correct shape. But the image is also magnified and I need to control that
> too. Even when I set the factors to zero, there is magnification.
>
> So my questions are:
>
> 1. What does lens_dist() return? A float? A vector? I think it's returning
a
> single float value based on the X, Y coordinates passed in.
> 2. What is the Z parameter there for? Is it needed for the normal?
> 3. How exactly does a normal work when applied to a camera to distort the
> image? What is a normal? Is it a function or pattern that has one value
for
> each X, Y coordinate? How does that bend the ray? Is there any diagram
that
> shows how it works? I would think that a normal would need two values at
> each coordinate, so it knows how much and in which direction to bend the
> ray.
>
> Thanks,
>
> Bill
>
> PS: I have tried to find this info in the help, but couldn't.
>
>
> // lens distortion model
> // i' = i + a * i * sqrt(i^2 + j^2) + b * i^3 * sqrt(i^2 + j^2)
> // j' = j + a * j * sqrt(i^2 + j^2) + b * j^3 * sqrt(i^2 + j^2)
>
> // lens distortion implementation
> #declare lens_dist = function(x, y, z, a, b) {
> (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 +
> (y + a * y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
> }
>
> #declare a_factor = 0.65;
> #declare b_factor = 0.1;
>
> #declare HFOV = 5.00;
> #declare VFOV = HFOV * 3 / 4;
>
> #declare cameraheight = 10.000;
>
> camera {
> orthographic
> location <0, 0, 0>
> right <HFOV, 0, 0>
> up <0, VFOV, 0>
> sky <0, 1, 0>
> direction <0, 0, cameraheight>
> look_at <0, 0, 1>
> translate <0, 0, -cameraheight>
>
> normal{function {lens_dist(x, y, z, a_factor, b_factor)} }
> }
>
> #local mx = 9;
> #local my = 7;
> #local ix = 0;
> #while(ix < mx)
> #local iy = 0;
> #while(iy < my)
> cylinder {
> <0,0,0>, <0,0,0.010>, 0.25
> pigment {color rgb <0, 0, 0>}
> translate <(ix - ((mx - 1) / 2))* 0.8, (iy - ((my - 1) / 2))* 0.8,
0>
> }
> #local iy = iy + 1;
> #end
> #local ix = ix + 1;
> #end
>
>
>
>
>
From: Bill Brehm
Subject: Re: simulating lens distortion with normal and function in 3.5
Date: 24 Feb 2002 22:23:44
Message: <3c79ae40@news.povray.org>
I forgot to put a light source in the code I uploaded.
Your example crashes 3.5.11, but runs under 3.1. It creates an interesting
pattern, but I still don't understand how, so anyone else's input will be
appreciated.
Thanks,
Bill
"Dennis Miller" <dhm### [at] attbicom> wrote in message
news:3c79475f$1@news.povray.org...
> This code shows up completely black under 3.5....
> I don't have the technical answers, but I have used normals in camera
> statements for great effect; the result is sort of like a 3D filter on the
> camera lens. It changes the surface properties of the lens, in essence.
> Here's a little example: nothing but a sky sphere in here. Animation
> possibilities are endless!
>
> global_settings { assumed_gamma 2.2 }
> #include "colors.inc"
>
> camera {
> location <-5, 6,-58>
> look_at < 0, 5.5, 0>
> angle 35
> normal {
> gradient <1,0,1>
> normal_map {
> [.1 agate 30]
> [.3 marble 50 ]
> [ .6 wood 70 ]
> [ .9 crackle 50 ] } } }
>
> light_source { <10,10,-30> rgb 1.5 }
> sky_sphere
> { pigment
> {gradient y color_map { [0.0 color blue 0.6] [1.0 color rgb 1] } } }
>
> Best,
> D.
>
> "Bill Brehm" <bbr### [at] netzeronet> wrote in message
> news:3c78c415$1@news.povray.org...
> > Hi,
> >
> > I've asked before about simulating lens distortion. I got some helpful
> > pointers, but I'm stuck again.
> >
> > A mathematician working at another company came up with the following
> > additions to my camera definition and sent it to me. But I cannot reach
> him
> > now, so I'm hoping someone here can help me understand.
> >
> > The distortion has the correct form, i.e., a square is distorted to the
> > correct shape. But the image is also magnified and I need to control
that
> > too. Even when I set the factors to zero, there is magnification.
> >
> > So my questions are:
> >
> > 1. What does lens_dist() return? A float? A vector? I think it's
returning
> a
> > single float value based on the X, Y coordinates passed in.
> > 2. What is the Z parameter there for? Is it needed for the normal?
> > 3. How exactly does a normal work when applied to a camera to distort
the
> > image? What is a normal? Is it a function or pattern that has one value
> for
> > each X, Y coordinate? How does that bend the ray? Is there any diagram
> that
> > shows how it works? I would think that a normal would need two values at
> > each coordinate, so it knows how much and in which direction to bend the
> > ray.
> >
> > Thanks,
> >
> > Bill
> >
> > PS: I have tried to find this info in the help, but couldn't.
> >
> >
> > // lens distortion model
> > // i' = i + a * i * sqrt(i^2 + j^2) + b * i^3 * sqrt(i^2 + j^2)
> > // j' = j + a * j * sqrt(i^2 + j^2) + b * j^3 * sqrt(i^2 + j^2)
> >
> > // lens distortion implementation
> > #declare lens_dist = function(x, y, z, a, b) {
> > (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 +
> > (y + a * y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
> > }
> >
> > #declare a_factor = 0.65;
> > #declare b_factor = 0.1;
> >
> > #declare HFOV = 5.00;
> > #declare VFOV = HFOV * 3 / 4;
> >
> > #declare cameraheight = 10.000;
> >
> > camera {
> > orthographic
> > location <0, 0, 0>
> > right <HFOV, 0, 0>
> > up <0, VFOV, 0>
> > sky <0, 1, 0>
> > direction <0, 0, cameraheight>
> > look_at <0, 0, 1>
> > translate <0, 0, -cameraheight>
> >
> > normal{function {lens_dist(x, y, z, a_factor, b_factor)} }
> > }
> >
> > #local mx = 9;
> > #local my = 7;
> > #local ix = 0;
> > #while(ix < mx)
> > #local iy = 0;
> > #while(iy < my)
> > cylinder {
> > <0,0,0>, <0,0,0.010>, 0.25
> > pigment {color rgb <0, 0, 0>}
> > translate <(ix - ((mx - 1) / 2))* 0.8, (iy - ((my - 1) / 2))*
0.8,
> 0>
> > }
> > #local iy = iy + 1;
> > #end
> > #local ix = ix + 1;
> > #end
> >
> >
> >
> >
> >
>
>
From: Mike Williams
Subject: Re: simulating lens distortion with normal and function in 3.5
Wasn't it Bill Brehm who wrote:
>Hi,
>
>I've asked before about simulating lens distortion. I got some helpful
>pointers, but I'm stuck again.
>
>A mathematician working at another company came up with the following
>additions to my camera definition and sent it to me. But I cannot reach him
>now, so I'm hoping someone here can help me understand.
>
>The distortion has the correct form, i.e., a square is distorted to the
>correct shape. But the image is also magnified and I need to control that
>too. Even when I set the factors to zero, there is magnification.
>
>So my questions are:
>
>1. What does lens_dist() return? A float? A vector? I think it's returning a
>single float value based on the X, Y coordinates passed in.
It's returning a float value for every point in space. For this particular set
of camera parameters, the camera behaves as if it is a flat plate with z
parameter about -0.016, and with the x and y parameters varying between -0.5 and
+0.5 across and up the image.
E.g. at the centre of the image, the virtual camera point is about <0,0,-0.016>,
and the lens_dist(x,y,z,0.65,0.1) function returns a value of 0 at that point.
At the corner, the virtual camera point is <0.5,0.5,-0.016> and the
lens_dist(x,y,z,0.65,0.1) function returns a value of 1.09120 at that point. I
calculated that with a math package, and then also obtained it from POV by
typing
#debug str( lens_dist(0.5,0.5,-0.016,a_factor,b_factor) ,5,5)
and looking at the message pane.
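For reference, here's a self-contained version of that check (a sketch only - note
the function never actually uses its z argument, so the -0.016 is just a placeholder):

// evaluate the corner value and print it to the message pane
#declare a_factor = 0.65;
#declare b_factor = 0.1;
#declare lens_dist = function(x, y, z, a, b) {
  (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 +
  (y + a * y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
}
#debug concat("corner value: ", str(lens_dist(0.5, 0.5, -0.016, a_factor, b_factor), 0, 5), "\n")
// expected message pane output: corner value: 1.09120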
>2. What is the Z parameter there for? Is it needed for the normal?
No. You get exactly the same result if you declare the lens_dist() function as
#declare lens_dist = function(x, y, a, b) { ...
instead of
#declare lens_dist = function(x, y, z, a, b) { ...
and invoke it as
normal{function {lens_dist(x, y, a_factor, b_factor)} }
You just have to be sure that you then don't change the function so that it uses
Z.
>3. How exactly does a normal work when applied to a camera to distort
>the image? What is a normal? Is it a function or pattern that has one value for
>each X, Y coordinate? How does that bend the ray? Is there any diagram that
>shows how it works? I would think that a normal would need two values at
>each coordinate, so it knows how much and in which direction to bend the
>ray.
In physics, a "normal" is a vector perpendicular to the surface of an object at
a particular point.
In POV, we often fake the normals of objects to give the appearance of the
actual object having a certain surface structure. When a ray strikes the object,
the ray is reflected and refracted as if it had struck the object at a different
angle.
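A tiny illustration of that on an ordinary object (just an example of the idea,
unrelated to the lens code):

// a geometrically smooth sphere that is shaded as if its surface were bumpy
sphere { <0, 0, 0>, 1
  pigment { color rgb <0.8, 0.7, 0.5> }
  normal { bumps 0.4 scale 0.1 } // only the shading normal is perturbed; the silhouette stays a perfect circle
}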
POV also represents what you might expect to be a vector field (having a
magnitude and direction at every point) with a scalar field (having only a
single float value for each point in 3d space). It's as if the surface gets
"raised" by the specified value, and the actual normal is the tangent plane to
the resulting surface. For the vast majority of cases, such a representation is
very much easier for users to handle - specifying the magnitude and direction of
the normal at every point would be unnecessarily tedious in most scenes.
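Roughly speaking (this is standard bump-mapping maths, not a statement about POV's
exact internals): if the scalar field f(x,y) is treated as a height raised above the
flat lens plane, the perturbed normal points along

  N' = < -df/dx, -df/dy, 1 >

so each camera ray is tilted in the direction in which f changes fastest, by an
amount proportional to the local slope. A constant f therefore produces no
distortion at all; only the gradient of the field matters.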
From: Bill Brehm
Subject: Re: simulating lens distortion with normal and function in 3.5
Date: 25 Feb 2002 18:12:25
Message: <3c7ac4d9$1@news.povray.org>
Thanks Mike.
1. Where does the 0.016 for Z come from? Is that a constant built into
POVRay? Do X and Y both vary from -0.5 to +0.5, even though the camera is
rectangular (3/4)? Then I need to consider the camera's aspect ratio in my
function, right?
2. When would I want to use Z, if it is a constant? Do one or more of the
camera types have a variable Z? Is it useful when the camera is translated
in some way?
3. POVRay help shows a diagram explaining the perspective camera. The rays
emanate from "location", pass through the image plane, then eventually hit
an object. Those rays are not normal to the plane, except at the center. For
the orthographic camera, one could consider that "location" is infinitely
far away, so that the rays are normal to the image plane. Or one could
consider that the rays emanate from the image plane itself normal to the
plane.
Am I correct so far? Assuming yes,
Can I imagine that the normal function distorts the image plane into a shape
determined by the scalar field and that the rays emanate from that distorted
image "plane" normal to the "plane" at that point? Is that still true for a
perspective camera with a normal applied to it? I.e., should I sort of
forget about "location" and think of a perspective camera as having an image
plane distorted into a piece of a sphere having the rays emanate normal to
that distorted image "plane"? Couldn't I then simulate any camera type with
the proper normal function?
How does POVRay actually determine the normal at each point? Does it use the
derivative of the function to calculate it exactly or does it look at the
surrounding points and calculate a rough approximation? (I'm trying to
generate "perfect" images with a known amount of lens distortion to test a
correction algorithm.)
Thanks again,
Bill
"Mike Williams" <mik### [at] nospamplease> wrote in message
news:78E### [at] econymdemoncouk...
> Wasn't it Bill Brehm who wrote:
> >Hi,
> >
> >I've asked before about simulating lens distortion. I got some helpful
> >pointers, but I'm stuck again.
> >
> >A mathematician working at another company came up with the following
> >additions to my camera definition and sent it to me. But I cannot reach
him
> >now, so I'm hoping someone here can help me understand.
> >
> >The distortion has the correct form, i.e., a square is distorted to the
> >correct shape. But the image is also magnified and I need to control that
> >too. Even when I set the factors to zero, there is magnification.
> >
> >So my questions are:
> >
> >1. What does lens_dist() return? A float? A vector? I think it's
returning a
> >single float value based on the X, Y coordinates passed in.
>
> It's returning a float value for every point in space. For this particular
set
> of camera parameters, the camera behaves as if it is a flat plate with z
> parameter about -0.016, and with the x and y parameters varying
between -0.5 and
> +0.5 across and up the image.
>
> E.g. at the centre of the image, the virtual camera point is about
<0,0,-0.16>,
> and the lens_dist(x,y,z,0.65,0.1) function returns a value of 0 at that
point.
> At the corner, the virtual camera point is <0.5,0.5,-0.16> and the
> lens_dist(x,y,z,0.65,0.1) function returns a value of 1.09120 at that
point. I
> calculated that with a math package, and then also obtained it from POV by
> typing
> debug str( lens_dist(0.5,0.5,-0.16,a_factor,b_factor) ,5,5)
> and looking at the message pane.
>
> >2. What is the Z parameter there for? Is it needed for the normal?
>
> No. You get exactly the same result if you declare the lens_dist()
function as
> #declare lens_dist = function(x, y, a, b) { ...
> instead of
> #declare lens_dist = function(x, y, z, a, b) { ...
> and invoke it as
> normal{function {lens_dist(x, y, a_factor, b_factor)} }
> You just have to be sure that you then don't change the function so that
it uses
> Z.
>
> >3. How exactly does a normal work when applied to a camera to distort
> >the image? What is a normal? Is it a function or pattern that has one
value for
> >each X, Y coordinate? How does that bend the ray? Is there any diagram
that
> >shows how it works? I would think that a normal would need two values at
> >each coordinate, so it knows how much and in which direction to bend the
> >ray.
>
> In physics, a "normal" is a vector perpendicular to the surface of an
object at
> a particular point.
>
> In POV, we often fake the normals of objects to give the appearance of the
> actual object having a certain surface structure. When a ray strikes the
object,
> the ray is reflected and refracted as if it had struck the object at a
different
> angle.
>
> POV also represents what you might expect to be a vector field (having a
> magnitude and direction at every point) with a scalar field (having only a
> single float value for each point in 3d space). It's as if the surface
gets
> "raised" by the specified value, and the actual normal is the tangent
plane to
> the resulting surface. For the vast majority of cases, such a
representation is
> very much easier for users to handle - specifying the magnitude and
direction of
> the normal at every point would be unnecessarily tedious in most scenes.
>
From: Mike Williams
Subject: Re: simulating lens distortion with normal and function in 3.5
Wasn't it Bill Brehm who wrote:
>Thanks Mike.
>
>1. Where does the 0.016 for Z come from? Is that a constant built into
>POVRay?
I haven't a clue. I just did some experiments and that's where it turned
out to be. It's possible that the camera{} parameters may affect it.
>Do X and Y both vary from -0.5 to +0.5, even though the camera is
>rectangular (3/4)?
Yes.
> Then I need to consider the camera's aspect ratio in my
>function, right?
It may be more sensible, in your case, to work with square images.
>2. When would I want to use Z, if it is a constant? Do one or more of the
>camera types have a variable Z? Is it useful when the camera is translated
>in some way?
Probably not in your case.
>3. POVRay help shows an diagram explaining the perspective camera. The rays
>emanate from "location", pass through the image plane, then eventually hit
>an object. Those rays are not normal to the plane, except at the center. For
>the orthographic camera, one could consider that "location" is infinitely
>far away, so that the rays are normal to the image plane. Or one could
>consider that the rays emanate from the image plane itself normal to the
>plane.
>
>I'm I correct so far? Assuming yes,
I believe so.
>Can I imagine that the normal function distorts the image plane into a shape
>determined by the scalar field and that the rays emanate from that distorted
>image "plane" normal to the "plane" at that point?
That's probably not a very helpful way to visualize it. Try thinking of
something like a Fresnel lens or a diffraction grating, where the
surface is flat but it changes the direction of light that passes
through it.
> Is that still true for a
>perspective camera with a normal applied to it? I.e., should I sort of
>forget about "location" and think of a perspective camera as having an image
>plane distorted into a piece of a sphere having the rays emanate normal to
>that distorted image "plane"? Couldn't I then simulate any camera type with
>the proper normal function?
I guess you could simulate any camera by changing the normal. E.g. if
you set a_factor and b_factor to zero in your lens_dist() function, then
you're simulating a perspective camera with angle 20.5 (approximately).
>How does POVRay actually determine the normal at each point? Does it use the
>derivative of the function to calculate it exactly or does it look at the
>surrounding points and calculate a rough approximation? (I'm trying to
>generate "perfect" images with a known amount of lens distortion to test a
>correction algorithm.)
I don't know. I do know that POV normals do work with functions that are
fiendishly difficult to differentiate analytically, so I guess that it's
using some sort of numerical approximation method.
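If it is numerical, the general idea would be a finite-difference estimate of the
slope, something like (a sketch of the technique only, not POV's actual code)

  df/dx ~ ( f(x+e, y) - f(x-e, y) ) / (2*e)
  df/dy ~ ( f(x, y+e) - f(x, y-e) ) / (2*e)

for some small step e.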
--
Mike Williams
Gentleman of Leisure
From: Bill Brehm
Subject: Re: simulating lens distortion with normal and function in 3.5
Date: 26 Feb 2002 03:13:03
Message: <3c7b438f@news.povray.org>
Mike,
1. I cannot use square images, but was able to modify the function to give
symmetrical results. I'm curious how you found the Z value experimentally,
if you wouldn't mind explaining. I might have to confirm if it ever changes
for different camera parameters.
3. I was thinking before I read your response that the image plane staying
flat might be a possibility too. Just for fun I think I'll make a test to
confirm it. Some further experiments convinced me that the camera is not
exactly perspective.
My camera looks like this:
#declare HFOV = 5.00;
#declare VFOV = HFOV * 3 / 4;
#declare lens_dist = function(x, y, z, a, b) {
(HFOV / VFOV) * (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) )
^ 2 + (y + a * y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
}
#declare a_factor = 0.0;
#declare b_factor = 0.0;
#declare cameraheight = 10.000;
camera {
orthographic
location <0, 0, 0>
right <HFOV, 0, 0>
up <0, VFOV, 0>
sky <0, 1, 0>
direction <0, 0, cameraheight>
look_at <0, 0, 1>
translate <0, 0, -cameraheight>
normal{function {lens_dist(x, y, z, a_factor, b_factor)} }
}
With the normal commented out, changing the camera height does not change
the magnification, whether the direction vector is based on cameraheight or
a constant (<0, 0, 1>). With orthographic also commented out, the
magnification doesn't change if the direction vector is based on
cameraheight, but it does if the direction vector is a constant. But with
the normal and orthographic on, the increasing camera height actually
increases magnification when the direction vector is based on cameraheight,
but decreases it when the direction vector is the constant. Strange.
I think the normal does some kind of approximate calculation. Try rendering
the code below and you'll see that the rendered image is not symmetrical,
even though everything in the code is symmetrical.
#declare lens_dist = function(x, y, a, b) {
(x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 + (y + a *
y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
}
#declare a_factor = 0.69;
#declare b_factor = 0.1;
camera {
location <0, 0, -100>
look_at <0, 0, 0>
right <4/4, 0, 0>
normal{function {lens_dist(x, y, a_factor, b_factor)} }
}
light_source { <0, 0, -100> color rgb <1, 1, 1> }
plane{-z, -10 pigment{color rgb <0, 0, 1>}}
box{<-25, -25, 0>, <25, 25, 0> pigment {color rgb <1, 0, 0>}}
Thanks,
Bill
"Mike Williams" <mik### [at] nospamplease> wrote in message
news:Grd### [at] econymdemoncouk...
> Wasn't it Bill Brehm who wrote:
> >Thanks Mike.
> >
> >1. Where does the 0.016 for Z come from? Is that a constant built into
> >POVRay?
>
> I haven't a clue. I just did some experiments and that's where it turned
> out to be. It's possible that the camera{} parameters may affect it.
>
> >Do X and Y both vary from -0.5 to +0.5, even though the camera is
> >rectangular (3/4)?
>
> Yes.
>
> > Then I need to consider the camera's aspect ratio in my
> >function, right?
>
> It may be more sensible, in your case, to work with square images.
>
> >2. When would I want to use Z, if it is a constant? Do one or more of the
> >camera types have a variable Z? Is it useful when the camera is
translated
> >in some way?
>
> Probably not in your case.
>
> >3. POVRay help shows an diagram explaining the perspective camera. The
rays
> >emanate from "location", pass through the image plane, then eventually
hit
> >an object. Those rays are not normal to the plane, except at the center.
For
> >the orthographic camera, one could consider that "location" is infinitely
> >far away, so that the rays are normal to the image plane. Or one could
> >consider that the rays emanate from the image plane itself normal to the
> >plane.
> >
> >I'm I correct so far? Assuming yes,
>
> I believe so.
>
> >Can I imagine that the normal function distorts the image plane into a
shape
> >determined by the scalar field and that the rays emanate from that
distorted
> >image "plane" normal to the "plane" at that point?
>
> That's probably not a very helpful way to visualize it. Try thinking of
> something like a freznel lens or a diffraction grating, where the
> surface is flat but it changes the direction of light that passes
> through it.
>
> > Is that still true for a
> >perspective camera with a normal applied to it? I.e., should I sort of
> >forget about "location" and think of a perspective camera as having an
image
> >plane distorted into a piece of a sphere having the rays emanate normal
to
> >that distorted image "plane"? Couldn't I then simulate any camera type
with
> >the proper normal function?
>
> I guess you could simulate any camera by changing the normal. E.g. if
> you set a_factor and b_factor to zero in your lens_dist() function, then
> you're simulating a perspective camera with angle 20.5 (approximately).
>
> >How does POVRay actually determine the normal at each point? Does it use
the
> >derivative of the function to calculate it exactly or does it look at the
> >surrounding points and calculate a rough approximation? (I'm trying to
> >generate "perfect" images with a known amount of lens distortion to test
a
> >correction algorithm.)
>
> I don't know. I do know that POV normals do work with functions that are
> fiendishly difficult to differentiate analytically, so I guess that it's
> using some sort of numerical approximation method.
>
> --
> Mike Williams
> Gentleman of Leisure
From: Mike Williams
Subject: Re: simulating lens distortion with normal and function in 3.5
Wasn't it Bill Brehm who wrote:
>Mike,
>
>1. I cannot use square images, but was able to modify the function to give
>symmetrical results. I'm curious how you found the Z value experimentally,
>if you wouldn't mind explaining. I might have to confirm if it ever changes
>for different camera parameters.
I tried these two functions
#declare lens_dist = function(x, y, z, a, b) {
select (z-0.017, 1, 1/x) }
#declare lens_dist = function(x, y, z, a, b) {
select (z-0.016, 1, 1/x) }
Note: "select()" is like an "#if" that works inside a function{}. It's
the same as saying
if ((z-0.016)<0) then 1 else 1/x
With z-0.017 (or any higher numbers) I get the undistorted picture which
indicates that "1" was selected. With z-0.016 (or any lower numbers) I
get a very distorted picture because "1/x" is being selected. Therefore
the z value being tested during the camera normal is somewhere between
0.016 and 0.017.
I used the same trick (with select x+-K and select y+-K) to determine
that the edges of the image are at +-0.5 in the x and y directions.
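For example, probing the right-hand edge looks something like this (a sketch; the
0.49 threshold is just an illustrative value):

#declare lens_dist = function(x, y, z, a, b) { select(x - 0.49, 1, 1/x) }

The picture stays undistorted except for a narrow strip at the right edge, which
shows that x reaches values just above 0.49 near that edge; moving the threshold
up and down brackets the true edge value.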
>I think the normal does some kind of approximate calculation. Try rendering
>the code below and you'll see that the rendered image is not symmetrical,
>even though everything in the code is symmetrical.
>
>#declare lens_dist = function(x, y, a, b) {
> (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 + (y + a *
>y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
>}
>
>#declare a_factor = 0.69;
>#declare b_factor = 0.1;
>
>camera {
>
> location <0, 0, -100>
> look_at <0, 0, 0>
> right <4/4, 0, 0>
>
> normal{function {lens_dist(x, y, a_factor, b_factor)} }
>}
>
>light_source { <0, 0, -100> color rgb <1, 1, 1> }
>
>plane{-z, -10 pigment{color rgb <0, 0, 1>}}
>
>box{<-25, -25, 0>, <25, 25, 0> pigment {color rgb <1, 0, 0>}}
Now that is weird.
--
Mike Williams
Gentleman of Leisure
From: Bill Brehm
Subject: Re: simulating lens distortion with normal and function in 3.5
Date: 26 Feb 2002 19:51:56
Message: <3c7c2dac@news.povray.org>
Mike, thanks for all your help. Unfortunately, if I can't figure out how to
solve the symmetry problem, I won't be able to use povray to simulate
lens distortion for high accuracy tests. But I did learn a lot. Thanks,
again. Bill
"Mike Williams" <mik### [at] nospamplease> wrote in message
news:emU### [at] econymdemoncouk...
> Wasn't it Bill Brehm who wrote:
> >Mike,
> >
> >1. I cannot use square images, but was able to modify the function to
give
> >symmetrical results. I'm curious how you found the Z value
experimentally,
> >if you wouldn't mind explaining. I might have to confirm if it ever
changes
> >for different camera parameters.
>
> I tried these two functions
>
> #declare lens_dist = function(x, y, z, a, b) {
> select (z-0.017, 1, 1/x) }
>
> #declare lens_dist = function(x, y, z, a, b) {
> select (z-0.016, 1, 1/x) }
>
> Note: "select()" is like an "#if" that works inside a function{}. It's
> the same as saying
> if ((z-0.016)<0) then 1 else 1/x
>
> With z-0.017 (or any higher numbers) I get the undistorted picture which
> indicates that "1" was selected. With z-0.016 (or any lower numbers) I
> get a very distorted picture because "1/x" is being selected. Therefore
> the z value being tested during the camera normal is somewhere between
> 0.016 and 0.017.
>
> I used the same trick (with select x+-K and select z+-K) to determine
> that the edges of the image are at +-0.5 in the x any y directions.
>
>
> >I think the normal does some kind of approximate calculation. Try
rendering
> >the code below and you'll see that the rendered image is not symmetrical,
> >even though everything in the code is symmetrical.
> >
> >#declare lens_dist = function(x, y, a, b) {
> > (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 + (y +
a *
> >y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
> >}
> >
> >#declare a_factor = 0.69;
> >#declare b_factor = 0.1;
> >
> >camera {
> >
> > location <0, 0, -100>
> > look_at <0, 0, 0>
> > right <4/4, 0, 0>
> >
> > normal{function {lens_dist(x, y, a_factor, b_factor)} }
> >}
> >
> >light_source { <0, 0, -100> color rgb <1, 1, 1> }
> >
> >plane{-z, -10 pigment{color rgb <0, 0, 1>}}
> >
> >box{<-25, -25, 0>, <25, 25, 0> pigment {color rgb <1, 0, 0>}}
>
> Now that is weird.
>
> --
> Mike Williams
> Gentleman of Leisure
From: Ben
Subject: Re: simulating lens distortion with normal and function in 3.5
Hi Bill,
did you come up with another solution in the end?
Thanks for any help.
Ben.
Bill Brehm wrote:
>Mike, thanks for all your help. Unfortunately, if I can't figure out how to
>solve the the symmetry problem, I won't be able to use povray to simulate
>lens distortion for high accuracy tests. But I did learn a lot. Thanks,
>again. Bill
>
>
>"Mike Williams" <mik### [at] nospamplease> wrote in message
>news:emU### [at] econymdemoncouk...
>> Wasn't it Bill Brehm who wrote:
>> >Mike,
>> >
>> >1. I cannot use square images, but was able to modify the function to
>give
>> >symmetrical results. I'm curious how you found the Z value
>experimentally,
>> >if you wouldn't mind explaining. I might have to confirm if it ever
>changes
>> >for different camera parameters.
>>
>> I tried these two functions
>>
>> #declare lens_dist = function(x, y, z, a, b) {
>> select (z-0.017, 1, 1/x) }
>>
>> #declare lens_dist = function(x, y, z, a, b) {
>> select (z-0.016, 1, 1/x) }
>>
>> Note: "select()" is like an "#if" that works inside a function{}. It's
>> the same as saying
>> if ((z-0.016)<0) then 1 else 1/x
>>
>> With z-0.017 (or any higher numbers) I get the undistorted picture which
>> indicates that "1" was selected. With z-0.016 (or any lower numbers) I
>> get a very distorted picture because "1/x" is being selected. Therefore
>> the z value being tested during the camera normal is somewhere between
>> 0.016 and 0.017.
>>
>> I used the same trick (with select x+-K and select z+-K) to determine
>> that the edges of the image are at +-0.5 in the x any y directions.
>>
>>
>> >I think the normal does some kind of approximate calculation. Try
>rendering
>> >the code below and you'll see that the rendered image is not symmetrical,
>> >even though everything in the code is symmetrical.
>> >
>> >#declare lens_dist = function(x, y, a, b) {
>> > (x + a * x * sqrt(x*x + y*y) + b * x^3 * sqrt(x*x + y*y) ) ^ 2 + (y +
>a *
>> >y * sqrt(x*x + y*y) + b * y^3 * sqrt(x*x + y*y) ) ^ 2
>> >}
>> >
>> >#declare a_factor = 0.69;
>> >#declare b_factor = 0.1;
>> >
>> >camera {
>> >
>> > location <0, 0, -100>
>> > look_at <0, 0, 0>
>> > right <4/4, 0, 0>
>> >
>> > normal{function {lens_dist(x, y, a_factor, b_factor)} }
>> >}
>> >
>> >light_source { <0, 0, -100> color rgb <1, 1, 1> }
>> >
>> >plane{-z, -10 pigment{color rgb <0, 0, 1>}}
>> >
>> >box{<-25, -25, 0>, <25, 25, 0> pigment {color rgb <1, 0, 0>}}
>>
>> Now that is weird.
>>
>> --
>> Mike Williams
>> Gentleman of Leisure
>