Hello,
what's the best way to get a stereographic projection in megaPov?
I mean, projecting the scene to a sphere from its center, then
projecting the sphere to a plane stereographically. I don't feel like
defining a camera type but I can do if needed.
Jan Dvorak <jan### [at] centrumcz> wrote:
> Hello,
> what's the best way to get a stereographic projection in megaPov?
> I mean, projecting the scene to a sphere from its center, then
> projecting the sphere to a plane stereographically. I don't feel like
> defining a camera type but I can do if needed.
Not sure if I exactly know what you mean. Is the sphere the viewing plane with
the viewing vectors all looking inwards? If you can figure out the math, the
user-defined camera type should be quite simple to set up.
-tgq
> what's the best way to get a stereographic projection in megaPov?
> I mean, projecting the scene to a sphere from its center, then
> projecting the sphere to a plane stereographically. I don't feel like
> defining a camera type but I can do if needed.
The general idea is that you want to emit camera rays from the pole of
the sphere and double their angle to the camera direction when they hit
the sphere.
1. You can do this with reflection, if you specify a rather fake normal
   for the sphere. PovRay does not seem to allow arbitrary normal
   functions, but I don't know about MegaPov.
2. You can do this with refraction, by giving the interior of the
   sphere an ior of 0. If 0 is not allowed or leads to division by 0
   (I would bet it does), you can approximate the intended result by
   specifying an ior close to 0. This could be done in PovRay alone.
3. The size of the sphere does not matter at all. (But note that the
   sphere's center has to be fixed, not its pole.) For an
   infinitesimally small sphere, its sphereness becomes unimportant,
   and non-sphereness will lead only to infinitesimal displacements of
   the camera rays. So you might replace the sphere by some other shape
   which allows for reflection with unfaked normals or refraction with
   non-zero ior. This would be possible in PovRay alone. (But again, it
   is only an approximation to what you really want.)
In all these cases, make sure that the camera is not visible
in your scene.
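The angle-doubling above is the inscribed-angle theorem at work: a chord drawn from the projection pole to a point on the sphere makes half the central angle with the axis, so doubling the chord's angle at the sphere recovers the central angle. A minimal numeric check (in Python rather than POV-Ray SDL, assuming a unit sphere):

```python
from math import sin, cos, atan2, isclose

# Unit sphere centered at the origin; projection pole N = (0, 1) at the top.
# A point P at central angle phi from the opposite pole S = (0, -1) is
# P = (sin(phi), -cos(phi)).  The chord N -> P makes angle phi/2 with the
# axis N -> S, which is why a camera ray leaving the pole must have its
# angle doubled at the sphere to recover phi.
def chord_angle(phi):
    px, py = sin(phi), -cos(phi)   # point on the unit sphere
    nx, ny = 0.0, 1.0              # projection pole
    # angle of the chord N -> P, measured from the downward axis (0, -1)
    return atan2(px - nx, -(py - ny))

for phi in (0.3, 1.0, 2.0, 3.0):
    assert isclose(chord_angle(phi), phi / 2, rel_tol=1e-12)
```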
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> > what's the best way to get a stereographic projection in megaPov?
> > I mean, projecting the scene to a sphere from its center, then
> > projecting the sphere to a plane stereographically. I don't feel like
> > defining a camera type but I can do if needed.
>
> The general idea is, that you want to emit camera rays from the pole of
> the sphere and double their angle to the camera direction when they hit
> the sphere.
>
OK, I had a look at stereographic projection and get it now (it has nothing to
do with stereoscopic projection). Since you said you were using megaPOV, it is
a very simple user-defined camera setup.
This camera looks down on the sphere from the positive y-axis, assuming the
sphere is centered at the origin. If the desired sphere center is translated,
it is simply a matter of adding the translation to the X, Y, Z components of the
location vector. The camera can easily be converted to other axis planes of
projection by swapping the appropriate X/Y/Z components in both the location
and direction vectors. For arbitrary planes of projection, a lot more trig
would be needed to get the proper location and direction vectors, so it is best
to stick with x, y, or z.
//START
#declare R=1; // Radius of projected sphere
#declare d=2; // Distance from the center of the sphere
camera {
  user_defined
  location {
    function { (u-1/2)*2*(R+d) }
    function { d }
    function { (v-1/2)*2*(R+d) }
  }
  direction {
    function { -(u-1/2)*2 }
    function { -1 }
    function { -(v-1/2)*2 }
  }
}
//END
-tgq
> #declare R=1;//Radius of projected sphere
> #declare d=2;//distance from center of sphere
> camera {
> user_defined
> location{
> function{(u-1/2)*2*(R+d)}
> function{d}
> function{(v-1/2)*2*(R+d)}
> }
> direction{
> function{-(u-1/2)*2}
> function{-1}
> function{-(v-1/2)*2}
> }
> }
I don't think this will do. Basically, this doubles the tangent of the angle,
not the angle itself.
But if that is the syntax for user-defined cameras, the following should work:
#declare Width = ...  // corresponds to length of right in other camera types
#declare Height = ... // corresponds to length of up in other camera types
camera {
  user_defined
  location { function {0} function {0} function {0} }
  direction {
    function { tan(atan((u-1/2)*Width) * 2) }
    function { 1 }
    function { tan(atan((v-1/2)*Height) * 2) }
  }
}
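The corrected direction function can be checked numerically: tan(2·atan x) = 2x/(1-x²), which reduces to the plain perspective slope 2x only for small x, so the doubled-angle camera genuinely differs from a perspective camera away from the axis. A quick sketch (Python rather than SDL, just to verify the identity; note the camera above applies it per component):

```python
from math import tan, atan, isclose

# A plain perspective camera maps plane coordinate x to angle atan(x);
# doubling that angle gives the direction slope tan(2*atan(x)),
# which equals 2x / (1 - x^2) by the double-angle formula.
def doubled(x):
    return tan(2 * atan(x))

for x in (0.1, 0.5, 0.9):
    assert isclose(doubled(x), 2 * x / (1 - x * x), rel_tol=1e-12)

# For small x this is close to the perspective slope 2x, but it
# diverges as x -> 1, i.e. as the undoubled angle approaches 45 deg.
assert doubled(0.9) > 2 * 0.9
```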
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> > #declare R=1;//Radius of projected sphere
> > #declare d=2;//distance from center of sphere
> > camera {
> > user_defined
> > location{
> > function{(u-1/2)*2*(R+d)}
> > function{d}
> > function{(v-1/2)*2*(R+d)}
> > }
> > direction{
> > function{-(u-1/2)*2}
> > function{-1}
> > function{-(v-1/2)*2}
> > }
> > }
>
> I don't think this will do. Basically, this doubles the tangent of the angle,
> not the angle itself.
>
> But if that is the syntax for user-defined cameras, the following should work:
>
> #declare Width = ... // corresponds to length of right in other camera types
> #declare Height = ... // corresponds to length of up in other camera types
> camera {
> user_defined
> location {function {0} function {0} function {0}}
> direction {
> function {tan (atan((u-1/2)*Width) * 2)}
> function {1}
> function {tan (atan((v-1/2)*Height) * 2)}
> }
> }
>
> Mark Weyer
What I have should work (and seems to; I tested it with a checker-mapped
hemisphere). Rather than looking from the pole to the sphere, I have placed the
viewing plane at the user-defined distance d from the center. To view the
entire hemisphere, the viewplane is a 2*(R+d) square. The camera looks back
from this viewplane to the opposite pole of the sphere.

The camera viewplane is defined in u,v coordinates from 0 to 1; these are
converted to a centered coordinate system by subtracting 1/2, so they run from
-1/2 to 1/2 with the length still being 1. Hence each viewpoint is defined by
the modified u,v coordinate multiplied by the size of the viewplane (for X and
Z): x = (u-1/2)*2*(R+d), z = (v-1/2)*2*(R+d). The y coordinate will always be
the distance from the center: y = d. Given these as the location coordinates,
the look-at coordinate is the south pole, <0,-R,0>, and the direction is the
look-at minus the location: x = -(u-1/2)*2*(R+d), y = -R-d = -(R+d),
z = -(v-1/2)*2*(R+d). This vector can be further simplified by dividing out
the common factor (R+d): x = -(u-1/2)*2, y = -1, z = -(v-1/2)*2.

If needed I can provide a sketch showing exactly this.
One advantage here is that the camera can be positioned some distance from the
look_at point to ensure that it is not inside an object.
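The derivation above can be verified numerically; a minimal sketch in Python (the R, d and u,v values are arbitrary choices):

```python
from math import isclose

R, d = 1.0, 2.0  # sphere radius and viewplane distance (arbitrary)

def camera_ray(u, v):
    """Location and (simplified) direction of the camera above at one (u, v)."""
    loc = ((u - 0.5) * 2 * (R + d), d, (v - 0.5) * 2 * (R + d))
    dirn = (-(u - 0.5) * 2, -1.0, -(v - 0.5) * 2)
    return loc, dirn

# The direction should be (look_at - location) with the common
# factor (R + d) divided out, look_at being the south pole <0, -R, 0>.
look_at = (0.0, -R, 0.0)
for (u, v) in ((0.1, 0.7), (0.5, 0.5), (0.9, 0.2)):
    loc, dirn = camera_ray(u, v)
    full = tuple(a - b for a, b in zip(look_at, loc))
    assert all(isclose(f, c * (R + d), abs_tol=1e-12) for f, c in zip(full, dirn))
```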
-tgq
> > > #declare R=1;//Radius of projected sphere
> > > #declare d=2;//distance from center of sphere
> > > camera {
> > > user_defined
> > > location{
> > > function{(u-1/2)*2*(R+d)}
> > > function{d}
> > > function{(v-1/2)*2*(R+d)}
> > > }
> > > direction{
> > > function{-(u-1/2)*2}
> > > function{-1}
> > > function{-(v-1/2)*2}
> > > }
> > > }
Consider the following:
camera {
location -R*y
direction y
right -2*x
up -2*z
}
If I understand syntax correctly, this should be equivalent to
camera {
  user_defined
  location { function {0} function {-R} function {0} }
  direction {
    function { (u-1/2)*-2 }
    function { -1 }
    function { (v-1/2)*-2 }
  }
}
Now, assuming I made no mistake, the second camera is a shorthand for the
third one. The directions in the first and third are equal. The locations
are not equal, but, by your argument cited below, they differ only by a
factor (R+d) of the respective direction.
Now assume that nothing is between the two versions of the location, in
particular no media and no fog. Then the first and third cameras give the
same result. As the second is a shorthand for this, we may conclude that
your camera is equivalent to a standard perspective camera (up to nothing
being in between the different locations). This is not intended for
stereographic projection.
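The equivalence argument can be checked numerically: every ray of the varying-location camera passes through the single fixed point <0, -R, 0>, which is exactly the behavior of a perspective camera placed there. A sketch (R, d and the sampled u,v are arbitrary choices):

```python
from math import isclose

R, d = 1.0, 2.0  # arbitrary sphere radius and viewplane distance

def point_on_ray(u, v, t):
    """Point at parameter t along the (u, v) ray of the camera quoted below."""
    loc = ((u - 0.5) * 2 * (R + d), d, (v - 0.5) * 2 * (R + d))
    dirn = (-(u - 0.5) * 2, -1.0, -(v - 0.5) * 2)
    return tuple(l + t * c for l, c in zip(loc, dirn))

# Traveling t = R + d along any ray lands on the same point <0, -R, 0>:
for (u, v) in ((0.0, 0.0), (0.25, 0.8), (0.6, 0.3)):
    hit = point_on_ray(u, v, R + d)
    assert all(isclose(h, p, abs_tol=1e-12) for h, p in zip(hit, (0.0, -R, 0.0)))
```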
> What I have should work (and seems to, I tested it with a checkered mapped
> hemisphere. Rather than looking from the pole to the sphere, I have placed the
> viewing plane at the user-defined distance d from the center. To view the
> entire hemisphere, the size of the viewplane is a 2*(R+d) square. The camera
> looks back from this viewplane to the opposite pole of the sphere. The camera
> viewplane is defined in u,v cordinates from 0-1, these are converted to a
> centered coordinate system by subtracting -1/2 so they run from -1/2 to 1/2
> with the length still being 1. Hence, each viewpoint is defined by the
> modified u,v coordinate multplied by the size of the viewplane (for X & Z): x=
> (u-1/2) * 2*(R+d), z= (v-1/2) * 2*(R+d). The y coordinate will always be the
> distance from the center: y=d. Given these as the location coordinates, the
> look at coordinate is the south pole or <0,-R,0>, the direction is the look at
> minus the location: x=-(u-1/2)*2*(R+d), y=-R-d=-(R+d), z=-(v-1/2)*2*(R+d).
> This vector can be further simplified by dividing out the common (R+d):
> x=-(u-1/2)*2, y=-1, z=-(v-1/2)*2.
> One advantage here, is that the camera can be positioned some distance from the
> look_at point to ensure that it is not inside an object.
This is indeed true.
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> > > > #declare R=1;//Radius of projected sphere
> > > > #declare d=2;//distance from center of sphere
> > > > camera {
> > > > user_defined
> > > > location{
> > > > function{(u-1/2)*2*(R+d)}
> > > > function{d}
> > > > function{(v-1/2)*2*(R+d)}
> > > > }
> > > > direction{
> > > > function{-(u-1/2)*2}
> > > > function{-1}
> > > > function{-(v-1/2)*2}
> > > > }
> > > > }
>
> Consider the following:
>
> camera {
> location -R*y
> direction y
> right -2*x
> up -2*z
> }
>
> If I understand syntax correctly, this should be equivalent to
>
> camera {
>   user_defined
>   location { function {0} function {-R} function {0} }
>   direction {
>     function { (u-1/2)*-2 }
>     function { -1 }
>     function { (v-1/2)*-2 }
>   }
> }
>
> Now, assuming I made no mistake, the second camera is a shorthand for the
> third one. The directions in the first and third are equal. The locations
> are not equal, but, by your argument cited below, they differ only by a
> factor (R+d) of the respective direction.
>
There is a big difference between the locations. u and v represent the location
in the image, so with the first one, each u/v pair gives a different location,
essentially each position on the image plane; the view then *converges* from
the image plane to the south pole, passing through the equator.
In the third one, the location does not vary with u/v, so it is a static
viewpoint (like a standard camera). The direction vectors still correspond
to the converging vectors, but because they are coming from a point, they
essentially diverge, albeit in the negative directions that they should.
So: the first camera looks from outside the scene inwards, the third camera
looks from inside the scene outwards. Now it depends on how you want to use
the projection. If it is merely the mapping of a sphere's surface and pattern,
then looking inside-out works, but if you have, for example, created a small
planet with forests/buildings/lakes/etc., then you more likely want to view it
from the outside in.
Another difference is that in the first camera, whatever is located at the
look-at point will be seen as a singularity that fills the whole image if
visible. In the second camera, this situation would be equal to an object
placed at the location point, except that the camera would be inside of or
coincident with it.
Also, the second is not equal to the third one. In the second one the direction
is given as +y; in the third, the equivalent direction is -y. The location,
right and up values are equal, though. The two cameras look in opposite
directions.
> Now assume, that nothing is between the two version of location, in
> particular no media and no fog. Then the first and third camera give the
> same result. As the second is a shorthand for this, we may conclude that
> your camera is equivalent to a standard perspective camera. (Up to nothing
> beeing in between the different locations.) This is not intended for
> stereographic projection.
>
You are right; when I look at it, it is essentially equal to a perspective
camera (except inside-out, which isn't possible with a regular camera except by
using refractive lenses). But this is exactly how I understood it from
Wikipedia: each ray passes from the opposite pole, straight through the sphere
surface, onto the projection plane. To me it appears that this is supposed to
be exactly equal to a perspective projection (with a 90 deg viewing angle), but
that it is traditionally meant as a means of projecting a spherical surface
onto a flat surface. We are thinking of it as 3D objects rather than two
surfaces, which I think confuses the matter in our interpretation.
-tgq
> You are right, when I look at it, it is essentially equal to a perspective
> camera (except inside out, which isn't possible with a regular camera except by
> using refractive lenses). But this is exactly how I understood it from
> wikipedia, each ray passes from the opposite pole, straight through the sphere
> surface, onto the projection plane. To me it appears that is supposed to be
> exactly equale to a perspective projection (with a 90deg viewing angle), but
> that it is traditionally meant as a means of projecting a spherical surface
> onto a flat surface. We are thinking of it as 3d objects rather than 2
> surfaces which I think confuses the matter in our interpretation.
Right. If the scene is entirely contained in (the surface of) the sphere,
then a perspective camera suffices. The original poster wanted something
different, though: To apply this kind of projection after the scene has
been projected to the sphere. So, the camera must provide two projections
at once, giving importance to the angle at which a ray of vision hits the
sphere.
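The two projections composed can be written out explicitly: a scene point seen at angle theta from the axis lands on the sphere at central angle theta (projection from the center), and stereographic projection from the opposite pole then sends it to plane radius 2R·tan(theta/2) (onto the plane tangent at the near pole; the equatorial plane just halves the scale). A sketch of the forward map and its inverse, which is the camera's job:

```python
from math import tan, atan, isclose

R = 1.0  # sphere radius (arbitrary; it only scales the image plane)

def scene_to_plane(theta):
    # projection from the center puts the point at central angle theta;
    # stereographic projection from the opposite pole then maps it to
    # the tangent plane at the near pole at radius 2*R*tan(theta/2)
    return 2 * R * tan(theta / 2)

def plane_to_ray_angle(r):
    # the camera inverts this: a pixel at plane radius r must shoot its
    # ray at angle 2*atan(r / (2*R)) from the axis
    return 2 * atan(r / (2 * R))

# Composing the two is the identity for any visible angle below pi:
for theta in (0.2, 0.9, 1.6, 2.5):
    assert isclose(plane_to_ray_angle(scene_to_plane(theta)), theta, rel_tol=1e-12)
```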
Mark Weyer
"Mark Weyer" <nomail@nomail> wrote:
> > You are right, when I look at it, it is essentially equal to a perspective
> > camera (except inside out, which isn't possible with a regular camera except by
> > using refractive lenses). But this is exactly how I understood it from
> > wikipedia, each ray passes from the opposite pole, straight through the sphere
> > surface, onto the projection plane. To me it appears that is supposed to be
> > exactly equale to a perspective projection (with a 90deg viewing angle), but
> > that it is traditionally meant as a means of projecting a spherical surface
> > onto a flat surface. We are thinking of it as 3d objects rather than 2
> > surfaces which I think confuses the matter in our interpretation.
>
> Right. If the scene is entirely contained in the (surface of) the sphere,
> then a perspective camera suffices. The original poster wanted something
> different, though: To apply this kind of projection after the scene has
> been projected to the sphere. So, the camera must provide two projections
> at once, giving importance to the angle at which a ray of vision hits the
> sphere.
>
> Mark Weyer
There's a MUCH easier way to create a stereographic projection in POV-Ray. No
math, no fancy code. I can provide a diagram for anyone who is interested. If
interested, respond to my e-mail address; I won't necessarily monitor the
group.
Rick Smith