  A few ideas (Message 1 to 7 of 7)  
From: Mike Hough
Subject: A few ideas
Date: 31 May 1998 06:55:18
Message: <35713716.4E875048@aol.com>
I've tried going through the POV source to figure out how to do a few
of these things, but I didn't even come close (I can't really program),
so I thought I'd post a few ideas for additions to POV if anyone wants
to take a crack at them.

1) A true spherical camera, such that an image rendered with it and
then mapped onto a sphere in POV would look just like the original
scene.  Such images could be used in VRML, Livepicture, or QTVR.  I
think the trick here is that each ray for a point in screen space
should be matched to a point on a sphere.  I couldn't figure it out,
but I know it can be done.  I actually found where I think this would
be added: in create_ray in render.c.

2) A distance mask.  This is pretty easily done by making a union of
the scene and applying a gradient, but it would be much more useful if
it could be rendered to the alpha channel of a TGA or PNG image.  Two
types would be useful.  The first would involve going from black at
the camera to white deeper into the image, becoming completely white
at a distance specified in the scene file.  The resulting image could
have a color added to it with the mask in place to create a very quick
fog.  The second type would use a focal point for the center of the
black area of the image, becoming lighter (eventually white) as the
distance from the focal point increases along the camera direction.
The resulting image could then have a Gaussian blur applied to it to
create a blur effect in a much shorter amount of time.  It could also
be saved and reused after changes have been made to a scene, provided
the camera and objects do not move.
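
In rough C, the two mappings might look something like this (the
function names and the 0.0-black-to-1.0-white convention are just for
illustration; this is not actual POV code):

#include <math.h>

/* Type 1: black at the camera, completely white at white_dist
   (the quick-fog mask).  dist is the ray's hit distance. */
double fog_mask(double dist, double white_dist)
{
    double g = dist / white_dist;
    return (g > 1.0) ? 1.0 : g;
}

/* Type 2: black at the focal distance, becoming white as the hit
   point moves away from it along the camera direction (input for a
   Gaussian-blur pass). */
double focal_mask(double dist, double focal_dist, double white_dist)
{
    double g = fabs(dist - focal_dist) / white_dist;
    return (g > 1.0) ? 1.0 : g;
}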

The first one is mostly for fun, but it sure would provide an
interesting way to render and view scenes created with POV-Ray.  The
second is a way of speeding up some of the slower features of POV
through post-processing (though all the information is created by the
renderer).  The post-processing could probably even be handled by the
renderer.  Atmospheric effects could even be added to the list of
options.

3) This is a little more involved, but I've been wondering if a patch
mesh primitive would be more useful than the present bicubic_patch.
This could allow a group of patches to be treated as one object, and a
uv space could be generated for the whole mesh.  That could make
parametric mapping a more realistic proposition in the future, though
I imagine writing a modeller to handle them would require a little
more work.  (I stole the idea from BMRT, so sue me.)

Some of these may sound silly (they are from my long-running wish list),
but I'm sure there's a winner in there somewhere ;-) <---anti-flame wink

-Mike


From: Ronald L. Parker
Subject: Re: A few ideas
Date: 1 Jun 1998 22:32:15
Message: <35736186.273775680@news.povray.org>
On Sun, 31 May 1998 05:55:18 -0500, Mike Hough <POV### [at] aolcom>
wrote:

>2) A distance mask.  This is pretty easily done by making a union of
>the scene and applying a gradient, but it would be much more useful if
>it could be rendered to the alpha channel of a TGA or PNG image.  Two
>types would be useful.  The first would involve going from black at
>the camera to white deeper into the image, becoming completely white
>at a distance specified in the scene file.

I made this patch for POV 2.2, though I did the reverse (white close,
black far.)  It's quite useful as the depth map for a stereogram, or
as a heightfield in POV itself.  It's quite easy to do, and only takes
a few lines of code in determine_apparent_colour and a few more to set
up the options (min/max distance, grayscale vs. red/green, etc.)  I've
made several offers since I created the patch to make the code
available, but nobody ever wanted it.  I've now lost that patch, but
it could be easily recreated.
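
Recreated from that description, the core mapping would be something
like the following (the names and the min/max parameters are
placeholders, not the lost patch itself):

/* White (1.0) at or inside min_dist, black (0.0) at or beyond
   max_dist, linear in between; dist is the ray's hit distance. */
double depth_to_gray(double dist, double min_dist, double max_dist)
{
    if (dist <= min_dist) return 1.0;   /* white close */
    if (dist >= max_dist) return 0.0;   /* black far   */
    return (max_dist - dist) / (max_dist - min_dist);
}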

>The resulting image could have a color
>added to it with the mask in place to create a very quick fog.  

I hate to burst your bubble, but this is what POV does already, except
that it also takes fog into account on reflected and refracted rays,
which this method would not.  Consider: your scene consists of a
single perfectly transparent glass disk, close to the camera, and a
large sphere surrounding the scene.  Should you be able to see the
glass disk?  You will - it will cut a hole out of the "fog" and you'll
see the sphere much more clearly through it than you should.  

If you don't have any reflected or refracted rays - the only kind of
scene where the "shortcut" method would work - POV will be faster than
the two-step process.  Since it would be calculating the distance for
either scenario, the only gain would be in the shortcut method not
mixing the colors within POV - which your postprocessing step would
then have to do, with less precision and more work on your part.


From: Ronald L. Parker
Subject: Re: A few ideas
Date: 2 Jun 1998 22:26:11
Message: <3574b373.360298007@news.povray.org>
On Sun, 31 May 1998 05:55:18 -0500, Mike Hough <POV### [at] aolcom>
wrote:
>1) A true spherical camera, such that an image rendered with it and
>then mapped onto a sphere in POV would look just like the original
>scene.  Such images could be used in VRML, Livepicture, or QTVR.  I
>think the trick here is that each ray for a point in screen space
>should be matched to a point on a sphere.  I couldn't figure it out,
>but I know it can be done.  I actually found where I think this would
>be added: in create_ray in render.c.

The ultra_wide_angle camera type is almost what you're looking for.
If you set the angle to 360 * pi (not 360, due to a bug in render.c)
then chop off the top and bottom quarter of the image, you get an
image that would wrap correctly.  If you duplicate the
ultra_wide_angle code into, say, the placeholder left for
test_camera_1, then divide y0 by 2 just before setting cy and sy, and
fix it so instead of dividing by 180 it multiplies by M_PI_180, you'll
get what you're looking for.
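
Pulled out as a standalone function, that recipe amounts to the
following (the names and the camera-basis interface are illustrative,
not the actual render.c code):

#include <math.h>

/* Spherical (latitude/longitude) ray direction: x0 spans a full turn,
   y0 is halved so it spans only -pi/2..pi/2.  u and v are normalized
   screen coordinates in [0,1), with v = 0 at the top row; fwd, rt and
   up are the camera basis vectors. */
void spherical_ray_direction(double u, double v,
                             const double fwd[3], const double rt[3],
                             const double up[3], double dir[3])
{
    double x0 = (u - 0.5) * 2.0 * M_PI;   /* longitude, -pi .. pi     */
    double y0 = (0.5 - v) * M_PI;         /* latitude,  -pi/2 .. pi/2 */
    double cx = cos(x0), sx = sin(x0);
    double cy = cos(y0), sy = sin(y0);
    int i;

    for (i = 0; i < 3; i++)
        dir[i] = cy * (cx * fwd[i] + sx * rt[i]) + sy * up[i];
}

With a 2:1 image (say 600x300) this puts the point directly overhead
in the entire first row and wraps a full turn horizontally.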


From: Mike Hough
Subject: Re: A few ideas
Date: 3 Jun 1998 04:48:45
Message: <35750DED.8B0BDEEE@aol.com>
Hey, thanks a lot.  I had noticed that the ultra_wide_angle camera did
look like it would do the projection correctly for a spherical image,
but it wouldn't go over what I thought was 180 degrees.  As far as
chopping off the top and bottom of the image, the tests I had done
didn't require that, but I used the 1:2 aspect ratio recommended for
spherical images.  I don't know if that is a factor.  If I get this
working I'll eventually put a note about it on my web site (it's under
a big reconstruction.  Ugh) and also some notes on how to view them.
It could make for a fun image exchange.

Happy Raytracing!

-Mike

Ronald L. Parker wrote:

> On Sun, 31 May 1998 05:55:18 -0500, Mike Hough <POV### [at] aolcom>
> wrote:
> >1) A true spherical camera, such that an image rendered with it and
> >then mapped onto a sphere in POV would look just like the original
> >scene.  Such images could be used in VRML, Livepicture, or QTVR.  I
> >think the trick here is that each ray for a point in screen space
> >should be matched to a point on a sphere.  I couldn't figure it out,
> >but I know it can be done.  I actually found where I think this
> >would be added: in create_ray in render.c.
>
> The ultra_wide_angle camera type is almost what you're looking for.
> If you set the angle to 360 * pi (not 360, due to a bug in render.c)
> then chop off the top and bottom quarter of the image, you get an
> image that would wrap correctly.  If you duplicate the
> ultra_wide_angle code into, say, the placeholder left for
> test_camera_1, then divide y0 by 2 just before setting cy and sy, and
> fix it so instead of dividing by 180 it multiplies by M_PI_180, you'll
> get what you're looking for.


From: Ronald L. Parker
Subject: Re: A few ideas
Date: 3 Jun 1998 13:38:17
Message: <35758805.414724101@news.povray.org>
On Wed, 03 Jun 1998 03:48:45 -0500, Mike Hough <POV### [at] aolcom>
wrote:

>As far as chopping off the top and bottom of the image, the tests I
>had done didn't require that, but I used the 1:2 aspect ratio
>recommended for spherical images.

I'm not sure what you mean here.  If you specify a right of 2x and an
up of z, you'll get an image suitable for mapping on an ellipsoid.  If
you specify an image width of 600 and a height of 300, it'll still
have the views to the rear above and below, and they'll still need to
be chopped off to work correctly.  ASCII art below.

Imagine you have a scene with two spheres, one in front and one
behind.  Render it with ultra_wide_angle and an angle of 360*pi, and
you'll get something like this, without the dotted horizontal lines:


      \_/

 - - - - - - - 
       _
\     / \    /
 |   |   |  |
/     \_/    \

 - - - - - - - 
       _
      / \


The sphere above and below is the same one as right and left, viewed
by looking up or down so far you're now looking backwards, between
your knees if you will, or bent over double with your hands behind
your head in a crab-walk.  The image you need for mapping is the part
between the dotted horizontal lines, which mark the zenith and nadir
and should ideally be the same color, all the way across the image.


From: Mike Hough
Subject: Re: A few ideas
Date: 4 Jun 1998 01:33:25
Message: <357631A5.815045EE@aol.com>
I tried it with the changes made to render.c.  It worked just like you
said, but I realized that this camera type is not going to create a
suitable image.  I didn't realize, however, that I could simply add
cameras using the test camera slots.  I'll keep working on this, as
that is very convenient for me.

I'll try to explain how an image must look in order for it to be
wrapped onto a sphere without any distortion.  It's very easy for me
to visualize, but the math escapes me (I keep trying).  If we imagine
a camera pointed at <0, 0, 1> and at the center of a room, what we
want is for the point directly overhead to occupy the entire first
row of the image.  The same goes for the point directly below us.
The center of the image will be a cylindrical projection of an angle
perpendicular to the sky_vector and wrapping around the right
direction.  The rest needs to have a point on an imaginary sphere
(with the camera at the center) match a point on a rectangular
screen.

The reason for the 1:2 aspect ratio is that the up only goes from -90
to 90.  It doesn't wrap all the way around.  The right goes all the
way from -180 to 180, but we get all the info in the view in the
image, since once the right gets to the other half of the image, we
start getting samples of the other half of the room.  That
description may sound awful, so I'll just say it's like slicing a
sphere from pole to pole on one side and then laying it out on a
rectangle.

I'd try an ASCII drawing, but it wouldn't show the effect very well,
so I uploaded an example to my web space (I can't upload to
news.povray.org through AOL for some reason).  It shows the effect
pretty well, although the top and bottom got cut off when I rendered.
If you put this on a sphere in POV using a spherical mapping and put
a camera inside, I think you'll see what I mean.  The entire image is
distorted, but you can clearly see that the four corners of the room
(each a different color) are in the center of the image and that the
top and bottom flare out at the edges.  When wrapped onto a sphere
this flare squishes together at the poles, creating an undistorted
replica of the original scene.  You'll get a white circle on the top
and bottom because the image is incomplete.

I uploaded the image as:

http://members.aol.com/amaltheaj5/panorama.html

It is nothing more than a square room with each of the four walls
given a different color, the ceiling being white and the floor black,
all with a grid on them to make the effect clearer.  I rendered this
with Ray Dream Studio, as it has a spherical camera.  It's been a
side project of mine to duplicate this in POV-Ray, hence my original
post (the others were just ideas I never tried).

I appreciate the help you've given me so far, and I surely wouldn't
mind if you continued, but don't go to too much trouble.  If I find a
way to do it I'll definitely post the code.  At this point I'm just
plugging away at the numbers, with nothing to go on but scraps of
math I've picked up here and there.  LOL.

-Mike

Ronald L. Parker wrote:

> On Wed, 03 Jun 1998 03:48:45 -0500, Mike Hough <POV### [at] aolcom>
> wrote:
>
> >As far as chopping off the top and bottom of the image, the tests I
> >had done didn't require that, but I used the 1:2 aspect ratio
> >recommended for spherical images.
>
> I'm not sure what you mean here.  If you specify a right of 2x and an
> up of z, you'll get an image suitable for mapping on an ellipsoid.  If
> you specify an image width of 600 and a height of 300, it'll still
> have the views to the rear above and below, and they'll still need to
> be chopped off to work correctly.  ASCII art below.
>
> Imagine you have a scene with two spheres, one in front and one
> behind.  Render it with ultra_wide_angle and an angle of 360*pi, and
> you'll get something like this, without the dotted horizontal lines:
>
>       \_/
>
>  - - - - - - -
>        _
> \     / \    /
>  |   |   |  |
> /     \_/    \
>
>  - - - - - - -
>        _
>       / \
>
> The sphere above and below is the same one as right and left, viewed
> by looking up or down so far you're now looking backwards, between
> your knees if you will, or bent over double with your hands behind
> your head in a crab-walk.  The image you need for mapping is the part
> between the dotted horizontal lines, which mark the zenith and nadir
> and should ideally be the same color, all the way across the image.


From: Chris Colefax
Subject: Re: Spherical camera
Date: 14 Jun 1998 10:34:23
Message: <3583DF6F.5E8F9A5D@geocities.com>
The last time this query came up (on comp.graphics.rendering.raytracing)
I suggested using a cylindrical type 1 camera, which allows a 360-degree
view horizontally.  Unfortunately, though, it does not allow you to see
the points directly above and below the camera, and the larger the
vertical viewing angle, the larger the distortion.  However, while
answering another question in this group, I tried the panoramic camera
(an obvious choice, surely!), and found that it does indeed allow you to
view the points above and below the camera.

Unfortunately, the camera does not properly render angles over 180
degrees (possibly the same bug Ronald pointed out with the
ultra_wide_angle?).  This problem can be quite easily overcome by first
rendering the image with the desired camera:

   camera {location <0, 0, 0> look_at <0, 0, 1>
      right x up y panoramic angle 180}

at a 1:1 ratio, and then rotating the camera by y*180 and rendering to a
second image of the same size.  When the two images are combined
side-by-side you should have the perfect image_map for a sphere.  In
fact, I tried mapping a quick test scene to a semitransparent sphere in
front of the original scene, and all objects seemed to correlate very
well, with no distortion (even at the poles).

------------

Mike Hough wrote:
> 
> I tried it with the changes made to render.c.  It worked just like
> you said, but I realized that this camera type is not going to
> create a suitable image.  I didn't realize, however, that I could
> simply add cameras using the test camera slots.  I'll keep working
> on this, as that is very convenient for me.
>
> I'll try to explain how an image must look in order for it to be
> wrapped onto a sphere without any distortion.  It's very easy for
> me to visualize, but the math escapes me (I keep trying).  If we
> imagine a camera pointed at <0, 0, 1> and at the center of a room,
> what we want is for the point directly overhead to occupy the
> entire first row of the image.  The same goes for the point
> directly below us.  The center of the image will be a cylindrical
> projection of an angle perpendicular to the sky_vector and wrapping
> around the right direction.  The rest needs to have a point on an
> imaginary sphere (with the camera at the center) match a point on a
> rectangular screen.
>
> The reason for the 1:2 aspect ratio is that the up only goes from
> -90 to 90.  It doesn't wrap all the way around.  The right goes all
> the way from -180 to 180, but we get all the info in the view in
> the image, since once the right gets to the other half of the
> image, we start getting samples of the other half of the room.
> That description may sound awful, so I'll just say it's like
> slicing a sphere from pole to pole on one side and then laying it
> out on a rectangle.
>
[snip]
>
> I appreciate the help you've given me so far, and I surely wouldn't
> mind if you continued, but don't go to too much trouble.  If I find
> a way to do it I'll definitely post the code.  At this point I'm
> just plugging away at the numbers, with nothing to go on but scraps
> of math I've picked up here and there.  LOL.

