POV-Ray : Newsgroups : povray.advanced-users : Depth output from POVray
  Depth output from POVray (Message 11 to 20 of 23)
From: Simen Kvaal
Subject: Re: Depth output from POVray
Date: 22 Sep 1999 06:55:13
Message: <37e8b591@news.povray.org>
I am replying to my own post because I've now developed a way to do what I
wanted. I hope this is of use to anyone who needs a depth buffer from a
POV-Ray scene.

I looked at an old pov-zine, and it mentioned that if I removed all textures
from the objects and applied a default texture with a gradient that varies
from white (near) to black (far) along the camera's viewing direction, then
I'd essentially get the depth information in the rendering. Near points
would be white, and the black points would be the ones far away.
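
A minimal sketch of that simple version (assuming the camera sits at the
origin looking down +z, and taking 10 units as the farthest visible
distance) would be:

default {
        texture {
                pigment {
                        gradient z   // naive: linear ramp along the view axis
                        color_map {
                                [0.0 color rgb 1]  // white = near
                                [1.0 color rgb 0]  // black = far
                        }
                        scale 10  // assumed distance to the farthest visible point
                }
                finish { ambient 1 diffuse 0 }  // disable shading: color = depth only
        }
}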

Some of you asked why someone would want this. One answer (not mine) is to
use it in a SIRDS program, which requires depth information from the scene
in order to generate the illusion of depth. This method works fine for that
purpose.

However, I wanted a real Z-buffer output. I am testing a shading program I
have written and want to use some objects whose appearance I already know.
I have generated several million random points that form a shape in three
dimensions. POV cannot easily handle that many spheres, so I've written a
program that makes a Z-buffer and shades the scene from that. In order to
test the performance, I needed some Z-buffers of "real" world things, like
cubes, spheres, tori, etc. Then it would be easy to see how realistic the
shading of the Z-buffer was. (It's hard to tell with a smoke-like cloud of
dots.)

To the point. Here is a scene I wrote this morning (local time):


// beginning of code:

#declare cam_pos = <0, 0, -5>;  // camera location
#declare cam_rot = <20, 0, 0>;  // camera rotation, in degrees
#declare max_depth = 10;        // distance to the farthest point to shade
#declare min_depth = 1;         // distance to the nearest point to shade


default {
        texture {
                pigment {
                        onion  // concentric spheres centered at the origin
                        color_map {
                                [0.0 color rgb 1]  // white = near
                                [1.0 color rgb 0]  // black = far
                        }
                        phase min_depth/(max_depth-min_depth)
                        scale (max_depth-min_depth)
                }
                finish { ambient 1 diffuse 0 }  // no shading: color = depth only
                translate cam_pos  // center the spheres on the eye
                rotate cam_rot     // follow the camera rotation
        }
}
camera {
        location cam_pos
        direction z
        up y
        right 4/3*x
        look_at 0
        rotate cam_rot
}

box {-1, 1 rotate <50, 40, 0>}  // test object

// end of code.


This scene is a general case: I can change the camera position in cam_pos
and the rotation in cam_rot and still generate a Z-buffer output. If I used
a simple gradient texture, I would get errors, mainly because of perspective
distortion. A point at the edge of the image with color w, say 0.8, would
actually be farther away than a point with color w in the middle of the
image. The solution was to use the onion procedural texture, which consists
of concentric spheres centered at the origin. For depth-view purposes, the
texture is translated to the camera location, so that the centre of the
spheres sits at the viewpoint. It is then rotated along with the camera, if
the camera is rotated to a new place.

By using this method, one can easily verify that a point (x,y,z) at
distance d from the eye gets the same color as any other point (x1,y1,z1)
at distance d from the eye.
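
In other words, the raw onion pattern value at a point P is essentially

    value(P) = mod(d, 1),  where d = vlength(P - eye)

so the value depends only on the distance d, not on the direction from the
eye to P.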

The next problem is scaling the texture. The texture originally has a period
of 1; that is, it wraps from white to black every unit of distance from the
centre of the texture, which is the eye in this case. We want to scale it so
that the nearest *visible* point gets a shade close to white, while the
farthest visible point gets a shade close to black. There must be no
wrapping, or we would get wrong depth information.

The solution is quite easy: you scale the texture and offset the wrapping
with

            phase min_depth/(max_depth-min_depth)
            scale (max_depth-min_depth)

where max_depth is an estimate of the distance to the farthest point we want
to shade, and min_depth is the distance to the nearest.
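
Written out, the intended mapping (with the wrap point pushed outside the
visible range) is approximately

    shade(d) = 1 - (d - min_depth) / (max_depth - min_depth),
               for min_depth <= d <= max_depth

so d = min_depth renders white (1) and d = max_depth renders black (0).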

The output from this scene is an approximate depth buffer of the scene,
which can then be used as desired.

Simen.



From: Ron Parker
Subject: Re: Depth output from POVray
Date: 22 Sep 1999 09:15:26
Message: <37e8d66e@news.povray.org>
On Wed, 22 Sep 1999 11:26:54 +0300, Peter Popov wrote:
>On Tue, 21 Sep 1999 20:16:01 -0500, "Bob Hughes" <inv### [at] aolcom>
>wrote:
>
>>Think I'd use a gradient z pigment instead.  I have one RDS (random
>>dot stereogram, not RayDreamStudio) program here that accepts the
>>depth info in Bmp images which I suppose might be simple color
>>shifting, not sure.  I need to dust it off and give it a try.
>>
>>Bob
>
>A linear gradient will give a very inaccurate result in points not
>directly in front of the camera. I sometimes use a spherical pigment.
>The problem is that it's hard to get its scale right, especially if
>you have a ground plane, but for indoor or single object scenes it is
>the best choice.

When I wrote my patch, I originally had it take the length of the eye ray
(analogous to a spherical gradient), but that seemed to warp the resulting
scene. When I switched to a linear gradient (the length of the eye ray
dotted with the camera direction vector) it flattened out.
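
The difference between the two measures, in scene-file terms (the point Pt
and camera values here are made up):

#declare Pt      = <1, 2, 7>;   // some surface point
#declare Cam_Loc = <0, 0, -5>;  // camera location
#declare Cam_Dir = z;           // unit-length viewing direction
// spherical measure: straight-line distance from the eye
#declare Radial_Depth = vlength(Pt - Cam_Loc);
// planar measure: eye ray projected onto the viewing direction (true z-buffer)
#declare Planar_Depth = vdot(Pt - Cam_Loc, Cam_Dir);
#debug concat("radial ", str(Radial_Depth,0,3), "  planar ", str(Planar_Depth,0,3), "\n")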

Another thing that can help, if your SIRDS program supports it (I have one
that does), is to use hf_gray_16.
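
In the scene file that is a one-line global setting (POV-Ray 3.1 syntax):

global_settings { hf_gray_16 on }  // write the output as 16-bit grayscale height data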



From: Bob Hughes
Subject: Re: Depth output from POVray
Date: 22 Sep 1999 17:33:08
Message: <37e94b14@news.povray.org>
Interesting to know. Thanks guys.

Bob

Ron Parker <par### [at] fwicom> wrote in message
news:37e8d66e@news.povray.org...
> [Ron's message quoted in full - snipped]



From: Bob Hughes
Subject: Re: Depth output from POVray
Date: 22 Sep 1999 17:48:42
Message: <37e94eba@news.povray.org>
Please forgive me if I'm being ignorant, but from what Ron has said about
planar versus spherical rays (sorry Ron, I'm quoting vaguely) it seems the
"depth" is not part of the original calculations. Could that be true at all?
For instance, does a scene get figured orthographically first, with any
other camera perspective added in afterward, like a sort of non-ray warping?
Just curious now.

Bob

Simen Kvaal <sim### [at] studentmatnatuiono> wrote in message
news:37e8b591@news.povray.org...
> [original post quoted in full - snipped]



From: Ron Parker
Subject: Re: Depth output from POVray
Date: 22 Sep 1999 17:53:31
Message: <37e94fdb@news.povray.org>
On Wed, 22 Sep 1999 16:37:19 -0500, Bob Hughes wrote:
>[question quoted above - snipped]

Nope, not true at all.  The camera type is the only thing that determines 
the distribution of the eye rays that are shot into the scene.

My calculations were for a standard perspective camera, as that was the 
only kind available in POV 2.2.  They'd do weird things with the newer 
camera types.



From: Bob Hughes
Subject: Re: Depth output from POVray
Date: 23 Sep 1999 05:19:57
Message: <37e9f0bd@news.povray.org>
Great, thanks for making that clear to me. The camera is the key, I guess
you could say. I knew of the typical concept of rays going from the camera
into the scene rather than from the scene to the camera; I just wondered
about that possible final derivative because it sure sounded like a
two-step process you were talking about. I need to stay out of the
logistics of it anyway if I can't work with them  : )

Bob

Ron Parker <par### [at] fwicom> wrote in message
news:37e94fdb@news.povray.org...
> [Ron's message quoted in full - snipped]


From: muyu
Subject: Re: Depth output from POVray
Date: 24 Feb 2019 19:45:00
Message: <web.5c733a4950a5b280a6ca73bb0@news.povray.org>
"Simen Kvaal" <sim### [at] studentmatnatuiono> wrote:
> [original post quoted in full - snipped]


I would like to get a depth image and found your solution here. Thanks.
Beyond the depth image, I would like to get the 3D coordinates of each pixel.
Do you have any idea how to do that? Thanks again.

Best
Shouyang



From: William F Pokorny
Subject: Re: Depth output from POVray
Date: 25 Feb 2019 10:00:12
Message: <5c7402fc$1@news.povray.org>
On 2/24/19 7:43 PM, muyu wrote:
> "Simen Kvaal" <sim### [at] studentmatnatuiono> wrote:
...
> [question quoted above - snipped]

Hi. If you are willing to compile your own POV-Ray version: if I remember
correctly, someone in the last year or two created a fork/branch of POV-Ray
on GitHub that adds a depth-map output.

Unfortunately, while trying to figure out how to search all the POV-Ray
forks on GitHub for that branch, I triggered some sort of "abuse detection"
thing, and I'm frozen out of GitHub search for a while... I'm unsure how to
really do such a search, so maybe my searching did indeed create some kind
of problem for GitHub.

At worst, you can look through the user forks for the branch, starting
perhaps from:

   https://github.com/POV-Ray/povray/network/members

There have been other patches over the years too, but I'm unsure of their
current state.

Bill P.



From: Bald Eagle
Subject: Re: Depth output from POVray
Date: 25 Feb 2019 16:45:01
Message: <web.5c74617750a5b280765e06870@news.povray.org>
"muyu" <lsy### [at] gmailcom> wrote:

> I would like to get depth image and found your solution here. Thanks.
> Beyond depth image, I would like to get the coordinates of each pixel.
> Do you have any idea to do that? Thanks again.
>
> Best
> Shouyang

Hi Shouyang,

Perhaps you could take a look at something a few of us worked on a while back:

http://news.povray.org/povray.tools.general/thread/%3Cweb.5ba183edb47e1707a47873e10%40news.povray.org%3E/?mtop=424641

You could put your whole scene inside a union{} and then use the macro to output
where the ray intersects the scene for each pixel.
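
For the trace() route, a minimal sketch (POV-Ray 3.5+ syntax; the Scene
union, camera location, and ray direction here are made up):

#declare Scene = union {
        box { -1, 1 rotate <50, 40, 0> }
        sphere { <2, 0, 2>, 1 }
}
#declare Cam_Loc = <0, 0, -5>;
#declare Norm = <0, 0, 0>;
// trace() returns the first intersection of the ray with Scene and fills
// Norm with the surface normal there; Norm stays <0,0,0> on a miss
#declare Hit = trace(Scene, Cam_Loc, <0.1, 0.2, 1>, Norm);
#if (vlength(Norm) != 0)
        #debug concat("hit at <", vstr(3, Hit, ", ", 0, 3), ">\n")
#end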

Alternatively, you might be able to use the color of the depth map to return
a numerical z-position, and use the x,y position on the screen to give the
rest of the vector info. Using an orthographic camera would probably make
life easiest here.

So: create a deep gradient texture, apply it to an untextured scene, and
then do a nested x,y loop with eval_pigment to determine the x, y, and z
coordinates (within certain limits of accuracy).
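
Something like this second pass could work (a sketch only; the file name
"depth_pass.png", the 8x6 sample grid, and the camera extents are made up
and must match whatever the depth pass actually used):

#include "functions.inc"    // provides the eval_pigment() macro

#declare Min_Depth  = 1;
#declare Max_Depth  = 10;
#declare Cam_Width  = 4;    // world-space width of the orthographic camera
#declare Cam_Height = 3;

#declare Depth_Map = pigment {
        image_map { png "depth_pass.png" once }  // spans 0..1 in x and y
}

#declare Cols = 8;          // stand-ins for the real render resolution
#declare Rows = 6;
#declare Row = 0;
#while (Row < Rows)
        #declare Col = 0;
        #while (Col < Cols)
                #declare U = (Col + 0.5) / Cols;
                #declare V = (Row + 0.5) / Rows;
                #declare C = eval_pigment(Depth_Map, <U, V, 0>);
                // invert the white(near)-to-black(far) ramp to recover z
                #declare Pz = Min_Depth + (1 - C.red) * (Max_Depth - Min_Depth);
                // orthographic camera: x and y map linearly to the screen
                #declare Px = (U - 0.5) * Cam_Width;
                #declare Py = (V - 0.5) * Cam_Height;
                #debug concat("<", str(Px,0,3), ", ", str(Py,0,3), ", ", str(Pz,0,3), ">\n")
                #declare Col = Col + 1;
        #end
        #declare Row = Row + 1;
#end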



From: muyu
Subject: Re: Depth output from POVray
Date: 26 Feb 2019 10:10:01
Message: <web.5c75558f50a5b2807d6f44740@news.povray.org>
"Bald Eagle" <cre### [at] netscapenet> wrote:
> "muyu" <lsy### [at] gmailcom> wrote:
>
> > I would like to get depth image and found your solution here. Thanks.
> > Beyond depth image, I would like to get the coordinates of each pixel.
> > Do you have any idea to do that? Thanks again.
> >
> > Best
> > Shouyang
>
> Hi Shouyang,
>
> Perhaps you could take a look at something a few of us worked on a while back:
>
>
http://news.povray.org/povray.tools.general/thread/%3Cweb.5ba183edb47e1707a47873e10%40news.povray.org%3E/?mtop=424641

>
> You could put your whole scene inside a union{} and then use the macro to output
> where the ray intersects the scene for each pixel.
>
> Alternatively, you might be able to use the color of the depth map to return a
> numerical z-position, and use the x,y position on the screen to give the rest of
> the vector info.   Using an orthographic camera would probably make life easiest
> here.
>
> So, create a deep gradient texture, apply it to an untextured scene, Then you
> can do an x, y nested loop with eval_pigment to determine the x, y, and z
> coordinates (within certain limits of accuracy)

Thanks for your answer. Your work on the trace macro is very interesting!
However, I don't know how to do that for all the pixels and output the
coordinate information. Could you please provide a small code example?
(Sorry for this greedy request.) Thanks again.



