Subject: Re: Depth output from POVray
From: muyu
Date: 24 Feb 2019 19:45:00
Message: <web.5c733a4950a5b280a6ca73bb0@news.povray.org>
"Simen Kvaal" <sim### [at] studentmatnatuiono> wrote:
> I am replying to my own post now, because I've developed a way to do what I
> wanted. I hope this will be of use to anyone who needs a depth buffer from
> a POV-Ray scene.
>
> I looked at an old POV-zine, where it was mentioned that if I removed
> all textures from the objects and applied a default texture with a gradient
> varying from white (near) to black (far) along the looking direction of the
> camera, I'd essentially get the depth information in the rendering. Near
> points would be white and far points would be black.
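> A minimal sketch of that simpler approach would be something like this
> (assuming the camera sits at the origin looking along +z, and guessing 10
> units as the farthest visible distance):
>
> // simple, perspective-distorted depth texture for a camera looking along +z
> default {
>         texture {
>                 pigment {
>                         gradient z
>                         color_map {
>                                 [0.0 color rgb 1]  // near = white
>                                 [1.0 color rgb 0]  // far  = black
>                         }
>                         scale 10   // guessed distance to the farthest visible point
>                 }
>                 finish { ambient 1 diffuse 0 }   // shade independent of lighting
>         }
> }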
>
> Some of you asked why someone would want this. One answer (not mine) is
> to use it in a SIRDS program, which requires depth information from the
> scene in order to generate the illusion of depth. This method works fine for
> that purpose.
>
> However, I wanted a real Z-buffer output. I am testing a shading program I
> have written and want to use some objects whose appearance I already know
> well. I have generated several million random points that form a shape in
> three dimensions. POV cannot easily handle that many spheres, so I've
> written a program that makes a Z-buffer and shades the scene from that. In
> order to test the performance, I needed some Z-buffers of "real world"
> things, like cubes, spheres, tori, etc. Then it would be easy to see how
> realistic the shading of the Z-buffer was. (It's hard to tell with a
> smoke-like cloud of dots.)
>
> To the point. Here is a scene I wrote this morning (local time):
>
>
> // beginning of code:
>
> #declare cam_pos = <0, 0, -5>;
> #declare cam_rot = <20, 0, 0>;
> #declare max_depth = 10;
> #declare min_depth = 1;
>
>
> default {
>         texture {
>                 pigment {
>                         onion   // concentric spheres centered at the origin
>                         color_map {
>                                 [0.0 color rgb 1]  // near = white
>                                 [1.0 color rgb 0]  // far  = black
>                         }
>                         phase min_depth/(max_depth-min_depth)  // offset the wrap so the visible range starts at min_depth
>                         scale (max_depth-min_depth)            // stretch the 1-unit wrap over the visible depth range
>                 }
>                 finish { ambient 1 diffuse 0 }  // shade independent of lighting
>                 translate cam_pos  // center the spheres at the eye
>                 rotate cam_rot     // follow the camera rotation
>
>         }
> }
> camera {
>         location cam_pos
>         direction z
>         up y
>         right 4/3*x
>         look_at 0
>         rotate cam_rot
> }
>
> box { -1, 1 rotate <50, 40, 0> }
>
> // end of code.
>
>
> This scene is a general case. I can change the camera position in cam_pos
> and rotation in cam_rot and still generate a Z-buffer output. If I used a
> simple gradient texture, I would get errors, mainly because of perspective
> distortion: points at the edge of the image with color w, say 0.8, would
> actually be farther away than a point with color w in the middle of the
> image. The solution was to use the onion procedural texture, which consists
> of concentric spheres centered at the origin. For the depth-view purposes,
> the texture is translated to the camera location, so that the centre of the
> spheres is located at the viewpoint. It is then rotated along with the
> camera if the camera is rotated to a new place.
>
> By using this method, one can easily verify that a point (x, y, z) at
> distance d from the eye gets the same color as any other point (x1, y1, z1)
> at distance d from the eye.
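> (With the eye at <ex, ey, ez>, the translated onion pattern depends only on
> sqrt((x-ex)^2 + (y-ey)^2 + (z-ez)^2), i.e. on the Euclidean distance from
> the eye, so equal distance really does give equal shade.)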
>
> The next problem is scaling the texture. The texture originally has a
> range of 1; that is, it wraps from white to black every 1 unit from the
> centre of the texture, which is the eye in this case. We want to scale it so
> that the nearest *visible* point gets a shade close to white, while the
> farthest visible point gets a shade close to black. There must be no
> wrapping, or else we would get wrong depth information.
>
> The solution is quite easy: you scale and offset the wrapping of the texture
> with
>             phase min_depth/(max_depth-min_depth)
>             scale (max_depth-min_depth)
>
> where max_depth is an estimate of the distance to the farthest point we want
> to shade, and min_depth is the distance to the nearest.
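> With the numbers in the scene above (min_depth = 1, max_depth = 10), the
> intended mapping is roughly
>
>         shade = 1 - (d - min_depth) / (max_depth - min_depth)
>
> so a point at distance d = 1 comes out (nearly) white, d = 5.5 gives a mid
> grey of about 0.5, and d = 10 comes out (nearly) black, provided d stays
> inside the [min_depth, max_depth] range so the pattern never wraps.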
>
> The output from this scene is an approximate depth buffer, which can then
> be used as desired.
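> (For example, render the scene to an ordinary image file and read the grey
> values back in whatever program needs them; something along the lines of
> "povray +Iscene.pov +W640 +H480 +FN +Odepth.png" should do, where the file
> names and resolution are of course just placeholders.)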
>
> Simen.


I would like to get a depth image and found your solution here. Thanks.
Beyond the depth image, I would also like to get the 3D coordinates of each pixel.
Do you have any idea how to do that? Thanks again.

Best
Shouyang

