Dear all,
For a technical project I am doing concerning stereo reconstruction, it would be
helpful to have access to the depth (z) coordinate of the directly visible
object at each pixel in the rendered image.
Ideally, I would have a file that gives (IMAGEHEIGHT x IMAGEWIDTH) floating
point numbers (IEEE-754 format), each giving the projected "z" value for the
corresponding pixel (or "infinity" if there is no object in the eye->pixel ray
path).
Has something like this perhaps been done before? If not, how difficult would it
be to add this myself (I am proficient in C, but I have never looked at the
source code of POV-Ray)?
Best regards, Sidney
"Sidney Cadot" <sid### [at] jigsawnl> wrote in message
news:web.49c79d64a563150ecb94416e0@news.povray.org...
> Dear all,
>
> For a technical project I am doing concerning stereo reconstruction, it
> would be helpful to be able to have access to the depth (z) coordinate of
> the directly visible object at each pixel in the rendered image.
>
> Ideally, I would have a file that gives (IMAGEHEIGHT x IMAGEWIDTH) floating
> point numbers (IEEE-754 format), each giving the projected "z" value for
> the corresponding pixel (or "infinity" if there is no object in the
> eye->pixel ray path).
>
> Has something like this perhaps been done before? If not, how difficult
> would it be to add this myself (I am proficient in C, but I have never
> looked at the source code of POV-Ray)?
>
> Best regards, Sidney
>
I don't think you'd necessarily need to get into the source code to do this.
If you can build the contents of the scene into a single CSG union assigned
to an identifier in POV-Ray SDL, then you should be able to get this data
using a bit of maths and the POV-Ray 'trace' function. You can loop through
the pixel positions on the image plane, tracing rays out from the camera
position to retrieve the point at which each ray hits the union object. You
can then write the floats out using the #fopen, #write and #fclose
directives.
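To make that concrete, a rough sketch of such a loop might look like the
following (assuming the default perspective camera geometry with sky y and
right 4/3*x; Scene, ImgW and ImgH are placeholder names, #write emits
plain-text numbers rather than binary IEEE-754, and -1 stands in for
"infinity"):

// Hypothetical sketch: one depth value per pixel via trace(), written as text
#declare Scene = union { sphere {0,1} plane {y,-2} }  // stand-in for the real scene union

#declare CamPos   = <0, 1, -5>;
#declare LookAt   = <0, 0, 0>;
#declare CamDir   = vnormalize(LookAt - CamPos);
#declare CamRight = (4/3)*vnormalize(vcross(y, CamDir));  // assumes sky = y, 4:3 aspect
#declare CamUp    = vnormalize(vcross(CamDir, CamRight));

#declare ImgW = 64;   // match +W on the command line
#declare ImgH = 48;   // match +H on the command line

#fopen Depth "depth.txt" write
#declare Py = 0;
#while (Py < ImgH)
  #declare Px = 0;
  #while (Px < ImgW)
    // ray through the centre of pixel (Px, Py), top row first
    #declare Dir = CamDir
      + ((Px + 0.5)/ImgW - 0.5)*CamRight
      + (0.5 - (Py + 0.5)/ImgH)*CamUp;
    #declare Norm = <0,0,0>;
    #declare Hit  = trace(Scene, CamPos, Dir, Norm);
    #if (vlength(Norm) = 0)
      #write (Depth, -1, "\n")                          // no hit: sentinel value
    #else
      #write (Depth, vdot(Hit - CamPos, CamDir), "\n")  // depth along the viewing axis
    #end
    #declare Px = Px + 1;
  #end
  #declare Py = Py + 1;
#end
#fclose Depth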
Regards,
Chris B.
> Dear all,
>
> For a technical project I am doing concerning stereo reconstruction, it would be
> helpful to be able to have access to the depth (z) coordinate of the directly
> visible object at each pixel in the rendered image.
>
> Ideally, I would have a file that gives (IMAGEHEIGHT x IMAGEWIDTH) floating
> point numbers (IEEE-754 format), each giving the projected "z" value for the
> corresponding pixel (or "infinity" if there is no object in the eye->pixel ray
> path).
>
> Has something like this perhaps been done before? If not, how difficult would it
> be to add this myself (I am proficient in C, but I have never looked at the
> source code of POV-Ray) ?
>
> Best regards, Sidney
I think you may have access to depth with MegaPOV: have a look at
"2.7.5. Post processing" in the MegaPOV documentation
(http://megapov.inetart.net/manual-1.2.1/global_settings.html#post_processing).
Thibaut
"Sidney Cadot" <sid### [at] jigsawnl> wrote:
> For a technical project I am doing concerning stereo reconstruction, it would be
> helpful to be able to have access to the depth (z) coordinate of the directly
> visible object at each pixel in the rendered image.
....
> Has something like this perhaps been done before? If not, how difficult would it
> be to add this myself (I am proficient in C, but I have never looked at the
> source code of POV-Ray) ?
To do this in POV-Ray, you might assign to all objects a texture with a
greyscale gradient pigment, and an ambient-only finish, so you get a render
with pixel brightness indicating depth.
You can also use a multi-layered texture using differently scaled gradient
pigments with red, green and blue color respectively to increase the
resolution.
If you use a custom-tailored function pattern instead of the gradient, you can
get e.g. a logarithmic relationship between depth and brightness.
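For illustration, such a function pattern might look roughly like this (the
camera location, the 100-unit MaxDepth limit and the names DepthFn and
Depth_Texture are all assumptions, not anything prescribed here):

// Hypothetical sketch: logarithmic depth-to-brightness mapping via a function pattern
#declare CamX = -4;  #declare CamY = 7;  #declare CamZ = -4;   // assumed camera location
#declare MaxDepth = 100;                                       // assumed far limit

#declare DepthFn = function(x, y, z) {
  // 1 (white) at the camera, falling off logarithmically to 0 (black) at MaxDepth
  1 - min(1, ln(1 + sqrt(pow(x-CamX,2) + pow(y-CamY,2) + pow(z-CamZ,2))) / ln(1 + MaxDepth))
}

#declare Depth_Texture = texture {
  pigment {
    function { DepthFn(x, y, z) }
    color_map { [0 color rgb 0] [1 color rgb 1] }
  }
  finish { ambient 1 diffuse 0 }   // ambient-only, so lighting does not disturb the depth values
}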
If you don't have to use official POV, you may also try out MegaPOV's
post_process feature, which allows you to mess around quite a lot with the
rendered image before writing it to file, including manipulations based on the
depth of the pixel (although this is not strictly z, but distance between
observer and scene geometry).
Or you can mess around with the source code, of course. In that case,
Trace::TraceRay() (in trace.cpp) is probably the place to start (assuming
you're going for the 3.7 code; I'm not so much into 3.6's bowels), as it is the
"topmost" code that has access to the "hit point" co-ordinates. Make sure you
test for ray.IsPrimaryRay() (to check whether it is not an auxiliary ray for
radiosity, shadow testing or what-have-you) and check ticket.traceLevel (which
gives you information about how often the ray was reflected/refracted already).
Chris, Thibaut, Clipka,
Thanks for your excellent suggestions. I wasn't aware of the MegaPOV
post-processing feature or the "trace" feature in POV-Ray. I will experiment a
bit; this will probably work out -- both approaches seem promising.
Regards, Sidney
"Sidney Cadot" <sid### [at] jigsawnl> wrote:
> Chris, Thibaut, Clipka,
>
>
> Thanks for your excellent suggestions. I wasn't aware of the mega-pov
> post-processing feature or the "trace" feature in povray. I will experiment a
> bit, this will probably work out -- both approaches seem promising.
>
> Regards, Sidney
Sidney,
I've been thinking about this thread and what is explained on
http://www.wowvx.com/create/Format.aspx
A depth map may allow creating a 2D-plus-Depth image.
Is there a way to extract "Declypse" information from a POV-Ray model?
On 2009-05-08 09:49, Marvin wrote:
> "Sidney Cadot" <sid### [at] jigsawnl> wrote:
>> Chris, Thibaut, Clipka,
>>
>>
>> Thanks for your excellent suggestions. I wasn't aware of the mega-pov
>> post-processing feature or the "trace" feature in povray. I will experiment a
>> bit, this will probably work out -- both approaches seem promising.
>>
>> Regards, Sidney
>
> Sidney,
>
> I've been thinking about this thread and what is explained on
> http://www.wowvx.com/create/Format.aspx
>
> Depth map may allow creating of 2D-plus-Depth image.
>
> Is there a way to extract "Declypse" information from POV-Ray model?
>
You can't do that with a single image.
What you can do:
- Render only the background, then only the foreground.
- Render twice from two different points of view, shifted left and right (see
the sketch below).
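As a rough illustration of the second option only, a camera shifted along its
own right vector might look like this (EyeSep, CamPos, LookAt and the Eye
switch are made-up placeholders; render twice, e.g. with Declare=Eye=-1 and
Declare=Eye=1 on the command line):

// Hypothetical sketch: left/right viewpoints shifted along the camera's right vector
#declare EyeSep = 0.065;                 // assumed eye separation, in scene units
#declare CamPos = <0, 1, -5>;            // assumed camera location
#declare LookAt = <0, 0, 0>;             // assumed look_at point
#declare RightV = vnormalize(vcross(y, LookAt - CamPos));  // camera's right direction (sky = y)

#ifndef (Eye) #declare Eye = -1; #end    // -1 = left eye, +1 = right eye

camera {
  location CamPos + Eye*(EyeSep/2)*RightV
  look_at  LookAt
}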
For the depth map, just render the scene with all textures replaced by a single
gradient oriented toward the camera. The gradient direction goes from the
look_at to the camera location, black for far, white for close.
Alain <ele### [at] netscapenet> wrote:
> On 2009-05-08 09:49, Marvin wrote:
> > "Sidney Cadot" <sid### [at] jigsawnl> wrote:
> >> Chris, Thibaut, Clipka,
> >>
> >>
> >> Thanks for your excellent suggestions. I wasn't aware of the mega-pov
> >> post-processing feature or the "trace" feature in povray. I will experiment a
> >> bit, this will probably work out -- both approaches seem promising.
> >>
> >> Regards, Sidney
> >
> > Sidney,
> >
> > I've been thinking about this thread and what is explained on
> > http://www.wowvx.com/create/Format.aspx
> >
> > Depth map may allow creating of 2D-plus-Depth image.
> >
> > Is there a way to extract "Declypse" information from POV-Ray model?
> You can't with a single image.
> What you can do:
> - Render only the background then the foreground.
> - Render twice from two different points of view shifted left and right.
>
> For the depth map, just render the scene with all textures replaced by a single
> gradient oriented toward the camera. The gradient direction goes from the
> look_at to the camera location, black for far, white for close.
Could you please give an example of a texture with grayscale gradient oriented
towards the camera?
Regards,
Marvin
"Marvin" <mto### [at] grfhr> wrote in message
news:web.4a0942cf3a6933703f8cf8140@news.povray.org...
> Alain <ele### [at] netscapenet> wrote:
>> On 2009-05-08 09:49, Marvin wrote:
>> > "Sidney Cadot" <sid### [at] jigsawnl> wrote:
>> >
>> > I've been thinking about this thread and what is explained on
>> > http://www.wowvx.com/create/Format.aspx
>> >
>> > Depth map may allow creating of 2D-plus-Depth image.
>> >
>> > Is there a way to extract "Declypse" information from POV-Ray model?
>
>> You can't with a single image.
>> What you can do:
>> - Render only the background then the foreground.
>> - Render twice from two different points of view shifted left and right.
>>
>> For the depth map, just render the scene with all textures replaced by a
>> single gradient oriented toward the camera. The gradient direction goes
>> from the look_at to the camera location, black for far, white for close.
>
> Could you please give an example of a texture with grayscale gradient
> oriented towards the camera?
>
#include "transforms.inc"
#declare Camera_Location = <-4,7,-4>;
#declare Camera_Lookat = <10,-10,10>;
camera {location Camera_Location look_at Camera_Lookat}
// Union containing all scene objects
union {
sphere {0,1}
sphere {<-2,0,-2>,1}
sphere {< 5,0, 5>,1}
sphere {<10,0,10>,1}
box {-10,10}
pigment {
gradient x color_map {
[0 color rgb 1]
[1 color rgb 0]
}
scale <vlength(Camera_Location-Camera_Lookat),1,1>
Reorient_Trans(x, Camera_Lookat-Camera_Location)
translate Camera_Location
}
finish {ambient 1}
}
> "Marvin" <mto### [at] grfhr> wrote in message
>> Could you please give an example of a texture with grayscale gradient
>> oriented
>> towards the camera?
... but thinking about it, you're probably better off using an 'onion'
pattern than a gradient pattern for this, so that it fades to black as a
function of the distance from the camera rather than as a function of the
distance from the plane the camera sits on. Note that you should scale in all
3 dimensions and that you no longer need to reorient the pigment.
pigment {
  // concentric spheres centred on the camera (after the translate): white near, black far
  onion color_map {
    [0 color rgb 1]
    [1 color rgb 0]
  }
  // one cycle spans the camera-to-look_at distance
  scale vlength(Camera_Location-Camera_Lookat)
  translate Camera_Location
}
Regards,
Chris B