I was wondering if there is a way to access the hit time (aka depth value)
for the output pixels of a render. The idea is to render a heightfield with
POV-Ray and read back each pixel ray's hit time.
If you are wondering why I want this information, take a look at this paper
from SIGGRAPH 2003: "View-dependent displacement mapping" by Wang et al.
(Microsoft), available on ACM.ORG. They compute displacements of height fields
relative to a reference plane and store them in a texture, one for each
view direction. Also, I would need to be able to set up an orthographic
camera (i.e. the rays should be shot parallel to each other through the
view-plane pixels, not originating from an eye point).
If anyone has any ideas on whether this would be possible with POV-Ray, please
set me on the right track :)
--n
> I was wondering if there is a way to access the hit time (aka depth value)
> for the output pixels of a render. The idea is to render a heightfield with
> POV-Ray and read back each pixel ray's hit time.
Employ no light sources and give all your objects

  hollow
  texture {
    pigment { rgb 1/eps transmit 1-eps }
    finish { ambient 1/255 }
  }
for small values of eps, e.g. 1/1000000. Now all your surfaces are (almost)
completely transparent, but each adds 1/255 to the brightness of the pixel.
Hence the byte value of the pixel (say, its red channel) contains the number
of surfaces hit.
Note that I did not test this. To be sure, you should set up a scene in
which every value for hit time occurs and check its output.
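A minimal test scene along these lines might look like this (just as untested;
the sphere, plane and camera are only placeholders):

  // no light sources at all; only the ambient term contributes
  global_settings { assumed_gamma 1 }  // keep output linear so each surface adds exactly 1/255

  #declare eps = 1e-6;

  #declare Count_Texture =
    texture {
      pigment { rgb 1/eps transmit 1-eps }
      finish { ambient 1/255 }
    }

  camera { location <0, 2, -5> look_at <0, 0, 0> }

  sphere { <0, 0, 0>, 1 hollow texture { Count_Texture } }
  plane  { y, -1 hollow texture { Count_Texture } }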
> [...] Also, I would need to be able to set up an orthographic
> camera (i.e. the rays should be shot parallel to each other through the
> view-plane pixels, not originating from an eye point).
Orthographic cameras are directly supported. Look at the docs,
Section 6.4.2.2.
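For example, something like this (the numbers are just placeholders):

  camera {
    orthographic
    location <0, 10, 0>  // somewhere above the height field
    direction -y         // all rays travel straight down, parallel to each other
    right x*10           // width of the viewing rectangle in scene units
    up z*10              // depth of the viewing rectangle in scene units
  }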
--
merge{#local i=-11;#while(i<11)#local
i=i+.1;sphere{<i*(i*i*(.05-i*i*(4e-7*i*i+3e-4))-3)10*sin(i)30>.5}#end
pigment{rgbt 1}interior{media{emission x}}hollow}// Mark Weyer
> > I was wondering if there is a way to access the hit time (aka depth value)
> > for the output pixels of a render. The idea is to render a heightfield with
> > POV-Ray and read back each pixel ray's hit time.
>
> Employ no light sources and give all your objects
>
>   hollow
>   texture {
>     pigment { rgb 1/eps transmit 1-eps }
>     finish { ambient 1/255 }
>   }
>
> for small values of eps, e.g. 1/1000000. Now all your surfaces are (almost)
> completely transparent, but each adds 1/255 to the brightness of the pixel.
> Hence the byte value of the pixel (say, its red channel) contains the number
> of surfaces hit.
> Note that I did not test this. To be sure you should set up a scene in
> which every value for hit time occurs and check its output.
This won't work. What you're actually describing is a method to count how
many surfaces lie on the line through a given pixel, not the depth value.
To get the depth value, just use the following texture for ALL objects, AFTER
they've been positioned and rotated to their place in the scene:
texture {
  pigment {
    spherical
    color_map { [0 rgb 0] [1 rgb 1] }
    scale Max_Distance
    translate Camera_Location
  }
  finish { ambient 1 diffuse 0 }
}
Now, Max_Distance has to be declared and gives the maximum distance you want
to be able to see, e.g.
#declare Max_Distance = 4;
will result in everything being black once it is more than four units away
from the camera. The spherical color map has to be positioned at the camera's
location so that distances are "measured" away from it, not from the origin,
so declare Camera_Location as your current camera's location.
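For example, with placeholder values matching an imaginary camera:

  #declare Max_Distance    = 4;
  #declare Camera_Location = <0, 1, -5>;

  camera { location Camera_Location look_at <0, 0, 0> }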
That's about it!
Regards,
Tim
--
"Tim Nikias v2.0"
Homepage: <http://www.nolights.de>
> This won't work. What you're actually describing is a method to count how
> many surfaces lie on the line through a given pixel, not the depth value.
You are right. I misunderstood the question.
> texture {
>   pigment {
>     spherical
>     color_map { [0 rgb 0] [1 rgb 1] }
>     scale Max_Distance
>     translate Camera_Location
>   }
>   finish { ambient 1 diffuse 0 }
> }
This assumes perspective projection. For orthographic projection the gradient
pattern might be more suitable than the spherical pattern. That would be:
pigment {
  gradient Camera_Direction
  colour_map { [0 rgb 0] [1 rgb 1] }
  scale Max_Distance
  translate Camera_Location
}
(assuming a normalized Camera_Direction) and the rest as you said.
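Spelled out in full, an untested sketch (placeholder values; #default is just
one way of attaching the pigment to every object that has no texture of its own)
might be:

  #declare Max_Distance     = 4;
  #declare Camera_Location  = <0.5, 2, 0.5>;
  #declare Camera_Direction = -y;  // normalized, and the same direction the camera points

  #default {
    texture {
      pigment {
        gradient Camera_Direction
        colour_map { [0 rgb 0] [1 rgb 1] }  // near = black, far = white; swap the entries to invert
        scale Max_Distance                  // the gradient repeats past Max_Distance, so keep it larger than the scene depth
        translate Camera_Location
      }
      finish { ambient 1 diffuse 0 }
    }
  }

  camera {
    orthographic
    location Camera_Location
    direction Camera_Direction
    right x*2  // size of the viewing rectangle, to taste
    up z*2
  }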
--
merge{#local i=-11;#while(i<11)#local
i=i+.1;sphere{<i*(i*i*(.05-i*i*(4e-7*i*i+3e-4))-3)10*sin(i)30>.5}#end
pigment{rgbt 1}interior{media{emission x}}hollow}// Mark Weyer
An alternative to these solutions is to modify the POV-Ray source to do
this. The distance a ray has travelled is already returned by the
ray-tracing function, so it's a very simple modification to set the pixel
output to this. This might be an easier solution if you need a very accurate
depth map or need to customise the map further. Unless you've looked at the
POV-Ray source before, though, Tim's method is less hassle.
-Chris
"Chris Johnson" <chris(at)chris-j(dot)co(dot)uk> wrote:
> An alternative to these solutions is to modify the POV-Ray source to do
> this. The distance a ray has travelled is already returned by the
> ray-tracing function, so it's a very simple modification to set the pixel
> output to this. This might be an easier solution if you need a very accurate
> depth map or need to customise the map further. Unless you've looked at the
> POV-Ray source before, though, Tim's method is less hassle.
>
> -Chris
Thanks for all the replies! Tim's method looks like a good suggestion,
although, now that I come to think of it, I think I forgot to mention
something which might keep it from working.
In reality, I need to shoot rays through fixed positions (on a uniform 2D grid)
on the reference plane (the reference plane lies 'on top' of the height
field), and I then want the distance travelled by each of those rays. This is
because the input (e.g. a 128x128 heightmap) has to map exactly onto another
128x128 patch. The way to do this would thus be to iterate over the 128x128
positions on the reference plane, shoot rays, and write each depth into the
output 128x128 patch.
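For what it's worth, that mapping can probably be made exact with an orthographic
camera whose viewing rectangle coincides with the height field. An untested
sketch (the file name and numbers are placeholders, and Depth_Texture stands for
whichever depth texture you end up using):

  // a POV-Ray height_field always covers the unit square 0..1 in x and z
  height_field {
    png "heightmap.png"        // the 128x128 input map (placeholder file name)
    texture { Depth_Texture }  // the depth texture discussed above (placeholder name)
  }

  camera {
    orthographic
    location <0.5, 2, 0.5>     // centred over the unit square, above the reference plane
    direction -y               // parallel rays straight down onto the field
    right x*1                  // viewing rectangle exactly covers the unit square
    up z*1
  }

  // Rendered with +W128 +H128, each pixel's ray then passes through the centre of
  // one cell of a uniform 128x128 grid on the reference plane. (Depending on
  // handedness the image may come out mirrored; negate right or up if so.)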
Chris, I haven't looked at the POV-Ray source before, but I have written my
own raytracers. Any pointers on where in the source I should look first?
Thanks,
--nico
-[Any pointers on where in the source I should look first?]-
The function 'Trace_Primary_Ray' in vbuffer.cpp is the place. When the vista
buffer is off, POV-Ray uses the 'Trace' function in render.cpp.
My implementation just comments out everything in Trace_Primary_Ray after
line 235 of vbuffer.cpp (in my source from May 2003):
  Intersection_Found = intersect_vista_tree(Ray, Root_Vista, x, &Best_Intersection);

and replaces it with

  /* write the hit distance into all three colour channels (grey-scale depth) */
  Colour[0] = Best_Intersection.Depth / 50;
  Colour[1] = Best_Intersection.Depth / 50;
  Colour[2] = Best_Intersection.Depth / 50;
  return (Best_Intersection.Depth);
The divisor of 50 seemed to give about the right brightness for the scale of
my scenes. This is entirely arbitrary; perhaps a logarithmic function would
be more suitable.
The code is very similar if you need to modify the Trace function as well,
though +UV can be used as a parameter when rendering to force the use of the
Trace_Primary_Ray function.
This code was a hack done in a few minutes to see if I could compile the POV
sources. I haven't tested it extensively at all, but it seems to work with
"normal" settings. I haven't tried turning photons or radiosity on while using
it, so I'm not sure whether they affect the image in subtle ways. I have tried
it with 16-bit output accuracy, though, and it works fine.
-Chris
Wasn't it Chris Johnson who wrote:
>An alternative to these solutions is to modify the POV-Ray source to do
>this. The distance a ray has travelled is already returned by the
>ray-tracing function, so it's a very simple modification to set the pixel
>output to this.
This was incorporated in a previous version of MegaPOV, but (as far as I
know) it hasn't yet made it into the current version of MegaPOV, which is
based on POV 3.5. In versions of MegaPOV above 0.3 and below 1.0, you just
add a post_process global setting like

  global_settings {
    post_process { depth {50, 100} }
  }

The two parameters control the depth corresponding to fully white and
the distance from there to the depth for fully black. (In the above
example: points 50 units or less from the camera are white, points 150
or more units away are black, and points between 50 and 150 units away
are shades of grey.)
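Assuming the ramp between those two depths is linear, that amounts to

  grey = 1 - min(max((depth - 50)/100, 0), 1)

with depth measured in scene units from the camera.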
It's odd that this feature didn't make it into MegaPOV 1.0, since it did
seem to be quite popular, particularly among people interested in "Magic
Eye" type autostereograms.
--
Mike Williams
Gentleman of Leisure
In article <stK### [at] econymdemoncouk>,
Mike Williams <nos### [at] econymdemoncouk> wrote:
> It's odd that this feature didn't make it into MegaPOV 1.0, since it did
> seem to be quite popular, particularly among people interested in "Magic
> Eye" type autostereograms.
Well, the post process feature itself needs additional work. Depth
output could have been added alone, but it'd be better to do it as a
post process feature.
--
Christopher James Huff <cja### [at] earthlinknet>
http://home.earthlink.net/~cjameshuff/
POV-Ray TAG: <chr### [at] tagpovrayorg>
http://tag.povray.org/