Subject: Re: scanline predator
From: Darren New
Date: 14 Oct 2010 13:46:31
Message: <4cb741f7$1@news.povray.org>
scott wrote:
>> The only thing I can think of is that you save the depth map along 
>> with the image, and then you only post-process the things at the same 
>> depth as the model of the predator, or something like that.
> 
> I'm not entirely sure what you are looking for, but your pixel shader 
> "strength" could depend on depth, so it blends out smoothly once a 
> certain distance away from the target depth.

I probably phrased it poorly.

Basically, I want to apply a screen-space shader but only to parts of the 
scene occluded by a model which is in turn partially occluded by other parts 
of the scene.

So imagine, say, a magnifying glass. It makes what's behind it bigger (which
is easy to do with a second pass over the screen data), but it doesn't make 
what's in front of it bigger.
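
The magnifying part itself is just a resampling pass. Something like this 
CPU-side sketch in C, where the buffer layout and lens parameters are 
invented for illustration:

  /* Second pass over an already-rendered frame: inside a circular "lens"
     at (cx,cy) with radius r, resample the image closer to the lens
     center, which magnifies whatever was drawn there.  src/dst are w*h
     RGBA buffers. */
  static void magnify_pass(const unsigned char *src, unsigned char *dst,
                           int w, int h, float cx, float cy, float r,
                           float zoom /* > 1 */)
  {
      for (int y = 0; y < h; y++) {
          for (int x = 0; x < w; x++) {
              float dx = x - cx, dy = y - cy;
              int sx = x, sy = y;
              if (dx * dx + dy * dy < r * r) {
                  sx = (int)(cx + dx / zoom);  /* shrink the sample offset: */
                  sy = (int)(cy + dy / zoom);  /* the image appears enlarged */
              }
              if (sx < 0) sx = 0; else if (sx >= w) sx = w - 1;
              if (sy < 0) sy = 0; else if (sy >= h) sy = h - 1;
              for (int c = 0; c < 4; c++)
                  dst[4 * (y * w + x) + c] = src[4 * (sy * w + sx) + c];
          }
      }
  }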

The APIs I'm using definitely expose the depth buffer. One of the tutorial 
programs I have renders the scene into a depth buffer from the POV of the 
light source, then uses that to figure out if a pixel is lit or shadowed 
from the POV of the camera.  This of course draws all the geometry twice.
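
The interesting bit of that tutorial is just a depth comparison. Roughly, as 
a CPU-side C sketch (the names are mine, not the tutorial's):

  /* Shadow-map test: a pixel is lit if its distance to the light is no
     greater than the nearest-surface distance the light recorded at the
     matching shadow-map texel. */
  static int pixel_is_lit(const float *light_depth, int lw, int lh,
                          int lx, int ly,      /* pixel projected into light space */
                          float dist_to_light, /* camera pixel's distance to light */
                          float bias)          /* fudge against self-shadow acne */
  {
      if (lx < 0 || lx >= lw || ly < 0 || ly >= lh)
          return 1;  /* outside the shadow map: assume lit */
      return dist_to_light <= light_depth[ly * lw + lx] + bias;
  }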

Since the effect I want is only from the POV of the camera, I'm trying to 
figure out if I can do it without rendering the entire scene twice.


> If your platform supports multiple render targets then the simplest 
> solution would be to just render the depth info simultaneously to a 
> separate FP32 texture as you go along.  This can obviously then be used 
> by the post processing shader.

Sure. I'm trying to figure out a simple way to apply that, though. I can do
that, I'm just not sure how to efficiently figure out *both* when the ghost 
is the closest thing to the camera at a particular pixel *and* what color 
the pixel would be behind the ghost.

All I can think of, using the depth buffer, is to render the entire scene 
into a depth buffer with the ghost, then render it a second time without the 
ghost; wherever the depth buffer has changed is where I apply the 
post-processing effect.
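
The selection step of that would boil down to something like this CPU-side C 
sketch (the epsilon for depth round-off is a guess):

  #include <math.h>

  /* Compare the depth buffer rendered with the ghost against the one
     rendered without it.  Where they differ, the ghost was the closest
     thing to the camera, and that's where the post-process applies. */
  static void build_effect_mask(const float *depth_with,
                                const float *depth_without,
                                unsigned char *mask, int n)
  {
      const float eps = 1e-5f;  /* tolerance for depth round-off */
      for (int i = 0; i < n; i++)
          mask[i] = fabsf(depth_with[i] - depth_without[i]) > eps;
  }

And the color behind the ghost is just the color buffer from the no-ghost 
pass, so that answers both questions, at the price of drawing everything 
twice.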

Or maybe I can do something funky, like render everything except the ghosts, 
put the ghosts in last, and somehow take advantage of that, but I think 
everything goes through the pixel shader. Maybe something like telling the 
pixel shader *not* to render the pixel, but managing to save the final 
locations in a different render target? I don't think I have a way of doing 
that.

Maybe have two render targets: one where I draw the ghost in the normal 
place to get the depth, and one where I set the ghost's depth waaaaay back, 
effectively taking the ghost out of the picture. Then I apply the 
post-process to the pixels whose depth doesn't match the depth map of the 
normal pass. I'm pretty sure I can't resolve the target without destroying 
the 3D information, so I can't render everything except the ghost, snapshot 
the scene, and then add the ghost in.
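
If that two-target idea works, the depth writes would behave like this toy 
z-buffer sketch in C (assuming depth runs 0 = near to 1 = far and both 
buffers start at 1.0):

  /* One geometry pass, two depth targets.  Target A records every
     fragment at its real depth; target B pushes ghost fragments to the
     far plane, so the ghost never wins the depth test there. */
  static void submit_fragment(float *depth_a, float *depth_b,
                              int i, float z, int is_ghost)
  {
      if (z < depth_a[i])
          depth_a[i] = z;                /* normal depth test */
      float zb = is_ghost ? 1.0f : z;    /* ghost shoved to the far plane */
      if (zb < depth_b[i])
          depth_b[i] = zb;
  }

Afterwards, any pixel where the two targets disagree is a pixel where the 
ghost is frontmost: the same comparison as before, but with only one trip 
through the geometry.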

Maybe I can at least depth-sort the models and only render the ones whose 
bounding box is closer than the farthest part of the ghost, so I only have 
to calculate what occludes the ghost.  (And of course, once the ghost is far 
enough away, it's too hard to see anyway, so don't even bother rendering it.)
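
Done conservatively against an axis-aligned bounding box, that test is 
cheap. A C sketch (the AABB representation is an assumption):

  /* A model can only occlude the ghost if the nearest point of its
     axis-aligned bounding box is closer to the camera than the ghost's
     farthest point. */
  static int might_occlude_ghost(const float bmin[3], const float bmax[3],
                                 const float cam[3], float ghost_far_dist)
  {
      float d2 = 0.0f;  /* squared distance from camera to nearest box point */
      for (int a = 0; a < 3; a++) {
          if (cam[a] < bmin[a])      { float d = bmin[a] - cam[a]; d2 += d * d; }
          else if (cam[a] > bmax[a]) { float d = cam[a] - bmax[a]; d2 += d * d; }
          /* otherwise the camera is inside the box's extent on this axis */
      }
      return d2 < ghost_far_dist * ghost_far_dist;
  }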

> I was trying to come up with an alternative that involved rendering full 
> screen quads with various depth/stencil options, but couldn't come up 
> with anything.  I have a niggling feeling something should be possible 
> that way though...

I think the problem is efficiently building the stencil in the first place. 
Running a second pass to draw the ghost effect shouldn't be bad.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."

