> The only thing I can think of is that you save the depth map along with
> the image, and then you only post-process the things at the same depth as
> the model of the predator, or something like that.
I'm not entirely sure what you are looking for, but your pixel shader's
"strength" could depend on depth, so that the effect blends out smoothly
beyond a certain distance from the target depth.
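A minimal sketch of that falloff idea, on the CPU for illustration (all names here are made up; in practice this would be a few instructions per pixel in the shader):

```python
def effect_strength(pixel_depth, target_depth, falloff_range):
    """Return a 0..1 blend factor that is 1.0 at the target depth and
    fades smoothly to 0.0 as the pixel's depth moves away from it."""
    d = abs(pixel_depth - target_depth) / falloff_range
    d = min(d, 1.0)
    # smoothstep-style ease-out so the edge of the effect isn't a hard line
    return 1.0 - (d * d * (3.0 - 2.0 * d))
```

Multiply the post-process colour by this factor before blending and the effect dies off gently instead of cutting out at a hard depth boundary.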
> But how do you know what that is? Re-render just the predator in a
> separate buffer?
A lot of hardware won't give you direct access to the depth buffer, so APIs
typically don't expose the depth information (or, if they do, they fall back
on a very slow readback path or may simply fail, so I wouldn't rely on it).
If your platform supports multiple render targets then the simplest solution
would be to render the depth info simultaneously to a separate FP32
texture as you go along. This can then obviously be read by the
post-processing shader.
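Conceptually the MRT version looks like this, simulated here on the CPU with plain lists standing in for the colour buffer and the FP32 depth texture (on real hardware this is just the pixel shader writing to two colour outputs):

```python
def shade_pixel(colour, depth):
    # An MRT pixel shader emits one value per render target:
    # the shaded colour to target 0 and the raw depth to target 1.
    return colour, depth

colour_target = []  # stands in for the normal back buffer
depth_target = []   # stands in for the separate FP32 depth texture

# Made-up sample pixels: (colour, depth) pairs
for colour, depth in [((255, 0, 0), 0.25), ((0, 255, 0), 0.8)]:
    c, d = shade_pixel(colour, depth)
    colour_target.append(c)
    depth_target.append(d)
```

The point is that the depth write costs almost nothing extra, since it happens in the same pass as the normal rendering.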
If not, you might be able to squeeze the depth data into the alpha channel,
so long as you don't need it for anything else and are happy with 8-bit
resolution for the depth info. Just set the alpha component of your pixel
output colour to a suitably scaled depth at the end of the pixel shader.
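A sketch of what "suitably scaled" might mean, assuming the depth is remapped over hypothetical near/far clip planes before quantising to 8 bits:

```python
def pack_depth_to_alpha(depth, near, far):
    """Scale depth into 0..1 over [near, far] and quantise to 8 bits.
    The result is what you'd write to the alpha component."""
    t = (depth - near) / (far - near)
    t = max(0.0, min(1.0, t))
    return round(t * 255)

def unpack_alpha_to_depth(alpha, near, far):
    """Recover an approximate depth in the post-processing shader."""
    return near + (alpha / 255.0) * (far - near)
```

Note the resolution cost: with, say, near = 1 and far = 100, one alpha step covers about 0.39 depth units, so depth comparisons in the post-process can only be that precise.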
I was also trying to think of an alternative involving rendering full-screen
quads with various depth/stencil options, but couldn't come up with
anything. I have a niggling feeling something should be possible that way,
though...