scanline predator (Message 1 to 10 of 22)

From: Darren New
Subject: scanline predator
Date: 13 Oct 2010 22:43:04
Message: <4cb66e38$1@news.povray.org>
One thing I can't figure out...

How do you do a ghost or predator effect on a graphics card, like with 
OpenGL or DirectX or something like that?  I.e., as a vertex/pixel shader.

I understand how you do shadows, and reflections, and how to do something 
like a lens bloom where you post-process the whole image.  What I'm not 
sure I've figured out is how to post-process only the visible parts of an 
object.  I.e., if the "predator" walks behind a tree, you want the parts of 
him you can see to distort what's behind him, but you don't want to distort 
the tree he's behind.

The only thing I can think of is that you save the depth map along with the 
image, and then you only post-process the things at the same depth as the 
model of the predator, or something like that.  But how do you know what 
that is? Re-render just the predator in a separate buffer?

Alternatively, you *could* render everything twice, with the predator using 
a pixel shader that just makes it white and everything else black.  But then 
you're rendering the whole scene three times.
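
The mask pass shader itself would be trivial to write; an illustrative GLSL 
sketch, hosted in a C++ string (the uniform name is made up):

// Illustrative: constant-colour shader for the mask pass.  Draw the
// predator with maskColor = white, everything else with black.
const char* kMaskFS = R"GLSL(
#version 330 core
uniform vec4 maskColor;
out vec4 fragColor;
void main() { fragColor = maskColor; }
)GLSL";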

Is there a better way to do that?

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: scott
Subject: Re: scanline predator
Date: 14 Oct 2010 04:12:29
Message: <4cb6bb6d@news.povray.org>
> The only thing I can think of is that you save the depth map along with 
> the image, and then you only post-process the things at the same depth as 
> the model of the predator, or something like that.

I'm not entirely sure what you are looking for, but your pixel shader 
"strength" could depend on depth, so it blends out smoothly once a certain 
distance away from the target depth.
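
For instance, the fade could look like this; an illustrative GLSL sketch 
hosted in a C++ string, where the uniform names are made up and a sine-wave 
ripple stands in for the real predator distortion:

// Illustrative post-process: effect strength fades with distance from
// a target depth, so the distortion blends out smoothly.
const char* kDepthFadeFS = R"GLSL(
#version 330 core
uniform sampler2D sceneColor;   // rendered scene
uniform sampler2D sceneDepth;   // linear view-space depth, saved earlier
uniform float targetDepth;      // depth of the ghost/predator
uniform float falloff;          // distance over which the effect fades
in vec2 uv;
out vec4 fragColor;

void main() {
    float d = texture(sceneDepth, uv).r;
    // 1.0 at the target depth, fading linearly to 0.0 at `falloff` away.
    float strength = clamp(1.0 - abs(d - targetDepth) / falloff, 0.0, 1.0);
    vec2 offset = strength * 0.01 * vec2(sin(uv.y * 60.0), 0.0);
    fragColor = mix(texture(sceneColor, uv),
                    texture(sceneColor, uv + offset), strength);
}
)GLSL";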

> But how do you know what that is? Re-render just the predator in a 
> separate buffer?

A lot of hardware won't give you access to the depth buffer, so APIs 
typically don't expose the depth information (or if they do, it's through a 
very slow fallback path, or it might just fail, so I wouldn't rely on it).

If your platform supports multiple render targets, then the simplest 
solution would be to just render the depth info simultaneously to a separate 
FP32 texture as you go along.  This can obviously then be used by the 
post-processing shader.
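
In OpenGL terms the setup might look roughly like this, assuming GLEW (or 
any equivalent extension loader) and an FBO that already has a normal colour 
attachment; a sketch, not a complete renderer:

#include <GL/glew.h>   // assumed extension loader

// Sketch: attach an FP32 texture as a second colour attachment so the
// main pass can write linear depth alongside the usual colour output.
GLuint addDepthTarget(GLuint fbo, int width, int height) {
    GLuint depthTex;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                 GL_RED, GL_FLOAT, nullptr);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                           GL_TEXTURE_2D, depthTex, 0);
    // Route fragment shader outputs 0 and 1 to the two attachments.
    const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, bufs);
    return depthTex;
}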

If not, you might be able to squeeze the depth data into the alpha channel, 
so long as you don't need it for anything else and are happy with 8-bit 
resolution for the depth info.  Just set the alpha component of your pixel 
output colour to a suitably scaled depth at the end of the pixel shader.
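
The shader side of either variant is only a line or two at the end of the 
pixel shader.  An illustrative GLSL sketch showing both at once, assuming 
the vertex shader passes along a linear view-space depth (all names made up):

// Illustrative fragment shader writing depth out both ways.
const char* kWriteDepthFS = R"GLSL(
#version 330 core
uniform sampler2D diffuseTex;
uniform float maxDepth;        // scale chosen to suit the scene
in vec2 uv;
in float viewDepth;            // linear depth from the vertex shader
layout(location = 0) out vec4 fragColor;
layout(location = 1) out float depthOut;   // MRT variant: FP32 target

void main() {
    vec4 lit = texture(diffuseTex, uv);    // stand-in for real lighting
    depthOut = viewDepth;                  // full-precision depth out
    // Alpha variant: 8-bit depth squeezed into the alpha channel.
    fragColor = vec4(lit.rgb, clamp(viewDepth / maxDepth, 0.0, 1.0));
}
)GLSL";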

I was trying to come up with an alternative that involved rendering full 
screen quads with various depth/stencil options, but couldn't come up with 
anything.  I have a niggling feeling something should be possible that way 
though...



From: Invisible
Subject: Re: scanline predator
Date: 14 Oct 2010 05:30:19
Message: <4cb6cdab$1@news.povray.org>
On 14/10/2010 03:43 AM, Darren New wrote:

> Alternatively, you *could* render everything twice, with the predator
> using a pixel shader that just makes it white and everything else
> black.  But then you're rendering the whole scene three times.
>
> Is there a better way to do that?

I was under the impression that most complex GPU effects require 
rendering the entire scene multiple times (in addition to rendering 
things in a very specific order). But hey, what would I know?



From: scott
Subject: Re: scanline predator
Date: 14 Oct 2010 05:43:48
Message: <4cb6d0d4@news.povray.org>
> I was under the impression that most complex GPU effects require rendering 
> the entire scene multiple times (in addition to rendering things in a very 
> specific order). But hey, what would I know?

You usually want to try and avoid rendering the *entire scene* multiple 
times, as this usually involves many relatively slow state changes within 
the GPU (i.e. switching textures, vertex buffers, etc.).  The arrival of 
complex pixel shaders has largely reduced the need for multi-pass rendering.  
However, some effects still need multiple passes, and in these situations 
you try to minimise the amount of stuff drawn multiple times (e.g. for a 
reflection texture you might omit small objects).  DX10 has helped quite a 
lot here, as it allows you to process a piece of geometry once, yet output 
it to several render targets.  If you need to do this for a large number of 
models it saves a huge amount of time compared to the DX9 method.
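
In OpenGL the rough analogue is layered rendering: a geometry shader can 
emit each primitive once per layer of an array-texture render target, so the 
mesh is only submitted and transformed once.  A minimal, illustrative sketch 
(C++-hosted GLSL):

// Sketch: replicate each triangle to two layers of a layered render
// target, so the geometry is processed once but lands in two targets.
const char* kReplicateGS = R"GLSL(
#version 330 core
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

void main() {
    for (int layer = 0; layer < 2; ++layer) {
        gl_Layer = layer;              // select the output layer
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
)GLSL";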

Another interesting idea is deferred shading.  With this, as you go through 
the geometry of your scene you don't calculate final lit colour values, but 
rather values that describe the 3D surface at each screen pixel (texture 
colour, normal, shininess, etc.).  This is relatively fast as no lighting 
calculations are done.  Then, once you've drawn all the geometry, you end up 
with a simple 2D array describing what is visible.  You can then run a pixel 
shader over it to calculate the final colour values.  The advantages are 
that you don't have to put your lighting code in the shaders for each model, 
the lighting calculations only get done for pixels that are visible in the 
final image, and it is very cheap to add more lights to the scene.  Example 
here: http://www.youtube.com/watch?v=hBtfryQBAlk
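
A stripped-down sketch of the two halves, with made-up names: a geometry 
pass that writes the G-buffer, and a lighting pass that runs once per screen 
pixel over the result (one directional light assumed):

// Geometry pass: describe the surface, do no lighting.
const char* kGBufferFS = R"GLSL(
#version 330 core
uniform sampler2D diffuseTex;
in vec3 worldNormal;
in vec2 uv;
layout(location = 0) out vec4 albedo;      // texture colour
layout(location = 1) out vec4 normalSpec;  // packed normal + shininess
void main() {
    albedo = texture(diffuseTex, uv);
    normalSpec = vec4(normalize(worldNormal) * 0.5 + 0.5, 0.8);
}
)GLSL";

// Lighting pass: full-screen quad; cost scales with pixels, not models.
const char* kLightingFS = R"GLSL(
#version 330 core
uniform sampler2D gAlbedo, gNormalSpec;
uniform vec3 lightDir;                     // normalised, towards the scene
in vec2 uv;
out vec4 fragColor;
void main() {
    vec3 n = texture(gNormalSpec, uv).xyz * 2.0 - 1.0;
    float diff = max(dot(n, -lightDir), 0.0);
    fragColor = texture(gAlbedo, uv) * diff;
}
)GLSL";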



From: Darren New
Subject: Re: scanline predator
Date: 14 Oct 2010 13:46:31
Message: <4cb741f7$1@news.povray.org>
scott wrote:
>> The only thing I can think of is that you save the depth map along 
>> with the image, and then you only post-process the things at the same 
>> depth as the model of the predator, or something like that.
> 
> I'm not entirely sure what you are looking for, but your pixel shader 
> "strength" could depend on depth, so it blends out smoothly once a 
> certain distance away from the target depth.

I probably phrased it poorly.

Basically, I want to apply a screen-space shader but only to parts of the 
scene occluded by a model which is in turn partially occluded by other parts 
of the scene.

So imagine, say, a magnifying glass.  It makes what's behind it bigger 
(which is easy to do with a second pass over the screen data), but it 
doesn't make what's in front of it bigger.
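
The whole-screen version of that really is just a scaled texture lookup in a 
second pass; an illustrative GLSL sketch with made-up uniforms:

// Illustrative: whole-screen magnification as a post-process.
const char* kMagnifyFS = R"GLSL(
#version 330 core
uniform sampler2D sceneColor;
uniform vec2 center;     // screen-space centre of the lens
uniform float zoom;      // > 1.0 magnifies
in vec2 uv;
out vec4 fragColor;
void main() {
    // Sample closer to the centre than the current pixel: magnification.
    fragColor = texture(sceneColor, center + (uv - center) / zoom);
}
)GLSL";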

The APIs I'm using definitely expose the depth buffer. One of the tutorial 
programs I have renders the scene into a depth buffer from the POV of the 
light source, then uses that to figure out if a pixel is lit or shadowed 
from the POV of the camera.  This of course draws all the geometry twice.

Since the effect I want is only from the POV of the camera, I'm trying to 
figure out if I can do it without rendering the entire scene twice.


> If your platform supports multiple render targets, then the simplest 
> solution would be to just render the depth info simultaneously to a 
> separate FP32 texture as you go along.  This can obviously then be used 
> by the post-processing shader.

Sure. I'm trying to figure out a simple way to apply that, tho. I can do 
that, I'm just not sure how to efficiently figure out *both* when the ghost 
is the closest thing to the camera at a particular pixel *and* what color 
the pixel would be behind the ghost.

All I can think of, using the depth buffer, is to render the entire scene 
into a depth buffer with the ghost, then a second time without the ghost, 
and where the depth buffer has changed, that's where I apply the 
post-processing effect.
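
The "where the depth buffer has changed" test would be a one-line compare in 
the post-process shader; illustrative GLSL with made-up names:

// Illustrative: mark pixels where the two depth renders disagree,
// i.e. where the ghost is the closest thing to the camera.
const char* kDepthDiffFS = R"GLSL(
#version 330 core
uniform sampler2D depthWithGhost, depthWithoutGhost;
uniform float epsilon;   // comparison tolerance
in vec2 uv;
out vec4 fragColor;
void main() {
    float a = texture(depthWithGhost, uv).r;
    float b = texture(depthWithoutGhost, uv).r;
    fragColor = vec4(abs(a - b) > epsilon ? 1.0 : 0.0);
}
)GLSL";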

Or maybe I can do something funky, like render everything except the ghosts, 
put the ghosts in last, and somehow take advantage of that, but I think 
everything goes through the pixel shader.  Maybe something like telling the 
pixel shader *not* to render the pixel, but managing to save the final 
locations in a different render target?  I don't think I have a way of doing 
that.

Maybe have two render targets: one where I draw the ghost in the normal 
place to get the depth, and the other where I set the ghost's depth waaaaay 
back, making the ghost effectively out of the picture.  Then I apply the 
post-process to pixels whose depth doesn't match the depth map of the normal 
place.  I am pretty sure I can't resolve the target without destroying the 
3D information, so I can't render everything except the ghost, snapshot the 
scene, and then add in the ghost.

Maybe I can at least depth-sort the models and only render the models whose 
bounding box is in front of the back of the ghost, so I only have to 
calculate what occludes the ghost.  (And of course, once the ghost is far 
enough away, he's too hard to see anyway, so don't even bother to render it.)

> I was trying to come up with an alternative that involved rendering full 
> screen quads with various depth/stencil options, but couldn't come up 
> with anything.  I have a niggling feeling something should be possible 
> that way though...

I think it's efficiently getting the stencil to start with that's the 
problem. Running it through a second pass to draw the ghost effect shouldn't 
be bad.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: Warp
Subject: Re: scanline predator
Date: 14 Oct 2010 15:50:31
Message: <4cb75f07@news.povray.org>
Darren New <dne### [at] sanrrcom> wrote:
> How do you do a ghost or predator effect on a graphics card, like with 
> OpenGL or DirectX or something like that?  I.e., as a vertex/pixel shader.

  I don't know the answer, but I wouldn't be surprised if the stencil buffer
is involved (probably in conjunction with the depth buffer).

  The basic idea of the stencil buffer is so laughably simple that one
might hastily conclude that it has no practical use.  In practice people
have developed incredible applications for the stencil buffer, including
dynamic shadowing, planar reflections, outline drawing and portal
rendering.
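
The mechanism really is that simple: one draw tags pixels with a small 
integer, and later draws can be made to pass or fail against those tags.  A 
minimal OpenGL sketch, with the draw calls as hypothetical stand-ins:

#include <GL/glew.h>   // assumed extension loader

void drawMaskGeometry();        // hypothetical
void drawRestrictedGeometry();  // hypothetical

void stencilSketch() {
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);

    // Pass 1: every pixel this draw touches gets stencil value 1.
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawMaskGeometry();

    // Pass 2: only draw where the stencil equals 1.
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawRestrictedGeometry();
}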

-- 
                                                          - Warp



From: nemesis
Subject: Re: scanline predator
Date: 14 Oct 2010 15:53:16
Message: <4cb75fac@news.povray.org>
scott wrote:
> Another interesting idea is deferred shading.  With this, as you go 
> through the geometry of your scene you don't calculate final lit colour 
> values, but rather values that describe the 3D surface at each screen 
> pixel (texture colour, normal, shininess, etc.).  This is relatively 
> fast as no lighting calculations are done.  Then, once you've drawn all 
> the geometry, you end up with a simple 2D array describing what is 
> visible.  You can then run a pixel shader over it to calculate the 
> final colour values.  The advantages are that you don't have to put 
> your lighting code in the shaders for each model, the lighting 
> calculations only get done for pixels that are visible in the final 
> image, and it is very cheap to add more lights to the scene.  Example 
> here: http://www.youtube.com/watch?v=hBtfryQBAlk

it's pretty much lazy evaluation for rendering. :)

-- 
a game sig: http://tinyurl.com/d3rxz9



From: Darren New
Subject: Re: scanline predator
Date: 14 Oct 2010 17:50:03
Message: <4cb77b0b$1@news.povray.org>
Warp wrote:
> Darren New <dne### [at] sanrrcom> wrote:
>> How do you do a ghost or predator effect on a graphics card, like with 
>> OpenGL or DirectX or something like that?  I.e., as a vertex/pixel shader.
> 
>   I don't know the answer, but I wouldn't be surprised if the stencil buffer
> is involved (probably in conjunction with the depth buffer).
> 
>   The basic idea of the stencil buffer is so laughably simple that one
> might hastily conclude that it has no practical use.  In practice people
> have developed incredible applications for the stencil buffer, including
> dynamic shadowing, planar reflections, outline drawing and portal
> rendering.

Ah! Ok, now that you mention it, I suppose I could render the ghosts with a 
shader that puts 1's in the stencil buffer and 0's everywhere else, and then 
use the result to do the post-process step.  I never really saw an example 
of using the stencil buffer, but it's entirely likely that's exactly what it 
was intended for.  I'll have to study up on how you can get a shader to 
write to it.

-- 
Darren New, San Diego CA, USA (PST)
   Serving Suggestion:
     "Don't serve this any more. It's awful."



From: scott
Subject: Re: scanline predator
Date: 15 Oct 2010 04:42:48
Message: <4cb81408@news.povray.org>
> Basically, I want to apply a screen-space shader but only to parts of the 
> scene occluded by a model which is in turn partially occluded by other 
> parts of the scene.
>
> So imagine, say, a magnifying glass.  It makes what's behind it bigger 
> (which is easy to do with a second pass over the screen data), but it 
> doesn't make what's in front of it bigger.

You could do something like this (as Warp said, the stencil buffer is 
useful for stuff like this); a rough code sketch follows the list.  You need 
to be careful not to switch the depth buffer when you change render targets 
in steps 1 and 4 (I don't know if that's possible or not in your API; 
otherwise it might get complicated to make sure the ghost geometry renders 
correctly in step 9).  It is a little bit complex because you do not have 
access to the depth/stencil buffer while it is also the render target.

1. Set the render target to a temporary one
2. Render the scene without the ghost normally, but set stencil writes to 
always 0
3. Render the ghost geometry without writes to RGBA, but with a stencil 
write of always 1
4. Switch the render target to the actual back buffer
5. Render a full-screen quad to simply copy over the entire temp buffer 
RGBA contents
6. Set your special-effect pixel shader
7. Set the stencil pass function to "equal 1"
8. Render a full-screen quad at the depth of the ghost (this will avoid 
affecting anything in front of the ghost)
9. Render the ghost normally
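
A sketch of those steps in OpenGL.  Every draw* helper is a hypothetical 
stand-in, and both framebuffers are assumed to share a single depth/stencil 
attachment (the "don't switch the depth buffer" caveat above):

#include <GL/glew.h>   // assumed extension loader

// Hypothetical helpers, assumed to exist elsewhere in the engine:
void drawSceneWithoutGhost();
void drawGhost();               // geometry only; colour writes are masked
void drawGhostNormally();
void drawFullScreenQuad(GLuint srcTex, GLuint shader);
void drawFullScreenQuadAtDepth(GLuint srcTex, GLuint shader, float depth);

void ghostEffect(GLuint tempFbo, GLuint backFbo, GLuint tempColorTex,
                 GLuint copyShader, GLuint effectShader, float ghostDepth) {
    glBindFramebuffer(GL_FRAMEBUFFER, tempFbo);            // step 1
    glEnable(GL_STENCIL_TEST);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);                     // step 2
    drawSceneWithoutGhost();
    glStencilFunc(GL_ALWAYS, 1, 0xFF);                     // step 3
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    drawGhost();
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    glBindFramebuffer(GL_FRAMEBUFFER, backFbo);            // step 4
    glDisable(GL_STENCIL_TEST);
    glDisable(GL_DEPTH_TEST);                              // plain copy
    drawFullScreenQuad(tempColorTex, copyShader);          // step 5
    glEnable(GL_DEPTH_TEST);

    glEnable(GL_STENCIL_TEST);                             // steps 6-8
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDepthMask(GL_FALSE);       // test against scene depth, don't write
    drawFullScreenQuadAtDepth(tempColorTex, effectShader, ghostDepth);
    glDepthMask(GL_TRUE);

    glDisable(GL_STENCIL_TEST);                            // step 9
    drawGhostNormally();
}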

> The APIs I'm using definitely expose the depth buffer. One of the tutorial 
> programs I have renders the scene into a depth buffer from the POV of the 
> light source, then uses that to figure out if a pixel is lit or shadowed 
> from the POV of the camera.

Sure, that works if you write depth on purpose to a special render target.  
What is not normally available is the "internal" depth/stencil buffer that 
the GPU uses to do depth and stencil tests during normal rendering.  So you 
usually end up having to duplicate the depth buffer if you want to use it in 
a non-standard way.



From: Invisible
Subject: Re: scanline predator
Date: 15 Oct 2010 04:44:31
Message: <4cb8146f@news.povray.org>
On 14/10/2010 08:50 PM, Warp wrote:

>    I don't know the answer, but I wouldn't be surprised if the stencil buffer
> is involved (probably in conjunction with the depth buffer).

I assumed you guys had already thought of that... ;-)

(Besides, I don't _actually_ know what the stencil buffer is. I just 
assume it does approximately what its name says it does.)



