POV-Ray : Newsgroups : povray.off-topic : GPU rendering
  GPU rendering (Message 11 to 20 of 34)  
From: Orchid XP v8
Subject: Re: GPU rendering
Date: 18 Jul 2008 03:36:02
Message: <488047e2$1@news.povray.org>
Orchid XP v8 wrote:

> Yeah, CUDA seems the obvious way to do this.
> 
> Unfortunately, only GeForce 8 and newer support this technology. I have 
> a GeForce 7900GT. :-(

Also... I can't program in C to save my life. ESPECIALLY if it involves 
floating point arithmetic...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: GPU rendering
Date: 18 Jul 2008 03:55:56
Message: <48804c8c@news.povray.org>
Warp wrote:
> Orchid XP v8 <voi### [at] devnull> wrote:
>> Also, how about random number generation? Is this "easy" to do on a GPU?
> 
>   Even if pixel shaders didn't have some RNG available (I don't know if
> they have), a simple linear congruential generator should be rather
> trivial to implement.
> 
>   (OTOH that means that the LCG would have to store the seed somewhere
> so that the next time the shader is called it can use it. Again, I don't
> know if that's possible. There could well be parallelism problems with
> that.)

Yeah, that's going to be the fun part - getting each PRNG to produce a 
different stream. Maybe it could be seeded from the pixel coordinates or 
something...
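To make the seeding idea concrete, here's a CPU-side sketch in Haskell of an LCG whose starting seed is derived from the pixel coordinates (the constants are the common Numerical Recipes ones; `pixelSeed` and the mixing multiplier are my own invention, not anything the GPU provides):

```haskell
import Data.Word (Word32)

-- One step of a standard 32-bit linear congruential generator.
-- Word32 arithmetic wraps, giving the "mod 2^32" for free.
lcgStep :: Word32 -> Word32
lcgStep s = 1664525 * s + 1013904223

-- Derive a distinct starting seed for each pixel from its coordinates,
-- so every pixel's shader invocation gets a different stream.
pixelSeed :: Word32 -> Word32 -> Word32
pixelSeed x y = lcgStep (x * 65537 + y)

-- The first n pseudo-random values for a given pixel.
pixelStream :: Word32 -> Word32 -> Int -> [Word32]
pixelStream x y n = take n (tail (iterate lcgStep (pixelSeed x y)))
```

Since the LCG multiplier is odd, `lcgStep` is a bijection on Word32, so distinct seeds can never collapse onto the same value at the same step. On the GPU the current seed would still have to be written out to a texture between passes, as Warp notes.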

>> Could you do something like rendering millions of tiny semi-transparent 
>> polygons of roughly 1-pixel size?
> 
>   Why would you want to do that?

The idea being to scatter polygons around using a geometry shader, 
rather than trying to implement the IFS as a pixel shader. I don't know 
if it would work well though.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: scott
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:01:00
Message: <48804dbc$1@news.povray.org>
> Uuhhh... any chance of some code? :-}

I don't have time to write anything working now, but I'm just having a look 
at something I have already that would be a good basis...  This is using 
DirectX 9 btw.

Use CreateTexture() to create two new 256x256 textures with the format 
D3DFMT_R32F (that gives you a texture full of 32bit floats, as opposed to 
the more usual RGBA bytes).  Also create two temp textures.

You can then lock the texture and write to it using CPU code, to set the 
initial values.

I would create a big vertex buffer full of the 65k squares 1 unit big 
centered at the origin.  Also assign an x,y coordinate to each vertex of 
each square (this is so the vertex shader knows which square it is 
processing).

Then, in the game loop, use the Direct3D call SetRenderTarget() to set the 
first temp texture as the render target, and run your pixel shader that 
calculates the new x coordinate.  Do the same for y.  If you have DX10 then 
you can render to multiple render targets in one shot, so could be faster.

Pixel shader code would look something like:

{
  float currentX , currentY , newX;
  currentX = tex2D( xCoordTexture , ps_in.textureCoordinate );
  currentY = tex2D( yCoordTexture , ps_in.textureCoordinate );

  < calculate new X coord here >

  return newX;

}

> (Also... what's a vertex shader?)

OK, so then the next bit, to actually draw all those points somewhere.  The 
vertex shader is what "normally" converts your mesh from model space into 
screen space that is then used by the pixel shader to know where to draw 
each pixel.  Usually it just multiplies by a big matrix, but we can do 
whatever we want in it.
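That "multiplies by a big matrix" step is easy to illustrate on the CPU side; a minimal Haskell sketch (the names `transform` and `identity4` are mine), where a vertex position is a 4-vector and the transform is one 4x4 matrix multiply:

```haskell
type Vec4 = [Double]
type Mat4 = [[Double]]  -- row-major 4x4 matrix

-- What a vertex shader "normally" does: map a model-space position
-- to screen/clip space by multiplying with one transform matrix.
transform :: Mat4 -> Vec4 -> Vec4
transform m v = [sum (zipWith (*) row v) | row <- m]

-- The identity transform leaves every vertex where it is.
identity4 :: Mat4
identity4 = [[if i == j then 1 else 0 | j <- [0 .. 3]] | i <- [0 .. 3 :: Int]]
```

With `identity4` the vertex passes through unchanged; a real shader would use a combined world-view-projection matrix, and the last column of the matrix is what produces translation.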

I'm going to look up the X,Y coordinates from those textures, using the XY 
coordinates we set in the vertex buffer above, remember?

{
  float currentX , currentY;
  currentX = tex2D( xCoordTexture , vs_in.xCoordinate );
  currentY = tex2D( yCoordTexture , vs_in.yCoordinate );

  vs_out.Pos.x = vs_in.Pos.x + currentX;
  vs_out.Pos.y = vs_in.Pos.y + currentY;

  return vs_out;
}

It returns each vertex of each square shifted to the position defined by the 
coordinates in the xy texture.

The pixel shader used to draw those squares would then do something really 
simple, like just increment the colour at that coordinate by 1.  Or if you 
want to get really clever you could make the square a bit bigger (so it 
covers say 9 pixels) and do some antialiasing by hand in the pixel shader.
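That increment-by-one accumulation is, in effect, building a 2D histogram of hit counts - which is what additive blending of one-pixel quads achieves on the GPU. A CPU-side Haskell sketch of the same idea (the `histogram` name is mine, purely illustrative):

```haskell
import Data.List (foldl')
import qualified Data.Map.Strict as M

-- Accumulate a 2D histogram: each (x, y) hit bumps that cell's count
-- by one, just like additively blending a one-pixel quad at (x, y).
histogram :: [(Int, Int)] -> M.Map (Int, Int) Int
histogram = foldl' (\m p -> M.insertWith (+) p 1 m) M.empty
```

So `histogram [(0,0), (0,0), (1,2)]` counts two hits at (0,0) and one at (1,2); on the GPU the map is simply the render-target texture.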

> Would it be possible to do all this using just, say, OpenGL?

Yes.  But I don't know any of the syntax, sorry.



From: scott
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:03:34
Message: <48804e56$1@news.povray.org>
> OOC, is it possible to use a GPU to copy and transform chunks of video 
> data? (E.g., rotate, scale, that kind of thing.)

Yes, you just draw a textured square wherever you want, using whatever you 
want as the texture (the result of a previous calculation?).

The only limitation is that you cannot render to a texture at the same time 
as using that texture as a source for drawing operations.



From: Orchid XP v8
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:16:14
Message: <4880514e$1@news.povray.org>
>> OOC, is it possible to use a GPU to copy and transform chunks of video 
>> data? (E.g., rotate, scale, that kind of thing.)
> 
> Yes, you just draw a textured square wherever you want, using whatever 
> you want as the texture (the result of a previous calculation?).

I was hoping you'd say something like that. :-)

Is it possible to take a chunk of the frame buffer and turn it into a 
texture? Or would I have to render texture to texture and then make a 
copy to the screen so I can see it?

> The only limitation is that you cannot render to a texture at the same 
> time as using that texture as a source for drawing operations.

That won't be a problem.

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Orchid XP v8
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:19:40
Message: <4880521c@news.povray.org>
scott wrote:
>> Uuhhh... any chance of some code? :-}
> 
> I don't have time to write anything working now, but I'm just having a 
> look at something I have already that would be a good basis...  This is 
> using DirectX 9 btw.
> 
> Use CreateTexture() to create two new 256x256 textures with the format 
> D3DFMT_R32F (that gives you a texture full of 32bit floats, as opposed 
> to the more usual RGBA bytes).  Also create two temp textures.
> 
> You can then lock the texture and write to it using CPU code, to set the 
> initial values.
> 
> I would create a big vertex buffer full of the 65k squares 1 unit big 
> centered at the origin.  Also assign an x,y coordinate to each vertex of 
> each square (this is so the vertex shader knows which square it is 
> processing).
> 
> Then, in the game loop, use the Direct3D call SetRenderTarget() to set 
> the first temp texture as the render target, and run your pixel shader 
> that calculates the new x coordinate.  Do the same for y.  If you have 
> DX10 then you can render to multiple render targets in one shot, so 
> could be faster.
> 
> Pixel shader code would look something like:
> 
> {
>  float currentX , currentY , newX;
>  currentX = tex2D( xCoordTexture , ps_in.textureCoordinate );
>  currentY = tex2D( yCoordTexture , ps_in.textureCoordinate );
> 
>  < calculate new X coord here >
> 
>  return newX;
> 
> }
> 
>> (Also... what's a vertex shader?)
> 
> OK, so then the next bit, to actually draw all those points somewhere.  
> The vertex shader is what "normally" converts your mesh from model space 
> into screen space that is then used by the pixel shader to know where to 
> draw each pixel.  Usually it just multiplies by a big matrix, but we can 
> do whatever we want in it.
> 
> I'm going to look up the X,Y coordinates from those textures, using the 
> XY coordinates we set in the vertex buffer above, remember?
> 
> {
>  float currentX , currentY;
>  currentX = tex2D( xCoordTexture , vs_in.xCoordinate );
>  currentY = tex2D( yCoordTexture , vs_in.yCoordinate );
> 
>  vs_out.Pos.x = vs_in.Pos.x + currentX;
>  vs_out.Pos.y = vs_in.Pos.y + currentY;
> 
>  return vs_out;
> }
> 
> It returns each vertex of each square shifted to the position defined by 
> the coordinates in the xy texture.
> 
> The pixel shader used to draw those squares would then do something 
> really simple, like just increment the colour at that coordinate by 1.  
> Or if you want to get really clever you could make the square a bit 
> bigger (so it covers say 9 pixels) and do some antialiasing by hand in 
> the pixel shader.
> 
>> Would it be possible to do all this using just, say, OpenGL?
> 
> Yes.  But I don't know any of the syntax, sorry.

Ah, OK. Well I have a Haskell binding to OpenGL, but not for DirectX.

So the exact function calls you're talking about won't help me, but it 
gives me some idea what I'm trying to do...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: scott
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:24:33
Message: <48805341$1@news.povray.org>
> Is it possible to take a chunk of the frame buffer and turn it into a 
> texture?

You can, but it's a bit long-winded and slow.

> Or would I have to render texture to texture and then make a copy to the 
> screen so I can see it?

Yes, that's what is normally done.  Also gives you the advantage that you 
can do funky things with the pixel shader you use to write the texture to 
the screen, like using colour look up tables, motion blur, bloom, edge 
detection, etc etc.



From: scott
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:25:38
Message: <48805382$1@news.povray.org>
> Ah, OK. Well I have a Haskell binding to OpenGL, but not for DirectX.
>
> So the exact function calls you're talking about won't help me, but it 
> gives me some idea what I'm trying to do...

I would try and find some very simple demos of pixel and vertex shaders 
written in OpenGL first and get them working.  IIRC they are called 
something else in OpenGL too, just to make matters more confusing.



From: Orchid XP v8
Subject: Re: GPU rendering
Date: 18 Jul 2008 04:52:07
Message: <488059b7$1@news.povray.org>
scott wrote:
>> Ah, OK. Well I have a Haskell binding to OpenGL, but not for DirectX.
>>
>> So the exact function calls you're talking about won't help me, but it 
>> gives me some idea what I'm trying to do...
> 
> I would try and find some very simple demos of pixel and vertex shaders 
> written in OpenGL first and get them working.  IIRC they are called 
> something else in OpenGL too, just to make matters more confusing.

OpenGL apparently calls a pixel shader a "fragment shader".

I found a small program that allows you to type in GLSL source code and 
see it applied to a mesh object. Should help me get to know the 
language, but won't help much with rendering a texture of floats. ;-)

The "other" fun thing is that shaders were introduced in OpenGL 2.0, for 
which no language spec is available online. You have to buy the book for 
that. But having a look at the Haskell OpenGL bindings, I think I see 
how it's supposed to work...

-- 
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*



From: Kevin Wampler
Subject: Re: GPU rendering
Date: 18 Jul 2008 14:42:03
Message: <4880e3fb@news.povray.org>
Orchid XP v8 wrote:
> There are 3 processes that need to happen.
> 
> 1. A stream of coordinates needs to be generated.
> 2. A histogram of the coordinates needs to be constructed.
> 3. A non-linear mapping from histogram frequencies to colours is performed.
> 
> Step 2 looks to be the tricky one - but it shouldn't be *vastly* 
> difficult I think. The question is whether the histograms will fit into 
> the limited RAM on the card...

I'd think that memory wouldn't be a problem here, as 256MB is enough to 
store a reasonably large histogram.
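The arithmetic backs that up: even a 4096x4096 histogram of 32-bit counts is only 64 MB. A trivial sketch (function names are mine):

```haskell
-- Bytes needed for a w*h histogram of 32-bit (4-byte) counts.
histogramBytes :: Integer -> Integer -> Integer
histogramBytes w h = w * h * 4

-- Convert to mebibytes for easy comparison against card RAM.
mib :: Integer -> Double
mib b = fromIntegral b / (1024 * 1024)
```

`histogramBytes 4096 4096` is 67108864 bytes, i.e. 64 MiB - a quarter of a 256 MB card, leaving plenty of room for the coordinate textures.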




Copyright 2003-2023 Persistence of Vision Raytracer Pty. Ltd.