scott wrote:
>> Uuhhh... any chance of some code? :-}
>
> I don't have time to write anything working now, but I'm just having a
> look at something I have already that would be a good basis... This is
> using DirectX 9 btw.
>
> Use CreateTexture() to create two new 256x256 textures with the format
> D3DFMT_R32F (that gives you a texture full of 32-bit floats, as opposed
> to the more usual RGBA bytes). Also create two temp textures.
>
> You can then lock the texture and write to it using CPU code, to set the
> initial values.
>
> I would create a big vertex buffer full of 65k squares, each 1 unit
> big and centered at the origin. Also assign an (x,y) coordinate to each
> vertex of each square (this is so the vertex shader knows which square
> it is processing).
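>
> In HLSL terms the per-vertex data might look something like this (just a
> sketch; the struct and field names are mine):
>
> ```hlsl
> struct VS_IN
> {
>     float2 Pos      : POSITION;  // corner of the unit square, e.g. (-0.5, -0.5)
>     float2 squareXY : TEXCOORD0; // texture coordinate identifying this square
> };
> ```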
>
> Then, in the game loop, use the Direct3D call SetRenderTarget() to set
> the first temp texture as the render target, and run your pixel shader
> that calculates the new x coordinate. Do the same for y. If you have
> DX10 then you can render to multiple render targets in one shot, so
> could be faster.
>
> Pixel shader code would look something like:
>
> {
> float currentX , currentY;
> currentX = tex2D( xCoordTexture , ps_in.textureCoordinate ).r;
> currentY = tex2D( yCoordTexture , ps_in.textureCoordinate ).r;
>
> // < calculate new X coord here >
>
> return newX;
>
> }
>
>> (Also... what's a vertex shader?)
>
> OK, so then the next bit, to actually draw all those points somewhere.
> The vertex shader is what "normally" converts your mesh from model space
> into screen space that is then used by the pixel shader to know where to
> draw each pixel. Usually it just multiplies by a big matrix, but we can
> do whatever we want in it.
>
> I'm going to look up the X,Y coordinates from those textures, using the
> XY coordinates we set in the vertex buffer above, remember?
>
> {
> currentX = tex2D( xCoordTexture , vs_in.xCoordinate ).r;
> currentY = tex2D( yCoordTexture , vs_in.yCoordinate ).r;
>
> vs_out.Pos.x = vs_in.Pos.x + vs_in.x + currentX;
> vs_out.Pos.y = vs_in.Pos.y + vs_in.y + currentY;
>
> return vs_out;
> }
>
> It returns each vertex of each square shifted to the position defined by
> the coordinates in the xy texture.
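>
> One wrinkle if you try this for real: a DX9 vertex shader can only sample
> textures under shader model 3.0, and it has to use tex2Dlod (with an
> explicit mip level) rather than plain tex2D. So each lookup would really
> be something like:
>
> ```hlsl
> currentX = tex2Dlod( xCoordTexture , float4( vs_in.xCoordinate, 0, 0 ) ).r;
> ```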
>
> The pixel shader used to draw those squares would then do something
> really simple, like just increment the colour at that coordinate by 1.
> Or if you want to get really clever you could make the square a bit
> bigger (so it covers say 9 pixels) and do some antialiasing by hand in
> the pixel shader.
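>
> For the simple "increment by 1" version the shader itself could be almost
> trivial; a sketch (this assumes additive alpha blending is enabled, i.e.
> D3DBLEND_ONE for both source and destination blend states, and an 8-bit
> colour target, hence the 1/255):
>
> ```hlsl
> float4 main() : COLOR
> {
>     // Additive blending turns this constant into "+1" per covered pixel.
>     return float4( 1.0/255.0, 1.0/255.0, 1.0/255.0, 1.0 );
> }
> ```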
>
>> Would it be possible to do all this using just, say, OpenGL?
>
> Yes. But I don't know any of the syntax, sorry.
Ah, OK. Well I have a Haskell binding to OpenGL, but not for DirectX.
So the exact function calls you're talking about won't help me, but it
gives me some idea of what I'm trying to do...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*