scott wrote:
> Yes, and yes it would be much faster. Sometimes you need to think a bit
> before implementing such a thing though...
>
> I would run several points through the IFS in parallel. Say, 2^16
> points. Create two 256x256 textures of floats, one to hold the X
> coordinates and one to hold the Y coordinates of all your points.
>
> Then write two pixel shaders: one to compute the new X coordinate, and
> one to compute the new Y coordinate.
>
> Then, for each pass, bind the current X and Y textures as inputs and
> render to the newX texture. Render again to the newY texture with the
> other pixel shader.
>
> Now, here comes the clever bit. After you've generated the new XY
> textures, you now run a *vertex* shader with 2^16 quads. The vertex
> shader looks up the XY values from the texture for each quad and
> translates the quad to that position. You can then just use a fairly
> normal pixel shader to increment the pixel value on the screen, or some
> off-screen texture.
>
> It should run pretty fast; don't forget you're doing 64k points in
> parallel, so it ought to outrun the CPU by a huge factor.
Uuhhh... any chance of some code? :-}
(Also... what's a vertex shader?)
Would it be possible to do all this using just, say, OpenGL? So far the
approaches I've considered would all require CUDA, which obviously works
*only* for certain GPUs.
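
To make the data flow concrete, here's a CPU sketch in Python/NumPy of the scheme quoted above. It's illustrative only: the three affine maps are just a Sierpinski-triangle example (the post doesn't fix a particular IFS), the array sizes are placeholders, and a real GPU version would replace the arrays with textures and the loops with pixel/vertex shaders.

```python
import numpy as np

# Hypothetical IFS: Sierpinski-triangle maps, purely as an example.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

N = 1 << 16          # 2^16 points, like the 256x256 coordinate textures
SIZE = 512           # resolution of the accumulation "screen"
PASSES = 50

rng = np.random.default_rng(0)
x = rng.random(N)    # stands in for the X-coordinate texture
y = rng.random(N)    # stands in for the Y-coordinate texture
hist = np.zeros((SIZE, SIZE), dtype=np.uint32)  # off-screen accumulation target

for _ in range(PASSES):
    # "Pixel shader" pass: every point gets a new (x, y) in parallel.
    # On the GPU each point would pick its map via a random-number texture;
    # here one random index per point does the same job.
    choice = rng.integers(0, len(MAPS), N)
    nx, ny = np.empty(N), np.empty(N)
    for i, f in enumerate(MAPS):
        m = choice == i
        nx[m], ny[m] = f(x[m], y[m])
    x, y = nx, ny

    # "Vertex shader" scatter pass: translate each point-quad to its
    # screen position and increment the pixel it lands on (additive blend).
    px = np.clip((x * SIZE).astype(int), 0, SIZE - 1)
    py = np.clip((y * SIZE).astype(int), 0, SIZE - 1)
    np.add.at(hist, (py, px), 1)

print(hist.sum())  # total increments = PASSES * N = 3276800
```

The point of the structure is that each pass touches all 64k points at once; the per-point work is exactly what the two pixel shaders would do, and the `np.add.at` scatter is what the quad-per-point vertex pass buys you on the GPU.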
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*