Orchid XP v8 wrote:
> OK, so for anybody who knows about such things... Would it be feasible
> to render an Iterated Function System image using a GPU? And would it be
> any faster than using the CPU?
>
As long as you can break down your function so that a single step is
performed on the entire screen at once, then you can.
I.e., you can do function f() on all pixels, store the output in a buffer,
and use that buffer as the input in the next step.
If you have to perform varying steps on individual pixels, you get
problems. As in, if you go f() then g() then h() on one pixel, then
only g() and h() on its neighbor, then only f() and h() on a third, this
won't work.
The same instructions have to be run on the whole screen (well, the
whole polygon - for what you're asking about, I would just draw a quad
to the whole screen).
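For example (just a sketch, untested - "srcBuffer" is a name I made up),
a DX9-style pixel shader for one such pass; you draw a full-screen quad
with it while rendering into a *second* texture, then swap the two
textures for the next pass:

sampler2D srcBuffer : register(s0);   // output of the previous pass

float4 ps_step( float2 uv : TEXCOORD0 ) : COLOR0
{
    // Every pixel runs exactly these same instructions.
    float4 v = tex2D( srcBuffer, uv );   // this pixel's old value
    return v * 0.5 + 0.25;               // stand-in for f(); use your own
}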
...Chambers
I haven't done any GPU programming myself, but from what I've gathered
from friends who have, modern GPUs are essentially fully programmable
architectures these days, so you can use them to speed up computations
which have seemingly little to do with actually displaying things on
the screen (for instance, I believe that folding@home can use the GPU).
It probably depends on what graphics card you have as to whether or not
you can do this on it, but you might take a look at CUDA, a C compiler
recently released by nvidia that allows you to write things for the GPU:
http://www.nvidia.com/object/cuda_learn.html
You'll, of course, need to design your IFS algorithm so that it can be
run in a highly parallel manner, but unless I'm missing something that
should be pretty straightforward in this case (just run multiple
iterators simultaneously). Of course, not having ever done this sort of
thing increases the chances that I'll have missed something :-)
Orchid XP v8 wrote:
> OK, so for anybody who knows about such things... Would it be feasible
> to render an Iterated Function System image using a GPU? And would it be
> any faster than using the CPU?
>
> OK, so for anybody who knows about such things... Would it be feasible to
> render an Iterated Function System image using a GPU? And would it be any
> faster than using the CPU?
Yes, and yes it would be much faster. Sometimes you need to think a bit
before implementing such a thing though...
I would run several points through the IFS in parallel. Say, 2^16 points.
Create two 256x256 textures of floats, one to hold the X coordinates and one
to hold the Y coordinates of all your points.
Then write two pixel shaders, one to calculate the new X coordinate, and one
to create the new Y coordinate.
Then, for each pass, set the current X and Y textures as inputs, and render
to the newX texture. Render again to the newY texture with the other pixel
shader.
Now, here comes the clever bit. After you've generated the new XY textures,
you now run a *vertex* shader with 2^16 quads. The vertex shader looks up
the XY values from the texture for each quad and translates the quad to that
position. You can then just use a fairly normal pixel shader to increment
the pixel value on the screen, or some off-screen texture.
It should run pretty fast, and don't forget you're doing 64k points in
parallel so it should outrun the CPU by a huge factor.
Kevin Wampler wrote:
> It probably depends on what graphics card you have as to whether or not
> you can do this on it, but you might take a look at CUDA, a C compiler
> recently released by nvidia that allows you to write things for the GPU:
Yeah, CUDA seems the obvious way to do this.
Unfortunately, only GeForce 8 and newer support this technology. I have
a GeForce 7900GT. :-(
(Also... I get the impression you need Vista for the GeForce 8 to work.)
> You'll, of course, need to design your IFS algorithm so that it can be
> run in a highly parallel manner, but unless I'm missing something that
> should be pretty straightforward in this case (just run multiple
> iterators simultaneously). Of course, not having ever done this sort of
> thing increases the chances that I'll have missed something :-)
There are 3 processes that need to happen.
1. A stream of coordinates needs to be generated.
2. A histogram of the coordinates needs to be constructed.
3. A non-linear mapping from histogram frequencies to colours is performed.
Step 2 looks to be the tricky one - but it shouldn't be *vastly*
difficult I think. The question is whether the histograms will fit into
the limited RAM on the card...
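For step 3 I'm imagining a simple log mapping, something like this
pixel shader sketch (untested; "histogram" and "maxCount" are just
names I've made up):

sampler2D histogram : register(s0);   // bin counts from step 2, as floats
float maxCount;                       // biggest bin count, found beforehand

float4 ps_colour( float2 uv : TEXCOORD0 ) : COLOR0
{
    float count = tex2D( histogram, uv ).r;
    // Log scale, so dense regions don't just saturate to white.
    float v = log( 1 + count ) / log( 1 + maxCount );
    return float4( v, v, v, 1 );
}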
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Chambers wrote:
> As long as you can break down your function so that a single step is
> performed on the entire screen at once, then you can.
This is the problem. An IFS image is essentially a histogram plot. You
can't just compute each pixel's colour independently.
OOC, is it possible to use a GPU to copy and transform chunks of video
data? (E.g., rotate, scale, that kind of thing.)
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
scott wrote:
> Yes, and yes it would be much faster. Sometimes you need to think a bit
> before implementing such a thing though...
>
> I would run several points through the IFS in parallel. Say, 2^16
> points. Create two 256x256 textures of floats, one to hold the X
> coordinates and one to hold the Y coordinates of all your points.
>
> Then write two pixel shaders, one to calculate the new X coordinate, and
> one to create the new Y coordinate.
>
> Then, for each pass, set the current X and Y textures as inputs, and
> render to the newX texture. Render again to the newY texture with the
> other pixel shader.
>
> Now, here comes the clever bit. After you've generated the new XY
> textures, you now run a *vertex* shader with 2^16 quads. The vertex
> shader looks up the XY values from the texture for each quad and
> translates the quad to that position. You can then just use a fairly
> normal pixel shader to increment the pixel value on the screen, or some
> off-screen texture.
>
> It should run pretty fast, and don't forget you're doing 64k points in
> parallel so it should outrun the CPU by a huge factor.
Uuhhh... any chance of some code? :-}
(Also... what's a vertex shader?)
Would it be possible to do all this using just, say, OpenGL? So far the
approaches I've considered would all require CUDA, which obviously works
*only* for certain GPUs.
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Orchid XP v8 wrote:
> Yeah, CUDA seems the obvious way to do this.
>
> Unfortunately, only GeForce 8 and newer support this technology. I have
> a GeForce 7900GT. :-(
Also... I can't program in C to save my life. ESPECIALLY if it involves
floating point arithmetic...
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
Warp wrote:
> Orchid XP v8 <voi### [at] dev null> wrote:
>> Also, how about random number generation? Is this "easy" to do on a GPU?
>
> Even if pixel shaders didn't have some RNG available (I don't know if
> they have), a simple linear congruential generator should be rather
> trivial to implement.
>
> (OTOH that means that the LCG would have to store the seed somewhere
> so that the next time the shader is called it can use it. Again, I don't
> know if that's possible. There could well be parallelism problems with
> that.)
Yeah, that's going to be the fun part - getting each PRNG to produce a
different stream. Maybe it could be seeded from pixel coordinates or
something...
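Something like this is what I have in mind (untested sketch; "stateTex"
is a made-up name). Since pixel shaders on this class of hardware have
no integer arithmetic, the well-known float hash stands in for a true
LCG, with each iterator's state kept in its own texel:

sampler2D stateTex : register(s0);   // one PRNG state per iterator

float hash( float2 p )
{
    // The classic shader hash; anything that scrambles floats will do.
    return frac( sin( dot( p, float2(12.9898, 78.233) ) ) * 43758.5453 );
}

float4 ps_advance( float2 uv : TEXCOORD0 ) : COLOR0
{
    // Pass 0 seeds the texture with hash(uv), so each pixel gets a
    // different stream; later passes scramble the stored state forward.
    float state = tex2D( stateTex, uv ).r;
    float next  = hash( float2( state, state + uv.x ) );
    return float4( next, 0, 0, 0 );   // write back; ping-pong as usual
}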
>> Could you do something like rendering millions of tiny semi-transparent
>> polygons of roughly 1-pixel size?
>
> Why would you want to do that?
The idea being to scatter polygons around using a geometry shader,
rather than trying to implement the IFS as a pixel shader. I don't know
if it would work well though.
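If geometry shaders turn out to be the way, the scattering part might
look something like this DX10-only sketch (from memory, untested) -
each input point gets expanded into a tiny screen-space quad:

struct PointIn { float4 pos : SV_POSITION; };

[maxvertexcount(4)]
void gs_scatter( point PointIn input[1],
                 inout TriangleStream<PointIn> stream )
{
    const float h = 1.0 / 512;   // about half a pixel
    PointIn v;
    // Emit a 4-vertex triangle strip around the point's position.
    v.pos = input[0].pos + float4( -h, -h, 0, 0 ); stream.Append( v );
    v.pos = input[0].pos + float4( -h,  h, 0, 0 ); stream.Append( v );
    v.pos = input[0].pos + float4(  h, -h, 0, 0 ); stream.Append( v );
    v.pos = input[0].pos + float4(  h,  h, 0, 0 ); stream.Append( v );
}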
--
http://blog.orphi.me.uk/
http://www.zazzle.com/MathematicalOrchid*
> Uuhhh... any chance of some code? :-}
I don't have time to write anything working now, but I'm just having a look
at something I have already that would be a good basis... This is using
DirectX 9 btw.
Use CreateTexture() to create two new 256x256 textures with the format
D3DFMT_R32F (that gives you a texture full of 32bit floats, as opposed to
the more usual RGBA bytes). Also create two temp textures.
You can then lock the texture and write to it using CPU code, to set the
initial values.
I would create a big vertex buffer full of 65k squares, 1 unit big,
centered at the origin. Also assign an x,y coordinate to each vertex of
each square (this is so the vertex shader knows which square it is
processing).
Then, in the game loop, use the Direct3D call SetRenderTarget() to set the
first temp texture as the render target, and run your pixel shader that
calculates the new x coordinate. Do the same for y. If you have DX10 then
you can render to multiple render targets in one shot, so it could be faster.
Pixel shader code would look something like:
{
    // Fetch this point's current position from the two float textures.
    float currentX, currentY;
    currentX = tex2D( xCoordTexture, ps_in.textureCoordinate );
    currentY = tex2D( yCoordTexture, ps_in.textureCoordinate );
    < calculate new X coord here >
    return newX;   // lands in the R32F render target
}
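For a classic affine IFS, x' = a*x + b*y + e, the elided bit could be
filled in along these lines (purely illustrative - "mapA", "mapB" and
"rndTex" are names I've just invented, and a real fractal would have
more than two maps):

sampler2D xCoordTexture : register(s0);
sampler2D yCoordTexture : register(s1);
sampler2D rndTex        : register(s2);   // one random number per point
float3 mapA;   // (a, b, e) of the first affine map
float3 mapB;   // (a, b, e) of the second

float4 ps_newX( float2 uv : TEXCOORD0 ) : COLOR0
{
    float x = tex2D( xCoordTexture, uv ).r;
    float y = tex2D( yCoordTexture, uv ).r;
    float r = tex2D( rndTex, uv ).r;          // pick a map at random
    float3 m = ( r < 0.5 ) ? mapA : mapB;
    // The matching Y shader does c*x + d*y + f with its own constants.
    return float4( m.x * x + m.y * y + m.z, 0, 0, 0 );
}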
> (Also... what's a vertex shader?)
OK, so then the next bit, to actually draw all those points somewhere. The
vertex shader is what "normally" converts your mesh from model space into
screen space that is then used by the pixel shader to know where to draw
each pixel. Usually it just multiplies by a big matrix, but we can do
whatever we want in it.
I'm going to look up the X,Y coordinates from those textures, using the XY
coordinates we set in the vertex buffer above, remember?
{
    // Look up this square's position (vertex texture fetch - needs SM 3.0).
    currentX = tex2D( xCoordTexture , vs_in.xCoordinate );
    currentY = tex2D( yCoordTexture , vs_in.yCoordinate );
    // Shift the unit square's corners to the point's position.
    vs_out.Pos.x = vs_in.Pos.x + vs_in.x + currentX;
    vs_out.Pos.y = vs_in.Pos.y + vs_in.y + currentY;
    return vs_out;
}
It returns each vertex of each square shifted to the position defined by the
coordinates in the xy texture.
The pixel shader used to draw those squares would then do something really
simple, like just increment the colour at that coordinate by 1. Or if you
want to get really clever you could make the square a bit bigger (so it
covers say 9 pixels) and do some antialiasing by hand in the pixel shader.
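The simple version really is one line - the only catch is that additive
blending (D3DBLEND_ONE for both source and destination) must be enabled
on the device, so the drawn values accumulate instead of overwriting:

// With SRCBLEND = DESTBLEND = D3DBLEND_ONE, every pixel this little
// quad covers has its stored count incremented by 1.
float4 ps_splat() : COLOR0
{
    return float4( 1, 1, 1, 1 );
}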
> Would it be possible to do all this using just, say, OpenGL?
Yes. But I don't know any of the syntax, sorry.
> OOC, is it possible to use a GPU to copy and transform chunks of video
> data? (E.g., rotate, scale, that kind of thing.)
Yes, you just draw a textured square wherever you want, using whatever you
want as the texture (the result of a previous calculation?).
The only limitation is that you cannot render to a texture at the same time
as using that texture as a source for drawing operations.
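As a sketch of the shaders for such a transformed copy ("srcTex",
"angle" and "scale" are names I've picked for illustration):

sampler2D srcTex : register(s0);   // the chunk being copied
float  angle;                      // rotation, in radians
float2 scale;

void vs_xform( float2 pos : POSITION, float2 uv : TEXCOORD0,
               out float4 oPos : POSITION, out float2 oUV : TEXCOORD0 )
{
    float s = sin( angle );
    float c = cos( angle );
    float2 p = pos * scale;                  // scale first...
    oPos = float4( p.x * c - p.y * s,        // ...then rotate
                   p.x * s + p.y * c, 0, 1 );
    oUV = uv;                                // texcoords pass straight through
}

float4 ps_copy( float2 uv : TEXCOORD0 ) : COLOR0
{
    return tex2D( srcTex, uv );              // plain copy of the source texel
}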