Orchid XP v8 wrote:
> But if you think about common scenes where you have walls and floors and
> so forth, it might be worth using the GPU to test ray intersections
> against these, and against bounding volumes (if you're using them). Huge
> numbers of rays need to be run through these tests, so the GPU can fire
> those off quite quickly.
But those tests are also quite simple, so would benefit the least from
the GPU. If isosurfaces could be translated efficiently into shaders,
then those would show the most benefit (and Julia fractals, of course).
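To see why a wall or floor test gains so little, here's a rough CPU-side sketch (plain Python, not an actual GPU kernel): a ray/plane intersection boils down to two dot products and one divide, so there's barely any arithmetic for the GPU to accelerate, compared with the iterative root-solving an isosurface needs.

```python
def ray_plane(origin, direction, normal, d):
    """Return the distance t along the ray to the plane n.x + d = 0,
    or None if the ray is parallel to it or the hit is behind the ray."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, normal)
    if abs(denom) < 1e-9:
        return None                      # parallel: no intersection
    t = -(dot(origin, normal) + d) / denom
    return t if t > 0 else None          # only hits in front of the ray count
```

That's the whole test; the expensive part of a real renderer is everything that happens *after* the hit.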
> It might also be worth running the "so what the
> hell is the colour of this surface?" calculation on the GPU - what with
> texturing, normal maps, multiple light rays of different angles and
> colours, etc.
Possibly. In fact, using the GPU for GI might be an option. For
instance, we could run a single extremely high resolution pass with no
lighting or texturing, just intersections, and cache all the
intersection locations. Then, feed these intersections to the GPU, and
have it calculate lighting for those points. Feed the lighting data
back to the CPU for radiosity, and voila! Fast GI :) (Of course,
implementing it would be a b*tch, but that's beside the point).
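The pass structure above could look something like this (a toy sketch, all names made up, scalar "luminance" standing in for real color math). The point is that pass 2 is embarrassingly parallel: every cached hit point is shaded independently, so on real hardware it maps to one GPU thread per point.

```python
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def intersection_pass(rays, scene_intersect):
    """Pass 1: intersections only -- cache (point, normal) pairs,
    no lighting, no texturing."""
    return [scene_intersect(ray) for ray in rays]

def lighting_pass(hits, lights):
    """Pass 2: diffuse lighting at each cached hit point. Each loop
    iteration is independent -- this is the part to ship to the GPU."""
    results = []
    for point, normal in hits:
        lum = 0.0
        for light_pos, light_power in lights:
            ldir = normalize(sub(light_pos, point))
            lum += light_power * max(0.0, dot(normal, ldir))
        results.append(lum)
    return results   # fed back to the CPU for the radiosity gather
```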
> Also, let's clear this up: My understanding is that the GPU does not
> require *all* cores to run an identical kernel. IIRC, the cores are
> grouped into (fairly large) bundles, each bundle runs a single kernel,
> but different bundles can run completely different kernels. So you don't
> need a ray queue with enough rays for the entire GPU, just for a whole
> bundle of cores.
True, I think the shaders are in blocks of 4, and you have to have
groups of 32 blocks running the same shader or something like that
(which would be 128 shaders per program). I don't remember the exact
numbers, though, and in fact it's probably GPU dependent.
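For what it's worth, on NVIDIA hardware the scheduling unit is the 32-thread warp, and yes, the grouping is vendor- and generation-dependent. The queuing idea itself is simple either way; here's a toy sketch (made-up names, `WARP` standing in for whatever the real hardware width is): sort queued rays by which kernel they need, then chop each group into warp-sized batches so every bundle runs a single kernel.

```python
WARP = 32  # stand-in for the hardware's actual scheduling width

def batch_rays(queue, kernel_of):
    """Group queued rays by required kernel, then split each group
    into warp-sized batches so one bundle never mixes kernels."""
    by_kernel = {}
    for ray in queue:
        by_kernel.setdefault(kernel_of(ray), []).append(ray)
    batches = []
    for kernel, rays in by_kernel.items():
        for i in range(0, len(rays), WARP):
            batches.append((kernel, rays[i:i + WARP]))
    return batches
```

So the queue only has to fill a warp, not the whole chip, before a kernel is worth launching.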
...Chambers