scott wrote:
>> When some rays hit an object and others don't,
>> you add them to different queues.
>
> Why add it to a different queue? All rays that still need to be traced
> (whether they are reflection, refraction, primary or shadow rays) simply
> need to be intersected with the scene geometry.
I was thinking more along the lines of each surface possibly having a
different code path for computing illumination. Rays in the same bundle
can't take different code paths.
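The queue-per-code-path idea above can be sketched as binning rays by the material of the surface they hit, so each batch shades with a single code path and there's no divergence inside a bundle. This is just an illustrative sketch; the ray/hit records and names are made up, not from any particular renderer.

```python
from collections import defaultdict

def bin_rays_by_material(hits):
    """hits: list of (ray_id, material_id) pairs; material_id is None on a miss.

    Returns one queue per material, so each queue can be shaded with a
    single illumination code path.
    """
    queues = defaultdict(list)
    for ray_id, material_id in hits:
        queues[material_id].append(ray_id)
    return queues

# Toy example: rays 0 and 2 hit a matte surface, ray 1 hits a mirror,
# ray 3 misses everything.
queues = bin_rays_by_material([(0, "matte"), (1, "mirror"), (2, "matte"), (3, None)])
```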
> In that way the GPU will just process as many rays as it can in batches
> until the scene is complete. The CPU would handle preparing the queue
> at each step, ie removing rays that have terminated and inserting new
> rays for shadows and reflections etc.
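The batch loop scott describes could look something like the following: the "GPU" step intersects a whole queue at once, and the CPU step culls terminated rays and enqueues the secondary rays. All the names here (`intersect_batch`, `shade`) are stand-ins, not a real API.

```python
def trace_scene(primary_rays, intersect_batch, shade):
    """intersect_batch(rays) -> list of hits (None on miss);
    shade(ray, hit) -> list of new secondary rays (shadow, reflection, ...)."""
    queue = list(primary_rays)
    results = []
    while queue:
        hits = intersect_batch(queue)           # one big GPU-style batch
        next_queue = []
        for ray, hit in zip(queue, hits):
            if hit is None:
                results.append((ray, "background"))
                continue
            results.append((ray, hit))
            next_queue.extend(shade(ray, hit))  # insert secondary rays
        queue = next_queue                      # terminated rays drop out
    return results

# Toy example: rays are plain ints; anything below 5 "hits" and spawns one
# reflection ray that then flies off into the background.
hits = trace_scene(
    [0],
    intersect_batch=lambda rays: ["hit" if r < 5 else None for r in rays],
    shade=lambda ray, hit: [ray + 10],
)
```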
Depends on whether you're thinking of running the whole computation on
the GPU, or just the ray intersection tests, or what.
> Of course to be able to do efficient scene intersection on the GPU it
> would probably be best to only allow triangle-based scenes
I see no especial reason why you can't do arbitrary intersection tests
on a GPU. The math isn't that complicated. The only real problem is
going to be trying to run something like isosurfaces, which use adaptive
space division; a GPU probably won't like that. But something like a
Newton iteration should be fine. [That obviously isn't applicable to
general isosurfaces though...]
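The Newton iteration mentioned above would amount to something like this: find where a ray p(t) = o + t*d crosses an implicit surface f(p) = 0 by iterating on t, using the chain rule df/dt = grad f · d. The unit-sphere f below is just an example input; a real tracer would also need a sensible starting t and a fallback for when the iteration diverges.

```python
def newton_ray_intersect(o, d, f, grad_f, t0, iters=20, eps=1e-9):
    """Newton's method along the ray: t <- t - f(p(t)) / (grad f(p(t)) . d)."""
    t = t0
    for _ in range(iters):
        p = [o[i] + t * d[i] for i in range(3)]
        g = grad_f(p)
        dfdt = sum(g[i] * d[i] for i in range(3))  # chain rule: grad f . d
        if abs(dfdt) < eps:
            break  # tangential ray; Newton step would blow up
        t -= f(p) / dfdt
    return t

# Unit sphere at the origin: f(p) = |p|^2 - 1, grad f = 2p.
f = lambda p: p[0]**2 + p[1]**2 + p[2]**2 - 1.0
grad = lambda p: [2 * p[0], 2 * p[1], 2 * p[2]]

# Ray from (0,0,-3) along +z should hit the sphere at t = 2.
t_hit = newton_ray_intersect((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), f, grad, t0=1.0)
```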
> and figure
> out some way to store a Kd-tree of triangles efficiently in a texture.
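One common way to get a Kd-tree into texture-friendly form is to flatten it into an array of fixed-size node records, with child pointers stored as array indices (the left child can sit immediately after its parent, so only the right-child index needs storing). The node format below is invented for illustration, not any particular paper's layout.

```python
def flatten_kd(node, out):
    """node: ("leaf", [tri_ids]) or ("split", axis, pos, left, right).
    Appends fixed-size records to `out` and returns this node's index."""
    index = len(out)
    if node[0] == "leaf":
        out.append(["leaf", node[1], None, None])
        return index
    _, axis, pos, left, right = node
    out.append(["split", axis, pos, None])    # right-child slot patched below
    flatten_kd(left, out)                     # left child lands at index + 1
    out[index][3] = flatten_kd(right, out)    # store the right-child index
    return index

# Tiny example: split on x at 0.5, triangles 0 and 1 left, triangle 2 right.
tree = ("split", 0, 0.5,
        ("leaf", [0, 1]),
        ("leaf", [2]))
flat = []
flatten_kd(tree, flat)
```

A GPU traversal kernel then walks `flat` by index instead of chasing pointers, which is exactly what a texture lookup can do.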
Also depends on whether you're trying to write your code as a shader, or
use a real GPGPU system. (Such as the OpenCL in the title.)
> Sounds like the sort of thing someone has already done a PhD on :-)
Indeed, that's where I read about the render queue approach. (No, I
don't remember where *exactly*.)