First, pardon me if the idea presented below has already been done to death.
From what I've seen, current methods for distributing a render involve
dicing up the scene and having the separate processors render different
parts of the scene, but with each processor using the same render database.
I had the half-baked idea of dividing up the render task a different
way: split the scene database itself among the separate processors.
One processor decides which rays get traced, and it sends a
ray-intersection request to each of the other processors. Each one
tests the ray against its smaller subset of the data and reports the
results back to the controlling processor, which then interprets them
as need be.
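A minimal single-process sketch of the scheme, in Python. The names (`Worker`, `controller_trace`) and the use of plain function calls in place of a real message-passing layer are my own inventions for illustration; an actual implementation would partition the data across machines and send the requests over MPI, PVM, or sockets.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive ray parameter t for a unit-direction ray
    hitting a sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # quadratic discriminant; a == 1 for a unit direction
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

class Worker:
    """One processor: holds only its partition of the scene database."""
    def __init__(self, spheres):
        self.spheres = spheres      # list of (center, radius) pairs

    def intersect(self, origin, direction):
        """Answer a ray-intersection request against the local subset."""
        hits = [t for c, r in self.spheres
                if (t := intersect_sphere(origin, direction, c, r)) is not None]
        return min(hits, default=None)

def controller_trace(workers, origin, direction):
    """Controller: broadcast the request, gather replies, keep the nearest hit."""
    replies = [w.intersect(origin, direction) for w in workers]
    hits = [t for t in replies if t is not None]
    return min(hits, default=None)
```

Note that the controller only has to merge one small reply per processor (the nearest local hit), so each worker can keep its whole subset private; the round-trip per ray is exactly the communication cost mentioned under "Cons" below.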
Pros:
* Scenes can now be larger without stressing memory limits.
* Ray-intersection tests take less time.
Cons:
* Communication time between processors may present an unacceptable
bottleneck.
Regards,
John