> Another question on ray-tracing, why does parallel ray tracing use screen
> space task subdivision?
It's the only effective way to spread the workload.
The existing version of POV-Ray can do this if you use special utilities
or a few command-line or ini file tricks. You can also obtain a degree
of parallel processing on multi-CPU machines by running multiple POV
instances with this technique, although your memory requirements are
multiplied by the number of CPUs.
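For example, POV-Ray's partial-output ini options (Start_Row/End_Row, or the equivalent +SR/+ER switches) let two instances each take half of a 640x480 frame; the halves are stitched together afterwards:

```ini
; instance 1 -- renders the top half
Start_Row=1
End_Row=240

; instance 2 -- renders the bottom half
Start_Row=241
End_Row=480
```

Each instance still parses the whole scene, which is why memory use scales with the number of instances.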
The challenge is dealing with the fact that with most scenes, different
regions will render at different speeds. For instance, if you had one
machine do the top half and another do the bottom half, with most scenes
the top half (often mostly sky) will be done far sooner than the bottom.
With the SMP route, dividing by pixel is fine because one extra memory
access per pixel is trivial. It's not fine for clustering because
network latency is definitely not trivial.
The solution in my mind is to use relatively small tile sizes (like
32x32 pixels), and add some communication so that different machines are
always working. I like this idea because the machines in a cluster
could be far apart in speed and still be helpful.
Again, this sort of effect can be faked with utility software, but that can't avoid the repeated parsing, radiosity pre-passes, and photon shooting, which add considerable overhead when a complex scene is rendered as many small tiles.
Be aware that I'm not on the POV-Team and nothing I'm saying should be
indicative of what POV-Ray 4.0 may or may not be capable of doing. I'm
working on this independently because it's an interesting challenge.
-Ryan