Am 05.06.2014 19:17, schrieb gaprie:
> I actually wanted to find a file whose job is to read the command line
> (ex: "Povray +SR1 +ER200 +Ipawns.pov ER200") and throw it into the main
> POV-Ray rendering process.
There is no such file, because POV-Ray /already/ uses some MPI-ish
interface between the part we call the front-end and the part we call
the back-end (with the design goal to ultimately support multi-node
operation in some future version).
The front-end is a single thread responsible for reading the command
line, and later assembling and writing the output image.
The back-end is actually multiple threads: One parser thread responsible
for parsing the scene file (in a multi-node environment there would have
to be one such thread per node), and a number of threads rendering the
image in chunks of 32x32 pixels at a time. A thread finished with its
chunk would send the results to the front-end, then go for the next
yet-unrendered chunk.
The interface used for message passing does not conform to the MPI
standard (which is maintained by the MPI Forum; Argonne National
Laboratory's MPICH is one well-known implementation), but uses a
proprietary API: The thing called POVMS, which - as Thorsten has already
noted - comes in both a C and a C++ version (the latter being
implemented as a wrapper around the C API).
If your goal is not to just throw together a quick hack to run POV-Ray
on multiple nodes, but to create some decent solution that future
development can build on, maybe you should actually work on the POVMS
implementation itself to get it flying across node boundaries. (Or, as a
potential alternative, rip out POVMS entirely and replace it with some
widespread and well-maintained MPI library.)
If your goal /is/ to just throw together a quick hack, then maybe you'd
actually be better off with some shell scripts that just run multiple
POV-Ray instances with suitable parameters, followed up with some
ImageMagick call to paste the results together.
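A minimal sketch of such a quick hack (scene name, image size, and
output names are all placeholders, and it assumes a partial render
writes only the rendered rows to its output file):

```shell
#!/bin/sh
# Hypothetical quick-hack sketch: split an H-row render into two bands
# using POV-Ray's +SR/+ER (start/end row) options, render both bands in
# parallel, then stitch them with ImageMagick. Scene name (pawns.pov),
# image size, and output names are placeholders.
W=400; H=400
MID=$((H / 2))
echo "band 1: rows 1-$MID, band 2: rows $((MID + 1))-$H"
# Guarded so the sketch is a no-op where povray/convert are absent.
if command -v povray >/dev/null 2>&1 && command -v convert >/dev/null 2>&1
then
    povray +W$W +H$H +SR1 +ER$MID +Ipawns.pov +Otop.png &
    povray +W$W +H$H +SR$((MID + 1)) +ER$H +Ipawns.pov +Obottom.png &
    wait
    # Assuming each partial render contains only its own rows, a
    # vertical append reassembles the full frame.
    convert top.png bottom.png -append pawns.png
fi
```

For more than one machine you would generate one band per node the same
way, copy the partial images back, and do a single append at the end.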