waggy wrote:
>
> But I put together a much simpler scheme that may be more appropriate for a
> shared cluster using the installed job queue manager. All I needed to write
> were two scripts: one to create job submission tickets, and one to run POV-Ray
> on the frame range (or image part) associated with each task number.
That sounds like a pretty plausible approach indeed.
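For illustration, a per-task runner of the kind described might look like the sketch below. The queue's task-id variable (shown here as SLURM-style `SLURM_ARRAY_TASK_ID`), the scene name, and the frames-per-task count are all assumptions, not details from the original post; the frame subset is selected with POV-Ray's standard `+SF`/`+EF`/`+KFI`/`+KFF` animation options.

```shell
# Hypothetical per-task runner: map a queue task number to a frame range
# and hand that range to POV-Ray. All names here are illustrative.
SCENE=${SCENE:-scene.pov}
FRAMES_PER_TASK=${FRAMES_PER_TASK:-10}
TASK_ID=${SLURM_ARRAY_TASK_ID:-1}   # supplied per task by the queue manager

# Task 1 renders frames 1..FRAMES_PER_TASK, task 2 the next batch, etc.
START=$(( (TASK_ID - 1) * FRAMES_PER_TASK + 1 ))
END=$(( TASK_ID * FRAMES_PER_TASK ))

echo "task $TASK_ID renders frames $START..$END"
# povray "$SCENE" +KFI"$START" +KFF"$END" +SF"$START" +EF"$END" -D
```

Increasing `FRAMES_PER_TASK` is also the knob for amortizing the queue manager's per-task dispatch delay over more frames.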
> The only performance problem I'm having is with the delay of as much as a few
> seconds between a node finishing a frame range and the queue manager pushing the
> next task onto it.
I guess the impact of this could be reduced by increasing the number of
frames per job package.
> I also anticipate decreased performance on images with
> (single-thread) parse times approaching trace times.
Why should that /decrease/ performance? At present, parsing is done
over and over again for each image anyway. (Of course it will not max
out the node while parsing, but this would be the case on a single-node
system as well.)
> I have had some success
> overcoming these problems by running two tasks (two POV-Ray instances) on each
> node at markedly different niceness to keep each node busy. (Niceness is used
> in an attempt to decrease task-switching a bit and to help stagger render times.)
> However, this works a bit too well as the processing environments I have
> available watch for time-averaged node overloading, and during my tests, many
> nodes stop accepting new jobs for a while when they have more than 12 active
> threads on an 8-core node, then alarm when over 14.
Hmm... it might be an interesting idea for animations to have POV-Ray
run parse threads for a number of frames in parallel (depending on
available cores), then render the batch of frames.
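Until POV-Ray itself supports that, a crude user-side approximation is to launch several single-frame renders concurrently on a node, so that one frame's single-threaded parse overlaps another frame's trace. This is only a sketch: the scene name and frame numbers are made up, and the real `povray` invocation is stubbed with `echo` so the sketch runs anywhere.

```shell
# Hypothetical: render a small batch of frames concurrently on one node.
render_frame() {
    # povray scene.pov +KFI"$1" +KFF"$1" -D   # real call (assumed scene name)
    echo "frame $1 done"                      # stub so the sketch is runnable
}

for F in 1 2 3 4; do
    render_frame "$F" &    # one background instance per frame
done
wait                       # block until the whole batch has finished
```

Note this trades memory for throughput: each instance parses and holds the full scene, which may matter for large scenes.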