You already have the histogram rendered with some quick settings or a
previous full render, and want to use it to divide the work?
Deciding the perfect division is going to be a 'hard' problem, so you
need a 'near enough' heuristic. Allowing arbitrary rectangular
subdivisions will also make it difficult to determine the best packing.
So how about this simple approach:
- Define minT and maxT as the minimum and maximum 'time', in those
arbitrary units, that you want a block to take. E.g. minT = 5000, maxT =
10000. You could probably come up with a rule of thumb for these based
on the total time estimate for the entire image and the number of
available processors.
- Consider scan lines and proceed left to right
- If T(line) < maxT
  - If (T(line) + T(line+1)) < maxT then accumulate line, line+1 (etc.)
    into a single block so long as the total stays under maxT
  - Else output the line as a block
- Else split the line
- In splitting the line, accumulate T(pixel) until it is > minT and <
maxT. You could even split it as close to proportionally as possible:
for example, if T(line) = 13000 then you would want parts such that
T(part1) ~ 6500 and T(part2) ~ 6500 rather than T(part1) = 10000 and
T(part2) = 3000.
You could get some blocks that take less than minT, but probably not
many. You'll only get blocks that take more than maxT if a single pixel
is the cause, and there is nothing to be done about that.
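The whole heuristic can be sketched roughly like this in Python. Everything here
(the function name, the idea of T as a 2D array of per-pixel time estimates, the
block tuple layout) is my own illustration, not anyone's actual renderer code:

```python
import math

def divide_into_blocks(T, min_t=5000, max_t=10000):
    """T: 2D list of per-pixel time estimates (the histogram).
    Returns blocks as (row_start, row_end, col_start, col_end, cost)
    with half-open row/column ranges."""
    blocks = []
    n_rows, n_cols = len(T), len(T[0])
    row = 0
    while row < n_rows:
        line_t = sum(T[row])
        if line_t < max_t:
            # Accumulate following full lines while the total stays under max_t.
            end, total = row + 1, line_t
            while end < n_rows and total + sum(T[end]) < max_t:
                total += sum(T[end])
                end += 1
            blocks.append((row, end, 0, n_cols, total))
            row = end
        else:
            # Split the line proportionally: the fewest parts that each
            # fit under max_t, aiming for roughly equal part sizes.
            n_parts = math.ceil(line_t / max_t)
            target = line_t / n_parts
            start, part_t = 0, 0
            for col in range(n_cols):
                part_t += T[row][col]
                if part_t >= target:
                    blocks.append((row, row + 1, start, col + 1, part_t))
                    start, part_t = col + 1, 0
            if start < n_cols:  # flush the remainder of the line
                blocks.append((row, row + 1, start, n_cols, part_t))
            row += 1
    return blocks
```

With a uniform line of cost 24 and maxT = 10 this produces three parts of
roughly 8 each instead of a 10/10/4 greedy split, which is the proportional
behaviour described above.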
Also I'd suggest sorting the blocks in descending time and passing out
the heaviest ones first. That way your render won't wait for some large
block to finish after everything else is done.
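In code that's just a sort on the estimated cost. The block representation
here (a label paired with its estimated cost) is made up for illustration:

```python
# Dispatch the heaviest blocks first so the render doesn't end up
# waiting on one large block that was started last.
blocks = [("rows 0-9", 9500), ("row 10, left half", 3200), ("rows 11-12", 7800)]
queue = sorted(blocks, key=lambda b: b[1], reverse=True)
# Workers now pull from the front of queue: 9500 first, 3200 last.
```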
You might also develop a measure Q of the efficiency of different
algorithms. If the Q of a simple algorithm is near enough to the
optimum and in the same ballpark as the Q of a fancy algorithm then it
is good enough.
It would be interesting to see those division images for different
approaches.