On 06.08.2011 23:52, jhu wrote:
> Christian Froeschlin <chr### [at] chrfr de> wrote:
>> jhu wrote:
>>
>>> I've split up each line to be rendered by different cores and computers.
>>
>> I wonder if that actually increases the total time needed for radiosity?
>
> I think it does, but since I'm using 3.6.1, the overall time taken to render is
> less. Also, because it's taking so long, if anything happens, I still have some
> progress saved (more than if I hadn't split up the render), like when my
> computer's PSU died a few weeks ago from probably too much rendering.
(1) Some of the nastiest & hardest-to-eliminate radiosity artifacts seen
with 3.6.x (and earlier) were due to serious flaws in the radiosity
algorithm that have been eliminated in 3.7.
(2) How's memory usage? With exceptionally high-quality radiosity
settings you can easily exceed your system's physical RAM, leading to
swapping and essentially stalling the render. If you split such a
workload across multiple cores by running multiple instances of 3.6.x,
you're also multiplying RAM usage accordingly, which makes the physical
RAM limit an even bigger problem. With 3.7, multiple rendering threads
running on different cores share a single radiosity data cache, keeping
the memory footprint roughly what a single thread would use.
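To make the difference concrete, here's a minimal sketch (not POV-Ray source; the region names and cache layout are hypothetical) modeling why N separate 3.6.x instances store radiosity samples N times over, while N threads sharing one cache store them once:

```python
import threading

# Hypothetical memory model: every worker needs radiosity samples for
# the same scene regions. With separate processes (3.6.x style), each
# worker keeps its own cache, so shared regions are stored once per
# worker. With threads sharing one cache (3.7 style), each sample is
# stored once, guarded by a lock.

REGIONS = [f"region-{i}" for i in range(1000)]  # regions every worker samples

def separate_caches(num_workers):
    """Approximate cache size for N independent 3.6.x instances."""
    caches = [dict() for _ in range(num_workers)]
    for cache in caches:
        for r in REGIONS:
            cache[r] = object()  # each instance computes and stores its own copy
    return sum(len(c) for c in caches)

def shared_cache(num_workers):
    """Approximate cache size for N 3.7-style threads sharing one cache."""
    cache, lock = {}, threading.Lock()
    def worker():
        for r in REGIONS:
            with lock:
                cache.setdefault(r, object())  # stored once, reused by all threads
    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(cache)

print(separate_caches(4))  # 4000 entries: roughly 4x the memory
print(shared_cache(4))     # 1000 entries: roughly 1x the memory
```

The same scaling applies to cores: four 3.6.x instances carry four full caches, while four 3.7 threads carry one.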
Bottom line: Use 3.7.