Here is a little benchmark study conducted with the current beta (beta.21)
of POV-Ray 3.7 for Linux on the x86_64 architecture. The built-in benchmark was
run on an 8-core machine using 2^N render threads, with N ranging from 0 to 6
(i.e. from 1 to 64 render threads). Three independent runs were performed per
thread count and the elapsed time was monitored using the Unix 'time' command;
only the fastest run is reported here.
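For those who want to script the procedure, here is a minimal sketch of the
kind of loop involved (assumptions: the loop itself is mine, not how the runs
were actually launched, and piping a newline to answer the benchmark's start
prompt may or may not work with this beta; see note 2 for the interactive
invocation):

    #!/bin/bash
    # Sketch: run the built-in benchmark three times per thread count
    # (powers of two from 1 to 64) and print the elapsed time of each run.
    for n in 1 2 4 8 16 32 64; do
        for run in 1 2 3; do
            echo "+wt$n, run $run:"
            # The piped newline answers the benchmark's start prompt
            # (see note 2); 'time -p' prints the elapsed ("real") time.
            time -p sh -c "echo | ./povray --benchmark +wt$n"
        done
    done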
The following summarizes the results, given as the number of render
threads (+wt), the total ELAPSED time (in seconds), and the overall speedup
with respect to 1 thread:
+wt   elapsed   speedup
  1       637      100%
  2       318      200%
  4       162      393%
  8        82      777%
 16        82      777%
 32        82      777%
 64        83      767%
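The speedup column is just the 1-thread elapsed time divided by the N-thread
elapsed time (e.g. 637/162 = 3.93, i.e. 393%). A quick way to recompute it,
assuming the data rows above were saved to a hypothetical file results.txt:

    awk '{ if (NR == 1) t1 = $2; printf "%2d threads: %3.0f%%\n", $1, 100 * t1 / $2 }' results.txt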
NOTES:
1) Machine specs
4x Dual Core AMD Opteron(tm) Processor 875 @ 2.2 GHz (stock speed)
Linux 64-bit kernel 2.6.9
2) Running the benchmark
For testing purposes only, the benchmarks were run directly from within the
directory where the beta.21 package was unpacked. This requires adding a
Library_Path="./include" line to the povray.ini file therein. Then one can run
the built-in benchmark:
./povray --benchmark +wtN <enter><enter>
where N is the number of render threads (1, 2, 4, 8, 16, 32, or 64).
The built-in benchmark requires pressing <enter> before it starts: the <enter>
key was therefore hit twice in quick succession so that the measured elapsed
time is accurate to about one second (see below).
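In other words, the povray.ini file sitting in the unpacked directory just
needs the following line added (leaving whatever else is in there untouched):

    Library_Path="./include"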
3) ELAPSED time versus CPU time
The Linux beta.21 build is not currently able to report the correct CPU time
when more than one render thread is used; this is not a bug in POV-Ray but
the consequence of using slightly outdated versions of the Linux kernel and
glibc when building the POV-Ray binary. For similar reasons, the Unix 'time'
command can only be used to monitor elapsed time. The benchmarks were thus
conducted on a machine where POV-Ray was the only CPU-hungry job running.
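For the record, GNU time can be asked to print just the elapsed wall-clock
time, which is what was monitored here (a sketch; the '%e' format specifier is
standard GNU time, but the piped newline is the same assumption as in the
script above):

    /usr/bin/time -f "%e elapsed seconds" sh -c "echo | ./povray --benchmark +wt8"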
4) Performance speedup
Unfortunately, this benchmark completes far too quickly on this kind of machine
to obtain reliable scaling figures, especially given the timing limitations
described above.
- NC