POV-Ray : Newsgroups : povray.general : skyvase : Re: skyvase => (better povbench)
  Re: skyvase => (better povbench)  
From: pan
Date: 9 Dec 2001 22:41:49
Message: <3c142efd@news.povray.org>
"Thorsten Froehlich" <tho### [at] trfde> wrote in message
news:3c13ece0@news.povray.org...
> In article <3c13d9da@news.povray.org> , <pan### [at] syixcom>  wrote:
>
> > http://www.tabsnet.com/
>
> Ah, great, so they are measuring background activity on every OS.  Neither
> platform's instructions mention the render process's CPU priority.  And for
> the Mac version someone even had the brilliant idea to suggest the medium
> setting.
>
> Together with the lack of any instructions on how to set a suitable priority
> on Windows or Linux, the whole benchmark still measures more noise than
> anything else, and the results are useless for any comparison because they
> are random...
>
>     Thorsten

Are you talking about the GUI and render priorities in the menus of
the Windows version?
I thought it was understood that those settings are irrelevant on Windows;
i.e. they don't work or affect anything in the real world.  The Windows
environment is always going to have "noise" in the sense that box a
might be OS-active in a different way than box b even though the same
.pov is being run. Maybe a recommendation to shut down all windows
except POV, turn off all devices and pull any internet plugs would
improve the results? Not reasonable for 'joe-user'. Thus, repeated
tests, as in all well-done testing regimes, are called for. If, over time,
a consistent result is obtained regardless of 'noise', then some measure
of trust can be invested. One could even use a set of results to measure
so-called 'os noise'.
For *nixes the same applies. Shut down all daemons, pull all plugs, and
anything else that might keep POV from running as the sole user of resources.
Again, unlikely. Repeated testing is again indicated.
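The repeated-runs idea could be sketched like this (just an illustration on my part; the timing numbers below are hypothetical, and in practice they would come from actually rendering the same .pov several times):

```python
import statistics

def summarize_runs(times):
    """Summarize repeated render times (in seconds) for one box.

    If the spread (stdev) is small relative to the mean, the result
    can be trusted despite background 'os noise'.
    """
    mean = statistics.mean(times)
    stdev = statistics.stdev(times) if len(times) > 1 else 0.0
    return {"runs": len(times), "mean": mean, "stdev": stdev}

# Hypothetical render times from five runs of the same scene:
print(summarize_runs([512.3, 515.1, 511.8, 530.2, 513.0]))
```

A single outlier run (a virus scan kicking in, say) then shows up as a large stdev rather than silently skewing one published number.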
Comparing box a to box b? If the purpose is to gain some points for
having the fastest machine, I don't care and am not interested.
I do think some level of confidence can be attained in comparisons
between machines. Develop some factor (call it omega) that describes
the 'os noise' for a particular box and can be included in a results set
comprising multiple and different machines. Omega can be arrived at
by running the results for a particular box through some statistical routines.
If omega turns out to be significant, it can be used in the matrix of
collected results to develop classes of performance or other metrics.
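One simple candidate for such an omega (purely my assumption; any statistical routine would do) is the coefficient of variation of the repeated run times: stdev divided by mean, so a fast box and a slow box can be compared on the same dimensionless scale. Again, the timing numbers are hypothetical:

```python
import statistics

def omega(times):
    """A hypothetical 'os noise' factor: coefficient of variation of
    repeated render times. 0.0 means perfectly repeatable; larger
    values mean noisier, less trustworthy results."""
    return statistics.stdev(times) / statistics.mean(times)

# Hypothetical results for two boxes rendering the same scene:
box_a = [512.3, 515.1, 511.8, 530.2, 513.0]
box_b = [498.7, 540.9, 470.2, 525.5, 560.1]

for name, runs in (("box a", box_a), ("box b", box_b)):
    print(f"{name}: omega = {omega(runs):.4f}")
```

Here box b would come out with a much larger omega than box a, even though their mean times are similar, which is exactly the kind of distinction a collected-results matrix could use.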
But - is 'omega' an important factor? I doubt it is a measure of
difference with any distinction, in the sense that no reasonable person
is going to be running many other processes that would diminish
the best possible numbers. I would think omega would be meaningful
only at the level where a few seconds might be added to or subtracted
from the parse and render times. On low-end machines 'os noise' might
be an overwhelming factor, but what are we talking about here?
Some 200 MHz Pentium II with 64 MB?
I could isolate a CPU in as noiseless an environment as possible, but
how many POV users are going to do that? There will always be an
interest in a povbench that joe-user can simply run on his box
regardless of 'noise'.
I'm not arguing against a good suite of povbench methods eventually
being developed - using chess2 is better than skyvase. If anything else
comes around I'll be among the first to use it. Hasn't happened yet.

Once 3.5 is final, it will definitely be a project to script a good .pov
for benching. Got to wait until then.

P.S. Of more importance is benching technique x vs. technique z
to accomplish the same image aspect. Having the fastest hardware is not
the same as learning efficient coding.


