  Re: Isn't it time for a new pov benchmark?  
From: Jérôme Grimbert
Date: 20 Aug 2001 06:57:21
Message: <3B80EDA5.409004B0@atosorigin.com>
Warp wrote:
> 
>   The PovBench (http://www.haveland.com/povbench/) has been obsolete for
> long time now (several years, I would say). The skyvase scene is not slow
> enough to measure current fastest computers accurately, and besides, the
> list is full of bogus entries. Even if the entry is genuine, two entries
> with a rendering time of 3 seconds just doesn't tell which one is better or
> if they are equal.
>   Moreover, using one simple scene to benchmark POV-Ray is too restrictive:
> It measures just a very small percentage of POV-Ray features (eg. it doesn't
> measure heavy memory usage or parsing speed).
> 
>   Wouldn't it be time to make a renewed POV-Ray benchmark? A much better
> benchmark up to the current computer speeds?
>   This benchmark could have the following features:
> 
>   1. It has more than one scene file, each one benchmarking its own important
> area of rendering: for example, one for raw raytracing speed, one which takes
> a long time to parse, one which uses lots of memory, etc.

Then you would have multiple metrics. IMNSHO, that is a bad thing, because
soon everyone will want their own test scene.
You should rather stick to a single scene, which should nevertheless
be a complex-looking and lovely one.
Rendering an animation (a set of frames) might be an option, but it could easily
open Pandora's box with the #if/#switch statements.
In any case, a full INI file should be provided with the scene,
and every possible setting should be stated explicitly in that INI file.

BTW, the rendered size should be huge (a minimum of 1600x1200), with antialiasing,
and the scene should not be biased toward any single colour.
(Skyvase is much too blue!)
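As a sketch of that idea (the scene file name is hypothetical; the option names are standard POV-Ray 3.x INI settings), such an INI file would pin down every setting explicitly, including the render size and antialiasing suggested above:

```ini
; benchmark.ini -- hypothetical INI file for the benchmark scene.
; Every setting is stated explicitly so no entry can quietly change it.
Input_File_Name=benchmark.pov
Width=1600
Height=1200
Quality=9
Antialias=On
Antialias_Threshold=0.3
Display=Off
Output_File_Type=N
All_Console=On
```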

> 
>   2. These scene files should take a reasonable amount of time on current
> computers. This amount should be chosen so that it will not be on the order
> of a couple of seconds in a few years. For example, they could take about 10
> minutes each on a 1.2GHz Athlon.

That's way too short.
It should take at least two hours on a 1.4GHz Athlon with enough memory.
Provision should be made in the test protocol to replace the scene once the
reported time drops below 3 minutes on commonly available hardware.

And it should reflect the particularities of POV-Ray, so that producing
the same picture in other (triangle-based) software would be very
difficult with the same short code and the same smoothness.


> 
>   3. The submission of entries should be controlled. Of course it's difficult
> to see if someone is just making a bogus entry if the numbers are credible,
> but there could be, for example, a "trusted entry system": Entries from
> trusted sources could get a mark showing that it's a trusted entry. Entries
> not having this mark may or may not be true (and this should be clearly
> stated in the page).

Easier, unless someone has really decided to forge it: have POV-Ray output
a CRC along with the statistics, and only accept captures of the
POV-Ray output :-> If the CRC does not match, reject the entry :-<
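A minimal sketch of that check in Python (all names are hypothetical; POV-Ray does not actually emit such a CRC): the renderer would print a checksum of the rendered image next to its statistics, and the submission page would recompute it before accepting the entry.

```python
import zlib

def output_crc(image_bytes: bytes) -> str:
    """CRC-32 of the rendered image, as the renderer would print it
    (hypothetical feature)."""
    return format(zlib.crc32(image_bytes) & 0xFFFFFFFF, "08x")

def accept_entry(image_bytes: bytes, submitted_crc: str) -> bool:
    """Reject the entry when the submitted CRC does not match the image."""
    return output_crc(image_bytes) == submitted_crc

# Stand-in bytes for a rendered benchmark image.
render = b"fake pixel data standing in for the benchmark output"
print(accept_entry(render, output_crc(render)))  # genuine entry
print(accept_entry(render, "deadbeef"))          # forged CRC
```

This only deters casual fakes, as the text notes: anyone willing to patch the binary can still forge the CRC.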

Really, faked entries are of no concern for commonly available systems:
when a lot of P2/400MHz machines report a render time of x, an x/30 entry is
obviously faked.

Maybe the benchmark could summarize per kind of processor, with
the min, max and average, so readers know what to expect from
a given kind of processor. (If the Celeron 333 metrics are
1s/79s/119s, then something strange is obviously going on, because a
ratio of more than 100 for the same processor is hard to explain
by a change of OS alone.)
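A short Python sketch of that summary (the entry lists are made-up data for illustration): group the submitted times per processor kind, report min/max/average, and flag any kind whose max/min ratio is implausibly large.

```python
from statistics import mean

# Hypothetical submitted render times (in seconds) per processor kind.
entries = {
    "Celeron 333": [119, 94, 79, 1],   # the 1s entry looks faked
    "P2/400":      [92, 88, 95, 90],
}

def summarize(times):
    """Min, max and average render time for one kind of processor."""
    return min(times), max(times), round(mean(times), 1)

def looks_suspicious(times, ratio=100):
    """Flag a processor kind whose max/min ratio exceeds `ratio`:
    such a spread for the same CPU is hard to explain by the OS alone."""
    return max(times) / min(times) > ratio

for cpu, times in entries.items():
    lo, hi, avg = summarize(times)
    flag = " <- suspicious" if looks_suspicious(times) else ""
    print(f"{cpu}: {lo}s/{hi}s/{avg}s{flag}")
```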

Personally, I wouldn't bother with trusted/untrusted entries, nor
even with any checking of the values.


