Subject: Re: Isn't it time for a new pov benchmark?
From: Warp
Date: 20 Aug 2001 08:44:38
Message: <3b810636@news.povray.org>

: Then you would have multiple metrics. IMNSHO, that's a bad thing, because
: everyone will soon want their own test scene.
: You should rather stick to a single scene, which should nevertheless
: be a complex-looking and lovely one.

  I'm not sure I understand what you mean.

  The idea with several scenes is that they measure different aspects of
the raytracing process. Of course the results of each scene should be
shown separately, although the total score could be the sum of the times.
  Naturally, the rendering times of the scenes should be approximately
equal so that the results don't get biased towards one of them (i.e. the
one which takes the longest to render).
  There could be at least 4 different test files:

  1) A test for parsing speed: a pov-file which spends most of its time
in the parsing stage (doing something useful, not just idle loops) and a
minimal amount of time in the rendering part. This could be, for example,
a scene which places some thousands of spheres according to some very
complicated algorithm (a minimal sketch of such a file follows this list).
The scene could take some memory at parsing time, but not at render time.

  2) A test for heavy memory usage: the scene should take a considerable
amount of memory which is actively used while rendering. This could be
achieved with lots of light sources, a huge number of objects and so on.

  3) A test for raw raytracing speed: parsing time and memory usage should
be negligible (i.e. parsing should take at most a few seconds and the scene
shouldn't use much memory), but the raytracing part takes most of the time
(see the second sketch below).

  4) A scene which combines all three.
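  To make tests 1 and 3 more concrete, here are two minimal sketches.
They are only illustrations, not proposed benchmark files: the chaotic
map, its constants and the scene contents are all just made-up examples.
The first file spends practically all of its time in the parser:

// Nearly all of the time goes into the #while loops below, which
// iterate a chaotic map to place a few thousand small spheres; the
// finished scene itself renders very quickly.
camera { location <0, 0, -40> look_at <0, 0, 0> }
light_source { <30, 30, -30> color rgb 1 }

#declare X = 0.1;
#declare Y = 0.3;
#declare I = 0;
#while(I < 5000)
  // Burn parsing time: iterate the map many times per sphere.
  #declare J = 0;
  #while(J < 500)
    #declare NX = 1 - 1.4*X*X + Y; // the Henon map, just as an example
    #declare Y = 0.3*X;
    #declare X = NX;
    #declare J = J + 1;
  #end
  sphere { <X*10, Y*30, 0>, 0.2 pigment { color rgb <1, 0.5, 0> } }
  #declare I = I + 1;
#end

The second file is the opposite: it parses in a moment, but the mirrors
and the area light make the actual tracing expensive:

// Parsing is negligible here; the render time comes from the many
// shadow rays per area light test and from reflected rays bouncing
// between the surfaces up to the max_trace_level limit.
global_settings { max_trace_level 20 }
camera { location <0, 2, -6> look_at <0, 1, 0> }
light_source {
  <10, 10, -10> color rgb 1
  area_light <4, 0, 0>, <0, 0, 4>, 9, 9 jitter
}
plane {
  y, 0
  pigment { checker color rgb 0, color rgb 1 }
  finish { reflection 0.5 }
}
sphere { <-1.2, 1, 0>, 1 pigment { color rgb 1 } finish { reflection 0.9 } }
sphere { < 1.2, 1, 0>, 1 pigment { color rgb 1 } finish { reflection 0.9 } }

A file for test 2 could reuse the kind of loop from the first sketch, but
keep far more spheres and give each one its own texture, so that the
memory stays in active use during the render.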

  As I said, the rendering times of the four scenes should be approximately
equal so that the results don't get biased.

  The idea behind this is that some computers are better at one of these
tasks than at another, and this kind of test gives a good idea of what a
given computer is good at.
  A single scene can't test all of these things, and even if it did
(as in the 4th example), one couldn't see how well the computer performs
in the individual tasks.

: That's way too short.
: It should take at least two hours on a 1.4 GHz Athlon with enough memory.

  I think two hours is overkill. Granted, 10 minutes may be too short,
but 2 hours is too much; I don't think people want to wait hours for this.
  Or perhaps, if 4 pov-files are used, the total time could be about
2 hours, which would mean half an hour per file.

: Easier, unless one has decided to really forge it: have pov output
: a CRC as well as the statistics, and only accept captures of the
: pov output :-> If the CRC does not match, reject the entry :-<

  That kind of CRC can be easily faked: since the POV-Ray source code is
freely available, anyone can see how the CRC is calculated and compute a
"valid" one for whatever figures they like.

: Really, faked entries are of no concern for commonly available systems:
: when a lot of P2/400 MHz entries give a render time of x, an x/30 entry
: is obviously faked.

  Of course, but I suppose many people are tempted to shave a small but
unnoticeable amount off the real rendering times (e.g. if the real rendering
time was 35 minutes, they could just report 31 minutes and no one would
notice). Unfortunately many people are like this; even if they gain nothing
from it (not even their name anywhere), they still tend to "exaggerate" a
bit to look better.

: Personally, I won't bother to have trusted/untrusted entries, nor
: even to have any checking on values.

  Then there will just be a lot of bogus entries, as in the current
pov-bench, which effectively destroys its usefulness.
  Even with the "take the average" approach, having no checking at all
will still cause problems. If the real average for a certain CPU is 1 hour
and someone reports a rendering time of 1 second 20 times, that changes
the average substantially (with 100 honest one-hour entries, those 20
bogus entries would pull the average down from 60 minutes to about 50).
  So there should definitely be some kind of checking. And I still think
that the "trusted source" method is a good one.

-- 
#macro N(D,I)#if(I<6)cylinder{M()#local D[I]=div(D[I],104);M().5,2pigment{
rgb M()}}N(D,(D[I]>99?I:I+1))#end#end#macro M()<mod(D[I],13)-6,mod(div(D[I
],13),8)-3,10>#end blob{N(array[6]{11117333955,
7382340,3358,3900569407,970,4254934330},0)}//                     - Warp -

