POV-Ray : Newsgroups : povray.general : Re: Isn't it time for a new pov benchmark?
  Re: Isn't it time for a new pov benchmark?  
From: Charles
Date: 20 Aug 2001 19:35:41
Message: <3b819ecd@news.povray.org>
Greetings everyone (yes, I have been lurking out here for a bit... good
to be back after a long absence). The topic of conversation finally drew me
out of my shell for a brief de-lurk.

Ben has an idea here that I'd like to expand on, with a twist:
namely, iteration.

Thing is, instead of parsing and rendering a scene (or several scenes) for
a benchmark, I think a couple of the problems mentioned in this thread might
be solved by having built-in timed iteration loops. Yes, I know, none of us
wants to see bloat in POV itself in the official release, but perhaps an
unofficial version with the iteration patches built in could be distributed
for those who want to benchmark POV on their own systems. Picture it
like this:

First, the patch looks for a standard .pov file, "benchmark.pov" (if it doesn't
exist, the testing patch could autogenerate it to ensure you're using the
right scene). Next, the benchmark repeatedly parses that file (which would be
chosen to contain things that really give the parsing routines a workout)
for a fixed period of time (relatively short, but enough to get a
statistically valid reading), starting over each time the file is completed
(if it is!) so that even mega powerful machines will have something to do
for the full parse test. Benchmark the parsing capabilities of the machine
based on how many lines were parsed in the testing period.

Then choose a standard set of POV-Ray's internal functions to test, say:
* intersection testing of...
    a sphere,
    a julia fractal,
    a modestly complex isosurface,
    sequentially arranged transparent and/or reflective objects
* sampling of media,
* calculation of a few complex texture patterns
* (and whatever other features you felt it a good idea to include in a
benchmark).

Each of these functions would be called not in an actual scene, but *as if*
you were calling them for a scene, in a timed loop, with the number of
calls completed within the predetermined timeframe being recorded as a
benchmark for that function. This way, instead of using a factor such as
pixels per second, or the total amount of time for a given test scene, you
would measure how fast the test platform processes various commonly accessed
functions which contribute to the rendering of a scene. Among other things,
this would allow patch writers to plug their newer, presumably more
streamlined versions of existing functions into the iterative testing code
and see whether their revised source code (or choice of compiler) truly
made a difference to each given aspect of raytracing.

Presumably any characteristic of POV you wanted to benchmark could have
an iterative quantifier test designed and included in the patch, and since
the duration of each test is fixed regardless of the system you run it on,
you'd be measuring actual performance directly rather than trying to infer
performance from how long a particular scene took to parse/trace.


Charles
----
The Silver Tome ::  http://www.silvertome.com
0x06: I am NOT a RESOURCE. I am a free man!!
0x02: Aaaahahahahahahaha...
--
"Ben Chambers" <bdc### [at] hotmailcom> wrote in message
news:3b813537@news.povray.org...
> Great conversation, all interesting thoughts, but here's another idea:
> How about an iterative/recursive benchmark?  Write a small external utility
> to call POV with custom scenes.  If POV returns too fast (say, under 5
> minutes) then it creates a slightly more complex scene and tries it again.
> It could even spit out different types of scenes (one where a number of
> spheres are randomly distributed in a space, or a reflective-sphere floating
> over a checkered plane with an area light that keeps getting more and more
> lights in it, or one that gets more and more levels of textures, or inside a
> reflective box and keep bumping up the max_recursion level).  For really
> slow processors, it would stop reasonably soon and say "OK, that's the limit
> of your processor", or, for really fast processors, it might have to keep
> bumping things up.  The benchmark, then, would not be render time but scene
> complexity.
> Any thoughts?
>
> ...Chambers
>
>


