POV-Ray : Newsgroups : povray.general : Skyvase Rendering Times
  Re: Smart Benchmark: Skyvase Rendering Times  
From: Mark Wagner
Date: 21 Aug 1999 01:06:04
Message: <37be33bc@news.povray.org>
Matt Swarm wrote in message <37bd15a0@news.povray.org>...
>
>
>>>I propose a Smart Benchmark, which senses how long a routine is taking, and
>>>terminates after a set period, projecting the time for completion of the
>>>'full' program-- whatever the machine.   All this code is (or SHOULD be)
>>>deterministic, nonrandom, so extrapolation should be straightforward.
>>>Maybe an animation, with option to deduct for disk access times.
>>
>>
>>How about an image where the render time scales linearly with image area?
>>This way, you simply render the image at what you consider an appropriate
>>resolution for your machine (320x240 for a 486/33, 320000x240000 for a
>>supercomputer, etc) and then apply a correction factor for the area (the
>>486/33 in this example would have a correction factor of x1000000)?
>>
>>Mark
>
>
>Interesting notion- certainly simpler than a tricky program.  How might this
>work?
>
>We have two machines, say, Trusty Rusty (486)  and the Swarm Machine I (20
>500 MHz Celerons).
>
>Q:   The clock speeds on Swarm add up to about 300 times Rusty.  Which image
>size would we select, perhaps, for the Swarm Machine?


This is a matter of personal preference, depending on many factors,
including how fast the computer is and how much time you can afford to have
the computer spend rendering the image.  However, I would recommend an image
size that would take at least five minutes to render, and preferably longer.

>Q:   What correction factor would we then apply?


Multiply the resulting time by (baseline image area/rendered image area) to
get the time corrected for the image area.
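The arithmetic above can be sketched in a few lines of Python (the function name and the 320x240 baseline are my own illustrative choices, not from the thread):

```python
# Correct a measured render time for image area: scale the measured
# time by (baseline image area / rendered image area).

def corrected_time(measured_seconds, rendered_w, rendered_h,
                   baseline_w=320, baseline_h=240):
    """Return the render time scaled to the baseline image area."""
    baseline_area = baseline_w * baseline_h
    rendered_area = rendered_w * rendered_h
    return measured_seconds * baseline_area / rendered_area

# A 640x480 render has four times the area of a 320x240 render,
# so 400 seconds corresponds to a baseline time of 100 seconds.
print(corrected_time(400, 640, 480))  # 100.0
```

This only gives comparable numbers across machines if render time really does scale linearly with area, which is the point addressed in the next question.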

>Q:   While it's true that image 1000X by 1000Y is 1000000 times greater in
>area than image XY, is it also true that a THEORETICAL idealized machine
>rendering in POV will also be 1000000 times longer?   It's true that area
>increases by the square-- what worries me here is that (so far as I know) we
>are doing some computations in THREE dimensions.  Might not some portions of
>the image factor up by the CUBE of the increase instead of merely the
>square?  Or worse?


If antialiasing is not used, then for sufficiently large images the only
change is in the number of rays traced, and since adjacent pixels fire rays
that follow similar paths, they take similar amounts of time to trace, so the
total time scales with the pixel count.  This is only true if the parsing
time is negligible for all machines involved, or is not counted in the total
time, as parsing time is independent of image resolution.

Mark

