It's an old problem. You can't measure something, so you try to estimate
it. But how do you figure out /how accurate/ your estimate is?

Computer graphics is full of situations where you want to estimate the
integral of something. The way you usually do this is to sample it at
lots of points and then take the weighted sum. The more points you
sample, the better the estimate. But usually each sample costs computer
power, so you don't want to take millions of samples except when it's
really necessary. But how do you know if it's "really necessary"?
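
Just to make that concrete, here's roughly what I have in mind, sketched
in Python (estimate_integral and the numbers are mine, nothing standard):
sample the function at n random points, average, and compute the usual
standard error from those same samples. I /think/ that standard error is
the textbook way of putting a number on the accuracy, but that's exactly
the part I'm unsure about.

    import math, random

    def estimate_integral(f, a, b, n):
        """Plain Monte Carlo estimate of the integral of f over [a, b],
        plus a standard-error estimate computed from the same samples."""
        total = 0.0
        total_sq = 0.0
        for _ in range(n):
            y = f(random.uniform(a, b))
            total += y
            total_sq += y * y
        mean = total / n
        # sample variance of the individual samples
        variance = (total_sq / n - mean * mean) * n / (n - 1)
        estimate = (b - a) * mean                      # average value times interval width
        std_error = (b - a) * math.sqrt(variance / n)  # shrinks like 1/sqrt(n)
        return estimate, std_error

    # e.g. integrate sin(x) over [0, pi]; the exact answer is 2
    value, err = estimate_integral(math.sin, 0.0, math.pi, 10000)
    print(f"{value:.4f} +/- {err:.4f}")
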
It's a similar situation with benchmarking. You can run a benchmark and
time it. But what if Windows Update happened to run in the background
just at that moment? Or one of your cores overheated and changed clock
frequency? Hmm, better run the benchmark 3 times and take the average.
Sure, three flukes in a row are a lot less likely than one, but that's
still hardly what you'd call "impossible". People play the lottery with
worse odds than that!
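
Just to be concrete, the "run it 3 times and average" version looks
something like this in Python (run_benchmark here is a stand-in for
whatever you're actually measuring):

    import time

    def run_benchmark():
        # stand-in workload; replace with the real thing
        return sum(i * i for i in range(1_000_000))

    def average_time(repeats=3):
        """Time the workload `repeats` times and return the mean in seconds."""
        timings = []
        for _ in range(repeats):
            start = time.perf_counter()
            run_benchmark()
            timings.append(time.perf_counter() - start)
        return sum(timings) / len(timings)

    print(f"average of 3 runs: {average_time():.3f} s")

An average on its own still doesn't tell you whether one of those 3
runs was a fluke.
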
So maybe you run the benchmark 100 times. Now if all 100 results are
almost identical, you can be pretty sure your result is very, very
accurate. And if the results are scattered all over the place, you
should probably do a bazillion more runs and plot a histogram. Still,
how do you put a number on "how accurate" your results are?
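
My best guess so far, sketched in Python (with made-up timings, and a
1.96 that assumes the noise is roughly normal), is to quote the mean
plus or minus the standard error of the mean:

    import math, statistics

    def summarize(timings):
        """Turn repeated benchmark timings into a mean and a rough
        95% confidence half-width for the true average time."""
        n = len(timings)
        mean = statistics.mean(timings)
        stdev = statistics.stdev(timings)  # sample standard deviation
        std_error = stdev / math.sqrt(n)   # uncertainty of the mean itself
        return mean, 1.96 * std_error      # ~95% interval, if the noise is normal-ish

    runs = [1.02, 0.98, 1.05, 0.99, 1.01]  # made-up timings in seconds
    mean, half_width = summarize(runs)
    print(f"mean = {mean:.3f} s +/- {half_width:.3f} s")

But is that actually valid when the "noise" is things like Windows
Update kicking in, which is hardly a nice bell curve? That's the part
I don't know.
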
Does anybody here know enough about statistics to come up with answers?