Christian Froeschlin <chr### [at] chrfrde> wrote:
> > The problem I need to solve is probably quite similar to those famous quality
> > control jobs, where you produce (or buy) something and need to take samples, to
> > make sure with a certain confidence that only a certain maximum percentage of
> > the products fails to meet the specifications.
>
> Actually the trend is to ensure 100% quality control
> using automated inspection, but that's not what you
> want to hear ;)
... and it probably won't work in practice for things like, e.g., how many
amps a fuse can take before it blows :)
> > (1) Given an allowed percentage of failure p (say, 10% may be faulty), and a
> > desired test confidence q (say, I want to be 95% sure that those 10% are met),
> > how many samples do I need to take (need a formula though, not just values for
> > this case)?
>
> I think such a statement usually says "we have to take x samples
> *out of our total of N objects* to be 95% sure that ...". Do you
> have some N? I suppose it might be something like the estimated
> number of samples required by the final render pass.
Hum... sounds a bit familiar, so you may be right - but... well, unfortunately I
don't have such an N.
What I can tell for sure is the number of pixels that will be rendered; I can
also guesstimate the number of "radiosity queries" triggered per ray, by
measuring the average during pretrace. Unfortunately I can't correlate the
number of pixels with the number of rays by any means, due to stuff like focal
blur or adaptive antialiasing.
Still there must be some mathematical solution to this problem...?!
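For the record, there is a standard result that seems to fit the original question (1), at least under one simplifying assumption that is not stated in the thread: you draw n independent samples (population large enough that drawing without replacement doesn't matter, so no N is needed) and *all of them pass*. Then the smallest n for which passing gives you confidence q that the true failure rate is at most p solves (1 - p)^n <= 1 - q. A minimal sketch:

```python
import math

def samples_needed(p, q):
    """Zero-failure acceptance sampling: smallest n such that,
    if all n independent samples pass, we can claim with
    confidence q that the true failure rate is at most p.
    Solves (1 - p)**n <= 1 - q for n."""
    return math.ceil(math.log(1.0 - q) / math.log(1.0 - p))

# The example from the thread: allow 10% faulty, want 95% confidence.
print(samples_needed(0.10, 0.95))  # -> 29 samples
```

If some failures among the n samples are to be tolerated, the math changes (you need the cumulative binomial distribution rather than this closed form), so treat this as the simplest case only.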