On Sun, 04 Jan 2009 23:05:16 -0500, clipka wrote:
> (2) Again given allowed percentage of failure p and a confidence q, a
> number of samples taken already N, and a number of failures among them
> M, the question is: Can I stop testing here, because I (a) can be sure
> enough I'll have too many failures and should better optimize my
> production, (b) can be sure enough my level of quality is ok, or (c) do
> I need to continue taking samples before I decide?
Maybe a bit late now, but...
Using the normal approximation to the binomial distribution, you can be
sufficiently sure that no more samples need to be taken when the number
of failures k satisfies
k <= N * p - 0.5 - 1.645 * sqrt(N * p * (1-p))
assuming a significance level of 0.05. (case b)
With this test you keep the probability of deciding the quality is ok
when, in fact, it is bad as low as possible.
You can be sufficiently sure that your quality is too bad if the number
of failures k satisfies
k > N * p - 0.5 + 1.645 * sqrt(N * p * (1-p))
again at a significance level of 0.05. (case a)
With this test you keep the probability of deciding the quality is bad
when, in fact, it is ok as low as possible.
Otherwise you should take more samples. (case c) However, it may turn out
that the actual quality is very close to p (not really that bad but not
good enough) and you'll just take samples forever!
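The two threshold tests and the three cases above can be sketched as follows; this is a minimal illustration, and the function name and return labels ('ok', 'bad', 'continue') are my own, not part of any standard library:

```python
import math

def decide(N, M, p, z=1.645):
    """Sequential quality check via the continuity-corrected normal
    approximation to the binomial distribution.

    N: number of samples taken so far
    M: number of failures observed among them
    p: allowed failure rate
    z: normal quantile for the significance level (1.645 for 0.05)

    Returns 'ok' (case b), 'bad' (case a), or 'continue' (case c).
    """
    half_width = z * math.sqrt(N * p * (1 - p))
    if M <= N * p - 0.5 - half_width:
        return "ok"        # confident the failure rate is acceptable
    if M > N * p - 0.5 + half_width:
        return "bad"       # confident the failure rate is too high
    return "continue"      # not enough evidence yet; take more samples
```

For example, with p = 0.05 and N = 1000 samples, the expected failure count is 50, and 10 observed failures fall below the lower threshold (case b), while 70 exceed the upper one (case a); anything in between means sampling continues.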
The value 1.645 was taken from a table of the normal distribution. Note
that this approximation can be very inaccurate for a low number of
samples or for values of p very close to 0 or 1. p should not change
while you take samples.