  Re: Ask the engineer  
From: clipka
Date: 30 Jan 2009 13:35:01
Message: <web.49834846641dffb7bdc576310@news.povray.org>
Invisible <voi### [at] devnull> wrote:
> When you look at product datasheets, sometimes they quote an "MTBF"
> figure. (As in, Mean Time Between Failures.) The question is... do these
> numbers mean anything? Are they a product measurement, or a design goal?
> (I.e., do you *design* a product to have an MTBF of over 100,000 hours?
> Or do you design a product and then *measure* what its MTBF actually
> is?) What exactly is the mean taken over?

I guess this depends on how well it is done.

Small companies that can't afford a dedicated quality department may just pick a
bunch of samples fresh from production, run them for some time, count the number
of failures, and give the MTBF as <test duration> * <number of samples> /
<number of failures>.
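
In code, that naive estimate is a one-liner. A minimal Python sketch (the
sample count, test duration and failure count are invented for illustration):

# Naive MTBF estimate: accumulated device-hours divided by failures seen.
samples = 50        # units pulled fresh from production (hypothetical)
test_hours = 1000   # how long each unit was run, in hours (hypothetical)
failures = 2        # failures observed during the test (hypothetical)

mtbf = test_hours * samples / failures
print(f"Naive MTBF estimate: {mtbf:.0f} hours")  # 25000 hours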

The proper process, however, should start with experience from previous products
and the design changes implemented in the current one, to specify failure modes
(in what ways can it fail?), guesstimate failure mode probabilities (how likely
is it to fail this way?), guesstimate probability distributions (how long will
it take to fail this way? e.g. failure modes due to production problems are
often most likely to show up during the first hours of use, while failure modes
due to wear often increase in likelihood over time), and other such details.
Also, the risk of specifying a wrong MTBF in the datasheet is an issue (how sure
do we want to be about the MTBF?).
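
The classic way to capture that time dependence is a hazard (instantaneous
failure rate) function. As one common choice - not necessarily what any given
manufacturer uses - a Weibull model covers both regimes via its shape
parameter. A minimal Python sketch with invented parameter values:

def weibull_hazard(t, shape, scale):
    # Instantaneous failure rate at age t (hours) under a Weibull model.
    return (shape / scale) * (t / scale) ** (shape - 1)

for t in (10, 100, 1000, 10000):
    infant = weibull_hazard(t, shape=0.5, scale=50_000)   # shape < 1: early failures, rate falls
    wearout = weibull_hazard(t, shape=3.0, scale=50_000)  # shape > 1: wear-out, rate rises
    print(f"t={t:>6} h  early-life rate={infant:.2e}/h  wear-out rate={wearout:.2e}/h")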

From all this information, a test plan is devised (using accepted best practice
and certain statistical formulae; a sketch of the sample-size arithmetic follows
the list) specifying things like:

- How do we test?
- How do we detect the individual failure modes?
- How long do we test each sample?
- How do we pick samples to test?
- What MTBF do we hope to see confirmed?
- How many samples do we have to test for confirmation?
- How many failures do we accept *at most*, (a) in total, and (b) per failure
mode, to consider the expected MTBF confirmed?
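
To give a flavour of the last two points: under the common simplifying
assumption of a constant failure rate (exponential lifetimes), the total test
time needed by a time-terminated demonstration test falls out of a chi-squared
quantile. A minimal Python sketch, assuming scipy is available (the target MTBF
and confidence level are invented):

from scipy.stats import chi2

def required_device_hours(target_mtbf, confidence, max_failures):
    # Total device-hours needed to demonstrate target_mtbf at the given
    # one-sided confidence, allowing at most max_failures failures.
    return target_mtbf * chi2.ppf(confidence, 2 * (max_failures + 1)) / 2

print(required_device_hours(100_000, 0.90, 0))  # ~230,000 h: e.g. 230 units for ~1000 h each
print(required_device_hours(100_000, 0.90, 2))  # ~532,000 h: tolerating failures costs test time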

In theory, such a test plan is (typically) only suitable to confirm (or, more
precisely, give a certain confidence in) a certain lower limit of the MTBF. If
the results fail to give the desired confidence in the expected MTBF, a new
MTBF guesstimate must be made and the tests re-run (unless the test results
can be mapped to the new test plan in a mathematically "clean" way). On the
other hand, if the results indicate that the true MTBF is probably
significantly higher, it may still be decided to go with the more conservative
original guesstimate, because gaining enough confidence in the indicated MTBF
may require much more exhaustive (and therefore more expensive) tests. Both
cases should also give rise to the question of what actually went wrong - whether
the error is actually in the MTBF guesstimate or in the tests - and why.
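
The same chi-squared machinery can be run in reverse: given the observed test
results, compute the MTBF actually demonstrated at a chosen confidence level.
Again a minimal Python sketch under the same constant-failure-rate assumption,
with invented numbers:

from scipy.stats import chi2

def mtbf_lower_bound(total_device_hours, failures, confidence):
    # One-sided lower confidence bound on MTBF from a time-terminated test.
    return 2 * total_device_hours / chi2.ppf(confidence, 2 * (failures + 1))

# 200 units * 2000 h each = 400,000 device-hours, 3 failures observed:
print(mtbf_lower_bound(400_000, 3, 0.90))  # ~60,000 h demonstrated at 90% confidence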

All in all, quality control is a science in itself...

