A few months ago, we talked about the sources of error that can be found in the setup of a form testing system. We used the terms "accuracy" and "uncertainty." While these are often used interchangeably, every now and then it's good to review the precise meaning of these two words and how they affect the quality of measurements.
The internationally agreed upon definition of accuracy is "the closeness of the agreement between the result of a measurement and the true value of the measurement" (for example, a distance). The official definition also notes that "accuracy is a qualitative concept." This means it can be described as "high" or "low," for example, but should not be used quantitatively.
Often, though, it is used quantitatively, as if the definition read, "the difference between the measured value and the true value." This leads to statements like, "accurate to ±X." The problem with this unofficial definition is that it assumes the true value can be defined and known perfectly. However, even in the deepest, darkest laboratories, backed by some of the most powerful countries in the world, perfect values cannot be realized. It's not a question of money or technology. It's just physically impossible to define or make a perfect measurement.
That's why metrologists use the term "uncertainty." The concept of uncertainty accepts that no measurement can be perfect and is therefore defined as a "parameter, associated with the result of a measurement, that characterizes the dispersion of values that could reasonably be attributed to the measurand." Therefore, it is a range of values in which the value is estimated to lie. It does not attempt to define or rely upon a unique true value, but rather statistically estimates a range that value falls within.
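To make the idea of a "dispersion of values" concrete, here is a minimal sketch of how a standard uncertainty might be estimated from repeated readings of the same feature. The readings are invented for illustration; the statistics (sample standard deviation, standard uncertainty of the mean, and an expanded uncertainty with a coverage factor of k = 2, roughly a 95% interval) follow the usual approach.

```python
import statistics

# Hypothetical repeated measurements of the same dimension, in mm
readings = [10.002, 9.998, 10.001, 9.999, 10.000, 10.003]

n = len(readings)
mean = statistics.mean(readings)        # best estimate of the value
s = statistics.stdev(readings)          # sample standard deviation (dispersion)
u = s / n ** 0.5                        # standard uncertainty of the mean
U = 2 * u                               # expanded uncertainty, coverage factor k = 2

# The result is reported as a range, not a single "true" value:
print(f"{mean:.4f} mm \u00b1 {U:.4f} mm (k = 2)")
```

Note that the output is a range: the measurement is stated as an estimate plus or minus an interval the value is believed to lie within, which is exactly what the definition of uncertainty calls for.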
Basically, what this comes down to is that using the term "accuracy" for the quantitative characterization of a measuring instrument is not compatible with the official meaning. In short, the way we use the word "accuracy" is technically improper and significantly different from the proper metrology term of uncertainty.
So what's better: to be proper or to get the idea across?
No matter what you call it, it is important that quality managers understand the concept of uncertainty and how it relates to gaging performance. During the past 5 to 10 years, there has been a lot of work done to develop ways of quantifying the performance of gaging equipment. This can be a complex task, even for the simplest gage, because it's not only the gage that influences the measurement; it's the standard, the workpiece, the people and the environment. All of these are subsets that potentially add some source of error to the measurement. So if you are trying to persuade others that your measurement result is a good one, you need to use all means available to express the concept of uncertainty.
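One common way to express how those subsets (gage, standard, workpiece, people, environment) combine is an uncertainty budget, in which independent error contributions are combined by root-sum-of-squares. The contribution values below are invented for illustration; only the combination method is standard practice.

```python
import math

# Hypothetical standard uncertainty contributions, in micrometers
budget = {
    "gage repeatability": 0.8,
    "master/standard": 0.5,
    "workpiece form error": 1.2,
    "operator": 0.4,
    "temperature": 0.6,
}

# Independent contributions combine as the square root of the sum of squares
u_combined = math.sqrt(sum(u ** 2 for u in budget.values()))
U = 2 * u_combined  # expanded uncertainty, coverage factor k = 2

print(f"combined standard uncertainty: {u_combined:.2f} \u00b5m")
print(f"expanded uncertainty (k = 2): {U:.2f} \u00b5m")
```

A budget like this also makes the argument for your measurement visible: anyone can see which contributor dominates (here, the hypothetical workpiece form error) and where improvement effort would pay off.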
To start, you need to determine whether a gage's accuracy is likely to approximate the numbers given by the manufacturer. If the manufacturer has one concept of where and how the gage will be used and the purchaser has another, there will likely be a problem.
Bear in mind that the manufacturer's specification is often determined by measuring the most beautiful part (under ideal conditions) using the most advantageous part of the gage. You may be measuring rings that are rough, out of round and tapered. So you need to develop an understanding of what "certainty" of performance you can expect under your specific conditions. Questions to ask include: What factors affect performance? How often does the gage need to be calibrated to maintain its performance? What environmental conditions are required for the gage to perform to specification? Do operators need special training? In what condition must the parts be to make accurate measurements?