Machined Part Geometry Measurement
Uncertain about uncertainty? Having trouble remembering the difference between accuracy and precision? Read on to review key metrology terms relevant to the ISO Guide to the Expression of Uncertainty in Measurement (ISO GUM).
Metrology, or the science and application of measurement, is a value-added process in manufacturing environments. Using geometric measurement systems, such as coordinate measuring machines and structured light scanners, for example, we determine if machined component dimensions meet design tolerances. Similarly, we use optical and stylus surface measurement systems to confirm that the surface roughness meets design requirements for assembly and function. While these instruments can provide accurate results, there is always uncertainty associated with a measurement result. As stated in the ISO Guide to the Expression of Uncertainty in Measurement (ISO GUM), it is our responsibility to report not only the measurement result but also a “quantitative indication of the quality of the result” or uncertainty. This provides the basis for commerce in manufacturing.
A good starting point for implementing the ISO GUM guidance is a review of key metrology terms and definitions.
- The accuracy (of a measurement) is the closeness of agreement between the result of a measurement and the (true) value. It’s important to note that accuracy is a qualitative concept. In other words, numbers should not be associated with it; numbers should be associated with measures of uncertainty instead. We often find numerical accuracy claims in marketing literature, but by international agreement this usage is incorrect.
- The error (of measurement) is the result of a measurement minus the (true) value. Because the true value cannot be determined, in practice a “conventional true value” is sometimes used. It’s a bit confusing to state that we cannot know the error because we do not know the true value of a measurement, but this motivates our next term: uncertainty.
- The uncertainty (of a measurement) is a parameter that characterizes the dispersion of the values that could reasonably be attributed to the measurement result. Uncertainty can be quantified using statistical methods or from assumed probability distributions based on experience or other information. It is appropriate to assign numerical values to uncertainty.
- Resolution is the minimum detectable quantity. On a machining center, for example, this is the smallest programmable motion.
- Repeatability (of results of a measurement) is the closeness of agreement between the results of successive measurements of the same parameter carried out under the same conditions of measurement (that is, the same procedure, operator, instrument and location over a short time). Precision has the same meaning as repeatability; it should not be used interchangeably with accuracy, although this is often the case.
- Reproducibility (of results of a measurement) is the closeness of agreement between the results of measurements of the same parameter carried out under changed conditions of measurement, such as a new operator. You may have performed a gage R&R study. The intent of these studies is to determine both repeatability and reproducibility. Based on the standards, however, a gage or instrument is not assigned these values directly because it can be used to measure many different parts and geometries. Instead, only a measurement has a repeatability, reproducibility and uncertainty.
The relationships between the measured value, true value, error and uncertainty are shown graphically in Figure 1. The measured value lies within the uncertainty interval at a stated level of confidence. When we describe the measurement uncertainty, we are estimating the standard deviation that we’d expect from that measurement carried out using the selected measurement device under the specified conditions.
Statistics can be used to characterize the expected scatter for n measurements of a quantity x. The relationships between the samples xi, the mean value µ and the distribution of the samples are again depicted graphically in Figure 2, where a normal distribution is shown. The equations for the mean value, variance and standard deviation are also provided.
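As a minimal sketch of the statistics described above, the sample mean, variance and standard deviation for n repeated measurements can be computed directly; the measurement values below are purely illustrative.

```python
import statistics

# Hypothetical repeated measurements of a machined length, in mm
# (values chosen only for illustration).
x = [25.012, 25.008, 25.011, 25.009, 25.010]

n = len(x)
mean = sum(x) / n                                  # sample mean
var = sum((xi - mean) ** 2 for xi in x) / (n - 1)  # sample variance
std = var ** 0.5                                   # sample standard deviation

# Cross-check against the standard library implementations.
assert abs(mean - statistics.mean(x)) < 1e-12
assert abs(std - statistics.stdev(x)) < 1e-12
```

The standard deviation of such a set of repeated measurements is exactly what a Type A uncertainty evaluation, discussed below, is based on.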
Some measurements are based on multiple inputs. In this case, we want to determine the combined effect of the individual inputs on the uncertainty in the measurement result. For example, suppose we want to determine the density, ρ, of an aluminum block. We need both the mass, m, and the volume, V, of the block. If we calculate the volume from measurements of the three side lengths, L1, L2 and L3, this gives four inputs to the density calculation.
The combined standard uncertainty in the density, uc(ρ), depends on the uncertainties in the mass measurement and the three length measurements. It is determined using a first-order Taylor series expansion of the density equation. The density and combined standard uncertainty equations are shown in Figure 3.
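Written out (Figure 3 itself is not reproduced here), the density and the first-order combined standard uncertainty take the standard GUM form:

```latex
\rho = \frac{m}{L_1 L_2 L_3},
\qquad
u_c^2(\rho)
= \left(\frac{\partial \rho}{\partial m}\right)^{2} u^2(m)
+ \sum_{i=1}^{3} \left(\frac{\partial \rho}{\partial L_i}\right)^{2} u^2(L_i)
```

Each term is the squared sensitivity of the density to one input multiplied by that input's variance, consistent with the description that follows.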
The combined standard uncertainty equation is composed of four separate terms, each of which is the product of the square of the partial derivative (the sensitivity) and the square of the measurement uncertainty (the variance) for one input. The partial derivatives are calculated from the mean values of the inputs. The measurement uncertainties for the inputs can be determined from the standard deviation of repeated measurements (Type A evaluation) or can be based on other information, such as a value provided by the manufacturer (Type B evaluation).
The four separate terms can be compared to determine which input has the largest effect on the combined standard uncertainty. If the combined standard uncertainty is larger than desired, the largest term can be used to determine where to invest in improved measurement devices. For the density example, if the first term in the combined standard uncertainty equation is the largest, then it would make sense to purchase a scale with reduced measurement uncertainty.
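The density example can be sketched numerically. All mean values and standard uncertainties below are assumed for illustration only; the structure — sensitivities from the partial derivatives of ρ = m/(L1·L2·L3), one squared term per input, and a comparison to find the dominant term — follows the procedure described above.

```python
import math

# Hypothetical mean values and standard uncertainties for the four inputs
# (aluminum block; SI units; all numbers are illustrative assumptions).
m, u_m = 0.675, 0.0005      # mass (kg) and its standard uncertainty
L1, u_L1 = 0.050, 0.00002   # side lengths (m) and their uncertainties
L2, u_L2 = 0.050, 0.00002
L3, u_L3 = 0.100, 0.00005

V = L1 * L2 * L3            # volume from the three side lengths
rho = m / V                 # density (kg/m^3); about 2700 for aluminum

# Sensitivities: partial derivatives of rho evaluated at the mean values.
d_rho_dm = 1.0 / V          # d(rho)/dm
d_rho_dL1 = -rho / L1       # d(rho)/dL1, since rho = m/(L1*L2*L3)
d_rho_dL2 = -rho / L2
d_rho_dL3 = -rho / L3

# Each term: (sensitivity)^2 * (input standard uncertainty)^2.
terms = {
    "mass": (d_rho_dm * u_m) ** 2,
    "L1": (d_rho_dL1 * u_L1) ** 2,
    "L2": (d_rho_dL2 * u_L2) ** 2,
    "L3": (d_rho_dL3 * u_L3) ** 2,
}

u_c = math.sqrt(sum(terms.values()))  # combined standard uncertainty
largest = max(terms, key=terms.get)   # input that dominates u_c
```

With these assumed numbers the mass term dominates, so a scale with lower measurement uncertainty would reduce uc(ρ) the most, matching the reasoning above.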