It is generally understood that the results of precision measurements, such as from a form measuring instrument, are subject to a number of environmental influences, such as shock, vibration and temperature deviations. What is less understood, however, is that the form measuring machine itself can also influence the measurement results. For example, worn probes, excess bearing clearance, natural vibrations and other factors can degrade the overall accuracy of a measurement. The combined effect of these measuring-system-based factors on the assessment of form is expressed as the "measuring uncertainty."
Some suppliers of precision parts are required to take the measuring uncertainty into account before delivering their products to their customers. Here's why. Let's say the specification for radial runout of a shaft is called out in the tolerance at 3 micrometers. From documentation, we then discover that the uncertainty of the measuring instrument amounts to ±1 micrometer. Only shafts with a measured radial runout of less than 2 micrometers (the 3-micrometer tolerance minus the 1-micrometer uncertainty) can be accepted with confidence. Once the measured radial runout values reach or exceed 2 micrometers, you can no longer exclude the possibility that the inspected workpieces are out of tolerance.
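The acceptance logic described above can be sketched as a simple guard-band check. The function name and values below are illustrative, taken from the shaft example (3 µm tolerance, ±1 µm uncertainty):

```python
# Assumed values from the shaft example in the text.
TOLERANCE_UM = 3.0      # specified radial runout tolerance (µm)
UNCERTAINTY_UM = 1.0    # measuring uncertainty of the instrument (µm)

def accept(measured_runout_um: float) -> bool:
    """Accept a shaft only if the measured runout plus the
    measuring uncertainty still lies inside the tolerance."""
    return measured_runout_um + UNCERTAINTY_UM < TOLERANCE_UM

print(accept(1.8))  # True: 1.8 + 1.0 = 2.8 µm, inside tolerance
print(accept(2.0))  # False: conformance can no longer be guaranteed
```

In other words, the uncertainty acts as a guard band: the instrument's ±1 µm effectively shrinks the acceptance zone from 3 µm to 2 µm.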
For this reason, it's not too surprising that measurement uncertainties are disclosed by facilities that calibrate measurement standards, such as gage blocks, master rings and discs. But there are also cases in which the measuring uncertainties of instruments used for the inspection of products come under scrutiny. When you have determined the uncertainty of an inspection system for production parts, you can then determine what part of the tolerance band is left over for actual production. The drawing tolerances, which are often extremely close, are narrowed even more if the measuring uncertainty is too high. The upshot of this is that imprecise measuring devices increase production effort and, therefore, cost.
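The "left over" tolerance band can be computed directly. For a two-sided tolerance, the measuring uncertainty is subtracted at both limits; the values below are illustrative, not from the text:

```python
def production_tolerance(drawing_tolerance_um: float,
                         uncertainty_um: float) -> float:
    """Tolerance band remaining for production after the measuring
    uncertainty is subtracted at both limits of a two-sided tolerance."""
    return drawing_tolerance_um - 2.0 * uncertainty_um

# Example: a 6 µm drawing tolerance measured with ±1 µm uncertainty
# leaves only 4 µm for the production process itself.
print(production_tolerance(6.0, 1.0))  # 4.0
```

This makes the cost argument concrete: halving the measuring uncertainty to ±0.5 µm would return a full micrometer of tolerance to the process.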
The present internationally approved standard for the determination of measuring uncertainty is the GUM method (Guide to the Expression of Uncertainty in Measurement). The first procedural step in determining uncertainty is the determination of all the influence quantities. The complexity of a form measurement becomes clear during this first step. A partial list of influencing factors that may cause measurement errors includes environmental effects such as shock, vibration and temperature deviations; worn probes and stylus ball form deviations; excess bearing clearance; natural vibrations of the instrument; and the uncertainty of probe calibration.
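Once each influence quantity has been assigned a standard uncertainty, the GUM procedure combines them. A minimal sketch of that combination step, assuming uncorrelated inputs and sensitivity coefficients of 1 (the contribution values here are purely illustrative):

```python
import math

def combined_uncertainty(standard_uncertainties_um):
    """Combined standard uncertainty: root sum of squares of the
    individual standard uncertainties (uncorrelated inputs assumed)."""
    return math.sqrt(sum(u ** 2 for u in standard_uncertainties_um))

def expanded_uncertainty(standard_uncertainties_um, k=2.0):
    """Expanded uncertainty U = k * u_c; k = 2 gives roughly
    95 % coverage for a normal distribution."""
    return k * combined_uncertainty(standard_uncertainties_um)

# Illustrative contributions (µm): spindle error, probe, temperature
u_inputs = [0.3, 0.2, 0.1]
print(round(expanded_uncertainty(u_inputs), 3))  # 0.748
```

Note that the root-sum-square combination means the largest contributor dominates, which is why the choice of instrument, discussed next, matters so much.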
The choice of measuring instrument has an influence on the number of factors that determine the measuring uncertainty. For example, compare a sample roundness measurement on a form measuring instrument with a rotating measuring axis and a roundness measurement on a 3D coordinate measuring instrument. On an instrument with a rotating measuring axis, the influence of stylus ball form errors on the measured result is negligible. This does not apply to 3D coordinate measuring instruments, however. There, the stylus ball deviations have to be calibrated thoroughly before measurement. For this reason, the degree of uncertainty for roundness measurements on a 3D coordinate measuring instrument depends, in large part, on the uncertainty of probe calibration (into which the measuring uncertainty of the calibration standard enters, among other factors).
All of these factors, which may significantly influence the uncertainty of form measurements, have to be estimated on the basis of concrete data for the expected value and the standard deviation.