A common issue these days is that shops are trying to squeeze more performance out of their existing gaging. What were once ±0.002-inch tolerances have gone to ±0.0002- or even ±0.0001-inch tolerances. Many users think all they need to do to improve performance is replace their old, 0.0001-inch-resolution dial or digital indicator with a new, high-performance digital indicator that might read to 50 or even 10 microinches. Unfortunately, this replacement is likely to cause gage operators to lose faith in the gaging equipment they have used for the past 20 years or more.
It’s a whole other world when it comes to making measurements in the millionths. There are many hidden factors that must be considered when going from 100-microinch resolution to 50-, 20- or even 10-microinch resolution. Let’s look at a fairly common test case to see what pitfalls we might find.
The comparative snap gage is one of the most common gages. It is the second step up on the ladder of dimensional OD measurement. Compared to simple micrometers, it is an improvement because it measures quickly and eliminates operator influences common to handheld micrometers. The snap gage is virtually hands-off when making a comparative measurement. It has a very repeatable gaging force, and it uses its backstop to position the part at the same location.
Snap gages are typically equipped with a dial indicator that has a 0.001- or 0.0001-inch graduation. For ±0.002-inch tolerances, this snap gage is a good solution. However, the ±0.002-inch tolerances that were widely accepted in the past have now become ±0.0005- or even ±0.0001-inch tolerances. With a 0.0001-inch grad on the dial indicator, the tolerance band now spans only a handful of graduations, so most of the indicator range goes unused and it becomes difficult for the operator to judge whether the part is good or bad. Combine that with errors in the gage itself, and good measurements become hard to make.
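To put numbers on that squeeze, here is a rough sketch of how many indicator graduations span the full tolerance band in each case. The function name and values are illustrative, not taken from any particular gage:

```python
# Rough sketch of why tighter tolerances squeeze the usable indicator range.
# All values are illustrative, not taken from any particular gage.

def graduations_in_band(tol_plus_minus_in, graduation_in):
    """Whole graduations spanned by the full +/- tolerance band."""
    return (2 * tol_plus_minus_in) / graduation_in

# Yesterday's job: +/-0.002" tolerance on a 0.0001"-grad indicator.
print(graduations_in_band(0.002, 0.0001))   # ~40 graduations: easy to judge
# Today's job: +/-0.0001" tolerance on the same indicator.
print(graduations_in_band(0.0001, 0.0001))  # ~2 graduations: nearly unreadable
```

Forty graduations give the operator plenty of needle movement to judge against; two graduations leave almost nothing to see.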
Resolution alone will disqualify dial indicators from collecting data for pre-process or statistical process control, so digital indicators, with their higher resolution and enhanced capabilities, are the obvious path to improved gage performance. On paper, nothing could be simpler than fitting the latest high-resolution indicator to the existing gage.
Now, say a shop purchases a new indicator to go on a 10-year-old gage. All of a sudden, operators are bringing gages to the repair department because they don’t repeat like they used to. The operators may even question the performance of the indicator on the snap gage. In reality, the digital indicator is magnifying and exposing the errors of a gage that was never built to make such high-resolution measurements.
On a snap gage, the original anvil parallelism, perfectly acceptable for a 0.0001-inch-grad indicator, will begin showing up as repeat errors. Most snap gages with dial indicators hold a parallelism tolerance of 0.0001 inch across the whole surface of the anvil. On a dial indicator, you would be hard-pressed to see that movement within one graduation. With a digital indicator reading to 20 microinches, however, that same 0.0001-inch (100-microinch) parallelism can show as 5 flips of the last digit (or more) simply from placing the part at different locations on the anvils. That will make discriminating operators quickly lose confidence in their gage.
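The arithmetic behind those flips of the digit is straightforward: divide the parallelism error by the indicator resolution. A quick sketch, with both quantities expressed in microinches:

```python
# How a fixed 100-microinch (0.0001") anvil parallelism error appears at
# various indicator resolutions. "Counts" is the worst-case change in the
# displayed value from moving the part across the anvil face.
PARALLELISM_UIN = 100  # 0.0001 inch expressed in microinches

for resolution_uin in (100, 50, 20, 10):
    counts = PARALLELISM_UIN // resolution_uin
    print(f"{resolution_uin:3d} uin resolution: up to {counts} count(s) of change")
```

At 100-microinch resolution the error hides inside a single count; at 20 microinches it becomes 5 counts, and at 10 microinches, 10 counts, all from the same anvils.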
There are many more errors that could also be magnified with the higher resolution. These include part geometry, dirt, temperature, deflection due to gaging force, and tooling marks. However, there is only one way to ensure that this step up will work: do a thorough gage study first. Put your best readout on the gage. Do repeat tests, do GR&Rs, and analyze the results. If the gage cannot make the measurement, the analysis will be clear. It will prevent you from using bad gages on the shop floor only to bring them back later on.
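As one hypothetical form of such a repeat test, the spread of a set of repeat readings can be compared against the tolerance band. The readings and limits below are made-up illustrative numbers, and the 6-sigma spread is just one common yardstick for this comparison:

```python
# Quick repeatability check: does the gage's repeat spread fit the tolerance?
# Readings and tolerance are illustrative, not real shop data.
import statistics

readings_in = [0.49998, 0.50002, 0.49996, 0.50004, 0.50000,
               0.49994, 0.50006, 0.50000, 0.49998, 0.50002]
tolerance_band_in = 0.0002  # total band for a +/-0.0001-inch tolerance

sigma = statistics.stdev(readings_in)
spread_pct = 100 * (6 * sigma) / tolerance_band_in
print(f"6-sigma repeat spread consumes {spread_pct:.0f}% of the tolerance band")
# Here the spread exceeds the entire band: this gage cannot make the call.
```

If the analysis comes out like this, the gage has failed the study before it ever reaches the shop floor, which is exactly the point of doing the study first.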