The purpose of a GR&R study is to measure the variability induced in measurements by the gage or measuring system, and then compare it to the total variability allowed by the part tolerance to determine if the measuring system is adequate for the task.
We all know that the measuring system is more than just the gage. From our SWIPE discussions we know the whole process consists of the Standard (master), the Workpiece, the Instrument (gage), the People and the Environment. Besides these key components, a GR&R study also takes into account:
• The test method. How the gage is set up, the instructions for using it and how the data are to be collected.
• The part specification. This provides the measurement reference value. While the engineering tolerance does not affect the measurement itself, it is what the gage performance must be compared against.
It’s important to remember what a GR&R does and what it does not do. First, the test will help you determine the repeatability of the gage. This is the variation in measurements taken by the same person, making the same measurement on the same part under the same conditions. It will also determine the reproducibility of the gage. This is the variability introduced by a number of different operators when measuring the same parts.
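As a rough illustration of how these two components can be separated, the sketch below pools the within-cell variation (same operator, same part) for repeatability and compares operator averages for reproducibility. The data and names are invented, and a real study would use the AIAG range or ANOVA method rather than this simplified calculation:

```python
import statistics as stats

# Hypothetical data: two operators, each measuring the same three parts
# three times (values invented for illustration only).
measurements = {
    "operator_A": {"part1": [10.01, 10.02, 10.00],
                   "part2": [10.11, 10.12, 10.10],
                   "part3": [9.91, 9.92, 9.90]},
    "operator_B": {"part1": [10.05, 10.06, 10.04],
                   "part2": [10.15, 10.16, 10.14],
                   "part3": [9.95, 9.96, 9.94]},
}

# Repeatability: pooled within-cell variation -- the same person measuring
# the same part under the same conditions.
cell_variances = [stats.variance(trials)
                  for parts in measurements.values()
                  for trials in parts.values()]
repeatability_sd = (sum(cell_variances) / len(cell_variances)) ** 0.5

# Reproducibility: variation between the operators' overall averages
# across the same set of parts.
operator_means = [stats.mean(t for parts in op.values() for t in parts)
                  for op in measurements.values()]
reproducibility_sd = stats.stdev(operator_means)

# Combined R&R: the two components added in quadrature.
grr_sd = (repeatability_sd**2 + reproducibility_sd**2) ** 0.5
print(f"repeatability sd:   {repeatability_sd:.4f}")
print(f"reproducibility sd: {reproducibility_sd:.4f}")
print(f"combined GR&R sd:   {grr_sd:.4f}")
```

Note that the two components combine in quadrature because they are treated as independent sources of variation.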
The GR&R study determines the precision of a measuring system. It will not, however, determine the accuracy of the system. Precision has to do with two things: the ability of a measurement to be reproduced consistently and the resolution of a measurement, or the number of significant digits to which a value may be measured reliably.
Accuracy, on the other hand, has to do with the ability — mechanical or otherwise — of a gage or measuring instrument to achieve exactness. To take a simple example, say you measure the diameter of a bearing by holding a stick up to it and marking the edge point with a pencil. If done carefully, this measurement is accurate, or exact, to within the thickness of your pencil mark. If you hold the stick up to a steel rule you will be able to apply some scale to the measurement. The gradations on this ruler will reflect its resolution: say it reads 3-3/16" plus a bit. That “bit” falls between gradations and is beyond the resolution of the rule.
Measure the mark on the stick with a digital micrometer and you will be able to resolve the measurement to three or four decimal places. You will probably even be able to determine the thickness of your original pencil line. However, the measurement itself will be no more accurate unless you go back and measure the bearing diameter with a more accurate instrument.
This is a key point and one that many people using GR&Rs forget. They may spend a lot of money and time achieving and verifying a good GR&R result, but if the gage is not accurate then its usefulness comes into question. When qualifying a gage, it is common to specify a separate test to quantify its accuracy.
For GR&R purposes, the precision of a measurement process (its R&R value) is expressed as a percentage of the total specified part tolerance. An excellent gage would have a P/T (precision/tolerance) ratio of 10 percent or less. This means that the variability introduced by the measuring process takes up no more than 10 percent of the allowable part tolerance. A 10-30 percent P/T ratio would reflect a marginal process, and one more than 30 percent would probably not be acceptable. This is because the gaging process is using up too much of the part tolerance for classification, and the chances of accepting bad parts become too risky.
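The P/T calculation itself is simple, as the sketch below shows. The numbers are invented for illustration, and the measurement-system spread is taken as six standard deviations of the combined R&R value (some references use 5.15 sigma instead):

```python
# Hypothetical numbers for illustration only.
grr_sd = 0.005                        # assumed combined R&R standard deviation
upper_spec = 10.05                    # assumed upper specification limit
lower_spec = 9.95                     # assumed lower specification limit
tolerance = upper_spec - lower_spec   # total allowable part tolerance

# P/T ratio: the 6-sigma spread of the measurement system expressed
# as a percentage of the part tolerance.
pt_ratio = 100 * (6 * grr_sd) / tolerance
print(f"P/T ratio: {pt_ratio:.1f}%")

# Classify using the thresholds discussed above.
if pt_ratio <= 10:
    verdict = "excellent"
elif pt_ratio <= 30:
    verdict = "marginal"
else:
    verdict = "not acceptable"
print(verdict)
```

With these assumed numbers the gage consumes 30 percent of the tolerance, putting it right at the boundary of the marginal range.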
This is especially true with some of today’s very tight tolerances. The capabilities of some gages, no matter how good the process, may use a relatively high percentage of the tolerance. There may simply be no more economical or practical gaging alternative given the money, people, time and environment available.