A question I am frequently asked is, "How accurate is that gage?" I am usually tempted to say something like, "Super accurate" or "So accurate you wouldn't believe it!" But I don't. Instead, I take a deep breath and give my questioner a couple of paragraphs' worth of information, then watch his jaw drop because he was only expecting a few words.
It just can't be helped. Accuracy is not a simple subject. You have to come at it from several directions before you can nail it down. To have a meaningful conversation with a metrologist or gage supplier about accuracy, without being bamboozled, you have to understand some basic terms that relate to the concept of accuracy. Here's a crash course.
Accuracy: This murky little term deals with several characteristics of gage performance. One of the best definitions of accuracy I know is "the relationship of the workpiece's real dimensions to what the gage actually says." It's not a quantifiable definition, but it does provide a framework for some of the following characteristics.
Repeatability: the ability of a gage or gaging system to produce the same reading every time the same dimension is measured. A gage can be extremely repeatable and still highly inaccurate. A gage with poor repeatability will, on occasion, produce an accurate reading, but it is of little value because you never know when it is right.
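The point above can be sketched with a few lines of Python. The numbers and function names here are invented for illustration: a set of readings can cluster tightly (good repeatability) while sitting well away from the true dimension (poor accuracy).

```python
# Hypothetical illustration: repeatability as the spread of repeated
# readings of one fixed dimension. Data and names are invented.
from statistics import mean

def repeatability_range(readings):
    """Spread (max - min) of repeated readings of the same dimension."""
    return max(readings) - min(readings)

# These readings repeat within 0.0002 in. of each other, yet every one
# sits about 0.005 in. away from the true dimension of 1.0000 in.
readings = [1.0051, 1.0052, 1.0050, 1.0051, 1.0052]
spread = repeatability_range(readings)  # tight spread: repeatable
bias = mean(readings) - 1.0000          # large offset: inaccurate
```

A tight spread with a large bias is exactly the "extremely repeatable and still highly inaccurate" case: every reading agrees, and every reading is wrong.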
Stability: Related to repeatability, it refers to a gage's consistency over time. The gage may repeat well for the first 15 uses, but how about the first 150? All gages are subject to sources of long-term wear and degradation.
Resolution: the degree to which the analog scale of an indicating device permits the user to distinguish meaningfully between adjacent graduations of the quantity indicated. A machinist can generally estimate the pointer's position between two graduations on a dial but usually not to the resolution of the nearest tenth of a graduation. To prevent users from making guesstimations between the lines, it is generally advisable to select gages that discriminate to a finer level than the measurement units required by the application. Measurements are generally considered reliable only to the level of plus or minus one unit of discrimination. For example, if measurements to 0.001 inch are required, the indicator should discriminate to 0.0005 inch or better. For a digital gage, the resolution is the change in the indication when the digit farthest to the right of the decimal point changes one step.
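The rule of thumb above can be reduced to simple arithmetic. This is a hypothetical helper, not a standard formula; it just encodes the guideline that a gage should discriminate to a finer level than the measurement unit required, since readings are reliable only to plus or minus one unit of discrimination.

```python
# Hypothetical check of the discrimination guideline described above:
# the gage's smallest unit should be no coarser than half the
# measurement unit the application requires.
def discrimination_ok(required_unit, gage_discrimination):
    """True if the gage discriminates finely enough for the required unit."""
    return gage_discrimination <= required_unit / 2

# Measuring to 0.001 in. calls for an indicator reading to 0.0005 in.
# or better; an indicator that only reads to 0.001 in. falls short.
ok = discrimination_ok(0.001, 0.0005)          # fine enough
too_coarse = discrimination_ok(0.001, 0.001)   # not fine enough
```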
Magnification: the ratio of the length of the display scale to the amount of displacement actually seen by the gage. Today, it is used in reference to the performance of optics equipment, such as microscopes. The current preferred term in metrology is amplification.
Amplification: In dimensional metrology this can be thought of as getting more than you've got. In a dial indicator, for example, gears or levers amplify the plunger movement. In the electronic world, amplifiers provide output of greater magnitude. But beware. Amplification can create the illusion of accuracy while simultaneously "magnifying" sources of error.
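The caution above is easy to see numerically. This sketch uses invented numbers and an assumed 100:1 gain; the point is only that amplification scales the signal and any input error by the same factor.

```python
# Sketch of amplification "magnifying" error (numbers invented):
# a gain that makes small movements visible scales error identically.
def amplified(displacement, gain):
    """Displayed movement for a given input displacement and gain."""
    return displacement * gain

gain = 100                                   # e.g. a 100:1 indicator train
display = amplified(0.0004, gain)            # real movement, made visible
error_on_display = amplified(0.0001, gain)   # error, made just as visible
```

The display travel looks impressively sensitive, but a 0.0001 in. error at the plunger shows up on the scale 100 times larger too, which is the illusion of accuracy the paragraph warns about.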
Measurement Range: the distance over which measurements may be taken. With analog gage designs, ranges tend to decrease as amplification increases.
Hysteresis: the error that occurs when a measuring instrument gives different readings for the same dimension when measured in opposite directions. Often with dial indicators, it is a component of bi-directional repeatability caused by clearance (backlash) of the gear train.
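As a quick illustration of the definition above, hysteresis can be expressed as the difference between the two directional readings. The part size and readings here are invented.

```python
# Hypothetical illustration: hysteresis as the difference between
# readings of the same dimension approached from opposite directions.
def hysteresis(reading_upscale, reading_downscale):
    """Absolute difference between upscale and downscale readings."""
    return abs(reading_upscale - reading_downscale)

# Same 0.5000 in. part, approached from below and then from above:
h = hysteresis(0.5002, 0.4999)  # the gear-train backlash shows up here
```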
Calibration: how closely measurements made by a gage correspond to the dimensions of known standards throughout its measuring range. A gage with good precision may be usable if its calibration is off as long as a correction factor is used.
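The correction-factor idea above can be sketched in a few lines. The offset value is invented; the assumption is simply that calibration against a known standard revealed a consistent bias.

```python
# Sketch of using a correction factor (offset invented): a gage that
# repeats well but reads consistently high is still usable once its
# bias, found by checking it against a known standard, is removed.
def corrected(reading, offset):
    """Apply a known calibration offset to a raw reading."""
    return reading - offset

# Calibration against a 1.0000 in. standard showed this gage reading
# 0.0008 in. high, so that bias is subtracted from every reading.
offset = 0.0008
true_value = corrected(1.0008, offset)
```

This only works because the gage is precise: a consistent error can be corrected, while a random one cannot.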
Understand the terms defined above and, should you ever be so bold as to ask a metrologist about the accuracy of a gage, you will be prepared for the answer.