
Measuring up to the task

How do you know which measuring instrument is capable of measuring any
particular dimension reliably and repeatably? Andrew Allcock explains

Whether it’s a set of digital calipers, micrometer, digital height gauge, co-ordinate
measuring machine (CMM) – or anything else where a numeric output figure is
read/generated – any variation in reading between measurements or between
people must be at an acceptable level.

Every measurement taken has some error associated with it and, if this error is large
compared to the component tolerance band, the measuring device will accept bad
parts and reject good ones. The approach used to determine an acceptable error is
called a ‘Gauge Repeatability and Reproducibility Study’ – often called ‘Gauge R&R’
or ‘GR&R’.

Clearly the same dimension does not always demand the same measuring regime; it
depends on the tolerance. The larger the tolerance, the more variation is allowable
in the returned results before the level of unacceptability is reached and vice versa.
Perhaps the most obvious choice is between digital calipers and a micrometer.

Gauge R&R is a statistical approach and requires multiple measurements of the
same component feature and dimension to be taken, plus the measurement of
multiple parts from a production process. These are then subject to mathematical
processing. A Gauge R&R study will convey six basic pieces of information: part
variation; repeatability; reproducibility; repeatability and reproducibility (R&R); per
cent of total variation; per cent of tolerance.

Part variation (PV) is the amount by which the actual part varies, caused by the
manufacturing process itself, and should be the largest element in total variation.
Repeatability relates to Equipment Variation (EV) and shows how much the reading
varies when the same operator measures the same part several times. A large value
could indicate gauge wear, poor operator technique, or a gauge with insufficient
resolution or a defect.

Reproducibility is appraiser variation (AV) and measures differences between
operators. The most common cause of large variation is poor operator training. Of
course, the way to eliminate this element is to use an automated measuring
instrument: a computer-controlled CMM or vision-based system.

Gauge R&R itself is a combined measure relating to variation in the measurement,
as opposed to the manufacturing, process. This number is expressed as a
percentage of the given tolerance and is the most important single attribute. This is
what warns of problems.

If this figure is a large percentage of total variation, but not a large percentage of
tolerance, the measuring instrument is suitable and the manufacturing process is
also acceptable. Of course, if the gauge were better, then it would allow for tighter
control of the process – that is, the control limits that signal the need for
corrective action could be set more finely.
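As a concrete illustration, the figures described above can be computed with the widely used average-and-range method. The sketch below is a minimal implementation assuming the standard AIAG K1/K2/K3 constants; the function name and the data layout (operator × part × trial) are illustrative, not taken from the article:

```python
import math

# Standard AIAG average-and-range constants:
# K1 keyed by number of trials, K2 by operators, K3 by parts
K1 = {2: 0.8862, 3: 0.5908}
K2 = {2: 0.7071, 3: 0.5231}
K3 = {5: 0.4030, 10: 0.3146}

def gauge_rr(data, tolerance):
    """data[operator][part][trial] -> reading. Returns the basic GR&R figures."""
    n_ops = len(data)
    n_parts = len(data[0])
    n_trials = len(data[0][0])

    # Repeatability (EV): average within-cell range, scaled by K1
    ranges = [max(cell) - min(cell) for op in data for cell in op]
    r_bar = sum(ranges) / len(ranges)
    ev = r_bar * K1[n_trials]

    # Reproducibility (AV): spread of operator means, less the EV share
    op_means = [sum(sum(cell) for cell in op) / (n_parts * n_trials)
                for op in data]
    x_diff = max(op_means) - min(op_means)
    av_sq = (x_diff * K2[n_ops]) ** 2 - ev ** 2 / (n_parts * n_trials)
    av = math.sqrt(max(av_sq, 0.0))

    # Combined R&R, part variation (PV) and total variation (TV)
    grr = math.hypot(ev, av)
    part_means = [sum(sum(data[o][p]) for o in range(n_ops)) / (n_ops * n_trials)
                  for p in range(n_parts)]
    pv = (max(part_means) - min(part_means)) * K3[n_parts]
    tv = math.hypot(grr, pv)

    return {"EV": ev, "AV": av, "GRR": grr, "PV": pv,
            "pct_total_variation": 100 * grr / tv,
            "pct_tolerance": 100 * grr / tolerance}
```

Note how the two percentages at the end correspond to the two different questions in the text: per cent of total variation judges the gauge against the process spread, while per cent of tolerance judges it against the drawing requirement.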

More specifically, if the Gauge R&R figure is 10% or less of tolerance, then the gauge is
suitable. If the percentage is between 10 and 30, it may be acceptable, based on the
importance of the part. If it is greater than 30%, the instrument is not good
enough.
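Expressed in code, that acceptance rule might look like this; the bands follow the common AIAG guidance and the function name is illustrative:

```python
def grr_verdict(pct_of_tolerance):
    """Classify a gauge from its R&R figure expressed as a % of tolerance."""
    if pct_of_tolerance <= 10:
        return "acceptable"
    if pct_of_tolerance <= 30:
        return "marginal (may be acceptable, depending on part importance)"
    return "unacceptable"
```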
As with most things now, there are computer software programs to help generate
these figures. Google ‘Gauge R&R software’ and you’ll see there are plenty of
examples.

Author
Andrew Allcock
