
2. Literature Review

The capability of measurement systems and devices is not in itself a singular subject but a cumulative factor determined by a number of constituent elements. To successfully investigate and resolve capability problems one is compelled to appreciate each of these individual elements and the breadth of their influences. This review of the available literature will examine from first principles the subject of measurement and its importance. Ultimately its purpose is to identify and understand the key variables able to influence both measurement activities and the systems that perform them.

2.1 The Importance of Measurement

Measurement activities are commonplace and performed daily in a broad spectrum of applications. The importance attached to these activities varies with the individual requirements of their application. Some fields, such as science and engineering, place a great deal of significance on measurement processes and the ability to undertake them correctly. Quinn et al (2004) estimate that measurement and measurement-related activity contributes between 3 and 6% of an advanced industrial economy's Gross Domestic Product (GDP). Quinn et al argue that the safety and success of the majority of manufactured products result from their level of quality, a characteristic highly dependent on measurement processes and systems. Pan (2004), supporting the view of Quinn et al, argues that the variation experienced within gauges, and hence their overall capability, influences the quality improvement activities pursued by industry. This, Pan (2004) believes, is exhibited in the number of companies that include the necessity of a sound measurement system within many programmes of quality assurance. Babbar (1994) believes that failure to pay such attention to the capability of measurement activities underlies the disputes that arise between manufacturers, suppliers and customers.
Yet little of this commonality of use, or of the significance attached to it, has translated into a common understanding of the factors influencing the capability of measurement systems and devices.

2.2 The Key Elements of Measurement Activities

Bentley (2005) describes the purpose of all measurement systems as being a link between the object, or process of concern, and the observer, as shown in Fig.1. A quantitative value, which Bentley refers to as the measured variable, is presented to the observer, indicating the current value of that given variable (p.3).

[Figure: the true value of a variable passes from the process or object, through the measurement system, to the observer as the measured value of the variable.]

Fig.1: Purpose of a Measurement System Source: Bentley (2005) (p.3)

The two most desired characteristics of the output produced by such measurement devices or systems are accuracy and precision, which are identified and discussed in turn below.

2.2.1 Accuracy

Accuracy is defined by Evans and Lindsay (2005) as "the closeness of agreement between an observed value and an accepted reference value or standard". The measure of accuracy is therefore classified as the level of error experienced in relation to the overall size of the measured quantity (p.600). Peoples and Weinstein (1999) reveal that standard practice deems a ratio of 10:1 between the accuracy of a measuring device and the manufacturing tolerance to which it is applied as acceptable. For example, a micrometer with an accuracy of 0.001 should be applied to the measurement of features required to be within a tolerance of 0.01.

2.2.2 Calibration

Calibration is the means by which measurement devices and systems are examined to assess the measured value produced in direct comparison to an accepted reference value or standard. Hence, the process provides verification of the level of accuracy that a measurement system is capable of. A general illustration of the calibration process is shown in Fig.2.
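The 10:1 accuracy-to-tolerance practice noted by Peoples and Weinstein can be sketched as a simple check. This is a minimal illustration; the function name is my own, not drawn from the sources:

```python
import math

def meets_ten_to_one(device_accuracy, tolerance):
    """Rule-of-thumb check: the device's accuracy should be at least
    ten times finer than the tolerance it is used to verify."""
    ratio = tolerance / device_accuracy
    # isclose guards against floating-point rounding at exactly 10:1
    return ratio >= 10.0 or math.isclose(ratio, 10.0)

# The micrometer example from the text: accuracy 0.001 against a 0.01 tolerance.
print(meets_ten_to_one(0.001, 0.01))   # meets the 10:1 rule
print(meets_ten_to_one(0.005, 0.01))   # only 2:1, so it does not
```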

[Figure: the element or system to be calibrated, with standard instruments measuring its input values (I), its output values (O), and the environmental inputs (IM and II).]

Fig.2: Calibration of an Element Source: Bentley (2005) (p.22)

In the previous illustration Bentley (2005) exhibits his interpretation of the calibration process, with measurements being taken of the corresponding values of I, O, IM and II. The value of I can be constant, or changing slowly. The measurement of these variables must be accurate, with the instruments and techniques used in their quantification conforming to specific standards. These specific standards will be verified S.I. quantities, such as the kilogram and the metre, held at an accredited institute such as the National Physical Laboratory (NPL). The process by which working-level reference standards held within manufacturing facilities, which are responsible for calibration of the facility's measurement systems, are verified as comparable to these specific standards is known as traceability. Babbar (1995) provides a general definition of traceability as "the ability to relate individual or nationally accepted systems of measurements through a chain of comparisons". Such a chain of comparisons, known as the Instrument Calibration Chain and also described as a traceability ladder, is illustrated in Fig.3. Evans and Lindsay (2005) identify that the recommended practice is for equipment to be calibrated against working-level standards of an order ten times more accurate than the measuring devices to which they are applied.

The relationship in turn between a reference standard and a working-level standard should maintain an accuracy ratio of four to one. Babbar (1994) suggests that the quality of calibration is hence dependent on the selection and use of a stable standard that is in reliable and traceable compliance with an accepted standard. Calibration capability is therefore one of the primary requirements in the achievement of the second desired quality in measurement devices, that of the precision of measurements.
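The recommended ratios above (10:1 between a device and its working-level standard, and 4:1 between working-level and reference standards) can be sketched as follows. The function name and figures are illustrative only:

```python
def required_standard_accuracies(device_accuracy):
    """Given a measuring device's accuracy, return the accuracy required of the
    working-level standard (ten times finer, per Evans and Lindsay) and of the
    reference standard above it (a further four times finer)."""
    working_level = device_accuracy / 10.0
    reference = working_level / 4.0
    return working_level, reference

# A hypothetical device accurate to 0.01 units: the working-level standard must
# be ten times finer, and the reference standard forty times finer, than the device.
working, reference = required_standard_accuracies(0.01)
```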

[Figure: the Instrument Calibration Chain, with accuracy increasing up the ladder. Process instruments are calibrated against working-level standards; these against a company laboratory standard; that against a secondary transfer measurement standard held at an accredited local centre (standards laboratory); and ultimately against the NPL primary measurement standard (national standards organisation).]

Fig.3: Instrument Calibration Chain Source: Morris (1991) (p.40)

2.2.3 Precision, and its Relation to Accuracy

Precision is defined by Evans and Lindsay (2005) as "the closeness of agreement between randomly selected individual measurements or results, [and therefore] relates to the variance of repeated measurements". Low levels of precision are attributed to random variations resulting from characteristics of the measurement device itself, such as poor design and maintenance (p.600). The graphs in Fig.4 illustrate how measurement systems can be accurate yet not precise, and vice versa. As can be seen, the levels of measurement variation experienced, and the proximity of the average measured value to the true value, dictate the capability of a measurement system with respect to these characteristics. For a measurement system to be accurate, the mean measured value must coincide with the true value; to be precise, the range of measurement variation must remain small over repeated measurements. A system possessing both qualities is exhibited in (d).

2.3 Measurement Error

Pearn (2005) identifies that measurement errors are unavoidable in any measurement activity, and these will be experienced across all industrial applications. Bentley (2005) declares that perfect accuracy is merely an ideal, with the actual accuracy experienced in real systems being quantifiable by the measurement system error E, where:

E = measured value − true value = system output − system input

Source: Bentley (2005) (p.4)

Raouf et al (1995) provide a further definition of the influence of errors in readings produced by measurement devices in the following equation:

Reading = True Value + Constant Error + Repeatability Error

Raouf et al (1995) attribute a constant error to the poor calibration of a measuring device, whilst a repeatability error describes an inability of a measuring device to produce the same reading on the making of further measurements.
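Raouf et al's reading model can be simulated to show how the two error terms are recovered from repeated readings: the bias of the mean estimates the constant (calibration) error, while the spread estimates the repeatability error. A minimal sketch; all numerical values are assumed for illustration:

```python
import random

def simulate_readings(true_value, constant_error, repeat_sd, n, seed=0):
    """Raouf et al's model: reading = true value + constant error
    + a random repeatability error (here drawn from a normal distribution)."""
    rng = random.Random(seed)
    return [true_value + constant_error + rng.gauss(0.0, repeat_sd)
            for _ in range(n)]

readings = simulate_readings(true_value=10.0, constant_error=0.05,
                             repeat_sd=0.02, n=1000)
mean = sum(readings) / len(readings)
bias = mean - 10.0   # estimates the constant error (an accuracy problem)
spread = (sum((x - mean) ** 2 for x in readings)
          / (len(readings) - 1)) ** 0.5   # estimates repeatability (precision)
```

With enough repeated readings the bias converges on the constant error and the spread on the repeatability standard deviation, which is why calibration targets the former while Gauge R&R studies address the latter.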

[Figure: four frequency distributions of measured values relative to the true value: (a) not precise and not accurate; (b) precise but not accurate; (c) accurate but not precise; (d) accurate and precise.]

Fig.4: Accuracy Versus Precision Source: Evans and Lindsay (2005) (p.601)


2.3.1 Metrology and the Study of Measurement Uncertainty

As all measurements are subject to a degree of uncertainty, ultimately influencing their level of significance, the need for the application of metrology is prompted. Giacomo (1996) describes metrology as a science whose objective is to understand the significance of a measurement through examination and analysis of the limits of this significance. Evans and Lindsay (2005) state that the fundamental significance of measurement analysis is illustrated by the following equation:

σ²_total = σ²_process + σ²_measurement
This equation states that the total observed variation of the production output equals the sum of the true process variation and the variation resulting from the measurement system (p.601).

2.3.2 The Effects of Measurement Error and Uncertainty

Forbes (2006) identifies that on producing the measurement data for a product's characteristics, in relation to the specification, a limited range of choices must then be made. A product may be accepted as conforming to specification, a further measurement may be undertaken with a more accurate device or system, or the product can be reworked, or rejected as non-conforming. Excessive variation may lead to wrongly accepting, or rejecting, a product, or the incurring of additional costs through its unnecessary reworking or scrapping. What is more, Babbar (1994) argues that when occurrences of such variation are commonplace, an organisation's ability to manufacture precision, quality products is severely eroded. This necessitates the over-design of products, or constituent components, so that finer levels of specification may be achieved. This practice is extremely inefficient, requiring the investment of essentially unnecessary additional capital and reducing the ability of an organisation to compete effectively. In addition, Evans and Lindsay (2005) observe that in tools such as Six Sigma, the performance indicated is highly dependent on the reliability of the measurement systems responsible for producing its constituent data (p.599).
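Because variances, not standard deviations, add, the true process variation can be recovered from the observed total once the measurement variation is known. A minimal sketch of this rearrangement; the numerical figures are hypothetical:

```python
import math

def true_process_sd(observed_total_sd, measurement_sd):
    """Rearrange sigma_total^2 = sigma_process^2 + sigma_measurement^2
    to recover the true process standard deviation."""
    if measurement_sd >= observed_total_sd:
        raise ValueError("measurement variation cannot exceed the observed total")
    return math.sqrt(observed_total_sd ** 2 - measurement_sd ** 2)

# Hypothetical figures: observed spread 0.05, gauge spread 0.03
# leave a true process spread of 0.04.
process_sd = true_process_sd(0.05, 0.03)
```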


Henderson and Evans (2000) describe how General Electric made such a discovery when in the process of implementing Six Sigma. It became obvious to the organisation that many of the measurement systems in operation were not sufficiently repeatable or reproducible to be employed in conjunction with the Six Sigma methodology. This prompted General Electric to develop completely new measurement systems in its operational activities.

This influence is equally capable of distorting the view of the capability of manufacturing processes. Process capability, or Cpk, allows the performance of a process, such as the cutting of metal components to a nominal value, to be observed over a given period of time. Slack et al (2004) define it as "an arithmetic measure of the acceptability of the variation of a process". Therefore, when high levels of measurement variation are experienced, a biased perception of process variation, and hence overall capability, will be presented. This may ultimately cause a process's performance to appear better, or worse, than it is in reality. Raouf et al (1995), in supporting such an observation, argue that ultimately the capability of measuring systems and devices influences the perceptions and judgements made about process capability. Therefore, determining the capability of measurement systems and devices, and evaluating their acceptability, is critical in achieving a high level of process capability and efficiency.

2.4 Assessment of the Variation in Measurement Systems

Pearn et al (2005) argue that for a measurement system to be classified as acceptable, the variability in the measurement produced as a direct result of the measurement system itself cannot exceed a predetermined percentage of the applied engineering tolerance.
The determination of this variance, and hence the capability of the measurement device, is achieved by gauge capability analysis, often referred to as Measurement System Analysis (MSA) or Gauge R&R (Repeatability and Reproducibility). MSA is capable of separating the overall variation experienced into the causal factors of repeatability and reproducibility. It is an extremely well-established procedure, with precise directives for its application contained within ISO 5725. The circumstances in which an MSA is required may include when a process is out of control, or incapable, and no specific cause can be identified, or where proposed changes to the measurement process require examination.
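The separation of overall variation into repeatability and reproducibility can be illustrated with a deliberately simplified sketch: pooled within-operator variance stands in for repeatability, and the variance of operator means for reproducibility. This is illustrative only, omitting the parts, trials and interaction terms of a full ISO 5725 / ANOVA study, and the data are hypothetical:

```python
def gauge_rr(readings_by_operator):
    """Crude Gauge R&R sketch for repeated readings of one part.
    Repeatability: pooled within-operator variance (equipment variation).
    Reproducibility: variance of the operator means (appraiser variation)."""
    groups = list(readings_by_operator.values())
    # Repeatability: pool the squared deviations within each operator.
    within_sq = []
    for vals in groups:
        m = sum(vals) / len(vals)
        within_sq.extend((v - m) ** 2 for v in vals)
    repeat_var = sum(within_sq) / (sum(len(g) for g in groups) - len(groups))
    # Reproducibility: spread of the operator means about the grand mean.
    means = [sum(g) / len(g) for g in groups]
    grand = sum(means) / len(means)
    repro_var = sum((m - grand) ** 2 for m in means) / (len(means) - 1)
    return repeat_var, repro_var, repeat_var + repro_var

data = {  # three hypothetical operators, three readings each of the same part
    "A": [10.01, 10.02, 10.00],
    "B": [10.05, 10.06, 10.04],
    "C": [9.99, 10.00, 9.98],
}
repeat_var, repro_var, gauge_var = gauge_rr(data)
```

In this invented data set each operator agrees with themselves far better than with the others, so reproducibility dominates the gauge variance: the pattern associated with operator, rather than equipment, problems.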


For example, Raisinghani et al (2005) argue that conducting an MSA must be the first step in the application of Six Sigma, as a precursor to the optimisation of the manufacturing processes required to produce optimal features. Pan (2004) states that the variability experienced in a measurement process, and hence its capability, is defined by the following equation:

σ²_gauge = σ²_repeatability + σ²_reproducibility
This equation states that the total variance of the measurement process, i.e. the device or gauge, equals the sum of the variances attributable to repeatability and reproducibility. Burdick and Larsen (1997) describe repeatability as an error-influencing factor determined by the gauge itself, and an indication of the accuracy and precision of the measuring process. Reproducibility, by contrast, is the error resulting from variations in measurements produced by different operators, or inspectors, on the same characteristic of a product with the same measurement device.

2.4.1 Influences on Measurement System Performance and Selection

Quinn et al (2004) argue that long-term reproducibility of measurements is only achievable by a means of maintaining the accuracy of measurements. Giacomo (1996), however, believes that accuracy is only one of the determining conditions that must be taken into account. These conditions are identified as the specifications designated to the product characteristics being measured, the quality of the measuring process and device, and the ultimate purpose of the result. Hence, a more precise and accurate result demands careful consideration of the product specification, and that measurements are made with high-quality processes and devices that have been carefully calibrated. Cremer (2001) mirrors this view, arguing that organisations must acquire innovative measurement and inspection processes of an order able to safeguard their ability to compete effectively. Cremer essentially argues that manufacturing organisations must move away from the practices of manual inspection and verification and their associated errors. Otherwise the result is lower rates of production than would otherwise be achievable, and the unnecessary costs of rework and scrap.


Yet in practice this view may be seen as the ideal: the measurement systems and devices actually open for use, and acquisition, in an organisation's activities are dictated by further factors beyond metrology. Saadat et al (2002) argue that the selection of measurement systems employed within manufacturing is influenced not only by the individual measurement task, but by operational budgets and the availability of the systems, or devices, within an organisation. Smaller organisations will therefore be more constrained in the choices open to them. Equally, financial restrictions will undermine the level of investment made in the continual maintenance of measurement systems, or the purchasing of updated quality measurement equipment.

2.5 Environmental Influences on Measurement Systems

A final area of consideration, among the factors influencing a measurement system's capability, is the effect of exposure to the operating environment. Morris (1991) identifies that the characteristics of measurement equipment are not constant, but change with wear and the influences of the operating environment. The rate at which this change occurs is strongly influenced by the environmental conditions, and is increased greatly by factors such as the presence of dirt or chemicals (p.33). Babbar (1995) shares this view, arguing that the influences of the environment rank alongside the effects of operator and procedure in determining the capability of measurement devices to produce precision measurements. Examples of environmental inputs include ambient temperature, atmospheric pressure, relative humidity and supply voltage.

Within dimensional metrology the effect of ambient temperature is a fundamental consideration, particularly when activities demand that extremely fine tolerances be achieved.


It is universally appreciated that materials, such as metals, undergo linear expansion in response to changes in temperature. Kalpakjian et al (2001) identify that the standard measurement temperature is 20 °C (68 °F), the temperature at which calibration is performed. It is therefore stressed that measurements must take place in controlled environments where the temperature can be maintained at 20 °C ± 0.3 °C (± 0.5 °F) (p.946). Peoples and Weinstein (1999) provide the example that undertaking measurements to an accuracy such as 0.000010 inch would be inappropriate if performed in environments that experience significant variations in ambient temperature. Therefore, it is argued, the more sensitive a gauge is to the temperature of its operating environment, the greater the caution that should be taken to ensure the frequency of calibration reflects this environment and the level of accuracy demanded.
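The scale of the effect can be sketched from the standard linear-expansion relation ΔL = α · L · ΔT. The coefficient used below is a typical textbook value for steel (roughly 11.7 × 10⁻⁶ per °C) and is an assumption, not a figure from the sources above:

```python
def thermal_length_error(length, delta_t, alpha=11.7e-6):
    """Change in a part's length caused by a temperature deviation from the
    20 degree C reference: delta_L = alpha * L * delta_T.
    alpha defaults to a typical coefficient for steel (~11.7e-6 per deg C)."""
    return alpha * length * delta_t

# A 100 mm steel part measured just 1 deg C away from the 20 deg C standard
# already shifts by about 0.00117 mm, comparable to a fine gauge's resolution.
error_mm = thermal_length_error(100.0, 1.0)
```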


2.6 Conclusions, and Consideration of Elements Integral to the Project

Surveying the available literature, it becomes evident that assessing the capability of measurement systems and devices ultimately rests on identifying the level of variability in measurements. The variability experienced is defined as a percentage of the applied dimensional tolerance. The percentages at which variation is classed as acceptable, or not, must therefore be predetermined at the beginning of the investigation. The variability of measurements is calculated by a Measurement Systems Analysis (MSA). This identifies the cumulative influence of the two key factors that determine measurement capability: the repeatability of the measurement system itself, and the reproducibility of those individuals responsible for making the measurements (the Gauge R&R). Hence, any capability improvements must consider first the influence of the device and second the influence of the user.

Alongside the impacts of repeatability and reproducibility on overall capability is the influence of the surrounding environment. The resulting influences of temperature and levels of cleanliness will have to be considered in any overall evaluation of the ability to perform capable measurements.

Assessing the impact of poor capability through examination of instances of non-conformance will be extremely difficult. This is due to its unwitting influence on determining a component's true conformance to specification, resulting from the levels of variation possible, as identified by Forbes (2006). Raouf et al (1995) further elaborate that because process capability (Cpk) is itself determined through measurements, a high level of measurement variation may falsely influence the perception of it. Caution should therefore be taken in the acceptance of any process capability data associated with poor measurement capability.
Hence, any instances where good levels of process capability are associated with highly variable measurement equipment should be identified and examined.
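This distortion of perceived process capability can be demonstrated with a short simulation: adding gauge error to a well-behaved process lowers the Cpk an observer would calculate. A minimal sketch under assumed normal distributions; all figures are hypothetical:

```python
import random

def cpk(values, lsl, usl):
    """Process capability index: distance from the mean to the nearest
    specification limit, in units of three standard deviations."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((x - mean) ** 2 for x in values) / (n - 1)) ** 0.5
    return min(usl - mean, mean - lsl) / (3 * sd)

rng = random.Random(1)
true_parts = [rng.gauss(10.0, 0.01) for _ in range(5000)]   # the true process
observed = [x + rng.gauss(0.0, 0.01) for x in true_parts]   # plus gauge error

true_cpk = cpk(true_parts, lsl=9.9, usl=10.1)
observed_cpk = cpk(observed, lsl=9.9, usl=10.1)
# The observed Cpk is lower: the gauge's variation inflates the apparent spread.
```

The converse also holds: a biased gauge could make a poor process appear capable, which is why capability data accompanied by poor measurement capability should be treated with caution.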


2.7 Chapter 2 Summary

This chapter has conducted a thorough and reasoned examination of the available literature appropriate to the area of study. The key elements identified as warranting consideration over the course of the investigation are described below.

The most desired qualities of measurement information are accuracy and precision. Accuracy describes the ability of a measurement system to produce readings whose mean value coincides with the true value. This is verified by calibration, where a measurement device is tested against a reference standard. Precision describes the ability of a measurement system to produce readings with a small range of measurement variation.

Regrettably, all measurements are inescapably prone to error, and the influences on the capability of measurement systems are multifarious. Instances of poor measurement capability may therefore give a misleading impression of conformance to quality and of the capability of manufacturing processes. Factors that may negatively influence measurement capability include:

- the limited ability of a measurement device to consistently produce repeatable readings;
- the varying ability of different people to measure the same object with the same device;
- the quality of calibration performed on measurement devices;
- the operating environment, through variations in ambient temperature, pressure, humidity, etc.

The capability of individual measurement systems or devices can be assessed by Measurement Systems Analysis (MSA), or Gauge R&R (Repeatability and Reproducibility). This process allows the determination of the total variation possible in a measuring device's readings due to the cumulative influence of:

- the errors produced by the device itself (its repeatability);
- the errors produced by different people measuring the same object with the same device (its reproducibility).


The following chapter describes the planned methodology with which the project will be conducted, referring to elements from the literature that impinge on the practical execution of the investigation.

