
1.3 UNITS, STANDARDS AND ACCURACY

1. Units of Physical Quantities


All physical quantities in science and engineering are measured in terms of well-defined units. The value of any quantity is thus expressed simply as the product of a number (e.g. 100) and a unit of measurement (e.g. meters). As science and engineering have progressed, various systems of units have been proposed and standardised. We shall concern ourselves here with only one such system, formerly known as the meter-kilogram-second (MKS) system, which, with some modifications, is now accepted as the Système International (SI), the international standard for all sub-systems of units in science and engineering. In this system, as in other systems of units, all physical quantities, including time and frequency (i.e. the inverse of time), are divided into two groups, namely fundamental quantities and derived quantities.

(1) Fundamental quantities

Fundamental quantities are the quantities on which the SI system is built; all other units are defined in terms of them. In all, there are just seven fundamental quantities, with their respective SI units: (i) mass (denoted by m), measured in kilograms (kg); (ii) length (denoted by L), measured in meters (m); (iii) time (denoted by t), measured in seconds (s); (iv) thermodynamic temperature (denoted by T), measured in kelvins (K); (v) electric current (denoted by i), measured in amperes (A); (vi) luminous intensity, measured in candelas (cd); and (vii) amount of substance, measured in moles (mol).

(2) Derived quantities.

The second group of physical quantities (i.e. all quantities other than the fundamental quantities) are referred to as derived quantities, because they are all defined in terms of the fundamental quantities in a self-consistent and interconnected manner. Thus, it can easily be verified that the units of such mechanical quantities as force, velocity, acceleration, momentum, power and energy can be defined in terms of the basic SI units. The same process of definition applies to all electrical quantities, with the exception of the unit of charge (or, in the SI system, the unit of current, the SI being, as indicated earlier, a slightly modified MKS system); the remaining electrical units can then be derived using defined physical laws from circuit theory.
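
To make the idea of derived units concrete, the short sketch below (illustrative only, not part of the original text) lists how a few common derived units decompose into products of the SI base units kg, m, s and A:

# A minimal sketch (not from the original text) showing how a few common
# derived units decompose into SI base units.  Exponents of the base units
# (kg, m, s, A) are stored as tuples; e.g. the newton is kg.m.s^-2.

BASE = ("kg", "m", "s", "A")

DERIVED = {
    "newton (force)":   (1, 1, -2, 0),   # F = m*a
    "joule (energy)":   (1, 2, -2, 0),   # E = F*d
    "watt (power)":     (1, 2, -3, 0),   # P = E/t
    "coulomb (charge)": (0, 0, 1, 1),    # Q = i*t
    "volt (voltage)":   (1, 2, -3, -1),  # V = P/i
    "ohm (resistance)": (1, 2, -3, -2),  # R = V/i
}

def as_base_units(exponents):
    """Format a tuple of exponents as a product of base units."""
    return "*".join(f"{u}^{e}" for u, e in zip(BASE, exponents) if e != 0)

for name, exps in DERIVED.items():
    print(f"{name:20s} = {as_base_units(exps)}")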

1.1.1 Standards for the Fundamental Quantities in SI Units

1.1.2 Physical Standards for Electrical Units


Reliable reference standards exist for most of the important electrical quantities; these, however, are kept only in very advanced laboratories. We should therefore attempt briefly to acquaint ourselves with the physical characteristics of the standards for voltage, current, resistance, capacitance, and inductance.

Before we examine these standards, it is, however, worthwhile first to have some basic idea of the broad classification of electrical standards. Essentially, electrical standards used for checking all practical engineering units are referred to as either (i) primary standards or (ii) secondary standards. The former type is kept, as mentioned earlier, in very few selected laboratories of the most technologically advanced countries. There are a number of reasons for keeping such standards. Firstly, they are needed for reproducing units of measurement and for calibrating highly accurate and precise instruments. Secondly, they are needed for ensuring that, for example, one volt in country 'X' is equal to one volt in country 'Y', which in turn should also be equal to one volt in country 'Z'.

The secondary types of standards, which are subdivided into laboratory and industrial (or commercial) standards, are needed for checking electrical units and measuring instruments in relatively less stringent measuring environments. However, it must still be emphasised that these latter types of standards must have been checked, directly or indirectly, against the corresponding primary standards, for otherwise a complete instrumentation system involving measurement and/or control processes might end up being totally useless or purposeless. With this basic understanding of the role of primary and secondary electrical standards, we can now turn to the related notions of accuracy, errors and precision.

1.1.3 Accuracy, Errors and Precision


After we have familiarised ourselves to a sufficient extent with the notions and bases of electrical units in the SI system, and after we are firmly convinced of the need for accepted standards for all electrical quantities, we should next turn our attention to the accuracy of measurements as well as to the inevitable errors involved in determining the magnitudes of quantities through the direct or indirect use of appropriate instruments and instrumentation systems. As these are fundamental concepts, and as we have not yet begun to examine the principles of operation of standard instruments, our discussion at this stage can only be restricted to the fundamental problems of measurement in the most general terms. Accordingly, suppose we start out once again with working definitions for accuracy, error, and precision. Later, we shall also consider the notions of significance, resolution and sensitivity.

a) Accuracy

As no measurement can yield an exact or true value of any quantity, accuracy can be defined as a measure of the closeness of the measured magnitude of a given quantity (i.e. either a fundamental or a derived quantity) to its true or exact magnitude; in other words, we must expect a certain discrepancy between true and measured values. What we should deduce from the notion of discrepancy is that when the magnitude of a physical quantity (which can also be referred to as a measurand) is to be determined by use of an appropriate measurement process or procedure, there always results (or should result) an uncertainty with regard to the exact value and the result of the measurement. If the discrepancy is small, good accuracy is obtained; if, on the other hand, the discrepancy is large, the accuracy is poor.

b) Errors and General Conditions for Error Considerations

If accuracy is to be taken as an indication of the closeness of a measured quantity to its true value, then error can be defined as a quantitative measure of the discrepancy between the true and the measured values. However, as the error involved is not precisely known, it is found to be more meaningful to express the extent of the error in the form of a measure of uncertainty. Thus, in practice the true value of a physical quantity can at best be specified with an estimated degree of uncertainty. There are a number of reasons, based on physical grounds, which lead to this fundamental interpretation of error. In the first place, we can never be absolutely certain about the accuracy of the reading of an instrument used for measurement. Also, in the case of quantity values obtained by calculation, we can never be absolutely certain about the accuracy of the data used in the calculation.

The above-indicated general observations apply to the determination of the approximate or "nearly accurate" value of any physical quantity, be it a fundamental or a derived quantity. However, it may appear that we are simply stretching here the concept of "uncertainty", for everything that can be said about error seems to be contained in or implied by the notion of accuracy. If this were so, there would have been no need to emphasise errors and error considerations. In reality, though, it is in fact the error estimates which give us an indication of the degree of accuracy in determining the value of a quantity. Hence one of our main aims in instrumentation works should be to arrive at error estimates which can guarantee the accuracy of a measured or calculated value within a reasonable range of tolerance. As we shall see later, a number of standard formats for expressing error estimates or uncertainties quantitatively (i.e. with numerical figures) have been developed for use in measurement and instrumentation.
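
As a simple illustration of such quantitative formats (a minimal sketch, not taken from the original; the numerical values are assumed), the absolute, relative and percentage errors of a single measurement can be computed as follows:

# A minimal sketch of common quantitative error formats.  The "true" and
# measured values below are assumed purely for illustration.

true_value = 50.00       # accepted (reference) value, e.g. volts
measured_value = 49.60   # value read from the instrument

absolute_error = measured_value - true_value   # same units as the quantity
relative_error = absolute_error / true_value   # dimensionless
percent_error = 100.0 * relative_error         # expressed in percent

print(f"absolute error : {absolute_error:+.2f}")
print(f"relative error : {relative_error:+.4f}")
print(f"percent error  : {percent_error:+.2f} %")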

From the foregoing, it should not, however, be concluded that we must always be strictly concerned about keeping the smallest possible error content in the measured or calculated value of a physical quantity. If, in some hypothetical situation, the slightest errors were to cause serious misgivings with regard to the validity and usefulness of engineering or scientific data, then no progress could actually have been made in engineering and industrial development and practice. What needs to be realised instead is that, depending on the desired or required levels of precision and resolution (these are important terms connected with accuracy, as described later), the error encountered in any instrumentation work should be kept within tolerable bounds or limits.

Fundamentally, it is therefore necessary in any instrumentation work to take into account the following general conditions for errors and error considerations:

(i) There must be an awareness of the presence of errors in any instrumentation system;

(ii) It is also essential to have a firm understanding of the type of instrumentation work involved, so as to be able to estimate reasonably the degree of accuracy to be expected or desired;

(iii) Lastly, to conduct and complete a meaningful instrumentation work, it is necessary to have, as far as possible, a good understanding of the principles of operation and accuracy limitations of the instruments and/or instrumentation system employed.

c) Classification of Errors

Reference was made briefly earlier to the "principle of uncertainty" and the limitations it imposes on the measured values of quantities. However, it should not be concluded that errors occur in all instrumentation works because of the "principle of uncertainty" alone. While this fundamental principle (in conjunction with the problem of noise, which will be discussed later) indeed sets an upper limit on the ultimate order of accuracy in any measurement situation, it is important to realise that there are also other common types of errors whose causes can be largely eliminated even under the most crude measurement conditions. We would therefore like to have a general awareness of the classification of errors as they are commonly encountered in instrumentation works. Basically, there are two main classes of errors, namely:

(i) Systematic Errors

and

(ii) Random Errors

We shall briefly examine both types of errors in relation to their respective sources.

c.1) Systematic Errors

These are generally errors whose sources are clearly known and hence whose influences on the accuracy of measured values can be controlled by introducing systematic corrections. In practice, these errors are mainly introduced through the misuse of instruments by users or operators, and in such instances they are referred to as gross errors. Additionally, systematic errors are also directly introduced by the inherent accuracy limitations of the measuring instruments used, and as such they are commonly referred to as instrument errors. In other words, whenever the values of quantities are to be determined by measurement, the measured values (or readings or readouts) will invariably be expected to differ from the true values by fixed error bounds due to the limited precision and resolution capabilities of the instruments. Also, it should be noted that environmental conditions such as changes in temperature and humidity, and the presence of strong magnetic fields, are or can be sources of measurement errors. Such systematic errors are in general referred to as environmental errors, and it is clear that the necessary precautions should be taken to minimise their influences on the overall accuracy of measured values.
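
Because the source of a systematic error is known, its effect can often be removed by a simple correction. The sketch below is hypothetical (the offset and gain figures are assumed, not taken from the text) and shows a voltmeter reading being corrected for a known zero offset and gain error obtained from calibration:

# A hypothetical correction for a systematic instrument error.  The offset
# and gain figures are assumed for illustration; in practice they would be
# obtained by calibrating the instrument against a reference standard.

ZERO_OFFSET = 0.15   # instrument reads 0.15 V when the true input is 0 V
GAIN_ERROR = 1.02    # instrument reads 2 % high across its range

def correct_reading(raw_reading):
    """Remove the known offset and gain error from a raw voltmeter reading."""
    return (raw_reading - ZERO_OFFSET) / GAIN_ERROR

raw = 10.35   # raw readout in volts
print(f"corrected value: {correct_reading(raw):.3f} V")   # about 10.000 V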

c.2) Random Errors

Unlike systematic errors, random errors are caused by erratic or unpredictable fluctuations, either in the composition of the material quantity under measurement (which may be due to variations in parameters), or in the procedures and mechanisms employed in conducting measurement exercises. Random errors can thus arise in a number of different ways, and it is worthwhile to discuss briefly their causes in measurement works.

Suppose we consider the case of random errors which can erratically arise in the determination of the value of a particular quantity by measurement, and assume that the desired value is obtained from a repeated series of measurements. As an example, we could think of a tedious measurement exercise in which the exact value of a resistor is sought. Let us further assume that the measuring conditions are practically similar, and hence repeatable. However, no matter how identical the measuring conditions are made, it is impossible for all of the measured values to be identically equal. In other words, there are real possibilities or conditions for the introduction of random or erratic errors. Hence, the expected resistance value will lie somewhere in the centre of the readings, with a scattering of measured values distributed above and below an "arithmetic mean value", which is simply the average of all valid readings taken in the measurement exercise. The extent of the random errors will of course also depend on the accuracy of the measuring instruments. If the number of measurements is really large, it can be shown using standard statistical techniques that the random errors of measurement will average themselves out, and that therefore the accurate value of the resistor can be approached.
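
The averaging-out of random errors can be illustrated with a short simulation (a sketch only; the true resistance, noise level and number of readings are assumed for illustration):

# A minimal simulation of random measurement errors averaging out.  The true
# resistance, noise level and number of readings are assumed values.

import random

random.seed(1)

TRUE_RESISTANCE = 1000.0   # ohms
NOISE_STD = 2.0            # standard deviation of the random error, in ohms
N_READINGS = 500

readings = [TRUE_RESISTANCE + random.gauss(0.0, NOISE_STD)
            for _ in range(N_READINGS)]

mean = sum(readings) / len(readings)
spread = (sum((r - mean) ** 2 for r in readings) / (len(readings) - 1)) ** 0.5

print(f"arithmetic mean : {mean:.2f} ohms")    # close to 1000 ohms
print(f"sample std dev  : {spread:.2f} ohms")  # close to the 2-ohm noise level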

A second cause for the introduction of random errors into the results of instrumentation works can be explained in terms of the uncertainties specified for the circuit components used in the construction of instrument assemblies. When such components of equal nominal value (e.g. one-kilohm carbon resistors) are used in the circuitry of instrument sections, ideally it would be expected that the components would not introduce random errors. In practice, individual components cannot all be manufactured to have identical values. Instead, each component will in fact have a "unique" value, which is nearly equal to what is described as the "nominal value" but still differs from it by a small "uncertainty". Moreover, the "unique" values do not or cannot remain constant. They are subject to random variations, which can be due either to the effects of aging or to hysteresis effects (i.e. the inability to return to initial values after cyclic external influences). The hysteresis effects in turn could be caused by temperature variations or by instabilities in the internal material composition. The end result of such random variations, in general, is to introduce random errors of measurement, which are different from systematic errors.
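
A short Monte Carlo sketch (hypothetical; the 5 % tolerance and the simple two-resistor voltage divider are assumed purely for illustration) shows how such component tolerances propagate into a spread of circuit behaviour:

# A hypothetical Monte Carlo illustration of component-tolerance spread.
# Two nominally one-kilohm resistors with an assumed 5 % tolerance form a
# voltage divider; the divider ratio then scatters around its nominal 0.5.

import random

random.seed(2)

NOMINAL = 1000.0     # ohms
TOLERANCE = 0.05     # +/- 5 %
TRIALS = 10000

ratios = []
for _ in range(TRIALS):
    r1 = NOMINAL * (1.0 + random.uniform(-TOLERANCE, TOLERANCE))
    r2 = NOMINAL * (1.0 + random.uniform(-TOLERANCE, TOLERANCE))
    ratios.append(r2 / (r1 + r2))

print(f"min ratio : {min(ratios):.4f}")
print(f"max ratio : {max(ratios):.4f}")
print(f"mean ratio: {sum(ratios) / len(ratios):.4f}")   # close to 0.5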

1.1.4 Precision, Significance, Resolution and Sensitivity


The notions of precision, resolution, and significance are very useful in providing quantitative descriptions or statements of the degree of accuracy of the values of quantities measured or monitored in instrumentation works. From the definitions of the respective terms that follow, fundamental distinctions will also be brought out relating precision to accuracy on the one hand, and precision to significance on the other.

a) Precision

The performance of an instrument in yielding identical measured values under repeatable conditions is referred to as precision. Note that in this statement nothing is said about the accuracy of the readings of the same quantity. To emphasise this point, let us consider as an example a voltmeter with a pointer and a scale containing very fine divisions. Suppose the pointer is bent either at its base or near its tip. Now the readings yielded by such an instrument will be precise (i.e. with a high degree of repeatability), but the readings will be completely inaccurate, as the operating conditions of the instrument have been altered by the damage suffered by the pointer. However, if the structural and circuit components that go into the construction of the voltmeter assembly are all in good condition, then the finer the scale divisions, the more accurate will be the instrument readings. From this simple example we can then draw the general conclusion that while accuracy requires precision, precision does not guarantee accuracy.
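
The distinction can be made concrete with a small simulation (a sketch with assumed numbers, not from the text): a "bent-pointer" voltmeter gives tightly grouped but offset readings, i.e. high precision with poor accuracy.

# A hypothetical illustration of precision without accuracy: a voltmeter with
# a bent pointer reads consistently (small spread) but is offset from the
# true value.  All numbers are assumed for illustration.

import random

random.seed(3)

TRUE_VOLTAGE = 10.0    # volts
POINTER_OFFSET = 0.8   # systematic offset introduced by the bent pointer
NOISE_STD = 0.02       # very small random scatter -> high precision

readings = [TRUE_VOLTAGE + POINTER_OFFSET + random.gauss(0.0, NOISE_STD)
            for _ in range(20)]

mean = sum(readings) / len(readings)
spread = max(readings) - min(readings)

print(f"mean reading : {mean:.2f} V (true value is {TRUE_VOLTAGE} V)")
print(f"spread       : {spread:.3f} V  -> precise, but not accurate")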

b) Significance

To convey quantitative information regarding the value (or magnitude) of a quantity, it is necessary to express that information in terms of numerical figures. The number of digits believed to be close to the true value of the quantity being measured indicates the precision of a measurement, and we can thus simply define significance as a measure of precision.

The representation of significant figures in scientific and engineering works is made in one of three ways. Depending on the particular method by which it is determined, and depending also on the accuracy of the information it conveys, the value of a quantity can be meaningfully and conveniently expressed:

(i) as an entire number;

(ii) by use of powers of ten;

(iii) by an indication of the uncertainty or error limit.

b.1) Entire Number Significance Representation

In this method of significance representation, a figure contains a number of significant digits, with an uncertainty determined by the position of the decimal point. As an example, suppose a measured resistance value is expressed as "100.1 ohms". Here, four significant digits are contained, and the uncertainty in the measurement can then be specified as "±0.1 ohm". The true resistance value is thus estimated to lie between 100.0 and 100.2 ohms.

Next, adding a second digit after the decimal point, let it be stated that the measured resistance value is "100.05 ohms". Five significant digits are indicated in this figure, and with an uncertainty of "±0.05 ohm", the resistance value is estimated to lie between 100.00 and 100.10 ohms. Thus, in the entire number representation of significance, it should be observed that the number of digits determines the significant figures, and the position of the decimal point yields the uncertainty, which determines the degree of precision. It should, however, be added that a zero to the left of the decimal point, and leading zeros immediately to the right of the decimal point, do not contribute to the number of significant digits in a figure. For example, the number "0.0025" has just two, and not four or five, significant digits.
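
As a rough illustration (a sketch only; the simple rule below ignores the ambiguity of trailing zeros in whole numbers such as "100"), significant digits in a decimal string can be counted as follows:

# A minimal sketch for counting significant digits in a decimal string.
# Leading zeros (including a zero before the decimal point) are not
# significant.  Trailing zeros in whole numbers such as "100" are ambiguous
# and are simply counted here, so treat this as an illustration only.

def significant_digits(value: str) -> int:
    digits = value.replace("-", "").replace(".", "")
    return len(digits.lstrip("0"))   # drop leading zeros, count the rest

for example in ("100.1", "100.05", "0.0025"):
    print(f"{example:>8s} -> {significant_digits(example)} significant digits")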

b.2) Use of Powers of Ten for Significance Representation

The entire number method of representing the significance of a precision figure is unsuitable for expressing large numbers. In such cases, the most convenient method is to represent a large number as the product of a number equal to or greater than one and less than ten, and a power of ten with a positive or negative exponent. Let us consider another measured resistance value, 73,600 ohms. In the powers-of-ten representation, this can be expressed as 7.36 x 10^4 ohms, a figure which has three significant digits with an uncertainty of ±0.01 x 10^4 ohms, or 100 ohms. Thus the relatively large resistance is estimated to have a true value between 73,500 and 73,700 ohms. Notice that while the use of the powers-of-ten representation reduces the accuracy in general for positive exponents greater than one, it does still clearly indicate the gross limits of accuracy in a given measurement situation involving large numbers.
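
A small sketch (illustrative only) of the powers-of-ten representation and the uncertainty it implies when a fixed number of significant digits is kept:

# A minimal sketch of the powers-of-ten representation and the uncertainty
# implied by the number of significant digits retained.

import math

def to_power_of_ten(value: float, sig_digits: int):
    """Return (mantissa, exponent, implied uncertainty) for a positive value."""
    exponent = math.floor(math.log10(value))
    mantissa = round(value / 10 ** exponent, sig_digits - 1)
    uncertainty = 10 ** (exponent - (sig_digits - 1))  # one unit in the last digit
    return mantissa, exponent, uncertainty

m, e, u = to_power_of_ten(73600.0, 3)
print(f"73,600 ohms = {m} x 10^{e} ohms, uncertainty +/- {u:g} ohms")
# -> 7.36 x 10^4 ohms, uncertainty +/- 100 ohms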

b.3) Range of Error Method of Significance Representation

In this method of significant digits representation, the value of a quantity is expressed as an entire number with a specified range of uncertainty attached to it. Taking the previous example once more, if the measured resistance value is expressed as "73,600 ± 50 ohms", we are reasonably certain that the correct value lies between 73,550 and 73,650 ohms, as the error range has been clearly indicated. Thus, notice that in measurement situations in which the uncertainty is accurately expressed, the range-of-error method of significance representation is actually very convenient. Further, it should also be noted that the precision of the measurement is determined by the uncertainty range: the larger the uncertainty range, the lower the precision of the measurement, and conversely, the smaller the uncertainty range, the more precise the measurement.
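
A tiny sketch (using the figures from the example above) of the range-of-error representation and the relative precision it implies:

# A minimal sketch of the range-of-error representation, using the
# 73,600 +/- 50 ohm example from the text.

value = 73600.0      # ohms
uncertainty = 50.0   # ohms

lower, upper = value - uncertainty, value + uncertainty
relative = uncertainty / value

print(f"true value lies between {lower:,.0f} and {upper:,.0f} ohms")
print(f"relative uncertainty: {100 * relative:.3f} %")   # about 0.068 %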

c) Instrument Resolution and Sensitivity

When we speak of significant digits as a measure of precision, we do not really question how the value of a quantity is determined - it could be the result of measurement or of calculation. Suppose, however, that we are strictly interested in the ability of an instrument to provide the highest possible precision. We would then be talking about what is commonly referred to as the resolution or discrimination of an instrument. By definition, resolution is simply the smallest change or increment in the measured value which can be read (or detected) from the instrument's display mechanism. To repeat, note that resolution is a characteristic of the measuring instrument, although it also serves as a measure of the precision of the measurements obtained with it.
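
As a final sketch (illustrative; the 0.05 V resolution figure is assumed), an instrument's resolution can be thought of as the step to which any displayed reading is quantised:

# A hypothetical illustration of instrument resolution: readings can only be
# reported in steps of the smallest detectable increment.  The 0.05 V
# resolution figure is assumed for illustration.

RESOLUTION = 0.05   # volts, smallest readable increment on the display

def displayed_reading(true_input: float) -> float:
    """Quantise the input to the nearest multiple of the resolution."""
    return round(true_input / RESOLUTION) * RESOLUTION

for v in (10.012, 10.037, 10.063):
    print(f"input {v:.3f} V -> display {displayed_reading(v):.2f} V")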
