
Instrument Characteristics

2.1 Introduction
This chapter concentrates on 'how well an instrument performs its various functions', that is, on how closely the instrument output reflects the value of the variable that is being measured.
Instrument performance is described by means of quantitative qualities which are referred to as characteristics; these fall into two realms, the static and the dynamic. The static characteristics pertain to a system in which the quantities to be measured are constant or vary slowly with time. When the instrument is required to measure a time-varying process variable, one has to be concerned with the dynamic characteristics, which quantify the dynamic relation between the instrument input and output.

2.2 Static terms and characteristics


2.2.1 Range and span : The region between the limits within which an instrument is designed
to operate for measuring, indicating or recording a physical quantity is called the range of the
instrument. The range is expressed by stating the lower and upper values. Span represents
the algebraic difference between the upper and lower range values of the instrument. For example,
Range −10 °C to 80 °C ; Span 90 °C
Range 0 volt to 75 volt ; Span 75 volt
2.2.2. Accuracy, errors and correction : No instrument gives an exact value of what is being
measured. There is always some uncertainty in the measured value. This uncertainty is
expressed in terms of accuracy and error. Accuracy of an indicated (measured) value may be
defined as conformity with or closeness to an accepted standard value (true value). Accuracy
of the measured signal depends upon the intrinsic accuracy of the instrument itself, variation of
the signal being measured, accuracy of the observer and whether or not the quantity is being
truly impressed upon the instrument. For example, the accuracy of a micrometer depends upon factors like error in the screw, anvil shape, temperature differences, and variations in the applied torque.
In general, the result of any measurement differs somewhat from the true value of the quantity
being measured. The difference between the measured value (Vm) and the true value (Vt) of
the quantity represents static error or absolute error of measurement (Es), i.e.
Es = Vm - Vt
The error may be either positive or negative. For positive static errors the instrument reads
high and for negative static errors the instrument reads low.
From the experimentalist's viewpoint, the static correction, or simply correction (Cs), is more important than the static error. The static correction is defined as the difference between the true value and the measured value of a quantity.

Cs = Vt - Vm
The correction of the instrument reading is of the same magnitude as the error, but opposite in
sign, i.e., Cs = −Es.
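These definitions translate directly into code. The minimal Python sketch below (using the figures of Example 2.1 later in this chapter) computes the static error and the corresponding correction:

```python
def static_error(measured, true):
    """Static (absolute) error: Es = Vm - Vt."""
    return measured - true

def static_correction(measured, true):
    """Static correction: Cs = Vt - Vm = -Es."""
    return true - measured

# Illustrative figures taken from Example 2.1 below:
Vm, Vt = 73.5, 73.15            # thermometer reading and true temperature, degC
Es = static_error(Vm, Vt)       # +0.35 -> the instrument reads high
Cs = static_correction(Vm, Vt)  # -0.35 -> subtract 0.35 degC from the reading
print(f"Es = {Es:+.2f} degC, Cs = {Cs:+.2f} degC")
```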
Error specification or representation :
(i) Point accuracy, wherein the accuracy of an instrument is stated for one or more points in its range. For example, a given thermometer may be stated to read within ±0.5 °C between 100 °C and 200 °C. Likewise a scale of length may be read within 0.025 cm.
(ii) Percentage of true value or the relative error wherein the absolute error of measurement is
expressed as a percentage of true value of the unknown quantity.

error = (measured value − true value) / (true value) × 100 percent
      = (Vm − Vt) / Vt × 100 percent
The percentage error stated in this way is the maximum for any point in the range of the instrument; the absolute size of the error, however, diminishes with a drop in the true value.
(iii) Percentage of full scale deflection, wherein the error is calculated on the basis of the maximum value of the scale.
The accuracy specified above refers to the intrinsic accuracy of the instrument itself and does
not include procedural or personal performance.
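The difference between representations (ii) and (iii) is easy to see numerically. A short sketch, using the illustrative figures of Example 2.3 below (a 0.25 bar error on a 0 to 20 bar gauge):

```python
def pct_of_true_value(error, true_value):
    """Error representation (ii): percentage of the true value."""
    return error / true_value * 100.0

def pct_of_full_scale(error, full_scale):
    """Error representation (iii): percentage of full-scale deflection."""
    return error / full_scale * 100.0

error, full_scale = 0.25, 20.0                # bar
print(pct_of_full_scale(error, full_scale))   # 1.25 % of full scale
print(pct_of_true_value(error, 5.0))          # 5.0 % of a true value of 5 bar
```

Note how the same absolute error looks far larger relative to a small true value than relative to the full-scale value.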

Solve Examples 2.1 to 2.5


2.2.3 Calibration. The magnitude of the error and consequently the correction to be applied is
determined by making a periodic comparison of the instrument with standards which are
known to be constant. The entire procedure laid down for making, adjusting, or checking a
scale so that readings of an instrument or measurement system conform to an accepted
standard is called calibration. The graphical representation of the calibration record is called the calibration curve; this curve relates standard values of the input or measurand to the actual values of output throughout the operating range of the instrument.
A comparison of the instrument reading may be made with either (i) a primary standard, (ii) a
secondary standard of accuracy greater than the instrument to be calibrated or (iii) a known
input source. For example, we may calibrate a flow meter by comparing it with a standard flow
measurement facility at the National Bureau of Standards ; by comparing it with another flow
meter (a secondary standard) which has already been compared with a primary standard; or
by direct comparison with a primary measurement such as weighing a certain amount of water
in a tank and recording the time elapsed for this quantity to flow through the meter. The
calibration standards, along with their typical accuracies, for certain physical parameters are given in Table 2.1. The calibration standard should be at least an order of magnitude more accurate than the instrument being calibrated.

In a typical calibration curve (Fig. 2.1), ABC represents the readings obtained while ascending the scale; DBF represents the readings obtained while descending; KLM represents the median and is commonly accepted as the calibration curve. The term median here refers to the mean of a series of up and down readings. Quite often, the indicated values are plotted as the abscissa and the ordinate represents the deviation of the median from the true values.

A faired curve through the experimental points then represents the correction curve. This type of deviation presentation facilitates a rapid visual assessment of the accuracy of the instrument. The user looks along the abscissa for the value indicated by the instrument and then reads the correction to be applied.

A properly prepared calibration/correction curve gives information about the absolute static errors of the measuring device, the extent of the instrument's linearity or conformity, and the hysteresis and repeatability of the instrument.
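A minimal sketch of how such a record might be reduced, using the up-scale and down-scale bourdon-gauge readings of Example 2.6 at the end of this chapter; the median of the two traverses gives the calibration curve and Cs = Vt − Vm gives the correction curve:

```python
import numpy as np

# Dead-weight (true) pressures and gauge readings, kgf/cm^2 (Example 2.6)
true_p = np.array([5, 10, 15, 20, 25, 30], dtype=float)
up     = np.array([4.5, 9.6, 14.2, 18.0, 22.5, 28.0])   # ascending traverse
down   = np.array([7.0, 11.4, 16.2, 21.0, 26.0, 28.0])  # descending, re-ordered

median = (up + down) / 2.0      # mean of up and down readings (curve KLM)
correction = true_p - median    # Cs = Vt - Vm at each calibration point

for t, m, c in zip(true_p, median, correction):
    print(f"true {t:5.1f}   median reading {m:6.2f}   correction {c:+6.2f}")
```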

Solve Example 2.6


2.2.4 Hysteresis and dead zone : From the instrument calibration curve (Fig. 2.1), it would be noted that the magnitude of output for a given input depends upon the direction of the change of input. This dependence upon previous inputs is called hysteresis. Hysteresis is the maximum difference for the same measured quantity (input signal) between the upscale and downscale readings during a full-range traverse in each direction. The maximum difference is frequently specified as a percentage of full scale. Hysteresis results from the presence of irreversible phenomena such as mechanical friction, slack motion in bearings and gears, elastic deformation, and magnetic and thermal effects. Hysteresis may also occur in electronic systems due to heating and cooling effects which occur differentially under conditions of rising and falling input.
Dead zone is the largest range through which an input signal can be varied without initiating any response from the indicating instrument. Friction or play is the direct cause of a dead zone or band.
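With the same up/down traverse data, hysteresis can be estimated as the largest up-scale/down-scale difference and quoted as a percentage of full scale (a sketch, not a prescribed test procedure):

```python
up   = [4.5, 9.6, 14.2, 18.0, 22.5, 28.0]   # ascending readings
down = [7.0, 11.4, 16.2, 21.0, 26.0, 28.0]  # descending readings, same inputs
full_scale = 30.0

hysteresis = max(abs(u - d) for u, d in zip(up, down))
print(f"hysteresis = {hysteresis:.1f} "
      f"({hysteresis / full_scale * 100:.1f} % of full scale)")
```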

2.2.5 Drift: It is an undesired gradual departure of the instrument output over a period of time
that is unrelated to changes in input, operating conditions or load. Wear and tear, high stresses developing at some parts, and contamination of the primary sensing element cause drift. It may
occur in obstruction flow meters because of wear and erosion of the orifice plate, nozzle or
venturimeter. Drift occurs in thermocouples and resistance thermometers due to the
contamination of the metal and a change in its atomic or metallurgical structure. Drift occurs
very slowly and can be checked only by periodic inspection and maintenance of the
instrument.
2.2.6 Sensitivity : Sensitivity of an instrument or an instrumentation system is the ratio of the
magnitude of the response (output signal) to the magnitude of the quantity being measured
(input signal), i.e.,
Static sensitivity (K) = (change of output signal) / (change of input signal)
Sensitivity is represented by the slope of the calibration curve if the ordinates are expressed in the actual units. With a linear calibration curve, the sensitivity is constant. However, if the calibration curve is non-linear, the static sensitivity is not constant and must be specified in terms of the input value, as shown in Fig. 2.4.

Sensitivity has a wide range of units, and these depend upon the instrument or measurement system being investigated. For example, the operation of a resistance thermometer depends upon a change in resistance (output) with a change in temperature (input), and as such its sensitivity will have units of ohms/°C. The sensitivity of an instrument system is usually required to be as high as possible because then it becomes easier to take the measurement (read the output).
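For a non-linear calibration curve the sensitivity is the local slope and therefore changes along the range. A sketch with hypothetical resistance-thermometer data (the numbers are invented for illustration):

```python
import numpy as np

# Hypothetical resistance-temperature calibration points
temp = np.array([0.0, 25.0, 50.0, 75.0, 100.0])             # input, degC
resistance = np.array([100.0, 109.9, 119.4, 128.5, 137.2])  # output, ohm

# Local slope dR/dT at each point: the static sensitivity in ohm/degC
k = np.gradient(resistance, temp)
for t, ki in zip(temp, k):
    print(f"T = {t:5.1f} degC   k = {ki:.4f} ohm/degC")
```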

Let the different elements comprising a measurement system have static sensitivities of K1, K2, K3, etc. When these elements are connected in series or cascade (Fig. 2.5), the overall sensitivity is worked out from the following relations:

K1 = θ1/θi ;   K2 = θ2/θ1 ;   K3 = θo/θ2

Overall sensitivity (K) = θo/θi = (θ1/θi) × (θ2/θ1) × (θo/θ2)
                        = K1 × K2 × K3
The above relation is based upon the assumption that no variation occurs in the values of the individual sensitivities K1, K2, K3, etc., due to loading effects.
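The cascade rule is just a running product. The sketch below uses the element sensitivities quoted in Example 2.9 later in this chapter; the units multiply through in the same way as the numbers:

```python
import math

# Element sensitivities from Example 2.9 (loading effects neglected)
K1 = 0.25   # transducer,  mV/degC
K2 = 2.5    # amplifier,   V/mV
K3 = 4.0    # recorder,    mm/V

K = math.prod([K1, K2, K3])    # overall sensitivity, mm/degC
print(f"overall sensitivity K = {K} mm/degC")   # 2.5 mm/degC
```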
When the input to and output from the measurement system used with electrical/electronic equipment have the same form, the term gain is used rather than sensitivity. Likewise an increase in displacement with optical and mechanical instruments is described by the term amplification. Apparently the terms sensitivity, gain and amplification all mean the same thing; they describe the relationship between output and input. Further, when the input or output signal is changing with time, the term transfer function or transfer operator is used rather than sensitivity, gain or amplification.
2.2.7 Threshold and resolution : The smallest increment of quantity being measured which can
be detected with certainty by an instrument represents the threshold and resolution of the
instrument.
When the input signal to an instrument is gradually increased from zero, there will be some minimum value of input below which the instrument will not detect any output change. This minimum value is called the threshold of the instrument. Thus threshold defines the minimum value of input which is necessary to cause a detectable change from zero output. In a digital system, it is the input signal necessary to cause one least significant digit of the output reading to change. Threshold may be caused by backlash or internal noise.

When the input signal is increased from some non-zero value, one observes that the instrument output does not change until a certain input increment is exceeded. This increment is termed resolution or discrimination. Thus resolution defines the smallest change of input for which
there will be a change of output. With analogue instruments, the resolution is determined by
the ability of the observer to judge the position of a pointer on a scale, e.g. the level of mercury
in a glass tube. Resolution is usually reckoned to be no better than about 0.2 of the smallest
scale division. With digital instruments, resolution is determined by the number of neon tubes
taken to show the measured value. For example, if there are four neon tubes to represent
voltage measurement on a 1 volt range, one tube will be taken by the decimal point and the
others by digits to show readings up to a maximum of 0.999 volt. Thus the third digit shows or
resolves millivolts, and consequently the resolution is 1 mV.
Threshold and resolution may be expressed as an actual value or as a fraction or percentage
of full scale value.
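When resolution is quoted as a percentage of full scale, the smallest measurable change follows directly; a sketch using the figures of Example 2.11:

```python
def smallest_change(full_scale, resolution_pct_fs):
    """Smallest detectable change for a resolution given as % of full scale."""
    return full_scale * resolution_pct_fs / 100.0

# Example 2.11: 0-150 N force transducer, resolution 0.1 % of full scale
print(smallest_change(150.0, 0.1), "N")   # 0.15 N
```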
2.2.8. Precision, repeatability and reproducibility. These terms refer to the closeness of
agreement among several measurements of the same true value under the same operating
conditions. Proper checking and maintenance of the instrument should be carried out to ensure its reproducibility.
Let us differentiate between accuracy and precision as applied to the realm of measurements. Accuracy refers to the closeness or conformity to the true value of the quantity under measurement. Precision refers to the degree of agreement within a group of measurements, i.e., it prescribes the ability of the instrument to reproduce its readings over and over again for a constant input signal. This distinction can be elaborated by the following example.
The difference between accuracy and precision has been illustrated in Fig. 2.6. The arrangement may be thought to correspond to the game of darts where one is asked to strike a target represented by the centre circle. The centre circle then represents the true value, and the result achieved by the striker has been indicated by the mark 'X'.

Two further terms used to define reproducibility are :

Stability refers to the reproducibility of the mean reading of an instrument, repeated on different occasions separated by intervals of time which are long compared with the time of taking a reading. The conditions of use of the instrument remain unchanged.

Constancy refers to the reproducibility of the mean reading of an instrument when a constant input is presented continuously and the conditions of test are allowed to vary within specified limits. This variation may be due to changes in the external environmental conditions.

The above discussion also points out that it is possible to obtain high precision with poor accuracy, but not high accuracy with low precision. In other words, precision is a necessary prerequisite to accuracy, but it does not guarantee accuracy.
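The distinction also shows up statistically: the offset of the mean reading from the true value reflects (in)accuracy, while the scatter of repeated readings reflects precision. A small sketch with invented readings:

```python
import statistics

true_value = 100.0
# Hypothetical repeated readings: tightly grouped (high precision)
# but systematically offset from the true value (poor accuracy)
readings = [103.1, 103.0, 103.2, 102.9, 103.1]

bias = statistics.mean(readings) - true_value   # systematic offset
scatter = statistics.stdev(readings)            # spread about the mean

print(f"bias (accuracy): {bias:+.2f}")        # ~ +3.06 -> inaccurate
print(f"scatter (precision): {scatter:.3f}")  # ~ 0.114 -> precise
```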
2.2.9 Linearity : The working range of most instruments provides a linear relationship between the output (reading taken from the scale of the instrument) and the input (measurand, the signal presented to the measuring system). This aspect tends to facilitate more accurate data reduction. Linearity is defined as the ability to reproduce the input characteristics symmetrically, and this can be expressed by the straight-line equation
y = mx + c
where y is the output, x the input, m the slope and c the intercept. Apparently the
closeness of the calibration curve to a specified straight line is the linearity of the instrument.
Any departure from the straight-line relationship is non-linearity. The non-linearity may be due to non-linear elements in the measurement device, mechanical hysteresis, viscous flow or creep, and elastic after-effects in the mechanical system. In a nominally linear measurement device, the non-linearity may take different forms, as illustrated in Fig. 2.7.

(i) Theoretical slope linearity : Maximum departure a from the theoretical straight line OA passing through the origin. The line OA refers to the straight line between the theoretical end points, and it is drawn without regard to any experimentally determined values.
(ii) End point linearity : Maximum departure b from the straight line OB passing through the terminal readings (experimental end points: zero and full-scale positions).

(iii) Least square linearity : Maximum departure c from the best-fit straight line OC determined by the least-squares technique.
In most instruments, the linearity is taken to be the maximum deviation from a linear relationship between input and output, i.e., from a constant sensitivity, and is often expressed as a percentage of full scale.
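A sketch of the least-squares measure: fit a straight line to hypothetical calibration data (invented for illustration) and report the maximum departure as a percentage of full scale:

```python
import numpy as np

# Hypothetical calibration data with mild non-linearity
x = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0])   # input
y = np.array([0.0, 4.8, 9.9, 15.3, 20.4, 25.1, 29.6])    # output

m, c = np.polyfit(x, y, 1)       # best-fit straight line y = m*x + c
departure = y - (m * x + c)      # deviation from the fitted line

nonlinearity = np.max(np.abs(departure)) / y.max() * 100.0
print(f"least-squares non-linearity = {nonlinearity:.2f} % of full scale")
```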
The calculation of measurement error requires numerical values of accuracy, resolution,
linearity etc. for the instrument being used. For the majority of laboratory instruments, these data are given in the manufacturer's handbook. However, for some instruments such as micrometers, vernier calipers, thermometers and materials testing equipment, the data are given in the standards maintained by the country.
2.2.10 Some other terms associated with the static performance of an instrument are :

Tolerance : The range of inaccuracy which can be tolerated in measurements; it is the maximum permissible error. For example, the tolerance would be 1% when an inaccuracy of 1 bar can be tolerated in a 100 bar value of pressure.

Readability and least count : The term readability indicates the closeness with which the scale of the instrument may be read. The term least count represents the smallest difference that can be detected on the instrument scale. Both readability and least count are dependent on the scale length, spacing of graduations, size of the pointer and parallax effects.

Backlash : The maximum distance or angle through which any part of a mechanical system may be moved in one direction without applying appreciable force or motion to the next part in the mechanical system.

Zero stability : A measure of the ability of the instrument to return to a zero reading after the measurand has returned to zero and other variations (temperature, pressure, humidity, vibration etc.) have been removed.

2.3 Dynamic terms and characteristics


When instruments are required to measure an input which is varying with time, the dynamic or transient behavior of the instrument becomes as important as the static behavior. The signals cannot be impressed upon it instantaneously, and the masses and capacitances (thermal, electrical, or fluid) introduce slowness or sluggishness in the measurement system. A pure time delay may also be encountered when the instrument has to wait for some reactions to take place. Consequently the system does not settle to its equilibrium steady-state condition immediately after the application of the input signal; it does so only after passing through a transient period.
Certain terms used with dynamic systems are defined below :
2.3.1 Speed of response and measuring lag. In a measuring instrument the speed of
response or responsiveness is defined as the rapidity with which an instrument responds to a
change in the value of the quantity being measured. Measuring lag refers to retardation or
delay in the response of an instrument to a change in the input signal. The lag is caused by
conditions such as capacitance, inertia, or resistance.

2.3.2 Fidelity and dynamic error : Fidelity of an instrumentation system is defined as the
degree of closeness with which the system indicates or records the signal which is impressed
upon it. It refers to the ability of the system to reproduce the output in the same form as the
input. If the input is a sine wave then for 100 per cent fidelity, the output should also be a sine
wave. The difference between the indicated quantity and the true value of the time-varying quantity is the dynamic error; here the static error of the instrument is assumed to be zero.
2.3.3 Overshoot. Because of mass and inertia, a moving part, e.g., the pointer of the instrument, does not immediately come to rest in the final deflected position. The pointer goes beyond the steady state, i.e., it overshoots (Fig. 2.8).

The overshoot is defined as the maximum amount by which the pointer moves beyond the
steady state.
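Numerically, the overshoot is the excess of the first peak over the final steady reading; the figures of Example 2.13 later in this chapter give a quick illustration:

```python
# Example 2.13: pointer swings to 102.5 and settles at 101.3 kgf/cm^2
# for a step input of 100 kgf/cm^2
peak, final, true_input = 102.5, 101.3, 100.0

overshoot = peak - final
print(f"overshoot = {overshoot:.1f} kgf/cm^2 "
      f"({overshoot / final * 100:.2f} % of the final reading)")
print(f"gauge error = {(final - true_input) / true_input * 100:.1f} % of true value")
```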

2.3.4 Dead time and dead zone : Dead time is defined as the time required for an instrument to begin to respond to a change in the measured quantity. It represents the time before the instrument begins to respond after the measured quantity has been altered. Dead zone defines the largest change of the measurand to which the instrument does not respond. Dead zone is the result of friction, backlash or hysteresis in the instrument.

Some of the dynamic terms are shown graphically in Fig. 2.9, where the measured quantity and the instrument readings are plotted as functions of time.
2.3.5 Frequency response : Maximum frequency of the measured variable that an instrument
is capable of following without error. The usual requirement is that the frequency of measurand
should not exceed 60 percent of the natural frequency of the measuring instrument.

2.4 Standard test inputs


The dynamic performance of both measuring and control systems is determined by applying
some known and predetermined input signal to its primary sensing element and then studying
the behavior of the output signal. The most common standard inputs used for dynamic analysis
have been illustrated in Fig. 2.10, and these are:

(i) Step function which is a sudden change from one steady value to another. The step input is
mathematically represented by the relationship
θi = 0 at t < 0
θi = θc at t ≥ 0
where θc is a constant value of the input signal θi. The capacity of the system to cope with
changes in the input signal is indicated by the transient response.

(ii) Ramp or linear function wherein the input varies linearly with time. The ramp input is
mathematically represented as :
θi = 0 at t < 0
θi = Rt at t ≥ 0
where R is the slope of the input-versus-time relationship.
The ramp response is indicative of the steady-state error in following changes in the input signal.
(iii) Sinusoidal or sine-wave function, where the input has a cyclic variation; the input varies sinusoidally with a constant maximum amplitude. Mathematically, it may be represented as:
θi = A sin ωt
where A is the amplitude and ω is the frequency in rad/s.
The frequency or harmonic response is a measure of the capability of the system to
respond to inputs of cyclic nature.
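The three standard inputs are straightforward to generate numerically; a sketch in Python (the amplitude, slope and frequency are chosen arbitrarily):

```python
import numpy as np

t = np.linspace(-0.5, 2.0, 251)   # time axis, s; includes t < 0

step = np.where(t < 0, 0.0, 1.0)          # unit step applied at t = 0
ramp = np.where(t < 0, 0.0, 2.0 * t)      # ramp, slope R = 2 units/s
sine = 1.5 * np.sin(2 * np.pi * 1.0 * t)  # A = 1.5, frequency 1 Hz

for name, sig in (("step", step), ("ramp", ramp), ("sine", sine)):
    print(name, sig[:3], "...", sig[-3:])
```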
A general measurement system can be mathematically described by the following differential equation:

(An D^n + An-1 D^(n-1) + ... + A1 D + A0) θo = (Bm D^m + Bm-1 D^(m-1) + ... + B1 D + B0) θi

where the A's and B's are constants depending upon the physical parameters of the system, D^k is the derivative operator of order k (D^k = d^k/dt^k), θo is the information out of the measurement system and θi is the input information. The time variation of the input or driving function may correspond to a step input, a ramp input, a sinusoidal input or any combination of these.
The order of the measurement system is generally classified by the value of n:
* Zero order system : n = 0 and A1, A2, ..., An = 0
* First order system : n = 1 and A2, A3, ..., An = 0
* Second order system : n = 2 and A3, A4, ..., An = 0

2.5 Zero, first and second order systems


2.5.1 Zero order systems : Consider an ideal measuring system, i.e., a system whose output is directly proportional to the input, no matter how the input varies. The output is a faithful reproduction of the input without any distortion or time lag. The mathematical equation relating output to input is of the form
θo = K θi
where K is the sensitivity of the system. This equation of the zero-order system is obtained when n is set equal to zero in the general equation for a measurement system. That gives:
A0 θo = B0 θi
or   θo = (B0/A0) θi = K θi

The static sensitivity is the only parameter which characterizes a zero order system and its
value can be obtained through the process of static calibration. A block diagram representing a zero-order system is shown in Fig. 2.11(a).

Some examples of zero-order systems are: mechanical levers, amplifiers, and a linear electrical potentiometer which gives an output voltage proportional to the displacement of the wiper.
2.5.2 First-order systems : The behavior of a first-order system is represented by a first-order
differential equation of the form
A1 D θo + A0 θo = B0 θi
(obtained by substituting n = 1 in the general equation; D is the differential operator d/dt)
This equation may be manipulated and rewritten in the following standard form:

(A1/A0) dθo/dt + θo = (B0/A0) θi

i.e.,   τ dθo/dt + θo = K θi

where τ is the time constant (τ = A1/A0) and K is the static sensitivity (K = B0/A0).
In terms of the D-operator, where D = d/dt and D^2 = d^2/dt^2, we have:

τ D θo + θo = K θi
(τ D + 1) θo = K θi
θo/θi = K/(τ D + 1)

The above equation represents the standard form of the transfer operator for a first-order system; its block diagram has been indicated in Fig. 2.11(b).

Some examples of first-order systems are: temperature measurement by mercury-in-glass thermometers, thermocouples and thermistors; the build-up of air pressure in bellows; resistance-capacitance networks; and the velocity of a freely falling mass.
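A minimal simulation of the first-order step response, τ dθo/dt + θo = K θi, integrated with a simple Euler scheme; the values of τ and K are illustrative (roughly what one might assume for a sheathed thermocouple):

```python
tau, K = 10.0, 1.0      # illustrative time constant (s) and static sensitivity
theta_i = 100.0         # step input applied at t = 0
dt, t_end = 0.01, 10.0  # integrate over exactly one time constant

theta_o = 0.0
for _ in range(int(t_end / dt)):
    # Euler step of d(theta_o)/dt = (K*theta_i - theta_o) / tau
    theta_o += dt * (K * theta_i - theta_o) / tau

# Exact solution: theta_o(t) = K*theta_i*(1 - exp(-t/tau)); at t = tau the
# output has reached about 63.2 % of its final value.
print(f"theta_o(tau) = {theta_o:.2f}  (final value {K * theta_i:.1f})")
```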
2.5.3 Second-order systems : The input/output relationship of a second order system is
described by a differential equation of the form :

A2 d^2θo/dt^2 + A1 dθo/dt + A0 θo = B0 θi

(obtained by substituting n=2 in the general equation)


Dividing both sides by A0 and letting

ωn = sqrt(A0/A2) = undamped natural frequency, rad/s
ζ = A1/(2 sqrt(A0 A2)) = damping ratio, dimensionless
K = B0/A0 = static sensitivity or steady-state gain

we obtain:

(1/ωn^2) d^2θo/dt^2 + (2ζ/ωn) dθo/dt + θo = K θi

or, in terms of the D-operator,

(D^2/ωn^2 + (2ζ/ωn) D + 1) θo = K θi
θo/θi = K/(D^2/ωn^2 + (2ζ/ωn) D + 1)

The block diagram of a second-order system or instrument is given in Fig. 2.11 (c).

Some examples of second-order instruments are :


* spring-mass systems employed for acceleration and force measurements
* piezoelectric pickups
* U.V. galvanometers and the pen control system on X-Y plotters
Most mechanical instruments invariably consist of a spring and a moving mass, and their combination provides a system which will oscillate naturally at a given frequency. The amplitude of the oscillation is affected by damping, which is a means of dissipating energy in the system. Damping may occur naturally (e.g. hysteresis in materials, viscous friction at bearings etc.) or may be introduced intentionally (e.g. a dashpot similar to the automobile damper). The damping force opposes motion and is taken as proportional to the linear/angular velocity.
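A sketch of the second-order step response, integrated numerically; the values of ωn and ζ are illustrative, and choosing ζ < 1 (underdamped) reproduces the oscillatory overshoot described above:

```python
wn, zeta, K = 10.0, 0.4, 1.0   # illustrative natural frequency (rad/s), damping
theta_i = 1.0                  # unit step input at t = 0

dt, t_end = 1e-4, 2.0
theta, dtheta = 0.0, 0.0       # output and its first derivative
peak = 0.0
for _ in range(int(t_end / dt)):
    # theta'' = wn^2*(K*theta_i - theta) - 2*zeta*wn*theta'
    ddtheta = wn**2 * (K * theta_i - theta) - 2.0 * zeta * wn * dtheta
    dtheta += dt * ddtheta
    theta += dt * dtheta
    peak = max(peak, theta)

final = K * theta_i
print(f"peak = {peak:.3f}; overshoot = {(peak - final) / final * 100:.1f} % "
      f"of the final value")   # ~25 % for zeta = 0.4
```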

Examples to be solved :
Example 2.1. A thermometer reads 73.5 °C and the true value of the temperature is 73.15 °C. Determine the error and the correction for the given thermometer.
Example 2.2. A temperature transducer has a range of 0 °C to 100 °C and an accuracy of 0.5 percent of the full-scale value. Find the error in a reading of 55 °C.
Example 2.3 A pressure gauge of range 0 to 20 bar is said to have an error of 0.25 bar when calibrated by the manufacturer. Calculate the percentage error on the basis of the maximum scale value. What would be the possible error as a percentage of the indicated value when a reading of 5 bar is obtained in a test?
Example 2.4 A pressure gauge having a range of 1000 kN/m2 has a guaranteed accuracy of 1 percent of full-scale deflection. (i) What would be the possible readings for a true value of 100 kN/m2? (ii) Estimate the possible readings if the instrument has an error of 1% of the true value.
Example 2.5 The pressure at a remote point has been measured by a system comprising a
transmitter, a relay and a receiver element. The specified accuracy limits are :

Transmitter : within ±0.2%
Relay : within ±1.1%
Receiver : within ±0.7%
Estimate the maximum possible error and the root-square accuracy of the measurement
system.
Example 2.6 : The following data were taken while calibrating a bourdon gauge with a dead-weight gauge tester:
Actual pressure (kgf/cm2) :  5    10    15    20    25    30    25    20    15    10    5
Gauge reading (kgf/cm2)   :  4.5  9.6  14.2  18.0  22.5  28.0  26.0  21.0  16.2  11.4  7.0
Draw the calibration, the error and the correction curves. Make suitable comments on your
results.
Example 2.7 A spring scale requires a change of 15 kgf in the applied weight to produce a 2 cm change in the deflection of the spring scale. Determine the static sensitivity.
Example 2.8 Explain the following statements:
(i) A galvanometer has a specified sensitivity of 15 mm/µA.
(ii) An automatic balance has a quoted sensitivity of 1 vernier division/0.1 mg.
Example 2.9 A measuring system consists of a transducer, an amplifier and a recorder, and
their individual sensitivities are stated as follows:
Transducer sensitivity K1 = 0.25 mV/°C
Amplifier gain K2 = 2.5 V/mV
Recorder sensitivity K3 = 4 mm/V
What would be the overall sensitivity of the measuring system?
Example 2.10 A pressure measuring system consists of a piezoelectric transducer, a charge amplifier and an ultraviolet chart recorder. The sensitivities of these elements are stated as follows:
Piezoelectric transducer, K1 = 8.5 pC/bar
Charge amplifier, K2 = 0.004 V/pC
Ultraviolet chart recorder, K3 = 20 mm/V
What would be the deflection on the chart due to a pressure change of 30 bar?
Example 2.11 How is resolution reckoned for analogue and digital read-out devices?
A force transducer measures a range of 0 to 150 N with a resolution of 0.1 percent of full scale. Find the smallest change which can be measured.
Example 2.12. Distinguish between threshold and resolution (or discrimination). The pointer scale of a thermometer has 100 uniform divisions, the full-scale reading is 200 °C and 1/10th of a scale division can be estimated with a fair degree of accuracy. Determine the resolution of the instrument.
Example 2.13 When a step input of 100 kgf/cm2 is applied to a pressure gauge, the pointer swings to a pressure of 102.5 kgf/cm2 and finally comes to rest at 101.3 kgf/cm2. Determine the overshoot of the gauge reading and express it as a percentage of the final reading. Also calculate the percentage error of the gauge.
Example 2.14 The dynamic performance of a thermocouple in a protective sheath has been
described by the following differential equation :

25 dθo/dt + 2.5 θo = 1.25 × 10^(-5) θi

where θo is the output in volts and θi is the input temperature in °C. Determine the time constant and the static sensitivity of the thermocouple.
