MODULE 1
1. Most of the quantities can be converted by transducers into electrical or electronic signals.
2. Electrical or electronic signals can be amplified, filtered, multiplexed, sampled and measured.
3. The measurement can easily be converted into digital form.
4. The measured signals can be transmitted over long distances.
5. Higher sensitivity, low power consumption and higher degree of reliability.
An element of the instrument which makes the first contact with the quantity to be measured is
called the primary sensing element. For example, in an ammeter the coil carrying the current to be
measured is the primary sensing element. In most cases the primary sensing element is followed
by a transducer, which converts the measurand into a corresponding electrical signal.
Examples:
The output of the primary sensing element is in an electrical form such as voltage, frequency,
etc., which may not be suitable for the actual measurement system. For example, if the
measurement system is digital, the analog signal obtained from the primary sensing element is not
suitable for it; in that case the variable conversion element is an analog-to-digital converter. Some
instruments do not need a variable conversion element.
Example: ADC
The variable manipulation element manipulates the signal while preserving its original nature.
This is needed because the output of the previous stage is often not enough to drive the next stage.
Manipulation involves a change in the numerical value of the signal.
For example, an amplifier amplifies the magnitude of its input while keeping the original nature
of the signal. If instead the output of the previous stage must be reduced, attenuators are used as
the variable manipulation element.
Sometimes the signals must be processed by operations such as modulation, clipping and
clamping to obtain the signal in a precise and acceptable form from a highly distorted form. Such
processing is called signal conditioning, and it is also done in the second stage. Hence the second
stage is called the data conditioning or signal conditioning element.
Examples:
When the elements of the system are physically separated, it is necessary to transmit data
from one stage to another. This is achieved by the data transmission element. The signal
conditioning and data transmission elements together are called the intermediate stage of an instrument.
The transmitted data is finally used by the system for monitoring, controlling or analyzing
purposes, so the user receives the information in a form appropriate to the purpose for which it
is intended. This function is performed by the data presentation element.
If the data is to be monitored then visual display devices are used as data presentation element.
If the signal is to be recorded for analysis purpose then magnetic tapes, recorders, high speed
cameras are used as data presentation elements.
For control and analysis purpose microprocessors, computers and microcontrollers are used as
data presentation elements.
Examples:
Application:
The moving coil is the primary sensing element. The magnet and coil together act as the data
conditioning stage, converting the current in the coil into a force. This force is transmitted to the
pointer through mechanical linkages, which act as the data transmission element. The pointer and
scale act as the data presentation element.
Performance Characteristics
Static characteristics
Static characteristics are considered for instruments which are used to measure an unvarying
process condition.
1. Accuracy, Error and Correction
Accuracy is defined as the measure of closeness of output reading of an instrument to the
accepted standard value. Accuracy depends on the following factors:
Accuracy of the observer
Static Error: The difference between best measured value and the true value of the quantity is
called as static error.
Es = Vm − Vt
Where Vm = measured value of quantity
Vt = true value of quantity
Es = static error
Relative Static Error: The relative static error is defined as the ratio of absolute static error to
the true value of quantity under measurement.
Er = Es/Vt = (Vm − Vt)/Vt
Correction
The difference between the true value and the measured value of a quantity is called correction (Cs)
Cs = Vt − Vm = −Es
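The three relations above can be sketched in Python; the voltmeter reading used in the example is invented for illustration.

```python
# Illustrative sketch of static error, relative error and correction
# for a single reading (function names are my own, not from the text).

def static_error(vm, vt):
    """Es = Vm - Vt (measured value minus true value)."""
    return vm - vt

def relative_error(vm, vt):
    """Er = Es / Vt = (Vm - Vt) / Vt."""
    return (vm - vt) / vt

def correction(vm, vt):
    """Cs = Vt - Vm = -Es (value to add to the reading)."""
    return vt - vm

# Example: a voltmeter reads 99.5 V when the true value is 100 V.
vm, vt = 99.5, 100.0
print(static_error(vm, vt))    # -0.5 V
print(relative_error(vm, vt))  # -0.005, i.e. -0.5 %
print(correction(vm, vt))      # +0.5 V
```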
2. Precision/Reproducibility/repeatability
Precision: It is the measure of consistency of the instrument output for a given input, i.e.
successive readings for the same input do not differ.
Repeatability: describes the closeness of output readings when the same input is applied
repetitively over a short period of time, with the same measurement conditions, same instrument
and observer, same location and same conditions of use maintained throughout.
Reproducibility: describes the closeness of output readings for the same input when there are
changes in the method of measurement, observer, measuring instrument, location, conditions of
use and time of measurement.
3. Tolerance: Tolerance is a term that is closely related to accuracy and defines the maximum
error that is to be expected in some value.
4. Range and Span:
The range of an instrument defines the minimum and maximum values of a quantity that
the instrument is designed to measure.
Span represents the algebraic difference between the upper and lower range values of the
instrument
E.g. Range: 2 kg to 50 kg; Span = 50 − 2 = 48 kg
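The span calculation can be written as a one-line helper; this minimal sketch uses the 2 kg to 50 kg range from the example.

```python
def span(lower, upper):
    """Span = upper range value - lower range value (algebraic difference)."""
    return upper - lower

# Range 2 kg .. 50 kg from the example above.
print(span(2, 50))  # 48
```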
5. Drift:
It is an undesired gradual departure of instrument output over a period of time that is
unrelated to changes in input, operating conditions or load. Drift may be caused by several
factors and is classified as follows:
Zero drift: the drift is called zero drift if the whole of the instrument calibration gradually shifts
by the same amount.
Span drift: if the calibration from zero upwards changes proportionally, it is called span or
sensitivity drift.
Zonal drift: when the drift occurs only over a portion of the span of an instrument, it is called
zonal drift.
6. Linearity
It is normally desirable that the output reading of an instrument is linearly proportional to
the quantity being measured. The ability to reproduce the input characteristics symmetrically is
called linearity, and it can be expressed by a straight-line equation. In other words, linearity is
specified as the maximum deviation of any of the calibration points from the ideal straight line.
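As a sketch of this definition, linearity can be quoted as the largest deviation of the calibration points from the straight line joining the end points; the calibration data below are invented for illustration.

```python
# Sketch: linearity as the maximum deviation of calibration points from
# the ideal straight line through the two end points (invented data).
inputs  = [0, 1, 2, 3, 4]
outputs = [0.0, 1.1, 2.0, 2.9, 4.0]

# Straight line through the end points: y = m*x + c
m = (outputs[-1] - outputs[0]) / (inputs[-1] - inputs[0])
c = outputs[0]

max_dev = max(abs(y - (m * x + c)) for x, y in zip(inputs, outputs))
print(round(max_dev, 3))  # often quoted as a percentage of full scale
```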
7. Threshold
It defines the minimum value of input which is necessary to cause a detectable change
from zero output.
8. Resolution
When the input is slowly increased from a non-zero value, it is observed that the output
does not change until a certain increment is exceeded. This increment is called resolution.
9. Hysteresis
For a given value of input, the output may be different depending on whether input is
increasing or decreasing. Hysteresis is the difference between these two values of output.
If the input to the instrument is steadily increased from a negative value, the output also
increases, as shown by curve 1. But if the input is now steadily decreased, the output does not
retrace the same curve; it lags by a certain value and traces curve 2 as shown in the figure. The
difference between the two curves is called hysteresis.
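A minimal sketch of how hysteresis could be computed from tabulated readings, assuming the output is recorded at the same input points on the increasing and decreasing sweeps; all figures are invented.

```python
# Sketch: hysteresis as the largest gap between the output taken while
# the input increases (curve 1) and while it decreases (curve 2), at the
# same input points. The readings below are made up for illustration.
inputs         = [0, 1, 2, 3, 4, 5]
out_increasing = [0.0, 0.9, 1.9, 3.0, 4.1, 5.0]
out_decreasing = [0.0, 1.2, 2.3, 3.4, 4.3, 5.0]

hysteresis = max(abs(u - d) for u, d in zip(out_increasing, out_decreasing))
print(round(hysteresis, 3))  # 0.4
```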
Dead zone is defined as the range of different input values over which there is no
change in output value.
When elements are arranged in series, the overall sensitivity is the product of the individual
sensitivities.
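This product rule can be illustrated with a hypothetical three-element chain (a temperature transducer, an amplifier and a recorder); the sensitivity values are invented.

```python
from math import prod

# Sketch: overall static sensitivity of series-connected elements is the
# product of the individual sensitivities (illustrative values).
#   transducer: 0.5 mV/degC -> amplifier: gain 100 V/V -> recorder: 10 mm/V
sensitivities = [0.5e-3, 100, 10]   # V/degC, V/V, mm/V
overall = prod(sensitivities)       # mm/degC
print(overall)  # 0.5 mm per degC
```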
Dynamic Characteristics
The dynamic characteristics of a measuring instrument describe its behavior between the
time a measured quantity changes value and the time when the instrument output attains a steady
value in response.
(i) Speed of response : It is the rapidity with which an instrument responds to changes in
measured quantity
(ii) Fidelity: It is the degree to which an instrument indicates changes in the measured quantity
without dynamic error.
(iii) Lag: It refers to retardation or delay in the response of a measurement system to changes in
the measured quantity.
Retardation type lag: the response of the measurement system begins immediately after a change
in the measured quantity has occurred.
Time delay type lag: in this case the response begins after a dead time following the application of the input.
(iv) Dynamic Error: it is the difference between the true value of quantity changing with time
and the value indicated by instrument.
ERRORS IN MEASUREMENTS
Errors are inherent in the process of making measurements and in instruments used for
measurements.
1. Gross Errors
Arise from human mistakes in reading instruments and in recording and calculating
measurement results.
2. Systematic Errors
Occur due to shortcomings of instruments, such as defective or worn parts, ageing, or the
effects of the environment on the instrument.
3 types:
a) Instrumental Error
b) Environmental Error
Due to conditions external to measuring device
Eg: temperature, humidity, pressure, vibrations etc
Can be avoided by
c) Observational Error
3. Random Errors
Reasons
Random errors are caused by sudden changes in experimental conditions, by noise, and by
fatigue of the persons carrying out the measurement.
These errors may be either positive or negative.
Examples of causes of random errors are changes in humidity, unexpected changes in
temperature and fluctuations in voltage.
These errors may be reduced by taking the average of a large number of readings.
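The effect of averaging can be demonstrated with simulated readings; the true value, noise level and number of readings below are arbitrary choices for illustration.

```python
import random

# Sketch: averaging many readings reduces random error. Each "reading"
# is the true value plus zero-mean Gaussian noise; the mean of n
# readings scatters roughly sigma/sqrt(n) around the true value.
random.seed(1)  # fixed seed so the run is repeatable
true_value, sigma, n = 10.0, 0.5, 1000

readings = [true_value + random.gauss(0, sigma) for _ in range(n)]
mean = sum(readings) / n

print(abs(mean - true_value))  # far smaller than the per-reading sigma
```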
Sources of Errors
Sources of systematic errors
1. System disturbance during measurement.
2. Effect of environmental changes
E.g. Humidity, Temperature Changes, Stray electric and magnetic fields
3. Bent meter needles
4. Use of un-calibrated instruments
5. Drift in instrument characteristics
6. Poor cabling practices
Sources of random errors
1. Parallax error: arises when measurements are taken by human observation of an analog
meter.
2. Response time: the time taken by an instrument to show a 63.2% change in reading in
response to a step input. This factor contributes to the uncertainty of measurements.
3. Noise: any signal that does not convey any information. It is reduced by
1. Filtering
2. Careful selection of components
3. Shielding and isolation of measuring system
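The 63.2% figure quoted for response time comes from the first-order step response 1 − e^(−t/τ): after one time constant τ the output has covered 63.2% of the total change. A short sketch (the time constant chosen is arbitrary):

```python
import math

# Sketch: fraction of the final value reached by a first-order
# instrument at time t after a unit step input.
def step_response(t, tau):
    """Return 1 - exp(-t/tau), the fractional response at time t."""
    return 1 - math.exp(-t / tau)

tau = 2.0  # seconds, illustrative time constant
print(round(step_response(tau, tau), 3))      # 0.632 after one tau
print(round(step_response(5 * tau, tau), 3))  # 0.993 (essentially settled)
```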
Derived units are those units derived from fundamental units, e.g. m/s.
Dimensions:
Every quantity has a quality that distinguishes it from all other quantities.
Standards:
1. Primary
2. Secondary
3. Working
1. Primary Standards: the highest standard of either a base unit or a derived unit is called a
primary standard.
These standards are copies of international prototypes and are kept throughout the world in
national standard laboratories.
They constitute the ultimate basis of reference and are used for the purpose of verification and
calibration of secondary standards.
They are invariant with time.
They have the highest possible accuracy and are not available for use outside national laboratories.
2. Secondary standards: Reference calibrated standards designed and calibrated from primary
standards
They are periodically sent to national laboratories for calibration.
Kept by measurement laboratories and industrial organizations to check and calibrate the general
tools for their accuracy and precision.
3. Working standards:
Normal standards used by workers and technicians who actually carry out the measurements.
Instrument calibration
Calibration consists of comparing the output of the instrument or sensor under test against the
output of an instrument of known accuracy when the same input (the measured quantity) is applied
to both instruments. This procedure is carried out for a range of inputs covering the whole
measurement range of the instrument or sensor.
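The comparison described above can be sketched as a point-by-point error and correction table; all readings below are invented for illustration.

```python
# Sketch of the calibration comparison: the same inputs are applied
# across the measurement range, and the device under test is compared
# point by point against a reference of known accuracy (invented data).
reference  = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]  # known-accurate outputs
under_test = [0.1, 10.3, 20.4, 30.6, 40.5, 50.8]  # instrument under test

errors      = [dut - ref for ref, dut in zip(reference, under_test)]
corrections = [-e for e in errors]  # Cs = -Es, to add to future readings

for ref, e, c in zip(reference, errors, corrections):
    print(f"input {ref:5.1f}: error {e:+.2f}, correction {c:+.2f}")
```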
1. Instrument calibration has to be repeated at prescribed intervals because the characteristics of any
instrument change over a period.
2. Changes in instrument characteristics are brought about by factors such as mechanical wear, and
the effects of dirt, dust, fumes, chemicals and temperature changes in the operating environment.
3. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an
instrument receives and hence on the amount of wear and the length of time that it is subjected to
the operating environment.
4. Some drift also occurs even in storage, as a result of ageing effects in components within the
instrument.
The calibration facilities provided within the instrumentation department of a company provide the
first link in the calibration chain. Instruments used for calibration at this level are known as working
standards.
However, over the longer term, the characteristics of even such standard instruments will drift,
mainly due to ageing effects in components within them. Therefore, over this longer term, a
programme must be instituted for calibrating working standard instruments at appropriate intervals
of time against instruments of yet higher accuracy.
The instrument used for calibrating working standard instruments is known as a secondary
reference standard. This must obviously be a very well-engineered instrument that gives high
accuracy and is stabilized against drift in its performance with time.
This implies that it will be an expensive instrument to buy. It also requires that the environmental
conditions in which it is used be carefully controlled in respect of ambient temperature, humidity
etc.
When the working standard instrument has been calibrated by an authorized standards laboratory, a
calibration certificate will be issued. This will contain at least the following information:
This describes the highest level of accuracy that is achievable in the measurement of any particular
physical quantity.