INSTRUMENTATION LABORATORY.

SIEL 3671 BY ISAAC KUMA YEBOAH.

Operating Characteristics.
Operating characteristics include details about the measurement performed by, the operation of, and the environmental effects on the measuring instrument. Measurement: A measuring instrument can measure any value of a variable within its range of measurement. The range is defined by the lower range limit and the upper range limit. The span is the difference between the upper range limit and the lower range limit.

Resolution: The resolution of a measuring instrument is a single step of the output. Resolution is expressed as a percentage of the output span of the instrument. The maximum resolution: When the size of the steps varies through the range of the instrument, the maximum resolution is the size of the largest single step. The average resolution: This, expressed as a percentage of output span, is 100 divided by the total number of steps over the range of the instrument. Average resolution (%) = 100/N, where N represents the total number of steps.
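The average resolution formula above can be sketched as a small calculation. The 1000-step figure is a hypothetical example, not from the text:

```python
def average_resolution_percent(total_steps: int) -> float:
    """Average resolution as a percentage of output span: 100 / N."""
    return 100.0 / total_steps

# Hypothetical example: an instrument whose output moves in N = 1000
# discrete steps over its full range.
print(average_resolution_percent(1000))  # -> 0.1 (% of span per step)
```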

The dead band: Is the smallest change in the measured variable that will result in a measurable change in the output. A measuring instrument cannot measure changes in the measured variable that are smaller than its dead band. Threshold is another name for dead band. The sensitivity: Is the ratio of the change in output to the change in the input that caused it. Sensitivity and gain are both defined as a change in output divided by the corresponding change in input. However, sensitivity refers to static values, whereas gain usually refers to the amplitude of sinusoidal signals.
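The static sensitivity defined above is a simple ratio. A minimal sketch, with hypothetical thermocouple figures chosen for illustration:

```python
def sensitivity(delta_output: float, delta_input: float) -> float:
    """Static sensitivity: change in output divided by the change in input."""
    return delta_output / delta_input

# Hypothetical example: a temperature sensor whose output rises 0.8 mV
# when the input temperature rises 20 degrees C.
print(sensitivity(0.8, 20.0))  # -> 0.04 (mV per degree C)
```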

Operation.
The reliability of a measuring instrument is the probability that it will do its job for a specified period of time under a specified set of conditions. The conditions include limits on the operating environment, the amount of overrange and the amount of drift of the output. Overrange: Is any excess in the value of the measured variable above the upper range limit or below the lower range limit.

When an instrument is subject to an overrange, it does not immediately return to operation within specifications when the overload is removed. A period of time called the recovery time is required to overcome the saturation effect of the overload. The overrange limit is the maximum overrange that can be applied to a measuring instrument without causing damage or permanent change in the performance of the device. Thus one reliability condition is that the measured variable does not exceed the overrange limit.

Zero drift: Is a change in the output of the measuring instrument while the measured variable is held constant at its lower limit. Sensitivity drift: Is a change in the sensitivity of the instrument over the specified period. Zero drift raises or lowers the entire calibration curve of the instrument. Sensitivity drift changes the slope of the calibration curve. The reliability conditions specify an allowable amount of zero drift and sensitivity drift.
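The distinction between the two kinds of drift can be illustrated with a linear instrument y = m·x + b: zero drift adds a constant offset to b (raising or lowering the whole curve), while sensitivity drift alters the slope m. The numeric values below are hypothetical:

```python
def output(x: float, m: float = 0.16, b: float = 4.0,
           zero_drift: float = 0.0, sens_drift: float = 0.0) -> float:
    """Output of a linear instrument y = m*x + b, with drift terms applied.

    zero_drift shifts the entire calibration curve up or down;
    sens_drift changes its slope.
    """
    return (m + sens_drift) * x + (b + zero_drift)

print(output(50))                   # ideal output at mid-range
print(output(50, zero_drift=0.2))   # whole curve raised by 0.2
print(output(50, sens_drift=0.01))  # slope changed, error grows with x
```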

Environmental Effects.
The environment of a measuring instrument includes ambient temperature, ambient pressure, fluid temperature, fluid pressure, electromagnetic fields, acceleration, vibration and mounting position. The operating conditions define the environment to which a measuring instrument is subjected. The operating limits are the range of operating conditions that will not cause permanent impairment of an instrument. Temperature effects may be stated in terms of the zero shift and the sensitivity shift. The thermal zero shift is the change in the zero output of a measuring instrument for a specified change in ambient temperature. The thermal sensitivity shift is the change in sensitivity of a measuring instrument for a specified change in ambient temperature.

Static Characteristics.
Static characteristics describe the accuracy of a measuring instrument at room conditions with the measured variables either constant or changing very slowly. Accuracy: Is the degree of conformity of the output of a measuring instrument to the ideal value of the measured variable as determined by some type of standard. Accuracy is measured by testing the measuring instrument with a specified procedure under specified conditions. The test is repeated a number of times and the accuracy is given as the maximum positive and negative error. The error is defined as the difference between the measured value and the ideal value:

Error = measured value − ideal value.

Accuracy is expressed in terms of the error in one of the following ways: 1. In terms of the measured variable. 2. As a percentage of span. 3. As a percentage of actual output. The repeatability of a measuring instrument is a measure of the dispersion of the measurement. Repeatability and reproducibility deal in slightly different ways with the degree of closeness among repeated measurements of the same value of the measured variable. Repeatability is the maximum difference between several consecutive outputs for the same input when approached from the same direction in full-range traversals. Reproducibility is the maximum difference between a number of outputs for the same input, taken over an extended period of time approaching from both directions.
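The three ways of expressing error listed above can be sketched numerically. All figures here are hypothetical (a 0–100 PSI instrument reading slightly high):

```python
measured = 50.4   # PSI indicated by the instrument (hypothetical)
ideal    = 50.0   # PSI applied by the calibration standard (hypothetical)
span     = 100.0  # PSI: upper range limit minus lower range limit

error = measured - ideal

print(error)                   # 1. in terms of the measured variable (~0.4 PSI)
print(100 * error / span)      # 2. as a percentage of span (~0.4 %)
print(100 * error / measured)  # 3. as a percentage of actual output (~0.79 %)
```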

Reproducibility includes hysteresis, dead band, drift and repeatability. The measurement of reproducibility must specify the time period used in the measurement. Reproducibility is obviously more difficult to determine because of the extended time period that is required. The procedure of determining the accuracy of a measuring instrument is called calibration.

Calibration versus re-ranging.


To calibrate an instrument means to check and adjust its response so the output accurately corresponds to its input throughout a specified range. In order to do this, one must expose the instrument to an actual input stimulus of precisely known quantity. For a pressure gauge, indicator, or transmitter, this would mean subjecting the pressure instrument to known fluid pressures and comparing the instrument's response against those known pressure quantities. One cannot perform a true calibration without comparing an instrument's response to known, physical stimuli. To range an instrument means to set the lower and upper range values so it responds with the desired sensitivity to changes in input.

In analog instruments, re-ranging could only be accomplished by re-calibration, since the same adjustments were used to achieve both purposes. In digital instruments, calibration and ranging are typically separate adjustments. That is, it is possible to re-range a digital instrument without having to perform a complete recalibration, so it is important to understand the difference.

The purpose of calibration is to ensure the input and output of an instrument correspond to one another predictably throughout the entire range of operation. We may express this expectation in the form of a graph showing how the input and output of an instrument should relate: any given percentage of input should correspond to the same percentage of output, all the way from 0% to 100%.

Zero and span adjustments. (analog transmitters)

Take for instance a pressure transmitter, a device designed to sense a fluid pressure and output an electronic signal corresponding to that pressure. Consider a pressure transmitter with an input range of 0 to 100 pounds per square inch (PSI) and an electronic output signal range of 4 to 20 milliamps (mA) of electric current.

Although the graph is still linear, zero pressure does not equate to zero current. This is called a live zero, because the 0% point of measurement (0 PSI fluid pressure) corresponds to a non-zero electronic signal. 0 PSI pressure may be the LRV (Lower Range Value) of the transmitter's input, but the LRV of the transmitter's output is 4 mA, not 0 mA. Any linear mathematical function may be expressed in slope-intercept form:

y = mx + b

Where:
y = vertical position on graph
x = horizontal position on graph
m = slope of line
b = point of intersection between the line and the vertical (y) axis

This instrument's calibration is no different. If we let x represent the input pressure in units of PSI and y represent the output current in units of milliamps, we may write an equation for this instrument as follows:

y = 0.16x + 4
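The transfer function y = 0.16x + 4 from the text can be sketched directly, together with its inverse (useful for recovering pressure from a measured loop current):

```python
def psi_to_ma(psi: float) -> float:
    """Ideal output current (mA) for a given input pressure (PSI): y = 0.16x + 4."""
    return 0.16 * psi + 4.0

def ma_to_psi(ma: float) -> float:
    """Invert the transfer function to recover pressure from current."""
    return (ma - 4.0) / 0.16

print(psi_to_ma(0))     # -> 4.0  (the live zero: 4 mA at 0 PSI)
print(psi_to_ma(50))    # -> 12.0 (mid-scale)
print(psi_to_ma(100))   # -> 20.0 (full scale)
print(ma_to_psi(12.0))  # -> 50.0 (PSI)
```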

On the actual instrument (the pressure transmitter), there are two adjustments which let us match the instrument's behavior to the ideal equation. One adjustment is called the zero while the other is called the span. These two adjustments correspond exactly to the b and m terms of the linear function, respectively: the zero adjustment shifts the function vertically on the graph, while the span adjustment changes the slope of the function. By adjusting both zero and span, we may set the instrument for any range of measurement within the manufacturer's limits. In analog instruments, however, these two adjustments are interactive; that is, adjusting one has an effect on the other. Specifically, changes made to the span adjustment almost always alter the instrument's zero point. An instrument with interactive zero and span adjustments requires much more effort to accurately calibrate, as one must switch back and forth between the lower- and upper-range points repeatedly to adjust for accuracy.
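The zero/span relationship can be sketched as a two-point calculation: given the desired input range (LRV and URV) and a 4-20 mA output, solve for the slope m (span) and intercept b (zero). This is a mathematical sketch of re-ranging, not any particular manufacturer's procedure:

```python
def zero_and_span(input_lrv: float, input_urv: float,
                  output_lrv: float = 4.0, output_urv: float = 20.0):
    """Return (m, b) so that y = m*x + b maps [input_lrv, input_urv]
    onto [output_lrv, output_urv]."""
    m = (output_urv - output_lrv) / (input_urv - input_lrv)  # span sets the slope
    b = output_lrv - m * input_lrv                           # zero sets the intercept
    return m, b

print(zero_and_span(0, 100))  # -> (0.16, 4.0), the transmitter from the text
print(zero_and_span(0, 200))  # re-ranged to 0-200 PSI: -> (0.08, 4.0)
```

Note that in this ideal mathematical form the two terms are independent; the interaction described above is a property of the analog adjustment mechanisms, not of the underlying linear equation.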
