Introduction
This laboratory course concerns making measurements in various heat transfer geometries and
relating those measurements to derived equations. The objective is to determine how well the
derived equations describe the physical phenomena we are modeling. In doing so, we will need to
make physical measurements, and it is essential that we learn how to practice good techniques in
making scientific observations and in obtaining measurements. We are making quantitative
estimates of physical phenomena under controlled conditions.
Measurements
There are certain primary desirable characteristics involved when making these physical
measurements. We wish that our measurements would be:
a) Observer-independent
b) Consistent
c) Quantitative
So when reporting a measurement, we will be stating a number, but what's in a number? A single
number, in isolation, has almost no significance; the implied question is, "Is it large or small?"
Is 26 a large number? Is 6 x 10^5 a large number? The answer requires another number for reference
purposes. Is 26 a large number compared to 6 x 10^5? Furthermore, we will have to add a dimension,
and this leads to another question: are we talking about a pure number or a dimensional physical
quantity? We know from experience that a physical value without a unit has no significance.
In reporting measurements, another question arises as to how we should report data; i.e., how many
significant digits should we include? Which physical quantity is associated with the
measurement, and how precise should it or could it be?
For example, does 2.54 cm = 2.54001 cm? It is impossible to answer this question without some
measure of the expected natural variation in the measurement. So it is prudent to scrutinize the
claimed or implied accuracy of a measurement.
Performing experiments
In the course of performing an experiment, we first would develop a set of questions or a hypothesis,
or put forth the theory. We then identify the system variables to be measured or controlled. The
apparatus would have to be developed and the equipment set up in a particular way. An
experimental protocol, or procedure, is established and data are taken.
Several features of this process are important. We want accuracy in our measurements, but
increased accuracy generally corresponds to an increase in cost. We want the experiments to be
reproducible, and we seek to minimize errors. Of course we want to address all safety issues and
regulations.
After we run the experiment, and obtain data, we would analyze the results, draw conclusions, and
report the results.
Estimation
In some situations, there is no time to run formal experiments to answer a question or verify an
equation. In such cases, it is often useful to make careful estimates. These can help to determine the
ranges of parameters to investigate in the experiments. Also, estimates are necessary for partial
validation of experimental results.
Consider, for example, that we must obtain a quick estimate of the density of a rock. We observe
that it sinks in water, so it must be denser than water, 1 000 kg/m3. As an upper bound, we might
suggest that it is less dense than steel at 3 000 kg/m3. So if we conduct an experiment and obtain a
value outside this range, we would be suspicious and check the equipment and the experimental
approach.
1- Error
The error E is the difference between a TRUE value, x, and a MEASURED value, xi:
E = x - xi (1)
There is no error free measurement. All measurements contain some error. How error is defined and
used is important. The significance of a measurement cannot be judged unless the associated error has
been reliably estimated. In Equation 1, because the true value of x is unknown, then the error E is
unknown as well. This is always the case.
The best we can hope for is to obtain the estimate of a likely error, which is called an uncertainty.
For multiple measurements of the same quantity, a mean value, X̄ (also called a nominal value),
can be calculated. Hence, the error becomes:

E = x - X̄
2- Uncertainty
The uncertainty, Δx, is an estimate of the error E as a possible range of errors:

Δx = ±E (2)
For example, suppose we measure a velocity and report the result as

V = 110 m/s ± 5 m/s

The value of ±5 m/s is defined as the uncertainty. Alternatively, suppose we report the results as

V = 110 m/s ± 4.5%
The value of ±4.5% is defined as the relative uncertainty. It is common to hear someone speak
of "experimental errors" when the correct terminology should be "uncertainty." Both terms are
used in everyday language, but it should be remembered that the uncertainty is defined as an
estimate of errors.
3- Accuracy
Accuracy is a measure (or an estimate) of the maximum deviation of measured values, xi,
from the TRUE value, x. So a question like "How accurate is this measurement?"
can be reformulated as "What is the maximum deviation of the measured values from the true value?"
Again, because the true value x is unknown, the value of the maximum deviation is unknown.
The accuracy, then, is only an estimate of the worst error. It is usually expressed as a percentage;
e.g., "accurate to within ±5%."
Example 1
A pressure measurement is reported as

p = 50 psi ± 2.5 psi

What is the accuracy of the pressure probe used for making this measurement?
Solution:
The relative uncertainty is calculated to be

Δp/p = 2.5/50 = 0.05

so the probe is accurate to within ±5%.
Example 2
A sensor is claimed to be accurate to ±5%. What will be the uncertainty (in psi) in the measurement
of a pressure of 50 psi?
Solution:
The accuracy (relative uncertainty) is ±5%, so

ΔP/P = 5%

ΔP = 0.05 × P = 0.05 × 50

or ΔP = 2.5 psi

The uncertainty in p is ±2.5 psi, so the measurement should be reported as

p = 50 psi ± 2.5 psi
4- Precision
Precision, on the other hand, is a measure (or an estimate) of the consistency (or repeatability)
of the measurements. Thus it is the maximum deviation of a reading (measurement), xi, from its
mean value, X̄.
As an illustration of the concepts of accuracy and precision, consider the dart board shown in the
accompanying figures. Let us assume that the blue darts show the measurements taken, and that
the bull's-eye represents the value to be measured.
When all measurements are clustered about the bull's-eye, then we have very accurate and,
therefore, precise results (Figure 1a).
When all measurements are clustered together but not near the bull's-eye, then we have
very precise but not accurate results (Figure 1b).
When the measurements are not clustered together and not near the bull's-eye, but their
nominal value or average is at the bull's-eye, then we have accurate (on average) but not
precise results (Figure 1c).
When the measurements are not clustered together and not near the bull's-eye, and their
average is not at the bull's-eye, then we have neither accurate nor precise results
(Figure 1d).
So, we conclude that accuracy refers to the correctness of the measurements, while
precision refers to their consistency.
Classification of Errors
1- Random Error
A random error is one that arises from a random source. Suppose, for example, that a measurement
is made many thousands of times using different instruments and/or observers and/or samples. We
would expect random errors to affect the measurement in either direction (±) roughly
the same number of times.
Such errors can occur in any scenario:
1. Electrical noise in a circuit generally produces a voltage error that may be positive or
negative by a small amount.
2. By counting the total number of pennies in a large container, one may occasionally pick up
two and count only one (or vice versa).
The question arises as to how we can reduce random errors. There are no random-error-free
measurements, so random errors cannot be eliminated, but their magnitude can be reduced:
on average, random errors tend to cancel out.
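As a quick illustration (not from the text), a small simulation with made-up numbers shows how averaging suppresses random error; the "true" value and the noise range here are arbitrary assumptions:

```python
import random
import statistics

# Hypothetical illustration: a "true" value corrupted by zero-mean random noise.
random.seed(42)
true_value = 100.0

def reading():
    # one measurement with a random error uniform in [-1, 1]
    return true_value + random.uniform(-1.0, 1.0)

# Compare the error of a single reading with the error of the mean of 1000 readings.
single = reading()
mean_of_many = statistics.mean(reading() for _ in range(1000))

print(abs(single - true_value))        # error of one reading (up to 1.0)
print(abs(mean_of_many - true_value))  # typically far smaller
```

The error of a single reading can be as large as the noise amplitude, while the error of the mean shrinks roughly as 1/sqrt(n).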
2- Systematic Error
A systematic error is one that is consistent; that is, it happens systematically. Typically, human
components of measurement systems are often responsible for systematic errors. For example,
systematic errors are common in reading of a pressure indicated by an inclined manometer.
Consider an experiment involving dropping a ball from a given height. We wish to measure
the time it takes for the ball to move from where it is dropped to when it hits the ground. We
might repeat this experiment several times. However, the person using the stopwatch may
consistently have a tendency to wait until the ball bounces before the watch is stopped. As a
result, the time measurement might be systematically too long.
So, systematic measurements can be anticipated and/or measured, and then corrected. This
can be done even after the measurements are made.
The question arises as to how we can reduce systematic errors.
This can be done in several ways:
1. Calibrate the instruments being used by checking with a known standard. The standard
can be what is referred to as:
a. A primary standard, obtained from the National Institute of Standards and
Technology (NIST, formerly the National Bureau of Standards); or
b. A secondary standard (a higher-accuracy instrument); or
c. A known input source.
2. Make several measurements of a certain quantity under varying test conditions, such as
different observers and/or samples and/or instruments.
3. Check the apparatus.
4. Check the effects of external conditions.
5. Check the coherence of results.
A repeatability test using the same instrument is one way of gaining confidence, but a far more
reliable way is to use an entirely different method to measure the desired quantity.
Uncertainty Analysis
Determining Uncertainty.
When we state a measurement that we have taken, we should also state an estimate of the
error, or the uncertainty. As a rule of thumb, we use a 95% relative uncertainty; stated
otherwise, we use a 95% confidence interval.
Suppose, for example, that we report the height of a desk to be 38 inches ± 1 inch. This
suggests that we are 95% sure that the desk is between 37 and 39 inches tall. When
reporting relative uncertainty, we generally restrict the result to one or two
significant figures. When reporting uncertainty in a measurement using units, we use the same
number of significant figures as the measured value. Examples are shown in Table 1:
The previous table shows uncertainty in measurements, but determining uncertainty is usually
difficult. So, as a rule of thumb, we use a 95% confidence interval, which gives us an estimate.
Now the estimate of uncertainty depends on the measurement type: single-sample measurements,
measurements of dependent variables, or multi-variable measurements.
Single-sample measurements
Single-sample measurements are those in which the uncertainties cannot be reduced by
repetition. As long as the test conditions are the same (i.e., same sample, same instrument and
same observer), the measurements (for fixed variables) are single-sample measurements,
regardless of how many times the reading is repeated.
Multi-Sample Measurements.
Multi-sample measurements involve a significant number of data points collected from enough
experiments so that the reliability of the results can be assured by a statistical analysis.
In other words, the measurement of a significant number of data points of the same quantity (for
fixed system variables) under varying test conditions (i.e., different samples and/or different
instruments) will allow the uncertainties to be reduced by the sheer number of observations.
Single-sample uncertainty
It is often simple to identify the uncertainty of an individual measurement. It is necessary to
consider the limit of the scale readability, and the limit associated with applying the
measurement tool to the case of interest.
Consider some measuring device that has as its smallest scale division Δx. The smallest scale
division limits our ability to measure something with any more accuracy than ±Δx/2. The ruler of
Figure 2a, as an example, has 1/4 inch as its smallest scale division. The diameter of the circle
is between 4 and 4 1/4 inches. So we would correctly report that

d = 4 1/8 ± 1/8 inches
The ruler depicted in the figure could be any arbitrary instrument with finite resolution. The
uncertainty due to the resolution of any instrument is one half of the smallest increment displayed.
This is the most likely single sample uncertainty. It is also the most optimistic because reporting
this value assumes that all other sources of uncertainty have been removed.
Example 3
Estimate the gravitational constant g by dropping a ball from a known height, L, through time, t.
The approximate equation would be

g = 2L/t^2

For a result f that is a function of n measured variables x_i, the uncertainty δf is given by the
root-sum-square (Euclidean norm) combination:

δf = [ Σ ( (∂f/∂x_i) δx_i )^2 ]^(1/2), i = 1 ... n (6)
Example 4
Calculate the Euclidean norm (uncertainty) in the gravitational constant g of Example 3.
Solution
For determining the gravitational constant:
1. Write the expression for g in terms of its independent variables:

g = 2L/t^2

2. Apply Equation 6 to obtain the uncertainty δg.
3. Alternatively, calculate the relative uncertainty δg/g:

δg/g = [ (δL/L)^2 + (2 δt/t)^2 ]^(1/2)

Note that the expression for δg/g is simpler than that for δg. Also, in the δg/g expression, the
individual terms are dimensionless. This is convenient if quantities are originally given in %, or if
the units are incompatible. With the given measurements,

δg/g = ±32%
Example 5
Suppose we discard the ice reference junction and connect the Type-J thermocouple directly to the
instrument. We use a thermometer to record the temperature of the instrument face where the
thermocouple is attached and read 39 °C ± 1 °C. In our testing, we read a value of 3.456 ± 0.002 mV
at the meter, and we want to know what the unknown temperature Tx is.
The answer is found by realizing that the meter is creating a virtual thermocouple junction at 39 °C,
which reduces the voltage readout, and we may compute that offset as

Eoff = Ea + (Tref - Ta) αref

More simply, since the temperature is 39 °C, we may look in the table and find that the
thermocouple is creating the equivalent of an Eoff = 2.006 mV offset to the reading for Tx. If the
configuration were as shown in the figure, the reading would then be
E = Eact + Eoff = 3.456 mV + 2.006 mV = 5.462 mV

The thermocouple table can be quickly scanned, and the temperature Tx is found to be between
103 °C and 104 °C. We linearly interpolate and find that it is

Tx = 103 + (5.462 - 5.432) × 1 °C / (5.487 - 5.432) = 103 + 0.030/0.055 = 103.545 °C
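The offset correction and table interpolation above can be sketched in a few lines, using only the table voltages quoted in the text:

```python
# Cold-junction correction and linear interpolation for the Type-J example.
E_meas = 3.456   # mV, read at the meter
E_off  = 2.006   # mV, table equivalent of the 39 °C reference junction
E = E_meas + E_off              # equivalent voltage for an ice-point reference

# Table bracket from the text: 103 °C -> 5.432 mV, 104 °C -> 5.487 mV
T = 103 + (E - 5.432) * 1.0 / (5.487 - 5.432)   # linear interpolation

print(round(E, 3))  # 5.462 (mV)
print(round(T, 3))  # 103.545 (°C)
```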
Unfortunately, we do not know the resolution of this measurement. We do know that the
measurement is a function of the voltmeter's accuracy and that of the thermometer. The
variables in this equation may be written as Tx = F(Tref, Eact), and applying the RSS method
we can write that the error of the measurement is thus
Example 6
Consider determining the density ρ of a rectangular block from its measured mass m and
dimensions a, b, and c:

ρ = m/V = m/(a·b·c) (3)

Then the uncertainty becomes

δρ = [ (∂ρ/∂m · δm)^2 + (∂ρ/∂a · δa)^2 + (∂ρ/∂b · δb)^2 + (∂ρ/∂c · δc)^2 ]^(1/2) (4)

The partial derivatives are

∂ρ/∂m = 1/(a·b·c) = ρ/m (5a)

∂ρ/∂a = -m/(a^2·b·c) = -ρ/a (5b)

and similarly for b and c, so that

δρ = [ (δm/(a·b·c))^2 + (m·δa/(a^2·b·c))^2 + (m·δb/(a·b^2·c))^2 + (m·δc/(a·b·c^2))^2 ]^(1/2) (6)

We note that

ρ = m/(a·b·c) (7)

and rearrange to get

δρ/ρ = [ (δm/m)^2 + (δa/a)^2 + (δb/b)^2 + (δc/c)^2 ]^(1/2) (8)

Finally, using our measurements, say

δm = 0.5 gm = 5 x 10^-4 kg
δa = δb = δc = 0.5 mm = 5 x 10^-4 m

with

m = 100 gm = 0.1 kg
a = 10 cm = 0.1 m
b = 5 cm = 0.05 m
c = 5 cm = 0.05 m

ρ = 0.1 / ((0.1)(0.05)(0.05)) = 400 kg/m^3

we determine the numerical value of the uncertainty:

δρ = 400 × (5 x 10^-4 / 0.1) × [ 1 + 1 + (2)^2 + (2)^2 ]^(1/2) = 6.32 kg/m^3

δρ = ±6.32 kg/m^3
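The relative-uncertainty calculation of this example can be checked numerically; a minimal sketch using the values above:

```python
from math import sqrt

# RSS rule in relative form for rho = m/(a*b*c), with the measured values
# and half-division uncertainties from the example.
m, a, b, c = 0.1, 0.1, 0.05, 0.05          # kg, m, m, m
dm = da = db = dc = 5e-4                   # kg for mass, m for lengths

rho = m / (a * b * c)
drho = rho * sqrt((dm/m)**2 + (da/a)**2 + (db/b)**2 + (dc/c)**2)

print(round(rho))        # 400 (kg/m^3)
print(round(drho, 2))    # 6.32 (kg/m^3)
```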
Deviation
The deviation of each reading is defined by:

d_i = x_i - X̄ (8)

The arithmetic mean deviation is defined as:

d̄ = (1/n) Σ |d_i| (9)
Due to random errors, experimental data are dispersed in what is referred to as a bell
distribution, known also as a Gaussian or normal distribution, depicted in Figure 3.
The Gaussian or normal distribution is what we use to describe the distribution followed by
random errors. A graph of this distribution is often referred to as the "bell curve," as it looks like
the outline of a bell. The peak of the distribution occurs at the mean of the random variable, and
the standard deviation is a common measure of how "fat" the bell curve is. Equation 10 is the
probability density function for a continuous random variable x:

f(x) = 1/(σ·sqrt(2π)) · exp( -(x - X̄)^2 / (2σ^2) ) (10)

The mean and the standard deviation are all the information necessary to completely describe
any normally distributed random variable.
Integrating under the curve of Figure 3 over various limits gives some interesting results.
1. Integrating under the curve of the normal distribution from negative to positive infinity, the
area is 1.0 (i.e., 100%). Thus the probability for a reading to fall in the range (-∞, +∞) is 100%.
2. Integrating over a range within ±σ from the mean value, the resulting value is 0.6826. The
probability for a reading to fall in the range of ±σ is about 68%.
3. Integrating over a range within ±2σ from the mean value, the resulting value is 0.954. The
probability for a reading to fall in the range of ±2σ is about 95%.
4. Integrating over a range within ±3σ from the mean value, the resulting value is 0.997. The
probability for a reading to fall in the range of ±3σ is about 99.7%.
Probability   Range about the mean
50%           ±0.6745 σ
68.3%         ±1 σ
86.6%         ±1.5 σ
95.4%         ±2 σ
99.7%         ±3 σ
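These areas can be reproduced with the error function, since the coverage probability within ±k·σ of the mean of a normal distribution is erf(k/√2); a short check:

```python
from math import erf, sqrt

# Probability that a normally distributed reading falls within +/- k standard
# deviations of the mean: P(|x - mean| <= k*sigma) = erf(k / sqrt(2)).
def coverage(k):
    return erf(k / sqrt(2))

for k in (0.6745, 1, 1.5, 2, 3):
    print(k, round(coverage(k), 4))
# 0.6745 -> ~0.50, 1 -> 0.6827, 1.5 -> 0.8664, 2 -> 0.9545, 3 -> 0.9973
```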
Estimating Uncertainty.
We can now use the probability function to help in determining the accuracy of data obtained in
an experiment. We use the uncertainty level of 95%, which means that we have a 95% confidence
interval. In other words, if we state that the uncertainty is ±Δx, we suggest that we are 95% sure
that any reading xi will be within the range ±Δx of the mean. Thus, the probability of a sample
chosen at random being within the range ±2σ of the mean is about 95%. Uncertainty then is
defined as twice the standard deviation:

Δx = 2σ
Example 7:
An alloy manufacturer claims a modulus of elasticity of 40 ± 2 kPa. How is that to be interpreted?
Solution:
The general rule of thumb is that ±2 kPa would represent a 95% confidence interval. That is, if
we randomly select many samples of this manufacturer's alloy, we should find that 95% of the
samples meet the stated limit of 40 ± 2 kPa.
It is still possible to find a sample that has a modulus of elasticity of 37 kPa; however, it is
very unlikely.
Example 8
If we assume that variations in the product follow a normal distribution, and that the modulus of
elasticity is within the range 40 ± 2 kPa, then what is the standard deviation, σ?
Solution
The uncertainty at a 95% confidence interval is 2 kPa = 2σ; thus σ = 1 kPa.
Example 9
Assuming that the modulus of elasticity is 40 ± 2 kPa, estimate the probability of finding a
sample from this population with a modulus of elasticity less than or equal to 37 kPa.
Solution
With σ = 1 kPa, the value 37 kPa lies 3σ below the mean, so we seek the area under the
bell-shaped curve from -∞ to X̄ - 3σ. Thus, the probability that the modulus of elasticity is
less than 37 kPa is (1 - 0.997)/2 ≈ 0.0013, or about 0.13%.
The probability 1 - 1/(2n) for retention of data distributed about the mean can be related to a
maximum deviation dmax away from the mean by using a Gaussian probability table. For the
given probability, the nondimensional maximum deviation

C = dmax / sx

can be determined from the table. All measurements that deviate from the mean by more than
dmax can be rejected. A new mean value and a new precision index can then be calculated from
the remaining measurements. No further application of the criterion to the sample is allowed.
Using Chauvenet's criterion, we say that the values xi which are outside of the range

X̄ ± C·sx (11)

are clearly errors and should be discarded from the analysis. Such values are called outliers. The
constant C may be obtained from Table 3. Note that Chauvenet's criterion may be applied only
once to a given sample of readings.
TABLE 3. Chauvenet's criterion constant, C = dmax/sx

n        C
3        1.38
4        1.54
5        1.65
6        1.73
7        1.80
8        1.87
9        1.91
10       1.96
15       2.13
20       2.24
25       2.33
50       2.57
100      2.81
300      3.14
500      3.29
1,000    3.48
The methodology for identifying and discarding outlier(s) is as follows:
1. After running an experiment, sort the outcomes from lowest to highest value. The suspect
outliers will then be at the top and/or the bottom of the list.
2. Calculate the mean value and the standard deviation.
3. Using Chauvenet's criterion, discard outliers.
4. Recalculate the mean value and the standard deviation of the smaller sample and stop. Do
not repeat the process; Chauvenet's criterion may be applied only once.
Example 10
Consider an experiment in which we measure the mass of ten individual identical objects.
The scale readings (in grams) are as shown in Table 4. By visual examination of the results, we
might conclude that the 4.85 g reading is too high compared to the others, and so it represents
an error in the measurement. We might tend to disregard it. However, what if the reading were
2.50 or 2.51 g? We use Chauvenet's criterion to determine if any of the readings can be
discarded.
TABLE 4. Data obtained in a series of experiments

n     reading (g)
1     2.41
2     2.42
3     2.43
4     2.43
5     2.44
6     2.44
7     2.45
8     2.46
9     2.47
10    4.85
We apply the methodology described earlier. The results of the calculations are shown in Table 5:
1. Values in the table are already sorted. Column 1 shows the reading number, and there are 10
readings of mass, as indicated in column 2.
2. We calculate the mean and standard deviation. The data in column 2 are added to obtain a
total of 26.8. Dividing this value by 10 readings gives 2.68, which is the mean value of all
the readings:
m = 2.68 g
In column 3, we show the square of the difference between each reading and the mean value.
Thus in row 1, we calculate

(X̄ - x1)^2 = (2.68 - 2.41)^2 = 0.0729

We repeat this calculation for every data point. We then add these to obtain the value 5.235
shown in the second-to-last row of column 3. This value is then divided by (n - 1) = 9 data points
and the square root is taken. The result is 0.763, which is the standard deviation:

sx = [ Σ (xi - X̄)^2 / (n - 1) ]^(1/2) (12)
3. Next, we apply Chauvenet's criterion; for 10 data points, n = 10 and Table 3 gives C = 1.96. We
calculate C·sx = 1.96(0.763) = 1.50. The range of acceptable values then is 2.68 ± 1.50; any
values outside the range of 1.18 to 4.18 are outliers and should be discarded.
Thus, for the data of the example, the 4.85 value is an outlier and may be discarded. All other
points are valid. The last two columns show the results of calculations made without data point
#10. The mean becomes 2.44, and the standard deviation is 0.019 (compare to 2.68 and 0.763,
respectively).
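The whole procedure of this example can be sketched as follows, with the readings and the constant C = 1.96 taken from Tables 3 and 4:

```python
from math import sqrt

# Chauvenet's criterion applied to the ten mass readings of the example.
readings = [2.41, 2.42, 2.43, 2.43, 2.44, 2.44, 2.45, 2.46, 2.47, 4.85]
C = 1.96                                  # Table 3 value for n = 10

n = len(readings)
mean = sum(readings) / n
s = sqrt(sum((x - mean)**2 for x in readings) / (n - 1))

lo, hi = mean - C * s, mean + C * s       # acceptance band, mean +/- C*sx
outliers = [x for x in readings if x < lo or x > hi]
kept = [x for x in readings if lo <= x <= hi]

print(round(mean, 2), round(s, 3))        # 2.68 0.763
print(outliers)                           # [4.85]
print(round(sum(kept) / len(kept), 2))    # 2.44  (recomputed mean)
```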
Exercise.
Define the following terms
1. Error
2. Uncertainty
3. Accuracy
4. Precision
5. Random Error
6. Systematic Error
7. Confidence Interval
3. Uncertainty Analysis in the Planning of an Experiment
Introduction
Uncertainty analysis is a powerful tool for improving the value of experimental work, and can be applied
during all phases of an experimental program. However, the greatest value of uncertainty analysis is
almost certainly obtained when it is used during the planning of an experiment. In citing a dozen uses of
uncertainty analysis, Kline (1985a) highlighted five specific uses within the planning phase.
So, uncertainty analysis in the planning phase of an experiment is a worthy endeavor. This chapter
attempts to highlight this fact with discussion and examples of the use of uncertainty analysis techniques
that lend themselves to the planning phase. The material presented here is drawn primarily from two
sources: R. J. Moffat's 1985 paper from the ASME/JFE symposium, titled "Using Uncertainty Analysis in
the Planning of an Experiment," and Coleman and Steele's 1999 book, Experimentation and Uncertainty
Analysis for Engineers, 2nd ed.
properties or other physical constants. In all the phases of experimentation we will consider how the
uncertainties in these individual variables propagate through the data reduction equation into the end result.
In the planning phase of an experiment, we will consider the uncertainties of individual variables and their
propagation into the result in the most basic ways possible. This level of analysis is often called general
uncertainty analysis, as opposed to detailed uncertainty analysis, which is applied to later phases. In the
planning phase we are considering alternative methods for arriving at the experimental result. We have not
selected specific instruments or equipment, and thus we are in no position to address the details of
systematic versus random errors, since any systematic errors in the instruments we eventually choose are
as likely to be positive as negative. At this stage we consider all uncertainty to be caused by random
errors. Later, when we get into the detailed design of an experiment, the debugging and data collection
phases, and the data analysis and reporting phases, we will be more interested in those details, but for now
our focus is on achieving the maximum amount of early guidance for a small amount of expended effort.
By using uncertainty analysis at this stage to help select the correct experimental approach, we can avoid
expending unnecessary effort on methods that might never achieve our objective.
The experimental result r is computed from the measured variables through a data reduction
equation of the form

r = r(X1, X2, ..., XJ) (1)

The uncertainty in the result is then

Ur^2 = (∂r/∂X1)^2 UX1^2 + (∂r/∂X2)^2 UX2^2 + ... + (∂r/∂XJ)^2 UXJ^2 (2)

where Ur is the uncertainty in the result, UX1 is the uncertainty in the variable X1, etc. This is the most
general form of the uncertainty propagation equation (Coleman and Steele 1999). When applying the
uncertainty propagation equation, the individual uncertainties should all be expressed with the same odds,
e.g., at 95% confidence. In the planning phase, this assumption is implicit. In addition, the measured
variables and their uncertainties are assumed to be independent of one another.
Nondimensional Forms
Two nondimensional forms of eq. (2) are useful in the planning phase. Dividing each term by r^2 and
multiplying each term on the right-hand side by (Xi/Xi)^2, we obtain

(Ur/r)^2 = ( (X1/r)(∂r/∂X1) )^2 (UX1/X1)^2 + ( (X2/r)(∂r/∂X2) )^2 (UX2/X2)^2 + ...
           + ( (XJ/r)(∂r/∂XJ) )^2 (UXJ/XJ)^2 (3)
In this equation, Ur/r is the relative uncertainty in the result and the factors UXi/Xi are the relative
uncertainties of each variable. The factors in parentheses that multiply the relative uncertainties of the
variables are called uncertainty magnification factors (UMFs). They indicate the influence of uncertainty
in a particular variable on the uncertainty in the result. When the UMF is greater than 1, uncertainty in a
variable is magnified as it propagates through the data reduction equation; if less than 1, the uncertainty in
the variable is reduced. The UMF depends on the value of a variable relative to the result and the manner
in which it is incorporated into the data reduction equation, but it is independent of the actual uncertainty
in the variable, so it is useful before we know details about measurement methods and their uncertainties.
Since the UMFs are always squared when inserted into eq. (3), only their absolute values are important.
Each term on the right-hand side gives the fractional contribution of the squared uncertainty in a given
variable to the squared uncertainty in the result. In percentage terms we can define uncertainty
percentage contributions (UPCs) as
UPCi = [ (∂r/∂Xi)^2 UXi^2 / Ur^2 ] × 100
     = [ ( (Xi/r)(∂r/∂Xi) )^2 (UXi/Xi)^2 / (Ur/r)^2 ] × 100 (5)
The UPCs include the effects of both the UMF and the uncertainty of the variable, so they are useful in the
late planning phase and early design phase, when measurement equipment and methods are being selected
and measurement uncertainties can be estimated.
Simplified Form
The most useful form of the uncertainty propagation equation for planning purposes is probably eq. (3), in
which the squares of the relative uncertainties are related through the UMFs. In a great many cases eq. (3)
can be further simplified. When the data reduction equation is of the form

r = k · X1^a · X2^b · X3^c (6)

with a, b, c, and k being constants, applying eq. (3) produces the simplified equation

(Ur/r)^2 = a^2 (UX1/X1)^2 + b^2 (UX2/X2)^2 + c^2 (UX3/X3)^2 (7)
In such a case, the UMFs are the exponents, and the uncertainty propagation equation can be written down
by simple inspection. One must keep several things in mind when considering the use of this simplified
form of the equation. First, one must solve for the experimental result before applying the equation.
Second, the Xi's must be directly measured variables, so an equation of the form r = a·cos(θ) is not in
the proper form if θ is measured directly. Also, an equation of the form Q = Cd·A·(2g(h2 - h1))^(1/2)
is acceptable if h2 - h1 is measured directly, but not if h2 and h1 are measured separately.
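For data reduction equations in this power-law form, the relative uncertainty can be assembled directly from the exponents; a small helper sketch (the function name is ours, not from the text):

```python
from math import sqrt

def relative_uncertainty(terms):
    """terms: list of (exponent, U_X/X) pairs, one per measured variable.

    Implements eq. (7): (Ur/r)^2 = sum over i of (exponent_i * U_Xi/Xi)^2.
    """
    return sqrt(sum((e * u) ** 2 for e, u in terms))

# e.g. r = k * X1 * X2**2, with 1% uncertainty in X1 and 0.5% in X2:
Ur_r = relative_uncertainty([(1, 0.01), (2, 0.005)])
print(round(Ur_r, 4))   # 0.0141, i.e. about 1.4%
```

Note how the exponent of 2 magnifies the 0.5% uncertainty in X2 to an effective 1%.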
As an example, consider measuring the discharge Q over a weir, for which the data reduction
equation is

Q = C·L·h1^1.5 (a)
The variables that will be measured are the weir length, L, and the upstream head, h1, and each will have an
associated uncertainty. The value of the discharge coefficient, C, is an empirical constant which will also
have some uncertainty associated with it.
Before determining how to measure the weir length and upstream head, a general uncertainty analysis can
be used to gain an understanding of the relationship between the measurement uncertainties and the
uncertainty in the result.
Applying the general uncertainty propagation equation,

UQ^2 = (∂Q/∂C)^2 UC^2 + (∂Q/∂L)^2 UL^2 + (∂Q/∂h1)^2 Uh1^2 (b)

The uncertainty magnification factors are

UMFC = (C/Q)(∂Q/∂C) = (C/Q)·L·h1^1.5 = C·L·h1^1.5/Q = 1 (c)

UMFL = (L/Q)(∂Q/∂L) = (L/Q)·C·h1^1.5 = C·L·h1^1.5/Q = 1 (d)

UMFh1 = (h1/Q)(∂Q/∂h1) = (h1/Q)·1.5·C·L·h1^0.5 = 1.5·C·L·h1^1.5/Q = 1.5 (e)

so that

(UQ/Q)^2 = (UC/C)^2 + (UL/L)^2 + (1.5·Uh1/h1)^2 (f)
which we could have written down by inspection, since the data reduction equation was in the simple form
discussed previously. Notice that the uncertainties in head measurement are magnified in the result due to
the exponent of 1.5. With this equation, we can now examine several questions related to the uncertainty
of the proposed discharge measurement.
Suppose, for example, that the weir length is 2 m, the head is 0.3 m, and the discharge coefficient is 1.71
m^0.5/s. How accurately can the discharge be measured if the relative uncertainty in C is 5%, the weir
length is known with an uncertainty of 2 mm, and the head is measured with an uncertainty of 3 mm?
(UQ/Q)^2 = (0.05)^2 + (0.002/2)^2 + (1.5 × 0.003/0.3)^2
         = 0.0025 + 0.000001 + 0.000225
         = 0.002726

UQ/Q = sqrt(0.002726) = 0.052
Thus, the relative uncertainty in discharge measurement is 5.2%, with the primary sources of uncertainty
being the discharge coefficient and the measurement of the upstream head.
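A quick numerical check of this result, using the values given above:

```python
from math import sqrt

# Weir-discharge uncertainty from eq. (f): Q = C * L * h1**1.5
L, h1 = 2.0, 0.3                 # m, m
UC_C = 0.05                      # 5% relative uncertainty in C
UL = 0.002                       # m, uncertainty in weir length
Uh1 = 0.003                      # m, uncertainty in head

UQ_Q = sqrt(UC_C**2 + (UL / L)**2 + (1.5 * Uh1 / h1)**2)
print(round(UQ_Q, 3))            # 0.052, i.e. about 5.2%
```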
We can also answer what if questions related to design or selection of measurement methods. For
example, if the discharge measurement must be made with an uncertainty of 6% or less and the length of
the weir has an uncertainty of 2 mm as before, what is the maximum allowable uncertainty in the head
measurement? Working through the equation, we find that

(0.06)^2 = (0.05)^2 + (0.002/2)^2 + (1.5·Uh1/0.3)^2

0.0036 = 0.0025 + 0.000001 + (1.5·Uh1/0.3)^2

Uh1 = 0.00663 m = 6.63 mm
Similarly, if we were doing a laboratory calibration to determine the discharge coefficient we might ask
what uncertainty in head measurement is needed to obtain a C value with relative uncertainty of 2%, if the
laboratory facilities are capable of independently measuring the discharge with an uncertainty of 0.5%.
Solving for Uh1 in that case, we obtain

(1.5·Uh1/h1)^2 = (UQ/Q)^2 - (UC/C)^2 - (UL/L)^2 (g)

(Uh1/0.3)^2 = [ 0.000025 - 0.0004 - 0.000001 ] / (1.5)^2 = -0.00017
We have obviously committed a serious error! One cannot take the square root of a negative number.
The problem is that we forgot to rearrange the data reduction equation to put the result on the left side
before applying the simplified form of the general uncertainty equation. The correct data reduction
equation and uncertainty expression are
C = Q / (L·h1^1.5) = Q·L^-1·h1^-1.5 (h)
(UC/C)^2 = (UQ/Q)^2 + (UL/L)^2 + (1.5·Uh1/h1)^2 (i)
This example shows how easy it is to make a mistake when writing down the uncertainty equation by
inspection. The requirements listed earlier for use of the simplified form must be kept in mind when
considering its use.
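Solving the corrected expression (i) for the head uncertainty required to achieve a 2% uncertainty in C, with the example's numbers, can be sketched as follows (the text itself stops at the corrected equation, so the final value here is our own arithmetic):

```python
from math import sqrt

# Rearranging eq. (i) for U_h1:
# (1.5*U_h1/h1)^2 = (U_C/C)^2 - (U_Q/Q)^2 - (U_L/L)^2
L, h1 = 2.0, 0.3
UC_C = 0.02      # target: 2% relative uncertainty in the discharge coefficient
UQ_Q = 0.005     # laboratory discharge measurement, 0.5%
UL = 0.002       # m, uncertainty in weir length

Uh1 = (h1 / 1.5) * sqrt(UC_C**2 - UQ_Q**2 - (UL / L)**2)
print(round(Uh1 * 1000, 2))   # about 3.87 mm
```

Because the result C is now on the left side, the radicand is positive and the head must be measured to roughly ±3.9 mm.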
Any experimental endeavor begins with a question whose answer is sought. In order to be useful, the
answer must be determined with some level of certainty. It is important to establish this allowable level of
uncertainty in the planning phase, although it need not be a refined value; an order of magnitude estimate
(i.e., 0.1, 1, 10, or 50%) is often sufficient.
In the planning phase, we usually have several options available to us for arriving at the experimental
result. These will often involve completely different experimental methods and the measurement of
different parameters. For example, one experiment might be performed in a steady-state condition while
another is performed in a transient condition. Different experiments may exploit different physical
properties of a material or different parameters of a process. For each alternative, there may be a different
data reduction equation with unique error propagation characteristics. We want to determine which
approach is most likely to yield the desired result, and which measurements may be the most critical if a
particular method is used. Once we have selected an experimental method, then we may want to
investigate instrumentation issues more closely. As we do so, we gradually move from the realm of
general uncertainty analysis to more detailed uncertainty analysis.
When applying general uncertainty analysis, it becomes necessary to assign uncertainties to the variables
that we intend to measure, even if we do not yet know the exact methods of measurement or specific
instrumentation that will be used. Coleman and Steele (1999) observed that there is a universal human
reluctance to estimate uncertainties, perhaps out of fear of using the wrong values. This can be overcome
by realizing that there is no correct value at this stage; we only seek to gain understanding by assuming
reasonable values. If one is still unsure of what value to use, a parametric analysis can be made in which
a range of values are used. Even ridiculously large or small values can be used, as they may help illustrate
the sensitivity or insensitivity of the uncertainty in the result to the uncertainties in the measured variables.
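Such a parametric analysis is easy to script. The sketch below sweeps an assumed head uncertainty across four decades in the weir-type example C = Q/(L h1^1.5) and shows how the resulting relative uncertainty in C responds; all numerical values are arbitrary illustrations:

```python
from math import sqrt

# Parametric sweep: how does the relative uncertainty of the result respond
# as the assumed head uncertainty U_h1 ranges from tiny to ridiculous?
# Fixed assumptions (illustrative): U_Q/Q = 0.5%, U_L/L = 0.1%, h1 = 0.3 m,
# with C = Q / (L * h1**1.5) as the data reduction equation.
def rel_UC(U_h1, rel_UQ=0.005, rel_UL=0.001, h1=0.3):
    """Relative uncertainty in C propagated from Q, L, and h1."""
    return sqrt(rel_UQ**2 + rel_UL**2 + (1.5 * U_h1 / h1)**2)

for U_h1 in (0.0001, 0.001, 0.01, 0.1):  # m, spanning four decades
    print(f"U_h1 = {U_h1:6.4f} m  ->  U_C/C = {100 * rel_UC(U_h1):6.2f} %")
```

The sweep makes the insensitivity at the small end, and the dominance at the large end, immediately visible.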
We will now illustrate the more detailed application of general uncertainty analysis to a nontrivial, real-
world example. Unfortunately, a single example cannot illustrate all of the nuances and possible outcomes
of a planning phase uncertainty analysis. For the reader seeking additional examples, those given by
Coleman and Steele (1999) are highly recommended.
In concept, the objective is to develop a calibration method that would allow one to compute the flow rate
through a gate from a measurement of upstream and downstream water level and the gate position. A
definition sketch for a radial gate is provided in Fig. 1.
[Figure 1. Definition sketch for a radial gate: sections 1, 2, and 3; velocity head v1²/2g; upstream energy
head H1 and depth y1; gate radius r and trunnion height a; gate opening w; jet thickness yj = δw at the vena
contracta; downstream depths y2 (free flow) and y3 (submerged flow).]
Clemmens et al. (2003) performed laboratory experiments on a model radial gate and developed a new
calibration method called the Energy-Momentum method. Two key parameters in this method are an
upstream energy loss and velocity distribution factor, 1+ξ, and an energy correction factor, Ecorr. To
provide better definition of the behavior of these parameters, a new series of experiments is now being
contemplated. We wish to use a general uncertainty analysis to explore the possible experiments that
might be performed. For this example we will focus on just the 1+ξ factor.
The parameter 1+ξ appears in the energy equation applied to the flow from the upstream pool through the
gate to the vena contracta point. When a gate is in a free flow condition, the calibration equation for the
gate (derived from the energy equation) is
Q = δ w bc sqrt[2g(H1 − δw)/(1 + ξ)]    (17)
where Q is the discharge, δ is the contraction coefficient, w is the vertical gate opening, bc is the gate
width, g is the acceleration of gravity, H1 is the energy head at section 1, and ξ is an energy loss coefficient
for losses that occur between sections 1 and 2. The velocity distribution at the vena contracta (section 2 in
Fig. 1) is assumed to be uniform, hence the constant 1 in the denominator. If the velocity distribution is
actually nonuniform, that will be accounted for in the ξ factor, so that 1+ξ accounts for the effects of both
velocity distribution and energy loss. We wish to perform experiments that will yield the values of 1+ξ.
The first question that arises in considering this problem is what level of uncertainty we should try to
achieve in determining 1+ξ? To answer this question we can first consider the final application of eq. (17)
for the purpose of flow measurement. We would like to be able to measure Q in the field with a relative
uncertainty of 2%, which is comparable to the rating uncertainty of flumes and weirs. If we estimate the
uncertainty in field measurements or estimates of δ, w, bc, and H1, we can solve for the allowable
uncertainty in 1+ξ. We will assume that the acceleration of gravity, g, is a constant known with negligible
uncertainty.
Eq. (17) is not in the form that allows the simplified development of the uncertainty propagation equation,
so we will need to work out the partial derivatives. The UMFs are
UMFδ = (δ/Q)(∂Q/∂δ) = 1 − δw/[2(H1 − δw)]    (18)

UMFw = (w/Q)(∂Q/∂w) = 1 − δw/[2(H1 − δw)]    (19)

UMFbc = (bc/Q)(∂Q/∂bc) = 1    (20)

UMFH1 = (H1/Q)(∂Q/∂H1) = H1/[2(H1 − δw)]    (21)

UMF(1+ξ) = [(1+ξ)/Q][∂Q/∂(1+ξ)] = −1/2    (22)
Note that we treat 1+ξ as a single parameter, since we expect to determine it experimentally in that way.
Determining the partial derivatives is somewhat tedious, but many terms cancel to make the resulting eqs.
(18) to (22) reasonably compact. Substituting the UMFs into the general uncertainty equation we obtain:
(UQ/Q)² = [1 − δw/(2(H1 − δw))]² (Uδ/δ)² + [1 − δw/(2(H1 − δw))]² (Uw/w)² + (Ubc/bc)²
+ [H1/(2(H1 − δw))]² (UH1/H1)² + (1/4)[U(1+ξ)/(1+ξ)]²    (23)
We see that in addition to estimating relative uncertainties of the measured variables, we will need to select
appropriate values of δ, w, and H1, since they are contained in the UMFs. The contraction coefficient δ
varies from about 0.6 to 0.8 at typical gate openings; a representative value of 0.7 can be used for this
analysis. The values of H1 and w can vary widely during normal gate operations and for gates of different
sizes, so we will choose a range of typical values and then solve for the relative uncertainty of 1+ξ for each
condition using a spreadsheet model. We stated earlier that the desired relative uncertainty in Q is 2%.
For the other variables we need to make reasonable estimates of their uncertainties. The contraction
coefficient, δ, is usually determined from empirical equations that are believed to have a relative
uncertainty of about 1%. The estimated relative uncertainty in gate opening, w, is also 1% for a field
application. The relative uncertainty in the gate width, bc, for a typical prototype gate is estimated to be
0.25% (includes variation in the width of a gate chamber over the height of a gate). Finally, the
uncertainty of upstream head measurement is estimated to be 6 mm (0.02 ft).
Inserting these values into eq. (23) we obtain the results shown in Table 1. We limit the gate opening, w,
to 0.66H1, since at larger values the gate leaf would not control the flow (critical depth being two-thirds of
the upstream head).
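The Table 1 style computation can be sketched as a small Python model of eq. (23): for each combination of H1 and gate opening w, the contributions of the other variables are subtracted from the 2% discharge-uncertainty budget, leaving the allowable relative uncertainty in the 1+ξ factor. The H1 and w values below are illustrative, not necessarily those of Table 1.

```python
from math import sqrt

# Planning-phase model: allowable relative uncertainty in (1 + xi) that
# keeps U_Q/Q at 2%, using the field uncertainty estimates from the text.
delta = 0.7            # contraction coefficient (representative value)
rel_UQ = 0.02          # target relative uncertainty in discharge
rel_Ud = 0.01          # contraction coefficient, ~1%
rel_Uw = 0.01          # gate opening, ~1%
rel_Ubc = 0.0025       # gate width, 0.25%
U_H1 = 0.006           # upstream head, 6 mm (absolute)

def allowable_rel_U1xi(H1, w):
    """Allowable U_(1+xi)/(1+xi), or None if the budget is infeasible."""
    umf_dw = 1 - delta * w / (2 * (H1 - delta * w))   # UMF for delta and w
    umf_H1 = H1 / (2 * (H1 - delta * w))              # UMF for H1
    budget = (rel_UQ**2
              - umf_dw**2 * (rel_Ud**2 + rel_Uw**2)
              - rel_Ubc**2
              - umf_H1**2 * (U_H1 / H1)**2)
    return 2 * sqrt(budget) if budget > 0 else None

for H1 in (0.5, 1.0, 2.0):                     # m, illustrative heads
    for w in (0.1 * H1, 0.3 * H1, 0.66 * H1):  # keep w <= 0.66 * H1
        r = allowable_rel_U1xi(H1, w)
        print(f"H1={H1:4.1f} m  w={w:5.2f} m  allowable U/(1+xi) = "
              + (f"{100*r:5.2f} %" if r else "infeasible"))
```

For the smallest openings the allowable uncertainty tightens, which is the trend that motivates the 2.5% design target discussed below.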
The results in Table 1 show that for small gate openings we must have a smaller uncertainty in 1+ξ to
achieve our goal of a 2% uncertainty in discharge measurement. We can conservatively conclude that we
should try to design our experiments so that we can determine 1+ξ with a relative uncertainty of 2.5% or
less.
Now we will consider the planned experiment. We rearrange the data reduction eq. (17) to solve for the
experimental result, 1+ξ:

1 + ξ = 2g(H1 − δw)(δ w bc/Q)²    (24)
In the laboratory we will most likely measure δw as a single quantity, the jet thickness, yj, using either a
point gage or a piezometer tap located at the vena contracta. This simplifies the data reduction equation to

1 + ξ = 2g(H1 − yj)(yj bc/Q)²    (25)
We will now determine the UMFs for this data reduction equation.
UMFH1 = [H1/(1+ξ)][∂(1+ξ)/∂H1] = H1/(H1 − yj)    (26)

UMFyj = [yj/(1+ξ)][∂(1+ξ)/∂yj] = (2H1 − 3yj)/(H1 − yj)    (27)

UMFbc = [bc/(1+ξ)][∂(1+ξ)/∂bc] = 2    (28)

UMFQ = [Q/(1+ξ)][∂(1+ξ)/∂Q] = −2    (29)
We see that, again, some of the UMFs contain the measured variables themselves, so we will need to
choose representative values of H1 and yj to perform the uncertainty analysis. Typically, H1 will be
significantly larger than yj, so we can quickly see that UMFH1 will be on the order of 1, while UMFyj is on
the order of 2. The complete uncertainty equation is
[U(1+ξ)/(1+ξ)]² = [H1/(H1 − yj)]² (UH1/H1)² + [(2H1 − 3yj)/(H1 − yj)]² (Uyj/yj)²
+ 4(Ubc/bc)² + 4(UQ/Q)²    (30)
In the facility that is available for the experiments, we have initially planned to conduct tests on a model
radial gate at heads ranging from 0.13 m up to 0.50 m, and gate openings ranging from 0.05 m to 0.20 m.
The jet thickness, yj, will be about two-thirds of the gate opening, so yj will vary from about 0.033 m to
0.17 m. We believe that we can construct and measure the width of the model gate chamber with a relative
uncertainty of 0.25%. The laboratory is equipped with a weighing tank for determining discharge whose
relative uncertainty is 0.1% or better. It is reasonable to assume that we can measure the upstream head
with an uncertainty of 0.5 mm with a point gage in a stilling well. Knowing from the previous analysis
that we need to achieve a relative uncertainty of 2.5% or better in the experimental result, 1+ξ, we can
solve for the required uncertainty in the measurement of yj. Table 2 shows the results. Note that we solve
for the absolute uncertainty, Uyj, not the relative uncertainty, since we expect variability in our
measurements of the jet thickness to be mostly independent of the actual jet thickness.
The results show that any tests carried out at very low gate openings and jet thicknesses will require
measurements of jet thickness that have an uncertainty of about 0.4 mm.
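A similar sketch solves the complete uncertainty equation for the required jet-thickness uncertainty, using the laboratory estimates quoted above (gate width 0.25%, discharge 0.1%, head 0.5 mm, target 2.5%). The (H1, yj) pairs are illustrative rather than the actual Table 2 test conditions.

```python
from math import sqrt

# Solve the complete uncertainty equation for the largest U_yj (absolute)
# that still meets the 2.5% target for U_(1+xi)/(1+xi).
rel_target = 0.025     # required U_(1+xi)/(1+xi)
rel_Ubc = 0.0025       # model gate width
rel_UQ = 0.001         # weighing-tank discharge
U_H1 = 0.0005          # point-gage head, 0.5 mm (absolute)

def required_U_yj(H1, yj):
    """Largest admissible U_yj (m), or None if the budget is infeasible."""
    budget = (rel_target**2
              - (H1 / (H1 - yj))**2 * (U_H1 / H1)**2
              - 4 * rel_Ubc**2
              - 4 * rel_UQ**2)
    if budget <= 0:
        return None
    return yj * (H1 - yj) / (2 * H1 - 3 * yj) * sqrt(budget)

for H1, yj in [(0.13, 0.033), (0.30, 0.10), (0.50, 0.17)]:
    U = required_U_yj(H1, yj)
    print(f"H1={H1:4.2f} m  yj={yj:5.3f} m  required U_yj = {1000*U:.2f} mm")
```

The sub-millimeter requirement at the smallest head and jet thickness is what drives the concern about measurement difficulty at low gate openings.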
We anticipate difficulty in reaching this objective, but believe that we can probably achieve a measurement
uncertainty for yj of 0.5 mm. Using that value, we can compute the relative uncertainty in 1+ over the
potential range of test conditions (Table 3), and the uncertainty percentage contributions (UPCs) associated
with each measured variable (Table 4).
In almost all test cases the uncertainty in the measurement of yj is responsible for the majority of the
uncertainty in the result. In the tests to be carried out at very small heads, the uncertainty in head
measurement becomes somewhat significant. Uncertainty in the gate width is also a significant factor in
some test conditions, but primarily in those where the total uncertainty in the result is relatively low, so
this is probably not a serious concern.
The analysis performed here shows that uncertainty in the measurement of yj may seriously affect the
ability to achieve the test objective. It may be necessary to focus effort on reducing the uncertainty of that
measurement through the selection of measurement methods and equipment. One might also conclude that
performing tests at very small heads and gate openings is not worthwhile, although there might be other
good reasons for keeping such tests in the plan, such as changes in hydraulic phenomena that only occur at
those test conditions. The key point is that the uncertainty analysis allows one to make informed decisions
about test design, rather than relying on intuition or discovering problems by trial.
When the data reduction equation is complicated, the partial derivatives can be estimated numerically
from finite differences:

∂r/∂Xi ≈ [r(Xi + ΔXi) − r(Xi)]/ΔXi

with the values of all other variables in the data reduction equation held constant while varying Xi. To
implement this, one must choose a value for ΔXi, such as 0.01Xi, and compute the derivative, then reduce
ΔXi to one-half the starting value and compute the derivative again. Continue reducing ΔXi until the
estimated value of the derivative converges. This method can be easily implemented in commercial
spreadsheet or mathematical analysis software.
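The step-halving procedure described above can be implemented in a few lines. The sketch below (the function names are ours, not from the text) checks a numerically estimated UMF against the analytic result for the gate equation:

```python
from math import sqrt

def finite_diff_deriv(r, x, i, rel_step=0.01, tol=1e-6, max_halvings=40):
    """Estimate dr/dX_i by a forward difference, halving the step until
    successive estimates agree to within tol (relative)."""
    dx = rel_step * x[i]
    prev = None
    for _ in range(max_halvings):
        xp = list(x)
        xp[i] = x[i] + dx
        deriv = (r(xp) - r(x)) / dx
        if prev is not None and abs(deriv - prev) <= tol * max(1.0, abs(deriv)):
            return deriv
        prev = deriv
        dx *= 0.5
    return prev

# Check against the analytic UMF for the gate opening w in
# Q = delta * w * bc * sqrt(2g(H1 - delta*w) / (1 + xi)).
g = 9.81
def Q(v):
    delta, w, bc, H1, one_plus_xi = v
    return delta * w * bc * sqrt(2 * g * (H1 - delta * w) / one_plus_xi)

x = [0.7, 0.10, 1.0, 0.50, 1.05]   # delta, w (m), bc (m), H1 (m), 1+xi
umf_w = x[1] / Q(x) * finite_diff_deriv(Q, x, 1)
analytic = 1 - x[0] * x[1] / (2 * (x[3] - x[0] * x[1]))
print(f"numerical UMF_w = {umf_w:.5f}, analytic = {analytic:.5f}")
```

Agreement between the two values gives confidence in the numerical scheme before applying it to equations with no convenient closed-form derivatives.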
Summary
This chapter has demonstrated the use of general uncertainty analysis for the planning of experiments. In
the planning phase we are trying to determine whether a proposed experiment can satisfy our objectives. If
alternative data reduction equations and experimental methods are available, we consider the alternatives
and whether one may be more effective than another. General uncertainty analysis using UMFs also gives
us a first impression of which measurements may be most critical for obtaining a desired level of
uncertainty in the experimental result. As we refine our experimental plan, we can analyze the UPCs to
see which measurements are the most critical, considering both the manner in which they propagate into
the end result (indicated by the UMFs) and the actual values of the measurement uncertainties.
Several results are possible from an uncertainty analysis performed in the planning phase. Infeasible
approaches may be discarded and feasible methods pursued, experimental methods may be altered,
instrumentation needs can be clearly identified, and in some cases experimental programs that will never
yield useful results may be abandoned. Regardless of the outcome, the value obtained from the
expenditure of time and money can be increased, usually in an amount far greater than the cost of
performing the uncertainty analysis. In addition, time spent on uncertainty analysis in the planning phase
will lay the foundation for the more detailed uncertainty analysis that follows once experiments are
underway.
References
Clemmens, A.J., Strelkoff, T.S., and Replogle, J.A. (2003). Calibration of submerged radial gates.
Journal of Hydraulic Engineering, Vol. 129, No. 9, pp. 680-687.
Coleman, H.W., and Steele, W.G. (1999). Experimentation and Uncertainty Analysis for Engineers, 2nd
Ed., John Wiley & Sons, New York, 275 pp.
Kline, S.J. (1985a). The purposes of uncertainty analysis. Journal of Fluids Engineering, Transactions of
the ASME, June 1985.
Kline, S.J. (1985b). Closure to 1983 Symposium on Uncertainty Analysis. Journal of Fluids
Engineering, Transactions of the ASME, June 1985.
Moffat, R.J. (1985). Using uncertainty analysis in the planning of an experiment. Journal of Fluids
Engineering, Vol. 107, No. 6, pp. 173-178.