
CHAPTER (2)

Statistical Treatment of Experimental Data

Introduction
This laboratory course concerns making measurements in various heat transfer geometries and
relating those measurements to derived equations. The objective is to determine how well the
derived equations describe the physical phenomena we are modeling. In doing so, we will need to
make physical measurements, and it is essential that we learn how to practice good techniques in
making scientific observations and in obtaining measurements. We are making quantitative
estimates of physical phenomena under controlled conditions.

Measurements
There are certain primary desirable characteristics involved when making these physical
measurements. We wish that our measurements would be:
a) Observer-independent
b) Consistent
c) Quantitative

So when reporting a measurement, we will be stating a number, but what's in a number? A single
number, in isolation, has almost no significance; the implied question is, is it large or small?
Is 26 a large number? Is 6 × 10^5 a large number? The answer requires another number for reference
purposes. Is 26 a large number compared to 6 × 10^5? Furthermore, we will have to add a dimension,
and this leads to another question: are we talking about a pure number or a dimensional physical
quantity? We know from experience that a physical value without a unit has no significance.
In reporting measurements, another question arises as to how we should report data; i.e., how many
significant digits should we include? Which physical quantity is associated with the
measurement, and how precise should or could it be?
For example, does 2.54 cm = 2.54001 cm? It is impossible to answer this question without some
measure of the expected natural variation in the measurement. So it is prudent to scrutinize the
claimed or implied accuracy of a measurement.

Performing experiments
In the course of performing an experiment, we first would develop a set of questions or a hypothesis,
or put forth the theory. We then identify the system variables to be measured or controlled. The
apparatus would have to be developed and the equipment set up in a particular way. An
experimental protocol, or procedure, is established and data are taken.
Several features of this process are important. We want accuracy in our measurements, but
increased accuracy generally corresponds to an increase in cost. We want the experiments to be
reproducible, and we seek to minimize errors. Of course we want to address all safety issues and
regulations.
After we run the experiment, and obtain data, we would analyze the results, draw conclusions, and
report the results.

Estimation
In some situations, there is no time to run formal experiments to answer a question or verify an
equation. In such cases, it is often useful to make careful estimates. These can help to determine the
ranges of parameters to investigate in the experiments. Also, estimates are necessary for partial
validation of experimental results.
Consider, for example, that we must obtain a quick estimate of the density of a rock. We observe
that it sinks in water, so it must be denser than water, 1 000 kg/m³. As an upper bound, we might
suggest that it is less dense than steel, at roughly 7 800 kg/m³. So if we conduct an experiment and obtain a
value outside this range, we would be suspicious and check the equipment and the experimental
approach.

Comments on Performing Experiments


1. Keep in mind the fundamental set of questions or hypotheses.
2. Make sure the experiment design will answer the right questions.
3. Use estimation as a reality check, but do not let it affect objectivity.
4. Consider all possible safety issues.
5. Design for repeatability and the appropriate level of accuracy.

Definitions of Error and Uncertainty


The laboratory is designed to provide the students with experiments that verify the descriptive
equations we derive to model physical phenomena. The laboratory experience involves making
measurements of different parameters. However, we have to ask if the measurements we make are
accurate and/or precise. In the following paragraphs, we will examine our measurement methods
and define terms that apply. These terms include error, uncertainty, accuracy, and precision.

1- Error
The error E is the difference between a TRUE value, x, and a MEASURED value, xi:

E = x - xi (1)
There is no error free measurement. All measurements contain some error. How error is defined and
used is important. The significance of a measurement cannot be judged unless the associated error has
been reliably estimated. In Equation 1, because the true value of x is unknown, then the error E is
unknown as well. This is always the case.
The best we can hope for is to obtain the estimate of a likely error, which is called an uncertainty.
For multiple measurements of the same quantity, a mean value, x̄ (also called a nominal value),
can be calculated. Hence, the error becomes:

E = x − x̄

However, because x is unknown, E is still unknown.

2- Uncertainty
The uncertainty, Δx, is an estimate of the error E, expressed as a possible range of errors:

Δx = estimate of E    (2)

For example, suppose we measure a velocity and report the result as

V = 110 m/s ± 5 m/s

The value of ±5 m/s is defined as the uncertainty. Alternatively, suppose we report the result as

V = 110 m/s ± 4.5%

The value of ±4.5% is defined as the relative uncertainty. It is common to hear someone speak
of "experimental errors," when the correct terminology should be "uncertainty." Both terms are
used in everyday language, but it should be remembered that the uncertainty is defined as an
estimate of errors.

3- Accuracy
Accuracy is a measure (or an estimate) of the maximum deviation of the measured values, xi,
from the TRUE value, x. So a question like:

Are the measured values accurate?

Can be reformulated as

Are the measured values close to the true value?

Accuracy can be defined as

accuracy = estimate of max|x − xi|    (3)

Again, because the true value x is unknown, the value of the maximum deviation is unknown.
The accuracy, then, is only an estimate of the worst error. It is usually expressed as a percentage;
e.g., accurate to within 5%.

Example 1

Consider a measurement that is reported as:

p = 50 psi ± 5 psi

What is the accuracy of the pressure probe used for making this measurement?

Solution:
The relative uncertainty is calculated to be

Δp/p = 5/50 = 0.10

Thus the accuracy may be estimated to be (around) 10%.

Example 2
A sensor is claimed to be accurate to 5%. What will be the uncertainty (in psi) in the measurement
of a pressure of 50 psi?

Solution:
The accuracy (relative uncertainty) is 5%, so

ΔP/P = 5% = 0.05

ΔP = 0.05 × P = 0.05 × 50 = 2.5 psi

The uncertainty in p is 2.5 psi, so the measurement should be reported as follows:

p = 50 psi ± 2.5 psi.

4- Precision
Precision, on the other hand, is a measure (or an estimate) of consistency (or repeatability). Thus it is
the maximum deviation of a reading (measurement), xi, from the mean value, x̄:

precision = estimate of max|x̄ − xi|
Note the difference between accuracy and precision.
Regarding the definition of precision, there is no true value identified, only the mean value (or
average) of a number of repeated measurements of the same quantity. Precision is a characteristic of
the measurement. In everyday language we often conclude that accuracy and precision are the
same, but in error analysis there is a difference. So a question like:

Are the measured values precise?


Can be reformulated as
Are the measured values close to each other?

As an illustration of the concepts of accuracy and precision, consider the dart board shown in the
accompanying figures. Let us assume that the blue darts show the measurements taken, and that
the bull's-eye represents the value to be measured.
When all measurements are clustered about the bull's-eye, then we have very accurate and
also precise results (Figure 1a).
When all measurements are clustered together but not near the bull's-eye, then we have
very precise but not accurate results (Figure 1b).
When the measurements are not clustered together and not near the bull's-eye, but their
nominal value or average is at the bull's-eye, then we have accurate (on average) but not
precise results (Figure 1c).
When the measurements are not clustered together and not near the bull's-eye, and their
average is not at the bull's-eye, then we have neither accurate nor precise results
(Figure 1d).
So, we conclude that accuracy refers to the correctness of the measurements, while
precision refers to their consistency.

Classification of Errors
1- Random Error
A random error is one that arises from a random source. Suppose for example that a measurement
is made many thousands of times using different instruments and/or observers and/or samples. We
would expect to have random errors affecting the measurement in either direction (±) roughly
the same number of times.
Such errors can occur in any scenario:
1. Electrical noise in a circuit generally produces a voltage error that may be positive or
negative by a small amount.
2. By counting the total number of pennies in a large container, one may occasionally pick up
two and count only one (or vice versa).
The question arises: how can we reduce random errors? There are no random-error-free
measurements, so random errors cannot be eliminated, but their magnitude can be reduced.
On average, random errors tend to cancel out.
2- Systematic Error
A systematic error is one that is consistent; that is, it happens systematically. Typically, the human
components of measurement systems are responsible for systematic errors. For example,
systematic errors are common in reading the pressure indicated by an inclined manometer.
Consider an experiment involving dropping a ball from a given height. We wish to measure
the time it takes for the ball to move from where it is dropped to when it hits the ground. We
might repeat this experiment several times. However, the person using the stopwatch may
consistently have a tendency to wait until the ball bounces before the watch is stopped. As a
result, the time measurement might be systematically too long.
So, systematic errors can be anticipated and/or measured, and then corrected. This
can be done even after the measurements are made.
The question arises: how can we reduce systematic errors?
This can be done in several ways:
1. Calibrate the instruments being used by checking with a known standard. The standard
can be what is referred to as:
a. A primary standard obtained from the National Institute of Standards and
Technology (NIST, formerly the National Bureau of Standards); or
b. A secondary standard (with a higher accuracy instrument); or
c. A known input source.
2. Make several measurements of a certain quantity under varying test conditions, such as
different observers and/or samples and/or instruments.
3. Check the apparatus.
4. Check the effects of external conditions
5. Check the coherence of results.
A repeatability test using the same instrument is one way of gaining confidence, but a far more
reliable way is to use an entirely different method to measure the desired quantity.

Uncertainty Analysis
Determining Uncertainty.
When we state a measurement that we have taken, we should also state an estimate of the
error, or the uncertainty. As a rule of thumb, we use a 95% relative uncertainty or, stated
otherwise, a 95% confidence interval.
Suppose, for example, that we report the height of a desk to be 38 inches ± 1 inch. This
suggests that we are 95% sure that the desk is between 37 and 39 inches tall. When
reporting relative uncertainty, we generally restrict the result to one or two
significant figures. When reporting uncertainty in a measurement using units, we use the same
number of significant figures as the measured value. Examples are shown in Table 1:

Table 1. Relative and absolute uncertainty

Relative uncertainty      Uncertainty in units
3.45 cm ± 8.5%            5.23 cm ± 0.143 cm
6.4 N ± 2.0%              2.5 m/s ± 0.082 m/s
2.3 psi ± 0.1900%         9.25 in ± 0.2 in
9.2 m/s ± 8.598%          3.2 N ± 0.1873 N

The previous table shows uncertainty in measurements, but determining uncertainty is usually
difficult. So, as a rule of thumb, we use a 95% confidence interval, which gives us an estimate.
Now, the estimate of uncertainty depends on the measurement type: single-sample measurements,
measurements of a function of more than one variable, or multi-sample measurements.
Single-sample measurements
Single-sample measurements are those in which the uncertainties cannot be reduced by
repetition. As long as the test conditions are the same (i.e., same sample, same instrument and
same observer), the measurements (for fixed variables) are single-sample measurements,
regardless of how many times the reading is repeated.

Measurement of a Function of More Than One Independent Variable


In many cases, several different quantities are measured in order to calculate another quantity, a
dependent variable. For example, the surface area of a rectangle is
calculated using both its measured length and its measured width. Such a situation involves a
propagation of uncertainties.

Multi-Sample Measurements.
Multi-sample measurements involve a significant number of data points collected from enough
experiments so that the reliability of the results can be assured by a statistical analysis.
In other words, the measurement of a significant number of data points of the same quantity (for
fixed system variables) under varying test conditions (i.e., different samples and/or different
instruments) will allow the uncertainties to be reduced by the sheer number of observations.

Single-sample uncertainty
It is often simple to identify the uncertainty of an individual measurement. It is necessary to
consider the limit of the scale readability, and the limit associated with applying the
measurement tool to the case of interest.
Consider some measuring device that has as its smallest scale division Δx. The smallest scale
division limits our ability to measure something with any more accuracy than Δx/2. The ruler of
Figure 2a, as an example, has 1/4 inch as its smallest scale division. The diameter of the circle
is between 4 and 4 1/4 inches. So we would correctly report that

D = 4 1/8 ± 1/8 in.


This is the correct reported measurement for Figure 2a and Figure 2b, even though the circles are
of different diameters. We can guesstimate the correct measurement, but we cannot report
something more accurately than our measuring apparatus will display. This does not mean that the
two circles have the same diameter, merely that we cannot measure the diameters with a greater
accuracy than the ruler we use will allow.

FIGURE 2. A ruler used to measure the diameter of a circle.

The ruler depicted in the figure could be any arbitrary instrument with finite resolution. The
uncertainty due to the resolution of any instrument is one half of the smallest increment displayed.
This is the most likely single sample uncertainty. It is also the most optimistic because reporting
this value assumes that all other sources of uncertainty have been removed.

Uncertainty in Measurement of a Function of Independent Variables


The concern in this measurement is in the propagation of uncertainties. In most experiments,
several quantities are measured in order to calculate a desired quantity.

Example 3
Estimate the gravitational constant g by dropping a ball from a known height, L, and measuring the fall time, t.
The approximate equation would be

g = 2L/t²    (4)

Suppose we measured: L = 50.00 ± 0.01 m and t = 3.1 ± 0.5 s.


Solution
Based on the equation, we have:

g = 2L/t² = 2(50.00)/(3.1)² ≈ 10.4 m/s²

We now wish to estimate the uncertainty Δg in our calculation of g.

The uncertainty Δg will depend on the uncertainties in the measurements of L and t. Let us examine
the worst cases. These may be calculated as:

g_min = 2(L − ΔL)/(t + Δt)² = 2(49.99)/(3.6)² ≈ 7.7 m/s²
g_max = 2(L + ΔL)/(t − Δt)² = 2(50.01)/(2.6)² ≈ 14.8 m/s²

The confidence interval around g then is:

7.7 m/s² ≤ g ≤ 14.8 m/s²
Now it is unlikely for all single-sample uncertainties in a system to simultaneously be the worst
possible. Some average or norm of the uncertainties must instead be used in estimating a
combined uncertainty for the calculation of g.

Definition of the Euclidean Norm


In general, if the quantity f is determined by an equation involving n independent variables xi:

f = f(x1, x2, ..., xn)    (5)

and the uncertainty in each independent measurement variable xi is called Δxi, then the uncertainty
in f is given by:

Δf = sqrt[ Σ (from i = 1 to n) ( (∂f/∂xi) Δxi )² ]    (6)

We will need a process, or an algorithm, for calculating Δf.

The procedure is as follows:

1. Write the expression for f in terms of its independent variables, xi.

2. Evaluate each partial derivative term (∂f/∂xi)Δxi separately.

3. Calculate the Euclidean norm, Δf.

4. Calculate the relative uncertainty Δf/f.
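As a computational aside (not part of the original text), the four-step procedure can be sketched in a short Python function. The function name and the use of a central-difference approximation for the partial derivatives are illustrative choices rather than anything prescribed above; with analytic derivatives the result is the same.

```python
import math

def euclidean_norm_uncertainty(f, values, uncertainties, rel_step=1e-6):
    """Propagate uncertainties through f(x1, ..., xn) using the Euclidean norm.
    Partial derivatives are approximated by central differences."""
    df_squared = 0.0
    for i, (x, dx) in enumerate(zip(values, uncertainties)):
        h = rel_step * (abs(x) if x != 0 else 1.0)    # small step for the derivative
        hi, lo = list(values), list(values)
        hi[i] += h
        lo[i] -= h
        dfdx = (f(*hi) - f(*lo)) / (2.0 * h)          # ~ partial f / partial x_i
        df_squared += (dfdx * dx) ** 2
    return math.sqrt(df_squared)

# Usage with the data of Example 3: g = 2L/t^2, L = 50.00 +/- 0.01 m, t = 3.1 +/- 0.5 s
g = lambda L, t: 2.0 * L / t ** 2
dg = euclidean_norm_uncertainty(g, [50.00, 3.1], [0.01, 0.5])
print(g(50.00, 3.1), dg, dg / g(50.00, 3.1))   # ~10.4 m/s^2, ~3.4 m/s^2, ~32%
```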

Example 4
Calculate the Euclidean norm of the uncertainty in the gravitational constant g from Example 3.

Solution
For determining the gravitational constant:

1. Write the expression for g in terms of its independent variable(s):

   g = 2L/t²

2. Evaluate each partial derivative term separately:

   ∂g/∂L = 2/t²        ∂g/∂t = −4L/t³

3. Calculate the Euclidean norm:

   Δg = sqrt[ (2ΔL/t²)² + (4L Δt/t³)² ]

   where we measured L = 50.00 ± 0.01 m and t = 3.1 ± 0.5 s.

4. Alternatively, calculate the relative uncertainty Δg/g:

   Δg/g = sqrt[ (ΔL/L)² + (2Δt/t)² ]

Note that the expression for Δg/g is simpler than that for Δg. Also, in the Δg/g expression, the
individual terms are dimensionless. This is convenient if quantities are originally given in %, or if
the units are incompatible.

Now, in calculating g and Δg/g, we found:


g = 10.4 m/s²

and

Δg/g ≈ 32%

So, the measurement should be reported as:


g = 10 m/s² ± 32%

This is an example of a bad experiment, or poor results.

Example 5
Suppose we discard the ice reference junction and connect the Type-J thermocouple directly to the
instrument. We use a thermometer to record the temperature of the instrument face where the
thermocouple is attached, and we read 39 °C ± 1 °C. In our testing, we read a value of 3.456 ± 0.002 mV
at the meter, and we want to know what the unknown temperature Tx is.
The answer is found by realizing that the meter is creating a virtual thermocouple junction at 39 °C,
which reduces the voltage readout, and we may compute that offset as

Eoff = Ea + αref (Tref − Ta)

More simply, since the reference temperature is 39 °C, we may look in the table and find that the
connection is creating the equivalent of an Eoff = 2.006 mV offset to the reading for Tx. If the
configuration were as shown in the accompanying figure, the reading would then be

E = Eact + Eoff = 3.456 mV + 2.006 mV = 5.462 mV

The thermocouple table can be quickly scanned, and the temperature Tx is found to lie between
103 °C and 104 °C. We linearly interpolate and find that it is

Tx = 103 + (5.462 − 5.432)(1 °C)/(5.487 − 5.432) = 103 + 0.030/0.055 = 103.55 °C

Unfortunately, we do not yet know the uncertainty of this measurement. We do know that the
measurement is a function of the voltmeter's accuracy and that of the thermometer. We can write
that

Tx = T1 + (1/αx)(E − E1) = T1 + (Eact + Ea + αref(Tref − Ta) − E1)/αx

The variables in this equation may be written as Tx = F(Tref, Eact, ...), and applying the RSS
(root-sum-square) method, the uncertainty of the measurement is

U_Tx = sqrt[ (∂Tx/∂Tref · U_Tref)² + (∂Tx/∂Eact · U_Eact)² ] = sqrt[ (αref/αx · U_Tref)² + (U_Eact/αx)² ]

Numerically, with U_Tref = 1 °C and U_Eact = 0.002 mV, this is approximately 0.93 °C.

Figure 2. Simpler measurement system.


It is thus obvious that the greatest error arises from the thermometer's inaccuracy, not that of the
voltmeter.
There are many instances where thermocouples are connected directly to the measuring instrument,
and this illustrates how important it is to know the temperature of the connection panel. If you use
different values for the accuracy of the thermometer, Tref, you quickly discover that the accuracy of
measuring Tx is essentially the accuracy of Tref. Thus, we may write

Tx = 103.55 °C ± 0.93 °C
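A small Python sketch of this calculation may help; it is illustrative only. The table entries (5.432 mV at 103 °C, 5.487 mV at 104 °C, and the 2.006 mV offset at 39 °C) are taken from the example, while the reference-junction slope alpha_ref is an assumed value of roughly 0.052 mV/°C and should be read from an actual Type-J table.

```python
import math

E_act, U_E = 3.456, 0.002         # meter reading, mV, and its uncertainty
T_ref, U_Tref = 39.0, 1.0         # panel temperature, C, and thermometer uncertainty
E_off = 2.006                     # equivalent offset voltage at 39 C, mV (from the table)
T1, E1, E2 = 103.0, 5.432, 5.487  # bracketing table entries, C and mV
alpha_x = (E2 - E1) / 1.0         # local table slope, mV per C (~0.055)
alpha_ref = 0.052                 # assumed slope near the reference temperature, mV/C

E = E_act + E_off                 # total equivalent junction voltage
T_x = T1 + (E - E1) / alpha_x     # linear interpolation, ~103.55 C

# RSS (root-sum-square) combination of the two dominant uncertainty sources
U_Tx = math.sqrt((alpha_ref / alpha_x * U_Tref) ** 2 + (U_E / alpha_x) ** 2)
print(f"Tx = {T_x:.2f} C +/- {U_Tx:.2f} C")  # ~103.55 C, ~0.9 C, dominated by the thermometer
```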
Example 6
We are provided with a perfect parallelepiped of dimensions a x b x c of an unknown material. It is
desired to determine the density of the material and also the uncertainty in this experimental
determination of the density. We will determine the density by measuring the dimensions of the
parallelepiped with a ruler, measuring its mass with a scale, and using the definition

ρ = m/V = m/(abc)    (3)

Then the uncertainty becomes

dρ = sqrt[ (∂ρ/∂m dm)² + (∂ρ/∂a da)² + (∂ρ/∂b db)² + (∂ρ/∂c dc)² ]    (4)

Evaluating the partial derivatives,

∂ρ/∂m = 1/(abc)    (5a)

∂ρ/∂a = −m/(a²bc)    (5b)

and similarly for b and c. Now substituting,

dρ = sqrt[ (dm/(abc))² + (m da/(a²bc))² + (m db/(ab²c))² + (m dc/(abc²))² ]    (6)

We note that

ρ = m/(abc)    (7)

and rearrange to get

dρ = ρ sqrt[ (dm/m)² + (da/a)² + (db/b)² + (dc/c)² ]    (8)

Next, we specify the uncertainty in our measurements. For example,

dm = 0.5 gm = 5 × 10^-4 kg
da = db = dc = 0.5 mm = 5 × 10^-4 m

Finally, using our measurements, say

m = 100 gm = 0.1 kg
a = 10 cm = 0.1 m
b = 5 cm = 0.05 m
c = 5 cm = 0.05 m

with

ρ = 0.1/[(0.1)(0.05)(0.05)] = 400 kg/m³

We determine the numerical value of the uncertainty:

dρ = 400 sqrt[ (5×10^-4/0.1)² + (5×10^-4/0.1)² + (5×10^-4/0.05)² + (5×10^-4/0.05)² ]
   = 400 × (5×10^-4/0.1) × sqrt[ 1 + 1 + 2² + 2² ] = 6.32 kg/m³

dρ ≈ 6.32 kg/m³
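For completeness, a minimal Python check of these numbers, using the relative form of the propagated uncertainty derived above; it is a sketch and not part of the original example.

```python
import math

# Measurements from Example 6 (SI units)
m, dm = 0.100, 5e-4   # mass, kg
a, da = 0.10, 5e-4    # side a, m
b, db = 0.05, 5e-4    # side b, m
c, dc = 0.05, 5e-4    # side c, m

rho = m / (a * b * c)  # density from the definition rho = m/(abc): 400 kg/m^3

# Relative form of the propagated uncertainty
drho = rho * math.sqrt((dm / m) ** 2 + (da / a) ** 2 + (db / b) ** 2 + (dc / c) ** 2)
print(rho, drho)       # 400 kg/m^3, ~6.3 kg/m^3
```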

Uncertainty in Multi-Sample Measurements


When a set of readings is taken in which the values vary slightly from each other, the
experimenter is usually concerned with the mean of all readings. If each reading is denoted by xi
and there are n readings, then the arithmetic mean value is given by:

x̄ = (1/n) Σ xi    (7)

Deviation
The deviation of each reading is defined by:

di = xi − x̄    (8)

The arithmetic mean deviation is defined as:

d̄ = (1/n) Σ di

Note that the arithmetic mean deviation is zero:

d̄ = (1/n) Σ (xi − x̄) = x̄ − x̄ = 0

Standard Deviation: The standard deviation is given by:

σ = sqrt[ (1/(n−1)) Σ (xi − x̄)² ]    (9)
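A short, illustrative Python helper for the mean and standard deviation defined above (using the n − 1 divisor of Equation 9); the readings shown are those used in the Chauvenet example later in this chapter.

```python
import math

def mean_and_std(readings):
    """Arithmetic mean and sample standard deviation with the n-1 divisor."""
    n = len(readings)
    xbar = sum(readings) / n
    deviations = [x - xbar for x in readings]          # these sum to zero
    s = math.sqrt(sum(d ** 2 for d in deviations) / (n - 1))
    return xbar, s

print(mean_and_std([2.41, 2.42, 2.43, 2.43, 2.44, 2.44, 2.45, 2.46, 2.47, 4.85]))
# (2.68, ~0.763) -- the values used in the Chauvenet calculation of Table 5
```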
Due to random errors, experimental data are dispersed in what is referred to as a bell distribution,
known also as a Gaussian or Normal Distribution, and depicted in Figure 3.

Figure 3. Gaussian or Normal Distribution

The Gaussian or Normal Distribution is what we use to describe the distribution followed by
random errors. A graph of this distribution is often referred to as the bell curve, as it looks like
the outline of a bell. The peak of the distribution occurs at the mean of the random variable, and
the standard deviation is a common measure of how fat this bell curve is. Equation 10 is called
the Probability Density Function for any continuous random variable x:

f(x) = [1/(σ sqrt(2π))] exp[ −(x − x̄)²/(2σ²) ]    (10)

The mean and the standard deviation are all the information necessary to completely describe
any normally distributed random variable.
Integrating under the curve of Figure 3 over various limits gives some interesting results.
1. Integrating under the curve of the normal distribution from negative to positive infinity, the
area is 1.0 (i.e., 100%). Thus the probability for a reading to fall in the range −∞ to +∞ is 100%.
2. Integrating over a range within ±σ from the mean value, the resulting value is 0.6826. The
probability for a reading to fall in the range of ±σ is about 68%.
3. Integrating over a range within ±2σ from the mean value, the resulting value is 0.954. The
probability for a reading to fall in the range of ±2σ is about 95%.
4. Integrating over a range within ±3σ from the mean value, the resulting value is 0.997. The
probability for a reading to fall in the range of ±3σ is about 99.7%.

Table 2. Probability for Gaussian Distribution (tabulated in any statistics book)

Probability    Range about the mean
50%            ±0.6745σ
68.3%          ±1σ
86.6%          ±1.5σ
95.4%          ±2σ
99.7%          ±3σ
Estimating Uncertainty.
We can now use the probability function to help in determining the accuracy of data obtained in
an experiment. We use the uncertainty level of 95%, which means that we have a 95% confidence
interval. In other words, if we state that the uncertainty is Δx, we suggest that we are 95% sure
that any reading xi will be within the range of ±Δx of the mean. Thus, the probability of a sample
chosen at random of being within the range ±2σ of the mean is about 95%. Uncertainty then is
defined as twice the standard deviation:

Δx = 2σ
Example 7:
An alloy manufacturer claims a modulus of elasticity of 40 ± 2 kPa. How is that to be interpreted?
Solution:
The general rule of thumb is that ±2 kPa would represent a 95% confidence interval. That is, if
we randomly select many samples of this manufacturer's alloy, we should find that 95% of the
samples meet the stated limit of 40 ± 2 kPa.
Now it is possible that we can find a sample that has a modulus of elasticity of 37 kPa; however,
it is very unlikely.

Example 8
If we assume that variations in the product follow a normal distribution, and that the modulus of
elasticity is within the range 40 ± 2 kPa, then what is the standard deviation, σ?
Solution
The uncertainty at the 95% confidence level is 2 kPa = 2σ; thus σ = 1 kPa.

Example 9
Assuming that the modulus of elasticity is 40 ± 2 kPa, estimate the probability of finding a
sample from this population with a modulus of elasticity less than or equal to 37 kPa.
Solution
With σ = 1 kPa, we are seeking the value of the integral under the bell-shaped curve over the
range −∞ to x̄ − 3σ. Thus, the probability that the modulus of elasticity is less than 37 kPa is:

P(E < 37 kPa) = (100 − 99.7)/2 = 0.15%
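The same probability can be checked numerically from the normal cumulative distribution; the following sketch uses the error function from Python's standard library and is not part of the original example.

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative probability of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Example 9: mean 40 kPa, sigma = 1 kPa; probability of a sample below 37 kPa
p = normal_cdf(37.0, 40.0, 1.0)
print(f"P(E < 37 kPa) = {100 * p:.2f}%")   # ~0.13%, close to the 0.15% read from Table 2
```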

Statistically Based Rejection of Bad Data (Chauvenet's Criterion)


Occasionally, when a sample of n measurements of a variable is obtained, there may be one
or more that appear to differ markedly from the others. If some extraneous influence or mistake
in experimental technique can be identified, these bad data or wild points can simply be
discarded. More difficult is the common situation in which no explanation is readily available.
In such situations, the experimenter may be tempted to discard the values on the basis that
something must surely have gone wrong. This temptation must be resisted, since such data may be significant either in terms of the
phenomena being studied or in detecting flaws in the experimental technique. On the other
hand, one does not want an erroneous value to bias the results. In this case, a statistical criterion
must be used to identify points that can be considered for rejection. There is no other
justifiable method to throw away data points.
One method that has gained wide acceptance is Chauvenet's criterion; this technique defines
an acceptable scatter, in a statistical sense, around the mean value from a given sample of n
measurements. The criterion states that all data points should be retained that fall within a band
around the mean that corresponds to a probability of 1 − 1/(2n). In other words, data points can be
considered for rejection only if the probability of obtaining their deviation from the mean is less
than 1/(2n). This is illustrated in Figure 4.

FIGURE 4. Rejection of bad data.

The probability 1 − 1/(2n) for retention of data distributed about the mean can be related to a
maximum deviation dmax away from the mean by using a Gaussian probability table. For the
given probability, the nondimensional maximum deviation dmax/sx can be determined from the table,
where sx is the standard deviation (precision index) of the sample.

All measurements that deviate from the mean by more than dmax can be rejected. A new mean
value and a new precision index can then be calculated from the remaining measurements. No
further application of the criterion to the sample is allowed.
Using Chauvenet's criterion, we say that the values xi which are outside of the range

x̄ − C·sx ≤ xi ≤ x̄ + C·sx    (11)

are clearly errors and should be discarded for the analysis. Such values are called outliers. The
constant C may be obtained from Table 3. Note that Chauvenet's criterion may be applied only
once to a given sample of readings.

TABLE 3. Constants to use in Chauvenet's criterion, Equation 11.

Number, n    dmax/sx = C
3            1.38
4            1.54
5            1.65
6            1.73
7            1.80
8            1.87
9            1.91
10           1.96
15           2.13
20           2.24
25           2.33
50           2.57
100          2.81
300          3.14
500          3.29
1,000        3.48
The methodology for identifying and discarding outlier(s) is as follows:
1. After running an experiment, sort the outcomes from lowest to highest value. The suspect
outliers will then be at the top and/or the bottom of the list.
2. Calculate the mean value and the standard deviation.
3. Using Chauvenet's criterion, discard outliers.
4. Recalculate the mean value and the standard deviation of the smaller sample and stop. Do
not repeat the process; Chauvenet's criterion may be applied only once.

Example 10
Consider an experiment in which we measure the mass of ten individual identical objects.
The scale readings (in grams) are as shown in Table 4. By visual examination of the results, we
might conclude that the 4.85 g reading is too high compared to the others, and so it represents
an error in the measurement. We might tend to disregard it. However, what if the reading was
2.50 or 2.51 g? We use Chauvenet's criterion to determine if any of the readings can be
discarded.
TABLE 4. Data obtained in a series of experiments

Number, n    reading in g
1            2.41
2            2.42
3            2.43
4            2.43
5            2.44
6            2.44
7            2.45
8            2.46
9            2.47
10           4.85

We apply the methodology described earlier. The results of the calculations are shown in Table 5:
1. Values in the table are already sorted. Column 1 shows the reading number, and there are 10
readings of mass, as indicated in column 2.
2. We calculate the mean and standard deviation. The data in column 2 are added to obtain a
total of 26.8. Dividing this value by 10 readings gives 2.68, which is the mean value of all
the readings:

x̄ = 2.68 g

In column 3, we show the square of the difference between each reading and the mean value.
Thus in row 1, we calculate

(x̄ − x1)² = (2.68 − 2.41)² = 0.0729

We repeat this calculation for every data point. We then add these to obtain the value 5.235
shown in the second-to-last row of column 3. This value is then divided by (n − 1) = 9 data points and
the square root is taken. The result is 0.763, which is the standard deviation, as defined in Equation 9:

sx = sqrt(5.235/9) = 0.763 g    (12)
3. Next, we apply Chauvenet's criterion; for 10 data points, n = 10 and Table 3 gives C = 1.96. We
calculate C·sx = 1.96(0.763) = 1.50. The range of acceptable values then is 2.68 ± 1.50, or:

1.18 ≤ xi ≤ 4.18

Any values outside the range of 1.18 to 4.18 are outliers and should be discarded.
Thus for the data of the example, the 4.85 value is an outlier and may be discarded. All other
points are valid. The last two columns show the results of calculations made without data point
#10. The mean becomes 2.44, and the standard deviation is 0.019 (compare to 2.68 and 0.763,
respectively).
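A minimal Python sketch of the one-pass procedure above, using the readings of Table 4; the function name and the dictionary of Table 3 constants are illustrative choices.

```python
import math

# Chauvenet constants C = dmax/sx from Table 3 (subset)
CHAUVENET_C = {3: 1.38, 4: 1.54, 5: 1.65, 6: 1.73, 7: 1.80, 8: 1.87,
               9: 1.91, 10: 1.96, 15: 2.13, 20: 2.24, 25: 2.33, 50: 2.57}

def apply_chauvenet(readings):
    """One pass of Chauvenet's criterion: keep readings inside xbar +/- C*sx."""
    n = len(readings)
    xbar = sum(readings) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in readings) / (n - 1))
    c = CHAUVENET_C[n]                        # assumes n is a tabulated value
    kept = [x for x in readings if abs(x - xbar) <= c * s]
    return kept, xbar, s

data = [2.41, 2.42, 2.43, 2.43, 2.44, 2.44, 2.45, 2.46, 2.47, 4.85]
kept, xbar, s = apply_chauvenet(data)
print(xbar, s)          # 2.68, ~0.763 (all 10 readings)
print(kept)             # the 4.85 g reading is rejected
new_mean = sum(kept) / len(kept)
new_s = math.sqrt(sum((x - new_mean) ** 2 for x in kept) / (len(kept) - 1))
print(new_mean, new_s)  # ~2.44, ~0.019 (criterion applied only once)
```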

Exercise.
Define the following terms
1. Error
2. Uncertainty
3. Accuracy
4. Precision
5. Random Error
6. Systematic Error
7. Confidence Interval

Table 5. Calculations summary for the data of Table 4.

Number, n    reading in g    (x̄ − xi)²    reading (remove #10)    (x̄ − xi)²
1            2.41            0.0729        2.41                    0.000835
2            2.42            0.0676        2.42                    0.000357
3            2.43            0.0625        2.43                    0.000079
4            2.43            0.0625        2.43                    0.000079
5            2.44            0.0576        2.44                    0.000001
6            2.44            0.0576        2.44                    0.000001
7            2.45            0.0529        2.45                    0.000123
8            2.46            0.0484        2.46                    0.000446
9            2.47            0.0441        2.47                    0.000968
10           4.85            4.7089
Sum          26.8            5.235         21.95                   0.002889
Mean, sx     2.68            0.763         2.44                    0.019

3. Uncertainty Analysis in the Planning of an Experiment

Introduction
Uncertainty analysis is a powerful tool for improving the value of experimental work, and can be applied
during all phases of an experimental program. However, the greatest value of uncertainty analysis is
almost certainly obtained when it is used during the planning of an experiment. In citing a dozen uses of
uncertainty analysis, Kline (1985a) highlighted five specific uses within the planning phase:

1. Enforcing a complete examination of experimental procedures,


2. Identifying situations in which improved instrumentation and/or procedures are needed to obtain
desired accuracy of results,
3. Minimizing instrumentation costs to obtain a given output accuracy,
4. Identifying instruments and procedures that control accuracy (usually one or a few from the total
set of those involved in a given experiment), and
5. Informing beforehand when an experiment cannot meet desired accuracy requirements and is thus
hopeless. Such experiments can sometimes be redesigned, or if they must be abandoned,
considerable time and money can be saved. (An example might be trying to measure the normal
velocity through a fish screen with a 3-dimensional ADV when the sweeping velocity is extremely
large compared to the normal component.)
So important is uncertainty analysis in the planning phase of an experiment that it was prominently
featured in both of the primary conclusions obtained from the landmark 1983 symposium on uncertainty
analysis sponsored by the ASME Journal of Fluids Engineering (JFE). Those two conclusions were (Kline
1985b):

1. Uncertainty analysis is an essential ingredient in planning, controlling, and reporting experiments.


The important thing is that a reasonable uncertainty analysis be done. All differences of opinion
about appropriate methods are subsidiary to this conclusion.
2. It is particularly important to use an uncertainty analysis in the planning and checkout stages of an
experiment.
In emphasizing the importance of uncertainty analysis during the planning stages of an experiment, Hugh
Coleman wrote to Kline (1985b): "All experimentalists should be taught that an uncertainty analysis
PERFORMED IN THE PRELIMINARY DESIGN PHASE OF AN EXPERIMENT will often yield results
and insights far out of proportion to the relatively small investment of time required for the analysis."
(Coleman's emphasis).

So, uncertainty analysis in the planning phase of an experiment is a worthy endeavor. This chapter
attempts to highlight this fact with discussion and examples of the use of uncertainty analysis techniques
that lend themselves to the planning phase. The material presented here is drawn primarily from two
sources: R.J. Moffat's 1985 paper from the ASME/JFE symposium, titled "Using Uncertainty Analysis in
the Planning of an Experiment," and Coleman and Steele's 1999 book, Experimentation and Uncertainty
Analysis for Engineers, 2nd ed.

General Uncertainty Analysis


In all but the simplest of experiments, the end result of an investigation is not measured directly, but rather
is determined by calculation from a data reduction equation. The end result, and the uncertainty in it, are a
product of the direct measurement of other parameters and, in most cases, assumed values of material

properties or other physical constants. In all the phases of experimentation we will consider how the
uncertainties in these individual variables propagate through the data reduction equation into the end result.

In the planning phase of an experiment, we will consider the uncertainties of individual variables and their
propagation into the result in the most basic ways possible. This level of analysis is often called general
uncertainty analysis, as opposed to detailed uncertainty analysis, which is applied to later phases. In the
planning phase we are considering alternative methods for arriving at the experimental result. We have not
selected specific instruments or equipment, and thus we are in no position to address the details of
systematic versus random errors, since any systematic errors in the instruments we eventually choose are
as likely to be positive and negative. At this stage we consider all uncertainty to be caused by random
errors. Later, when we get into the detailed design of an experiment, the debugging and data collection
phases, and the data analysis and reporting phases, we will be more interested in those details, but for now
our focus is on achieving the maximum amount of early guidance for a small amount of expended effort.
By using uncertainty analysis at this stage to help select the correct experimental approach, we can avoid
expending unnecessary effort on methods that might never achieve our objective.

Uncertainty Propagation Equation


For the general case of an experimental result, r, computed from J measured variables X1, X2, ..., XJ, the data
reduction equation is:

r = r(X1, X2, ..., XJ)    (1)

and the uncertainty in the experimental result is given by

Ur² = (∂r/∂X1)² U_X1² + (∂r/∂X2)² U_X2² + ... + (∂r/∂XJ)² U_XJ²    (2)

where Ur is the uncertainty in the result, U_X1 is the uncertainty in the variable X1, etc. This is the most
general form of the uncertainty propagation equation (Coleman and Steele 1999). When applying the
uncertainty propagation equation, the individual uncertainties should all be expressed with the same odds,
e.g., at 95% confidence. In the planning phase, this assumption is implicit. In addition, the measured
variables and their uncertainties are assumed to be independent of one another.

Nondimensional Forms
Two nondimensional forms of eq. (2) are useful in the planning phase. Dividing each term by r² and
multiplying the terms on the right-hand side by (Xi/Xi)², we obtain

(Ur/r)² = (X1/r · ∂r/∂X1)² (U_X1/X1)² + (X2/r · ∂r/∂X2)² (U_X2/X2)² + ... + (XJ/r · ∂r/∂XJ)² (U_XJ/XJ)²    (3)

In this equation, Ur/r is the relative uncertainty in the result and the factors UXi/Xi are the relative
uncertainties of each variable. The factors in parentheses that multiply the relative uncertainties of the
variables are called uncertainty magnification factors (UMFs). They indicate the influence of uncertainty
in a particular variable on the uncertainty in the result. When the UMF is greater than 1, uncertainty in a
variable is magnified as it propagates through the data reduction equation; if less than 1, the uncertainty in
the variable is reduced. The UMF depends on the value of a variable relative to the result and the manner
in which it is incorporated into the data reduction equation, but it is independent of the actual uncertainty
in the variable, so it is useful before we know details about measurement methods and their uncertainties.
Since the UMFs are always squared when inserted into eq. (3), only their absolute values are important.

The second nondimensional form is obtained by dividing by Ur², which produces

1 = (∂r/∂X1)² (U_X1/Ur)² + (∂r/∂X2)² (U_X2/Ur)² + ... + (∂r/∂XJ)² (U_XJ/Ur)²    (4)

Each term on the right-hand side gives the fractional contribution of the squared uncertainty in a given
variable to the squared uncertainty in the result. In percentage terms we can define uncertainty
percentage contributions (UPCs) as

UPC_i = [ (∂r/∂Xi)² U_Xi² / Ur² ] × 100 = [ (Xi/r · ∂r/∂Xi)² (U_Xi/Xi)² / (Ur/r)² ] × 100    (5)

The UPCs include the effects of both the UMF and the uncertainty of the variable, so they are useful in the
late planning phase and early design phase, when measurement equipment and methods are being selected
and measurement uncertainties can be estimated.

Simplified Form
The most useful form of the uncertainty propagation equation for planning purposes is probably eq. (3), in
which the squares of the relative uncertainties are related through the UMFs. In a great many cases eq. (3)
can be further simplified. When the data reduction equation is of the form

r = k X1^a X2^b X3^c    (6)

with a, b, c, and k being constants, applying eq. (3) produces the simplified equation

(Ur/r)² = a² (U_X1/X1)² + b² (U_X2/X2)² + c² (U_X3/X3)²    (7)

In such a case, the UMFs are the exponents and the uncertainty propagation equation can be written down
by simple inspection. One must keep several things in mind when considering the use of this simplified
form of the equation. First, one must solve for the experimental result before applying the equation.
Second, the Xi's must be directly measured variables, so an equation of the form R = a·cos(θ) is not in
the proper form if θ is measured directly. Also, an equation of the form Q = Cd A [2g(h2 − h1)]^(1/2) is acceptable
if h2 − h1 is measured directly, but not if h2 and h1 are measured separately.

Examples Simple Cases


Consider the case of flow measurement over a fully contracted, sharp-crested rectangular weir. We wish to
determine the discharge through the weir, using the equation

Q = C L h1^1.5    (8)

The variables that will be measured are the weir length, L, and the upstream head, h1, and each will have an
associated uncertainty. The value of the discharge coefficient, C, is an empirical constant which will also
have some uncertainty associated with it.

Before determining how to measure the weir length and upstream head, a general uncertainty analysis can
be used to gain an understanding of the relationship between the measurement uncertainties and the
uncertainty in the result.

Applying eq. (3), the general uncertainty expression is


2 2 2 2 2
UQ C Q U C L Q U L h1 Q U h1
2 2

(b))
Q Q C C Q L L Q h1 h1

And the UMFs are

UMF_C = (C/Q)(∂Q/∂C) = (C/Q)(L h1^1.5) = C L h1^1.5 / Q = 1    (10)

UMF_L = (L/Q)(∂Q/∂L) = (L/Q)(C h1^1.5) = C L h1^1.5 / Q = 1    (11)

UMF_h1 = (h1/Q)(∂Q/∂h1) = (h1/Q)(1.5 C L h1^0.5) = 1.5 C L h1^1.5 / Q = 1.5    (12)

Substituting the UMFs back into eq. (9), we obtain


2 2
UQ UC U L 2 Uh
2 2

1.5 1 (f))
Q C L h1

which we could have written down by inspection, since the data reduction equation was in the simple form
discussed previously. Notice that the uncertainties in head measurement are magnified in the result due to
the exponent of 1.5. With this equation, we can now examine several questions related to the uncertainty
of the proposed discharge measurement.

Suppose for example that the weir length is 2 m, the head is 0.3 m, and the discharge coefficient is 1.71
m^0.5/s. How accurately can the discharge be measured if the relative uncertainty in C is 5%, the weir
length is known with an uncertainty of 2 mm, and the head is measured with an uncertainty of 3 mm?

Inserting the values of the variables and their uncertainties, we obtain

(UQ/Q)² = 0.05² + (0.002/2)² + 1.5² (0.003/0.3)²
        = 0.0025 + 0.000001 + 0.000225
        = 0.00273

UQ/Q = sqrt(0.00273) = 0.052

Thus, the relative uncertainty in discharge measurement is 5.2%, with the primary sources of uncertainty
being the discharge coefficient and the measurement of the upstream head.
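A few lines of Python reproduce this result from the simplified propagation form, with the exponents acting as UMFs; the script is an illustrative sketch that simply restates the example values given above.

```python
import math

# Weir example: Q = C * L * h1^1.5 (eq. 8), with the stated uncertainties
C, L, h1 = 1.71, 2.0, 0.3                       # m^0.5/s, m, m
u_C, u_L, u_h1 = 0.05, 0.002 / L, 0.003 / h1    # relative uncertainties

# Simplified propagation (eq. 13): the exponents 1, 1, and 1.5 are the UMFs
rel_UQ = math.sqrt((1 * u_C) ** 2 + (1 * u_L) ** 2 + (1.5 * u_h1) ** 2)
Q = C * L * h1 ** 1.5
print(f"Q = {Q:.3f} m^3/s, relative uncertainty = {100 * rel_UQ:.1f}%")   # ~5.2%
```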

We can also answer what if questions related to design or selection of measurement methods. For
example, if the discharge measurement must be made with an uncertainty of 6% or less and the length of
the weir has an uncertainty of 2 mm as before, what is the maximum allowable uncertainty in the head
measurement? Working through the equation, we find that
0.06² = 0.05² + (0.002/2)² + 1.5² (Uh1/0.3)²

0.0036 = 0.0025 + 0.000001 + 2.25 (Uh1/0.3)²

Uh1 = 0.00663 m = 6.63 mm

Similarly, if we were doing a laboratory calibration to determine the discharge coefficient we might ask
what uncertainty in head measurement is needed to obtain a C value with relative uncertainty of 2%, if the
laboratory facilities are capable of independently measuring the discharge with an uncertainty of 0.5%.
Solving for Uh1 in that case, we obtain
1.5² (Uh1/h1)² = (UQ/Q)² − (UC/C)² − (UL/L)²    (14)

when we insert the values for the example we obtain


1.5² (Uh1/0.3)² = 0.005² − 0.02² − (0.002/2)²

(Uh1/0.3)² = −0.00017

We have obviously committed a serious error! One cannot take the square root of a negative number.
The problem is that we forgot to rearrange the data reduction equation to put the result on the left side
before applying the simplified form of the general uncertainty equation. The correct data reduction
equation and uncertainty expression are

C = Q / (L h1^1.5) = Q L^-1 h1^-1.5    (15)

(UC/C)² = (UQ/Q)² + (UL/L)² + 1.5² (Uh1/h1)²    (16)

Inserting the appropriate numerical values for our example,


0.02² = 0.005² + (0.002/2)² + 1.5² (Uh1/0.3)²

0.0004 = 0.000025 + 0.000001 + 2.25 (Uh1/0.3)²

2.25 (Uh1/0.3)² = 0.000374

Uh1 = 0.0039 m = 3.9 mm

This example shows how easy it is to make a mistake when writing down the uncertainty equation by
inspection. The requirements listed earlier for use of the simplified form must be kept in mind when
considering its use.

Using Uncertainty Analysis in the Planning of an Experiment


The examples given thus far are relatively trivial, simple cases. Before proceeding on to a more detailed
example, we should review the role of uncertainty analysis in experimental planning.

Any experimental endeavor begins with a question whose answer is sought. In order to be useful, the
answer must be determined with some level of certainty. It is important to establish this allowable level of
uncertainty in the planning phase, although it need not be a refined value; an order of magnitude estimate
(i.e., 0.1, 1, 10, or 50%) is often sufficient.

In the planning phase, we usually have several options available to us for arriving at the experimental
result. These will often involve completely different experimental methods and the measurement of
different parameters. For example, one experiment might be performed in a steady-state condition while
another is performed in a transient condition. Different experiments may exploit different physical
properties of a material or different parameters of a process. For each alternative, there may be a different
data reduction equation with unique error propagation characteristics. We want to determine which
approach is most likely to yield the desired result, and which measurements may be the most critical if a
particular method is used. Once we have selected an experimental method, then we may want to
investigate instrumentation issues more closely. As we do so, we gradually move from the realm of
general uncertainty analysis to more detailed uncertainty analysis.

When applying general uncertainty analysis, it becomes necessary to assign uncertainties to the variables
that we intend to measure, even if we do not yet know the exact methods of measurement or specific
instrumentation that will be used. Coleman and Steele (1999) observed that there is a universal human
reluctance to estimate uncertainties, perhaps out of fear of using the wrong values. This can be overcome
by realizing that there is no correct value at this stage; we only seek to gain understanding by assuming
reasonable values. If one is still unsure of what value to use, a parametric analysis can be made in which
a range of values are used. Even ridiculously large or small values can be used, as they may help illustrate
the sensitivity or insensitivity of the uncertainty in the result to the uncertainties in the measured variables.
We will now illustrate the more detailed application of general uncertainty analysis to a nontrivial, real-
world example. Unfortunately, a single example cannot illustrate all of the nuances and possible outcomes
of a planning phase uncertainty analysis. For the reader seeking additional examples, those given by
Coleman and Steele (1999) are highly recommended.

Detailed Example Radial Gate Calibration Experiments


The calibration of radial (Tainter) gates for flow measurement is a topic of current research interest to
those who work with irrigation canal systems, in which radial gates are often used at check structures and
canal bifurcations. Accurate calibration of such gates would allow operators to measure flow rates through
the gates without the need for construction of dedicated flow measurement structures (e.g., flumes or
weirs) or the purchase of expensive measurement instruments such as acoustic flow meters. Measurement
at control structures also eliminates some of the lag time involved in draining and filling canal reaches
upstream from flumes or weirs. Finally, accurate calibration equations allow canal operators to more
accurately set gates to deliver desired flow rates.

In concept, the objective is to develop a calibration method that would allow one to compute the flow rate
through a gate from a measurement of upstream and downstream water level and the gate position. A
definition sketch for a radial gate is provided in Fig. 1.

[Definition sketch labels: sections 1, 2, 3; upstream velocity head v1²/2g; gate radius r; upstream head H1 and depth y1; gate opening w; jet thickness yj = δw; downstream depths y2 and y3; free and submerged flow conditions.]

Fig. 1. Definition sketch for flow through a radial gate

Clemmens et al. (2003) performed laboratory experiments on a model radial gate and developed a new
calibration method called the Energy-Momentum method. Two key parameters in this method are an
upstream energy loss and velocity distribution factor, 1+ε, and an energy correction factor, Ecorr. To
provide better definition of the behavior of these parameters, a new series of experiments is now being
contemplated. We wish to use a general uncertainty analysis to explore the possible experiments that
might be performed. For this example we will focus on just the 1+ε factor.

The parameter 1+ε appears in the energy equation applied to the flow from the upstream pool through the
gate to the vena contracta point. When a gate is in a free flow condition, the calibration equation for the
gate (derived from the energy equation) is

Q = δ w bc sqrt[ 2g(H1 − δw) / (1 + ε) ]    (17)

where Q is the discharge, δ is the contraction coefficient, w is the vertical gate opening, bc is the gate
width, g is the acceleration of gravity, H1 is the energy head at section 1, and ε is an energy loss coefficient
for losses that occur between sections 1 and 2. The velocity distribution at the vena contracta (section 2 in
Fig. 1) is assumed to be uniform, hence the constant 1 in the denominator. If the velocity distribution is
actually nonuniform, that will be accounted for in the ε factor, so that 1+ε accounts for the effects of both
velocity distribution and energy loss. We wish to perform experiments that will yield the values of 1+ε.

The first question that arises in considering this problem is: what level of uncertainty should we try to
achieve in determining 1+ε? To answer this question we can first consider the final application of eq. (17)
for the purpose of flow measurement. We would like to be able to measure Q in the field with a relative
uncertainty of 2%, which is comparable to the rating uncertainty of flumes and weirs. If we estimate the
uncertainty in field measurements or estimates of δ, w, bc, and H1, we can solve for the allowable
uncertainty in 1+ε. We will assume that the acceleration of gravity, g, is a constant known with negligible
uncertainty.

Eq. (17) is not in the form that allows the simplified development of the uncertainty propagation equation,
so we will need to work out the partial derivatives. The UMFs are

UMF_δ = (δ/Q)(∂Q/∂δ) = 1 − δw/[2(H1 − δw)]    (18)

UMF_w = (w/Q)(∂Q/∂w) = 1 − δw/[2(H1 − δw)]    (19)

UMF_bc = (bc/Q)(∂Q/∂bc) = 1    (20)

UMF_H1 = (H1/Q)(∂Q/∂H1) = H1/[2(H1 − δw)]    (21)

UMF_1+ε = [(1+ε)/Q][∂Q/∂(1+ε)] = −1/2    (22)

Note that we treat 1+ε as a single parameter, since we expect to determine it experimentally in that way.
Determining the partial derivatives is somewhat tedious, but many terms cancel to make the resulting eqs.
(18) to (22) reasonably compact. Substituting the UMFs into the general uncertainty equation we obtain:

(UQ/Q)² = {1 − δw/[2(H1 − δw)]}² [(Uδ/δ)² + (Uw/w)²] + (Ubc/bc)²
          + {H1/[2(H1 − δw)]}² (UH1/H1)² + (1/4) [U1+ε/(1+ε)]²    (23)

We see that in addition to estimating relative uncertainties of the measured variables, we will need to select
appropriate values of δ, w, and H1, since they are contained in the UMFs. The contraction coefficient
varies from about 0.6 to 0.8 at typical gate openings; a representative value of 0.7 can be used for this
analysis. The values of H1 and w can vary widely during normal gate operations and for gates of different
sizes, so we will choose a range of typical values and then solve for the relative uncertainty of 1+ε for each
condition using a spreadsheet model. We stated earlier that the desired relative uncertainty in Q is 2%.
For the other variables we need to make reasonable estimates of their uncertainties. The contraction
coefficient, δ, is usually determined from empirical equations that are believed to have a relative
uncertainty of about 1%. The estimated relative uncertainty in gate opening, w, is also 1% for a field
application. The relative uncertainty in the gate width, bc, for a typical prototype gate is estimated to be
0.25% (this includes variation in the width of a gate chamber over the height of a gate). Finally, the
uncertainty of upstream head measurement is estimated to be 6 mm (0.02 ft).

Inserting these values into eq. (23) we obtain the results shown in Table 1. We limit the gate opening, w,
to 0.66H1, since at larger values the gate leaf would not control the flow (critical depth being two-thirds of
the upstream head).

Table 1. Allowable relative uncertainty in 1+ε

                                 w, meters
H1, meters    0.1      0.2      0.33     1.33     3.33     5.33
0.5           2.66%    2.79%    2.86%
2             2.82%    2.87%    2.94%    3.59%
5             2.80%    2.82%    2.85%    3.07%    3.63%
8             2.80%    2.81%    2.82%    2.96%    3.27%    3.63%

The results in Table 1 show that for small gate openings we must have a smaller uncertainty in 1+ε to
achieve our goal of a 2% uncertainty in discharge measurement. We can conservatively conclude that we
should try to design our experiments so that we can determine 1+ε with a relative uncertainty of 2.5% or
less.
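The spreadsheet calculation behind Table 1 can be sketched in a few lines of Python; the function name, the default contraction coefficient of 0.7, and the assumed field uncertainties simply restate the values adopted above and are illustrative rather than part of the original analysis.

```python
import math

def allowable_rel_u_1pe(H1, w, delta=0.7, u_Q=0.02, u_delta=0.01,
                        u_w=0.01, u_bc=0.0025, U_H1=0.006):
    """Solve eq. (23) for the allowable relative uncertainty in 1+eps, given the
    target discharge uncertainty and assumed field measurement uncertainties."""
    dw = delta * w
    umf_dw = 1.0 - dw / (2.0 * (H1 - dw))     # UMF for delta and for w
    umf_H1 = H1 / (2.0 * (H1 - dw))           # UMF for H1
    rhs = (u_Q ** 2
           - umf_dw ** 2 * (u_delta ** 2 + u_w ** 2)
           - u_bc ** 2
           - umf_H1 ** 2 * (U_H1 / H1) ** 2)  # negative rhs would mean the goal is unreachable
    return 2.0 * math.sqrt(rhs)               # factor 2 because the UMF of 1+eps is 1/2

print(f"{allowable_rel_u_1pe(H1=0.5, w=0.1):.2%}")    # ~2.66%, cf. Table 1
print(f"{allowable_rel_u_1pe(H1=2.0, w=1.33):.2%}")   # ~3.59%
```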

Now we will consider the planned experiment. We rearrange the data reduction eq. (17) to solve for the
experimental result, 1+ε:

1 + ε = 2g(H1 − δw)(δ w bc / Q)²    (24)

In the laboratory we will most likely measure δw as a single quantity, the jet thickness, yj, using either a
point gage or a piezometer tap located at the vena contracta. This simplifies the data reduction equation to

1 + ε = 2g(H1 − yj)(yj bc / Q)²    (25)
Q

We will now determine the UMFs for this data reduction equation.

UMF_H1 = [H1/(1+ε)][∂(1+ε)/∂H1] = H1/(H1 − yj)    (26)

UMF_yj = [yj/(1+ε)][∂(1+ε)/∂yj] = (2H1 − 3yj)/(H1 − yj)    (27)

UMF_bc = 2    (28)

UMF_Q = −2    (29)

We see that, again, some of the UMFs contain the measured variables themselves, so we will need to
choose representative values of H1 and yj to perform the uncertainty analysis. Typically, H1 will be
significantly larger than yj, so we can quickly see that UMF_H1 will be on the order of 1, while UMF_yj is on
the order of 2. The complete uncertainty equation is

[U1+ε/(1+ε)]² = [H1/(H1 − yj)]² (UH1/H1)² + [(2H1 − 3yj)/(H1 − yj)]² (Uyj/yj)²
                + 4 (Ubc/bc)² + 4 (UQ/Q)²    (30)

In the facility that is available for the experiments, we have initially planned to conduct tests on a model
radial gate at heads ranging from 0.13 m up to 0.50 m, and gate openings ranging from 0.05 m to 0.20 m.
The jet thickness, yj, will be about two-thirds of the gate opening, so yj will vary from about 0.033 m to
0.17 m. We believe that we can construct and measure the width of the model gate chamber with a relative
uncertainty of 0.25%. The laboratory is equipped with a weighing tank for determining discharge whose
relative uncertainty is 0.1% or better. It is reasonable to assume that we can measure the upstream head
with an uncertainty of 0.5 mm with a point gage in a stilling well. Knowing from the previous analysis
that we need to achieve a relative uncertainty of 2.5% or better in the experimental result, 1+ε, we can
solve for the required uncertainty in the measurement of yj. Table 2 shows the results. Note that we solve
for the absolute uncertainty, Uyj, not the relative uncertainty, since we expect variability in our
measurements of the jet thickness to be mostly independent of the actual jet thickness.

Table 2. Allowable uncertainty in measurement of yj, meters.

                          yj, meters
H1, meters    0.033      0.066      0.1        0.133
0.13          0.00047    0.00158
0.2           0.00044    0.00106
0.3           0.00043    0.00093    0.00162
0.4           0.00042    0.00089    0.00146    0.00216
0.5           0.00042    0.00087    0.00139    0.00198

The results show that any tests carried out at very low gate openings and jet thicknesses will require
measurements of jet thickness that have an uncertainty of about 0.4 mm.

We anticipate difficulty in reaching this objective, but believe that we can probably achieve a measurement
uncertainty for yj of 0.5 mm. Using that value, we can compute the relative uncertainty in 1+ε over the
potential range of test conditions (Table 3), and the uncertainty percentage contributions (UPCs) associated
with each measured variable (Table 4).

Table 3. Relative uncertainty in 1+ε.

                        yj, meters
H1, meters    0.033    0.066    0.1      0.133
0.13          2.62%    1.20%
0.2           2.80%    1.32%
0.3           2.90%    1.42%    0.96%
0.4           2.95%    1.48%    1.01%    0.80%
0.5           2.97%    1.50%    1.04%    0.83%
Table 4. UPCs for the measured variables at different test conditions.
H1, meters yj, meters UPCH1 UPCyj UPCbc UPCQ
0.13 0.033 4% 92% 4% 1%
0.20 0.033 1% 95% 3% 1%
0.30 0.033 0% 96% 3% 0%
0.40 0.033 0% 96% 3% 0%
0.50 0.033 0% 97% 3% 0%
0.13 0.066 42% 37% 17% 3%
0.20 0.066 8% 75% 14% 2%
0.30 0.066 2% 83% 12% 2%
0.40 0.066 1% 86% 11% 2%
0.50 0.066 1% 87% 11% 2%
0.30 0.10 7% 61% 27% 4%
0.40 0.10 3% 69% 25% 4%
0.50 0.10 1% 71% 23% 4%
0.40 0.133 5% 50% 39% 6%
0.50 0.133 3% 55% 36% 6%

In almost all test cases the uncertainty in the measurement of yj is responsible for the majority of the
uncertainty in the result. In the tests to be carried out at very small heads, the uncertainty in head
measurement becomes somewhat significant. Uncertainty in the gate width is also a significant factor in
some test conditions, but it is primarily those in which the total uncertainty in the result is relatively low,
so this is probably not a serious concern.

The analysis performed here shows that uncertainty in the measurement of yj may seriously affect the
ability to achieve the test objective. It may be necessary to focus effort on reducing the uncertainty of that
measurement through the selection of measurement methods and equipment. One might also conclude that
performing tests at very small heads and gate openings is not worthwhile, although there might be other
good reasons for keeping such tests in the plan, such as changes in hydraulic phenomena that only occur at
those test conditions. The key point is that the uncertainty analysis allows one to make informed decisions
about test design, rather than relying on intuition or discovering problems by trial.

Numerical Approximation of Partial Derivatives


Developing the general uncertainty equation for a given data reduction equation often requires the
determination of numerous partial derivatives. When the data reduction equation is complex, this can be a
very tedious task. An alternative is to use numerical approximations of the partial derivatives. They can
be computed from

∂r/∂Xi ≈ Δr/ΔXi = [ r(X1, ..., Xi + ΔXi, ..., XJ) − r(X1, ..., Xi, ..., XJ) ] / ΔXi

with the values of all other variables in the data reduction equation held constant while varying Xi. To
implement this, one must choose a value for ΔXi, such as 0.01·Xi, and compute the derivative, then reduce
ΔXi to one-half the starting value and compute the derivative again. Continue reducing ΔXi until the
estimated value of the derivative converges. This method can be easily implemented in commercial
spreadsheet or mathematical analysis software.
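A possible implementation of this step-halving scheme, sketched in Python; the function name, the convergence tolerance, and the forward-difference form are illustrative choices.

```python
def numerical_partial(r, X, i, rel_step=0.01, tol=1e-6, max_halvings=20):
    """Forward-difference estimate of dr/dX_i, halving delta-X_i until the
    estimate converges. r takes the list X of measured variables."""
    base = r(X)
    dX = rel_step * (abs(X[i]) if X[i] != 0 else 1.0)
    prev = None
    for _ in range(max_halvings):
        Xp = list(X)
        Xp[i] += dX
        deriv = (r(Xp) - base) / dX
        if prev is not None and abs(deriv - prev) <= tol * max(abs(deriv), 1.0):
            return deriv              # converged estimate
        prev, dX = deriv, dX / 2.0
    return prev                       # best estimate if convergence was not reached

# Usage: partial of the weir equation Q = C*L*h1^1.5 with respect to h1
Q = lambda X: X[0] * X[1] * X[2] ** 1.5        # X = [C, L, h1]
print(numerical_partial(Q, [1.71, 2.0, 0.3], i=2))   # ~2.81 (= 1.5*Q/h1)
```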

Summary
This chapter has demonstrated the use of general uncertainty analysis for the planning of experiments. In
the planning phase we are trying to determine whether a proposed experiment can satisfy our objectives. If
alternative data reduction equations and experimental methods are available, we consider the alternatives
and whether one may be more effective than another. General uncertainty analysis using UMFs also gives
us a first impression of which measurements may be most critical for obtaining a desired level of
uncertainty in the experimental result. As we refine our experimental plan, we can analyze the UPCs to
see which measurements are the most critical, considering both the manner in which they propagate into
the end result (indicated by the UMFs) and the actual values of the measurement uncertainties.

Several results are possible from an uncertainty analysis performed in the planning phase. Infeasible
approaches may be discarded and feasible methods pursued, experimental methods may be altered,
instrumentation needs can be clearly identified, and in some cases experimental programs that will never
yield useful results may be abandoned. Regardless of the outcome, the value obtained from the
expenditure of time and money can be increased, usually in an amount far greater than the cost of
performing the uncertainty analysis. In addition, time spent on uncertainty analysis in the planning phase
will lay the foundation for the more detailed uncertainty analysis that follows once experiments are
underway.

References
Clemmens, A.J., Strelkoff, T.S., and Replogle, J.A. (2003). Calibration of submerged radial gates.
Journal of Hydraulic Engineering, Vol. 129, No. 9, pp. 680-687.
Coleman, H.W., and Steele, W.G. (1999). Experimentation and Uncertainty Analysis for Engineers, 2nd
Ed., John Wiley & Sons, New York, 275 pp.
Kline, S.J. (1985a). The purposes of uncertainty analysis. Journal of Fluids Engineering, Transactions of
the ASME, June 1985.
Kline, S.J., (1985b). Closure to 1983 Symposium on Uncertainty Analysis. Journal of Fluids
Engineering, Transactions of the ASME, June 1985.
Moffat, R.J. (1985). Using uncertainty analysis in the planning of an experiment. Journal of Fluids
Engineering, Vol. 107, No. 6, pp. 173-178.

