Contents

Preface
1. Introduction
   1.1. Concepts of Measurement and Error
2.2. Linearization
3.1. Introduction
5.5. Expectation
6. Variance-Covariance Propagation
   6.1. Introduction
   6.3. Examples
   6.4. Stepwise Propagation
9.1. Introduction
9.2. Derivation
10.1. Introduction
A.1. Definitions
A.13. Linear Equations
Appendix B. Tables
   Table I. Values of the Standard Normal Distribution Function
Bibliography
Index
1. INTRODUCTION

1.1. CONCEPTS OF MEASUREMENT AND ERROR
3. The rear tapeman then centers his plumb bob over his survey station.
4. The head tapeman suspends his plumb bob from the tape, with the plumb bob string held near the zero graduation, and applies tension to the tape. (The rear tapeman, of course, must pull equally in the opposite direction while keeping his plumb bob centered over his station.)
5. While maintaining tension, the head tapeman shifts his plumb bob string along the tape until his plumb bob is centered over his survey station.
6. The head tapeman then reads the position of his string on the tape.
7. The measured distance is obtained by subtracting the tape reading in step 6 from the graduation held at the rear station.

If x denotes a measured value and x_t its true value, the error in the measurement is defined as

e = x - x_t    (1-1)

Since we will never really know what the value of x_t is, we will never know what the exact value of e is. However, if we are able to obtain by some means a good estimate x̂ of x_t, we can use this estimate in place of x_t as the reference from which variation is expressed:

v = x̂ - x    (1-2)

The residual, v, is the quantity that is actually used to express variation in the measurement.
1.2. TYPES OF ERRORS
Errors have been traditionally classified into three types: (1) gross errors, (2) systematic errors, and (3) random errors. Each type will be discussed separately.
Gross Errors
Gross errors are the results of blunders or mistakes that are due to carelessness of the observer. For example, the observer may make a pointing on the wrong survey target, or he may read a scale or dial incorrectly, or he may read the wrong scale, or record the wrong value of a reading by transposing numbers (e.g., recording 41.56 m as 41.65 m). There are any number of mistakes an observer can make if he is inattentive.
If a survey is to have any usefulness at all, mistakes and blunders cannot be
tolerated. Good field procedures are designed to assist in detecting mistakes.
These procedures include:
1. Careful checking of all pointings on survey targets.
2. Taking multiple readings on scales and checking for reasonable consistency.
3. Verifying recorded data by rereading scales.
4. Repeating entire measurements independently and checking for consistency.
5. Using simple geometric or algebraic checks, such as comparing the sum of three measured angles in a plane triangle with 180°.
It is very important to safeguard against the occurrence of mistakes. If they do
occur, they must be detected and eliminated from the survey measurements
before such measurements can be used.
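The angle-sum check in item 5 can be sketched in code. The function name and the 0.01° tolerance below are illustrative assumptions, not values from the text:

```python
def angle_sum_check(a_deg, b_deg, c_deg, tol_deg=0.01):
    """Return the misclosure (sum minus 180 deg) and whether it is within tolerance."""
    misclosure = (a_deg + b_deg + c_deg) - 180.0
    return misclosure, abs(misclosure) <= tol_deg

print(angle_sum_check(59.998, 60.004, 60.001))   # small misclosure: no blunder suspected
print(angle_sum_check(59.998, 60.004, 61.001))   # a whole degree off: check for a mistake
```

In practice the tolerance would be chosen from the angular precision of the instrument used.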
Systematic Errors
Systematic errors are so called because they occur according to some deterministic system which, when known, can be expressed by some functional relationship. If, for example, the expansion of a steel tape is essentially linear with respect to temperature, and the coefficient of thermal expansion is known, a functional relationship between the temperature and the expansion of the tape can be established. If the length of the tape at some specified standard temperature is taken as reference, the change in the length of the tape from this reference caused by a change in temperature from its standard value is classified as a systematic error.
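The linear temperature model just described can be sketched as follows. The coefficient (a commonly used value for steel) and the 20 °C standard temperature are assumptions for illustration; the text gives no specific numbers:

```python
ALPHA_STEEL = 1.16e-5    # coefficient of thermal expansion of steel, per deg C (typical value)
T_STANDARD = 20.0        # assumed standard temperature, deg C

def temperature_correction(length_m, temp_c):
    """Systematic correction (m) to add to a distance taped at temperature temp_c."""
    return ALPHA_STEEL * (temp_c - T_STANDARD) * length_m

# A 30 m length taped at 30 deg C needs roughly +3.5 mm:
print(round(temperature_correction(30.0, 30.0), 5))   # 0.00348
```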
A systematic error follows a pattern which will be duplicated if the measurement
is repeated under the same conditions. For example, measuring a distance with
a steel tape that is too short will result in the same systematic error if the same
tape is used by the same tapemen to measure the same distance under the
same conditions of temperature, pull, support, and slope.
A systematic error is constant if its sign and magnitude remain the same throughout the measuring process. It is counteracting if its sign changes while its magnitude remains the same.
The system underlying a systematic error may depend on the observer, the
instrument used, the physical or environmental conditions at the time the
measurement is made, or any combination of these factors.
The personal bias of an observer leads to systematic errors that may be
constant or counteracting, depending on the observation procedure. If the
conditions of observation vary, the natural senses of vision and hearing of the
observer may vary as well and his personal error becomes variable, too.
Imperfect instrument construction or incomplete instrument adjustment can lead
to instrumental errors that are systematic. Imperfect construction includes such
things as variation in scale graduations and eccentricity in centering
components. Incomplete instrument adjustment includes such things as not
making the axis of collimation (telescope axis) of a theodolite perpendicular to
the instrument's tilting (horizontal) axis.
Since survey measurements are acquired in the field, they are affected by
many physical and environmental factors. Temperature, pull and terrain slope,
for example, affect taped distances, while humidity and atmospheric pressure
as well as temperature affect electro-optical distance measurements (EDM),
angle measurements, and leveling. All of these effects are functionally expressible in terms of the factors which cause them and so are classified as systematic errors.
All sources of systematic error so far discussed are related directly to the observational operations. However, systematic errors can also occur through
simplification of the geometry or mathematical model chosen to represent the
survey. If, for example, a plane triangle instead of a spherical triangle is used to
connect three survey stations that are spaced several kilometers apart, the
spherical excess will emerge as a systematic error.
In the reduction of survey measurements, it is important to detect and correct
for all possible systematic errors.
Random Errors
After all blunders are detected and removed, and the measurements are
corrected for all known systematic errors, there will still remain some variation in
the measurements. This variation results from observational errors which have
no known functional relationship based upon a deterministic system. These
errors, instead, have random behavior, and must be treated accordingly.
It was stated earlier that a measurement or observation is looked upon
mathematically as a variable. More specifically, it is a random variable
because it includes error components which exhibit random behavior. Indeed,
the random errors themselves are random variables.
Whereas systematic variations are dealt with mathematically using functional relationships or models, random variations must be treated using probability models. Some elementary concepts in probability theory are introduced in the following section. More will be discussed in Chapter 5.
1.3. ELEMENTARY CONCEPTS IN PROBABILITY
Let us assume that a distance is measured a very large number of times and
that all measurements are free of gross errors and corrected for all systematic
errors. Whatever variation remains in the measurements is caused by random
errors only. Although it is not possible to correct measurement for specific
random errors from knowledge of the measurement system, it is possible to
study their collective behavior from their frequency distribution. It is the
frequency distribution that Is used as basis for constructing the probability
model for the measurements.
EXAMPLE 1-1
A distance of about 810 m is measured 200 times. All measurements are free of gross errors and are corrected for systematic errors. The corrected values are expressed to 0.01 m. It is noted that after correcting for the systematic errors the resulting variation in the measurements ranges from 810.11 m to 810.23 m, distributed as follows:
VALUE OF              NUMBER OF
MEASUREMENT (m)       MEASUREMENTS
810.11                   1
810.12                   3
810.13                   7
810.14                  19
810.15                  20
810.16                  36
810.17                  38
810.18                  29
810.19                  24
810.20                  10
810.21                  11
810.22                   0
810.23                   2
Evaluate and plot the relative frequencies of occurrence for all listed values.
Solution
The relative frequency of occurrence is obtained by dividing the number of measurements observed for a value by the total number of measurements. Since there
are 200 measurements in total, the following relative frequencies are obtained:
VALUE OF              RELATIVE
MEASUREMENT (m)       FREQUENCY
810.11                0.005
810.12                0.015
810.13                0.035
810.14                0.095
810.15                0.100
810.16                0.180
810.17                0.190
810.18                0.145
810.19                0.120
810.20                0.050
810.21                0.055
810.22                0.000
810.23                0.010
The sum of the relative frequencies must, of course, be 1.000. The relative frequencies are plotted in Fig. 1-1 as rectangles.
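The arithmetic of the solution can be reproduced in a short script (counts transcribed from the table above):

```python
# Relative frequency = class count / total number of measurements (Example 1-1 data).
counts = {810.11: 1, 810.12: 3, 810.13: 7, 810.14: 19, 810.15: 20,
          810.16: 36, 810.17: 38, 810.18: 29, 810.19: 24, 810.20: 10,
          810.21: 11, 810.22: 0, 810.23: 2}

total = sum(counts.values())                       # 200 measurements
rel_freq = {v: n / total for v, n in counts.items()}

print(total)                          # 200
print(rel_freq[810.17])               # 0.19, the tallest class in the histogram
print(round(sum(rel_freq.values()), 10))   # 1.0
```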
Figure 1-1 is a frequency distribution in the form of a histogram. The base of
each rectangle represents a class interval; and the height represents the
corresponding relative frequency. The base of the tallest rectangle in Fig. 1-1, for
example, represents the class of all measurements between 810.165 m and
810.175 m (expressed specifically as 810.17 m), and the height of this rectangle
represents the corresponding relative frequency, 0.190.
The frequency distribution in Fig. 1-1 is centered on or very near the value 810.17 m. The highest frequencies are at or near this central value.
If the number of measurements were to be increased infinitely, it would be found
that each relative frequency would approach a stable limit. The limiting value of
the relative frequency is known as the probability.
Instead of using a histogram to represent probabilities, it is often more convenient to use a mathematical model: a probability model, or probability distribution. An example of such a model is given in Fig. 1-2, in which probability is represented by the area under a continuous curve that is a mathematical function of the measurement. Specifically, the probability that the measurement falls between the two values x1 and x2 is given by the area under the curve between x1 and x2.

Fig. 1-1
Fig. 1-2
A similar density function, shown in Fig. 1-3, can be used as probability model for the random error of the measurement. In this case, the mean value is zero. The mean value, μ, is referred to as the position or location parameter of the probability distribution. Another parameter of the distribution is its standard deviation, σ, which measures the spread or dispersion of the probability distribution, as indicated in Fig. 1-3. If measurement A has greater variation than measurement B, measurement A will have a standard deviation that is larger than the standard deviation of B. The square of the standard deviation is known as the variance.
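For the data of Example 1-1, these two parameters can be estimated from the frequency table: the mean as a count-weighted average, and the standard deviation as the square root of the variance. The population (divide-by-n) form used below is an assumption for illustration:

```python
import math

values = [round(810.11 + 0.01 * k, 2) for k in range(13)]   # 810.11 ... 810.23
counts = [1, 3, 7, 19, 20, 36, 38, 29, 24, 10, 11, 0, 2]
n = sum(counts)

mean = sum(v * c for v, c in zip(values, counts)) / n
var = sum(c * (v - mean) ** 2 for v, c in zip(values, counts)) / n   # population form
std = math.sqrt(var)

print(round(mean, 3))   # 810.169, near the histogram's center of 810.17
print(round(std, 3))    # 0.022
```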
1.4. RELIABILITY OF MEASUREMENTS
Several terms are used to express the reliability of measurements. Three
common terms are precision, accuracy, and uncertainty.
Fig. 1-3
A numerical value should carry all certain digits plus the first digit that is doubtful. If, for example, the first four digits of the value 137.824 are certain and the last two digits are doubtful, the value should be expressed to only five significant figures, i.e., 137.82 (1, 3, 7 and 8 are certain, 2 is doubtful).
The number of significant figures in a directly measured quantity is not usually difficult to determine, as it essentially depends on the least count of the instrument used. For example, if a distance is measured with a tape graduated in centimeters, with estimation to millimeters, and a reading of 462.513 m is taken, the first five digits are certain, the sixth digit is estimated (and therefore doubtful), and so the value has six significant figures.
The number of significant figures in a numerical quantity is reduced by
rounding off. The least error will be caused if rounding off is done according
to the following rules:
1. If k significant figures are required, discard all digits to the right of the
(k+1)th digit.
2. Examine the (k+ 1)th digit.
a. If it is 0 to 4, discard it; e.g., 12.34421 is rounded off to four significant figures as 12.34.
b. If it is 6 to 9, discard it and increase the kth digit by one; e.g.,
1.376 is rounded off to three significant figures as 1.38.
c. If it is 5 and the kth digit is even, discard it; e.g., 12.345 is
rounded off to four significant figures as 12.34.
d. If it is 5 and the kth digit is odd, discard it and increase the kth
digit by one; e.g., 12.3435 is rounded off to five significant figures as
12.344.
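These tie-breaking rules are what is now called round half to even. Python's decimal module implements them exactly, avoiding binary floating-point surprises. The helper below rounds to a number of decimal places rather than significant figures, but the tie rule is the same:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_half_even(value_str, places):
    """Round a decimal string to the given number of decimal places, half to even."""
    q = Decimal(1).scaleb(-places)          # e.g. places=2 -> Decimal('0.01')
    return Decimal(value_str).quantize(q, rounding=ROUND_HALF_EVEN)

print(round_half_even("12.34421", 2))  # 12.34  (rule 2a: next digit is 0 to 4)
print(round_half_even("1.376", 2))     # 1.38   (rule 2b: next digit is 6 to 9)
print(round_half_even("12.345", 2))    # 12.34  (rule 2c: 5 after an even digit)
print(round_half_even("12.3435", 3))   # 12.344 (rule 2d: 5 after an odd digit)
```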
Consider first the simple linear function

y = ax + b    (2-1)

represented by the straight line in Fig. 2-1. The coefficients a and b are known and assumed to be errorless.
For purposes of analysis, it is helpful to use the concept of true value, as introduced in Chapter 1, and to define the error of a measurement as the measured value minus the true value, as in Eq. (1-1). Thus, if x1 represents the true value of x, and dx represents the error, then

x = x1 + dx    (2-2)

Similarly, if y1 represents the true value of y and dy its error,

y = y1 + dy    (2-3)

Fig. 2-1

Substituting Eq. (2-2) into Eq. (2-1),

y = ax + b = a(x1 + dx) + b = (ax1 + b) + a dx = y1 + a dx    (2-4)

so that the error in y is

dy = y - y1 = a dx    (2-5)
Now if elementary calculus is applied to Eq. (2-1), we see that the derivative of y with respect to x is dy/dx = a. Thus an equivalent expression for Eq. (2-5) is

dy = (dy/dx) dx    (2-6)

Eq. (2-6) is obviously the expression for the total differential of the function in Eq. (2-1). The reason the error dy obtained from the function turns out to be identical to the error as a total differential from calculus is that the function ax + b is linear in the measured quantity x. It will be shown shortly that Eq. (2-6) does not hold exactly for nonlinear functions.
EXAMPLE 2-1
A land parcel is trapezoidal in shape, with the dimensions given in Fig. 2-2: parallel sides of 20 m and 60 m, a base AB of 80 m, and side slope 0.5 (exact). For a measured distance d = 3.560 m along AB, the ordinate h at d is required, together with the error in h produced by the error in d.

Solution
If we visualize a coordinate system with origin at A and x-axis along AB, the equation of line CD is

y = 0.5x + 20

Fig. 2-2
Let us now look at a case in which the function that relates the computed
quantity y to the measured quantity x is nonlinear. For example, let
y = x²    (2-7)

If, again, x1 and y1 represent the true values of x and y, respectively, and dx and dy represent the corresponding errors, then applying Eq. (2-7) we get

y1 = x1²    (2-8)

and

y1 + dy = (x1 + dx)² = x1² + 2x1 dx + (dx)²    (2-9)

Subtracting Eq. (2-8) from Eq. (2-9) gives the error in y:

dy = 2x1 dx + (dx)²    (2-10)
Recognizing from Eq. (2-7) that 2x1 is the derivative of y with respect to x, evaluated at x1, we can express Eq. (2-10) as follows:

dy = (dy/dx) dx + (dx)²    (2-11)

Equation (2-11) differs from Eq. (2-6) in that it includes the additional term (dx)². In practice, however, the error dx is so small relative to the measurement itself that the higher-order term (dx)² can be neglected. This means that instead of using point P on the curve (Fig. 2-3) to determine y, point P′ on the tangent to the curve at T is used to determine y′. Thus, the propagated error dy is represented by (y′ - y1) in Fig. 2-3, instead of by (y - y1), under the assumption that the difference y - y′ = (dx)² is negligible.
EXAMPLE 2-2
The area y of a square tract of land, shown in Fig. 2-4, is required. The length x of the side of the tract is measured with a 30 m long steel tape and is observed to be 50.170 m. This measurement is then used to calculate the area of the tract: the calculated area is y = x² = (50.170)² = 2517.0289 m², represented by square ABCD in Fig. 2-4.
If the tape is known to be too short by 0.030 m, compute the corresponding error in the calculated area.
Fig 2-3
Fig 2-4
Solution
Since the tape is too short by 0.030 m, a measured 30-meter distance is actually 29.970 m. The correct length of the side of the square is therefore

x1 = (29.970/30.000)(50.170) = 50.120 m

The error in the area is thus

dy = y - y1 = 2517.0289 - 2512.0144 = 5.0145 m²

The same value results from Eq. (2-10), with dx = 0.050 m:

dy = 2x1 dx + (dx)² = 2(50.120)(0.050) + (0.050)² = 5.0145 m²
Now, if Eq. (2-6) is used, we must first evaluate the derivative dy/dx at x1 = 50.120. Thus,

dy/dx = d(x²)/dx = 2(50.120) = 100.240 m

dy = (dy/dx) dx = (100.240)(0.050) = 5.0120 m²
which, in Fig. 2-4, equals the sum of the areas of rectangles AB1B′A′ and CB2B′C′. The difference between the exact determination of the error in the area and its determination according to Eq. (2-6) is only 0.0025 m², which is (dx)², the area of the small square B1BB2B′. This difference is only 0.05% of the error, and is therefore insignificant.*
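The comparison in Example 2-2 can be checked numerically; all values below are taken from the example:

```python
x_measured = 50.170     # side as read with the faulty tape, m
x_true = 50.120         # (29.970/30.000)*(50.170), rounded to the mm as in the text
dx = x_measured - x_true             # 0.050 m

exact_error = x_measured**2 - x_true**2   # dy = y - y1
linear_error = 2 * x_true * dx            # first term of Eq. (2-10), i.e. Eq. (2-6)

print(round(exact_error, 4))                  # 5.0145
print(round(linear_error, 4))                 # 5.012
print(round(exact_error - linear_error, 4))   # 0.0025, the neglected (dx)**2
```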
In practice it makes little or no difference whether the derivative dy/dx is evaluated at x1 or at x. Since the measured value is usually the value conveniently at hand, evaluation of the derivative at the measured value is common procedure.

So far we have considered only the case of a single variable y computed as a function of a single variable x. Suppose y now represents the area of a rectangle instead of a square. In this case, two measured quantities, the length x1 and the width x2, are involved, and y is their product; i.e., y = x1 x2.
When more than one variable is involved in a function, the rules of partial differentiation can be applied. Specifically, if the errors in x1, x2, ..., xn are represented by the differentials dx1, dx2, ..., dxn, respectively, then the error in y can be represented by

dy = (∂y/∂x1) dx1 + (∂y/∂x2) dx2 + ... + (∂y/∂xn) dxn    (2-12)

in which the partial derivatives ∂y/∂x1, ∂y/∂x2, ..., ∂y/∂xn are evaluated at the given numerical (measured) values of x1, x2, ..., xn, respectively.
EXAMPLE 2-3
Instead of the square dealt with in Example 2-2, a tract of land is rectangular, measuring 50.170 m by 61.090 m. If the same 30 m tape (0.030 m too short) is used to make the measurements, evaluate the error in the calculated area of the tract.
Solution
The calculated area, expressed to the appropriate number of significant figures, is

y = x1 x2 = (50.170)(61.090) = 3064.9 m²

The partial derivatives are evaluated as

∂y/∂x1 = x2 = 61.090 m
∂y/∂x2 = x1 = 50.170 m

The errors dx1 and dx2 are computed on the basis of the 30 m tape being too short by 0.030 m. Thus,

dx1 = (0.030/30)(50.170) = 0.050 m
dx2 = (0.030/30)(61.090) = 0.061 m

and, by Eq. (2-12),

dy = (∂y/∂x1) dx1 + (∂y/∂x2) dx2 = (61.090)(0.050) + (50.170)(0.061) = 6.1 m²

* Application of the multiplication rule for significant figures (Section 1.5) calls for expressing dy to two significant figures only; i.e., dy = 5.0 m². This, too, indicates that the 0.0025 m² difference is insignificant.
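Example 2-3 can likewise be checked numerically (all values from the example):

```python
x1, x2 = 50.170, 61.090     # measured sides, m
scale = 0.030 / 30.0        # tape shortage per metre taped
dx1 = scale * x1            # ~0.050 m
dx2 = scale * x2            # ~0.061 m

# Eq. (2-12) with dy/dx1 = x2 and dy/dx2 = x1:
dy = x2 * dx1 + x1 * dx2
print(round(dy, 1))         # 6.1
```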
2.2. LINEARIZATION
We have noticed in the propagation of known errors in Section 2.1 that when the function is nonlinear the higher-order term (dx)² can be dropped because it is very small. The resulting error propagation function, with (dx)² omitted, is an approximation. Since it was shown that error propagation involving linear functions does not entail any approximation, an alternative approach for nonlinear functions is possible. This approach first replaces the nonlinear function by its linearized form and then applies propagation, rather than applying propagation to the nonlinear function first and then dropping terms.
The basis of linearization of a function is the Taylor series expansion, which, for one function of one variable, y = f(x), is
y = y0 + (dy/dx)0 Δx + higher-order terms    (2-13)*

where y0 = f(x0) and Δx = x - x0.

The linearized form includes only the first two terms on the right-hand side of Eq. (2-13); all higher-order terms are neglected. For example, the linearized form of the function y = x² is

y = x0² + 2x0 Δx

Applying error propagation to the linearized form of the original function will yield a result that is identical to the result obtained using Eq. (2-6).

* The symbol (dy/dx)0 represents the first derivative of y with respect to x, evaluated at x = x0.
EXAMPLE 2-4
Linearize the function y = 2x³ + x² - 4x + 7 at x0 = 2. If x = 2 and the error in x is 0.01, compute the resulting error in y.

Fig. 2-5

Solution
Eq. (2-13), with higher-order terms neglected, is y = y0 + (dy/dx)0 Δx. Thus, for x0 = 2,

y = (2x0³ + x0² - 4x0 + 7) + (6x0² + 2x0 - 4) Δx = 19 + 24 Δx

which is the linearized form of y = 2x³ + x² - 4x + 7, with a = 24 and b = 19. Since dy = a dx for a linear function, the propagated error is dy = 24(0.01) = 0.24.
For a function of two variables, y = f(x1, x2), the Taylor expansion about the point (x10, x20) gives the linearized form

y = y0 + (∂y/∂x1)0 Δx1 + (∂y/∂x2)0 Δx2    (2-16)

Writing j1 and j2 for the two partial derivatives, and collecting them in the row vector j = [j1, j2] with Δx = [Δx1, Δx2]ᵀ,

y = y0 + [j1, j2][Δx1, Δx2]ᵀ    (2-17)

since the multiplication of j by Δx yields j1Δx1 + j2Δx2. Thus, it can be seen that Eq. (2-17) can be written compactly as

y = y0 + j Δx    (2-18)

As a matter of fact, Eq. (2-18) applies no matter how many elements j and Δx have, so long as they have the same number of elements. For example, y may be a function of four variables; i.e.,

y = y0 + j Δx = y0 + [j1, j2, j3, j4][Δx1, Δx2, Δx3, Δx4]ᵀ = y0 + j1Δx1 + j2Δx2 + j3Δx3 + j4Δx4

which shows how compact the matrix form in Eq. (2-18) is.
EXAMPLE 2-5
Figure 2-6 shows a tract of land composed of a semicircle with diameter AB, a rectangle ABCE, and a triangle ECD. Express the total area y of the tract as a function of the three dimensions x1, x2, and x3 shown in the figure. Then linearize this function, given x10 = 50 m, x20 = 20 m, x30 = 30 m.

Fig. 2-6

Solution
From the figure, the area of the tract is

y = (π/8)x3² + x1x3 + (1/2)x2x3

so that

y0 = (π/8)(30)² + (50)(30) + (1/2)(20)(30) = 2153.43 m²

Next we evaluate j:

j = [∂y/∂x1, ∂y/∂x2, ∂y/∂x3] = [x30, (1/2)x30, (π/4)x30 + x10 + (1/2)x20] = [30  15  84] (m)

Thus, by Eq. (2-18),

y = y0 + j Δx = 2153.43 + [30  15  84][Δx1, Δx2, Δx3]ᵀ (m²)

That is,

y = 2153.43 + 30Δx1 + 15Δx2 + 84Δx3 (m²)
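As a numerical check on Example 2-5, the sketch below evaluates the area function and the vector j at x0 using math.pi; the unrounded third element of j is 83.56, which the example rounds to 84:

```python
import math

def area(x1, x2, x3):
    # semicircle on diameter x3, plus rectangle x1*x3, plus triangle (1/2)*x2*x3
    return (math.pi / 8) * x3**2 + x1 * x3 + 0.5 * x2 * x3

x0 = (50.0, 20.0, 30.0)
y0 = area(*x0)
j = (x0[2],                                        # dy/dx1 = x3
     0.5 * x0[2],                                  # dy/dx2 = x3/2
     (math.pi / 4) * x0[2] + x0[0] + 0.5 * x0[1])  # dy/dx3

print(round(y0, 2))                  # 2153.43
print([round(v, 2) for v in j])      # [30.0, 15.0, 83.56]
```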
Equation (2-18) shows linearization for the case of one variable y as a function of several variables x1, x2, .... We now extend the technique to the more general case of several variables y1, y2, ..., each a function of a set of independent (measured) variables x1, x2, ...; i.e.,

y1 = f1(x1, x2, ..., xn)
y2 = f2(x1, x2, ..., xn)
...
ym = fm(x1, x2, ..., xn)    (2-19)
Linearizing each function about x0,

y1 = y10 + (∂y1/∂x1)0 Δx1 + (∂y1/∂x2)0 Δx2 + ... + (∂y1/∂xn)0 Δxn
y2 = y20 + (∂y2/∂x1)0 Δx1 + (∂y2/∂x2)0 Δx2 + ... + (∂y2/∂xn)0 Δxn
...
ym = ym0 + (∂ym/∂x1)0 Δx1 + (∂ym/∂x2)0 Δx2 + ... + (∂ym/∂xn)0 Δxn    (2-20)*

Replacing the partial derivative of each yi with respect to each variable xk by jik (e.g., ∂y3/∂x2 = j32), Eq. (2-20) becomes

y1 = y10 + j11Δx1 + j12Δx2 + ... + j1nΔxn
y2 = y20 + j21Δx1 + j22Δx2 + ... + j2nΔxn
...
ym = ym0 + jm1Δx1 + jm2Δx2 + ... + jmnΔxn    (2-21)
We can compact each line in Eq. (2-21) further to a form comparable to Eq. (2-18) by using j1 = [j11, j12, ..., j1n], j2 = [j21, j22, ..., j2n], and so on. Thus,

y1 = y10 + j1 Δx
y2 = y20 + j2 Δx
...
ym = ym0 + jm Δx    (2-22)

* At this point, it should be clear that the derivatives ∂y1/∂x1, ∂y1/∂x2, ..., ∂y1/∂xn are to be evaluated at x0.
We may further collect all the yi and yi0 into respective column matrices; thus,

y = [y1, y2, ..., ym]ᵀ;  y0 = [y10, y20, ..., ym0]ᵀ;  and, as before, Δx = [Δx1, Δx2, ..., Δxn]ᵀ

Finally, if we consider j1 as the first row, j2 as the second row, and so on, of a rectangular matrix J with m rows and n columns, Eq. (2-22) becomes

[y1, y2, ..., ym]ᵀ = [y10, y20, ..., ym0]ᵀ + [j11 j12 ... j1n; j21 j22 ... j2n; ...; jm1 jm2 ... jmn][Δx1, Δx2, ..., Δxn]ᵀ    (2-23)

or, compactly,

y = y0 + J Δx    (2-24)
The matrix J is called the Jacobian matrix; it contains the partial derivatives of all the functions in y with respect to each of the variables in x; i.e.,

J (m, n) = [∂y1/∂x1 ∂y1/∂x2 ... ∂y1/∂xn; ∂y2/∂x1 ∂y2/∂x2 ... ∂y2/∂xn; ...; ∂ym/∂x1 ∂ym/∂x2 ... ∂ym/∂xn]    (2-25)

It can be seen that J has as many rows m as there are functions yi, and as many columns n as there are independent (or measured) quantities xk. All partial derivatives, as well as the quantities y10, y20, ..., ym0, are evaluated at the given values x10, x20, ..., xn0 of the independent variables.
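When analytic partial derivatives are inconvenient, a Jacobian matrix can also be approximated numerically. The sketch below uses central differences (the helper name and step size are assumptions, not from the text) and reproduces the J of the two tract areas y1 and y2 treated in Example 2-6:

```python
import math

def jacobian(funcs, x0, h=1e-6):
    """Approximate J[i][k] = dy_i/dx_k at x0 by central differences."""
    m, n = len(funcs), len(x0)
    J = [[0.0] * n for _ in range(m)]
    for i, f in enumerate(funcs):
        for k in range(n):
            xp = list(x0); xp[k] += h
            xm = list(x0); xm[k] -= h
            J[i][k] = (f(*xp) - f(*xm)) / (2 * h)
    return J

y1 = lambda x1, x2, x3: (math.pi / 8) * x3**2 + 0.5 * x1 * x3   # semicircle + half rectangle
y2 = lambda x1, x2, x3: 0.5 * x1 * x3 + 0.5 * x2 * x3           # half rectangle + triangle

J = jacobian([y1, y2], [50.0, 20.0, 30.0])
print([[round(v) for v in row] for row in J])   # [[15, 0, 49], [15, 15, 35]]
```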
EXAMPLE 2-6
The tract shown in Fig. 2-6 is to be divided into two parts by the broken line
conne c t ing points B and E. Express the areas of the two parts, y1 and y2,
shown in the figure, as functions of the dimensions x1, x2 and x3. For x10 = 50 m,
x20=20m and x3o=30m evaluate the Jacobian matrix, J, and express y1 and y2
in linearized form.
Solution
From the figure, the two areas are

y1 = (π/8)x3² + (1/2)x1x3
y2 = (1/2)x1x3 + (1/2)x2x3

The rows of the Jacobian matrix contain the partial derivatives:

J = [∂y1/∂x1 ∂y1/∂x2 ∂y1/∂x3; ∂y2/∂x1 ∂y2/∂x2 ∂y2/∂x3]
  = [(1/2)x30   0   (π/4)x30 + (1/2)x10; (1/2)x30   (1/2)x30   (1/2)(x10 + x20)]
  = [15  0  49; 15  15  35]

Now

y0 = [y10, y20]ᵀ = [(π/8)x30² + (1/2)x10x30, (1/2)x10x30 + (1/2)x20x30]ᵀ = [1103.43, 1050.00]ᵀ (m²)

so that, by Eq. (2-24),

y = y0 + J Δx = [1103.43, 1050.00]ᵀ + [15  0  49; 15  15  35][Δx1, Δx2, Δx3]ᵀ (m²)

that is,

y1 = (1103.43 + 15Δx1 + 49Δx3) m²
y2 = (1050.00 + 15Δx1 + 15Δx2 + 35Δx3) m²
EXAMPLE 2-7
If the dimensions in Fig. 2-6 are x1 = 50.00 m, x2 = 20.00 m, and x3 = 30.00 m, and the errors in these dimensions are 0.02 m, -0.04 m, and 0.03 m, respectively, evaluate the errors in the areas y1 and y2 by applying error propagation to the linearized functions derived in Example 2-6.

Solution
Errors dx1, dx2, and dx3 in x1, x2, and x3, respectively, are also errors in Δx1, Δx2, and Δx3, respectively. Thus, applying error propagation to

y1 = 1103.43 + 15Δx1 + 49Δx3
y2 = 1050.00 + 15Δx1 + 15Δx2 + 35Δx3

gives

dy1 = 15(0.02) + 49(0.03) = 1.77 m²
dy2 = 15(0.02) + 15(-0.04) + 35(0.03) = 0.75 m²
PROBLEMS
2-1 Interior angles A and B of a plane triangle are known and fixed. Side b (opposite B) is computed from the fixed values of A and B and the measured value of side a (opposite A). If the error in a is 0.015 m, evaluate the resulting error in b: (a) for A = 60° and B = 60°; (b) for A = 120° and B = 15°; (c) for A = 15° and B = 120°.
2-2 The triangular parcel of land ABC shown in Fig. 2-7 has dimensions AB=
150.00 m, BC= 80.00 m, and CA = 110.00 m. The parcel is divided into two
parts, I and II, as shown, by setting D on AB at a distance x from B. Evaluate
the resulting error in the area of I if x has an error of 0.020 m.
Fig. 2-7
2-3 A building wall is shown in plan view in Fig. 2-8. Each window has width a, the spacing between windows is b, and the distance from each building corner to the nearest window is c. The length y of the wall is determined from measurements x1, x2, and x3. If the errors in measurements x1, x2, and x3 are 5 mm, 8 mm, and 9 mm, respectively, determine the error in the calculated value of y.
2-4 The correction c for the slope of a taped distance is computed from the function

c = v² / (2l)

where l is the length of the tape and v is the difference in elevation between the ends of the tape. If l = 30 m, v = 4.0 m, and the error in v is 0.10 m, evaluate the error in the computed slope correction.
2-5 Interior angles A, B, and C of a plane triangle are known and fixed. The area
of the triangle is computed from these angles and the measured value of side a
(opposite angle A).
Fig. 2-8
Show that the relative error* in the computed area is twice the relative error
in the measured side.
2-6 Interior angles A and B of a plane triangle are fixed at 30°00′00″ and 70°00′00″, respectively. Side a (opposite A) is measured and found to be 400.000 m, and the area of the triangle is computed from the given data. If the error in the measured value of a is 0.040 m, evaluate the error in the computed area: (a) by determining the difference between the area computed from the measured value of a and the area computed from the true value of a; (b) by applying Eq. (2-6); (c) by applying the relationship expressed in Prob. 2-5.
2-7 In stadia leveling, the difference in elevation V is computed from the rod intercept r and the vertical angle α using the function V = (1/2) k r sin 2α, where k is the stadia constant. If k = 100 (assumed errorless) and the errors in r and α are 0.005 m and 60 seconds of arc†, respectively, evaluate V and the error in V for: (a) r = 1.500 m, α = 0°; (b) r = 1.500 m, α = 15°.
2-8 Interior angles A and B and side b of a plane triangle are measured and found to be 50°00′00″, 20°00′00″, and 100.000 m, respectively. The errors in A, B, and b are 15″, -25″, and 0.010 m, respectively. If side a is calculated from A, B, and b, evaluate the error in a: (a) by determining the difference between a computed from the measured values of A, B, and b, and a computed from the corresponding true values; (b) by applying Eq. (2-12).

* If dx is the error in x, then dx/x is the relative error in x.
† Angular errors should be expressed in radians in functions that contain derivatives.
2-9 Use the Taylor series expansion, Eq. (2-13), to linearize each of the following functions at the given value x0; then use the linearized function to evaluate y at the value x1. Compare this value of y with the value obtained using the original function.
(a)
(b)
(c)
(d)
2-10 The correction for the sag of a tape is given by the function

y = w²l³ / (24p²)

in which l is the length of the tape, w is the unit weight of the tape, and p is the applied pull. The length and unit weight of a tape are 50 m and 0.031 kg/m, respectively, and the pull is variable. (a) Linearize y at p0 = 10 kg using the Taylor series expansion, Eq. (2-13). (b) Evaluate y for p = 11 kg using the linearized function and compare the resulting value with the value obtained using the original function.
2-11 Linearize the function y = [(100)² + x² - 200x cos θ]^(1/2) at x0 = 80.000 m and θ0 = 40°00′ according to Eq. (2-16). Use this linearized function to evaluate y at x = 81.000 m and θ = 40°20′, and compare the resulting value with the value of y obtained using the original function.
2-12 Vector x = [x1, x2]ᵀ is a function of vector y = [y1, y2, y3]ᵀ such that

x1 = 3y1² + (y2 + 0.7y3) / y1²
x2 = (y2 + 4y3²) / (2y1)

Evaluate the Jacobian matrix J = ∂x/∂y at y10 = 2, y20 = 4, and y30 = 1.
2-13 With reference to Fig. 2-9, vector y = [d, h]ᵀ is a nonlinear function of vector x = [s, α]ᵀ. Evaluate the Jacobian matrix J = ∂y/∂x at s0 = 1000.000 m and α0 = 20°00′00″, and express y in the linearized form of Eq. (2-24).

2-14 If, in Fig. 2-9, the measured values of s and α are 1000.000 m and 20°00′00″, respectively, and their errors are 0.100 m and 20″, respectively, evaluate the errors in the computed values of d and h by applying error propagation to the linearized form of y obtained in Problem 2-13.
3.1. INTRODUCTION

When the number of measurements, n, exceeds the minimum number, n0, needed for a unique determination of the model, the redundancy is

r = n - n0    (3-1)

The redundant measurements will in general be inconsistent with the model. The inconsistency is removed by replacing the n measurements by a new set of so-called estimates* l̂, such that the new set l̂ fits the model exactly. For example, when the new estimates of the side and three interior angles are calculated, any combination of the side and any two angles will give exactly the same triangle. This means that with l̂, the inconsistency with the model is resolved, and any n0 subset of the n estimated measurements will always yield the same unique determination of the model.
Each estimated observation, l̂i, can be looked upon as a corrected observation, obtained from the measured value li by adding a correction vi to it, i.e.,

l̂i = li + vi    (3-2)

or, collecting all n observations into column vectors of dimension (n, 1),

l̂ = l + v    (3-3)

in which

l = [l1, l2, ..., ln]ᵀ;  l̂ = [l̂1, l̂2, ..., l̂n]ᵀ;  v = [v1, v2, ..., vn]ᵀ    (3-4)

* The term estimate is used in the proper statistical sense (see Chapter 8). It is not to be taken as a guess, but as the result of an appropriate computation or graphical construction using known data.
Fig. 3-1

The departure and latitude of each traverse course are computed as

departure = l sin a;  latitude = l cos a    (3-5)

in which l is the length of the course and a is its azimuth from north, or the Y-axis in Fig. 3-2. Starting with point A and using the measured angles and distances, we end up at point B′ instead of the control point B. The closing errors are shown in the figure as the departure error e_x and the latitude error e_y. The corrections in departure and latitude for one leg, such as 2-3, are computed as

departure correction for 2-3 = -(l23)(e_x) / Σ lij    (3-6)

latitude correction for 2-3 = -(l23)(e_y) / Σ lij    (3-7)

Fig. 3-2

in which, for the traverse in Fig. 3-2,

Σ lij = lA1 + l12 + l23 + l3B
EXAMPLE 3-1
Data for the traverse in Fig. 3-2 (stations A, 1, 2, 3, B) are as follows:

LINE   LENGTH (m)   AZIMUTH
A1     212.120      60°33′00″
12     321.070      130°12′00″
23     315.820      64°03′00″
3B     304.650      122°45′00″

STATION   X (m)       Y (m)
A         5000.000    5000.000
B         5970.010    4870.280

Adjust the traverse by the compass (Bowditch) rule.
Solution
The compass or Bowditch rule yields good results when distances and angles are measured with equivalent precision. The rule assumes no correlation and that all measurements are of equal weight. The least squares method, on the other hand, imposes no such restrictions. Its results are optimum, and with the accelerated increase in the availability and efficiency of electronic computational aids, it is expected to be the procedure that is most extensively used.

First, we compute the departures and latitudes and the provisional coordinates of point B′ to determine the closing errors e_x and e_y. (See the first table below.)

In order to apply the compass rule, the following two ratios are needed:

1. e_x divided by the total length of the traverse:

   e_x / Σl = 0.134 / 1153.66 = 1.16 × 10⁻⁴

2. e_y divided by the total length of the traverse:

   e_y / Σl = 0.166 / 1153.66 = 1.44 × 10⁻⁴
COURSE   LENGTH (m)   AZIMUTH      DEPARTURE (m)   LATITUDE (m)
A1       212.12       60.5500°     184.711          104.292
12       321.07       130.2000°    245.232         -207.237
23       315.82       64.0500°     283.978          138.199
3B′      304.65       122.7500°    256.223         -164.808
         1153.66

STATION   X (m)        Y (m)
A         5000.000     5000.000
1         5184.711     5104.292
2         5429.943     4897.055
3         5713.921     5035.254
B′        5970.144     4870.446
B         5970.010     4870.280
          e_x = 0.134  e_y = 0.166
    COURSE  LENGTH   DEPARTURE  DEPARTURE   CORRECTED  LATITUDE  LATITUDE    CORRECTED
            (m)      (m)        CORRECTION  DEPARTURE  (m)       CORRECTION  LATITUDE
    A1      212.12   184.711    -0.025      184.686    104.292   -0.031      104.261
    12      321.07   245.232    -0.037      245.195   -207.237   -0.046     -207.283
    23      315.82   283.978    -0.037      283.941    138.199   -0.045      138.154
    3B'     304.65   256.223    -0.035      256.188   -164.808   -0.044     -164.852
    sum    1153.66              -0.134                           -0.166

    STATION        X (m)       Y (m)
    A            5000.000    5000.000
    1            5184.686    5104.261
    2            5429.881    4896.978
    3            5713.822    5035.132
    B'           5970.010    4870.280
    B (control)  5970.010    4870.280
    check            0           0
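The compass-rule computation above can be sketched in pure Python; a minimal sketch using the lengths, azimuths, and control coordinates of Example 3-1 (no external libraries assumed):

```python
import math

# Traverse data from Example 3-1: (course, length in m, azimuth in decimal degrees).
courses = [
    ("A1", 212.12, 60.55),
    ("12", 321.07, 130.20),
    ("23", 315.82, 64.05),
    ("3B", 304.65, 122.75),
]
total = sum(l for _, l, _ in courses)

# Departures and latitudes (azimuth measured clockwise from north, the Y-axis).
dep = [l * math.sin(math.radians(az)) for _, l, az in courses]
lat = [l * math.cos(math.radians(az)) for _, l, az in courses]

# Closing errors: provisional B' minus the control point B.
xA, yA = 5000.0, 5000.0
xB, yB = 5970.010, 4870.280
ex = xA + sum(dep) - xB
ey = yA + sum(lat) - yB

# Compass (Bowditch) rule: each course receives a correction
# proportional to its own length.
dep_adj = [d - (l / total) * ex for (_, l, _), d in zip(courses, dep)]
lat_adj = [t - (l / total) * ey for (_, l, _), t in zip(courses, lat)]
```

Because the corrections are proportional to length and the lengths sum to the total, the adjusted departures and latitudes close exactly on the control point B.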
$\phi = v_1^2 + v_2^2 + \cdots + v_n^2 = \sum_{i=1}^{n} v_i^2 \rightarrow \text{minimum}$   (3-8)
Thus, in addition to the fact that the adjusted observations must satisfy the model exactly, the corresponding residuals must satisfy the least squares criterion of Eq. (3-8). To demonstrate the application of this criterion, let us consider the very simple example of determining a distance x by direct measurement. Since the model involves only one element (or variable), it requires only one measurement to uniquely determine the distance, i.e., $n_0 = 1$. Suppose that we have two measurements, $l_1 = 15.12$ m and $l_2 = 15.14$ m. Then n = 2, and according to Eq. (3-1), there is one redundancy because $r = n - n_0 = 2 - 1 = 1$.
The final value of the distance can be obtained from the observations as
follows:
$\hat{x} = l_1 + v_1 = \hat{l}_1$
$\hat{x} = l_2 + v_2 = \hat{l}_2$   (3-9)
There are obviously many possible values for $v_1$ and $v_2$ such that these relations are satisfied. For example, we could have $v_1 = 0$ and $v_2 = -0.02$ m, or $v_1 = 0.01$ m and $v_2 = -0.01$ m, or $v_1 = 0.015$ m and $v_2 = -0.005$ m. The corresponding values of $\phi$ are

$\phi_1 = 0 + (0.02)^2 = 4 \times 10^{-4}\ \text{m}^2$
$\phi_2 = (0.01)^2 + (0.01)^2 = 2 \times 10^{-4}\ \text{m}^2$
$\phi_3 = (0.015)^2 + (0.005)^2 = 2.5 \times 10^{-4}\ \text{m}^2$
It is clear that $\phi_2$ is the smallest of the three values, but the real question is whether it is the very minimum value when all possible combinations of corrections are considered. To answer this question, and also to demonstrate the criterion of least squares geometrically, we refer to Fig. 3-3. The two adjusted observations must satisfy

$\hat{l}_1 - \hat{l}_2 = 0$   (3-10)

which is easily obtained from Eq. (3-9) by subtracting the second line from the first. Now, if we let the abscissa of a two-dimensional Cartesian coordinate system in Fig. 3-3 represent $\hat{l}_1$ and the ordinate $\hat{l}_2$, then Eq. (3-10) would be depicted by a straight line inclined at 45° to both axes. The measurements $l_1 = 15.12$ m and $l_2 = 15.14$ m define a point A which falls above the line because $l_1 < l_2$. The line representing Eq. (3-10) is called the condition line, since it represents the condition that must exist between the two adjusted observations $\hat{l}_1$ and $\hat{l}_2$.
Fig 3-3
for such a point on the line, three of which are indicated by A1, A2, A3 to correspond to the three computed values $\phi_1$, $\phi_2$, $\phi_3$, respectively. Of all possibilities, the least squares principle selects the one point, A2, such that the distance AA2 is the shortest possible (i.e., minimum). From simple geometry, the line AA2 is therefore normal to the condition line, as shown in Fig. 3-3. It can be seen that A2 also satisfies the intuitive property that the adjusted value is the arithmetic mean of the two measurements. Indeed, this is the simplest case of the very important fact that whenever measurements of a quantity are uncorrelated and of equal precision (weight), the least squares estimate of the quantity is equal to the arithmetic mean of the measurements (see the following section).
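The three trial values of $\phi$ discussed above can be checked numerically; a minimal sketch in pure Python, using the two measurements 15.12 m and 15.14 m from the text:

```python
# Two uncorrelated, equal-precision measurements of one distance (m).
obs = [15.12, 15.14]

# Least squares: choose the estimate x that minimizes the sum of
# squared residuals phi(x) = sum (x - l_i)^2.
def phi(x):
    return sum((x - l) ** 2 for l in obs)

# The three candidate estimates corresponding to phi_1, phi_2, phi_3.
candidates = [15.12, 15.13, 15.135]
best = min(candidates, key=phi)

# The arithmetic mean gives the true minimum for equal weights.
mean = sum(obs) / len(obs)
```

Scanning the candidates confirms that the mean (15.13 m, corresponding to $\phi_2$) gives the smallest $\phi$, matching the geometric argument of Fig. 3-3.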
When only $n_0$ observations are obtained, the model will be uniquely determined, as for example when measuring a distance once. If one additional measurement is made, there is one redundancy (r = 1) and a corresponding equation must be formulated to alleviate the resulting inconsistency. Thus, in the case of two measurements of a distance just discussed, Eq. (3-10) must be enforced in order to guarantee that the two adjusted observations end up being equal, thus satisfying the model. Such an equation is called a condition equation, or simply a condition, since it reflects the condition that must be satisfied with regard to the given model.
As another example, consider the shape of a plane triangle. Any two interior angles would uniquely determine its shape, i.e., $n_0 = 2$. If all three interior angles are measured (n = 3), then there is a redundancy of one ($r = n - n_0 = 3 - 2 = 1$). For this redundancy, one condition equation needs to be written to make the adjusted observed angles consistent; such a condition would reflect the geometric fact that the sum of the three interior angles in a plane triangle must equal 180°. Thus, a condition equation can be written directly in terms of the observations. This will lead to one technique of least squares called adjustment of observations only.
A second least squares technique which is used frequently is called adjustment of indirect observations. In this technique, the number of conditions
is equal to the total number of observations, n. Since in terms of the observations only there should be r conditions, then in this technique the
equations must contain n - r= n0 additional unknown variables. These
additional unknowns are called parameters. However, unlike an observation, which has a value at the outset, a parameter is an unknown which has no a priori value. An example of the conditions involved in the case of adjustment
$l_1 + v_1 = \hat{x}$  or  $v_1 = \hat{x} - 32.51$
$l_2 + v_2 = \hat{x}$  or  $v_2 = \hat{x} - 32.48$
$l_3 + v_3 = \hat{x}$  or  $v_3 = \hat{x} - 32.52$
$l_4 + v_4 = \hat{x}$  or  $v_4 = \hat{x} - 32.53$
As expected, the least squares estimate turns out to be the simple mean of
the four observations. This is always true no matter how many repeated
measurements there are as long as they are uncorrelated and of equal
precision. [When the measurements are uncorrelated but unequal in precision, the so-called weighted mean is used; see Example 4-5 and Eq. (4-39), Chapter 4.]
EXAMPLE 3-3
The interior angles of a plane triangle are observed as $x_1 = 41°33'$, $x_2 = 78°57'$, and $x_3 = 59°27'$. Compute the adjusted angles using the method of least squares.

Solution
Since it takes two angles to fix the triangle (i.e., $n_0 = 2$) and since n = 3, the redundancy is r = 3 - 2 = 1. The one corresponding condition is

$\hat{x}_1 + \hat{x}_2 + \hat{x}_3 = 180°$

or

$(x_1 + v_1) + (x_2 + v_2) + (x_3 + v_3) = 180°$

or

$v_1 + v_2 + v_3 = 180° - (x_1 + x_2 + x_3) = 180° - (41°33' + 78°57' + 59°27') = 180° - 179°57' = 3'$

Substituting $v_3 = 3' - v_1 - v_2$ into $\phi = v_1^2 + v_2^2 + v_3^2$ and setting the partial derivatives to zero gives

$\partial\phi/\partial v_1 = 2v_1 + 2(3' - v_1 - v_2)(-1) = 0$
$\partial\phi/\partial v_2 = 2v_2 + 2(3' - v_1 - v_2)(-1) = 0$

Hence $v_1 = v_2$; substituting back in the first of the pair of equations, we get

$v_2 = 3' - 2v_1 = 3' - 2' = 1'$, i.e., $v_1 = v_2 = 1'$

Finally,

$v_3 = 3' - v_1 - v_2 = 3' - 2' = 1'$

$\hat{x}_1 = x_1 + v_1 = 41°34'$
$\hat{x}_2 = x_2 + v_2 = 78°58'$
$\hat{x}_3 = x_3 + v_3 = 59°28'$

Check: the sum is 180°00'.
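The arithmetic of Example 3-3 can be sketched in pure Python, working in minutes of arc for convenience:

```python
# Observed interior angles of Example 3-3, as (degrees, minutes).
observed = [(41, 33), (78, 57), (59, 27)]

# Convert to minutes of arc.
obs_min = [d * 60 + m for d, m in observed]

# Closing error: the adjusted angles must sum to 180 degrees.
w = 180 * 60 - sum(obs_min)          # misclosure in minutes

# Uncorrelated, equal-precision angles: least squares distributes
# the misclosure equally among the three observations.
v = w / len(obs_min)
adjusted = [a + v for a in obs_min]

# Back to (degrees, minutes) for display.
adj_dm = [(int(a // 60), a % 60) for a in adjusted]
```

With a 3' misclosure, each angle receives a +1' correction, reproducing the adjusted values 41°34', 78°58', 59°28'.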
EXAMPLE 3-4
Distances AB, BC, CD, AC, and BD are measured (Fig. 3-4). The observed values are 100.000 m, 100.000 m, 100.080 m, 200.040 m, and 200.000 m, respectively. All measurements are uncorrelated and have the same precision. If the measured values are adjusted in accordance with the least squares principle, what is the resulting adjusted distance between A and D?
Solution
The geometric model is relatively simple being that of three collinear
distances, AB, BC, and CD, which we will denote by x 1 , x2 and x3
respectively. It would obviously take a minimum of three measurements to
uniquely determine this model (i.e., n 0= 3). Since we have five measured
distances (n= 5), there are then two redundant observations, or r=5-3=2.
If we carry $x_1$, $x_2$, $x_3$ as three unknown parameters, then we need to write $2 + 3 = 5 = n$ condition equations which relate the five measurements $l_1, l_2, \ldots, l_5$ to the parameters:

$l_1 + v_1 = x_1$  or  $v_1 = x_1 - l_1 = x_1 - 100.000$
$l_2 + v_2 = x_2$  or  $v_2 = x_2 - l_2 = x_2 - 100.000$
$l_3 + v_3 = x_3$  or  $v_3 = x_3 - l_3 = x_3 - 100.080$
$l_4 + v_4 = x_1 + x_2$  or  $v_4 = x_1 + x_2 - l_4 = x_1 + x_2 - 200.040$
$l_5 + v_5 = x_2 + x_3$  or  $v_5 = x_2 + x_3 - l_5 = x_2 + x_3 - 200.000$
In order to obtain a least squares solution we must minimize the sum of the squares of the residuals. Thus,

$\phi = (x_1 - 100.000)^2 + (x_2 - 100.000)^2 + (x_3 - 100.080)^2 + (x_1 + x_2 - 200.040)^2 + (x_2 + x_3 - 200.000)^2$

must be minimized. To minimize $\phi$, its partial derivative with respect to each parameter is set equal to zero:

Fig. 3-4

$\partial\phi/\partial x_1 = 2(x_1 - 100.000) + 2(x_1 + x_2 - 200.040) = 0$
$\partial\phi/\partial x_2 = 2(x_2 - 100.000) + 2(x_1 + x_2 - 200.040) + 2(x_2 + x_3 - 200.000) = 0$
$\partial\phi/\partial x_3 = 2(x_3 - 100.080) + 2(x_2 + x_3 - 200.000) = 0$

which simplify to

$2x_1 + x_2 = 300.040$   (a)
$x_1 + 3x_2 + x_3 = 500.040$   (b)
$x_2 + 2x_3 = 300.080$   (c)

These three equations in three unknowns are called the normal equations. They indicate that after applying the least squares criterion (of minimizing $\phi$), an overdetermined inconsistent measurement case is transformed into a unique (consistent) case.
Dividing (a) by 2 and subtracting from (b) gives $2.5x_2 + x_3 = 350.020$. Substituting $x_3 = 150.040 - 0.5x_2$ from (c) yields $2x_2 = 199.980$, or

$x_2 = 99.990\ \text{m}$

and, back-substituting in (a) and (c),

$x_1 = 100.025\ \text{m}$
$x_3 = 100.045\ \text{m}$

The adjusted distance between A and D is therefore $x_1 + x_2 + x_3 = 300.060$ m.
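The normal equations (a)-(c) of Example 3-4 can be solved mechanically; a minimal pure-Python sketch using Gaussian elimination (the `gauss_solve` helper is an illustrative name, not from the text):

```python
# Normal equations (a)-(c) of Example 3-4 as an augmented matrix [A | b].
A = [[2.0, 1.0, 0.0, 300.040],
     [1.0, 3.0, 1.0, 500.040],
     [0.0, 1.0, 2.0, 300.080]]

def gauss_solve(aug):
    """Solve a small linear system by Gaussian elimination with
    partial pivoting; aug is the augmented matrix [A | b]."""
    n = len(aug)
    for col in range(n):
        # Pivot on the largest entry in this column.
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

x1, x2, x3 = gauss_solve(A)
AD = x1 + x2 + x3   # adjusted distance between A and D
```

This reproduces the hand solution: x2 = 99.990 m, x1 = 100.025 m, x3 = 100.045 m, and AD = 300.060 m.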
EXAMPLE 3-5
The equation of a straight line in a plane is $y - ax - b = 0$, as shown in Fig. 3-5. In order to estimate the slope, a, and the y-intercept, b, the coordinates of three points are given:

    POINT    X (cm)    Y (cm)
    1        2         3.2
    2        4         4.0
    3        6         5.0

The task in this exercise is to find a and b such that the line $y - ax - b = 0$ fits the three points as closely as possible according to the least squares criterion. Obviously, it takes a minimum of two measurements (two points) to uniquely determine a straight line (i.e., $n_0 = 2$). Since we have three measurements (n = 3), there is one redundant observation (r = 3 - 2 = 1). If we carry the two unknown parameters a and b, we must write n = 3 condition equations which relate the three measurements $y_1$, $y_2$, $y_3$ to the two parameter estimates:

$v_1 + y_1 - ax_1 - b = 0$
$v_2 + y_2 - ax_2 - b = 0$
$v_3 + y_3 - ax_3 - b = 0$

Expressing the residuals in terms of a and b and the given coordinate values, we have

$v_1 = 2a + b - 3.2$
$v_2 = 4a + b - 4.0$
$v_3 = 6a + b - 5.0$

The function to be minimized is

$\phi = v_1^2 + v_2^2 + v_3^2 = (2a + b - 3.2)^2 + (4a + b - 4.0)^2 + (6a + b - 5.0)^2$

Setting the partial derivatives with respect to a and b equal to zero,

$\partial\phi/\partial a = 2(2a + b - 3.2)(2) + 2(4a + b - 4.0)(4) + 2(6a + b - 5.0)(6) = 0$
$\partial\phi/\partial b = 2(2a + b - 3.2) + 2(4a + b - 4.0) + 2(6a + b - 5.0) = 0$

which reduce to the normal equations

$56a + 12b = 52.4$
$12a + 3b = 12.2$

Solving these gives a = 0.45 and b ≈ 2.27 cm.
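The normal equations of Example 3-5 can also be formed directly from the sums of products; a minimal pure-Python sketch:

```python
# Data points of Example 3-5 (x error-free, y observed, in cm).
xs = [2.0, 4.0, 6.0]
ys = [3.2, 4.0, 5.0]

# Normal equations for y = a*x + b with equal weights:
#   (sum x^2) a + (sum x) b = sum x*y
#   (sum x)   a +    n    b = sum y
sxx = sum(x * x for x in xs)
sx = sum(xs)
sxy = sum(x * y for x, y in zip(xs, ys))
sy = sum(ys)
n = len(xs)

# Solve the 2x2 system by Cramer's rule.
det = sxx * n - sx * sx
a = (sxy * n - sx * sy) / det
b = (sxx * sy - sx * sxy) / det
```

The computed sums are 56, 12, 52.4, and 12.2, matching the normal equations in the text, with solution a = 0.45 and b ≈ 2.267 cm.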
EXAMPLE 3-6
Figure 3-6 depicts a simple triangulation scheme where the six angles $x_1, x_2, \ldots, x_6$ are measured and have the following values (in decimal degrees):

$x_1 = 48.88$, $x_2 = 42.10$, $x_3 = 44.52$, $x_4 = 43.80$, $x_5 = 46.00$, $x_6 = 44.70$

Using the method of least squares, calculate the adjusted values of the six angles.
Solution I
The geometric model of this problem is concerned with two overlapping plane triangles, as shown in Fig. 3-6. It therefore takes $n_0 = 4$ angles to uniquely determine the model. Having n = 6 measured angles, the redundancy is r = 6 - 4 = 2. Hence, there are two independent condition equations that can be written in terms of the six observations. These are

$\hat{x}_1 + \hat{x}_2 + \hat{x}_3 + \hat{x}_4 = 180.00$
$\hat{x}_3 + \hat{x}_4 + \hat{x}_5 + \hat{x}_6 = 180.00$

Fig. 3-6

or

$(x_1 + v_1) + (x_2 + v_2) + (x_3 + v_3) + (x_4 + v_4) = 180.00$

and

$(x_3 + v_3) + (x_4 + v_4) + (x_5 + v_5) + (x_6 + v_6) = 180.00$

so that

$v_1 + v_2 + v_3 + v_4 = 180.00 - (x_1 + x_2 + x_3 + x_4) = 180.00 - 179.30 = 0.70$
$v_3 + v_4 + v_5 + v_6 = 180.00 - (x_3 + x_4 + x_5 + x_6) = 180.00 - 179.02 = 0.98$

The two values 0.70 and 0.98 are called closing errors for the two triangles ABD and ABC, respectively.
The least squares criterion calls for minimization of $\phi = v_1^2 + v_2^2 + \cdots + v_6^2$. Substituting $v_1 = 0.70 - v_2 - v_3 - v_4$ and $v_6 = 0.98 - v_3 - v_4 - v_5$ from the two condition equations, the partial derivatives of $\phi$ with respect to the remaining residuals are set equal to zero; for example,

$\partial\phi/\partial v_2 = 2(0.70 - v_2 - v_3 - v_4)(-1) + 2v_2 = 0$
$\partial\phi/\partial v_4 = 2(0.70 - v_2 - v_3 - v_4)(-1) + 2v_4 + 2(0.98 - v_3 - v_4 - v_5)(-1) = 0$
$\partial\phi/\partial v_5 = 2v_5 + 2(0.98 - v_3 - v_4 - v_5)(-1) = 0$
$\partial\phi'/\partial v_4 = 2v_4 - 2k_1 - 2k_2 = 0$  or  $v_4 = k_1 + k_2$
$\partial\phi'/\partial v_5 = 2v_5 - 2k_2 = 0$  or  $v_5 = k_2$
$\partial\phi'/\partial v_6 = 2v_6 - 2k_2 = 0$  or  $v_6 = k_2$
Now we have the six unknown residuals expressed in terms of only two unknown constants $k_1$, $k_2$. Thus, when substituting into the two condition equations, we get two normal equations in two unknowns, i.e.,

$k_1 + k_1 + (k_1 + k_2) + (k_1 + k_2) = 0.70$
$(k_1 + k_2) + (k_1 + k_2) + k_2 + k_2 = 0.98$

or

$4k_1 + 2k_2 = 0.70$
$2k_1 + 4k_2 = 0.98$

from which $k_1 = 0.07$ and $k_2 = 0.21$. The residuals are then

$v_1 = v_2 = k_1 = 0.07$
$v_3 = v_4 = k_1 + k_2 = 0.28$
$v_5 = v_6 = k_2 = 0.21$
which are identical to those calculated by the first procedure. Therefore, the estimated values for the angles $x_1, x_2, \ldots, x_6$ are also the same. These two procedures represent two techniques of least squares adjustment described in Chapter 4.
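Solution II can be sketched numerically; a minimal pure-Python sketch solving the two normal equations in the Lagrange multipliers:

```python
# Observed angles of Example 3-6 (decimal degrees).
x = [48.88, 42.10, 44.52, 43.80, 46.00, 44.70]

# Closing errors of the two triangle conditions.
c1 = 180.00 - (x[0] + x[1] + x[2] + x[3])   # v1+v2+v3+v4 = c1
c2 = 180.00 - (x[2] + x[3] + x[4] + x[5])   # v3+v4+v5+v6 = c2

# Normal equations in the Lagrange multipliers k1, k2:
#   4*k1 + 2*k2 = c1
#   2*k1 + 4*k2 = c2
det = 4 * 4 - 2 * 2
k1 = (4 * c1 - 2 * c2) / det
k2 = (4 * c2 - 2 * c1) / det

# Residuals in terms of the multipliers, then the adjusted angles.
v = [k1, k1, k1 + k2, k1 + k2, k2, k2]
adjusted = [xi + vi for xi, vi in zip(x, v)]
```

Both triangle conditions are satisfied exactly by the adjusted angles, as the assertions below confirm.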
EXAMPLE 3-7
Figure 3-7 shows a small level net in which A is a bench mark with known elevation 281.130 m. The following differences in elevation are observed using a direct leveling procedure:
    FROM            TO              OBSERVED DIFFERENCE
    (LOWER POINT)   (HIGHER POINT)  IN ELEVATION (m)
    B               A               l1 = 11.973
    D               B               l2 = 10.940
    D               A               l3 = 22.932
    B               C               l4 = 21.040
    D               C               l5 = 31.891
    A               C               l6 =  8.983
Assuming that all observations are uncorrelated and have equal precision, use the least squares criterion to calculate values (least squares estimates) for the elevations of points B, C, and D.
Solution
In this problem, the minimum number of observations needed for a unique
solution is n 0= 3, and the number of redundant observations is r = 6 - 3 = 3.
If there were no observational errors, going around a loop and closing back on the starting point would yield no discrepancy. However, this is not the case; for example, going from A to B to D and back to A yields (-11.973 - 10.940 + 22.932) = 0.019 m. We must therefore allow for this discrepancy, and others, by introducing six residuals $v_1, v_2, \ldots, v_6$, one for each observation. For convenience, the elevation of a point is designated by its own symbol, i.e., B is the elevation of point B, and so on. The six condition equations are
Clearing and rearranging, we get the following set of four normal equations in the unknown residuals $v_2$, $v_3$, $v_4$, $v_5$:

$2v_2 + v_3 + v_4 = 0.70$   (a)
$v_2 + 3v_3 + 2v_4 + v_5 = 1.68$   (b)
$v_2 + 2v_3 + 3v_4 + v_5 = 1.68$   (c)
$v_3 + v_4 + 2v_5 = 0.98$   (d)

Subtracting (b) from (c) gives

$v_3 = v_4$   (e)

Solving the set yields $v_2 = 0.07$, $v_3 = v_4 = 0.28$, and $v_5 = 0.21$, and hence $v_1 = 0.07$ and $v_6 = 0.21$. The adjusted angles are

$\hat{x}_1 = 48.95$, $\hat{x}_2 = 42.17$, $\hat{x}_3 = 44.80$, $\hat{x}_4 = 44.08$, $\hat{x}_5 = 46.21$, $\hat{x}_6 = 44.91$

Check: the first four sum to 180.00, and the last four also sum to 180.00,

also showing that the two condition equations are exactly satisfied.
Solution II
This problem can also be solved in another way. The two condition equations are rewritten as follows:

$v_1 + v_2 + v_3 + v_4 - 0.70 = 0$   (1)
$v_3 + v_4 + v_5 + v_6 - 0.98 = 0$   (2)

The left-hand sides are multiplied by factors $-2k_1$ and $-2k_2$ and added to the sum of squared residuals. The scalars $k_1$ and $k_2$ are unknown constants which are called Lagrange multipliers (see also Section 4.4). The factor 2 is introduced for convenience only, to avoid fractions after differentiation.

Now, the least squares criterion calls for minimization of the function

$\phi' = \sum v_i^2 - 2k_1(v_1 + v_2 + v_3 + v_4 - 0.70) - 2k_2(v_3 + v_4 + v_5 + v_6 - 0.98)$

Setting the partial derivatives of $\phi'$ equal to zero,

$\partial\phi'/\partial v_1 = 2v_1 - 2k_1 = 0$  or  $v_1 = k_1$
$\partial\phi'/\partial v_2 = 2v_2 - 2k_1 = 0$  or  $v_2 = k_1$
$\partial\phi'/\partial v_3 = 2v_3 - 2k_1 - 2k_2 = 0$  or  $v_3 = k_1 + k_2$
$B + l_1 + v_1 - A = 0$  or  $B + l_1 + v_1 - 281.130 = 0$
$D + l_2 + v_2 - B = 0$
$D + l_3 + v_3 - A = 0$  or  $D + l_3 + v_3 - 281.130 = 0$
$B + l_4 + v_4 - C = 0$
$D + l_5 + v_5 - C = 0$
$A + l_6 + v_6 - C = 0$  or  $281.130 + l_6 + v_6 - C = 0$
Fig 3-9
3-4 All sides of the trilateration network shown in Fig. 3-10 are measured with
an EDM device. No angles are measured. Evaluate the redundancy if the size
and shape of the network constitute the mathematical model.
3-5 The parabola $y = ax^2 + bx + c$ is to be fitted to 14 data points. The x-coordinate of each point is an error-free constant, the y-coordinate is an observed quantity, and a, b, and c are unknown parameters. Evaluate the redundancy.
3-6 The interior angles of a four-sided closed traverse (quadrilateral) are measured and found to be: $\theta_1 = 110°00'20''$, $\theta_2 = 90°02'15''$, $\theta_3 = 80°05'25''$, and $\theta_4 = 79°52'40''$. Adjust these angles according to the principle of least squares, assuming all observations are uncorrelated and have the same precision. What is the equivalent simple adjustment technique?
Fig 3-10
3-7 The distances shown in Fig. 3-11 are measured. All measurements
are uncorrelated and have the same precision. The measured values are:
l1 = 100.010 m, l2 = 200.050 m, l3 =200.070 m, and l 4 = 300.090 m. Use
the principle of least squares to find the adjusted distance between A and
C.
3-8 The angles about a survey station, Fig. 3-12, are measured. The observed values are: a = 60°00'00", b = 60°00'00", c = 240°00'25", and d = 120°00'05". All measurements are uncorrelated and have the same precision. If these observed angles are adjusted in accordance with the least squares principle, what are the adjusted values of angles a and b?
3-9 With reference to Fig. 3-13, the following angles are measured:

L1 = Angle AOB = 30°00'20"
L2 = Angle AOC = 50°00'00"
L3 = Angle COD = 20°00'00"
L4 = Angle BOD = 40°00'20"

The measurements are uncorrelated and have the same precision. Find the adjusted values for the angles in accordance with the principle of least squares.
Fig. 3-12
Fig. 3-13
3-10 Use the principle of least squares to estimate the parameters of the straight line $y = ax + b$ that fits the following data:

    x    1      2      3      4      5
    y    9.50   8.85   8.05   7.50   7.15

Assume the x-coordinates are error-free constants and the y-coordinates are uncorrelated observations with equal precision.
3-11 Figure 3-14 shows a level net connecting three bench marks, A, B, and C.
fig 3-14
fig 3-15
Least Squares
Adjustment
$l_1 + v_1 - \hat{x} = 0$
$l_2 + v_2 - \hat{x} = 0$   (4-1)

Replacing $\hat{x}$ by the more commonly used parameter symbol $\Delta$ and rearranging Eq. (4-1) gives

$v_1 - \Delta = -l_1$
$v_2 - \Delta = -l_2$   (4-2)

We can collect the residuals in one column vector v, and the observations in another vector l, i.e.,

$v = \begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$  and  $l = \begin{bmatrix} l_1 \\ l_2 \end{bmatrix}$

and factor out the coefficients of $\Delta$ in both equations into one coefficient matrix

$B = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$   (4-3)

so that the pair of condition equations becomes

$v + B\Delta = f$   (4-4)

In general, with n condition equations and u parameters, the equations take the form

$v_i + b_{i1}\Delta_1 + b_{i2}\Delta_2 + \cdots + b_{iu}\Delta_u = d_i - l_i = f_i, \quad i = 1, 2, \ldots, n$   (4-5)

in which $v_1, v_2, \ldots, v_n$ are the residuals, $b_{ij}$ the known coefficients of the parameters, $d_i$ numerical constants, and $l_i$ the observations. Written out in full matrix form, the set of n equations is

$\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} & \cdots & b_{1u} \\ b_{21} & b_{22} & \cdots & b_{2u} \\ \vdots & & & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nu} \end{bmatrix} \begin{bmatrix} \Delta_1 \\ \Delta_2 \\ \vdots \\ \Delta_u \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix}$   (4-6)

and in a more concise form becomes identical to Eq. (4-4), or

$\underset{n,1}{v} + \underset{n,u}{B}\,\underset{u,1}{\Delta} = \underset{n,1}{d} - \underset{n,1}{l} = \underset{n,1}{f}$   (4-7)
The following are examples of the matrix form of condition equations for
least squares adjustment of indirect observations.
EXAMPLE 4-1
We recall the data for the problem in Example 3-5 of fitting a straight line of the form y - ax - b = 0 to three points:

    POINT    X (cm)    Y (cm)
    1        2         3.2
    2        4         4.0
    3        6         5.0

The condition equations are

$v_1 + y_1 - ax_1 - b = 0$  or  $v_1 - ax_1 - b = -y_1$
$v_2 + y_2 - ax_2 - b = 0$  or  $v_2 - ax_2 - b = -y_2$
$v_3 + y_3 - ax_3 - b = 0$  or  $v_3 - ax_3 - b = -y_3$

In matrix form these are

$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix} + \begin{bmatrix} -2 & -1 \\ -4 & -1 \\ -6 & -1 \end{bmatrix}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} -3.2 \\ -4.0 \\ -5.0 \end{bmatrix}$

or

$v + B\Delta = f$

with corresponding values above. The vectors v and f are each 3 × 1 (where 3 = n = number of observations), B is 3 × 2, and $\Delta$ is 2 × 1, containing the two unknown parameters a (the slope of the line) and b (its y-intercept). Note that in this example f = -l because the numerical vector d is equal to zero.
EXAMPLE 4-2
Reference is made to the level net in Fig. 3-7 and the data given in Example
3-7. It is required to give the matrix form, including the dimensions of the
constituent matrices, of the condition equations for the adjustment of the
level net by the method of indirect observations.
Solution
Since n = 6 and $n_0$ = 3 (see Example 3-7), there are six condition equations including three unknown parameters. To conform to the general symbols, we will designate the elevations of points B, C, and D by $\Delta_1$, $\Delta_2$, and $\Delta_3$, respectively. Again with reference to Example 3-7, the six condition equations are
$B + l_1 + v_1 - A = 0$  or  $v_1 + \Delta_1 = A - l_1 = 269.157 = f_1$
$D + l_2 + v_2 - B = 0$  or  $v_2 - \Delta_1 + \Delta_3 = -l_2 = -10.940 = f_2$
$D + l_3 + v_3 - A = 0$  or  $v_3 + \Delta_3 = A - l_3 = 258.198 = f_3$
$B + l_4 + v_4 - C = 0$  or  $v_4 + \Delta_1 - \Delta_2 = -l_4 = -21.040 = f_4$
$D + l_5 + v_5 - C = 0$  or  $v_5 - \Delta_2 + \Delta_3 = -l_5 = -31.891 = f_5$
$A + l_6 + v_6 - C = 0$  or  $v_6 - \Delta_2 = -A - l_6 = -290.113 = f_6$

The only unknowns are the parameters, which can be placed in a column vector

$\Delta = \begin{bmatrix} \Delta_1 \\ \Delta_2 \\ \Delta_3 \end{bmatrix}$

In matrix form, the six condition equations are

$\begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \\ v_6 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & 0 \end{bmatrix} \begin{bmatrix} \Delta_1 \\ \Delta_2 \\ \Delta_3 \end{bmatrix} = \begin{bmatrix} 269.157 \\ -10.940 \\ 258.198 \\ -21.040 \\ -31.891 \\ -290.113 \end{bmatrix}$
For the two measurements of a distance considered earlier, the condition between the observations themselves is $(l_1 + v_1) - (l_2 + v_2) = 0$, or

$v_1 - v_2 + l_1 - l_2 = 0$

which in general is

$Av = f$   (4-8)

with the r conditions written out as

$a_{11}v_1 + a_{12}v_2 + \cdots + a_{1n}v_n = f_1$
$\vdots$   (4-9)
$a_{r1}v_1 + a_{r2}v_2 + \cdots + a_{rn}v_n = f_r$

where $v_1, v_2, \ldots, v_n$ are the residuals, $a_{ij}$ the coefficients, and $f_1, f_2, \ldots, f_r$ the constant terms. In matrix form,

$\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & & & \vdots \\ a_{r1} & a_{r2} & \cdots & a_{rn} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_r \end{bmatrix}$   (4-10)

which corresponds directly to the compact form of Eq. (4-8). When the conditions are originally linear, the vector f is usually written in terms of the given observations as

$f = d - Al$   (4-11)
Solution
Since n = 6, and it would take a minimum of 3 measured differences in elevation to determine the elevations of points B, C, and D (i.e., $n_0 = 3$), the redundancy is r = 6 - 3 = 3. There must then be three condition equations that can be written between the observations. In order to write
these, we refer to Fig. 3-7 and realize that if we go around any loop, starting
and closing on the same point, the adjusted differences in elevation in the
loop must add up to zero. Thus,
Loop B-A-D-B
Loop D-B-C-D
Loop D-A-C-D
$(l_1 + v_1) - (l_3 + v_3) + (l_2 + v_2) = 0$
$(l_2 + v_2) + (l_4 + v_4) - (l_5 + v_5) = 0$
$(l_3 + v_3) + (l_6 + v_6) - (l_5 + v_5) = 0$

Rearranging, we get

$v_1 + v_2 - v_3 = -(l_1 + l_2 - l_3)$
$v_2 + v_4 - v_5 = -(l_2 + l_4 - l_5)$
$v_3 - v_5 + v_6 = -(l_3 - l_5 + l_6)$

or, in the matrix form Av = f,

$\begin{bmatrix} 1 & 1 & -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 1 \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ v_4 \\ v_5 \\ v_6 \end{bmatrix} = \begin{bmatrix} -0.019 \\ -0.089 \\ -0.024 \end{bmatrix}$

Note that the elements of f are the so-called closing errors of the individual loops of the level network.
and therefore

$w_i = \sigma_0^2 / \sigma_i^2$   (4-14)

For m uncorrelated observations, the variance matrix is diagonal,

$\Sigma = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_m^2 \end{bmatrix}$   (4-15)

and the corresponding weight matrix is likewise diagonal,

$W = \begin{bmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_m \end{bmatrix}$   (4-17)

which in view of Eq. (4-16) becomes

$W = \begin{bmatrix} \sigma_0^2/\sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_0^2/\sigma_2^2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_0^2/\sigma_m^2 \end{bmatrix} = \sigma_0^2 \begin{bmatrix} 1/\sigma_1^2 & 0 & \cdots & 0 \\ 0 & 1/\sigma_2^2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1/\sigma_m^2 \end{bmatrix}$   (4-18)

When the reference variance $\sigma_0^2$ is factored out in Eq. (4-18), the remaining matrix is the inverse of the variance matrix $\Sigma$. (The inverse of a diagonal matrix is another diagonal matrix, each element of which is the reciprocal of the corresponding element in the original matrix; see Appendix A.) Hence, Eq. (4-18) becomes

$W = \sigma_0^2 \Sigma^{-1}$   (4-19)
(4-19)
For uncorrelated observations of equal precision, the least squares criterion calls for minimization of

$\phi = v_1^2 + v_2^2 + \cdots + v_n^2 = \sum_{i=1}^{n} v_i^2$   (4-20)

as has already been given in Chapter 3 [see Section 3.3; Eq. (4-20) is identical to Eq. (3-8)]. For uncorrelated observations of unequal precision, the criterion calls for minimization of the weighted function

$\phi = \sum_{i=1}^{n} w_i v_i^2$   (4-21)

in which $w_1, w_2, \ldots, w_n$ are the weights of the corresponding observations $l_1, l_2, \ldots, l_n$, respectively.
In order to facilitate understanding of the matrix derivation to follow, we shall
recall Example 4-2 on the level net and proceed to work it out gradually in
matrix form. We will assume here that the six measured differences in
elevations are still uncorrelated, but now have unequal precision, i.e., different
weight. The six weights are designated by $w_1, w_2, \ldots, w_6$. Thus, the function to be minimized for this example is
$\phi = w_1(f_1 - \Delta_1)^2 + w_2(f_2 + \Delta_1 - \Delta_3)^2 + w_3(f_3 - \Delta_3)^2 + w_4(f_4 - \Delta_1 + \Delta_2)^2 + w_5(f_5 + \Delta_2 - \Delta_3)^2 + w_6(f_6 + \Delta_2)^2$   (4-22)

The partial derivatives with respect to the three parameters, each set equal to zero, are

$\partial\phi/\partial\Delta_1 = -2w_1(f_1 - \Delta_1) + 2w_2(f_2 + \Delta_1 - \Delta_3) - 2w_4(f_4 - \Delta_1 + \Delta_2) = 0$
$\partial\phi/\partial\Delta_2 = 2w_4(f_4 - \Delta_1 + \Delta_2) + 2w_5(f_5 + \Delta_2 - \Delta_3) + 2w_6(f_6 + \Delta_2) = 0$
$\partial\phi/\partial\Delta_3 = -2w_2(f_2 + \Delta_1 - \Delta_3) - 2w_3(f_3 - \Delta_3) - 2w_5(f_5 + \Delta_2 - \Delta_3) = 0$
Clearing and collecting terms in $\Delta$ on the left-hand side and terms in f on the right-hand side leads to

$(w_1 + w_2 + w_4)\Delta_1 - w_4\Delta_2 - w_2\Delta_3 = w_1 f_1 - w_2 f_2 + w_4 f_4$
$-w_4\Delta_1 + (w_4 + w_5 + w_6)\Delta_2 - w_5\Delta_3 = -w_4 f_4 - w_5 f_5 - w_6 f_6$   (4-23)
$-w_2\Delta_1 - w_5\Delta_2 + (w_2 + w_3 + w_5)\Delta_3 = w_2 f_2 + w_3 f_3 + w_5 f_5$
This is the set of normal equations, which are equal in number to the unknown
parameters (in this case u = 3). The normal equations make a unique solution
possible because the unknown parameters are the only unknowns in them. It
should be noted as well that the condition equations are six in number but contain nine unknowns (three parameters and six residuals), and that by adding the three normal equations to them we get a total of nine independent equations in nine unknowns, hence a unique solution.
Now recall the B, $\Delta$, and f matrices from Example 4-2, namely,

$B = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & 0 \end{bmatrix}$,  $\Delta = \begin{bmatrix} \Delta_1 \\ \Delta_2 \\ \Delta_3 \end{bmatrix}$,  $f = \begin{bmatrix} f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \\ f_6 \end{bmatrix}$

and collect the six weights into a diagonal weight matrix W, i.e.,

$W = \begin{bmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_6 \end{bmatrix}$

We find that we are able to construct the normal equations, Eq. (4-23), from the relationship

$(B^t W B)\,\Delta = B^t W f$   (4-24)

Multiplying the matrices out gives

$\begin{bmatrix} (w_1 + w_2 + w_4) & -w_4 & -w_2 \\ -w_4 & (w_4 + w_5 + w_6) & -w_5 \\ -w_2 & -w_5 & (w_2 + w_3 + w_5) \end{bmatrix} \begin{bmatrix} \Delta_1 \\ \Delta_2 \\ \Delta_3 \end{bmatrix} = \begin{bmatrix} w_1 f_1 - w_2 f_2 + w_4 f_4 \\ -w_4 f_4 - w_5 f_5 - w_6 f_6 \\ w_2 f_2 + w_3 f_3 + w_5 f_5 \end{bmatrix}$   (4-25)-(4-26)

which is written compactly as

$N\Delta = t$   (4-27)

in which

$N = B^t W B$   (4-28)

and

$t = B^t W f$   (4-29)

N is the coefficient matrix of the normal equations. It is square and symmetric, as illustrated in Eq. (4-25). t is the vector of constant terms.
Equations (4-24), (4-27), (4-28), and (4-29) are all quite general, although we have arrived at them by way of an example. They apply to any problem for which the condition equations are of the form in Eq. (4-7). This is shown to be true by the general derivation which now follows. As already stated, the least squares criterion for uncorrelated observations with unequal precision is imposed by minimizing the function in Eq. (4-21). This function can be written in matrix form as

$\phi = [v_1, v_2, \ldots, v_n] \begin{bmatrix} w_1 & 0 & \cdots & 0 \\ 0 & w_2 & & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & w_n \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$   (4-30)

or

$\phi = v^t W v$   (4-31)

Recalling the condition equations (4-7) and rearranging,

$v = f - B\Delta$   (4-32)

and substituting into Eq. (4-31),

$\phi = (f - B\Delta)^t W (f - B\Delta) = (f^t - \Delta^t B^t) W (f - B\Delta) = f^t W f - f^t W B \Delta - \Delta^t B^t W f + \Delta^t B^t W B \Delta$   (4-33)

Since $\phi$ is a scalar, each one of the four terms on the right-hand side of Eq. (4-33) is also a scalar. Furthermore, since the transpose of a scalar is equal to itself, the second and third terms on the right-hand side of Eq. (4-33) are equal, i.e.,

$\Delta^t B^t W f = (\Delta^t B^t W f)^t = f^t W B \Delta$   (4-34)

In Eq. (4-34), $W^t$ is replaced by W because the weight matrix is always symmetric and therefore equal to its transpose. Combining the second and third terms on the right-hand side of Eq. (4-33) yields

$\phi = f^t W f - 2 f^t W B \Delta + \Delta^t B^t W B \Delta$   (4-35)

In Eq. (4-35) all matrices and vectors are numerical constants, except the vector of unknowns $\Delta$. Thus, in order for $\phi$ to be a minimum, its partial derivative with respect to $\Delta$ must be equated to zero. Since the first term on the right-hand side of Eq. (4-35) does not contain $\Delta$, its partial derivative is automatically zero. The partial derivatives of the second and third terms are obtained by applying the rules of differentiation for bilinear and quadratic forms, respectively, as given in Appendix A. Thus, differentiating $\phi$ with respect to $\Delta$ and equating the result to zero, we have

$\frac{\partial\phi}{\partial\Delta} = -2 f^t W B + 2 \Delta^t (B^t W B) = 0$   (4-36)

Transposing and rearranging gives

$(B^t W B)\,\Delta = B^t W f$   (4-37)

which is identical to Eq. (4-24). Its concise form is given by Eq. (4-27), in which N is a square symmetric matrix of order u and t is a u × 1 vector. The solution of Eq. (4-27) is

$\Delta = N^{-1} t$   (4-38)

which yields the vector of parameters. This solution requires the inversion of the u × u normal equations coefficient matrix, N.
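The general pipeline N = BᵗWB, t = BᵗWf, Δ = N⁻¹t can be sketched with plain nested lists; a minimal sketch (the helper names `matmul`, `solve`, and `least_squares` are illustrative, not from the text), demonstrated on the Example 4-1 data where f = -l:

```python
def matmul(A, B):
    # Product of two matrices stored as lists of rows.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for the square system A x = b."""
    n = len(A)
    aug = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, n):
            fac = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= fac * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def least_squares(B, W, f):
    Bt = transpose(B)
    N = matmul(matmul(Bt, W), B)                    # Eq. (4-28)
    t = matmul(matmul(Bt, W), [[fi] for fi in f])   # Eq. (4-29)
    return solve(N, [row[0] for row in t])          # Eq. (4-38)

# Example 4-1 data (equal weights, W = I, f = -l).
B = [[-2.0, -1.0], [-4.0, -1.0], [-6.0, -1.0]]
W = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f = [-3.2, -4.0, -5.0]
a, b = least_squares(B, W, f)
```

The result reproduces the line-fit solution a = 0.45, b ≈ 2.267 obtained by hand in Example 3-5.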
EXAMPLE 4-5
Recall the four measurements of distance given in Example 3-2: $l_1 = 32.51$ m, $l_2 = 32.48$ m, $l_3 = 32.52$ m, and $l_4 = 32.53$ m. Compute the least squares estimate of the distance: (1) if all measurements are uncorrelated and of equal precision; and (2) if they are uncorrelated but have the following weights: $w_1 = 1$, $w_2 = 2$, $w_3 = 1$, and $w_4 = 0.5$.

Solution
The number of observations is n = 4, and with the minimum required to fix the model $n_0 = 1$, the redundancy is r = 4 - 1 = 3. Carrying the final estimate as a parameter, i.e., u = 1, there are four condition equations, one for each observation. Letting $\hat{x}$ represent the estimate of the distance, the condition equations are

$l_1 + v_1 = \hat{x}$  or  $v_1 = \hat{x} - l_1 = \hat{x} - 32.51$
$l_2 + v_2 = \hat{x}$  or  $v_2 = \hat{x} - l_2 = \hat{x} - 32.48$
$l_3 + v_3 = \hat{x}$  or  $v_3 = \hat{x} - l_3 = \hat{x} - 32.52$
$l_4 + v_4 = \hat{x}$  or  $v_4 = \hat{x} - l_4 = \hat{x} - 32.53$

In the form $v + B\Delta = f$ of Eq. (4-7), with $\Delta = [\hat{x}]$,

$\underset{4,1}{B} = \begin{bmatrix} -1 \\ -1 \\ -1 \\ -1 \end{bmatrix}$,  $\underset{4,1}{f} = \begin{bmatrix} -32.51 \\ -32.48 \\ -32.52 \\ -32.53 \end{bmatrix}$
For case (2) the weight matrix is

$W = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0.5 \end{bmatrix}$

and

$N = B^t W B = 1 + 2 + 1 + 0.5 = 4.5 = \sum_{i=1}^{4} w_i$

Next,

$t = B^t W f = (1)(32.51) + (2)(32.48) + (1)(32.52) + (0.5)(32.53) = 146.255$

which is equal to the sum of the products of each observation and its weight, or $\sum_{i=1}^{4} w_i l_i$. Finally, the parameter is obtained by applying Eq. (4-38), i.e.,

$\hat{x} = N^{-1} t = \frac{146.255}{4.5} = 32.501\ \text{m}$

which is the weighted mean of the observations:

$\hat{x} = \frac{\sum_{i=1}^{n} w_i l_i}{\sum_{i=1}^{n} w_i}$   (4-39)
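Eq. (4-39) is a one-liner; a minimal pure-Python sketch using the Example 4-5 data:

```python
# Measurements and weights from Example 4-5.
obs = [32.51, 32.48, 32.52, 32.53]
w = [1.0, 2.0, 1.0, 0.5]

# Eq. (4-39): the weighted mean.
t = sum(wi * li for wi, li in zip(w, obs))   # B^t W f = sum w_i * l_i
N = sum(w)                                   # B^t W B = sum w_i
x_hat = t / N
```

The scalar normal equation collapses to t/N because there is only one parameter; with all weights equal, Eq. (4-39) reduces to the simple arithmetic mean.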
EXAMPLE 4-6
Consider the data for Examples 3-5 and 4-1, in which a straight line with the equation y = ax + b is fitted to three points: $x_1 = 2$, $y_1 = 3.2$; $x_2 = 4$, $y_2 = 4.0$; $x_3 = 6$, $y_3 = 5.0$. All coordinates are in centimeters. The y-coordinates are the observations, which are assumed to be (1) uncorrelated and of equal precision, and (2) uncorrelated with variances 0.10 cm², 0.08 cm², and 0.08 cm², respectively. The condition equations are as in Example 4-1:

$v_1 + y_1 - ax_1 - b = 0$  or  $v_1 = 2a + b - 3.2$
$v_2 + y_2 - ax_2 - b = 0$  or  $v_2 = 4a + b - 4.0$
$v_3 + y_3 - ax_3 - b = 0$  or  $v_3 = 6a + b - 5.0$

(1) The first case is when the observations are uncorrelated and of equal precision, for which W = I. The normal equations coefficient matrix is in this case [see Eq. (4-28)]

$N = B^t W B = B^t B = \begin{bmatrix} 56 & 12 \\ 12 & 3 \end{bmatrix}$

and the vector of constants is

$t = B^t f = \begin{bmatrix} 52.4 \\ 12.2 \end{bmatrix}$

The inverse of N is

$N^{-1} = \frac{1}{24}\begin{bmatrix} 3 & -12 \\ -12 & 56 \end{bmatrix}$

so that

$\Delta = \begin{bmatrix} a \\ b \end{bmatrix} = N^{-1} t = \frac{1}{24}\begin{bmatrix} 3 & -12 \\ -12 & 56 \end{bmatrix}\begin{bmatrix} 52.4 \\ 12.2 \end{bmatrix} = \begin{bmatrix} 0.45 \\ 2.267 \end{bmatrix}$

which are identical (except for rounding) to the answers obtained for the same case in Example 3-5.
(2) For the second case, we need to construct the weight matrix from the given variances. Since the measurements are uncorrelated, the weight matrix W may be calculated according to Eq. (4-18). Thus, we first compute the inverse of the variance matrix:

$\Sigma^{-1} = \begin{bmatrix} 1/0.10 & 0 & 0 \\ 0 & 1/0.08 & 0 \\ 0 & 0 & 1/0.08 \end{bmatrix} = \begin{bmatrix} 10 & 0 & 0 \\ 0 & 12.5 & 0 \\ 0 & 0 & 12.5 \end{bmatrix}$

Next, we select a value for the reference variance $\sigma_0^2$, usually to make the numerical values of the weights smaller and more convenient. Thus, if $\sigma_0^2 = 0.10$, the weight matrix becomes [Eq. (4-19)]

$W = (0.10)\begin{bmatrix} 10 & 0 & 0 \\ 0 & 12.5 & 0 \\ 0 & 0 & 12.5 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1.25 & 0 \\ 0 & 0 & 1.25 \end{bmatrix}$
Then

$N = B^t W B = \begin{bmatrix} 69 & 14.5 \\ 14.5 & 3.5 \end{bmatrix}$

and

$t = B^t W f = \begin{bmatrix} 63.9 \\ 14.45 \end{bmatrix}$

The inverse of N is

$N^{-1} = \frac{1}{31.25}\begin{bmatrix} 3.5 & -14.5 \\ -14.5 & 69 \end{bmatrix}$

and, finally,

$\Delta = \begin{bmatrix} a \\ b \end{bmatrix} = N^{-1} t = \frac{1}{31.25}\begin{bmatrix} 3.5 & -14.5 \\ -14.5 & 69 \end{bmatrix}\begin{bmatrix} 63.9 \\ 14.45 \end{bmatrix} = \begin{bmatrix} 0.452 \\ 2.256 \end{bmatrix}$

The residuals follow from $v = f - B\Delta$, i.e., $v_i = ax_i + b - y_i$. For case (1), with a = 0.45 and b = 2.267,

$v = \begin{bmatrix} -0.033 \\ 0.067 \\ -0.033 \end{bmatrix}\ \text{cm}$

and for case (2), with a = 0.452 and b = 2.256,

$v = \begin{bmatrix} -0.040 \\ 0.064 \\ -0.032 \end{bmatrix}\ \text{cm}$
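The weighted case of Example 4-6 is small enough to check by hand or in a few lines of pure Python; a minimal sketch forming the weighted normal equations directly as sums:

```python
# Example 4-6, case (2): weighted fit of y = a*x + b.
xs = [2.0, 4.0, 6.0]
ys = [3.2, 4.0, 5.0]
w = [1.0, 1.25, 1.25]   # weights from sigma0^2 = 0.10 and the given variances

# Weighted normal equations N * [a, b]^t = t:
n11 = sum(wi * x * x for wi, x in zip(w, xs))
n12 = sum(wi * x for wi, x in zip(w, xs))
n22 = sum(w)
t1 = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
t2 = sum(wi * y for wi, y in zip(w, ys))

# Solve the 2x2 system by Cramer's rule.
det = n11 * n22 - n12 * n12
a = (n22 * t1 - n12 * t2) / det
b = (n11 * t2 - n12 * t1) / det

# Residuals v_i = a*x_i + b - y_i.
v = [a * x + b - y for x, y in zip(xs, ys)]
```

The computed N entries (69, 14.5, 3.5), t entries (63.9, 14.45), and solution (a = 0.452, b = 2.256) match the worked example.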
EXAMPLE 4-7
It is required to compute the least squares estimates of the elevations of points B, C, and D of the level net of Example 3-7 by the method of adjustment of indirect observations for two cases: (1) when the observed differences in elevation are uncorrelated and have equal precision, and (2) when the weight of each observation is inversely proportional to the leveled distance (see Fig. 3-7).
Solution
Recall from Example 4-2 the matrices for the condition equations $v + B\Delta = f$:

$B = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & -1 & 1 \\ 0 & -1 & 0 \end{bmatrix}$  and  $f = \begin{bmatrix} 269.157 \\ -10.940 \\ 258.198 \\ -21.040 \\ -31.891 \\ -290.113 \end{bmatrix}$

(1) For the first case, the weight matrix W of the observations is the identity matrix because the observations are uncorrelated and have equal precision. Consequently, the N and t matrices are [see Eqs. (4-28) and (4-29)]

$N = B^t B = \begin{bmatrix} 3 & -1 & -1 \\ -1 & 3 & -1 \\ -1 & -1 & 3 \end{bmatrix}$
and

$t = B^t f = \begin{bmatrix} 269.157 + 10.940 - 21.040 \\ 21.040 + 31.891 + 290.113 \\ -10.940 + 258.198 - 31.891 \end{bmatrix} = \begin{bmatrix} 259.057 \\ 343.044 \\ 215.367 \end{bmatrix}$

The inverse of N is

$N^{-1} = \frac{1}{4}\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{bmatrix}$

and

$\Delta = N^{-1} t = \begin{bmatrix} 269.131 \\ 290.128 \\ 258.209 \end{bmatrix}\ \text{m}$

which are the elevations of points B, C, and D when the observed differences in elevation are assumed equal in weight.
(2) The weight of each difference in elevation is proportional to the reciprocal of the leveled distance between the two points. These distances are given below.

    OBSERVATION       l1     l2     l3     l4     l5     l6
    DISTANCE (km)     20     12     15     28     20     26
    RECIPROCAL        0.050  0.083  0.067  0.036  0.050  0.039
    WEIGHT            1.400  2.333  1.867  1.000  1.400  1.077

The weights are scaled such that the smallest reciprocal (0.036, for $l_4$) is given a weight value of 1.0 and the rest are proportionately computed. Thus the weight matrix is

$W = \text{diag}(1.400,\ 2.333,\ 1.867,\ 1.000,\ 1.400,\ 1.077)$
Consequently,

$N = B^t W B = \begin{bmatrix} 4.733 & -1.000 & -2.333 \\ -1.000 & 3.477 & -1.400 \\ -2.333 & -1.400 & 5.600 \end{bmatrix}$

and

$t = B^t W f = \begin{bmatrix} 381.3028 \\ 378.1391 \\ 411.8852 \end{bmatrix}$

The inverse of N is

$N^{-1} = \begin{bmatrix} 0.337902 & 0.171085 & 0.183543 \\ 0.171085 & 0.406418 & 0.172880 \\ 0.183543 & 0.172880 & 0.298256 \end{bmatrix}$

and, finally,

$\Delta = N^{-1} t = \begin{bmatrix} 269.135 \\ 290.124 \\ 258.205 \end{bmatrix}\ \text{m}$

which are the weighted least squares estimates of the elevations of B, C, and D, respectively.
EXAMPLE 4-8
A noon sight is made on the sun with a theodolite. The following data are obtained:

    TIME            OBSERVED ALTITUDE
    11h 40m 45.5s   51°51'09"
    11h 46m 18.5s   51°56'42"
    11h 50m 03.0s   52°00'30"
    11h 56m 53.0s   52°02'24"
    12h 05m 34.5s   52°01'06"
    12h 12m 12.0s   51°55'45"
    12h 16m 04.5s   51°51'30"

For the small time range involved, the altitude is assumed to be a parabolic function of time. Only the altitude is considered to be the observed variable; the time is considered to be errorless. All observations are uncorrelated and equal in precision.

Apply the method of least squares to fit a parabola to the given data and to estimate the maximum altitude of the sun and the time at which maximum altitude occurs.

Solution
Let T represent the time and h the altitude in a rectangular coordinate system, the origin of which is at $T_0$ = 12h 00m 00s and $h_0$ = 51°50'00". Thus, the data points are:

$T_1 = -19\text{m } 14.5\text{s} = -1154.5\ \text{s}, \quad h_1 = 69''$
$T_2 = -13\text{m } 41.5\text{s} = -821.5\ \text{s}, \quad h_2 = 402''$
$T_3 = -9\text{m } 57.0\text{s} = -597.0\ \text{s}, \quad h_3 = 630''$
$T_4 = -3\text{m } 07.0\text{s} = -187.0\ \text{s}, \quad h_4 = 744''$
$T_5 = 5\text{m } 34.5\text{s} = 334.5\ \text{s}, \quad h_5 = 666''$
$T_6 = 12\text{m } 12.0\text{s} = 732.0\ \text{s}, \quad h_6 = 345''$
$T_7 = 16\text{m } 04.5\text{s} = 964.5\ \text{s}, \quad h_7 = 90''$
If the fitted parabola is of the form $aT^2 + bT + c = h$, then the condition equations are

$v_i + a T_i^2 + b T_i + c = h_i, \quad i = 1, 2, \ldots, 7$

or, in the form $v + B\Delta = f$,

$B = \begin{bmatrix} 1{,}332{,}870 & -1154.5 & 1 \\ 674{,}862 & -821.5 & 1 \\ 356{,}409 & -597.0 & 1 \\ 34{,}969 & -187.0 & 1 \\ 111{,}890 & 334.5 & 1 \\ 535{,}824 & 732.0 & 1 \\ 930{,}260 & 964.5 & 1 \end{bmatrix}$,  $\Delta = \begin{bmatrix} a \\ b \\ c \end{bmatrix}$,  $f = \begin{bmatrix} 69 \\ 402 \\ 630 \\ 744 \\ 666 \\ 345 \\ 90 \end{bmatrix}$
The weight matrix is W = I, since all observations are uncorrelated and equal in precision. Hence,

$N = B^t W B = B^t B = \begin{bmatrix} 3.5252 \times 10^{12} & -9.8563 \times 10^{8} & 3.9771 \times 10^{6} \\ -9.8563 \times 10^{8} & 3.9771 \times 10^{6} & -729 \\ 3.9771 \times 10^{6} & -729 & 7 \end{bmatrix}$

and

$t = B^t W f = B^t f = \begin{bmatrix} 9.570 \times 10^{8} \\ -3.630 \times 10^{5} \\ 2.946 \times 10^{3} \end{bmatrix}$

The inverse of N is

$N^{-1} = \begin{bmatrix} 8.4611 \times 10^{-13} & 1.2394 \times 10^{-10} & -4.6782 \times 10^{-7} \\ 1.2394 \times 10^{-10} & 2.7449 \times 10^{-7} & -4.1830 \times 10^{-5} \\ -4.6782 \times 10^{-7} & -4.1830 \times 10^{-5} & 4.0429 \times 10^{-1} \end{bmatrix}$

so that

$\Delta = N^{-1} t$,  i.e.,  $a = -0.00061347$, $b = -0.10426$, $c = 758.5$

How well this equation fits the observed data is reflected in the vector of residuals (values in seconds of arc, rounded):

$v = f - B\Delta = \begin{bmatrix} 8 \\ -28 \\ 28 \\ -13 \\ 11 \\ -8 \\ 3 \end{bmatrix}$

Now, the maximum altitude occurs when dh/dT = 0. Let the value of T at maximum altitude be $T_{max}$; then

$T_{max} = -\frac{b}{2a} = -\frac{-0.10426}{2(-0.00061347)} = -85.0\ \text{s}$

Hence, the time at which maximum altitude occurs is 85.0 s before 12h 00m 00s, i.e., 11h 58m 35.0s, and

$h_{max} = a\,T_{max}^2 + b\,T_{max} + c \approx 763'' = 12'43''$

above the reference altitude of 51°50'00", i.e., a maximum altitude of approximately 52°02'43".
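The parabola fit of Example 4-8 can be reproduced numerically; a minimal pure-Python sketch (the `gauss_solve` helper is an illustrative name, not from the text) that forms the normal equations from power sums and then locates the vertex:

```python
# Time offsets T (s from 12h00m00s) and altitude offsets h (arc-seconds
# above 51 deg 50' 00") from Example 4-8.
T = [-1154.5, -821.5, -597.0, -187.0, 334.5, 732.0, 964.5]
h = [69.0, 402.0, 630.0, 744.0, 666.0, 345.0, 90.0]

# Normal equations for h = a*T^2 + b*T + c with equal weights (W = I).
s4 = sum(t ** 4 for t in T)
s3 = sum(t ** 3 for t in T)
s2 = sum(t ** 2 for t in T)
s1 = sum(T)
n = len(T)
r1 = sum(t * t * hi for t, hi in zip(T, h))
r2 = sum(t * hi for t, hi in zip(T, h))
r3 = sum(h)

def gauss_solve(aug):
    """Gaussian elimination with partial pivoting on [A | b]."""
    m = len(aug)
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(col + 1, m):
            fac = aug[r][col] / aug[col][col]
            for c in range(col, m + 1):
                aug[r][c] -= fac * aug[col][c]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(aug[r][c] * x[c] for c in range(r + 1, m))
        x[r] = (aug[r][m] - s) / aug[r][r]
    return x

a, b, c = gauss_solve([[s4, s3, s2, r1],
                       [s3, s2, s1, r2],
                       [s2, s1, n, r3]])

# Vertex of the parabola: time and altitude of maximum.
T_max = -b / (2 * a)
h_max = a * T_max ** 2 + b * T_max + c
```

The fitted coefficients agree with the worked example (a negative, vertex about 85 s before local noon).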
EXAMPLE 4-9
In Fig. 4-1, A and B are horizontal control points, spaced 100.000 m
(assumed errorless) apart. A third point, C, is to be located along a line
normal to AB, as shown. Two measurements are made:
distance AC:  $l_1 = 131.200$ m
angle at A:   $l_2 = 40°20'00''$
l1 v1 x 2 100 2 0
And
l 2 v 2 arctan
x
0
100
And
y 2 l 2 v 2 arctan
x
0
100
According to Eq. (2-13), the linearized forms of the functions y1 and y2 are

y1 ≈ y1⁰ + (∂y1/∂x)|x₀ Δx = [ l1 + v1 - sqrt(x0² + 100²) ] - [ x0/sqrt(x0² + 100²) ] Δx

and

y2 ≈ y2⁰ + (∂y2/∂x)|x₀ Δx = [ l2 + v2 - arctan(x0/100) ] - [ 100/(x0² + 100²) ] Δx

Fig. 4-1
Rearranging these equations and writing them in the form v + BΔ = f as in Eq. (4-7), we have

| v1 |     | x0/sqrt(x0² + 100²) |        | sqrt(x0² + 100²) - l1 |
|    |  -  |                     | Δx  =  |                       |
| v2 |     | 100/(x0² + 100²)    |        | arctan(x0/100) - l2   |

so that B = -( x0/sqrt(x0² + 100²),  100/(x0² + 100²) )ᵗ.
For the initial approximation, take x0 = 85.000 m, so that sqrt(x0² + 100²) = 131.244 m. Then

B1 = -| 0.64765  |          | 0.04405  |
      | 0.005806 |,   f1 =  | 0.000545 |

The standard deviations of the two observations are 0.005 m for the distance and 0.00010 rad for the angle. Taking the weight of the distance observation as unity, the weight matrix is

W = | (0.005)²/(0.005)²           0          |     | 1     0  |
    |         0         (0.005)²/(0.00010)²  |  =  | 0   2500 |
The apparent disparity in the two weights arises not so much from any
imbalance in precision as from the different kinds of units used for expressing
distances (meters) and angles (radians). Thus,
N1 = B1ᵗWB1 = [0.503722]

and

t1 = B1ᵗWf1 = [-0.036440]

and so

Δ1 = [Δx1] = N1⁻¹t1 = [-0.036440/0.503722] = [-0.072 m]
Now, since during linearization we neglected all second and higher order terms, the solution must be iterated. The improved approximation is x0 = 85.000 - 0.072 = 84.928 m, for which sqrt(x0² + 100²) = 131.197 m and

B2 = -| 0.64733  |          | -0.00257 |
      | 0.005810 |,   f2 =  | 0.000127 |

Thus

N2 = B2ᵗWB2 = [0.503722]

and

t2 = B2ᵗWf2 = [-0.000181]

and so

Δ2 = [Δx2] = N2⁻¹t2 = [-0.000181/0.503722] = [-0.00036 m]

This second correction is negligible, and the least squares estimate is x = 85.000 - 0.072 - 0.0004 = 84.928 m (to the nearest millimeter).
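Because the condition equations are nonlinear in x, the two evaluations above are successive Gauss-Newton iterations. A compact sketch of the same computation (variable names are ours; weights 1 and 2500 as derived in the text):

```python
import math

# Iterative linearized adjustment for the distance/angle intersection:
# l1 = sqrt(x^2 + 100^2) and l2 = arctan(x/100), single unknown x.
l1 = 131.200                              # observed distance AC (m)
l2 = math.radians(40 + 20 / 60.0)         # observed angle 40 deg 20' (rad)
w1, w2 = 1.0, 2500.0                      # weights, ratio (0.005/0.00010)^2

x0 = 85.0                                 # initial approximation (m)
for _ in range(3):
    s0 = math.hypot(x0, 100.0)            # sqrt(x0^2 + 100^2)
    b1 = x0 / s0                          # d(distance)/dx at x0
    b2 = 100.0 / (x0 * x0 + 100.0 * 100.0)  # d(angle)/dx at x0
    f1 = l1 - s0                          # distance misclosure
    f2 = l2 - math.atan2(x0, 100.0)       # angle misclosure
    N = w1 * b1 * b1 + w2 * b2 * b2       # normal equation coefficient
    t = w1 * b1 * f1 + w2 * b2 * f2       # right-hand side
    x0 += t / N                           # Gauss-Newton correction

x = x0
```

Two iterations already agree with the text's result of 84.928 m to the millimeter.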
In general, the condition equations for the method of adjustment of observations only are written as

 A    v  =  f
(r,n)(n,1) (r,1)                                                     (4-40)

As an example, consider a set of three conditions relating six residuals:

                | v1 |
| 1 1 1 0 0 0 | | v2 |   | f1 |
| 0 1 0 1 1 0 | | v3 | = | f2 |                                      (4-41)
| 0 0 1 0 1 1 | | v4 |   | f3 |
                | v5 |
                | v6 |
Assuming the six measurements are uncorrelated and have unequal weights, the function

φ = w1v1² + w2v2² + w3v3² + w4v4² + w5v5² + w6v6²

must be minimized. It can be seen that it is not possible to express each of the six residuals v1, v2, ..., v6 separately for substitution in φ, as was possible in the procedure of adjustment of indirect observations. Therefore, another approach is needed to make sure that φ is a minimum and, at the same time, that the condition equations (4-41) are satisfied. This is accomplished by rewriting each condition equation in normal form (zero on the right-hand side), multiplying the left-hand sides by unspecified factors -2ki, and adding the products to φ. The quantities ki are called Lagrange multipliers; there are as many multipliers as there are conditions.
The new quantity to be minimized is then

φ' = w1v1² + w2v2² + w3v3² + w4v4² + w5v5² + w6v6² - 2k1(v1 + v2 + v3 - f1)
     - 2k2(v2 + v4 + v5 - f2) - 2k3(v3 + v5 + v6 - f3)               (4-43)

The factor -2 preceding each Lagrange multiplier is introduced for convenience only, to avoid unnecessary fractions and negative signs after differentiation and separation of the residuals. In Eq. (4-43) the unknowns are the residuals v1, v2, ..., v6 and the Lagrange multipliers k1, k2, and k3. Therefore, in order for φ' to be a minimum, its partial derivative with respect to each of these unknown variables must be zero.
Considering the residuals first,

∂φ'/∂v1 = 2w1v1 - 2k1 = 0         or   v1 = k1/w1
∂φ'/∂v2 = 2w2v2 - 2(k1 + k2) = 0  or   v2 = (k1 + k2)/w2
∂φ'/∂v3 = 2w3v3 - 2(k1 + k3) = 0  or   v3 = (k1 + k3)/w3
∂φ'/∂v4 = 2w4v4 - 2k2 = 0         or   v4 = k2/w4                    (4-44)
∂φ'/∂v5 = 2w5v5 - 2(k2 + k3) = 0  or   v5 = (k2 + k3)/w5
∂φ'/∂v6 = 2w6v6 - 2k3 = 0         or   v6 = k3/w6
Next, differentiating φ' with respect to k1, k2, and k3 and equating to zero gives

∂φ'/∂k1 = -2(v1 + v2 + v3 - f1) = 0   or   v1 + v2 + v3 = f1
∂φ'/∂k2 = -2(v2 + v4 + v5 - f2) = 0   or   v2 + v4 + v5 = f2
∂φ'/∂k3 = -2(v3 + v5 + v6 - f3) = 0   or   v3 + v5 + v6 = f3

which are identical to the condition equations (4-41). Consequently, when φ' is differentiated with respect to the Lagrange multipliers, the original condition equations result; this demonstrates that the introduction of Lagrange multipliers ensures that the conditions will be satisfied when φ' is minimized.
Equation (4-44) may be written in matrix notation as

| v1 |   | 1/w1    0     0     0     0     0   | | 1  0  0 |
| v2 |   |  0    1/w2    0     0     0     0   | | 1  1  0 | | k1 |
| v3 | = |  0      0   1/w3    0     0     0   | | 1  0  1 | | k2 |   (4-45)
| v4 |   |  0      0     0   1/w4    0     0   | | 0  1  0 | | k3 |
| v5 |   |  0      0     0     0   1/w5    0   | | 0  1  1 |
| v6 |   |  0      0     0     0     0   1/w6  | | 0  0  1 |

The first matrix on the right-hand side of Eq. (4-45) is a diagonal matrix representing the inverse of the weight matrix W of the observations. Such a matrix is called a cofactor matrix, and it is given the symbol Q; thus

Q = W⁻¹                                                              (4-46)

The second matrix in Eq. (4-45) is the transpose of the coefficient matrix A of Eq. (4-41), so that in general the residuals are

v = QAᵗk                                                             (4-48)
Substituting Eq. (4-48) into the condition equations Av = f gives

(AQAᵗ)k = f                                                          (4-49)

The quantity (AQAᵗ) represents a square coefficient matrix for what may be termed the normal equations for this technique of adjustment. It is referred to by the symbol Qc, i.e.,

Qc = AQAᵗ                                                            (4-50)

and the solution of Eq. (4-49) for the Lagrange multipliers is

k = Wc f                                                             (4-52)

where the inverse of Qc is denoted by Wc. (There is a reason for using the symbols Qc and Wc, since the respective matrices do represent cofactor and weight matrices, but explanation of this here would be premature and is deferred until Chapter 9.) Thus

Wc = Qc⁻¹ = (AQAᵗ)⁻¹                                                 (4-53)
Equations (4-48) to (4-53) are all general and apply for correlated as well as
uncorrelated observations. They can be derived without reference to an
example, as will now be demonstrated.
In matrix notation, the least squares criterion calls for minimization of the function given by Eq. (4-31),

φ = vᵗWv

in which v is constrained by the condition equations (4-40). With k as the vector of Lagrange multipliers, the function

φ' = vᵗWv - 2kᵗ(Av - f)                                              (4-54)

is minimized by setting its partial derivatives with respect to v and k equal to zero. Differentiation with respect to v leads to v = QAᵗk [Eq. (4-48)], and differentiation with respect to k returns the condition equations Av = f. Substitution of the first result into the second gives (AQAᵗ)k = f, whose coefficient matrix Qc = AQAᵗ is inverted and multiplied by the vector f to give k [Eq. (4-52)]. With the value of k, the residuals are calculated from Eq. (4-48) and then added to the observations l to yield the adjusted observations. Any required functions of the adjusted observations may then be computed using l̂ and the given functions. These steps are demonstrated by the following examples.
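The steps above can be sketched as a small routine. The helper names below are illustrative only; the routine implements Eqs. (4-48), (4-50), (4-52), and (4-53) for small dense matrices:

```python
# Adjustment of observations only: k = Wc f with Wc = (A Q A^t)^-1,
# then v = Q A^t k. Plain-Python matrix helpers for small systems.

def mat_mul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(col) for col in zip(*X)]

def inverse(X):
    """Gauss-Jordan inverse of a small square matrix."""
    n = len(X)
    A = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(X)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        p = A[col][col]
        A[col] = [a / p for a in A[col]]
        for r in range(n):
            if r != col:
                m = A[r][col]
                A[r] = [a - m * b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def adjust_conditions(A, Q, f):
    """Residuals v for condition equations A v = f with cofactor matrix Q."""
    At = transpose(A)
    Qc = mat_mul(mat_mul(A, Q), At)          # Eq. (4-50)
    Wc = inverse(Qc)                         # Eq. (4-53)
    k = mat_mul(Wc, [[fi] for fi in f])      # Eq. (4-52)
    v = mat_mul(mat_mul(Q, At), k)           # Eq. (4-48)
    return [row[0] for row in v]

# One condition v1 + v2 + v3 = 30", equal weights (Q = I)
v = adjust_conditions([[1.0, 1.0, 1.0]],
                      [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]],
                      [30.0])
```

With a single condition and equal weights the 30" misclosure is split evenly, giving v = (10", 10", 10").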
EXAMPLE 4-10
Referring once more to the plane triangle in Example 4-3, the three measured interior angles are l1 = 41°33'45", l2 = 78°57'55", and l3 = 59°27'50". Compute the least squares estimates of these angles, (1) if they are uncorrelated and have equal weight, and (2) if they are uncorrelated but have weights w1 = 1, w2 = 0.67, and w3 = 0.50, respectively.
Solution
The one condition equation is
(l1 + v1) + (l2 + v2) + (l3 + v3) - 180° = 0

or

v1 + v2 + v3 = 180° - (l1 + l2 + l3) = 180° - 179°59'30" = 30"

so that A = [1 1 1] and f = [30"]. (1) For uncorrelated observations of equal weight, Q = W⁻¹ = I, and

Qc = AQAᵗ = [3],   Wc = Qc⁻¹ = [1/3],   k = Wc f = [10"]

The residuals are then

           | 1 |          | 10" |
v = QAᵗk = | 1 | [10"] =  | 10" |
           | 1 |          | 10" |

Finally, adding the residuals to the given observations, we obtain the adjusted observations:

l̂1 = l1 + v1 = 41°33'55"
l̂2 = l2 + v2 = 78°58'05"
l̂3 = l3 + v3 = 59°28'00"

Sum = 180°00'00", which checks the condition.
(2) For w1 = 1, w2 = 0.67, and w3 = 0.50, the weight matrix is

    | w1          |   | 1              |
W = |    w2       | = |    0.67        |
    |       w3    |   |          0.50  |

and

          | 1/w1            |   | 1          |
Q = W⁻¹ = |      1/w2       | = |    1.5     |
          |           1/w3  |   |         2  |

Then

Qc = AQAᵗ = [4.5],   Wc = Qc⁻¹ = [1/4.5],   k = Wc f = [6.7"]

and

           | 1   |            |  7" |
v = QAᵗk = | 1.5 | [6.7"]  ≈  | 10" |
           | 2   |            | 13" |

Finally,

l̂1 = l1 + v1 = 41°33'52"
l̂2 = l2 + v2 = 78°58'05"
l̂3 = l3 + v3 = 59°28'03"

Sum = 180°00'00", which checks.
EXAMPLE 4-11
The angles shown in Fig. 4-2 are measured with a theodolite. The observed values, with weights, are listed as follows:

ANGLE    OBSERVED VALUE    WEIGHT
l1       44°50'44"         1
l2       46°10'25"         3
l3       45°55'12"         3
l4       43°04'03"         3
l5       48°32'45"         3
l6       42°27'42"         1
Use the method of least squares to determine adjusted values for these angles.
Solution
It takes n0 = 4 measured angles to uniquely fix the two triangles in Fig. 4-2. Thus, with n = 6, the redundancy is r = 6 - 4 = 2, and the corresponding two condition equations are
Fig 4-2
(l1 + v1) + (l2 + v2) + (l3 + v3) + (l4 + v4) = 180°
(l3 + v3) + (l4 + v4) + (l5 + v5) + (l6 + v6) = 180°

where v1, v2, ..., v6 are the residuals in the angles l1, l2, ..., l6, respectively. Then, in the form Av = f,

                | v1 |
| 1 1 1 1 0 0 | | v2 |   | 180° - (l1 + l2 + l3 + l4) |   | -24" |
| 0 0 1 1 1 1 | | v3 | = | 180° - (l3 + l4 + l5 + l6) | = | +18" |
                | v4 |
                | v5 |
                | v6 |

The cofactor matrix is

Q = W⁻¹ = diag( 1, 0.333, 0.333, 0.333, 0.333, 1 )

so that

Qc = AQAᵗ = | 2.000  0.666 |
            | 0.666  2.000 |

and

Wc = Qc⁻¹ = (1/3.56) |  2.000  -0.666 |  =  |  0.562  -0.187 |
                     | -0.666   2.000 |     | -0.187   0.562 |

The Lagrange multipliers are

k = Wc f = |  0.562  -0.187 | | -24" |   | -16.9" |
           | -0.187   0.562 | | +18" | = | +14.6" |

and the residuals are

v = QAᵗk = ( -17", -5", -1", -1", +5", +15" )ᵗ

rounded to the nearest second so that both condition equations remain satisfied. The adjusted values are then obtained by adding the residuals to the given observations, e.g.,

l̂3 = l3 + v3 = 45°55'11"
The reader should check to make sure that the adjusted values satisfy the
condition equations exactly.
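As a numerical check of this example, the same equations can be evaluated directly (weights and misclosures as derived above; variable names are ours):

```python
# Two conditions in six angles, Q = diag(1, 1/3, 1/3, 1/3, 1/3, 1),
# misclosures f in seconds of arc.

A = [[1, 1, 1, 1, 0, 0],
     [0, 0, 1, 1, 1, 1]]
q = [1.0, 1 / 3, 1 / 3, 1 / 3, 1 / 3, 1.0]   # diagonal of Q
f = [-24.0, 18.0]

# Qc = A Q A^t for a diagonal Q
Qc = [[sum(q[i] * A[r][i] * A[s][i] for i in range(6)) for s in range(2)]
      for r in range(2)]

# Wc = Qc^-1 (2x2 inverse), then k = Wc f
det = Qc[0][0] * Qc[1][1] - Qc[0][1] * Qc[1][0]
Wc = [[Qc[1][1] / det, -Qc[0][1] / det],
      [-Qc[1][0] / det, Qc[0][0] / det]]
k = [Wc[0][0] * f[0] + Wc[0][1] * f[1],
     Wc[1][0] * f[0] + Wc[1][1] * f[1]]

# Residuals v = Q A^t k
v = [q[i] * (A[0][i] * k[0] + A[1][i] * k[1]) for i in range(6)]
```

The unrounded multipliers are k = (-16.875", 14.625") and the unrounded residuals (-16.875", -5.625", -0.75", -0.75", 4.875", 14.625"), which round to the values in the text.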
EXAMPLE 4-12
The level net used in Example 4-7 is considered once more (see Fig. 3-7). For convenience, the given data are repeated below:
FROM      TO        OBSERVED          LEVELED
(LOWER    (HIGHER   DIFFERENCE IN     DISTANCE
POINT)    POINT)    ELEVATION (m)     (km)
B         A         l1 = 11.973       20
D         B         l2 = 10.940       12
D         A         l3 = 22.932       15
B         C         l4 = 21.040       28
D         C         l5 = 31.891       20
A         C         l6 = 8.983        26
Solution
As discussed in Example 4-4 n0=3, and with n=6, the redundancy is r=3. The
conditions corresponding to this redundancy may be written for three
independent loops, each starting with, and closing on, the same point. Using the
same loops as used in Example 4-4, and referring to Fig. 3-7, the three
conditions are
Loop B-A-D-B
Loop D-B-C-D
Loop D-A-C-D
(l1 + v1) - (l3 + v3) + (l2 + v2) = 0
(l2 + v2) + (l4 + v4) - (l5 + v5) = 0
(l3 + v3) + (l6 + v6) - (l5 + v5) = 0

or, in the form Av = f,

                      | v1 |
| 1  1  -1  0   0  0 | | v2 |   | l3 - l1 - l2 |   |  0.019 |
| 0  1   0  1  -1  0 | | v3 | = | l5 - l2 - l4 | = | -0.089 |
| 0  0   1  0  -1  1 | | v4 |   | l5 - l3 - l6 |   | -0.024 |
                      | v5 |
                      | v6 |

(1) If all observations are uncorrelated and equal in precision, Q = W⁻¹ = I, so that

Qc = AQAᵗ = AAᵗ = |  3   1  -1 |
                  |  1   3   1 |
                  | -1   1   3 |

and

Wc = Qc⁻¹ = |  0.50  -0.25   0.25 |
            | -0.25   0.50  -0.25 |
            |  0.25  -0.25   0.50 |

The Lagrange multipliers are

k = Wc f = ( 0.0258, -0.0433, 0.0150 )ᵗ

and the residuals are

v = QAᵗk = Aᵗk = ( 0.026, -0.018, -0.011, -0.043, 0.028, 0.015 )ᵗ (m)

Adding these residuals to the observations gives the adjusted elevation differences.
All values computed agree exactly with the results obtained in Example 3-7
and in Example 4-7 using the technique of least squares adjustment of
indirect observations. This clearly shows that no matter which technique is
used, the least squares adjustment results for a given problem are, except
for round off, the same.
(2) In this case, the weight of each measurement is inversely proportional to
the leveled distance. Thus the weight matrix W is
    | a/20                               |
    |       a/12                         |
W = |             a/15                   |
    |                   a/28             |
    |                         a/20       |
    |                               a/26 |

and

Q = W⁻¹ = (1/a) diag( 20, 12, 15, 28, 20, 26 )

Now, any convenient value can be selected for a. Let a = 12; then

Q = diag( 1.67, 1.00, 1.25, 2.33, 1.67, 2.17 )
Thus

AQ = | 1.67   1.00   -1.25   0      0      0    |
     | 0      1.00    0      2.33  -1.67   0    |
     | 0      0       1.25   0     -1.67   2.17 |

and

Qc = AQAᵗ = |  3.92   1.00   -1.25 |
            |  1.00   5.00    1.67 |
            | -1.25   1.67    5.09 |
and so

Wc = Qc⁻¹ = |  0.3158  -0.1000   0.1104 |
            | -0.1000   0.2563  -0.1087 |
            |  0.1104  -0.1087   0.2592 |

The Lagrange multipliers are

k = Wc f = |  0.3158  -0.1000   0.1104 | |  0.019 |   |  0.01225 |
           | -0.1000   0.2563  -0.1087 | | -0.089 | = | -0.02210 |
           |  0.1104  -0.1087   0.2592 | | -0.024 |   |  0.00555 |

and the residuals are

v = QAᵗk = ( 0.0205, -0.0099, -0.0084, -0.0515, 0.0276, 0.0120 )ᵗ (m)
Finally, the adjusted elevation differences are

l̂1 = l1 + v1 = 11.994 m
l̂2 = l2 + v2 = 10.930 m
l̂3 = l3 + v3 = 22.924 m
l̂4 = l4 + v4 = 20.988 m
l̂5 = l5 + v5 = 31.919 m
l̂6 = l6 + v6 = 8.995 m

These values differ by no more than 0.001 m from those obtained in Example 4-7; the small differences are attributed to rounding errors.
A summary of the symbols and equations used for least squares adjustment of observations only will be found in Section 9.5, Chapter 9.
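Case (2) of the level net example can be verified with a short script (cofactors 20/12, 12/12, ... as above; helper names are ours, not the text's):

```python
# Weighted adjustment of observations only for the level net:
# three loop conditions in six elevation differences, a = 12.

A = [[1, 1, -1, 0, 0, 0],
     [0, 1, 0, 1, -1, 0],
     [0, 0, 1, 0, -1, 1]]
q = [d / 12.0 for d in (20, 12, 15, 28, 20, 26)]   # diagonal of Q
l = [11.973, 10.940, 22.932, 21.040, 31.891, 8.983]
f = [l[2] - l[0] - l[1], l[4] - l[1] - l[3], l[4] - l[2] - l[5]]

# Qc = A Q A^t for a diagonal Q
Qc = [[sum(q[i] * A[r][i] * A[s][i] for i in range(6)) for s in range(3)]
      for r in range(3)]

def solve3(M, y):
    """Solve the 3x3 system M k = y by Gaussian elimination with pivoting."""
    T = [row[:] + [yi] for row, yi in zip(M, y)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(T[r][c]))
        T[c], T[p] = T[p], T[c]
        for r in range(c + 1, 3):
            m = T[r][c] / T[c][c]
            T[r] = [a - m * b for a, b in zip(T[r], T[c])]
    k = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        k[r] = (T[r][3] - sum(T[r][c] * k[c] for c in range(r + 1, 3))) / T[r][r]
    return k

k = solve3(Qc, f)                                   # Lagrange multipliers
v = [q[i] * sum(A[r][i] * k[r] for r in range(3)) for i in range(6)]
adjusted = [li + vi for li, vi in zip(l, v)]
```

Note that the adjusted values close each loop exactly (up to floating-point precision), which is the defining property of this method.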
PROBLEMS
4-1. In Fig. 4-3, AOB is a straight line, and the angle measurements l1, l2, ... shown are made. Give the elements of the mathematical model (n, n0, r, u, and c) for adjustment of the angles by the method of indirect observations, and write the appropriate condition equations in the form v + BΔ = f.
Fig 4-3
4-2. Figure 4-4 shows seven angle measurements, l1 through l7, made about a survey station A. Determine the redundancy r for a least squares adjustment of the angles, and write the appropriate condition equations in the form Av = f.
4-3. The direction of line OA, Fig. 4-5, is known and fixed. The directions of lines OB, OC, OD, and OE are to be determined based upon the observed angles 1 through 8. Determine the redundancy and write appropriate condition equations in the form v + BΔ = f and in the form Av = f.
4-4. With reference to the level net shown in Fig. 4-6, the following positive elevation differences are observed:

FROM STATION    TO STATION    OBSERVED DIFFERENCE (m)
5               1             42.107
2               1             12.424
3               5             42.251
3               4             8.464
4               5             4.138
4               1             46.269
4               2             33.802
Fig 4-4
4-6. The following angles are measured about a survey station (Fig. 4-7): 110°15'20", 130°40'08", 119°04'42", and 240°55'43". All observations are uncorrelated and have the same precision. Find least squares estimates for the angles using the method of adjustment of indirect observations.
LENGTH (km)
18
12
20
8
22

The elevation of the bench mark at B is fixed at 192.320 m above mean sea level. The weight of each elevation difference is inversely proportional to the length of the line. Use the method of adjustment of indirect observations to find least squares estimates for the elevations of A, C, and D.

4-9. The following observations are made (Fig. 4-9):

         OBSERVED VALUE    STANDARD DEVIATION
s        352.140 m         0.030 m
b        236.765 m         0.020 m
angle    42°15'20"         15"

Compute least squares estimates for x and y using the method of adjustment of indirect observations.
4-10 Apply the method of adjustment of observations only to the data given
in Problem 4-6 to find least squares estimates for the angles. Compare the
results with those of Problem 4-6.
4-11. Apply the method of adjustment of observations only to the data given in Problem 4-8 to find least squares estimates for the elevations of A, C, and D. Compare the results with those of Problem 4-8. Which method involves less computation?
4-12. Apply the method of adjustment of observations only to the data given in Problem 4-9 to find adjusted values for s, b, and the angle, and least squares estimates for x and y. Compare the results with those of Problem 4-9.
4-13 Adjust the level net of problem 4-4, assuming all observations are
uncorrelated and have the same precision.
4-14. A line-crossing technique is used in a hydrographic survey of a river to determine the distance between two shore stations A and B (Fig. 4-10). The sum S of the two distances S1 and S2 is observed at one-minute intervals as the boat proceeds upstream. The observed values of S, in meters, are: 6137, 6075, 6020, 6015, 6029, 6072, and 6143. All observations are uncorrelated and have the same precision. If S can be approximated by a parabolic function of elapsed time,
Fig 4-9
Fig 4-10
find the least squares estimates for the parameters of the parabola and thereby obtain the least squares estimate for the distance between A and B.
4-15 For the triangle in Fig. 4-11 the following observations are made:
Angle A:
4502'13"
Angle B:
8501'48"
Angle C:
4956'19"
Side a:
241.555 m
Side b:
340.097 m
The standard deviation of each angle measurement is 10"; the standard deviation of each side is 0.020 m. All measurements are uncorrelated.
(a) Write appropriate condition equations for least squares adjustment of
the observations. (b) Compute the least squares estimates for the angles
and sides of the triangle.
Fig 4-11
5
Elementary Probability
Theory
ROUNDED-OFF VALUE OF       NUMBER OF         RELATIVE FREQUENCY
DISTANCE MEASUREMENT (m)   MEASUREMENTS      OF OCCURRENCE
                           OCCURRING (a)     (a/5000)
489.51 or less             0                 0
489.52                     206               0.0412
489.53                     3633              0.7266
489.54                     1161              0.2322
489.55 or more             0                 0
Total                      5000              1.0000
5.2. RANDOM VARIABLES
EXAMPLE 5-2
A distance, known to be 297.500 m, is measured with a 50 m carbon steel tape. The measurement is made over a flat, horizontal surface with the tape fully supported, so that there are no errors due to slope and sag of the tape. The ends of the tape are very carefully marked, and appropriate corrections are made so that the effects of errors in marking, length of tape, temperature, and tension are negligible. However, the tape is only roughly aligned, and significant random error due to misalignment of the tape is introduced. Since any error in alignment causes the measured value of the distance to be too high, the error in distance due to misalignment is always positive.

With much effort, the measurement is repeated 5000 times and the resulting values are rounded off to the nearest centimeter. The results are summarized in Table 5-2. The errors in distance due to misalignment of the tape (x) are shown in centimeters, and relative frequencies are calculated by dividing the number of measurements associated with each error value by the total number of measurements. Again, since a large number of measurements is involved, the relative frequencies are accepted as limiting values, i.e., as probabilities.
Table 5-2

MEASURED      ERROR IN MEASURED   NUMBER OF         RELATIVE        p(x)
DISTANCE (m)  DISTANCE x (cm)     MEASUREMENTS n    FREQUENCY
                                                    n/5000
297.50        0                   223               0.0446          0.0450
297.51        1                   1613              0.3226          0.3224
297.52        2                   1705              0.3410          0.3408
297.53        3                   901               0.1802          0.1805
297.54        4                   367               0.0734          0.0735
297.55        5                   131               0.0262          0.0260
297.56        6                   42                0.0084          0.0084
297.57        7                   15                0.0030          0.0026
297.58        8                   3                 0.0006          0.0008
Sum                               5000              1.0000          1.0000
(5-3)*
(5-4)
Here, X is a random variable, and p(x) is its probability function. The random variable and its probability function constitute what is known as a probability model, a mathematical model that describes the assignment or distribution of probabilities to a particular class of random events. In the example at hand, the probability of the random event that X takes on the value 0 cm is p(0) = P[X=0] = 0.0450; the probability that X takes on the value 1 cm is p(1) = P[X=1] = 0.3224; and so on.

As a matter of convention, random variables are represented by capital italic letters (such as X), and the numerical values they take on are represented by lower-case italic letters (such as x). In some literature, random variables are known as variates.

The specific function given by Eq. (5-3) is only one of many that could be used; it is not intended to be anything more than an illustration of what is meant by a probability function. It is simply a mathematical model of what happens in reality, its basic function being to distribute the total amount of probability available among all values of the random variable. As illustrated in Table 5-2, the sum of all values of p(x) must be unity.
The probability function p(x) is not the only kind of function associated with a random variable. Another function of comparable importance is the distribution function,

F(x) = P[X ≤ x]        for all x                                     (5-5)

From this definition it follows that

P[a < X ≤ b] = F(b) - F(a)                                           (5-6)

The distribution function has the following properties:

0 ≤ F(x) ≤ 1           for all x                                     (5-7)
F(a) ≤ F(b)            for a < b                                     (5-8)
F(-∞) = 0                                                            (5-9)
F(+∞) = 1                                                            (5-10)
lim(ε→0, ε>0) F(x + ε) = F(x)                                        (5-11)
To illustrate F(x) and its properties, let us return to Example 5-2. The probabilities p(x) listed in Table 5-2 are used to determine the values of F(x) in Table 5-3. The function F(x) is plotted in Fig. 5-2.

From Table 5-3, it should be clear that the F(x) values are simply running accumulations of the p(x) values. For example,

F(1) = p(0) + p(1) = 0.0450 + 0.3224 = 0.3674
F(2) = p(0) + p(1) + p(2) = 0.0450 + 0.3224 + 0.3408 = 0.7082

and so on.

Now, if a = 2 cm and b = 4 cm, the probability that the error X is greater than 2 cm but less than or equal to 4 cm is given by Eq. (5-6), i.e.,

P[2 < X ≤ 4] = F(4) - F(2) = 0.9622 - 0.7082 = 0.2540
Equation (5-7) states that all values of F (x) must lie between zero and one,
inclusive. This must be so because the values of F (x) are probabilities of
random events, and all probabilities are numbers between zero and one. For
example, F(3)=0.8887 is the probability of the event that X is 3 cm, or less; F(5)
= 0.9882 is the probability that X is 5 cm, or less.
Equation (5-8) states that the distribution function is nondecreasing , i.e., F(x)
cannot decrease in value as x increases. This property should be evident from
Table 5-3, or from Fig. 5-2.
Table 5-3

ERROR IN MEASURED    DISTRIBUTION FUNCTION
DISTANCE x (cm)      F(x) = P[X ≤ x]
0                    0.0450
1                    0.0450 + 0.3224 = 0.3674
2                    0.3674 + 0.3408 = 0.7082
3                    0.7082 + 0.1805 = 0.8887
4                    0.8887 + 0.0735 = 0.9622
5                    0.9622 + 0.0260 = 0.9882
6                    0.9882 + 0.0084 = 0.9966
7                    0.9966 + 0.0026 = 0.9992
8                    0.9992 + 0.0008 = 1.0000
112 Analysis and Adjustments of Survey Measurements
Fig. 5-2.
Equations (5-9) and (5-10) give the two extreme values for F (x). In the case at
hand,
F (x) = 0 for any value of x less than zero, and F (x) = 1 for any value
of x greater than 8.
Equation (5-11) states that the distribution function is continuous from the right. Referring to Fig. 5-2, this means that F(2) = 0.7082, not 0.3674, and F(3) = 0.8887, not 0.7082; the higher of the two possible values is taken at each jump.
It has already been stated that a random variable and its probability function
constitute a probability model. The probability model can be given as well by the
random variable and its distribution function. In either case, the model is known
as a probability distribution.
The particular model illustrated in Example 5-2 is that of a discrete probability
distribution. The characteristic feature of a discrete probability distribution is that the probability function p(x) is nonzero only for a distinct set of values of x; for all other values of x, p(x) is zero. In Example 5-2, p(x) is
nonzero only for x=0, 1, 2, 3, 4, 5, 6, 7, and 8. It follows that the distribution
function F (x) of a discrete probability distribution must be a step function; that
is, a function that increases only in finite jumps. Figure 5-2 shows this
characteristic quite clearly. Indeed, p (x) is precisely the jump in F (x) at each
value of x for which p (x) is nonzero.
It is important to note that although p (x) may be zero at a particular value of x,
F (x) is not necessarily zero. In Example 5-2, p (1.5) = 0, but F(1.5 ) = 0.3674;
and p (9) = 0, but F(9) = 1.0000.
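The step-function behavior of F(x) is easy to reproduce. The sketch below (p(x) values from Table 5-2; function and variable names are ours) accumulates probabilities and reproduces the entries of Table 5-3:

```python
# Distribution function F(x) of the discrete error distribution:
# a running accumulation of the probability function p(x).

p = {0: 0.0450, 1: 0.3224, 2: 0.3408, 3: 0.1805, 4: 0.0735,
     5: 0.0260, 6: 0.0084, 7: 0.0026, 8: 0.0008}

def F(x):
    """F(x) = P[X <= x] for the discrete distribution above (a step function)."""
    return sum(px for xi, px in p.items() if xi <= x)

prob = F(4) - F(2)     # P[2 < X <= 4], cf. Eq. (5-6)
```

Evaluating F between the jump points shows the step-function property, e.g. F(1.5) = 0.3674 even though p(1.5) = 0.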
The measurements in Example 5-2 were rounded off to the nearest centimeter, yielding nine distinct values for the error in distance. This led to describing the behavior of the error in terms of a discrete probability distribution. If, however, we are to look at measurements and their errors in a more general way, unconstrained by round off, it is more convenient to use probability distributions that are continuous.

A continuous probability distribution is a probability model which has a continuous distribution function, i.e., a function which has no jumps.

If a distribution function F(x) has no jumps, it must follow that the corresponding probability function p(x) is zero everywhere, and so it is meaningless to use. In its place, another function is used: the probability density function f(x). This function was mentioned briefly in Chapter 1.

The density function is mathematically defined as the first derivative of the distribution function, i.e.,
f(x) = F'(x)                                                         (5-12)

Conversely,

F(x) = ∫_{-∞}^{x} f(u) du                                            (5-13)

By the fundamental theorem of calculus,

∫_{a}^{b} f(x) dx = F(b) - F(a)                                      (5-14)

so that

P[a < X ≤ b] = F(b) - F(a) = ∫_{a}^{b} f(x) dx                       (5-15)
which provides the means for evaluating the probability that the random variable
X takes on a value between a and b.
Note that (a < X ≤ b) is a random event and that the probability of this random event is a definite integral of the density function. Hence, the probability of this event is represented by the area under the density function between a and b.

It is quite important to recognize that evaluating a density function does not yield probability; the density function f(x) does not give the probability that X takes on the value x, as p(x) does. To repeat, probability is represented by the area under the density function, not by the ordinate value.
Figure 5-3 illustrates the relationship between a continuous distribution function
and the corresponding density function. Note carefully how P[a < X b] is
represented in each case. in Fig. 5-3 (a), P[a<X b] is represented by a
difference in ordinate values, F(b) -F(a); in Fig. 5-3 (b), P[a < X b] is
represented by the hatched area. Note also that F (x) is indeed continuous (i.e.,
no jumps) and it lies entirely between zero and one, and that f(x) is
nonnegative.
Two important and very useful relationships can be obtained as special cases of Eq. (5-15). First, by making a = -∞ and b = +∞, and taking note of the properties of the distribution given in Eqs. (5-9) and (5-10), we have

∫_{-∞}^{∞} f(x) dx = P[-∞ < X ≤ ∞] = F(∞) - F(-∞) = 1                (5-16)

which states that the total area under the density function must equal unity. Second, by making a = -∞ and b = x, noting again the property expressed by Eq. (5-9), and recognizing that the event (X ≤ x) is equivalent to the event (-∞ < X ≤ x), we have

F(x) = P[X ≤ x] = P[-∞ < X ≤ x] = F(x) - F(-∞) = ∫_{-∞}^{x} f(u) du  (5-17)
A simple example of a continuous probability distribution is the uniform distribution, with density function

f(x) = 1/(b - a)       for a < x < b
     = 0               elsewhere                                     (5-18)

The numbers a and b are called the parameters of the distribution. The corresponding distribution function is

F(x) = 0                                        for x ≤ a
     = ∫_{a}^{x} du/(b - a) = (x - a)/(b - a)   for a < x < b        (5-19)
     = 1                                        for x ≥ b
The two functions are plotted in Fig. 5-4. Note that the probability that X is less
than or equal to c is given by the hatched area in Fig. 5-4 (a) and by the
ordinate F(c) in Fig. 5.4 (b). Also note that the nonzero portion of the density
function must be 1/(b- a) in order to make the total area under the density
function equal to unity.
EXAMPLE 5-3
Let X be a uniformly distributed random variable with parameters a = 2 and b = 8. The density function of X is

f(x) = 1/(8 - 2) = 1/6      for 2 < x < 8
     = 0                    elsewhere

and the distribution function is

F(x) = 0              for x ≤ 2
     = (x - 2)/6      for 2 < x < 8
     = 1              for x ≥ 8
Evaluate the probability that: (a) X is less than or equal to 3; (b) X is greater than 4; (c) X lies between 5 and 7; (d) X lies between 6 and 10; (e) X equals 4.

Solution

(a) P[X ≤ 3] = F(3) = (3 - 2)/6 = 1/6
(b) P[X > 4] = 1 - F(4) = 1 - 2/6 = 2/3
(c) P[5 < X ≤ 7] = F(7) - F(5) = 5/6 - 3/6 = 1/3
(d) P[6 < X ≤ 10] = F(10) - F(6) = 1 - 4/6 = 1/3
(e) P[X = 4] = 0, since the probability that a continuous random variable takes on any single specific value is zero.
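The piecewise form of F(x) makes these evaluations mechanical; a minimal sketch (function and variable names are ours):

```python
# Distribution function of the uniform distribution on (a, b),
# applied to the five probability evaluations of the example.

def uniform_cdf(x, a=2.0, b=8.0):
    """F(x) for the uniform distribution with parameters a and b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

p_a = uniform_cdf(3)                       # P[X <= 3]
p_b = 1 - uniform_cdf(4)                   # P[X > 4]
p_c = uniform_cdf(7) - uniform_cdf(5)      # P[5 < X <= 7]
p_d = uniform_cdf(10) - uniform_cdf(6)     # P[6 < X <= 10]
p_e = 0.0                                  # P[X = 4]: a single point has zero probability
```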
5.3. THE NORMAL DISTRIBUTION

Of all existing probability distributions, none is more important than the normal distribution. The normal distribution has widespread application in science, technology, and industry; it is used as the basic model for all physical measurements, including measurements in surveying. The density function of the normal distribution is

f(x) = [1/(σ√(2π))] exp[-(x - μ)²/(2σ²)]       for -∞ < x < ∞        (5-20)*

The quantities μ and σ are the parameters of the distribution, and are called the mean and standard deviation, respectively (see Chapter 1). They will be discussed in more detail in Section 5.5.

The distribution function of the normal distribution is

F(x) = [1/(σ√(2π))] ∫_{-∞}^{x} exp[-(u - μ)²/(2σ²)] du               (5-21)

These two functions are plotted in Fig. 5-5. It is quite clear from Fig. 5-5(a) that the normal distribution is symmetric about μ. The density function's points of inflection are located at x = μ - σ and x = μ + σ. Maximum density occurs at x = μ.

If X is a random variable with a normal distribution, then P[X ≤ c], the probability that X is less than or equal to c, is represented by the hatched area in Fig. 5-5(a) and by the ordinate F(c) in Fig. 5-5(b).

* The notation exp(t) is another way of expressing e^t, where e = 2.71828..., the base of the natural system of logarithms.
EXAMPLE 5-4

Let X be a normally distributed random variable with parameters μ = 12 and σ = 2; i.e., the density function of X is

f(x) = [1/(2√(2π))] exp[-(x - 12)²/(2(2)²)] = 0.1995 exp[-0.1250(x - 12)²]   (5-22)

The density function of the standard normal random variable Z is

f(z) = [1/√(2π)] exp(-z²/2)

This function is obtained from the normal density function by setting μ = 0 and σ = 1; it is shown in Fig. 5-7.
The standard normal distribution is an important special case of the normal distribution because it provides a convenient way of evaluating probabilities associated with any normal distribution. Since the density function of the normal distribution cannot be integrated directly, it presents a problem whenever probabilities have to be evaluated for specific values of μ and σ. Fortunately, the problem can be circumvented by first transforming the normal random variable X into the standard normal random variable Z and then evaluating probabilities for Z.

The transformation, known as standardization, is given by

Z = (X - μ)/σ                                                        (5-23)

Even though it is just as impossible to integrate f(z) directly as it is f(x) for the general normal distribution, there is but one f(z) to integrate. The resulting function is

P[Z ≤ z] = Φ(z) = [1/√(2π)] ∫_{-∞}^{z} exp(-u²/2) du                 (5-24)

The integral can be evaluated by approximate means once and for all, and values of Φ(z) can be tabulated. Such a tabulation is given in Table I of Appendix B.

To show how standardization and Table I are applied, let us consider the following example.
EXAMPLE 5-5
As in Example 5-4, let X be a normally distributed random variable with parameters μ = 12 and σ = 2. Standardize X and use Table I of Appendix B to evaluate the probability that: (a) X is less than or equal to 10; (b) X lies between 11 and 15; (c) X is greater than 16.

Solution

According to Eq. (5-23), the standard normal random variable is

z = (x - 12)/2

(a) The probability that X is less than or equal to 10 is evaluated by first noting that if X ≤ 10, then (X - 12)/2 ≤ (10 - 12)/2, i.e., Z ≤ -1. Thus

P[X ≤ 10] = P[(X - 12)/2 ≤ (10 - 12)/2] = P[Z ≤ -1] = Φ(-1) = 1 - Φ(1) = 1 - 0.8413 = 0.1587

(b) The probability that X lies between 11 and 15 is evaluated in similar fashion, i.e.,

P[11 < X ≤ 15] = P[(11 - 12)/2 < (X - 12)/2 ≤ (15 - 12)/2] = P[-0.5 < Z ≤ 1.5]
               = Φ(1.5) - Φ(-0.5) = 0.9332 - 0.3085 = 0.6247

(c) The probability that X is greater than 16 is

P[X > 16] = P[(X - 12)/2 > (16 - 12)/2] = P[Z > 2] = 1 - Φ(2) = 1 - 0.9772 = 0.0228
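Instead of a table lookup, Φ(z) can be computed from the error function available in most languages, using the identity Φ(z) = ½[1 + erf(z/√2)]. The sketch below redoes Example 5-5 this way (names are ours):

```python
import math

# Standard normal distribution function via the error function.
def phi(z):
    """Phi(z) = P[Z <= z] for the standard normal random variable Z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu, sigma = 12.0, 2.0
p_a = phi((10 - mu) / sigma)                           # P[X <= 10]
p_b = phi((15 - mu) / sigma) - phi((11 - mu) / sigma)  # P[11 < X <= 15]
p_c = 1.0 - phi((16 - mu) / sigma)                     # P[X > 16]
```

The results agree with the tabulated values 0.1587, 0.6247, and 0.0228 to four decimal places.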
5.4. EXPECTATION

If X is a discrete random variable with probability function p(x), the expectation (or expected value) of X is defined as

E[X] = Σ_{i=1}^{n} x_i p(x_i) = x1 p(x1) + x2 p(x2) + ... + xn p(xn)  (5-25)

The expectation of X is its mean, i.e.,

μ = E[X]                                                             (5-26)

More generally, for any function g of X,

E[g(X)] = Σ_{i=1}^{n} g(x_i) p(x_i)                                  (5-27)

In particular, with g(X) = (X - μ)²,

E[(X - μ)²] = Σ_{i=1}^{n} (x_i - μ)² p(x_i)                          (5-28)

which defines the variance of X:

var(X) = σ² = E[(X - μ)²]                                            (5-29)
Consider once more the distance that was measured in Example 5-2. Values of
the resulting error caused by misalignment of the tape and their corresponding
probabilities were listed in Table 5-2; for convenience, they are repeated in the
first two columns of Table 5-5. Evaluate the mean, variance and standard
deviation of the probability distribution of the measurements.
Solution
The mean is
μ = E[X] = x1 p(x1) + x2 p(x2) + ... + xn p(xn) = 2.0445 cm

This calculation is given in the third column of Table 5-5. The fourth column of the table lists the deviations from the mean, for use in the calculation of the variance. The variance is

σ² = E[(X - μ)²] = 1.4371 cm²

This calculation is given in the last column of Table 5-5. The standard deviation is

σ = √1.4371 = 1.1988 cm
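The computations of Table 5-5 can be reproduced directly from the probability function (p(x) values from Table 5-2; variable names are ours):

```python
import math

# Mean, variance, and standard deviation of the discrete error distribution.
p = {0: 0.0450, 1: 0.3224, 2: 0.3408, 3: 0.1805, 4: 0.0735,
     5: 0.0260, 6: 0.0084, 7: 0.0026, 8: 0.0008}

mu = sum(x * px for x, px in p.items())                # Eq. (5-25)
var = sum((x - mu) ** 2 * px for x, px in p.items())   # Eq. (5-28)
sigma = math.sqrt(var)

# Eq. (5-37) as a cross-check: var = E[X^2] - mu^2
var_check = sum(x * x * px for x, px in p.items()) - mu ** 2
```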
For continuous probability distributions, the mean and variance are defined in terms of integrals instead of sums. Thus, if X is a random variable with a continuous probability distribution,

μ = E[X] = ∫_{-∞}^{∞} x f(x) dx                                      (5-30)

and

σ² = E[(X - μ)²] = ∫_{-∞}^{∞} (x - μ)² f(x) dx                       (5-31)

For the normal distribution, substituting the density function of Eq. (5-20) gives

E[X] = ∫_{-∞}^{∞} x [1/(σ√(2π))] exp[-(x - μ)²/(2σ²)] dx = μ          (5-32)

and

E[(X - μ)²] = ∫_{-∞}^{∞} (x - μ)² [1/(σ√(2π))] exp[-(x - μ)²/(2σ²)] dx = σ²   (5-33)

confirming that the parameters μ and σ of the normal distribution are indeed its mean and standard deviation.

A useful alternative expression for the variance is obtained by expanding Eq. (5-29):

σ² = E[(X - μ)²] = E[X² - 2μX + μ²] = E[X²] - 2μE[X] + μ² = E[X²] - μ²   (5-37)

so that

E[X²] = σ² + μ²                                                      (5-38)
5.5.

Under the assumption that the normal distribution is the acceptable probability model for a survey measurement, we can represent the measurement by X, a random variable with density function given by Eq. (5-20). If we set the mean μ equal to zero, Eq. (5-20) reduces to

f(x) = [1/(σ√(2π))] exp[-x²/(2σ²)]                                   (5-41)

In this form, the normal distribution is the probability model for the random error component of a survey measurement. This density function is shown in Fig. 5-9.

Fig. 5-9. Probability model for the random error of a survey measurement.

(σ1 is the standard deviation of the high precision measurement, and σ2 is the standard deviation of the low precision measurement.)
Applying Eq. (5-15), the probability that the measurement X lies between μ - σ and μ + σ is given by

P[μ - σ < X ≤ μ + σ] = [1/(σ√(2π))] ∫_{μ-σ}^{μ+σ} exp[-(x - μ)²/(2σ²)] dx    (5-42)

Standardizing by Eq. (5-23), this becomes

P[μ - σ < X ≤ μ + σ] = P[-1 < Z ≤ 1] = Φ(1) - Φ(-1) = 0.8413 - 0.1587 = 0.6826   (5-43)

which means that the shaded area in Fig. 5-11 is 0.6826 of the total area under the density function.
Multiples k of the standard deviation are also used as measures of precision.
The probability that a survey measurement lies between - k and + k is
P[μ - kσ < X ≤ μ + kσ] = [1/(σ√(2π))] ∫_{μ-kσ}^{μ+kσ} exp[-(x - μ)²/(2σ²)] dx   (5-44)

Standardizing,

P[μ - kσ < X ≤ μ + kσ] = P[-k < Z ≤ k] = Φ(k) - Φ(-k)                (5-45)

Noting from the symmetry of the distribution that Φ(k) + Φ(-k) = 1, Eq. (5-45) reduces to

P[μ - kσ < X ≤ μ + kσ] = 2Φ(k) - 1                                   (5-46)

Values for Φ(k) can be obtained from Table I of Appendix B for specific values of k. Table 5-6 lists Φ(k) and P[μ - kσ < X ≤ μ + kσ] for several values of k.
The probabilities for k = 0.674, 1.645, and 1.960 are represented in Fig. 5-12. The quantity 0.674σ has been called the probable error. This term is obsolete and should no longer be used. A more appropriate term is 50% uncertainty. The quantity 1.645σ is called the 90% uncertainty: the probability that a measurement is within 1.645σ of its mean value is 0.90. Similarly, 1.960σ is called the 95% uncertainty, and 2.576σ is called the 99% uncertainty.
EXAMPLE 5-7
A survey measurement is assumed to have a normal distribution with mean value 394.625 m and standard deviation 0.023 m. Evaluate the 90% and 95% uncertainties for the measurement, and make appropriate probability statements.

Solution

90% uncertainty = 1.645(0.023) = 0.038 m
95% uncertainty = 1.960(0.023) = 0.045 m

The probability is 0.90 that the measurement lies within 0.038 m of 394.625 m, i.e., between 394.587 m and 394.663 m. The probability is 0.95 that the measurement lies within 0.045 m of 394.625 m, i.e., between 394.580 m and 394.670 m.
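The same uncertainty computation in a few lines (multipliers 1.645 and 1.960 as in Table 5-6; names are ours):

```python
# 90% and 95% uncertainties of a normally distributed measurement.
mean, sigma = 394.625, 0.023         # meters

u90 = 1.645 * sigma                  # 90% uncertainty
u95 = 1.960 * sigma                  # 95% uncertainty
interval90 = (mean - u90, mean + u90)
interval95 = (mean - u95, mean + u95)
```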
The accuracy of a measurement has been defined in Chapter 1 as its degree of
conformity or closeness to the true value. Accuracy is influenced, not only by
the random error component of the measurement, but also by the bias created
by uncorrected systematic error.
The mean square error M² is defined as the following expectation:

M² = E[(X - τ)²]                                                     (5-47)

where τ is the "true" value. The mean square error is used as a measure of accuracy.

The bias β is defined as

β = μ - τ                                                            (5-48)

and it can be shown that

M² = σ² + β²                                                         (5-49)

i.e., the mean square error combines the effect of random variation (σ²) with that of systematic bias (β²).
If two random variables X and Y are included in the same probability model, they are said to be jointly distributed; their joint distribution function is

F(x, y) = P[X ≤ x, Y ≤ y]                                            (5-50)

which gives the probability of the event that X is less than or equal to some specified value x and, at the same time, Y is less than or equal to y.

The distribution functions of the individual random variables are obtainable from the joint distribution function as follows:

F(x) = P[X ≤ x] = P[X ≤ x, Y ≤ ∞] = F(x, ∞)                          (5-51)

and

F(y) = P[Y ≤ y] = P[X ≤ ∞, Y ≤ y] = F(∞, y)                          (5-52)

When F(x) and F(y) are derived from the joint distribution function in this way, they are called the marginal distribution functions of X and Y, respectively.
If X and Y are continuous random variables, their joint density function is given
by

f(x, y) = ∂²F(x, y)/∂x∂y.     (5-53)

Conversely,

F(x, y) = ∫ from −∞ to x ∫ from −∞ to y of f(u, v) du dv.     (5-54)

The marginal density functions are

f(x) = ∫ from −∞ to ∞ of f(x, y) dy     (5-55)

and

f(y) = ∫ from −∞ to ∞ of f(x, y) dx.     (5-56)
In Section 5.1 it was stated that two events are independent if the occurrence of
one has no influence on the occurrence of the other. In similar fashion, two
jointly distributed random variables X and Y are independent if

P[X ≤ x, Y ≤ y] = P[X ≤ x] P[Y ≤ y],     (5-57)

or, alternatively,

F(x, y) = F(x) F(y).     (5-58)

For continuous random variables, independence also implies

f(x, y) = f(x) f(y).     (5-59)

This product relationship of independence can be further extended to
expectations. In particular, it can be shown that

E[XY] = E[X] E[Y] = μx μy.     (5-60)
Now the marginal distributions of X and Y have their own separate means and
variances; these are μx and μy, respectively, and σx² and σy², respectively. In
addition to these four parameters, there exists a fifth parameter which measures
the degree of correlation between the two random variables. This parameter is
known as the covariance; it is designated by the symbol cov(X, Y), or by σxy,
and it is defined as the following expectation:

cov(X, Y) = σxy = E[(X − μx)(Y − μy)].     (5-61)

For continuous random variables,

σxy = ∫∫ (x − μx)(y − μy) f(x, y) dx dy.     (5-62)

Expanding Eq. (5-61) gives

σxy = E[(X − μx)(Y − μy)]
    = E[XY − μx Y − X μy + μx μy]
    = E[XY] − μx E[Y] − E[X] μy + μx μy
    = E[XY] − μx μy − μx μy + μx μy
    = E[XY] − μx μy.     (5-63)
If X and Y are independent, Eq. (5-60) holds, and it must necessarily follow from
Eq. (5-63) that

σxy = 0.     (5-64)

If the random variables are standardized by the transformation given by Eq.
(5-23), the expectation of the product of the standardized variables is a
dimensionless number known as the correlation coefficient, designated ρxy; thus

ρxy = E[((X − μx)/σx)((Y − μy)/σy)],     (5-65)

which simplifies to

ρxy = σxy/(σx σy).     (5-66)

Since σx and σy are always positive, ρxy takes on the sign of σxy. It can be
demonstrated that the absolute value of ρxy never exceeds unity, i.e.,

−1 ≤ ρxy ≤ 1.     (5-67)
EXAMPLE 5-8
The X and Y coordinates of a survey point are jointly distributed random
variables with σx² = 1.72 cm², σy² = 1.18 cm², and σxy = 0.32 cm². Evaluate
their correlation coefficient.
Solution

σx = √1.72 = 1.31 cm
σy = √1.18 = 1.09 cm

Thus,

ρxy = σxy/(σx σy) = 0.32/((1.31)(1.09)) = 0.22.
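The computation of Example 5-8 can be sketched in Python (Eq. 5-66):

```python
import math

# Correlation coefficient from covariance and variances (Eq. 5-66),
# using the values of Example 5-8.
var_x, var_y = 1.72, 1.18   # cm^2
cov_xy = 0.32               # cm^2

rho = cov_xy / (math.sqrt(var_x) * math.sqrt(var_y))
print(round(rho, 2))  # 0.22
```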
The sum of two jointly distributed random variables is also a random variable.
Depending on the joint distribution involved, the sum may or may not have the
same kind of distribution as its components. For example, if X and Y are
uniformly distributed, X + Y does not have a uniform but a triangular distribution,
as already shown in Fig. 5-8(b); however, if X and Y are normally distributed, it
can be demonstrated that X+ Y has a normal distribution. In any case, the mean
of the sum of two jointly distributed random variables is equal to the sum of their
means, i.e.,
E[X + Y] = E[X] + E[Y],
(5-68)
Or, alternatively,
μx+y = μx + μy.
(5-69)
The variance of the sum can be determined as well from the variances and
covariance of its components, quite apart from the kind of distribution involved.
According to Eq. (5-29), the variance of the sum is the following expectation:
σ²x+y = E[((X + Y) − (μx + μy))²],     (5-70)

which reduces to

σ²x+y = E[((X − μx) + (Y − μy))²]
      = E[(X − μx)² + 2(X − μx)(Y − μy) + (Y − μy)²]
      = E[(X − μx)²] + 2E[(X − μx)(Y − μy)] + E[(Y − μy)²]
      = σx² + 2σxy + σy².     (5-71)
EXAMPLE 5-9
Calculate the mean and variance of the sum of the X and Y coordinates in
Example 5-8, given μx = 43.00 cm, σx² = 1.72 cm², μy = 27.00 cm,
σy² = 1.18 cm², and σxy = 0.32 cm².
Solution
The mean of X + Y is

μx+y = μx + μy = 43.00 + 27.00 = 70.00 cm,

and, from Eq. (5-71), its variance is

σ²x+y = σx² + 2σxy + σy² = 1.72 + 2(0.32) + 1.18 = 3.54 cm².

When X and Y are uncorrelated, σxy = 0 and Eq. (5-71) reduces to

σ²x+y = σx² + σy².     (5-72)
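A minimal sketch of Eqs. (5-69) and (5-71), with the values of Example 5-9:

```python
# Mean and variance of the sum of two correlated random variables
# (Eqs. 5-69 and 5-71).
mu_x, mu_y = 43.00, 27.00   # cm
var_x, var_y = 1.72, 1.18   # cm^2
cov_xy = 0.32               # cm^2

mu_sum = mu_x + mu_y
var_sum = var_x + 2.0 * cov_xy + var_y
print(mu_sum)             # 70.0
print(round(var_sum, 2))  # 3.54
```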
A set of m jointly distributed random variables X1, X2, ..., Xm can be arranged
in the column vector

x = [ X1 ]
    [ X2 ]     (5-73)*
    [ ...]
    [ Xm ]

which is a random vector.
If we let

x − μx = [ X1 − μ1 ]
         [ X2 − μ2 ]     (5-74)
         [ ...     ]
         [ Xm − μm ]

where μ1, μ2, ..., μm are the mean values of X1, X2, ..., Xm, respectively, then

(x − μx)(x − μx)ᵗ =

[ (X1 − μ1)²            (X1 − μ1)(X2 − μ2)   ...   (X1 − μ1)(Xm − μm) ]
[ (X2 − μ2)(X1 − μ1)    (X2 − μ2)²           ...   (X2 − μ2)(Xm − μm) ]     (5-75)
[ ...                   ...                  ...   ...                ]
[ (Xm − μm)(X1 − μ1)    (Xm − μm)(X2 − μ2)   ...   (Xm − μm)²         ]
By taking expectations of the elements in the matrix on the right-hand side of Eq.
(5-75), and noting from Eq. (5-29) that E[(Xi − μi)²] = σi² (the variance of Xi), and
from Eq. (5-61) that E[(Xi − μi)(Xj − μj)] = σij (the covariance of Xi and Xj), and
that σij = σji, we obtain the symmetric matrix

* A boldface lower case letter is used to represent a random vector in keeping
with established vector and matrix notation; the elements, however, are in
capital letters to represent random variables.
140 Analysis and Adjustments of Survey Measurements
Σxx = E[(x − μx)(x − μx)ᵗ] =

[ σ1²   σ12   ...   σ1m ]
[ σ12   σ2²   ...   σ2m ]     (5-76)
[ ...   ...   ...   ... ]
[ σ1m   σ2m   ...   σm² ]
The variances of the individual random variables form the main diagonal of Σxx;
the covariances of all possible pairs of the random variables are the off-diagonal
elements of Σxx, with each covariance appearing twice (in one position above
and in one position below the main diagonal).
The matrix Σxx is called the variance-covariance matrix of x, or simply the
covariance matrix of x, since the variance of a random variable can be looked
upon as its covariance with itself (σi² = σii). The term dispersion matrix is also
applied to Σxx.
If the random variables in x are uncorrelated, all covariance (off-diagonal)
elements of Σxx are zero, and the matrix is diagonal [see Eq. (4-15), Chapter 4].
In such a case, Σxx may be called a variance matrix, as it was in Chapter 4.
However, it is quite acceptable to refer to a diagonal Σxx as a covariance matrix,
even though all covariances in it are zero.
The relationship between the weight matrix W and the corresponding variance
matrix (or covariance matrix, as we will now call it) was also given in Chapter 4
by Eq. (4-19). This relationship, with subscripts added to indicate reference to
the random vector x, is restated as Eq. (5-77):

Wxx = σ0² Σxx⁻¹.     (5-77)

For two correlated random variables, with σ12 = ρ12 σ1 σ2, Eq. (5-77) gives

Wxx = σ0² [ σ1²  σ12 ]⁻¹ = [σ0²/(σ1²σ2² − σ12²)] [ σ2²    −σ12 ]
          [ σ12  σ2² ]                            [ −σ12   σ1²  ]

i.e.,

Wxx = [ σ0²/(σ1²(1 − ρ12²))            −σ0² ρ12/(σ1 σ2 (1 − ρ12²)) ]
      [ −σ0² ρ12/(σ1 σ2 (1 − ρ12²))    σ0²/(σ2²(1 − ρ12²))         ]

whereas the individual weights are

w1 = σ0²/σ1²   and   w2 = σ0²/σ2².

Obviously, the diagonal elements of Wxx are σ0²/σ1² and σ0²/σ2² only when
ρ12 = 0. When ρ12 ≠ 0, the weights w1 and w2 cannot be used as diagonal
elements of Wxx.
Each element of Σxx can be divided by σ0² to yield a scaled version of Σxx. This
new matrix is designated Qxx, and it is known as the cofactor matrix of x. Thus,

Qxx = (1/σ0²) Σxx =

[ σ1²/σ0²   σ12/σ0²   ...   σ1m/σ0² ]
[ σ12/σ0²   σ2²/σ0²   ...   σ2m/σ0² ]     (5-78)
[ ...       ...       ...   ...     ]
[ σ1m/σ0²   σ2m/σ0²   ...   σm²/σ0² ]

Equivalently,

Σxx = σ0² Qxx.     (5-79)
For example, consider three measurements with standard deviations
σ1 = 0.025 m, σ2 = 0.050 m, and σ3 = 0.015 m, with ρ12 = 0.50 and
ρ13 = ρ23 = 0. Then

σ1² = (0.025)² = 0.000625 m²
σ2² = (0.050)² = 0.002500 m²
σ3² = (0.015)² = 0.000225 m²
σ12 = ρ12 σ1 σ2 = 0.50(0.025)(0.050) = 0.000625 m²
σ13 = σ23 = 0.

Thus,

Σxx = [ 0.000625   0.000625   0        ]
      [ 0.000625   0.002500   0        ] m²
      [ 0          0          0.000225 ]

and, with σ0² = 0.000625 m²,

Qxx = (1/0.000625) Σxx = [ 1.00   1.00   0    ]
                         [ 1.00   4.00   0    ]
                         [ 0      0      0.36 ]
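The covariance, cofactor, and weight matrices of the three-measurement example above can be formed with NumPy (a sketch of Eqs. 5-77 through 5-81):

```python
import numpy as np

# Covariance matrix from standard deviations and a correlation,
# then the cofactor matrix Qxx = Sxx / sigma0^2 and the weight
# matrix Wxx = Qxx^-1 (Eqs. 5-78 and 5-81).
sigma = np.array([0.025, 0.050, 0.015])   # m
rho12 = 0.50

Sxx = np.diag(sigma**2)
Sxx[0, 1] = Sxx[1, 0] = rho12 * sigma[0] * sigma[1]

s0sq = 0.000625                           # reference variance, m^2
Qxx = Sxx / s0sq
Wxx = np.linalg.inv(Qxx)

print(np.round(Qxx, 2))
```

Because ρ12 ≠ 0, the diagonal of Wxx differs from the simple weights σ0²/σi², as discussed above.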
EXAMPLE 5-12
A distance and an angle are jointly distributed measurements with covariance
matrix

Σxx = [ 1.0×10⁻⁴ m²       1.8×10⁻⁷ m·rad ]
      [ 1.8×10⁻⁷ m·rad    2.0×10⁻⁹ rad²  ]

Evaluate the cofactor matrix Qxx for three choices of reference variance:
(a) σ0² = 1.0×10⁻⁴ m²; (b) σ0² = 2.0×10⁻⁹ rad²; (c) the dimensionless value
σ0² = 1.0×10⁻⁷.
Solution

(a) Qxx = [ 1.0               1.8×10⁻³ rad/m   ]
          [ 1.8×10⁻³ rad/m    2.0×10⁻⁵ rad²/m² ]

(b) Qxx = [ 5.0×10⁴ m²/rad²   90 m/rad ]
          [ 90 m/rad          1.0      ]

(c) Qxx = [ 1.0×10³ m²        1.8 m·rad ]
          [ 1.8 m·rad         0.02 rad² ]
Example 5-12 shows that the units of the elements of Qxx can vary depending
on the choice of units for the reference variance. Since it is impossible to select
a reference variance that will give a completely dimensionless Qxx when the
units of the components of x are mixed, there is merit in selecting a
dimensionless reference variance, as was done in part (c) of Example 5-12, so
that the elements of Qxx have the same units as their counterparts in Σxx.
Now, by inverting both sides of Eq. (5-78), we get

Qxx⁻¹ = σ0² Σxx⁻¹.     (5-80)

Comparison with Eq. (5-77) shows that

Wxx = Qxx⁻¹,     (5-81)

i.e., the weight matrix is the inverse of the cofactor matrix. Equation (5-81) is
applicable to correlated as well as uncorrelated observations. The same
relationship was given previously by Eq. (4-46) in Chapter 4 for uncorrelated
observations only.
5.9. INTRODUCTION TO SAMPLING

PROBLEMS
5-1 Random variable X has the following probability function:

p(x) = [120/(x!(5 − x)!)] (0.7)ˣ (0.3)⁵⁻ˣ   for x = 0, 1, 2, 3, 4, 5.

(a) Evaluate and plot p(x) for all given values of x. (b) Evaluate and plot
F(x) = P[X ≤ x] for −1 < x < 7. (c) Evaluate P[1 < X ≤ 3].
5-2 X is a random variable with the following probability density function:

f(x) = x/18   for 0 < x < 6,
f(x) = 0      elsewhere.

Derive the distribution function of X and evaluate P[X < 1], P[1 < X < 3], and
P[X > 5].
5-3 X is a random variable with the following probability density function:

f(x) = 2(x − 1)/9   for 1 < x < 4,
f(x) = 0            elsewhere.

Derive the distribution function of X and evaluate P[X < 2], P[X > 3], and
P[2.5 < X < 3.5].
5-4 The density function of a random variable X is

f(x) = (sin x)/2   for 0 ≤ x ≤ π,
f(x) = 0           elsewhere.

Derive the distribution function of X and evaluate P[X < π/3], P[X > π/2], and
P[π/3 < X < π/2].
5-5 X is a uniformly distributed random variable with parameters a = 5 and
b = 9. Evaluate P[X < 5], P[X < 8], P[X < 10], P[6 < X < 7], and P[4 < X < 6].
5-6 Given a random variable Z with standard normal distribution, determine
from Table I of Appendix B the probability that Z takes on a value: (a) between
0.35 and 1.65; (b) between −1.96 and +1.96; (c) between −1.50 and 1.50; (d) less
than 0.50; (e) greater than zero; (f) greater than 3.00; (g) between −0.674 and
0.674.
5-7 A random variable X is normally distributed with mean 72 and variance 25.
Determine P [X< 77], P [X < 84 ], P [X> 84], P [67 <X< 77], P [72 < X< 73], and
P [X= 68].
5-8 Evaluate the mean and variance of the random variable in Problem 5-1.
5-9 Evaluate the mean and variance of the random variable in Problem 5-2.
5-10 Evaluate the mean and variance of the random variable in Problem 5-3.
5-11 Evaluate the mean and variance of the random variable in Problem 5-4.
5-12 X is a normally distributed random variable, E[X] = 20, and var[X] = 4. (a)
What is the probability that X is greater than 23? (b) What is the probability
that X lies between 19 and 21? (c) What is the probability that X is less than 18?
5-13 A random variable X is normally distributed. If E[X] =40 and P [X<8]=
0.8413, determine the mean, variance and standard deviation of X.
f x
x 2 1
1
exp
m .
0.15
0.0072
5-19 Select a reference variance of 0.0009 m² and evaluate the cofactor and
weight matrices of the coordinates in Problem 5-17 for each case, (a) and (b).
5-20 The position of a survey station is given by an angle θ and a distance S.
The standard deviations of θ and S are 20″ and 0.10 m, respectively, and the
correlation coefficient is 0.50. (a) Evaluate the covariance matrix for θ and S,
using radians as the units for θ. (b) Select a reference variance of 0.0010 m² and
evaluate the cofactor and weight matrices for θ and S. Assign appropriate units
to all elements.
6
Variance-Covariance Propagation
6.1. INTRODUCTION
In Chapter 2 the basic technique of error propagation was introduced. Given the
errors in a set of measurements, the technique of error propagation is used to
evaluate the resulting errors in quantities that are calculated from the
measurements.
Although error propagation, as dealt with in Chapter 2, is very useful in studying
the combined influences of errors in measurements on the quantities computed
from these measurements, it is nevertheless based upon the assumption that
the measurement errors are known and given in the first place. In practice, if the
errors are known, they are invariably eliminated by applying appropriate
corrections, leaving nothing to propagate. Such is the case when we have
known systematic errors, i.e., the measurements are corrected for known
systematic error before they are used to calculate the desired quantities. If
random errors are considered, however, the specific values of the errors are not
known, and it is impossible to correct for them or to apply the technique of error
propagation given in Chapter 2.
Even though the specific values of the random errors are not known, it is still
possible to study the effects of their propagation. However, instead of working
with the actual error values, as was the case in Chapter 2, we must now work
with the probability distributions of the errors, or, equivalently, with the
variances and covariances of these distributions. If x is the vector of
measurements and y is the vector of quantities computed from the
measurements, i.e.,

y = f(x),     (6-1)

then the propagation problem basically consists of finding the joint probability
distribution of y, given the joint probability distribution of x. However, derivation
of the joint distribution of y from the joint distribution function of x can be quite
involved mathematically, even when y is a relatively simple function of x, and it
is not within the scope of this text to attempt such derivation. Fortunately, for our
purposes, there is an effective alternative if we limit consideration to linear
functions of x, or to linearized forms of nonlinear functions of x. This alternative
involves propagation of variances and covariances only.
Consider a sample of q pairs of values of two jointly distributed random
variables X1 and X2, and define the deviations from the means

ΔX1k = X1k − μx1   and   ΔX2k = X2k − μx2,   k = 1, 2, ..., q.     (6-2)

The variances of X1 and X2 are then

σx1² = lim (q→∞) (1/q) Σ from k=1 to q of ΔX1k²     (6-3)

and

σx2² = lim (q→∞) (1/q) Σ from k=1 to q of ΔX2k²,     (6-4)

and the covariance of X1 and X2 is

σx1x2 = lim (q→∞) (1/q) Σ from k=1 to q of ΔX1k ΔX2k.     (6-5)

The definition of variance expressed by Eqs. (6-3) and (6-4) is consistent with the
definition given by Eq. (5-28) if we bear in mind that the probability of any value
xi occurring is reflected in the sample by the frequency of occurrence of that
value. Indeed, if the sample is infinite in size (q→∞), the relative frequency of
occurrence of a particular value is precisely its probability (see Section 5.1).
Similarly, the definition of covariance expressed by Eq. (6-5) is consistent with
the discrete equivalent of the definition given by Eq. (5-62).
Let us next consider a random vector y composed of two jointly distributed
random variables Y1 and Y2 such that

y = Ax + b,     (6-6)

i.e., for each sample pair,

Y1k = a11 X1k + a12 X2k + b1     (6-7)

and

Y2k = a21 X1k + a22 X2k + b2.     (6-8)

Define the deviations

ΔY1k = Y1k − μy1   and   ΔY2k = Y2k − μy2,   k = 1, 2, ..., q.     (6-9)

Then

ΔY1k = a11 ΔX1k + a12 ΔX2k   and   ΔY2k = a21 ΔX1k + a22 ΔX2k,
k = 1, 2, ..., q.     (6-10)
Defining the variances and covariance of Y1 and Y2 in the same way as the
variances and covariance of X1 and X2 are defined in Eqs. (6-3), (6-4), and
(6-5), we have

σy1² = lim (q→∞) (1/q) Σ from k=1 to q of ΔY1k²,     (6-11)

σy2² = lim (q→∞) (1/q) Σ from k=1 to q of ΔY2k²,     (6-12)

and

σy1y2 = lim (q→∞) (1/q) Σ from k=1 to q of ΔY1k ΔY2k.     (6-13)
Substituting the right-hand side of Eq. (6-10) for ΔY1k in Eq. (6-11), expanding,
and following the basic rules of summation, we get

σy1² = lim (q→∞) (1/q) Σ (a11 ΔX1k + a12 ΔX2k)²
     = lim (q→∞) (1/q) Σ (a11² ΔX1k² + 2 a11 a12 ΔX1k ΔX2k + a12² ΔX2k²)
     = a11² lim (q→∞) (1/q) Σ ΔX1k² + 2 a11 a12 lim (q→∞) (1/q) Σ ΔX1k ΔX2k
       + a12² lim (q→∞) (1/q) Σ ΔX2k²
     = a11² σx1² + 2 a11 a12 σx1x2 + a12² σx2².     (6-14)
In the same way, it can be shown that

σy2² = a21² σx1² + 2 a21 a22 σx1x2 + a22² σx2²     (6-15)

and

σy1y2 = a11 a21 σx1² + (a11 a22 + a12 a21) σx1x2 + a12 a22 σx2².     (6-16)

If we now write the covariance matrices

Σxx = [ σx1²    σx1x2 ]     (6-17)
      [ σx1x2   σx2²  ]

and

Σyy = [ σy1²    σy1y2 ]     (6-18)
      [ σy1y2   σy2²  ]
it is easily shown that Eqs. (6-14), (6-15), and (6-16) can be expressed in the
matrix form

Σyy = A Σxx Aᵗ,     (6-19)

where

A = [ a11   a12 ]
    [ a21   a22 ]

It will be found that Eq. (6-19) holds, not only for vector x with n = 2 components
and vector y with m = 2 components, but for all values of m and n. As such, it
represents the general law of propagation of variances and covariances for all
linear functions

y = Ax + b.
For linearized functions, the coefficient matrix A is replaced by the Jacobian
matrix

Jyx = [ ∂Y1/∂X1   ∂Y1/∂X2   ...   ∂Y1/∂Xn ]
      [ ∂Y2/∂X1   ∂Y2/∂X2   ...   ∂Y2/∂Xn ]
      [ ...       ...       ...   ...     ]
      [ ∂Ym/∂X1   ∂Ym/∂X2   ...   ∂Ym/∂Xn ]

and the general law of propagation of variances and covariances becomes

Σyy = Jyx Σxx Jyxᵗ.     (6-20)

The relationships expressed by Eqs. (6-14), (6-15), and (6-16), and by Eqs.
(6-19) and (6-20), are valid no matter what the probability distribution of x is.
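The general law (Eq. 6-19) is a one-line matrix product in NumPy. The numbers below are illustrative assumptions, not values from the text:

```python
import numpy as np

# General propagation law Syy = A Sxx A^T (Eq. 6-19) for a linear
# function y = Ax + b; the constant b does not affect the covariances.
A = np.array([[2.0, -1.0],
              [0.5,  1.0]])
Sxx = np.array([[4.0, 1.0],     # correlated components of x
                [1.0, 9.0]])

Syy = A @ Sxx @ A.T
print(Syy)
```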
When the components of the random vector x are independent (uncorrelated),
the covariance matrix Σxx is diagonal. Thus, for the linear function

Y = a1 X1 + a2 X2 + ... + an Xn     (6-21)

in which X1, X2, ..., Xn are independent (uncorrelated), application of Eq. (6-19)
yields

σy² = [ a1  a2  ...  an ] [ σx1²   0      ...   0    ] [ a1 ]
                          [ 0      σx2²   ...   0    ] [ a2 ]
                          [ ...    ...    ...   ...  ] [ ...]
                          [ 0      0      ...   σxn² ] [ an ]

which results in

σy² = a1² σx1² + a2² σx2² + ... + an² σxn².     (6-22)
Similarly, for the nonlinear function Y = f(X1, X2, ..., Xn), in which X1, X2, ...,
Xn are independent (uncorrelated), application of Eq. (6-20) yields

σy² = [ ∂Y/∂X1  ∂Y/∂X2  ...  ∂Y/∂Xn ] [ σx1²   0      ...   0    ] [ ∂Y/∂X1 ]
                                      [ 0      σx2²   ...   0    ] [ ∂Y/∂X2 ]     (6-23)
                                      [ ...    ...    ...   ...  ] [ ...    ]
                                      [ 0      0      ...   σxn² ] [ ∂Y/∂Xn ]

which results in

σy² = (∂Y/∂X1)² σx1² + (∂Y/∂X2)² σx2² + ... + (∂Y/∂Xn)² σxn².     (6-24)
Equations (6-22) and (6-24) constitute the linear and nonlinear cases,
respectively, of what is known as the special law of propagation of variances. In
Chapter 5, Eq. (5-79) gave the relationship between the covariance matrix Σxx
and the corresponding cofactor matrix Qxx of a random vector x. This
relationship is

Σxx = σ0² Qxx     (6-25)

and, correspondingly for y,

Σyy = σ0² Qyy.     (6-26)

Substituting the right-hand sides of Eqs. (6-25) and (6-26) for Σxx and Σyy,
respectively, in Eqs. (6-19) and (6-20), and dividing by σ0², we get

Qyy = A Qxx Aᵗ     (6-27)

and

Qyy = Jyx Qxx Jyxᵗ.     (6-28)
Y = X1 + X2 + X3 + X4,
σy = √0.001455 = 0.038 m.
EXAMPLE 6-2
Two measurements are independent and have standard deviations σ1 = 0.20 m
and σ2 = 0.15 m, respectively. Evaluate the standard deviations of the sum and
difference of the two measurements. Also evaluate the correlation between the
sum and difference.
Solution
Let X1 and X2 represent the two measurements, and Y1 and Y2 their sum and
difference, respectively, i.e.,

Y1 = X1 + X2   and   Y2 = X1 − X2.

In matrix form (y = Ax + b), the functions are

[ Y1 ] = [ 1    1 ] [ X1 ] + [ 0 ]
[ Y2 ]   [ 1   −1 ] [ X2 ]   [ 0 ]

Applying the special law of propagation of variances, Eq. (6-22),

σy1² = (1)² σ1² + (1)² σ2² = σ1² + σ2² = 0.0625 m²

and

σy2² = (1)² σ1² + (−1)² σ2² = σ1² + σ2² = 0.0625 m².

Note that the sum and difference have the same variance. Thus, the standard
deviation of both the sum and the difference is

σy1 = σy2 = √0.0625 = 0.25 m.
To find the correlation between Y1 and Y2, we must use the general law of
propagation of variances and covariances, Eq. (6-19). Thus,

Σyy = A Σxx Aᵗ = [ 1    1 ] [ σ1²   0   ] [ 1    1 ] = [ σ1² + σ2²   σ1² − σ2² ]
                 [ 1   −1 ] [ 0     σ2² ] [ 1   −1 ]   [ σ1² − σ2²   σ1² + σ2² ]

The diagonal elements of Σyy are the variances σy1² and σy2², as previously
determined. Each off-diagonal element is the covariance of Y1 and Y2, i.e.,

σy1y2 = σ1² − σ2² = (0.20)² − (0.15)² = 0.0175 m²,

and the correlation coefficient is

ρy1y2 = σy1y2/(σy1 σy2) = 0.0175/((0.25)(0.25)) = 0.28.
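Example 6-2 can be verified numerically (a sketch using Eq. 6-19):

```python
import numpy as np

# Variances and covariance of the sum and difference of two
# independent measurements (Example 6-2).
s1, s2 = 0.20, 0.15                  # m
A = np.array([[1.0,  1.0],           # Y1 = X1 + X2
              [1.0, -1.0]])          # Y2 = X1 - X2
Sxx = np.diag([s1**2, s2**2])

Syy = A @ Sxx @ A.T
rho = Syy[0, 1] / np.sqrt(Syy[0, 0] * Syy[1, 1])
print(round(float(rho), 2))  # 0.28
```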
EXAMPLE 6-3
If a quantity is independently measured n times, and the n measurements,
represented by X1, X2, ..., Xn, have different weights, w1, w2, ..., wn,
respectively, then the weighted mean of the measurements is defined as

X̄w = (w1 X1 + w2 X2 + ... + wn Xn)/(w1 + w2 + ... + wn).

Show that the weight of the weighted mean is equal to the sum of the weights of
the individual measurements.
Solution
Let w = w1 + w2 + ... + wn. In matrix form (y = Ax + b), the weighted mean is

X̄w = [ w1/w  w2/w  ...  wn/w ] [ X1 ]
                               [ X2 ] + 0.
                               [ ...]
                               [ Xn ]

Now,
Qxx = Wxx⁻¹ = [ 1/w1   0      ...   0    ]
              [ 0      1/w2   ...   0    ]
              [ ...    ...    ...   ...  ]
              [ 0      0      ...   1/wn ]

Thus,

Qyy = A Qxx Aᵗ = Σ from i=1 to n of (wi/w)²(1/wi)
    = (w1 + w2 + ... + wn)/w² = w/w² = 1/w.

Hence Wyy = Qyy⁻¹ = [w], i.e., the weight of the weighted mean X̄w is the sum
of the weights, w.
EXAMPLE 6-4
A distance is measured twice using two different methods. The two
measurements are X1 = 321.643 m and X2 = 321.618 m, with standard
deviations σ1 = 0.030 m and σ2 = 0.015 m, respectively. Evaluate the weighted
mean and the standard deviation of the weighted mean of the two
measurements, assuming that X1 and X2 are uncorrelated.
Solution
Let σ0 = 0.030 m (the selection of σ0 is arbitrary). Then the weights of the
measurements are

w1 = σ0²/σ1² = (0.030)²/(0.030)² = 1

and

w2 = σ0²/σ2² = (0.030)²/(0.015)² = 4,

and the weighted mean is

X̄w = (w1 X1 + w2 X2)/(w1 + w2) = (1(321.643) + 4(321.618))/(1 + 4)
    = 321.623 m.

From Example 6-3, the weight of X̄w is w1 + w2, so that

σw² = σ0²/(w1 + w2) = (0.030)²/5 = 0.00018 m²

and

σw = √0.00018 = 0.013 m.
Note: The variance σw² can also be evaluated as follows:

X̄w = (w1/(w1 + w2)) X1 + (w2/(w1 + w2)) X2 = (1/5) X1 + (4/5) X2.

Thus

σw² = (1/5)² σ1² + (4/5)² σ2² = (1/25)(0.030)² + (16/25)(0.015)² = 0.00018 m².
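The weighted-mean computation of Example 6-4 in Python (a sketch):

```python
# Weighted mean of two uncorrelated measurements and its variance
# (Example 6-4), using an arbitrary reference variance sigma0^2.
x = [321.643, 321.618]   # m
s = [0.030, 0.015]       # m
s0sq = 0.030**2          # reference variance

w = [s0sq / si**2 for si in s]   # weights: 1 and 4
xw = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
var_w = s0sq / sum(w)            # weight of the mean = sum of weights

print(round(xw, 3))     # 321.623
print(round(var_w, 5))  # 0.00018
```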
EXAMPLE 6-5
The angles and sides of a right-angled plane triangle are shown in Fig. 6-1.
Sides a and b are measured, and the measurements are 416.050 m and
202.118 m, respectively. The measurements are independent, the standard
deviation of a is 0.020 m, and the standard deviation of b is 0.012 m. Evaluate
the elements c and α of the triangle and their standard deviations. Also
determine the correlation, if any, between c and α.
Fig. 6-1.
Solution
The functions which express c and α in terms of a and b are

c = √(a² − b²)

and

α = sin⁻¹(b/a).

Using these functions to evaluate c and α, we get

c = √((416.050)² − (202.118)²) = 363.656 m

and

α = sin⁻¹(202.118/416.050) = 29°03′54″.
Since the functions are nonlinear, the Jacobian matrix Jyx must be evaluated,
noting that

x = [ a ]   and   y = [ c ]
    [ b ]             [ α ]

Now

∂c/∂a = (1/2)(a² − b²)^(−1/2)(2a) = a/c,

∂c/∂b = (1/2)(a² − b²)^(−1/2)(−2b) = −b/c,

∂α/∂a = [1/√(1 − (b/a)²)](−b/a²) = −b/(ac),

∂α/∂b = [1/√(1 − (b/a)²)](1/a) = 1/c.

Thus,

Jyx = [ a/c       −b/c ] = [ 1.144          −0.556      ]
      [ −b/(ac)    1/c ]   [ −1.336×10⁻³    2.750×10⁻³  ]
Now,

Σxx = [ (0.020)²   0        ] = [ 4.00×10⁻⁴   0         ]
      [ 0          (0.012)² ]   [ 0           1.44×10⁻⁴ ]

so that

Σyy = Jyx Σxx Jyxᵗ = [ 5.680×10⁻⁴     −8.315×10⁻⁷ ]
                     [ −8.315×10⁻⁷    1.803×10⁻⁹  ]

Thus

σc = √(5.680×10⁻⁴) = 0.024 m

and

σα = √(1.803×10⁻⁹) = 4.246×10⁻⁵ radians = 8.8″,

and the correlation coefficient of c and α is

ρcα = −8.315×10⁻⁷/((0.024)(4.246×10⁻⁵)) = −0.816.
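Example 6-5 can be checked numerically with the Jacobian form of the law (Eq. 6-20):

```python
import math
import numpy as np

# Nonlinear propagation for c = sqrt(a^2 - b^2) and alpha = asin(b/a)
# (Example 6-5), using the Jacobian evaluated at the measured values.
a, b = 416.050, 202.118   # m
sa, sb = 0.020, 0.012     # m

c = math.sqrt(a**2 - b**2)
J = np.array([[a / c,        -b / c],
              [-b / (a * c),  1.0 / c]])
Sxx = np.diag([sa**2, sb**2])
Syy = J @ Sxx @ J.T

sigma_c = math.sqrt(Syy[0, 0])
sigma_alpha = math.sqrt(Syy[1, 1])
print(round(sigma_c, 3))  # 0.024
```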
Thus,

σz² = 481 cm²

and

σz = √481 = 21.9 cm.

Solution II
Apply the special law of variance propagation to each step of the calculation.
Step 1:

Y1 = 3X1 + X2 + 2
Y2 = X1 + 2X2 + 3

σy1² = (3)²σ1² + (1)²σ2² = 9(25) + 9 = 234 cm²
σy2² = (1)²σ1² + (2)²σ2² = 25 + 36 = 61 cm²

Step 2:

Z = Y1 + Y2
σz² = σy1² + σy2² = 234 + 61 = 295 cm²
σz = √295 = 17.2 cm.

This solution is incorrect; see the discussion following the example.

Solution III
Apply the general law of variance and covariance propagation to each step of
the calculation.
Step 1:

Σyy = [ 3   1 ] [ (5.0)²   0      ] [ 3   1 ]ᵗ = [ 234   93 ]
      [ 1   2 ] [ 0        (3.0)² ] [ 1   2 ]    [ 93    61 ]

Step 2:

Z = [ 1   1 ] [ Y1 ]
              [ Y2 ]

Σzz = [ 1   1 ] [ 234   93 ] [ 1 ] = [481]
                [ 93    61 ] [ 1 ]

σz² = 481 cm²
σz = √481 = 21.9 cm.
Solutions I and III agree, and are correct. Solution II is incorrect because it fails
to take into account the correlation between Y1 and Y2. The off-diagonal
elements of Σyy in Solution III (93 cm²) are the covariance of Y1 and Y2. The
correlation coefficient calculated from this covariance is

ρ = 93/√((234)(61)) = 0.78.

Note that the variances in Σyy of Solution III (234 and 61) agree with the
variances calculated for Y1 and Y2 in Solution II. The problem with Solution II is
not in the calculation of the variances, but in its failure to take the covariance
into account.
The foregoing example illustrates the danger in using the special law of
propagation in steps. If the propagation can be done in one step using field
measurements that are independent, the special law can be applied, as it was
in Solution I. If the propagation is to be done in two or more steps, the general
law should be used, as it was in Solution III.
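The contrast between Solutions II and III is easy to reproduce (a sketch):

```python
import numpy as np

# Stepwise propagation for Example 6-6: step 1 is the linear map
# (Y1, Y2) = A1 (X1, X2) + const; step 2 sums Y1 and Y2. The general
# law carries the covariance (93) that Solution II drops.
Sxx = np.diag([5.0**2, 3.0**2])   # cm^2
A1 = np.array([[3.0, 1.0],
               [1.0, 2.0]])
Syy = A1 @ Sxx @ A1.T             # [[234, 93], [93, 61]]

A2 = np.array([[1.0, 1.0]])       # Z = Y1 + Y2
Szz = A2 @ Syy @ A2.T
print(float(Szz[0, 0]))           # 481.0 (correct)
print(Syy[0, 0] + Syy[1, 1])      # 295.0 (covariance wrongly ignored)
```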
EXAMPLE 6-7
Stations A and C in Fig. 6-2 are at the same elevation. Independent field
measurements are made of the base b, horizontal angles α and β, and vertical
angles ξ and η. The standard deviations of the measurements are:

σb = 0.008 m
σα = 10″ = 4.85×10⁻⁵ radians
σβ = 10″ = 4.85×10⁻⁵ radians
σξ = 15″ = 7.27×10⁻⁵ radians
ση = 15″ = 7.27×10⁻⁵ radians

Evaluate the height h, and the standard deviation of h, using: (a) computation in
one step; (b) computation in two steps.
Solution
(a) Computation in one step. Referring to Fig. 6-2, sides a and c can be
calculated from base b and angles α and β:

a = b sin α / sin(α + β),   c = b sin β / sin(α + β),

and

h = (1/2)(a tan ξ + c tan η) = [b/(2 sin(α + β))](sin α tan ξ + sin β tan η)
  = 94.232 m.

This function is also the basis for application of the special law of propagation
of variances. Since it is a nonlinear function of b, α, β, ξ, and η, it is first
necessary to evaluate the partial derivatives of h with respect to the five
measured quantities. Thus,

∂h/∂b = h/b = 94.232/127.552 = 0.739
∂h/∂α = b cos α tan ξ/(2 sin(α + β)) − h cot(α + β) = 315 m/rad
∂h/∂β = b cos β tan η/(2 sin(α + β)) − h cot(α + β) = 317 m/rad
∂h/∂ξ = b sin α sec²ξ/(2 sin(α + β)) = 227 m/rad
∂h/∂η = b sin β sec²η/(2 sin(α + β)) = 226 m/rad

Applying the special law of propagation of variances, we have

σh² = (∂h/∂b)²σb² + (∂h/∂α)²σα² + (∂h/∂β)²σβ² + (∂h/∂ξ)²σξ² + (∂h/∂η)²ση²
    = (0.739)²(0.008)² + (315)²(4.85×10⁻⁵)² + (317)²(4.85×10⁻⁵)²
      + (227)²(7.27×10⁻⁵)² + (226)²(7.27×10⁻⁵)²
    = 0.00105 m².

Thus σh = √0.00105 = 0.032 m.
(b) Computation in two steps. The height h can be computed in the following
two steps:

Step 1:

a = b sin α / sin(α + β) = 127.552 sin 82°41′13″ / sin 163°01′55″ = 433.508 m
c = b sin β / sin(α + β) = 127.552 sin 80°20′42″ / sin 163°01′55″ = 430.873 m

Step 2:

h = (1/2)(a tan ξ + c tan η)

In this computation, a and c are intermediate quantities that are very likely
correlated, because they are functions of the same measured quantities b, α,
and β. Thus, if σh is to be evaluated in steps, the general law of propagation of
variances and covariances should be used.
Let x be the vector of field measurements, y be the vector of intermediate
quantities, and h the height vector, consisting of the single element h. Thus,

x = [b, α, β, ξ, η]ᵗ,   y = [a, c, ξ, η]ᵗ,   and   h = [h].

Applying the general law of propagation of variances and covariances, we have:

Step 1: Σyy = Jyx Σxx Jyxᵗ;
Step 2: Σhh = Jhy Σyy Jhyᵗ;

where

Jyx = [ ∂a/∂b   ∂a/∂α   ∂a/∂β   0   0 ]
      [ ∂c/∂b   ∂c/∂α   ∂c/∂β   0   0 ]
      [ 0       0       0       1   0 ]
      [ 0       0       0       0   1 ]

and

Jhy = [ ∂h/∂a,  ∂h/∂c,  ∂h/∂ξ,  ∂h/∂η ].
6.5. PROPAGATION FOR LEAST SQUARES ADJUSTMENT OF INDIRECT
OBSERVATIONS

The model for least squares adjustment of indirect observations was given in
Chapter 4. In matrix notation, this model is

v + BΔ = f.     (6-29)

In this model, v is the vector of residuals, Δ is the vector of unknown
parameters, B is the matrix of numerical coefficients of the unknown
parameters, and

f = d − l.     (6-30)
(6-31)
(6-32)
(6-33)
(6-34)
(6-35)
(6-36)
and

l̂ = l + v.     (6-37)

With N = BᵗWB and

t = BᵗWf,     (6-38)

the least squares solution from Chapter 4 is

Δ = N⁻¹t.     (6-39)

The residual vector is then

v = f − BΔ = f − BN⁻¹t = f − BN⁻¹BᵗWf = (I − BN⁻¹BᵗW)f,     (6-40)

and the vector of adjusted observations is

l̂ = l + v = l + f − BΔ = d − BΔ.     (6-41)
All intermediate steps in Eqs. (6-42) through (6-45) are left as an exercise for
the student.
From Eqs. (6-44) and (6-45), it is easily seen that

Q_l̂l̂ = Q − Qvv.     (6-46)

Covariance matrices for t, Δ, v, and l̂ are obtained by multiplying the
corresponding cofactor matrices by the reference variance σ0².
EXAMPLE 6-8
For the level net in Example 4-7, calculate the covariance matrix for the
adjusted elevations of points B, C, and D, for the case of weighted
observations, given a reference variance of 0.0016 m². Also calculate the
standard deviations in the adjusted elevations of B, C, and D.
Solution
According to Eq. (6-43), and using the data in Example 4-7, we have:

QΔΔ = N⁻¹ = [ 0.337902   0.171085   0.183543 ]
            [ 0.171085   0.406418   0.172880 ]   (symmetric)
            [ 0.183543   0.172880   0.298256 ]

Multiplying by σ0² = 0.0016 m² gives ΣΔΔ, whose diagonal elements yield

σB = 10⁻²√5.406 = 0.023 m
σC = 10⁻²√6.502 = 0.025 m
σD = 10⁻²√4.773 = 0.022 m.
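The same numbers follow directly from the cofactor matrix in a few lines (the standard deviations agree with the example to rounding):

```python
import numpy as np

# Covariance matrix of the adjusted elevations (Example 6-8):
# Sigma = sigma0^2 * Q, with Q = N^-1 from the adjustment.
Q = np.array([[0.337902, 0.171085, 0.183543],
              [0.171085, 0.406418, 0.172880],
              [0.183543, 0.172880, 0.298256]])
s0sq = 0.0016   # m^2

Sigma = s0sq * Q
sd = np.sqrt(np.diag(Sigma))   # standard deviations of B, C, D
print(sd)
```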
A summary of the symbols and equations used in propagation for least squares
adjustment of indirect observations will be found in Section 9.5, Chapter 9.
6.6. PROPAGATION FOR LEAST SQUARES ADJUSTMENT OF
OBSERVATIONS ONLY
The condition model for adjustment of observations only was also given in
Chapter 4. In matrix notation, this model is
Av = f.     (6-47)
In this model, v is the vector of residuals, A is the matrix of numerical coefficients of the residuals, and
f = d − Al.     (6-48)
(6-49)
(6-50)
and

l̂ = l + v.     (6-55)
Applying the law of propagation of cofactors to Eqs. (6-53) and (6-54), we get

Qkk = Wc Qff Wcᵗ = Wc     (6-56)

and

Qvv = QAᵗ Qkk (QAᵗ)ᵗ,     (6-57)

which reduces to

Qvv = QAᵗ Wc AQ.     (6-58)

For the adjusted observations, the cofactor matrix reduces to

Q_l̂l̂ = Q − QAᵗ Wc AQ.     (6-60)

The reduction is left as an exercise for the student. From Eqs. (6-58) and
(6-60), it is obvious that

Q_l̂l̂ = Q − Qvv,     (6-61)

which is the same relationship as given by Eq. (6-46). Again, covariance
matrices for k, v, and l̂ are obtained by multiplying the corresponding cofactor
matrices by the reference variance σ0².
EXAMPLE 6-9
Calculate the covariance matrix and the standard deviations of the adjusted
angles in the plane triangle of Example 4-10, for case (2), unequal weights,
given σ0 = 10″.
Solution
From Eq. (6-53) and the data in Example 4-10, the diagonal elements of Q_l̂l̂
are found to be 0.78, 1.00, and 1.11, so that the standard deviations of the
adjusted angles are

σ1 = 10″ √0.78 = 8.8″
σ2 = 10″ √1.00 = 10.0″
σ3 = 10″ √1.11 = 10.5″
A summary of the symbols and equations used in propagation for least squares
adjustment of observations only will be found in Section 9.5, Chapter 9.
PROBLEMS
6-1 The interior angles of an n-sided closed traverse are measured. All
measurements are uncorrelated. If the standard deviation of each measurement
is 4.0″, evaluate the standard deviation of the sum of the measured angles for:
(a) n = 5; (b) n = 10; (c) n = 20.
6-2 Two interior angles of a plane triangle are measured, and the third angle is
computed from them. If the two angle measurements are uncorrelated, with
standard deviations 4.5″ and 6.0″, respectively, evaluate the standard deviation
of the computed angle.
6-3 A distance is measured in three segments. Each segment is measured
more than once and in each case a mean segment length (simple average) is
obtained. The distance is then computed as the sum of the mean segment
lengths. Following are the measurements, all uncorrelated, in meters:
Compute the distance and determine the standard deviation of the computed
distance if the standard deviation of a 100 m measurement is 0.010 m and the
variance of each measurement is directly proportional to its magnitude.
6-4 A distance is measured using three different methods. The observed values
and their standard deviations (all in meters) are shown below:
Determine the weighted mean of the observed distances and the standard
deviation of this weighted mean, assuming the three observations are independent.
6-5 Six independent determinations of the elevation of a point are made. These
values and their corresponding weights are shown below.
Compute the weighted mean of the six elevations and evaluate the standard
deviation of this weighted mean if a weight of 2 corresponds to a standard
deviation of 0.030 m.
6-6 The area of a trapezoidal parcel of land is computed as follows:
Area = [(a1 + a2)/2] b1 (sin α sin β)/sin(α + β).
1 1 0
1 2 1
0 1 2
D3
radians.
D1 D2
(1) Derive and use the relationship U = Y cos α − X sin α, where U is the
displacement of A′ from A″, as shown. (2) The errors ε1 and ε2, in radians, can
be approximated by U/D1 and Y/D2, respectively.
6-15 Determine the standard deviations of â and b̂ computed in Example 4-6,
case (2), Chapter 4.
6-16 Determine the standard deviation of the least squares estimate for x in
Example 4-9, Chapter 4.
6-17 With reference to Example 4-11, Chapter 4, determine the standard
deviation of each adjusted angle if the reference standard deviation σ0 is 12″.
6-18 With reference to Example 4-8, evaluate the standard deviations for the
computed values of Tmax and hmax if the standard deviation in observing
altitude with the theodolite is 10″.
7
Preanalysis of Survey
Measurements
Preanalysis of survey measurements is analysis of the component
measurements of a survey project before the project is actually undertaken.
Preanalysis is very helpful in the overall design of the survey project, because it
provides a basis for evaluating the accuracies of the survey measurements,
for meeting tolerances that may be imposed on these measurements, and for
selecting suitable instrumentation and measurement procedures.
In this introduction to preanalysis, we shall assume that all components of a
survey measurement are free of bias caused by systematic error. This means
that variances, or standard deviations, or multiples of standard deviations can
be used as measures of accuracy as well as measures of precision, and that
the propagation laws of Chapter 6 can be applied. We shall further assume that
all measurement components are independent , so that the special law of
propagation of variances, as expressed by Eqs. (6-22) and (6-24) in Chapter 6,
can be used. For convenience, the special law of propagation of variances is
expressed once more below.
Linear functions. If Y is a linear function of the independent measurements
X1, X2, ..., Xn, i.e.,

Y = a1 X1 + a2 X2 + ... + an Xn,

then

σy² = a1² σx1² + a2² σx2² + ... + an² σxn²,     (7-1)

where a1, a2, ..., an are constant coefficients; σx1², σx2², ..., σxn² are the
respective variances of X1, X2, ..., Xn; and σy² is the variance of Y.

Nonlinear functions. If Y is a nonlinear function of the independent
measurements X1, X2, ..., Xn, then

σy² = (∂Y/∂X1)² σx1² + (∂Y/∂X2)² σx2² + ... + (∂Y/∂Xn)² σxn².     (7-2)

Preanalysis not only makes use of Eqs. (7-1) and (7-2) directly, but inversely as
well, in order to evaluate the required accuracies of the input measurements Xi,
given the required accuracy of the resultant quantity Y. If we assume that each
input measurement contributes equally to the total accuracy of the end result,
then for the linear case we have

ai² σxi² = σy²/n,     (7-3)

from which we get

σxi = σy/(|ai| √n),     (7-4)

where |ai| denotes the absolute value of ai.
For the nonlinear case, again assuming an equal contribution from each input
measurement, we have

(∂Y/∂Xi)² σxi² = σy²/n,     (7-5)

from which we get

σxi = σy/(|∂Y/∂Xi| √n).     (7-6)

When each input measurement contributes equally to the accuracy of the end
result, the measurements are said to have balanced accuracies.
EXAMPLE 7-1
The dimensions of the base of a large rectangular reservoir are 85 m and 60 m,
to the nearest meter. If the area of the reservoir's base is to be determined with
a standard deviation of 0.6 m², evaluate the standard deviations with which the
length and width should be measured, assuming balanced accuracies.
Solution
The function is

A = LW,

where L and W are the length and width of the base, respectively.
Obviously, n = 2, and for purposes of analysis, L = 85 m and W = 60 m.
Now ∂A/∂L = W and ∂A/∂W = L, and so from Eq. (7-6) we have

σL = σA/(W√2) = 0.6/(60√2) = 0.007 m

and

σW = σA/(L√2) = 0.6/(85√2) = 0.005 m.

Thus, for balanced accuracies, the length and width of the reservoir's
base should be measured with standard deviations of 7 mm and 5 mm,
respectively, if a standard deviation of 0.6 m² in the area is to be
achieved.
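Eq. (7-6) applied to Example 7-1, as a sketch:

```python
import math

# Balanced accuracies (Eq. 7-6) for the area A = L*W:
# dA/dL = W, dA/dW = L, and n = 2 measured components.
L, W = 85.0, 60.0   # m
sigma_A = 0.6       # m^2
n = 2

sigma_L = sigma_A / (W * math.sqrt(n))
sigma_W = sigma_A / (L * math.sqrt(n))
print(round(sigma_L * 1000))  # 7 (mm)
print(round(sigma_W * 1000))  # 5 (mm)
```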
Often it is not possible to have a set of input measurements with
accuracies that are exactly balanced, because of limitations in the
instruments and/or procedures used in making the measurements. In
such cases, trade-offs are made in which the standard deviations of one
or more components are necessarily increased to reflect the imposed
instrument/procedure limitations, while the standard deviations in the
remaining components are correspondingly decreased.
EXAMPLE 7-2
The height h of a survey station A (Fig. 7-1) above the instrument center
at B is to be determined with a standard deviation of 0.010 m from
measurements of the slope distance s, the vertical angle α, and the
target height t. The function used is

h = s sin α − t

For the purpose of preanalysis, estimated values for s and α are 400 m
and 30°, respectively.
(a) Evaluate the standard deviations in measuring s, α, and t, assuming
balanced accuracies.
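The balanced-accuracy computation of part (a) can be sketched with Eq. (7-6). This is a minimal sketch; the function name and the numeric results are my own, not from the text:

```python
import math

def balanced_sigmas(sigma_y, partials):
    """Required input standard deviations for balanced accuracies, Eq. (7-6):
    sigma_xi = sigma_y / (|dY/dXi| * sqrt(n))."""
    n = len(partials)
    return [sigma_y / (abs(p) * math.sqrt(n)) for p in partials]

# Example 7-2: h = s*sin(alpha) - t, with s = 400 m and alpha = 30 degrees.
s, alpha = 400.0, math.radians(30.0)
partials = [math.sin(alpha), s * math.cos(alpha), -1.0]  # dh/ds, dh/dalpha, dh/dt
sigma_s, sigma_alpha, sigma_t = balanced_sigmas(0.010, partials)

sigma_alpha_sec = sigma_alpha * 206265  # radians to seconds of arc
```

Note that the partial derivative with respect to the angle is in meters per radian, so the resulting σ_α comes out in radians and is converted to seconds of arc.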
(7-7)

where D1 and D2 are the distances from the theodolite station A (Fig.
7-2) to targets T1 and T2, respectively, and

(7-8)

(7-9)

If both targets are aligned independently and with the same accuracy
(represented by the standard deviation σ_t) at distances D1 and D2,
respectively, from the theodolite, application of Eq. (7-1) yields the
following expression for the standard deviation σ_αt of the error in the
angle caused by errors in target alignment:

σ_αt = σ_t √(1/D1² + 1/D2²)   (7-10)

D1, D2, and σ_t in Eq. (7-10) must be expressed in the same units; σ_t
typically ranges from 0.5 to 5 mm.
The standard deviation σ_p in pointing at a survey target is influenced by
the telescope optics, the target design, and the atmospheric conditions
between the telescope and target. Four pointings are made for each
paired set of observations (two pointings at each target). Thus, for n sets
of observations, 4n pointings are made.
Under the assumption that all pointings are independent, and the mean of
the n sets is taken, application of Eq. (7-1) results in the following
standard deviation σ_αp for the error in the measured angle caused by
pointing error:

σ_αp = σ_p / √n   (7-11)

(7-12)

(7-13)

The range of values for σ_r is typically 1″ to 10″, depending upon the
particular instrument used.
Since the errors in centering the instrument, aligning the targets,
pointing, and reading the circle are additive, the combined standard
deviation σ_α in measuring the horizontal angle can be expressed as
follows, in accordance with Eq. (7-1):

σ_α = √(σ²_αc + σ²_αt + σ²_αp + σ²_αr)   (7-14)

Care must be taken to express all terms in the same units, usually
seconds of arc.
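The root-sum-square combination of independent error sources can be sketched as follows; the component values below are illustrative, not from the text:

```python
import math

def combined_angle_sigma(sigmas):
    """Combine independent, additive error sources by root-sum-of-squares,
    as in Eq. (7-14); all inputs must share the same units
    (here, seconds of arc)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical component standard deviations (centering, target
# alignment, pointing, circle reading), all in seconds of arc.
sigma_alpha = combined_angle_sigma([1.0, 1.5, 2.0, 6.0])
```

As the example suggests, the largest component (here 6.0″) dominates the combined result, which is why preanalysis focuses effort on the weakest error source.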
EXAMPLE 7-3
A repeating theodolite is to be used to measure the angles of a traverse,
the sides of which are essentially in a straight line. The length of each
traverse side is approximately 800 m. The standard deviations in
instrument centering, target alignment, pointing, and circle reading are,
respectively: σ_c = 2.0 mm; σ_t = 4.0 mm; σ_p = 2.0″; and σ_r = 6.0″.
Evaluate the expected standard deviation in measuring each angle of the
traverse if: (a) one set of observations is taken; (b) four sets are taken.
Solution
Since the traverse is in a straight line and each side is 800 m in length,
D1 = D2 = 800 m, and D3 = 1600 m.
(a) For one set of observations (n =1),
Thus
as before, and
Thus,
s = m(λ/2) + u + k   (7-15)

where
λ is the wavelength of modulation;
m is the whole number of half-wavelengths in the measurement (assumed
errorless);
u is the fractional part of a half-wavelength (λ/2) obtained by measuring
phase difference;
k is the zero correction, which includes the instrument and reflector
constants.
The wavelength of modulation, in turn, can be expressed as follows:

λ = c / (nf)   (7-16)

where
c is the velocity of light in vacuum;
n is the index of refraction of the atmosphere;
f is the modulation frequency.
Under the assumption that the errors in λ, u, and k are independent, the
special law of propagation of variance, Eq. (7-1), can be applied to Eq.
(7-15) to yield

σ²_s = (m/2)² σ²_λ + σ²_u + σ²_k   (7-17)

Similarly, assuming that errors in c, n, and f are independent, Eq. (7-2) can
be applied to Eq. (7-16) to yield

σ²_λ = λ² (σ²_c/c² + σ²_n/n² + σ²_f/f²)   (7-18)

Combining Eqs. (7-17) and (7-18), we get

σ²_s = (mλ/2)² (σ²_c/c² + σ²_n/n² + σ²_f/f²) + σ²_u + σ²_k   (7-19)

For the purpose of evaluating variance, we can approximate the slope
distance by using the first term (by far the largest) of Eq. (7-15) only, i.e.,

s ≈ m(λ/2)   (7-20)

Then

σ²_s = s² (σ²_c/c² + σ²_n/n² + σ²_f/f²) + σ²_u + σ²_k   (7-21)

Letting

a² = σ²_u + σ²_k  and  b² = σ²_c/c² + σ²_n/n² + σ²_f/f²

and taking the square root of both sides of Eq. (7-21), we get

σ_s = √(a² + b²s²)   (7-22)

The relative accuracies σ_c/c, σ_n/n, σ_f/f, and the resulting factor b are
customarily given in parts per million (ppm).
EXAMPLE 7-4
The velocity of light is known to be 299792.5 km/sec with a standard
deviation of 0.1 km/sec. If the index of refraction of the atmosphere can be
determined with a relative accuracy σ_n/n of 2.0 ppm, the modulation
frequency of a particular EDM instrument can be determined with a
relative accuracy σ_f/f of 1.0 ppm, phase difference can be measured with
a standard deviation of 5.0 mm, and the zero correction can be
determined with a standard deviation of 2.5 mm, evaluate the factors a
and b, and determine the standard deviation of a distance measurement of
2 km.

Solution
We are given that c = 299792.5 km/sec and σ_c = 0.1 km/sec. Then
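The remaining arithmetic follows Eq. (7-22). The sketch below uses the given values; the numeric results are my own computation, not reproduced from the text:

```python
import math

# Given values from Example 7-4.
c, sigma_c = 299792.5, 0.1           # km/sec
rel_n, rel_f = 2.0e-6, 1.0e-6        # sigma_n/n and sigma_f/f (2.0 and 1.0 ppm)
sigma_u, sigma_k = 5.0, 2.5          # mm

# Constant and distance-proportional factors of Eq. (7-22).
a = math.sqrt(sigma_u**2 + sigma_k**2)                  # mm
b = math.sqrt((sigma_c / c)**2 + rel_n**2 + rel_f**2)   # dimensionless

# Standard deviation of a 2 km distance measurement.
s = 2.0e6                                 # 2 km expressed in mm
sigma_s = math.sqrt(a**2 + (b * s)**2)    # mm
```

This gives a ≈ 5.6 mm, b ≈ 2.3 ppm, and σ_s ≈ 7.2 mm for the 2 km distance; note that the contribution of σ_c/c (about 0.33 ppm) is nearly negligible next to σ_n/n.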
(7-32)
computed area. Explain the unusual result obtained for . (Hint: pass a
circle through points A, B, and C and note what happens when b and are
held fixed and is varied.) What is a practical value to assign to o?
7-4 Height h (Fig. 7-6) is to be computed from independent
measurements of horizontal angles α and β, vertical angle γ, and
horizontal distance c. The vertical angle can be measured with a standard
deviation no less than 15″. If h must be determined with a standard
deviation no larger than 0.040 m, determine suitable values for the
standard deviations of the measurements, given α = β = 85°, γ = 15°, and
c = 50 m. Summarize the results in tabular form similar to Table 7-1.
7-5 A direction theodolite is to be used to measure the angles of a
traverse. The length of each traverse side is approximately 500 m. The
standard deviations in instrument centering, target alignment, pointing,
and circle reading are, respectively: σ_c = 1.5 mm; σ_t = 2.5 mm; σ_p = 2.5″;
and σ_r = 2.0″. Evaluate the expected standard deviation in measuring a
traverse angle β in n paired sets for: (a) β = 45°, n = 2; (b) β = 45°, n =
6; (c) β = 135°, n = 2; (d) β = 135°, n = 6.
7-6 The angles at the corners of a rectangular parcel of land, 100 m by
200 m, are to be measured with a repeating theodolite. For the particular
instrument used,
factors for this EDM and determine the standard deviations for
measurements of 500 m, 2.0 km, and 5.0 km.
7-8 The compensator of a surveyor's level allows the instrument to be
leveled with a standard deviation of 1.5″. The rod can be read with a
standard deviation of 0.010 mm per meter of sight distance. If sight
distances of 80 m are used, express the expected standard deviation in
determining elevation difference with this level and rod as a function of K,
the length of the level line in kilometers.
7-9 Two bench marks are 20 km apart. The elevation difference between
these two bench marks must be determined with a standard deviation that
is no greater than 0.010 m. Using the level and rod of Problem 7-8, two
level lines are run between the bench marks and a mean value is taken.
Determine the maximum sight distance that should be used.
7-10 In the subtense bar method of measuring distance, the horizontal
distance between the theodolite and subtense bar is computed as follows:

D = (b/2) cot(α/2)

where b is the length of the bar and α is the subtended angle (Fig. 7-7). If
b and α are uncorrelated, show that the standard deviation in the
computed value of D is approximately

σ_D = √[(D/b)² σ²_b + (D²/b)² σ²_α]

with σ_α expressed in radians.
where P is the pull applied to the tape in the field, P1 is the pull applied
during tape standardization, A is the cross-sectional area of the tape, E is
the modulus of elasticity of the tape material (E = 2.1 × 10⁴ kg/mm² for
carbon steel), l is the nominal length of the tape, and w is the mass of the
tape per unit length. A 2 km distance is measured with a 50 m steel tape
that is supported at its ends only. The cross-sectional area and mass of
the tape are 3.8 mm² and 0.030 kg/m, respectively, P1 = 10.0 kg (assumed
errorless) and P = 8.0 kg. Calculate the combined tension and sag
correction for the entire 2 km distance and determine the standard
deviation of this correction if the standard deviation in P is 0.5 kg. (Note:
Each tape length has its own independent pull.)
8
Introductory Statistical Analysis

X̄ = (1/n)(X1 + X2 + ... + Xn)   (8-1)

If the observations X1, X2, ..., Xn are n independent, identically distributed
random variables, each with mean μ and variance σ², then X̄ is also a
random variable, which can be shown to have mean μ and variance σ²/n
(see Section 8.5). The specific probability distribution of X̄ will depend on
the distribution of the population from which the observations are drawn; if
the population is normally distributed, X̄ will be normally distributed. Since
the observations will vary from sample to sample, the values of X̄ will
vary.
structed from a random sample drawn from a normal population are also
normally distributed.
Other distributions play important roles in dealing with samples drawn
from normal populations. These distributions are known as sampling distributions, the most common of which are the chi-square and Student (t) distributions. These two distributions are briefly discussed in the sections
which follow.
8.2. THE CHI-SQUARE DISTRIBUTION
If Z1, Z2, ..., Zn are n independent random variables, each with a standard
normal distribution (i.e., μ = 0 and σ = 1; see Section 5.4), it can be shown
(beyond the scope of this text) that the probability density function of

Y = Z1² + Z2² + ... + Zn²   (8-2)

is

f(y) = [1 / (2^(n/2) Γ(n/2))] y^(n/2 − 1) e^(−y/2),  y > 0   (8-3)*

Y is said to have a chi-square (χ²) distribution with n degrees of freedom.
The distribution parameter is n, and the mean and variance of Y are

μ_Y = n   (8-4)

and

σ²_Y = 2n   (8-5)

* Γ( ) is known as the gamma function, defined as the integral
Γ(k) = ∫₀^∞ x^(k−1) e^(−x) dx.

EXAMPLE 8-1
The random variable Y is the sum of the squares of four standard normal
random variables, i.e., Y has a chi-square distribution with four degrees of
freedom (n = 4). Evaluate and plot the density function of Y. Evaluate the
mean and standard deviation of Y.
Solution
For n = 4, Γ(n/2) = Γ(2) = 1! = 1. Thus, from Eq. (8-3), the density function
is

f(y) = (1/4) y e^(−y/2),  y > 0

For y = 0.5,

f(0.5) = (1/4)(0.5) e^(−0.25) = 0.097

For y = 1.0,

f(1.0) = (1/4)(1.0) e^(−0.5) = 0.152

Other evaluations of f(y) are given in Table 8-1, and the density function is
plotted in Fig. 8-1. The mean and standard deviation of Y are

μ_Y = n = 4  and  σ_Y = √(2n) = √8 = 2.83
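The tabulated density values can be checked with a short script based on Eq. (8-3); the function name is mine:

```python
import math

def chi2_pdf(y, n):
    """Chi-square probability density function, Eq. (8-3)."""
    return y**(n / 2 - 1) * math.exp(-y / 2) / (2**(n / 2) * math.gamma(n / 2))

# Example 8-1: n = 4 degrees of freedom.
f_05 = chi2_pdf(0.5, 4)            # density at y = 0.5
f_10 = chi2_pdf(1.0, 4)            # density at y = 1.0
mean, std = 4, math.sqrt(2 * 4)    # mean n, standard deviation sqrt(2n)
```

Evaluating the function over a grid of y values reproduces the shape plotted in Fig. 8-1.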
F(y) = P[Y ≤ y] = ∫₀^y f(u) du   (8-6)

where f(u) is the density function given by Eq. (8-3).
Bear in mind that F(y) is a probability. When this probability is fixed at
some specified value, F(y) = p, the corresponding value of y is known as
the pth fractile, or pth percentile. For the chi-square distribution with n
degrees
T = Z / √(Y/n)   (8-7)

is said to have a t distribution with n degrees of freedom. The probability
density function of the t distribution is

f(t) = [Γ((n+1)/2) / (√(nπ) Γ(n/2))] (1 + t²/n)^(−(n+1)/2)   (8-8)

*"Student" was the pseudonym of W. S. Gosset, the statistician who first
derived the t distribution.

μ_T = 0   (8-9)

and

σ²_T = n / (n − 2),  n > 2   (8-10)

The distribution is symmetric about zero.
EXAMPLE 8-2
The random variable T has a t distribution with four degrees of freedom.
Evaluate and plot the density function of T. Evaluate the standard
deviation of T.
Solution
For n = 4,

f(t) = [Γ(2.5) / (√(4π) Γ(2))] (1 + t²/4)^(−5/2) = 0.375 (1 + t²/4)^(−5/2)

and

σ_T = √(n/(n − 2)) = √2 = 1.41
It is obvious from Fig. 8-3 that the shape of the t distribution's density
function resembles that of the normal distribution. However, the two
distributions are not the same, particularly for small values of n. As n
increases, the t distribution approaches the standard normal distribution.
Indeed, for large n, the standard normal distribution can be used to
approximate the t distribution.
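That limiting behavior can be checked numerically from Eq. (8-8); the function names below are mine:

```python
import math

def t_pdf(t, n):
    """Student t probability density function, Eq. (8-8)."""
    coef = math.gamma((n + 1) / 2) / (math.sqrt(n * math.pi) * math.gamma(n / 2))
    return coef * (1 + t * t / n)**(-(n + 1) / 2)

def normal_pdf(z):
    """Standard normal density, for comparison."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

f4 = t_pdf(0.0, 4)       # 0.375, as in Example 8-2
f100 = t_pdf(0.0, 100)   # very close to the normal density at zero
```

At t = 0 the normal density is about 0.399, while the t density rises from 0.375 at n = 4 toward that value as n grows, illustrating the convergence.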
F(t) = P[T ≤ t] = ∫_−∞^t f(u) du   (8-11)

where f(u) is the density function given by Eq. (8-8). If this function is
fixed at some specified value p (i.e., F(t_p) = P[T ≤ t_p] = p), the value of t
associated with p is designated t_{p,n}, representing the pth percentile of
the t distribution with n degrees of freedom. Values of t_{p,n} are given in
Table III of Appendix B.
Common statistics that are used to locate the "center" of a sample include
the following.
Sample Mean. The sample mean, X̄, has already been defined in Eq.
(8-1); its value is the simple average of all values in the sample.
Sample Median. If the values in the random sample are arranged in order
of magnitude, the value of the sample median is the value in the middle of
the arranged set if n is odd, or it is the average of the two middle values
if n is even.
Sample Midrange. The sample midrange is the average of the largest and
smallest values in the sample.
EXAMPLE 8-3
Following are the results of 20 independent measurements of a distance,
made under the same conditions:
Arrange the values in order of magnitude (last two significant figures only):
55, 56, 58, 59, 59, 60, 60, 61, 61, 61, 61, 62, 62, 62, 63, 64, 64, 65, 68, 69.

The sample median is (61 + 61)/2 = 61,
where the values averaged are the tenth and eleventh values in order of
magnitude.
(8-12)
Sample Variance. The sample variance S² is defined as the following
function of the random sample:

S² = [1/(n − 1)] Σ (X_i − X̄)²   (8-13)

where X̄ is the sample mean. The reason for using n − 1 instead of n in Eq.
(8-13) will be taken up in Section 8.6.
Sample Standard Deviation. The sample standard deviation S is defined
as the positive square root of the sample variance.
EXAMPLE 8-4
For the random sample in Example 8-3, evaluate the sample range,
sample mean deviation, sample variance, and sample standard deviation.
Solution
Sample variance,
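The computations can be checked with the 20 last-two-digit values listed in Example 8-3 (the common leading digits shift the mean and median by a constant and do not affect the range, variance, or standard deviation); the numeric results here are my own:

```python
import statistics

# Last two significant figures of the 20 measurements in Example 8-3.
x = [55, 56, 58, 59, 59, 60, 60, 61, 61, 61,
     61, 62, 62, 62, 63, 64, 64, 65, 68, 69]

rng = max(x) - min(x)              # sample range
mean = statistics.mean(x)          # sample mean
median = statistics.median(x)      # average of 10th and 11th values (n even)
midrange = (max(x) + min(x)) / 2   # sample midrange
s2 = statistics.variance(x)        # sample variance, divisor n - 1 (Eq. 8-13)
s = statistics.stdev(x)            # sample standard deviation
```

Note that `statistics.variance` and `statistics.stdev` use the n − 1 divisor of Eq. (8-13), as opposed to `pvariance`/`pstdev`, which divide by n.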
Sample Covariance. If the random sample is drawn from a two-dimensional
population such that pairs of observations are obtained, it is possible to
evaluate a sample covariance. Let the random sample consist of the
paired observations (X1, Y1), (X2, Y2), ..., (Xn, Yn). The sample covariance
is then defined as

S_XY = [1/(n − 1)] Σ (X_i − X̄)(Y_i − Ȳ)   (8-14)
(8-18)
Applying Eqs. (8-16) and (8-17) to Eq. (8-18), and noting that
E[X1] = E[X2] = ... = E[Xn] = μ and σ_x1 = σ_x2 = ... = σ_xn = σ, we get,
for the expectation and variance of X̄,

E[X̄] = μ   (8-19)

and

σ²_X̄ = σ²/n   (8-20)

Obviously, since E[X̄] = μ, X̄ is an unbiased estimator of μ; and since the
variance of X̄ must approach zero as n goes to infinity, X̄ must be a
consistent estimator of μ. Hence, on the basis of unbiasedness and
consistency, X̄ is a good estimator of μ.
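Both properties can be illustrated by a quick simulation (an illustration of my own, not from the text): sample means drawn repeatedly from a normal population scatter around μ with variance close to σ²/n.

```python
import random
import statistics

# Draw many samples of size n from a normal population and examine
# how the sample means behave (Eqs. 8-19 and 8-20).
random.seed(1)
mu, sigma, n, trials = 10.0, 2.0, 25, 4000

means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(trials)]

avg_of_means = statistics.mean(means)      # should be close to mu
var_of_means = statistics.variance(means)  # should be close to sigma**2 / n = 0.16
```

Increasing n shrinks the scatter of the sample means, which is the consistency property in action.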
(8-21)