
Introduction to Adjustment Computations & Theory of Errors
Introduction
Theory of Errors: To understand, classify and minimize the Errors
Adjustment computations: To adjust the data for Parameter Estimation
Statistical Analysis & Testing: To analyze & validate the results
Importance of Theory of Errors & Statistics in Engineering:
- Quantitative Modeling, Analysis & Evaluation
- Decisions based on Insufficient, Incomplete and Inaccurate data
Examples:
(i). Dam Safety Analysis
(ii). Earthquake Hazard Analysis
(iii). Design of Traffic Intersections

Introduction to Adjustment Computations & Theory of Errors
Fundamental Concepts
True values of parameters: not known
Error = Observed Value − True Value
Correction + Observation = True (corrected) Value
Ex: A length is measured 3 times, with true (corrected) value l and errors e1, e2, e3:

l1 = l + e1
l2 = l + e2
l3 = l + e3

Aim: To obtain the best possible estimates of l and the errors e1, e2, e3


Purpose of Adjustment:
- Obtain unique estimates of the parameters
- Obtain estimates of accuracy & precision
- Statistical analysis & testing
- To fit the observations to the model

Introduction to Adjustment Computations & Theory of Errors
Conceptual Model
[Flow diagram: a priori information and observations (data) feed the mathematical model; a linear model goes directly to the adjustment, a non-linear model is first linearised; the adjustment (estimator) yields estimates of the parameters, their precision, and statistical testing.]

Introduction to Adjustment Computations & Theory of Errors
Theory of Errors & Applied Statistics
MODEL: Theoretical abstractions to which the measurements refer.
MATHEMATICAL MODEL: A theoretical system or an abstract concept, by which one
can mathematically describe a physical situation or a set of events.
(a) Functional model: Describes deterministic properties of events. It is a completely
fictitious construction, used to describe a set of physical events by an intelligible
system, suitable for Analysis:
(i) Geometric Model (ii) Dynamic Model (iii) Kinematic Model.
(b) Stochastic model: Model which designates and describes the non-deterministic or
probabilistic (stochastic) properties of variables involved.
ACCURACY: Measure of closeness of the observed value to the true value, in
absolute terms.
PRECISION: Measure of repeatability of observations, or internal consistency of
observations.

Introduction to Adjustment Computations & Theory of Errors
RELATIVE ACCURACY: Error / Measured quantity (true or observed); it is dimensionless (has no units).
ERRORS: (a) Blunders/Gross Errors/Mistakes:- Observational/ recording/ reading
errors, due to carelessness/oversight.
(b) Systematic Errors:- Errors which follow a systematic trend, and can be corrected
through mathematical modeling:
(i) Environmental Errors
(ii) Instrumental Errors
(iii) Personal Errors
(iv) Mathematical model Errors
(c) Random Errors:- Residual errors after removing blunders and systematic errors.
Inherent in most observations, they follow random behavior.
HISTOGRAM: A graphical /empirical description of the variability of experimental
information.

Introduction to Adjustment Computations & Theory of Errors
MEASURES OF CENTRAL TENDENCY (SAMPLE STATISTICS FOR POSITION MEASURES)
(a) Mean (Average): μ (for population) or Xm (for sample) = (1/n) Σ Xi : a unique value.
(b) Mode: The value corresponding to maximum frequency.
(c) Median: Central value(s).
(d) Range: Largest value − Smallest value
(e) Mid-Range: (Maximum value + Minimum value) / 2
MEASURES OF DISPERSION (SAMPLE STATISTICS FOR DISPERSION MEASURES)
(a) Mean deviation: (1/n) Σ |Xi − Xm|
(b) Sample Variance: Sx^2 = (1/(n−1)) Σ (Xi − Xm)^2 (reason for using (n−1): E[Sx^2] = σx^2)
(c) Standard Deviation: Sx = square root of the variance
(d) Sample Covariance: Sx,y = (1/(n−1)) Σ (Xi − Xm)(Yi − Ym)
(e) Max. Error, Median Error, Mean Error
(f) Correlation Coefficient: ρx,y = σx,y / (σx σy)
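As a worked illustration of these sample statistics, the following sketch (with made-up measurement values; numpy is assumed available) computes the position and dispersion measures listed above:

```python
# Illustrative sketch (hypothetical measurements) of the sample statistics listed above.
import numpy as np

x = np.array([25.03, 25.01, 25.04, 24.99, 25.02])   # repeated measurements (hypothetical)
y = np.array([12.51, 12.49, 12.53, 12.48, 12.50])   # a second measured quantity (hypothetical)

mean      = x.mean()                                  # Xm = (1/n) * sum(Xi)
rng       = x.max() - x.min()                         # range
mid_range = (x.max() + x.min()) / 2
mean_dev  = np.abs(x - mean).mean()                   # (1/n) * sum(|Xi - Xm|)
var       = x.var(ddof=1)                             # Sx^2 with (n-1) divisor, so E[Sx^2] = sigma^2
std       = x.std(ddof=1)                             # Sx
cov_xy    = np.cov(x, y, ddof=1)[0, 1]                # Sx,y
corr_xy   = cov_xy / (x.std(ddof=1) * y.std(ddof=1))  # rho_xy = Sx,y / (Sx * Sy)

print(mean, rng, mid_range, mean_dev, var, std, cov_xy, corr_xy)
```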

Introduction to Adjustment Computations & Theory of Errors
PROBABILITY: Numerical measure of the likelihood of the occurrence of an event
relative to a set of alternative events. It is a non-negative measure, associated
with every event.
or: the limit of the relative frequency of occurrence of an event, when the event is repeated
a large number of times (n → ∞).
RANDOM VARIABLE: If a stat. event (outcome of a stat. expt.) has several
possible outcomes, we associate with that event a stochastic or random variable
X, which can take on several possible values, with a specific probability
associated with each.

Introduction to Adjustment Computations & Theory of Errors
RANDOM EVENT: Event for which the relative frequency of occurrence
approaches a stable limit as the no. of observations or repetitions of an
experiment, n, is increased to infinity.
SAMPLE SPACE: The set of all possibilities in a probabilistic problem, where
each of the individual possibilities is a sample point. An event is a subset of
the sample space.
(a) Discrete Sample Spaces: Sample points are individually discrete entities, and countable.
e.g. throwing a die.
(b) Continuous Sample Spaces: Sample points can take an infinite number of values.
e.g. measuring a distance.

Introduction to Adjustment Computations & Theory of Errors
Covariance Matrix

For a vector X (n×1) = [x1, x2, ..., xn]^T, the covariance matrix is

            | σx1^2    σx1,x2   ...   σx1,xn |
Σ_X (n×n) = | σx2,x1   σx2^2    ...   σx2,xn |
            |   ...      ...    ...     ...  |
            | σxn,x1   σxn,x2   ...   σxn^2  |

a symmetric matrix, with non-negative diagonal elements.

Ex. For the coordinates of the 3-D position of a point P(X, Y, Z):

p = [X, Y, Z]^T

      | σX^2   σX,Y   σX,Z |
Σ_p = | σX,Y   σY^2   σY,Z |
      | σX,Z   σY,Z   σZ^2 |
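A minimal sketch (hypothetical numbers) of such a covariance matrix for a 3-D point, checking the symmetry and non-negative-diagonal properties stated above:

```python
# Minimal sketch (hypothetical values): covariance matrix of a 3-D point P(X, Y, Z).
import numpy as np

sigma_p = np.array([[0.040, 0.010, 0.005],   # [ sX^2   sXY    sXZ ]
                    [0.010, 0.030, 0.008],   # [ sXY    sY^2   sYZ ]
                    [0.005, 0.008, 0.050]])  # [ sXZ    sYZ    sZ^2 ]

assert np.allclose(sigma_p, sigma_p.T)       # symmetric
assert np.all(np.diag(sigma_p) >= 0)         # non-negative diagonal (variances)
print(np.sqrt(np.diag(sigma_p)))             # standard deviations of X, Y, Z
```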

Introduction to Adjustment Computations & Theory of Errors
Propagation of Covariance: To estimate the variance of Y, knowing the variance of X.

For Y = G * X + C :   Σ_Y = G * Σ_X * G^T

Ex. For
y1 = 2*x1 + 2*x2 + 2*x3 + 3
y2 = 3*x1 − x2 − 5
and
      | 4.5  1.2  1.3 |
Σ_x = | 1.2  3.2  2.1 |
      | 1.3  2.1  6.3 |
compute Σ_y and σy1,y2.
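A possible way to carry out this computation numerically (numpy assumed; the additive constants 3 and −5 drop out of the covariance propagation):

```python
# Propagation of covariance for the example above: Sigma_y = G * Sigma_x * G^T.
import numpy as np

# Coefficient matrix G from y1 = 2x1 + 2x2 + 2x3 + 3 and y2 = 3x1 - x2 - 5.
G = np.array([[2.0,  2.0, 2.0],
              [3.0, -1.0, 0.0]])

sigma_x = np.array([[4.5, 1.2, 1.3],
                    [1.2, 3.2, 2.1],
                    [1.3, 2.1, 6.3]])

sigma_y = G @ sigma_x @ G.T
print(sigma_y)            # [[92.8, 29.0], [29.0, 36.5]]
print(sigma_y[0, 1])      # sigma_y1,y2 = 29.0
```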

Fundamentals of Adjustment Computations

Fundamentals of Adjustment Computations

Linear Models:
(i). Straight line: y = a * x + b
(ii). Triangulation: ∠A + ∠B + ∠C = 180° + ε

Non-Linear Models:
(i). Range: R12 = sqrt[ (X2 − X1)^2 + (Y2 − Y1)^2 + (Z2 − Z1)^2 ]
(ii). Triangulation (sine rule): AB = BC * (sin C / sin A)

Linearization using Taylor's series:
f(x) = f(a) + [df(x)/dx]|x=a * (x − a) + non-linear terms
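A small sketch of this linearization applied to the range model above, using hypothetical approximate coordinates; the partial derivatives form the first-order term of the Taylor series:

```python
# First-order Taylor linearization of the range R12 about approximate coordinates (hypothetical values).
import numpy as np

def rng(p1, p2):
    """Slant range between points p1 and p2 (each an [X, Y, Z] array)."""
    return np.linalg.norm(p2 - p1)

p1_0 = np.array([100.0, 200.0, 50.0])    # approximate coordinates of point 1
p2_0 = np.array([400.0, 600.0, 80.0])    # approximate coordinates of point 2
r0 = rng(p1_0, p2_0)

# Partial derivatives of R12 w.r.t. the coordinates of point 2 (those for point 1 have opposite sign).
dR_dp2 = (p2_0 - p1_0) / r0

# Linearized range for a small shift of point 2: R ~ R0 + dR_dp2 . delta
delta = np.array([0.10, -0.05, 0.02])
print(r0 + dR_dp2 @ delta)      # first-order approximation
print(rng(p1_0, p2_0 + delta))  # exact value, for comparison
```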

Fundamentals of Adjustment Computations


For matrices Y and X related by Y = F(X):

Y = F(X0) + [∂F/∂X]|X=X0 * (X − X0) + non-linear terms

Thus,

                     | ∂f1/∂x1  ∂f1/∂x2  ...  ∂f1/∂xn |
G = [∂F/∂X]|X=X0  =  |   ...      ...    ...    ...   |
                     | ∂fn/∂x1  ∂fn/∂x2  ...  ∂fn/∂xn |

G is the Jacobian matrix / design matrix.

Ex: Variance of the volume of a cuboid, sphere, etc.
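For the cuboid case, one possible sketch (hypothetical side lengths and standard deviations) propagates the covariance of the measured sides through the Jacobian of V = L·W·H:

```python
# Variance of the volume of a cuboid V = L*W*H by covariance propagation (hypothetical values).
import numpy as np

L, W, H = 2.0, 1.5, 1.0                           # measured sides
sigma   = np.diag([0.01**2, 0.01**2, 0.02**2])    # covariance of (L, W, H), assumed uncorrelated

G = np.array([[W*H, L*H, L*W]])                   # Jacobian of V w.r.t. (L, W, H)
var_V = (G @ sigma @ G.T)[0, 0]
print(var_V, np.sqrt(var_V))                      # variance and standard deviation of the volume
```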

Fundamentals of Adjustment Computations


Weights & Weighted Means
Weighted Mean: X = Σ (Pi * li) / Σ Pi, where l1, l2, ... are observations with weights P1, P2, ...

Weight is inversely proportional to variance; σ0^2 : variance of unit weight.

Weight Matrix: P = σ0^2 * Σ^(−1)
- For no correlation: diagonal
- For equal weights and no correlation: identity matrix, I

A posteriori variance of unit weight:
σ0^2 (a posteriori) = (V^T P V) / (n − 1), for residuals Vi = xi − X
- Number of observations = n
- n − 1 = degrees of freedom = No. of observations − No. of parameters
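A short numerical sketch of these formulas (hypothetical observations and standard deviations), computing the weighted mean and the a posteriori variance of unit weight:

```python
# Weighted mean and a posteriori variance of unit weight for repeated observations (hypothetical values).
import numpy as np

l = np.array([100.02, 100.05, 99.98, 100.00])     # observations of the same quantity
s = np.array([0.01, 0.02, 0.01, 0.015])           # their standard deviations
sigma0_sq = 0.01**2                               # chosen variance of unit weight
p = sigma0_sq / s**2                              # weights: inversely proportional to variance

x_bar = np.sum(p * l) / np.sum(p)                 # weighted mean
v = l - x_bar                                     # residuals
dof = l.size - 1                                  # n - 1 (one parameter estimated)
sigma0_sq_post = (v @ (p * v)) / dof              # V^T P V / (n - 1)

print(x_bar, sigma0_sq_post)
```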

Fundamentals of Adjustment Computations


Weights & Weighted Means
Mean Square Error (MSE): MX^2 = σ^2 + β^2, where β = μ − τ is the bias and τ is the true value.
Average Error: e_av = 0.7979 * σ
Probable Error (PE): PE = 0.6745 * σ (corresponds to the 75th percentile)
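The two numerical factors quoted above can be checked with the Python standard library (0.7979 = sqrt(2/π); 0.6745 is the 75th percentile of the standard normal distribution):

```python
# Check the average-error and probable-error factors for a standard normal error distribution.
from math import pi, sqrt
from statistics import NormalDist

print(sqrt(2 / pi))                          # ~0.7979: e_av = 0.7979 * sigma
print(NormalDist(0.0, 1.0).inv_cdf(0.75))    # ~0.6745: PE = 0.6745 * sigma (75th percentile)
```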

Fundamentals of Adjustment Computations


Least Squares Estimator
Need of an Estimator:
Consider a system of 3 linear equations in 2 unknowns (X1, X2), each relating the unknowns to an observation and its residual:
ai1 * X1 + ai2 * X2 = li + vi,   i = 1, 2, 3

For u unknowns and n observations, three cases arise:
- n = u : unique solution
- n < u : indeterminate
- n > u : infinite solutions

For case (iii), additional conditions are required.

The best criterion is: the sum of squares of the residuals is a minimum,

Σ Vi^2 = min,   or   ∂(Σ Vi^2)/∂x1 = 0 and ∂(Σ Vi^2)/∂x2 = 0
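A minimal numeric sketch of this criterion for an n = 3, u = 2 system (coefficients and observations are hypothetical); np.linalg.lstsq returns the solution that minimizes Σ Vi²:

```python
# Minimal least-squares sketch: 3 observation equations, 2 unknowns (hypothetical numbers).
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])         # coefficients of X1, X2
l = np.array([3.02, 0.98, 5.03])    # observations

x_hat, *_ = np.linalg.lstsq(A, l, rcond=None)   # minimizes sum(v_i^2)
v = A @ x_hat - l                               # residuals
print(x_hat)                                    # estimated X1, X2
print(v, np.sum(v**2))                          # residuals and their (minimum) sum of squares
```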

Fundamentals of Adjustment Computations


Least Squares Estimator
The Least Squares estimator is statistically the best estimator, as:
- It is a unique estimator
- It is an unbiased estimator, satisfying E[V] = 0 : the Best Linear Unbiased Estimator (B.L.U.E.)
- It is a minimum-variance estimator, satisfying σxi^2 = min
- It is the most probable estimator, i.e. X_LS = most probable value of X, or Probability(X_LS = X) = max
- It allows the statistical parameters of the adjustment to be computed

Fundamentals of Adjustment Computations


Methods of Least Squares Estimation

(i) Method of Observation Equations
(a) Linear: L = A * X
(b) Non-Linear: L = F(X)

(ii) Method of Condition Equations: F(L) = 0

(iii) Method of combination of Observation Equations & Condition Equations: F(L, X) = 0

Fundamentals of Adjustment Computations


(i) Method of Observation Equations
Observations expressed as a function (linear or non-linear) of the parameters:
L = F(X)

Linear Models: L (n×1) = A (n×u) * X (u×1)
where n = number of observations, u = number of unknown parameters, A = coefficient matrix (n×u).
DF = n − u.

Residuals: V = L(adjusted) − L(observed) = A X − L
where Σ = covariance matrix of the observations, and P = σ0^2 * Σ^(−1).

Observation Equations: V = A X − L
Minimizing Function: Φ = V^T V, or V^T P V (with P = σ0^2 Σ^(−1))

Fundamentals of Adjustment Computations


(i) Method of Observation Equations:

By minimizing this, i.e. ∂Φ/∂X = 0,

we can derive the
Normal Equations: A^T P A X − A^T P L = 0, i.e. N X − U = 0

Solution: X = (A^T P A)^(−1) A^T P L = N^(−1) U
N is the normal matrix: A^T * P * A
U is the matrix: A^T * P * L
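A short sketch of this solution for a hypothetical design matrix, observations and (uncorrelated) weight matrix:

```python
# Observation-equation solution X = (A^T P A)^-1 A^T P L (hypothetical numbers).
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])                 # design matrix (n x u)
L = np.array([3.02, 0.98, 5.03])            # observations
P = np.diag([1.0, 4.0, 2.0])                # weight matrix (no correlation assumed)

N = A.T @ P @ A                             # normal matrix
U = A.T @ P @ L
X_hat = np.linalg.solve(N, U)               # equivalent to N^-1 U
print(X_hat)
```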

Fundamentals of Adjustment Computations


Estimate of precision of the estimated parameters:

Σ_X = σ0^2 (A^T P A)^(−1) = σ0^2 N^(−1)

A posteriori variance of unit weight:

σ0^2 (a posteriori) = (V^T P V) / (n − u)
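Continuing the same hypothetical example, the a posteriori variance of unit weight and the covariance of the estimated parameters can be computed as:

```python
# A posteriori variance of unit weight and precision of the estimated parameters
# (continuing the hypothetical numbers used above).
import numpy as np

A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  1.0]])
L = np.array([3.02, 0.98, 5.03])
P = np.diag([1.0, 4.0, 2.0])

N = A.T @ P @ A
X_hat = np.linalg.solve(N, A.T @ P @ L)

V = A @ X_hat - L                            # residuals
n, u = A.shape
sigma0_sq = (V @ P @ V) / (n - u)            # V^T P V / (n - u)
sigma_X = sigma0_sq * np.linalg.inv(N)       # covariance of the estimated parameters
print(sigma0_sq)
print(sigma_X)
```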
