
Kriging

Research Centre Foulum, 10th April 2003

Lecture 1: Theory of kriging:

1. Ordinary kriging.
2. Simple kriging.

Lecture 2: Kriging examples: The effect on kriging of changing

1. variogram,
2. target point, and
3. sampling intensities.

Lecture 3: Theory of kriging:

1. Universal kriging.
2. Robust kriging.
3. Block kriging.
4. Median kriging.
5. Lognormal kriging.

Lecture 4: Kriging examples:

1. Block kriging.
2. Effect of anisotropy.
3. Irregularly spaced data.
4. Mapping using kriging.


What is kriging?

Kriging is a spatial prediction algorithm based on a continuous model of stochastic spatial variation.

In general, the method combines a prediction of the mean with a prediction of the residual process at a given location. The distinctive part of kriging is the prediction of the residual process, typically by means of a covariance function or variogram.

Different kinds of kriging methods exist, corresponding to different assumptions about the mean structure of the model, $E[Z(x)] = \mu(x)$:

Simple kriging: The mean is a known constant, i.e. $E[Z(x)] = \mu$.

Ordinary kriging: The mean is unknown but constant and needs to be estimated.

Universal kriging: The mean is unknown but a linear combination of known functions of location,

$$\mu(x) = \sum_{i=1}^{p} \beta_i f_i(x) = \boldsymbol{\beta}'\mathbf{f}(x),$$

where the vector $\boldsymbol{\beta}$ of parameters $(\beta_1, \beta_2, \ldots, \beta_p)$ has to be estimated.

The objective of kriging

We want to estimate the value of a random function $Z(x)$ at one or more unsampled points in a region $D$ from sample data $\{z(x_1), z(x_2), \ldots, z(x_n)\}$ at points $x_1, x_2, \ldots, x_n$.

Assume that we have an intrinsically stationary process and that we want to predict at $x_0$. As predictor $\hat{Z}(x_0)$ we choose a linear combination of the observations, i.e.

$$\hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i),$$

a weighted average of the data with weights $\lambda_i$.
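The variogram $\gamma$ is the key modelling ingredient in everything that follows, but the lectures do not commit to a particular model at this point. As a concrete stand-in for the sketches below, here is a minimal Python implementation of the standard spherical model; the function name, parametrisation, and any numbers are illustrative assumptions, not part of the lectures.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=1.0):
    """Spherical variogram model (an assumed, standard admissible choice).

    gamma(h) = nugget + (sill - nugget)*(1.5 h/rng - 0.5 (h/rng)^3) for 0 < h < rng,
    gamma(h) = sill for h >= rng, and gamma(0) = 0 by definition.
    """
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)
```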


We want the predictor to be unbiased, i.e.

$$E[\hat{Z}(x_0)] = E[Z(x_0)],$$

which (in most cases: ordinary and universal kriging) implies that

$$\sum_{i=1}^{n} \lambda_i = 1.$$

This is easiest to show in the case of ordinary kriging:

$$E[\hat{Z}(x_0)] = \sum_{i=1}^{n} \lambda_i E[Z(x_i)] = \mu \sum_{i=1}^{n} \lambda_i,$$

which must equal $\mu = E[Z(x_0)]$, such that $\sum_{i=1}^{n} \lambda_i = 1$.

Now the job is to choose the weights $\lambda_i$.

The weights are chosen to minimize the prediction variance

$$\mathrm{var}\{\hat{Z}(x_0) - Z(x_0)\} = E\{(\hat{Z}(x_0) - Z(x_0))^2\},$$

which is a function of $\lambda_1, \lambda_2, \ldots, \lambda_n$, subject to the condition

$$\sum_{i=1}^{n} \lambda_i = 1.$$

The method of Lagrange multipliers is used to find the minimum of a multivariate function subject to a side condition: find the minimum of the Lagrangian function

$$\ell(\lambda_1, \lambda_2, \ldots, \lambda_n, m) = E\{(\hat{Z}(x_0) - Z(x_0))^2\} - 2m\left(\sum_{i=1}^{n} \lambda_i - 1\right),$$

by differentiating with respect to each $\lambda_i$, $i = 1, 2, \ldots, n$, and solving the resulting system of equations for the weights $\lambda_1, \lambda_2, \ldots, \lambda_n$. The quantity $m$ is the Lagrange multiplier, which depends on $x_0$.
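As a toy illustration of the Lagrange-multiplier mechanics (not from the lectures; the matrix A and vector b below are made up), one can minimize a quadratic in the weights subject to a sum-to-one constraint by solving a single bordered linear system — the same device used for the kriging equations in the next section, up to sign conventions:

```python
import numpy as np

# Minimize f(lam) = lam' A lam - 2 b' lam subject to sum(lam) = 1.
# Stationarity of l(lam, m) = f(lam) - 2 m (sum(lam) - 1) gives the
# bordered linear system:  A lam - m 1 = b,  1' lam = 1.
A = np.array([[2.0, 0.5, 0.2],
              [0.5, 1.5, 0.3],
              [0.2, 0.3, 1.0]])   # assumed symmetric positive definite
b = np.array([1.0, 0.8, 0.5])
n = len(b)

bordered = np.zeros((n + 1, n + 1))
bordered[:n, :n] = A
bordered[:n, n] = -1.0            # coefficient of the multiplier m
bordered[n, :n] = 1.0             # constraint row: weights sum to one
rhs = np.append(b, 1.0)

sol = np.linalg.solve(bordered, rhs)
lam, m = sol[:n], sol[n]
print("weights:", lam, "sum:", lam.sum(), "multiplier:", m)
```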

Ordinary Kriging

We assume that $Z(x)$ is intrinsically stationary and that

$$E[Z(x)] = \mu \ \text{(constant but unknown)},$$

$$\mathrm{var}\{Z(x_i) - Z(x_j)\} = 2\gamma(x_i - x_j),$$

$$\hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i).$$

We find

$$E\{(\hat{Z}(x_0) - Z(x_0))^2\} = 2\sum_{i=1}^{n} \lambda_i \gamma(x_0 - x_i) - \sum_{i=1}^{n}\sum_{j=1}^{n} \lambda_i \lambda_j \gamma(x_i - x_j).$$

Substituting this into $\ell(\lambda_1, \lambda_2, \ldots, \lambda_n, m)$ and differentiating with respect to $\lambda_i$ gives

$$\frac{\partial}{\partial \lambda_i}\,\ell(\lambda_1, \lambda_2, \ldots, \lambda_n, m) = 2\gamma(x_0 - x_i) - 2\sum_{j=1}^{n} \lambda_j \gamma(x_i - x_j) - 2m,$$

for $i = 1, 2, \ldots, n$.

Equating to zero we get the following system of equations:

$$\gamma(x_0 - x_i) = \sum_{j=1}^{n} \lambda_j \gamma(x_i - x_j) + m, \quad \text{for } i = 1, 2, \ldots, n,$$

$$\sum_{i=1}^{n} \lambda_i = 1.$$

Using

$$\boldsymbol{\gamma}_0 = (\gamma(x_1 - x_0), \gamma(x_2 - x_0), \ldots, \gamma(x_n - x_0), 1)',$$

$$\boldsymbol{\lambda}_m = (\lambda_1, \lambda_2, \ldots, \lambda_n, m)',$$

$$\Gamma = \begin{pmatrix}
\gamma(x_1 - x_1) & \gamma(x_1 - x_2) & \cdots & \gamma(x_1 - x_n) & 1 \\
\gamma(x_2 - x_1) & \gamma(x_2 - x_2) & \cdots & \gamma(x_2 - x_n) & 1 \\
\vdots & \vdots & & \vdots & \vdots \\
\gamma(x_n - x_1) & \gamma(x_n - x_2) & \cdots & \gamma(x_n - x_n) & 1 \\
1 & 1 & \cdots & 1 & 0
\end{pmatrix},$$


we may write the system of equations in matrix form as

$$\boldsymbol{\gamma}_0 = \Gamma \boldsymbol{\lambda}_m.$$

This system has a unique solution for $\boldsymbol{\lambda}_m$ if $\Gamma$ is invertible. Using only admissible theoretical variograms, i.e. $\gamma$ conditionally negative definite, $\Gamma$ is indeed invertible. Thus,

$$\boldsymbol{\lambda}_m = \Gamma^{-1}\boldsymbol{\gamma}_0,$$

such that for $\boldsymbol{\lambda}_{-m} = (\lambda_1, \lambda_2, \ldots, \lambda_n)'$,

$$\hat{Z}(x_0) = \mathbf{z}'\boldsymbol{\lambda}_{-m}.$$

The minimized prediction variance

$$\mathrm{var}\{\hat{Z}(x_0) - Z(x_0)\} = E\{(\hat{Z}(x_0) - Z(x_0))^2\}$$

is also called the kriging variance by some authors and is denoted by $\sigma^2(x_0)$. We find

$$\sigma^2(x_0) = \sum_{i=1}^{n} \lambda_i \gamma(x_i - x_0) + m = \boldsymbol{\lambda}_m'\boldsymbol{\gamma}_0,$$

which is easily calculated once $\boldsymbol{\lambda}_m$ and $\boldsymbol{\gamma}_0$ have been found.

Note that only $\boldsymbol{\gamma}_0$ needs to be recalculated for a new point $x_0$. The matrix $\Gamma$ does not change for new positions $x_0$, but only when sample positions are modified or another variogram is chosen.
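Putting the pieces together, here is a minimal numpy sketch of the ordinary kriging system just derived: it assembles the bordered matrix $\Gamma$ and the vector $\boldsymbol{\gamma}_0$, solves $\Gamma\boldsymbol{\lambda}_m = \boldsymbol{\gamma}_0$, and returns $\hat{Z}(x_0)$ and $\sigma^2(x_0)$. The spherical variogram (repeated so the snippet is self-contained), the coordinates, and the data values are illustrative assumptions only.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=1.0):
    """Assumed spherical variogram model, for illustration only."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def ordinary_kriging(coords, z, x0, gamma):
    """Solve Gamma lam_m = gamma_0; return (Zhat(x0), kriging variance sigma^2(x0))."""
    n = len(z)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Gamma = np.ones((n + 1, n + 1))
    Gamma[:n, :n] = gamma(d)            # gamma(x_i - x_j) block
    Gamma[n, n] = 0.0                   # bordered: ones in last row/column, zero corner
    g0 = np.append(gamma(np.linalg.norm(coords - x0, axis=1)), 1.0)
    lam_m = np.linalg.solve(Gamma, g0)  # lam_m = (lam_1, ..., lam_n, m)
    zhat = z @ lam_m[:n]                # Zhat(x0) = z' lam_-m
    sigma2 = lam_m @ g0                 # sigma^2(x0) = lam_m' gamma_0
    return zhat, sigma2

# Hypothetical sample data, for illustration only.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.2, 0.8, 1.0, 0.6])
gamma = lambda h: spherical(h, nugget=0.1, sill=1.0, rng=2.0)
print(ordinary_kriging(coords, z, np.array([0.5, 0.5]), gamma))
```

Since $\Gamma$ does not depend on $x_0$, a mapping application would typically factorize $\Gamma$ once (e.g. with scipy.linalg.lu_factor) and reuse the factorization for every target point, recomputing only $\boldsymbol{\gamma}_0$.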

Simple Kriging

We assume that $Z(x)$ is second-order stationary and that

$$E[Z(x)] = \mu \ \text{(constant and known)},$$

$$\mathrm{var}\{Z(x_i) - Z(x_j)\} = 2\gamma(x_i - x_j),$$

$$\hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i).$$

We estimate $Z(x_0)$ by

$$\hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i) + \mu\left(1 - \sum_{i=1}^{n} \lambda_i\right),$$

where the $\lambda_i$ are the weights, since then

$$E\{Z(x_0) - \hat{Z}(x_0)\} = 0,$$

such that $\hat{Z}(x_0)$ is an unbiased estimator.

Note that we do not need the constraint $\sum_{i=1}^{n} \lambda_i = 1$.

We minimize the prediction variance, i.e. minimize

$$\mathrm{var}\{\hat{Z}(x_0) - Z(x_0)\} = E\{(\hat{Z}(x_0) - Z(x_0))^2\}.$$

The weights are found as before by solving the linear system of equations

$$\boldsymbol{\gamma}_0 = \Gamma \boldsymbol{\lambda},$$

where now

$$\boldsymbol{\gamma}_0 = (\gamma(x_1 - x_0), \gamma(x_2 - x_0), \ldots, \gamma(x_n - x_0))',$$

$$\boldsymbol{\lambda} = (\lambda_1, \lambda_2, \ldots, \lambda_n)',$$

$$\Gamma = \begin{pmatrix}
\gamma(x_1 - x_1) & \gamma(x_1 - x_2) & \cdots & \gamma(x_1 - x_n) \\
\gamma(x_2 - x_1) & \gamma(x_2 - x_2) & \cdots & \gamma(x_2 - x_n) \\
\vdots & \vdots & & \vdots \\
\gamma(x_n - x_1) & \gamma(x_n - x_2) & \cdots & \gamma(x_n - x_n)
\end{pmatrix}.$$
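The corresponding sketch for simple kriging, again with the assumed spherical variogram and made-up data: the system is unbordered, the weights need not sum to one, and the known mean $\mu$ absorbs the deficit.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=1.0):
    """Assumed spherical variogram model, for illustration only."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

def simple_kriging(coords, z, x0, mu, gamma):
    """Solve gamma_0 = Gamma lam (no constraint); return (Zhat(x0), sigma^2(x0))."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    Gamma = gamma(d)                              # plain n x n system, no border
    g0 = gamma(np.linalg.norm(coords - x0, axis=1))
    lam = np.linalg.solve(Gamma, g0)
    zhat = z @ lam + mu * (1.0 - lam.sum())       # known mean picks up the slack
    sigma2 = lam @ g0                             # sigma^2(x0) = lam' gamma_0
    return zhat, sigma2

# Hypothetical data with an assumed known mean mu = 1.0, for illustration only.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.2, 0.8, 1.0, 0.6])
gamma = lambda h: spherical(h, nugget=0.1, sill=1.0, rng=2.0)
print(simple_kriging(coords, z, np.array([0.5, 0.5]), mu=1.0, gamma=gamma))
```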


In the case of simple kriging the prediction or kriging variance is

$$\sigma^2(x_0) = \sum_{i=1}^{n} \lambda_i \gamma(x_i - x_0) = \boldsymbol{\lambda}'\boldsymbol{\gamma}_0,$$

with $\boldsymbol{\lambda}$ and $\boldsymbol{\gamma}_0$ defined as above.

Weights

1. Near points carry more weight than more distant ones.
   (a) Their relative proportions depend on the positions of the sampling points and on the variogram.
   (b) The larger the nugget variance, the smaller the weights of the points nearest to the target point.
2. Clustered points are weighted less than isolated ones at the same distance.
3. Data points can be screened by points lying between them and the target point.

In many practical situations, the nearest 4 or 5 points to the target point carry about 80% of the total weight, with the next 10 or so accounting for almost all of the remainder.
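A small numerical check of these qualitative statements (again with made-up 1-D positions and the assumed spherical model): the ordinary kriging weights for a target at $x = 0$ show the near point dominating, while the point sitting behind it is screened and receives almost no weight despite being closer than the isolated point on the other side.

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=1.0):
    """Assumed spherical variogram model, for illustration only."""
    h = np.asarray(h, dtype=float)
    g = np.where(h < rng,
                 nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3),
                 sill)
    return np.where(h == 0.0, 0.0, g)

# 1-D sample positions: x = 1 is near the target at 0; x = 2 sits behind it
# (screened); x = -3 is isolated on the other side.
x = np.array([1.0, 2.0, -3.0])
n = len(x)
gamma = lambda h: spherical(h, rng=5.0)

Gamma = np.ones((n + 1, n + 1))
Gamma[:n, :n] = gamma(np.abs(x[:, None] - x[None, :]))
Gamma[n, n] = 0.0
g0 = np.append(gamma(np.abs(x - 0.0)), 1.0)

lam = np.linalg.solve(Gamma, g0)[:n]
# Roughly: weight(x=1) ~ 0.81, weight(x=2) ~ -0.06 (screened), weight(x=-3) ~ 0.24.
print(dict(zip(x, np.round(lam, 3))))
```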

End of lecture 1

