The objective of kriging

We want to estimate the value of a random function Z(x) at one or more unsampled
points in a region D from sample data {z(x1), z(x2), . . . , z(xn)} at points
x1, x2, . . . , xn.

In general, the method combines a prediction of the mean and a prediction of the
residual process for a given location. The aspects of kriging are especially concerned
with the prediction of the residual process, typically by using a covariance function
or variogram.

Different kinds of kriging methods exist, which pertain to the assumptions about the
mean structure of the model, E Z(x) = µ(x):

Simple kriging: The mean is a known constant, i.e. E Z(x) = µ.

Ordinary kriging: The mean is unknown but constant and needs to be estimated.

Universal kriging: The mean µ(x) varies with location and is modelled through a
vector β of parameters (β1, β2, . . . , βp) that has to be estimated.

Assume that we have an intrinsically stationary process and that we want to predict
at x0. As predictor, Ẑ(x0), we choose to use a linear combination of the observed
points, i.e.

    Ẑ(x0) = Σ_{i=1}^n λi Z(xi),

a weighted average of the data with the λi as the weights.
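The variogram mentioned above is typically estimated from the data before any kriging is done. As a minimal sketch (not part of the slides), the classical Matheron estimator γ̂(h) = (1/(2 N(h))) Σ (z(xi) − z(xj))², taken over the N(h) pairs whose separation falls in a distance bin around h, can be computed as follows; the function name and the 1-D setting are illustrative assumptions:

```python
import numpy as np

def empirical_semivariogram(x, z, bins):
    """Matheron's classical estimator on 1-D locations:
    gamma_hat(h) = (1 / (2 N(h))) * sum over the N(h) pairs whose
    distance falls in the bin of (z(x_i) - z(x_j))**2."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    iu = np.triu_indices(len(x), k=1)          # each pair (i, j), i < j, once
    d = np.abs(x[:, None] - x[None, :])[iu]    # pairwise distances
    sq = (z[:, None] - z[None, :])[iu] ** 2    # squared increments
    gamma = np.full(len(bins) - 1, np.nan)     # NaN where a bin has no pairs
    for k in range(len(bins) - 1):
        in_bin = (d >= bins[k]) & (d < bins[k + 1])
        if in_bin.any():
            gamma[k] = sq[in_bin].sum() / (2.0 * in_bin.sum())
    return gamma
```

In practice a parametric variogram model would then be fitted to these binned estimates before solving the kriging equations.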
Research Centre Foulum, 10th April 2003
We want the estimate to be unbiased, i.e.

    E(Ẑ(x0)) = E(Z(x0)),

which (in most cases: ordinary and universal kriging) implies that

    Σ_{i=1}^n λi = 1.

This is easiest shown in the case of ordinary kriging:

    E(Ẑ(x0)) = Σ_{i=1}^n λi E(Z(xi)) = µ Σ_{i=1}^n λi,

which must equal µ = E(Z(x0)), such that Σ_{i=1}^n λi = 1.

Now the job is to choose the weights λi. The weights are chosen in order to minimize
the prediction variance,

    var{Ẑ(x0) − Z(x0)} = E{(Ẑ(x0) − Z(x0))²},

which is a function of λ1, λ2, . . . , λn, subject to the condition

    Σ_{i=1}^n λi = 1.

The method of Lagrange multipliers is used to find a minimum of a multivariate
function subject to a side condition: find the minimum of the Lagrangian function

    ℓ(λ1, λ2, . . . , λn, m) = E{(Ẑ(x0) − Z(x0))²} − 2m (Σ_{i=1}^n λi − 1),

by differentiating this function with respect to each λi, i = 1, 2, . . . , n, and solving
the corresponding system of equations with respect to the weights λ1, λ2, . . . , λn.
The quantity m is the Lagrange multiplier, which depends on x0.
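The Lagrange-multiplier step can be illustrated on a generic quadratic objective (this sketches the technique, not yet the kriging system): minimizing f(λ) = ½ λ′Aλ − b′λ subject to Σ λi = 1 turns into one bordered linear system from the stationarity conditions A λ − m·1 = b, 1′λ = 1. The function name is an assumption:

```python
import numpy as np

def minimize_with_sum_constraint(A, b):
    """Minimize f(lam) = 0.5 * lam' A lam - b' lam  subject to  sum(lam) = 1.
    Setting the gradient of the Lagrangian to zero gives the bordered
    linear system  A lam - m * 1 = b,  1' lam = 1."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    ones = np.ones((n, 1))
    # assemble [[A, -1], [1', 0]] and the right-hand side [b, 1]
    K = np.block([[A, -ones], [ones.T, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.append(b, 1.0))
    return sol[:n], sol[n]   # the weights and the multiplier m
```

For A = 2I and b = 0 (minimizing λ1² + λ2² on the constraint) this returns the equal weights (0.5, 0.5), as symmetry suggests it should.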
Ordinary Kriging

We assume that Z(x) is intrinsically stationary and that

    E Z(x) = µ, constant but unknown,

    var{Z(xi) − Z(xj)} = 2γ(xi − xj),

    Ẑ(x0) = Σ_{i=1}^n λi Z(xi).

We find

    E{(Ẑ(x0) − Z(x0))²} = 2 Σ_{i=1}^n λi γ(x0 − xi) − Σ_{i=1}^n Σ_{j=1}^n λi λj γ(xi − xj).

Substituting this into ℓ(λ1, λ2, . . . , λn, m) and differentiating with respect to λi gives

    (∂/∂λi) ℓ(λ1, λ2, . . . , λn, m) = 2γ(x0 − xi) − 2 Σ_{j=1}^n λj γ(xi − xj) − 2m,

for i = 1, 2, . . . , n.

Equating to zero we get the following system of equations,

    γ(x0 − xi) = Σ_{j=1}^n λj γ(xi − xj) + m,    for i = 1, 2, . . . , n,

    Σ_{i=1}^n λi = 1.

Using

    γ0 = (γ(x1 − x0), γ(x2 − x0), . . . , γ(xn − x0), 1)′,

    λm = (λ1, λ2, . . . , λn, m)′,

    Γ = [ γ(x1 − x1)  γ(x1 − x2)  · · ·  γ(x1 − xn)  1
          γ(x2 − x1)  γ(x2 − x2)  · · ·  γ(x2 − xn)  1
              ⋮           ⋮                  ⋮       ⋮
          γ(xn − x1)  γ(xn − x2)  · · ·  γ(xn − xn)  1
              1           1       · · ·      1       0 ],
we may write the system of equations in matrix form as

    γ0 = Γ λm.

Thus,

    λm = Γ⁻¹ γ0,

such that, for λ−m = (λ1, λ2, . . . , λn)′ and the data vector z = (z(x1), z(x2), . . . , z(xn))′,

    Ẑ(x0) = z′ λ−m.

The minimized prediction variance,

    var{Ẑ(x0) − Z(x0)} = E{(Ẑ(x0) − Z(x0))²},

becomes

    σ²(x0) = Σ_{i=1}^n λi γ(xi − x0) + m = λ′m γ0,

which is easily calculated when λm and γ0 have been found.
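Putting the pieces together, ordinary kriging can be sketched numerically: build Γ and γ0, solve λm = Γ⁻¹γ0, then evaluate the prediction z′λ−m and the variance λ′m γ0. The function name and the 1-D setting are illustrative, and the semivariogram γ is passed in as a function (the linear γ(h) = |h| used in the example is a valid intrinsic model chosen purely for illustration):

```python
import numpy as np

def ordinary_kriging(x, z, x0, gamma):
    """Ordinary kriging in 1-D: build the bordered matrix Gamma and the
    vector gamma0, solve Gamma lam_m = gamma0, and return the prediction
    z' lam, the kriging variance lam_m' gamma0, and the weights."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    n = len(x)
    # (n+1) x (n+1) system: semivariogram block bordered by ones and a zero
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = gamma(np.abs(x[:, None] - x[None, :]))
    G[n, :n] = 1.0
    G[:n, n] = 1.0
    g0 = np.append(gamma(np.abs(x - x0)), 1.0)
    lam_m = np.linalg.solve(G, g0)
    lam = lam_m[:n]                      # the weights lambda_1 .. lambda_n
    prediction = z @ lam                 # Zhat(x0) = z' lam
    variance = lam_m @ g0                # sigma^2(x0) = sum lam_i gamma(x_i - x0) + m
    return prediction, variance, lam
```

With two data points z(0) = 1, z(2) = 3 and γ(h) = |h|, predicting at the midpoint x0 = 1 gives equal weights 0.5 and the average 2, which is easy to verify against the system above by hand.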
Simple Kriging

We assume that Z(x) is second-order stationary and that

    Ẑ(x0) = Σ_{i=1}^n λi Z(xi) + µ (1 − Σ_{i=1}^n λi),

where the λi are the weights, since then

    E{Z(x0) − Ẑ(x0)} = 0,

such that Ẑ(x0) is an unbiased estimator. Note that we do not need the constraint
Σ_{i=1}^n λi = 1.

Minimize the prediction variance, i.e. minimize E{(Ẑ(x0) − Z(x0))²}, where now

    γ0 = (γ(x1 − x0), γ(x2 − x0), . . . , γ(xn − x0))′,

    λ = (λ1, λ2, . . . , λn)′,

    Γ = [ γ(x1 − x1)  γ(x1 − x2)  · · ·  γ(x1 − xn)
          γ(x2 − x1)  γ(x2 − x2)  · · ·  γ(x2 − xn)
              ⋮           ⋮                  ⋮
          γ(xn − x1)  γ(xn − x2)  · · ·  γ(xn − xn) ].
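A corresponding simple-kriging sketch with known mean µ. Under second-order stationarity the system can equivalently be written with a covariance function, C(h) = C(0) − γ(h), which is what this sketch takes as input; the function name and the exponential covariance used in the check are illustrative assumptions, not part of the slides:

```python
import numpy as np

def simple_kriging(x, z, x0, mu, cov):
    """Simple kriging in 1-D with a known mean mu.  The weights solve
    C lam = c0 with no unbiasedness constraint, and the predictor is
    Zhat(x0) = sum_i lam_i z(x_i) + mu * (1 - sum_i lam_i)."""
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    C = cov(np.abs(x[:, None] - x[None, :]))   # covariances between data points
    c0 = cov(np.abs(x - x0))                   # covariances to the target point
    lam = np.linalg.solve(C, c0)
    return z @ lam + mu * (1.0 - lam.sum())
```

Predicting at a data location returns the observed value there (the kriging predictor is an exact interpolator), regardless of the assumed mean µ.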
Properties of the kriging weights:

2. Clustered points are weighted less than isolated ones at the same distance.

3. Data points can be screened by ones lying between them and the target point.

In many situations in practice, the nearest 4 or 5 points to the target point constitute
about 80% of the total weight, with the next 10 or so accounting for almost all of the
remainder.
End of lecture 1