
EUDEM2

The EU in Humanitarian Demining:


State of the Art on HD Technologies, Products,
Services and Practices in Europe
IST–2000-29220

EUDEM2 Technology Survey

Electromagnetic methods in geophysics

Jerzy Wtorek, PhD., Anna Bujnowska, MSc


Gdańsk University of Technology

Version 1.0, 08.10.2004

http://www.eudem.info/

Project funded by the European Community and


OFES (Swiss Federal Office for Education and
Science) under the “Information Society
Technologies” Programme (1998-2002)

Vrije Universiteit Brussel (VUB), B
Swiss Federal Institute of Technology – Lausanne (EPFL), CH
Gdansk University of Technology (GUT), PL

Contents

Electromagnetic methods in geophysics 3

1. Technical principles of EM methods in geophysics 5

1.1. Electrical resistivity methods 5

1.2. The induced polarization technique 16

1.3. Electromagnetic surveys 17

1.4. Magnetic techniques 33

1.5. Multi-modal techniques 39

2. Inverse problems in geophysics 40

2.1. Introduction 40

2.2. The deterministic approach 43

2.3. The probabilistic approach 56

2.4. Simultaneous and joint inversion 67

2.5. Mutual constraint inversion 72

2.6. Discrete tomography 74

3. The applicability of geophysical prospecting methods to demining 77

Appendices 85

A. Linear least-square inversion 85

B. Non-linear least-square inversion 86

C. Quadratic programming 88

D. Probabilistic methods 89

References 92

Equipment manufacturers and rental companies 96

Collection of summaries of selected papers 97

Electromagnetic methods in geophysics
This report is made up of three sections. Technical aspects of the electromagnetic methods
used in geophysical studies, from DC to relatively high frequency, are presented in the first
section. The most common methods are briefly introduced, namely electrical resistivity,
induced polarization, magnetic and electromagnetic techniques. It is clear from this chapter that
particular techniques, including electromagnetic ones, are already being utilized in mine
detection. Metal detectors, for example, operate on similar principles. It would also appear,
even from a restricted study of the literature, that more attention is now being paid to inverse
problems. The results obtained when using these methods are more accurate, although the
computation required is much greater.

The inverse problems encountered in geophysics are briefly surveyed in the second section of
the report. Among the wide variety of such problems are determination of earth structure,
deconvolution of seismograms, location of earthquakes using the arrival times of waves,
identification of trends and determination of sub-surface temperature distribution. These are
presented in the report in relation to the types of tools used to solve them. As a result, the
methods are categorized as “deterministic” or “probabilistic”. The difference between
deterministic and probabilistic methods is that in the former the parameters are unknown but
non-random, whereas in the latter they are treated as random variables and therefore have a
probability distribution. Probabilistic methods can be used even though there is no strict
probabilistic behaviour of the system. When probabilistic methods are applied, the problem is
formulated on the basis of probability theory. The results obtained with this approach, using
certain assumptions on the probabilities, coincide with the results obtained from the
deterministic approach. From the presentation of selected papers many other important
aspects of inverse problems can be also identified. Mathematically, the inverse problems are
non-linear and ill-posed. That is why many different approaches to solving them have
already been put forward. One of the most common is the minimization of the squared norm
of the difference between the measured and the calculated boundary values such as voltages.
Because of the ill-posed nature of the problem, the minimization has to be modified in order
to obtain a stable solution. This modification, known as regularization, is obtained by
introducing an additional term into the minimization so that the problem becomes well-posed.
The solution of this new problem approximates the required solution of the original
problem and, in addition, is better behaved than the ordinary solution. When the problem is
regularized by the introduction of the additional term, prior information about the solution is
incorporated. Very often the prior assumption about the solution sought is that the computed
solution should be smooth. The smoothness assumption is very often utilized in geophysical
studies. It is, however, very general in nature and does not necessarily utilize the known
properties of the solution. This has certain advantages because the same assumptions might be
valid for a wide range of situations. However, if a certain problem is considered, it would
seem reasonable to use prior information that is effectively tailored for this particular
situation. Prior information that could be used in the reconstruction problem would be, for
example, knowledge of the internal structure of an object, such as the layered structure, the
limits of the resistivity values of the different interior structures, the correlation between
resistivities and the geometric variability of the interior structures.
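The regularized minimization described above can be sketched numerically. The following is a minimal illustration, not taken from the report: a toy forward operator G, toy data d and a first-difference smoothness operator L are all invented here, and the regularization parameter lam is arbitrary.

```python
import numpy as np

# A minimal sketch (invented toy quantities, not data from the report) of
# regularized least-squares inversion: minimize ||G m - d||^2 + lam*||L m||^2,
# where the added term ||L m||^2 encodes the smoothness prior on the model.
rng = np.random.default_rng(0)

n = 20
m_true = np.sin(np.linspace(0.0, np.pi, n))      # a smooth "true" model

G = rng.normal(size=(15, n))                      # under-determined forward operator
d = G @ m_true + 0.01 * rng.normal(size=15)       # noisy "measured" data

L = np.diff(np.eye(n), axis=0)                    # first-difference (roughness) operator

def invert(lam):
    """Solve the normal equations (G^T G + lam * L^T L) m = G^T d."""
    return np.linalg.solve(G.T @ G + lam * L.T @ L, G.T @ d)

m_reg = invert(1.0)                               # stable, regularized solution
```

Without the penalty term the 15-by-20 system is under-determined; increasing lam trades data fit for smoothness of the computed solution, which is exactly the prior assumption discussed above.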

The third section of the report contains a presentation of the papers published in recent issues
of geophysical journals or in conference proceedings which describe the direct application of
“geophysical” methods to mine detection or topics which are, in our opinion, essential from
this perspective.

The papers presented in the second and third sections have been selected with regard to their
contents. We hope that this selection will be of interest to those involved in developing tools
for demining. The authors of this report are convinced that future demining techniques will
utilise a multi-modal approach and advanced computational methods. This view explains why
a substantial part of this report is devoted to inverse problems.

The appendices contain general information on inverse techniques. The information included
there is not restricted to geophysics and is drawn on in other scientific disciplines. It is also
known to demining “insiders”.

Finally, apart from the usual reference section, a small “database” is included. This contains
original summaries taken from some of the papers referenced. We hope that this will enable
the reader to select, in an efficient manner, a paper containing information which is of use.

1. Technical principles of EM methods in geophysics

1.1. Electrical resistivity methods

Resistivity measurements

One of the parameters used to describe soil is its complex permittivity. This depends on
many factors such as soil structure, mineral content and water contamination. In the resistivity
method current is injected into the formation and potential is measured from different points.
There are many variations on the resistivity method depending on the type of current injected:
• DC measurement
• Single frequency excitation
• Multi-frequency excitation
• Time-domain techniques
With respect to current injection and measurement, the resistivity methods are:
• Surface-based
• Well-logging.

Electrical well-logging is a technique for measuring the impedance of the formation at depth.
A hole is usually drilled down to the formation and, depending on the configuration, an
electrode or electrode set-up is moved into it [Furche M. and Weller A., 2002, Sattel D. and
Macnae J., 200, Keller G. V. and Frischknecht F. C., 1966].
There are different configurations of electrode sets:
• Single-electrode resistance logs
• Multi-electrode spacing logs
• Focused current logs
• Micro-spacing and pad device logs

Single-electrode resistance logs

Figure 1. Single-electrode resistance log configuration.

This technique uses two electrodes, one of which is located in the hole, while the second, the
reference electrode, is located on the ground surface (Figure 1). The resistance between the
movable in-hole electrode and the reference electrode is measured as a function of the depth
of the in-hole electrode. The measured resistance is a function of the electrical properties of

the material surrounding the electrode and electrode’s shape and dimensions. For successful
measurement it is crucial that the in-hole electrode makes good contact with the formation.
This is achieved by filling the hole with water or drilling mud. Another problem arises when
using a two-electrode configuration. The measured resistance, particularly for DC surveys,
contains electrode-contact impedance, which introduces measurement errors. An additional
error stems from the variable length of the wire on which the in-hole electrode hangs.

When the logging electrode is spherical in shape, fairly simple formulae can be used to
calculate the grounding resistance. Such an idealized situation may be approached if the
resistivity of a thick layer of rock is uniform and if the well bore is filled with drilling mud
with about the same resistivity as the rock.

In a completely uniform medium, current will spread out radially from an electrode, A, with a
uniform current density in all directions, as shown in (Figure 2). The grounding resistance
may be calculated by dividing the earth around the electrode into a series of thin concentric
spherical shells. The total grounding resistance is found by summing the resistances through
all such shells extending from the surface of the electrode to infinity. The resistance through a
single shell is found using one of the defining equations for resistivity [Keller G. V. and
Frischknecht F. C., 1966]:

Figure 2. Geometry of the electrode and the hole.

dR = ρ l/A = ρ dr/(4πr²) (1.1)

The total resistance is determined by integrating this expression for the resistance of a single
thin shell over a range in r extending from the surface of the electrode (radius a) to infinity:

R = ∫_a^∞ ρ dr/(4πr²) = ρ/(4πa) (1.2)

A geometric factor, K, may be defined from this equation by combining the factors which
depend on the geometry of the electrode:

K = 4πa (1.3)
Every electrode or array of electrodes can be characterized by a particular geometric factor.
This is a parameter which, when multiplied by the measured resistance, will convert the
resistance to the resistivity for a uniform medium.
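Equations (1.1)-(1.3) can be illustrated with a short sketch. The resistivity and electrode radius below are illustrative assumptions, not values from the report.

```python
import math

# A small sketch of equations (1.1)-(1.3): the grounding resistance of a
# spherical electrode of radius a in a uniform medium, and the geometric
# factor K = 4*pi*a that converts a measured resistance back to resistivity.
# The numerical values are illustrative assumptions.

def grounding_resistance(rho, a):
    """Integrate rho * dr / (4*pi*r**2) from r = a to infinity: R = rho/(4*pi*a)."""
    return rho / (4.0 * math.pi * a)

def geometric_factor(a):
    """K = 4*pi*a for a spherical electrode in a uniform medium."""
    return 4.0 * math.pi * a

rho = 100.0      # ohm-m, assumed formation resistivity
a = 0.05         # m, assumed electrode radius

R = grounding_resistance(rho, a)
rho_recovered = geometric_factor(a) * R   # K * R gives back the resistivity
```

Multiplying the computed resistance by K recovers the assumed resistivity exactly, which is the defining property of the geometric factor stated above.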

Spacing logs


Figure 3. Spacing log configurations: a) normal array, b) lateral array.

For two-electrode resistance measurement the error from long wiring and from electrode
contact phenomena can have a significant effect on the result obtained. In order to measure
formation resistivity accurately a four-electrode configuration is widely used. Two electrodes
are used for current excitation, while two others measure voltage. Such an approach
minimises contact and wire impedance errors, as the measuring voltage device does not draw
much current, so the measured voltage value is not affected by the presence of parasitic
resistances. There are several techniques of electrode placement. The most common are two
electrodes on the probe (Figure 3.a) or three electrodes on the probe (Figure 3.b). The
remaining electrodes are placed on the ground surface. When two electrodes are placed on the
probe, one is the current electrode and the other the voltage electrode. The reference current
and voltage electrodes are placed on the ground surface. This configuration is referred to as a
“normal” array or “potential” array. In this configuration only the resistance contributed by
the formation outside the equi-potential surface passing through this measuring electrode is
measured.

To increase the resolution of the survey three electrodes are placed on the probe. One is the
current electrode and the other two are voltage electrodes. The current reference electrode is
placed on the soil surface. This configuration is known as a lateral array or gradient array. It is
also possible to change the current and voltage electrodes in this configuration. It can be
shown that this does not affect the value of the measured resistance.

As before, the hole should be filled with water or mud to achieve good electrical electrode-to-
formation contact. If the conductivity of the hole-filling medium differs from the conductivity
of the formation, the measured resistance differs from the true one. Moreover, the measured
contrast between resistivities is smaller than the true contrast. In general, an
increase in the distance between the electrodes can reduce this error significantly [Keller
G. V. and Frischknecht F. C., 1966].

7
Focused current logs

Figure 4. Focused current log configuration.

In this technique the single-electrode resistance measurement is improved by adding two


guard electrodes (Kelvin guard electrodes) above and below the main electrode (Figure 4).
These electrodes force the current from the centre electrode to flow more directly into the rock.

This configuration is very useful for investigating formations within thin layers. Even
when the borehole-filling solution (mud) is highly conductive, the
method gives good results. Commercially available systems using this kind of configuration
are Laterolog (Schlumberger) and Guard-Log (Welex and Birdwell) and the current-focused
logs (Lane-Wells). The geometric factor for the guard electrode array is:

K_S = 2πl √[(L/d)² − 1] / ln[(L/d) + √((L/d)² − 1)] (1.5)

where l is the length of the centre band, L is the total length of the array and d is the diameter
of the electrode.
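The guard-array geometric factor can be evaluated numerically. Note that the formula below is our reading of equation (1.5), K_S = 2πl·√((L/d)² − 1) / ln(L/d + √((L/d)² − 1)), whose original typesetting is damaged; treat the placement of the square roots as an assumption, and the dimensions as purely illustrative.

```python
import math

# A sketch of the guard-array geometric factor of equation (1.5), using
# our reconstruction of the garbled original:
#   K_S = 2*pi*l*sqrt((L/d)**2 - 1) / ln(L/d + sqrt((L/d)**2 - 1))
# l: centre-band length, L: total array length, d: electrode diameter.
# Both the formula reading and the values are assumptions.

def k_guard(l, L, d):
    s = math.sqrt((L / d) ** 2 - 1.0)
    return 2.0 * math.pi * l * s / math.log(L / d + s)

k = k_guard(l=0.2, L=1.0, d=0.1)   # a slender array with L/d = 10
```

As expected for a focused array, lengthening the guards (larger L at fixed l and d) increases the geometric factor.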

Micro-logging

For very high-resolution scans all the electrodes are placed at an extremely small distance
from each other. Unlike the previous method, the electrodes have to be placed as close as
possible to the borehole wall (Figure 5). To achieve this, a special spring-system is often used.
The area of investigation in this type of survey is very local and depends on electrode
distance. Any layer of mud or liquid between the electrode and the formation, the thickness of
which is comparable to the electrode displacement, significantly modifies the measured
resistance.


Figure 5. Micro-logging configuration.

A small electrode spacing also requires a small electrode area, which increases the
electrode’s contact resistance [Keller G. V. and Frischknecht F. C., 1966].

Cross-borehole imaging

The cross-borehole technique is a modification of borehole logging. In this type of survey two
or more boreholes are drilled and a multiple set of electrodes inserted in the holes (Figure 6).

Figure 6. Cross-borehole set-up.


The electrodes can usually serve as either current or voltage electrodes, allowing for different methods of
excitation. In general, this technique can give a better spatial resolution between the holes
owing to the presence of more measuring points and more hypothetical current-paths.

One of the possible modifications of the cross-borehole technique is the addition of more
electrodes to the system by placing them on the ground surface between the holes [Curtis A,
1999, Abubakar A., and van den Berg P.M., 2000, Jackson P.D., et al., 2001, Bing Z., and
Greenhalgh S.A., 2000, Slater L., et al., 2000, and Keller G. V. and Frischknecht F. C., 1966].

Surface resistivity surveys

Electrical well-logging is quite an expensive and time-consuming method of soil prospecting.
However, it gives good results, and good depth-resolution. The main reason for the
complexity of the method is the fact that the borehole has to be drilled. If the results are not
satisfactory, another borehole must be drilled and the survey has to be performed again.

Measurement of apparent resistivity from the surface is another technique, which is much
cheaper to apply than borehole resistivity imaging. Basically, surface resistivity surveys use a
four-electrode technique for apparent resistance measurements.

In the theoretical analysis the first step is to assume a completely homogeneous formation
under the point electrodes. An equation giving the potential about a single point source of
current can be developed from two basic considerations:
1. Ohm's law:
E = ρ j (1.6)

where E is the potential gradient, j is the current density and ρ is the resistivity of the medium.

2. The divergence condition:


∇ ⋅ j = 0 (1.7)
which states that the sum of the currents entering a chunk of material must be equal to the
sum of the currents leaving the chunk, unless there is a source of current inside the chunk. The
divergence of the current density vector must be zero at all sites except at the current source.

These two equations may be combined to obtain Laplace's equation:


∇ ⋅ j = (1/ρ) ∇ ⋅ E = (1/ρ) ∆U = 0 (1.8)

where U is a scalar potential function defined such that E is its gradient. In polar co-ordinates,
the Laplace equation is:
(1/r²) ∂/∂r (r² ∂u/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂u/∂θ) + (1/(r² sin²θ)) ∂²u/∂ϕ² = 0 (1.9)

If only a single point source of current is considered, complete symmetry of current flow with
respect to the Θ and ϕ directions may be assumed, so that derivatives taken in these directions
may be eliminated from (1.9):

∂/∂r (r² ∂u/∂r) = 0 (1.10)

This equation may be integrated directly:

r² ∂u/∂r = C (1.11)
u = −C/r + D

Defining the level of potential at a great distance from the current source as zero, the constant
of integration, D, must also be zero. The other constant of integration, C, may be evaluated in
terms of the total current, I, from the source. In view of the assumed symmetry of the current
flow, the current density should be uniform over the surface of a small hemisphere with
radius a drawn around the current source (the source lies on the ground surface, so the
current flows only into the half-space below). The total current may be expressed as the
integral of the current density over the surface of the hemisphere:

I = ∫_S j ⋅ ds = ∫_S (E/ρ) ds = −∫_S C/(ρr²) ds = −2πC/ρ (1.12)

This equation may be solved for the constant of integration, C, and this value substituted in
(1.11) for the potential function:

U_M = ρI/(2πr) (1.13)

Potential functions are scalars and so may be added arithmetically. If there are several sources
of current rather than the single source assumed so far, the total potential at an observation
point may be calculated by adding the potential contributions from each source considered
independently. Thus, for n current sources distributed in a uniform medium, the potential at
an observation point, M, will be:

U_M = (ρ/2π) [I1/a1 + I2/a2 + ... + In/an] (1.14)

where In is the current from the nth in a series of current electrodes and an is the distance from
the nth source at which the potential is being observed.
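The superposition of equation (1.14) is straightforward to compute. The resistivity, source currents and distances below are illustrative assumptions; a single source reduces to equation (1.13).

```python
import math

# A sketch of equation (1.14): the potential at an observation point is
# the arithmetic sum of the contributions of the individual point current
# sources.  Source currents and distances are illustrative assumptions.

def potential(rho, currents, distances):
    """U_M = (rho / (2*pi)) * sum(I_k / a_k) for surface point sources."""
    return rho / (2.0 * math.pi) * sum(I / a for I, a in zip(currents, distances))

rho = 50.0                                 # ohm-m, assumed resistivity
# One source (+1 A) and one sink (-1 A), as with two current electrodes:
U = potential(rho, [1.0, -1.0], [2.0, 3.0])
```

With a single source the function returns ρI/(2πr), equation (1.13); with a source-sink pair it gives the familiar difference of the two terms.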

Equation (1.14) is of practical importance in the determination of earth resistivities. The


physical quantities measured in a field determination of resistivity are the current, I, flowing
between two electrodes, the difference in potential, ∆U, between two measuring points, M
and N, and the distances between the various electrodes. Thus, the following equation applies
for the four ordinary terminal arrays used in measuring earth resistivity:

ρ = [2π / (1/AM − 1/BM − 1/AN + 1/BN)] (U_M − U_N)/I = K ∆U/I (1.15)

There are several main configurations of the electrodes:


• The Wenner array
• The Schlumberger array
• The dipole-dipole array
• The pole-pole array
• The pole-dipole array

Apart from the basic configurations mentioned above, there are a wide variety of
modifications of them. Most of the measuring techniques use a similar basic four-electrode

configuration of two current electrodes and two voltage electrodes. The main difference is the
spacing between the electrodes. For the Wenner configuration (Figure 7.a) the distance
between the voltage electrodes is equal to the corresponding distance between the voltage and
current electrodes. For the Schlumberger configuration (Figure 7.b) the voltage electrodes are
placed symmetrically to the mid-point between the current electrodes, the spacing between
them usually being considerably less than half the distance between the current electrodes.

Figure 7. The Wenner a) and the Schlumberger b) electrode placement array.

These arrays can be used to measure the apparent resistivity of the soil, which is treated as
homogenous within the area of investigation. For both configurations the apparent resistivity
can be written as:
ρ = K ∆U/I , (1.16)
where K is the geometry-dependent factor. For the Wenner array (Figure 7.a) K can be
defined as:
K = 2πa , (1.17)

where for the Schlumberger array (Figure 7.b):

K = π (a²/b − b/4) . (1.18)
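The general four-electrode geometric factor of equation (1.15) can be checked against the Wenner and Schlumberger expressions. The spacings below are illustrative, and for the Schlumberger case we take "a" as the half-spacing of the current electrodes, an assumption chosen so that equation (1.18) is reproduced.

```python
import math

# A sketch checking equations (1.15)-(1.18): the general four-electrode
# geometric factor K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN) reduces to the
# Wenner and Schlumberger expressions.  Spacings are illustrative, and
# "a" in the Schlumberger check is assumed to be the half-spacing of the
# current electrodes.

def k_general(AM, BM, AN, BN):
    return 2.0 * math.pi / (1.0/AM - 1.0/BM - 1.0/AN + 1.0/BN)

a = 1.5
# Wenner: A--a--M--a--N--a--B, so AM = BN = a and BM = AN = 2a.
k_wenner = k_general(a, 2*a, 2*a, a)            # equals 2*pi*a, eq. (1.17)

b = 0.4
# Schlumberger: M and N sit b apart about the midpoint of A and B (2a apart).
k_schlum = k_general(a - b/2, a + b/2, a + b/2, a - b/2)
# equals pi*(a*a/b - b/4), eq. (1.18)
```

Both special cases fall out of the general formula, which is a useful sanity check when implementing arbitrary electrode layouts.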

There are several modifications of the basic configurations. The Lee modification of the
Wenner array splits the voltage measurement into two parts by adding a central reference
electrode. This makes it possible to evaluate the apparent resistivity of each of the two
halves under the array separately. The dipole-dipole array is used in
resistivity/induced polarization (IP) surveys because of the low EM coupling between the
current and potential circuits. In this configuration the current electrodes are placed at a
specific distance, whereas the voltage electrodes are placed in line with them but at the side of
the current pair. The spacing between the current electrode pair, C2-C1, is given as “a”, which
is the same as the distance between the potential electrode pair P1-P2. Into this configuration
factor “n” was introduced, which states the current-to-voltage electrode separation in relation
to the current or voltage electrode separation “a”. For surveys with this array, the “a” spacing
is initially kept fixed and the “n” factor is increased from 1 to 2 to 3 and up to about 6 in order
to increase the depth of the investigation.


Figure 8. The dipole-dipole configuration.

The largest sensitivity values are located between the C2-C1 dipole pair, as well as between
the P1-P2 pair. This means that this array is most sensitive to resistivity changes between the
electrodes in each dipole pair. The sensitivity contour pattern is almost vertical. Thus the
dipole-dipole array is very sensitive to horizontal changes in resistivity, but relatively
insensitive to vertical changes in the resistivity. This means that it is good at mapping vertical
structures such as dykes and cavities but relatively poor at mapping horizontal structures such
as sills or sedimentary layers. The median depth of investigation of this array also depends on
the “n” factor, as well as the “a” factor. In general, this array has a shallower depth of
investigation compared to the Wenner array. However, for 2-D surveys, this array has better
horizontal data coverage than the Wenner.

One possible disadvantage of this array is the very small signal strength for large values of the
“n” factor. The voltage is inversely proportional to the cube of the “n” factor. This means that
for the same current, the voltage measured by the resistivity meter drops by about 200 times
when “n” is increased from 1 to 6. One method of overcoming this problem is to increase the
“a” spacing between the C1-C2 (and P1-P2) dipole pair to reduce the drop in potential when
the overall length of the array is extended to increase the depth of the investigation. Two
different arrangements for the dipole-dipole array, with the same array length but with
different “a” and “n” factors, are shown in Figure 9. The signal strength of the array with the
smaller “n” factor (Figure 9.b) is about 28 times stronger than the one with the larger “n”
factor.

Figure 9. Two different arrangements for a dipole-dipole array measurement with the same
array length but different “a” and “n” factors, resulting in very different signal
strengths.
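The signal-strength argument can be made concrete using the standard dipole-dipole geometric factor K = πn(n+1)(n+2)a, which is assumed here (it is not derived in the text). For fixed current and resistivity the measured voltage is V = ρI/K, so increasing n from 1 to 6 at fixed "a" reduces the voltage by exactly 6·7·8/(1·2·3) = 56; the pure 1/n³ scaling quoted above is only the large-n asymptote.

```python
import math

# A sketch of the dipole-dipole signal-strength estimate.  The geometric
# factor K = pi*n*(n+1)*(n+2)*a is the standard expression for this array
# (assumed here, not derived in the text); the measured voltage for a
# given current and resistivity is V = rho*I/K.

def k_dipole_dipole(a, n):
    return math.pi * n * (n + 1) * (n + 2) * a

def voltage(rho, I, a, n):
    return rho * I / k_dipole_dipole(a, n)

rho, I = 100.0, 0.5           # illustrative resistivity (ohm-m) and current (A)
v_n1 = voltage(rho, I, 1.0, 1)
v_n6 = voltage(rho, I, 1.0, 6)
drop = v_n1 / v_n6            # exact factor 56 for n: 1 -> 6 at fixed a
```

This is why field procedure increases "a" along with the array length rather than relying on large "n" alone.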

To use this array effectively, the resistivity meter should have a relatively high degree of
sensitivity and very good noise rejection circuitry. There should also be good contact between
the electrodes and the ground in the survey. With the proper field equipment and survey
techniques this array has successfully been used in many areas to detect structures such as
cavities, where the good horizontal resolution of this array is a major advantage. Note that the
pseudo-section plotting point falls in an area with very low sensitivity values. For the dipole-
dipole array, the regions with the high sensitivity values are concentrated below the C1-C2
electrode pair and below the P1-P2 electrode pair. In effect, the dipole-dipole array gives
minimal information about the resistivity of the region surrounding the plotting point, and the
distribution of the data points in the pseudo-section plot does not reflect the sub-surface area
mapped by the apparent resistivity measurements. Note that if the datum point is plotted at the
point of intersection of the two 45° angle lines drawn from the centres of the two dipoles, it
would be located at a depth of 2.0 units (compared with 0.96 units given by the median depth
of the investigation method), where the sensitivity values are almost zero. Loke and Barker
(1996) used an inversion model where the arrangement of the model blocks directly follows
the arrangement of the pseudo-section plotting points. This approach gives satisfactory results
for the Wenner and Wenner-Schlumberger arrays, where the pseudo-section point falls in an
area with high sensitivity values. However, it is not suitable for arrays such as the dipole-
dipole and pole-dipole, where the pseudo-section point falls in an area with very low
sensitivity values.

Other modifications are the half-Wenner and half-Schlumberger arrays, also known as the pole-
pole and pole-dipole configurations, respectively (Figure 10). With the half-Schlumberger array, one of the
current electrodes is placed at a great distance. With the half-Wenner array one current and
one voltage electrode are placed at a great distance from the traverse. They must, moreover,
also be placed far away from each other. These configurations are useful for horizontal
profiling for vertical structures. Data obtained in such research are more readily interpreted
than data obtained with other configurations.

I I

V
V

a) b)

Figure 10. The half-Wenner a) and half-Schlumberger b) configuration, also known as the
pole-pole and pole-dipole configuration, respectively.

For profiling large areas the electrode set-up is moved from place to place and data are then collected and
stored. For depth investigation the distance between the electrodes may vary too, allowing
deeper current penetration with larger spacing of the current electrodes. The electrode set-up
can be replaced manually but electrodes have also been known to be pulled behind a car and
the data acquired with some time-step (Figure 11). This is a much faster technique when large
areas have to be inspected [Keller G. V. and Frischknecht F. C., 1966].

Figure 11. Electrode set-up pulled behind a car.

Multi-electrode arrays and data inversion

Modern electrical ground prospecting techniques have led to an increase in the spatial
resolution of the survey by adding more electrodes and using advanced excitation patterns.
This method is also known as geo-electrical tomography. This is an improved method of
moving the electrodes in the Schlumberger or Wenner configurations. Many electrodes are
positioned, usually in line, and connected to a common multi-core wire. One transmitter and
one receiver are commonly used. These are connected to the electrodes by an appropriate
switching box. After a series of measurements, the collected data are processed in such a way
that it provides information about the resistivity or the conductivity distribution of the
formation measured.

Figure 12. A typical arrangement of electrodes for a 3-D survey.

A variety of systems and software are available for imaging conductivity distribution. When
software is used for data analysis, techniques are available for introducing ground topology to
improve data inversion. Modern data analysis approaches have led to improved models of the
earth, such as its representation in anisotropic terms [Yin C., 2000, Muiuane E. A. and Laust B.,
2001, Candansayar M.E., and Basokur A.T., 2001, Panissod C. et al., 2001, Jackson P.D., et
al., 2001, Yi M.-J., et al, 2001, Szalai S. and Szarka L., 2000, van der Kruk J., et. al., 2000,
Vickery A.C. and Hobbs B.A., 2002, Dahlin T., 2000, Storz H., et al, 2000, Roy I. G., 1999,
Mauriello P., Patella D., 1999, Olayinka A.I. and Yaramanci U., 2000]. These issues are
presented in more detail in the next section, Inverse problems in geophysics.

1.2. The induced polarization technique

DC equipment is frequently used in measuring the electrical properties of soil. This is a valid
method for obtaining soil resistivity, although the permittivity of the soil cannot be measured
by means of DC currents. It has been shown that, in general, the soil has not only resistive but
also capacitive electrical components. A very popular representation of these electrical
properties is the Cole-Cole equation. The Cole-Cole model is expressed as:
Z(jω) = Z(0) {1 − m [1 − 1/(1 + (jωτ)^α)]} (1.19)

where Z(0) denotes the zero-frequency impedance, m denotes the limited polarizability
(chargeability), τ denotes the time constant of the surface polarization and α is a non-integer
exponent characterising the frequency dependence.
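The Cole-Cole model of equation (1.19) is easy to evaluate numerically. The parameter values below are illustrative assumptions, not measured soil properties.

```python
# A sketch evaluating the Cole-Cole model of equation (1.19):
#   Z(jw) = Z(0) * (1 - m*(1 - 1/(1 + (j*w*tau)**alpha)))
# Parameter values (z0, m, tau, alpha) are illustrative assumptions.

def cole_cole(omega, z0=100.0, m=0.3, tau=0.1, alpha=0.5):
    jwt = 1j * omega * tau
    return z0 * (1.0 - m * (1.0 - 1.0 / (1.0 + jwt ** alpha)))

z_dc = cole_cole(0.0)     # zero frequency: Z = Z(0), no polarization effect
z_hf = cole_cole(1e12)    # very high frequency: Z approaches Z(0)*(1 - m)
```

The two limits show why m is called the chargeability: the impedance falls from Z(0) at DC to Z(0)(1 − m) at high frequency.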

Data for induced polarization methods can be acquired in both the frequency and time
domains. In the time domain a square current is applied between the current electrodes and
voltage is observed at the voltage electrodes (Figure 13). As a result of the polarization effect,
the voltage response for the square current is not square.


Figure 13. Time-domain excitation and response.

The voltage response for step-function excitation can be expressed as:

E(t) ≅ E0 {1 − Σ_{n=1,2,3,...} Epn e^(−βn t)} (1.20)

where E0 is the amplitude of the voltage response in the steady state. The ratio Ep/E0 is often
no more than a few per cent.
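The decaying step response of equation (1.20) can be sketched directly. The amplitudes and decay constants below are illustrative assumptions, chosen so that the total polarization amplitude is a few per cent of E0, as the text suggests.

```python
import math

# A sketch of the time-domain IP step response of equation (1.20):
#   E(t) ~ E0 * (1 - sum_n Ep_n * exp(-beta_n * t))
# Amplitudes Ep_n and decay constants beta_n are illustrative assumptions.

def ip_response(t, e0=1.0, ep=(0.02, 0.01), beta=(5.0, 1.0)):
    decay = sum(p * math.exp(-b * t) for p, b in zip(ep, beta))
    return e0 * (1.0 - decay)

e_just_on = ip_response(0.0)   # E0 * (1 - Ep_1 - Ep_2) right after the step
e_late = ip_response(10.0)     # relaxes towards the steady-state value E0
```

The voltage jumps to slightly below E0 at switch-on and then relaxes up to E0, which is the departure from a square response that the IP method exploits.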

In practice the excitation pattern is usually bipolar, often referred to as on+/zero/on−/zero.
enables the electrode polarization effect to be dealt with better (Figure 14).


Figure 14. Typical current patterns for time domain-induced polarization.

In frequency-domain induced polarization the excitation is a sine wave, the response
signal also being sinusoidal. By applying signal excitation at different frequencies
a frequency response from the formation may be collected. This technique is referred to as
impedance spectroscopy. There is also a modification of the frequency technique in which the
excitation signal is the sum of two or more sinusoidal signals and the responding voltage has a
separate demodulation unit for each signal frequency component.

A variety of improvements have been made to the matching of excitation signal patterns,
noise has been reduced and electrode phenomena minimized [Apparao G.S., et al, 2000,
Bhattacharya B. B., et al, 1999, Weller, et al, 2000].

Electrodes for the electrical methods

When using the two-electrode configuration, especially when performing DC
measurements, electrode impedance can have a significant effect on the result. This is due to
electrode-polarization phenomena and contact impedance. In order to minimize contact
impedance phenomena four-electrode impedance measurement is widely used. Whereas
polarization of the electrode in DC measurement may introduce error, in the four-electrode
technique this can be avoided by using non-polarizable voltage electrodes, lead-lead chloride
for instance, and stainless steel current electrodes.

Another technique is to use AC measurements. When measuring in the time domain the
polarization effect can be suppressed by using a current excitation of alternating polarity.
Stainless steel electrodes may then be used.

1.3. Electromagnetic surveys

Two types of electromagnetic survey are currently practised:


• Time-domain electromagnetic (TDEM) surveys, which are mainly used for depth
soundings and, recently, in some metal-detector type instruments
• Frequency-domain electromagnetic (FDEM) surveys, which are used predominantly
for mapping lateral changes in conductivity.
The EM method generally uses coils and there is no need for the probes to be in contact with
the ground. Measurements can thus be made much more simply and quickly than for electrical
methods, where it is necessary to place metal electrodes on the ground surface.
Electromagnetic techniques measure the conductivity of the ground by inducing an electrical
field through the use of time-varying electrical currents in transmitter coils located above the
surface of the ground. These time-varying currents create magnetic fields that propagate in the
earth and cause secondary electrical currents, which can be measured either while the primary
field is transmitting (FDEM) or after the primary field has been switched off (TDEM).
Frequency-domain (FDEM) methods are used to provide rapid and generally shallow
coverage, while time-domain methods (TDEM) are more commonly used on large deep
targets [Mitsuhata Y., et al, 2001].

Another modification of the EM technique is the high-frequency horizontal loop (HLEM).
The basis of this is the use of a moving loop configuration:

Figure 15. Electromagnetic methods by means of horizontal TX and RX loops (Bp – primary
field, Bs – secondary field from eddy currents in the ground).

These surveys are good for kimberlite exploration and for targets beneath lakes, whereas
land-based targets do not respond well. High-frequency HLEM can be useful for mapping
structure in resistive environments and can provide information on the geometry and
conductance of potential targets.

Frequency-domain electromagnetics (FDEM)

In frequency-domain electromagnetic surveys the transmitting coil generates a sinusoidal
electromagnetic field, the primary field, whereas the receiving coil picks up both the signal
from the transmitting coil and that from the environment, the secondary field. It is important
to minimize the influence of the primary field induced in the receiving coil. At least two coils
are generally necessary.

The technique is usually used to measure lateral conductivity variations along line profiles,
either as single lines or grids of data. Modern systems integrate GPS with FDEM to increase
survey speed. Typical results of FDEM surveys are contour maps of conductivity and 2-D
geo-electrical sections showing differences in conductivity along a line profile. Changes in
conductivity are often associated with differences between lithological sequences and with
laterally extensive features such as faulted or mineralized zones.

Time-domain electromagnetics (TDEM)

In TDEM it is possible to use one coil for both transmitting and receiving. The excitation
signal is a current pulse in the coil. A short time after the pulse is switched off, the decay of
the induced signal is measured. The decay depends on the properties of the material
surrounding the coil.

Figure 16. Time-domain electromagnetic set-up: the transmitter current i(t) is switched off at
Toff and the decaying magnetic field H = (Hx, Hy, Hz) is measured.

TDEM techniques produce 1-D and 2-D geo-electrical cross-sections in a similar manner to
electrical cross-sections. Survey depths range from a few to hundreds of metres with high
vertical and lateral resolution. The techniques do not yield a high resolution for shallow
depths.

In addition, the method can be applied to logging electromagnetic data in boreholes with non-
metallic casing. There is a variety of techniques for TDEM data interpretation [Lee T.J., et al,
2000].

Coil configurations

There are several possible transmitting-receiving coil configurations. These are:
• Central loop (in-loop)
• Coincident loop
• Fixed loop
• Moving loop
• Large Offset TEM (LOTEM)

Special precautions usually have to be taken to minimize the primary field from the
transmitting coil. In most cases this includes special wiring of the coils or the use of a
gradiometer [Sattel D., and Macnae J., 2001].

• Central loop (in-loop). The coil set-up consists of two coils, the transmitting loop outside
and the receiving loop usually placed inside the transmitting loop in a fixed position. This
array is mainly used for vertical sounding applications.

Figure 17. Central loop configuration, RX – receiving coil, TX – transmitting coil.

• Coincident loop. Two partially overlapping loops are placed in such a way that the primary
field from the transmitting loop is minimal in the receiving loop. The double-D (DD)
configuration is very popular.

Figure 18. Coincident loops: a) functional diagram, b) DD configuration.

• Fixed loop, used for profiling and vertical sounding. One transmitting loop is kept at a fixed
position and is combined with an array of receiving loops or a moving receiving loop.

Figure 19. Fixed loop configuration.

• Moving loop (offset loop). In this configuration the transmitting and receiving loops are
separated. There are some configurations in which the planes of these loops are perpendicular
to each other, which results in a minimal primary field being induced in the receiving loop.

Figure 20. Moving loop configuration.

• LOTEM (Large Offset TEM). In this configuration the current is transmitted via a long
dipole grounded at both ends, the current return through the earth closing the loop. The
receiving coil is usually placed at a greater distance from the transmitting dipole.

Figure 21. LOTEM set-up.

Airborne EM systems.

Airborne EM systems are used for large-zone investigations. Typically, a set of coils is
mounted on or towed behind a small aeroplane or helicopter. Both FDEM and TDEM
techniques are used in airborne systems. The method is used to investigate large areas for
spatial variation of electrical conductivity [Beard L P., 2000, Siemon, 2001].

Figure 22. A Geoterrex aircraft with the magnetometer (left) and the EM sensor (right) stowed
against the rear deck-ramp of the aircraft.

Figure 23. Photograph of a Geoterrex aircraft in flight, deploying the magnetometer and EM
sensor detectors behind it on separate cables.

Figure 24. Image of a conductivity-depth transform for one flight-line
(source: http://volcanoes.usgs.gov/jwynn/7spedro.html).

Figure 25. Another airborne EM system - FLAIRTEM

Airborne slingram

A number of fixed-loop systems have been devised for use with helicopters and fixed-wing
aircraft. Basically, these are extremely sensitive slingram systems in which the real and
imaginary parts of the mutual coupling are measured at a single frequency, which may range
from 320 to 4000 Hz. Vertical-loop arrangements are used in preference to horizontal-loop
arrangements since, when the ratio of height to separation is large, vertical-loop
configurations are more sensitive to steeply dipping conductors and less sensitive to flat-lying
conductors. When the vertical co-planar loop configuration is used, coils with ferromagnetic
cores are placed in pods attached to either wingtip. For vertical co-axial loop arrangements,
the transmitting coil is attached to brackets on the nose of the aircraft and the receiving coil is

placed in a boom or "stinger" extending from the tail. Vertical co-axial arrangements are used
with helicopters as well as on fixed-wing aircraft; in one system installed on a small
helicopter, the coils are placed at either end of a long light-weight detachable boom. In other
systems the coils are placed at either end of a large bird, which is towed by the helicopter.

Proper mounting of the loops is essential. Vibrations and flexing of the airframe or coil
mountings cause variations in the orientation and separation of the loops in relation to one
another and in relation to the aircraft, introducing noise into the system. Noise is additionally
caused by vibration of the coils in the earth's magnetic field. Shock mounting of the receiver
coil eliminates much of the noise contributed by this last mechanism but tends to increase the
amount of low-frequency noise caused by the changing orientation of the loops. Co-planar
loops are suspended somewhat below the wing tips to minimize variations in the separation of
the coils as the wings flex. A change in separation of one part in a hundred thousand causes a
change in the free-space mutual coupling of about 30 ppm, which is greater than the internal
noise level of some of the more sensitive fixed-coil systems. For co-axial coil configurations
mounted in a bird or on a boom the structure is designed and supported in such a fashion that
the loops tend to remain parallel to each other during flexures of the structure.

Variations in the intensity of the secondary field at the aircraft caused by the movement of
control surfaces such as ailerons are sometimes a source of noise. Variations in the contact
resistance between various parts of the airframe can cause changes in the eddy current flow
pattern and the associated secondary field at the aircraft. Careful electrical bonding of the
various portions of the airframe largely eliminates this source of noise.

Low pass filters placed between the demodulators and the recorder reject most of the noise
which is of high enough frequency that it cannot be confused with lower-frequency earth-
return signals. Some systems also have high pass filters to eliminate very low-frequency noise
and drift such as may be caused by variation in the coil properties with temperature. Such
systems are responsive only to vertical conductors and the edges of horizontal conductors.
This type of response is acceptable for most prospecting but is not desirable if data are to be
used in geological mapping. Because the resolution of airborne measurements is inferior to
that of ground measurements, fewer reference data are needed in the interpretation of airborne
surveys. In some cases, calculation of the anomaly curve for particular conductor geometry is
easier for an airborne system than for a ground system.

In interpreting airborne electromagnetic results, maximum use should be made of any other
geological or geophysical information which may be available. Magnetic field and natural
gamma radiation measurements are usually made along with electromagnetic measurements
in a typical airborne survey. Massive sulphide bodies will often be somewhat magnetic, while
conductors of no economic interest, such as graphitic slate or water-filled shear zones, are
non-magnetic. When there is a correspondence between electromagnetic and magnetic
anomaly curves, additional information about the shape of the body can be derived from the
magnetic survey data. Swamps and lakes, which are likely to be conductive, are observed as
lows in natural radioactivity. In many cases, the geology of the survey area may be
sufficiently well known that ambiguities in interpretation may be readily resolved.

The quadrature system

One of the most widely used towed-bird systems is known as the quadrature system or the
dual-frequency phase shift system. In such a system, the phase shift of the electromagnetic

field at two frequencies is observed at the receiver. The transmitting loop is a cable stretched
around the aircraft between the wing tips and the tail in such a manner that the axis of the loop
is nearly vertical. The loop is powered with several hundred watts at two frequencies,
typically 400 and 2300 Hz. A horizontal-axis receiving coil is towed in a bird at the end of a
long cable. In a typical installation in an amphibious aircraft the bird maintains a position
about 130 m behind and 70 m below the aircraft. At this location, the direction of the primary
field is about 25° from the axis of the coil. There is a small out-of-phase secondary field at the
receiving coil due to currents induced in the aircraft. The motion of the bird in this field is a
source of noise. To reduce noise from this source, an auxiliary horizontal-axis loop, powered
by a current 90° out of phase with respect to the current in the main transmitter loop, is used
to cancel the secondary field of the aircraft. Narrow-band circuits are used to measure and
record the phase shift of the field at the receiving loop relative to the current in the transmitter
loop. Except under conditions of excessive turbulence, or when there are nearby electrical
storms, the noise level is less than 0.05°.

At normal flying heights of 900 – 2000 m the coil configuration used with the phase shift
system is sensitive to conductors of almost any shape or attitude; in comparison with vertical
slingram configurations, there is less tendency to discriminate against horizontal sheets and
to accentuate vertical sheets. Since the in-phase component of the secondary field is not
measured, there is some possibility of not detecting highly conductive ore bodies which do
not cause significant out-of-phase fields. In practice, the possibility of missing a large, highly
conductive near-surface conductive body is slight because out-of-phase currents flow between
the body and the surrounding medium and because there is usually a halo of disseminated
mineralization around highly conductive zones.

Gaur (1963) conducted a series of model experiments for the dual-frequency phase shift
system in which the models were placed in a tank full of brine, simulating a conductive host
rock and overburden. In the case of horizontal sheets, the presence of the brine environment
changes the shape of the anomaly curves and increases the peak amplitude by as much as a
factor of three. Similar results are observed for vertical sheets, except that there is less change
in the shape of the anomaly curve.

The rotating field method

Another means for obtaining adequate sensitivity in an airborne EM system using a towed
bird is the use of two or more sets of loops responding differently to a conductor. The ratio of
responses or the difference in response is observed and recorded. Ideally, in such a system the
differences between free-space mutual couplings for each set of loops should be constant, as
the bird moves relative to the aircraft.

The rotating field method (Tornquist, 1958) uses two transmitting loops, which are attached
to the aircraft, and two receiving loops, which are placed in a bird. One of the transmitting
loops is vertical with its plane passing through the centre line of the aircraft; the second
transmitter loop is orthogonal to the first and is approximately horizontal. A similar
arrangement of receiving loops is towed in a bird as nearly directly behind the aircraft as is
possible or is towed at the end of a short cable by a second aircraft. The inclinations of the
two nominally horizontal loops are adjusted for the height of the bird so that the loops are as
nearly co-planar as possible.

For simple conditions the interpretation of results obtained with the rotary field method is not
too different from the interpretation of slingram results. When the geological environment is
complex and the flight lines are not normal to all of the conductors, rotary field results are
very complicated inasmuch as they are essentially the combined results obtained with three
different coil configurations.

Other dual transmitter systems

A variety of other techniques have been proposed in which the ratio or difference between
dual transmitter fields is measured. The simplest of such techniques uses two parallel
transmitting coils rigidly fastened together and a similar arrangement of receiving coils placed
in a bird. One coil system operates at a frequency suitable for detecting the conductive zones
sought and the other operates at a very low frequency, preferably low enough that the
response is slight, even for large highly conductive bodies. The two signals from the receiving
coils are amplified by selective circuits, rectified and the difference recorded. When no
conductive zones are present, the difference between the two signals is zero or constant,
independent of movements of the bird relative to the aircraft. When a conductive zone is
present, the mutual coupling between coils changes more at the high frequency than at the low
frequency and an anomaly is recorded.

This technique has not proved to be very useful. The weight and power requirements for a
system using a very low frequency as a reference are excessive. In addition, variations in the
amplitude of either of the transmitted signals or changes in the gains of either of the receiving
channels cause drift or noise.

Another technique (Slichter L. B., 1955) uses a set of orthogonal transmitting coils attached to
the aircraft and a set of orthogonal receiving coils carried in a towed bird in such a fashion
that one coil is co-planar and the other is co-axial with respect to its counterpart on the
aircraft. The transmitting coils are powered with two frequencies, which differ enough to
permit separation of the signals in the receiver circuitry. The amplitudes of the transmitted
signals are made the same by electronic regulators. The signal from each of the coils is
filtered to remove the unwanted frequency and to eliminate noise. After filtering, the signals
are mixed and amplified by a common amplifier, so that the gain in each channel will be the
same. After amplification the two signals are again separated, after which they are rectified
and the difference recorded. In free space this difference will be zero, despite changes in coil
separation. Rotation of the bird about one of its axes causes an error signal, which varies as
the cosine of the angle. These error signals may be detected by means of a third receiving coil,
orthogonal to the other two, amplified and recorded. The error signal may also be used to
actuate a servo-mechanism which rotates the transmitting coils to help compensate for
misalignment between the transmitting and receiving coils as the bird moves about relative to
the aircraft. As in the rotating field method, conductors are detected because they generate
unequal changes in the mutual coupling in the two coil systems. Vertical co-planar and
vertical co-axial coil configurations are preferred in this method, although the horizontal co-
planar arrangement could also be used for one of the coil pairs. By using a third set of coils
parallel to one of the other sets but operating at a substantially different frequency, a second
difference signal can be obtained. A comparison of the amplitudes of the two signals provides
an indication of the conductivity in a conductive zone. One such system, which has been used
extensively (Pemberton, 1961), employs a combination of coils and frequencies such that
three differences and one error signal are recorded.

The response of a difference system may be calculated by taking the differences between the
responses for each coil configuration separately. There is no coupling between the various
coil configurations, such as can occur in the rotary field method. The results obtained with
this method are likely to be more complicated than those obtained with a slingram system but
they are less complicated than the results obtained with the rotary field method.

Transient response systems

Fundamentally, the problem in achieving adequate sensitivity in an airborne electromagnetic
system is that of detecting very small secondary fields from the earth in the presence of a
large primary field. This problem may be avoided by using a pulsed primary field and by
measuring the transient secondary field from the earth while the primary field is zero. To be
practical an airborne transient system must use a repetitive primary field, so that the recorded
parameters will be essentially continuous.

While one serious problem in making airborne measurements is eliminated by using transient
response measurements, other problems become more acute and new problems arise. In most
cases only a small part of the energy in the secondary field remains after the end of the
energizing pulse. As a result, a more intense primary field must be used in a transient method
than in a continuous-wave method in order to measure a secondary response of the same
magnitude. It is more difficult to design circuitry for transient signals than for sinusoidal
signals. Inasmuch as many of the circuit elements must have a wide band pass, there is greater
difficulty in rejecting atmospheric and other noise.

The only airborne transient system which has been reported to date is the INPUT system
(Barringer, 1963), which uses a half-sine wave primary pulse with alternating polarity. A
large horizontal transmitting loop is stretched around the aircraft. A vertical receiving coil
with its axis aligned with the flight direction is towed in a bird at the end of a long cable. The
primary purpose for using a long cable is to remove the receiving coil as far as possible from
secondary fields induced in the aircraft. The signal seen by the receiving coil is blocked from
the amplifiers during the primary pulse. During the interval between pulses the received
signal is amplified and fed to a series of electronic gates, which sample the signal at four
different times. The signal from each gate consists of a series of pulses with constant width
and with an amplitude which is a function of the amplitude of the secondary field during the
time the gate was open. These pulse series are integrated using circuits with appropriate time
constants and recorded on four separate channels.

The delay of the gate for the first channel is selected so that this channel will be responsive to
poor conductors, including conductive overburden. Delays in the other channels are
successively longer, so that channel four is sensitive only to conductors with a large value for
the response parameter. A comparison of the anomalies recorded on the four channels serves
to indicate the conductivity of the body in the same way that a comparison of measurements
made at several frequencies with continuous-wave methods does.
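The gating scheme described above can be sketched as follows. The sampling interval, decay time constant and gate times below are illustrative assumptions, not the actual INPUT system timings; the sketch only shows how successively delayed gates average a decaying secondary field into channels that discriminate by decay rate.

```python
import math

def gate_averages(decay, dt, gates):
    """Average a sampled decay signal over (start, stop) time gates,
    as the INPUT receiver does with its four channels."""
    out = []
    for t0, t1 in gates:
        k0, k1 = int(t0 / dt), int(t1 / dt)
        window = decay[k0:k1]
        out.append(sum(window) / len(window))
    return out

# Synthetic secondary-field decay with an illustrative time constant tau
dt, tau = 1e-5, 5e-4                                   # 10 us sampling, 0.5 ms
decay = [math.exp(-k * dt / tau) for k in range(400)]  # 4 ms record

# Four successively delayed gates (times in seconds; widths illustrative)
gates = [(1e-4, 3e-4), (3e-4, 7e-4), (7e-4, 1.5e-3), (1.5e-3, 3e-3)]
ch = gate_averages(decay, dt, gates)
# A fast-decaying (poorly conductive) target registers mainly on the
# early channels; only slowly decaying responses survive to channel four.
```

Comparing the relative channel amplitudes is what allows the system to rank conductor quality, in the same way that multi-frequency continuous-wave measurements do.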

Because the coil configurations are much the same, the shapes of the anomaly curves obtained
with the INPUT system are similar to the shapes of the anomaly curves observed with the
dual frequency phase shift system. To the extent that the shape of the anomaly curve is
independent of frequency, model experiments in which the coil configuration is simulated and
in which continuous waves are used can be employed to obtain the shapes for INPUT
anomaly curves. The exact shape and amplitude of INPUT anomalies can be determined from

model experiments in which airborne circuitry is simulated or by calculation from the
continuous-wave response as discussed in previous sections.

The AFMAG system

Instrumentation for the AFMAG method can be adapted to airborne measurements. Two
orthogonal receiving coils of equal sensitivity are towed in a bird behind a fixed-wing aircraft
or helicopter. The length of the tow cable is determined by the magnitude of the secondary
fields induced in the airframe and the electrical noise generated by the aircraft. The coils are
suspended so that their axes are at 45° from the horizontal and aligned with the flight
direction. The signals from the two coils are compared in such a fashion that the output is
approximately proportional to the tilt angle (Ward, 1959). As with a ground system,
measurements are made at two frequencies. In addition, a continuous record of the signal
strength at each frequency is made to help evaluate the quality of the data. In some airborne
systems the phase difference is measured between signals from the two coils.
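The statement that the output is approximately proportional to the tilt angle can be checked with a small sketch. Assuming an idealized, noiseless, linearly polarized field and coils whose axes lie at ±45° from horizontal (the geometry stated above; the signal model itself is an illustrative assumption), the combination (A − B)/(A + B) of the two coil amplitudes recovers the tilt exactly:

```python
import math

def coil_signals(B, tilt_deg):
    """Amplitudes seen by two coils with axes at +/-45 deg from horizontal,
    for a linearly polarized field tilted tilt_deg from horizontal."""
    t = math.radians(tilt_deg)
    a = B * math.cos(math.radians(45) - t)   # coil tilted towards the field
    b = B * math.cos(math.radians(45) + t)   # coil tilted away from the field
    return a, b

def tilt_estimate(a, b):
    """(A - B)/(A + B) = tan(tilt) for this 45-degree geometry."""
    return math.degrees(math.atan((a - b) / (a + b)))

a, b = coil_signals(1.0, 12.0)
est = tilt_estimate(a, b)                    # recovers 12 degrees
```

The identity follows from cos(45° − t) − cos(45° + t) = 2 sin 45° sin t and the corresponding sum formula, so for small tilts the ratio is indeed approximately proportional to the tilt angle.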

Inasmuch as it is not possible to determine the mean azimuth of the field and then measure the
tilt angle in that direction, as is done in making ground surveys, flight lines are laid out
perpendicular to the regional strike. As pointed out in the discussion on ground AFMAG
surveys, the natural field tends to become aligned perpendicular to the regional strike. The
field strength must be somewhat higher for airborne measurements, so in many parts of the
world AFMAG methods can be used only rarely. For large targets the AFMAG method
provides a greater depth of penetration than any of the other airborne electromagnetic
methods.

The semi-airborne EM technique

In this technique a large fixed transmitting loop is laid on the ground while a small aeroplane
or helicopter carries the detector system.

Figure 26. A semi-airborne EM survey.

The concept of semi-airborne systems has been put forward in the literature:

• The TURAIR system is operated in the frequency domain and uses two spatially displaced
receivers to define an amplitude ratio and phase difference (Bosschart, Siegel, 1972;
Becker, 1979; Seigel, 1979)
• FLAIRTEM is a time-domain system with data stacking and windowing (Elliott, 1997,
1998)
• The GEOTEM airborne EM system (Annan et al, 1996)

The useful frequency range for induction methods is about 100 Hz to 100 kHz. This range is
determined by strong attenuation of the time-varying EM waves in the conductive soil
structure. The measure of the attenuation is skin depth δ, which is defined as the distance in
which the amplitude of a plane EM wave decays to e-1 ≈ 0.37 of its initial value:

δ = √(2/(σµω)) ≈ 503 √(ρ/f) m,   (1.21)

where ω = 2πf is the angular frequency of the plane EM wave, σ = 1/ρ is the conductivity and
µ = 4π×10⁻⁷ H/m was used in the approximate formula (ρ in Ωm, f in Hz).

Table 1: Approximate skin depth δ [m] for materials of different resistivity at different
frequencies

f [Hz] \ ρ [Ωm]   0.1     0.2     0.5     1       2       5       10
100               15.92   22.51   35.59   50.33   71.18   112.54  159.16
200               11.25   15.92   25.17   35.59   50.33   79.58   112.54
500               7.12    10.07   15.92   22.51   31.83   50.33   71.18
1000              5.03    7.12    11.25   15.92   22.51   35.59   50.33
2000              3.56    5.03    7.96    11.25   15.92   25.17   35.59
5000              2.25    3.18    5.03    7.12    10.07   15.92   22.51
10000             1.59    2.25    3.56    5.03    7.12    11.25   15.92
20000             1.13    1.59    2.52    3.56    5.03    7.96    11.25
50000             0.71    1.01    1.59    2.25    3.18    5.03    7.12
100000            0.50    0.71    1.13    1.59    2.25    3.56    5.03
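Equation (1.21) and the entries of Table 1 can be reproduced directly; a minimal sketch:

```python
import math

MU0 = 4 * math.pi * 1e-7   # magnetic permeability of free space [H/m]

def skin_depth(rho, f):
    """Skin depth of eq. (1.21): delta = sqrt(2 / (sigma * mu * omega)),
    with sigma = 1/rho [S/m] and omega = 2*pi*f [rad/s]; result in metres."""
    sigma = 1.0 / rho
    omega = 2 * math.pi * f
    return math.sqrt(2.0 / (sigma * MU0 * omega))

# Reproduce two entries of Table 1:
d1 = skin_depth(rho=1.0, f=100.0)       # ~50.33 m
d2 = skin_depth(rho=0.1, f=100000.0)    # ~0.50 m
```

The same function also confirms the approximate form δ ≈ 503 √(ρ/f) m, since 503 ≈ √(10⁷/(4π²)).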

Induction logging

Induction logging is a technique for EM surveys in a borehole. When a borehole is drilled,
oil-based muds are often used for lubrication; these are insulators and prevent the flow of DC
electrical current from the electrodes to the formation. In induction logging the EM set-up is
lowered into the borehole. Currents are induced in the formation using a transmitting coil. An
array of receiving coils measures the magnetic field from the transmitter and that of the
secondary currents induced in the formation. If the formation is symmetrical around the
borehole axis, the current flows around the borehole in circular loops. When measurements
are performed at different frequencies and for different separations between transmitter and
receiver, the data can be converted into a 2-D image of the electrical conductivity near the
borehole. An assumption has to be made about the symmetry of the formation around the
borehole axis.

Figure 27. EM borehole logging (TX – transmitting coil, RX array – receiving coils).

The forward problem for EM sounding can be solved following the digital linear filter
approach to computing resistivity sounding curves. Filter sets for computing EM sounding
curves have been designed for various dipole-dipole configurations. The expression for the
mutual impedance ratio for the horizontal co-planar coil system can be written as:


Z/Z0 = 1 − r³ ∫0∞ λ² R(λ) J0(λr) dλ,   (1.22)

where r is the distance between the transmitter and the receiver, J0 is the Bessel function of
zero order and R(λ) = R0,N(λ) is the complex EM kernel function. This can be computed
using the following recursion for a given frequency:

Ri−1,N(λ) = [Vi−1,i + Ri,N(λ) exp(−2hiVi)] / [1 + Vi−1,i Ri,N(λ) exp(−2hiVi)],
(i = N, N−1, ..., 1),   (1.23)
with
RN,N(λ) = 0,   (1.24)
Vi = √(λ² + vi²),   (1.25)
vi = √(iωµ0σi),   (1.26)
Vi,k = (Vi − Vk)/(Vi + Vk),   (1.27)
ω = 2πf,   (1.28)

where f denotes the frequency of excitation [Hz], N the number of layers, µ0 the magnetic
permeability of free space (4π×10⁻⁷ H/m), σi the electrical conductivity of the ith layer [S/m]
and hi the thickness of the ith layer [m].
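The recursion (1.23)-(1.28) can be sketched directly. The layer model used here is an illustrative assumption, the coefficient in (1.23) is read as the interface term Vi−1,i of (1.27), and the air half-space is taken as layer 0 with V0 = λ:

```python
import cmath
import math

MU0 = 4 * math.pi * 1e-7   # magnetic permeability of free space [H/m]

def kernel_R(lam, f, sigmas, thicknesses):
    """EM kernel R(lambda) = R0,N(lambda) from the recursion (1.23)-(1.28).
    sigmas: conductivities of layers 1..N (the last is the basement) [S/m];
    thicknesses: h1..h(N-1) [m]; layer 0 is the air (sigma = 0, V0 = lam)."""
    omega = 2 * math.pi * f                                    # eq (1.28)
    n = len(sigmas)
    # V0 = lam; Vi = sqrt(lam^2 + i*omega*mu0*sigma_i), eqs (1.25)-(1.26)
    V = [complex(lam)] + [cmath.sqrt(lam ** 2 + 1j * omega * MU0 * s)
                          for s in sigmas]
    R = 0j                                                     # R(N,N) = 0, eq (1.24)
    for i in range(n, 0, -1):                                  # i = N, ..., 1
        Vik = (V[i - 1] - V[i]) / (V[i - 1] + V[i])            # eq (1.27)
        att = R * cmath.exp(-2 * thicknesses[i - 1] * V[i]) if i < n else 0j
        R = (Vik + att) / (1 + Vik * att)                      # eq (1.23)
    return R

# Consistency check: a "two-layer" model with equal conductivities must
# reduce to the homogeneous half-space kernel (illustrative parameters).
r1 = kernel_R(0.01, 1.0e4, [0.1], [])
r2 = kernel_R(0.01, 1.0e4, [0.1, 0.1], [50.0])
```

Feeding this kernel into the integral (1.22) is what the digital linear filter is designed to do efficiently; the direct numerical integration is slow because the integrand decays only through the oscillation of J0.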

Equation (1.22), with the change of variables x = ln(r) and y = ln(1/λ), can be rewritten for
computation as:

Z/Z0 = 1 − r³ ∫−∞∞ {exp(−2y) R(y)} {exp(x−y) J0(exp(x−y))} dy   (1.29)

Note that the integral (1.29) is a convolution integral, with the input in the first bracket and
the filter function in the second.

The ratio of mutual impedances can easily be computed with the help of the filter coefficients
developed by Koefoed et al (1972). In the following, the phase ϕ of the mutual impedance,
obtained from

ϕ = arctan( Im(Z/Z0) / Re(Z/Z0) ),   (1.30)

is considered as a representation of the EM response.
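Once Z/Z0 has been obtained (for example via the filter convolution), the phase of (1.30) follows immediately. The complex value below is an arbitrary illustrative number, not a computed sounding result:

```python
import math

def mutual_impedance_phase(z_ratio):
    """Phase of the mutual impedance ratio Z/Z0, eq (1.30), in radians."""
    return math.atan(z_ratio.imag / z_ratio.real)

z = 0.95 - 0.12j                     # illustrative Z/Z0 value
phi = mutual_impedance_phase(z)      # small negative phase [rad]
```

For responses near the free-space value (Re(Z/Z0) > 0), the arctan form of (1.30) is adequate; a quadrant-aware atan2 would be needed only for strongly perturbed ratios.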

EM Diffraction

In some cases it can be useful to measure electromagnetic field diffraction since some
problems can be solved analytically [Weidelt P., 2000].

High-frequency EM methods

For high-frequency EM waves the dielectric properties of the formation become important. At
frequencies above 1 MHz an electromagnetic wave propagates but is strongly attenuated; the
skin depth for a 1 Ωm formation at 1 MHz is about 0.5 m. High-frequency electromagnetic
methods use a transmitter and a receiver, and the measured quantity is usually the travel time
of the emitted EM wave reflected from buried objects or layer boundaries.

The technique known as ground penetrating radar (GPR) involves the collection of data from
surface radar or, when the survey is performed in a borehole, borehole radar. These
techniques yield a high resolution but, owing to the strong attenuation of high-frequency EM
waves in the soil, only allow rather shallow environmental studies. The usable frequency
range varies from 1 MHz to several GHz [Lazaro-Mancilla O. and Gomez-Trevino E., 2000,
Al-Nuaimy W., et al, 2000].

Radio Wave Methods

The electromagnetic waves transmitted from radio broadcast stations may be used as an
energy source in determining the electrical properties of the earth. The use of radio-frequency
electromagnetic fields can be considered an essentially different method from the inductive
methods discussed above, inasmuch as the instrumental methods used are
very different.

The frequencies used in radio-wave transmission are generally much higher than those used in
the inductive methods. If existing radio transmission stations are used as a signal source, only
limited ranges in frequency are available. Standard MW broadcast stations cover the
frequency range from 540 to 1640 kHz (in the USA). A great many commercial and amateur
stations (SW) have broadcast frequencies from 1640 kHz to 30 MHz and above, but for the
most part, transmissions are intermittent and many of these stations cannot be considered a
reliable source. There are also a few stations operating in the frequency range from 100 to 540
kHz (LW), and a very few stations broadcasting in the range 10 to 100 kHz (VLF).

Considering that the depth to which radio waves can penetrate into a conductive material such
as the earth is very limited, it is usually preferable that measurements be made at as low a
frequency as is possible. For the frequencies used by the standard broadcast stations, the skin
depth (the depth at which the signal strength is reduced by the ratio 1/e) ranges from tens of
metres in normally conductive rock to hundreds of metres in resistive rock. For this reason,
radio-wave measurements using standard broadcast stations as an energy source are best used
for studying the electrical properties of the soil cover and overburden and, in areas where the
soil is reasonably thin, the bedrock geology.

The transmitting antenna normally used in radio stations consists of a vertical wire supplied
with an oscillatory current, so that the antenna can be treated as a current dipole source.

The propagation from a transmitter to a receiver location can take place along a multiplicity of
paths. The various paths which may be followed are:
1. A direct line-of-sight path from the transmitter to the receiver;
2. A path in which the ray is reflected from the surface of the earth;
3. A path (or many paths) in which the ray is reflected from the lower surface of the
ionosphere;
4. A path in which energy is continually re-radiated by currents induced in the ground
(the surface wave).

In addition to these four paths, energy may follow ray paths reflected from irregularities on the
earth's surface, such as mountains. The electrical properties of the earth may be computed
from the rate at which the amplitude of the ground wave decreases with distance from the
transmitting antenna, but it must be distinguished from the other varieties of wave which may
arrive at the receiver.

If both the transmitting antenna and the receiving antenna are close to the surface of the earth
(with antenna heights of less than 5 % of the antenna separation), the wave reflected from the
earth's surface almost exactly cancels the direct arriving wave, since the travel distances along
the two paths are almost equal, and since the phase of the reflected wave is inverted with
respect to the phase of the direct-arriving wave as a result of the reflection. This means that
the total signal strength will consist only of the ground wave and ionosphere reflections.
Ordinarily, the ionosphere-reflected waves are important only at distances greater than several
tens of kilometres from the transmitter. Therefore, at distances of up to about 15 to 40 km, the
only signal observed at the earth's surface is the ground wave.

It would be possible to conduct a depth sounding using radio-wave field-strength profiles for
a variety of frequencies but such a technique is not used in practice for several reasons:

1. Instrumentally, it is difficult to transmit the range of frequencies required. Each frequency
requires a different antenna length to obtain good radiation efficiency. Also, permission
must be obtained for each frequency used, unless very low radiation power is used. Such
permission cannot be obtained readily for a range of frequencies covering several decades,
as is desirable if a sounding is to be made.
2. In order to determine a value for the effective resistivity with reasonable certainty, it is
necessary to make field strength measurements over a range of distances corresponding to
ten wavelengths at the lowest frequency used. In practice, this might be a distance of 10
km. In so doing, the layering must be assumed to be uniform over such a lateral distance,
and this is rarely the case for the depths to which radio waves normally penetrate.
3. It is usually easier to determine the resistivity as a function of depth using the galvanic
method at the depths normally reached by radio waves.

The calculation of earth resistivity from radio-wave decay rates is not as accurate as
determinations made by other methods, nor is it as convenient, unless an existing transmitter
can be used as a power source. The advantage of the method is that the effective resistivity is
determined over a relatively large area.

It has often been suggested in the literature that local variations in radio-wave field intensity
which occur within a distance of a few metres or tens of metres may be used to locate locally
conductive areas, such as fault traces or shallow ore bodies. The intensity of the radio-wave
field will normally tend to decrease less rapidly than normal over a small conductive area, and
may actually increase locally. Frequently, there may be a zone of anomalously low intensity
on the side of the conductive zone away from the transmitter.

Local variations in field strength may be studied, using a distant radio station as a source,
providing that some corrections may be made for variations in the level of power transmitted.
Broadcast stations normally have a power variation of between 10 and 20 %, though this
averages out if the signal level is observed and averaged over a period of several minutes. It is
also desirable to measure radio wave intensities from several stations operating at about the
same frequency but for which the signals arrive in the survey area from different directions.
The anomaly in field strength may commonly depend on the orientation of the conductive
zone relative to the direction of transmission of the radio waves. The certainty of detecting a
conductive zone is increased when radio waves arriving from several directions are observed.

The use of a distant transmitter to detect the presence of conductive zones is attractive in that
a survey may be conducted rapidly. However, precautions are necessary to discriminate
between local anomalies in field intensity caused by conductive zones in the earth, and the
anomalies associated with such things as pipes and fences.

The VLF method

The very low frequency (VLF) technique relies on several electromagnetic wave transmitters
located at various places on the earth’s surface, operating in the frequency range between
3 and 30 kHz. The transmitted EM waves spread around the earth, and their interaction with
conducting bodies in the ground can be measured: VLF EM waves interacting with buried
conductors generate a secondary field, which can be measured at and above the ground
surface.

Low-frequency EM waves have a deep penetration range into the ground and the sea, a
property of military value which is exploited for submarine communication. Furthermore,
the range of these waves is global, which makes it possible to use them in, for example,
geophysics.

The VLF method is inexpensive, very fast, and well suited to hard-rock prospecting. Porous
unconsolidated media such as sand are not suitable for this method, although it can be used to
locate very large metal objects buried in the ground. Conductive media such as wet clays
effectively mask anything lying beneath them.

VLF EM data are usually presented as profile or contour maps. The VLF method is seldom
used alone. Instead, it tends to be used in parallel with, for instance, DC resistivity techniques.
The VLF-R technique is a modification of VLF data interpretation with the aim of providing
resistivity profiling.

A list of worldwide VLF stations compiled by William Hepburn (LWCA) can be found at the
following address: http://www.iprimus.ca/~hepburnw/dx/time-vlf.htm.

1.4. Magnetic techniques

Magnetic techniques measure the remanent magnetic field associated with a material or the
change in the Earth’s magnetic field associated with a geological structure or man-made
object. They have been used for regional surveys since the early twentieth century in the
hydrocarbon industry, and for longer in mineral prospecting, but have seen little use in
groundwater studies.

Measurements of the magnetic field are performed mainly with proton magnetometers.
These devices can be hand-held for local magnetic field investigations or airborne for larger-
area surveys. It is also possible to measure the magnetic field at sea by means of a
waterproof measuring coil. The results of magnetic surveys are usually presented as line
profiles or magnetic anomaly maps.

Proton magnetometry is a technique for the precise measurement of the value of the Earth’s
magnetic field. The Earth’s magnetic field is a vector that depends on the position of a site on
the Earth’s surface. A proton magnetometer makes use of the phenomenon by which protons
in the atoms of earth materials precess in a constant magnetic field. The frequency of this
precession depends on the external magnetic field, so if the precession frequency can be
measured, the magnetic field can be determined. The accuracy of the method is about 0.1 nT,
while the magnetic field of the earth varies from about 25,000 nT to 70,000 nT. The method is
useful for the investigation of mineral resources such as iron and oil, and for the location of
hidden objects.
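The field-to-frequency relation underlying the instrument is linear: f = γp·B/2π, where γp ≈ 2.675 × 10^8 rad s⁻¹ T⁻¹ is the proton gyromagnetic ratio (about 0.0426 Hz per nT). A minimal sketch of the conversion:

```python
import math

GYROMAGNETIC_RATIO = 2.675221874e8  # proton gyromagnetic ratio [rad/(s*T)]

def precession_freq_hz(b_nanotesla: float) -> float:
    """Proton precession frequency [Hz] for a given flux density in nT."""
    return GYROMAGNETIC_RATIO * (b_nanotesla * 1e-9) / (2 * math.pi)

def field_nanotesla(freq_hz: float) -> float:
    """Flux density in nT recovered from a measured precession frequency."""
    return 2 * math.pi * freq_hz / GYROMAGNETIC_RATIO * 1e9
```

In a typical 50,000 nT field the precession frequency is about 2.13 kHz, and the quoted 0.1 nT accuracy corresponds to resolving that frequency to roughly 4 mHz.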

The magnetotelluric resistivity method

Electrical currents induced in rocks by fluctuations in the Earth's magnetic field may be used
to measure resistivity. If the time variations in the magnetic field can be treated as a magnetic
component of a plane electromagnetic wave, a simple relationship can be shown to exist
between the amplitude of the magnetic field changes, the voltage gradients induced in the
earth and the resistivity of the earth.

The depth to which an electromagnetic wave penetrates in a conductor depends both on the
frequency and on the resistivity of the conductor. Therefore, resistivity may be computed as a
function of depth within the earth, if the amplitudes of the magnetic and electrical field
changes can be measured at several frequencies.
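In the plane-wave approximation, the simple relationship mentioned above is the Cagniard apparent-resistivity formula; with E in mV/km, B in nT and period T in seconds it reduces to ρa = 0.2·T·(E/B)². A sketch, with the uniform half-space response used as a self-check:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space [H/m]

def apparent_resistivity(e_mv_per_km: float, b_nt: float, period_s: float) -> float:
    """Cagniard apparent resistivity [ohm-m] from orthogonal E and B amplitudes.

    Equivalent to |E/H|^2 / (mu0 * omega) with E in mV/km and B in nT,
    which reduces to the practical formula 0.2 * T * (E/B)^2.
    """
    return 0.2 * period_s * (e_mv_per_km / b_nt) ** 2

def halfspace_e_over_b(resistivity_ohm_m: float, period_s: float) -> float:
    """E/B ratio [(mV/km)/nT] expected over a uniform half-space: sqrt(omega*rho/mu0)."""
    omega = 2 * math.pi / period_s
    return math.sqrt(omega * resistivity_ohm_m / MU_0) * 1e-3  # (V/m)/T -> (mV/km)/nT
```

Feeding the half-space ratio back into the first function recovers the half-space resistivity exactly, and repeating the calculation at several periods yields resistivity as a function of depth, as described above.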

The advantages of the method are:

• The feasibility of detecting resistivity beneath a highly resistive bed, which is difficult with
the galvanic method.
• The opportunity to study resistivities at great depths within the earth.

The disadvantage is:

• The instrumental difficulty encountered in trying to measure the amplitude of small, rapid
changes in the magnetic field.

The magnetotelluric field

The unit of measurement for the electric field is the [V/m]. In practice, the multiple unit
[mV/km] is commonly used, since the electrical component of the magnetotelluric field is
measured with a pair of electrodes with spacing of the order of a kilometre and the voltages
recorded over such a separation are in tens or hundreds of millivolts. The millivolt per
kilometre is identical with [µV/m], a unit which is commonly used in radio-wave field-
intensity measurements.

Magnetic field intensity is defined as the force exerted on a magnetic pole of unit strength by
a magnetic field. It is usually represented by the symbol F when the Earth's field is under
consideration. In the CGS system of units, which is almost invariably used by geophysicists,
the unit of intensity is the oersted. The intensity of the earth's magnetic field, except for areas
of anomalous local magnetization, ranges from about 0.25 to 0.70 oersted. For describing
small changes in the Earth's magnetic field, geophysicists have defined the gamma, which is
10^-5 oersted.

In the MKS system of units, intensity is measured in [N/Wb]. Experiments by Biot and Savart
(1820) have led to the realization that the magnetic field about a long and straight wire with
current flowing through it is proportional to the amount of current and inversely proportional
to the distance from the wire. In MKS units, the relationship is:

F = I / (2πa)    (1.31)

where a is the distance from the wire at which the magnetic field is measured. This
relationship between magnetic field strength and current permits the definition of another
MKS unit of intensity, the ampere per metre. By equation (1.31), one ampere of current
flowing through a long straight wire generates a magnetic field intensity of 1/(2π) ampere per
metre at a distance of one metre from the wire. The ampere per metre is more widely used than the newton per
weber, but the quantity measured is numerically the same, whichever name is used for the
unit. The newton per weber (or ampere per metre) is a much smaller unit than the oersted:

1 [N/Wb] = 1 [A/m] = 4π × 10^-3 oersted,

or

1 oersted ≈ 79.6 [N/Wb] = 79.6 [A/m].
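These conversion factors, together with the gamma and gauss definitions used elsewhere in this section, can be collected into a small sketch (note that 1000/4π ≈ 79.58 A/m exactly):

```python
import math

def oersted_to_ampere_per_metre(h_oe: float) -> float:
    """1 Oe = 1000/(4*pi) A/m, i.e. about 79.58 A/m."""
    return h_oe * 1000.0 / (4.0 * math.pi)

def gamma_to_oersted(h_gamma: float) -> float:
    """1 gamma = 1e-5 Oe (numerically, one gamma equals one nanotesla)."""
    return h_gamma * 1e-5

def gauss_to_weber_per_m2(b_gauss: float) -> float:
    """1 Wb/m^2 (the tesla) = 10,000 gauss."""
    return b_gauss * 1e-4
```

Applied to the Earth's field range of 0.25 to 0.70 oersted, the first function gives roughly 20 to 56 A/m, matching the MKS intensity range quoted for the Earth's field.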

Magnetic induction is a measure of the force exerted on a moving charge by a magnetic field,
whereas magnetic intensity is a measure of the force exerted on a magnetic pole by a
magnetic field, whether the pole is moving or not. Magnetic induction is related to magnetic
intensity as:

B = µF (1.32)

where µ is the magnetic permeability of the space in which the induction is being measured.
Usually no distinction is made between magnetic induction and magnetic intensity when they
are measured in CGS units because the permeability of most rocks is unity, or nearly so, when
so measured. Therefore, the two quantities, intensity and induction, are numerically equal.

In the MKS system of units care must be taken to distinguish between the two quantities,
since the permeability, µ, has the value 12.56 × 10^-7 [H/m], or very nearly so, for most rocks. The
quantity which is measured in practice depends on the type of magnetometer used. Some
magnetometers measure the force on a magnetic pole by balancing this force against a
gravitational force; such magnetometers measure the magnetic intensity ([N/Wb]). Other
magnetometers operate on the principle that time variations in the magnetic induction induce
voltage in a coil.

The MKS unit of magnetic induction is defined in terms of the Lorentz force law:

F = q v × B    (1.33)

where F is the force felt by a charge q moving with a velocity v in a field of magnetic
induction strength B. If the charge is measured in coulombs, the force in newtons and the
velocity in metres per second, the unit for magnetic induction is the [Wb/m2]. In the CGS
system (dynes for force, statcoulombs for charge and centimetres per second for velocity) the
unit for induction strength is the gauss. The weber per square metre is a much larger quantity
than the gauss:

1 weber/m2 = 10,000 gauss.

The ranges for each of these units which may be used in describing the Earth's magnetic field
are as follows:

CGS system -
magnetic induction in gauss 0.25 to 0.70
magnetic intensity in oersteds 0.25 to 0.70
magnetic intensity in gammas 25,000 to 70,000

MKS system -
magnetic induction in webers/m2 0.000025 to 0.000070
magnetic intensity in newtons/weber 20 to 56

Instrumentation used in measuring the magnetotelluric field

Determination of earth resistivity using the magnetotelluric field as a source of power requires
that wave impedances be measured over a frequency spectrum several decades in width. In
designing field equipment it is necessary to know what frequency intervals must be observed.

It is a fairly simple matter to measure the electrical field in the earth with the necessary
precision. The commonest technique uses three electrodes, or ground contacts, in the form of
an L, with the lengths of the arms ranging from several tenths of a kilometre to several
kilometres. If measurements are being made over resistant rock, such as igneous or
metamorphic rocks, the electrical field strengths will be large and the shorter electrode
separations will be adequate. If measurements are being made over very conductive rock,
electrode separations of several kilometres may be required to obtain a large enough voltage
to measure accurately.

With the L-spread, the corner electrode is used as a common ground for two recording
channels (Figure 28). It is essential that no current flows through the recording system since if
it does, current from one arm of the L-spread may return to the ground through the other arm
of the L-spread. When this happens, a voltage appearing along one arm of the spread will also
be recorded, but with reduced amplitude, on the recording channel connected to the other arm
of the spread. This cross-sensitivity leads to errors in relating the proper component of
electrical field variation with the component of magnetic field variation measured at the same
time. If the recording system in use has a relatively low resistance, a four-terminal cross array
of electrodes, such as that shown in (Figure 29), should be used. With a four-terminal system,
neither recording circuit has a common ground.

Almost any type of electrode may be used to make contact with the ground if variations only
with periods shorter than hundreds of seconds are to be measured. A lead plate buried in a
shallow trench and moistened with salt water is a good simple electrode.

Figure 28. L-spread configuration.

Figure 29. Four-electrode realization of the spread configuration.

However, if variations with periods ranging up to a day are to be measured, some precautions
should be taken to avoid variations in electrode potential due to temperature changes.

Electrode potential, which is the potential drop between an electrode and the electrolyte in
contact with it, depends on temperature. If both electrodes used in measuring an earth voltage
are identical in behaviour, and if both electrodes vary in temperature in the same way, the
changes in electrode potential at each electrode will cancel out. In practice, it is unreasonable
to expect, when the electrodes are widely spaced, that they will both be subjected to the same
changes in temperature, unless some provision is made to ensure that this is so. This may be
done by burying the electrodes several metres in the ground, so that diurnal temperature
changes, changes in temperature caused by variation in cloud cover, wind velocity and similar
factors will not penetrate to the electrode.

If measurements are to be made as soon as the electrodes are emplaced, there is some
advantage in using non-polarizing electrodes, electrodes which consist of a metal immersed in
a saturated solution of one of its salts. Combinations commonly used are copper electrodes in
solutions of copper sulphate and zinc electrodes in solutions of zinc sulphate.

In view of the small changes in voltage which are to be measured, some form of amplification
is required. This may be accomplished with a direct-current amplifier, in view of the low
frequencies being measured. It is important that the short-term drift of the amplifier be
considerably smaller than the voltage variations to be measured in order for the drift not to be
confused with true voltage variations. Long-term drift is not a problem unless variations are to
be recorded over a long period. Voltage amplification ratios up to 3,000 may be readily
obtained using chopper-stabilized amplifiers to avoid drift problems. In such amplifiers the
low-frequency voltages appearing at the input are converted to higher-frequency voltages
using either a mechanical or an electronic alternator (a device that alternates the polarity of
the input at rates ranging from 60 to 400 times per second). Amplification is then
accomplished in the same manner as in AC amplifiers and the output is converted back to DC
(or low frequency) using an alternator operating synchronously with the input alternator. Such
amplifiers may be used for frequencies up to several cycles per second. At higher frequencies,
conventional AC amplifiers may be used, since the short-term drift at these higher frequencies
is not usually a problem.
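The chopper scheme lends itself to a simple numerical sketch (all values below are illustrative): a microvolt-level quasi-DC input is polarity-switched at 400 Hz, amplified by an AC stage, then synchronously demodulated and averaged back to DC.

```python
import numpy as np

fs = 10_000.0                                    # sampling rate of the simulation [Hz]
t = np.arange(0, 1.0, 1.0 / fs)

v_in = 2e-6 * np.ones_like(t)                    # a 2 uV quasi-DC telluric signal
chopper = np.sign(np.sin(2 * np.pi * 400 * t))   # 400 Hz polarity alternator
gain = 3000.0

v_ac = gain * (v_in * chopper)                   # chopped signal amplified by an AC stage
v_out = v_ac * chopper                           # synchronous demodulation
v_dc = v_out.mean()                              # averaging (low-pass) recovers gain * v_in
```

The recovered DC level equals the gain times the input, and an offset introduced after the chopper would be turned into a 400 Hz square wave by the demodulator and largely average out, which is why drift in the AC stage is not a problem.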

The voltages from the electrode arrays may be filtered before amplification in some cases.
The extent of the filtering carried out before the data are recorded depends on the type of data
analysis which is planned. If data are recorded on magnetic tape, generally as little filtering as
possible is done, since it may be performed more effectively when the tapes are replayed at an
analysis centre. If data are recorded on paper, it is usually necessary to record only a narrow
band of frequencies on a single record. If broad-band signals are recorded, usually only a
single frequency, or perhaps just a few, are readily apparent, since they are far larger than the
signals on other frequency bands. In order to see variations in the magnetotelluric field on
frequency bands outside the dominant frequencies, it is necessary to exclude these dominant
frequencies with filters.

In addition to filtering, it is usually necessary to remove a static self-potential level which
ordinarily exists between a widely spaced pair of electrodes. The static potential difference
between a pair of electrodes spaced one kilometre apart is ordinarily of the order of some
hundreds of millivolts. This level must be removed before the voltage between the electrodes
is amplified if it would otherwise exceed the dynamic range of the amplifier and
recording system. The static potential may be cancelled with a potentiometric circuit, or
rejected with a capacitive input to the recorder system. If a capacitive input system is used,
the time constant must be much longer than the longest period to be recorded.
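The adequacy of a capacitive (high-pass) input can be checked with the first-order response |H| = 1/√(1 + (T/2πRC)²), where T is the signal period; the component values in the example below are purely hypothetical.

```python
import math

def highpass_gain(period_s: float, r_ohm: float, c_farad: float) -> float:
    """Amplitude response of a first-order RC high-pass at period T:
    |H| = 1 / sqrt(1 + (T / (2*pi*R*C))^2)."""
    tau = r_ohm * c_farad
    return 1.0 / math.sqrt(1.0 + (period_s / (2.0 * math.pi * tau)) ** 2)

# Illustrative values: R = 100 Mohm, C = 100 uF gives tau = 1e4 s
print(highpass_gain(600.0, 1e8, 100e-6))   # a 10-minute variation passes nearly unattenuated
print(highpass_gain(1e7, 1e8, 100e-6))     # the quasi-static level is strongly rejected
```

With a time constant of 10^4 s, variations with periods of hundreds of seconds pass essentially unattenuated while the static electrode potential is blocked, which is the design condition stated above.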

The measurement of magnetic field micropulsations is considerably more difficult than the
measurement of electrical field oscillations and only in recent years has equipment become
available which makes it possible to detect such micropulsations of normal amplitude.
Magnetometers have been constructed using a wide variety of physical phenomena for
detecting magnetic field changes but only four have been used extensively up to the present
time in studying the magnetotelluric field. These are the magnetic balance, the flux-gate
magnetometer, the induction coil and the optical pumping magnetometer. The last is the most
sensitive for long-term magnetic field variations.

Of the four methods mentioned for measuring magnetic field, the magnetic balance is the
oldest and the simplest. A bar magnet is suspended in such a way that torque due to the
magnetic field is balanced against gravity. Magnetic balances are commonly sensitive enough
to provide record deflections of one millimetre for magnetic field changes of 2.5 gammas.
The chief advantage of a magnetic balance is its simplicity. Disadvantages include its
sensitivity to temperature changes, seismic accelerations and lack of sensitivity to short-term
variations in the magnetic field. The sensitivity to temperature arises from changes in the
magnetic moment and the position of the centre of gravity, although these can be
compensated for to a large extent. The sensitivity to seismic accelerations comes about since
the value of g is apparently changed when seismic accelerations are added to the normal
gravitational acceleration. The long period is a result of the small forces being balanced, as
well as the large mass of the magnet system. The natural period of sensitive magnetic
balances is of the order of several hundred seconds, so only magnetic variations with periods
longer than this can be measured.

The flux-gate magnetometer exploits the fact that the magnetic permeability of a
ferromagnetic material depends on the magnetic field strength in order to measure the latter. The maximum
sensitivity which may be obtained with a flux-gate magnetometer depends on noise generated
within the core material and in the associated electronic circuits. Noise can, with careful
design, be reduced to about 0.02 gammas or less. The advantages of flux-gate magnetometers
for magnetotelluric measurements are that they measure only that component of the field
parallel with the core and that they are readily available and reliable. The disadvantages are
the high noise level, as compared with some other types, and long-term drifts in the DC bias
current and electronic circuitry.

Induction magnetometers differ from the other three types used in measuring magnetotelluric
field variations in that they measure the rate of change of field strength, rather than the field
itself. As long as the coil can be assumed to have negligible resistance, inductance and
capacitance, the EMF induced in a coil of wire is proportional to the oscillation frequency of
the magnetic field. For the low-field strengths and low frequencies which are of interest in
magnetotelluric measurements, a very large number of turns must be used to provide a
measurable voltage output from a coil. The sensitivity of an induction coil may be increased
or its resistance decreased by winding the coil onto a core of high-permeability material. The
principal advantage of the induction coil is its high output at high frequencies. The principal
disadvantages are the low output at low frequencies, the sensitivity to coil movement (seismic
disturbances), and the difficulty of obtaining a precise calibration.
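The frequency dependence of the coil output can be illustrated for a sinusoidal field B(t) = B0·sin(2πft), for which the peak EMF is 2πf·N·A·B0 (multiplied by an effective core gain if a high-permeability core is used). The coil parameters below are hypothetical.

```python
import math

def peak_coil_emf(n_turns: int, area_m2: float, b0_tesla: float,
                  freq_hz: float, core_gain: float = 1.0) -> float:
    """Peak EMF of an induction coil in a sinusoidal field B0*sin(2*pi*f*t).

    emf = -N * A * dB/dt, so the peak value is 2*pi*f * N * A * B0,
    scaled by an effective permeability gain for a cored coil.
    """
    return 2 * math.pi * freq_hz * n_turns * area_m2 * core_gain * b0_tesla

# Illustrative (hypothetical) coil: 50,000 turns, 0.01 m^2 area, in a 1 gamma (1 nT) field
for f in (0.01, 1.0, 100.0):
    print(f"f={f:>7.2f} Hz -> peak EMF = {peak_coil_emf(50_000, 0.01, 1e-9, f):.3e} V")
```

For this hypothetical coil in a one-gamma field the peak EMF is about 3 µV at 1 Hz but only about 30 nV at 0.01 Hz, which illustrates why induction coils perform poorly at the lowest magnetotelluric frequencies.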

Optical pumping magnetometers are the newest type of sensitive magnetometer to become
available and provide the best means for measuring long-period variations in magnetic field
strength. Optical pumping magnetometers take advantage of a rather complicated internal

38
transfer phenomenon in atoms which depends on the ambient magnetic field strength. Optical
pumping magnetometers have great advantages over other types of magnetometers in that the
measurement of field strength is absolute and not subject to the many uncertainties of
maintaining calibration that other types of magnetometers are subjected to. Optical pumping
magnetometers may be used to measure long-term field changes without concern for drift
problems. One major disadvantage is the complexity and cost of the equipment required to
obtain an accurate measurement.

Telluric Current Methods

The magnetotelluric method for measuring earth resistivity is very demanding in terms of the
measurements required: magnetic field variations with amplitudes as small as a hundredth of
a gamma must be recorded. The telluric method for studying changes in earth resistivity utilizes
the same natural magnetotelluric field as a source of power but requires only that the electrical
field component of the field be measured simultaneously at several locations. In this way the
need to measure small changes in magnetic field strength is avoided but in simplifying the
measurement technique the possibility of determining earth resistivities in absolute terms is
also surrendered. The telluric method may be used to study ratios of resistivities between
locations, but cannot be used to determine the absolute value of resistivity at any one location
without auxiliary information. The telluric method can logically be thought of as a special
application of the more general magnetotelluric method. In fact, however, the telluric method
was developed prior to the development of the theory for the magnetotelluric method. The
telluric method was first applied in exploration geophysics during the 1930s by Conrad and
Marcel Schlumberger and has been used extensively in Europe and Africa, particularly since
1946. Between the years 1941 and 1955, 565 crew months of effort were devoted to telluric
current surveys by the Compagnie Generale de Geophysique of Paris, France (d'Erceville and
Kunetz, 1962). The telluric current method was first used in routine exploration in the
U.S.S.R. in 1954 and by 1959 its use had extended to the point where 24 exploration teams
were in the field, covering an area of 120,000 km2 per year (Birdichevskiy, 1960). Use of the
method in the United States has been limited to the 23 crew months of work reported by the
Compagnie Generale de Geophysique to have been carried out there. This reflects the fact that
the telluric method is primarily a reconnaissance method, used to best effect in areas where
the geology is poorly known.

1.5. Multi-modal techniques

Modern geophysical techniques have led to improved data analysis by bringing together
different technologies. The introduction of more information about formation structure into
data visualization has resulted in better data reconstruction [e.g. Sharma S. P. and Kaikkonen
P., 1999, Xiang J., et al, 2002]. More details on this subject are given in the following section.

2. Inverse problems in geophysics
2.1. Introduction

This brief outline indicates only some of the issues in the theory of inverse problems in
geophysics. More thorough and detailed descriptions of the application of inverse theory
in geophysics can be found in many publications, including Snieder R., 1998, Engl H., 1992,
Groetsch C. W., 1998 and Snieder R. and J. Trampert, 1999. Inverse problems are not only
encountered in geophysics and a few examples are given here of inverse problems in other
applications:

1. Medicine;
• computed tomography (CT),
• magnetic resonance imaging (MRI),
• single photon emission tomography (SPECT),
• positron emission tomography (PET),
2. Curve fitting,
3. Determination of earthquake location,
4. Image enhancement,
5. Impedance tomography,
6. Lithosphere response to loading,
7. Pump test analysis in hydrogeology,
8. Factor analysis in geology,
9. Geomagnetism,
10. Density distribution within the Earth,
11. Temperature distribution within the Earth,
12. Distribution of radioactive elements within the Earth.

The above list of inverse problems is, of course, not complete. In geophysics alone, further
inverse problems may be considered, such as electromagnetic wave propagation through the
earth and current or fluid flow in porous rocks or soil. The earth (soil) may be described in
terms of its physical property distributions (Engl H., 1992). In turn, these properties are very
often measured and then analysed; to achieve this, specially designed experiments are conducted.
For example, measurement of DC resistivity is performed using different electrode arrays. As
a result, sets of data are obtained. As they are obtained from measurement, they are referred to
as experimental or, sometimes, observational data. Once the data have been collected, they
can be modelled in conjunction with the corresponding properties of the earth, so that the
relationship between the properties of the model studied and the experimental data can be
understood. The way the model is constructed is, therefore, an essential step in the whole
process of calculating the distribution of properties on the basis of experimental data. The
process of modelling the relationship between observed data and properties is known as the
Forward Problem, in contrast to the opposite one, in which the properties are drawn from the
data and known as the Inverse Problem. A classical forward problem is to find a unique effect
of a given cause by using an appropriate physical model. An important aspect of the physical
sciences is to make inferences about physical parameters from data (Snieder R. and J.
Trampert, 1999). In general, the laws of physics provide the means for computing the data
values, given a model. These problems are usually well-posed, which means that they have a
unique solution, which is insensitive to small changes in data. An inverse problem is a set of
consecutive steps (algorithms) leading to the discovery of the cause of a given effect (Groetsch
C. W., 1993). Inverse problems do not necessarily have a unique and stable solution, which
means that small variations in the data can involve large changes in the parameters obtained.
This is why inverse problems are often considered to be ill-posed.

There are many methods of solving inverse problems. These can be categorised as
“statistical” or “deterministic”. If the model being determined consists of fewer parameters
than the number of data, then the inverse problem is said to be “over-determined”. It can be
solved using techniques and methods which lead to a best fit of data. In the opposite situation,
when the number of parameters estimated is greater than the data, the inverse problem is
described as “under-determined”. In this case the number of solutions, in other words the
number of models that are in agreement with the measured data, is infinite. However, it is
possible to use methods originally developed for over-determined problems in under-
determined ones in order to obtain so called “smooth” models from “under-determined” data.
When the number of model parameters corresponds to a small number of data the problem is
said to be “evenly determined”.
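The distinction can be made concrete with a toy linear problem d = Gm solved by least squares (illustrative only; NumPy's `lstsq` returns the minimum-norm solution when the system is under-determined):

```python
import numpy as np

# Over-determined: 5 data, 2 parameters (straight-line fit d = m0 + m1 * x)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
G_over = np.column_stack([np.ones_like(x), x])
d = 1.0 + 2.0 * x                       # noise-free data generated by m = (1, 2)
m_over, *_ = np.linalg.lstsq(G_over, d, rcond=None)

# Under-determined: 1 datum, 2 parameters -> infinitely many models fit exactly;
# lstsq returns the minimum-norm member of that family
G_under = np.array([[1.0, 1.0]])
d_under = np.array([4.0])
m_under, *_ = np.linalg.lstsq(G_under, d_under, rcond=None)
```

In the over-determined case the five data pin down both parameters uniquely; in the under-determined case m0 + m1 = 4 has infinitely many solutions and the solver picks the minimum-norm member, (2, 2), illustrating how an additional criterion must be imposed to select one model from the infinite set.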

In the ideal case, an exact theory exists that prescribes how the data should be transformed in
order to reproduce the model (Snieder R. and J. Trampert, 1999). This is the case in a few
selected examples, assuming that the required infinite and noise-free data sets are available.
However, such exact techniques are of limited applicability, for a number of reasons. Firstly, the exact inversion
techniques are usually only applicable for idealistic situations that may not hold in practice.
Secondly, the exact inversion techniques are often very unstable. However, the third reason is
the most fundamental. In many inverse problems the model to be determined is a continuous
function of the space variables. This means that the model has infinitely many degrees of
freedom, while in a realistic experiment the quantity of data that can be used for the
determination of the model is usually finite. A simple count of variables shows that the data
cannot carry sufficient information to determine the model uniquely. This issue is also
relevant for non-linear inverse problems.

The fact that in realistic experiments a finite quantity of data is available to reconstruct a
model with infinitely many degrees of freedom necessarily means that the inverse problem is
not unique, in the sense that there are many models that explain the data equally well. The
model obtained from the inversion of the data is therefore not necessarily equal to the true
model that one seeks. This implies that for realistic problems, inversion really consists of two
steps (Figure 30).

Figure 30. The inverse problem viewed as a combination of an estimation problem plus an
appraisal problem (source: Snieder R. and J Trampert, 1999, Inverse problems in
geophysics [in] “Wavefield inversion”, ed. A. Wirgin, Springer Verlag, New
York, pp. 119-190)

The true model is denoted by m and the data by d. From the data d one reconstructs an
estimated model m̃; this is called the estimation problem. Apart from estimating a model m̃
that is consistent with the data, the relation between the estimated model m̃ and the true model
m also needs to be investigated. In the appraisal problem the properties of the true model
recovered by the estimated model are determined, as well as the errors attached to it. The
essence of this discussion is that inversion = estimation + appraisal. It does not make much
sense to make a physical interpretation of a model without acknowledging the fact of errors
and limited resolution in the model (Snieder R and J Trampert, 1999).

In general there are two reasons why the estimated model differs from the true model. The
first reason is the non-uniqueness of the inverse problem that causes several (usually infinitely
many) models to fit the data. Technically, this model null-space exists as a result of
inadequate sampling of the model space. The second reason is that real data (and, more often
than we would like, physical theories) are always contaminated with errors and the estimated
model is therefore affected by these errors too. Model appraisal has, then, two aspects, namely
evaluation of non-uniqueness and error propagation. Model estimation and model appraisal
are fundamentally different for discrete models with a finite number of degrees of freedom
and for continuous models with infinitely many degrees of freedom. The problem of model
appraisal is only well-solved for linear inverse problems. For this reason the inversion of
discrete models and continuous models is treated separately in the literature. Similarly, the
issues of linear inversion and non-linear inversion are also treated independently. A more
detailed discussion of these problems can be found in Snieder R, 1998, and Snieder R and
J Trampert, 1999.
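For a linear, discrete problem the appraisal step can be made concrete with the model resolution matrix R = G⁺G: for noise-free data the estimated model is exactly R applied to the true model, so the departure of R from the identity measures the loss of resolution caused by inadequate sampling of the model space. The sketch below uses an invented under-determined operator G.

```python
import numpy as np

rng = np.random.default_rng(1)

# Under-determined linear problem: 5 data, 12 model cells.
G = rng.normal(size=(5, 12))
m_true = rng.normal(size=12)
d = G @ m_true                      # noise-free data

G_pinv = np.linalg.pinv(G)          # minimum-norm generalized inverse
m_est = G_pinv @ d

# Model resolution matrix: m_est = R @ m_true holds for any noise-free model.
R = G_pinv @ G
assert np.allclose(m_est, R @ m_true)

# Imperfect resolution: R is not the identity, so the estimate is a blurred
# version of the true model rather than the true model itself.
assert not np.allclose(R, np.eye(12))
```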

In general, the methods for solving inverse problems are categorized here as deterministic and
statistical methods and both these categories are briefly presented in the appendices. A more
detailed description of the deterministic methods is given by Meju M A, 2001, Geophysical
data analysis, SEG, Tulsa, Oklahoma, while statistical methods are rigorously treated by
Mosegaard K and A. Tarantola, 2002, Probabilistic approach to inverse problems, [in]
Handbook of earthquake and engineering seismology, Academic Press.

2.2. The deterministic approach

General remarks on the deterministic and probabilistic approaches to inverse problems are
given in the appendices. Here the different approaches are illustrated by a more detailed
presentation of selected papers.

2.2.1. Linear and non-linear least-square inversion

Inversion of geo-electrical data is an ill-posed problem. This ill-posedness and the ensuing
sub-optimality restrict the choice of initial model to one that closely approaches the true model. The problem may be
reduced by introducing damping into the system of equations. It has been shown that an
appropriate choice of damping parameter obtained adaptively and the use of a conjugate-
gradient algorithm to solve the normal equations make the 1-D inversion scheme efficient and
robust (Roy I. G., 1999). In fact, this work belongs to the simultaneous inversion category, as
two different types of data are utilized. The changes in the damping and relative residual error
with iteration number are illustrated in the paper of Roy I. G., 1999. A comparative evaluation
is made of its efficacy over the conventional Marquardt and simulated annealing methods,
tested on Inman’s model. Inversion of induced polarization (IP) sounding is obtained by
inverting twice (true and modified) DC apparent resistivity data. The inversion of IP data
presented here is generic and can be applied to any of the IP observables such as
chargeability, frequency effect and phase, as long as these observables are explicitly related to
DC apparent resistivity. The scheme is used successfully in inverting noise-free and noisy
synthetic data and field data taken from the published literature.

A layered model of the earth is considered. The forward problem is formed by the following
three equations. The IP effect in the medium is viewed as the apparent chargeability
of an inhomogeneous polarizable earth given by

η_a = [ρ_a*(ρ_i + η_i ρ_i) − ρ_a(ρ_i)] / ρ_a(ρ_i)    (2.1)

where ρ_a and ρ_a* are the apparent resistivities for DC and time-varying electrical field
measurements, and ρ_i and η_i are the true resistivity and chargeability of the ith layer. IP
forward modelling is realized by carrying out twofold DC forward modelling, once with the
true DC resistivity distribution ρ_i of the medium and the other time with a modified
resistivity distribution ρ_i* = ρ_i + η_i ρ_i. The apparent resistivity ρ_a over a layered earth is
related to the kernel function through the Hankel integral

ρ_a = s² ∫₀^∞ T(λ) J₁(λs) λ dλ    (2.2)

where s is half the electrode spacing in a Schlumberger electrode configuration, λ is an
integration parameter and J₁(λs) is a first-order Bessel function. T(λ), the resistivity transform
function, is obtained by the recurrence relationship (Koefoed, 1979)

T_i(λ) = [T_{i+1}(λ) + ρ_i tanh(λh_i)] / [1 + T_{i+1}(λ) tanh(λh_i) / ρ_i]    (2.3)

with T_n(λ) = ρ_n, where n denotes the number of layers, and ρ_i and h_i are the resistivity and
thickness of the ith layer, respectively.
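As an illustration, the recurrence (2.3) and the twofold DC modelling behind (2.1) can be sketched as follows. The Hankel integral (2.2), evaluated in practice with digital filters, is omitted here, and the apparent chargeability is formed at the level of the resistivity transform T(λ) rather than ρ_a, purely to keep the example short; layer values and the choice of λ are invented.

```python
import numpy as np

def resistivity_transform(lam, rho, h):
    """Koefoed downward recurrence (2.3): start from T_n = rho_n, then step up.
    rho: layer resistivities (length n); h: thicknesses (length n-1)."""
    T = rho[-1]
    for i in range(len(rho) - 2, -1, -1):
        th = np.tanh(lam * h[i])
        T = (T + rho[i] * th) / (1.0 + T * th / rho[i])
    return T

rho = np.array([100.0, 10.0, 1000.0])   # three-layer model (ohm-m), illustrative
h = np.array([5.0, 20.0])               # thicknesses of layers 1 and 2 (m)

# Limiting behaviour: small lambda sees the basement, large lambda the top layer.
assert np.isclose(resistivity_transform(1e-8, rho, h), rho[-1], rtol=1e-3)
assert np.isclose(resistivity_transform(1e3, rho, h), rho[0], rtol=1e-3)

# IP forward modelling by twofold DC modelling: repeat the computation with
# rho_i* = rho_i (1 + eta_i) and form the chargeability analogue of (2.1).
eta = np.array([0.0, 0.2, 0.0])         # only layer 2 is polarizable
lam = 1.0 / 15.0                        # chosen to be sensitive to layer 2
T_dc = resistivity_transform(lam, rho, h)
T_ip = resistivity_transform(lam, rho * (1.0 + eta), h)
eta_a = (T_ip - T_dc) / T_dc
assert 0.0 < eta_a < eta.max()          # diluted by the unpolarizable layers
```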

The computation of Frechet derivatives is an essential step for any successively linearized
non-linear inversion scheme. To lower the computational burden, analytical (strictly speaking
semi-analytical, as the Hankel transform is computed numerically) computation of the Frechet
derivatives of the apparent resistivity data is considered. The Frechet derivative of the
resistivity data is obtained by differentiating both sides of (2.2) with respect to the kth
parameter.
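Analytical Frechet derivatives of this kind are routinely spot-checked against finite differences. The sketch below does so for an invented two-parameter forward function (not Roy's kernel); the check generalizes to any differentiable forward model.

```python
import numpy as np

def forward(m):
    """Toy non-linear forward model standing in for apparent resistivity;
    purely illustrative."""
    return np.array([m[0] * np.exp(-m[1]), m[0] + m[1] ** 2])

def fd_jacobian(f, m, eps=1e-6):
    """Column-by-column finite-difference approximation of the Jacobian."""
    d0 = f(m)
    J = np.zeros((d0.size, m.size))
    for k in range(m.size):
        dm = np.zeros_like(m)
        dm[k] = eps
        J[:, k] = (f(m + dm) - d0) / eps
    return J

m = np.array([2.0, 0.5])
J_fd = fd_jacobian(forward, m)

# Analytic Jacobian of the same toy model, for comparison.
J_an = np.array([[np.exp(-m[1]), -m[0] * np.exp(-m[1])],
                 [1.0, 2.0 * m[1]]])
assert np.allclose(J_fd, J_an, atol=1e-4)
```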

The inverse problem is solved by seeking the minimum-norm least-square solution x, such
that

min_x ‖Jx − d‖² = εᵀε    (2.4)

where d and x are the data difference and the parameter correction vectors, respectively, J is
the Jacobian matrix of the Frechet derivatives of the apparent resistivity data and ε is the noise
associated with the data. The matrix J is over-determined (the number of measurements is
greater than the number of model parameters) and non-self-adjoint. Since J is not a square
matrix and is ill-conditioned in general, the solution is given by the normal equation

(JᵀJ + µI) x = Jᵀd    (2.5)

where µ is the damping parameter and I is the identity matrix.
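The effect of the damping term in (2.5) can be sketched on an invented ill-conditioned Jacobian: µ = 0 reproduces the unstable least-square correction, while a moderate µ trades a slight increase in residual for a much smaller, stabler correction vector. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-conditioned Jacobian: two nearly dependent columns.
J = rng.normal(size=(20, 4))
J[:, 3] = J[:, 2] + 1e-6 * rng.normal(size=20)
x_true = np.array([1.0, -1.0, 2.0, 0.0])
d = J @ x_true + 0.01 * rng.normal(size=20)     # noisy data-difference vector

def damped_solve(J, d, mu):
    """Solve the damped normal equations (2.5): (J^T J + mu I) x = J^T d."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ d)

x_undamped = damped_solve(J, d, 0.0)
x_damped = damped_solve(J, d, 1e-2)

# Damping trades a little misfit for a much smaller (stabler) correction.
assert np.linalg.norm(x_damped) < np.linalg.norm(x_undamped)
assert np.linalg.norm(J @ x_damped - d) >= np.linalg.norm(J @ x_undamped - d)
```

In the adaptive scheme discussed above, µ would be re-chosen at each iteration according to the residual error rather than fixed in advance.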

The optimal damping parameter is obtained by differentiating the above equation with respect
to the damping factor and rearranging it to the form

(JᵀJ + µI) y = Jᵀ(Jx_µ − d) = Jᵀε    (2.6)

where x_µ indicates that x depends on µ.

Figure 31. Inversion results of synthetic apparent resistivity ρ_a and chargeability η_a
sounding data over a three-layered earth model. Bullets represent the synthetic
data and the solid line is the best-fit curve after inversion. The true model is inset.
(source: Roy I. G., An efficient non-linear least-square 1-D inversion scheme for
resistivity and IP sounding data, Geophys. Prosp., 1999, vol. 47, pp. 527-550)

The paper has demonstrated that a dramatic improvement in robustness and efficiency can be
obtained by adaptively choosing the damping parameter which depends on the noise level of
the data. The adaptive damping non-linear least-square inversion (NLSI) scheme presented
has been applied successfully in inverting IP/resistivity sounding data. The novelty of such an
approach lies in the simplicity
of the algorithmic structure, although, as reported earlier, such a technique does not
demonstrate the required robustness in estimating IP parameters for noisy data. However, it
has been shown here, with synthetic and field examples, that the presented scheme works well
with moderately noisy data, even for estimates far removed from the true model. The scheme
is efficient and converges rapidly to the minimum possible residual data error. It is also
demonstrated here that for synthetic noise-free data, the required damping at the final stage is
either very small or zero. The objective of the paper presented is to seek an efficient iterative
scheme with adaptive damping, leading to a robust inversion of IP/resistivity data.

An algebraic formulation of the inverse problem is presented in the paper of Vasco D.W.,
2000. According to the author, many geophysical inverse problems derive from governing
partial differential equations with unknown coefficients. Alternatively, inverse problems often
arise from integral equations associated with a Green’s function solution to a governing
differential equation. In their discrete form such equations reduce to systems of polynomial
equations, known as algebraic equations (Vasco D. W., 2000). Such equations are associated
with many deep results in mathematics. The proposed approach is illustrated by synthetic
magnetotelluric values generated by conductivity variations within the layer (Figure 32).

Figure 32. 2-D conductivity structure used to generate synthetic field components (source:
Vasco D. W., 2000, An algebraic formulation of geophysical inverse problems,
Geophys. J. Int., 142, pp. 970-990).

The conductivity variations in the layer are estimated using a generalized inverse scheme
(Figure 33). The estimates are quite close to the conductivity values used to generate the
synthetic field values. However, they contain some scatter due to the added noise. A Born
approach was also adopted for the same data. It is emphasized in the paper that, in their
discrete form, two large classes of non-linear inverse problems reduce to polynomial or
algebraic equations. These two classes encompass the most commonly encountered
differential equations with unknown coefficients and many associated integral equations and
functionals. Even non-linear differential equations, in which the terms are products of field
variables and their derivatives, are polynomial upon discretization.

The algebraic approach is more computational than methods used to solve the linear inverse
problems. However, the applications indicate that the calculations are robust in the presence
of both discretization and observational errors.

Figure 33. (a) Solution obtained by the proposed method. (b) Solution obtained by Born
approximation. (source: Vasco D. W., 2000, An algebraic formulation of
geophysical inverse problems, Geophys. J. Int., 142, pp. 970 – 990)

Moreover, concepts and methods from computational algebra and algebraic geometry may be
used to address questions of the existence and uniqueness of the problem being considered.

Cross-well electrical measurement, as known in the oil industry, is a method for determining
the electrical conductivity distribution between boreholes from electrostatic field
measurements in the boreholes (Abubakar A and P M van den Berg, 2000). A reconstruction
of the conductivity distribution of a 3-D domain is based on measurements of the secondary
electrical potential field, which is represented in terms of an integral equation for the vector
electrical field. This integral equation is taken as the starting point for developing a non-linear
inversion method, the so-called contrast source inversion (CSI) method. The CSI method
considers the inverse scattering problem as an inverse source problem in which the unknown
contrast source (the product of the total electrical field and the conductivity contrast) in the
object domain is reconstructed by minimizing the object and data error using a conjugate-
gradient method.
A theoretical model of the cross-well configuration is shown in Figure 34. An inhomogeneous
domain, D, with conductivity σ(x) in an unbounded homogeneous background medium with
conductivity σ0 is defined. The source is a small spherical electrode emitting a DC current, I,
located in the domain, S. The secondary electrical potential fields, Vs, are measured at the
various electrode locations in domain S.

Figure 34. Object domain D with conductivity σ(x) in the unbounded homogeneous
background of conductivity σ0. (source: Abubakar A and P M van den Berg,
2000, Non-linear 3-D inversion of cross-well electrical measurements,
Geophysical Prospecting, 48, 109–134)

The aim of the study is to determine the conductivity distribution σ (x) inside the domain D
from the secondary electrical potential field Vs measurements made in the domain S. The
governing equations originating from Maxwell’s equations at zero frequency are of the form:

∇V + E = 0 (2.7)

∇ ⋅ (σE ) = q ext (2.8)

where V is the electric potential field, E is the electrical field, σ is the electrical conductivity
and q^ext is the external source. Here an integral form of the potential V and the electrical
field E has been obtained under the assumption that the point-source solution is known
analytically. This point-source solution {V_G, E_G} is also known as the Green's state. The
simplest medium in this category is the unbounded and homogeneous one with the conductivity σ₀

V^p(x) = σ₀⁻¹ I G(x − x^s)    (2.9)

and

E^p(x) = −σ₀⁻¹ I ∇G(x − x^s)    (2.10)

where V^p and E^p are respectively defined as the primary electrical potential field and the
primary electrical field arising from the DC current I injected into the domain by a small spherical
electrode. The primary fields are those arising for domain D with conductivity equal to that of
the background medium (i.e. σ(x) = σ₀). The small spherical electrode is modelled as a point
source, i.e. q^ext = Iδ(x − x^s). Using the spatial Fourier transform, the integral equation for the
scalar electrical potential field is obtained

V(x) = V^p(x) + ∇ · ∫_{x′∈D} G(x − x′) χ(x′) ∇′V(x′) dv    (2.11)

where V is the total electrical potential field in D, and χ(x) is the conductivity contrast given
by

χ(x) = (σ(x) − σ₀) / σ₀    (2.12)

Thus the integral equation for the vector electrical field is then given by

E(x) = E^p(x) + ∇∇ · ∫_{x′∈D} G(x − x′) χ(x′) E(x′) dv    (2.13)

where E is the total electrical field in D. Note that evaluation of either the potential or the
electrical field involves integration over the object domain D only. This is because outside the
object domain D the conductivity contrast χ is zero.
In the electrical logging problem the secondary electrical potential field V^s in the data domain S at
x^R is considered. The secondary electrical potential field is the difference between the total
electrical potential field and the primary electrical potential field as a result of the presence of
the object, V^s = V − V^p:

V^s(x^R) = −∇ · ∫_{x′∈D} G(x^R − x′) χ(x′) ∇′V(x′) dv    (2.14)

The integral equation (2.14) relates the conductivity contrast χ to the secondary electrical
potential field V^s (the measurement data). In forward modelling, the conductivity contrast χ is
known and the secondary electrical potential field V^s can be calculated.

Since the matrix operator consists of spatial convolutions, fast Fourier transform (FFT)
routines have been applied. The problem has been solved with the conjugate-gradient method.
With this so-called conjugate-gradient FFT technique complex 3-D problems can be solved
efficiently. Furthermore, it supplies the basis for the solution to the inverse problem.
However, the matrix describing this linear system of equations is non-symmetrical. Therefore
the adjoint operator has been used to set up the conjugate-gradient scheme.

The data equation contains both the unknown total electrical field and the unknown
conductivity contrast, but they occur as a product which can be considered as a contrast
source that produces the secondary electrical potential field at the measurement points. Hence
there is no unique solution to the problem of inverting the data equation by itself. The CSI
method attempts to overcome this difficulty by recasting the problem as an optimization in
which we seek not only the contrast sources but also the conductivity contrast itself. This is to
minimize a cost functional consisting of two terms, the L2 errors in the data equation and in the
object equation, rewritten in terms of the conductivity contrast and the contrast sources rather
than the electrical fields.
An alternative method of solving this optimization problem iteratively is proposed in which
first the contrast sources are updated in the conjugate-gradient step, weighted so as to
minimize the cost functional, and then the conductivity contrast is updated to minimize the
error in the object equation using the updated sources. This latter minimization can be carried
out analytically, which allows easy implementation of the positivity constraint for the
conductivity.
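The alternating update at the heart of CSI can be illustrated on a toy real-valued problem. The operators G (data) and K ("scattering") and the fields below are invented stand-ins, not the integral operators of Abubakar and van den Berg, and the error-weighting of the real method is replaced by unit weights; the sketch only mirrors the structure: a line-search update of the contrast sources, then a closed-form, positivity-constrained contrast update.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 16, 8

# Toy stand-ins for the physics: E = E_p + K w, with contrast source w = chi*E.
E_p = np.ones(n)
idx = np.arange(n)
K = 0.1 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)  # toy operator
G = rng.normal(size=(m, n))                                   # toy data operator

chi_true = np.zeros(n)
chi_true[6:10] = 0.5
w_true = np.linalg.solve(np.eye(n) - chi_true[:, None] * K, chi_true * E_p)
d = G @ w_true                                                # synthetic data

def cost(w, chi):
    """Unweighted analogue of the CSI functional: data error + object error."""
    r_obj = chi * (E_p + K @ w) - w
    return np.sum((d - G @ w) ** 2) + np.sum(r_obj ** 2)

w, chi = np.zeros(n), np.zeros(n)
costs = [cost(w, chi)]
for _ in range(300):
    # (1) contrast-source update: steepest descent with exact line search.
    B = chi[:, None] * K - np.eye(n)
    g = -2 * G.T @ (d - G @ w) + 2 * B.T @ (chi * E_p + B @ w)
    denom = 2 * (np.sum((G @ g) ** 2) + np.sum((B @ g) ** 2)) + 1e-30
    w = w - (np.sum(g * g) / denom) * g
    # (2) contrast update: per-cell least squares, clipped for positivity.
    E = E_p + K @ w
    chi = np.where(np.abs(E) > 1e-12, w / E, 0.0)
    chi = np.maximum(chi, 0.0)
    costs.append(cost(w, chi))

assert costs[-1] < 0.5 * costs[0]   # the functional has decreased
assert np.all(chi >= 0.0)           # positivity constraint is honoured
```

Note that no forward problem is solved inside the loop, which is the computational advantage the authors emphasize.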

Figure 35. Source–receiver set-up with 42 sources and receivers located in two boreholes.
(source: Abubakar A and P M van den Berg, 2000, Non-linear 3-D inversion of
cross-well electrical measurements, Geophysical Prospecting, 48, 109–134)

A numerical experiment based on synthetic data is performed (Figures 35 and 36).

Figure 36. Conductivity distribution (σ) in the (x1,x2)-plane for different x3-levels of a 3-D
homogeneous anomaly. (a) The original profile; (b) the reconstructions after 512
iterations (source: Abubakar A and P M van den Berg, 2000, Non-linear 3-D
inversion of cross-well electrical measurements, Geophysical Prospecting, 48,
109–134)

It is demonstrated that the inversion algorithm developed can solve a 3-D electrode
logging problem over a wide range of conductivity contrasts using moderate computer power.
The main advantage of the non-linear inversion method presented is reduction of computation
as a result of the fact that a full forward problem does not have to be solved in each iteration.
Moreover, the method does not need any regularization technique. The only a priori
information used is the natural assumption that the conductivity is a positive quantity. The
authors claim that the algorithm presented is capable of reconstructing the unknown
conductivity contrast up to an acceptable level of accuracy. Furthermore, the numerical tests
indicate that this inversion algorithm using synthetic data with 20% noise still gives
reasonably good reconstruction results. However, adequate configuration of the wells is
crucial for correct reconstructions.

Field and “noisy” synthetic measurements of electrical field components can be inverted into
3-D resistivities by smoothness-constrained inversion, as has been done by Jackson et al.
(2001). The values of the electrical field can incorporate changes in the polarity of the
measured potential differences seen when 2-D electrode arrays are used with heterogeneous
“geology” without utilizing negative apparent resistivities or singular geometrical factors. Use
of both the X and Y components of the electrical field as measurements resulted in faster
convergence of the smoothness-constrained inversion when compared with the use of one
component alone. Geological structure and resistivity were reconstructed as well as, or better
than, comparable published examples based on traditional measurement types.

Figure 37. (a) Electrode configuration used in the study. (b) Top view of electrode array
placement and conductive body localization (source: Jackson P. D., S. J. Earl and
G. J. Reece, 2001, 3-D resistivity inversion using 2-D measurements of the
electrical field, Geophysical Prospecting, 49, 26-39).

A 2-D electrode grid (20×10) incorporating 12 current-source electrodes was used for both the
practical and numerical experiments; this resulted in 366 measurements being made for each
current-electrode configuration. Consequently, when using this array for practical field
surveys, 366 measurements could be acquired simultaneously, making the upper limit on the
speed of acquisition an order of magnitude faster than a comparable conventional pole-dipole
survey. Other practical advantages accrue from the closely spaced potential dipoles being
insensitive to common-mode noise (telluric, for instance) and only 7% of the electrodes (those
used as current sources) being susceptible to recently reported electrode charge-up effects. A
dual-grid approach has been adopted to improve resolution without incurring significant time
penalties. A coarse grid is used to calculate the boundary conditions for a smaller, finer grid
on which the potentials are calculated. The current electrodes C1 and C2 were situated at
X = −70 m and X = +70 m, respectively. The apparent resistivities were calculated for 5 m
dipoles situated on the straight line intersecting C1 and C2. Agreement of 2-3% has been achieved
between the analytical and numerical values for the fine grid, whereas larger errors are
evident near the extremities of the coarse grid. The fine grid is three times smaller than the
coarse grid in each of the three dimensions, resulting in 27 times more nodes per unit volume
than the coarse grid.
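For reference, the potential and the electrical field components measured in such a survey can be written down analytically for a homogeneous half-space. The sketch below evaluates the standard surface-pole-pair expression V = ρI/(2π)(1/r₁ − 1/r₂) with the C1/C2 positions quoted above; the resistivity, current and observation point are illustrative.

```python
import numpy as np

rho, I = 100.0, 1.0                                      # ohm-m, A (illustrative)
c1, c2 = np.array([-70.0, 0.0]), np.array([70.0, 0.0])   # current electrodes (m)

def potential(p):
    """Surface potential of a +I/-I electrode pair on a homogeneous half-space."""
    r1 = np.linalg.norm(p - c1)
    r2 = np.linalg.norm(p - c2)
    return rho * I / (2.0 * np.pi) * (1.0 / r1 - 1.0 / r2)

def e_field(p, h=0.01):
    """Ex, Ey (E = -grad V) by central finite differences of the potential."""
    ex = (potential(p - [h, 0.0]) - potential(p + [h, 0.0])) / (2 * h)
    ey = (potential(p - [0.0, h]) - potential(p + [0.0, h])) / (2 * h)
    return np.array([ex, ey])

# On the symmetry axis between C1 and C2 the field is purely along x.
e_mid = e_field(np.array([0.0, 5.0]))
assert abs(e_mid[1]) < 1e-9

# Analytic check at that point: Ex = rho*I*70/(pi*r^3), with r the electrode distance.
r = np.hypot(70.0, 5.0)
assert np.isclose(e_mid[0], rho * I * 70.0 / (np.pi * r ** 3), rtol=1e-4)
```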

Figure 38. 2-D measurements of Ex and Ey showing different responses to a conductive 3-D
body compared with the homogeneous case (source: Jackson P D, S J Earl and
G J Reece, 2001, 3-D resistivity inversion using 2-D measurements of the
electrical field, Geophysical Prospecting, 49, 26-39)

The method of choice today for inverting electrical resistivity survey data, in common with
other forms of geophysical tomography, is to apply a smoothness constraint to a least-square
minimization. This approach has been extended to both 2-D and 3-D inversions of resistivity
survey data. The reconstruction scheme reduces to the well-known equation

(AᵀA + λRᵀR) x = Aᵀb    (2.15)

where R is a matrix which defines the “roughness” of the model; λ is the Lagrange multiplier
controlling the balance between misfit and roughness; x is the unknown resistivity vector; and
b is a vector of weighted data mismatches (r(i) − m(i))/σ(i). The roughness of x is defined as
hᵀh, where h = Rx and A_ij = J_ij/σ(i); σ(i) is the standard deviation of the ith “field”
measurement, and J_ij is the element of the Jacobian matrix, i.e. the partial derivative of the
ith simulated measurement m(i) with respect to the jth resistivity parameter. Numerical
experiments on synthetic data have been performed (Figures 39 and 40).
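The structure of (2.15) can be sketched with an invented sensitivity matrix A and a first-difference roughness matrix R; increasing λ demonstrably trades data misfit for smoothness. The dimensions and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 30                                   # model cells (e.g. log-resistivity unknowns)
m_true = np.zeros(n)
m_true[10:20] = 1.0                      # a blocky anomaly

A = rng.normal(size=(20, n))             # stand-in for the weighted Jacobian
b = A @ m_true + 0.05 * rng.normal(size=20)

# First-difference roughness matrix R: h = R x penalizes jumps between cells.
R = np.diff(np.eye(n), axis=0)

def smooth_inverse(lmbda):
    """Solve (A^T A + lambda R^T R) x = A^T b, cf. (2.15)."""
    return np.linalg.solve(A.T @ A + lmbda * (R.T @ R), A.T @ b)

x_rough = smooth_inverse(1e-8)
x_smooth = smooth_inverse(10.0)

roughness = lambda x: float((R @ x) @ (R @ x))
assert roughness(x_smooth) < roughness(x_rough)                       # smoother model
assert np.linalg.norm(A @ x_rough - b) <= np.linalg.norm(A @ x_smooth - b)  # better fit
```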

Figure 39. A 3-D resistivity model: orthoparallelepiped anomaly set in a background of 100
Ωm (source: Jackson P D, S J Earl and G J Reece, 2001, 3-D resistivity inversion
using 2-D measurements of the electrical field, Geophysical Prospecting,
49, 26-39).

Figure 40. Results of the inversion (source: Jackson P D, S J Earl and G J Reece, 2001, 3-D
resistivity inversion using 2-D measurements of the electrical field, Geophysical
Prospecting, 49, 26-39).

Field measurements were also performed and are presented in the paper. The inversions
reported here used electrical field components directly as measurements and the logarithms of
resistivity as unknown parameters. For smaller numbers of unknowns (<112), the inversions
are more accurate than has generally been reported in the literature (e.g. Olayinka and
Yaramanci 2000). For larger numbers of unknowns (>1000), the spatial smearing seen in our
results is similar to comparable published examples, while at shallower depths it is
significantly less. At greater depths, for larger numbers of unknowns, the solutions are too
smooth and the use of alternative approaches is suggested. The use of a 2-D electrode array
enabled the X and Y components of the electrical field (Ex and Ey) to be utilized
independently, while using both components together resulted in faster convergence. The field
case study presented in the paper demonstrates the necessity of using 3-D tomographic
methods in typical geological settings that can rarely be considered to be 2-D in nature. The
inverted resistivities are consistent with geological mapping and trenching across a fault in a
carboniferous sand-shale sequence that had been reactivated by mining activity. It is
concluded that 3-D resistivity tomography using electrical field measurements is suitable for
assessing typically heterogeneous geological settings. The use of two orthogonal components
of the electrical field provides an additional data set. Compared with the one component that
is typically measured, this data set is sensitive to the changes in sub-surface resistivity for
which inversion is sought.

2.2.2. Quadratic programming

This approach for a layered model was developed and presented by Lazaro-Mancilla O. and
Gomez-Trevino E., 2000. Their paper presents a method for inverting ground-penetrating
radargrams in terms of 1-D profiles. Consideration is given to a special type of linearization
of the damped E-field wave equation to solve the inverse problem. The numerical algorithm
for the inversion is iterative and requires the solution of several forward problems, which are
evaluated using the matrix propagation approach. Analytical expressions for the derivatives
with respect to physical properties are obtained using the self-adjoint Green’s function
method.

Figure 41. A layered earth model whose three electromagnetic properties vary from layer to
layer. The plane wave propagates downward in the positive z-direction (source:
Lazaro-Mancilla O and E Gomez-Trevino, Ground-penetrating radar inversion in
1-D: an approach for the estimation of electrical conductivity, dielectric
permittivity and magnetic permeability).

Three physical properties of materials are considered, namely dielectric permittivity,
magnetic permeability and electrical conductivity. The inverse problem is solved by
minimizing the quadratic norm of the residuals using quadratic programming optimization. In
the iterative process the Levenberg–Marquardt method is used to speed up convergence. The
ground is modelled using thin horizontal layers to approximate to general variations in the
physical properties. A plane wave at normal incidence upon an n-layer earth model as
illustrated in Figure 41 is considered. The three electromagnetic properties vary from layer to
layer; each layer is linear, homogeneous and isotropic. A dependence of the form e^{jωt} is
assumed for the fields, where t is time and ω is angular frequency. It is further assumed that
the physical properties do not depend on frequency. The governing differential equations for
the electrical Ex(z) and magnetic Hy(z) fields propagating in the z-direction for the ith layer are

d²Ex(z)/dz² − γ_i² Ex(z) = 0    (2.16)

d²Hy(z)/dz² − γ_i² Hy(z) = 0    (2.17)

where

γ_i = (jµ_iσ_iω − µ_iε_iω²)^{1/2}

is the propagation constant of the layer. The quantities µ_i, ε_i and σ_i represent magnetic
permeability, dielectric permittivity and electrical conductivity. The solution is obtained in the
standard form of a boundary value problem with 2n boundary conditions and the same
number of unknowns. The fields in the successive layer are found by applying continuity of
the fields and using standard propagation matrices. The radargram is computed by applying
the inverse Fourier transform to the product of the electrical field and Ricker’s pulse
spectrum. The inverse problem is based on integral equations for the electrical and magnetic
fields

E = ∫_{V′} G_{E,µ} µ(r′) d³r′ − ∫_{V′} G_{E,σ} σ(r′) d³r′ − ∫_{V′} G_{E,ε} ε(r′) d³r′    (2.18)

where G_{E,µ}, G_{E,ε} and G_{E,σ} represent the functional or Frechet derivatives of E with
respect to µ, ε and σ, respectively. Finally, the electrical field for the layered structure can be
described as

E = Σ_{i=1}^{N} [(∂E/∂µ_i) µ_i − (∂E/∂ε_i) ε_i − (∂E/∂σ_i) σ_i]    (2.19)
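A minimal sketch of the layered-medium forward step is given below. It uses the input-impedance recursion, which for the normal-incidence surface response is algebraically equivalent to the 2×2 propagation matrices used in the paper; material values are illustrative and the Ricker-pulse synthesis of a full radargram is omitted.

```python
import numpy as np

MU0 = 4e-7 * np.pi
EPS0 = 8.854e-12

def gamma_z(omega, sigma, eps, mu):
    """Propagation constant gamma = (j*mu*sigma*omega - mu*eps*omega^2)^(1/2),
    cf. (2.16)-(2.17), and intrinsic impedance Z = j*omega*mu/gamma."""
    g = np.sqrt(1j * mu * sigma * omega - mu * eps * omega ** 2 + 0j)
    return g, 1j * omega * mu / g

def surface_reflection(omega, incident, layers, basement):
    """Normal-incidence reflection coefficient of a layer stack.
    incident/basement: (sigma, eps, mu); layers: list of (sigma, eps, mu, h)."""
    _, z_in = gamma_z(omega, *basement)
    for sigma, eps, mu, h in reversed(layers):
        g, z = gamma_z(omega, sigma, eps, mu)
        t = np.tanh(g * h)
        z_in = z * (z_in + z * t) / (z + z_in * t)   # impedance recursion
    _, z0 = gamma_z(omega, *incident)
    return (z_in - z0) / (z_in + z0)

omega = 2 * np.pi * 100e6                      # 100 MHz (typical GPR band)
air = (0.0, EPS0, MU0)
soil = (0.0, 9 * EPS0, MU0)                    # lossless, relative permittivity 9

# No contrast anywhere -> no reflection.
r0 = surface_reflection(omega, air, [(0.0, EPS0, MU0, 0.5)], air)
assert abs(r0) < 1e-9

# Air over a dielectric half-space: r = (1 - 3)/(1 + 3) = -0.5.
r1 = surface_reflection(omega, air, [], soil)
assert np.isclose(r1.real, -0.5, atol=1e-3) and abs(r1.imag) < 1e-6
```

Sweeping ω and applying an inverse Fourier transform with a source-pulse spectrum, as described in the text, would turn such responses into a synthetic radargram trace.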

Variations of each property are considered separately and the corresponding radargrams are
inverted in terms of the property used for their generation. The purpose is to illustrate the
applicability of the present approach to the quantitative interpretation of radargrams, and to
show that it represents a viable alternative to existing inversion methods. The experiments
demonstrate that there is an intrinsic ambiguity in the interpretation of radargrams in the sense
that variations of one property can be genuinely mistaken for variations in the other two. It is
of particular interest that a reflection produced by a discontinuity in electrical permittivity can
be reproduced by a double discontinuity in electrical conductivity.

It follows from the paper that it is impossible to recover the three electromagnetic properties
jointly from GPR data. The authors claim, however, that before drawing this conclusion, one
must take into consideration that the approach to the problem is perhaps the simplest possible.
The possibility of differentiation must come from a further complication of the physical
model, for instance by including frequency-dependent properties on the one hand or by
considering vertical electrical fields on the other. This may change the picture drastically. As
they stand, the present results should be seen as an ideal limit case that directly points to the
impossibility of differentiation.

2.3. The probabilistic approach

The Bayesian approach focuses on obtaining a probability distribution (the posterior
distribution) by assimilating three kinds of information: physical theories (data modelling),
observations (data measurements) and prior information on models.

The inverse problem used in Malinverno’s paper (Malinverno A., 2000), for example, is
illustrated in Figure 42. The densities of the two quarter-spaces separated by a vertical
discontinuity are functions of the depth z only and the density contrast is ∆ρ (z ) .

Figure 42. Measurements of the horizontal gradient of gravity (top) as a function of distance
from a vertical interface between two quarter-spaces where density is a function
of depth only (bottom left). The horizontal gravity gradient depends on the density
contrast between the two quarter-spaces (bottom right). To simulate measurement
noise, Gaussian white noise with a standard deviation of 0.1×10⁻⁹ s⁻² has been
added to the gravity gradient data. (source: Malinverno A, 2000, A Bayesian
criterion for simplicity in inverse problem parameterisations, Geophys. J. Int.,
140, pp. 267-285)

The horizontal gravity gradient ∆g′(x) measured at the surface (z = 0) at a distance x from the
vertical discontinuity is related to ∆ρ(z) by

∆g′(x) = ∫₀^∞ [2γz / (z² + x²)] ∆ρ(z) dz    (2.20)

where γ is the gravitational constant. The gravitational edge effect inverse problem is to
infer ∆ρ(z) from a finite set of inaccurate measurements of ∆g′(x). These N measurements
can be listed in a data vector

d = [∆g′₁, …, ∆g′_N]ᵀ    (2.21)

where ∆g i' = ∆g ' (xi ) and the symbol T denotes the transpose. The density contrast *o(z) can
be represented by a stack of M layers of thickness *z, which can be as small as desired to
approximate a continuous function. The density contrasts in these layers form a model vector

m = [∆ρ1, ...., ∆ρM]ᵀ (2.22)

We can then write the forward problem as a matrix equation,

d = Gm + e (2.23)

where each element of the N×M matrix G is the contribution of the jth layer to the ith
measurement

Gij = 2γ ∫(from (j−1)∆z to j∆z) [z/(z² + xi²)] dz = γ log[(j²∆z² + xi²)/((j−1)²∆z² + xi²)] (2.24)

and the vector e contains measurement errors.
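As a concrete sketch, the matrix G of eq. (2.24) and the noisy data of eq. (2.23) can be assembled in a few lines of Python/NumPy. All numerical values here (offsets, layer count, density contrast, noise level) are illustrative assumptions, not taken from Malinverno's paper:

```python
import numpy as np

gamma = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]

def edge_effect_G(x, M, dz):
    """Forward matrix of the gravitational edge effect problem.
    G[i, j] is the closed-form layer integral of eq. (2.24): the
    contribution of layer j (thickness dz) to the gradient at offset x[i]."""
    j = np.arange(1, M + 1)
    top = ((j - 1) * dz)[None, :] ** 2      # squared upper depth of each layer
    bot = (j * dz)[None, :] ** 2            # squared lower depth of each layer
    xi2 = (x ** 2)[:, None]
    return gamma * np.log((bot + xi2) / (top + xi2))

# Illustrative geometry and model (assumed values, not from the paper)
x = np.linspace(100.0, 2000.0, 20)          # measurement offsets [m]
M, dz = 50, 40.0                            # 50 layers, each 40 m thick
G = edge_effect_G(x, M, dz)

m = np.full(M, 100.0)                       # density contrast [kg m^-3]
rng = np.random.default_rng(0)
e = rng.normal(0.0, 0.1e-9, size=x.size)    # noise, sigma = 0.1e-9 s^-2
d = G @ m + e                               # eq. (2.23): d = Gm + e
```

The closed-form column entries can be checked against direct numerical integration of the integrand 2γz/(z² + x²) over a single layer.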

Rather than inverting directly for the M>>N elements of m that approximate a continuous
∆ρ(z), a common approach is to estimate a relatively small number H<<M of the parameters
that make up a parameter vector

θ = [θ1, ...., θH]ᵀ (2.25)

A linear case where m is related to θ by a matrix equation is considered in the paper

m = Aθ (2.26)

Thus the relation describing the forward problem can be written as

d = GAθ + e (2.27)

To illustrate the Bayesian parameter estimation (see appendix D) and model selection, an
example known from the literature is considered and is shown in Figure 43.
This paper applied Bayesian model selection to the ranking of different parametrizations of
geophysical inverse problems. The model selection criterion uses Bayes' rule to compute the
probabilities of different parametrization hypotheses a posteriori, that is, given the
information contained in the geophysical measurements. This probability is computed from
the evidence, which is the integral of the product of the prior pdf and the likelihood function. As shown by
application to the gravitational edge effect inverse problem, the Bayesian criterion prefers
parametrizations that better fit the data, contain fewer free parameters and result in a posterior
pdf of the parameters that is more similar to what is expected a priori. Beyond the simple
linear example presented here, the Bayesian criterion can be applied when the magnitude of
the measurement errors is not known a priori, and to the general case where the relationship
between the model parameters and measurements is not linear.

Figure 43. Posterior mean (thick black line) in the spectral parametrization when k = 3, 5 and 7
orthogonal functions are used. The thin black lines are the ±1 posterior standard
deviation bounds, and the grey line is the actual density contrast profile. (source:
Malinverno A., 2000, A Bayesian criterion for simplicity in inverse problem
parameterisations, Geophys. J. Int., 140, pp. 267-285).

A Bayesian formulation for the discrete geophysical inverse problem that can significantly
reduce the cost of the computations is considered in the paper by Moraes and Scales (Moraes
F S and J. A. Scales, 2000). The formulation proposed was developed on the basis of a
working hypothesis that the local (sub-surface) prior information on model parameters
supersedes any additional information from other parts of the model. An approximation which
permits a reduction in dimensionality is proposed on the basis of this hypothesis.
The posterior can be written as

p(m|d, J) = s(m|J) l(m|d) (2.28)

where s and l are, respectively, the prior and the likelihood function. All functions in the
above equation have the same dimensionality as the parameter vector, which is usually high in
most geophysical applications. An alternative Bayesian formulation avoids solving the full
multi-variate problem. In particular, the necessity of having the multi-dimensional prior
distribution is obviated. Instead, all prior information is introduced to marginal (local) prior
distributions for single parameters. To apply this approach the first step is to divide the
parameter vector into parts, m1 and m2. Next, the parameter m2 is eliminated from the
problem in order to have a solution expressed only in terms of m1. This is done by treating m2
as nuisance parameters.
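For Gaussian posteriors the elimination of nuisance parameters can be checked directly: the marginal obtained by integrating the joint pdf over m2 coincides with the marginal read off the joint mean and covariance. The following sketch, with assumed illustrative numbers (not values from Moraes and Scales), verifies this numerically:

```python
import numpy as np

def trap(y, x, axis=-1):
    """Trapezoidal integration along an axis (small local helper)."""
    y0 = np.take(y, range(0, y.shape[axis] - 1), axis=axis)
    y1 = np.take(y, range(1, y.shape[axis]), axis=axis)
    return np.sum((y0 + y1) / 2.0 * np.diff(x), axis=axis)

# Joint Gaussian posterior over (m1, m2); m2 is the nuisance parameter.
mu = np.array([1.0, -0.5])
Sigma = np.array([[0.30, 0.12],
                  [0.12, 0.20]])

# Analytic marginal for m1: simply drop the m2 row and column.
mu1, var1 = mu[0], Sigma[0, 0]

# Numerical check: integrate the joint pdf over the nuisance parameter m2.
inv = np.linalg.inv(Sigma)
norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(Sigma)))
m1g = np.linspace(-2.0, 4.0, 401)
m2g = np.linspace(-4.0, 3.0, 701)
M1, M2 = np.meshgrid(m1g, m2g, indexing="ij")
dev = np.stack([M1 - mu[0], M2 - mu[1]], axis=-1)
quad = np.einsum("...i,ij,...j->...", dev, inv, dev)
joint = norm * np.exp(-0.5 * quad)
marg = trap(joint, m2g, axis=1)     # p(m1|d) = integral of p(m1, m2|d) dm2
Z = trap(marg, m1g)
mean_num = trap(m1g * marg, m1g) / Z
var_num = trap((m1g - mean_num) ** 2 * marg, m1g) / Z
```

In the non-Gaussian case the same integration step is what the local Bayesian formulation approximates, which is where the reduction in dimensionality originates.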

Figure 44. Schematic representation of the local Bayesian inversion. At the top level is the
original multi-dimensional Bayesian problem involving functions of a full vector
of parameters m. At the intermediate level the prior is approximated by the
product of three functions t, 1/q and f, where f and q are normal distributions and t
is the marginal prior distribution for parameter m1. The parameter can then be
integrated out of the problem for a proper choice of likelihood function l, leaving
a 1-D version of the Bayesian theorem. (source: Moraes F S and J. A. Scales,
2000, Local Bayesian inversion: theoretical developments, Geoph. J. Int., vol.
141, pp. 713-723)

Figure 45. Simple earth model consisting of six rectangular cells, numbered 1-6, with centre
co-ordinates (x, z), width d and height h, as indicated in the figure. The problem is
to estimate the density contrast in each cell from the gravity field and prior
information. The grey box above the cells is used to simulate modelling errors.
(source: Moraes F S and J. A. Scales, 2000, Local Bayesian inversion: theoretical
developments, Geoph. J. Int., vol. 141, pp. 713-723).

A nuisance parameter is a term usually employed in Bayesian inference to denote a parameter
which one is obligated to infer but which is of no immediate interest. Eliminating parameters
m2 involves finding a marginal distribution for m1 from the posterior. Analytical examples are
also presented in the paper. For the prior information, the true correlation and two well logs
measuring density contrasts through the cells are used. The well logs are built from pseudo-random
numbers from six different probability density distributions, so that each cell density contrast
has its own underlying process.

Figure 46. Inversion results depicted by the posterior marginal for each parameter. The 95 %
inter-quantile regions are represented by the shaded areas and the true values for
the parameter are given by solid circles. This example uses 10 % of the maximum
gravity value as the standard deviation for the noise. The density axis corresponds
to values for density contrast of the corresponding cell indicated by numbers 1-6.
(source: Moraes F. S. and J. A. Scales, 2000, Local Bayesian inversion:
theoretical developments, Geoph. J. Int., vol. 141, pp. 713-723).

The most important contribution of this research is that it offers an alternative strategy for
treating complex multi-dimensional problems by reducing the dimensionality of the problem
before the final solution is found. When this is done, the main difficulties in Bayesian
inference are automatically addressed. Appropriate methods are proposed for the construction
of prior distributions. These methods allow sub-surface data to be processed into marginal
distributions.

The basic properties of Bayesian estimation have been examined by Malinverno (Malinverno
A., 2002). The purpose of this paper is to describe an extension of the commonly used
Bayesian parameter estimation approach to account for the posterior probabilities of different
parametrizations of the earth model. Specifically, a generic layered medium is used, where the
number of layers, the depths to the interfaces between layers (or layer thicknesses) and the
layer properties are free parameters. If the posterior probabilities of different parametrizations
(that is, different numbers of layers) are considered, there is no need for regularization beyond
what is dictated by scant prior information. This is because the posterior probability of model
parametrizations obeys a principle of parsimony or simplicity. Of earth models that fit the
data equally well, the models that have fewer degrees of freedom (fewer layers) have higher
posterior probabilities. The net effect is that the data determine how complex the model
parametrization ought to be. In addition, by broadening the space of the parametrizations
possible a priori, the determination of posterior uncertainty does not depend on a particular
choice of parametrization (such as a fixed number of layers) and gives a more comprehensive
quantification of the non-uniqueness of the solution.

To implement this approach in practice, a Markov chain Monte Carlo algorithm is applied to
the non-linear problem of inverting DC resistivity sounding data to infer the characteristics of

a 1-D earth model. The earth model is parameterized as a layered medium, where the number
of layers and their resistivities and thicknesses are poorly known a priori.
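The trans-dimensional sampler used in the paper is too involved for a short listing, but the Metropolis rule at its core can be sketched on a deliberately simplified toy problem: a homogeneous half-space whose log-resistivity is sampled from noisy log-apparent resistivities. All numbers here are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the forward model g(m): for a homogeneous half-space the
# apparent resistivity equals the true resistivity at every electrode spacing.
true_log_rho = np.log10(50.0)                  # a 50 ohm-m half-space
N = 25
d = true_log_rho + rng.normal(0.0, 0.05, N)    # noisy log-apparent resistivities

def log_posterior(log_rho, sigma=0.05):
    """Uniform prior on log10(rho) in [-1, 5] (the prior information I),
    Gaussian likelihood with known noise standard deviation."""
    if not (-1.0 <= log_rho <= 5.0):
        return -np.inf
    return -0.5 * np.sum((d - log_rho) ** 2) / sigma ** 2

# Metropolis sampler: propose, then accept with probability min(1, ratio)
samples = []
m, lp = 2.0, log_posterior(2.0)
for _ in range(20000):
    prop = m + rng.normal(0.0, 0.02)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis acceptance rule
        m, lp = prop, lp_prop
    samples.append(m)
post = np.array(samples[5000:])                # discard burn-in
```

The histogram of `post` approximates the posterior pdf; the paper's algorithm adds birth/death moves that also change the number of layers.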

Figure 47. A generic layered medium with k layers. The layer interfaces are at depths between
a minimum zmin and a maximum zmax, and no layer can be thinner than hmin.
(source: Malinverno A, 2002, Parsimonious Bayesian Markov chain Monte Carlo
inversion in a nonlinear geophysical problem, Geophys. J. Int., 151, pp. 675-688)

The parametrization uses the logarithm of depth because the resolving power of the resistivity
sounding data decreases with increasing depth; the logarithm of resistivity covers the broad range of
resistivities encountered in nature, while avoiding numerical difficulties. In short, the earth
model can be written as a vector

m = (k , z , ρ ) (2.29)

The illustrative data used are log-apparent resistivities measured at the surface using a
Schlumberger array. For different spacings between the electrodes, the current lines sample
different depth ranges in the sub-surface, and the measured apparent resistivities (the
resistivities that would be measured if the medium were homogeneous) vary accordingly.
Solving the forward problem requires specification of a forward modelling function g(m) that
returns a vector of apparent resistivity data predicted by the generic layered medium in m.
The filter method described is used with the 11-point filter for g(m). In Bayesian inference the
posterior pdf of m measures how well a generic layered medium agrees with prior information
and data. It is helpful to write this posterior using the definition of a conditional pdf as

p(m|d, I) = p(k|d, I) p(z, ρ|k, d, I) (2.30)

where d=(d1, d2, . . . , dN) are the measured log-apparent resistivities and I denotes prior
information. In Bayesian inference all probabilities are conditional at least on I, which
represents prior knowledge of the parametrization of the earth model, realistic values of
factors such as the parameters, the geometry of the sub-surface and the forward model.

Figure 48. Three-layer earth model (a) and data (b) used in the synthetic example. (source:
Malinverno A, 2002, Parsimonious Bayesian Markov chain Monte Carlo
inversion in a nonlinear geophysical problem, Geophys. J. Int., 151, pp. 675-688)

Figure 49. Histogram of the number of layers sampled by the MCMC algorithm for the
synthetic data in Figure 48. This histogram approximates the posterior pdf of the
number of layers; note that the prior pdf is uniform. (source: Malinverno A, 2002,
Parsimonious Bayesian Markov chain Monte Carlo inversion in a nonlinear
geophysical problem, Geophys. J. Int., 151, pp. 675-688)

The formulation presented in the paper addresses two basic problems in treating the
non-uniqueness of inverse problems: the difficulty of setting a prior distribution when little is
known a priori and the dependence of the posterior uncertainty on the parametrization of a
particular earth model. While Bayesian inference has many desirable qualities, setting the
prior pdf on the basis of available knowledge is often problematic. Setting the prior
distribution is equivalent to formulating an initial hypothesis that should then be modified by
the data as needed. If little is known a priori, the prior hypothesis should allow for a variety
of possible models and for a broad range of parameters in these models. Generally, however,
this is not feasible. To solve the problem in practice it seems that one must choose a particular
parametrization (for instance by fixing the number of layers) and impose a regularization
factor that goes well beyond prior knowledge.

Figure 50. Image obtained by superimposing the values of resistivity in the layered media
sampled by the MCMC algorithm for the synthetic data in Figure 48. This image
is an estimated display of the posterior marginal pdf of resistivity at different
depths. The dotted white line shows the range of resistivity and thickness of the
middle layer that is consistent with the data of Figure 48. (source: Malinverno A,
2002, Parsimonious Bayesian Markov chain Monte Carlo inversion in a nonlinear
geophysical problem, Geophys. J. Int., 151, pp. 675-688)

The Bayesian approach is also present in a paper by Saccorotti and Del Pezzo (Saccorotti G.
and E. Del Pezzo, 2000). Array techniques are particularly well suited for detecting and
quantifying the complex seismic wave fields associated with volcanic activity such as
volcanic tremor and long-period events. The aim of
the study was to develop a method in which the estimate of the slowness vector is obtained
through a probabilistic approach. In this method, the problem of determining the slowness
vector of a signal is reduced to a search for the maximum likelihood solution in a joint
probability density function of data and model parameters, where data and model parameters
are represented by array-averaged cross correlations as functions of slowness. The Bayesian
formulation is used to map the probability of a signal propagating across the array with a
given slowness vector, allowing for a complete definition of the uncertainties and correlations
of the parameters.

Mauriello and Patella (Mauriello P. and D. Patella, 1999) proposed a probability-based
tomography, although the approach is not Bayesian. Probability tomography is a concept
reflecting the inherently uncertain nature of any geophysical interpretation. The rationale of
the new procedure is based on the fact that a measurable anomalous field representing the
response of a buried feature to physical stimulation can be approximated by a set of partial
anomaly source contributions.

A set of apparent resistivity data ρa(l, n), (l = 1, 2, .., L; n = 1, 2, .., N) is considered. This
can be measured by any electrode device (for instance, pole–pole, pole–dipole or
dipole–dipole) along a straight-line profile located on the free surface of an inhomogeneous,
isotropic resistivity structure. The volume of medium contributing to the measured apparent
resistivity is divided into Q elementary cells of small volume ∆V, each described by a true
resistivity ρq (q = 1, 2, .., Q). Using the standard rules for pseudo-section tracing, the
ρa(l, n) values are assigned to the nodes of a vertical 2-D grid across the profile. At each
node l is the position along the x-axis, defining the profile, and n is the pseudo-depth
along the vertical z-axis, positive downwards (Figure 51).

Figure 51. The dipole–dipole pseudo-section profiling method. (source: Mauriello P. and D.
Patella, 1999, Resistivity anomaly imaging by probability tomography,
Geophysical Prosp., vol. 47, pp. 411-429)

Mathematical manipulation (for example, expansion of ρa(l, n) in a first-order Taylor series
around a reference model) then leads to the relation

ηq = Cq Σ(l=1..L) Σ(n=1..N) ∆ρa(l, n) J(l, n) (2.31)
where

Cq = [ Σ(l=1..L) Σ(n=1..N) [∆ρa(l, n)]² · Σ(l=1..L) Σ(n=1..N) [J(l, n)]² ]^(−1/2) (2.32)

It is shown that ηq satisfies the following condition:

− 1 ≤ ηq ≤ 1 (2.33)

Each value is then heuristically interpreted as the probability that a resistivity anomaly
located in the qth cell, deviating from the reference model, is responsible for the whole set of
measured apparent resistivities within the first order expansion. Positive and negative values

of ηq result from increments and decrements of resistivity in the qth cell with respect to the
reference model. Synthetic examples are included in the paper (Figure 52).
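Equations (2.31)-(2.33) are easy to verify numerically: ηq is a normalized cross-correlation between the anomaly data and the cell kernel, so the Schwarz inequality bounds it by ±1. A minimal sketch with arbitrary (assumed) arrays:

```python
import numpy as np

def eta(delta_rho_a, J):
    """Resistivity anomaly occurrence probability, eqs (2.31)-(2.32).

    delta_rho_a : (L, N) array of apparent-resistivity anomalies.
    J           : (L, N) array of the kernel for one cell q.
    Returns eta_q, which the Schwarz inequality keeps in [-1, 1]."""
    num = np.sum(delta_rho_a * J)
    Cq = 1.0 / np.sqrt(np.sum(delta_rho_a ** 2) * np.sum(J ** 2))
    return Cq * num

# Arbitrary synthetic arrays for the check (assumed, not from the paper)
rng = np.random.default_rng(3)
drho = rng.normal(size=(12, 8))
J = rng.normal(size=(12, 8))
```

When the anomaly pattern matches the kernel exactly, ηq reaches +1; when it is its negative, ηq reaches −1, mirroring the interpretation of increments and decrements of resistivity.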

Figure 52. A synthetic three-prism resistivity model: (a) plan view, (b) cross-sectional view.
(source: Mauriello P. and D. Patella, 1999, Resistivity anomaly imaging by
probability tomography, Geophysical Prosp., vol. 47, pp. 411-429)

The model consists of three prismatic blocks with resistivities of 5, 10 and 500 Ωm buried in
a uniform half-space of resistivity 100 Ωm. Figure 52 shows the plan and cross-section of the
three-prism model.

Figure 53. Tomography images of the resistivity anomaly occurrence probability for the
combined three-prism model. For location and size of the prisms refer to
Figure 52. (source: Mauriello P. and D. Patella, 1999, Resistivity anomaly
imaging by probability tomography, Geophysical Prosp., vol. 47, pp. 411-429)

Field measurements and reconstructions are also provided in the paper. The purpose of
Mauriello and Patella’s study was to provide a simple tool for image reconstruction of the
most probable location of resistivity anomalies underground in the most objective way. The
important aspect of the analysis developed above is that the resistivity signatures are
considered only from a probabilistic viewpoint. This is quite a new concept in geophysics,

which conforms to the inherently uncertain nature of the geophysical interpretation process.
To this end it is worthwhile pointing out that the probability concept introduced here is not the
direct consequence of statistics performed on a set of repeated measurements representing the
different responses of a simulated buried system in the presence of varying sources of error. It
is much more; it is the consequence of the intrinsic non-uniqueness of the geophysical
solution. Thus the statistical basis consists of many models providing, within the accuracy of
measurement, equivalent responses that can in no way be distinguished from one another.

2.4. Simultaneous and joint inversion

The simultaneous inversion is implemented such that the same numerical kernel is used for
inversion in different physical domains and different data types. Gyulai and Ormos (1997 and
1999) developed a simultaneous Series Expansion (SE) inversion of DC sounding curves with
power and periodical basis functions (referred to as 1.5-D inversion). M. Kis (1998) applied
simultaneous and joint SE inversion for the interpretation of DC geo-electrical and seismic
refraction data with examination of the possibility of improvement to the approximate 1-D
forward modelling applied in SE inversion methods and introduction of the integral mean
concept to the SE inversion. Simultaneous and joint SE inversions have also been applied by
other authors (such as Herwanger J.V. et al, 2002) for the interpretation of geo-electrical and
seismic data, although with the use of a different approach. Their approach is also interesting
because they consider anisotropic and inhomogeneous media. Finite elements are used to
discretize the anisotropic Laplace equation governing the forward problem. The inverse
problem is solved using a variant of the popular Marquardt-Levenberg algorithm, with
additional terms for smoothness, structural and anisotropy constraints.

(JᵀWJ + C⁻¹ + νM) ∆m = JᵀW (dobs − dpre(mold)) − C⁻¹mold (2.34)

where ∆m denotes model updates, J is the Jacobian, W is the data covariance matrix, C is the
model covariance matrix and M is the matrix controlling step length. Matrix C contains the
structure and anisotropy penalty.
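A minimal numerical sketch of the update (2.34) can be given for a toy linear problem rather than the finite-element forward model of the paper; here the structure/anisotropy penalty is reduced to a simple Tikhonov term, which is an assumption of this sketch:

```python
import numpy as np

def ml_step(J, W, Cinv, nu, M, d_obs, d_pre, m_old):
    """One model update of the damped (Marquardt-Levenberg) scheme of
    eq. (2.34): solve (J^T W J + C^-1 + nu M) dm =
    J^T W (d_obs - d_pre) - C^-1 m_old for the update dm."""
    A = J.T @ W @ J + Cinv + nu * M
    b = J.T @ W @ (d_obs - d_pre) - Cinv @ m_old
    return np.linalg.solve(A, b)

# Toy linear forward problem d = J m (an assumed stand-in for the
# finite-element forward modelling of Herwanger et al.)
rng = np.random.default_rng(4)
J = rng.normal(size=(30, 5))
m_true = rng.normal(size=5)
d_obs = J @ m_true + rng.normal(0.0, 0.01, 30)

W = np.eye(30)              # data weighting
Cinv = 0.1 * np.eye(5)      # inverse model covariance (regularization)
Mstep = np.eye(5)           # step-length control matrix
m = np.zeros(5)
for _ in range(10):         # iterate the damped Gauss-Newton update
    m = m + ml_step(J, W, Cinv, 1e-3, Mstep, d_obs, J @ m, m)
```

For a linear problem the iteration converges in a few steps to the regularized least-squares solution; in the anisotropic resistivity problem the Jacobian must be recomputed at each iteration.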

In order to solve large-scale problems, parallel computer and domain decomposition
techniques were used. Data from an electrical tomographic study between boreholes at a
hydrological test site were compared with the results of an anisotropic seismic tomography
study carried out at the same location. Both the electrical and the seismic experiments scan a
depth interval of 20–115 metres between two wells spaced at 25 metres. The number of data
is approximately 8,000 for each survey and the sub-surface in the inter-well region is
discretized in elements of approximately 1.5 metres in both x and z directions.

Figure 54. Anisotropic resistivity tomogram. In the left image average resistivity is
displayed, while on the right electrical anisotropy is shown.

Figure 55. Anisotropic velocity tomogram. In the left image average seismic velocity is
shown, while the right image displays seismic anisotropy ε.

According to the authors, a comparison of anisotropic seismic velocity distribution and
electrical conductivity distribution shows an extraordinary correlation between the two
tomograms (Figures 54 and 55). Both methods clearly delineate an anisotropic body of highly
layered and fractured siltstones underlain by an isotropic sandstone body. Zones of fractured
rock and zones of highly layered sedimentary rock both result in electrical and seismic
anisotropy.

Musil et al applied a joint inversion in order to overcome the limits of typical 1-D geo-
electrical inversion (Musil et al, 2003). They found that it often has internal non-uniqueness
and ambiguity problems. This is because source and receiver arrays are usually restricted to
the surface or a small number of shallow boreholes and critical parts of the target media may
be only sparsely sampled, resulting in ambiguities in the tomographic inversions. To
compensate for the limitations of the recorded data, additional constraints are generally
required. An efficient way to overcome internal ambiguities is the use of the joint inversion,
which means the integration of various groups of data records (arising from physically or
geometrically different methods and surveys) into a single inversion algorithm (Musil et al,
2003). Originally, the joint inversion algorithm was introduced by Vozoff and Jupp for
magnetotelluric (MT) and DC resistivity data. These difficulties can also be reduced by
various regularization procedures. One option is to assume that spatial variations of the sub-
surface physical properties are smooth. This may be implemented using an inversion
algorithm that minimizes the curvature of the model space. A potential disadvantage of such a
procedure is that the resultant images may be blurred and important small scale features may
remain unresolved. Another way of compensating for sparse data is to introduce a priori
information in the form of damping. In this approach, model parameters are not allowed to
deviate greatly from a given starting model. Clearly, this requires that the starting model
should be a close representation of the true sub-surface structure. Although smoothing and
damping are powerful mathematical tools, it is much better to minimize the ambiguities by
applying appropriate data constraints. This has led to the concept of joint inversions, whereby
different types of data are inverted simultaneously (Vozoff & Jupp 1975). A necessary
requirement for a joint inversion is to have a factor that is common to the two data sets. The
most straightforward approach is to invert data sets that are sensitive to the same physical
property. For example, direct-current electrical resistivity and electromagnetic data are both
sensitive to electrical resistivity. A variety of studies have demonstrated the substantial
reduction in ambiguity that may result from joint inversions (Vozoff & Jupp 1975). It follows
from the paper of Musil et al (2003) that jointly inverting data sets that are sensitive to
different physical properties is a more difficult problem. Coupling of the two data sets must
involve common structural elements. In 1-D applications, the common elements may be layer
thicknesses. This concept can be extended to 2 and 3-D data sets, as long as the targets can be
represented by different physical models with common geometries. Besides smoothing,
damping and joint inversion, a further option is open for reducing model ambiguity: a priori
knowledge may enable the model parameters to be restricted to a few narrow ranges of
values. If this type of information can be included in an inversion algorithm, the model space
and thus the ambiguities, can be significantly reduced relative to standard least-square
inversions that allow the model space to be continuous and unlimited.

Herrmann R. B. et al have shown that teleseismic P-wave receiver functions and surface-wave
dispersion measurements can be employed to infer simultaneously the shear-wave velocity
distribution with depth in the lithosphere. Receiver functions are primarily sensitive to shear-
wave velocity contrasts and vertical traveltimes and surface-wave dispersion measurements

are sensitive to vertical shear-wave velocity averages, so that their combination bridges
resolution gaps associated with each individual data set. The inversions are performed using a
joint, linearized inversion scheme which accounts for the relative influence of each set of
observations and allows a trade-off between fitting the observations and the smoothness of the
model. Additional constraints on mantle structure are also incorporated during the inversion
procedure since requiring the data to blend smoothly into an appropriate deep structure affects
the estimate of the lower crust velocities, yielding models which are more consistent with
expectations than those resulting from unconstrained inversions. The authors found that a
priori knowledge of upper mantle velocities are required to predict the dispersion up to a 50-
second period and that stability constraints are required. When dispersion is limited to periods
greater than 15 seconds a priori information on the upper crustal velocities may also be
required. The results of applying this technique to data from different tectonic environments
in the Arabian Plate and North America are presented in the paper. A "jumping" algorithm is
employed to jointly invert receiver functions and surface-wave dispersion observations for
shear-wave velocity. The jumping scheme allows smoothness constraint to be implemented in
the inversion by minimizing a model roughness norm that trades off with the goodness of fit.
The goodness-of-fit criterion takes into account the different units, magnitudes, noise and
number of observations of the data and enables an influence parameter 'p' to be set before
inversion to balance the relative importance of each data set of observations. In particular, a
value of p=0 only uses the receiver function data and a value of p=1 only uses the dispersion
data. The system of equations to be inverted is given by

[pDs; qDr; sA] x = [rs; rr; 0] + [Ds; Dr; 0] x0 (2.35)

where Dr and Ds are partial derivative matrices for the dispersion measurements and the
receiver function estimates respectively, rs and rr are the corresponding vectors of residuals, x
is the vector of S-wave velocity, x0 is the starting model, and A is a matrix that constructs the
second order difference of the model x. The factor q equals 1-p and the factor s balances the
trade-off between data fitting and model smoothness. Additional a priori information is
required to stabilize the results of the models in the upper mantle. One possibility is to require
the deepest layers in the model to be similar to predetermined values, such as PREM. This can
be achieved by adding the following set of equations to the original system

Wx = Wx a (2.36)

where W is a diagonal matrix of weights and the vector xa contains the a priori predefined
velocity values.
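One plausible reading of the stacked systems (2.35)-(2.36) as a single weighted least-squares problem can be sketched as follows; the sensitivity matrices, weights and the a priori anchoring are all assumed stand-ins for this sketch, not the published kernels, and the block weights are applied to both sides so each block is balanced:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20                                    # model: S-wave velocity in n layers
x_true = 3.0 + 0.5 * np.sin(np.linspace(0, np.pi, n))
x0 = np.full(n, 3.0)                      # starting model

# Stand-ins for the partial-derivative matrices of the two data sets
# (random sensitivities are an assumption of this sketch)
Ds = rng.normal(size=(40, n))             # dispersion
Dr = rng.normal(size=(30, n))             # receiver functions
rs = Ds @ (x_true - x0) + rng.normal(0, 0.01, 40)   # residual vectors
rr = Dr @ (x_true - x0) + rng.normal(0, 0.01, 30)

# Second-difference operator A: its rows build the model roughness norm
A = np.zeros((n - 2, n))
for i in range(n - 2):
    A[i, i:i + 3] = [1.0, -2.0, 1.0]

p, s = 0.5, 0.1                           # influence and smoothness weights
q = 1.0 - p

# A priori anchoring of the two deepest layers, in the spirit of eq. (2.36)
Wm = np.zeros((2, n))
Wm[0, -1] = Wm[1, -2] = 1.0
xa = x_true                               # pretend the deep velocities are known

# Stacked least-squares system of eqs (2.35)-(2.36)
G = np.vstack([p * Ds, q * Dr, s * A, Wm])
rhs = np.concatenate([p * (rs + Ds @ x0), q * (rr + Dr @ x0),
                      np.zeros(n - 2), Wm @ xa])
x, *_ = np.linalg.lstsq(G, rhs, rcond=None)
```

Varying p between 0 and 1 shifts the influence from the receiver-function block to the dispersion block, as described in the text.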

Finally, the authors conclude that the combination of surface wave dispersion data and
receiver functions provides constraints on the shear velocity of the propagating medium that
improve those provided by either of the data sets considered separately and helps to avoid
over-interpretation of single data sets.

The problem of solution appraisal, mentioned in the Introduction, is considered in many
papers, including that by van Wijk et al (van Wijk K., et al, 2002). Moreover, Sharma and
Kaikkonen (Sharma S. P. and P. Kaikkonen, 1999) discuss the problem of appraisal of
equivalence and suppression problems in 1-D EM and DC measurements using global

optimization and joint inversion. According to these authors, individual inversion of the EM
data set can resolve a conducting layer reasonably well but it fails when the layer is either thin
or resistive with respect to the surroundings. On the other hand, the individual inversion of the
DC resistivity data suffers from an inherent equivalence problem. In general, when a thin
conducting layer is encountered, inversion results resolve the product conductivity × thickness
or resistivity × thickness, rather than the exact values of conductivity and thickness separately.
Further, when a middle layer has values for the physical parameters between those of the
overlying and underlying layers, then the presence of such layers is suppressed in the data.
Various researchers have provided a theory for the computation of forward responses of the
horizontal co-planar coil system over a stratified earth. Following the digital linear filtering
approach to the computation of resistivity sounding curves, filter sets have been designed to
compute EM sounding curves for various dipole-dipole configurations. The expression for the
mutual impedance ratio for a horizontal co-planar coil system can be written as a convolution
integral

Z/Z0 = 1 − r² ∫(−∞ to ∞) [e^(−2y) R(y)] [e^(x−y) J0(e^(x−y))] dy (2.37)

where r is the distance between the transmitter and the receiver, J0 is the Bessel function of
zero order and R(y) is the complex EM kernel function. The input and filter functions are given
in the first and second brackets, respectively. The ratio of mutual impedances can be
computed easily with the help of filter coefficients developed by Koefoed et al. (1972). In the
following, the phase ϕ of the mutual impedance obtained from

ϕ = arctan[Im(Z/Z0) / Re(Z/Z0)] (2.38)

is considered as a representation of the EM response.

Similarly, the convolution form of the relationship has been developed for the apparent
resistivity measured using the Schlumberger array


ρa(x) = ∫(−∞ to ∞) T(y) [e^(2(x−y)) J1(e^(x−y))] dy (2.39)

where T(y) is the resistivity transform and J1 is the Bessel function (further details of the
original relationship are given in the sub-section entitled The deterministic approach).

The resistivity transform is the input of the filter and the second term in the above integral is
the filter function. The two integrals complete a forward problem. Joint inversion is carried
out using the following objective function, combining the EM phase data and DC apparent
resistivity data

ε = (1/NF) Σ(i=1..NF) [(ϕi0 − ϕic)/(abs(ϕi0) + C)]² + (1/NS) Σ(i=1..NS) [(ρi0 − ρic)/ρi0]² (2.40)

where ϕi0, ϕic are the observed and computed phases, while ρi0, ρic are the observed and
computed apparent resistivities respectively. NF and NS are the numbers of frequencies and
observation points in the EM and DC measurements respectively.
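The joint objective of eq. (2.40) itself is a one-liner. The sketch below uses an assumed value for the constant C (which is not specified here) and arbitrary test values; it checks that a perfect model yields zero misfit and that the misfit grows with the model error:

```python
import numpy as np

def joint_misfit(phi_obs, phi_cal, rho_obs, rho_cal, C=1e-3):
    """Joint EM-phase / DC-resistivity objective of eq. (2.40).
    C is a small constant (an assumed value) guarding the division
    for near-zero observed phases."""
    em = np.mean(((phi_obs - phi_cal) / (np.abs(phi_obs) + C)) ** 2)
    dc = np.mean(((rho_obs - rho_cal) / rho_obs) ** 2)
    return em + dc

# Arbitrary synthetic responses (assumed values, not from the paper)
phi = np.array([10.0, 12.0, 15.0, 20.0])      # phases
rho = np.array([100.0, 80.0, 60.0, 50.0])     # apparent resistivities [ohm-m]
```

Because both terms are normalized relative errors, the EM and DC data sets contribute on a comparable scale, which is what makes the joint global optimization workable.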

Figure 56. The H-type model. From left to right: comparison between observed and computed
responses, resistivity versus depth sections and inverted h2 versus ρ2 results,
arising from ten very fast simulated annealing runs after inversion of the
following: phase (a), (b), (c); apparent resistivity (d), (e), (f); both data sets
together (g), (h), (i). (source: Sharma S. P. and P. Kaikkonen, 1999, Appraisal of
equivalence and suppression problems in 1-D EM and DC measurements using
global optimization and joint inversion, Geophys. Prosp., 47, pp. 219 –249)

According to the authors, the study reveals that global optimization of individual data sets, the
phase or apparent resistivity, cannot solve inherent equivalence or suppression problems.
Joint inversion of EM and DC measurements can overcome the problem of equivalence very
well. However, a suppression problem cannot be solved even after the combination of data
sets. It is also concluded that the equivalence associated with a thin resistive layer can be
solved better by joint inversion than that for a thin conducting layer. Similar studies
concerning 2-D and 3-D structures for genetically-related and non-related observations would
be necessary to understand the circumstances in which the joint inversion is really meaningful
in reducing ambiguities of interpretation.

2.5. Mutual constraint inversion

Joint inversion generally implies that two related datasets are used in the same objective
function and one model is produced through the optimization process (Vozoff and Jupp,
1975). The concept of co-operative inversion, pioneered by Lines et al. (1988) with gravity
and seismic datasets, is an iterative procedure alternating between two datasets and eventually
arriving at a common model. Historically, constrained inversion has meant using a concept of
a priori data to constrain the inversion. MCI uses concurrent inversion of two datasets
mutually constrained through the parameter resolution matrix. Mutually constrained inversion
(MCI) is a process in which two distinct data sets are inverted to produce two closely related
models (Auken E., et al, 2001). MCI has many of the properties of joint inversion, the process
where two datasets are inverted to produce one model. Poorly resolved parameters are
enhanced and invisible layers can be seen. Although MCI is more robust, the two resulting
models can be evaluated independently and the best resolved parameters used in the
interpretation. The best resolved parameters in the two models resulting from the MCI can be
used in the interpretation. If the MCI is not appropriate, it is evident in the constraint residual.
The authors have chosen time-domain electromagnetic and electrical resistivity techniques for
MCI. According to them, time-domain electromagnetic (TEM) and electrical resistivity are
two methods that measure the same fundamental property, resistivity, but have different
degrees of sensitivity and will not necessarily respond to the earth in the same manner. The
TEM and electrical resistivity methods are well known in exploration geophysics. The
methods are complementary in many ways, making them ideal partners for MCI. Although
both methods measure the electrical conductivity or resistivity of the sub-surface, they sample
different volumes and have different sensitivities. TEM, an inductive technique, has an area of
investigation that is a function of the descending and expanding image of the transmitted
current, typically 40 m by 40 m or greater. The resistivity method is a galvanic technique that
samples a more linear portion of the ground as defined by the area of current flow. The TEM
method is sensitive to conductive units and relatively insensitive to resistive ones. The method
is capable of detecting a thick resistive layer but is unable to resolve the resistivity. In
contrast, the resistivity method is sensitive to conductivity-thickness and resistivity-thickness
products or conductance and is incapable of resolving a series of thin layers of high contrast, a
situation often known as the equivalence problem. The TEM method gives an absolute
measurement of sub-surface resistivity; the electrical resistivity method gives a relative
measure of this quantity. In principle, TEM systems can be used to sample very shallow
depths and a large electrode array can be used for deep electrical soundings. In practice,
however, the opposite is done: small, easy-to-deploy electrode arrays are used for shallow
resistivity soundings and large transmitter moments for deep TEM soundings.

Inversion is thought of as a recipe for data processing with the ingredients being the forward
problem, the inversion method and the regularization. In this case the forward problems are
that of 1-D electrical sounding using the Schlumberger array and 1-D central-loop TEM
sounding for a 40 by 40 m square. The regularized inversion is an iterative damped least-
square routine. The innovation in this scheme lies in the mutual constraints between the two
inversions. The damped least-square solution for the model parameters, m, at the (i+1)th
iteration is given by

mᵢ₊₁ = mᵢ + [GᵢᵀCd⁻¹Gᵢ + BᵀCc⁻¹B + Cm⁻¹ + λI]⁻¹ [GᵢᵀCd⁻¹(do − g(mᵢ)) − BᵀCc⁻¹Bmᵢ + Cm⁻¹(m0 − mᵢ)]   (2.41)
where G is the Jacobian at the ith iteration, Cd the data error co-variance matrix, B the
roughness matrix, Cc the co-variance matrix of the constraint between the model parameters,

λ the damping parameter, I the identity matrix, Cm the model parameter co-variance matrix
for the a priori model m0, do the observed data, and g(m) the model response. The term
BᵀCc⁻¹B operates as a bond tying the inverse models together. The term in the first square
bracket is referred to by the authors as a “generalized co-variance”, while that in the second is
“generalized data”.
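One update of (2.41) amounts to a few lines of linear algebra. The sketch below assumes, for illustration only, a linear forward response g(m) = Gm; with the roughness, constraint and a priori terms switched off and λ = 0, a single step from m = 0 then reproduces the ordinary weighted least-squares solution:

```python
import numpy as np

def mci_step(m, d_obs, g_m, G, B, Cd_inv, Cc_inv, Cm_inv, m0, lam):
    """One damped least-squares update of equation (2.41).

    G is the Jacobian, B the roughness matrix, Cd_inv/Cc_inv/Cm_inv the
    inverse data, constraint and model covariance matrices, lam the damping.
    """
    A = G.T @ Cd_inv @ G + B.T @ Cc_inv @ B + Cm_inv + lam * np.eye(len(m))
    rhs = (G.T @ Cd_inv @ (d_obs - g_m)
           - B.T @ Cc_inv @ B @ m
           + Cm_inv @ (m0 - m))
    return m + np.linalg.solve(A, rhs)

# Linear toy problem: g(m) = G m, no constraints, no damping.
G = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
d_obs = np.array([1.0, 2.0, 3.0])
m = np.zeros(2)
zero2 = np.zeros((2, 2))
m1 = mci_step(m, d_obs, G @ m, G, zero2, np.eye(3), zero2, zero2, m, 0.0)
```

Here m1 equals the least-squares solution (1, 2) of the toy system.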

Figure 57. (a) The true (also MCI) model and corresponding TEM and electrical resistivity
inverted model results for the Skaro model. (b) The inverted model from electrical
resistivity data shifted by a factor of 1.2, joint inversion results of the TEM and
shifted resistivity data, and MCI. (source: Auken E., L. Pellerin and K. I.
Sorensen, 2001, Mutually Constrained Inversion (MCI) of Electrical and
Electromagnetic Data, SEG/San Antonio, Expanded Abstracts)

The MCI procedure is first demonstrated on the classic problem of resolving a resistive layer
(Vozoff and Jupp, 1975), which is encountered in the field example in Skaro. Joint inversion
and MCI both successfully solved this problem. Figure 57a shows the true model, which is
the same as the joint and MCI results, and the inverse model for the separate electrical
resistivity and TEM datasets. MCI is used to resolve incompatibilities produced when the
resistivity sounding is distorted by near-surface inhomogeneities (static shift), without the need
for a special parameter that cannot be measured. A field study demonstrates how a resistive
layer, important in aquifer characterization, which is either inconsistently detected or
unresolved in the separate time-domain and resistivity datasets, is well delineated with the
MCI.

The advantages of MCI over joint inversion, according to the authors, are threefold:
1. Multiple datasets are not necessarily compatible, and a joint inversion can give
misleading results, while the MCI is robust and gives reasonably accurate results. For
example, if one dataset is distorted, such as with an electrical static shift, the resistivity
inversion results would be different from a TEM inversion over the same site. An
anisotropic earth influences TEM data differently from electrical resistivity data,
resulting in conflicting models.
2. The MCI enables the interpreter to allow for the difference between the sensitivities
and resolving capabilities of two methods. The models can be independently evaluated
and the best resolved parameters used in the interpretation.

3. Because of the soft bonds between the two models, the approach is quite robust and
can be used in a generic approach, as opposed to specifically designed problems. A
static shift parameter or coefficient of anisotropy is not explicitly required for a
convergent solution as in a joint inversion.

The MCI is a robust method for processing distinct datasets. In contrast to other methods that
result in a single model, the MCI approach recovers models for each data set that are not exact
but interpretationally the same. For inconsistent data sets, such as static shifted resistivity data
interpreted with undistorted TEM data or data containing significant anisotropy, the MCI
method will recover a useable model without explicitly allowing for the distortion with a
specific parameter in the inversion.

2.6. Discrete tomography

The problem of joint inversion for loosely connected or unconnected physical properties has
been discussed in the paper of Musil et al (2003). Tomographic inversions of geophysical data
generally include an undetermined component. To compensate for this shortcoming,
assumptions or a priori knowledge need to be incorporated into the inversion process. A
possible option for a broad class of problems is to restrict the range of values within which the
unknown model parameters must lie. The authors then introduce a discrete tomography
developed and based on mixed-integer linear programming. An important feature of the
method is the ability to invert jointly different types of data, for which the key physical
properties are only loosely connected or unconnected.

A typical example of an undetermined component in inversion is cavity detection or the
delineation of isolated ore bodies in the sub-surface. In cavity detection the physical
properties of the cavity can be narrowed down to those of air and/or water. The physical
properties of the host rock are either known to within a narrow band of values or can be
established from simple experiments. Discrete tomography techniques allow such information
to be included as constraints on the inversions. The performance of a new algorithm is
demonstrated on several synthetic data sets. In particular, it is shown how the complementary
nature of seismic and georadar data can be exploited to locate air or water-filled cavities.

Discrete tomography is a possible option for tackling problems characterized by variables that
can only assume values within very limited ranges. This tomographic method has been used
to map molecules in discrete lattices, reconstruct the shapes and dimensions of industrial parts
and determine approximate binary images from discrete X-rays.

A new discrete tomography algorithm based on mixed-integer linear programming (MILP) is
presented in this paper. An important advantage of the MILP formulation is that it lends itself
naturally to the concept of joint inversion. It allows all options for reducing ambiguities
(smoothing, damping, joint inversion and discrete parameter intervals) to be considered
simultaneously. The authors briefly review traveltime tomography and the commonly
employed least-square L2-norm minimization procedure (the conventional approach). Since
the MILP algorithm used is based on linear programming and L1-norm minimization, the
necessary theoretical background for these concepts is also outlined. The possibilities and
limitations of their approach are demonstrated on synthetic traveltime data generated from
simple models with relatively high velocity contrasts. In a second suite of examples they
simulate realistic full-wave-form seismograms and radargrams for typical cavity detection
problems. In these latter examples they deal with very high-velocity contrasts that generally

74
cause difficulties in conventional tomographic inversions. The traveltime, t, of a seismic or
georadar wave travelling along a ray path, S, through a 2-D isotropic medium is written as

t = ∫S u(r(x, z)) dr   (2.42)

where u(r) is the slowness (the reciprocal of velocity) field and r(x, z) is the position vector.
The slowness field u(r) is represented by M cells, each having a constant slowness uj
(j = 1,..., M), so the ith traveltime can be written as

ti = Σ_{j=1}^{M} lij uj = Li u   (2.43)

where lij denotes the portion of the ith ray path in the jth cell. To determine the matrix L,
calculation of ray paths in 2-D media is required. The above equation describes a linear
relationship between the traveltimes and the 2-D slowness field. In principle, the slowness
vector u may be obtained by inverting the system of equations (2.43). In practice, it is
generally not possible to determine u unambiguously without introducing a priori information
in the form of smoothing and/or damping constraints:

⎡ t ⎤   ⎡ L ⎤
⎢ 0 ⎥ = ⎢ A ⎥ u   (2.44)
⎣ u0 ⎦   ⎣ I ⎦

where A is a smoothing matrix, u0 is a vector of damping constraints and I is the identity
matrix.

This equation can be written in a more compact form as

d = Gu (2.45)

The smoothing and damping constraints cause the system of equations (2.45) to be over-
determined. Because the values of L depend on the unknown slowness field u, the inversion
problem is non-linear and consequently the problem must be solved iteratively. Algorithms
that employ “L2-norm minimization” attempt to minimize the squared sum of the prediction
error

Σ_{i=1}^{N} ( Σ_{j=1}^{M} Gij uj − di )²   (2.46)

where N is the number of traveltimes plus the additional constraints (see equation (2.44)). There
are several options for solving the classical least-square problem. Popular choices include
accumulation of the normal equations and inverting the resultant Hessian matrix.
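For a fixed ray-path matrix L, one linearized iteration of (2.44)-(2.46) reduces to an ordinary least-squares solve. A minimal numpy sketch; the weights alpha and beta on the smoothing and damping rows are illustrative assumptions, not values from the paper:

```python
import numpy as np

L = np.array([[1.0, 0.0],        # ray-path lengths: one row per ray,
              [0.0, 1.0]])       # one column per slowness cell
t = np.array([2.0, 2.0])         # observed traveltimes
A = np.array([[1.0, -1.0]])      # first-difference smoothing matrix
u0 = np.array([2.0, 2.0])        # damping target (a priori slowness)
alpha, beta = 0.1, 0.1           # smoothing / damping weights (assumed)

# Stack the system of equation (2.44): [t; 0; beta*u0] = [L; alpha*A; beta*I] u
Gmat = np.vstack([L, alpha * A, beta * np.eye(2)])
dvec = np.concatenate([t, np.zeros(1), beta * u0])

# Normal equations: (G^T G) u = G^T d, minimizing the L2 misfit of (2.46)
u = np.linalg.solve(Gmat.T @ Gmat, Gmat.T @ dvec)
```

All three row blocks here are consistent with u = (2, 2), which the solve returns; in a real tomography L would be re-traced and the solve repeated at each iteration.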

Algorithms that employ “L1-norm minimization” attempt to minimize the absolute difference
of the prediction error

Σ_{i=1}^{N} | Σ_{j=1}^{M} Gij uj − di |   (2.47)

Linear programming is typically used for this purpose. The over-determined system of
equations must be converted into an appropriate form for L1-norm minimization.
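The standard conversion introduces one non-negative slack variable per residual and bounds the residual by it from both sides. A sketch using scipy's linprog; the toy G and d are illustrative, not data from the paper:

```python
import numpy as np
from scipy.optimize import linprog

G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d = np.array([1.0, 2.0, 3.0])
N, M = G.shape

# Variables x = [u (M values); e (N slacks)]; minimize sum(e)
# subject to -e <= G u - d <= e, written as two blocks of inequalities.
c = np.concatenate([np.zeros(M), np.ones(N)])
A_ub = np.block([[G, -np.eye(N)],     #  G u - e <=  d
                 [-G, -np.eye(N)]])   # -G u - e <= -d
b_ub = np.concatenate([d, -d])
bounds = [(None, None)] * M + [(0, None)] * N   # u free, e >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
u_l1 = res.x[:M]   # L1-norm slowness estimate
```

For this consistent system the L1 solution is u = (1, 2) with zero misfit.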

Figure 58. Flow diagram describing the discrete inversion procedure used in the paper.
(source: Musil M., H. R. Maurer and A. G. Green, 2003, Discrete tomography
and joint inversion for loosely connected or unconnected physical properties:
application to cross-hole seismic and georadar data sets, Geophys. J. Int., 153, pp.
389–402)

Apart from presenting many examples of the proposed algorithm, Musil et al (2003) conclude
that they have introduced a discrete tomography technique for individually or jointly inverting
seismic and georadar cross-hole data. The technique is applicable to a broad class of problems
for which the propagation velocities are restricted to a few relatively narrow ranges of values.
If sufficient a priori velocity information exists, the tomographic inversions should be
reliable. It has been demonstrated that the technique works well when the average velocities
are known to within ±5 per cent. Other tests indicate that convergence to correct velocities
also occurs when velocity uncertainties are as large as ±10 per cent. The new technique is
unlikely to produce meaningful results if the average velocities fall outside the chosen
velocity ranges. Unlike conventional least-square inversion methods, the discrete tomography
technique does not provide a formal means of estimating ambiguity or, equivalently, of
determining unequivocally whether the output model is the result of an insignificant local
minimum in the model space or whether it is one of a number of very similar solutions
distributed about the global minimum. To address this issue each data set should be inverted
independently several times and the resultant models compared. For all of the tests that the
authors performed, including many not shown in their paper, the output models of all multiple runs were
found to be very close to each other. Under a variety of conditions, the joint discrete
inversions were found to be more robust than the individual discrete inversions. The
complementary nature of the jointly inverted data sets enabled less ambiguous tomographic
reconstructions to be achieved.
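The essence of the discrete parametrization can be illustrated without a MILP solver: when each cell may only take the host-rock or the cavity slowness, a tiny grid can simply be enumerated and the assignment with the smallest L1 misfit kept. A MILP solver performs this search implicitly on realistic grids; the toy ray geometry and slowness values below are illustrative assumptions:

```python
from itertools import product

L = [[1.0, 0.0],                 # ray-path lengths: rays x cells
     [0.0, 1.0],
     [1.0, 1.0]]
t_obs = [0.5, 1.0, 1.5]          # observed traveltimes
u_host, u_cavity = 1.0, 0.5      # the two admissible slowness values

def l1_misfit(u):
    # Sum of absolute traveltime residuals |L u - t|, the L1 norm of (2.47)
    return sum(abs(sum(lij * uj for lij, uj in zip(row, u)) - ti)
               for row, ti in zip(L, t_obs))

# Exhaustive search over all 2^M discrete slowness assignments
best_u = min(product([u_host, u_cavity], repeat=len(L[0])), key=l1_misfit)
```

With these values best_u comes out as (0.5, 1.0), flagging the first cell as the cavity with zero residual.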

3. The applicability of geophysical prospecting methods to demining
Further study of the literature on visualisation methods based on the electromagnetic principle
reveals several papers which deal more or less directly with the issue of demining. These are
presented briefly in this section. Amongst these is an attempt to apply impedance tomography
to the detection of mines. The first results have been presented in publications by Wort et al
(Wort P., et al, 1999) and Church et al (Church, P., et al, 2001).

The paper of Church et al reports the results of the performance assessment of an electrical
impedance tomography detector (EIT) for mine-like objects in soils. The EIT uses an array of
electrodes to inject low-frequency currents into the soil and measure the resulting electrical
potentials. The measurements are then used to reconstruct the electrical conductivity
perturbations underneath the array. In the course of this work an EIT instrument was built and
field evaluated. The array is made of 64 stainless steel stimulating and recording electrodes
arranged in an 8x8 grid. A specialized electronics system was constructed to control the
electrode current stimulations and potential measurements.

The detection algorithm is tuned to objects of a given size and shape in order to reduce the
false alarm rate. The main mechanical, electronic and algorithmic components of the detector
are presented in the paper. The EIT detector was originally designed with a view to evaluating
its potential as a confirmatory detector of AT mines. To this end mine-like objects
representative of some AT mines were used. The results of preliminary field evaluations are
presented, and the capabilities and limitations of detecting mine-like objects are discussed.

Figure 59. EIT Detector System (source: Church, P, P Wort, S Gagnon and J E McFee, 2001,
Performance Assessment of an Electrical Impedance Tomography Detector for
Mine-Like Objects, Proc. SPIE Conference on Detection and Remediation
Technologies for Mines and Mine-like Targets VI. Vol. 4394, Orlando, FL, USA,
16-20 April)

The data processing application comprises the software required for the data acquisition and
the target detection algorithm. A laptop computer (IBM ThinkPad) is the host computer. The
basic algorithm developed for the reconstruction of the electrical conductivity distribution is
described in detail in Church’s paper [Church P. et al, 2001]. It is a linearized reconstruction
based on the following relation:

δZ ∝ Sδσ (2.48)

The terms are defined as follows:
δZ : The elements of this vector represent the difference between the transfer impedance
measurement for a given configuration of pairs of stimulators and recorders and the transfer
impedance predicted by a homogeneous, semi-infinite model for the same configuration. The
size of the vector is determined by the number of independent measurements used by the
system.
S: This is defined as the sensitivity matrix. Its form arises from a linear approximation and its
elements are evaluated by averaging over a grid cell the scalar product of the electrical field
caused by the stimulating electrodes with the electrical field that would result if the recording
pair was stimulated.
δσ : The elements of this vector represent the conductivity perturbation (with respect to the
uniform conductivity, semi-infinite model) over a regular grid covering the region of interest.
This is the solution that is referred to as the “conductivity distribution reconstruction”.
The solution of the problem requires the inverse of S. The matrix is ill-conditioned and
requires a regularization procedure in order to calculate its inverse. The mine detection
algorithm is based on a matched filter approach. It consists of calculating the detector
response for a replica of the size and shape of the object of interest for a number of grid
locations underneath the detector. The detector response for a given replica is calculated by
assigning zero conductivity to the nodes of the calculation grid that represent the size and
shape of the replica. A correlation is then performed between the detector response for the
replica and the actual detector response obtained from the measurements for all the replica
positions considered. The position that yields the largest correlation value is identified as the
most likely position for the mine. A grid with a resolution of 15x15x3 nodes is used for the
calculations. This resolution is equivalent to a 0.5 electrode spacing (7 cm), in x, y and z. For
every node i of that grid, the correlation operation is defined. An example of a reconstruction
is shown in Figure 60.
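The matched-filter step can be sketched as follows. The sensitivity matrix S is replaced by a random stand-in (in the real detector its elements come from the electric-field scalar products described above), and the replica is a single grid cell set to a negative conductivity perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_cells = 20, 9                      # e.g. a 3x3 candidate grid
S = rng.standard_normal((n_meas, n_cells))   # stand-in sensitivity matrix

def replica(i):
    # Replica of the target: zero conductivity in cell i, i.e. a negative
    # perturbation with respect to the homogeneous half-space
    r = np.zeros(n_cells)
    r[i] = -1.0
    return r

true_pos = 4
dZ = S @ replica(true_pos)                   # simulated measured perturbation

def corr(i):
    # Normalized correlation between the replica response and the data
    r = S @ replica(i)
    return float(dZ @ r / (np.linalg.norm(dZ) * np.linalg.norm(r)))

best = max(range(n_cells), key=corr)         # most likely mine position
```

Because dZ is generated from the replica at position 4, corr(4) is exactly 1 and best recovers that position.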

Figure 60. Detector response for a mine-like object buried at a depth of 7 cm (source: Church
P., et al, 2001, Performance Assessment of an Electrical Impedance Tomography
Detector for Mine-Like Objects, Proc. SPIE Conference on Detection and
Remediation Technologies for Mines and Mine-like Targets VI. Vol. 4394,
Orlando, FL, USA, 16-20 April)

The authors conclude that when using mine-like objects with a size of the order of two
electrode-spacings (ES), reliable detections were obtained down to a range of 1.0-1.5 ES. For
an AT mine with a diameter of 28 cm this results in a detection range of about 15-20 cm in
depth. The detection of targets down to a depth of 17 cm has been successful in all cases. The

strength of the detection varies for targets buried at depths of 21 cm (1.5 ES). This
performance is consistent with the whole set of experiments that have been performed in
various soils, either with the small-scale lab model used in the initial phase of the investigation
or with the current 64-electrode instrument.

The detector was found to be capable of clearly resolving two mine-like objects buried at 1
ES and separated by 0.5 ES. For two AT mines with a 28 cm diameter and buried at a depth
of 14 cm, this corresponds to a distance of 7 cm separating the edges of the two objects (35
cm centre-to-centre). The resolution of two mine-like objects buried at 1.5 ES and separated
by 1 ES was not successful, indicating that the resolution power of the detector decreases
rapidly with the depth of burial. The EIT detector performed unexpectedly well in the DRES
Mine Pen facility. The hard soil crust, covered with small pebbles, had been expected to make
a good electrode-soil contact difficult to achieve. AT mines (TMA3, M15, PTMiBAIII) were
clearly detected down to a depth of 16 cm. The detector’s response for the TMA4 buried at 19
cm was not as clear as for the other mines, the detector being at the limit of its detection
capability for that depth level. The metal AT mine M16 was detected as a non-conductive
object, presumably because of its coat of paint. The same behaviour had been observed for the
metal mine TM46 during the trials at CDC.

Another set of experiments was conducted along an alley of surrogate AT mines set up at
DRES. The results presented unexpected anomalies. The signal from the surrogate AT mine
was weaker than expected and the detector often showed a strong broad signal at the lower
layers. These anomalies may actually be related in the sense that the broad signal masks the
features being sought. It appears that the ground environment is likely to be
responsible for the problems encountered. The results from experiments point to the presence
of a double layer of soil with different electrical conductivities. The current conductivity
reconstruction algorithm assumes a conductivity perturbation in a semi-infinite homogeneous
medium. The algorithm requires revision in order to work properly in environments that
present multi-layers of soils of very different conductivities.

This work concludes a two-phase study on the suitability of using EIT technology as the basis
of a confirmatory detector for AT mines. As a very general conclusion, EIT technology has
proved to be useful in the role of a confirmatory detector for AT mines. This has been
demonstrated through the experiments in the DRES Mine Pen facility. Results from the trials
at CDC and DRES have also indicated that the detector is capable of reliably detecting AT
type mines at depths of 15 to 20 cm. The detector is also capable of resolving a typical AT
size mine buried at depths of up to 16 cm and separated by distances as small as 7 cm. A
detection algorithm based on a replica of the object of interest has also proved to be efficient
in reducing the false alarm rate of the detector. The trials performed at DRES have also shown
that the soil environment may have a significant impact on the detector’s performance if this
is not accounted for in the reconstruction model. However, in order to provide a balanced
evaluation of the EIT detection technology, it should be pointed out that the detector also
faces limitations because an electrode-soil contact is required. Electrical contact cannot be
assured in all types of environment and the deployment of electrodes in the close proximity of
explosives is a potential operational issue, although no large force is required to achieve
contact. EIT technology, as a mine detection application, appears to have a special niche in
environments such as beaches, ocean littorals and other wet areas where EIT works at its best.
The detector has been shown to be unexpectedly efficient in sand, even if it is poorly
conductive, as long as the sand holds some moisture. The fluidity of the sand also provides an
easy reliable contact with the electrodes. EIT may also have an application in locating intact

mines in the berms formed when mine clearing equipment neutralizes and removes mines.
Most mines in such berms are already inert, reducing the likelihood of initiation when
inserting the sensor head. Further, the EIT sensor head could be made cheaply enough to be
disposable and inserted remotely to improve safety.

The following paper does not touch directly on the problem of demining, although the issues
discussed are very similar and relevant to it. Candansayar and Basokur
(Candansayar M. E. and A. T. Basokur, 2001) discuss the problem of detecting small
archaeological targets. Achievements in this area, however, may also be of use in demining.
The detecting capabilities of some electrical arrays for the estimation of position, size and
depth of small-scale targets are examined in the light of the results obtained from 2-D
inversions of apparent-resistivity data. The two-sided three-electrode apparent resistivity data
are obtained by the application of left and right-hand pole-dipole arrays that also permit the
computation of four-electrode and dipole-dipole apparent-resistivity values without actually
measuring them. Synthetic apparent-resistivity data sets of the dipole-dipole, four-electrode
and two-sided three-electrode arrays are calculated for models that simulate buried tombs.
The results of 2-D inversions are compared with regard to resolution in detecting the exact
location, size and depth of the target, showing some advantage of the two-sided three-
electrode array. A field application was carried out in the archaeological site known as Alaca
Hoyuk, a religious temple area of the Hittite period. The 2-D inversion of the two-sided three-
electrode apparent-resistivity data led to the location of part of the city wall and a buried small
room. The validity of the interpretation has been checked against the results of subsequent
archaeological excavations.

Figure 61. (a) A view of the exposed city wall from the southern side (area EA1); (b) a view
of the exposed room and kiln (area EA2). (source: Candansayar M. E. and A. T.
Basokur, 2001, Detecting small-scale targets by the 2-D inversion of two-sided
three-electrode data: application to an archaeological survey, Geophysical
Prospecting, 49, 13 – 25)

A comparison test has been applied to examine the resolution obtained with earth models
derived from the 2-D inversions of three and four-electrode synthetic data. Furthermore, a
computer program that handles two types of electrode array has been developed as an
adaptation of the algorithm published by Uchida and Murakami (1990) and this includes the

modelling of topography. The forward and inversion schemes utilize, respectively, the finite-
element and damped least-square methods.

Figure 62. Plan views of the final model inverted from the two-sided three-electrode apparent-
resistivity data measured at the Alaca Hoyuk archaeological site. The figure
shows the variation of the intrinsic resistivity values inside the blocks within the
same depth range. The resistivity maps correspond to the depth ranges (a)
0.41–1.83, (b) 1.83–3.11 and (c) 3.11–5.20 m, respectively. The excavation areas
are outlined by black rectangles (EA1 and EA2). Yellow dashed lines indicate the
exposed wall and room. The green lines mark the geophysical interpretation.
(source: Candansayar M. E. and A. T. Basokur, 2001, Detecting small-scale
targets by the 2-D inversion of two-sided three-electrode data: application to an
archaeological survey, Geophysical Prospecting, 49, 13 – 25)

The problem of water content variation in soil considered by Panissod (Panissod C. et al,
2001) is also important from the perspective of demining. Firstly, the visualized geometries
are relatively small, although they are still larger than mines. Secondly, the variation in water
content in the soil associated with vegetation is examined. This problem is significant when
electromagnetic methods used for mine detection are considered as a whole.

Figure 63. Location of the electrodes in relation to the corn plant rows (7, 8, 9... for the
smaller pseudo-section and 7’, 8’, 9’... for the larger pseudo-section) and model

scheme used for 3-D modellings. (source: Panissod C., D. Michot, Y. Benderitter
and A. Tabbagh, 2001, On the effectiveness of 2-D electrical inversion results: an
agricultural case study, Geophysical Prospecting, 49, 570-576)

The authors used electrical resistivity tomography in Beauce (France) to assess the water
extraction by corn plants (evapotranspiration). The acquired pseudo-sections show conductive
anomalies under the plants. A 2-D inversion of the measurements led the authors to identify clear resistive
features associated with the water losses under the corn plant rows. New models have been
calculated with two different 3-D algorithms (finite difference and moment method) to take
into account the 3-D structure of the ground and to confirm that periodic resistive features
may generate shifted apparent-resistivity anomalies. The effectiveness of 2-D inversion
results is demonstrated with a field example showing the evapotranspiration effect in relation
to corn plant rows. The increase in the electrical resistivity due to the water extraction
corresponds to a typical 2-D structure of the ground with resistive features under the corn
rows.

Figure 64. 2-D inversion of calculated data (pseudo-section with a=0.2 m for short length
L=0.25 m (perpendicular to the pseudo-section plane) of the corn plant rows).
(source: Panissod C., D. Michot, Y. Benderitter and A. Tabbagh, 2001, On the
effectiveness of 2-D electrical inversion results: an agricultural case study,
Geophysical Prospecting, 49, 570-576)

The 3-D modellings (using both finite difference and moment method) confirm the reality of
2-D artefacts, show that 3-D effects are not significant and allow numerical artefacts to be
excluded. The boundary between the 2-D and the 3-D cases can be defined by combining the
use of 3-D modelling and 2-D inversion algorithms. In the present example, the 2-D inversion
of pseudo-sections is very efficient and demonstrates well the effects of evapotranspiration.

The transport of water in soil plays an important role in modifying the electrical properties of
the soil. Thus the phenomena involved may also be examined accurately using impedance
techniques. This problem has been studied by Slater et al (Slater L., et al, 2000) and presented
in their paper entitled “Cross-hole electrical imaging of a controlled saline tracer injection”.
Electrical imaging of tracer tests can provide valuable information on the spatial variability of
solute transport processes. This concept was investigated by cross-borehole electrical imaging
of a controlled release in an experimental tank. A saline tracer of conductivity 8×10³ mS/m

and volume 270 l was injected into a tank facility with dimensions 10 × 10 × 3 m and
consisting of alternating sand and clay layers. Injection was from 0.3 m below the surface at a
point where maximum interaction was expected between the tank structure and the tracer
transport. Repeated imaging over a two-week period detected non-uniform tracer transport,
partly caused by the sand/clay sequence. Tracer accumulation on two clay layers was
observed and a density-driven spill of the tracer over a clay shelf was imaged. An additional
unexpected flow pathway, probably caused by complications during array installation, was
identified close to the electrode array.

Figure 65. Circulating measurement configurations used in electrical imaging: the ‘‘normal’’
transfer resistance measurement and its reciprocal. (source: Slater L., A.M.
Binley, W. Daily, R. Johnson, 2000, Cross-hole electrical imaging of a controlled
saline tracer injection, Journal of Applied Geophysics, 44, 85–102)

Pore water samples obtained following termination of electrical imaging generally supported
the observed electrical response, although discrepancies arose when the response of individual
pixels was analysed. The pixels that make up the electrical images were interpreted as a large
number of breakthrough curves. The shape of the pixel breakthrough-recession curve allowed
some quantitative interpretation of solute traveltime, as well as a qualitative assessment of
spatial variability in advective-dispersive transport characteristics across the image plane.
Although surface conduction effects associated with the clay layers complicated
interpretation, the plotting of pixel breakthroughs was considered a useful step in the
hydrological interpretation of the tracer test. The spatial coverage provided by the high
density of pixels is the most encouraging factor in the approach.

Figure 66. Images of the conductivity ratio obtained at nine intervals during tracer injection.
a) Between 8 and 47 h after the start of the tracer injection. b) Between 71 and
264 h after the start of the tracer injection. (source: Slater L., A.M. Binley, W.
Daily, R. Johnson, 2000, Cross-hole electrical imaging of a controlled saline
tracer injection, Journal of Applied Geophysics, 44, 85–102)

Appendices
A. Linear least squares inversion
a). Unconstrained case
Consider a simple linear equation

d = Gm (A.1)

where d is the data vector, G the model matrix and m the vector of parameters.


For the noiseless data we have

m = G⁻¹d   (A.2)

If the errors (noise) are assumed to be additive, we have:

d = Gm + e (A.3)

where e stands for the residuals due to experimental noise and errors.


The best way to obtain a unique solution is to minimise the residuals. This can be done
using the least-squares method, thus:

Q = eᵀe = Σ_{i=1}^{N} ( d_i − Σ_{j=1}^{M} G_{ij} m_j )²   (A.4)

where Q is a scalar representing the misfit.


Equation (A.4) can be written in matrix form:

Q = (d − Gm)ᵀ(d − Gm)   (A.5)

We want Q to be as small as possible, so the minimum is found by differentiating Q with
respect to each model parameter m_j; as a result a set of equations is obtained. The solution
is given in the form of the so-called normal equation

GᵀGm = Gᵀd   (A.6)

Thus the least-squares solution is given as follows:

m̂ = (GᵀG)⁻¹Gᵀd   (A.7)

The product (GᵀG)⁻¹Gᵀ is often called the least-squares generalised inverse. The solution
given by (A.7) is also described as an unbiased estimator of m.
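As a minimal numerical sketch of (A.6)–(A.7) (assuming NumPy; the matrix sizes and noise level are illustrative, not taken from the text):

```python
import numpy as np

# Unconstrained least squares (A.7): m_hat = (G^T G)^{-1} G^T d,
# demonstrated on a synthetic overdetermined problem d = G m + e (A.3).
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))                 # model matrix: N=20 data, M=3 parameters
m_true = np.array([1.0, -2.0, 0.5])          # parameters to recover
d = G @ m_true + 0.01 * rng.normal(size=20)  # data with additive noise

# Solve the normal equation G^T G m = G^T d (A.6) rather than inverting explicitly.
m_hat = np.linalg.solve(G.T @ G, G.T @ d)

# np.linalg.lstsq gives the same estimate via a numerically more stable factorisation.
m_ref, *_ = np.linalg.lstsq(G, d, rcond=None)
print(m_hat)   # close to m_true
```

In practice the factorisation-based solver is preferred, since forming GᵀG squares the condition number of the problem.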

b). Constrained case

In some situations additional information, e.g. from previous investigations, is known.

This information can also follow from general laws, e.g. it is known that DC resistivity is
positive. Such additional information can be incorporated in the inverse problem. The solution
is then called constrained, as the additional information restricts the possible solutions to
those which also fulfil the constraint. The constraining equations can be arranged in
the following form:

h = Dm (A.8)

where D is a matrix which operates on the model parameters m to yield the a priori values
contained in vector h. Now, instead of (A.5), the following expression is minimised:

Q = (d − Gm)ᵀ(d − Gm) + β²(h − Dm)ᵀ(h − Dm)   (A.9)

where β is an undetermined multiplier.


The normal equation following from (A.9) has the form:

(GᵀG + β²DᵀD)m = Gᵀd + β²Dᵀh   (A.10)

If, as is often assumed, matrix D is the identity, then the solution is

m̂ = (GᵀG + β²I)⁻¹(Gᵀd + β²h)   (A.11)

Formula (A.11) is described as the biased or constrained linear estimator.
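A sketch of the constrained estimator (A.11) with D = I (assuming NumPy; β and the a priori vector h are illustrative choices):

```python
import numpy as np

# Constrained (damped) least squares (A.11):
# m_hat = (G^T G + beta^2 I)^{-1} (G^T d + beta^2 h)
rng = np.random.default_rng(1)
G = rng.normal(size=(15, 4))
m_true = np.array([2.0, 0.0, -1.0, 0.5])
d = G @ m_true + 0.05 * rng.normal(size=15)

h = np.zeros(4)   # a priori parameter values; h = 0 biases the solution towards zero
beta = 0.5        # trade-off between data fit and the constraint

M = G.shape[1]
lhs = G.T @ G + beta**2 * np.eye(M)
rhs = G.T @ d + beta**2 * h
m_hat = np.linalg.solve(lhs, rhs)
print(m_hat)
```

With h = 0 this is ordinary damping (ridge regression): the constrained solution trades a small bias for reduced variance, which is the stabilising effect exploited in geophysical inversion.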

c). Weighted linear LS


The vector m is called the weighted least squares solution (WLS) if it solves the
problem

min_m ‖Ld − LGm‖²   (A.12)

where LᵀL = W is the so-called weighting matrix.

The solution to (A.12) is given by the following relation:

m̂_WLS = (GᵀWG)⁻¹GᵀWd   (A.13)

In the case W = I, equation (A.13) reduces to the ordinary LS solution given by (A.7).
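A short sketch of (A.13) (assuming NumPy; the inverse-variance weighting and noise levels are illustrative assumptions):

```python
import numpy as np

# Weighted least squares (A.13): m_hat = (G^T W G)^{-1} G^T W d,
# with W chosen to down-weight noisy observations.
rng = np.random.default_rng(4)
G = rng.normal(size=(10, 2))
m_true = np.array([3.0, -1.0])
noise_std = np.array([0.01] * 5 + [1.0] * 5)   # last 5 data are much noisier
d = G @ m_true + noise_std * rng.normal(size=10)

W = np.diag(1.0 / noise_std**2)                # inverse-variance weighting
m_wls = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
m_ols = np.linalg.solve(G.T @ G, G.T @ d)
print(m_wls, m_ols)
```

With this W the weighted estimate is driven by the precise observations, which is why WLS typically recovers the parameters better than unweighted LS when data quality varies.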

B. Non-linear least squares inversion

In general, the relation between data and model parameters can be described by:

d = f(m)   (B.1)

Most non-linear problems demand a starting point, m⁰. This starting model may follow from
previous experiments (a priori information) or can be an intelligent guess. If the
nonlinearity is assumed to be “weak”, the model f(m) can be linearised around the initial
value m⁰:

d = f(m) = f(m⁰) + (∂f(m⁰)/∂m)(m − m⁰) + O(m − m⁰)   (B.2)

where ∂f(m⁰)/∂m is the so-called Jacobian J of f(m) at m = m⁰ and O(•) denotes
higher-order terms. Neglecting these terms and solving the linearised problem in the
weighted least-squares sense gives

m̂ = m⁰ + (JᵀWJ)⁻¹JᵀW(d − f(m⁰))   (B.3)

a). Gauss-Newton algorithm


One of the most popular deterministic methods for solving non-linear problems is the
Gauss-Newton approach. In this technique the objective function

Q(m) = ‖Ld − Lf(m)‖²   (B.4)

is expanded into a truncated Taylor series

Q(m) ≈ Q(m⁰) + (∂Q(m⁰)/∂m)ᵀ(m − m⁰) + ½(m − m⁰)ᵀ(∂²Q(m⁰)/∂m²)(m − m⁰) = r(m)   (B.5)

where ∂Q/∂m is the gradient of Q while ∂²Q/∂m² is the Hessian. The minimum of Q(m) is
achieved for the m = m̂ fulfilling the condition

∂r(m)/∂m |_{m=m̂} = ∂Q(m⁰)/∂m + (∂²Q(m⁰)/∂m²)(m̂ − m⁰) = 0   (B.6)

Solving (B.6) for m̂ leads to the following relationship:

m̂ = m⁰ − (∂²Q(m⁰)/∂m²)⁻¹ ∂Q(m⁰)/∂m   (B.7)

where the gradient of Q(m) is given as

∂Q(m)/∂m = −2(∂f(m)/∂m)ᵀLᵀL(d − f(m))   (B.8)

while the Hessian is

∂²Q(m)/∂m² = 2(∂f(m)/∂m)ᵀLᵀL(∂f(m)/∂m) − 2 Σ_{j=1}^{M} (d_j − f_j(m⁰)) LᵀL (∂²f_j/∂m²)   (B.9)

The second term of this relationship is omitted in the Gauss-Newton algorithm.

Omitting the second term in the Hessian and introducing the gradient (B.8) and Hessian (B.9)
into (B.6), the final relation has the following form:

m̂_{i+1} = m̂_i + (J_iᵀLᵀLJ_i)⁻¹ J_iᵀLᵀL(d − f(m̂_i))   (B.10)
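The iteration (B.10) can be sketched numerically (assuming NumPy and L = I; the exponential forward model and starting point are illustrative, not from the text):

```python
import numpy as np

# Gauss-Newton iteration (B.10) with L = I, for the toy non-linear
# forward model f(m) = m[0] * exp(m[1] * t).
t = np.linspace(0.0, 2.0, 10)

def f(m):
    return m[0] * np.exp(m[1] * t)

def jacobian(m):
    # Columns: df/dm0 and df/dm1, evaluated at the current model
    return np.column_stack([np.exp(m[1] * t), m[0] * t * np.exp(m[1] * t)])

m_true = np.array([1.5, -0.8])
d = f(m_true)                # noiseless synthetic data

m = np.array([1.2, -0.5])    # starting model m^0
for _ in range(20):
    J = jacobian(m)
    # (B.10): m_{i+1} = m_i + (J^T J)^{-1} J^T (d - f(m_i))
    step = np.linalg.solve(J.T @ J, J.T @ (d - f(m)))
    m = m + step
    if np.linalg.norm(step) < 1e-10:
        break

print(m)   # should be close to m_true
```

Full Gauss-Newton steps are used here; when the starting model is poor or the Jacobian is ill-conditioned, a damped (Levenberg-Marquardt) variant is the usual remedy.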

C. Quadratic programming

Quadratic programming consists of minimising the quadratic norm of the residuals subject to
lower and upper bounds on each parameter, that is:

F(x) = min_x ½‖y − Ax‖²   (C.1)

subject to xl ≤ x ≤ xu (C.2)

The vector xl represents a lower limit imposed on the properties of the layers; xu is a
corresponding upper limit. The inequality must be understood as applying to corresponding
components of the vectors xl, xu and x. The algorithm finds a solution x such that the
squared residual norm is minimal, with the additional constraint that the parameters must fall
within the previously established upper and lower bounds. The function to minimise can be
represented as:

F(x) = cᵀx + ½xᵀSx   (C.3)
subject to xl ≤ x ≤ xu (C.4)
where

cᵀ = −(Aᵀy)ᵀ   (C.5)

and

S = AT A (C.6)

is the symmetric Hessian matrix. Strictly speaking, the minimised function F(x) should include
a term of the form yᵀy; however, this term does not affect the minimisation. The process is
stabilised by adding to the Hessian a term λI, with λ << 1. The Hessian is scaled so that its
diagonal is unity, and the process is modified to minimise:

F(x) = ½‖y − A′x′‖² + ½λ‖x′‖²   (C.7)

A' = AV (C.8)

x' = V −1x (C.9)

v_ii = 1 / √( Σ_{j=1}^{n} A_{ji}² )   (C.10)

and v_ij = 0 for i ≠ j.

Finally, once the problem is posed with the vectors and matrices in accordance with the last
equations, x′ is obtained and x = Vx′ is recovered. To improve convergence in the iterative
process the Levenberg-Marquardt method may be used.
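The bound-constrained problem (C.1)-(C.2) can be sketched with an off-the-shelf solver (assuming NumPy and SciPy; `lsq_linear` is a trust-region bounded least-squares routine used here in place of a dedicated quadratic-programming solver, and the bounds are illustrative):

```python
import numpy as np
from scipy.optimize import lsq_linear

# Minimise 0.5*||y - A x||^2 subject to xl <= x <= xu, as in (C.1)-(C.2).
rng = np.random.default_rng(2)
A = rng.normal(size=(12, 3))
x_true = np.array([0.8, 1.9, 0.1])
y = A @ x_true                 # noiseless synthetic data

xl = np.zeros(3)               # e.g. resistivities must be positive
xu = np.full(3, 2.0)           # upper limit from prior knowledge

res = lsq_linear(A, y, bounds=(xl, xu))
print(res.x)                   # stays within [xl, xu] and fits the data
```

Because the true parameters lie strictly inside the bounds and the data are noiseless, the constrained solution coincides with the unconstrained one; the bounds become active only when the data pull the solution outside the admissible region.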

D. Probabilistic methods

The probabilistic methods are based on the assumption that the model parameters m are
random variables described by a probability distribution. The methods utilising this approach
can be divided into two groups, classical and Bayesian. The former group contains maximum
likelihood estimation, the least-squares approach and the method of moments, while the latter
contains so-called minimum mean square error estimation, maximum a posteriori estimation
and linear minimum square estimation. Moreover, each method itself has many variants;
e.g. the least-squares approach may be sequential, constrained or unconstrained, linear or
nonlinear. Thus it is not possible to cover all methods and their modifications in such a short
report; only some of them will be presented. Probabilistic methods can be used even when
the studied problem does not exhibit strictly probabilistic behaviour.

a) Maximum Likelihood Estimation


This approach is a very popular way to obtain estimates in complicated estimation
problems, as MLE is asymptotically unbiased. In maximum likelihood estimation the
probability density function of the experimental data d given the unknown parameter m is
assumed to be known; the probability density of m is not required. The estimate m̂ is the
maximum likelihood estimate, for given data d, if the following relation is fulfilled:

p(d|m̂_MLE) ≥ p(d|m̂)   (D.1)

It means that m̂_MLE maximises the likelihood distribution p(d|m) for the given data d.
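A minimal sketch of (D.1) for a linear model with Gaussian noise (assuming NumPy and SciPy; the model and noise level are illustrative): maximising p(d|m) is equivalent to minimising the negative log-likelihood, done here numerically.

```python
import numpy as np
from scipy.optimize import minimize

# MLE for d = G m + e with Gaussian noise of known standard deviation sigma.
rng = np.random.default_rng(5)
G = rng.normal(size=(30, 2))
m_true = np.array([1.0, 2.0])
sigma = 0.2
d = G @ m_true + sigma * rng.normal(size=30)

def neg_log_likelihood(m):
    e = d - G @ m
    return 0.5 * np.sum(e**2) / sigma**2   # up to an additive constant

res = minimize(neg_log_likelihood, x0=np.zeros(2))
print(res.x)   # the MLE; for Gaussian noise it coincides with least squares
```

The example also illustrates the remark above: for Gaussian errors the maximum likelihood estimate and the least-squares estimate of Appendix A are the same.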

b) Minimum mean square estimation


The method is based on minimising the expectation of the squared norm of the estimation
error

E{‖m − m̂‖²}   (D.2)

Using the Bayesian cost method it can be shown that the minimum mean square estimate
m̂_MMSE is the conditional expectation of m given the observation d:

m̂_MMSE = ∫ m p(m|d) dm   (D.3)

This is the estimate that minimises

E{(m − m̂)²} = ∫∫ (m − m̂)² p(d, m) dd dm   (D.4)

c) Bayesian approach
The description below is based on the paper by Malinverno (2000). In the Bayesian approach,
inferences about the parameter vector θ and the parameterisation are made using probability
density functions and probabilities. In other words, the conclusions that can be drawn from a
Bayesian analysis are of the type “from what we know, there is a 95 per cent probability that
parameter θ_i has a value between 0.5 and 1.3”. It is important to stress at the outset that
these probabilities quantify uncertain knowledge. As such, they are always conditional on
something that is assumed true: these assumptions are prior information and are denoted J.
In the notation used here, the statement above on θ_i can be written as

∫_{0.5}^{1.3} p(θ_i|J) dθ_i = 0.95   (D.5)

where p(θ_i|J) is the probability density function of θ_i.


The fundamental formula in Bayesian parameter estimation is Bayes’ rule, which for a vector
of parameters θ and a vector of data d is

p(θ|d, J) = p(θ|J) p(d|θ, J) / p(d|J)   (D.6)

where p(θ|d, J) is the posterior pdf of the parameters (the distribution of θ given d and J),
p(θ|J) is the prior pdf (quantifying what is known about θ from J only), and p(d|θ, J) is the
likelihood function (the pdf of the data when the parameter vector equals θ). In other words,
what can be inferred about the parameter vector a posteriori is a combination of what is
known a priori, independent of the data, and of the information contained in the data. The
denominator in Bayes’ rule can be shown to be the integral of the numerator:

p(d|J) = ∫ p(θ|J) p(d|θ, J) dθ   (D.7)

Therefore, p(d|J) is a normalising factor that makes the integral of the posterior pdf equal
unity: since it does not depend on θ, it is typically ignored in parameter estimation. The
posterior pdf, which quantifies the uncertainty of the parameter values once the information
in the data is accounted for, is the solution of the inverse problem. The linear, Gaussian case
is examined, which is appropriate for a linear forward problem and for a prior pdf and a
likelihood function that are multivariate Gaussian distributions. The natural choice for the
prior pdf is the distribution that allows for the greatest uncertainty while obeying the
constraints imposed by the prior information, and it can be shown that this least informative
pdf is the pdf that has maximum entropy. Suppose all one knows about the parameters a
priori is that they are as likely to be positive as negative, but that their square value cannot be
too large and is expected to be σ_θ². For any single parameter, the pdf that has maximum
entropy subject to these prior constraints is a Gaussian distribution with zero mean and a
variance equal to σ_θ². If there is no a priori information on correlations amongst the H
parameters, the maximum entropy prior pdf of θ is the product of the pdfs of each parameter:

p(θ|J) = (2πσ_θ²)^{−H/2} exp( −θᵀθ / 2σ_θ² )   (D.8)

Quantities assumed a priori are denoted with a bar; for example, θ̄ is the prior mean of the
parameter vector. The likelihood function is the pdf of the measurement error vector e,
defined as the difference between the observed data d and the data predicted for a given
value of the parameter vector:

e = d − Gm = d − GAθ   (D.9)

If the errors are expected a priori to have a mean square deviation from zero equal to σ_e²
and to be uncorrelated, the likelihood function is again a Gaussian distribution:

p(d|θ, J) = (2πσ_e²)^{−N/2} exp( −eᵀe / 2σ_e² )   (D.10)

The probability of having observed the data d becomes smaller as the sum of squared errors
eᵀe becomes larger, and the likelihood quantifies the information about θ contained in the
data. If the prior pdf and the likelihood function are as shown in (D.8) and (D.10), it is easy
to show that the posterior pdf is also Gaussian, with a posterior covariance matrix Ĉ_θ and a
posterior mean vector θ̂ that are as follows:

Ĉ_θ = ( (1/σ_θ²) I + (1/σ_e²) AᵀGᵀGA )⁻¹   (D.11)

θ̂ = (1/σ_e²) Ĉ_θ AᵀGᵀd   (D.12)

where I is an H×H identity matrix. A posteriori quantities are denoted with a hat; θ̂ is the
posterior mean of the parameter vector. If the posterior pdf of θ is Gaussian, the posterior pdf
of m = Aθ is also Gaussian, with a posterior covariance matrix and a posterior mean vector
given by:

Ĉ_m = A Ĉ_θ Aᵀ   (D.13)

m̂ = A θ̂   (D.14)
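The closed forms (D.11)-(D.12) can be sketched directly (assuming NumPy; A = I so that θ and m coincide, and the prior/noise variances are illustrative assumptions):

```python
import numpy as np

# Linear Gaussian posterior (D.11)-(D.12): zero-mean Gaussian prior with
# variance sigma_theta^2, Gaussian noise with variance sigma_e^2.
rng = np.random.default_rng(3)
H = 4                                    # number of parameters
G = rng.normal(size=(25, H))
theta_true = rng.normal(size=H)
sigma_e = 0.1
d = G @ theta_true + sigma_e * rng.normal(size=25)

sigma_theta = 1.0
A = np.eye(H)                            # here m = A theta = theta
GA = G @ A

# (D.11): posterior covariance
C_post = np.linalg.inv(np.eye(H) / sigma_theta**2 + GA.T @ GA / sigma_e**2)
# (D.12): posterior mean
theta_post = C_post @ GA.T @ d / sigma_e**2

print(theta_post)                        # close to theta_true
print(np.sqrt(np.diag(C_post)))          # posterior standard deviations
```

Note how the posterior automatically blends prior and data: as σ_θ grows the estimate tends to ordinary least squares, while as σ_e grows it shrinks towards the (zero) prior mean.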

References
Abubakar A. and P. M. van den Berg, 2000, Non-linear three-dimensional inversion of cross-
well electrical measurements, Geophysical Prospecting, 48, 109–134
Al-Nuaimy W., Y. Huang, M. Nakhkash, M.T.C. Fang, V.T. Nguyen and A. Eriksen, 2000,
Automatic detection of buried utilities and solid objects with GPR using neural
networks and pattern recognition, Journal of Applied Geophysics, 43, 157–165
Annan, A.P., Smith, R.S., Lemieux, J., O'Connell, M.D., and Pedersen, R.N., 1996, Resistive-
limit time-domain AEM apparent conductivity: Geophysics, 61, 93-99
Apparao, G.S. Srinivas, V. S. Sarma, P. J. Thomas, M. S. Joshi and P. R. Prasad, 2000, Depth
of detection of highly conducting and volume polarizable targets using induced
polarization, Geophysical Prospecting, 48, 797-813
Auken E., L. Pellerin and Sørensen K. I., Mutually Constrained Inversion (MCI) of Electrical
and Electromagnetic Data,
Beard L. P., 2000, Comparison of methods for estimating earth resistivity from airborne
electromagnetic measurements, Journal of Applied Geophysics, 45, 239–259
Bhattacharya B. B., Shalivahan and M. K. Sen, 1999, Use of VFSA for resolution, sensitivity
and uncertainty analysis in 1D DC resistivity and IP inversion, Geophysical
Prospecting, 47, 411–429
Bing Z. and S.A. Greenhalgh, 2000, Cross-hole resistivity tomography using different
electrode configurations, Geophysical Prospecting, 48, 887-912
Bohm G., P. Galuppo and A. Vesnaver, 2000, 3D adaptive tomography using Delaunay
trianglesand Voronoi polygons, Geophysical Prospecting, 48, 723-744
Candansayar M. E. and A. T. Basokur, 2001, Detecting small-scale targets by the 2D
inversion of two-sided three-electrode data: application to an archaeological survey,
Geophysical Prospecting, 49, 13 - 25
Church, P, P Wort, S Gagnon and J E McFee, 2001, Performance Assessment of an Electrical
Impedance Tomography Detector for Mine-Like Objects, Proc. SPIE Conference on
Detection and Remediation Technologies for Mines and Mine-like Targets VI. Vol.
4394, Orlando, FL, USA, 16-20 April
Cozzolino K., A. Q. Howard Jr. and J. S. Protazio, 2000, A new look at multiphase invasion
with applications to borehole resistivity interpretation, Journal of Applied Geophysics,
43, 91–100
Curtis A., 1999, Optimal experiment design: cross-borehole tomographic examples, Geophys.
J. Int. 136, 637–650
Descloitres M., R. Guerin, Y. Albouy and A. Tabbagh, 2000, Improvement in TDEM
sounding interpretation in presence of induced polarization. A case study in resistive
rocks of the Fogo volcano, Cape Verde Islands, Journal of Applied Geophysics, 45, 1–
18
Dahlin T., 2000, Short note on electrode charge-up effects in DC resistivity data acquisition
using multi-electrode arrays, Geophysical Prospecting, 48, 181-187
Elliott, P., 1998. The principles and practice of FLAIRTEM: Exploration Geophysics 29, 58-
60.
Furche M. and A. Weller, 2002, Sensitivity distributions of different borehole electrode
configurations considering a model with a cylindrical coaxial boundary, Geophys. J.
Int. 149, 338–348
Furness P., 2001, A note on magnetic modelling with remanence, Journal of Applied
Geophysics, 48, 257–261

Herrmann R. B., C. J. Ammon and J. Julia, Joint inversion of receiver functions and surface-
wave dispersion for crustal structure, Report Department of Earth and Atmospheric
Science, Saint Louis University, Contract DSWA 01.98.C0160
Jackson P. D., S. J. Earl and G. J. Reece, 2001, 3D resistivity inversion using 2D
measurements of the electric field, Geophysical Prospecting, 49, 26-39
Keller G. V. and Frischknecht F. C., 1966, Electrical Methods in Geophysical prospecting,
Pergamon Press
Kis M., 2002, Generalised Series Expansion (GSE) used in DC geoelectric–seismic joint
inversion, Journal of Applied Geophysics 50, 401– 416
Koefoed O., D. P. Ghosh and G. J. Polman, 1972, Computations of type curves for
electromagnetic depth sounding with horizontal transmitting coil by means of digital
linear filter, Geophysical Prospecting 20, 406-420.
Lazaro-Mancilla O. and E. Gomez-Trevino, 2000, Ground penetrating radar inversion in 1-D:
an approach for the estimation of electrical conductivity, dielectric permittivity and
magnetic permeability, Journal of Applied Geophysics, 43, 199–213
Lee T. J., R. S. Smith and C. S. B. Hyde, 2000, Using a non-integer moment of the impulse
response to estimate the half-space conductivity, Geophysical Prospecting, 48, 887-
912
Malinverno A., 2000, A Bayesian criterion for simplicity in inverse problem parametrization,
Geophys. J. Int., 140, 267-285
Malinverno A., 2002, Parsimonious Bayesian Markov chain Monte Carlo inversion in a
nonlinear geophysical problem Geophys. J. Int., 151, 675–688
Mauriello P. and D. Patella, 1999, Resistivity anomaly imaging by probability tomography,
Geophysical Prospecting, 47, 411–429
Mitsuhata Y., T. Uchida, Y. Murakami, H. Amano, 2001, The Fourier transform of
controlled-source time-domain electromagnetic data by smooth spectrum inversion,
Geophys. J. Int. 144, 123-135
Muiuane E. A. and B. Laust, 2001, 1D inversion of DC resistivity data using a quality-based
truncated SVD, Geophysical Prospecting, 49, 387-394
Olayinka A. I. and U. Yaramanci, 2000, Use of block inversion in the 2-D interpretation of
apparent resistivity data and its comparison with smooth inversion,. Journal of Applied
Geophysics, 45, 63–81
Panissod C., D. Michot, Y. Benderitter and A. Tabbagh, 2001, On the effectiveness of 2D
electrical inversion results: an agricultural case study, Geophysical Prospecting, 49,
570-576
Poddar M., G. Ashokbabu, S. B. Singh, and P. P. Prasad, 1999, 3-D HLEM Modeling of
Tropical Weathering: A Trial from a Granitic Area of N-W Rajasthan, India, Pure
Appl. Geophys., 155 169–181
Porsani M. J., S. Niwas and N. R. Ferreira, 2001, Robust inversion of vertical electrical
sounding data using a multiple reweighted least-squares method, Geophysical
Prospecting, 49, 255-264
Roy I. G., 1999, An efficient non-linear least-squares 1D inversion scheme for resistivity and
IP sounding data, Geophysical Prospecting, 47, 527–550
Saccorotti G. and E. Del Pezzo, A probabilistic approach to the inversion of data from a
seismic array and its application to volcanic signals, Geophys. J. Int. (2000) 143, 249–
261
Sambridge M., 1999, Geophysical inversion with a neighbourhood algorithm--I. Searching a
parameter space, Geophys. J. Int., 138, 727-746
Sambridge M., 1999, Geophysical inversion with a neighbourhood algorithm II. Appraising
the ensemble, Geophys. J. Int., 138, 479–494

Sattel D. and J. Macnae, 2001, The feasibility of electromagnetic gradiometer measurements,
Geophysical Prospecting, 49, 309-320
Sharma S. P. and P. Kaikkonen, 1999, Appraisal of equivalence and suppression problems in
1D EM and DC measurements using global optimization and joint inversion,
Geophysical Prospecting, 47, 219–249
Siemon S., 2001, Improved and new resistivity-depth profiles for helicopter electromagnetic
data, Journal of Applied Geophysics, 46, 65–76
Slater L., A.M. Binley, W. Daily, R. Johnson, 2000, Cross-hole electrical imaging of a
controlled saline tracer injection, Journal of Applied Geophysics, 44, 85–102
Slichter L. B., 1955, Geophysics applied to prospecting for ores, Econ. Geol., pp.885-969
Snieder R., 1998, The role of nonlinearity in inverse problems, Inv. Probl., 14, 387 - 404
Snieder R. and J. Trampert, 1999, Inverse problems in geophysics, [in] Wavefield Inversion,
[ed.] A. Wirgin, Spriger Verlag, New York, 119-190
Storz H., W. Storz and F. Jacobs, 2000, Electrical resistivity tomography to investigate
geological structures of the earth's upper crust, Geophysical Prospecting, 48, 455-471
Szalai S. and L. Szarka, 2000, An approximate analytical approach to compute geoelectric
dipole-dipole responses due to a small buried cube, Geophysical Prospecting, 48, 871-
885
Tartaras E., M. S. Zhdanov, K. Wada, A. Saito and T. Hara, 2000, Fast Imaging of TDEM
data based on S-inversion, Journal of Applied Geophysics, 43, 15–32
Uyeshima M. and A. Schultz, 2000, Geoelectromagnetic induction in a heterogeneous sphere:
a new three-dimensional forward solver using a conservative staggered-grid finite
difference method, Geophys. J. Int., 140, 636–650
Vasco D. W., 2000, An algebraic formulation of geophysical inverse problems, Geophys. J.
Int., 142, pp. 970 – 990
van der Kruk J., J. A. C. Meekes, P.M. Van den Berg and J. T. Fokkema, 2000, An apparent-
resistivity concept for low-frequency electromagnetic sounding techniques,
Geophysical Prospecting, 48, 1033-1052
van Wijk K., J. A. Scales, W. Navidi and L. Tenorio, 2002, Data and model uncertainty
estimation for linear inversion, Geophys. J. Int., 149, 625 -632
Velis D. R. and T. J. Ulrych, 2001, Simulated annealing ray tracing in complex three-
dimensional media, Geophys. J. Int., 145, 447–459
Vickery A. C. and B. A. Hobbs, 2002, The effect of subsurface pipes on apparent-resistivity
measurements, Geophysical Prospecting, 50, 1-13
Vozoff, K. and Jupp, D. L. B., 1975, Joint inversion of geophysical data: Geophys. J.R. Astr.
Soc., 42, 977-991.
Ward S. H., 1959 AFMAG – airborne and ground, Geophysics, 24, 531 - 546
Weidelt P., 2000, Electromagnetic edge diffraction revisited: the transient field of magnetic
dipole sources, Geophys. J. Int., 141, 605-622
Weller A., W. Frangos and M. Seichter, 2000, Three-dimensional inversion of induced
polarisation data from simulated waste, Journal of Applied Geophysics, 44, 67–83
Wort P, P Church and S Gagnon, 1999, Preliminary Assessment of Electrical Impedance
Tomography Technology to Detect Mine-like Objects, Proceedings of SPIE
Conference on Detection and Remediation of Mines and Mine-like Targets IV, Vol.
3710, Orlando FL USA, 5-9 April 1999, pp.895-905.
Xiang J., N.B. Jones, D. Cheng and F.S. Schlindwein, 2002, A new method to discriminate
between a valid IP response and EM coupling effects, Geophysical Prospecting, 50,
565-576

Yi M.-J., J.-H. Kim, Y. Song, S.-J. Cho, S.-H. Chung and J.-H. Suh, 2001, Three-dimensional
imaging of subsurface structures using resistivity data, Geophysical Prospecting, 49,
483-497
Yin C., 2000, Geoelectrical inversion for a one-dimensional anisotropic model and inherent
non-uniqueness, Geophys. J. Int., 140, 11-23

Equipment manufacturers and rental companies

• http://www.aurorageosciences.com – company involved in geologic exploration of


minerals, oil and gas and geotechnical exploration in Northern Canada and Alaska.
Methods: magnetic, gravity, EM, GPR, seismic.
• http://www.geofisik.com - Elliott Geophysics International. Methods: induced
polarisation, airborne surveys (magnetic, radiometric).
• http://www.l-gm.de – “... a small and flexible company we offer you a broad variety of
geophysical instruments, which we will build according to your needs.” - from web page
of the company. Manufacturer of various resistivity meters and more.
• http://www.fugroground.com – Fugro Ground Geophysics – company that performs
geophysical research and can provide the necessary instrumentation for the technologies below:
1. Electromagnetics
Time Domain Systems: SMARTemv, ProTEM, SiroTEM Mk.3, Zonge GDP-16
Frequency Domain Systems: Max-Min HLEM, EM34-3
2. Gravity: LaCoste & Romberg Model G meters, Scintrex CG-3 and CG-3M Autograv
meters
3. Induced Polarization: Scintrex Systems, Zonge Systems, IRIS Instruments portable
systems, SmartIP Receivers
4. Radiometrics: Exploranium GR-256, Exploranium GR-320 (0.35 to 8.4L crystals)
5. Magnetics: Scintrex & Geometrics Proton Magnetometers, Cesium Vapour Magnetometers
6. Seismic: SmartSeis, OYO DAS-2

• http://www.abem.se - manufacturer of geophysical instrumentation. The range of devices
includes resistivity meters, seismic devices, VLF receivers, and vibration meters.
• http://www.iris-instruments.com – manufacturer of various resistivity, induced
polarisation, VLF and proton magnetometer systems. France.
• http://www.thorde.com – “THOR Geophysical provides comprehensive geophysical field
exploration and reservoir development services via field data acquisition.” – from the
company web page. Methods: multi-electrode electrical resistivity, electromagnetics,
ground penetrating radar and magnetics complete the portfolio of geophysical services.
Germany
• http://www.geonics.com – Geonics Limited, manufacturer of various ground conductivity
meters, metal detectors, Time-domain electromagnetic systems and VLF equipment.
Canada
• http://www.trxconsulting.com – Consultation and geo-service company from Venezuela.

Collection of selected papers’ summaries
Title Authors Method Description

Geoelectrical inversion for a one- Changchun Yin Electroimpedance Anisotropic media description, non-linear least square
dimensional anisotropic model and inversion, layered structure,
inherent non-uniqueness
A Bayesian criterion for simplicity in Alberto Malinverno General inversion, General data inversion using Bayesian approach
inverse problem parametrization application to
gravity data
Electromagnetic edge diffraction Peter Weidelt Electromagnetic Exact solution – perfectly conducting half-plane embedded in
revisited: the transient field of diffraction uniformly conducting host energized by unit step impulse of an
magnetic dipole sources arbitrary oriented magnetic dipole
Geoelectromagnetic induction in a M. Uyeshima A. Electromagnetic Second order in the magnetic field H differential equations
heterogeneous sphere: a new Schultz induction derived from the integral form of Maxwell equations. Finite
three-dimensional forward solver difference method.
using a conservative staggered-grid
finite difference method
The Fourier transform of controlled- Yuji Mitsuhata, Electromagnetic
Comparison of the FEM response with the TEM response in
source time-domain Toshihiro Uchida, induction the near zone. Importance of the imaginary (quadrature)
electromagnetic data by smooth Yutaka Murakami component of the FEM response and the superiority of the
spectrum inversion and Hiroshi Amano TEM measurements in the near zone. Least-squares inversion
A horizontally two-layered earth with a horizontal electric
theory, which uses a smoothness constraint for the estimation
dipole source (Tx) with a unit moment and a vertical magnetic
receiver of the FEM response. Application of the technique to
synthetic data with Gaussian noise and real TEM data.
Geophysical inversion with a Malcolm Sambridge General inversion, Derivative free reconstruction algorithm for data
neighbourhood algorithm--I. application to reconstruction. Description of the algorithm, comparison with
Searching a parameter space seismic data Monte Carlo, Simulated Anealing and genetic algorithms
Geophysical inversion with a Malcolm Sambridge
neighbourhood algorithm II.
Appraising the ensemble
Optimal experiment design: cross- Andrew Curtis Cross-borehole A genetic algorithm approach to design optimal signal sources
borehole tomographic examples tomography and receivers for cross borehole tomography is presented.
General case study.
A probabilistic approach to the G. Saccorotti, Ultrasonic array Probabilistic method to obtain estimate of slowness vector
inversion of data from a seismic E. Del Pezzo monitoring volcanic when using ultrasonic sensors array monitoring natural

97
array and its application to volcanic activity, time volcanic sources.
signals domain
Sensitivity distributions of different M. Furche and A. Electrical resistivity, Sensitivity distribution in cylindrical coordinates for borehole
borehole electrode configurations Weller borehole electrical resistivity for different electrode configurations.
considering a model with a
cylindrical coaxial boundary
Simulated annealing ray tracing inDanilo R. Seismic ray-tracing, Simulated annealing has been applied to seismic ray tracing
complex three-dimensional media Velis,Tadeusz J. simulated to determine the minimum traveltime ray path connecting two
Ulrych annealing points in complex 3-D media.
Appraisal of equivalence and S.P. Sharma and P. Simulated Global optimization with very fast simulated annealing (VFSA)
suppression problems in 1D EM Kaikkonen annealing, in association with joint inversion is performed for 1D earth
and DC measurements using global inversion, structures. The inherent problems of equivalence and
optimization and joint inversion1 electromagnetic suppression in electromagnetic (EM) and direct current (DC)
and direct current resistivity methods are studied. Synthetic phase data from
data with noise multifrequency sounding using a horizontal coplanar coil
system and synthetic apparent resistivity data from
Schlumberger DC resistivity measurements are inverted
individually and jointly over different types of layered earth
structures. Noisy data are also inverted.
1D inversion of DC resistivity data Elonio A. Muiuane, Direct current A inversion scheme is presented, which adopts a truncation
using a quality-based truncated Laust B. Pedersen resistivity – vertical criterion based on the optimization of the total model variance.
SVD resistivity sounding, This consists of two terms: (i) the term associated with the
inversion, truncated variance of statistically significant principal components, i.e.
SVD the standard model estimate variance, and (ii) the term
associated with statistically insignificant principal components
of the solution, i.e. the variance of the bias term. As an initial
model for the start of iterations, a multilayered homogeneous
half-space is used whose layer thicknesses increase
logarithmically with depth to take into account the decrease of
the resolution of the DC resistivity technique with depth. The
presented inversion scheme has been tested on synthetic and
field data. The fact that the truncation level in the SVD is
determined intrinsically in the course of inversion proves to be
a major advantage over other inversion schemes where it is
set by the user.
Detecting small-scale targets by the 2D inversion of two-sided three-electrode data: application to an archaeological survey
M. Emin Candansayar and Ahmet T. Basokur
Keywords: 2D electrical resistivity data inversion, different electrode combination
Summary: Real-data inversion; comparison between three electrode configurations. The two-sided three-electrode apparent-resistivity data are obtained by the application of left- and right-hand pole-dipole arrays that also permit the computation of four-electrode and dipole-dipole apparent-resistivity values without actually measuring them. Synthetic apparent-resistivity data sets of the dipole-dipole, four-electrode and two-sided three-electrode arrays are calculated for models that simulate buried tombs.
On the effectiveness of 2D electrical inversion results: an agricultural case study
C. Panissod, D. Michot, Y. Benderitter, A. Tabbagh
Keywords: electrical resistivity tomography, results from the experiment (Res2DInv by Loke)
Summary: Electrical resistivity tomography was used in Beauce (France) to assess the water extraction by corn plants (evapotranspiration). The acquired pseudosections show conductive anomalies under the plants. A 2D inversion of the measurements led us to identify clear resistive features associated with the water losses under the corn-plant rows. New models have been calculated with two different 3D algorithms (finite-difference and moment-method) to take into account the 3D structure of the ground and to confirm that periodic resistive features may generate shifted apparent-resistivity anomalies.
3D adaptive tomography using Delaunay triangles and Voronoi polygons
Gualtiero Bohm, Paolo Galuppo, Aldo Vesnaver
Keywords: seismic tomography
Summary: Automatic regridding of the region of interest.

Non-linear three-dimensional inversion of cross-well electrical measurements
Aria Abubakar and Peter M. van den Berg
Keywords: borehole resistivity, 3D inversion
Summary: The reconstruction of the conductivity distribution of a three-dimensional domain. The measured secondary electric potential field is represented in terms of an integral equation for the vector electric field. This integral equation is taken as the starting point to develop a non-linear inversion method, the so-called contrast source inversion (CSI) method. The CSI method considers the inverse scattering problem as an inverse source problem in which the unknown contrast source (the product of the total electric field and the conductivity contrast) in the object domain is reconstructed by minimizing the object and data error using a conjugate-gradient step, after which the conductivity contrast is updated by minimizing only the error in the object.
3D resistivity inversion using 2D measurements of the electric field
P.D. Jackson, S.J. Earl and G.J. Reece
Keywords: electrical resistivity tomography
Summary: Field and 'noisy' synthetic measurements of electric-field components have been inverted into 3D resistivities by smoothness-constrained inversion. A 2D electrode grid (20 x 10), incorporating 12 current-source electrodes, was used for both the practical and numerical experiments; this resulted in 366 measurements being made for each current-electrode configuration. Consequently, when using this array for practical field surveys, 366 measurements could be acquired simultaneously, making the upper limit on the speed of acquisition an order of magnitude faster than a comparable conventional pole-dipole survey.
Three-dimensional imaging of subsurface structures using resistivity data
Myeong-Jong Yi, Jung-Ho Kim, Yoonho Song, Seong-Jun Cho, Seung-Hwan Chung, Jung-Hee Suh
Keywords: 3D inversion, ERT, FEM, topography
Summary: A three-dimensional inverse scheme for carrying out DC resistivity surveys, incorporating complicated topography as well as arbitrary electrode arrays, was developed. The algorithm is based on the finite-element approximation to the forward problem, so that the effect of topographic variation on the resistivity data is effectively evaluated and incorporated in the inversion. Furthermore, we have enhanced the resolving power of the inversion using the active constraint balancing method. Numerical verifications show that a correct earth image can be derived even when complicated topographic variation exists. By inverting the real field data acquired at a site for an underground sewage disposal plant, we obtained a reasonable image of the subsurface structures, which correlates well with the surface geology and drill log data.
An approximate analytical approach to compute geoelectric dipole-dipole responses due to a small buried cube
Sandor Szalai, Laszlo Szarka
Keywords: analytical solution, DC resistivity
Summary: A simple analytical solution is presented for computing direct current (DC) electric field distortion due to a small cube in a homogeneous half-space, measured with a dipole-dipole array on the surface.
An apparent-resistivity concept for low-frequency electromagnetic sounding techniques
J. van der Kruk, J.A.C. Meekes, P.M. van den Berg, J.T. Fokkema
Keywords: apparent resistivity, multifrequency
Summary: Apparent resistivity is a useful concept for initial quickscan interpretation and quality checks in the field, because it represents the resistivity properties of the subsurface better than the raw data. An apparent-resistivity concept is applied beyond the low-induction zone, for which the use of different source-receiver configurations is not needed. This apparent-resistivity concept was formerly used to interpret the electromagnetic transients that are associated with the turn-off of the transmitter current. The concept uses both amplitude and phase information and can be applied for a wide range of frequencies and offsets, resulting in a unique apparent resistivity for each individual (offset, frequency) combination. It is based on the projection of the electromagnetic field data on to the curve of the field of a magnetic dipole on a homogeneous half-space and implemented using a non-linear optimization scheme. This results in a fast and efficient estimation of apparent resistivity versus frequency or offset for electromagnetic sounding, and also gives a new perspective on electromagnetic profiling.
The effect of subsurface pipes on apparent-resistivity measurements
Anna C. Vickery, Bruce A. Hobbs
Keywords: ERT, artefact removal
Summary: Subsurface conducting pipes can be either a target or a noise source in geophysical surveying. Their effect as a noise source in resistivity imaging can be so severe as to render the geophysical data uninterpretable. A method is developed here for identifying, locating and removing the effects of subsurface conducting pipes from image data, thus revealing the background resistivity structure.
Short note on electrode charge-up effects in DC resistivity data acquisition using multi-electrode arrays
Torleif Dahlin
Keywords: DC resistivity, electrode effects
Summary: The measurement sequence used in DC resistivity data acquisition with multi-electrode arrays should be carefully designed so as to minimize electrode charge-up effects. These effects can be some orders of magnitude larger than the induced signal and remain at significant levels for tens of minutes. Even when using a plus-minus-plus type of measurement cycle, one should avoid making potential measurements with an electrode that has just been used to inject current, as the decay immediately after current turn-off is clearly non-linear.
Electrical resistivity tomography to investigate geological structures of the earth's upper crust
H. Storz, W. Storz and F. Jacobs
Keywords: ERT, measurement report
Summary: It is important to have detailed knowledge of the electrical properties of the earth's crust in order to recognize geological structures and to understand tectonic processes. In the area surrounding the German Continental Deep Drilling Project (KTB), we have used DC dipole-dipole soundings to investigate the electrical conductivity distribution down to a depth of several kilometres.
An efficient non-linear least-squares 1D inversion scheme for resistivity and IP sounding data
Indrajit G. Roy
Keywords: induced polarisation, apparent DC resistivity
Summary: The inverse problem may be reduced by introducing damping into the system of equations. It is shown that an appropriate choice of the damping parameter, obtained adaptively, and the use of a conjugate-gradient algorithm to solve the normal equations make the 1D inversion scheme efficient and robust. The scheme uses an optimal damping parameter that is dependent on the noise in the data, in each iterative step.
The feasibility of electromagnetic gradiometer measurements
Daniel Sattel, James Macnae
Keywords: transient electromagnetic surveys
Summary: The quantities measured in transient electromagnetic (TEM) surveys are usually either magnetic field components or their time derivatives. Alternatively it might be advantageous to measure the spatial derivatives of these quantities. Such gradiometer measurements are expected to have lower noise levels due to the negative interference of ambient noise recorded by the two receiver coils. Error propagation models are used to compare quantitatively the noise sensitivities of conventional and gradiometer TEM data. To achieve this, eigenvalue decomposition is applied on synthetic data to derive the parameter uncertainties of layered-earth models. The results indicate that near-surface gradient measurements give a superior definition of the shallow conductivity structure, provided noise levels are 20-40 times smaller than those recorded by conventional EM instruments. For a fixed-wing towed-bird gradiometer system to be feasible, a noise reduction factor of at least 50-100 is required. One field test showed that noise reduction factors in excess of 60 are achievable with gradiometer measurements. However, other collected data indicate that the effectiveness of noise reduction can be hampered by the spatial variability of noise such as that encountered in built-up areas. Synthetic data calculated for a vertical plate model confirm the limited depth of detection of vertical gradient data but also indicate some spatial derivatives which offer better lateral resolution than conventional EM data. This high sensitivity to the near-surface conductivity structure suggests the application of EM gradiometers in areas such as environmental and archaeological mapping.
Depth of detection of highly conducting and volume polarizable targets using induced polarization
A. Apparao, G.S. Srinivas, V. Subrahmanya Sarma, P.J. Thomas, M.S. Joshi, P. Rajendra Prasad
Keywords: induced polarisation effect, DC apparent resistivity, high-frequency resistivity
Summary: We define the apparent frequency effect in induced polarization (IP) as the relative difference between apparent resistivities measured using DC excitation on the one hand and high-frequency excitation (when the IP effect vanishes) on the other.
Cross-hole resistivity tomography using different electrode configurations
Zhou Bing, S.A. Greenhalgh
Keywords: DC electric surveying, electrode configuration
Summary: This paper investigates the relative merits and effectiveness of cross-hole resistivity tomography using different electrode configurations for four popular electrode arrays: pole-pole, pole-bipole, bipole-pole and bipole-bipole.
Using a non-integer moment of the impulse response to estimate the half-space conductivity
Terry J. Lee, Richard S. Smith, Christophe S.B. Hyde
Keywords: electromagnetic impulse response
Summary: Interpretation of EM impulse data using moments; half-order moments in the estimation of the apparent ground conductivity.

Robust inversion of vertical electrical sounding data using a multiple reweighted least-squares method
Milton J. Porsani, Sri Niwas, Niraldo R. Ferreira
Keywords: resistivity inverse problem, vertical electrical sounding
Summary: An improved least-squares approach based on multiple reweighted least squares.
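Reweighted least squares in general can be sketched as follows. This is a generic iteratively reweighted least-squares (IRLS) toy approximating an L1 misfit, not the specific multiple-reweighting scheme of the paper; the straight-line example and the outlier are invented for illustration.

```python
import numpy as np

def irls(G, d, n_iter=20, eps=1e-6):
    """Iteratively reweighted least squares approximating an L1 misfit:
    each pass down-weights data points with large residuals."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(d - G @ m), eps)   # L1-type weights
        sw = np.sqrt(w)
        m = np.linalg.lstsq(sw[:, None] * G, sw * d, rcond=None)[0]
    return m

# Straight-line fit d = 2 + 3x with one gross outlier in the data.
x = np.linspace(0.0, 1.0, 11)
G = np.column_stack([np.ones_like(x), x])
d = 2.0 + 3.0 * x
d[5] += 10.0                                  # corrupted sample
m_l2 = np.linalg.lstsq(G, d, rcond=None)[0]   # ordinary least squares
m_l1 = irls(G, d)                             # robust reweighted fit
```

The ordinary L2 fit is dragged off by the outlier, while the reweighted fit recovers the line through the ten uncorrupted points; this robustness to non-Gaussian noise is the motivation for reweighting in sounding-data inversion.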
A new method to discriminate between a valid IP response and EM coupling effects
Jianping Xiang, N.B. Jones, Daizhan Cheng, F.S. Schlindwein
Keywords: induced polarisation, electromagnetic method
Summary: Discrimination between valid IP responses and EM coupling effects using a special finite IP-EM model and least squares.
Resistivity anomaly imaging by probability tomography
Paolo Mauriello, Domenico Patella
Keywords: ERT
Summary: Geoelectric probability tomography applied to apparent-resistivity data.
Use of VFSA for resolution, sensitivity and uncertainty analysis in 1D DC resistivity and IP inversion
Bimalendu B. Bhattacharya, Shalivahan, Mrinal K. Sen
Keywords: DC resistivity, induced polarisation, very fast simulated annealing, vertical electrical sounding
Summary: Results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding.
Comparison of methods for estimating earth resistivity from airborne electromagnetic measurements
Les P. Beard
Keywords: AEM (airborne electromagnetic system)
Summary: Comparison of three different methods for estimating earth resistivity from AEM measurements: two lookup-table methods and a linearised, iterative inversion.
Use of block inversion in the 2-D interpretation of apparent resistivity data and its comparison with smooth inversion
A.I. Olayinka, U. Yaramanci
Keywords: ERT, multielectrode resistivity array
Summary: Additional information is introduced into the inversion routine by connecting several finite elements into one block of equal properties.
A new look at multiphase invasion with applications to borehole resistivity interpretation
K. Cozzolino, A.Q. Howard Jr., J.S. Protazio
Keywords: borehole resistivity imaging
Summary: In well log interpretation, it is frequently necessary to correct logs for invasion. Invasion occurs in permeable formations when there is a radial differential pressure (RDP) between the borehole and formation. Other factors on which invasion depends include saturation, mobility, pressure (RDP) and capillary pressure, permeability and viscosity of fluids, and temperature transient effects associated with the mud filtrate injected into the formation. Thus, simulation of realistic invasion is not an easy task. This work reviews the famous Buckley-Leverett mathematical model in cylindrical coordinates appropriate for borehole geometries. The model predicts multiphase invasion in porous media when gravity, capillary pressure, and mud cake can be neglected. One application is to correct logging-while-drilling (LWD) and wireline resistivity logs for time-dependent invasion and formation temperature effects.
Cross-hole electrical imaging of a controlled saline tracer injection
L. Slater, A.M. Binley, W. Daily, R. Johnson
Keywords: cross-borehole electrical imaging
Summary: Experimental tank with controlled saline injection into a known structure; solute transport, pixel breakthrough.
Fast imaging of TDEM data based on S-inversion
Efthimios Tartaras, Michael S. Zhdanov, Kazushige Wada, Akira Saito, Toshiaki Hara
Keywords: time-domain electromagnetic data
Summary: Fast S-inversion of EM data using the thin-sheet approach.
Ground penetrating radar inversion in 1-D: an approach for the estimation of electrical conductivity, dielectric permittivity and magnetic permeability
O. Lazaro-Mancilla, E. Gomez-Trevino
Keywords: GPR
Summary: Inversion of GPR data; linearisation of the damped E-field wave equation to solve the inverse problem and estimate the electrical properties.
Automatic detection of buried utilities and solid objects with GPR using neural networks and pattern recognition
W. Al-Nuaimy, Y. Huang, M. Nakhkash, M.T.C. Fang, V.T. Nguyen, A. Eriksen
Keywords: GPR
Summary: Neural networks, image segmentation, feature analysis, pattern recognition.
Mathematical models and controlled experimental studies: three-dimensional inversion of induced polarization data from simulated waste
A. Weller, W. Frangos, M. Seichter
Keywords: induced polarisation
Summary: Induced polarisation, dipole-dipole method, 3D SIRT imaging.
A note on magnetic modelling with remanence
Peter Furness
Keywords: magnetic method
Summary: Treatment of the remanence field technique.
Improved and new resistivity-depth profiles for helicopter electromagnetic data
B. Siemon
Keywords: helicopter electromagnetic data
Summary: Improved inversion of HEM data using multifrequency methods.
Improvement in TDEM sounding interpretation in presence of induced polarization. A case study in resistive rocks of the Fogo volcano, Cape Verde Islands
Marc Descloitres, Roger Guerin, Yves Albouy, Alain Tabbagh, Michel Ritz
Keywords: time-domain electromagnetic method, IP, DC resistivity
Summary: Case study.

Original summaries of papers on inverse problems in geophysics
Geoelectrical inversion for a one-dimensional anisotropic model and inherent non-uniqueness
Changchun Yin
SUMMARY
It has been shown that the inversion of geoelectrical sounding data from an anisotropic underground
structure with an isotropic model can strongly distort the image of the resistivity distribution of the
Earth. Because of this it is useful to extend the models to include an anisotropic earth. The inverse
model used in this paper is a layered earth with general anisotropy, such that a 3x3 resistivity tensor is
assigned to each layer. This symmetric, positive-definite resistivity tensor is parametrized by three
principal resistivities and three Euler angles. Therefore, together with the thickness, seven parameters
for each layer of the earth have to be resolved. The Marquardt-Levenberg method is used to invert the
Schlumberger resistivity sounding data with an anisotropic model. The inversion results using
synthetic data show that for an anisotropic earth, rather than all parameters, only particular parameter
combinations can be resolved uniquely. Theoretical investigations support these conclusions and
confirm that, unlike the general non-uniqueness in geoelectrical inversion resulting from inaccurate,
insufficient or inconsistent data, the non-uniqueness of geoelectrical inversion for an anisotropic model
is an inherent one, which means that no unique solution can be obtained, even if perfect data are
assumed.
Key words: anisotropy, inherent non-uniqueness, inversion, resistivity tensor.
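The Marquardt-Levenberg iteration used above can be sketched for a toy one-parameter problem. This is illustrative only: the exponential "sounding curve" and the fixed damping parameter are assumptions for the sketch, not the paper's anisotropic layered model.

```python
import numpy as np

def marquardt_step(jac, residual, lam):
    """One damped Gauss-Newton (Marquardt-Levenberg) model update.
    A large damping parameter lam gives small, gradient-like steps."""
    JtJ = jac.T @ jac
    rhs = jac.T @ residual
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)

# Toy non-linear fit: d_i = exp(-m * x_i), a stand-in for sounding data.
x = np.linspace(0.0, 2.0, 20)
m_true, m = 1.5, 0.2            # true model and starting guess
d = np.exp(-m_true * x)
lam = 1e-2
for _ in range(50):
    pred = np.exp(-m * x)
    jac = (-x * pred)[:, None]  # derivative of pred with respect to m
    m += marquardt_step(jac, d - pred, lam)[0]
```

Because the damping only shortens the step, the fixed point of the iteration is still the least-squares solution; in this noise-free toy the iteration converges to the true parameter.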

A Bayesian criterion for simplicity in inverse problem parametrization


Alberto Malinverno
SUMMARY
To solve a geophysical inverse problem in practice means using noisy measurements to estimate a
finite number of parameters. These parameters in turn describe a continuous spatial distribution of
physical properties. (For example, the continuous solution may be expressed as the linear
combination of a number of orthogonal functions; the parameters are the coefficients multiplying each of
these functions.) As the solution is non-unique, estimating the parameters of interest also requires a
measure of their uncertainty for the given data. In a Bayesian approach, this uncertainty is quantified by
the posterior probability density function (pdf) of the parameters. This 'parameter estimation', however,
can only be carried out for a given way of parametrizing the problem; choosing a parametrization is
the 'model selection' problem. The purpose of this paper is to illustrate a Bayesian model selection
criterion that ranks different parametrizations from their posterior probability for the given set of
geophysical measurements. This posterior probability is computed using Bayes' rule and is higher for
parametrizations that better fit the data, which are simple in that they have fewer free parameters, and
which result in a posterior pdf that departs the least from what is expected a priori. Bayesian model
selection is illustrated for a gravitational edge effect inverse problem, where the variation of density
contrast with depth is to be inferred from gravity gradient measurements at the surface. Two
parametrizations of the density contrast are examined, where the parameters are the coefficients of an
orthogonal function expansion or the values at the nodes of a cubic spline interpolation. Bayesian
model selection allows one to decide how many free parameters should be used in either
parametrization and which of the two parametrizations is preferred given the data. While the
illustration used here is for a simple linear problem, the proposed Bayesian criterion can be extended
to the general non-linear case.
Key words: inverse problem, inversion.
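The flavour of such a selection criterion can be sketched with the cruder Bayesian information criterion (BIC), which likewise trades data misfit against the number of free parameters. This is a rough stand-in for the full evidence computation in the paper; the polynomial parametrization and noise level are invented for illustration.

```python
import numpy as np

def bic(d, pred, k):
    """Bayesian information criterion for Gaussian residuals:
    n*log(RSS/n) + k*log(n) penalizes misfit plus model size."""
    n = d.size
    rss = np.sum((d - pred) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 50)
d = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(50)   # truly linear data
candidates = (1, 2, 5, 9)                           # numbers of coefficients
scores = []
for k in candidates:
    coef = np.polyfit(x, d, k - 1)                  # degree k-1 polynomial
    scores.append(bic(d, np.polyval(coef, x), k))
best = candidates[int(np.argmin(scores))]
```

The constant model underfits and the high-order polynomials buy only a marginal misfit reduction at the price of the k*log(n) penalty, so the two-coefficient (linear) parametrization is selected.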

Electromagnetic edge diffraction revisited: the transient field of magnetic dipole sources
Peter Weidelt
SUMMARY
A surprisingly simple exact solution is derived for the transient electromagnetic field scattered by a
perfectly conducting half-plane, which is embedded in a uniformly conducting host and energized by a
unit step impulse of an arbitrarily oriented magnetic dipole. Despite its simplicity, the model has some
relevance for geophysical applications (e.g. mineral exploration), provides insight into the physics of
the transient scattering process, and has merits in validating numerical 2.5-D or 3-D codes. The
diffraction of electromagnetic waves at a perfectly conducting edge is one of the few vectorial
diffraction problems that allows an exact treatment. In the past, attention has been confined to
harmonic excitation in a lossless dielectric host, whereas the transient field in a lossy medium has
escaped attention. In the quasi-static approximation in particular, this solution turns out to be simple
compared to the explicit form of the field using harmonic excitation. However, even the inclusion of
displacement currents, which may be necessary when applying transient electromagnetic methods to
environmental geophysics, does not lead to complications. The electric field and the time derivative of
the magnetic field are given explicitly both in the quasi-static limit and with the inclusion of displacement
currents. The late-time behaviour of the field is remarkable: whereas the full-space parts of these fields
show the well-known t^(-5/2) decay, the diffracted wave emerging from the edge decays only as t^(-2) and
therefore dominates the field geometry at late times. The appendices briefly treat the quasi-static
transient field of a grounded electric dipole and sketch the formal solution for a perfectly conducting
half-plane in a layered host.
Key words: diffraction, electromagnetic diffusion, electromagnetic induction, transient
electromagnetic fields.

Geoelectromagnetic induction in a heterogeneous sphere: a new three-dimensional forward solver using a conservative staggered-grid finite difference method
M. Uyeshima and A. Schultz
SUMMARY
A conservative staggered-grid finite difference method is presented for computing the electromagnetic
induction response of an arbitrary heterogeneous conducting sphere by external current excitation.
This method is appropriate as the forward solution for the problem of determining the electrical
conductivity of the Earth's deep interior. This solution in spherical geometry is derived from that
originally presented by Mackie et al. (1994) for Cartesian geometry. The difference equations that we
solve are second order in the magnetic field H, and are derived from the integral form of Maxwell's
equations on a staggered grid in spherical coordinates. The resulting matrix system of equations is
sparse, symmetric, real everywhere except along the diagonal and ill-conditioned. The system is
solved using the minimum residual conjugate gradient method with preconditioning by incomplete
Cholesky decomposition of the diagonal sub-blocks of the coefficient matrix. In order to ensure there is
zero H divergence in the solution, corrections are made to the H field every few iterations. In order to
validate the code, we compare our results against an integral equation solution for an azimuthally
symmetric, buried thin spherical shell model (Kuvshinov & Pankratov 1994), and against a quasi-
analytic solution for an azimuthally asymmetric configuration of eccentrically nested spheres (Martinec
1998).
Key words: electrical conductivity, electromagnetic induction, electromagnetic modelling, mantle,
numerical techniques.

The Fourier transform of controlled-source time-domain electromagnetic data by smooth spectrum inversion
Yuji Mitsuhata, Toshihiro Uchida, Yutaka Murakami and Hiroshi Amano
SUMMARY
In controlled-source electromagnetic measurements in the near zone or at low frequencies, the real
(in-phase) frequency-domain component is dominated by the primary field. However, it is the
imaginary (quadrature) component that contains the signal related to a target deeper than the
source-receiver separation. In practice, it is difficult to measure the imaginary component because of the
dominance of the primary field. In contrast, data acquired in the time domain are more sensitive to the
deeper target owing to the absence of the primary field. To estimate the frequency-domain responses
reliably from the time-domain data, we have developed a Fourier transform algorithm using a least-
squares inversion with a smoothness constraint (smooth spectrum inversion). In implementing the
smoothness constraint as a priori information, we estimate the frequency response by maximizing the
a posteriori distribution based on Bayes' rule. The adjustment of the weighting between the data misfit
and the smoothness constraint is accomplished by minimizing Akaike's Bayesian Information Criterion
(ABIC). Tests of the algorithm on synthetic and field data for the long-offset transient electromagnetic
method provide reasonable results. The algorithm can handle time-domain data with a wide range of
delay times, and is effective for analysing noisy data.
Key words: Bayes, electromagnetics, frequency domain, least squares, smooth spectrum, time
domain.
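Smoothness-constrained least squares of this general kind can be sketched as follows. This is a generic Tikhonov-style toy with a second-difference roughening matrix; the operator G and the weighting alpha are invented, and the ABIC-based choice of weighting used in the paper is not reproduced.

```python
import numpy as np

def smooth_lsq(G, d, alpha):
    """Least squares with a second-difference smoothness penalty:
    minimizes |G m - d|^2 + alpha * |L m|^2 for a roughening operator L."""
    n = G.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second differences
    A = G.T @ G + alpha * (L.T @ L)
    return np.linalg.solve(A, G.T @ d)

# Under-determined toy problem: 5 averages of a 20-sample smooth "spectrum".
rng = np.random.default_rng(2)
n = 20
true = np.sin(np.linspace(0.0, np.pi, n))
G = rng.uniform(size=(5, n))
G /= G.sum(axis=1, keepdims=True)            # each row is a weighted average
d = G @ true
m = smooth_lsq(G, d, alpha=1e-2)
```

With only 5 data and 20 unknowns the plain normal equations are singular; the smoothness term selects one well-behaved solution among the infinitely many that fit the data.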

Geophysical inversion with a neighbourhood algorithm--I. Searching a parameter space

Malcolm Sambridge
SUMMARY
This paper presents a new derivative-free search method for finding models of acceptable data fit in a
multidimensional parameter space. It falls into the same class of method as simulated annealing and
genetic algorithms, which are commonly used for global optimization problems. The objective here is
to find an ensemble of models that preferentially sample the good data-fitting regions of parameter
space, rather than seeking a single optimal model. (A related paper deals with the quantitative
appraisal of the ensemble.) The new search algorithm makes use of the geometrical constructs known
as Voronoi cells to drive the search in parameter space. These are nearest neighbour regions
defined under a suitable distance norm. The algorithm is conceptually simple, requires just two `tuning
parameters', and makes use of only the rank of a data fit criterion rather than the numerical value. In
this way all difficulties associated with the scaling of a data misfit function are avoided, and any
combination of data fit criteria can be used. It is also shown how Voronoi cells can be used to enhance
any existing direct search algorithm, by intermittently replacing the forward modelling calculations with
nearest neighbour calculations. The new direct search algorithm is illustrated with an application to a
synthetic problem involving the inversion of receiver functions for crustal seismic structure. This is
known to be a non-linear problem, where linearized inversion techniques suffer from a strong
dependence on the starting solution. It is shown that the new algorithm produces a sophisticated type
of `self-adaptive' search behaviour, which to our knowledge has not been demonstrated in any
previous technique of this kind.
Key words: numerical techniques, receiver functions, waveform inversion.
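The rank-only, derivative-free character of such a search can be sketched as follows. This is a loose toy: the actual neighbourhood algorithm resamples uniformly inside Voronoi cells with a Gibbs-type walk, whereas this sketch simply perturbs the best-ranked ensemble members; the misfit function and all tuning values are invented.

```python
import numpy as np

def rank_based_search(misfit, bounds, n_init=32, n_resample=8, n_iter=30, seed=0):
    """Derivative-free direct search in the spirit of ensemble methods:
    only the *rank* of the misfit values is used, and new samples are
    drawn around the currently best-ranked models."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pts = rng.uniform(lo, hi, size=(n_init, len(lo)))
    vals = np.array([misfit(p) for p in pts])
    for _ in range(n_iter):
        best = pts[np.argsort(vals)[:n_resample]]           # rank only
        new = np.clip(best + rng.normal(0.0, 0.1, size=best.shape), lo, hi)
        pts = np.vstack([pts, new])
        vals = np.concatenate([vals, [misfit(p) for p in new]])
    return pts[np.argmin(vals)]

# Two-parameter toy misfit with its minimum at (0.3, 0.7).
f = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
best = rank_based_search(f, (np.array([0.0, 0.0]), np.array([1.0, 1.0])))
```

Because only ranks are used, any monotonic rescaling of the misfit leaves the search unchanged, which is the scaling-robustness property emphasized in the paper.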

Geophysical inversion with a neighbourhood algorithm--II. Appraising the ensemble


Malcolm Sambridge
SUMMARY
Monte Carlo direct search methods, such as genetic algorithms, simulated annealing, etc., are often
used to explore a finite-dimensional parameter space. They require the solving of the forward problem
many times, that is, making predictions of observables from an earth model. The resulting ensemble of
earth models represents all 'information' collected in the search process. Search techniques have
been the subject of much study in geophysics; less attention is given to the appraisal of the ensemble.
Often inferences are based on only a small subset of the ensemble, and sometimes a single member.
This paper presents a new approach to the appraisal problem. To our knowledge this is the first time
the general case has been addressed, that is, how to infer information from a complete ensemble,
previously generated by any search method. The essence of the new approach is to use the
information in the available ensemble to guide a resampling of the parameter space. This requires no
further solving of the forward problem, but from the new `resampled' ensemble we are able to obtain
measures of resolution and trade-off in the model parameters, or any combinations of them. The new
ensemble inference algorithm is illustrated on a highly non-linear waveform inversion problem. It is
shown how the computation time and memory requirements scale with the dimension of the parameter
space and size of the ensemble. The method is highly parallel, and may easily be distributed across
several computers. Since little is assumed about the initial ensemble of earth models, the technique is
applicable to a wide variety of situations. For example, it may be applied to perform `error analysis'
using the ensemble generated by a genetic algorithm, or any other direct search method.
Key words: numerical techniques, receiver functions, waveform inversion.

Optimal experiment design: cross-borehole tomographic examples


Andrew Curtis
SUMMARY
Experiment design optimization requires that the quality of any particular design can be both quantified
and then maximized. In this study, experiment quality is defined to measure the constraints on a
particular model offered by the anticipated experimental data (that is, it measures anticipated model
information post-experiment). Physical and financial constraints define the space of possible
experimental designs. The definitions used here require that the relationship between model
parameters and data can be linearized without significant loss of information. Two new measures of
model information are introduced and compared to three previously known measures. One of the new
measures can be calculated extremely efficiently, allowing experiments constraining large model
spaces to be designed. This efficiency trades off with a lack of sensitivity to poorly constrained parts of
the model. Each measure is used independently to design a cross-borehole tomographic survey
including surface sources and receivers (henceforth called nodes) which maximally constrains the
interborehole velocity structure. The boreholes are vertical and the background velocity is assumed to
be approximately constant. Features common to most or all optimal designs form robust design
criteria, 'rules of thumb' which can be applied to design future experiments. These are:
(1) surface nodes significantly improve designs;
(2) node density increases steadily down the length of each well;
(3) surface node density is increased slightly around the central point between
the wells;
(4) average node density on the ground surface is lower than that down each well.
Three of these criteria are shown to be intuitively reasonable (the fourth is not), but the current
method is quantitative and hence may be applied in situations where intuition breaks down (for
example, non-vertical wells with multilateral splays; combining different data types; inversion for
anisotropic model parameters). In such cases the optimal design is usually not obvious, but can be
found using the quantitative methods introduced and discussed herein.
Key words: cross-borehole tomography, experiment, information, optimal design, survey.
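For a linearized experiment d = Gm, one classical quality measure is the log-determinant of G^T G (D-optimality). A minimal sketch, with invented two-parameter sensitivities standing in for tomographic ray-path sensitivities:

```python
import numpy as np

def design_quality(G):
    """log det(G^T G): a classical D-optimality measure of how well a
    linearized experiment d = G m constrains the model parameters m."""
    sign, logdet = np.linalg.slogdet(G.T @ G)
    return logdet if sign > 0 else -np.inf

# Toy linear experiment: each row of G is one measurement's sensitivity
# to two model parameters.  Clustered measurements are nearly redundant.
clustered = np.array([[1.0, 0.9], [1.0, 1.0], [1.0, 1.1]])
spread = np.array([[1.0, -1.0], [1.0, 0.0], [1.0, 1.0]])
q_clustered = design_quality(clustered)
q_spread = design_quality(spread)
```

Spreading the measurement sensitivities scores far higher, mirroring the intuition behind the design rules above: redundant nodes add little information, while complementary ones tighten the constraint on the model.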

A probabilistic approach to the inversion of data from a seismic array and its application to volcanic
signals
G. Saccorotti and E. Del Pezzo
SUMMARY
Array techniques are particularly well-suited for detecting and quantifying the complex seismic
wavefields associated with volcanic activity such as volcanic tremor and long period events. The
methods based on the analysis of the signal in the frequency domain, or spectral methods, have the
main advantages of both resolving closely spaced sources and reducing the necessary computer time,
but may severely fail in the analysis of monochromatic, non-stationary signals. Conversely, the time-
domain methods, based on the maximization of a multichannel coherence estimate, can be applied
even for short-duration pulses. However, for both the time and the frequency domain approaches, an
exhaustive definition of the errors associated with the slowness vector estimate is not yet available.
Such a definition becomes crucial once the slowness vector estimates are used to infer source location
and extent. In this work we develop a method based on a probabilistic formalism, which allows for a
complete definition of the uncertainties associated with the estimate of frequency slowness power
spectra from measurement of the zero-lag cross-correlation. The method is based on the estimate of
the theoretical frequency slowness power spectrum, which is expressed as the convolution of the true
signal slowness with the array response pattern. Using a Bayesian formalism, the a posteriori
probability density function for signal slowness is expressed as the difference, in the least-squares
sense, between the model spectrum and that derived from application of the zero-lag cross-correlation
technique. The method is tested using synthetic waveforms resembling the quasi-monochromatic
signals often associated with the volcanic activity. Examples of application to data from Stromboli
volcano, Italy, allow for the estimate of source location and extent of the explosive activity.
Key words: array, inverse problem, volcanic activity.
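The zero-lag cross-correlation measure underlying this approach can be illustrated with a minimal delay-and-correlate sketch (a generic beam coherence estimate, not the authors' implementation; the station geometry, frequency and slowness values below are invented for the test):

```python
import numpy as np

def zero_lag_cc(traces, coords, dt, slowness, wlen):
    """Mean pairwise zero-lag cross-correlation of traces aligned
    according to a trial slowness vector (s_x, s_y) in s/km."""
    delays = coords @ slowness                   # predicted delay per station (s)
    shifts = np.round(delays / dt).astype(int)
    shifts -= shifts.min()                       # make all shifts non-negative
    segs = np.array([tr[s:s + wlen] for tr, s in zip(traces, shifts)])
    segs -= segs.mean(axis=1, keepdims=True)
    num = segs @ segs.T                          # pairwise zero-lag products
    norm = np.sqrt(np.diag(num))
    cc = num / np.outer(norm, norm)
    iu = np.triu_indices(len(traces), k=1)       # distinct station pairs
    return cc[iu].mean()

# Synthetic plane wave crossing a 4-station array
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # km
s_true = np.array([0.5, 0.25])                   # s/km
dt, n, f = 0.01, 1024, 1.3
t = np.arange(n) * dt
traces = [np.sin(2 * np.pi * f * (t - coords[i] @ s_true)) for i in range(4)]

cc_true = zero_lag_cc(traces, coords, dt, s_true, wlen=512)    # coherent
cc_wrong = zero_lag_cc(traces, coords, dt, -s_true, wlen=512)  # misaligned
```

Scanning the trial slowness over a grid and mapping the resulting coherence yields the frequency-slowness power pattern from which the probabilistic analysis starts.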
Sensitivity distributions of different borehole electrode configurations considering a model with a cylindrical coaxial boundary
M. Furche and A. Weller
SUMMARY
The sensitivity distributions of different electrode configurations are computed for both a
homogeneous resistivity distribution and a model consisting of two vertical zones of homogeneous
resistivity. The inner zone around the borehole axis represents a borehole filled with mud and the
outer zone is the undisturbed formation. The sensitivity of the homogeneous model is independent of
resistivity, whereas the sensitivity in the case of the cylindrical coaxial boundary depends on the
contrast between formation resistivity and mud resistivity. With increasing contrast, the sensitivity
distribution changes dramatically for all investigated electrode configurations. The sensitivity patterns
are used to illustrate the ability of different electrode configurations to delineate thin layers. The
superiority of focused tools in comparison to normal logs can clearly be shown if the effect of variable
bucking currents is included.
Key words: borehole geophysics, electrical resistivity, numerical techniques.
Simulated annealing ray tracing in complex three-dimensional media
Danilo R. Velis and Tadeusz J. Ulrych
SUMMARY
Simulated annealing has been applied to seismic ray tracing to determine the minimum traveltime ray
path connecting two points in complex 3-D media. In contrast to conventional ray tracing schemes
such as shooting and bending, simulated annealing ray tracing (SART) overcomes some well-known
difficulties regarding multipathing and take-off angle selection. These include local convergence (that
is, failing to obtain the ray path with absolute minimum traveltime) and divergence of the take-off angle
selection strategy. Under these circumstances, shooting and bending methods may not provide
reliable results in highly variable 3-D media. A flexible model representation is used to accommodate a
large class of velocity models.
Key words: block model, heterogeneous media, numerical techniques, ray tracing, simulated
annealing.
Appraisal of equivalence and suppression problems in 1D EM and DC measurements using global optimization and joint inversion
S.P. Sharma and P. Kaikkonen
Abstract
Global optimization with very fast simulated annealing (VFSA) in association with joint inversion is
performed for 1D earth structures. The inherent problems of equivalence and suppression in
electromagnetic (EM) and direct current (DC) resistivity methods are studied. Synthetic phase data
from multifrequency sounding using a horizontal coplanar coil system and synthetic apparent
resistivity data from Schlumberger DC resistivity measurements are inverted individually and jointly
over different types of layered earth structures. Noisy data are also inverted. The study reveals that
global optimization of individual data sets cannot solve inherent equivalence or suppression problems.
Joint inversion of EM and DC measurements can overcome the problem of equivalence very well.
However, a suppression problem cannot be solved even after combination of data sets. This study
reveals that the K-type earth structure is easiest to resolve while the A-type is the most difficult. We
also conclude that the equivalence associated with a thin resistive layer can be resolved better than
that for a thin conducting layer.
1D inversion of DC resistivity data using a quality-based truncated SVD
Elonio A. Muiuane and Laust B. Pedersen
ABSTRACT
Many DC resistivity inversion schemes use a combination of standard iterative least-squares and
truncated singular value decomposition (SVD) to optimize the solution to the inverse problem.
However, until quite recently, the truncation was done arbitrarily or by a trial-and-error procedure, due
to the lack of workable guidance criteria for discarding small singular values. In this paper we present
an inversion scheme which adopts a truncation criterion based on the optimization of the total model
variance. This consists of two terms: (i) the term associated with the variance of statistically significant
principal components, i.e. the standard model estimate variance, and (ii) the term associated with
statistically insignificant principal components of the solution, i.e. the variance of the bias term. As an
initial model for the start of iterations, we use a multilayered homogeneous half-space whose layer
thicknesses increase logarithmically with depth to take into account the decrease of the resolution of
the DC resistivity technique with depth. The present inversion scheme has been tested on synthetic
and field data. The results of the tests show that the procedure works well and the convergence
process is stable even in the most complicated cases. The fact that the truncation level in the SVD is
determined intrinsically in the course of inversion proves to be a major advantage over the inversion
schemes where it is set by the user.
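The effect of the truncation level in such schemes can be seen in a minimal sketch (a fixed truncation level k, not the authors' variance-based criterion; the toy sensitivity matrix is invented):

```python
import numpy as np

def tsvd_solve(J, d, k):
    """Least-squares solution of J m = d keeping only the k largest
    singular values; discarding small singular values stabilizes the
    solution at the price of a bias term."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)  # s sorted descending
    inv_s = np.where(np.arange(s.size) < k, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ d))

# Nearly rank-deficient toy sensitivity matrix (equivalence-type problem)
J = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
d = np.array([2.0, 2.0001])

m_full = tsvd_solve(J, d, k=2)   # exact, but unstable under data noise
m_trunc = tsvd_solve(J, d, k=1)  # biased, but stable
```

With k equal to the full rank the ill-conditioning of J is passed straight into the model estimate; truncating the smallest singular value trades a small data misfit for a much better-behaved solution.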
Detecting small-scale targets by the 2D inversion of two-sided three-electrode data: application to an archaeological survey
M. Emin Candansayar and Ahmet T. Basokur
ABSTRACT
The detecting capabilities of some electrical arrays for the estimation of position, size and depth of
small-scale targets were examined in view of the results obtained from 2D inversions of
apparent-resistivity data. The two-sided three-electrode apparent-resistivity data are obtained by the application
of left- and right-hand pole-dipole arrays that also permit the computation of four-electrode and
dipole-dipole apparent-resistivity values without actually measuring them. Synthetic apparent-resistivity data
sets of the dipole-dipole, four-electrode and two-sided three-electrode arrays are calculated for models
that simulate buried tombs. The results of two-dimensional inversions are compared with regard to the
resolution in detecting the exact location, size and depth of the target, showing some advantage for
the two-sided three-electrode array. A field application was carried out in the archaeological site
known as Alaca Hoyuk, a religious temple area of the Hittite period. The two-dimensional inversion of
the two-sided three-electrode apparent-resistivity data has led to locating a part of the city wall and a
buried small room. The validity of the interpretation has been checked against the results of
subsequent archaeological excavations.
On the effectiveness of 2D electrical inversion results: an agricultural case study
C. Panissod, D. Michot,Y. Benderitter and A. Tabbagh
ABSTRACT
Electrical resistivity tomography was used in Beauce (France) to assess the water extraction by corn
plants (evapotranspiration). The acquired pseudosections show conductive anomalies under the
plants. A 2D inversion of measurements led us to identify clear resistive features associated with the
water losses under the corn-plant rows. New models have been calculated with two different 3D
algorithms (finite-difference and moment-method) to take into account 3D structure of the ground and
to confirm that periodic resistive features may generate shifted apparent-resistivity anomalies.
3D adaptive tomography using Delaunay triangles and Voronoi polygons
Gualtiero Böhm, Paolo Galuppo and Aldo Vesnaver
Abstract
The solutions of traveltime inversion problems are often not unique because of the poor match
between the raypath distribution and the tomographic grid. However, by adapting the local resolution
iteratively, by means of a singular value analysis of the tomographic matrix, we can reduce or
eliminate the null space influence on our earth image: in this way, we get a much more reliable
estimate of the velocity field of seismic waves. We describe an algorithm for an automatic regridding,
able to fit the local resolution to the available raypaths, which is based on Delaunay triangulation and
Voronoi tessellation. It increases the local pixel density where the null space energy is low or the
velocity gradient is large, and reduces it elsewhere. Consequently, the tomographic image can reveal
the boundaries of complex objects, but is not affected by the ambiguities that occur when the grid
resolution is not adequately supported by the available raypaths.
Non-linear three-dimensional inversion of cross-well electrical measurements
Aria Abubakar and Peter M. van den Berg
Abstract
Cross-well electrical measurement as known in the oil industry is a method for determining the
electrical conductivity distribution between boreholes from the electrostatic field measurements in the
boreholes. We discuss the reconstruction of the conductivity distribution of a three-dimensional
domain. The measured secondary electric potential field is represented in terms of an integral
equation for the vector electric field. This integral equation is taken as the starting point to develop a
non-linear inversion method, the so-called contrast source inversion (CSI) method. The CSI method
considers the inverse scattering problem as an inverse source problem in which the unknown contrast
source (the product of the total electric field and the conductivity contrast) in the object domain is
reconstructed by minimizing the object and data error using a conjugate-gradient step, after which the
conductivity contrast is updated by minimizing only the error in the object. This method has been
tested on a number of numerical examples using the synthetic `measured' data with and without noise.
Numerical tests indicate that the inversion method yields a reasonably good reconstruction result, and
is fairly insensitive to added random noise.
3D resistivity inversion using 2D measurements of the electric field
P.D. Jackson, S.J. Earl and G.J. Reece
Abstract
Field and `noisy' synthetic measurements of electric-field components have been inverted into 3D
resistivities by smoothness-constrained inversion. Values of electrical field can incorporate changes in
polarity of the measured potential differences seen when 2D electrode arrays are used with
heterogeneous `geology', without utilizing negative apparent resistivities or singular geometrical
factors. Using both the X- and Y-components of the electric field as measurements resulted in faster
convergence of the smoothness-constrained inversion compared with using one component alone.
Geological structure and resistivity were reconstructed as well as, or better than, comparable
published examples based on traditional measurement types. A 2D electrode grid (20 × 10),
incorporating 12 current-source electrodes, was used for both the practical and numerical
experiments; this resulted in 366 measurements being made for each current-electrode configuration.
Consequently, when using this array for practical field surveys, 366 measurements could be acquired
simultaneously, making the upper limit on the speed of acquisition an order of magnitude faster than a
comparable conventional pole-dipole survey. Other practical advantages accrue from the closely
spaced potential dipoles being insensitive to common-mode noise (e.g. telluric) and only 7% of the
electrodes (i.e. those used as current sources) being susceptible to recently reported electrode
charge-up effects.
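The smoothness-constrained update used in inversions of this kind can be sketched generically (a standard Gauss-Newton step with a first-difference roughness penalty; the matrices and the damping value below are illustrative, not the authors' code):

```python
import numpy as np

def smoothness_step(J, r, m, lam):
    """One Gauss-Newton update minimizing |d - g(m)|^2 + lam * |L m|^2,
    where L is a first-difference roughness operator and r = d - g(m):
    dm = (J^T J + lam L^T L)^-1 (J^T r - lam L^T L m)."""
    L = np.diff(np.eye(m.size), axis=0)       # first-difference operator
    A = J.T @ J + lam * L.T @ L
    b = J.T @ r - lam * L.T @ (L @ m)
    return m + np.linalg.solve(A, b)

# Linear toy problem: direct 'measurements' of a 5-cell resistivity model
J = np.eye(5)
d = np.array([1.0, 3.0, 2.0, 4.0, 3.0])
m = np.zeros(5)
for _ in range(10):
    m = smoothness_step(J, d - J @ m, m, lam=0.5)
```

For a linear forward operator the iteration reaches its fixed point immediately; the recovered model fits the data while being smoother than the data themselves, which is the trade-off the penalty term encodes.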
Three-dimensional imaging of subsurface structures using resistivity data
Myeong-Jong Yi, Jung-Ho Kim, Yoonho Song, Seong-Jun Cho, Seung-Hwan Chung and Jung-Hee
Suh
Abstract
We have developed a three-dimensional inverse scheme for carrying out DC resistivity surveys,
incorporating complicated topography as well as arbitrary electrode arrays. The algorithm is based on
the finite-element approximation to the forward problem, so that the effect of topographic variation on
the resistivity data is effectively evaluated and incorporated in the inversion. Furthermore, we have
enhanced the resolving power of the inversion using the active constraint balancing method.
Numerical verifications show that a correct earth image can be derived even when complicated
topographic variation exists. By inverting the real field data acquired at a site for an underground
sewage disposal plant, we obtained a reasonable image of the subsurface structures, which correlates
well with the surface geology and drill log data.
An approximate analytical approach to compute geoelectric dipole-dipole responses due to a small buried cube
Sandor Szalai and Laszlo Szarka
Abstract
A simple analytical solution is presented for computing direct current (DC) electric field distortion due
to a small cube in a homogeneous half-space, measured with a dipole-dipole array on the surface.
Both the transmitter and the receiver may have any orientation; furthermore their position on the
horizontal surface and the depth of the cube can be freely selected. It is shown that a simple
approximate analytical method may replace more complicated 3D numerical modelling algorithms. The
approximation lies in the linearization of the problem: the secondary source (i.e. the cube) is
considered as a system of three perpendicular electric dipoles. In spite of this first-order
approximation, in the case of realistic depths (z/R < 0.1–0.5, where R is the transmitter-receiver
distance), this approximate solution fits very well with true 3D numerical modelling results, and with
analogue modelling results if a/R ≤ 0.1, where a is the length of the side of the cube. Due to its
simplicity, this method could be used for computing DC field distortion effects, estimating parameter-
sensitivities, or even determining some initial models for further inversions.
An apparent-resistivity concept for low-frequency electromagnetic sounding techniques
J. van der Kruk, J.A.C. Meekes, P.M. van den Berg and J.T. Fokkema
Abstract
Apparent resistivity is a useful concept for initial quick-scan interpretation and quality checks in the
field, because it represents the resistivity properties of the subsurface better than the raw data. For
frequency-domain soundings several apparent-resistivity definitions exist. One definition uses an
asymptote for the field of a magnetic dipole in a homogeneous half-space and is useful only for low
induction numbers. Another definition uses only the amplitude information of the total magnetic field,
although this results in a non-unique apparent resistivity. To overcome this non-uniqueness, a
complex derivation using two different source-receiver configurations and several magnetic field
values for different frequencies or different offsets is derived in another definition. Using the latter
theory, in practice, this means that a wide range of measurements have to be carried out, while
commercial systems are not able to measure this wide range. In this paper, an apparent-resistivity
concept is applied beyond the low-induction zone, for which the use of different source-receiver
configurations is not needed. This apparent-resistivity concept was formerly used to interpret the
electromagnetic transients that are associated with the turn-off of the transmitter current. The concept
uses both amplitude and phase information and can be applied for a wide range of frequencies and
offsets, resulting in a unique apparent resistivity for each individual (offset, frequency) combination. It
is based on the projection of the electromagnetic field data on to the curve of the field of a magnetic
dipole on a homogeneous half-space and implemented using a non-linear optimization scheme. This
results in a fast and efficient estimation of apparent resistivity versus frequency or offset for
electromagnetic sounding, and also gives a new perspective on electromagnetic profiling. Numerical
results and two case studies are presented. In each case study the results are found to be comparable
with those from other existing exploration systems, such as EM31 and EM34. They are obtained with
a slight increase of effort in the field but contain more information, especially about the vertical
resistivity distribution of the subsurface.
The effect of subsurface pipes on apparent-resistivity measurements
Anna C. Vickery and Bruce A. Hobbs
Abstract
Subsurface conducting pipes can be either a target or a noise source in geophysical surveying. Their
effect as a noise source in resistivity imaging can be so severe as to render the geophysical data
uninterpretable. A method is developed here for identifying, locating and removing the effects of
subsurface conducting pipes from image data, thus revealing the background resistivity structure. A
previously known analytic solution for the potential distribution produced by current injection in a
uniform half-space containing an infinitely long conducting cylinder is used to calculate apparent
resistivities corresponding to electrode arrays on the surface of the half-space. Most results concern
the Wenner array and an examination is made of the effects produced by varying the electrode
spacing and the depth, size and orientation of the pipe with respect to the array. A method is
developed for locating pipes in resistivity image data by cross-correlation of the analytic solution with
the measured field data. Pipe effects are then removed by multiplying each datum point in the
measurements by the reciprocal of the corresponding value in the analytic solution. The success of
the method is demonstrated by applications to synthetic data sets involving one or two pipes
embedded in non-uniform half-spaces. In further examples, the method is applied to some measured
resistivity images from an ex-industrial site (a former oil distribution terminal), where an
electromagnetic survey had previously revealed a labyrinth of underground pipes. The method is
shown to be successful in removing the effects of the pipes to reveal the underlying geology.
Short note on electrode charge-up effects in DC resistivity data acquisition using multi-electrode
arrays
Torleif Dahlin
Abstract
The measurement sequence used in DC resistivity data acquisition with multi-electrode arrays should be
carefully designed so as to minimize electrode charge-up effects. These effects can be
some orders of magnitude larger than the induced signal and remain at significant levels for tens of
minutes. Even when using a plus-minus-plus type of measurement cycle, one should avoid making
potential measurements with an electrode that has just been used to inject current, as the decay
immediately after current turn-off is clearly non-linear.
Electrical resistivity tomography to investigate geological structures of the earth's upper crust
H. Storz, W. Storz and F. Jacobs
Abstract
It is important to have detailed knowledge of the electrical properties of the earth's crust in order to
recognize geological structures and to understand tectonic processes. In the area surrounding the
German Continental Deep Drilling Project (KTB), we have used DC dipole-dipole soundings to
investigate the electrical conductivity distribution down to a depth of several kilometres. We have
adapted the electrical resistivity tomography (ERT) technique, a well-established near-surface method,
to large-scale experiments. Independent transmitting and receiving units were used to realize the
concept of simultaneous multichannel registration of the scalar electrical potential at 44 dipoles. The
measured data yielded apparent resistivities which were inverted to a 2D resistivity model ranging
from the surface down to a depth of 4 km. Two highly conductive structures with steep inclination were
detected. They are expected to be major fault zones embedded in a metamorphic body. The rather
low resistivity (ρ < 10 Ωm) can be explained by the existence of graphitic minerals and/or electrolytic
fluids.
An efficient non-linear least-squares 1D inversion scheme for resistivity and IP sounding data
Indrajit G. Roy
Abstract
Non-linear least-squares inversion operates iteratively by updating the model parameters in each step
by a correction vector which is the solution of a set of normal equations. Inversion of geoelectrical data
is an ill-posed problem. This and the ensuing suboptimality restrict the initial model to being in the near
vicinity of the true model. The problem may be reduced by introducing damping into the system of
equations. It is shown that an appropriate choice of the damping parameter obtained adaptively and
the use of a conjugate-gradient algorithm to solve the normal equations make the 1D inversion
scheme efficient and robust. The scheme uses an optimal damping parameter that is dependent on
the noise in the data, in each iterative step. The changes in the damping and relative residual error
with iteration number are illustrated. A comparison of its efficacy over the conventional Marquardt and
simulated annealing methods, tested on Inman's model, is made. Inversion of induced polarization (IP)
sounding is obtained by inverting twice (true and modified) DC apparent resistivity data. The inversion
of IP data presented here is generic and can be applied to any of the IP observables, such as
chargeability, frequency effect, phase, etc., as long as these observables are explicitly related to the
DC apparent resistivity. The scheme is used successfully in inverting noise-free and noisy synthetic
data and field data taken from the published literature.
The feasibility of electromagnetic gradiometer measurements
Daniel Sattel and James Macnae
Abstract
The quantities measured in transient electromagnetic (TEM) surveys are usually either magnetic field
components or their time derivatives. Alternatively it might be advantageous to measure the spatial
derivatives of these quantities. Such gradiometer measurements are expected to have lower noise
levels due to the negative interference of ambient noise recorded by the two receiver coils. Error
propagation models are used to compare quantitatively the noise sensitivities of conventional and
gradiometer TEM data. To achieve this, eigenvalue decomposition is applied on synthetic data to
derive the parameter uncertainties of layered-earth models. The results indicate that near-surface
gradient measurements give a superior definition of the shallow conductivity structure, provided noise
levels are 20–40 times smaller than those recorded by conventional EM instruments. For a
fixed-wing towed-bird gradiometer system to be feasible, a noise reduction factor of at least 50–100 is
required. One field test showed that noise reduction factors in excess of 60 are achievable with
gradiometer measurements. However, other collected data indicate that the effectiveness of noise
reduction can be hampered by the spatial variability of noise such as that encountered in built-up
areas. Synthetic data calculated for a vertical plate model confirm the limited depth of detection of
vertical gradient data but also indicate some spatial derivatives, which offer better lateral resolution
than conventional EM data. This high sensitivity to the near-surface conductivity structure suggests
the application of EM gradiometers in areas such as environmental and archaeological mapping.
Depth of detection of highly conducting and volume polarizable targets using induced polarization
A. Apparao, G.S. Srinivas, V. Subrahmanya Sarma, P.J. Thomas, M.S. Joshi and P. Rajendra Prasad
Abstract
We define the apparent frequency effect in induced polarization (IP) as the relative difference between
apparent resistivities measured using DC excitation on the one hand and high-frequency excitation
(when the IP effect vanishes) on the other. Assuming a given threshold for the minimum detectable
anomaly in the apparent frequency effect, the depth of detection of a target by IP can be defined as
that depth below which the target response is lower than the threshold for a given electrode array.
Physical modelling shows that for the various arrays, the depth of detection of a highly conducting and
volume polarizable target agrees closely with the depth of detection of an infinitely conducting and
non-polarized body of the same shape and size. The greatest depth of detection is obtained with a
two-electrode array, followed by a three-electrode array, while the smallest depth of detection is
obtained with a Wenner array when the array spread is in-line (i.e. perpendicular to the strike
direction). The depth of detection with a Wenner array improves considerably and is almost equal to
that of a two-electrode array when the array spread is broadside (i.e. along the strike direction).
Cross-hole resistivity tomography using different electrode configurations
Zhou Bing and S.A. Greenhalgh
Abstract
This paper investigates the relative merits and effectiveness of cross-hole resistivity tomography using
different electrode configurations for four popular electrode arrays: pole-pole, pole-bipole, bipole-pole
and bipole-bipole. By examination of two synthetic models (a dipping conductive strip and a dislocated
fault), it is shown that besides the popular pole-pole array, some specified three- and four-electrode
configurations, such as pole-bipole AMN, bipole-pole AMB and bipole-bipole AMBN with their
multispacing cross-hole profiling and scanning surveys, are useful for cross-hole resistivity
tomography. These configurations, compared with the pole-pole array, may reduce or eliminate the
effect of remote electrodes (systematic error) and yield satisfactory images with 20% noise-
contaminated data. It is also shown that the configurations which have either both current electrodes
or both potential electrodes in the same borehole, i.e. pole-bipole AMN, bipole-pole ABM and bipole-
bipole ABMN, have a singularity problem in data acquisition, namely low readings of the potential or
potential difference in cross-hole surveying, so that the data are easily obscured by background noise
and yield images inferior to those from other configurations.
Using a non-integer moment of the impulse response to estimate the half-space conductivity
Terry J. Lee, Richard S. Smith and Christophe S.B. Hyde
Abstract
The nth-order moments of the electromagnetic impulse response are useful for interpreting
electromagnetic data. We have derived an analytic expression for the half- order moment of a
conductive half-space. By inverting this expression, the measured half-order moment can be used to
estimate an apparent conductivity of the ground. The first-order moment can also be used to estimate
the half-space conductivity. A sensitivity analysis indicates that for an airborne EM configuration, the
half-order moment will be most sensitive to material in the top 26–48 m, while the first-order moment
will be sensitive to deeper material (down to depths between 66 and 127 m).
Robust inversion of vertical electrical sounding data using a multiple reweighted least-squares method
Milton J. Porsani, Sri Niwas and Niraldo R. Ferreira
Abstract
The root cause of the instability problem of the least-squares (LS) solution of the resistivity inverse
problem is the ill-conditioning of the sensitivity matrix. To circumvent this problem a new LS approach
has been investigated in this paper. At each iteration, the sensitivity matrix is weighted in multiple
ways generating a set of systems of linear equations. By solving each system, several candidate
models are obtained. As a consequence, the space of models is explored in a more extensive and
effective way resulting in a more robust and stable LS approach to solving the resistivity inverse
problem. This new approach is called the multiple reweighted LS method (MRLS). The problems
encountered when using the L1- or L2-norm are discussed and the advantages of working with the
MRLS method are highlighted. A five-layer earth model which generates an ill-conditioned matrix due
to equivalence is used to generate a synthetic data set for the Schlumberger configuration. The data
are randomly corrupted by noise and then inverted by using L2, L1 and the MRLS algorithm. The
stabilized solutions, even though blurred, could only be obtained by using a heavy ridge regression
parameter in L2- and L1-norms. On the other hand, the MRLS solution is stable without regression
factors and is superior and clearer. For a better appraisal the same initial model was used in all cases.
The MRLS algorithm is also demonstrated for a field data set: a stable solution is obtained.
A new method to discriminate between a valid IP response and EM coupling effects
Jianping Xiang, N.B. Jones, Daizhan Cheng and F.S. Schlindwein
Abstract
The problem of discrimination between a valid induced polarization (IP) response and electromagnetic
(EM) coupling effects is considered and an effective solution is provided. First, a finite dimensional
approximation to the Cole-Cole model is investigated. Using the least-squares approach, the
parameters of the approximate model are obtained. Next, based on the analysis of overvoltage, a
finite dimensional structure of the IP model is produced. Using this overvoltage-based structure, a
specific finite dimensional approximation of the Cole-Cole model is proposed. Summarizing the
analysis of the finite dimensional IP model, it is concluded that the proposed IP model, which fits the
field data much better than the traditional Cole-Cole model, is essentially an RC-circuit. From a circuit-
analysis point of view, it is well known that an electromagnetic effect can be described by an RL-
circuit. The simulation results on experimental data support this conception. According to this
observation, a new method to discriminate between a valid IP response and EM coupling effects is
proposed as follows: (i) use a special finite dimensional model for IP-EM systems; (ii) obtain the
parameters for the model using a least-squares approach; (iii) separate RC-type terms and RL-type
terms; the first models the IP behaviour, the latter represents the EM part. Simulation on experimental
data shows that the method is very simple and effective.
Resistivity anomaly imaging by probability tomography
Paolo Mauriello and Domenico Patella
Abstract
Probability tomography is a new concept reflecting the inherently uncertain nature of any geophysical
interpretation. The rationale of the new procedure is based on the fact that a measurable anomalous
field, representing the response of a buried feature to a physical stimulation, can be approximated by
a set of partial anomaly source contributions. These may be given a multiplicity of configurations to
generate cumulative responses, which are all compatible with the observed data within the accuracy
of measurement. The purpose of the new imaging procedure is the design of an occurrence probability
space of elementary anomaly sources, located anywhere inside an explored underground volume. In
geoelectrics, the decomposition is made within a regular resistivity lattice, using the Fréchet
derivatives of the electric potential weighted by resistivity difference coefficients. The typical
tomography is a diffuse image of the resistivity difference probability pattern, that is quite different from
the usual modelled geometry derived from standard inversion.
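
The core of the probability-tomography idea can be condensed into a normalised cross-correlation between the measured anomaly and the response of one elementary source. The sketch below is our own generic illustration of that step (names and data are invented, not taken from the paper):

```python
import math

def occurrence_probability(anomaly, kernel):
    """Normalised cross-correlation between a measured anomaly vector and the
    response of a single elementary source; Cauchy-Schwarz bounds it to [-1, 1]."""
    num = sum(a * k for a, k in zip(anomaly, kernel))
    den = math.sqrt(sum(a * a for a in anomaly) * sum(k * k for k in kernel))
    return num / den if den else 0.0
```

A value near +1 (or -1) marks a cell whose elementary response correlates strongly with the observed anomaly; mapping this index over the lattice gives the diffuse probability image, rather than a sharp inverted model.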

Use of VFSA for resolution, sensitivity and uncertainty analysis in 1D DC resistivity and IP inversion
Bimalendu B. Bhattacharya, Shalivahan and Mrinal K. Sen
Abstract
We present results from the resolution and sensitivity analysis of 1D DC resistivity and IP sounding
data using a non-linear inversion. The inversion scheme uses a theoretically correct Metropolis-Gibbs'
sampling technique and an approximate method using numerous models sampled by a global
optimization algorithm called very fast simulated annealing (VFSA). VFSA has recently been found to
be computationally efficient in several geophysical parameter estimation problems. Unlike
conventional simulated annealing (SA), in VFSA the perturbations are generated from the model
parameters according to a Cauchy-like distribution whose shape changes with each iteration. This
results in an algorithm that converges much faster than a standard SA. In the course of finding the
optimal solution, VFSA samples several models from the search space. All these models can be used
to obtain estimates of uncertainty in the derived solution. This method makes no assumptions about
the shape of an a posteriori probability density function in the model space. Here, we carry out a
VFSA-based sensitivity analysis with several synthetic and field sounding data sets for resistivity and
IP. The resolution capability of the VFSA algorithm as seen from the sensitivity analysis is satisfactory.
The interpretation of VES and IP sounding data by VFSA, incorporating resolution, sensitivity and
uncertainty of layer parameters, would generally be more useful than the conventional best-fit
techniques.
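
To make the VFSA idea concrete, the sketch below implements the Cauchy-like move and a fast cooling schedule for a single bounded parameter, following Ingber's update rule; the function names, cooling constants and quadratic test objective are our own illustrative choices, not taken from the paper:

```python
import math
import random

def vfsa_perturb(x, lo, hi, T, rng):
    """One VFSA move: a Cauchy-like step whose tail width shrinks with T."""
    while True:
        u = rng.random()
        y = math.copysign(T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0), u - 0.5)
        xn = x + y * (hi - lo)
        if lo <= xn <= hi:          # resample until the move stays in bounds
            return xn

def vfsa_minimise(f, lo, hi, n_iter=2000, T0=1.0, c=1.0, seed=1):
    """Very fast simulated annealing for one bounded parameter (illustrative)."""
    rng = random.Random(seed)
    x = 0.5 * (lo + hi)
    fx = f(x)
    best, fbest = x, fx
    samples = []                    # every model visited, reusable for uncertainty
    for k in range(n_iter):
        T = max(T0 * math.exp(-c * math.sqrt(k)), 1e-12)   # fast cooling schedule
        xn = vfsa_perturb(x, lo, hi, T, rng)
        fn = f(xn)
        samples.append((xn, fn))
        # Metropolis acceptance at the current temperature
        if fn < fx or rng.random() < math.exp(-(fn - fx) / T):
            x, fx = xn, fn
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest, samples
```

As the abstract notes, the models visited along the way (samples here) can be reused to estimate uncertainty in the derived solution, without assuming a shape for the posterior.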

Comparison of methods for estimating earth resistivity from airborne electromagnetic measurements
Les P. Beard
Abstract
Earth resistivity estimates from frequency domain airborne electromagnetic data can vary over more
than two orders of magnitude depending on the half-space estimation method used. Lookup tables are
fast methods for estimating half-space resistivities, and can be based on in-phase and quadrature
measurements for a specified frequency, or on quadrature and sensor height. Inverse methods are
slower, but allow sensor height to be incorporated more directly. Extreme topographic relief can affect
estimates from each of the methods, particularly if the portion of the line over the topographic feature
is not at a constant height above ground level. Quadrature-sensor height lookup table estimates are
generally too low over narrow valleys. The other methods are also affected, but behave less
predictably. Over very good conductors, quadrature-sensor height tables can yield resistivity estimates
that are too high. In-phase-quadrature tables and inverse methods yield resistivity estimates that are
too high when the earth has high magnetic susceptibility, whereas quadrature-sensor height methods
are unaffected. However, it is possible to incorporate magnetic susceptibility into the
in-phase-quadrature lookup table. In-phase-quadrature lookup tables can give different results according
to whether the tables are ordered according to the in-phase component or the quadrature component.
The rules for handling negative in-phase measurements are particularly critical. Although resistivity
maps produced from the different methods tend to be similar, details can vary considerably, calling
into question the ability to make detailed interpretations based on half-space models.
Keywords: Airborne electromagnetic survey; Resistivity; Inversion; Lookup table
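
The lookup-table idea can be illustrated with a toy single-frequency table mapping quadrature response to half-space resistivity. The numbers below are invented for illustration; a real table would be filled by half-space forward modelling for the actual frequency and sensor height:

```python
import bisect
import math

# Hypothetical table: quadrature response (ppm) of a homogeneous half-space
# versus its resistivity (ohm·m) at one frequency and a fixed sensor height.
RESISTIVITY = [1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0]    # ohm·m, ascending
QUADRATURE = [950.0, 700.0, 420.0, 230.0, 110.0, 48.0, 20.0]  # ppm, descending

def lookup_resistivity(q_ppm):
    """Estimate half-space resistivity by log-log interpolation of the table."""
    qs = QUADRATURE[::-1]           # ascending quadrature for bisect
    rs = RESISTIVITY[::-1]
    if q_ppm <= qs[0]:
        return rs[0]                # weak response: clamp at the resistive end
    if q_ppm >= qs[-1]:
        return rs[-1]               # strong response: clamp at the conductive end
    i = bisect.bisect_left(qs, q_ppm)
    t = (math.log(q_ppm) - math.log(qs[i - 1])) / (math.log(qs[i]) - math.log(qs[i - 1]))
    return math.exp(math.log(rs[i - 1]) + t * (math.log(rs[i]) - math.log(rs[i - 1])))
```

This is what makes table lookup so fast compared with inversion: each measurement costs one binary search and one interpolation, at the price of the limitations discussed in the abstract.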

Use of block inversion in the 2-D interpretation of apparent resistivity data and its comparison with
smooth inversion
A.I. Olayinka, U. Yaramanci
Abstract
The ability of a block inversion scheme, in which polygons are employed to define layers and/or
bodies of equal resistivity, in determining the geometry and true resistivity of subsurface structures has
been investigated and a simple strategy for deriving the starting model is proposed. A comparison has
also been made between block inversion and smooth inversion, the latter being a cell-based scheme.
The study entailed the calculation (by forward modelling) of the synthetic data over 2-D geologic
models and inversion of the data. The 2-D structures modelled include vertical fault, graben and horst.
The Wenner array was used. The results show that the images obtained from smooth inversion are
very useful in determining the geometry; however, they can only provide guides to the true resistivity
because of the smearing effects. It is shown that the starting model for block inversion can be based
on a plane layer earth model. In the presence of sharp, rather than gradational, resistivity
discontinuities, the model from block inversion more adequately represents the true subsurface
geology, in terms of both the geometry and the formation resistivity. Field examples from a crystalline
basement area of Nigeria are presented to demonstrate the versatility of the two resistivity inversion
schemes.
Keywords: Resistivity inversion; Fault; Graben; Horst; Wenner array; Nigeria

A new look at multiphase invasion with applications to borehole resistivity interpretation


K. Cozzolino, A.Q. Howard Jr., J.S. Protazio
Abstract
In well log interpretation, it is frequently necessary to correct logs for invasion. Invasion occurs in
permeable formations when there is a radial differential pressure (RDP) between the borehole and
formation. Other factors on which invasion depends include saturation, mobility, pressure (RDP) and
capillary pressure, permeability and viscosity of fluids, and temperature transient effects associated
with the mud filtrate injected into the formation. Thus, simulation of realistic invasion is not an easy
task. This work reviews the famous Buckley-Leverett mathematical model in cylindrical coordinates
appropriate for borehole geometries. The model predicts multiphase invasion in porous media when
gravity, capillary pressure, and mud cake can be neglected. One application is to correct logging while
drilling (LWD) and wireline resistivity logs for time-dependent invasion and formation temperature
effects. This is important, for example, when there are possible large differences in formation and mud
temperature. Modeling studies show these effects can be large enough to noticeably influence
resistivity logs. However, after correction, difference in LWD and wireline logs arising from the time-
dependent heat process are explained. Thus, the method, when coupled to a time-dependent heat
flow model and a response function formulation of resistivity, yields new insight into the influence of
thermal and electrical transients in log interpretation.
Keywords: Multiphase invasion; Borehole resistivity; Buckley-Leverett mathematical model

Cross-hole electrical imaging of a controlled saline tracer injection


L. Slater, A.M. Binley, W. Daily, R. Johnson
Abstract
Electrical imaging of tracer tests can provide valuable information on the spatial variability of solute
transport processes. This concept was investigated by cross-borehole electrical imaging of a
controlled release in an experimental tank. A saline tracer (conductivity 8 × 10³ mS/m, volume 270 l)
was injected into a tank facility (dimensions 10 × 10 × 3 m) consisting of alternating sand and clay
layers. Injection was from 0.3 m below the surface, at a point where maximum interaction between
tank structure and tracer transport was expected. Repeated imaging over a two-week period detected
non-uniform tracer transport, partly caused by the sand-clay sequence. Tracer accumulation on two
clay layers was observed and density-driven spill of tracer over a clay shelf was imaged. An additional
unexpected flow pathway, probably caused by complications during array installation, was identified
close to an electrode array. Pore water samples obtained following termination of electrical imaging
generally supported the observed electrical response, although discrepancies arose when analysing
the response of individual pixels. The pixels that make up the electrical images were interpreted as a
large number of breakthrough curves. The shape of the pixel breakthrough-recession curve allowed
some quantitative interpretation of solute travel time, as well as a qualitative assessment of spatial
variability in advective-dispersive transport characteristics across the image plane. Although surface
conduction effects associated with the clay layers complicated interpretation, the plotting of pixel
breakthroughs was considered a useful step in the hydrological interpretation of the tracer test. The
spatial coverage provided by the high density of pixels is the factor that most encourages the
approach.
Keywords: Resistivity; Tomography; Solute transport; Pixel-breakthroughs

Fast Imaging of TDEM data based on S-inversion


Efthimios Tartaras, Michael S. Zhdanov, Kazushige Wada, Akira Saito, Toshiaki Hara
Abstract
Fast S-inversion is a method of interpretation of time-domain electromagnetic (TDEM) sounding data
using the thin sheet model approach. Within the framework of this method, the electromagnetic (EM)
response measured at the surface of the earth at every time delay is matched with that of a thin sheet
model. The conductivity change with depth is obtained using the conductance, S, and depth, d, of the
equivalent thin sheet. We analyze two different numerical techniques, the differential S-transformation
and the regularized S-inversion, to determine the parameters of the thin sheet. The first technique is a
direct differential transformation of the observed data into conductance and depth values. It is fast and
requires no iterations or starting model. The second technique uses a regularized inversion scheme to
fit the measured response with that of a thin sheet. In both techniques, the retrieved conductance
values are differentiated with respect to depth to obtain the conductivity change with depth. We apply
S-inversion to three-dimensional synthetic data and we successfully locate the local conductors. We
also demonstrate a case history by interpreting TDEM data obtained at the Nojima fault zone in Japan.
The results clearly indicate the location of the fault zone.
Keywords: Electromagnetic methods; Transient methods; Conductivity; Inverse problem
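
The final step shared by both techniques, differentiating the retrieved conductance with respect to depth, is a plain finite difference. The following toy sketch (ours, not from the paper) recovers a uniform conductivity from a linearly growing cumulative conductance:

```python
def conductivity_from_conductance(S, d):
    """Approximate sigma(z) = dS/dz by finite differences of the cumulative
    conductance S (siemens) of the equivalent thin sheet over its depth d (m)."""
    return [(S[i + 1] - S[i]) / (d[i + 1] - d[i]) for i in range(len(S) - 1)]

# a homogeneous 0.1 S/m half-space gives S(d) = 0.1 * d
depths = [0.0, 10.0, 20.0, 30.0]
conductance = [0.1 * z for z in depths]
```

In practice the retrieved S(d) curve is noisy, so the differentiation is usually smoothed or regularised, but the principle is exactly this.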

Ground penetrating radar inversion in 1-D: an approach for the estimation of electrical conductivity,
dielectric permittivity and magnetic permeability
O. Lazaro-Mancilla, E. Gomez-Trevino
Abstract
This paper presents a method for inverting ground penetrating radargrams in terms of one-
dimensional profiles. We resort to a special type of linearization of the damped E-field wave equation
to solve the inverse problem. The numerical algorithm for the inversion is iterative and requires the
solution of several forward problems, which we evaluate using the matrix propagation approach.
Analytical expressions for the derivatives with respect to physical properties are obtained using the
self-adjoint Green's function method. We consider three physical properties of materials, namely
dielectric permittivity, magnetic permeability and electrical conductivity. The inverse problem is
solved by minimizing the quadratic norm of the residuals using quadratic programming optimization. In
the iterative process, to speed up convergence, we use the Levenberg-Marquardt method. The special
type of linearization is based on an integral equation that involves derivatives of the electric field with
respect to magnetic permeability, electrical conductivity and dielectric permittivity; this equation is the
result of analyzing the implication of the scaling properties of the electromagnetic field. The ground is
modeled using thin horizontal layers to approximate general variations of the physical properties. We
show that standard synthetic radargrams due to dielectric permittivity contrasts can be matched using
electrical conductivity or magnetic permeability variations. The results indicate that it is impossible to
differentiate one property from the other using GPR data.
Keywords: Ground penetrating radar; Frechet derivatives; Inverse problem; Parameter estimation;
Quadratic programming

Automatic detection of buried utilities and solid objects with GPR using neural networks and pattern
recognition
W. Al-Nuaimy, Y. Huang, M. Nakhkash, M.T.C. Fang, V.T. Nguyen, A. Eriksen
Abstract
The task of locating buried utilities using ground penetrating radar is addressed, and a novel
processing technique computationally suitable for on-site imaging is proposed. The developed system
comprises a neural network classifier, a pattern recognition stage, and additional pre-processing,
feature-extraction and image processing stages. Automatic selection of the areas of the radargram
containing useful information results in a reduced data set and hence a reduction in computation time.
A backpropagation neural network is employed to identify portions of the radar image corresponding to
target reflections by training it to recognise the Welch power spectral density estimate of signal
segments reflected from various types of buried target. This results in a classification of the radargram
into useful and redundant sections, and further processing is performed only on the former. The
Hough Transform is then applied to the edges of these reflections, in order to accurately identify the
depth and position of the buried targets. This allows a high resolution reconstruction of the subsurface
with reduced computation time. The system was tested on data containing pipes, cables and anti-
personnel landmines, and the results indicate that automatic and effective detection and mapping of
such structures can be achieved in near real-time.
Keywords: Neural networks; Pattern recognition; Hough transform; Ground-penetrating radar
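
For readers unfamiliar with the Hough transform used in the final stage, the sketch below shows the voting scheme in its simplest, line-detecting form on a synthetic point set. The paper applies the same accumulator principle to the edges of target reflections; this illustration is ours, not the authors' code:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, rho_step=1.0):
    """Each edge point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)
    passing through it; collinear points pile their votes into one (theta, rho) bin."""
    acc = Counter()
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(t, round(rho / rho_step))] += 1
    return acc

# 20 synthetic edge points on the line y = x (theta = 135 degrees, rho = 0)
points = [(i, i) for i in range(20)]
accumulator = hough_lines(points)
(t_best, rho_best), votes = accumulator.most_common(1)[0]
```

The peak bin identifies the line's orientation and offset, which is how depth and position of a buried target can be read off once the reflection edges are extracted.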

Three-dimensional inversion of induced polarization data from simulated waste
A. Weller, W. Frangos, M. Seichter
Abstract
The Idaho National Laboratory (INEL) Cold Test Pit (CTP) has been carefully constructed to
simulate buried hazardous waste sites. An induced polarization (IP) survey of the CTP shows a very
strong polarization and a modest resistivity response associated with the simulated waste. A
three-dimensional (3-D) inversion algorithm based on the simultaneous iterative reconstruction
technique (SIRT) and finite difference forward modelling has been applied to generate a subsurface
model of complex resistivity. The lateral extents of the waste zone are well resolved. Limited depth
extent is recognized, but the bottom of the waste appears too deep. With a modelling experiment,
the intrinsic polarizability of the waste material is determined. Since IP is a technique for detection
of diffuse occurrences of metallic material, this method holds promise as a means to distinguish
buried waste from conductive soil material.
Keywords: Electrical resistivity; Geoelectrical prospection; Induced polarization; Inversion algorithm

A note on magnetic modelling with remanence


Abstract
The general problem of magnetic modelling involves accounting for the effect of both remanent
magnetization and the application of an external magnetic field. However, as far as the disturbing field
of a magnetic body in a magnetic environment is concerned, there is an equivalence between the
effects of these two causations that allows the remanence to be represented in terms of an equivalent
primary magnetic H field. Moreover, due to the linearity of the magnetic field in terms of its causations,
the general modelling problem involving an applied magnetic field in the presence of remanence can
be simply and more efficiently dealt with in terms of an equivalent primary field acting in the absence
of any remanent magnetization.
Keywords: Magnetic modelling; Magnetic remanence; Integral equation

Improved and new resistivity-depth profiles for helicopter electromagnetic data


B. Siemon
Abstract
The calculation of apparent resistivities, based on the model of a homogeneous half-space, is
commonly the first step in order to evaluate helicopter electromagnetic (HEM) data. Due to the
increase in frequencies used in HEM systems, survey results are not only displayed as apparent
resistivity maps but also as cross-sections that require resistivity and depth information. After a brief
description and discussion of the basic approaches, improved HEM resistivity-depth profiles are
derived. The apparent resistivities ρa are calculated more accurately when better approximations are
used. The corresponding depth value, the centroid depth z*, is newly defined as the sum of the
apparent depth and half of the apparent skin depth. The resulting profile is referred to hereafter as
the improved standard sounding curve ρa(z*). Several algorithms for deriving "enhanced"
resistivity-depth profiles, which are more sensitive to resistivity variations with respect to depth, are
presented. Two of these enhanced resistivity-depth profiles are based on algorithms used for the
interpretation of MT data. The ρNB(z*NB) sounding curve is derived from the Niblett-Bostick
algorithm. It requires multi-frequency data because the enhancing is achieved by differentiating the
ρa(f) sounding curve with respect to frequency. The other one, the ρd(z*d) sounding curve, is
computed from each frequency independently of the other frequencies because no differentiation is
involved. It is similar to Schmucker's ρ*-z* scheme and it uses the apparent depth to enhance the
sensitivity of the apparent resistivity. Furthermore, both novel algorithms are able to increase the
depth of exploration. All enhanced resistivity-depth profiles are compared with the improved standard
sounding curve ρa(z*) and with the differential parameter method, ρΔ(zΔ), published by Huang and
Fraser. Being robust and easy to calculate, the improved ρa(z*) method should be used for the
standard calculation of resistivity-depth profiles. Besides, it is the basis for the enhanced methods,
which, in addition, can be used to derive more sensitive resistivity-depth profiles. Among all methods
discussed, only ρa and ρNB are relatively independent of the measured sensor height h, which is a
serious problem in survey areas with dense forests. The corresponding centroid depth values are not
distorted by the vegetation if they are displayed with respect to the elevation of the HEM sensor,
which should be given in m a.s.l.
Keywords: Helicopter electromagnetic data; Apparent resistivity; Centroid depth; Resistivity-depth
profiles; Sounding curves

Improvement in TDEM sounding interpretation in presence of induced polarization. A case study in
resistive rocks of the Fogo volcano, Cape Verde Islands
Marc Descloitres, Roger Guerin, Yves Albouy, Alain Tabbagh, Michel Ritz
Abstract
A Time Domain Electromagnetic (TDEM) survey was carried out in and around the caldera of the
Fogo volcano, Cape Verde Islands, to detect the low resistive structures that could be related to
groundwater. A sign reversal in the sounding curves was encountered in central-loop measurements
for the soundings located in the centre of the caldera along three main radial profiles. The negative
transients are recorded in the early channels between 6.8 and 37 ms. Negative values in an early time
transient are an unusual field observation, and consequently the first step was to check the data to
ascertain their accuracy and quality. In the second step, three-dimensional (3D) effects are
evaluated and ruled out in this zone, while an Induced Polarization (IP) phenomenon is observed
using Direct Current (DC) sounding measurements. In the third step, the IP effect is called upon to
explain the TDEM distortions; a Cole-Cole dispersive conductivity is found to be adequate to fit the
field data. However, the more relevant one-dimensional (1D) model is recovered when both
central-loop and offset-loop data are jointly taken into account, thus indicating that an effect of
dispersive conductivity is necessary to explain the field data. The 1D electrical structure exhibits four
layers, with decreasing resistivity with depth. Only the first layer is polarizable and its Cole-Cole
parameters are m = 0.85, c = 0.8 and τ = 0.02 ms for chargeability, frequency dependence and time
constant, respectively. However, the Cole-Cole parameters deduced from TDEM forward modelling
remain different from those deduced from DC/IP sounding. In this volcanic setting, this IP effect may
be caused by the presence of small grains of magnetite and/or by the granularity of effusive products
(lapillis). As a conclusion, it is shown that a modelling using different TDEM data sets is essential to
recover the electrical structure of this area.
Keywords: Central-loop TDEM; Negative transient; Induced Polarization; Cole-Cole modelling; Fogo
volcano
