
Gravitational Force Fields

Gravity and its Units of Measurement

Gravity is the acceleration on a unit mass.

Objects fall to Earth with an acceleration of about 32 ft/s² [980 cm/s²]. The unit "centimeter per second per second" (cm/s²) is known as a gal, in honor of Galileo. In gravity exploration, the acceleration of gravity is the fundamental quantity measured, and the basic unit of acceleration is the milligal (mGal), equal to 10⁻³ gal. Thus, the acceleration of a body near the Earth's surface is about 980,000 mGal. For borehole gravity, the microgal (μGal) is used as the basic unit, with 1 μGal = 0.001 mGal.

Inverse Square Law and the Principle of Superposition

The magnitude F of the gravitational force between two point masses is given by

F = G M1 M2 / R²   (1)

where
G = universal gravitational constant = 6.670 × 10⁻¹¹ N·m²/kg²
M1, M2 = masses 1 and 2, respectively
R = distance between the centers of mass
F = force

This equation is also known as the inverse square law, since F varies with 1/R². The acceleration, or attraction, a1 on M1 is

a1 = F / M1 = G M2 / R²   (2)

Note that a1 is independent of M1. The acceleration vector is

a1 = - (G M2 / R²) r̂   (3)

where r̂ is the unit vector directed from M2 toward M1. The minus sign indicates that the acceleration of M1 is directed back toward M2; that is, the force is attractive. The principle of superposition indicates that the attraction of a group of point masses is equal to the vector sum of the individual attractions:

a = a1 + a2 + ... + an   (4)

where the ai are the attractions of the individual point masses.

Vertical Component Concept

The Earth's gravitational acceleration is approximately 980,000 mGal in an essentially vertical direction (i.e., roughly perpendicular to the Earth's surface). Gravity accelerations of local geologic disturbances may range up to approximately 300 mGal. These local accelerations are not necessarily vertical, but they are measured by gravity meters that are leveled with respect to the Earth's total gravity field. Since the Earth's main vertical gravity field is much stronger than any local disturbance, these local anomalies have little effect on the direction of the Earth's gravity field. Thus, from a practical standpoint, a gravity meter detects the "vertical" component of local geologic gravity accelerations, which is the component parallel to the "plumb-bob" direction (that is, the orientation of a weight hanging from a string at a fixed point). Figure 1 ( Vertical component of gravitational attraction of a buried sphere ) illustrates the vertical component concept for a buried spherical mass distribution.

Figure 1

We can show mathematically that the gravity attraction of a sphere is equivalent to the gravity attraction of a point mass, with the sphere's mass concentrated at its center. Therefore, the vertical gravitational attraction az for a simple point mass body is

az = G M z / R³   (5)

where
G = universal gravitational constant
M = mass of the body
R = distance from the sphere's center of mass to the gravity observation point
z = vertical component of R

Attraction of Complex Bodies

For a constant density, mass is equal to density times volume. Thus, for an element of volume dV, we may express the increment of vertical acceleration daz as follows:

daz = G ρ (z / R³) dV   (6)

where ρ = density of the attractive mass. For a complex body made up of many small volume elements dV, the total vertical component of gravitational attraction az is

az = G ∫∫∫ ρ(V) (z / R³) dV   (7)

where ρ(V) is the density contrast as a function of location within the volume, and the integration is carried out throughout the volume of the density contrast ( Figure 1 ) and ( Figure 2 , Attraction of distributed body).
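As a quick numerical illustration of Equations 4 and 5, the short Python sketch below (not part of the original text) computes the vertical attraction of one or more buried spheres treated as point masses and sums them by superposition. The sphere geometry and density contrast are hypothetical values chosen only for the example.

```python
# A minimal sketch, not from the source: vertical attraction of buried spheres
# (Equation 5) summed by superposition (Equation 4). Units are SI; output in mGal.
import math

G = 6.670e-11   # universal gravitational constant, N·m²/kg² (value quoted in the text)

def sphere_az(x_obs_m, sphere):
    """Vertical attraction (mGal) at a surface point from one buried sphere.
    sphere = (x_center_m, depth_m, radius_m, density_contrast_kg_per_m3)."""
    x, z, radius, drho = sphere
    mass = drho * (4.0 / 3.0) * math.pi * radius ** 3        # anomalous mass, kg
    R = math.hypot(x_obs_m - x, z)                           # distance to sphere center, m
    return G * mass * z / R ** 3 * 1.0e5                     # m/s² -> mGal

def total_az(x_obs_m, spheres):
    """Principle of superposition: sum of the individual (vertical) attractions."""
    return sum(sphere_az(x_obs_m, s) for s in spheres)

# Hypothetical model: one sphere, 500 m radius, center 1,000 m deep, +300 kg/m³ contrast
model = [(0.0, 1000.0, 500.0, 300.0)]
for x in (0.0, 500.0, 1000.0, 2000.0):
    print(f"x = {x:6.0f} m   a_z = {total_az(x, model):.3f} mGal")
```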

Figure 2

Geologic Application

In gravity exploration, we define a density anomaly as a deviation from the "corrected" gravity values that would be produced by a uniform, layer-cake, subsurface geologic setting (see Section 4.0). To produce a measurable gravity anomaly, there must be (1) a lateral density contrast between the geologic body of interest and the surrounding rocks, and (2) a favorable relationship between the gravity station locations and the geometry (including the depth) of the geologic body of interest. In this discussion on gravity, we will examine methods for determining density contrasts, calculating gravity effects of simple geologic bodies, and measuring and reducing gravity field data. We will also look at interpretation: the art of deriving a subsurface density distribution that could geologically and mathematically explain an observed gravity field.

Densities of Rocks

Density Computation

In surface gravity exploration, we look for lateral density contrasts that we can relate to subsurface geologic features, such as stratigraphic changes, faults and folds. A layer-cake geologic section with no local structure or stratigraphic changes will not produce local gravity anomalies, because there is no lateral density contrast present. In this section, we look at rock densities and practical methods for determining them. The bulk density of any rock is the sum of each constituent mineral's volume fraction multiplied by its density ρi:

ρb = Σ Vi ρi   (1)

where
Vi = volume fraction of the ith mineral, expressed as a dimensionless decimal
ρi = density of mineral i, g/cm3
ρb = bulk density, g/cm3
The sum of the volume fractions should equal 1. Mineral densities can range from near zero for air or shallow natural gas, to 2.65 g/cm3 for quartz, 2.70 g/cm3 for calcite, 2.87 g/cm3 for dolomite, and up to 19.3 g/cm3 for gold. For a two-component system made up of one matrix mineral, porosity (φ, expressed as a fraction), and a fluid, we have:

ρb = ρm (1 - φ) + ρf φ   (2)

where ρm is the matrix density and ρf is the pore-fluid density. A sedimentary rock's porosity depends on such characteristics as grain size and shape, depth of burial, geologic age and geologic history (e.g., metamorphism or diagenesis, sorting of grain sizes, maximum depth of burial, etc.). The porosity of intrusive or metamorphic rocks is frequently near zero except where fracture systems are present. The porosity of extrusive rocks can vary greatly depending on the environment of deposition and geologic history since deposition. Because rock porosity can greatly affect bulk density, we must exercise care when computing bulk densities of sedimentary, metamorphic and extrusive rocks.

Density Contrasts

Local gravity anomalies are caused by lateral density contrasts. If horizontal density layering is uniform, then we will observe no local gravity anomalies. Once we have selected a base density, we can compute density contrasts as follows:

Δρ = ρanomalous - ρbackground   (1)

where
Δρ = density contrast
ρanomalous = anomalous density
ρbackground = density of the "background"
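To make the bookkeeping concrete, here is a minimal sketch (not from the source) of the bulk-density and density-contrast relationships above; the sandstone porosity, fluid density and background density in the example are hypothetical.

```python
# A minimal sketch, not from the source: bulk density from volume fractions,
# the two-component (matrix + fluid) form, and a density contrast.
def bulk_density(fractions_and_densities):
    """Sum of (volume fraction x density, g/cm3); fractions must sum to 1."""
    total = sum(v for v, _ in fractions_and_densities)
    assert abs(total - 1.0) < 1e-6, "volume fractions should sum to 1"
    return sum(v * rho for v, rho in fractions_and_densities)

def porous_rock_density(matrix_density, porosity, fluid_density):
    """Two-component form: rho_b = rho_m * (1 - phi) + rho_f * phi."""
    return matrix_density * (1.0 - porosity) + fluid_density * porosity

# Hypothetical 20%-porosity quartz sandstone saturated with fresh water:
rho_b = porous_rock_density(2.65, 0.20, 1.00)
same = bulk_density([(0.80, 2.65), (0.20, 1.00)])        # identical result
print(round(rho_b, 2), "g/cm3")                          # about 2.32 g/cm3
print(round(rho_b - 2.67, 2), "g/cm3 contrast")          # against a 2.67 g/cm3 background
```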

For example, if a massive sulfide ore body has a bulk density of 4.0 g/cm3, and is contained in a limestone having a density of 2.7 g/cm3, the density contrast of the ore body is +1.3 g/cm3. The amplitude of the gravity anomaly observed from such an ore body is proportional to the density contrast. We can determine density contrasts directly by calculating the gravity effect of a known feature and comparing this calculated effect with observed gravity, or indirectly by determining bulk densities and computing density differences.

Figure 1 ( Salt dome density model, U.S. Gulf Coast ) illustrates the concept of density contrast for the case of a salt dome in the United States Gulf Coast area.

Figure 1

With normal impurities, the average density of salt is assumed nearly constant as a function of depth, although it can vary between 2.15 and 2.20 g/cm3. Here, the density is a constant 2.20 g/cm3. The densities of the Gulf Coast clastic rocks surrounding the salt dome generally increase with depth at a gradually decreasing rate. Figure 1 shows clastic rock density increasing from 2.0 g/cm3 to over 2.55 g/cm3. The density contrast (Δρ) is the salt density minus the density of the laterally adjacent clastics. Note that at a certain crossover depth, the clastic and salt densities are equal; there is no density contrast. Above the crossover depth, the salt is more dense than the surrounding clastic rocks; below the crossover depth, it is less dense. The local gravity anomaly observed over the salt dome is directly related to lateral density contrasts in the subsurface. If the salt dome top is at or below the depth of density crossover, we will observe a gravity minimum. If the salt dome extends above the density crossover, we will observe the same long-wavelength gravity minimum, but with a superimposed, shorter-wavelength positive gravity anomaly due to the salt above the crossover. Interpretation of salt located near the density crossover can be very ambiguous. Magnetic and seismic data can sometimes assist in such interpretations.

Figure 2 ( Density vs. formation type, Rocky Mountain area, USA ) illustrates a different density distribution.

Figure 2

Here, density varies primarily with lithology and formation type rather than only with depth, as was the case for the clastic rocks surrounding the salt dome in Figure 1 . Where density varies with formation type, we must determine or estimate the density of each formation as shown in Figure 2 . Then we can subtract a reference (background) density from all density layers to produce a model of density contrasts. For example, in Figure 2 , we could choose a reference density of 2.67 g/cm3 to correspond to the density of the Precambrian basement. The density contrasts of the overlying layers would then be (from top to bottom, respectively) -0.37, -0.22, -0.12, 0.0, and 0.0 g/cm3. Notice that the layers with densities of 2.30 g/cm3 (contrast -0.37 g/cm3) and 2.45 g/cm3 (contrast -0.22 g/cm3) are absent over the faulted anticline. The absence of these negative density contrasts will enhance the relative positive gravity anomaly observed over the anticline. Notice also that the "Top of High Density Rocks" occurs at the top of the carbonate section, which is above the basement.

Sources of Density Data

There are a number of practical methods for obtaining density information, some of the more common of which are mineral composition analysis and the use of data from surface samples, cores, wireline logs, seismic data and gravity profiling.

Calculation Based on Composition

In making calculations based on composition, we must convert all mineral proportions to volume fractions, and then multiply each volume fraction by the density of the mineral occupying that volume fraction. The sum of the volume fractions should equal 1.

Surface Samples

Many times, particularly in mining applications, surface samples are our only source of density information. We must be careful to (1) properly identify samples, (2) collect fresh samples which have minimum porosity alteration due to recent surface weathering, and (3) correct for the presence of pore fluids, depending on whether the rocks in the exploration problem are above or below the water table. Bailey (1945) patented a method for determining rock density which applies Archimedes' principle, using a device known as a Jolly balance.

Cores

Core analyses often contain information relating to grain density and fluid saturations. We can use these densities provided that we adjust grain densities, or matrix densities, for the expected actual porosities and fluid saturations. Core analyses can also provide information that helps us calibrate density measurements from wireline logging tools (e.g., sonic or neutron).

Wireline Logs

Wireline logs, when available, are usually the best source of density information. Exploratory wells generally have more complete log suites than development wells. As a rule, the ideal log for determining density is the Gamma-Gamma Density Log (assuming that it is properly calibrated, measured in a hole in good condition, and run from "grass roots" to basement).

Note: This discussion on wireline logs is designed to point the reader in the right direction. Specific response characteristics of logging tools or questions regarding logging tools should be discussed with your company's log analyst and/or a qualified service company representative.

Wireline logging tools respond differently and independently to different matrix compositions and to the presence of fluids and/or gas. A combination or suite of logs thus provides more information about a formation than does any one individual log.

Gamma-Gamma Density Log

"Density logs are primarily used as porosity logs. Other uses include identification of minerals in evaporite deposits, detection of gas, determination of hydrocarbon density, evaluation of shaly sands and complex lithologies, determinations of oil-shale yield, calculation of overburden pressure and rock mechanical properties." -Schlumberger, Log Interpretation Principles, 1989

In gravity exploration, we use density logs to determine subsurface bulk densities of rocks. From these bulk densities, we determine density contrasts, which we use to help interpret gravity data. The gamma-gamma tool's depth of investigation is only a few inches away from the borehole in a radial direction. Its principle of measurement, as summarized in Schlumberger's Log Interpretation Principles (1989), is as follows: "A radioactive source, applied to the borehole wall in a shielded sidewall skid, emits medium-energy gamma rays into the formations. These gamma rays may be thought of as high-velocity particles that collide with the electrons in the formation. At each collision a gamma ray loses some, but not all, of its energy to the electron, and then continues with diminished energy. This type of interaction is known as Compton scattering. The scattered gamma rays reaching the detector, at a fixed distance from the source, are counted as an indication of formation density. The number of Compton-scattering collisions is related directly to the number of electrons in the formation. Consequently, the response of the density tool is determined essentially by the electron density (number of electrons per cubic centimeter) of the formation. Electron density is related to the true bulk density, ρb,
which, in turn, depends on the density of the rock matrix material, the formation porosity, and the density of the fluids filling the pores."

Table 1 lists common rocks and minerals with their corresponding actual bulk densities and apparent bulk densities (derived from electron density). Note that a considerable correction is needed to obtain the true bulk density in halite and sylvite. Schlumberger (1989) gives several equations relating electron density, bulk density, the sum of the atomic numbers (Z) making up a molecule, and molecular weight, which further explain how Table 1 is derived.

Compound           Formula        Actual Density, ρb   2Σ Z / Mol. Wt.   Electron Density Index, ρe   ρa (as seen by tool)
Quartz             SiO2           2.654                0.9985            2.650                        2.648
Calcite            CaCO3          2.710                0.9991            2.708                        2.710
Dolomite           CaCO3·MgCO3    2.870                0.9977            2.863                        2.876
Anhydrite          CaSO4          2.960                0.9990            2.957                        2.977
Sylvite            KCl            1.984                0.9657            1.916                        1.863
Halite             NaCl           2.165                0.9581            2.074                        2.032
Gypsum             CaSO4·2H2O     2.320                1.0222            2.372                        2.351
Anthracite Coal                   {1.400 - 1.800}      1.0300            {1.442 - 1.852}              {1.355 - 1.796}
Bituminous Coal                   {1.200 - 1.500}      1.0600            {1.272 - 1.590}              {1.173 - 1.514}
Fresh Water        H2O            1.000                1.1101            1.110                        1.00
Salt Water         200,000 ppm    1.146                1.0797            1.237                        1.135
Oil                n(CH2)         0.850                1.1407            0.970                        0.850
Methane            CH4            ρmeth                1.2470            1.247 ρmeth                  1.335 ρmeth - 0.188
Gas                C1.1H4.2       ρg                   1.238             1.238 ρg                     1.325 ρg - 0.188

Table 1: Gamma-gamma log measured and true bulk densities (g/cm3). (Courtesy Schlumberger Oilfield Services, 1989.) Figure 6 ( Schlumberger FDC tool ) is a schematic representation of Schlumberger's FDC (Formation Density Compensated) gamma-gamma density logging tool.

Figure 6

This tool employs two gamma ray detectors and a single gamma ray source. It automatically compensates for borehole effects to produce the data display of Figure 5 ( FDC Log Display ), which shows:

Bulk density, ρb (g/cm3)
The borehole correction (Δρ), made from the comparison of the short- and long-spacing detector outputs (g/cm3)
Gamma ray correlation log
Caliper curve

Figure 5

In hole sections where washouts (indicated by the caliper log) or fractures (indicated by the caliper, density correction, or other wireline logs) exist, the densities indicated by the log are lower than their true values. Schlumberger's latest density logging tool is known as the Litho-Density Log. This tool is similar in appearance and in operation to the FDC tool except that, in addition to a bulk density measurement, the tool also measures the photoelectric absorption index. This index can sometimes be related to lithology. To determine density distributions from a density log, we usually block the density log into segments of constant density or density gradients. Blocking smooths out the thin-bed density chatter. Figure 5 provides an example of density blocking. For surface gravity interpretation, possible density log blocks on Figure 5 might be:

Depth Interval      Density
7005-7050 ft        2.65 g/cm3
7050-7065 ft        2.39 g/cm3
7065-7102 ft        2.70 g/cm3
7102-7212 ft        2.50 g/cm3

When we have completed a table of densities vs. depth blocks, we can plot the data points with a compressed depth scale (such as 1,000 ft per inch) and an expanded density scale (such as 0.2 g/cm3 per inch). The resulting graph should show the density-depth or density-formation relationship (example, Figure 4 , Example of density blocking ).
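As a small illustration of working with blocked densities, the sketch below (not from the source) uses the hypothetical block table above to compute a thickness-weighted average density for the logged interval.

```python
# A small sketch, not from the source: thickness-weighted average density
# of the blocked interval listed in the text (hypothetical values).
blocks = [          # (top_ft, base_ft, density_g_per_cm3)
    (7005, 7050, 2.65),
    (7050, 7065, 2.39),
    (7065, 7102, 2.70),
    (7102, 7212, 2.50),
]

total_thickness = sum(base - top for top, base, _ in blocks)
weighted_density = sum((base - top) * rho for top, base, rho in blocks) / total_thickness
print(f"Interval 7005-7212 ft: average density = {weighted_density:.2f} g/cm3")
```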

Figure 4

Borehole Gravity Meter

The Borehole Gravity Meter (BHGM) can provide accurate bulk density information if we know enough about the subsurface geology to make certain corrections for nearby, large-scale structures. This tool works well in cased holes. Resolution of thin beds depends on the density contrast and the accuracy of depth measurements. We discuss this tool in greater detail in Section 8.

Sonic Log

The sonic or acoustic log measures the interval transit time (Δt) of P waves, which is the inverse of the P-wave interval velocity.

Δt = 10^6 / Vint   (1)

where Δt is in μs/ft and Vint is in ft/s. Gardner et al. (1974) made an extensive study of the relationship between interval velocity and bulk density. Gardner's empirical relationship is:

ρb = 0.23 (Vint)^0.25   (2)

where Vint is in ft/s and ρb is in g/cm3. This relationship generally works well for most clastic and carbonate rocks, at least as a first approximation, if no density log data are available. Figure 3 ( Velocity-Density relationship in rocks of different lithology ) shows Gardner's empirical relationship with lithologic effects.

Figure 3

Note that salt has a bulk density much lower, and anhydrite a bulk density much higher, than Gardner's empirical relationship would compute. To eliminate this problem, we can adjust the constants of Equation 2 to fit local geology. We should block the sonic log as we would the density log. Then, we can convert the blocked Δt's to bulk densities and determine a density relationship. As Figure 3 indicates, rocks of differing densities may have the same interval velocity. To distinguish among different rock types, we need additional knowledge of the local geology, or log data in addition to that provided by the sonic tool. For example, Figure 2 ( Δt vs. ρB crossplot for density determination ) shows a crossplot of sonic Δt vs. density log bulk density.

Figure 2

If sonic and density logs are both available, we can estimate lithology from the sonic-density crossplot.

In geopressured (overpressured) zones, where fluid pressures are greater than normal hydrostatic pressure, velocities and bulk densities are generally lower than normal. Thus, geopressured masses of rock (e.g., shale diapirs) can produce gravity minima. Density analysis for a region should include thorough research to see if any geopressure zones are known to exist.

Neutron Log

The neutron logging tool is sensitive to the hydrogen concentration in the formation. The neutron log gives fairly reliable liquid-filled porosities in non-shaly rocks. Density calculations require a knowledge of (or assumptions regarding) the matrix lithology. When gas is present, neutron-measured porosities are lower than their actual values, because gases have a relatively low concentration of hydrogen when compared with either water or liquid hydrocarbons. Most modern neutron logs include a compensation for borehole effects and display porosity, gamma ray, and caliper curves. In shales, neutron porosities are higher than their actual values due to the presence of bound water (a hydrogen source). Older neutron logs read in counts and need to be calibrated to porosity. Your company log analyst or wireline service company representative would be familiar with this process.

Seismic Data

The relationship between P-wave interval velocity and bulk density (Gardner et al., 1974) makes it possible to obtain bulk density estimates from seismic data. We can use reflection seismic interval velocities or refraction layer velocities in conjunction with Gardner's relationship to estimate interval bulk densities. Additionally, by comparing the calculated gravity effect of a seismically identified structure with the observed gravity anomaly, we can often obtain information on density contrasts. If we know some background densities, then we can estimate absolute densities. This illustrates the potential of using gravity (or magnetic) data to help identify the lithology of unknown intrusions (e.g., salt, igneous, reef, etc.) to produce a complete, integrated geophysical interpretation.

Gravity Profiling Method

In areas where topography is not structurally or stratigraphically related, the gravity profiling method can yield reliable densities for rocks within the topography. Remember, though, that topographic features are often geologically controlled, and that in such cases the method may be unreliable. This method has been published several times by Nettleton. Figure 1 ( Nettleton density profiling method ) illustrates the use of the gravity profiling method,

Figure 1

which involves applying elevation factors that are based on free-air gravity corrections (to compensate for the effects of elevation) and simple Bouguer corrections (to compensate for the effects of topography or geologic structure). From these, we select a Bouguer density that minimizes the correlation between topography and the Bouguer gravity. In Figure 1 , this corresponds to a Bouguer density of 2.2 g/cm3. This Bouguer density represents the average density of the near-surface rocks.
a. A sonic log reads a Δt of 50 μs/ft. Calculate the interval velocity and use Gardner's relationship to compute the interval density. What are the pitfalls of this method (refer to Figure 1 )?

Figure 1

b. A sonic log reads a Δt of 100 μs/ft. What is the interval velocity and interval density? Comment on the accuracy of this calculation (refer to Figure 1 ).

a.
Δt = 50 μs/ft
Vint = 1,000,000/Δt = 20,000 ft/s
ρb = 0.23 (Vint)^0.25 = 0.23 × 20,000^0.25 = 0.23 × 11.89 = 2.74 g/cm3

This density would be approximately correct for a carbonate rock. However, anhydrite also has a velocity of 20,000 ft/s but a much higher density, 2.96 g/cm3. We would need local geologic knowledge or additional log data (e.g., in the form of a crossplot such as the one shown in Figure 1 ) to distinguish between the two cases.

Figure 1

b.

Δt = 100 μs/ft
Vint = 1,000,000/Δt = 10,000 ft/s
ρb = 0.23 (Vint)^0.25 = 0.23 × 10 = 2.30 g/cm3

Refer to Figure 2.

Figure 2.

At 10,000 ft/s there is no large ambiguity between rock types, such as there was between carbonate rocks and anhydrite in part a of this problem. However, Figure 2 shows that, at 10,000 ft/s, shaly rocks are generally slightly more dense than 2.30 g/cm3 (perhaps by about 0.04 g/cm3 on average) and sandy rocks are generally less dense than 2.30 g/cm3 (perhaps by about 0.08 g/cm3). Thus, our answer of 2.30 g/cm3 is within about 0.08 g/cm3 unless something is known about the shaliness of the rock. If the rock is known to be sand or shale, then we can make a more accurate density estimate.
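A minimal sketch (not from the source) of Gardner's relationship, reproducing the two exercise calculations; the constants 0.23 and 0.25 are Gardner's published values and can be adjusted to local geology as noted above.

```python
# A minimal sketch, not from the source: Gardner's empirical relationship
# (Equation 2 of the sonic log section), applied to the exercise values.
def gardner_density(delta_t_us_per_ft, a=0.23, b=0.25):
    """Bulk density (g/cm3) from sonic transit time (microseconds per foot)."""
    v_int = 1.0e6 / delta_t_us_per_ft      # interval velocity, ft/s
    return a * v_int ** b

for dt in (50.0, 100.0):                   # exercise parts (a) and (b)
    print(f"dt = {dt:5.1f} us/ft -> Vint = {1e6/dt:7.0f} ft/s, "
          f"rho_b = {gardner_density(dt):.2f} g/cm3")
```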

Determination of Gravity Effects on Geologic Bodies

Infinite Slab Model


We can approximate a body of relatively constant thickness and very large areal extent by representing it as an infinite slab, that is, as a horizontal slab of material of thickness t that extends to infinity in all horizontal directions. Bodies that can be modeled in this manner include lava flows or sedimentary sections with a uniform thickness and flat dip over a large area. The infinite slab model assumes that the gravity effect is independent of depth, and does not take into account the Earth's curvature or its spherical coordinate system.
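The infinite slab effect reduces to a one-line formula, g = 2πGΔρt, which works out to about 12.77 mGal per g/cm3 of density contrast per kilofoot of thickness (or 0.04191 mGal per g/cm3 per meter), as used later in this section. The sketch below (not from the source) simply evaluates these constants; the example values are taken from the exercise that follows.

```python
# A small sketch, not from the source: the infinite slab (Bouguer slab) formula
# expressed in the field units used in this text.
def slab_mgal_per_kft(drho_g_cm3, thickness_kft):
    """Gravity effect of an infinite slab: 12.77 mGal per (g/cm3 * kilofoot)."""
    return 12.77 * drho_g_cm3 * thickness_kft

def slab_mgal_per_m(drho_g_cm3, thickness_m):
    """Same slab formula with thickness in meters: 0.04191 mGal per (g/cm3 * m)."""
    return 0.04191 * drho_g_cm3 * thickness_m

# Example from the exercise later in this section:
# a 10,000-ft slab of sediments with a -0.13 g/cm3 contrast against granite.
print(round(slab_mgal_per_kft(-0.13, 10.0), 1), "mGal")   # about -16.6 mGal
```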

Thin 2-D Prism


We can approximate many geologic features using the thin 2-D prism model. The mathematics of this simple graphical method is based on concentrating the mass of the prism as a horizontal "sheet of mass" at

the prism's average depth ( Figure 1 , Thin 2-D prism model ).

Figure 1

As an approximation, we may consider a body as "thin" if its thickness does not exceed its depth of burial. This 2-D representation might be appropriate for an elongate geologic feature. To make the calculation graphically, we draw a cross section of the body with the depth, thickness, and width shown at natural (vertical = horizontal) scale. We then measure the plane angles with a protractor. For more detailed modeling of elongate bodies, we can use "2 1/2-D" computer modeling (with end corrections to adjust for the non-infinite extent) or 3-D modeling. The thin 2-D prism model is useful for anticlines, horst blocks, grabens, channel sands, or any other body which is linear in map view. We can easily model faults by assuming that the prism extends to infinity on one side of the profile.

Thin Disk Model


Many geologic bodies are finite in areal and vertical extent, and roughly circular in map view. Examples might include igneous intrusives, pinnacle reefs, salt domes or structural domes. We can approximate the gravity effect of these bodies using the thin disk model (where thickness is less than depth), or by a series of thin disks stacked vertically. Such approximations can be useful for designing complex 2 1/2-D (with end corrections), or 3-D computer models. The attraction g from an observation point is:

g = 2.03 Ω Δρ t   (1)

where
Ω = solid angle, in steradians, subtended by the disk taken at its average depth
t = thickness of disk, in thousands of feet
Δρ = density contrast of the disk, g/cm3

Figure 1 ( Solid Angles for Horizontal Circular Disks ) illustrates the thin disk model method and contains a solid angle table for use in doing the calculations.

Figure 1
This method assumes that the disk's mass is concentrated in a sheet at the average depth of the disk, as illustrated by the dashed line around the midpoint of the disk. We can use the following relationship to calculate the solid angle subtended by a disk for a point directly above the center of the disk:

Ω = 2π (1 - cos θ)   (2)

where θ = plane angle between the line perpendicular to the disk from the center of the disk to the observation point and any line between the observation point and the edge of the disk. We would use the edge of the disk at the average depth of the structure. Equation 2 is useful for computing a structure's maximum gravity response without referring to a solid angle table.
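A minimal sketch (not from the source) of Equations 1 and 2: the maximum (on-axis) attraction of a thin disk from its subtended solid angle. The example numbers anticipate the basement-dome exercise later in this section.

```python
# A minimal sketch, not from the source: on-axis thin-disk attraction
# via the solid angle (Equation 2) and the disk formula (Equation 1).
import math

def disk_on_axis_mgal(depth_to_avg_ft, radius_ft, thickness_kft, drho_g_cm3):
    """Maximum (on-axis) attraction of a thin disk, in mGal."""
    theta = math.atan2(radius_ft, depth_to_avg_ft)     # plane angle to the disk edge
    omega = 2.0 * math.pi * (1.0 - math.cos(theta))    # solid angle, steradians (Eq. 2)
    return 2.03 * omega * drho_g_cm3 * thickness_kft   # Eq. 1

# Basement-dome numbers from the exercise: Z = 6,000 ft to the average depth,
# R = 7,500 ft, thickness 2 kft, density contrast 0.2 g/cm3.
print(round(disk_on_axis_mgal(6000.0, 7500.0, 2.0, 0.2), 2))
# roughly 1.9 mGal; the chart reading in the exercise gives about 1.94 mGal
```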

2-D Computer Models


Talwani (1960 and 1965) published a method for calculating gravity and magnetic effects of two-dimensional bodies defined by polygon vertices entered in a cross-sectional view. EDCON has modified this method for use on personal computers, in the form of a program named GMOD. By way of illustration, we can refer to the following 2-D models.

Figure 4 ( Thrust fault example - 2-D modeling ) shows a 2-D thrust fault model.

Figure 4

Note the sharp gravity decrease at X = 12,000 ft, which is related to the thrust sheet edge in the subsurface. Figure 3 ( Salt anticline - 2-D, 3-D ) shows the gravity effect of a salt anticline.

Figure 3

The strike length varies from infinity (labeled "2-D") to 8, 4, 2, and 1 times the body width (labeled 8W, 4W, 2W, and W, respectively). Figure 3 is prepared with the cross section shown at the midway point of the axis of the body perpendicular to the profile. Note that at strike lengths of less than 8W, the two-dimensional assumption does not work very well. EDCON's rule of thumb is that the strike length should be equal to or greater than 10W before the 2-D assumption is really valid. For bodies with lengths less than 10 times their width, it is best to use a 2 1/2-D model (with end corrections for the body geometry outside of the line of section) or 3-D modeling. Figure 2 ( Warm springs valley - interactive model, intermediate product ) shows an interim result of interactive modeling across Warm Springs Valley.

Figure 2
The residual gravity is shown by square "dots," and the calculated gravity effect of the model is shown by the smooth curve. The gravity values are calculated at the topographic elevations shown by the square dots in the model portion of the profile. Note the discrepancy between the computed and the residual anomalies over point A. Figure 1 ( Warm springs valley - interactive model, a final solution ) shows a final solution with point A moved into its proper position such that the calculated effect of the gravity model fits the residual gravity.
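The Talwani/GMOD polygon algorithm itself is not reproduced here, but the same 2-D gravity effect can be approximated by brute force: discretize the cross section into small cells and sum the attraction of each cell treated as an infinite horizontal line mass, dg = 2GΔρ z dA / (x² + z²). The sketch below (not from the source) does this for a hypothetical rectangular 2-D body.

```python
# A hedged sketch, not the Talwani/GMOD algorithm: 2-D gravity by summing
# infinite-horizontal-line-mass contributions from small cells of the cross section.
import math

G = 6.670e-11   # N*m^2/kg^2

def gravity_2d(body_cells, x_obs):
    """body_cells: list of (x_m, z_m, area_m2, density_contrast_kg_m3), with z
    positive downward from the observation level. Returns g_z in mGal."""
    g = 0.0
    for x, z, area, drho in body_cells:
        g += 2.0 * G * drho * z / ((x - x_obs) ** 2 + z ** 2) * area
    return g * 1.0e5   # m/s^2 -> mGal

# Hypothetical rectangular 2-D body: 2 km wide, 0.5 km thick, top at 1 km depth,
# density contrast +300 kg/m^3, discretized into 50 m x 50 m cells.
cells = [(x + 25.0, z + 25.0, 50.0 * 50.0, 300.0)
         for x in range(-1000, 1000, 50)
         for z in range(1000, 1500, 50)]
for xo in (0.0, 1000.0, 3000.0):
    print(f"x = {xo:6.0f} m   g = {gravity_2d(cells, xo):.3f} mGal")
```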

Figure 1

Ex-1:
Calculate the approximate gravity attraction of a 10,000 ft thick section of sedimentary rock having an interval velocity of 15,000 ft/s. This section rests on granite, which has an average density of 2.67 g/cm3. The sedimentary section lies in the middle of a large, gently-dipping basin. Assuming that the densities are properly computed, will treating the section as an infinite slab result in an over-calculation or under-calculation of the effect?

First, we need to determine the density contrast. From Gardner's empirical relationship, a 15,000 ft/s interval velocity corresponds to a bulk density of 2.54 g/cm3. Thus, the density contrast of the infinite slab of sedimentary section is -0.13 g/cm3, since granite has an average density of 2.67 g/cm3. Next, we determine the gravity effect of the infinite slab:

g = a (i.e., "acceleration of gravity," "gravity effect," "computed gravity") = 12.77 Δρ t

where t is in kilofeet, Δρ is in g/cm3 and g is in mGal. Therefore,

g = 12.77 × (-0.13 g/cm3) × (10 kilofeet) = -16.6 mGal

Thus, an anomaly of about -16.6 mGal could be expected over this sedimentary basin. Since the calculation is made in the middle, where the thickness of the slab is greatest, the calculation over-calculates the gravity effect somewhat (refer to Figure 1 ).

Figure 1

Ex-2:
A fault having a displacement of 1,000 ft occurs at 2,000 ft depth below the surface; a low-density layer is present on the downthrown side of the fault which is absent on the upthrown side. Assuming a density contrast of 0.25 g/cm3 across the fault, graphically compute the effect of the fault. Compute the effect if the fault were buried at 5,000 ft. What gravity criteria do you see which indicate the location and the depth of the fault? Refer to Figure 1 .

Figure 1

The procedure is as follows: First, draw the problem at a convenient and natural (i.e., vertical = horizontal) scale, with the observation level in the same natural scale as the body. Then, dash in the average depth of the high-density mass on the upthrown side of the fault. Next, compute the "constant" in the formula:

g = a = 0.071 Δρ t θ = 0.071 × (0.25 g/cm3) × (1 kilofoot) × θ = 0.0178 θ mGal, with θ in degrees

Next, strike the angles between the horizontal (the average depth point on the upthrown side of the fault at infinity) and the average depth point on the fault plane itself. Multiply each of these angles by 0.0178, and then plot the result at the same horizontal scale as the geologic cross section. Repeat this process for the fault buried at 5,000 ft.

Analysis: From the thin 2-D prism formula, if the observation point were far enough west, a gravity value would be computed as follows:

g = 0.0178 × 180 = 3.20 mGal

Note also that if we use the infinite slab formula, we compute the identical value if the observation point is west at infinity:

g = 12.77 Δρ t = 12.77 × (0.25 g/cm3) × (1 kilofoot) = 3.20 mGal

It is apparent that if the observation point were far enough east, a gravity value would be computed as follows:

g = 0.0178 × 0 = 0.0 mGal

We can observe from the gravity model that for both the deep and the shallow fault models, the gravity directly over the fault is given by:

g = 0.0178 × 90 = 1.60 mGal

This is exactly half of the infinite slab amplitude. From the figure, we can also see that the shallower fault causes a steeper gravity gradient than the deeper fault. Also, for both the shallower and the deeper fault, the steepest gravity gradient and the inflection of the gravity curve occur directly over the fault plane. It can be shown mathematically that the average depth of the fault is equal to the horizontal distance between the "half amplitude" point (over the fault plane) and either the "3/4 amplitude" or "1/4 amplitude" point. This exercise illustrates the usefulness of models in defining the gravity response to a postulated or known geologic situation. We can determine both the anomaly amplitude and shape, as well as the location of the gravity anomaly in relation to the geologic body.
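A minimal sketch (not from the source) of the graphical thin 2-D prism calculation used in this exercise. It assumes the displaced low-density layer spans 2,000 to 3,000 ft, so the equivalent sheet sits at an average depth of 2,500 ft; the subtended angle at a station is 90 degrees plus arctan(x/z), with x positive toward the side where the (relatively) high-density sheet is present.

```python
# A hedged sketch, not from the source: semi-infinite thin-sheet (fault) gravity
# profile using g = 0.071 * drho * t * angle_in_degrees (mGal).
import math

def fault_gravity_mgal(x_ft, avg_depth_ft, drho_g_cm3, t_kft):
    """x positive toward the sheet; avg_depth_ft is the sheet's average depth."""
    angle_deg = 90.0 + math.degrees(math.atan2(x_ft, avg_depth_ft))
    return 0.071 * drho_g_cm3 * t_kft * angle_deg

# Exercise values: 1,000-ft throw (t = 1 kft), contrast 0.25 g/cm3,
# assumed average sheet depth of 2,500 ft for the shallow case.
for x in (-10000, -2500, 0, 2500, 10000):
    print(f"x = {x:7d} ft   g = {fault_gravity_mgal(x, 2500.0, 0.25, 1.0):.2f} mGal")
# Over the fault (x = 0) this gives about 1.60 mGal, half the 3.2 mGal slab value.
```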

Ex-3:
Compute the gravity effect of a basement dome, buried 5,000 ft and having structural relief of 2,000 ft above the surrounding basement rock. The density contrast between basement and sedimentary rocks is 0.2 g/cm3. Compute the gravity effect along a profile through the axis of the dome. The diameter of the dome is 15,000 ft. Refer to the solid angle chart shown in Figure 1 .

Figure 1
a. Define parameters for the solid angle chart ( Figure 1 ): depth = 5,000 ft and thickness = 2,000 ft. Therefore, Z = 6,000 ft (depth to the disk's average depth). Since the diameter is 15,000 ft, R = 7,500 ft (radius of disk). Therefore, Z/R = 6,000/7,500 = 0.80.

Figure 1

b. Set up a table as follows:

X, ft      X/Z     Ω, steradians     g, mGal     Comments
0          0       2.40              1.94        On axis
6,000      1       1.70              1.38
12,000     2       0.55              0.45
18,000     3       0.18              0.15
24,000     4       0.08              0.06

i. In Figure 1 , read solid angles from the solid angle chart along vertical line having the value Z/R = .80. Read the appropriate values of X/Z to determine the solid angles by interpolating between contours.

ii. Determine value of constant for disk as follows:

g = 2.03 Ω Δρ t = 2.03 × Ω × (0.2 g/cm3) × (2 kilofeet) = 0.81 Ω mGal

iii. Then multiply each solid angle by 0.81 to complete the table. c. Plot the result over the body (see Figure 1 ). Note that the body is symmetrical, so we only need to calculate one side of the body. Also note that this plot gives both the amplitude and shape of the gravity anomaly. We can use this information to design survey specifications or interpretation procedures.

Gravity Data: Reduction and Processing

Introduction
A gravity anomaly is the difference between what one observes and what one expects from applying a simple Earth model. The purpose of gravity data reduction and processing is to produce gravity anomaly values that relate to subsurface geology more readily than the observed data. To do this, we correct for known or expected effects so that maps and profiles show only anomalous effects. The two most common products of data reduction are free-air anomalies and Bouguer anomalies. Many contend that the Bouguer anomaly is an interpretive product, because its purpose is to remove the effect of topography. To accomplish this purpose, we must correctly judge the density of topography. In rugged or even moderate terrain, the Bouguer anomaly map's appearance depends greatly on the choice of the Bouguer density. The choice of Bouguer density can therefore be a very important first step in interpretation. Any depth estimation or modeling analysis of the normal free-air and Bouguer gravity anomalies must be referenced to the observation surface. A common misconception about the gravity data reduction process is that the gravity field is somehow "reduced to sea level" or some other datum. Several textbooks incorrectly describe gravity reduction as reduction to a datum, even though they correctly outline the mechanics of computing anomalies. Ervin (1977) points out the problem, while LaFehr (1991) reviews the question in detail and includes a list of correct and incorrect texts on the subject. The role of the reduction datum (usually sea level) is only to provide a starting point for computing effects of an assumed Earth model. The concept that the reduction to Bouguer gravity produces a corrected field that is the same as the anomaly one would have observed at sea level is wrong, particularly in areas of high elevation and in rugged topography. Consider Figure 1 ( Comparison of Bouguer anomaly observed on the ground and at 2500 m elevation ), where Bouguer anomaly values have been computed for two sets of gravity observations over a hypothetical model.

Figure 1

One set of observations is on the terrain surface, and the other is at a constant elevation to simulate an airborne gravity profile. The sharper gradients of the Bouguer anomaly observed at the surface reflect the proximity of the truncated high density layer compared to the airborne observations.

Station Gravity
Station gravity is the absolute value of gravitational acceleration (g), or gravity, at some point or station on the Earth's surface. In exploration practice, gravity surveys are carried out using gravity meters. These spring-balance type instruments are capable of efficiently measuring small changes in gravity (Δg), although they cannot directly measure absolute station gravity. During the early part of the twentieth century, pendulums were used to establish networks of absolute gravity reference stations; use of the Potsdam Gravity System was nearly universal from 1909 to 1971. Since about 1970, it has become increasingly practical to obtain more accurate absolute gravity measurements using weight-drop instruments, and this has resulted in the adoption of a new reference system. The old Potsdam gravity values are too high by about 14 mGal. The new system, known as IGSN 71 (International Gravity Standardization Net, 1971), corrects this error. Station gravity has been established at several points over the Rocky Mountain Gravity Calibration Line near the Colorado School of Mines, using a weight-dropping apparatus that can measure absolute gravity to an accuracy of about 5 μGal. The value of the absolute gravity reference at the Colorado School of Mines is 979,571.131 mGal; the highest station, at Echo Lake on Mt. Evans, has a value of 979,256.073

mGal. Keep in mind that these are values for gravity when the tidal effects of the sun and the moon are zero, and that over the course of a day, the tidal variation is as much as 0.3 mGal (peak to trough). So, even with a very accurate instrument capable of measuring the absolute value of gravity, it is necessary to correct for the tidal variation of gravity to obtain a tide-free station gravity value. Computer algorithms (e.g., Longman, 1959) are available to compute tidal variations in gravity as a function of time and station location.

Gravity meters indicate readings in instrument counter units. These are typically close to 1 mGal/counter unit, but they require calibration, either by multiplying by a scale factor or by interpolating from a calibration table. Once the gravity meter reading is calibrated in milligals, and a base constant is added to the calibrated reading, we may obtain an absolute gravity value. The base constant varies with time and ambient instrument temperature. This variation is referred to as instrument drift. We can determine the base constant for a particular meter by reading the meter at a base where absolute gravity has been determined. Thus,

station gravity = (calibrated gravity reading) + (tide correction) + (base constant)

Successive occupation of a base station with an established gravity value is the common means of monitoring instrument drift and establishing the time variation of the base constant. A low-drift instrument has a nearly unvarying base constant. LaCoste and Romberg instruments (which are considered low-drift) drift on the order of one mGal per month at constant ambient temperature. There are two tidal peaks and lows over the course of a day. It used to be common practice to correct for instrument drift and tidal variation all at once by interpolating linearly between base station readings. This may be a satisfactory practice where survey accuracy tolerance is greater than 0.1 mGal, as long as the interval between base occupations is less than 6 hours (the average time between peak and low tidal effects). This practice was used to avoid the labor and cost of computing tide corrections from tables. With the wide availability of computers, the most common and the best practice is to compute the tidal correction and correct for instrument drift separately. Table 1 shows an example of data taken with a computer-nulled meter over a calibration range:

Station   Time    Meter Reading    Cal. Reading   Tide Corr.   Tide-Corrected     Drift    Base Const.   Station Gravity
                  (calib. units)   (mGal)         (mGal)       Gravity (mGal)     (mGal)   (mGal)        (mGal)
BASE      08:16   3097.759         3135.242       -0.008       3135.234           0.000    976432.022    979567.256
95        09:44   3007.760         3044.154       -0.050       3044.104           0.003    976432.019    979476.123
100       10:18   3002.352         3038.680       -0.055       3038.625           0.004    976432.018    979470.643
95        10:44   3007.768         3044.162       -0.055       3044.107           0.005    976432.017    979476.124
98        11:16   3004.600         3040.956       -0.053       3040.903           0.006    976432.016    979472.919
95        11:35   3007.765         3044.159       -0.049       3044.110           0.007    976432.015    979476.125
BASE      12:06   3097.799         3135.282       -0.040       3135.242           0.008    976432.014    979567.256

Table 1: Example of data taken with a computer-nulled meter over a calibration range

Note that the repeat values at station 95 agree to within 2 μGal (0.002 mGal). This is unusually precise data. The example illustrates the correction sequence from a gravity meter measurement to the establishment of a station gravity value. Drift was computed by assuming zero at the first base reading and linearly interpolating the increase in tide-corrected gravity reading from the first to the next, or last, base reading. The example data were taken on a day when there was little ambient temperature variation. For very precise surveys (e.g., precision tolerance of less than 0.1 mGal) it is important to protect the meter from large temperature changes, for example, by keeping the meter shaded and not leaving it inside a hot vehicle. It is also good practice to record temperature at each station.
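A minimal sketch (not from the source) of the correction sequence illustrated in Table 1: calibrated reading, tide correction, linearly interpolated drift, and base constant. The clock times are converted to decimal hours; the numbers reproduce the station 95 reading at 09:44.

```python
# A minimal sketch, not from the source: station gravity from a calibrated
# meter reading, with drift interpolated linearly between two base occupations.
def station_gravity(cal_reading, tide_corr, time_hr, base_times, base_drifts, base_const):
    """cal_reading, tide_corr, base_const in mGal; base_times in decimal hours;
    base_drifts are the drift values observed at the two base occupations (mGal)."""
    t0, t1 = base_times
    d0, d1 = base_drifts
    drift = d0 + (d1 - d0) * (time_hr - t0) / (t1 - t0)   # linear drift interpolation
    return cal_reading + tide_corr - drift + base_const

# Station 95 at 09:44 in Table 1: bases occupied at 08:16 and 12:06, the base
# reading increased by 0.008 mGal over the loop, base constant 976432.022 mGal.
g95 = station_gravity(3044.154, -0.050, 9.733, (8.267, 12.1), (0.0, 0.008), 976432.022)
print(round(g95, 3))   # close to the tabulated 979476.123 mGal
```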

Corrections for Expected Variations


The correction for latitude reflects the expected increase in gravity with latitude for two reasons: (1) decreasing centrifugal force, because the radius of Earth rotation decreases as the observation point nears the poles, and (2) decreased distance from the center of the Earth at the pole compared to the equator (polar flattening). The correction for elevation reflects the expected decrease in gravity with increase in elevation; the observation point is farther from the center of the Earth's mass at higher elevations. The Bouguer Anomaly is the most common presentation of gravity measurements used in exploration. The intent of the Bouguer reduction is to use a simple, single-density model to separate topographically-related elements in the field from the geologic anomalies of interest. In rugged topography, the choice of Bouguer correction density becomes a critical decision affecting the interpretation of the Bouguer Anomaly map or profile.

Latitude Correction
Gravity decreases from about 983,000 mGal at the pole to about 978,000 mGal at the equator. About 3400 mGal of this decrease results from a difference in centrifugal acceleration; the rest is due to polar flattening. The International Gravity Formula (GRS67, Geodetic Reference System, 1967) for computing the expected value of gravity on the reference ellipsoid (a near-sea-level mathematical surface that closely approximates the Earth's shape) is:

g(φ) = 978 031.846 (1 + 0.005 278 895 sin²φ + 0.000 023 462 sin⁴φ)   (1)

where φ = latitude and g(φ) is in mGal. Equation 1 was developed for use with IGSN 71, and is now used to reduce most exploration data. Prior to 1971, a substantial fraction of exploration data had been referenced to the Potsdam datum and the 1930 International Gravity Formula, which is given by

g(φ) = 978 049 (1 + 0.005 2884 sin²φ - 0.000 0059 sin²2φ)   (2)

Equation 1 was derived as an improvement of Equation 2, based on observations of satellite orbital characteristics resulting from Earth's shape and distribution of mass. The choice of a reference system for the latitude correction is of no practical importance for interpreting gravity data at the exploration scale (i.e., areas with dimensions of hundreds of miles, or less), except that surveys reduced using the two different gravity formulas will not tie together properly. Because of this, it is not unusual to see new data reduced using the old formula, just to avoid recomputation of the earlier data set. Another common practice with old data sets is to compute a north-south gravity gradient for a latitude near the center of the area. We can do this by differentiating the latitude correction formula to give the rate of change of gravity with latitude:

dg/ds = 0.812 sin(2φ) mGal/km   (3)

where s is north-south distance. The maximum rate of change with latitude occurs at φ = 45 degrees: 0.812 mGal/km, or 0.008 mGal in 10 meters. For a survey to be accurate within 0.01 mGal, we must know the north-south coordinates to within 10 m. For practical purposes, we are interested in the relative changes of gravity, so the required coordinate accuracy is relative to a base rather than absolute. It is poor practice to use this approximation in reducing data, because it complicates integration with independently reduced data sets. In the past, using the approximation rather than evaluating the International Gravity Formula resulted in time and cost savings. Computers are now so widely available that this is no longer the case.

The Geoid, the Ellipsoid, Elevation, and GPS

The geoid is the sea-level equipotential surface. It is the average level of sea water after removing the effects of currents and tides. Or, to put it another way, it is the level that sea water would have in imaginary canals dug underneath the continents. All density variations within the Earth have an effect on geoid shape, including the crust both above and below sea level. Conventional surveying methods (spirit leveling) and inertial surveying determine elevations relative to the geoid. The reference ellipsoid is a mathematical surface constructed to approximate the geoid. Scores of such surfaces have been derived for use in mapping. Sometimes ellipsoids are derived as best fits for regions (e.g., North America or Europe), while ellipsoids such as those developed for GRS67 and WGS84 are meant to fit the geoid for the entire world. By convention and for convenience, we reference the values computed from a gravity formula, such as the GRS67 International Gravity Formula, to sea level rather than to the height of the ellipsoid. This distinction has no practical impact on the geologic interpretation of gravity maps for exploration purposes, except that mixing elevations measured from a sea-level datum and those measured from an ellipsoidal datum in the same survey area would result in unacceptable errors between stations. GPS (Global Positioning System) surveying poses a new hazard for this type of mix of survey reference datums. GPS is the satellite-based navigation system maintained by the United States Department of Defense, and is increasingly used to position geophysical surveys. GPS positions are relative to the World Geodetic System 1984 Ellipsoid. The WGS 84 Geoid is calculated from the spherical harmonic expansion of the gravitational potential, the WGS 84 Earth Gravitational Model (EGM). The WGS 84 EGM is based on a very large number of gravity measurements in a worldwide gravity database. Deviations of the WGS 84 Geoid

from the WGS 84 Ellipsoid have been calculated up to order 180. Figure 1 ( Departures of WGS 84 geoid from WGS 84 ellipsoid, calculated to order 18 ) shows the departures of the geoid from the WGS 84 Ellipsoid calculated to order 18.

Figure 1

The figure shows the relief of the longer-wavelength lumps in the geoid; the lumps at shorter wavelengths are smaller in amplitude. The DMA Technical Report (DMA, 1991) describes the reduction procedures that must be followed for reducing gravity with respect to the WGS 84 reference for geodetic purposes. Common land surveying using spirit leveling delivers elevations referenced to the relatively bumpy geoidal surface. Because of the proliferation of ellipsoidal models and elevation datums, explorationists must be careful to document their survey base references. The choice of elevation datum will not affect the geologic interpretation of an anomaly map as long as the datum is consistent within the prospect. Even though the geoidal surface is bumpy and reflects the gravity anomalies that are the "targets" of an exploration survey, the relative amplitude of geoidal relief at wavelengths of detailed exploration interest is almost unmeasurable. Sandwell (1992) describes a method to compute gravity anomalies from a network of geoid height measurements at sea using satellite altimetry. A simple relationship we can see from his work is that for a geoid height anomaly with a wavelength λ and amplitude h, the associated gravity anomaly is given by

Δg = 2π g h / λ   (4)

where g = 980,000 mGal, h is the geoid height above the ellipsoid and λ is the anomaly wavelength. h and λ must be in the same units. If we solve the relationship for h, we find that the bump in the geoid associated with a 10 mGal gravity anomaly with a wavelength of 10 km is just 16 mm. Because the bump is even smaller for lower-amplitude anomalies with shorter wavelengths, the conventional practice of using a sea level reference rather than the theoretically correct ellipsoidal reference for elevation is justified and has no practical effect on results at exploration or engineering scales of investigation. The effect of referencing to the geoid rather than the spheroid is a very small warp in the regional gradient of the field.
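A one-line check of Equation 4, solved for the geoid height (a sketch, not from the source), reproduces the 16 mm figure quoted above.

```python
# A small sketch, not from the source: geoid height corresponding to a gravity
# anomaly of given amplitude and wavelength (Equation 4 solved for h).
import math

def geoid_bump_m(anomaly_mgal, wavelength_m, g_mgal=980000.0):
    """h = anomaly * wavelength / (2*pi*g); anomaly and g in mGal, wavelength in m."""
    return anomaly_mgal * wavelength_m / (2.0 * math.pi * g_mgal)

print(f"{geoid_bump_m(10.0, 10000.0) * 1000:.0f} mm")   # about 16 mm, as in the text
```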

Free-Air Correction
The inverse square law (Equation 1) predicts that above the Earth's surface, gravity will decrease with separation distance from its center. Because gravity survey operations span a very small fraction of the average radius of the Earth, we can treat the local rate of change of gravity with elevation as a constant. This constant is the first term in a Taylor-series expansion of the rate of decrease in gravity with increasing distance from the center of the Earth. The adopted constant is F = 0.3086 mGal/m in free air. That is, gravity is expected to decrease by 0.3086 mGal for every meter of elevation gain, taking no account of any mass between the computation datum (usually sea level) and the observation point. We would expect that gravity measured at the base and the top of a number of television towers would closely reflect this normal free-air gradient, since TV towers are practically massless. Robbins (1981) has published very small adjustments in F, varying with altitude and latitude, primarily for use in borehole gravity data reduction. These small adjustments have no practical impact on interpreting anomalies of exploration interest. Some researchers have recommended larger, local adjustments of F, based on measured values of the vertical gravity gradient, but this intended refinement to the reduction process is likely to cause distortion of anomalies and should not be adopted as a standard reduction procedure. The free-air anomaly reflects any anomalous mass, including the mass of all the rock underlying the topographic surface. As a result, free-air anomaly maps show a lot of similarity to topographic maps where relief is significant. The most critical source of error in land gravity surveys is the measurement of elevation at the survey site. To accurately predict gravity at the elevation of the gravity observation, we must know the elevation rather precisely. It is the surveying of elevations between gravity stations that is the most time-consuming and expensive part of any land gravity survey. To achieve a free-air correction accuracy of 0.01 mGal, for example, we must measure elevation to an accuracy of 0.01 mGal ÷ 0.3086 mGal/m, which is 3.2 cm, or just over an inch. Table 1 lists vertical and horizontal survey requirements for a range of specified free-air anomaly accuracies. The horizontal accuracy requirement reflects the rate of change in the latitude correction, which is a maximum at a latitude of 45 degrees.

Specified Anomaly Accuracy    Vertical Accuracy    Horizontal Accuracy
0.01 mGal                     3.2 cm / 1 inch      10 m / 30 ft
0.10 mGal                     32 cm / 1 ft         100 m / 300 ft
0.50 mGal                     160 cm / 5 ft        500 m / 1,600 ft

Table 1: Required Vertical and Horizontal Survey Accuracies

Given a desired level of gravity anomaly accuracy, the values in Table 1 should be taken as minimum tolerance specifications, even though we could view the vertical accuracy requirements as being mitigated by the Bouguer slab correction. As we will see in the next section, the Bouguer slab correction results in a combined elevation factor which is about 30 percent less than the free-air elevation factor.

Bouguer Correction
The objective of the Bouguer correction is to produce an anomaly map that indicates subsurface density variations caused by geologic structure. In progressing from the free-air anomaly map to the Bouguer anomaly map, our hope is that the resultant Bouguer map will be free of obvious correlation with topography unless the topography directly correlates with subsurface geology. The Bouguer correction is designed to compensate for the attraction of rock between sea level and the observation point. Remember that the starting point in predicting gravity is a latitude formula such as Equation 1. This formula predicts gravity for a smooth, sea-level Earth. Adding a hill or plateau of rock under our TV tower will result in an increase in gravity. The Bouguer correction is the most complicated and interpretive of the corrections to observed gravity. In essence, by using the Bouguer correction, we are attempting to further refine the estimate of expected gravity at the observation point by modeling the gravitational attraction of topography. Bullard (1936) conceived of the Bouguer correction as a series of three steps, where the last two steps apply only in rugged terrain:

1. Bouguer slab correction or "Simple Bouguer correction"
2. Bullard "B" Correction
3. Bouguer terrain correction or "Complete Bouguer correction"

Hundreds of thousands (even millions) of gravity stations have been reduced no further than the simple Bouguer correction with no practical loss in utility of the data. Areas of flat terrain in Texas, Louisiana, and much of the Middle East are such that further refinement of the Bouguer correction would be inconsequential. In areas with more topography, the refinements become more important, but the computational work load is greatly increased.

Bouguer Slab Correction and Simple Bouguer Gravity

The common, simple Bouguer anomaly is based on using an infinite slab of some density to approximate the gravitational effect of the rock between the datum (sea level or the ellipsoid) and the station. The density used in the correction is called the Bouguer density. The legend of a Bouguer anomaly map should always specify the assumed Bouguer density. Where the slab approximation is good enough, the data reduction process becomes simple. The attraction of the Bouguer slab is:

B(ρ) = 2πG ρBouguer h   (5)

or, in mGal,

B(ρ) = 0.04191 ρBouguer h

where ρBouguer = Bouguer density (g/cm3) and h = elevation above sea level in meters. For h in feet,

B(ρ) = 0.01277 ρBouguer h

Given a Bouguer density, we can combine the free-air correction and the Bouguer correction into a single elevation factor. When working with elevations in feet, a Bouguer density of 2.67 g/cm3 results in an elevation factor of 0.06 mGal/ft. In the 1940s and 1950s, when "computers" were essentially mechanical adding machines, data reduction was simplified by a single-digit elevation factor, which probably explains why 2.67 g/cm3 has been such a popular choice for Bouguer density; 2.67 g/cm3 is also a good average density for continental crust.

Elevation factor = F - B(ρ) = 0.3086h - 0.04191 ρBouguer h

where h is in meters and F = free-air gradient, as described in Section 4.2.2, or

Elevation factor = F - B(ρ) = 0.09406h - 0.01277 ρBouguer h

where h is in feet.
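The pieces above can be chained into a simple Bouguer anomaly calculation. The sketch below (not from the source) evaluates the GRS67 formula, applies the free-air and slab corrections with the metric constants, and subtracts the result from an observed value; the station latitude, elevation and observed gravity are hypothetical.

```python
# A hedged sketch, not from the source: simple Bouguer anomaly for one land station,
# using the GRS67 latitude formula and the metric free-air and slab constants above.
import math

def theoretical_gravity_mgal(lat_deg):
    """GRS67 International Gravity Formula (Equation 1 of the latitude correction section)."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return 978031.846 * (1.0 + 0.005278895 * s2 + 0.000023462 * s2 * s2)

def simple_bouguer_anomaly_mgal(observed_mgal, lat_deg, elev_m, bouguer_density=2.67):
    free_air = 0.3086 * elev_m                   # free-air correction, mGal
    slab = 0.04191 * bouguer_density * elev_m    # Bouguer slab correction, mGal
    return observed_mgal - theoretical_gravity_mgal(lat_deg) + free_air - slab

# Hypothetical station: latitude 40 N, elevation 1,500 m, observed 979,850 mGal
print(round(simple_bouguer_anomaly_mgal(979850.0, 40.0, 1500.0), 2))
# roughly -24 mGal for these made-up values
```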


Simple Bouguer Gravity for Stations Underwater or Underground

To compute a Bouguer Anomaly for a station underwater, we use the same approach: compute expected gravity and subtract it from what is observed. The latitude formulas each predict a value for gravity at the surface of a sea-level Earth. The formulas assume that rock rather than water underlies the observation station. At sea, we would expect gravity to be less than the value computed from the latitude formula, because the density (and gravitational attraction) of water is less than rock.

Consider the following example:

Suppose we want to predict the value of gravity at the shore of the Dead Sea. Just to make it easy, we'll move the Dead Sea about a hundred miles south to a latitude of 30 degrees. We already know that at that latitude gravity should be g(30°) = 979 324.012 mGal, on the surface, at sea level. The elevation of the Dead Sea is -400 m. The 400 m TV tower would be handy for checking our results.

a. What should the gravity be at the top of the TV tower just above the Dead Sea shore? (Hint: you would notice, if you were at sea level on the top of the tower, that there is a lot of rock missing between you and the bottom of the tower -- assume the missing rock has a density of 2.67 g/cm3.)

b. What will the gravity be at the base of the TV tower?

c. What would the gravity be at the bottom and the top of the tower if the tower were inundated with sea water having a density of 1.03 g/cm3?

d. What if we were somewhere else at a latitude of 30 degrees, the ground elevation was at sea level, and our gravity meter was 400 m down in a borehole where the rock above the meter has a density of 2.67 g/cm3? What is gravity at the top of the borehole?

Solutions:

The principle for computing expected gravity is well illustrated by the underwater station. Starting with the predicted value from the latitude formula, g(30°), we expect an increase in gravity with depth due to F, the free-air gradient; the absence of rock between sea level and the water bottom leads to an expected decrease in predicted gravity; and finally, the upward attraction of the water leads to a further decrease in predicted gravity. Where depth is positive downward, the expected change in observed gravity due to water depth, d, is:

Δg(d) = +0.3086d - 0.04191 ρrock d - 0.04191 ρwater d


Notice that the solution for the borehole case is identical except that there is rock instead of water, so ρwater is replaced by ρrock.

For the other cases, the solutions are as follows:

TV tower in air:

Top of tower (at sea level): 979 324.012 - (0.04191 × 2.67 × 400) = 979 279.252 mGal

Base of tower (elevation -400 m): 979 279.252 + (0.3086 mGal/m × 400 m) = 979 402.692 mGal

Tower inundated with sea water:

Sea surface (top of tower): 979 279.252 + (0.04191 × 1.03 × 400) = 979 296.519 mGal

Sea bottom (base of tower): 979 324.012 + (0.3086 × 400) - (0.04191 × 2.67 × 400) - (0.04191 × 1.03 × 400) = 979 385.425 mGal

On the surface at the top of the borehole (elevation = 0, depth = 0): g = 979 324.012 mGal

In the borehole (elevation of top = 0, depth = 400 m): 979 324.012 + (0.3086 mGal/m × 400 m) - 2 × (0.04191 × 2.67 × 400) = 979 357.932 mGal

The solutions are summarized below:

Medium of Gravity Measurement   Elevation h (m)   Depth d (m)   Gravity Value (mGal)   Factor (mGal/m)
Air (TV tower, top)             0                 400           979 279.252            -0.1119d
Air (TV tower, base)            -400              400           979 402.692            -0.1967h
Sea water (surface)             0                 400           979 296.519            -0.06873d
Sea water (bottom)              -400              400           979 385.425            +0.1535d
Borehole (surface)              0                 n/a           979 324.012            -0.1967h
Borehole (ρ = 2.67 g/cm3)       n/a               400           979 357.932            +0.08480d
Borehole (ρ = 2.489 g/cm3)      n/a               400           979 364.012            +0.100d
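The expected-gravity values in the table can be cross-checked with a few lines of Python; this is an illustrative sketch using only the constants quoted above, and the case labels are informal:

    G30 = 979324.012   # g(30 degrees) from the latitude formula, mGal
    F = 0.3086         # free-air gradient, mGal/m
    B = 0.04191        # Bouguer slab factor, mGal/m per g/cm3
    RHO_ROCK, RHO_WATER, D = 2.67, 1.03, 400.0

    cases = {
        "Air, top of tower (sea level)": G30 - B * RHO_ROCK * D,
        "Air, base of tower (-400 m)":   G30 - B * RHO_ROCK * D + F * D,
        "Sea water, surface":            G30 - B * (RHO_ROCK - RHO_WATER) * D,
        "Sea water, bottom":             G30 + (F - B * RHO_ROCK - B * RHO_WATER) * D,
        "Borehole, 400 m deep":          G30 + (F - 2.0 * B * RHO_ROCK) * D,
    }
    for name, value in cases.items():
        print(f"{name:32s} {value:12.3f} mGal")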

The last entry in the table above suggests a rule of thumb: gravity will increase with depth in a borehole by roughly 0.1 mGal per meter (about 1 mGal per 10 m). Note that the factors give the predicted change in gravity with elevation or depth, which can then be added to the value computed from the latitude formula for each situation. The simple Bouguer anomaly for each of the above observation points is observed station gravity minus the expected gravity value we just computed.

Bullard B Correction

The Bullard B correction is the difference between the effect of an infinite slab and a spherical cap on the Earth's surface ( Figure 2 , Bouguer slab and Bullard cap with the neglected "triangle." R0 = earth's average radius; R = actual local earth radius; a = angle subtended by the truncating Bullard radius.

Figure 2

Not to scale ). This correction to the Bouguer slab correction has generally been neglected in commercial work; LaFehr (1991a) advocates the routine inclusion of the Bullard B correction in gravity data reduction. The Bullard B correction ranges in value from about -1.5 to +1.5 mGal over an elevation range of 0 to 5000 m, and is solely a function of elevation. The gradient of the correction ranges from zero to less than 0.0015 mGal/m; we can think of this maximum gradient as equivalent to a Bouguer correction density uncertainty with a maximum value of 0.036 g/cm3. In a practical sense, we rarely know the Bouguer density this precisely. Whitman (1991) developed an approximation for the Bullard B correction and a revised Bouguer slab formula that includes the correction:

(6)
where b = 1, a = 0.026 radians, H = h/(R0 + h) and R0 = 6371 km, the average radius of the Earth. We can obtain the Bullard B correction by evaluating Equation 6 with b = 0. By setting b = 1 in Equation 6, we can use this relationship to replace the simple slab formula (Equation 5) and obtain a Bouguer correction that includes the Bullard B correction. The Bullard B correction has been the subject of much recent discussion. In this writer's experience, including the correction would make no practical difference in the outcome of any interpretation. Whether it is theoretically more correct to include it is another matter. The correction appears to be inappropriate in most topographic situations unless terrain corrections beyond a radius of 167 km are carried out.

Terrain Corrections
In flat terrain, the Bouguer slab model is usually satisfactory for removing unwanted topographic effects. For complicated topography, we need to model the topographic volume from surface to sea level (or below), and compute the gravity effect of topography at our station. This computation almost always assumes a constant Bouguer density, although we know the rock densities making up topography are often not even approximately constant. Because terrain corrections are corrections to a slab model, they tend to be more difficult to intuitively understand than the directly computed effects of hills or valleys. The terrain correction is the change we have to make to a slab to make it look like the actual terrain ( Figure 3 , Bouguer slab for the station X, shown with the terrain.

Figure 3

Terrain corrections are computed for the hachured section. ). Terrain corrections for land gravity stations are always positive (i.e., terrain effects are negative). That is, the value of expected gravity at a station is less than that predicted by the Bouguer slab. We can see this by considering a station on the flank of a mountain, as shown in Figure 3 . A Bouguer slab with its base at sea level and its top at the level of the gravity station has been used as a first approximation for the effect of topography. Clearly, the slab correction overestimates the contribution to gravity on the valley side of the station. On the mountain side, the mass above the station exerts an upward attraction that also reduces the expected value of gravity at the station.

Historically, there is very good reason for making terrain corrections in this way. Terrain corrections used to be laborious, and were done only when absolutely demanded by topography. For example, for surveys over valleys or basins, terrain corrections might be neglected for stations on the valley floor, but stations in the foothills and in the mountains would require them. It was also common practice to economize by neglecting the effect of terrain far from the station. Computer studies of terrain effects have revealed that this practice can cause unanticipated anomaly distortion in some rugged survey areas.

Before the widespread use of computers, determining terrain corrections involved using templates of segmented concentric rings that would be laid over a map of topography and centered on a gravity station location to permit estimation of the average elevation for each segmented compartment. Two such templates were widely used for decades: the Hammer chart and the Hayford-Bowie chart. Nettleton (1940, 1976) and Dobrin (1988) describe the parameters for constructing Hammer charts, while Swick (1942) describes parameters for the Hayford-Bowie chart. An important feature of both charts is that the ring segments become smaller as they get closer to the station, reflecting the need for more accurate knowledge of the topography close to the station. Figure 4 (Use of terrain chart with topographic map: (a) terrain chart overlying topographic map (b) enlarged view of a single zone ) gives an example of the use of such a chart.

Figure 4
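To see the arithmetic behind a single chart compartment, the sketch below uses the standard flat-topped annular-ring approximation; this particular formula is not quoted here, and the ring radii in the example are illustrative rather than the published Hammer zone radii.

    import math

    def compartment_correction_mgal(rho, r1, r2, dh, n_compartments):
        # Flat-topped annular ring of density rho (g/cm3), inner radius r1 (m),
        # outer radius r2 (m), terrain offset dh (m) above or below the station,
        # divided into n_compartments sectors. Hills and valleys both give a
        # positive correction.
        h = abs(dh)
        ring = 0.04191 * rho * (r2 - r1 + math.hypot(r1, h) - math.hypot(r2, h))
        return ring / n_compartments

    # One compartment of a ring 390-895 m from the station, 8 compartments,
    # terrain averaging 50 m above station level, density 2.67 g/cm3:
    print(compartment_correction_mgal(2.67, 390.0, 895.0, 50.0, 8))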

There are a number of computer methods for computing terrain corrections; Plouff's (1966) and Krohn's (1976) are two of the most widely used. Cogbill (1991) discusses refining earlier computer techniques and applying the digital elevation models that can be purchased in the United States from the National Cartographic Information Center. Digital models of topography greatly simplify computing terrain corrections, and digital topography is becoming available for a wider range of countries at increasingly detailed grid intervals. Computer methods of computing terrain corrections have led to more accurate corrections and have enabled careful study of the sources of error. Sources of terrain correction error stem from the following:

1. Inaccurate sources of terrain data. Terrain inaccuracies close to the station have relatively large effects compared to poor terrain data farther away, and extreme care is needed to avoid errors of up to 0.5 mGal in rugged terrain. For example, for terrain within 200 m of a station, estimating the slope to be 20 degrees rather than its actual 30 degrees will lead to a 0.5 mGal error. Neglecting near-zone terrain measurements can lead to even larger errors.

2. Poor knowledge of terrain density. Another obvious source of error is significant variation in terrain density from the assumed Bouguer density.

3. Inadequate terrain models. Krohn (1976) showed that the flat-topped model elements used in the segmented compartments of terrain correction charts lead to systematic overestimates of the terrain correction. Smaller model elements and terrain elements with sloped tops have both proven to produce more accurate terrain corrections. This is especially true of the "inner zone" terrain corrections within 2 km of the station, where digital terrain models and computer terrain correction programs are highly beneficial.

4. Far-zone terrain corrections. In rugged terrain where there is substantial vertical relief between adjacent stations, neglecting far-zone terrain can lead to significant errors from station to station. Terrain corrections beyond a distance that is a few times the wavelength of anomalies of interest commonly have been neglected in order to save effort, often without appreciable distortion of the anomalies of interest, but sometimes not. If the terrain corrections from the far zones are relatively unvarying over an area comparable to the size of anomalies of interest, the expense of terrain correction to the outer zones may not be justified. However, a terrain effect that is smooth and unvarying when evaluated at a single elevation may vary significantly at varying station elevations. LaFehr (1991) points out that this is a general phenomenon of distant regional effects.

LaFehr (1991) has proposed adopting a terrain-corrected radius of 167 km as a data reduction standard. Given the wide availability of digital terrain models, the relative cost of performing terrain corrections to the Hayford-Bowie radius of 167 km is much less than it once was. The advantage of adopting such a standard is that it would help to assure adequate terrain corrections in all circumstances, and would facilitate the integration of data from diverse survey sources where the reduction standards had been followed.

An alternative and natural way to proceed, given the power of computer modeling, is to compute the effects of geometric shapes of hills and valleys directly and make no Bouguer slab (or Bullard B) correction at all. For some problems, making the Bouguer correction this way can be more effective than dealing with corrections to the Bouguer slab. Lakshmanan (1991) has proceeded in this way to construct the complex density models required for analyzing microgravity surveys for engineering and archeological applications. One of his studies located a hidden chamber in the pyramid of Cheops in Egypt. His model of expected gravity included varying densities for the various blocks of differing rock types (granite and porous limestone) used in constructing the pyramid. Others, including Vajk (1956), have suggested Bouguer corrections involving complex density models for topography. Some have objected to such complexity at the data reduction stage, because constructing such a model clearly involves making a geologic interpretation. For that matter, it is a good idea to recognize that the choice of Bouguer density is itself interpretive, and the Bouguer anomaly map is therefore a first step in the interpretation process.

However complex or simple the assumptions for terrain may be, we should take advantage of the ease of computing and evaluate the effect of varying density assumptions in the context of the actual survey and survey target. We can usually do this effectively using profile modeling to gain an idea of the amplitude and rate of variation of terrain effects in comparison to target anomalies.

Corrections for Marine and Airborne Gravity


Measuring gravity from a moving vehicle in a dynamic environment poses problems and requires corrections that we do not encounter for land or underwater gravity measurements. Once we make the necessary instrumental and dynamic corrections, however, we can correct marine and airborne gravity for free-air and Bouguer gravity effects just as we correct station gravity values for static measurements. Unlike station gravity on a land survey, processed station gravity values from a marine or airborne gravity profile have been subjected to high-cut filtering, which places a limit on the spatial wavelengths that can be resolved.

The Eötvös Correction


The Eötvös effect is the vertical component of the Coriolis force. In other words, it is a change in vertical acceleration that affects any moving vehicle as a result of the change in centrifugal acceleration, depending on speed and direction ( Figure 1 and Figure 2 ).

Figure 1

Figure 2

The Eötvös correction is given by:

gEötvös = 2ΩV cosφ sinα + V²/R

(1a)

where Ω is the rotation rate of the Earth, φ is latitude, α is course (azimuth, measured clockwise from geographic north), V is speed, and R is the radius of the path over the Earth (very close to the radius of the Earth). For speed in knots, the Eötvös correction in mGal is:

gEötvös = 7.503 V cosφ sinα + 0.0042 V²

(1b)


When traveling east (i.e., in the direction of the Earth's rotation), the effect is negative, because the effective centrifugal acceleration acting on the gravity meter is increased and the downward pull of gravity is decreased: the gravity meter feels lighter. Small variations in the vehicle's eastward velocity therefore cause variations in the instrument reading, which we must correct. On north-south lines, small variations in course result in large Eötvös variations. For east-west lines, course is less critical, and speed changes lead to the greatest change in Eötvös effect. The second term in the formula is independent of course and is nearly negligible for marine measurements.

The Eötvös effect is by far the single largest source of error for shipborne gravity measurements. It is also important for airborne measurements, although vertical acceleration corrections are more significant. A momentary change in eastward or westward speed of just 0.1 knot near the equator results in a 0.75 mGal change in the gravity reading. The wavelength and character of variations in the Eötvös correction typically cannot be separated from geologic signal by simple filtering. Accurate measurement of the Eötvös correction demands high-precision navigation data, and GPS navigation has recently proven capable of delivering much more accurate velocity measurements than previous shore-based navigation systems. Herring (1985) and Hall and Herring (1991) have presented results of high-resolution field tests that demonstrate the increased spatial resolution possible using the direct velocity measurement capability of GPS.
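A minimal Python sketch of Equation 1b (the function name and the sample speeds and courses are illustrative):

    import math

    def eotvos_mgal(speed_knots, course_deg, latitude_deg):
        # Equation 1b: speed in knots, course in degrees clockwise from north.
        return (7.503 * speed_knots
                * math.cos(math.radians(latitude_deg))
                * math.sin(math.radians(course_deg))
                + 0.0042 * speed_knots ** 2)

    print(eotvos_mgal(5.0, 90.0, 0.0))                                # eastbound, 5 knots, equator
    print(eotvos_mgal(5.1, 90.0, 0.0) - eotvos_mgal(5.0, 90.0, 0.0))  # ~0.75 mGal per 0.1 knot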

Vertical Acceleration Correction


Commercial gravity surveys using stabilized-platform gravity meters were first successfully carried out on ships. Airborne gravity measurements have been successfully used for regional exploration problems for several years, but suffer from long-period uncertainties in aircraft elevation. Marine acquisition is also plagued by strong vertical accelerations, which result from sea swells and waves and can amount to 10,000 to 100,000 mGal. However, the time to traverse the spatial wavelength of typical geologic anomalies is very long compared to the period of ocean wave motion (i.e., 7 to 15 seconds). Typical survey ship speeds are 5 or 10 knots (150 to 300 meters per minute), and it turns out, perhaps surprisingly, that the sea's vertical motion contributes less than 1.0 mGal at periods of a few minutes, even in rough sea conditions. Although the short-period vertical accelerations typically experienced under survey conditions in an aircraft are not as great as on board ship, an airplane does not inherently maintain long-term stability of elevation. Successful airborne gravity surveying demands highly accurate altitude control and measurement. The vertical sinusoidal motion that will give rise to vehicle accelerations of 1.0 mGal is plotted in Figure 3 .

Figure 3

This figure gives us a means of estimating the shortest wavelength anomaly that we could expect to resolve at a given level of vertical motion uncertainty. Sensitive altimeters measure relative changes in elevation on aircraft and give results that allow resolution of a few mGal at periods of a few minutes. On the other hand, control of the vertical acceleration correction to a tolerance of 1.0 mGal at a wavelength of one minute would require us to control or know the aircraft's relative elevation to a precision of less than 1 mm (or, alternatively, we would have to know the aircraft's vertical velocity to a precision of 0.01 cm/sec). The vertical acceleration correction is the current limiting factor on airborne gravity resolution and accuracy.
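The figure's trade-off follows from simple harmonic motion: a sinusoid of period T and acceleration amplitude a has displacement amplitude aT²/(4π²) and velocity amplitude aT/(2π). The short sketch below reproduces the roughly 1 mm and 0.01 cm/s tolerances quoted above for a one-minute period; it is an illustrative calculation, not a processing algorithm.

    import math

    A_1MGAL = 1.0e-5   # 1.0 mGal expressed in m/s^2

    for period_s in (15.0, 60.0, 300.0):
        displacement_m = A_1MGAL * period_s ** 2 / (4.0 * math.pi ** 2)
        velocity_m_s = A_1MGAL * period_s / (2.0 * math.pi)
        print(f"T = {period_s:5.0f} s: "
              f"{displacement_m * 1000.0:7.3f} mm, {velocity_m_s * 100.0:7.4f} cm/s")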

Cross-Coupling Errors and Corrections


All types of shipborne gravity meters are subject to cross-coupling errors caused by the interactions of the effects of accelerations on the gravity meter or stabilized platform (LaCoste, 1967). They can occur only when the accelerations have the same periods and there are systematic phase relations between the accelerations; otherwise the errors will average to zero. Figure 4 shows an example of a cross-coupling error.

Figure 4

In this figure, the platform is off level by an angle b. Such stabilized platform errors occur when the platform's center of gravity (including the gravity meter) is not accurately on the platform's axes of rotation. For example, if the center of gravity is off horizontally, vertical accelerations will cause errors, and if the center of gravity is off vertically, horizontal accelerations will cause errors. If the sensitive axis of the gravity meter is in the direction shown in the figure, the gravity meter will detect a component of horizontal acceleration given by:

e = kax

(2)

where k is a constant (which depends on the off-level angle b), and ax is the horizontal acceleration. In this case, the component e is the cross-coupling error. It will eventually average out to zero if there is no systematic correlation between ax and b, but there will usually be correlation. In the case we are considering, let us assume that b is caused by vertical acceleration, av; then b will be approximately proportional to av, and Equation 2 becomes

e = kaxav

(3)

where k is still a constant, but probably a different value than that of Equation 2. Experience has shown that there is often a strong correlation between the horizontal and vertical accelerations of ships. In fact, the water particle motion in waves is roughly circular. Therefore we can expect Equation 3 to give a systematic cross-coupling error whose magnitude depends on the phase difference between the accelerations.

We can correct for cross-coupling when we adjust the gravity meter system and also when we process the data. Routine processing to improve the cross-coupling correction has been common since the late 1970s. In the manufacture of gravity meters and their associated stabilized platforms, cross-coupling errors are corrected as closely as possible. However, rough treatment during use or transport, inadequate maintenance and other causes sometimes degrade gravity meter performance. LaCoste (1973) devised a method to improve the cross-coupling corrections to match meter performance in the field. The principle is simple, and similar to the philosophy for refining the Eötvös and vertical acceleration corrections. It is based on the premise that observed gravity should not systematically correlate with any combination of ship accelerations. The following example shows how the method works: Let us consider a gravity meter that works perfectly except for having an error in the form of Equation 3. Since we can measure the instantaneous accelerations ax and av, we can multiply them together to obtain a "monitor" which we can use to check and to correct the observed gravity. We know that if there were no errors in observed gravity, the gravity profile should not correlate systematically with the monitor profile. In other words, they should not have the same shapes. If they do, there is an error in observed gravity. To determine the size of the error, we must determine what fraction of the monitor needs to be subtracted from the observed gravity so that the resulting corrected gravity does not correlate with the monitor. This procedure gives us the value of the constant k in Equation 3, as well as a corrected gravity profile. We can compute and correlate up to seven monitors with observed gravity. We can compute more using higher order terms, but LaCoste (1973) found no practical benefit in computing more than seven; often, five is sufficient. Proper use of LaCoste's method almost always results in some improvement in gravity data quality.
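A minimal single-monitor sketch of this idea follows (Python with NumPy); it is an assumed illustration rather than LaCoste's published algorithm, and the synthetic accelerations and coupling constant are invented for the example:

    import numpy as np

    def cross_coupling_correct(observed_gravity, a_x, a_v):
        # Build the monitor a_x*a_v, find the fraction k that correlates with
        # observed gravity by least squares, and subtract it.
        monitor = a_x * a_v
        monitor = monitor - monitor.mean()
        g = observed_gravity - observed_gravity.mean()
        k = np.dot(g, monitor) / np.dot(monitor, monitor)
        return observed_gravity - k * monitor, k

    t = np.linspace(0.0, 600.0, 6001)                       # 10 minutes sampled at 0.1 s
    a_v = 5.0e4 * np.sin(2.0 * np.pi * t / 10.0)            # vertical acceleration, mGal scale
    a_x = 5.0e4 * np.cos(2.0 * np.pi * t / 10.0 + 0.3)      # correlated horizontal acceleration
    geology = 979300.0 + 0.5 * np.sin(2.0 * np.pi * t / 300.0)
    observed = geology + 1.0e-8 * a_x * a_v                 # add an invented cross-coupling error
    corrected, k_hat = cross_coupling_correct(observed, a_x, a_v)
    print(k_hat)                                            # recovers about 1.0e-8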

Filtering and Spatial Resolution


Various types of correlation filtering (like that described for the cross-coupling correction), as well as simple high-cut filtering, are applied to shipborne and airborne gravity data. The objective of correlation filtering is to minimize the correlation of the correction with the final corrected gravity profile. The argument for this approach is that any correlation between the correction and geologically caused anomalies would have to be fortuitous, so correlated energy in the corrected data is probably error. After we have made all corrections, noise will remain in the corrected gravity trace, and will appear to be periodic or random. When processing the data, we must exercise our judgment and select a high-cut filter that is designed to suppress what we judge to be residual noise on each individual traverse in a survey. The selected high-cut wavelength is one index of the shortest wavelength resolvable on the survey line. The high-cut wavelength is usually expressed in seconds of traverse. Typical high-cut filters used in shipborne and airborne gravity processing have high-cut limits from 200 to 1000 seconds. Shorter filter wavelengths reflect better survey conditions: calmer weather and more accurate navigation data. The shortest resolvable spatial wavelength is the time wavelength multiplied by the vehicle speed. A boat speed of 5 knots is about 2.5 m/sec, so the shortest resolvable spatial wavelength for a high-quality survey (e.g., one using GPS velocity measurement for the Eötvös correction with data acquired in good weather conditions) might be 500 m (200 s × 2.5 m/s). Airborne surveys suffer in this respect from the much greater speed of airplanes. For commercial survey work, aircraft speeds will be about 180-200 km/h, or 50 m/s, roughly 20 times the speed of a ship. Under similarly excellent conditions, we would expect the shortest spatial wavelength resolved by an airborne survey to be about 10 km. Figure 5 shows the range of amplitude and wavelength resolution that we can expect from a range of gravity survey methods.

Figure 5

For land work, spatial resolution is limited by sampling rather than filtering. The upper and lower curves indicate a typical range from average to good operating conditions.
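The wavelength arithmetic above reduces to a one-line calculation; the following illustrative sketch compares the ship and aircraft cases:

    KNOT_M_PER_S = 0.514

    def shortest_wavelength_m(high_cut_seconds, speed_m_per_s):
        # Shortest resolvable spatial wavelength = filter length (s) x speed (m/s).
        return high_cut_seconds * speed_m_per_s

    print(shortest_wavelength_m(200.0, 5.0 * KNOT_M_PER_S))   # ship, 5 knots: about 500 m
    print(shortest_wavelength_m(200.0, 50.0))                 # aircraft, 50 m/s: about 10 km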

Adjustment of Survey Line Crossing Differences


Network adjustment of marine and airborne gravity data is designed to recognize and remove systematic bias and random errors in the data, which would otherwise result in survey line misties. We expect bias errors to arise from errors involving the gravity meter itself, such as cross-coupling or meter drift, and possibly from errors in the Eötvös correction. Survey line intersection differences are evaluated for each survey line crossing in a survey network. In one common method for removing bias errors, we shift each survey line profile up or down by a constant level

to minimize the sum of the squares of the mistie errors at each intersection. The systematic corrections for a network are further constrained such that the sum of the systematic corrections is zero, effectively eliminating DC shifts to the network as a whole. The DC level shift for each line has no effect on the shape of relative anomalies on the individual lines. The remaining random errors in the network are typically removed by proration of error between intersections. One common approach is to assign each line a reliability weight that depends on the average absolute mistie for a given line. The final choice for the value at each intersection is weighted toward the statistically better line at the intersection. Figure 6 ( The Eötvös Correction ) shows a graphical representation of intersection adjustment statistics typically shown on marine gravity survey line profiles.

Figure 6

This type of display allows rapid visual evaluation of survey line quality because good survey lines typically show smaller random error mistie bars. Figure 7 ( Amplitude of vertical motion that will result in 1 mGal vertical acceleration.

Figure 7

) shows a segment of a typical marine gravity profile. Notice the difference between the observed and filtered data. Also notice the inverse correlation between the Eötvös correction and observed gravity. The final Eötvös-corrected free-air gravity does not reflect the Eötvös event.
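The DC-level adjustment described above can be written as a small least-squares problem. The sketch below is an assumed, minimal implementation (the mistie values are invented numbers), solving for one shift per line subject to a zero-sum constraint:

    import numpy as np

    def level_lines(n_lines, crossings):
        # crossings: list of (i, j, mistie) with mistie = g_i - g_j at the
        # intersection of lines i and j. Returns one DC shift per line that
        # minimizes the squared adjusted misties, with the shifts summing to zero.
        rows, rhs = [], []
        for i, j, mistie in crossings:
            row = np.zeros(n_lines)
            row[i], row[j] = 1.0, -1.0        # adjusted mistie: mistie + c_i - c_j
            rows.append(row)
            rhs.append(-mistie)
        rows.append(np.ones(n_lines))         # constraint: sum of shifts = 0
        rhs.append(0.0)
        shifts, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return shifts

    crossings = [(0, 1, 0.8), (0, 2, 0.5), (1, 2, -0.4), (1, 2, -0.2)]
    print(level_lines(3, crossings))          # shifts in mGal, summing to zero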

Regional and Isostatic Effects


Observed free-air gravity values, when averaged worldwide on a large scale, are near zero ( Figure 1 , Free-air gravity vs.

Figure 1

elevation ). Free-air gravity is corrected for station elevation and latitude, but not for the density of the material which constitutes the topography. When corrections for the density of topography (the Bouguer corrections) are made to the observed free-air gravity values, regions of higher elevation generally have negative values of Bouguer gravity. Mountain ranges such as the Rockies in Colorado, USA, generally have large negative Bouguer anomalies, whereas oceans typically have positive Bouguer anomalies. The Bouguer corrections, applied to the free-air gravity, are negative for station elevations above sea level. Bouguer corrections in ocean basins are positive, because seawater is mathematically replaced by rock, which is more dense than seawater. As a result, Bouguer gravity generally increases in a seaward direction along continental margins.

Over the last one hundred years, various theories have been put forth to explain these phenomena. Researchers have believed for some time that above a certain depth (presently thought to be at least 60 km), the weight of each column of rock resting on that depth would be approximately equal if considered on a large scale, such as on 250 x 250 km blocks. Thus, some "compensation" mechanism must be present to keep high topographic regions from "sinking." This compensation mechanism is known as isostasy.

Historically, two different hypotheses were used to explain isostasy: Airy's hypothesis and Pratt's hypothesis. Both hypotheses postulated a lower density, shallow "crust" floating on a higher density uniform liquid. Airy's hypothesis stated that compensation for high topographic features is provided by a thickening of the base of the lower density crust; thus, mountains have thick, low-density crustal roots holding them up. Pratt's hypothesis stated that compensation is provided by a variable crustal density, without variation in crustal thickness, with lower density crustal rocks underlying mountains and higher density rocks underlying ocean basins.

Wessel (1986) provides a depiction of continental margins and their approximate densities, which indicates that the crust is thicker under mountain ranges and thinner under ocean basins, and also that the continental crust is less dense than the oceanic crust. It is apparent that Bouguer gravity values contain some effects related to the Earth's crustal structure and crustal density variations. We must consider these crustal effects in the Bouguer gravity when interpreting gravity data for local sedimentary structure.

Ex-1:
What is the expected value of gravity at the top of a 400 m TV tower with its base at sea level at a latitude of 30 degrees?

g(φ) = 978 031.846 (1 + 0.005 278 895 sin²φ + 0.000 023 462 sin⁴φ)

g(30°) = 979 324.012 mGal is the expected value for gravity at the base of the tower. Using the free-air gradient, Δg = 400 m × 0.3086 mGal/m = 123.440 mGal; gravity is 123.440 mGal less at the top of the tower. The expected value of gravity at the top of the tower is 979 200.572 mGal.

Ex-2:
Put a gravity meter on board a stationary boat in the middle of a large harbor. Let the tide go up and down by one meter. Ignore the acceleration of the meter as it goes up and down; also assume that corrections for the tidal attraction of the sun and the moon have been made. How much does gravity change?

As the tide level increases by 1 m, gravity will decrease by the amount of the free-air gradient, 0.3086 mGal, but the mass of water under the meter has increased, so gravity will increase by the attraction of the water: a Bouguer slab with a density of 1.00 (or 1.03 for most sea water).

Positive 1 m tide: Δg = -0.3086 + 0.04191 × 1.03 = -0.2654 mGal

Negative 1 m tide: Δg = +0.3086 - 0.04191 × 1.03 = +0.2654 mGal

Ex-3:
At a latitude of 30 degrees, what is the expected value of gravity on top of a 400 m plateau, where the plateau is composed of rock with a density of 2.67 g/cm3?

g(φ) = 978 031.846 (1 + 0.005 278 895 sin²φ + 0.000 023 462 sin⁴φ)

g(30°) = 978 031.846 (1 + 0.005 278 895 sin²30° + 0.000 023 462 sin⁴30°) = 979 324.012 mGal

The elevation factor (for h in meters): F - B(2.67) = 0.3086 - (0.04191)(2.67) = 0.1967 mGal/m. Using the elevation factor, (F - B(2.67))(400 m) = (400 m)(0.1967 mGal/m) = 78.680 mGal. Gravity will be less at the top of the plateau than at sea level, so the expected gravity at the top of the plateau is 979 245.332 mGal.
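The arithmetic of Ex-1 through Ex-3 can be reproduced with a short Python sketch (illustrative, using the formula and constants quoted above):

    import math

    def g_latitude_mgal(lat_deg):
        # Latitude formula quoted in the examples, in mGal.
        s2 = math.sin(math.radians(lat_deg)) ** 2
        return 978031.846 * (1.0 + 0.005278895 * s2 + 0.000023462 * s2 * s2)

    F, B = 0.3086, 0.04191
    g30 = g_latitude_mgal(30.0)
    print(g30)                              # Ex-1: 979 324.012 mGal at sea level
    print(g30 - F * 400.0)                  # Ex-1: top of the tower, 979 200.572
    print(-F + B * 1.03, F - B * 1.03)      # Ex-2: +/- 1 m tide, -/+ 0.2654 mGal
    print(g30 - (F - B * 2.67) * 400.0)     # Ex-3: top of the plateau, 979 245.332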

Gravity Survey Design and Gravity Meters

Introduction
Gravity survey objectives span a variety of operating scales and targets, ranging from studies of entire oceans and continents to searches for man-made underground chambers. The most effective choice of gravity instrument, geographical-coordinate-survey method and survey network design is based on expected target response. In exploration, surveys are often designed to search for a certain anomaly that we believe to be associated with a type of geologic structure. The survey objective might be simply to locate anomalies of a certain minimum size. In other instances, we seek a more precise definition of the anomaly field so that we can use detailed modeling to better define a structure such as a salt flank or the trace of a fault. Survey design plans should always include all available geologic information. Known responses of similar structures and hypothetical model calculations, for example, are important design criteria. Survey design is partly a matter of constructing a net so that anomalies of interest can't slip through, but interference from other anomaly sources, as well as aliased effects from sources much shallower than the target, are also important concerns.

Survey Design
The survey target's observed or hypothetical response will define the survey's minimum wavelength and amplitude resolution requirements. From these requirements, we can judge the minimum required gravity measurement accuracy and the maximum station or line spacing. An effective approach is to define the theoretically ideal survey, evaluate its cost and look at various designs until we find one that we can expect to do the job within a reasonable budget. A not-so-obvious pitfall in this approach is an over-compromised survey that fails to meet the survey objective. Survey design considerations for gravity and magnetic surveys have a lot in common, and much of what is said in this section applies equally well to magnetic survey design.

An old rule of thumb for an ideal survey design is that the station or line spacing should be about one-half the depth of the target. Reid (1980) goes into this issue in detail, as does Naudy (1971), who advocates using one-fourth the depth of the anomaly source as an ideal. A sometimes missed consideration in survey design is that target anomaly responses are always superimposed on effects of structure or density variation that may be of no interest, but must be distinguished from the target response. Low-resolution surveys often fail to provide enough information to distinguish between anomalies of interest and background geology. The survey design problem is that shallow-sourced anomalies may be filtered (in the case of shipborne and airborne surveys) or aliased to look like deeper anomalies.

The "ideal" survey, where cost is not a consideration, would adequately sample all anomaly wavelengths whose amplitudes reach 1/5 or 1/10 the amplitude of the target anomaly. The station or survey-line spacing would be 1/4 or 1/2 of the shortest wavelength. Usually, the so-defined ideal survey is not economically feasible or justified. A little simulation (imagined or computed) of the expected consequences of dropping part of the ideal station grid should lead to a survey design that is nearly as effective as the ideal and much more practical. For example, orienting survey lines perpendicular to structural strike results in a higher sample rate in the direction where it is more effective. High sample rates along survey lines or roads and trails give useful information (with easy access) about the relative importance of structure shallower than the target with little added cost. We can interpret and resolve target anomaly effects from the background and interpolate them between lines provided that interference from non-target anomalies is manageable.
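The spacing rules of thumb cited above reduce to simple calculations; the helper below is an illustrative sketch, with function names and example numbers of our own choosing:

    def spacing_from_depth_m(target_depth_m, fraction=0.5):
        # Classic rule: spacing about depth/2; Naudy (1971) suggests depth/4.
        return target_depth_m * fraction

    def spacing_from_wavelength_m(shortest_wavelength_m, samples_per_wavelength=4):
        # Sample the shortest wavelength of interest two to four times.
        return shortest_wavelength_m / samples_per_wavelength

    print(spacing_from_depth_m(3000.0))            # 3 km target: about 1500 m spacing
    print(spacing_from_depth_m(3000.0, 0.25))      # Naudy's rule: about 750 m
    print(spacing_from_wavelength_m(2000.0))       # 2 km anomaly: 500 m spacing or less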

Geographic Surveying
Elevation survey accuracy practically defines ultimate survey accuracy for land, borehole and airborne gravity. For land work, survey cost and accuracy are directly related. For most land gravity surveys, the elevation survey accuracy requirement determines the largest cost component for the entire survey. Shipborne surveys benefit from the vertical reference provided by average sea level so that it is not a factor in survey accuracy or resolution. This is because variation in ship elevation related to wave motion has a predominant period of 7 to 15 seconds which, for normal boat speeds, is much shorter than the wavelengths of exploration interest. Sea level variations due to the tides result in very long wavelength variations in observed gravity amounting to only a few tenths of a mGal, which can be corrected directly by calculating sea level variations from position and time. More usually, these roughly 12-hour-period variations in gravity are removed at the stage of adjustment of survey line crossing differences.

Conventional Surveying for Land Gravity Surveys


Conventional geographic land surveying employs optical triangulation and spirit leveling. Keep in mind, however, that the "conventional" surveying methods of the past may not be the conventional or the most popular methods of the near future. These survey methods are being overtaken and replaced by satellite-based systems and inertial surveying. The advantages of conventional surveying stem mainly from the low cost and wide availability of the survey instrumentation. Disadvantages accrue when distances between stations are long, or in terrain or vegetation where the line of sight is short. Under these circumstances, conventional methods are relatively slow and expensive. Over distances of a few hundred meters, it is reasonably easy to achieve a relative vertical accuracy of less than 1 cm, which is required for the highest precision microgravity surveys. Loop closure requirements for typical land gravity surveys for oil exploration range from 20 to 100 cm (0.1 to 0.5 mGal) where stations are spaced at about 500 m. It is often a matter of contention whether vertical coordinates surveyed along seismic lines meet the vertical accuracy specification for the gravity survey. Gravity values reduced with an incorrect elevation will not fit coherently with adjacent stations, so gravity surveys are often incidentally helpful in finding elevation survey errors on seismic surveys.

Barometric Altimetry
Barometers have been used to establish vertical control for gravity surveys with varying success. Because of wind and weather changes, isobaric surfaces are reliable only as a vertical reference in calm weather and gentle terrain. Networks of recording barometers installed at known elevations have been used to correct for variations in barometric pressure over a survey area. The practical resolution of barometric altimetry is on the order of 0.5 m (about 0.2 mGal). Repeatable measurements to within 1 m or less have been achieved over short distances in calm weather. On the other hand, in rugged terrain, where the operational advantages of the method are of the greatest potential value, repeat differences of more than 10 m (3 mGal) are common. Multiple repeats are required in these cases to reduce the uncertainty of the measurement. The practical result is that transport costs often offset the lower cost of the altimetry equipment.

Inertial Surveying
The inertial survey system is based on the double integration of acceleration outputs from three orthogonal accelerometers mounted on a gyro-stabilized platform. The integration constants at the beginning of each traverse correspond to each of the survey coordinates plus the vehicle velocity in the direction of each coordinate. At each new survey point, we correct the system drift by resetting the system velocities to zero--known as a zero velocity update. Inertial survey accuracy is about 1 part in 50,000 of the distance from the nearest control point. New survey points must lie nearly in a straight line between the initial and final control points on a given line. Typical survey design consists of establishing control points on the perimeter of a survey area so that survey traverse lengths will be on the order of 50 km and expected

coordinate accuracies will be 0.3 to 0.5 m. The usable vertical and horizontal resolution of the inertial system is 0.1 to 0.2 m. Speed is its main advantage. Survey accuracy and speed are independent of intervisibility of survey points (or visibility of satellites). The main disadvantage of inertial surveying is cost. The equipment is expensive, and it is very costly on a daily or hourly basis compared to other survey equipment. The first commercial use of inertial surveying in support of exploration gravity surveys for oil was during the exploration of the Overthrust Belt of Utah, Wyoming, Idaho and Montana. By 1977, most of the areas had been surveyed conventionally to some extent, but rugged terrain confined most of the survey traverses to roads and trails in the valleys and along streams. Rarely were the conventional survey lines spaced more closely than about 10 km. The anomaly field is complex in this area, so detail on structural trends and closures between lines was needed both in evaluating prospects and in planning additional seismic work. Using helicopters for transport, stations were established on nearly regular 1-mile and 0.5-mile grids. Although expensive on a per-station basis, the combination of helicopter and inertial surveying enabled rapid survey coverage over difficult areas that would have been much more expensive and impractical using conventional surveying. Another successful application of the inertial surveying system to gravity survey work uses ground transport rather than a helicopter. The inertial system installed in a utility vehicle facilitates establishing survey coordinates along roads where short line-of-sight or other considerations make conventional surveying impractical or slow. Production rates typically average 50 stations per day with a helicopter and over 100 stations per day with a vehicle along roads. A unique characteristic of inertial systems is that they require no outside signal during a traverse, although each survey traverse must start and end at a pre-defined coordinate. Positions are normally referenced relative to a point on the survey vehicle such as the helicopter landing skid or a point on a car bumper. The pre-defined control coordinates needed to begin inertial surveying must come from another source, such as bench marks and triangulation points. In some very remote areas in Africa, control points have been established prior to the survey using satellite survey methods.

GPS Surveying
GPS is the Global Positioning System, the satellite navigation system established by the U.S. Department of Defense. A constellation of about 24 satellites circles the Earth in six orbit planes such that we can use measurements of signal transit time from any four satellites to compute our location and precise time relative to the GPS system. GPS survey receivers and survey methods have proliferated since 1990, and GPS shows promise of dominating an increasing number of land survey applications. Equipment costs have come down, and methods are being developed to achieve survey accuracies of less than 1 cm with short station occupation times. The system's accuracy for determining the position of a single receiver is deliberately degraded to about 100 m for non-U.S.-military users. Methods for dealing with the degraded signals are a burgeoning and rapidly changing field. Differential GPS makes use of common-mode error extraction by observing the signals at a fixed survey base and using the base information to correct the data from the survey receiver. GPS promises some of the same advantages as inertial surveying, but at lower cost. A further review of the present state of technology at this writing seems futile, since it is certain to continue changing rapidly. Obviously, a fundamental limitation of GPS is that the satellite signals must be received at the survey point. This means that the sky must be visible, because the 1575 MHz signal is blocked by vegetation and buildings. Another limitation is the completeness of the satellite constellation. Until recently, the constellation was incomplete, so that survey work could only be carried out at certain times of the day. Future utility of GPS depends on continued maintenance of the satellites. The U.S. government has recently reaffirmed its commitment to the GPS program, and maintenance into the next century is assured. Additionally, the data will be declassified, and will no longer be degraded by the U.S. Department of Defense.

Navigation Systems for Marine and Airborne Gravity


The navigation accuracy requirement for shipborne and airborne gravity surveys is a consequence of the need to compute an accurate Eötvös correction that is proportional to the eastward velocity component of the ship or aircraft. Relative position accuracy is therefore more critical than absolute accuracy. GPS has become the most common navigation system for marine geophysical surveys. Although a number of shore-based radio navigation systems are still used for some applications, GPS delivers much better resolution of the Eötvös correction (Herring, 1985). One unique aspect of GPS navigation that is critical to accurate measurement of the Eötvös correction is the capability of some GPS receivers to measure velocity directly from the Doppler shift of the GPS carrier signal. Relative velocity accuracy of 2-3 cm/s can be achieved at one-second sampling intervals. This is equivalent to a maximum Eötvös effect of about 0.5 mGal (51 cm/s is 1 knot). For longer periods, the Eötvös error is further reduced by averaging samples--for periods of over 2 minutes, we can attain Eötvös correction accuracy of 0.1 mGal. GPS also shows promise of improving the vertical acceleration correction for airborne gravity. At the present level, we expect vertical acceleration accuracies of about 1 mGal at periods of 7-12 minutes from GPS. Until now, we have obtained the vertical acceleration correction measurement for airborne gravity by differentiating the output of sensitive altimeters. Further improvement in the accuracy of GPS velocity measurement seems likely, making vertical acceleration corrections more effective. Horizontal positioning for most of the marine surveys of the recent past has been obtained from land-based radio-navigation systems operating at a range of frequencies. The lower frequency systems (about 1 MHz) have greater range and less accuracy, while the higher frequency systems (hundreds of MHz) have greater accuracy but more limited range. Airborne gravity surveys have used the higher frequency systems. For all of these systems, the velocity needed to compute the Eötvös correction must be obtained from successive point positions--that is, we compute velocity from the measured time and distance between fixes. This obviously demands accurate timing of fixes, as well as accurate positions. As a practical matter, recording accurate relative times between fixes, particularly on seismic operations, has not always been accomplished. Without accurate measurement of the time interval between fixes, the Eötvös correction is degraded below the level dictated by position (distance) accuracy, and data quality suffers unnecessarily.

Land Gravity Instruments


The term gravity meter or gravimeter has come to mean some kind of sensitive spring balance capable of measuring changes in gravity with an accuracy of at least 1 mGal. Most gravity meters can repeat measurements to within 0.05 mGal, and some instruments are capable of nearly 0.001 mGal, or 1 µGal. Gravity meters came into wide use as practical field instruments in about 1940. Before that, gravity measurements for exploration employed pendulums and torsion balances. Nettleton (1976), Dobrin (1988) and Torge (1991) provide interesting, detailed summaries of the development history of gravity measurement.

The seemingly straightforward problem of measuring the displacement of a mass hanging on a spring caused some to conclude that a spring-and-mass gravity meter capable of measuring a gravity change of 0.1 mGal would have to be about 10 m tall. Displacement of the mass would be proportional to the total change in gravity -- about 1 part in 10^7 for 0.1 mGal. When the first meters were being developed, displacement could be measured to an accuracy of about 10^-4 cm, so some concluded that the total length of the measuring spring would have to be 10^7 times 10^-4 cm, or about 10 m (roughly 30 feet). Several inventions were made to enhance the displacement of a spring balance so that accurate gravity measurements could be made using a reasonably sized, field-portable instrument. The instruments still most widely used are the LaCoste and Romberg gravity meter and the Worden gravity meter and its clones.

A significant and relatively recent addition to land gravity measurement is the development of falling-body instruments capable of measuring the absolute value of the Earth's gravity field to very high precision -- on the order of 1 µGal. These instruments have seen little use in exploration, but they can be expected to facilitate the establishment of accurate calibration ranges for gravity meters. They may also prove valuable in the monitoring of geothermal and gas reservoirs.

LaCoste and Romberg Land Gravity Meter


LaCoste (1934) invented a spring suspension that can be used to make gravity meters with very high displacement sensitivities. The suspension consists of a beam supported by a diagonal spring whose upper point of attachment is vertically above the hinge and can be adjusted vertically to balance the pull of gravity on the beam ( Figure 1 , Schematic diagram of LaCoste and Romberg gravity meter ).

Figure 1

The high sensitivity results from the spring characteristics and the geometry of the suspension. Referring to Figure 1 , the sum of the torques exerted by gravity and the spring is

T = k(A cosb + B cosa - L0)h - mgC sinθ

(1)

where

k = spring constant
L0 = length of the spring when exerting no force
m = mass of beam
g = gravity
C = distance from hinge to center of gravity of beam

and other symbols are as shown in Figure 1 . If we let L0 = 0, and note that θ = a + b and that h can be written either as A sinb or B sina, then Equation 1 becomes:

T = k(AB sina cosb + AB sinb cosa) - mgC sinθ = kAB sin(a + b) - mgC sinθ = (kAB - mgC) sinθ

Now, we can adjust A, by vertically adjusting the upper spring attachment, so that kAB = mgC and

T = (kAB - mgC) sinθ = 0

(2)

Equation 2 states that there is zero torque on the beam regardless of the angle θ. In other words, the beam will stay wherever it is put, or if gravity changes at all, the beam will theoretically move to the end of its travel. Practically, this means that we can achieve a very high displacement sensitivity. This high displacement sensitivity depends on a spring whose unstretched length is zero (L0 = 0), or would be zero if the turns of the spring did not bump into each other. Such springs can readily be made both of metal and of quartz; they are generally called zero-length springs. Most of the gravity meters now in use are of this type.

Gravity meters with high displacement sensitivities are sometimes referred to as unstable gravity meters, but this is a misnomer. They are also sometimes referred to as labilized or astatized, because they are sometimes made by combining unstable elements with stable elements. Before the advent of zero-length springs, experimenters tried to achieve high sensitivity by using spring suspensions which were stable over part of the range of motion and unstable over the rest of the range. They achieved fair results by operating in the stable range but near instability. This practice probably led to incorrectly calling all high-sensitivity instruments unstable. Figure 2 ( Diagrammatic cross section of LaCoste and Romberg gravity meter )

Figure 2

and Figure 4 ( Diagram of gear train assembly and measuring screws ) are diagrammatic cross sections of the LaCoste and Romberg gravity meter which uses the high-displacement sensitivity suspension.

Figure 4
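As a numerical illustration of Equations 1 and 2 (the dimensions below are arbitrary illustrative values, not meter specifications), the following sketch shows that with a zero-length spring and kAB = mgC the net torque vanishes at every beam angle, while any small change in gravity leaves an unbalanced torque that drives the beam toward the end of its travel:

    import math

    k = 50.0    # spring constant, N/m (illustrative)
    A = 0.05    # hinge to upper spring attachment, m (illustrative)
    B = 0.04    # hinge to spring attachment on the beam, m (illustrative)
    C = 0.04    # hinge to beam center of gravity, m (illustrative)
    g0 = 9.80   # m/s^2
    m = k * A * B / (g0 * C)   # choose the beam mass so that kAB = mgC exactly

    def net_torque(theta_deg, gravity):
        return (k * A * B - m * gravity * C) * math.sin(math.radians(theta_deg))

    for theta in (30.0, 60.0, 90.0):
        print(theta, net_torque(theta, g0))                 # zero at every angle
        print(theta, net_torque(theta, g0 * (1.0 + 1e-7)))  # unbalanced after a tiny change in g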

In practice, the displacement sensitivity of the LaCoste and Romberg meter is about 1000 times that of a simple weight on a spring. The beam is nulled by vertically adjusting the upper end of the spring. The proof mass used in the LaCoste and Romberg land meter is about 15 g. Shipborne meters use a proof mass of 29 g, and borehole meters use a mass of 8 g. All LaCoste and Romberg meters are constructed of metal and use metal springs. The beam's position is read by observing a cross-hair on the beam with a microscope or by using an electronic readout device built into the instrument.

An interesting feature of this instrument is that the displacement sensitivity varies with the orientation of the suspension in relation to the direction of the gravity vector. Figure 1 and the analysis that leads to Equation 2 are based on the spring attachment point being vertically above the hinge point, which results in theoretically infinite sensitivity. Tilting the instrument toward the mass reduces the displacement sensitivity, and tilting it the other way causes the instrument to become unstable. (Think of the instrument tilted 90 degrees toward the beam. The beam would be oriented vertically and hang straight down. In this position, it would be very stable and insensitive to any change in gravity.) Normal land instruments are operated so that the instrument is stable but not quite infinitely sensitive. We can thus adjust sensitivity by adjusting the level reference along the axis parallel to the beam--that is, by adjusting the long level. Shipborne meters, borehole meters and electrostatically nulled meters are normally operated with the long-level reference adjusted as close to infinite sensitivity as possible.

The LaCoste and Romberg instrument must be kept thermostated during use. It weighs about 6 pounds, and requires a storage battery of about equal weight for thermostating. During manufacture, each individual meter is calibrated over its entire range in the laboratory and on known gravity base stations. Figure 3 (

Model G meter and its carrying case ) is a photograph of a LaCoste and Romberg meter with its carrying case.

Figure 3

Various modifications of the LaCoste and Romberg instrument have been used to achieve precision on the order of 2 to 5 µGal. The instrument commonly used in exploration is the LaCoste and Romberg Model G meter, which routinely delivers repeatability of 0.02 to 0.05 mGal. The LaCoste and Romberg Model D meter is designed to achieve microgal sensitivity with increased resolution of the measuring screw dial. More recently, electrostatically nulled systems that employ computer-based correction and recording systems have been used to achieve consistent repeatability of 2 to 5 µGal. Laboratory repeats of less than 1 µGal have been demonstrated.

Worden Gravity Meter

The Worden gravity meter sensor is constructed almost entirely of fused quartz. Although the geometry of the suspension is quite different from that of the LaCoste and Romberg meter, the instruments have the zero-length-spring, high-displacement-sensitivity design in common. For both meters, we can adjust sensitivity by adjusting the "long-level" reference. The proof mass of a Worden meter is only about 5 mg.

The main advantage of the quartz meter is its light weight, due to the relative thermal stability of quartz and the small sensor. The sensing element is encased in a sealed vacuum flask to isolate it from outside temperature changes. The Worden meter weighs only about 5 pounds, and can be operated without thermostatic control. Some versions of the Worden meter provide thermostatic temperature control that improves the drift stability of the instrument, but this requires the additional weight of a battery and some loss in the ease of field portability. In addition, most models use a bimetallic temperature compensation spring. Given adequate base ties, Worden-type meters can be used to acquire data accurate to a few tenths of a mGal without the use of thermostatic control. Thermostatically controlled Worden meters achieve repeatabilities of about 0.02 to 0.05 mGal. A number of manufacturers have produced quartz spring gravity meters that are basically identical to the Worden meter (for example, Sodin, Worldwide Instruments, Scintrex and Sharpe). Clones of the Worden instrument were also made in China and the former USSR.

Scintrex CG-3 Autograv

The Scintrex CG-3 meter, which has been in commercial use since 1987, uses a fused quartz sensor and electrostatic nulling. The CG-3 sensor consists of a 300 mg proof mass hanging on a spring about 2 cm long. The proof mass is mechanically constrained to move only very slightly. The measurement relies on an electronic design capable of resolving small changes in the electrostatic force needed to maintain the proof mass at precisely the null reference point. The vacuum-sealed sensor is maintained at a nearly constant temperature (within about 0.001°C) using a two-stage, or double-oven, temperature-stabilized environment. The CG-3 is an automated system with a microprocessor-based control and data acquisition system. Among the advantages of the CG-3 are worldwide range without resetting and automated operation. Instrument drift is computed and corrected internally. Actual drift is claimed to be linear and adequately handled by the microprocessor. The instrument, including its battery, weighs 12 kg. Standard resolution of the CG-3 is 0.01 mGal. Recently, Scintrex has offered a CG-3M, which has a reading resolution of 1 µGal.

Axis Instruments FG5 Absolute Gravimeter

Absolute measurement of the acceleration of gravity is based on the fundamental quantities of distance and time. Laser interferometry is employed in the Axis Instruments FG5 gravimeter. Other weight-drop instruments have been developed and are described in Torge (1989) and Nettleton (1976). Because of its relatively large size, weight and cost, the FG5 is not a field exploration instrument, but it has applications in establishing absolute base stations and monitoring temporal changes in gravity. The manufacturer's stated accuracy for the instrument is 2 µGal, with a repeatability of 1 µGal. The measurement time required to obtain 1 µGal precision is less than two hours at a quiet site. The instrument has a shipping weight of 349 kg.

Borehole Gravity Meter


The LaCoste and Romberg borehole gravity meter is the only commercially successful borehole gravity instrument. Although it is smaller than the LaCoste and Romberg land gravity meter, its design is basically identical. The proof mass is about half that of a land meter. Only about ten borehole instruments are operational, in contrast to several thousand land instruments. The borehole gravity meter's main application is deep-investigation density logging; it is useful in identifying porosity and fluid saturation values that are undisturbed by the near-wellbore environment and thus representative of reservoir conditions. The borehole meter's ability to achieve high precision in noisy borehole conditions has been enhanced by the addition of electrostatic nulling and computer-based correction and recording systems. The nullers are similar in concept to those used on some LaCoste and Romberg Model G meters and on the Scintrex CG-3.

Operating limitations of the LaCoste and Romberg borehole gravity meters are imposed by the size of the meter, the need to level the instrument in the borehole and the need to maintain the instrument at a constant thermostating temperature of about 123°C. Typical logging sonde diameters are 10.5 cm for logging temperatures below about 115°C and 12.7 to 13.3 cm for higher logging temperatures up to about 250°C. The larger diameter sondes accommodate a Dewar flask to protect the instrument from well temperatures higher than the meter's thermostating temperature. The length of the logging sonde is about 3 m. The leveling mechanism of the meter limits operations to wells with a deviation from vertical of 14 degrees or less. The relative accuracy of the borehole gravity meter is 2 to 15 µGal, depending on operating conditions.

Underwater Gravity Meters


Until the advent of stabilized-platform shipborne gravity meters in 1965, underwater gravity meters were widely used offshore. However, because of the relatively high cost of underwater gravity operations, very few underwater surveys have been carried out over the past 20 years. Nettleton (1976) provides a historical review of the development of underwater gravimetry. Only two or three operational underwater gravity systems exist. Underwater gravity instruments now in use are essentially identical to the LaCoste and Romberg Model G meters. The meters are housed in a small, 50-cm diameter diving bell which is weighted with lead to make it sink; the total weight of the meter and its housing is about 160 kg. Current underwater gravity meter systems use electrostatic nulling and remote servo-system control for leveling and spring-tension adjustment. The accuracy of underwater gravimetry is about 0.1 mGal.

Marine and Airborne Gravity Meters


The instruments used in commercial exploration for marine and airborne survey work are the LaCoste and Romberg Models S and SL (for straight line), the Bell BGM-3 and the Bodenseewerk Kss31. Torge (1989) and Nettleton (1976) describe these and other systems in detail. Until recently, marine and airborne acquisition accuracy has been limited by dynamic corrections, the Eötvös correction and vertical acceleration corrections. Published (Valliant, 1983) and unpublished performance comparisons of these instruments reveal little difference in their basic abilities to measure changes in gravity. Typical relative survey accuracy for marine gravity has been 0.5 to 2.0 mGal at wavelengths of about 2 km ( Figure 2 , Gravity measurements: estimated resolution and accuracy ).

Figure 2

Relative accuracy, demonstrated by using GPS combined with improved low-noise electronics and a computer-controlled LaCoste and Romberg Model S gravity meter, is 0.2 mGal at wavelengths as short as 500 m (Hall and Herring, 1991).

LaCoste and Romberg Model S
About 100 of the LaCoste and Romberg Model S instruments have been built and are being operated by exploration contractors and various government and educational institutions. There are about 50 currently used in exploration. The Model S stabilized platform air-sea gravity meter was first used commercially in 1965 by GAI-GMX (LaFehr and Nettleton, 1967). It differs from the LaCoste and Romberg land gravity meter in two significant aspects: first, the zero-length-spring suspension on the Model S is stiffer in order to better withstand horizontal accelerations; second, the damping in the vertical direction is greatly increased to prevent the movable beam of the gravity meter from hitting its stops, even when vertical accelerations are several hundred thousand mGal. The gravity meter is mounted on a gyro-stabilized platform to keep it level.

Although we might expect the high damping to give a slow gravity meter response, the following analysis (LaCoste, 1967) shows that this is not the case. The basic equation of motion for the system is the simple harmonic motion equation:

(1)
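The original equation image for (1) is not reproduced in this text. As a hedged sketch only (not the original formulation), the highly damped, zero-restoring-force case that the argument below relies on can be written and solved, using a subset of the symbols defined below and lumping the step change in the forcing into a single constant R0:

```latex
% A minimal sketch, assuming the reduced (k = 0) equation of motion takes the
% standard damped form with a step forcing R_0 applied at t = 0:
m \ddot{B} + F \dot{B} = R_0 , \qquad B(0) = \dot{B}(0) = 0
% Solving for the beam velocity:
\dot{B}(t) = \frac{R_0}{F}\left(1 - e^{-F t / m}\right), \qquad \tau = \frac{m}{F}
```

This reproduces the behavior described in the discussion that follows: the beam velocity approaches R0/F exponentially, and the time constant m/F shrinks as the damping coefficient F grows.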

where

g = acceleration of gravity
m = mass of beam
z = vertical position of a point on the meter case
B = displacement of the beam with respect to the meter case
F = damping coefficient
k = restoring force constant on mass
c = spring tension constant
S = vertical displacement of the spring's upper end relative to the meter case

Since the Model S gravity meter uses a zero-length-spring suspension, the displacement sensitivity is practically infinite. Thus, we can assume that the restoring force constant k is zero. With this approximation, let us find the solution for a step function change in gravity on the right side of Equation 1. We find that ∂B/∂t, the beam velocity, exponentially approaches a limit, and that the limit is the value of the step function divided by F. We also see that the time constant decreases as the damping coefficient increases, and that the time constant for the damping actually used in the gravity meter is only about 1 millisecond. In other words, the time constant is entirely negligible because of the high damping.

A qualitative analogy helps illustrate this point: a hard ball is resting on a level, flat, glass table. The ball will remain at rest anywhere it is placed on the glass table until the table is tilted. In air, it will take a long time for the ball to accelerate to a steady-state, constant velocity. If the table and ball are immersed in a viscous fluid so that the motion of the ball is highly damped, very little time will be required for the ball to accelerate to its very low terminal velocity in the viscous fluid. The remark that the ball will remain anywhere it is placed on a level table points to another analogous point of behavior: with the spring balance nulled, the beam will remain wherever it is placed in its range of travel.

The LaCoste and Romberg Model S gravity meter, like all gravity meters, has some degree of imperfection cross-coupling and/or platform leveling error. The Model S also has inherent cross-coupling as a result of the meter's cantilevered beam design. Meters designed so that the proof mass travels in a vertical straight line do not have inherent cross-coupling error. The form of the inherent cross-coupling error is similar to the example of Figure 1 ( Effect of horizontal accelerations on gravity: cross-coupling effect ) developed for the relationship

e = k ax av

where e is the cross-coupling error, k is a constant, and ax and av are the horizontal and vertical components of acceleration, respectively.

Consider a beam-type meter on a perfectly level platform. The sensitive axis in that type of gravity meter is the direction normal to the beam and to the axis of the hinge; this is the direction in which a pull on the beam has the maximum effect.

(2)

We therefore see that the sensitive axis shifts as the beam moves, even though the gravity meter is kept level by the stabilized platform. This is the same condition we had in the case of the platform being driven off level by varying vertical accelerations. Equation 2 still applies, but now b in Figure 1 refers to the angle the beam makes with the horizontal.

Figure 1

There are three ways to deal with inherent cross coupling:

1. Measure the beam's deflection and the horizontal acceleration ax and correct for it. This is the method used in the LaCoste and Romberg Model S meter (a minimal sketch of this kind of correction follows this list).
2. Accurately null the beam to keep b = 0. This method was used in the Askania Gss20.
3. Make the proof mass move in a straight vertical line. This is the method employed in the LaCoste Straight Line meter and in the Bell and Bodenseewerk meters described below.
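As a rough, hypothetical sketch of the first approach (not the actual Model S electronics or algorithm; the coefficient k and the acceleration series are invented for illustration):

```python
import numpy as np

# Hypothetical illustration of correcting for a cross-coupling error of the form
# e = k * ax * av. The coefficient k and the signals below are assumed; a real
# system calibrates k for the individual meter and also monitors beam position.
k = 1.0e-9                       # cross-coupling coefficient (assumed units: 1/mGal)
t = np.arange(0.0, 600.0, 1.0)   # ten minutes of 1 Hz samples

ax = 2.0e4 * np.sin(2 * np.pi * t / 8.0)          # horizontal acceleration, mGal
av = 8.0e4 * np.sin(2 * np.pi * t / 8.0 + 0.6)    # vertical acceleration, mGal (partly in phase)

e = k * ax * av                   # modeled cross-coupling error at each sample, mGal
g_raw = 979000.0 + e              # simulated readings contaminated only by e
g_corrected = g_raw - e           # subtract the modeled error

print(f"mean cross-coupling error: {e.mean():.2f} mGal")          # nonzero because ax and av correlate
print(f"mean error after correction: {(g_corrected - 979000.0).mean():.2f} mGal")
```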

LaCoste and Romberg Model SL


The LaCoste and Romberg Straight Line meter design is similar to that of the standard Model S meter in that it uses the zero-length spring suspension and is highly damped. One main difference is that the Straight Line meter uses a suspension in which the center of gravity of the proof mass moves in a straight vertical line rather than in the arc of a circle. This suspension eliminates inherent cross-coupling effects.

There are other differences between the straight-line and standard models. The straight line meter uses silicone fluid rather than air for damping. This makes it possible to use much larger damper clearances, which simplifies manufacture and adjustment, and which increases ruggedness. The use of silicone fluid and an increase in the rigidity of the suspension make imperfection cross coupling negligible. Another difference in the Straight Line meter is that most analog electronics are replaced by a microprocessor. Valliant (1983) made extensive tests of the Straight Line gravity meter against two randomly chosen standard LaCoste and Romberg Model S meters. Without using cross-correlation corrections on the standard meters, Valliant found that they were substantially inferior to the Straight Line meter. However, with cross-correlation corrections applied to the standard meters, their performance was nearly as good as that of the Straight Line meter. To date, only three of the Straight Line meters have been manufactured, compared to about 100 of the Model S meters.

Bell BGM-3
The Bell BGM-3 uses an accelerometer design that was originally used in a military inertial navigation system. The sensor itself is only about 2.3 cm in diameter and 3.4 cm high. It is housed in a temperature-controlled oven mounted on a gyro-stabilized platform. The sensor does not employ a high-displacement-sensitivity suspension. The measurement is based on a servo loop, where displacement is sensed by a capacitance pick-off, and the nulling force is supplied by varying the current in an electromagnet. The sensor has no inherent cross-coupling and is very rugged in terms of tolerating high accelerations. Bell meters have been in operation since 1967. Most of them are operated by the U.S. Navy, and a few are in use in exploration. The BGM-3 performed well in extensive tests reported by Bell and Watts (1986), and outperformed an Askania Gss2, which is an early version of the Gss20.

Bodenseewerk Kss30/31
The first shipborne meter to go into service, in 1958, was the Gss2 manufactured by Askania. Askania introduced the Gss3 in 1971, which was later manufactured by Bodenseewerk as the Kss30. The Kss30 sensor uses a capacitance transducer to keep the vertical spring-mass system near a nulled position; the Kss31 is a later version of the Kss30. A permanent magnet provides damping, and a coil attached to the mass senses a current which is a measure of the varying damping force. Like the Bell meter, the sensor does not employ a high-displacement sensitivity suspension. The spring-mass system is a relatively long tube hanging on a coiled spring.

Measurement of Gravity from Satellites


Satellites obtain gravity measurements in two ways: by orbit analysis, or observation of perturbations of the satellite paths from ideal elliptical orbits; and by sea surface altimetry, which can only be carried out over oceans. The gravity field derived from orbit analysis has very limited resolution and is of no practical value for exploration. NASA has computed gravitational models for the Earth using spherical harmonic analysis up to order 20. The shortest wavelength resolvable from such a model would be about 2000 kilometers.

Satellite Altimetry

Satellite altimetry over the Earth's oceans has provided resolution of much shorter wavelengths. Small and Sandwell (1992) compared detailed shipborne gravity measurements to the free-air gravity field inferred from satellite altimetry. They concluded that anomalies with wavelengths as short as 25 km can be resolved using the satellite altimetry available at that time. They found that the satellite gravity profiles in their study were accurate to 6.51 mGal for wavelengths greater than 25 km.

Gravity derived from satellite altimetry provides low-cost regional information that is often useful in exploration. World-wide coverage has improved in quality and density, and the data are widely available from government and academic sources at low cost. For longer wavelengths, the average accuracy of gravity from satellite altimetry is better than at the shortest wavelengths, and so it can be useful as a leveling reference for sparse or unconnected shipborne gravity data sets.

In the introduction to their paper comparing satellite and shipborne gravity, Small and Sandwell offer a concise introduction and summary, including important references: "Satellites such as Geos-3, Seasat, and Geosat use microwave radar to make high precision (2 cm vertical) measurements of the sea surface height relative to the reference ellipsoid. In the absence of disturbing forces such as tides, currents, and waves, the sea surface conforms to the geoid or gravitational equipotential surface. The short wavelength components of these geoid height profiles have been used to map fracture zones, seamounts, hotspot chains, mid-ocean ridges and a multitude of previously undiscovered features in the world's oceans. [See Sandwell (1991) for a review of applications.] Satellite altimeter data have also been used to map continental margin structure, particularly in remote areas where little shipboard data are available (Bostrom, 1989)."

For many of these applications it is desirable to compute gravity anomalies from geoid heights so the satellite data can be compared and combined with shipboard gravity measurements. The two-dimensional (2-D) Stokes' integration formula (e.g., Heiskanen and Moritz, 1967) is commonly used to compute geoid height from the gravity anomaly, and it is straightforward to invert the Stokes formula to compute the gravity anomaly directly from the geoid height. An alternate approach is to expand the geoid height in spherical harmonics, multiply each of the coefficients by a known factor, and sum the new series to construct the gravity anomaly (Rapp and Pavlis, 1990; Haxby et al., 1983). From this theory it is clear that geoid height and gravity anomaly are equivalent measurements of the Earth's external gravity field.
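As a simplified illustration of the geoid-to-gravity conversion idea, the sketch below uses the flat-Earth Fourier-domain relation rather than the Stokes or spherical-harmonic formulations cited above; the geoid profile and sample spacing are invented:

```python
import numpy as np

# Sketch of the flat-Earth (Fourier-domain) link between geoid height N and gravity
# anomaly dg:  dg(k) = 2*pi*|k| * gamma * N(k), with k in cycles/m and gamma ~ 9.81 m/s^2.
# This is a simplification of the Stokes / spherical-harmonic approaches cited above;
# the synthetic profile below is invented purely for illustration.
gamma = 9.81                     # mean gravity, m/s^2
dx = 2000.0                      # sample spacing along track, m (assumed)
x = np.arange(0, 512) * dx

# Synthetic geoid: a 5 cm undulation at 25.6 km wavelength plus a longer-wavelength signal.
N = 0.05 * np.sin(2 * np.pi * x / 25.6e3) + 0.5 * np.sin(2 * np.pi * x / 256e3)

k = np.fft.rfftfreq(x.size, d=dx)                                  # wavenumbers, cycles/m
dg = np.fft.irfft(2 * np.pi * k * gamma * np.fft.rfft(N), n=x.size)

dg_mgal = dg * 1.0e5                                               # 1 m/s^2 = 1e5 mGal
print(f"peak gravity anomaly: {np.max(np.abs(dg_mgal)):.1f} mGal")
```

The short-wavelength term shows why centimeter-level altimetry matters: a geoid undulation of only a few centimeters at 25 km wavelength maps into a gravity anomaly of roughly ten milligals.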

Survey Costs and Accuracies


The following tables give an idea of costs and accuracies that should be helpful in the early stages of survey planning. Survey costs and accuracies vary substantially depending on conditions. For example, high startup costs will greatly increase the average cost per unit for a small survey. The accuracies achieved on marine and airborne surveys depend heavily on survey conditions. Calm weather and large vessels are needed to obtain the best possible marine results, and airborne gravity similarly benefits from smooth flying conditions.

LAND SURVEYS

Application             Station Spacing   Required Vertical Accuracy   Gravity Accuracy    Cost per Station
Microgravity            3 to 30 m         1 cm                         2 to 10 µGal        $30 to $50
Detailed Exploration    0.2 to 1.0 km     < 0.5 m                      0.1 to 0.2 mGal     $30 to $100
Regional Exploration    2 to 5 km         1 to 2 m                     0.5 mGal            $150 to $500

AIRBORNE AND MARINE SURVEYS

Application                                       Typical Primary Line Spacing   Shortest Wavelength / Amplitude Resolution   Cost/km
High-resolution marine with GPS                   0.2 to 1.0 km                  0.5 km / 0.2 mGal                            $30 to $60/km (part of seismic operation); $50 to $100/km (stand alone)
Conventional marine (most surveys before 1991)    1.0 to 10 km                   2 to 5 km / 0.5 to 2 mGal                    same as above
Airborne gravity                                  1.0 to 10 km                   10 km / 2 to 5 mGal                          $100 to $200/km

Surface Gravity Interpretation

Geologic Applications and General Knowledge


Gravity interpretation is the determination of subsurface information from gravity maps or profiles. There are a number of geologic applications for gravity interpretation (particularly for integrated gravity/magnetic interpretation), including

1. Mapping subsurface geology:
   - reconnaissance basement mapping in frontier areas
   - definition of areas to acquire mineral rights or to conduct more extensive exploration
   - structural mapping in mature exploration areas; this might include salt mapping, basement mapping, mapping the top of high density rocks, etc.
   - definition of volcanic-covered areas and interrelated volcanic and clastic sequences

2. Identifying "unknown intrusives" observed on seismic record sections (i.e., salt dome vs. igneous intrusive, reef vs. lava flow, etc.)

3. Aiding in the discovery of additional oil and gas fields in a region, once some fields are discovered, by helping to define the gravity/magnetic signature

4. Extending subsurface interpretations based on seismic data into areas where no seismic data are available

We should adhere to the following general procedure whenever we need to acquire or interpret gravity/magnetic data:

Step 1: Analyze the geology of the area.

Step 2: Determine the gravity/magnetic response to known or expected geological features (a) by modeling and (b) empirically.

Step 3: Design the gravity and/or magnetic survey, or analyze the quality of the available data to solve the geological problem at hand.

Step 4: Interpret the gravity and magnetic data from known to unknown areas:
   a. Determine residual gravity (or magnetics)
   b. Interpret residual gravity (or magnetics)

In many geologic provinces, we can use both gravity and magnetic data to solve a certain part of the geologic problem, or each can independently provide different pieces of information to help solve the overall geological "puzzle." Using gravity and magnetic data together can provide much more information than we could obtain by using either tool separately; incorporating seismic data into gravity and magnetic analysis can yield even more information. Perhaps we could add a fifth step to the four listed above: a reflective appraisal of the interpretation, addressing the possibility of alternate solutions and including some sensitivity analysis of the conclusions. It is important to keep in mind that all interpretations and modeling results for gravity and magnetic data are non-unique. There are literally an infinite number of possible geometries and densities (or susceptibilities, in the case of magnetics) that can be used to fit an observed anomaly. It is therefore essential to integrate all available information into the gravity or magnetic model in order to best constrain the final result.

Applying Local Geological Knowledge


As is true for any kind of geophysical interpretation, the reliability of gravity/magnetic interpretation depends to a great extent on the reliability of local geological knowledge. To produce reliable gravity/magnetic interpretations, we must learn as much about the local geology as possible. When making a gravity/magnetic interpretation, we should answer the following questions:

- Is there a lateral density/magnetic susceptibility contrast present in the area, either in the sedimentary section or in the basement?
- What type of basement rock is likely to be present?
- Are volcanic rocks likely to be present?
- Is the sedimentary section clastic, carbonate or both? If both clastic and carbonate rocks are present, is there a distinct geologic contact between the two?
- Are minerals such as salt, gypsum, or anhydrite present?
- Are reefs, shale diapirs, salt structures, igneous intrusives, or other such features expected?
- What subsurface control and surface geology control is available?
- What density and magnetic susceptibility control is available?
- What is the dominant structural style of the region, and is there a predominant structural "strike" expected?
- What is the expected depth of burial and areal extent of the features of greatest geological interest?
- What is the local magnetic inclination, declination, and average total field?
- What is the near-surface geology?

Determining Gravity/Magnetic Response


It is critically important to determine the gravity and magnetic response to the local geology. Ideally, we should estimate this response in two ways: theoretically, by modeling known or expected geologic features; and empirically, by examining gravity and/or magnetic data within the study area and comparing actual responses to known geologic features. In unsurveyed areas, the empirical analysis of gravity and/or magnetic response will not be possible.

Model Response
The concepts behind determining density and calculating gravity effects of geologic models apply to calculating the theoretical gravity response to the local geologic features of interest. For example, do we expect to encounter salt domes of a certain geometry, reefs of a certain size and depth, or faults of a certain depth and throw? If so, we can construct simple gravity models to determine the probable amplitude and shape of the expected gravity anomaly. Determination of the amplitude and shape of the anomaly can be useful both for interpretation and for survey design.
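For instance, a minimal forward-model sketch along these lines, treating an assumed spherical body as a point mass at its center, gives the probable amplitude and half-width of the expected anomaly (all model parameters below are assumed for illustration):

```python
import numpy as np

G = 6.670e-11          # universal gravitational constant, N m^2/kg^2 (value used in this text)

# Assumed model: a spherical body treated as a point mass at its center.
radius = 500.0         # sphere radius, m          (assumed)
depth = 1500.0         # depth to sphere center, m (assumed)
drho = -300.0          # density contrast, kg/m^3  (assumed, e.g. salt vs. sediments)

dM = (4.0 / 3.0) * np.pi * radius**3 * drho         # anomalous mass, kg
x = np.linspace(-6000.0, 6000.0, 241)               # station positions along the profile, m

# Vertical attraction of a point mass: gz = G*dM*z / (x^2 + z^2)^(3/2), converted to mGal.
gz = G * dM * depth / (x**2 + depth**2) ** 1.5 * 1.0e5

peak = gz[np.argmax(np.abs(gz))]
half_width = x[np.abs(gz) >= abs(peak) / 2.0].max()  # distance from the peak (x = 0) to half amplitude
print(f"peak anomaly: {peak:6.3f} mGal")
print(f"half-width:   {half_width:6.0f} m")
```

Running a few such sketches before acquisition shows directly whether the expected anomaly rises above the planned survey accuracy and whether the station spacing will resolve its half-width.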

Empirical Response
If gravity data exist over some or all of our area of interest, we should compare the observed gravity anomalies with the calculated gravity response. Generally, we will need to adjust the geologic and/or density model the first time we make a comparison. Adjusting the geologic/density model gives us valuable information that we can apply throughout the area we are interpreting. One reason that our initial gravity model may not fit the observed gravity is that our initial estimate of the subsurface density distribution is incorrect. In some cases, we see a greater gravity response than we expected. This discrepancy could be due to the anomaly-enhancing effect of differential compaction over a reef, for example. Another potential anomaly enhancer might consist of structurally high basement blocks occurring over high density basement blocks. The combined structural and density change effects produce a larger anomaly than would be caused by the structural effect alone ( Figure 1 , Differential compaction enhancement of basement horst gravity anomaly ).

Figure 1

In some cases, there appears to be no relationship between the observed gravity response and the original model. This can happen if local geologic features of interest have little or no lateral density contrast, or if the gravity effects from local structure are obscured by geologic features that cause much larger anomalies. Such obscuring features could include shallow basalt or anhydrite thickness variations, abrupt changes in basement lithology, or changes in overall sedimentary section density. The basement density or "normal" sedimentary section densities could be considered "background geology." Large changes in background geology could obscure the gravity anomaly from the target. In any case, we should perform all relevant empirical model response work to design the interpretation procedures, preferably before the survey is acquired.

Analyzing Data Suitability


We can use the gravity response model studies to design a survey to detect the geologic features of interest (if this is possible), and to determine the suitability of existing gravity data to solve a given geologic problem. Questions to address in designing a survey or determining the suitability of existing data include the following:

1. Have we selected a reasonable Bouguer density or densities?
2. If terrain corrections are needed, have we performed such corrections in a reasonable manner?
3. Is the contouring of the data reasonable?
4. Is the vertical accuracy of the surveying adequate to provide sufficient data accuracy to solve the geologic problem?
5. In the case of shipborne or airborne gravity surveys, have adequate Eötvös and line leveling corrections been made? (Line leveling problems often cause an obvious "herringbone" pattern in the map contours.)
6. Are the line spacing and station spacing sufficient to define the anomalies related to our target and to prevent aliasing of near-surface noise? Is the geologic problem 3-D, but the data distribution suited for only 2-D analysis?
7. In the case of shipborne or airborne gravity surveys, have the data been over-filtered such that target anomalies might be oversmoothed?

Determining Residual Gravity


We should design our interpretations (or new field surveys, for that matter) to incorporate all known relevant geological constraints, including

- known basement or other "mapped formation" outcrops
- subsurface depth control (such as "top of salt," "depth to basement," "minimum depth to basement," "projected depth to basement," etc.)
- other relevant surface geologic controls, such as areas of volcanic outcrop, known faults and fault patterns, basement composition, density, and magnetic susceptibility, sedimentary rock outcrop patterns, etc.
- subsurface and/or seismic control on bulk densities and magnetic susceptibilities from well logs, samples, or modeling of "known" seismic structures

We can use all of the above geologic constraints to help interpret residual gravity. Bouguer gravity data contain the superimposed effects of basement, sedimentary, and crustal structure and lithologic changes. The residual gravity field is the portion of the Bouguer gravity field that is related to the geologic problem at hand, and represents the difference between the Bouguer gravity field and the regional gravity field. Determination of residual gravity is also known as anomaly separation.

There are a number of different anomaly separation methods, which we can group into two general categories: purely mathematical, and "eyeball"/profile/geologic. Purely mathematical methods include ring residuals, bandpass filters, upward or downward continuation, polynomial residuals, and derivatives. Essentially, these methods require selecting a particular residual operator (such as a second derivative) and then performing the operator calculation on a computer. Generally, mathematical residual methods provide qualitative local anomaly enhancement, but do not produce a residual gravity field that is suitable for gravity modeling. Eyeball/profile/geologic residual determination methods allow the interpreter to put a geologic bias into the residual gravity. Eyeball/profile/geologic anomaly separation includes graphical profile analysis and other

methods that require qualitative and quantitative input of geologic constraints. Strictly speaking, it is possible to construct residual gravity anomalies from eyeball/profile work without any geologic input; however, incorporating any known geologic input into eyeball/profile residual determination results in a much more reliable interpretation. Figure 1 ( Regional gravity determination from Warm Springs Valley, Nevada ) shows an example of removing a graphical Regional gradient by eyeball analysis.

Figure 1

Note that the Regional is designed to remove the effects in the gravity that are not related to the valley fill material. Remember that the residual gravity anomaly is simply the difference between the Bouguer and the Regional. In this case, the interpretation shows that the residual is caused by the valley fill. A common problem with Residual Gravity determination occurs at the continental margin, where the Bouguer gravity often increases in a seaward direction due to crustal thinning and closer proximity to the higher density oceanic crust. Example Problem 1 illustrates the use of geologic and magnetic depth constraints to assist in designing a gravity regional.

Example Problem 1 - Regional Gravity Determination - Continental Margin

A Bouguer gravity profile is observed as shown on Figure 2 ( Bouguer gravity profile ).

Figure 2

Surface geology data indicate granite outcrop along the left end of the figure to X = -60,000 ft. A magnetic basement depth of 5000 ft occurs at X = +60,000 ft. Design a Regional Gravity Profile such that the resulting Residual Gravity Profile will correlate to changes in the thickness of the sedimentary section. Assume that the density contrast between sedimentary rocks and basement rocks is -0.5 g/cm3.

Solution

Figure 3 ( Geologic constraints on "regional gravity" ) illustrates the use of the two geologic constraints to design the "Regional Gravity" curve.

Figure 3

Constraint A) The "Residual Gravity" (the difference between the Bouguer gravity and the Regional) should be nearly zero at the left end of the cross-section.

Constraint B) Using the "slab formula," the amount of residual gravity anomaly at X = +60,000 ft is estimated as follows:

Δg = 12.77 Δρ Δt = 12.77 (-0.5 g/cm3)(5 kilofeet) = -32 mGal

Then a smooth Regional can be drawn to give near-zero residual gravity over the granite outcrop and -32 mGal of residual gravity at X = +60,000 ft. The Regional Gravity is smooth (contains no short wavelength anomalies) because the crustal structure that causes the Regional is deeply buried. The regional gravity curve is drawn close to the Bouguer gravity over the known granite outcrop; thus the regional gravity has a broad curve, in this case, instead of merely being a straight line. Note that the actual geologic model, with the sedimentary basin and a petroleum prospect (a basement high) within the basin, is shown in Figure 4 ( Continental margin Bouguer gravity and geologic model ).

Figure 4
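As a quick arithmetic check of the slab-formula constant and the -32 mGal estimate above (using the value of G quoted earlier in this text):

```python
import math

G = 6.670e-11                      # N m^2/kg^2, value quoted earlier in this text

# Infinite-slab ("Bouguer slab") effect: dg = 2*pi*G*rho*t.
# With rho in g/cm^3 (1000 kg/m^3) and t in kilofeet (304.8 m), expressed in mGal:
slab_constant = 2 * math.pi * G * 1000.0 * 304.8 * 1.0e5
print(f"slab constant: {slab_constant:.2f} mGal per (g/cm^3 * kilofoot)")   # ~12.77

dg = slab_constant * (-0.5) * 5.0   # density contrast -0.5 g/cm^3, thickness 5 kilofeet
print(f"residual at X = +60,000 ft: {dg:.0f} mGal")                         # ~ -32
```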

Qualitative Analysis
Figure 1 ( Graphical residual vs. grid residual for fault anomaly ) illustrates the difference between a graphical and a grid residual for defining a fault anomaly.

Figure 1

Given the Bouguer gravity anomaly shown in this figure, we can recognize a fault anomaly signature superimposed on a regional gravity increase toward the right side. Using some simple model calculations, we can construct the linear graphical "Regional" and subtract it from the Bouguer gravity to produce the Residual. The resultant Residual is suitable for input to a 2-D inverse modeling program, or for further analysis of fault geometry and depth using simple graphical models and a trial-and-error solution method. Figure 1 also shows a simple approximation to a second vertical derivative, constructed from a three-point operator having coefficients of -1, 2, and -1. For example, for the second-vertical-derivative point at the Bouguer value of 3.80 mGal shown, the calculation would be -1(3.62) + 2(3.80) - 1(3.60) = +0.38 mGal/(data interval)². Note that the second vertical derivative is positive over the upthrown part of the fault anomaly, negative on the downthrown side, and about zero over the steepest gradient of the graphical residual (which is located approximately on the projection of the fault plane to the surface). We can see that the second vertical derivative does not provide a residual referenced to a datum for modeling, but it does produce an anomaly (which approximately corresponds to the curvature of the Bouguer gravity field) that correlates with the fault geometry in a qualitative sense. We can also see that the units of the second derivative are not mGal, but mGal/(data interval)².

For an example of the usefulness of qualitative analysis, refer to Figure 2 ( Detail of Bouguer gravity map, Los Angeles Basin ) and Figure 3 ( Detail of second vertical derivative of gravity, Los Angeles Basin ):

Figure 2

Figure 3

Figure 2 shows a detailed Bouguer gravity map of a portion of the Los Angeles Basin (Elkins, 1951). This figure shows the steep northeasterly decrease in Bouguer gravity from the Torrance Field to an area several miles northeast of the Dominguez Field. No local gravity closures are present over any of the fields shown in Figure 2 , due to the northeast regional gravity decrease related to the increase in sedimentary section thickness in that direction. Figure 3 shows a detailed second vertical derivative map of the Bouguer gravity. Note that the Rosecrans, Dominguez, Long Beach, and Wilmington Fields are all more or less defined by second vertical derivative positives (maxima). This example illustrates the use of the second vertical derivative to qualitatively outline oil and gas prospects. The second vertical derivative provides a data enhancement, but not a residual gravity field that we can model. We can use grid residuals to help find more oil and gas fields in an area, once a few fields have been found, by learning and applying the knowledge of the gravity and grid residual signature. It is often necessary to experiment with various grid residual operators to find an operator which defines our geologic features of interest. In the Los Angeles Basin example, the second vertical derivative separates the local gravity anomalies (related to local structural highs over oil fields) from the northeast regional gravity decrease (related to the increase in sedimentary section thickness).
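A minimal sketch of the three-point second-derivative operator discussed above; only the 3.62, 3.80, 3.60 triplet comes from the text, and the remaining profile values are invented:

```python
import numpy as np

def second_vertical_derivative(profile):
    """Approximate second derivative along a profile with the three-point
    operator (-1, 2, -1); the result is in mGal/(data interval)^2."""
    g = np.asarray(profile, dtype=float)
    d2 = np.full_like(g, np.nan)             # end points have no centered estimate
    d2[1:-1] = -g[:-2] + 2.0 * g[1:-1] - g[2:]
    return d2

# Bouguer values in mGal; only the middle triplet (3.62, 3.80, 3.60) comes from the text.
bouguer = [3.10, 3.35, 3.62, 3.80, 3.60, 3.30, 3.15]
print(second_vertical_derivative(bouguer))
# The value at the 3.80 station is -3.62 + 2*3.80 - 3.60 = +0.38 mGal/(data interval)^2.
```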

Depth Estimation
Once we construct residual gravity anomalies by using an "eyeball"/profile/geologic technique to remove regional effects, we can further analyze these anomalies using various depth estimation techniques. Most

depth estimation techniques are "rules of thumb" based on simple geologic models. Three examples are as follows (refer to Figure 1 ): For an elongated body (in map view),

Figure 1
the maximum depth to the center of a horizontal cylinder is

Zc = X1/2

where X1/2 is the anomaly half-width (the horizontal distance from the anomaly peak to the point where the anomaly falls to half of its maximum value). This model can be useful for anticlines or horst blocks which are two-dimensional in map view. LaFehr (1987) provides more detailed thin plate estimation techniques. For a spherical body, the maximum depth to the center of the sphere is

Zc = 1.3 X1/2

For a thin vertical fault, the depth to the center of throw is

Zc = X3/4 - X1/2, or Zc = X1/2 - X1/4

where X3/4, X1/2, and X1/4 are the horizontal positions at which the anomaly reaches three-quarters, one-half, and one-quarter of its maximum amplitude. Other depth estimation techniques can be found in Grant and West (1965). Note that real-world geologic structures are much more complex than the simple cylindrical, spherical or straight vertical-throw fault models used to derive these simple guidelines. In other words, depths determined from rule-of-thumb methods are useful only as first approximations. We can check estimated depths for given local gravity anomalies by computing the gravity effect of the model when located at the depth estimated. Once we determine the approximate depth, we can use it as a "starting point" in designing and constructing computer models. Using potential field theory, we can show that any gravity anomaly can be fit exactly by a variable density layer located at the observation surface. The gravity highs would thus be underlain by higher density portions of the surface layer. So, in reference to Figure 1 , for example, a 2-D anomaly (elongated) with a half-width of 5000 ft would imply a 5000 ft depth to the center of a horizontal cylinder. In actuality, the horizontal cylinder depth is the deepest possible depth to the center of a 2-D body. LaFehr (1987) described

a unit half-width circle method that provides somewhat more definition of maximum depth for thin plate models. However, no depth estimation method can overcome the fundamental ambiguity that any given gravity anomaly that is fit by a subsurface structure and density model can also be fit by a shallower model ( Figure 2 , Example of ambiguity ).

Figure 2

We can overcome this ambiguity problem, to a large extent, by applying independent geologic or geophysical constraints (e.g., density data, subsurface data points, knowledge of local structural style). For example, if we know a depth to the high density layer at Well A in Figure 2 , and have a good idea of the density distribution, then the likely geologic solution is very much constrained. We essentially use our knowledge of the depth to the upthrown side of the fault, along with our knowledge of the density contrast, to determine a geologically reasonable model for the amount of throw on the fault. We then check our calculations by calculating the gravity effect of the model. If the final model fits the residual gravity and all of the available geologic constraints, then it represents a valid solution to the local gravity problem. Even one geologic data point such as Well A can greatly increase the reliability of a gravity interpretation.
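Before moving on to quantitative modeling, here is a minimal sketch of the rule-of-thumb depth estimates listed earlier in this subsection; the picked distances are invented and would normally be read from the residual anomaly profile:

```python
# Rule-of-thumb maximum depth estimates from the Depth Estimation discussion above.
# The picked distances below (in ft) are invented for illustration.

def depth_cylinder(x_half):
    """Elongated (2-D) body: maximum depth to the center of a horizontal cylinder."""
    return x_half

def depth_sphere(x_half):
    """Equidimensional (3-D) body: maximum depth to the center of a sphere."""
    return 1.3 * x_half

def depth_fault(x_three_quarter, x_half):
    """Thin vertical fault: depth to the center of throw from the 3/4- and 1/2-amplitude positions."""
    return x_three_quarter - x_half

print(depth_cylinder(5000.0))        # 5000 ft, as in the half-width example in the text
print(depth_sphere(5000.0))          # 6500 ft
print(depth_fault(9000.0, 6500.0))   # 2500 ft (picked positions are assumed)
```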

Quantitative Gravity Modeling


Gravity modeling is divided into two broad categories:

Direct, or forward, modeling

Inverse modeling

Direct gravity modeling involves calculating the gravity effect caused by a given geologic situation, as defined by depths and densities. Direct gravity modeling is unambiguous--that is, for a given geologic model, there is a unique calculated gravity field. If the modeled result violates the observed field, then there is something wrong. A computed gravity effect that matches the observed gravity indicates that the geologic model is at least a possible correct solution.

Inverse gravity modeling usually employs the residual gravity field (with a geologically derived "regional" removed), and certain known parts of the geologic model, as input. In inversion programs, we generally use an iterative technique to calculate the unknown parts of the geologic model, such that the calculated effect of the geologic model on the last iteration closely matches the input residual gravity. This approach is ambiguous, however, in that more than one solution can fit the residual gravity field. Typically, the following data serve as input for an inverse gravity modeling program: residual gravity; density distribution as a function of X, Y and Z; and an initial geologic model, usually defined by either an upper or lower surface. Inverse modeling programs use iteration to determine the "free" model parameters (usually the unspecified upper or lower surface).

Because the modeling process is inherently ambiguous, geologic constraints are very useful for assisting in the determination of reasonable solutions. It is possible, however, to assume parameters for an inverse gravity modeling program that do not allow convergence to a solution. For example, if we assume a density contrast that is too low, we may not obtain convergence to a solution. It is important to use all available information for designing the residual gravity, determining the density distribution, and identifying a reasonable starting point for the model. Inverse modeling determines 3-D or 2-D geologic structures and/or densities such that both the residual gravity and geologic constraints are precisely honored. This process is the best method for analyzing residual gravity. Only quantitative modeling, which produces a close fit between residual gravity and the computed effect of the model, can fully verify density and depth assumptions, interpret subtle anomalies, fully account for body geometry and interference between adjacent bodies, and incorporate other geophysical/geological data sets.

Figure 1 ( Regional gravity determination, Warm Springs Valley, Nevada, USA ) shows an example of inverse 2-D gravity modeling.
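Before turning to the examples in the figures, here is a hedged sketch of the simplest possible inverse step, converting a residual profile into a first-pass thickness of low-density fill with the infinite-slab approximation (not any particular commercial inversion program; the residual values and density contrast are invented):

```python
import math

G = 6.670e-11
SLAB = 2 * math.pi * G * 1000.0 * 1.0e5     # mGal per (g/cm^3 * m), about 0.042

def fill_thickness(residual_mgal, density_contrast):
    """First-pass inversion: invert the infinite-slab formula for thickness (m).
    residual_mgal is negative over low-density fill; density_contrast in g/cm^3 (negative)."""
    return residual_mgal / (SLAB * density_contrast)

# Invented residual profile (mGal) over a valley, with an assumed -0.5 g/cm^3 fill contrast.
residual = [0.0, -2.0, -6.0, -10.0, -8.0, -3.0, 0.0]
thickness = [fill_thickness(r, -0.5) for r in residual]
print([round(t) for t in thickness])         # crude depth-to-bedrock estimates in meters
```

A real inversion iterates a full 2-D or 3-D forward calculation against the residual rather than treating each station as an independent slab, but the sketch shows the basic idea of solving for the unspecified surface.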

Figure 1

The calculated gravity effect of the valley fill material (having a density contrast of -0.5 g/cm3 with the underlying higher-density rocks) shown on the cross-section fits the residual gravity. Figure 2 ( Residual gravity, Gulf of Mexico, primarily due to salt effects; note salt interpretation at A, B and C.

Figure 2

Contour interval = 0.2 mGal ) and Figure 3 ( Structure map, top of salt, Gulf of Mexico, derived from 3-D inverse gravity modeling.

Figure 3

Contour interval = 1,000 ft. Salt domes at A and B, and salt nosing ) shows a second example, this time a 3-D inverse gravity model of the top of salt from the Gulf of Mexico, that fits the residual gravity and geologic constraints. Note how subtle features on the salt dome are defined by the 3-D inverse modeling. A third example, illustrating the use of 2-D gravity modeling, is taken from the "Overthrust Belt" of the western United States (Gray & Guion, 1978).

Figure 4

Figure 4 ( Index map, Pineview field, Summit County, Utah, USA ) is a local index map showing the cross section location of Figure 5 ( Structural cross-section, Pineview field, Summit County, Utah, USA ).

Figure 5

This is a structural cross section through the Pineview Field, on the upthrown fault block of the Tunp Fault. The overridden downthrown block is Cretaceous rock, along the fault plane. Figure 6 ( Density vs.

Figure 6

formation, Pineview field ) shows a density vs. formation curve. The significant density contrast pieces to the thrust fault puzzle were each modeled separately. The density contrast pieces include the lower density Jurassic salt, which flowed into the core of the structure; the lower density Tertiary wedges, which generally thicken away from the crest of the anticline; and the higher density rocks in the core of the anticline, which are on the upthrown side of the Tunp Fault. Figure 7 ( Comparison of combined gravity effect of models with observed gravity, Pineview Field ) shows the superimposed gravity effects of all significant density contrast components compared to the residual gravity.

Figure 7

This previous example illustrates how we can use 2-D modeling to understand why particular gravity anomalies are present in structurally complex areas. This understanding of the anomaly signature from known production helps operators in the thrust belt locate additional oil and gas fields. Figure 8 ( Correlation between Bouguer gravity and significant natural gas discovery, Overthrust Belt, Wyoming, USA.

Figure 8

Contours are in units of mGal ) shows the gravity anomaly over another significant Overthrust oil and gas field (Guion et al., 1978). Figure 9 ( Reconnaissance gravity profile ) illustrates the Eagle Springs Field on the east side of Railroad Valley, Nevada, together with the USGS Bouguer gravity profile (Guion & Pearson, 1982).

Figure 9

The principal basin-bounding fault is located near the Paleozoic outcrop on the east side of Railroad Valley and forms part of the oil trapping mechanism. Gravity analysis indicated that the principal basin bounding fault on the western side of Railroad Valley is several miles east of the Paleozoic outcrop. This western basin-bounding fault formed part of the oil trapping mechanism for the Trap Springs Field, which was discovered with the assistance of the gravity data.

Gravity Stripping
Gravity stripping, the process of removing the gravity effect of a known geologic layer, involves direct gravity modeling. For the layer of interest, we first define the depths of its upper and lower surfaces, along with its density contrast distribution. We then compute the gravity effect of the layer, and subtract it from the observed gravity to produce a residual gravity with the effect of the "stripped" layer removed. Gravity stripping might prove useful, for example, in a province which has salt structures and in which pre-salt structure is of interest. If gravity stripping can be used to mathematically replace the gravity effect of the "post-salt" sediment with that of salt, then the resulting residual may correlate, to some extent, with the pre-salt structures.
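A minimal, hypothetical sketch of the stripping step, using the infinite-slab approximation in place of a full 2-D or 3-D forward calculation; the layer thicknesses, density contrast and observed values are invented:

```python
import math

G = 6.670e-11
SLAB = 2 * math.pi * G * 1000.0 * 1.0e5      # mGal per (g/cm^3 * m)

# Invented example: strip the effect of a known low-density layer whose top and base
# depths (and hence thickness) come from seismic mapping at each station.
layer_thickness = [400.0, 600.0, 900.0, 700.0, 500.0]   # m (assumed)
layer_contrast = -0.25                                   # g/cm^3 (assumed)
observed = [-3.0, -5.5, -9.0, -6.5, -4.0]                # residual gravity, mGal (assumed)

layer_effect = [SLAB * layer_contrast * t for t in layer_thickness]
stripped = [g - e for g, e in zip(observed, layer_effect)]
print([round(s, 2) for s in stripped])       # residual with the known layer's effect removed
```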

Anomalous Mass Estimates


A unique property of gravity exploration is that a given gravity anomaly is caused by a certain amount of anomalous mass or tonnage.

This is shown by Gauss's Theorem:

(1)

where

g = gravity attraction vector
ds = an infinitesimally small unit of surface area vector (direction is the outward normal to surface s)
∮s = surface integral over the closed surface s
G = universal gravity constant
M = the anomalous mass inside of the closed surface

We may calculate the anomalous mass below an observation surface from the residual anomaly, az, attributed to the mass:

(2)

We can thus determine the mass of an ore body or salt dome by integrating or summing the residual gravity, az, over the area covered by the anomaly. We can then convert this mass into an ore reserve estimate or an estimate of salt mass. In the case of an ore body (such as a massive sulfide), the anomalous mass directly relates to ore tonnage and to economic evaluations.
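A hedged numerical sketch of the anomalous mass estimate, using the standard result that the mass equals the integral of the residual anomaly over the observation plane divided by 2πG; the residual grid below is synthetic:

```python
import numpy as np

G = 6.670e-11                    # N m^2/kg^2

# Synthetic residual anomaly grid (mGal) on a 500 m spaced survey; values are invented.
dx = dy = 500.0                                        # station spacing, m
x = np.arange(-10000.0, 10000.0 + dx, dx)
y = np.arange(-10000.0, 10000.0 + dy, dy)
X, Y = np.meshgrid(x, y)
gz_mgal = 2.0 * np.exp(-(X**2 + Y**2) / (2 * 3000.0**2))   # 2 mGal peak residual (assumed)

# Gauss's theorem applied to the observation plane:  M = (1 / (2*pi*G)) * integral(gz dA)
gz_si = gz_mgal * 1.0e-5                               # mGal -> m/s^2
M = gz_si.sum() * dx * dy / (2.0 * np.pi * G)          # anomalous mass, kg
print(f"anomalous mass: {M:.2e} kg  ({M / 1000.0:.2e} metric tons)")
```

The estimate assumes the residual anomaly is fully captured and correctly separated from the regional; truncating the anomaly at the grid edges biases the mass low.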

Integration of Gravity
The quality of any gravity interpretation depends on the quantity and quality of available geologic and geophysical constraints. Such non-gravity geophysical constraints to gravity interpretations may include the following:

- magnetic basement depths, to help constrain a gravity-derived basement depth model
- local magnetic anomalies, to provide some independent confirmation of gravity-derived local basement and/or intrasedimentary structures (this constraint applies if the anomaly source is generally magnetic and is generally denser than the surrounding rocks)
- seismic data, to provide density information from interval velocities and to compare calculated gravity effects of seismic structures with observed gravity data (seismic data can also provide another source of basement depth estimates, although acoustic basement may not correlate with high density basement)
- incorporation of seismic and gravity data, possibly in the form of a few scattered seismic lines, to provide some "top of high density" depth constraint and assist in producing a more detailed "top of high density rocks" map from gravity modeling (often, gravity data will cover an entire area on a more or less uniform grid, whereas seismic control may be more sporadically located)

Rock Typing of Unknown Seismic Events

Figure 1 ( "Unknown" seismic intrusive ) depicts a typical unknown seismic event.

Figure 1

We can observe such events when acquiring seismic data in frontier areas, where little is known about local geologic features. To analyze unknown seismic events using gravity and/or magnetic data, we may use the following procedure:

1. Construct a seismic time-depth curve and construct a structure map or cross-section of the anomalous body.

2. If necessary, utilize seismic interval velocity data in areas between unknown intrusives to estimate background densities of the various principal layers of the sedimentary section.

3. Compute gravity and magnetic models of the unknown intrusives, using various density and magnetic susceptibility assumptions for the unknown intrusive. For example, such assumed cases might include the following:

Unknown Intrusive       Density        Susceptibility
Salt                    2.16 g/cm3     -0.5 x 10-6 cgs
Gabbro intrusion        2.95 g/cm3     3000 x 10-6 cgs
High pressure shale     2.40 g/cm3     0 cgs
Pinnacle reef           2.60 g/cm3     0 cgs

Figure 2 , Figure 3 , and Figure 4 show examples of salt, shale, and igneous gravity effects.

Figure 2

Figure 3

Figure 4

4. Compare the calculated and observed gravity and magnetic anomalies among the various assumed cases. Select the most reasonable fit. It may be necessary to adjust the density or magnetic susceptibility of the initial model guess to get a good fit between calculated and observed (residual) gravity or magnetic fields.

Borehole Gravity

Applications
When Smith (1950) wrote "The Case for Gravity Data from Boreholes," he correctly anticipated that the well logging applications of borehole gravity would prove more important than the use of borehole gravity data to interpret surface gravity. As a logging tool, the borehole gravity meter (BHGM) is unique among porosity tools for its deep radius of investigation and ability to log inside of casing. Other porosity measurements are derived from gamma-gamma density, neutron and velocity logs. None of the porosity tools (BHGM included) actually measure porosity. Rather, they measure quantities from which we can interpret porosity. The range of BHGM applications is defined on one extreme by density logging and on the other by remote sensing of structure. The first sometimes focuses strictly on formation and reservoir evaluation questions; the other extends to basic exploration. Figure 1 ( Example BHGM log ) is an example of both applications.

Figure 1

In this figure, the purpose of the survey was to detect carbonate porosity in a reef environment that was missed by the other logs. For this objective, the tool's useful radius of investigation is approximately 50 ft. The sharp negative density anomaly observed between 6330 ft and 6370 ft suggests porosity obscured by near-borehole effects or poor volume sampling (the zone was perforated, and produced commercial quantities of oil and gas). On the other hand, the broad discrepancy between the BHGM and gamma-gamma logs over the depth range of the logged section is typical of a structural effect; in this case, the edge of the reef complex, which is within a few hundred feet.

Density from Borehole Gravity


The underlying assumption in computing apparent density from a BHGM survey is that of an Earth model consisting of a layer cake of horizontal infinite slabs. For such a model, the exact density of any slab is given by the gravity gradient through that slab; the gradient measured at any point within the slab is constant; and the slabs above and below it have no effect on the gradient within it. Figure 1 ( Density of an infinite slab from borehole gravity ) shows the measurements that lead to a computed density for an infinite slab.

Figure 1

This simple assumption serves effectively in a majority of cases. Modeling of more complex geometry is not difficult and is routinely used in computing structural corrections to apparent density. In Figure 1 , the gravitational attraction at the top of the slab is 2πGρΔz (from the slab formula). The attraction at the bottom of the slab is exactly the opposite, so the change in gravity from the bottom to the top of the slab is

Δg = -4πGρΔz

(1)

where

Δg is the gravity difference between the bottom and the top of the slab
ρ is the density of the slab
G is the universal gravitational constant
Δz is the thickness of the slab

The sign in Equation 1 is negative because the sign conventions for g and z are positive downwards. For measurements on the real Earth, the density computation must take the free-air gradient and latitude effect into account:

Δg = (F - 4πGρ)Δz

(2)

where F is the free-air gradient. This is the equation we use to derive densities from borehole gravity measurements. When the appropriate constant values are inserted in Equation 2 (from Robbins, 1981), we obtain:

ρ = 3.68270 - 0.005248 sin²φ + 0.00000172z - 0.01192708(Δg/Δz)

where z and Δz are in meters, φ is latitude, and Δg is in µGal.
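A minimal sketch of the apparent-density computation, using the constants quoted above with Δg in microgal and depths in meters; the station pair and readings are assumed for illustration:

```python
import math

def apparent_density(dg_microgal, dz_m, depth_m, latitude_deg):
    """Apparent density (g/cm^3) between two borehole gravity stations, using the
    constants quoted above (Robbins, 1981): dg in microgal, dz and depth in meters."""
    lat = math.radians(latitude_deg)
    return (3.68270
            - 0.005248 * math.sin(lat) ** 2
            + 0.00000172 * depth_m
            - 0.01192708 * dg_microgal / dz_m)

# Hypothetical station pair: 6.096 m (20 ft) apart at about 1500 m depth, latitude 30 degrees,
# with a measured gravity increase of 656 microgal between them (all values assumed).
print(f"{apparent_density(656.0, 6.096, 1500.0, 30.0):.3f} g/cm^3")   # ~2.40 g/cm^3
```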

Advantages and Features of Borehole Gravity


One of the BHGM's great advantages as a density logging tool is that, unlike other porosity tools, it is practically unaffected by near-hole influences. Casing, poor cement bonding, rugosity, washouts and fluid invasion have a practically negligible influence on the measurement. Another advantage is the fundamental simplicity of the relationships between gravity, mass, rock volume and density. Complex geology can be easily modeled, so that the response of a range of hypothetical models can be studied and understood before undertaking a survey. The normal calculated result of a BHGM survey is apparent density, which is a simple function of the measured vertical gradient of gravity. To obtain an apparent density measurement, we measure gravity at two depths. The accuracy of the computed density depends on the accuracy of both measured differences: gravity and depth.

Operationally, BHGM surveys resemble VSP (vertical seismic profiling) surveys. The BHGM is stopped at each planned survey level for a five-to-ten-minute reading. The blocky appearance of the log reflects the station interval; the log is not continuous. BHGM measurements are taken at discrete depths, usually at intervals of 10 to 50 feet, depending on the vertical and density resolution required. While the BHGM has remarkable resolution in the measurement of density over intervals of 10 feet or more (less than 0.01 g/cm3), surveys requiring closer vertical resolution must sacrifice density resolution. Figure 1 ( Density accuracy for various levels of Δg measurement uncertainty )

Figure 1

and Figure 2 ( Density accuracy for various levels of Δz measurement uncertainty ) illustrate the BHGM measurement's sensitivity to errors in measuring changes in gravity and depth interval.

Figure 2

Figure 1 shows that over a 20-foot depth interval, measurement of Δg to an accuracy of 5 µGal will give a density accuracy of 0.01 g/cm3, provided there is no error in the depth interval, Δz. Similarly, Figure 2 shows that a 2-inch error in Δz over a 20-foot depth interval will result in a density error of 0.01 g/cm3.

BHGM Density Logging


Borehole gravity density measurements are unhindered by casing, poor hole conditions, and all but the deepest fluid invasion. The BHGM measurement samples a large volume of rock, which provides a density-porosity value that is more representative of the formation. This is especially beneficial in carbonate and fractured reservoirs. BHGM surveys have been used to find hydrocarbon-filled porosity missed by other logs in both open and cased holes. Gas-saturated sands are a particularly easy target because gas is low in density. The BHGM's wide radius of investigation has also been successfully used to determine gas-oil and oil-water contacts in reservoirs where other measurements have been ineffective. BHGM density measurements have been used to calculate hydrocarbon saturations: the larger the fluid density contrast, the larger the measured effect. Gas saturations are therefore the easiest to measure. The density differences measured by the gamma-gamma log and the BHGM can be used to calculate the difference in oil saturation between the invaded and undisturbed zones, which can in turn give an estimate of moveable hydrocarbons.

Radius of investigation is normally defined as the radius within which 80 percent of the response is generated. For the BHGM, the depth of investigation depends on the physical dimensions of the zone of density change. The thicker the zone, the greater the radius of investigation. The radius of investigation is about 2.45 times the zone thickness. For a 20-foot zone, the radius of investigation is about 50 feet.

The BHGM tool is especially suited to finding bypassed hydrocarbon production (especially gas) behind casing in old wells. Wells cased before the advent of modern porosity logs will often have a suite of old resistivity and SP logs available, and perhaps an older form of neutron porosity log. Modern neutron porosity logs can also be run through casing to help plan a BHGM log. If gas is present in the formation, the neutron log will read less than the true porosity because gas contains far less hydrogen than water or oil. A valid calculation of water saturation in a gas zone in this case requires additional porosity information, which can be provided by the BHGM tool, because the neutron log alone cannot distinguish between tight and gas-bearing sands. Figure 1 ( Cased hole BHGM with open hole Gamma Ray-Neutron log,

Figure 1

tight sand ) and Figure 2 ( Cased hole BHGM with open hole Gamma Ray-Neutron log, gas sand ) show old neutron and natural gamma logs combined with BHGM densities measured through the well casing.

Figure 2

In Figure 1 , the neutron log shows a pattern that could be interpreted as the result of an upward increase in gas saturation in the sand from 4390 to 4514 feet; to the contrary, the BHGM density log shows densities that increase towards the top of the sand to 2.55 g/cm3, indicating that the top of this zone is, in fact, tight. Figure 2 shows the opposite situation, where the BHGM log confirms that gas exists in the zone from 4230 to 4246 feet.

Remote Sensing
A practical rule of thumb for BHGM remote sensing applications is that a remote body with sufficient density contrast can be detected by the BHGM no farther from the well bore than one or two times the height of the body. Local geology, and in particular the thickness of local density units, defines the effective radius of investigation of the BHGM. A salt dome with 15,000 feet of vertical relief would have a definitive signature a few miles away. A channel sand 20 feet thick would be detectable no more than 40 feet away, unless the density contrast is very high and little other noise is present. Figure 1 ( Apparent density anomaly of a truncated slab ) illustrates the basis for this rule of thumb.

Figure 1

The linear relation between apparent density and the angle α subtended by the vertical face of a truncated semi-infinite slab is analytically exact for the example shown. For example, if the true density contrast in the remote zone were 0.08 g/cm3 and α were 90°, the apparent density anomaly would be 0.02 g/cm3. Computer modeling of BHGM measurements can help to develop relatively detailed salt-dome-flank or reef-flank model interpretations. Modeling is particularly effective where seismic data can be integrated into the modeling process; a model is sought that is consistent with both data sets. In one case, the presence of an imbricate thrust sheet was confirmed by the BHGM; the BHGM interpretation led to a sidetracked hole and an economic discovery. Figure 2 shows the computer-modeled effect of a salt dome for different well positions.

Figure 2

The sharpness of the density anomaly curve will be diagnostic of lateral offset from the well to the structure, provided there is a vertical change in density.
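As a closing sketch of the truncated-slab rule illustrated in Figure 1 of this subsection, using the linear relation implied by the example in the text (true contrast times the subtended angle divided by 360°):

```python
def apparent_density_anomaly(true_contrast, alpha_deg):
    """Apparent density anomaly seen by the BHGM opposite a truncated semi-infinite slab,
    using the linear relation with the subtended angle alpha (degrees) implied by the
    example in the text; true_contrast in g/cm^3."""
    return true_contrast * alpha_deg / 360.0

# Reproduces the example above: 0.08 g/cm^3 true contrast, alpha = 90 degrees.
print(apparent_density_anomaly(0.08, 90.0))   # 0.02 g/cm^3
```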
