
Chapter 2
Fundamentals

The Fourier transform (FT)

Consider a mono-frequency sinusoidal function, g(t), given by:


g(t) = |A0| cos(2πf0t − φ0).    (2.1)

g(t) can be completely described in terms of the following parameters:


The peak amplitude |A0|, which is the highest amplitude of g(t).
The frequency f0, which is the number of cycles per second.
The phase φ0, which is a time-shift applied to g(t) with respect to t = 0.

A time-dependent (periodic) signal, h(t), can be decomposed into an infinite series of sinusoids, each having its own peak amplitude, frequency, and phase, using the Fourier series:

h(t) = Σ_{n=0}^{∞} An cos(2πfn t − φn);    (2.2)

where An is the peak amplitude, fn = n f0 is the frequency, and φn is the phase of the nth sinusoidal component. f0 = 1/T0 is the fundamental (dominant) frequency of h(t), and T0 is the fundamental (dominant) period of h(t).

We use the forward Fourier transform to compute the peak amplitude An and phase φn at every frequency fn.

The forward Fourier transform H(f) of the time-function h(t) is given by:

H(f) = ∫_{−∞}^{+∞} h(t) e^(−i2πft) dt.    (2.3)

H(f) is generally a complex function that can be represented as:

H(f) = |H(f)| e^(iφ(f))
     = |H(f)| {cos[φ(f)] + i sin[φ(f)]}
     = Hr(f) + i Hi(f),    (2.4)

with φ(f) = tan⁻¹[Hi(f)/Hr(f)],

where Hr(f), Hi(f), |H(f)|, and φ(f) are the real part, imaginary part, amplitude spectrum, and phase spectrum of H(f), respectively.

Practically, when we carry out the forward Fourier transform, it returns the amplitude spectrum |H(f)| and phase spectrum φ(f) as functions of frequency f.

The peak amplitude An and phase φn of the nth sinusoidal component of h(t), as given in equation (2.2), can be calculated from the Fourier transform, given in equation (2.4), as:

An = |H(fn)| = √[Hr²(fn) + Hi²(fn)],
φn = φ(fn) = tan⁻¹[Hi(fn)/Hr(fn)],    (2.5)

where |H(fn)| and φ(fn) are the values of the amplitude and phase spectra at fn.

Conversely, given |H(f)| and φ(f), we can synthesize h(t) using the inverse Fourier transform:

h(t) = ∫_{−∞}^{+∞} H(f) e^(+i2πft) df,    (2.6)

where H(f) is calculated from |H(f)| and φ(f) using equation (2.4).

The algorithm used to calculate the Fourier transform numerically is the fast Fourier
transform (FFT).
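
As an added illustration (not part of the original notes), the following Python/NumPy sketch samples the sinusoid of equation (2.1), computes H(f) with the FFT, and reads off the amplitude and phase spectra of equations (2.4)-(2.5); the parameter values are arbitrary.

```python
import numpy as np

# Sample g(t) = |A0| cos(2*pi*f0*t - phi0); the parameter values are arbitrary.
dt = 0.002                            # sampling interval (s)
N = 500                               # number of samples (1 s of data)
t = np.arange(N) * dt
A0, f0, phi0 = 2.0, 25.0, np.pi / 4
g = A0 * np.cos(2 * np.pi * f0 * t - phi0)

# Forward Fourier transform (FFT of a real-valued signal).
H = np.fft.rfft(g)
f = np.fft.rfftfreq(N, dt)            # frequency axis, 0 .. Nyquist

amp = np.abs(H)                       # |H(f)|, the amplitude spectrum
phase = np.angle(H)                   # phi(f) = tan^-1(Hi/Hr), the phase spectrum

k = np.argmax(amp)                    # the spectrum peaks at f0
print(f"f = {f[k]:.0f} Hz, peak amplitude = {2 * amp[k] / N:.2f}")   # 25 Hz, 2.00
print(f"phase = {phase[k]:.2f} rad")  # -0.79; with NumPy's e^(-i2*pi*f*t) kernel the phase at f0 is -phi0

# The inverse transform (equation 2.6) reconstructs the original samples.
assert np.allclose(np.fft.irfft(H, N), g)
```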

The Fourier transform can be used to transform a function p(a) from any domain (a)
to a function P(b) in another domain (b) provided that: b = 1/a.

We usually denote a Fourier transform pair as: h(t) ↔ |H(f)|; that is, we usually calculate only the amplitude spectrum of the function. The phase spectrum of the function is calculated only if needed.

The impulse (delta) function

The impulse function δ(t) is defined as:

δ(t) = ∞ for t = 0,  and  δ(t) = 0 for t ≠ 0.    (2.7)

δ(t) ↔ 1, −∞ < f < ∞. That is, the FT of δ(t) is one for all frequencies.

δ(t) can be thought of as a function that has a very large magnitude at t = 0 and infinitesimally small duration, such that the area under the curve is one.

For sampling purposes, δ(t) can be treated as a function that has a magnitude of one at t = 0 and zero everywhere else.

Figure.
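
A quick numerical check (added here as an illustration, assuming NumPy): the FFT of a sampled impulse is indeed flat, with unit amplitude and zero phase at every frequency.

```python
import numpy as np

# A sampled impulse: one at t = 0, zero everywhere else.
delta = np.zeros(64)
delta[0] = 1.0

# Its Fourier transform has unit amplitude and zero phase at every frequency.
D = np.fft.fft(delta)
assert np.allclose(np.abs(D), 1.0)
assert np.allclose(np.angle(D), 0.0)
```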

The sinc function

The sinc function is defined as:

sinc(t) = 2Af0 [sin(2πf0t) / (2πf0t)],    (2.8)

where A and f0 are the peak amplitude and frequency of the sine function.

sinc(t) ↔ A, |f| < f0. That is, the FT of sinc(t) is constant (equal to A) for frequencies in the interval (−f0, f0) and zero elsewhere. It is a square (boxcar) function.

Figure.
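
The boxcar spectrum can also be checked numerically. The sketch below (an added illustration; A, f0, and the window length are arbitrary) samples sinc(t) over a long window and approximates its FT with the FFT; truncating the sinc to a finite window leaves small ripples near ±f0.

```python
import numpy as np

# Sample sinc(t) = 2*A*f0 * sin(2*pi*f0*t) / (2*pi*f0*t) over a long window.
A, f0, dt = 1.0, 25.0, 0.002
t = (np.arange(4096) - 2048) * dt            # roughly -4.1 s .. +4.1 s
x = 2 * A * f0 * np.sinc(2 * f0 * t)         # np.sinc(u) = sin(pi*u)/(pi*u)

# Approximate the continuous FT by FFT * dt; the amplitude spectrum should be
# close to A for |f| < f0 and close to zero elsewhere (a boxcar), apart from
# small ripples caused by truncating the sinc to a finite window.
amp = np.abs(np.fft.fft(x)) * dt
f = np.fft.fftfreq(len(x), dt)

print(amp[np.abs(f) < 0.8 * f0].mean())      # ~ 1.0 (= A inside the band)
print(amp[np.abs(f) > 1.2 * f0].mean())      # ~ 0.0 (outside the band)
```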

Convolution

The convolution of two time-dependent functions x(t) and h(t) yields the time-dependent function y(t) given by:

y(t) = x(t) * h(t) = ∫_{−∞}^{+∞} x(τ) h(t − τ) dτ,    (2.9)

where * denotes the convolution operation.

In general, convolution involves the following sequential steps:


(1) Change the time variable from t to τ for x(t) and h(t) to get x(τ) and h(τ).
(2) Fold h(τ) to get h(−τ).
(3) Shift h(−τ) by t to get h(t − τ).
(4) Multiply the overlapping values of x(τ) and h(t − τ).
(5) Add the products found in step (4) to get the value of the function y(t).
(6) Repeat steps (3) - (5) until there is no more overlap between x(τ) and h(t − τ).

However, for sampled functions x(t) and h(t), we simply fold, shift, multiply, and
add.

For sampled functions x(t) and h(t) that have Nx and Nh samples, respectively, the convolution y(t) has a number of samples Ny given by:

Ny = Nx + Nh − 1.    (2.10)

The convolution of a function x(t) with δ(t − t0) is evaluated by simply shifting x(t) so that its vertical axis is positioned at t = t0. That is: x(t) * δ(t − t0) = x(t − t0).
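
As an added illustration using the sampled functions of Appendix A (assuming NumPy), the sketch below checks the convolution result, the output length of equation (2.10), and the shifting property of the impulse.

```python
import numpy as np

x = np.array([2.0, 1.0, 0.0])            # Nx = 3 samples
h = np.array([1.0, -1.0])                # Nh = 2 samples

# Fold-shift-multiply-add convolution (equation 2.9); same numbers as Appendix A.
y = np.convolve(x, h)
print(y)                                 # [ 2. -1. -1.  0.]
print(len(y) == len(x) + len(h) - 1)     # True: Ny = Nx + Nh - 1 (equation 2.10)

# Convolving with a shifted impulse simply shifts the function: x(t) * delta(t - t0) = x(t - t0).
delta_shifted = np.array([0.0, 0.0, 1.0])        # impulse at the third sample
print(np.convolve(x, delta_shifted))             # [0. 0. 2. 1. 0.], i.e. x delayed by two samples
```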

The convolution theorem states that convolution of two functions in the time-domain
is equivalent to multiplying their Fourier transforms in the frequency domain:
y(t) = x(t) * h(t) ↔ X(f) H(f) = Y(f),    (2.11)

where Y(f) = |Ay(f)| e^(iφy(f)); |Ay(f)| and φy(f) are the amplitude and phase spectra of y(t);
|Ay(f)| = |Ax(f)| |Ah(f)|, where |Ax(f)| and |Ah(f)| are the amplitude spectra of x(t) and h(t);
φy(f) = φx(f) + φh(f), where φx(f) and φh(f) are the phase spectra of x(t) and h(t).

The inverse of the convolution theorem is also true:


x(t) h(t) ↔ X(f) * H(f).    (2.12)

Note that convolution is commutative. That is: x(t) * h(t) = h(t) * x(t). This means that it really doesn't matter whether you fold and shift x(t) or h(t).

It is usually easier and faster to perform convolution in the frequency domain for functions of long durations (i.e., Nx and Nh > 32).
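
A minimal sketch of frequency-domain convolution (an added illustration, assuming NumPy): both transforms are padded to Ny = Nx + Nh − 1 samples so that the circular FFT product reproduces the linear convolution.

```python
import numpy as np

x = np.array([2.0, 1.0, 0.0])
h = np.array([1.0, -1.0])
Ny = len(x) + len(h) - 1                     # pad to the full output length to avoid wrap-around

# Convolution theorem (equation 2.11): multiply the Fourier transforms, then invert.
Y = np.fft.fft(x, Ny) * np.fft.fft(h, Ny)
y_freq = np.fft.ifft(Y).real

print(np.allclose(y_freq, np.convolve(x, h)))    # True
```

For long operators this FFT route scales roughly as N log N rather than Nx × Nh, which is why the frequency domain is preferred for long-duration functions.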

Correlation

The correlation of two time-dependent functions x(t) and h(t) yields the time-dependent function y(t) given by:

y(t) = x(t) ⊗ h(t) = ∫_{−∞}^{+∞} x(τ) h(t + τ) dτ,    (2.13)

where ⊗ denotes the correlation operation.

In general, correlation involves the following sequential steps:

(1) Change the time variable from t to τ for x(t) and h(t) to get x(τ) and h(τ).
(2) Shift h(τ) by t to get h(t + τ).
(3) Multiply the overlapping values of x(τ) and h(t + τ).
(4) Add the products found in step (3) to get the value of the function y(t).
(5) Repeat steps (2) - (4) until there is no more overlap between x(τ) and h(t + τ).

However, for sampled functions x(t) and h(t), we simply shift, multiply, and add.

For sampled functions x(t) and h(t) that have Nx and Nh samples, respectively, the correlation y(t) has a number of samples Ny given by:

Ny = Nx + Nh − 1.    (2.14)

The correlation of a function with another, different function is called cross-correlation.

The correlation of a function with itself is called autocorrelation.

Applying the convolution theorem on the correlation yields:


x(t) ⊗ h(t) = x(t) * h(−t) ↔ X(f) H*(f),    (2.15)

where H*(f) = Hr(f) − i Hi(f) is the complex conjugate of H(f) = Hr(f) + i Hi(f).

Note that correlation is NOT commutative. That is: x(t) ⊗ h(t) ≠ h(t) ⊗ x(t). This means that, given x(t) ⊗ h(t), you have to shift h(t).
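
The sketch below (an added illustration, assuming NumPy) reproduces the cross- and auto-correlation values worked out in Appendix A and checks the time-domain form of equation (2.15); note that NumPy's lag convention may be time-reversed relative to the definition in equation (2.13).

```python
import numpy as np

x = np.array([2.0, 1.0, 0.0])
h = np.array([1.0, -1.0])

# Cross-correlation (same numbers as Appendix A).
xcorr = np.correlate(x, h, mode="full")
print(xcorr)                                          # [-2.  1.  1.  0.]

# Correlation is convolution with a time-reversed operator (equation 2.15).
print(np.allclose(xcorr, np.convolve(x, h[::-1])))    # True

# Autocorrelation is symmetric about zero lag.
print(np.correlate(x, x, mode="full"))                # [0. 2. 5. 2. 0.]
```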

The z-transform

When a continuous function x(t) is sampled, its sampled form xs(t) is given as:

xs(t) = Σ_k xk δ(t − kΔt),  k = 0, 1, 2, ...,    (2.16)

where xk is the kth sample of x(t), δ(t − kΔt) is the impulse function shifted to the position t = kΔt, and Δt is the sampling interval.

That is, when we want to sample a continuous function x(t) at a sampling interval Δt, we simply multiply it by the impulse function every Δt.

The FT of equation (2.16) is:

X(f) = Σ_k xk e^(−i2πfkΔt),  k = 0, 1, 2, ...    (2.17)

Let us introduce the variable z, such that:


z = e^(−i2πfΔt).    (2.18)

Substituting equation (2.18) into equation (2.17) produces the z-transform X(z) of the time-dependent sampled function xs(t), defined by:

X(z) = Σ_k xk z^k = x0 + x1 z + x2 z² + ... .    (2.19)

For example, the z-transform of x(t) = (1, −1/2, 2, −1) is X(z) = 1 − (1/2)z + 2z² − z³.

The convolution theorem applies to the z-transform: the z-transform of x(t) * h(t) is X(z) H(z).
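
As an added illustration (assuming NumPy), multiplying the z-transform polynomial of the example above by that of an arbitrary second sequence reproduces the convolution of the sample sequences.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# z-transform coefficients are simply the sample values (equation 2.19).
x = [1.0, -0.5, 2.0, -1.0]       # X(z) = 1 - (1/2)z + 2z^2 - z^3 (the example above)
h = [1.0, -1.0]                  # an arbitrary second sequence, H(z) = 1 - z

# Multiplying the z-transform polynomials is the same as convolving the samples.
coeffs = P.polymul(x, h)         # coefficients of X(z) H(z)
print(coeffs)                    # [ 1.  -1.5  2.5 -3.   1. ]
print(np.allclose(coeffs, np.convolve(x, h)))    # True
```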

Frequency filtering

Frequency filtering of a function x(t) means that we apply an operator g(t) to the
function so that the output is a function y(t) that contains only frequencies that lie
within our desired frequency range [fi, ff].

Filtering can be done in the time domain by convolving x(t) with g(t), or in the frequency domain by multiplying their amplitude spectra (the convolution theorem).

The desired amplitude spectrum of G(f) is a square function that is unity in the range
[fi, ff], and zero elsewhere.

Because the FT exists only for continuous functions and G(f) is discontinuous at the
edges, we must multiply G(f) by a smoothing function (a taper) S(f) to eliminate this
problem.

Furthermore, sharp slopes in G(f) cause the output y(t) and Y(f) to be oscillatory because of the Gibbs phenomenon. Therefore, relatively gentle slopes are used at the edges of the square function.

With these modifications, the final shape of G(f) will be a trapezoid with smooth
edges.

The trapezoid G(f) will have a gentle slope from fi − Δfi to fi, unity between fi and ff, a gentle slope from ff to ff + Δff, and zero elsewhere. The slope at the high-frequency end should be gentler than that at the low-frequency end (i.e., Δfi < Δff).
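
A hedged sketch of such a filter is given below (an added illustration, assuming NumPy; the corner frequencies, sampling interval, and linear tapers are arbitrary choices, not prescriptions from the text).

```python
import numpy as np

def bandpass_trapezoid(trace, dt, f1, f2, f3, f4):
    """Zero-phase band-pass: |G(f)| ramps up over [f1, f2], is unity on [f2, f3],
    and ramps down over [f3, f4]; corner frequencies in Hz, f1 < f2 < f3 < f4 < Nyquist."""
    n = len(trace)
    f = np.fft.rfftfreq(n, dt)
    # Trapezoidal amplitude spectrum with gentle (linear) slopes at both ends.
    G = np.interp(f, [0.0, f1, f2, f3, f4, f[-1]], [0.0, 0.0, 1.0, 1.0, 0.0, 0.0])
    return np.fft.irfft(np.fft.rfft(trace) * G, n)

# Example: keep roughly 10-60 Hz from a trace sampled at 2 ms (arbitrary numbers).
dt = 0.002
t = np.arange(1000) * dt
trace = np.cos(2 * np.pi * 30 * t) + np.cos(2 * np.pi * 100 * t)   # 30 Hz signal + 100 Hz "noise"
# Low-cut taper 5 Hz wide, high-cut taper 20 Hz wide (the gentler slope at the high end).
filtered = bandpass_trapezoid(trace, dt, 5.0, 10.0, 60.0, 80.0)
```

Because only the amplitude spectrum is modified, this sketch acts as a zero-phase filter; the taper widths play the role of the gentle slopes described above.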

Increasing ff alone will not improve the vertical resolution of the whole seismic trace; but increasing the width of the pass-band of the frequency filter will.

Because of absorption of high frequencies by the Earth, a constant pass-band filter might not be sufficient and a time-variant filter (TVF) might be needed.

Frequency aliasing

When sampling a continuous time-dependent signal, we must choose a sampling rate (sampling interval), Δt.

When we sample a function using a sampling rate Δt, the highest frequency that can be restored is the Nyquist frequency: fN = 1/(2Δt).

Common sampling rates used in seismic exploration are: 2 and 4 ms. Therefore, the
associated Nyquist frequencies are: 250 and 125 Hz, respectively.

Sampling in the time domain means that we multiply the continuous function h(t) by the time-sampling (comb) function, which is a series of impulses spaced at Δt.

Since sampling in the time domain involves multiplication, then, by the convolution theorem, this means that we convolve the amplitude spectrum of h(t), namely |H(f)|, with that of the comb function, which is a series of impulses spaced at Δf = 1/Δt = 2fN.

Convolution of |H(f)| with this impulse series will replicate |H(f)| at an interval of 1/Δt = 2fN.

If the sampling rate is fine so that fN > fh, where fh is the highest frequency in the
function; then, the amplitude-spectrum replicas will be totally separated and both fN
and fh can be retrieved. Therefore, no aliasing will occur.

If the sampling rate is coarse so that fN < fh; then, the amplitude-spectrum replicas
will interfere and neither fN nor fh can be retrieved. Therefore, aliasing will occur.

When the amplitude-spectrum replicas interfere, frequency aliasing occurs and the maximum retrievable frequency is the alias frequency given by: fa = |2fN − fh|.

For example, if the highest frequency of a Vibroseis signal is 150 Hz and the sampling rate is 4 ms, then fN = 125 Hz, fh = 150 Hz, and the alias frequency is: fa = |2 × 125 − 150| = 100 Hz, which means that we lost all the frequencies between 100 and 150 Hz.
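
These numbers can be reproduced with a short test (an added illustration, assuming NumPy): a 150 Hz sinusoid sampled at 4 ms shows its spectral peak at 100 Hz.

```python
import numpy as np

dt = 0.004                          # 4 ms sampling, so fN = 1/(2*dt) = 125 Hz
fh = 150.0                          # highest signal frequency, above the Nyquist frequency
t = np.arange(1000) * dt
h = np.cos(2 * np.pi * fh * t)      # a 150 Hz sinusoid sampled too coarsely

f = np.fft.rfftfreq(len(h), dt)
amp = np.abs(np.fft.rfft(h))
print(f[np.argmax(amp)])            # 100.0, the alias frequency |2*fN - fh|
```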

Practically, we want to sample the data such that we avoid aliasing. In the field, we use an anti-aliasing filter before sampling to ensure that fN > fh. The anti-aliasing filter makes fh of the analog data equal to fN/2 or 3fN/4.


Phase considerations

A wavelet is a time-domain signal that has a start and end time.

A minimum-phase wavelet has its energy concentrated at its start.

A maximum-phase wavelet has its energy concentrated at its end.

A mixed-phase wavelet has its energy concentrated between its start and end.

A zero-phase wavelet is a mixed-phase wavelet that is symmetric about t = 0, has its peak amplitude at t = 0, and has its energy concentrated about t = 0.

In practice, it is desirable to have a minimum-phase seismic wavelet.

The wavelet shape and position with respect to t = 0 can be modified by changing the
phase spectrum.

Gain Applications

Gain is a time-variant scaling of the amplitudes of the data.

Gain is usually done as a spherical divergence correction or to display the data.

(1) Spherical divergence correction:

In a single, homogeneous, isotropic layer, a seismic source produces a spherical wavefront whose energy decays as 1/r² and amplitude as 1/r, where r is the distance from the source (usually taken as the depth z). Therefore, in order to restore the amplitudes to their original values, we have to multiply them by their corresponding depths, so the gain function is:

g(t) = V t,    (2.20a)

where V is the layer velocity and t is the two-way traveltime (TWTT).


In a stack of layers, refraction at layer interfaces increases the decay of amplitudes.

If we know the RMS velocities (Vrms), then the gain function is:

g(t) = Vrms² t.    (2.20b)

If we do not know the velocities, then the gain function is:

g(t) = t^m,    (2.20c)

where m ranges from 1 to 2.
If absorption effects are considerable, an additional exponential correction can be used:

g(t) = e^(αt),    (2.21)

where α ranges from 0.01 to 1 s⁻¹.


Claerbout (1985) suggests m = 2 in equation (2.20c) to account for both
spherical divergence and absorption.
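
A minimal gain sketch follows (an added illustration, assuming NumPy; the synthetic trace, m, and α values are arbitrary), applying equations (2.20c) and (2.21).

```python
import numpy as np

def apply_gain(trace, dt, m=2.0, alpha=0.0):
    """Scale a trace by g(t) = t**m * exp(alpha*t) (equations 2.20c and 2.21).
    m = 2 follows Claerbout's suggestion for divergence plus absorption."""
    t = (np.arange(len(trace)) + 1) * dt      # start at dt to avoid scaling by zero
    return trace * t**m * np.exp(alpha * t)

# Example on an arbitrary synthetic trace whose amplitudes decay with time.
dt = 0.002
trace = np.exp(-np.arange(2000) * dt) * np.random.randn(2000)
gained = apply_gain(trace, dt, m=2.0, alpha=0.1)
```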

(2) Display:
(a) Automatic Gain Control (AGC):
The most common type of AGC is the instantaneous AGC that is
performed as follows:
(1) The absolute mean value of trace amplitudes is computed within a
specified time gate.
(2) The trace amplitudes in the time gate are divided by the absolute mean
value found in step (1).


(3) The time gate is shifted down by one sample and steps (1) and (2) are repeated.
In practice, AGC time gates of 200 to 500 ms are commonly chosen.
The RMS value of the trace amplitudes in the time gate is sometimes
chosen in step (1).
This method attempts to scale the amplitudes in the window to unity.
Example.
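One common implementation of instantaneous AGC assigns each gate's scalar to its centre sample; the sketch below (an added illustration, assuming NumPy, with a hypothetical agc helper) follows the steps above using a running mean of the absolute amplitudes.

```python
import numpy as np

def agc(trace, dt, gate_s=0.5, eps=1e-10):
    """Instantaneous AGC: divide each sample by the mean absolute amplitude in a
    gate centred on it; the gate slides down one sample at a time."""
    n_gate = max(1, int(round(gate_s / dt)))
    # Running mean of |amplitude| over the gate, computed with a boxcar convolution.
    mean_abs = np.convolve(np.abs(trace), np.ones(n_gate) / n_gate, mode="same")
    return trace / (mean_abs + eps)           # eps guards against division by zero

# Example: a 500 ms gate on a trace sampled at 2 ms (typical values from the text).
dt = 0.002
trace = np.exp(-np.arange(2000) * dt) * np.random.randn(2000)
scaled = agc(trace, dt, gate_s=0.5)
```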
(b) Trace balancing (equalization):
When the relative amplitude across a group of traces is important (e.g., for AVO), then trace balancing is required for that group of traces (e.g., a CMP gather).
Trace balancing is done in one of two ways:
(1) Individual trace balancing: Compute the RMS value using the
amplitudes in a single window of every trace in the group and scale all
the amplitudes of every trace using its respective RMS value. Note
that this does not involve a sliding window.
(2) Relative trace balancing: Compute the RMS value using the
amplitudes in a single window of a single trace and scale the
amplitudes of all traces in the group by this value.
AGC is a time-variant gain function while trace balancing is time-invariant.
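
A short sketch of the two balancing modes is given below (an added illustration, assuming NumPy; for simplicity the RMS window is taken as the full trace, and the gather is synthetic).

```python
import numpy as np

def balance_individual(traces):
    """Individual trace balancing: scale every trace by its own RMS amplitude."""
    rms = np.sqrt(np.mean(traces**2, axis=1, keepdims=True))
    return traces / rms

def balance_relative(traces, ref=0):
    """Relative trace balancing: scale all traces by the RMS of one reference trace."""
    return traces / np.sqrt(np.mean(traces[ref]**2))

# Example: a synthetic gather of 24 traces, 1000 samples each.
gather = np.random.randn(24, 1000) * np.linspace(1.0, 0.2, 24)[:, None]
eq_ind = balance_individual(gather)   # inter-trace amplitude relations are removed
eq_rel = balance_relative(gather)     # inter-trace amplitude relations are preserved
```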

AGC should only be used to display the data. Whenever possible, use the
spherical divergence correction.


Appendix A
Numerical convolution, cross-, and auto-correlations

x(t) = (2, 1, 0)
y(t) = (1, -1)

Convolution
f(t) = x(t) * y(t) = (2, −1, −1, 0) = y(t) * x(t)    (Prove!)

f(0) = (2)(1) = 2
f(1) = (2)(−1) + (1)(1) = −2 + 1 = −1
f(2) = (1)(−1) + (0)(1) = −1 + 0 = −1
f(3) = (0)(−1) = 0

Cross-correlation
f(t) = x(t) ⊗ y(t) = (−2, 1, 1, 0) ≠ y(t) ⊗ x(t)    (Prove!)

f(−1) = (2)(−1) = −2
f(0) = (2)(1) + (1)(−1) = 2 − 1 = 1
f(1) = (1)(1) + (0)(−1) = 1 + 0 = 1
f(2) = (0)(1) = 0


Auto-correlation
f(t) = x(t) ⊗ x(t) = (0, 2, 5, 2, 0)    [Note that: f(−t) = f(t)]

f(−2) = (2)(0) = 0
f(−1) = (2)(1) + (1)(0) = 2 + 0 = 2
f(0) = (2)(2) + (1)(1) + (0)(0) = 4 + 1 + 0 = 5
f(1) = (1)(2) + (0)(1) = 2 + 0 = 2
f(2) = (0)(2) = 0

Z-Transform
X(z) = (2)z⁰ + (1)z¹ + (0)z² = 2 + z
Y(z) = (1)z⁰ + (−1)z¹ = 1 − z
X(z) Y(z) = (2 + z)(1 − z) = 2 − z − z², whose coefficients reproduce the convolution result f(t) = x(t) * y(t) = (2, −1, −1, 0).
