
SPE 142497

Advanced History Matching Techniques Reviewed


R.Rwechungura, Norwegian University of Science and Technology-NTNU, M.Dadashpour, Statoil, J.Kleppe,
Norwegian University of Science and Technology-NTNU
Copyright 2011, Society of Petroleum Engineers
This paper was prepared for presentation at the SPE Middle East Oil and Gas Show and Conference held in Manama, Bahrain, 25-28 September 2011.
This paper was selected for presentation by an SPE program committee following review of information contained in an abstract submitted by the author(s). Contents of the paper have not been
reviewed by the Society of Petroleum Engineers and are subject to correction by the author(s). The material does not necessarily reflect any position of the Society of Petroleum Engineers, its
officers, or members. Electronic reproduction, distribution, or storage of any part of this paper without the written consent of the Society of Petroleum Engineers is prohibited. Permission to
reproduce in print is restricted to an abstract of not more than 300 words; illustrations may not be copied. The abstract must contain conspicuous acknowledgment of SPE copyright.

Abstract
The process of conditioning the geological or the static model to production data is typically known as history matching
(HM). The economic viability of a petroleum recovery project is greatly influenced by the reservoir production performance
under the current and future operating conditions. Therefore, evaluation of past and present reservoir performance and
forecasting of its future are essential in the reservoir management process. History matching plays a very important role
in model updating, and hence in optimum forecasting, and researchers are continually looking for new techniques, methods
and algorithms to improve it.
This paper therefore reviews HM and its advancements to date, including time-lapse seismic data integration. The paper
covers manual and automatic HM, and minimization algorithms including gradient and non-gradient methods. It reviews the
advantages and disadvantages of using one method over another. Gradient methods covered include conjugate gradient,
steepest descent, Gauss-Newton and quasi-Newton. Non-gradient methods covered include evolutionary strategies, genetic
algorithms and the Kalman filter (ensemble Kalman filter). It also addresses re-parameterization techniques including
principal component analysis (PCA) and discrete cosine transforms (DCT). The methods are evaluated using a data set based
on data from the Norne Field in the Norwegian Sea, provided by Statoil and its partners to the Center for Integrated
Operations in the Petroleum Industry (IO Center) at the Norwegian University of Science and Technology (NTNU).
Introduction
History matching is defined as the act of adjusting a model of a reservoir until it closely reproduces the past behavior of the
reservoir; this is common practice in the oil and gas industry in order to obtain reasonable future predictions. It is traditionally
performed by trial and error: reservoir engineers analyse the difference between simulated and observed values and manually
change one or a few parameters at a time in the hope of improving the match. In such an approach, reservoir parameters are
updated manually and often in two steps: the pressure match and the saturation match (Mattax and Dalton 1991, Saleri and
Toronyi 1988). The quality of this type of history matching largely depends on the engineer's experience and the available
budget. Since reservoirs are usually very heterogeneous, there are hundreds of thousands of grid blocks in a typical
reservoir simulation model, with reservoir parameters to be estimated at high resolution. Manual history matching is therefore
often not reliable for long periods and is always associated with many uncertainties, so computers are employed to
automatically vary the parameters. This procedure is called automatic history matching (semi-automatic HM) and it is an
inversion problem. Semi-automatic history matching is defined as the construction of an initial model with an initial
approximation of the reservoir parameters, which then goes through a systematic process of reducing an objective function
that represents the mismatch between observed and calculated responses by perturbing the relevant parameters (Guohua et al. 2004).
One of the first studies on history matching was done by Kruger (1961). He calculated the areal permeability distribution of
the reservoir. Jacquard and Jain (1965) developed a method to automate history matching. Chavent et al. (1973) studied
history matching in single-phase oil reservoirs. The objective was to minimize the difference between observed and calculated
pressures at the wells, with the permeability-thickness product and porosity-thickness product as adjustable parameters.


Dougherty and Khairkhah (1975) used optimal control theory for history matching a gas reservoir. Fasanino et al. (1986)
studied single-phase history matching of 2D gas reservoirs using the adjoint method in combination with geostatistical
information and the pilot point technique. Bi et al. (2000) studied the conditioning of three-dimensional stochastic channels
to pressure data in single-phase reservoirs. Wasserman et al. (1974) were among the first to use optimal control theory in
history matching multiphase simulator models. However, instead of using a multi-phase optimal control formulation, they
used an adjoint equation only for the pressure equation. Watson et al. (1979) studied history matching in two-phase
petroleum reservoirs. Li et al. (2003) studied history matching of three-dimensional, three-phase flow production data. The
objective was to minimize the mismatch in flowing wellbore pressure, producing gas oil ratio (GOR) and water oil ratio
(WOR). Figure 1 summarizes the approximate number of papers on history matching published per year from 1990 to 2010;
this indicates a growing interest, capacity and advancement in history matching and general optimization.
[Bar chart omitted: History Matching Publications 1990-2010, papers per year on a 0-100 scale]
Fig. 1 Approximate number of papers on history matching prepared each year for conferences and journals worldwide
General history matching
The main reason for performing history matching is to enable prediction of future performance of a reservoir and thus
optimization of production in terms of economy and oil and gas recovery (IOR). The HM procedure starts with the
construction of an initial reservoir model from a geological model, which is then adjusted until the simulated data matches
the historical data, and finally the adjusted model is used to simulate the future production. Common parameters involved in
the matching are pressure, water oil ratio, gas oil ratio, water gas ratio, water and gas arrival times, fluid saturation from
cores, well logs, chemical tracer test, seismic travel time, time-lapse seismic amplitude and time-lapse AVO gradient and
intercepts, etc. Generally, HM is divided into manual and automatic history matching (see Figure 2).

Figure 2. History matching (Dadashpour, 2009)


Mathematical Model: Scientists in many fields have developed mathematical models based on fundamental physical laws
and characterized by certain parameters in order to reproduce the behaviour of an actual physical system. Such a model is
known as a forward model. Using only forward modeling to solve an engineering problem is possible in many fields of
science when the required parameters for constructing a model that mimics the system's behaviour can be measured directly.
This is not, however, the case for many applications related to the earth sciences, in which scientists deal with


systems that are not easily accessible (e.g. a subsurface reservoir). The properties of such systems can only be measured
indirectly. In such scenarios forward modeling is used only as a small part of inverse theory for the estimation of
unknown parameters. The mathematical model (forward model) required for the estimation of unknown parameters in this
research consists of two components (Figure 3):
A reservoir simulator to model the flow of fluid through porous media,
A rock physics model to compute seismic responses (petro-elastic and forward seismic model).

Figure 3. The forward model (Dadashpour, 2009)

The main purpose of the mathematical model is to predict the behaviour of the system with a reasonable accuracy under
various conditions. The following fundamental laws and models are relevant to the dynamics and seismic responses of the
reservoir:
Mass conservation law,
Darcy's law,
Equation of state,
Relative permeability and capillary pressure relationships,
Gassmann equation,
Hertz-Mindlin model,
Wood's law,
1D seismic reflectivity modeling (matrix propagation techniques),
Fourier transforms.
The mathematical model is built by combining these laws and results in a system of differential equations and matrices.
The objective function: The definition of the objective function depends on the available observed variables; it is defined as
the amount of discrepancy between observation data, such as seismic surveys, reservoir production historical data and field
pressures, and the simulator response for a given set of parameters. For its composition it is necessary to know the distance
between the simulated curves and the observed data for each parameter included in the process. There are three different
well-known formulas for calculating the objective function; in standard form they can be written as

O_1(m) = [d_obs - d_cal(m)]^T W [d_obs - d_cal(m)],
O_2(m) = [d_obs - d_cal(m)]^T C_d^{-1} [d_obs - d_cal(m)],
O_3(m) = [d_obs - d_cal(m)]^T C_d^{-1} [d_obs - d_cal(m)] + β [m - m_prior]^T C^{-1} [m - m_prior]

Where dobs represents the observed data, dcal the response of the system as predicted by the forward modelling, and w is a
diagonal matrix that assigns individual weights to each measurement. The weights for each data type and each well are
assigned as a function of the number of available data points in a set and of the uncertainty associated with each type of
measurement. The factor β is a weighting factor which expresses the relative strength of our belief in the initial model, and Cd
is the covariance matrix of the data; Cd provides information about the correlation among the data. C is the covariance
matrix of the parameters of the mathematical model. In our approach, we use a more simplified weighted least-squares
objective function, which is commonly used for assisted history matching applications. As correlations between observed data
are difficult to evaluate, only the diagonal terms are often accounted for in the covariance matrices. When matching
production and seismic data together without a priori information, this simplified form can be expressed as (Roggero et al. 2007)

F = (1/2) Σ_{j=1..nseries} Σ_{i=1..ntime_j} w_ji [(d_obs,ji - d_cal,ji)/σ_d,ji]^2 + (1/2) Σ_{k=1..ndata} Σ_{l=1..nvalues} w_kl [(S_obs,kl - S_cal,kl)/σ_S,kl]^2

where: nseries is the number of production data series to be matched, ntime_j is the number of measurement times for series j,
σ_d and σ_S are the standard deviations of the production and seismic data errors respectively, ndata is the number of seismic
data series to be matched, nvalues is the number of observed seismic data values, S_obs and S_cal are the observed and
computed seismic data values respectively, and w are weighting coefficients.
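As a concrete illustration, the simplified two-term weighted least-squares objective can be sketched in a few lines of Python; the function and variable names here are illustrative, not from the original study:

```python
import numpy as np

def mismatch(obs_series, cal_series, sigmas, weights):
    """One mismatch term: half the sum over data series of
    w_j * sum_i ((obs_i - cal_i) / sigma_i)^2."""
    total = 0.0
    for d_obs, d_cal, sigma, w in zip(obs_series, cal_series, sigmas, weights):
        r = (np.asarray(d_obs) - np.asarray(d_cal)) / sigma
        total += w * np.sum(r ** 2)
    return 0.5 * total

def objective(prod_obs, prod_cal, prod_sigma, prod_w,
              seis_obs, seis_cal, seis_sigma, seis_w):
    # production-data term plus seismic-data term, as in the combined formulation
    return (mismatch(prod_obs, prod_cal, prod_sigma, prod_w)
            + mismatch(seis_obs, seis_cal, seis_sigma, seis_w))
```

A perfect match gives an objective of zero, while larger standard deviations down-weight noisier measurements.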
History matching usually requires numerous iteration runs that make this procedure very costly. It is not only difficult to
solve but it is also a non-unique inverse problem leading to non-unique predictions which make business decisions difficult.
Some of these decisions include development drilling, facility upgrades, workover schedules, stimulation, water flooding and
installing artificial lifts. It is well known that the quality of a business decision depends largely on the quality of the history
matching. One useful application of history matching is to estimate uncertain parameters such as saturation, pressure,
permeability and porosity from indirect measurements, which is called parameter estimation. This procedure traditionally starts
with an initial guess of the model parameters, and the process is iteratively advanced until the best fit is obtained between the
calculated and observed data. In this study the optimization variables are porosity, permeability and facies, and the output
variables or observations are production and time-lapse seismic data. The common algorithm for an inversion process is
shown in Figure 4.
1. Enter the initial guess for the model parameters,
2. Compute the response of the system through forward modeling,
3. Compute the objective function,
4. Update the parameters by minimizing the objective function using a minimization algorithm,
5. If the value of the objective function is not acceptable, return to step 2.

Figure 4. Common algorithm for the inversion process. (Dadashpour, 2009)
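The loop in Figure 4 can be sketched as follows; the linear forward model, the finite-difference steepest-descent update and the step size are purely illustrative stand-ins for a reservoir simulator and a proper minimization algorithm:

```python
import numpy as np

# Illustrative linear forward model standing in for a reservoir simulator.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

def forward_model(m):
    # step 2: compute the system response for parameters m
    return A @ m

def objective(m, d_obs, w):
    # step 3: weighted least-squares mismatch
    r = d_obs - forward_model(m)
    return float(r @ (w * r))

def invert(m0, d_obs, w, tol=1e-8, max_iter=500, h=1e-6, step=0.05):
    """Steps 1-5 of Figure 4 with a finite-difference steepest-descent update."""
    m = np.asarray(m0, dtype=float)       # step 1: initial guess
    f = objective(m, d_obs, w)
    for _ in range(max_iter):
        if f < tol:                       # step 5: stop when mismatch is acceptable
            break
        # step 4: approximate the gradient by forward differences and descend
        grad = np.array([(objective(m + h * e, d_obs, w) - f) / h
                         for e in np.eye(len(m))])
        m = m - step * grad
        f = objective(m, d_obs, w)
    return m, f
```

With synthetic observations d_obs = forward_model([1.0, 2.0]), the loop recovers parameters close to [1, 2] from a zero starting guess.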


Time Lapse Seismic Data Integration
Time-lapse seismic is defined as repeated seismic surveys such as two- or three-dimensional (2D/3D) conventional seismic
data, repeated four-component (4C) seismic, VSP and cross-well seismic; these data constitute a set of dynamic data acquired
under the same conditions at different times. The first survey is usually called the base-line survey and the repeated survey is called
the monitor survey. Ideally, the base survey is a 3D data set acquired before production from the stated field and a monitor
survey is acquired after some time of production. Time-lapse seismic is now an efficient tool to improve drainage and total


recovery: monitoring contacts and detecting partially flushed zones, terminating marginal wells and preventing wells from being
drilled into a flushed zone, complementing the infill drilling of producers and injectors, and, more recently, parameter
estimation (Landrø 2005). Seismic reflection data from one seismic survey are sensitive to variations in the source and
acoustic properties of the reservoir. These variations are a function of temperature, compaction, fluid saturation and reservoir
pressure. Seismic images are sensitive to spatial contrasts in two distinct types of reservoir properties:

Time independent static geological properties such as lithology, porosity and shale content,
Time-varying dynamic fluid-flow properties such as fluid saturation, pore pressure and temperature.

Given that a single 3D seismic survey represents a single snapshot in time of a reservoir, the static geology and dynamic
fluid flow contributions to the seismic image are non-uniquely coupled and therefore difficult to separate. For example, it
may be impossible to distinguish an oil-water contact from a horizontal depositional boundary in a single seismic image.
However, with 4D seismic surveys, examining the difference between time-lapse 3D seismic images allows the time
independent geologic contributions to cancel, thus resulting in a direct image of the time-varying changes caused by the
reservoir fluid flow. The potential applications of 4D seismic include mapping of fluid contacts with time, estimating
pressure compartmentalization and fault seal, and locating bypassed oil.
Repeatability: The meaning of repeatability in time-lapse seismic is not that everything should be similar in both the base
and monitor survey. The most important purpose of time-lapse seismic is to emphasize the seismic response changes due to
production or injection changes. Anything else that affects seismic responses is called non-repeatable noise. Success for a
time-lapse seismic analysis requires a reduction in the effect of noise and an increase in the effect of production and injection.
A repeatability analysis is defined as a time-lapse seismic analysis in a reservoir using, as far as possible, the same positions
and similar properties, while at the same time conducting a zero-time repeatability study (Porter-Hirche and
Hirche, 1998). The idea is that the baseline survey and the monitor survey are both shot on the same day. This can be regarded
as a zero-lapse seismic survey in terms of the production time scale. Using the same acquisition geometry, seismic crew and
source type in near-ideal conditions, any changes observed in the difference between the baseline and monitor surveys are
most probably due to uncontrollable noise.
Some of the main non-repeatable noises are due to source type, water layer, source and receiver positions and variations,
number of shots, receivers and shot intervals, system variations (recording instruments, processing algorithms), noises
(weather, rig noise, other vessels) and geology changes. Some of these items are controllable and some of them are
impossible to control. Some of these non-controllable noises are proportional to the non-repeatability. This noise level can be
regarded as a noise threshold. In the ideal scenario a repeatability analysis quantifies this non-controllable noise, and if the
non-repeatable noise is below this threshold, then a successful time-lapse seismic survey can be expected in that area.
Integration: An emerging parameter estimation strategy is to combine time-lapse seismic with traditional flow modeling in a
process referred to as parameter estimation using 4D seismic data, or seismic history matching. These data add a new
dimension to the parameter estimation problem since they contain information about fluid movement and pressure changes
between and beyond the wells. Most of the reservoir parameters used in a flow simulator for describing the behaviour of the
reservoir come from borehole measurements such as rock cores, cuttings, reservoir engineering pressure data, well test data
and well logging data. A major problem in using these data is that they all represent only a small part of the
reservoir, and correlating them to the entire reservoir presents a huge challenge.
The main advantage of using time-lapse seismic data is that they do not need any correlation, which means they are
representative of an entire reservoir. The results are, however, associated with some errors related to the repeatability of data
acquisition, data processing sequences, a lack of understanding of rock physics, and errors in the up-scaling and cross-scaling
of seismic and simulation data. The recent use of seismic data in reservoir management adds a new and very effective
dimension to history matching. The dynamic seismic data depend on the movement of fluids and the changes in pressures
and saturations between and beyond the wells. Two main approaches are involved in time-lapse seismic application.
The first approach is based on rock physics, which is a function of the reservoir condition and other reservoir properties that
consequently affect velocity and attenuation. These properties are divided into two parts with regard to the elastic and inelastic
properties of the rocks. If the rock is isotropic and homogeneous, the elastic properties comprise two constant parameters, the
bulk and shear moduli, which are both functions of rock type, pore fluids and pore geometry and thus will all vary with the
reservoir condition. Moreover, cracks and pore fluids, which also may vary with the reservoir condition, determine the
inelastic rock properties, but this variation is much larger than the elastic variation.
The second approach is based on selecting seismic attributes that are sensitive to velocity changes and attenuation due to
changes in reservoir conditions. For example, in a rock that is not very stiff and is exposed to a water imbibition process, Vp
increases and Vs decreases.


Figure 5. Time lapse data integration in history matching (adopted from Waggoner, 2009)
As shown in Fig. 5, seismic history matching (SHM, shown on the right side) is directly analogous to conventional
production data history matching (shown on the left side). The objective is to update the reservoir model to improve the match
to both production data (on the left) and seismic data (on the right). The seismic data component of the objective function is
calculated from the difference between the synthetic attribute (center) and the corresponding seismic attribute (right).
Typically, the attribute used will be AI (acoustic impedance) or an amplitude attribute, but there are cases where a saturation
attribute may be appropriate (Fig. 6).

Figure 6. Conditioning reservoir simulation models to 4D seismic data (Cheng, 2009)


Before using time-lapse seismic data for history matching, the following issues should be assessed (Kabir and Young 2001):
The time-lapse seismic data must be reliable. This means that the difference over the non-producing zone should be small
compared to a detectable signal.
The petrophysical model must reflect a reasonable change compared to reality. Hence, petro-elastic modeling (PEM)
has to be designed for each field. PEM makes an effective link between the fluid and rock properties and the elastic
and seismic properties.
A completely automatic process for the estimation of parameters is very difficult in practice; computer-aided parameter
estimation is an intermediate approach. The main objective is to automate manual tasks like simulation model modifications,
running simulations, comparison of observed and simulated data, etc. The choice of properties to be changed, for example,
remains the responsibility of the professional. The main purpose of computer-aided parameter estimation is to decrease the
human time associated with the process. In these types of approaches, a computer program is used to change some parameters
in order to minimize the objective function.


The common steps for solving this type of problem are not specific to this work, but can be used
for any general problem, and are:
o Construct a mathematical model based on the physics of the problem that is capable of reproducing
the system performance,
o Define an objective or error function which measures the misfit between the observed system
performance and the behavior predicted by the model,
o Apply a minimization algorithm capable of finding the model parameters that result in a minimum
value of the objective function.
The reservoir engineer's vision is that 4D seismic will become a routine component of reservoir management best practice. We
are seeing 4D seismic projects started in many areas, and more companies are getting involved, for example Statoil, Shell, BP
and Petrobras. For 4D seismic to become routine it must be integrated with the tools used by the asset team, namely
production analysis, reservoir simulation, and history matching. By using the insight gained from 4D seismic observations
within the reservoir, reservoir models can be built that will only need slight modification when new data arrive, instead of
being rebuilt every few years. This will allow asset teams to be more efficient and effective in managing reservoirs.
Inverse Problem Challenges
History matching is an inverse problem that can be regarded as an optimization routine; the main inverse problem challenges
are based on grid description issues namely, too many possible solutions which lead to non-uniqueness (good history match,
but incorrect predictions), geological realism (preserving geological continuity), and the extensive computation time required
for optimization. Computation time (cost) becomes important when we encounter problems with a large number of unknown
parameters and will be even more critical when an estimation of gradients is needed. Several methods have been introduced
for speeding up the optimization process. One efficient way to accomplish this for some optimization algorithms is to
introduce a parallel or distributed computing framework (Ouenes et al. 1995, Schiozer and Sousa 1997, Schulze-Riegert et al.
2003). Parallelization is common in global optimization algorithms but can also be effective for computing gradients. The
gradient of the data mismatch can also be calculated by means of adjoints (Ruijian et al. 2003, Dong and Oliver
2005 and 2008). This can significantly reduce the total time required for optimization, but such algorithms cannot be
parallelized and require explicit knowledge of the simulator source code. This problem becomes more evident when time-lapse
seismic computations are added to the forward model.
History Matching with EnKF: Ensemble-based techniques have recently become popular within the petroleum
industry (Chen and Oliver 2010, Gu and Oliver 2004, and Evensen et al. 2003). The ensemble Kalman filter (EnKF) method
is a Monte Carlo implementation of the Kalman filter in which the mean of an ensemble of realizations provides the best
estimate of the population mean and the ensemble itself provides an empirical estimate of the probability density. The
correlation between reservoir responses (such as water cut and rate) and reservoir variables (such as permeability and porosity)
can be estimated from the ensemble, while an estimate of the uncertainty in future reservoir performance can also be obtained
from the ensemble. The historic view of the Kalman filtering method:
Kalman filter (Kalman, 1960): propagation and update of the state error covariance and mean for a linear stochastic system,
Square root filter (Potter, 1963): propagation and update of the state error covariance in square-root form,
Extended Kalman filter (Smith et al., 1962): propagation of the state error covariance with a linearised model,
Ensemble Kalman filter (Evensen, 1994; Burgers et al., 1998): Monte Carlo approximation of the state error covariance and
its update; propagation of the state error covariance and mean by ensemble integration,
Ensemble square root filter (Anderson 2001; Bishop et al. 2001; Whitaker and Hamill 2002; also Pham 2001): deterministic
representation and update of the state error covariance in ensemble form.
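A minimal sketch of one EnKF analysis (update) step, assuming the standard perturbed-observation formulation; the array names and the diagonal data-error covariance are illustrative, not tied to any specific simulator:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(M, D_obs, d_obs, Cd_diag):
    """One EnKF analysis step (sketch).

    M       : (Nm, Ne) ensemble of model states (e.g. log-permeability per cell)
    D_obs   : (Nd, Ne) predicted data for each ensemble member
    d_obs   : (Nd,)    observed data
    Cd_diag : (Nd,)    observed-data error variances
    """
    Ne = M.shape[1]
    # ensemble anomalies (deviations from the ensemble mean)
    A = M - M.mean(axis=1, keepdims=True)
    Y = D_obs - D_obs.mean(axis=1, keepdims=True)
    # cross-covariance C_MD and data covariance C_DD estimated from the ensemble
    C_MD = A @ Y.T / (Ne - 1)
    C_DD = Y @ Y.T / (Ne - 1) + np.diag(Cd_diag)
    K = C_MD @ np.linalg.inv(C_DD)            # Kalman gain
    # perturb observations so the updated ensemble keeps the correct spread
    D_pert = d_obs[:, None] + rng.normal(0.0, np.sqrt(Cd_diag)[:, None],
                                         D_obs.shape)
    return M + K @ (D_pert - D_obs)
```

Applied to a prior ensemble with a linear observation operator, the updated ensemble mean moves toward the observed values in proportion to the estimated covariances.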
Among the recent advancements in these kinds of methods is that of Jafarpour and McLaughlin (2007), who combined the
ensemble Kalman filter (EnKF) with the discrete cosine transform (DCT), Figure 7.


Figure 7: Ensemble Kalman Filter + DCT state vectors (Jafarpour, 2009)


In a recent review paper by Oliver and Chen (2010), EnKF is further explored (section 6). The ensemble Kalman filter and its
variants have proven to be very successful when history matching large and fairly complex problems. The EnKF method is
easily adapted to additional data types and model variables. Of all the methods, EnKF currently comes closest to being a
history matching solution for realistic problems. Oliver also summarized field case studies on history matching using EnKF.
Peters et al. (2010) summarized a recent benchmark study on closed-loop optimization, where different history matching
methods (many of which were discussed in section 4) and optimization methods were tested on Brugge, a synthetic field, with
the objective of maximizing net present value. The three best results were all obtained by groups using EnKF for data
assimilation.
Reparameterization: Another method for speeding up optimization is the reparameterization of the reservoir to reduce the
number of parameters. In general there are a number of parameters that are determined by the data and a number that are
determined by the prior information. Determination of the optimal parameterization (the one that uses the number of
parameters that can be determined by the data) generally requires the computation of the sensitivity of data to model variables
and then a decomposition of the sensitivity matrix or some related quantity. This reparameterization can be done by defining
pilot points (Marsily et al. 1984, Bissell et al. 1996, Arenas et al. 2001) and then reducing the parameter space by kriging.
Reparameterization can also be done by spatial zonation (simple zonation or adaptive zonation), transform domain methods
(mathematical transforms), or by an object/pattern-search-based approach.

Figure 8. Simple and adaptive zonation (Jahns, 1966; Grimstad 2003, Aanonsen 2005)
Gradzone analysis (Harb 2004) avoids the calculation of sensitivities at all grid nodes. Brun et al. (2001) extended the
gradzone analysis method to consider a non-diagonal prior covariance matrix, the advantage of which is that the
parameter selection is independent of the weights. In these sorts of methods the number of parameters can be reduced by
reparameterizing the model in terms of the eigenvectors corresponding to the largest eigenvalues of the prior covariance
matrix, using vectors of sensitivity coefficients (derivatives of pressures at observation points with respect to model


parameters), or Bayesian estimation. Reynolds et al. (1996) used a subspace method for reducing the matrix problem size and
compared it with reparameterization based on spectral decomposition. In a subspace method the search direction vector is
expanded as a linear combination of basis vectors for a lower-dimensional subspace of the model space. The order of the
matrix problem to be solved (at every iteration of the Gauss-Newton procedure) is thereby reduced to the dimension of the
subspace. It has been shown that it is also possible to make use of spatial correlations in the states (pressures, saturations) to
reduce the order of reservoir models using system-theoretical techniques, but the application of these possibilities in
optimization, data assimilation or upscaling has hardly been pursued yet. For some early attempts, see Heijn et al.
(2004), Van Doren et al. (2006), Markovinović and Jansen (2006), Gildin et al. (2006), Vakili-Ghahani et al. (2008) and
Cardoso et al. (2008). Principal component analysis (Shah et al. 1978, Scheevel and Payrazyan 2001, Dadashpour 2009) and
in general kernel principal component analysis (Sarma et al. 2007) are two other methods for reducing the parameter space in
an optimization problem. These methods are based on the singular value decomposition of a covariance matrix (or sensitivity
matrix) that incorporates previous statistical knowledge of the parameters to be estimated.

Figure 9. Basis vectors for various parameterization methods (Oliver and Chen, 2010)
Gradzone Analysis: Gradzone analysis (GZA) is a method that, based on sensitivities, identifies groups of parameters that
define a region. This requires first- and second-derivative information. The GZA procedure is as follows (Gosselin and Berg
2001):
1. Compute the Hessian matrix,
2. Make a spectral decomposition to obtain the eigenvectors,
3. Order the eigenvectors according to their eigenvalues,
4. Define a first threshold,
5. Group together all parameters with associated eigenvector components higher than that threshold (two different groups are
generated),
6. Define a next threshold,
7. Repeat steps 5 and 6.
The reduction of parameters depends on the number of defined thresholds. Each threshold defines two different groups of
parameters (Dadashpour et al, 2010).
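One possible reading of the GZA steps can be sketched in Python; the toy Hessian and the grouping on components of the dominant eigenvector are this sketch's interpretation of step 5, not the published implementation:

```python
import numpy as np

def gradzone_groups(H, thresholds):
    """Sketch of gradzone analysis (GZA): group parameters using the
    spectral decomposition of the Hessian of the objective function.
    H: (n, n) Hessian w.r.t. the n parameters; thresholds: descending cutoffs."""
    eigvals, eigvecs = np.linalg.eigh(H)          # step 2: spectral decomposition
    order = np.argsort(eigvals)[::-1]             # step 3: sort by eigenvalue
    v = eigvecs[:, order[0]]                      # dominant parameter pattern
    groups = []
    assigned = np.zeros(len(v), dtype=bool)
    for t in thresholds:                          # steps 4-7: successive thresholds
        members = np.where((np.abs(v) >= t) & ~assigned)[0]
        groups.append(members)
        assigned[members] = True
    groups.append(np.where(~assigned)[0])         # remaining parameters
    return groups
```

Each threshold splits the not-yet-assigned parameters into those above and below it, so the number of groups grows with the number of thresholds, consistent with the description above.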
Principal Component Analysis: Principal component analysis (PCA) is a multivariate statistical technique. It is based on a
vector space transform that can be used to reduce data dimensionality by a covariance analysis between factors. Depending
on the field of application, it is also called the discrete Karhunen-Loève Transform (KLT), the Hotelling Transform or Proper
Orthogonal Decomposition (POD). PCA is useful when you have obtained data on a number of variables (possibly a large
number of variables), and believe that there is some correlation between them. Because of this cross correlation between the
variables, it should be possible to reduce the observed variables into a smaller number of principal components (artificial
variables) that will account for most of the variance in the observed variables. The method generates a new set of parameters
which are called principal components. Those new parameters are linear combinations of the original parameters:
y_i = e_1i x_1 + e_2i x_2 + ... + e_ni x_n
where y_i is principal component i, e_ji is the j-th component of the eigenvector corresponding to the i-th eigenvalue, and x
represents the original variables. The PCA procedure for porosity and permeability estimation is as follows (Yadav 2006):
Obtain a set of multiple porosity and permeability realizations for the reservoir by using prior geostatistical
information (for example, by using sequential Gaussian simulation),
Estimate the covariance matrix, first for the porosity realizations,
Calculate the eigenvectors and eigenvalues of this covariance matrix,
Select the eigenvectors corresponding to the highest eigenvalues,

10

SPE 142497

Obtain a set of permeability realizations from the porosity realizations and empirical rock type knowledge (the
relation between porosity and permeability),
Repeat steps 2 4 for the permeability realizations.

Mathematically, the new set of parameters is related to the original one by:

A = W S α + B

where A is the column vector containing the new set of parameters constructed from the dominating patterns. W is the matrix of
eigenvectors; the number of rows is equal to the number of active cells and the number of columns depends on the number
of eigenvectors retained, which correspond to the highest eigenvalues. S is a normalizing square matrix that normalizes the
eigenvectors; the size of this matrix is equal to the column size of matrix W. α is the column vector containing the weights
of the dominating patterns, and B is the column vector containing the mean of the parameters at each grid block obtained from
the set of multiple realizations of the parameters. Each eigenvector represents a parameter distribution pattern. These patterns
are different from the available realizations. Only patterns associated with the highest eigenvalues are retained for analysis.
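A minimal sketch of this parameterization, omitting the normalizing matrix S for simplicity, might look as follows; the function name and interface are illustrative only:

```python
import numpy as np

def pca_parameterization(realizations, n_components):
    """Reduce a set of property realizations (one per row) to a few
    principal components. Returns the retained eigenvector matrix W,
    the mean vector B, and a function reconstructing a field from
    component weights alpha via A = W @ alpha + B (S omitted)."""
    X = np.asarray(realizations, dtype=float)
    B = X.mean(axis=0)                    # mean field over realizations
    C = np.cov(X - B, rowvar=False)       # covariance between grid cells
    eigvals, eigvecs = np.linalg.eigh(C)  # spectral decomposition
    order = np.argsort(eigvals)[::-1]     # highest eigenvalues first
    W = eigvecs[:, order[:n_components]]  # retained dominant patterns
    reconstruct = lambda alpha: W @ alpha + B
    return W, B, reconstruct
```

With all components retained the reconstruction is exact; keeping only the leading columns of W gives the reduced parameterization used in the history match.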
Discrete Cosine Transform: The discrete cosine transform (DCT) is a Fourier-based transform which was first introduced
by Ahmed et al. in 1974 for signal decorrelation. Many researchers have since used this method for other applications
such as image compression (Rao and Yip, 1990; Gonzalez and Woods, 2002) or for reducing the number of unknown
parameters in optimization problems or history matching (Jafarpour and McLaughlin, 2007; Bhark et al., 2010). The main
reason for using DCT for parameter reduction (Bhark et al., 2010) is the removal of high-frequency components of the
reservoir properties, which are insensitive to production data. This considerably reduces the number of estimated parameters.
The most common DCT definition of a 1D signal u(n) of length N has the following form:

v(k) = α(k) Σ_{n=0}^{N-1} u(n) cos[ π(2n+1)k / (2N) ],   0 ≤ k ≤ N-1

where

α(k) = √(1/N) for k = 0,   α(k) = √(2/N) for k ≠ 0.
The sequence u(n) can be a 1D permeability or porosity field, but in this paper u(n) is interpreted as an N-dimensional vector
of principal components of reservoir parameters and v(k) is an N-dimensional vector of transform coefficients, where n is an
indicator for the PCs and k refers to the wave number in the transform domain. In matrix-vector form this can be written as
v = Φu, where Φ(k,n) is an N×N transformation matrix. The original sequence can be reconstructed by applying the inverse
transform Φ⁻¹(n,k) to the transform coefficients, i.e. u = Φ⁻¹v, where Φ⁻¹ is the matrix transpose of Φ:

u(n) = Σ_{k=0}^{N-1} α(k) v(k) cos[ π(2n+1)k / (2N) ],   0 ≤ n ≤ N-1
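The forward and inverse transforms above can be written directly in code. This direct O(N²) evaluation is a sketch for illustration only; fast DCT implementations exist in standard signal-processing libraries:

```python
import numpy as np

def dct_1d(u):
    """Orthonormal 1D DCT (DCT-II) matching the definition above:
    v[k] = a(k) * sum_n u[n] * cos(pi*(2n+1)*k / (2N))."""
    N = len(u)
    n = np.arange(N)
    v = np.empty(N)
    for k in range(N):
        a = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        v[k] = a * np.sum(u * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
    return v

def idct_1d(v):
    """Inverse transform: the DCT matrix is orthogonal, so the inverse
    is simply its transpose."""
    N = len(v)
    k = np.arange(N)
    a = np.where(k == 0, np.sqrt(1.0 / N), np.sqrt(2.0 / N))
    u = np.empty(N)
    for n in range(N):
        u[n] = np.sum(a * v * np.cos(np.pi * (2 * n + 1) * k / (2 * N)))
    return u
```

Zeroing the high-wave-number coefficients of v before applying the inverse transform gives the parameter compression described above.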

Norne Case Studies


The methods discussed above are tested on a semi-synthetic reservoir from the Norne Field, offshore Norway. The model is a
two-phase (oil and water), two-dimensional black-oil reservoir that is discretized into 1014 (39×26) grid blocks, of which 739
are active. It has one producer and one water injector, located at columns 6 and 33, respectively.


Figure 10: The Norne field, 3D (a) and a 2D section (b)


The reservoir is subdivided into four different formations, Garn, Ile, Tofte, and Tilje, from top to base. Hydrocarbons are
located in the Lower- to Middle-Jurassic sandstones. Different geological and depositional environments gave rise to nine
different rock types with different properties. Since two parameters (porosity and permeability) are attached to each grid cell,
there are 1478 parameters to estimate. Vertical permeability is assumed to be equal to horizontal permeability. The objective
function in this project represents the mismatch in time-lapse seismic and production data:
F(α) = || Mp(α) - M̂p ||² + || Ms(α) - M̂s ||²

where F(α) is the objective function, α is a vector of unknown reservoir model parameters (porosity and permeability), Mp(α)
and Ms(α) are the vectors of simulated reservoir production history and time-lapse seismic differences, respectively, and
M̂p and M̂s represent the observed production data and time-lapse seismic data, respectively. In this equation the variables
are normalized and equal weights are set between the seismic and production parts. The seismic data considered in this
project are zero offset amplitudes and amplitude versus offset (AVO) gradients. The production data used are water and oil
well production rates (WWPR and WOPR, respectively) and bottom hole pressures for injector and producer (WBHPI and
WBHPP, respectively).
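A sketch of such a two-term mismatch function follows. Normalizing each term by the norm of the corresponding observed data gives the seismic and production parts equal weight; the exact normalization used in the study is not specified in detail, so this choice is an assumption of the sketch:

```python
import numpy as np

def objective(m_p_sim, m_p_obs, m_s_sim, m_s_obs):
    """Least-squares mismatch F = ||Mp - Mp_obs||^2 + ||Ms - Ms_obs||^2,
    with each term normalized by the observed-data norm (assumed
    normalization) so production and seismic parts are equally
    weighted."""
    fp = np.sum(((m_p_sim - m_p_obs) / np.linalg.norm(m_p_obs)) ** 2)
    fs = np.sum(((m_s_sim - m_s_obs) / np.linalg.norm(m_s_obs)) ** 2)
    return fp + fs
```

A perfect match of both data types returns zero; any deviation in either term increases the objective.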

Case 1: Speed-up attempt using gradzone analysis (GZA) and principal component analysis (PCA)

Figure 11: Parameterization using PCA and GZA (Dadashpour, 2009): (a) defined gradzones for porosity and permeability;
(b) real and estimated porosity distributions for the different approaches considered; (c) real and estimated permeability
distributions for the different approaches considered; (d) objective function versus the number of required forward
simulations for the parameter estimation approaches.


This work used GZA and PCA in a parallel distributed computing environment. To accomplish this task we defined three cases:
without reparameterization (WREPA), reparameterization with gradzone analysis (REGZA), and reparameterization with
principal component analysis (REPCA). All cases used both time-lapse seismic in the form of zero offset amplitude and
AVO gradient and production data to estimate porosity and permeability distribution. WREPA deals with 739 unknown
porosities and 739 unknown permeabilities. Case REGZA deals with 49 unknown porosities and 92 unknown permeabilities.
Case REPCA uses principal component analysis for reducing the number of parameters to 100 for porosity and permeability.
The REPCA approach is applied in two different ways, REPCA-STG1 and REPCA-STG2. In REPCA-STG1, the
principal components are updated simultaneously, while in REPCA-STG2 they are modified sequentially (i.e., first the
component corresponding to the highest eigenvalue, then the second one, the third one, and so forth). We conclude that all
the approaches succeeded in obtaining a cost function value acceptable for estimation purposes (Fig. 11).
Case 2: The use of discrete cosine transforms (DCT) on principal components
This case applies the DCT to the principal components rather than directly to the parameters. Fig. 12a and Fig. 12b show the
effect of using DCT with different reduction factors of the PCs on the porosity and permeability distributions. It can clearly
be seen that even with a 50% reduction of the PCs, most of the features in the model can still be represented by the new set of
reduced parameters. Five different cases are defined based on the compression factor of the PCs used to reduce the number of
principal components. Historical data are noise-free in this study. Case PCADCT70 has 60 components, i.e., the threshold is
determined by removing the 70 percent least sensitive or non-sensitive components. The other cases are defined by a similar
procedure. Figure 12(c) shows the effect of parameter reduction in speeding up the estimation procedure. Cases PCADCT30,
PCADCT50, and PCADCT70 all give the same reduction factor in the estimation parameters, but case PCADCT90, which
contains a 90% parameter reduction (i.e., 20 components), terminates after 50 equivalent simulation runs. This means that this
type of parameter reduction is effective only within a certain range of compression factors.

Figure 12: Effect of using DCT on principal components (PCs) for parameter estimation: (a) approximate representation of
the porosity field with increasing reduction factor; (b) approximate representation of the permeability field with increasing
reduction factor; (c) mismatch reduction using different compression factors.
Other studies using the same synthetic Norne model mainly use derivative-free approaches to the inverse problem;
(Echeverria et al, 2009) tested three local optimizers: the gradient-based SNOPT, Generalized Pattern Search (GPS), and
Hooke-Jeeves Direct Search (HJDS). All of them use the average of the realizations, taken from the PCA, as the initial
guess. They also considered two global optimizers: Particle Swarm Optimization (PSO) and a genetic algorithm (GA). The
distributed computing environment consists of an IBM P690, 32 CPUs (Power 4X) with 32 GB RAM under AIX version 5.3.
Using 20 processors gives a speed-up factor of 15, which is incorporated in SNOPT, GPS, PSO and GA. Fig. 13 and Fig. 14
show the inversion results. A unique aspect of the Norne case is the use of time-lapse data integration, which plays a very
significant role (Fig. 15).

Figure 13: Real and model porosity inversion by SNOPT, GPS, HJDS, PSO and GA ((a)-(e), respectively)
(Dadashpour et al, 2009).

Figure 14: Real and model permeability inversion by SNOPT, GPS, HJDS, PSO and GA ((a)-(e), respectively)
(Dadashpour et al, 2009).

Figure 15: Initial (top) and final (bottom) time-lapse seismic mismatch in the form of zero offset amplitude (left) and AVO
gradients (right) for PCADCT50 (Dadashpour et al, 2009).
General Parameter Estimation Algorithms
The problem of estimating reservoir parameters from seismic and reservoir engineering data is non-linear with respect to the
set of parameters that characterize the system, such as saturation, pressure, or even porosity and permeability. The best way
to solve these types of problems is to use an iterative algorithm that calculates the unknown parameters by successive
approximation. These methods were developed to find a local or even the global minimum of this complex function. Global
optimizers try to find a global minimum directly in the presence of large numbers of undesired local minima (Press et al.
1999). These optimizers require a vast number of function calls to find the global minimum. The main purpose of all these
algorithms is to minimize the objective function. The initial guess of the model parameters is the starting point of the
algorithm, and the process is iterated until the best fit is obtained between the calculated and observed data. Several
optimization algorithms have been developed for this purpose. These algorithms are usually classified depending on whether
or not they use the gradient of the objective function (Table 1). Some of these, such as Gauss-Newton, Quasi-Newton,
Conjugate Gradient, Sequential Quadratic Programming and Levenberg-Marquardt (Gill et al. 1984), require both or at least
one of the Jacobian or Hessian of the objective function, while others do not require the computation of gradients at all. The
gradient of the objective function is defined as:

G = ∇F(α) = [ ∂F/∂α1, ∂F/∂α2, ..., ∂F/∂αn ]ᵀ
In general, some of these non-linear regression methods require evaluation of the sensitivity coefficients, that is, the partial
derivatives of time-dependent reservoir properties (such as saturation, pressure, and time-lapse seismic amplitudes) with
respect to static reservoir properties (such as permeability, porosity, transmissibility, and the azimuth of a geospatial
variogram). The computation of these derivatives is usually not simple and requires an enormous amount of calculation. A
number of interesting approaches are available, of which the most important ones are probably the following (Oliver et al. 2008):
Direct method,
Finite difference method (perturbation method),
Adjoint method.
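Of the three, the finite-difference (perturbation) method is the simplest to sketch: each parameter is perturbed in turn and the forward model is rerun, at the cost of one extra simulation per parameter. Here `forward` is a stand-in for the reservoir or seismic simulator:

```python
import numpy as np

def fd_sensitivities(forward, theta, eps=1e-4):
    """Perturbation (finite-difference) sensitivities.
    J[i, j] = d(data_i)/d(theta_j), built with one extra forward run
    per parameter. `forward` maps a parameter vector to the vector of
    simulated data (a placeholder for the real simulator)."""
    theta = np.asarray(theta, dtype=float)
    base = np.asarray(forward(theta), dtype=float)
    J = np.empty((base.size, theta.size))
    for j in range(theta.size):
        pert = theta.copy()
        pert[j] += eps           # perturb one parameter at a time
        J[:, j] = (np.asarray(forward(pert)) - base) / eps
    return J
```

The cost scaling with the number of parameters is exactly why the adjoint method, which obtains the full gradient at roughly the cost of one extra simulation, is attractive for large models.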
Derivative-free algorithms are relatively simple to implement, do not require the computation of either gradients or
sensitivity coefficients, and are also able to reach the global minimum of the objective function. The main disadvantage is
that they require a very large number of function evaluations, which makes them very expensive from a numerical point of
view. This may become critical when the function evaluations involve the use of a numerical reservoir simulator. Some of
these algorithms search for a local optimum and some for the global optimum. Global optimizers include, for instance, the
so-called simulated annealing (SA) (Ouenes et al. 1992 and 1994, Sen et al. 1995, Vasco et al. 1997), the Genetic Algorithm
(GA) (Carter and Romero 2002), Evolutionary Algorithms (EA) (Romero et al. 2000, Schulze-Riegert et al. 2002, 2003),
Monte Carlo simulation (Lepine et al. 1998) and the Maximum Likelihood method (Carrera and Neuman 1986). Pattern
search is a class of methods for finding a local optimum that does not require information about the gradient; it works only
with the value of the objective function. Some of the most important pattern search algorithms are the Hooke-Jeeves
method, General Pattern Search, and Mesh Adaptive Direct Search.
Table 1. Common algorithms used for optimization

Local minima

Gradient-based methods:
Steepest Descent
Gauss-Newton
Conjugate Gradient
Quasi-Newton
Levenberg-Marquardt
Sequential Quadratic Programming

Non-gradient-based methods:
Nelder-Mead
Hooke-Jeeves direct search
Mesh adaptive direct search
General pattern search
Generating set search
Trust-region methods in DFO

Global minima

Evolutionary Strategy
Genetic Algorithm
Simulated Annealing
Particle Swarm Optimization
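As an illustration of a derivative-free local method from the table, a minimal Hooke-Jeeves-style direct search can be sketched as follows. This simplified version performs only exploratory coordinate moves and omits the pattern (acceleration) move of the full algorithm:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=1000):
    """Simplified Hooke-Jeeves direct search: try coordinate moves of
    size `step` around the current point, keep any improving move, and
    shrink the step when no move improves the objective. Uses only
    objective values, never gradients."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(x.size):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:          # accept the improving move
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink           # refine the mesh
            if step < tol:
                break
    return x, fx
```

Because each iteration needs only objective values, the simulator can be treated as a black box, which is exactly the property exploited by the derivative-free studies discussed above.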

Discussion
In cases when a computer cluster is available, and the number of optimization parameters is comparable to the number of
nodes in the cluster, the performance of derivative-free methods, in terms of absolute computing time, could be similar to that
of adjoint-based optimizers. Additionally, there may be noise in the measurements, so if one wants to use exact gradients,
the measured data have to be filtered before being incorporated into the cost function. Derivative-free methods have been observed
to have some robustness against noisy cost functions. Derivative-free techniques consider the different observables (e.g.,
those given by the production and seismic simulations) as black boxes. In those inverse modeling schemes it is relatively
simple to add a new observable (for example, electromagnetic data). Doing that within the simulator itself may not be as easy,
and might require significant extra programming time.
Over the past 20 years, great progress has been made in the development of history matching technology. It has become
possible to history match large reservoir models with hundreds of thousands of uncertain parameters. The types of parameters
that can be included in the history match have become more varied, and the amount of data that can be matched is much
greater than previously possible. In spite of the rapid progress in capability, however, it seems that no single method is the
best; rather, the best method for history matching depends on the parameters of the problem and the data that need to be
matched. Despite the great progress, true practicability of history matching remains challenging. It is clear that some major
sources of uncertainty still cannot be handled adequately. The need for history matching reservoirs with complex geology
remains crucial. In this study, however, we have seen increasing research activity on the added value of 4D data integration
in history matching.
Nomenclature
dp: vector of observed production history data
dS: vector of observed time-lapse seismic differences
F: objective function
G: gradient matrix
H: Hessian matrix
J: Jacobian matrix
K: permeability, mD
Mp(α): vector of simulated reservoir production histories
Ms(α): vector of simulated time-lapse seismic differences
M̂p: vector of observed production data
M̂s: vector of observed time-lapse seismic differences
φ: effective porosity
α: vector of unknown reservoir model parameters (porosity and permeability)


Acknowledgment
The authors want to thank the Norwegian University of Science and Technology (NTNU), the Research Council of Norway
(NFR), and Center for Integrated Operations in the Petroleum Industry (IO-Center) for their financial support.
The authors would also like to thank Statoil (Norne field operator) and its license partners Eni and Petoro for the release
of the Norne field data.
We also wish to thank Schlumberger-GeoQuest for the use of the simulator ECLIPSE and Alexey Stovas for preparing
the seismic forward model. Lastly, we would like to extend our gratitude to David Echeverría Ciaurri from Stanford
University for useful discussions.
Disclaimer
The views expressed in this paper are the views of the authors and do not necessarily reflect the views of Statoil and the
Norne field license partners.
References
Aanonsen S I, Aavatsmark I, Barkve T, Cominelli A, Gonard R, Gosselin O, Kolasinski M and Reme H 2003 Effect of Scale Dependent
Data Correlations in an Integrated History Matching Loop Combining Production Data and 4D seismic Data, SPE 79665, SPE
Reservoir Simulation Symposium, Houston, USA, February 3-5.

Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884-2903.
Ahmed N., Natarajan T., Rao K.R. 1974. Discrete cosine transform. IEEE Trans. Computers, 90-93.
Arenas E, Kruijsdijk C and Oldenziel T 2001 Semi-Automatic History Matching Using the Pilot Point Method Including Time-Lapse
Seismic Data, SPE 71634, SPE Annual Technical Conference and Exhibition, New Orleans, USA, September 30 - October 3.
Bi Z, Oliver D S and Reynolds A C 2000 Conditioning 3D Stochastic Channels to Pressure Data. SPE Jounal 5 474 484.
Bishop, C. H., B. Etherton, and S. J. Majumdar, 2001: Adaptive sampling with the ensemble transform Kalman filter. part I: theoretical
aspects. Mon. Wea. Rev., 129, 420-436.
Bissell R 1994 Calculating optimal parameters for history matching, 4th European Conference on the Mathematics of Oil Recovery,
Norway.
Bhark E., Jafarpour B., and Datta-Gupta A. 2010. A new adaptively scaled production data integration approach using the discrete cosine
parameterization. Paper SPE 129183 presented at the SPE Improved Oil Recovery Symposium, Tulsa, USA, 24-28 April.
Brun B, Gosselin O, and Barker J W 2001 Use of Prior Information in Gradient-Based History- Matching, SPE 66353, SPE Reservoir
Simulation Symposium, Texas, USA, February 11-14.
Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719-1724.
Cardoso M A, Durlofsky L J and Sarma P 2008 Development and application of reduced-order modelling procedures for subsurface flow
simulation, Int. J. for Numerical Methods in Engineering 77 1322-1350.
Carrera J and Neuman S P 1986 Estimation of aquifer parameters under transient and steady state conditions: 1. Maximum likelihood
method incorporating prior information, Water Resources Research, 22 199.
Chavent G, Dupuy M and Lemonnier P 1975 History matching by use of optimal theory, SPE 4627, SPE Journal, 15(1) 74-86.
Chen, Y., Oliver, D.S. 2010 Ensemble-based closed-loop optimization applied to Brugge Field (SPE 118926). SPE Reserv. Evalu. Eng.
13(1), 56-71.
Conn A R, Scheinberg K and Vicente L N 2009 Introduction to derivative-free optimization, MPSSIAM book series on optimization,
SIAM, Philadelphia.
Dadashpour M, Landrø M and Kleppe J 2008 Non-linear inversion for estimating reservoir parameters from time-lapse seismic data,
Journal of Geophysics and Engineering, 5 54-66.
Dadashpour M, Echeverria Ciaurri D, Mukerji T, Kleppe J and Landrø M 2009 Simple Zonation and Principal Component Analysis for
Speeding-up Porosity and Permeability Estimation from 4D seismic and Production Data, Extended Abstract, 71st EAGE Conference
& Exhibition incorporating SPE EUROPEC, Amsterdam, June 8-11.
Dadashpour M 2009 Reservoir Characterization Using Production Data and Time-Lapse Seismic Data, PhD thesis submitted at the
Norwegian University of Science and Technology (NTNU) department of Petroleum Engineering and Applied Geophysics.
Dadashpour M, Echeverria Ciaurri D, Mukerji T, Kleppe J and Landrø M 2010 A Derivative-Free Approach for the Estimation of Porosity
and Permeability Using Time-Lapse Seismic and Production Data, J. Geophys. Eng. 7 351
Dong Y and Oliver D S 2005 Quantitative Use of time-lapse Seismic Data for Reservoir Description, SPE 84571, SPEJ, 10 91-99.
Dong Y and Oliver D S 2008 Reservoir Simulation Model Updates via Automatic History Matching With Integration of Seismic
Impedance Change and Production Data, SPE 12550, International Petroleum Technology Conference, Kuala Lumpur, Malaysia,
December 3-5.
Dougherty E L and Khairkhah D 1975 History Matching of Gas Simulation Models Using Optimal Control Theory, SPE 5371, 45th SPE-AIME Annual California Regional Fall Meeting, Ventura, California, April 2-4.
Echeverria Ciaurri D, Santos E T F and Mukerji T 2008 Optimal Updating of Reservoir Facies Models by Integrating Seismic and
Production Data, Proceedings of the VIII International Geostatistics Congress, Santiago de Chile, Chile, December 1-5.
Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte-Carlo methods to forecast error
statistics. J. Geophys. Res., 99, 10143-10162.


Evensen G 2003 The Ensemble Kalman Filter: theoretical formulation and practical implementation, Ocean Dynamics, 53 343-367.
Fasanino G, Molinard J E, de Marsily G and Pelc V 1986 Inverse Modeling in Gas Reservoirs, SPE 15592, 61st SPE Annual Technical
Conference and Exhibition, New Orleans, Texas, October 5-8.

Gonzalez R.C., Woods R.E. 2002. Digital image processing 2nd ed., Prentice Hall, Upper Saddle River, NJ
Gosselin O, Aanonsen S I, Aavatsmarka I, Cominelli A, Gonard R, Kolasinski M, Ferdinandi F, Kovacic L, ENI and Neylon K 2003
History Matching Using Time-lapse Seismic, SPE 84464, SPE Annual Technical Conference and Exhibition, Colorado, USA, October
5-8.
Guohua G, Gaoming L and Reynolds A C 2004 A stochastic optimization algorithm for automatic history matching, SPE 90065, SPE
Annual Technical Conference and Exhibition, Houston, Texas, September 26-29.
Gildin E, Klie H, Rodriguez A, Wheeler M F and Bishop R H 2006 Development of low-order controllers for high-order reservoir models
and smart wells, SPE 102214, SPE Annual Technical Conference and Exhibition, San Antonio, USA, September 24-27.
Grimstad, A.A., Mannseth, T., Nævdal, G., Urkedal, H.: Adaptive multiscale permeability estimation. Comput. Geosci. 7(1), 1-25 (2003)
Gu Y, Oliver D S 2004 History Matching of the PUNQ-S3 Reservoir Model Using the Ensemble Kalman Filter, SPE 89942, SPE Annual
Technical Conference and Exhibition, Houston, USA, September 26-29.
Harb R 2004 History matching including 4-D seismic - An application to a field in the North Sea, Master thesis, NTNU Norway.
Heijn T, Markovinović R and Jansen J D 2004 Generation of low-order reservoir models using system-theoretical concepts, SPEJ, 9 202-218.
Jacquard P and Jain C 1965 Permeability distribution from field pressure data, Soc. Pet. Eng. Journal, 281-294.
Jafarpour B., and McLaughlin D.B. 2007. History matching with an Ensemble Kalman filter and discrete cosine parameterization. Paper
SPE 108761 presented at the SPE Annual Technical Conference and Exhibition, Anaheim, USA, 11-14 November.
Jafarpour B., Goyal V., McLaughlin D.B. and Freeman W. 2009. Exploiting transform domain sparsity in reservoir history matching.
Paper SPE 117819 presented at the SPE Reservoir Simulation Symposium, The Woodlands, Texas, February 2-4.
Jansen J D, Brouwer D R, Naevdal G and van Kruijsdijk C P J W 2005 Closed-loop reservoir management, First Break 23 43-48.
Jahns, H.O.: A rapid method for obtaining a two-dimensional reservoir description from well pressure response data. SPE J. 6(12), 315-327
(1966)
Kabir C S and Young N J 2001 Handling production-data uncertainty in history matching: The Meren reservoir case study, SPE 71621,
SPE Annual Technical Conference and Exhibition, Louisiana
Kalman, R. E., 1960: A new approach to linear filtering and prediction problems. J. Basic. Eng., 82, 35-45.
Kleppe J 2008 Reservoir simulation, lecture note 4 and 6, Norwegian University of science and technology, NTNU, Norway.
Kruger W D 1961 Determining Areal Permeability Distribution by Calculations. J. Pet. Tech. 691.
Landrø M, Solheim O A, Hilde E, Ekren B O and Strønen L K 1999 The Gullfaks 4D seismic study, Petroleum Geosciences, 5 213-226.
Landrø M 2001 Discrimination Seismic Data Acquisition and Processing. Lecture notes, Norwegian University of Science and Technology,
NTNU, Norway.
Landrø M 2005 Seismic Reservoir Monitoring, Lecture notes, Norwegian University of Science and Technology, NTNU, Norway.
Lepine O J, Bissell, R C, Aanonsen S I and Barker J W 1998 Uncertainty analysis in predictive reservoir simulation using gradient
information, SPE 48997, SPE Annual Technical Conference and Exhibition, Louisiana, USA, September 27-30.
Li R, Reynolds A C and Oliver D S 2003 History matching of three-phase flow production data, SPE Journal, 8(4) 328-340.
Mattax C C and Dalton R L 1991 Reservoir simulation, SPE 20399, Journal of Petroleum Technology, 42 692-695.
Markovinović R and Jansen J D 2006 Accelerating iterative solution methods using reduced-order models as solution predictors, Int. J. for
Numerical Methods in Engineering, 68 525-541.
Marsily G, Levedan G, Boucher M, Fasanino G 1984 Interpretation of interference tests in a well field using geostatistical techniques to fit
the permeability distribution in a reservoir model, Geostatistics for Natural Resources Characterization, 2 831.
Nocedal J and Wright S J 1999 Numerical Optimization, Berlin: Springer-Verlag.
Oliver D S 1994, Incorporation of transient pressure data into reservoir characterization, In Situ, 18 243-275.
Oliver D S, HE N and Reynolds A C 1996 Conditioning permeability fields to pressure data, 5th European Conference on the mathematics
of Oil Recovery, Leoben, Austria, September 3-6.
Oliver D S, Alberto C, Stanislas V D B and Sudip D C 2000 A gradient-based approach for history matching of both production and 4D
seismic data, 7th European Conference on the Mathematics of Oil Recovery, Baveno, Italy, September 5-8.
Oliver D S, Reynolds A C and Liu N 2008 Inverse Theory for Petroleum Reservoir Characterization and History Matching, Cambridge
University Press.
Oliver D S, Chen Y 2010 Recent progress in history matching: a review, Comput. Geosci., Springer Science+Business Media
B.V. 2010.
Ouenes A, Meunier G, Pelce V and Lhote I 1992 Enhancing gas reservoir characterization by simulated annealing method (SAM), SPE
25023, SPE European Petroleum Conference, Cannes, France, November 16-18.
Ouenes A, Weiss W and Anwar J 1995 Parallel Reservoir Automatic History Matching Using a Network of Workstations and PVM, SPE
29107, SPE Reservoir Simulation Symposium, San Antonio, Texas, USA, February 12-15.
Peters, L., Arts, R.J., Brouwer, G.K., Geel, C.R., Cullick, S., Lorentzen, R.J., Chen, Y., Dunlop, K.N.B., Vossepoel, F.C., Xu, R., Sarma,
P., Alhutali, A.H., Reynolds, A.C.: Results of the Brugge benchmark study for flooding optimization and history matching. SPE
Reserv. Evalu. Eng. 13(3), 391-405 (2010)
Porter-Hirsche J and Hirsche K 1998 Repeatability study of land data acquisition and processing for time-lapse seismic, SEG Expanded
Abstracts.
Pham, D. T., 2001: Stochastic methods for sequential data assimilation in strongly nonlinear systems. Mon. Wea. Rev., 129, 1194-1207.
Rao K.R., Yip P. 1990 Discrete cosine transform: Algorithms, advantages, applications. Academic Press, Boston.
Reynolds A C, He N, Chu L, Oliver D S 1996 Reparameterization techniques for generating reservoir descriptions conditioned to
variograms and well-test pressure data, SPEJ, 1 413 - 426.


Roggero F, Ding D Y, Berthet P, Lerat O, Cap J and Schreiber P E 2007 Matching of Production History and 4D Seismic Data:
Application to the Girassol Field, Offshore Angola, SPE 109929, SPE Annual Technical Conference and Exhibition, Anaheim,
USA, November 11-14.
Rwechungura R., Suwartadi E., Dadashpour M., Kleppe J., Foss B. 2010. The Norne Field Case: A Unique Comparative Case Study.
Paper SPE 127538 presented at the SPE Intelligent Energy Conference and Exhibition, Utrecht, Netherlands, 23-25 March.
Ruijian L, Reynolds A C, Oliver D S 2003 History Matching of Three-Phase Flow Production Data, SPE 87336, SPE Journal, 4 328-340.
Saleri N G and Toronyi R M 1988 Engineering control in reservoir simulation, Part I, SPE 18305, SPE Annual Technical Conference and
Exhibition, Houston, Texas, October 2-5.
Sarma P, Durlofsky L J, Aziz K and Chen W H 2006 Efficient real-time reservoir management using adjoint-based optimal control and
model updating, Computational Geosciences, 10 3-36.
Scheevel J R, Payrazyan K 2001 Principal Component Analysis Applied to 3D Seismic Data for Reservoir Property Estimation, SPE
Reservoir Evaluation & Engineering, 4 64-72.
Schiozer, D J, Sousa S H G 1997 Use of External Parallelization to Improve History Matching, SPE 39062, Latin American and Caribbean
Petroleum Engineering Conference, 1997, Rio de Janeiro, Brazil, August 30 - September 3.
Schulze-Riegert R W, Haase O and Nekrassov A 2003 Combined global and local optimization techniques applied to history matching,
SPE 79668, SPE Reservoir Simulation Symposium, Houston, Texas, February 3-5.
Shah P C, Gavalas G R, and Seinfeld J H 1978 Error analysis in history matching: the optimum level of parameterization, Soc. Petrol. Eng.
J., 18(6) 219-228.
Sen M K, Gupta A D, Stoffa P L, Lake L W and Pope G A 1995 Stochastic reservoir modelling using simulated annealing and genetic
algorithm, SPE Formation Evaluation, 10 49-56.
Smith, G. L., S. F. Schmidt, and L. A. McGee, 1962: Application of statistical filtering to the optimal estimation of position and velocity
on-board a circumlunar vehicle. Technical report.
Vakili-Ghahani A and Jansen J D 2008 Control relevant upscaling, SPE 113647, SPE Europec/EAGE Annual Conference and Exhibition,
Rome, Italy, June 9-12.
Van Doren J F M, Markovinović R and Jansen J D 2006 Reduced-order optimal control of water flooding using POD, Computational
Geosciences, 10 137-158.
Vasco D W, Datta-Gupta A and Long J C S 1997 Integrating field production history in stochastic reservoir characterization, SPE
Formation Evaluation, 12 149-156.
Waggoner 2009 Lecture notes on 4D Management SPE course prior to the 2009 SPE Reservoir Simulation Symposium, The Woodlands,
Houston, Texas, February 2009.
Wasserman M L, Emanuel A S and Seinfeld J H 1974 Practical Applications of Optimal Control Theory to History Matching Multiphase
Simulator Models, SPE 5020, SPE-AIME 49th Annual Fall Meeting, Houston, Texas, October 6-9.
Watson, A.T., Wade, J.G., Ewing, R.E.: Parameter and system identification for fluid flow in underground reservoirs. In: Proceedings of
the Conference, Inverse Problems and Optimal Design in Industry, Philadelphia, PA (1994)
Whitaker, J. S. and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913-1924.
Yadav S 2006 History matching Using Face-Recognition Technique Based on Principal Component Analysis, SPE 102148, SPE Annual
Technical Conference and Exhibition, Texas, September 24 - 27.

