
What Goes Into Reservoir Simulation?

The basic tool for conducting a reservoir simulation study is a simulator. The development of
this tool requires a good understanding of the physical processes occurring in reservoirs and
a high level of sophistication and maturity in advanced mathematics and computer
programming. Although simulation engineers generally do not get involved in actual software
development, the effective use of reservoir simulators requires that they at least appreciate
what goes into this development.
Like any tool, a reservoir simulator has its strengths and limitations, and it is well to keep in
mind the various assumptions that factor into its development. This is not to suggest that all
simulation engineers must be expert programmers; rather, they must be intelligent users.
Therefore, knowledge and understanding of the simulation process are necessary precursors
to a reservoir simulation study.
At first, a simulation study might look like a once-and-for-all exercise. In truth, however, it is
an evolutionary process, throughout which we continually refine our conceptual understanding
of the system. While we cannot overemphasize the importance of accurate reservoir
description in a good reservoir simulation study, we must at the same time acknowledge that
the data needed for an accurate description is seldom available; invariably, studies start out
with less than complete data. However, by carefully analyzing and interpreting the voluminous
information generated during the study, we should be able to refine and extend the input data
base. Such refinement should lead to a better understanding of the system and, ultimately, to
a better reservoir description. Of course, this requires some agility and creativity; there is no
such thing as a casual simulation engineer.
It is, therefore, apparent that there are three basic interwoven components that go into a
simulation study. These are:
The tool: reservoir simulator
The intelligent user: simulation engineer
The pertinent information: reservoir description
Figure 1 depicts the necessary interactions among the simulation engineer, the simulator and the
reservoir description.

Figure 1
The engineer is clearly the prime mover in conducting the simulation study, and must be in control of
other study components.
This control involves:

being cognizant of the simulator's limitations and the assumptions that go into its development
being able to adequately describe the reservoir
being fully conversant with the analytical techniques involved in interpreting the results.
Based on the initial results, it is not uncommon for the simulation engineer to revisit the
appropriateness of the reservoir description through concept refinement.

Why Do We Need Reservoir Simulation?


The information we obtain from a newly discovered field is scanty at best. It is also disjointed
to a certain extent, because bits and pieces of information emanate from different parts
of the field. Our first task is to integrate these pieces of information as accurately as possible
in order to construct a global picture of the system. A reservoir simulation study is the most
effective means of achieving this end. As field development progresses, more information
becomes available, enabling us to continually refine the reservoir description.

Once we establish a good level of confidence in our reservoir description, we can use the
simulator to perform a variety of numerical exercises, with the goal of optimizing field
development and operation strategies. We are often confronted with questions such as:

what is the most efficient well spacing?
what are the optimum production strategies?
where are the external boundaries located?
what are the intrinsic reservoir properties?
what is the predominant recovery mechanism?
when and how should we employ infill drilling?
when and which improved recovery technique should we implement?
These are but a few of the critical questions we may need to answer. A reservoir simulation study is the
only practical laboratory in which we can design and conduct tests to adequately address these
questions. From this perspective, reservoir simulation is a powerful screening tool.

What are the Simulation Approaches?


The complexity of the problem at hand, the amount of data available, and the study's
objectives invariably dictate the choice of reservoir simulation approach, granted that we have
already taken into account the appropriate computational environment (both in terms of
hardware and software).
Broadly classified, there are two simulation approaches we can take: analytical and numerical.

The analytical approach, as is the case in classical well test analysis, involves a great deal of assumptions; in essence, it renders an exact solution to an approximate problem.

The numerical approach, on the other hand, attempts to solve the more realistic problem with less stringent assumptions; in other words, it provides an approximate solution to an exact problem.
From here on, we use the term simulation rather loosely to refer only to the numerical approach.

The domain of interest can form another level of categorization for simulation approach and
model selection. For instance, a study may focus on a single well and its interaction with the
reservoir within its drainage area (i.e., single-well simulation in radial-cylindrical coordinate
system). The other extreme case may be the study of an entire field (field-scale simulation in
rectangular coordinate system) in which the overall performance analysis of the field is called
for. In between these two extremes comes the case where only a section of the reservoir is
targeted (window-study).

Figure 2

Figure 2 schematically represents these three simulation approaches.

What are the Basic Steps of a Simulation Study?


In general, a reservoir simulation study involves five steps:

Setting objectives
Selecting the model and approach
Gathering, collecting and preparing the input data
Planning the computer runs, in terms of history matching and/or performance prediction
Analyzing, interpreting and reporting the results


A critical step is selecting the model and the approach. We must decide at the outset how many
dimensions will be adequate. Such decisions depend on the flow dynamics involved and the amount of
detail required. There may be cases when a two-dimensional representation is sufficient to describe a
thin reservoir, whereas a three-dimensional model is unavoidable in the case of a thick reservoir. The
expected flow structure dictates the choice of the model's coordinate system. For example, in most
single well studies, we may use radial-cylindrical flow geometry. However, if a well is vertically
fractured, we should assign an elliptical-cylindrical flow geometry. We also have to decide how best to
represent physical phenomena. For instance, if compositional variation is important, we may have to
employ a compositional simulator rather than the more commonly used "black oil" model. Similarly, if
we intend to study a coalbed methane reservoir, we must use a specialized model that accounts for the
desorption process.
One of the most labor-intensive aspects of reservoir simulation study is data gathering,
collection and preparation. Oftentimes, this requires collaboration among technical personnel
with varying levels of expertise. For instance, geological and geophysical data are extremely
crucial and need to be processed in a form that is useful for reservoir description. Data will
often be sparse or incomplete. In such situations, statistics or other tools can prove quite
helpful. Because of the large volume of data being processed at this stage, and the likelihood
of internal inconsistencies in the data, the engineer must have strong organizational skills and
sound judgment.
In simulation studies, time (both the engineer's and the computer's) is of the essence. A
typical simulation study requires a large number of runs, which must be carefully orchestrated
to yield the desired information. As we make each run, we must carefully analyze the results,
and appropriately label them to avoid confusion and costly duplications. In addition, we must
avoid runs that yield no new information. It is pertinent, therefore, to take into account the
inferences from the previous runs in planning the next suite of runs.
Perhaps the most important step in a simulation study is analyzing and interpreting the
results. It is at this stage that our creative and discerning abilities are put to the test. As
tempting as it may be to do so, we should not view every number that comes out of the
computer as the absolute answer. Instead, we must always be asking questions such as,
"what if? what then? why? so what?" In this way, we bring into play our experience, common
sense, and perhaps sometimes extraneous knowledge.
A simulation study's ultimate objective is to forecast reservoir performance. If we have
selected the correct model, adequately prepared our data, conducted the appropriate
computer runs, and made good, informed analyses, we should be confident of our ability to

predict performance. Any mistakes we make in the previous steps will have a cumulative
impact on performance prediction.
We must communicate study results in an appropriate manner to other technical personnel
and to management. This should be in the form of a comprehensive technical report with
sufficient details for others to assess the study's quality.

How are Reservoir Simulators Used?


A reservoir simulator can be an effective tool for screening, analysis and design. The thought
process that goes into appropriately using a simulator for these purposes, however, is quite
involved. Figure 3 illustrates the interactions inherent in this thought process.

Figure 3

The cornerstones of a reservoir simulator are the mathematical model, laboratory

investigation (laboratory data), field observations and the computer code. In using reservoir
simulators, these cornerstones generate signals, which propagate and interact with each
other such that a continuous feedback takes place for the mutual benefit and enhancement of
all the parts. For example, laboratory investigation, field observations and the computer code
can highlight the need for improvement in the mathematical formulation. Similarly, a computer
code originating from a robust formulation, together with pertinent field observations, may
shed light on the validity of the experimental approach taken in the laboratory. This dynamic
interaction illustrates the self-enhancing nature of reservoir simulators.

What Does a Simulation Study Require?


A simulation study is a challenging and demanding task, loaded with opportunities to learn
more about the reservoir. To reap the full benefits of this powerful tool, it is imperative to
recognize the proper roles of the engineer and of the reservoir simulator. In a successful
study, neither of these can afford to dominate the other. The engineer should not demand
from the simulator what it is not meant to do, but neither should he or she become overly
dependent on the simulator. In a nutshell, the success of a simulation study hinges on a
combination of a good engineer and the right simulator.

Steps in a Simulation Study


There are five basic steps in conducting a reservoir simulation study:

setting concrete objectives for the study
selecting the proper simulation approach
preparing the input data
planning the computer runs (including the order in which they occur)
analyzing the results

Setting the Objectives


Setting objectives is the most important step in conducting a simulation study. Clearly defined
objectives help us obtain the best information at the lowest cost and in the least amount of
time. Improperly set objectives can take the study on a long, roundabout journey which leads
to nowhere.
There are a number of factors that help us define appropriate objectives. The most important
of these are data availability, the required level of detail, availability of technical support and
available resources. In setting objectives, we use all of these factors to determine how to
proceed. For example, it is unrealistic to attempt three-dimensional simulation when the
available geological data gives no information about the presence and description of the
various formation layers present in the reservoir.
In the broadest sense, when we consider all these factors, we will arrive at one of two types of
objectives. These are sufficiently distinct that they affect the entire planning process of the
simulation study. One type of objective is fact-finding, while the other is to establish an
optimization strategy.

Fact-finding involves answering questions about a system or process that is already in place. For example, a simulation study that matches well test data for the purpose of determining the damaged zone around a wellbore is a fact-finding mission.

Optimization involves developing a number of plausible scenarios for a process (e.g., waterflooding) and studying the system response in an attempt to determine the optimum scenario. In this case, we must design a suite of numerical exercises, being careful to avoid waste on exercises that may not significantly contribute toward the goal.

Choosing the Simulation Approach


In choosing the simulation approach, we need to consider three basic factors:

reservoir complexity
fluid type
scope of the study
While reservoir complexity and the scope of the study determine the simulator's dimensions and
coordinate geometry, the fluid type (together with the processes involved) dictates whether we should
use a black-oil model or a more specialized model. For example, predicting well performance in a gas
condensate reservoir will require a compositional rather than a black-oil simulator. Furthermore, if the
reservoir is thin and unlayered, it will be sufficient to use a one-dimensional radial flow geometry.
Carrying out such a study with a three-dimensional compositional simulator will require additional
computational resources whose added benefit cannot be justified. In any case, we must exercise our
judgment and ingenuity in selecting the most appropriate simulation approach.

Preparing the Input Data


Because simulation studies usually require large volumes of information from a wide range of
sources, preparing the input data can be a laborious task. However, the time spent in
ensuring that data are properly prepared is worthwhile, in that it can prevent a great deal of
headaches and waste later on in the study. Often, we discover data input errors only after a
problem surfaces during the run, which wastes both time and computing resources.
It is our responsibility to ensure internal consistency in the data. Because data come from
different sources, internal inconsistencies are not uncommon. We should resolve
inconsistencies during the data input preparation. When data inconsistencies are present,
they can lead to an ill-posed problem. Even worse, they could go undetected. With an ill-posed problem, we may be able to find the inconsistency by the failure of the simulator to run;
but in the case of buried inconsistencies, the simulator may run and yield erroneous solutions.
Pre-processing capability, particularly for the commercial codes currently available, can
facilitate data preparation. Sometimes these processors have internal checks to flag any
detected inconsistencies in the data.
While data preparation is the simulation engineer's job, input from other supporting personnel
is extremely important. If inconsistencies appear in the data, or even if some data appear
doubtful, it is imperative to resolve the problem with the help of the geologist, geophysicist
and perhaps the production engineer. In summary, there is no overemphasizing the
importance of adequate data preparation prior to making a simulation study. The payoff is
exceptionally good.

Planning the Computer Runs


Planning computer runs is deceptively simple. To understand the necessity and the
complexity of this planning, we only need to imagine a simulation study as a complex road
map where the traveler knows the point of origin and the destination (these are clear enough

from the objectives of the study). However, just as a traveler requires careful mapping out of
the route that will get him or her to the destination in the best time possible, we must carefully
map out the type and number of computer runs that will achieve the set objectives at a
minimum cost. In so doing, we must account for several factors, which are usually problem
dependent. We should consider the number of parameters to be examined, the duration of
prediction, and the type of information needed to answer the pertinent questions.
Careful planning of computer runs includes not only determining their order, but also
establishing a systematic labeling procedure for them. This is particularly important because
of the large number of runs usually required and the voluminous amount of information
invariably generated for analysis.

Analyzing the Results


When we have analyzed the results of the simulation study and made pertinent inferences
from it, we can evaluate its success. This step caps all the efforts previously discussed.
Considering the amount of effort that we expend on the simulation study up to this point, it is
tempting to become a biased arbiter of the results. On the contrary, this is the time to ask
critical questions and even ponder over the implications of the results. In other words, we
must not become easy subscribers to our solutions.
The mode of analysis and the presentation of results will depend very largely on the audience
for whom they are meant and the post-processing capability available. The graphics
capabilities currently available on most computers make this process easier and even more
inviting. It is now not uncommon to display information using three-dimensional graphics. In
addition, graphics features, such as image rotation and animation, enhance our interpretation
and inferential ability.

Reservoir Rock Properties


The essential rock properties in reservoir simulation are those that govern the rock's storage
capacity, its ability to conduct fluids, and the spatial and directional distribution of these
properties.

Porosity
Porosity is a measure of a rock's storage capacity. In reservoir simulation, we are primarily
interested in interconnected pore space. From here on, therefore, we shall understand
porosity to mean effective porosity. Effective porosity is a dimensionless quantity, defined as
the ratio of interconnected pore volume to the bulk volume. In an idealized arrangement of
grains of uniform size, the maximum porosity value is 47.64% for cubic packing, and the
minimum is 25.96% for rhombohedral packing ( Figure 1 ). However, naturally-occurring
reservoirs do not conform to these theoretical limits, due to syn- and post-depositional
processes that have taken place (in addition to non-sphericity of the grains). Their porosity
can vary widely.
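These two theoretical limits follow directly from the geometry of packed uniform spheres. As a quick check, the short Python sketch below reproduces both numbers; it assumes nothing beyond the packing fractions of the cubic and rhombohedral arrangements.

```python
import math

# Theoretical porosity of idealized packings of uniform spheres.
# Cubic packing: a sphere of radius r occupies a cube of side 2r.
def cubic_packing_porosity():
    grain_volume = (4.0 / 3.0) * math.pi      # sphere with r = 1
    bulk_volume = 2.0 ** 3                    # cube with side 2r = 2
    return 1.0 - grain_volume / bulk_volume   # = 1 - pi/6

# Rhombohedral (close) packing: packing fraction is pi / (3*sqrt(2)).
def rhombohedral_packing_porosity():
    return 1.0 - math.pi / (3.0 * math.sqrt(2.0))

print(f"cubic:        {cubic_packing_porosity():.4f}")        # 0.4764 -> 47.64 %
print(f"rhombohedral: {rhombohedral_packing_porosity():.4f}") # 0.2595 -> ~26 %
```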

Figure 1

In the flow equations used in reservoir simulation, porosity appears as one of the parameters
that scales the volume of fluids present in the reservoir at any time. During production, this
volume is depleted, and reservoir pressure drops. The higher the reservoir's porosity, the less
this pressure decline will be over time. The special case in which porosity does not appear in
the flow equation is the single-phase incompressible flow system. As we will discuss later, in
such a flow system, there is neither accumulation nor depletion, and so porosity vanishes from the equation. In
the other extreme, there are reservoirs in which porosity changes with pressure, and so
appears in the equation as a function of pressure rather than as a constant value.

Permeability
Absolute permeability is a measure of a rock's ability to transmit fluid. For a hydrocarbon
reservoir to be commercial, it must not only be porous, but also permeable. Permeability is
analogous to conductivity in heat flow. Since it is inversely related to resistance to flow, a higher
permeability reservoir experiences less pressure drop than a corresponding low permeability
reservoir. The dimension of permeability is length squared [L²], and its practical field unit is
the darcy or millidarcy. One darcy is approximately equal to 10⁻⁸ cm².
Permeability varies widely in naturally occurring reservoirs, from a fraction of a millidarcy to
several darcies. Similar to porosity, the permeability of a reservoir could be a function of
pressure. Permeability is a key parameter controlling the propagation of transients created by

conditions imposed at the well. It does not determine ultimate recovery, but rather the rate of
this recovery.

Homogeneous vs. heterogeneous systems


Homogeneous systems feature uniform spatial distribution, while heterogeneous systems
exhibit non-uniform distribution. For simplicity's sake, we often assume homogeneity in
reservoir calculations, even though many reservoirs are heterogeneous. This is where
numerical reservoir simulation becomes a very powerful tool, because it allows us to
incorporate property variation in the system. It is important that when we describe a reservoir
as homogeneous, we specify the property of reference (e.g., "this reservoir is homogeneous
with respect to porosity but heterogeneous with respect to permeability"). Although reservoir
simulation equations can accommodate property variation within the domain of interest in
significant detail, such detailed information may be unavailable, or at best, very sketchy. In
such cases, we must employ interpolation techniques and history matching exercises.

Isotropic and anisotropic systems


Some parameters used in reservoir simulation exhibit directional dependency. A reservoir
exhibits isotropic property distribution if that property has the same value regardless of the
direction in which we measure it. On the other hand, if a property's value does vary with
direction, then the reservoir is anisotropic with respect to that property. One should be careful
to note that only those properties that are not volume-based can exhibit directional
dependency. Porosity, for example, is a volume-based property by definition. It utilizes all
three dimensions, and therefore has zero degrees of freedom in terms of directional variation.
Permeability, by contrast, has the dimension of area, leaving one direction in which it can
vary. Figure 2 shows all possible permutations of isotropic, anisotropic, homogeneous and
heterogeneous systems for two-dimensional cases.

Figure 2

The existing anisotropy determines the orientation of a coordinate system's principal axes. In
most applications, reservoir simulators employ orthogonal coordinate systems, where all the
axes are mutually perpendicular. It is imperative to align these axes with the principal flow
directions, so that we may eliminate the six off-diagonal elements of the permeability tensor,
and be left with three diagonal elements in a three-dimensional system (two for a two-dimensional system). Otherwise, an incorrect representation of the system results, as shown
in Figure 3 .

Figure 3
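To illustrate why this alignment matters, the minimal Python/NumPy sketch below (with made-up tensor values) diagonalizes a symmetric two-dimensional permeability tensor; the eigenvalues are the principal permeabilities, and the eigenvectors give the grid orientation that eliminates the off-diagonal terms.

```python
import numpy as np

# Illustrative 2-D permeability tensor (md), measured along x-y axes that
# are NOT aligned with the principal flow directions, hence the
# off-diagonal terms.
k_tensor = np.array([[150.0, 40.0],
                     [ 40.0, 90.0]])

# Eigendecomposition of the symmetric tensor: eigenvalues are the
# principal permeabilities, eigenvectors the principal directions.
k_principal, axes = np.linalg.eigh(k_tensor)

print("principal permeabilities (md):", k_principal)   # [ 70. 170.]
# Angle between the measurement axes and the first principal axis:
angle = np.degrees(np.arctan2(axes[1, 0], axes[0, 0]))
print(f"grid should be rotated by about {angle:.1f} degrees")
```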

Reservoir Fluid Properties


Fluid properties, like rock properties, significantly affect fluid flow dynamics in porous media.
Unlike rock properties, however, fluid properties exhibit significant pressure dependency.
Therefore, it is often necessary in reservoir simulation to estimate these properties using
correlations and/or equations of state.

Gas properties

In calculating gas properties such as density, compressibility and formation volume factor, we
often use the real gas law as our basis. For more rigorous calculations, we might use a
modern engineering equation of state such as the Peng-Robinson equation of state.
Invariably, these calculations express density as a function of pressure and temperature.
The properties of interest in the gas flow equation are density, compressibility factor,
compressibility, formation volume factor and viscosity. Density appears in the gravity term,
and it is often neglected. The compressibility factor introduces an important non-linearity, in
that it appears in the formation volume factor. Gas viscosity is also strongly dependent on
pressure, and needs to be calculated as pressure varies spatially and temporally. Figure 4
summarizes the equations and correlations necessary for determining gas properties.

Figure 4
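As a minimal illustration of these calculations, the Python sketch below computes a gas density from the real gas law and a gas formation volume factor. The Z-factor and all input values are assumed for illustration only, since Z would normally come from a correlation or an equation of state such as Peng-Robinson.

```python
# Gas density from the real gas law, rho = P*M / (Z*R*T).  Z itself would
# normally come from a correlation or an equation of state; here it is
# simply assumed, and all inputs are illustrative.
R = 10.732                      # psia*ft3/(lbmol*degR)
P, T, Z = 3000.0, 660.0, 0.85   # psia, degR, assumed Z-factor
M = 20.0                        # lbm/lbmol, apparent molecular weight

rho_g = P * M / (Z * R * T)     # lbm/ft3
print(f"gas density: {rho_g:.2f} lbm/ft3")

# Gas formation volume factor, Bg = (Psc/Tsc) * Z*T/P, in res ft3/SCF:
Psc, Tsc = 14.7, 520.0          # standard conditions, psia and degR
Bg = (Psc / Tsc) * Z * T / P
print(f"Bg: {Bg:.5f} res ft3/SCF")
```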

Oil properties
Oil properties that appear in the governing flow equations for the oil phase are density,
compressibility, formation volume factor, viscosity and solubility of gas in oil. In the absence of
gas, these oil properties can be treated as constants, because the compressibility of gas-free
oil is very small. However, the presence of dissolved gas in oil necessitates the use of
appropriate correlations to determine the variation of these properties with pressure and
temperature. A recent review of the available correlations has been provided by McCain
(1991). We can also use modern equations of state to calculate these properties.
Theoretically, an infinite amount of gas can dissolve in oil, provided that adequate pressure is
available. Accordingly, if sufficient pressure is available, it is conceivable that there will be no free gas
(undersaturated reservoirs). If pressure is not sufficient, some of the gas will exist in the free
state (saturated reservoirs). A typical simulation calculation may traverse saturated and
undersaturated conditions. Most reservoir simulators implement variable bubble-point
algorithms to handle these situations. Figure 5 shows the qualitative variation of several of
these properties.

Figure 5
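The logic of such an algorithm can be sketched very simply. In the hypothetical Python fragment below, rs_correlation stands in for whatever solution gas-oil ratio correlation the simulator uses (e.g., one of those reviewed by McCain, 1991); the only point being illustrated is the saturated/undersaturated switch.

```python
# Minimal sketch of the saturated/undersaturated switch in a variable
# bubble-point treatment.  rs_correlation is a hypothetical stand-in for
# the simulator's Rs correlation.
def solution_gor(pressure, bubble_point, rs_correlation):
    if pressure >= bubble_point:
        # Undersaturated: all gas stays in solution; Rs is fixed at its
        # bubble-point value and no free-gas phase exists in the cell.
        return rs_correlation(bubble_point)
    else:
        # Saturated: Rs follows the correlation at the current pressure,
        # and the excess gas appears as a free phase.
        return rs_correlation(pressure)

# Illustrative use with a made-up linear correlation (scf/STB):
print(solution_gor(2500.0, 3000.0, lambda p: 0.3 * p))   # saturated -> 750.0
```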

Water properties
McCain (1991) provides correlations for estimating such water properties as density,
compressibility, formation volume factor, viscosity and gas solubility. Since gas solubility in
water is very small compared to oil, for most practical cases, we assume constant values for
these properties that come into play in the water flow equation.

Reservoir Rock/Fluid Interactions


Reservoir fluid flow is governed by complex interactions between the fluids and the reservoir
rock. These interactions become more complicated when, as is often the case, two or more
fluids are present in the same pore. When a driving force acts on such systems, the fluids
compete in motion, because their movement is mutually dependent. To appropriately describe
the simultaneous flow of two or more fluids in a porous medium requires a good
understanding of both the fluid-fluid and rock-fluid interactions.

Wettability and interfacial tension


The principal fluids in a petroleum reservoir are water, oil and gas. When they exist as free
phases, they are generally immiscible (Note: this discussion does not consider emulsions or
dissolved gases). When these immiscible fluids co-exist in the reservoir pore space, their
interactions with one another and with the containing rock control their spatial distribution and
movement. The two principal properties used to quantify these interactions are wettability,
which pertains to rock-fluid interactions, and interfacial tension, which relates to fluid-fluid
interactions.
When two immiscible fluids co-exist in the same pore space, one preferentially adheres to the
rock surface. This phenomenon is known as wetting, and the fluid that is preferentially
attracted is referred to as having a higher wettability index. The parameter which determines
the wettability index is called adhesion tension, and it is directly related to interfacial tension.
Interfacial tension is a measure of the surface energy per unit area of the interface between
two immiscible fluids. Examples of such interfaces include the junction between water and

crude oil and the junction between oil and gas. Figure 6 depicts an oil-water interface.

Figure 6

The study of surface energy phenomena is very important in recovery processes, in that
many EOR processes are based on altering the surface energy so as to favor oil recovery.
They work on the principle that all interfaces existing under equilibrium conditions have some
free energy associated with them. As a two-phase system approaches equilibrium, the
interface assumes a configuration that tends to minimize the free surface energy, unless
otherwise constrained by external forces. Examples of this behavior abound in nature. A rain
drop falling in a vacuum assumes a spherical shape, since this is the geometrical shape that
minimizes surface area and, therefore, surface energy. A rain drop falling through the
atmosphere would do the same thing, except that an external drag force constrains it from
doing so. For this reason, it has a "tear-drop" shape.
Interfacial tension is the surface energy per unit area of a fluid interface. It has units of force
per length. For any fluid having contact with a solid surface, the contact between the fluid and
the solid has a certain value of interfacial tension associated with it. What differentiates fluids
from one another in terms of relative wettability are their values of fluid/solid interfacial
tension. The lower the solid-fluid interfacial tension, the lower the surface energy and the
higher the tendency for the fluid to wet that surface.
For two immiscible co-existing fluids in porous media, the one with the lower interfacial
tension is the wetting phase, while the other is the non-wetting phase. There is a definite
relationship between the solid/wetting phase interfacial tension, the solid/non-wetting phase
interfacial tension, and the wetting/non-wetting phase interfacial tension. The Young-Dupré
equation (Equation 2.1) expresses this relationship.

σSN − σSW = σWN cos θ  (2.1)

where

σSN = solid/non-wetting phase interfacial tension
σSW = solid/wetting phase interfacial tension
σWN = wetting/non-wetting phase interfacial tension
θ = angle of contact between the fluid and the rock
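Rearranged, Equation 2.1 lets us estimate the contact angle from the three interfacial tensions. The short Python sketch below uses illustrative (not measured) values.

```python
import math

# Rearranging Equation 2.1: cos(theta) = (sigma_SN - sigma_SW) / sigma_WN.
# The interfacial tensions below (dyn/cm) are illustrative values only.
sigma_SN, sigma_SW, sigma_WN = 45.0, 30.0, 30.0

cos_theta = (sigma_SN - sigma_SW) / sigma_WN
theta = math.degrees(math.acos(cos_theta))
print(f"contact angle: {theta:.1f} degrees")  # < 90 deg: the wetting phase wets the rock
```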
Relative permeability
When two or more immiscible fluids flow simultaneously through a porous medium, they
compete and do not move at equal velocity. This results on the one hand from interactions
between the fluids and the rock, and on the other from interactions among the fluids
themselves. As previously mentioned, this manifests itself in interfacial tensions.
Interfacial tensions are not transport properties, and so we cannot use them directly to
quantitatively characterize relative motion. We can, however, observe the relative ease with
which each of the two competing fluids goes through the porous medium; that is, we can
measure the relative permeability.
Although relative permeability is not a fundamental property of fluid dynamics, it is the
accepted quantitative parameter used in reservoir engineering. Relative permeability appears
prominently in the flow equations used in reservoir simulation.
By definition, relative permeability is the ratio of the effective permeability, when more than
one fluid is present, to the absolute permeability. Effective permeability is the measured
permeability of a porous medium to one fluid when another is present. The effective
permeability depends on the relative proportion of the two fluids present, or fluid saturation.
Therefore, relative permeability is also a function of fluid saturation. Although there are
models for predicting relative permeability, they are all empirically formulated from measured
data sets. Figure 7 shows typical relative permeability curves for an oil-water system.

Figure 7

Figure 7 depicts relative permeability as a function of water saturation for a two-phase
system. We therefore refer to it as two-phase relative permeability. If a third phase is present,
then each fluid has its own relative permeability, which differs from the corresponding two-phase relative permeability.
Because it requires a three-dimensional representation, three-phase relative permeability is
often shown on ternary diagrams, with isoperms displayed at various saturation combinations.
Leverett and Lewis (1941) were among the first to use this representation. Figure 8 shows a
typical relative permeability curve for a three-phase oil/gas/water system.

Figure 8

Successful simulation of a multiphase system hinges on adequate relative permeability


information. Since relative permeability is a function of saturation, which varies over a
reservoir's life, the best way to get adequate information is to incorporate relative permeability
models into the reservoir simulator. Several models are available (Honarpour et al., 1986),
each claiming varying degrees of merit. The simulation engineer must determine which model
is appropriate. Table 1 lists some common relative permeability models.

Table 1
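As one concrete example of the model families in Table 1, the sketch below implements Corey-type two-phase relative permeability curves. The end points and exponents are illustrative and would normally be fitted to laboratory data.

```python
# Corey-type two-phase relative permeability, one of the empirical model
# families referred to in Table 1.  End points and exponents are
# illustrative and would normally be fitted to measured data.
def corey_krw(Sw, Swc=0.20, Sor=0.25, krw_end=0.30, nw=3.0):
    Se = (Sw - Swc) / (1.0 - Swc - Sor)   # normalized water saturation
    Se = min(max(Se, 0.0), 1.0)           # clamp to the mobile range
    return krw_end * Se ** nw

def corey_kro(Sw, Swc=0.20, Sor=0.25, kro_end=0.80, no=2.0):
    Se = (1.0 - Sw - Sor) / (1.0 - Swc - Sor)
    Se = min(max(Se, 0.0), 1.0)
    return kro_end * Se ** no

for Sw in (0.20, 0.40, 0.60, 0.75):
    print(f"Sw={Sw:.2f}  krw={corey_krw(Sw):.3f}  kro={corey_kro(Sw):.3f}")
```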

Capillary pressure
In everyday experience, water in two or more connected containers settles at the same
level if exposed to the same atmosphere. But when it comes to spaces of capillary size (like
those we encounter in porous media), we cannot take this rule so literally. To illustrate,
consider what happens when a glass tube of capillary size is dipped in a larger container filled
with water ( Figure 9 ).

Figure 9

The water in the capillary tube rises above the water level in the container to a height that
depends on the capillary size. Although strictly speaking, the water still finds its level, it does
so in such a way as to maintain an overall minimum surface energy.
In this situation, the adhesion force allows water to rise up in the capillary tube while gravity
opposes it. The water rises until there is a balance between these two opposing forces. The
differential force between adhesion and gravity is the capillary force. This force per unit area
is the capillary pressure. As we might surmise from these observations, there is a relationship
between capillary pressure, Pc , and the interfacial tension between the two fluids (in the case
of Figure 9 , water and air).

Pc = 2 σwn cos θ / R  (2.2)

where

Pc = capillary pressure
σwn = wetting/non-wetting phase interfacial tension
R = radius of the tube
θ = angle of contact between the solid surface and liquid
Note that the exact opposite happens if the fluid is the non-wetting phase with respect to the
tube material. The classic example is oil and mercury in a glass capillary tube, where, instead
of capillary rise, there occurs capillary fall.
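A quick calculation with Equation 2.2 shows why capillarity matters at pore scale. The sketch below, in SI units with illustrative inputs, computes the capillary pressure and the corresponding rise height for water in a narrow glass tube.

```python
import math

# Capillary pressure and rise height for water in a glass tube
# (Equation 2.2): Pc = 2*sigma*cos(theta)/R.  Inputs are illustrative.
sigma = 0.072        # N/m, air-water interfacial tension
theta = 0.0          # radians; water strongly wets clean glass
R = 50e-6            # m, tube radius (capillary-sized, like a pore throat)
rho, g = 1000.0, 9.81

Pc = 2.0 * sigma * math.cos(theta) / R   # Pa
h = Pc / (rho * g)                       # rise height where gravity balances Pc, m
print(f"Pc = {Pc:.0f} Pa, rise = {h:.2f} m")   # ~2880 Pa, ~0.29 m
```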

Capillary pressure is important in porous media flow description because of the saturation
distribution in the capillary-like pore spaces. A plot of capillary pressure versus water
saturation has a typical shape, as shown in Figure 10 .

Figure 10

It is interesting to note the hysteresis between the capillary curves for a drainage process
(where wetting phase saturation is decreasing) and an imbibition process (where wetting
phase saturation is increasing).

Microscopic Properties
Reservoir fluid flow is a fundamentally complex process. Fluid movement depends not only on
the fluids themselves, but also on how the fluids interact with the porous medium, which in
effect is a huge capillary network. Then, there is the pore structure itself to worry about. On a
scientific level, understanding these microscopic properties is a must if we are to capture the
essence of the system. However, engineers thrive on approximation, and reservoir simulation
engineers are no exception. Using global properties such as permeability, porosity, relative
permeability, and capillary pressure, one attempts to procure the best information possible.
As research progresses into reservoir characterization, improvement in the modeling
approach is inevitable. Meanwhile, we must learn all we can through approximate models.

Flow Geometries and Dimensions


The same factors that dictate our choice of coordinate systems play a dominant role in our
deciding how many dimensions to assign to a problem. Engineering, to a large extent, is a
marriage between pure science and practical reality. In other words, selecting a higher
number of dimensions to represent a system may be scientifically correct, but we may lack
the information, or be unable to afford the computational overhead, needed to assign this
many dimensions. So,
we assign fewer dimensions and settle for a less-than-ideal problem definition. Such
compromise may seem drastic. But in reality, for most engineering problems, we can
generate an adequate amount of information even within these limitations.

Rectangular flow geometry


Rectangular geometry is the one that is most familiar to us, as engineers, dating back to our
high school calculus. In reservoir modeling we often use this familiarity to our advantage,
since most field-scale multi-well studies are done in this coordinate system.
As Figure 1 illustrates, we can consider the reservoir to be a rectangular box with the fluid
particles moving in straight lines, perhaps at different speeds in different directions and
locations.

Figure 1

In this case, the streamlines are parallel to the three principal axes (x, y, and z), which are
orthogonal.
Figure 1 also shows the partition of the box into many smaller boxes, which are rectangular
prisms. Each of these rectangular prisms represents a certain portion of the reservoir, about
which we can procure information through simulation studies. We use this smaller element of
dimensions (Δx, Δy, Δz) as a control volume to set up and discretize the governing equations.
We should emphasize that a fluid particle entering an elemental volume in one direction does
not necessarily exit in the same direction; by the same token, a fluid particle leaving the
elemental volume in one direction did not necessarily enter it in the same direction. Formation
characteristics (such as heterogeneity, permeability contrasts and the force fields imposed by
the conditions at the boundaries) dictate the flow path once the fluid enters the control
volume. This is the essence of flow multi-dimensionality.
Figure 2 illustrates the concept of one-dimensional flow along the x-direction. Although it is
difficult to find real-world examples of truly one-dimensional flow, many types of analyses and
systems do lend themselves to description as one-dimensional.

Figure 2

The flow structure shown in Figure 2 precludes flow in any other direction, which implies there
is no property variation along the y and z directions. Therefore, if we take a section
perpendicular to the indicated flow direction, there will not be any property variation across
the plane. Similarly, any cross section taken in the x-z or x-y planes (parallel to the
streamlines) will reveal the uniformity of the flow structure. More explicitly, the pressure
profiles of flow paths will be similar. A long, skinny reservoir that is confined between two
closely spaced, parallel faults fits this description.
The next level of description used in reservoir simulation is two-dimensional flow. Many
reservoir simulation studies employ two-dimensional Cartesian coordinate systems. This
makes sense when we consider the large lateral extent of most reservoirs compared with
their thicknesses. Figure 3 illustrates a two-dimensional flow structure along the x and y
directions.

Figure 3

This precludes flow in the z-direction. Therefore, any slice taken parallel to the x-y plane will
not show any variation in terms of property and fluid distribution such as porosity, permeability
and saturations. The introduction of the second dimension allows us to describe a wide
variety of problems. We can, for instance, account for directional permeability variation and
lateral well distributions. Moreover, a two-dimensional approach allows us to represent a
variety of well completion strategies (e.g., vertical wells, horizontal wells, stimulated wells).
Thin, blanket sands that tend to display large areal coverage are ideally suited for description
by a two-dimensional model.
The best representation of flow is the three-dimensional model, because it allows us to
procure the most information about the reservoir. Unfortunately, it also requires the largest
amount of input information and a higher level of computational power and overhead. Still,
incorporating a third dimension gives us the latitude we need to include all the property
variations in all three spatial directions. This means that if we take two parallel slices
perpendicular to the third dimension, they will exhibit property and flow differences. Figure 4
illustrates a three-dimensional flow structure.

Figure 4

A three-dimensional representation allows us to accommodate a wide variety of problems of


practical interest, such as layered reservoirs (with or without crossflow), partially penetrating
wells, multi-layered production schemes, and thick reservoirs where gravitational forces could
be significant. A three-dimensional model makes it possible for us to come up with more
realistic representations of drive mechanisms (or any combination thereof), such as gas cap
expansion, bottom water drive, and so forth.
In spite of a three-dimensional model's many advantages, it is less often used in practice than
we might expect. This is because we have to weigh such factors as cost, data availability and
marginal utility. In many cases a three-dimensional model may be a luxury that we can ill
afford. On the other hand, there are certain problems in which it is a technical necessity.
Consider, for instance, a thick reservoir with no significant property variation in the vertical
direction; while a two-dimensional model would appear adequate based on this description, it
may turn out that the gravitational field contribution may be so significant as to require a third
dimension.

Radial-cylindrical flow geometry


The radial-cylindrical coordinate system is particularly appealing for describing single-well
problems. Figure 5 shows the principal directions of this flow geometry and its elemental
volume.

Figure 5

The three principal flow directions are radial (r), vertical (z) and tangential (θ). To visualize this
flow structure, imagine a single well located in the center of a circular reservoir, such that the
wellbore and the reservoir boundary are two concentric circles. If we assume a reservoir of
uniform thickness, then the system becomes two concentric cylinders of the same height. A
particle moving in a three-dimensional radial-cylindrical flow geometry can be illustrated as in
Figure 6 .

Figure 6

A typical one-dimensional, radial-cylindrical flow model is the classical representation used in


well test analysis. In this case, flow is constrained to the r-direction such that streamlines are
rays converging towards the center of the well ( Figure 7 ).

Figure 7

Studying the problem along one trajectory is sufficient because of symmetry. Any particle
located on any of the trajectories will experience similar forces. By neglecting the flow in the
angular (θ) and axial (z) directions, we introduce a series of assumptions, such as no
permeability gradation along the θ-direction and no gravitational effect along the z-direction.
As we can imagine from looking at Figure 6 , one-dimensional flow representations in the θ- and z-directions have no practical significance in reservoir studies.
The two-dimensional (r-z) representation is appealing for single-well problems where gravity
and/or layering effects are significant ( Figure 8 ). This r-z plane can be taken at any θ
location without changing the problem because of its axi-symmetric nature.

Figure 8

The three-dimensional flow structure in the radial-cylindrical coordinate system admits property
variation in all three directions. Figure 9 shows this system.

Figure 9

Elliptical-cylindrical flow geometry


We sometimes use elliptical-cylindrical flow geometry in single-well studies when a strong
permeability contrast exists in two principal directions on the lateral plane. Another common
application of this coordinate system is when a vertical well is intercepted by a vertical, high-conductivity fracture (theoretically presumed to be of infinite conductivity). Under these
conditions, the normally concentric equipotential contours degenerate into confocal ellipses.
Similarly, the streamlines become distorted into confocal hyperbolas. Figure 10 depicts this
flow structure.

Figure 10

Spherical flow geometry


Although not commonly used for general simulation, the spherical coordinate system provides
a good representation of some specific reservoir engineering problems. Two examples are
partial penetration of a thick formation by a production well, and flow around perforations. The
principal flow directions in spherical coordinates are radial (r), polar (θ) and azimuthal (φ),
as shown in Figure 11 .

Figure 11

Curvilinear flow geometry


The most generalized coordinate system is curvilinear. In fact, all of the coordinate systems
previously discussed constitute a subset of the curvilinear system. A curvilinear coordinate
system allows a better representation of the flow geometry, as well as the boundary geometry
where the latter dictates the former. With the flow geometry more accurately represented, the
results obtained with a curvilinear coordinate system do not get distorted by grid orientation
effects, as often happens with other coordinate systems. Another advantage is that curvilinear
systems may help reduce the number of grid blocks needed for the same level of accuracy.
Figure 12 shows the areal implementation of curvilinear coordinates to a five-spot
injection/production pattern.

Figure 12

Note that the streamlines and equipotential contours define the curvilinear elemental volume.
Although curvilinear coordinate systems offer attractive advantages, their use is limited
because of the added mathematical and interpretational complexity they introduce.
Choosing the appropriate coordinate system and number of dimensions is paramount not only
to a simulation study's success, but also to its relative simplicity. It is thus essential that we
use sound engineering judgment and perform thorough analyses throughout this process. We
must answer questions pertaining to the reservoir's approximate geometry, possible drive
mechanisms, well and completion configurations, level of detail required, type and amount of
data available, and so on. As far as reservoir simulation is concerned, bigger is not
necessarily better. We must exercise good engineering judgment in establishing the scope of
our study. We need to avoid overkill, but at the same time, understand that under-representing the needed details is dangerous. Simply put, we must strike a balance.

Single-Phase Flow Equations


Single-phase flow in petroleum reservoirs is rare in practice. There are only a limited number
of cases (dry gas reservoirs, for example) where conditions exist for single-phase flow. But
we do apply single-phase flow assumptions (predominantly in well test analysis) as a means
of simplifying problems and rendering them analytically tractable.
In numerical reservoir simulation, we may relax these types of simplifying constraints because
of the more versatile nature of numerical schemes over analytical methods. Moreover, even
for single-phase systems, an appropriate description of real problems usually involves
considering non-linear phenomena, which are often ignored or approximated for ease of
mathematical handling. Again, numerical schemes are not limited by these constraints.
Therefore, this discussion on single-phase flow problems has not only a pedagogical basis,
but a practical one as well.

Darcy's law and the concept of flow potential


Darcy's law is central to describing fluid flow in petroleum reservoirs. It distinguishes flow in
porous media from flow in other domains (such as pipes and conduits), and represents a kind
of constitutive relationship between the pressure field and the velocity field. Although Henry
Darcy derived this relationship empirically in 1856, it has since been shown that Darcy's law
is a special form of the Navier-Stokes equation, which is commonly used in fluid mechanics.
Simply put, Darcy's law expresses a functional relationship between the fluid velocity, the
properties of the fluid and the porous medium, and the potential gradient:

q = −(βc k A / μ) ∂Φ/∂x  (3.1)
This law appears in different forms. Table 1 summarizes the forms used in this text, along with their
appropriate units.

Table 1

Velocity is the flow rate divided by the cross-sectional area perpendicular to flow. Equation
3.1 expresses the direct proportionality between flow rate and potential gradient, with the
proportionality constant reflecting properties of both the flowing fluid and the porous medium.
Note that as written, Equation 3.1 is for reservoir conditions. To express flow in surface
conditions (or standard conditions), we must incorporate a formation volume factor into the
proportionality constant.
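As a worked example, the sketch below evaluates Equation 3.1 in customary field units, where the conversion factor βc = 1.127 × 10⁻³ yields a rate in bbl/day for permeability in md, area in ft², viscosity in cp and a potential gradient in psi/ft; the input values themselves are illustrative.

```python
# Darcy's law in customary field units (Equation 3.1):
#   q = -beta_c * (k * A / mu) * dPhi/dx
# beta_c = 1.127e-3 gives q in bbl/day for k in md, A in ft2, mu in cp
# and dPhi/dx in psi/ft.  All input values are illustrative.
beta_c = 1.127e-3

k = 100.0        # md
A = 2000.0       # ft2
mu = 1.5         # cp
dPhi_dx = -0.5   # psi/ft (potential decreasing in +x, so flow is in +x)

q = -beta_c * k * A / mu * dPhi_dx
print(f"q = {q:.1f} bbl/day")   # positive: flow in the +x direction
```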
The flow potential Φ, defined by M. King Hubbert and usually referred to as Hubbert's potential, is
basically a combination of pressure and gravitational fields, as shown in Table 2 .

Table 2

q = −(βc k A / μ) ∂Φ/∂x  (3.1)

where

q : volumetric flow rate
A : cross-sectional area perpendicular to flow
k : permeability
μ : viscosity
∂Φ/∂x : potential gradient in the flow direction x
βc : unit conversion factor

Φ = P − (g/gc) ρ D  (3.2)

where

Φ : Hubbert's potential
P : pressure
ρ : fluid density
g : local gravitational constant
gc : universal gravitational constant
D : depth with respect to datum, taken as positive downward

In practical application, g/gc is set to 1.0, and Equation 3.2 becomes

Φ = P − ρ D  (3.3)
The negative sign in Equation 3.1 shows that the potential gradient is negative in the flow direction. In
Equation 3.2, the sign convention is such as to give the appropriate addition or subtraction of gravity
from pressure.
In this text, we use the positive downward convention for depth, as shown in Figure 1 .

Figure 1

This figure shows a sloping reservoir, where the datum is at sea level (the datum could be
any fixed elevation, although sea level is the most widely accepted convention). Using the
positive downward convention, the depth to point 1 from the datum is positive; the depth to
point 2 is negative. We can calculate the potentials at points 1 and 2 as:

Φ1 = P1 − ρ D1  (3.4)

Φ2 = P2 − ρ D2  (3.5)

Assuming uniform pressure within the reservoir (P1 = P2), if we subtract Φ1 from Φ2,

ΔΦ = Φ2 − Φ1 = ρ (D1 − D2)  (3.6)

we obtain a positive ΔΦ, which reflects the hydrostatic head exerted by the fluid column of
density ρ and height D1 + |D2|.
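The short sketch below reproduces this calculation with illustrative numbers, using 0.433 psi/ft (the fresh-water gradient) for ρg/gc and the positive-downward depth convention.

```python
# Potentials at two points of a tilted reservoir (Equations 3.3-3.6) using
# the positive-downward depth convention.  The 0.433 psi/ft gradient is
# the fresh-water value; the other numbers are illustrative.
gamma = 0.433            # rho*g/gc expressed as psi/ft
P1 = P2 = 3000.0         # psia, uniform pressure assumed
D1 = +500.0              # ft, point 1 below the datum
D2 = -300.0              # ft, point 2 above the datum

Phi1 = P1 - gamma * D1   # Equation 3.4
Phi2 = P2 - gamma * D2   # Equation 3.5

dPhi = Phi2 - Phi1       # Equation 3.6: gamma*(D1 - D2) > 0
print(f"dPhi = {dPhi:.1f} psi")   # hydrostatic head of an 800-ft column
```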
It is the gradient of potential, rather than the potential itself, which appears in Darcy's law
(Equation 3.1). It is therefore clear that if we take the derivative of Equation 3.3, we obtain

∂Φ/∂x = ∂P/∂x − ρ ∂D/∂x  (3.7)

In resolving Equation 3.7, we assume that density is constant. Furthermore, note that we can
write Equation 3.7 for the other principal flow directions as well, yielding depth gradients not
only along the x-direction but also along the y- and z-directions (i.e., ∂D/∂y, ∂D/∂z). If the depth
gradient along any of the flow directions does not exist, then the potential gradient becomes
equal to the pressure gradient. Equation 3.7 then becomes

∂Φ/∂x = ∂P/∂x  (3.8)
Figure 2 shows four reservoirs with varying orientations relative to the datum plane.

Figure 2

Part (a) of Figure 2 shows a reservoir where the x-y plane is parallel to the datum surface. In this case, there is no depth gradient along the x- and y-directions.

In part (b) of Figure 2, the reservoir is tilted so that the x-y plane is no longer parallel to the datum, but the edges along the y-axis remain parallel. Thus, the depth gradient along the y-direction is still non-existent, whereas the depth gradient now exists in the x-direction.

The reservoir in part (c) of Figure 2 is similar to the one in part (b), except now the edges of the reservoir along the x-direction are parallel to the datum surface, yielding a vanishing depth gradient in the x-direction, but not in the y-direction.

Part (d) of Figure 2 represents the most general case, in which neither the x- nor the y-direction edges are parallel to the datum surface. This means that the depth gradient is non-zero in both the x- and y-directions.
In all four of these cases, the reservoir configurations are such that the formation thickness is always
measured parallel to the direction that depth is measured. This is why, in each of these cases, the depth
gradient along the z-direction is always unity.
We usually formulate the differential equations governing fluid flow in porous media based on
the continuum assumption, in which we consider a differential element of the system and take
balances over a conserved quantity of interest. When the quantity is mass, the resulting
equation is the mass balance equation or the continuity equation. Figure 3 shows a
representative element (control volume) of the reservoir in Cartesian coordinates.

Figure 3

Note that while the control volume will differ according to the coordinate system chosen, the
basic strategy is the same.

Conservation of mass
The conservation of mass principle simply says that over a fixed time period,
[Mass in] - [Mass out] = [Net change in mass content]

Applying this principle to the system in Figure 3 , we obtain the continuity equation shown in
Table 3 .

Table 3

The porosity term on the right-hand side of Equation 3.9, if treated as a constant, will
come out of the differential operator. This is a reasonable assumption for a reservoir with low
rock compressibility.
With appropriately defined terms and parameters, Equation 3.9 is general and can be used
for any system. To specialize it to porous media, we must invoke Darcy's law. Substituting
Darcy's law, written in terms of velocity as

u = −(βc k / μ) ∂Φ/∂x  (3.12)
we obtain the flow equation for porous media. In a rectangular coordinate system, this equation
becomes

∂/∂x [ (βc kx Ax / (μB)) ∂Φ/∂x ] Δx + ∂/∂y [ (βc ky Ay / (μB)) ∂Φ/∂y ] Δy + ∂/∂z [ (βc kz Az / (μB)) ∂Φ/∂z ] Δz = (Vb / αc) ∂(φ/B)/∂t  (3.13)

In a radial coordinate system, it takes the form

∂/∂r [ (βc kr Ar / (μB)) ∂Φ/∂r ] Δr + ∂/∂θ [ (βc kθ Aθ / (μB)) ∂Φ/∂θ ] Δθ + ∂/∂z [ (βc kz Az / (μB)) ∂Φ/∂z ] Δz = (Vb / αc) ∂(φ/B)/∂t  (3.14)

Equation (3.13) does not take into account the existence of wells and possible variations in
formation thickness along the flow directions. In practice, we must incorporate both of these
factors into the equations. In reservoir simulation, we treat the well (which could be a
producer or injector) as a source or sink within the system. In this text, we follow the
convention of treating injection as positive (source) and production as negative (sink).
Therefore, injection or production wells are represented in the same fashion in the flow
equation, except for the sign. With this in mind, Equation (3.13) becomes:

∂/∂x [ (βc kx Ax / (μB)) ∂Φ/∂x ] Δx + ∂/∂y [ (βc ky Ay / (μB)) ∂Φ/∂y ] Δy + ∂/∂z [ (βc kz Az / (μB)) ∂Φ/∂z ] Δz + qsc = (Vb / αc) ∂(φ/B)/∂t  (3.15)

In formulating these equations, we have made no assumption about the nature of the fluid. Of
course, this will come into the picture when we take into account that the reservoir fluid can
be treated as incompressible, slightly compressible or compressible. We shall now specialize
the continuity equation for these fluid types.

Incompressible flow equation


For an incompressible fluid, density and viscosity are constant and the formation volume factor is
equal to unity. In addition, if we assume that porosity does not vary with pressure, we obtain:

∂/∂x [ (βc kx Ax / μ) ∂Φ/∂x ] Δx + ∂/∂y [ (βc ky Ay / μ) ∂Φ/∂y ] Δy + ∂/∂z [ (βc kz Az / μ) ∂Φ/∂z ] Δz + qsc = 0  (3.16)

Equation 3.16 is written for heterogeneous and anisotropic formations. For a homogeneous
and isotropic formation, and without injection or production (qsc = 0), Equation 3.16 simplifies to:

∂²Φ/∂x² + ∂²Φ/∂y² + ∂²Φ/∂z² = 0  (3.17)

which is the well-known Laplace Equation. The following field units apply to Equations 3.16
and 3.17:
A [ft²]; k [perms]; Φ [psi]; x, y, z [ft]; qsc [STB/day]; μ [cp].

Slightly compressible flow equation


For a slightly compressible fluid, density, viscosity and formation volume factor exhibit weak
dependence on pressure. Furthermore, for a slightly compressible fluid, we usually assume
that compressibility does not vary within the pressure range of interest.

∂/∂x [ (βc kx Ax / (μB)) ∂P/∂x ] Δx + ∂/∂y [ (βc ky Ay / (μB)) ∂P/∂y ] Δy + ∂/∂z [ (βc kz Az / (μB)) ∂P/∂z ] Δz + qsc = (Vb φ c / (αc B)) ∂P/∂t  (3.18)

For slightly compressible fluids the changes in viscosity and formation volume factor with
pressure are negligible and they can be treated as constants. Furthermore, if we assume that
we are dealing with homogeneous and isotropic porous media with no well, Equation 3.18
reduces to a simpler form, which is known as the diffusivity equation.

∂²P/∂x² + ∂²P/∂y² + ∂²P/∂z² = (φ μ c / (βc k)) ∂P/∂t  (3.19)

The group βc k / (φ μ c) in Equation 3.19 is the hydraulic diffusivity constant for the reservoir fluid system.
Note that the transport phenomenon described by Equation 3.19 is not a diffusion process,
but a laminar flow problem. Equation 3.19 is merely analogous to the diffusivity equation in heat
and/or mass transfer.
Note also that in Equations 3.18 and 3.19, we assume that the depth gradients are negligible.
Field units for these equations are as follows:
A [ft²]; k [perms]; P [psi]; x, y, z [ft]; qsc [STB/day]; μ [cp]; c [psi⁻¹];
B [bbl/STB]; φ [dimensionless]; Vb [ft³]; t [day]; and αc = 5.615
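Equation 3.19 is also a natural entry point to numerical solution. The sketch below advances the one-dimensional diffusivity equation with an explicit finite-difference scheme; the diffusivity value, grid and boundary conditions are all assumed for illustration, and the stability restriction of the explicit method, η Δt/Δx² ≤ 1/2, is respected.

```python
import numpy as np

# Explicit finite-difference sketch of the 1-D diffusivity equation
# (Equation 3.19): d2P/dx2 = (1/eta) * dP/dt, with eta = beta_c*k/(phi*mu*c).
# All property values and boundary conditions are illustrative.
eta = 1.0e4          # hydraulic diffusivity, ft2/day (assumed)
L, nx = 1000.0, 51   # reservoir length (ft) and number of grid nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / eta            # keeps eta*dt/dx2 = 0.4 <= 0.5 (stable)

P = np.full(nx, 3000.0)           # initial uniform pressure, psia
for step in range(500):
    P[0] = 1000.0                 # constant-pressure (producing) boundary
    P[-1] = P[-2]                 # no-flow outer boundary
    # FTCS update: P_new[i] = P[i] + eta*dt/dx2 * (P[i+1] - 2P[i] + P[i-1])
    P[1:-1] += eta * dt / dx**2 * (P[2:] - 2.0 * P[1:-1] + P[:-2])

print(P[::10])                    # pressure profile after 500 time steps
```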

Compressible flow equation


Compressible fluid flow is the norm for gas reservoirs. While the concepts that apply to
incompressible and slightly compressible flow also apply here, compressible fluid flow
involves additional considerations. The highly compressible nature of gas makes certain gas
properties (i.e., viscosity, density, formation volume factor and compressibility factor) strongly
dependent on pressure. Since we cannot assume that these properties are constant, they
introduce non-linearities to the flow equations. The numerical handling of the flow equations
becomes more challenging as the degree of non-linearity increases. However, one aspect we
do not have to worry about is the gravitational component. This is because the low density of
gas makes the gravitational contribution negligible in most cases. Therefore, the potential
gradients are readily replaceable by the pressure gradients. The governing equations for
compressible fluid flow are summarized as follows:

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x}{\mu_g B_g}\frac{\partial P}{\partial x}\right)\Delta x + \frac{\partial}{\partial y}\left(\frac{A_y k_y}{\mu_g B_g}\frac{\partial P}{\partial y}\right)\Delta y + \frac{\partial}{\partial z}\left(\frac{A_z k_z}{\mu_g B_g}\frac{\partial P}{\partial z}\right)\Delta z + q_{sc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi}{B_g}\right)$$
(3.20)

where

$$B_g = \frac{P_{sc}}{T_{sc}}\,\frac{Z\,T}{P}$$
(3.21)

Substituting Equation 3.21 into Equation 3.20 will accentuate the inherent non-linearity of
compressible flow.

$$\frac{\partial}{\partial x}\left(A_x k_x\frac{P}{\mu Z}\frac{\partial P}{\partial x}\right)\Delta x + \frac{\partial}{\partial y}\left(A_y k_y\frac{P}{\mu Z}\frac{\partial P}{\partial y}\right)\Delta y + \frac{\partial}{\partial z}\left(A_z k_z\frac{P}{\mu Z}\frac{\partial P}{\partial z}\right)\Delta z + \frac{P_{sc}T}{T_{sc}}\,q_{sc} = \frac{V_b\,\phi\,P_{sc}T}{\alpha_c T_{sc}}\frac{\partial}{\partial t}\left(\frac{P}{Z}\right)$$
(3.22)

In Equation 3.22, the sources of non-linearity are the terms P/(μZ) in the spatial derivatives
and 1/Z in the temporal derivative. Field units for Equations 3.20, 3.21 and 3.22 are as
follows:
A [ft2]; k [perms]; P [psi]; x, y, z [ft]; qsc [SCF/day]; μ [cp]; c [psi-1]; B [bbl/SCF];
φ [dimensionless]; Vb [ft3]; t [day]; Z [dimensionless]; T [°R]; and αc = 5.615
It is sometimes expedient to linearize the non-linear equations of compressible fluid flow in
porous media (Equation 3.22). The two common approaches are called the P2 method and
the pseudo-pressure method. In the P2 approach, we simply recognize that P dP = ½ d(P2) and
assume that the μZ product at low pressures is constant. The dependent variable in the
resulting equation is then P2 rather than P. The pseudo-pressure approach (real gas
potential) uses the transformation

$$P^* = 2\int_{P_{ref}}^{P}\frac{P'}{\mu(P')\,Z(P')}\,dP'$$
(3.23)

By implementing this transformation, Equation 3.22 is linearized and the dependent variable is now P*.
The assumptions made in formulating the P2 approach are not as drastic as we might think when we
observe the plot of μZ versus P ( Figure 4 ).

Figure 4

At low pressures, the μZ product is essentially constant; at intermediate pressures (40 to
130 atmospheres), it exhibits some non-linearity; and at high pressures, it becomes linear
with pressure. In fact, at high pressures, if we treat the P/(μZ) group as constant as depicted
in Figure 4 , Equation 3.22 becomes similar to the slightly compressible fluid flow equation.
This is not surprising, because gases do start to behave like liquids at higher pressures. The
linearized forms of the compressible fluid flow equation are summarized as follows:
P2 form:

$$\frac{\partial^2 (P^2)}{\partial x^2} + \frac{\partial^2 (P^2)}{\partial y^2} + \frac{\partial^2 (P^2)}{\partial z^2} = \frac{\phi\,\mu_g\,c}{k}\frac{\partial (P^2)}{\partial t}$$
(3.24)

where μg and Z are calculated at some average pressure P̄.


P* form:

$$\frac{\partial^2 P^*}{\partial x^2} + \frac{\partial^2 P^*}{\partial y^2} + \frac{\partial^2 P^*}{\partial z^2} = \frac{\phi\,\mu_{gi}\,c_i}{k}\frac{\partial P^*}{\partial t}$$
(3.25)

where μgi, Zi and ci are calculated at the initial pressure. Field units for Equations 3.24 and
3.25 are as follows:
A [ft2]; k [perms]; P [psi]; x, y, z [ft]; qsc [SCF/day]; μ [cp]; c [psi-1];
φ [dimensionless]; Vb [ft3]; t [day]; Z [dimensionless]; T [°R]; αc = 5.615; P* [psi2/cp]
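Since the pseudo-pressure transformation is defined by an integral, it is usually evaluated numerically. The following is a minimal, non-authoritative sketch that approximates P* by trapezoidal quadrature; the μ(P) and Z(P) models below are hypothetical placeholders, not correlations from this text.

import numpy as np

def pseudo_pressure(p, mu_of_p, z_of_p, p_ref=14.7, n=200):
    """P* = 2 * integral from p_ref to p of p'/(mu*Z) dp' (trapezoidal)."""
    grid = np.linspace(p_ref, p, n)
    integrand = grid / (mu_of_p(grid) * z_of_p(grid))
    return 2.0 * np.trapz(integrand, grid)

# Hypothetical property models, for illustration only:
mu = lambda p: 0.012 + 2.0e-6 * p   # gas viscosity, cp
z = lambda p: 1.0 - 5.0e-5 * p      # Z-factor (low-pressure trend)

print(pseudo_pressure(2000.0, mu, z))   # psi^2/cp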

Multiphase Flow Equations


Multi-phase flow equations are based on the same principles that govern single-phase flow,
except that they must account for interactions between simultaneously flowing phases in
porous media. The main parameters that we use to characterize these interactions are
relative permeabilities, saturations and solution gas-liquid ratios.

The reader will recall the process of formulating the single-phase flow equations. Basically,
we obtain the flow equation by substituting Darcy's law into the continuity equation. For
multiphase systems, we write the continuity equation for each of the phases. Then we use an
appropriate form of Darcy's law, which accounts for the presence of multiple-fluid flux terms
(left-hand side of the equation), to characterize the transport part of Equation 3.9. At the
same time, we adjust the phase accumulation term using phase saturations. The number of
partial differential equations depends on how many phases are present. This development is
summarized in Table 1 .

Table 1

Two-phase (oil-water) equations


In reservoirs where two-phase flow of oil and water phases dominates (typically in dead oil
reservoirs with no gas), we need to write Equation 3.28 for the oil and water phases
separately. We can do this easily by setting the subscript f first to o for oil-phase and then to
w for water phase. The complete set of equations for two-phase oil-water transport problems,
together with the unknowns to be solved, is summarized below. In these equations, we have
used pressure gradients rather than potential gradients; in other words, we have neglected
depth gradients.

Oil flow equation (fo):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{ro}}{\mu_o B_o}\frac{\partial P_o}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{osc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_o}{B_o}\right)$$
(3.29)

Water flow equation (fw):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{rw}}{\mu_w B_w}\frac{\partial P_w}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{wsc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_w}{B_w}\right)$$
(3.30)
In Equations 3.29 and 3.30, there are four unknowns: oil-phase pressure, Po , water-phase
pressure, Pw , oil-phase saturation, So , and water-phase saturation, Sw. To solve the system,
we need two more equations. These equations, called the auxiliary equations, are
Capillary pressure relationship: Pcow (Sw) = Po - Pw (3.31)
Saturation relationship: So + Sw = 1 (3.32)
With these last two equations, we now have a well-posed problem (four equations in four
unknowns). Field units for Equations 3.29 through 3.32 are as follows:
A [ft2]; k [perms]; kro, krw [fraction]; P [psi]; x, y, z [ft]; μo, μw [cp]; Bo, Bw [bbl/STB];
qosc, qwsc [STB/day]; So, Sw [fraction]; φ [dimensionless]; Vb [ft3];
t [day]; αc = 5.615.

Two-phase (oil-gas) equations


In a volumetric reservoir, all three phases (oil, water and gas) may be present; but with water
at irreducible saturation, the dominant flow is that of oil and gas. In representing flow in this kind of reservoir,
we must account for both the free gas and the gas dissolved in the oil phase.
Figure 1 depicts a two-phase (oil-gas) system, considering flow only along the x-direction for
illustrative purposes.

Figure 1

Note that while there is only one flow term for the oil phase, the gas phase has two flow
terms, which describe free gas flow and dissolved gas flow. Both modes of gas transport must
also be taken into account in the source and accumulation components of the governing flow
equation for the gas phase. The final equations for the two-phase flow of oil and gas are
shown below.
Oil Flow Equation (fo):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{ro}}{\mu_o B_o}\frac{\partial P_o}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{osc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_o}{B_o}\right)$$
(3.33)

Gas Flow Equation (fg):

$$\frac{\partial}{\partial x}\left[\frac{A_x k_x k_{rg}}{\mu_g B_g}\frac{\partial P_g}{\partial x} + R_{so}\frac{A_x k_x k_{ro}}{\mu_o B_o}\frac{\partial P_o}{\partial x}\right]\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{gsc} + R_{so}q_{osc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_g}{B_g} + \frac{R_{so}\,\phi\,S_o}{B_o}\right)$$
(3.34)
In the flow terms of Equation 3.34, the second term in each bracket represents the
contribution from the gas dissolved in oil. Similarly, qgsc represents the free gas produced
(injected), while the product Rsoqosc represents the dissolved gas produced along with oil.
Finally, the second term under the temporal derivative represents the accumulation
(depletion) of gas dissolved in oil. Again, the auxiliary equations necessary to complete this
formulation are the capillary pressure and saturation relationships.

Pcgo (Sg) = Pg - Po (3.35)

So + Sg = 1 (3.36)

Field units for Equations 3.33 through 3.36 are

A [ft2]; k [perms]; kro, krg [fraction]; P [psi]; x, y, z [ft]; μo, μg [cp]; Bo [bbl/STB]; Bg [bbl/SCF]; qosc
[STB/day]; qgsc [SCF/day]; So, Sg [fraction]; φ [dimensionless]; Rso [SCF/STB]; Vb [ft3]; t [day];
αc = 5.615.

Two-phase (gas-water) equations


In an aquifer-driven gas reservoir, simultaneous flow of gas and water takes place, requiring
us to formulate two-phase gas-water equations. The formulation is very similar to that of the
oil-gas flow equation, except that we replace the oil phase with the water phase. In most
practical cases, the solubility of natural gas in water is quite small, allowing us to neglect the
terms of the flow equation that arise from the dissolved gas. One notable exception is the
case of geopressured aquifers, where dissolved gas is significant because of the prevailing
high pressures.
Gas flow equation (fg):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{rg}}{\mu_g B_g}\frac{\partial P_g}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{gsc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_g}{B_g}\right)$$
(3.37)

Water flow equation (fw):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{rw}}{\mu_w B_w}\frac{\partial P_w}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{wsc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_w}{B_w}\right)$$
(3.38)

Auxiliary equations:
Pcgw (Sw) = Pg - Pw (3.39)
Sw + Sg = 1 (3.40)
Field units for Equations 3.37 through 3.40 are
A [ft2]; k [perms]; krw, krg [fraction]; P [psi]; x, y, z [ft]; μg, μw [cp]; Bw [bbl/STB]; Bg [bbl/SCF]; qwsc
[STB/day]; qgsc [SCF/day]; Sw, Sg [fraction]; φ [dimensionless]; Vb [ft3]; t [day]; αc = 5.615.

Three-phase (oil-water-gas) equations


The widest application of reservoir simulation is for three-phase oil-water-gas systems, in
which all three phases are active in the production process. In fact, a large majority of
petroleum reservoirs fall into this category. For this class of problems, we need to write
Equation 3.28 for each of the three phases.
Oil flow equation (fo):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{ro}}{\mu_o B_o}\frac{\partial P_o}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{osc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_o}{B_o}\right)$$
(3.41)

Water flow equation (fw):

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x k_{rw}}{\mu_w B_w}\frac{\partial P_w}{\partial x}\right)\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{wsc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_w}{B_w}\right)$$
(3.42)

Gas flow equation (fg) (ignoring the solubility of gas in water):

$$\frac{\partial}{\partial x}\left[\frac{A_x k_x k_{rg}}{\mu_g B_g}\frac{\partial P_g}{\partial x} + R_{so}\frac{A_x k_x k_{ro}}{\mu_o B_o}\frac{\partial P_o}{\partial x}\right]\Delta x + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{gsc} + R_{so}q_{osc} = \frac{V_b}{\alpha_c}\frac{\partial}{\partial t}\left(\frac{\phi S_g}{B_g} + \frac{R_{so}\,\phi\,S_o}{B_o}\right)$$
(3.43)
Auxiliary equations:
Pcow (Sw) = Po - Pw (3.44)
Pcgo (Sg) = Pg - Po (3.45)
So + Sw + Sg = 1 (3.46)
In Equations 3.41 through 3.46, the dependent variables (unknowns) are the phase pressures
(Po, Pw and Pg) and the phase saturations (So, Sw and Sg). The parameters that appear in the
coefficients, and which are functions of these unknowns (μo, krw, etc.), are not treated as the
problem's principal unknowns, but are specified as part of the data input. However, their
dependence on the principal unknowns introduces non-linearities of varying degrees. Field
units are
A [ft2]; k [perms]; kro, krw, krg [fraction]; P [psi]; x, y, z [ft]; μo, μw, μg [cp]; Bo, Bw [bbl/STB]; Bg
[bbl/SCF]; qosc, qwsc [STB/day]; qgsc [SCF/day]; So, Sw, Sg [fraction]; Rso [SCF/STB]; φ [dimensionless]; Vb [ft3];
t [day]; αc = 5.615.

In all of the multi-phase formulations summarized in Equations 3.29 through 3.46, each phase
is characterized by its own pressure. We achieve closure of a set of equations by using the
capillary pressure and saturation expressions. In the capillary pressure relationships, capillary
pressure is defined as the difference between the pressure of the non-wetting phase and that
of the wetting phase. In some cases, there are incentives for assuming equality between the
phase pressures, thus allowing us to assign a single pressure for all the phases.

The formulation presented in Equations 3.41 through 3.46 allows interphase mass transfer, so
that gas can either come out of or go into solution. However, we assume bulk transfer such
that there is no compositional difference between the dissolved gas and the free gas. By the
same token, the composition of the free gas remains unaltered. Changes in the oil phase
density and viscosity are handled using PVT data, which take into consideration the amount
of gas dissolved in the oil. This approach is commonly known as black-oil modeling, and it
differs significantly from compositional modeling.

Flow Equations Based on Individual Components


Until now, we have emphasized the bulk properties of the mobile phases in the porous media
without considering their compositions and compositional changes. While this is adequate for
many systems, there are others in which compositional effects are significant and do affect
both the transport mechanisms and bulk transport of the phases involved. Examples of such
systems include volatile oil reservoirs, gas condensate reservoirs and enhanced oil recovery
applications such as chemical flooding, miscible flooding, steam injection and so on. In all
these cases, the system's phase behavior plays a dominant role in the transport and fluid
distribution mechanisms. Numerical simulators for handling these specialized problems are
usually more complicated and require a higher degree of numerical maturity and
sophistication.

Isothermal compositional formulation


The strategy for formulating the governing equations of isothermal compositional systems is
the same as for the black-oil model, except that the focus is on each component within the
control volume rather than on the bulk phase. However, we normally assume that there is no
self-diffusion of a component within a phase, and hence the component velocity assumes the
value of the phase velocity.
As previously mentioned, since the system's phase behavior plays a dominant role,
procurement of the necessary PVT data is crucial. In modern computations, we often obtain
these PVT data by using an appropriate equation of state. The common ones used in the
petroleum industry include the Peng-Robinson equation of state (or any of its modifications)
and the Soave-Redlich-Kwong equation of state. One of the major considerations in using
these equations of state is that by virtue of their cubic form, they generally require much less
computational overhead than other, more complex relationships. This is especially significant
when we consider the repetitive calculations that are required in reservoir simulation. The
calculations involved provide such critical information as distribution of components between
the phases, density variation of each of the phases, phase viscosity and other thermophysical properties.
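To illustrate why the cubic form keeps the computational overhead low, the following sketch solves the Peng-Robinson equation of state for the Z-factor of a pure component by finding the roots of its cubic polynomial. The critical properties used are approximate literature values for methane; this is an illustrative sketch, not a simulator-grade flash calculation.

import numpy as np

R = 10.7316   # psia-ft3/(lbmol-R)

def pr_z_factor(T, P, Tc, Pc, omega):
    """Peng-Robinson Z-factor for a pure component (largest real root)."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
    a = 0.45724 * R**2 * Tc**2 / Pc * alpha
    b = 0.07780 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # Z^3 - (1-B)Z^2 + (A - 3B^2 - 2B)Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0, -(1.0 - B), A - 3.0*B**2 - 2.0*B, -(A*B - B**2 - B**3)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()   # vapor root; real.min() would be the liquid root

# Methane at 250 F (709.67 R) and 3000 psia; Tc, Pc, omega approximate:
print(pr_z_factor(709.67, 3000.0, 343.0, 666.0, 0.011))   # ~0.95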
Isothermal flow is a special case where existing temperature gradients within a given
reservoir are negligible. Therefore, in isothermal compositional formulation, temperature
distribution does not come into play. Hence, all the phase behavior and transport calculations
are performed under isothermal conditions.
For an N-component system, we can write N equations of the form of Equation 3.47. Thus,
we will have N equations in (3N+6) unknowns. Table 2 summarizes the (2N+6) auxiliary
equations needed for closure; together with the N flow equations, they match the (3N+6)
unknowns, making the problem well-posed.

Table 2

Non-isothermal compositional formulation


For systems in which either the intrinsic temperature gradient or the externally imposed ones
are significant, the energy balance will play an important role in the flow of fluids and their
distribution in the porous media. A highly dipping reservoir may be subjected to a substantial
geothermal temperature gradient. Reservoirs where such EOR processes as steam flooding
and in situ combustion are taking place will definitely experience severe temperature
gradients. These systems call for consideration of the energy balance. Usually, we use
additional equations to account for enthalpy balance in the reservoir. In any case, the
additional equation is sufficient to achieve closure in view of the additional dependent
variables. Needless to say, with the additional equations comes increased computational
overhead. The energy equation usually accounts for heat transfer by heat conduction and
convection as well as heat storage. The energy and molar balance equations are related by
the various coefficients that appear in both of them.

Boundary and Initial Conditions


Mathematically speaking, the differential equations that describe flow in porous media have
an infinite number of solutions, only one of which will describe a particular problem. We can
obtain this solution by imposing additional constraints known as boundary and initial
conditions. In actuality, we impose these conditions as part of the known data about the
system under consideration. For instance, for a reservoir with strong edge-water drive, we
can set the pressure at the boundary to a constant equaling the initial reservoir pressure,
which implies that the reservoir is in direct communication with an infinitely large aquifer.

The system's physical boundary is divided into two groups: the inner boundary (usually the
location where the well is physically coupled to the reservoir) and the outer boundary (usually
the limits of the reservoir). The conditions that are specified at these boundaries constitute the
boundary conditions. The most often used boundary conditions can be grouped into two main
categories, Dirichlet and Neumann type. In a Dirichlet type boundary condition, the values of
the dependent variables are specified at the boundaries, whereas it is their gradients that are
so specified in the Neumann type.
Depending on what we know at the wellbore (either flow rate or pressure), we can specify the
Dirichlet-type boundary condition (i.e., pressure) or Neumann-type (i.e., flow rate). Similarly,
at the outer boundarydepending on the physical characteristicswe may be able to specify
either Neumann or Dirichlet-type boundary conditions. For instance, if we have a sealed
boundary that allows no flow into or out of the reservoir, we specify the vanishing Neumann-type boundary condition (i.e., zero pressure gradient across the boundary). On the other
hand, a strong edge water drive enables us to specify the Dirichlet-type boundary condition
(i.e., pressure specification). Numerical handling of the inner and outer boundary conditions is
a crucial component in conducting reservoir studies.
Similar to the boundary conditions, it is necessary to describe the original state of the system
before the process under consideration begins (for instance, before initiating a production or
injection process). The conditions that describe the values of the dependent variables at the
pre-set time (such time is usually set to zero) are known as the initial conditions for the
problem. The existing hydrostatic head distribution determines the initial saturation and
pressure distribution.
With the imposition of the boundary and initial conditions, the formulation of the problem is
now complete and ready for solution. What we have is a well-posed mathematical problem
which guarantees the existence of a unique solution.

Analytical vs. Numerical Solution Methods


There are two techniques for solving mathematical reservoir models: analytical and
numerical. Each of these has certain strengths and limitations.

Analytical, or closed-form techniques offer the advantage of providing exact solutions (when
they can be found); furthermore, those solutions are continuous throughout the system. The
types of problems that are amenable to analytical solution, however, are very limited.
Analytical methods fall short when we start dealing with varying formation thickness,
non-uniform porosity and permeability, changing fluid properties, and other such conditions
that describe most real reservoirs.

To find analytical solutions for the type of system that "Mother Nature" generally provides,
we have to modify the problem, sometimes quite drastically, to make it tractable analytically.
What we end up doing is providing an exact solution to an approximate problem (e.g., a
classical well test analysis model).

A numerical solution involves discretizing, or approximating, the mathematical model; that is,
using a numerical tool such that continuous forms of the partial differential equations are
written in a discrete form. We perform this discretization process not only on the partial
differential equations, but also on the physical system. This means that we divide the
physical system into a number of sub-domains that are coupled to one another.

The clear advantage of the numerical approach is that it allows us to assign representative
properties to as many parts of a system as we have information for. However, we must not
forget that we inevitably lose some measure of accuracy in discretizing the partial differential
equations. The net result is that in using a numerical approach, we are providing an
approximate solution to an exact problem.

Numerical Models: Grid Systems


Numerical models provide approximate solutions to exact problems. The degree of exactness
depends largely on the discretization scheme, the heart of which is the type and size of the
grid. Therefore, we must pay adequate attention to selecting the appropriate grid system.
Although this discussion is limited to rectangular coordinate systems, the same basic
principles apply to other coordinate systems.

Body-centered grids
In body-centered (or block-centered) grids, the discrete points are located at the center of
each cell ( Figure 1 ).

Figure 1

There are no discrete points at the reservoir's external boundaries. In setting up the grid
system, we must be especially careful in placing the grid with respect to the wells. First, the
well should be located at the center of its host grid block. Second, each grid block should
host no more than one well. In cases where these conditions are difficult to meet, there are
techniques for handling them. For example, if it is difficult to locate the wells such that they
coincide with the grid centers, we can shift them slightly to satisfy this condition. Also, if we
cannot avoid having more than one well in the same grid block, we can mathematically
replace them with one well of equivalent strength.
Figure 2 shows the same multi-well reservoir in which two wells are combined into one
equivalent well (wells W1 and W2) located in the center of a new block.

Figure 2

In doing so, the number of blocks along the x- and y-directions is decreased by one, which
results in an overall decrease of 6 blocks.
Note that in using a finite number of grids to develop a numerical model, the reservoir's
physical boundary cannot always coincide with a grid-imposed boundary. The finer the grid,
the closer the two boundaries will be. It is necessary to compromise between how much
boundary information needs to be captured (via finite grids) and the amount of computational
overhead involved. Obviously, as the number of grid blocks increases, so do the memory and
the CPU time requirements.

Mesh-centered grids
In mesh-centered (or point-distributed) grids, the discrete points are located at the grid line
intersections . In contrast to body-centered grids, there are discrete points located on the
reservoir boundaries ( Figure 3 ).

Figure 3

As is true for body-centered grids, it is permissible to slightly relocate the wells so that they lie
at the intersection of the grid lines. As shown in Figure 3 , the discrete points in a mesh-centered grid are not necessarily located at grid block centers.
In mesh-centered grid systems, the discrete points that are located at the corners and edges
of the grid represent only a certain portion of the block they are associated with. For instance,
a discrete point located on the corner may represent 1/4 or 3/4 of a block, whereas the
discrete points located on the edges of the grid system represent only 1/2 of a full block.
The finite difference approximations of partial differential equations are independent of the
grid system used. There are some practical advantages to using a body-centered grid in the
case of no-flow boundaries. For systems with constant pressure boundaries, mesh-centered
grids offer better accuracy, since the discrete points (at which pressure is specified) exist at
the reservoir boundaries.

Size and number of grid blocks
Grid size and the number of grid blocks are not independent of each other. In a fixed system
(i.e., a defined reservoir), specifying the grid size determines the number of grid blocks. There
is no hard and fast rule for selecting the grid size for a simulation study. This does not mean
that we have unlimited freedom in selecting the grid size. The degree of freedom is often
limited by the amount of input data available, the information we want to gain from the study,
and the investment we are willing to make in terms of computational overhead. In any case,
there are a number of factors that impose lower and upper bounds on our choice of grid block
size. Within these limits, however, we do have some degree of freedom. Table 1 summarizes
the most salient factors that we need to consider.

Table 1

We must always be aware that the quality of output depends on the quality of input. Each
block added to the system demands information in terms of accurate property representation.
A 40-foot blanket sand without significant porosity and permeability variation, or without
property information in the vertical direction, does not require subdivision into several layers.
By a similar token, in terms of areal coverage, it is unnecessary to divide a reservoir into
blocks beyond where information is known, or where there is significant lateral property
variation. In these two examples, the only consideration on which we must base our selection
is mathematical requirements, which we will discuss later.
A principal goal of reservoir simulation is to define the level of information or detail desired.
We must define a priori what this goal is and what questions we need to answer at each
phase of the study. For instance, if we need to further develop a certain portion of a producing
field, we will need to focus on that area, perhaps by imposing a larger number of blocks to
better represent saturation and pressure distributions. Similarly, if a production well requires
stimulation work that involves perforating and/or fracturing, then we must focus attention in
that section by using finer grids.
In some reservoir studies, the quality and detail of the information we need may require us to
use much finer grid blocks. A case in point is the simulation of an EOR process, where we
need to track a fluid front in order to better control the outcome. In instances like
steamflooding or miscible flooding, where changes are abrupt and the interactions between
the phases are strong, a fine grid system will help capture the details of the changes taking
place.
Reservoir geology is often structurally and stratigraphically complex. Representing this
complexity with a grid system sometimes requires the use of a large number of grid blocks. For
example, it is not possible to construct a grid system for a lenticular sand with several
pinch-outs without imposing finer grids that capture the peculiar features of this system.
The number of wells in the reservoir plays a dominant role in grid size selection. We have
already noted that ideally, each block should host only one well. We also have to consider
that the presence of wells in the reservoir further complicates an already complex system,
since most of the significant activities and rapid changes that take place in the reservoir occur
in the near-wellbore region. As a rule, the more wells we have in a reservoir, the more grid
blocks we need.

Approximating a partial differential equation using its finite-difference analogue introduces
certain errors. The magnitude of these errors and the stability of the numerical algorithm
depend on the grid size. In general, we can say that finer grids produce more accurate results;
in fact, an infinite number of grid blocks would enable a discrete numerical model to collapse
to a continuum representation of the problem.
On the other hand, one must not lose sight of the fact that more grid blocks means higher
computational overhead. As the number of grid blocks increases, so do the memory and CPU
time requirements. It is the responsibility of the simulation engineer to strike an acceptable
balance between these factors.

Grid orientation
The first step in constructing a numerical model is the placement of the axes. Sometimes, for
obvious practical reasons, an inexperienced engineer may be tempted to place the axes
parallel to the margins of the paper. But it isn't that simple. In orienting the grid, we have to
consider two factors:

Permeability anisotropy, if any


Coordinate orthogonality
In anisotropic formations, it is imperative to align the coordinate axes with the principal flow
directions. In doing so, however, we must ensure that we preserve the coordinate system's
orthogonality. When the two principal flow directions are not orthogonal, the major flow
direction should be aligned with one of the axes.
We can most easily illustrate the effect of grid orientation by considering an isotropic reservoir
in which there is no principal flow direction. Consider the five-spot pattern in this system as
shown in Figure 1 .

Figure 1

In studying this system on a unit basis, we have the option of placing the grid so that
production and injection wells are positioned diagonally with respect to each other. The
second option is to construct the grid on a larger unit, so that the injection and production
wells are on adjacent corners.

Note that although we are solving the same problem, the two different grid alignments result
in two different solutions. The difference becomes more significant for cases when the
mobility ratio is greater than one. It becomes even further pronounced in the case of
unfavorable mobility ratio when finer grid blocks are used. However, for favorable mobility
ratio, refining the grid diminishes the disparities between solutions.
One plausible solution to this problem is to use a higher order finite-difference approximation
(e.g., a 9-point finite-difference scheme in areal studies) which brings in the diagonal
interaction between the discrete points.

Local grid refinement


We have emphasized that while grid refinement does yield greater accuracy in numerical
reservoir simulation, a study's cost tends to increase in direct proportion to the number of grid
blocks it uses. A good compromise, and a way to avoid placing blocks where they are not
needed, is to refine the grids around the locality where we require more detailed information,
and/or where rapid changes occur. This is called local grid refinement. Figure 2 shows
several refinement strategies that we can use in conducting reservoir simulation studies.

Figure 2

Part (a) of Figure 2 shows a coarse grid composed of 81 mesh-centered grid points. The
shaded area around the well is the section from which we need detailed information.

We could get this information by performing a global grid refinement as shown in part (b) of
Figure 2, which involves a total of 289 discrete points. Such refinement over the entire
reservoir, however, is unnecessary.

Part (c) of Figure 2 depicts the conventional approach for resolving this problem. This
approach results in the desired level of refinement not only around the wellbore, but also
outside of the area of interest, yielding a total of 169 cells.

Part (d) of Figure 2 illustrates the best solution, which limits refinement to the area of
interest. This scheme uses a total of 137 grid points. We should emphasize that the type of
local grid refinement shown in (d) requires specialized handling of the non-continuous grid
lines at the interface between the refined and the coarse areas.
Figure 3 shows a slightly different refinement strategy.

Figure 3

In this scheme, the level of refinement increases as we move toward the well, where the most
rapid changes take place. A total of four levels of refinement are shown in conventional local
refinement and true local refinement schemes (Parts (a) and (b), respectively).
The ultimate refinement strategy involves dynamically changing the grid size as needed. For
example, in following a flood front movement, the area being refined continually changes as
the front propagates. In this type of adaptive grid refinement scheme, we use a certain set of
criteria such as pressure and/or saturation gradients to delineate the region that needs the
refinement. We use the same set of criteria to remove the previously refined region as the
front moves away from that region. Dynamic grid refinement requires a high level of numerical
maturity and sophistication, and so is only available in advanced simulators.

Numerical Models: Time step selection

Since forecasting is the major thrust of reservoir simulation studies, time is the "fourth
dimension" in mathematical representations of flow dynamics in porous media. A typical
simulation study may cover a number of years. The numerical algorithms will require
subdividing this period into smaller time segments. In this way, the solution over the entire
domain of interest marches from one time period to the next. As in the case of grid block size,
we must control the size of the time step. In some cases, this is necessary for numerical
stability. But even for schemes that are unconditionally stable, we must still control the time
step so that we don't lose anything from the solution's physical meaning.
We can use uniform time steps in numerical modeling, but this is neither necessary nor
warranted by the nature of the reservoir problems being solved. More often than not, the
changes are more rapid initially, thus requiring us to use finer time step sizes in order to
capture the essence of these changes. As time progresses, the system tends toward
pseudo-steady-state behavior, in which changes are more gradual and often occur in linear fashion.
During this period, we can employ larger time steps without sacrificing accuracy.
Although we can change time step sizes manually, most modern simulators implement
automatic time control strategies, which are generally known as automatic time step size
selectors. These selectors operate based on changes in pressure and saturation over the
previous time step. A common rule of thumb is not to allow more than a 5 to 10 percent
change in saturation during one time step. The magnitude of acceptable pressure change
varies, depending on the nature of the problem, between 10 and 100 psi per time step.
As time step size progressively increases, it is common for material balance errors to appear.
This should signal the necessity for placing a cap on the size of the time step. Errors in
material balance reflect the loss of accuracy that results from using coarse time step sizes to
calculate pressure and saturation distributions. Recall that we use these saturation and
pressure distributions to compute flow rates. Existing errors in pressure and saturation values
manifest themselves in the flow rate estimations. Thus, material balance checks help the
engineer to determine the maximum time step size that the particular problem admits, and
that the model we are using can tolerate.
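A minimal sketch of such a selector follows. The saturation and pressure-change targets, the growth factor and the step bounds are illustrative values within the ranges quoted above, not prescriptions from any particular simulator.

def next_time_step(dt, dp_max, ds_max,
                   dp_limit=50.0, ds_limit=0.05,
                   grow=1.5, dt_min=0.01, dt_max=30.0):
    """dt in days; dp_max [psi] and ds_max [fraction] are the largest
    changes over the previous step, taken over all grid blocks."""
    # Scale dt by the most restrictive of the two targets, but never
    # grow faster than the 'grow' factor per step.
    factor = min(dp_limit / max(dp_max, 1e-12),
                 ds_limit / max(ds_max, 1e-12),
                 grow)
    return min(max(dt * factor, dt_min), dt_max)

print(next_time_step(1.0, dp_max=120.0, ds_max=0.02))  # cut to ~0.42 days
print(next_time_step(1.0, dp_max=10.0,  ds_max=0.01))  # grow to 1.5 days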

Finite-Difference Approximation of Reservoir Flow Equations


In addition to converting the spatial and temporal domains (i.e., the reservoir and time) into a
set of coupled discrete sub-domains, we need to convert the continuous form of the governing
differential equations into a discrete form. To do this, we need to utilize some numerical tools.
There are a number of tools that we could employ, including finite-difference, finite-element,
collocation techniques and so forth. The most widely used numerical technique in reservoir
simulation is the finite-difference approach.
The finite-difference approach gives us a great deal of flexibility in handling the non-linear
partial differential equation, in addition to the property distribution in heterogeneous systems
for which an analytical solution is not feasible. The governing equations, as well as the
boundary conditions used for describing flow in porous media, have only first-order and
second-order derivatives, and so we will limit our discussion to these.

First-order derivative
First-order derivatives appear in the governing equations on the right-hand side in the form of
the time derivative (the accumulation term). In addition, first-order derivatives appear when
the gradient is specified across a given boundary. To approximate the first-order derivatives,
we use truncated Taylor series expansion.
Figure 1 introduces the notation we will use throughout this discussion.

Figure 1

In this figure, we show a hypothetical pressure distribution, P, along the x-direction. The
derivative is approximated at the point x (also designated as i), at which the value of pressure
is Pi. The two neighboring points are x - Δx (also designated as i - 1) and x + Δx (also
designated as i + 1). Accordingly, at these two neighboring points, the pressure values are
Pi-1 and Pi+1, respectively.
In general, a Taylor series expansion for evaluating a function f(x) at x ± Δx can be written

$$f(x \pm \Delta x) = f(x) \pm \Delta x\,f'(x) + \frac{(\Delta x)^2}{2!}f''(x) \pm \frac{(\Delta x)^3}{3!}f'''(x) + \cdots$$
(4.1)

Using the notation of Figure 1 , we can write Equation 4.1 as:

$$P_{i\pm1} = P_i \pm \Delta x\left(\frac{\partial P}{\partial x}\right)_i + \frac{(\Delta x)^2}{2!}\left(\frac{\partial^2 P}{\partial x^2}\right)_i \pm \frac{(\Delta x)^3}{3!}\left(\frac{\partial^3 P}{\partial x^3}\right)_i + \cdots$$
(4.2)

Forward-Difference Approximation:
The forward-difference approximation to the first-order derivative at point i uses the values of
the function at points i and i+1. We can obtain this approximation from Equation 4.2 by
truncating all terms after the first-order derivative and rearranging the equation to solve for
(∂P/∂x)i. Note that in forward differencing, only the (+) sign in Equation 4.2 is relevant. The
result is

$$\left(\frac{\partial P}{\partial x}\right)_i = \frac{P_{i+1} - P_i}{\Delta x} + O(\Delta x)$$
(4.3)

The second term on the right-hand side of Equation 4.3 denotes the error in the
approximation and is read as "order of Δx." The magnitude of the error is of the same order
as Δx. In practice, the error term is dropped in writing the finite-difference analogues.
Thus, we write:

$$\left(\frac{\partial P}{\partial x}\right)_i \approx \frac{P_{i+1} - P_i}{\Delta x}$$
(4.4)

which is clearly an approximation. Figure 2 shows the geometrical interpretation of this
approximation.

Figure 2

The exact derivative of P at point i is the slope of the tangent AB to the curve at point O.
Equation 4.4 uses the slope of the secant OC to approximate this derivative. As we can
conclude from the figure, the accuracy of this approximation depends on the shape of the
curve, as well as the length of the interval between points i and i+1. In the limit, as point C
moves progressively closer to point O, the secant OC tends to assume the same slope as the
tangent AB.
In reservoir simulation, we use the forward-difference approximation in conjunction with the
temporal derivative. The two neighboring points in this case represent the old time level, n,
and the new time level, n+1. When this derivative is written at a point i in the spatial domain,
it represents the rate of change of pressure with time at point i. In other words,

$$\left(\frac{\partial P}{\partial t}\right)_i \approx \frac{P_i^{n+1} - P_i^n}{\Delta t}$$
(4.5)

In Equation 4.5, $P_i^{n+1}$ is the unknown that we want to determine, and $P_i^n$ represents the known value of
pressure at the old time level. Remember, we are using a marching scheme in time, where
we obtain information at the new time level from what is already known at the old time level.
Backward-Difference Approximation:
With the same strategy, but now using the neighboring point , we can obtain the backwarddifference approximation to the first-order derivative from Equation 4.2 using the (-) sign:

<

(4.6)

Figure 3 shows the geometrical interpretation of Equation 4.6.

Figure 3

Central-Difference Approximation:
The central-difference approximation to the first-order derivative at point i uses the two
adjacent neighboring points, i-1 and i+1. We can obtain this approximation by averaging
Equations 4.4 and 4.6:

$$\left(\frac{\partial P}{\partial x}\right)_i \approx \frac{P_{i+1} - P_{i-1}}{2\Delta x}$$
(4.7)

Figure 4 shows the geometrical interpretation of Equation 4.7.

Figure 4

It is apparent from Figure 4 that the central difference provides a more accurate approximation to
the first-order derivative than either the forward- or backward-difference approximation does.
Note that the secant CE is almost parallel to the desired tangent AB. We can explain this
increased accuracy by deriving Equation 4.7 directly from Equation 4.2. In this case, when we
write Equation 4.2 two times (once with the (+) sign and once with the (-) sign), and then
subtract one from the other, the second-order terms drop out (they are not truncated) and the
truncation starts with the third-order derivative, resulting in

$$\left(\frac{\partial P}{\partial x}\right)_i = \frac{P_{i+1} - P_{i-1}}{2\Delta x} + O\!\left((\Delta x)^2\right)$$
(4.8)
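The orders of accuracy quoted in Equations 4.3 through 4.8 are easy to verify numerically. The short sketch below compares the three analogues on an arbitrary smooth pressure profile, chosen only because its exact derivative is known.

import numpy as np

P  = lambda x: 1000.0 + 50.0 * np.sin(x)   # hypothetical pressure profile
dP = lambda x: 50.0 * np.cos(x)            # its exact derivative

x, dx = 1.0, 0.1
forward  = (P(x + dx) - P(x)) / dx          # Equation 4.4
backward = (P(x) - P(x - dx)) / dx          # Equation 4.6
central  = (P(x + dx) - P(x - dx)) / (2.0 * dx)   # Equation 4.7

exact = dP(x)
for name, est in [("forward", forward), ("backward", backward),
                  ("central", central)]:
    print(f"{name:8s} {est:10.5f}  error {abs(est - exact):.2e}")
# Halving dx roughly halves the forward/backward errors (O(dx)) but
# cuts the central error by about a factor of four (O(dx^2)).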

Second-order derivative
The left-hand side of the flow equation is composed of second-order derivatives representing
the flux terms. To approximate these second-order derivatives, we use the central-difference
approximation. This involves writing Equation 4.2 once with the (+) sign and then with the (-)
sign; we then add the two resulting equations together to yield

$$\left(\frac{\partial^2 P}{\partial x^2}\right)_i = \frac{P_{i+1} - 2P_i + P_{i-1}}{(\Delta x)^2} + O\!\left((\Delta x)^2\right)$$
(4.9)

Dropping the error term, Equation 4.9 becomes

$$\left(\frac{\partial^2 P}{\partial x^2}\right)_i \approx \frac{P_{i+1} - 2P_i + P_{i-1}}{(\Delta x)^2}$$
(4.10)

For heterogeneous systems where properties vary spatially, the second-order derivatives
appear in the form ∂/∂x(a ∂P/∂x), and the approximation is different from Equation 4.10:

$$\left[\frac{\partial}{\partial x}\left(a\frac{\partial P}{\partial x}\right)\right]_i \approx \frac{a_{i+1/2}\left(P_{i+1} - P_i\right) - a_{i-1/2}\left(P_i - P_{i-1}\right)}{(\Delta x)^2}$$
(4.11)

Note that in Equation 4.11, the dependent variable P is expressed at the nodal points,
whereas the coefficient a is expressed at the block boundaries. The subscripts i + 1/2 and
i - 1/2 simply indicate that a needs to be calculated, using some averaging technique, at the
boundaries between blocks i and i + 1 and blocks i and i - 1, respectively.

Finite-difference schemes
There are two principal groups of finite-difference schemes: explicit and implicit. We can
illustrate the governing concepts for these schemes using the classical diffusivity equation as
it is written in one dimension:

$$\frac{\partial^2 P}{\partial x^2} = a\,\frac{\partial P}{\partial t}$$
(4.12)

Equation 4.12 has a second-order spatial derivative and a first-order temporal derivative (a is
assumed constant). Using the finite-difference approximations for these derivatives, we can
write the finite-difference analogue for Equation 4.12. Before we do that, however, we must
answer one question: at what time level do we implement the approximation to the spatial
derivative? The two most common approaches are to evaluate it at the old time level (time
level n) or the new time level (time level n+1). These lead, respectively, to explicit and implicit
schemes.
Accordingly, the explicit finite-difference analogue to Equation 4.12 is

$$\frac{P_{i+1}^n - 2P_i^n + P_{i-1}^n}{(\Delta x)^2} = a\,\frac{P_i^{n+1} - P_i^n}{\Delta t}$$
(4.13)

Since pressures at the old time level are known at all locations, the only unknown in Equation
4.13 is $P_i^{n+1}$. In fact, we can rearrange Equation 4.13 to solve for this unknown:

$$P_i^{n+1} = P_i^n + \frac{\Delta t}{a(\Delta x)^2}\left(P_{i+1}^n - 2P_i^n + P_{i-1}^n\right)$$
(4.14)

When Equation 4.14 is written for every node, we have one unknown in one equation for each
node. Thus, we can solve explicitly, one unknown at a time, for the pressures at the new time
level.
The implicit finite-difference analogue to Equation 4.12 is:

$$\frac{P_{i+1}^{n+1} - 2P_i^{n+1} + P_{i-1}^{n+1}}{(\Delta x)^2} = a\,\frac{P_i^{n+1} - P_i^n}{\Delta t}$$
(4.15)

It is obvious that Equation 4.15 has three unknowns: $P_{i-1}^{n+1}$, $P_i^{n+1}$ and $P_{i+1}^{n+1}$. By collecting
all the unknown terms on one side, we obtain the characteristic form of the implicit finite-difference scheme for Equation 4.12:

$$P_{i-1}^{n+1} - \left(2 + \frac{a(\Delta x)^2}{\Delta t}\right)P_i^{n+1} + P_{i+1}^{n+1} = -\frac{a(\Delta x)^2}{\Delta t}\,P_i^n$$
(4.16)

When we write Equation 4.16 for all the nodes where pressure is unknown, we obtain a
system of simultaneous equations. The solution to this system of equations provides the
pressure distribution at the new time level.
With the explicit scheme (Equation 4.14), we only need to solve one equation at a time for the
unknown pressure at a particular node. For the implicit scheme (Equation 4.16), we must
solve all the equations simultaneously. But while the explicit scheme looks more attractive in
terms of its simplicity, it is less often used because of its restrictive stability constraints.
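The contrast between the two schemes can be seen in a few lines of code. The minimal sketch below implements the explicit update of Equation 4.14 and the implicit system of Equation 4.16 for a 1-D problem with fixed end pressures; the grid, the coefficient a and the time step are illustrative values.

import numpy as np

n, dx, dt, a = 10, 1.0, 0.2, 1.0
p = np.full(n, 1000.0)
p[0], p[-1] = 2000.0, 500.0          # fixed (Dirichlet) end pressures

def explicit_step(p):
    # Equation 4.14: new pressures from old ones, one node at a time;
    # stable only while dt/(a*dx**2) <= 1/2.
    new = p.copy()
    lam = dt / (a * dx**2)
    new[1:-1] = p[1:-1] + lam * (p[2:] - 2.0 * p[1:-1] + p[:-2])
    return new

def implicit_step(p):
    # Equation 4.16: one simultaneous (tri-diagonal) solve per step.
    m = np.zeros((n, n))
    rhs = -(a * dx**2 / dt) * p
    for i in range(1, n - 1):
        m[i, i - 1], m[i, i], m[i, i + 1] = 1.0, -(2.0 + a * dx**2 / dt), 1.0
    m[0, 0] = m[-1, -1] = 1.0
    rhs[0], rhs[-1] = p[0], p[-1]    # hold the boundary values
    return np.linalg.solve(m, rhs)

p = explicit_step(p)                  # one explicit step...
for _ in range(50):                   # ...then march implicitly
    p = implicit_step(p)
print(p.round(1))                     # approaches the linear steady profile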

Flow equations in the finite-difference form


Having studied finite-difference approximations to partial differential equations and the
differences between the explicit and the implicit formulations, we can now write a finite-difference analogue for porous media flow equations. For illustrative purposes (and for the
sake of brevity), we will develop this analogue for a single-phase flow equation. Let us
consider the flow equation for the slightly compressible single-phase flow problem in three
dimensions:

$$\frac{\partial}{\partial x}\left(\frac{A_x k_x}{\mu B}\frac{\partial P}{\partial x}\right)\Delta x + \frac{\partial}{\partial y}\left(\frac{A_y k_y}{\mu B}\frac{\partial P}{\partial y}\right)\Delta y + \frac{\partial}{\partial z}\left(\frac{A_z k_z}{\mu B}\frac{\partial P}{\partial z}\right)\Delta z + q_{sc} = \frac{V_b\,\phi\,c}{\alpha_c B}\frac{\partial P}{\partial t}$$
(4.17)

To approximate the second-order derivatives that appear in Equation 4.17, we invoke
Equation 4.11 and obtain

$$\left(\frac{A_x k_x}{\mu B\,\Delta x}\right)_{i+1/2,j,k}\!\left(P_{i+1,j,k}^{n+1} - P_{i,j,k}^{n+1}\right) - \left(\frac{A_x k_x}{\mu B\,\Delta x}\right)_{i-1/2,j,k}\!\left(P_{i,j,k}^{n+1} - P_{i-1,j,k}^{n+1}\right) + (\text{analogous } y\text{- and } z\text{-direction terms}) + q_{sc_{i,j,k}} = \left(\frac{V_b\,\phi\,c}{\alpha_c B\,\Delta t}\right)_{i,j,k}\!\left(P_{i,j,k}^{n+1} - P_{i,j,k}^{n}\right)$$
(4.18)
We write Equation 4.18 in this explicit form simply because it is conducive to recognizing
one of the most important groups in reservoir simulation, the transmissibility terms. In
Equation 4.18, the pressure coefficients on the left-hand side are known as the
transmissibilities. These define the interaction between two neighboring grid blocks. Figure 5
shows two neighboring blocks (i, j, k and i+1, j, k) whose properties are different, and the
transmissibility term,

Figure 5

$$T_{x_{i+1/2,j,k}} = \left(\frac{A_x k_x}{\Delta x}\right)_{i+1/2,j,k}\left(\frac{1}{\mu B}\right)_{i+1/2,j,k}$$

calculated at the interface.
The term Axkx/Δx is the constant part of the transmissibility group, while the 1/(μB) term could
depend on the pressure. To calculate these two terms at the interface, we must use
averaging procedures. For the constant part of the transmissibility, if we consider the two
blocks as being connected in series, it becomes necessary to use harmonic averaging, such
as

$$\left(\frac{A_x k_x}{\Delta x}\right)_{i+1/2} = \frac{2\left(\dfrac{A_x k_x}{\Delta x}\right)_i\left(\dfrac{A_x k_x}{\Delta x}\right)_{i+1}}{\left(\dfrac{A_x k_x}{\Delta x}\right)_i + \left(\dfrac{A_x k_x}{\Delta x}\right)_{i+1}}$$
(4.19)

The second term, 1/(μB), is considered a weak function of pressure; that is, it exhibits weak non-linearity. We calculate this term at the arithmetic average of the pressures of the two neighboring
blocks:

$$\left(\frac{1}{\mu B}\right)_{i+1/2} = \frac{1}{\mu(\bar{P})\,B(\bar{P})}\,, \qquad \bar{P} = \frac{P_i + P_{i+1}}{2}$$
Note that under multiphase flow conditions, a typical transmissibility term will include the relative
permeability to the fluid for which the flow equation is being written. In that case, the
transmissibility group between blocks i, j, k and i+1, j, k is

$$T_{f_{i+1/2,j,k}} = \left(\frac{A_x k_x}{\Delta x}\right)_{i+1/2,j,k}\left(\frac{1}{\mu_f B_f}\right)_{i+1/2,j,k}\left(k_{rf}\right)_{i+1/2,j,k}$$
(4.20)

We calculate all of the terms in this multiphase transmissibility group as in the single-phase
case, except for the new term (krf). The relative permeability term is a strong function of
saturation and exhibits a high degree of non-linearity. In expressing this term at the interface
of two neighboring blocks, we need to establish the flow direction and use the saturation of
the upstream block. This procedure is known as single-point upstream weighting, and can be
summarized as

$$\left(k_{rf}\right)_{i+1/2} = \begin{cases} k_{rf}(S_i) & \text{if flow is from block } i \text{ to block } i+1 \\ k_{rf}(S_{i+1}) & \text{if flow is from block } i+1 \text{ to block } i \end{cases}$$
(4.21)

A more accurate representation of the relative permeability at the interface is known as two-point upstream weighting, which (for flow from block i to block i+1) is expressed as

$$\left(k_{rf}\right)_{i+1/2} = \frac{3\,k_{rf}(S_i) - k_{rf}(S_{i-1})}{2}$$
(4.22)

We obtain Equations 4.21 and 4.22 by using a Taylor series expansion of the relative
permeability function. While Equation 4.21 yields a first-order approximation, Equation 4.22 is
a second-order approximation, and thus more accurate.
Although we have illustrated the transmissibility groups only for the i+1/2, j, k interface, similar
analyses apply to the other interfaces. In summary, it is imperative to note that the
transmissibility group consists of three components, each of which is calculated differently
(a short sketch follows the list below):

The constant component is calculated using harmonic averaging
The weakly non-linear component is estimated using arithmetic averaging
The strongly non-linear component is obtained using upstream weighting
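The sketch below assembles an interface transmissibility from these three components. The property values and the placeholder 1/(μB) function are illustrative assumptions, not data from the text.

def constant_part(ak_dx_i, ak_dx_ip1):
    # harmonic average of (A_x k_x / dx), Equation 4.19
    return 2.0 * ak_dx_i * ak_dx_ip1 / (ak_dx_i + ak_dx_ip1)

def weak_part(p_i, p_ip1, inv_mu_b):
    # 1/(mu*B) evaluated at the arithmetic average pressure
    return inv_mu_b(0.5 * (p_i + p_ip1))

def strong_part(kr_i, kr_ip1, p_i, p_ip1):
    # single-point upstream weighting of kr, Equation 4.21:
    # take kr from the block the fluid flows out of
    return kr_i if p_i >= p_ip1 else kr_ip1

inv_mu_b = lambda p: 1.0 / (1.2 * 0.9)   # placeholder: constant mu*B

T = (constant_part(150.0, 90.0)
     * weak_part(3000.0, 2800.0, inv_mu_b)
     * strong_part(0.6, 0.3, 3000.0, 2800.0))
print(T)   # flow here is from block i, so kr = 0.6 is used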

Incorporation of boundary conditions


To obtain a particular numerical solution to a flow problem, we must impose specific boundary
conditions. First, let us consider the outer boundary. Figure 6 shows a pure Dirichlet-type
boundary condition with a mesh-centered grid system.

Figure 6

The boundary conditions imposed on the system (as shown in Figure 6 ) translate to the
following equations:

$$P_{1,j} = P_C \quad (j = 1,\ldots,NY), \qquad P_{NX,j} = P_B \quad (j = 1,\ldots,NY),$$
$$P_{i,1} = P_D \quad (i = 1,\ldots,NX), \qquad P_{i,NY} = P_A \quad (i = 1,\ldots,NX)$$
(4.23)

where NX and NY represent the maximum number of blocks along the x- and y-directions,
respectively.
The implementation of Equation 4.23 will result in pressure discontinuity at the corner points
(e.g., at node (1,1), the two adjoining edges imply both P1,1 = PC and P1,1 = PD, even though
PD ≠ PC). Although at first glance this may seem to pose a problem, it
does not. Finite-difference equations are only written at the nodes where pressures are not
specified. In a conventional 5-point finite-difference scheme, when the equation is written for
node (2,2), we do not write the equation for node (1,1) or the other corner points. If we use a
9-point finite-difference scheme, then the corner points do come into the picture, and we simply
use the arithmetic average of the values assigned along the two adjoining edges.
Although we use a mesh-centered grid system in this discussion, the implementation of
Dirichlet-type boundary conditions for a body-centered grid system follows a similar logic. On
the other hand, the implementation of Neumann-type boundary conditions for the body-centered and mesh-centered grid systems differs. This is because different nodes are reflected
with respect to the boundary. The reflection nodes (which are actually outside of the domain
of interest) constitute a practical means of incorporating Neumann-type boundary conditions.
In creating a reflection node, we simply need to treat the boundary as a mirror showing the
images of the actual nodes that are adjacent to the boundary. Furthermore, note that these
image nodes are assigned the same properties as the actual nodes that they reflect. Figure 7
shows a body-centered grid with Neumann-type boundary conditions.

Figure 7

In this case, the finite-difference representation of the boundary conditions is as follows:

$$\frac{P_{1,j} - P_{0,j}}{\Delta x} = C_1\,, \qquad \frac{P_{NX+1,j} - P_{NX,j}}{\Delta x} = C_2\,,$$
$$\frac{P_{i,1} - P_{i,0}}{\Delta y} = C_3\,, \qquad \frac{P_{i,NY+1} - P_{i,NY}}{\Delta y} = C_4$$
(4.24)

The above Equations 4.24 contain the reflection nodes (imaginary nodes), which we can
calculate as

$$P_{0,j} = P_{1,j} - C_1\,\Delta x\,, \qquad P_{NX+1,j} = P_{NX,j} + C_2\,\Delta x\,,$$
$$P_{i,0} = P_{i,1} - C_3\,\Delta y\,, \qquad P_{i,NY+1} = P_{i,NY} + C_4\,\Delta y$$
(4.25)
Figure 8 shows a similar treatment for the mesh-centered grid system.

Figure 8

The important difference is the node being reflected. The implementation of the finite-difference approximation in this case will be as follows:

$$\frac{P_{2,j} - P_{0,j}}{2\Delta x} = C_1\,, \qquad \frac{P_{NX+1,j} - P_{NX-1,j}}{2\Delta x} = C_2\,,$$
$$\frac{P_{i,2} - P_{i,0}}{2\Delta y} = C_3\,, \qquad \frac{P_{i,NY+1} - P_{i,NY-1}}{2\Delta y} = C_4$$
(4.26)

Again, if we solve for the reflection nodes, we obtain the following sets of equations:

$$P_{0,j} = P_{2,j} - 2C_1\,\Delta x\,, \qquad P_{NX+1,j} = P_{NX-1,j} + 2C_2\,\Delta x\,,$$
$$P_{i,0} = P_{i,2} - 2C_3\,\Delta y\,, \qquad P_{i,NY+1} = P_{i,NY-1} + 2C_4\,\Delta y$$
(4.27)
Note that for Neumann-type boundary conditions, pressures are still unknown for the
boundary nodes, and so must be computed. To write the finite-difference equations for these
nodes, we must obtain values of the pressure at the reflection nodes. We achieve this by
using Equations 4.25 and 4.27. Although introducing the reflection nodes may appear to be
adding more unknowns to the problem, each reflection node brings with it a new equation,
through which the boundary condition is incorporated. Finally, for the special case of no-flow
boundary conditions, pressures at the reflection nodes are equal to those at the
corresponding reflected nodes (simply set the constants C1 through C4 equal to zero in
Equations 4.25 and 4.27).
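The reflection-node bookkeeping reduces to a few lines of code. The minimal sketch below pads a 1-D body-centered pressure array with image values per Equation 4.25; the array and gradient values are illustrative.

import numpy as np

def padded_with_ghosts(p, dx, c_left=0.0, c_right=0.0):
    """Return p extended by one image (reflection) node on each side,
    following Equation 4.25; c = 0 recovers the no-flow case."""
    left = p[0] - c_left * dx      # P0, from (P1 - P0)/dx = C1
    right = p[-1] + c_right * dx   # P(NX+1), from (P(NX+1) - PNX)/dx = C2
    return np.concatenate(([left], p, [right]))

p = np.array([1500.0, 1450.0, 1400.0])
print(padded_with_ghosts(p, dx=100.0))                 # no-flow: mirrored values
print(padded_with_ghosts(p, dx=100.0, c_right=-0.5))   # imposed gradient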
Having dealt with outer boundary conditions, we now focus on the inner boundary (i.e., the
well). Figure 9 shows a typical grid block hosting a vertical well.

Figure 9

We may express the relationship between the block pressure, the sandface pressure and the
flow rate as:

qsc = -PI (Pi, j, k - Psf) (4.28)


where PI is the productivity index, Pi, j, k is the pressure of the block hosting the well and Psf is
the sandface pressure. The sign convention is such that a positive q means injection and a
negative q indicates production.
The PI of a vertical well, as defined by Peaceman (1983), is given in Table 1 .

Table 1

As mentioned earlier, we can specify either the flow rate at the wellbore or the flowing
sandface pressure. For a flow rate specification, we substitute the specified value into Equation 4.18
(written for the block hosting the well; for blocks with no wells, we set the flow rates equal to zero). After
solving for the pressure distribution, we come back to Equation 4.28 to solve for the sandface
pressure. When we specify sandface pressure, we substitute Equation 4.28 into Equation
4.18, replacing the qi, j, k term. After solving for the block pressures, we can then use Equation
4.28 to obtain the resulting flow rate.
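As a concrete illustration of Equation 4.28, the sketch below computes a PI from the standard Peaceman relation for a vertical well in an isotropic square block (an assumption here; the text's Table 1 gives the general definition) and back-calculates the sandface pressure for a specified rate. Consistent (Darcy-type) units are assumed, so no conversion constants appear; all values are hypothetical.

import numpy as np

def peaceman_pi(k, h, mu, B, dx, rw, s=0.0):
    # Peaceman's equivalent well-block radius for an isotropic
    # square block, r_eq ~ 0.2*dx; consistent units assumed.
    r_eq = 0.2 * dx
    return 2.0 * np.pi * k * h / (mu * B * (np.log(r_eq / rw) + s))

def sandface_pressure(q_sc, p_block, pi):
    # rearrangement of Equation 4.28: q_sc = -PI * (P_block - P_sf)
    return p_block + q_sc / pi

pi = peaceman_pi(k=0.1, h=50.0, mu=1.0, B=1.2, dx=200.0, rw=0.35)
# a producer, so q_sc is negative by the text's sign convention:
print(sandface_pressure(q_sc=-200.0, p_block=3000.0, pi=pi))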
There have been a number of formulations proposed for implementing horizontal wells in
reservoir simulation. One simple approach is to use Equation 4.28 with appropriately defined
parameters (e.g., for a horizontal well oriented along the x-direction, we replace all the
x-direction related terms in the PI definition of Table 1 with the corresponding terms in the
z-direction and vice-versa; thus, kx becomes kz, Δx becomes Δz, and h becomes Δx).
With the implementation of the inner and outer boundary conditions, the finite-difference
representation is now complete and ready for solution.

Shorthand notation for the finite-difference equations


Because finite-difference analogues are usually cumbersome, even for relatively simple,
single-phase flow problems, we commonly employ a shorthand notation such as that of the
Strongly Implicit Procedure, or SIP (Stone, 1968). For example, we can write Equation 4.18 in
rearranged SIP form as follows:

$$Z_{i,j,k}P_{i,j,k-1}^{n+1} + B_{i,j,k}P_{i,j-1,k}^{n+1} + D_{i,j,k}P_{i-1,j,k}^{n+1} + E_{i,j,k}P_{i,j,k}^{n+1} + F_{i,j,k}P_{i+1,j,k}^{n+1} + H_{i,j,k}P_{i,j+1,k}^{n+1} + S_{i,j,k}P_{i,j,k+1}^{n+1} = Q_{i,j,k}$$
(4.33)

Z, B, D, F, H and S are the transmissibility terms representing the interaction of block i, j, k
with its six neighboring blocks. E represents the summation of the transmissibilities and the
accumulation term coefficient, while Q includes all the known terms collected on the right-hand side. Figure 10 graphically shows the interaction of block i, j, k with the surrounding
blocks.

Figure 10

The principal advantage of the SIP notation is its inherent flexibility. For instance, having
written it for three-dimensional flow, some of the SIP coefficients simply drop out for one- and
two-dimensional flows. Table 2 summarizes the possible combinations and their
corresponding SIP coefficients. Note also that for blocks located at the boundary, we can
easily implement a no-flow boundary by setting the corresponding SIP coefficient equal to
zero across that boundary (see the sketch following Table 2).

Table 2
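The boundary handling described above amounts to simple bookkeeping, as the minimal sketch below suggests for a sealed 3x3x2 grid (the same dimensions as the three-dimensional example discussed later). The uniform transmissibility value, the accumulation coefficient and the sign convention used for E are illustrative assumptions.

NX, NY, NZ = 3, 3, 2

def sip_coefficients(i, j, k, trans=1.0, accum=0.1):
    """SIP coefficients for block (i, j, k) of a fully sealed grid."""
    c = {"Z": trans, "B": trans, "D": trans,
         "F": trans, "H": trans, "S": trans}
    # zero the connection across each sealed outer face
    if i == 1:  c["D"] = 0.0
    if i == NX: c["F"] = 0.0
    if j == 1:  c["B"] = 0.0
    if j == NY: c["H"] = 0.0
    if k == 1:  c["Z"] = 0.0
    if k == NZ: c["S"] = 0.0
    # E: summation of the neighbor terms and the accumulation
    # coefficient; the negative sign is one common convention.
    c["E"] = -(sum(c.values()) + accum)
    return c

print(sip_coefficients(1, 1, 1))   # corner block: Z, B and D are zero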

Special considerations
In writing finite-difference analogues for partial differential equations, we must always
consider a number of important points, which principally pertain to the accuracy of the
resulting solution. The specific points of consideration include solution stability, consistency
and truncation error. Rarely does the user of a reservoir simulator need to worry about these
problems, but one should be aware of them.
Stability analysis is meant to ensure that the round-off error (which arises, as time evolves,
from the computer's finite word length) does not magnify in such a way as to obscure
the solution. Stability analyses focus on defining the stability criterion for a given finite-difference scheme. These types of analyses provide results such as conditional stability and
unconditional stability or instability. When we encounter conditional stability, the criterion will
bring in certain limitations on the block dimensions and time step sizes. Although
unconditional stability implies no restriction on the block and time step sizes, we should keep in
mind that the physical meaning of the solution can be lost if we assign unrealistically large
block and time step sizes.
The next important question to answer is whether the proposed scheme is consistent (i.e.,
compatible) with the original partial differential equation. In other words, does the proposed
finite-difference analogue collapse to the original partial differential equation in the limit as the
block dimensions and time step size approach zero? The truncation error is the difference
between the original partial differential equation and its finite-difference analogue. For a
compatible scheme, we should expect that the truncation error will disappear as the block
dimensions and the time step size become infinitesimally small. If this does not happen, we
have a consistency problem. In other words, the proposed scheme produces a solution to a
different problem (partial differential equation).
It is worth noting that, in general, the schemes used in reservoir simulation (at least the ones
discussed in this text) meet stability as well as consistency criteria. If these schemes are
used, the engineer need not worry about these types of problems.

Single-Phase Flow Equations: Solution Methods


One-dimensional flow system
Figure 1 shows a simple one-dimensional system composed of five blocks.

Figure 1

A no-flow boundary condition is imposed on the left end, while a non-zero pressure gradient is
specified on the right. A well with a flow rate qs is located in block #2. We assume that the
flowing fluid is slightly compressible.
Writing Equation 4.33 with the appropriate coefficients, we have

$$D_i P_{i-1} + E_i P_i + F_i P_{i+1} = Q_i$$
(5.1)

Equation 5.1 represents the characteristic equation for any block in Figure 1 . Pressure is not
known in any of these five blocks, and so we must write Equation 5.1 for each of them (i.e., i
= 1,..., 5):

$$D_1 P_0 + E_1 P_1 + F_1 P_2 = Q_1$$
$$D_2 P_1 + E_2 P_2 + F_2 P_3 = Q_2$$
$$D_3 P_2 + E_3 P_3 + F_3 P_4 = Q_3$$
$$D_4 P_3 + E_4 P_4 + F_4 P_5 = Q_4$$
$$D_5 P_4 + E_5 P_5 + F_5 P_6 = Q_5$$

Notice that although subscripts 0 and 6 (denoting "reflection" blocks 0 and 6, respectively)
appear in the first and the last equations, they will be removed by the proper implementation
of the imposed boundary conditions. This is done as follows:

At block #1, a no-flow boundary condition exists, implying that

$$\frac{P_1 - P_0}{\Delta x} = 0$$

or

$$P_0 = P_1$$

so that the D1P0 term is absorbed into the coefficient of P1. Moving to block #5, the imposed
constant pressure gradient, C, implies that

$$\frac{P_6 - P_5}{\Delta x} = C$$

or

$$P_6 = P_5 + C\,\Delta x$$

so that F5P6 contributes F5P5 to the left-hand side and the known quantity -F5CΔx to Q5.
These manipulations eliminate both P0 and P6, leaving us with five equations in five
unknowns.

In implementing the flow rate specified at the well block, we invoke the definition of Q2, which
collects the known terms of block #2, including the well rate:

$$Q_2 = -q_s - \left(\frac{V_b\,\phi\,c}{\alpha_c B\,\Delta t}\right)_2 P_2^{\,n}$$

Table 2a and Table 2b summarize the remaining coefficients and present them in matrix form.

Table 2a

Table 2b

The 5x5 coefficient matrix in Table 2 is a tri-diagonal coefficient matrix, which is normally
written in a more compact form as follows:

When we solve this system of equations, we obtain the pressure distribution in the system. We then
calculate the sandface pressure in the wellbore using Equation 4.28. The objective is to determine whether the
reservoir under study can support the imposed flow rate at the wellbore. Although the calculation of
sandface pressure, Psf, may appear to be superfluous, it is actually an important parameter in production
planning. For instance, the sandface pressure for the specified flow rate qs can become too low to
support the friction losses in the production tubing and the associated peripheral wellbore devices; at
this point, a pump (e.g., downhole or sucker rod) may be needed to augment the flow energy. In fact, it
is not impossible for the calculated sandface pressure to assume a negative value, which, of course, is
unrealistic. In essence, what this tells the engineer is that the reservoir can no longer support the desired
production rate.
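Tri-diagonal systems of this kind are almost always solved with the Thomas algorithm rather than a general matrix routine. The sketch below is a minimal implementation; the 5x5 system at the end is illustrative, not the actual coefficients of Table 2.

import numpy as np

def thomas(d, e, f, q):
    """Solve a tri-diagonal system: d, e, f are the sub-, main and
    super-diagonals (d[0] and f[-1] unused); q is the right-hand side."""
    n = len(e)
    e, q = e.astype(float), q.astype(float)
    for i in range(1, n):                 # forward elimination
        w = d[i] / e[i - 1]
        e[i] -= w * f[i - 1]
        q[i] -= w * q[i - 1]
    p = np.zeros(n)
    p[-1] = q[-1] / e[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        p[i] = (q[i] - f[i] * p[i + 1]) / e[i]
    return p

# Illustrative, diagonally dominant 5x5 system:
d = np.array([0.0, 1.0, 1.0, 1.0, 1.0])        # D_i
f = np.array([1.0, 1.0, 1.0, 1.0, 0.0])        # F_i
e = np.array([-3.0, -3.0, -3.0, -3.0, -3.0])   # E_i
q = np.array([-2000.0, -150.0, 0.0, 0.0, -500.0])
print(thomas(d, e, f, q))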

Two-dimensional flow system


Figure 2 is a simple two-dimensional system partitioned into nine blocks.

Figure 2

The reservoir under consideration has three no-flow boundaries and one constant-pressure
boundary. A well is located in block (2,2), at which sandface pressure is specified as Psf.
Although there are nine blocks in the system, the constant pressure specification on the
boundary blocks (1,1), (2,1), and (3,1) means that only six blocks have unknown pressures.
Hence, we need to write Equation 4.33 for just these six blocks:

To implement the boundary conditions (similar to the one-dimensional case), we can write the
following:

In implementing the inner boundary condition at the well block (2,2) where sandface pressure is
specified, we need to invoke the proper equations for E and Q.

Table 3a

Table 3a and Table 3b summarize the resulting coefficient matrix, unknown vector and
right-hand-side vector for the two-dimensional, slightly compressible fluid flow problem.

Table 3b

The matrix in Table 3 is a penta-diagonal coefficient matrix, and it is normally written in the
more compact form shown below:

Solving the system of equations will yield the pressure distribution in the system. The final
step is to calculate the resulting flow rate (as determined by the computed pressure
distribution) using the equation

qsc = -PI (Pi,j,k - Psf)

where PI is the productivity index, Pi,j,k is the pressure of the block hosting the well and Psf is the
sandface pressure. The sign convention is such that a positive q means injection and a negative q
indicates production.

Three-dimensional flow system

In this third example, we illustrate the logical continuity of this procedure using a
three-dimensional system. Figure 3 shows a three-dimensional reservoir with 18 blocks in which a
slightly compressible fluid is flowing.

Figure 3

The reservoir is volumetric (completely sealed), and there is a well in the center of the field,
completed through only the top layer (2,2,2). Assume that a constant flow rate of qs is
specified. We start with the flow equation in its finite difference form:

We then write this equation for every block at which the pressure is unknown, i.e., for

(i,j,k) = (1,1,1), (1,1,2), (2,1,1), (2,1,2), (3,1,1), (3,1,2),
(1,2,1), (1,2,2), (2,2,1), (2,2,2), (3,2,1), (3,2,2),
(1,3,1), (1,3,2), (2,3,1), (2,3,2), (3,3,1), (3,3,2)
In this case, there would be a total of 18 equations. The long-hand notation used here is only
for illustrative purposes; ordinarily these equations are automatically generated within the
computer model. Again, in implementing the no-flow boundary conditions we should simply
observe the following:

Zi, j, 1 = 0 for i = 1,2,3 and j = 1,2,3

Bi, 1, k = 0 for i = 1,2,3 and k = 1,2

D1, j, k = 0 for j = 1,2,3 and k = 1,2

F3, j, k = 0 for j = 1,2,3 and k = 1,2

Hi, 3, k = 0 for i = 1,2,3 and k = 1,2

Si, j, 2 = 0 for i = 1,2,3 and j = 1,2,3

To implement the constant flow rate specification at the well block, we invoke the appropriate
equation:

As clearly shown, a three-dimensional single-phase problem leads to a hepta-diagonal coefficient
matrix: the main diagonal (E) is flanked by the D and F diagonals for the i-direction neighbors,
the B and H diagonals for the j-direction neighbors, and the Z and S diagonals for the k-direction
neighbors, with offsets that depend on the order in which the blocks are numbered.
When we solve the system of equations with the hepta-diagonal coefficient matrix structure,
we obtain the reservoir pressure distribution. We use this pressure distribution, together with
the flow rate specification at the wellbore, in calculating the flowing sandface pressure at the
well.
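The bookkeeping behind the hepta-diagonal structure is easy to make concrete. The sketch below assumes natural ordering with i varying fastest, then j, then k (one common but not unique convention); it maps a block's 1-based (i, j, k) indices to a matrix row and lists the columns its seven coefficients occupy.

nx, ny, nz = 3, 3, 2   # the 18-block example above

def row(i, j, k):
    # Map 1-based block indices (i, j, k) to a 0-based matrix row.
    return (i - 1) + (j - 1) * nx + (k - 1) * nx * ny

# For a given block, the seven non-zero columns sit at these offsets from the
# main diagonal: Z (k-1), B (j-1), D (i-1), E (the block itself), F (i+1),
# H (j+1) and S (k+1).
offsets = [-nx * ny, -nx, -1, 0, 1, nx, nx * ny]
r = row(2, 2, 2)
print(r, [r + o for o in offsets])   # columns outside 0..17 correspond to the
                                     # zeroed boundary coefficients (here S)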

Irregularly bounded reservoirs


The tri-, penta-, and hepta-diagonal matrix structures obtained for the three previous
examples are quite standard. However, discretization of irregularly-bounded reservoirs may
yield other matrix structures. Figure 4 illustrates a simple example.

Figure 4

For the sake of brevity, we will adopt a simple block numbering scheme, as shown in Figure 4.
The ten equations for this reservoir are as follows (cf. Equation 4.33):

E1P1 + F1P2 + H1P5 = Q1
D2P1 + E2P2 = Q2
E3P3 + F3P4 + H3P6 = Q3
D4P3 + E4P4 + F4P5 + H4P7 = Q4
B5P1 + D5P4 + E5P5 + H5P8 = Q5
B6P3 + E6P6 + F6P7 = Q6
B7P4 + D7P6 + E7P7 + F7P8 + H7P10 = Q7
B8P5 + D8P7 + E8P8 + F8P9 = Q8
D9P8 + E9P9 = Q9
B10P7 + E10P10 = Q10
The resulting 10x10 coefficient matrix has the following non-zero structure (x marks a
non-zero element, and dots mark zeros):

x x . . x . . . . .
x x . . . . . . . .
. . x x . x . . . .
. . x x x . x . . .
x . . x x . . x . .
. . x . . x x . . .
. . . x . x x x . x
. . . . x . x x x .
. . . . . . . x x .
. . . . . . x . . x
Notice that the above coefficient matrix is hepta-diagonal instead of penta-diagonal, as would
normally be expected from a two-dimensional flow problem. The purpose of this example is to
show that some problems do not necessarily fit the standard mold. However, there are
methods for handling such special cases.
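When the structure is non-standard like this, it is often easiest to assemble the coefficient matrix directly from a block-connection list rather than assume fixed diagonals. The sketch below does this with scipy.sparse; the connection list is read off the Figure 4 numbering used above, while the numerical values are placeholders standing in for the actual transmissibility and accumulation terms.

import numpy as np
from scipy.sparse import lil_matrix

n = 10
# block -> connected blocks, per the Figure 4 numbering used above
neighbors = {1: [2, 5], 2: [1], 3: [4, 6], 4: [3, 5, 7], 5: [1, 4, 8],
             6: [3, 7], 7: [4, 6, 8, 10], 8: [5, 7, 9], 9: [8], 10: [7]}

A = lil_matrix((n, n))
for blk, conns in neighbors.items():
    for c in conns:
        A[blk - 1, c - 1] = 1.0                 # placeholder transmissibility
    A[blk - 1, blk - 1] = -(len(conns) + 0.1)   # placeholder diagonal term

print((A.tocsr().toarray() != 0).astype(int))   # reproduces the pattern above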

Multi-Phase Flow Equations: Solution Methods


Simulating multiphase fluid flow in porous media involves solving a system of coupled nonlinear partial differential equations. Similar to the case of single-phase flow models,
developing a computer model for these types of systems requires the use of finite-difference
approximation to discretize these equations. The various solution techniques differ with
respect to how we manipulate the governing partial differential equations. The following
material summarizes the most prominent methods used for handling multiphase flow
equations.

IMPES method
We can trace the origin of the Implicit Pressure, Explicit Saturation (IMPES) method back to
the works of Sheldon et al. (1959) and Stone and Gardner (1961). The basic strategy of this
method is to obtain a single equation in which the sole unknown is the pressure of one of the
phases. We achieve this by combining the partial differential equations for each phase in such
a way as to eliminate the saturation derivatives. Furthermore, we assume capillary pressure
to be constant during any time step. By so doing, we obtain just one partial differential
equation, with a phase pressure as the only unknown (this is usually the water-phase
pressure).
After writing the finite-difference approximation to this partial differential equation, we obtain
the appropriate characteristic equation. When we apply this characteristic equation at the grid
nodes, we generate a system of linear algebraic equations. The coefficients appearing in this
system of equations are functions of the pressures and saturations; therefore, they are
estimated using the information available at the previous iteration level. At any iteration level,
once the solution for the phase pressure distribution (e.g., Pw) is obtained, the next step is to
solve explicitly for that phase's saturation distribution (Sw) from the partial differential equation
describing the flow of that phase.
At this stage, we know the Pw and Sw distributions. This enables us to determine the oil phase
pressure distribution using the capillary pressure relationship between the oil and water

phases. Similar to the determination of Sw, after obtaining the Po distribution, we explicitly
solve the oil-phase partial differential equation for the oil-phase saturation, So.
With the values of So and Sw calculated, we can easily determine Sg (Sg = 1 - So - Sw). Finally,
using the capillary pressure relationship between the oil and gas phases, we obtain the gas
phase pressure (Pg) distribution. This completes one iteration; we then repeat the whole
procedure until we achieve convergence. At the beginning of each iteration, all the pressure
and saturation dependent terms are updated using the most recent information available.
Figure 1 is a flow chart highlighting the major steps involved in the IMPES method.

Figure 1
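The sequence in Figure 1 is straightforward to express in code. The sketch below is a minimal structural outline of one IMPES pressure/saturation pass, with placeholder stand-ins (assemble_pressure_system and update_saturations are hypothetical names, and the oil- and gas-phase steps are compressed into a comment); it is meant only to show the implicit-pressure, explicit-saturation ordering, not an actual formulation.

import numpy as np

def assemble_pressure_system(p, sw):
    # Placeholder: coefficients would be evaluated from the previous iterate.
    return np.eye(p.size), p.copy()

def update_saturations(p_new, sw):
    # Placeholder: the explicit Sw update from the water-phase equation.
    return sw.copy()

def impes_iteration(p, sw, tol=1e-6, max_iter=25):
    for _ in range(max_iter):
        A, rhs = assemble_pressure_system(p, sw)   # coefficients at old level
        p_new = np.linalg.solve(A, rhs)            # implicit solve for Pw
        sw_new = update_saturations(p_new, sw)     # explicit solve for Sw
        # Po from the oil-water capillary pressure, So from the oil equation,
        # Sg = 1 - So - Sw, and Pg from the gas-oil capillary pressure would
        # follow here, as described above.
        converged = np.max(np.abs(p_new - p)) < tol
        p, sw = p_new, sw_new
        if converged:
            break
    return p, sw

p, sw = impes_iteration(np.full(5, 3000.0), np.full(5, 0.25))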

Simultaneous Solution (SS) method


In the simultaneous solution (SS) method, we convert the saturation derivatives that appear
on the right-hand-side of the flow equations to pressure derivatives using the capillary
pressure relationships. In this manner, we obtain three partial differential equations, with the
principal unknowns being the phase pressures. In other words, at each grid node there are
three unknowns. This does not pose any problem, however, because we have three
equations to write at each node.
With three equations for each node, the resulting coefficient matrix becomes significantly
larger. For example, for a three-dimensional system with 10x15x3 blocks, the size of the
coefficient matrix is 1350x1350. The overall structure of the coefficient matrix is similar to the
corresponding single-phase structures. However, since more than one partial differential

equation is approximated at each block, the resulting elements of the coefficient matrix are
either 2x2 or 3x3 submatrices for two-phase and three-phase flow problems, respectively.
Accordingly, for a two-dimensional three-phase flow problem, we often refer to the resulting
coefficient matrix as a tri-tri-pentadiagonal matrix typifying the aforementioned structure. Once
we have constructed the coefficient matrix and obtained the phase pressures, using the
capillary pressure relationship in an inverse manner, we obtain Sw and Sg. The calculation of
So is straightforward when Sw and Sg are available. Figure 2 summarizes the basic steps of
the simultaneous solution procedure.

Figure 2

Fully Implicit Solution with Newtonian iteration


In the Fully Implicit Solution with Newtonian iteration method, we reduce the six principal
unknowns of the three-phase flow equations to three linearly independent principal unknowns
(most often one phase pressure and two saturations) by using the capillary pressure and
saturation relationships. We then use finite-differences to approximate the three partial
differential equations that result.
By treating the coefficients implicitly (at the same level as the principal unknowns) we
generate a system of non-linear algebraic equations. We can linearize these equations using
the generalized Newton-Raphson procedure, such that we can implement a Newtonian
iteration. In the solution process, a residual function is formed and its derivative calculated
with respect to each principal unknown to construct the Jacobian. We then use a numerical

differentiation scheme to obtain the elements of the Jacobian matrix. The salient points of the
Newton-Raphson method are highlighted below.
Consider a set of non-linear equations in two unknowns, x and y:
f (x, y) = 0
g (x, y) = 0
with (x0, y0) as an initial guess to the solution. Suppose that (x0 + Δx, y0 + Δy) is the exact
solution. Then, a Taylor series expansion can be written in the neighborhood of (x0, y0), i.e.,

f(x0 + Δx, y0 + Δy) = f(x0, y0) + (∂f/∂x)Δx + (∂f/∂y)Δy + ... = 0
g(x0 + Δx, y0 + Δy) = g(x0, y0) + (∂g/∂x)Δx + (∂g/∂y)Δy + ... = 0

Truncating the above series after the first-order terms, we obtain a system of linear equations
in the two unknowns Δx and Δy.

The solution of these two equations for Δx and Δy leads to better estimates of the solution of
the original non-linear equations. We repeat this process in an iterative manner until the
improvements Δx and Δy become small enough to satisfy the pre-set convergence criterion.
We can express the iterative process in matrix form as follows:

| ∂f/∂x  ∂f/∂y |(k)  | Δx |(k+1)      | f |(k)
| ∂g/∂x  ∂g/∂y |     | Δy |       = - | g |

which is followed by:

x(k+1) = x(k) + Δx(k+1)
y(k+1) = y(k) + Δy(k+1)

In the above matrix equation, the coefficient matrix is referred to as the Jacobian (J).
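A small worked example helps fix the mechanics. The sketch below applies the iteration to two hypothetical equations, f(x, y) = x^2 + y^2 - 4 = 0 and g(x, y) = xy - 1 = 0, using analytical derivatives for the Jacobian (a simulator would typically fill these entries by numerical differentiation, as noted above).

import numpy as np

def f(x, y): return x**2 + y**2 - 4.0
def g(x, y): return x * y - 1.0

def jacobian(x, y):
    return np.array([[2.0 * x, 2.0 * y],
                     [y, x]])

x, y = 2.0, 0.0                                   # initial guess (x0, y0)
for k in range(50):
    F = np.array([f(x, y), g(x, y)])
    delta = np.linalg.solve(jacobian(x, y), -F)   # J [dx, dy]^T = -[f, g]^T
    x, y = x + delta[0], y + delta[1]
    if np.max(np.abs(delta)) < 1e-10:             # improvements small enough
        break
print(x, y)    # one root: both x^2 + y^2 = 4 and xy = 1 are satisfied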
The most stable algorithm is the Fully Implicit procedure, but it is also the most
computationally intensive and costly method. The IMPES procedure is generally the
most cost-effective method, although we must qualify this by noting that its explicit treatment
of the mobility and capillary terms causes it to suffer some stability limitations. The
Simultaneous Solution method treats all primary variables implicitly and is therefore
unconditionally stable with respect to those variables; however, its explicit treatment of the
coefficients, similar to that of the IMPES procedure, can still create some stability problems.
The whole area of solution techniques for partial differential equations is a subject of intensive
research in many fields. As a result, there are many techniques that are evolving through
rigorous testing and validation processes. For example, adaptive techniques, which utilize
different degrees of implicitness at different spatial and temporal locations to maximize the
stability while minimizing the computational overhead, have received a great deal of attention.

Solution of Matrix Equations


The following material summarizes some direct and iterative procedures for numerically
solving sets of linear algebraic equations. In setting up the computer model, we face the problem
of solving a set of n equations relating n unknowns, expressed in the form

a11x1 + a12x2 + a13x3 + ... + a1nxn = C1
a21x1 + a22x2 + a23x3 + ... + a2nxn = C2 (5.2)
...................................................
an1x1 + an2x2 + an3x3 + ... + annxn = Cn
We can put the left-hand members of Equation 5.2 into a square array of the coefficients,
known as the coefficient matrix,

    | a11 a12 ... a1n |
A = | a21 a22 ... a2n | (5.3)
    | ... ... ... ... |
    | an1 an2 ... ann |

and the unknown vector,

X = (x1, x2, ..., xn)^T (5.4)

The right-hand members form the right-hand-side vector

C = (C1, C2, ..., Cn)^T (5.5)

We can express the set of equations summarized by Equation 5.2 in matrix notation as

AX = C (5.6)
In reservoir simulation problems, coefficient matrices are sparsely filled and contain a main
diagonal and a number of additional diagonals. This relates to the way in which the grid

blocks are ordered (i.e., the order in which the finite difference equations are written), the
irregularity of the external boundaries and the number of dimensions.

Ordering of grid blocks


There is a one-to-one correspondence between the re-ordering of matrix elements and renumbering of reservoir grid blocks. Therefore, if we know the coefficient matrix structure that
results from a certain numbering scheme, we can employ the appropriate direct and/or
iterative equation solver.
Standard ordering by rows
Figure 1 shows standard ordering by rows for grid blocks in a 5x2 model, and the resulting
coefficient matrix.

Figure 1

The non-zero elements of the coefficient matrix are indicated by x's, while zero
elements are left blank.
The coefficient matrix has a banded structure, and so is called a band matrix; its non-zero
elements lie on parallel diagonals. The band width for the reservoir of Figure 1 is eleven
(there are eleven parallel diagonals between the lowermost and uppermost diagonals,
including the main diagonal). To calculate the band width, we multiply the maximum number
of blocks in the direction they are ordered by two, and then add one; in this case,
(5x2)+1=11.) Note that the upper and lower diagonals 3 and 4 have zero elements.
Standard ordering by columns
Figure 2 shows the standard ordering by columns of the reservoir from Figure 1 ,

Figure 2

and the resulting coefficient matrix with its non-zero elements. Here, the band width for the
coefficient matrix is five (2x2+1=5). If the solution algorithm operates only with the elements
located between the uppermost and lowermost diagonals, this ordering scheme will
significantly reduce computational requirements.
Checkerboard (Cyclic-2) Ordering
Consider a reservoir model numbered in a checkerboard fashion, as shown in Figure 3 .

Figure 3

In ordering the grid blocks, we number the shaded blocks first and then the unshaded blocks.
The resulting coefficient matrix offers the advantage that, during the forward solution stage of
Gaussian elimination (a common method for solving systems of linear algebraic equations),
only a small portion of the matrix needs to be worked on for triangularization.
Checkerboard (D-4) Ordering
In checkerboard D-4 ordering, we number grid blocks along alternate diagonals. This
alternate diagonal ordering leads to substantial savings in computational overhead (CPU time
and storage). Figure 4 shows a typical coefficient matrix generated from D-4 ordering of a 4x5
grid.

Figure 4

(D-2) Ordering
The D-2 ordering scheme is also known as diagonal ordering. The resulting coefficient matrix
has a relatively small bandwidth ( Figure 5 ).

Figure 5

Table 1 provides a summary of the storage and computational work requirements for different
ordering schemes.

Table 1

A close inspection of Table 1 indicates that the greatest improvement in D-4 ordering is
observed when J=I.
Note: All non-zero elements appearing in the coefficient matrices are single elements for one-phase problems. However, they represent 2x2 and 3x3 sub-matrices for two-phase and three-phase problems, respectively.

Solution methods
One of the more important steps of a numerical simulation study is solving the systems of
equations generated by the finite-difference approximation. Our goal is always to end up with
a system of linear equations, regardless of whether the original equations are linear or
non-linear; hence we need to linearize any non-linear equations before proceeding. In the
broadest sense, there are two main categories of solution methods for the resulting system of
linear algebraic equations: direct and iterative.
Direct solution methods
For a direct solution, we assume that the machine performing the computations is capable of
carrying an infinite number of digits (i.e., there is no round-off error). In this ideal situation, a
direct solution method will yield an exact solution after a finite number of elementary
arithmetical operations.
In reality, every computer carries a finite number of digits; round-off errors do occur, resulting
in non-exact solutions. As the number of operations increases, so does the cumulative
round-off error. For very large systems of equations, it is possible for the round-off errors to grow
uncontrollably, to the point of generating unrealistic results and even instability. Still, there are
situations in which a direct solution offers distinct advantages. Furthermore, a well-written
direct-solver subroutine subprogram can be used on a variety of problems. This feature is
particularly attractive, because it lets us test various parts of the code development without
having to worry about the efficacy of the solver.
The Gaussian elimination technique is the fundamental algorithm used by direct solvers. As
shown in Table 2a ,

Table 2a

Table 2b ,

Table 2b

and Table 2c, this technique involves a systematic set of elementary row operations that
essentially transform the coefficient matrix into an upper triangular matrix.

Table 2c

This step of the solution process is known as the forward solution. Once we obtain the upper
triangular matrix, the backward solution comes into effect, whereby the last unknown is
directly solved for and the others are obtained by systematic backward substitution.
Matrices associated with finite-difference methods are often sparse; therefore, they lend
themselves more easily to iterative solutions. Remember that in computer operations,
elementary arithmetic operations involving zeros take the same amount of time as those
involving non-zeros; the sparseness of the matrix by itself, then, does not reduce the time
required by direct solvers. However, there are direct solution algorithms that take advantage
of the structure of the coefficient matrix involved in finite-difference techniques. One of the
most popular is the Thomas algorithm, which is designed for tri-diagonal coefficient matrices.
With this algorithm, we simply avoid performing any arithmetic operations on the zero
elements of the coefficient matrix, thereby saving a significant amount of time. Table 3
summarizes this algorithm.

Table 3
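A common formulation of this algorithm is sketched below for a system with sub-diagonal a, main diagonal b, super-diagonal c and right-hand side d; it follows the forward-elimination/back-substitution pattern of Table 3, and the test values reuse the illustrative 5x5 system from the one-dimensional example.

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

print(thomas([0, 1, 1, 1, 1], [-2, -3, -2, -2, -2],
             [1, 1, 1, 1, 0], [0, 150, 0, 0, -500]))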

Although the Thomas algorithm is designed for tri-diagonal coefficient matrices, there are other
direct solvers that utilize the presence of other band structures. Again, the idea is to exploit
the sparse nature of the matrix by not carrying out any operations on the zeros that lie
outside the band. Band matrix solvers can be effectively applied to coefficient matrices
with bandwidths up to about 15. As the bandwidth grows beyond this, the efficacy of the band
matrix algorithm starts to decrease, because this group of algorithms treats all zeros
located within the band as non-zeros.
Iterative methods
Iterative solution methods involve making an initial guess, or approximation, of the solution
vector, and then improving on this guess by successively applying the algorithm. In other
words, successive applications of the algorithm lead to better approximations. This systematic,
repetitive application of the algorithm continues until the solution vectors generated by
two successive iterations agree within a specified tolerance (assuming that the
algorithm generates solutions that approach the correct answer with each iteration).
Iterative methods offer two important advantages: lower storage requirements and less
computational work. The simplest and best-known iterative methods for solving systems of
linear algebraic equations are grouped under the name of fixed-point iterative methods.
To illustrate, consider the following system of equations:
a11x1 + a12x2 + ......... + a1nxn = C1
a21x1 + a22x2 + ......... + a2nxn = C2 (5.11)
....................................................
an1x1 + an2x2 + ......... + annxn = Cn
In the fixed-point iterative techniques, we rearrange the first equation to solve for x1 in terms
of the other variables; we similarly rearrange the second equation to solve for x2, and so on
through the last equation, which we solve for xn. We begin the iteration process by making
initial guesses for all the variables; each variable is then updated according to a procedure
which differs from one method to another.
Table 4 summarizes the algorithmic equations for the Point-Jacobi, Gauss-Seidel and
Pointwise Successive Over-Relaxation methods as they apply to the solution of Equation
5.11.

Table 4

The algorithms in Table 4 are presented in order of increasing convergence rate. In other
words, the Point-Jacobi method is the slowest, while the Point Successive Over-Relaxation
method is the fastest. The convergence rates are easy to explain: the Point Gauss-Seidel
method uses the most current values as soon as they become available, whereas the
Point-Jacobi method relies on the old values until the current iteration level computations have
been carried out on all the unknowns. The Point Successive Over-Relaxation method is essentially
based on the Gauss-Seidel method: if we set the value of ωopt to one, Equation 5.14 reduces
to Equation 5.13. The basic strategy of Equation 5.14 is to magnify the successive
improvements achieved by each converging Gauss-Seidel iteration through the use of a
common multiplier, ωopt. We refer to this common multiplier as the optimum acceleration
parameter; its value is problem-dependent and always lies between 1 and 2.
Figure 6 shows a typical variation of the number of iterations required for convergence with the value of ω.

Figure 6

ωopt is that value of ω which yields the minimum number of iterations.


We perform the same convergence check on all three of these algorithms. The basic
convergence criterion is indicated by Equation 5.22. The convergence tolerance, ε, is a
pre-selected small number (e.g., 10^-1 for pressures and 10^-4 for saturations). All of the
variables must satisfy this convergence criterion. The use of a small tolerance does not
always guarantee a solution that is sufficiently close to the actual solution. We must be careful
to apply material balance checks in order to ensure that the converged solution preserves
material balance. If the material balance check signals an unacceptably inaccurate solution,
we should tighten the tolerance and continue iterating. Figure 7 shows a comparison of
convergence paths for the three algorithms discussed.

Figure 7

Figure 7 reveals two important characteristics. First, the solution uniformly approaches
the true solution from one side (there are no oscillations around the true solution). Second, we see
that for the same problem, Gauss-Seidel converges twice as fast as the Point-Jacobi method,
while Point Successive Over-Relaxation converges twice as fast as the Gauss-Seidel
method. One note of caution: if the Jacobi method does not converge on a given problem,
neither will the other two.
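The sketch below implements the relaxed update on a small, diagonally dominant test system; with omega = 1 it reproduces Gauss-Seidel, and with 1 < omega < 2 it becomes Successive Over-Relaxation (the Point-Jacobi variant would simply use only the old values x_old inside the inner loop). The system shown is illustrative, not taken from a reservoir problem.

import numpy as np

def sor(A, c, omega=1.0, tol=1e-8, max_iter=500):
    x = np.zeros(len(c))
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(len(c)):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_gs = (c[i] - s) / A[i, i]                   # Gauss-Seidel value
            x[i] = x_old[i] + omega * (x_gs - x_old[i])   # relaxed update
        if np.max(np.abs(x - x_old)) < tol:               # convergence check
            return x, k
    return x, max_iter

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
c = np.array([2.0, 4.0, 10.0])
for omega in (1.0, 1.03, 1.2):
    x, iters = sor(A, c, omega)
    print(omega, iters, x)   # the iteration count varies with omega (cf. Figure 6)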
We can considerably enhance the efficiency of point-iterative methods by grouping a number
of unknowns together and solving for them simultaneously at a given iteration level. One easy
way of grouping is to group all the points located along the same line. This approach is called
Line Successive Over-Relaxation. Once we solve for the values of the unknowns along a
particular line at a given iteration level, we can immediately apply them when the algorithm is
executed on the next line of points; at that stage, the points of the line ahead
are still at the old iteration level.
Sometimes we can group more than one line of points together (even possibly all the points
on a plane of a three-dimensional system) to form a block of points. We then solve for all the
unknowns on these points simultaneously; this strategy is referred to as Block Successive
Over-Relaxation.
The complexity of the problem dictates the methods we should use; as the problem
complexity increases in terms of the number of unknowns and the relationships among these
unknowns, we must use more powerful iterative methods. This class of methods includes
the alternating direction implicit procedure, the strongly implicit procedure, the conjugate
gradient method, and the nested factorization method.
Comparison of direct and iterative methods
Although the efficacy and efficiency of direct and iterative methods are problem-dependent,
there are some general features which distinguish them from one another. Table 5
summarizes these features.

Table 5

Internal Checks

We must always ensure that the numbers coming out of the computer are reasonable,
realistic and descriptive of the problem at hand. There are three basic checks we should
always perform at the end of each time step computation. These are the residual check,
incremental material balance check and cumulative material balance check.
In the residual check, we simply substitute the solution back into the original set of equations
and observe the departure from each equation by calculating the difference between its
left-hand side and right-hand side. We do this check at the grid-block level.
Any grid blocks exhibiting unexpectedly large residuals need to be scrutinized and diagnosed
for the underlying reasons.
The incremental material balance check simply questions whether or not mass is being
conserved during a time step. Before we perform this check, we should pre-determine
the acceptable material imbalance tolerance. Once we obtain the solution for a given time
step, we calculate the reservoir's mass content using the computed pressure and saturation
values. We then compare this mass content against the mass content calculated at the end of
the previous time step. The difference between the two should equal the mass produced
from and/or introduced into the reservoir. An unacceptable material imbalance may indicate a
loose tolerance level imposed on the solution algorithm. We can correct this problem by
tightening the tolerance used in the equation solvers.
The cumulative material balance check is similar to the incremental check. This time, however,
we apply conservation of mass at the end of each time step against the original mass content
of the reservoir at the initial time. This check tells us the extent of the computational errors
accumulated so far in the run, so that when we report the results of the simulation study,
we will be able to comment on the level of confidence.
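A hedged sketch of these two checks is shown below; the mass arrays would be computed from the grid-block pressures and saturations (via densities and pore volumes), and net_in denotes mass injected minus mass produced. The function names and tolerance values are illustrative choices, not standard values.

import numpy as np

def incremental_check(mass_new, mass_old, net_in, tol=1e-4):
    # Mass change over the time step should equal the net mass added.
    error = np.sum(mass_new) - np.sum(mass_old) - net_in
    return abs(error) <= tol * abs(np.sum(mass_new))

def cumulative_check(mass_new, mass_initial, cumulative_net_in, tol=1e-3):
    # Mass change since the initial time should equal the cumulative net mass added.
    error = np.sum(mass_new) - np.sum(mass_initial) - cumulative_net_in
    return abs(error) <= tol * abs(np.sum(mass_new))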
While not all of us will be involved in actually developing simulators, we still must have a
general understanding of what goes into their development. Furthermore, we must be quick to
recognize "red flags" so that we can exercise prudent engineering judgement. In this way, we
will be able to ask the right questions for assessing the validity of the numbers being
generated by the computer.

Constructing the Reservoir Model: Data Collection and Preparation

Constructing a reservoir model involves physically defining the reservoir and its fluids in terms
of a large set of data. These data define the fluid properties, the reservoir boundaries and the
interactions with the environment. Once we put together such a physical description of the
model, we use a numerical code to study its performance in the form of response to external
stimuli.

Data collection
There is no overemphasizing the importance of data gathering in a simulation study,
particularly as it relates to data quality. An adequate and appropriate reservoir description
hinges delicately on the quality of the data we use. It is imperative, when reporting reservoir
performance, that we measure the actual system's performance; we can only be assured of
doing so if we have used the correct data set to construct the reservoir model.
A close examination of the governing equation will reveal the type and extent of the data
required for building a reservoir model in readiness for a simulation study. For the sake of
completeness, we consider data requirements for the three-phase, gas-oil-water system.
Summarized below are the various data needed to build a reservoir model.

Geological Information
Accurate geological information is paramount to a successful simulation study. This
realization has led to the development of a new area of research called reservoir
characterization, which encompasses a number of disciplines. Accurate reservoir description,
even in the crudest sense, requires a team effort between geologists, geophysicists, log
analysts and engineers. Geologists study the depositional environment and the syn-depositional
and post-depositional forces responsible for the productive formation. Geophysical studies
define the stratigraphy of the environment as well as the reservoir structure. A careful logging
program provides information about porosity and saturation distribution. This, coupled with
core analysis, produces credible data for constructing isoporosity maps. Table 1 gives an
overall picture of geological data requirements, data sources and usages.

Table 1

Rock Properties
After procuring the geological information, which basically defines the reservoir boundaries
and structure, we need to obtain information about the reservoir's internal structure, as
typified by its fluid storage capacity, fluid distribution and ability to transmit these fluids. That
is, we need to determine its porosity, fluid saturation, and permeability, respectively. As far as
the reservoir simulation is concerned, these are the key macroscopic properties that control
the flow. The fundamental microscopic properties that affect fluid transport in porous media,
such as pore size and pore size distribution, pore throats, tortuosity, wetting affinity,
mineralogy, etc., are all embedded in these key macroscopic properties. This is why these
properties, even though they have the controlling effect, are not explicitly represented in the
flow equations. Table 2 provides an overall summary of the rock properties used in reservoir
simulation.

Table 2

Fluid Properties
Understanding reservoir fluid properties is essential to formulating their overall behavior.
Apart from the fact that many of these properties appear in the coefficients of the governing
equations, they are also used to compute reserves in surface units. Generally, these
properties are functions of the basic thermodynamic variables of temperature, pressure and
composition.
There are two broad categories of simulators: black oil simulators and compositional
simulators. While black oil simulators use the local average properties of the fluids for
characterization, compositional models rely on the accurate composition of the fluids.
Similarly, while most simulators assume isothermal reservoir conditions, there are certain
situations (e.g., thermal recovery operations) which necessitate the use of energy balance
equations and consideration of the fluid properties. In a typical black oil simulator, we express
fluid properties as functions of pressure, to different degrees of nonlinearity. These
nonlinearities make the governing equations even more nonlinear. Depending upon the
degree of nonlinearity, we may need to update these properties at each iteration level in order
to preserve the form of the original partial differential equation. This requires us to update the
transmissibility terms at each iteration level, which has a tremendous impact on the
computational overhead, particularly if direct solvers are used. This is because at
every iteration level, a new coefficient matrix is generated as properties are updated;
triangularization of the coefficient matrix therefore has to be performed continuously
throughout the iteration process. One strategy for alleviating this problem is simply to keep
the properties fixed for a pre-set number of iterations within a time step before updating them.
A heuristic approach is necessary to verify that this strategy will not adversely impact the
adequacy of the solution. Table 3 summarizes the relevant fluid properties.

Table 3

Rock-Fluid and Fluid-Fluid Interaction Properties


It is normal to expect that interactions between the fluid and the reservoir rock will affect the
system's dynamic behavior. In the presence of multiple fluids, the dynamic behavior of the
individual fluid phases will likewise be affected. We quantify these effects in terms of
interaction properties, namely relative permeability, interfacial tension, wettability and capillary
pressure.
Some of these interaction properties exhibit their influence on a microscopic scale, and so do
not appear explicitly in the governing equations. They manifest themselves, however, in other
macroscopic properties such as relative permeability. Although very commonly used and
taken for granted, relative permeability is not a fundamental property, but rather a global
property in which microscopic properties (interfacial tension and wettability) are implicitly
embedded. It is currently the most practical way of distributing these microscopic parameters
over the entire reservoir domain.
We always express relative permeability as a function of fluid saturation, and this function is
strongly nonlinear. Therefore, it introduces the same degree of nonlinearity into the
transmissibility terms. The importance of this becomes even more accentuated in a
displacement process, where the displacing front is being propagated as a sharp
discontinuity. In this type of situation, we can often achieve a better numerical description of
the problem by using a numerical weighting scheme where we assign a greater weight to the
points upstream than to those downstream. The classical examples of this treatment are
single-point and two-point upstream weighting procedures for relative permeabilities. In
actuality, these procedures are analogous to expanding the relative permeability functions in
terms of Taylor series.
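A minimal sketch of single-point upstream weighting follows; kr_of_sw is a hypothetical placeholder curve standing in for a tabulated relative permeability function, and potentials are represented by pressures for simplicity.

def kr_of_sw(sw):
    return sw ** 2     # placeholder Corey-type curve, for illustration only

def upstream_kr(p_left, p_right, sw_left, sw_right):
    # The relative permeability used in the interblock transmissibility is
    # taken from whichever block is upstream of the flow.
    return kr_of_sw(sw_left) if p_left >= p_right else kr_of_sw(sw_right)

print(upstream_kr(3000.0, 2500.0, 0.6, 0.4))   # left block is upstream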
Production and Pressure History

Advances in seismic and other geophysical techniques notwithstanding, the wellbore remains
the only true window through which we can glimpse what is happening in the reservoir. We do
this primarily by keeping account of production and pressure data and using this database to
refine the model's descriptive and predictive capabilities.
Pressure data from well test analyses are useful in monitoring the reservoir's response to
certain impulses. Such data may be used to infer reservoir properties. Wellhead pressures
taken over a period of time can also be used for this purpose.
Reservoir production history, both in terms of rate and cumulative flow, is essential not only
for validating the simulator's predictive capability, but also for tuning the simulator for future
predictions. As a rule of thumb, we can achieve a well-tuned simulator after the reservoir has
produced about 10 percent of its ultimate recovery. With this level of tuning, we can develop
the confidence to use the simulator in a predictive mode. Production data include not only
the production rates of individual phases, but also the gas-oil ratio (GOR) and the water-oil
ratio (WOR).

Data preparation
Now that we are familiar with the types of data we need for a simulation run, we can focus on
how to put this data into usable formats. Current technology in the areas of image digitization
and graphics visualization has considerably enhanced several aspects of this process,
making it easy to translate maps and graphical representations to numerical values. In
making these translations, we create digitized databases. For these numerical values to be
useful, we must sort them out in a proper format.
At this point, we should scrutinize the data for consistency and conformity. If we find an
anomaly in some of the data which we cannot substantiate, we should reject these data.
Figure 1 shows a set of hypothetical data for a variety of property variations, with typical
functions superimposed on each.

Figure 1

It is unusual for measured data to lie on a smooth continuous curve; data scattering is almost
always present. We may have to smooth the data using statistical techniques such as least
squares analysis, analysis of variation and regression. All of these techniques are a part of
preparing the acquired raw data for a reservoir simulation study.
After we have acquired, analyzed and smoothed the data, we must put them into the form of
simulator input. There are two basic methods of doing this: we can express the data in terms
of analytical functions, or in tabular form, as described below.

In obtaining an analytical equation, two approaches are commonly taken: approximating
functions by least-squares analysis, and interpolating functions such as cubic splines.
These functions are coded as function subprograms, which have to be called each time the
property value is needed. Since this involves a great deal of data transfer between the
different segments of the code, some measure of inefficiency in terms of time is expected.

A tabular data form, usually referred to as the table look-up method, makes the data
transparent to the subroutine that needs it. This can save a substantial amount of time. In a
typical table look-up routine, we need to bracket the data points and then use a simple
interpolation method (often linear) to obtain the data of interest.
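A minimal table look-up sketch is shown below: given monotonically increasing abscissas, np.interp brackets the point and interpolates linearly, which is exactly the procedure just described. The PVT-style values are purely illustrative.

import numpy as np

pressure = np.array([500.0, 1000.0, 2000.0, 3000.0, 4000.0])   # psia
bo = np.array([1.05, 1.12, 1.25, 1.33, 1.30])                  # RB/STB

def table_lookup(p):
    # Bracket p in the pressure column and interpolate Bo linearly.
    return float(np.interp(p, pressure, bo))

print(table_lookup(2500.0))   # interpolates between the 2000 and 3000 psia rows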

As Figure 1 demonstrates, some properties lend themselves to being represented by
analytical equations, while others do not; properties in the latter category are invariably
handled by the table look-up method.

Reservoir Simulation and the Use of Pseudo Functions


Selecting the number of dimensions (1D, 2D, or 3D) to adequately represent a reservoir
simulation problem calls for prudent engineering judgement. We need to balance the amount
of additional information that an extra dimension could provide (assuming the necessary data
are available) against the increased computational overhead it would require. In general, we
should always try to use the least number of dimensions that provide sufficient information for
engineering analysis. For some problems, however, we need greater detail, and so we must
use higher dimensions.
Methods have been developed over the years to try to get the best of both worlds; that is,
to use a smaller number of dimensions and still attain a high degree of detail. One prominent
method involves the use of pseudo functions.
The concept of pseudo functions dates back to the early 1960s. It developed in response to
the need to accommodate a third dimension within the limits of the computers then
available. Of course, we could argue that these limitations have more or less disappeared.
Still, three-dimensional computations are expensive, not only in reservoir simulation but in
other process simulations as well. This is true both in terms of computational overhead and
the sheer amount of data required. It is not always easy to procure this type of data,
particularly in reservoir simulation. As a result, there remains even today a strong interest in
using pseudo functions to reduce problem dimensionality and other numerical difficulties
(Zagalai and Murphy, 1991).
Pseudo functions are broadly divided into two categories, based on where they are used in
the reservoir simulation: interblock pseudo functions and well pseudo functions.

Interblock pseudo functions


The purpose of interblock pseudo functions is to describe flow between grid blocks. Since
interblock flow is controlled by relative permeability functions, which in turn depend on
saturation, generating interblock pseudo functions involves averaging the saturations in the
blocks of interest. There are two basic kinds of interblock pseudo functions: analytical and
dynamic.
Analytical pseudo functions involve volumetric averaging of water saturation over the
thickness of the interval. We then assign this volume-averaged saturation as the saturation for
the grid block. For the case of segregated flow, we neglect capillary forces, which implies
complete gravity segregation. Thus, in calculating pseudo relative permeabilities, we simply
equate the horizontal flow calculated between the blocks using the pseudo relative
permeability (computed at the volume-averaged saturation) with that calculated using the
actual relative permeability (computed at the actual water saturation in the uninvaded zone).
Similarly, the use of appropriate pseudo capillary pressure functions either partially or totally
eliminates the need for vertical gridding, as long as such functions maintain the integrity of the
initial fluid distribution, fluid movement and pressure distribution.
Another concept, which is more widely used in analytical pseudo functions, is that of vertical
equilibrium. Here, we assume that the potential gradient of all the phases in the vertical
direction is zero. As depletion takes place, gravity and capillary forces attain equilibrium in the
vertical direction in every grid block of the reservoir model. This implies that the differences
between the phase pressures are totally balanced by capillary pressure, thereby fixing the
vertical saturation distribution. In effect, this reduces a three-dimensional problem to a
two-dimensional areal problem, and a two-dimensional cross-sectional problem to a
one-dimensional problem. While the vertical equilibrium pseudo function has some inherent error
in describing the vertical dimension, it is quite adequate for many reservoir problems.
It is imperative to pre-test the applicability of any set of pseudo functions by comparing results
obtained from them with models describing detailed vertical segmentation using actual data.
While the basic assumption of vertical equilibrium is reasonable for many reservoir problems,
it is not valid in reservoirs with poor vertical communication. In such cases, the time it takes
for a perturbation to dissipate in the vertical direction is long compared to the mean residence
time of a fluid particle moving in the lateral direction; significant vertical potential gradients
persist, violating the premise of vertical equilibrium. One suggested procedure is to
assume a set of vertical equilibria in time, but even this approach breaks down in many
cases.
The approach for generating dynamic pseudo functions follows essentially the same
procedure as for analytical pseudo functions, except that areal weighting is used
cross-sectionally. This is done by partitioning each block into finer grid blocks. The end
result is that a pseudo function is generated for each vertical column of blocks.

Well pseudo functions


Well pseudo functions attempt to extend the concept of interblock pseudo functions to
describe multiphase flow into the wellbore from adjacent blocks. This entails extending the
concept of pseudo relative permeability functions to the vertical performance of individual
wells. There are certain reservoir problems in which this application becomes extremely
useful. Practical examples include water coning and gas cusping, in which multiphase flow in
the near-wellbore region plays a dominant role and usually requires finer detail than usual.
Ordinarily, a three-dimensional model is most appropriate for such problems, but the use of
well pseudo functions can serve as a good compromise. The most direct way to handle these
problems is to use source representation, which couples the wellbore with a numerical
description of the near-wellbore region. Pseudo relative permeability and pseudo capillary
pressure functions have been found to be satisfactory. Often, the pseudo functions used in a
field-scale numerical model can be expressed in terms of analytical solutions or correlations
based on detailed numerical models.

Reservoir Simulation and the Computational Environment


The magnitude, time and complexity of a reservoir simulation problem depend in part on the
available computational environment. For instance, simple material balance calculations are
now routinely performed on desktop personal computers, while running a field-scale
three-dimensional compositional simulator may call for the use of a supercomputer. We have come
far as a modeling community since the first digital computer became available in the early
fifties. While the first 25 years witnessed a systematic enhancement in computer technology,
the 10 years that followed have truly been revolutionary. We have gone from a period when
computers were big, expensive and available only to a small group of people (mainly
researchers) to a time when they are accessible to virtually every practicing engineer.
In designing a simulation study, we must always be aware of the capabilities and limitations of
our computing resources. We must take into account the storage requirements, CPU time
demand (to ensure that we can achieve a reasonable turnaround time) and the general
architecture of the machine. This last consideration is essential now that vector processing and
parallel processing, both of which can considerably enhance the computational
environment, are commonplace.
In the early years of computer reservoir simulation, trial and error usually determined what
could or could not be done. Because of the interactive nature of today's computers, this is no
longer the case. The restart procedures that are now available give us the ability to stop a
run prematurely, display the results graphically and determine whether to continue or abort
the run. With the restart option, we can continue the run without having to re-initialize
the program.
In any case, we must determine a priori what size and kind of problem can be run on a
particular computer. This question, however, is rapidly becoming less of an issue with the
phenomenal growth in computer technology. Problems that could only run on mainframes just
a few years ago now run on desktop computers. The machines available today range
from simple personal computers to workstations, mainframes, superminicomputers and even
a whole new generation of supercomputers. Nevertheless, each computer has its own
limitations, which we must be careful to observe.

Special Purpose Reservoir Simulators


Black oil models encompass as much as 80% of all reservoir simulation applications. But
there may be times when we find ourselves dealing with totally different physical
environments, ones that (in terms of processes or the type of porous media) are not
amenable to black-oil modeling.
For example, we cannot adequately represent a coalbed methane reservoir with a
conventional gas-water simulator, even though it is a gas-water (two-phase) system. This is
because the conventional model does not account for adsorption/desorption phenomena.
Neither can a black oil model track the processes that govern miscible flooding in volatile oil
and condensate reservoirs, because pressure and temperature changes have a tremendous
impact on the phase behavior involved and hence the fluid properties.
As we encounter increasingly complex reservoir problems, our data requirements become
more stringent, and the numerical solution schemes we employ become more sophisticated.
Furthermore, we have to keep track of significantly more variables than usual and interpret
their trends. In situations where the governing physical and chemical principles differ from
what we see in black-oil reservoirs, we may resort to special purpose simulators.
The question often arises as to whether we could develop a general purpose simulator
capable of handling the entire array of reservoir problems. Theoretically (considering that every
simple problem is merely a subset of a more complex one), the answer is a resounding yes.
In a practical sense, however, such an all-encompassing model is a virtual impossibility; its CPU
and storage requirements, along with its compilation overhead, would be prohibitive, and in most
cases redundant for the problem at hand. This is why special purpose simulators are the rule
rather than the exception.

Water Coning Simulators


Water coning, a premature invasion of the wellbore by water, usually occurs in bottom-drive
reservoirs. Basically, the oil-water contact, which is normally horizontal, becomes distorted
near the wellbore, assuming an upward concave posture. Coning usually occurs when the well is
completed close to the oil-water contact and produced at a high rate, creating a sharp
pressure gradient near the wellbore and resulting in excessive water production. This water
invades the pore spaces around the perforations and inhibits the flow of oil towards the wellbore.
A similar phenomenon often takes place near the gas-oil contact, where the gas cap becomes
distorted as the gas cusps downward and causes excessive gas production. Figure 1
illustrates these two phenomena.

Figure 1

When using a reservoir simulator to study water coning or gas cusping, we must take extra
care to capture the rapid changes that take place within the immediate vicinity of the wellbore.
To enhance the description of these near-wellbore changes, we resort to single-well modeling
most of the time. A radial-cylindrical grid ( Figure 2 ), with smaller grid spacing along the
radial direction and dense spacing along the vertical direction, captures the movement of the
saturation front.

Figure 2

As shown in Figure 2, a coning study treats the formation of the cone as axisymmetric with
respect to the wellbore. Therefore, it is common to study the problem in two-dimensional
radial-cylindrical coordinates (r-z directions only) to decrease the storage requirements and
increase the computational speed. This class of simulators is always demanding to develop
and to run. The main difficulty arises from the convergent nature of the flow toward the
wellbore. As fluid flows toward the wellbore, the cross-sectional area perpendicular to the flow
direction becomes progressively smaller, resulting in higher velocities near the well. This
necessitates the use of smaller time steps to ensure numerical stability.
While the data requirement for a coning study is the same as for a full field model, the grid
system is significantly different. Sometimes, we superimpose rectangular geometry with a
hybrid grid system over the computational domain (part a of Figure 3 ).

Figure 3

In some cases, if simulators can handle locally refined grids, then we can use local refinement
techniques effectively. We can place these locally refined sections around the wellbore (part b
of Figure 3 ), or they can be dynamic such that they move together with the oil-water contact
(part c of Figure 3 ).
Note that not every simulator has these options, and that implementing these options to
existing models can sometimes be extremely demanding.

Dual Porosity/Permeability Simulators


With more and more reservoirs being identified as naturally fractured, there is a growing
interest in simulating such systems. We generally characterize naturally fractured reservoirs
as dual porosity systems because of the presence of two continua: the rock matrix and the
fractures. We usually represent these continua using two different sets of porosity and
permeability functions, referring to the matrix porosity as primary and the fracture porosity as
secondary. Fluids are stored mainly in the rock matrix, and transmitted through the fracture
network; the permeability of the fracture network is much higher than that of the matrix.
Porosity and permeability are not the only discontinuous functions in a dual-porosity system.
Other properties, such as the capillary pressure and relative permeability functions, also differ
as we move from one subdomain of flow to the other.
There are two general approaches to simulating dual-porosity systems: dual-porosity/single-permeability and dual-porosity/dual-permeability.

In the dual-porosity/single-permeability approach, we write flow equations for only
one of the two continua, i.e., the fracture network. We account for matrix flow by
imposing a source term on the flow equations for the fracture continuum. We
calculate this mass transfer term between the matrix blocks and the fractures by using
a transfer function, which is based on either unsteady-state or pseudo-steady-state
computations.

In the dual-porosity/dual-permeability approach, we write flow equations for both
continua. We subdivide matrix blocks into grid blocks, and solve flow equations within
each of these subdomains (matrix blocks). Again, we calculate the transfer of fluid
between the matrix and the fracture using the existing pressure gradients at the interface.
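As a concrete illustration of the transfer-function idea, the sketch below evaluates a pseudo-steady-state matrix/fracture exchange term of the Warren-Root form, q = sigma (km/mu) V (Pm - Pf), where sigma is a shape factor reflecting matrix-block geometry. All numerical values are illustrative only.

def matrix_fracture_transfer(sigma, km, mu, volume, p_matrix, p_fracture):
    # Rate proxy for fluid leaving the matrix and entering the fracture.
    return sigma * (km / mu) * volume * (p_matrix - p_fracture)

# A higher matrix pressure drives fluid into the fracture network.
print(matrix_fracture_transfer(sigma=0.08, km=0.5, mu=1.2, volume=8000.0,
                               p_matrix=3200.0, p_fracture=3000.0))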
In practice, natural fractures occur randomly in the porous medium (part a of Figure 4 ).

Figure 4

However, in simulating the fracture system, we idealize the system as shown in part b of Figure 4 .
Usually, a computational block consists of several matrix blocks. For instance, in part b, each
computational block is made up of eight matrix blocks.
Although the matrix blocks in the idealized dual-porosity system (part b of Figure 4 ) are
shown as cubes, we could have shown them as spheres, cylinders, or slab elements. Studies
have shown that there is no significant disparity between results obtained by using any of
these geometrical configurations. The main difference usually results from the handling of the
matrix/fracture fluid exchange and the types of the capillary pressure and relative permeability
functions used within the fracture network.

Thermal Recovery Simulators


Thermal recovery processes are designed to raise the temperature of reservoir oil, thereby
decreasing its viscosity and enhancing its flow characteristics. The primary differences among
various thermal recovery methods are in the heat sources used to raise oil temperature. The
two most popular methods are steamflooding and in-situ combustion. Of all enhanced oil
recovery (EOR) processes, steamflooding is among the most successful.
The major difference between thermal recovery simulators and other types of models is the
need for an energy balance equation. Temperature distribution is the main driving factor in
thermal recovery, and so must be adequately predicted, especially since viscosity is a strong
function of temperature. The energy equations are usually highly nonlinear and strongly
coupled to the mass balance equations. In addition, two types of heat transfer mechanisms
must be accounted for: conduction and convection.
The behavior of the energy equation depends on which of the two mechanisms dominates. If
conduction is dominant, the equation exhibits parabolic behavior (diffusivity equation); if
convection dominates, it exhibits hyperbolic behavior (shock wave equation). Convection-dominated
behavior produces sharp thermal fronts, which finite-difference schemes tend to smear
numerically. In this case, we must pay special attention to the numerical scheme in order
to avoid numerical instability.
One other feature of thermal simulators is the need to calculate the heat loss to the
surrounding formations. Whereas in formulating fluid flow equations we consider the
reservoir an isolated system with no fluid exchange with the surroundings, thermal equations
allow heat exchange with the surroundings, at least by conduction. This is most critical for the
overburden and underburden, because of their large contact area relative to that of the adjacent
formations in the lateral directions.
Steamflooding enhances oil recovery by transferring heat from the steam to increase the
temperature of the reservoir section adjacent to the wellbore. This temperature increase
reduces the oil viscosity and hence the flow resistance in the wellbore vicinity. There are
basically two types of steam injection schemes: cyclic, where injection and production are
done sequentially using the same well, and continuous, which involves separate injectors and
producers.
The primary effects being simulated in steam injection are the temperature increase and the
resultant decrease in the oil viscosity. There are two basic types of steamflooding simulators:
compositional and non-compositional. Non-compositional simulators are simpler and usually
adequate for most cases. But in cases where we suspect that the distillation of light
components significantly affects the stimulation process, we should use a compositional
steamflood simulator. Such a simulator requires a phase behavior description for the
oil/steam system. An additional complication arises when three-phase relative permeability
data are considered temperature-dependent.
The main computational challenges arise from the strong inherent nonlinearities in the energy
equations, the discontinuity resulting from phase changes (condensation, vaporization), and a
strong coupling between the fluid movement and the energy transfer. These peculiar
characteristics manifest themselves in numerical problems such as instability and grid
orientation effects.
In-situ combustion uses basically the same principle as steam injection, i.e., using heat to
reduce the viscosity of the oil. In this case, however, the heat comes from injecting air into the
reservoir and igniting part of the oil to start a combustion front.
The primary consideration in simulating in-situ combustion, apart from all the other factors
mentioned in the discussion of steamflooding, is the combustion reaction kinetics. The
temperature dependence of the kinetics equations is usually described by Arrhenius-type
rate expressions. These strongly temperature-dependent functions introduce a new level of
nonlinearity to the energy equations. We must be aware of possible severe stability problems
in in-situ combustion simulators. Excessive computational time is not at all uncommon; a
typical in-situ combustion run can require two orders of magnitude more computing time than
a typical black oil problem applied to the same reservoir.
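The sketch below illustrates why Arrhenius-type kinetics are numerically troublesome. The frequency factor A and activation energy Ea are hypothetical placeholder values; in practice they are regressed from combustion-tube experiments:

    import math

    R = 8.314  # universal gas constant, J/(mol*K)

    def arrhenius_rate(T, A=1.0e8, Ea=1.2e5):
        """Arrhenius-type rate constant k = A * exp(-Ea / (R*T)).
        A (frequency factor) and Ea (activation energy, J/mol) are
        illustrative values only."""
        return A * math.exp(-Ea / (R * T))

    # A modest temperature rise changes the rate by several orders of
    # magnitude, the source of the severe nonlinearity and stiffness:
    for T in (450.0, 600.0, 750.0):  # K
        print(f"T = {T} K -> k = {arrhenius_rate(T):.3e}")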
In thermal simulators, apart from mass balance checks, we must also verify the energy
balance as part of the internal checks.

Compositional Simulators
Compositional reservoir simulators account for multiphase flow and interfacial mass transfer
of each component in a hydrocarbon system. This implies that at any given time, the
simulator tracks fluid movement and establishes the state of equilibrium of the reservoir fluids
at discrete points. At each node, phase pressure, phase saturation and overall composition
are computed as functions of time.
Compositional simulators are particularly useful in describing gas condensate reservoirs,
volatile oil reservoirs, gas cycling processes and in some thermal recovery processes, in
which compositional changes are important.
The distinguishing feature of a compositional simulator is its full coupling of the fluid phase
behavior model with the flow equations and, perhaps, the energy equation. In this case,
rigorous flash calculations are performed using either tabular equilibrium ratios or analytical
equations of state. All volumetric and thermal properties are computed as
composition-dependent.
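At the heart of each flash calculation is the Rachford-Rice equation for the vapor fraction. The following sketch solves it by bisection for fixed, tabular K-values; a full compositional simulator would instead update the K-values from an equation of state at each iteration, so treat this as a minimal illustration:

    def rachford_rice(z, K, tol=1e-10, max_iter=100):
        """Solve the Rachford-Rice equation for the vapor fraction V:
            f(V) = sum_i z_i*(K_i - 1) / (1 + V*(K_i - 1)) = 0
        by bisection. z: overall mole fractions, K: equilibrium ratios.
        Assumes the mixture actually splits two ways (f(0) > 0 > f(1))."""
        def f(V):
            return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                       for zi, Ki in zip(z, K))
        lo, hi = 0.0, 1.0
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if f(mid) > 0.0:    # f decreases with V, so the root lies above
                lo = mid
            else:
                hi = mid
            if hi - lo < tol:
                break
        V = 0.5 * (lo + hi)
        x = [zi / (1.0 + V * (Ki - 1.0)) for zi, Ki in zip(z, K)]  # liquid
        y = [Ki * xi for Ki, xi in zip(K, x)]                      # vapor
        return V, x, y

    # Example: a three-component mixture with illustrative tabular K-values.
    V, x, y = rachford_rice(z=[0.5, 0.3, 0.2], K=[2.5, 1.2, 0.3])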
There are certain limitations inherent in using compositional reservoir simulators. The two
principal ones are the excessive CPU time requirements and the problem of adequately
describing the fluid phase behavior.

Generally, CPU time requirements increase exponentially as the number of fluid
components increases. We can alleviate this problem by lumping the system into a
few pseudo-components. This requires rigorous testing to ensure that the
pseudo-system mimics the original system's phase behavior within the range of
temperature and pressure being simulated (a minimal lumping sketch follows below).
In compositional simulation, we generally treat petroleum as a mixture of a limited
number of discrete components, when in fact it is a continuous mixture. The standard
method of handling this problem is the use of a C7+ pseudo-component. However,
difficulties often arise in assigning the needed parameters to this pseudo-component.
One method of avoiding these problems is to apply the concept of continuous
thermodynamics. This area is still undergoing research, however, and its practical use
is currently limited.
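As a minimal illustration of the lumping step mentioned above, the sketch below forms pseudo-components using Kay's mole-fraction-weighted mixing rule for critical properties. The groupings, and the idea of lumping only Tc and Pc, are simplifying assumptions; rigorous studies tune the full pseudo-component characterization against the original phase behavior:

    def lump_components(z, Tc, Pc, groups):
        """Lump components into pseudo-components using Kay's rule:
        pseudo-critical properties are mole-fraction-weighted averages.
        z, Tc, Pc: per-component mole fractions and critical properties.
        groups: list of index lists, one per pseudo-component."""
        lumped = []
        for idx in groups:
            zg = sum(z[i] for i in idx)
            tc = sum(z[i] * Tc[i] for i in idx) / zg
            pc = sum(z[i] * Pc[i] for i in idx) / zg
            lumped.append({"z": zg, "Tc": tc, "Pc": pc})
        return lumped

    # Illustrative grouping: components 0-1 stay discrete, 2-4 become one
    # pseudo-component (e.g., a C7+ fraction):
    pseudo = lump_components(
        z=[0.40, 0.20, 0.20, 0.15, 0.05],
        Tc=[190.6, 305.3, 540.0, 617.0, 708.0],   # K, illustrative
        Pc=[4.60, 4.87, 2.74, 2.11, 1.48],        # MPa, illustrative
        groups=[[0], [1], [2, 3, 4]])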

Miscible Displacement Simulators


A miscible displacement process involves two or more fluids that are mutually miscible in all
proportions when they come into contact. When complete miscibility takes place, no interface
forms between the fluids. Miscibility between two components can take place in two different
ways: first-contact miscibility and multiple-contact miscibility. In the first case, the displacing
fluid is immediately miscible with the displaced fluid, while in the second, miscibility occurs
after a series of equilibrium contact stages.
Examples of miscible displacement processes include chemical flooding (e.g., miscible
carbon dioxide injection, polymer flooding, micellar flooding) and displacement of oil by
solvents. Miscible displacement simulators can be multi-component, multi-mechanistic
models, where multi-mechanistic indicates that flow takes place by both convection and
dispersion. We can represent multi-mechanistic flow by describing the velocity of a
component as the sum of the velocities due to the different mechanisms.
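A minimal sketch of that multi-mechanistic decomposition on a one-dimensional grid follows: the flux of a component at each cell face is the sum of a convective (upwinded) part and a Fickian dispersive part. The variable names are hypothetical, and a real simulator would handle variable velocity and multidimensional dispersion tensors:

    def component_flux(c, v, D, dx):
        """Total component flux at interior cell faces as the sum of
        convection and dispersion: F = v*c - D * dc/dx.
        c: cell-centered concentrations, v: Darcy velocity (assumed > 0,
        so the upstream cell value is used), D: dispersion coefficient."""
        flux = []
        for i in range(len(c) - 1):
            convective = v * c[i]                      # upstream value
            dispersive = -D * (c[i + 1] - c[i]) / dx   # Fickian term
            flux.append(convective + dispersive)
        return flux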
In a miscible displacement simulator, we generally assume single-phase flow. This
assumption implies the presence of full miscibility, and lets us avoid the difficult vapor-liquid
equilibrium computations of multiple-contact miscibility. Furthermore, we usually consider two
components (e.g., oil and the solvent), and emphasize flow by dispersion in the construction
of these simulators. We calculate phase properties such as viscosity and density using
mixing rules. This type of formulation assumes no volume change upon mixing.
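Two mixing rules of the kind referred to above are sketched below: a quarter-power rule for viscosity, widely used in miscible-flood modeling, and a volume-fraction-weighted density consistent with the no-volume-change assumption. The specific rules vary between simulators, so these are representative rather than definitive:

    def mixture_viscosity(cs, mu_s, mu_o):
        """Quarter-power mixing rule:
            mu_m = (cs / mu_s**0.25 + (1 - cs) / mu_o**0.25) ** -4
        cs: solvent volume fraction, mu_s, mu_o: solvent and oil
        viscosities. Reduces to mu_o at cs = 0 and mu_s at cs = 1."""
        return (cs / mu_s**0.25 + (1.0 - cs) / mu_o**0.25) ** -4

    def mixture_density(cs, rho_s, rho_o):
        """Volume-fraction-weighted density; exact only under the
        no-volume-change-on-mixing assumption the formulation makes."""
        return cs * rho_s + (1.0 - cs) * rho_o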

Chemical and Polymer Flooding Simulators


Chemical flooding simulators are much more demanding to develop than other special
purpose simulators. This is simply because the physics involved in a typical chemical flood
are much more complex, and require consideration of the extensive microscopic phenomena
that are taking place at the fluid-fluid and fluid-rock interfaces. Chemical floods employ
several different fluids, and therefore form several fluid banks. Interfacial phenomena, the
phase behavior of these complex systems, and the adsorption and desorption of chemical
agents to and from the rock grains make the problem even more complicated. Most chemical flood
simulators are developed to study certain phenomena in the laboratory, where it is much
easier to control process variables.
Polymer injection is a complex process. It involves simultaneous multiphase fluid flow, with
interphase mass transfer of water between the polymer and water phases as polymer
adsorbs onto the rock of the porous medium. In a polymer injection simulator, the
polymer and water phases form the aqueous phase. Usually water is allowed to transfer from
the polymer phase to the water phase as a result of polymer slug deterioration caused by
polymer adsorption on rock. However, most of the time, water is not allowed to transfer from
the water phase to the polymer phase (i.e., the polymer slug cannot be diluted by water). It is
necessary to consider the polymer adsorption on the rock and its effects on the permeability
of the rock to the existing phases. We also need to account for transverse dispersion of the
polymer component within the aqueous phase. Most polymer injection simulators treat the
adsorption of polymer on the rock as permanent (i.e., there is no desorption). However, the
rock's adsorptive capacity limits the maximum amount of polymer that can be adsorbed.
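A minimal sketch of how such a simulator might represent irreversible polymer adsorption and its effect on aqueous-phase mobility follows. The Langmuir-type isotherm and the single resistance factor Rk are simplifying assumptions; actual models use laboratory-derived adsorption and residual-resistance data:

    def langmuir_adsorption(c, a_max, b):
        """Langmuir-type adsorption isotherm: adsorbed polymer per unit
        rock mass, capped by the rock's adsorptive capacity a_max:
            C_ads = a_max * b * c / (1 + b * c)
        Treated as irreversible (no desorption), as most simulators
        assume. c: polymer concentration in the aqueous phase."""
        return a_max * b * c / (1.0 + b * c)

    def reduced_water_mobility(k, mu_w, Rk):
        """Adsorbed polymer lowers the permeability of the rock to the
        aqueous phase; Rk >= 1 is a resistance factor."""
        return k / (mu_w * Rk)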
We can use chemical and polymer flood simulators, like other simulators, as screening tools
to select optimal patterns. We can also use them to determine optimal slug sizes, and to
analyze the increased production and profitability of a full-scale chemical flood under several
operating strategies. Finally, we can use them to predict the effect of fluid and rock properties
on the oil recovery and flood performance.

Coalbed Reservoir Simulators


The petroleum industry classifies methane from coalbed reservoirs as unconventional natural
gas. Unconventional resources offer significant potential, both now and in the future, in terms
of large volumes of reserves, low production costs, and relatively simple development
techniques.
One of the more important characteristics of coal seams is their dual-porosity nature,
characterized by well-defined macropore and micropore structures. The natural fracture
networks of coal seams are uniformly distributed, and composed of two fracture systems
which are almost orthogonal to each other (face and butt cleats). While the face cleat is
continuous throughout the reservoir and capable of draining large areas, the butt cleat is
discontinuous and ends at the face cleat. Thus, the anisotropic nature of a coal seam
originates from this cleat system in which the permeability in the direction of a face cleat is
considerably larger than that in the direction of a butt cleat.
The micropore system, as the primary-porosity matrix, has openings on the order of
molecular dimensions to a few Angstroms. In general, we assume that these openings are
not accessible to water, and that they contribute the major portion of gas storage, with gas
stored in both adsorbed and free states. Quantitatively, as much as 2,000 SCF of methane
can be stored in a ton of coal by adsorption. When the coal seam is in virgin conditions, the
volume of free gas in the micropores is almost negligible compared to the volume of gas in
the adsorbed state. Again, for coal seams in virgin conditions, we assume that the cleat
system is fully saturated with water. As water is removed from the macropores, gas desorbs
from the micropore surfaces toward the macropore structure. This process represents a
distributed source mechanism over the macropore structure. As the desorption process
continues, the free gas saturation within the fracture network increases.
Most coalbed simulators use Langmuir sorption isotherms to describe the release of methane
from the adsorbed state. They achieve this by solving a first-order kinetic sorption model.
Along these lines, two approaches have been developed: equilibrium sorption isotherms
(pressure-dependent) and non-equilibrium sorption isotherms (pressure- and time-dependent).

In the equilibrium sorption isotherm approach, we assume that the gas adsorbed
onto the micropore walls is in a constant state of equilibrium with the free gas phase
in the macropore system. Models based on this approach are essentially
single-porosity models altered for coal seams, either by the inclusion of a
pressure-dependent source term or by the modification of the storage term. These
models generally predict optimistic results, since the adsorbed gas is assumed to
instantaneously enter the macropore system. A slightly more sophisticated approach
involves treating matrix sorption as a quasi-steady-state model, in which the
desorption rate is proportional to the difference between the gas concentration at the
external matrix surface and the average concentration contained within the matrix.
A more realistic approach is the non-equilibrium (unsteady-state) formulation, which
takes into account the time lag experienced during transport through the micropores.
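A minimal sketch of the quasi-steady, first-order sorption model described above follows: the desorption rate is proportional to the difference between the average matrix gas concentration and the equilibrium concentration implied by the current cleat pressure. The time constant tau and the explicit time stepping are illustrative simplifications:

    def desorption_step(C_avg, C_eq, tau, dt):
        """One explicit time step of the first-order sorption model:
            dC/dt = -(C_avg - C_eq) / tau
        C_avg: average matrix gas concentration, C_eq: equilibrium
        concentration at the matrix surface (from the Langmuir isotherm
        at the current cleat pressure), tau: sorption time constant.
        Returns the updated average matrix concentration."""
        return C_avg - dt * (C_avg - C_eq) / tau

    # As cleat pressure drops, C_eq falls and the matrix releases gas with
    # a lag controlled by tau (the equilibrium model is the tau -> 0 limit):
    C = 300.0  # initial matrix gas concentration, SCF/ton (illustrative)
    for _ in range(5):
        C = desorption_step(C, C_eq=150.0, tau=30.0, dt=10.0)
        print(f"C = {C:6.1f} SCF/ton")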
The overall structure of coalbed reservoir simulators is similar to that of conventional
dual-porosity/single-permeability models. With the aid of these models, the production
performance of coalbed reservoirs has been thoroughly examined. If the simulator has options
to accommodate ongoing mining activities, it can also be used to predict methane emission
rates into the active mine working area.
First-generation coalbed reservoir models consider only the existence of a single-component
gas (methane) and water. In some coalbed reservoirs, components other than methane (e.g.,
carbon dioxide) may play an important role in the coal's sorption characteristics. Similarly,
enhanced recovery of coalbed methane through nitrogen or carbon dioxide injection is under
consideration as a viable process to increase the rate of methane recovery. In studying these
types of applications, it will be necessary to use second-generation coalbed reservoir models,
which use a compositional approach in modeling the selective adsorption/desorption of
different gas components.

History Matching, Forecasting and Updating


History matching, the most practical method for testing a reservoir model's validity and
accuracy, is a process of parameter adjustment. Its goal is to procure a set of parameters that
yields the best prediction of the reservoir's performance history. Simulating the reservoir's
past performance is central to history matching, and the process should ideally help to identify
weaknesses in, and ways of improving, reservoir and model description data.
The main weakness of history matching is non-uniqueness. Several authors have decried this
problem (e.g., Saleri and Toronyi, 1988). Non-uniqueness arises because more than one
combination of reservoir parameters may yield the same predictions. Of course, this is not
physically possible, since the actual reservoir parameters that the model is attempting to
describe are unique.
The data we build into a reservoir simulator, at best, only approximate the actual reservoir
parameters. We therefore cannot expect these data to truly represent the reservoir. To
understand why this is true, we must consider the data sources. Permeability and porosity
data may have come from laboratory core analyses, and scaling up such data to real
reservoir conditions inevitably causes a problem. The reservoir's geometrical configuration
(its shape, internal discontinuities and their descriptions, e.g., fractures and their geometry)
is inferred from a few discrete locations, and then extrapolated over vast areas.
In light of these facts, we cannot expect these data to give more than a crude approximation
of real conditions.
In essence, we can describe history matching as a feedback control procedure, analogous to
the classical control problem. With the best estimates of the model parameters in hand, we
run the simulator to predict the reservoir history. We then compare this predicted performance
history, using some key history matching parameters, to the actual recorded performance
history. If we do not see an acceptable match, we adjust the model parameters and attempt a
new match. We continue this iteration process until a "good" match results. The set of model
parameters that achieves this match is the best estimate, and becomes part of the simulator
for future predictions. Figure 1 schematically represents this feedback control logic.

Figure 1

The one step in the feedback control procedure that was often ignored in the past is the "revisit/ask questions" box. It has since become clear that this was a mistake. Geologists and
geophysicists can provide a great deal of insight into reservoir description, insight that helps
engineers make intelligent revisions of the model parameters. The iteration loop can easily
become an infinite do-loop without adequate control; we need to be clear on how good a
match we desire. As many simulation experts have succinctly put it, "a good history match is
obtained when you run out of either time or money" and, we may add, patience.

Manual history matching

Modern simulators offer a high degree of automation. Still, manual history matching is the
more prudent option, in that it allows a fuller application of engineering judgment, experience
and insight. While it can be time-consuming and expensive, it represents the final, and
perhaps most important, phase in a simulator's development. Unless this step is successful,
all of the preceding steps (conceptual modeling, mathematical modeling and data input) could
be rendered useless.
Manual history matching entails a sequential study of parameters, in which we study output
and adjust the parameters accordingly for the next run until we obtain the best set of
parameters (i.e., that which yields the best match with actual performance data). An
advantage of this approach is that it allows close interaction between the engineer and the
simulator. An inherent problem associated with history matching is non-uniqueness; a lack of
proper control on parameter adjustment can lead to unrealistic parameter estimates and even
outright violation of physical principles. With the control that manual history matching allows,
we can avoid this problem by defining the matching parameters and criteria for "goodness of
match" prior to the matching process. In addition, we should use our best judgement to set
limits within which we can vary each parameter.
In a heterogeneous, multiphase, multicomponent system, there are many parameters which
can affect performance history. The key consideration is to identify which of them (either one
or several) has the most impact, and target them for adjustment during the history matching
process. There is no universal rule for selecting these, as they are very system-dependent.
Table 1 summarizes the parameters used in history matching.

Table 1

The main parameters usually adjusted in history matching are

reservoir and aquifer transport capacities, (kh)res and (kh)aq
reservoir and aquifer storage capacities, (φhct)res and (φhct)aq
relative permeability function
capillary pressure function
original saturation distribution
Other parameters are adjusted only if we observe a poor match or new information becomes available.
The two broad parameters considered in determining a match are pressure history and fluid
movement. These translate to pressures, flow rates, water-oil ratios, gas-oil ratios, and
water-gas ratios. The first step in manual history matching is to set the study objectives as
clearly as possible and develop the criteria for a "good match." Once we do this, we can
perform a sensitivity analysis to determine which of the parameters from Table 1 will most
likely have the greatest effect. Although this will vary from problem to problem, we can make
some general statements about pressure history matching and fluid movement history
matching.
Permeability is the reservoir variable most often used for pressure history matching. This is
partly because permeability is the least well-defined parameter and, at the same time, the one
that affects pressure distribution the most. Porosity data are usually much better defined than
permeability data, and hence porosity is not as widely used as a tuning parameter. While
permeability information from well-test analysis may be better than that obtained from other
sources, its reliability depends on how accurately the well-test model represents the reservoir.
The basic steps still follow the structure presented in Figure 1 . A more elaborate procedure is
presented in the SPE Reservoir Simulation Monograph (Mattax and Dalton, 1990).
Figure 2 illustrates an attempt to history-match the data collected by the U.S. Department of
Energy on the Cozzette blanket sandstone of the Mesaverde formation, using a
multi-mechanistic model (Bezilla et al., 1989).

Figure 2

Based on available geologic information, it had been established that this formation is
anisotropic. In fact, initial attempts to assume isotropic permeability distribution did not yield
an adequate match, but rather, provided the magnitude of the overall effective permeability.
Extensive history matching yielded the values of the permeability contrast shown as the "best
match" values.
Usually, the best way to confirm a reservoir model's validity is to use fluid movement
(particularly gas-water movement) as the matching parameter.

In this case, the fluid-contact movement is the primary indicator of a match. This is indirectly
measured and monitored by the gas and water arrival times, as indicated by the water-gas
ratio (WGR) and the gas and water production rates. Figure 3 shows the history matching
conducted using the gas production rate from the Cozzette blanket sand as the history
matching parameter.

Figure 3

Here, the "best kx/ky parameter" obtained from the pressure history match was used for the
production rate match. As we can see from Figure 3 , the match is consistent with the
pressure history match.

Automatic history matching


The sole purpose of automatic history matching is to remove the drudgery from the history
matching process by letting the computer do most of the work, including analysis. Several
algorithms presented in the literature are meant to do just this. All automatic history matching
algorithms use the principle of nonlinear optimization to achieve the best match of the
observed data. To do this, an objective function is defined based on the history matching
parameter. This objective function is usually a measure of the total error between predicted
and observed data. The strategy is to minimize this error to yield the best match. Table 2
summarizes the basic equations.

Table 2
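In the spirit of the equations summarized in Table 2, the sketch below poses history matching as minimization of a weighted least-squares error. The simulate function is a hypothetical stand-in for a full reservoir-simulation run, and the bounded parameters illustrate the kind of constraints that keep the "best" match physically reasonable:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-in for a reservoir-simulation run: maps the two
    # parameters being tuned to a predicted pressure history.
    def simulate(params):
        kh, aquifer_strength = params
        t = np.arange(1.0, 6.0)
        return 5000.0 - 200.0 * t / (kh * 0.01 + aquifer_strength)

    observed = np.array([4800.0, 4600.0, 4400.0, 4200.0, 4000.0])
    weights = np.ones_like(observed)

    def objective(params):
        """Weighted least-squares mismatch between simulated and observed
        history: E = sum_i w_i * (sim_i - obs_i)**2."""
        residual = simulate(params) - observed
        return float(np.sum(weights * residual**2))

    # Bounds keep the optimizer from wandering into unphysical values,
    # restoring some of the control that pure automation gives up.
    result = minimize(objective, x0=[50.0, 1.0],
                      bounds=[(1.0, 500.0), (0.1, 10.0)])
    best_kh, best_aquifer = result.x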

The primary danger of automatic history matching is lack of control. The so-called "best"
parameters could be unrealistic, since this is basically an unconstrained optimization process.
It practically eliminates engineering input; still, it saves time and effort. A suggested
compromise is a hybrid technique between the manual and automatic methods, in which we
retain a measure of control. The interactive graphics now available on many computers have
made this option more and more feasible. These capabilities allow us to examine trends
graphically, and perhaps redirect actions to achieve the study objectives. By so doing, we
ensure that the process does not turn into a "loose cannon" over which we relinquish all
control.

Forecasting
The ultimate goal of any modeling effort is forecasting. The modeling involved in reservoir
simulation is no exception. It is therefore imperative to ensure that a model has the necessary
predictive capability before using it as a forecasting tool. As we have learned, we ensure
predictive capability by formulating an accurate representation of the reservoir, properly
solving the resulting equations, and proving the validity of the model through history matching.
Once we have taken these steps, the simulator is ready for its primary purpose of forecasting.

Designing a prediction study


The crucial element of a "meaningful" simulation study is planning. By "meaningful," we mean
obtaining the most with the least. As we have emphasized previously, we do not have
unlimited resources, either of information or of money. The goal of a simulation study should
be to obtain the desired information given these limited resources. In general, the planning
process should start long before we actually perform the simulation runs, and should involve

other production staff and managers. The ultimate objective, after all, is to develop the best
reservoir management strategy.
There are two important components involved in designing a successful prediction study:
setting the objectives and checking the inventory. We could consider these components
separately, but to yield optimum design criteria, it is better to handle them interdependently. It
is only normal to expect achievable objectives to depend on the available resources, and
hence the inventory of the latter could influence the former. Figure 4 provides an overview of
the main ingredients in designing a prediction study.

Figure 4

In setting the objectives, we must first ask everybody involved (including management) what
types of questions need to be answered or, to put it another way, how much detail it will take
to answer those questions. This stage of the process requires close interaction between the
engineering and management staffs, and it should provide valuable guidance in considering
alternative production schemes, evaluating reservoir management strategies and assessing
the economics of the processes.
We can only set realistic objectives if we know what our resources are. We need to evaluate
what field and laboratory data, supporting expertise and computational resources (both
software and hardware) are available. Our objectives, which are set within the confines of
these resources, will help formulate the study's design guidelines.
In terms of the design criteria, the first step is to identify the key events we need to track. For
instance, in a waterflooding scheme, we will need to determine the breakthrough time, which
is a major event in developing an overall management strategy. Once we identify the key
events to be monitored, we will be in a position to define the other elements, including the
appropriate simulation approach and the time frame of the study. Simulation approach is an
all-encompassing term, which includes the type of simulators to be used (e.g., compositional,
black-oil, multi-dimensional, etc.) and the scope of the study (e.g., window study, single-well
study, full-field-scale study, etc.). This type of systematic planning process is most likely to
result in a successful prediction study at minimum cost.

Key parameters in prediction


Key parameters in a prediction study come into play at three different levels ( Figure 5 ).

Figure 5

The first level involves selecting the production process. This is particularly applicable in
formulating a development strategy for a new field. We may have to choose, for instance,
between two enhanced oil recovery schemes, or decide when to begin a specific production
process. While we can perform the initial screening without a simulator, we generally need
more detailed information for economic decisions. Through a prediction study, we will develop
the screening parameters at this level.
Once we pass this first stage, we must identify a second-level set of parameters to predict the
selected production scheme's major events. For example, in a flooding scheme, we would
need to determine the optimal pattern and spacing, injection rates, and surface operating
conditions.
The third and final level is where we answer the questions that constitute the focal point of the
study. Having selected the process and determined its mechanics, we must now answer
questions relating to how much and how fast we can recover the resource. In other words, we
need to establish the optimal flow rates and the ultimate recovery under a variety of schemes.

Analysis of results
A typical simulation run generates a voluminous amount of data, which we must promptly
analyze to obtain the information we need. Although a simulation study could involve many
runs planned in advance, we should not think that we have to complete all the runs before

beginning our analysis. It is not a sequential process; we should, in fact, conduct simulation
runs and analysis simultaneously.
Usually analyzing previous runs will give us valuable information for future runs. Again, we
must carefully and rigorously plan our runs, and continuously fine-tune them as more
information becomes available from analyses of previous runs. Prompt analysis of each run
may reveal a major flaw in the planning process. This may lead to discussions and
consultations with other involved parties before proceeding further; it may even require a
change in direction. A heuristic approach, while admissible if information can be gained,
cannot be used as a "hit or miss" procedure. In fact, considering the cost of conducting a
simulation study, a hit-or-miss approach is a luxury we can ill afford. Careful and timely
analysis (and prudent judgement) will ensure a successful prediction study.

Updating
Rarely do we have available all the information that we need at the beginning of a simulation
study. In fact, a basic tenet of engineering is using the available information (as inadequate
as it may be) to come up with a "best" solution. This solution is then improved as more
information becomes available. This process is called updating. There are two methods of
updating in reservoir simulation: updating the reservoir model itself, and revising the
simulation approach.

Progressive evolution of the reservoir model


It is generally acknowledged in reservoir simulation that we cannot develop a good level of
confidence until 10 to 15 percent of the recoverable reserves have been produced.
This implies that as we update the reservoir model using available information, we will reach a
stage when we have finally incorporated enough information to give the model good
predictive capability. A good example is a reservoir whose extent has been initially defined
from purely geophysical/geological data. As production continues, we may find that the
average reservoir pressure is declining faster than predicted, indicating that the reservoir may
be smaller than originally thought. This pressure decline could also be due to such factors as
weaker-than-expected communication with the aquifer, or the presence of a previously
undetected disconformity within the reservoir. It is the project technical team's responsibility to
try and ascertain why predicted and observed performances differ. The net result is an
enhancement of the reservoir model. This iterative process is continuous. As the years
proceed, however, each modification becomes less drastic than the previous ones. This is a
clear indication that as more information becomes available, we can develop a greater degree
of confidence in our ability to forecast reservoir performance.

Progressive assessment of the simulation approach


As a reservoir model undergoes changes and modifications, so does the simulation approach,
although perhaps to a lesser extent. As time progresses and we acquire more information, we
may decide to use a higher-dimensional simulator. On the other hand, new information may
tell us that a simpler model (e.g., a lower-dimensional model) is adequate. Or, we may
determine that the reservoir behavior is not amenable to black oil modeling because we have
found that compositional changes are more significant than previously thought. Once we
reach these types of conclusions, we should exercise judgment in implementing the
necessary simulation approach.
Another factor we should consider is the rapid development in both software and hardware.
New hardware and later versions of the software could make new simulation scenarios
possible. We must always try to adopt new changes that could positively affect our
performance. Certainly these changes abound in the world of simulation.
