Abstract
Reservoir fluid properties are very important in reservoir
engineering computations such as material balance
calculations, well test analysis, reserve estimates, and
numerical reservoir simulations. Ideally, these properties
should be obtained from actual measurements. Quite often,
however, these measurements are either not available, or very
costly to obtain. In such cases, empirically derived
correlations are used to predict the needed properties. All
computations, therefore, will depend on the accuracy of the
correlations used for predicting the fluid properties.
This study presents an Artificial Neural Network (ANN)
model for predicting the formation volume factor at the bubble
point pressure. The model is developed using 803 published
data from the Middle East, Malaysia, Colombia, and Gulf of
Mexico fields. One-half of the data was used to train the ANN
models, one quarter to cross-validate the relationships
established during the training process and the remaining one
quarter to test the models to evaluate their accuracy and trend
stability. The results show that the developed model provides
better predictions and higher accuracy than the published
empirical correlations. The present model provides predictions
of the formation volume factor at the bubble point pressure
with an absolute average percent error of 1.789%, a standard
deviation of 2.2053%, and a correlation coefficient of 0.988.
Trend tests were performed to check the behavior of the
predicted values of Bob for any change in reservoir
temperature, Gas Oil Ratio (GOR), gas gravity and oil gravity.
The trends were found to obey the physical laws.
Introduction
Reservoir fluid properties are very important in petroleum
engineering computations, such as material balance
calculations, well test analysis, reserve estimates, inflow
performance calculations and numerical reservoir simulations.
Ideally, these properties are determined from laboratory
studies on samples collected from the bottom of the wellbore
or at the surface. Such experimental data are, however, not
always available, or are very costly to obtain. In such cases,
empirically derived correlations are used to predict PVT
properties.
Many empirical correlations are available for
predicting PVT properties; most were developed using
linear or non-linear multiple regression or graphical
techniques.
Each correlation was developed for a certain
range of reservoir fluid characteristics and for a geographical
area with similar fluid compositions and API gravity. Thus, the
accuracy of such correlations is critical, yet it is often not
known in advance.
Among these PVT properties is the bubble point Oil
Formation Volume Factor (Bob), defined as the volume of
reservoir oil occupied by one stock tank barrel of oil plus its
dissolved gas at the bubble point pressure and reservoir
temperature. Precise prediction of Bob is very important in
reservoir and production computations.
The objective of this study is to develop a new predictive
model for Bob based on Artificial Neural Networks (ANN)
using worldwide experimental PVT data.
A new algorithm for training feed forward neural networks
was used. That algorithm was found to be faster and more
stable than other schemes reported in the literature. A database
of 803 published data points from the Middle East, Malaysia, and
Gulf of Mexico fields was used to develop the present model.
Of the 803 data points, 403 were used to train the ANN
model, 200 to cross-validate the relationships established
during the training process, and the remaining 200 to test the
model to evaluate its accuracy and trend stability. Using the
same 200 data points, several empirical correlations were used
to predict Bob. The results show that the present model
outperforms all the existing models in terms of absolute
average percent error, standard deviation, and correlation
coefficient.
SPE 68233
samples for testing the model. Input data to the RBFNM were
reservoir pressure, temperature, stock tank oil gravity, and
separator gas gravity. Accuracy of the model in predicting the
solution gas oil ratio, oil formation volume factor, oil
viscosity, oil density, undersaturated oil compressibility and
evolved gas gravity was compared for training and testing
samples to all published correlations. The comparison shows
that the proposed model is much more accurate than these
correlations in predicting the properties of the oils. The
behavior of the model in capturing the physical trend of the
PVT data was also checked against experimentally measured
PVT properties of the test samples. He concluded that,
although the model was developed for a specific crude oil and
gas system, the idea of using neural networks to model the
behavior of reservoir fluids can be extended to other crude oil
and gas systems as a substitute for PVT correlations
developed by conventional regression techniques.
Finally, Varotsis et al.36 presented a novel approach for
predicting the complete PVT behavior of reservoir oils and gas
condensates using Artificial Neural Network (ANN). The
method uses key measurements that can be performed rapidly
either in the lab or at the well site as input to an ANN. The
ANN was trained on a database of PVT studies of over 650
reservoir fluids originating from all parts of the world. Tests of
the trained ANN architecture utilizing a validation set of PVT
studies indicate that, for all fluid types, most PVT property
estimates can be obtained with a very low mean relative error
of 0.5-2.5%, with no data set having a relative error in excess
of 5%. This level of error is considered better than that
provided by tuned Equation of State (EOS) models, which are
currently in common use for the estimation of reservoir fluid
properties. In addition to improved accuracy, the proposed
ANN architecture avoids the ambiguity and numerical
difficulties inherent to EOS models and provides for
continuous improvements by the enrichment of the ANN
training database with additional data.
Data Acquisition and Analysis
Data used for this work are collected from published sources.
After dropping the repeated data sets, we ended up with 803
data sets as follows: Katz2 (53), Vazquez and Beggs4 (254),
Glaso5 (41), Ghetto et al.22 (173), Omar and Todd15 (93),
Gharbi and Elsharkawy33 (22), and Farshad et al.37 (146).
Each data set contains reservoir temperature, oil gravity, total
solution gas oil ratio, average gas gravity, bubble point
pressure, and the oil formation volume factor at the bubble
point pressure. The repeated data sets were reported by other
investigators26,27. Of the 803 data points, 403 were used to
train the ANN model, 200 to cross-validate the relationships
established during the training process, and the remaining 200
to test the model and evaluate its accuracy and trend stability.
A statistical description of the training and test data is given in
Table 1 and Table 2, respectively.
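The 403/200/200 partition described above can be reproduced with a simple shuffled split. This is an illustrative sketch only; the seed, function name, and list-based layout are assumptions, not details from the paper:

```python
import random

def split_data(data, n_train=403, n_valid=200, seed=42):
    """Shuffle the data sets and partition them into training,
    cross-validation, and test subsets."""
    shuffled = data[:]                      # copy so the input is untouched
    random.Random(seed).shuffle(shuffled)   # reproducible shuffle
    train = shuffled[:n_train]
    valid = shuffled[n_train:n_train + n_valid]
    test = shuffled[n_train + n_valid:]
    return train, valid, test

# Example with 803 dummy data sets standing in for the PVT records
records = list(range(803))
train, valid, test = split_data(records)
print(len(train), len(valid), len(test))  # → 403 200 200
```

Shuffling before partitioning matters here because the 803 points come from several geographically distinct sources; an unshuffled split would train and test on different fluid populations.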
Neural Networks
An artificial neural network is a computer model that attempts
to mimic simple biological learning processes and simulate
specific functions of the human nervous system. It is an adaptive,
parallel information processing system, which is able to
develop associations, transformations or mappings between
objects or data. It is also the most popular intelligent
technique for pattern recognition to date. The basic elements
of a neural network are the neurons and their connection
strengths (weights). Given a topology of the network structure
expressing how the neurons (the processing elements) are
connected, a learning algorithm takes an initial model with
some prior connection weights (usually random numbers)
and produces a final model by numerical iterations. Hence
learning implies the derivation of the posterior connection
weights when a performance criterion is matched (e.g. the
mean square error is below a certain tolerance value).
Learning can be performed by a supervised or an unsupervised
algorithm. The former requires a set of known input-output
data patterns (or training patterns), while the latter requires
only the input patterns. The network used in this work is a
feed-forward model, in which no lateral or backward connections
are used38.
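The supervised, feed-forward scheme just described can be illustrated with a toy single-hidden-layer network trained by gradient descent. This sketch is not the paper's training algorithm; the layer sizes, learning rate, and toy target function are all arbitrary choices:

```python
import math
import random

def train_ffnn(patterns, n_hidden=5, epochs=5000, lr=0.1, seed=1):
    """Train a 1-input / n_hidden / 1-output feed-forward network by
    stochastic gradient descent on (input, target) patterns."""
    rnd = random.Random(seed)
    # prior connection weights: small random numbers, as in the text
    w1 = [[rnd.uniform(-0.5, 0.5), rnd.uniform(-0.5, 0.5)]
          for _ in range(n_hidden)]                    # [weight, bias] per hidden node
    w2 = [rnd.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]  # output weights + bias

    def forward(x):
        h = [1.0 / (1.0 + math.exp(-(w * x + b))) for w, b in w1]
        o = sum(w2[j] * h[j] for j in range(n_hidden)) + w2[-1]
        return h, o

    for _ in range(epochs):
        for x, t in patterns:
            h, o = forward(x)
            err = o - t                                # output error
            for j in range(n_hidden):                  # back-propagate
                g = err * w2[j] * h[j] * (1.0 - h[j])  # hidden-node gradient
                w2[j] -= lr * err * h[j]
                w1[j][0] -= lr * g * x
                w1[j][1] -= lr * g
            w2[-1] -= lr * err
    return lambda x: forward(x)[1]

# Learn a smooth toy mapping; the mean square error should fall well
# below the kind of stopping tolerance mentioned in the text.
pts = [(x / 10.0, (x / 10.0) ** 2) for x in range(11)]
net = train_ffnn(pts)
mse = sum((net(x) - t) ** 2 for x, t in pts) / len(pts)
print(mse < 0.05)
```

The structure mirrors the description above: a topology (the weight lists), an initial model of random prior weights, and numerical iterations that derive the posterior weights once the mean square error is acceptably small.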
Advantages of Artificial Neural Networks
Several advantages can be attributed to ANNs, rendering them
suitable for applications such as the one considered here. Firstly, an
ANN learns the behavior of a database population by self-tuning
its parameters in such a way that the trained ANN
matches the employed data accurately. Secondly, if the data
used are sufficiently descriptive39, the ANN provides a rapid
and confident prediction as soon as a new case, one which has not
been seen by the model during the training phase, is applied.
Possibly, the most important aspect of ANNs is their ability to
discover patterns in data that are so obscure as to be
imperceptible to normal observation and standard statistical
methods. This is particularly the case for data exhibiting
significantly unpredictable nonlinearities40.
Traditional
correlations are based on simple models which often have to
be stretched by adding terms and constants in order for them
to become flexible enough to fit experimental data, whereas
neural networks are marvelously self-adaptable. Using a
sufficiently large database for training, ANNs allow property
values to be accurately predicted over a very wide range of
input data36.
An ANN model can accept substantially more information
as input, thereby significantly improving the
accuracy of the predictions and reducing the ambiguity of the
requested relationship. Moreover, ANNs are fast-responding
systems. Once the model has been trained, predictions
for unknown fluids are obtained through direct and rapid
calculations without the need for tuning or iterative
computations. Furthermore, an outstanding attribute of
ANNs is their capability of becoming increasingly expert through
retraining on larger databases. Continuous enrichment
of the ANN knowledge eventually leads to a predictive
model exhibiting accuracy comparable to that of the PVT data itself36.
The accuracy of the model was evaluated with the following statistical error measures.

Average percent relative error:

    Er = (1/n) Σ_{i=1..n} Ei .......................... (1)

where Ei is the percent relative error between an experimental value and an estimated value:

    Ei = [((Bob)exp − (Bob)est) / (Bob)exp] × 100, i = 1, 2, ..., n .......................... (2)

Average absolute percent relative error:

    Ea = (1/n) Σ_{i=1..n} |Ei| .......................... (3)

Minimum and maximum absolute percent relative error:

    Emin = min_{i=1..n} |Ei| .......................... (4)

    Emax = max_{i=1..n} |Ei| .......................... (5)

Root mean square error:

    RMS = [ (1/n) Σ_{i=1..n} Ei² ]^(1/2) .......................... (6)

Correlation coefficient:

    r = [ 1 − Σ_{i=1..n} ((Bob)exp − (Bob)est)i² / Σ_{i=1..n} ((Bob)exp − B̄ob)i² ]^(1/2) .......................... (7)

where

    B̄ob = (1/n) Σ_{i=1..n} (Bob)exp .......................... (8)
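The error measures of Eqs. (1) through (8) translate directly into a small routine. The function name and the example values are illustrative, not the paper's data; Emin and Emax are taken over the absolute errors, consistent with the reported tables:

```python
import math

def error_statistics(bob_exp, bob_est):
    """Statistical measures of Eqs. (1)-(8): average and absolute average
    percent relative error, min/max absolute error, RMS error, and
    correlation coefficient."""
    n = len(bob_exp)
    E = [100.0 * (e - p) / e for e, p in zip(bob_exp, bob_est)]   # Eq. (2)
    Er = sum(E) / n                                               # Eq. (1)
    Ea = sum(abs(x) for x in E) / n                               # Eq. (3)
    Emin = min(abs(x) for x in E)                                 # Eq. (4)
    Emax = max(abs(x) for x in E)                                 # Eq. (5)
    rms = math.sqrt(sum(x * x for x in E) / n)                    # Eq. (6)
    mean = sum(bob_exp) / n                                       # Eq. (8)
    ss_res = sum((e - p) ** 2 for e, p in zip(bob_exp, bob_est))
    ss_tot = sum((e - mean) ** 2 for e in bob_exp)
    r = math.sqrt(1.0 - ss_res / ss_tot)                          # Eq. (7)
    return Er, Ea, Emin, Emax, rms, r

# Illustrative Bob values only (not the paper's data sets)
exp = [1.10, 1.25, 1.40, 1.60, 1.85]
est = [1.12, 1.24, 1.38, 1.63, 1.83]
Er, Ea, Emin, Emax, rms, r = error_statistics(exp, est)
print(round(Ea, 3), round(r, 4))  # → 1.401 0.9968
```

Note that Er can be near zero even for a poor model, since positive and negative errors cancel; Ea and RMS are the measures that penalize scatter, which is why the comparisons later in the paper lean on them.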
Conclusions
1. A new model was developed to predict the oil formation
volume factor at the bubble-point pressure. The model is based
on artificial neural networks and was developed using 803
published data sets from the Middle East, Malaysia, Colombia,
and Gulf of Mexico fields.
2. Of the 803 data sets, 403 were used to train the ANN
model, 200 to cross-validate the relationships established
during the training process, and the remaining 200 to test the
model to evaluate its accuracy.
3. A new algorithm was used to train a feed-forward three-layer
network. The first layer consists of four neurons
representing the input values of reservoir temperature, solution
gas oil ratio, gas specific gravity and API oil gravity. The
second (hidden) layer consists of five neurons, and the third layer
contains one neuron representing the output value of the
bubble-point formation volume factor Bob.
4. The results show that the developed model provides
better predictions and higher accuracy than the published
empirical correlations. The present model predicts the
formation volume factor at the bubble point pressure
with an absolute average percent error of 1.789% and a
correlation coefficient of 0.988.
5. Trend analysis was performed to check the behavior of
the predicted values of Bob for any change in reservoir
temperature, solution gas oil ratio (Rs), gas gravity and oil
gravity. The model was found to be physically correct. The
stability of the model indicates that the neural network
does not overfit the data, which implies that it was
successfully trained.
6. Incorporating additional data sets during the training and
cross-validation stages can further refine the new model to
cover a wider range of input variables.
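As a sketch, the forward pass of the 4-5-1 network described in the conclusions looks like the following. The weights shown are random placeholders (the trained values are not reported here), the tanh activation stands in for whichever bounded activation the authors used, and input scaling is omitted:

```python
import math
import random

def bob_network_forward(t_res, rs, gas_gravity, api, w_hidden, w_out):
    """Forward pass of a 4-input, 5-hidden-node, 1-output feed-forward
    network: the architecture described for the Bob model."""
    inputs = [t_res, rs, gas_gravity, api, 1.0]        # four inputs + bias term
    hidden = []
    for w in w_hidden:                                 # five hidden neurons
        y = sum(wi * xi for wi, xi in zip(w, inputs))  # summation output
        hidden.append(math.tanh(y))                    # bounded activation
    hidden.append(1.0)                                 # bias for the output layer
    return sum(wi * hi for wi, hi in zip(w_out, hidden))

# Placeholder (untrained) weights, just to exercise the shapes
rnd = random.Random(0)
w_hidden = [[rnd.uniform(-1, 1) for _ in range(5)] for _ in range(5)]
w_out = [rnd.uniform(-1, 1) for _ in range(6)]
result = bob_network_forward(183.0, 549.0, 0.89, 34.6, w_hidden, w_out)
print(result)
```

With only 5 hidden neurons the network has a few dozen weights, which is consistent with conclusion 5: a model this small fitted to 403 training points has little capacity to overfit.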
Nomenclature
Bob = oil formation volume factor at the bubble point pressure
Rs = solution gas oil ratio
T = reservoir temperature, °F
γo = oil specific gravity
γg = gas specific gravity
Er = average percent relative error
Ei = percent relative error
Ea = average absolute percent relative error
Emax = maximum absolute percent relative error
Emin = minimum absolute percent relative error
RMS = root mean square error
r = correlation coefficient
Acknowledgement
The authors wish to thank King Fahd University of Petroleum
and Minerals for the facilities utilized to perform the present
work and for their support.
References
1. Standing, M.B.: "A Pressure-Volume-Temperature Correlation for Mixtures of California Oils and Gases," Drill. & Prod. Prac., API (1947) 275-287.
2. Katz, D.L.: "Prediction of Shrinkage of Crude Oils," Drill. & Prod. Prac., API (1942) 137-147.
3. Standing, M.B.: Volumetric and Phase Behavior of Oil Field Hydrocarbon Systems, Millet Print Inc., Dallas, TX (1977) 124.
4.
5.
6.
7.
8.
9.
10.
11.
12.
13.
14.
15.
16.
17.
18.
19.
20.
21.
22.
23.
24.
25.
26.
27.
28.
29.
30.
31.
32.
33.
34.
35.
36.
37.
38.
39.
Appendix A - The Training Algorithm

For each layer j, and for every node k, calculate the
summation output

    y_jk = Σ_i x_(j−1),i · w_jki .......................... (A-1)

and the node output through the activation function

    f(y_jk) = [1 − exp(−a·y_jk)] / [1 + exp(−a·y_jk)] .......................... (A-2)

whose derivative is

    f'(y_jk) = 2a·exp(−a·y_jk) / [1 + exp(−a·y_jk)]² .......................... (A-4)

Calculate the gain

    k_j = S_j (x_j S_j) / [ (x_j S_j)^T (x_j S_j) + b_j ] .......................... (A-5)

Calculate:

    S_(j+1) = S_j − (μ_j k_j)(x_j S_j)^T .......................... (A-6)

For each output node k, the desired summation output
corresponding to a desired output o_k is

    d_k = (1/a) ln[ (1 + o_k) / (1 − o_k) ] .......................... (A-10)

8. Stopping criteria: a convergence test, or splitting the data
into two sets, one set to train the network and the other set to
test the error; this method is called cross-validation.
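The activation of Eq. (A-2), its derivative (A-4), and the inverse mapping of Eq. (A-10) are mutually consistent, which the following quick numerical check illustrates (the gain value a = 1 and the sample output 0.73 are arbitrary):

```python
import math

def f(y, a=1.0):
    """Bipolar sigmoid activation of Eq. (A-2)."""
    return (1.0 - math.exp(-a * y)) / (1.0 + math.exp(-a * y))

def f_prime(y, a=1.0):
    """Derivative of the activation, Eq. (A-4)."""
    return 2.0 * a * math.exp(-a * y) / (1.0 + math.exp(-a * y)) ** 2

def desired_summation(o, a=1.0):
    """Inverse of the activation, Eq. (A-10): the summation value d_k
    that would produce the desired node output o."""
    return (1.0 / a) * math.log((1.0 + o) / (1.0 - o))

# Round trip: pushing d_k back through the activation recovers the
# desired output, so (A-10) is exactly the inverse of (A-2).
o = 0.73
print(abs(f(desired_summation(o)) - o) < 1e-9)  # → True
```

This inverse is what lets the algorithm work backward from a desired output o_k, in the open interval (−1, 1), to the summation value the output node must produce.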
Appendix B - Empirical Correlations

1. Standing (1947):

    Bob = a1 + a2 [ Rs (γg/γo)^a3 + a4·T ]^a5 .......................... (B-1)

where
    a1 = 0.975, a2 = 12 × 10^-5, a3 = 0.5, a4 = 1.25, a5 = 1.2

2. Vazquez and Beggs (1980):

    Bob = 1 + c1·Rs + c2 (T − 60)(API/γg) + c3·Rs (T − 60)(API/γg) .......................... (B-2)

where, for API ≤ 30:
    c1 = 4.677 × 10^-4, c2 = 1.751 × 10^-5, c3 = −1.811 × 10^-8
and for API > 30:
    c1 = 4.670 × 10^-4, c2 = 1.100 × 10^-5, c3 = 1.337 × 10^-9

3. Glaso (1980):

    Bob = 1 + 10^A .......................... (B-3)

where
    A = −a1 + a2·log(Bob*) − a3·[log(Bob*)]²
    Bob* = Rs (γg/γo)^a4 + a5·T
    a1 = 6.58511, a2 = 2.91329, a3 = 0.27683, a4 = 0.526, a5 = 0.968

4. Al-Marhoun (1988):

    Bob = b1 + b2 (T − 60) + b3·M + b4·M² .......................... (B-4)

where
    M = Rs^b5 · γg^b6 · γo^(−b7)
    b1 = 0.945810, b2 = 0.862963 × 10^-3, b3 = 0.182594 × 10^-2,
    b4 = 0.318099 × 10^-5, b5 = 0.74239, b6 = 0.323294, b7 = 1.20204

5. Al-Marhoun (1992):

    Bob = 1 + a1·Rs + a2·Rs (γg/γo) + a3·Rs (T − 60)(1 − γo) + a4 (T − 60) .......................... (B-5)

where
    a1 = 0.177342 × 10^-3, a2 = 0.220163 × 10^-3,
    a3 = 4.292580 × 10^-6, a4 = 0.528707 × 10^-3

TABLE 1: STATISTICAL DESCRIPTION OF THE INPUT DATA USED FOR TRAINING (603 POINTS)

Property                | Min    | Max     | Average  | St. Dev  | Skewness | Kurtosis
Oil FVF At Pb           | 1.028  | 3.562   | 1.342    | 0.284    | 2.116    | 8.001
Bubblepoint Pressure    | 107.33 | 7127    | 2015.958 | 1284.455 | 0.996    | 1.052
Temperature, F          | 58     | 341.6   | 183.333  | 52.228   | 0.222    | -0.523
Gas-Oil Ratio           | 8.61   | 3617.27 | 549.192  | 495.508  | 1.969    | 6.251
Gas Relative Density    | 0.511  | 1.789   | 0.89     | 0.184    | 1.019    | 1.16
Oil Relative Density    | 0.725  | 0.993   | 0.854    | 0.046    | 0.544    | -0.045
Bp Oil Relative Density | 0.432  | 0.992   | 0.732    | 0.098    | -0.191   | -0.528
API Oil Gravity         | 11     | 63.7    | 34.593   | 8.765    | -0.276   | -0.211
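As a sketch, the Standing (B-1) and Al-Marhoun 1992 (B-5) forms translate directly to code. The function names are mine, the constants are those listed in Appendix B (verify them against the original papers before use), and the usual field units are assumed: Rs in scf/STB, T in °F:

```python
def standing_1947(rs, gas_grav, oil_grav, temp_f):
    """Standing (1947) bubble-point oil FVF, Eq. (B-1).
    rs in scf/STB, temp_f in deg F, gravities as specific gravities."""
    return 0.975 + 12e-5 * (rs * (gas_grav / oil_grav) ** 0.5
                            + 1.25 * temp_f) ** 1.2

def al_marhoun_1992(rs, gas_grav, oil_grav, temp_f):
    """Al-Marhoun (1992) bubble-point oil FVF, Eq. (B-5)."""
    return (1.0
            + 0.177342e-3 * rs
            + 0.220163e-3 * rs * (gas_grav / oil_grav)
            + 4.292580e-6 * rs * (temp_f - 60.0) * (1.0 - oil_grav)
            + 0.528707e-3 * (temp_f - 60.0))

# Evaluate both at roughly the average training-data conditions of Table 1
b1 = standing_1947(549.2, 0.89, 0.854, 183.3)
b2 = al_marhoun_1992(549.2, 0.89, 0.854, 183.3)
print(round(b1, 3), round(b2, 3))
```

At these average conditions both correlations return a Bob near the Table 1 average of 1.342, a quick sanity check that the reconstructed constants are of the right magnitude.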
TABLE 2: STATISTICAL DESCRIPTION OF THE INPUT DATA USED FOR TESTING (200 POINTS)

Property                | Min   | Max     | Average | St. Dev  | Skewness | Kurtosis
Oil FVF At Pb           | 1.038 | 2.478   | 1.338   | 0.262    | 1.425    | 2.302
Bubblepoint Pressure    | 148   | 7127    | 1964.05 | 1312.422 | 1.128    | 1.528
Temperature, F          | 60    | 341.6   | 184.302 | 55.815   | 0.322    | -0.46
Gas-Oil Ratio           | 12    | 2191.33 | 533.75  | 447.483  | 1.233    | 1.366
Gas Relative Density    | 0.525 | 1.789   | 0.9     | 0.198    | 1.042    | 1.286
Oil Relative Density    | 0.741 | 0.99    | 0.854   | 0.047    | 0.475    | -0.284
Bp Oil Relative Density | 0.48  | 0.991   | 0.732   | 0.101    | -0.129   | -0.572
API Oil Gravity         | 11.4  | 59.5    | 34.757  | 8.996    | -0.225   | -0.386
TABLE 3: STATISTICAL ACCURACY OF THE Bob CORRELATIONS AND THE ANN MODEL

Correlation                | Er      | Ea     | Emin   | Emax    | Erms   | r
Standing (1947)            | -0.1696 | 2.7238 | 0.0081 | 20.1795 | 4.2025 | 0.9742
Vazquez and Beggs (1980)   | 2.3083  | 2.9755 | 0.0136 | 15.5368 | 4.0417 | 0.9842
Glaso (1980)               | 1.8186  | 3.3743 | 0.0030 | 17.7763 | 4.5663 | 0.9715
Al-Marhoun (1988)          | 0.3395  | 2.3334 | 0.0112 | 13.2590 | 3.3810 | 0.9811
Al-Marhoun (1992)          | -0.1152 | 2.2053 | 0.0033 | 13.1794 | 3.4162 | 0.9806
Artificial Neural Networks | 0.3024  | 1.7886 | 0.0076 | 11.7751 | 2.7193 | 0.9878
[Figure: schematic of the neural network, showing the input, hidden, and output layers]
[Figure: absolute average percent relative error (AAPRE) of the Standing (1947), Vazquez and Beggs (1980), Glaso (1980), Al-Marhoun (1988), and Al-Marhoun (1992) correlations and the Artificial Neural Networks model]

[Figure: correlation coefficient of the same correlations and the Artificial Neural Networks model]
[Figures: cross plots of predicted vs. measured Bob for the empirical correlations and the ANN model]

Fig. 8- Cross Plot of Al-Marhoun (1992) Correlation.

Fig. 9- Cross Plot of Artificial Neural Networks Model.
[Figures: trends of predicted Bob vs. solution gas oil ratio, reservoir temperature, API oil gravity, and gas gravity for the Standing, Vazquez & Beggs, Glaso, Al-Marhoun (1988), and Al-Marhoun (1992) correlations and the ANN model]