
Software Reliability Models

1. Introduction
The cost of testing a software system to achieve a required reliability
level can sometimes be as high as 60% of the overall cost. As the testing and
verification process discovers faults, the additional cost of exposing the remaining faults
generally rises very quickly. Even after a lengthy testing period, additional testing will
always potentially detect more faults. Thus there is a limit beyond which continued
testing can be justified only if the further improvement it brings is cost effective.
Making software cost effective requires careful planning of the testing phase and accurate
decision-making, and this careful planning and decision-making requires the use
of a software reliability model.

A software reliability model usually takes the form of a random process that describes the
behavior of failures with respect to time.

A software reliability model specifies the general form of the dependence of the failure
process on the principal factors that affect it: fault introduction, fault removal, and the
operational environment. At a particular time it is possible to observe a history of the
failure rate (failures per unit time) of software. Fault identification and removal generally
force the failure rate of a software system to decrease with time as shown in Fig. 1.
Software reliability modeling is done to estimate the form of the failure rate function by
statistically estimating the parameters associated with a selected mathematical model.
The purpose of the modeling is twofold: (1) To forecast the remaining time required to
achieve a specified objective. (2) To forecast the expected reliability of the software
when the product is released. Project management can use these forecasts as inputs for
cost estimation, resource planning, and schedule validation.
The main factors affecting software reliability are fault introduction and fault removal.
1. Fault Introduction
This depends mainly on the characteristics of the developed code and development
process characteristics. Code size is the most important characteristic. Important process

characteristics include the tools used during development and the experience of the
personnel.
2. Fault removal
The fault removal process depends on time, operational profile, and the quality of the
repair process.

Fig. 1 Expected software failure rate curve (failures per hour versus time: the current
failure rate decreases toward the objective failure rate over the remaining test time)

2. Classification
A number of software reliability models have been proposed to handle the problem of
software reliability measurement. A popular approach for classification of models in
terms of five different attributes is given here. This classification scheme defines
the relationships among the models.

1. Time domain: Calendar or execution (CPU or processor) time


2. Category: The number of failures that you can experience in infinite time is finite
or infinite
3. Type: the distribution of the number of failures experienced by a specified time
4. Class (finite failures category only): Functional form of the failure intensity in
terms of time
5. Family (infinite failures category only): Functional form of the failure intensity
in terms of the expected number of failures experienced

3. Characteristics of Good Software Reliability Model


In spite of much research effort, there is no universally applicable software reliability
growth model that can be trusted to give accurate predictions of reliability in all
circumstances. It is therefore necessary to select the model that gives better prediction
accuracy than the rest. This section states the characteristics of a good software
reliability model. A good model should:

• Give good predictions of future failure behavior


A model is reasonably accurate if the number of defects discovered after release is
within the 90% confidence limit of the model.

• Compute useful quantities


A model must be able to provide information that is useful to the decision making
process or it serves no purpose.

• Be simple enough for many to use


Not everyone is well versed in the statistical considerations of the models. The
models must allow people from a range of backgrounds to obtain useful, easy to
understand information.

• Be widely applicable
The value of the modeling effort is enhanced if the same tool can be used for
multiple releases or across many platforms. This can reduce confusion resulting
from the use of many different models simultaneously.

• Be based on sound assumptions


Each model makes assumptions pertaining to test and defect repair. In choosing which
model to use, it is critical that the underlying assumptions be understood; they may
or may not be appropriate for every organization.

• Become and remain stable


If the model's prediction varies greatly from week to week, no one will believe the
results. Ideally, a model should be validated through calibration against
historical data.

4. Recommended Models
4.1. Jelinski & Moranda (JM) Model
This is one of the earliest models proposed. It assumes that the elapsed time between
failures follows an exponential distribution.

Category: Finite failure; Class: Exponential; Type: Binomial


Nature of failure process: Time to failure
Assumptions
1. The failure rate remains constant over the intervals between fault occurrences.
2. Each failure is independent of the others.
3. Each fault has the same probability of causing a failure.
4. A detected fault is removed with certainty in negligible time, and no new faults are
introduced during the debugging process.
5. The fault detection rate is proportional to the number of residual faults.
6. During test, the software is operated in a similar manner as the expected operational
usage.
The hazard function during t_i, the time between the (i−1)st and ith failures, is given by

Z(t_i) = φ[N − (i−1)]

where
φ is a proportionality constant, and
N is the number of defects in the software at the start of testing.

Model Form

If (i−1) faults have been discovered by time t, there are [N − (i−1)] faults remaining in
the system. If we represent the time between the (i−1)st and ith failures by the random
variable X_i, from assumption 1 we can see that X_i has an exponential distribution, f(X_i),
as shown below:

f(X_i) = φ(N − (i−1)) exp(−φ(N − (i−1))X_i)



Using assumption 2, the joint density of all the X_i is:

L(X_1, X_2, …, X_n) = ∏_{i=1}^{n} f(X_i) = ∏_{i=1}^{n} φ(N − (i−1)) exp(−φ(N − (i−1))X_i)

This is the Likelihood function, which we can use to compute estimates for the
parameters φ and N. To make the computation easier, we can take the natural logarithm
of the likelihood function to produce the log-likelihood function. After doing this, we
then take the partial derivative of the log-likelihood function with respect to each of the
two parameters, giving us two equations in two unknowns. Setting these equations to zero
and then solving them gives us the estimated values N̂ and φ̂ for the model parameters N
and φ:

φ̂ = n / [N̂(Σ_{i=1}^{n} X_i) − Σ_{i=1}^{n} (i−1)X_i]

We have to find the value of N̂ numerically from the following equation, and then use it
to solve the previous equation for φ:

Σ_{i=1}^{n} 1/(N̂ − (i−1)) = n / [N̂ − (Σ_{i=1}^{n} (i−1)X_i)/(Σ_{i=1}^{n} X_i)]

A typical plot of the hazard function of the JM model, for N = 100 and φ = 0.01, is a step
function that starts at Z(t_1) = 1.0 and decreases by φ = 0.01 at each failure (for example,
Z(t_3) = 0.01[100 − 2] = 0.98).

Fig. 2 A typical plot of the hazard function of the JM model



MTTF using the JM model is estimated as

MTTF = 1 / [φ̂(N̂ − n)]
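As a sketch, the two estimating equations can be solved as follows: N̂ is found by bisection and φ̂ then follows in closed form. The inter-failure times below are illustrative assumptions, not data from the text, and the code assumes reliability growth so that a root exists.

```python
# Sketch: Jelinski-Moranda MLEs. Inter-failure times are illustrative;
# N_hat is found by bisection, phi_hat in closed form.

def jm_mle(x, n_max=1e6):
    """x: observed inter-failure times X_1..X_n; returns (N_hat, phi_hat)."""
    n = len(x)
    sum_x = sum(x)
    sum_ix = sum(i * xi for i, xi in enumerate(x))   # sum of (i-1)*X_i
    c = sum_ix / sum_x

    def g(N):  # zero at the MLE of N
        return sum(1.0 / (N - i) for i in range(n)) - n / (N - c)

    lo, hi = (n - 1) + 1e-9, float(n_max)   # N must exceed both n-1 and c
    for _ in range(200):                    # bisection: g decreases through the root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    N_hat = 0.5 * (lo + hi)
    phi_hat = n / (N_hat * sum_x - sum_ix)
    return N_hat, phi_hat
```

With these estimates, the MTTF expression above evaluates to 1/(φ̂(N̂ − n)).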

4.2 Shooman Model


This model is essentially similar to the Jelinski-Moranda model. For this model the hazard
rate function can be expressed in the following form:

Z(t) = k[N/I − n_c(τ)]

where
t is the operating time of the system measured from its initial activation,
I is the total number of instructions in the program,
τ is the debugging time since the start of integration,
n_c(τ) is the total number of faults corrected during τ, normalized with respect to I, and
k is a proportionality constant.

4.3 Musa’s Basic Execution Time Model


This model was one of the first to use the actual execution time of the software
component on a computer for the modeling process. The times between failures are
expressed in terms of CPU execution time rather than elapsed wall-clock time.
Category: Finite failure; Class: Exponential; Type: Poisson
Nature of failure process: Time to failure

Assumptions
1. The execution times between the failures are piecewise exponentially distributed.
2. The cumulative number of failures follows a Poisson process
3. The mean value function (µ(t) = β0[1 − exp(−β1t)], where β0, β1 > 0) is such
that the expected number of failure occurrences for any time period is
proportional to the expected number of undetected faults at that time.
4. The quantities of the resources that are available are constant over a segment for
which the software is observed.

5. Resource expenditures for the kth resource, Δχ_k ≈ θ_k Δt + µ_k Δm, where Δt is the
increment of execution time, Δm is the increment of failures experienced, θ_k is an
execution-time coefficient of resource expenditure, and µ_k is a failure coefficient
of resource expenditure.
6. Fault-identification personnel can be fully utilized, and computer utilization is
constant.
7. Fault-correction personnel utilization is established by the limitation of fault
queue length for any fault-correction person. Fault queue is determined by
assuming that fault correction is a Poisson process and that servers are randomly
assigned in time.
Assumptions 4 through 7 are needed only if the second component of the
basic execution model, linking execution time and calendar time, is desired.

Model form
Mean value function is given by

µ (t ) = β 0 (1 − exp(− β 1t )),

The failure intensity function for this model is

λ ( t ) = µ ' ( t ) = β 0 β 1 exp( − β 1 t )
Suppose n failures of the software system are observed at times t_1, t_2, …, t_n, and an
additional time x (x ≥ 0) has elapsed without failure since the last failure time t_n.
Using the model assumptions, the likelihood function for this class is obtained as

L(β0, β1) = β0^n β1^n [∏_{i=1}^{n} exp(−β1t_i)] exp(−β0[1 − exp(−β1(t_n + x))])

So the MLEs of β0 and β1 are obtained as the solutions of the following pair of equations:

β̂0 = n / [1 − exp(−β̂1(t_n + x))]

n/β̂1 − n(t_n + x)/[exp(β̂1(t_n + x)) − 1] − Σ_{i=1}^{n} t_i = 0

Once the estimates of β0 and β1 are obtained, we can use the invariance property of the
MLEs to estimate other reliability measures.
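The second equation must be solved numerically for β̂1; a sketch by bisection, with illustrative failure times (not data from the text) and an expm1-based rewrite for numerical stability:

```python
import math

# Sketch: MLE for Musa's basic execution time model (illustrative data).
# beta_1 is solved numerically, beta_0 then follows in closed form.

def musa_basic_mle(t, x=0.0):
    """t: ordered failure times t_1..t_n; x: failure-free time observed after t_n."""
    n = len(t)
    T = t[-1] + x
    sum_t = sum(t)

    def g(b1):
        # n/b1 - n*T/(exp(b1*T) - 1) - sum(t_i) = 0 at the MLE
        return n / b1 - n * T / math.expm1(b1 * T) - sum_t

    lo, hi = 1e-9, 1.0 / T
    while g(hi) > 0:          # expand until the root is bracketed
        lo, hi = hi, hi * 2.0
    for _ in range(200):      # bisection: g decreases through the root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b1 = 0.5 * (lo + hi)
    b0 = n / -math.expm1(-b1 * T)   # beta_0 = n / (1 - exp(-b1*T))
    return b0, b1
```

Since β̂0 is the expected total number of failures, it always exceeds the n failures already observed.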

4.4.Goel-Okumoto Model
The Goel-Okumoto model takes the number of faults detected per unit of time as
independent Poisson random variables. The model has formed the basis for models that
use the observed number of faults per time interval.
Category: Finite failure; Class: Exponential; Type: Poisson
Nature of failure process: Fault counts

Assumptions

1. The number of faults detected in each of the respective intervals is independent of
each other.
2. The cumulative number of failures follows a Poisson process with mean value
function µ(t).
3. The mean value function is such that the expected number of fault occurrences for
any time t to t + ∆t is proportional to the expected number of undetected faults at
time t. It is also assumed to be a bounded, nondecreasing function of time with
lim_{t→∞} µ(t) = N < ∞.

Model form
The mean value function is given by

µ(t) = N(1 − e^(−bt))

for some constants b > 0 and N > 0, where N is the expected total number of faults to be
eventually detected. Since the failure intensity function is the derivative of µ(t), we
have

λ(t) = Nbe^(−bt)

Model estimates
The maximum likelihood estimates (MLEs) of N and b can be obtained as the solutions
for the following pair of equations:
N̂ = Σ_{i=1}^{n} f_i / (1 − e^(−b̂t_n))

t_n e^(−b̂t_n) Σ_{i=1}^{n} f_i / (1 − e^(−b̂t_n)) = Σ_{i=1}^{n} f_i [t_i e^(−b̂t_i) − t_{i−1} e^(−b̂t_{i−1})] / [e^(−b̂t_{i−1}) − e^(−b̂t_i)]

The second equation is solved for b̂ by numerical methods, and the solution is then
substituted into the first equation to find N̂.
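A sketch of that numerical solution, with illustrative grouped fault counts (not data from the text); each ratio is rewritten with expm1 so it stays stable for large b·t:

```python
import math

# Sketch: Goel-Okumoto MLEs from grouped fault counts (illustrative data).
# b_hat is found by bisection; N_hat then follows in closed form.

def go_mle(f, t):
    """f: fault counts per interval; t: interval end points t_1..t_n (t_0 = 0)."""
    n = len(f)
    F = sum(f)
    tp = [0.0] + list(t)   # prepend t_0 = 0

    def g(b):  # zero at the MLE of b
        s = 0.0
        for i in range(1, n + 1):
            d = tp[i] - tp[i - 1]
            # (t_i e^{-b t_i} - t_{i-1} e^{-b t_{i-1}}) / (e^{-b t_{i-1}} - e^{-b t_i}),
            # divided through by e^{-b t_{i-1}} for numerical stability
            s += f[i - 1] * (tp[i] * math.exp(-b * d) - tp[i - 1]) / -math.expm1(-b * d)
        return s - t[-1] * F / math.expm1(b * t[-1])

    lo, hi = 1e-9, 1.0 / t[-1]
    while g(hi) > 0:          # expand until the root is bracketed
        lo, hi = hi, hi * 2.0
    for _ in range(200):      # bisection
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    N = F / -math.expm1(-b * t[-1])
    return N, b
```

Note that N̂ always exceeds the total observed fault count, since 1 − e^(−b̂t_n) < 1.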

4.5.Schneidewind’s Model
In this model it is assumed that the current fault rate might be a better predictor of
future behavior than the observed rates in the distant past. The failure rate process may
be changing over time, so recent data may better model the present reliability. This
model is presented in three forms, depending on the data points used.

Category: Finite failure; Class: Exponential; Type: Poisson


Nature of failure process: Fault counts

Assumptions
1. Only "new" failures are counted (i.e., failures that are repeated as a consequence
of not correcting a fault are not counted).
2. The fault correction rate is proportional to the number of faults to be corrected.
3. The number of failures detected in one interval is independent of the failure count
in another.
4. The mean number of detected failures decreases from one interval to the next.
5. The intervals are all the same length, where the length can be chosen at the
convenience of the user (in practice, the length can be varied).
6. The rate of failure detection is proportional to the number of faults in the program
at the time of test.
7. The failure detection process is a nonhomogeneous Poisson process with an
exponentially decreasing failure detection rate.

Model form
From the assumptions, the cumulative mean number of faults by the ith time period is

D_i = µ(t_i) = (α/β)[1 − exp(−βi)]

Thus the expected number of faults in the ith period is

m_i = D_i − D_{i−1} = µ(t_i) − µ(t_{i−1}) = (α/β)[exp(−β(i−1)) − exp(−βi)]
Using the assumption that the f_i's are independent nonhomogeneous Poisson random
variables, and incorporating the concept of the different model types, we have the joint
density

[M_{s−1}^{F_{s−1}} exp(−M_{s−1}) / F_{s−1}!] ∏_{i=s}^{n} [m_i^{f_i} exp(−m_i) / f_i!]

where s is some integer value chosen in the range 1 to n, M_{s−1} is the cumulative mean
number of faults in the intervals up to s − 1, and F_{s−1} is the cumulative number of
faults detected up through interval s − 1.

Model 1 estimates

Model 1 uses all the fault counts from the n periods; all data points are of equal
importance. The MLE estimates of the model are

α̂ = β̂F_n / [1 − exp(−β̂n)]

1/[exp(β̂) − 1] − n/[exp(β̂n) − 1] = Σ_{k=0}^{n−1} k f_{k+1} / F_n

where F_n = Σ_{i=1}^{n} f_i and the f_i's are the fault counts in intervals 1 to n.

Model 2 estimates

Model 2 ignores the fault counts from the first through the (s−1)st time periods
completely, i.e. it uses only the data from periods s through n. This reflects the view
that the early time periods contribute little, if anything, to predicting future
behavior. The MLE estimates of the model are

α̂ = β̂F_{s,n} / [1 − exp(−β̂(n − s + 1))]

1/[exp(β̂) − 1] − (n − s + 1)/[exp(β̂(n − s + 1)) − 1] = Σ_{k=0}^{n−s} k f_{s+k} / F_{s,n}

where F_{s,n} = Σ_{k=s}^{n} f_k. Notice that if we let s = 1, the model 2 estimates become
equivalent to those of model 1.

Model 3 estimates

Model 3 uses the cumulative fault count from intervals 1 to s − 1 as the first data point
and the individual fault counts for periods s through n as the additional data points.
This is an intermediate approach between the other two, reflecting the belief that the
combined count of the first s − 1 periods is indicative of the failure rate process
during the later stages. The MLE estimates of the model are

α̂ = β̂F_n / [1 − exp(−β̂n)]

(s − 1)F_{s−1}/[exp(β̂(s − 1)) − 1] + F_{s,n}/[exp(β̂) − 1] − nF_n/[exp(β̂n) − 1] = Σ_{k=0}^{n−s} (s + k − 1) f_{s+k}

where F_{s−1} = Σ_{k=1}^{s−1} f_k. If s = 1 is substituted into the above equations, we
obtain the equivalent estimates for model 1.
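As a sketch, the model 1 pair of equations can be solved numerically; the fault counts below are illustrative, not from the text, and assume the mean counts decrease from interval to interval:

```python
import math

# Sketch: Schneidewind model 1 MLEs from per-interval fault counts
# (illustrative data). beta_hat is found by bisection.

def schneidewind_model1(f):
    """f: fault counts f_1..f_n for n equal-length intervals."""
    n = len(f)
    Fn = sum(f)
    S = sum(k * fk for k, fk in enumerate(f)) / Fn   # sum of k*f_{k+1}, over F_n

    def g(beta):  # zero at the MLE of beta
        return 1.0 / math.expm1(beta) - n / math.expm1(beta * n) - S

    lo, hi = 1e-9, 1.0
    while g(hi) > 0:          # expand until the root is bracketed
        lo, hi = hi, hi * 2.0
    for _ in range(200):      # bisection: g decreases in beta
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    alpha = beta * Fn / -math.expm1(-beta * n)
    return alpha, beta
```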

4.6.Hyperexponential Model
The basic idea is that the different sections (or classes) of the software experience an
exponential failure rate; however, the rates vary over these sections to reflect their
different natures. This could be due to different programming groups doing the different
parts, old versus new code, sections written in different languages, etc. The basic idea is
that different failure behaviors are represented in the different sections. The sum of
these different exponential growth curves is thus reflected not by another exponential,
but by a hyperexponential growth curve. If, in observing a software system, you notice
that different clusters of that software appear to behave differently in their failure
rates, the hyperexponential model may be more appropriate than the classical exponential
model, which assumes a single common failure rate.

Category: Finite failure; Class: Extension to Exponential; Type: Poisson


Nature of failure process: Failure Counts
Assumptions

The basic assumptions are as follows. Suppose there are K sections (classes of the
software) such that within each class
1. The rate of fault detection is proportional to the current fault content within that
section of the software.
2. The fault detection rate remains constant over the intervals between fault
occurrences.
3. A fault is corrected instantaneously without introducing new faults into the
software.
And for the software system as a whole:
4. The cumulative number of failures by time t, M(t), follows a Poisson process with
mean value function µ(t) = N Σ_{i=1}^{K} p_i[1 − exp(−β_i t)], where p_i is the
proportion of faults in the ith class and Σ_{i=1}^{K} p_i = 1.

Model form
The failure intensity function is the derivative of µ(t); we therefore have

λ(t) = N Σ_{i=1}^{K} p_i β_i exp(−β_i t)

The failure intensity function is strictly decreasing for t > 0.


By letting N_i* = N p_i, that is, N_i* is the number of faults in the ith class, one can
obtain the MLE estimates for each class.
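A minimal sketch of the hyperexponential failure intensity above; the values of N, p_i, and β_i are illustrative assumptions, not from the text:

```python
import math

# Sketch: hyperexponential failure intensity for K classes of code.
# N, p, and beta values are illustrative (e.g. old code vs. new code).

def hyperexp_intensity(t, N, p, beta):
    """lambda(t) = N * sum_i p_i * beta_i * exp(-beta_i * t), with sum_i p_i = 1."""
    return N * sum(pi * bi * math.exp(-bi * t) for pi, bi in zip(p, beta))
```

At t = 0 this gives λ(0) = N Σ p_i β_i, and the intensity decreases strictly thereafter, as stated above.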

4.7.Schick-Wolverton
One of the most widely used models for hardware reliability modeling is the Weibull
distribution. It can accommodate increasing, decreasing, or constant failure rates
because of the great flexibility expressed through its parameters. The Schick-Wolverton
model is an important example of this type. In this section, the model is first developed
for the standard Weibull distribution and then specialized to Schick-Wolverton.

Category: Finite failure; Class: Weibull; Type: Binomial


Nature of failure process: Fault counts

Assumptions
In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:
1. At the start of software testing, the software contains a fixed number of faults, N.
2. The time to failure of fault a, denoted T_a, follows a Weibull distribution with
parameters α and β.
3. The numbers of faults detected in each of the respective intervals are independent
for any finite collection of times.

Model form

The failure intensity and mean value functions are given by

λ(t) = N f_a(t) = Nαβt^(α−1) exp(−βt^α)

µ(t) = N F_a(t) = N(1 − exp(−βt^α))

The total number of faults in the system at the start is lim_{t→∞} µ(t) = N. If α = 2,
the Weibull model becomes the Schick-Wolverton model. Also, from the assumptions, if
α = 1 the distribution f_a becomes the exponential, and if α = 2 we have the Rayleigh
distribution, another important failure model in hardware reliability theory. If
0 < α < 1, the per-fault hazard rate decreases with time; if α = 1 (exponential) it is
constant; and if α > 1, it increases. The conditional hazard rate is shown to be

z(t | t_{i−1}) = (N − i + 1)αβ(t + t_{i−1})^(α−1)  for t_{i−1} ≤ t + t_{i−1} < t_i

The reliability function is obtained from the cumulative distribution function as

R(t) = 1 − F(t) = exp(−βt^α)

MTTF = ∫₀^∞ R(t) dt = Γ(1/α + 1) / β^(1/α)

where Γ(·) is the gamma function.
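These closed-form expressions are easy to evaluate directly; a sketch with illustrative parameter values:

```python
import math

# Sketch: Weibull-class reliability and MTTF; alpha = 2 gives the
# Schick-Wolverton (Rayleigh) case. Parameter values are illustrative.

def reliability(t, alpha, beta):
    """R(t) = exp(-beta * t**alpha)."""
    return math.exp(-beta * t ** alpha)

def mttf(alpha, beta):
    """MTTF = Gamma(1/alpha + 1) / beta**(1/alpha)."""
    return math.gamma(1.0 / alpha + 1.0) / beta ** (1.0 / alpha)
```

For α = 1 this reduces to the exponential MTTF of 1/β, and for α = 2 (the Schick-Wolverton case) to Γ(3/2)/√β.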

4.8.S-shaped Reliability Growth Model


The S-shaped reliability growth model assumes that the curve formed by the mean value
function µ(t) is an S-shaped curve, rather than the exponential growth curve of the
Goel-Okumoto model. The model supposes slow reliability growth at first: this reflects
the initial learning curve at the beginning, as test team members become familiar with
the software, followed by growth and then leveling off as the residual faults become
more difficult to uncover.

Category: Finite failure; Class: Gamma; Type: Poisson


Nature of failure process: Fault counts

Assumptions
1. The failure occurrences are independent and random
2. The initial fault content is random variable
3. The cumulative number of failures by time t, M(t) follows a Poisson process with
mean value function µ(t). The mean value function is of the formµ(t) = α[1-
(1+βt)e-βt ] for α , β > 0.
4. The time between failures of the (i-1)st and the ith depends on the time to failure of
the (i-1)
5. When a failure occurs, the fault, which caused it, is immediately removed and no
other faults are introduced.
6. The software is operated in a similar operational profile as the anticipated usage

Model form

The testing period is divided into various intervals that are independent of each other.
Suppose f_i is the number of faults occurring in the interval of length l_i = t_i* − t_{i−1}*.
From the assumptions, each f_i is an independent Poisson random variable with mean

µ(t_i*) − µ(t_{i−1}*) = α[1 − (1 + βt_i*)e^(−βt_i*)] − α[1 − (1 + βt_{i−1}*)e^(−βt_{i−1}*)] = α[(1 + βt_{i−1}*)e^(−βt_{i−1}*) − (1 + βt_i*)e^(−βt_i*)]

Also, from the mean value function µ(t) = α[1 − (1 + βt)e^(−βt)], we have the failure
intensity function

λ(t) = µ′(t) = αβ²te^(−βt)

The model gets its S-shaped form from this mean value function.

The joint density of the fault counts over the given partition is

∏_{i=1}^{n} [µ(t_i*) − µ(t_{i−1}*)]^(f_i) exp(−(µ(t_i*) − µ(t_{i−1}*))) / f_i!

using the assumptions from the previous section. The MLEs of α and β are shown to be
the solutions of the following pair of equations:

Σ_{i=1}^{n} f_i = α̂[1 − (1 + β̂t_n*) exp(−β̂t_n*)]

α̂(t_n*)² exp(−β̂t_n*) = Σ_{i=1}^{n} f_i [(t_i*)² exp(−β̂t_i*) − (t_{i−1}*)² exp(−β̂t_{i−1}*)] / [(1 + β̂t_{i−1}*) exp(−β̂t_{i−1}*) − (1 + β̂t_i*) exp(−β̂t_i*)]
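A sketch of the model's two basic functions, with illustrative parameter values; the intensity peaks at t = 1/β, which is what gives µ(t) its S-shape:

```python
import math

# Sketch: mean value and failure intensity for the delayed S-shaped model.
# alpha (expected total faults) and beta values are illustrative.

def mean_value(t, alpha, beta):
    """mu(t) = alpha * (1 - (1 + beta*t) * exp(-beta*t))."""
    return alpha * (1.0 - (1.0 + beta * t) * math.exp(-beta * t))

def intensity(t, alpha, beta):
    """lambda(t) = alpha * beta**2 * t * exp(-beta*t); peaks at t = 1/beta."""
    return alpha * beta ** 2 * t * math.exp(-beta * t)
```

Unlike the Goel-Okumoto intensity, which is strictly decreasing, this intensity first rises (the learning phase) and then falls, while µ(t) still levels off at α as t grows.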

4.9.Duane’s Model

In this model, the times of failures are considered, and the number of failure
occurrences per unit of time is assumed to follow a nonhomogeneous Poisson process. The
model is sometimes referred to as the power model, since the mean value function for the
cumulative number of failures by time t is taken as a power of t, that is, µ(t) = αt^β
for some β > 0 and α > 0. (For the case β = 1, we have the homogeneous Poisson process
model.) This is an infinite failures model, since lim_{t→∞} µ(t) = ∞.

Category: Infinite failure; Family: Power; Type: Poisson


Nature of failure process: Fault Counts
Assumptions
1. The software is operated in a similar operational profile as anticipated usage
2. The failure occurrences are independent.
3. The cumulative number of failures by time t, M(t), follows a Poisson process
with mean value function µ(t) = αt^β for some β > 0 and α > 0.

Model form

This model represents a Poisson process with a mean value function of

µ(t) = αt^β

If T is the total time the software is observed, then

µ(T)/T = αT^β/T = (expected number of failures by time T)/(total testing time)

If µ(T)/T is plotted on log-log paper, a straight line of the form Y = a + bX, with
a = ln(α), b = β − 1, and X = ln(T), is obtained.

The failure intensity function is obtained by taking the derivative of the mean value
function, that is,

λ(t) = dµ(t)/dt = αβt^(β−1)

For β > 1, the failure intensity function is strictly increasing, so there can be no
reliability growth; for β = 1, the failure intensity is constant (homogeneous Poisson
process); and for 0 < β < 1, the failure intensity is strictly decreasing.

The maximum likelihood estimates for the Duane model are

α̂ = n / t_n^β̂

β̂ = n / Σ_{i=1}^{n−1} ln(t_n/t_i)

where the t_i's are the observed failure times (in either CPU time or wall-clock time)
and n is the number of failures observed to date.
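Since both estimates are in closed form, they are straightforward to compute; a sketch with illustrative failure times (not data from the text):

```python
import math

# Sketch: closed-form Duane (power-law) model MLEs from observed failure
# times. The data are illustrative; times may be CPU or wall-clock.

def duane_mle(t):
    """t: ordered failure times t_1..t_n; returns (alpha_hat, beta_hat)."""
    n = len(t)
    beta_hat = n / sum(math.log(t[-1] / ti) for ti in t[:-1])
    alpha_hat = n / t[-1] ** beta_hat
    return alpha_hat, beta_hat
```

By construction the fitted mean value function passes through the last observation, i.e. µ(t_n) = α̂ t_n^β̂ = n.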

4.10.Geometric Model

The time between failures is taken to follow an exponential distribution whose mean
decreases in a geometric fashion. The discovery of earlier faults is taken to have a
larger impact on reducing the hazard rate than that of later ones. As failures occur,
the hazard rate decreases in a geometric progression.
Category: Infinite failure; Family: Geometric

Assumptions

In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:
1. There are an infinite number of total faults in the system.
2. The fault detection rate forms a geometric progression and is constant between
fault detections.
3. The time between fault detection follows an exponential distribution.

Model form

The density for the time between the (i−1)st and ith failures is exponential, of the form

f(X_i) = Dφ^(i−1) exp(−Dφ^(i−1)X_i) = z(t_{i−1}) exp(−z(t_{i−1})X_i)

Thus the expected time between failures is

E(X_i) = 1/z(t_{i−1}) = 1/(Dφ^(i−1))  for i = 1, …, n

The mean value and failure intensity functions are

µ(t) = (1/β) ln([Dβ exp(β)]t + 1)

λ(t) = D exp(β) / ([Dβ exp(β)]t + 1)

where β = −ln(φ) for 0 < φ < 1.

From the assumptions, the joint density function of the X_i's is

∏_{i=1}^{n} f(X_i) = D^n [∏_{i=1}^{n} φ^(i−1)] exp(−D Σ_{i=1}^{n} φ^(i−1)X_i)

Taking the natural log of this function and taking the partials with respect to φ and D,
the maximum likelihood estimates are the solutions of the following pair of equations:

D̂ = n / Σ_{i=1}^{n} φ̂^(i−1) X_i

Σ_{i=1}^{n} iφ̂^(i−1) X_i / Σ_{i=1}^{n} φ̂^(i−1) X_i = (n + 1)/2
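A sketch of solving this pair numerically, by bisecting for φ̂ on (0, 1) and then computing D̂ in closed form; the inter-failure times are illustrative and assume reliability growth:

```python
# Sketch: geometric model MLEs from inter-failure times (illustrative data).
# phi_hat is found by bisection on (0, 1); D_hat then follows.

def geometric_mle(x):
    """x: inter-failure times X_1..X_n; returns (D_hat, phi_hat)."""
    n = len(x)

    def g(phi):  # weighted mean of the index i minus (n+1)/2; zero at the MLE
        w = [phi ** i * xi for i, xi in enumerate(x)]   # phi^(i-1) * X_i
        return sum((i + 1) * wi for i, wi in enumerate(w)) / sum(w) - (n + 1) / 2.0

    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):   # bisection: g increases in phi
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    D = n / sum(phi ** i * xi for i, xi in enumerate(x))
    return D, phi
```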

4.11.Musa-Okumoto Logarithmic Poisson


In this model, the failure intensity decreases exponentially with the expected number of
failures experienced; this reflects the view that the earlier discovered failures have
a greater impact on reducing the failure intensity function than those encountered
later. It is called logarithmic because the expected number of failures over time is a
logarithmic function.

Category: Infinite failure; Family: Geometric; Type: Poisson


Nature of failure process: Time to failure
Assumptions

In addition to the standard assumptions of the Jelinski-Moranda model, the basic
assumptions are:

1. The failure intensity decreases exponentially with the expected number of failures
experienced, that is, λ(t) = λ0 exp(−θµ(t)), where µ(t) is the mean value function,
θ > 0 is the failure rate decay parameter, and λ0 > 0 is the initial failure rate.

2. The cumulative number of failures by time t, M(t), follows a Poisson process.

Solving these two assumptions together gives the failure intensity

λ(t) = λ0/(λ0θt + 1)

A second expression of the logarithmic Poisson model, which aids in obtaining the
maximum likelihood estimates, comes from a reparameterization of the model. Let
β0 = θ^(−1) and β1 = λ0θ. The intensity and mean value functions then become

λ(t) = β0β1/(β1t + 1) and µ(t) = β0 ln(β1t + 1)

Using the reparameterized model, the maximum likelihood estimates of β0 and β1 are the
solutions of

β̂0 = n / ln(1 + β̂1t_n)

(1/β̂1) Σ_{i=1}^{n} 1/(1 + β̂1t_i) = n t_n / [(1 + β̂1t_n) ln(1 + β̂1t_n)]
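The second equation must be solved numerically for β̂1; a sketch by bisection with illustrative failure times (not data from the text):

```python
import math

# Sketch: Musa-Okumoto logarithmic Poisson MLEs from failure times
# (illustrative data). beta1_hat is found by bisection.

def musa_okumoto_mle(t):
    """t: ordered failure times t_1..t_n; returns (beta0_hat, beta1_hat)."""
    n = len(t)
    tn = t[-1]

    def g(b1):  # zero at the MLE of beta_1
        lhs = sum(1.0 / (1.0 + b1 * ti) for ti in t) / b1
        rhs = n * tn / ((1.0 + b1 * tn) * math.log1p(b1 * tn))
        return lhs - rhs

    lo, hi = 1e-9, 1.0 / tn
    while g(hi) > 0:          # expand until the root is bracketed
        lo, hi = hi, hi * 2.0
    for _ in range(200):      # bisection: g decreases through the root
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    b1 = 0.5 * (lo + hi)
    b0 = n / math.log1p(b1 * tn)
    return b0, b1
```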

4.12.Littlewood-Verrall Reliability Growth Model

The Littlewood-Verrall model is the best-known example of the Bayesian type of model.
The model attempts to account for fault introduction during the fault correction
process, through which the software program could become less reliable than before.
With each fault correction, a sequence of software programs is generated, each obtained
from its predecessor by attempting to fix a fault. Because of uncertainty, the new
version could be better or worse than its predecessor; thus another source of variation
is introduced. This is reflected in the parameters that define the failure time
distributions, which are taken to be random. The distribution of failure times is, as in
the earlier models, assumed to be exponential with a certain failure rate, but that rate
is assumed to be random rather than constant as before. The distribution of this rate,
as reflected by the prior, is assumed to be a gamma distribution.

Category: Infinite failure; Bayesian Model


Nature of failure process: Time to failure
Assumptions
1. Times between failures (successive), that is, Xi’s are assumed to be independent
exponential random variables with parameter ξi , i = 1, ….., n.
2. The ξ_i's form a sequence of independent random variables, each with a gamma
distribution with parameters α and ψ(i). The function ψ(i) is taken to be an
increasing function of i that describes the quality of the programmer and the
difficulty of the task; a good programmer would have a more rapidly increasing
function than a poorer one.
3. The software is operated in a manner similar to the anticipated operational usage.

Model form

The prior distribution for the ξ_i's is of the form

g(ξ_i | ψ(i), α) = [ψ(i)]^α ξ_i^(α−1) exp(−ψ(i)ξ_i) / Γ(α),  ξ_i > 0

The marginal distribution of the X_i's can be shown to be

f(x_i | α, ψ(i)) = α[ψ(i)]^α / [x_i + ψ(i)]^(α+1)  for x_i > 0

The joint density is

f(x_1, x_2, …, x_n) = α^n ∏_{i=1}^{n} [ψ(i)]^α / ∏_{i=1}^{n} [x_i + ψ(i)]^(α+1)  for x_i > 0, i = 1, …, n

The posterior distribution for the ξ_i's is therefore obtained as

h(ξ_1, ξ_2, …, ξ_n) = ∏_{i=1}^{n} [(x_i + ψ(i))^(α+1) ξ_i^α exp(−ξ_i(x_i + ψ(i)))] / [Γ(α + 1)]^n,  ξ_i > 0, i = 1, …, n

The failure intensity functions for the linear form (ψ(i) = β0 + β1i) and quadratic form
(ψ(i) = β0 + β1i²) can be shown to be

λ_linear(t) = (α − 1) / [β0² + 2β1t(α − 1)]^(1/2)

λ_quadratic(t) = ν1 [(t + (t² + ν2)^(1/2))^(1/3) − (t − (t² + ν2)^(1/2))^(1/3)] / (t² + ν2)^(1/2)

where ν1 = (α − 1)^(1/3)/(18β1)^(1/3) and ν2 = 4β0³/(9(α − 1)²β1).

Using the marginal distribution function of the x_i's, the maximum likelihood estimates
of α, β0, and β1 can be found as the solutions of the following system of equations:

n/α̂ + Σ_{i=1}^{n} ln(ψ̂(i)) − Σ_{i=1}^{n} ln(x_i + ψ̂(i)) = 0

α̂ Σ_{i=1}^{n} 1/ψ̂(i) − (α̂ + 1) Σ_{i=1}^{n} 1/(x_i + ψ̂(i)) = 0

α̂ Σ_{i=1}^{n} i′/ψ̂(i) − (α̂ + 1) Σ_{i=1}^{n} i′/(x_i + ψ̂(i)) = 0

where ψ(i) = β0 + β1i′, and i′ is either i (linear form) or i² (quadratic form), using a
uniform prior for α.
