
Standardization of Software Reliability Estimation and Prediction: Application to Space Systems

Norman F. Schneidewind
Code MISS
Naval Postgraduate School
Monterey, CA 93943

Abstract

The American Institute of Aeronautics and Astronautics, through its Software Reliability Working Group, and the American National Standards Institute have produced a Recommended Practice for Software Reliability, ANSI/AIAA R-013-1992, for software reliability estimation and prediction. By ballot this document was approved by AIAA and ANSI as a recommended practice on February 23, 1993. This paper presents "Appendix F - Software Reliability Measurement Case Studies: Using Software Reliability Models for Developing Test Strategies" from the Recommended Practice.

Keywords: Application of software reliability models, reliability prediction, testing strategies, Space Shuttle.

The American Institute of Aeronautics and Astronautics, through its Software Reliability Working Group, and the American National Standards Institute have produced a Recommended Practice for Software Reliability, ANSI/AIAA R-013-1992, for software reliability estimation and prediction. By ballot this document was approved by AIAA and ANSI as a recommended practice on February 23, 1993. This document contains recommended procedures for implementing and using software reliability models and a set of recommended models. The effort also involves the establishment of a national software reliability database to be located at the Rome Laboratory, Griffiss Air Force Base, Rome, New York. A "Call for Participation" for this database project appeared in the October 1991 issue of Computer Magazine, p. 94. This document will be proposed as an ANSI recommended practice. The working group was chaired by Dave Siefert of NCR, with Ted Keller, IBM-FSD, and George Stark, Mitre Corporation, as vice chairs.
This paper presents "Appendix F - Software Reliability Measurement Case Studies: Using Software Reliability Models for Developing Test Strategies" from the Recommended Practice.
too frequently. So, you could predict and reallocate at major
Software reliability models provide the software manager with a powerful tool for predicting, controlling, and assessing the reliability of software. In combination, these functions allow an organization to determine whether its reliability goals have been met. We show how the recommended practice can be applied to use reliability models in such important processes as predicting reliability, detecting anomalous conditions in software, and developing strategies to bring software into conformance with goals. The Space Shuttle Primary Avionics Software Subsystem is used as an example.

It is important for software organizations to have a strategy for testing; otherwise, testing costs are likely to get out of control. Without a strategy, each module you maintain may be treated equally with respect to allocation of resources. You need to treat your modules unequally! That is, allocate more test time, effort, and funds during testing to the modules which have the highest predicted number of failures, F(t1,t2), during the interval t1,t2, where t1,t2 could be execution time or labor time (of maintainers) for a single module. In the remainder of this section, "time" means execution time. Use the convention that you make a prediction of failures at t1 for a continuous interval with end-points t1 and t2.

The following sections describe how a reliability model can be used to predict F(t1,t2). The testing strategy is the following:

Allocate test execution time to your modules during testing in proportion to F(t1,t2).

You update model parameters and predictions based on observing the actual number of failures, X(1,t1), during 1,t1. This is shown in Figure 1, where you predict F(t1,t2) using the model and X(1,t1). In this figure, te is the total available test time for a single module. Note that you could have t2 = te (i.e., the prediction is made to the end of the test period).

Figure 1. Reliability prediction time scale (time axis 1, t1, t2, te; X(1,t1) is observed during 1,t1 and F(t1,t2) is predicted for t1,t2)

Based on the updated predictions, you may want to reallocate your test resources during testing (i.e., test execution time). Of course, it could be disruptive to your organization to reallocate too frequently. So, you could predict and reallocate at major milestones (e.g., major upgrades). Using the Schneidewind software reliability model [2], the Space Shuttle Primary Avionics Software Subsystem, and failure data from the AIAA Software Reliability Database [3] as an example, the process of using prediction for allocating test resources is developed. Two parameters, α and β, which will be used in the following equations, are estimated by applying the model [2] to the failure data.

Once the parameters have been established, you can predict various quantities that will assist you in allocating test resources, as shown in the following equations:

o Number of failures during 1,t:

F(t) = (α/β)[1 − exp(−β(t − s + 1))] + X(s−1)   (1)

where 1 ≤ s ≤ t is the starting failure count interval, determined by a mean square error criterion, and X(s−1) is the actual cumulative failure count in 1,s−1.

o Using (1) and Figure 1, you can predict the number of failures during t1,t2:

F(t1,t2) = (α/β)[1 − exp(−β(t2 − s + 1))] − X(s,t1)   (2)

where X(s,t1) is the actual cumulative failure count in s,t1.

o Also, you can predict the maximum number of failures during the life (t = ∞) of the software:

F(∞) = (α/β) + X(s−1)   (3)

o Using (3), you can predict the maximum remaining number of failures at t:

R(t) = (α/β) − X(s,t)   (4)

where X(s,t) is the actual cumulative failure count in s,t.
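To make equations (1) through (4) concrete, the sketch below implements them in Python. This is a minimal illustration, not part of the Recommended Practice; the function and argument names are ours, and α, β, and s are assumed to have already been estimated (e.g., by SMERFS [1]).

```python
import math

def failures_to_t(alpha, beta, s, t, x_before_s):
    """Equation (1): predicted cumulative failures F(t) during 1,t.
    x_before_s is X(s-1), the actual cumulative failure count in 1,s-1."""
    return (alpha / beta) * (1.0 - math.exp(-beta * (t - s + 1))) + x_before_s

def failures_in_interval(alpha, beta, s, t2, x_s_t1):
    """Equation (2): predicted failures F(t1,t2) during t1,t2. Note that
    t1 enters only through x_s_t1, which is X(s,t1), the actual
    cumulative failure count in s,t1."""
    return (alpha / beta) * (1.0 - math.exp(-beta * (t2 - s + 1))) - x_s_t1

def max_failures(alpha, beta, x_before_s):
    """Equation (3): predicted maximum failures F(inf) over the software's life."""
    return alpha / beta + x_before_s

def remaining_failures(alpha, beta, x_s_t):
    """Equation (4): predicted maximum remaining failures R(t), where
    x_s_t is X(s,t), the actual cumulative failure count in s,t."""
    return alpha / beta - x_s_t

# Module 1 of Table 1 (alpha = 1.6915, beta = .1306, X(1,20) = 12),
# assuming s = 1, which is consistent with Tables 2 and 3:
print(max_failures(1.6915, 0.1306, 0))         # ~12.95, the F(inf) entry in Table 2
print(remaining_failures(1.6915, 0.1306, 12))  # ~0.95, the R(T) entry in Table 2
```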
Given n modules, allocate test execution time periods T_i for each module i according to the following equation:

T_i = [F_i(t1,t2) · (n)(t2 − t1)] / [Σ(i=1 to n) F_i(t1,t2)]   (5)

In (5), note that although predictions are made using (2) for a single module, the total available test execution time (n)(t2 − t1) is allocated for each module i across n modules. You use the same interval 1,20 for each module to estimate α and β and the same interval 20,30 for each module to make predictions, but from then on a variable amount of test time T_i is used, depending on the predictions.
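As a sketch of how (5) might be coded (again illustrative, assuming the F_i(t1,t2) values have been predicted with equation (2) for each module):

```python
def allocate_test_time(predicted, t1=20, t2=30):
    """Equation (5): split the total available test time, n*(t2 - t1)
    periods, across n modules in proportion to each module's predicted
    failures F_i(t1,t2); `predicted` maps module name -> F_i(t1,t2)."""
    n = len(predicted)
    total_time = n * (t2 - t1)
    total_predicted = sum(predicted.values())
    return {m: total_time * f / total_predicted for m, f in predicted.items()}

# Hypothetical F_i(20,30) values in the ratio 7.0 : 11.6 : 11.4 reproduce
# the T column of Table 2; the allocations sum to 3 * 10 = 30 periods.
print(allocate_test_time({"Module 1": 0.70, "Module 2": 1.16, "Module 3": 1.14}))
```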
Tables 1 and 2 summarize the results of applying the model to the failure data for three Space Shuttle modules (operational increments). The modules are executed continuously, 24 hours per day, day after day. For illustrative purposes, each period in the test interval is assumed to be equal to 30 days. After executing the modules during 1,20, the SMERFS [1] program was applied to the observed failure data during 1,20 to obtain estimates of α and β. The total number of failures observed during 1,20 and the estimated parameters are shown in Table 1.
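In the example, SMERFS [1] performs the parameter estimation. As a rough stand-in for readers without SMERFS (this is our assumption about a workable substitute, not the SMERFS algorithm), α and β can be fitted by maximum likelihood, treating the failure count in period i as Poisson with mean (α/β)[exp(−β(i−1)) − exp(−βi)]; the mean-square-error selection of the starting interval s is omitted for brevity:

```python
import numpy as np
from scipy.optimize import minimize

def fit_alpha_beta(counts):
    """Fit alpha and beta to per-period failure counts by maximizing the
    Poisson log-likelihood of the Schneidewind model."""
    counts = np.asarray(counts, dtype=float)
    i = np.arange(1, len(counts) + 1)

    def neg_log_likelihood(log_params):
        alpha, beta = np.exp(log_params)  # optimize in log space so alpha, beta > 0
        mean = (alpha / beta) * (np.exp(-beta * (i - 1)) - np.exp(-beta * i))
        return -np.sum(counts * np.log(mean) - mean)  # constant terms dropped

    fit = minimize(neg_log_likelihood, x0=[0.0, -2.0], method="Nelder-Mead")
    alpha, beta = np.exp(fit.x)
    return alpha, beta
```

Applied to the per-period failure counts of one module during 1,20, this plays the role of the Table 1 estimates.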
Equations (2), (3), (4), and (5) were used to obtain the predictions in Table 2 during 20,30. The prediction of F(20,30) led to the prediction of T, the allocated number of test execution time periods. The number of additional failures that were subsequently observed, as testing continued during 20,20+T, is shown as X(20,20+T). Since there may be remaining failures, R(T) is predicted from (4) and shown in Table 2. The predicted remaining failures indicate that additional testing is warranted. Note that the actual total number of failures F(∞) would only be known after all testing (i.e., extremely long test time) is complete and was not known at 20+T. Thus you need additional procedures for deciding how long to test to reach a given number of remaining failures. A variant of this decision is the stopping rule (when to stop testing?). This is discussed in the following section.

Making Test Decisions During Testing

In addition to allocating test resources, you can use reliability prediction to estimate the minimum total test execution time t2 (i.e., interval 1,t2) necessary to reduce the predicted maximum number of remaining failures to R(t2). To do this, subtract equation (1) from (3), set the result equal to R(t2), and solve for t2:

t2 = [log(α/(β[R(t2)]))]/β + (s − 1)   (6)

where, by using (3), R(t2) can be expressed as:

R(t2) = p[(α/β) + X(s−1)]   (7)

where p is the desired fraction (percentage) of remaining failures at t2.

Equation (6) is plotted for Module 1, Module 2, and Module 3 in Figure 2 for various values of p. You can use (6) as a rule to determine when to stop testing a given module during testing.

Using (6) and Figure 2 you can produce Table 3, which tells you the following: the total minimum test execution time t2 from time 0 to reach essentially 0 remaining failures (i.e., at p = .001 (.1%), predicted remaining failures are .01295, .01251, and .01165 for Module 1, Module 2, and Module 3, respectively (see (6) and Table 2)); the additional test execution time beyond 20+T shown in Table 2; and the actual amount of test time required, starting at 0, for the "last" failure to occur (this quantity comes from the data and not from prediction). You don't know that it is necessarily the last; you only know that it was the "last" after 64 periods (1910 days), 44 periods (1314 days), and 66 periods (1951 days) for Module 1, Module 2, and Module 3, respectively. So, t2 = 52.9, 54.0, and 63.0 periods would constitute your stopping rule for Module 1, Module 2, and Module 3, respectively. This procedure allows you to exercise control over software quality.
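The stopping rule of equations (6) and (7) is a one-line computation. The sketch below (names illustrative) reproduces the Table 3 entry for Module 1, assuming s = 1 for that module, which is consistent with Tables 2 and 3:

```python
import math

def stopping_time(alpha, beta, s, p, x_before_s=0):
    """Equations (6) and (7): the total test execution time t2 needed to
    drive predicted remaining failures down to R(t2) = p * F(inf)."""
    r_t2 = p * (alpha / beta + x_before_s)                   # equation (7)
    return math.log(alpha / (beta * r_t2)) / beta + (s - 1)  # equation (6)

print(stopping_time(1.6915, 0.1306, 1, 0.001))  # ~52.9 periods (Table 3, Module 1)
```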

Summary

We have shown how to use a software reliability model for failure prediction, allocation of test resources during testing based on failure prediction, and a criterion for terminating testing based on prediction of remaining failures. These elements comprise a strategy for assigning priorities to modules for testing.
Table 1

Observed Failures and Model Parameters

            Failures X(1,20)    α         β
Module 1    12                  1.6915    .1306
Module 2    11                  1.7642    .1411
Module 3    10                  1.3403    .1151

Table 2

Allocation of Test Resources During Testing

                       F(∞)       R(T)       T          X(20,20+T)
                       failures   failures   periods    failures

Module 1   Predicted   12.95      .950       7.0
           Actual                                       0

Module 2   Predicted   12.51      .507       11.6
           Actual                                       1

Module 3   Predicted   11.65      .646       11.4
           Actual                                       1

Table 3

Test Time t2 Required to Reach "0" Remaining Failures

p = .001

            t2         Additional Test Time   Last Failure Found
            periods    periods                periods

Module 1    52.9       45.9                   64
Module 2    54.0       42.4                   44
Module 3    63.0       51.6                   66

References

[1] William H. Farr and Oliver D. Smith, Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) Users Guide, NAVSWC TR-84-373, Revision 2, Naval Surface Weapons Center, March 1991. (This program is available from Dr. William H. Farr, Strategic Systems Department, Naval Surface Warfare Center, Dahlgren, VA 22448, Tel: 703-663-4719.)

[2] Norman F. Schneidewind and T.W. Keller, "Application of Reliability Models to the Space Shuttle", IEEE Software, July 1992, pp. 28-33.

[3] Recommended Practice for Software Reliability, ANSI/AIAA R-013-1992, American Institute of Aeronautics and Astronautics, 1993. (This document is available from Administrator, Standards, AIAA Headquarters, 370 L'Enfant Promenade SW, Washington, DC 20024-2518.)

Figure 2. Execution Time to Reach Remaining Failure Fraction p (equation (6) plotted for Module 1, Module 2, and Module 3)

