JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted
digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about
JSTOR, please contact support@jstor.org.
American Statistical Association, American Society for Quality, Taylor & Francis, Ltd.
are collaborating with JSTOR to digitize, preserve and extend access to Technometrics
This content downloaded from 103.27.9.50 on Wed, 25 May 2016 04:30:44 UTC
All use subject to http://about.jstor.org/terms
Luis A. ESCOBAR
(wqmeeker@iastate.edu)
C. Joseph Lu
Department of Statistics
National Cheng-Kung University
Tainan 70101
Taiwan
High-reliability systems generally require that individual system components have extremely high
reliability over long periods of time. Short product development times require reliability tests to be
conducted under severe time constraints. Frequently few or no failures occur during such tests, even
with acceleration. Thus, it is difficult to assess reliability with traditional life tests that record only
failure times. For some components, degradation measures can be taken over time. A relationship
between component failure and amount of degradation makes it possible to use degradation models
and data to make inferences and predictions about a failure-time distribution. This article describes
such degradation models and related data-analysis methods. Maximum likelihood estimation is used
to estimate model parameters from the underlying mixed-effects nonlinear regression model.
Simulation-based methods are used to compute confidence intervals for quantities of interest (e.g.,
failure probabilities). Finally, we use a numerical example to compare the results of accelerated
degradation analysis and traditional accelerated life-test failure-time analysis.
KEY WORDS: Bootstrap; Maximum likelihood; Mixed effects; Nonlinear estimation; Random
effects; Reliability.
1. INTRODUCTION
1.1 Background
Today's manufacturers face strong pressure to develop newer, higher-technology products in record time while improving productivity, product field reliability, and overall quality. This has motivated the development of methods like concurrent engineering and encouraged wider use of designed experiments for product and process improvement efforts. The requirements for higher reliability have increased the need for more up-front testing of materials, components, and systems. This is in line with the generally accepted modern quality philosophy for producing high-reliability products: achieve high reliability by improving the design and manufacturing processes, moving away from reliance on inspection to achieve high reliability.

Estimating the failure-time distribution or long-term performance of components of high-reliability products is particularly difficult because few units will fail or degrade appreciably in a test of practical length at normal use conditions. For this reason, accelerated tests (AT's) are widely used to obtain timely information on the reliability of product components and materials. Generally, information from tests at high levels of one or more accelerating variables is extrapolated to estimate life at normal use conditions. In some cases the level of an accelerating variable is increased or otherwise changed during the course of a test (step-stress and progressive-stress AT's). AT results are used in design-for-reliability processes to assess or demonstrate component and subsystem reliability, certify components, detect failure modes, compare different manufacturers, and so forth. AT's have become increasingly [...]
For some components it is possible to measure physical degradation directly over time, either continuously or at specific points in time. In most reliability-testing applications, such degradation data can have important practical advantages. In particular, direct observation of the degradation process (e.g., tire wear) may allow direct modeling of the failure-causing mechanism, providing more credible and precise reliability estimates and a firmer basis for the extrapolation that accelerated testing requires.

Example 1: Device-B Power-Output Degradation. Figure 1 shows the decrease in power output, over time, for a sample of devices. Samples of devices were tested at each of three levels of temperature. At standard operating temperatures (e.g., 80°C junction temperature), the devices degrade too slowly to yield useful test data in a reasonable amount of time.
[Figure 1. Accelerated-Degradation-Test Results Giving Power Drop in Device-B Output for a Sample of Units Tested at Three Levels of Temperature (power drop in dB versus hours; panels labeled 150 Degrees C, etc.).]

1.3 Literature

[...] oxide silicon (MOS) devices. Nelson (1990, chap. 11) described applications and models for accelerated degradation and described Arrhenius analysis for data involving a destructive test (only one degradation reading on each unit). Carey and Koenig (1991) described an application of the Carey and Tortorella (1988) methods of accelerated degradation analysis in the assessment of the reliability of a logic device. Tobias and Trindade (1995) illustrated the use of some simple linear regression methods for analyzing degradation data. Murray (1993, 1994) and Murray and Maekawa (1996) used such methods to analyze accelerated-degradation-test data for data-storage disk error rates. Tseng, Hamada, and Chiao (1995) used similar methods with experimental data on lumens output from fluorescent lamps. [...] useful models for degradation processes at a particular level of the accelerating variables that can be used to relate degradation level and failure time. [...] theory and limited real data available at the time the more complete experiment was being planned.

[...] cdf for a specified degradation model and use the results to [...] the failure-time distribution. Section 7 describes and illustrates the use of a parametric bootstrap algorithm to compute confidence intervals. [...] results from a traditional accelerated life-test analysis. In Section 9 we conclude with discussion of some areas for further research.
A1 → A2,

and the rate equations for this reaction are

    dA1/dt = −k1 A1  and  dA2/dt = k1 A1,   k1 > 0.   (1)

The solution of this system of differential equations gives

    A1(t) = A1(0) exp(−k1 t).

[...] times. A degradation model should account for the important sources of variability in a failure process. Figure 2 shows degradation curves with unit-to-unit variability; units can differ with respect to the amount of material available to wear, initial level of degradation, amount of harmful degradation-causing material, and so on. For some applications, the variable of interest is the amount of change from an initial level of some measure of performance (typically measured either in percent change or in dB). This is why the paths in Figure 2 are shown starting at the same point. This adjustment is useful when the corresponding failure times, defined by [...]

[Figure 2: degradation paths plotted against hours (0 to 8,000); all paths begin at a common initial level.]
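The stated solution of the rate equations can be checked numerically. The short sketch below (Python; the values A1(0) = 1 and k1 = .01 per hour are purely illustrative) confirms by a central finite difference that A1(t) = A1(0) exp(−k1 t) satisfies the first rate equation in (1):

```python
import math

def a1(t, a1_0=1.0, k1=0.01):
    """Stated solution of the rate equation (1): A1(t) = A1(0)*exp(-k1*t)."""
    return a1_0 * math.exp(-k1 * t)

# Central finite difference for dA1/dt at t = 50 hours.
t, h, k1 = 50.0, 1e-5, 0.01
deriv = (a1(t + h) - a1(t - h)) / (2 * h)
# The derivative matches -k1 * A1(t), as the rate equation requires.
```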
The scales of y and t can be chosen (as suggested by physical theory and the data) to simplify the form of D(t, β). For example, the relationship between the logarithm of degradation and the logarithm of time might be modeled by the [...]
[...] components of the vector β are independent of the εij deviations. We also assume that the εij deviations are independent [...] behavior in Df.
4. ACCELERATION MODEL
often possible to use some form of acceleration. Increasing the level of acceleration variables like temperature or humidity can accelerate the chemical processes leading to failure. The Arrhenius relationship for the reaction rate as a function of temperature is

    R(temp) = γ0 exp[ −Ea / (kB × (temp + 273.15)) ],

where temp is temperature in degrees Celsius, kB is Boltzmann's constant in units of eV per K (1/kB = 11,605 K per eV), and γ0 and the activation energy Ea are constants. The corresponding Arrhenius acceleration factor relative to the use temperature tempU is

    AF(temp) = R(temp)/R(tempU)
             = exp[ Ea × ( 11,605/(tempU + 273.15) − 11,605/(temp + 273.15) ) ].   (4)
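The Arrhenius acceleration factor in (4) is straightforward to compute. A minimal sketch (Python; the activation energy of .7 eV and the temperatures are illustrative, not values from the Device-B analysis):

```python
import math

def arrhenius_af(temp_c, temp_use_c, ea_ev):
    """Acceleration factor AF(temp) = R(temp)/R(tempU) from Equation (4).

    Temperatures are in degrees Celsius; 11,605 K/eV is 1/kB, the
    reciprocal of Boltzmann's constant in eV per kelvin.
    """
    inv_kb = 11605.0
    return math.exp(ea_ev * (inv_kb / (temp_use_c + 273.15)
                             - inv_kb / (temp_c + 273.15)))

# With Ea = .7 eV, raising temperature from 80 C to 150 C speeds up the
# reaction by a factor of roughly 45.
af_150 = arrhenius_af(150.0, 80.0, 0.7)
```

At the use temperature itself the factor is 1, as (4) requires.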
In general a model will be SAFT (scale accelerated failure time) if the failure mechanism is governed by a single-step chemical reaction with a rate that depends on an acceleration variable like temperature but is otherwise constant over time. Klinger (1992) also noted this relationship.
Consider the degradation path model corresponding to this single-step reaction,

    D(t; temp) = D∞ × {1 − exp[ −RU × AF(temp) × t ]},   (5)

where RU is the reaction rate at the use temperature tempU, AF(temp) is the Arrhenius acceleration factor in (4), and D∞ is the asymptote. When degradation is measured on a scale decreasing from 0, D∞ < 0, and we specify that failure occurs at the smallest t such that D(t) ≤ Df. Figure 3 shows Model (5) for fixed values of RU, D∞, and Ea at four different levels of temperature. Equating D(T; temp) to the critical level Df and solving for T gives the failure time at temperature temp as

    T(temp) = −[1/(RU × AF(temp))] × log(1 − Df/D∞)   (6)

            = T(tempU)/AF(temp).   (7)

For values of t such that D(t) is small relative to D∞, D(t; temp) ≈ D∞ × RU × AF(temp) × t, which is approximately linear in t.
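Equations (5) through (7) can be exercised numerically. In the sketch below (Python), the parameter values (D∞ = −2 dB, Df = −.5 dB, RU = 10⁻⁴ per hour, Ea = .7 eV, tempU = 80°C) are illustrative assumptions, not fitted values:

```python
import math

def af(temp_c, temp_use_c, ea_ev):
    """Arrhenius acceleration factor, Equation (4); 11,605 = 1/kB in K/eV."""
    return math.exp(ea_ev * (11605.0 / (temp_use_c + 273.15)
                             - 11605.0 / (temp_c + 273.15)))

def degradation(t, temp_c, d_inf, r_use, ea_ev, temp_use_c):
    """Model (5): D(t; temp) = D_inf * (1 - exp(-R_U * AF(temp) * t))."""
    return d_inf * (1.0 - math.exp(-r_use * af(temp_c, temp_use_c, ea_ev) * t))

def failure_time(temp_c, d_f, d_inf, r_use, ea_ev, temp_use_c):
    """Equation (6): the time at which D(t; temp) first reaches D_f."""
    return (-math.log(1.0 - d_f / d_inf)
            / (r_use * af(temp_c, temp_use_c, ea_ev)))

t_use = failure_time(80.0, -0.5, -2.0, 1e-4, 0.7, 80.0)   # life at use temp
t_150 = failure_time(150.0, -0.5, -2.0, 1e-4, 0.7, 80.0)  # life at 150 C
```

Equation (7) is visible in the output: t_150 multiplied by AF(150) recovers t_use, which is exactly the SAFT property.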
[Figures 3 and 4 plot D(t; temp) against hours, with curves labeled 195 and 237 Degrees C.]

Figure 3. Illustration of the Effect of Arrhenius Temperature Dependence on the Degradation Caused by a Single-Step Chemical Reaction.

Figure 4. Illustration of the Effect of Arrhenius Temperature Dependence on a Linear Degradation Process.
The likelihood for this model is

    L(μβ, Σβ, σε) = ∏_{i=1}^{n} ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} [ ∏_{j=1}^{mi} (1/σε) φ(ζij) ] fβ(βi; μβ, Σβ) dβi,   (8)

where the first two parameters are random effects and activation energy Ea is a fixed effect. That is, Ea is assumed to be the same for all units (the random effects being [...] and −D∞). Lu and Meeker (1993) used a two-stage method to estimate the parameters of the mixed-effects accelerated degradation model. In (8), ζij = [yij − D(tij, βi)]/σε, φ(z) is the standard normal density function, and fβ(βi; μβ, Σβ) is the multivariate normal distribution density function. See Palmer, Phillips, and Smith (1991) for motivation and explanation. To simplify notation and presentation, we continue to collect both the unit-to-unit random-effects and fixed-effects parameters into the vector βi, with the entries in Σβ being 0 for the rows and columns corresponding to the fixed effects.
Evaluation of (8) will, in general, require numerical approximation of n integrals of dimension kr (where n is the number of paths and kr < k is the number of random parameters in each path). Maximizing (8) with respect to (μβ, Σβ, σε) directly, even with today's computational capabilities, is extremely difficult unless D(t) is a linear function. Pinheiro and Bates (1995a) described and compared estimation schemes that provided approximate ML estimates of θβ = (μβ, Σβ) and σε, as well as estimates of the unit-specific random effects.
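To make the integral in (8) concrete, the sketch below (Python/NumPy) evaluates one unit's marginal likelihood contribution by Gauss-Hermite quadrature. The linear path D(t, b) = b·t, the single normal random effect, and all parameter values are illustrative assumptions chosen to keep the example one-dimensional; the article's model has kr random effects per unit.

```python
import numpy as np

def unit_marginal_likelihood(y, t, mu_b, sigma_b, sigma_eps, n_nodes=40):
    """One unit's factor in likelihood (8), assuming the illustrative
    path D(t, b) = b * t with a single normal random effect b."""
    # Gauss-Hermite rule approximates integrals of exp(-x^2) * f(x).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = mu_b + np.sqrt(2.0) * sigma_b * nodes        # change of variables
    resid = (y[None, :] - np.outer(b, t)) / sigma_eps
    dens = np.exp(-0.5 * resid**2) / (np.sqrt(2.0 * np.pi) * sigma_eps)
    return float((weights * dens.prod(axis=1)).sum() / np.sqrt(np.pi))
```

Maximizing the product of such terms over (μβ, Σβ, σε) is exactly the difficulty the text describes: with one random effect the quadrature is cheap, but the cost grows rapidly with kr.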
Continuing with Example 3, we fit Model (5) using the S-PLUS function nlme. To improve the stability and robustness of the approximate ML algorithm, it is important to
reduce the correlation between the estimates of Ea and
the parameters relating to the reaction rate R. Thus, it
is preferable to estimate R at some level of temperature
that is central to the experimental temperatures, rather than
[Figure residue: histogram/density plots of parameter estimates over Beta1 (roughly −8.5 to −6.5), with printed values including the matrix (.15021, −.02918; −.02918, .01809) and the values .3510, .6670, and −7.572.]
is a constant. Allowing Df to be random is a computationally straightforward generalization but would complicate the presentation. For a specified degradation model, the distribution function of T can be expressed in terms of the parameters in θβ; in simple cases no numerical methods are needed. Suppose, for example, that β1 represents the common level of degradation of all the test units at time 0 and that β2 represents the degradation rate, random from unit to unit, with log(β2) normally distributed with mean μ and standard deviation σ. Then

    P(T ≤ t) = 1 − Φ{ [log(Df − β1) − log(t) − μ] / σ },   t > 0.
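The closed-form cdf of this form is easy to evaluate with only the standard library. In this sketch (Python) the parameter values Df = 2, β1 = 1, μ = −7, σ = .5 are illustrative, chosen only so that Df − β1 > 0:

```python
import math

def failure_cdf(t, d_f, beta1, mu, sigma):
    """P(T <= t) = 1 - Phi{[log(Df - beta1) - log(t) - mu] / sigma}."""
    z = (math.log(d_f - beta1) - math.log(t) - mu) / sigma
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The lognormal median is exp[log(Df - beta1) - mu]; the cdf equals .5 there.
median = math.exp(math.log(2.0 - 1.0) - (-7.0))
half = failure_cdf(median, 2.0, 1.0, -7.0, 0.5)
```

Evaluating the cdf at the median returns exactly .5, matching the lognormal-median statement in the text.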
This shows that T has a lognormal distribution with parameters that depend on the basic path parameters θβ = (β1, μ, σ) and Df. That is, exp[log(Df − β1) − μ] is the lognormal median and σ is the lognormal shape parameter. See Lu and Meeker (1993, sec. 2.3) for some other examples.

[Figure 5. Device-B Power-Drop Observations and Fitted Degradation Model for the 34 Sample Units (power drop in dB versus hours; curves labeled 195 and 237 Degrees C).]

When the degradation path is nonlinear and more than one of the elements in β is random, [...]
    F(t) = P(T ≤ t) = ∫_{−∞}^{∞} ⋯ ∫_{−∞}^{∞} P(T ≤ t | β) fβ(β; μβ, Σβ) dβ.

When there is no closed-form expression for F(t), and when numerical transformation methods are too complicated, one can use Algorithm 1 to evaluate F(t). For two random effects with correlation ρ, the required conditional normal distribution has moments

    μ_{2|1} = μ2 + ρ (σ2/σ1)(β1 − μ1)  and  σ²_{2|1} = σ²2 (1 − ρ²).

In principle, this approach can be extended in a straightforward manner when there are more than two continuous random effects.
[...] at the desired values of t.

3. Generate a large number (e.g., B = 4,000) of bootstrap samples and corresponding bootstrap estimates F*(t) according to the following steps: [...]

[Figure residue: estimates of F(t) on a 0-to-1 scale, with curves labeled 80 and 150.]

The resulting bootstrap confidence intervals for F(t) are

    [Flower(t), Fupper(t)] = [F*(t)(l·B), F*(t)(u·B)],

where the subscripts (l·B) and (u·B) denote ordered values of the B bootstrap estimates,
[Figure residue: bootstrap cdf estimates (0 to 1) and a log-log grid of thousands of hours (20 to 1,000) versus degrees C.]

Figure 9. Scatterplot of Device-B Failure-Time Data With Failure Defined as Power Drop Below −.5 dB. The symbol Δ indicates the seven units that were tested at 150°C and had not failed at the end of 4,000 hours.
where

    l = Φ[2 Φ⁻¹(q) + Φ⁻¹(α/2)],
    u = Φ[2 Φ⁻¹(q) + Φ⁻¹(1 − α/2)],

Φ⁻¹(p) is the standard normal p quantile, and q is the proportion of the B values of F*(t) that are less than the original estimate.
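The bias-corrected percentile interval defined by l and u can be sketched as follows (Python; the simulated bootstrap sample stands in for the B = 4,000 estimates F*(t), and the simple rank rounding is an assumption, since the article does not specify how fractional ranks are handled):

```python
import numpy as np
from statistics import NormalDist

def bc_percentile_interval(boot_vals, point_est, alpha=0.05):
    """Bias-corrected percentile interval [F*(t)_(l*B), F*(t)_(u*B)].

    boot_vals holds the B bootstrap estimates F*(t); point_est is the
    estimate F(t) from the original data.  Assumes 0 < q < 1.
    """
    nd = NormalDist()
    b = np.sort(np.asarray(boot_vals, dtype=float))
    B = len(b)
    q = float(np.mean(b < point_est))      # proportion below the estimate
    l = nd.cdf(2.0 * nd.inv_cdf(q) + nd.inv_cdf(alpha / 2.0))
    u = nd.cdf(2.0 * nd.inv_cdf(q) + nd.inv_cdf(1.0 - alpha / 2.0))
    lo = b[min(B - 1, int(l * B))]          # order statistic near rank l*B
    hi = b[min(B - 1, int(u * B))]
    return lo, hi
```

When q = .5 (no bias correction needed), l and u reduce to α/2 and 1 − α/2 and this is the ordinary percentile bootstrap interval.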
[...] temperatures. Figure 11 is also a multiple lognormal probability plot for the individual samples at 237°C and 195°C. In this case, however, the superimposed lines show the fitted lognormal-Arrhenius model relating the life distributions to temperature. This is a commonly used ALT model for electronic [...]
[Probability-plot residue: fraction failing (.005 to .98) versus hours (ticks at 5,000 and 10,000).]
[Figure 11 axes residue: fraction failing (.001 to .99) versus hours; text fragment: "... available at 150°C."]
Figure 11. Lognormal-Arrhenius Model Fit to the Device-B Failure-Time Data With Failure Defined as Power Drop Below −.5 dB.
log(RU) and log(−D∞) [and thus log(Df/D∞)] are assumed to follow a joint normal distribution. If Df/D∞ is small relative to 1 (as in this example), then log(1 − Df/D∞) ≈ −Df/D∞, and thus T(tempU) approximately follows a lognormal distribution. Under the lognormal-Arrhenius model, log(T) is normally distributed with mean

    μ(temp) = β0 + β1 × 11,605/(temp + 273.15)

and constant standard deviation σ. In relation to the lognormal-Arrhenius failure-time model described in Section 4.2, the slope β1 = Ea is the activation energy and the intercept is

    β0 = μU − β1 × 11,605/(tempU + 273.15).

The estimated lognormal cdf's in Figure 11 are parallel because σ is assumed constant across temperatures.
Using degradation data offers some important advantages for making reliability inferences and predictions, especially [...]
[Probability-plot residue: fraction failing (.001 to .98) versus hours (10^1 to 10^5).]

Figure 12. [...] Model Fit to the Device-B Failure-Time Data With Failure Defined as Power Drop Below −.5 dB (solid lines); [...] also shown.

Figure 13. Weibull-Arrhenius Model Fit to the Device-B Failure-Time Data (solid lines) Compared With the Degradation-Model Estimates (dotted lines).
and corresponding statistical methods are needed for dealing with such problems.

REFERENCES

Klinger, D. J. (1992), "Failure Time and Rate Constant of Degradation: An Argument for the Inverse Relationship," Microelectronics and Reliability, 32, 987-994.

Lindstrom, M. J., and Bates, D. M. (1990), "Nonlinear Mixed Effects Models for Repeated Measures Data," Biometrics, 46, 673-687.

Palmer, M. J., Phillips, B. F., and Smith, G. T. (1991), "Application of Nonlinear Models With Random Coefficients to Growth Data," Biometrics, 47, 623-635.

Shao, J., and Tu, D. (1995), The Jackknife and Bootstrap, New York: Springer-Verlag.

Shiomi, H., and Yanagisawa, T. (1979), "On Distribution Parameter During Accelerated Life Test for a Carbon Film Resistor," Bulletin of the Electrotechnical Laboratory, 43, 330-345.

Tseng, S. T., and Yu, H. F. (1997), "A Termination Rule for Degradation Experiment," IEEE Transactions on Reliability, 46, 130-133.

[Incomplete entries in the original: "... tigue failure."; "... University."; "... Statlib)."]