1. Introduction
The probability density function (PDF) and the cumulative distribution function (CDF) of
the inverse Weibull (IW) distribution, with shape parameter α and scale parameter λ, are
respectively defined by
f(x; α, λ) = αλ x^{−(α+1)} e^{−λx^{−α}}, α > 0, λ > 0, x > 0, (1)
and
F(x; α, λ) = e^{−λx^{−α}}. (2)
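Since the CDF (2) is invertible in closed form, IW variates can be generated by inversion. A minimal sketch (the function names are ours, not from the paper):

```python
import math
import random

def iw_pdf(x, alpha, lam):
    """PDF of the inverse Weibull distribution, Eq. (1)."""
    return alpha * lam * x ** (-(alpha + 1)) * math.exp(-lam * x ** (-alpha))

def iw_cdf(x, alpha, lam):
    """CDF of the inverse Weibull distribution, Eq. (2)."""
    return math.exp(-lam * x ** (-alpha))

def iw_quantile(u, alpha, lam):
    """Inverse CDF: solve e^{-lam * x^{-alpha}} = u for x, 0 < u < 1."""
    return (lam / (-math.log(u))) ** (1.0 / alpha)

def iw_sample(alpha, lam, rng=random):
    """Draw one IW variate by the inversion method."""
    return iw_quantile(rng.random(), alpha, lam)
```

Inversion is convenient here because no rejection step is needed; the heavy right tail of the IW distribution is reproduced exactly.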
The IW distribution has a heavy right tail, and its hazard rate is unimodal, as are those of the log-normal
and the inverse Gaussian distributions. The IW model is useful in reliability analysis, insurance,
and other fields. For example, Keller et al. (1985) have pointed out that the degradation
phenomena of dynamic components of diesel engines can be well described by the IW model.
Maswadah (2003) has indicated that the maximum flood levels of the Susquehanna River at
Harrisburg, Pennsylvania, over a period of 24 years are well modeled by the IW distribu-
tion. Many authors have studied the properties of the IW distribution. Calabria and Pulcini
CONTACT Yan Zaizai zz.yan@.com Department of Mathematics, Science College, Inner Mongolia University of
Technology, Hohhot, P. R. China.
© Taylor & Francis Group, LLC
622 P. XIUYUN AND Y. ZAIZAI
(1989, 1990) have studied the maximum likelihood estimators (MLEs) of the parameters and
of the reliability function for the complete and censored sampling, and, in 1992 and 1994, they
proposed the Bayesian credible intervals (CIs) of the parameters and of the reliability, and
derived the Bayesian prediction CIs based on the two-sample prediction approach under priors
with a uniform distribution on α and a gamma distribution on λ. Kundu and Howlader (2010)
have described the Bayesian inference and prediction of future observation for censored data
under the assumption that both α and λ have independent gamma priors. Based on general-
ized order statistics, Abd Ellah (2012) has examined the Bayesian and the MLEs when α has
discrete prior and λ has gamma prior distribution. In addition, the mathematical property
and the application of inverse Weibull distribution have been discussed in monographs, for
example, see Reiss and Thomas (2007).
Progressively censored life tests are commonly used in reliability engineering for
many pragmatic reasons, such as time constraints, cost reduction, and limited testing technology.
Balakrishnan and Aggarwala (2000) and Balakrishnan (2007) have presented an elaborate
overview of various developments in progressively censored data. In this paper, we focus on
the general progressive censoring scheme to derive the Bayesian inference for the IW dis-
tribution. Recently, based on the general progressively censored data, several papers have
appeared to estimate the unknown parameter for different distributions. For example, Bal-
akrishnan and Sandhu (1996) have derived the best linear unbiased estimator and MLE, and
Fernández (2004) has considered both MLEs and Bayesian estimators for the exponential
distribution. Soliman (2008) has considered the Bayesian estimators and MLEs of the param-
eters as well as some survival time parameters for the Pareto model. Kim and Han (2009) have
derived the MLE and Bayesian estimator for scale parameter as well as the Bayesian predic-
tive estimator of future observation for the Rayleigh distribution. In this paper, we consider
the Bayesian inference for the IW distribution. It is well known that the Bayesian method
is more attractive than the MLEs when some prior knowledge about the population is available.
For the Bayesian parameter estimation, the first problem is to decide the prior distribution
of the parameter(s). Unfortunately, for the IW distribution, the continuous conjugate priors
of (α, λ) do not exist when (α, λ) are both unknown. A natural choice for prior distribu-
tion of λ is a gamma distribution, which is the conjugate distribution of λ when α is known.
We simply assume that the prior on α, π(α), is a log-concave PDF on the support (0, ∞) and
that π(α) is independent of the gamma prior distribution of λ. For scale-shape parameter
distributions, a gamma prior distribution on the scale parameter in combination with a log-concave prior
PDF on shape parameter is commonly used, for example, the Weibull distribution (see Berger
and Sun, 1993; Kundu and Raqab, 2012; Lin et al., 2012) and the gamma distribution (see
Pradhan and Kundu, 2011). One of the reasons for selecting a log-concave prior PDF for
shape parameter is the mathematical tractability of the resulting posterior density. Another
reason is that many common densities are log-concave; for example, the normal distribution,
the log-normal distribution, and the gamma distribution (when the shape parameter
is at least 1) all have log-concave PDFs. In this paper, based on the general progressive
censoring, we prove that the full conditional posterior density functions of α and λ are log-concave
under the assumptions that λ has a gamma prior distribution and α has a log-concave prior PDF,
and, under the squared error loss (SEL) function, we derive the Gibbs sampling strategy to
estimate the unknown parameters and the corresponding CIs and also predict future order
statistics. Meanwhile, we find that many distributions with two parameters have log-concave
posterior densities and they are suitable for the proposed Gibbs sampling strategy under the
assumption that the scale parameter has a gamma prior distribution and shape parameter(s)
has a log-concave prior PDF. Those include the Weibull, Burr type XII, and flexible Weibull
COMMUNICATIONS IN STATISTICS—THEORY AND METHODS 623
distributions. Furthermore, we find that the proposed Gibbs sampling strategy can be
extended from the general progressive censored sample to the general progressive hybrid cen-
sored sample.
The rest of this paper is organized as follows: In Section 2, we investigate the properties
of the posterior density functions of (α, λ), and then derive the Gibbs sampling strategy to
estimate the unknown (α, λ) and the CIs and predict future order statistics for the IW dis-
tribution. The simulation and analysis of data are given in Section 3. The extensions of the
proposed method are indicated in Section 4. Finally, the conclusion is drawn in Section 5.
2. Bayesian inference
The general progressive censoring scheme is conducted as follows: Assume that n independent
and identical units are simultaneously placed on a life test. The failure times of the
first r failed units are not observed. At the time of the (r + 1)th failure, R_{r+1} of the surviving
units are randomly withdrawn from the test. At the time of the (r + i)th failure, R_{r+i} of
the surviving units are randomly withdrawn. Finally, at the time of the mth failure, the
remaining R_m = n − m − R_{r+1} − ··· − R_{m−1} units are withdrawn from the test, where the
number of failures m during the test is prefixed. The m − r failure times
X = (X_{r+1}, X_{r+2}, …, X_m) are named general progressively censored data with censoring
scheme R = (R_{r+1}, …, R_m), where X_{r+1} ≤ X_{r+2} ≤ ··· ≤ X_m. In this
paper, we always suppose m − r > 0, so that at least one failure time is observed in the test.
It is clear that the general progressive censoring includes the conventional order statistics (r
= 0, m = n), the Type-II right censoring (r = 0, R_1 = ··· = R_{m−1} = 0, R_m = n − m > 0), and
the progressive right censoring (r = 0) as special cases.
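The scheme just described can be simulated directly from a pool of complete lifetimes. A sketch (`gen_progressive_sample` is our own helper, simulating the withdrawals directly rather than using the Balakrishnan and Aggarwala (2000) algorithm cited later in Section 3):

```python
import random

def gen_progressive_sample(lifetimes, r, R):
    """Apply general progressive censoring to a list of n complete lifetimes.

    The first r failures are unobserved; at the (r+i)th failure, R[i-1]
    surviving units are withdrawn at random. Returns the observed failure
    times x_{r+1} <= ... <= x_m.
    """
    pool = sorted(lifetimes)          # surviving units, ordered by lifetime
    m = r + len(R)
    observed = []
    for i in range(m):
        x = pool.pop(0)               # next failure among the surviving units
        if i >= r:
            observed.append(x)
            # withdraw R_{i+1} surviving units at random
            for _ in range(R[i - r]):
                pool.pop(random.randrange(len(pool)))
    return observed
```

With r = 0 and R = (0, …, 0, n − m) this reduces to Type-II right censoring, as noted above.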
Suppose that the observed general progressive censored data x = (xr + 1 , …, xm ) are from a
continuous lifetime distribution with PDF and CDF, f(x; θ) and F(x; θ), respectively, then the
likelihood function is constructed by
L(θ; x) = [F(x_{r+1}; θ)]^r ∏_{i=r+1}^{m} f(x_i; θ)[1 − F(x_i; θ)]^{R_i}, (3)
which, for the IW distribution, becomes
L(α, λ; x) = α^{m−r} λ^{m−r} e^{−rλx_{r+1}^{−α}} ∏_{i=r+1}^{m} x_i^{−(α+1)} e^{−λx_i^{−α}} (1 − e^{−λx_i^{−α}})^{R_i}. (4)
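For numerical work it is the logarithm of (4) that matters. A sketch (the function name is ours; `log1p` is used for the (1 − e^{−λx^{−α}})^{R_i} factors):

```python
import math

def iw_loglik(alpha, lam, x, r, R):
    """Log of the likelihood (4) for general progressively censored IW data.

    x holds the observed failure times x_{r+1}, ..., x_m (sorted), and R the
    censoring scheme R_{r+1}, ..., R_m.
    """
    ll = len(x) * (math.log(alpha) + math.log(lam))
    ll += -r * lam * x[0] ** (-alpha)          # r * ln F(x_{r+1}; alpha, lam)
    for xi, Ri in zip(x, R):
        ll += -(alpha + 1) * math.log(xi) - lam * xi ** (-alpha)
        ll += Ri * math.log1p(-math.exp(-lam * xi ** (-alpha)))
    return ll
```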
The PDF of the gamma distribution with shape and scale parameters a > 0 and b > 0,
respectively, is given by
g(t; a, b) = (b^a / Γ(a)) t^{a−1} e^{−bt}. (5)
Suppose that α and λ are independent and λ has gamma prior with PDF g(λ; a, b) and α has
log-concave PDF π(α), where hyperparameters (a, b) are chosen to reflect prior knowledge
about λ. Then, the joint posterior density of (α, λ) given x is derived by:
π(α, λ|x) = L(α, λ; x) g(λ; a, b) π(α) / ∫_0^∞ ∫_0^∞ L(α, λ; x) g(λ; a, b) π(α) dα dλ. (6)
624 P. XIUYUN AND Y. ZAIZAI
Thus, under the SEL function, the Bayesian estimator of any function of (α, λ), say ϕ(α, λ),
is
ϕ̂_B(α, λ) = ∫_0^∞ ∫_0^∞ ϕ(α, λ) L(α, λ; x) g(λ; a, b) π(α) dα dλ / ∫_0^∞ ∫_0^∞ L(α, λ; x) g(λ; a, b) π(α) dα dλ. (7)
Analytically computing (7) is not possible. We study the properties of the posterior distribu-
tion and propose the Gibbs sampling strategy to approximate (α, λ) and construct the corre-
sponding CIs as follows.
The conditional posterior densities of α and λ are, respectively,
π(α|λ; x) ∝ α^{m−r} e^{−λ(rx_{r+1}^{−α} + Σ_{i=r+1}^{m} x_i^{−α})} ∏_{i=r+1}^{m} x_i^{−(α+1)} (1 − e^{−λx_i^{−α}})^{R_i} π(α), (9)
π(λ|α; x) ∝ λ^{m−r+a−1} e^{−λ(rx_{r+1}^{−α} + Σ_{i=r+1}^{m} x_i^{−α} + b)} ∏_{i=r+1}^{m} (1 − e^{−λx_i^{−α}})^{R_i}. (10)
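The conditionals (9) and (10) are known only up to normalizing constants, so for sampling it is their logarithms that are needed. A sketch (names are ours):

```python
import math

def log_cond_alpha(alpha, lam, x, r, R, log_prior_alpha):
    """Log of the conditional posterior (9), up to an additive constant."""
    s = sum(xi ** (-alpha) for xi in x)
    val = len(x) * math.log(alpha) - lam * (r * x[0] ** (-alpha) + s)
    for xi, Ri in zip(x, R):
        val += -(alpha + 1) * math.log(xi)
        val += Ri * math.log1p(-math.exp(-lam * xi ** (-alpha)))
    return val + log_prior_alpha(alpha)

def log_cond_lambda(lam, alpha, x, r, R, a, b):
    """Log of the conditional posterior (10), up to an additive constant."""
    s = sum(xi ** (-alpha) for xi in x)
    val = (len(x) + a - 1) * math.log(lam)
    val -= lam * (r * x[0] ** (-alpha) + s + b)
    for xi, Ri in zip(x, R):
        val += Ri * math.log1p(-math.exp(-lam * xi ** (-alpha)))
    return val
```

Both log-densities are concave in their arguments under the stated priors, which is what licenses log-concave rejection sampling.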
Based on Theorem 1, the Gibbs sampling procedure can be used to generate Markov chain
Monte Carlo (MCMC) samples from the log-concave functions (9) and (10). Devroye (1984)
published a black-box rejection algorithm that can be used for a log-concave density f(x)
on the real line for which the location of a mode m̃ is known. This method cannot be applied
directly when the density f(x) is known only up to proportionality, that is, f(x) ∝ h(x) with
normalizing constant c = ∫_χ h(x)dx not available in closed form, where χ denotes the
domain of f(x). Hence, Devroye's method requires a lot of time to compute c and m̃ when they
are unknown, as in Eqs. (9) and (10). Gilks and Wild (1992) proposed another black-box
rejection algorithm, the adaptive rejection algorithm, to sample from a log-concave
function. The algorithm of Gilks and Wild (1992) is more convenient than that of Devroye
(1984), as the former does not need to know c and m̃, yet estimators from both methods are
consistent. Therefore, we adopt the adaptive rejection algorithm for the Gibbs sampling in
this paper. The Gibbs sampling proceeds as follows:
Step 1. Give starting value λ0 , and set i = 1.
Step 2. Generate α i from the log-concave function π(α|λi − 1 ; x).
Step 3. Generate λi from the log-concave function π(λ|α i ; x).
Step 4. Set i = i + 1.
Step 5. Repeat Steps 2-4 M times and obtain (α 1 , λ1 ), …, (α M , λM ).
Step 6. Eliminate the first K samples as burn-in; for convenience, denote the remaining N = M − K samples by (α 1 ,
λ1 ), …, (α N , λN ).
Step 7. Obtain the Bayesian estimators and the posterior variances of α and λ with respect to
the SEL function as
θ̂_B = (1/N) Σ_{i=1}^{N} θ_i,  Var(θ̂_B) = (1/N) Σ_{i=1}^{N} (θ_i − θ̂_B)², (11)
where θ stands for α or λ.
Step 8. Order θ_1, …, θ_N as θ_(1) ≤ ··· ≤ θ_(N).
The 100(1 − γ)% Bayesian CIs are given by
(θ_(j), θ_(j+[N(1−γ)])), j = 1, …, N − [N(1−γ)], (12)
where [z] is the integer part of z. Then, the 100(1 − γ)% highest posterior density (HPD) CIs
for the IW parameters are easily obtained by numerical method, as the interval with the
smallest width among all intervals in (12).
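Steps 1–8 can be sketched as follows. For simplicity this sketch substitutes random-walk Metropolis updates within Gibbs for the adaptive rejection algorithm of Gilks and Wild (1992); the stationary distribution is the same, though mixing may differ, and all function names are ours:

```python
import math
import random

def metropolis_step(logf, cur, scale, rng):
    """One random-walk Metropolis update on (0, inf), proposing on the log scale."""
    prop = cur * math.exp(scale * rng.gauss(0.0, 1.0))
    # log acceptance ratio; log(prop/cur) is the Jacobian of the log-scale move
    if math.log(rng.random()) < logf(prop) - logf(cur) + math.log(prop / cur):
        return prop
    return cur

def gibbs(log_cond_alpha, log_cond_lambda, lam0, M, K, rng=random):
    """Steps 1-6: draw M pairs (alpha_i, lambda_i), discard the first K as burn-in."""
    alpha, lam = 1.0, lam0
    draws = []
    for _ in range(M):
        alpha = metropolis_step(lambda a: log_cond_alpha(a, lam), alpha, 0.3, rng)
        lam = metropolis_step(lambda l: log_cond_lambda(l, alpha), lam, 0.3, rng)
        draws.append((alpha, lam))
    return draws[K:]

def hpd_interval(samples, gamma=0.05):
    """Step 8: the narrowest of the 100(1-gamma)% order-statistic intervals."""
    s = sorted(samples)
    N = len(s)
    k = int(N * (1 - gamma))      # points spanned by each candidate interval
    j = min(range(N - k), key=lambda j: s[j + k] - s[j])
    return s[j], s[j + k]
```

Here `log_cond_alpha` and `log_cond_lambda` are any two-argument log-conditional densities, e.g. wrappers around (9) and (10).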
The starting value λ0 in Step 1 can be estimated based on the prior information of the
model. Certainly, λ0 can be replaced by the starting value α 0 and the order of Steps 2 and
3 can be exchanged to obtain the Bayesian estimators if the prior information on α is more
accurate than that on λ. Compared to the MLEs, one advantage of the Bayesian method is that
the prediction intervals of a future order statistic are readily constructed using the above MCMC
samples, as discussed in Section 2.2.
Analytically computing (14) is not possible. Based on the Monte Carlo method, (14) can be
approximated by
f*(y_l|x) ≈ (1/N) Σ_{i=1}^{N} f*(y_l|α_i, λ_i). (15)
Accordingly, the posterior probability that Y_l exceeds y_l is approximated by
P(Y_l > y_l|x) ≈ (w!/(N(w−l)!(l−1)!)) Σ_{i=1}^{N} Σ_{l_1=0}^{w−l} (−1)^{l_1} \binom{w−l}{l_1} (1 − [F(y_l|α_i, λ_i)]^{l+l_1})/(l + l_1). (16)
Then, the 100(1 − γ)% Bayesian prediction bounds for Y_l are obtained by solving the follow-
ing equations with respect to z:
(w!/(N(w−l)!(l−1)!)) Σ_{i=1}^{N} Σ_{l_1=0}^{w−l} (−1)^{l_1} \binom{w−l}{l_1} (1 − [F(z|α_i, λ_i)]^{l+l_1})/(l + l_1) = γ/2 and = 1 − γ/2, respectively. (17)
It is convenient to also find predictions of future order statistics under SEL, which are
given by
Ŷ_l = E[Y_l|x] = ∫_0^∞ y_l f*(y_l|x) dy_l
≈ (w!/(N(w−l)!(l−1)!)) Σ_{i=1}^{N} Σ_{l_1=0}^{w−l} (−1)^{l_1} \binom{w−l}{l_1} ∫_0^∞ y_l [F(y_l|α_i, λ_i)]^{l−1+l_1} f(y_l|α_i, λ_i) dy_l
= (w!/(N(w−l)!(l−1)!)) Σ_{i=1}^{N} Σ_{l_1=0}^{w−l} (−1)^{l_1} \binom{w−l}{l_1} (1/(l + l_1)) [λ_i(l + l_1)]^{1/α_i} Γ(1 − 1/α_i). (18)
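The closed-form prediction (18) can be evaluated directly from the MCMC draws. A sketch (our naming), valid when all α_i > 1 so that E[Y_l|x] is finite:

```python
import math

def predict_order_stat(l, w, draws):
    """Point prediction (18) of the l-th order statistic of a future sample
    of size w, averaging over MCMC draws (alpha_i, lambda_i)."""
    N = len(draws)
    const = math.factorial(w) / (N * math.factorial(w - l) * math.factorial(l - 1))
    total = 0.0
    for alpha, lam in draws:
        if alpha <= 1.0:
            raise ValueError("E[Y_l|x] is finite only when alpha > 1")
        for l1 in range(w - l + 1):
            total += (math.comb(w - l, l1) * (-1) ** l1 / (l + l1)
                      * (lam * (l + l1)) ** (1.0 / alpha)
                      * math.gamma(1.0 - 1.0 / alpha))
    return const * total
```

For w = l = 1 this reduces to the posterior average of the IW mean λ^{1/α} Γ(1 − 1/α), as it should.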
3.1. Simulation
In order to evaluate the performance of the method discussed in Section 2, a Monte Carlo
simulation study is conducted. To calculate the Bayesian estimates, we need to specify the
log-concave prior PDF for α. We assume that α has a gamma distribution with PDF g(α; c, d).
It is well known that g(α; c, d) is log-concave if c ≥ 1. However, based on (32) (see the Appendix),
we can see that Theorem 1 is still true for any c ≥ 0 and d ≥ 0, where g(α; c, d) ∝ 1/α if c = 0
and d = 0.
In our experiment, parameters are chosen to be (α, λ) = (1.2, 1) and (0.8, 1) with sample
size n = 20 and m = 15. A group of representative general progressive censoring schemes is
considered in Table 1. Given (α, λ), n, r, m, and R, the general progressively censored samples
are generated using the algorithm of Balakrishnan and Aggarwala (2000).
For the Bayesian estimation, two groups of priors, denoted prior-0 and prior-1, have been
considered. For prior-0, the noninformative priors on λ and α are set to a = b = c = d = 0.
For prior-1, the informative priors on λ and α are set to a = b = c = d = 1, so that the
prior mean of λ = a/b equals its true value, while the prior mean of α = c/d differs somewhat
from the true value. For each censored sample, 1,100 MCMC samples are
generated using the Gibbs sampling and the first 100 samples are eliminated. The Bayesian
estimators, the 95% CIs, and their lengths are approximated based on the remaining 1,000
MCMC samples. The MLEs, the 95% confidence intervals (CIs), and their lengths are also
calculated for comparison with the Bayesian estimates. We replicate the process 1,000
Table 1. Different censoring schemes R = (R_{r+1}, …, R_m); the unobserved failure times are marked by –. [The numeric entries of this table were lost in extraction; the eleven schemes x(1), …, x(11) have r = 0 for x(1)–x(3), r = 2 for x(4)–x(9), and r = 5 for x(10)–x(11).]
times and compute average estimates (means), mean squared errors (MSEs), average length
of CIs (LCIs), and coverage percentages (CPs). The results are reported in Table 2.
In Table 2, first, the results from the MLEs and Bayesian estimators are found satisfactory
in terms of CPs. Next, based on the analysis of the means, MSEs, and LCIs, the performance
of MLEs is found close to that of the Bayesian estimators with respect to the prior-0, and
the performance of the Bayesian estimators based on the prior-1 is found to be better than that of the
MLEs and of the Bayesian estimators with respect to the prior-0. The results from the censored sam-
ples x(1) , x(2) , and x(3) are better than the results from x(4) , x(5) , and x(6) , respectively, because
m (= 15) exact failure times are observed for x(1) , x(2) , and x(3) , whereas only m − r (= 13) exact failure
times are observed for x(4) , x(5) , and x(6) , although the expected durations of the experiments
are the same. The performance of the MLE and Bayesian methods becomes worse as r increases
from 2 to 5, as expected. Based on the above analysis, we can see that the Bayesian method pro-
posed in this paper is better than the MLEs if a model has some appropriate prior information.
If we have no prior information for the model, then it is better to use the MLEs rather than
the Bayesian method for the parameter estimation problem, since the Bayesian estimators are
computationally more expensive.
Table 2. Means, MSEs, LCIs, and CPs with (α, λ) = (1.2, 1) and (0.8, 1). [The numeric entries for the MLEs, prior-0, and prior-1 estimates under each censoring scheme were lost in extraction.]
a, b) and g(α; c, d), respectively. The noninformative prior-0 on both unknown parameters
is used, that is, a = b = c = d = 0, because we have no prior information about these data.
11,000 MCMC samples are generated and the first 1,000 samples are eliminated. Based on the
remaining 10,000 samples, the Bayesian estimators, the posterior variances, and the 95% CIs are
estimated; these are also listed in Table 3. From Table 3, the performance of x(0) is found
to be the best, followed by x(1) and x(2) , in terms of the variances; the performance of the MLEs
and the Gibbs sampling are almost the same, as expected. Furthermore, we suppose that Y1
≤ ··· ≤ Y15 are the order statistics of a future sample of size w = 15 from the same population.
Using (17), the predictive intervals of Y1 , …, Y15 are calculated based on the previous
10,000 MCMC samples. For example, based on x(0) , x(1) , and x(2) , the 95% predictive intervals of
Y1 are found to be (0.150, 1.081), (0.143, 1.128), and (0.197, 1.288), respectively.
4. Extension
In this section, we extend the Gibbs sampling procedure presented in Section 2 for the IW dis-
tribution to other distributions. Likewise, we generalize our method from general progressive
censoring to general progressive hybrid censoring.
Table 3. Parameter estimators, variances, and CIs for Nelson's (1982) data. [The numeric entries for the MLE and Gibbs estimates under x(0), x(1), and x(2) were lost in extraction.]
π(α|λ; x) ∝ (1 − e^{−λx_{r+1}^{α}})^r α^{m−r} ∏_{i=r+1}^{m} x_i^{α−1} e^{−λ Σ_{i=r+1}^{m} (1+R_i) x_i^{α}} π(α). (21)
Based on the analysis of (21), as with the IW distribution, Theorem 2(I) is still true for any gamma
prior distribution on α with PDF g(α; c, d).
It should be pointed out that Soliman et al. (2011) computed the Bayesian point and interval
predictions using the two-sample prediction method for general progressive censoring based
on the assumption that α has discrete prior and λ has gamma prior. The discrete prior on
α has been criticized because of its difficulty in application to real-life problems, though the
parameter estimations have closed forms, see for example, Kaminskiy and Krivtsov (2005).
Kundu (2008) discussed the parameter estimates when the data are from a progressively right
censored sample (r = 0). In this special case, (22) reduces to a gamma density.
Hence, the MCMC samples can be generated from the log-concave density (21) and the gamma den-
sity (22). However, if r > 0, (22) is not a gamma density, and the method of Kundu (2008)
is no longer applicable.
π(α|β; x) ∝ α^{m−r} [1 − (1 + x_{r+1}^{β})^{−α}]^r ∏_{i=r+1}^{m} (1 + x_i^{β})^{−(α+1+R_i α)} π(α), (25)
and it is log-concave.
π(β|α; x) ∝ β^{m−r} [1 − (1 + x_{r+1}^{β})^{−α}]^r ∏_{i=r+1}^{m} x_i^{β−1} (1 + x_i^{β})^{−(α+1+R_i α)} π(β). (26)
Based on the analysis of (25) and (26), we can see that Theorem 3 is still true for any gamma
prior distributions on β and α with PDFs g(β; a, b) and g(α; c, d), respectively.
It should be pointed out that Rastogi and Tripathi (2012) gave parameter estimates for
the Burr type XII distribution using the Lindley approximation method when the data are
progressively censored, and this method can be extended to general progressive censoring
schemes. However, a clear disadvantage of the Lindley approximation method is that it can-
not calculate precision of the estimators of the parameters and the predictors of future order
statistics.
Suppose that α and β are independent and α and β have priors with log-concave PDFs, π(α)
and π(β), respectively. We have the following Theorem.
Theorem 4:
(I) The conditional PDF of α given β and x is log-concave, and it is proportional to
π(α|β; x) ∝ [1 − exp(−e^{αx_{r+1} − β/x_{r+1}})]^r ∏_{i=r+1}^{m} (α + β/x_i²) exp(αx_i − (1 + R_i) e^{αx_i − β/x_i}) π(α). (29)
To the best of our knowledge, no work has discussed the general progressive hybrid censoring scheme yet. In this section, we extend the
proposed Gibbs sampling method from the general progressive censoring to the general pro-
gressive hybrid censoring scheme.
L(θ; x) = [F(x_{r+1}; θ)]^r ∏_{i=r+1}^{d} f(x_i; θ)[1 − F(x_i; θ)]^{R_i} [1 − F(T*; θ)]^{R*}, (31)
5. Conclusion
Gibbs sampling has been proposed in this paper to construct the Bayesian point and interval
estimators as well as to derive prediction intervals, based on general progressively censored
samples. The proposed method is applicable not only to the IW distribution but also to the
Weibull, Burr type XII, and flexible Weibull distributions. In addition, it can be extended to
general progressive hybrid censoring. Therefore, the proposed Gibbs sampling method has
broad applicability.
Acknowledgments
The authors are sincerely grateful to the referees and to the Editor-in-Chief N. Balakrishnan for their
many constructive comments and careful reading of the paper.
Funding
This study was supported by the National Natural Science Foundation of China (no. 11161031) and the
Natural Science Foundation of Inner Mongolia (no. 2013MS0108).
Appendix
(I) Based on (4), we have that
∂²ln L(x; α, λ)/∂α² = −(m − r)/α² − λr x_{r+1}^{−α}(ln x_{r+1})² − λ Σ_{i=r+1}^{m} x_i^{−α}(ln x_i)²
+ Σ_{i=r+1}^{m} R_i F(x_i; α, λ) λx_i^{−α}(ln x_i)² [1 − F(x_i; α, λ) − λx_i^{−α}] / [1 − F(x_i; α, λ)]². (32)
By combining the log-concave prior π(α) with (32), and using the inequality 1 − x − e^{−x} < 0 for x > 0, we have that
∂²ln π(α|x, λ)/∂α² = ∂²ln L(x; α, λ)/∂α² + ∂²ln π(α)/∂α² < 0. (33)
Therefore, π(α|x, λ) is log-concave.
(II)
∂²ln L(x; α, λ)/∂λ² = −(m − r)/λ² − Σ_{i=r+1}^{m} R_i F(x_i; α, λ)(x_i^{−α})² / [1 − F(x_i; α, λ)]². (34)
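The negativity of (32) and (34), and hence the concavity of the log-likelihood in each parameter, can be spot-checked numerically with finite differences. A sketch with arbitrary sample values (function names are ours):

```python
import math

def loglik(alpha, lam, x, r, R):
    """Log-likelihood (4) for general progressively censored IW data."""
    ll = len(x) * (math.log(alpha) + math.log(lam)) - r * lam * x[0] ** (-alpha)
    for xi, Ri in zip(x, R):
        ll += -(alpha + 1) * math.log(xi) - lam * xi ** (-alpha)
        ll += Ri * math.log1p(-math.exp(-lam * xi ** (-alpha)))
    return ll

def second_diff(f, t, h=1e-4):
    """Central second difference approximating f''(t)."""
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

# arbitrary censored sample for the check
x, r, R = [0.4, 0.9, 1.7, 2.6], 1, [0, 1, 0, 1]
for alpha in (0.6, 1.0, 1.8):
    for lam in (0.5, 1.0, 2.0):
        assert second_diff(lambda a: loglik(a, lam, x, r, R), alpha) < 0
        assert second_diff(lambda l: loglik(alpha, l, x, r, R), lam) < 0
```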
References
Abd Ellah, A.H. (2012). Bayesian and non-Bayesian estimation of the inverse Weibull model based on
generalized order statistics. Int. Inf. Manage. 4: 23–31.
Balakrishnan, N. (2007). Progressive censoring methodology: an appraisal. Test 16: 211–259.
Balakrishnan, N., Aggarwala, R. (2000). Progressive Censoring: Theory, Methods, and Applications.
Boston: Birkhäuser.
Balakrishnan, N., Kundu, D. (2013). Hybrid censoring: Models, inferential results and applications.
Comput. Stat. Data Anal. 57: 166–209.
Balakrishnan, N., Sandhu, R.A. (1996). Best linear unbiased and maximum likelihood estimation for
exponential distributions under general progressive type-II censored samples. Sankhyā 58(B): 1–9.
Banerjee, A., Kundu, D. (2008). Inference based on Type-II hybrid censored data from a Weibull dis-
tribution. IEEE Trans. Reliab. 57: 369–378.
Bebbington, M., Lai, C.D., Zitikis, R. (2007). A flexible Weibull extension. Reliab. Eng. Sys. Safety 92:
719–726.
Berger, J.O., Sun, D. (1993). Bayesian analysis for the Poly-Weibull distribution. J. Am. Stat. Assoc. 88:
1412–1418.
Calabria, R., Pulcini, G. (1989). Confidence limits for reliability and tolerance limits in the inverse
Weibull distribution. Reliab. Eng. Sys. Safety 24: 77–85.
Calabria, R., Pulcini, G. (1990). On the maximum likelihood and least-squares estimation in the
inverse Weibull distribution. Stat. Appl. 2(1): 53–63.
Calabria, R., Pulcini, G. (1992). Bayes probability intervals in a load-strength model. Commun. Stat.-
Theory Methods 21(12): 3393–3405.
Calabria, R., Pulcini, G. (1994). Bayes 2-sample prediction for the inverse Weibull distribution. Com-
mun. Stat.-Theory Methods 23(6): 1811–1824.
Devroye, L. (1984). A simple algorithm for generating random variates with a log-concave density.
Computing 33: 247–257.
Erto, P. (1989). Genesis, properties and identification of the inverse Weibull lifetime model. Statistica
Applicata 1: 117–128.
Fernández, A.J. (2004). On estimating exponential parameters with general type-II progressive censor-
ing. J. Stat. Plan. Inf. 121: 135–147.
Gilks, W.R., Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Appl. Stat. 41(2): 337–348.
Gupta, R.D., Kundu, D. (2009). A new class of weighted exponential distributions. Statistics 43(6):
621–634.
Kaminskiy, M.P., Krivtsov, V.V. (2005). A simple procedure for Bayesian estimation of the Weibull dis-
tribution. IEEE Trans. Reliab. 54: 612–616.
Keller, A.Z., Giblin, M.T., Farnworth, N.R. (1985). Reliability analysis of commercial vehicle engines.
Reliab. Eng. 10: 15–25.
Kim, C., Han, K. (2009). Estimation of the scale parameter of the Rayleigh distribution under general
progressive censoring. J. Korean Stat. Soc. 38: 239–246.
Kundu, D. (2007). On hybrid censored Weibull distribution. J. Stat. Plan. Inf. 137: 2127–2142.
Kundu, D. (2008). Bayesian inference and life testing plan for the Weibull distribution in presence of
progressive censoring. Technometrics 50: 144–154.
Kundu, D., Howlader, H. (2010). Bayesian inference and prediction of the inverse Weibull distribution
for Type-II censored data. Comput. Stat. Data Anal. 54: 1547–1558.
Kundu, D., Raqab, M.Z. (2012). Bayesian inference and prediction of order statistics for a Type-II cen-
sored Weibull distribution. J. Stat. Plan. Inf. 142: 41–47.
Lin, C.T., Ng, H.K.T., Chan, P.S. (2009). Statistical inference of type-II progressively hybrid censored
data with Weibull lifetimes. Commun. Stat.-Theory Methods 38: 1710–1729.
Lin, C.T., Chou, C.C., Huang, Y.L. (2012). Inference for the Weibull distribution with progressive hybrid
censoring. Computational Statistics and Data Analysis 56: 451–467.
Maswadah, M. (2003). Conditional confidence interval estimation for inverse Weibull distribution
based on censored generalized order statistics. J. Stat. Comput. Simul. 73(12): 887–898.
Mokhtari, E.B., Habibi, R.A., Yousefzadeh, F. (2011). Inference for Weibull distribution based on pro-
gressively Type-II hybrid censored data. J. Stat. Plan. Inf. 141: 2824–2838.
Nelson, W.B. (1982). Applied Life Data Analysis. New York: Wiley.
Pradhan, B., Kundu, D. (2011). Bayes estimation and prediction of the two-parameter gamma distribu-
tion. J. Stat. Comput. Simul. 81(9): 1187–1198.
Rastogi, M.K., Tripathi, Y.M. (2012). Estimating the parameters of a Burr distribution under progressive
type II censoring. Stat. Method. 9(3): 381–391.
Reiss, R.D., Thomas, M. (2007). Statistical Analysis of Extreme Values, Third Edition. Boston: Birkhäuser.
Soliman, A.A. (2008). Estimations for Pareto model using general progressive censored data and asym-
metric loss. Communications in Statistics-Theory and Methods 37: 1353–1370.
Soliman, A. A., Al-Hossain, A. Y., and Al-Harbi, M. M. (2011). Predicting observables from Weibull
model based on general progressive censored data with asymmetric loss. Stat. Method 8: 451–461.