Journal of Statistical Computation and Simulation
ISSN: 0094-9655 (Print) 1563-5163 (Online) Journal homepage: http://www.tandfonline.com/loi/gscs20

Hybrid ranked set sampling scheme

Abdul Haq, Jennifer Brown and Elena Moltchanova
School of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand

To cite this article: Abdul Haq, Jennifer Brown & Elena Moltchanova (2016) Hybrid ranked set sampling scheme, Journal of Statistical Computation and Simulation, 86:1, 1-28, DOI: 10.1080/00949655.2014.991930
To link to this article: http://dx.doi.org/10.1080/00949655.2014.991930

Published online: 16 Dec 2014. Downloaded by [COMSATS Headquarters] on 02 December 2015, at 21:33.

(Received 12 August 2014; accepted 21 November 2014)
Cost-effective sampling methods are of major concern in surveys of natural resources in agriculture, biology, ecology, forestry, fisheries, environmental management, etc. In this paper, we propose a hybrid ranked set sampling (HRSS) scheme for estimation of the population mean. The proposed sampling scheme encompasses several existing ranked set sampling (RSS) schemes, and may help in selecting a smaller number of units to rank. The HRSS scheme provides an unbiased estimator of the population mean, and it is always more precise than the sample mean based on simple random sampling. Extensive Monte Carlo simulations from both symmetric and asymmetric distributions are used to study the performances of the mean estimators based on HRSS and imperfect HRSS schemes. A simulation with a real data set is also performed. It is found that the HRSS scheme can provide improvements with respect to existing RSS schemes when estimating the population mean.

Keywords: unbiased estimator; order statistics; relative efficiency; perfect and imperfect rankings; simple random sampling; mean squared error

1. Introduction

Cost-effective sampling methods are of major concern in surveys of natural resources in agriculture, biology, ecology, forestry, fisheries, environmental management, etc. In environmental, biomedical and ecological studies, situations may arise where taking the actual measurement of the sample observations is costly, destructive or time-consuming, such as measuring the level of soil contamination by some pollutant, estimating the root weight of experimental plants, or assessing the stream pool size or stream habitat area. However, ranking a small set of selected units is relatively easy and reliable. The ranking of the experimental units can be accomplished visually or by any less expensive method. The ecological assessment of hazardous waste sites involves expensive radiochemical techniques to find the value of the study variable. However, hazardous waste sites with different levels of contamination can be ranked by a visual inspection of defoliation or soil discolouration (cf. Patil et al. [1]). In such situations, McIntyre [2] proposed a method, later called ranked set sampling (RSS), which can be used as a cost-efficient alternative to the simple random sampling (SRS) scheme. The RSS scheme gathers auxiliary information about the
variable of interest in order to rank the selected sampling units, and thus helps in selecting more
representative samples from the parent population. Murray et al. [3] used the RSS scheme to
estimate both the percentage of the upper leaf surface covered by the spray and the total amount
of spray deposited on the upper surface of the leaves. Without knowing actual values, the leaves
were ranked based on visual appearance of the spray deposits on the upper leaf surfaces when
viewed under ultraviolet light. For more work on the RSS scheme, see Yu and Lam,[4] Mode et al.,[5] Al-Saleh and Al-Shrafat,[6] and Wang et al.[7]

*Corresponding author. Email: aaabdulhaq@yahoo.com
© 2014 Taylor & Francis
The RSS scheme was first suggested by McIntyre [2] for estimating mean pasture and forage
yields. Takahasi and Wakimoto [8] derived the statistical theory of the RSS. They proved that
the sample mean of a ranked set sample is an unbiased estimator of the population mean, and
it is always more precise than that based on SRS. Dell and Clutter [9] were the first to prove
that, even in the presence of ranking errors, the RSS estimator of the population mean is not only
unbiased but also at least as efficient as the mean estimator with SRS. Stokes [10] considered the case in which ranking is done on the basis of a covariate, and showed that, under certain assumptions about the joint distribution of the covariate and the variate of interest, relative efficiencies (REs) can be expressed in terms of their squared correlation coefficient.
The traditional RSS scheme requires $m^2$ units from the target population when selecting a sample of size $m$. However, in practice, it is not always possible for the experimenter to observe or identify $m^2$ units, due to limited time, cost constraints, etc. In such situations, Muttlak [11] suggested a paired RSS (PRSS) scheme as an alternative to the RSS scheme. The PRSS scheme uses fewer than $m^2$ units and, at the same time, provides an unbiased estimator of the population mean that is better than that based on SRS. However, the mean estimator based on PRSS is less efficient than the RSS-based mean estimator. Al-Saleh and Al-Kadiri [12] suggested a double RSS (DRSS) scheme for efficient estimation of the population mean. The DRSS scheme identifies $m^3$ units from the parent population while selecting a sample of size $m$. It is shown that the DRSS-based mean estimators are always better than the mean estimators with RSS. As explained above, similar to the RSS scheme, the $m^3$ units under the DRSS scheme may not be identifiable or rankable with full confidence when there is a shortage of experimental units or when ranking costs cannot be ignored. When confronted with such situations, Haq et al. [13] suggested a paired double RSS (PDRSS) scheme as an efficient alternative to the DRSS scheme. It is shown that the mean estimator under the PDRSS scheme is more precise than the mean estimators based on the SRS, PRSS and RSS schemes.
In this paper, we propose a new generalized RSS scheme which encompasses some existing RSS schemes while continuing to ensure better performance with respect to SRS. The proposed sampling scheme is named hybrid RSS (HRSS) since it is a generalization of the PRSS, RSS, PDRSS and DRSS schemes. Under the HRSS scheme, depending on the experimental situation, the experimenter can select a sample of size $m$ in a variety of ways. This may help in reducing ranking costs and increases the applicability of the suggested sampling scheme when ranking costs are constrained by budgets. It is theoretically shown that the mean estimators based on the HRSS scheme outperform those based on SRS for any population. Moreover, it is numerically shown that the imperfect HRSS (IHRSS) scheme also provides an unbiased estimator of the population mean.
The outline of the rest of the article is as follows. In Section 2, we briefly explain the traditional RSS scheme. In Section 3, the HRSS scheme is proposed; the mean estimator under HRSS is obtained and compared with its counterpart based on SRS. Section 4 compares the performances of the mean estimators obtained under the IHRSS scheme. An application to a real data set is given in Section 5, and Section 6 summarizes the main findings.

2. Ranked set sampling

The traditional RSS scheme is as follows: identify $m^2$ units from the target population. Partition these units into $m$ sets, each of size $m$ units. Rank the units within each set visually or by an inexpensive method. Then, for actual measurement, the $r$th smallest ranked unit is selected from the $r$th set, $r = 1, 2, \dots, m$. This completes one cycle of a ranked set sample of size $m$.

The set size $m$ is usually kept small because ranking a large set is not only difficult by judgement, but also leads to errors in ranking. Thus, in order to reach a suitable sample size, the entire process of obtaining the ranked set sample is repeated $t$ times to obtain $t$ cycles of ranked set samples, with total sample size $n = mt$.
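The cycle just described is straightforward to simulate. The sketch below (Python with NumPy; an illustration under perfect ranking, not from the paper) draws one ranked set sample of size $m$ per cycle and stacks $t$ cycles:

```python
import numpy as np

def rss_cycle(draw, m, rng):
    """One cycle of ranked set sampling: from the rth of m sets of size m,
    measure the rth smallest unit (perfect ranking assumed)."""
    sample = []
    for r in range(1, m + 1):
        ranked_set = np.sort(draw(m, rng))  # rank one set of m units
        sample.append(ranked_set[r - 1])    # measure only the rth order statistic
    return np.array(sample)

rng = np.random.default_rng(1)
draw = lambda size, rng: rng.normal(0.0, 1.0, size)  # hypothetical study variable
t = 4                                                # number of cycles
y = np.concatenate([rss_cycle(draw, 5, rng) for _ in range(t)])
print(len(y))  # n = mt = 20
```

Note that each cycle identifies $m^2$ units but measures only $m$ of them; only the ranking step touches the remaining units.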
Let $Y$ be the study variable with probability density function (PDF) $f(y)$ and cumulative distribution function (CDF) $F(y)$. Let $\mu_Y$ and $\sigma_Y^2$ be the mean and variance of $Y$, respectively. Let $(Y_{11j},\dots,Y_{1mj}), (Y_{21j},\dots,Y_{2mj}), \dots, (Y_{m1j},\dots,Y_{mmj})$ be $m$ independent simple random samples, each of size $m$, drawn from $f(y)$ in the $j$th cycle. Apply the RSS scheme to these samples to get $Y_{r(r:m)j}$, $r = 1,\dots,m$, which represents a ranked set sample of size $m$ for the $j$th cycle, $j = 1,\dots,t$, where $Y_{r(r:m)j} = r\text{th}\min\{Y_{r1j},\dots,Y_{rmj}\}$. Note that, for fixed $r$, $Y_{r(r:m)j}$, $j = 1,\dots,t$, are independent and identically distributed (IID) random variables, i.e. $Y_{r(r:m)j} \stackrel{d}{=} Y_{(r:m)}$, where $Y_{(r:m)} = r\text{th}\min\{Y_{r1},\dots,Y_{rm}\}$ and $Y_{r1},\dots,Y_{rm}$ is the $r$th simple random sample of size $m$. However, for fixed $j$, $Y_{r(r:m)j}$, $r = 1,\dots,m$, are independent and non-identically distributed (INID) random variables. The PDF and CDF of the $r$th order statistic, $Y_{(r:m)}$, $r = 1,\dots,m$, are, respectively,
$$f_{(r:m)}(y) = \frac{m!}{(r-1)!\,(m-r)!}\{F(y)\}^{r-1}\{1-F(y)\}^{m-r}f(y), \quad -\infty < y < \infty, \qquad (1)$$
$$F_{(r:m)}(y) = \sum_{i=r}^{m}\binom{m}{i}\{F(y)\}^{i}\{1-F(y)\}^{m-i}. \qquad (2)$$
For more details, see David and Nagaraja.[14]


From Equations (1) and (2), the mean and variance of $Y_{(r:m)}$ $(r = 1,\dots,m)$ are, respectively, given by
$$\mu_{Y(r:m)} = \int_{-\infty}^{\infty} y\, f_{(r:m)}(y)\,dy \quad\text{and}\quad \sigma^2_{Y(r:m)} = \int_{-\infty}^{\infty}\left(y - \mu_{Y(r:m)}\right)^2 f_{(r:m)}(y)\,dy. \qquad (3)$$
Similarly, the covariance between $Y_{(r:m)}$ and $Y_{(s:m)}$ $(1 \le r < s \le m)$ is given by
$$\sigma_{Y(r,s:m)} = \int_{-\infty}^{\infty}\int_{-\infty}^{y_s} y_r y_s\, f_{(r,s:m)}(y_r, y_s)\,dy_r\,dy_s - \mu_{Y(r:m)}\mu_{Y(s:m)}. \qquad (4)$$


 
Let $\bar{Y}_{SRS} = (1/n)\sum_{i=1}^{n} Y_i$ and $\bar{Y}_{RSS} = (1/n)\sum_{j=1}^{t}\sum_{r=1}^{m} Y_{r(r:m)j}$ be the sample means based on the SRS and RSS schemes, respectively. Takahasi and Wakimoto [8] showed that $\bar{Y}_{RSS}$ is an unbiased estimator of $\mu_Y$, and it is more precise than $\bar{Y}_{SRS}$, i.e.
$$E(\bar{Y}_{RSS}) = \mu_Y \quad\text{and}\quad \mathrm{Var}(\bar{Y}_{SRS}) = \mathrm{Var}(\bar{Y}_{RSS}) + \frac{1}{nm}\sum_{r=1}^{m}\left(\mu_{Y(r:m)} - \mu_Y\right)^2. \qquad (5)$$
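Both claims in Equation (5) are easy to check numerically. The following Monte Carlo sketch (an illustration assuming a standard normal study variable and perfect ranking; not from the paper) compares the RSS and SRS sample means for $m = 4$:

```python
import numpy as np

# Estimate the mean and variance of the RSS and SRS sample means, m = 4.
rng = np.random.default_rng(0)
m, reps = 4, 20000

rss_means, srs_means = [], []
for _ in range(reps):
    sets = np.sort(rng.normal(size=(m, m)), axis=1)            # m ranked sets of size m
    rss_means.append(sets[np.arange(m), np.arange(m)].mean())  # rth unit of rth set
    srs_means.append(rng.normal(size=m).mean())

rss_means = np.array(rss_means)
srs_means = np.array(srs_means)
print(abs(rss_means.mean()) < 0.05)       # unbiasedness: True
print(rss_means.var() < srs_means.var())  # RSS mean is more precise: True
```

The gap between the two variances estimates the last term of Equation (5).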

3. Hybrid ranked set sampling scheme

In this section, a new RSS scheme is proposed for estimation of the population mean.
The HRSS scheme encompasses several RSS schemes, including PRSS, RSS, PDRSS and
DRSS. The proposed scheme provides plenty of options to the experimenter in selecting more
representative samples from the target population. The HRSS scheme can be used as an efficient
alternative to the existing RSS schemes when the experimenter is unable to inspect the total
number of units that are required under different RSS schemes or when there is a shortage of
experimental units, etc.


The steps involved in selecting a hybrid ranked set sample of size $m$ are given below:

Step 1. Let $k_1$, $k_2$ and $k_3$ be integer constants such that $k_1 = 0$, $k_1 = m$ or $1 < k_1 < m - 1$, with $0 \le k_2 \le [k_1/2]$ and $0 \le k_3 \le [(m - k_1)/2]$. Here, $[q]$ is the largest integer value less than or equal to $q$.
Step 2. Identify $k_1(k_1 - k_2)$ units from the target population, and partition them into $k_1 - k_2$ sets, each of size $k_1$ units. Rank the units within each set visually or by an inexpensive method. Select the $r$th smallest ranked unit from the first $k_1 - k_2$ sets, $r = 1, \dots, k_1 - k_2$. Also select the $(k_1 - r + 1)$th smallest ranked unit from the first $k_2$ sets, $r = 1, \dots, k_2$.
Step 3. Identify $(m - k_1)^2(m - k_1 - k_3)$ units from the target population, and partition them into $m - k_1 - k_3$ sets, each of size $(m - k_1)^2$ units. Apply the traditional RSS scheme to each set to get $m - k_1 - k_3$ ranked set samples, each containing $m - k_1$ units.
Step 4. Again rank the units obtained in Step 3, and select the $r$th smallest ranked unit from the first $m - k_1 - k_3$ sets, $r = 1, 2, \dots, m - k_1 - k_3$. Also select the $(m - k_1 - r + 1)$th smallest ranked unit from the first $k_3$ sets, $r = 1, \dots, k_3$.
Step 5. This completes one cycle of a hybrid ranked set sample of size $m$.

The above steps can be repeated $t$ times in order to get a total sample of size $n = mt$. Note that the HRSS scheme requires $t\{k_1(k_1 - k_2) + (m - k_1)^2(m - k_1 - k_3)\}$ units when selecting a sample of size $n$.
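The unit requirement $t\{k_1(k_1 - k_2) + (m - k_1)^2(m - k_1 - k_3)\}$ is easy to tabulate. A small helper (a hypothetical function, checked here against a few entries of Table 1 with $t = 1$):

```python
def hrss_units(m, k1, k2, k3, t=1):
    """Units that must be identified for one HRSS sample of size n = m*t."""
    return t * (k1 * (k1 - k2) + (m - k1) ** 2 * (m - k1 - k3))

# A few entries of Table 1 (t = 1):
print(hrss_units(4, 4, 2, 0))  # PRSS: 8
print(hrss_units(4, 0, 0, 0))  # DRSS: 64
print(hrss_units(5, 3, 0, 0))  # HRSS: 17
print(hrss_units(6, 2, 0, 1))  # HRSS: 52
```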
Some special cases of the HRSS scheme are as follows:

(i) If $k_1 = m$ and $k_2 = [k_1/2]$, then HRSS is equivalent to PRSS.
(ii) If $k_1 = m$ and $k_2 = 0$, then HRSS is equivalent to RSS.
(iii) If $k_1 = 0$ and $k_3 = [(m - k_1)/2]$, then HRSS is equivalent to PDRSS.
(iv) If $k_1 = k_3 = 0$, then HRSS is equivalent to DRSS.

In Table 1, for some plausible values of $m$, we report different possible values of $k_1$, $k_2$ and $k_3$ that allow the experimenter to select different representative samples from the target population, depending on time and cost constraints. From Table 1, it is clear that, under the HRSS scheme, a sample of size $n$ can be selected in different ways, subject to the availability of the experimental units. For example, a hybrid ranked set sample of size $n = 5$ $(m = 5, t = 1)$ can be selected in 10 possible ways. Thus, the proposed scheme is more economical and practical than the existing RSS schemes.
3.1. Examples of HRSS

In order to select a hybrid ranked set sample of size $n = 4$ $(m = 4, t = 1)$, the possible values of $(k_1, k_2, k_3)$ are $(4,2,0)$, $(4,1,0)$, $(4,0,0)$, $(0,0,2)$, $(0,0,1)$, $(0,0,0)$ and $(2,0,0)$. In the following examples, we consider some choices of $(k_1, k_2, k_3)$ to illustrate the proposed sampling scheme.
(1) Let $k_1 = 4$, $k_2 = 2$, $k_3 = 0$. With these choices, HRSS is equivalent to PRSS. Identify 8 units from the target population and partition them into 2 sets, each of size 4 units. Let $(Y_{11}, Y_{12}, Y_{13}, Y_{14})$ and $(Y_{21}, Y_{22}, Y_{23}, Y_{24})$ represent the identified units in the first and second sets, respectively. Rank the units within each set to get
$$\begin{pmatrix} Y_{1(1:4)} & Y_{2(1:4)} \\ Y_{1(2:4)} & Y_{2(2:4)} \\ Y_{1(3:4)} & Y_{2(3:4)} \\ Y_{1(4:4)} & Y_{2(4:4)} \end{pmatrix}.$$


Table 1. HRSS possibilities when selecting samples of different sizes.

m  Scheme  k1  k2  k3  Units
3  PRSS     3   1   0      6
3  RSS      3   0   0      9
3  PDRSS    0   0   1     18
3  DRSS     0   0   0     27
4  PRSS     4   2   0      8
4  HRSS     4   1   0     12
4  RSS      4   0   0     16
4  PDRSS    0   0   2     32
4  HRSS     0   0   1     48
4  DRSS     0   0   0     64
4  HRSS     2   0   0     12
5  PRSS     5   2   0     15
5  HRSS     5   1   0     20
5  RSS      5   0   0     25
5  PDRSS    0   0   2     75
5  HRSS     0   0   1    100
5  DRSS     0   0   0    125
5  HRSS     3   0   0     17
5  HRSS     3   1   0     14
5  HRSS     2   0   1     22
5  HRSS     2   0   0     31
6  PRSS     6   3   0     18
6  HRSS     6   2   0     24
6  HRSS     6   1   0     30
6  RSS      6   0   0     36
6  PDRSS    0   0   3    108
6  HRSS     0   0   2    144
6  HRSS     0   0   1    180
6  DRSS     0   0   0    216
6  HRSS     2   0   0     68
6  HRSS     2   0   1     52
6  HRSS     2   0   2     36
6  HRSS     3   0   0     36
6  HRSS     3   1   0     33
6  HRSS     3   0   1     27
6  HRSS     3   1   1     24
6  HRSS     4   0   0     24
6  HRSS     4   1   0     20
6  HRSS     4   2   0     16

Now, select the $r$th and the $(5-r)$th ranked units from the $r$th set, $r = 1, 2$. This gives a paired ranked set sample of size 4, denoted by $(Y_{r(r:4)}, Y_{r(5-r:4)})$, $r = 1, 2$. Let $\bar{Y}_{PRSS}$ be the mean of a paired ranked set sample of size 4, where $\bar{Y}_{PRSS} = (1/4)\sum_{r=1}^{2}\left(Y_{r(r:4)} + Y_{r(5-r:4)}\right)$. The variance of $\bar{Y}_{PRSS}$ is given by
$$\mathrm{Var}(\bar{Y}_{PRSS}) = \mathrm{Var}(\bar{Y}_{RSS}) + \frac{2}{16}\left(\sigma_{Y(1,4:4)} + \sigma_{Y(2,3:4)}\right), \qquad (6)$$
where $\mathrm{Var}(\bar{Y}_{RSS}) = (1/16)\sum_{r=1}^{4}\sigma^2_{Y(r:4)}$ and $\sigma_{Y(r,5-r:4)} \ge 0$ is the positive covariance between $Y_{r(r:4)}$ and $Y_{r(5-r:4)}$, for $r = 1, 2$; see David and Nagaraja.[14] From Equation (6), note that $\bar{Y}_{RSS}$ is always more efficient than $\bar{Y}_{PRSS}$.
(2) Let $k_1 = 4$, $k_2 = 1$, $k_3 = 0$. Identify 12 units from the target population and partition them into 3 sets, each of size 4 units. Let $(Y_{r1}, Y_{r2}, Y_{r3}, Y_{r4})$ represent the identified units in the $r$th set, $r = 1, 2, 3$. Rank the units within each set to get
$$\begin{pmatrix} Y_{1(1:4)} & Y_{2(1:4)} & Y_{3(1:4)} \\ Y_{1(2:4)} & Y_{2(2:4)} & Y_{3(2:4)} \\ Y_{1(3:4)} & Y_{2(3:4)} & Y_{3(3:4)} \\ Y_{1(4:4)} & Y_{2(4:4)} & Y_{3(4:4)} \end{pmatrix}.$$

Now, select both the minimum and the maximum from the first set, and select the second and third smallest ranked units from the second and third sets, respectively. This gives a hybrid ranked set sample of size 4, denoted by $\{Y_{r(r:4)}, Y_{1(4:4)}\}$, $r = 1, 2, 3$. Let $\bar{Y}_{HRSS}$ be the mean of a hybrid ranked set sample of size 4, where $\bar{Y}_{HRSS} = (1/4)\left(\sum_{r=1}^{3} Y_{r(r:4)} + Y_{1(4:4)}\right)$. The variance of $\bar{Y}_{HRSS}$ is given by
$$\mathrm{Var}(\bar{Y}_{HRSS}) = \mathrm{Var}(\bar{Y}_{RSS}) + \frac{2}{16}\,\sigma_{Y(1,4:4)}, \qquad (7)$$
which shows that $\bar{Y}_{HRSS}$ is always less efficient than $\bar{Y}_{RSS}$. From Equations (6) and (7), we can write
$$\mathrm{Var}(\bar{Y}_{PRSS}) - \mathrm{Var}(\bar{Y}_{HRSS}) = \frac{2}{16}\,\sigma_{Y(2,3:4)} \ge 0, \qquad (8)$$
which shows that $\bar{Y}_{HRSS}$ is always better than $\bar{Y}_{PRSS}$.


(3) Let $k_1 = 4$, $k_2 = k_3 = 0$. With these choices, HRSS is equivalent to RSS. Identify 16 units from the target population and partition them into 4 sets, each of size 4 units. Let $(Y_{r1}, Y_{r2}, Y_{r3}, Y_{r4})$ represent the identified units in the $r$th set, $r = 1, 2, 3, 4$. Rank the units within each set to get
$$\begin{pmatrix} Y_{1(1:4)} & Y_{2(1:4)} & Y_{3(1:4)} & Y_{4(1:4)} \\ Y_{1(2:4)} & Y_{2(2:4)} & Y_{3(2:4)} & Y_{4(2:4)} \\ Y_{1(3:4)} & Y_{2(3:4)} & Y_{3(3:4)} & Y_{4(3:4)} \\ Y_{1(4:4)} & Y_{2(4:4)} & Y_{3(4:4)} & Y_{4(4:4)} \end{pmatrix}.$$
Now, select the diagonal of the above matrix to get a ranked set sample of size 4, denoted by $Y_{r(r:4)}$, $r = 1, 2, 3, 4$.
(4) Let $k_1 = k_2 = 0$, $k_3 = 2$. With these choices, HRSS is equivalent to PDRSS. Identify 32 units from the target population and partition them into 2 sets, each of size 16 units, denoted by $(Y^{(1)}_{11}, Y^{(1)}_{12}, \dots, Y^{(1)}_{44})$ and $(Y^{(2)}_{11}, Y^{(2)}_{12}, \dots, Y^{(2)}_{44})$. Rank the units within each set to get
$$\begin{pmatrix} Y^{(1)}_{1(1:4)} & Y^{(1)}_{2(1:4)} & Y^{(1)}_{3(1:4)} & Y^{(1)}_{4(1:4)} \\ Y^{(1)}_{1(2:4)} & Y^{(1)}_{2(2:4)} & Y^{(1)}_{3(2:4)} & Y^{(1)}_{4(2:4)} \\ Y^{(1)}_{1(3:4)} & Y^{(1)}_{2(3:4)} & Y^{(1)}_{3(3:4)} & Y^{(1)}_{4(3:4)} \\ Y^{(1)}_{1(4:4)} & Y^{(1)}_{2(4:4)} & Y^{(1)}_{3(4:4)} & Y^{(1)}_{4(4:4)} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} Y^{(2)}_{1(1:4)} & \cdots & Y^{(2)}_{4(1:4)} \\ \vdots & & \vdots \\ Y^{(2)}_{1(4:4)} & \cdots & Y^{(2)}_{4(4:4)} \end{pmatrix}.$$
Select the diagonal of each matrix to get
$$\begin{pmatrix} Y^{(1)}_{1(1:4)} \\ Y^{(1)}_{2(2:4)} \\ Y^{(1)}_{3(3:4)} \\ Y^{(1)}_{4(4:4)} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} Y^{(2)}_{1(1:4)} \\ Y^{(2)}_{2(2:4)} \\ Y^{(2)}_{3(3:4)} \\ Y^{(2)}_{4(4:4)} \end{pmatrix}.$$
Now, apply the PRSS scheme to get
$$\begin{pmatrix} Y^{(1)(1:4)}_{1(1:4)} \\ Y^{(1)(2:4)}_{2(2:4)} \\ Y^{(1)(3:4)}_{3(3:4)} \\ Y^{(1)(4:4)}_{4(4:4)} \end{pmatrix} \quad\text{and}\quad \begin{pmatrix} Y^{(2)(1:4)}_{1(1:4)} \\ Y^{(2)(2:4)}_{2(2:4)} \\ Y^{(2)(3:4)}_{3(3:4)} \\ Y^{(2)(4:4)}_{4(4:4)} \end{pmatrix},$$
which gives a paired double ranked set sample of size 4, denoted by $\left(Y^{(r)(r:4)}_{r(r:4)}, Y^{(r)(5-r:4)}_{(5-r)(5-r:4)}\right)$, $r = 1, 2$, where $Y^{(r)(r:4)}_{r(r:4)} = r\text{th}\min\{Y^{(r)}_{1(1:4)}, \dots, Y^{(r)}_{4(4:4)}\}$. Let $\bar{Y}_{PDRSS}$ be the mean of a paired double ranked set sample of size 4, where $\bar{Y}_{PDRSS} = (1/4)\sum_{r=1}^{2}\left(Y^{(r)(r:4)}_{r(r:4)} + Y^{(r)(5-r:4)}_{(5-r)(5-r:4)}\right)$. The variance of $\bar{Y}_{PDRSS}$ is given by
$$\mathrm{Var}(\bar{Y}_{PDRSS}) = \mathrm{Var}(\bar{Y}_{DRSS}) + \frac{2}{16}\left(\sigma^{(1,4:4)}_{Y(1,4:4)} + \sigma^{(2,3:4)}_{Y(2,3:4)}\right), \qquad (9)$$
where $\mathrm{Var}(\bar{Y}_{DRSS}) = (1/16)\sum_{r=1}^{4}\sigma^{(r,r:4)}_{Y(r,r:4)}$ with $\mathrm{Var}(Y^{(r)(r:4)}_{r(r:4)}) = \sigma^{(r,r:4)}_{Y(r,r:4)}$. Here, $\sigma^{(r,5-r:4)}_{Y(r,5-r:4)} \ge 0$ is the positive covariance between $Y^{(r)(r:4)}_{r(r:4)}$ and $Y^{(r)(5-r:4)}_{(5-r)(5-r:4)}$, for $r = 1, 2$; see Al-Saleh and Al-Omari.[15] From Equation (9), note that $\bar{Y}_{DRSS}$ is always more efficient than $\bar{Y}_{PDRSS}$. Al-Saleh and Al-Kadiri [12] showed that
$$\mathrm{Var}(\bar{Y}_{RSS}) = \mathrm{Var}(\bar{Y}_{DRSS}) + \frac{2}{16}\sum_{1 \le r < s \le 4}\sigma^{(r,s:4)}_{Y(r,s:4)}. \qquad (10)$$
From Equations (9) and (10), we can write
$$\mathrm{Var}(\bar{Y}_{RSS}) = \mathrm{Var}(\bar{Y}_{PDRSS}) + \frac{2}{16}\sum_{1 \le r < s \le 4}\sigma^{(r,s:4)}_{Y(r,s:4)} - \frac{2}{16}\left(\sigma^{(1,4:4)}_{Y(1,4:4)} + \sigma^{(2,3:4)}_{Y(2,3:4)}\right), \qquad (11)$$
which shows that $\bar{Y}_{PDRSS}$ is always more efficient than $\bar{Y}_{RSS}$.


(5) Let $k_1 = k_2 = 0$, $k_3 = 1$. Identify 48 units from the target population and partition them into 3 sets, each of size 16 units, denoted by $(Y^{(r)}_{11}, Y^{(r)}_{12}, \dots, Y^{(r)}_{44})$, $r = 1, 2, 3$. Rank the units within each set to get
$$\begin{pmatrix} Y^{(r)}_{1(1:4)} & Y^{(r)}_{2(1:4)} & Y^{(r)}_{3(1:4)} & Y^{(r)}_{4(1:4)} \\ Y^{(r)}_{1(2:4)} & Y^{(r)}_{2(2:4)} & Y^{(r)}_{3(2:4)} & Y^{(r)}_{4(2:4)} \\ Y^{(r)}_{1(3:4)} & Y^{(r)}_{2(3:4)} & Y^{(r)}_{3(3:4)} & Y^{(r)}_{4(3:4)} \\ Y^{(r)}_{1(4:4)} & Y^{(r)}_{2(4:4)} & Y^{(r)}_{3(4:4)} & Y^{(r)}_{4(4:4)} \end{pmatrix}, \quad r = 1, 2, 3.$$
Select the diagonal of each matrix to get
$$\begin{pmatrix} Y^{(1)}_{1(1:4)} & Y^{(2)}_{1(1:4)} & Y^{(3)}_{1(1:4)} \\ Y^{(1)}_{2(2:4)} & Y^{(2)}_{2(2:4)} & Y^{(3)}_{2(2:4)} \\ Y^{(1)}_{3(3:4)} & Y^{(2)}_{3(3:4)} & Y^{(3)}_{3(3:4)} \\ Y^{(1)}_{4(4:4)} & Y^{(2)}_{4(4:4)} & Y^{(3)}_{4(4:4)} \end{pmatrix}.$$
Again rank the units within each set (column) to get
$$\begin{pmatrix} Y^{(1)(1:4)}_{1(1:4)} & Y^{(2)(1:4)}_{1(1:4)} & Y^{(3)(1:4)}_{1(1:4)} \\ Y^{(1)(2:4)}_{2(2:4)} & Y^{(2)(2:4)}_{2(2:4)} & Y^{(3)(2:4)}_{2(2:4)} \\ Y^{(1)(3:4)}_{3(3:4)} & Y^{(2)(3:4)}_{3(3:4)} & Y^{(3)(3:4)}_{3(3:4)} \\ Y^{(1)(4:4)}_{4(4:4)} & Y^{(2)(4:4)}_{4(4:4)} & Y^{(3)(4:4)}_{4(4:4)} \end{pmatrix}.$$
Now, select both the minimum and the maximum from the first set, and select the second and third smallest ranked units from the second and third sets, respectively. This gives a hybrid ranked set sample of size 4, denoted by $\{Y^{(r)(r:4)}_{r(r:4)}, Y^{(1)(4:4)}_{4(4:4)}\}$, $r = 1, 2, 3$. Let $\bar{Y}_{HRSS}$ be the mean of a hybrid ranked set sample of size 4, where $\bar{Y}_{HRSS} = (1/4)\left(\sum_{r=1}^{3} Y^{(r)(r:4)}_{r(r:4)} + Y^{(1)(4:4)}_{4(4:4)}\right)$. The variance of $\bar{Y}_{HRSS}$ is given by
$$\mathrm{Var}(\bar{Y}_{HRSS}) = \mathrm{Var}(\bar{Y}_{DRSS}) + \frac{2}{16}\,\sigma^{(1,4:4)}_{Y(1,4:4)}, \qquad (12)$$
which shows that $\bar{Y}_{HRSS}$ is always less efficient than $\bar{Y}_{DRSS}$. From Equations (9) and (12), we can write
$$\mathrm{Var}(\bar{Y}_{HRSS}) = \mathrm{Var}(\bar{Y}_{PDRSS}) - \frac{2}{16}\,\sigma^{(2,3:4)}_{Y(2,3:4)} \le \mathrm{Var}(\bar{Y}_{PDRSS}), \qquad (13)$$
since $\sigma^{(2,3:4)}_{Y(2,3:4)} \ge 0$, which shows that $\bar{Y}_{HRSS}$ is always better than $\bar{Y}_{PDRSS}$.


(6) Let $k_1 = k_2 = k_3 = 0$. Identify 64 units from the target population and partition them into 4 sets, each of size 16 units, denoted by $(Y^{(r)}_{11}, Y^{(r)}_{12}, \dots, Y^{(r)}_{44})$, $r = 1, 2, 3, 4$. Rank the units within each set to get
$$\begin{pmatrix} Y^{(r)}_{1(1:4)} & Y^{(r)}_{2(1:4)} & Y^{(r)}_{3(1:4)} & Y^{(r)}_{4(1:4)} \\ Y^{(r)}_{1(2:4)} & Y^{(r)}_{2(2:4)} & Y^{(r)}_{3(2:4)} & Y^{(r)}_{4(2:4)} \\ Y^{(r)}_{1(3:4)} & Y^{(r)}_{2(3:4)} & Y^{(r)}_{3(3:4)} & Y^{(r)}_{4(3:4)} \\ Y^{(r)}_{1(4:4)} & Y^{(r)}_{2(4:4)} & Y^{(r)}_{3(4:4)} & Y^{(r)}_{4(4:4)} \end{pmatrix}, \quad r = 1, 2, 3, 4.$$
Now, apply the RSS scheme to each matrix to get
$$\begin{pmatrix} Y^{(1)}_{1(1:4)} & Y^{(2)}_{1(1:4)} & Y^{(3)}_{1(1:4)} & Y^{(4)}_{1(1:4)} \\ Y^{(1)}_{2(2:4)} & Y^{(2)}_{2(2:4)} & Y^{(3)}_{2(2:4)} & Y^{(4)}_{2(2:4)} \\ Y^{(1)}_{3(3:4)} & Y^{(2)}_{3(3:4)} & Y^{(3)}_{3(3:4)} & Y^{(4)}_{3(3:4)} \\ Y^{(1)}_{4(4:4)} & Y^{(2)}_{4(4:4)} & Y^{(3)}_{4(4:4)} & Y^{(4)}_{4(4:4)} \end{pmatrix}.$$
Again rank the units within each set (column) to get
$$\begin{pmatrix} Y^{(1)(1:4)}_{1(1:4)} & Y^{(2)(1:4)}_{1(1:4)} & Y^{(3)(1:4)}_{1(1:4)} & Y^{(4)(1:4)}_{1(1:4)} \\ Y^{(1)(2:4)}_{2(2:4)} & Y^{(2)(2:4)}_{2(2:4)} & Y^{(3)(2:4)}_{2(2:4)} & Y^{(4)(2:4)}_{2(2:4)} \\ Y^{(1)(3:4)}_{3(3:4)} & Y^{(2)(3:4)}_{3(3:4)} & Y^{(3)(3:4)}_{3(3:4)} & Y^{(4)(3:4)}_{3(3:4)} \\ Y^{(1)(4:4)}_{4(4:4)} & Y^{(2)(4:4)}_{4(4:4)} & Y^{(3)(4:4)}_{4(4:4)} & Y^{(4)(4:4)}_{4(4:4)} \end{pmatrix}.$$
Now, select the diagonal of the above matrix to get a double ranked set sample of size $m$, denoted by $Y^{(r)(r:4)}_{r(r:4)}$, $r = 1, 2, 3, 4$. From Equations (9)-(13), it is clear that $\bar{Y}_{DRSS}$ is more efficient than all other estimators considered here.
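The variance orderings established in Equations (9)-(13) can be checked by simulation. The sketch below (Python with NumPy; an illustration with standard normal data and perfect ranking, not from the paper) estimates the variance of each mean estimator for $m = 4$, encoding each scheme as a list of (set, rank) selections applied after the second-stage ranking:

```python
import numpy as np

rng = np.random.default_rng(2)
m, reps = 4, 10000

def rss():
    """One first-stage ranked set sample of size 4 (perfect ranking)."""
    sets = np.sort(rng.normal(size=(m, m)), axis=1)
    return sets[np.arange(m), np.arange(m)]

def second_stage(n_sets, picks):
    """Rank n_sets first-stage RSS samples again, then measure the
    (set index, rank) pairs in picks (0-based)."""
    cols = [np.sort(rss()) for _ in range(n_sets)]
    return np.mean([cols[i][r] for i, r in picks])

schemes = {
    "RSS":   lambda: rss().mean(),
    "PDRSS": lambda: second_stage(2, [(0, 0), (0, 3), (1, 1), (1, 2)]),
    "HRSS":  lambda: second_stage(3, [(0, 0), (0, 3), (1, 1), (2, 2)]),
    "DRSS":  lambda: second_stage(4, [(0, 0), (1, 1), (2, 2), (3, 3)]),
}
var = {name: np.var([f() for _ in range(reps)]) for name, f in schemes.items()}
print(var["HRSS"] < var["PDRSS"] < var["RSS"])  # Equations (11) and (13): True
```

DRSS and the HRSS design with $k_3 = 1$ differ only by the single covariance term in Equation (12), so their simulated variances are nearly indistinguishable at this number of replications.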

4. HRSS and mathematical set-up

In this section, we explore the properties of the random variables under the HRSS scheme. The estimators of the population mean under both HRSS and IHRSS schemes are proposed.

Let $\{Y_{1(1:k_1)j}, \dots, Y_{(k_1-k_2)(k_1-k_2:k_1)j}\}$, $\{Y_{1(k_1:k_1)j}, \dots, Y_{k_2(k_1-k_2+1:k_1)j}\}$, $\{Y^{(1)(1:m-k_1)}_{1(1:m-k_1)j}, \dots, Y^{(m-k_1-k_3)(m-k_1-k_3:m-k_1)}_{(m-k_1-k_3)(m-k_1-k_3:m-k_1)j}\}$ and $\{Y^{(1)(m-k_1:m-k_1)}_{(m-k_1)(m-k_1:m-k_1)j}, \dots, Y^{(k_3)(m-k_1-k_3+1:m-k_1)}_{(m-k_1-k_3+1)(m-k_1-k_3+1:m-k_1)j}\}$, $j = 1, \dots, t$, represent a hybrid ranked set sample of size $n$. Note that, for fixed $r$, both $Y_{r(r:k_1)j}$ and $Y^{(r)(r:m-k_1)}_{r(r:m-k_1)j}$, $j = 1, \dots, t$, are IID random variables. However, for fixed $j$, both $Y_{r(r:k_1)j}$ and $Y^{(r)(r:m-k_1)}_{r(r:m-k_1)j}$ are INID random variables. Here, without loss of generality, we consider $Y_{r(r:k_1)j} \stackrel{d}{=} Y_{(r:k_1)}$ and $Y^{(r)(r:m-k_1)}_{r(r:m-k_1)j} \stackrel{d}{=} Y^{(r:m-k_1)}_{(r:m-k_1)}$, $j = 1, \dots, t$, where $Y^{(r:m-k_1)}_{(r:m-k_1)} = r\text{th}\min\{Y_{r(1:m-k_1)}, \dots, Y_{r(m-k_1:m-k_1)}\}$ and $Y_{r(1:m-k_1)}, \dots, Y_{r(m-k_1:m-k_1)}$ is the $r$th ranked set sample of size $m - k_1$.

Suppose $A = ((a_{i,j}))$ is a square matrix of order $m$. Then the permanent of the matrix $A$ is defined as
$$\mathrm{Per}(A) = \sum_{P}\prod_{j=1}^{m} a_{j,i_j},$$
where $\sum_{P}$ denotes the sum over all $m!$ permutations $(i_1, i_2, \dots, i_m)$ of $(1, 2, \dots, m)$; see Vaughan and Venables.[16]

Following Bapat and Beg,[17] the PDF of $Y^{(r:m-k_1)}_{(r:m-k_1)}$ $(1 \le r \le m - k_1)$ is given by
$$f_{Y^{(r:m-k_1)}_{(r:m-k_1)}}(y) = \frac{\mathrm{Per}(\Delta_1)}{(r-1)!\,(m-k_1-r)!}, \quad -\infty < y < \infty, \qquad (14)$$
where
$$\Delta_1 = \begin{pmatrix} F_{(1:m-k_1)}(y) & \cdots & F_{(m-k_1:m-k_1)}(y) \\ f_{(1:m-k_1)}(y) & \cdots & f_{(m-k_1:m-k_1)}(y) \\ 1 - F_{(1:m-k_1)}(y) & \cdots & 1 - F_{(m-k_1:m-k_1)}(y) \end{pmatrix} \begin{matrix} \}\, r-1 \\ \}\, 1 \\ \}\, m-k_1-r \end{matrix}.$$
Here, $\}\,r-1$, $\}\,1$ and $\}\,m-k_1-r$ show that the first, second and third rows are repeated $r-1$, $1$ and $m-k_1-r$ times, respectively.

The joint density function of $Y^{(r:m-k_1)}_{(r:m-k_1)}$ and $Y^{(s:m-k_1)}_{(s:m-k_1)}$ $(1 \le r < s \le m - k_1)$ is given by
$$f_{Y^{(r:m-k_1)}_{(r:m-k_1)},\,Y^{(s:m-k_1)}_{(s:m-k_1)}}(y_r, y_s) = \frac{\mathrm{Per}(\Delta_2)}{(r-1)!\,(s-r-1)!\,(m-k_1-s)!}, \quad -\infty < y_r < y_s < \infty, \qquad (15)$$
where
$$\Delta_2 = \begin{pmatrix} F_{(1:m-k_1)}(y_r) & \cdots & F_{(m-k_1:m-k_1)}(y_r) \\ f_{(1:m-k_1)}(y_r) & \cdots & f_{(m-k_1:m-k_1)}(y_r) \\ F_{(1:m-k_1)}(y_s) - F_{(1:m-k_1)}(y_r) & \cdots & F_{(m-k_1:m-k_1)}(y_s) - F_{(m-k_1:m-k_1)}(y_r) \\ f_{(1:m-k_1)}(y_s) & \cdots & f_{(m-k_1:m-k_1)}(y_s) \\ 1 - F_{(1:m-k_1)}(y_s) & \cdots & 1 - F_{(m-k_1:m-k_1)}(y_s) \end{pmatrix} \begin{matrix} \}\, r-1 \\ \}\, 1 \\ \}\, s-r-1 \\ \}\, 1 \\ \}\, m-k_1-s \end{matrix}.$$

Let $\mu^{(r:m-k_1)}_{Y(r:m-k_1)}$ and $\sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)}$ be the mean and variance of $Y^{(r:m-k_1)}_{(r:m-k_1)}$, respectively, given by
$$\mu^{(r:m-k_1)}_{Y(r:m-k_1)} = \int_{-\infty}^{\infty} y\, f_{Y^{(r:m-k_1)}_{(r:m-k_1)}}(y)\,dy \quad\text{and}\quad \sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)} = \int_{-\infty}^{\infty}\left(y - \mu^{(r:m-k_1)}_{Y(r:m-k_1)}\right)^2 f_{Y^{(r:m-k_1)}_{(r:m-k_1)}}(y)\,dy. \qquad (16)$$
Let $\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)}$ be the covariance between $Y^{(r:m-k_1)}_{(r:m-k_1)}$ and $Y^{(s:m-k_1)}_{(s:m-k_1)}$ $(1 \le r < s \le m - k_1)$, defined as
$$\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)} = \int_{-\infty}^{\infty}\int_{-\infty}^{y_s} y_r y_s\, f_{Y^{(r:m-k_1)}_{(r:m-k_1)},\,Y^{(s:m-k_1)}_{(s:m-k_1)}}(y_r, y_s)\,dy_r\,dy_s - \mu^{(r:m-k_1)}_{Y(r:m-k_1)}\mu^{(s:m-k_1)}_{Y(s:m-k_1)}. \qquad (17)$$

Based on the above formulae, it is easy to find the means, variances and covariances of the random variables obtained under the HRSS scheme.
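For the small set sizes considered in this paper, the permanent in Equations (14) and (15) can be evaluated directly from the definition above. A brute-force sketch (factorial cost, adequate for the orders used here):

```python
from itertools import permutations
from math import prod

def per(a):
    """Permanent of a square matrix: like the determinant, but every
    permutation term enters with a plus sign."""
    m = len(a)
    return sum(prod(a[j][p[j]] for j in range(m)) for p in permutations(range(m)))

print(per([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

In Equations (14) and (15), the rows of the matrix are first repeated the indicated numbers of times to form a square matrix before the permanent is taken.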

4.1. Estimation of the population mean

The estimator of the population mean based on the HRSS scheme, say $\bar{Y}_{HRSS}$, is defined as
$$\bar{Y}_{HRSS} = \frac{1}{n}\sum_{j=1}^{t}\left\{\sum_{r=1}^{k_1-k_2} Y_{r(r:k_1)j} + \sum_{r=1}^{k_2} Y_{r(k_1-r+1:k_1)j} + \sum_{r=1}^{m-k_1-k_3} Y^{(r)(r:m-k_1)}_{r(r:m-k_1)j} + \sum_{r=1}^{k_3} Y^{(r)(m-k_1-r+1:m-k_1)}_{(m-k_1-r+1)(m-k_1-r+1:m-k_1)j}\right\}. \qquad (18)$$
The variance of $\bar{Y}_{HRSS}$ is
$$\mathrm{Var}(\bar{Y}_{HRSS}) = \frac{1}{nm}\left[\sum_{r=1}^{k_1}\sigma_{Y(r,r:k_1)} + \sum_{r=1}^{m-k_1}\sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)} + 2\left\{\sum_{r=1}^{k_2}\sigma_{Y(r,k_1-r+1:k_1)} + \sum_{r=1}^{k_3}\sigma^{(r,m-k_1-r+1:m-k_1)}_{Y(r,m-k_1-r+1:m-k_1)}\right\}\right]. \qquad (19)$$

Theorem 1
(i) $\bar{Y}_{HRSS}$ is an unbiased estimator of the population mean $\mu_Y$.
(ii) $\bar{Y}_{HRSS}$ is more precise than $\bar{Y}_{SRS}$, i.e. $\mathrm{Var}(\bar{Y}_{HRSS}) < \mathrm{Var}(\bar{Y}_{SRS})$.
Proof
(i) From Equation (18), we have
$$E(\bar{Y}_{HRSS}) = \frac{1}{n}\sum_{j=1}^{t}\left\{\sum_{r=1}^{k_1-k_2}\mu_{Y(r:k_1)} + \sum_{r=1}^{k_2}\mu_{Y(k_1-r+1:k_1)} + \sum_{r=1}^{m-k_1-k_3}\mu^{(r:m-k_1)}_{Y(r:m-k_1)} + \sum_{r=1}^{k_3}\mu^{(m-k_1-r+1:m-k_1)}_{Y(m-k_1-r+1:m-k_1)}\right\} \qquad (20)$$
$$= \frac{1}{m}\left\{\sum_{r=1}^{k_1}\mu_{Y(r:k_1)} + \sum_{r=1}^{m-k_1}\mu^{(r:m-k_1)}_{Y(r:m-k_1)}\right\}.$$
We can write $\sum_{r=1}^{k_1}\mu_{Y(r:k_1)} = k_1\mu_Y$ and $\sum_{r=1}^{m-k_1}\mu^{(r:m-k_1)}_{Y(r:m-k_1)} = (m-k_1)\mu_Y$; see Takahasi and Wakimoto [8] and Al-Saleh and Al-Kadiri.[12] Using these results, Equation (20) can be written as
$$E(\bar{Y}_{HRSS}) = \frac{1}{m}\{k_1\mu_Y + (m-k_1)\mu_Y\} = \mu_Y.$$
Hence, (i) follows.

(ii) It is easy to write
$$k_1\sigma_Y^2 = \mathrm{Var}\left(\sum_{r=1}^{k_1} Y_r\right) = \mathrm{Var}\left(\sum_{r=1}^{k_1} Y_{(r:k_1)}\right) = \sum_{r=1}^{k_1}\sigma_{Y(r,r:k_1)} + 2\sum_{1\le r<s\le k_1}\sigma_{Y(r,s:k_1)},$$
so that
$$\sum_{r=1}^{k_1}\sigma_{Y(r,r:k_1)} = k_1\sigma_Y^2 - 2\sum_{1\le r<s\le k_1}\sigma_{Y(r,s:k_1)}, \qquad (21)$$
where $\sigma_{Y(r,s:k_1)} \ge 0$, $r < s = 1, \dots, k_1$, is the positive covariance between $Y_{(r:k_1)}$ and $Y_{(s:k_1)}$ (cf. David and Nagaraja [14]).
Similarly, we can write
$$\sum_{r=1}^{m-k_1}\sigma_{Y(r,r:m-k_1)} = \mathrm{Var}\left(\sum_{r=1}^{m-k_1} Y_{(r:m-k_1)}\right) = \mathrm{Var}\left(\sum_{r=1}^{m-k_1} Y^{(r:m-k_1)}_{(r:m-k_1)}\right) = \sum_{r=1}^{m-k_1}\sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)} + 2\sum_{1\le r<s\le m-k_1}\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)},$$
so that
$$\sum_{r=1}^{m-k_1}\sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)} = \sum_{r=1}^{m-k_1}\sigma_{Y(r,r:m-k_1)} - 2\sum_{1\le r<s\le m-k_1}\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)}, \qquad (22)$$
where $\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)} \ge 0$, $r < s = 1, \dots, m-k_1$, is the positive covariance between $Y^{(r:m-k_1)}_{(r:m-k_1)}$ and $Y^{(s:m-k_1)}_{(s:m-k_1)}$.
Using Equation (21), Equation (22) can be written as
$$\sum_{r=1}^{m-k_1}\sigma^{(r,r:m-k_1)}_{Y(r,r:m-k_1)} = (m-k_1)\sigma_Y^2 - 2\sum_{1\le r<s\le m-k_1}\sigma_{Y(r,s:m-k_1)} - 2\sum_{1\le r<s\le m-k_1}\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)}. \qquad (23)$$
By substituting Equations (21) and (23) into Equation (19), we get
$$\mathrm{Var}(\bar{Y}_{HRSS}) = \frac{\sigma_Y^2}{n} - \frac{2}{nm}(A_Y + B_Y + C_Y), \qquad (24)$$
where $A_Y = \sum_{1\le r<s\le m-k_1}\sigma_{Y(r,s:m-k_1)}$, $B_Y = \sum_{1\le r<s\le k_1}\sigma_{Y(r,s:k_1)} - \sum_{r=1}^{k_2}\sigma_{Y(r,k_1-r+1:k_1)}$ and $C_Y = \sum_{1\le r<s\le m-k_1}\sigma^{(r,s:m-k_1)}_{Y(r,s:m-k_1)} - \sum_{r=1}^{k_3}\sigma^{(r,m-k_1-r+1:m-k_1)}_{Y(r,m-k_1-r+1:m-k_1)}$. Note that $A_Y$, $B_Y$ and $C_Y$ are all positive quantities. Here, $A_Y$ is a sum of positive covariances, and the first terms in both $B_Y$ and $C_Y$ include the second terms; therefore, overall these two quantities are always positive. Hence (ii) follows.

The RE of $\bar{Y}_{HRSS}$ with respect to $\bar{Y}_{SRS}$ is

$$\mathrm{RE}(\bar{Y}_{HRSS}, \bar{Y}_{SRS}) = \frac{\mathrm{Var}(\bar{Y}_{SRS})}{\mathrm{Var}(\bar{Y}_{HRSS})} = \frac{m\sigma_Y^2}{m\sigma_Y^2 - 2(A_Y + B_Y + C_Y)} > 1, \qquad (25)$$

which is independent of the number of cycles $t$. The efficiency of $\bar{Y}_{HRSS}$ increases as the value of $m$ increases, and vice versa.
Since the RE of $\bar{Y}_{HRSS}$ with respect to $\bar{Y}_{SRS}$ is independent of $t$, we consider different values of $m$ only, i.e. $m = 3, 4, 5, 6$. Using these values of $m$, the numerical values of Equation (17) are calculated by assuming both symmetric and asymmetric distributions for the study variable


$Y$ (cf. Dell and Clutter [9]). Note that the distributions considered here cover a wide range of skewness and kurtosis. The values of kurtosis for the Uniform(0,1), Normal(0,1), Logistic(0,1), Laplace(0,1) and Beta(3,3) distributions are 1.8, 3, 4.2, 6 and 2.3333, respectively, each with zero skewness. Similarly, the values of (skewness, kurtosis) for the Exponential(1), Gamma(2,1), Beta(9,2), Weibull(2,1) and Weibull(4,1) distributions are (2, 9), (1.4142, 6), (-0.8793, 3.6483), (0.6311, 3.2451) and (-0.0872, 2.7478), respectively. The numerical values of the REs are reported in Table 2.
Remember that, since HRSS encompasses all of the unbiased RSS schemes considered here, the estimators of the population mean under HRSS are at least as efficient as the estimators based on the other RSS schemes. From Table 2, it is observed that, as expected, all REs are greater than one, showing the superiority of HRSS over SRS when estimating the population mean. Moreover, for fixed $k_1$, the RE of the mean estimator under HRSS increases when either $k_2$ or $k_3$ decreases, and vice versa. Note that, when $k_1 = m$ and $k_2 < [m/2]$, the mean estimators based on the HRSS scheme are more efficient than those with the PRSS scheme. For example, when $m = k_1 = 5$ and $k_2 < 2$, i.e. $k_2 = 0, 1$, for the uniform distribution, the RE under the HRSS scheme is 2.8378, while the RE under the PRSS scheme is 1.6667. Similarly, when $m = 5$, $k_1 = 0$ and $k_3 < 2$, the estimators under the HRSS scheme are more precise than those obtained under the PDRSS scheme. In all cases, given $m$, the maximum RE under the HRSS scheme is achieved when $k_1 = k_3 = 0$, and with this choice the HRSS and DRSS schemes are equivalent. Note that, for the first four symmetric distributions (Uniform to Laplace), the kurtosis increases from 1.8 to 6. Thus, generally, the REs of the mean estimators under HRSS decrease as kurtosis increases, and vice versa.
Under the HRSS scheme, when selecting a sample of size $m$, the experimenter ranks sets of sizes $k_1$ and $m - k_1$ under the single and double ranking schemes, respectively. The maximum set size that can be ranked under the HRSS scheme is $m$. Some important points when selecting samples under the HRSS scheme are as follows:

(a) The value of $k_1$ should be selected such that it leads to large ($m$) set sizes. Given $m$, there are only two possible values of $k_1$ that lead to large set sizes, i.e. $k_1 = 0$ or $k_1 = m$.
(i) If the experimenter can only identify at most $m^2$ units, then it is better to select $k_1 = m$. With $k_1 = m$, the possible integer values of $k_2$ are $k_2 = 0, 1, \dots, [m/2]$. The RE of $\bar{Y}_{HRSS}$ decreases as the value of $k_2$ increases, and vice versa. The maximum RE is obtained when $k_2 = 0$, by identifying $m^2$ units. Note that, when $k_1 = m$, the minimum and maximum numbers of units required to select a sample of size $m$ are $m(m - [m/2])$ and $m^2$, respectively. Based on the availability of units, the experimenter can select the value of $k_2$.
(ii) Similarly, if the experimenter can identify more than $m^2$ but at most $m^3$ units, then it is better to select $k_1 = 0$. With $k_1 = 0$, the possible integer values of $k_3$ are $k_3 = 0, 1, \dots, [m/2]$. The RE of $\bar{Y}_{HRSS}$ decreases as the value of $k_3$ increases, and vice versa. The maximum RE is attained when $k_3 = 0$, by identifying $m^3$ units. Note that, when $k_1 = 0$, the minimum and maximum numbers of units required to select a sample of size $m$ are $m^2(m - [m/2])$ and $m^3$, respectively. The estimators with $k_1 = 0$ (using any value of $k_3$) are always better than those obtained with $k_1 = m$. Based on the availability of units, the experimenter can select the value of $k_3$.
(b) If it is not possible to set $k_1 = 0$ or $k_1 = m$, then it is still possible to select hybrid ranked set samples. Note that, when $k_1 \ne 0, m$, more precise estimators can be obtained when $m - k_1 > k_1$.
4.2. Imperfect hybrid ranked set sampling

In this section, we propose an estimator of the population mean based on the IHRSS scheme.

Table 2. REs of mean estimators based on HRSS versus SRS under symmetric and asymmetric distributions.

m  Scheme  k1 k2 k3  Units  Uniform(0,1)  Normal(0,1)  Logistic(0,1)  Laplace(0,1)  Beta(3,3)  Exponential(1)  Gamma(2,1)  Beta(9,2)  Weibull(2,1)  Weibull(4,1)

3  PRSS    3  1  0     6   1.6667  1.5812  1.5303  1.4595  1.6232  1.4595  1.5141  1.5706  1.5842  1.5948
3  RSS     3  0  0     9   2.0000  1.9137  1.8381  1.7297  1.9682  1.6364  1.7532  1.8595  1.8975  1.9324
3  PDRSS   0  0  1    18   2.8875  2.5425  2.3437  2.0927  2.7159  1.9877  2.2109  2.4501  2.5285  2.5976
3  DRSS    0  0  0    27   3.0256  2.6333  2.4133  2.1396  2.8284  2.0236  2.2664  2.5294  2.6176  2.6949

4  PRSS    4  2  0     8   1.6667  1.6767  1.6449  1.5912  1.6891  1.5652  1.6152  1.6577  1.6694  1.6831
4  HRSS    4  1  0    12   2.2727  2.0902  1.9761  1.8229  2.1844  1.8113  1.9320  2.0598  2.0933  2.1209
4  RSS     4  0  0    16   2.5000  2.3469  2.2164  2.0383  2.4433  1.9200  2.0958  2.2667  2.3251  2.3799
4  PDRSS   0  0  2    32   3.4094  3.1025  2.8364  2.5111  3.3083  2.3663  2.6579  2.9761  3.0746  3.1742
4  HRSS    0  0  1    48   4.2443  3.5035  3.1034  2.6484  3.8729  2.5160  2.8944  3.3460  3.4884  3.6206
4  DRSS    0  0  0    64   4.2808  3.5264  3.1199  2.6592  3.9025  2.5232  2.9066  3.3651  3.5105  3.6455
4  HRSS    2  0  0    12   1.6854  1.6105  1.5597  1.4870  1.6504  1.4187  1.5008  1.5737  1.6004  1.6234

5  PRSS    5  2  0    15   2.3333  2.2190  2.1129  1.9627  2.2937  1.9468  2.0653  2.1869  2.2167  2.2455
5  HRSS    5  1  0    20   2.8378  2.5597  2.3778  2.1431  2.7094  2.1157  2.3024  2.5058  2.5611  2.6091
5  RSS     5  0  0    25   3.0000  2.7702  2.5783  2.3274  2.9145  2.1898  2.4244  2.6645  2.7436  2.8197
5  PDRSS   0  0  2    75   5.2842  4.2904  3.7227  3.1152  4.8131  2.9663  3.4622  4.0872  4.2736  4.4597
5  HRSS    0  0  1   100   5.6615  4.4501  3.8214  3.1636  5.0535  3.0146  3.5457  4.2301  4.4378  4.6415
5  DRSS    0  0  0   125   5.6705  4.4556  3.8252  3.1662  5.0609  3.0160  3.5484  4.2346  4.4431  4.6476
5  HRSS    3  0  0    17   1.9685  1.8602  1.7827  1.6740  1.9202  1.5859  1.7013  1.8079  1.8462  1.8799
5  HRSS    3  1  0    14   1.7606  1.6569  1.5960  1.5115  1.7077  1.4815  1.5580  1.6327  1.6551  1.6731
5  HRSS    2  0  1    22   2.1077  1.9659  1.8712  1.7415  2.0419  1.6615  1.7893  1.9129  1.9542  1.9907
5  HRSS    2  0  0    31   2.1507  1.9979  1.8974  1.7608  2.0792  1.6764  1.8108  1.9414  1.9856  2.0243

6  PRSS    6  3  0    18   2.3333  2.3306  2.2421  2.1094  2.3730  2.0678  2.1825  2.2910  2.3176  2.3498
6  HRSS    6  2  0    24   2.9697  2.7175  2.5309  2.2849  2.8619  2.2718  2.4607  2.6680  2.7191  2.7667
6  HRSS    6  1  0    30   3.3793  3.0058  2.7531  2.4391  3.2137  2.3947  2.6461  2.9272  3.0049  3.0751
6  RSS     6  0  0    36   3.5000  3.1857  2.9276  2.6028  3.3829  2.4490  2.7423  3.0550  3.1551  3.2536
6  PDRSS   0  0  3   108   5.8694  4.9192  4.2414  3.5244  5.5076  3.3592  3.9388  4.6140  4.8905  5.1201
6  HRSS    0  0  2   144   7.0389  5.3575  4.4951  3.6282  6.2037  3.4885  4.1635  5.0519  5.3502  5.6276
6  HRSS    0  0  1   180   7.1794  5.4142  4.5287  3.6447  6.2922  3.5033  4.1907  5.1324  5.4081  5.6930
6  DRSS    0  0  0   216   7.1815  5.4155  4.5295  3.6652  6.2940  3.5036  4.1913  5.1337  5.4093  5.6945
6  HRSS    2  0  0    68   2.6458  2.4022  2.2437  2.0396  2.5326  1.9447  2.1325  2.3273  2.3890  2.4450
6  HRSS    2  0  1    52   2.6365  2.3951  2.2380  2.0354  2.5243  1.9419  2.1280  2.3212  2.3821  2.4375
6  HRSS    2  0  2    36   2.3937  2.2619  2.1411  1.9799  2.3500  1.8806  2.0391  2.1950  2.2446  2.2928
6  HRSS    3  0  0    36   2.4082  2.2166  2.0868  1.9129  2.3212  1.8095  1.9770  2.1433  2.2001  2.2508
6  HRSS    3  1  0    33   2.1494  1.9759  1.8730  1.7353  2.0626  1.6958  1.8154  1.9379  1.9738  2.0038
6  HRSS    3  0  1    27   2.3632  2.1838  2.0603  1.8940  2.2824  1.7950  1.9556  2.1143  2.1680  2.2162
6  HRSS    3  1  1    24   2.1134  1.9498  1.8516  1.7196  2.0319  1.6831  1.7973  1.9141  1.9479  1.9763
6  HRSS    4  0  0    24   2.2727  2.1241  2.0153  1.8663  2.2085  1.7633  1.9130  2.0569  2.1070  2.1521
6  HRSS    4  1  0    20   2.1429  1.9775  1.8769  1.7407  2.0613  1.7008  1.8192  1.9391  1.9749  2.0045
6  HRSS    4  2  0    16   1.7442  1.7114  1.6647  1.5930  1.7403  1.5484  1.6197  1.6829  1.7029  1.7222

Stokes [10] suggested that it is possible to rank the values of the study variable Y with respect
to the ranks of the auxiliary variable, say X. If it is assumed that the joint PDF of Y and X follows
a bivariate normal or bivariate Pareto distribution, then we can write
\[
Y_{r[r:k_1]j} = \mu_Y + \rho_{YX}\frac{\sigma_Y}{\sigma_X}(X_{r(r:k_1)j} - \mu_X) + \varepsilon_{rj}, \quad r = 1,\ldots,k_1,\ j = 1,\ldots,t, \tag{26}
\]
\[
Y_{r[r:m-k_1]j}^{(r)[r:m-k_1]} = \mu_Y + \rho_{YX}\frac{\sigma_Y}{\sigma_X}\left(X_{r(r:m-k_1)j}^{(r)(r:m-k_1)} - \mu_X\right) + \varepsilon_{rj}^{*}, \quad r = 1,\ldots,m-k_1,\ j = 1,\ldots,t, \tag{27}
\]
where \rho_{YX} is the correlation coefficient between Y and X, \mu_X and \sigma_X^2 are the mean and variance
of X, respectively, and \varepsilon_{rj} and \varepsilon_{rj}^{*} are independent random error terms with zero means and constant variances, i.e. E(\varepsilon_{rj}) = E(\varepsilon_{rj}^{*}) = 0, \mathrm{Var}(\varepsilon_{rj}) = \mathrm{Var}(\varepsilon_{rj}^{*}) = \sigma_Y^2(1 - \rho_{YX}^2). Note that X_{r(r:k_1)j},
X_{r(r:m-k_1)j}^{(r)(r:m-k_1)}, \varepsilon_{rj} and \varepsilon_{rj}^{*} are all mutually independent random variables. Here, X_{r(r:k_1)j} is the rth
order statistic and the analogous Y_{r[r:k_1]j} is the rth judgement order statistic in the jth cycle of
a ranked set sample of size k_1. Similarly, X_{r(r:m-k_1)j}^{(r)(r:m-k_1)} is the rth order statistic corresponding to
the rth judgement order statistic Y_{r[r:m-k_1]j}^{(r)[r:m-k_1]} in the jth cycle of a double ranked set sample of size
m − k_1.
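Under bivariate normality, the single-ranking part of this model can be simulated directly: draw a set of k1 values of X, take its rth order statistic, and generate the corresponding judgement value of Y through the regression above. A minimal illustrative sketch (the helper name is ours, not from the paper):

```python
import random

# Sketch of the Stokes-type model for the single-ranking part:
# Y = mu_Y + rho*(sigma_Y/sigma_X)*(X_(r:k1) - mu_X) + eps,
# with Var(eps) = sigma_Y^2 * (1 - rho^2). Bivariate normality is assumed.
def judgement_order_stat(r, k1, rho, mu_x=0.0, mu_y=0.0, sig_x=1.0, sig_y=1.0):
    xs = sorted(random.gauss(mu_x, sig_x) for _ in range(k1))
    x_r = xs[r - 1]  # rth order statistic of the auxiliary variable X
    eps = random.gauss(0.0, sig_y * (1.0 - rho ** 2) ** 0.5)
    y = mu_y + rho * (sig_y / sig_x) * (x_r - mu_x) + eps
    return x_r, y
```

Averaging such draws over the ranks r = 1, ..., k1 recovers mu_Y, which is the mechanism behind the unbiasedness argument in Theorem 2.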
The estimator of the population mean based on the IHRSS scheme, say Ȳ_IHRSS, is defined as
\[
\bar{Y}_{\mathrm{IHRSS}} = \frac{1}{n}\sum_{j=1}^{t}\left\{\sum_{r=1}^{k_1-k_2} Y_{r[r:k_1]j} + \sum_{r=1}^{k_2} Y_{r[k_1-r+1:k_1]j} + \sum_{r=1}^{m-k_1-k_3} Y_{r[r:m-k_1]j}^{(r)[r:m-k_1]} + \sum_{r=1}^{k_3} Y_{(m-k_1-r+1)[m-k_1-r+1:m-k_1]j}^{(m-k_1-r+1)[m-k_1-r+1:m-k_1]}\right\}. \tag{28}
\]
The variance of Ȳ_IHRSS is given by
\[
\mathrm{Var}(\bar{Y}_{\mathrm{IHRSS}}) = \frac{1}{nm}\left\{\sum_{r=1}^{k_1}\sigma_{Y[r,r:k_1]}^{2} + \sum_{r=1}^{m-k_1}\sigma_{Y[r,r:m-k_1]}^{2[r,r:m-k_1]} + 2\sum_{r=1}^{k_2}\sigma_{Y[r,k_1-r+1:k_1]} + 2\sum_{r=1}^{k_3}\sigma_{Y[r,m-k_1-r+1:m-k_1]}^{[r,m-k_1-r+1:m-k_1]}\right\}, \tag{29}
\]
where \mathrm{Var}(Y_{r[r:k_1]j}) = \sigma_{Y[r,r:k_1]}^{2}, \mathrm{Var}(Y_{r[r:m-k_1]j}^{(r)[r:m-k_1]}) = \sigma_{Y[r,r:m-k_1]}^{2[r,r:m-k_1]}, \mathrm{Cov}(Y_{r[r:k_1]j}, Y_{r[k_1-r+1:k_1]j}) =
\sigma_{Y[r,k_1-r+1:k_1]} and
\[
\mathrm{Cov}\left(Y_{r[r:m-k_1]j}^{(r)(r:m-k_1)},\ Y_{(m-k_1-r+1)[m-k_1-r+1:m-k_1]j}^{(m-k_1-r+1)(m-k_1-r+1:m-k_1)}\right) = \sigma_{Y[r,m-k_1-r+1:m-k_1]}^{[r,m-k_1-r+1:m-k_1]}.
\]

Theorem 2  Under Equations (26) and (27):
(i) Ȳ_IHRSS is an unbiased estimator of the population mean μ_Y.
(ii) Ȳ_IHRSS is more precise than Ȳ_SRS, i.e. Var(Ȳ_IHRSS) ≤ Var(Ȳ_SRS).

Proof
(i) From Equation (28), we have
\[
E(\bar{Y}_{\mathrm{IHRSS}}) = \frac{1}{n}\sum_{j=1}^{t}\left\{\sum_{r=1}^{k_1-k_2}\mu_{Y[r:k_1]} + \sum_{r=1}^{k_2}\mu_{Y[k_1-r+1:k_1]} + \sum_{r=1}^{m-k_1-k_3}\mu_{Y[r:m-k_1]}^{[r:m-k_1]} + \sum_{r=1}^{k_3}\mu_{Y[m-k_1-r+1:m-k_1]}^{[m-k_1-r+1:m-k_1]}\right\}
= \frac{1}{m}\left\{\sum_{r=1}^{k_1}\mu_{Y[r:k_1]} + \sum_{r=1}^{m-k_1}\mu_{Y[r:m-k_1]}^{[r:m-k_1]}\right\}. \tag{30}
\]
From Equations (26) and (27), we can write \mu_{Y[r:k_1]} = \mu_Y + \rho_{YX}\sigma_Y\tau_{(r:k_1)} and \mu_{Y[r:m-k_1]}^{[r:m-k_1]} =
\mu_Y + \rho_{YX}\sigma_Y\tau_{(r:m-k_1)}^{(r:m-k_1)}, where \tau_{(r:k_1)} = E\{(X_{r(r:k_1)j} - \mu_X)/\sigma_X\} and \tau_{(r:m-k_1)}^{(r:m-k_1)} = E\{(X_{r(r:m-k_1)j}^{(r)(r:m-k_1)} -
\mu_X)/\sigma_X\}. Using the fact that \sum_{r=1}^{k_1}\tau_{(r:k_1)} = \sum_{r=1}^{m-k_1}\tau_{(r:m-k_1)}^{(r:m-k_1)} = 0, Equation (30) can be written
as
\[
E(\bar{Y}_{\mathrm{IHRSS}}) = \frac{1}{m}\{k_1\mu_Y + (m - k_1)\mu_Y\} = \mu_Y.
\]
Hence, (i) follows.
(ii) From Equations (26) and (27), we can write \sigma_{Y[r,r:k_1]}^{2} = \sigma_Y^2(\rho_{YX}^2\tau_{(r,r:k_1)} + 1 -
\rho_{YX}^2), \sigma_{Y[r,s:k_1]} = \sigma_Y^2\rho_{YX}^2\tau_{(r,s:k_1)}, \sigma_{Y[r,r:m-k_1]}^{2[r,r:m-k_1]} = \sigma_Y^2(\rho_{YX}^2\tau_{(r,r:m-k_1)}^{(r,r:m-k_1)} + 1 - \rho_{YX}^2) and \sigma_{Y[r,s:m-k_1]}^{[r,s:m-k_1]} =
\sigma_Y^2\rho_{YX}^2\tau_{(r,s:m-k_1)}^{(r,s:m-k_1)} for r \ne s, where
\[
\tau_{(r,s:k_1)} = \mathrm{Cov}\left(\frac{X_{r(r:k_1)j} - \mu_X}{\sigma_X}, \frac{X_{s(s:k_1)j} - \mu_X}{\sigma_X}\right)
\]
and
\[
\tau_{(r,s:m-k_1)}^{(r,s:m-k_1)} = \mathrm{Cov}\left(\frac{X_{r(r:m-k_1)j}^{(r)(r:m-k_1)} - \mu_X}{\sigma_X}, \frac{X_{s(s:m-k_1)j}^{(s)(s:m-k_1)} - \mu_X}{\sigma_X}\right).
\]
Using these results, Equation (29) can be simplified as
\[
\mathrm{Var}(\bar{Y}_{\mathrm{IHRSS}}) = \frac{1}{nm}\left[ m\sigma_Y^2(1-\rho_{YX}^2) + \rho_{YX}^2\frac{\sigma_Y^2}{\sigma_X^2}\left\{\sum_{r=1}^{k_1}\sigma_{X(r,r:k_1)}^{2} + \sum_{r=1}^{m-k_1}\sigma_{X(r,r:m-k_1)}^{2(r,r:m-k_1)} + 2\sum_{r=1}^{k_2}\sigma_{X(r,k_1-r+1:k_1)} + 2\sum_{r=1}^{k_3}\sigma_{X(r,m-k_1-r+1:m-k_1)}^{(r,m-k_1-r+1:m-k_1)}\right\}\right]. \tag{31}
\]
Using Equations (21) and (23), and after some simplifications, Equation (31) can be written as
\[
\mathrm{Var}(\bar{Y}_{\mathrm{IHRSS}}) = \frac{\sigma_Y^2}{n} - \frac{2\rho_{YX}^2\sigma_Y^2}{nm\sigma_X^2}(A_X + B_X + C_X), \tag{32}
\]
where A_X = \sum_{1\le r<s\le m-k_1}\sigma_{X(r,s:m-k_1)}, B_X = \sum_{1\le r<s\le k_1}\sigma_{X(r,s:k_1)} - \sum_{r=1}^{k_2}\sigma_{X(r,k_1-r+1:k_1)} and
C_X = \sum_{1\le r<s\le m-k_1}\sigma_{X(r,s:m-k_1)}^{(r,s:m-k_1)} - \sum_{r=1}^{k_3}\sigma_{X(r,m-k_1-r+1:m-k_1)}^{(r,m-k_1-r+1:m-k_1)}. Note that A_X, B_X and C_X are all positive
quantities. Hence, (ii) follows.

Table 3. REs of mean estimators based on IHRSS versus SRS under bivariate normal distribution.

m  Scheme  k1 k2 k3  Units  ρ_YX=0.25  ρ_YX=0.50  ρ_YX=0.75  ρ_YX=0.90  ρ_YX=0.99

3  PRSS    3  1  0     6   1.0235  1.1012  1.2606  1.4239  1.5631
3  RSS     3  0  0     9   1.0308  1.1355  1.3672  1.6306  1.8796
3  PDRSS   0  0  1    18   1.0394  1.1788  1.5180  1.9662  2.4668
3  DRSS    0  0  0    27   1.0403  1.1835  1.5358  2.0097  2.5504

4  PRSS    4  2  0     8   1.0259  1.1122  1.2937  1.4857  1.6544
4  HRSS    4  1  0    12   1.0337  1.1499  1.4152  1.7315  2.0458
4  RSS     4  0  0    16   1.0372  1.1675  1.4767  1.8687  2.2857
4  PDRSS   0  0  2    32   1.0442  1.2040  1.6160  2.2169  2.9779
4  HRSS    0  0  1    48   1.0467  1.2175  1.6721  2.3742  3.3373
4  DRSS    0  0  0    64   1.0469  1.2182  1.6750  2.3827  3.3576
4  HRSS    2  0  0    12   1.0243  1.1047  1.2710  1.4431  1.5912

5  PRSS    5  2  0    15   1.0356  1.1592  1.4472  1.8017  2.1665
5  HRSS    5  1  0    20   1.0396  1.1797  1.5215  1.9745  2.4826
5  RSS     5  0  0    25   1.0416  1.1901  1.5611  2.0730  2.6759
5  PDRSS   0  0  2    75   1.0503  1.2372  1.7587  2.6400  4.0267
5  HRSS    0  0  1   100   1.0509  1.2404  1.7734  2.6880  4.1642
5  DRSS    0  0  0   125   1.0509  1.2405  1.7738  2.6897  4.1689
5  HRSS    3  0  0    17   1.0298  1.1307  1.3515  1.5989  1.8288
5  HRSS    3  1  0    14   1.0254  1.1100  1.2870  1.4730  1.6355
5  HRSS    2  0  1    22   1.0317  1.1400  1.3819  1.6611  1.9289
5  HRSS    2  0  0    31   1.0322  1.1427  1.3907  1.6795  1.9590

6  PRSS    6  3  0    18   1.0370  1.1665  1.4731  1.8603  2.2705
6  HRSS    6  2  0    24   1.0411  1.1877  1.5516  2.0489  2.6277
6  HRSS    6  1  0    30   1.0435  1.2002  1.6009  2.1764  2.8905
6  RSS     6  0  0    36   1.0448  1.2070  1.6285  2.2509  3.0529
6  PDRSS   0  0  3   108   1.0524  1.2487  1.8121  2.8196  4.5633
6  HRSS    0  0  2   144   1.0536  1.2552  1.8433  2.9309  4.9300
6  HRSS    0  0  1   180   1.0537  1.2560  1.8471  2.9446  4.9770
6  DRSS    0  0  0   216   1.0537  1.2560  1.8472  2.9449  4.9781
6  HRSS    2  0  0    68   1.0379  1.1709  1.4889  1.8969  2.3370
6  HRSS    2  0  1    52   1.0378  1.1704  1.4873  1.8933  2.3304
6  HRSS    2  0  2    36   1.0361  1.1621  1.4573  1.8244  2.2065
6  HRSS    3  0  0    36   1.0355  1.1590  1.4466  1.8004  2.1642
6  HRSS    3  1  0    33   1.0319  1.1409  1.3847  1.6668  1.9383
6  HRSS    3  0  1    27   1.0351  1.1568  1.4387  1.7828  2.1335
6  HRSS    3  1  1    24   1.0314  1.1387  1.3774  1.6517  1.9136
6  HRSS    4  0  0    24   1.0342  1.1525  1.4239  1.7503  2.0776
6  HRSS    4  1  0    20   1.0319  1.1410  1.3852  1.6678  1.9398
6  HRSS    4  2  0    16   1.0267  1.1160  1.3052  1.5076  1.6875
The RE of Ȳ_IHRSS with respect to Ȳ_SRS is
\[
\mathrm{RE}(\bar{Y}_{\mathrm{IHRSS}}, \bar{Y}_{\mathrm{SRS}}) = \frac{\mathrm{Var}(\bar{Y}_{\mathrm{SRS}})}{\mathrm{Var}(\bar{Y}_{\mathrm{IHRSS}})} = \frac{m\sigma_X^2}{m\sigma_X^2 - 2\rho_{YX}^2(A_X + B_X + C_X)} \ge 1, \tag{33}
\]
which is also independent of the number of cycles t. The efficiency of Ȳ_IHRSS is an increasing
function of m. For different values of m and ρ_YX, the REs of the mean estimators under the HRSS
schemes are computed and reported in Table 3. As expected, Table 3 shows that all REs are
greater than one for all values of ρ_YX considered here. Given m and k_i (i = 1, 2, 3), the REs tend
to increase as the value of ρ_YX increases, and vice versa. Similarly, given ρ_YX and k_i (i = 1, 2, 3),
the REs also increase with m. As explained earlier, the mean estimator under the HRSS scheme
attains the maximum RE when, for a given m, we set k_1 = k_3 = 0.
In practice, the assumptions imposed by Stokes [10] in developing the model for imperfect
ranking do not hold in general. In real-life situations, the experimenter has no idea whether or not
there exists a linear relationship between the study variable (Y) and the auxiliary variable (X).
Thus, in order to further explore the effect of imperfect ranking on the performance of
the HRSS-based mean estimator, the simulation method considered here is based on the method
suggested by Dell and Clutter.[9] In the simulation, we assume both symmetric and asymmetric
distributions for Y, and consider different values of m. Given m and k_i (i = 1, 2, 3), generate
Y_r from the assumed underlying distribution, where r = 1, 2, . . . , {k_1(k_1 − k_2) + (m − k_1)²(m −
k_1 − k_3)}. Also generate the random errors V_r from the normal distribution with mean zero
and variance σ_V², i.e. V_r ∼ N(0, σ_V²), where V_r is independent of Y_r. Then compute X_r = Y_r +
V_r. Now, apply the HRSS scheme using the values of X, and also observe the corresponding
values of Y. Then, a pair (X_{r,HRSS}, Y_{r,IHRSS}), r = 1, 2, . . . , m, is selected, where X_{r,HRSS} and Y_{r,IHRSS}
represent perfect and imperfect hybrid ranked set samples, each of size m. Since the RE of the HRSS-based mean estimator depends on m rather than t, different values of m with t = 1 are
considered. In order to examine the effect of judgement error on the HRSS-based mean estimator,
we choose σ_V² = 0.05, 1.00, 3.00. Using different values of m and σ_V², the REs of the mean
estimators based on HRSS relative to SRS are calculated using 100,000 replications, and are
reported in Tables 4 and 6 for symmetric and asymmetric distributions, respectively; the corresponding absolute biases are reported in Tables 5 and 7. Here, the
RE of Ȳ_IHRSS with respect to Ȳ_SRS is the ratio of the estimated mean-squared error (EMSE) of Ȳ_SRS to the EMSE of Ȳ_IHRSS, given
by
\[
\mathrm{RE}(\bar{Y}_{\mathrm{IHRSS}}, \bar{Y}_{\mathrm{SRS}}) = \frac{\mathrm{EMSE}(\bar{Y}_{\mathrm{SRS}})}{\mathrm{EMSE}(\bar{Y}_{\mathrm{IHRSS}})},
\]
where \mathrm{EMSE}(\bar{Y}_W) = (1/N)\sum_{i=1}^{N}(\bar{Y}_{W,i} - \mu_Y)^2, W = SRS or IHRSS, and N = 100,000. Similarly, the absolute bias (AB) of Ȳ_W is estimated as \mathrm{AB}(\bar{Y}_W) = |(1/N)\sum_{i=1}^{N}\bar{Y}_{W,i} - \mu_Y|. Here, it
is clear that if RE(Ȳ_IHRSS, Ȳ_SRS) ≥ 1, then Ȳ_IHRSS is more precise than Ȳ_SRS.
From Tables 4 and 6, we first observe that, despite the presence of ranking errors, the mean
estimators under the IHRSS scheme are at least as efficient as the mean estimators based on SRS
for all cases considered here. It is also observed that, as the error variance (σ_V²) increases, the RE
of the HRSS-based mean estimator decreases, and vice versa. Moreover, Tables 5 and 7 demonstrate that the estimators under the IHRSS scheme are approximately unbiased for
both symmetric and asymmetric populations.
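The Dell and Clutter style study above can be sketched for the simplest member of the HRSS family, ordinary RSS (k_1 = m, k_2 = k_3 = 0), with a standard normal Y; the function names and the reduced replication count are illustrative only:

```python
import random

def imperfect_rss_sample(m, sigma_v):
    """One imperfect ranked set sample of size m (HRSS with k1 = m, k2 = k3 = 0):
    each set of m units is ranked by X = Y + V, and the Y paired with the
    rth-ranked X is measured."""
    sample = []
    for r in range(m):
        ys = [random.gauss(0.0, 1.0) for _ in range(m)]
        ranked = sorted(ys, key=lambda y: y + random.gauss(0.0, sigma_v))
        sample.append(ranked[r])
    return sample

def estimated_re(m=4, sigma_v2=0.05, reps=20000, mu_y=0.0):
    """EMSE(SRS)/EMSE(IHRSS), estimated by Monte Carlo as in the text."""
    emse_srs = emse_ihrss = 0.0
    sigma_v = sigma_v2 ** 0.5
    for _ in range(reps):
        srs = [random.gauss(0.0, 1.0) for _ in range(m)]
        ihrss = imperfect_rss_sample(m, sigma_v)
        emse_srs += (sum(srs) / m - mu_y) ** 2
        emse_ihrss += (sum(ihrss) / m - mu_y) ** 2
    return emse_srs / emse_ihrss

random.seed(1)
print(estimated_re())  # should fall near the N(0,1), sigma_V^2 = 0.05 entry of Table 4
```

With small sigma_V^2 the ranking is nearly perfect and the estimate approaches the perfect-ranking RE of Table 2; as sigma_V^2 grows, the estimate shrinks towards one, in line with the discussion above.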

5. An application to real data

In this section, a real data set is considered to investigate the performance of the mean estimators
based on the SRS, HRSS and IHRSS schemes. The data set consists of the heights of conifer trees
measured in feet (ft), say the study variable Y, and the diameters of the conifer trees measured
at breast height in centimetres (cm), say the auxiliary variable X. Our objective is to estimate
the mean height of a random sample of 399 conifer trees. We treat this sample as our parent
population. For a detailed description of the data set, see Platt et al.[18] In Table 8, we report
the summary statistics of this data set.
Different values of m are chosen for a fair comparison of the mean estimators under the considered
sampling schemes, i.e. m = 3, 4, 5, 6. In order to select a ranked set sample of size m = 5 under
the PRSS, RSS, PDRSS and DRSS schemes, the experimenter needs to identify 15, 25, 75 and 125
trees, respectively. At times, it can be difficult to identify these specific numbers of trees due to limited time
or budget constraints. Thus, it is more economical to use the HRSS scheme, where the experimenter
can select a sample of size n = 5 (m = 5, t = 1) by identifying 14, 15, 17, 20, 22, 25, 31, 75,
100 or 125 trees. Based on these numerous options, the experimenter can select a sample of size
Table 4. REs of mean estimators based on IHRSS versus SRS under symmetric distributions using different values of σ_V².
Each distribution column group gives the REs for σ_V² = 0.05, 1.00, 3.00.

m  Scheme  k1 k2 k3  Units  Uniform(0,1)            Normal(0,1)             Logistic(0,1)           Laplace(0,1)            Beta(3,3)

3  PRSS    3  1  0     6   1.3536 1.0196 1.0149   1.5479 1.2309 1.0983   1.5237 1.3516 1.2238   1.4358 1.2553 1.1149   1.1963 1.0050 1.0037
3  RSS     3  0  0     9   1.4613 1.0333 1.0029   1.8294 1.3076 1.1470   1.8084 1.5249 1.3044   1.6935 1.3802 1.1908   1.2671 1.0200 1.0077
3  PDRSS   0  0  1    18   1.7334 1.0516 1.0254   2.3982 1.4212 1.1886   2.2665 1.7612 1.4273   2.0057 1.5094 1.2688   1.3573 1.0183 1.0049
3  DRSS    0  0  0    27   1.7406 1.0516 1.0144   2.4519 1.4425 1.1837   2.3307 1.7947 1.4226   2.0708 1.5131 1.2788   1.3835 1.0314 1.0068

4  PRSS    4  2  0     8   1.3476 1.0339 1.0153   1.6212 1.2542 1.1163   1.6435 1.4286 1.2714   1.5796 1.3167 1.1708   1.2039 1.0181 1.0059
4  HRSS    4  1  0    12   1.5514 1.0376 1.0237   1.9720 1.3475 1.1568   1.9524 1.5978 1.3565   1.7694 1.4082 1.2328   1.2970 1.0065 1.0023
4  RSS     4  0  0    16   1.6317 1.0335 1.0162   2.1794 1.4127 1.1560   2.1753 1.7266 1.3981   1.9754 1.4884 1.2658   1.3314 1.0274 1.0141
4  PDRSS   0  0  2    32   1.8255 1.0435 1.0226   2.8205 1.5210 1.2061   2.7158 1.9639 1.5098   2.4073 1.6268 1.3356   1.4072 1.0211 1.0019
4  HRSS    0  0  1    48   1.9856 1.0617 1.0139   3.1174 1.5616 1.2150   2.9905 2.0444 1.5336   2.5248 1.6680 1.3380   1.4601 1.0162 1.0056
4  DRSS    0  0  0    64   1.9610 1.0591 1.0151   3.1704 1.5564 1.2110   3.0012 2.0686 1.5345   2.5384 1.6987 1.3190   1.4606 1.0283 1.0033
4  HRSS    2  0  0    12   1.3490 1.0367 1.0126   1.5779 1.2329 1.1088   1.5555 1.3572 1.2196   1.4543 1.2702 1.1521   1.1905 1.0180 1.0075

5  PRSS    5  2  0    15   1.5665 1.0441 1.0122   2.0843 1.3866 1.1727   2.0879 1.6571 1.3753   1.8860 1.4825 1.2321   1.3148 1.0150 1.0000
5  HRSS    5  1  0    20   1.7202 1.0529 1.0139   2.4028 1.4261 1.1818   2.3204 1.7809 1.4345   2.0784 1.5175 1.2936   1.3522 1.0328 1.0074
5  RSS     5  0  0    25   1.7472 1.0602 1.0243   2.5468 1.4503 1.1896   2.5134 1.8626 1.4429   2.2193 1.5792 1.2965   1.3802 1.0292 1.0000
5  PDRSS   0  0  2    75   2.0528 1.0710 1.0260   3.7098 1.6112 1.2272   3.5589 2.2298 1.5980   2.9711 1.7909 1.3643   1.5097 1.0218 1.0061
5  HRSS    0  0  1   100   2.1342 1.0523 1.0233   3.8307 1.6268 1.2228   3.6934 2.2999 1.6241   2.9466 1.7942 1.3707   1.5031 1.0267 1.0086
5  DRSS    0  0  0   125   2.1326 1.0716 1.0283   3.7840 1.6555 1.2509   3.6877 2.2388 1.5955   2.9868 1.7837 1.3697   1.4974 1.0210 1.0017
5  HRSS    3  0  0    17   1.4558 1.0367 1.0130   1.7933 1.3040 1.1268   1.7651 1.5008 1.2900   1.6251 1.3460 1.2026   1.2583 1.0247 1.0090
5  HRSS    3  1  0    14   1.3846 1.0348 1.0193   1.6129 1.2445 1.1160   1.5873 1.3889 1.2398   1.4647 1.2813 1.1610   1.2050 1.0206 1.0023
5  HRSS    2  0  1    22   1.5225 1.0461 1.0217   1.8833 1.3206 1.1422   1.8605 1.5352 1.3204   1.7117 1.3786 1.2082   1.2748 1.0181 1.0094
5  HRSS    2  0  0    31   1.5166 1.0461 1.0026   1.9075 1.3340 1.1368   1.8687 1.5653 1.3129   1.7203 1.3923 1.2046   1.2680 1.0204 1.0000

6  PRSS    6  3  0    18   1.5632 1.0507 1.0129   2.2081 1.3861 1.1649   2.2098 1.7275 1.3914   2.0671 1.5240 1.2507   1.3143 1.0298 1.0072
6  HRSS    6  2  0    24   1.7541 1.0520 1.0236   2.5267 1.4624 1.1991   2.5007 1.8514 1.4268   2.1805 1.5907 1.2806   1.3660 1.0182 1.0020
6  HRSS    6  1  0    30   1.8235 1.0542 1.0202   2.7257 1.5152 1.2038   2.6824 1.9276 1.4771   2.3662 1.6142 1.2909   1.4039 1.0233 1.0260
6  RSS     6  0  0    36   1.8634 1.0515 1.0262   2.8997 1.5154 1.2004   2.8558 1.9976 1.5369   2.4625 1.6717 1.3221   1.4391 1.0224 1.0080
6  PDRSS   0  0  3   108   2.1310 1.0535 1.0201   4.1220 1.6453 1.2381   4.0524 2.4048 1.6745   3.2924 1.8667 1.3907   1.5183 1.0165 1.0145
6  HRSS    0  0  2   144   2.2499 1.0640 1.0254   4.4038 1.6836 1.2577   4.2285 2.4393 1.6668   3.3665 1.8790 1.3948   1.5354 1.0241 1.0164
6  HRSS    0  0  1   180   2.2259 1.0747 1.0239   4.4728 1.7047 1.2558   4.3066 2.4126 1.6899   3.4057 1.8769 1.4002   1.5433 1.0280 1.0051
6  DRSS    0  0  0   216   2.2530 1.0704 1.0253   4.5176 1.6880 1.2518   4.2403 2.4121 1.6587   3.3938 1.8691 1.3973   1.5397 1.0286 1.0074
6  HRSS    2  0  0    68   1.6621 1.0482 1.0111   2.2452 1.4086 1.1693   2.1774 1.7030 1.3928   1.9825 1.4835 1.2612   1.3461 1.0145 1.0000
6  HRSS    2  0  1    52   1.6694 1.0517 1.0219   2.2317 1.4115 1.1755   2.2024 1.7349 1.3936   1.9815 1.4943 1.2657   1.3382 1.0262 1.0006
6  HRSS    2  0  2    36   1.5981 1.0511 1.0214   2.1205 1.3875 1.1704   2.0982 1.6774 1.3700   1.9343 1.4600 1.2278   1.3228 1.0220 1.0076
6  HRSS    3  0  0    36   1.5860 1.0400 1.0106   2.1016 1.3738 1.1560   2.0671 1.6542 1.3596   1.8726 1.4471 1.2312   1.3172 1.0196 1.0067
6  HRSS    3  1  0    33   1.5200 1.0432 1.0025   1.8807 1.3306 1.1353   1.8787 1.5210 1.3224   1.7168 1.3804 1.2054   1.2706 1.0197 1.0000
6  HRSS    3  0  1    27   1.6024 1.0426 1.0091   2.0595 1.3792 1.1535   2.0048 1.6161 1.3589   1.8367 1.4231 1.2370   1.3225 1.0204 1.0109
6  HRSS    3  1  1    24   1.5115 1.0314 1.0119   1.8473 1.3250 1.1357   1.8284 1.5308 1.3138   1.6592 1.3627 1.1964   1.2672 1.0000 1.0086
6  HRSS    4  0  0    24   1.5633 1.0465 1.0215   2.0468 1.3659 1.1683   1.9732 1.6176 1.3492   1.8106 1.4253 1.2274   1.3034 1.0172 1.0061
6  HRSS    4  1  0    20   1.5144 1.0346 1.0222   1.8845 1.3317 1.1443   1.8554 1.5417 1.3363   1.7079 1.3773 1.2110   1.2913 1.0223 1.0130
6  HRSS    4  2  0    16   1.3725 1.0391 1.0124   1.6515 1.2621 1.1141   1.6384 1.4380 1.2677   1.5860 1.3097 1.1701   1.2118 1.0149 1.0023
Table 5. ABs of mean estimators based on IHRSS under symmetric distributions using different values of σ_V².
Each distribution column group gives the ABs for σ_V² = 0.05, 1.00, 3.00.

m  Scheme  k1 k2 k3  Units  Uniform(0,1)            Normal(0,1)             Logistic(0,1)           Laplace(0,1)            Beta(3,3)

3  PRSS    3  1  0     6   0.0004 0.0004 0.0007   0.0003 0.0003 0.0005   0.0018 0.0028 0.0060   0.0019 0.0045 0.0033   0.0004 0.0001 0.0002
3  RSS     3  0  0     9   0.0002 0.0001 0.0002   0.0008 0.0002 0.0012   0.0001 0.0005 0.0011   0.0035 0.0014 0.0000   0.0006 0.0002 0.0001
3  PDRSS   0  0  1    18   0.0004 0.0006 0.0003   0.0012 0.0005 0.0012   0.0017 0.0046 0.0016   0.0028 0.0045 0.0014   0.0000 0.0004 0.0000
3  DRSS    0  0  0    27   0.0002 0.0001 0.0007   0.0016 0.0020 0.0008   0.0011 0.0011 0.0038   0.0007 0.0034 0.0014   0.0001 0.0003 0.0003

4  PRSS    4  2  0     8   0.0004 0.0001 0.0004   0.0012 0.0002 0.0011   0.0016 0.0009 0.0003   0.0022 0.0012 0.0030   0.0003 0.0000 0.0003
4  HRSS    4  1  0    12   0.0000 0.0003 0.0001   0.0004 0.0001 0.0022   0.0036 0.0018 0.0018   0.0008 0.0024 0.0010   0.0001 0.0001 0.0003
4  RSS     4  0  0    16   0.0002 0.0002 0.0005   0.0014 0.0009 0.0019   0.0010 0.0003 0.0002   0.0009 0.0022 0.0007   0.0002 0.0001 0.0002
4  PDRSS   0  0  2    32   0.0001 0.0005 0.0003   0.0010 0.0007 0.0005   0.0013 0.0003 0.0026   0.0018 0.0024 0.0040   0.0001 0.0002 0.0001
4  HRSS    0  0  1    48   0.0005 0.0002 0.0001   0.0011 0.0002 0.0015   0.0032 0.0013 0.0045   0.0017 0.0016 0.0004   0.0000 0.0000 0.0002
4  DRSS    0  0  0    64   0.0004 0.0005 0.0004   0.0017 0.0003 0.0019   0.0013 0.0011 0.0001   0.0001 0.0011 0.0004   0.0001 0.0004 0.0003
4  HRSS    2  0  0    12   0.0001 0.0001 0.0004   0.0004 0.0041 0.0007   0.0008 0.0064 0.0011   0.0032 0.0003 0.0000   0.0006 0.0001 0.0008

5  PRSS    5  2  0    15   0.0003 0.0007 0.0003   0.0014 0.0014 0.0022   0.0024 0.0030 0.0025   0.0005 0.0007 0.0035   0.0000 0.0004 0.0004
5  HRSS    5  1  0    20   0.0008 0.0004 0.0002   0.0005 0.0003 0.0004   0.0017 0.0016 0.0002   0.0027 0.0023 0.0006   0.0002 0.0004 0.0001
5  RSS     5  0  0    25   0.0003 0.0004 0.0001   0.0006 0.0008 0.0011   0.0004 0.0019 0.0043   0.0013 0.0014 0.0008   0.0003 0.0008 0.0001
5  PDRSS   0  0  2    75   0.0002 0.0002 0.0004   0.0012 0.0011 0.0017   0.0012 0.0013 0.0011   0.0010 0.0005 0.0010   0.0001 0.0001 0.0003
5  HRSS    0  0  1   100   0.0001 0.0002 0.0002   0.0006 0.0006 0.0002   0.0001 0.0014 0.0022   0.0004 0.0041 0.0002   0.0002 0.0006 0.0000
5  DRSS    0  0  0   125   0.0002 0.0003 0.0001   0.0005 0.0001 0.0017   0.0016 0.0014 0.0007   0.0020 0.0024 0.0028   0.0001 0.0004 0.0004
5  HRSS    3  0  0    17   0.0000 0.0004 0.0001   0.0008 0.0004 0.0014   0.0008 0.0047 0.0020   0.0025 0.0003 0.0004   0.0001 0.0000 0.0005
5  HRSS    3  1  0    14   0.0002 0.0003 0.0003   0.0010 0.0002 0.0005   0.0006 0.0000 0.0026   0.0019 0.0002 0.0013   0.0002 0.0000 0.0000
5  HRSS    2  0  1    22   0.0004 0.0001 0.0001   0.0001 0.0003 0.0002   0.0032 0.0042 0.0010   0.0006 0.0018 0.0018   0.0002 0.0004 0.0004
5  HRSS    2  0  0    31   0.0003 0.0000 0.0005   0.0014 0.0003 0.0015   0.0006 0.0009 0.0038   0.0015 0.0009 0.0009   0.0000 0.0003 0.0003

6  PRSS    6  3  0    18   0.0008 0.0007 0.0001   0.0004 0.0004 0.0005   0.0023 0.0021 0.0017   0.0001 0.0028 0.0010   0.0002 0.0001 0.0002
6  HRSS    6  2  0    24   0.0001 0.0003 0.0006   0.0006 0.0021 0.0001   0.0005 0.0006 0.0022   0.0034 0.0011 0.0005   0.0001 0.0000 0.0001
6  HRSS    6  1  0    30   0.0005 0.0001 0.0004   0.0008 0.0009 0.0004   0.0012 0.0002 0.0012   0.0003 0.0002 0.0010   0.0001 0.0003 0.0004
6  RSS     6  0  0    36   0.0004 0.0004 0.0004   0.0003 0.0008 0.0006   0.0014 0.0005 0.0024   0.0010 0.0004 0.0014   0.0000 0.0001 0.0003
6  PDRSS   0  0  3   108   0.0005 0.0002 0.0003   0.0001 0.0005 0.0003   0.0015 0.0025 0.0036   0.0006 0.0006 0.0006   0.0001 0.0002 0.0003
6  HRSS    0  0  2   144   0.0002 0.0001 0.0001   0.0010 0.0004 0.0003   0.0015 0.0022 0.0015   0.0012 0.0009 0.0024   0.0004 0.0000 0.0001
6  HRSS    0  0  1   180   0.0000 0.0002 0.0003   0.0004 0.0014 0.0005   0.0001 0.0011 0.0007   0.0017 0.0000 0.0033   0.0003 0.0002 0.0003
6  DRSS    0  0  0   216   0.0001 0.0002 0.0002   0.0016 0.0008 0.0010   0.0001 0.0011 0.0024   0.0003 0.0007 0.0001   0.0001 0.0002 0.0001
6  HRSS    2  0  0    68   0.0000 0.0001 0.0004   0.0005 0.0002 0.0013   0.0027 0.0002 0.0023   0.0012 0.0018 0.0017   0.0001 0.0001 0.0002
6  HRSS    2  0  1    52   0.0001 0.0000 0.0003   0.0010 0.0015 0.0019   0.0010 0.0008 0.0022   0.0012 0.0011 0.0027   0.0002 0.0002 0.0003
6  HRSS    2  0  2    36   0.0006 0.0000 0.0002   0.0011 0.0011 0.0012   0.0002 0.0017 0.0043   0.0021 0.0021 0.0022   0.0001 0.0002 0.0006
6  HRSS    3  0  0    36   0.0000 0.0001 0.0000   0.0010 0.0012 0.0007   0.0008 0.0013 0.0018   0.0001 0.0045 0.0004   0.0001 0.0002 0.0001
6  HRSS    3  1  0    33   0.0001 0.0004 0.0008   0.0007 0.0002 0.0026   0.0016 0.0011 0.0028   0.0012 0.0010 0.0007   0.0002 0.0003 0.0003
6  HRSS    3  0  1    27   0.0002 0.0001 0.0009   0.0000 0.0002 0.0011   0.0016 0.0000 0.0013   0.0001 0.0007 0.0007   0.0001 0.0001 0.0004
6  HRSS    3  1  1    24   0.0006 0.0002 0.0003   0.0015 0.0007 0.0004   0.0018 0.0022 0.0045   0.0001 0.0039 0.0002   0.0003 0.0005 0.0002
6  HRSS    4  0  0    24   0.0002 0.0001 0.0004   0.0007 0.0014 0.0021   0.0015 0.0013 0.0016   0.0005 0.0034 0.0004   0.0002 0.0000 0.0000
6  HRSS    4  1  0    20   0.0003 0.0001 0.0004   0.0003 0.0008 0.0007   0.0015 0.0012 0.0010   0.0011 0.0010 0.0002   0.0003 0.0004 0.0002
6  HRSS    4  2  0    16   0.0000 0.0000 0.0002   0.0003 0.0012 0.0016   0.0001 0.0031 0.0018   0.0004 0.0023 0.0010   0.0002 0.0001 0.0001
REs of mean estimators based on IHRSS versus SRS under asymmetric distributions using different values of V2 .
Exponential (1)

Gamma (2,1)

Beta (9,2)

Weibull (2,1)

Weibull (4,1)

HRSS

k1

k2

k3

Units

0.05

1.00

3.00

0.05

1.00

3.00

0.05

1.00

3.00

0.05

1.00

3.00

0.05

1.00

3.00

PRSS
RSS
PDRSS
DRSS

3
3
0
0

1
0
0
0

0
0
1
0

6
9
18
27

1.4279
1.5726
1.8682
1.8847

1.1817
1.2418
1.3162
1.3395

1.0916
1.1196
1.1514
1.1499

1.5106
1.7226
2.1394
2.1658

1.2713
1.4054
1.5649
1.5647

1.1637
1.2066
1.2691
1.2789

1.0628
1.0967
1.1439
1.1213

1.0045
1.0042
1.0052
1.0070

1.0028
1.0074
1.0042
1.0000

1.4192
1.6288
1.9828
2.0203

1.0756
1.0877
1.1241
1.1120

1.0230
1.0310
1.0367
1.0376

1.2630
1.3741
1.5394
1.5600

1.0151
1.0294
1.0466
1.0434

1.0041
1.0074
1.0145
1.0057

PRSS
HRSS
RSS
PDRSS
HRSS
DRSS
HRSS

4
4
4
0
0
0
2

2
1
0
0
0
0
0

0
0
0
2
1
0
0

8
12
16
32
48
64
12

1.5235
1.7489
1.8291
2.1947
2.3415
2.3088
1.3762

1.2416
1.2912
1.3323
1.4297
1.4461
1.4379
1.1732

1.1160
1.1385
1.1689
1.1818
1.1721
1.1748
1.0788

1.5959
1.8826
2.0006
2.5419
2.7617
2.7279
1.4626

1.3472
1.4595
1.5188
1.6997
1.7802
1.7717
1.2813

1.1988
1.2558
1.2628
1.3505
1.3512
1.3380
1.1412

1.0974
1.1206
1.1314
1.1563
1.1677
1.1905
1.0699

1.0049
1.0011
1.0096
1.0019
1.0000
1.0191
1.0000

1.0066
1.0000
1.0036
1.0043
1.0111
1.0080
1.0000

1.4825
1.7447
1.8749
2.2161
2.3683
2.3843
1.4396

1.0742
1.1033
1.1211
1.1446
1.1448
1.1438
1.0657

1.0214
1.0386
1.0432
1.0481
1.0492
1.0483
1.0241

1.2860
1.4300
1.4997
1.6205
1.7015
1.6798
1.2781

1.0259
1.0346
1.0315
1.0413
1.0528
1.0297
1.0176

1.0137
1.0063
1.0126
1.0091
1.0198
1.0098
1.0046

PRSS
HRSS
RSS
PDRSS
HRSS
DRSS
HRSS
HRSS
HRSS
HRSS

5
5
5
0
0
0
3
3
2
2

2
1
0
0
0
0
0
1
0
0

0
0
0
2
1
0
0
0
1
0

15
20
25
75
100
125
17
14
22
31

1.8521
1.9838
2.0196
2.6909
2.6938
2.7114
1.5107
1.4340
1.5877
1.6068

1.3392
1.3844
1.3635
1.5300
1.5240
1.5333
1.2169
1.2004
1.2564
1.2360

1.1521
1.1733
1.1865
1.2263
1.2362
1.2242
1.1229
1.0774
1.1360
1.1145

1.9846
2.2356
2.3597
3.2587
3.3278
3.3117
1.6543
1.5167
1.7718
1.7766

1.5210
1.5927
1.6472
1.8988
1.9053
1.9192
1.3554
1.3164
1.4012
1.4018

1.2595
1.3039
1.3154
1.4159
1.4136
1.4244
1.1892
1.1601
1.2115
1.1944

1.1324
1.1494
1.1214
1.1855
1.1925
1.1905
1.1039
1.0877
1.1017
1.1144

1.0112
1.0250
1.0026
1.0043
1.0087
1.0170
1.0079
1.0062
1.0105
1.0130

1.0140
1.0109
1.0000
1.0000
1.0000
1.0089
1.0014
1.0023
1.0000
1.0020

1.8115
1.9907
2.0640
2.6509
2.6953
2.6891
1.5895
1.4705
1.6492
1.6746

1.1049
1.1115
1.1304
1.1606
1.1581
1.1493
1.0946
1.0693
1.0888
1.0941

1.0303
1.0340
1.0563
1.0607
1.0564
1.0599
1.0234
1.0216
1.0337
1.0280

1.4784
1.5376
1.5760
1.7967
1.8096
1.8066
1.3578
1.3043
1.3901
1.3813

1.0450
1.0373
1.0460
1.0566
1.0367
1.0479
1.0374
1.0244
1.0374
1.0282

1.0110
1.0202
1.0111
1.0214
1.0282
1.0143
1.0090
1.0008
1.0072
1.0177

Table 6. (Continued).

                            Exponential(1)           Gamma(2,1)               Beta(9,2)                Weibull(2,1)             Weibull(4,1)
HRSS    k1  k2  k3  Units   0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00
PRSS     6   3   0     18   1.9495 1.3923 1.1646     2.1453 1.5771 1.3073     1.1335 1.0088 1.0079     1.8458 1.0998 1.0337     1.4916 1.0323 1.0099
HRSS     6   2   0     24   2.1646 1.4200 1.1763     2.3636 1.6703 1.3227     1.1425 1.0047 1.0022     2.0766 1.1369 1.0405     1.5850 1.0503 1.0156
HRSS     6   1   0     30   2.2337 1.4309 1.1903     2.5300 1.7140 1.3366     1.1655 1.0068 1.0000     2.1815 1.1189 1.0456     1.6088 1.0401 1.0156
RSS      6   0   0     36   2.2641 1.4277 1.1881     2.6075 1.7348 1.3418     1.1582 1.0087 1.0000     2.2351 1.1298 1.0521     1.6489 1.0496 1.0082
PDRSS    0   0   3    108   2.9712 1.5856 1.2350     3.6591 1.9992 1.4619     1.1890 1.0152 1.0119     2.8117 1.1550 1.0458     1.8355 1.0483 1.0114
HRSS     0   0   2    144   3.0605 1.5885 1.2474     3.8284 2.0340 1.4648     1.1821 1.0073 1.0114     2.9468 1.1737 1.0624     1.8807 1.0457 1.0141
HRSS     0   0   1    180   3.0943 1.6027 1.2228     3.9060 2.0494 1.4807     1.1942 1.0137 1.0000     2.9884 1.1753 1.0640     1.8810 1.0475 1.0075
DRSS     0   0   0    216   3.1327 1.6045 1.2309     3.8245 2.0402 1.4763     1.1865 1.0076 1.0043     2.9691 1.1653 1.0649     1.8713 1.0472 1.0143
HRSS     2   0   0     68   1.8380 1.3097 1.1559     2.0736 1.5132 1.2643     1.1357 1.0024 1.0079     1.8928 1.1171 1.0449     1.5081 1.0323 1.0089
HRSS     2   0   1     52   1.8371 1.3192 1.1548     2.0716 1.5330 1.2766     1.1223 1.0080 1.0113     1.8930 1.1306 1.0391     1.5053 1.0313 1.0102
HRSS     2   0   2     36   1.8127 1.3184 1.1407     1.9968 1.5012 1.2695     1.1147 1.0034 1.0000     1.8074 1.1072 1.0416     1.4590 1.0385 1.0072
HRSS     3   0   0     36   1.6855 1.2785 1.1437     1.9414 1.4763 1.2582     1.1383 1.0072 1.0007     1.7966 1.1056 1.0339     1.4493 1.0396 1.0143
HRSS     3   1   0     33   1.6165 1.2623 1.1281     1.7537 1.4143 1.2158     1.1034 1.0106 1.0052     1.6676 1.0824 1.0267     1.3920 1.0341 1.0186
HRSS     3   0   1     27   1.7156 1.2592 1.1365     1.9141 1.4665 1.2432     1.1115 1.0030 1.0099     1.7811 1.1058 1.0383     1.4435 1.0285 1.0149
HRSS     3   1   1     24   1.6150 1.2369 1.1202     1.7467 1.4231 1.2209     1.1042 1.0164 1.0000     1.6581 1.1046 1.0363     1.3810 1.0266 1.0138
HRSS     4   0   0     24   1.6804 1.2868 1.1452     1.8703 1.4647 1.2379     1.1200 1.0059 1.0024     1.7473 1.0972 1.0328     1.4417 1.0390 1.0261
HRSS     4   1   0     20   1.6325 1.2721 1.1220     1.7880 1.4231 1.2189     1.1121 1.0132 1.0006     1.6665 1.0956 1.0198     1.4026 1.0326 1.0022
HRSS     4   2   0     16   1.5197 1.2173 1.1041     1.5708 1.3364 1.1871     1.0828 1.0022 1.0000     1.5087 1.0779 1.0310     1.2939 1.0407 1.0104

Table 7. ABs of mean estimators based on IHRSS under asymmetric distributions using different values of V2.

                            Exponential(1)           Gamma(2,1)               Beta(9,2)                Weibull(2,1)             Weibull(4,1)
HRSS    k1  k2  k3  Units   0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00
PRSS     3   1   0      6   0.0001 0.0011 0.0007     0.0030 0.0030 0.0000     0.0001 0.0005 0.0001     0.0003 0.0016 0.0015     0.0002 0.0002 0.0007
RSS      3   0   0      9   0.0002 0.0002 0.0014     0.0008 0.0030 0.0030     0.0000 0.0001 0.0000     0.0002 0.0010 0.0014     0.0002 0.0005 0.0003
PDRSS    0   0   1     18   0.0009 0.0013 0.0003     0.0027 0.0004 0.0016     0.0001 0.0002 0.0002     0.0006 0.0010 0.0008     0.0000 0.0005 0.0008
DRSS     0   0   0     27   0.0024 0.0004 0.0014     0.0009 0.0023 0.0014     0.0001 0.0001 0.0002     0.0007 0.0001 0.0002     0.0001 0.0001 0.0002
PRSS     4   2   0      8   0.0006 0.0002 0.0025     0.0021 0.0003 0.0032     0.0002 0.0001 0.0000     0.0004 0.0007 0.0008     0.0004 0.0004 0.0005
HRSS     4   1   0     12   0.0021 0.0000 0.0001     0.0019 0.0002 0.0027     0.0000 0.0002 0.0001     0.0008 0.0001 0.0001     0.0005 0.0002 0.0003
RSS      4   0   0     16   0.0025 0.0011 0.0005     0.0003 0.0003 0.0021     0.0002 0.0004 0.0002     0.0005 0.0006 0.0016     0.0003 0.0004 0.0005
PDRSS    0   0   2     32   0.0007 0.0014 0.0011     0.0003 0.0006 0.0005     0.0001 0.0001 0.0000     0.0004 0.0005 0.0007     0.0009 0.0000 0.0005
HRSS     0   0   1     48   0.0004 0.0015 0.0022     0.0015 0.0003 0.0008     0.0001 0.0001 0.0001     0.0001 0.0007 0.0002     0.0002 0.0001 0.0002
DRSS     0   0   0     64   0.0004 0.0005 0.0005     0.0003 0.0004 0.0015     0.0004 0.0000 0.0002     0.0000 0.0004 0.0002     0.0004 0.0001 0.0006
HRSS     2   0   0     12   0.0018 0.0001 0.0001     0.0024 0.0031 0.0006     0.0002 0.0000 0.0001     0.0006 0.0001 0.0006     0.0005 0.0005 0.0001
PRSS     5   2   0     15   0.0011 0.0002 0.0031     0.0001 0.0004 0.0040     0.0002 0.0001 0.0001     0.0004 0.0003 0.0012     0.0004 0.0003 0.0003
HRSS     5   1   0     20   0.0013 0.0002 0.0004     0.0005 0.0015 0.0014     0.0000 0.0001 0.0001     0.0004 0.0008 0.0014     0.0001 0.0003 0.0003
RSS      5   0   0     25   0.0009 0.0007 0.0004     0.0012 0.0030 0.0015     0.0001 0.0001 0.0001     0.0002 0.0005 0.0006     0.0002 0.0007 0.0001
PDRSS    0   0   2     75   0.0000 0.0005 0.0018     0.0008 0.0007 0.0032     0.0002 0.0003 0.0000     0.0002 0.0005 0.0011     0.0001 0.0001 0.0001
HRSS     0   0   1    100   0.0005 0.0020 0.0006     0.0013 0.0003 0.0016     0.0001 0.0001 0.0000     0.0006 0.0002 0.0005     0.0000 0.0002 0.0000
DRSS     0   0   0    125   0.0001 0.0029 0.0010     0.0009 0.0024 0.0018     0.0001 0.0001 0.0004     0.0002 0.0006 0.0008     0.0001 0.0003 0.0000
HRSS     3   0   0     17   0.0016 0.0002 0.0007     0.0002 0.0013 0.0010     0.0002 0.0001 0.0002     0.0001 0.0014 0.0000     0.0002 0.0003 0.0004
HRSS     3   1   0     14   0.0002 0.0003 0.0004     0.0020 0.0005 0.0001     0.0003 0.0001 0.0000     0.0005 0.0002 0.0001     0.0000 0.0007 0.0001
HRSS     2   0   1     22   0.0002 0.0012 0.0024     0.0009 0.0031 0.0005     0.0003 0.0002 0.0001     0.0009 0.0011 0.0003     0.0000 0.0007 0.0003
HRSS     2   0   0     31   0.0001 0.0005 0.0020     0.0012 0.0017 0.0063     0.0002 0.0001 0.0000     0.0003 0.0006 0.0003     0.0005 0.0007 0.0004

Table 7. (Continued).

                            Exponential(1)           Gamma(2,1)               Beta(9,2)                Weibull(2,1)             Weibull(4,1)
HRSS    k1  k2  k3  Units   0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00       0.05   1.00   3.00
PRSS     6   3   0     18   0.0003 0.0011 0.0023     0.0031 0.0009 0.0013     0.0001 0.0000 0.0000     0.0007 0.0003 0.0007     0.0001 0.0002 0.0003
HRSS     6   2   0     24   0.0006 0.0001 0.0001     0.0012 0.0007 0.0022     0.0000 0.0000 0.0000     0.0004 0.0001 0.0007     0.0001 0.0001 0.0005
HRSS     6   1   0     30   0.0011 0.0005 0.0015     0.0002 0.0012 0.0017     0.0001 0.0000 0.0000     0.0001 0.0004 0.0005     0.0005 0.0006 0.0001
RSS      6   0   0     36   0.0014 0.0009 0.0018     0.0002 0.0023 0.0022     0.0001 0.0000 0.0000     0.0002 0.0003 0.0000     0.0002 0.0000 0.0002
PDRSS    0   0   3    108   0.0003 0.0002 0.0022     0.0002 0.0010 0.0006     0.0000 0.0003 0.0001     0.0000 0.0010 0.0006     0.0002 0.0000 0.0001
HRSS     0   0   2    144   0.0005 0.0007 0.0009     0.0000 0.0011 0.0001     0.0002 0.0001 0.0002     0.0001 0.0004 0.0009     0.0006 0.0001 0.0003
HRSS     0   0   1    180   0.0003 0.0009 0.0018     0.0003 0.0009 0.0018     0.0002 0.0001 0.0001     0.0000 0.0006 0.0001     0.0004 0.0001 0.0002
DRSS     0   0   0    216   0.0012 0.0002 0.0016     0.0012 0.0014 0.0031     0.0000 0.0001 0.0000     0.0000 0.0001 0.0002     0.0001 0.0002 0.0003
HRSS     2   0   0     68   0.0010 0.0015 0.0001     0.0013 0.0017 0.0019     0.0002 0.0002 0.0001     0.0003 0.0000 0.0002     0.0001 0.0004 0.0005
HRSS     2   0   1     52   0.0003 0.0009 0.0001     0.0025 0.0005 0.0021     0.0000 0.0002 0.0003     0.0005 0.0005 0.0002     0.0001 0.0000 0.0000
HRSS     2   0   2     36   0.0022 0.0011 0.0017     0.0009 0.0011 0.0015     0.0001 0.0002 0.0001     0.0002 0.0007 0.0002     0.0002 0.0003 0.0000
HRSS     3   0   0     36   0.0012 0.0009 0.0004     0.0015 0.0007 0.0012     0.0001 0.0000 0.0000     0.0004 0.0014 0.0003     0.0001 0.0009 0.0004
HRSS     3   1   0     33   0.0018 0.0004 0.0001     0.0015 0.0007 0.0010     0.0001 0.0001 0.0000     0.0000 0.0012 0.0002     0.0003 0.0002 0.0003
HRSS     3   0   1     27   0.0005 0.0015 0.0005     0.0034 0.0004 0.0014     0.0001 0.0000 0.0000     0.0004 0.0004 0.0003     0.0000 0.0003 0.0006
HRSS     3   1   1     24   0.0015 0.0003 0.0024     0.0004 0.0025 0.0026     0.0003 0.0002 0.0001     0.0002 0.0006 0.0000     0.0005 0.0003 0.0002
HRSS     4   0   0     24   0.0006 0.0025 0.0011     0.0026 0.0006 0.0005     0.0002 0.0002 0.0002     0.0007 0.0006 0.0005     0.0003 0.0002 0.0002
HRSS     4   1   0     20   0.0016 0.0012 0.0006     0.0014 0.0002 0.0009     0.0002 0.0000 0.0001     0.0004 0.0002 0.0002     0.0001 0.0000 0.0001
HRSS     4   2   0     16   0.0004 0.0011 0.0012     0.0011 0.0007 0.0003     0.0001 0.0001 0.0002     0.0001 0.0003 0.0003     0.0003 0.0001 0.0002


Table 8. Summary statistics of 399 conifer trees.

Variable             Mean    Variance   Skewness   Kurtosis   Median    ρYX
Y (height) in feet   52.36   325.14     1.619      1.7760     29.000    0.908
X (diameter) in cm   20.84   310.11     0.884      0.4230     14.500

Table 9. ABs and REs of HRSS- and IHRSS-based mean estimators with respect to SRS-based mean estimator.

                                 RE                     AB
HRSS    k1  k2  k3  Units   HRSS     IHRSS       SRS      HRSS     IHRSS
PRSS     3   1   0      6   1.4778   1.4132      0.0063   0.0001   0.0044
RSS      3   0   0      9   1.6324   1.5408      0.0091   0.0108   0.0046
PDRSS    0   0   1     18   2.0148   1.8577      0.0035   0.0056   0.0005
DRSS     0   0   0     27   2.0443   1.8830      0.0023   0.0104   0.0107
PRSS     4   2   0      8   1.5860   1.4740      0.0019   0.0049   0.0003
HRSS     4   1   0     12   1.8420   1.6994      0.0060   0.0095   0.0107
RSS      4   0   0     16   1.9319   1.7655      0.0050   0.0061   0.0081
PDRSS    0   0   2     32   2.4978   2.1629      0.0008   0.0068   0.0056
HRSS     0   0   1     48   2.6351   2.2816      0.0055   0.0033   0.0152
DRSS     0   0   0     64   2.6408   2.2821      0.0164   0.0021   0.0105
HRSS     2   0   0     12   1.4033   1.3653      0.0017   0.0078   0.0071
PRSS     5   2   0     15   2.0056   1.7965      0.0002   0.0013   0.0004
HRSS     5   1   0     20   2.1681   1.9276      0.0060   0.0099   0.0034
RSS      5   0   0     25   2.2250   1.9678      0.0040   0.0050   0.0023
PDRSS    0   0   2     75   3.2371   2.6093      0.0063   0.0037   0.0022
HRSS     0   0   1    100   3.2799   2.6433      0.0012   0.0026   0.0007
DRSS     0   0   0    125   3.2810   2.6446      0.0097   0.0021   0.0075
HRSS     3   0   0     17   1.5752   1.5021      0.0048   0.0007   0.0048
HRSS     3   1   0     14   1.4870   1.4265      0.0071   0.0120   0.0128
HRSS     2   0   1     22   1.6662   1.5786      0.0007   0.0072   0.0104
HRSS     2   0   0     31   1.6804   1.5911      0.0022   0.0010   0.0056
PRSS     6   3   0     18   2.1529   1.8679      0.0049   0.0017   0.0009
HRSS     6   2   0     24   2.3591   2.0403      0.0015   0.0083   0.0137
HRSS     6   1   0     30   2.4759   2.1262      0.0062   0.0001   0.0003
RSS      6   0   0     36   2.5180   2.1498      0.0052   0.0006   0.0050
PDRSS    0   0   3    108   3.8397   2.8838      0.0022   0.0045   0.0029
HRSS     0   0   2    144   3.9527   2.9531      0.0086   0.0062   0.0034
HRSS     0   0   1    180   3.9654   2.9613      0.0111   0.0001   0.0022
DRSS     0   0   0    216   3.9672   2.9606      0.0096   0.0015   0.0020
HRSS     2   0   0     68   1.9831   1.8155      0.0059   0.0042   0.0131
HRSS     2   0   1     52   1.9809   1.8149      0.0017   0.0018   0.0030
HRSS     2   0   2     36   1.9295   1.7651      0.0038   0.0067   0.0042
HRSS     3   0   0     36   1.8126   1.6967      0.0083   0.0017   0.0053
HRSS     3   1   0     33   1.7162   1.6155      0.0020   0.0015   0.0050
HRSS     3   0   1     27   1.8020   1.6832      0.0064   0.0007   0.0133
HRSS     3   1   1     24   1.7069   1.6066      0.0012   0.0075   0.0056
HRSS     4   0   0     24   1.7615   1.6448      0.0021   0.0166   0.0033
HRSS     4   1   0     20   1.7118   1.6071      0.0122   0.0006   0.0049
HRSS     4   2   0     16   1.5545   1.4657      0.0071   0.0030   0.0102

m = 5. For perfect rankings, the values within each set are ranked with respect to the heights of the trees. Under imperfect rankings, however, the trees within a set are ranked with respect to their diameter measurements. Based on 100,000 iterations, the ABs and REs of the mean estimators based on the HRSS and IHRSS schemes are calculated with respect to the SRS-based mean estimator, and are reported in Table 9.
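The perfect- versus imperfect-ranking comparison above can be sketched in a small Monte Carlo. The conifer data are not reproduced here, so a bivariate normal with correlation comparable to the 0.908 in Table 8 stands in for the (height, diameter) pairs, and plain RSS stands in for the general HRSS design; all function names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def draw_pairs(n, rho=0.9):
    """Hypothetical stand-in for (height, diameter): bivariate normal
    with zero means, unit variances and correlation rho."""
    y = rng.standard_normal(n)
    x = rho * y + np.sqrt(1 - rho**2) * rng.standard_normal(n)
    return y, x

def rss_mean(m, rank_on_x=False):
    """One RSS mean with set size m: for each rank i, draw a fresh set of
    m units and quantify the unit whose judgment rank is i. Ranking on
    the concomitant X models imperfect ranking; ranking on Y itself is
    perfect ranking."""
    total = 0.0
    for i in range(m):
        y, x = draw_pairs(m)
        order = np.argsort(x) if rank_on_x else np.argsort(y)
        total += y[order[i]]
    return total / m

def re_and_ab(m=5, reps=10000, rank_on_x=False):
    """RE of the RSS mean relative to the SRS mean, and its absolute bias
    (the true mean is 0 by construction)."""
    srs = np.array([draw_pairs(m)[0].mean() for _ in range(reps)])
    rss = np.array([rss_mean(m, rank_on_x) for _ in range(reps)])
    return srs.var() / rss.var(), abs(rss.mean())

re_perfect, ab_perfect = re_and_ab(rank_on_x=False)
re_imperfect, ab_imperfect = re_and_ab(rank_on_x=True)
```

As in Table 9, the RE under perfect ranking exceeds the RE under ranking by the concomitant, both exceed 1, and both absolute biases are near zero.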

From Table 9, we observe that the mean estimators under both the HRSS and IHRSS schemes turn out to be more efficient than their counterparts based on SRS. As expected, the REs under the HRSS scheme are greater than those based on the IHRSS scheme, and the REs increase with m. Moreover, under each sampling scheme, the values of the ABs are close to zero. This shows that the mean estimators under both HRSS and IHRSS are unbiased.
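The near-zero ABs are what the theory predicts. Sketched here for plain RSS (the paper establishes the analogous property for HRSS), the identity of Takahasi and Wakimoto [8] says the rank-stratified means average back to the population mean, which gives unbiasedness, while the tabulated RE is the variance ratio against SRS:

```latex
% \mu_{(i)} is the mean of the i-th order statistic from a set of size m,
% and Y_{(i)i} is the i-th judgment order statistic from the i-th set.
\frac{1}{m}\sum_{i=1}^{m}\mu_{(i)} = \mu
\quad\Longrightarrow\quad
E\!\left(\bar{Y}_{\mathrm{RSS}}\right)
  = \frac{1}{m}\sum_{i=1}^{m}\mu_{(i)} = \mu,
\qquad
\mathrm{RE} = \frac{\mathrm{Var}(\bar{Y}_{\mathrm{SRS}})}{\mathrm{Var}(\bar{Y}_{\mathrm{RSS}})} \ge 1.
```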
6. Concluding remarks

In this paper, we have proposed a new HRSS scheme for efficient estimation of the population mean. It is theoretically shown that, regardless of the parent distribution, the mean estimator under HRSS is unbiased and always more precise than that based on the SRS scheme. It is numerically shown that the mean estimators under the IHRSS scheme are unbiased and at least as efficient as the SRS-based mean estimator. The advantage of the HRSS scheme over existing RSS schemes is that it not only encompasses its counterparts, but also gives the experimenter many options for selecting more representative samples from the target population at an affordable cost.
Disclosure statement
No potential conflict of interest was reported by the author(s).

References
[1] Patil GP, Sinha AK, Taillie C. Ranked set sampling. In: Patil GP, Rao CR, editors. Handbook of statistics, environmental statistics. Vol. 12. Amsterdam: North-Holland, Elsevier; 1994. p. 167–200.
[2] McIntyre GA. A method for unbiased selective sampling, using ranked sets. Aust J Agric Res. 1952;3:385–390.
[3] Murray RA, Ridout MS, Cross JV. The use of ranked set sampling in spray deposit assessment. Aspects Appl Biol. 2000;57:141–146.
[4] Yu PLH, Lam K. Regression estimator in ranked set sampling. Biometrics. 1997;53(3):1070–1080.
[5] Mode NA, Conquest LL, Marker DA. Ranked set sampling for ecological research: accounting for the total costs of sampling. Environmetrics. 1999;10:179–194.
[6] Al-Saleh MF, Al-Shrafat K. Estimation of average milk yield using ranked set sampling. Environmetrics. 2001;12:359–399.
[7] Wang Y-G, Ye Y, Milton DA. Efficient designs for sampling and subsampling in fisheries research based on ranked sets. J Marine Sci. 2009;66:928–934.
[8] Takahasi K, Wakimoto K. On unbiased estimates of the population mean based on the sample stratified by means of ordering. Ann Inst Stat Math. 1968;20:1–31.
[9] Dell TR, Clutter JL. Ranked set sampling theory with order statistics background. Biometrics. 1972;28:545–555.
[10] Stokes SL. Ranked set sampling with concomitant variables. Commun Stat Theory Methods. 1977;6(12):1207–1211.
[11] Muttlak HA. Pair rank set sampling. Biom J. 1996;38:879–885.
[12] Al-Saleh MF, Al-Kadiri MA. Double-ranked set sampling. Stat Prob Lett. 2000;48:205–212.
[13] Haq A, Brown J, Moltchanova E, Al-Omari AI. Paired double ranked set sampling. Commun Stat Theory Methods. 2014, accepted for publication.
[14] David HA, Nagaraja HN. Order statistics. 3rd ed. New York: Wiley; 2003.
[15] Al-Saleh MF, Al-Omari AI. Multistage ranked set sampling. J Stat Plan Inference. 2002;102:273–286.
[16] Vaughan RJ, Venables WN. Permanent expressions for order statistic densities. J R Stat Soc B. 1972;34:308–310.
[17] Bapat RB, Beg MI. Order statistics from nonidentically distributed variables and permanents. Sankhyā Ser A. 1989;51:79–93.
[18] Platt WJ, Evans GM, Rathbun SL. The population dynamics of a long-lived conifer (Pinus palustris). Am Nat. 1988;131:491–525.
