Article

Order-Restricted Inference for Generalized Inverted Exponential Distribution under Balanced Joint Progressive Type-II Censored Data and Its Application on the Breaking Strength of Jute Fibers

1
School of Mathematics and Statistics, Beijing Jiaotong University, Beijing 100044, China
2
Metals & Chemistry Research Institute, China Academy of Railway Sciences Corporation Limited, Beijing 100081, China
*
Author to whom correspondence should be addressed.
Submission received: 9 December 2022 / Revised: 30 December 2022 / Accepted: 3 January 2023 / Published: 8 January 2023
(This article belongs to the Section Probability and Statistics)

Abstract

This article considers a new improved balanced joint progressive type-II censoring scheme based on two different populations, where the lifetime distributions of two populations follow the generalized inverted exponential distribution with different shape parameters but a common scale parameter. The maximum likelihood estimates of all unknown parameters are obtained and their asymptotic confidence intervals are constructed by the observed Fisher information matrix. Furthermore, the existence and uniqueness of solutions are proved. In the Bayesian framework, the common scale parameter follows an independent Gamma prior and the different shape parameters jointly follow a Beta-Gamma prior. Based on whether the order restriction is imposed on the shape parameters, the Bayesian estimates of all parameters concerning the squared error loss function along with the associated highest posterior density credible intervals are derived by using the importance sampling technique. Then, we use Monte Carlo simulations to study the performance of the various estimators and a real dataset is discussed to illustrate all of the estimation techniques. Finally, we seek an optimum censoring scheme through different optimality criteria.

1. Introduction

In real lifetime testing, progressive censoring schemes have been widely studied in the statistical literature during the past couple of decades. Different progressive censoring schemes are introduced to accelerate the experimental process and reduce the experimental cost: a pre-fixed number of functioning units can be removed intentionally while still ensuring that a certain number of failures are observed, which makes the lifetime experiment more efficient. For example, in the process of life testing, long lifetimes or accidental damage to equipment make it difficult for the experimenter to collect complete lifetime data, thus affecting the experimental results. Progressive censoring schemes are applied to deal with these problems. In recent years, a great deal of work has been done on different progressive censoring schemes; relevant content can be found in Ref. [1].
Nearly all conventional censoring schemes, for instance, hybrid censoring, progressive type-I censoring, progressive first-failure censoring, etc. (see Refs. [2,3,4]), are based on a single population. However, to carry out comparative lifetime testing on products from two or more populations under the same survival conditions, further censoring schemes have been proposed. For example, in the joint progressive type-II censoring scheme (JPC), the lifetime testing of two samples from different populations is carried out simultaneously, and the experiment is terminated when a certain number of failures is observed. In the setting of the JPC scheme, Ref. [5] first considered exact statistical inference for two exponential populations, while Ref. [6] extended similar results to k exponential populations. Ref. [7] discussed interval estimation, constructed by three methods, and conditional maximum likelihood estimation for two Weibull distributions. Furthermore, Ref. [8] considered classical and Bayesian inference when an order restriction was imposed on the scale parameters. Ref. [9] employed the EM algorithm to calculate the maximum likelihood estimates of all parameters and considered the order restriction on the shape parameters in the Bayesian inference. Ref. [10] computed the Bayesian estimates of the unknown parameters under the generalized entropy loss function and discussed criteria for obtaining the optimal censoring scheme.
Recently, Ref. [11] proposed a new censoring scheme based on two samples from exponential distributions, referred to as the balanced joint progressive censoring scheme (B-JPC). Compared with the JPC scheme, the B-JPC scheme has several advantages. For example, at each failure, a pre-fixed number of functioning units is removed from both populations simultaneously, which makes the analysis more flexible and the calculations simpler. In practice, the B-JPC scheme can be used to accelerate the life testing of products under different stress levels. In acceptance sampling, this scheme can be employed to make acceptance decisions for products from different batches, so that decisions about several products can be made in a single experiment. Under the B-JPC scheme, Ref. [12] proposed a new criterion, based on the exact joint confidence region volume of the parameters, to find the optimum censoring scheme, and Ref. [13] developed Bayesian inference under an order restriction on the scale parameters and employed precision criteria to obtain the optimum censoring scheme. Ref. [14] extended the work of Ref. [13] to flexible prior assumptions, and different design criteria, along with the variable neighborhood search algorithm proposed in Ref. [15], were employed to obtain the optimum censoring scheme. Ref. [16] studied statistical inference for the Lindley distribution, with the optimum censoring scheme obtained using Bayesian and classical design criteria.
Compared with the generalized inverted exponential distribution under the JPC scheme discussed in Ref. [17], this article considers a more flexible censoring process and treats both the case where the common scale parameter is known and the case where it is unknown. We suppose that the lifetimes of experimental units from the two populations follow a two-parameter generalized inverted exponential distribution with different shape parameters but a common scale parameter. We discuss Bayesian inference with an order restriction between the two shape parameters as well as maximum likelihood estimation of all parameters. For the Bayesian analysis, the common scale parameter is assigned a Gamma prior, and the shape parameters jointly follow an ordered Beta-Gamma distribution. Furthermore, we compare different censoring schemes under precision criteria to find the optimum censoring scheme.
The generalized inverted exponential distribution was proposed to overcome some disadvantages of classical lifetime models, such as the constant hazard rate of the exponential distribution and the non-closed-form distribution function of the gamma distribution. Some properties and characteristics of this distribution are provided in Ref. [18]. Its hazard rate function is non-monotone and unimodal, so the distribution is suitable for analyzing data with a non-monotone failure rate function.
The rest of this article is arranged as follows. We provide the notations and brief introduction to the B-JPC scheme in Section 2. The maximum likelihood estimations and coverage probabilities of model parameters are discussed in Section 3. Using the observed Fisher information matrix, the asymptotic confidence intervals of all parameters are constructed. Furthermore, proof of the existence and uniqueness of maximum likelihood estimation is provided. The order restriction on the shape parameters, the highest posterior density credible intervals, and the Bayesian estimates of all unknown parameters concerning the importance sampling method are discussed in Section 4. Section 5 contains the simulation study and analysis results for real datasets. Then, we obtain the optimum censoring scheme through five precision criteria in Section 6. Finally, we draw the conclusions of this article in Section 7.

2. Notations, Model Description and Assumption

2.1. Notations

CI: confidence / credible interval
IP / NIP: informative prior / non-informative prior
AV / AL: average estimate / average length
MSE: mean squared error
CP: coverage percentage
BG / OBG: Beta-Gamma / ordered Beta-Gamma
BE: Bayesian estimate
CDF: cumulative distribution function
PDF: probability density function
GIED: generalized inverted exponential distribution
SELF: squared error loss function
MLE: maximum likelihood estimator
i.i.d.: independent and identically distributed
HPD: highest posterior density
CS: censoring scheme
k: total count of failures
k1: total count of failures from population A
k2: total count of failures from population B
GA(α, λ): Gamma distribution with PDF $f_{GA}(x;\alpha,\lambda)=\frac{\lambda^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1}e^{-\lambda x}$, $x>0$; $\alpha,\lambda>0$
Beta(a, b): Beta distribution with PDF $f_{Beta}(y;a,b)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}y^{a-1}(1-y)^{b-1}$, $0<y<1$; $a,b>0$

2.2. Model Description and Assumption

We introduce the B-JPC scheme as follows. Suppose a random sample of m units is taken from population A, while another random sample of size n is drawn from population B, and the two samples are placed on the lifetime experiment at the same time. During the B-JPC process, only k (k < min(m, n)) failures are observed, and the censoring scheme R = (R_1, R_2, ⋯, R_{k−1}) consists of pre-fixed positive integers satisfying R_1 + R_2 + ⋯ + R_{k−1} + k − 1 < min(m, n). Suppose the first failure comes from the sample of population B, and record its time as W_1. At W_1, R_1 + 1 units are removed at random from the m units of population A and R_1 units are removed from population B, which then has n − R_1 − 1 surviving units. Next, suppose the second failure belongs to population A, and record its time as W_2. At W_2, R_2 units are removed at random from the m − (R_1 + 1) − 1 surviving units of population A, and R_2 + 1 units are randomly chosen and dropped from the remaining n − R_1 − 1 surviving units of population B. In general, when the i-th failure occurs (i = 1, 2, ⋯, k − 1) at time W_i, R_i units are dropped at random from the sample in which the i-th failure occurs and R_i + 1 units are dropped at random from the other sample. The experiment continues until the k-th failure (from population B or A) occurs, at which point all remaining surviving units from both populations A and B are removed. The experimental process of the B-JPC scheme is shown in Figure 1 and Figure 2.
In this life testing experiment, Z_1, Z_2, ⋯, Z_k denotes a series of indicator variables: Z_i = 1 if the i-th failure belongs to population A, and Z_i = 0 if the i-th failure belongs to the sample of population B. Hence, the data under the B-JPC scheme consist of (W, Z, R).
It is assumed that the lifetimes of the m experimental units from population A, namely the random variables X_1, X_2, ⋯, X_m, are independently and identically distributed GIED with parameters λ and α_1. Then, the probability density function f(x), the corresponding cumulative distribution function F(x), and the survival function F̄(x) are defined as
$$f(x;\alpha_1,\lambda)=\frac{\alpha_1\lambda}{x^{2}}\,e^{-\lambda/x}\left(1-e^{-\lambda/x}\right)^{\alpha_1-1},\qquad x>0;\ \lambda,\alpha_1>0,$$
$$F(x;\alpha_1,\lambda)=1-\left(1-e^{-\lambda/x}\right)^{\alpha_1},\qquad \bar F(x;\alpha_1,\lambda)=\left(1-e^{-\lambda/x}\right)^{\alpha_1},\qquad x>0;\ \lambda,\alpha_1>0.$$
In the same way, suppose that the lifetimes of the n units from population B, Y_1, Y_2, ⋯, Y_n, are i.i.d. GIED with parameters λ and α_2. The probability density function g(y), the corresponding cumulative distribution function G(y), and the survival function Ḡ(y) are given by
$$g(y;\alpha_2,\lambda)=\frac{\alpha_2\lambda}{y^{2}}\,e^{-\lambda/y}\left(1-e^{-\lambda/y}\right)^{\alpha_2-1},\qquad y>0;\ \lambda,\alpha_2>0,$$
$$G(y;\alpha_2,\lambda)=1-\left(1-e^{-\lambda/y}\right)^{\alpha_2},\qquad \bar G(y;\alpha_2,\lambda)=\left(1-e^{-\lambda/y}\right)^{\alpha_2},\qquad y>0;\ \lambda,\alpha_2>0,$$
where λ is the common scale parameter and α_1, α_2 are the two shape parameters, all positive. Figure 3 and Figure 4 show the PDFs and CDFs of the GIED for different α and fixed λ. According to Figure 3, the PDF of the GIED is non-monotone, and for fixed λ its rise and fall become sharper as α increases. As for the CDF in Figure 4, a smaller α results in a slower rise.
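Since all computations in this article are carried out in R (see Section 5.1), a minimal sketch of the GIED density, distribution, survival, and quantile functions is given below for later reference. The function names dgied, pgied, sgied, qgied, and rgied are ours, not part of any package, and follow the parameterization displayed above.

```r
# GIED helpers following the parameterization above (all names are ours).
dgied <- function(x, alpha, lambda) {
  (alpha * lambda / x^2) * exp(-lambda / x) * (1 - exp(-lambda / x))^(alpha - 1)
}
pgied <- function(x, alpha, lambda) 1 - (1 - exp(-lambda / x))^alpha
sgied <- function(x, alpha, lambda) (1 - exp(-lambda / x))^alpha

# Quantile function: solve 1 - (1 - exp(-lambda/x))^alpha = p for x.
qgied <- function(p, alpha, lambda) -lambda / log(1 - (1 - p)^(1 / alpha))
rgied <- function(n, alpha, lambda) qgied(runif(n), alpha, lambda)

# Quick sanity check: the density should integrate to approximately one.
integrate(dgied, 0, Inf, alpha = 0.4, lambda = 0.5)
```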

3. Maximum Likelihood Estimation

3.1. Point Estimation

Based on the progressive censoring scheme R, let ω_1, ω_2, …, ω_k be a B-JPC sample from population A, with PDF f(·) and CDF F(·), and population B, with PDF g(·) and CDF G(·). The following Algorithm 1 is applied to generate the B-JPC sample. Given such a sample, the likelihood function L(α_1, α_2, λ | w, z, R) is given by
$$\begin{aligned}L={}&c\prod_{i=1}^{k-1}\Big[f(w_i)\,\bar G(w_i)^{R_i+1}\,\bar F(w_i)^{R_i}\Big]^{z_i}\Big[g(w_i)\,\bar G(w_i)^{R_i}\,\bar F(w_i)^{R_i+1}\Big]^{1-z_i}\\ &\times\Big[f(w_k)\,\bar G(w_k)^{\,n-\sum_{i=1}^{k-1}(R_i+1)}\,\bar F(w_k)^{\,m-\sum_{i=1}^{k-1}(R_i+1)-1}\Big]^{z_k}\\ &\times\Big[g(w_k)\,\bar F(w_k)^{\,m-\sum_{i=1}^{k-1}(R_i+1)}\,\bar G(w_k)^{\,n-\sum_{i=1}^{k-1}(R_i+1)-1}\Big]^{1-z_k}.\end{aligned}$$
Here, $c=\prod_{i=1}^{k}\Big\{\Big[n-\sum_{j=1}^{i-1}(R_j+1)\Big](1-z_i)+\Big[m-\sum_{j=1}^{i-1}(R_j+1)\Big]z_i\Big\}$.
Algorithm 1: Generate the B-JPC sample from GIED.
Step 1: Given the initial values of k, m, n, R, λ , α 1 and α 2 .
Step 2: Generate X 1 , ⋯, X m from GIED ( λ , α 1 ) and sort them as X ( 1 ) , ⋯, X ( m ) .
Step 3: Generate Y 1 , ⋯, Y n from GIED ( λ , α 2 ) and sort them as Y ( 1 ) , ⋯, Y ( n ) .
Step 4: Calculate W_1 = min(X_{(1)}, Y_{(1)}); if X_{(1)} ≤ Y_{(1)}, set Z_1 = 1, otherwise Z_1 = 0.
Step 5: Calculate W_i = min(X_{(η_i)}, Y_{(η_i)}). Similarly, if X_{(η_i)} ≤ Y_{(η_i)}, set Z_i = 1, otherwise
             Z_i = 0 (i = 2, 3, ⋯, k), where $\eta_i=i-1+\sum_{j=1}^{i-1}R_j$.
Step 6: Here ( W 1 , Z 1 ), ⋯, ( W k , Z k ) are the B-JPC sample from GIED that we need.
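As an illustration, the sketch below simulates the B-JPC mechanism of Section 2.2 directly (failures followed by random removals), as an alternative to the order-statistic shortcut in Algorithm 1. It reuses the hypothetical rgied() helper defined earlier; the sample sizes in the usage example at the end are our own choice and satisfy the constraint R_1 + ⋯ + R_{k−1} + k − 1 < min(m, n).

```r
# Direct simulation of the B-JPC mechanism (a sketch, not the authors' code).
r_bjpc <- function(m, n, k, R, alpha1, alpha2, lambda) {
  stopifnot(length(R) == k - 1)
  x <- rgied(m, alpha1, lambda)   # surviving units from population A
  y <- rgied(n, alpha2, lambda)   # surviving units from population B
  W <- numeric(k); Z <- integer(k)
  for (i in 1:k) {
    if (min(x) <= min(y)) {       # i-th failure comes from population A
      W[i] <- min(x); Z[i] <- 1L
      x <- x[-which.min(x)]
      if (i < k) {
        if (R[i] > 0) x <- x[-sample(length(x), R[i])]  # remove R_i from A
        y <- y[-sample(length(y), R[i] + 1)]            # remove R_i + 1 from B
      }
    } else {                      # i-th failure comes from population B
      W[i] <- min(y); Z[i] <- 0L
      y <- y[-which.min(y)]
      if (i < k) {
        if (R[i] > 0) y <- y[-sample(length(y), R[i])]
        x <- x[-sample(length(x), R[i] + 1)]
      }
    }
  }
  list(W = W, Z = Z, R = R)       # all remaining units are dropped at the k-th failure
}

set.seed(1)
dat <- r_bjpc(m = 25, n = 25, k = 8, R = rep(2, 7),
              alpha1 = 0.4, alpha2 = 0.8, lambda = 0.5)
```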
In this case, the likelihood function of the unknown parameters (λ, α_1, α_2) given the observed data (w, z) is
$$\begin{aligned}L(\lambda,\alpha_1,\alpha_2\mid w,z,R)={}&c\,\alpha_1^{k_1}\alpha_2^{k_2}\lambda^{k}\prod_{i=1}^{k}\frac{1}{\omega_i^{2}}e^{-\lambda/\omega_i}\\ &\times\prod_{i=1}^{k-1}\left(1-e^{-\lambda/\omega_i}\right)^{\alpha_1R_i+\alpha_1-z_i}\left(1-e^{-\lambda/\omega_k}\right)^{\alpha_1\left(m-\sum_{i=1}^{k-1}(R_i+1)\right)-z_k}\\ &\times\prod_{i=1}^{k-1}\left(1-e^{-\lambda/\omega_i}\right)^{\alpha_2R_i+\alpha_2-1+z_i}\left(1-e^{-\lambda/\omega_k}\right)^{\alpha_2\left(n-\sum_{i=1}^{k-1}(R_i+1)\right)+z_k-1}.\end{aligned}$$
Here, $k_1=\sum_{i=1}^{k}z_i$, $k_2=k-k_1=\sum_{i=1}^{k}(1-z_i)$, $A_1(\lambda)=\sum_{i=1}^{k}z_i\ln\left(1-e^{-\lambda/\omega_i}\right)$, and $A_2(\lambda)=\sum_{i=1}^{k}(1-z_i)\ln\left(1-e^{-\lambda/\omega_i}\right)$.
Thus, ignoring the normalizing constant, the log-likelihood function is given by
$$\begin{aligned}l(\lambda,\alpha_1,\alpha_2\mid w,z,R)={}&k_1\ln\alpha_1+k_2\ln\alpha_2+k\ln\lambda-2\sum_{i=1}^{k}\ln\omega_i-\lambda\sum_{i=1}^{k}\frac{1}{\omega_i}\\ &+\sum_{i=1}^{k-1}\left(\alpha_1R_i+\alpha_1-z_i\right)\ln\left(1-e^{-\lambda/\omega_i}\right)+\Big[\alpha_1\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)-z_k\Big]\ln\left(1-e^{-\lambda/\omega_k}\right)\\ &+\sum_{i=1}^{k-1}\left(\alpha_2R_i+\alpha_2-1+z_i\right)\ln\left(1-e^{-\lambda/\omega_i}\right)+\Big[\alpha_2\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)+z_k-1\Big]\ln\left(1-e^{-\lambda/\omega_k}\right).\end{aligned}$$
Taking partial derivatives with respect to λ, α_1, and α_2 and equating them to zero gives:
$$\begin{aligned}\frac{\partial l}{\partial\lambda}={}&\frac{k}{\lambda}-\sum_{i=1}^{k}\frac{1}{\omega_i}+\sum_{i=1}^{k-1}\left(\alpha_1R_i+\alpha_1-z_i\right)\frac{e^{-\lambda/\omega_i}}{\omega_i\left(1-e^{-\lambda/\omega_i}\right)}+\Big[\alpha_1\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)-z_k\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k\left(1-e^{-\lambda/\omega_k}\right)}\\ &+\sum_{i=1}^{k-1}\left(\alpha_2R_i+\alpha_2-1+z_i\right)\frac{e^{-\lambda/\omega_i}}{\omega_i\left(1-e^{-\lambda/\omega_i}\right)}+\Big[\alpha_2\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)+z_k-1\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k\left(1-e^{-\lambda/\omega_k}\right)}=0,\end{aligned}$$
$$\frac{\partial l}{\partial\alpha_1}=\frac{k_1}{\alpha_1}+\sum_{i=1}^{k-1}(R_i+1)\ln\left(1-e^{-\lambda/\omega_i}\right)+\Big[m-\sum_{i=1}^{k-1}(R_i+1)\Big]\ln\left(1-e^{-\lambda/\omega_k}\right)=0,$$
$$\frac{\partial l}{\partial\alpha_2}=\frac{k_2}{\alpha_2}+\sum_{i=1}^{k-1}(R_i+1)\ln\left(1-e^{-\lambda/\omega_i}\right)+\Big[n-\sum_{i=1}^{k-1}(R_i+1)\Big]\ln\left(1-e^{-\lambda/\omega_k}\right)=0.$$
However, owing to the nonlinearity of these equations, closed-form solutions are not available; hence, the Newton–Raphson method is employed to compute the roots of the equations and thereby the MLEs of the unknown parameters. Solving Equations (6)–(8) yields $\hat\alpha_1$, $\hat\alpha_2$, and $\hat\lambda$.
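The sketch below codes the log-likelihood (5) and maximizes it numerically with optim() (L-BFGS-B with a small positivity bound) rather than hand-coded Newton–Raphson; it is an illustrative alternative to the authors' implementation and is applied to the simulated data dat generated above with m = n = 25.

```r
# Log-likelihood (5) for a B-JPC sample; a sketch under the notation above.
loglik_bjpc <- function(par, W, Z, R, m, n) {
  lambda <- par[1]; a1 <- par[2]; a2 <- par[3]
  k  <- length(W); k1 <- sum(Z); k2 <- k - k1
  u  <- log(1 - exp(-lambda / W))          # log(1 - e^{-lambda/w_i})
  rho <- sum(R + 1)                        # sum_{i<k} (R_i + 1)
  e1 <- c(a1 * (R + 1) - Z[-k],        a1 * (m - rho) - Z[k])        # alpha_1 exponents
  e2 <- c(a2 * (R + 1) - (1 - Z[-k]),  a2 * (n - rho) - (1 - Z[k]))  # alpha_2 exponents
  k1 * log(a1) + k2 * log(a2) + k * log(lambda) -
    2 * sum(log(W)) - lambda * sum(1 / W) + sum((e1 + e2) * u)
}

# Minimize the negative log-likelihood; fit$hessian is then the observed information.
fit <- optim(c(0.5, 0.4, 0.8),
             function(par) -loglik_bjpc(par, dat$W, dat$Z, dat$R, m = 25, n = 25),
             method = "L-BFGS-B", lower = rep(1e-6, 3), hessian = TRUE)
fit$par   # MLEs of (lambda, alpha1, alpha2)
```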
Theorem 1.
The uniqueness and existence of maximum likelihood estimation.
Let $\xi_1(\lambda)=\partial l/\partial\lambda$, $\xi_2(\alpha_1)=\partial l/\partial\alpha_1$, and $\xi_3(\alpha_2)=\partial l/\partial\alpha_2$, as defined in Equations (6)–(8). Then the MLEs exist and are unique on $0<\lambda,\alpha_1,\alpha_2<\infty$; that is, $\hat\lambda$, $\hat\alpha_1$, and $\hat\alpha_2$ are the unique solutions of $\xi_1(\lambda)=0$, $\xi_2(\alpha_1)=0$, and $\xi_3(\alpha_2)=0$, provided $k_1>0$ and $k_2>0$, where $k_1=\sum_{i=1}^{k}z_i$ and $k_2=k-k_1=\sum_{i=1}^{k}(1-z_i)$.
Proof. 
From Equations (6)–(8)
$$\lim_{\lambda\to0^{+}}\xi_1(\lambda)=+\infty,\qquad \lim_{\alpha_1\to0^{+}}\xi_2(\alpha_1)=+\infty,\qquad \lim_{\alpha_2\to0^{+}}\xi_3(\alpha_2)=+\infty;$$
$$\lim_{\lambda\to\infty}\xi_1(\lambda)=-\sum_{i=1}^{k}\frac{1}{\omega_i}<0,\qquad \lim_{\alpha_1\to\infty}\xi_2(\alpha_1)<0,\qquad \lim_{\alpha_2\to\infty}\xi_3(\alpha_2)<0;$$
$$\xi_1'(\lambda)=\frac{\partial^{2}l}{\partial\lambda^{2}}<0,\qquad \xi_2'(\alpha_1)=\frac{\partial^{2}l}{\partial\alpha_1^{2}}<0,\qquad \xi_3'(\alpha_2)=\frac{\partial^{2}l}{\partial\alpha_2^{2}}<0.$$
Hence, $\xi_1(\lambda)$, $\xi_2(\alpha_1)$, and $\xi_3(\alpha_2)$ are continuous, monotonically decreasing functions on $(0,\infty)$ that decrease from $+\infty$ to a negative limit. Therefore, the MLEs of λ, α_1, and α_2 exist and are the unique solutions of $\xi_1(\lambda)=0$, $\xi_2(\alpha_1)=0$, and $\xi_3(\alpha_2)=0$ if $k_1>0$ and $k_2>0$.    □

3.2. Asymptotic Confidence Interval

Applying asymptotic theory, the asymptotic confidence intervals for λ, α_1, and α_2 are obtained from the variance–covariance matrix, which is the inverse of the Fisher information matrix. Writing θ = (λ, α_1, α_2), the Fisher information matrix of θ is expressed as follows:
$$I(\theta)=-E\begin{pmatrix}\dfrac{\partial^{2}l}{\partial\lambda^{2}}&\dfrac{\partial^{2}l}{\partial\lambda\,\partial\alpha_1}&\dfrac{\partial^{2}l}{\partial\lambda\,\partial\alpha_2}\\[6pt]\dfrac{\partial^{2}l}{\partial\alpha_1\,\partial\lambda}&\dfrac{\partial^{2}l}{\partial\alpha_1^{2}}&\dfrac{\partial^{2}l}{\partial\alpha_1\,\partial\alpha_2}\\[6pt]\dfrac{\partial^{2}l}{\partial\alpha_2\,\partial\lambda}&\dfrac{\partial^{2}l}{\partial\alpha_2\,\partial\alpha_1}&\dfrac{\partial^{2}l}{\partial\alpha_2^{2}}\end{pmatrix},\qquad l=l(\lambda,\alpha_1,\alpha_2\mid w,z,R).$$
Here,
$$\begin{aligned}\frac{\partial^{2}l}{\partial\lambda^{2}}={}&-\frac{k}{\lambda^{2}}-\sum_{i=1}^{k-1}\left(\alpha_1R_i+\alpha_1-z_i\right)\frac{e^{-\lambda/\omega_i}}{\omega_i^{2}\left(1-e^{-\lambda/\omega_i}\right)^{2}}-\Big[\alpha_1\Big(m-\sum_{i=1}^{k-1}(R_i+1)\Big)-z_k\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k^{2}\left(1-e^{-\lambda/\omega_k}\right)^{2}}\\ &-\sum_{i=1}^{k-1}\left(\alpha_2R_i+\alpha_2-1+z_i\right)\frac{e^{-\lambda/\omega_i}}{\omega_i^{2}\left(1-e^{-\lambda/\omega_i}\right)^{2}}-\Big[\alpha_2\Big(n-\sum_{i=1}^{k-1}(R_i+1)\Big)+z_k-1\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k^{2}\left(1-e^{-\lambda/\omega_k}\right)^{2}},\end{aligned}$$
$$\frac{\partial^{2}l}{\partial\lambda\,\partial\alpha_1}=\frac{\partial^{2}l}{\partial\alpha_1\,\partial\lambda}=\sum_{i=1}^{k-1}(R_i+1)\frac{e^{-\lambda/\omega_i}}{\omega_i\left(1-e^{-\lambda/\omega_i}\right)}+\Big[m-\sum_{i=1}^{k-1}(R_i+1)\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k\left(1-e^{-\lambda/\omega_k}\right)},$$
$$\frac{\partial^{2}l}{\partial\lambda\,\partial\alpha_2}=\frac{\partial^{2}l}{\partial\alpha_2\,\partial\lambda}=\sum_{i=1}^{k-1}(R_i+1)\frac{e^{-\lambda/\omega_i}}{\omega_i\left(1-e^{-\lambda/\omega_i}\right)}+\Big[n-\sum_{i=1}^{k-1}(R_i+1)\Big]\frac{e^{-\lambda/\omega_k}}{\omega_k\left(1-e^{-\lambda/\omega_k}\right)},$$
$$\frac{\partial^{2}l}{\partial\alpha_1\,\partial\alpha_2}=\frac{\partial^{2}l}{\partial\alpha_2\,\partial\alpha_1}=0,\qquad \frac{\partial^{2}l}{\partial\alpha_1^{2}}=-\frac{k_1}{\alpha_1^{2}},\qquad \frac{\partial^{2}l}{\partial\alpha_2^{2}}=-\frac{k_2}{\alpha_2^{2}}.$$
For the above expressions, the expected values are not easy to obtain. Thus, in order to obtain an approximate expected Fisher information matrix, we apply the observed Fisher information matrix. Suppose the MLE of the parameter θ = λ , α 1 , α 2 is θ ^ = λ ^ , α 1 ^ , α 2 ^ . Here, the observed Fisher information matrix I θ ^ turns out to be
$$I(\hat\theta)=-\begin{pmatrix}\dfrac{\partial^{2}l}{\partial\lambda^{2}}&\dfrac{\partial^{2}l}{\partial\lambda\,\partial\alpha_1}&\dfrac{\partial^{2}l}{\partial\lambda\,\partial\alpha_2}\\[6pt]\dfrac{\partial^{2}l}{\partial\alpha_1\,\partial\lambda}&\dfrac{\partial^{2}l}{\partial\alpha_1^{2}}&\dfrac{\partial^{2}l}{\partial\alpha_1\,\partial\alpha_2}\\[6pt]\dfrac{\partial^{2}l}{\partial\alpha_2\,\partial\lambda}&\dfrac{\partial^{2}l}{\partial\alpha_2\,\partial\alpha_1}&\dfrac{\partial^{2}l}{\partial\alpha_2^{2}}\end{pmatrix}\Bigg|_{\theta=\hat\theta}.$$
Furthermore, through inverting the observed Fisher information matrix, we obtain the observed variance–covariance matrix I 1 ( θ ^ ) of MLEs λ ^ , α 1 ^ , α 2 ^ , which is given by
$$I^{-1}(\hat\theta)=\begin{pmatrix}\mathrm{Var}(\hat\lambda)&\mathrm{Cov}(\hat\lambda,\hat\alpha_1)&\mathrm{Cov}(\hat\lambda,\hat\alpha_2)\\ \mathrm{Cov}(\hat\alpha_1,\hat\lambda)&\mathrm{Var}(\hat\alpha_1)&\mathrm{Cov}(\hat\alpha_1,\hat\alpha_2)\\ \mathrm{Cov}(\hat\alpha_2,\hat\lambda)&\mathrm{Cov}(\hat\alpha_2,\hat\alpha_1)&\mathrm{Var}(\hat\alpha_2)\end{pmatrix}.$$
Here, the asymptotic distribution of $\hat\theta$ is $N\big(\theta,I^{-1}(\hat\theta)\big)$. Therefore, the $100(1-\gamma)\%$ ACI of the parameter $\theta_j$ for a significance level $0<\gamma<1$ is constructed as $\hat\theta_j\pm z_{\gamma/2}\sqrt{\mathrm{Var}(\hat\theta_j)}$ ($j=1,2,3$), where $z_{\gamma/2}$ is the upper $(\gamma/2)$-th percentile of the standard normal distribution. Furthermore, the coverage probabilities of λ, α_1, and α_2 are given by
$$CP_{\lambda}=P\left(\left|\frac{\hat\lambda-\lambda}{\sqrt{\mathrm{Var}(\hat\lambda)}}\right|\le z_{\gamma/2}\right),\qquad CP_{\alpha_1}=P\left(\left|\frac{\hat\alpha_1-\alpha_1}{\sqrt{\mathrm{Var}(\hat\alpha_1)}}\right|\le z_{\gamma/2}\right),\qquad CP_{\alpha_2}=P\left(\left|\frac{\hat\alpha_2-\alpha_2}{\sqrt{\mathrm{Var}(\hat\alpha_2)}}\right|\le z_{\gamma/2}\right).$$
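Continuing the optim() sketch of Section 3.1 (which minimizes the negative log-likelihood, so fit$hessian approximates the observed Fisher information at the MLE), the 95% asymptotic intervals can be assembled as follows; this is a sketch, not the authors' exact implementation.

```r
vcov_hat <- solve(fit$hessian)            # observed variance-covariance matrix I^{-1}(theta_hat)
se       <- sqrt(diag(vcov_hat))          # standard errors of the MLEs
theta    <- fit$par
z        <- qnorm(0.975)                  # gamma = 0.05
ci <- cbind(lower = theta - z * se, upper = theta + z * se)
rownames(ci) <- c("lambda", "alpha1", "alpha2")
ci
```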

4. Bayesian Estimation

4.1. Without Order Restriction of Shape Parameters

Before studying Bayesian inference, we state the prior assumptions on the unknown parameters. We assume a very flexible prior on the shape parameters. Moreover, when one population is known to be superior to the other in reliability, an order restriction between the shape parameters is reasonable; in that case, an ordered prior is assigned to them. In this article, λ is the common scale parameter, while α_1 and α_2 are the two shape parameters. Suppose α = α_1 + α_2; then
$$\alpha_1+\alpha_2=\alpha\sim GA(b_1,b_2),\qquad \frac{\alpha_1}{\alpha}\sim Beta(b_3,b_4),$$
where they are independent, and the hyper-parameters b 1 , b 2 , b 3 , b 4 are positive numbers. The transformation of variables method can be easily used to obtain the joint PDF of α 1 , α 2 , derived as follows:
$$\pi(\alpha_1,\alpha_2\mid b_1,b_2,b_3,b_4)=\frac{\Gamma(b_3+b_4)}{\Gamma(b_3)\Gamma(b_4)\Gamma(b_1)}\,b_2^{b_1}\,\alpha_1^{b_3-1}\alpha_2^{b_4-1}(\alpha_1+\alpha_2)^{b_1-b_3-b_4}e^{-b_2(\alpha_1+\alpha_2)},$$
where $0<\alpha_1,\alpha_2<\infty$. The joint PDF (12) is a Beta-Gamma (BG) distribution, denoted BG(b_1, b_2, b_3, b_4). The bivariate BG distribution is fairly flexible and absolutely continuous. Based on the BG distribution, the following Lemma 1 is employed to generate samples.
Lemma 1.
$(X,Y)\sim BG(b_1,b_2,b_3,b_4)$ if and only if $Z=X+Y\sim GA(b_1,b_2)$, $V=\dfrac{X}{X+Y}\sim Beta(b_3,b_4)$, and Z and V are independently distributed.
Proof. 
Using the transformation method of variables, the above results are easy to prove. Moreover, the moments of (α_1, α_2) are
$$E(\alpha_1)=\frac{b_1}{b_2}\cdot\frac{b_3}{b_3+b_4},\qquad E(\alpha_2)=\frac{b_1}{b_2}\cdot\frac{b_4}{b_3+b_4},$$
$$E(\alpha_1^{2})=\frac{b_1(b_1+1)}{b_2^{2}}\cdot\frac{b_3(b_3+1)}{(b_3+b_4)(b_3+b_4+1)},\qquad E(\alpha_2^{2})=\frac{b_1(b_1+1)}{b_2^{2}}\cdot\frac{b_4(b_4+1)}{(b_3+b_4)(b_3+b_4+1)},$$
$$E(\alpha_1\alpha_2)=\frac{b_1(b_1+1)}{b_2^{2}}\cdot\frac{b_3b_4}{(b_3+b_4)(b_3+b_4+1)},\qquad \mathrm{Cov}(\alpha_1,\alpha_2)=\frac{b_1b_3b_4\,(b_3+b_4-b_1)}{b_2^{2}\,(b_3+b_4)^{2}(b_3+b_4+1)}.$$
   □
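For illustration, Lemma 1 can be used directly to draw from the BG prior. The sketch below uses the informative hyper-parameter values of Section 5.1 as an example; the function name rbg is ours.

```r
# Sampling (alpha1, alpha2) from BG(b1, b2, b3, b4) via Lemma 1:
# draw the sum from a Gamma and the proportion from a Beta.
rbg <- function(M, b1, b2, b3, b4) {
  z <- rgamma(M, shape = b1, rate = b2)   # alpha1 + alpha2
  v <- rbeta(M, b3, b4)                   # alpha1 / (alpha1 + alpha2)
  cbind(alpha1 = z * v, alpha2 = z * (1 - v))
}

set.seed(2)
pri <- rbg(10000, b1 = 1.3, b2 = 1, b3 = 1.3, b4 = 2)  # hyper-parameters from Section 5.1
colMeans(pri)    # compare with E(alpha1) = (b1/b2) * b3/(b3+b4), etc.
cor(pri)[1, 2]   # sign depends on b3 + b4 versus b1, as noted in the next paragraph
```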
The Beta-Gamma prior accommodates distinct dependence structures between the two shape parameters, and the correlation between α_1 and α_2 is determined by the values of b_1, b_3, and b_4. When b_3 + b_4 > b_1, α_1 and α_2 are positively correlated, while for b_3 + b_4 < b_1 they are negatively correlated. If b_3 + b_4 = b_1, α_1 and α_2 are independent.
Owing to the flexibility and wide application of the Gamma distribution in statistical inference, we suppose that the scale parameter
$$\lambda\sim GA(a_0,b_0)\ \text{ with prior density }\ \pi(\lambda\mid a_0,b_0),$$
where the hyper-parameters are a 0 > 0 and b 0 > 0 . In addition, the scale parameter λ and shape parameters α 1 , α 2 are independent.
Based on the prior assumptions discussed previously and the squared error loss function, the Bayesian estimators are considered for all parameters of the generalized inverted exponential distribution in this section. Then, we also obtain the associated credible intervals under different situations. Here, the likelihood function (4) is also denoted as
$$\begin{aligned}L(\lambda,\alpha_1,\alpha_2\mid w,z,R)={}&c\,\alpha_1^{k_1}\alpha_2^{k_2}\lambda^{k}\,e^{-2\sum_{i=1}^{k}\ln\omega_i}\,e^{-\lambda\sum_{i=1}^{k}1/\omega_i}\,e^{(\alpha_1-1)A_1(\lambda)}\,e^{(\alpha_2-1)A_2(\lambda)}\\ &\times\prod_{i=1}^{k-1}\left(1-e^{-\lambda/\omega_i}\right)^{\alpha_1(R_i+1-z_i)}\left(1-e^{-\lambda/\omega_k}\right)^{\alpha_1\left(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\right)}\\ &\times\prod_{i=1}^{k-1}\left(1-e^{-\lambda/\omega_i}\right)^{\alpha_2(R_i+z_i)}\left(1-e^{-\lambda/\omega_k}\right)^{\alpha_2\left(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\right)}.\end{aligned}$$

4.1.1. Posterior Analysis: Scale Parameter λ Is known

In this section, when we know the scale parameter λ and the order restriction on α 1 and α 2 is not considered, the Bayesian estimates and the corresponding credible intervals are constructed. Ignoring the constants, the joint posterior distribution of parameters α 1 and α 2 is given by
$$\begin{aligned}\pi(\alpha_1,\alpha_2\mid\lambda,data)\;\propto\;&\alpha_1^{k_1+b_3-1}\alpha_2^{k_2+b_4-1}(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-(\alpha_1+\alpha_2)\left(b_2-A(\lambda)\right)}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\},\end{aligned}$$
for $0<\alpha_1,\alpha_2<\infty$, where $A(\lambda)=\min\{A_1(\lambda),A_2(\lambda)\}$ and $A_1(\lambda)$, $A_2(\lambda)$ are as defined after (4). Therefore, $\pi(\alpha_1,\alpha_2\mid\lambda,data)$ can also be expressed as
$$\pi(\alpha_1,\alpha_2\mid\lambda,data)\;\propto\;h_0(\alpha_1,\alpha_2)\times\pi_0(\alpha_1,\alpha_2\mid\lambda,data),$$
where
$$\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\sim BG\big(k+b_1,\;b_2-A(\lambda),\;k_1+b_3,\;k_2+b_4\big),$$
$$\begin{aligned}h_0(\alpha_1,\alpha_2)={}&e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}.\end{aligned}$$
Therefore, we regard the Beta-Gamma prior as a conjugate prior for the known scale parameter λ . From Lemma 1, the corresponding posterior means are the Bayesian estimates of α 1 and α 2 concerning the SELF. Hence, they are directly calculated as
$$\hat\alpha_1=E(\alpha_1)=\frac{b_1+k}{b_2-A(\lambda)}\cdot\frac{b_3+k_1}{b_3+b_4+k}\qquad\text{and}\qquad \hat\alpha_2=E(\alpha_2)=\frac{b_1+k}{b_2-A(\lambda)}\cdot\frac{b_4+k_2}{b_3+b_4+k}.$$
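A small sketch evaluating these closed-form estimates on the simulated B-JPC data from Section 3, with A(λ) = min{A_1(λ), A_2(λ)} as defined after (4); the hyper-parameter values are again the informative choice of Section 5.1, and the helper names are ours.

```r
# A1, A2 and the closed-form Bayes estimates under SELF for known lambda.
A1 <- function(lambda, W, Z) sum(Z * log(1 - exp(-lambda / W)))
A2 <- function(lambda, W, Z) sum((1 - Z) * log(1 - exp(-lambda / W)))

bayes_known_lambda <- function(lambda, W, Z, b1, b2, b3, b4) {
  k <- length(W); k1 <- sum(Z); k2 <- k - k1
  A <- min(A1(lambda, W, Z), A2(lambda, W, Z))
  c(alpha1 = (b1 + k) / (b2 - A) * (b3 + k1) / (b3 + b4 + k),
    alpha2 = (b1 + k) / (b2 - A) * (b4 + k2) / (b3 + b4 + k))
}

bayes_known_lambda(0.5, dat$W, dat$Z, b1 = 1.3, b2 = 1, b3 = 1.3, b4 = 2)
```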

4.1.2. Posterior Analysis: Scale Parameter λ Is Not Known

Furthermore, we analyze the situation based on the unknown scale parameter λ . On this condition, the joint posterior distribution of parameters λ , α 1 , and α 2 is derived as follows:
$$\begin{aligned}\pi(\lambda,\alpha_1,\alpha_2\mid data)\;\propto\;&\alpha_1^{k_1+b_3-1}\alpha_2^{k_2+b_4-1}(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-(\alpha_1+\alpha_2)\left(b_2-A(\lambda)\right)}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\lambda^{a_0+k-1}\,e^{-\lambda\left(b_0+\sum_{i=1}^{k}1/\omega_i\right)}\,e^{-2\sum_{i=1}^{k}\ln\omega_i-\sum_{i=1}^{k}\ln\left(1-e^{-\lambda/\omega_i}\right)}\times\frac{1}{\left(b_2-A(\lambda)\right)^{b_1+k}}.\end{aligned}$$
Hence, the Bayesian estimate of g λ , α 1 , α 2 regarding the SELF is expressed as
$$E\big[g(\lambda,\alpha_1,\alpha_2)\mid data\big]=\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} g(\lambda,\alpha_1,\alpha_2)\,\pi(\lambda,\alpha_1,\alpha_2\mid data)\,d\lambda\,d\alpha_1\,d\alpha_2.$$
However, an explicit form of (16) is generally not available. Therefore, the importance sampling (IS) technique is applied to obtain the Bayesian estimates, and the HPD credible intervals are also constructed; these results are derived as follows.
For further development, π λ , α 1 , α 2 | d a t a is expressed again as
$$\pi(\lambda,\alpha_1,\alpha_2\mid data)\;\propto\;h_0(\lambda,\alpha_1,\alpha_2)\times\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\times\pi_1(\lambda\mid data),$$
where
$$\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\sim BG\big(k+b_1,\;b_2-A(\lambda),\;k_1+b_3,\;k_2+b_4\big),\qquad \pi_1(\lambda\mid data)\sim GA\Big(k+a_0,\;b_0+\sum_{i=1}^{k}\frac{1}{\omega_i}\Big),$$
$$\begin{aligned}h_0(\lambda,\alpha_1,\alpha_2)={}&e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\,e^{-2\sum_{i=1}^{k}\ln\omega_i-\sum_{i=1}^{k}\ln\left(1-e^{-\lambda/\omega_i}\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\times\frac{1}{\left(b_2-A(\lambda)\right)^{b_1+k}}.\end{aligned}$$
Therefore, the Bayesian estimate of g λ , α 1 , α 2 regarding the SELF is expressed as
$$E\big[g(\lambda,\alpha_1,\alpha_2)\mid data\big]=\frac{\displaystyle\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} g(\lambda,\alpha_1,\alpha_2)\,h_0(\lambda,\alpha_1,\alpha_2)\,\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\,\pi_1(\lambda\mid data)\,d\lambda\,d\alpha_1\,d\alpha_2}{\displaystyle\int_0^{\infty}\!\!\int_0^{\infty}\!\!\int_0^{\infty} h_0(\lambda,\alpha_1,\alpha_2)\,\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\,\pi_1(\lambda\mid data)\,d\lambda\,d\alpha_1\,d\alpha_2}.$$
We employ the following Algorithm 2 to obtain the Bayesian estimate and corresponding HPD credible interval of g λ , α 1 , α 2 .
Algorithm 2: The application of the importance sampling technique in Bayesian estimates.
Step 1: Under the given observed data, generate λ from π 1 λ | d a t a .
Step 2: Generate α 1 and α 2 from π 0 α 1 , α 2 | λ , d a t a for the given λ .
Step 3: Repeat step 1 and 2 M times to acquire λ 1 , α 11 , α 21 , , λ M , α 1 M , α 2 M .
Step 4: In order to calculate the Bayesian estimate of $g(\lambda,\alpha_1,\alpha_2)$, compute $(h_{01},\ldots,h_{0M})$
            and $(g_1,\ldots,g_M)$. Here,
$$h_{0i}=h_0(\lambda_i,\alpha_{1i},\alpha_{2i})\quad\text{and}\quad g_i=g(\lambda_i,\alpha_{1i},\alpha_{2i}).$$
Step 5: The approximate Bayesian estimates of g λ , α 1 , α 2 are given by
$$\hat g_{IS}(\lambda,\alpha_1,\alpha_2)=\frac{\sum_{i=1}^{M}g_i\,h_{0i}}{\sum_{i=1}^{M}h_{0i}}.$$
Here, the Bayesian estimates of all unknown parameters λ , α 1 , α 2 under SELF are obtained as
$$\hat\alpha_{1,IS}=\frac{\sum_{i=1}^{M}\alpha_{1i}\,h_{0i}}{\sum_{i=1}^{M}h_{0i}},\qquad \hat\alpha_{2,IS}=\frac{\sum_{i=1}^{M}\alpha_{2i}\,h_{0i}}{\sum_{i=1}^{M}h_{0i}},\qquad \hat\lambda_{IS}=\frac{\sum_{i=1}^{M}\lambda_i\,h_{0i}}{\sum_{i=1}^{M}h_{0i}},\qquad h_{0i}=h_0(\lambda_i,\alpha_{1i},\alpha_{2i}).$$
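The following sketch implements Algorithm 2 for the no-order-restriction case, reusing A1(), A2() and the simulated data dat from earlier sketches. It works with the logarithm of h_0 and rescales the weights by their maximum for numerical stability, which leaves the ratio in Step 5 unchanged; the constant factor exp(-2 Σ ln ω_i) cancels in the ratio and is omitted. The hyper-parameter defaults are the informative values of Section 5.1; this is an illustrative sketch, not the authors' code.

```r
is_bayes <- function(W, Z, R, m, n, M = 1000,
                     a0 = 0.25, b0 = 0.5, b1 = 1.3, b2 = 1, b3 = 1.3, b4 = 2) {
  k <- length(W); k1 <- sum(Z); k2 <- k - k1; rho <- sum(R + 1)
  # Step 1: lambda from pi_1(lambda | data) = GA(k + a0, b0 + sum(1/w_i))
  lam <- rgamma(M, shape = k + a0, rate = b0 + sum(1 / W))
  out <- t(vapply(lam, function(l) {
    A1l <- A1(l, W, Z); A2l <- A2(l, W, Z); Al <- min(A1l, A2l)
    # Step 2: (alpha1, alpha2) from pi_0 = BG(k + b1, b2 - A, k1 + b3, k2 + b4)
    s  <- rgamma(1, shape = k + b1, rate = b2 - Al)
    v  <- rbeta(1, k1 + b3, k2 + b4)
    a1 <- s * v; a2 <- s * (1 - v)
    # log of the weight function h_0(lambda, alpha1, alpha2)
    u  <- log(1 - exp(-l / W))
    e1 <- c(a1 * (R + 1 - Z[-k]), a1 * (m - rho - Z[k]))
    e2 <- c(a2 * (R + Z[-k]),     a2 * (n - rho - 1 + Z[k]))
    logh <- a1 * (A1l - Al) + a2 * (A2l - Al) - sum(u) +
            sum((e1 + e2) * u) - (b1 + k) * log(b2 - Al)
    c(l, a1, a2, logh)
  }, numeric(4)))
  w <- exp(out[, 4] - max(out[, 4]))          # stabilized importance weights
  setNames(colSums(out[, 1:3] * w) / sum(w),  # weighted means = Bayes estimates
           c("lambda", "alpha1", "alpha2"))
}

set.seed(3)
is_bayes(dat$W, dat$Z, dat$R, m = 25, n = 25)
```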

4.2. With Order Restriction of Shape Parameters

We suppose α 2 > α 1 when we consider the order restriction on the shape parameters α 1 and α 2 . Then, the joint prior distribution of parameters α 1 and α 2 is written as
$$\pi(\alpha_1,\alpha_2\mid b_1,b_2,b_3,b_4)=\frac{\Gamma(b_3+b_4)}{\Gamma(b_3)\Gamma(b_4)\Gamma(b_1)}\,b_2^{b_1}\,(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-b_2(\alpha_1+\alpha_2)}\left(\alpha_1^{b_3-1}\alpha_2^{b_4-1}+\alpha_1^{b_4-1}\alpha_2^{b_3-1}\right),\qquad 0<\alpha_1<\alpha_2<\infty.$$
Moreover, the joint prior (17) is the joint PDF of the ordered pair (α_1, α_2). Here,
$$(\alpha_1,\alpha_2)=\begin{cases}(\alpha_1^{*},\alpha_2^{*}) & \text{if }\alpha_1^{*}<\alpha_2^{*},\\ (\alpha_2^{*},\alpha_1^{*}) & \text{if }\alpha_1^{*}\ge\alpha_2^{*},\end{cases}$$
where $(\alpha_1^{*},\alpha_2^{*})$ follows BG(b_1, b_2, b_3, b_4). The joint prior (17) is referred to as the ordered Beta-Gamma prior distribution, denoted OBG(b_1, b_2, b_3, b_4). Simultaneously, the common scale parameter λ follows the prior $\pi(\lambda\mid a_0,b_0)$ defined previously, independently of (α_1, α_2).

4.2.1. Posterior Analysis: Scale Parameter λ Is Known

In this section, the Bayesian estimate is discussed concerning the order restriction α 1 < α 2 . The joint posterior distribution of parameters α 1 and α 2 can be expressed as
$$\begin{aligned}\pi(\alpha_1,\alpha_2\mid\lambda,data)\;\propto\;&\left(\alpha_1^{b_3+J-1}\alpha_2^{b_4+J-1}+\alpha_1^{b_4+J-1}\alpha_2^{b_3+J-1}\right)(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-b_2(\alpha_1+\alpha_2)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times e^{(\alpha_1-1)A_1(\lambda)}\,e^{(\alpha_2-1)A_2(\lambda)}.\end{aligned}$$
The function (18) is also expressed as
$$\begin{aligned}\pi(\alpha_1,\alpha_2\mid\lambda,data)\;\propto\;&\left(\alpha_1^{b_3+J-1}\alpha_2^{b_4+J-1}+\alpha_1^{b_4+J-1}\alpha_2^{b_3+J-1}\right)(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-(\alpha_1+\alpha_2)\left(b_2-A(\lambda)\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\alpha_1^{k_1-J}\alpha_2^{k_2-J}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)},\end{aligned}$$
where $A(\lambda)=\min\{A_1(\lambda),A_2(\lambda)\}$ and $J=\min\{k_1,k_2\}$.
Then, the joint posterior distribution of parameters α 1 and α 2 given in function (19) is written as follows
$$\pi(\alpha_1,\alpha_2\mid\lambda,data)\;\propto\;h_0(\alpha_1,\alpha_2)\times\pi_0(\alpha_1,\alpha_2\mid\lambda,data),$$
where
$$\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\sim OBG\big(b_1+2J,\;b_2-A(\lambda),\;b_3+J,\;b_4+J\big),$$
$$\begin{aligned}h_0(\alpha_1,\alpha_2)={}&\alpha_1^{k_1-J}\alpha_2^{k_2-J}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}.\end{aligned}$$
Therefore, Algorithm A1 (see Appendix A.1 for details) is used for obtaining the Bayesian estimates along with the CIs of the parameters α 1 and α 2 .

4.2.2. Posterior Analysis: Scale Parameter λ Is Not Known

Based on the order restriction α 1 < α 2 and the unknown scale parameter λ , the posterior distribution of all parameters α 1 , α 2 and λ can be expressed as
$$\begin{aligned}\pi(\lambda,\alpha_1,\alpha_2\mid data)\;\propto\;&\left(\alpha_1^{b_3+J-1}\alpha_2^{b_4+J-1}+\alpha_1^{b_4+J-1}\alpha_2^{b_3+J-1}\right)(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-b_2(\alpha_1+\alpha_2)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times e^{(\alpha_1-1)A_1(\lambda)}\,e^{(\alpha_2-1)A_2(\lambda)}\,e^{-2\sum_{i=1}^{k}\ln\omega_i-\lambda\sum_{i=1}^{k}1/\omega_i}\,\lambda^{a_0+k-1}e^{-b_0\lambda}\times\frac{1}{\left(b_2-A(\lambda)\right)^{b_1+2J}}.\end{aligned}$$
Here, function (20) is rewritten as
$$\begin{aligned}\pi(\lambda,\alpha_1,\alpha_2\mid data)\;\propto\;&\left(\alpha_1^{b_3+J-1}\alpha_2^{b_4+J-1}+\alpha_1^{b_4+J-1}\alpha_2^{b_3+J-1}\right)(\alpha_1+\alpha_2)^{b_1-b_3-b_4}\,e^{-(\alpha_1+\alpha_2)\left(b_2-A(\lambda)\right)}\\ &\times\alpha_1^{k_1-J}\alpha_2^{k_2-J}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\,e^{-2\sum_{i=1}^{k}\ln\omega_i-\sum_{i=1}^{k}\ln\left(1-e^{-\lambda/\omega_i}\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\lambda^{a_0+k-1}\,e^{-\lambda\left(b_0+\sum_{i=1}^{k}1/\omega_i\right)}\times\frac{1}{\left(b_2-A(\lambda)\right)^{b_1+2J}},\end{aligned}$$
where $A(\lambda)=\min\{A_1(\lambda),A_2(\lambda)\}$ and $J=\min\{k_1,k_2\}$. According to (21), the joint posterior distribution of λ, α_1, and α_2 in this situation is given by
$$\pi(\lambda,\alpha_1,\alpha_2\mid data)\;\propto\;h_0(\lambda,\alpha_1,\alpha_2)\times\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\times\pi_1(\lambda\mid data),$$
where
$$\pi_0(\alpha_1,\alpha_2\mid\lambda,data)\sim OBG\big(b_1+2J,\;b_2-A(\lambda),\;b_3+J,\;b_4+J\big),\qquad \pi_1(\lambda\mid data)\sim GA\Big(a_0+k,\;b_0+\sum_{i=1}^{k}\frac{1}{\omega_i}\Big),$$
$$\begin{aligned}h_0(\lambda,\alpha_1,\alpha_2)={}&\alpha_1^{k_1-J}\alpha_2^{k_2-J}\,e^{-\alpha_1\left(A(\lambda)-A_1(\lambda)\right)}\,e^{-\alpha_2\left(A(\lambda)-A_2(\lambda)\right)}\,e^{-2\sum_{i=1}^{k}\ln\omega_i-\sum_{i=1}^{k}\ln\left(1-e^{-\lambda/\omega_i}\right)}\\ &\times\exp\Big\{\alpha_1\Big[\sum_{i=1}^{k-1}(R_i+1-z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(m-\sum_{i=1}^{k-1}(R_i+1)-z_k\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\\ &\times\exp\Big\{\alpha_2\Big[\sum_{i=1}^{k-1}(R_i+z_i)\ln\big(1-e^{-\lambda/\omega_i}\big)+\Big(n-\sum_{i=1}^{k-1}(R_i+1)+z_k-1\Big)\ln\big(1-e^{-\lambda/\omega_k}\big)\Big]\Big\}\times\frac{1}{\left(b_2-A(\lambda)\right)^{b_1+2J}}.\end{aligned}$$
Then, Algorithm A2 (see Appendix A.2) is applied to obtain the Bayesian estimates and CIs of all parameters λ , α 1 , and α 2 .

4.3. HPD Credible Interval

In this section, the generated importance samples are used to construct the highest posterior density CIs of α_1, α_2, and λ. Let $\alpha_{1(1)}<\cdots<\alpha_{1(M)}$ and $\alpha_{2(1)}<\cdots<\alpha_{2(M)}$ be the ordered values of $\alpha_{i1},\alpha_{i2},\ldots,\alpha_{iM}$ ($i=1,2$). Then, employing the algorithm provided in [19], the $100(1-\gamma)\%$ HPD credible intervals ($0<\gamma<1$) of λ, α_1, and α_2 are obtained as $\big(\lambda_{(j)},\lambda_{(j+[(1-\gamma)M])}\big)$, $\big(\alpha_{1(j)},\alpha_{1(j+[(1-\gamma)M])}\big)$, and $\big(\alpha_{2(j)},\alpha_{2(j+[(1-\gamma)M])}\big)$, where j satisfies
$$\lambda_{(j+[M(1-\gamma)])}-\lambda_{(j)}=\min_{1\le i\le M\gamma}\big(\lambda_{(i+[M(1-\gamma)])}-\lambda_{(i)}\big),\qquad j=1,2,\ldots,M,$$
$$\alpha_{1(j+[M(1-\gamma)])}-\alpha_{1(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{1(i+[M(1-\gamma)])}-\alpha_{1(i)}\big),\qquad j=1,2,\ldots,M,$$
$$\alpha_{2(j+[M(1-\gamma)])}-\alpha_{2(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{2(i+[M(1-\gamma)])}-\alpha_{2(i)}\big),\qquad j=1,2,\ldots,M,$$
where $[y]$ denotes the integer part of y.
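A sketch of the shortest-interval search above for a single parameter, assuming an (approximately) equally weighted posterior sample; as a usage example it is applied to the prior draws pri generated in Section 4.1, since any vector of draws can be passed.

```r
# Shortest 100(1 - gamma)% interval from a sample of draws (a sketch).
hpd_interval <- function(draws, gamma = 0.05) {
  s <- sort(draws); M <- length(s)
  nin <- floor((1 - gamma) * M)                 # [M(1 - gamma)]
  widths <- s[(nin + 1):M] - s[1:(M - nin)]     # widths of all candidate intervals
  j <- which.min(widths)
  c(lower = s[j], upper = s[j + nin])
}

hpd_interval(pri[, "alpha1"])
```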

5. Simulation Study and Data Analysis

5.1. Simulation Study

We conduct simulation experiments in this section to analyze the performance of the various estimators. B-JPC samples are generated for combinations of sample sizes, effective sample sizes, censoring schemes (m, n, k), and true parameter values of GIED(λ, α_1, α_2). All calculations are carried out in R. The true value of the common scale parameter is λ = 0.5, and the shape parameters (α_1, α_2) are taken as (0.4, 0.8) and (0.4, 0.3). The shape parameters jointly follow a Beta-Gamma prior and the scale parameter follows a Gamma prior; these priors are used to compute the Bayesian estimates under SELF.
In the Bayesian estimation, a simulation study is first performed with informative priors (IP). In order to match the prior expectations with the true expected values of the two populations, the hyper-parameters are selected as (b_1, b_2, b_3, b_4, a_0, b_0) = (1.3, 1, 1.3, 2, 0.25, 0.5) and (1.4, 2, 2.6, 2, 0.25, 0.5). For the non-informative prior (NIP), the hyper-parameters are chosen as b_1 = b_2 = b_3 = b_4 = a_0 = b_0 = 10^{-5}, close to zero, to avoid an improper posterior density. The notation R = (3, 2, 2^(5)) means R_1 = 3, R_2 = 2, R_3 = R_4 = ⋯ = R_7 = 2.
Based on the various B-JPC censoring schemes, the MLEs and Bayesian estimates of all parameters are discussed. The whole process is repeated 1000 times for each case of MLEs, and we obtain the average estimates (AV), variance estimates, and associated mean squared errors (MSE). The corresponding results are recorded in Table 1 and Table 2. The standard errors are computed as the square roots of the variance estimates. In Table 3 and Table 4, we also record the average lengths (AL) of the 95% asymptotic CIs and the corresponding 95% coverage percentages (CP) of all parameters based on 1000 samples. For the Bayesian estimation, with and without the order restriction between the two shape parameters, the average Bayesian estimates (BE) and corresponding MSEs for both the NIP and IP are recorded in Table 5 and Table 6; these computations are also repeated 1000 times. Based on the importance sampling procedure, Table 7 and Table 8 present the ALs and CPs of the 95% HPD credible intervals, with M = 1000 in the importance sampling procedure.
From Tables 1 and 5, it is observed that the MSEs and standard errors of all parameters increase with the effective sample size k. Compared with the MLEs, the Bayesian estimators perform better in terms of MSE. Moreover, the Bayesian estimators also perform better under the IP than under the NIP, as expected. When the order restriction between the two shape parameters is imposed, the Bayesian estimation of α_1 and α_2 is slightly better still.
From Table 3 and Table 8, we also observe that the average lengths of 95% asymptotic CIs are longer than those of the 95% HPD credible intervals in most cases. Furthermore, with respect to coverage percentage, the performance of the HPD credible intervals is better for IP than NIP, and the above two HPD credible intervals with order restriction perform better than that without order restriction.
By comparing the Bayesian estimators and MLEs, we observe that the Bayesian estimators under the NIP outperform the MLEs. Therefore, when there is no prior information on the parameters and the order restriction between the shape parameters is considered, we recommend employing the Bayesian estimators with the NIP and obtaining the corresponding CIs under the NIP. Furthermore, if some prior knowledge about the unknown parameters is available, we give priority to the IP.

5.2. Real Data Analysis

In order to illustrate whether these different methods work well in practice, we consider real datasets in this section. Here, the real datasets represent the breaking strength of jute fiber and can be obtained from Ref. [9]. Dataset 1 and dataset 2 show the breaking strength of jute fiber, where the gauge lengths are 10 mm and 20 mm. These data are presented below.
Dataset 1 (10 mm):
43.93, 50.16, 101.15, 123.06, 108.94, 151.48, 163.40, 141.38, 177.25, 212.13, 183.16, 257.44, 291.27, 303.90, 262.90, 353.24, 323.83, 376.42, 422.11, 506.60, 383.43, 530.55, 671.49, 590.48, 693.73, 637.66, 727.23, 700.74, 704.66, 778.17.
Dataset 2 (20 mm):
36.75, 45.58, 71.46, 48.01, 99.72, 83.55, 116.99, 119.86, 113.85, 145.96, 166.49, 187.85, 200.16, 187.13, 284.64, 244.53, 350.7, 375.81, 456.6, 419.02, 578.62, 581.60, 585.57, 547.44, 594.29, 688.16, 662.66, 756.70, 707.36, 765.14.
According to [9], we divide the real data by 1000, which does not affect the inference, and fit a two-parameter GIED to each dataset. The fit is assessed using the Kolmogorov–Smirnov (K-S) distance between the fitted distribution and the empirical distribution function, together with the corresponding p-value, for both datasets. The MLEs of all parameters and these goodness-of-fit results are recorded in Table 9. To check whether the two datasets have equal scale parameters, the likelihood-ratio test of H_0: λ_1 = λ_2 is performed; the associated p-value is 0.937, so we do not reject the null hypothesis. Under this assumption, the MLE of the common scale parameter is 0.195, and the MLEs of the shape parameters are 1.394 and 1.270 for datasets 1 and 2, respectively.
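A sketch of the marginal GIED fit and K-S check for one dataset, using the hypothetical dgied/pgied helpers of Section 2 and optim(); the starting values are our own choice, so the resulting estimates may differ slightly from those in Table 9.

```r
# Fit GIED(alpha, lambda) to one sample by maximum likelihood (a sketch).
fit_gied <- function(x) {
  nll <- function(par) -sum(log(dgied(x, par[1], par[2])))
  optim(c(1, 0.2), nll, method = "L-BFGS-B", lower = c(1e-6, 1e-6))$par
}

d1 <- c(43.93, 50.16, 101.15, 123.06, 108.94, 151.48, 163.40, 141.38, 177.25,
        212.13, 183.16, 257.44, 291.27, 303.90, 262.90, 353.24, 323.83, 376.42,
        422.11, 506.60, 383.43, 530.55, 671.49, 590.48, 693.73, 637.66, 727.23,
        700.74, 704.66, 778.17) / 1000       # gauge length 10 mm, scaled as in [9]

par1 <- fit_gied(d1)                          # (alpha, lambda) for dataset 1
ks.test(d1, function(q) pgied(q, par1[1], par1[2]))
```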
For the above datasets, we generate three balanced joint censored samples based on the three different censoring schemes. The third column of Table 10 represents the B-JPC samples, while the second column shows various censoring schemes. We compute the Bayesian estimates and the maximum likelihood estimates of all parameters in the above three cases. For MLEs, the estimated values of unknown parameters and the corresponding 95% CIs are recorded. For Bayesian inference, owing to the lack of prior information on parameters, the non-informative prior is employed to estimate all parameters, and the corresponding 95% HPD credible intervals are also constructed. Here, they are also discussed in the case of whether the shape parameters have order restriction. In the process of the importance sampling procedure, the value of M is taken as 1000. These results are listed in Table 11.

6. Optimum Censoring Scheme

Here, we discuss the optimal censoring schemes with given values of m , n , k . In the previous sections, the process of the interval and point estimation of all parameters based on the B-JPC samples of the GIED was discussed. Furthermore, many methods are considered to solve the problem of selecting an optimum censoring scheme (OCS), which can be found in the literature; for example, see Refs. [20,21]. Here, we use the following classical optimal criteria to obtain the OCS in the case of B-JPC schemes:
Criterion 1: Through this criterion, we minimize the determinant of the inverse of the observed Fisher information matrix, $I^{-1}(\hat\theta)$, of the maximum likelihood estimates of all parameters, where $\hat\theta=(\hat\lambda,\hat\alpha_1,\hat\alpha_2)$.
Criterion 2: This criterion is based on the minimization of the trace of the matrix $I^{-1}(\hat\theta)$ (defined as above).
Criterion 3: According to this criterion, we minimize the greatest eigenvalue of the matrix $I^{-1}(\hat\theta)$ of the MLEs of all parameters.
Criterion 4: This criterion is based on the maximization of the trace of the observed Fisher information matrix $I(\hat\theta)$.
Criterion 5: This criterion is based on some specific choices of a quantile “q”. For a fixed weight 0 ω 1 , Criterion 5 is given by
$$C_5(q)=\omega\,\mathrm{Var}\big(\ln\hat T_{q,1}\big)+(1-\omega)\,\mathrm{Var}\big(\ln\hat T_{q,2}\big).$$
In this criterion, the qth quantile points of the two generalized inverted exponential distributions are
$$T_{q,1}=-\lambda\Big/\ln\!\left[1-(1-q)^{1/\alpha_1}\right],\qquad T_{q,2}=-\lambda\Big/\ln\!\left[1-(1-q)^{1/\alpha_2}\right].$$
Hence, the logarithmic forms of the qth quantile of the two generalized inverted exponential distributions are calculated as
$$\ln T_{q,1}=\ln\lambda-\ln\!\left[-\ln\!\left(1-(1-q)^{1/\alpha_1}\right)\right],\qquad \ln T_{q,2}=\ln\lambda-\ln\!\left[-\ln\!\left(1-(1-q)^{1/\alpha_2}\right)\right],$$
where $0<q<1$. Denote $V_1=\left(\dfrac{\partial\ln T_{q,1}}{\partial\lambda},\dfrac{\partial\ln T_{q,1}}{\partial\alpha_1}\right)$ and $V_2=\left(\dfrac{\partial\ln T_{q,2}}{\partial\lambda},\dfrac{\partial\ln T_{q,2}}{\partial\alpha_2}\right)$; then $\mathrm{Var}\big(\ln\hat T_{q,1}\big)$ and $\mathrm{Var}\big(\ln\hat T_{q,2}\big)$ can be approximated by the delta method as follows:
$$\mathrm{Var}\big(\ln\hat T_{q,1}\big)=V_1\,I^{-1}(\beta_1)\,V_1^{T},\qquad \mathrm{Var}\big(\ln\hat T_{q,2}\big)=V_2\,I^{-1}(\beta_2)\,V_2^{T}.$$
For further development, $I^{-1}(\beta_1)$ and $I^{-1}(\beta_2)$ can be given as follows:
$$I^{-1}(\beta_1)=\begin{pmatrix}I^{11}&I^{12}\\ I^{21}&I^{22}\end{pmatrix}\qquad\text{and}\qquad I^{-1}(\beta_2)=\begin{pmatrix}I^{11}&I^{13}\\ I^{31}&I^{33}\end{pmatrix},$$
and
$$\frac{\partial\ln T_{q,k}}{\partial\alpha_k}=\frac{(1-q)^{1/\alpha_k}\,\ln(1-q)}{\alpha_k^{2}\left[(1-q)^{1/\alpha_k}-1\right]\ln\!\left[1-(1-q)^{1/\alpha_k}\right]},\qquad \frac{\partial\ln T_{q,k}}{\partial\lambda}=\frac{1}{\lambda},\qquad k=1,2.$$
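Criterion 5 can be evaluated with the delta method above; the sketch below assumes the MLE vector and observed variance-covariance matrix from the optim() fit of Section 3 and treats its first, second, and third components as λ̂, α̂1, and α̂2.

```r
# Criterion 5 via the delta method (a sketch under the assumptions above).
crit5 <- function(theta, vcov_hat, q = 0.05, w = 0.5) {
  lam <- theta[1]
  v_logT <- function(alpha, idx) {
    u <- (1 - q)^(1 / alpha)
    # gradient of log T_q with respect to (lambda, alpha_k)
    g <- c(1 / lam,
           u * log(1 - q) / (alpha^2 * (u - 1) * log(1 - u)))
    V <- vcov_hat[c(1, idx), c(1, idx)]   # block of I^{-1} for (lambda, alpha_k)
    drop(t(g) %*% V %*% g)
  }
  w * v_logT(theta[2], 2) + (1 - w) * v_logT(theta[3], 3)
}

crit5(fit$par, solve(fit$hessian))
```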
Based on the criteria discussed above, we illustrate the selection of the optimum censoring scheme using the real datasets for gauge lengths of 10 mm and 20 mm described in the previous section. For Criterion 5, ω and q are taken as 0.5 and 0.05, respectively. The values of the greatest eigenvalue of $I^{-1}(\hat\theta)$, trace$\big(I^{-1}(\hat\theta)\big)$, det$\big(I^{-1}(\hat\theta)\big)$, and trace$\big(I(\hat\theta)\big)$, as well as $C_5(q)$, are recorded in Table 12. According to Table 12, censoring scheme 2 is the optimal scheme in terms of criteria 1, 2, 3, and 4. Furthermore, scheme 3 is the optimal scheme for criterion 5 at ω = 0.5.

7. Conclusions

Throughout this article, the analysis of the B-JPC scheme for different populations is considered. Suppose that the lifetime distributions of the products from two different populations follow a GIED with different shape parameters but the same scale parameter. Here, the MLEs of parameters along with the corresponding 95% confidence intervals are obtained, and the existence and uniqueness of MLEs are proved. Assuming that the shape parameters jointly follow an ordered Beta-Gamma prior and the common scale parameter follows a Gamma prior, the Bayesian estimates are derived by importance sampling technique and the corresponding 95% HPD credible intervals are also constructed. The above prior assumptions are commonly used in the statistical inference process and the order restriction inference between the different shape parameters is considered.
Through a considerable amount of simulation study, we find that the performance of Bayesian estimators of the IP is significantly superior to that concerning NIP for point estimation based on the MSE and average estimate. Then, the performance of Bayesian estimators concerning NIP performs better than that of MLEs with respect to MSE and standard error. In terms of coverage percentage, the credible intervals for the IP are better than those of the NIP. However, the MLEs have longer ALs and higher CPs of CIs than those of the other two methods. It is also observed that if there is an order restriction considered for two shape parameters, we suggest employing it, because the inference result of this method is much better than that of other methods. Finally, we set some precision criteria to compare the various censoring schemes and obtain an optimum censoring scheme. In this paper, it is found in the derivation that when the order restriction of parameters is considered in the classical framework, the form of the function is complex and it is difficult to prove the existence and uniqueness of MLEs. On the contrary, the above content is easier to calculate in the Bayesian framework. Of course, all inference processes can be extended to a classical framework in future research, and there is more work to be done in that direction.

Author Contributions

Investigation, C.Z. and T.C.; Supervision, W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by The Development Project of China Railway (No. N2022J017) and the Fund of China Academy of Railway Sciences Corporation Limited (No. 2022YJ161).

Data Availability Statement

The data presented in this study are openly available in Ref. [9].

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Appendix A.1. When the Scale Parameter λ Is Known

Based on the known scale parameter λ , the joint posterior distribution of parameters α 1 and α 2 is obtained as (19). Here, the HPD credible intervals along with the Bayesian estimates can be derived by the importance sampling technique, and we can employ Algorithm A1 to achieve the above purposes.
Algorithm A1: The application of the importance sampling technique in Bayesian estimates and HPD credible intervals for known λ .
Step 1: Under the given data, α 1 and α 2 are generated from π 0 α 1 , α 2 | λ , d a t a .
Step 2: Repeat Step 1 M times to acquire $(\alpha_{11},\alpha_{21}),\ldots,(\alpha_{1M},\alpha_{2M})$.
Step 3: To acquire the Bayesian estimate of $g(\alpha_1,\alpha_2)$, calculate $(h_{01},\ldots,h_{0M})$ and
             $(g_1,\ldots,g_M)$. Here, $h_{0i}=h_0(\alpha_{1i},\alpha_{2i})$ and $g_i=g(\alpha_{1i},\alpha_{2i})$.
Step 4: The approximate Bayesian estimate about g α 1 , α 2 is given by
$$\hat g_{IS}(\alpha_1,\alpha_2)=\frac{\sum_{i=1}^{M}g_i\,h_{0i}}{\sum_{i=1}^{M}h_{0i}}.$$
Step 5: To obtain the $100(1-\gamma)\%$ CIs of α_1 and α_2, arrange $\alpha_{1(1)}<\cdots<\alpha_{1(M)}$ and
             $\alpha_{2(1)}<\cdots<\alpha_{2(M)}$ as the ordered values of $\alpha_{i1},\alpha_{i2},\ldots,\alpha_{iM}$ ($i=1,2$). The
             $100(1-\gamma)\%$ HPD credible intervals of α_1 and α_2 for a significance level $0<\gamma<1$ are
             constructed as $\big(\alpha_{1(j)},\alpha_{1(j+[M(1-\gamma)])}\big)$ and $\big(\alpha_{2(j)},\alpha_{2(j+[M(1-\gamma)])}\big)$, where j satisfies
$$\alpha_{1(j+[M(1-\gamma)])}-\alpha_{1(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{1(i+[M(1-\gamma)])}-\alpha_{1(i)}\big),\qquad \alpha_{2(j+[M(1-\gamma)])}-\alpha_{2(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{2(i+[M(1-\gamma)])}-\alpha_{2(i)}\big),\qquad j=1,2,\ldots,M,$$
            where $[y]$ is the integer part of y.

Appendix A.2. When the Scale Parameter λ Is Not Known

Based on the unknown scale parameter λ , we express the joint posterior distribution of λ , α 1 , and α 2 as (21). Furthermore, the HPD credible intervals along with the Bayesian estimates are derived by the importance sampling technique, where Algorithm A2 is employed to achieve the above purposes.
Algorithm A2: The application of the importance sampling technique in Bayesian estimates and HPD credible intervals for unknown λ .
Step 1: Under the observed given data, λ is generated from π 1 λ | d a t a .
Step 2: Based on the known λ , the α 1 and α 2 are obtained from π 0 α 1 , α 2 | λ , d a t a .
Step 3: Repeat step 1 and 2 M times to acquire λ 1 , α 11 , α 21 , , λ M , α 1 M , α 2 M .
Step 4: To acquire the Bayesian estimate of $g(\lambda,\alpha_1,\alpha_2)$, calculate $(h_{01},\ldots,h_{0M})$ and
             $(g_1,\ldots,g_M)$. Here, $h_{0i}=h_0(\lambda_i,\alpha_{1i},\alpha_{2i})$ and $g_i=g(\lambda_i,\alpha_{1i},\alpha_{2i})$.
Step 5: The approximate Bayesian estimate about g λ , α 1 , α 2 is given by
$$\hat g_{IS}(\lambda,\alpha_1,\alpha_2)=\frac{\sum_{i=1}^{M}g_i\,h_{0i}}{\sum_{i=1}^{M}h_{0i}}.$$
Step 6: To obtain the $100(1-\gamma)\%$ CIs of all parameters λ, α_1, and α_2, arrange
             $\alpha_{1(1)}<\cdots<\alpha_{1(M)}$ and $\alpha_{2(1)}<\cdots<\alpha_{2(M)}$ as the ordered values of $\alpha_{i1},\alpha_{i2},\ldots,$
             $\alpha_{iM}$ ($i=1,2$). The $100(1-\gamma)\%$ HPD credible intervals of the parameters
             λ, α_1, and α_2 for $0<\gamma<1$ are given by $\big(\lambda_{(j)},\lambda_{(j+[M(1-\gamma)])}\big)$, $\big(\alpha_{1(j)},\alpha_{1(j+[M(1-\gamma)])}\big)$,
            and $\big(\alpha_{2(j)},\alpha_{2(j+[M(1-\gamma)])}\big)$, where j satisfies
$$\lambda_{(j+[M(1-\gamma)])}-\lambda_{(j)}=\min_{1\le i\le M\gamma}\big(\lambda_{(i+[M(1-\gamma)])}-\lambda_{(i)}\big),\qquad \alpha_{1(j+[M(1-\gamma)])}-\alpha_{1(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{1(i+[M(1-\gamma)])}-\alpha_{1(i)}\big),\qquad \alpha_{2(j+[M(1-\gamma)])}-\alpha_{2(j)}=\min_{1\le i\le M\gamma}\big(\alpha_{2(i+[M(1-\gamma)])}-\alpha_{2(i)}\big),\qquad j=1,2,\ldots,M,$$
            where $[y]$ is the integer part of y.

References

  1. Balakrishnan, N.; Cramer, E. The Art of Progressive Censoring: Applications to Reliability and Quality; Springer: New York, NY, USA, 2014.
  2. Dey, S.; Pradhan, B. Generalized inverted exponential distribution under hybrid censoring. Stat. Methodol. 2014, 18, 101–114.
  3. Dube, M.; Krishna, H.; Garg, R. Generalized inverted exponential distribution under progressive first-failure censoring. J. Stat. Comput. Simul. 2015, 86, 1095–1114.
  4. Balakrishnan, N.; Rasouli, A. Exact likelihood inference for two exponential populations under joint Type-II censoring. Comput. Stat. Data Anal. 2008, 52, 2725–2738.
  5. Rasouli, A.; Balakrishnan, N. Exact likelihood inference for two exponential populations under joint progressive type-II censoring. Commun. Stat. Methods 2010, 39, 2172–2191.
  6. Balakrishnan, N.; Su, F.; Liu, K.Y. Exact likelihood inference for k exponential populations under joint progressive type-II censoring. Commun. Stat.-Simul. Comput. 2015, 44, 902–923.
  7. Parsi, S.; Ganjali, M.; Farsipour, N.S. Conditional maximum likelihood and interval estimation for two Weibull populations under joint Type-II progressive censoring. Commun. Stat.-Theory Methods 2011, 40, 2117–2135.
  8. Mondal, S.; Kundu, D. Point and Interval Estimation of Weibull Parameters Based on Joint Progressively Censored Data. Sankhya Indian J. Stat. 2019, 81, 1–25.
  9. Mondal, S.; Kundu, D. On the joint Type-II progressive censoring scheme. Commun. Stat. Theory Methods 2019, 49, 958–976.
  10. Krishna, H.; Goel, R. Inferences for two Lindley populations based on joint progressive type-II censored data. Commun. Stat. Simul. Comput. 2020, 51, 4919–4936.
  11. Mondal, S.; Kundu, D. A new two sample type-II progressive censoring scheme. Commun. Stat. Theory Methods 2018, 48, 2602–2618.
  12. Mondal, S.; Kundu, D. Inference on Weibull parameters under a balanced two-sample type-II progressive censoring scheme. Qual. Reliab. Eng. Int. 2019, 36, 1–17.
  13. Mondal, S.; Kundu, D. Bayesian Inference for Weibull distribution under the balanced joint type-II progressive censoring scheme. Am. J. Math. Manag. Sci. 2019, 39, 56–74.
  14. Mondal, S.; Bhattacharya, R.; Pradhan, B.; Kundu, D. Bayesian optimal life-testing plan under the balanced two sample type-II progressive censoring scheme. Appl. Stoch. Model. Bus. Ind. 2020, 36, 628–640.
  15. Bhattacharya, R.; Pradhan, B.; Dewanji, A. On optimum life-testing plans under Type-II progressive censoring scheme using variable neighborhood search algorithm. Test 2016, 25, 309–330.
  16. Goel, R.; Krishna, H. Statistical inference for two Lindley populations under balanced joint progressive Type-II censoring scheme. Comput. Stat. 2022, 37, 263–286.
  17. Chen, Q.; Gui, W. Statistical Inference of the Generalized Inverted Exponential Distribution under Joint Progressively Type-II Censoring. Entropy 2022, 24, 576.
  18. Abouammoh, A.M.; Alshingiti, A.M. Reliability estimation of generalized inverted exponential distribution. J. Stat. Comput. Simul. 2009, 79, 1301–1315.
  19. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92.
  20. Sultan, K.; Alsadat, N.; Kundu, D. Bayesian and maximum likelihood estimations of the inverse Weibull parameters under progressive type-II censoring. J. Stat. Comput. Simul. 2014, 84, 2248–2265.
  21. Pradhan, B.; Kundu, D. On progressively censored generalized exponential distribution. Test 2009, 18, 497–515.
Figure 1. Schematic representation of the k-th failure occurring in population A.
Figure 2. Schematic representation of the k-th failure occurring in population B.
Figure 3. Graph of the PDF of GIED for λ = 0.5.
Figure 4. Graph of the CDF of GIED for λ = 0.5.
Table 1. MSEs and AVs of the MLEs of the model parameters with λ = 0.5, α1 = 0.4, α2 = 0.8 based on different CSs.
| Censoring Scheme | λ̂: AV | λ̂: MSE | λ̂: Var. Est. | α̂1: AV | α̂1: MSE | α̂1: Var. Est. | α̂2: AV | α̂2: MSE | α̂2: Var. Est. |
|---|---|---|---|---|---|---|---|---|---|
| k = 8, R = (2, 2^(6)) | 0.573 | 0.040 | 0.035 | 0.272 | 0.070 | 0.053 | 0.572 | 0.218 | 0.166 |
| k = 8, R = (2^(4), 3, 2^(2)) | 0.588 | 0.049 | 0.041 | 0.287 | 0.111 | 0.099 | 0.635 | 0.455 | 0.427 |
| k = 8, R = (2^(7)) | 0.580 | 0.044 | 0.038 | 0.285 | 0.072 | 0.059 | 0.596 | 0.404 | 0.362 |
| k = 8, R = (3, 2, 2^(5)) | 0.612 | 0.057 | 0.044 | 0.303 | 0.084 | 0.075 | 0.702 | 0.433 | 0.423 |
| k = 8, R = (5, 4, 1^(5)) | 0.658 | 0.071 | 0.046 | 0.373 | 0.131 | 0.130 | 0.904 | 0.562 | 0.551 |
| k = 8, R = (2^(5), 3, 4) | 0.610 | 0.055 | 0.043 | 0.314 | 0.108 | 0.101 | 0.675 | 0.360 | 0.345 |
| k = 10, R = (7, 1^(8)) | 0.712 | 0.098 | 0.053 | 0.445 | 0.125 | 0.123 | 1.106 | 1.386 | 1.292 |
| k = 10, R = (2^(5), 2, 1^(3)) | 0.667 | 0.072 | 0.044 | 0.353 | 0.066 | 0.064 | 0.790 | 0.362 | 0.361 |
| k = 10, R = (2^(6), 1^(3)) | 0.663 | 0.073 | 0.046 | 0.352 | 0.078 | 0.076 | 0.792 | 0.441 | 0.440 |
| k = 10, R = (4^(2), 1^(7)) | 0.714 | 0.098 | 0.051 | 0.426 | 0.113 | 0.112 | 1.018 | 0.583 | 0.536 |
| k = 10, R = (5, 3, 1^(7)) | 0.719 | 0.097 | 0.049 | 0.435 | 0.132 | 0.130 | 1.055 | 0.639 | 0.574 |
| k = 10, R = (2^(7), 1^(2)) | 0.757 | 0.143 | 0.077 | 0.483 | 0.126 | 0.119 | 0.424 | 0.132 | 0.117 |
Abbreviations: AV—average estimate. MSE—mean square error.
Table 2. MSEs and AVs of the MLEs of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.3 based on different CSs.
| Censoring Scheme | λ̂: AV | λ̂: MSE | λ̂: Var. Est. | α̂1: AV | α̂1: MSE | α̂1: Var. Est. | α̂2: AV | α̂2: MSE | α̂2: Var. Est. |
|---|---|---|---|---|---|---|---|---|---|
| k = 8, R = (2, 2^(6)) | 0.629 | 0.084 | 0.067 | 0.350 | 0.080 | 0.078 | 0.304 | 0.113 | 0.113 |
| k = 8, R = (2^(4), 3, 2^(2)) | 0.657 | 0.098 | 0.073 | 0.377 | 0.083 | 0.083 | 0.318 | 0.073 | 0.072 |
| k = 8, R = (2^(7)) | 0.634 | 0.086 | 0.068 | 0.362 | 0.090 | 0.089 | 0.304 | 0.092 | 0.092 |
| k = 8, R = (3, 2, 2^(5)) | 0.671 | 0.103 | 0.074 | 0.393 | 0.104 | 0.104 | 0.328 | 0.073 | 0.073 |
| k = 8, R = (5, 4, 1^(5)) | 0.732 | 0.136 | 0.083 | 0.477 | 0.189 | 0.183 | 0.395 | 0.132 | 0.123 |
| k = 8, R = (2^(5), 3, 4) | 0.669 | 0.107 | 0.078 | 0.391 | 0.081 | 0.081 | 0.340 | 0.101 | 0.099 |
| k = 10, R = (7, 1^(8)) | 0.810 | 0.190 | 0.094 | 0.592 | 0.226 | 0.189 | 0.511 | 0.243 | 0.198 |
| k = 10, R = (2^(5), 2, 1^(3)) | 0.726 | 0.124 | 0.073 | 0.448 | 0.073 | 0.071 | 0.390 | 0.103 | 0.094 |
| k = 10, R = (2^(6), 1^(3)) | 0.739 | 0.134 | 0.077 | 0.447 | 0.068 | 0.066 | 0.393 | 0.088 | 0.077 |
| k = 10, R = (4^(2), 1^(7)) | 0.801 | 0.183 | 0.092 | 0.554 | 0.173 | 0.150 | 0.476 | 0.180 | 0.149 |
| k = 10, R = (5, 3, 1^(7)) | 0.792 | 0.169 | 0.083 | 0.541 | 0.138 | 0.118 | 0.462 | 0.153 | 0.126 |
| k = 10, R = (2^(7), 1^(2)) | 0.757 | 0.143 | 0.077 | 0.483 | 0.126 | 0.119 | 0.424 | 0.132 | 0.117 |
Abbreviations: AV—average estimate; MSE—mean square error.
Table 3. CPs and ALs of 95% asymptotic confidence intervals of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.8 based on different CSs.
λ ^ α 1 ^ α 2 ^
ALCPALCPALCP
k = 8, R = (2, 2 ( 6 ) )0.90298.6%0.83468.3%1.68071.6%
k = 8, R = ( 2 ( 4 ) , 3, 2 ( 2 ) )0.90598.7%0.87372.8%1.70571.4%
k = 8, R = ( 2 ( 7 ) )0.88597.9%0.83568.2%1.52468.8%
k = 8, R = (3, 2, 2 ( 5 ) )0.92398.6%0.91374.0%1.83775.7%
k = 8, R = (5, 4, 1 ( 5 ) )0.95497.4%1.26283.2%2.67088.1%
k = 8, R = ( 2 ( 5 ) , 3, 4)0.92598.3%0.96173.2%1.89875.0%
k = 10, R = (7, 1 ( 8 ) )0.93796.2%1.20586.1%2.74692.6%
k = 10, R = ( 2 ( 5 ) , 2, 1 ( 3 ) )0.88698.4%0.90179.2%1.81783.8%
k = 10, R = ( 2 ( 6 ) , 1 ( 3 ) )0.90097.1%0.94082.6%1.88083.0%
k = 10, R = ( 4 ( 2 ) , 1 ( 7 ) )0.93996.5%1.11686.0%2.57892.4%
k = 10, R = (5, 3, 1 ( 7 ) )0.94395.8%1.21588.4%2.67593.0%
k = 10, R = ( 2 ( 7 ) , 1 ( 2 ) )0.93796.7%1.04486.0%2.27687.8%
Table 4. CPs and ALs of 95% asymptotic confidence intervals of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.3 based on different CSs.
Censoring Scheme | λ̂: AL / CP | α̂1: AL / CP | α̂2: AL / CP
k = 8, R = (2, 2(6)) | 1.021 / 97.9% | 0.966 / 81.0% | 0.877 / 85.4%
k = 8, R = (2(4), 3, 2(2)) | 1.029 / 98.0% | 0.948 / 83.3% | 0.872 / 85.4%
k = 8, R = (2(7)) | 1.022 / 98.6% | 0.944 / 83.0% | 0.849 / 85.5%
k = 8, R = (3, 2, 2(5)) | 1.055 / 98.9% | 0.977 / 86.2% | 0.908 / 88.6%
k = 8, R = (5, 4, 1(5)) | 1.107 / 98.5% | 1.237 / 91.6% | 1.124 / 91.2%
k = 8, R = (2(5), 3, 4) | 1.040 / 98.2% | 0.957 / 86.5% | 0.940 / 86.9%
k = 10, R = (7, 1(8)) | 1.082 / 94.9% | 1.322 / 96.8% | 1.314 / 96.0%
k = 10, R = (2(5), 2, 1(3)) | 1.031 / 96.9% | 1.071 / 93.0% | 1.046 / 94.3%
k = 10, R = (2(6), 1(3)) | 1.024 / 96.1% | 1.014 / 93.6% | 0.980 / 92.9%
k = 10, R = (4(2), 1(7)) | 1.093 / 95.0% | 1.235 / 95.3% | 1.216 / 95.9%
k = 10, R = (5, 3, 1(7)) | 1.104 / 94.7% | 1.243 / 96.4% | 1.239 / 96.8%
k = 10, R = (2(7), 1(2)) | 1.077 / 95.1% | 1.106 / 94.8% | 1.148 / 94.6%
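Tables 3 and 4 assess 95% asymptotic confidence intervals built from the observed Fisher information. The sketch below illustrates the standard Wald-type construction θ̂ ± z₀.₉₇₅ √(V̂ar(θ̂)) and how the AL and CP columns would be accumulated over replications; it is a generic illustration under that normal approximation, not the paper's exact code, and the variable names are assumptions.

    import numpy as np
    from scipy.stats import norm

    z = norm.ppf(0.975)

    def asymptotic_ci(theta_hat, var_hat):
        # Wald-type interval from the observed-information variance estimate
        half = z * np.sqrt(var_hat)
        return theta_hat - half, theta_hat + half

    def al_cp(theta_hats, var_hats, true_value):
        lows, highs = asymptotic_ci(np.asarray(theta_hats), np.asarray(var_hats))
        al = np.mean(highs - lows)                                   # average length (AL)
        cp = np.mean((lows <= true_value) & (true_value <= highs))   # coverage probability (CP)
        return al, cp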
Table 5. MSEs and BEs of the Bayesian estimates of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.3 based on the importance sampling procedure and different CSs.
Censoring Scheme | Parameter | Without Order Restriction, IP (BE / MSE) | Without Order Restriction, NIP (BE / MSE) | With Order Restriction, IP (BE / MSE) | With Order Restriction, NIP (BE / MSE)
k = 8, R = (2, 2(6)) | λ | 0.575 / 0.045 | 0.485 / 0.034 | 0.565 / 0.039 | 0.533 / 0.043
 | α1 | 0.370 / 0.023 | 0.374 / 0.050 | 0.337 / 0.022 | 0.311 / 0.040
 | α2 | 0.289 / 0.018 | 0.302 / 0.053 | 0.265 / 0.019 | 0.277 / 0.036
k = 8, R = (2(4), 3, 2(2)) | λ | 0.580 / 0.047 | 0.494 / 0.040 | 0.594 / 0.044 | 0.541 / 0.038
 | α1 | 0.378 / 0.027 | 0.387 / 0.056 | 0.356 / 0.023 | 0.324 / 0.043
 | α2 | 0.298 / 0.019 | 0.300 / 0.059 | 0.286 / 0.018 | 0.279 / 0.042
k = 8, R = (2(7)) | λ | 0.576 / 0.044 | 0.494 / 0.039 | 0.595 / 0.049 | 0.530 / 0.045
 | α1 | 0.386 / 0.024 | 0.377 / 0.054 | 0.349 / 0.027 | 0.324 / 0.046
 | α2 | 0.303 / 0.020 | 0.313 / 0.056 | 0.286 / 0.018 | 0.276 / 0.036
k = 8, R = (3, 2, 2(5)) | λ | 0.565 / 0.045 | 0.518 / 0.040 | 0.617 / 0.056 | 0.536 / 0.037
 | α1 | 0.377 / 0.030 | 0.410 / 0.104 | 0.365 / 0.020 | 0.355 / 0.043
 | α2 | 0.290 / 0.020 | 0.318 / 0.051 | 0.280 / 0.016 | 0.283 / 0.036
k = 8, R = (5, 4, 1(5)) | λ | 0.631 / 0.071 | 0.539 / 0.038 | 0.653 / 0.070 | 0.604 / 0.058
 | α1 | 0.417 / 0.027 | 0.433 / 0.065 | 0.407 / 0.029 | 0.424 / 0.077
 | α2 | 0.337 / 0.029 | 0.339 / 0.084 | 0.305 / 0.016 | 0.347 / 0.065
k = 8, R = (2(5), 3, 4) | λ | 0.576 / 0.058 | 0.518 / 0.037 | 0.600 / 0.053 | 0.534 / 0.039
 | α1 | 0.389 / 0.023 | 0.368 / 0.048 | 0.354 / 0.022 | 0.337 / 0.041
 | α2 | 0.306 / 0.024 | 0.333 / 0.071 | 0.299 / 0.024 | 0.290 / 0.052
k = 10, R = (7, 1(8)) | λ | 0.666 / 0.071 | 0.673 / 0.085 | 0.699 / 0.084 | 0.722 / 0.106
 | α1 | 0.487 / 0.035 | 0.559 / 0.116 | 0.467 / 0.030 | 0.527 / 0.087
 | α2 | 0.403 / 0.044 | 0.370 / 0.071 | 0.385 / 0.033 | 0.452 / 0.118
k = 10, R = (2(5), 2, 1(3)) | λ | 0.641 / 0.056 | 0.629 / 0.064 | 0.658 / 0.064 | 0.642 / 0.072
 | α1 | 0.419 / 0.022 | 0.466 / 0.068 | 0.408 / 0.021 | 0.397 / 0.033
 | α2 | 0.337 / 0.018 | 0.370 / 0.071 | 0.320 / 0.018 | 0.362 / 0.086
k = 10, R = (2(6), 1(3)) | λ | 0.649 / 0.069 | 0.597 / 0.053 | 0.662 / 0.072 | 0.648 / 0.073
 | α1 | 0.443 / 0.028 | 0.452 / 0.062 | 0.413 / 0.026 | 0.432 / 0.060
 | α2 | 0.345 / 0.024 | 0.409 / 0.096 | 0.332 / 0.025 | 0.386 / 0.061
k = 10, R = (4(2), 1(7)) | λ | 0.683 / 0.075 | 0.669 / 0.083 | 0.680 / 0.072 | 0.707 / 0.098
 | α1 | 0.472 / 0.047 | 0.492 / 0.084 | 0.449 / 0.026 | 0.507 / 0.086
 | α2 | 0.372 / 0.387 | 0.427 / 0.095 | 0.354 / 0.026 | 0.431 / 0.088
k = 10, R = (5, 3, 1(7)) | λ | 0.705 / 0.096 | 0.688 / 0.096 | 0.679 / 0.096 | 0.710 / 0.102
 | α1 | 0.465 / 0.037 | 0.567 / 0.131 | 0.455 / 0.032 | 0.489 / 0.099
 | α2 | 0.387 / 0.038 | 0.450 / 0.129 | 0.377 / 0.032 | 0.452 / 0.127
k = 10, R = (2(7), 1(2)) | λ | 0.674 / 0.082 | 0.629 / 0.064 | 0.670 / 0.070 | 0.674 / 0.090
 | α1 | 0.457 / 0.036 | 0.501 / 0.084 | 0.416 / 0.019 | 0.469 / 0.067
 | α2 | 0.369 / 0.034 | 0.415 / 0.091 | 0.353 / 0.021 | 0.408 / 0.102
Table 6. MSEs and BEs of the Bayesian estimates of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.8 based on the importance sampling procedure and different CSs.
Censoring Scheme | Parameter | Without Order Restriction, IP (BE / MSE) | Without Order Restriction, NIP (BE / MSE) | With Order Restriction, IP (BE / MSE) | With Order Restriction, NIP (BE / MSE)
k = 8, R = (2, 2(6)) | λ | 0.480 / 0.026 | 0.437 / 0.028 | 0.527 / 0.027 | 0.464 / 0.025
 | α1 | 0.322 / 0.059 | 0.253 / 0.078 | 0.275 / 0.044 | 0.253 / 0.056
 | α2 | 0.604 / 0.122 | 0.571 / 0.222 | 0.543 / 0.118 | 0.428 / 0.201
k = 8, R = (2(4), 3, 2(2)) | λ | 0.500 / 0.021 | 0.433 / 0.028 | 0.525 / 0.021 | 0.471 / 0.021
 | α1 | 0.324 / 0.054 | 0.235 / 0.059 | 0.305 / 0.039 | 0.263 / 0.068
 | α2 | 0.646 / 0.145 | 0.588 / 0.137 | 0.579 / 0.117 | 0.507 / 0.185
k = 8, R = (2(7)) | λ | 0.503 / 0.025 | 0.413 / 0.032 | 0.520 / 0.030 | 0.455 / 0.024
 | α1 | 0.320 / 0.049 | 0.257 / 0.081 | 0.283 / 0.040 | 0.232 / 0.070
 | α2 | 0.640 / 0.127 | 0.524 / 0.193 | 0.573 / 0.191 | 0.461 / 0.199
k = 8, R = (3, 2, 2(5)) | λ | 0.513 / 0.027 | 0.446 / 0.023 | 0.525 / 0.020 | 0.490 / 0.030
 | α1 | 0.310 / 0.052 | 0.243 / 0.065 | 0.291 / 0.038 | 0.290 / 0.078
 | α2 | 0.643 / 0.123 | 0.615 / 0.195 | 0.579 / 0.117 | 0.550 / 0.352
k = 8, R = (5, 4, 1(5)) | λ | 0.548 / 0.029 | 0.487 / 0.025 | 0.565 / 0.029 | 0.531 / 0.032
 | α1 | 0.342 / 0.050 | 0.277 / 0.098 | 0.325 / 0.046 | 0.298 / 0.069
 | α2 | 0.791 / 0.166 | 0.829 / 0.399 | 0.697 / 0.089 | 0.665 / 0.205
k = 8, R = (2(5), 3, 4) | λ | 0.506 / 0.027 | 0.452 / 0.027 | 0.537 / 0.025 | 0.475 / 0.026
 | α1 | 0.331 / 0.052 | 0.262 / 0.072 | 0.275 / 0.042 | 0.254 / 0.061
 | α2 | 0.641 / 0.134 | 0.615 / 0.175 | 0.591 / 0.099 | 0.529 / 0.198
k = 10, R = (7, 1(8)) | λ | 0.604 / 0.036 | 0.554 / 0.030 | 0.613 / 0.042 | 0.566 / 0.029
 | α1 | 0.428 / 0.078 | 0.353 / 0.101 | 0.400 / 0.047 | 0.356 / 0.064
 | α2 | 0.903 / 0.139 | 0.937 / 0.323 | 0.832 / 0.086 | 0.768 / 0.151
k = 10, R = (2(5), 2, 1(3)) | λ | 0.582 / 0.038 | 0.521 / 0.024 | 0.570 / 0.032 | 0.549 / 0.025
 | α1 | 0.371 / 0.047 | 0.310 / 0.061 | 0.344 / 0.032 | 0.330 / 0.050
 | α2 | 0.755 / 0.101 | 0.711 / 0.157 | 0.628 / 0.088 | 0.611 / 0.145
k = 10, R = (2(6), 1(3)) | λ | 0.583 / 0.036 | 0.536 / 0.030 | 0.598 / 0.036 | 0.551 / 0.032
 | α1 | 0.331 / 0.039 | 0.324 / 0.063 | 0.339 / 0.037 | 0.308 / 0.050
 | α2 | 0.777 / 0.121 | 0.725 / 0.186 | 0.653 / 0.082 | 0.621 / 0.162
k = 10, R = (4(2), 1(7)) | λ | 0.626 / 0.045 | 0.585 / 0.038 | 0.605 / 0.037 | 0.589 / 0.039
 | α1 | 0.398 / 0.060 | 0.352 / 0.079 | 0.385 / 0.042 | 0.366 / 0.062
 | α2 | 0.916 / 0.202 | 0.951 / 0.351 | 0.777 / 0.092 | 0.792 / 0.206
k = 10, R = (5, 3, 1(7)) | λ | 0.595 / 0.031 | 0.569 / 0.033 | 0.630 / 0.050 | 0.600 / 0.046
 | α1 | 0.388 / 0.047 | 0.337 / 0.073 | 0.388 / 0.045 | 0.353 / 0.053
 | α2 | 0.873 / 0.168 | 0.890 / 0.241 | 0.803 / 0.101 | 0.804 / 0.188
k = 10, R = (2(7), 1(2)) | λ | 0.582 / 0.032 | 0.556 / 0.031 | 0.602 / 0.035 | 0.595 / 0.046
 | α1 | 0.369 / 0.040 | 0.343 / 0.074 | 0.363 / 0.044 | 0.348 / 0.068
 | α2 | 0.767 / 0.112 | 0.794 / 0.178 | 0.709 / 0.084 | 0.696 / 0.169
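Tables 5 and 6 report Bayesian point estimates (posterior means under the squared error loss) computed by importance sampling. A generic self-normalized importance-sampling estimator of a posterior mean is sketched below; the proposal distribution and weight computation used in the paper are not reproduced here, so the draws and log-weights are assumed to be already available.

    import numpy as np

    def is_posterior_mean(draws, log_weights):
        # Self-normalized importance-sampling estimate of E[theta | data]
        log_w = np.asarray(log_weights, dtype=float)
        w = np.exp(log_w - log_w.max())   # subtract the max before exponentiating for stability
        w /= w.sum()
        return float(np.sum(w * np.asarray(draws, dtype=float)))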
Table 7. CPs and ALs of 95% HPD credible intervals of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.8 based on different CSs.
Censoring Scheme | Parameter | Without Order Restriction, IP (AL / CP) | Without Order Restriction, NIP (AL / CP) | With Order Restriction, IP (AL / CP) | With Order Restriction, NIP (AL / CP)
k = 8, R = (2, 2(6)) | λ | 0.382 / 77.6% | 0.370 / 70.0% | 0.453 / 88.6% | 0.448 / 80.1%
 | α1 | 0.437 / 61.6% | 0.351 / 42.8% | 0.483 / 68.7% | 0.435 / 56.7%
 | α2 | 0.806 / 65.3% | 0.802 / 61.0% | 0.771 / 57.9% | 0.750 / 52.0%
k = 8, R = (2(4), 3, 2(2)) | λ | 0.394 / 76.9% | 0.345 / 63.0% | 0.469 / 90.3% | 0.470 / 85.1%
 | α1 | 0.445 / 62.4% | 0.373 / 46.4% | 0.493 / 67.4% | 0.470 / 62.7%
 | α2 | 0.785 / 61.0% | 0.726 / 52.2% | 0.846 / 67.1% | 0.862 / 61.6%
k = 8, R = (2(7)) | λ | 0.361 / 74.2% | 0.358 / 62.3% | 0.477 / 91.2% | 0.448 / 76.8%
 | α1 | 0.415 / 54.6% | 0.363 / 45.0% | 0.496 / 70.3% | 0.433 / 57.9%
 | α2 | 0.712 / 58.4% | 0.782 / 59.2% | 0.790 / 64.3% | 0.781 / 56.5%
k = 8, R = (3, 2, 2(5)) | λ | 0.401 / 80.2% | 0.394 / 74.1% | 0.478 / 87.7% | 0.484 / 84.3%
 | α1 | 0.465 / 60.8% | 0.364 / 44.6% | 0.524 / 73.0% | 0.509 / 61.7%
 | α2 | 0.861 / 70.0% | 0.873 / 63.3% | 0.836 / 65.3% | 0.858 / 60.2%
k = 8, R = (5, 4, 1(5)) | λ | 0.434 / 84.5% | 0.429 / 80.0% | 0.509 / 92.3% | 0.489 / 88.2%
 | α1 | 0.521 / 73.6% | 0.492 / 53.9% | 0.608 / 75.5% | 0.606 / 71.5%
 | α2 | 1.025 / 81.4% | 1.143 / 71.5% | 0.995 / 85.2% | 1.048 / 72.3%
k = 8, R = (2(5), 3, 4) | λ | 0.397 / 80.4% | 0.373 / 71.1% | 0.468 / 86.2% | 0.475 / 82.2%
 | α1 | 0.464 / 67.7% | 0.394 / 48.8% | 0.521 / 71.5% | 0.513 / 65.2%
 | α2 | 0.878 / 70.0% | 0.831 / 59.8% | 0.815 / 66.1% | 0.913 / 61.1%
k = 10, R = (7, 1(8)) | λ | 0.481 / 87.3% | 0.465 / 90.3% | 0.506 / 90.3% | 0.488 / 86.4%
 | α1 | 0.663 / 84.0% | 0.605 / 69.5% | 0.652 / 87.6% | 0.697 / 79.7%
 | α2 | 1.250 / 89.0% | 1.438 / 86.6% | 1.083 / 92.6% | 1.180 / 85.3%
k = 10, R = (2(5), 2, 1(3)) | λ | 0.463 / 88.6% | 0.417 / 80.6% | 0.500 / 88.0% | 0.509 / 87.2%
 | α1 | 0.527 / 76.5% | 0.454 / 59.9% | 0.534 / 82.9% | 0.578 / 69.3%
 | α2 | 0.971 / 79.5% | 0.977 / 70.1% | 0.886 / 78.6% | 0.965 / 75.2%
k = 10, R = (2(6), 1(3)) | λ | 0.442 / 89.9% | 0.429 / 81.2% | 0.489 / 88.6% | 0.498 / 92.0%
 | α1 | 0.510 / 73.7% | 0.505 / 60.7% | 0.543 / 77.8% | 0.546 / 74.7%
 | α2 | 0.928 / 77.4% | 0.900 / 66.8% | 0.884 / 82.2% | 0.944 / 77.4%
k = 10, R = (4(2), 1(7)) | λ | 0.475 / 87.3% | 0.463 / 86.9% | 0.505 / 95.0% | 0.514 / 90.1%
 | α1 | 0.618 / 78.6% | 0.618 / 67.8% | 0.606 / 84.6% | 0.696 / 79.5%
 | α2 | 1.186 / 88.0% | 1.318 / 81.9% | 0.986 / 89.3% | 1.154 / 83.7%
k = 10, R = (5, 3, 1(7)) | λ | 0.472 / 89.0% | 0.457 / 88.1% | 0.490 / 89.0% | 0.515 / 92.0%
 | α1 | 0.578 / 83.6% | 0.576 / 71.5% | 0.619 / 81.6% | 0.632 / 78.5%
 | α2 | 1.149 / 86.0% | 1.398 / 85.4% | 1.009 / 84.9% | 1.137 / 86.5%
k = 10, R = (2(7), 1(2)) | λ | 0.463 / 88.6% | 0.443 / 82.2% | 0.502 / 89.3% | 0.511 / 91.0%
 | α1 | 0.535 / 79.1% | 0.526 / 67.4% | 0.601 / 84.9% | 0.603 / 74.3%
 | α2 | 0.980 / 80.5% | 1.138 / 76.5% | 0.941 / 80.9% | 1.013 / 76.7%
Table 8. CPs and ALs of 95% HPD credible intervals of the model parameters with λ = 0.5, α1 = 0.4, and α2 = 0.3 based on different CSs.
Censoring Scheme | Parameter | Without Order Restriction, IP (AL / CP) | Without Order Restriction, NIP (AL / CP) | With Order Restriction, IP (AL / CP) | With Order Restriction, NIP (AL / CP)
k = 8, R = (2, 2(6)) | λ | 0.493 / 78.3% | 0.397 / 69.7% | 0.623 / 90.7% | 0.545 / 87.9%
 | α1 | 0.470 / 78.3% | 0.458 / 66.2% | 0.515 / 87.7% | 0.543 / 72.1%
 | α2 | 0.414 / 80.7% | 0.397 / 65.5% | 0.450 / 86.3% | 0.507 / 78.3%
k = 8, R = (2(4), 3, 2(2)) | λ | 0.503 / 82.2% | 0.406 / 71.5% | 0.611 / 89.6% | 0.567 / 85.6%
 | α1 | 0.467 / 84.6% | 0.465 / 64.9% | 0.520 / 86.6% | 0.566 / 77.5%
 | α2 | 0.413 / 81.9% | 0.407 / 64.9% | 0.467 / 90.0% | 0.501 / 78.2%
k = 8, R = (2(7)) | λ | 0.500 / 84.2% | 0.394 / 70.8% | 0.621 / 90.3% | 0.557 / 86.2%
 | α1 | 0.474 / 84.5% | 0.433 / 72.2% | 0.517 / 86.3% | 0.520 / 77.3%
 | α2 | 0.411 / 85.2% | 0.392 / 65.6% | 0.450 / 85.3% | 0.459 / 70.9%
k = 8, R = (3, 2, 2(5)) | λ | 0.541 / 84.9% | 0.459 / 75.4% | 0.654 / 92.0% | 0.572 / 90.5%
 | α1 | 0.515 / 83.2% | 0.542 / 72.7% | 0.569 / 87.3% | 0.561 / 77.5%
 | α2 | 0.442 / 85.3% | 0.449 / 68.7% | 0.472 / 90.3% | 0.512 / 79.6%
k = 8, R = (5, 4, 1(5)) | λ | 0.583 / 89.0% | 0.513 / 76.0% | 0.687 / 93.0% | 0.635 / 91.2%
 | α1 | 0.574 / 92.0% | 0.659 / 77.0% | 0.594 / 92.0% | 0.693 / 82.5%
 | α2 | 0.489 / 90.7% | 0.564 / 66.9% | 0.555 / 92.0% | 0.660 / 89.5%
k = 8, R = (2(5), 3, 4) | λ | 0.583 / 85.7% | 0.417 / 75.6% | 0.614 / 90.6% | 0.578 / 87.1%
 | α1 | 0.500 / 85.3% | 0.472 / 71.8% | 0.539 / 87.3% | 0.578 / 80.0%
 | α2 | 0.455 / 84.0% | 0.456 / 69.3% | 0.482 / 88.6% | 0.546 / 80.7%
k = 10, R = (7, 1(8)) | λ | 0.666 / 86.7% | 0.575 / 78.3% | 0.718 / 84.3% | 0.652 / 80.8%
 | α1 | 0.647 / 92.3% | 0.797 / 86.3% | 0.698 / 96.7% | 0.826 / 89.9%
 | α2 | 0.605 / 93.0% | 0.758 / 79.0% | 0.633 / 94.0% | 0.808 / 84.2%
k = 10, R = (2(5), 2, 1(3)) | λ | 0.566 / 83.6% | 0.527 / 81.5% | 0.660 / 91.3% | 0.623 / 87.6%
 | α1 | 0.537 / 93.3% | 0.605 / 81.5% | 0.580 / 94.0% | 0.663 / 87.0%
 | α2 | 0.490 / 93.3% | 0.563 / 74.0% | 0.539 / 96.3% | 0.612 / 89.3%
k = 10, R = (2(6), 1(3)) | λ | 0.576 / 81.3% | 0.500 / 80.0% | 0.676 / 86.7% | 0.635 / 86.2%
 | α1 | 0.521 / 90.0% | 0.614 / 79.3% | 0.577 / 97.0% | 0.661 / 88.2%
 | α2 | 0.494 / 89.0% | 0.569 / 72.3% | 0.523 / 93.7% | 0.650 / 87.6%
k = 10, R = (4(2), 1(7)) | λ | 0.645 / 85.3% | 0.547 / 79.9% | 0.716 / 86.7% | 0.683 / 81.8%
 | α1 | 0.611 / 95.0% | 0.711 / 84.3% | 0.665 / 96.7% | 0.767 / 89.9%
 | α2 | 0.583 / 90.0% | 0.671 / 74.6% | 0.607 / 95.3% | 0.735 / 87.5%
k = 10, R = (5, 3, 1(7)) | λ | 0.642 / 80.7% | 0.556 / 80.8% | 0.714 / 88.0% | 0.682 / 81.5%
 | α1 | 0.653 / 93.7% | 0.698 / 83.8% | 0.660 / 97.7% | 0.736 / 91.6%
 | α2 | 0.573 / 91.0% | 0.629 / 82.1% | 0.619 / 94.3% | 0.728 / 89.9%
k = 10, R = (2(7), 1(2)) | λ | 0.581 / 85.6% | 0.513 / 80.4% | 0.666 / 89.0% | 0.648 / 83.1%
 | α1 | 0.563 / 91.9% | 0.628 / 84.5% | 0.584 / 94.3% | 0.699 / 87.5%
 | α2 | 0.523 / 90.6% | 0.627 / 73.3% | 0.563 / 94.0% | 0.705 / 88.1%
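Tables 7 and 8 concern 95% HPD credible intervals. For (approximately) equally weighted posterior draws, the shortest interval containing 95% of the sorted sample is a standard approximation to the HPD interval; a minimal sketch of that construction follows, with the example call purely illustrative.

    import numpy as np

    def hpd_interval(draws, cred=0.95):
        # Shortest interval containing a fraction `cred` of the sorted posterior draws
        s = np.sort(np.asarray(draws, dtype=float))
        n = len(s)
        m = int(np.ceil(cred * n))
        widths = s[m - 1:] - s[: n - m + 1]
        j = int(np.argmin(widths))
        return s[j], s[j + m - 1]

    # e.g., hpd_interval(np.random.gamma(2.0, 0.25, size=10000))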
Table 9. The K-S distance and MLEs of the two datasets.
Dataset | α̂ (MLE, complete sample) | λ̂ (MLE, complete sample) | K-S Distance | p Value
Dataset 1 | 1.353 | 0.188 | 0.162 | 0.367
Dataset 2 | 1.841 | 0.293 | 0.141 | 0.536
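Table 9 reports the complete-sample MLEs together with the Kolmogorov-Smirnov distance and p-value of the fitted GIED. The sketch below shows how such a K-S distance could be computed with SciPy once the data and fitted parameters are available; because the jute-fibre measurements are not reproduced in this section, the data here are simulated from the fitted GIED purely as a stand-in, and p-values obtained with estimated parameters should be read as approximate.

    import numpy as np
    from scipy.stats import kstest

    def gied_cdf(x, alpha, lam):
        x = np.asarray(x, dtype=float)
        return 1.0 - (1.0 - np.exp(-lam / x))**alpha

    def gied_rvs(alpha, lam, size, rng):
        # Inverse-transform sampling from F(x) = 1 - (1 - exp(-lam/x))^alpha
        u = rng.uniform(size=size)
        return -lam / np.log(1.0 - (1.0 - u)**(1.0 / alpha))

    rng = np.random.default_rng(2023)
    data = gied_rvs(alpha=1.353, lam=0.188, size=30, rng=rng)   # stand-in for dataset 1
    d, p = kstest(data, lambda x: gied_cdf(x, alpha=1.353, lam=0.188))
    print(d, p)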
Table 10. Three B-JPC samples from the breaking strength of jute fiber based on different CSs, with (m, n, k) = (30, 30, 10).
Scheme R1 = (9, 1(8)): 36.75, 145.96, 187.13, 200.16, 284.64, 375.81, 422.11, 530.55, 585.57, 662.66
Scheme R2 = (2(5), 3, 2(3)): 36.75, 48.01, 99.72, 119.86, 187.13, 244.53, 383.43, 530.55, 594.29, 704.66
Scheme R3 = (6, 7, 1(7)): 36.75, 113.85, 244.53, 350.70, 383.43, 506.60, 581.60, 594.29, 688.16, 727.23
Table 11. The Bayesian estimates and MLEs of the parameters under order restriction and without order restriction with the real dataset.
Parameter | R1 | R2 | R3
α̂1,ML | 0.4743 (0.0145, 0.9341) | 0.2962 (0.0126, 0.5799) | 0.4613 (−0.0046, 0.9271)
α̂2,ML | 0.1186 (−0.0644, 0.3015) | 0.1270 (−0.0360, 0.2900) | 0.1977 (−0.0639, 0.4593)
λ̂ML | 0.2111 (0.0429, 0.3794) | 0.1389 (0.0302, 0.2477) | 0.2362 (0.0584, 0.4141)
α̂1,I | 0.1070 (0.0144, 0.2559) | 0.1957 (0.1049, 0.4601) | 0.2104 (0.1029, 0.5959)
α̂2,I | 0.1670 (0.1122, 0.3040) | 0.1395 (0.1041, 0.1705) | 0.2300 (0.0976, 0.2960)
λ̂I | 0.3970 (0.2175, 0.7283) | 0.7115 (0.3430, 0.9272) | 0.6056 (0.3183, 0.8765)
α̂1,II | 0.1189 (0.0031, 0.3083) | 0.1198 (0.0386, 0.2851) | 0.1785 (0.0225, 0.4332)
α̂2,II | 0.1980 (0.0721, 0.3211) | 0.1223 (0.0573, 0.1819) | 0.2077 (0.0586, 0.3309)
λ̂II | 0.4587 (0.1031, 0.9138) | 0.2704 (0.0359, 0.4962) | 0.4088 (0.1311, 0.7794)
Notes: the subscript ML denotes maximum likelihood estimates; α̂1,I, α̂2,I, and λ̂I denote Bayesian estimates without order restriction; α̂1,II, α̂2,II, and λ̂II denote Bayesian estimates under order restriction. Values in parentheses are the corresponding interval estimates.
Table 12. Selection of optimum censoring scheme based on different criteria.
Censoring Scheme | Criterion 1 | Criterion 2 | Criterion 3 | Criterion 4 | Criterion 5
R1 = (9, 1(8)) | 1.456742 × 10^6 | 0.07111649 | 0.05996095 | 475.8505 | 0.1896620
R2 = (2(5), 3, 2(3)) | 2.072309 × 10^7 | 0.02329576 | 0.05948553 | 902.4102 | 0.1402625
R3 = (6, 7, 1(7)) | 3.260692 × 10^6 | 0.08255037 | 0.06397661 | 380.0584 | 0.1001845
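Table 12 compares the three censoring schemes through five optimality criteria whose definitions appear earlier in the paper and are not repeated here. Purely as an illustration, the sketch below evaluates two commonly used information-based criteria, the determinant and the trace of an estimated variance-covariance matrix of (λ̂, α̂1, α̂2); the 3 × 3 matrix shown is hypothetical and not taken from the data.

    import numpy as np

    def det_criterion(cov):
        # Determinant of the estimated variance-covariance matrix (smaller is better)
        return float(np.linalg.det(cov))

    def trace_criterion(cov):
        # Trace of the estimated variance-covariance matrix (smaller is better)
        return float(np.trace(cov))

    # Hypothetical covariance matrix for (lambda, alpha1, alpha2) -- illustrative numbers only
    V = np.array([[0.040, 0.010, 0.005],
                  [0.010, 0.060, 0.012],
                  [0.005, 0.012, 0.090]])
    print(det_criterion(V), trace_criterion(V))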
