
Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications

by Saieed F. Ateya 1,2, Abdulaziz S. Alghamdi 3 and Abd Allah A. Mousa 2,4,*
1 Department of Mathematics, Faculty of Science, Assiut University, Assiut 71515, Egypt
2 Department of Mathematics, Faculty of Science, Taif University, Taif 21944, Saudi Arabia
3 Department of Mathematics, College of Science & Arts, King Abdulaziz University, Rabigh 21911, Saudi Arabia
4 Department of Basic Engineering Science, Faculty of Engineering, Menofia University, Shebin El-Kom 32511, Egypt
* Author to whom correspondence should be addressed.
Submission received: 21 March 2022 / Revised: 21 April 2022 / Accepted: 22 April 2022 / Published: 26 April 2022

Abstract:
Industries are constantly seeking ways to avoid corrective maintenance in order to reduce costs. Performing regular scheduled maintenance can help to mitigate this problem, but not necessarily in the most efficient way. In many real-life applications, one wants to predict the future failure time of equipment or devices that are expensive or have long lifetimes, to save costs and/or time. In this paper, statistical prediction was studied using the classical and Bayesian approaches based on a unified hybrid censoring scheme. Two prediction schemes were used: (1) a one-sample prediction scheme that predicts the unobserved future failure times of devices that did not complete the lifetime experiment; and (2) a two-sample prediction scheme that predicts the ordered values of a future independent sample based on past data from a certain distribution. We chose to apply the results of the paper to the Burr-X model, due to the importance of this model in many fields, such as engineering, health, agriculture, and biology. Point and interval predictors of unobserved failure times under the one- and two-sample prediction schemes were computed based on simulated data sets and two engineering applications. The results demonstrate the ability to predict the future failure of equipment using statistical prediction based on data collected from an engineering system.

1. Introduction

Industries are constantly seeking ways to avoid corrective maintenance in order to reduce costs. Performing regular scheduled maintenance can help to mitigate this problem, but not necessarily in the most efficient way; see [1,2,3]. In condition-based maintenance, the main goal is to find ways to process and transform data from an engineering system so that they can be used to build a data set for making statistical predictions about how the equipment will behave in the future and when it will fail.
In many practical situations, one desires to predict future observations from the same population of previous data. This may be done by constructing an interval that will include future observations with a certain probability.
The accuracy of predictive intervals depends on the sample size; however, complete life-testing is often impractical, owing to advances in industrial design and technology that yield very reliable products with long lifespans. Censoring is implemented in this case for a variety of reasons, including a lack of available resources and the need to save costs. In general, only a small percentage of failure times are recorded when a censoring scheme (CS) is engaged in a test environment.
Let $X_{1:n}\le X_{2:n}\le\cdots\le X_{n:n}$ be the ordered failure times of $n$ identical units placed on a life-test, drawn from a certain distribution with PDF $f(x;\theta)$, where $\theta$ is the vector of parameters, and RF $R(x;\theta)$. For fixed $k,r\in\{1,2,\ldots,n\}$ and $T_1<T_2\in(0,\infty)$ with $k<r$, and based on the relation between $T_1$, $T_2$, $X_{k:n}$, and $X_{r:n}$, a UHCS is defined by Balakrishnan with the following six decisions:
(1) Stopping the experiment at $T_1$ if $0<X_{k:n}<X_{r:n}<T_1<T_2$;
(2) Stopping the experiment at $X_{r:n}$ if $0<X_{k:n}<T_1<X_{r:n}<T_2$;
(3) Stopping the experiment at $T_2$ if $0<X_{k:n}<T_1<T_2<X_{r:n}$;
(4) Stopping the experiment at $X_{r:n}$ if $0<T_1<X_{k:n}<X_{r:n}<T_2$;
(5) Stopping the experiment at $T_2$ if $0<T_1<X_{k:n}<T_2<X_{r:n}$;
(6) Stopping the experiment at $X_{k:n}$ if $0<T_1<T_2<X_{k:n}<X_{r:n}$.
Let $d_i$ denote the number of failures up to time $T_i$, $i=1,2$. Then, the likelihood function (LF) of this UHCS censored sample is as follows:
$$L(\theta;\text{data})=\begin{cases}\dfrac{n!}{(n-d)!}\Big[\prod_{i=1}^{d}f(x_i;\theta)\Big]\big[R(T_1;\theta)\big]^{\,n-d}, & d_1=d_2=d=r,\ldots,n,\\[4pt] \dfrac{n!}{(n-r)!}\Big[\prod_{i=1}^{r}f(x_i;\theta)\Big]\big[R(x_r;\theta)\big]^{\,n-r}, & d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] \dfrac{n!}{(n-d_2)!}\Big[\prod_{i=1}^{d_2}f(x_i;\theta)\Big]\big[R(T_2;\theta)\big]^{\,n-d_2}, & d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] \dfrac{n!}{(n-r)!}\Big[\prod_{i=1}^{r}f(x_i;\theta)\Big]\big[R(x_r;\theta)\big]^{\,n-r}, & d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] \dfrac{n!}{(n-d_2)!}\Big[\prod_{i=1}^{d_2}f(x_i;\theta)\Big]\big[R(T_2;\theta)\big]^{\,n-d_2}, & d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] \dfrac{n!}{(n-k)!}\Big[\prod_{i=1}^{k}f(x_i;\theta)\Big]\big[R(x_k;\theta)\big]^{\,n-k}, & d_2=0,\ldots,k-1.\end{cases}\tag{1}$$
Many well-known censoring schemes can be considered as special cases of the studied UHCS: the generalized type-I HCS [4] when $T_1\to 0$; the generalized type-II HCS [4] when $k=1$; the type-I HCS [5] when $T_1\to 0$ and $k=1$; the type-II HCS [5] when $T_2\to\infty$ and $k=1$; type-I censoring [6] when $T_1\to 0$, $k=1$, and $r=n$; and type-II censoring [6] when $T_1\to 0$, $T_2\to\infty$, and $k=1$.
Among the advantages of the UHCS is that it is more flexible than the generalized type-I HCS and generalized type-II HCS; moreover, it guarantees more observations, which increases the accuracy of the predictive intervals.
Ref. [7] proposes the Burr-X distribution as a member of the Burr distribution family. This model is extremely useful in the fields of statistics and operations research. Engineering, health, agriculture, and biology are just some of the fields where it can be used to great effect.
A random variable $X$ is said to follow the Burr-X distribution with parameter vector $\theta=(\alpha,\beta)$ if its PDF is given by
$$f(x;\alpha,\beta)=2\alpha\beta x\,e^{-\beta x^{2}}\big(1-e^{-\beta x^{2}}\big)^{\alpha-1},\quad x>0,\ (\alpha>0,\ \beta>0).\tag{2}$$
The corresponding CDF and RF are given, respectively, by
$$F(x;\alpha,\beta)=\big(1-e^{-\beta x^{2}}\big)^{\alpha},\quad x>0,\ (\alpha>0,\ \beta>0),\tag{3}$$
$$R(x;\alpha,\beta)=1-\big(1-e^{-\beta x^{2}}\big)^{\alpha},\quad x>0,\ (\alpha>0,\ \beta>0).\tag{4}$$
For more details about some Burr models with related inferences using classical and Bayesian approaches, see [8,9,10,11,12,13,14,15,16,17,18].
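The density, distribution, and reliability functions above are straightforward to implement, and the closed-form CDF inverts analytically ($x=\sqrt{-\ln(1-u^{1/\alpha})/\beta}$ for $u\in(0,1)$), which gives a simple sampler. A minimal sketch (the function names are ours):

```python
import math
import random

def burrx_pdf(x, alpha, beta):
    """Burr-X density."""
    t = math.exp(-beta * x * x)
    return 2 * alpha * beta * x * t * (1 - t) ** (alpha - 1)

def burrx_cdf(x, alpha, beta):
    """Burr-X distribution function."""
    return (1 - math.exp(-beta * x * x)) ** alpha

def burrx_rf(x, alpha, beta):
    """Burr-X reliability (survival) function."""
    return 1 - burrx_cdf(x, alpha, beta)

def burrx_sample(n, alpha, beta, rng=None):
    """Ordered sample of size n by inverting the CDF:
    x = sqrt(-log(1 - u**(1/alpha)) / beta) for u ~ U(0, 1)."""
    rng = rng or random.Random()
    return sorted(math.sqrt(-math.log(1 - rng.random() ** (1 / alpha)) / beta)
                  for _ in range(n))
```

Sorting the draws directly yields the ordered sample used throughout the paper's life-test setting.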
This paper makes several contributions: it studies the prediction problem under a UHCS using both the classical and Bayesian approaches, with comparisons between the two; it analyzes two real engineering data sets using the Burr-X distribution; and it applies the obtained results to these real data sets as illustrative examples.
This paper is organized as follows: the point and interval prediction problems under the one- and two-sample prediction schemes are studied using the classical and Bayesian approaches in Section 2 and Section 3, respectively. In Section 4, the obtained results are applied to simulated and real data sets. Our conclusions are summarized in Section 5.

2. One-Sample Prediction

Assume that $n$ items are placed on a lifetime experiment, that the experiment terminates at a fixed time $T^{*}$, and that the number of failures up to this time is $D$. The previous ordered failures, denoted by $\mathbf{x}=(x_{1:n},x_{2:n},\ldots,x_{D:n})$ and written for simplicity as $\mathbf{x}=(x_1,x_2,\ldots,x_D)$, are called the informative sample. In Balakrishnan's UHCS, $T^{*}$ equals $T_1$ in the first case, $x_r$ in the second, $T_2$ in the third, $x_r$ in the fourth, $T_2$ in the fifth, and $x_k$ in the sixth. Moreover, $D$ equals $d_1$ in the first case, $r$ in the second, $d_2$ in the third, $r$ in the fourth, $d_2$ in the fifth, and $k$ in the sixth. In the one-sample prediction scheme, the future failure time $x_{D+s}\equiv y_s$, $s=1,2,\ldots,n-D$, is predicted based on the informative sample.
In this section, the point predictors (PPs) and interval predictors (IPs) of the future unknown failure time $y_s$ will be computed using classical and Bayesian methods.
First, the conditional PDF of the future failure time $y_s$ given the vector of parameters $\theta$ should be derived as follows:
Based on the informative sample $\mathbf{x}=(x_1,x_2,\ldots,x_D)$, the PDF of $y_s$ given $\theta$ is the PDF of the $s$th ordered value among the $n-D$ ordered values beyond $T^{*}$, which can be written as (see [15,19,20,21]):
$$g_1(y_s;\theta)\propto\big[R(T^{*};\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-D-s}\big[R(T^{*};\theta)\big]^{-(n-D)}f(y_s;\theta),\quad y_s>T^{*}.\tag{5}$$
Using this PDF, the conditional PDF of the future failure time $y_s$ given $\theta$ across all cases of Balakrishnan's UHCS is:
$$g_1(y_s;\theta)\propto\begin{cases}\big[R(T_1;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d-s}\big[R(T_1;\theta)\big]^{-(n-d)}f(y_s;\theta), & y_s>T_1,\ d_1=d_2=d=r,\ldots,n,\\[4pt] \big[R(x_r;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-r-s}\big[R(x_r;\theta)\big]^{-(n-r)}f(y_s;\theta), & y_s>x_r,\ d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] \big[R(T_2;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d_2-s}\big[R(T_2;\theta)\big]^{-(n-d_2)}f(y_s;\theta), & y_s>T_2,\ d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] \big[R(x_r;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-r-s}\big[R(x_r;\theta)\big]^{-(n-r)}f(y_s;\theta), & y_s>x_r,\ d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] \big[R(T_2;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d_2-s}\big[R(T_2;\theta)\big]^{-(n-d_2)}f(y_s;\theta), & y_s>T_2,\ d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] \big[R(x_k;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-k-s}\big[R(x_k;\theta)\big]^{-(n-k)}f(y_s;\theta), & y_s>x_k,\ d_2=0,\ldots,k-1.\end{cases}\tag{6}$$

2.1. Classical Method (Maximum Likelihood Prediction)

In this subsection, the PPs and IPs of $y_s$ are obtained using the following predictive likelihood function (PLF) (see [22]):
$$g_1^{*}(y_s;\theta,\mathbf{x})\propto L(\theta;\mathbf{x})\,g_1(y_s;\theta),\quad y_s>T^{*}.\tag{7}$$
Substituting (1) and (6) into (7), we have
$$g_1^{*}(y_s;\theta,\mathbf{x})\propto\begin{cases}\Big[\prod_{i=1}^{d}f(x_i;\theta)\Big]\big[R(T_1;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d-s}f(y_s;\theta), & y_s>T_1,\ d_1=d_2=d=r,\ldots,n,\\[4pt] \Big[\prod_{i=1}^{r}f(x_i;\theta)\Big]\big[R(x_r;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-r-s}f(y_s;\theta), & y_s>x_r,\ d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] \Big[\prod_{i=1}^{d_2}f(x_i;\theta)\Big]\big[R(T_2;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d_2-s}f(y_s;\theta), & y_s>T_2,\ d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] \Big[\prod_{i=1}^{r}f(x_i;\theta)\Big]\big[R(x_r;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-r-s}f(y_s;\theta), & y_s>x_r,\ d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] \Big[\prod_{i=1}^{d_2}f(x_i;\theta)\Big]\big[R(T_2;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-d_2-s}f(y_s;\theta), & y_s>T_2,\ d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] \Big[\prod_{i=1}^{k}f(x_i;\theta)\Big]\big[R(x_k;\theta)-R(y_s;\theta)\big]^{s-1}\big[R(y_s;\theta)\big]^{n-k-s}f(y_s;\theta), & y_s>x_k,\ d_2=0,\ldots,k-1.\end{cases}\tag{8}$$
Substituting (2)–(4) into (8), we have
$$g_1^{*}(y_s;\alpha,\beta,\mathbf{x})\propto\begin{cases}\alpha^{d+1}\beta^{d+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_1^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d-s}, & y_s>T_1,\ d_1=d_2=d=r,\ldots,n,\\[4pt] \alpha^{r+1}\beta^{r+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-r-s}, & y_s>x_r,\ d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] \alpha^{d_2+1}\beta^{d_2+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d_2-s}, & y_s>T_2,\ d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] \alpha^{r+1}\beta^{r+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-r-s}, & y_s>x_r,\ d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] \alpha^{d_2+1}\beta^{d_2+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d_2-s}, & y_s>T_2,\ d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] \alpha^{k+1}\beta^{k+1}y_s e^{-\beta y_s^{2}}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{k}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_k^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-k-s}, & y_s>x_k,\ d_2=0,\ldots,k-1.\end{cases}\tag{9}$$

2.1.1. Point Predictor

In this subsection, the PPs of $y_s$ will be obtained using two methods.
Method (1):
obtain the values of $\alpha$, $\beta$, and $y_s$ that maximize the logarithm of the PLF; denote them by $\alpha^{*}$, $\beta^{*}$, and $y_s^{*}$, respectively. The values $\alpha^{*}$ and $\beta^{*}$ are called the predictive maximum likelihood estimates (PMLEs), and $y_s^{*}$ is called the maximum likelihood predictor (MLP) of $y_s$.
To maximize the logarithm of the PLF, we differentiate $\log\big(g_1^{*}(y_s;\alpha,\beta,\mathbf{x})\big)$ with respect to $\alpha$, $\beta$, and $y_s$, set the resulting derivatives to zero, and solve the resulting nonlinear equations; the solution is $\alpha^{*}$, $\beta^{*}$, and $y_s^{*}$.
Method (2):
first, obtain the MLEs of the parameters $\alpha$ and $\beta$, denoted by $\hat\alpha$ and $\hat\beta$; then replace $\alpha$ and $\beta$ by $\hat\alpha$ and $\hat\beta$ in the PLF to obtain the maximum likelihood predictive function (MLPF) in the form $g_1^{**}(y_s;\mathbf{x})=g_1^{*}(y_s;\hat\alpha,\hat\beta,\mathbf{x})$; finally, the MLP of $y_s$ equals $\int_{T^{*}}^{\infty}y_s\,g_1^{**}(y_s;\mathbf{x})\,dy_s$, the mathematical expectation of the random variable $Y_s$. To obtain the MLEs of $\alpha$ and $\beta$, we differentiate the logarithm of the LF, set the resulting derivatives to zero, and solve the resulting nonlinear equations; the solution is $\hat\alpha$ and $\hat\beta$.
Based on the studied UHCS, $g_1^{**}(y_s;\mathbf{x})$ can be written in the form:
$$g_1^{**}(y_s;\mathbf{x})=A\,y_s e^{-\hat\beta y_s^{2}}\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha-1}\big[\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}-\big(1-e^{-\hat\beta T^{*2}}\big)^{\hat\alpha}\big]^{s-1}\big[1-\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}\big]^{n-D-s},\quad y_s>T^{*},\tag{10}$$
where $A$ is a normalizing constant with the value
$$A=\Bigg[\int_{T^{*}}^{\infty}y_s e^{-\hat\beta y_s^{2}}\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha-1}\big[\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}-\big(1-e^{-\hat\beta T^{*2}}\big)^{\hat\alpha}\big]^{s-1}\big[1-\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}\big]^{n-D-s}\,dy_s\Bigg]^{-1}.\tag{11}$$
So, the MLP of $y_s$ will be
$$y_s^{*}=E[Y_s]=\int_{T^{*}}^{\infty}y_s\,g_1^{**}(y_s;\mathbf{x})\,dy_s=\frac{\displaystyle\int_{T^{*}}^{\infty}y_s^{2}e^{-\hat\beta y_s^{2}}\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha-1}\big[\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}-\big(1-e^{-\hat\beta T^{*2}}\big)^{\hat\alpha}\big]^{s-1}\big[1-\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}\big]^{n-D-s}\,dy_s}{\displaystyle\int_{T^{*}}^{\infty}y_s e^{-\hat\beta y_s^{2}}\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha-1}\big[\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}-\big(1-e^{-\hat\beta T^{*2}}\big)^{\hat\alpha}\big]^{s-1}\big[1-\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}\big]^{n-D-s}\,dy_s},\tag{12}$$
where
$$\begin{cases}D=d,\ y_s>T^{*},\ T^{*}=T_1, & d_1=d_2=d=r,\ldots,n,\\ D=r,\ y_s>T^{*},\ T^{*}=x_r, & d_1=k,\ldots,r-1,\ d_2=r,\\ D=d_2,\ y_s>T^{*},\ T^{*}=T_2, & d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\ D=r,\ y_s>T^{*},\ T^{*}=x_r, & d_1=0,1,\ldots,k-1,\ d_2=r,\\ D=d_2,\ y_s>T^{*},\ T^{*}=T_2, & d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\ D=k,\ y_s>T^{*},\ T^{*}=x_k, & d_2=0,\ldots,k-1.\end{cases}\tag{13}$$
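The ratio of integrals defining $y_s^{*}$ has no closed form, so in practice it is evaluated numerically. A sketch using the trapezoid rule, with a hypothetical helper name and illustrative parameter values:

```python
import math

def mlp_one_sample(a_hat, b_hat, t_star, n, D, s, grid=4000):
    """MLP of the s-th future failure y_s: the ratio of two integrals of
    the predictive kernel, evaluated by the trapezoid rule on [T*, y_max]."""
    F = lambda y: (1 - math.exp(-b_hat * y * y)) ** a_hat
    FT = F(t_star)

    def kernel(y):
        t = math.exp(-b_hat * y * y)
        return (y * t * (1 - t) ** (a_hat - 1)
                * (F(y) - FT) ** (s - 1) * (1 - F(y)) ** (n - D - s))

    # truncate the upper limit where the remaining survival mass vanishes
    y_max = t_star
    while 1 - F(y_max) > 1e-10:
        y_max += 0.5
    h = (y_max - t_star) / grid
    num = den = 0.0
    for i in range(grid + 1):
        y = t_star + i * h
        v = kernel(y)
        w = 0.5 if i in (0, grid) else 1.0
        num += w * y * v       # numerator integrand: y * kernel
        den += w * v           # denominator integrand: kernel
    return num / den
```

As expected for order statistics, the predictor increases with $s$: the second future failure is predicted later than the first.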

2.1.2. Interval Predictor

A $(1-\tau)\times 100\%$ maximum likelihood predictive interval (MLPI) $(L_{M1},U_{M1})$ of the future failure time $y_s$ can be obtained by solving the following two nonlinear equations:
$$\int_{L_{M1}}^{\infty}g_1^{**}(y_s;\mathbf{x})\,dy_s=1-\frac{\tau}{2},\qquad\int_{U_{M1}}^{\infty}g_1^{**}(y_s;\mathbf{x})\,dy_s=\frac{\tau}{2}.\tag{14}$$
Substituting (10) and (11) into (14), the two nonlinear equations can be rewritten in the form
$$\frac{\displaystyle\int_{L_{M1}}^{\infty}\psi(y_s)\,dy_s}{\displaystyle\int_{T^{*}}^{\infty}\psi(y_s)\,dy_s}=1-\frac{\tau}{2},\qquad\frac{\displaystyle\int_{U_{M1}}^{\infty}\psi(y_s)\,dy_s}{\displaystyle\int_{T^{*}}^{\infty}\psi(y_s)\,dy_s}=\frac{\tau}{2},\tag{15}$$
where $\psi(y_s)=y_s e^{-\hat\beta y_s^{2}}\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha-1}\big[\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}-\big(1-e^{-\hat\beta T^{*2}}\big)^{\hat\alpha}\big]^{s-1}\big[1-\big(1-e^{-\hat\beta y_s^{2}}\big)^{\hat\alpha}\big]^{n-D-s}$ is the integrand of (11).
By solving the previous system, the MLPI $(L_{M1},U_{M1})$ of $y_s$ can be computed.
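Because the left-hand sides of this system are monotone in the bounds, one simple way to solve it is bisection on the normalized predictive CDF. A sketch (the function name, grid sizes, and tolerances are our choices):

```python
import math

def mlpi(a_hat, b_hat, t_star, n, D, s, tau=0.05):
    """(1 - tau) maximum likelihood predictive interval for y_s,
    found by bisection on the normalized predictive CDF."""
    F = lambda y: (1 - math.exp(-b_hat * y * y)) ** a_hat
    FT = F(t_star)
    g = lambda y: (y * math.exp(-b_hat * y * y)
                   * (1 - math.exp(-b_hat * y * y)) ** (a_hat - 1)
                   * (F(y) - FT) ** (s - 1) * (1 - F(y)) ** (n - D - s))
    y_max = t_star + 0.5
    while 1 - F(y_max) > 1e-12:      # truncate the infinite upper limit
        y_max += 0.5

    def integral(lo, hi, m=1000):    # trapezoid rule
        h = (hi - lo) / m
        return h * (g(lo) / 2 + g(hi) / 2 + sum(g(lo + i * h) for i in range(1, m)))

    total = integral(t_star, y_max)

    def quantile(p):                 # bisection: solve CDF(y) = p
        lo, hi = t_star, y_max
        for _ in range(40):
            mid = (lo + hi) / 2
            if integral(t_star, mid) / total < p:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    return quantile(tau / 2), quantile(1 - tau / 2)
```

The lower bound solves the first equation of the system and the upper bound the second, since $\int_{L}^{\infty} = 1-\tau/2$ is equivalent to the CDF at $L$ equalling $\tau/2$.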

2.2. Bayesian Method (Bayesian Prediction)

Using the following bivariate prior suggested by [23,24]:
$$\pi(\alpha,\beta)\propto\alpha^{c_1+c_3-1}\beta^{c_3-1}e^{-\alpha(\beta+c_2)},\quad\alpha>0,\ \beta>0,\ (c_1>0,\ c_2>0,\ c_3>0),\tag{16}$$
where $c_1$, $c_2$, and $c_3$ are the prior parameters (also known as hyperparameters), and replacing $f(x_i;\theta)$ and $R(x_i;\theta)$ in the LF (1) by their definitions from (2) and (4), the posterior PDF of $\alpha$ and $\beta$ can be written as:
$$\pi^{*}(\alpha,\beta;\mathbf{x})\propto\begin{cases}\alpha^{d+c_1+c_3-1}\beta^{d+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{d}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta T_1^{2}}\big)^{\alpha}\big]^{n-d}, & d_1=d_2=d=r,\ldots,n,\\[4pt] \alpha^{r+c_1+c_3-1}\beta^{r+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{n-r}, & d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] \alpha^{d_2+c_1+c_3-1}\beta^{d_2+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{n-d_2}, & d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] \alpha^{r+c_1+c_3-1}\beta^{r+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{n-r}, & d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] \alpha^{d_2+c_1+c_3-1}\beta^{d_2+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{n-d_2}, & d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] \alpha^{k+c_1+c_3-1}\beta^{k+c_3-1}e^{-\alpha(\beta+c_2)}\Big[\prod_{i=1}^{k}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta x_k^{2}}\big)^{\alpha}\big]^{n-k}, & d_2=0,\ldots,k-1.\end{cases}\tag{17}$$
Using the previous posterior PDF and the conditional PDF (6) of $y_s$ given $\alpha$ and $\beta$, after substituting the definitions of $f(x_i;\theta)$ and $R(x_i;\theta)$ from (2) and (4), the Bayesian predictive PDF of $y_s$ given $\mathbf{x}$ is as follows (see [22]):
$$h_1^{*}(y_s;\mathbf{x})=\int_{0}^{\infty}\int_{0}^{\infty}h_1(y_s;\alpha,\beta,\mathbf{x})\,d\beta\,d\alpha,\tag{18}$$
where
$$h_1(y_s;\alpha,\beta,\mathbf{x})=\pi^{*}(\alpha,\beta;\mathbf{x})\,g_1(y_s;\alpha,\beta)=\begin{cases}A_1\,y_s\,\alpha^{d+c_1+c_3}\beta^{d+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_1^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d-s}, & y_s>T_1,\ d_1=d_2=d=r,\ldots,n,\\[4pt] A_2\,y_s\,\alpha^{r+c_1+c_3}\beta^{r+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-r-s}, & y_s>x_r,\ d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] A_3\,y_s\,\alpha^{d_2+c_1+c_3}\beta^{d_2+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d_2-s}, & y_s>T_2,\ d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] A_4\,y_s\,\alpha^{r+c_1+c_3}\beta^{r+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-r-s}, & y_s>x_r,\ d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] A_5\,y_s\,\alpha^{d_2+c_1+c_3}\beta^{d_2+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-d_2-s}, & y_s>T_2,\ d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] A_6\,y_s\,\alpha^{k+c_1+c_3}\beta^{k+c_3}e^{-(\alpha(\beta+c_2)+\beta y_s^{2})}\big(1-e^{-\beta y_s^{2}}\big)^{\alpha-1}\Big[\prod_{i=1}^{k}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}-\big(1-e^{-\beta x_k^{2}}\big)^{\alpha}\big]^{s-1}\big[1-\big(1-e^{-\beta y_s^{2}}\big)^{\alpha}\big]^{n-k-s}, & y_s>x_k,\ d_2=0,\ldots,k-1,\end{cases}\tag{19}$$
and $A_i$, $i=1,2,\ldots,6$, are normalizing constants.
The Bayesian predictor (BP) of $y_s$ equals (see [22]):
$$y_s^{**}=E[Y_s]=\int_{T^{*}}^{\infty}y_s\,h_1^{*}(y_s;\mathbf{x})\,dy_s,\tag{20}$$
and the $(1-\tau)\times 100\%$ Bayesian predictive interval (BPI) $(L_{B1},U_{B1})$ of $y_s$ can be obtained by solving the following two nonlinear equations:
$$\int_{L_{B1}}^{\infty}h_1^{*}(y_s;\mathbf{x})\,dy_s=1-\frac{\tau}{2},\qquad\int_{U_{B1}}^{\infty}h_1^{*}(y_s;\mathbf{x})\,dy_s=\frac{\tau}{2}.\tag{21}$$
It is clear that the previous system involves double integration over $\alpha$ and $\beta$, which makes finding its solution very complicated. In this situation, the Gibbs sampler and the Metropolis–Hastings algorithm were used to generate a random sample $(\alpha^{(1)},\beta^{(1)}),(\alpha^{(2)},\beta^{(2)}),\ldots,(\alpha^{(K)},\beta^{(K)})$ from the posterior PDF (17); the system (21) then takes the form
$$\frac{\displaystyle\sum_{i=1}^{K}\int_{L_{B1}}^{\infty}h_1(y_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dy_s}{\displaystyle\sum_{i=1}^{K}\int_{T^{*}}^{\infty}h_1(y_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dy_s}=1-\frac{\tau}{2},\qquad\frac{\displaystyle\sum_{i=1}^{K}\int_{U_{B1}}^{\infty}h_1(y_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dy_s}{\displaystyle\sum_{i=1}^{K}\int_{T^{*}}^{\infty}h_1(y_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dy_s}=\frac{\tau}{2}.\tag{22}$$
By solving this system, the B P I , ( L B 1 , U B 1 ) , for y s will be obtained.
For more details about the Gibbs sampler and Metropolis–Hastings algorithms, see, for example [25,26,27,28].
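A minimal random-walk Metropolis sketch of this sampling step, targeting the posterior kernel (17) written case-generically through $(T^{*},D)$. For simplicity, a single Metropolis kernel updates both parameters jointly on the log scale (the paper's Gibbs-within-Metropolis variant would alternate coordinates); the hyperparameter values, step size, and chain length are illustrative:

```python
import math
import random

def log_post(a, b, xs, t_star, n, c1=2.0, c2=1.5, c3=2.0):
    """Log of the posterior kernel: prior pi(a,b) ~ a^(c1+c3-1) b^(c3-1)
    e^(-a(b+c2)) times the UHCS likelihood with D = len(xs) failures."""
    if a <= 0 or b <= 0:
        return -math.inf
    lp = (c1 + c3 - 1) * math.log(a) + (c3 - 1) * math.log(b) - a * (b + c2)
    for x in xs:
        t = math.exp(-b * x * x)
        lp += math.log(2 * a * b * x) - b * x * x + (a - 1) * math.log(1 - t)
    lp += (n - len(xs)) * math.log(1 - (1 - math.exp(-b * t_star ** 2)) ** a)
    return lp

def mh_sample(xs, t_star, n, K=4000, step=0.2, seed=1):
    """Random-walk Metropolis on (log a, log b); the +la+lb terms are the
    Jacobian correction for the log transform, which keeps proposals positive."""
    rng = random.Random(seed)
    target = lambda la, lb: log_post(math.exp(la), math.exp(lb),
                                     xs, t_star, n) + la + lb
    la = lb = 0.0
    cur = target(la, lb)
    chain = []
    for _ in range(K):
        pa = la + rng.gauss(0.0, step)
        pb = lb + rng.gauss(0.0, step)
        prop = target(pa, pb)
        if math.log(rng.random()) < prop - cur:   # Metropolis acceptance
            la, lb, cur = pa, pb, prop
        chain.append((math.exp(la), math.exp(lb)))
    return chain[K // 2:]                          # discard burn-in
```

The retained draws $(\alpha^{(i)},\beta^{(i)})$ are then plugged into the Monte Carlo system (22).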

3. Two-Sample Prediction

Assume that $\mathbf{x}=(x_1,x_2,\ldots,x_D)$ and $\mathbf{z}=(z_1,z_2,\ldots,z_m)$ represent the informative sample from the studied UHCS and a future ordered sample of size $m$, respectively. The two samples are assumed to be independent.
In this section, PPs and IPs of the observation $z_s$, $s=1,2,\ldots,m$, will be obtained using the classical and Bayesian methods. The conditional PDF of the observation $z_s$ given the vector of parameters $\theta$ is the PDF of the $s$th ordered value among the $m$ ordered values, which can be written as (see [15,22]):
$$g_2(z_s;\theta)\propto\big[1-R(z_s;\theta)\big]^{s-1}\big[R(z_s;\theta)\big]^{m-s}f(z_s;\theta),\quad z_s>0.\tag{23}$$
Using the definitions of $R(x;\theta)$ and $f(x;\theta)$ from (2) and (4) in (23), the conditional PDF of $z_s$ given $\theta$ becomes:
$$g_2(z_s;\alpha,\beta)\propto\alpha\beta z_s e^{-\beta z_s^{2}}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s},\quad z_s>0.\tag{24}$$
Based on the two-sample scheme and the same prior (16), the PPs and IPs of $z_s$ are obtained in the following subsections.

3.1. Maximum Likelihood Prediction (Point and Interval Predictors)

The MLPF can be obtained from (24) by replacing each parameter with its MLE, giving
$$g_2^{**}(z_s;\mathbf{x})=B\,z_s e^{-\hat\beta z_s^{2}}\big(1-e^{-\hat\beta z_s^{2}}\big)^{s\hat\alpha-1}\big[1-\big(1-e^{-\hat\beta z_s^{2}}\big)^{\hat\alpha}\big]^{m-s},\quad z_s>0,\tag{25}$$
where $B$ is a normalizing constant with the value
$$B=\Bigg[\int_{0}^{\infty}z_s e^{-\hat\beta z_s^{2}}\big(1-e^{-\hat\beta z_s^{2}}\big)^{s\hat\alpha-1}\big[1-\big(1-e^{-\hat\beta z_s^{2}}\big)^{\hat\alpha}\big]^{m-s}\,dz_s\Bigg]^{-1}.\tag{26}$$
So, the MLP of $z_s$ will be
$$z_s^{*}=E[Z_s]=\int_{0}^{\infty}z_s\,g_2^{**}(z_s;\mathbf{x})\,dz_s=\frac{\displaystyle\int_{0}^{\infty}z_s^{2}e^{-\hat\beta z_s^{2}}\big(1-e^{-\hat\beta z_s^{2}}\big)^{s\hat\alpha-1}\big[1-\big(1-e^{-\hat\beta z_s^{2}}\big)^{\hat\alpha}\big]^{m-s}\,dz_s}{\displaystyle\int_{0}^{\infty}z_s e^{-\hat\beta z_s^{2}}\big(1-e^{-\hat\beta z_s^{2}}\big)^{s\hat\alpha-1}\big[1-\big(1-e^{-\hat\beta z_s^{2}}\big)^{\hat\alpha}\big]^{m-s}\,dz_s}.\tag{27}$$
A $(1-\tau)\times 100\%$ MLPI $(L_{M2},U_{M2})$ of $z_s$ can be obtained by solving the following two nonlinear equations:
$$\int_{L_{M2}}^{\infty}g_2^{**}(z_s;\mathbf{x})\,dz_s=1-\frac{\tau}{2},\qquad\int_{U_{M2}}^{\infty}g_2^{**}(z_s;\mathbf{x})\,dz_s=\frac{\tau}{2}.\tag{28}$$
Substituting (25) and (26) into (28), the two nonlinear equations can be rewritten in the form
$$\frac{\displaystyle\int_{L_{M2}}^{\infty}\varphi(z_s)\,dz_s}{\displaystyle\int_{0}^{\infty}\varphi(z_s)\,dz_s}=1-\frac{\tau}{2},\qquad\frac{\displaystyle\int_{U_{M2}}^{\infty}\varphi(z_s)\,dz_s}{\displaystyle\int_{0}^{\infty}\varphi(z_s)\,dz_s}=\frac{\tau}{2},\tag{29}$$
where $\varphi(z_s)=z_s e^{-\hat\beta z_s^{2}}\big(1-e^{-\hat\beta z_s^{2}}\big)^{s\hat\alpha-1}\big[1-\big(1-e^{-\hat\beta z_s^{2}}\big)^{\hat\alpha}\big]^{m-s}$ is the integrand of (26).
By solving the previous system, the MLPI $(L_{M2},U_{M2})$ of $z_s$ can be computed.
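The two-sample point predictor $z_s^{*}$ can be evaluated the same way as in the one-sample case, and the interval bounds follow from the same bisection idea. A trapezoid-rule sketch with a hypothetical helper name and illustrative parameter values:

```python
import math

def mlp_two_sample(a_hat, b_hat, m, s, grid=4000):
    """MLP of the s-th order statistic z_s of a future sample of size m:
    a ratio of two integrals of the predictive kernel, via trapezoid rule."""
    F = lambda z: (1 - math.exp(-b_hat * z * z)) ** a_hat
    g = lambda z: (z * math.exp(-b_hat * z * z)
                   * (1 - math.exp(-b_hat * z * z)) ** (s * a_hat - 1)
                   * (1 - F(z)) ** (m - s))
    z_max = 1.0
    while 1 - F(z_max) > 1e-12:       # truncate where survival mass vanishes
        z_max += 0.5
    h = z_max / grid
    num = den = 0.0
    for i in range(1, grid + 1):      # the kernel vanishes at z = 0
        z = i * h
        v = g(z)
        w = 0.5 if i == grid else 1.0
        num += w * z * v
        den += w * v
    return num / den
```

As expected, the predicted order statistics increase with $s$.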

3.2. Bayesian Prediction (Point and Interval Predictors)

The Bayesian predictive PDF of $z_s$ given $\mathbf{x}$ is as follows:
$$h_2^{*}(z_s;\mathbf{x})=\int_{0}^{\infty}\int_{0}^{\infty}h_2(z_s;\alpha,\beta,\mathbf{x})\,d\beta\,d\alpha,\tag{30}$$
where
$$h_2(z_s;\alpha,\beta,\mathbf{x})=\pi^{*}(\alpha,\beta;\mathbf{x})\,g_2(z_s;\alpha,\beta)=\begin{cases}B_1\,z_s\,\alpha^{d+c_1+c_3}\beta^{d+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{d}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta T_1^{2}}\big)^{\alpha}\big]^{n-d}, & z_s>0,\ d_1=d_2=d=r,\ldots,n,\\[4pt] B_2\,z_s\,\alpha^{r+c_1+c_3}\beta^{r+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{n-r}, & z_s>0,\ d_1=k,\ldots,r-1,\ d_2=r,\\[4pt] B_3\,z_s\,\alpha^{d_2+c_1+c_3}\beta^{d_2+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{n-d_2}, & z_s>0,\ d_1=k,\ldots,r-1,\ d_2=k,\ldots,r-1,\ d_1\le d_2,\\[4pt] B_4\,z_s\,\alpha^{r+c_1+c_3}\beta^{r+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{r}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta x_r^{2}}\big)^{\alpha}\big]^{n-r}, & z_s>0,\ d_1=0,1,\ldots,k-1,\ d_2=r,\\[4pt] B_5\,z_s\,\alpha^{d_2+c_1+c_3}\beta^{d_2+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{d_2}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta T_2^{2}}\big)^{\alpha}\big]^{n-d_2}, & z_s>0,\ d_1=0,\ldots,k-1,\ d_2=k,\ldots,r-1,\\[4pt] B_6\,z_s\,\alpha^{k+c_1+c_3}\beta^{k+c_3}e^{-(\alpha(\beta+c_2)+\beta z_s^{2})}\big(1-e^{-\beta z_s^{2}}\big)^{s\alpha-1}\Big[\prod_{i=1}^{k}x_i e^{-\beta x_i^{2}}\big(1-e^{-\beta x_i^{2}}\big)^{\alpha-1}\Big]\big[1-\big(1-e^{-\beta z_s^{2}}\big)^{\alpha}\big]^{m-s}\big[1-\big(1-e^{-\beta x_k^{2}}\big)^{\alpha}\big]^{n-k}, & z_s>0,\ d_2=0,\ldots,k-1,\end{cases}\tag{31}$$
and $B_i$, $i=1,2,\ldots,6$, are normalizing constants.
The BP of $z_s$ equals
$$z_s^{**}=E[Z_s]=\int_{0}^{\infty}z_s\,h_2^{*}(z_s;\mathbf{x})\,dz_s,\tag{32}$$
and the $(1-\tau)\times 100\%$ BPI $(L_{B2},U_{B2})$ of $z_s$ can be obtained by solving the following two nonlinear equations:
$$\int_{L_{B2}}^{\infty}h_2^{*}(z_s;\mathbf{x})\,dz_s=1-\frac{\tau}{2},\qquad\int_{U_{B2}}^{\infty}h_2^{*}(z_s;\mathbf{x})\,dz_s=\frac{\tau}{2}.\tag{33}$$
Using $(\alpha^{(1)},\beta^{(1)}),(\alpha^{(2)},\beta^{(2)}),\ldots,(\alpha^{(K)},\beta^{(K)})$ generated from the posterior PDF (17), the system (33) takes the form
$$\frac{\displaystyle\sum_{i=1}^{K}\int_{L_{B2}}^{\infty}h_2(z_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dz_s}{\displaystyle\sum_{i=1}^{K}\int_{0}^{\infty}h_2(z_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dz_s}=1-\frac{\tau}{2},\qquad\frac{\displaystyle\sum_{i=1}^{K}\int_{U_{B2}}^{\infty}h_2(z_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dz_s}{\displaystyle\sum_{i=1}^{K}\int_{0}^{\infty}h_2(z_s;\alpha^{(i)},\beta^{(i)},\mathbf{x})\,dz_s}=\frac{\tau}{2}.\tag{34}$$
By solving this system, the B P I , ( L B 2 , U B 2 ) , for z s will be obtained.
From the results of the second and third sections, it is clear that the classical method of prediction (and of inference in general), the maximum likelihood approach, depends only on an informative sample from the studied distribution under a suggested censoring scheme and does not use any additional information about the population parameters. The Bayesian method, by contrast, depends on the same informative sample plus additional information about the population parameters, represented by the prior distribution of the parameters. This generally leads to better results; the results obtained from the samples in the next section verify this.
In the absence of information about the population parameters, we have two choices: use the Bayesian approach under a vague prior, or use the classical method.

4. Results

In this section, one- and two-sample P P s and I P s using the classical and Bayesian approaches were obtained based on simulated and real data sets.

4.1. Simulated Results

The predictive process takes in historical data to predict which areas and parts of an asset will fail, and at what time. The technician can receive relevant and accurate data points remotely. The collected data are then analyzed with predictive algorithms to determine which parts are most likely to fail. This information is communicated to workers via collaboration tools and data visualization, with which they can perform maintenance work only on the parts that require it. By implementing a predictive maintenance solution (Figure 1), organizations will know when to schedule a specific part replacement and will be alerted to future degradation due to faulty parts.
In this section, the P P s and I P s of future failure times are computed, in one- and two-sample schemes, using the classical and Bayesian methods based on a generated U H C S informative sample for different values of r , k , T 1 , and T 2 as follows:
  1. For a given set of prior parameters $c_1$, $c_2$, and $c_3$, the population parameters $\alpha$ and $\beta$ are generated from the joint prior (16).
  2. Using the $\alpha$ and $\beta$ obtained in step 1, a sample of size $n$ of ordered values from Burr-X is generated.
  3. For different values of $n$, $r$, $k$, $T_1$, and $T_2$, a UHCS informative sample is generated from the complete sample in step 2.
  4. For different values of $n$, $r$, $k$, $T_1$, and $T_2$, the PPs and IPs of the future failure times are computed using the classical and Bayesian methods in a one-sample scheme, as explained in Section 2.
  5. The same is done in a two-sample scheme, as explained in Section 3.
  6. For each future failure time, the PP, the IP, the length of the IP, and the coverage percentage (CP) of the IP are computed.
  7. The results are summarized in Table 1 and Table 2.
    From Table 1 and Table 2, observe the following:
    (a)
For fixed $r$, $k$, $T_1$, and $T_2$, the length and the CP of the IP increase as $s$ increases, because the element $y_s$ or $z_s$ will be larger, which widens its predictive interval and, therefore, its CP.
    (b)
    In all six cases of the studied U H C S :
    • The length and the CP of the IP decrease as the ratio $D/n$ increases, which means that the results improve as the available information increases.
    • In the cases with a constant ratio $D/n$ and fixed $r$, $T_1$, and $T_2$, the length and the CP of the IP decrease as $k$ increases, which shows that the results improve as $k$ increases.
    (c)
In all cases, the lengths of the IPs are shorter for the Bayesian method than for the classical method, which means that the Bayesian method outperforms the classical method.
    (d)
In all cases, the Bayesian CPs are lower than those computed by the classical method, which is also a criterion indicating that the results obtained using the Bayes method are better than those obtained using the classical method.
    (e)
    The values r , k , T 1 , and T 2 have been chosen so as to give all six cases of the studied U H C S .
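Steps 1–3 of the simulation above can be sketched end-to-end. The prior (16) factorizes as $\alpha\sim\mathrm{Gamma}(c_1,\text{rate }c_2)$ and $\beta\mid\alpha\sim\mathrm{Gamma}(c_3,\text{rate }\alpha)$, which is how the parameters are drawn below; the function name and argument values are illustrative:

```python
import math
import random

def simulate_uhcs(n, k, r, T1, T2, c1, c2, c3, seed=0):
    """Steps 1-3: draw (alpha, beta) from the joint prior, generate n
    ordered Burr-X lifetimes by CDF inversion, then apply the UHCS
    stopping rules to obtain the informative sample and T*."""
    rng = random.Random(seed)
    alpha = rng.gammavariate(c1, 1.0 / c2)       # alpha ~ Gamma(c1, rate c2)
    beta = rng.gammavariate(c3, 1.0 / alpha)     # beta | alpha ~ Gamma(c3, rate alpha)
    xs = sorted(math.sqrt(-math.log(1 - rng.random() ** (1 / alpha)) / beta)
                for _ in range(n))
    xk, xr = xs[k - 1], xs[r - 1]
    if xr <= T1:                                 # case 1: stop at T1
        t_star, D = T1, sum(1 for x in xs if x <= T1)
    elif xr <= T2:                               # cases 2 and 4: stop at x_r
        t_star, D = xr, r
    elif xk <= T2:                               # cases 3 and 5: stop at T2
        t_star, D = T2, sum(1 for x in xs if x <= T2)
    else:                                        # case 6: stop at x_k
        t_star, D = xk, k
    return alpha, beta, xs[:D], t_star
```

The returned informative sample and stopping time then feed the one- and two-sample predictors of Section 2 and Section 3.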

4.2. Data Analysis

In this section, two real data sets are introduced and analyzed using the Burr-X model. The studied real data sets are from [8]. The first data set represents the failure times, in hours, of 15 devices, and the second represents the times, in months, to first failure of 20 electronic cards. These real data sets are:
Data I:
0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.76, 4.85, 6.5, 7.35, 8.01, 8.27, 12.06 and 31.75.
Data II:
0.9, 1.5, 2.3, 3.2, 3.9, 5.0, 6.2, 7.5, 8.3, 10.4, 11.1, 12.6, 15.0, 16.3, 19.3, 22.6, 24.8, 31.5, 38.1 and 53.0.
In Table 3, the MLEs of the parameters $\alpha$ and $\beta$ and the corresponding Kolmogorov–Smirnov (K-S) test statistic were computed under the Burr-X model.
Under the significance level $0.05$ and using the Kolmogorov–Smirnov table, the critical value of the K-S test statistic is $0.33760$, which is greater than the computed K-S test statistics for the two real data sets under the Burr-X model. This means that the studied model fits the two real data sets well.
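The reported K-S statistic compares the empirical CDF of each data set with the fitted Burr-X CDF. A sketch of the computation over Data I; the $(\alpha,\beta)$ values passed below are placeholders for illustration, not the Table 3 estimates:

```python
import math

def ks_statistic(data, alpha, beta):
    """One-sample Kolmogorov-Smirnov distance between the empirical CDF
    and a Burr-X CDF with the given (illustrative) parameters."""
    xs = sorted(data)
    n = len(xs)
    F = lambda x: (1 - math.exp(-beta * x * x)) ** alpha
    # D_n = max over i of the gaps just before and just after each jump
    return max(max((i + 1) / n - F(x), F(x) - i / n)
               for i, x in enumerate(xs))

data_1 = [0.19, 0.78, 0.96, 1.31, 2.78, 3.16, 4.15, 4.76, 4.85, 6.5,
          7.35, 8.01, 8.27, 12.06, 31.75]
```

Comparing the computed distance with the tabulated critical value (0.33760 at the 0.05 level for these sample sizes, per the text) reproduces the goodness-of-fit decision.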
PPs and IPs of the remaining future failure times ($y_s$, $s=1,2,3,4$) and of the first four observations ($z_s$, $s=1,2,3,4$) from an independent ordered sample were computed based on a generated Balakrishnan UHCS informative sample from the given real data sets; they are summarized in Table 4, Table 5, Table 6 and Table 7.
From the previous tables and figures, we can observe (for fixed $r$, $k$, $T_1$, and $T_2$):
  • The length of the predictive intervals increases as $s$ increases because, as mentioned previously, the element $y_s$ or $z_s$ will be larger, which widens its predictive interval.
  • The length of the predictive intervals computed by the Bayesian method is less than that computed by the classical method, which means that the Bayesian technique is better than the classical technique.
  • For the Bayes and classical approaches, and for all values of $s$, the exact value of $y_s$ lies within its predictive interval.
  • From Figure 2A,B, Figure 3A,B, Figure 4A,B and Figure 5A,B, we can observe that:
    (a)
The red broken line, which represents the true value of the observation to be predicted, is located between the two broken lines that represent the lower and upper bounds of the predictive intervals, which confirms observation 3 above.
    (b)
    The lower bounds increase by increasing s.
    (c)
    The upper bounds also increase by increasing s.
  • From Figure 2C, Figure 3C, Figure 4C and Figure 5C, we can observe:
    (a)
    The length of the predictive interval increase by increasing s, which confirms with 1.
    (b)
    The lengths of the predictive intervals obtained using the Bayes approach are less than that obtained by the classical approach, which confirms with 2.
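Observations 1 and 2 can be checked directly from the reported interval bounds. The snippet below uses the ML and Bayesian bounds for y_1, …, y_4 from Table 5 (real data set II) and verifies that lengths grow with s and that the Bayesian intervals are uniformly shorter:

```python
# Interval bounds (lower, upper) for y_1..y_4 from Table 5, real data set II
ml = [(3.51081, 6.01779), (4.89192, 7.91478),
      (6.09047, 9.82138), (6.12081, 10.55201)]
bayes = [(3.78207, 5.66372), (5.03927, 7.07429),
         (6.24718, 8.57770), (6.74039, 10.55102)]

ml_len = [u - l for l, u in ml]
bayes_len = [u - l for l, u in bayes]

# Observation 1: interval length increases with s
assert all(a < b for a, b in zip(ml_len, ml_len[1:]))
assert all(a < b for a, b in zip(bayes_len, bayes_len[1:]))
# Observation 2: Bayesian intervals are shorter than ML intervals
assert all(b < m for b, m in zip(bayes_len, ml_len))

print(ml_len, bayes_len)
```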

5. Conclusions

In this paper, the PPs and IPs of future failure times from the Burr-X model were computed based on a UHCS (suggested by Balakrishnan et al. (2008)) informative sample for different values of r, k, T1, and T2, using both classical and Bayesian approaches, and the two approaches were compared. Two real engineering data sets were introduced and analyzed using the Burr-X model to confirm that the studied model fits them well. Based on a UHCS informative sample generated from these real data sets, the PPs and IPs of the future failure times under one- and two-sample schemes were computed using the classical and Bayesian approaches. The predictive intervals obtained using the Bayesian approach were shorter than those computed by the classical approach, indicating that the Bayesian approach performs better. In addition to the tabular presentation of the results for the real data sets, graphical descriptions were also provided. The results confirm that statistical prediction can be used to perform predictive tasks concerning the condition of industrial equipment.

Author Contributions

Data curation, A.S.A.; Formal analysis, S.F.A. and A.A.A.M.; Funding acquisition, S.F.A. and A.S.A.; Investigation, S.F.A. and A.A.A.M.; Project administration, A.S.A.; Resources, S.F.A. and A.S.A.; Software, S.F.A.; Supervision, A.A.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education, Saudi Arabia, for funding this research work through project number IFPRP:373-662-1442 and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
BP: Bayesian predictor
BPI: Bayesian predictive interval
CS: Censoring scheme
CDF: Cumulative distribution function
CP: Coverage probability
HCS: Hybrid censoring scheme
IPs: Interval predictors
LF: Likelihood function
ML: Maximum likelihood
MLEs: Maximum likelihood estimates
MLP: Maximum likelihood predictor
MLPF: Maximum likelihood predictive function
MLPI: Maximum likelihood predictive interval
PDF: Probability density function
PLF: Predictive likelihood function
PMLEs: Predictive maximum likelihood estimates
PPs: Point predictors
RF: Reliability function
UHCS: Unified hybrid censoring scheme

References

  1. Calabria, R.; Guida, M.; Pulcini, G. Point estimation of future failure times of a repairable system. Reliab. Eng. Syst. Saf. 1990, 28, 23–34. [Google Scholar] [CrossRef]
  2. Scalabrini Sampaio, G.; de Aguiar Vallim Filho, A.R.; da Silva, L.S.; da Silva, L.A. Prediction of Motor Failure Time Using An Artificial Neural Network. Sensors 2019, 19, 4342. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Alghamdi, A.S. Partially accelerated model for analyzing competing risks data from Gompertz population under type-I generalized hybrid censoring scheme. Complexity 2021, 2021, 9925094. [Google Scholar] [CrossRef]
  4. Chandrasekar, B.; Childs, A.; Balakrishnan, N. Exact likelihood inference for the exponential distribution under generalized Type-I and Type-II hybrid censoring. Nav. Res. Logist. 2004, 51, 994–1004. [Google Scholar] [CrossRef]
  5. Childs, A.; Chandrasekar, B.; Balakrishnan, N.; Kundu, D. Exact likelihood inference based on Type- I and Type-II hybrid censored samples from the exponential distribution. Ann. Inst. Stat. Math. 2003, 55, 319–330. [Google Scholar] [CrossRef]
  6. Lawless, J.F. Statistical Models and Methods for Lifetime Data; John Wiley and Sons: New York, NY, USA, 1982. [Google Scholar]
  7. Burr, I.W. Cumulative frequency functions. Ann. Math. Stat. 1942, 13, 215–232. [Google Scholar] [CrossRef]
  8. Zimmer, W.J.; Keats, J.B.; Wang, F.K. The Burr-XII distribution in reliability analysis. J. Qual. Technol. 1998, 30, 386–394. [Google Scholar] [CrossRef]
  9. Kim, C.; Chung, Y. Bayesian estimation of P(Y < X) from Burr-type X model containing spurious observations. Stat. Pap. 2006, 47, 643–651. [Google Scholar]
  10. Rastogi, M.K.; Tripathi, Y.M. Inference on unknown parameters of a Burr distribution under hybrid censoring. Stat. Pap. 2013, 54, 619–643. [Google Scholar] [CrossRef]
  11. Abd EL-Baset, A.A.; Magdy, E.E.; Tahani, A.A. Estimation under Burr type X distribution based on doubly type II censored sample of dual generalized order statistics. J. Egypt. Math. Soc. 2015, 23, 391–396. [Google Scholar]
  12. Arabi Belaghi, R.; Noori Asl, M. Estimation based on progressively type-I hybrid censored data from the Burr XII distribution. Stat. Pap. 2019, 60, 761–803. [Google Scholar] [CrossRef]
  13. Mohammadi, M.; Reza, A.; Behzadi, M.H.; Singh, S. Estimation and prediction based on type-I hybrid censored data from the Poisson-Exponential distribution. Commun. Stat. Simul. Comput. 2019, 1–26. [Google Scholar] [CrossRef]
  14. Rabie, A.; Li, J. Inferences for Burr-X Model Based on Unified Hybrid Censored Data. Int. J. Appl. Math. 2019, 49, 1–7. [Google Scholar]
  15. Ateya, S.F.; Amein, M.M.; Mohammed, H.S. Prediction under an adaptive progressive type-II censoring scheme for Burr Type-XII distribution. Commun. Stat.-Theory Methods 2020, 1–13. [Google Scholar] [CrossRef]
  16. Balakrishnan, N.; Rasouli, A.; Sanjari Farsipour, N. Exact likelihood inference based on an unified hybrid censored sample from the exponential distribution. J. Stat. Comput. Simul. 2008, 78, 475–488. [Google Scholar] [CrossRef]
  17. Osatohanmwen, F.; Oyegue, F.O.; Ogbonmwan, S.M. The Weibull-Burr XII {log logistic} Poisson lifetime model. J. Stat. Manag. Syst. 2021, 1–36. [Google Scholar] [CrossRef]
  18. Aslam, M.; Usman, R.M.; Raqab, M.Z. A new generalized Burr XII distribution with real life applications. J. Stat. Manag. Syst. 2021, 24, 521–543. [Google Scholar] [CrossRef]
  19. David, H.A. Order Statistics, 2nd ed.; John Wiley and Sons, Inc.: New York, NY, USA, 1981. [Google Scholar]
  20. Balakrishnan, N.; Shafay, A.R. One- and Two-Sample Bayesian Prediction Intervals Based on Type-II Hybrid Censored Data. Commun. Stat.-Theory Methods 2012, 41, 1511–1531. [Google Scholar] [CrossRef]
  21. Nigm, A.M.; Al-Hussaini, E.K.; Jaheen, Z.F. Bayesian one-sample prediction of future observations under Pareto distribution. Statistics 2003, 37, 527–536. [Google Scholar] [CrossRef]
  22. Ateya, S.F.; Mohammed, H.S. Prediction Under Burr-XII Distribution Based on Generalized Type-II Progressive Hybrid Censoring Scheme. JOEMS 2018, 26, 491–508. [Google Scholar] [CrossRef]
  23. Ateya, S.F. Estimation under modified Weibull distribution based on right censored generalized order statistics. J. Appl. Stat. 2013, 40, 2720–2734. [Google Scholar] [CrossRef]
  24. Ateya, S.F. Estimation under Inverse Weibull Distribution based on Balakrishnan’s Unified Hybrid Censored Scheme. Commun. Stat. Simul. Comput. 2017, 46, 3645–3666. [Google Scholar] [CrossRef]
  25. Jaheen, Z.F.; Al Harbi, M.M. Bayesian estimation for the exponentiated Weibull model via Markov chain Monte Carlo simulation. Commun. Stat. Simul. Comput. 2011, 40, 532–543. [Google Scholar] [CrossRef]
  26. Press, S.J. Subjective and Objective Bayesian Statistics: Principles, Models and Applications; John Wiley and Sons: New York, NY, USA, 2003. [Google Scholar]
  27. Upadhyaya, S.K.; Gupta, A. A Bayes analysis of modified Weibull distribution via Markov chain Monte Carlo simulation. J. Stat. Comput. Simul. 2010, 80, 241–254. [Google Scholar] [CrossRef]
  28. Upadhyaya, S.K.; Vasishta, N.; Smith, A.F.M. Bayes inference in life testing and reliability via Markov chain Monte Carlo simulation. Sankhya A 2001, 63, 15–40. [Google Scholar]
Figure 1. The four-stage engineering maintenance process: reactive, periodic, proactive, and predictive.
Figure 2. (A) ML one-sample predictive intervals based on sample I; (B) Bayesian one-sample predictive intervals based on sample I; (C) lengths of the one-sample predictive intervals based on sample I.
Figure 3. (A) ML one-sample predictive intervals based on sample II; (B) Bayesian one-sample predictive intervals based on sample II; (C) lengths of one-sample predictive intervals based on sample II.
Figure 4. (A) ML two-sample predictive intervals based on sample I; (B) Bayesian two-sample predictive intervals based on sample I; (C) lengths of two-sample predictive intervals based on sample I.
Figure 5. (A) ML two-sample predictive intervals based on sample II; (B) Bayesian two-sample predictive intervals based on sample II; (C) lengths of two-sample predictive intervals based on sample II.
Table 1. PPs and IPs of the future failure time y_s, s = 1, 2, 3, 4, based on the generated Balakrishnan UHCS informative sample; (α = 3.1811, β = 1.5779), (c1 = 4.8, c2 = 2.5, c3 = 4.5).
Values of (T1, T2) = (1.5, 1.6)

(r, k) = (10, 5), (n, D) = (20, 12):
  ML:
    Generated y_s: 1.03809, 1.11608, 1.57305, 1.65624
    PP: 0.91676, 1.0914, 1.4711, 1.6674
    IP: (0.87340, 1.12306), (0.82512, 1.13621), (1.10036, 1.61367), (1.42708, 1.97736)
    Length: 0.24966, 0.31109, 0.51331, 0.55028
    CP: 0.9176, 0.92177, 0.93001, 0.93816
  Bayes:
    Generated y_s: 1.03809, 1.11608, 1.57305, 1.65624
    PP: 0.91053, 1.1537, 1.5018, 1.6214
    IP: (0.87103, 0.98636), (1.02104, 1.22278), (1.16172, 1.47264), (1.29082, 1.70564)
    Length: 0.11523, 0.20174, 0.31092, 0.41482
    CP: 0.9037, 0.9165, 0.9275, 0.9310

(r, k) = (10, 7), (n, D) = (20, 14):
  ML:
    Generated y_s: 1.03809, 1.11608, 1.57305, 1.65624
    PP: 1.18077, 1.32147, 1.59917, 1.79101
    IP: (1.09166, 1.22885), (1.21106, 1.42664), (1.38102, 1.68120), (1.49016, 1.88918)
    Length: 0.13719, 0.21558, 0.30018, 0.39902
    CP: 0.9001, 0.91130, 0.92719, 0.92991
  Bayes:
    Generated y_s: 1.03809, 1.11608, 1.57305, 1.65624
    PP: 1.17914, 1.34221, 1.62017, 1.77082
    IP: (1.11057, 1.23958), (1.20184, 1.4052), (1.42061, 1.69377), (1.58062, 1.94235)
    Length: 0.12901, 0.20336, 0.27316, 0.36173
    CP: 0.8997, 0.91055, 0.9206, 0.9227

Values of (T1, T2) = (1.3, 1.5)

(r, k) = (25, 5), (n, D) = (30, 25):
  ML:
    Generated y_s: 0.66209, 0.89038, 0.91005, 1.75215
    PP: 0.58144, 0.9102, 0.88152, 1.70119
    IP: (0.40068, 0.79311), (0.60106, 1.09158), (0.74061, 1.27075), (1.12105, 1.85211)
    Length: 0.39243, 0.49052, 0.53014, 0.73106
    CP: 0.95161, 0.96173, 0.96106, 0.97015
  Bayes:
    Generated y_s: 0.662089, 0.890378, 0.910053, 1.75215
    PP: 0.64825, 0.86239, 0.94931, 1.69046
    IP: (0.50151, 0.86203), (0.76175, 1.16289), (0.95276, 1.46868), (1.39173, 1.90765)
    Length: 0.36052, 0.40114, 0.51592, 0.61058
    CP: 0.9483, 0.9553, 0.9581, 0.98813

(r, k) = (25, 10), (n, D) = (30, 25):
  ML:
    Generated y_s: 0.662089, 0.890378, 0.910053, 1.75215
    PP: 0.66131, 0.86561, 0.94072, 1.78334
    IP: (0.49106, 0.78122), (0.73083, 1.11254), (0.807016, 1.26818), (1.24608, 1.94780)
    Length: 0.29016, 0.38171, 0.46117, 0.70172
    CP: 0.94814, 0.95231, 0.95618, 0.96131
  Bayes:
    Generated y_s: 0.662089, 0.890378, 0.910053, 1.75215
    PP: 0.67013, 0.88172, 0.90157, 1.74105
    IP: (0.58043, 0.79278), (0.72063, 1.07077), (0.83803, 1.24985), (1.14473, 1.82615)
    Length: 0.21225, 0.35041, 0.41182, 0.68142
    CP: 0.9381, 0.9481, 0.9511, 0.9591

(r, k) = (25, 15), (n, D) = (30, 21):
  ML:
    Generated y_s: 1.42077, 1.50944, 1.54494, 1.63888
    PP: 1.39354, 1.48593, 1.51207, 1.65472
    IP: (1.31075, 1.56622), (1.32194, 1.86571), (1.34528, 1.70186), (1.45619, 1.84651)
    Length: 0.25547, 0.27377, 0.35658, 0.45032
    CP: 0.9695, 0.9726, 0.9799, 0.9801
  Bayes:
    Generated y_s: 1.42077, 1.50944, 1.54494, 1.63888
    PP: 1.43406, 1.49225, 1.53821, 1.64152
    IP: (1.40089, 1.52250), (1.40916, 1.59487), (1.42534, 1.67038), (1.50294, 1.77342)
    Length: 0.12161, 0.18571, 0.24504, 0.27048
    CP: 0.9217, 0.9317, 0.9502, 0.9573

Values of (T1, T2) = (0.8, 2.5)

(r, k) = (25, 20), (n, D) = (30, 25):
  ML:
    Generated y_s: 1.42077, 1.50944, 1.54494, 1.63888
    PP: 1.40593, 1.49593, 1.57207, 1.61472
    IP: (1.32194, 1.56571), (1.32194, 1.58239), (1.34528, 1.68186), (1.25619, 1.70651)
    Length: 0.24377, 0.26045, 0.33658, 0.45032
    CP: 0.9504, 0.9551, 0.9623, 0.9708
  Bayes:
    Generated y_s: 1.42077, 1.50944, 1.54494, 1.63888
    PP: 1.41948, 1.51021, 1.54843, 1.63451
    IP: (1.35285, 1.47209), (1.40421, 1.65515), (1.48797, 1.81620), (1.52675, 1.95603)
    Length: 0.11924, 0.25094, 0.32823, 0.42928
    CP: 0.9495, 0.9525, 0.9605, 0.9693

Values of (T1, T2) = (0.8, 1.1)

(r, k) = (30, 20), (n, D) = (40, 23):
  ML:
    Generated y_s: 1.42061, 1.52062, 1.63815, 1.64518
    PP: 1.41092, 1.49332, 1.61337, 1.66319
    IP: (1.32901, 1.59942), (1.40162, 1.69674), (1.48054, 1.78922), (1.50512, 2.14725)
    Length: 0.27041, 0.29512, 0.34116, 0.64213
    CP: 0.9601, 0.9664, 0.9718, 0.9804
  Bayes:
    Generated y_s: 1.42061, 1.52062, 1.63815, 1.64518
    PP: 1.41941, 1.51804, 1.64183, 1.65184
    IP: (1.34162, 1.56357), (1.44076, 1.69407), (1.52184, 1.82301), (1.59042, 2.00790)
    Length: 0.22195, 0.25331, 0.30117, 0.41748
    CP: 0.9594, 0.9614, 0.9695, 0.9748

Values of (T1, T2) = (0.5, 0.8)

(r, k) = (30, 25), (n, D) = (40, 25):
  ML:
    Generated y_s: 1.51436, 1.60734, 1.71813, 1.72479
    PP: 1.49201, 1.58801, 1.77419, 1.75118
    IP: (1.40162, 1.663278), (1.45281, 1.74086), (1.59042, 1.91318), (1.66492, 2.25535)
    Length: 0.23116, 0.28805, 0.32276, 0.59043
    CP: 0.9584, 0.9615, 0.9697, 0.978
  Bayes:
    Generated y_s: 1.51436, 1.60734, 1.71813, 1.72479
    PP: 1.50184, 1.58294, 1.70174, 1.73182
    IP: (1.41062, 1.62090), (1.50372, 1.74565), (1.60152, 1.88221), (1.64107, 1.98290)
    Length: 0.21028, 0.24193, 0.28069, 0.34183
    CP: 0.9533, 0.9557, 0.9614, 0.9736
Table 2. PPs and IPs of the future failure time z_s, s = 1, 2, 3, 4, based on the generated Balakrishnan UHCS informative sample; (α = 3.1811, β = 1.5779), (c1 = 4.8, c2 = 2.5, c3 = 4.5).
Values of (T1, T2) = (1.5, 1.6)

(r, k) = (10, 5), (n, D) = (20, 12):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.37017, 0.69154, 0.95124, 1.24012
    IP: (0.25175, 0.46227), (0.43129, 0.80322), (0.71182, 1.30162), (1.00273, 1.59152)
    Length: 0.21052, 0.37193, 0.4898, 0.58879
    CP: 0.88153, 0.9152, 0.9205, 0.9317
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.38718, 0.71032, 0.97182, 1.27104
    IP: (0.33152, 0.53268), (0.51037, 0.82988), (0.81094, 1.20606), (1.17213, 1.65357)
    Length: 0.20116, 0.31951, 0.39512, 0.48144
    CP: 0.8781, 0.9013, 0.9114, 0.9226

(r, k) = (10, 7), (n, D) = (20, 14):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.38053, 0.71108, 0.98155, 1.30153
    IP: (0.35102, 0.54184), (0.51005, 0.82108), (0.81106, 1.22128), (1.10924, 1.63025)
    Length: 0.19082, 0.31103, 0.41022, 0.52101
    CP: 0.8771, 0.9010, 0.9113, 0.9215
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.39012, 0.73318, 1.1192, 1.34417
    IP: (0.38065, 0.56341), (0.54194, 0.83207), (0.79168, 1.16883), (1.01845, 1.50039)
    Length: 0.18276, 0.29013, 0.37715, 0.48194
    CP: 0.8771, 0.8917, 0.9016, 0.9106

Values of (T1, T2) = (1.3, 1.5)

(r, k) = (25, 5), (n, D) = (30, 25):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.38026, 0.82165, 1.16271, 1.41152
    IP: (0.15243, 0.42406), (0.40072, 0.90224), (0.87932, 1.44074), (1.27194, 1.88367)
    Length: 0.27163, 0.50152, 0.56152, 0.61173
    CP: 0.9201, 0.9332, 0.9441, 0.9505
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.39012, 0.73515, 0.92166, 1.28061
    IP: (0.27251, 0.52427), (0.50041, 0.96225), (0.88015, 1.40234), (1.18145, 1.78310)
    Length: 0.25176, 0.46184, 0.52219, 0.60165
    CP: 0.9115, 0.9271, 0.9396, 0.9471

(r, k) = (25, 10), (n, D) = (30, 25):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.40712, 0.81026, 0.99172, 1.28017
    IP: (0.35143, 0.59310), (0.53183, 0.99795), (0.80384, 1.29498), (0.93317, 1.51480)
    Length: 0.24167, 0.46612, 0.49114, 0.58163
    CP: 0.9195, 0.9307, 0.9421, 0.9497
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41005, 0.79015, 1.01823, 1.31561
    IP: (0.37041, 0.60357), (0.55272, 0.97563), (0.79043, 1.24077), (0.93962, 1.49109)
    Length: 0.23316, 0.42291, 0.45037, 0.55147
    CP: 0.9061, 0.9298, 0.9402, 0.9488

(r, k) = (25, 15), (n, D) = (30, 21):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.40815, 0.77804, 1.10926, 1.28915
    IP: (0.31629, 0.57135), (0.56052, 0.86964), (0.80057, 1.24265), (1.01748, 1.55254)
    Length: 0.25506, 0.30912, 0.44208, 0.53506
    CP: 0.9479, 0.95514, 0.9609, 0.9716
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41105, 0.78052, 1.08805, 1.30615
    IP: (0.30225, 0.547738), (0.52817, 0.82633), (0.93183, 1.3335), (1.13052, 1.64131)
    Length: 0.24513, 0.29816, 0.40167, 0.51079
    CP: 0.9397, 0.9520, 0.9593, 0.9675

Values of (T1, T2) = (0.8, 2.5)

(r, k) = (25, 20), (n, D) = (30, 25):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41201, 0.80152, 1.11026, 1.28961
    IP: (0.43172, 0.65335), (0.60332, 0.85448), (0.82184, 1.20366), (1.10286, 1.56910)
    Length: 0.22163, 0.25116, 0.38182, 0.46624
    CP: 0.9418, 0.9536, 0.9592, 0.9663
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41052, 0.79316, 1.12052, 1.29164
    IP: (0.45132, 0.66194), (0.65293, 0.89786), (0.87281, 1.23444), (1.17148, 1.60341)
    Length: 0.21062, 0.24493, 0.36163, 0.43193
    CP: 0.9406, 0.9513, 0.9554, 0.9614

Values of (T1, T2) = (0.8, 1.1)

(r, k) = (30, 20), (n, D) = (40, 23):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41903, 0.79016, 0.99173, 1.3201
    IP: (0.32183, 0.58324), (0.53148, 0.83165), (0.81064, 1.22247), (1.10573, 1.68725)
    Length: 0.26141, 0.30017, 0.41183, 0.58152
    CP: 0.9726, 0.9775, 0.9802, 0.9892
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.41525, 0.77812, 1.03124, 1.31902
    IP: (0.35149, 0.59301), (0.56028, 0.84226), (0.79104, 1.18118), (0.97823, 1.51955)
    Length: 0.24152, 0.28198, 0.39014, 0.54132
    CP: 0.9618, 0.9693, 0.9715, 0.9801

Values of (T1, T2) = (0.5, 0.8)

(r, k) = (30, 25), (n, D) = (40, 25):
  ML:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.4111, 0.7902, 1.1058, 1.3111
    IP: (0.31047, 0.56566), (0.55061, 0.84878), (0.81718, 1.20735), (1.03081, 1.59187)
    Length: 0.25519, 0.29817, 0.39017, 0.56106
    CP: 0.9594, 0.9615, 0.9694, 0.9772
  Bayes:
    Generated z_s: 0.41303, 0.78654, 1.09878, 1.31352
    PP: 0.40183, 0.79163, 1.10815, 1.29284
    IP: (0.28071, 0.52488), (0.51148, 0.80667), (0.79208, 1.15819), (1.10273, 1.62478)
    Length: 0.24417, 0.29519, 0.36611, 0.52205
    CP: 0.9523, 0.9594, 0.9611, 0.9731
Table 3. MLEs of the parameters and the associated KS statistics based on the real data sets I and II.
Data set I: α̂ = 0.436377, β̂ = 0.0218652; KS = 0.193849
Data set II: α̂ = 0.611733, β̂ = 0.140011; KS = 0.247625
Table 4. PPs and IPs of the future failure time y_s, s = 1, 2, 3, 4, based on a generated Balakrishnan UHCS informative sample from real data set I.
Values of (T1, T2) = (1.1, 7.5); Case No. 5

(r, k) = (12, 5), (n, D) = (15, 11):
  ML:
    True y_s: 8.01, 8.27, 12.06, 31.75
    PP: 7.5715, 8.1129, 11.3816, 33.8818
    IP: (6.77191, 9.97962), (7.17305, 10.60674), (10.48435, 14.55529), (25.66194, 39.64581)
    Length: 3.20771, 3.43369, 4.07094, 13.98387
  Bayes:
    True y_s: 8.01, 8.27, 12.06, 31.75
    PP: 7.70925, 8.19027, 11.69016, 32.20172
    IP: (7.21062, 9.51244), (7.50592, 10.42419), (10.88017, 14.53119), (27.44018, 38.34811)
    Length: 2.30182, 2.91827, 3.65102, 10.90793
Table 5. PPs and IPs of the future failure time y_s, s = 1, 2, 3, 4, based on a generated Balakrishnan UHCS informative sample from real data set II.
Values of (T1, T2) = (1.3, 1.8); Case No. 6

(r, k) = (12, 5), (n, D) = (15, 5):
  ML:
    True y_s: 5.0, 6.2, 7.5, 8.3
    PP: 4.6827, 6.0196, 7.3891, 8.9017
    IP: (3.51081, 6.01779), (4.89192, 7.91478), (6.09047, 9.82138), (6.12081, 10.55201)
    Length: 2.50698, 3.02286, 3.73091, 4.43120
  Bayes:
    True y_s: 5.0, 6.2, 7.5, 8.3
    PP: 4.7908, 6.3005, 7.4213, 8.5105
    IP: (3.78207, 5.66372), (5.03927, 7.07429), (6.24718, 8.57770), (6.74039, 10.55102)
    Length: 1.88165, 2.03502, 2.33052, 3.81063
Table 6. PPs and IPs of the future failure time z_s, s = 1, 2, 3, 4, based on a generated Balakrishnan UHCS informative sample from real data set I.
Values of (T1, T2) = (1.1, 7.5); Case No. 5

(r, k) = (12, 5), (n, D) = (15, 11):
  ML:
    Generated z_s: 0.38819, 0.49016, 0.52015, 0.60823
    PP: 0.42115, 0.50284, 0.51082, 0.58807
    IP: (0.34905, 0.51106), (0.36373, 0.54428), (0.39017, 0.60024), (0.48165, 0.7626)
    Length: 0.16201, 0.18055, 0.21007, 0.28095
  Bayes:
    Generated z_s: 0.38819, 0.49016, 0.52015, 0.60823
    PP: 0.41066, 0.48107, 0.52713, 0.60153
    IP: (0.36105, 0.49267), (0.40552, 0.56075), (0.48174, 0.65461), (0.52315, 0.76499)
    Length: 0.13162, 0.15523, 0.17287, 0.24184
Table 7. PPs and IPs of the future failure time z_s, s = 1, 2, 3, 4, based on a generated Balakrishnan UHCS informative sample from real data set II.
Values of (T1, T2) = (1.3, 1.8); Case No. 6

(r, k) = (12, 5), (n, D) = (15, 5):
  ML:
    Generated z_s: 0.28003, 0.41052, 0.48105, 0.50185
    PP: 0.30119, 0.38826, 0.41918, 0.46817
    IP: (0.20275, 0.37427), (0.25594, 0.4890), (0.30142, 0.71214), (0.35107, 0.82210)
    Length: 0.171752, 0.23306, 0.41072, 0.47103
  Bayes:
    Generated z_s: 0.28003, 0.41052, 0.48105, 0.50185
    PP: 0.29014, 0.39082, 0.45119, 0.48017
    IP: (0.26023, 0.42308), (0.27684, 0.48890), (0.33206, 0.61621), (0.39082, 0.70595)
    Length: 0.16285, 0.21206, 0.28415, 0.31513

Ateya, S.F.; Alghamdi, A.S.; Mousa, A.A.A. Future Failure Time Prediction Based on a Unified Hybrid Censoring Scheme for the Burr-X Model with Engineering Applications. Mathematics 2022, 10, 1450. https://0-doi-org.brum.beds.ac.uk/10.3390/math10091450
