Article

Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications

1 Facultad de Ingeniería y Ciencias, Universidad Adolfo Ibáñez, Diagonal Las Torres 2640, Peñalolén, Santiago 7941169, Chile
2 Department of Statistics, Oregon State University, 217 Weniger Hall, Corvallis, OR 97331, USA
3 Departamento de Matemática, Facultad de Ciencias Básicas, Universidad de Antofagasta, Antofagasta 1240000, Chile
* Author to whom correspondence should be addressed.
Submission received: 25 April 2021 / Revised: 14 May 2021 / Accepted: 18 May 2021 / Published: 20 May 2021

Abstract

We introduce a new class of distributions called the epsilon–positive family, which can be viewed as a generalization of the distributions with positive support. The construction of the epsilon–positive family is motivated by the ideas behind the generation of skew distributions using symmetric kernels. This new class of distributions has as special cases the exponential, Weibull, log–normal, log–logistic and gamma distributions, and it provides an alternative for analyzing reliability and survival data. An interesting feature of the epsilon–positive family is that it can be viewed as a finite scale mixture of positive distributions, facilitating the derivation and implementation of EM–type algorithms to obtain maximum likelihood estimates (MLE) with (un)censored data. We illustrate the flexibility of this family to analyze censored and uncensored data using two real examples. One of them was previously discussed in the literature; the second one consists of a new application to model recidivism data for a group of inmates released from Chilean prisons during 2007. The results show that this new family of distributions fits the data better than some common alternatives such as the exponential distribution.

1. Introduction

The statistical analysis of reliability and survival data is an important topic in several areas, including medicine, epidemiology, biology, economics, engineering, and environmental sciences, to name a few. When using a parametric approach, one of the first steps in modeling the data is to choose a suitable distribution that can capture the relevant features of the observations of interest. In this context, the gamma and Weibull distributions have become popular choices due to their flexibility, which allows for a non-constant hazard rate function and for modeling skewed data. Although several alternatives have been considered to accommodate different cases, researchers have continued to develop extensions and modifications of the standard distributions to increase the flexibility of the models; see [1,2,3] for a few examples.
In this paper, we consider a generalization of the distributions with positive support and propose a new family of distributions, called the epsilon–positive family, whose construction is motivated by the ideas behind the generation of skew distributions using symmetric kernels. Specifically, we build upon the ideas from [4], where the authors start with a distribution f that is symmetric around zero, and define a family of distributions indexed by a parameter γ > 0 as the set of densities of the form

h(x; γ) = [2/(γ + γ⁻¹)] [ f(x/γ) 1{x ≥ 0} + f(γx) 1{x < 0} ],   (1)

where 1_A denotes the indicator function of the set A. Some extensions of this family include the epsilon–skew–normal family introduced by [5] and the epsilon–skew–symmetric family introduced by [6], both discussed in some detail in [7].
Here, starting with a probability density function g with positive support, we obtain a general class that extends the family of distributions with positive support, and that contains the Weibull, gamma and exponential distributions as special cases, depending on the choice of g. Furthermore, we discuss a stochastic representation and how to obtain maximum likelihood estimators for the members of this family. We also derive the corresponding survival and hazard functions and note that one interesting feature of this new class is that the hazard function is not necessarily constant.
The rest of the paper is organized as follows: in Section 2 we define the epsilon–positive family and obtain the hazard and survival functions, mean residual life and stress-strength parameters for this family. In addition, we discuss maximum likelihood estimation and how to obtain such estimates using an EM-type algorithm for the general case. In Section 3, we focus on one specific member of the family introduced in Section 2, namely epsilon–exponential distribution, and discuss its applicability in the analysis of survival data. In Section 4 we discuss two real data examples and we finish with a brief discussion in Section 5. We include Appendix A and Appendix B with some of the technical details.

2. The Epsilon–Positive Family

Let g(·) = g_Y(·; Ψ) be a probability density function (pdf) with positive support and parameters Ψ ∈ ℝ^p. Then, for 0 < ε < 1, the corresponding epsilon–positive (EP) family of distributions is defined as

f_X(x; Ψ, ε) = (1/2) [ g(x/(1+ε)) + g(x/(1−ε)) ],  x > 0.   (2)
If a random variable X has the density given in (2), we say that X has an epsilon–positive distribution and write X E P ( Ψ , ε ) .
Observe that as ε → 0, f_X(x; Ψ, ε) → g(x) 1{x > 0}, and therefore the distribution g_Y(·; Ψ) can be seen as a particular member of the family.
The rth moment of X ∼ EP(Ψ, ε), r = 1, 2, …, is given by

E(X^r) = [ (1+ε)^{r+1}/2 + (1−ε)^{r+1}/2 ] E(Y^r),   (3)
where E(Y^r) is the rth moment of Y ∼ g_Y(·; Ψ). From (3) we obtain that the mean, variance, skewness (CS) and kurtosis (CK) coefficients are (respectively)

E(X) = (1 + ε²) E(Y),

Var(X) = (1 + 3ε²) E(Y²) − (1 + ε²)² E²(Y),

CS = [ (1 + 6ε² + ε⁴) E(Y³) − 3(1 + 4ε² + 3ε⁴) E(Y) E(Y²) + 2(1 + ε²)³ E³(Y) ] / [ (1 + 3ε²) E(Y²) − (1 + ε²)² E²(Y) ]^{3/2}

and

CK = A / [ (1 + 3ε²) E(Y²) − (1 + ε²)² E²(Y) ]², with

A = (1 + 10ε² + 5ε⁴) E(Y⁴) − 4(1 + 7ε² + 7ε⁴ + ε⁶) E(Y) E(Y³) + 6(1 + 5ε² + 7ε⁴ + 3ε⁶) E²(Y) E(Y²) − 3(1 + 4ε² + 6ε⁴ + 4ε⁶ + ε⁸) E⁴(Y),

where E(Y^r), r = 1, 2, 3, 4, are the first four moments of the random variable Y ∼ g_Y(·; Ψ).
To draw observations from an epsilon–positive distribution, we first notice that for 0 < ε < 1, if Y ∼ g_Y(·; Ψ) and U_ε (independent of Y) satisfies P(U_ε = 1+ε) = 1 − P(U_ε = 1−ε) = (1+ε)/2, then X = U_ε Y ∼ EP(Ψ, ε).
From this stochastic representation, it follows that we can generate EP random variables according to the Algorithm 1:
Algorithm 1 Algorithm to generate observations from an epsilon–positive distribution.
Require: Initialize the algorithm fixing Ψ and ε
  1: Generate Y from g_Y(·) and U from Ber(p = (1+ε)/2)
  2: if U = 1 then
  3:   U ε 1 + ε
  4: else
  5:   U ε 1 ε
  6: end if
  7: return X = U ε Y .
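As a concrete illustration, Algorithm 1 can be coded in a few lines. The sketch below is ours, not part of the paper: the function names are invented, and the exponential baseline with σ = 2 and ε = 0.5 is an illustrative choice. The test checks the sample mean against the theoretical value (1 + ε²)E(Y) from the moment formulas above.

```python
import numpy as np

def rep_positive(n, eps, g_sampler, seed=None):
    """Algorithm 1: draw n observations X = U_eps * Y from an
    epsilon-positive distribution, where Y ~ g and the scale factor
    U_eps equals 1+eps with probability (1+eps)/2, else 1-eps."""
    rng = np.random.default_rng(seed)
    y = g_sampler(n, rng)                          # step 1: Y ~ g
    u = np.where(rng.random(n) < (1 + eps) / 2,    # U ~ Ber((1+eps)/2)
                 1 + eps, 1 - eps)                 # steps 2-6: U_eps
    return u * y                                   # step 7: X = U_eps * Y

# Illustration with an exponential baseline, sigma = 2, eps = 0.5
sigma, eps = 2.0, 0.5
x = rep_positive(200_000, eps, lambda n, rng: rng.exponential(sigma, n), seed=1)
print(x.mean(), (1 + eps**2) * sigma)  # sample mean vs theoretical mean
```

Any positive baseline sampler (Weibull, gamma, log–normal, …) can be passed in place of the exponential one.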
Finally, observe that the definition in (2) can be easily extended so that we can represent the epsilon–positive family as a finite scale mixture of positive distributions. In fact, for any 0 < ε < 1 we can write

f_X(x; Ψ, ε) = Σ_{ξ ∈ J(ε)} π_ξ(ε) (1/ξ) g(x/ξ),   (4)

where x > 0, the π_ξ(ε) > 0 are mixing proportions satisfying Σ_{ξ ∈ J(ε)} π_ξ(ε) = 1, and J(ε) is some finite subset of ℝ⁺ that will typically depend on ε. For instance, taking J(ε) = {1−ε, 1+ε} and π_{1±ε}(ε) = (1 ± ε)/2 we recover the expression in (2). This representation will be particularly useful in order to obtain maximum likelihood estimates using EM–type algorithms, as we discuss in Section 2.5.

2.1. Reliability Properties

From the definition, we obtain that the survival function S_X(x; Ψ, ε) = P(X > x) for this family is given by

S_X(x; Ψ, ε) = ((1+ε)/2) S_Y(x/(1+ε)) + ((1−ε)/2) S_Y(x/(1−ε)),   (5)

where S_Y(·) is the survival function associated with the density g_Y(·; Ψ). Similarly, the hazard function λ_X(x; Ψ, ε) = f_X(x; Ψ, ε)/S_X(x; Ψ, ε) is given by

λ_X(x; Ψ, ε) = [1 + r(x)] / [ (1+ε) R(x/(1+ε)) + (1−ε) R(x/(1−ε)) r(x) ],   (6)

where r(x) = g_Y(x/(1−ε)) / g_Y(x/(1+ε)), and R(·) = S_Y(·)/g_Y(·) is the Mills ratio.
Table 1 shows some examples of the densities that can be extended using the definition of the epsilon–positive family, with the corresponding densities, survival and hazard functions. Figure 1 and Figure 2 show the pdf, survival and hazard functions of the epsilon-exponential, epsilon-Weibull, epsilon-log-logistic and epsilon-gamma distributions. We can see that in the case of the epsilon-Weibull, epsilon-log-logistic and epsilon-gamma distributions a bimodal shape is obtained when the value of the parameter ε is 0.9 .

2.2. Mean Residual Life

The mean residual life or life expectancy is an important characteristic of the model. It gives the expected additional lifetime given that a component has survived until time t. For a non-negative continuous random variable X ∼ EP(Ψ, ε), the mean residual life (mrl) function is defined as

mrl(t) = E(X − t | X > t) = E(X | X > t) − t,   (7)

where t > 0. The above conditional expectation is given by

E(X | X > t) = ∫_t^∞ x f_X(x)/P(X > t) dx = ∫_t^∞ x f_X(x)/[1 − F_X(t)] dx = (1/S_X(t)) ∫_t^∞ x f_X(x) dx.   (8)

Calculation of the integral in (8) is done in the same way as the calculation of the mean. Thus,

I = ∫_t^∞ x f_X(x) dx = ∫_t^∞ (x/2) [ g_Y(x/(1+ε)) + g_Y(x/(1−ε)) ] dx.

Making the changes of variables z = x/(1+ε) and u = x/(1−ε), we have

I = ((1+ε)²/2) ∫_{t/(1+ε)}^∞ z g_Y(z) dz + ((1−ε)²/2) ∫_{t/(1−ε)}^∞ u g_Y(u) du
  = ((1+ε)²/2) S_Y(t₁) ∫_{t₁}^∞ z [g_Y(z)/S_Y(t₁)] dz + ((1−ε)²/2) S_Y(t₂) ∫_{t₂}^∞ u [g_Y(u)/S_Y(t₂)] du
  = ((1+ε)²/2) S_Y(t₁) E(Y | Y > t₁) + ((1−ε)²/2) S_Y(t₂) E(Y | Y > t₂)
  = ((1+ε)²/2) S_Y(t₁) [E(Y − t₁ | Y > t₁) + t₁] + ((1−ε)²/2) S_Y(t₂) [E(Y − t₂ | Y > t₂) + t₂]
  = ((1+ε)²/2) S_Y(t₁) [mrl_Y(t₁) + t₁] + ((1−ε)²/2) S_Y(t₂) [mrl_Y(t₂) + t₂],

where t₁ = t/(1+ε), t₂ = t/(1−ε), and mrl_Y(t_i) = E(Y − t_i | Y > t_i), i = 1, 2, corresponds to the mean residual life of the random variable Y ∼ g_Y(·). Finally, Equation (7) can be written as

mrl(t) = { ((1+ε)²/2) S_Y(t/(1+ε)) [mrl_Y(t₁) + t₁] + ((1−ε)²/2) S_Y(t/(1−ε)) [mrl_Y(t₂) + t₂] } / { ((1+ε)/2) S_Y(t/(1+ε)) + ((1−ε)/2) S_Y(t/(1−ε)) } − t.   (9)

2.3. Stress-Strength Parameter

An important concept in reliability theory is the stress-strength parameter. Let X 1 denote the strength of a system or component with a stress X 2 . Then, the stress-strength parameter is defined as R = P ( X 2 < X 1 ) , which can be viewed as a measure of the system performance. In the next theorem, we look at this quantity when X 1 and X 2 are independent random variables with epsilon-positive distributions.
Theorem 1.
Suppose X₁ and X₂ are independent random variables distributed as X₁ ∼ EP(Ψ₁, ε₁) and X₂ ∼ EP(Ψ₂, ε₂). Then the reliability of the system with stress variable X₂ and strength variable X₁ is given by

R = P(X₂ < X₁) = ((1+ε₂)²/4) [ a P(Y₂ < aY₁) + b P(Y₂ < bY₁) ] + ((1−ε₂)²/4) [ c P(Y₂ < cY₁) + d P(Y₂ < dY₁) ],   (10)

where a = (1+ε₁)/(1+ε₂), b = (1−ε₁)/(1+ε₂), c = (1+ε₁)/(1−ε₂), d = (1−ε₁)/(1−ε₂), and Y_i ∼ g_{Y_i}(·; Ψ_i), i = 1, 2, with Y₁ independent of Y₂.
Proof of Theorem 1.
Making the changes of variables z = x₁/(1+ε₁) and u = x₁/(1−ε₁), we have

P(X₂ < X₁) = ∫_0^∞ F_{X₂}(x₁) f_{X₁}(x₁) dx₁
= ∫_0^∞ [ ((1+ε₂)/2) G_{Y₂}(x₁/(1+ε₂)) + ((1−ε₂)/2) G_{Y₂}(x₁/(1−ε₂)) ] × (1/2) [ g_{Y₁}(x₁/(1+ε₁)) + g_{Y₁}(x₁/(1−ε₁)) ] dx₁
= ((1+ε₂)²/4) a ∫_0^∞ G_{Y₂}(az) g_{Y₁}(z) dz + ((1+ε₂)²/4) b ∫_0^∞ G_{Y₂}(bu) g_{Y₁}(u) du + ((1−ε₂)²/4) c ∫_0^∞ G_{Y₂}(cz) g_{Y₁}(z) dz + ((1−ε₂)²/4) d ∫_0^∞ G_{Y₂}(du) g_{Y₁}(u) du
= ((1+ε₂)²/4) a P(Y₂ < aY₁) + ((1+ε₂)²/4) b P(Y₂ < bY₁) + ((1−ε₂)²/4) c P(Y₂ < cY₁) + ((1−ε₂)²/4) d P(Y₂ < dY₁).
 □
Observe that the same concept can be used to make comparisons between two systems. For example, if X₁ and X₂ denote instead the lifetimes of systems S₁ and S₂, respectively, then a probability P(X₁ < X₂) > 0.5 would indicate that system S₂ is better than system S₁ in a stochastic sense.

2.4. Maximum Likelihood Estimation

Let X̃_n = (X₁, …, X_n) be a random sample from an EP(Ψ, ε) distribution. Then, the maximum likelihood estimator (MLE) of θ = (Ψ, ε) is given by

θ̂_MLE = (Ψ̂, ε̂)_MLE = argmax_{Ψ, ε} ℓ(Ψ, ε; X̃_n),   (11)

where ℓ(Ψ, ε; X̃_n) = Σ_{i=1}^n log f_X(x_i; Ψ, ε) is the log–likelihood.
Although the MLE for the EP family is conceptually straightforward, closed form solutions are typically not available and the MLE needs to be obtained numerically. One possibility is the Newton–Raphson algorithm, with iteration equation

θ̂^(k+1) = θ̂^(k) − [H(θ̂^(k))]⁻¹ u(θ̂^(k)),   (12)

where θ̂^(k) is the current estimate of θ, u(θ) denotes the vector of first derivatives of ℓ(θ; X̃_n), and H(θ) the matrix of its second derivatives.
A disadvantage of this approach is that it requires the calculation of the second derivatives of the likelihood function and the repeated inversion of potentially large matrices, which can be computationally intensive. Instead, we can consider an expectation–maximization (EM) approach (see [8]) as a general iterative method for data sets with missing (or incomplete) data.
The mixture representation proposed in (4) is particularly useful in order to use an EM–type algorithm to estimate the model parameters, since it provides a hierarchical scheme for the E P family. Next, we show how to implement maximum likelihood estimation using an EM–type algorithm for the E P family.

2.5. MLE via the EM Algorithm

From (4), the log–likelihood takes the form,
ℓ(Ψ, ε; X̃_n) = Σ_{i=1}^n log [ Σ_{ξ ∈ J(ε)} π_ξ(ε) (1/ξ) g(x_i/ξ) ],

where the derivatives with respect to Ψ and ε typically lead to a system of equations with no closed form solution. To address this problem, we can “augment” the data X̃_n using an unobservable matrix W = (w_ij), i = 1, …, n; j = 1, …, m = |J(ε)|, with elements w_ij defined as

w_ij = 1 if observation x_i comes from the distribution (1/ξ_j) g(x/ξ_j), and w_ij = 0 otherwise,

where ξ₁, …, ξ_m denote the distinct elements of J(ε). This way, each row of W contains exactly one 1 and zeros elsewhere, and the (complete) log–likelihood for the augmented data Y = (X̃_n, W) is given by

ℓ_c(Ψ, ε; Y) = Σ_{i=1}^n Σ_{j=1}^m w_ij [ log(π_{ξ_j}(ε)/ξ_j) + log g(x_i/ξ_j) ].

Then, if we denote by θ̂^(s) = (Ψ^(s), ε^(s)) the estimate of θ = (Ψ, ε) at iteration s, and by Q(θ, θ̂^(s)) the conditional expectation of ℓ_c(θ; Y) given X̃_n and θ̂^(s), we obtain

Q(θ, θ̂^(s)) = E(ℓ_c(θ; Y) | X̃_n, θ̂^(s)) = Σ_{i=1}^n Σ_{j=1}^{m^(s)} w_ij^(s) [ log(π_{ξ_j}(ε)/ξ_j) + log g(x_i/ξ_j) ],

where m^(s) = |J(ε^(s))|, and

w_ij^(s) = π_{ξ_j}(ε^(s)) (1/ξ_j) g(x_i/ξ_j) / Σ_{j'=1}^{m^(s)} π_{ξ_{j'}}(ε^(s)) (1/ξ_{j'}) g(x_i/ξ_{j'}).
From here, it follows that for J ( ε ) = { 1 ε , 1 + ε } , the iteration s of the EM algorithm takes the form:
  • E–step: For i = 1, …, n, compute

    w_ij^(s) = g(x_i/(1+ε^(s))) / [ g(x_i/(1+ε^(s))) + g(x_i/(1−ε^(s))) ],  for ξ_j = 1+ε.

  • M–step: Given ε^(s) and Ψ^(s), compute

    ε^(s+1) = (2/n) Σ_{i=1}^n w_ij^(s) − 1, for ξ_j = 1+ε,  and  Ψ^(s+1) = argmax_Ψ Q(θ, θ̂^(s)).
The E and M steps are alternated repeatedly until a convergence criterion is satisfied. For the variance estimation of the MLEs we consider the bootstrap method suggested in [9].

3. The Epsilon–Exponential Distribution

If we take g_Y(y; Ψ = σ) = (1/σ) e^{−y/σ} 1{y > 0}, the pdf of an exponential distribution, the expression in (2) becomes

f_X(x; σ, ε) = (1/(2σ)) [ e^{−x/((1+ε)σ)} + e^{−x/((1−ε)σ)} ],  x > 0,   (13)

where σ > 0 and 0 < ε < 1. We say that a random variable X has an epsilon–exponential (EE) distribution with scale parameter σ and shape parameter ε if its density has the form in (13), and we write X ∼ EE(σ, ε).
Recall that the rth moment, r = 1, 2, …, of Y ∼ Exp(σ) is E(Y^r) = r! σ^r. From (3), when X ∼ EE(σ, ε), we obtain that the mean, variance, skewness (CS) and kurtosis (CK) coefficients are, respectively,

E(X) = (1 + ε²) σ,  Var(X) = σ² (1 + 4ε² − ε⁴),  CS = (2 + 18ε² − 6ε⁴ + 2ε⁶) / (1 + 4ε² − ε⁴)^{3/2},  CK = (9 + 120ε² + 18ε⁴ − 3ε⁸) / (1 + 4ε² − ε⁴)².
Please note that for any value of ε ∈ (0, 1), CS > 0 and CK > 0. It can be seen that 2 < CS < 2.3 and 9 < CK < 11.023. Figure 3 depicts the behavior of the skewness (CS) and kurtosis (CK) coefficients as functions of ε. In the figures we observe that the maximum skewness is attained at ε = 0.4, while the maximum kurtosis coefficient is obtained when ε = 0.37.
Recall that the survival function of the exponential distribution is S_Y(y) = e^{−y/σ} 1{y > 0} and its mean residual life is mrl_Y(t) = σ. Then, it follows from (5), (6) and (9) that if X ∼ EE(σ, ε), the survival and hazard functions and the mean residual life are (respectively)

S(x; σ, ε) = ((1+ε)/2) e^{−x/((1+ε)σ)} + ((1−ε)/2) e^{−x/((1−ε)σ)},

λ(x; σ, ε) = (1/σ) [ e^{−x/((1+ε)σ)} + e^{−x/((1−ε)σ)} ] / [ (1+ε) e^{−x/((1+ε)σ)} + (1−ε) e^{−x/((1−ε)σ)} ],  and

mrl(t) = σ [ (1+ε)² e^{−t/((1+ε)σ)} + (1−ε)² e^{−t/((1−ε)σ)} ] / [ (1+ε) e^{−t/((1+ε)σ)} + (1−ε) e^{−t/((1−ε)σ)} ].
Interestingly, in contrast to the exponential distribution, it can be shown that the hazard function λ ( x ; σ , ε ) of the EE distribution is not constant, but decreasing in x. This feature can be easily observed in Figure 1 (top panel), where we show the pdf, survival and hazard functions of the EE distribution for different values of the parameter ε when σ = 1 . Please note that λ ( x ; σ , ε ) λ Y ( x ; σ ) = 1 / σ as ε 0 . Additionally, m r l ( t ) m r l Y ( t ) = σ as ε 0 .
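These closed forms translate directly into code. The sketch below (our own function names, not from the paper) evaluates them, and the accompanying checks confirm the two properties just noted: the hazard is decreasing in x, and both λ and mrl approach their exponential counterparts as ε → 0.

```python
import numpy as np

def ee_survival(x, sigma, eps):
    """Survival function S(x; sigma, eps) of the epsilon-exponential."""
    return 0.5 * ((1 + eps) * np.exp(-x / ((1 + eps) * sigma))
                  + (1 - eps) * np.exp(-x / ((1 - eps) * sigma)))

def ee_hazard(x, sigma, eps):
    """Hazard lambda(x; sigma, eps) = f(x) / S(x), using the closed forms."""
    f = (np.exp(-x / ((1 + eps) * sigma))
         + np.exp(-x / ((1 - eps) * sigma))) / (2 * sigma)
    return f / ee_survival(x, sigma, eps)

def ee_mrl(t, sigma, eps):
    """Mean residual life of EE(sigma, eps)."""
    ep = np.exp(-t / ((1 + eps) * sigma))
    em = np.exp(-t / ((1 - eps) * sigma))
    return (sigma * ((1 + eps)**2 * ep + (1 - eps)**2 * em)
            / ((1 + eps) * ep + (1 - eps) * em))
```

At x = 0 the hazard equals 1/σ and then decays toward 1/((1+ε)σ), the rate of the slower-decaying mixture component.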
Suppose X₁ and X₂ are independent random variables distributed as X₁ ∼ EE(σ₁, ε₁) and X₂ ∼ EE(σ₂, ε₂). Using Theorem 1, the reliability of the system with stress variable X₂ and strength variable X₁ is given by

R = ((1+ε₂)²/4) [ a²σ₁/(aσ₁ + σ₂) + b²σ₁/(bσ₁ + σ₂) ] + ((1−ε₂)²/4) [ c²σ₁/(cσ₁ + σ₂) + d²σ₁/(dσ₁ + σ₂) ].

Please note that when (ε₁, ε₂) → (0, 0), R → σ₁/(σ₁ + σ₂), which corresponds to the stress–strength reliability of two exponential distributions with parameter σ₁ for X₁ (strength) and parameter σ₂ for X₂ (stress), respectively.
Maximum likelihood estimates of the parameters σ and ε of the epsilon–exponential distribution can be obtained following the strategy described in Section 2.4 and Section 2.5 (see Appendix B for details). Let X₁, X₂, …, X_n and Y₁, Y₂, …, Y_m be random samples from EE(σ₁, ε₁) and EE(σ₂, ε₂), respectively. Having estimates of (σ₁, ε₁, σ₂, ε₂), say (σ̂₁, ε̂₁, σ̂₂, ε̂₂), by the invariance property of the MLE, the MLE of R becomes

R̂ = ((1+ε̂₂)²/4) [ â²σ̂₁/(âσ̂₁ + σ̂₂) + b̂²σ̂₁/(b̂σ̂₁ + σ̂₂) ] + ((1−ε̂₂)²/4) [ ĉ²σ̂₁/(ĉσ̂₁ + σ̂₂) + d̂²σ̂₁/(d̂σ̂₁ + σ̂₂) ].
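The closed-form R (and its plug-in estimator R̂) is straightforward to evaluate. Below is a sketch with our own naming; as a sanity check, the test compares the closed form against a Monte Carlo estimate and against the exponential limit σ₁/(σ₁ + σ₂).

```python
import numpy as np

def ee_stress_strength(sig1, eps1, sig2, eps2):
    """R = P(X2 < X1) for independent X1 ~ EE(sig1, eps1) and
    X2 ~ EE(sig2, eps2), evaluated with the closed form above."""
    a = (1 + eps1) / (1 + eps2)
    b = (1 - eps1) / (1 + eps2)
    c = (1 + eps1) / (1 - eps2)
    d = (1 - eps1) / (1 - eps2)
    # For exponential baselines, P(Y2 < k*Y1) = k*sig1 / (k*sig1 + sig2),
    # so each bracketed term is k^2*sig1 / (k*sig1 + sig2).
    term = lambda k: k**2 * sig1 / (k * sig1 + sig2)
    return ((1 + eps2)**2 / 4 * (term(a) + term(b))
            + (1 - eps2)**2 / 4 * (term(c) + term(d)))
```

Replacing the parameters with their MLEs gives the plug-in estimator R̂.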

Numerical Experiments

To illustrate the properties of the estimators, we performed a small simulation study considering 5000 simulated datasets for different pairs of values of σ and ε, generated using Algorithm 1. The goal of the study is to observe the behavior of the MLEs for the model parameters using our proposed EM algorithm under different sample sizes.
Table 2 summarizes the simulation results. In the table, the columns labeled “estimate” show the average of the estimators obtained in the simulations, and the columns labeled “SD” show the sample standard deviation of the corresponding estimators. To obtain the standard errors we used the bootstrap method with B = 150 samples. We observe that the estimates are quite stable and fairly accurate, reporting (on average) numbers close to the true values of the parameters in all cases. Please note that, as expected, the precision of the estimates improves as the sample size increases.

4. Survival and Reliability Analysis

Let T ≥ 0 represent the survival time until the occurrence of a “death” event. In this context, suppose we have n subjects with lifetimes governed by a survival function S(t), and that the ith subject is observed for a time t_i. If the individual dies at time t_i, its contribution to the likelihood function is L_i = f(t_i), where f(t) = −S′(t) is the event density associated with S(t), or equivalently, L_i = S(t_i)λ(t_i), where λ(t) is the corresponding hazard function. On the other hand, if the ith individual is still alive at time t_i, then, under non–informative censoring, all we can say is that its lifetime exceeds t_i. It follows that the contribution of a censored observation to the likelihood is simply L_i = S(t_i). Notice that in either case we evaluate the survival function at time t_i, because in both cases the ith subject was alive until (at least) time t_i. A death multiplies this contribution by the hazard λ(t_i), but a censored observation does not.
We can combine these contributions into a single expression using a death indicator d i , taking the value one if individual i died and the value zero otherwise. The resulting likelihood function L is of the form
L = Π_{i=1}^n λ(t_i)^{d_i} S(t_i) = Π_{i=1}^n f(t_i)^{d_i} S(t_i)^{1−d_i}.
In the next section we will assume that the random variable T follows an epsilon–positive distribution, and show how to estimate the model parameters using the EM algorithm.

4.1. Estimation Using the EM Algorithm

Let T 1 , , T n denote the survival times, T i E P ( Ψ , ε ) . Using the notation introduced in the previous section, the observed data are a collection of pairs X ˜ n = { ( T 1 , d 1 ) , , ( T n , d n ) } , where the d i , i = 1 , , n , are the censoring indicators.
In order to implement the EM algorithm, we augment the observed data X̃_n with the unobservable matrix W defined in Section 2.5, and obtain the (complete) likelihood

L_c(θ; X̃_n, W) = Π_{i=1}^n Π_{j=1}^m { π_{ξ_j}(ε) [ (1/ξ_j) g(t_i/ξ_j) ]^{d_i} [ S(t_i/ξ_j) ]^{1−d_i} }^{w_ij},
with corresponding (complete) log–likelihood c = log L c .
Then, if θ̂^(s) = (Ψ^(s), ε^(s)) is the estimate of θ = (Ψ, ε) at iteration s, and we denote by Q(θ, θ̂^(s)) the conditional expectation of ℓ_c(θ; X̃_n, W) given the observed data X̃_n and θ̂^(s), we obtain

Q(θ, θ̂^(s)) = E(ℓ_c(θ; X̃_n, W) | X̃_n, θ̂^(s)) = Σ_{i=1}^n Σ_{j=1}^{m^(s)} w_ij^(s) [ log π_{ξ_j}(ε) + d_i log (1/ξ_j) g(t_i/ξ_j) + (1−d_i) log S(t_i/ξ_j) ],

where

w_ij^(s) = π_{ξ_j}(ε^(s)) [ (1/ξ_j) g(t_i/ξ_j) ]^{d_i} [ S(t_i/ξ_j) ]^{1−d_i} / Σ_{j'=1}^{m^(s)} π_{ξ_{j'}}(ε^(s)) [ (1/ξ_{j'}) g(t_i/ξ_{j'}) ]^{d_i} [ S(t_i/ξ_{j'}) ]^{1−d_i}.
Then, for J ( ε ) = { 1 ε , 1 + ε } , the iteration s of the algorithm takes the form:
  • E–step: For i = 1, …, n, compute

    w_ij^(s) = w_{i,+} / (w_{i,+} + w_{i,−}),  for ξ_j = 1+ε,

    where

    w_{i,+} = [ g(t_i/(1+ε^(s))) ]^{d_i} [ (1+ε^(s)) S(t_i/(1+ε^(s))) ]^{1−d_i}

    and

    w_{i,−} = [ g(t_i/(1−ε^(s))) ]^{d_i} [ (1−ε^(s)) S(t_i/(1−ε^(s))) ]^{1−d_i}.

  • M–step: Given ε^(s) and Ψ^(s), compute

    ε^(s+1) = (2/n) Σ_{i=1}^n w_ij^(s) − 1, for ξ_j = 1+ε,  and  Ψ^(s+1) = argmax_Ψ Q(θ, θ̂^(s)).

EM Algorithm for the Epsilon-Exponential Distribution

Suppose that the survival times T i E E ( σ , ε ) . Then the EM algorithm takes the form:
  • E–step: For i = 1, …, n, compute

    w_ij^(s) = w_{i,+} / (w_{i,+} + w_{i,−}),  for ξ_j = 1+ε,

    where

    w_{i,+} = [ (1/σ^(s)) e^{−t_i/((1+ε^(s))σ^(s))} ]^{d_i} [ (1+ε^(s)) e^{−t_i/((1+ε^(s))σ^(s))} ]^{1−d_i}

    and

    w_{i,−} = [ (1/σ^(s)) e^{−t_i/((1−ε^(s))σ^(s))} ]^{d_i} [ (1−ε^(s)) e^{−t_i/((1−ε^(s))σ^(s))} ]^{1−d_i}.

  • M–step: Given ε^(s) and σ^(s), compute

    ε^(s+1) = (2/n) Σ_{i=1}^n w_ij^(s) − 1, for ξ_j = 1+ε,  and  σ^(s+1) = Σ_{i=1}^n Σ_{j ∈ J(ε^(s))} w_ij^(s) t_i/ξ_j / Σ_{i=1}^n Σ_{j ∈ J(ε^(s))} w_ij^(s) d_i.

5. Real Data Examples

In this section, we use two examples to illustrate the proposed distributions using (un)censored data sets.

5.1. Example 1: Maintenance Data

First, we consider a real data set originally analyzed by [10]. The complete data set corresponds to active repair times (in hours) for an airborne communication transceiver, and can be found in Table 3.
Using the EM algorithm described in Section 4, we fit an epsilon–exponential (EE) distribution to the active repair times, obtaining a maximized log–likelihood value of −103.806. Alternatively, we also fit the exponential (Exp), exponentiated–exponential (EExp) and Weibull (Wei) distributions, obtaining maximized log–likelihood values of −105.006, −104.983 and −104.470, respectively. For model comparison, we use the Akaike information criterion (AIC) introduced in [11], given by AIC = −2ℓ̂ + 2k, where ℓ̂ is the maximized log–likelihood and k is the number of parameters of the distribution under consideration.
The best model is deemed to be the one with the smallest AIC. For the data set in the example, we obtain AIC_EE = 211.611, AIC_Exp = 212.012, AIC_EExp = 213.966, and AIC_Wei = 212.939. It follows that in terms of the AIC criterion, the epsilon–exponential shows the best performance when fitting these data. Figure 4 shows the fit of the different models used in the example. Figure 5 displays the three estimated survival functions for this data set.
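The AIC comparison above is easy to reproduce from the reported values. The helper below uses the maximized log–likelihoods and parameter counts given in this section (minor rounding differences aside); the dictionary layout and function name are ours.

```python
# Model comparison via AIC = -2 * loglik + 2 * k, using the maximized
# log-likelihood values reported above for the repair-time data.
def aic(loglik, k):
    return -2.0 * loglik + 2 * k

models = {  # name: (maximized log-likelihood, number of parameters)
    "EE":   (-103.806, 2),
    "Exp":  (-105.006, 1),
    "EExp": (-104.983, 2),
    "Wei":  (-104.470, 2),
}
scores = {name: aic(l, k) for name, (l, k) in models.items()}
best = min(scores, key=scores.get)
print(scores, best)
```

The smallest score identifies the epsilon–exponential as the preferred model.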

5.2. Example 2: Recidivism Data

For the second example, we use real data obtained from the official records of Gendarmerie of Chile on all inmates released from Chilean prisons during 2007 after serving a prison sentence for robbery.
The data set contains records of 9477 inmates and the follow–up period from release until 30 April 2012. In this study, recidivism is defined to occur when a released prisoner goes back to prison for the original or any other offense.
Overall, 52.2% of the inmates in the cohort were convicted of one or more offenses and returned to prison within 64 months of release. About 11.8% of the cohort returned to prison within three months of release, and 30% returned within a year of release.
Table 4 shows the observed proportion of the cohort returning to prison within 1, 3, 6, 12, 18, 24, 36, 48 and 64 months of release. We observe that the cumulative proportion of recidivism grew quickly over the first 12 months after release, increasing by more than 7% every 3 months. After that, the percent increases were smaller and over longer periods.
To analyze the time to recidivism, we determined the number of days between an inmate’s release and his return to prison. Because some inmates did not reoffend, we have censored data and we used the EM algorithm described in Section 4 to fit an epsilon–exponential distribution to the time to recidivism. The maximized log–likelihood value for an assumed epsilon–exponential distribution is easily calculated to be −42,067.84. In comparison, we also fit an exponential distribution yielding a maximized log–likelihood value of −42,632.81.
Looking at the AIC values, we obtain AIC_EE = 84,139.68 and AIC_Exp = 85,267.62 for the epsilon–exponential and exponential models, respectively. We therefore conclude that, based on this criterion, the epsilon–exponential is a better model for these data.
Finally, we also analyzed the survival time using the Kaplan–Meier estimator. Figure 5 displays the three estimated survival functions for this data set. We observe a close agreement between the Kaplan–Meier survival curve and the epsilon–exponential distribution.

6. Discussion

We introduced a new class of distributions with positive support, called epsilon–positive, which can be generated from any distribution with positive support. This new class includes the exponential, Weibull and log–normal distributions, among others, as special cases. We discussed a stochastic representation for this family, as well as parameter estimation using the maximum likelihood approach via the Newton–Raphson algorithm. In addition, we showed that the elements of this new family can be expressed as a finite scale mixture of positive distributions, which facilitates the implementation of EM–type algorithms.
We then focused on a particular member of this family, the epsilon–exponential distribution, and discussed its applicability in the analysis of survival and reliability data. In this context, we considered the censored data case and showed how this new family can be used to analyze such data sets. For the new class of distributions and, in particular, for the epsilon–exponential distribution, we estimated the model parameters using the EM algorithm. An interesting feature of the hazard function of the epsilon–exponential distribution is that, unlike that of the exponential distribution, it is not constant. This feature increases the flexibility of the model, allowing its use in a broader range of scenarios.
This greater flexibility is corroborated in the two examples considered in this paper, where the AIC criterion shows a better performance of our proposed epsilon–exponential model when compared to commonly used alternatives such as the exponential distribution. The results suggest that the epsilon–exponential distribution should be considered a legitimate alternative for the analysis of survival and reliability data in both the censored and uncensored cases.

Author Contributions

Conceptualization, R.d.l.C., H.W.G.; methodology, P.C., R.d.l.C., C.F. and H.W.G.; software, P.C.; validation, R.d.l.C. and H.W.G.; formal analysis, P.C.; investigation, P.C., R.d.l.C., C.F. and H.W.G.; resources, R.d.l.C. and H.W.G.; data curation, P.C. and R.d.l.C.; writing–original draft preparation, P.C., R.d.l.C., C.F. and H.W.G.; writing–review and editing, P.C., R.d.l.C., C.F. and H.W.G.; visualization, P.C.; supervision, R.d.l.C. and H.W.G.; project administration, R.d.l.C. and H.W.G.; funding acquisition, R.d.l.C. and H.W.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by ANID FONDECYT grant number 1181662 & Anillo ACT–87 project, and grant SEMILLERO UA-2021 (Chile).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors would like to thank Gendarmerie of Chile for sharing the data on recidivism used in the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. The EE–MLE

For a sample of n independent identically distributed (i.i.d.) observations X̃_n = (X₁, …, X_n) from EE(σ, ε), the parameter θ = (σ, ε) has to be estimated from the data. The log–likelihood function is given by

ℓ = ℓ(σ, ε; X̃_n) = −n log(2σ) + Σ_{i=1}^n log [ e^{−x_i/((1+ε)σ)} + e^{−x_i/((1−ε)σ)} ].   (A1)

Differentiating (A1) with respect to σ and ε and equating to 0, we obtain the likelihood equations

∂ℓ/∂σ = −n/σ + [1/(σ²(1+ε)(1−ε))] Σ_{i=1}^n x_i a_i(σ, ε) = 0,  ∂ℓ/∂ε = [1/(σ(1+ε)²(1−ε)²)] Σ_{i=1}^n x_i b_i(σ, ε) = 0,

where

a_i(σ, ε) = [ (1−ε) e^{−x_i/((1+ε)σ)} + (1+ε) e^{−x_i/((1−ε)σ)} ] / [ e^{−x_i/((1+ε)σ)} + e^{−x_i/((1−ε)σ)} ]   (A2)

and

b_i(σ, ε) = [ (1−ε)² e^{−x_i/((1+ε)σ)} − (1+ε)² e^{−x_i/((1−ε)σ)} ] / [ e^{−x_i/((1+ε)σ)} + e^{−x_i/((1−ε)σ)} ].   (A3)
Please note that no closed form solutions are available for the MLEs of σ and ε, so the Newton–Raphson algorithm can be implemented. For the epsilon–exponential distribution, let u(θ) = ∂ℓ/∂θ = (∂ℓ/∂σ, ∂ℓ/∂ε)ᵀ be the vector of first derivatives and H_{2×2}(θ) = ∂²ℓ/∂θ∂θᵀ the symmetric matrix of second partial derivatives, with entries

H₁₁ = ∂²ℓ/∂σ²,  H₁₂ = ∂²ℓ/∂σ∂ε  and  H₂₂ = ∂²ℓ/∂ε².
$$H_{11} = \frac{\partial^2 \ell}{\partial \sigma^2} = \frac{n}{\sigma^2} - \frac{2}{\sigma^3 (1+\varepsilon)(1-\varepsilon)} \sum_{i=1}^{n} x_i\, a_i(\sigma,\varepsilon) + \frac{1}{\sigma^4 (1+\varepsilon)^2 (1-\varepsilon)^2} \sum_{i=1}^{n} x_i^2 \left[ d_i(\sigma,\varepsilon) - a_i^2(\sigma,\varepsilon) \right],$$
$$H_{22} = \frac{\partial^2 \ell}{\partial \varepsilon^2} = \frac{4 \varepsilon}{\sigma (1+\varepsilon)^3 (1-\varepsilon)^3} \sum_{i=1}^{n} x_i\, b_i(\sigma,\varepsilon) - \frac{2}{\sigma (1+\varepsilon)^2 (1-\varepsilon)^2} \sum_{i=1}^{n} x_i\, a_i(\sigma,\varepsilon) + \frac{1}{\sigma^2 (1+\varepsilon)^4 (1-\varepsilon)^4} \sum_{i=1}^{n} x_i^2 \left[ h_i(\sigma,\varepsilon) - b_i^2(\sigma,\varepsilon) \right],$$
$$H_{12} = \frac{\partial^2 \ell}{\partial \varepsilon\, \partial \sigma} = \frac{2 \varepsilon}{\sigma^2 (1+\varepsilon)^2 (1-\varepsilon)^2} \sum_{i=1}^{n} x_i\, a_i(\sigma,\varepsilon) + \frac{1}{\sigma^2 (1+\varepsilon)(1-\varepsilon)} \sum_{i=1}^{n} x_i\, \frac{e^{-x_i/(1-\varepsilon)\sigma} - e^{-x_i/(1+\varepsilon)\sigma}}{e^{-x_i/(1+\varepsilon)\sigma} + e^{-x_i/(1-\varepsilon)\sigma}} + \frac{1}{\sigma^3 (1+\varepsilon)^3 (1-\varepsilon)^3} \sum_{i=1}^{n} x_i^2 \left[ c_i(\sigma,\varepsilon) - a_i(\sigma,\varepsilon)\, b_i(\sigma,\varepsilon) \right],$$
where
$$c_i(\sigma,\varepsilon) = \frac{(1-\varepsilon)^3\, e^{-x_i/(1+\varepsilon)\sigma} - (1+\varepsilon)^3\, e^{-x_i/(1-\varepsilon)\sigma}}{e^{-x_i/(1+\varepsilon)\sigma} + e^{-x_i/(1-\varepsilon)\sigma}}, \qquad d_i(\sigma,\varepsilon) = \frac{(1-\varepsilon)^2\, e^{-x_i/(1+\varepsilon)\sigma} + (1+\varepsilon)^2\, e^{-x_i/(1-\varepsilon)\sigma}}{e^{-x_i/(1+\varepsilon)\sigma} + e^{-x_i/(1-\varepsilon)\sigma}},$$
and
$$h_i(\sigma,\varepsilon) = \frac{(1-\varepsilon)^4\, e^{-x_i/(1+\varepsilon)\sigma} + (1+\varepsilon)^4\, e^{-x_i/(1-\varepsilon)\sigma}}{e^{-x_i/(1+\varepsilon)\sigma} + e^{-x_i/(1-\varepsilon)\sigma}},$$
and the quantities a i ( σ , ε ) and b i ( σ , ε ) are defined in Equations (A2) and (A3), respectively.
The functions $u(\theta)$ and $H_{2\times 2}(\theta)$ define the terms of the Newton–Raphson iteration equation given in (12). To implement the Newton–Raphson algorithm, we can use the method of moments estimates of σ and ε as starting values.
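In practice, the log-likelihood (A1) can also be handed to a generic quasi-Newton optimizer instead of hand-coding the Newton–Raphson updates. The sketch below assumes NumPy and SciPy; the simulation setup and the starting values are our illustrative choices (a crude mean-based start rather than the moment estimators), not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(theta, x):
    """Negative log-likelihood of EE(sigma, eps), computed stably via logaddexp."""
    sigma, eps = theta
    la = -x / ((1 + eps) * sigma)
    lb = -x / ((1 - eps) * sigma)
    return x.size * np.log(2.0 * sigma) - np.sum(np.logaddexp(la, lb))

# Simulate from EE(sigma=0.5, eps=0.3) via its finite scale-mixture representation:
# scale (1+eps)*sigma with probability (1+eps)/2, scale (1-eps)*sigma otherwise.
rng = np.random.default_rng(7)
n, sigma0, eps0 = 5000, 0.5, 0.3
comp = rng.random(n) < (1 + eps0) / 2
x = rng.exponential(np.where(comp, (1 + eps0) * sigma0, (1 - eps0) * sigma0))

# Quasi-Newton (L-BFGS-B) with box constraints keeping sigma > 0 and |eps| < 1.
fit = minimize(neg_loglik, x0=np.array([x.mean(), 0.5]), args=(x,),
               method="L-BFGS-B", bounds=[(1e-6, None), (-0.999, 0.999)])
sigma_hat, eps_hat = fit.x
```

Note that the likelihood is symmetric under ε → −ε (the two mixture components swap), so the optimizer may land on either sign of ε depending on the starting value.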
Next, we show that the EM–type algorithm described in Section 2.5 can be implemented to find the MLEs of the parameters of the epsilon–exponential distribution.

Appendix B. An EM–Type Algorithm for the EE–MLE

In order to implement the EM algorithm described in Section 2.5 to estimate the model parameters of the epsilon–exponential distribution, we need to choose $g(u) = \frac{1}{\sigma} e^{-u/\sigma}$. Let $\tilde{X}_n = (X_1, \ldots, X_n)$ be a random sample from $EE(\sigma, \varepsilon)$. The complete log–likelihood is
$$\ell_c(\sigma, \varepsilon; Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij} \left[ \log\!\left( \frac{\pi_{\xi_j}(\varepsilon)}{\xi_j} \right) - \frac{x_i}{\xi_j \sigma} - \log \sigma \right].$$
Let θ ^ ( s ) = ( σ ( s ) , ε ( s ) ) be the estimate of θ = ( σ , ε ) at iteration s, and denote by Q ( θ , θ ^ ( s ) ) the conditional expectation of c ( θ ; Y ) given the observed data X ˜ n and θ ^ ( s ) . We obtain
$$Q(\theta, \hat{\theta}^{(s)}) = E\!\left[ \ell_c(\theta; Y) \,\middle|\, \tilde{X}_n, \hat{\theta}^{(s)} \right] = \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij}^{(s)} \left[ \log\!\left( \frac{\pi_{\xi_j}(\varepsilon)}{\xi_j} \right) - \frac{x_i}{\xi_j \sigma} \right] - n \log \sigma,$$
where $w_{ij}^{(s)} = \dfrac{\pi_{\xi_j}(\varepsilon^{(s)})\, \frac{1}{\xi_j}\, e^{-x_i/\xi_j \sigma^{(s)}}}{\sum_{j'=1}^{m} \pi_{\xi_{j'}}(\varepsilon^{(s)})\, \frac{1}{\xi_{j'}}\, e^{-x_i/\xi_{j'} \sigma^{(s)}}}$. Therefore, iteration s of the algorithm takes the form:
  • E–step: For $i = 1, \ldots, n$, compute
$$w_{ij}^{(s)} = \frac{e^{-x_i/(1+\varepsilon^{(s)})\sigma^{(s)}}}{e^{-x_i/(1+\varepsilon^{(s)})\sigma^{(s)}} + e^{-x_i/(1-\varepsilon^{(s)})\sigma^{(s)}}}, \qquad \text{for the component with } \xi_j = 1 + \varepsilon.$$
  • M–step: Given $\varepsilon^{(s)}$ and $\sigma^{(s)}$, compute
$$\varepsilon^{(s+1)} = \frac{2}{n} \sum_{i=1}^{n} w_{ij}^{(s)} - 1 \quad (\xi_j = 1 + \varepsilon), \qquad \sigma^{(s+1)} = \frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{m} w_{ij}^{(s)}\, \frac{x_i}{\xi_j}.$$

References

  1. Mudholkar, G.S.; Srivastava, D.K.; Kollia, G.D. A generalization of the Weibull distribution with application to the analysis of survival data. J. Am. Stat. Assoc. 1996, 91, 1575–1583. [Google Scholar] [CrossRef]
  2. Gupta, R.D.; Kundu, D. Exponentiated exponential family: An alternative to gamma and Weibull distributions. Biom. J. 2001, 43, 117–130. [Google Scholar] [CrossRef]
  3. Cooray, K. Analyzing lifetime data with long-tailed skewed distribution: The logistic-sinh family. Stat. Model. 2005, 5, 343–358. [Google Scholar] [CrossRef]
  4. Fernández, C.; Steel, M.F. On Bayesian modeling of fat tails and skewness. J. Am. Stat. Assoc. 1998, 93, 359–371. [Google Scholar]
  5. Mudholkar, G.S.; Hutson, A.D. The epsilon skew normal distribution for analyzing near normal data. J. Statist. Plann. Inference 2000, 83, 291–309. [Google Scholar] [CrossRef]
  6. Arellano-Valle, R.B.; Gómez, H.W.; Quintana, F.A. Statistical inference for a general class of asymmetric distributions. J. Statist. Plann. Inference 2005, 128, 427–443. [Google Scholar] [CrossRef]
  7. Jones, M. A note on rescalings, reparametrizations and classes of distributions. J. Statist. Plann. Inference 2006, 136, 3730–3733. [Google Scholar] [CrossRef]
  8. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B Stat. Methodol. 1977, 39, 1–38. [Google Scholar]
  9. Givens, G.H.; Hoeting, J.A. Computational Statistics, 2nd ed.; John Wiley & Sons: New York, NY, USA, 2012. [Google Scholar]
  10. Chhikara, R.S.; Folks, J.L. The inverse Gaussian distribution as a lifetime model. Technometrics 1977, 19, 461–468. [Google Scholar] [CrossRef]
  11. Akaike, H. A new look at the statistical model identification. IEEE Trans. Automat. Contr. 1974, 19, 716–723. [Google Scholar] [CrossRef]
Figure 1. Examples of the probability density f ( x ) , survival S ( x ) and hazard λ ( x ) functions of epsilon-exponential distribution, E E ( σ , ε ) , and epsilon-Weibull distribution, E W ( α , σ , ε ) . Please note that the exponential and Weibull distributions correspond to the case ε = 0 .
Figure 2. Examples of the probability density f ( x ) , survival S ( x ) and hazard λ ( x ) functions of epsilon-log-logistic distribution, E L L ( σ , ε ) , and epsilon-gamma distribution, E G ( α , σ , ε ) . Please note that the log-logistic and gamma distributions correspond to the case ε = 0 .
Figure 3. Skewness (CS) and kurtosis (CK) coefficients for X ∼ E E ( σ , ε ) .
Figure 4. The density functions of the fitted epsilon exponential, exponential, Weibull and exponentiated exponential distributions.
Figure 5. Fit of the survival functions: Kaplan–Meier estimator (solid line), exponential (red dashed line) and epsilon–exponential (blue dashed line) distributions.
Table 1. Hazard rate, λ ( · ) , survival, S ( · ) , and density, f ( · ) , functions of some probability models that can be generalized using the definition in (2). In the table, $I(a, \beta) = \int_0^a \Gamma(\beta)^{-1} u^{\beta-1} e^{-u}\, du$.
Distribution | λ(y) | S(y) | f(y)
Exponential (σ > 0) | 1/σ | exp(−y/σ) | (1/σ) exp(−y/σ)
Weibull (β, σ > 0) | (β/σ^β) y^(β−1) | exp(−(y/σ)^β) | (β y^(β−1)/σ^β) exp(−(y/σ)^β)
Log-logistic (β, σ > 0) | (β/σ)(y/σ)^(β−1) / (1 + (y/σ)^β) | (1 + (y/σ)^β)^(−1) | (β/σ)(y/σ)^(β−1) / (1 + (y/σ)^β)²
Gamma (β, σ > 0) | f(y)/S(y) | 1 − I(y/σ, β) | y^(β−1) exp(−y/σ) / (σ^β Γ(β))
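The three columns of Table 1 are tied together by the identity λ(y) = f(y)/S(y), which gives a quick numerical sanity check of the entries. A small sketch for the Weibull and log-logistic rows (the parameter values β = 2.5, σ = 1.7 are arbitrary):

```python
import numpy as np

beta, sigma = 2.5, 1.7
y = np.linspace(0.1, 5.0, 50)

# Weibull row: S(y) = exp(-(y/sigma)^beta), f(y) = beta y^(beta-1)/sigma^beta * S(y)
S_w = np.exp(-(y / sigma) ** beta)
f_w = beta * y ** (beta - 1) / sigma ** beta * S_w
lam_w = (beta / sigma ** beta) * y ** (beta - 1)

# Log-logistic row: S(y) = (1 + (y/sigma)^beta)^(-1)
S_ll = 1.0 / (1.0 + (y / sigma) ** beta)
f_ll = (beta / sigma) * (y / sigma) ** (beta - 1) / (1.0 + (y / sigma) ** beta) ** 2
lam_ll = (beta / sigma) * (y / sigma) ** (beta - 1) / (1.0 + (y / sigma) ** beta)
```

In both cases f(y)/S(y) reproduces λ(y) exactly.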
Table 2. Summary of simulation results.
True σ | True ε | n | σ estimate (SD) | ε estimate (SD)
0.3 | 0.3 | 50 | 0.286 (0.047) | 0.376 (0.126)
0.3 | 0.3 | 100 | 0.292 (0.037) | 0.350 (0.127)
0.3 | 0.3 | 200 | 0.292 (0.030) | 0.335 (0.118)
0.3 | 0.3 | 500 | 0.295 (0.022) | 0.317 (0.106)
0.3 | 0.3 | 1000 | 0.299 (0.017) | 0.302 (0.089)
0.3 | 0.5 | 50 | 0.287 (0.054) | 0.558 (0.133)
0.3 | 0.5 | 100 | 0.290 (0.043) | 0.542 (0.129)
0.3 | 0.5 | 200 | 0.294 (0.037) | 0.527 (0.123)
0.3 | 0.5 | 500 | 0.297 (0.030) | 0.512 (0.111)
0.3 | 0.5 | 1000 | 0.298 (0.023) | 0.506 (0.091)
0.3 | 0.8 | 50 | 0.299 (0.059) | 0.818 (0.126)
0.3 | 0.8 | 100 | 0.299 (0.047) | 0.809 (0.122)
0.3 | 0.8 | 200 | 0.301 (0.038) | 0.801 (0.108)
0.3 | 0.8 | 500 | 0.303 (0.029) | 0.794 (0.088)
0.3 | 0.8 | 1000 | 0.302 (0.022) | 0.794 (0.069)
0.5 | 0.3 | 50 | 0.476 (0.079) | 0.379 (0.125)
0.5 | 0.3 | 100 | 0.484 (0.062) | 0.355 (0.125)
0.5 | 0.3 | 200 | 0.486 (0.049) | 0.337 (0.121)
0.5 | 0.3 | 500 | 0.494 (0.037) | 0.316 (0.107)
0.5 | 0.3 | 1000 | 0.497 (0.029) | 0.303 (0.088)
0.5 | 0.5 | 50 | 0.477 (0.088) | 0.558 (0.133)
0.5 | 0.5 | 100 | 0.484 (0.074) | 0.542 (0.131)
0.5 | 0.5 | 200 | 0.488 (0.059) | 0.528 (0.124)
0.5 | 0.5 | 500 | 0.494 (0.048) | 0.512 (0.109)
0.5 | 0.5 | 1000 | 0.498 (0.039) | 0.502 (0.091)
0.5 | 0.8 | 50 | 0.484 (0.104) | 0.838 (0.129)
0.5 | 0.8 | 100 | 0.499 (0.076) | 0.809 (0.119)
0.5 | 0.8 | 200 | 0.503 (0.064) | 0.799 (0.110)
0.5 | 0.8 | 500 | 0.505 (0.049) | 0.793 (0.089)
0.5 | 0.8 | 1000 | 0.504 (0.038) | 0.793 (0.070)
0.8 | 0.3 | 50 | 0.762 (0.125) | 0.374 (0.124)
0.8 | 0.3 | 100 | 0.771 (0.099) | 0.356 (0.125)
0.8 | 0.3 | 200 | 0.779 (0.080) | 0.336 (0.120)
0.8 | 0.3 | 500 | 0.789 (0.058) | 0.314 (0.105)
0.8 | 0.3 | 1000 | 0.796 (0.046) | 0.301 (0.089)
0.8 | 0.5 | 50 | 0.765 (0.139) | 0.555 (0.134)
0.8 | 0.5 | 100 | 0.771 (0.116) | 0.543 (0.134)
0.8 | 0.5 | 200 | 0.782 (0.096) | 0.528 (0.123)
0.8 | 0.5 | 500 | 0.789 (0.076) | 0.515 (0.110)
0.8 | 0.5 | 1000 | 0.797 (0.062) | 0.503 (0.091)
0.8 | 0.8 | 50 | 0.791 (0.153) | 0.815 (0.130)
0.8 | 0.8 | 100 | 0.800 (0.126) | 0.804 (0.122)
0.8 | 0.8 | 200 | 0.798 (0.103) | 0.798 (0.106)
0.8 | 0.8 | 500 | 0.806 (0.079) | 0.794 (0.089)
0.8 | 0.8 | 1000 | 0.807 (0.060) | 0.793 (0.070)
Table 3. Repair times (in hours) of 46 transceivers.
0.2  0.3  0.5  0.5  0.5  0.5  0.6  0.6  0.7  0.7
0.7  0.8  0.8  1.0  1.0  1.0  1.0  1.1  1.3  1.5
1.5  1.5  1.5  2.0  2.0  2.2  2.5  2.7  3.0  3.0
3.3  3.3  4.0  4.0  4.5  4.7  5.0  5.4  5.4  7.0
7.5  8.8  9.0  10.3  22.0  24.5
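For illustration, the repair times in Table 3 can be fit by maximum likelihood and compared against the one-parameter exponential model, which is nested in EE(σ, ε) at ε = 0. A sketch assuming NumPy and SciPy; the starting values and two-start strategy are our choices, and the code reports whatever the optimizer returns rather than values quoted in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Table 3: repair times (in hours) of 46 transceivers.
x = np.array([0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7,
              0.7, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5,
              1.5, 1.5, 1.5, 2.0, 2.0, 2.2, 2.5, 2.7, 3.0, 3.0,
              3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4, 7.0,
              7.5, 8.8, 9.0, 10.3, 22.0, 24.5])

def nll_ee(theta):
    # Negative log-likelihood of EE(sigma, eps), computed stably via logaddexp.
    sigma, eps = theta
    la = -x / ((1 + eps) * sigma)
    lb = -x / ((1 - eps) * sigma)
    return x.size * np.log(2.0 * sigma) - np.sum(np.logaddexp(la, lb))

# Two starting values for eps guard against the optimizer stalling on one branch.
fits = [minimize(nll_ee, np.array([x.mean(), e0]), method="L-BFGS-B",
                 bounds=[(1e-3, None), (-0.999, 0.999)]) for e0 in (0.1, 0.6)]
fit_ee = min(fits, key=lambda r: r.fun)

# Exponential MLE has sigma = x.mean(); its negative log-likelihood in closed form.
nll_exp = x.size * (np.log(x.mean()) + 1.0)
aic_ee, aic_exp = 2 * 2 + 2 * fit_ee.fun, 2 * 1 + 2 * nll_exp
```

Because the exponential model is the ε = 0 special case, the fitted EE log-likelihood can never be worse than the exponential one; comparing the two AIC values then accounts for the extra parameter.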
Table 4. Observed recidivism by time after release.
Time after Release | % Observed Recidivists
1 month | 4.7%
3 months | 11.8%
6 months | 19.9%
12 months | 30.0%
18 months | 35.9%
24 months | 40.8%
36 months | 46.6%
48 months | 50.4%
64 months | 52.2%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Celis, P.; de la Cruz, R.; Fuentes, C.; Gómez, H.W. Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications. Symmetry 2021, 13, 908. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13050908

AMA Style

Celis P, de la Cruz R, Fuentes C, Gómez HW. Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications. Symmetry. 2021; 13(5):908. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13050908

Chicago/Turabian Style

Celis, Perla, Rolando de la Cruz, Claudio Fuentes, and Héctor W. Gómez. 2021. "Survival and Reliability Analysis with an Epsilon-Positive Family of Distributions with Applications" Symmetry 13, no. 5: 908. https://0-doi-org.brum.beds.ac.uk/10.3390/sym13050908

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.
