Article

A New Generator of Probability Models: The Exponentiated Sine-G Family for Lifetime Studies

1 School of Mathematical Sciences, Hebei Normal University, Shijiazhuang 050024, China
2 Department of Mathematical Sciences, Bayero University, Kano 700241, Nigeria
3 Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
4 Department of Mathematics, College of Science and Human Studies at Hotat Sudair, Majmaah University, Majmaah 11952, Saudi Arabia
5 Department of Mathematics, College of Science and Arts in Gurayat, Jouf University, Gurayat 77454, Saudi Arabia
6 Department of Community Medicine & Public Health, College of Medicine, Majmaah University, Almajmaah 11952, Saudi Arabia
7 Azra Naheed Medical College, Superior University, Lahore 54000, Pakistan
8 Department of Mathematics, University of Caen-Normandie, 14032 Caen, France
9 Department of Statistics, The Islamia University of Bahawalpur, Punjab 63100, Pakistan
* Authors to whom correspondence should be addressed.
Submission received: 13 September 2021 / Revised: 12 October 2021 / Accepted: 15 October 2021 / Published: 24 October 2021

Abstract

In this article, we propose the exponentiated sine-generated family of distributions. Some important properties are demonstrated, such as the series representation of the probability density function, quantile function, moments, stress-strength reliability, and Rényi entropy. A particular member, called the exponentiated sine Weibull distribution, is highlighted; we analyze its skewness and kurtosis, moments, quantile function, mean residual and reversed mean residual life functions, order statistics, and extreme value distributions. Maximum likelihood estimation and Bayes estimation under the square error loss function are considered. Simulation studies are used to assess these techniques, and their performance is satisfactory, as shown by the mean square error, confidence intervals, and coverage probabilities of the estimates. The stress-strength reliability parameter of the exponentiated sine Weibull model is derived and estimated by the maximum likelihood method. Also, nonparametric bootstrap techniques are used to approximate the confidence interval of the reliability parameter. A simulation is conducted to examine the mean square error, standard deviations, confidence intervals, and coverage probabilities of the reliability parameter. Finally, three real applications of the exponentiated sine Weibull model are provided; one of them considers stress-strength data.

1. Introduction

In statistics, probability models are essential tools for modeling random phenomena. Many new flexible models have been presented and examined in the literature in recent decades, allowing a deeper exploration of real-life phenomena. The advancement of research in numerous domains, such as reliability, survival analysis, computer science, finance, biomedical sciences, medicine, and hydrology, leads researchers to propose more flexible distributions for better modeling of the various kinds of data encountered in practical applications. Most probability distributions have been proposed based on algebraic functions. Since the von Mises (VM) distribution (see [1]), the circular Cauchy (CC) distribution (see [2]), and the Fisher distribution (see [3]), however, very few distributions involving trigonometric functions have appeared. The recent ones include the beta-trigonometric (BT) distribution (see [4]), sine square (SS) distribution (see [5]), hyperbolic cosine-F (HC-F) distribution (see [6]), and distributions of the sine-generated (SG) family (see [7]). Further, ref. [8] introduced the cosine-sine-generated (CSG) family, ref. [9] provided the new-sine-generated (NSG) distribution and highlighted some of the advantages of sine-generated distributions in statistical studies, ref. [10] proposed the polyno-expo-trigonometric (PET) distribution, ref. [11] elaborated the new exponential with trigonometric function (NET) distribution, and [12] created the weighted-cosine exponential distribution. Due to the increasing interest in data analysis, researchers require more choices in terms of the availability of distributions based on trigonometric functions.
On the other hand, over the past decades, probability generators have greatly contributed to distribution theory, leading to several important mathematical and statistical tools useful in both theory and practice. Thus, we believe that involving trigonometric functions in distribution theory from various viewpoints can solve several complex problems in the future, concerning both theoretical aspects and practice in various fields of study. The purpose of this study is to provide another new flexible probability model/distribution generator, called the exponentiated SG (ESG) family, and to establish several useful theoretical and practical results. In addition, the modeling capabilities of the SG family in practice draw our attention to further investigation and exploration of the SG family in its generalized form. The ESG family can be considered as a natural one-parameter extension of the SG family and an alternative to many other power-exponential-logarithmic distributions. It is defined by a cumulative distribution function (CDF) expressed as the power of the CDF of the SG family. The additional tuning power parameter allows us to modulate the functionalities, including the oscillatory amplitude, of the CDF of the SG family, opening new statistical perspectives for data fitting, among other things. Surprisingly, this very intuitive concept does not appear to have received much attention in the literature. We investigate the fundamental properties in a closed form and real-world applications of the ESG family, as well as some computational algorithms and techniques. Different distributions serve different purposes and represent different data generation processes. The ESG strategy provides another means of generating new probability models containing trigonometric functions. The exponentiated sine-Weibull (ESW) distribution is proposed as a particular member of the ESG family; it admits closed-form expressions for several properties and statistical measures, such as the moments, asymptotic residual life, and distributions of extreme values. By presenting estimation methods and a real-world application of the ESW distribution, the potential and applicability of the new family in stress-strength reliability analysis are discussed. On the other hand, most statistical properties of distributions derived using trigonometric functions raised to a certain power are not fully expressed analytically in the literature; here, we are able to express the series representation of the sine and cosine functions raised to the power of a real number, which hopefully will be useful. In addition, a result by [13] provides differentiation formulas, power expansions for trigonometric functions raised to positive integer powers, and summation formulas. We hope that these series representations will be the means of extending several mathematical results in applied mathematics.
The paper is organized as follows: In Section 2, the ESG family is specified and some of its key characteristics are discussed. In Section 3, the expression of a reliability parameter is derived. In Section 4, the ESW distribution is defined and a few of its most important properties are discussed. In Section 5, maximum likelihood estimation and Bayes estimation for the ESW distribution are proposed and examined by simulation studies; among other things, the maximum likelihood estimation of the reliability parameter of the ESW model is obtained, and the nonparametric percentile bootstrap and Student's bootstrap are considered to approximate the confidence interval of the reliability parameter, also assessed in simulation studies. Three examples of application to real data sets can be found in Section 6. In Section 7, conclusions are given.

2. The Exponentiated Sine-Generated Distributions

2.1. Presentation

The exponentiation technique [14] is one of the most popular techniques for extending classical distributions to obtain better flexibility. In this work, we apply the exponentiation technique to the SG family. To begin, the SG family, introduced by [7], has the following CDF:
$$H(x) = H(x; \xi) = \sin\left[\frac{\pi}{2} G(x; \xi)\right], \quad x \in \mathbb{R},$$
where G ( x ; ξ ) is any valid CDF of a continuous distribution, and ξ is a vector of parameters. Therefore, we propose the exponentiated sine-G (ESG) distribution with CDF given by
$$F(x) = F(x; \alpha, \xi) = \left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha}, \quad \alpha > 0.$$
Notice that, if α = 1 , the ESG family is reduced to the SG family.
The probability density function (PDF) associated with the ESG family is
$$f(x) = f(x; \alpha, \xi) = \alpha \frac{\pi}{2}\, g(x; \xi) \cos\left[\frac{\pi}{2} G(x; \xi)\right] \left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1},$$
where g ( x ; ξ ) is the PDF of G ( x ; ξ ) .
The related survival function (SF) and hazard rate function (HRF) are derived as
$$s(x) = s(x; \alpha, \xi) = 1 - \left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha}$$
and
$$h(x) = h(x; \alpha, \xi) = \frac{(\alpha \pi / 2)\, g(x; \xi) \cos\left[(\pi/2) G(x; \xi)\right] \left\{\sin\left[(\pi/2) G(x; \xi)\right]\right\}^{\alpha - 1}}{1 - \left\{\sin\left[(\pi/2) G(x; \xi)\right]\right\}^{\alpha}},$$
respectively.
As a first approach to the impact of the parameter $\alpha$, we can notice that, for $\alpha \ge 1$, $F(x) \le F(x; 1, \xi)$; the contrary holds for $0 < \alpha < 1$. In addition, for a sufficiently small $x$, i.e., as $G(x; \xi) \to 0$, $F(x) \sim (\pi/2)^{\alpha} G^{\alpha}(x; \xi)$ and $f(x) \sim \alpha (\pi/2)^{\alpha} g(x; \xi) G^{\alpha - 1}(x; \xi)$. On the other hand, for a sufficiently large $x$, i.e., as $G(x; \xi) \to 1$, we have $s(x) \sim (\alpha \pi^2 / 8)(1 - G(x; \xi))^2$ and $f(x) \sim (\alpha \pi^2 / 4)\, g(x; \xi)(1 - G(x; \xi))$.
Let us now perform a basic quantile analysis of the ESG family. Firstly, we can use the quantile function (QF) of the distribution to analyze its skewness and kurtosis; it also serves as a means of parameter estimation. Moreover, it is used for generating random data that follow a specified distribution, and it aids in the computation of some distribution properties and goodness-of-fit measures. After some developments, the QF of the ESG family is given by
$$Q(p) = Q(p; \alpha, \xi) = G^{-1}\left(\frac{2}{\pi} \arcsin\left(p^{1/\alpha}\right); \xi\right), \quad 0 < p < 1,$$
where $G^{-1}(x; \xi)$ is the QF related to $G(x; \xi)$. In particular, the median is
$$Q^* = Q\left(\frac{1}{2}\right) = G^{-1}\left(\frac{2}{\pi} \arcsin\left(2^{-1/\alpha}\right); \xi\right).$$
Obviously, the definition of G ( x ; ξ ) is crucial to the modeling capabilities of the ESG family. A precise example will be presented later.
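To make the previous formulas concrete, the following R sketch implements the CDF, PDF, and QF of the ESG family for a generic baseline; the exponential baseline used here is only an illustrative assumption, and the functions are not taken from the paper.

```r
# Minimal sketch of the ESG family in R, assuming an exponential baseline G;
# any baseline CDF/PDF/QF triple can be plugged in instead.
G  <- function(x) pexp(x)      # baseline CDF G(x; xi)
g  <- function(x) dexp(x)      # baseline PDF g(x; xi)
Gq <- function(p) qexp(p)      # baseline QF  G^{-1}(p; xi)

pESG <- function(x, alpha) sin(pi / 2 * G(x))^alpha
dESG <- function(x, alpha) alpha * (pi / 2) * g(x) *
  cos(pi / 2 * G(x)) * sin(pi / 2 * G(x))^(alpha - 1)
qESG <- function(p, alpha) Gq(2 / pi * asin(p^(1 / alpha)))

qESG(0.5, alpha = 2)   # median of the ESG-exponential distribution with alpha = 2
```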

2.2. Useful Expansion

In this part, we present the PDF of the ESG family in a series expansion form. To achieve this aim, some preliminary results are required. First, we recall that the sine and cosine series expansions are given by $\sin x = \sum_{n=0}^{\infty} (-1)^n x^{2n+1}/(2n+1)!$ and $\cos x = \sum_{n=0}^{\infty} (-1)^n x^{2n}/(2n)!$. We will also need the expansion of a power series raised to an integer power, as given by [15]. It is recalled in the following lemma.
Lemma 1
([15]). For a given power series of the form $\sum_{k=0}^{\infty} a_k x^k$ and a positive integer $n$, we have
$$\left(\sum_{k=0}^{\infty} a_k x^k\right)^n = \sum_{k=0}^{\infty} c_k x^k,$$
where $c_0 = a_0^n$ and $c_m = [1/(m a_0)] \sum_{k=1}^{m} (k n - m + k)\, a_k c_{m-k}$ for $m \ge 1$.
The following lemma is for future technical purposes.
Lemma 2.
We have
$$A = \sum_{n=1}^{\infty} \frac{(-1)^n (\pi/2)^{2n}}{(2n+1)!} = \frac{2}{\pi} - 1 \approx -0.36338.$$
Proof. 
We can prove that $A$ is a convergent alternating series via standard convergence criteria, and its value can be verified numerically, e.g., from https://www.wolframalpha.com/widgets/gallery/view.jsp?id=d983db47634e1936eb568b79884012ac (accessed on 18 August 2021) or https://www.emathhelp.net/calculators/calculus-2/series-calculator/ (accessed on 18 August 2021). □
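As a quick numerical check (a sketch, not part of the original proof), the partial sum of the series can be compared with the closed form in R:

```r
# Partial-sum check of Lemma 2: the series equals 2/pi - 1 (approximately -0.36338)
n <- 1:50
A <- sum((-1)^n * (pi / 2)^(2 * n) / factorial(2 * n + 1))
c(series = A, closed_form = 2 / pi - 1)
```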
A series expansion of the exponentiated version of a sine term is provided below.
Lemma 3.
Let $\alpha > 0$ be real and noninteger, and let $x$ be such that $0 < G(x; \xi) < 1$. Then,
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi) + \sum_{l=1}^{\infty} \sum_{i=0}^{\infty} \binom{\alpha - 1}{l} \left(\frac{\pi}{2}\right)^{\alpha - 1} b_i\, G^{2(i+l) + \alpha - 1}(x; \xi),$$
where $b_0 = a_0^l$, $b_m = [1/(m a_0)] \sum_{i=1}^{m} (i l - m + i)\, a_i b_{m-i}$ for $m \ge 1$, and $a_i = (-1)^{i+1} (\pi/2)^{2i+2}/(2i+3)!$.
Proof. 
We can write the sine series expansion by separating the first term as
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left[\frac{\pi}{2} G(x; \xi) + \sum_{n=1}^{\infty} \frac{(-1)^n (\pi/2)^{2n+1}}{(2n+1)!} G^{2n+1}(x; \xi)\right]^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi)\left[1 + \sum_{n=1}^{\infty} \frac{(-1)^n (\pi/2)^{2n}}{(2n+1)!} G^{2n}(x; \xi)\right]^{\alpha - 1}.$$
By considering Equation (4), we can conclude that $\left|\sum_{n=1}^{\infty} [(-1)^n (\pi/2)^{2n}/(2n+1)!]\, G^{2n}(x; \xi)\right| < 1$, since $0 < G^{2n}(x; \xi) < 1$ for all $x$. Let $n = i + 1$ with $i \ge 0$, so that $n \ge 1$. Therefore,
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi)\left[1 + \sum_{i=0}^{\infty} a_i G^{2i+2}(x; \xi)\right]^{\alpha - 1},$$
where $a_i = (-1)^{i+1} (\pi/2)^{2i+2}/(2i+3)!$. By the generalized binomial expansion and some algebra, we get
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi)\left[1 + \sum_{l=1}^{\infty} \binom{\alpha - 1}{l} G^{2l}(x; \xi)\left(\sum_{i=0}^{\infty} a_i G^{2i}(x; \xi)\right)^{l}\right].$$
Since $l \ge 1$ is an integer, we can apply Equation (3) to the series raised to the power $l$; thus,
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi)\left[1 + \sum_{l=1}^{\infty} \sum_{i=0}^{\infty} \binom{\alpha - 1}{l} b_i\, G^{2(i+l)}(x; \xi)\right],$$
where $b_0 = a_0^l$ and $b_m = [1/(m a_0)] \sum_{i=1}^{m} (i l - m + i)\, a_i b_{m-i}$ for $m \ge 1$. Hence,
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha - 1} = \left(\frac{\pi}{2}\right)^{\alpha - 1} G^{\alpha - 1}(x; \xi) + \sum_{l=1}^{\infty} \sum_{i=0}^{\infty} \binom{\alpha - 1}{l} \left(\frac{\pi}{2}\right)^{\alpha - 1} b_i\, G^{2(i+l) + \alpha - 1}(x; \xi).$$
The stated result is obtained.  □
Now, we can apply Lemma 3 and the series expansion $\cos[(\pi/2) G(x; \xi)] = \sum_{k=0}^{\infty} [(-1)^k (\pi/2)^{2k}/(2k)!]\, G^{2k}(x; \xi)$ to represent the PDF in Equation (2) as
$$f(x) = \sum_{k=0}^{\infty} D_k^{(1)}\, g(x; \xi)\, G^{2k + \alpha - 1}(x; \xi) + \sum_{k, i = 0}^{\infty} \sum_{l=1}^{\infty} D_{i,k,l}^{(2)}\, g(x; \xi)\, G^{2(i+l+k) + \alpha - 1}(x; \xi),$$
where $D_k^{(1)} = \alpha (-1)^k (\pi/2)^{2k + \alpha}/(2k)!$, $D_{i,k,l}^{(2)} = \alpha (-1)^k (\pi/2)^{2k + \alpha} \binom{\alpha - 1}{l} b_i/(2k)!$, and $b_i$ is given in Lemma 3. Most of the important properties of the ESG family can thus be represented analytically in a closed form; indeed, the PDF is now a series of PDFs of the exponentiated baseline distribution.

2.3. Moments and Entropy

In this subsection, moments and Rényi entropy of the ESG family are derived. Let $X$ be a random variable (RV) with distribution belonging to the ESG family. Then, the $r$th moment of $X$ is traditionally obtained in the following way: $\mu_r = E(X^r) = \int_{-\infty}^{\infty} x^r f(x)\, dx$. By virtue of Equation (5), the following series expansion holds:
$$\mu_r = \sum_{k=0}^{\infty} D_k^{(1)} \int_{-\infty}^{\infty} x^r g(x; \xi)\, G^{2k + \alpha - 1}(x; \xi)\, dx + \sum_{k, i = 0}^{\infty} \sum_{l=1}^{\infty} D_{i,k,l}^{(2)} \int_{-\infty}^{\infty} x^r g(x; \xi)\, G^{2(i+l+k) + \alpha - 1}(x; \xi)\, dx.$$
Specific moments can be determined by setting $r = 1, 2, 3, \ldots$.
Entropy, on the other hand, is a measure of a system’s disorder or randomness. The Rényi entropy of X is defined by
$$I_R(\rho) = \frac{1}{1 - \rho} \log\left[\int_{-\infty}^{\infty} f^{\rho}(x)\, dx\right], \quad \rho > 0, \ \rho \ne 1.$$
Thus, to determine $I_R(\rho)$, it is enough to compute $\int_{-\infty}^{\infty} f^{\rho}(x)\, dx$ and to plug it into the previous logarithmic-integral expression. To begin, we have
$$f^{\rho}(x) = \left(\alpha \frac{\pi}{2}\right)^{\rho} g^{\rho}(x; \xi) \cos^{\rho}\left[\frac{\pi}{2} G(x; \xi)\right] \sin^{\rho(\alpha - 1)}\left[\frac{\pi}{2} G(x; \xi)\right].$$
For an expansion of the exponentiated cosine term, the following lemma is useful.
Lemma 4.
Let $\rho > 0$ be real and noninteger. Then, we have
$$\cos^{\rho}\left[\frac{\pi}{2} G(x; \xi)\right] = 1 + \sum_{j=1}^{\infty} \sum_{k=0}^{\infty} z_{k,j}^{**}\, G^{2(k+j)}(x; \xi),$$
where $z_{k,j}^{**} = \binom{\rho}{j} z_k^*$, $z_0^* = z_0^j$, $z_m^* = [1/(m z_0)] \sum_{k=1}^{m} (k j - m + k)\, z_k z_{m-k}^*$ for $m \ge 1$, and $z_k = (-1)^{k+1} (\pi/2)^{2(k+1)}/(2(k+1))!$.
Proof. 
Applying some algebra to the cosine series expansion with $n \ge 0$, and replacing $n = k + 1$ with $k \ge 0$ (so that $n \ge 1$) as in the previous lemmas, we have
$$\cos^{\rho}\left[\frac{\pi}{2} G(x; \xi)\right] = \sum_{j=0}^{\infty} \binom{\rho}{j}\left\{\cos\left[\frac{\pi}{2} G(x; \xi)\right] - 1\right\}^{j} = 1 + \sum_{j=1}^{\infty} \binom{\rho}{j}\left[\sum_{n=0}^{\infty} \frac{(-1)^n (\pi/2)^{2n}}{(2n)!} G^{2n}(x; \xi) - 1\right]^{j} = 1 + \sum_{j=1}^{\infty} \binom{\rho}{j}\left[\sum_{k=0}^{\infty} \frac{(-1)^{k+1} (\pi/2)^{2(k+1)}}{(2(k+1))!} G^{2(k+1)}(x; \xi)\right]^{j} = 1 + \sum_{j=1}^{\infty} \binom{\rho}{j} G^{2j}(x; \xi)\left[\sum_{k=0}^{\infty} z_k G^{2k}(x; \xi)\right]^{j},$$
where $z_k = (-1)^{k+1} (\pi/2)^{2(k+1)}/(2(k+1))!$. Since $j \ge 1$, we can use Equation (3), which yields
$$\cos^{\rho}\left[\frac{\pi}{2} G(x; \xi)\right] = 1 + \sum_{j=1}^{\infty} \binom{\rho}{j} \sum_{k=0}^{\infty} z_k^*\, G^{2(k+j)}(x; \xi),$$
where $z_0^* = z_0^j$ and $z_m^* = [1/(m z_0)] \sum_{k=1}^{m} (k j - m + k)\, z_k z_{m-k}^*$ for $m \ge 1$. Hence,
$$\cos^{\rho}\left[\frac{\pi}{2} G(x; \xi)\right] = 1 + \sum_{j=1}^{\infty} \sum_{k=0}^{\infty} z_{k,j}^{**}\, G^{2(k+j)}(x; \xi),$$
where $z_{k,j}^{**} = \binom{\rho}{j} z_k^*$. The stated expansion is proved.  □
On the other hand, with a slight adaptation of Lemma 3, we have
$$\left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\rho(\alpha - 1)} = \left(\frac{\pi}{2}\right)^{\rho(\alpha - 1)} G^{\rho(\alpha - 1)}(x; \xi) + \sum_{l=1}^{\infty} \sum_{i=0}^{\infty} V_{i,l}\, G^{2(i+l) + \rho(\alpha - 1)}(x; \xi),$$
where $V_{i,l} = \binom{\rho(\alpha - 1)}{l} (\pi/2)^{\rho(\alpha - 1)} b_i$ and $b_i$ is as given in Lemma 3. Thus, by substituting Lemma 4 and Equation (9) into Equation (7), we can compute the integral of $f^{\rho}(x)$ to get the Rényi entropy.
The Shannon entropy can be expressed in a similar series expansion manner.
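For a concrete check of these expressions, the Rényi entropy can also be evaluated by direct numerical integration; the sketch below assumes an exponential baseline with illustrative parameter values and is not taken from the paper.

```r
# Rényi entropy of the ESG family by numerical integration (assumption: exponential baseline)
renyi_ESG <- function(rho, alpha) {
  f <- function(x) alpha * (pi / 2) * dexp(x) *
    cos(pi / 2 * pexp(x)) * sin(pi / 2 * pexp(x))^(alpha - 1)
  1 / (1 - rho) * log(integrate(function(x) f(x)^rho, 0, Inf)$value)
}
renyi_ESG(rho = 2, alpha = 1.5)
```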

3. Stress-Strength Reliability

The system performance parameter, known as the stress-strength parameter, is crucial in the context of a system's mechanical reliability. In practice, a good design ensures that the strength exceeds the anticipated stress. If a component has strength modeled by an RV $X$ and is subjected to a stress modeled by an RV $Y$, the stress-strength parameter defined by $R = P(Y < X)$ measures the system performance. The system fails if and only if the applied stress exceeds the system's strength. On the other hand, $R$ can be used to compare two populations. Engineering, biological sciences, and finance all have uses for stress-strength reliability parameters. See [16] (Chapter 7), among others.
The parameter R was studied by many authors in literature through various viewpoints. Assuming that X and Y are independent, the following distributions have been considered: generalized logistic distribution (see [17]), normal distribution (see [18,19]), exponential distribution with the common location parameter (see [20]), generalized exponential distribution (see [21]), generalized exponential Poisson distribution (see [22]), Poisson-odd generalized exponential distribution (see [23]), beta-Erlang truncated exponential distribution (see [24]), Poisson-generalized half logistic distribution (see [25]), Poisson half logistic distribution (see [26]), and Weibull (W) distribution (see [27,28,29]), among others.
Here, we study the expression of $R$ in the setting of the ESG family. Suppose that $X$ has the PDF $f(x; \alpha_1, \xi)$, $Y$ has the CDF $F(y; \alpha_2, \xi)$, and $X$ and $Y$ are independent. Then, the stress-strength reliability parameter is defined as follows:
$$R = P(Y < X) = \int_{-\infty}^{\infty} f(x; \alpha_1, \xi)\, F(x; \alpha_2, \xi)\, dx.$$
Thus,
$$R = \alpha_1 \frac{\pi}{2} \int_{-\infty}^{\infty} g(x; \xi) \cos\left[\frac{\pi}{2} G(x; \xi)\right] \left\{\sin\left[\frac{\pi}{2} G(x; \xi)\right]\right\}^{\alpha_1 + \alpha_2 - 1} dx = \frac{\alpha_1}{\alpha_1 + \alpha_2}.$$
We see that $R$ has a manageable closed form, which facilitates its statistical treatment. In one of the next sections, we discuss the behavior of $R$ for a particular distribution of the ESG family for illustration purposes.
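The closed form $R = \alpha_1/(\alpha_1 + \alpha_2)$ is easy to verify by simulation; the following sketch uses the ESG QF with an exponential baseline (an illustrative assumption), since the value of $R$ does not depend on the baseline.

```r
# Monte Carlo check of R = alpha1 / (alpha1 + alpha2) for the ESG family
set.seed(1)
alpha1 <- 2; alpha2 <- 3
rESG <- function(n, alpha) qexp(2 / pi * asin(runif(n)^(1 / alpha)))  # inversion method
x <- rESG(1e5, alpha1)
y <- rESG(1e5, alpha2)
c(empirical = mean(y < x), theoretical = alpha1 / (alpha1 + alpha2))
```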

4. The ES Weibull Distribution

In this section, we concentrate our attention on one special distribution of the ESG family, called the ESW distribution or ESW( α , β , λ ) distribution. In this case, the baseline in Equation (1) is the W distribution with CDF and PDF given as
$$G(x; \beta, \lambda) = 1 - e^{-\lambda x^{\beta}}, \quad \beta, \lambda, x > 0,$$
and
$$g(x; \beta, \lambda) = \beta \lambda x^{\beta - 1} e^{-\lambda x^{\beta}}, \quad x > 0,$$
respectively. Therefore, the CDF of the ESW distribution has the following expression:
$$F(x) = F(x; \alpha, \beta, \lambda) = \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha}, \quad \alpha, \beta, \lambda, x > 0.$$
If β = 1 , the ESW distribution is reduced to the ES exponential (ESE) distribution. The corresponding probability density, survival, and hazard rate functions are expressed by
$$f(x) = f(x; \alpha, \beta, \lambda) = \frac{\pi}{2} \alpha \beta \lambda x^{\beta - 1} e^{-\lambda x^{\beta}} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right] \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha - 1},$$
$$s(x) = s(x; \alpha, \beta, \lambda) = 1 - \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha}$$
and
$$h(x) = h(x; \alpha, \beta, \lambda) = \frac{(\pi/2)\, \alpha \beta \lambda x^{\beta - 1} e^{-\lambda x^{\beta}} \cos\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right] \left\{\sin\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha - 1}}{1 - \left\{\sin\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha}},$$
respectively. Figure 1 and Figure 2 show the plot of the PDF given by Equation (12) and HRF specified in Equation (14). In particular, Figure 2 shows that the failure rate is flexible enough to accommodate monotonic (increasing and decreasing) shapes, and also bathtub and upside-down bathtub shapes.
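For reference, the following R sketch codes the CDF, PDF, HRF, and random generation of the ESW distribution directly from the expressions above; it is a minimal illustration, not the authors' code, and the parameter values in the example are arbitrary.

```r
# ESW(alpha, beta, lambda) distribution functions (a sketch based on the formulas above)
pESW <- function(x, alpha, beta, lambda)
  sin(pi / 2 * (1 - exp(-lambda * x^beta)))^alpha
dESW <- function(x, alpha, beta, lambda)
  pi / 2 * alpha * beta * lambda * x^(beta - 1) * exp(-lambda * x^beta) *
    cos(pi / 2 * (1 - exp(-lambda * x^beta))) *
    sin(pi / 2 * (1 - exp(-lambda * x^beta)))^(alpha - 1)
qESW <- function(p, alpha, beta, lambda)
  (-1 / lambda * log(1 - 2 / pi * asin(p^(1 / alpha))))^(1 / beta)
rESW <- function(n, alpha, beta, lambda) qESW(runif(n), alpha, beta, lambda)
hESW <- function(x, alpha, beta, lambda)
  dESW(x, alpha, beta, lambda) / (1 - pESW(x, alpha, beta, lambda))

hESW(c(0.5, 1, 2), alpha = 0.5, beta = 2, lambda = 1)   # HRF at a few points
```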
The decreasing behavior of the PDF is formulated in the following result.
Theorem 1.
The PDF of the ESW distribution given by Equation (12) is a decreasing function for $\alpha \le 1$ and $\beta \le 1$.
Proof. 
We have
$$\{\log[f(x)]\}' = \frac{\beta - 1}{x} - \beta \lambda x^{\beta - 1} - \frac{\pi}{2} \beta \lambda x^{\beta - 1} e^{-\lambda x^{\beta}} \frac{\sin\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]}{\cos\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]} + (\alpha - 1) \frac{\pi}{2} \beta \lambda x^{\beta - 1} e^{-\lambda x^{\beta}} \frac{\cos\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]}{\sin\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]}.$$
When $\alpha \le 1$ and $\beta \le 1$, all the terms in the sum are negative, implying that $\{\log[f(x)]\}' \le 0$, so $\log[f(x)]$ is decreasing, and the same for $f(x)$. This ends the proof.  □
Proposition 1.
The asymptotic behavior of $F(x)$ in Equation (11), $f(x)$ in Equation (12), and $s(x)$ in Equation (13) is as follows:
  • As $x \to 0$, we have
    $$F(x) \sim \left(\frac{\pi}{2}\right)^{\alpha} G^{\alpha}(x; \beta, \lambda) \sim \left(\frac{\pi}{2}\right)^{\alpha} \lambda^{\alpha} x^{\alpha \beta}$$
    and
    $$f(x) \sim \alpha \left(\frac{\pi}{2}\right)^{\alpha} g(x; \beta, \lambda)\, G^{\alpha - 1}(x; \beta, \lambda) \sim \left(\frac{\pi}{2}\right)^{\alpha} \lambda^{\alpha} \alpha \beta\, x^{\alpha \beta - 1}.$$
  • As $x \to \infty$, we have
    $$s(x) \sim \frac{\alpha \pi^2}{8}\left(1 - G(x; \beta, \lambda)\right)^2 = \frac{\alpha \pi^2}{8} e^{-2 \lambda x^{\beta}}$$
    and
    $$f(x) \sim \frac{\alpha \pi^2}{4} g(x; \beta, \lambda)\left(1 - G(x; \beta, \lambda)\right) = \frac{\alpha \pi^2}{4} \lambda \beta x^{\beta - 1} e^{-2 \lambda x^{\beta}}.$$
Proof. 
Standard equivalency function findings are used in the proof. For this reason, the details are omitted.  □

4.1. Quantile and Moments

The QF of the ESW distribution can be derived from Equation (11) as
$$Q(p) = Q(p; \alpha, \beta, \lambda) = \left\{-\frac{1}{\lambda} \log\left[1 - \frac{2}{\pi} \arcsin\left(p^{1/\alpha}\right)\right]\right\}^{1/\beta}, \quad 0 < p < 1.$$
The median of the ESW distribution is
$$Q^* = Q\left(\frac{1}{2}\right) = \left\{-\frac{1}{\lambda} \log\left[1 - \frac{2}{\pi} \arcsin\left(2^{-1/\alpha}\right)\right]\right\}^{1/\beta}.$$
The skewness and kurtosis properties of the ESW distribution can be determined using Bowley's skewness and Moors' kurtosis, among other quantile-based measures. They are defined by
$$B = \frac{Q(3/4) - 2 Q(2/4) + Q(1/4)}{Q(3/4) - Q(1/4)}, \quad M = \frac{Q(7/8) - Q(5/8) + Q(3/8) - Q(1/8)}{Q(6/8) - Q(2/8)},$$
respectively. Equation (17) shows that both B and M are independent of λ . Figure 3 reveals that these quantile skewness and kurtosis are decreasing with respect to α and β .
The moments of the ESW distribution can be computed from $\mu_r = \int_0^{\infty} x^r f(x)\, dx$. By letting $u = \sin\left[(\pi/2)\left(1 - e^{-\lambda x^{\beta}}\right)\right]$, this integral is reduced to
$$\mu_r = \alpha \lambda^{-r/\beta} \int_0^1 \left\{-\log\left[1 - \frac{2}{\pi} \arcsin(u)\right]\right\}^{r/\beta} u^{\alpha - 1}\, du.$$
It is not known whether the integral in Equation (18) can be derived analytically, but it can be computed numerically by software such as R. Further, to obtain the moments of the ESW distribution in a series form, we can use Equation (6) and the following lemma.
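As a sketch of such a numerical evaluation (with illustrative parameter values, not taken from the paper), the integral in Equation (18) can be handled directly by integrate() in R:

```r
# rth moment of ESW(alpha, beta, lambda) by numerical integration of Equation (18)
moment_ESW <- function(r, alpha, beta, lambda) {
  integrand <- function(u) (-log(1 - 2 / pi * asin(u)))^(r / beta) * u^(alpha - 1)
  alpha * lambda^(-r / beta) * integrate(integrand, 0, 1)$value
}
moment_ESW(r = 1, alpha = 2, beta = 1.5, lambda = 1)   # mean of ESW(2, 1.5, 1)
```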
Lemma 5.
Let $\omega_1 \in \mathbb{R}$, $\omega_2 > 0$, and $\omega_3 > 0$ be real and noninteger. We define
$$\psi(\omega_1, \omega_2, \omega_3) = \int_0^{\infty} x^{\omega_1} g^{\omega_2}(x; \beta, \lambda)\, G^{\omega_3}(x; \beta, \lambda)\, dx.$$
Then,
$$\psi(\omega_1, \omega_2, \omega_3) = \sum_{s=0}^{\infty} \lambda^{\omega_2} \beta^{\omega_2 - 1} \binom{\omega_3}{s} (-1)^s \left[\lambda(\omega_2 + s)\right]^{-\left[(1 + \omega_1 - \omega_2)/\beta + \omega_2\right]} \Gamma\left(\frac{1 + \omega_1 - \omega_2}{\beta} + \omega_2\right),$$
where $\Gamma(x)$ refers to the standard gamma function.
Proof. 
In an expanded form, we have
$$\psi(\omega_1, \omega_2, \omega_3) = \lambda^{\omega_2} \beta^{\omega_2} \int_0^{\infty} x^{\omega_2(\beta - 1) + \omega_1} e^{-\lambda \omega_2 x^{\beta}}\left(1 - e^{-\lambda x^{\beta}}\right)^{\omega_3} dx = \lambda^{\omega_2} \beta^{\omega_2} \sum_{s=0}^{\infty} \binom{\omega_3}{s} (-1)^s \int_0^{\infty} x^{\omega_2(\beta - 1) + \omega_1} e^{-\lambda(\omega_2 + s) x^{\beta}}\, dx.$$
By setting $u = \lambda(\omega_2 + s) x^{\beta}$, we get
$$\psi(\omega_1, \omega_2, \omega_3) = \sum_{s=0}^{\infty} \lambda^{\omega_2} \beta^{\omega_2 - 1} \binom{\omega_3}{s} (-1)^s \left[\lambda(\omega_2 + s)\right]^{-\left[(1 + \omega_1 - \omega_2)/\beta + \omega_2\right]} \int_0^{\infty} u^{(1 + \omega_1 - \omega_2)/\beta + \omega_2 - 1} e^{-u}\, du = \sum_{s=0}^{\infty} \lambda^{\omega_2} \beta^{\omega_2 - 1} \binom{\omega_3}{s} (-1)^s \left[\lambda(\omega_2 + s)\right]^{-\left[(1 + \omega_1 - \omega_2)/\beta + \omega_2\right]} \Gamma\left(\frac{1 + \omega_1 - \omega_2}{\beta} + \omega_2\right).$$
The stated result is obtained.  □
In fact, most of the features of the ESW distribution may be obtained in a series form using this lemma. In particular, Equations (5) and (19) can be used to express the moments of the ESW distribution in a series of the following form:
$$\mu_r = \sum_{k=0}^{\infty} D_k^{(1)}\, \psi(r, 1, 2k + \alpha - 1) + \sum_{k, i = 0}^{\infty} \sum_{l=1}^{\infty} D_{i,k,l}^{(2)}\, \psi(r, 1, 2(i + l + k) + \alpha - 1),$$
where $D_k^{(1)}$ and $D_{i,k,l}^{(2)}$ are given in Equation (5) and $\psi(\omega_1, \omega_2, \omega_3)$ follows from Lemma 5.

4.2. Moments of Residual Life

Residual life functions are involved in a variety of fields, including engineering, quality control, and life testing. They also aid in the determination of the asymptotic distributions of order statistics. For an RV $X$ with the ESW distribution, the related mean residual life is defined by $\zeta(t) = E(X - t \mid X > t) = \int_0^{\infty} s(x + t)/s(t)\, dx$. The reversed mean residual life of $X$ is defined by $\bar{\zeta}(t) = E(t - X \mid X \le t) = \int_0^{t} F(x)/F(t)\, dx$. Now, we derive some asymptotic properties of $\zeta(t)$ for sufficiently large $t$, and asymptotic properties of $\bar{\zeta}(t)$ for very small $t$.
Theorem 2.
Let $X$ be an RV with the ESW distribution. Then, for $t \to \infty$, the mean residual life of $X$ satisfies
$$\zeta(t) \sim e^{2 \lambda t^{\beta}} \frac{(2\lambda)^{-1/\beta}}{\beta} \Gamma\left(\frac{1}{\beta}, 2 \lambda t^{\beta}\right),$$
where $\Gamma(a, b)$ denotes the incomplete (upper version) gamma function.
Proof. 
From the asymptotic behavior of $s(t)$ for $t \to \infty$ in Equation (16), we have
$$\zeta(t) = \int_0^{\infty} \frac{s(x + t)}{s(t)}\, dx \sim e^{2 \lambda t^{\beta}} \int_0^{\infty} e^{-2 \lambda (x + t)^{\beta}}\, dx.$$
By applying the change of variable $u = 2 \lambda (x + t)^{\beta}$, and after some algebra, we get
$$\zeta(t) \sim e^{2 \lambda t^{\beta}} \frac{(2\lambda)^{-1/\beta}}{\beta} \int_{2 \lambda t^{\beta}}^{\infty} u^{1/\beta - 1} e^{-u}\, du = e^{2 \lambda t^{\beta}} \frac{(2\lambda)^{-1/\beta}}{\beta} \Gamma\left(\frac{1}{\beta}, 2 \lambda t^{\beta}\right).$$
The stated equivalence is obtained.  □
Theorem 3.
Let $X$ be an RV with the ESW distribution. Then, for $t \to 0$, the reversed mean residual life of $X$ satisfies
$$\bar{\zeta}(t) \sim \frac{t}{\alpha \beta + 1}.$$
Proof. 
From the asymptotic behavior of $F(t)$ for $t \to 0$ in Equation (15), we have
$$\bar{\zeta}(t) = \int_0^{t} \frac{F(x)}{F(t)}\, dx \sim \int_0^{t} \frac{x^{\alpha \beta}}{t^{\alpha \beta}}\, dx = \frac{t}{\alpha \beta + 1}.$$
The desired outcome is demonstrated.  □

4.3. Order Statistics and Asymptotic

We now focus on the order statistics from the ESW distribution, and some of their asymptotic properties. Let $X_1, X_2, \ldots, X_n$, $n \ge 1$, be a sample of independent RVs following the ESW distribution. Let us consider $X_{1:n}, X_{2:n}, \ldots, X_{n:n}$, the ordered versions of $X_1, X_2, \ldots, X_n$. Then, for any $j = 1, 2, \ldots, n$, the PDF of $X_{j:n}$ is given as
$$f_{j:n}(x) = \frac{n!}{(j-1)!(n-j)!} f(x) F^{j-1}(x) s^{n-j}(x) = \sum_{m=0}^{n-j} \frac{n! (-1)^m}{(j-1)!(n-j-m)! m!} f(x) F^{j+m-1}(x).$$
Therefore, by substituting $F(x)$ as in Equation (11) and $f(x)$ as in Equation (12) into Equation (20), we have
$$f_{j:n}(x) = \sum_{m=0}^{n-j} \frac{(-1)^m \alpha \beta \lambda \pi\, n!\, x^{\beta - 1} e^{-\lambda x^{\beta}}}{2 (j-1)!(n-j-m)! m!} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right] \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta}}\right)\right]\right\}^{\alpha(j+m) - 1}.$$
Also, we can express $f_{j:n}(x)$ as a series expansion depending on PDFs of the ESW distribution with parameters $\alpha(j+m)$, $\beta$, and $\lambda$ as
$$f_{j:n}(x) = \sum_{m=0}^{n-j} \frac{n! (-1)^m}{(j+m)(j-1)!(n-j-m)! m!}\, f(x; \alpha(j+m), \beta, \lambda).$$
The asymptotic distributions for the minimum order statistic ($X_{1:n}$) can be derived as follows. For details, see [30] (Chapter 8).
Theorem 4.
Let $X_1, X_2, \ldots, X_n$, $n \ge 1$, be a sample of independent RVs following the ESW distribution. Let $X_{1:n} = \inf(X_1, \ldots, X_n)$ and $B_n^* = (X_{1:n} - a_n^*)/b_n^*$. Then, for any $x > 0$, we have
$$\lim_{n \to \infty} P(B_n^* \le x) = 1 - e^{-x^{\alpha \beta}}.$$
We recognize the CDF of the W distribution with parameters 1 and $\alpha\beta$. The normalizing constants can be derived from Equation (17) by following Theorem 8.3.6 of [30]; that is, we take $a_n^* = 0$ and $b_n^* = Q(1/n)$.
Proof. 
We aim to apply Theorem 8.3.6 of [30]. By Equation (15), we have
$$\lim_{\epsilon \to 0} \frac{F(\epsilon x)}{F(\epsilon)} = \lim_{\epsilon \to 0} \frac{(\epsilon x)^{\alpha \beta}}{\epsilon^{\alpha \beta}} = x^{\alpha \beta}.$$
The desired result then follows from Theorem 8.3.6 of [30]. This ends the proof.  □

5. Parameter Estimation

In this part, we use maximum likelihood and Bayes estimation approaches to estimate the new model parameters. The maximum likelihood estimation of the ESG family is discussed in the general case, while the Bayes estimation is discussed for the ESW distribution only. The estimation techniques are examined by simulation studies.
The expression of the reliability parameter R of the ESW distribution is derived under a common scale parameter from two independent RVs with possibly different ESW distributions. Subsequently, the maximum likelihood estimation of R is discussed, and the nonparametric bootstrap confidence interval (CI) is used to determine the approximate CI of R, and finally assessed by simulation studies.

5.1. Maximum Likelihood Estimation

The method of maximum likelihood can be used to estimate the parameters of the models coming from the ESG family. To begin, let $X_1, X_2, \ldots, X_n$, $n \ge 1$, be a sample of independent RVs whose distribution belongs to the ESG family. We denote by $x_1, x_2, \ldots, x_n$ the corresponding observations. Let $\theta = (\alpha, \xi)^T$ be a vector of parameters. Then, the maximum likelihood estimate (MLE) vector of $\theta$, say $\hat{\theta} = (\hat{\alpha}, \hat{\xi})^T$, is obtained by the maximization of the log-likelihood function given by
$$L(\theta) = n \log \alpha + n \log \frac{\pi}{2} + \sum_{i=1}^{n} \log g(x_i; \xi) + \sum_{i=1}^{n} \log \cos\left[\frac{\pi}{2} G(x_i; \xi)\right] + (\alpha - 1) \sum_{i=1}^{n} \log \sin\left[\frac{\pi}{2} G(x_i; \xi)\right],$$
with respect to θ . This maximization can be achieved by solving the following nonlinear system of equations given as
$$\frac{\partial L(\theta)}{\partial \alpha} = \frac{n}{\alpha} + \sum_{i=1}^{n} \log \sin\left[\frac{\pi}{2} G(x_i; \xi)\right] = 0,$$
and   
$$\frac{\partial L(\theta)}{\partial \xi} = \sum_{i=1}^{n} \frac{g'_{\xi}(x_i; \xi)}{g(x_i; \xi)} - \frac{\pi}{2} \sum_{i=1}^{n} G'_{\xi}(x_i; \xi) \tan\left[\frac{\pi}{2} G(x_i; \xi)\right] + \frac{\pi}{2} (\alpha - 1) \sum_{i=1}^{n} G'_{\xi}(x_i; \xi) \cot\left[\frac{\pi}{2} G(x_i; \xi)\right] = 0,$$
where $g'_{\xi}(x_i; \xi) = \partial g(x_i; \xi)/\partial \xi$ and $G'_{\xi}(x_i; \xi) = \partial G(x_i; \xi)/\partial \xi$. Now, we can analyze the existence and uniqueness of the MLE of the parameter $\alpha$. For more studies on the existence and uniqueness of the MLEs for various models, one can see [25,31,32], among others.
Proposition 2.
Let $D(\alpha; \xi)$ represent the right-hand side of Equation (21), and suppose that $\xi$ is the vector of true parameter values. Then, $D(\alpha; \xi) = 0$ has a unique real root.
Proof. 
Equation (21) shows that $\lim_{\alpha \to 0} D(\alpha; \xi) = +\infty$; on the other side, $\lim_{\alpha \to \infty} D(\alpha; \xi) = \sum_{i=1}^{n} \log \sin[(\pi/2) G(x_i; \xi)] < 0$. Moreover, $\partial D(\alpha; \xi)/\partial \alpha = -n/\alpha^2 < 0$, so $D(\alpha; \xi)$ is a strictly decreasing function running from positive to negative values. Hence, $D(\alpha; \xi) = 0$ has a unique real root.  □
By applying the plug-in technique, we can use the estimates of the model parameters to provide estimates of the unknown distribution functions (PDF, SF, etc.).
For an in-depth statistical treatment of the parameters (interval estimation, test procedures, etc.), the observed information matrix is required. Here, it is given by $I(\theta) = -\partial^2 L(\theta)/(\partial \theta\, \partial \theta^T)$. Now, assume that $\xi$ is a vector of $m - 1$ components, say $\xi = (\xi_2, \xi_3, \ldots, \xi_m)^T$, and write $\hat{\xi} = (\hat{\xi}_2, \hat{\xi}_3, \ldots, \hat{\xi}_m)^T$. Then, the approximate distribution of the random version of $\hat{\theta}$ is $N_m(\theta, I(\hat{\theta})^{-1})$ under the usual regularity conditions. The asymptotic $100(1 - \epsilon)\%$ CI for each parameter can be determined as $ACI_{\alpha} = (\hat{\alpha} - \Omega_{\epsilon/2} \sqrt{T_{11}},\ \hat{\alpha} + \Omega_{\epsilon/2} \sqrt{T_{11}})$ and $ACI_{\xi_k} = (\hat{\xi}_k - \Omega_{\epsilon/2} \sqrt{T_{kk}},\ \hat{\xi}_k + \Omega_{\epsilon/2} \sqrt{T_{kk}})$ for $2 \le k \le m$, where $T_{rr}$ is the $r$th diagonal component of $I(\hat{\theta})^{-1}$, for $r = 1, 2, \ldots, m$, and $\Omega_{\epsilon/2}$ is the quantile $1 - \epsilon/2$ of the $N_1(0, 1)$ distribution. As a mathematical complement, the components of $I(\theta)$ are
$$I_{\alpha\alpha} = \frac{n}{\alpha^2}, \quad I_{\alpha\xi} = -\frac{\pi}{2} \sum_{i=1}^{n} G'_{\xi}(x_i; \xi) \cot\left[\frac{\pi}{2} G(x_i; \xi)\right],$$
$$I_{\xi\xi} = -\sum_{i=1}^{n} \frac{g''_{\xi}(x_i; \xi)}{g(x_i; \xi)} + \sum_{i=1}^{n} \frac{\left(g'_{\xi}(x_i; \xi)\right)^2}{g^2(x_i; \xi)} + \left(\frac{\pi}{2}\right)^2 \sum_{i=1}^{n} \left(G'_{\xi}(x_i; \xi)\right)^2 \sec^2\left[\frac{\pi}{2} G(x_i; \xi)\right] + \frac{\pi}{2} \sum_{i=1}^{n} G''_{\xi}(x_i; \xi) \tan\left[\frac{\pi}{2} G(x_i; \xi)\right] + (\alpha - 1)\left(\frac{\pi}{2}\right)^2 \sum_{i=1}^{n} \left(G'_{\xi}(x_i; \xi)\right)^2 \csc^2\left[\frac{\pi}{2} G(x_i; \xi)\right] - (\alpha - 1)\frac{\pi}{2} \sum_{i=1}^{n} G''_{\xi}(x_i; \xi) \cot\left[\frac{\pi}{2} G(x_i; \xi)\right],$$
where $g''_{\xi}(x_i; \xi) = \partial^2 g(x_i; \xi)/(\partial \xi\, \partial \xi^T)$ and $G''_{\xi}(x_i; \xi) = \partial^2 G(x_i; \xi)/(\partial \xi\, \partial \xi^T)$.
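As a brief illustration of this procedure for the ESW case, the following sketch maximizes the log-likelihood numerically and builds the asymptotic CIs from the observed information matrix; it uses simulated data with illustrative parameter values and base R's optim() rather than the maxLik package mentioned later in the paper.

```r
# Maximum likelihood fitting of the ESW model (a sketch; parameter values are illustrative)
negloglik_ESW <- function(par, x) {
  a <- par[1]; b <- par[2]; l <- par[3]
  if (any(par <= 0)) return(Inf)
  G <- 1 - exp(-l * x^b)
  -sum(log(pi / 2 * a * b * l) + (b - 1) * log(x) - l * x^b +
         log(cos(pi / 2 * G)) + (a - 1) * log(sin(pi / 2 * G)))
}
set.seed(123)
qESW <- function(p, a, b, l) (-1 / l * log(1 - 2 / pi * asin(p^(1 / a))))^(1 / b)
x <- qESW(runif(200), a = 2, b = 1.5, l = 1)           # simulated ESW(2, 1.5, 1) sample
fit <- optim(c(1, 1, 1), negloglik_ESW, x = x, hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))                   # from the observed information matrix
cbind(estimate = fit$par, lower = fit$par - 1.96 * se, upper = fit$par + 1.96 * se)
```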

5.2. Bayes Estimation

In this subsection, we discuss the Bayes estimation of the parameters of the ESW distribution. In this setting, let $\theta = (\alpha, \beta, \lambda)^T$. The Bayes estimate (BE) vector $\hat{\theta} = (\hat{\alpha}, \hat{\beta}, \hat{\lambda})^T$ is constructed using the posterior distributions given the sample data. The procedure is briefly described below. Let $N$ be the number of iterations and $K$ the number of burn-in iterations. The square error loss (SEL) function is given by $S(\hat{\theta}, \theta) = (\hat{\theta} - \theta)^2$ and is minimized by the posterior mean, estimated as $\hat{\theta} = [1/(N - K)] \sum_{i=K+1}^{N} \hat{\theta}^{(i)}$. The highest posterior density (HPD) credible interval for $\theta$ is determined using the package HDInterval (see [33]) in the R software. Let $X_1, X_2, \ldots, X_n$, $n \ge 1$, be a sample of independent RVs with the ESW distribution, and let $x_1, x_2, \ldots, x_n$ be the observations. We recall that the unknown parameters are $\alpha$, $\beta$, and $\lambda$. Adopting the Bayesian paradigm, we suppose that $\alpha$, $\beta$, and $\lambda$ are independent RVs that follow gamma distributions with PDFs defined by
$$p_i(x) = \frac{\vartheta_i^{\kappa_i}}{\Gamma(\kappa_i)} x^{\kappa_i - 1} e^{-\vartheta_i x}, \quad x > 0, \quad i = 1, 2, 3,$$
respectively, i.e., $p_1(x)$, $p_2(x)$, and $p_3(x)$ are the prior PDFs. Let $L(\theta \mid data)$ be the likelihood function defined by
$$L(\theta \mid data) = \left(\frac{\pi}{2}\right)^{n} \alpha^{n} \beta^{n} \lambda^{n} \prod_{i=1}^{n} x_i^{\beta - 1}\, e^{-\lambda \sum_{i=1}^{n} x_i^{\beta}} \prod_{i=1}^{n} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right] \times \prod_{i=1}^{n} \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right]\right\}^{\alpha - 1}.$$
Then, the joint posterior PDF of $\theta \mid data$ is specified by
$$\pi(\theta \mid data) = \frac{L(\theta \mid data)\, p_1(\alpha)\, p_2(\beta)\, p_3(\lambda)}{\int\int\int g(data; \theta)\, d\alpha\, d\beta\, d\lambda},$$
where $g(data; \theta) = L(\theta \mid data)\, p_1(\alpha)\, p_2(\beta)\, p_3(\lambda)$ is the joint PDF associated with the data. The marginal posterior PDFs of $\alpha$, $\beta$, and $\lambda$ can be derived from Equation (22) as
$$\pi_1(\alpha) \propto \alpha^{n + \kappa_1 - 1} e^{-\vartheta_1 \alpha} \prod_{i=1}^{n} \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right]\right\}^{\alpha - 1},$$
$$\pi_2(\beta) \propto \beta^{n + \kappa_2 - 1} e^{-\vartheta_2 \beta - \lambda \sum_{i=1}^{n} x_i^{\beta}} \prod_{i=1}^{n} x_i^{\beta - 1} \prod_{i=1}^{n} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right] \times \prod_{i=1}^{n} \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right]\right\}^{\alpha - 1}$$
and
$$\pi_3(\lambda) \propto \lambda^{n + \kappa_3 - 1} e^{-\lambda\left(\vartheta_3 + \sum_{i=1}^{n} x_i^{\beta}\right)} \prod_{i=1}^{n} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right] \prod_{i=1}^{n} \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta}}\right)\right]\right\}^{\alpha - 1}.$$
The related posterior distributions are not of well-known forms. As a result, we can obtain samples from the posterior distributions using the Metropolis–Hastings (MH) method and the Gibbs sampling technique, as detailed below (see [34,35,36]); the normal distribution is utilized as the proposal distribution for the MH method:
  • Start with an initial guess $(\alpha^{(0)}, \beta^{(0)}, \lambda^{(0)})$,
  • Set $t = 1$,
  • Apply the MH algorithm to generate $\alpha^{(t)}$ from $\pi_1(\alpha)$,
  • Apply the MH algorithm to generate $\lambda^{(t)}$ from $\pi_3(\lambda)$,
  • Apply the MH algorithm to generate $\beta^{(t)}$ from $\pi_2(\beta)$,
  • Set $t = t + 1$,
  • Repeat steps 3 to 6, $T$ times.
For a very large value of $T$, the estimated vector value of $\theta$ can be obtained based on the SEL, and the HPD credible interval can also be constructed. An approximate $100(1 - \epsilon)\%$ HPD credible interval of $\theta$ can be computed using the idea of [37], as the interval of smallest length among $(\hat{\theta}_{(1)}, \hat{\theta}_{((1-\epsilon)T)}), (\hat{\theta}_{(2)}, \hat{\theta}_{((1-\epsilon)T + 1)}), \ldots, (\hat{\theta}_{(\epsilon T)}, \hat{\theta}_{(T)})$, where $\hat{\theta}_{(i)}$ denotes the $i$th ordered draw of each parameter.
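The following R sketch illustrates the sampler just described, with Gaussian random-walk proposals and gamma priors; the hyperparameter values, proposal standard deviations, and simulated data are illustrative assumptions only, not values from the paper.

```r
# MH-within-Gibbs for the ESW posterior (a sketch); gamma(kappa_i, vartheta_i) priors
log_post <- function(th, x, kappa = c(2, 2, 2), vartheta = c(1, 1, 1)) {
  a <- th[1]; b <- th[2]; l <- th[3]
  if (any(th <= 0)) return(-Inf)
  G <- 1 - exp(-l * x^b)
  sum(log(pi / 2 * a * b * l) + (b - 1) * log(x) - l * x^b +
        log(cos(pi / 2 * G)) + (a - 1) * log(sin(pi / 2 * G))) +
    sum(dgamma(th, shape = kappa, rate = vartheta, log = TRUE))
}
mh_gibbs <- function(x, n_iter = 3000, sd_prop = c(0.1, 0.1, 0.1), init = c(1, 1, 1)) {
  draws <- matrix(NA_real_, n_iter, 3, dimnames = list(NULL, c("alpha", "beta", "lambda")))
  cur <- init
  for (t in seq_len(n_iter)) {
    for (k in 1:3) {                      # one normal random-walk MH step per parameter
      prop <- cur; prop[k] <- rnorm(1, cur[k], sd_prop[k])
      if (log(runif(1)) < log_post(prop, x) - log_post(cur, x)) cur <- prop
    }
    draws[t, ] <- cur
  }
  draws
}
set.seed(1)
x <- (-log(1 - 2 / pi * asin(runif(100)^(1 / 2))))^(1 / 1.5)   # ESW(2, 1.5, 1) sample
post <- mh_gibbs(x)
colMeans(post[-(1:900), ])   # posterior means under the SEL, discarding a 30% burn-in
```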

5.3. Simulation Study I

The performance of the maximum likelihood and Bayes estimations is evaluated using simulation results. We generate 1000 samples of size $n = 50, 60, 70, \ldots, 300$ from the ESW distribution for some selected parameter values using Equation (17). The bias, mean square error (MSE), average length of the CI (ALCI), and coverage probability (CP) of the estimates are computed using R 3.5.3 developed by [38]. For the MLEs, we use the package maxLik developed by [39] in the R software, via maxBFGS; this package allows us to obtain the MLEs and the information matrix, and the package matlib elaborated by [40] can be used to get the inverse of the information matrix to compute the confidence intervals. The resulting simulations are given in Figure 4, Figure 5, Figure 6 and Figure 7. For the Bayes estimation, we used $N = 3000$ iterations and discarded the first 30% as a burn-in sample. We notice that the estimation behaves well when all the hyperparameters are greater than one, so that the gamma prior PDFs are unimodal. The results indicate that the MLE and BE perform consistently: the bias, MSE, and ALCI decrease as the sample size increases, and the CP for each parameter approaches 0.95 in all cases.

5.4. Estimation of the Stress-Strength Reliability from the ESW Distribution

In this part, the expression of a reliability parameter of the ESW distribution is derived when the parameter λ is common. The maximum likelihood estimation of R is obtained. The nonparametric percentile bootstrap (Bp) and Student’s bootstrap (Bt) are used to approximate the CI of R. Finally, we assessed the estimates through simulation studies.
In the setting of the ESW distribution, let $X$ be an RV having the PDF $f(x; \alpha_1, \beta_1, \lambda)$ and $Y$ be an RV with the CDF $F(x; \alpha_2, \beta_2, \lambda)$. We assume that $X$ and $Y$ are independent. Then, the stress-strength reliability parameter is given by
$$R = \int_{-\infty}^{\infty} f(x; \alpha_1, \beta_1, \lambda)\, F(x; \alpha_2, \beta_2, \lambda)\, dx = \frac{\pi}{2} \alpha_1 \beta_1 \lambda \int_0^{\infty} x^{\beta_1 - 1} e^{-\lambda x^{\beta_1}} \cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta_1}}\right)\right] \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta_1}}\right)\right]\right\}^{\alpha_1 - 1} \times \left\{\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x^{\beta_2}}\right)\right]\right\}^{\alpha_2} dx.$$
Now, by considering the changes of variables $u = (\pi/2)\left(1 - e^{-\lambda x^{\beta_1}}\right)$ and
$$r_u = \frac{\pi}{2}\left(1 - e^{-\lambda\left[-(1/\lambda)\log\left(1 - 2u/\pi\right)\right]^{\beta_2/\beta_1}}\right),$$
we get
$$R = \alpha_1 \int_0^{\pi/2} \cos u\, (\sin u)^{\alpha_1 - 1} \left[\sin(r_u)\right]^{\alpha_2}\, du.$$
The parameter R above has no closed form expression, but it can be easily calculated using mathematical software such as R and Matlab, among others. Moreover, when β 1 = β 2 , the expression of R can be deduced from Equation (10).
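For instance, the integral above can be evaluated in R as in the sketch below (parameter values are illustrative); when $\beta_1 = \beta_2$, the result reduces to $\alpha_1/(\alpha_1 + \alpha_2)$, which serves as a simple check.

```r
# Numerical evaluation of R for the ESW stress-strength model (a sketch)
R_ESW <- function(alpha1, beta1, alpha2, beta2, lambda) {
  r_u <- function(u)
    pi / 2 * (1 - exp(-lambda * (-1 / lambda * log(1 - 2 * u / pi))^(beta2 / beta1)))
  integrand <- function(u) cos(u) * sin(u)^(alpha1 - 1) * sin(r_u(u))^alpha2
  alpha1 * integrate(integrand, 0, pi / 2)$value
}
R_ESW(alpha1 = 2, beta1 = 1, alpha2 = 3, beta2 = 1, lambda = 0.5)   # equals 2/5
```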

5.4.1. Maximum Likelihood Estimation of R

Assume that $X_1, X_2, \ldots, X_n$, $n \ge 1$, is a sample of independent RVs whose distribution is the $ESW(\alpha_1, \beta_1, \lambda)$ distribution, and $Y_1, Y_2, \ldots, Y_m$, $m \ge 1$, is a sample of independent RVs whose distribution is the $ESW(\alpha_2, \beta_2, \lambda)$ distribution. We work with the observations of these samples, denoted by $x_1, x_2, \ldots, x_n$ and $y_1, y_2, \ldots, y_m$, respectively. Let $\theta = (\alpha_1, \alpha_2, \beta_1, \beta_2, \lambda)^T$ be the vector of unknown parameters, $\hat{\theta} = (\hat{\alpha}_1, \hat{\alpha}_2, \hat{\beta}_1, \hat{\beta}_2, \hat{\lambda})^T$ be the MLE of $\theta$, and $\hat{R}$ be the estimate of $R$. Then, the log-likelihood function is given as
$$L_R(\theta) = (m + n)\log\frac{\pi}{2} + n\log\alpha_1 + n\log\beta_1 + (m + n)\log\lambda + (\beta_1 - 1)\sum_{i=1}^{n}\log x_i - \lambda\sum_{i=1}^{n} x_i^{\beta_1} + \sum_{i=1}^{n}\log\cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] + (\alpha_1 - 1)\sum_{i=1}^{n}\log\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] + m\log\alpha_2 + m\log\beta_2 + (\beta_2 - 1)\sum_{j=1}^{m}\log y_j - \lambda\sum_{j=1}^{m} y_j^{\beta_2} + \sum_{j=1}^{m}\log\cos\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] + (\alpha_2 - 1)\sum_{j=1}^{m}\log\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right].$$
It can be maximized with respect to $\theta$ to obtain $\hat{\theta}$. Equivalently, $\hat{\theta}$ can be obtained by solving the system of equations given below:
$$\frac{\partial L_R(\theta)}{\partial \alpha_1} = \frac{n}{\alpha_1} + \sum_{i=1}^{n}\log\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] = 0,$$
$$\frac{\partial L_R(\theta)}{\partial \alpha_2} = \frac{m}{\alpha_2} + \sum_{j=1}^{m}\log\sin\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] = 0,$$
$$\frac{\partial L_R(\theta)}{\partial \beta_1} = \frac{n}{\beta_1} + \sum_{i=1}^{n}\log x_i - \lambda\sum_{i=1}^{n} x_i^{\beta_1}\log x_i - \lambda\frac{\pi}{2}\sum_{i=1}^{n}\tan\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] e^{-\lambda x_i^{\beta_1}} x_i^{\beta_1}\log x_i + (\alpha_1 - 1)\lambda\frac{\pi}{2}\sum_{i=1}^{n}\cot\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] e^{-\lambda x_i^{\beta_1}} x_i^{\beta_1}\log x_i = 0,$$
$$\frac{\partial L_R(\theta)}{\partial \beta_2} = \frac{m}{\beta_2} + \sum_{j=1}^{m}\log y_j - \lambda\sum_{j=1}^{m} y_j^{\beta_2}\log y_j - \lambda\frac{\pi}{2}\sum_{j=1}^{m}\tan\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] e^{-\lambda y_j^{\beta_2}} y_j^{\beta_2}\log y_j + (\alpha_2 - 1)\lambda\frac{\pi}{2}\sum_{j=1}^{m}\cot\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] e^{-\lambda y_j^{\beta_2}} y_j^{\beta_2}\log y_j = 0$$
and
$$\frac{\partial L_R(\theta)}{\partial \lambda} = \frac{m + n}{\lambda} - \sum_{i=1}^{n} x_i^{\beta_1} - \sum_{j=1}^{m} y_j^{\beta_2} - \frac{\pi}{2}\sum_{i=1}^{n}\tan\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] e^{-\lambda x_i^{\beta_1}} x_i^{\beta_1} - \frac{\pi}{2}\sum_{j=1}^{m}\tan\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] e^{-\lambda y_j^{\beta_2}} y_j^{\beta_2} + (\alpha_1 - 1)\frac{\pi}{2}\sum_{i=1}^{n}\cot\left[\frac{\pi}{2}\left(1 - e^{-\lambda x_i^{\beta_1}}\right)\right] e^{-\lambda x_i^{\beta_1}} x_i^{\beta_1} + (\alpha_2 - 1)\frac{\pi}{2}\sum_{j=1}^{m}\cot\left[\frac{\pi}{2}\left(1 - e^{-\lambda y_j^{\beta_2}}\right)\right] e^{-\lambda y_j^{\beta_2}} y_j^{\beta_2} = 0.$$
Once the MLE θ ^ is obtained, we can compute R ^ from Equation (23).

5.4.2. Bootstrap CIs for R

Here, we propose the use of two nonparametric CIs, the percentile Bp CI and the Bt CI, as discussed in [41]. The following steps are required to obtain the bootstrap CIs; a short R sketch is given after the steps:
  • We generate a sample of values $x_1, x_2, x_3, \ldots, x_n$ from the $ESW(\alpha_1, \beta_1, \lambda)$ distribution, and an independent sample of values $y_1, y_2, y_3, \ldots, y_m$ from the $ESW(\alpha_2, \beta_2, \lambda)$ distribution.
  • We generate independent bootstrap samples of values $x_1^*, x_2^*, x_3^*, \ldots, x_n^*$ and $y_1^*, y_2^*, y_3^*, \ldots, y_m^*$ by sampling with replacement from the samples in the first step. Based on the bootstrap samples, we compute the MLE of $\theta$, say $\hat{\theta}^* = (\hat{\alpha}_1^*, \hat{\alpha}_2^*, \hat{\beta}_1^*, \hat{\beta}_2^*, \hat{\lambda}^*)^T$, and then the corresponding MLE of $R$, say $\hat{R}^*$.
  • In order to get a set of bootstrap replicates of $R$, we repeat step 2, $B$ times. We consider the replicates ordered in increasing order, say $\hat{R}_j^*$, $j = 1, 2, \ldots, B$.
Then, bootstrap CIs of R can be obtained.
  • Percentile bootstrap CI:
    Let $\hat{R}_{(\delta)}^*$ be the $\delta$ percentile of $\hat{R}_j^*$, $j = 1, 2, 3, \ldots, B$, that is, the value such that
    $$\frac{1}{B}\sum_{j=1}^{B} I\left\{\hat{R}_j^* \le \hat{R}_{(\delta)}^*\right\} = \delta, \quad 0 < \delta < 1,$$
    where $I\{\cdot\}$ denotes the standard indicator function. A $100(1 - \epsilon)\%$ Bp CI of $R$ is given as
    $$\left(\hat{R}_{(\epsilon/2)}^*,\ \hat{R}_{(1 - \epsilon/2)}^*\right).$$
  • Student’s t bootstrap CI:
    Let us set
    $$\bar{\hat{R}}^* = \frac{1}{B}\sum_{j=1}^{B}\hat{R}_j^*, \quad se(\hat{R}^*) = \sqrt{\frac{1}{B}\sum_{j=1}^{B}\left(\hat{R}_j^* - \bar{\hat{R}}^*\right)^2},$$
    and let $\hat{t}_{\delta}^*$ be the $\delta$ percentile of $(\hat{R}_j^* - \hat{R})/se(\hat{R}^*)$, $j = 1, 2, \ldots, B$, such that
    $$\frac{1}{B}\sum_{j=1}^{B} I\left\{(\hat{R}_j^* - \hat{R})/se(\hat{R}^*) \le \hat{t}_{\delta}^*\right\} = \delta, \quad 0 < \delta < 1.$$
    With these tools, a $100(1 - \epsilon)\%$ Bt CI of $R$ is given as
    $$\bar{\hat{R}}^* \pm \hat{t}_{(\epsilon/2)}^*\, se(\hat{R}^*) = \left(\bar{\hat{R}}^* - \hat{t}_{(\epsilon/2)}^*\, se(\hat{R}^*),\ \bar{\hat{R}}^* + \hat{t}_{(\epsilon/2)}^*\, se(\hat{R}^*)\right).$$
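A compact R sketch of both bootstrap intervals is given below. The helper fit_R(), which returns the MLE of R for a pair of samples (e.g., by maximizing $L_R(\theta)$ with optim() and evaluating the integral form of R at the estimates), is a hypothetical placeholder, and the Bt interval follows one reading of the formula above, using the lower and upper studentized quantiles.

```r
# Percentile (Bp) and Student's t (Bt) bootstrap CIs for R (a sketch)
boot_CI_R <- function(x, y, fit_R, B = 1000, eps = 0.05) {
  R_hat  <- fit_R(x, y)                                   # fit_R: hypothetical MLE helper
  R_star <- replicate(B, fit_R(sample(x, replace = TRUE), sample(y, replace = TRUE)))
  bp     <- quantile(R_star, c(eps / 2, 1 - eps / 2))     # percentile bootstrap CI
  R_bar  <- mean(R_star)
  se_R   <- sqrt(mean((R_star - R_bar)^2))
  t_star <- quantile((R_star - R_hat) / se_R, c(eps / 2, 1 - eps / 2))
  bt     <- R_bar + t_star * se_R                         # Student's t bootstrap CI (one reading)
  list(Bp = unname(bp), Bt = unname(bt))
}
```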

5.4.3. Simulation Study II

Here, a simulation is conducted to study the performance of the MLE and the bootstrap CIs of $R$. A total of 1000 samples are generated via Equation (17) for various sample sizes and parameter values from the $ESW(\alpha_1, \beta_1, \lambda)$ and $ESW(\alpha_2, \beta_2, \lambda)$ distributions. Let $n$ and $m$ be these sample sizes, respectively. We consider various cases of $(n, m)$: $(20, 20)$, $(30, 20)$, $(40, 40)$, $(40, 60)$, and $(60, 60)$. In each case, the MLE of $R$ is obtained, and the standard deviation (SD), bias, and MSE of $R$ are computed. A 95% CI of $R$ is obtained using the nonparametric percentile Bp and Bt CIs, with $B = 1000$ bootstrap replications. The results of the simulation studies are given in Table 1. They show that, as the two sample sizes increase, the SDs and MSEs decrease, the estimated value of $R$ converges to its actual value, and the ALCI of the two bootstrap methods decreases; also, the CP goes to the nominal level of 0.95.

6. Application

Using three real data applications, we demonstrate the performance of the ESW model in comparison with its submodels and several other existing models. Two data sets for a stress-strength reliability assessment make up one application. We estimate all the competing models' parameters by the maximum likelihood method. Also, the parameters of the ESW model are further estimated by the Bayes technique. We traditionally compare the fitted models using the Akaike information criterion (AIC), Bayesian information criterion (BIC), and consistent Akaike information criterion (CAIC), all of which are defined via the estimated log-likelihood ($\hat{L}$) as a main ingredient. Further, the goodness-of-fit statistics known as the Kolmogorov-Smirnov (KS), Anderson-Darling (AD), and Cramér-von Mises (CvM) statistics are considered. As usual, the model with the smallest values of these metrics represents the data better than the other models. For the MLEs and BEs of the parameters of the ESW model, we compare them via the KS, AD, and CvM statistics. The competing models are the generalized exponential (GE) model (see [42]), generalized Rayleigh (GR) model (see [43]), generalized exponential Poisson (GEP) model (see [44]), half logistic Poisson (HLP) model (see [45]), exponentiated Nadarajah-Haghighi (ENH) model (see [46,47]), generalized inverse Weibull (GIW) model (see [48]), and the W model.

6.1. Real Data Application I

This data set is constituted by the total milk production from the first birth of 107 cows of the SINDI race. The data can be found in [49]. They were also analyzed by [10]. Concretely, the data set is: 0.4365, 0.4260, 0.5140, 0.6907, 0.7471, 0.2605, 0.6196, 0.8781, 0.4990, 0.6058, 0.6891, 0.5770, 0.5394, 0.1479, 0.2356, 0.6012, 0.1525, 0.5483, 0.6927, 0.7261, 0.3323, 0.0671, 0.2361, 0.4800, 0.5707, 0.7131, 0.5853, 0.6768, 0.5350, 0.4151, 0.6789, 0.4576, 0.3259, 0.2303, 0.7687, 0.4371, 0.3383, 0.6114, 0.3480, 0.4564, 0.7804, 0.3406, 0.4823, 0.5912, 0.5744, 0.5481, 0.1131, 0.7290, 0.0168, 0.5529, 0.4530, 0.3891, 0.4752, 0.3134, 0.3175, 0.1167, 0.6750, 0.5113, 0.5447, 0.4143, 0.5627, 0.5150, 0.0776, 0.3945, 0.4553, 0.4470, 0.5285, 0.5232, 0.6465, 0.0650, 0.8492, 0.8147, 0.3627, 0.3906, 0.4438, 0.4612, 0.3188, 0.2160, 0.6707, 0.6220, 0.5629, 0.4675, 0.6844, 0.3413, 0.4332, 0.0854, 0.3821, 0.4694, 0.3635, 0.4111, 0.5349, 0.3751, 0.1546, 0.4517, 0.2681, 0.4049, 0.5553, 0.5878, 0.4741, 0.3598, 0.7629, 0.5941, 0.6174, 0.6860, 0.0609, 0.6488, 0.2747.
Figure 8 is the total time on test (TTT) plot of the data set. It shows that the data exhibit an increasing HRF. The ESW model is a capable candidate to accommodate this kind of HRF. The numerical values of the considered statistical measures for each model are computed and presented in Table 2. The results show that the ESW model provides a better fit than the other competing models. The Bayes estimation method shows a better fit in terms of the KS statistic, while the maximum likelihood method shows a better fit in terms of the AD and CvM statistics; both techniques are suitable choices for parameter estimation of the ESW model. Figure 9 illustrates the plots of the histogram with the fitted PDF of the ESW model and the empirical SF with the fitted SF of the ESW model. Figure 10 displays the plots of the HRF of the ESW model for the data, the empirical cumulative HRF (CHRF) with the fitted CHRF of the ESW model, and the quantile-quantile (QQ) plot of the ESW model. The obtained fits are totally satisfying. Figure 11 illustrates the profile log-likelihood of each parameter for the given data set; the uniqueness of the estimates can be seen. To show the performance of the BEs, Figure 12 shows the iterations obtained from the MH algorithm and the Gibbs sampling technique for each parameter, and Figure 13 shows the posterior PDFs of each parameter based on these iterations.
We end this part by indicating the observed information matrix of the ESW model:
$$I(\hat{\alpha}, \hat{\beta}, \hat{\lambda}) = \begin{pmatrix} 1034.53601 & 90.1359 & 19.22373 \\ 90.13590 & 107836.9429 & 107846.01301 \\ 19.22373 & 107846.0130 & 107775.44190 \end{pmatrix}$$
and
$$I(\hat{\alpha}, \hat{\beta}, \hat{\lambda})^{-1} = \begin{pmatrix} 0.00113068 & 0.00155187 & 0.00155269 \\ 0.00155187 & 0.01467901 & 0.01468834 \\ 0.00155269 & 0.01468834 & 0.01468841 \end{pmatrix}.$$

6.2. Real Data Application II

This data set is provided by [50]. It consists of the remission times (in months) of a random sample of 128 bladder cancer patients, also studied by [25,51]. The data set is: 0.08, 2.09, 3.48, 4.87, 6.94, 8.66, 13.11, 23.63, 0.20, 2.23, 3.52, 4.98, 6.97, 9.02, 13.29, 0.40, 2.26, 3.57, 5.06, 7.09, 9.22, 13.80, 25.74, 0.50, 2.46, 3.64, 5.09, 7.26, 9.47, 14.24, 25.82, 0.51, 2.54, 3.70, 5.17, 7.28, 9.74, 14.76, 26.31, 0.81, 2.62, 3.82, 5.32, 7.32, 10.06, 14.77, 32.15, 2.64, 3.88, 5.32, 7.39, 10.34, 14.83, 34.26, 0.90, 2.69, 4.18, 5.34, 7.59, 10.66, 15.96, 36.66, 1.05, 2.69, 4.23, 5.41, 7.62, 10.75, 16.62, 43.01, 1.19, 2.75, 4.26, 5.41, 7.63, 17.12, 46.12, 1.26, 2.83, 4.33, 5.49, 7.66, 11.25, 17.14, 79.05, 1.35, 2.87, 5.62, 7.87, 11.64, 17.36, 1.40, 3.02, 4.34, 5.71, 7.93, 11.79, 18.10, 1.46, 4.40, 5.85, 8.26, 11.98, 19.13, 1.76, 3.25, 4.50, 6.25, 8.37, 12.02, 2.02, 3.31, 4.51, 6.54, 8.53, 12.03, 20.28, 2.02, 3.36, 6.76, 12.07, 21.73, 2.07, 3.36, 6.93, 8.65, 12.63, 22.69.
Figure 14 displays the TTT plot of the data set. It shows that the data exhibit an upside-down-bathtub HRF, and the ESW model is a capable candidate to accommodate such an HRF. The numerical values of these measures for all the models are computed and given in Table 3; the results show that the ESW model captures the information of the data better than the other competing models. Figure 15 gives the plot of the histogram with the fitted PDF of the ESW model and empirical SF with the fitted SF of the ESW model. Figure 16 shows the plot of the HRF of the ESW model, empirical CHRF with the fitted CHRF of the ESW model, and QQ plot of the ESW model. Figure 17 is the plots of the profile log-likelihood of each parameter. To illustrate the Bayes estimates performance, Figure 18 shows the iterations obtained from the MH algorithm and the Gibbs sampling technique for each parameter, and Figure 19 shows the posterior PDFs of each parameter based on the iterations obtained.
To end this portion, the observed information matrix of the ESW model is computed as
$$I(\hat{\alpha}, \hat{\beta}, \hat{\lambda}) = \begin{pmatrix} 16.71197 & 72.64589 & 220.8935 \\ 72.64589 & 2307.67228 & 2847.1163 \\ 220.89353 & 2847.11632 & 4766.8891 \end{pmatrix}$$
and
$$I(\hat{\alpha}, \hat{\beta}, \hat{\lambda})^{-1} = \begin{pmatrix} 1.4557178 & 0.14214165 & 0.15235355 \\ 0.1421417 & 0.01552620 & 0.01586004 \\ 0.1523536 & 0.01586004 & 0.01674243 \end{pmatrix}.$$

6.3. Real Data Application III

In this subsection, we illustrate the performance of the ESW model in stress-strength reliability studies by using real data sets. Also, we demonstrate the application of the proposed estimation techniques in a practical scenario. We compute the value of $R$ by the maximum likelihood approach, and the 95% nonparametric Bp and Bt CIs of $R$ using $B = 1000$ replications. The KS statistic is used to show how good the fit of the ESW model is for the two data sets.
The reference [52] provides the following data sets. One data set represents single fibers tested under tension at gauge lengths of 10 mm (data1), and the other represents impregnated tows of 1000 fibers tested at gauge lengths of 20 mm (data2), of sizes n = 63 and m = 69 , respectively.
They are given as:
Data1(X): 1.901, 2.132, 2.203, 2.228, 2.257, 2.350, 2.361, 2.396, 2.397, 2.445, 2.454, 2.474, 2.518, 2.522, 2.525, 2.532, 2.575, 2.614, 2.616, 2.618, 2.624, 2.659, 2.675, 2.738, 2.740, 2.856, 2.917, 2.928, 2.937, 2.937, 2.977, 2.996, 3.030, 3.125, 3.139, 3.145, 3.220, 3.223, 3.235, 3.243, 3.264, 3.272, 3.294, 3.332, 3.346, 3.377, 3.408, 3.435, 3.493, 3.501, 3.537, 3.554, 3.562, 3.628, 3.852, 3.871, 3.886, 3.971, 4.024, 4.027, 4.225, 4.395, 5.020.
Data2(Y): 1.312, 1.314, 1.479, 1.552, 1.700, 1.803, 1.861, 1.865, 1.944, 1.958, 1.966, 1.997, 2.006, 2.021, 2.027, 2.055, 2.063, 2.098, 2.14, 2.179, 2.224, 2.240, 2.253, 2.270, 2.272, 2.274, 2.301, 2.301, 2.359, 2.382, 2.382, 2.426, 2.434, 2.435, 2.478, 2.490, 2.511, 2.514, 2.535, 2.554, 2.566, 2.57, 2.586, 2.629, 2.633, 2.642, 2.648, 2.684, 2.697, 2.726, 2.770, 2.773, 2.800, 2.809, 2.818, 2.821, 2.848, 2.88, 2.954, 3.012, 3.067, 3.084, 3.090, 3.096, 3.128, 3.233, 3.433, 3.585, 3.585.
The estimated values from the application are given in Table 4; it is quite clear that the ESW model fits the two data sets well as measured by the KS statistic (i.e., $KS_1$ for the ESW distribution underlying Data1(X) and $KS_2$ for the ESW distribution underlying Data2(Y)), and its performance indicates that the ESW model can be considered a good candidate for stress-strength reliability analysis. Figure 20 displays the empirical and fitted CDFs of the ESW model, and the PDF of the estimated bootstrap values of $R$ for the data sets. Figure 21 shows the profile log-likelihood of the estimated parameters; it confirms that the obtained MLE is unique.

7. Conclusions

The exponentiated sine-generated family is a new family of distributions proposed in this paper. Some important mathematical and statistical properties were derived, such as the series representation of the probability density function, quantile function, moments, stress-strength reliability parameter, and Rényi entropy. A special member of the family, called the exponentiated sine Weibull (ESW) distribution, was derived and studied. We analyzed its skewness and kurtosis, mean residual and reversed mean residual life functions, order statistics, and extreme value distributions. The maximum likelihood estimation and Bayes estimation under the square error loss function of the ESW model were discussed. They were assessed using simulation studies via the mean square error and confidence intervals of the estimates, and they work well. The expression of a reliability parameter of the ESW distribution was derived when the scale parameter is common. The maximum likelihood method was used to estimate this special parameter, and nonparametric bootstrap techniques were considered for the confidence interval. The estimation was assessed by simulation studies and worked well, as examined through the mean square error, standard deviations, confidence intervals, and coverage probability. In the end, three applications of the ESW model to real data were provided for illustration, in which the ESW model outperforms some other existing models in terms of fitting and is demonstrated to be a good candidate for stress-strength parameter analysis. The potential for new applications of the ESW model in applied fields is huge, opening the door to new statistical analyses in applied sciences dealing with important lifetime data. Also, other specific members of the new family can be investigated for further applied perspectives.

Author Contributions

All authors contributed equally to the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by: The Key Fund of Department of Education of Hebei Province (ZD2018065); The funds of Deanship of Scientific Research, Majmaah University via project number (R-2021-248).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors warmly thank the referees for their useful comments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Evans, M.; Hastings, N.; Peacock, B. Statistical Distributions; IOP Publishing: Bristol, UK, 2001. [Google Scholar]
  2. Kent, J.T.; Tyler, D.E. Maximum likelihood estimation for the wrapped Cauchy distribution. J. Appl. Stat. 1988, 15, 247–254. [Google Scholar] [CrossRef]
  3. Fisher, N.I. Statistical Analysis of Directional Data; Cambridge University Press: Cambridge, UK, 1993. [Google Scholar]
  4. Nadarajah, S.; Kotz, S. Beta trigonometric distributions. Port. Econ. J. 2006, 5, 207–224. [Google Scholar] [CrossRef]
  5. Al-Faris, R.Q.; Khan, S. Sine square distribution: A new statistical model based on the sine function. J. Appl. Probab. Stat. 2008, 3, 163–173. [Google Scholar]
  6. Kharazmi, O.; Saadatinik, A. Hyperbolic cosine-F family of distributions with an application to exponential distribution. Gazi Univ. J. Sci. 2016, 29, 811–829. [Google Scholar]
  7. Kumar, D.; Singh, U.; Singh, S.K. A new distribution using sine function-its application to bladder cancer patients data. J. Stat. Appl. Probab. 2015, 4, 417. [Google Scholar]
  8. Chesneau, C.; Bakouch, H.S.; Hussain, T. A new class of probability distributions via cosine and sine functions with applications. Commun. Stat. Simul. Comput. 2019, 48, 2287–2300. [Google Scholar] [CrossRef]
  9. Mahmood, Z.; Chesneau, C.; Tahir, M.H. A new sine-G family of distributions: Properties and applications. Bull. Comput. Appl. Math. 2019, 7, 53–81. [Google Scholar]
  10. Jamal, F.; Chesneau, C. A new family of polyno-expo-trigonometric distributions with applications. Infin. Dimens. Anal. Quantum Probab. Relat. Top. 2019, 22, 1950027. [Google Scholar] [CrossRef] [Green Version]
  11. Bakouch, H.S.; Chesneau, C.; Leao, J. A new lifetime model with a periodic hazard rate and an application. J. Stat. Comput. Simul. 2018, 88, 2048–2065. [Google Scholar] [CrossRef]
  12. Abate, J.; Choudhury, G.L.; Lucantoni, D.M.; Whitt, W. Asymptotic analysis of tail probabilities based on the computation of moments. Ann. Appl. Probab. 1995, 5, 983–1007. [Google Scholar] [CrossRef]
  13. Brychkov, Y.A. Power expansions of powers of trigonometric functions and series containing Bernoulli and Euler polynomials. Integral Transform. Spec. Funct. 2009, 20, 797–804. [Google Scholar] [CrossRef]
  14. Gupta, R.C.; Gupta, P.L.; Gupta, R.D. Modeling failure time data by Lehman alternatives. Commun. Stat. Theory Methods 1998, 27, 887–904. [Google Scholar] [CrossRef]
  15. Gradshteyn, I.; Ryzhik, I.; Jeffrey, A.; Zwillinger, D. Table of Integrals, Series and Products, 7th ed.; Academic Press: New York, NY, USA, 2007. [Google Scholar]
  16. Kotz, S.; Pensky, M. The Stress-Strength Model and Its Generalizations: Theory and Applications; World Scientific: Singapore, 2003. [Google Scholar]
  17. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Estimation of the stress–strength reliability for the generalized logistic distribution. Stat. Methodol. 2013, 15, 73–94. [Google Scholar] [CrossRef]
  18. Barbiero, A. Confidence intervals for reliability of stress-strength models in the normal case. Commun. Stat. Simul. Comput. 2011, 40, 907–925. [Google Scholar] [CrossRef]
  19. Guo, H.; Krishnamoorthy, K. New approximate inferential methods for the reliability parameter in a stress–strength model: The normal case. Commun. Stat. Theory Methods 2004, 33, 1715–1731. [Google Scholar] [CrossRef]
  20. Baklizi, A.; El-Masri, A.E.Q. Shrinkage estimation of P(X < Y) in the exponential case with common location parameter. Metrika 2004, 59, 163–171. [Google Scholar]
  21. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for generalized exponential distribution. Metrika 2005, 61, 291–308. [Google Scholar]
  22. Nadarajah, S.; Bagheri, S.; Alizadeh, M.; Samani, E.B. Estimation of the stress strength parameter for the generalized exponential-Poisson distribution. J. Test. Eval. 2018, 46, 2184–2202. [Google Scholar] [CrossRef]
  23. Muhammad, M. Poisson-odd generalized exponential family of distributions: Theory and applications. Hacet. J. Math. Stat. 2016, 47, 1652–1670. [Google Scholar] [CrossRef]
  24. Shrahili, M.; Elbatal, I.; Muhammad, I.; Muhammad, M. Properties and applications of beta Erlang-truncated exponential distribution. J. Math. Comput. Sci. (JMCS) 2021, 22, 16–37. [Google Scholar] [CrossRef]
  25. Muhammad, M.; Liu, L. A new extension of the generalized half logistic distribution with applications to real data. Entropy 2019, 21, 339. [Google Scholar] [CrossRef] [Green Version]
  26. Muhammad, I.; Wang, X.; Li, C.; Yan, M.; Chang, M. Estimation of the reliability of a stress–strength system from Poisson half logistic distribution. Entropy 2020, 22, 1307. [Google Scholar] [CrossRef]
  27. Krishnamoorthy, K.; Lin, Y. Confidence limits for stress–strength reliability involving Weibull models. J. Stat. Plan. Inference 2010, 140, 1754–1764. [Google Scholar] [CrossRef]
  28. Kundu, D.; Gupta, R.D. Estimation of P[Y < X] for Weibull distributions. IEEE Trans. Reliab. 2006, 55, 270–280. [Google Scholar]
  29. Asgharzadeh, A.; Valiollahi, R.; Raqab, M.Z. Stress-strength reliability of Weibull distribution based on progressively censored samples. Sort-Stat. Oper. Res. Trans. 2011, 35, 103–124. [Google Scholar]
  30. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; Wiley: New York, NY, USA, 1992; Volume 54. [Google Scholar]
  31. Muhammad, M. A generalization of the Burr XII-poisson distribution and its applications. J. Stat. Appl. Probab. 2016, 5, 29–41. [Google Scholar] [CrossRef]
  32. Muhammad, M.; Liu, L. A new three parameter lifetime model: The complementary poisson generalized half logistic distribution. IEEE Access 2021, 9, 60089–60107. [Google Scholar] [CrossRef]
  33. Meredith, M.; Kruschke, J. HDInterval: Highest (Posterior) Density Intervals. R Package Version 0.1.3. 2016. Available online: https://CRAN.R-project.org/package=HDInterval (accessed on 13 September 2021).
  34. Metropolis, N.; Rosenbluth, A.W.; Rosenbluth, M.N.; Teller, A.H.; Teller, E. Equation of state calculations by fast computing machines. J. Chem. Phys. 1953, 21, 1087–1092. [Google Scholar] [CrossRef] [Green Version]
  35. Hastings, W.K. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 1970, 57, 97–109. [Google Scholar] [CrossRef]
  36. Gelfand, A.E.; Smith, A.F.M. Sampling-based approaches to calculating marginal densities. J. Am. Stat. Assoc. 1990, 85, 398–409. [Google Scholar]
  37. Chen, M.H.; Shao, Q.M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 1999, 8, 69–92. [Google Scholar]
  38. R Core Team. A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2019. [Google Scholar]
  39. Henningsen, A.; Toomet, O. maxLik: A package for maximum likelihood estimation in R. Comput. Stat. 2011, 26, 443–458. [Google Scholar] [CrossRef]
  40. Friendly, M.; Fox, J.; Chalmers, P.; Monette, G.; Sanchez, G.; Friendly, M.M. Package matlib. 2016. Available online: https://cran.r-project.org/web/packages/matlib/index.html (accessed on 13 September 2021). [Google Scholar]
  41. Tibshirani, R.J.; Efron, B. An Introduction to the Bootstrap; Monographs on Statistics and Applied Probability; Chapman & Hall: New York, NY, USA, 1993; Volume 57, 436p. [Google Scholar]
  42. Gupta, R.D.; Kundu, D. Theory & methods: Generalized exponential distributions. Aust. N. Z. J. Stat. 1999, 41, 173–188. [Google Scholar]
  43. Lovric, M. International Encyclopedia of Statistical Science; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  44. Barreto-Souza, W.; Cribari-Neto, F. A generalization of the exponential-Poisson distribution. Stat. Probab. Lett. 2009, 79, 2493–2500. [Google Scholar] [CrossRef] [Green Version]
  45. Muhammad, M.; Yahaya, M.A. The half logistic-Poisson distribution. Asian J. Math. Appl. 2017, 2017, 126000416. [Google Scholar]
  46. Abdul-Moniem, I.B. Exponentiated Nadarajah and Haghighi exponential distribution. Int. J. Math. Anal. Appl. 2015, 2, 68–73. [Google Scholar]
  47. Lemonte, A.J. A new exponential-type distribution with constant, decreasing, increasing, upside-down bathtub and bathtub-shaped failure rate function. Comput. Stat. Data Anal. 2013, 62, 149–170. [Google Scholar] [CrossRef]
  48. de Gusmão, F.R.S.; Ortega, E.M.M.; Cordeiro, G.M. The generalized inverse Weibull distribution. Stat. Pap. 2011, 52, 591–619. [Google Scholar] [CrossRef]
  49. Cordeiro, G.M.; dos Santos Brito, R. The beta power distribution. Braz. J. Probab. Stat. 2012, 26, 88–112. [Google Scholar]
  50. Lee, E.T.; Wang, J. Statistical Methods for Survival Data Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2003; Volume 476. [Google Scholar]
  51. Muhammad, M. Generalized half-logistic Poisson distributions. Commun. Stat. Appl. Methods 2017, 24, 353–365. [Google Scholar]
  52. Bader, M.; Priest, A. Statistical aspects of fibre and bundle strength in hybrid composites. In Proceedings of the Fourth International Conference on Composite Materials, Tokyo, Japan, 25–28 October 1982; Progress in Science and Engineering of Composites. pp. 1129–1136. [Google Scholar]
Figure 1. Selected plots of PDF f(x) and HRF h(x).
Figure 2. Selected plots of various kinds of bathtub shapes of HRF h(x).
Figure 3. Selected plots of B and M of the ESW distribution.
Figure 4. Plots of the Bias, MSE, ALCI, and CP for the estimated α = 1.5, β = 0.5 and λ = 1.5.
Figure 5. Plots of the Bias, MSE, ALCI, and CP for the estimated α = 1.5, β = 0.9 and λ = 0.9.
Figure 6. Plots of the Bias, MSE, ALCI, and CP for the estimated α = 0.8, β = 0.9 and λ = 0.5.
Figure 7. Plots of the Bias, MSE, ALCI, and CP for the estimated α = 1.6, β = 0.6 and λ = 1.4.
Figure 8. TTT plot for the data I.
Figure 9. Plots of histogram with fitted PDF of the ESW model, and empirical SF with fitted SF of the ESW model for the data I.
Figure 10. Plots of empirical with fitted HRF of the ESW model, empirical CHRF with fitted CHRF of the ESW model, and QQ plot for the data I.
Figure 11. Plots of profile log-likelihood function of each parameter for the data I.
Figure 12. Plots of iterations from the MH algorithm and Gibbs sampling technique for the data I.
Figure 13. Plots of posterior PDFs of α, β and λ in the ESW model based on iterations for the data I.
Figure 14. TTT plot for the data II.
Figure 15. Plots of histogram with the fitted PDF of the ESW model, and empirical SF with fitted SF of the ESW model for the data II.
Figure 16. Plots of empirical HRF with fitted HRF of the ESW model, empirical CHRF with fitted CHRF, and QQ plot for the ESW model for the data II.
Figure 17. Plots of profile log-likelihood function of each parameter of the ESW model for the data II.
Figure 18. Plots of iterations from the MH algorithm and Gibbs sampling technique for the data II.
Figure 19. Plots of posterior PDFs of α, β and λ in the ESW model based on iterations for the data II.
Figure 20. Plots of empirical and fitted CDF of the ESW model, and PDF of estimated bootstrap values of R for stress-strength data sets.
Figure 21. Plots of profile log-likelihood function of each parameter for stress-strength data sets.
Table 1. Parameters' values, actual value of R, estimated value of R with SD in parentheses, MSE of R̂ with bias in parentheses, and ALCI with CP in parentheses. (Blank leading cells repeat the parameter configuration and R of the row above.)

(α1, α2, β1, β2, λ) | R | (n, m) | R̂ (SD) | MSE (Bias) | ALCI_Bp (CP) | ALCI_Bt (CP)
(0.8, 0.7, 0.5, 0.6, 0.5) | 0.5182 | (20, 20) | 0.5187 (0.0939) | 0.0088 (0.0006) | 0.4153 (0.93) | 0.4971 (0.94)
 | | (30, 20) | 0.5233 (0.0778) | 0.0061 (0.0052) | 0.3238 (0.94) | 0.3544 (0.94)
 | | (40, 40) | 0.5191 (0.0585) | 0.0034 (0.0010) | 0.2323 (0.94) | 0.2341 (0.94)
 | | (40, 60) | 0.5181 (0.0544) | 0.0030 (7 × 10^-5) | 0.2115 (0.95) | 0.2108 (0.95)
 | | (60, 60) | 0.5189 (0.0496) | 0.0025 (0.0007) | 0.1871 (0.94) | 0.1876 (0.94)
(0.8, 0.7, 1.2, 1.0, 0.6) | 0.5555 | (20, 20) | 0.5080 (0.1827) | 0.0356 (−0.0475) | 0.6756 (0.94) | 0.9333 (0.89)
 | | (30, 20) | 0.5263 (0.1411) | 0.0208 (−0.0292) | 0.6082 (0.94) | 0.8711 (0.92)
 | | (40, 40) | 0.5511 (0.0792) | 0.0063 (−0.0044) | 0.4348 (0.94) | 0.6333 (0.96)
 | | (40, 60) | 0.5546 (0.0665) | 0.0044 (−0.0009) | 0.3434 (0.94) | 0.4769 (0.96)
 | | (60, 60) | 0.5564 (0.0509) | 0.0026 (0.0008) | 0.2636 (0.95) | 0.3413 (0.95)
(0.9, 0.5, 0.6, 0.8, 0.7) | 0.5972 | (20, 20) | 0.5970 (0.0893) | 0.0080 (−0.0021) | 0.4574 (0.94) | 0.5967 (0.95)
 | | (30, 20) | 0.5988 (0.0748) | 0.0056 (0.0015) | 0.3406 (0.94) | 0.4051 (0.95)
 | | (40, 40) | 0.5976 (0.0575) | 0.0033 (0.0004) | 0.2283 (0.95) | 0.2408 (0.95)
 | | (40, 60) | 0.5986 (0.537) | 0.0029 (0.0013) | 0.2053 (0.93) | 0.2081 (0.93)
 | | (60, 60) | 0.5997 (0.0479) | 0.0023 (0.0025) | 0.1784 (0.94) | 0.1793 (0.94)
(0.6, 0.5, 0.9, 0.9, 0.7) | 0.5455 | (20, 20) | 0.5278 (0.1235) | 0.0155 (−0.0176) | 0.5598 (0.94) | 0.7735 (0.95)
 | | (30, 20) | 0.5378 (0.0882) | 0.0078 (−0.0076) | 0.4385 (0.94) | 0.5898 (0.96)
 | | (40, 40) | 0.5467 (0.0618) | 0.0038 (0.0012) | 0.2780 (0.94) | 0.3333 (0.94)
 | | (40, 60) | 0.5441 (0.0555) | 0.0031 (−0.0013) | 0.2554 (0.94) | 0.3065 (0.94)
 | | (60, 60) | 0.5458 (0.0480) | 0.0023 (0.0004) | 0.1870 (0.93) | 0.1940 (0.93)
(2.5, 1.3, 2.0, 1.1, 1.4) | 0.7736 | (20, 20) | 0.7233 (0.2232) | 0.05229 (−0.0503) | 0.7678 (0.95) | 1.1720 (0.88)
 | | (30, 20) | 0.7481 (0.1661) | 0.0282 (−0.0256) | 0.6759 (0.94) | 1.0583 (0.93)
 | | (40, 40) | 0.7594 (0.1162) | 0.0137 (−0.0142) | 0.4909 (0.95) | 0.7720 (0.95)
 | | (40, 60) | 0.7707 (0.0819) | 0.0067 (−0.0030) | 0.4094 (0.94) | 0.6441 (0.93)
 | | (60, 60) | 0.7757 (0.0530) | 0.0028 (0.0021) | 0.2846 (0.92) | 0.4186 (0.93)
(2.5, 0.5, 1.0, 2.6, 0.3) | 0.9084 | (20, 20) | 0.9074 (0.0668) | 0.0045 (−0.0010) | 0.3645 (0.90) | 0.5907 (0.91)
 | | (30, 20) | 0.9073 (0.0627) | 0.0039 (−0.0011) | 0.2843 (0.92) | 0.4423 (0.93)
 | | (40, 40) | 0.9106 (0.0321) | 0.0010 (0.0022) | 0.1314 (0.90) | 0.1574 (0.91)
 | | (40, 60) | 0.9108 (0.0305) | 0.0009 (0.0024) | 0.1111 (0.90) | 0.1204 (0.90)
 | | (60, 60) | 0.9092 (0.0259) | 0.0007 (0.0008) | 0.0969 (0.93) | 0.1029 (0.93)
(3.8, 1.2, 1.5, 1.2, 1.1) | 0.7787 | (20, 20) | 0.7567 (0.1584) | 0.0255 (−0.0219) | 0.6621 (0.94) | 1.0187 (0.92)
 | | (30, 20) | 0.7743 (0.1167) | 0.0136 (−0.0044) | 0.5471 (0.91) | 0.8355 (0.91)
 | | (40, 40) | 0.7594 (0.1163) | 0.0137 (−0.0142) | 0.4909 (0.95) | 0.7720 (0.95)
 | | (40, 60) | 0.7783 (0.0601) | 0.0036 (−0.0004) | 0.3099 (0.94) | 0.4329 (0.95)
 | | (60, 60) | 0.7791 (0.0412) | 0.0017 (0.0005) | 0.1740 (0.93) | 0.1969 (0.93)
(1.0, 1.0, 1.0, 1.0, 1.0) | 0.5000 | (20, 20) | 0.4738 (0.1447) | 0.0216 (−0.0262) | 0.5975 (0.94) | 0.8229 (0.92)
 | | (30, 20) | 0.4854 (0.1195) | 0.0145 (−0.0146) | 0.5135 (0.94) | 0.7150 (0.94)
 | | (40, 40) | 0.4940 (0.0841) | 0.0071 (−0.0060) | 0.4511 (0.95) | 0.6473 (0.96)
 | | (40, 60) | 0.4964 (0.0665) | 0.0044 (−0.0036) | 0.3099 (0.94) | 0.4093 (0.95)
 | | (60, 60) | 0.4993 (0.0464) | 0.0022 (−0.0007) | 0.2297 (0.94) | 0.2780 (0.96)
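For readers who want to reproduce the flavour of the bootstrap columns in Table 1, the following R sketch shows the generic machinery of a percentile (Bp) interval, together with a basic bootstrap interval, for an estimate of R = P(Y < X). It is a minimal sketch only: the plug-in estimator est_R, the Weibull-generated samples, the seed, and the number of resamples are illustrative assumptions, whereas in the article the resampled quantity is the ESW-based maximum likelihood estimate of R and the Bt column is a studentized (bootstrap-t) interval, which additionally requires a standard-error estimate within each resample.

```r
## Minimal sketch (not the authors' code): nonparametric bootstrap intervals
## for an estimate of the stress-strength reliability R = P(Y < X).
set.seed(1)
x <- rweibull(40, shape = 1.2, scale = 2.0)   # "strength" sample (illustrative)
y <- rweibull(60, shape = 1.2, scale = 1.5)   # "stress" sample (illustrative)

# Plug-in estimator of P(Y < X); stands in for the ESW-based MLE of R.
est_R <- function(x, y) mean(outer(x, y, ">"))

B <- 2000
Rboot <- replicate(B, est_R(sample(x, replace = TRUE),
                            sample(y, replace = TRUE)))

Rhat     <- est_R(x, y)
ci_bp    <- quantile(Rboot, c(0.025, 0.975))             # percentile (Bp) interval
ci_basic <- 2 * Rhat - quantile(Rboot, c(0.975, 0.025))  # basic bootstrap interval
c(Rhat = Rhat, ci_bp, ci_basic)
```

The average length of confidence interval (ALCI) and coverage probability (CP) reported in Table 1 are then obtained by repeating this interval construction over many simulated pairs of samples and recording the mean interval width and the proportion of intervals containing the true R.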
Table 2. MLEs, BE with 95% CI in parentheses, L, AIC, BIC, CAIC, KS with p-value in parentheses, AD, and CvM for the data I.

Model | α (95% CI) | β (95% CI) | λ (95% CI) | θ (95% CI) | L | AIC | BIC | CAIC | KS (p-value) | AD | CvM
ESW (MLE) | 0.3216 (0.2557, 0.3875) | 5.5424 (5.3049, 5.7799) | 4.7906 (4.5531, 5.0282) | - | 27.732 | −49.464 | −41.446 | −49.231 | 0.0734 (0.6113) | 0.5305 | 0.0836
ESW (BE) | 0.5021 (0.2604, 0.7447) | 4.2903 (2.8145, 5.8670) | 4.0846 (2.6074, 5.7682) | - | - | - | - | - | 0.0573 (0.8744) | 0.5825 | 0.0871
ESE | 3.3101 (2.3421, 4.2782) | - | 2.1712 (1.8087, 2.5336) | - | 6.596 | −9.191 | −3.846 | −9.076 | 0.1418 (0.0271) | 4.0970 | 0.6751
SW | - | 2.4894 (2.0912, 2.8857) | 2.8311 (2.0695, 3.5926) | - | 22.378 | −40.754 | −35.409 | −40.639 | 0.0766 (0.5559) | 1.3299 | 0.2003
W | - | 2.6012 (2.1896, 3.0128) | 5.3818 (3.8514, 6.9122) | - | 21.348 | −38.695 | −33.349 | −38.580 | 0.0832 (0.4487) | 1.5126 | 0.2307
GE | 3.7139 (2.6102, 4.8176) | - | 4.2007 (3.4729, 4.9284) | - | 5.039 | −6.078 | −0.732 | −5.962 | 0.1477 (0.0188) | 4.3696 | 0.7257
GR | 2.1189 (2.6102, 4.8176) | - | - | 1.2567 (3.4730, 4.9284) | 18.155 | −32.311 | −26.195 | −32.195 | 0.1178 (0.1026) | 2.1579 | 0.3371
GEP | 4.2690 (4.2439, 4.2947) | 3.7540 (3.0422, 4.4662) | 4.1128 (0, 0.0061) | - | 5.0158 | −4.0316 | 3.9869 | −3.7986 | 0.1544 (0.0121) | 4.3799 | 0.7276
HLP | 5.7570 (5.0236, 6.4923) | - | −4.9618 (−6.4447, −3.4788) | - | 17.877 | −31.754 | −26.409 | −31.639 | 0.0916 (0.3302) | 1.9932 | 0.3015
ENH | 32.5264 (0, 74.2789) | 2.3595 (1.7901, 2.9288) | 0.0608 (0, 0.1409) | - | 21.0381 | −36.076 | −28.058 | −35.843 | 0.2995 (9.3 × 10^-9) | 1.3831 | 0.2104
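The goodness-of-fit columns of Tables 2 and 3 (L, AIC, BIC, CAIC and the Kolmogorov–Smirnov statistic with its p-value) can be reproduced from any fitted model. The R sketch below assumes the ESW distribution function F(x) = [sin((π/2)(1 − e^(−λx^β)))]^α, which is one natural reading of the exponentiated sine-Weibull construction; this parameterization, the starting values, the placeholder data vector x, and the corrected-AIC formula used for CAIC are assumptions made for illustration, not the authors' code.

```r
## Hedged sketch: maximum likelihood fit of an assumed ESW parameterization
## and the fit statistics reported in Tables 2 and 3.
esw_cdf <- function(x, a, b, l) sin((pi / 2) * (1 - exp(-l * x^b)))^a
esw_pdf <- function(x, a, b, l) {
  u <- 1 - exp(-l * x^b)                       # Weibull baseline CDF (assumed)
  a * (pi / 2) * l * b * x^(b - 1) * exp(-l * x^b) *
    cos((pi / 2) * u) * sin((pi / 2) * u)^(a - 1)
}

negll <- function(par, x) -sum(log(esw_pdf(x, par[1], par[2], par[3])))

x   <- rweibull(100, shape = 1.5, scale = 2)   # placeholder data (illustrative)
fit <- optim(c(1, 1, 1), negll, x = x, method = "L-BFGS-B", lower = rep(1e-6, 3))

k <- 3; n <- length(x); ll <- -fit$value
c(logLik = ll,
  AIC    = -2 * ll + 2 * k,
  BIC    = -2 * ll + k * log(n),
  CAIC   = -2 * ll + 2 * k * n / (n - k - 1),  # corrected AIC, one common CAIC definition
  KS     = ks.test(x, function(q) esw_cdf(q, fit$par[1], fit$par[2], fit$par[3]))$statistic)
```

The Anderson–Darling (AD) and Cramér–von Mises (CvM) statistics can be computed analogously from the fitted CDF evaluated at the ordered data.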
Table 3. MLEs, BE with 95% CI in parentheses, L, AIC, BIC, CAIC, KS with p-value in parentheses, AD, and CvM for the data II.

Model | α (95% CI) | β (95% CI) | λ (95% CI) | θ (95% CI) | γ (95% CI) | L | AIC | BIC | CAIC | KS (p-value) | AD | CvM
ESW (MLE) | 2.7619 (0.3971, 5.1267) | 0.6207 (0.3765, 0.8649) | 0.2668 (0.0132, 0.5204) | - | - | −410.476 | 826.95 | 835.508 | 827.146 | 0.0435 (0.9688) | 0.2594 | 0.0394
ESW (BE) | 2.2475 (1.2466, 3.2377) | 0.6275 (0.5272, 0.8899) | 0.2614 (0.0752, 0.3236) | - | - | - | - | - | - | 0.0507 (0.8967) | 0.2952 | 0.0468
ESE | 1.1164 (0.8549, 1.3779) | - | 0.0639 (0.0504, 0.0774) | - | - | −413.915 | 831.830 | 837.535 | 831.926 | 0.0786 (0.4070) | 0.8107 | 0.1365
SW | - | 0.9920 (0.8644, 1.1196) | 0.0611 (0.0377, 0.0844) | - | - | −414.325 | 832.650 | 838.354 | 832.746 | 0.0699 (0.5581) | 0.8312 | 0.1400
W | - | 1.0476 (0.9152, 1.1800) | 0.0939 (0.0565, 0.1314) | - | - | −414.087 | 832.174 | 837.878 | 832.270 | 0.0699 (0.5123) | 0.7815 | 0.1308
GR | 0.0476 (0.9255, 1.5082) | - | - | 0.3641 (0.0945, 0.1477) | - | −429.225 | 862.450 | 808.154 | 862.546 | 0.1551 (0.0042) | 2.7699 | 0.4729
GIW | 0.1988 (0, 0.6116) | 0.7521 (0.6689, 0.8352) | - | - | 8.1915 (0, 20.8093) | −444.000 | 894.002 | 902.558 | 894.195 | 0.1408 (0.00125) | 4.513 | 0.7414
HLP | 0.0555 (0, 0.1237) | - | 4.0202 (0, 9.4418) | - | - | −413.171 | 830.342 | 836.046 | 830.438 | 0.0954 (0.1861) | 0.3620 | 0.0606
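The Bayes estimates (BE rows) of Tables 2 and 3 and the trace and posterior-density plots of Figures 12, 13, 18 and 19 are based on Markov chain Monte Carlo sampling [34,35,36], with HPD intervals obtained as in [37] or via the HDInterval package [33]. The sketch below is a generic random-walk Metropolis sampler that reuses esw_pdf() and the placeholder data x from the previous sketch; the vague gamma priors, proposal scales, chain length and burn-in are illustrative assumptions, and the article combines the MH algorithm with Gibbs sampling rather than this joint update.

```r
## Hedged sketch: random-walk Metropolis for the three ESW parameters,
## giving posterior means (Bayes estimates under squared error loss)
## and 95% HPD intervals. Priors and tuning constants are assumptions.
library(HDInterval)   # hdi() for highest posterior density intervals [33]

logpost <- function(par, x) {
  if (any(par <= 0)) return(-Inf)
  sum(log(esw_pdf(x, par[1], par[2], par[3]))) +
    sum(dgamma(par, shape = 0.1, rate = 0.1, log = TRUE))   # vague gamma priors (assumed)
}

n_iter <- 20000
draws  <- matrix(NA_real_, n_iter, 3,
                 dimnames = list(NULL, c("alpha", "beta", "lambda")))
cur <- c(1, 1, 1); cur_lp <- logpost(cur, x)
for (t in seq_len(n_iter)) {
  prop    <- cur + rnorm(3, sd = c(0.10, 0.05, 0.05))       # random-walk proposal (tuning assumed)
  prop_lp <- logpost(prop, x)
  if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
  draws[t, ] <- cur
}
post <- draws[-(1:5000), ]               # discard burn-in
colMeans(post)                           # posterior means
apply(post, 2, hdi, credMass = 0.95)     # 95% HPD intervals
```

Trace plots of the columns of post correspond to the iteration plots of Figures 12 and 18, and kernel density estimates of the same columns correspond to the posterior densities shown in Figures 13 and 19.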
Table 4. MLEs, L_R, R, KS with p-value in parentheses, and 95% CI with its confidence length for the stress-strength data sets.

α1 | α2 | β1 | β2 | λ | L_R | KS_1 (p-value) | KS_2 (p-value) | R | CI_Bp (length) | CI_Bt (length)
5.9420 | 3.7152 | 2.3651 | 2.7151 | 0.0877 | 106.265 | 0.0946 (0.5921) | 0.0485 (0.9944) | 0.7837 | (0.7073, 0.8524), length 0.1451 | (0.7073, 0.8602), length 0.1529
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
