Article

Comparisons of Parallel Systems with Components Having Proportional Reversed Hazard Rates and Starting Devices

by
Narayanaswamy Balakrishnan
1,
Ghobad Barmalzan
2,* and
Sajad Kosari
3
1
Department of Mathematics and Statistics, McMaster University, Hamilton, ON L8S 4L8, Canada
2
Department of Statistics, University of Zabol, Zabol 98615-538, Sistan and Baluchestan, Iran
3
Department of Mathematics, University of Zabol, Zabol 98615-538, Sistan and Baluchestan, Iran
*
Author to whom correspondence should be addressed.
Mathematics 2021, 9(8), 856; https://doi.org/10.3390/math9080856
Submission received: 4 February 2021 / Revised: 24 March 2021 / Accepted: 8 April 2021 / Published: 14 April 2021
(This article belongs to the Special Issue Reliability and Statistical Learning and Its Applications)

Abstract: In this paper, we consider stochastic comparisons of parallel systems with proportional reversed hazard rate (PRHR) distributed components equipped with starting devices. By considering parallel systems with two components that have PRHR distributions and starting devices, we prove the hazard rate and reversed hazard rate orders. These results are then generalized to such parallel systems with n components in terms of the usual stochastic order. The established results are illustrated with some examples.

1. Introduction

Comparison of important characteristics of lifetimes of technical systems is of interest in many problems. Let $X_1,\ldots,X_n$ be non-negative independent random variables representing the lifetimes of components of a system. Let $I_{p_1},\ldots,I_{p_n}$ be independent Bernoulli random variables with $I_{p_i}=1$ if the $i$-th component survives random shocks and $I_{p_i}=0$ if the $i$-th component fails from the shocks, and $P(I_{p_i}=1)=p_i$, for $i=1,\ldots,n$. Further, let them also be independent of the $X_i$'s. For a given time period, we can then use $I_{p_1}X_1,\ldots,I_{p_n}X_n$ to denote the lifetimes of components that are subject to random shocks. Of special interest are $Y_{n:n}=\max(I_{p_1}X_1,\ldots,I_{p_n}X_n)$ and $Y_{1:n}=\min(I_{p_1}X_1,\ldots,I_{p_n}X_n)$, corresponding to the lifetimes of parallel and series systems, respectively. Throughout this work, we use the term “heterogeneity” to mean that the components have different lifetime distributions; a similar assumption is also made on the survival probabilities. It is then of natural interest to evaluate the influence of heterogeneity among the components and the random shocks on the lifetimes of parallel and series systems, and this reliability problem forms the main basis for the present work.
We can present a different motivation for this problem as follows. Consider a finite system with each of its components equipped with a starter whose performance is modelled by a Bernoulli random variable, and with all component lifetimes being independent. As a starter may fail to initiate its component, the total number of components in operation would thus be random. Such situations arise naturally in a number of applications. Some possible examples are as follows: start-ups of power plants with gas turbines; the duration of an online conference, being the maximum online time of those who successfully register for it; and the maximum loss of an insured individual holding a policy covering multiple risks, being the maximum of the invoked losses. Another interesting scenario, discussed by [1,2] in auction theory, is when an auctioneer attracts some predetermined potential bidders by advertising a valuable object; in this case, the largest bid of the participants defines the price of the object for sale. One may additionally refer to [3,4,5,6] for the role of random extremes in financial economics, reliability theory, actuarial science, hydrology, and so on.
In an actuarial set-up, the claim sizes may be represented by the variables $X_i$, with the variables $I_{p_i}$ representing their occurrences. In this case, $Y_{n:n}=\max(I_{p_1}X_1,\ldots,I_{p_n}X_n)$ and $Y_{1:n}=\min(I_{p_1}X_1,\ldots,I_{p_n}X_n)$ correspond to the largest and smallest claim amounts in a portfolio of risks, respectively.
Considerable attention has been paid in the actuarial literature to different stochastic comparisons of numbers of claims and aggregate claim amounts. In particular, [7] consider a general scale model and discuss orderings of smallest and largest claim amounts, while [8] focus on the comparison of smallest and largest claim amounts from two sets of heterogeneous portfolios. These authors have specifically discussed the ordering results in the presence of heterogeneity among the sample sizes and the probabilities of claims and also in the presence of dependence between claim sizes and probabilities of claims.
The flexible family of distributions offered by the proportional reversed hazard rate (PRHR) model has found key applications in lifetime data analysis. For a system consisting of n components, let $X_i$ and $\tilde{r}_i$ (for $i=1,\ldots,n$) denote the lifetime and the reversed hazard rate of the $i$-th component. Then, when
\[ \tilde{r}_i(x)=\lambda_i\,\tilde{r}(x), \quad \text{for } i=1,\ldots,n, \]
the variables $X_i$ are said to follow the PRHR model, where $\tilde{r}(x)$ is referred to as the baseline reversed hazard rate function and the $\lambda_i$'s (all positive) are the proportionality constants. It is then easy to see that $F_i(x)$, the distribution function of $X_i$, is given by $(F(x))^{\lambda_i}$, $i=1,\ldots,n$, where $F(x)$ is the baseline distribution function corresponding to $\tilde{r}(x)$. The PRHR family of distributions includes many commonly used lifetime distributions as special cases, such as the generalized exponential and exponentiated Weibull distributions. In addition, when the proportionality constants $\lambda_i$ are integers, the $X_i$'s are in fact the lifetimes of parallel systems consisting of $\lambda_i$ components with independent and identically distributed lifetimes having distribution function $F(x)$. As parallel systems with more components are less prone to failure, the PRHR model is also referred to as the resilience model in the reliability literature; one may refer to [9] for relevant details.
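The integer-$\lambda$ interpretation above can be checked with a minimal Python sketch (our own illustration, not part of the original text; the function names and the standard exponential baseline are our assumptions): raising the baseline distribution function to an integer power $\lambda$ reproduces the distribution function of the maximum of $\lambda$ i.i.d. component lifetimes.

```python
import math

def prhr_cdf(x, lam, base_cdf):
    """Distribution function of a PRHR(lam) variable: F_i(x) = (F(x))**lam."""
    return base_cdf(x) ** lam

def base_exp(x):
    """Baseline standard exponential distribution function."""
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

# For integer lam, PRHR(lam) is the lifetime CDF of a parallel system of
# lam i.i.d. components with baseline CDF F, since P(max <= x) = F(x)**lam.
lam, x = 3, 1.2
assert abs(prhr_cdf(x, lam, base_exp) - base_exp(x) ** 3) < 1e-12
```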
Suppose $Y_i=X_iI_{p_i}$, $i=1,2$. Then, the survival function of the series system $V_{1:2}=\min\{X_1I_{p_1},X_2I_{p_2}\}$ is given by
\[ \bar{F}_{V_{1:2}}(x)=\Big(\prod_{i=1}^{2}p_i\Big)\bar{F}_{X_{1:2}}(x), \quad x\ge 0. \]
Similarly, the survival function of $W_{1:2}=\min\{X_1^*I_{p_1^*},X_2^*I_{p_2^*}\}$ is given by
\[ \bar{F}_{W_{1:2}}(x)=\Big(\prod_{i=1}^{2}p_i^*\Big)\bar{F}_{X_{1:2}^*}(x), \quad x\ge 0. \]
Then, the stochastic comparison between $V_{1:2}$ and $W_{1:2}$ is equivalent to the comparison between $X_{1:2}$ and $X_{1:2}^*$. It should be mentioned that the comparison between $X_{1:2}$ and $X_{1:2}^*$ has already been investigated by many authors. For this reason, we have not considered this problem in the present work.
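The series-system factorization can also be verified numerically by conditioning on the starter outcomes (our own sketch; the exponential survival functions and parameter values are assumed for illustration): the minimum exceeds $x\ge 0$ only when every starter works, which yields the product form above.

```python
import itertools
import math

def surv_series_with_starters(x, ps, surv_fns):
    """P(min_i(I_i X_i) > x), x >= 0, by enumerating the starter outcomes."""
    total = 0.0
    for outcome in itertools.product([0, 1], repeat=len(ps)):
        prob = 1.0
        for p, ind in zip(ps, outcome):
            prob *= p if ind else (1.0 - p)
        # if any starter fails, the minimum is 0 and cannot exceed x >= 0
        if all(outcome):
            joint_surv = 1.0
            for s in surv_fns:
                joint_surv *= s(x)
            total += prob * joint_surv
    return total

# check the identity  survival of V_{1:2} = p1 * p2 * survival of X_{1:2}
s1 = lambda x: math.exp(-x)         # exponential(1) survival function
s2 = lambda x: math.exp(-2.0 * x)   # exponential(2) survival function
p1, p2, x = 0.7, 0.9, 0.5
lhs = surv_series_with_starters(x, [p1, p2], [s1, s2])
rhs = p1 * p2 * s1(x) * s2(x)
assert abs(lhs - rhs) < 1e-12
```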
In this paper, we consider only stochastic comparisons of parallel systems with proportional reversed hazard rate (PRHR) distributed components equipped with starting devices. We specifically establish the hazard rate, reversed hazard rate and usual stochastic orders of parallel systems with PRHR distributed components equipped with starting devices.
The rest of this paper is organized as follows. In Section 2, we present some basic definitions and notation pertaining to stochastic orders and majorization orders that are used in the present work. Section 3 discusses stochastic comparisons of parallel systems for different probabilities of starters in terms of hazard rate order. In Section 4, stochastic comparisons of parallel systems are established for different probabilities of starters in terms of reversed hazard rate order. Section 5 discusses stochastic comparisons of parallel systems for different probabilities of the starters in terms of usual stochastic order. Finally, some concluding remarks are made in Section 6.

2. Preliminaries

In this section, we present some basic definitions and lemmas that will be useful for all subsequent developments. For convenience, we use the notation $a\overset{\mathrm{sgn}}{=}b$ to denote that both sides of an equality have the same sign.
Definition 1.
Suppose X and Y are two non-negative continuous random variables with distribution functions $F_X$ and $F_Y$, survival functions $\bar{F}_X$ and $\bar{F}_Y$, hazard rate functions $r_X$ and $r_Y$, and reversed hazard rate functions $\tilde{r}_X$ and $\tilde{r}_Y$. We assume that all involved expectations exist. Then:
(i) 
X is said to be larger than Y in the usual stochastic order (denoted by $X\ge_{st}Y$) if $\bar{F}_X(t)\ge\bar{F}_Y(t)$ for all $t\in\mathbb{R}_+$. This is equivalent to saying that $E(\phi(X))\ge E(\phi(Y))$ for all increasing functions $\phi:\mathbb{R}_+\to\mathbb{R}$;
(ii) 
X is said to be larger than Y in the hazard rate order (denoted by $X\ge_{hr}Y$) if and only if $\bar{F}_X(t)/\bar{F}_Y(t)$ increases in $t\in\mathbb{R}_+$. This is equivalent to saying that $r_Y(t)\ge r_X(t)$ for all $t\in\mathbb{R}_+$;
(iii) 
X is said to be larger than Y in the reversed hazard rate order (denoted by $X\ge_{rh}Y$) if and only if $F_X(t)/F_Y(t)$ increases in $t\in\mathbb{R}_+$. This is equivalent to saying that $\tilde{r}_X(t)\ge\tilde{r}_Y(t)$ for all $t\in\mathbb{R}_+$.
It is known that both the hazard rate and reversed hazard rate orders imply the usual stochastic order. The books by [10,11] provide elaborate details on various stochastic orders and their applications to a wide array of problems.
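Definition 1 can be operationalized on a grid (our own illustration; the exponential example and tolerance are assumptions): the usual stochastic order is a pointwise comparison of survival functions, while the hazard rate order asks for monotonicity of their ratio.

```python
import math

def check_orders(surv_x, surv_y, grid):
    """Empirically check X >=_st Y and X >=_hr Y on an increasing grid.
    st: survival of X dominates survival of Y pointwise;
    hr: the ratio surv_x / surv_y is non-decreasing along the grid."""
    st = all(surv_x(t) >= surv_y(t) - 1e-12 for t in grid)
    ratios = [surv_x(t) / surv_y(t) for t in grid]
    hr = all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
    return st, hr

# Exponential with rate 1 versus rate 2: the rate-1 variable dominates in the
# hazard rate order (ratio e^t is increasing), hence also in the usual order.
sx = lambda t: math.exp(-t)
sy = lambda t: math.exp(-2 * t)
grid = [0.1 * k for k in range(1, 50)]
st, hr = check_orders(sx, sy, grid)
assert st and hr
```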
Definition 2.
Consider two vectors $\mathbf{a}=(a_1,\ldots,a_n)$ and $\mathbf{b}=(b_1,\ldots,b_n)$ with corresponding increasing arrangements $a_{(1)}\le\cdots\le a_{(n)}$ and $b_{(1)}\le\cdots\le b_{(n)}$, respectively. Then:
(i) 
$\mathbf{a}$ is said to majorize $\mathbf{b}$, denoted by $\mathbf{a}\succeq^{m}\mathbf{b}$, if $\sum_{j=1}^{i}a_{(j)}\le\sum_{j=1}^{i}b_{(j)}$ for $i=1,\ldots,n-1$, and $\sum_{j=1}^{n}a_{(j)}=\sum_{j=1}^{n}b_{(j)}$;
(ii) 
$\mathbf{a}$ is said to weakly submajorize $\mathbf{b}$, denoted by $\mathbf{a}\succeq_{w}\mathbf{b}$, if $\sum_{j=i}^{n}a_{(j)}\ge\sum_{j=i}^{n}b_{(j)}$ for $i=1,\ldots,n$.
The concept of majorization provides a way of comparing two vectors of the same dimension in terms of the dispersion of their components: the order $\mathbf{u}\succeq^{m}\mathbf{v}$ means that the $u_i$'s are more dispersed than the $v_i$'s for a fixed sum. For example, we always have $\mathbf{u}\succeq^{m}\bar{\mathbf{u}}$, where $\bar{\mathbf{u}}=(\bar{u},\ldots,\bar{u})$ with $\bar{u}=\frac{1}{n}\sum_{i=1}^{n}u_i$. It is evident that the majorization order implies the weak submajorization order.
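The partial-sum conditions of Definition 2 translate directly into a small checker (our own sketch; the example vector is hypothetical), which confirms that any vector majorizes its own mean vector.

```python
def majorizes(a, b, tol=1e-12):
    """Check a >=_m b: with components sorted increasingly, the partial sums
    of a's smallest entries never exceed those of b, and the totals agree."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    sa, sb = sorted(a), sorted(b)
    csa = csb = 0.0
    for i in range(len(a) - 1):
        csa += sa[i]
        csb += sb[i]
        if csa > csb + tol:
            return False
    return abs(sum(a) - sum(b)) < tol

# any vector majorizes its own mean vector, but not conversely
u = [0.1, 0.5, 1.4]
u_bar = [sum(u) / len(u)] * len(u)
assert majorizes(u, u_bar)
assert not majorizes(u_bar, u)
```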
Definition 3.
A real-valued function ϕ, defined on a set $\mathbb{A}\subseteq\mathbb{R}^n$, is said to be Schur-convex on $\mathbb{A}$ if
\[ \mathbf{u}\succeq^{m}\mathbf{v}\;\Longrightarrow\;\phi(\mathbf{u})\ge\phi(\mathbf{v})\quad\text{for any }\mathbf{u},\mathbf{v}\in\mathbb{A}. \]
Further, ϕ is said to be a Schur-concave function on $\mathbb{A}$ if $-\phi$ is Schur-convex on $\mathbb{A}$.
Lemma 1.
([12], p. 84) Suppose $J\subseteq\mathbb{R}$ is an open interval and $\phi:J^n\to\mathbb{R}$ is continuously differentiable. Then, necessary and sufficient conditions for ϕ to be Schur-convex (Schur-concave) on $J^n$ are
(i) 
ϕ is symmetric on J n ;
(ii) 
for all $i\ne j$ and all $\mathbf{z}\in J^n$,
\[ (z_i-z_j)\left(\frac{\partial\phi(\mathbf{z})}{\partial z_i}-\frac{\partial\phi(\mathbf{z})}{\partial z_j}\right)\ge 0\;(\le 0), \]
where $\partial\phi(\mathbf{z})/\partial z_i$ denotes the partial derivative of ϕ with respect to its i-th argument.
Lemma 2.
([12], p. 87) Consider a real-valued function ϕ defined on a set $\mathbb{A}\subseteq\mathbb{R}^n$. Then, $\mathbf{u}\succeq_{w}\mathbf{v}$ implies $\phi(\mathbf{u})\ge\phi(\mathbf{v})$ if and only if ϕ is increasing and Schur-convex on $\mathbb{A}$.

3. Hazard Rate Order

In this section, we discuss stochastic comparisons of parallel systems for different probabilities of starters in terms of hazard rate order.
Theorem 1.
Suppose $X_1$ and $X_2$ are independent non-negative random variables with $X_i\sim PRHR(\lambda_i)$. Further, suppose $I_{p_1},I_{p_2},I_{p_1^*},I_{p_2^*}$ are independent Bernoulli random variables, independent of the $X_i$'s, with $E(I_{p_i})=p_i$ and $E(I_{p_i^*})=p_i^*$, $i=1,2$. Let $V_{2:2}=\max\{X_1I_{p_1},X_2I_{p_2}\}$ and $W_{2:2}=\max\{X_1I_{p_1^*},X_2I_{p_2^*}\}$. Then, the following statements hold true:
(i) 
If $p_1=p_1^*$ ($p_2=p_2^*$) and $\lambda_1\le\lambda_2$ ($\lambda_2\le\lambda_1$), then
\[ p_2\ge p_2^*\;\big(p_1\ge p_1^*\big)\;\Longleftrightarrow\;V_{2:2}\ge_{hr}W_{2:2}; \]
(ii) 
If $p_1\ge p_1^*$, then
\[ p_1\ge p_2,\;p_1^*\ge p_2^*\;\text{ and }\;p_1-p_2\le p_1^*-p_2^*\;\Longrightarrow\;V_{2:2}\ge_{hr}W_{2:2}. \]
Proof. 
(i)
The survival functions of $V_{2:2}$ and $W_{2:2}$ are given by
\[ \bar{F}_{V_{2:2}}(x)=p_1\big(1-F^{\lambda_1}(x)\big)+p_2\big(1-F^{\lambda_2}(x)\big)-p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big) \]
and
\[ \bar{F}_{W_{2:2}}(x)=p_1\big(1-F^{\lambda_1}(x)\big)+p_2^*\big(1-F^{\lambda_2}(x)\big)-p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big). \]
For the necessity part, note that $V_{2:2}\ge_{hr}W_{2:2}$ implies $V_{2:2}\ge_{st}W_{2:2}$, and so $\bar{F}_{V_{2:2}}\ge\bar{F}_{W_{2:2}}$. We thus have
\[ p_1\big(1-F^{\lambda_1}(x)\big)+p_2\big(1-F^{\lambda_2}(x)\big)-p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)\ge p_1\big(1-F^{\lambda_1}(x)\big)+p_2^*\big(1-F^{\lambda_2}(x)\big)-p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big). \]
Therefore, we get
\[ p_2\big(1-F^{\lambda_2}(x)\big)\Big(1-p_1\big(1-F^{\lambda_1}(x)\big)\Big)\ge p_2^*\big(1-F^{\lambda_2}(x)\big)\Big(1-p_1\big(1-F^{\lambda_1}(x)\big)\Big), \]
which implies that $p_2\ge p_2^*$. Now, for the sufficiency part, let us consider
\[ \phi(x)=\frac{p_1\big(1-F^{\lambda_1}(x)\big)+p_2\big(1-F^{\lambda_2}(x)\big)-p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}{p_1\big(1-F^{\lambda_1}(x)\big)+p_2^*\big(1-F^{\lambda_2}(x)\big)-p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}, \]
and it is then enough to show that ϕ is increasing in x. Upon differentiating ϕ with respect to x and simplifying the resulting expression, we get
\[ \frac{\partial\phi(x)}{\partial x}\overset{\mathrm{sgn}}{=}p_1(p_2-p_2^*)\Big\{p_1\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)^2+\lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)-\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)\Big\}. \]
Since $p_2\ge p_2^*$ and the first term within the braces is non-negative, for proving the increasing property of ϕ, it is enough to show that
\[ D(x)=\lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)-\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big) \]
is non-negative. For this purpose, we find
\[ D(x)=\lambda_1F^{\lambda_1}(x)-\lambda_2F^{\lambda_2}(x)+(\lambda_2-\lambda_1)F^{\lambda_1+\lambda_2}(x)\overset{\mathrm{sgn}}{=}\lambda_1-\lambda_2F^{\lambda_2-\lambda_1}(x)+(\lambda_2-\lambda_1)F^{\lambda_2}(x). \]
Let us now consider $E(x)=\lambda_1-\lambda_2F^{\lambda_2-\lambda_1}(x)+(\lambda_2-\lambda_1)F^{\lambda_2}(x)$. Then, we have $\lim_{x\to\infty}E(x)=0$; since $\lambda_1\le\lambda_2$, we find
\[ E'(x)=-\lambda_2(\lambda_2-\lambda_1)F^{\lambda_2-\lambda_1-1}(x)f(x)+\lambda_2(\lambda_2-\lambda_1)F^{\lambda_2-1}(x)f(x)\overset{\mathrm{sgn}}{=}(\lambda_2-\lambda_1)\big(F^{\lambda_1}(x)-1\big)\overset{\mathrm{sgn}}{=}F^{\lambda_1}(x)-1\le 0, \]
and so E(x) is decreasing. Consequently, $E(x)\ge 0$ for $x\ge 0$, and so $D(x)\ge 0$. Thus, Part (i) of the theorem is proved.
(ii)
The survival functions of $V_{2:2}$ and $W_{2:2}$, for $x\ge 0$, are
\[ \bar{F}_{V_{2:2}}(x)=p_1\big(1-F^{\lambda_1}(x)\big)+p_2\big(1-F^{\lambda_2}(x)\big)-p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big) \]
and
\[ \bar{F}_{W_{2:2}}(x)=p_1^*\big(1-F^{\lambda_1}(x)\big)+p_2^*\big(1-F^{\lambda_2}(x)\big)-p_1^*p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big), \]
respectively. Then, it is enough to show that
\[ \psi(x)=\frac{\bar{F}_{V_{2:2}}(x)}{\bar{F}_{W_{2:2}}(x)} \]
is increasing in x. Upon differentiating ψ with respect to x and simplifying the resulting expression, we can see that
\[ \frac{\partial\psi(x)}{\partial x}\overset{\mathrm{sgn}}{=}(p_1^*p_2-p_1p_2^*)\Big\{\lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)-\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)\Big\}+p_2p_2^*\lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)^2(p_1-p_1^*)+p_1p_1^*\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)^2(p_2-p_2^*). \]
Based on Part (i), for $\lambda_1\le\lambda_2$, we have
\[ \lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)-\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)\ge 0. \]
From $p_1\ge p_2$ and $p_1^*\ge p_2^*$, there exist non-negative real numbers d and c such that $p_1=p_2+d$ and $p_1^*=p_2^*+c$. Now, from $p_1-p_2\le p_1^*-p_2^*$, we have $d\le c$, and also from $p_1\ge p_1^*$, we get $p_2+d\ge p_2^*+c$ and then $p_2-p_2^*\ge c-d\ge 0$. So, $p_2\ge p_2^*$ and clearly $cp_2\ge dp_2^*$. Furthermore, we have
\[ p_1^*p_2-p_1p_2^*=(p_2^*+c)p_2-(p_2+d)p_2^*=cp_2-dp_2^*\ge 0. \]
Therefore, all terms in the last expression for $\partial\psi(x)/\partial x$ are non-negative, and the desired result is obtained.
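As a numerical illustration of Theorem 1(i) (our own sketch, with a Beta(1,1) baseline and hypothetical parameter values satisfying $p_1=p_1^*$, $\lambda_1\le\lambda_2$ and $p_2\ge p_2^*$), the ratio of the two survival functions evaluated on a grid is non-decreasing, as the hazard rate order requires.

```python
def surv_parallel(x, p1, p2, lam1, lam2, F):
    """Survival function of max(I_{p1} X1, I_{p2} X2) with X_i ~ PRHR(lam_i)."""
    g1 = 1.0 - F(x) ** lam1
    g2 = 1.0 - F(x) ** lam2
    return p1 * g1 + p2 * g2 - p1 * p2 * g1 * g2

# Theorem 1(i) setting: p1 = p1*, lam1 <= lam2 and p2 >= p2*; the ratio of
# survival functions should then be non-decreasing (hazard rate order).
F = lambda x: x                       # Beta(1,1) baseline on (0, 1)
p1, p2, p2_star = 0.6, 0.8, 0.3
lam1, lam2 = 2.0, 5.0
grid = [k / 100 for k in range(1, 100)]
ratios = [surv_parallel(x, p1, p2, lam1, lam2, F) /
          surv_parallel(x, p1, p2_star, lam1, lam2, F) for x in grid]
assert all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
```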
We now present an example to show that Theorem 1 (under $p_1=p_1^*$) may not hold when the condition $\lambda_1\le\lambda_2$ is not satisfied.
Example 1.
Let us consider Beta(1,1) as the baseline distribution, with $F(x)=x$ for $x\in(0,1)$. Set $(p_1,p_2)=(0.3,0.8)$, $(p_1^*,p_2^*)=(0.3,0.2)$ and $(\lambda_1,\lambda_2)=(20,10)$. Then, we find
\[ \phi(x)=\frac{0.3(1-x^{20})+0.8(1-x^{10})-0.24(1-x^{20})(1-x^{10})}{0.3(1-x^{20})+0.2(1-x^{10})-0.06(1-x^{20})(1-x^{10})} \]
to be non-monotone near $x=0.95$ (as seen in Figure 1), and this negates both $V_{2:2}\ge_{hr}W_{2:2}$ and $V_{2:2}\le_{hr}W_{2:2}$.
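The non-monotonicity claimed in Example 1 can be confirmed numerically (our own sketch): evaluating φ on a fine grid, its increments change sign, so the ratio is neither increasing nor decreasing.

```python
# Example 1 check: Beta(1,1) baseline F(x) = x, (p1, p2) = (0.3, 0.8),
# (p1*, p2*) = (0.3, 0.2), (lam1, lam2) = (20, 10); here lam1 > lam2, so the
# condition of Theorem 1(i) fails and the survival ratio is not monotone.
def phi(x):
    g1, g2 = 1.0 - x ** 20, 1.0 - x ** 10
    num = 0.3 * g1 + 0.8 * g2 - 0.24 * g1 * g2
    den = 0.3 * g1 + 0.2 * g2 - 0.06 * g1 * g2
    return num / den

grid = [k / 1000 for k in range(1, 1000)]
diffs = [phi(b) - phi(a) for a, b in zip(grid, grid[1:])]
# both strictly positive and strictly negative increments occur
assert any(d > 1e-9 for d in diffs) and any(d < -1e-9 for d in diffs)
```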
Remark 1.
Under the condition $\lambda_1\le\lambda_2$, the results of Theorem 1 also hold under the following conditions:
1. 
$p_1\ge p_2\ge p_1^*\ge p_2^*$ and $p_1-p_2\le p_1^*-p_2^*$;
2. 
$p_1\ge p_1^*\ge p_2\ge p_2^*$ and $p_1-p_2\le p_1^*-p_2^*$.
We now present an example to show that Part (ii) of Theorem 1 may not hold when $p_1-p_2>p_1^*-p_2^*$.
Example 2.
Let us consider the standard exponential as the baseline distribution, with $F(x)=1-e^{-x}$ for $x>0$. Set $(p_1,p_2)=(0.8,0.4)$, $(p_1^*,p_2^*)=(0.5,0.3)$ and $(\lambda_1,\lambda_2)=(2,10)$. Clearly, $p_1\ge p_1^*\ge p_2\ge p_2^*$ and $p_1-p_2>p_1^*-p_2^*$. We then find
\[ \psi(x)=\frac{0.8\big(1-(1-e^{-x})^2\big)+0.4\big(1-(1-e^{-x})^{10}\big)-0.32\big(1-(1-e^{-x})^2\big)\big(1-(1-e^{-x})^{10}\big)}{0.5\big(1-(1-e^{-x})^2\big)+0.3\big(1-(1-e^{-x})^{10}\big)-0.15\big(1-(1-e^{-x})^2\big)\big(1-(1-e^{-x})^{10}\big)} \]
to be non-monotone on $0\le x\le 1.5$ (as seen in Figure 2), and this negates both $V_{2:2}\ge_{hr}W_{2:2}$ and $V_{2:2}\le_{hr}W_{2:2}$.

4. Reversed Hazard Rate Order

In this section, we discuss stochastic comparisons of parallel systems for different probabilities of starters in terms of reversed hazard rate order. For this purpose, we first prove the following lemma.
Lemma 3.
Suppose $g(x;a,b)$ is a differentiable function in x and
\[ \phi(x)=\frac{g(x;p,d)}{g(x;p^*,c)}. \]
If we define $L(x;a,t)$ as
\[ L(x;a,t)=\frac{g'(x;a,t)}{g(x;a,t)}, \]
where $g'(x;a,t)$ denotes the derivative of $g(x;a,t)$ with respect to x, then
\[ \frac{\partial\phi(x)}{\partial x}=\phi(x)\big(L(x;p,d)-L(x;p^*,c)\big). \]
Proof. 
We can observe that
\[ \phi(x)\big(L(x;p,d)-L(x;p^*,c)\big)=\frac{g(x;p,d)}{g(x;p^*,c)}\left(\frac{g'(x;p,d)}{g(x;p,d)}-\frac{g'(x;p^*,c)}{g(x;p^*,c)}\right)=\frac{g'(x;p,d)\,g(x;p^*,c)-g(x;p,d)\,g'(x;p^*,c)}{g(x;p^*,c)^2}=\frac{\partial\phi(x)}{\partial x}. \]
Theorem 2.
Suppose $X_1$ and $X_2$ are independent non-negative random variables with $X_i\sim PRHR(\lambda_i)$. Further, suppose $I_{p_1},I_{p_2},I_{p_1^*},I_{p_2^*}$ are independent Bernoulli random variables, independent of the $X_i$'s, with $E(I_{p_i})=p_i$ and $E(I_{p_i^*})=p_i^*$, $i=1,2$. Let $V_{2:2}=\max\{X_1I_{p_1},X_2I_{p_2}\}$ and $W_{2:2}=\max\{X_1I_{p_1^*},X_2I_{p_2^*}\}$. Then, the following statements hold true:
(i) 
If $p_1=p_1^*$ ($p_2=p_2^*$) and $\lambda_1\le\lambda_2$ ($\lambda_2\le\lambda_1$), then
\[ p_2\ge p_2^*\;\big(p_1\ge p_1^*\big)\;\Longleftrightarrow\;V_{2:2}\ge_{rh}W_{2:2}; \]
(ii) 
If $p_2\ge p_2^*$, then
\[ p_1\ge p_2,\;p_1^*\ge p_2^*\;\text{ and }\;p_1-p_2\ge p_1^*-p_2^*\;\Longrightarrow\;V_{2:2}\ge_{rh}W_{2:2}. \]
Proof. 
(i)
The distribution functions of $V_{2:2}$ and $W_{2:2}$ are given by
\[ F_{V_{2:2}}(x)=1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2\big(1-F^{\lambda_2}(x)\big)+p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big), \]
\[ F_{W_{2:2}}(x)=1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2^*\big(1-F^{\lambda_2}(x)\big)+p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big). \]
For the necessity part, from $V_{2:2}\ge_{rh}W_{2:2}$, we have $F_{V_{2:2}}(x)\le F_{W_{2:2}}(x)$, and so
\[ 1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2\big(1-F^{\lambda_2}(x)\big)+p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)\le 1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2^*\big(1-F^{\lambda_2}(x)\big)+p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big), \]
which implies that
\[ p_2\big(1-F^{\lambda_2}(x)\big)-p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)\ge p_2^*\big(1-F^{\lambda_2}(x)\big)-p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big). \]
Then, we have
\[ p_2\big(1-F^{\lambda_2}(x)\big)\Big(1-p_1\big(1-F^{\lambda_1}(x)\big)\Big)\ge p_2^*\big(1-F^{\lambda_2}(x)\big)\Big(1-p_1\big(1-F^{\lambda_1}(x)\big)\Big), \]
which implies that $p_2\ge p_2^*$. Next, for the sufficiency part, let us consider the function
\[ \chi(x)=\frac{1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2\big(1-F^{\lambda_2}(x)\big)+p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}{1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2^*\big(1-F^{\lambda_2}(x)\big)+p_1p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}. \]
It is then enough to show that χ is increasing in x. Upon differentiating χ with respect to x and simplifying the resulting expression, we find
\[ \frac{\partial\chi(x)}{\partial x}\overset{\mathrm{sgn}}{=}(p_2-p_2^*)\lambda_2F^{\lambda_2}(x)\Big\{1-2p_1\big(1-F^{\lambda_1}(x)\big)+p_1^2\big(1-F^{\lambda_1}(x)\big)^2\Big\}=(p_2-p_2^*)\lambda_2F^{\lambda_2}(x)\Big(1-p_1\big(1-F^{\lambda_1}(x)\big)\Big)^2\ge 0. \]
Hence, χ is increasing in x, which completes the proof of Part (i) of the theorem.
(ii)
The distribution functions of $V_{2:2}$ and $W_{2:2}$, for $x\ge 0$, are given by
\[ F_{V_{2:2}}(x)=1-p_1\big(1-F^{\lambda_1}(x)\big)-p_2\big(1-F^{\lambda_2}(x)\big)+p_1p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big) \]
and
\[ F_{W_{2:2}}(x)=1-p_1^*\big(1-F^{\lambda_1}(x)\big)-p_2^*\big(1-F^{\lambda_2}(x)\big)+p_1^*p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big), \]
respectively. Then, it is enough to show that
\[ \Omega(x)=\frac{F_{V_{2:2}}(x)}{F_{W_{2:2}}(x)} \]
is increasing in x. Since $p_1\ge p_2$ and $p_1^*\ge p_2^*$, there exist non-negative real numbers d and c such that $p_1=p_2+d$ and $p_1^*=p_2^*+c$, and then we can rewrite $\Omega(x)$ as
\[ \Omega(x)=\frac{1-(d+p_2)\big(1-F^{\lambda_1}(x)\big)-p_2\big(1-F^{\lambda_2}(x)\big)+(d+p_2)p_2\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}{1-(c+p_2^*)\big(1-F^{\lambda_1}(x)\big)-p_2^*\big(1-F^{\lambda_2}(x)\big)+(c+p_2^*)p_2^*\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}. \]
Let us consider $L(x;a,b)$ as
\[ L(x;a,b)=\tilde{r}(x)\,\frac{(b+a)\lambda_1F^{\lambda_1}(x)+a\lambda_2F^{\lambda_2}(x)-(b+a)a\lambda_1F^{\lambda_1}(x)\big(1-F^{\lambda_2}(x)\big)-(b+a)a\lambda_2F^{\lambda_2}(x)\big(1-F^{\lambda_1}(x)\big)}{1-(b+a)\big(1-F^{\lambda_1}(x)\big)-a\big(1-F^{\lambda_2}(x)\big)+(b+a)a\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big)}, \]
where $\tilde{r}(x)=f(x)/F(x)$. We can then see easily that
\[ L(x;a,b)=\frac{g'(x;a,b)}{g(x;a,b)}, \]
where $g'(x;a,b)=\partial g(x;a,b)/\partial x$ and
\[ g(x;a,b)=1-(b+a)\big(1-F^{\lambda_1}(x)\big)-a\big(1-F^{\lambda_2}(x)\big)+(b+a)a\big(1-F^{\lambda_1}(x)\big)\big(1-F^{\lambda_2}(x)\big). \]
Because
\[ \Omega(x)=\frac{g(x;p_2,d)}{g(x;p_2^*,c)}, \]
according to Lemma 3, we have
\[ \frac{\partial\Omega(x)}{\partial x}=\Omega(x)\big(L(x;p_2,d)-L(x;p_2^*,c)\big), \]
and clearly, for proving the increasing property of $\Omega(x)$ with respect to x, it is enough to prove that the function $L(x;a,b)$ is increasing in a and also in b. First, upon differentiating and simplifying the resulting expression, we have
\[ \frac{\partial L(x;a,b)}{\partial a}\overset{\mathrm{sgn}}{=}\lambda_1F^{\lambda_1}(x)\Big(1-a\big(1-F^{\lambda_2}(x)\big)\Big)^2+\lambda_2F^{\lambda_2}(x)\Big(1-(b+a)\big(1-F^{\lambda_1}(x)\big)\Big)^2\ge 0, \]
which shows that $L(x;a,b)$ is increasing in a. Next, upon a similar differentiation and simplification, we also have
\[ \frac{\partial L(x;a,b)}{\partial b}\overset{\mathrm{sgn}}{=}\lambda_1F^{\lambda_1}(x)\Big(1-a\big(1-F^{\lambda_2}(x)\big)\Big)^2\ge 0, \]
which shows that $L(x;a,b)$ is increasing in b. Now, since $L(x;a,b)$ is increasing in a and $p_2\ge p_2^*$, we have
\[ L(x;p_2,d)\ge L(x;p_2^*,d). \]
Further, since $L(x;a,b)$ is increasing in b and $d\ge c$ (because $d=p_1-p_2\ge p_1^*-p_2^*=c$), we have
\[ L(x;p_2^*,d)\ge L(x;p_2^*,c). \]
Upon combining these last two inequalities, we have
\[ L(x;p_2,d)\ge L(x;p_2^*,c), \]
that is, $L(x;p_2,d)-L(x;p_2^*,c)\ge 0$. Then, from the expression for $\partial\Omega(x)/\partial x$ given by Lemma 3, we get $\partial\Omega(x)/\partial x\ge 0$, and the desired result is obtained.
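As a numerical illustration of Theorem 2(ii) (our own sketch, with a standard exponential baseline and hypothetical parameters satisfying $p_2\ge p_2^*$, $p_1\ge p_2$, $p_1^*\ge p_2^*$ and $p_1-p_2\ge p_1^*-p_2^*$), the ratio of distribution functions is non-decreasing on a grid, as the reversed hazard rate order requires. Note that the distribution function of the two-component parallel system factorizes as $(1-p_1(1-F^{\lambda_1}))(1-p_2(1-F^{\lambda_2}))$.

```python
import math

def cdf_parallel(x, p1, p2, lam1, lam2, F):
    """Distribution function of max(I_{p1} X1, I_{p2} X2), X_i ~ PRHR(lam_i)."""
    g1 = 1.0 - F(x) ** lam1
    g2 = 1.0 - F(x) ** lam2
    return (1.0 - p1 * g1) * (1.0 - p2 * g2)

# Theorem 2(ii) setting: p2 >= p2*, p1 >= p2, p1* >= p2* and
# p1 - p2 >= p1* - p2*; the ratio F_V / F_W should be non-decreasing.
F = lambda x: 1.0 - math.exp(-x)      # standard exponential baseline
p1, p2, p1s, p2s = 0.9, 0.5, 0.6, 0.4
lam1, lam2 = 2.0, 3.0
grid = [0.05 * k for k in range(1, 100)]
ratios = [cdf_parallel(x, p1, p2, lam1, lam2, F) /
          cdf_parallel(x, p1s, p2s, lam1, lam2, F) for x in grid]
assert all(r2 >= r1 - 1e-12 for r1, r2 in zip(ratios, ratios[1:]))
```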
Remark 2.
The results of Theorem 2 also hold under the following conditions:
1. 
$p_2^*\le p_1^*\le p_2\le p_1$ and $p_1-p_2=p_1^*-p_2^*$;
2. 
$p_2^*\le p_2\le p_1^*\le p_1$ and $p_1-p_2=p_1^*-p_2^*$.

5. Usual Stochastic Order

In this section, we present some stochastic comparisons of parallel systems for different components and different probabilities of starters in terms of the usual stochastic order.
Theorem 3.
Suppose $X_1,\ldots,X_n$ ($X_1^*,\ldots,X_n^*$) are independent non-negative random variables with $X_i\sim PRHR(\lambda)$ ($X_i^*\sim PRHR(\lambda^*)$), $i=1,\ldots,n$. Further, suppose $I_{p_1},\ldots,I_{p_n}$ ($I_{p_1^*},\ldots,I_{p_n^*}$) are independent Bernoulli random variables, independent of the $X_i$'s and $X_i^*$'s, with $E(I_{p_i})=p_i$ and $E(I_{p_i^*})=p_i^*$, $i=1,\ldots,n$. Let $V_{n:n}=\max\{X_1I_{p_1},\ldots,X_nI_{p_n}\}$ and $W_{n:n}=\max\{X_1^*I_{p_1^*},\ldots,X_n^*I_{p_n^*}\}$. If $\lambda\ge\lambda^*$ and $\mathbf{p}\succeq^{m}\mathbf{p}^*$, then $V_{n:n}\ge_{st}W_{n:n}$.
Proof. 
Let us denote s ( p , λ ; x ) = 1 i = 1 n 1 p i 1 F λ ( x ) . For λ λ * , we can observe that s ( p , λ ; x ) s ( p , λ * ; x ) . For obtaining the desired result, it is sufficient to observe that s ( p , λ * ; x ) s ( p * , λ * ; x ) . Therefore, we have to check Conditions (i) and (ii) of Lemma 1. It is then evident that s ( p , λ * ; x ) is symmetric with respect to p , for any x. Additionally, for any i j , we have
\[
\frac{\partial s(\mathbf{p}, \lambda^*; x)}{\partial p_k} = \left( 1 - F^{\lambda^*}(x) \right) \prod_{i=1, i \neq k}^{n} \left[ 1 - p_i \left( 1 - F^{\lambda^*}(x) \right) \right].
\]
Thus, for any $i \neq j$, we get
\[
\begin{aligned}
(p_i - p_j) \left[ \frac{\partial s(\mathbf{p}, \lambda^*; x)}{\partial p_i} - \frac{\partial s(\mathbf{p}, \lambda^*; x)}{\partial p_j} \right]
&= (p_i - p_j) \left( 1 - F^{\lambda^*}(x) \right) \prod_{k=1, k \neq i, j}^{n} \left[ 1 - p_k \left( 1 - F^{\lambda^*}(x) \right) \right] \\
&\quad \times \left[ \left( 1 - p_j \left( 1 - F^{\lambda^*}(x) \right) \right) - \left( 1 - p_i \left( 1 - F^{\lambda^*}(x) \right) \right) \right] \\
&= (p_i - p_j)^2 \left( 1 - F^{\lambda^*}(x) \right)^2 \prod_{k=1, k \neq i, j}^{n} \left[ 1 - p_k \left( 1 - F^{\lambda^*}(x) \right) \right] \geq 0.
\end{aligned}
\]
Hence, $s(\mathbf{p}, \lambda^*; x)$ is Schur-convex with respect to $\mathbf{p}$, for any $x$, and so $s(\mathbf{p}, \lambda^*; x) \geq s(\mathbf{p}^*, \lambda^*; x)$ whenever $\mathbf{p} \stackrel{m}{\succeq} \mathbf{p}^*$. This implies that $V_{n:n} \geq_{st} W_{n:n}$, as required. □
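As a numerical sanity check of Theorem 3, the survival function $s(\mathbf{p}, \lambda; x)$ from the proof can be evaluated directly. The sketch below is illustrative only: the uniform baseline $F(x) = x$ on $[0, 1]$ and the particular vectors $\mathbf{p}$, $\mathbf{p}^*$ are our own choices, not taken from the paper. It checks that $\lambda \geq \lambda^*$ together with $\mathbf{p} \stackrel{m}{\succeq} \mathbf{p}^*$ yields $s(\mathbf{p}, \lambda; x) \geq s(\mathbf{p}^*, \lambda^*; x)$ for all $x$, i.e., $V_{n:n} \geq_{st} W_{n:n}$:

```python
from math import prod

def surv(p, lam, x):
    """Survival function of V_{n:n}: 1 - prod_i [1 - p_i (1 - F^lam(x))],
    with the illustrative baseline F(x) = x on [0, 1]."""
    u = 1 - x ** lam          # this is 1 - F^lam(x)
    return 1 - prod(1 - pi * u for pi in p)

def majorizes(p, q):
    """Check p >=_m q: equal totals and dominating partial sums of the
    decreasing rearrangements (Marshall-Olkin definition)."""
    a, b = sorted(p, reverse=True), sorted(q, reverse=True)
    if abs(sum(a) - sum(b)) > 1e-12:
        return False
    sa = sb = 0.0
    for ai, bi in zip(a, b):
        sa, sb = sa + ai, sb + bi
        if sa < sb - 1e-12:
            return False
    return True

lam, lam_star = 2.0, 1.5               # lam >= lam_star
p, p_star = (0.9, 0.3), (0.7, 0.5)     # equal sums and 0.9 >= 0.7, so p majorizes p_star
assert lam >= lam_star and majorizes(p, p_star)

# s(p, lam; x) >= s(p*, lam*; x) on a grid  =>  V_{2:2} >=_st W_{2:2}
grid = [i / 100 for i in range(101)]
assert all(surv(p, lam, x) >= surv(p_star, lam_star, x) - 1e-12 for x in grid)
print("usual stochastic order confirmed on the grid")
```

The check succeeds because both mechanisms in the proof push in the same direction: a larger $\lambda$ enlarges $1 - F^{\lambda}(x)$ pointwise, and Schur-convexity in $\mathbf{p}$ rewards the more spread-out probability vector.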

6. Concluding Remarks

A parallel system is one of the most commonly used coherent systems in practice. For this reason, a careful study of its performance characteristics, such as reliability function, hazard function and reversed hazard function, based on the characteristics of the component lifetime distribution, is of great interest to reliability engineers. In this work, we have focused our attention primarily on a parallel system with two components as a parallel system with more components can be decomposed into many subsystems with two components in parallel. One of the prominent examples of a two-component parallel system is a twin-engine jet system, which, in addition to being safer than a single-engine jet system, is more efficient in terms of fuel consumption than a jet system with more than two engines.
Specifically, we have proved the hazard rate and reversed hazard rate orders for parallel systems with two components having proportional reversed hazard rates and starting devices, and we have generalized the comparison to systems with n components in terms of the usual stochastic order.
It will be of interest to extend the problems discussed here by allowing dependence between components, using general copulas for the joint distribution of the component lifetimes. We are currently working in this direction and will present the corresponding results in the future.

Author Contributions

Conceptualization, G.B. and S.K.; methodology, N.B., G.B. and S.K.; software, S.K.; validation, G.B. and S.K.; formal analysis, N.B., G.B. and S.K.; investigation, G.B. and S.K.; resources, G.B.; writing—original draft preparation, G.B.; writing—review and editing, N.B.; visualization, G.B. and S.K.; supervision, N.B.; project administration, N.B. and G.B.; funding acquisition, N.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

We express our sincere thanks to the Guest Editor and anonymous reviewers for their useful comments and suggestions on an earlier version of this manuscript that led to this improved version. This work has been conducted by University of Zabol, Grant Number: UOZ-GR-3389.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Plot of the ratio of survival functions of $V_{2:2}$ and $W_{2:2}$ for $x \in [0.95, 1]$ in Example 1.
Figure 2. Plot of the ratio of survival functions of $V_{2:2}$ and $W_{2:2}$ for $x \in [0, 1.5]$ in Example 2.