Article

Weak Dependence Notions and Their Mutual Relationships

1 Facultad de Matemáticas, Universidad de Murcia, Campus de Espinardo, 30100 Murcia, Spain
2 Dipartimento di Scienze Matematiche, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Torino, Italy
3 Departamento de Estadística e I. O., Universidad de Cádiz, C/Duque de Nájera 8, 11002 Cádiz, Spain
* Author to whom correspondence should be addressed.
Submission received: 18 November 2020 / Revised: 23 December 2020 / Accepted: 25 December 2020 / Published: 31 December 2020
(This article belongs to the Section Probability and Statistics)

Abstract:
New weak notions of positive dependence between the components X and Y of a random pair (X, Y) have been considered in recent papers that deal with the effects of dependence on conditional residual lifetimes and conditional inactivity times. The purpose of this paper is to provide a structured framework for the definition and description of these notions, and other new ones, and to describe their mutual relationships. An exhaustive review of some well-known notions of dependence, with a complete description of the equivalent definitions and reciprocal relationships, some of them expressed in terms of the properties of the copula or survival copula of (X, Y), is also provided.

1. Introduction

In the past few decades, a number of different concepts and notions of dependence between the components X and Y of a random pair (X, Y) have been defined and applied in a variety of fields like reliability theory or actuarial sciences. Since the main motivation for these notions is in the understanding of the effects of the dependence between X and Y on the reliability of one of them given the behavior of the other, most of these notions have been described and defined by using classical stochastic comparisons, comparing in some stochastic sense the distribution of one variable under different conditions for the other. For example, a well-known notion is the stochastically increasing property: given (X, Y), the variable X is said to be stochastically increasing in Y if, and only if, Pr(X > x | Y = y) increases in y for all fixed x (with y and x in the corresponding supports of Y and X), or equivalently, if, and only if, (X | Y = y₁) ≤_ST (X | Y = y₂) for all y₁ ≤ y₂, where ≤_ST denotes the usual stochastic order (whose definition is recalled next). Similarly, other notions have been defined (and studied in detail) by replacing the usual stochastic order with other common stochastic comparisons and by considering other conditional events, such as, for example, {Y > y}.
The purpose of this paper is twofold: on the one hand, it is our intention to provide a complete review of the main notions introduced so far, clearly describing equivalent definitions and reciprocal relationships; on the other hand, it is to enlarge the set of dependence notions with two properties recently considered in [1,2] and with other new ones, all defined as above, but considering stochastic orders for which the corresponding notions have not been characterized in the previous literature. The first point is motivated by the fact that characterizations of the previously defined notions can be found in many different articles and books, where the equivalent definitions and mutual connections are separately and partially analyzed, so that it is difficult to find them in a few exhaustive statements. For the second point, the aim is also to describe in a systematic manner some notions introduced in recent literature and to define a unified framework for dependence notions and their mutual relationships.
The paper also contains a result of general interest in the context of total positivity theory, for which we have not found the statement in the literature. This result, stated and proven in Section 2, deals with the preservation properties of total positivity and is used to provide sufficient conditions for the new notions of dependence.
The paper is structured as follows. The definitions of the stochastic orders used to describe the old and new dependence properties are recalled in Section 2, together with other useful properties and characterizations. Section 3 contains the survey on the notions introduced so far in the literature; all of them imply the Positively Quadrant Dependent ( P Q D ) property, defined later, which is a necessary condition for a property of a vector ( X , Y ) to be considered a dependence property according to the classical definition described in [3]. The new dependence notions, which are called here “weak dependence notions” since they imply the non-negativity of the linear correlation index, but not the P Q D property, are described in Section 4 and Section 5. Finally, Section 6 contains counterexamples showing that equivalences among some of these notions are not satisfied.
Throughout the paper, we will consider bivariate vectors (X, Y) having an absolutely continuous joint distribution, unless otherwise stated, and the supports of X and Y will be denoted as S_X and S_Y. For any random variable or random vector X and an event A, the notation (X | A) describes the random variable whose distribution is the conditional distribution of X given A (tacitly assuming that it exists). The terms “increasing” and “decreasing” are used in place of “nondecreasing” and “nonincreasing”, respectively. Furthermore, for a function h: ℝ² → ℝ, the notations ∂_1 h(x, y) and ∂_2 h(x, y) are used for the partial derivatives with respect to the first or the second argument, respectively, while ∂²_{1,2} h(x, y) is used for the mixed second partial derivative.

2. Preliminaries

First, we briefly recall the definitions of the stochastic orders that will be used throughout this paper. Most of them are mainly used to compare random lifetimes or inactivity times. To this aim, let X and Y be two absolutely continuous random variables having supports S_X and S_Y, distribution functions F_X and F_Y, reliability (survival) functions F̄_X = 1 − F_X and F̄_Y = 1 − F_Y, and density functions f_X and f_Y, respectively. Furthermore, for t ∈ ℝ, let X_t = (X − t | X > t) and Y_t = (Y − t | Y > t) be the corresponding residual lifetimes (RL), and let ₜX = (t − X | X < t) and ₜY = (t − Y | Y < t) denote the corresponding inactivity times (IT). Then, we say that X is smaller than Y:
  • in the stochastic order (denoted by X ≤_ST Y) if F̄_X ≤ F̄_Y on ℝ;
  • in the likelihood ratio order (denoted by X ≤_LR Y) if the ratio f_Y/f_X is increasing on S_X ∪ S_Y;
  • in the hazard rate order (denoted by X ≤_HR Y) if the ratio F̄_Y/F̄_X is increasing on ℝ;
  • in the reversed hazard rate order (denoted by X ≤_RHR Y) if the ratio F_Y/F_X is increasing on ℝ;
  • in the mean residual life order (denoted by X ≤_MRL Y) if E(X_t) ≤ E(Y_t) for all t for which the expectations exist;
  • in the mean inactivity time order (denoted by X ≤_MIT Y) if E(ₜX) ≥ E(ₜY) for all t for which the expectations exist;
  • in the increasing convex order (denoted by X ≤_ICX Y) if E(φ(X)) ≤ E(φ(Y)) for all increasing and convex functions φ for which the expectations exist;
  • in the increasing concave order (denoted by X ≤_ICV Y) if E(φ(X)) ≤ E(φ(Y)) for all increasing and concave functions φ for which the expectations exist.
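As a quick numerical illustration (the two exponential distributions below are an assumed example, not taken from the paper), the ST, HR, and LR comparisons can be checked directly from the survival and density functions:

```python
import math

# Sketch: for X ~ Exp(rate 2) and Y ~ Exp(rate 1), the ratio
# f_Y/f_X = (1/2) e^t is increasing, so X <=_LR Y, which in turn
# implies X <=_HR Y and X <=_ST Y.
sf_X = lambda t: math.exp(-2 * t)        # survival function of X
sf_Y = lambda t: math.exp(-t)            # survival function of Y
pdf_X = lambda t: 2 * math.exp(-2 * t)   # density of X
pdf_Y = lambda t: math.exp(-t)           # density of Y

ts = [0.1 * k for k in range(1, 50)]
st_holds = all(sf_X(t) <= sf_Y(t) for t in ts)
hr_holds = all(sf_Y(t) / sf_X(t) <= sf_Y(t + 0.1) / sf_X(t + 0.1) for t in ts)
lr_holds = all(pdf_Y(t) / pdf_X(t) <= pdf_Y(t + 0.1) / pdf_X(t + 0.1) for t in ts)
print(st_holds, hr_holds, lr_holds)  # True True True
```

The three checks mirror the defining ratios above; on this example all three orders hold, consistent with the implications in Table 1.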
We address the reader to [4] for a detailed description of these stochastic orders (except for the MIT order, for which we address the reader to [5]) and to [6] or [7] for examples of applications in reliability theory and in actuarial sciences. Among these stochastic orders, there exist the well-known relationships shown in Table 1. The proofs of all these relationships may be found in [4], except for those involving the MIT order, which can be found in [5].
We recall here some immediate properties and equivalent conditions of these stochastic orders. The proofs of the following statements, unless stated otherwise, can be found in [4] or are straightforward.
Proposition 1.
The following conditions are equivalent:
(i)
X ≤_LR Y;
(ii)
X_t ≤_LR Y_t for all t;
(iii)
X_t ≤_RHR Y_t for all t;
(iv)
(X | s < X < t) ≤_ST (Y | s < Y < t) for all s < t.
Note that the third and the fourth conditions in the proposition above can be used to define the order also for variables that are not absolutely continuous.
Proposition 2.
The following conditions are equivalent:
(i)
X ≤_HR Y;
(ii)
X_t ≤_HR Y_t for all t;
(iii)
X_t ≤_ST Y_t for all t.
Proposition 3.
The following conditions are equivalent:
(i)
X ≤_MRL Y;
(ii)
X_t ≤_MRL Y_t for all t;
(iii)
X_t ≤_ICX Y_t for all t.
The proofs of the propositions above may be found in [4] or easily follow from statements there. For example, to prove the last one, we note that if m_X(t) = E(X_t) = E(X − t | X > t) is the MRL of X, then the MRL of X_t is m_{X_t}(x) = m_X(x + t). Therefore, we get (i) ⇔ (ii). Furthermore, (ii) ⇒ (iii) holds because the MRL order implies the ICX order. Finally, (iii) ⇒ (i) holds because the ICX order implies the order in expectations.
Similar results can be stated for inactivity times. The proofs of the following results, again, can be found in [4] or are straightforward.
Proposition 4.
The following conditions are equivalent:
(i)
X ≤_RHR Y;
(ii)
ₜX ≥_HR ₜY for all t;
(iii)
ₜX ≥_ST ₜY for all t.
Proposition 5.
The following conditions are equivalent:
(i)
X ≤_LR Y;
(ii)
ₜX ≥_LR ₜY for all t;
(iii)
ₜX ≥_RHR ₜY for all t.
Finally, we include the equivalences for the MIT order.
Proposition 6.
The following conditions are equivalent:
(i)
X ≤_MIT Y;
(ii)
ₜX ≥_MRL ₜY for all t;
(iii)
ₜX ≥_ICX ₜY for all t.
The proof of the equivalence between Conditions (i) and (iii) in the previous proposition may be found in [8], while the equivalence with (ii) can be proven in a similar manner.
For the proof of some of the results stated in the next sections, we will use a few preliminary results whose proofs are given here. These results, for which we have not found the statements in the literature, are of general interest in the context of total positivity theory, regardless of their applications in the following sections.
For them, we recall the notion of total positivity of order two (see, e.g., [9]).
Definition 1.
A bivariate function l: ℝ² → ℝ₊ is said to be totally positive of order two (TP₂) if:
l(x₁, y₁) l(x₂, y₂) ≥ l(x₁, y₂) l(x₂, y₁), for all x₁ ≤ x₂ and y₁ ≤ y₂.
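A direct grid check of this inequality can be sketched as follows (the kernels exp(xy) and exp(−xy) are assumed examples, chosen because the first is TP₂ and the second is RR₂):

```python
import math

def is_tp2_on_grid(l, xs, ys, tol=1e-12):
    """Check l(x1,y1)*l(x2,y2) >= l(x1,y2)*l(x2,y1) for all grid pairs
    with x1 < x2 and y1 < y2 (up to a small numerical tolerance)."""
    return all(
        l(x1, y1) * l(x2, y2) >= l(x1, y2) * l(x2, y1) - tol
        for i, x1 in enumerate(xs) for x2 in xs[i + 1:]
        for j, y1 in enumerate(ys) for y2 in ys[j + 1:]
    )

grid = [0.2 * k for k in range(6)]
# exp(x*y) is TP2: the cross-product ratio is exp((x2-x1)(y2-y1)) >= 1.
print(is_tp2_on_grid(lambda x, y: math.exp(x * y), grid, grid))   # True
# exp(-x*y) is RR2, so the TP2 inequality fails on this grid.
print(is_tp2_on_grid(lambda x, y: math.exp(-x * y), grid, grid))  # False
```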
We also recall the following useful property, the proof of which is given in Lemma 7.1(a) in Chapter 4 of [6] (see also the remark after the lemma).
Lemma 1.
Let W be a Lebesgue–Stieltjes measure, not necessarily positive, defined on the interval (a, b), and let φ be a non-negative function defined on (a, b). If ∫_t^b dW(x) ≥ 0 for all t ∈ (a, b) and φ is increasing, then ∫_t^b φ(x) dW(x) ≥ 0 for all t ∈ (a, b).
We can now state and prove the first one of the preliminary results.
Theorem 1.
Let f: ℝ² → ℝ₊. If h(x, y) = ∫_x^b f(s, y) ds is TP₂ in (x, y), with x ∈ (a, b) ⊆ ℝ, then:
H(x, y) = ∫_x^b f(s, y) φ(s) ds
is also TP₂ in (x, y) for all increasing and non-negative real functions φ.
Proof. 
We assume that h(x, y) is TP₂ in (x, y), which means that:
∫_x^b f(s, y₁) ds / ∫_x^b f(s, y₂) ds decreases in x, for y₁ ≤ y₂. (1)
Now observe that (1) holds if, and only if:
f(x, y₁) / f(x, y₂) ≥ ∫_x^b f(s, y₁) ds / ∫_x^b f(s, y₂) ds, for y₁ ≤ y₂. (2)
From (1) and (2), we can rewrite the TP₂ property of h(x, y) as:
f(x, y₁) / f(x, y₂) ≥ ∫_t^b f(s, y₁) ds / ∫_t^b f(s, y₂) ds, for a < x ≤ t < b, y₁ ≤ y₂. (3)
Now, given y₁ ≤ y₂, this is the same as:
∫_t^b dW_x(s) ≥ 0, for all a < x ≤ t < b,
where:
dW_x(s) = [f(s, y₂) f(x, y₁) − f(s, y₁) f(x, y₂)] ds.
Now, consider the function:
Φ(u) = ∫_u^b I_{[t,∞)}(s) dW_x(s), u ∈ (a, b),
where:
I_{[t,∞)}(s) = 1 if s ≥ t, and I_{[t,∞)}(s) = 0 if s < t, with t ∈ (a, b).
Then, by noting that:
Φ(u) = ∫_u^b dW_x(s) if u > t, and Φ(u) = ∫_t^b dW_x(s) if u ≤ t,
we see that (3) is equivalent to:
∫_u^b I_{[t,∞)}(s) dW_x(s) ≥ 0, for all u, for all a < x ≤ t < b.
Since φ is increasing and non-negative, by using Lemma 1, we have:
∫_u^b I_{[t,∞)}(s) φ(s) dW_x(s) ≥ 0, for all u, for all a < x ≤ t < b.
By repeating the argument in the opposite direction, we see that the last inequality is the same as:
∫_t^b φ(s) dW_x(s) ≥ 0, for all a < x ≤ t < b.
Therefore, we have proven that (1) implies:
∫_t^b [f(s, y₂) f(x, y₁) − f(s, y₁) f(x, y₂)] φ(s) ds ≥ 0, for y₁ ≤ y₂, a < x ≤ t < b.
Rewriting this inequality as:
f(x, y₁) / f(x, y₂) ≥ ∫_t^b f(s, y₁) φ(s) ds / ∫_t^b f(s, y₂) φ(s) ds, for x ≤ t, y₁ ≤ y₂,
and taking x = t, we see that:
∫_x^b f(s, y₁) φ(s) ds / ∫_x^b f(s, y₂) φ(s) ds decreases in x, for y₁ ≤ y₂.
This means that ∫_x^b f(s, y) φ(s) ds is TP₂ in (x, y). □
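Theorem 1 can also be probed numerically; the sketch below uses the assumed kernel f(s, y) = exp(sy) on (a, b) = (0, 1) and the increasing non-negative weight φ(s) = s (neither choice comes from the paper):

```python
import math

def tail_integral(g, x, b=1.0, n=400):
    """Midpoint-rule approximation of the right-tail integral of g over (x, b)."""
    if x >= b:
        return 0.0
    w = (b - x) / n
    return sum(g(x + (k + 0.5) * w) for k in range(n)) * w

def tp2_ok(g, xs, ys, tol=1e-9):
    """TP2 inequality for g on a grid, up to numerical tolerance."""
    return all(
        g(x1, y1) * g(x2, y2) + tol >= g(x1, y2) * g(x2, y1)
        for i, x1 in enumerate(xs) for x2 in xs[i + 1:]
        for j, y1 in enumerate(ys) for y2 in ys[j + 1:]
    )

f = lambda s, y: math.exp(s * y)   # TP2 kernel, so h(x, y) below is TP2
phi = lambda s: s                  # increasing and non-negative on (0, 1)
h = lambda x, y: tail_integral(lambda s: f(s, y), x)
H = lambda x, y: tail_integral(lambda s: f(s, y) * phi(s), x)

xs, ys = [0.1, 0.3, 0.5, 0.7], [0.5, 1.0, 2.0]
print(tp2_ok(h, xs, ys), tp2_ok(H, xs, ys))  # True True
```

As Theorem 1 predicts, the TP₂ property of h is inherited by the weighted tail integral H.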
The following is an immediate consequence of Theorem 1.
Corollary 1.
Let f: ℝ² → ℝ₊. If h(x, y) = ∫_x^b f(s, y) ds is TP₂ in (x, y), with x ∈ (a, b), then:
H(x, y) = ∫_x^{ψ⁻¹(b)} f(ψ(s), y) ds
is also TP₂ in (x, y), with x ∈ (ψ⁻¹(a), ψ⁻¹(b)), for all strictly increasing and concave real functions ψ.
Proof. 
Since h(x, y) is TP₂ in (x, y) and ψ is increasing, it follows from Theorem 1.1 of [9], p. 99, that h(ψ(x), y) is TP₂ in (x, y). This is equivalent to stating that:
∫_{ψ(x)}^b f(s, y₁) ds / ∫_{ψ(x)}^b f(s, y₂) ds decreases in x, for y₁ ≤ y₂,
which is the same as:
∫_x^{ψ⁻¹(b)} f(ψ(s), y₁) ψ′(s) ds / ∫_x^{ψ⁻¹(b)} f(ψ(s), y₂) ψ′(s) ds decreases in x, for y₁ ≤ y₂.
Equivalently, we can say that:
∫_x^{ψ⁻¹(b)} f(ψ(s), y) ψ′(s) ds (5)
is TP₂ in (x, y). Since ψ is strictly increasing and concave, the result follows from (5) by applying Theorem 1 with φ(s) = 1/ψ′(s). □
Using the same procedure as above, but applying Lemma 7.1(b) in Chapter 4 of [6] instead of Lemma 7.1(a), one can also prove the following statements.
Theorem 2.
Let f: ℝ² → ℝ₊. If h(x, y) = ∫_a^x f(s, y) ds is TP₂ in (x, y), with x ∈ (a, b) ⊆ ℝ, then:
H(x, y) = ∫_a^x f(s, y) φ(s) ds
is also TP₂ in (x, y) for all non-negative and decreasing real functions φ.
Corollary 2.
Let f: ℝ² → ℝ₊. If h(x, y) = ∫_a^x f(s, y) ds is TP₂ in (x, y), with x ∈ (a, b) ⊆ ℝ, then:
H(x, y) = ∫_{ψ⁻¹(a)}^x f(ψ(s), y) ds
is also TP₂ in (x, y), with x ∈ (ψ⁻¹(a), ψ⁻¹(b)), for all strictly increasing and convex real functions ψ.

3. Strong Dependence Notions

Starting from the seminal paper by Kimeldorf and Sampson [3], where a unified framework to deal with dependence concepts was proposed, many alternative notions of dependence (either positive or negative) have been investigated and applied in the literature with the intent of mathematically describing the different properties and aspects of this intuitive concept. According to [3], a bivariate property can be considered as a positive dependence notion if it satisfies a number of conditions, the first of which is the PQD property, whose definition and relevance in this context are recalled later.
Since all the monotone dependence properties based on the level of concordance between the components of a random vector are entirely described by its copula, whenever it is unique, one can rewrite the conditions described in [3] in terms of families of copulas, regardless of the marginal distributions. See, e.g., [10] or [11] for the relationships between copulas and positive dependence notions or indexes. For this reason, we briefly recall here the definitions of the copula and the survival copula of a random vector (X, Y).
Given a random vector (X, Y) with joint distribution function F and marginal distributions F_X and F_Y, the function C: [0, 1]² → [0, 1] is called a copula of the vector (X, Y) if, for all (x, y) ∈ ℝ²,
F(x, y) = C(F_X(x), F_Y(y)).
In this case, if F_X and F_Y are continuous, it also holds that:
C(u, v) = F(F_X⁻¹(u), F_Y⁻¹(v))
for all u, v ∈ [0, 1], where F_X⁻¹ denotes the quasi-inverse of F_X (and similarly for F_Y⁻¹). Actually, the copula of a vector (X, Y), which is unique whenever F_X and F_Y are continuous, is the joint distribution function of a vector having uniformly distributed margins on [0, 1], and it entirely describes the dependence between the components of the vector. Moreover, since in some applied fields, like reliability theory, the dependence among the components of a vector is analyzed through the survival copula instead of through the copula, we recall its definition as well: given (X, Y), and denoting by F̄, F̄_X, and F̄_Y its joint survival function and the two marginal survival functions, the survival copula Ĉ is defined as:
Ĉ(u, v) = F̄(F̄_X⁻¹(u), F̄_Y⁻¹(v)) = u + v − 1 + C(1 − u, 1 − v)
for all u, v ∈ [0, 1]. If Ĉ exists, then one also has:
F̄(x, y) = Ĉ(F̄_X(x), F̄_Y(y))
for all x and y. For the basic properties of copulas and survival copulas, we refer the reader to [11,12].
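The identity Ĉ(u, v) = u + v − 1 + C(1 − u, 1 − v) is easy to verify numerically; the sketch below uses the FGM copula C(u, v) = uv + θuv(1 − u)(1 − v) as an assumed example (it is radially symmetric, so its survival copula coincides with C):

```python
# FGM copula with parameter theta (an illustrative choice, not from the paper).
theta = 0.5
C = lambda u, v: u * v + theta * u * v * (1 - u) * (1 - v)
# Survival copula via the identity recalled in the text.
C_hat = lambda u, v: u + v - 1 + C(1 - u, 1 - v)

pts = [(0.1 * i, 0.1 * j) for i in range(11) for j in range(11)]
max_gap = max(abs(C_hat(u, v) - C(u, v)) for u, v in pts)
print(max_gap < 1e-12)  # True: for FGM, the survival copula equals the copula
```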
The definitions of the dependence properties considered throughout the paper, and previously defined in the literature, are recalled here.
Definition 2.
Given a vector (X, Y), it is said to be:
  • PQD (Positively Quadrant Dependent) iff X ≤_ST (X | Y > t) for all t (see, e.g., [11], p. 187).
  • NQD (Negatively Quadrant Dependent) iff X ≥_ST (X | Y > t) for all t (see, e.g., [11], p. 187).
  • RTI(X|Y) (Right Tail Increasing) iff (X | Y > t) is ST-increasing in t (see, e.g., [11], p. 191).
  • RTD(X|Y) (Right Tail Decreasing) iff (X | Y > t) is ST-decreasing in t (see, e.g., [10], p. 22).
  • LTD(X|Y) (Left Tail Decreasing) iff (X | Y ≤ t) is ST-increasing in t (see, e.g., [11], p. 191).
  • LTI(X|Y) (Left Tail Increasing) iff (X | Y ≤ t) is ST-decreasing in t (see, e.g., [10], p. 22).
  • SI(X|Y) (Stochastically Increasing) iff (X | Y = t) is ST-increasing in t ∈ S_Y (see, e.g., [11], p. 196).
  • SD(X|Y) (Stochastically Decreasing) iff (X | Y = t) is ST-decreasing in t ∈ S_Y (see, e.g., [11], p. 196).
  • RCSI (Right Corner Set Increasing) iff Pr(X > x, Y > y | X > s, Y > t) is increasing in s and t for all (x, y) ∈ ℝ² (see [11], p. 198) or, equivalently, iff (X | Y > t) is HR-increasing in t (see [13], p. 166).
  • RCSD (Right Corner Set Decreasing) iff Pr(X > x, Y > y | X > s, Y > t) is decreasing in s and t for all (x, y) ∈ ℝ² (see [11], p. 198) or, equivalently, iff (X | Y > t) is HR-decreasing in t (see [13], p. 166).
  • LCSD (Left Corner Set Decreasing) iff Pr(X ≤ x, Y ≤ y | X ≤ s, Y ≤ t) is decreasing in s and t for all (x, y) ∈ ℝ² (see [11], p. 198) or, equivalently, iff (X | Y ≤ t) is RHR-increasing in t.
  • LCSI (Left Corner Set Increasing) iff Pr(X ≤ x, Y ≤ y | X ≤ s, Y ≤ t) is increasing in s and t for all (x, y) ∈ ℝ² (see [11], p. 198) or, equivalently, iff (X | Y ≤ t) is RHR-decreasing in t.
  • SIRL(X|Y) (Stochastically Increasing in Residual Life), defined in [14], iff (X | Y = t) is HR-increasing in t ∈ S_Y.
  • SDRL(X|Y) (Stochastically Decreasing in Residual Life), defined in [14], iff (X | Y = t) is HR-decreasing in t ∈ S_Y.
  • PLRD (Positive Likelihood Ratio Dependent) iff its joint density function is TP₂ (see [11], p. 200, where it is denoted as PLR(X, Y)) or, equivalently, iff (X | Y = t) is LR-increasing in t ∈ S_Y (see [14]).
  • NLRD (Negative Likelihood Ratio Dependent) iff its joint density function is RR₂ (see [11], p. 200, and [9] for the RR₂ notion) or, equivalently, iff (X | Y = t) is LR-decreasing in t ∈ S_Y.
Note that some of these notions, like PQD, are “symmetric”, in the sense that they describe both the properties of X given Y and those of Y given X, while other notions, like RTI(X|Y), describe only the properties of X given Y; the opposite notions (e.g., RTI(Y|X)) describe the properties of Y given X. For these cases, the symmetric notions can be defined similarly and will be denoted as P(Y|X), where P is the specific property.
As pointed out before, for all the above-mentioned notions, there exist equivalent definitions given in terms of the survival copula Ĉ (or connecting copula C) of the vector (X, Y). These alternative definitions, proven in [11,13], or [14], will be pointed out in the following statements, which also describe further interesting equivalences. Here, and throughout the rest of the paper, the notations X_{t,s} and Y_{t,s} are used to describe double conditioning, i.e., X_{t,s} = (X − t | X > t, Y > s) and Y_{t,s} = (Y − s | X > t, Y > s).
Proposition 7.
For continuous F ¯ X and F ¯ Y , the following conditions are equivalent:
(i)
X ≤_HR (X | Y > s) for all s ∈ ℝ and all F̄_X, F̄_Y;
(ii)
X_t ≤_HR X_{t,s} for all (t, s) ∈ ℝ² and all F̄_X, F̄_Y;
(iii)
X_t ≤_ST X_{t,s} for all (t, s) ∈ ℝ² and all F̄_X, F̄_Y;
(iv)
Ĉ(u, v)/u is decreasing in u for all v ∈ [0, 1];
(v)
RTI(Y|X).
The equivalences between the first three conditions can be obtained from the equivalences given in the preceding section. The equivalence between (i) and (iv) can be obtained either from Proposition 3.1 in [2] or from Proposition 3.3 (iii) in [13]. The equivalence between (iv) and (v) can be found in [11] (Theorem 5.2.5). It is interesting to observe that, even if RTI(Y|X) refers to a property describing the behavior of the law of Y given X, Conditions (i), (ii), and (iii) refer to the behavior of X given Y. A discussion of and the formal motivation for this fact were clearly provided in [14].
A similar proposition could be stated for a fixed s: if (i) holds for a fixed s > 0 and all F̄_X, F̄_Y, then (iv) holds for all u = F̄_X(t) > 0 and v = F̄_Y(s) > 0; by changing F̄_Y, we cover the whole interval (0, 1). However, if we fix F̄_X and F̄_Y, then the equivalence is no longer true when we also fix s (the RTI(Y|X) property is no longer satisfied). In this case, (v) does not hold, but one could write (iv) for a fixed value v = F̄_Y(s).
The following statement deals with the SI notion.
Proposition 8.
The following conditions are equivalent:
(i)
X ≤_LR (X | Y > s) for all s ∈ ℝ and all F̄_X, F̄_Y;
(ii)
X_t ≤_LR X_{t,s} for all (t, s) ∈ ℝ² and all F̄_X, F̄_Y;
(iii)
X_t ≤_RHR X_{t,s} for all (t, s) ∈ ℝ² and all F̄_X, F̄_Y;
(iv)
Ĉ(u, v) is concave in u (or ∂_1 Ĉ(u, v) is decreasing in u) for all v ∈ [0, 1];
(v)
SI(Y|X).
The equivalences between the first three conditions can be obtained from the equivalences given in the preceding section. The equivalences with (iv) can be obtained either from Proposition 3.2 in [2] or from Proposition 3.3 (v) in [13]. Again, it is interesting to observe that SI(Y|X) refers to a property describing the behavior of the law of Y given X, while Conditions (i), (ii), and (iii) refer to the behavior of X given Y (see [14]).
The following statement describes equivalent definitions for the R C S I notion.
Proposition 9.
The following conditions are equivalent:
(i)
(X | Y > s₁) ≤_HR (X | Y > s₂) for all s₁ ≤ s₂ and all F̄_X, F̄_Y;
(ii)
(Y | X > t₁) ≤_HR (Y | X > t₂) for all t₁ ≤ t₂ and all F̄_X, F̄_Y;
(iii)
X_{t,s₁} ≤_HR X_{t,s₂} for all t, all s₁ ≤ s₂, and all F̄_X, F̄_Y;
(iv)
X_{t,s₁} ≤_ST X_{t,s₂} for all t, all s₁ ≤ s₂, and all F̄_X, F̄_Y;
(v)
Y_{t₁,s} ≤_HR Y_{t₂,s} for all s, all t₁ ≤ t₂, and all F̄_X, F̄_Y;
(vi)
Y_{t₁,s} ≤_ST Y_{t₂,s} for all s, all t₁ ≤ t₂, and all F̄_X, F̄_Y;
(vii)
Ĉ(u, v) is TP₂ in (u, v) ∈ [0, 1]²;
(viii)
(X, Y) is RCSI.
The proofs of the equivalences in the above proposition can be found in [1,14] or [13].
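Condition (vii) can be checked directly for a given parametric family; below is a sketch with the FGM copula with θ = 0.5 (an assumed example, not one used in the paper), whose survival copula equals the copula itself and is TP₂, so the corresponding vector is RCSI:

```python
# FGM survival copula; theta in (0, 1] gives positive dependence.
theta = 0.5
C_hat = lambda u, v: u * v * (1 + theta * (1 - u) * (1 - v))

grid = [0.1 * k for k in range(1, 10)]
tp2 = all(
    C_hat(u1, v1) * C_hat(u2, v2) >= C_hat(u1, v2) * C_hat(u2, v1) - 1e-12
    for i, u1 in enumerate(grid) for u2 in grid[i + 1:]
    for j, v1 in enumerate(grid) for v2 in grid[j + 1:]
)
print(tp2)  # True: condition (vii) of Proposition 9 holds for this copula
```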
The preceding propositions can be completed with the following result (extracted from Proposition 3.3 in [13]) that does not have an equivalent result based on residual lifetimes. For the equivalence between Conditions ( i v ) and ( v ) , see [15].
Proposition 10.
The following conditions are equivalent:
(i)
X ≤_ST (X | Y > s) for all s and all F̄_X, F̄_Y;
(ii)
Y ≤_ST (Y | X > t) for all t and all F̄_X, F̄_Y;
(iii)
Ĉ(u, v) ≥ uv for all u, v ∈ [0, 1];
(iv)
Cov(τ(X), φ(Y)) ≥ 0 for all increasing real functions τ and φ for which the covariance exists;
(v)
(X, Y) is PQD.
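Condition (iii) gives a convenient numerical test for the PQD property; the sketch below again uses the FGM family as an assumed example, for which the inequality holds exactly when θ ≥ 0:

```python
def is_pqd(theta, n=20, tol=1e-12):
    """Grid check of C_hat(u, v) >= u*v for the FGM survival copula."""
    C_hat = lambda u, v: u * v * (1 + theta * (1 - u) * (1 - v))
    pts = [(i / n, j / n) for i in range(n + 1) for j in range(n + 1)]
    return all(C_hat(u, v) >= u * v - tol for u, v in pts)

print(is_pqd(0.5), is_pqd(-0.5))  # True False: theta < 0 gives NQD instead
```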
The following two propositions hold as well.
Proposition 11.
The following conditions are equivalent:
(i)
(X | Y > s₁) ≤_LR (X | Y > s₂) for all s₁ < s₂ and all F̄_X, F̄_Y;
(ii)
(X_t | Y > s₁) ≤_LR (X_t | Y > s₂) for all t, all s₁ < s₂, and all F̄_X, F̄_Y;
(iii)
(X_t | Y > s₁) ≤_RHR (X_t | Y > s₂) for all t, all s₁ < s₂, and all F̄_X, F̄_Y;
(iv)
∂_1 Ĉ(u, v) is TP₂ in (u, v) ∈ [0, 1]²;
(v)
SIRL(Y|X).
The proof of the equivalence among (i), (iv), and (v) can be found in [13] or [14]. The proof of the equivalence among (ii), (iii), and (v) can be found in [1]. Once more, it is interesting to observe that SIRL(Y|X) refers to a property describing the behavior of the law of Y given X, while Conditions (i), (ii), and (iii) refer to the behavior of X given Y (see [14]).
Proposition 12.
The following conditions are equivalent:
(i)
(Y | X = t₁) ≤_LR (Y | X = t₂) for all t₁ < t₂, with t₁, t₂ ∈ S_X, and all F̄_X, F̄_Y;
(ii)
(X | Y = s₁) ≤_LR (X | Y = s₂) for all s₁ < s₂, with s₁, s₂ ∈ S_Y, and all F̄_X, F̄_Y;
(iii)
(X_t | Y = s₁) ≤_LR (X_t | Y = s₂) for all t, all s₁ < s₂ with s₁, s₂ ∈ S_Y, and all F̄_X, F̄_Y;
(iv)
(X_t | Y = s₁) ≤_RHR (X_t | Y = s₂) for all t, all s₁ < s₂ with s₁, s₂ ∈ S_Y, and all F̄_X, F̄_Y;
(v)
(Y_s | X = t₁) ≤_LR (Y_s | X = t₂) for all s, all t₁ < t₂ with t₁, t₂ ∈ S_X, and all F̄_X, F̄_Y;
(vi)
(Y_s | X = t₁) ≤_RHR (Y_s | X = t₂) for all s, all t₁ < t₂ with t₁, t₂ ∈ S_X, and all F̄_X, F̄_Y;
(vii)
∂²_{1,2} Ĉ(u, v) is TP₂ in (u, v) ∈ [0, 1]²;
(viii)
∂²_{1,2} C(u, v) is TP₂ in (u, v) ∈ [0, 1]²;
(ix)
(X, Y) is PLRD.
The proof of the equivalence between (vii) and (viii) can be found in [11]. The proof of the equivalence among (i), (ii), and (viii) can be found in [13] or [14]. The proof of the equivalence among (iii), (iv), (v), and (vi) follows from Proposition 2.1.
In order to synthesize all these properties with a uniform notation and to introduce similarly defined new properties, the definitions that follow were proposed in [13,14]. The first one of them deals with the S I notion.
Definition 3.
We say that (X, Y) is stochastically increasing (decreasing) in the order ORD, denoted SI_ORD(X|Y) (SD_ORD(X|Y)), if:
(X | Y = s₁) ≤_ORD (X | Y = s₂)   (≥_ORD)
holds for all s₁ < s₂, with s₁, s₂ ∈ S_Y, for a given stochastic order ORD.
The classes for (Y|X) are defined in a similar manner. With this definition, the class SI(X|Y) can also be written as SI_ST(X|Y); the class SIRL(X|Y) can also be written as SI_HR(X|Y); and Condition (i) in Proposition 12 as SI_LR(Y|X). Note that in this last case, SI_LR(Y|X) is equivalent to SI_LR(X|Y), and so we can simply write SI_LR.
One can do the same with the R T I and L T D classes by proposing the following definitions (see [14]).
Definition 4.
We say that (X, Y) is right tail increasing (decreasing) in the order ORD, denoted RTI_ORD(X|Y) (RTD_ORD(X|Y)), if:
(X | Y > s₁) ≤_ORD (X | Y > s₂)   (≥_ORD)
holds for all s₁ < s₂ for a given stochastic order ORD.
Definition 5.
We say that (X, Y) is left tail decreasing (increasing) in the order ORD, denoted LTD_ORD(X|Y) (LTI_ORD(X|Y)), if:
(X | Y ≤ s₁) ≤_ORD (X | Y ≤ s₂)   (≥_ORD)
holds for all s₁ < s₂ for a given stochastic order ORD.
The classes for (Y|X) are defined in a similar manner. With this definition, the class RTI(X|Y) can also be written as RTI_ST(X|Y); Conditions (i) and (ii) in Proposition 9 can also be written as RTI_HR(X|Y) and RTI_HR(Y|X); and Condition (i) in Proposition 11 as RTI_LR(X|Y).
We can add more notions by considering the following definitions.
Definition 6.
We say that (X, Y) is right tail increasing (decreasing) at zero in the order ORD, denoted RTI_ORD^0(X|Y) (RTD_ORD^0(X|Y)), if:
X ≤_ORD (X | Y > s)   (≥_ORD)
holds for all s for a given stochastic order ORD.
Definition 7.
We say that (X, Y) is left tail decreasing (increasing) at infinity in the order ORD, denoted LTD_ORD^∞(X|Y) (LTI_ORD^∞(X|Y)), if:
X ≥_ORD (X | Y ≤ s)   (≤_ORD)
holds for all s for a given stochastic order ORD.
With these definitions, PQD can also be written as RTI_ST^0(Y|X) or RTI_ST^0(X|Y), but also as LTD_ST^∞(Y|X) or LTD_ST^∞(X|Y). Analogously, RTI(Y|X) and LTD(Y|X) can be written as RTI_HR^0(X|Y) and LTD_RHR^∞(X|Y), respectively. Furthermore, SI(Y|X) can be written as RTI_LR^0(X|Y) or as LTD_LR^∞(X|Y). The proofs of these equivalences can be found in [2,13]. Note also that some of these notions are actually equivalent, like, e.g., RTI_LR^0(Y|X) and SI_ST(X|Y), or RTI_HR^0(Y|X) and RTI_ST(X|Y) (see [14] for details).
The preceding propositions can be connected by using the relationships among the stochastic orders shown in Table 1. They are summarized in Table 2. For example, to prove that SIRL(X|Y) implies RCSI(X, Y), we just note that RCSI(X, Y) = RTI_HR(X|Y), that SIRL(X|Y) = RTI_LR(X|Y), and that the LR order implies the HR order. These relationships prove that they are positive dependence properties, since all of them imply the PQD property. Analogously, the corresponding negative dependence properties imply the NQD property. Some of these relationships were given in [13], p. 166, and in [14]. Connections with the ordering properties of coherent systems and extreme values were given in [16,17].
The relationships for the reversed properties (i.e., those based on cumulative distribution functions rather than survival functions) are given in Table 3. For them, observe, e.g., that SI_RHR(Y|X) holds iff ∂_1 C(u, v) is TP₂ and that SI_RHR(X|Y) holds iff ∂_2 C(u, v) is TP₂ (see [13], where more equivalences were also provided). Note that two of the notions described in this table (SI_RHR(Y|X) and SI_RHR(X|Y)) were previously considered in [13].
Negative dependence properties can be considered as well. For them, one has to consider the R T D and L T I properties, obtaining relationships similar to those described in Table 2 and Table 3.

4. Weak Dependence Notions

In this section, new dependence notions are introduced and discussed. These notions, defined as those satisfying RTI_ORD(X|Y) and RTI_ORD^0(X|Y) where ORD is one of the orders ICX or MRL, or simply the order in expectation (E), are “weak” in the sense that they do not imply the PQD property, which is a necessary condition for a property to be considered a “dependence notion” according to the classical definition given in [3]. However, they imply the non-negativity of the linear correlation coefficient r_{X,Y}, as mentioned after Proposition 15, and this is the reason to consider them “weak dependence notions”. Note that some of these notions have already been considered and applied in the literature; this is the case, for example, of RTI_ICX^0(X|Y), which was named positive stop-loss dependence in [7].
For R T I I C X 0 ( X | Y ) , we have the following properties.
Proposition 13.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$X \le_{ICX} (X|Y>y)$ for all $y \in \mathbb{R}$ (i.e., $RTI_{ICX}^{0}(X|Y)$);
(ii) 
The survival copula $\hat C$ satisfies:
\[ \int_0^z \left[ \hat C(u,v) - uv \right] d\bar F_X^{-1}(u) \le 0, \quad \forall z \in [0,1], \; v \in [0,1]. \]
Proof. 
It is clear that $X \le_{ICX} (X|Y>y)$ for all $y$ if, and only if,
\[ \int_t^{+\infty} \Pr(X>x)\, dx \le \int_t^{+\infty} \Pr(X>x \mid Y>y)\, dx, \quad \forall t, y \in \mathbb{R}. \]
This is equivalent to:
\[ \int_t^{+\infty} \left[ \hat C(\bar F_X(x), \bar F_Y(y)) - \bar F_X(x)\,\bar F_Y(y) \right] dx \ge 0, \quad \forall t, y \in \mathbb{R}, \]
or
\[ \int_0^{\bar F_X(t)} \left[ \hat C(u,v) - uv \right] d\bar F_X^{-1}(u) \le 0, \quad \forall t \in \mathbb{R}, \; v \in [0,1] \]
(the change of variable $u = \bar F_X(x)$ reverses the sign since $\bar F_X^{-1}$ is decreasing). Since $F_X$ is continuous, this is the same as:
\[ \int_0^{z} \left[ \hat C(u,v) - uv \right] d\bar F_X^{-1}(u) \le 0, \quad \forall z \in [0,1], \; v \in [0,1]. \; \square \]
Corollary 3.
Under the above assumptions, if $\bar F_X$ is convex and:
\[ \int_0^z \left[ \hat C(u,v) - uv \right] du \ge 0, \quad \forall z \in [0,1], \; v \in [0,1], \]
then $X \le_{ICX} (X|Y>y)$ for all $y \in \mathbb{R}$ and all $\bar F_Y$.
Proof. 
Since $\bar F_X$ is decreasing and convex, $\bar F_X^{-1}(u)$ is decreasing and convex, and hence $-\bar F_X^{-1}(u)$ is increasing and concave. Then, by applying Lemma 7.1(b) of [6] to Equation (8) above, we obtain:
\[ \int_0^z \left[ \hat C(u,v) - uv \right] d\bar F_X^{-1}(u) \le 0, \quad \forall z \in [0,1], \; v \in [0,1], \]
which is the same as (7). □
In particular, we have that the PQD property implies both R T I I C X 0 ( X | Y ) for all F ¯ Y and all convex F ¯ X and R T I I C X 0 ( Y | X ) for all F ¯ X and all convex F ¯ Y . Note also that if R T I I C X 0 ( X | Y ) holds for a continuous survival function F ¯ Y , then it holds for all continuous survival functions F ¯ Y (since (6) holds for any v = F ¯ Y ( y ) and all y).
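The integral condition of Corollary 3 can be checked numerically on a grid. The following sketch (a minimal midpoint-rule approximation of our own, using the FGM family purely as an illustration; the function names are ours) confirms that $\int_0^z [\hat C(u,v) - uv]\,du \ge 0$ holds for an FGM survival copula with $\theta \ge 0$ and fails for $\theta < 0$:

```python
def fgm_survival_copula(u, v, theta):
    # Farlie-Gumbel-Morgenstern family, |theta| <= 1 (illustrative choice)
    return u * v + theta * u * v * (1 - u) * (1 - v)

def min_partial_integral(theta, n=400):
    """Minimum over (z, v) of int_0^z [C^(u,v) - u v] du,
    approximated by a midpoint Riemann sum on an n-point grid."""
    vmin = float("inf")
    for j in range(n + 1):
        v = j / n
        acc = 0.0
        for i in range(n):
            u = (i + 0.5) / n
            acc += (fgm_survival_copula(u, v, theta) - u * v) / n
            vmin = min(vmin, acc)  # acc ~ int_0^{(i+1)/n} [...] du
    return vmin

print(min_partial_integral(0.5))   # non-negative (up to grid error): condition holds
print(min_partial_integral(-0.5))  # clearly negative: condition fails for theta < 0
```

For the FGM family the integral has the closed form $\theta\, v(1-v)(z^2/2 - z^3/3)$, which makes the sign pattern plain.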
For R T I M R L 0 ( X | Y ) , we have the following properties.
Proposition 14.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$X \le_{MRL} (X|Y>y)$ for all $y \in \mathbb{R}$ (i.e., $RTI_{MRL}^{0}(X|Y)$);
(ii) 
$X_t \le_{MRL} X_{t,s}$ for all $(t,s) \in \mathbb{R}^2$;
(iii) 
$X_t \le_{ICX} X_{t,s}$ for all $(t,s) \in \mathbb{R}^2$;
(iv) 
The survival copula $\hat C$ satisfies:
\[ \int_0^z \left[ z\,\hat C(u,v) - u\,\hat C(z,v) \right] d\bar F_X^{-1}(u) \le 0, \quad \forall z, v \in [0,1]. \]
Proof. 
The equivalence between the first three items follows from Proposition 3. For the equivalence with (iv), observe that $X \le_{MRL} (X|Y>y)$ for all $y \in S_Y$ if, and only if,
\[ \int_t^{+\infty} \bar F(x,0)\, dx \Big/ \left[ \int_t^{+\infty} \bar F(x,y)\, dx \Big/ \bar F(0,y) \right] \ \text{ decreases in } t, \quad \forall y, \]
i.e., if, and only if,
\[ \int_t^{+\infty} \hat C(\bar F_X(x),1)\, dx \Big/ \int_t^{+\infty} \hat C(\bar F_X(x),v)\, dx \ \text{ decreases in } t, \quad \forall v \in [0,1]. \]
This is equivalent to:
\[ -\hat C(\bar F_X(t),1) \int_t^{+\infty} \hat C(\bar F_X(x),v)\, dx + \hat C(\bar F_X(t),v) \int_t^{+\infty} \hat C(\bar F_X(x),1)\, dx \le 0 \]
for all $v \in [0,1]$ and $t$. The latter can be written as:
\[ \int_t^{+\infty} \left[ \hat C(\bar F_X(x),v)\,\bar F_X(t) - \hat C(\bar F_X(t),v)\,\bar F_X(x) \right] dx \ge 0, \quad \forall v \in [0,1], \; t, \]
which is the same as:
\[ \int_0^{z} \left[ z\,\hat C(u,v) - u\,\hat C(z,v) \right] d\bar F_X^{-1}(u) \le 0, \quad \forall v, z \in [0,1], \]
i.e., (10). □
Corollary 4.
Under the above assumptions, if $\bar F_X$ is strictly monotone and convex and:
\[ \int_0^z \hat C(u,v)\, du \Big/ \int_0^z \hat C(u,1)\, du \ \text{ is decreasing in } z \ \text{ for all } v \in [0,1], \]
then $X \le_{MRL} (X|Y>y)$ for all $y \in S_Y$ and all $\bar F_Y$.
Proof. 
Observe that (11) is equivalent to:
\[ \int_z^1 \hat C(1-u,1)\, du \Big/ \int_z^1 \hat C(1-u,1-v)\, du \ \text{ is decreasing in } z \ \text{ for all } v \in [0,1]. \]
Defining:
\[ K(1-u,i) = \begin{cases} \hat C(1-u,1), & i=0, \\ \hat C(1-u,1-v), & i=1, \end{cases} \]
(11) can be restated as the $TP_2$ in $(z,i)$ property of $\int_z^1 K(1-u,i)\, du$. Thus, by applying Corollary 1, one gets that $\int_t^{+\infty} K(1-F_X(u),i)\, du$ is also $TP_2$ in $(t,i)$, i.e., that:
\[ \int_t^{+\infty} \hat C(1-F_X(u), 1-F_Y(0))\, du \Big/ \int_t^{+\infty} \hat C(1-F_X(u), 1-F_Y(v))\, du \ \text{ is decreasing in } t \ \text{ for all } v, \]
which means:
\[ \int_t^{+\infty} \bar F(u,0)\, du \Big/ \int_t^{+\infty} \bar F(u,s)\, du \ \text{ is decreasing in } t \ \text{ for all } s \ge 0, \]
i.e., $X \le_{MRL} (X|Y>y)$ for all $y$. Note that the strict monotonicity of $F_X$ is required in this proof. □
Note that the sufficient conditions for $RTI_{MRL}^{0}(X|Y)$ described in Corollary 4 were already proven in [2] with a different and longer proof; there, moreover, the strict monotonicity of $\bar F_X$ is not required. Note also that (11) is a necessary condition as well, since Equation (3.5) in [2] is what one gets by choosing uniform distributions. However, we think that the convexity of $\bar F_X$ is not a necessary condition.
Alternative conditions for $RTI_{MRL}^{0}(X|Y)$ are also described in Proposition 3.3 (vi) of [13] (obtained from [18]); they are:
\[ E(X) \le E(X \mid Y>y), \quad \forall y \in \mathbb{R}, \]
and:
\[ \frac{\hat C(u,v)}{u} \ \text{ is bathtub-shaped in } u \ \text{ for all } v \in [0,1]. \]
Clearly, (13) is a necessary condition (since both sides are the values of the respective MRL functions at zero).
It is useful to point out that the counterexamples provided in Section 6 show that there are no implications between the PQD property (i.e., $\hat C(u,v) \ge uv$ for all $u,v \in [0,1]$) and either (11) or (14).
The Positive Quadrant Dependence in Expectation property ($PQDE$), defined in [19], is considered in the following statement. According to the definition provided in [19], and the notation used here, we say that the vector $(X,Y)$ satisfies $PQDE(X|Y)$ if, and only if, $E(X) \le E(X|Y>y)$ for all $y \in \mathbb{R}$ (and similarly for $PQDE(Y|X)$). Note that, actually, a study of a negative dependence notion analogous to the property $PQDE(Y|X)$ goes back to [20].
Proposition 15.
Let $(X,Y)$ be a vector of non-negative variables having continuous marginal distribution functions $F_X$ and $F_Y$, respectively. Assuming that $E(X|Y>y)$ and $E(X|Y \le y)$ exist for all $y$, the following conditions are equivalent:
(i) 
$E(X) \le E(X|Y>y)$ for all $y \in \mathbb{R}$ (i.e., $PQDE(X|Y)$);
(ii) 
$E(X) \ge E(X|Y \le y)$ for all $y \in \mathbb{R}$;
(iii) 
$\mathrm{Cov}(X, \phi(Y)) \ge 0$ for all increasing real functions $\phi$ for which the covariance exists;
(iv) 
The survival copula $\hat C$ satisfies:
\[ \int_0^1 \left( \hat C(u,v) - uv \right) d\bar F_X^{-1}(u) \le 0, \quad \forall v \in [0,1]. \]
Proof. 
For the equivalence between (i) and (ii), note that, for fixed $y \in \mathbb{R}$, one has:
\[ E(X) = \Pr(Y>y)\, E(X|Y>y) + \Pr(Y \le y)\, E(X|Y \le y), \]
which implies:
\[ \left[ E(X) - E(X|Y>y) \right] \Pr(Y>y) = \Pr(Y \le y) \left[ E(X|Y \le y) - E(X) \right]. \]
Thus, $E(X) - E(X|Y>y)$ and $E(X|Y \le y) - E(X)$ must have the same sign, and the equivalence follows. In a similar manner, again by using Equation (16), one can prove the equivalence between (i) and (iii) (see [20], Theorem 3.1, for details). For the relationship between (i) and (iv), observe that:
\[ E(X) = -\int_{-\infty}^{0} F(x)\, dx + \int_{0}^{+\infty} \bar F(x)\, dx, \]
and similarly for $E(X|Y>y)$. If the marginal expectations are finite, then:
\[ E(X|Y>y) - E(X) = \int_{-\infty}^{+\infty} \left[ \Pr(X>x \mid Y>y) - \Pr(X>x) \right] dx. \]
We can thus verify that $E(X) \le E(X|Y>y)$ for all $y$ if, and only if,
\[ \int_{-\infty}^{+\infty} \left[ \hat C(\bar F_X(x), \bar F_Y(y)) - \bar F_X(x)\,\bar F_Y(y) \right] dx \ge 0, \quad \forall y, \]
or:
\[ \int_0^1 \left( \hat C(u,v) - uv \right) d\bar F_X^{-1}(u) \le 0, \quad \forall v \in [0,1]. \; \square \]
Note that, as an immediate consequence of the preceding proposition, the PQD property implies $E(X) \le E(X|Y>y)$ for all $y$ and all $\bar F_X, \bar F_Y$, provided the expectations exist. Moreover, it is easy to verify that $r_{X,Y} \ge 0$ if $E(X) \le E(X|Y>y)$ for all $y$, where:
\[ r_{X,Y} = \frac{1}{\sigma_X \sigma_Y} \int_0^1 \int_0^1 \left( \hat C(u,v) - uv \right) d\bar F_X^{-1}(u)\, d\bar F_Y^{-1}(v) \]
is Pearson's correlation coefficient (here, $\sigma_X$ and $\sigma_Y$ are the standard deviations of X and Y, respectively). For the formula of $r_{X,Y}$, see, e.g., [21].
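The copula-based formula for $r_{X,Y}$ can be checked numerically. The sketch below (our own illustrative code, with $U(0,1)$ margins so that $d\bar F^{-1}(u) = -du$ and $\sigma^2 = 1/12$) evaluates the double integral for an FGM survival copula; for this family the exact value is $\theta/3$:

```python
def fgm_survival_copula(u, v, theta):
    # FGM family, used here as a hypothetical illustration
    return u * v + theta * u * v * (1 - u) * (1 - v)

def pearson_r_uniform(theta, n=400):
    """Hoeffding-type formula for r_{X,Y} with U(0,1) margins:
    r = 12 * int int (C^(u,v) - u v) du dv, by a midpoint product rule."""
    total = 0.0
    for i in range(n):
        u = (i + 0.5) / n
        for j in range(n):
            v = (j + 0.5) / n
            total += fgm_survival_copula(u, v, theta) - u * v
    return 12.0 * total / (n * n)

# For the FGM family this integral has the closed form theta / 3
print(pearson_r_uniform(0.6))  # ~ 0.2
```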
For R T I M R L ( X | Y ) , we have the following property.
Proposition 16.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$(X|Y>y_1) \le_{MRL} (X|Y>y_2)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $RTI_{MRL}(X|Y)$);
(ii) 
$(X_t \mid Y>y_1) \le_{MRL} (X_t \mid Y>y_2)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and for all $t \in \mathbb{R}$;
(iii) 
$(X_t \mid Y>y_1) \le_{ICX} (X_t \mid Y>y_2)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and for all $t \in \mathbb{R}$;
(iv) 
The survival copula $\hat C$ satisfies:
\[ \int_0^z \left[ \hat C(z,v_1)\,\hat C(u,v_2) - \hat C(z,v_2)\,\hat C(u,v_1) \right] d\bar F_X^{-1}(u) \ge 0 \]
for all $z \in [0,1]$ and $0 \le v_1 \le v_2 \le 1$.
Proof. 
The equivalence between the first three items follows from Proposition 3. For the equivalence with (iv), observe that $(X|Y>y_1) \le_{MRL} (X|Y>y_2)$ for all $y_1 \le y_2$ if, and only if,
\[ \int_t^{+\infty} \bar F(x,y_1)\, dx \Big/ \int_t^{+\infty} \bar F(x,y_2)\, dx \ \text{ decreases in } t, \quad \forall y_1 \le y_2, \]
i.e., if, and only if,
\[ \int_t^{+\infty} \hat C(\bar F_X(x),v_1)\, dx \Big/ \int_t^{+\infty} \hat C(\bar F_X(x),v_2)\, dx \ \text{ decreases in } t, \quad \forall\, 0 \le v_2 \le v_1 \le 1. \]
This is equivalent to:
\[ -\hat C(\bar F_X(t),v_1) \int_t^{+\infty} \hat C(\bar F_X(x),v_2)\, dx + \hat C(\bar F_X(t),v_2) \int_t^{+\infty} \hat C(\bar F_X(x),v_1)\, dx \le 0 \]
for all $0 \le v_2 \le v_1 \le 1$ and $t \in S_X$. The latter can be written as:
\[ \int_0^z \left[ \hat C(z,v_1)\,\hat C(u,v_2) - \hat C(z,v_2)\,\hat C(u,v_1) \right] d\bar F_X^{-1}(u) \le 0 \]
for all $z \in [0,1]$ and $0 \le v_2 \le v_1 \le 1$, which is equivalent to (17). □
Note that, as an immediate consequence of the preceding proposition, the RCSI property (i.e., $\hat C$ being $TP_2$) implies $(X|Y>y_1) \le_{MRL} (X|Y>y_2)$ for all $y_1 \le y_2$ and all $\bar F_X, \bar F_Y$.
Corollary 5.
Under the above assumptions, if $\bar F_X$ is strictly monotone and convex and:
\[ \int_0^z \hat C(u,v)\, du \ \text{ is } TP_2 \text{ in } (z,v) \in [0,1]^2, \]
then $(X|Y>y_1) \le_{MRL} (X|Y>y_2)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and all $\bar F_Y$.
Proof. 
From (18) and the convexity of $\bar F_X$, by applying Corollary 1, one gets that $\int_t^{+\infty} \hat C(1-F_X(u), 1-v)\, du$ is also $TP_2$ in $(t,v)$. This implies that:
\[ \int_t^{+\infty} \hat C(1-F_X(u), 1-F_Y(v_1))\, du \Big/ \int_t^{+\infty} \hat C(1-F_X(u), 1-F_Y(v_2))\, du \ \text{ is decreasing in } t \]
for all $0 \le v_1 \le v_2 \le 1$, which means:
\[ \int_t^{+\infty} \bar F(u,s_1)\, du \Big/ \int_t^{+\infty} \bar F(u,s_2)\, du \ \text{ is decreasing in } t \ \text{ for all } 0 \le s_1 \le s_2, \]
i.e., $(X|Y>s_1) \le_{MRL} (X|Y>s_2)$ for all $s_1 \le s_2$. □
Note that the sufficient conditions for $RTI_{MRL}(X|Y)$ described in Corollary 5 were already proven in [1] with a different and less immediate proof; there, moreover, the strict monotonicity of $\bar F_X$ is not required.
The relationships between these new dependence notions and the ones described in the previous section are summarized in Table 4. Note that the fact that $PQD(X,Y)$ is not implied by $RTI_{MRL}(X|Y)$, mentioned in the table, is due to the fact that the MRL order does not imply the ST order.
To avoid the false implications in Table 4, one can try to replace the PQD property with the $RTI_{ICX}(X|Y)$ property, for which we have the following statements.
Proposition 17.
Let $(X,Y)$ be a random vector with continuous distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$(X|Y>y_1) \le_{ICX} (X|Y>y_2)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $RTI_{ICX}(X|Y)$);
(ii) 
The survival copula $\hat C$ satisfies:
\[ \int_0^z \left[ v_2\,\hat C(u,v_1) - v_1\,\hat C(u,v_2) \right] d\bar F_X^{-1}(u) \le 0, \quad \forall z \in [0,1], \ 0 \le v_1 \le v_2 \le 1. \]
Proof. 
Observe that $(X|Y>y_1) \le_{ICX} (X|Y>y_2)$ for $y_1 \le y_2$ if, and only if,
\[ \int_t^{+\infty} \left[ \frac{\bar F(x,y_1)}{\bar F(0,y_1)} - \frac{\bar F(x,y_2)}{\bar F(0,y_2)} \right] dx \le 0, \quad \forall t, \]
i.e., if, and only if,
\[ \int_t^{+\infty} \left[ \bar F(x,y_1)\,\bar F(0,y_2) - \bar F(x,y_2)\,\bar F(0,y_1) \right] dx \le 0, \quad \forall t. \]
Setting $v_1 = \bar F_Y(y_2)$ and $v_2 = \bar F_Y(y_1)$ (so that $v_1 \le v_2$), the latter is the same as:
\[ \int_t^{+\infty} \left[ v_2\,\hat C(\bar F_X(x),v_1) - v_1\,\hat C(\bar F_X(x),v_2) \right] dx \ge 0 \]
for $v_1 \le v_2$ and $t \in \mathbb{R}$, which can also be restated as:
\[ \int_0^{\bar F_X(t)} \left[ v_2\,\hat C(u,v_1) - v_1\,\hat C(u,v_2) \right] d\bar F_X^{-1}(u) \le 0 \]
for $v_1 \le v_2$ and $t \in \mathbb{R}$. This is the same as (19). □
Corollary 6.
Under the above assumptions, if $\bar F_X$ is convex and:
\[ \int_0^z \left[ v_2\,\hat C(u,v_1) - v_1\,\hat C(u,v_2) \right] du \ge 0, \quad \forall z \in [0,1], \ 0 \le v_1 \le v_2 \le 1, \]
then $(X|Y>y_1) \le_{ICX} (X|Y>y_2)$ for $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and all $F_Y$.
Proof. 
Since $\bar F_X$ is decreasing and convex, $\bar F_X^{-1}(u)$ is decreasing and convex, and hence $-\bar F_X^{-1}(u)$ is increasing and concave. Then, by applying Lemma 7.1(b) of [6] to Equation (20) above, we obtain:
\[ \int_0^z \left[ v_2\,\hat C(u,v_1) - v_1\,\hat C(u,v_2) \right] d\bar F_X^{-1}(u) \le 0 \]
for all $z \in [0,1]$ and $0 \le v_1 \le v_2 \le 1$, which is the same as (19). □
Thus, the relationships between the new dependence notions (and some of the strong ones) are summarized in Table 5, where only the dependencies of X given Y are considered (due to the fact that $RTI_{ICX}(X|Y)$ is not symmetric, i.e., $RTI_{ICX}(X|Y)$ is different from $RTI_{ICX}(Y|X)$). However, a similar table can be obtained for $(Y|X)$.
None of the properties mentioned above is independent of the marginal distributions of $(X,Y)$, and the convexity of the marginal survival functions is required. However, they suggest interesting properties of the survival copula of the vector. In the following, these particular properties are studied in detail, and their mutual relationships are pointed out.
To this aim, first observe that considering the property of the copula rather than the property of the vector, one gets a different final result in terms of dependence indexes. In fact, letting P denote the property:
\[ \int_0^z \left[ \hat C(u,v) - uv \right] du \ge 0, \quad \forall z \in [0,1], \ v \in [0,1], \]
and letting $\tilde P$ denote the property:
\[ X \le_{ICX} (X|Y>y) \ \ \forall y \in \mathbb{R}, \ \text{ for fixed continuous survival functions } \bar F_X, \bar F_Y, \]
one immediately observes that both are positive dependence properties weaker than PQD, but with different implications. Property P verifies:
\[ PQD \ \Rightarrow\ P \ \Rightarrow\ \rho_{X,Y} \ge 0, \]
where:
\[ \rho_{X,Y} = 12 \int_0^1 \int_0^1 C(u,v)\, du\, dv - 3 \]
is Spearman's rho coefficient for X and Y (for the formula of $\rho_{X,Y}$, see Equation (5.1.15c) in [11]), and the first implication follows from (6). Property $\tilde P$ verifies:
\[ PQD \ \Rightarrow\ \tilde P \ \Rightarrow\ r_{X,Y} \ge 0. \]
Because of this fact, one can define weak dependence properties, as has been done for P, by letting the margins be uniformly distributed on $(0,1)$ in the definitions above. Doing this, one gets the weak positive dependence properties described in the statement that follows (which is easy to prove). Note that property P described above is denoted here as $P_2$.
Proposition 18.
Let $(U,V)$ be a random vector having uniformly $U(0,1)$ distributed margins. Then:
$(P_1)$ 
$E(U) \le E(U|V>v)$ for all $v$ (i.e., $PQDE(U|V)$) if, and only if,
\[ \int_0^1 \left[ \hat C(u,v) - uv \right] du \ge 0 \ \text{ for all } v \in [0,1]; \]
$(P_2)$ 
$U \le_{ICX} (U|V>v)$ for all $v$ (i.e., $RTI_{ICX}^{0}(U|V)$) if, and only if,
\[ \int_0^z \left[ \hat C(u,v) - uv \right] du \ge 0 \ \text{ for all } v, z \in [0,1]; \]
$(P_3)$ 
$U \le_{MRL} (U|V>v)$ for all $v$ (i.e., $RTI_{MRL}^{0}(U|V)$) if, and only if,
\[ \int_0^z \left[ z\,\hat C(u,v) - u\,\hat C(z,v) \right] du \ge 0 \ \text{ for all } v, z \in [0,1]; \]
$(P_4)$ 
$(U|V>v_1) \le_{ICX} (U|V>v_2)$ for all $v_1 \le v_2$ (i.e., $RTI_{ICX}(U|V)$) if, and only if,
\[ \int_0^z \left[ v_2\,\hat C(u,v_1) - v_1\,\hat C(u,v_2) \right] du \ge 0 \ \text{ for all } z \in [0,1] \text{ and } v_1 \le v_2; \]
$(P_5)$ 
$(U|V>v_1) \le_{MRL} (U|V>v_2)$ for all $v_1 \le v_2$ (i.e., $RTI_{MRL}(U|V)$) if, and only if,
\[ \int_0^z \left[ \hat C(z,v_2)\,\hat C(u,v_1) - \hat C(z,v_1)\,\hat C(u,v_2) \right] du \ge 0 \ \text{ for all } z \in [0,1] \text{ and } v_1 \le v_2. \]
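The five integral conditions of Proposition 18 lend themselves to a direct numerical check. The following sketch (a coarse, illustrative grid test of our own, not part of the formal development) evaluates $P_1$–$P_5$ for a given survival copula, here the FGM family, for which all five properties hold when $\theta \ge 0$ (the copula is even $TP_2$) and all five fail when $\theta < 0$:

```python
import itertools

def fgm(u, v, theta):
    # FGM survival copula, used as an illustration
    return u * v + theta * u * v * (1 - u) * (1 - v)

def check_P_properties(chat, n=24, tol=1e-9):
    """Grid check of the integral conditions P1..P5 of Proposition 18.
    Returns a dict {name: bool}; n is the (coarse) grid size."""
    us = [(i + 0.5) / n for i in range(n)]
    grid = [j / n for j in range(n + 1)]

    def I(f, z):  # midpoint approximation of int_0^z f(u) du
        return sum(f(u) for u in us if u < z) / n

    ok = {k: True for k in ("P1", "P2", "P3", "P4", "P5")}
    for v in grid:
        if I(lambda u: chat(u, v) - u * v, 1.0) < -tol:
            ok["P1"] = False
        for z in grid:
            if I(lambda u: chat(u, v) - u * v, z) < -tol:
                ok["P2"] = False
            if I(lambda u: z * chat(u, v) - u * chat(z, v), z) < -tol:
                ok["P3"] = False
    for v1, v2 in itertools.combinations(grid, 2):  # v1 < v2
        for z in grid:
            if I(lambda u: v2 * chat(u, v1) - v1 * chat(u, v2), z) < -tol:
                ok["P4"] = False
            if I(lambda u: chat(z, v2) * chat(u, v1) - chat(z, v1) * chat(u, v2), z) < -tol:
                ok["P5"] = False
    return ok

print(check_P_properties(lambda u, v: fgm(u, v, 0.5)))   # all five hold
print(check_P_properties(lambda u, v: fgm(u, v, -0.5)))  # all five fail
```

For the FGM family the integrands have simple closed forms (e.g., the $P_4$ integrand equals $\theta\,u v_1 v_2 (1-u)(v_2-v_1)$), so the uniform sign in $\theta$ is visible by hand as well.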
Remark 1.
The relationships between these properties can be better understood by considering their equivalent formulations described as follows.
For (23), observe that it is the same as:
\[ \int_0^1 \left[ \hat C(u,v)\,\hat C(1,1) - \hat C(u,1)\,\hat C(1,v) \right] du \ge 0 \ \text{ for all } v \in [0,1], \]
i.e.,
\[ \frac{\int_0^1 \hat C(u,v)\, du}{\hat C(1,v)} \ge \frac{\int_0^1 \hat C(u,1)\, du}{\hat C(1,1)}, \quad \forall v \in [0,1]. \]
In a similar manner, one has that (24) holds if, and only if,
\[ \frac{\int_0^z \hat C(u,v)\, du}{\hat C(1,v)} \ge \frac{\int_0^z \hat C(u,1)\, du}{\hat C(1,1)}, \quad \forall v, z \in [0,1]. \]
Clearly, (29) implies (28) (take $z = 1$), i.e., property $P_2$ implies $P_1$.
For (25), observe that it is the same as:
\[ \hat C(z,1) \int_0^z \hat C(u,v)\, du - \hat C(z,v) \int_0^z \hat C(u,1)\, du \ge 0 \ \text{ for all } v, z \in [0,1], \]
which in turn means that:
\[ \int_0^z \hat C(u,v)\, du \Big/ \int_0^z \hat C(u,1)\, du \ \text{ decreases in } z \in [0,1] \ \text{ for every fixed } v \in [0,1]. \]
Note that (30) is the same as:
\[ \frac{\int_0^z \hat C(u,v)\, du}{\hat C(z,v)} \ge \frac{\int_0^z \hat C(u,1)\, du}{\hat C(z,1)}, \quad \forall z, v \in [0,1]. \]
Thus it follows that, if (25) holds, then:
\[ \frac{\int_0^z \hat C(u,v)\, du}{\int_0^z \hat C(u,1)\, du} \ \ge\ \frac{\int_0^1 \hat C(u,v)\, du}{\int_0^1 \hat C(u,1)\, du} \ \ge\ \frac{\hat C(1,v)}{\hat C(1,1)}, \quad \forall z, v \in [0,1] \]
(the first inequality by the monotonicity just established, the second from (30) at $z = 1$), and (24) holds. Thus, property $P_3$ implies $P_2$.
For (26), just observe that we can rewrite it as:
\[ \int_0^z \left[ \hat C(1,v_2)\,\hat C(u,v_1) - \hat C(1,v_1)\,\hat C(u,v_2) \right] du \ge 0 \ \text{ for all } z \in [0,1] \text{ and } v_1 \le v_2, \]
i.e.,
\[ \frac{\int_0^z \hat C(u,v_1)\, du}{\hat C(1,v_1)} \ge \frac{\int_0^z \hat C(u,v_2)\, du}{\hat C(1,v_2)}, \quad \forall z \in [0,1], \ 0 \le v_1 \le v_2 \le 1. \]
Clearly, (31) implies (29) (take $v_2 = 1$), i.e., property $P_4$ implies $P_2$.
Finally, (27) is the same as:
\[ \frac{\int_0^z \hat C(u,v_1)\, du}{\hat C(z,v_1)} \ge \frac{\int_0^z \hat C(u,v_2)\, du}{\hat C(z,v_2)}, \quad \forall z \in [0,1], \ 0 \le v_1 \le v_2 \le 1, \]
which means that:
\[ \int_0^z \hat C(u,v_1)\, du \Big/ \int_0^z \hat C(u,v_2)\, du \ \text{ decreases in } z \in [0,1] \ \text{ for every fixed } v_1 \le v_2, \]
i.e., $\int_0^z \hat C(u,v)\, du$ is $TP_2$ in $(z,v) \in [0,1]^2$. Clearly, (32) implies (30) (take $v_2 = 1$), i.e., $P_5$ implies $P_3$. Moreover, it implies (31) (i.e., $P_4$), as can be proven in a similar manner as we proved that (30) implies (29).
These equivalent formulations of the properties of $\hat C$, and their mutual implications, are described in Table 6. Note that all these properties depend only on the copula, thus being actual dependence properties. Some strong dependence properties are included in the chain of relationships, namely the PQD ($PQD(U,V)$), the RTI ($RTI_{ST}(U|V)$), and the RCSI ($RCSI(U,V)$) properties.

5. Reversed Weak Dependence Notions

By considering relationships between inactivity times rather than residual lifetimes, one can define other dependence notions, which we call "reversed weak dependence notions". In particular, the notions considered here are based on comparisons in the MIT and ICV orders and on the LTD property; thus, the notions considered here are $LTD_{ICV}^{0}(X|Y)$, $LTD_{MIT}^{0}(X|Y)$, $LTD_{ICV}(X|Y)$, $LTD_{MIT}(X|Y)$, and the one based on the inequality $E(X) \ge E(X|Y \le y)$ for all $y \ge 0$ (which is actually equivalent to $PQDE(X|Y)$, as seen in Proposition 15). Here, the superscript 0 denotes, as in Section 4, the comparisons of $X$ with the conditioned variables $(X|Y \le y)$.
All the proofs of the statements described in this section follow the same lines of those described in Section 4 and are therefore omitted, except for the first one (given as an example).
Proposition 19.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, $X \ge_{ICV} (X|Y \le y)$ for all $y \in \mathbb{R}$ (i.e., $LTD_{ICV}^{0}(X|Y)$) if, and only if:
\[ \int_0^z \left( C(u,v) - uv \right) dF_X^{-1}(u) \ge 0, \quad \forall z, v \in [0,1]. \]
Proof. 
It is clear that $X \ge_{ICV} (X|Y \le y)$ for all $y$ if, and only if,
\[ \int_{-\infty}^{t} \Pr(X \le x)\, dx \le \int_{-\infty}^{t} \Pr(X \le x \mid Y \le y)\, dx, \quad \forall t, y. \]
This is equivalent to:
\[ \int_{-\infty}^{t} \left[ C(F_X(x), F_Y(y)) - F_X(x)\,F_Y(y) \right] dx \ge 0, \quad \forall t, y, \]
or:
\[ \int_0^{F_X(t)} \left( C(u,v) - uv \right) dF_X^{-1}(u) \ge 0, \quad \forall t, \ v \in [0,1]. \]
Since $F_X$ is continuous, this is the same as (33). □
Corollary 7.
Under the above assumptions, if $F_X$ is convex and:
\[ \int_0^t \left( C(u,v) - uv \right) du \ge 0, \quad \forall t, v \in [0,1], \]
then $X \ge_{ICV} (X|Y \le y)$ for all $y \in \mathbb{R}$ (i.e., $LTD_{ICV}^{0}(X|Y)$), for all $F_Y$.
Proof. 
The result follows by applying Lemma 7.1(b) of [6] to Equation (34) above. □
From Proposition 6, we obtain the following result.
Proposition 20.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$X \ge_{MIT} (X|Y \le y)$ for all $y \in \mathbb{R}$ (i.e., $LTD_{MIT}^{0}(X|Y)$);
(ii) 
${}_tX \le_{MIT} ({}_tX \mid Y \le y)$ for all $y, t \in \mathbb{R}$, where ${}_tX = (t - X \mid X \le t)$ denotes the inactivity time;
(iii) 
${}_tX \le_{ICX} ({}_tX \mid Y \le y)$ for all $y, t \in \mathbb{R}$;
(iv) 
\[ \int_0^t \left[ t\,C(u,v) - u\,C(t,v) \right] dF_X^{-1}(u) \ge 0, \quad \forall t, v \in [0,1]. \]
Corollary 8.
Under the above assumptions, if $F_X$ is convex and:
\[ \int_0^t \left[ t\,C(u,v) - u\,C(t,v) \right] du \ge 0, \quad \forall t, v \in [0,1], \]
then $X \ge_{MIT} (X|Y \le y)$ for all $y \in \mathbb{R}$ (i.e., $LTD_{MIT}^{0}(X|Y)$), for all $F_Y$.
Proposition 21.
Let $(X,Y)$ be a random vector with continuous distribution functions $F_X$ and $F_Y$. Then, $(X|Y \le y_2) \ge_{ICV} (X|Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $LTD_{ICV}(X|Y)$) if, and only if:
\[ \int_0^t \left[ v_2\,C(u,v_1) - v_1\,C(u,v_2) \right] dF_X^{-1}(u) \ge 0, \quad \forall t \in [0,1], \ 0 \le v_1 \le v_2 \le 1. \]
Corollary 9.
Under the above assumptions, if $F_X$ is convex and:
\[ \int_0^t \left[ v_2\,C(u,v_1) - v_1\,C(u,v_2) \right] du \ge 0, \quad \forall t \in [0,1], \ 0 \le v_1 \le v_2 \le 1, \]
then $(X|Y \le y_2) \ge_{ICV} (X|Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $LTD_{ICV}(X|Y)$), for all $F_Y$.
The following result can also be obtained from Proposition 6.
Proposition 22.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, the following conditions are equivalent:
(i) 
$(X|Y \le y_2) \ge_{MIT} (X|Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $LTD_{MIT}(X|Y)$);
(ii) 
$({}_tX \mid Y \le y_2) \le_{MIT} ({}_tX \mid Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and $t \in \mathbb{R}$;
(iii) 
$({}_tX \mid Y \le y_2) \le_{ICX} ({}_tX \mid Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$, and $t \in \mathbb{R}$;
(iv) 
\[ \int_0^t \left[ C(t,v_2)\,C(u,v_1) - C(t,v_1)\,C(u,v_2) \right] dF_X^{-1}(u) \ge 0, \quad \forall t \in [0,1], \ 0 \le v_1 \le v_2 \le 1. \]
Corollary 10.
Under the above assumptions, if $F_X$ is convex and:
\[ \int_0^t \left[ C(t,v_2)\,C(u,v_1) - C(t,v_1)\,C(u,v_2) \right] du \ge 0, \quad \forall t \in [0,1], \ 0 \le v_1 \le v_2 \le 1, \]
then $(X|Y \le y_2) \ge_{MIT} (X|Y \le y_1)$ for all $y_1 \le y_2$, $y_1, y_2 \in \mathbb{R}$ (i.e., $LTD_{MIT}(X|Y)$), for all $F_Y$.
Proposition 23.
Let $(X,Y)$ be a random vector with continuous marginal distribution functions $F_X$ and $F_Y$. Then, $E(X) \ge E(X|Y \le y)$ for all $y \in \mathbb{R}$, whenever the expectations exist, if, and only if:
\[ \int_0^1 \left( C(u,v) - uv \right) dF_X^{-1}(u) \ge 0, \quad \forall v \in [0,1]. \]
Note that the following chain of implications holds:
\[ PQD \ \Rightarrow\ X \ge_{ICV} (X|Y \le y) \ \forall y \ \Rightarrow\ E(X) \ge E(X|Y \le y) \ \forall y \ \Rightarrow\ r_{X,Y} \ge 0. \]
Thus, all the notions described above are positive dependence notions, depending on the marginal distributions of ( X , Y ) , whose relationships are described in Table 7.
As has been done for the positive dependence notions in the previous section, one can again define weak dependence properties that are independent of the margins by considering only the properties of the connecting copula C, thus letting the margins be uniformly distributed on ( 0 , 1 ) in the definitions above. Doing so, one gets the weak positive dependence properties described in the statement that follows.
Proposition 24.
Let $(U,V)$ be a random vector having uniformly $U(0,1)$ distributed margins. Then:
$(P_1^R)$ 
$E(U) \ge E(U|V \le v)$ for all $v$ (i.e., $PQDE(U|V)$) if, and only if,
\[ \int_0^1 \left[ C(u,v) - uv \right] du \ge 0 \ \text{ for all } v \in [0,1]; \]
$(P_2^R)$ 
$U \ge_{ICV} (U|V \le v)$ for all $v$ (i.e., $LTD_{ICV}^{0}(U|V)$) if, and only if,
\[ \int_0^z \left[ C(u,v) - uv \right] du \ge 0 \ \text{ for all } v, z \in [0,1]; \]
$(P_3^R)$ 
$U \ge_{MIT} (U|V \le v)$ for all $v$ (i.e., $LTD_{MIT}^{0}(U|V)$) if, and only if,
\[ \int_0^z \left[ z\,C(u,v) - u\,C(z,v) \right] du \ge 0 \ \text{ for all } v, z \in [0,1]; \]
$(P_4^R)$ 
$(U|V \le v_2) \ge_{ICV} (U|V \le v_1)$ for all $0 \le v_1 \le v_2 \le 1$ (i.e., $LTD_{ICV}(U|V)$) if, and only if,
\[ \int_0^z \left[ v_2\,C(u,v_1) - v_1\,C(u,v_2) \right] du \ge 0 \ \text{ for all } z \in [0,1] \text{ and } 0 \le v_1 \le v_2 \le 1; \]
$(P_5^R)$ 
$(U|V \le v_2) \ge_{MIT} (U|V \le v_1)$ for all $v_1 \le v_2$ (i.e., $LTD_{MIT}(U|V)$) if, and only if,
\[ \int_0^z \left[ C(z,v_2)\,C(u,v_1) - C(z,v_1)\,C(u,v_2) \right] du \ge 0 \ \text{ for all } z \in [0,1] \text{ and } 0 \le v_1 \le v_2 \le 1. \]
The relationships among these notions can be proven as described for the weak positive dependence properties and are listed in Table 8.

6. Counterexamples

Comments on the relationships among the above-described properties of copulas are given in this section, together with other useful counterexamples, like the first one that follows, which shows that the convexity of $\bar F_X$ is not a necessary condition for $RTI_{MRL}^{0}(X|Y)$.
Example 1.
Let us consider an FGM survival copula, that is,
\[ \hat C(u,v) = uv + \theta\, uv(1-u)(1-v) \]
for $\theta \in [-1,1]$. Then:
\[ \partial_1 \hat C(u,v) = v + \theta\, v(1-v) - 2\theta\, uv(1-v) \]
is decreasing in u when $\theta \ge 0$. Hence, from Proposition 8, we get $X \le_{LR} (X|Y>s)$ for all $s$ and all $\bar F_X, \bar F_Y$ (i.e., $RTI_{LR}^{0}(X|Y)$). Therefore, $X \le_{MRL} (X|Y>s)$ for all $s$ and all $\bar F_X, \bar F_Y$, and hence the condition "$\bar F_X$ is convex" is not needed for $RTI_{MRL}^{0}(X|Y)$ to hold. A straightforward calculation shows that (11) holds for this copula when $\theta \ge 0$.
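Condition (11) for the FGM survival copula can also be inspected numerically. The sketch below (illustrative code of ours) approximates the ratio $\int_0^z \hat C(u,v)\,du \big/ \int_0^z \hat C(u,1)\,du$ on a grid and checks that it is non-increasing in $z$ for $\theta = 0.5$:

```python
def fgm(u, v, theta):
    # FGM survival copula, illustrative
    return u * v + theta * u * v * (1 - u) * (1 - v)

def ratio_decreasing_in_z(chat, v, n=200, tol=1e-12):
    """Check condition (11): z -> int_0^z C(u,v) du / int_0^z C(u,1) du
    non-increasing, via midpoint sums on an n-point grid."""
    vals = []
    num = den = 0.0
    for i in range(1, n + 1):
        u = (i - 0.5) / n
        num += chat(u, v) / n      # ~ int_0^{i/n} C(u,v) du
        den += chat(u, 1.0) / n    # ~ int_0^{i/n} u du
        vals.append(num / den)
    return all(b <= a + tol for a, b in zip(vals, vals[1:]))

# spot-check a grid of v values for theta = 0.5
print(all(ratio_decreasing_in_z(lambda u, v: fgm(u, v, 0.5), k / 10) for k in range(11)))  # True
```

Here the exact ratio is $v + \theta v(1-v)(1 - 2z/3)$, which is indeed decreasing in $z$ for $\theta \ge 0$.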
In the next example, we discuss the relationships between the condition $\hat C(u,v) \ge uv$ (the PQD property) and the conditions on $\hat C$ for $RTI_{MRL}^{0}(X|Y)$ to hold, i.e., (11) (which is the same as $RTI_{MRL}^{0}(U|V)$) and (14). In particular, the example proves that the PQD property implies neither (11) nor (14). Moreover, it also proves that $RTI_{ICX}^{0}(X|Y)$ does not imply $RTI_{MRL}^{0}(X|Y)$.
Example 2.
Let us consider a random vector $(X,Y)$ with the following Fredricks–Nelsen survival copula (see, e.g., p. 32 in [12]):
\[ \hat C(u,v) = \min\!\left( u, \ v, \ \frac{u^2+v^2}{2} \right). \]
Clearly, $\hat C(u,v) \ge uv$, since $\frac{u^2+v^2}{2} \ge uv$ (and $u \ge uv$, $v \ge uv$) for all $u,v \in [0,1]$. Hence, $(X,Y)$ is PQD, and so $X \le_{ST} (X|Y>s)$ for all $s$ and all $F_X, F_Y$ (i.e., $RTI_{ST}^{0}(X|Y)$ holds). Therefore, $RTI_{ICX}^{0}(X|Y)$ and $RTI_{ICX}^{0}(U|V)$ hold as well. A straightforward calculation shows that $\hat C(u,v)/u$ is not decreasing in u for all $v \in [0,1]$, that is, $(X,Y)$ is not $RTI(Y|X)$. Therefore, $X \le_{HR} (X|Y>s)$ does not hold for all $s$ and all $F_X, F_Y$.
Let us now see that $(X,Y)$ (or $\hat C$) does not satisfy (11); hence, $X \le_{MRL} (X|Y>s)$ does not hold for all $s$ and all $F_X, F_Y$ (e.g., it fails for uniform distributions). For a fixed $v \in (0,1)$, let us consider a random variable $Z_v$ having density:
\[ g_v(u) = \frac{\hat C(u,v)}{k_v} = \begin{cases} u/k_v & \text{for } 0 \le u \le \alpha_1(v); \\[2pt] \dfrac{u^2+v^2}{2k_v} & \text{for } \alpha_1(v) < u \le \alpha_2(v); \\[2pt] v/k_v & \text{for } \alpha_2(v) < u \le 1; \\[2pt] 0 & \text{for } u \notin [0,1]; \end{cases} \]
with $k_v := \int_0^1 \hat C(u,v)\, du > 0$ and $0 < \alpha_1(v) < \alpha_2(v) < 1$, where
\[ \alpha_1(v) := 1 - \sqrt{1 - v^2} \]
and:
\[ \alpha_2(v) := \sqrt{2v - v^2}. \]
These functions are plotted in Figure 1, left. Note that (11) is equivalent to $Z_1 \ge_{RHR} Z_v$ for all $0 < v < 1$. The associated distribution and reversed hazard rate functions are:
\[ G_v(z) = \int_0^z g_v(u)\, du = \frac{1}{k_v}\int_0^z u\, du = \frac{z^2}{2k_v} \]
and:
\[ \bar h_v(z) = \frac{g_v(z)}{G_v(z)} = \frac{2}{z} \]
for $0 \le z \le \alpha_1(v)$. Analogously, for $\alpha_1(v) < z \le \alpha_2(v)$, we get:
\[ G_v(z) = \frac{\alpha_1^2(v)}{2k_v} + \int_{\alpha_1(v)}^{z} \frac{u^2+v^2}{2k_v}\, du = \frac{\alpha_1^2(v)}{2k_v} + \frac{z^3 - \alpha_1^3(v)}{6k_v} + \frac{(z - \alpha_1(v))\, v^2}{2k_v} \]
and:
\[ \bar h_v(z) = \frac{g_v(z)}{G_v(z)} = \frac{z^2+v^2}{\alpha_1^2(v) + (z^3 - \alpha_1^3(v))/3 + v^2\,(z - \alpha_1(v))} \]
for $\alpha_1(v) < z \le \alpha_2(v)$. By plotting $\bar h_v$, we see that $\bar h_v(z)$ and $\bar h_1(z) = 2/z$ are not ordered for $\alpha_1(v) < z \le \alpha_2(v)$. For example, if $v = 0.3$, then $\alpha_1(v) = 0.0460608$ and $\alpha_2(v) = 0.7141428$, and we obtain the reversed hazard rate function $\bar h_v(z)$ plotted in Figure 1, right, for $0.0460608 < z < 0.7141428$. Therefore, $Z_v$ and $Z_1$ are not RHR-ordered, and (11) does not hold.
Finally, we prove that (14) does not hold. For $0 < u \le \alpha_1(v)$,
\[ \beta_v(u) := \frac{\hat C(u,v)}{u} = 1, \]
and for $\alpha_1(v) < u < \alpha_2(v)$:
\[ \beta_v(u) := \frac{\hat C(u,v)}{u} = \frac{u^2+v^2}{2u}. \]
Then:
\[ \beta_v'(u) = \frac{1}{2} - \frac{v^2}{2u^2} = \frac{u^2 - v^2}{2u^2} \]
for $\alpha_1(v) < u < \alpha_2(v)$. Hence, $\beta_v(u)$ is decreasing for $\alpha_1(v) < u < v$ and increasing for $v < u < \alpha_2(v)$. Finally, for $\alpha_2(v) < u < 1$,
\[ \beta_v(u) = \frac{\hat C(u,v)}{u} = \frac{v}{u} \]
is decreasing in u. Hence, $\beta_v$ is not bathtub-shaped in u, and therefore (14) does not hold.
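The two claims of Example 2 (the copula is PQD, yet (11) fails) can be reproduced numerically. In the sketch below (our own illustrative check), the ratio in (11) is approximated for $v = 0.3$ at two points where, according to the analysis above, it increases:

```python
def fn_copula(u, v):
    # Fredricks-Nelsen survival copula of Example 2
    return min(u, v, (u * u + v * v) / 2)

def is_pqd(c, n=100):
    # grid check of C(u,v) >= u*v
    return all(c(i / n, j / n) >= (i / n) * (j / n) - 1e-12
               for i in range(n + 1) for j in range(n + 1))

def ratio_11(c, v, z, n=2000):
    """Midpoint approximation of int_0^z C(u,v) du / int_0^z u du."""
    num = sum(c((i + 0.5) * z / n, v) for i in range(n)) * z / n
    return num / (z * z / 2)

print(is_pqd(fn_copula))                                   # True: the copula is PQD
r1 = ratio_11(fn_copula, 0.3, 0.65)
r2 = ratio_11(fn_copula, 0.3, 0.714)
print(r1, r2, r2 > r1)                                     # the ratio increases: (11) fails
```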
The following is a copula that satisfies both (11) (i.e., $RTI_{MRL}^{0}(U|V)$) and (14), but not the PQD property.
Example 3.
Let us consider a random vector $(X,Y)$ with the following survival copula:
\[ \hat C(u,v) = uv + \theta\, \gamma(u)\, g(v), \]
where $\theta \in [0,1]$, $\gamma(u) = \sin(2\pi u)/(2\pi)$, and $g(v) = v(1-v)$. It can be verified that $\gamma$ satisfies all the conditions for $\hat C$ to be a copula (see [22] for details). It is easy to see that $\hat C$ is not PQD (the difference $\hat C(u,v) - uv$ is positive for $0 < u < 1/2$ and negative for $1/2 < u < 1$). To study (11), we consider the function:
\[ \frac{\int_0^z \hat C(u,v)\, du}{\int_0^z \hat C(u,1)\, du} = \frac{2}{z^2} \int_0^z \hat C(u,v)\, du = v + \frac{2\,\theta\, v(1-v)}{(2\pi)^2} \cdot \frac{1 - \cos(2\pi z)}{z^2}. \]
It is easy to see that $(1 - \cos(2\pi z))/z^2$ is decreasing for $z \in (0,1)$, so (11) holds. Moreover, by plotting $\hat C(u,v)/u$, we see that it is bathtub-shaped in u for all $v \in (0,1)$. Hence, (14) also holds.
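The claims of Example 3 are easy to confirm numerically as well; the sketch below (illustrative code of ours, with $\theta = 1$) exhibits the sign change of $\hat C(u,v) - uv$ and the monotonicity of $(1-\cos(2\pi z))/z^2$ on which (11) rests:

```python
from math import sin, cos, pi

def sine_copula(u, v, theta=1.0):
    # survival copula of Example 3: uv + theta * sin(2*pi*u)/(2*pi) * v*(1-v)
    return u * v + theta * sin(2 * pi * u) / (2 * pi) * v * (1 - v)

# Not PQD: the difference C^(u,v) - uv changes sign across u = 1/2
print(sine_copula(0.25, 0.5) - 0.25 * 0.5 > 0)   # True (positive for u < 1/2)
print(sine_copula(0.75, 0.5) - 0.75 * 0.5 < 0)   # True (negative for u > 1/2)

# (11) holds because (1 - cos(2*pi*z))/z**2 is decreasing on (0, 1)
zs = [i / 100 for i in range(1, 101)]
f = [(1 - cos(2 * pi * z)) / z ** 2 for z in zs]
print(all(b <= a + 1e-12 for a, b in zip(f, f[1:])))  # True
```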

7. Conclusions

In this paper, we analyzed dependence properties related to orderings of conditional distributions. The main novelties are new weak dependence notions related to the mean residual life, increasing concave, mean inactivity time, and increasing convex orders. All these new classes imply non-negative (or non-positive) correlation coefficients, and they can be related to the classical dependence properties defined in a similar way from the stochastic, hazard rate, reversed hazard rate, and likelihood ratio orders. The relationships for all the positive dependence notions (summarized in Table 9) are described in Figure 2. The relationships for the negative dependence notions are similar.
The main disadvantage of the new dependence notions proposed here is that they depend on the marginal distributions (as Pearson's correlation coefficient does). This problem can be solved by replacing them with the respective copula properties obtained by assuming uniform margins; in this case, they imply a non-negative Spearman's rho coefficient. Moreover, we must say that there are other weak dependence notions that come up in papers devoted to more specific areas, which are not studied here for the sake of brevity. This is the case, for example, of the Gini correlation introduced in economics in [23] and further studied in [24,25]. Given two random variables X and Y, the Gini correlation is a non-symmetric measure given by:
\[ \rho_{XY} = \frac{\mathrm{Cov}(X, F_Y(Y))}{\mathrm{Cov}(X, F_X(X))}, \]
whose properties are a mixture of Pearson's and Spearman's correlations. It follows from Proposition 15(iii) that $PQDE(X|Y)$ implies $\rho_{XY} \ge 0$. Furthermore, for simplicity, we have only studied the bivariate case. The study of other notions and the extension of these dependence properties to n-dimensional random vectors are not straightforward and will be addressed in future research projects.

Author Contributions

Investigation, J.N., F.P., M.A.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by Ministerio de Ciencia e Innovación of Spain under Grant PID2019-103971GB-I00 (J.N.), by the GNAMPA research group of INdAM (Istituto Nazionale Di Alta Matematica), Italy, and Progetto di Eccellenza, CUP: E11G18000350001, Italy (F.P.), by Ministerio de Economía y Competitividad of Spain under Grant MTM2017-89577-P, and the 2014-2020 ERDF Operational Programme and the Department of Economy, Knowledge, Business and University of the Regional Government of Andalusia under Grant FEDER-UCA18-107519 (M.A.S.).

Informed Consent Statement

Not applicable

Data Availability Statement

Not applicable

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, C.; Li, X. On Stochastic Dependence in Residual Lifetime and Inactivity Time with Some Applications; Technical Report; School of Science, Tianjin University of Commerce: Tianjin, China, 2020.
  2. Longobardi, M.; Pellerey, F. On the role of dependence in residual lifetimes. Stat. Probab. Lett. 2019, 153, 56–64.
  3. Kimeldorf, G.; Sampson, A.R. A framework for positive dependence. Ann. Inst. Stat. Math. 1989, 41, 31–45.
  4. Shaked, M.; Shanthikumar, J.G. Stochastic Orders; Springer Series in Statistics; Springer: New York, NY, USA, 2007.
  5. Kayid, M.; Ahmad, A.H. On the mean inactivity time ordering with reliability applications. Probab. Eng. Inf. Sci. 2004, 18, 395–409.
  6. Barlow, R.; Proschan, F. Statistical Theory of Reliability and Life Testing: Probability Models; Holt, Rinehart and Winston: New York, NY, USA, 1981.
  7. Denuit, M.; Dhaene, J.; Goovaerts, M.J.; Kaas, R. Actuarial Theory for Dependent Risks; John Wiley & Sons: Hoboken, NJ, USA, 2005.
  8. Ahmad, A.H.; Kayid, M.; Pellerey, F. Further results involving the MIT order and the IMIT class. Probab. Eng. Inf. Sci. 2005, 19, 377–395.
  9. Karlin, S. Total Positivity; Stanford University Press: Stanford, CA, USA, 1968; Volume 1.
  10. Joe, H. Multivariate Models and Dependence Concepts; Chapman & Hall: London, UK, 1997.
  11. Nelsen, R.B. An Introduction to Copulas; Lecture Notes in Statistics 139; Springer: New York, NY, USA, 1999.
  12. Durante, F.; Sempi, C. Principles of Copula Theory; CRC/Chapman & Hall: London, UK, 2016.
  13. Navarro, J.; Sordo, M.A. Stochastic comparisons and bounds for conditional distributions by using copula properties. Depend. Model. 2018, 6, 156–177.
  14. Foschi, R.; Spizzichino, F. Reversing conditional orderings. In Stochastic Orders in Reliability and Risk; Li, H., Li, X., Eds.; Lecture Notes in Statistics 208; Springer: New York, NY, USA, 2013.
  15. Dhaene, J.; Goovaerts, M.J. Dependency of risks and stop-loss order. ASTIN Bull. 1996, 26, 201–212.
  16. Navarro, J.; Durante, F.; Fernández-Sánchez, J. Connecting copula properties with reliability properties of coherent systems. Appl. Stoch. Models Bus. Ind. 2020, to appear.
  17. Navarro, J.; Torrado, N.; del Águila, Y. Comparisons between largest order statistics from multiple-outlier models with dependence. Methodol. Comput. Appl. Probab. 2018, 20, 411–433.
  18. Belzunce, F.; Martínez-Riquelme, C.; Ruiz, J.M. On sufficient conditions for mean residual life and related orders. Comput. Stat. Data Anal. 2013, 61, 199–210.
  19. Balakrishnan, N.; Lai, C.D. Continuous Bivariate Distributions, 2nd ed.; Springer: New York, NY, USA, 2009.
  20. Wright, R. Expectation dependence of random variables, with an application in portfolio theory. Theory Decis. 1987, 22, 111–124.
  21. Hoeffding, W. Maßstabinvariante Korrelationstheorie. Schriften Math. Inst. Univ. Berl. 1940, 5, 181–233.
  22. Rodríguez-Lallena, J.A.; Úbeda-Flores, M. A new class of bivariate copulas. Stat. Probab. Lett. 2004, 66, 315–325.
  23. Schechtman, E.; Yitzhaki, S. A measure of association based on Gini's mean difference. Commun. Stat. Theory Methods 1987, 16, 207–231.
  24. Schechtman, E.; Yitzhaki, S. On the proper bounds of the Gini correlation. Econ. Lett. 1999, 63, 133–138.
  25. Yitzhaki, S.; Olkin, I. Concentration indices and concentration curves. In Stochastic Orders and Decisions under Risk; Mosler, K., Scarsini, M., Eds.; IMS Lecture Notes, Monograph Series; Institute of Mathematical Statistics: Hayward, CA, USA, 1991; Volume 19, pp. 380–392.
Figure 1. Left: the functions α1 (bottom) and α2 (top) for the copula in Example 2. Right: the reversed hazard rate functions of Z_v (black) and Z_1 (red) when v = 0.3, for the same copula.
Figure 2. Relationships among the positive dependence notions in Table 9.
Table 1. Relationships among the main stochastic orders.
X ≤_LR Y  ⇒  X ≤_HR Y   ⇒  X ≤_MRL Y
   ⇓            ⇓             ⇓
X ≤_RHR Y ⇒  X ≤_ST Y   ⇒  X ≤_ICX Y
   ⇓            ⇓             ⇓
X ≤_MIT Y ⇒  X ≤_ICV Y  ⇒  E(X) ≤ E(Y)
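The first row and column of Table 1 can be checked numerically. The sketch below uses the hypothetical choice X ~ Exp(2) and Y ~ Exp(1), for which X ≤_LR Y holds, and verifies on a grid that the likelihood ratio, hazard rate and usual stochastic orderings all hold together, as the diagram predicts:

```python
import math

# Illustrative choice (not from the paper): X ~ Exp(2), Y ~ Exp(1),
# so X <=_LR Y, which by Table 1 must imply <=_HR and <=_ST.
lam_x, lam_y = 2.0, 1.0

def f(lam, t):   # density of the Exp(lam) distribution
    return lam * math.exp(-lam * t)

def sf(lam, t):  # survival function of the Exp(lam) distribution
    return math.exp(-lam * t)

ts = [0.1 * k for k in range(1, 50)]

# <=_LR: the ratio f_Y / f_X must be increasing in t
ratios = [f(lam_y, t) / f(lam_x, t) for t in ts]
assert all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:]))

# <=_HR: hazard rates pointwise ordered, h_X(t) >= h_Y(t)
hazards_ok = all(
    f(lam_x, t) / sf(lam_x, t) >= f(lam_y, t) / sf(lam_y, t) for t in ts
)
assert hazards_ok

# <=_ST: survival functions pointwise ordered, S_X(t) <= S_Y(t)
assert all(sf(lam_x, t) <= sf(lam_y, t) for t in ts)
```

For exponential distributions the checks reduce to constant hazard rates 2 ≥ 1 and the increasing ratio 0.5·e^t, so all three assertions pass on any grid.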
Table 2. Relationships among positive dependence properties.
PQD(X,Y)  ⇐  RTI(Y|X)   ⇐  SI(Y|X)
   ⇑            ⇑             ⇑
RTI(X|Y)  ⇐  RCSI(X,Y)  ⇐  SIRL(Y|X)
   ⇑            ⇑             ⇑
SI(X|Y)   ⇐  SIRL(X|Y)  ⇐  PLRD(X,Y)
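The weakest notion in Table 2, PQD(X,Y), has a direct copula characterization: it holds if and only if C(u,v) ≥ uv for all u, v in [0,1]. The sketch below checks this inequality on a grid for the Farlie-Gumbel-Morgenstern family, used here purely as an illustrative choice:

```python
def fgm(u, v, theta):
    """Farlie-Gumbel-Morgenstern copula C(u,v) = uv + theta*u*v*(1-u)*(1-v)."""
    return u * v + theta * u * v * (1 - u) * (1 - v)

grid = [k / 20 for k in range(21)]

def is_pqd(theta):
    # PQD(X,Y) holds iff C(u,v) >= u*v for all u, v in [0,1]
    return all(fgm(u, v, theta) >= u * v for u in grid for v in grid)

assert is_pqd(0.5)       # nonnegative dependence parameter: PQD holds
assert not is_pqd(-0.5)  # negative parameter: the inequality fails somewhere
```

For the FGM family, C(u,v) − uv = θ·uv(1−u)(1−v), which is nonnegative everywhere exactly when θ ≥ 0, so the grid check matches the known characterization.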
Table 3. Relationships among reversed positive dependence properties.
SI(Y|X)     ⇒  LTD(Y|X)   ⇒  PQD(X,Y)
   ⇑             ⇑              ⇑
SIRHR(Y|X)  ⇒  LCSD(X,Y)  ⇒  LTD(X|Y)
   ⇑             ⇑              ⇑
PLRD(X,Y)   ⇒  SIRHR(X|Y) ⇒  SI(X|Y)
Table 4. Relationships among weak positive dependence properties.
r_{X,Y} ≥ 0

PQDE(X|Y)           RTI_{ICX^0}(X|Y)    RTI_{MRL^0}(X|Y)
RTI_{ICX^0}(Y|X)    PQD(X,Y)            RTI_{MRL}(X|Y)
RTI_{MRL^0}(Y|X)    RTI_{MRL}(Y|X)      RCSI(X,Y)
Table 5. Relationships among weak positive dependence properties.
r_{X,Y} ≥ 0

PQDE(X|Y)           RTI_{ICX^0}(X|Y)    RTI_{MRL^0}(X|Y)
RTI_{ICX^0}(X|Y)    RTI_{ICX}(X|Y)      RTI_{MRL}(X|Y)
PQD(X,Y)            RTI_{ST}(X|Y)       RCSI(X,Y)
Table 6. Relationships among weak positive dependence properties.
ρ_{U,V} ≥ 0

PQDE(U|V)           RTI_{ICX^0}(U|V)    RTI_{MRL^0}(U|V)
RTI_{ICX^0}(U|V)    RTI_{ICX}(U|V)      RTI_{MRL}(U|V)
PQD(U,V)            RTI_{ST}(U|V)       RCSI(U,V)
Table 7. Relationships among reversed weak dependence properties.
r_{X,Y} ≥ 0

LTD_{MIT^0}(X|Y)    LTD_{ICV^0}(X|Y)    PQDE(X|Y)
LTD_{MIT}(X|Y)      LTD_{ICV}(X|Y)      LTD_{ICV^0}(X|Y)
RCSI(X,Y)           LTD_{ST}(X|Y)       PQD(X,Y)
Table 8. Relationships among reversed weak dependence properties.
ρ_{U,V} ≥ 0

LTD_{MIT^0}(U|V)    LTD_{ICV^0}(U|V)    PQDE(U|V)
LTD_{MIT}(U|V)      LTD_{ICV}(U|V)      LTD_{ICV^0}(U|V)
RCSI(U,V)           LTD_{ST}(U|V)       PQD(U,V)
Table 9. Positive dependence notions.
N     Name
0     PQDE ≡ {E(X) ≤ E(X|Y > s) for all s} ≡ {E(X) ≥ E(X|Y ≤ s) for all s}
1     PQD ≡ RTI_{ST^0}(Y|X) ≡ RTI_{ST^0}(X|Y) ≡ LTD_{ST^0}(Y|X) ≡ LTD_{ST^0}(X|Y)
2     RTI(Y|X) ≡ RTI_{ST}(Y|X) ≡ RTI_{HR^0}(X|Y)
2'    RTI(X|Y) ≡ RTI_{ST}(X|Y) ≡ RTI_{HR^0}(Y|X)
3     SI(Y|X) ≡ SI_{ST}(Y|X) ≡ RTI_{LR^0}(X|Y) ≡ LTD_{LR^0}(X|Y)
3'    SI(X|Y) ≡ SI_{ST}(X|Y) ≡ RTI_{LR^0}(Y|X) ≡ LTD_{LR^0}(Y|X)
4     LTD(Y|X) ≡ LTD_{ST}(Y|X) ≡ LTD_{RHR^0}(X|Y)
4'    LTD(X|Y) ≡ LTD_{ST}(X|Y) ≡ LTD_{RHR^0}(Y|X)
5     RCSI ≡ RTI_{HR}(Y|X) ≡ RTI_{HR}(X|Y)
6     LCSD ≡ LTD_{RHR}(Y|X) ≡ LTD_{RHR}(X|Y)
7     SIRL(Y|X) ≡ SI_{HR}(Y|X) ≡ RTI_{LR}(X|Y)
7'    SIRL(X|Y) ≡ SI_{HR}(X|Y) ≡ RTI_{LR}(Y|X)
8     SIRHR(Y|X) ≡ LTD_{LR}(X|Y)
8'    SIRHR(X|Y) ≡ LTD_{LR}(Y|X)
9     PLRD ≡ SI_{LR}(Y|X) ≡ SI_{LR}(X|Y)
10    RTI_{MRL}(Y|X)
10'   RTI_{MRL}(X|Y)
11    RTI_{MRL^0}(Y|X)
11'   RTI_{MRL^0}(X|Y)
12    RTI_{ICX}(Y|X)
12'   RTI_{ICX}(X|Y)
13    RTI_{ICX^0}(Y|X)
13'   RTI_{ICX^0}(X|Y)
14    LTD_{MIT}(Y|X)
14'   LTD_{MIT}(X|Y)
15    LTD_{MIT^0}(Y|X)
15'   LTD_{MIT^0}(X|Y)
16    LTD_{ICV}(Y|X)
16'   LTD_{ICV}(X|Y)
17    LTD_{ICV^0}(Y|X)
17'   LTD_{ICV^0}(X|Y)
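The weakest notion in Table 9, PQDE (item 0), only constrains conditional means. As an illustration, for a standard bivariate normal pair with correlation ρ (a hypothetical choice, not taken from the text), E(X|Y = y) = ρy gives E(X | Y > s) = ρ·φ(s)/(1 − Φ(s)), which is nonnegative, hence at least E(X) = 0, whenever ρ ≥ 0. The sketch below evaluates this closed form:

```python
import math

def phi(s):
    """Standard normal density."""
    return math.exp(-0.5 * s * s) / math.sqrt(2 * math.pi)

def Phi(s):
    """Standard normal cumulative distribution function (via erf)."""
    return 0.5 * (1 + math.erf(s / math.sqrt(2)))

def cond_mean_x_given_y_above(rho, s):
    # For a standard bivariate normal pair with correlation rho:
    # E(X | Y > s) = rho * E(Y | Y > s) = rho * phi(s) / (1 - Phi(s))
    return rho * phi(s) / (1 - Phi(s))

# With E(X) = 0, PQDE requires E(X | Y > s) >= E(X) for every threshold s
rho = 0.4
assert all(cond_mean_x_given_y_above(rho, s) >= 0 for s in [-2, -1, 0, 1, 2])
```

Since φ(s)/(1 − Φ(s)) is the inverse Mills ratio, which is positive for every s, the inequality holds for all thresholds as soon as ρ ≥ 0.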
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Navarro, J.; Pellerey, F.; Sordo, M.A. Weak Dependence Notions and Their Mutual Relationships. Mathematics 2021, 9, 81. https://0-doi-org.brum.beds.ac.uk/10.3390/math9010081

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
