Article

Screening Teams of Moral and Altruistic Agents

by
Roberto Sarkisian
Department of Economics and Finance, University of Rome Tor Vergata, 00133 Rome, Italy
Submission received: 30 August 2021 / Revised: 8 October 2021 / Accepted: 13 October 2021 / Published: 20 October 2021
(This article belongs to the Section Learning and Evolution in Games)

Abstract

This paper studies the problem of screening teams of either moral or altruistic agents, in a setting where agents choose whether or not to exert effort in order to achieve a high output for the principal. I show that there exists no separating equilibrium menu of contracts that induces the agents to reveal their types unless the principal either (i) excludes one group from the productive relationship, or (ii) demands different efforts from different preference groups. I also characterize the contracts inducing pooling equilibria in which all agents are incentivized to exert a high level of effort.
JEL Classification:
D82; D86; D03

1. Introduction

Can an employer distinguish between moral and altruistic employees? Ref. [1] explores a moral-hazard-in-teams problem where an employer has to choose between hiring a team of altruistic agents or a team of moral agents (as in [2,3,4]). The key finding is that, depending on the production technology and the common degree of morality or altruism, the principal sometimes prefers the team of moral agents over the team of altruistic ones. The author then argues that firms may have incentives to collect information about their prospective employees’ preferences in order to benefit from offering less costly contracts.
This last point, however, is not developed there. In particular, Ref. [1] assumes that the agents’ preferences are common knowledge, i.e., the principal knows not only which kind of prosocial preferences the prospective employees have, but also the common degree of morality or altruism they display.
The objective of this paper is to relax that strong assumption: in what follows, it is assumed that the degree of altruism or morality is known to all parties, but the utility function specification is private knowledge of the agents. The principal then seeks to distinguish the two groups by offering menus of contracts that induce participation, effort provision, and revelation of private information by the employees.
This class of adverse selection followed by moral hazard problems has been analyzed before. Refs. [5,6] dedicate sections to this broad class of problems and provide further references to the literature. To cite but a few articles exploring the screening of preferences, Ref. [7] considers the problem of screening risk-averse agents under moral hazard under the strong assumption that the utility function satisfies single-crossing and CARA properties; as a result, they find that the power of incentives is decreasing in risk aversion. Ref. [8] studies a two-output model with a risk-neutral agent protected by limited liability and ex-post participation constraints, and finds that a fully pooling contract is optimal. Ref. [9] builds upon the previous model by assuming that the agent is risk-averse, and also finds that pooling contracts are difficult to avoid.
All the papers cited above differ from the environment studied here in one important way: they assume that preferences are common knowledge, but that either the degree of risk-aversion or a productivity parameter is private information of the single agent. Here, as stated before, the utility function rather than the common degree of altruism or morality is private information of the agents. The main results, however, are in line with [8,9]: separation is difficult to achieve by the principal if she desires the agents to exert effort in equilibrium. Intuitively, this is a consequence of the utility functions not displaying a single-crossing-like property, an assumption that is imposed in [7].
Screening prosocial preferences has been the central issue in some studies, both theoretically and empirically. Ref. [10] studies an environment with a single principal screening a continuum of workers that have private information about their ability and preferences over social comparisons. In particular, Ref. [10] contrasts the optimal employment contracts for selfish and inequity-averse agents, and finds that it is impossible to screen workers of similar ability with respect to their social preferences within the firm, a result that is in line with the ones found here. The main difference between [10] and the model in this study is that the former considers only the adverse selection problem faced by the principal when hiring a single agent, while the latter assumes teamwork and moral hazard. Closer in essence to this paper are the works of [11,12], who consider screening followed by moral hazard when agents’ prosocial preferences are characterized by inequity aversion. Their results also suggest that screening agents according to their social preferences is not feasible.
The approach developed here is close in essence to [13]. In the first step, I search for the set of feasible contracts, that is, those satisfying the participation constraints plus the moral hazard incentive compatibility constraints for each preference group. In the second step, the principal selects the least costly menu of feasible contracts that also induces the preference groups to truthfully reveal their types. As a consequence, I obtain a no-distortion-at-the-top-like result: the principal offers the second-best moral hazard contract to the preference group that is least costly to hire, and distorts the allocation to the costlier one, either by excluding it from the relationship or by demanding a different level of effort.
The paper proceeds as follows. Section 2 presents the environment and the concept of separating equilibrium to be considered. Section 3 discusses screening and the existence of separating equilibria, while Section 4 characterizes contracts that support pooling equilibria. Section 5 discusses the results and a possible sequential extension, and Section 6 concludes. For ease of exposition, all proofs are relegated to Appendix A.

2. The Model

This model builds upon the model in [1]. Consider a single risk-neutral principal (she/firm) who faces a continuum of potential employees with total mass normalized to one. Alternatively, the model can be restated by considering $n$ pairs of potential employees, without loss. The firm seeks to hire a pair of agents to work on a common task that yields output $x \in \{x_H, x_L\}$ to the principal, with $x_H > x_L$. The probability of the high outcome being achieved depends on the binary effort choices made by the agents employed in the firm, $e_i \in \{0, 1\}$ for $i \in \{A, B\}$. In particular,
$$\Pr(x = x_H \mid e_A, e_B) = p_{e_A + e_B},$$
where I assume that $1 > p_2 > p_1 > p_0 > 0$. The cost of exerting effort is identical for every agent, $C(e) = c\,e$, with $c > 0$.
Output is contractible, and the principal posts wage schedules $w_i(x)$ in order to attract the teams of agents. If the firm successfully attracts a pair of employees, her realized profit is
$$V(x, w_A, w_B) = x - w_A(x) - w_B(x).$$
Denote by $\pi_i(e_i, e_j, w_i(x))$ the expected material payoff accruing to agent $i$ from the effort choices $(e_i, e_j)$ and the wage schedule $w_i(x)$, for $i, j \in \{A, B\}$, $i \neq j$. I restrict attention to wage schedule pairs $w_i = (w_i^H, w_i^L)$ determining the payments following good and bad realizations of revenues. This is in line with [8,9], where the schedules are composed of a fixed plus a variable part. In what follows, the material payoff function takes the expected, additively separable form
$$\pi(e_i, e_j, w_i) = p_{e_i + e_j}\left[u(w_i^H) - c(e_i)\right] + \left(1 - p_{e_i + e_j}\right)\left[u(w_i^L) - c(e_i)\right],$$
where $u: \mathbb{R}_+ \to \mathbb{R}$ is the function that associates the agent’s consumption utility with each wage realization $w$. The agents are risk averse towards wages: $u(w)$ is assumed to be twice continuously differentiable, strictly increasing, and strictly concave.
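To fix ideas, the following minimal Python sketch evaluates the expected material payoff defined above. The square-root consumption utility and all numerical values are illustrative assumptions, not quantities taken from the model.

```python
import math

# Illustrative parameters (assumptions, not values from the paper): success
# probabilities indexed by total team effort e_i + e_j, with 1 > p2 > p1 > p0 > 0,
# and a linear effort cost c.
p = {0: 0.2, 1: 0.5, 2: 0.9}   # p_{e_i + e_j}
c = 0.1                        # cost of exerting effort

def u(w):
    """Consumption utility: strictly increasing and strictly concave (here, sqrt)."""
    return math.sqrt(w)

def material_payoff(e_i, e_j, w_H, w_L):
    """Expected material payoff pi(e_i, e_j, w_i) of agent i, given own effort e_i,
    the teammate's effort e_j, and the wage schedule (w_H, w_L)."""
    q = p[e_i + e_j]
    return q * (u(w_H) - c * e_i) + (1 - q) * (u(w_L) - c * e_i)

if __name__ == "__main__":
    # Payoff from working versus shirking when the teammate works:
    print(material_payoff(1, 1, w_H=1.0, w_L=0.25))   # 0.85
    print(material_payoff(0, 1, w_H=1.0, w_L=0.25))   # 0.75
```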
Each pair of agents belongs to one class of preference group: altruistic or moral. More precisely, each team is composed of two agents drawn from the same preference group, as in [1]. The principal only knows the proportion of the population corresponding to each group: $\lambda \in (0, 1)$ for altruists, and $1 - \lambda$ for moral agents. The agents’ preferences in each group are represented by the utility functions
$$U^{Alt}(e_i, e_j, w_i, w_j, \alpha_i) = \pi(e_i, e_j, w_i) + \alpha_i\,\pi(e_j, e_i, w_j)$$
for the altruists and
$$U^{HM}(e_i, e_j, w_i, \kappa_i) = (1 - \kappa_i)\,\pi(e_i, e_j, w_i) + \kappa_i\,\pi(e_i, e_i, w_i)$$
for the moral agents, where $\alpha_i \in [0, 1]$ and $\kappa_i \in [0, 1]$ represent the agents’ degrees of altruism and morality, respectively. While subtle, the difference in the utility functions is crucial. For both preference groups, the first term captures the individual’s material payoff, given the strategy profile effectively used. For altruistic agents, the second term captures the impact of agent $j$’s material payoff, given the strategy profile $(e_i, e_j)$, on agent $i$’s utility, weighted by the degree of altruism $\alpha_i$. Meanwhile, for moral agents, the second term captures the Kantian moral concern. In other words, each moral individual considers what his material payoff would be if, hypothetically, the other agent were to mimic him. Extended discussions on the interpretations of moral preferences can be found in [2,3,4].
In what follows, as discussed in [1], I assume that $\alpha_A = \alpha_B = \kappa_A = \kappa_B = \theta$, and focus on the comparable functions
$$U^{Alt}(e_i, e_j, w_i, w_j, \theta) = \pi(e_i, e_j, w_i) + \theta\,\pi(e_j, e_i, w_j),$$
$$U^{HM}(e_i, e_j, w_i, \theta) = (1 - \theta)\,\pi(e_i, e_j, w_i) + \theta\,\pi(e_i, e_i, w_i).$$
As pointed out in [2] and also explored in [14], this is the formulation that gives rise to the behavioral equivalence between homo moralis and altruistic preferences. Under an appropriate change of variables, the altruistic utility function could be rewritten as $U^{Alt}(e_i, e_j, w_i, w_j, \theta) = (1 - \tilde{\theta})\,\pi(e_i, e_j, w_i) + \tilde{\theta}\,\pi(e_j, e_i, w_j)$, for $\tilde{\theta} \in [0, 1/2]$.
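As a numerical illustration of the two specifications, the sketch below reuses the same illustrative payoff function; the normalization $\tilde{\theta} = \theta/(1+\theta)$ is my reading of the change of variables mentioned above, and all numbers are assumptions.

```python
import math

# Illustrative parameters and payoff, as in the earlier sketch (assumptions).
p = {0: 0.2, 1: 0.5, 2: 0.9}
c, theta = 0.1, 0.5
u = math.sqrt

def pi(e_i, e_j, w_H, w_L):
    """Expected material payoff pi(e_i, e_j, w_i) of agent i."""
    q = p[e_i + e_j]
    return q * (u(w_H) - c * e_i) + (1 - q) * (u(w_L) - c * e_i)

def U_alt(e_i, e_j, w_i, w_j):
    """Altruistic utility: own payoff plus theta times the teammate's payoff."""
    return pi(e_i, e_j, *w_i) + theta * pi(e_j, e_i, *w_j)

def U_hm(e_i, e_j, w_i):
    """Homo moralis utility: (1 - theta) times the payoff under the actual profile,
    plus theta times the payoff if the teammate hypothetically mimicked agent i."""
    return (1 - theta) * pi(e_i, e_j, *w_i) + theta * pi(e_i, e_i, *w_i)

# Rescaling the altruistic utility by 1/(1+theta) yields the
# (1 - theta_tilde, theta_tilde) weighting with theta_tilde = theta/(1+theta) <= 1/2.
w = (1.0, 0.25)
theta_tilde = theta / (1 + theta)
lhs = U_alt(1, 0, w, w) / (1 + theta)
rhs = (1 - theta_tilde) * pi(1, 0, *w) + theta_tilde * pi(0, 1, *w)
print(U_hm(1, 0, w), lhs, abs(lhs - rhs) < 1e-12)   # the last check prints True
```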
Throughout the exposition, I use the superscript $HM$ to denote equations and variables relevant to moral agents, and $Alt$ to refer to altruistic ones.
The timing of the game is depicted in Figure 1.

3. Screening

Due to the assumption of a common degree of morality or altruism, I restrict attention to symmetric contracts offered to each team. These assumptions simplify the problem in the sense that both the incentive compatibility and individual rationality constraints are similar to the ones studied in the literature with a single agent, save for their dependence on the common degree of morality/altruism. For the pure moral hazard problem, these constraints are
$$(1 + \theta)\left[p_2 u(w^H) + (1 - p_2) u(w^L) - c\right] \geq \bar{u}^{Alt},$$
$$u(w^H) - u(w^L) \geq \frac{c}{(1 + \theta)(p_2 - p_1)}$$
for altruistic agents, and
$$p_2 u(w^H) + (1 - p_2) u(w^L) - c \geq \bar{u}^{HM},$$
$$u(w^H) - u(w^L) \geq \frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)}$$
for moral agents.
In contrast to [1], I allow the different groups to have different reservation utilities. Two particular cases deserve a special mention. First, as in [1], agents in each group may have exactly the same reservation utility $\bar{u}^{Alt} = \bar{u}^{HM} = \bar{u}$, which generates different utility levels for the participating agents whenever $\bar{u} > 0$, due to the utility function representing each prosocial preference. The second particular case is $\bar{u}^{Alt} = (1 + \theta)\bar{u}^{HM}$, so that the participation constraints for both moral and altruistic agents are identical for any common degree of prosociality $\theta \in (0, 1]$. If $\theta = 0$, then both moral and altruistic agents behave as purely selfish individuals, and the screening problem becomes irrelevant.
Let $C^{Alt}$ denote the set of contracts that satisfy the participation and incentive compatibility constraints of altruistic agents, and similarly define the set $C^{HM}$ for moral agents. The principal’s screening problem is to choose $w^{Alt} \in C^{Alt}$ and $w^{HM} \in C^{HM}$ such that neither group has an incentive to pick the contract designed for the other group. This is akin to the incentive compatibility constraint in the adverse selection problem, and it can be seen as an additional set of constraints in the principal’s maximization program. The issue, however, is that the intersection between these two sets of feasible contracts is not empty, and thus one can always construct a separating equilibrium by selecting two contracts, $w$ and $w'$, in $C^{Alt} \cap C^{HM}$, and arguing that each group will self-select into one, and only one, of these contracts.
I will, therefore, focus on a stronger form of separation: I will require that a menu of contracts has at most one element in the intersection of the feasible sets. This will ensure that at least one group has no incentives to deviate and accept the contract designed for the other group.
Let $h = u(w)$, which is uniquely defined for each $w \in \mathbb{R}_+$ since $u$ is strictly increasing by assumption. I can therefore rewrite the sets of feasible contracts using the linear constraints
$$(1 + \theta)\left[p_2 h^H + (1 - p_2) h^L - c\right] \geq \bar{u}^{Alt},$$
$$h^H - h^L \geq \frac{c}{(1 + \theta)(p_2 - p_1)}$$
for altruistic agents and
$$p_2 h^H + (1 - p_2) h^L - c \geq \bar{u}^{HM},$$
$$h^H - h^L \geq \frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)}$$
for moral agents. By appropriately choosing the reservation utilities $\bar{u}^{HM}$ and $\bar{u}^{Alt}$, I can easily draw the sets of feasible contracts for the cases in which the production technology displays decreasing or increasing returns to efforts. In Figure 2 and Figure 3, I assume that $\bar{u}^{HM} = \bar{u}^{Alt} > 0$. Notice, in Figure 2, that $C^{HM} \subset C^{Alt}$, which implies that any feasible contract offered to moral agents is also accepted by altruistic ones, while also providing the latter with incentives to exert high effort. Meanwhile, in Figure 3, a contract in $C^{HM}$ is also accepted by altruistic agents, but it may not necessarily induce them to exert the high effort.
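The linearized constraints are easy to check numerically. The sketch below is a minimal illustration, with all parameter values assumed so as to match the increasing-returns case of Figure 2 with equal reservation utilities; it tests membership of a contract $(h^H, h^L)$ in each feasible set and verifies the nesting $C^{HM} \subset C^{Alt}$ on a coarse grid.

```python
# Feasibility check for the linearized constraint sets, in utility units h = u(w).
# All parameter values are illustrative assumptions (increasing returns, equal
# reservation utilities, as in Figure 2).
p0, p1, p2 = 0.2, 0.45, 0.9          # p2 - p1 > p1 - p0
c, theta = 0.1, 0.5
u_bar_alt = u_bar_hm = 0.5

def feasible_alt(hH, hL):
    ir = (1 + theta) * (p2 * hH + (1 - p2) * hL - c) >= u_bar_alt
    ic = hH - hL >= c / ((1 + theta) * (p2 - p1))
    return ir and ic

def feasible_hm(hH, hL):
    ir = p2 * hH + (1 - p2) * hL - c >= u_bar_hm
    ic = hH - hL >= c / ((p2 - p1) + theta * (p1 - p0))
    return ir and ic

# Under increasing returns, every contract that is feasible for moral agents
# should also be feasible for altruists (C^HM contained in C^Alt).
grid = [i / 20 for i in range(41)]   # h values in [0, 2]
nested = all(feasible_alt(hH, hL)
             for hH in grid for hL in grid if feasible_hm(hH, hL))
print(nested)   # expected: True
```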
Lemma 1.
Suppose that $\bar{u}^{Alt} < (1 + \theta)\bar{u}^{HM}$. There exists no strictly separating menu of contracts that induces both types of agents to exert high effort. Similarly, if $\bar{u}^{Alt} > (1 + \theta)\bar{u}^{HM}$, no strictly separating equilibrium exists that induces all agents to exert effort.
Lemma 1 states that no separating equilibrium exists when the groups’ reservation utilities are such that their participation constraints are never identical and the contracts must incentivize agents to exert the high effort. In each case, one can find a profitable deviation for one group: either the moral agents are better off taking the contract designed for the altruistic teams, or the altruists prefer the moral contracts to their own.
Lemma 2.
Suppose that $\bar{u}^{Alt} = (1 + \theta)\bar{u}^{HM} \geq 0$. There exists no strictly separating menu of contracts that induces both types of agents to exert high effort.
The negative results in Lemmas 1 and 2 can be linked to the fact that the isoutility curves of moral and altruistic agents never cross in the region where both groups are incentivized to exert the high effort. Indeed, under the assumptions of each statement, the indifference curves of each kind of prosocial agent are either identical to one another, or they are parallel. It is this violation of a single-crossing-like property that prevents the principal from finding schedules that elicit the agents’ preferences.
Proposition 1.
There exists no strictly separating menu of contracts that induces both types of agents to accept a contract and exert high effort, for any $\bar{u}^{Alt}, \bar{u}^{HM} \in \mathbb{R}_+$.

3.1. Separating Equilibria with Low Effort

A separating equilibrium also does not exist if the principal requires both types of agents to exert low effort. Indeed, due to the agents’ risk aversion, the principal can induce participation by offering the constant schedules $\bar{w}^{Alt}$ and $\bar{w}^{HM}$ to the altruistic and moral groups, respectively, satisfying the individual rationality constraints
$$u(\bar{w}^{Alt}) \geq \frac{\bar{u}^{Alt}}{1 + \theta},$$
$$u(\bar{w}^{HM}) \geq \bar{u}^{HM}.$$
By standard arguments, these constraints must bind in equilibrium. However, if $\bar{u}^{HM} \neq \frac{\bar{u}^{Alt}}{1 + \theta}$, one preference group always has incentives to deviate and accept the contract designed for the other group. On the other hand, if $\bar{u}^{HM} = \frac{\bar{u}^{Alt}}{1 + \theta}$, then $\bar{w}^{Alt} = \bar{w}^{HM}$ since $u$ is strictly increasing, which implies that all workers accept exactly the same contract, and thus telling them apart is impossible for the principal. This argument is collected in the following result.
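A short sketch of this low-effort case follows; the square-root utility and the reservation utilities are illustrative assumptions. It computes the binding constant wages and checks which group would envy the other’s schedule when $\bar{u}^{HM} \neq \frac{\bar{u}^{Alt}}{1+\theta}$.

```python
import math

# Minimal sketch of Section 3.1 (assumed utility u(w) = sqrt(w) and assumed
# reservation utilities; the paper does not fix these values).
theta = 0.5
u_bar_alt, u_bar_hm = 0.6, 0.5

def u(w):          # consumption utility (assumption: sqrt)
    return math.sqrt(w)

def u_inv(h):      # inverse of sqrt on the nonnegative reals
    return h ** 2

# Binding participation constraints with zero effort and constant wages:
w_bar_alt = u_inv(u_bar_alt / (1 + theta))   # u(w_bar_alt) = u_bar_alt / (1 + theta)
w_bar_hm = u_inv(u_bar_hm)                   # u(w_bar_hm)  = u_bar_hm

# Whenever u_bar_hm != u_bar_alt / (1 + theta), one group envies the other's wage:
hm_deviates = u(w_bar_alt) > u_bar_hm                    # morals prefer the altruists' wage
alt_deviates = (1 + theta) * u(w_bar_hm) > u_bar_alt     # altruists prefer the morals' wage
print(w_bar_alt, w_bar_hm, hm_deviates, alt_deviates)    # here: altruists deviate
```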
Proposition 2.
No separating equilibrium exists if the principal wishes to induce both preference groups to accept the contract and exert low effort.

3.2. Screening Preference Groups through Exclusion

Propositions 1 and 2 have shown that the principal cannot screen moral agents from altruistic ones when she must induce both participation and high effort. However, the principal might be able to screen the different preference groups by offering a single (non-null) contract.
Turn once more to Figure 2, where reservation utilities are identical and returns to effort are increasing. If the principal offers a menu with a single contract that satisfies both the participation and incentive compatibility constraints of the altruistic agents with equality (the intersection of the green lines), she ensures that (i) altruistic agents accept the offer and exert high effort, and (ii) moral agents choose not to participate in the relationship with the principal. The same can be achieved under decreasing returns to effort by offering a similar contract (Figure 3).
More generally, the principal can screen the preference groups by offering a singleton menu, where the contract offered necessarily satisfies with equality the participation constraint of the preference group with the lowest reservation utility.
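The construction can be sketched numerically as follows (illustrative parameters, increasing returns, and a common reservation utility as in Figure 2): solve the two binding altruistic constraints for the single contract, then check that moral agents turn it down.

```python
# Screening through exclusion: solve, in utility units h = u(w), for the single
# contract that binds the altruists' participation and incentive constraints, and
# check that moral agents with the same reservation utility reject it.
# All numbers are illustrative assumptions.
p0, p1, p2 = 0.2, 0.45, 0.9
c, theta = 0.1, 0.5
u_bar = 0.5                                     # common reservation utility
assert p2 - p1 > p1 - p0                        # increasing returns, as in Figure 2

spread = c / ((1 + theta) * (p2 - p1))          # binding altruistic IC: hH - hL
hL = u_bar / (1 + theta) + c - p2 * spread      # from the binding altruistic IR
hH = hL + spread

hm_accepts = p2 * hH + (1 - p2) * hL - c >= u_bar
print((round(hH, 3), round(hL, 3)), hm_accepts)  # expected: hm_accepts == False
```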
Proposition 3.
Suppose that $\bar{u}^{HM} \neq \frac{\bar{u}^{Alt}}{1 + \theta}$. The principal can screen the different preference groups by offering a single contract that excludes the agents with the highest reservation utility.
Proposition 3 holds whether the principal wishes to induce high or low effort. In the latter case, the argument behind Proposition 3 is even more compelling: since the principal offers a constant wage schedule to the risk-averse agents, who exert zero effort, she can simply choose to employ the cheapest of the preference groups in terms of reservation utilities.

3.3. Screening with Different Efforts

So far, my analysis has focused on the case where both groups of agents are required by the principal to exert the same level of effort, either high or low. The negative results are basically a consequence of the indifference curves for the two groups being parallel to one another when efforts are the same: this implies that the contract offered to the group with the highest outside option also attracts the other team.
Although excluding one preference group from participating in the relationship with the principal is one way to screen agents, a second one exists, namely, requiring that only one group exerts high effort.
If only one group is expected to exert effort, the incentive compatibility constraint with respect to effort can be neglected for the group that is not expected to exert it. Moreover, a constant schedule should be offered to that same group, due to the agents’ risk aversion. In what follows, I denote by 1 the preference group that should exert effort, and by 2 the preference group that should not. The feasible set of contracts for the principal is then given by all values of $w = ((w_1^H, w_1^L), w_2)$ satisfying the incentive compatibility and individual rationality constraints for group 1, and the participation constraint for group 2.
One must, however, notice an important difference between the participation constraints of the two groups. For group 1, which is bound by the incentive compatibility constraint, the $(IR)$ is given by
$$p_2 u(w_1^H) + (1 - p_2) u(w_1^L) - c \geq \bar{u}_1 \quad \text{for all } (w^H, w^L) \text{ satisfying } (IC),$$
$$p_0 u(w_1^H) + (1 - p_0) u(w_1^L) \geq \bar{u}_1 \quad \text{otherwise}.$$
On the other hand, for group 2, participation must satisfy
$$u(w_2) = p_0 u(w_2^H) + (1 - p_0) u(w_2^L) \geq \bar{u}_2.$$
If the individual rationality curves never intersect, i.e., if either $\bar{u}_2 < \bar{u}_1$ or $\bar{u}_2 \gg \bar{u}_1$, then a separating equilibrium does not exist, for the simple reason that the contract offered to the group with the highest outside option also attracts the agents of the other group, in much the same manner as when the principal induces neither group to exert high effort.
This is not true if the participation constraints intersect (which requires that $\bar{u}_2 \geq \bar{u}_1$). Using the linearization $h = u(w)$, the feasible set of contracts can be represented as in Figure 4 below.
The contract offered to the agents in group 2 is given by the intersection of the 45° line with the participation constraint for that group, since at such a point the principal proposes a constant schedule to the agents who are not expected to exert effort. Agents in group 1, on the other hand, are offered the contract lying at the intersection of the two participation constraints, which they strictly prefer to the constant schedule offered to group 2 (while group 2 is indifferent between the two contracts). The assumption that the participation constraints intersect also implies that the incentive compatibility constraint for group 1 is satisfied.
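A minimal numerical sketch of this construction is given below. The parameter values are illustrative assumptions chosen so that the participation constraints cross and the incentive constraint of group 1 holds at the crossing; for concreteness, group 1 is taken to be the altruistic group, so its IC spread is $\frac{c}{(1+\theta)(p_2-p_1)}$.

```python
# Sketch of the different-efforts menu (Section 3.3), in utility units h = u(w).
# Group 1 is asked to exert effort, group 2 is not. All numbers are illustrative
# assumptions; group 1's reservation utility u_bar_1 is already expressed in the
# normalized units of its participation constraint.
p0, p1, p2 = 0.3, 0.4, 0.9
c, theta = 0.1, 0.5
u_bar_1, u_bar_2 = 0.4, 0.41

# Group 2: constant schedule on the 45-degree line, binding its participation constraint.
h2 = u_bar_2

# Group 1: intersection of the two participation constraints
#   p2*hH + (1 - p2)*hL - c = u_bar_1    (group 1 works)
#   p0*hH + (1 - p0)*hL     = u_bar_2    (group 2 shirks)
a1, b1, r1 = p2, 1 - p2, u_bar_1 + c
a2, b2, r2 = p0, 1 - p0, u_bar_2
det = a1 * b2 - a2 * b1
h1H = (r1 * b2 - r2 * b1) / det
h1L = (a1 * r2 - a2 * r1) / det

ic_spread = c / ((1 + theta) * (p2 - p1))   # group 1 assumed altruistic
print("group 1 contract:", (round(h1H, 3), round(h1L, 3)), "group 2 wage (in utils):", h2)
print("IC of group 1 satisfied:", h1H - h1L >= ic_spread)
print("group 2 indifferent:", abs(p0 * h1H + (1 - p0) * h1L - u_bar_2) < 1e-9)
```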
One remark is in order here: the principal leaves group 1 agents some rent for exerting the high effort, in the sense that the incentive compatibility constraint is not necessarily satisfied with equality (i.e., the pair $(w_1^H, w_1^L)$ does not lie on the $IC_1$ line). This can be interpreted as a no-distortion-at-the-top result: the principal’s offer does not distort (downward) the effort demanded from the least costly group, but she must still pay a rent to that group.
Proposition 4.
Suppose that $\bar{u}^{HM}$ and $\bar{u}^{Alt}$ are such that $\bar{u}^{HM} \neq \frac{\bar{u}^{Alt}}{1 + \theta}$ and the individual rationality constraints of the two groups cross each other once. Then, a separating equilibrium exists if the principal induces only one preference group to exert the high effort.

4. Pooling Equilibria

Let $w^{HM}$ be the contract that satisfies both conditions defining $C^{HM}$ with equality, and similarly define $w^{Alt}$. Additionally, denote by $w^P$ the contract that satisfies with equality both the participation constraint for moral agents and the incentive compatibility constraint for altruistic agents. The following proposition states the result formally.
Proposition 5.
Suppose that $\bar{u}^{Alt} = \bar{u}^{HM} = \bar{u} > 0$. Then $w^{HM}$ constitutes a pooling equilibrium with both groups of agents exerting the high effort under increasing returns to efforts, while $w^P$ constitutes such an equilibrium under decreasing returns to efforts.
I do not claim in Proposition 5 that $w^{HM}$ and $w^P$ are the unique pooling equilibrium contracts under increasing and decreasing returns to efforts, respectively. In the former case, any contract in $C^{HM}$ constitutes a pooling equilibrium. These two contracts, however, are completely characterized by a simple linear system of two equations. They also characterize one pooling equilibrium when $\bar{u}^{Alt} = (1 + \theta)\bar{u}^{HM}$: in this case, the participation constraints for both groups are identical, and characterizing the feasible sets of contracts depends only on a comparison of the incentive compatibility constraints. Moreover, they are the least costly for the principal to offer among all pooling contracts.
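Because $w^{HM}$ and $w^P$ each solve a two-equation linear system, they are straightforward to compute. The sketch below, with illustrative parameters and a common reservation utility $\bar{u} > 0$, recovers both contracts in utility units and checks which of them is accepted, with high effort, by both groups under increasing and decreasing returns.

```python
# Sketch of the pooling contracts in Section 4, in utility units h = u(w).
# w_HM binds the moral IR and IC; w_P binds the moral IR and the altruistic IC.
# All parameter values are illustrative assumptions.
c, theta, u_bar = 0.1, 0.5, 0.5          # common reservation utility, u_bar > 0

def pooling_check(p0, p1, p2):
    ic_hm = c / ((p2 - p1) + theta * (p1 - p0))    # moral IC spread
    ic_alt = c / ((1 + theta) * (p2 - p1))         # altruistic IC spread

    def contract(spread):
        # Solve p2*hH + (1-p2)*hL - c = u_bar together with hH - hL = spread.
        hL = u_bar + c - p2 * spread
        return hL + spread, hL                     # (hH, hL)

    def feasible_for_both(hH, hL):
        ir_hm = p2 * hH + (1 - p2) * hL - c >= u_bar - 1e-12
        ir_alt = (1 + theta) * (p2 * hH + (1 - p2) * hL - c) >= u_bar
        return ir_hm and ir_alt and hH - hL >= max(ic_hm, ic_alt) - 1e-12

    w_hm, w_p = contract(ic_hm), contract(ic_alt)
    return feasible_for_both(*w_hm), feasible_for_both(*w_p)

print(pooling_check(0.2, 0.45, 0.9))   # increasing returns: expect (True, False), w_HM pools
print(pooling_check(0.2, 0.60, 0.9))   # decreasing returns: expect (False, True), w_P pools
```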
However, one must stress that the principal is better off offering a menu of contracts that screens the different preference groups through exclusion. Indeed, since the principal always favors one preference group over the other for any degree of prosociality different from zero (at which both moral and altruistic agents would be identical to purely selfish ones), she maximizes expected profits by hiring only the cheapest group she can incentivize to participate and exert the high effort, while excluding the more expensive group from the relationship altogether.

5. Discussion

The results presented above, in line with the literature on screening prosocial preferences, imply that the principal may be unable to construct a menu of contracts that is successful in screening teams of agents belonging to different preference groups. As a consequence, developing experiments to infer agents’ preferences in a static environment would present the same difficulties.
However, one possible strategy would be to offer the contracts sequentially. To fix ideas, suppose that the production technology exhibits increasing returns to efforts and that $\bar{u}^{Alt} = \bar{u}^{HM}$. Under these circumstances, $C^{HM} \subset C^{Alt}$, as argued in the proof of Lemma 1. If agents are perfectly patient, the principal could offer $w^{Alt}$ in the first period, which would be accepted by all the altruistic agents but not by the moral ones, and only then offer $w^{HM}$ to the remaining agents. Such a sequential mechanism would use time to screen the agents, a channel that is not available in the static model described above.
There are two main issues with such an approach, at least from a theoretical viewpoint. First, if all potential employees were aware that the employer uses the sequential offer mechanism above, altruistic agents would reject $w^{Alt}$ in the first period in order to contract under $w^{HM}$ in the second period and thereby enjoy a higher utility. Clearly, such a deviation by altruistic agents would again leave the principal unable to screen between the two preference groups.
Secondly, the sequential approach relies on the agents being infinitely patient and on the two preference groups displaying the same reservation utility. The mechanism could still be employed when $\bar{u}^{Alt} < (1 + \theta)\bar{u}^{HM}$ and $\delta^{Alt} < \delta^{HM}$, where $\delta^j \in (0, 1)$ denotes the discount factor of group $j \in \{Alt, HM\}$. In this case, if the altruistic group discounts the future much more heavily than its moral counterpart, the mechanism could indeed lead to full screening. Unfortunately, to the best of my knowledge, no research establishes conditions under which different prosocial preferences lead to heterogeneous discount factors.

6. Concluding Remarks

This paper extends the analysis in [1] by relaxing the assumption that the agents’ preferences are common knowledge in the contractual relationship. In particular, the interest lies in characterizing a separating equilibrium in which moral and altruistic individuals reveal their type and exert a high level of effort in the task proposed by the principal.
In effect, the results are negative, but in line with the literature on adverse selection followed by moral hazard: screening prosocial preferences is not possible unless the principal distorts the allocation provided to the least preferred group, either by excluding it from the relationship or by inducing a different level of effort. The empirical implication follows naturally: when the degree of prosociality is the same for the two groups, one cannot distinguish groups of agents characterized by the two classes of preferences described above without relaxing either the participation or the moral hazard incentive compatibility constraint.

Funding

This research received no external funding.

Acknowledgments

I thank Ingela Alger, Renee Bowen, Leandro de Magalhaes, Roger Myerson, François Salanié, Jörgen Weibull, and three anonymous referees for very valuable feedback. Finally, I also thank seminar audiences at Toulouse School of Economics, as well as conference participants at the 2018 Africa Meeting of the Econometric Society.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proofs

Appendix A.1. Proof of Lemma 1

The assumption that $\bar{u}^{Alt} < (1 + \theta)\bar{u}^{HM}$ implies that the participation constraint of one group differs from that of the other. In particular, setting $h = u(w)$ and drawing the participation constraints in the plane $(h^L, h^H)$ shows that the participation constraint of the altruistic group always lies below its counterpart for the moral agents.
The proof then considers two cases in turn. Suppose first that the production technology is characterized by decreasing returns to efforts, i.e., $p_2 - p_1 < p_1 - p_0$. Then, any contract $w \in C^{HM}$ satisfies the participation constraint of altruistic agents with slackness, and thus it is profitable for this group of agents to deviate and take the contract designed for moral agents. Thus, no separating equilibrium exists in this case.
Under increasing returns to efforts, $p_2 - p_1 \geq p_1 - p_0$, on the other hand, one can readily check that $C^{HM} \subset C^{Alt}$. Then, again, all altruistic agents would deviate and choose the contract designed for moral agents, since this contract would satisfy the former group’s participation constraint with slackness.
For $\bar{u}^{Alt} > (1 + \theta)\bar{u}^{HM}$, the proof is similar. If $p_2 - p_1 < p_1 - p_0$, then $C^{Alt} \subset C^{HM}$, and thus every moral agent has an incentive to deviate and take the contract designed for altruistic agents. Conversely, if $p_2 - p_1 > p_1 - p_0$, every contract in $C^{Alt}$ satisfies the participation constraint of the moral agents with slackness, and therefore such agents have incentives to deviate and take the contract designed for the altruistic group.

Appendix A.2. Proof of Lemma 2

Suppose first that $p_2 - p_1 \geq p_1 - p_0$, so that the incentive compatibility constraint of moral agents always lies above the one for altruists. Then $C^{HM} \subset C^{Alt}$, and the latter always prefer a contract designed for the former. If $p_2 - p_1 < p_1 - p_0$, the reverse holds: $C^{Alt} \subset C^{HM}$, and moral agents always prefer the contract designed for altruists over their own.

Appendix A.3. Alternative Proof of Proposition 1

Proposition 1 is a direct consequence of Lemmas 1 and 2. Alternatively, the same result can be reached by the following reasoning. Suppose that $h_{Alt} = (h_{Alt}^H, h_{Alt}^L)$ and $h_{HM} = (h_{HM}^H, h_{HM}^L)$ are contracts that constitute a strictly separating equilibrium. Then, no team of agents has incentives to deviate and take the contract designed for the other group. In particular, this means that for altruistic agents the following inequality must hold:
$$(1 + \theta)\left[p_2 h_{Alt}^H + (1 - p_2) h_{Alt}^L - c\right] > (1 + \theta)\left[p_2 h_{HM}^H + (1 - p_2) h_{HM}^L - c\right], \quad (A1)$$
while
$$p_2 h_{HM}^H + (1 - p_2) h_{HM}^L - c > p_2 h_{Alt}^H + (1 - p_2) h_{Alt}^L - c \quad (A2)$$
must hold for moral agents. Since $\theta \in [0, 1]$, condition (A1) reduces to
$$p_2 h_{Alt}^H + (1 - p_2) h_{Alt}^L - c > p_2 h_{HM}^H + (1 - p_2) h_{HM}^L - c,$$
which, together with (A2), implies that
$$p_2 h_{Alt}^H + (1 - p_2) h_{Alt}^L > p_2 h_{HM}^H + (1 - p_2) h_{HM}^L > p_2 h_{Alt}^H + (1 - p_2) h_{Alt}^L,$$
a contradiction.

Appendix A.4. Proof of Proposition 3

For $\bar{u}^{HM} \neq \frac{\bar{u}^{Alt}}{1 + \theta}$, the proof is analogous to that of Lemma 1. The impossibility of screening through exclusion when $\bar{u}^{HM} = \frac{\bar{u}^{Alt}}{1 + \theta}$ comes from the argument of Proposition 2 if the principal does not wish to induce high effort, and it generalizes straightforwardly to the case in which she wishes to induce effort.

Appendix A.5. Proof of Proposition 5

Under increasing returns to efforts, $C^{HM} \subset C^{Alt}$, and thus $w^{HM} \in C^{Alt}$. In particular, as shown in [1], this contract is the least costly one the principal can offer to moral agents in order to induce both participation and effort.
Under decreasing returns to efforts, $\frac{c}{(1 + \theta)(p_2 - p_1)} > \frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)}$, and thus any contract satisfying the incentive compatibility constraint for altruistic agents automatically satisfies, with slackness, its counterpart for moral agents. Therefore, let us take the most restrictive pair of constraints, namely the participation constraint for moral agents and the incentive compatibility constraint for altruistic agents. Any contract satisfying both constraints, in particular $w^P$, will necessarily belong to both $C^{HM}$ and $C^{Alt}$, so moral and altruistic agents alike accept such a contract and exert the high level of effort.

References

  1. Sarkisian, R. Team Incentives under Moral and Altruistic Preferences: Which Team to Choose? Games 2017, 8, 37. [Google Scholar] [CrossRef]
  2. Alger, I.; Weibull, J.W. Homo Moralis-Preference Evolution Under Incomplete Information and Assortative Matching. Econometrica 2013, 81, 2269–2302. [Google Scholar]
  3. Alger, I.; Weibull, J.W. Evolution and Kantian Morality. Games Econ. Behav. 2016, 98, 56–67. [Google Scholar] [CrossRef]
  4. Alger, I.; Weibull, J.W. Strategic Behavior of Moralists and Altruists. Games 2017, 8, 38. [Google Scholar] [CrossRef] [Green Version]
  5. Laffont, J.J.; Martimort, D. (Eds.) The Theory of Incentives: The Principal-Agent Model, 1st ed.; Princeton University Press: Princeton, NJ, USA, 2002; Volume 1. [Google Scholar]
  6. Bolton, P.; Dewatripont, M. Contract Theory; The MIT Press: Cambridge, MA, USA, 2005; Volume 1. [Google Scholar]
  7. Jullien, B.; Salanié, B.; Salanié, F. Screening risk-averse agents under moral hazard: Single-crossing and the CARA case. Econ. Theory 2007, 30, 151–169. [Google Scholar] [CrossRef] [Green Version]
  8. Ollier, S.; Thomas, L. Ex post participation constraint in a principal-agent model with adverse selection and moral hazard. J. Econ. Theory 2013, 148, 2383–2403. [Google Scholar] [CrossRef]
  9. Maréchal, F.; Thomas, L. The Optimal Contract under Adverse Selection in a Moral-Hazard Model with a Risk-Averse Agent. Games 2018, 9, 12. [Google Scholar] [CrossRef] [Green Version]
  10. Von Siemens, F. Heterogeneous social preferences, screening, and employment contracts. Oxf. Econ. Pap. 2011, 63, 499–522. [Google Scholar] [CrossRef]
  11. Cabrales, A.; Charness, G. Optimal contracts with team production and hidden information: An experiment. J. Econ. Behav. Organ. 2011, 77, 163–176. [Google Scholar] [CrossRef] [Green Version]
  12. Demougin, D.; Fluet, C.; Helm, C. Output and Wages with Inequality Averse Agents. Can. J. Econ. Rev. Can. D’Economique 2006, 39, 399–413. [Google Scholar] [CrossRef] [Green Version]
  13. Grossman, S.J.; Hart, O.D. An Analysis of the Principal-Agent Problem. Econometrica 1983, 51, 7–45. [Google Scholar] [CrossRef] [Green Version]
  14. Bergström, T.C. On the Evolution of Altruistic Ethical Rules for Siblings. Am. Econ. Rev. 1995, 85, 58–81. [Google Scholar]
Figure 1. Timing of the game.
Figure 2. Feasible sets of contracts for $p_2 - p_1 > p_1 - p_0$.
Figure 3. Feasible sets of contracts for $p_2 - p_1 < p_1 - p_0$.
Figure 4. Feasible sets of contracts for different efforts.