Article

Team Incentives under Moral and Altruistic Preferences: Which Team to Choose?

by
Roberto Sarkisian
Toulouse School of Economics, 21 Allee de Brienne, MF 003, Toulouse 31000, France
Submission received: 28 July 2017 / Revised: 27 August 2017 / Accepted: 30 August 2017 / Published: 5 September 2017
(This article belongs to the Special Issue Ethics, Morality, and Game Theory)

Abstract

This paper studies incentives provision when agents are characterized either by homo moralis preferences, i.e., their utility is represented by a convex combination of selfish preferences and Kantian morality, or by altruism. In a moral hazard in a team setting with two agents whose efforts affect output stochastically, I demonstrate that the power of extrinsic incentives decreases with the degrees of morality and altruism displayed by the agents, thus leading to increased profits for the principal. I also show that a team of moral agents will only be preferred if the production technology exhibits decreasing returns to efforts; the probability of a high realization of output conditional on both agents exerting effort is sufficiently high; and either the outside option for the agents is zero or the degree of morality is sufficiently low.

1. Introduction

Teamwork permeates economic activities. In some cases, different skills are needed complementing each other to complete a project; in other cases, division of labor plays a crucial role in timely delivery of a product. Partnerships, group projects and team sports are but a few examples in which individuals team up to achieve a certain goal. With the exception of partnerships, it falls upon an employer to hire the team of employees to fulfill the task at hand. In particular, as expenditure in recruitment and assessment surges1, with U.S. companies spending on average around US $ 4000 per hire, an increase of nearly 15 % in the last four years, it is clear that recruitment and talent research divisions have turned their attention to more than the job applicants’ professional abilities. As a matter of fact, common practice includes the analysis of criminal2 and credit histories3, and more recently, social networks as well4.
While employees’ technical skills are important, interest in their personal characteristics other than job-relevant skills may be related to the now widespread knowledge that economic agents are not purely selfish, often displaying other-regarding preferences [3,4,5,6,7]. The literature in behavioral and experimental economics strongly suggests that social preferences affect outcomes in standard economic models [8,9]. The works in [10,11,12,13] analyze, in particular, the role other-regarding preferences play in interactions among employees in models of the workplace. The main findings in this literature show that the employees’ concerns towards one another affect not only the provision of effort, but also the compensation schemes that are offered. Thus, it is only natural to wonder what an ideal team would look like, for a given set of skills: Would it be a team composed of selfish individuals, whose only concern is their own gains, or maybe a group of altruistic agents, who would be content with increasing their workmates’ wellbeing? Perhaps a crew of moral employees deriving satisfaction in choosing actions they think are the right ones? These are the questions I address in this paper.
In view of the overwhelming experimental evidence of behaviors that are incompatible with purely selfish preferences [4,5,14,15], it is important to understand how prosocial preferences affect behavior in the workplace and, by extension, the design of contracts in the workplace. I propose a model to address this question.
Specifically, I focus on the optimal compensation schemes that should be used in a standard moral hazard setting to incentivize the employees to fulfill their tasks. In doing so, I am able to compare the profits obtained by the employer from a team composed of individuals with different kinds of prosocial preferences. Although I do not study the recruitment process per se, I am able to make predictions about which preferences the principal would prefer.
The framework utilized is the multiagent moral hazard model, as first proposed by [16,17], where a risk-neutral principal hires a team of two risk-averse agents. The agents can exert costly effort in order to stochastically affect the realization of output. By assumption, efforts are simultaneously and independently chosen by the agents and cannot be observed by the principal. On the other hand, output is observable by third parties after being realized and can thus be contracted upon.
In the behavioral economics literature, several classes of prosocial preferences have been proposed. I analyze two of them. The first one is altruism [18]5, a class that has been extensively used in the literature on the voluntary contribution of public goods. This is natural since one can think of efforts made in the context of teamwork in a firm as contributions to a public good (the firm’s profit). Second, in light of recent results by [20,21], who show that a particular, novel, class of preferences stands out as being favored by evolution, I compare the optimal contract under altruism with the optimal contract under this class of homo moralis preferences, a convex combination of selfishness and morality. In sum, my model allows addressing the following questions: if an employer could choose between a team of two moral agents and a team of two altruists, which team would he/she prefer and why6?
I characterize the optimal contract for a team of equally altruistic agents and a team of equally moral agents and compare them. First, I find that the trade-off between risk-sharing and incentive provision is present, as in the case with standard selfish preferences. However, as intuition would suggest, I find that high-powered incentives are less needed to induce effort as the agents become more concerned about the right thing to do or about each other’s material payoff, and that the principal’s expected profit obtained from the interaction with each team is increasing in the team’s degree of morality or altruism. Second, if efforts are symmetric and could be contracted upon, the principal would be better off hiring a team of altruistic agents over the other ones, for any degrees of morality and altruism, because altruism towards one’s partner reduces the payment necessary to induce participation, one effect that is not present with selfish or moral preferences. On the other hand, when efforts are not observable, which team is going to be preferred depends on the production technology: in particular, if the stochastic production technology displays increasing returns to efforts, the altruistic team is the least expensive to hire. This is a consequence of the different nature of each class of preferences. While altruistic agents derive benefits from increased material payoffs of their fellows, moral agents take satisfaction in doing the right thing. Intuitively, a higher effort under increasing returns drastically increases the expected material payoff of the agents, on which altruism is based. Meanwhile, the choice of the right thing to do depends only on the contract offered by the principal, and not on the production’s underlying technology. Therefore, under increasing returns, altruistic agents possess higher intrinsic motivation to exert the high level of effort, thus demanding a less high-powered contract and saving costs for the principal.
This paper is closely related to the moral hazard literature, in particular to two of its strands: moral hazard in teams and moral hazard with prosocial preferences. References [17,23] characterize the basic results on moral hazard in teams that are used to build the model below7. References [25,26,27,28] study optimal incentive schemes under different prosocial preferences: the first three focus on inequity aversion, while the last models agents exhibiting reciprocity concerns towards each other. References [29,30,31] consider moral hazard in team problems where the agents can monitor one another and, therefore, study the effects of peer pressure on the optimal contract design. Furthermore, [32,33,34] study problems where the agents’ prosocial preferences are private information and give conditions for separating equilibria and self-selection by different types. None of them, however, raises the question of which preferences yield the least cost to the principal.
The analysis below differs from the previous literature in three crucial points: first, it considers homo moralis preferences, which has not, to the best of my knowledge, been done before in a contracting setting, thus presenting a simple environment where the principal can profitably explore idiosyncrasies generated by those and altruistic preferences. Second, it does not allow for monitoring, nor private information about the agents’ preferences, so that I can focus solely on the effect of the prosocial preferences on the optimal contract design. Last, and more importantly, the analysis contrasts the optimal contracts under each class of preferences, and derives conditions under which the principal would prefer hiring one team over the other, therefore providing a rationale for firms to collect soft information on potential employees to compose teams that will minimize the total payments to be made.
I proceed as follows. Section 2 presents the environment, while Section 3 and Section 4 study the optimal contract assuming efforts are contractible and non-contractible, respectively. Section 5 concludes. For the ease of exposition, all proofs are relegated to Appendix C.

2. The Model

I analyze the interaction between a principal and two agents, denoted by $i \in \{A, B\}$. The principal hires the two agents to work on a joint task, which generates revenue $x \in \{x_H, x_L\}$ for the principal, where $x_H > x_L$. Each agent can exert either a low or a high effort level, $e_i \in \{0, 1\}$. Efforts determine revenues stochastically, according to the following probability distribution:
$$ \Pr(x = x_H \mid e_A, e_B) = \begin{array}{c|cc} & e_B = 1 & e_B = 0 \\ \hline e_A = 1 & p_2 & p_1 \\ e_A = 0 & p_1 & p_0 \end{array} $$
Throughout, I assume that revenue is never certain and that the probability of achieving a high outcome is increasing in the total effort exerted by the agents: $1 > p_2 > p_1 > p_0 > 0$.
If effort is costless, the assumption above indicates a preference of the principal for both agents to exert effort. However, effort is costly to each agent; for each $i = A, B$,
$$ c(e_i) = \begin{cases} c > 0 & \text{if } e_i = 1, \\ 0 & \text{if } e_i = 0. \end{cases} $$
The principal offers the agents contracts $w_i(x)$, $i = A, B$, specifying the payments that will follow each realization of revenues. The principal is assumed to be risk neutral, and his/her payoff is given by:
$$ V(x, w_A(x), w_B(x)) = x - w_A(x) - w_B(x). $$
Denote by $\pi(e_i, e_j, w_i(x))$ the expected material payoff accruing to agent i from the effort choices $(e_i, e_j)$ and wage schedule $w_i(x)$, for $i, j \in \{A, B\}$, $i \neq j$. I restrict attention to pairs of wage schedules $w_i = (w_i^H, w_i^L)$ determining the payments following good and bad realizations of revenues. In what follows, the material payoff function takes the expected additively separable form:
$$ \pi(e_i, e_j, w_i) = p_{e_i + e_j}\left[u(w_i^H) - c(e_i)\right] + \left(1 - p_{e_i + e_j}\right)\left[u(w_i^L) - c(e_i)\right], $$
where $u: \mathbb{R}_+ \to \mathbb{R}$ is the function that associates the agent's consumption utility with each amount of money. The dependence of i's expected material payoff on $e_j$ comes from the effect of the other agent's effort on the probability distribution of revenues. The agents are risk averse towards wages: $u(w)$ is assumed to be twice continuously differentiable, strictly increasing and strictly concave.
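To fix ideas, the model's primitives can be written out directly. The sketch below (Python, with an illustrative square-root utility and made-up probabilities and cost; these are assumptions for illustration, not values from the paper) encodes the success probabilities, the effort cost and the expected material payoff $\pi$.

```python
import math

# Illustrative parameter values (assumptions for this sketch, not taken from the paper).
P = {2: 0.8, 1: 0.6, 0: 0.4}   # P[e_i + e_j] = Prob(x = x_H | total effort)
c = 0.1                         # cost of exerting high effort

def u(w):
    """Consumption utility: strictly increasing and strictly concave (illustrative choice)."""
    return math.sqrt(w)

def cost(e):
    return c if e == 1 else 0.0

def pi(e_i, e_j, w_hi, w_lo):
    """Expected material payoff of agent i under the wage schedule (w^H, w^L)."""
    p = P[e_i + e_j]
    return p * (u(w_hi) - cost(e_i)) + (1 - p) * (u(w_lo) - cost(e_i))

print(pi(1, 1, 0.5, 0.1))   # e.g. both agents exert effort under an arbitrary wage pair
```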
The principal faces either a team consisting of two agents characterized by homo moralis preferences with degree of morality $\kappa_i \in [0, 1]$, represented by the utility function:
$$ U_{HM}(e_i, e_j, w_i, \kappa_i) = (1 - \kappa_i)\,\pi(e_i, e_j, w_i) + \kappa_i\,\pi(e_i, e_i, w_i), $$
or a team comprised of two altruistic agents, whose preferences are summarized by the utility function:
$$ U_{Alt}(e_i, e_j, w_i, w_j, \alpha_i) = \pi(e_i, e_j, w_i) + \alpha_i\,\pi(e_j, e_i, w_j), $$
for $\alpha_i \in [0, 1]$ and $i, j \in \{A, B\}$, $i \neq j$. Both specifications take the standard selfish preferences as a special case ($\kappa_i = \alpha_i = 0$), and this will allow comparisons between the results presented below and the benchmark moral hazard problem.
As pointed out in [20,22], this specification of preferences for altruistic agents gives rise to a behavioral equivalence between homo moralis preferences and altruism for $\alpha_i = \kappa_i$ in many classes of games. With that in mind, I will make the following assumption for the rest of the exposition.
Assumption 1.
$\alpha_A = \alpha_B = \kappa_A = \kappa_B = \theta$.
Thus, the agents’ utility functions are simplified to:
$$ U_{HM}(e_i, e_j, w_i, \theta) = (1 - \theta)\,\pi(e_i, e_j, w_i) + \theta\,\pi(e_i, e_i, w_i), $$
$$ U_{Alt}(e_i, e_j, w_i, w_j, \theta) = \pi(e_i, e_j, w_i) + \theta\,\pi(e_j, e_i, w_j). $$
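The two utility specifications under Assumption 1 translate directly into code. The sketch below restates the illustrative primitives from above and then defines $U_{HM}$ and $U_{Alt}$; parameter values are again made up for illustration.

```python
import math

P = {2: 0.8, 1: 0.6, 0: 0.4}    # illustrative Prob(x_H | e_i + e_j)
c = 0.1                          # illustrative effort cost

def u(w): return math.sqrt(w)    # illustrative concave utility

def pi(e_i, e_j, w_hi, w_lo):
    """Expected material payoff of agent i for the wage pair (w_hi, w_lo)."""
    p = P[e_i + e_j]
    return p * u(w_hi) + (1 - p) * u(w_lo) - (c if e_i == 1 else 0.0)

def U_HM(e_i, e_j, w_i, theta):
    """Homo moralis: convex combination of the material payoff and its Kantian counterpart."""
    return (1 - theta) * pi(e_i, e_j, *w_i) + theta * pi(e_i, e_i, *w_i)

def U_Alt(e_i, e_j, w_i, w_j, theta):
    """Altruism: own material payoff plus a weight theta on the partner's."""
    return pi(e_i, e_j, *w_i) + theta * pi(e_j, e_i, *w_j)

print(U_HM(1, 1, (0.5, 0.1), 0.5), U_Alt(1, 1, (0.5, 0.1), (0.5, 0.1), 0.5))
```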
The relationship among the three parties unfolds as follows. First, the principal offers each agent a contract w i , which can be either accepted or rejected by the agents. If at least one agent rejects the contract, the game ends, and every party receives his/her own reservation utility. If both agents accept the principal’s offers, they play a normal form game10: both of them must simultaneously and independently choose an effort level, from which revenues will be realized according to the probability distribution given by the production technology above. Payments are made according to the schedules proposed by the firm, and the agents’ payoffs in the normal form game are given by their expected utilities with regard to received wages, efforts and preferences. While each agent’s effort choice is private information, revenues and wages are publicly observable. It is also assumed that the agents’ preferences are common knowledge11.

3. Studying the Benchmark: The Contractible Effort Case

As a starting point, I derive the optimal contract assuming efforts are observable and contractible by the principal, to serve as a benchmark for later results. In what follows, I assume that each agent possesses an outside option that gives him/her utility $\bar u \ge 0$ if he/she does not accept the principal's contract offer. Therefore, agent $i \in \{A, B\}$ is willing to participate in the proposed relationship iff:
$$ U(e_i, e_j, w_i, w_j, \theta) \ge \bar u. $$
As discussed in the previous section, standard selfish preferences are a particular case of both homo moralis and altruistic preferences, and for the ease of exposition, I begin this and the next section by analyzing the optimal contract for that instance. Thus, under contractible efforts, the standard Borch rule:
$$ \frac{p(e_i, e_j)}{u'(w_i^H)\,p(e_i, e_j)} = \frac{1 - p(e_i, e_j)}{u'(w_i^L)\left[1 - p(e_i, e_j)\right]} $$
gives:
$$ (BR_S)\qquad \frac{u'(w_i^H)}{u'(w_i^L)} = 1, $$
which implies $w_i^H = w_i^L = w_i = u^{-1}\left(\bar u + c(e_i)\right)$ for $e_i \in \{0, 1\}$. The intuition here is the same as in the classical moral hazard problem with one principal and one agent: if effort is contractible, the principal optimally offers a constant wage schedule remunerating the agent according to his/her reservation utility and the cost of the principal's desired level of effort.
When the principal faces a team of altruistic agents, he/she solves:
$$
\begin{aligned}
\max_{w_A, w_B}\quad & p(e_A, e_B)\left[x_H - w_A^H - w_B^H\right] + \left(1 - p(e_A, e_B)\right)\left[x_L - w_A^L - w_B^L\right] \\
\text{s.t.}\ (IR_A):\quad & p(e_A, e_B)\,u(w_A^H) + \left(1 - p(e_A, e_B)\right)u(w_A^L) - c(e_A) \\
& \quad + \theta\left[p(e_A, e_B)\,u(w_B^H) + \left(1 - p(e_A, e_B)\right)u(w_B^L) - c(e_B)\right] \ge \bar u \\
(IR_B):\quad & p(e_A, e_B)\,u(w_B^H) + \left(1 - p(e_A, e_B)\right)u(w_B^L) - c(e_B) \\
& \quad + \theta\left[p(e_A, e_B)\,u(w_A^H) + \left(1 - p(e_A, e_B)\right)u(w_A^L) - c(e_A)\right] \ge \bar u
\end{aligned}
$$
An interior solution is characterized by the KKT first-order conditions:
$$
\begin{aligned}
-p(e_i, e_j) + \lambda_i\,p(e_i, e_j)\,u'(w_i^H) + \lambda_j\,\theta\,p(e_i, e_j)\,u'(w_i^H) &= 0, \\
-\left[1 - p(e_i, e_j)\right] + \lambda_i\left[1 - p(e_i, e_j)\right]u'(w_i^L) + \lambda_j\,\theta\left[1 - p(e_i, e_j)\right]u'(w_i^L) &= 0,
\end{aligned}
$$
so the Borch rule becomes:
$$ (BR_{Alt})\qquad \frac{u'(w_i^H)}{u'(w_i^L)} = 1, $$
for any choices of effort ( e A , e B ) { 0 , 1 } 2 and i , j { A , B } , i j . Therefore, if agents are altruistic, the optimal contract under verifiable efforts proposes a constant wage schedule, just as was the case in the benchmark selfish preferences, given by:
$$ w_{Alt} = u^{-1}\left(\frac{\bar u}{1 + \theta} + c(e_i)\right), $$
which is well-defined for all $\theta \in [0, 1]$. It is easy to see that for $\theta = 0$, this is exactly the same expression as for the optimal contract under verifiable efforts in the benchmark case, while for any positive degree of altruism it is lower than it would be for selfish agents. Intuitively, each altruistic agent recognizes that, by participating in the relationship, his partner's material payoff increases, and therefore the wage demanded for participation declines as the degree of altruism becomes larger.
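As a small numerical check of this comparison (and a preview of Proposition 2), the sketch below uses the illustrative square-root utility and made-up values of $\bar u$, c and $\theta$, so that $u^{-1}(y) = y^2$; none of these numbers come from the paper.

```python
u_bar, c = 0.5, 0.1                        # illustrative outside option and effort cost
u_inv = lambda y: y ** 2                   # inverse of u(w) = sqrt(w)

for theta in (0.0, 0.3, 0.8):
    w_benchmark = u_inv(u_bar + c)                      # constant wage for selfish or moral agents
    w_altruists = u_inv(u_bar / (1 + theta) + c)        # constant wage for altruistic agents
    print(theta, w_benchmark, round(w_altruists, 4))    # the altruistic wage is strictly lower for theta > 0
```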
Finally, consider the team with homo moralis preferences, in which agent i { A , B } is willing to participate in the proposed relationship iff:
$$ U_{HM}(e_i, e_j, w_i, \theta) = (1 - \theta)\,\pi(e_i, e_j, w_i) + \theta\,\pi(e_i, e_i, w_i) \ge \bar u, $$
i.e., iff
$$ u(w_i^H)\left[(1 - \theta)\,p(e_i, e_j) + \theta\,p(e_i, e_i)\right] + u(w_i^L)\left[(1 - \theta)\left(1 - p(e_i, e_j)\right) + \theta\left(1 - p(e_i, e_i)\right)\right] - c(e_i) \ge \bar u. $$
Some points are noteworthy. First, as mentioned before, in the case where $\theta = 0$, this participation constraint reduces to the usual $(IR)$ constraint in the benchmark moral hazard problem, since the selfish preference is a particular case of this framework. Second, for $\theta = 1$, $U_{HM}(e_i, e_j, w_i, 1) = \pi(e_i, e_i, w_i)$. In this case, agent i's choice of effort does not depend on agent j's effort choice, and choosing $e_i$ becomes an individual decision problem. Third, if $e_i = e_j = e \in \{0, 1\}$, the participation constraint collapses into:
$$ p(e, e)\,u(w_i^H) + \left(1 - p(e, e)\right)u(w_i^L) - c(e) \ge \bar u. $$
Note here that the agents’ degrees of morality are irrelevant, and the participation constraints are exactly the same as those that would be obtained in a symmetric equilibrium in the benchmark moral hazard problem: by imposing e i = e j , both expected material payoffs terms are identical, and since the utility function is constructed as a convex combination of these functions, the expressions above are obtained.
By Assumption 1, every agent in each team is identical to his/her partner, since the only source of heterogeneity in the general formulation was given by the preferences. Therefore, I will restrict attention to symmetric choices of effort e A = e B in the rest of the discussion13.
Proposition 1.
Suppose Assumption 1 holds. Then, there exists $c^\star > 0$ such that, for all $c \in (0, c^\star)$, the principal induces the agents in both the moral and the altruistic team to exert high effort by means of a constant wage.
This result is not surprising: if efforts are contractible, the principal compensates the agents with a fixed transfer in case they exert the desired level of effort or punishes them if there is a deviation. Furthermore, if the cost of exerting effort is small, then the amount the principal has to transfer back to the agents in order to have an increased chance of obtaining a high realization of revenues is also small and, thus, profitable to implement. Moreover, since I restrict attention to symmetric equilibria, the degree of morality plays no role when I consider a team of homo moralis agents: the compensation schedule and effort choices are exactly the same as those obtained in the benchmark problem.
Now, I can focus on the central question of the paper: given the optimal contracts that induce the desired level of effort, which team should the principal hire?
Proposition 2.
Suppose Assumption 1 holds, efforts are verifiable and the principal wants both agents to exert the high effort. For any $\theta \in [0, 1]$, the principal prefers hiring the team of altruistic agents over the team of selfish or moral agents.
Intuitively, in situations where someone fails to do the right thing, a moral agent derives part of his/her utility from contemplating what would happen if everyone did the right thing. If both agents do in fact exert high effort, the contemplation in question does not add utility beyond the material utility that the agents thus obtain. By contrast, for altruistic agents, any choice but high effort decreases the material payoff of both agents and, consequently, all of the utility of each altruistic employee. Therefore, intrinsic motivation is larger for altruistic agents, and a team comprised of such employees is less costly for the principal.

4. Moving to the Second Best: Non-Contractible Efforts

Throughout the rest of the exposition, I focus on contracts that induce both agents to participate in the relationship and also exert the high level of effort ( e = 1 ).
As a benchmark, focus first on standard selfish preferences. If efforts are non-contractible and the principal wishes to induce both agents to exert effort, he/she must solve, for i , j { A , B } , i j ,
$$
\begin{aligned}
\max_{w_A, w_B}\quad & p_2\left(x_H - w_A^H - w_B^H\right) + (1 - p_2)\left(x_L - w_A^L - w_B^L\right) \\
\text{s.t.}\ (IC_i):\quad & p_2\left[u(w_i^H) - c\right] + (1 - p_2)\left[u(w_i^L) - c\right] \ge p_1\,u(w_i^H) + (1 - p_1)\,u(w_i^L), \\
(IR_i):\quad & p_2\left[u(w_i^H) - c\right] + (1 - p_2)\left[u(w_i^L) - c\right] \ge \bar u.
\end{aligned}
$$
Manipulating the incentive compatibility constraint yields:
$$ (IC_i)\qquad u(w_i^H) - u(w_i^L) \ge \frac{c}{p_2 - p_1}. $$
By assumption, $c > 0$ and $p_2 > p_1$. Thus, the incentive compatibility constraint implies a monotonicity constraint on the wages following a good and a bad realization of output, since $u(\cdot)$ is assumed to be strictly increasing. Standard arguments show that both the incentive compatibility and the individual rationality constraints must bind at the optimum, so that the solution to the principal's problem is a contract $w_S = (w_S^H, w_S^L)$ such that:
$$ u(w_S^L) = \bar u - \frac{p_1\,c}{p_2 - p_1}, \qquad u(w_S^H) = \bar u + \frac{(1 - p_1)\,c}{p_2 - p_1}. $$
Given the incentive compatibility constraint, it is clear that $\Delta w_S \equiv w_S^H - w_S^L > 0$.
Of course, if the principal wishes to induce the agents not to exert effort, a constant wage schedule $w^H = w^L = w = u^{-1}(\bar u)$ would be optimal. Comparing the principal's profits when the agents exert effort and when they shirk shows that the former is preferred by the employer for any $c \le \bar c_S$, for some threshold $\bar c_S > 0$.
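The benchmark contract can be recovered in closed form from the binding constraints; the minimal sketch below does so numerically under illustrative parameters and the square-root utility used in the earlier sketches.

```python
u_bar, c = 0.5, 0.1
p2, p1 = 0.8, 0.6                         # illustrative success probabilities
u_inv = lambda y: y ** 2                  # inverse of u(w) = sqrt(w)

u_lo = u_bar - p1 * c / (p2 - p1)         # binding (IR) and (IC) solved for u(w_S^L)
u_hi = u_bar + (1 - p1) * c / (p2 - p1)   # and for u(w_S^H)
w_S = (u_inv(u_hi), u_inv(u_lo))

assert u_hi - u_lo >= c / (p2 - p1) - 1e-12                    # (IC) holds with equality
assert abs(p2 * u_hi + (1 - p2) * u_lo - c - u_bar) < 1e-12    # (IR) binds
print(w_S)
```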
Under altruistic preferences for the agents, the principal’s problem is:
$$
\begin{aligned}
\max_{w^H, w^L}\quad & p_2\left(x_H - 2w^H\right) + (1 - p_2)\left(x_L - 2w^L\right) \\
\text{s.t.}\ (IR):\quad & (1 + \theta)\left[p_2\,u(w^H) + (1 - p_2)\,u(w^L) - c\right] \ge \bar u \\
(IC):\quad & (1 + \theta)\left[p_2\,u(w^H) + (1 - p_2)\,u(w^L) - c\right] \\
& \quad \ge p_1\,u(w^H) + (1 - p_1)\,u(w^L) + \theta\left[p_1\,u(w^H) + (1 - p_1)\,u(w^L) - c\right].
\end{aligned}
$$
Rewrite the incentive compatibility constraint as:
$$ u(w^H) - u(w^L) \ge \frac{c}{(1 + \theta)(p_2 - p_1)}, $$
and notice that the right-hand side is strictly decreasing in the degree of altruism $\theta$. The intuition behind this is the trade-off between explicit and intrinsic incentives. Indeed, as the agent cares less about his/her own material payoff relative to that of his/her teammate, the intrinsic incentive derived from an increase in the probability of a high realization of output (and the consequent increase in the expected material benefit of his/her partner) plays a larger role, relative to the explicit incentives given by a high-powered contract, in inducing the agent to exert the high level of effort.
The proposition below characterizes the optimal contract14.
Proposition 3.
Suppose Assumption 1 holds. There exists $\bar c_{Alt} > \bar c_S$ such that, for all $c < \bar c_{Alt}$, it is optimal for the principal to induce both altruistic agents to exert effort, $e_A = e_B = 1$, by means of a contract $w_{Alt} = (w_{Alt}^H, w_{Alt}^L)$ such that:
$$ \Delta w_{Alt} \equiv w_{Alt}^H - w_{Alt}^L \le \Delta w_S $$
for all $\theta \in [0, 1]$, with strict inequality for any $\theta > 0$.
Close inspection of the incentive compatibility constraint shows that any contract that would induce a selfish agent to exert the high effort would also induce an altruistic employee to do the same. Furthermore, for any given contract, an increase in θ would increase the utility of each agent. Hence, the principal can profit by reducing both wages15. This argument is formally stated below.
Corollary 1.
Suppose Assumption 1 holds. Then, the principal’s expected profits are strictly increasing in θ.
Last, I consider homo moralis preferences. The principal must choose wage vectors w A , w B to solve:
$$
\begin{aligned}
\max_{w_A, w_B}\quad & p_2\left[x_H - w_A^H - w_B^H\right] + (1 - p_2)\left[x_L - w_A^L - w_B^L\right] \\
\text{s.t.}\ (IC_i):\quad & (1 - \theta)\left[p_2\left(u(w_i^H) - c\right) + (1 - p_2)\left(u(w_i^L) - c\right)\right] + \theta\left[p_2\left(u(w_i^H) - c\right) + (1 - p_2)\left(u(w_i^L) - c\right)\right] \\
& \quad \ge (1 - \theta)\left[p_1\,u(w_i^H) + (1 - p_1)\,u(w_i^L)\right] + \theta\left[p_0\,u(w_i^H) + (1 - p_0)\,u(w_i^L)\right] \\
(IR_i):\quad & (1 - \theta)\left[p_2\left(u(w_i^H) - c\right) + (1 - p_2)\left(u(w_i^L) - c\right)\right] + \theta\left[p_2\left(u(w_i^H) - c\right) + (1 - p_2)\left(u(w_i^L) - c\right)\right] \ge \bar u
\end{aligned}
$$
for i = A , B . Note that the individual rationality constraint can be rewritten in the simpler form:
$$ p_2\,u(w_i^H) + (1 - p_2)\,u(w_i^L) - c \ge \bar u, $$
since, in equilibrium, the principal’s offer induces the symmetric effort choice e A = e B = 1 .
The incentive compatibility constraint also simplifies to:
$$ p_2\,u(w_i^H) + (1 - p_2)\,u(w_i^L) - c \ge (1 - \theta)\left[p_1\,u(w_i^H) + (1 - p_1)\,u(w_i^L)\right] + \theta\left[p_0\,u(w_i^H) + (1 - p_0)\,u(w_i^L)\right]. $$
The right-hand side of this inequality highlights an interesting fact: a positive degree of morality implies the agent internalizes the cost of choosing a low effort by evaluating what would happen if the other agent also were to make the same decision. This is very different in nature to how an altruistic agent evaluates any deviation: while the latter considers only the effects of his/her own deviation on his/her own material payoff and on his/her partner’s, the former would consider the effect of the same deviation being made by his/her partner on his/her own payoff. Besides, by force of the assumptions presented above, further manipulation of ( I C i ) yields:
$$ u(w_i^H) - u(w_i^L) \ge \frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)}, $$
where $\frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)} > 0$. Because $u(\cdot)$ is strictly increasing, the incentive compatibility constraint for moral agents also implies a monotonicity condition on the optimal compensation schedules offered by the principal, even when the agents display the highest degree of morality. This last remark implies that the intrinsic incentives of the most moral agent are not sufficiently large to overcome the need to provide him/her with explicit incentives to exert the high level of effort.
Proposition 4.
Suppose Assumption 1 holds. There exists $\bar c_{HM} > \bar c_S$ such that, for all $c < \bar c_{HM}$, it is optimal for the principal to induce both moral agents to exert effort, $e_A = e_B = 1$, by means of a contract $w_{HM} = (w_{HM}^H, w_{HM}^L)$ such that:
$$ \Delta w_{HM} \equiv w_{HM}^H - w_{HM}^L \le \Delta w_S $$
for all $\theta \in [0, 1]$, with strict inequality for $\theta > 0$.
The intuition behind the monotonicity constraint is the same as in the benchmark model: if wages following a low realization of revenues were larger than their counterpart after a good realization, then agents would prefer to exert low effort in order to receive this higher compensation and save in the cost of exerting effort.
The novelty in the results relates to how the compensation schedules vary with the degree of morality $\theta$. Keeping in mind that $u' > 0$, one can see that as $\theta$ increases, the right-hand side of $(IC_i)$ becomes ever smaller, albeit positive. This implies that the gap between wages following good and bad realizations of revenues must decrease, since the incentive compatibility constraint binds but monotonicity still holds. Intuitively, the principal can reduce the compensation over high realizations of revenues given to an agent who is very concerned about doing the right thing. However, at the same time, he/she must increase wages after bad outcomes in order to satisfy the participation constraint.
Because of this diminishing wage gap, intuition would suggest that the first-best result is obtained for a sufficiently high degree of morality. However, this is not the case. To see this, take θ = 1 , where agent i’s preferences are purely Kantian, and thus, his/her utility is completely characterized by the expected material payoff π ( e i , e i ) . Although the participation constraint does not vary with the agent’s degree of morality16, the same is not true for the incentive compatibility constraint. Now, when considering the pros and cons of a deviation in terms of effort choice, agent i internalizes what would happen if agent j were to do the same. Specifically, this entails a reduction in the probability of the good revenue being realized from p 2 to p 0 , instead of the reduction to p 1 in the selfish term. This internalization is reflected in the ( I C i ) constraint, which becomes:
$$ u(w_i^H) - u(w_i^L) = \frac{c}{p_2 - p_0} > 0. $$
The denominator on the right-hand side is exactly the difference in the probabilities discussed above. Taking θ = 1 makes the incentive compatibility constraint for a team of moral agents as easy to satisfy as possible, but it still binds, thus pushing the optimal contract away from the first-best one (constant wage schedule).
Given this behavior of wage schedules with respect to the degree of morality, a natural question to be asked is whether the principal is better off with highly moral agents or not. The answer is unconditional and presented in the following result.
Corollary 2.
The principal’s expected profit is strictly increasing in θ.
Corollary 2 contrasts with the contractible effort case, where the principal's profits were identical when hiring a team of selfish agents or a team of moral agents, for any degree of morality the latter would display. Mathematically, the result is a consequence of the individual rationality constraints being identical in both cases, while the incentive compatibility constraint has a smaller right-hand side under moral agents than under selfish ones. Intuitively, the principal exploits the agents' morality, as he/she did with altruistic employees, to induce high effort by means of less high-powered incentives, while inducing participation with a slightly increased payment after a bad realization of output. Thus, one concludes that the expected savings in wages after a good realization made by the principal by choosing a high-morality agent offset the expected increase in payments after low revenues.
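To see this pattern in numbers, a minimal sketch (illustrative parameters and the square-root utility from the earlier sketches; not the paper's calibration) solves the binding constraints for the moral team and reports how the two wages and the principal's profit move with θ.

```python
def moral_contract(theta, u_bar=0.5, c=0.1, p2=0.8, p1=0.6, p0=0.4):
    """Symmetric homo moralis contract from the binding (IR) and (IC), with u(w) = sqrt(w)."""
    gap = c / ((p2 - p1) + theta * (p1 - p0))    # required utility gap u(w^H) - u(w^L)
    u_lo = u_bar + c - p2 * gap                  # binding participation constraint
    return (u_lo + gap) ** 2, u_lo ** 2          # invert u(w) = sqrt(w)

for theta in (0.0, 0.5, 1.0):
    w_hi, w_lo = moral_contract(theta)
    profit = 0.8 * (2.0 - 2 * w_hi) + 0.2 * (1.0 - 2 * w_lo)
    # w^H falls, w^L rises, and the principal's expected profit increases with theta
    print(theta, round(w_hi, 4), round(w_lo, 4), round(profit, 4))
```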
One remark is in order here. Because of the assumption that $1 > p_2 > p_1 > p_0 > 0$ and the monotonicity condition implied by the incentive compatibility constraints for each preference class, it is the case that $U(0, 1, w; \theta) \ge U(0, 0, w; \theta)$, where $w$ is the optimal contract offered by the principal. Thus, using the $(IC)$ constraints again, I have that $U(1, 1, w; \theta) \ge U(0, 0, w; \theta)$, so the agents have no incentive to jointly deviate to shirking. The same is also true for altruistic agents.
So far, I have showed that the principal can attain higher profits by exploiting the agents’ morality or altruism, thus reducing high-powered explicit incentives in the optimal contract in such a way that participation and incentives to exert high effort are still satisfied. Therefore, from the employer’s perspective, knowing which class of preferences demands the least amount of explicit incentives is crucial. Lemma 1 tells us that the answer to that question depends on the stochastic production technology.
Lemma 1.
Under Assumption 1, $\left[u(w_{HM}^H) - u(w_{HM}^L)\right] - \left[u(w_{Alt}^H) - u(w_{Alt}^L)\right]$ has the same sign as $(p_2 - p_1) - (p_1 - p_0)$.
In other words, if the stochastic technology presents increasing returns to aggregate efforts, the optimal contract under homo moralis preferences is (weakly) more high-powered than its counterpart under altruism when $\kappa = \alpha$: any contract inducing moral agents to exert high effort would do the same for altruistic employees. The converse is true if the technology has decreasing returns to efforts. This can be seen in Figure 1. The middle (green) line represents the incentive compatibility constraint for altruistic agents, whose form is not affected by the production technology. The top (red) and bottom (blue) lines are the graphical representations of the $(IC)$ constraint for moral agents when $p_2 - p_1 > p_1 - p_0$ and $p_2 - p_1 < p_1 - p_0$, respectively.
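The sign comparison in Lemma 1 is easy to check numerically. The sketch below compares the required utility gaps under an increasing-returns and a decreasing-returns technology, using illustrative probabilities.

```python
def utility_gaps(theta, c, p2, p1, p0):
    gap_hm = c / ((p2 - p1) + theta * (p1 - p0))      # binding (IC) for moral agents
    gap_alt = c / ((1 + theta) * (p2 - p1))           # binding (IC) for altruistic agents
    return gap_hm, gap_alt

# Increasing returns: p2 - p1 > p1 - p0  ->  moral agents require the larger utility gap.
print(utility_gaps(theta=0.5, c=0.1, p2=0.9, p1=0.6, p0=0.5))
# Decreasing returns: p2 - p1 < p1 - p0  ->  altruistic agents require the larger utility gap.
print(utility_gaps(theta=0.5, c=0.1, p2=0.9, p1=0.7, p0=0.3))
```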
Given the result above, one would expect the principal's expected payoff to be uniformly higher when the agents are altruistic rather than moral if the production technology presents increasing returns to efforts, and the opposite when decreasing returns are present. The flaw in this logic is that it ignores the effect of the binding individual rationality constraints, which, under contractible efforts, implied that the team of altruistic agents was always the least expensive to hire. In particular, remember that for altruistic agents, the outside option $\bar u$ is divided by $1 + \theta$ in the participation constraint, a factor that is not present under selfish and moral preferences. This implies that $w_{Alt}^L$ should also be smaller than $w_{HM}^L$. However, the principal has clear preferences over the composition of the team, and the result below states precisely when one team is preferred over the other.
Theorem 1.
Assume the principal offers contracts w H M and w A l t to homo moralis and altruistic agents, respectively, inducing them to exert the high level of effort. Furthermore, assume Assumption 1 holds. Then, if the stochastic production technology exhibits:
1. increasing returns to efforts ($p_2 - p_1 \ge p_1 - p_0$), the principal is better off hiring a team of altruistic agents over a team of moral agents;
2. decreasing returns to efforts ($p_2 - p_1 < p_1 - p_0$) and
  • the outside option is zero ($\bar u = 0$), the principal prefers a team of moral agents; or
  • the outside option is positive and the degree of morality is sufficiently low ($\bar u > 0$, $\kappa$ close to $0$), the principal prefers a team of moral agents only if $p_2 > \bar p_2 \in (0, 1)$.
Under increasing returns to efforts, an altruistic team is less expensive for the principal for two reasons. First, the wage that must be paid after a bad realization of output is smaller than its counterparts under selfish or moral preferences; this is a consequence of the fact that the altruist's concern for the payoff of his/her partner slackens the participation constraint. Second, such a concern also slackens the incentive compatibility constraint in this case, because exerting effort drastically increases the probability of success, thus providing implicit incentives for the altruistic worker to exert effort and requiring a less high-powered contract from the employer.
Such a difference in intrinsic incentives to exert effort disappears when the production technology has constant returns, so that the power of the contract is the same for both teams. However, it is still the case that the principal exploits the fact that altruistic agents derive utility from each other's material payoff and can thus pay them less.
The third case, with decreasing returns to efforts, is the most interesting, because the principal's preference results from the net effect of two opposing forces. While it is still true that $w_{Alt}^L \le w_{HM}^L$, Lemma 1 states that the power of the contract required by moral agents is now smaller than that required by altruistic agents. In the range where this reduction is most drastic, the principal prefers the team of moral agents to the altruistic one. The first condition for this to happen is that the probability of success when both agents exert effort is sufficiently high, as can be seen from the incentive compatibility constraints. The second condition is that either the outside option of the agents is zero or, if it is positive, the degree of morality or altruism is close to zero. In both cases, the participation constraints for moral and altruistic agents become arbitrarily close (identical if $\bar u = 0$), so that the exploitability of altruistic preferences described in the preceding paragraph becomes small, and the principal profits by hiring the agents demanding the least powered contracts: the moral agents in this case.
Figure 2a–c provides an example of Theorem 1 for $u(w) = \sqrt{w}$. Figure 2a represents the case where the production technology exhibits increasing returns to efforts. With the exception of $\theta = 0$, where both teams are identical to a team of selfish agents, the principal's profit is higher with a team of altruistic agents ($V_{Alt}$) than with a team of moral agents ($V_{HM}$).
Figure 2b exemplifies the case with decreasing returns to efforts, a zero outside option for the agents and a high probability of success if both agents exert effort (namely, I set $p_2 = 0.9$). As Theorem 1 states, under these conditions, $V_{HM}(\theta) \ge V_{Alt}(\theta)$ for all equal degrees of morality and altruism.
Finally, Figure 2c plots the ratio $V_{HM}/V_{Alt}$ for decreasing returns to efforts and $\bar u = 0.2$. Contrary to the previous case where $\bar u = 0$, the difference in the participation constraints for moral and altruistic agents makes it unprofitable for the employer to hire the moral team once $\theta$ becomes large, since the decrease in $w_{Alt}^L$ is then, in expected terms, sufficient to compensate for the savings related to the power of the contract. This is represented by the region of the figure in which $V_{HM}/V_{Alt} < 1$.
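The qualitative pattern of Figure 2 can be reproduced along the following lines. The sketch uses the square-root utility together with made-up parameter values chosen only for illustration; they are not the values behind the published figures.

```python
def contract(team, theta, u_bar, c, p2, p1, p0):
    """Second-best wages (w^H, w^L) from the binding (IR) and (IC), with u(w) = sqrt(w)."""
    if team == "HM":
        gap = c / ((p2 - p1) + theta * (p1 - p0))
        base = u_bar + c
    else:  # altruistic team
        gap = c / ((1 + theta) * (p2 - p1))
        base = u_bar / (1 + theta) + c
    return (base + (1 - p2) * gap) ** 2, (base - p2 * gap) ** 2

def profit(team, theta, u_bar=0.3, c=0.05, p2=0.6, p1=0.5, p0=0.1, xH=2.0, xL=1.0):
    w_hi, w_lo = contract(team, theta, u_bar, c, p2, p1, p0)
    return p2 * (xH - 2 * w_hi) + (1 - p2) * (xL - 2 * w_lo)

# Decreasing returns (p2 - p1 < p1 - p0) with a positive outside option: with these
# made-up numbers the moral team is preferred at low theta and the altruistic team at high theta.
for theta in (0.1, 0.5, 0.9):
    print(theta, round(profit("HM", theta), 4), round(profit("Alt", theta), 4))
```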
Therefore, a rationale in terms of the principal’s expected profits is given for trying to sort employees with respect to their preferences. If the production technology exhibits increasing returns with respect to efforts, the principal’s choice is straightforward: always choose to employ altruistic agents. However, if the condition does not hold, employing moral individuals may lead to higher profits in comparison to both altruistic and purely selfish agents.

5. Concluding Remarks

This paper presents a comparison between the optimal contracts offered to teams of agents, who may be characterized by either homo moralis preferences or altruism towards each other. These contracts were explored in situations where the teams have only two agents with binary choices of efforts, affecting stochastically the revenues accrued by the principal.
Under contractible efforts, I show that altruistic agents are more exploitable by the principal, in the sense that the employer needs to pay a smaller wage to induce the participation of those agents than he/she would to hire a team of selfish or moral employees. When efforts are no longer contractible, this exploitability also shows up for moral agents, and I show that the larger the degree of altruism or morality displayed by the members of each team, the higher the expected profits for the principal. The natural question, then, is which class of preferences requires smaller wages to induce effort and participation in the contractual relationship.
The main finding is that the principal obtains a higher expected profit hiring a team composed of moral agents only under restrictive conditions: the stochastic technology exhibits decreasing returns with respect to efforts; the probability of success when both agents exert effort is sufficiently high; and either the outside option of the agents yields zero utility or the degree of morality is sufficiently low.
It is noteworthy that even in such a simple environment, prosocial preferences affect the contractual design, by adding a third channel to the traditional trade-off between risk-sharing and incentive provision. In effect, the principal will be better off employing a team of either altruistic or moral agents instead of a team composed solely of selfish employees, since a higher degree of morality and altruism decreases the amount of explicit incentives provided by the optimal contracts to induce the agents to exert effort. However, this additional channel is not enough to completely extinguish the need for explicit incentives even when the agents are purely moral or altruistic. Because it is more costly to the principal to hire a team of selfish agents, the exploitability of prosocial preferences can thus explain the costly acquisition of job applicants’ soft information in the labor market.

Acknowledgments

I thank the editor and two anonymous referees for their feedback. I am indebted to Ingela Alger, Braz Camargo, Jacques Cremer, Maximillian Conze, Mikhail Drugov, Fabian Franke, Marta Troya-Martinez, François Salanié and Jörgen Weibull for invaluable discussions. I also thank seminar audiences at Toulouse School of Economics and University of Mannheim, as well as conference participants at the 2016 International Meeting on Experimental and Behavioral Social Sciences, 10th RGS Doctoral Conference and 70th European Meeting of the Econometric Society.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Multiplicity of Equilibria

The optimal contracts derived in the main text are such that the principal can induce both agents to exert the high level of effort $e = 1$, for both the homo moralis and the altruistic team. However, the strategy profile $(e_A, e_B) = (1, 1)$ may not be the unique Nash equilibrium of the simultaneous game played by the pair of agents under a given contract. Indeed, let $U(e_i, e_j, w_i, w_j; \theta)$ denote agent i's expected utility under this contract and degree of morality or altruism $\theta$. Then, the stage game played by agents A and B is represented by:
$$ \begin{array}{c|cc}
 & e_B = 1 & e_B = 0 \\ \hline
e_A = 1 & U(1, 1, w_A, w_B; \theta) & U(1, 0, w_A, w_B; \theta) \\
e_A = 0 & U(0, 1, w_A, w_B; \theta) & U(0, 0, w_A, w_B; \theta)
\end{array} $$
From the principal’s problem, the incentive compatibility constraint implies that:
$$ U(1, 1, w_A, w_B; \theta) \ge U(0, 1, w_A, w_B; \theta), $$
i.e., given that the other agent is already exerting the high level of effort, it is not profitable for agent A to shirk when his/her compensation follows the optimal contract w . Since this is true for both agents, it follows that ( e A , e B ) = ( 1 , 1 ) is a pure strategy Nash equilibrium for w .
However, the comparison between U ( 1 , 0 , w A , w B ; θ ) and U ( 0 , 0 , w A , w B ; θ ) is not clear. In particular, if the latter is greater than the former, then ( e A , e B ) = ( 0 , 0 ) would constitute another pure strategy Nash equilibrium for w .
Under homo moralis preferences, note that:
$$
\begin{aligned}
U(0, 0, w; \theta) \ge U(1, 0, w; \theta)
\iff\ & p_0\,u(w^H) + (1 - p_0)\,u(w^L) \\
& \ge (1 - \theta)\left[p_1\,u(w^H) + (1 - p_1)\,u(w^L) - c\right] + \theta\left[p_2\,u(w^H) + (1 - p_2)\,u(w^L) - c\right] \\
\iff\ & c \ge u(w^H)\left[(1 - \theta)p_1 + \theta p_2 - p_0\right] + u(w^L)\left[(1 - \theta)(1 - p_1) + \theta(1 - p_2) - (1 - p_0)\right] \\
\iff\ & c \ge \left[u(w^H) - u(w^L)\right]\left[(1 - \theta)p_1 + \theta p_2 - p_0\right] \\
\iff\ & c \ge \frac{c}{(p_2 - p_1) + \theta(p_1 - p_0)}\left[(1 - \theta)p_1 + \theta p_2 - p_0\right] \\
\iff\ & (p_2 - p_1) + \theta(p_1 - p_0) \ge (p_1 - p_0) + \theta(p_2 - p_1) \\
\iff\ & (p_2 - p_1) - (p_1 - p_0) \ge \theta\left[(p_2 - p_1) - (p_1 - p_0)\right] \\
\iff\ & (1 - \theta)\left[(p_2 - p_1) - (p_1 - p_0)\right] \ge 0,
\end{aligned}
$$
while for altruistic preferences:
$$
\begin{aligned}
U(0, 0, w_{Alt}; \theta) \ge U(1, 0, w_{Alt}; \theta)
\iff\ & (1 + \theta)\left[p_0\,u(w_{Alt}^H) + (1 - p_0)\,u(w_{Alt}^L)\right] \ge (1 + \theta)\left[p_1\,u(w_{Alt}^H) + (1 - p_1)\,u(w_{Alt}^L)\right] - c \\
\iff\ & c \ge (1 + \theta)(p_1 - p_0)\left[u(w_{Alt}^H) - u(w_{Alt}^L)\right] = (1 + \theta)(p_1 - p_0)\,\frac{c}{(1 + \theta)(p_2 - p_1)} \\
\iff\ & p_2 - p_1 \ge p_1 - p_0.
\end{aligned}
$$
Observe that the inequality for moral agents is always satisfied if $\theta = 1$. On the other hand, if $\theta \in [0, 1)$, that inequality holds iff $p_2 - p_1 \ge p_1 - p_0$. Therefore, the following result holds.
Lemma A1.
Suppose Assumption 1 holds and that the principal offers the optimal contracts $w_{HM}$ and $w_{Alt}$ to the teams of moral and altruistic agents, respectively. Then, $e_A = e_B = 1$ is the unique symmetric Nash equilibrium of the simultaneous choice of effort game played by the agents iff $p_2 - p_1 < p_1 - p_0$ and, for homo moralis agents, $\theta < 1$. Otherwise, $e_A = e_B = 0$ is also a symmetric Nash equilibrium.
One remark about asymmetric equilibria must be made here. If $p_2 - p_1 = p_1 - p_0$, both an altruistic and a moral agent will be indifferent between shirking and exerting effort when their partner is shirking. Moreover, since the optimal contract satisfies the incentive compatibility constraint with equality for both types of prosocial preferences, the workers are also indifferent between shirking or not when their partner is exerting the high effort. Therefore, in this case, the asymmetric effort profiles $(e_A = 1, e_B = 0)$ and $(e_A = 0, e_B = 1)$ are also pure strategy Nash equilibria of the simultaneous choice of effort game.
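Lemma A1's condition can also be checked numerically. The sketch below builds the optimal homo moralis contract (square-root utility, illustrative parameters) and tests whether joint shirking is also a Nash equilibrium.

```python
def hm_shirking_is_NE(theta, u_bar, c, p2, p1, p0):
    """Is (0, 0) also a Nash equilibrium under the optimal homo moralis contract (u = sqrt)?"""
    P = {2: p2, 1: p1, 0: p0}
    gap = c / ((p2 - p1) + theta * (p1 - p0))
    u_lo = u_bar + c - p2 * gap
    h = {"H": u_lo + gap, "L": u_lo}                      # utilities of the two wage levels

    def pi(e_i, e_j):
        p = P[e_i + e_j]
        return p * h["H"] + (1 - p) * h["L"] - (c if e_i == 1 else 0.0)

    def U(e_i, e_j):
        return (1 - theta) * pi(e_i, e_j) + theta * pi(e_i, e_i)

    return U(0, 0) >= U(1, 0) - 1e-12                     # True -> joint shirking is also a NE

print(hm_shirking_is_NE(0.5, u_bar=0.3, c=0.05, p2=0.9, p1=0.6, p0=0.5))   # increasing returns: True
print(hm_shirking_is_NE(0.5, u_bar=0.3, c=0.05, p2=0.6, p1=0.5, p0=0.1))   # decreasing returns, theta < 1: False
```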

Appendix B. Obtaining the Borch Rule for Asymmetric Efforts under Homo Moralis Preferences

Relax Assumption 1, and consider the contracting problem of a team of moral agents when efforts are observable. If the principal wishes to induce asymmetric choices of effort, constant wages are not optimal for moral agents, as they were for altruistic and selfish agents20. Note first that the Borch rule for teams of selfish and altruistic agents is derived for an arbitrary pair ( e A , e B ) , and the ratio of marginal utilities with high or low wages is equal to one whether e A = e B or not. For homo moralis preferences, suppose, without loss of generality, that agent A exerts high effort, while agent B exerts low effort. In this case, the principal solves:
$$
\begin{aligned}
\max_{w_A, w_B}\quad & p_1\left[x_H - w_A^H - w_B^H\right] + (1 - p_1)\left[x_L - w_A^L - w_B^L\right] \\
\text{s.t.}\ (IR_A):\quad & (1 - \kappa_A)\left[p_1\,u(w_A^H) + (1 - p_1)\,u(w_A^L)\right] + \kappa_A\left[p_2\,u(w_A^H) + (1 - p_2)\,u(w_A^L)\right] - c \ge \bar u \\
(IR_B):\quad & (1 - \kappa_B)\left[p_1\,u(w_B^H) + (1 - p_1)\,u(w_B^L)\right] + \kappa_B\left[p_0\,u(w_B^H) + (1 - p_0)\,u(w_B^L)\right] \ge \bar u
\end{aligned}
$$
Close observation of the constraints reveals two differences between them. First, only ( I R A ) contains the cost of effort, since agent A is the only one to exert high effort. Second, and more important, the probabilities of high and low realizations of revenues in the Kantian morality terms of the two constraints are different, but the same in the other term. This is true because each agent evaluates the consequence of his/her own effort should both agents choose this particular effort.
The wages must satisfy the Borch rule, given by:
$$
\frac{p_1}{u'(w_A^H)\left[(1 - \kappa_A)p_1 + \kappa_A p_2\right]} = \frac{1 - p_1}{u'(w_A^L)\left[(1 - \kappa_A)(1 - p_1) + \kappa_A(1 - p_2)\right]}, \qquad
\frac{p_1}{u'(w_B^H)\left[(1 - \kappa_B)p_1 + \kappa_B p_0\right]} = \frac{1 - p_1}{u'(w_B^L)\left[(1 - \kappa_B)(1 - p_1) + \kappa_B(1 - p_0)\right]}.
$$
Observe that the usual finding that w i H = w i L = w i F B is only obtained if κ i = 0 , that is only if both agents display the standard selfish preferences. If the degree of morality is not zero, the marginal utility ratios must be such that:
$$
\frac{u'(w_A^H)}{u'(w_A^L)} = \frac{(1 - \kappa_A)p_1(1 - p_1) - \kappa_A p_1 p_2 + \kappa_A p_1}{(1 - \kappa_A)p_1(1 - p_1) - \kappa_A p_1 p_2 + \kappa_A p_2} < 1, \qquad
\frac{u'(w_B^H)}{u'(w_B^L)} = \frac{(1 - \kappa_B)p_1(1 - p_1) - \kappa_B p_1 p_0 + \kappa_B p_1}{(1 - \kappa_B)p_1(1 - p_1) - \kappa_B p_1 p_0 + \kappa_B p_0} > 1,
$$
which implies that the optimal contract satisfies $w_A^H > w_A^L$ and $w_B^H < w_B^L$.
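A quick numerical evaluation of these two ratios confirms the two monotonicity patterns. In the sketch below, `p_kantian` is simply shorthand for the probability entering the agent's Kantian term ($p_2$ for the high-effort agent A, $p_0$ for the low-effort agent B), and the numbers are illustrative.

```python
def borch_ratio(kappa, p_kantian, p1=0.6):
    """u'(w^H)/u'(w^L) implied by the asymmetric-effort Borch rule above."""
    num = (1 - kappa) * p1 * (1 - p1) - kappa * p1 * p_kantian + kappa * p1
    den = (1 - kappa) * p1 * (1 - p1) - kappa * p1 * p_kantian + kappa * p_kantian
    return num / den

# Agent A exerts effort, so his Kantian term uses p2; agent B shirks, so hers uses p0.
print(borch_ratio(kappa=0.4, p_kantian=0.8))   # < 1, hence w_A^H > w_A^L
print(borch_ratio(kappa=0.4, p_kantian=0.4))   # > 1, hence w_B^H < w_B^L
```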
Therefore, should the principal want to induce the moral agents to undertake different efforts, two differences arise in comparison with the selfish and altruistic cases. First, the general argument that the principal should pay a constant wage (satisfying the participation constraint) whenever the appropriate level of effort is exerted no longer holds. Indeed, for the agent exerting high effort, a monotonicity result similar to the one obtained in the second-best case of the traditional moral hazard problem is observed. On the other hand, agent B, who is not supposed to exert effort, is paid according to a reversed monotonicity result: the wage after a good realization of revenue must be lower than its counterpart after a bad realization. These results are summarized in the next proposition.
Proposition A1.
Suppose the principal restricts attention to asymmetric equilibria of the kind $e_i = 1 > e_j = 0$ for $i, j \in \{A, B\}$, $i \neq j$, when the agents exhibit homo moralis preferences. Then, there does not exist a constant contract ($w_i^H = w_i^L$ and $w_j^H = w_j^L$) that maximizes the principal's profits and satisfies the agents' participation constraints.

Appendix C. Proofs

Proof of Proposition 1.
Assume $e_i = e_j = e \in \{0, 1\}$ for $i \in \{A, B\}$. The discussion in the main text shows that the optimal contract under contractible efforts for teams of selfish or moral agents is:
$$ w^{FB} = u^{-1}\left(\bar u + c(e)\right), $$
while altruistic agents must be compensated according to:
$$ w_{i, Alt}^{FB} = u^{-1}\left(\frac{\bar u}{1 + \theta} + c(e)\right). $$
Denote by $V_{11}^{FB}$ and $V_{00}^{FB}$ the principal's expected profits in the cases where both agents exert high and low effort, respectively, and the team is comprised of either selfish or moral agents. Plugging in the optimal wages obtained above yields:
$$ V_{11}^{FB} = p_2\,x_H + (1 - p_2)\,x_L - 2u^{-1}(\bar u + c), \qquad V_{00}^{FB} = p_0\,x_H + (1 - p_0)\,x_L - 2u^{-1}(\bar u), $$
and thus $V_{11}^{FB} \ge V_{00}^{FB}$ if and only if:
$$ u^{-1}(\bar u + c) - u^{-1}(\bar u) \le \frac{(x_H - x_L)(p_2 - p_0)}{2}. $$
By assumption, $p_2 > p_0$ and $x_H > x_L$, so the right-hand side is strictly positive, while $u' > 0$ implies that the left-hand side is also positive since $c > 0$. By continuity of u, there exists $c' > 0$ such that $2\left[u^{-1}(\bar u + c') - u^{-1}(\bar u)\right] = (x_H - x_L)(p_2 - p_0)$, and the inequality above holds for all $c \in (0, c']$.
Now, doing the same for altruistic agents, write:
$$ V_{11, Alt}^{FB} = p_2\,x_H + (1 - p_2)\,x_L - 2u^{-1}\left(\frac{\bar u}{1 + \theta} + c\right), \qquad V_{00, Alt}^{FB} = p_0\,x_H + (1 - p_0)\,x_L - 2u^{-1}\left(\frac{\bar u}{1 + \theta}\right), $$
where an argument similar to the paragraph above implies that there exists $c'' > 0$ such that, for all $c \in [0, c'')$, $V_{11, Alt}^{FB} > V_{00, Alt}^{FB}$.
Letting $c^\star = \min\{c', c''\}$ concludes the proof. ☐
Proof of Proposition 2.
Since u is a strictly increasing, strictly concave function by assumption, its inverse $u^{-1}$ is, in turn, a strictly increasing, strictly convex function. Therefore, since:
$$ \bar u \ge \frac{\bar u}{1 + \theta} $$
for all $\theta \in [0, 1]$, the compensation dispensed by the principal is larger under homo moralis or selfish preferences. ☐
Proof of Proposition 3.
Under Assumption 1 and $e_A = e_B = 1$, the optimal symmetric contract $w_{Alt} = (w_{Alt}^H, w_{Alt}^L)$ offered by the principal must solve:
$$
\begin{aligned}
\max_{w^H, w^L}\quad & p_2\left(x_H - 2w^H\right) + (1 - p_2)\left(x_L - 2w^L\right) \\
\text{s.t.}\ (IR):\quad & (1 + \theta)\left[p_2\,u(w^H) + (1 - p_2)\,u(w^L) - c\right] \ge \bar u \\
(IC):\quad & u(w^H) - u(w^L) \ge \frac{c}{(1 + \theta)(p_2 - p_1)},
\end{aligned}
$$
where the KKT conditions are necessary and sufficient for optimality and are given by:
$$
\begin{aligned}
(A2a)\quad & u'(w_{Alt}^H)\,(1 + \theta)\left[\lambda p_2 + \mu(p_2 - p_1)\right] = p_2 \\
(A2b)\quad & u'(w_{Alt}^L)\,(1 + \theta)\left[\lambda(1 - p_2) - \mu(p_2 - p_1)\right] = 1 - p_2 \\
(A2c)\quad & \lambda\left\{(1 + \theta)\left[p_2\,u(w_{Alt}^H) + (1 - p_2)\,u(w_{Alt}^L) - c\right] - \bar u\right\} = 0 \\
(A2d)\quad & \mu\left\{(1 + \theta)(p_2 - p_1)\left[u(w_{Alt}^H) - u(w_{Alt}^L)\right] - c\right\} = 0 \\
(A2e)\quad & (1 + \theta)\left[p_2\,u(w_{Alt}^H) + (1 - p_2)\,u(w_{Alt}^L) - c\right] - \bar u \ge 0 \\
(A2f)\quad & (1 + \theta)(p_2 - p_1)\left[u(w_{Alt}^H) - u(w_{Alt}^L)\right] - c \ge 0 \\
(A2g)\quad & \lambda \ge 0 \\
(A2h)\quad & \mu \ge 0
\end{aligned}
$$
Note that $\lambda = 0$ cannot be part of a solution since it violates Equation $(A2b)$, because $u' > 0$ and $1 > p_2 > p_1 > p_0 > 0$ by assumption. Moreover, $\mu > 0$; otherwise, Equations $(A2a)$ and $(A2b)$ would imply:
$$ u'(w_{Alt}^H) = \frac{1}{(1 + \theta)\lambda} = u'(w_{Alt}^L), $$
which yields $w_{Alt}^L = w_{Alt}^H$ for all $\theta \in [0, 1]$ since $u'' < 0$, thus violating the incentive compatibility constraint in $(A2f)$. Therefore, any solution must have $\lambda, \mu > 0$ such that:
$$ \lambda p_2 + \mu(p_2 - p_1) > 0, \qquad \lambda(1 - p_2) - \mu(p_2 - p_1) > 0. $$
Since the Lagrange multipliers are strictly positive, the optimal contract is fully characterized by the binding $(IC)$ and $(IR)$, which, rearranged, give:
$$ u(w_{Alt}^H) = \frac{\bar u}{1 + \theta} + \frac{c\left[(1 - p_1) + \theta(p_2 - p_1)\right]}{(1 + \theta)(p_2 - p_1)}, \qquad u(w_{Alt}^L) = \frac{\bar u}{1 + \theta} - \frac{c\left[p_1 - \theta(p_2 - p_1)\right]}{(1 + \theta)(p_2 - p_1)}. $$
Differentiation of the incentive compatibility constraint with respect to θ leads to:
$$ \frac{\partial\left(w_{Alt}^H - w_{Alt}^L\right)}{\partial\theta} < 0. $$
Given the optimal contract, one must again wonder whether the principal will induce both agents to exert high effort or not. If not, then the principal can offer the constant wage $w = u^{-1}\left(\frac{\bar u}{1 + \theta}\right)$ as before, since this satisfies the participation constraint, but not the incentive compatibility constraint. Thus, the principal's expected payoff in this case is again given by:
$$ V_{00}(\theta) = p_0\,x_H + (1 - p_0)\,x_L - 2u^{-1}\left(\frac{\bar u}{1 + \theta}\right), $$
while inducing high effort yields expected profits:
$$ V_{11}(\theta) = p_2\left(x_H - 2w_{Alt}^H\right) + (1 - p_2)\left(x_L - 2w_{Alt}^L\right). $$
Consequently, it is only beneficial to the principal to demand high effort from both agents if $V_{11}(\theta) \ge V_{00}(\theta)$, that is,
$$ (p_2 - p_0)(x_H - x_L) + 2u^{-1}\left(\frac{\bar u}{1 + \theta}\right) \ge 2\left[p_2\,u^{-1}\left(\frac{\bar u}{1 + \theta} + \frac{c\left[(1 - p_1) + \theta(p_2 - p_1)\right]}{(1 + \theta)(p_2 - p_1)}\right) + (1 - p_2)\,u^{-1}\left(\frac{\bar u}{1 + \theta} - \frac{c\left[p_1 - \theta(p_2 - p_1)\right]}{(1 + \theta)(p_2 - p_1)}\right)\right]. $$
Take $c = 0$. The right-hand side reduces to $2\left[p_2\,u^{-1}\left(\frac{\bar u}{1 + \theta}\right) + (1 - p_2)\,u^{-1}\left(\frac{\bar u}{1 + \theta}\right)\right] = 2u^{-1}\left(\frac{\bar u}{1 + \theta}\right)$, and the inequality is automatically satisfied, since $1 > p_2 > p_1 > p_0 > 0$ and $x_H > x_L$ by assumption.
Therefore, by continuity, there exists $\bar c_{Alt} > 0$ such that, for all $c \in (0, \bar c_{Alt}]$, $V_{11}(\theta) \ge V_{00}(\theta)$. ☐
Proof of Corollary 1.
Take $\theta_0, \theta_1 \in [0, 1]$ such that $\theta_0 < \theta_1$, and let $w_{Alt}(\theta) = \left(w_{Alt}^H(\theta), w_{Alt}^L(\theta)\right)$ denote the optimal wage schedule offered by the principal when the agents display the degree of altruism $\theta \in [0, 1]$. Moreover, for any $\theta \in [0, 1]$, let $C(\theta)$ denote the set of contracts satisfying both the $(IR)$ and $(IC)$ constraints for the degree of altruism θ, so that $w_{Alt}(\theta) \in C(\theta)$.
Then, using the constraints, one can check that:
$$ u\left(w_{Alt}^H(\theta_0)\right) - u\left(w_{Alt}^L(\theta_0)\right) = \frac{c}{(1 + \theta_0)(p_2 - p_1)} > \frac{c}{(1 + \theta_1)(p_2 - p_1)} $$
and:
$$ \bar u = (1 + \theta_0)\left[p_2\,u\left(w_{Alt}^H(\theta_0)\right) + (1 - p_2)\,u\left(w_{Alt}^L(\theta_0)\right) - c\right] < (1 + \theta_1)\left[p_2\,u\left(w_{Alt}^H(\theta_0)\right) + (1 - p_2)\,u\left(w_{Alt}^L(\theta_0)\right) - c\right], $$
so that $w_{Alt}(\theta_0) \in C(\theta_1)$. However, the KKT conditions imply that $w_{Alt}(\theta_1)$ is the unique solution to the principal's problem for $\theta = \theta_1$. Then, it must be the case that $V_{11}^{Alt}\left(w_{Alt}(\theta_1); \theta_1\right) > V_{11}^{Alt}\left(w_{Alt}(\theta_0); \theta_1\right)$.
Now, observe that:
$$ \frac{d w_{Alt}^H(\theta)}{d\theta} = -\frac{1}{u'\left(w_{Alt}^H(\theta)\right)}\left[\frac{(1 - p_2)\,c}{(1 + \theta)^2(p_2 - p_1)} + \frac{\bar u}{(1 + \theta)^2}\right] < 0. $$
Thus, keeping $w_{Alt}^L(\theta_1) = w_{Alt}^L(\theta_0)$ and taking $w_{Alt}^H(\theta_1) = w_{Alt}^H(\theta_0) - \varepsilon$ for a small $\varepsilon > 0$, the principal satisfies both constraints while increasing his/her payoff by $2p_2\varepsilon > 0$. Therefore, the principal's expected profit is strictly increasing in θ. ☐
Proof of Proposition 4.
The principal’s problem is given by:
$$ \mathcal{L} = p_2\left[x_H - w_i^H - w_j^H\right] + (1 - p_2)\left[x_L - w_i^L - w_j^L\right] + \sum_{i = A, B} \lambda_i\left[p_2\,u(w_i^H) + (1 - p_2)\,u(w_i^L) - c - \bar u\right] + \sum_{i = A, B} \mu_i\left\{\left[u(w_i^H) - u(w_i^L)\right]\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right] - c\right\} $$
for $i = A, B$ and $j \neq i$. Then, the KKT conditions are given by the system of equations:
$$
\begin{aligned}
(A3a)\quad & -p_2 + \lambda_i\,p_2\,u'(w_i^H) + \mu_i\,u'(w_i^H)\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right] = 0 \\
(A3b)\quad & -(1 - p_2) + \lambda_i(1 - p_2)\,u'(w_i^L) - \mu_i\,u'(w_i^L)\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right] = 0 \\
(A3c)\quad & p_2\,u(w_i^H) + (1 - p_2)\,u(w_i^L) - c \ge \bar u \\
(A3d)\quad & \left[u(w_i^H) - u(w_i^L)\right]\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right] - c \ge 0 \\
(A3e)\quad & \lambda_i\left[p_2\,u(w_i^H) + (1 - p_2)\,u(w_i^L) - c - \bar u\right] = 0 \\
(A3f)\quad & \mu_i\left\{\left[u(w_i^H) - u(w_i^L)\right]\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right] - c\right\} = 0 \\
(A3g)\quad & \lambda_i \ge 0 \\
(A3h)\quad & \mu_i \ge 0
\end{aligned}
$$
Equations ( A 3 a ) and ( A 3 b ) clearly show that λ i = μ i = 0 is not a possibility. Indeed, if that were the case, then p 2 = 0 , which contradicts our initial assumption. Furthermore, I cannot have μ i > 0 = λ i , because this would imply that Equation ( A 3 b ) is not satisfied. Therefore, I must either have λ i > 0 = μ i or λ i , μ i > 0 . Solving for the multipliers in Equations ( A 3 a ) and ( A 3 b ) yields:
$$ \lambda_i = \frac{(1 - p_2)\,u'(w_i^H) + p_2\,u'(w_i^L)}{u'(w_i^H)\,u'(w_i^L)} > 0, \qquad \mu_i = \frac{p_2(1 - p_2)\left[u'(w_i^L) - u'(w_i^H)\right]}{u'(w_i^H)\,u'(w_i^L)\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right]} > 0, $$
so both the ( I C i ) and ( I R i ) constraints bind. Thus, using Equations ( A 3 c ) and ( A 3 d ) , one finds that the optimal schedule must satisfy:
$$ u(w_i^L) = \bar u - \frac{c\left[(1 - \kappa_i)p_1 + \kappa_i p_0\right]}{(p_2 - p_1) + \kappa_i(p_1 - p_0)}, \qquad u(w_i^H) = \bar u + \frac{c\left[(1 - p_1) + \kappa_i(p_1 - p_0)\right]}{(p_2 - p_1) + \kappa_i(p_1 - p_0)}. $$
Differentiating these expressions with respect to $\kappa_i$ yields:
$$ \frac{d w_i^H}{d\kappa_i} = -\frac{(1 - p_2)(p_1 - p_0)}{\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right]^2}\,\frac{c}{u'(w_i^H)} < 0, \qquad \frac{d w_i^L}{d\kappa_i} = \frac{p_2(p_1 - p_0)}{\left[(p_2 - p_1) + \kappa_i(p_1 - p_0)\right]^2}\,\frac{c}{u'(w_i^L)} > 0. $$
Given the optimal contract, one must again wonder whether the principal will induce both agents to exert high effort or not. If not, then the principal can offer the constant wage w = u 1 ( u ¯ ) as before, since this satisfies the participation constraint, but not the incentive compatibility constraint. Thus, the principal’s expected payoff in this case is again given by:
$$V_{00}(\kappa) = p_0 x^H + (1 - p_0) x^L - 2 u^{-1}(\bar{u}),$$
while inducing high effort yields expected profits:
$$V_{11}(\kappa) = p_2\left[x^H - \sum_{i=A,B} u^{-1}\!\left(\bar{u} + \frac{c\left[(1 - p_1) + \kappa_i (p_1 - p_0)\right]}{(p_2 - p_1) + \kappa_i (p_1 - p_0)}\right)\right] + (1 - p_2)\left[x^L - \sum_{i=A,B} u^{-1}\!\left(\bar{u} - \frac{c\left[(1 - \kappa_i) p_1 + \kappa_i p_0\right]}{(p_2 - p_1) + \kappa_i (p_1 - p_0)}\right)\right].$$
Consequently, it is beneficial for the principal to demand high effort from both agents only if $V_{11}(\kappa) \geq V_{00}(\kappa)$, that is,
$$(p_2 - p_0)(x^H - x^L) + 2 u^{-1}(\bar{u}) \geq \sum_{i=A,B}\left[p_2\, u^{-1}\!\left(\bar{u} + \frac{c\left[(1 - p_1) + \kappa_i (p_1 - p_0)\right]}{(p_2 - p_1) + \kappa_i (p_1 - p_0)}\right) + (1 - p_2)\, u^{-1}\!\left(\bar{u} - \frac{c\left[(1 - \kappa_i) p_1 + \kappa_i p_0\right]}{(p_2 - p_1) + \kappa_i (p_1 - p_0)}\right)\right].$$
Take $c = 0$. Then the right-hand side reduces to $\sum_{i=A,B}\left[p_2 u^{-1}(\bar{u}) + (1 - p_2) u^{-1}(\bar{u})\right] = 2 u^{-1}(\bar{u})$, and the inequality is satisfied with strict inequality, since $1 > p_2 > p_1 > p_0 > 0$ and $x^H > x^L$ by assumption.
Therefore, by continuity, there exists $\bar{c}_{HM} > 0$ such that $V_{11}(\kappa) \geq V_{00}(\kappa)$ for all $c \in (0, \bar{c}_{HM}]$. ☐
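The continuity argument can be illustrated numerically. The sketch below assumes $u(w) = \ln w$ and illustrative parameter values; the difference $V_{11}(\kappa) - V_{00}(\kappa)$ is positive for small effort costs and eventually turns negative as $c$ grows, which is the behavior behind the existence of the threshold $\bar{c}_{HM}$:

```python
# Illustrative check of the threshold c_bar_HM (assumed utility
# u(w) = ln(w), illustrative parameters): V11 - V00 is positive for
# small effort costs and turns negative as c grows.
import math

p0, p1, p2, u_bar, xH, xL = 0.2, 0.5, 0.8, 1.0, 10.0, 4.0
kappa = 0.5
K = (p2 - p1) + kappa * (p1 - p0)

def v11_minus_v00(c):
    wL = math.exp(u_bar - c * ((1 - kappa) * p1 + kappa * p0) / K)
    wH = math.exp(u_bar + c * ((1 - p1) + kappa * (p1 - p0)) / K)
    v11 = p2 * (xH - 2 * wH) + (1 - p2) * (xL - 2 * wL)
    v00 = p0 * xH + (1 - p0) * xL - 2 * math.exp(u_bar)
    return v11 - v00

for c in (0.01, 0.1, 1.0, 3.0):
    print(c, round(v11_minus_v00(c), 3))   # positive for small c, negative for large c
```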
Proof of Corollary 2.
Denote again the principal's indirect expected profits by $V_{11}(\kappa_A, \kappa_B)$. Then,
$$\frac{\partial V_{11}(\kappa_A, \kappa_B)}{\partial \kappa_i} = -p_2 \frac{d w_i^H}{d \kappa_i} - (1 - p_2) \frac{d w_i^L}{d \kappa_i} = \frac{c (p_1 - p_0)}{\left[(p_2 - p_1) + \kappa_i (p_1 - p_0)\right]^2}\left[\frac{p_2 (1 - p_2)}{u'(w_i^H)} - \frac{(1 - p_2) p_2}{u'(w_i^L)}\right] = \underbrace{\frac{c (p_1 - p_0)(1 - p_2) p_2}{\left[(p_2 - p_1) + \kappa_i (p_1 - p_0)\right]^2}}_{>0} \times \underbrace{\frac{u'(w_i^L) - u'(w_i^H)}{u'(w_i^L)\, u'(w_i^H)}}_{>0}$$
since $w_i^H > w_i^L$ by the monotonicity implied by the $(IC_i)$ constraint and $u'' < 0$. Therefore, the principal's expected payoff is strictly increasing in each degree of morality $\kappa_i$.
In a similar fashion, let $U(\kappa_i)$ denote the agent's indirect utility under the optimal contract. Then,
$$\frac{\partial U(\kappa_i)}{\partial \kappa_i} = p_2 u'(w_i^H) \frac{d w_i^H}{d \kappa_i} + (1 - p_2) u'(w_i^L) \frac{d w_i^L}{d \kappa_i} = \frac{c (p_1 - p_0)}{\left[(p_2 - p_1) + \kappa_i (p_1 - p_0)\right]^2}\left[-p_2 (1 - p_2) + (1 - p_2) p_2\right] = 0.$$
 ☐
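A minimal numerical sketch of Corollary 2, again assuming $u(w) = \ln w$, a symmetric team and illustrative parameter values: the principal's profit rises with the common degree of morality, while each agent's expected material payoff stays pinned at the outside option $\bar{u}$ by the binding participation constraint.

```python
# Sketch of Corollary 2 (assumed utility u(w) = ln(w), symmetric team
# with kappa_A = kappa_B = kappa, illustrative parameters): the
# principal's profit rises with kappa, while each agent's expected
# material payoff stays at the outside option u_bar.
import math

p0, p1, p2, c, u_bar, xH, xL = 0.2, 0.5, 0.8, 0.1, 1.0, 10.0, 4.0

def schedule(kappa):
    K = (p2 - p1) + kappa * (p1 - p0)
    hL = u_bar - c * ((1 - kappa) * p1 + kappa * p0) / K
    hH = u_bar + c * ((1 - p1) + kappa * (p1 - p0)) / K
    return hH, hL

def principal_profit(kappa):
    hH, hL = schedule(kappa)
    return p2 * (xH - 2 * math.exp(hH)) + (1 - p2) * (xL - 2 * math.exp(hL))

def agent_payoff(kappa):                 # expected utility of wages minus effort cost
    hH, hL = schedule(kappa)
    return p2 * hH + (1 - p2) * hL - c

profits = [principal_profit(k / 10) for k in range(11)]
assert all(a < b for a, b in zip(profits, profits[1:]))                 # increasing in kappa
assert all(abs(agent_payoff(k / 10) - u_bar) < 1e-9 for k in range(11)) # pinned at u_bar
```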
Proof of Lemma 1.
Let $\kappa = \alpha = \theta \in [0,1]$, with $c > 0$ and $1 > p_2 > p_1 > p_0 > 0$ by assumption. Then:
$$\frac{c}{(p_2 - p_1) + \theta (p_1 - p_0)} \gtreqqless \frac{c}{(1 + \theta)(p_2 - p_1)} \iff (p_2 - p_1) + \theta (p_2 - p_1) \gtreqqless (p_2 - p_1) + \theta (p_1 - p_0) \iff p_2 - p_1 \gtreqqless p_1 - p_0.$$
 ☐
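Lemma 1 can also be spot-checked numerically; the probability triples and $\theta$ grid below are illustrative assumptions:

```python
# Spot check of Lemma 1 (illustrative grids): the homo moralis rent term
# c/((p2-p1)+theta*(p1-p0)) weakly exceeds the altruistic rent term
# c/((1+theta)*(p2-p1)) exactly when p2-p1 >= p1-p0 (for theta > 0).
c = 0.1
triples = [(0.1, 0.4, 0.9),   # p2 - p1 = 0.5 > p1 - p0 = 0.3
           (0.2, 0.6, 0.9)]   # p2 - p1 = 0.3 < p1 - p0 = 0.4
for p0, p1, p2 in triples:
    for theta in (0.25, 0.5, 0.75, 1.0):
        hm_term = c / ((p2 - p1) + theta * (p1 - p0))
        alt_term = c / ((1 + theta) * (p2 - p1))
        assert (hm_term >= alt_term) == (p2 - p1 >= p1 - p0)
```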
Proof of Theorem 1.
Let $h^H = u(w^H)$ and $h^L = u(w^L)$, which are uniquely determined for any value of $w$ since $u$ is strictly increasing by assumption. The principal's problem can thus be rewritten as:
$$\max_{h^L, h^H} \; p_2\left[x^H - 2 u^{-1}(h^H)\right] + (1 - p_2)\left[x^L - 2 u^{-1}(h^L)\right]$$
subject to
$$(h^L, h^H) \in C_{HM}(\theta) = \left\{(h^1, h^2) \in \mathbb{R}^2 : p_2 h^2 + (1 - p_2) h^1 - c \geq \bar{u},\; h^2 - h^1 \geq \frac{c}{k_2 + \theta k_1}\right\}$$
if the principal is hiring a team of moral agents, and:
$$(h^L, h^H) \in C_{Alt}(\theta) = \left\{(h^1, h^2) \in \mathbb{R}^2 : p_2 h^2 + (1 - p_2) h^1 - c \geq \frac{\bar{u}}{1 + \theta},\; h^2 - h^1 \geq \frac{c}{(1 + \theta) k_2}\right\},$$
where $k_1 = p_1 - p_0 > 0$ and $k_2 = p_2 - p_1 > 0$, if he/she considers a team of altruistic agents. The sets $C_{HM}(\theta)$ and $C_{Alt}(\theta)$ collect all of the values of $h^L$ and $h^H$ satisfying the participation and incentive compatibility constraints for a given degree of morality or altruism $\theta \in [0,1]$.
First, notice that for any value of $\theta \in [0,1]$, a pair $(h^L, h^H) \in C_{HM}(\theta)$ also satisfies the $(IR)$ constraint in $C_{Alt}(\theta)$: indeed, $p_2 h^2 + (1 - p_2) h^1 - c \geq \bar{u} \geq \frac{\bar{u}}{1 + \theta}$ for $\bar{u} \geq 0$.
Suppose that $p_2 - p_1 \geq p_1 - p_0$, i.e., $k_2 \geq k_1$. Then, by Lemma 1, $h_{HM}^H - h_{HM}^L \geq h_{Alt}^H - h_{Alt}^L$, which implies that the optimal contract under homo moralis preferences also satisfies the incentive compatibility constraint of altruistic agents, so that $(h_{HM}^L(\theta), h_{HM}^H(\theta)) \in C_{Alt}(\theta)$. This implies that $C_{HM}(\theta) \subseteq C_{Alt}(\theta)$, and one can conclude that $V_{11}^{Alt}(\theta) \geq V_{11}^{HM}(\theta)$. This can be seen graphically in Figure A1.
Suppose now that $p_2 - p_1 < p_1 - p_0$, i.e., $k_2 < k_1$. In this case, the incentive compatibility constraint for moral agents lies below the one for altruistic agents, as can be seen in Figure A2 and as implied by Lemma 1; but because the reverse holds for the participation constraint, one cannot conclude that $C_{Alt}(\theta) \subseteq C_{HM}(\theta)$. Using the results in Propositions 2 and 3, one can check that:
$$h_{HM}^L - h_{Alt}^L = \bar{u} - \frac{\bar{u}}{1 + \theta} + c - c + p_2 c \left[\frac{1}{(1 + \theta) k_2} - \frac{1}{k_2 + \theta k_1}\right] = \frac{\theta \bar{u}}{1 + \theta} + p_2 c\, \frac{\theta (k_1 - k_2)}{(1 + \theta) k_2 (k_2 + \theta k_1)} \geq 0$$
for all $\theta \in [0,1]$, $\bar{u} \geq 0$, $c > 0$ and $0 < p_0 < p_1 < p_2 < 1$. Thus, if $k_1 > k_2$, the wage paid to a moral agent after a bad realization of output is larger than the corresponding wage paid to an altruistic agent. Therefore, the principal can only be better off hiring a team of moral agents if the wage they are paid after a good realization of output is sufficiently smaller than the one paid to altruistic agents and the isoprofit curve is sufficiently flat. The former holds only if:
$$h_{HM}^H - h_{Alt}^H = h_{HM}^L - h_{Alt}^L + c\left[\frac{1}{k_2 + \theta k_1} - \frac{1}{(1 + \theta) k_2}\right] = \frac{\theta \bar{u}}{1 + \theta} - (1 - p_2) c\, \frac{\theta (k_1 - k_2)}{(1 + \theta) k_2 (k_2 + \theta k_1)} < 0.$$
For $\theta \in (0,1]$, $c > 0$ and $0 < p_0 < p_1 < p_2 < 1$, the inequality above holds if and only if:
$$\bar{u}\,(k_2 + \theta k_1) < \frac{(1 - p_2)\, c}{p_2 - p_1}\,(k_1 - k_2),$$
that is, for $\bar{u} = 0$, or for sufficiently small values of $\theta$ when $\bar{u} > 0$.
Now, remember that the slope of the principal's isoprofit curve is given by $\frac{d h^H}{d h^L} = -\frac{1 - p_2}{p_2} \frac{u'(w^H)}{u'(w^L)} < 0$, which becomes flatter as $p_2$ approaches one. Thus, if $k_1 > k_2$, the principal is better off with a team of moral agents if $p_2$ is close to one and either $\bar{u} = 0$, or $\bar{u} > 0$ and $\theta$ close to $0$. ☐
Figure A1. Optimal contracts for $p_2 - p_1 > p_1 - p_0$.
Figure A2. Optimal contracts for $p_2 - p_1 < p_1 - p_0$.
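The two cases of Theorem 1 can also be illustrated numerically. The sketch below assumes $u(w) = \ln w$, sets $\kappa = \alpha = \theta$, uses the contracts from Propositions 3 and 4, and picks illustrative parameter values chosen to fall in each regime; it then compares the principal's profit from a team of moral agents with that from a team of altruists.

```python
# Numerical illustration of Theorem 1 (assumed utility u(w) = ln(w),
# kappa = alpha = theta, illustrative parameters). The altruistic team
# yields a weakly higher profit when p2-p1 >= p1-p0; the moral team can
# yield more when p2-p1 < p1-p0, p2 is close to one and u_bar = 0.
import math

def team_profits(p0, p1, p2, c, u_bar, xH, xL, theta):
    k1, k2 = p1 - p0, p2 - p1
    # homo moralis contract (binding IR and IC)
    K = k2 + theta * k1
    hm_H = u_bar + c * ((1 - p1) + theta * k1) / K
    hm_L = u_bar - c * ((1 - theta) * p1 + theta * p0) / K
    # altruistic contract (binding IR and IC)
    alt_L = c + u_bar / (1 + theta) - p2 * c / ((1 + theta) * k2)
    alt_H = alt_L + c / ((1 + theta) * k2)
    def profit(hH, hL):
        return p2 * (xH - 2 * math.exp(hH)) + (1 - p2) * (xL - 2 * math.exp(hL))
    return profit(hm_H, hm_L), profit(alt_H, alt_L)

# Case 1: p2 - p1 >= p1 - p0  ->  the altruistic team is (weakly) preferred.
v_hm, v_alt = team_profits(0.2, 0.5, 0.9, 0.1, 1.0, 10.0, 4.0, theta=0.5)
assert v_alt >= v_hm

# Case 2: p2 - p1 < p1 - p0, p2 close to one, u_bar = 0  ->  the moral team wins.
v_hm, v_alt = team_profits(0.1, 0.7, 0.95, 0.1, 0.0, 10.0, 4.0, theta=0.5)
assert v_hm > v_alt
```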
Proof of Proposition 5.
The existence of the contract follows from the KKT conditions written in the main text. The same goes for the inequalities on the wage schedules. ☐

References

  1. O’Leonard, K.; Erickson, R.; Krider, J. Talent Acquisition Factbook 2015: Benchmarks and Trends in Spending, Staffing, and Key Recruiting Metrics; Bersin: Oakland, CA, USA, 2015.
  2. Brown, V.R.; Vaughn, E.D. The Writing on the (Facebook) Wall: The Use of Social Networking Sites in Hiring Decisions. J. Bus. Psychol. 2011, 26, 219.
  3. Fehr, E.; Schmidt, K.M. The Economics of Fairness, Reciprocity and Altruism—Experimental Evidence and New Theories. In Handbook on the Economics of Giving, Reciprocity and Altruism; Kolm, S., Ythier, J.M., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; Volume 1, Chapter 8; pp. 615–691.
  4. Kolm, S.; Ythier, J.M. (Eds.) Handbook of the Economics of Giving, Altruism and Reciprocity, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2006; Volume 1.
  5. Kolm, S.; Ythier, J.M. (Eds.) Handbook of the Economics of Giving, Altruism and Reciprocity, 1st ed.; Elsevier: Amsterdam, The Netherlands, 2006; Volume 2.
  6. Kahneman, D.; Knetsch, J.L.; Thaler, R.H. Fairness and the Assumptions of Economics. J. Bus. 1986, 59, S285–S300.
  7. Kahneman, D.; Knetsch, J.L.; Thaler, R.H. Anomalies: The Endowment Effect, Loss Aversion, and Status Quo Bias. J. Econ. Perspect. 1991, 5, 193–206.
  8. Dufwenberg, M.; Heidhues, P.; Kirchsteiger, G.; Riedel, F.; Sobel, J. Other-Regarding Preferences in General Equilibrium. Rev. Econ. Stud. 2011, 78, 613–639.
  9. Bowles, S.; Polania-Reyes, S. Economic Incentives and Social Preferences: Substitutes or Complements? J. Econ. Lit. 2012, 50, 368–425.
  10. Bandiera, O.; Barankay, I.; Rasul, I. Social Preferences and the Response to Incentives: Evidence from Personnel Data. Q. J. Econ. 2005, 120, 917–962.
  11. Bandiera, O.; Barankay, I.; Rasul, I. Social Incentives in the Workplace. Rev. Econ. Stud. 2010, 77, 417–458.
  12. Barr, A.; Serneels, P. Reciprocity in the Workplace. Exp. Econ. 2009, 12, 99–112.
  13. Rotemberg, J.J. Altruism, Reciprocity and Cooperation in the Workplace. In Handbook on the Economics of Giving, Reciprocity and Altruism; Kolm, S., Ythier, J.M., Eds.; Elsevier: Amsterdam, The Netherlands, 2006; Volume 1, Chapter 21; pp. 1371–1407.
  14. Thaler, R.H. Anomalies: The Winner’s Curse. J. Econ. Perspect. 1988, 2, 191–202.
  15. Tversky, A.; Thaler, R.H. Anomalies: Preference Reversals. J. Econ. Perspect. 1990, 4, 201–211.
  16. Alchian, A.A.; Demsetz, H. Production, Information Costs, and Economic Organization. Am. Econ. Rev. 1972, 62, 777–795.
  17. Holmström, B. Moral Hazard in Teams. Bell J. Econ. 1982, 13, 324–340.
  18. Becker, G.S. A Theory of Social Interactions. J. Polit. Econ. 1974, 82, 1063–1093.
  19. Wilson, D.S.; Kniffin, K. Altruism from an Evolutionary Perspective. In Research on Altruism and Love: An Annotated Bibliography of Major Studies in Sociology, Evolutionary Biology, and Theology; Post, S., Johnson, B., McCullough, M., Schloss, J., Eds.; Templeton Foundation Press: West Conshohocken, PA, USA, 2003; Volume 1, Chapter 3; pp. 117–136.
  20. Alger, I.; Weibull, J.W. Homo Moralis—Preference Evolution Under Incomplete Information and Assortative Matching. Econometrica 2013, 81, 2269–2302.
  21. Alger, I.; Weibull, J.W. Evolution and Kantian Morality. Games Econ. Behav. 2016, 98, 56–67.
  22. Bergström, T.C. On the Evolution of Altruistic Ethical Rules for Siblings. Am. Econ. Rev. 1995, 85, 58–81.
  23. Mookherjee, D. Optimal Incentive Schemes with Many Agents. Rev. Econ. Stud. 1984, 51, 433–446.
  24. Che, Y.K.; Yoo, S.W. Optimal Incentives for Teams. Am. Econ. Rev. 2001, 91, 525–541.
  25. Itoh, H. Moral Hazard and Other-Regarding Preferences. Jpn. Econ. Rev. 2004, 55, 18–45.
  26. Englmaier, F.; Wambach, A. Optimal incentive contracts under inequity aversion. Games Econ. Behav. 2010, 69, 312–328.
  27. Rey-Biel, P. Inequity Aversion and Team Incentives. Scand. J. Econ. 2008, 110, 297–320.
  28. Livio, L. Friend or Foes? Optimal Incentives for Reciprocal Agents. 2016. Available online: https://lucalivio85.files.wordpress.com/2016/05/reciprocity-november-2015-mgmm.pdf (accessed on 4 September 2017).
  29. Dur, R.; Sol, J. Social Interaction, Co-worker Altruism, and Incentives. Games Econ. Behav. 2010, 69, 293–301.
  30. Barron, J.M.; Gjerde, K.P. Peer Pressure in an Agency Relationship. J. Labor Econ. 1997, 15, 234–254.
  31. Kandel, E.; Lazear, E.P. Peer Pressure and Partnerships. J. Polit. Econ. 1992, 100, 801–817.
  32. Kosfeld, M.; von Siemens, F.A. Worker Self-Selection and the Profits from Cooperation. J. Eur. Econ. Assoc. 2009, 7, 573–582.
  33. Kosfeld, M.; von Siemens, F.A. Competition, cooperation, and corporate culture. RAND J. Econ. 2011, 42, 23–43.
  34. Delfgaauw, J.; Dur, R. Signaling and screening of workers’ motivation. J. Econ. Behav. Organ. 2007, 62, 605–624.
1
See [1].
2
3
4
See, for instance, [2].
5
For a broader debate on the definition of altruism from an evolutionary biology view, see [19].
6
Reference [20] shows that homo moralis and altruistic preferences are behaviorally alike in many situations, and a similar point can be found in [22]; however, this is not the case in this exposition, as will be seen later on.
7
Reference [24] studies optimal incentives for teams in a repeated setting.
8
In reality, agents may differ in their respective costs of effort, but this is not pursued in this paper because it does not qualitatively change the results while making the notation more cumbersome.
9
The assumption that u ( · ) is strictly concave can be relaxed: the same model can be solved in a setting with risk-neutral agents and limited liability constraints, and the qualitative results of the analysis below are unchanged.
10
The normal form game here comprises the set of players $\{A, B\}$, the common set of pure strategies $S = \{0, 1\}$ and the payoff function $U(e_i, e_j, w_i, w_j, \theta)$.
11
This is the reason why contracts are indexed by i.
12
Since $1 > p_2 > p_1 > p_0 > 0$ and $u' > 0$ by assumption, the first-order conditions imply that $\lambda_i + \lambda_j \theta > 0$ for $i, j \in \{A, B\}$, $i \neq j$.
13
In Appendix B, I show that relaxing both these assumptions leads to a Borch rule for moral agents that demands nonconstant wages when efforts are observable, in stark contrast to the literature with selfish and altruistic agents.
14
In Appendix A, I show that the optimal contracts for both moral and altruistic agents may lead to a multiplicity of equilibria in their effort choices, as in [17].
15
See Appendix C for the proof.
16
For symmetric equilibrium choices of effort.
17
If $p_2 - p_1 = p_1 - p_0$, both lines coincide with the $(IC)$ for altruistic agents.
18
This intuition is indeed correct and forms part of the proof of Theorem 1 in the Appendix.
19
Given the symmetry of the problem, I focus attention on agent A and drop the subscripts. The same results would hold for agent B by simply reversing the effort choices e A and e B .
20
As shown by the Borch rules $(BR_S)$ and $(BR_{Alt})$.
Figure 1. Comparing the power of optimal contracts.
Figure 2. (a) Comparing principal's profits for $p_2 - p_1 \geq p_1 - p_0$; (b) comparing principal's profits for $p_2 - p_1 < p_1 - p_0$ and $\bar{u} = 0$; (c) comparing the ratio of principal's profits for $p_2 - p_1 < p_1 - p_0$ and $\bar{u} > 0$.
