Article
Peer-Review Record

Computational Behavioral Models for Public Goods Games on Social Networks

by Marco Tomassini 1 and Alberto Antonioni 2,*
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Anonymous
Submission received: 11 June 2019 / Revised: 19 August 2019 / Accepted: 29 August 2019 / Published: 2 September 2019
(This article belongs to the Special Issue Behavioral Game Theory: Theory and Experiments)

Round 1

Reviewer 1 Report

This paper simulates behavior in a public goods game using social network structures and behaviorally inspired agent rules. The decision rules, on my reading, are either "tit for tat"-like behavioral rules or payoff-inspired rules. Under the first rule, agents (usually) lower their contribution if their earnings are less than their contribution and increase it if their earnings are greater than their contribution. This rule is later relaxed so that agents have an individual propensity to keep or increase their contribution. Under the second rule, agents base their contributions not on an earnings-to-contribution comparison but on how their own contribution compares to the group's average. The authors later run simulations in which groups are mixed (i.e., some agents use the first rule and some agents use the second). Generally, the author(s) find that their simulations qualitatively match the behavior observed in laboratory experiments.
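To make sure I have understood the two rules, my reading of them is roughly the following (a hypothetical sketch in Python; the step size, probabilities, and names are mine, not the authors' implementation):

```python
import random

# Hypothetical sketch of the two update rules as I read them; the step
# size, probabilities, and names are mine, not the authors'.
LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]

def step(contribution, direction):
    """Move one level up or down on the discrete contribution grid."""
    i = LEVELS.index(contribution)
    return LEVELS[max(0, min(len(LEVELS) - 1, i + direction))]

def payoff_rule(contribution, earnings, p_keep=0.2):
    # Rule 1: (usually) lower the contribution when earnings fall short of
    # it; otherwise keep it with some individual propensity, or raise it.
    if earnings < contribution:
        return step(contribution, -1)
    return contribution if random.random() < p_keep else step(contribution, +1)

def conditional_rule(contribution, group_mean):
    # Rule 2: one plausible reading -- move the contribution toward the
    # group's average contribution.
    if contribution < group_mean:
        return step(contribution, +1)
    if contribution > group_mean:
        return step(contribution, -1)
    return contribution
```

If this reading is wrong, a short clarification in the text would help.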

Main Comment:

Let me start off by saying that I very much like this paper. It is well written, easy to follow, and suitable for publication in Games. My main comment is that I would like to see more comparisons to actual laboratory behavior. The authors do this to some extent, but the comparisons are vague (e.g., phrases like "qualitatively similar..." are too common). I would prefer it if the author(s) included graphs of behavior from comparison works to give readers a better idea of what is meant by "qualitatively similar". I do not expect a perfect match, but I think it would be nice to see the simulated agents' behavior alongside real behavior.

Minor comments:

On page 2, line 48, the author(s) mention punishment and briefly discuss how the introduction of punishment tends to increase contributions. This is fine, but the authors should also note that punishment can harm efficiency, especially when there is the potential for counter-punishment (see Nikiforakis, 2008).

Line 53 page 2: “when taking their decision” should be “when making their decision”

Not all readers will know what is meant by a "multiplication factor", so the author(s) might consider describing this a bit more. The same goes for "bipartite networks".
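To illustrate the kind of one-line explanation I have in mind (assuming the paper uses the standard linear PGG formulation, which I have not checked against the manuscript): player i in a group of size n earns

\pi_i = e - c_i + \frac{r}{n} \sum_{j=1}^{n} c_j ,

where e is the endowment, c_i the individual contribution, and r the multiplication factor (symbols are mine). Stating something of this kind once would make the term self-explanatory.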

Line 168 page 6: “getting less than what it put in” is a bit ambiguous. Perhaps reword to make it obvious.

Line 363 page 15:  “in” should be capitalized.

I am a little worried about the decision set the agents have (i.e., they can only invest 0, 0.25, 0.5, ... of their endowment). So perhaps mention how expanding the agents' choices affects results (e.g., increments of 0.1 instead of 0.25). I don't think this requires redoing all of the simulations (a line or two would do), but I would guess that this would slow convergence.
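Concretely, the robustness check I have in mind is just a change of grid, e.g. (purely illustrative, not the authors' code):

```python
import numpy as np

# Coarse grid as described in the paper vs. a finer grid for the check.
coarse_levels = np.linspace(0.0, 1.0, 5)   # 0, 0.25, 0.5, 0.75, 1
fine_levels = np.linspace(0.0, 1.0, 11)    # 0, 0.1, ..., 1
```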


Author Response

Please see the attached document.

Author Response File: Author Response.pdf

Reviewer 2 Report

In the present work the authors simulate a population on a social network in order to study the dynamics of a public goods game and to bridge the gap between experiments and simulations.

The basic structure used to construct the PGG is essentially satisficing behavior, in which agents that get more than the group average increase their contribution or keep it constant, with a certain probability.

The main problem I see is that this structure rules out strategies of pure defection, which are normally the type of strategy that destroys the sustainability of cooperation in repeated experiments. These strategies are not at all rare and are at the core of the problem of why cooperation declines with time and repetitions.

A second problem: there is only one type of strategy, which is made heterogeneous by randomizing its parameters; at the very least it should be mixed with some pure strategies, even pure cooperation.
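To be concrete, the kind of mixture I am suggesting is something like the following (a hypothetical sketch; the fractions and names are arbitrary, not taken from the paper):

```python
import random

# Hypothetical illustration: a fraction of agents play fixed pure
# strategies instead of the adaptive rule.
def make_population(n, p_defector=0.1, p_cooperator=0.1):
    agents = []
    for _ in range(n):
        u = random.random()
        if u < p_defector:
            agents.append({"type": "pure_defector", "contribution": 0.0})
        elif u < p_defector + p_cooperator:
            agents.append({"type": "pure_cooperator", "contribution": 1.0})
        else:
            agents.append({"type": "adaptive",
                           "contribution": random.choice([0.0, 0.25, 0.5, 0.75, 1.0])})
    return agents
```

Seeing whether cooperation still survives when such pure defectors are present would make the results much more convincing.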

Author Response

Please refer to the attached document.

Author Response File: Author Response.pdf

Reviewer 3 Report

In this paper the authors applied a reverse approach: they calibrated their models on empirical results. This can be useful if we want to extend processes at the micro level to the population level. The goal is legitimate, but it is uncertain whether it can be achieved based on the results of only a few experiments. The literature on social dilemmas is broad enough that the models could be built on the results of further empirical research.

The most questionable issue in the paper is why the authors think that the payoff-based learning mechanism disproves the theory that people are inherently pro-social, given that participants in the experiments usually do not reach zero contribution even at the end of the game (lines 50-58) and contribute to the public good even at the beginning of the game. How can these phenomena be explained by the payoff-based learning mechanism? In lines 130-133, the authors write that this can be explained by the fact that the contribution(?) does not change significantly depending on whether the players have information about how much others contributed to the public good. Why does this disprove the pro-social theory? (see 'rather' in line 130). It is also unclear why the authors think that knowledge of earnings in the game carries no information about the contributions of others. Please explain in more detail how payoff-based learning differs from conditional cooperation.

A related question is how it is possible to disprove the theory of conditional cooperation in a game where individuals cannot 'modify' their network structure (e.g., exclude free-riders).

Incentives appear in the introduction, but it is not clear enough what counts as an incentive in this paper that can encourage PGG participants to move away from the dominant strategy. Can the multiplication factor be considered a positive incentive?

There is no reference in line 75, where the authors write about the critical value of the multiplication factor.

The paragraph about the difficulties encountered by the researcher in conducting an experiment cannot be considered a professional argument. I suggest that instead of highlighting the difficulties of the experiment, you highlight the advantages of numerical simulation, because the two methods are complementary rather than substitutes (line 97).

An exact description of the preferential attachment is missing. Does this simply mean a spatially independent, popularity-based attachment (line 236)? How was the network with preferential attachment generated? For the two types of networks (α = 0 and α = 1), what is the group size distribution (please provide more than just the mean degree)? Please also explain why you think that real social networks are similar to the scale-free network you generated.
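For instance, if "preferential attachment" refers to the standard Barabási-Albert construction, then even a sketch such as the following (which uses networkx and is only my guess at the procedure; n and m are not the paper's values) would let readers reproduce the networks and see the full group-size distribution:

```python
import networkx as nx
from collections import Counter

# Hypothetical reconstruction: a Barabasi-Albert preferential-attachment
# network. The parameters n and m are guesses, not the paper's values.
G = nx.barabasi_albert_graph(n=1000, m=4, seed=42)

# In a network PGG a group is typically a player plus its neighbors, so
# the group size is degree + 1; this full distribution, not only the mean
# degree, is what I am asking for.
group_sizes = Counter(deg + 1 for _, deg in G.degree())
print(sorted(group_sizes.items()))
```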

Please clarify what is meant by the 'noise' in line 297.

The initial contribution is chosen randomly (lines 156-157). Does this mean a uniform distribution over the five possible fractions? What is the reason for this choice? What would happen if it were based on a normal distribution instead? (50% could be the mean in both cases.)
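To spell out the comparison I am asking about (a hypothetical sketch; the standard deviation of 0.2 is arbitrary, and this is only my reading of "chosen randomly"):

```python
import numpy as np

rng = np.random.default_rng(0)
levels = np.array([0.0, 0.25, 0.5, 0.75, 1.0])

# (a) My reading of "chosen randomly": uniform over the five fractions.
uniform_init = rng.choice(levels, size=1000)

# (b) The alternative I am asking about: draw from a normal centred at 0.5
#     and snap to the nearest allowed fraction (sigma = 0.2 is arbitrary).
draws = rng.normal(loc=0.5, scale=0.2, size=1000)
normal_init = levels[np.abs(draws[:, None] - levels[None, :]).argmin(axis=1)]

print(uniform_init.mean(), normal_init.mean())  # both close to 0.5
```

Both initializations have roughly the same mean, so it would be interesting to know whether the dynamics are sensitive to the shape of the initial distribution.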


Author Response

Please find the attached document.

Author Response File: Author Response.pdf
