Special Issue "Artificial Intelligence in Dynamics of Human Cooperation"

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 December 2021).

Special Issue Editors

Dr. The Anh Han
Guest Editor
School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
Interests: evolutionary game theory; dynamics of human cooperation; AI; cognitive modeling; agent-based simulations
Dr. Simon Powers
Guest Editor
School of Computing, Edinburgh Napier University, Edinburgh EH11 4DY, UK
Interests: institutions; social dilemmas; multi-agent systems; cultural evolution; game theory; evolutionary game theory
Prof. Dr. Luís Moniz Pereira
Guest Editor
Departamento de Informática, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Campus de Caparica, 2829-516 Caparica, Portugal
Interests: knowledge representation and reasoning; logic programming; cognitive sciences; evolutionary game theory; machine ethics; computer science philosophy
Prof. Dr. Isamu Okada
Guest Editor
Department of Business Administration, Soka University, Tangi 1-236, Hachioji, Tokyo 192-8577, Japan
Interests: social dilemmas; evolution of cooperation; evolutionary game theory; indirect reciprocity; agent-based simulations; computational social science

Special Issue Information

Dear Colleagues,

The problem of the evolution of cooperation and the emergence of collective behavior, which cuts across diverse disciplines such as Economics, Physics, Biology, Psychology, Sociology, Political Science, Cognitive Science, and Computer Science, remains one of the greatest integrative interdisciplinary challenges facing science today. Mathematical and simulation techniques, including evolutionary game theory, statistical physics, and agent-based simulations, have proven powerful for studying this problem. To understand the evolutionary mechanisms that promote and more or less stably maintain collective behavior in various societies, it is important to take into account the intrinsic complexity of the individuals partaking therein, namely their cognitive and complex decision-making processes. On the other hand, artificial intelligence (AI) and related technologies have become increasingly prevalent in human life, making decisions that might alter the dynamics of human interactions in many ways. Moreover, the relationship cuts both ways: cognition shapes which collective arrangements prove advantageous to a community, and, conversely, the structure of a collective community shapes which cognitive abilities are selected for or enhanced.
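The style of analysis these techniques enable can be shown with a minimal sketch. The payoff values and the discrete-time replicator update below are illustrative assumptions, not drawn from any study in this issue; they show how, in a one-shot Prisoner's Dilemma played in a well-mixed population, defection drives cooperation to extinction, which is exactly the baseline that mechanisms such as reciprocity, reputation, or incentives are invoked to overcome.

```python
# Illustrative sketch: discrete-time replicator dynamics for a one-shot
# Prisoner's Dilemma. Payoffs satisfy T > R > P > S (temptation, reward,
# punishment, sucker's payoff); the specific numbers are assumptions.
T, R, P, S = 5.0, 3.0, 1.0, 0.0

def step(x):
    """One replicator step for the fraction x of cooperators.

    In a well-mixed population, a cooperator's expected payoff is
    f_C = R*x + S*(1 - x) and a defector's is f_D = T*x + P*(1 - x);
    strategies reproduce in proportion to payoff relative to the mean.
    """
    f_c = R * x + S * (1 - x)
    f_d = T * x + P * (1 - x)
    f_bar = x * f_c + (1 - x) * f_d  # population-average payoff
    return x * f_c / f_bar

x = 0.5  # start with half the population cooperating
for _ in range(50):
    x = step(x)
# Since f_D > f_C for every x in (0, 1), cooperation goes extinct.
```

The same dynamics can be enriched with the ingredients discussed above (cognitive mechanisms, reputation, AI agents in the population) by changing the payoff structure or the update rule, which is the kind of modeling this Special Issue invites.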

This Special Issue aims to provide a forum for the exploration of the potential interplay between AI and the dynamics of human collective behavior such as cooperation, coordination, trust and fairness; in particular, the different ways that the advancement of AI might alter the dynamics of human collective behavior, and vice versa. Both theoretical modeling studies and behavioral experiments are welcome.

Some potential topics include (but are not limited to): 

  • Cooperation in hybrid societies;
  • Cooperation with autonomous agents;
  • AI-based cooperation engineering;
  • Trust and cooperation in human–machine interactions;
  • Cognitive mechanisms and cooperation;
  • Emergence of the cognitive mechanisms for cooperation;
  • Reputation and information processing;
  • Cooperation and competition in AI development;
  • Incentive design for pro-sociality in human–agent societies;
  • AI and social cohesion.

Dr. The Anh Han
Dr. Simon Powers
Prof. Dr. Luís Moniz Pereira
Prof. Dr. Isamu Okada
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Evolution of cooperation
  • Evolutionary game theory
  • Complex system topological influence
  • Agent-based simulation
  • Cognition, separately or commingled with the above
  • Statistical physics
  • AI modeling
  • Multi-agent systems
  • Behavioral experiments
  • Interdisciplinary lessons and bridging

Published Papers (1 paper)


Research

Article
Employing AI to Better Understand Our Morals
Entropy 2022, 24(1), 10; https://doi.org/10.3390/e24010010 - 21 Dec 2021
Abstract
We present a summary of research that we have conducted employing AI to better understand human morality. This summary adumbrates theoretical fundamentals and considers how to regulate development of powerful new AI technologies. The latter research aim is benevolent AI, with fair distribution of benefits associated with the development of these and related technologies, avoiding disparities of power and wealth due to unregulated competition. Our approach avoids statistical models employed in other approaches to solve moral dilemmas, because these are “blind” to natural constraints on moral agents, and risk perpetuating mistakes. Instead, our approach employs, for instance, psychologically realistic counterfactual reasoning in group dynamics. The present paper reviews studies involving factors fundamental to human moral motivation, including egoism vs. altruism, commitment vs. defaulting, guilt vs. non-guilt, apology plus forgiveness, counterfactual collaboration, among other factors fundamental in the motivation of moral action. These being basic elements in most moral systems, our studies deliver generalizable conclusions that inform efforts to achieve greater sustainability and global benefit, regardless of cultural specificities in constituents.
(This article belongs to the Special Issue Artificial Intelligence in Dynamics of Human Cooperation)