Information Theory at the Crossroads of Artificial Intelligence, Human Cognition, and Economics

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 January 2021) | Viewed by 13993

Special Issue Editor

Dr. Michael Harré
Guest Editor
Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
Interests: machine intelligence; artificial psychology; information theory; social networks; game theory; criticality

Special Issue Information

Dear Colleagues,

Information theory is a versatile tool for studying the overlap between theories of economic decisions and theories of artificial intelligence (AI) and human cognition. In AI, this overlap is visible in the key texts on information theory, where neural networks, machine learning, and pattern recognition appear alongside discussions of gambling, optimal portfolio selection, and financial markets. Consequently, information theory can help explore prescriptive questions about decisions, such as: Given the information an agent has, what action will optimise the outcome?

Alternatively, approaches in computer science, neuroscience, and psychology have viewed the cognitive sciences as a study of constrained information processing. For example, information theory has played an important role in models that relate perception to action via neural structures. This descriptive use of information theory can address questions about decisions such as: Given the information an agent has, how does it go about making a decision?

These distinctions become more important as economics adopts increasingly sophisticated intelligent agents, larger agent-based models, and larger datasets to understand an increasingly complex economic world. These new developments need rigorous foundations in order to answer questions about how individuals could optimally behave, how they actually behave, what drives these differences, and how material these differences are.

This Special Issue invites submissions of original research articles and reviews that explore the role of information theory in economics and:

  • agent-based modeling;
  • psychology and neuroscience;
  • artificial intelligence;
  • reinforcement learning;
  • agent-to-agent interactions;
  • network theory;
  • game theory;
  • game theory of mind;
  • the basis of rational decisions;
  • human versus algorithmic rationality; and
  • emergent market complexity.

Dr. Michael Harré
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • information theory
  • economics
  • agent-based models
  • psychology
  • neuroeconomics
  • artificial intelligence
  • rationality
  • network theory
  • game theory

Published Papers (4 papers)

Research


33 pages, 2474 KiB  
Article
Information Structures for Causally Explainable Decisions
by Louis Anthony Cox, Jr.
Entropy 2021, 23(5), 601; https://doi.org/10.3390/e23050601 - 13 May 2021
Cited by 4 | Viewed by 3110
Abstract
For an AI agent to make trustworthy decision recommendations under uncertainty on behalf of human principals, it should be able to explain why its recommended decisions make preferred outcomes more likely and what risks they entail. Such rationales use causal models to link potential courses of action to resulting outcome probabilities. They reflect an understanding of possible actions, preferred outcomes, the effects of action on outcome probabilities, and acceptable risks and trade-offs—the standard ingredients of normative theories of decision-making under uncertainty, such as expected utility theory. Competent AI advisory systems should also notice changes that might affect a user’s plans and goals. In response, they should apply both learned patterns for quick response (analogous to fast, intuitive “System 1” decision-making in human psychology) and also slower causal inference and simulation, decision optimization, and planning algorithms (analogous to deliberative “System 2” decision-making in human psychology) to decide how best to respond to changing conditions. Concepts of conditional independence, conditional probability tables (CPTs) or models, causality, heuristic search for optimal plans, uncertainty reduction, and value of information (VoI) provide a rich, principled framework for recognizing and responding to relevant changes and features of decision problems via both learned and calculated responses. This paper reviews how these and related concepts can be used to identify probabilistic causal dependencies among variables, detect changes that matter for achieving goals, represent them efficiently to support responses on multiple time scales, and evaluate and update causal models and plans in light of new data. The resulting causally explainable decisions make efficient use of available information to achieve goals in uncertain environments.
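The value-of-information idea at the core of this framework can be made concrete with a small worked example. The sketch below is a minimal illustration in Python, not code from the paper: the binary state, the noisy signal's CPT, and the utility numbers are all hypothetical, chosen only to show how VoI compares acting on prior beliefs with acting after a Bayes update on an observation.

```python
# Minimal, illustrative value-of-information (VoI) calculation for a
# two-action decision under uncertainty. All numbers are hypothetical.
import numpy as np

# Hypothetical prior over a binary state of the world (e.g., "risk present" vs "absent").
p_state = np.array([0.3, 0.7])

# Utility table U[action, state]: rows are actions (e.g., "mitigate", "do nothing").
U = np.array([[  8.0, -2.0],
              [-10.0,  5.0]])

# CPT P(signal | state) for a noisy diagnostic signal the agent could acquire.
p_signal_given_state = np.array([[0.9, 0.2],   # P(signal = positive | state)
                                 [0.1, 0.8]])  # P(signal = negative | state)

# Expected utility of the best action using only the prior.
eu_prior = (U @ p_state).max()

# Expected utility of acting optimally after observing the signal (Bayes update per outcome).
p_signal = p_signal_given_state @ p_state
eu_posterior = 0.0
for s in range(2):
    posterior = p_signal_given_state[s] * p_state / p_signal[s]
    eu_posterior += p_signal[s] * (U @ posterior).max()

voi = eu_posterior - eu_prior  # non-negative by construction
print(f"VoI of the signal: {voi:.3f} utility units")
```

Here the VoI is the expected utility gained by observing the signal before choosing; a decision-support system of the kind the paper describes would acquire the observation only when this gain exceeds the cost of obtaining it.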

19 pages, 971 KiB  
Article
The Natural Philosophy of Economic Information: Autonomous Agents and Physiosemiosis
by Carsten Herrmann-Pillath
Entropy 2021, 23(3), 277; https://doi.org/10.3390/e23030277 - 25 Feb 2021
Cited by 3 | Viewed by 2077
Abstract
Information is a core concept in modern economics, yet its definition and empirical specification is elusive. One reason is the intellectual grip of the Shannon paradigm which marginalizes semantic information. However, a precise concept of economic information must be based on a theory of semantics, since what counts economically is the meaning, function and use of information. This paper introduces a new principled approach to information that adopts the paradigm of biosemiotics, rooted in the philosophy of Charles S. Peirce, and builds on recent developments in the thermodynamics of information. Information processing by autonomous agents, defined as autopoietic heat engines, is conceived as physiosemiosis operating according to fundamental thermodynamic principles of information processing, as elucidated in recent work by Kolchinsky and Wolpert (KW). I plug the KW approach into a basic conceptual model of physiosemiosis and present an evolutionary interpretation. This approach has far-reaching implications for economics, such as suggesting an evolutionary view of the economic agent, choice and behavior, which is informed by applications of statistical thermodynamics on the brain.

26 pages, 783 KiB  
Article
Behavioural Effects and Market Dynamics in Field and Laboratory Experimental Asset Markets
by Sandra Andraszewicz, Ke Wu and Didier Sornette
Entropy 2020, 22(10), 1183; https://doi.org/10.3390/e22101183 - 20 Oct 2020
Viewed by 2650
Abstract
A vast literature investigating behavioural underpinnings of financial bubbles and crashes relies on laboratory experiments. However, it is not yet clear how findings generated in a highly artificial environment relate to human behaviour in the wild. It is of concern that the laboratory setting may create a confounding variable that impacts the experimental results. To explore the similarities and differences between human behaviour in the laboratory environment and in a realistic natural setting, with the same type of participants, we translate a field study conducted by reference (Sornette, D.; et al. Econ. E-J. 2020, 14, 1–53), with trading rounds each lasting six full days, to a laboratory experiment lasting two hours. The laboratory experiment replicates the key findings from the field study, but we observe substantial differences in the market dynamics between the two settings. The replication of the results in the two distinct settings indicates that relaxing some of the laboratory control does not corrupt the main findings, while at the same time it offers several advantages, such as the possibility to increase the number of participants interacting with each other at the same time and the number of traded securities. These findings provide important insights for future experiments investigating human behaviour in complex systems.

Review


19 pages, 362 KiB  
Review
Information Theory for Agents in Artificial Intelligence, Psychology, and Economics
by Michael S. Harré
Entropy 2021, 23(3), 310; https://doi.org/10.3390/e23030310 - 6 Mar 2021
Cited by 14 | Viewed by 5387
Abstract
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision theory. In doing so we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called ‘small worlds’ but cannot work in ‘large worlds’. This point, in various guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.
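One recurring formalism behind the review's bounded-rationality theme is the trade-off between expected utility and the information cost of moving away from a default policy. The sketch below is a generic, hypothetical illustration of that idea (it is not taken from the review): the optimal trade-off is a softmax reweighting of a prior policy, with the inverse temperature beta setting how much deliberation the agent can afford.

```python
# Minimal sketch of an information-theoretically bounded-rational choice rule.
# The two-action toy numbers are hypothetical, not from the reviewed paper.
import numpy as np

def bounded_rational_policy(expected_utility, prior_policy, beta):
    """Policy maximising E[U] - (1/beta) * KL(policy || prior_policy)."""
    weights = prior_policy * np.exp(beta * expected_utility)
    return weights / weights.sum()

# Expected utility of each action under the agent's current (Bayesian) beliefs.
eu = np.array([1.0, 0.4])
uniform_prior = np.array([0.5, 0.5])

for beta in (0.1, 1.0, 10.0):   # low beta: heavily bounded; high beta: near-optimal
    print(beta, bounded_rational_policy(eu, uniform_prior, beta))
```

As beta grows the policy approaches the strict expected-utility maximiser; as beta shrinks it stays near the default policy, mimicking a severely resource-limited decision-maker.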
