Maximum Entropy Applied to Inductive Logic and Reasoning

A special issue of Entropy (ISSN 1099-4300).

Deadline for manuscript submissions: closed (1 December 2014) | Viewed by 44941

Special Issue Editors


Dr. Jürgen Landes
Guest Editor
Department of Philosophy, School of European Culture and Languages, University of Kent, Canterbury CT2 7NF, UK
Interests: mathematical logics and applications; imperfect information in all its varieties and forms; rationality; decision science; quantum computing

Prof. Dr. Jon Williamson
Guest Editor
Department of Philosophy, School of European Culture and Languages, University of Kent, Canterbury CT2 7NF, UK
Interests: causality; probability; logics and reasoning; their applications to science, mathematics and AI

Special Issue Information

Dear Colleagues,

Since E. T. Jaynes showed how maximizing Shannon entropy can be applied to rational belief formation, Maximum Entropy (MaxEnt) methods have played an important role in inductive reasoning. This Special Issue provides a forum for proponents, opponents and practitioners to discuss and advance the current state of the art. We explicitly welcome contributions arguing for or against MaxEnt methods. (A minimal worked sketch of the MaxEnt recipe appears after the list of topics below.)

Specific areas of interest include (but are not limited to):

  • Formal applications of MaxEnt to inductive logic or inductive reasoning.
  • Philosophical accounts of MaxEnt methods for inductive logic or inductive reasoning (including contributions arguing for or against MaxEnt methods).
  • MaxEnt methods for rational agents (in a single agent, multi-agent or autonomous agent setting).
  • Connections between MaxEnt and scoring rules.
  • Surveys of the state of the art in one of the above areas.
  • Historical perspectives on MaxEnt and inductive logic with a focus on where we stand today.
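To fix ideas, here is a minimal sketch of the MaxEnt calculation on Jaynes's well-known Brandeis dice problem (our illustration, using NumPy and SciPy; the die values, the constrained mean of 4.5 and the solver choice are assumptions of the example, not part of this call):

```python
# Minimal MaxEnt sketch: find the distribution over a six-sided die that
# maximises Shannon entropy subject to the evidence E[X] = 4.5
# (Jaynes's "Brandeis dice" problem).
import numpy as np
from scipy.optimize import minimize

values = np.arange(1, 7)          # die outcomes 1..6
target_mean = 4.5                 # the only piece of evidence

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)    # guard against log(0)
    return np.sum(p * np.log(p))  # minimising this maximises entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},             # normalisation
    {"type": "eq", "fun": lambda p: p @ values - target_mean},  # mean constraint
]
result = minimize(neg_entropy, np.full(6, 1 / 6),
                  bounds=[(0.0, 1.0)] * 6, constraints=constraints)
print(result.x)  # probabilities increase towards the higher faces
```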

Dr. Jürgen Landes
Prof. Dr. Jon Williamson
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • maximum entropy principle
  • maxent
  • inductive logic
  • inductive reasoning
  • inductive inference
  • objective bayesianism
  • scoring rules

Published Papers (8 papers)


Editorial

Editorial
Maximum Entropy Applied to Inductive Logic and Reasoning
by Jürgen Landes and Jon Williamson
Entropy 2015, 17(5), 3458-3460; https://doi.org/10.3390/e17053458 - 18 May 2015
Cited by 1 | Viewed by 4085
Abstract
This editorial explains the scope of the special issue and provides a thematic introduction to the contributed papers.

Research

Article
Justifying Objective Bayesianism on Predicate Languages
by Jürgen Landes and Jon Williamson
Entropy 2015, 17(4), 2459-2543; https://doi.org/10.3390/e17042459 - 22 Apr 2015
Cited by 11 | Viewed by 5266
Abstract
Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
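The minimax motivation can be seen in miniature: over a finite outcome space with no evidence, the maximally equivocal (uniform) belief function minimises worst-case expected logarithmic loss, since the worst case is always a point mass on the outcome one believes least. A toy numerical check (our illustration, not the paper's predicate-language formalism):

```python
# Worst-case expected log loss of a belief function q: the supremum over
# priors P of E_P[-log q] is attained at a point mass, so it equals the
# largest single-outcome log loss. The uniform distribution minimises it.
import numpy as np

def worst_case_log_loss(q):
    return np.max(-np.log(q))

uniform = np.full(4, 0.25)
rng = np.random.default_rng(0)
for _ in range(5):
    rival = rng.dirichlet(np.ones(4))   # a random rival belief function
    assert worst_case_log_loss(uniform) <= worst_case_log_loss(rival)
print(worst_case_log_loss(uniform))     # log 4, the minimax value
```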
Article
Maximum Entropy and Probability Kinematics Constrained by Conditionals
by Stefan Lukits
Entropy 2015, 17(4), 1690-1700; https://doi.org/10.3390/e17041690 - 27 Mar 2015
Cited by 5 | Viewed by 4985
Abstract
Two open questions of inductive reasoning are solved: (1) does the principle of maximum entropy (PME) give a solution to the obverse Majerník problem; and (2) is Wagner correct when he claims that Jeffrey’s updating principle (JUP) contradicts PME? Majerník shows that PME provides unique and plausible marginal probabilities, given conditional probabilities. The obverse problem posed here is whether PME also provides such conditional probabilities, given certain marginal probabilities. The theorem developed to solve the obverse Majerník problem demonstrates that in the special case introduced by Wagner, PME does not contradict JUP, but elegantly generalizes it and offers a more integrated approach to probability updating.
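In the simplest setting the compatibility claim can be checked numerically: when evidence fixes the probabilities of a partition, the PME update (minimising relative entropy subject to the new marginals) coincides with the Jeffrey update. A small sketch (our illustration; the prior and the learned marginals are made-up numbers):

```python
# Check that Jeffrey's rule and minimum-relative-entropy updating agree
# when the evidence fixes the probabilities of a partition {B1, B2}.
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.4, 0.2, 0.3, 0.1])            # joint over four worlds
cells = [np.array([0, 1]), np.array([2, 3])]      # the evidence partition
learned = np.array([0.7, 0.3])                    # new P(B1), P(B2)

# Jeffrey's updating principle: rescale the prior within each cell.
jeffrey = prior.copy()
for cell, m in zip(cells, learned):
    jeffrey[cell] *= m / prior[cell].sum()

# PME: minimise KL(q || prior) subject to the same marginal constraints.
kl = lambda q: np.sum(q * np.log(q / prior))
cons = [{"type": "eq", "fun": lambda q, c=cell, m=m: q[c].sum() - m}
        for cell, m in zip(cells, learned)]
res = minimize(kl, prior, bounds=[(1e-9, 1.0)] * 4, constraints=cons)
print(np.allclose(res.x, jeffrey, atol=1e-6))     # True: the updates coincide
```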
Article
Maximum Relative Entropy Updating and the Value of Learning
by Patryk Dziurosz-Serafinowicz
Entropy 2015, 17(3), 1146-1164; https://doi.org/10.3390/e17031146 - 11 Mar 2015
Cited by 2 | Viewed by 4782
Abstract
We examine the possibility of justifying the principle of maximum relative entropy (MRE), considered as an updating rule, by looking at the value of learning theorem established in classical decision theory. This theorem captures an intuitive requirement for learning: learning should lead to new degrees of belief that are expected to be helpful and never harmful in making decisions. We call this requirement the value of learning. We consider the extent to which learning rules by MRE could satisfy this requirement and so could be a rational means for pursuing practical goals. First, by representing MRE updating as a conditioning model, we show that MRE satisfies the value of learning in cases where learning prompts a complete redistribution of one’s degrees of belief over a partition of propositions. Second, we show that the value of learning may not be generally satisfied by MRE updates in cases of updating on a change in one’s conditional degrees of belief. We explain that this is so because, contrary to what the value of learning requires, one’s prior degrees of belief might not be equal to the expectation of one’s posterior degrees of belief. This, in turn, points towards a more general moral: that the justification of MRE updating in terms of the value of learning may be sensitive to the context of a given learning experience. Moreover, this lends support to the idea that MRE is neither a universal nor a mechanical updating rule, but rather a rule whose application and justification may be context-sensitive.
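The identity at stake is easy to state concretely: under ordinary conditioning the prior is the expectation of the possible posteriors, P(A) = Σ_e P(E=e) P(A | E=e), and it is this property that can fail for more general MRE updates. A quick numerical check of the conditioning case (our illustration, with an arbitrary random joint distribution):

```python
# Under ordinary conditioning, the prior P(A) equals the expectation of
# the posteriors P(A | E=e) weighted by P(E=e) (law of total probability).
import numpy as np

rng = np.random.default_rng(1)
joint = rng.dirichlet(np.ones(6)).reshape(2, 3)  # P(A=a, E=e)

prior_A = joint.sum(axis=1)[0]        # P(A=0)
p_E = joint.sum(axis=0)               # P(E=e) for each e
posteriors = joint[0] / p_E           # P(A=0 | E=e)
print(np.isclose(prior_A, p_E @ posteriors))  # True
```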
Article
Relational Probabilistic Conditionals and Their Instantiations under Maximum Entropy Semantics for First-Order Knowledge Bases
by Christoph Beierle, Marc Finthammer and Gabriele Kern-Isberner
Entropy 2015, 17(2), 852-865; https://doi.org/10.3390/e17020852 - 13 Feb 2015
Cited by 11 | Viewed by 5237
Abstract
For conditional probabilistic knowledge bases with conditionals based on propositional logic, the principle of maximum entropy (ME) is well-established, determining a unique model inductively completing the explicitly given knowledge. On the other hand, there is no general agreement on how to extend the ME principle to relational conditionals containing free variables. In this paper, we focus on two approaches to ME semantics that have been developed for first-order knowledge bases: aggregating semantics and a grounding semantics. Since they use different variants of conditionals, we define the logic PCI, which covers both approaches as special cases and provides a framework where the effects of both approaches can be studied in detail. While the ME models under PCI-grounding and PCI-aggregating semantics are different in general, we point out that parametric uniformity of a knowledge base ensures that both semantics coincide. Using some concrete knowledge bases, we illustrate the differences and common features of both approaches, looking in particular at the ground instances of the given conditionals.
Article
A Foundational Approach to Generalising the Maximum Entropy Inference Process to the Multi-Agent Context
by George Wilmers
Entropy 2015, 17(2), 594-645; https://doi.org/10.3390/e17020594 - 2 Feb 2015
Cited by 16 | Viewed by 5261
Abstract
The present paper seeks to establish a logical foundation for studying axiomatically multi-agent probabilistic reasoning over a discrete space of outcomes. We study the notion of a social inference process, which generalises the concept of an inference process for a single agent, a concept used by Paris and Vencovská to characterise axiomatically the method of maximum entropy inference. Axioms for a social inference process are introduced and discussed, and a particular social inference process called the Social Entropy Process, or SEP, is defined which satisfies these axioms. SEP is justified heuristically by an information theoretic argument, and incorporates both the maximum entropy inference process for a single agent and the multi-agent normalised geometric mean pooling operator.
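For concreteness, the pooling operator that SEP incorporates can be written in a few lines (our illustration of normalised geometric mean pooling, not a definition of SEP itself):

```python
# Normalised geometric mean pooling: merge several agents' probability
# functions over the same outcomes into one "social" probability function.
import numpy as np

def geometric_pool(dists):
    pooled = np.prod(dists, axis=0) ** (1.0 / len(dists))
    return pooled / pooled.sum()    # renormalise

agents = np.array([[0.5, 0.3, 0.2],
                   [0.2, 0.5, 0.3]])
print(geometric_pool(agents))
```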
Article
The Information Geometry of Bregman Divergences and Some Applications in Multi-Expert Reasoning
by Martin Adamčík
Entropy 2014, 16(12), 6338-6381; https://doi.org/10.3390/e16126338 - 1 Dec 2014
Cited by 16 | Viewed by 9101
Abstract
The aim of this paper is to develop a comprehensive study of the geometry involved in combining Bregman divergences with pooling operators over closed convex sets in a discrete probabilistic space. A particular connection we develop leads to an iterative procedure, which is similar to the alternating projection procedure by Csiszár and Tusnády. Although such iterative procedures are well studied over much more general spaces than the one we consider, only a few authors have investigated combining projections with pooling operators. We aspire to achieve here a comprehensive study of such a combination. Moreover, pooling operators combining the opinions of several rational experts allow us to discuss possible applications in multi-expert reasoning.
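As a point of reference, a Bregman divergence is D_F(p, q) = F(p) − F(q) − ⟨∇F(q), p − q⟩ for a convex generator F; taking F to be negative Shannon entropy recovers the KL divergence central to MaxEnt methods. A short numerical check (our illustration):

```python
# Bregman divergence D_F(p, q) = F(p) - F(q) - <grad F(q), p - q>.
# With F(p) = sum p log p (negative entropy), D_F is the KL divergence.
import numpy as np

def bregman(F, gradF, p, q):
    return F(p) - F(q) - gradF(q) @ (p - q)

neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: np.log(p) + 1.0

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])
print(np.isclose(bregman(neg_entropy, grad_neg_entropy, p, q),
                 np.sum(p * np.log(p / q))))   # True
```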
Article
What You See Is What You Get
by Jeff B. Paris
Entropy 2014, 16(11), 6186-6194; https://doi.org/10.3390/e16116186 - 21 Nov 2014
Cited by 18 | Viewed by 5222
Abstract
This paper corrects three widely held misunderstandings about MaxEnt when used in common sense reasoning: that it is language dependent; that it produces objective facts; and that it subsumes, and so is at least as untenable as, the paradox-ridden Principle of Insufficient Reason.