Rough Set Theory and Entropy in Information Science

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 11968

Special Issue Editors


Guest Editor
School of Business and Economics, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
Interests: logic

Co-Guest Editor
Department of Computer Science, University of Regina, Regina S4S 0A2, Canada
Interests: three-way decisions

Co-Guest Editor
School of Mathematics, University of the Witwatersrand, Johannesburg WITS 2050, South Africa
Interests: modal logic

Special Issue Information

In recent years, Pawlak’s classical framework of rough set theory has been generalized in various ways. Particularly relevant to the present volume is the fact that some of these generalizations build on algebraic and logical insights and techniques, and involve modal algebras based on general (i.e., not necessarily distributive) lattices on the algebraic side, and the following two types of relational structures on the side corresponding to approximation spaces:

- structures based on formal contexts from Rudolf Wille’s Formal Concept Analysis;

- structures based on reflexive (directed) graphs.
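For readers unfamiliar with Wille’s Formal Concept Analysis, the two derivation operators of a formal context can be sketched in a few lines of Python. The toy context and all names below are illustrative, not taken from any of the works discussed here.

```python
# Toy formal context and the two derivation operators of Formal Concept
# Analysis. A formal concept is a pair (A, B) of an extent A and an intent B
# with intent(A) == B and extent(B) == A.

def intent(objs, incidence, attributes):
    """Attributes shared by every object in objs."""
    return {a for a in attributes if all((g, a) in incidence for g in objs)}

def extent(attrs, incidence, objects):
    """Objects possessing every attribute in attrs."""
    return {g for g in objects if all((g, a) in incidence for a in attrs)}

objects = {1, 2, 3}
attributes = {'a', 'b', 'c'}
incidence = {(1, 'a'), (1, 'b'), (2, 'b'), (3, 'b'), (3, 'c')}

A = extent({'b'}, incidence, objects)    # {1, 2, 3}
B = intent(A, incidence, attributes)     # {'b'}: ({1, 2, 3}, {'b'}) is a concept
```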

It is precisely structures of this second type that have recently been proposed as a basic framework for modelling relative entropy in information theory. This proposal pivots on the following core ideas:

- in many concrete situations, the indiscernibility relations of approximation spaces can be assumed to be reflexive, but neither symmetric nor transitive;

- in Pawlak’s original framework, the upper and lower approximations arising from any given indiscernibility relation reflect an external perspective on (in)discernibility, which is captured algebraically by the notion of Boolean algebra with modal operators and its canonically associated classical modal logic S5.

Specifically, formulas in this logic denote arbitrary subsets of a given approximation space, and modal operators serve to identify those special formulas that denote definable sets (and hence definable approximations of arbitrary sets).
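The upper and lower approximations mentioned above admit a compact computational sketch. The following Python fragment is a hedged illustration of Pawlak’s definitions (the universe, relation, and names are invented for the example): the lower approximation of a set X collects the elements whose indiscernibility class lies inside X, and the upper approximation collects those whose class meets X.

```python
# Pawlak's lower and upper approximations for an equivalence
# (indiscernibility) relation R on a universe U, represented as a set of pairs.

def r_class(x, U, R):
    """The indiscernibility class of x under R."""
    return {y for y in U if (x, y) in R}

def lower_approx(X, U, R):
    """Elements whose whole R-class is contained in X."""
    return {x for x in U if r_class(x, U, R) <= X}

def upper_approx(X, U, R):
    """Elements whose R-class has nonempty intersection with X."""
    return {x for x in U if r_class(x, U, R) & X}

U = {1, 2, 3, 4}
# Equivalence relation with classes {1, 2} and {3, 4}.
R = {(a, b) for a in (1, 2) for b in (1, 2)} | \
    {(a, b) for a in (3, 4) for b in (3, 4)}
X = {1, 2, 3}

lower_approx(X, U, R)  # {1, 2}: the class {1, 2} lies inside X
upper_approx(X, U, R)  # {1, 2, 3, 4}: both classes meet X
```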

In contrast to the well-known external perspective just outlined, an internal perspective on indiscernibility has recently been developed. It is particularly useful in all situations in which the indiscernibility in question is induced by relative entropy, i.e., an inherent boundary to knowability (due, e.g., to perceptual, theoretical, evidential, or linguistic limits).

In these cases, rather than generating modal operators, the indiscernibility relation can also generate a complete general (i.e., not necessarily distributive) lattice, whose associated logic is much weaker than classical propositional (modal) logic. Formulas in this logic denote definable sets by default, and hence represent ‘all there is to know’, i.e., the theoretical horizon of knowability, given the information entropy encoded in the indiscernibility relation.

Weaker logics than classical propositional logic have higher standards of truth, and the logic mentioned above is no exception: it behaves as an evidential logic in which formulas can be true, false, or neither, depending on whether there is evidence in their support, evidence against them, or neither. Hence, as the principle of excluded middle fails at a meta-theoretic level, we refer to this (evidence-based) notion of truth as hyper-constructive rather than classical. The seminal papers in which these ideas are discussed are [1,2]. Some applications of these same ideas in a many-valued context are discussed in [3,4].

Much work needs to be done to fully develop this research program, even if we only consider the logic side of it.

Thematically appropriate contributions to this research program include (among many others):

- Gödel translation on graph-based frames with a Blok–Esakia-style theorem;

- Parametric correspondence involving graph-based frames;

- Dempster–Shafer theory on graph-based frames;

- A Goldblatt–Thomason characterization theorem for graph-based frames;

- Algebraic proof theory on graph-based frames;

- Lawvere/allegory-based semantics of non-distributive first order logic.

However, other facets of this program go beyond logic and include a broad range of disciplines spanning from formal modeling and probabilistic and statistical reasoning to formal epistemology.

The ambitious aim of this volume is to collect cutting-edge results and insights in the formal modelling of entropy in rough set theory, so that these results and insights can be systematically connected to each other.

References

  1. Conradie, M.; Craig, A.; Palmigiano, A.; Wijnberg, N.M. Modelling informational entropy. arXiv 2019, arXiv:1903.12518.
  2. Conradie, W.; Palmigiano, A.; Robinson, C.; Wijnberg, N. Non-distributive logics: from semantics to meaning. arXiv 2020, arXiv:2002.04257.
  3. Conradie, W.; Craig, A.; Palmigiano, A.; Wijnberg, N.M. Modelling competing theories. arXiv 2019, arXiv:1905.11748.
  4. Conradie, W.; Palmigiano, A.; Robinson, C.; Tzimoulis, A.; Wijnberg, N.M. Modelling socio-political competition. arXiv 2019, arXiv:1908.04817. 

Prof. Dr. Alessandra Palmigiano
Dr. Willem Conradie
Dr. Yiyu Yao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • entropy
  • rough set theory
  • formal concept analysis
  • evidential logics
  • non-distributive logics
  • formalizations of statistical reasoning
  • knowability and uncertainty

Published Papers (6 papers)


Research

14 pages, 3540 KiB  
Article
The Cuts Selection Method Based on Histogram Segmentation and Impact on Discretization Algorithms
by Visnja Ognjenovic, Vladimir Brtka, Jelena Stojanov, Eleonora Brtka and Ivana Berkovic
Entropy 2022, 24(5), 675; https://0-doi-org.brum.beds.ac.uk/10.3390/e24050675 - 11 May 2022
Cited by 2 | Viewed by 1369
Abstract
The preprocessing of data is an important task in rough set theory as well as in entropy. The discretization of data, as part of preprocessing, is a very influential process. Is there a connection between the segmentation of the data histogram and data discretization? The authors propose a novel data segmentation technique based on a histogram with regard to the quality of a data discretization. The significance of a cut’s position has been researched on several groups of histograms. A data set reduct was observed with respect to the histogram type. Connections between the data histograms and cuts, the reduct, and the classification rules have been researched. The result is that the reduct attributes have more irregular histograms than attributes outside the reduct. The following discretization algorithms were used: the entropy algorithm and the Maximal Discernibility (MD) algorithm developed in rough set theory. This article presents the Cuts Selection Method based on histogram segmentation, the reduct of data, and the MD algorithm of discretization. An application on the selected database shows that the benefit of a selection of cuts relies on histogram segmentation. The results of the classification were compared with the results of the Naïve Bayes algorithm.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
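As a hedged illustration of the kind of quantity entropy-based discretization minimizes (generic textbook material, not the authors’ code; all names and data are invented), the class-information entropy of a candidate cut can be computed as follows:

```python
# Class-information entropy of a candidate cut: the weighted entropy of the
# class labels on either side of the cut. Entropy-based discretization
# prefers cuts with low values of this quantity.
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def cut_entropy(values, labels, cut):
    """Weighted class entropy after splitting (values, labels) at `cut`."""
    left = [l for v, l in zip(values, labels) if v <= cut]
    right = [l for v, l in zip(values, labels) if v > cut]
    n = len(labels)
    return len(left) / n * entropy(left) + len(right) / n * entropy(right)

values = [1.0, 2.0, 3.0, 4.0]
labels = ['a', 'a', 'b', 'b']
cut_entropy(values, labels, 2.5)  # 0.0: this cut separates the classes perfectly
cut_entropy(values, labels, 1.5)  # > 0: class 'a' falls on both sides
```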

12 pages, 304 KiB  
Article
On the Depth of Decision Trees with Hypotheses
by Mikhail Moshkov
Entropy 2022, 24(1), 116; https://0-doi-org.brum.beds.ac.uk/10.3390/e24010116 - 12 Jan 2022
Cited by 3 | Viewed by 1494
Abstract
In this paper, based on the results of rough set theory, test theory, and exact learning, we investigate decision trees over infinite sets of binary attributes represented as infinite binary information systems. We define the notion of a problem over an information system and study three functions of the Shannon type, which characterize the dependence in the worst case of the minimum depth of a decision tree solving a problem on the number of attributes in the problem description. The considered three functions correspond to (i) decision trees using attributes, (ii) decision trees using hypotheses (an analog of equivalence queries from exact learning), and (iii) decision trees using both attributes and hypotheses. The first function has two possible types of behavior: logarithmic and linear (this result follows from more general results published by the author earlier). The second and the third functions have three possible types of behavior: constant, logarithmic, and linear (these results were published by the author earlier without proofs that are given in the present paper). Based on the obtained results, we divided the set of all infinite binary information systems into four complexity classes. In each class, the type of behavior for each of the considered three functions does not change.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
14 pages, 292 KiB  
Article
Decision Rules Derived from Optimal Decision Trees with Hypotheses
by Mohammad Azad, Igor Chikalov, Shahid Hussain, Mikhail Moshkov and Beata Zielosko
Entropy 2021, 23(12), 1641; https://0-doi-org.brum.beds.ac.uk/10.3390/e23121641 - 07 Dec 2021
Cited by 2 | Viewed by 2356
Abstract
Conventional decision trees use queries each of which is based on one attribute. In this study, we also examine decision trees that handle additional queries based on hypotheses. This kind of query is similar to the equivalence queries considered in exact learning. Earlier, we designed dynamic programming algorithms for the computation of the minimum depth and the minimum number of internal nodes in decision trees that have hypotheses. Modification of these algorithms considered in the present paper permits us to build decision trees with hypotheses that are optimal relative to the depth or relative to the number of the internal nodes. We compare the length and coverage of decision rules extracted from optimal decision trees with hypotheses and decision rules extracted from optimal conventional decision trees to choose the ones that are preferable as a tool for the representation of information. To this end, we conduct computer experiments on various decision tables from the UCI Machine Learning Repository. In addition, we also consider decision tables for randomly generated Boolean functions. The collected results show that the decision rules derived from decision trees with hypotheses in many cases are better than the rules extracted from conventional decision trees.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
12 pages, 720 KiB  
Article
The Feature Description of Formal Context Based on the Relationships among Concept Lattices
by Ting Qian, Yongwei Yang and Xiaoli He
Entropy 2021, 23(11), 1397; https://0-doi-org.brum.beds.ac.uk/10.3390/e23111397 - 24 Oct 2021
Cited by 3 | Viewed by 1543
Abstract
Three-way concept analysis (3WCA) extends formal concept analysis (FCA) by combining it with three-way decision. The three-way object-oriented concept lattice (OEOL) is one of the important data structures that integrates rough sets, concept lattices, and three-way decision in 3WCA. In this paper, we investigate the characteristics of a formal context based on the isomorphic relationships among various kinds of concept lattices and OEOL. Firstly, II-dual intersectable attributes and the II-dual intersectable context are proposed, and the relationship between the type I-dual intersectable context (dual intersectable context) and the type II-dual intersectable context is studied. In addition, the relationships among the kinds of concept lattices and OEOL are studied when the formal context is both a I-dual intersectable context and a II-dual intersectable context. Finally, the inverse problems of the above conclusions are discussed, and the following two conclusions are obtained: (1) the formal context is the type II-dual intersectable context when the object-oriented concept lattice and OEOL are isomorphic; (2) the formal context is the type I-dual intersectable context when the concept lattice and OEOL are anti-isomorphic.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)

12 pages, 314 KiB  
Article
Decompose Boolean Matrices with Correlation Clustering
by László Aszalós
Entropy 2021, 23(7), 852; https://0-doi-org.brum.beds.ac.uk/10.3390/e23070852 - 02 Jul 2021
Cited by 1 | Viewed by 1712
Abstract
One of the tasks of data science is the decomposition of large matrices in order to understand their structures. A special case of this is when we decompose relations, i.e., logical matrices. In this paper, we present a method based on the similarity of rows and columns, which uses correlation clustering to cluster the rows and columns of the matrix, facilitating the visualization of the relation by rearranging the rows and columns. In this article, we compare our method with Gunther Schmidt’s problems and solutions. Our method produces the original solutions by selecting its parameters from a small set. However, with other parameters, it provides solutions with even lower entropy.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
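A minimal sketch of the rearrangement step described in the abstract (the function and data below are assumptions for illustration, not the author’s implementation): once correlation clustering has assigned cluster labels to rows and columns, reordering the logical matrix by those labels exposes its block structure.

```python
# Reorder a Boolean (logical) matrix so that rows and columns with the same
# cluster label become adjacent, making the block structure of the relation
# visible. The cluster labels here are given by hand; in the article they
# come from correlation clustering.

def reorder(matrix, row_labels, col_labels):
    rows = sorted(range(len(matrix)), key=lambda i: row_labels[i])
    cols = sorted(range(len(matrix[0])), key=lambda j: col_labels[j])
    return [[matrix[i][j] for j in cols] for i in rows]

M = [[1, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]
# Rows 0 and 2 behave alike, as do columns 0 and 2.
reorder(M, row_labels=[0, 1, 0], col_labels=[0, 1, 0])
# [[1, 1, 0],
#  [1, 1, 0],
#  [0, 0, 1]]
```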

8 pages, 250 KiB  
Article
Entropy-Based Greedy Algorithm for Decision Trees Using Hypotheses
by Mohammad Azad, Igor Chikalov, Shahid Hussain and Mikhail Moshkov
Entropy 2021, 23(7), 808; https://0-doi-org.brum.beds.ac.uk/10.3390/e23070808 - 25 Jun 2021
Cited by 13 | Viewed by 2196
Abstract
In this paper, we consider decision trees that use both conventional queries based on one attribute each and queries based on hypotheses of values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for the construction of the above decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.
(This article belongs to the Special Issue Rough Set Theory and Entropy in Information Science)
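The greedy step common to entropy-based decision tree construction can be sketched generically (this is not the paper’s algorithm, which also handles hypothesis queries; names and data below are illustrative): at each node, choose the attribute whose branching minimizes the weighted entropy of the decision labels.

```python
# Generic entropy-based greedy attribute selection: pick the attribute whose
# outcome partition has the smallest weighted label entropy.
from math import log2

def entropy(labels):
    """Shannon entropy of a list of decision labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n)
                for c in (labels.count(v) for v in set(labels)))

def split_entropy(rows, labels, attr):
    """Weighted label entropy after branching on attribute index `attr`."""
    n = len(rows)
    total = 0.0
    for value in {r[attr] for r in rows}:
        branch = [l for r, l in zip(rows, labels) if r[attr] == value]
        total += len(branch) / n * entropy(branch)
    return total

def best_attribute(rows, labels):
    """Greedily chosen attribute: the one minimizing the split entropy."""
    return min(range(len(rows[0])), key=lambda a: split_entropy(rows, labels, a))

rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]          # the decision equals the first attribute
best_attribute(rows, labels)   # 0
```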