

Entropy in Real-World Datasets and Its Impact on Machine Learning II

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Multidisciplinary Applications".

Deadline for manuscript submissions: 15 October 2024

Special Issue Editors


Guest Editor
1. Department of Knowledge Engineering, University of Economics, 1 Maja 50, 40-287 Katowice, Poland
2. Łukasiewicz Research Network - Institute of Innovative Technologies EMAG, 40-189 Katowice, Poland
Interests: machine learning; ensemble methods; decision trees; ant colony optimization; computational intelligence; data analysis; optimization

Guest Editor
Łukasiewicz Research Network - Institute of Innovative Technologies EMAG, 40-189 Katowice, Poland
Interests: cyber security; artificial intelligence; data security; automation; electric power engineering

Guest Editor
Department of Machine Learning, University of Economics, 1 Maja 50, 40-287 Katowice, Poland
Interests: machine learning; natural language processing; social networks; artificial intelligence; fake news detection

Special Issue Information

Dear Colleagues,

Today, data science and machine learning remain pivotal tools for solving intricate real-world challenges. Their utility spans domains including medicine, finance, text mining, and image analysis. At the same time, the volume of user-accessible data continues to grow, and concepts such as big data and data streams are gaining ever-increasing recognition. Traditional classification methods, however, may prove ineffective on such complex data, necessitating continuous advances in machine learning methodologies.

The second edition of this Special Issue centers on the intricacies of real-world data and the impact of entropy on machine learning algorithms. Our particular focus is on novel classification algorithms that harness data science to model and process diverse real-world datasets effectively.

We invite researchers to submit original work showcasing innovative approaches to the classification and analysis of real-world datasets, with particular emphasis on entropy and its influence on the effectiveness of machine learning. We aim to provide a collaborative platform for exchanging knowledge and experience on cutting-edge developments in data science and data classification.

Prof. Dr. Jan Kozak
Dr. Artur Kozłowski
Dr. Barbara Probierz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • real-world datasets
  • data science
  • machine learning algorithms
  • optimization
  • classification
  • prediction methods
  • entropy in big data


Published Papers (1 paper)


Research

21 pages, 5009 KiB  
Article
A Novel Classification Method: Neighborhood-Based Positive Unlabeled Learning Using Decision Tree (NPULUD)
by Bita Ghasemkhani, Kadriye Filiz Balbal, Kokten Ulas Birant and Derya Birant
Entropy 2024, 26(5), 403; https://doi.org/10.3390/e26050403 - 4 May 2024
Abstract
In a standard binary supervised classification task, the existence of both negative and positive samples in the training dataset is required to construct a classification model. However, this condition is not met in certain applications where only one class of samples is obtainable. To overcome this problem, a different classification method, which learns from positive and unlabeled (PU) data, must be employed. In this study, a novel method is presented: neighborhood-based positive unlabeled learning using decision tree (NPULUD). First, NPULUD uses the nearest-neighborhood approach for the PU strategy, and then it employs a decision tree algorithm for the classification task by utilizing the entropy measure. Entropy plays a pivotal role in assessing the level of uncertainty in the training dataset as the decision tree is developed for classification. Through experiments, we validated our method on 24 real-world datasets. The proposed method attained an average accuracy of 87.24%, while the traditional supervised learning approach obtained an average accuracy of 83.99% on the same datasets. We also demonstrated that our method achieves a statistically significant improvement (7.74% on average) over state-of-the-art peers.
(This article belongs to the Special Issue Entropy in Real-World Datasets and Its Impact on Machine Learning II)
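The two-stage idea described in the abstract (pseudo-label unlabeled points by their proximity to known positives, then train an entropy-based decision tree) can be sketched as follows. This is a minimal illustration of the general PU-learning pattern, not the authors' exact NPULUD algorithm; the synthetic data, the k value, and the median-distance threshold are all assumptions made for the example.

```python
# Sketch of PU learning: pseudo-label unlabeled points via nearest
# neighbours to the positive set, then fit an entropy-criterion tree.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic data: positives cluster near (2, 2), negatives near (-2, -2).
pos = rng.normal(loc=2.0, scale=1.0, size=(40, 2))      # known positives
unlabeled = np.vstack([
    rng.normal(loc=2.0, scale=1.0, size=(30, 2)),       # hidden positives
    rng.normal(loc=-2.0, scale=1.0, size=(30, 2)),      # hidden negatives
])

# PU step: a point is pseudo-labelled positive if its mean distance to its
# k nearest known positives falls below the median of those mean distances.
k = 5
nn = NearestNeighbors(n_neighbors=k).fit(pos)
dist, _ = nn.kneighbors(unlabeled)
pseudo_labels = (dist.mean(axis=1) < np.median(dist.mean(axis=1))).astype(int)

# Supervised step: decision tree using entropy as the split criterion.
X = np.vstack([pos, unlabeled])
y = np.concatenate([np.ones(len(pos), dtype=int), pseudo_labels])
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

print(tree.predict([[2.5, 2.5], [-2.5, -2.5]]))
```

The entropy criterion here is what ties the tree-building step to the theme of this Special Issue: each split is chosen to maximally reduce the uncertainty of the class distribution in the resulting nodes.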
