Supervised and Unsupervised Classification Algorithms

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Evolutionary Algorithms and Machine Learning".

Deadline for manuscript submissions: closed (20 December 2020) | Viewed by 22918

Special Issue Editors


Guest Editor
Department of Economics and Law, University of Cassino and Southern Lazio, 03043 Cassino, Italy
Interests: data science; statistical network analysis; supervised classification

Special Issue Information

Dear Colleagues,

Supervised and unsupervised classification algorithms are the two main branches of machine learning methods. Supervised classification refers to the task of training a system on labeled data divided into classes and assigning new data to those existing classes. The process consists of computing a model from a set of labeled training data and then applying the model to predict the class labels of incoming unlabeled data. It is called supervised learning because the labeled training set supervises the learning process. Supervised learning tasks fall into two categories: classification and regression.
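A minimal sketch of this train-then-predict workflow, using a toy 1-nearest-neighbour rule on made-up 2-D data (both the classifier and the data are illustrative choices, not tied to any paper in this issue):

```python
# Fit on labeled training data, then predict labels for new points.
def fit_predict_1nn(train, labels, query):
    """Assign `query` the label of its nearest training point (squared Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist(train[i], query))
    return labels[best]

# Labeled training set: two classes in a 2-D feature space.
X_train = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
y_train = ["A", "A", "B", "B"]

print(fit_predict_1nn(X_train, y_train, (0.2, 0.1)))  # a point near class A
print(fit_predict_1nn(X_train, y_train, (4.9, 5.1)))  # a point near class B
```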

In unsupervised classification, the data being processed are unlabeled, so, in the absence of prior knowledge, the algorithm searches for similarities in order to generate clusters and assign classes. Unsupervised learning tasks fall into three categories: clustering, density estimation, and dimensionality reduction.
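The clustering case can be sketched the same way: no labels are given, and points are grouped purely by similarity. Below is a tiny k-means with k = 2 and hand-picked initial centres (all choices illustrative):

```python
def kmeans(points, centres, iters=10):
    """Plain Lloyd-style k-means on tuples of floats."""
    for _ in range(iters):
        # Assignment step: each point joins its nearest centre.
        clusters = [[] for _ in centres]
        for p in points:
            j = min(range(len(centres)),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centres[i])))
            clusters[j].append(p)
        # Update step: each centre moves to the mean of its cluster.
        centres = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters

pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centres, clusters = kmeans(pts, centres=[(0, 0), (10, 10)])
print(centres)   # two centres, one per discovered group
```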

Applications range from object detection in biomedical images and disease prediction to natural language understanding and generation.

Submissions are welcome for both traditional classification problems and new applications. Potential topics include, but are not limited to, image classification, data integration, clustering approaches, and feature extraction.

Dr. Mario Rosario Guarracino
Dr. Laura Antonelli
Dr. Pietro Hiram Guzzi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Supervised classification
  • Clustering
  • Network analysis
  • Community extraction
  • Data science
  • Biological knowledge extraction

Published Papers (6 papers)


Editorial


2 pages, 156 KiB  
Editorial
Special Issue on Supervised and Unsupervised Classification Algorithms—Foreword from Guest Editors
by Laura Antonelli and Mario Rosario Guarracino
Algorithms 2023, 16(3), 145; https://0-doi-org.brum.beds.ac.uk/10.3390/a16030145 - 07 Mar 2023
Viewed by 964
Abstract
Supervised and unsupervised classification algorithms are the two main branches of machine learning [...] Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms)

Research


22 pages, 4176 KiB  
Article
A Multinomial DGA Classifier for Incipient Fault Detection in Oil-Impregnated Power Transformers
by George Odongo, Richard Musabe and Damien Hanyurwimfura
Algorithms 2021, 14(4), 128; https://0-doi-org.brum.beds.ac.uk/10.3390/a14040128 - 20 Apr 2021
Cited by 15 | Viewed by 3317
Abstract
This study investigates the use of machine-learning approaches to interpret Dissolved Gas Analysis (DGA) data to find incipient faults early in oil-impregnated transformers. Transformers are critical pieces of equipment in transmitting and distributing electrical energy. The failure of a single unit disturbs a huge number of consumers and suppresses economic activity in the vicinity. Because of this, it is important that power utility companies accord high priority to condition monitoring of critical assets. The analysis of dissolved gases is a technique popularly used for monitoring the condition of transformers dipped in oil. The interpretation of DGA data is, however, inconclusive as far as the determination of incipient faults is concerned, and it depends largely on the expertise of technical personnel. To obtain a coherent, accurate, and clear interpretation of DGA, this study proposes a novel multinomial classification model, christened KosaNet, that is based on decision trees. Actual DGA data with 2912 entries were used to compare the performance of KosaNet against other algorithms with multiclass classification ability, namely the decision tree, k-NN, Random Forest, Naïve Bayes, and Gradient Boost. Investigative results show that KosaNet demonstrated improved DGA classification ability, particularly when classifying multinomial data. Full article
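KosaNet itself is not reproduced here; the general idea of tree-based multinomial fault classification from gas concentrations can be sketched with a toy hand-written tree. The gas thresholds and fault labels below are purely illustrative stand-ins (real practice uses calibrated schemes such as Duval or Rogers ratios):

```python
def classify_fault(h2, ch4, c2h2):
    """Toy decision tree mapping gas levels (ppm) to a fault class.
    Thresholds are invented for illustration, not diagnostic use."""
    if c2h2 > 35:          # elevated acetylene: arcing
        return "arcing"
    if ch4 > 120:          # elevated methane: thermal fault
        return "thermal"
    if h2 > 100:           # hydrogen alone: partial discharge
        return "partial discharge"
    return "normal"

print(classify_fault(h2=50, ch4=40, c2h2=80))   # acetylene branch fires first
```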

17 pages, 2299 KiB  
Article
Feature and Language Selection in Temporal Symbolic Regression for Interpretable Air Quality Modelling
by Estrella Lucena-Sánchez, Guido Sciavicco and Ionel Eduard Stan
Algorithms 2021, 14(3), 76; https://0-doi-org.brum.beds.ac.uk/10.3390/a14030076 - 26 Feb 2021
Cited by 7 | Viewed by 2029
Abstract
Air quality modelling that relates meteorological, car traffic, and pollution data is a fundamental problem, approached in several different ways in the recent literature. In particular, a set of such data sampled at a specific location and during a specific period of time can be seen as a multivariate time series, and modelling the values of the pollutant concentrations can be seen as a multivariate temporal regression problem. In this paper, we propose a new method for symbolic multivariate temporal regression, and we apply it to several data sets that contain real air quality data from the city of Wrocław (Poland). Our experiments show that our approach is superior to classical, especially symbolic, ones, both in statistical performance and in the interpretability of the results. Full article

17 pages, 1111 KiB  
Article
Effects of Nonlinearity and Network Architecture on the Performance of Supervised Neural Networks
by Nalinda Kulathunga, Nishath Rajiv Ranasinghe, Daniel Vrinceanu, Zackary Kinsman, Lei Huang and Yunjiao Wang
Algorithms 2021, 14(2), 51; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020051 - 05 Feb 2021
Cited by 11 | Viewed by 3386
Abstract
The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose nonlinearity. In practice, these functions remarkably enhance model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measure of randomness to quantify the effects of nonlinearity in different architecture shapes on the performance of neural networks. We show that ReLU nonlinearity is a better choice of activation function, mostly when the network has a sufficient number of parameters. However, we found that image classification models with transfer learning seem to perform well with L-ReLU in fully connected layers. We show that the entropy of hidden layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden layer dynamics. Full article
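The two activations the paper compares can be written out directly. The L-ReLU slope of 0.01 below is a typical textbook value, chosen here only for illustration:

```python
def relu(x):
    """ReLU: zeroes negative inputs, discarding their magnitude."""
    return x if x > 0.0 else 0.0

def leaky_relu(x, alpha=0.01):
    """L-ReLU: keeps a scaled-down copy of negative inputs."""
    return x if x > 0.0 else alpha * x

# ReLU loses all information about negative activations; L-ReLU retains some.
print(relu(-2.0), leaky_relu(-2.0))   # 0.0 -0.02
```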

12 pages, 586 KiB  
Article
Fuzzy C-Means Clustering Algorithm with Multiple Fuzzification Coefficients
by Tran Dinh Khang, Nguyen Duc Vuong, Manh-Kien Tran and Michael Fowler
Algorithms 2020, 13(7), 158; https://0-doi-org.brum.beds.ac.uk/10.3390/a13070158 - 30 Jun 2020
Cited by 28 | Viewed by 6733
Abstract
Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods. Full article
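The flavour of the idea can be sketched from the classical FCM membership update, with the single global fuzzifier m replaced by a per-sample coefficient m_k. Note this is only a sketch of the concept: the paper derives its own update formulas and convergence guarantees, which are not reproduced here.

```python
def fcm_memberships(points, centres, m_per_point):
    """One FCM-style membership update on scalar data, with a
    fuzzification coefficient per sample instead of one global m."""
    members = []
    for x, m in zip(points, m_per_point):
        d = [abs(x - c) + 1e-12 for c in centres]   # distances to each centre
        p = 2.0 / (m - 1.0)                          # classical FCM exponent
        row = [1.0 / sum((d[i] / d[j]) ** p for j in range(len(centres)))
               for i in range(len(centres))]         # memberships sum to 1
        members.append(row)
    return members

u = fcm_memberships(points=[0.0, 1.0, 10.0],
                    centres=[0.0, 10.0],
                    m_per_point=[2.0, 2.0, 1.5])
print(u[0])   # the point at 0 belongs almost entirely to the first cluster
```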

11 pages, 2282 KiB  
Article
Time Series Clustering Model based on DTW for Classifying Car Parks
by Taoying Li, Xu Wu and Junhe Zhang
Algorithms 2020, 13(3), 57; https://0-doi-org.brum.beds.ac.uk/10.3390/a13030057 - 02 Mar 2020
Cited by 11 | Viewed by 4582
Abstract
An increasing number of automobiles has led to a serious shortage of parking spaces and a serious imbalance between parking supply and demand. The best way to solve these problems is to achieve reasonable planning and classified management of car parks, guide intelligent parking, and then promote its marketization and industrialization. Therefore, we aim to adopt a clustering method to classify car parks. Owing to the time series characteristics of car park data, a time series clustering framework, including preprocessing, distance measurement, clustering, and evaluation, is first developed for classifying car parks. Then, in view of the randomness of existing clustering models, a new time series clustering model based on dynamic time warping (DTW) is proposed, which comprises distance radius calculation, obtaining the density of the neighbor area, k-centers initialization, and clustering. Finally, some UCR datasets and data from 27 car parks are employed to evaluate the performance of the models, and the results show that the proposed model performs clearly better than clustering models based on Euclidean distance (ED) and traditional clustering models based on DTW. Full article
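The DTW dissimilarity underlying this clustering model is the classical dynamic-programming recurrence; a plain O(n·m) sketch on 1-D series is shown below (the paper's full model adds centre initialization and density steps not reproduced here):

```python
def dtw(a, b):
    """Classical dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    inf = float("inf")
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Warping path may match, insert, or delete a step.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-shifted copy of a series is close under DTW even though a
# point-wise (Euclidean-style) comparison would report a large distance.
s1 = [0, 0, 1, 2, 1, 0]
s2 = [0, 1, 2, 1, 0, 0]
print(dtw(s1, s2))   # 0.0: the warp aligns the shifted peak
```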
