New Insights in Machine Learning and Deep Neural Networks

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (15 March 2023) | Viewed by 34482

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor: Dr. Álvaro Figueira
Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
Interests: data mining; text mining; machine learning; social network analysis; data visualization; e-learning

Guest Editor: Dr. Francesco Renna
Instituto de Telecomunicações, Universidade do Porto, 162400 Porto, Portugal
Interests: biomedical signal processing; biomedical imaging; deep learning

Special Issue Information

Dear Colleagues,

You are cordially invited to submit an original research article or a comprehensive review to this Special Issue on "New Insights in Machine Learning and Deep Neural Networks".

The focus of this Special Issue is primarily on theoretical results and applications of machine learning and/or deep neural network models that aim to describe systems that can better learn and generalize. Deep learning is a rapidly evolving branch of machine learning that poses particular challenges in neural architecture design and parameter tuning. Although such models are increasingly used in complex problems, they are sometimes designed based on general and not-so-well-founded assumptions, which can limit their use in applications. Here, novel models based on nonclassical assumptions are particularly welcome. We are seeking research based on mathematical and algorithmic approaches, as well as statistical and computational methods, with a specific interest in applications related to complex systems and challenging research areas (such as biology and medicine, computer science, economics and finance, social media analysis, marketing, epidemiology, and information theory).

Dr. Álvaro Figueira
Dr. Francesco Renna
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • generative adversarial networks
  • data augmentation
  • object identification and scene classification
  • medical imaging
  • detecting fake news on social media
  • facial expression recognition
  • automatic feature selection
  • text and narrative representation
  • image and video reconstruction
  • prediction analysis

Published Papers (10 papers)


Research


31 pages, 8906 KiB  
Article
Imbalanced Ectopic Beat Classification Using a Low-Memory-Usage CNN LMUEBCNet and Correlation-Based ECG Signal Oversampling
by You-Liang Xie and Che-Wei Lin
Mathematics 2023, 11(8), 1833; https://doi.org/10.3390/math11081833 - 12 Apr 2023
Viewed by 1293
Abstract
Objective: This study presents a low-memory-usage ectopic beat classification convolutional neural network (CNN), LMUEBCNet, and a correlation-based oversampling (Corr-OS) method for ectopic beat data augmentation. Methods: The LMUEBCNet classifier consists of four VGG-based convolution layers and two fully connected layers, taking the continuous wavelet transform (CWT) spectrogram of a QRS complex (0.712 s) segment as input. The Corr-OS method synthesizes a new beat from the top-K most correlated heartbeats pooled across all subjects in order to balance the training set. This study validates the models via 10-fold cross-validation in three scenarios: training/testing with native data (CV1), training/testing with augmented data (CV2), and training with augmented data but testing with native data (CV3). Experiments: The PhysioNet MIT-BIH arrhythmia ECG database was used to verify the proposed algorithm. This database consists of a total of 109,443 heartbeats categorized into five classes according to AAMI EC57: non-ectopic beats (N), supraventricular ectopic beats (S), ventricular ectopic beats (V), fusion of ventricular and normal beats (F), and unknown beats (Q), with 90,586/2781/7236/803/8039 heartbeats, respectively. Three pre-trained CNNs (AlexNet/ResNet18/VGG19) were used to compare the ectopic beat classification performance of the LMUEBCNet. The effectiveness of Corr-OS data augmentation was assessed by comparing performance (1) with and without the Corr-OS method and (2) against the Next-OS data augmentation method, which synthesizes a beat from the next heartbeat of the same subject. Results: The proposed LMUEBCNet achieves a 99.4% classification accuracy under the CV2 and CV3 cross-validation scenarios. Its accuracy is 0.4–0.5% lower than that of AlexNet/ResNet18/VGG19 under the same data augmentation and cross-validation scenario, but it uses only 10% or fewer of their parameters. The proposed Corr-OS method improves ectopic beat classification accuracy by 0.3%. Conclusion: This study developed the LMUEBCNet, which achieves high ectopic beat classification accuracy with efficient parameter usage, and used the Corr-OS method to balance datasets and improve classification performance. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
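The abstract's correlation-based oversampling idea (synthesizing a minority-class beat from its most correlated neighbours) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact procedure; the function name `corr_os_synthetic_beat` and the averaging step are assumptions.

```python
import numpy as np

def corr_os_synthetic_beat(minority_beats, k=3, seed=0):
    """Corr-OS-style oversampling sketch: pick an anchor beat, find its
    top-k most correlated beats in the minority pool, and average them
    into one synthetic beat. (Illustrative only; the paper's exact
    synthesis rule may differ.)"""
    rng = np.random.default_rng(seed)
    beats = np.asarray(minority_beats, dtype=float)
    anchor = beats[rng.integers(len(beats))]
    # Pearson correlation of every beat against the anchor
    corrs = np.array([np.corrcoef(anchor, b)[0, 1] for b in beats])
    top_k = np.argsort(corrs)[::-1][1:k + 1]  # skip the anchor itself
    return beats[top_k].mean(axis=0)
```

A synthetic beat produced this way stays close to real minority-class morphology, which is the stated motivation for Corr-OS over simply duplicating the next heartbeat (Next-OS).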

31 pages, 1839 KiB  
Article
AutoGAN: An Automated Human-Out-of-the-Loop Approach for Training Generative Adversarial Networks
by Ehsan Nazari, Paula Branco and Guy-Vincent Jourdan
Mathematics 2023, 11(4), 977; https://doi.org/10.3390/math11040977 - 14 Feb 2023
Viewed by 1580
Abstract
Generative Adversarial Networks (GANs) have been used for many applications with overwhelming success. The training process of these models is complex, involving a zero-sum game between two neural networks trained in an adversarial manner. Thus, to use GANs, researchers and developers need to answer the question: “Is the GAN sufficiently trained?”. However, understanding when a GAN is well trained for a given problem is a challenging and laborious task that usually requires monitoring the training process and human intervention for assessing the quality of the GAN generated outcomes. Currently, there is no automatic mechanism for determining the required number of epochs that correspond to a well-trained GAN, allowing the training process to be safely stopped. In this paper, we propose AutoGAN, an algorithm that allows one to answer this question in a fully automatic manner with minimal human intervention, being applicable to different data modalities including imagery and tabular data. Through an extensive set of experiments, we show the clear advantage of our solution when compared against alternative methods, for a task where the GAN outputs are used as an oversampling method. Moreover, we show that AutoGAN not only determines a good stopping point for training the GAN, but it also allows one to run fewer training epochs to achieve a similar or better performance with the GAN outputs. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
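The abstract does not spell out AutoGAN's stopping criterion, but the general shape of an automatic "is the GAN sufficiently trained?" loop can be sketched as patience-based early stopping on a scalar quality score of the generated samples. Everything here (`train_with_auto_stop`, the score callback) is an assumed, simplified stand-in for AutoGAN's actual mechanism.

```python
def train_with_auto_stop(train_one_epoch, score_generated,
                         max_epochs=200, patience=10):
    """Generic early-stopping wrapper for GAN training (illustrative only;
    AutoGAN's real criterion is more sophisticated). `train_one_epoch`
    runs one training epoch; `score_generated` returns a scalar quality
    score of the current generator outputs (higher is better)."""
    best_score, best_epoch = float("-inf"), 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch()
        score = score_generated()
        if score > best_score:
            best_score, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break  # no improvement for `patience` epochs: stop training
    return best_epoch, best_score

# Simulated demo: quality peaks at epoch 5, then degrades.
state = {"epoch": 0}
demo_train = lambda: state.__setitem__("epoch", state["epoch"] + 1)
demo_score = lambda: 1.0 - 0.1 * abs(state["epoch"] - 5)
best_epoch, best_score = train_with_auto_stop(demo_train, demo_score, patience=3)
```

The wrapper returns the epoch at which the score peaked, which matches the paper's observation that stopping automatically can mean running fewer epochs for similar or better downstream performance.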

26 pages, 3801 KiB  
Article
Geo-Spatial Mapping of Hate Speech Prediction in Roman Urdu
by Samia Aziz, Muhammad Shahzad Sarfraz, Muhammad Usman, Muhammad Umar Aftab and Hafiz Tayyab Rauf
Mathematics 2023, 11(4), 969; https://doi.org/10.3390/math11040969 - 14 Feb 2023
Cited by 1 | Viewed by 2457
Abstract
Social media has transformed into a crucial channel for political expression. Twitter, especially, is a vital platform used to exchange political hate in Pakistan. Political hate speech affects the public image of politicians, targets their supporters, and hurts public sentiments. Hate speech is a controversial public speech that promotes violence toward a person or group based on specific characteristics. Although studies have been conducted to identify hate speech in European languages, romanized languages have yet to receive much attention. In this research work, we present the automatic detection of political hate speech in Roman Urdu. An exclusive political hate speech labeled dataset (RU-PHS) containing 5002 instances and city-level information has been developed. To overcome the vast lexical structure of Roman Urdu, we propose an algorithm for the lexical unification of Roman Urdu. Three vectorization techniques are employed: TF-IDF, word2vec, and fastText. A comparative analysis of the accuracy and time complexity of conventional machine learning models and fine-tuned neural networks using dense word representations is presented for classifying and predicting political hate speech. The results show that a random forest and the proposed feed-forward neural network achieve an accuracy of 93% using fastText word embeddings to distinguish between neutral and politically offensive speech. The statistical information helps identify trends and patterns, and the hotspot and cluster analysis assists in pinpointing Punjab as a highly susceptible area in Pakistan in terms of political hate tweet generation. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
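One of the paper's baseline pipelines (TF-IDF vectorization feeding a random forest) can be sketched with scikit-learn. The toy English sentences below are stand-ins for the Roman Urdu RU-PHS tweets, which are not reproduced here; labels and texts are invented for illustration.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy stand-in corpus (the actual RU-PHS dataset contains Roman Urdu tweets).
texts = ["great rally today", "vote for change", "they are traitors and liars",
         "shameful corrupt thugs", "peaceful march downtown", "crush those enemies"]
labels = [0, 0, 1, 1, 0, 1]  # 0 = neutral, 1 = politically offensive

# TF-IDF over unigrams and bigrams, then a random forest classifier,
# mirroring one of the conventional ML baselines compared in the paper.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=100, random_state=0))
clf.fit(texts, labels)
pred = clf.predict(["corrupt liars"])
```

Swapping the vectorizer for word2vec or fastText embeddings, as the paper does, changes only the first stage of this pipeline.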

18 pages, 597 KiB  
Article
Imbalanced Multimodal Attention-Based System for Multiclass House Price Prediction
by Yansong Li, Paula Branco and Hanxiang Zhang
Mathematics 2023, 11(1), 113; https://doi.org/10.3390/math11010113 - 27 Dec 2022
Cited by 2 | Viewed by 1777
Abstract
House price prediction is an important problem for individuals, companies, organizations, and governments. With a vast amount of diversified and multimodal data available about houses, the predictive models built should seek to make the best use of these data. This leads to the complex problem of how to effectively use multimodal data for house price prediction. Moreover, this is also a context suffering from class imbalance, an issue that cannot be disregarded. In this paper, we propose a new algorithm for addressing these problems: the imbalanced multimodal attention-based system (IMAS). The IMAS makes use of an oversampling strategy that operates on multimodal data, namely using text, numeric, categorical, and boolean data types. A self-attention mechanism is embedded to leverage the usage of neighboring information that can benefit the model’s performance. Moreover, the self-attention mechanism allows for the determination of the features that are the most relevant and adapts the weights used according to that information when performing inference. Our experimental results show the clear advantage of the IMAS, which outperforms all the competitors tested. The analysis of the weights obtained through the self-attention mechanism provides insights into the features’ relevance and also supports the importance of using this mechanism in the predictive model. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
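The self-attention mechanism the abstract describes, which produces interpretable weights over neighbouring examples and features, is at its core scaled dot-product attention. The sketch below is a generic version over fused multimodal feature vectors; the shapes and projection matrices are illustrative, not IMAS's actual layer.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of feature vectors
    (rows of X). The returned weight matrix is what IMAS-style models
    inspect to judge feature/neighbour relevance. Illustrative sketch."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))        # 4 examples, 8 fused multimodal features
W = [rng.normal(size=(8, 8)) for _ in range(3)]
out, attn = self_attention(X, *W)
```

Because each row of `attn` sums to one, the weights can be read directly as relative importances, which is the interpretability property the paper exploits.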

14 pages, 975 KiB  
Article
A Comprehensive Analysis of Transformer-Deep Neural Network Models in Twitter Disaster Detection
by Vimala Balakrishnan, Zhongliang Shi, Chuan Liang Law, Regine Lim, Lee Leng Teh, Yue Fan and Jeyarani Periasamy
Mathematics 2022, 10(24), 4664; https://doi.org/10.3390/math10244664 - 09 Dec 2022
Cited by 3 | Viewed by 1867
Abstract
Social media platforms such as Twitter are a vital source of information during major events, such as natural disasters. Studies attempting to automatically detect textual communications have mostly focused on machine learning and deep learning algorithms. Recent evidence shows improvement in disaster detection models with the use of contextual word embedding techniques (i.e., transformers) that take the context of a word into consideration, unlike the traditional context-free techniques; however, studies regarding this model are scant. To this end, this paper investigates a selection of ensemble learning models by merging transformers with deep neural network algorithms to assess their performance in detecting informative and non-informative disaster-related Twitter communications. A total of 7613 tweets were used to train and test the models. Results indicate that the ensemble models consistently yield good performance results, with F-score values ranging between 76% and 80%. Simpler transformer variants, such as ELECTRA and Talking-Heads Attention, yielded comparable and superior results compared to the computationally expensive BERT, with F-scores ranging from 80% to 84%, especially when merged with Bi-LSTM. Our findings show that the newer and simpler transformers can be used effectively, with less computational costs, in detecting disaster-related Twitter communications. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
Show Figures

Figure 1

20 pages, 14153 KiB  
Article
Latent-PER: ICA-Latent Code Editing Framework for Portrait Emotion Recognition
by Isack Lee and Seok Bong Yoo
Mathematics 2022, 10(22), 4260; https://doi.org/10.3390/math10224260 - 14 Nov 2022
Cited by 1 | Viewed by 1297
Abstract
Although real-image emotion recognition has been developed in several studies, an acceptable accuracy level has not been achieved in portrait drawings. This paper proposes a portrait emotion recognition framework based on independent component analysis (ICA) and latent codes to overcome the performance degradation problem in drawings. This framework employs latent code extracted through a generative adversarial network (GAN)-based encoder. It learns independently from factors that interfere with expression recognition, such as color, small occlusion, and various face angles. It is robust against environmental factors since it filters latent code by adding an emotion-relevant code extractor to extract only information related to facial expressions from the latent code. In addition, an image is generated by changing the latent code to the direction of the eigenvector for each emotion obtained through the ICA method. Since only the position of the latent code related to the facial expression is changed, there is little external change and the expression changes in the desired direction. This technique is helpful for qualitative and quantitative emotional recognition learning. The experimental results reveal that the proposed model performs better than the existing models, and the latent editing used in this process suggests a novel manipulation method through ICA. Moreover, the proposed framework can be applied for various portrait emotion applications from recognition to manipulation, such as automation of emotional subtitle production for the visually impaired, understanding the emotions of objects in famous classic artwork, and animation production assistance. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
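The latent-editing step described in the abstract, moving a latent code along an ICA-derived direction so that only one factor (e.g., expression) changes, can be sketched with scikit-learn's FastICA. The latent codes below are random stand-ins; in the paper they come from a GAN-based encoder, and the chosen component would be one identified as emotion-relevant.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in latent codes (heavy-tailed so ICA has non-Gaussian structure
# to find); in Latent-PER these would be GAN-encoder outputs.
rng = np.random.default_rng(0)
latents = rng.laplace(size=(200, 16))

ica = FastICA(n_components=4, random_state=0)
ica.fit(latents)
directions = ica.components_          # rows: independent directions

code = latents[0]
alpha = 2.0                           # edit strength (assumed)
unit = directions[0] / np.linalg.norm(directions[0])
edited = code + alpha * unit          # move along one independent direction
```

Feeding `edited` back through the GAN generator would then change the image along a single factor while leaving the rest of the portrait largely intact, which is the qualitative behaviour the paper reports.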

18 pages, 831 KiB  
Article
Semi-Supervised Approach for EGFR Mutation Prediction on CT Images
by Cláudia Pinheiro, Francisco Silva, Tania Pereira and Hélder P. Oliveira
Mathematics 2022, 10(22), 4225; https://doi.org/10.3390/math10224225 - 12 Nov 2022
Cited by 1 | Viewed by 1274
Abstract
The use of deep learning methods in medical imaging has been able to deliver promising results; however, the success of such models highly relies on large, properly annotated datasets. The annotation of medical images is a laborious, expensive, and time-consuming process. This difficulty is increased for the mutation status label, since it requires additional exams (usually biopsies) to be obtained. On the other hand, raw images, without annotations, are extensively collected as part of the clinical routine. This work investigated methods that could mitigate the labelled-data scarcity problem by using both labelled and unlabelled data to improve the efficiency of predictive models. A semi-supervised learning (SSL) approach was developed to predict epidermal growth factor receptor (EGFR) mutation status in lung cancer in a less invasive manner using 3D CT scans. The proposed approach combines a variational autoencoder (VAE) with adversarial training, with the intent that the features extracted from unlabelled data to discriminate images can help in the classification task. To incorporate labelled and unlabelled images, adversarial training was used, extending a traditional variational autoencoder. With the developed method, a mean AUC of 0.701 was achieved with the best-performing model, with only 14% of the training data being labelled. This SSL approach improved the discrimination ability by nearly 7 percentage points over a fully supervised model developed with the same amount of labelled data, confirming the advantage of using such methods when few annotated examples are available. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)

20 pages, 6385 KiB  
Article
Camouflage Object Segmentation Using an Optimized Deep-Learning Approach
by Muhammad Kamran, Saeed Ur Rehman, Talha Meraj, Khalid A. Alnowibet and Hafiz Tayyab Rauf
Mathematics 2022, 10(22), 4219; https://doi.org/10.3390/math10224219 - 11 Nov 2022
Cited by 3 | Viewed by 2251
Abstract
Camouflage objects hide information physically based on the feature matching of the texture or boundary line within the background. Texture matching and similarities between camouflage objects and their surroundings make them difficult to distinguish from generic and salient objects, making camouflage object detection (COD) more challenging. Existing techniques perform well; however, the challenging nature of camouflage objects demands more accuracy in detection and segmentation. To address this challenge, an optimized modular framework for COD tasks, named Optimize Global Refinement (OGR), is presented. This framework comprises a parallelism approach in feature extraction to enhance the learned parameters, and globally refined feature maps for the abstraction of all intuitive feature sets at the outcome of each extraction block. Additionally, an optimized local best feature node-based rule is proposed to reduce the complexity of the model. OGR was evaluated against baselines on publicly available benchmarks, achieving state-of-the-art structural similarity of 94%, 93%, and 96% on the Kvasir-SEG, COD10K, and Camouflaged Object (CAMO) datasets, respectively. OGR is general and can be integrated into real-time applications in future development. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)

23 pages, 2104 KiB  
Article
Deep Reinforcement Learning for Crowdshipping Last-Mile Delivery with Endogenous Uncertainty
by Marco Silva and João Pedro Pedroso
Mathematics 2022, 10(20), 3902; https://doi.org/10.3390/math10203902 - 20 Oct 2022
Cited by 3 | Viewed by 2203
Abstract
In this work, we study a flexible compensation scheme for last-mile delivery where a company outsources part of the activity of delivering products to its customers to occasional drivers (ODs), under a scheme named crowdshipping. All deliveries are completed at the minimum total cost, comprising the cost incurred with the company's own vehicles and drivers plus the compensation paid to the ODs. The company decides on the best compensation scheme to offer to the ODs at the planning stage. We model our problem based on a stochastic and dynamic environment where delivery orders and ODs volunteering to make deliveries present themselves randomly within fixed time windows. The uncertainty is endogenous in the sense that the compensation paid to ODs influences their availability. We develop a deep reinforcement learning (DRL) algorithm that can deal with large instances while focusing on the quality of the solution: we combine the combinatorial structure of the action space with the neural network of the approximated value function, involving techniques from machine learning and integer optimization. Out-of-sample results show the effectiveness of the DRL approach and its suitability for processing large samples of uncertain data, which induces better solutions. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
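The value-function learning at the heart of the DRL approach can be illustrated, in a drastically simplified form, by a tabular temporal-difference update on a toy chain MDP. The paper's method uses a neural approximated value function over a combinatorial action space; the toy problem, state/action sizes, and learning rates below are all illustrative assumptions.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative only; the paper couples
# a combinatorial action space with a neural value-function approximation).
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def q_update(Q, s, a, r, s_next):
    """One temporal-difference step toward r + gamma * max_a' Q(s', a')."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

# Random-walk experience on a toy chain: reward for reaching the last state.
for _ in range(500):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = min(s + 1, n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    Q = q_update(Q, s, a, r, s_next)
```

Replacing the table `Q` with a neural network, and the enumeration of actions with integer optimization over the combinatorial action set, is the step the paper takes to handle large instances.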

Review


41 pages, 23839 KiB  
Review
Survey on Synthetic Data Generation, Evaluation Methods and GANs
by Alvaro Figueira and Bruno Vaz
Mathematics 2022, 10(15), 2733; https://doi.org/10.3390/math10152733 - 02 Aug 2022
Cited by 48 | Viewed by 16896
Abstract
Synthetic data consists of artificially generated data. When data are scarce, or of poor quality, synthetic data can be used, for example, to improve the performance of machine learning models. Generative adversarial networks (GANs) are state-of-the-art deep generative models that can generate novel synthetic samples that follow the underlying data distribution of the original dataset. Reviews on synthetic data generation and on GANs have already been written. However, to the best of our knowledge, none in the relevant literature has explicitly combined these two topics. This survey aims to fill this gap and provide useful material to new researchers in this field: a survey that combines synthetic data generation and GANs and that can act as a strong starting point, giving newcomers a general overview of the key contributions and useful references. We conducted a review of the state of the art by querying four major databases: Web of Science (WoS), Scopus, IEEE Xplore, and the ACM Digital Library. This allowed us to gain insights into the most relevant authors, the most relevant scientific journals in the area, the most cited papers, the most significant research areas, the most important institutions, and the most relevant GAN architectures. GANs are thoroughly reviewed, along with their most common training problems and their most important breakthroughs, with a focus on GAN architectures for tabular data. Further, the main algorithms for generating synthetic data and their applications are presented, together with our thoughts on these methods. Finally, we review the main techniques for evaluating the quality of synthetic data (especially tabular data) and provide a schematic overview of the information presented in this paper. Full article
(This article belongs to the Special Issue New Insights in Machine Learning and Deep Neural Networks)
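One family of synthetic-data evaluation techniques the survey covers compares each column's marginal distribution in the real and synthetic tables. A minimal sketch, using a two-sample Kolmogorov-Smirnov statistic per column; the helper name `marginal_ks` and the Gaussian toy data are illustrative assumptions, not a method from the survey itself.

```python
import numpy as np
from scipy.stats import ks_2samp

def marginal_ks(real, synth):
    """Mean two-sample KS statistic across columns; lower values mean the
    synthetic marginals are closer to the real ones (sketch only)."""
    return float(np.mean([ks_2samp(real[:, j], synth[:, j]).statistic
                          for j in range(real.shape[1])]))

# Toy tables: a faithful synthetic sample vs. a badly shifted one.
rng = np.random.default_rng(0)
real = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))
good_synth = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))   # same distribution
bad_synth = rng.normal(loc=3.0, scale=1.0, size=(1000, 3))    # shifted marginals
```

Marginal tests like this catch per-column distribution mismatch but not broken cross-column dependencies, which is why the survey also discusses joint and model-based evaluation metrics.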
