Algorithms, Volume 17, Issue 6 (June 2024) – 20 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 3410 KiB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060240 (registering DOI) - 3 Jun 2024
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy, achieving 81.78% with our method compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms both baselines in terms of macro-average F1-score and average area under the curve (AUC) while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
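To make the benchmark concrete, a minimal Python sketch of the sparse-coding idea is given below; it is illustrative only (not the authors' code), and the synthetic data, the number of dictionary atoms, and the SVM classifier are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.rand(300, 60)            # placeholder windowed kinematic features per grasp
y = np.random.randint(0, 4, 300)       # placeholder grasp-type labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Learn a dictionary from training data; the sparse codes become the classifier features.
dl = DictionaryLearning(n_components=20, alpha=1.0, max_iter=200,
                        transform_algorithm="lasso_lars", random_state=0)
Z_tr = dl.fit_transform(X_tr)
Z_te = dl.transform(X_te)
acc_sparse = SVC().fit(Z_tr, y_tr).score(Z_te, y_te)

# PCA baseline with the same number of components, mirroring the comparison in the abstract.
pca = PCA(n_components=20).fit(X_tr)
acc_pca = SVC().fit(pca.transform(X_tr), y_tr).score(pca.transform(X_te), y_te)
print(acc_sparse, acc_pca)
```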

14 pages, 1059 KiB  
Article
Linear System Identification-Oriented Optimal Tampering Attack Strategy and Implementation Based on Information Entropy with Multiple Binary Observations
by Zhongwei Bai, Peng Yu, Yan Liu and Jin Guo
Algorithms 2024, 17(6), 239; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060239 (registering DOI) - 3 Jun 2024
Abstract
With the rapid development of computer, communication, and control technology, cyber-physical systems (CPSs) have been widely deployed. However, the massive information interactions in CPSs increase the amount of data transmitted over the network, and if this data communication is attacked, the security and stability of the system are seriously affected. In this paper, for data tampering attacks on a linear system with multiple binary observations, and in the case where the defender's estimation algorithm is unknown, an optimization index is constructed from the attacker's point of view based on information entropy, and the problem is modeled accordingly. For the resulting multi-parameter optimization problem with energy constraints, particle swarm optimization (PSO) is used to obtain the optimal set of data tampering attacks, and an estimation method is given for the unknown parameters. To improve the real-time performance of the online implementation, a BP neural network is designed. Finally, the validity of the conclusions is verified through numerical simulation. The results show that the attacker can construct effective metrics based on information entropy without knowledge of the defender's discrimination algorithm, and that the optimal attack strategy implementation based on PSO and the BP network is also effective. Full article
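The entropy-based attack index itself is not reproduced here, but the particle swarm optimization step it relies on follows the standard pattern sketched below in Python; the objective function, bounds, and hyperparameters are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Placeholder standing in for the paper's information-entropy-based attack index
    # with an energy constraint; the real index is not reproduced here.
    return np.sum((x - 0.3) ** 2) + 0.5 * abs(np.sum(x) - 1.0)

dim, n_particles, iters = 5, 40, 200
pos = rng.uniform(-1.0, 1.0, (n_particles, dim))        # candidate tampering parameters
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)]

w, c1, c2 = 0.7, 1.5, 1.5                               # inertia and acceleration coefficients
for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -1.0, 1.0)                 # keep particles within the allowed range
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]
print("best attack parameters:", gbest)
```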

19 pages, 1087 KiB  
Article
Simple Histogram Equalization Technique Improves Performance of VGG Models on Facial Emotion Recognition Datasets
by Jaher Hassan Chowdhury, Qian Liu and Sheela Ramanna
Algorithms 2024, 17(6), 238; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060238 (registering DOI) - 3 Jun 2024
Abstract
Facial emotion recognition (FER) is crucial across psychology, neuroscience, computer vision, and machine learning due to the diversified and subjective nature of emotions, which vary considerably across individuals, cultures, and contexts. This study explored FER through convolutional neural networks (CNNs) and Histogram Equalization techniques. It investigated the impact of histogram equalization, data augmentation, and various model optimization strategies on FER accuracy across datasets such as KDEF, CK+, and FER2013. Using pre-trained VGG architectures, such as VGG19 and VGG16, this study also examined the effectiveness of fine-tuning hyperparameters and implementing different learning rate schedulers. The evaluation encompassed diverse metrics, including accuracy, Area Under the Receiver Operating Characteristic Curve (AUC-ROC), Area Under the Precision–Recall Curve (AUC-PRC), and Weighted F1 score. Notably, the fine-tuned VGG architecture demonstrated state-of-the-art performance compared to conventional transfer learning models, achieving accuracies of 100%, 95.92%, and 69.65% on the CK+, KDEF, and FER2013 datasets, respectively. Full article
(This article belongs to the Special Issue Algorithms for Image Processing and Machine Vision)
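The preprocessing step named in the title is standard global histogram equalization; a minimal NumPy sketch (illustrative only, applied to a random stand-in image) is shown below.

```python
import numpy as np

def hist_equalize(gray):
    """Classic global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = (cdf - cdf_min) / max(gray.size - cdf_min, 1) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[gray]

face = (np.random.rand(48, 48) * 255).astype(np.uint8)   # stand-in for a FER2013-sized image
equalized = hist_equalize(face)
# For VGG16/VGG19 transfer learning, the equalized image would then be resized to
# 224x224 and replicated across three channels before being fed to the network.
```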

14 pages, 341 KiB  
Article
Competitive Analysis of Algorithms for an Online Distribution Problem
by Alessandro Barba, Luca Bertazzi and Bruce L. Golden
Algorithms 2024, 17(6), 237; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060237 - 3 Jun 2024
Abstract
We study an online distribution problem in which a producer has to send a load from an origin to a destination. At each time period before the deadline, they ask for transportation price quotes and must decide whether to accept the minimum offered price. If this price is not accepted, they have to pay a penalty cost, which may be the cost of asking for new quotes, the penalty cost for a late delivery, or the inventory cost of storing the load for a certain duration. The aim is to minimize the sum of the transportation and the penalty costs. This problem has interesting real-world applications, given that transportation quotes can nowadays be obtained from professional websites. We show that the classical online algorithm used to solve the well-known Secretary problem is not able to provide, on average, effective solutions to our problem, given the trade-off between the transportation and the penalty costs. Therefore, we design two classes of online algorithms. The first class is based on a given time of acceptance, while the second is based on a given threshold price. We formally prove the competitive ratio of each algorithm, i.e., the worst-case performance of the online algorithm with respect to the optimal solution of the offline problem, in which all transportation prices are known at the beginning rather than being revealed over time. The computational results show the algorithms' performance on average and in the worst case when the transportation prices are generated on the basis of given probability distributions. Full article
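The second class of algorithms described above accepts the first quote at or below a given threshold price; the small Python simulation below compares such a policy against the clairvoyant offline optimum. The threshold, penalty, and price distribution are illustrative, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_policy(prices, threshold, penalty):
    """Accept the first quote at or below the threshold; acceptance is forced at the deadline."""
    total_penalty = 0.0
    for t, p in enumerate(prices):
        if p <= threshold or t == len(prices) - 1:
            return p + total_penalty
        total_penalty += penalty            # rejecting a quote costs one penalty per period

prices = rng.uniform(50, 150, size=10)      # quotes revealed over 10 periods before the deadline
penalty = 5.0
cost_online = threshold_policy(prices, threshold=80.0, penalty=penalty)
cost_offline = min(p + t * penalty for t, p in enumerate(prices))   # all prices known in advance
print("empirical ratio:", cost_online / cost_offline)               # the competitive ratio bounds the worst case
```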

24 pages, 1023 KiB  
Article
Hybrid Machine Learning Algorithms to Evaluate Prostate Cancer
by Dimitrios Morakis and Adam Adamopoulos
Algorithms 2024, 17(6), 236; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060236 - 2 Jun 2024
Abstract
The adequacy and efficacy of simple and hybrid machine learning and Computational Intelligence algorithms were evaluated for the classification of potential prostate cancer (PCa) patients into two distinct categories: the high-risk and the low-risk group for PCa. The evaluation is based on randomly generated surrogate data for the biomarker PSA, considering that reported epidemiological data indicate that PSA values follow a lognormal distribution. In addition, four more biomarkers were considered, namely PSAD (PSA density), PSAV (PSA velocity), PSA ratio, and Digital Rectal Exam evaluation (DRE), as well as patient age. Seven simple classification algorithms, namely Decision Trees, Random Forests, Support Vector Machines, K-Nearest Neighbors, Logistic Regression, Naïve Bayes, and Artificial Neural Networks, were evaluated in terms of classification accuracy. In addition, three hybrid algorithms were developed and introduced in the present work, in which Genetic Algorithms were utilized as a metaheuristic search technique to optimize the training set, minimizing its size while retaining optimal classification accuracy, in combination with K-Nearest Neighbors, a K-means clustering algorithm, and a genetic clustering algorithm. Results indicated that prostate cancer cases can be classified with high accuracy even with small training sets, with sizes that can be smaller than 30% of the dataset. Numerous computer experiments indicated that the proposed training set minimization does not cause overfitting of the hybrid algorithms. Finally, an easy-to-use Graphical User Interface (GUI) was implemented, incorporating all the evaluated algorithms and the decision-making procedure. Full article
(This article belongs to the Special Issue Hybrid Intelligent Algorithms)
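A minimal sketch of the hybrid idea, a Genetic Algorithm searching for a small training subset that still lets K-Nearest Neighbors classify well, is given below; the synthetic data, fitness weighting, and GA settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=6, random_state=0)  # stand-in for PSA, PSAD, PSAV, ...
X_pool, X_val, y_pool, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    if mask.sum() < 5:                                   # need enough samples for 5-NN
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5).fit(X_pool[mask], y_pool[mask])
    return knn.score(X_val, y_val) - 0.1 * mask.mean()   # reward accuracy, penalize subset size

pop = rng.random((30, len(X_pool))) < 0.3                # 30 random subset masks (~30% of the pool)
for _ in range(40):                                      # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]         # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, len(X_pool))
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        flip = rng.random(len(child)) < 0.01             # bit-flip mutation
        children.append(np.where(flip, ~child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected training fraction:", best.mean())
```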
25 pages, 3788 KiB  
Article
A Comprehensive Exploration of Unsupervised Classification in Spike Sorting: A Case Study on Macaque Monkey and Human Pancreatic Signals
by Francisco Javier Iñiguez-Lomeli, Edgar Eliseo Franco-Ortiz, Ana Maria Silvia Gonzalez-Acosta, Andres Amador Garcia-Granada and Horacio Rostro-Gonzalez
Algorithms 2024, 17(6), 235; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060235 - 30 May 2024
Abstract
Spike sorting, an indispensable process in the analysis of neural biosignals, aims to segregate individual action potentials from mixed recordings. This study delves into a comprehensive investigation of diverse unsupervised classification algorithms, some of which, to the best of our knowledge, have not previously been used for spike sorting. The methods encompass Principal Component Analysis (PCA), K-means, Self-Organizing Maps (SOMs), and hierarchical clustering. The research draws insights from both macaque monkey and human pancreatic signals, providing a holistic evaluation across species. Our research has focused on the utilization of the aforementioned methods for the sorting of 327 detected spikes within an in vivo signal of a macaque monkey, as well as 386 detected spikes within an in vitro signal of a human pancreas. This classification process was carried out by extracting statistical features from these spikes. We initiated our analysis with K-means, employing both unmodified and normalized versions of the features. To enhance the performance of this algorithm, we also employed Principal Component Analysis (PCA) to reduce the dimensionality of the data, thereby leading to more distinct groupings as identified by the K-means algorithm. Furthermore, two additional techniques, namely hierarchical clustering and Self-Organizing Maps, have also undergone exploration and have demonstrated favorable outcomes for both signal types. Across all scenarios, a consistent observation emerged: the identification of six distinctive groups of spikes, each characterized by distinct shapes, within both signal sets. In this regard, we meticulously present and thoroughly analyze the experimental outcomes yielded by each of the employed algorithms. This comprehensive presentation and discussion encapsulate the nuances, patterns, and insights uncovered by these algorithms across our data. By delving into the specifics of these results, we aim to provide a nuanced understanding of the efficacy and performance of each algorithm in the context of spike sorting. Full article
(This article belongs to the Special Issue Supervised and Unsupervised Classification Algorithms (2nd Edition))
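A condensed version of the pipeline described above (statistical features, normalization, PCA, then K-means with six clusters) might look as follows in Python; the spike matrix and the particular statistical features are placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

spikes = np.random.randn(327, 48)            # placeholder: 327 detected spikes x 48 waveform samples

feats = np.column_stack([spikes.max(axis=1), spikes.min(axis=1),
                         np.ptp(spikes, axis=1), spikes.std(axis=1)])   # simple statistical features
feats = StandardScaler().fit_transform(feats)                           # normalized version of the features
scores = PCA(n_components=3).fit_transform(feats)                       # dimensionality reduction before clustering
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                                              # six groups of spikes, as in the study
```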

29 pages, 767 KiB  
Article
Unleashing the Power of Tweets and News in Stock-Price Prediction Using Machine-Learning Techniques
by Hossein Zolfagharinia, Mehdi Najafi, Shamir Rizvi and Aida Haghighi
Algorithms 2024, 17(6), 234; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060234 - 28 May 2024
Viewed by 240
Abstract
Price prediction tools play a significant role in small investors’ behavior. As such, this study aims to propose a method to more effectively predict stock prices in North America. Chiefly, the study addresses crucial questions related to the relevance of news and tweets in stock-price prediction and highlights the potential value of considering such parameters in algorithmic trading strategies, particularly during times of market panic. To this end, we develop innovative multi-layer perceptron (MLP) and long short-term memory (LSTM) neural networks to investigate the influence of Twitter count (TC) and news count (NC) variables on stock-price prediction under both normal and market-panic conditions. To capture the impact of these variables, we integrate technical variables with TC and NC and evaluate the prediction accuracy across different model types. We use Bloomberg Twitter count and news publication count variables in North American stock-price prediction and integrate them into MLP and LSTM neural networks to evaluate their impact during the market-panic period. The results showcase improved prediction accuracy, promising significant benefits for traders and investors. This strategic integration reflects a nuanced understanding of the market sentiment derived from public opinion on platforms like Twitter. Full article
(This article belongs to the Special Issue Recent Advances in Algorithms for Swarm Systems)
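As a rough illustration (not the authors' model), an LSTM regressor that appends Twitter-count and news-count series to technical indicators could be set up in Keras as follows; the window length, feature count, data, and layer sizes are placeholder assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder windows: 20 trading days, 7 technical indicators + Twitter count + news count = 9 features.
X = np.random.rand(500, 20, 9).astype("float32")
y = np.random.rand(500, 1).astype("float32")     # next-day (scaled) closing price

model = tf.keras.Sequential([
    layers.Input(shape=(20, 9)),
    layers.LSTM(64),
    layers.Dropout(0.2),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```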

16 pages, 4902 KiB  
Article
Data-Driven Load Frequency Control for Multi-Area Power System Based on Switching Method under Cyber Attacks
by Guangqiang Tian and Fuzhong Wang
Algorithms 2024, 17(6), 233; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060233 - 27 May 2024
Viewed by 270
Abstract
This paper introduces an innovative method for load frequency control (LFC) in multi-area interconnected power systems vulnerable to denial-of-service (DoS) attacks. The system is modeled as a switching system with two subsystems, and an adaptive control algorithm is developed. Initially, a dynamic linear data model is used to model each subsystem. Next, a model-free adaptive control strategy is introduced to maintain frequency stability in the multi-area interconnected power system, even during DoS attacks. A rigorous stability analysis of the power system is performed, and the effectiveness of the proposed approach is demonstrated by applying it to a three-area interconnected power system. Full article
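The abstract does not spell out the controller equations, so the Python sketch below shows a generic compact-form model-free adaptive control loop (pseudo-partial-derivative estimation plus a data-driven control update) on a toy single-input plant; it omits the paper's multi-area coupling and DoS-driven switching logic and is only one illustration of the data-driven idea.

```python
import numpy as np

# Toy unknown single-input plant standing in for one control area (illustrative only).
def plant(y_prev, u_prev):
    return 0.6 * y_prev + 0.4 * np.tanh(u_prev)

T, y_ref = 200, 0.5
eta, mu, rho, lam, eps = 0.8, 1.0, 0.6, 1.0, 1e-4       # illustrative tuning parameters
y, u = np.zeros(T), np.zeros(T)
phi = np.full(T, 0.5)                                   # pseudo-partial-derivative (PPD) estimate

for k in range(1, T - 1):
    y[k] = plant(y[k - 1], u[k - 1])
    du = u[k - 1] - u[k - 2] if k >= 2 else 0.0
    dy = y[k] - y[k - 1]
    # PPD update uses only measured input/output increments, no plant model.
    phi[k] = phi[k - 1] + eta * du / (mu + du ** 2) * (dy - phi[k - 1] * du)
    if abs(phi[k]) < eps or np.sign(phi[k]) != np.sign(phi[0]):
        phi[k] = phi[0]                                 # usual reset condition
    # Data-driven control update toward the set-point reference.
    u[k] = u[k - 1] + rho * phi[k] / (lam + phi[k] ** 2) * (y_ref - y[k])
```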

24 pages, 3149 KiB  
Article
A Multi-Process System for Investigating Inclusive Design in User Interfaces for Low-Income Countries
by Yann Méhat, Sylvain Sagot, Egon Ostrosi and Dominique Deuff
Algorithms 2024, 17(6), 232; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060232 - 27 May 2024
Viewed by 298
Abstract
Limited understanding exists regarding the methodologies behind designing interfaces for low-income contexts, despite their acknowledged potential value. The ERSA (Engineering design Research meta-model based Systematic Analysis) process, defined as a dynamic interactive multi-process system, proposes a new approach to building the knowledge needed to design interfaces for low-income countries successfully. ERSA is developed by integrating database searches, snowballing, thematic similarity searches for building a corpus of literature, multilayer networks, clustering algorithms, and data processing. ERSA employs an engineering design meta-model to analyze the corpus of literature, facilitating the identification of diverse methodological approaches. The insights from ERSA empower researchers, designers, and engineers to tailor design methodologies to their specific low-income contexts. Our findings show the importance of adopting more versatile and holistic approaches. They suggest that user-based design methodologies and computational design can be defined and theorized together. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)

21 pages, 3686 KiB  
Article
Prediction of Customer Churn Behavior in the Telecommunication Industry Using Machine Learning Models
by Victor Chang, Karl Hall, Qianwen Ariel Xu, Folakemi Ololade Amao, Meghana Ashok Ganatra and Vladlena Benson
Algorithms 2024, 17(6), 231; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060231 - 27 May 2024
Viewed by 300
Abstract
Customer churn is a significant concern, and the telecommunications industry has the largest annual churn rate of any major industry at over 30%. This study examines the use of ensemble learning models to analyze and forecast customer churn in the telecommunications business. Accurate churn forecasting is essential for successful client retention initiatives to combat regular customer churn. We used innovative and improved machine learning methods, including Decision Trees, Boosted Trees, and Random Forests, to enhance model interpretability and prediction accuracy. The models were trained and evaluated systematically by using a large dataset. The Random Forest model performed best, with 91.66% predictive accuracy, 82.2% precision, and 81.8% recall. Our results highlight how well the model can identify possible churners with the help of explainable AI (XAI) techniques, allowing for focused and timely intervention strategies. To improve the transparency of the decisions made by the classifier, this study also employs explainable artificial intelligence methods such as LIME and SHAP to illustrate the results of the customer churn prediction model. Our results demonstrate how crucial it is for customer relationship managers to implement strong analytical tools to reduce attrition and promote long-term economic viability in fiercely competitive marketplaces. This study indicates that ensemble learning models have strategic implications for improving consumer loyalty and organizational profitability in addition to confirming their performance. Full article
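A minimal sketch of the Random Forest plus SHAP workflow mentioned above is shown below in Python; the synthetic churn data and model settings are placeholders, and the exact structure returned by shap_values depends on the installed shap version.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Imbalanced synthetic data standing in for a telecom churn dataset.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", rf.score(X_te, y_te))

# Tree SHAP gives per-prediction feature attributions, which is what makes individual
# churn predictions explainable to customer relationship managers.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_te[:200])
```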

11 pages, 925 KiB  
Article
Mitigating Co-Activity Conflicts and Resource Overallocation in Construction Projects: A Modular Heuristic Scheduling Approach with Primavera P6 EPPM Integration
by Khwansiri Ninpan, Shuzhang Huang, Francesco Vitillo, Mohamad Ali Assaad, Lies Benmiloud Bechet and Robert Plana
Algorithms 2024, 17(6), 230; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060230 - 24 May 2024
Viewed by 284
Abstract
This paper proposes a heuristic approach for managing complex construction projects. The tool incorporates Primavera P6 EPPM and Synchro 4D, enabling proactive clash detection and resolution of spatial conflicts during concurrent tasks. Additionally, it performs resource verification for sufficient allocation before task initiation. This integrated approach facilitates the generation of conflict-free and feasible construction schedules. By adhering to project constraints and seamlessly integrating with existing industry tools, the proposed solution offers a comprehensive and robust approach to construction project management. This constitutes, to our knowledge, the first dynamic digital twin for the delivery of a complex project. Full article
(This article belongs to the Special Issue Scheduling Theory and Algorithms for Sustainable Manufacturing)
16 pages, 1924 KiB  
Article
Employing a Convolutional Neural Network to Classify Sleep Stages from EEG Signals Using Feature Reduction Techniques
by Maadh Rajaa Mohammed and Ali Makki Sagheer
Algorithms 2024, 17(6), 229; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060229 - 24 May 2024
Viewed by 229
Abstract
Sleep is one of the most essential components of human life, and classifying sleep stages is one of the first steps in spotting sleep-related abnormalities. Based on the kind and frequency of signals obtained during a polysomnography test, sleep phases can be separated into groups. Accurate classification of sleep stages from electroencephalogram (EEG) signals plays a crucial role in sleep disorder diagnosis and treatment. This study proposes a novel approach that combines feature selection techniques with convolutional neural networks (CNNs) to enhance the classification of sleep stages from EEG signals. Firstly, after splitting the dataset into a training set (70%) and a testing set (30%) and standardizing it with the standard scaler method, a comprehensive feature selection process based on mutual information (MI) and analysis of variance (ANOVA) was employed to extract discriminative features from the raw EEG data, reducing dimensionality and enhancing the efficiency of the subsequent classification. Subsequently, a 1D-CNN architecture was designed to automatically learn hierarchical representations of the selected features, capturing complex patterns indicative of different sleep stages. The proposed method was evaluated on the publicly available EDF-Sleep dataset, demonstrating superior performance compared to traditional approaches and reaching an accuracy of 99.84% with MI-50. The results highlight the effectiveness of integrating feature selection with CNNs in improving the accuracy and reliability of sleep stage classification from EEG signals. This approach not only contributes to advancing the field of sleep disorder diagnosis, but also holds promise for developing more efficient and robust clinical decision support systems. Full article
(This article belongs to the Special Issue Machine Learning in Medical Signal and Image Processing (2nd Edition))
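A compact sketch of the described pipeline (70/30 split, mutual-information feature selection, standard scaling, then a 1D CNN) is shown below; the placeholder feature matrix, the MI-50 setting, and the network depth are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X = np.random.rand(1000, 200)                 # placeholder EEG-derived features per epoch
y = np.random.randint(0, 5, 1000)             # five sleep stages

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)   # 70/30 split
selector = SelectKBest(mutual_info_classif, k=50).fit(X_tr, y_tr)                # MI-50 selection
scaler = StandardScaler().fit(selector.transform(X_tr))                          # standard scaler
X_tr_s = scaler.transform(selector.transform(X_tr))[..., np.newaxis]             # (n, 50, 1) for Conv1D
X_te_s = scaler.transform(selector.transform(X_te))[..., np.newaxis]

model = tf.keras.Sequential([
    layers.Input(shape=(50, 1)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_tr_s, y_tr, epochs=5, batch_size=64, validation_data=(X_te_s, y_te), verbose=0)
```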
15 pages, 724 KiB  
Article
Automated Personalized Loudness Control for Multi-Track Recordings
by Bogdan Moroșanu, Marian Negru and Constantin Paleologu
Algorithms 2024, 17(6), 228; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060228 - 24 May 2024
Viewed by 269
Abstract
This paper presents a novel approach to automated music mixing, focusing on the optimization of loudness control in multi-track recordings. By taking into consideration the complexity and artistic nature of traditional mixing processes, we introduce a personalized multi-track leveling method using two types of approaches: a customized genetic algorithm and a neural network-based method. Our method tackles common challenges encountered by audio professionals during prolonged mixing sessions, where consistency can decrease as a result of fatigue. Our algorithm serves as a ‘virtual assistant’ to consistently uphold the initial mixing objectives, thereby assuring consistent quality throughout the process. In addition, our system automates the repetitive elements of the mixing process, resulting in a substantial reduction in production time. This enables engineers to dedicate their attention to more innovative and intricate tasks. Our experimental framework involves 20 diverse songs and 10 sound engineers possessing a wide range of expertise, offering a useful perspective on the adaptability and effectiveness of our method in real-world scenarios. The results demonstrate the capacity of the algorithms to mimic the engineers’ decision-making, achieving an optimal balance in the mix that resonates with the emotional and technical aspects of music production. Full article
(This article belongs to the Special Issue 2024 and 2025 Selected Papers from Algorithms Editorial Board Members)

42 pages, 790 KiB  
Article
Explainable AI Frameworks: Navigating the Present Challenges and Unveiling Innovative Applications
by Neeraj Anand Sharma, Rishal Ravikesh Chand, Zain Buksh, A. B. M. Shawkat Ali, Ambreen Hanif and Amin Beheshti
Algorithms 2024, 17(6), 227; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060227 - 24 May 2024
Viewed by 347
Abstract
This study delves into the realm of Explainable Artificial Intelligence (XAI) frameworks, aiming to empower researchers and practitioners with a deeper understanding of these tools. We establish a comprehensive knowledge base by classifying and analyzing prominent XAI solutions based on key attributes like explanation type, model dependence, and use cases. This resource equips users to navigate the diverse XAI landscape and select the most suitable framework for their specific needs. Furthermore, the study proposes a novel framework called XAIE (eXplainable AI Evaluator) for informed decision-making in XAI adoption. This framework empowers users to assess different XAI options based on their application context objectively. This will lead to more responsible AI development by fostering transparency and trust. Finally, the research identifies the limitations and challenges associated with the existing XAI frameworks, paving the way for future advancements. By highlighting these areas, the study guides researchers and developers in enhancing the capabilities of Explainable AI. Full article
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)

19 pages, 346 KiB  
Article
Gain and Pain in Graph Partitioning: Finding Accurate Communities in Complex Networks
by Arman Ferdowsi and Maryam Dehghan Chenary
Algorithms 2024, 17(6), 226; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060226 - 23 May 2024
Viewed by 270
Abstract
This paper presents an approach to community detection in complex networks by simultaneously incorporating a connectivity-based metric and Max-Min Modularity. By leveraging the connectivity-based metric and employing a heuristic algorithm, we develop a novel complementary graph for the Max-Min Modularity that enhances its effectiveness. We formulate community detection as an integer programming problem of an equivalent yet more compact counterpart model of the revised Max-Min Modularity maximization problem. Using a row generation technique alongside the heuristic approach, we then provide a hybrid procedure for near-optimally solving the model and discovering high-quality communities. Through a series of experiments, we demonstrate the success of our algorithm, showcasing its efficiency in detecting communities, particularly in extensive networks. Full article
(This article belongs to the Special Issue Algorithms for Network Systems and Applications)
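The paper's Max-Min Modularity integer program and row-generation procedure are not reproduced here; as a point of reference, a plain modularity-maximization baseline takes only a few lines with networkx.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                       # small benchmark network
communities = greedy_modularity_communities(G)   # greedy (standard) modularity maximization
print(len(communities), "communities, Q =", round(modularity(G, communities), 3))
```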

13 pages, 3066 KiB  
Article
Context Privacy Preservation for User Validation by Wireless Sensors in the Industrial Metaverse Access System
by John Owoicho Odeh, Xiaolong Yang, Cosmas Ifeanyi Nwakanma and Sahraoui Dhelim
Algorithms 2024, 17(6), 225; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060225 - 23 May 2024
Viewed by 274
Abstract
The Industrial Metaverse provides unparalleled prospects for increasing productivity and efficiency across multiple sectors. As wireless sensor networks play an important role in data collection and transmission within this ecosystem, preserving context privacy becomes critical to protecting sensitive information. This paper investigates the issue of context privacy preservation for user validation via AccesSensor in the Industrial Metaverse and presents a technological method to address it. We explore the need for context privacy, look at existing privacy preservation solutions, and propose novel user validation methods that are customized to the Industrial Metaverse’s access system. The method is evaluated in terms of time efficiency, privacy preservation, and bandwidth utilization, and it performs better than the DPSensor. Our research seeks to provide insights and recommendations for developing strong privacy protection methods in wireless sensor networks that operate within the Industrial Metaverse ecosystem. Full article
(This article belongs to the Special Issue AI Algorithms for Positive Change in Digital Futures)
19 pages, 2573 KiB  
Article
Bayesian Estimation of Simultaneous Regression Quantiles Using Hamiltonian Monte Carlo
by Hassan Hachem and Candy Abboud
Algorithms 2024, 17(6), 224; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060224 - 23 May 2024
Viewed by 239
Abstract
The simultaneous estimation of multiple quantiles is a crucial statistical task that enables a thorough understanding of data distribution for robust analysis and decision-making. In this study, we adopt a Bayesian approach to tackle this critical task, employing the asymmetric Laplace distribution (ALD) as a flexible framework for quantile modeling. Our methodology implementation involves the Hamiltonian Monte Carlo (HMC) algorithm, building on the foundation laid in prior work, where the error term is assumed to follow an ALD. Capitalizing on the interplay between two distinct quantiles of this distribution, we endorse a straightforward and fully Bayesian method that adheres to the non-crossing property of quantiles. Illustrated through simulated scenarios, we showcase the effectiveness of our approach in quantile estimation, enhancing precision via the HMC algorithm. The proposed method proves versatile, finding application in finance, environmental science, healthcare, and manufacturing, and contributing to sustainable development goals by fostering innovation and enhancing decision-making in diverse fields. Full article
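A minimal sketch of the underlying idea, quantile regression with an asymmetric Laplace working likelihood sampled by NUTS (PyMC's adaptive HMC variant), is given below; the simulated data, the priors, and the fact that the two quantiles are fitted separately (without the paper's non-crossing construction) are illustrative simplifications.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 0.5 * np.abs(x), size=200)   # heteroscedastic toy data

def fit_quantile(tau):
    with pm.Model():
        beta0 = pm.Normal("beta0", 0.0, 10.0)
        beta1 = pm.Normal("beta1", 0.0, 10.0)
        sigma = pm.HalfNormal("sigma", 5.0)
        resid = y - (beta0 + beta1 * x)
        # Asymmetric Laplace log-likelihood: the check loss rho_tau scaled by sigma.
        rho = resid * (tau - (resid < 0))
        pm.Potential("ald_loglike",
                     y.size * pm.math.log(tau * (1 - tau) / sigma)
                     - pm.math.sum(rho) / sigma)
        return pm.sample(1000, tune=1000, chains=2, progressbar=False)   # NUTS by default

idata_q10 = fit_quantile(0.10)
idata_q90 = fit_quantile(0.90)
```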

16 pages, 1055 KiB  
Review
Inertial Sensors-Based Assessment of Human Breathing Pattern: A Systematic Literature Review
by Rodrigo Martins, Fátima Rodrigues, Susana Costa and Nelson Costa
Algorithms 2024, 17(6), 223; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060223 - 23 May 2024
Viewed by 309
Abstract
Breathing pattern assessment holds critical importance in clinical practice for detecting respiratory dysfunctions and their impact on health and wellbeing. This systematic literature review investigates the efficacy of inertial sensors in assessing adult human breathing patterns, exploring various methodologies, challenges, and limitations. Following the PSALSAR framework, which incorporates the PICOC method and the PRISMA statement for comprehensive research, 22 publications from the Scopus, Web of Science, and PubMed databases were scrutinized. A diverse range of sensor fusion methods, data signal analysis techniques, and classifier performances were investigated. Notably, Madgwick’s algorithm and Principal Component Analysis showed superior performance in tracking respiratory movements. Classifiers such as Long Short-Term Memory Recurrent Neural Networks exhibited high accuracy in detecting breathing events. Motion artifacts, limited sample sizes, and physiological variability posed challenges, highlighting the need for further research. Optimal sensor configurations were explored, suggesting improvements with multiple sensors, especially in different body postures. In conclusion, this systematic literature review elucidates methods, challenges, and potential future developments in using inertial sensors for assessing adult human breathing patterns. Overcoming the challenges related to sensor placement, motion artifacts, and algorithm development is essential for progress. Future research should focus on extending sensor applications to clinical settings and diverse populations, enhancing respiratory health management. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)
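As one illustration of the signal-processing ideas surveyed (not a method taken from any single reviewed paper), the Python sketch below extracts a respiratory waveform from a simulated chest-worn accelerometer with PCA, band-pass filters it, and counts breaths by peak detection.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.signal import butter, filtfilt, find_peaks

fs, dur = 100.0, 60.0                                    # 100 Hz IMU, one minute of data
t = np.arange(0, dur, 1 / fs)
breathing = 0.02 * np.sin(2 * np.pi * 0.25 * t)          # ~15 breaths/min chest motion (simulated)
acc = np.column_stack([breathing * g + 0.005 * np.random.randn(t.size) for g in (1.0, 0.6, 0.3)])

resp = PCA(n_components=1).fit_transform(acc - acc.mean(axis=0)).ravel()   # dominant motion axis
b, a = butter(2, [0.1, 0.7], btype="bandpass", fs=fs)                      # typical respiratory band (Hz)
resp_filtered = filtfilt(b, a, resp)
peaks, _ = find_peaks(resp_filtered, distance=fs * 1.5)                    # at least 1.5 s between breaths
print("estimated rate:", len(peaks) / dur * 60, "breaths/min")
```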

46 pages, 1768 KiB  
Review
Multiobjective Path Problems and Algorithms in Telecommunication Network Design—Overview and Trends
by José Craveirinha, João Clímaco, Rita Girão-Silva and Marta Pascoal
Algorithms 2024, 17(6), 222; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060222 - 22 May 2024
Viewed by 410
Abstract
A major area of application of multiobjective path problems and resolution algorithms is telecommunication network routing design, taking into account the extremely rapid technological and service evolutions. The need for explicit consideration of heterogeneous Quality of Service metrics makes it advantageous to develop routing models in which various, often conflicting, technical–economic aspects are tackled. Our work is focused on multiobjective path problem formulations and resolution methods and their applications to routing methods. We review basic concepts and present the main formulations of multiobjective path problems, considering different types of objective functions. We outline the different types of resolution methods for these problems, including a classification and overview of relevant algorithms for different types of problems. Afterwards, we outline background concepts on routing models and present an overview of selected papers considered representative of different types of applications of multiobjective path problem formulations and algorithms. A broad characterization of the major types of path problems relevant in this context is given in connection with the overview of contributions in different technological and architectural network environments. Finally, we outline research trends in this area in relation to recent technological evolutions in communication networks. Full article

17 pages, 39975 KiB  
Article
A Hybrid Learning-Architecture for Improved Brain Tumor Recognition
by Jose Dixon, Oluwatunmise Akinniyi, Abeer Abdelhamid, Gehad A. Saleh, Md Mahmudur Rahman and Fahmi Khalifa
Algorithms 2024, 17(6), 221; https://0-doi-org.brum.beds.ac.uk/10.3390/a17060221 - 21 May 2024
Viewed by 520
Abstract
The accurate classification of brain tumors is an important step for early intervention. Artificial intelligence (AI)-based diagnostic systems have been utilized in recent years to help automate the process and provide more objective and faster diagnosis. This work introduces an enhanced AI-based architecture for improved brain tumor classification. We introduce a hybrid architecture that integrates a vision transformer (ViT) and deep neural networks to create an ensemble classifier, resulting in a more robust brain tumor classification framework. The analysis pipeline begins with preprocessing and data normalization, followed by the extraction of three types of MRI-derived, information-rich features. The latter include higher-order texture and structural feature sets that harness the spatial interactions between image intensities, derived using Haralick features and local binary patterns. Additionally, local deeper features of the brain images are extracted using an optimized convolutional neural network (CNN) architecture. Finally, ViT-derived features are also integrated due to their ability to handle dependencies across larger distances while being less sensitive to data augmentation. The extracted features are then weighted, fused, and fed to a machine learning classifier for the final classification of brain MRIs. The proposed weighted ensemble architecture has been evaluated on publicly available and locally collected brain MRIs of four classes using various metrics. Ablation studies showed that leveraging the benefits of the individual components of the proposed architecture leads to improved performance. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)
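The handcrafted part of the feature set (local binary patterns and GLCM-based, Haralick-style texture measures) can be sketched as follows; the placeholder MRI slice and the specific GLCM properties chosen are illustrative assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

mri = (np.random.rand(128, 128) * 255).astype(np.uint8)          # placeholder brain MRI slice

lbp = local_binary_pattern(mri, P=8, R=1, method="uniform")      # local binary patterns
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

glcm = graycomatrix(mri, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)     # gray-level co-occurrence matrix
texture = [graycoprops(glcm, p).mean()
           for p in ("contrast", "homogeneity", "energy", "correlation")]

handcrafted = np.concatenate([lbp_hist, texture])                # later fused with CNN- and ViT-derived features
```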
