Future Internet, Volume 16, Issue 6 (June 2024) – 19 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
18 pages, 1017 KiB  
Article
In-Home Evaluation of the NeoCare Artificial Intelligence Sound-Based Fall Detection System
by Carol Maher, Kylie A. Dankiw, Ben Singh, Svetlana Bogomolova and Rachel G. Curtis
Future Internet 2024, 16(6), 197; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060197 - 2 Jun 2024
Abstract
The NeoCare home monitoring system aims to detect falls and other events using artificial intelligence. This study evaluated NeoCare’s accuracy and explored user perceptions through a 12-week in-home trial with 18 households of adults aged 65+ years at risk of falls (mean age: 75.3 years; 67% female). Participants logged events that were cross-referenced with NeoCare logs to calculate sensitivity and specificity for fall detection and response. Qualitative interviews gathered in-depth user feedback. During the trial, 28 falls/events were documented, with 12 eligible for analysis, as the others occurred outside the home or when devices were offline. NeoCare was activated 4939 times: 4930 by everyday household sounds and 9 by actual falls. Fall detection sensitivity was 75.00% and specificity 6.80%. For responding to falls, sensitivity was 62.50% and specificity 17.28%. Users felt more secure with NeoCare but identified the need for further calibration to improve accuracy. Advantages included avoiding wearables, while key challenges were misinterpreted noises and occasional technical issues such as going offline. Suggested improvements were visual indicators, trigger words, and outdoor capability. The study demonstrated NeoCare’s potential with modifications: users found it beneficial but highlighted areas for improvement. Real-world evaluations and user-centered design are crucial for healthcare technology development.
(This article belongs to the Special Issue eHealth and mHealth)
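As a back-of-the-envelope illustration of the reported accuracy figures, the sketch below computes sensitivity and specificity from confusion counts. The fall counts follow the abstract (9 of 12 eligible falls detected, 4930 false activations); the true-negative count is hypothetical, since the abstract does not give the specificity denominator.

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Standard definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts consistent with the abstract: 12 eligible falls, 9 detected (TP),
# 3 missed (FN); 4930 activations from household sounds (FP).
# tn is hypothetical -- chosen only so the ratio matches the reported 6.80%.
sens, spec = sensitivity_specificity(tp=9, fn=3, fp=4930, tn=360)
print(f"sensitivity={sens:.2%}, specificity={spec:.2%}")
```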
22 pages, 890 KiB  
Article
Efficiency of Federated Learning and Blockchain in Preserving Privacy and Enhancing the Performance of Credit Card Fraud Detection (CCFD) Systems
by Tahani Baabdullah, Amani Alzahrani, Danda B. Rawat and Chunmei Liu
Future Internet 2024, 16(6), 196; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060196 - 2 Jun 2024
Abstract
Increasing global credit card usage has elevated it to a preferred payment method for daily transactions, underscoring its significance in global financial cybersecurity. This paper introduces a credit card fraud detection (CCFD) system that integrates federated learning (FL) with blockchain technology. The experiment employs FL to establish a global learning model on the cloud server, which transmits initial parameters to individual local learning models on fog nodes. With three banks (fog nodes) involved, each bank trains its learning model locally, ensuring data privacy, and subsequently sends back updated parameters to the global learning model. Through the integration of FL and blockchain, our system ensures privacy preservation and data protection. We utilize three machine learning and deep neural network learning algorithms, RF, CNN, and LSTM, alongside deep optimization techniques such as ADAM, SGD, and MSGD. The SMOTE oversampling technique is also employed to balance the dataset before model training. Our proposed framework has demonstrated efficiency and effectiveness in enhancing classification performance and prediction accuracy.
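The parameter exchange the abstract describes (a cloud-hosted global model sending initial parameters to three bank-side fog nodes, which train locally and return updates) follows the usual federated averaging pattern. A minimal sketch, with random arrays standing in for real local RF/CNN/LSTM training on SMOTE-balanced data:

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))

global_w = np.zeros(10)                      # initial global parameters
for round_ in range(3):                      # three illustrative rounds
    # Each of the three banks trains locally; random perturbations stand in
    # for the actual gradient updates computed on private transaction data.
    local = [global_w + 0.01 * np.random.randn(10) for _ in range(3)]
    global_w = fed_avg(local, client_sizes=[5000, 8000, 3000])
print(global_w)
```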
19 pages, 1936 KiB  
Article
GreenLab, an IoT-Based Small-Scale Smart Greenhouse
by Cristian Volosciuc, Răzvan Bogdan, Bianca Blajovan, Cristina Stângaciu and Marius Marcu
Future Internet 2024, 16(6), 195; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060195 - 31 May 2024
Abstract
In an era of connectivity, the Internet of Things introduces smart solutions for smart and sustainable agriculture, bringing alternatives to overcome the food crisis. Among these solutions, smart greenhouses support crop and vegetable agriculture regardless of season and cultivated area by carefully controlling and managing parameters like temperature, air and soil humidity, and light. Smart technologies have proven to be successful tools for increasing agricultural production at both the macro and micro levels, which is an important step in streamlining small-scale agriculture. This paper presents an experimental Internet of Things-based small-scale greenhouse prototype as a proof of concept for the benefits of merging smart sensing, connectivity, IoT, and mobile-based applications for growing crops. Our proposed solution is cost-friendly and includes a photovoltaic panel and a buffer battery to reduce energy costs while ensuring functionality at night and in cloudy weather, as well as a mobile application for easy data visualization and monitoring of the greenhouse.
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
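The parameter control the abstract mentions typically reduces to threshold rules over sensed values. A minimal sketch with hypothetical readings and setpoints (the prototype's actual sensors and thresholds are not given in the abstract):

```python
# Hypothetical sensor readings and setpoints for illustration only.
readings = {"temperature_c": 29.5, "soil_moisture_pct": 31.0, "light_lux": 820.0}
setpoints = {"temperature_c": (18.0, 27.0), "soil_moisture_pct": (35.0, 60.0)}

def control_actions(readings, setpoints):
    """Map out-of-range readings to simple actuator commands."""
    actions = []
    lo, hi = setpoints["temperature_c"]
    if readings["temperature_c"] > hi:
        actions.append("open_vent")
    elif readings["temperature_c"] < lo:
        actions.append("close_vent")
    lo, hi = setpoints["soil_moisture_pct"]
    if readings["soil_moisture_pct"] < lo:
        actions.append("start_irrigation")
    return actions

print(control_actions(readings, setpoints))  # -> ['open_vent', 'start_irrigation']
```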
14 pages, 3949 KiB  
Article
Research on Multi-Modal Pedestrian Detection and Tracking Algorithm Based on Deep Learning
by Rui Zhao, Jutao Hao and Huan Huo
Future Internet 2024, 16(6), 194; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060194 - 31 May 2024
Abstract
In the realm of intelligent transportation, pedestrian detection has witnessed significant advancements. However, it continues to grapple with challenging issues, notably the detection of pedestrians in complex lighting scenarios. Conventional visible-light imaging is profoundly affected by varying lighting conditions: under optimal daytime lighting, visibility is enhanced, leading to superior pedestrian detection outcomes, whereas under low-light conditions it falters because it provides inadequate pedestrian target information, resulting in a marked decline in detection efficacy. In this context, infrared imaging emerges as a valuable supplement, bolstering the provision of pedestrian information. This paper delves into pedestrian detection and tracking algorithms within a multi-modal image framework grounded in deep learning methodologies. Leveraging the YOLOv4 algorithm as a foundation, augmented by a channel stack fusion module, a novel multi-modal pedestrian detection algorithm tailored for intelligent transportation is proposed. This algorithm capitalizes on the fusion of visible and infrared image features to enhance pedestrian detection performance amidst complex road environments. Experimental findings demonstrate that, compared to the high-performing Visible-YOLOv4 algorithm, the proposed Double-YOLOv4-CSE algorithm achieves a 5.0% improvement in accuracy and a 6.9% reduction in log-average miss rate. This research aims to ensure that the algorithm runs smoothly even on a low-end 1080 Ti GPU and to broaden the algorithm’s coverage at the application layer, making it affordable and practical for both urban and rural areas. This addresses the broader research problem of smart cities and remote deployments with limited computational power.
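The abstract does not detail the channel stack fusion module, but the simplest reading is early fusion by channel concatenation of the visible and infrared frames; a sketch under that assumption:

```python
import numpy as np

def channel_stack(visible_rgb, infrared):
    """Stack an RGB frame and a single-channel IR frame into a 4-channel input."""
    ir = infrared[..., np.newaxis] if infrared.ndim == 2 else infrared
    return np.concatenate([visible_rgb, ir], axis=-1)

vis = np.random.rand(416, 416, 3).astype(np.float32)   # visible-light frame
ir  = np.random.rand(416, 416).astype(np.float32)      # infrared frame
fused = channel_stack(vis, ir)
print(fused.shape)   # (416, 416, 4) -- fed to the detector's first conv layer
```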
16 pages, 336 KiB  
Article
Enhancing Sensor Data Imputation: OWA-Based Model Aggregation for Missing Values
by Muthana Al-Amidie, Laith Alzubaidi, Muhammad Aminul Islam and Derek T. Anderson
Future Internet 2024, 16(6), 193; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060193 - 31 May 2024
Abstract
Limitations in the data collection process, whether caused by human error or by faults in the collection electronics, sensors, or network connectivity, mean that important values at some points can be lost. However, a complete dataset is required for the desired performance of the subsequent applications in various fields like engineering, data science, statistics, etc. An efficient data imputation technique is therefore desired to fill in the missing values and achieve completeness within the dataset. The fuzzy integral is considered one of the most powerful techniques for multi-source information fusion and has a wide range of applications in real-world decision-making problems that often require decisions to be made with partially observable/available information. To address this problem, algorithms impute missing data with a representative sample or by predicting the most likely value given the observed data. In this article, we take a completely different approach to the information fusion task in the ordered weighted averaging (OWA) context. In particular, we empirically explore, for different distributions, how the weights/importance of the missing sources are distributed across the observed inputs/sources. The experimental results on synthetic and real-world datasets demonstrate the applicability of the proposed methods.
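For readers unfamiliar with OWA, the sketch below shows the operator together with one simple redistribution policy for missing sources (uniform reallocation of their weight mass across the observed ones). The paper studies how this mass should be distributed, so treat the policy here as illustrative only:

```python
import numpy as np

def owa(values, weights):
    """Ordered weighted average: weights apply to values sorted in descending order."""
    return np.dot(np.sort(values)[::-1], weights)

def owa_with_missing(values, weights):
    """OWA over observed inputs only; the weight mass of missing sources is
    redistributed uniformly across the observed ones (one simple policy)."""
    values = np.asarray(values, dtype=float)
    observed = values[~np.isnan(values)]
    w = np.asarray(weights[:observed.size], dtype=float)
    w += (1.0 - w.sum()) / observed.size          # push lost mass back uniformly
    return owa(observed, w)

x = [0.9, np.nan, 0.4, 0.7]          # one source missing
w = [0.4, 0.3, 0.2, 0.1]             # OWA weights summing to 1
print(owa_with_missing(x, w))
```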
16 pages, 5464 KiB  
Article
Prophet–CEEMDAN–ARBiLSTM-Based Model for Short-Term Load Forecasting
by Jindong Yang, Xiran Zhang, Wenhao Chen and Fei Rong
Future Internet 2024, 16(6), 192; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060192 - 31 May 2024
Abstract
Accurate short-term load forecasting (STLF) plays an essential role in sustainable energy development. Specifically, energy companies can efficiently plan and manage their generation capacity, lessening resource wastage and promoting the overall efficiency of power resource utilization. However, existing models cannot accurately capture the nonlinear features of electricity data, leading to a decline in forecasting performance. To address this issue, this paper designs an innovative load forecasting method, named Prophet–CEEMDAN–ARBiLSTM, which consists of Prophet, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and the residual Bidirectional Long Short-Term Memory (BiLSTM) network. Specifically, the paper first employs the Prophet method to learn cyclic and trend features from the input data, aiming to discern the influence of these features on the short-term electricity load. Then, it adopts CEEMDAN to decompose the residual series and yield components with distinct modalities. Finally, it designs the advanced residual BiLSTM (ARBiLSTM) block, which takes the extracted features as input to produce the forecasting results. Multiple experiments on the New England public dataset demonstrate that the Prophet–CEEMDAN–ARBiLSTM method achieves better performance than the existing Prophet-based methods.
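A sketch of the first two stages of that pipeline, assuming the prophet and EMD-signal (PyEMD) packages and a synthetic load series; the ARBiLSTM forecasting stage, which is the paper's own contribution, is omitted:

```python
import numpy as np
import pandas as pd
from prophet import Prophet          # pip install prophet
from PyEMD import CEEMDAN            # pip install EMD-signal

# Synthetic hourly load series standing in for the New England data.
df = pd.DataFrame({"ds": pd.date_range("2024-01-01", periods=500, freq="h")})
df["y"] = 100 + 10 * np.sin(np.arange(500) * 2 * np.pi / 24) + np.random.randn(500)

# Step 1: Prophet learns trend and cyclic components.
m = Prophet()
m.fit(df)
trend_cycle = m.predict(df)["yhat"].to_numpy()

# Step 2: CEEMDAN decomposes the residual into modal components (IMFs);
# each component would then be forecast by the ARBiLSTM block (omitted).
residual = df["y"].to_numpy() - trend_cycle
imfs = CEEMDAN()(residual)
print(imfs.shape)    # (n_imfs, 500)
```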
22 pages, 2048 KiB  
Article
Harnessing the Cloud: A Novel Approach to Smart Solar Plant Monitoring
by Mohammad Imran Ali, Shahi Dost, Khurram Shehzad Khattak, Muhammad Imran Khan and Riaz Muhammad
Future Internet 2024, 16(6), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060191 - 29 May 2024
Abstract
Renewable Energy Sources (RESs) such as hydro, wind, and solar are emerging as preferred alternatives to fossil fuels. Among these RESs, solar energy is an especially attractive option and is gaining extensive interest around the globe. However, due to solar energy’s intermittent nature and sensitivity to environmental parameters (e.g., irradiance, dust, temperature, aging, and humidity), real-time solar plant monitoring is imperative. This paper’s contribution is to compare and analyze current IoT trends and propose future research directions, which will be instrumental in the development of low-cost, real-time, scalable, reliable, and power-optimized solar plant monitoring systems. In this work, a comparative analysis of proposed solutions in the existing literature has been performed, considering five aspects: compute boards, sensors, communication, servers, and architectural paradigms. The IoT architectural paradigms employed are summarized and discussed with respect to communication, application layers, and storage capabilities. To facilitate enhanced IoT-based solar monitoring, an edge computing paradigm is proposed, with suggestions for the fabrication of edge devices and nodes using optimum compute boards, sensors, and communication modules. Different cloud platforms are explored, and the public cloud platform Amazon Web Services is identified as the ideal solution. Artificial intelligence-based techniques, methods, and outcomes are presented, which can help in the monitoring, analysis, and management of solar PV systems. As an outcome, this paper can help researchers and academics develop low-cost, real-time, effective, scalable, and reliable solar monitoring systems.
(This article belongs to the Section Internet of Things)
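As a flavor of what an edge node in such a system does, here is a minimal telemetry publisher. The broker address, topic, and readings are hypothetical, and the sketch assumes paho-mqtt 2.x; AWS IoT Core, matching the paper's AWS recommendation, likewise speaks MQTT (over TLS with device certificates):

```python
import json, time
import paho.mqtt.client as mqtt    # assumes paho-mqtt >= 2.0

# Hypothetical broker and topic for illustration only.
client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.connect("broker.example.com", 1883)
client.loop_start()

while True:
    telemetry = {                      # values would come from real PV sensors
        "irradiance_wm2": 812.4,
        "panel_temp_c": 41.7,
        "dc_power_w": 3350.0,
        "ts": time.time(),
    }
    client.publish("plant/site1/telemetry", json.dumps(telemetry), qos=1)
    time.sleep(60)                     # one reading per minute
```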
15 pages, 845 KiB  
Article
Tracing Student Activity Patterns in E-Learning Environments: Insights into Academic Performance
by Evgenia Paxinou, Georgios Feretzakis, Rozita Tsoni, Dimitrios Karapiperis, Dimitrios Kalles and Vassilios S. Verykios
Future Internet 2024, 16(6), 190; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060190 - 29 May 2024
Abstract
In distance learning educational environments like Moodle, students interact with their tutors, their peers, and the provided educational material through various means. Due to advancements in learning analytics, students’ transitions within Moodle generate digital trace data that outline learners’ self-directed learning paths and reveal information about their academic behavior within a course. These learning paths can be depicted as sequences of transitions between various states, such as completing quizzes, submitting assignments, downloading files, and participating in forum discussions, among others. Considering that a specific learning path summarizes the students’ trajectory in a course during an academic year, we analyzed data on students’ actions extracted from Moodle logs to investigate how the distribution of user actions within different Moodle resources can impact academic achievements. Our analysis was conducted using a Markov Chain Model, whereby transition matrices were constructed to identify steady states, and eigenvectors were calculated. Correlations were explored between specific states in users’ eigenvectors and their final grades, which were used as a proxy of academic performance. Our findings offer valuable insights into the relationship between student actions, link weight vectors, and academic performance, in an attempt to optimize students’ learning paths, tutors’ guidance, and course structures in the Moodle environment.
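The steady-state computation the abstract describes amounts to finding the left eigenvector of the transition matrix for eigenvalue 1. A minimal sketch over a hypothetical four-state action space:

```python
import numpy as np

# Hypothetical 4-state transition matrix (rows sum to 1) over Moodle actions:
# quiz, assignment, file download, forum.
P = np.array([
    [0.50, 0.20, 0.20, 0.10],
    [0.30, 0.40, 0.20, 0.10],
    [0.25, 0.25, 0.25, 0.25],
    [0.20, 0.30, 0.30, 0.20],
])

# The steady state is the left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmax(np.real(vals))])
steady = v / v.sum()
print(steady)   # long-run share of activity in each state
```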
20 pages, 2760 KiB  
Article
Dynamic Spatial–Temporal Self-Attention Network for Traffic Flow Prediction
by Dong Wang, Hongji Yang and Hua Zhou
Future Internet 2024, 16(6), 189; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060189 - 25 May 2024
Abstract
Traffic flow prediction is considered to be one of the fundamental technologies in intelligent transportation systems (ITSs) with a tremendous application prospect. Unlike traditional time series analysis tasks, the key challenge in traffic flow prediction lies in effectively modelling the highly complex and dynamic spatiotemporal dependencies within the traffic data. In recent years, researchers have proposed various methods to enhance the accuracy of traffic flow prediction, but certain issues still persist. For instance, some methods rely on specific static assumptions, failing to adequately simulate the dynamic changes in the data, thus limiting their modelling capacity. On the other hand, some approaches inadequately capture the spatiotemporal dependencies, resulting in the omission of crucial information and leading to unsatisfactory prediction outcomes. To address these challenges, this paper proposes a model called the Dynamic Spatial–Temporal Self-Attention Network (DSTSAN). Firstly, this research enhances the interaction between different dimension features in the traffic data through a feature augmentation module, thereby improving the model’s representational capacity. Subsequently, the current investigation introduces two masking matrices: one captures local spatial dependencies and the other captures global spatial dependencies, based on the spatial self-attention module. Finally, the methodology employs a temporal self-attention module to capture and integrate the dynamic temporal dependencies of traffic data. We designed experiments using historical data from the previous hour to predict traffic flow conditions in the hour ahead, and the DSTSAN model was extensively compared with 11 baseline methods on four real-world datasets. The results demonstrate the effectiveness and superiority of the proposed approach.
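The two masking matrices can be pictured as additive masks on attention scores: a local mask restricts each node to its neighbors, while a global mask leaves attention unrestricted. A single-head sketch without learned projections (the real model of course learns query/key/value maps):

```python
import numpy as np

def masked_self_attention(x, mask):
    """Scaled dot-product self-attention with an additive mask
    (0 = attend, -inf = block), applied over the node dimension."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d) + mask
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

n, d = 6, 8
x = np.random.randn(n, d).astype(np.float32)        # features of 6 road nodes

# Local mask: each node attends only to itself and adjacent nodes;
# a global mask (all zeros) would let every node attend to every other.
local_mask = np.full((n, n), -np.inf)
for i in range(n):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n:
            local_mask[i, j] = 0.0
print(masked_self_attention(x, local_mask).shape)   # (6, 8)
```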
19 pages, 261 KiB  
Article
Studying the Quality of Source Code Generated by Different AI Generative Engines: An Empirical Evaluation
by Davide Tosi
Future Internet 2024, 16(6), 188; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060188 - 24 May 2024
Abstract
The advent of Generative Artificial Intelligence is opening essential questions about whether and when AI will replace human abilities in accomplishing everyday tasks. This issue is particularly pressing in the domain of software development, where generative AI seems to have strong skills in solving coding problems and generating software source code. In this paper, an empirical evaluation of AI-generated source code is performed: three complex coding problems (selected from the exams for the Java Programming course at the University of Insubria) are prompted to three different Large Language Model (LLM) engines, and the generated code is evaluated for correctness and quality by means of human-implemented test suites and quality metrics. The experimentation shows that the three evaluated LLM engines are able to solve the three exams, but only under the constant supervision of software experts. Currently, LLM engines need human-expert support to produce running code of good quality.
17 pages, 1140 KiB  
Article
Enhanced Beacons Dynamic Transmission over TSCH
by Erik Ortiz Guerra, Mario Martínez Morfa, Carlos Manuel García Algora, Hector Cruz-Enriquez, Kris Steenhaut and Samuel Montejo-Sánchez
Future Internet 2024, 16(6), 187; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060187 - 24 May 2024
Abstract
Time slotted channel hopping (TSCH) has become the standard multichannel MAC protocol for low-power lossy networks. The procedure for associating nodes in a TSCH-based network is not included in the standard and has been defined in the minimal 6TiSCH configuration. Faster network formation ensures that data packet transmission can start sooner. This paper proposes a dynamic beacon transmission schedule over the TSCH mechanism that achieves a shorter network formation time than the default minimal 6TiSCH static schedule. A theoretical model is derived for the proposed mechanism to estimate the expected time for a node to get associated with the network. Simulation results obtained with different network topologies and channel conditions show that the proposed mechanism reduces the average association time and average power consumption during network formation compared to the default minimal 6TiSCH configuration.
(This article belongs to the Special Issue Industrial Internet of Things (IIoT): Trends and Technologies)
19 pages, 4134 KiB  
Article
Data Collection in Areas without Infrastructure Using LoRa Technology and a Quadrotor
by Josué I. Rojo-García, Sergio A. Vera-Chavarría, Yair Lozano-Hernández, Victor G. Sánchez-Meza, Jaime González-Sierra and Luz N. Oliva-Moreno
Future Internet 2024, 16(6), 186; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060186 - 24 May 2024
Abstract
The use of sensor networks in monitoring applications has increased; they are useful in security, environmental, and health applications, among others. These networks usually transmit data through short-range stations, which makes them attractive for incorporation into applications and devices for use in places without access to satellite or mobile signals, for example, forests, seas, and jungles. To this end, unmanned aerial vehicles (UAVs) have attractive characteristics for data collection and transmission in remote areas without infrastructure. Integrating systems based on wireless sensors and UAVs seems to be an economical and easy-to-use solution, but the main difficulty is the amount of data sent, which affects the communication time and even the flight status of the UAV; factors such as the UAV model and the hardware used for these tasks must also be considered. To address these difficulties, this paper proposes a system based on long-range (LoRa) technology. We present a low-cost wireless sensor network that is flexible, easy to deploy, and capable of collecting and sending data via LoRa transceivers. The readings obtained are packaged and sent to a UAV, which performs predefined flights at a constant height of 30 m with a direct line-of-sight (LoS) to the stations, collecting information from two data stations. We conclude that correct data transmission is possible at a flight speed of 10 m/s and a transmission radius of 690 m for a group of three packages, each confirmed by 20 messages. It is thus possible to collect data along routes of up to 8 km per battery charge, considering the return of the UAV.
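A rough feasibility check implied by those figures: at a 690 m radius and 10 m/s, a direct fly-over keeps a station in range for about 138 s, which bounds how many confirmed messages each pass can carry.

```python
# Back-of-the-envelope contact window, using the figures from the abstract.
radius_m = 690          # usable LoRa transmission radius
speed_ms = 10           # UAV ground speed
contact_s = 2 * radius_m / speed_ms   # fly-over chord through the circle's center
print(f"max contact per station: {contact_s:.0f} s")    # 138 s

# 3 packages x 20 messages = 60 confirmed messages must fit in that window,
# i.e. the link needs to sustain roughly one confirmed message every ~2.3 s.
print(f"time budget per message: {contact_s / 60:.1f} s")
```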
21 pages, 1248 KiB  
Article
HP-LSTM: Hawkes Process–LSTM-Based Detection of DDoS Attack for In-Vehicle Network
by Xingyu Li, Ruifeng Li and Yanchen Liu
Future Internet 2024, 16(6), 185; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060185 - 23 May 2024
Abstract
Connected and autonomous vehicles (CAVs) are advancing rapidly alongside the automotive industry, which opens up new possibilities for attacks. A Distributed Denial-of-Service (DDoS) attacker floods the in-vehicle network with fake messages, causing driving assistance systems to fail and impairing vehicle control functionalities, seriously disrupting the normal operation of the vehicle. In this paper, we propose a novel DDoS attack detection method for in-vehicle Ethernet Scalable service-Oriented Middleware over IP (SOME/IP), which integrates the Hawkes process with Long Short-Term Memory networks (LSTMs) to capture the dynamic behavioral features of the attacker. Specifically, we employ the Hawkes process to capture features of the DDoS attack, with its parameters reflecting the dynamism and self-exciting properties of the attack events. Subsequently, we propose a novel deep learning network structure, an HP-LSTM block, inspired by the Hawkes process, while employing a residual attention block to enhance the model’s detection efficiency and accuracy. Additionally, due to the scarcity of publicly available datasets for SOME/IP, we employed a mature SOME/IP generator to create a dataset for evaluating the validity of the proposed detection model. Finally, extensive experiments were conducted to demonstrate the effectiveness of the proposed DDoS attack detection method.
(This article belongs to the Special Issue Security for Vehicular Ad Hoc Networks)
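For intuition on why a Hawkes process suits burst-like attack traffic, the sketch below evaluates the standard univariate intensity with an exponential kernel. Timestamps and parameters are made up, and the paper's HP-LSTM block is its own architecture, not reproduced here:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a univariate Hawkes process:
    lambda(t) = mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)).
    Self-excitation makes bursts (like DDoS floods) raise the arrival rate."""
    past = np.asarray([ti for ti in events if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Hypothetical message timestamps on the in-vehicle network (seconds):
# a dense burst around t=0.1 followed by a lone message at t=0.9.
events = [0.10, 0.12, 0.13, 0.14, 0.15, 0.90]
print(hawkes_intensity(0.16, events, mu=0.5, alpha=1.0, beta=8.0))  # high (mid-burst)
print(hawkes_intensity(1.00, events, mu=0.5, alpha=1.0, beta=8.0))  # near baseline
```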
19 pages, 1169 KiB  
Article
Exploiting Autoencoder-Based Anomaly Detection to Enhance Cybersecurity in Power Grids
by Fouzi Harrou, Benamar Bouyeddou, Abdelkader Dairi and Ying Sun
Future Internet 2024, 16(6), 184; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060184 - 22 May 2024
Abstract
The evolution of smart grids has led to technological advances and a demand for more efficient and sustainable energy systems. However, the deployment of communication systems in smart grids has increased the threat of cyberattacks, which can result in power outages and disruptions. This paper presents a semi-supervised hybrid deep learning model that combines a Gated Recurrent Unit (GRU)-based Stacked Autoencoder (AE-GRU) with anomaly detection algorithms, including Isolation Forest, Local Outlier Factor, One-Class SVM, and Elliptical Envelope. Using GRU units in both the encoder and decoder sides of the stacked autoencoder enables the effective capture of temporal patterns and dependencies, facilitating dimensionality reduction, feature extraction, and accurate reconstruction for enhanced anomaly detection in smart grids. The proposed approach utilizes unlabeled data to monitor network traffic and identify suspicious data flow. Specifically, the AE-GRU is performed for data reduction and extracting relevant features, and then the anomaly algorithms are applied to reveal potential cyberattacks. The proposed framework is evaluated using the widely adopted IEC 60870-5-104 traffic dataset. The experimental results demonstrate that the proposed approach outperforms standalone algorithms, with the AE-GRU-based LOF method achieving the highest detection rate. Thus, the proposed approach can potentially enhance the cybersecurity in smart grids by accurately detecting and preventing cyberattacks.
(This article belongs to the Special Issue Cybersecurity in the IoT)
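A minimal single-layer sketch of the AE-GRU idea in PyTorch, with assumed shapes: a GRU encoder compresses each traffic window to a latent vector, a GRU decoder reconstructs it, and the reduced representation or reconstruction error would then be handed to the anomaly detectors named in the abstract (the paper's stacked, multi-layer design is simplified away here):

```python
import torch
import torch.nn as nn

class AEGRU(nn.Module):
    """GRU encoder-decoder autoencoder over traffic-feature sequences."""
    def __init__(self, n_features, latent_dim=16):
        super().__init__()
        self.encoder = nn.GRU(n_features, latent_dim, batch_first=True)
        self.decoder = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.out = nn.Linear(latent_dim, n_features)

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        _, h = self.encoder(x)                 # h: (1, batch, latent_dim)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)   # repeat latent per step
        dec, _ = self.decoder(z)
        return self.out(dec)

model = AEGRU(n_features=12)
x = torch.randn(32, 50, 12)                    # 32 windows of 50 time steps
recon = model(x)
err = ((recon - x) ** 2).mean(dim=(1, 2))      # per-window reconstruction error
print(err.shape)                               # torch.Size([32])
```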
21 pages, 4943 KiB  
Article
Cross-Layer Optimization for Enhanced IoT Connectivity: A Novel Routing Protocol for Opportunistic Networks
by Ayman Khalil and Besma Zeddini
Future Internet 2024, 16(6), 183; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060183 - 22 May 2024
Abstract
Opportunistic networks, an evolution of mobile ad hoc networks (MANETs), offer decentralized communication without relying on preinstalled infrastructure, enabling nodes to route packets dynamically through different mobile nodes. However, due to the absence of complete paths and rapidly changing connectivity, routing in opportunistic networks presents unique challenges. This paper proposes a novel probabilistic routing model for opportunistic networks, leveraging nodes’ meeting probabilities to route packets towards their destinations. This model dynamically builds routes based on the likelihood of encountering the destination node, considering factors such as the last meeting time and acknowledgment tables to manage network overload. Additionally, an efficient message detection scheme is introduced to alleviate high overhead by selectively deleting messages from buffers during congestion. Furthermore, the proposed model incorporates cross-layer optimization techniques, integrating optimization strategies across multiple protocol layers to maximize energy efficiency, adaptability, and message delivery reliability. Through extensive simulations, the effectiveness of the proposed model is demonstrated, showing improved message delivery probability while maintaining reasonable overhead and latency. This research contributes to the advancement of opportunistic networks, particularly in enhancing connectivity and efficiency for Internet of Things (IoT) applications deployed in challenging environments.
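Probabilistic schemes of this family typically maintain a per-destination delivery predictability that is reinforced on encounters and aged in between. The PRoPHET-style update below is a common baseline, shown only to illustrate the mechanics; the paper's own model additionally uses last-meeting times, acknowledgment tables, and cross-layer signals:

```python
P_INIT, GAMMA = 0.75, 0.98   # illustrative constants (PRoPHET-style values)

def on_encounter(p_ab):
    """Reinforce delivery predictability when node A meets node B."""
    return p_ab + (1.0 - p_ab) * P_INIT

def age(p_ab, elapsed_units):
    """Decay predictability while the nodes stay apart."""
    return p_ab * GAMMA ** elapsed_units

p = 0.0
p = on_encounter(p)          # first meeting       -> 0.75
p = age(p, 10)               # 10 time units apart -> ~0.61
p = on_encounter(p)          # meet again          -> ~0.90
print(round(p, 2))

# Forwarding rule: relay a message to a neighbor whose predictability for the
# destination exceeds ours; acknowledgment tables prune delivered messages.
```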
16 pages, 2410 KiB  
Systematic Review
Urban Green Spaces and Mental Well-Being: A Systematic Review of Studies Comparing Virtual Reality versus Real Nature
by Liyuan Liang, Like Gobeawan, Siu-Kit Lau, Ervine Shengwei Lin and Kai Keng Ang
Future Internet 2024, 16(6), 182; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060182 - 21 May 2024
Abstract
Increasingly, urban planners are adopting virtual reality (VR) in designing urban green spaces (UGS) to visualize landscape designs in immersive 3D. However, the psychological effect of green spaces experienced in VR may differ from the actual experience in the real world. In this paper, we systematically reviewed studies that conducted experiments investigating the psychological benefits of nature in both VR and the real world, anchoring the VR experience of nature to its real-world counterpart. We separated these studies by the type of VR setup used (360-degree video or 3D virtual environment) and established a framework of the standard questionnaires commonly used to measure perceived mental states. The most common questionnaires include the Positive and Negative Affect Schedule (PANAS), the Perceived Restorativeness Scale (PRS), and the Restoration Outcome Scale (ROS). Although the results from studies that used 360-degree video were less clear, results from studies that used 3D virtual environments provided evidence that virtual nature is comparable to real-world nature, showing promise that UGS designs in VR can transfer into real-world designs to yield similar physiological effects.
(This article belongs to the Special Issue Advances in Extended Reality for Smart Cities)
20 pages, 4156 KiB  
Article
MADDPG-Based Offloading Strategy for Timing-Dependent Tasks in Edge Computing
by Yuchen Wang, Zishan Huang, Zhongcheng Wei and Jijun Zhao
Future Internet 2024, 16(6), 181; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060181 - 21 May 2024
Abstract
With the increasing popularity of the Internet of Things (IoT), the proliferation of computation-intensive and timing-dependent applications has brought serious load pressure on terrestrial networks. To solve the problems of computing resource conflicts and long response delays caused by concurrent service requests from multiple users, this paper proposes an improved edge computing task-offloading scheme for timing-dependent tasks based on Multi-Agent Deep Deterministic Policy Gradient (MADDPG), which aims to shorten the offloading delay and improve resource utilization through resource prediction and collaboration among multiple agents. First, to coordinate global computing resources, a gated recurrent unit is utilized to predict the next computing resource requirements of the timing-dependent tasks from historical information. Second, the predicted information, the historical offloading decisions, and the current state are used as inputs, and the training process of the reinforcement learning algorithm is improved to yield a MADDPG-based task-offloading algorithm. The simulation results show that the algorithm reduces response latency by 6.7% and improves resource utilization by 30.6% compared with the suboptimal benchmark algorithm, and it needs nearly 500 fewer training rounds during learning, which effectively improves the timeliness of the offloading strategy.
21 pages, 718 KiB  
Review
Using ChatGPT in Software Requirements Engineering: A Comprehensive Review
by Nuno Marques, Rodrigo Rocha Silva and Jorge Bernardino
Future Internet 2024, 16(6), 180; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060180 - 21 May 2024
Abstract
Large language models (LLMs) have had a significant impact on several domains, including software engineering. However, a comprehensive understanding of LLMs’ use, impact, and potential limitations in software engineering is still emerging and remains in its early stages. This paper analyzes the role of large language models (LLMs), such as ChatGPT-3.5, in software requirements engineering, a critical area in software engineering experiencing rapid advances due to artificial intelligence (AI). By analyzing several studies, we systematically evaluate the integration of ChatGPT into software requirements engineering, focusing on its benefits, challenges, and ethical considerations. This evaluation is based on a comparative analysis that highlights ChatGPT’s efficiency in eliciting requirements, accuracy in capturing user needs, potential to improve communication among stakeholders, and impact on the responsibilities of requirements engineers. The selected studies were analyzed for their insights into the effectiveness of ChatGPT, the importance of human feedback, prompt engineering techniques, technological limitations, and future research directions in using LLMs in software requirements engineering. This comprehensive analysis aims to provide a differentiated perspective on how ChatGPT can reshape software requirements engineering practices and provides strategic recommendations for leveraging ChatGPT to effectively improve the software requirements engineering process.
14 pages, 5787 KiB  
Article
Object and Event Detection Pipeline for Rink Hockey Games
by Jorge Miguel Lopes, Luis Paulo Mota, Samuel Marques Mota, José Manuel Torres, Rui Silva Moreira, Christophe Soares, Ivo Pereira, Feliz Ribeiro Gouveia and Pedro Sobral
Future Internet 2024, 16(6), 179; https://0-doi-org.brum.beds.ac.uk/10.3390/fi16060179 - 21 May 2024
Abstract
All types of sports are potential application scenarios for automatic and real-time visual object and event detection. In rink hockey, the popular roller-skate variant of team hockey, it is of great interest to automatically track player movements, positions, and sticks, and also to make other judgments, such as locating the ball. In this work, we present a real-time pipeline consisting of an object detection model specifically designed for rink hockey games, followed by a knowledge-based event detection module. Even in the presence of occlusions and fast movements, our deep learning object detection model effectively identifies and tracks important visual elements in real time, such as the ball, players, sticks, referees, crowd, goalkeeper, and goal. Using a curated dataset consisting of a collection of rink hockey videos containing 2525 annotated frames, we trained the model, evaluated its performance, and compared it to state-of-the-art object detection techniques. Our object detection model, based on YOLOv7, achieves a global accuracy of 80% and, according to our results, good performance in terms of accuracy and speed, making it a good choice for rink hockey applications. In our initial tests, the event detection module successfully detected an important event type in rink hockey games, namely the occurrence of penalties.
(This article belongs to the Special Issue Advances Techniques in Computer Vision and Multimedia II)
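The abstract does not spell out the knowledge-based rules, so the sketch below shows only a plausible shape for one: an event fires when a combination of detected classes persists across enough consecutive frames. Labels, boxes, and the threshold are hypothetical:

```python
# Hypothetical detection records: each frame is a list of (label, box) pairs
# as produced by an object detector such as the paper's YOLOv7 model.
def detect_penalty(frames, min_consecutive=30):
    """Flag a penalty-like event when a referee and a player are detected
    together for a sustained run of consecutive frames."""
    run = 0
    for dets in frames:
        labels = {label for label, _ in dets}
        run = run + 1 if {"referee", "player"} <= labels else 0
        if run >= min_consecutive:
            return True
    return False

frames = [[("referee", (0, 0, 10, 10)), ("player", (5, 5, 15, 15))]] * 40
print(detect_penalty(frames))   # True
```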