Recent Advances in Algorithm and Distributed Computing for the Internet of Things

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (15 August 2022) | Viewed by 20274

Special Issue Editors

Department of Computer Science and Technology, Nanjing University, Nanjing 210023, China
Interests: Internet of Things; data mining; edge computing; mobile computing
School of Cyber Science and Technology, Huazhong University of Science and Technology, Wuhan, China
Interests: Internet of Things
School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing, China
Interests: big data; edge computing; knowledge graphs; cloud computing; service computing
Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
Interests: game theory; mobile computing; machine learning
Faculty of Mathematics and Computer Science, Georg-August-University of Göttingen, Göttingen, Germany
Interests: UAV deployment; video analytics systems

Special Issue Information

Dear Colleagues,

With the rise of Industry 4.0, the IoT has developed rapidly in recent years and now pervades our daily lives through smart homes, smart cities, smart stores, and smart buildings. Connected IoT devices will become ubiquitous in healthcare and remote work due to the COVID-19 pandemic. Moreover, there are now more connectivity options for specific use cases: smart wearable devices require different kinds of connectivity than larger smart devices such as smart vehicles, UAVs, and industrial machinery, which can exploit high-speed 5G connectivity. In addition, edge computing has emerged to offer cheaper, faster, and more efficient services by processing data locally.

The IoT makes our lives easier; however, ubiquitous IoT devices such as smartphones create huge amounts of data every day, and analyzing these data requires large computing resources, which poses great challenges. What is more, the private nature of the data held on IoT devices raises serious security concerns. It is therefore important to design efficient, distributed algorithms that address these problems under the constraints of IoT devices, such as restricted energy, limited storage space, and security requirements.

The aim of this Special Issue is to provide a platform for researchers to discuss cutting-edge algorithms to deal with IoT-related issues.

Prof. Dr. Haipeng Dai
Prof. Dr. Xianjun Deng
Prof. Dr. Xiaolong Xu
Dr. Ning Wang
Dr. Zhenzhe Zheng
Dr. Weijun Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deployment/scheduling algorithms for IoT
  • Energy harvesting schemes for IoT
  • Energy-restricted algorithms for IoT
  • Algorithms for next-generation IoT
  • Algorithms for IoT security, privacy, and trust
  • Algorithms of fault tolerance and reliability for IoT
  • Algorithms for robust IoT scheduling in dynamic scenarios
  • Algorithms for scheduling real-time IoT systems
  • Algorithms for managing and processing heterogeneous IoT data
  • Algorithms for self-organizing IoT networks
  • Algorithms for real IoT systems, e.g., educational, environmental, agricultural, business, and industrial IoT systems
  • Design of low-energy IoT protocols
  • 5G-related IoT connectivity algorithms
  • Distributed algorithms in edge computing for IoT devices
  • Distributed algorithms of data processing and data fusion in IoT

Published Papers (7 papers)


Research

16 pages, 3255 KiB  
Article
Network Traffic Prediction Incorporating Prior Knowledge for an Intelligent Network
by Chengsheng Pan, Yuyue Wang, Huaifeng Shi, Jianfeng Shi and Ren Cai
Sensors 2022, 22(7), 2674; https://doi.org/10.3390/s22072674 - 30 Mar 2022
Cited by 14 | Viewed by 2583
Abstract
Network traffic prediction is an important tool for the management and control of the IoT, and timely and accurate traffic prediction models play a crucial role in improving IoT service quality. The degree of burstiness in intelligent network traffic is high, which creates problems for prediction. To address the inability of traditional statistical models to effectively extract traffic features from inadequate sample data, as well as the poor interpretability of deep models, this paper proposes a prediction model (fusion prior knowledge network) that incorporates prior knowledge into the neural network training process. The model takes the self-similarity of network traffic as prior knowledge, incorporates it into the gating mechanism of a long short-term memory neural network, and combines a one-dimensional convolutional neural network with an attention mechanism to extract the temporal features of the traffic sequence. The experiments show that the model can better recover the characteristics of the original data. Compared with traditional prediction models, the proposed model better describes the trend of network traffic. In addition, the model produces an interpretable prediction result with an absolute correction factor of 76.4%, which is at least 10% better than that of the traditional statistical model. Full article
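
The abstract does not give the exact formulation of how the self-similarity prior enters the LSTM gates, so the following is a minimal sketch of the general idea only: estimate the Hurst exponent of the traffic series (a standard self-similarity measure) with the aggregated-variance method, then add it as a bias to the forget gate of a hand-rolled LSTM cell. All names (`hurst_aggvar`, `PriorLSTMCell`, the weighting factor `alpha`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def hurst_aggvar(x, block_sizes=(2, 4, 8, 16, 32)):
    """Estimate the Hurst exponent via the aggregated-variance method.

    For a self-similar series, Var(block means) ~ m**(2H - 2),
    so H is recovered from the slope of a log-log fit.
    """
    x = np.asarray(x, dtype=float)
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        if n_blocks < 2:
            continue
        means = x[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(means.var() + 1e-12))
    slope = np.polyfit(log_m, log_var, 1)[0]
    return 1.0 + slope / 2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class PriorLSTMCell:
    """LSTM cell whose forget gate is biased by a self-similarity prior."""

    def __init__(self, n_in, n_hidden, hurst, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.nh = n_hidden
        # Positive when the traffic shows long-range dependence (H > 0.5),
        # pushing the cell to retain more of its state.
        self.prior = alpha * (hurst - 0.5)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        f = sigmoid(f + self.prior)          # prior-biased forget gate
        i, o, g = sigmoid(i), sigmoid(o), np.tanh(g)
        c = f * c + i * g
        h = o * np.tanh(c)
        return h, c

# Toy usage on a synthetic bursty traffic trace.
traffic = np.abs(np.cumsum(np.random.default_rng(1).standard_normal(512)))
H = hurst_aggvar(traffic)
cell = PriorLSTMCell(n_in=1, n_hidden=8, hurst=H)
h, c = np.zeros(8), np.zeros(8)
for v in traffic[:50]:
    h, c = cell.step(np.array([v]), h, c)
```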

18 pages, 1324 KiB  
Article
Greedy Firefly Algorithm for Optimizing Job Scheduling in IoT Grid Computing
by Adil Yousif, Samar M. Alqhtani, Mohammed Bakri Bashir, Awad Ali, Rafik Hamza, Alzubair Hassan and Tawfeeg Mohmmed Tawfeeg
Sensors 2022, 22(3), 850; https://doi.org/10.3390/s22030850 - 23 Jan 2022
Cited by 13 | Viewed by 3382
Abstract
The Internet of Things (IoT) is defined as interconnected digital and mechanical devices with intelligent and interactive data transmission features over a defined network. The ability of the IoT to collect, analyze and mine data into information and knowledge motivates the integration of IoT with grid and cloud computing. New job scheduling techniques are crucial for the effective integration and management of IoT with grid computing as they provide optimal computational solutions. The computational grid is a modern technology that enables distributed computing to take advantage of an organization's resources in order to handle complex computational problems. However, the scheduling process is considered an NP-hard problem due to the heterogeneity of resources and management systems in the IoT grid. This paper proposes a Greedy Firefly Algorithm (GFA) for job scheduling in the grid environment. In the proposed greedy firefly algorithm, a greedy method is utilized as a local search mechanism to enhance the rate of convergence and the efficiency of the schedules produced by the standard firefly algorithm. Several experiments were conducted using the GridSim toolkit to evaluate the proposed greedy firefly algorithm's performance. The study used real grid computing workload traces of several sizes, starting with lightweight traces of only 500 jobs, then typical traces of 3000 to 7000 jobs, and finally heavy loads of 8000 to 10,000 jobs. The experimental results revealed that the greedy firefly algorithm can significantly reduce the makespan and execution times of the IoT grid scheduling process compared to the other evaluated scheduling methods. Furthermore, the proposed greedy firefly algorithm converges faster on large search spaces, making it suitable for large-scale IoT grid environments. Full article
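
The paper's exact GFA operators are not given in the abstract; the sketch below only illustrates the general recipe it describes: a standard firefly algorithm over random-key encodings of job-to-machine assignments, followed by a greedy local-search step that moves a job off the busiest machine whenever that shortens the makespan. The decoding scheme and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N_JOBS, N_MACHINES, N_FIREFLIES, N_ITER = 30, 5, 20, 100
proc_time = rng.uniform(1.0, 10.0, N_JOBS)          # job processing times

def decode(pos):
    """Random-key decoding: each key in [0, 1) maps to a machine index."""
    return np.floor(np.clip(pos, 0, 0.9999) * N_MACHINES).astype(int)

def makespan(assign):
    loads = np.zeros(N_MACHINES)
    np.add.at(loads, assign, proc_time)
    return loads.max()

def greedy_local_search(assign):
    """Greedily move jobs off the busiest machine while it helps."""
    best, best_ms = assign.copy(), makespan(assign)
    loads = np.zeros(N_MACHINES)
    np.add.at(loads, best, proc_time)
    busiest, idlest = loads.argmax(), loads.argmin()
    for j in np.where(best == busiest)[0]:
        trial = best.copy()
        trial[j] = idlest
        if makespan(trial) < best_ms:
            best, best_ms = trial, makespan(trial)
    return best

# Firefly positions are vectors of random keys in [0, 1).
pos = rng.random((N_FIREFLIES, N_JOBS))
beta0, gamma, alpha = 1.0, 1.0, 0.05

for _ in range(N_ITER):
    fitness = np.array([makespan(decode(p)) for p in pos])
    for i in range(N_FIREFLIES):
        for j in range(N_FIREFLIES):
            if fitness[j] < fitness[i]:               # j is brighter (lower makespan)
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(N_JOBS) - 0.5)
                pos[i] = np.clip(pos[i], 0.0, 1.0)
    # Greedy refinement of the current best schedule, then re-encode as keys.
    best_i = np.argmin([makespan(decode(p)) for p in pos])
    refined = greedy_local_search(decode(pos[best_i]))
    pos[best_i] = (refined + rng.random(N_JOBS)) / N_MACHINES

print("best makespan:", min(makespan(decode(p)) for p in pos))
```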

16 pages, 3106 KiB  
Article
Abnormal Detection of Cash-Out Groups in IoT Based Payment
by Hao Zhou, Ming Zhang, Lei Pang and Jian-Hua Li
Sensors 2021, 21(22), 7507; https://doi.org/10.3390/s21227507 - 12 Nov 2021
Viewed by 1951
Abstract
With the rise of online/mobile transactions, the cost of cash-out has decreased and the cost of detection has increased. In the world of online/mobile payment in the IoT, merchants and credit cards can be applied for and approved online and used in the form of a QR code rather than a physical card or point-of-sale equipment, making it easy for these systems to be controlled by a group of fraudsters. In mainland China, where the credit card transaction fee is, on average, lower than a retail loan rate, the credit card cash-out option is attractive to people seeking funds for an investment or business operation, which, upon investigation, can be deemed unlawful if more than a certain amount is involved. Because cash-out incurs fees for the merchants while bringing money to the credit cards' owners, it is difficult to confirm, as nobody will declare or admit it. Furthermore, it is more difficult to detect cash-out groups than individuals, because cash-out groups are more hidden, which leads to bigger transaction amounts. We propose a new method for the detection of cash-out groups. First, the seed cards are mined and the seed cards' diffusion is then performed through a local graph clustering algorithm (Approximate PageRank, APR). Second, a merchant association network in the IoT is constructed based on the suspicious cards, using a graph embedding algorithm (Node2Vec). Third, we use a clustering algorithm (DBSCAN) to cluster the nodes in the Euclidean space, which divides the merchants into groups. Finally, we design a method to classify the severity of the groups to facilitate the subsequent risk investigation. The proposed method covers 145 merchants from 195 known risky merchants in groups that acquire cash-out from four banks, which shows that this method can identify most (74.4%) cash-out groups. In addition, the proposed method identifies a further 178 cash-out merchants in the group within the same four acquirers, resulting in a total of 30,586 merchants. The results and framework have already been adopted and absorbed into the design of a cash-out group detection system in the IoT by the Chinese payment processor. Full article
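
A highly simplified sketch of the three-stage pipeline the abstract describes (seed diffusion, merchant embedding, density-based grouping), using a personalized PageRank from networkx in place of the paper's Approximate PageRank and toy merchant features in place of Node2Vec embeddings (an external Node2Vec implementation would supply those). The graph, seed cards, and thresholds are illustrative assumptions, not data from the paper.

```python
import networkx as nx
import numpy as np
from sklearn.cluster import DBSCAN

# Toy card-merchant transaction graph: edges connect cards to merchants.
G = nx.Graph()
edges = [("card1", "m1"), ("card1", "m2"), ("card2", "m1"), ("card2", "m3"),
         ("card3", "m4"), ("card4", "m4"), ("card4", "m5"), ("card5", "m5")]
G.add_edges_from(edges)

# Stage 1: diffuse from known suspicious seed cards with personalized PageRank
# (a stand-in for the paper's Approximate PageRank local clustering).
seeds = {"card1": 1.0, "card2": 1.0}
ppr = nx.pagerank(G, alpha=0.85, personalization=seeds)
suspicious = {n for n, score in ppr.items() if score > 0.08}

# Stage 2: embed merchants touched by suspicious cards. The paper uses
# Node2Vec on a merchant-association network; toy 2-D features
# (suspicious-card degree, diffused PPR mass) keep this sketch self-contained.
merchants = [n for n in G if n.startswith("m")]
feats = np.array([
    [sum(1 for nb in G[m] if nb in suspicious), ppr[m]]
    for m in merchants
])

# Stage 3: DBSCAN groups merchants that sit close together in feature space.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(feats)
for m, lab in zip(merchants, labels):
    print(m, "group", lab)
```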

16 pages, 15650 KiB  
Article
Water Quality Prediction Method Based on Multi-Source Transfer Learning for Water Environmental IoT System
by Jian Zhou, Jian Wang, Yang Chen, Xin Li and Yong Xie
Sensors 2021, 21(21), 7271; https://doi.org/10.3390/s21217271 - 1 Nov 2021
Cited by 8 | Viewed by 1955
Abstract
A water environmental Internet of Things (IoT) system, which is composed of multiple monitoring points equipped with various water quality IoT devices, makes accurate water quality prediction possible. In the same water area, water flows and exchanges between multiple monitoring points, resulting in an adjacency effect in the water quality information. However, traditional water quality prediction methods only use the water quality information of one monitoring point, ignoring the information of nearby monitoring points. In this paper, we propose a water quality prediction method based on multi-source transfer learning for a water environmental IoT system, in order to effectively use the water quality information of nearby monitoring points to improve the prediction accuracy. First, a water quality prediction framework based on multi-source transfer learning is constructed. Specifically, the common features in the water quality samples of multiple nearby monitoring points and the target monitoring point are extracted and then aligned. According to the aligned features of the water quality samples, water quality prediction models based on an echo state network are established at multiple nearby monitoring points with distributed computing, and the prediction results of the distributed water quality prediction models are then integrated. Second, the prediction parameters of the multi-source transfer learning are optimized. Specifically, the population deviation is back-propagated over multiple iterations, reducing the feature alignment bias and the model alignment bias to improve the prediction accuracy. Finally, the proposed method is applied to an actual water quality dataset from Hong Kong. The experimental results demonstrate that the proposed method can make full use of the water quality information of multiple nearby monitoring points to train several water quality prediction models and reduce the prediction bias. Full article
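
A minimal sketch of the echo-state-network building block the abstract mentions, with one reservoir trained per hypothetical nearby monitoring point and the predictions averaged as a rough stand-in for the paper's integration of distributed models; the feature alignment and bias-correction steps are omitted, and all dimensions and data are illustrative.

```python
import numpy as np

class ESN:
    """Tiny echo state network: random reservoir + ridge-regression readout."""

    def __init__(self, n_in=1, n_res=100, rho=0.9, ridge=1e-4, seed=0):
        rng = np.random.default_rng(seed)
        self.Win = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (rho / max(abs(np.linalg.eigvals(W))))   # spectral radius rho
        self.ridge, self.n_res = ridge, n_res

    def _states(self, u):
        x, states = np.zeros(self.n_res), []
        for ut in u:
            x = np.tanh(self.Win @ np.atleast_1d(ut) + self.W @ x)
            states.append(x.copy())
        return np.array(states)

    def fit(self, u, y):
        X = self._states(u)
        self.Wout = np.linalg.solve(
            X.T @ X + self.ridge * np.eye(self.n_res), X.T @ y)
        return self

    def predict(self, u):
        return self._states(u) @ self.Wout

# One-step-ahead prediction of a toy water-quality signal at the target point,
# integrating models trained on several hypothetical nearby monitoring points.
rng = np.random.default_rng(1)
t = np.arange(400)
sources = [np.sin(0.05 * t + phase) + 0.1 * rng.standard_normal(len(t))
           for phase in (0.0, 0.3, 0.6)]
target = np.sin(0.05 * t + 0.1)

models = [ESN(seed=k).fit(s[:-1], s[1:]) for k, s in enumerate(sources)]
pred = np.mean([m.predict(target[:-1]) for m in models], axis=0)
print("RMSE:", np.sqrt(np.mean((pred - target[1:]) ** 2)))
```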

15 pages, 1026 KiB  
Article
Synchronized Data Collection for Human Group Recognition
by Weiping Zhu, Lin Xu, Yijie Tang and Rong Xie
Sensors 2021, 21(21), 7094; https://doi.org/10.3390/s21217094 - 26 Oct 2021
Cited by 1 | Viewed by 1279
Abstract
It is commonplace for people to perform various kinds of activities in groups. The recognition of human groups is of importance in many applications, including crowd evacuation, teamwork coordination, and advertising. Existing group recognition approaches require snapshots of human trajectories, which is often impossible in reality due to differing data collection start times and frequencies and the inherent time deviations of devices. This study proposes an approach to synchronize people's data for group recognition. All people's trajectory data are aligned by interpolation, and the optimal interpolation points are computed based on our proposed error function. Moreover, the time deviations among devices are estimated and eliminated by message passing. A real-life dataset is used to validate the effectiveness of the proposed approach. The results show that 97.7% accuracy of group recognition can be achieved. The proposed approach to dealing with time deviations was also shown to perform better than existing approaches. Full article
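
The abstract's core idea, aligning trajectories recorded with different start times and sampling rates onto a common clock by interpolation, can be illustrated in a few lines of NumPy; the error-function-based choice of interpolation points and the message-passing estimation of clock deviations are the paper's contribution and are not reproduced here. The data and the fixed clock offset are illustrative assumptions.

```python
import numpy as np

# Two devices record 1-D positions with different start times, rates,
# and an unknown clock offset on device B (fixed here for illustration).
true_offset_b = 0.8                       # seconds that B's reported clock lags
t_a = np.arange(0.0, 20.0, 0.5)           # device A: 2 Hz from t = 0
t_b = np.arange(1.3, 20.0, 0.7)           # device B: ~1.4 Hz from t = 1.3
pos_a = np.sin(0.3 * t_a)
pos_b = np.sin(0.3 * (t_b - true_offset_b))   # positions at true time t_b - offset

def alignment_error(offset):
    """Mean squared mismatch between A and B after shifting B's timeline."""
    grid = np.arange(2.0, 18.0, 0.5)
    a = np.interp(grid, t_a, pos_a)
    b = np.interp(grid, t_b - offset, pos_b)
    return np.mean((a - b) ** 2)

# Estimate B's offset by a grid search over candidate shifts
# (a crude stand-in for the paper's message-passing deviation estimation).
offsets = np.arange(-2.0, 2.0, 0.05)
est_offset = offsets[np.argmin([alignment_error(o) for o in offsets])]

# Synchronize: interpolate both trajectories onto one common time grid.
grid = np.arange(2.0, 18.0, 0.5)
sync_a = np.interp(grid, t_a, pos_a)
sync_b = np.interp(grid, t_b - est_offset, pos_b)
print("estimated offset:", round(float(est_offset), 2),
      "mean residual:", round(float(np.mean(np.abs(sync_a - sync_b))), 4))
```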

12 pages, 2007 KiB  
Communication
Three-Dimensional Microscopic Image Reconstruction Based on Structured Light Illumination
by Taichu Shi, Yang Qi, Cheng Zhu, Ying Tang and Ben Wu
Sensors 2021, 21(18), 6097; https://doi.org/10.3390/s21186097 - 11 Sep 2021
Cited by 5 | Viewed by 2605
Abstract
In this paper, we propose and experimentally demonstrate a three-dimensional (3D) microscopic system that reconstructs a 3D image based on structured light illumination. The spatial pattern of the structured light changes according to the profile of the object, and by measuring the change, a 3D image of the object is reconstructed. The structured light is generated with a digital micro-mirror device (DMD), which changes the structured light pattern at a kHz rate and enables the system to record the 3D information in real time. The working distance of the imaging system is 9 cm at a resolution of 20 μm. The resolution, working distance, and real-time 3D imaging enable the system to be applied to bridge and road crack examination and structural fault detection in transportation infrastructure. Full article
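
The abstract describes recovering the object profile from changes in the projected pattern; one common way to do this (not necessarily the authors' exact method) is phase-shifting fringe profilometry, sketched below: several phase-shifted sinusoidal fringes are "projected" onto a synthetic object, the wrapped phase is recovered per pixel, unwrapped, and converted to height with a calibration factor. The fringe frequency, calibration constant, and synthetic bump are illustrative assumptions.

```python
import numpy as np

H, W = 64, 64
f = 8                                          # fringe frequency across the image
x = np.arange(W) / W
height = 0.5 * np.exp(-((np.arange(W) - 32) ** 2) / 100.0)   # synthetic bump profile
height = np.tile(height, (H, 1))

# Simulate N phase-shifted fringe images; the object's height modulates the phase.
N = 4
shifts = 2 * np.pi * np.arange(N) / N
calib = 2.0                                    # phase-to-height calibration factor
images = [1 + np.cos(2 * np.pi * f * x + calib * height + d) for d in shifts]

# Recover the wrapped phase per pixel with the N-step phase-shift formula.
num = sum(img * np.sin(d) for img, d in zip(images, shifts))
den = sum(img * np.cos(d) for img, d in zip(images, shifts))
wrapped = np.arctan2(-num, den)

# Unwrap along rows, remove the carrier fringe phase, and convert to height.
phase = np.unwrap(wrapped, axis=1)
recovered = (phase - 2 * np.pi * f * x) / calib
print("max height error:", float(np.abs(recovered - height).max()))
```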

18 pages, 410 KiB  
Article
Computational Offloading in Mobile Edge with Comprehensive and Energy Efficient Cost Function: A Deep Learning Approach
by Ziaul Haq Abbas, Zaiwar Ali, Ghulam Abbas, Lei Jiao, Muhammad Bilal, Doug-Young Suh and Md. Jalil Piran
Sensors 2021, 21(10), 3523; https://doi.org/10.3390/s21103523 - 19 May 2021
Cited by 11 | Viewed by 3177
Abstract
In mobile edge computing (MEC), partial computational offloading can be intelligently investigated to reduce the energy consumption and service delay of user equipment (UE) by dividing a single task into different components. Some of the components execute locally on the UE while the remaining ones are offloaded to a mobile edge server (MES). In this paper, we investigate the partial offloading technique in MEC using a supervised deep learning approach. The proposed technique, a comprehensive and energy-efficient deep-learning-based offloading technique (CEDOT), intelligently selects the partial offloading policy as well as the size of each component of a task to reduce the service delay and energy consumption of UEs. We use deep learning to find, simultaneously, the best partitioning of a single task together with the best offloading policy. The deep neural network (DNN) is trained on a comprehensive dataset, generated from our mathematical model, which reduces the time delay and energy consumption of the overall process. Although the complexity and computational load of the mathematical model in the algorithm are high, the trained DNN minimizes this complexity and computation in the proposed work. We propose a comprehensive cost function, which depends on various delays, energy consumption, radio resources, and computation resources. Furthermore, the cost function also accounts for the energy consumption and delay caused by the task-division process in partial offloading. No existing work in the literature considers task partitioning together with the computational offloading policy, and hence the time and energy consumption of the task-division process are ignored in their cost functions. The proposed work considers all the important parameters in the cost function and generates a comprehensive training dataset at high computational cost. Once the training dataset is obtained, the complexity is minimized through the trained DNN, which enables faster decision making with low energy consumption. Simulation results demonstrate the superior performance of the proposed technique, with high accuracy of the DNN in deciding the offloading policy and the partitioning of a task with minimum delay and energy consumption for the UE. More than 70% accuracy of the trained DNN is achieved on a comprehensive training dataset. The simulation results also show that the accuracy of the DNN remains constant when the UEs are moving, which means that the decisions on the offloading policy and partitioning are not affected by the mobility of UEs. Full article
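
The abstract's central object is a cost function that combines delay and energy for the local and offloaded portions of a task. Below is a stripped-down version of such a cost and an exhaustive search over the split ratio, roughly the kind of labels on which a DNN like the paper's could be trained; all parameter values and the exact cost form are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustrative device, channel, and server parameters.
CPU_LOCAL = 1e9          # UE CPU cycles/s
CPU_EDGE = 10e9          # MES CPU cycles/s
POWER_LOCAL = 0.9        # W while computing locally
POWER_TX = 0.3           # W while transmitting
RATE_UP = 5e6            # uplink bits/s
CYCLES_PER_BIT = 1000    # computation density of the task
W_DELAY, W_ENERGY = 0.5, 0.5   # weights of the two cost terms

def cost(task_bits, split):
    """Weighted delay + energy for offloading a `split` fraction of the task.

    The local and offloaded parts run in parallel, so delay is their maximum;
    the UE pays energy for local computing and for the uplink transmission.
    """
    local_bits = (1.0 - split) * task_bits
    off_bits = split * task_bits
    t_local = local_bits * CYCLES_PER_BIT / CPU_LOCAL
    t_off = off_bits / RATE_UP + off_bits * CYCLES_PER_BIT / CPU_EDGE
    delay = max(t_local, t_off)
    energy = POWER_LOCAL * t_local + POWER_TX * (off_bits / RATE_UP)
    return W_DELAY * delay + W_ENERGY * energy

# Exhaustive search over split ratios; a trained DNN would instead map
# (task size, channel state, ...) directly to the best split and policy.
task_bits = 2e6
splits = np.linspace(0.0, 1.0, 101)
best = min(splits, key=lambda s: cost(task_bits, s))
print(f"best split: offload {best:.2f} of the task, cost {cost(task_bits, best):.4f}")
```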
