Topic Editors

Prof. Dr. Yoshiyasu Takefuji: Faculty of Data Science, Musashino University, Tokyo, Japan; Professor Emeritus, Faculty of Environmental Information, Keio University, Tokyo, Japan
Prof. Dr. Subhas Mukhopadhyay: School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
Prof. Dr. Enrico Vezzetti: Department of Management and Production Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy

Internet of Things: Latest Advances

Abstract submission deadline
closed (31 December 2022)
Manuscript submission deadline
closed (28 February 2023)
Viewed by
69390

Topic Information

Dear Colleagues,

The Internet of Things (IoT) is one of the most prominent tech trends to have emerged in recent years. While the word “internet” initially referred to the wide-scale networking of computers, today devices of every size and shape, from cars to kitchen appliances to industrial machinery, are connected and share information digitally on a global scale.

The purpose of this Topic is to bring together state-of-the-art achievements on IoT and its applications. It discusses all aspects of emerging IoT sciences and technologies and serves as a platform for colleagues to exchange novel ideas in this area.

IoT devices are used in many applications in non-harsh environments. Initially, there were no IoT devices for harsh environments other than highly protected, expensive ones. With the advancement of AI, however, target parameters can now be predicted even without deploying IoT devices in harsh environments. In addition to the critical boundary between IoT deployment in non-harsh and harsh environments, this Topic is also interested in how engineers and scientists can cope with and overcome harsh conditions by protecting fragile IoT devices, and in other technologies, including AI, that can make predictions without IoT devices.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). We encourage authors to submit original research articles, case studies, reviews, theoretical and critical perspectives, and viewpoint articles on (but not limited to) the following topics:

  • artificial intelligence;
  • Internet of Things;
  • vulnerable sensors;
  • AI prediction;
  • harsh environments;
  • non-harsh environments…
Prof. Dr. Yoshiyasu Takefuji
Prof. Dr. Subhas Mukhopadhyay
Prof. Dr. Enrico Vezzetti
Topic Editors

Keywords

  • artificial intelligence
  • IoT
  • internet of things
  • vulnerable sensors
  • AI prediction
  • harsh environments
  • non-harsh environment
  • AIoT
  • IIoT
  • smart sensing
  • smart sensors
  • industrial internet of things
  • artificial intelligence of things
  • internet of medical things
  • IoMT
  • sensing
  • sensors

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Journal of Sensor and Actuator Networks (jsan) | 3.5 | 7.6 | 2012 | 20.4 Days | CHF 2000
Sensors (sensors) | 3.9 | 6.8 | 2001 | 17 Days | CHF 2600
Applied Sciences (applsci) | 2.7 | 4.5 | 2011 | 16.9 Days | CHF 2400
Sustainability (sustainability) | 3.9 | 5.8 | 2009 | 18.8 Days | CHF 2400
Electronics (electronics) | 2.9 | 4.7 | 2012 | 15.6 Days | CHF 2400

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea from being stolen with this time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (28 papers)

21 pages, 5839 KiB  
Article
Multi-Armed Bandit Algorithm Policy for LoRa Network Performance Enhancement
by Anjali R. Askhedkar and Bharat S. Chaudhari
J. Sens. Actuator Netw. 2023, 12(3), 38; https://0-doi-org.brum.beds.ac.uk/10.3390/jsan12030038 - 04 May 2023
Cited by 2 | Viewed by 1669
Abstract
Low-power wide-area networks (LPWANs) constitute a variety of modern-day Internet of Things (IoT) applications. Long range (LoRa) is a promising LPWAN technology with its long-range and low-power benefits. Performance enhancement of LoRa networks is one of the crucial challenges to meet application requirements, and it primarily depends on the optimal selection of transmission parameters. Reinforcement learning-based multi-armed bandit (MAB) is a prominent approach for optimizing the LoRa parameters and network performance. In this work, we propose a new discounted upper confidence bound (DUCB) MAB to maximize energy efficiency and improve the overall performance of the LoRa network. We designed novel discount and exploration bonus functions to maximize the policy rewards to increase the number of successful transmissions. The results show that the proposed discount and exploration functions give better mean rewards irrespective of the number of trials, which has significant importance for LoRa networks. The designed policy outperforms other policies reported in the literature, with lower time complexity and comparable mean rewards, improving the mean rewards by at least 8%. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
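The discounted-UCB idea behind this paper can be sketched briefly. This is a generic illustration, not the authors' policy: their discount and exploration-bonus functions are custom, whereas this sketch uses a standard geometric discount `gamma` and a log-based bonus, and the arm count, `gamma`, `c`, and the success probabilities in the toy run are all illustrative.

```python
import math
import random

class DiscountedUCB:
    """Discounted UCB policy for non-stationary bandits: past observations
    are down-weighted by gamma, so the policy can track drifting reward
    distributions (e.g. a LoRa link whose success probability changes).
    Here an arm is one transmission-parameter choice and the reward is 1
    for a successful (acknowledged) transmission, 0 otherwise."""

    def __init__(self, n_arms, gamma=0.95, c=1.0):
        self.n_arms = n_arms
        self.gamma = gamma                   # geometric discount on history
        self.c = c                           # exploration-bonus weight
        self.disc_reward = [0.0] * n_arms    # discounted cumulative reward
        self.disc_count = [0.0] * n_arms     # discounted pull count

    def select(self):
        for a in range(self.n_arms):         # play every arm once first
            if self.disc_count[a] == 0.0:
                return a
        total = sum(self.disc_count)
        return max(
            range(self.n_arms),
            key=lambda a: self.disc_reward[a] / self.disc_count[a]
            + self.c * math.sqrt(math.log(total) / self.disc_count[a]),
        )

    def update(self, arm, reward):
        for a in range(self.n_arms):         # discount all history, then add
            self.disc_reward[a] *= self.gamma
            self.disc_count[a] *= self.gamma
        self.disc_reward[arm] += reward
        self.disc_count[arm] += 1.0

# Toy run: arm 1 succeeds far more often, so the policy should favour it.
random.seed(0)
success_prob = [0.3, 0.8]
policy, pulls = DiscountedUCB(2), [0, 0]
for _ in range(2000):
    arm = policy.select()
    pulls[arm] += 1
    policy.update(arm, 1.0 if random.random() < success_prob[arm] else 0.0)
```

Because the discounted pull count of a neglected arm decays, its exploration bonus grows again over time, which is what lets the policy re-check arms whose reward may have drifted.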

21 pages, 512 KiB  
Article
Sum Rate Optimization for Multiple Access in Multi-FD-UAV-Assisted NOMA-Enabled Backscatter Communication Network
by Siqiang Wang, Jing Guo, Hanxiao Yu, Han Zhang, Yuping Gong and Zesong Fei
Electronics 2023, 12(8), 1873; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics12081873 - 15 Apr 2023
Viewed by 1021
Abstract
With the rapid development of the Internet of Things (IoT) network, research on low-power and energy-saving devices has attracted extensive attention from both academia and the industry. Although the backscatter devices (BDs) that utilize the environmental power to activate circuits and transmit signals are a promising technology to be deployed as IoT nodes, it is challenging to design a flexible data backhaul scheme for massive BDs. Therefore, in this paper, we consider an unmanned-aerial-vehicle (UAV)-assisted backscatter communication network, where BDs are served by multiple full-duplex (FD) UAVs with the non-orthogonal multiple access (NOMA) schemes and modulate their signals on the downlink signals, which are generated by the UAVs to serve the coexisting regular user equipments (UEs). To maximize the sum rate of the considered system, we construct an optimization problem to optimize the reflection coefficient of BDs, the downlink and the backhaul transmission power, and the trajectory of UAVs jointly. Since the formulated problem is a non-convex optimization problem and is difficult to solve directly, we decouple the original problem into three sub-problems and solve them with the successive convex approximation (SCA) method, thereby addressing the original problem by a block coordinate descent (BCD)-based iterative algorithm. The simulation results show that, compared with the benchmark schemes, the proposed algorithm can obtain the highest system sum rate and utilize limited time-frequency resources more efficiently. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

17 pages, 4456 KiB  
Article
Deadline-Aware Scheduling for Transmitted RSU Packets in Cooperative Vehicle-Infrastructure Systems
by Beipo Su, Yongfeng Ju and Liang Dai
Appl. Sci. 2023, 13(7), 4329; https://0-doi-org.brum.beds.ac.uk/10.3390/app13074329 - 29 Mar 2023
Cited by 2 | Viewed by 819
Abstract
In a Cooperative Vehicle Infrastructure System (CVIS), the roadside unit (RSU) obtains many kinds of monitoring data through observation equipment carried by the RSU. The monitoring data from RSUs are transmitted to an RSU that is connected to the backbone network using the “store–carry–forward” scheme through the mobile vehicle. The monitoring data obtained by RSUs are timely, and different types of monitoring data have corresponding timelines. Reducing end-to-end delays to ensure more packets can be transmitted before deadlines is challenging. In this paper, we propose a Distributed Packet Scheduling Scheme for Delay-Packets Queue Length Tradeoff System (DDPS) in CVIS to solve the multi-RSU-distributed packet transmission problem. We also establish the vehicle speed state, vehicle communication quantity prediction, data arrival, and end-to-end delay minimization models. After the optimization model was transformed using Lyapunov optimization theory, the scheduling decision was formulated as a knapsack problem. The simulation results verified that DDPS reduced the end-to-end average delay and ensured the data queue’s stability under packet deadline conditions. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
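The final scheduling step, a knapsack selection over candidate packets, can be sketched as a standard 0/1 knapsack. This is a generic illustration under assumed inputs: how DDPS actually derives each packet's value (from deadline urgency and the Lyapunov queue backlog) is specific to the paper, so the `(size, value)` pairs below are hypothetical.

```python
def select_packets(packets, capacity):
    """0/1 knapsack over candidate packets: `packets` is a list of
    (size, value) pairs where the value might encode deadline urgency plus
    a queue-backlog weight, and `capacity` is what the current vehicle
    contact can carry. Returns the indices of the selected packets."""
    n = len(packets)
    # best[i][c]: max total value using the first i packets under budget c
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i, (size, value) in enumerate(packets, 1):
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]
            if size <= c:
                best[i][c] = max(best[i][c], best[i - 1][c - size] + value)
    # Backtrack to recover the chosen set of packet indices.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if best[i][c] != best[i - 1][c]:
            chosen.append(i - 1)
            c -= packets[i - 1][0]
    return sorted(chosen)
```

For example, with packets [(4, 10), (3, 7), (2, 8), (5, 9)] and capacity 7, the best choice is packets 0 and 2 (total value 18).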

27 pages, 11909 KiB  
Article
A Novel Multi Algorithm Approach to Identify Network Anomalies in the IoT Using Fog Computing and a Model to Distinguish between IoT and Non-IoT Devices
by Rami J. Alzahrani and Ahmed Alzahrani
J. Sens. Actuator Netw. 2023, 12(2), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/jsan12020019 - 28 Feb 2023
Cited by 8 | Viewed by 2306
Abstract
Botnet attacks, such as DDoS, are one of the most common types of attacks in IoT networks. A botnet is a collection of cooperating computing machines or Internet of Things gadgets that criminal users manage remotely. Several strategies have been developed to reduce anomalies in IoT networks, such as DDoS. To increase the accuracy of the anomaly mitigation system and lower the false positive rate (FPR), some schemes use statistical or machine learning methodologies in the anomaly-based intrusion detection system (IDS) to mitigate an attack. Despite the proposed anomaly mitigation techniques, the mitigation of DDoS attacks in IoT networks remains a concern. Because of the similarity between DDoS and normal network flows, the majority of anomaly mitigation methods fail, suffering from problems such as a high FPR, low accuracy, and a low detection rate. Furthermore, the limited resources in IoT devices make it difficult to implement anomaly mitigation techniques. In this paper, an efficient anomaly mitigation system has been developed for the IoT network through the design and implementation of a DDoS attack detection system that uses a statistical method combining three algorithms: exponentially weighted moving average (EWMA), K-nearest neighbors (KNN), and the cumulative sum algorithm (CUSUM). The integration of fog computing with the Internet of Things has created an effective framework for implementing an anomaly mitigation strategy to address security issues such as botnet threats. The proposed module was evaluated using the Bot-IoT dataset. From the results, we conclude that our model has achieved a high accuracy (99.00%) with a low false positive rate (FPR). We have also achieved good results in distinguishing between IoT and non-IoT devices, which will help networking teams make the distinction as well. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
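Two of the three combined algorithms, EWMA and CUSUM, can be sketched together: EWMA tracks a baseline of some flow metric, and a one-sided CUSUM accumulates deviations from that baseline until they cross a threshold. This is illustrative only; the paper's detector also incorporates KNN and is tuned on Bot-IoT, and the parameters `alpha`, `k`, and `h` here are arbitrary.

```python
class EwmaCusumDetector:
    """Flags traffic anomalies by combining an EWMA baseline of a flow
    metric (e.g. packets per second) with a one-sided CUSUM on the
    deviation from that baseline."""

    def __init__(self, alpha=0.2, k=0.5, h=5.0):
        self.alpha = alpha   # EWMA smoothing factor
        self.k = k           # CUSUM slack (tolerated drift per sample)
        self.h = h           # CUSUM decision threshold
        self.ewma = None     # current baseline estimate
        self.cusum = 0.0     # accumulated positive deviation

    def observe(self, x):
        """Score one sample; return True if an anomaly alarm fires."""
        if self.ewma is None:        # first sample just seeds the baseline
            self.ewma = x
            return False
        # CUSUM accumulates only positive deviations beyond the slack k.
        self.cusum = max(0.0, self.cusum + (x - self.ewma) - self.k)
        alarm = self.cusum > self.h
        # Update the baseline after scoring the sample.
        self.ewma = self.alpha * x + (1 - self.alpha) * self.ewma
        return alarm
```

Feeding a steady rate of ~10 packets/s raises no alarm; a sudden jump to 50 packets/s trips the CUSUM almost immediately.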

21 pages, 3657 KiB  
Article
UltrasonicGS: A Highly Robust Gesture and Sign Language Recognition Method Based on Ultrasonic Signals
by Yuejiao Wang, Zhanjun Hao, Xiaochao Dang, Zhenyi Zhang and Mengqiao Li
Sensors 2023, 23(4), 1790; https://0-doi-org.brum.beds.ac.uk/10.3390/s23041790 - 05 Feb 2023
Cited by 1 | Viewed by 1745
Abstract
With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off the spread of the virus. Therefore, contactless gesture recognition becomes an effective means to reduce the risk of contact infection in outbreak prevention and control. However, the recognition of everyday behavioral sign language of a certain population of deaf people presents a challenge to sensing technology. Ubiquitous acoustics offer new ideas on how to perceive everyday behavior. The advantages of a low sampling rate, slow propagation speed, and easy access to the equipment have led to the widespread use of acoustic signal-based gesture recognition sensing technology. Therefore, this paper proposed a contactless gesture and sign language behavior sensing method based on ultrasonic signals—UltrasonicGS. The method used Generative Adversarial Network (GAN)-based data augmentation techniques to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to solve the problem of inconsistent length and difficult alignment of input and output sequences of continuous gestures and sign language gestures, we added the Connectionist Temporal Classification (CTC) algorithm after the CRNN network. Additionally, the architecture can achieve better recognition of sign language behaviors of certain people, filling the gap of acoustic-based perception of Chinese sign language. We have conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. The experimental results showed that UltrasonicGS achieved a combined recognition rate of 98.8% for 15 single gestures and an average correct recognition rate of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. As a result, our proposed method provided a low-cost and highly robust solution for avoiding human-to-human contact. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
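The role CTC plays after the CRNN can be illustrated with its decoding rule: collapse consecutive repeats in the per-frame output, then drop blanks, which resolves the length mismatch between the input frame sequence and the (shorter) gesture or sign label sequence. A minimal sketch of greedy CTC decoding only (the integer labels are arbitrary stand-ins for gesture classes):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame argmax label sequence the CTC way:
    merge consecutive repeats, then remove blank symbols. This maps a
    variable-length frame output to an unsegmented label sequence."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out
```

Note that a blank between two identical labels (as in `[1, 0, 1]`) is what allows genuine repeats to survive the collapse.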

20 pages, 3572 KiB  
Review
Architectural Threats to Security and Privacy: A Challenge for Internet of Things (IoT) Applications
by Yasser Khan, Mazliham Bin Mohd Su’ud, Muhammad Mansoor Alam, Sayed Fayaz Ahmad, Nur Agus Salim and Nasir Khan
Electronics 2023, 12(1), 88; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics12010088 - 26 Dec 2022
Cited by 9 | Viewed by 3074
Abstract
The internet of things (IoT) is one of the growing platforms of the current era that has encircled a large population into its domain, and life appears to be useless without adopting this technology. A significant amount of data is generated from an immense number of smart devices and their allied applications that are constructively utilized to automate our daily life activities. This big data requires fast processing, storage, and safe passage through secure channels to safeguard it from any malicious attacks. In such a situation, security is considered crucial to protect the technological resources from unauthorized access or any interruption to disrupt the seamless and ubiquitous connectivity of the IoT from the perception layer to cloud computers. Motivated by this, this article presents a general overview of the technology and layered architecture of the IoT, followed by critical applications with a particular focus on key features of smart homes, smart agriculture, smart transportation, and smart healthcare. Next, security threats and vulnerabilities, including attacks on each layer of the IoT, are explicitly elaborated. The classification of security challenges such as confidentiality, integrity, privacy, availability, authentication, non-repudiation, and key management is thoroughly reviewed. Finally, future research directions for security concerns are identified and presented. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

24 pages, 7686 KiB  
Article
Wi-GC: A Deep Spatiotemporal Gesture Recognition Method Based on Wi-Fi Signal
by Xiaochao Dang, Yanhong Bai, Zhanjun Hao and Gaoyuan Liu
Appl. Sci. 2022, 12(20), 10425; https://0-doi-org.brum.beds.ac.uk/10.3390/app122010425 - 16 Oct 2022
Cited by 4 | Viewed by 1465
Abstract
Wireless sensing has been increasingly used in smart homes, human–computer interaction and other fields due to its comprehensive coverage, non-contact operation and absence of privacy leakage. However, most existing methods are based on the amplitude or phase of the Wi-Fi signal to recognize gestures, which provides insufficient recognition accuracy. To solve this problem, we have designed a deep spatiotemporal gesture recognition method based on Wi-Fi signals, namely Wi-GC. The gesture-sensitive antennas are selected first and the fixed antennas are denoised and smoothed using a combined filter. The consecutive gestures are then segmented using a time series difference algorithm. The segmented gesture data are fed into our proposed RAGRU model, where BAGRU extracts temporal features of Channel State Information (CSI) sequences and RNet18 extracts spatial features of CSI amplitudes. In addition, to pick out essential gesture features, we introduce an attention mechanism. Finally, the extracted spatial and temporal characteristics are fused and input into softmax for classification. We have extensively verified the Wi-GC method in a natural environment; its average gesture recognition rate is between 92% and 95.6%, demonstrating strong robustness. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

21 pages, 6085 KiB  
Article
A Non-Contact Detection Method for Multi-Person Vital Signs Based on IR-UWB Radar
by Xiaochao Dang, Jinlong Zhang and Zhanjun Hao
Sensors 2022, 22(16), 6116; https://0-doi-org.brum.beds.ac.uk/10.3390/s22166116 - 16 Aug 2022
Cited by 7 | Viewed by 2418
Abstract
With the vigorous development of ubiquitous sensing technology, an increasing number of scholars pay attention to non-contact vital signs (e.g., Respiration Rate (RR) and Heart Rate (HR)) detection for physical health. Since Impulse Radio Ultra-Wide Band (IR-UWB) technology has good characteristics, such as non-invasiveness, high penetration, accurate ranging, low power, and low cost, the technology is well suited to non-contact vital signs detection. Therefore, a non-contact multi-human vital signs detection method based on IR-UWB radar is proposed in this paper. This technique extends vital signs detection from the conventional single target to multiple subjects. We used an optimized algorithm, CIR-SS, based on channel impulse response (CIR) smoothing splines to solve the problem that existing algorithms cannot effectively separate and extract respiratory and heartbeat signals. The effectiveness of the algorithm was analyzed using the Bland–Altman consistency analysis statistical method; the algorithm’s respiratory and heart rate estimation errors were 5.14% and 4.87%, respectively, indicating high accuracy and precision. The experimental results showed that our proposed method provides a highly accurate, easy-to-implement, and highly robust solution in the field of non-contact multi-person vital signs detection. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

22 pages, 10203 KiB  
Article
Multi-Sensor Data Fusion with a Reconfigurable Module and Its Application to Unmanned Storage Boxes
by Sung-Kyu Lee, Seung-Hyun Hong, Won-Ho Jun and Youn-Sik Hong
Sensors 2022, 22(14), 5388; https://0-doi-org.brum.beds.ac.uk/10.3390/s22145388 - 19 Jul 2022
Cited by 2 | Viewed by 2474
Abstract
We present a multi-sensor data fusion model based on a reconfigurable module (RM) with three fusion layers. In the data layer, raw data are refined with respect to the sensor characteristics and then converted into logical values. In the feature layer, a fusion tree is configured, and the values of the intermediate nodes are calculated by applying predefined logical operations, which are adjustable. In the decision layer, a final decision is made by computing the value of the root according to predetermined equations. In this way, with given threshold values or sensor characteristics for data refinement and logic expressions for feature extraction and decision making, we reconstruct an RM that performs multi-sensor fusion and is adaptable for a dedicated application. We attempted to verify its feasibility by applying the proposed RM to an actual application. Considering the spread of the COVID-19 pandemic, an unmanned storage box was selected as our application target. Four types of sensors were used to determine the state of the door and the status of the existence of an item inside it. We implemented a prototype system that monitored the unmanned storage boxes by configuring the RM according to the proposed method. It was confirmed that a system built with only low-cost sensors can identify the states more reliably through multi-sensor data fusion. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
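The three fusion layers described above can be sketched as plain functions, where swapping the threshold table and the logic expressions is what "reconfigures" the module. The sensor names, thresholds, and rules below are hypothetical stand-ins, not the paper's actual configuration:

```python
def data_layer(raw, thresholds):
    """Data layer: refine raw readings into logical values using
    per-sensor thresholds (the thresholds are adjustable)."""
    return {name: raw[name] >= thresholds[name] for name in thresholds}

def feature_layer(logic, ops):
    """Feature layer: compute intermediate-node values by applying the
    configured logical operations; swapping `ops` reconfigures the tree."""
    return {node: op(logic) for node, op in ops.items()}

def decision_layer(features, rule):
    """Decision layer: final state computed at the root."""
    return rule(features)

# Hypothetical configuration for an unmanned storage box with a door
# sensor, a weight sensor, and an IR beam sensor (illustrative values).
thresholds = {"door": 0.5, "weight": 0.1, "ir_beam": 0.5}
ops = {
    "door_closed": lambda v: not v["door"],
    "item_present": lambda v: v["weight"] or v["ir_beam"],
}
rule = lambda f: "OCCUPIED" if f["door_closed"] and f["item_present"] else "EMPTY"

# Door closed, weight and beam both tripped -> the box holds an item.
state = decision_layer(
    feature_layer(data_layer({"door": 0.0, "weight": 0.8, "ir_beam": 1.0},
                             thresholds), ops), rule)
```

Combining two cheap presence sensors (`weight` OR `ir_beam`) at the feature layer is one way low-cost sensors can yield a more reliable state than any single sensor alone.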

19 pages, 1491 KiB  
Article
A Communication Framework for Image Transmission through LPWAN Technology
by Fabián Chaparro B., Manuel Pérez and Diego Mendez
Electronics 2022, 11(11), 1764; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11111764 - 02 Jun 2022
Cited by 3 | Viewed by 3078
Abstract
Analyzing the conditions of use and selecting which technology is more efficient to apply is required when transmitting information through wireless networks. The Internet of Things (IoT) has gained traction in industry and academia as a paradigm in which information and communication technologies merge to deliver unique solutions by detecting, actuating, calculating, and sharing massive volumes of data via embedded systems. In this scenario, Low-Power Wide-Area Networks (LPWAN) appear to be an attractive solution for node connectivity. Typical IoT solutions demand flexible restrictions for wireless communication networks in terms of data rates and latency in exchange for having larger communication ranges and low energy consumption. Nonetheless, as the amount of data and the data rates demanded by particular applications increase, such as image transmissions, IoT network connectivity deteriorates. This paper proposes a communication architecture for image transmission across LPWAN networks utilizing LoRa modulation. The framework combines image processing techniques (classification, compressive sensing (CS), and reconstruction) with an investigation of LoRa modulation parameters using a Software-Defined Radio (SDR) environment. The results show that it is possible to transmit a 128×128-pixel image with four packets and one frequency channel in 2.51 s. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

14 pages, 1965 KiB  
Article
Remote Laboratory Offered as Hardware-as-a-Service Infrastructure
by Wojciech Domski
Electronics 2022, 11(10), 1568; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11101568 - 13 May 2022
Cited by 2 | Viewed by 1990
Abstract
This paper presents a solution for remote classes where hardware is offered as a service. The infrastructure was based on Raspberry Pi mini computers to which a set of different development boards were connected. The proposed software architecture allows students to connect to remote resources and interact with them. Moreover, services monitoring the status of remote resources were introduced to facilitate software development and the learning process. Furthermore, live video feedback is available to visually monitor the operation of the resources. Finally, a debugging server was deployed, allowing a remote debugging session to be established between a user’s PC and the dev board on the server premises. The solution offers a comprehensive remote service including user management. Safety risks of the Internet-exposed infrastructure and safety precautions are discussed. The presented RemoteLab system allows students of WUST to gain knowledge, practise and complete exercises within the scope of academic courses such as robot controllers and advanced robot control. Thanks to advances in remote education and the tools utilized, RemoteLab was designed and deployed, allowing stationary classes to be substituted with remote ones while maintaining a high level of knowledge transfer. To date, the system has been utilized by over 100 students who could complete exercises and prepare for classes thanks to 24 h system availability. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)

14 pages, 2512 KiB  
Article
Trusted Blockchain-Driven IoT Security Consensus Mechanism
by Chuansheng Wang, Xuecheng Tan, Cuiyou Yao, Feng Gu, Fulei Shi and Haiqing Cao
Sustainability 2022, 14(9), 5200; https://0-doi-org.brum.beds.ac.uk/10.3390/su14095200 - 26 Apr 2022
Cited by 5 | Viewed by 1759
Abstract
Single point of failure and node attack tend to cause instability in the centralized Internet of Things (IoT). Combined with blockchain technology, the deficiencies of traditional IoT architecture can be effectively alleviated. However, existing blockchain consensus mechanisms still suffer from forking and wasted computing power. Therefore, this paper proposes a new framework based on a two-stage credit calculation to handle these problems. Nodes are selected through the model and then compete on the chain according to their behavior in participating in block creation. A comparative simulation with the existing consensus mechanism proof of work (PoW) is presented. The results show that the proposed framework can quickly eliminate malicious nodes, maintain the overall security of the blockchain and reduce consensus delay. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
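The credit mechanism described above can be caricatured in a few lines: well-behaved nodes gain credit slowly, misbehaviour is punished heavily, and only the highest-credit nodes are selected to compete for block creation. This is a loose sketch of the general idea, not the paper's two-stage model; the reward and penalty constants are made up.

```python
def update_credit(credit, behaved_well, reward=1.0, penalty=5.0, floor=0.0):
    """Asymmetric credit update: slow gain for good behaviour, steep loss
    for bad behaviour, never dropping below `floor`."""
    delta = reward if behaved_well else -penalty
    return max(floor, credit + delta)

def select_validators(credits, k):
    """Pick the k highest-credit node ids to compete for the next block."""
    return sorted(credits, key=credits.get, reverse=True)[:k]

# One misbehaviour costs more than several good rounds earned, so a
# malicious node falls out of the validator set quickly.
credits = {"n1": 10.0, "n2": 10.0, "n3": 10.0}
credits["n1"] = update_credit(credits["n1"], behaved_well=True)
credits["n3"] = update_credit(credits["n3"], behaved_well=False)
```

The asymmetry (small `reward`, large `penalty`) is what makes malicious nodes cheap to eliminate without letting them rebuild credit quickly.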

12 pages, 2238 KiB  
Article
The Adaptation of Internet of Things in the Indian Insurance Industry—Reviewing the Challenges and Potential Solutions
by Maryam Saeed, Noman Arshed and Haikuan Zhang
Electronics 2022, 11(3), 419; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11030419 - 29 Jan 2022
Cited by 3 | Viewed by 2924
Abstract
The concept of insurance dates back several centuries before Christ: Chinese and Babylonian traders practiced moving or dispensing risks in the second and third millennia BC. Nowadays, insurance is the backbone of the economy. The recent introduction of big data, IoT, and other forms of InsurTech led to the fourth industrial revolution in insurance in the developed world. The industry is looking to improve the ergonomics of remote sensing technology to improve its acceptability to clients. The adaptation of IoT in developing economies may provide a solution for increasing insurance penetration. This study explores the challenges and solutions in adopting IoT to increase insurance penetration in India. It applied a systematic literature review (SLR) to extract the themes/variables related to challenges and solutions in adopting IoT in India’s insurance sector. Several keywords were used to search the relevant literature from Google Scholar, and the studies filtered on inclusion and exclusion criteria were explored. The study lists several challenges and their solutions in the adaptation of IoT in the Indian insurance industry; policymakers could adopt the suggestions provided to make the right decisions and improve service delivery in the insurance sector. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
31 pages, 1691 KiB  
Article
VLC Network Design for High Mobility Users in Urban Tunnels
by Edmundo Torres-Zapata, Victor Guerra, Jose Rabadan, Martin Luna-Rivera and Rafael Perez-Jimenez
Sensors 2022, 22(1), 88; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010088 - 23 Dec 2021
Cited by 3 | Viewed by 2850
Abstract
Current vehicular systems require real-time information to keep drivers safer and more secure on the road. In addition to radio frequency (RF)-based communication technologies, Visible Light Communication (VLC) has emerged as a complementary way to enable wireless access in intelligent transportation systems (ITS) with a simple design and low-cost deployment. However, integrating VLC into vehicular networks poses some fundamental challenges. In particular, the limited coverage range of VLC access points and the high speed of vehicles create time-limited links that the existing handover procedures of VLC networks cannot complete in time. Therefore, this paper addresses the problem of designing a vehicular VLC network that supports high-mobility users. We first modify the traditional VLC network topology to increase uplink reliability. Then, a low-latency handover scheme is proposed to enable mobility in a VLC network. Furthermore, we validate the proposed VLC network design method using system-level simulations of a vehicular tunnel scenario. The analysis and the results show that the proposed method provides a steady connection, with the vehicular node available more than 99% of the time regardless of the number of vehicular nodes in the network. Additionally, the system is able to achieve a Frame-Error-Rate (FER) lower than 10⁻³. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
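As a back-of-envelope companion to the reported FER figure: under independent bit errors, an n-bit frame fails with probability FER = 1 − (1 − BER)^n. The sketch below shows the bit-error budget implied by an FER below 10⁻³ (frame size and BER values are illustrative assumptions, not taken from the paper):

```python
def frame_error_rate(ber: float, frame_bits: int) -> float:
    """FER for an n-bit frame, assuming independent bit errors."""
    return 1.0 - (1.0 - ber) ** frame_bits

# For an illustrative 256-bit frame, a raw BER of 3e-6 keeps the
# frame error rate under the 1e-3 threshold reported in the paper.
fer = frame_error_rate(3e-6, 256)
```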
17 pages, 804 KiB  
Article
Multi-Cloud Resource Management Techniques for Cyber-Physical Systems
by Vlad Bucur and Liviu-Cristian Miclea
Sensors 2021, 21(24), 8364; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248364 - 15 Dec 2021
Cited by 4 | Viewed by 2278
Abstract
Information technology is based on data management between various sources. Software projects, as varied as simple applications or as complex as self-driving cars, are heavily reliant on the amounts, and types, of data ingested by one or more interconnected systems. Data is not only consumed but transformed or mutated, which requires copious amounts of computing resources. One of the most exciting areas of cyber-physical systems, autonomous vehicles, makes heavy use of deep learning and AI to mimic the highly complex actions of a human driver. Attempting to map human behavior (a large and abstract concept) requires large amounts of data, used by AIs to increase their knowledge and better attempt to solve complex problems. This paper outlines a full-fledged solution for managing resources in a multi-cloud environment. The purpose of this API is to accommodate ever-increasing resource requirements by leveraging the multi-cloud and using commercially available tools to scale resources and make systems more resilient while remaining as cloud-agnostic as possible. To that effect, the work herein consists of an architectural breakdown of the resource management API, a low-level description of the implementation, and an experiment aimed at proving the feasibility and applicability of the systems described. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
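One common way to stay cloud agnostic, as the abstract describes, is to hide each provider behind a common interface and fan scaling requests out across them. A minimal sketch of that pattern (class and method names are invented for illustration, not the paper's API):

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Minimal cloud-agnostic resource interface (illustrative)."""
    @abstractmethod
    def scale(self, service: str, instances: int) -> int: ...

class MockProvider(CloudProvider):
    """Stand-in for a real provider SDK; records requested instance counts."""
    def __init__(self):
        self.services = {}
    def scale(self, service, instances):
        self.services[service] = instances
        return instances

class MultiCloudManager:
    """Fans a scaling request out across providers and totals the result."""
    def __init__(self, providers):
        self.providers = providers
    def scale_everywhere(self, service, per_provider):
        return sum(p.scale(service, per_provider) for p in self.providers)

mgr = MultiCloudManager([MockProvider(), MockProvider()])
total = mgr.scale_everywhere("ingest", 3)  # 3 instances on each of 2 clouds
```

Swapping a provider then means writing one new `CloudProvider` subclass, leaving the orchestration code untouched.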
16 pages, 907 KiB  
Article
Determinants and Cross-National Moderators of Wearable Health Tracker Adoption: A Meta-Analysis
by Chenming Peng, Hong Zhao and Sha Zhang
Sustainability 2021, 13(23), 13328; https://0-doi-org.brum.beds.ac.uk/10.3390/su132313328 - 01 Dec 2021
Cited by 2 | Viewed by 1821
Abstract
Wearable health trackers improve people’s health management and are thus beneficial for social sustainability. Many prior studies have contributed to knowledge on the determinants of wearable health tracker adoption. However, these studies vary remarkably in their focal determinants and countries of data collection, prompting a call for a structured, quantitative review of which determinants are generally important, and of whether and how their effects on adoption vary across countries. Therefore, this study performed the first meta-analysis on the determinants and cross-national moderators of wearable health tracker adoption, accumulating 319 correlations between nine determinants and adoption from 59 prior studies in 18 countries/areas. First, the meta-analytic average effects revealed the generalized effect and the relative importance of each determinant; for example, technological characteristics generally had stronger positive correlations with adoption than consumer characteristics, except for privacy risk. Second, drawing on institutional theory, it was observed that cross-national characteristics regarding socioeconomic status, regulative systems, and culture can moderate the effects of the determinants on adoption. For instance, the growth rate of gross domestic product decreased the effect of innovativeness on adoption, while regulatory quality and control of corruption increased it. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
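Meta-analytic averaging of correlations is typically done on Fisher's z scale, with each study weighted by its sample size. A minimal fixed-effect sketch under that standard assumption (the numbers are made up, and the study's actual procedure may include random-effects corrections):

```python
import math

def meta_average_correlation(correlations, sample_sizes):
    """Fixed-effect average of correlations via Fisher's z transform,
    weighting each study by n - 3 (a standard simplification)."""
    weights = [n - 3 for n in sample_sizes]
    z_bar = sum(w * math.atanh(r)
                for w, r in zip(weights, correlations)) / sum(weights)
    return math.tanh(z_bar)  # back-transform to the correlation scale

# Three hypothetical studies of one determinant (r, n pairs are fabricated).
r_avg = meta_average_correlation([0.30, 0.45, 0.20], [103, 203, 53])
```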
17 pages, 1102 KiB  
Article
Energy Performance Analysis and Modelling of LoRa Prototyping Boards
by Solomon Ould and Nick S. Bennett
Sensors 2021, 21(23), 7992; https://0-doi-org.brum.beds.ac.uk/10.3390/s21237992 - 30 Nov 2021
Cited by 9 | Viewed by 2363
Abstract
LoRaWAN has gained significant attention for Internet-of-Things (IoT) applications due to its low power consumption and long-range potential for data transmission. While there is a significant body of work assessing LoRa coverage and data transmission characteristics, there is a lack of data about commercially available LoRa prototyping boards and their power consumption in relation to their features. It is currently difficult to estimate the power consumption of a LoRa module operating under different transmission profiles, due to a lack of available manufacturer data. In this study, power testing was carried out on physical hardware, and significant variation was found in the power consumption of competing boards, all marketed as “extremely low power”. The testing results are presented alongside an experimentally derived power model for the lowest-power LoRa module, and power requirements are compared to firmware settings. The power analysis adds to existing work showing how data-rate and transmission power settings affect electrical power consumption. The model’s accuracy is experimentally verified and shows acceptable agreement with estimated values. Finally, applications for the model are presented by way of a hypothetical scenario, with calculations to estimate battery life and energy consumption for varying data transmission intervals. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
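The battery-life calculation the abstract alludes to usually follows a simple duty-cycle model: average current is the time-weighted mix of transmit and sleep currents. A sketch with invented figures (the currents, airtime, and interval below are placeholders, not measurements from the paper):

```python
def average_current_ma(tx_current_ma, tx_time_s, sleep_current_ma, interval_s):
    """Average draw for a node that transmits for tx_time_s out of every
    interval_s seconds and sleeps the rest of the time."""
    duty = tx_time_s / interval_s
    return duty * tx_current_ma + (1.0 - duty) * sleep_current_ma

def battery_life_days(capacity_mah, avg_current_ma):
    return capacity_mah / avg_current_ma / 24.0

# Assumed figures, not from the paper: 120 mA during a 1 s transmission,
# 0.01 mA in sleep, one uplink every 10 minutes, 2400 mAh battery.
i_avg = average_current_ma(120.0, 1.0, 0.01, 600.0)
days = battery_life_days(2400.0, i_avg)
```

Under these assumptions the node averages about 0.21 mA, so transmission interval dominates battery life, which is exactly the trade-off the paper's model quantifies.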
22 pages, 859 KiB  
Article
Energy-Aware Wireless Sensor Networks for Smart Buildings: A Review
by Najem Naji, Mohamed Riduan Abid, Nissrine Krami and Driss Benhaddou
J. Sens. Actuator Netw. 2021, 10(4), 67; https://0-doi-org.brum.beds.ac.uk/10.3390/jsan10040067 - 26 Nov 2021
Cited by 2 | Viewed by 3073
Abstract
The design of Wireless Sensor Networks (WSNs) requires the fulfillment of several design requirements, the most important being the optimization of battery lifetime, which is tightly coupled to sensor lifetime. End-users usually avoid replacing sensors’ batteries, especially in massive deployment scenarios like smart agriculture and smart buildings. To optimize battery lifetime, wireless sensor designers need to delineate and optimize the active components at different levels of the sensor’s layered architecture, mainly: (1) the number of data sets being generated and processed at the application layer, (2) the size and architecture of the operating system (OS), (3) the networking layers’ protocols, and (4) the architecture of the electronic components and the duty-cycling techniques. This paper reviews the relevant technologies and investigates how they optimize energy consumption at each layer of the sensor’s architecture, i.e., the hardware, operating system, application, and networking layers. It aims to make researchers aware of the various optimization opportunities when designing WSN nodes. To our knowledge, no other work in the literature reviews energy optimization of WSNs in the context of Smart Energy-Efficient Buildings (SEEB) from the four perspectives listed above, to help in the design and implementation of optimal WSNs for SEEB. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
14 pages, 2255 KiB  
Article
An Adaptive Routing Algorithm Based on Relation Tree in DTN
by Diyue Chen, Hongyan Cui and Roy E. Welsch
Sensors 2021, 21(23), 7847; https://0-doi-org.brum.beds.ac.uk/10.3390/s21237847 - 25 Nov 2021
Viewed by 1524
Abstract
Nodes in Delay Tolerant Networks (DTNs) have been found to exhibit stable social attributes similar to those of people. In this paper, an adaptive routing algorithm based on a Relation Tree (AR-RT) for DTNs is proposed. Each node constructs its own Relation Tree based on historical encounter frequency and adopts different forwarding strategies based on it in the forwarding phase, achieving more targeted forwarding. To further improve the scalability of the algorithm, the source node dynamically controls the initial maximum number of message copies according to its own cache occupancy, enabling the node to provide negative feedback to changes in the network environment. Simulation results show that the proposed AR-RT algorithm has significant advantages over existing routing algorithms in terms of average delay, average hop count, and message delivery rate. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
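The cache-occupancy feedback can be pictured as a simple control rule: the fuller the source node's buffer, the fewer initial copies it spawns. A toy sketch in the spirit of AR-RT (the paper's exact control law may differ):

```python
def initial_copy_count(max_copies: int, cache_occupancy: float) -> int:
    """Scale the initial number of message copies down as the source
    node's cache fills up -- a simple negative-feedback rule."""
    if not 0.0 <= cache_occupancy <= 1.0:
        raise ValueError("occupancy must be in [0, 1]")
    return max(1, round(max_copies * (1.0 - cache_occupancy)))

low = initial_copy_count(16, 0.1)   # nearly empty cache -> many copies
high = initial_copy_count(16, 0.9)  # nearly full cache  -> few copies
```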
18 pages, 4819 KiB  
Article
Evaluation of the COVID-19 Era by Using Machine Learning and Interpretation of Confidential Dataset
by Andreas Andreou, Constandinos X. Mavromoustakis, George Mastorakis, Jordi Mongay Batalla and Evangelos Pallis
Electronics 2021, 10(23), 2910; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10232910 - 24 Nov 2021
Cited by 4 | Viewed by 1799
Abstract
Various research approaches to COVID-19 are currently being developed using machine learning (ML) techniques and edge computing, whether for identifying virus molecules or for risk analysis of the spread of COVID-19. These efforts rely on datasets that derive either from the WHO, through its website and research portals, or from data generated in real time by the healthcare system. Data analysis, modelling, and prediction are performed through multiple algorithmic techniques. The limited accuracy of the predictions these techniques generate motivates this research study, which builds on an existing machine learning technique and achieves valuable forecasts by modifying it. More specifically, this study modifies the Levenberg–Marquardt algorithm, which is commonly used to approach solutions to nonlinear least-squares problems, to ingest data from IoT devices and analyse these data via cloud computing, generating foresight about the progress of the outbreak in real-time environments. In this way, we enhance the optimization of the trend line that interprets these data. We introduce this framework in conjunction with a novel encryption process that we propose for the datasets and with the implementation of mortality predictions. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
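The Levenberg–Marquardt algorithm the paper modifies interpolates between Gauss–Newton steps and gradient descent via a damping factor that grows when a step fails and shrinks when it succeeds. A minimal two-parameter sketch fitting an exponential trend (the model, data, and starting point are illustrative, not the paper's):

```python
import math

def lm_fit_exponential(xs, ys, a=1.0, b=0.1, lam=1e-3, iters=100):
    """Levenberg-Marquardt fit of y = a * exp(b * x): a minimal sketch;
    production solvers handle arbitrary models and convergence tests."""
    def cost(a, b):
        try:
            return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))
        except OverflowError:   # a wild trial step: treat as infinitely bad
            return math.inf
    c = cost(a, b)
    for _ in range(iters):
        # Accumulate J^T r and J^T J for residuals r_i = y_i - a*exp(b*x_i).
        g_a = g_b = h_aa = h_ab = h_bb = 0.0
        for x, y in zip(xs, ys):
            e = math.exp(b * x)
            r = y - a * e
            ja, jb = -e, -a * x * e          # dr/da, dr/db
            g_a += ja * r
            g_b += jb * r
            h_aa += ja * ja
            h_ab += ja * jb
            h_bb += jb * jb
        # Solve the damped 2x2 normal equations (J^T J + lam*I) d = -J^T r.
        m00, m01, m11 = h_aa + lam, h_ab, h_bb + lam
        det = m00 * m11 - m01 * m01
        da = (-g_a * m11 + g_b * m01) / det
        db = (g_a * m01 - g_b * m00) / det
        c_new = cost(a + da, b + db)
        if c_new < c:    # step improved the fit: accept it, relax damping
            a, b, c, lam = a + da, b + db, c_new, lam / 10
        else:            # step made it worse: reject it, increase damping
            lam *= 10
    return a, b

xs = [i * 0.5 for i in range(10)]
ys = [2.0 * math.exp(0.3 * x) for x in xs]   # noise-free synthetic data
a_hat, b_hat = lm_fit_exponential(xs, ys)
```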
19 pages, 822 KiB  
Article
Nomadic, Informal and Mediatised Work Practices: Role of Professional Social Approval and Effects on Quality of Life at Work
by Maëlle Périssé, Anne-Marie Vonthron and Émilie Vayre
Sustainability 2021, 13(22), 12878; https://0-doi-org.brum.beds.ac.uk/10.3390/su132212878 - 21 Nov 2021
Cited by 2 | Viewed by 1877
Abstract
Several studies have emphasised the effects of perceived social approval in employees’ professional environment (colleagues and managers) on the implementation of remote and mediatised work practices and, more specifically, on their spatial, temporal and material characteristics. The use of information and communication technologies has been identified in the literature as affecting not only employees’ relation to work (organisational commitment and recognition for work accomplished) but also their work-life balance and health (stress and addictions). However, such studies are few in number when it comes to nomadic and informal work practices, and they rarely address perceived social approval in employees’ professional entourage. We conducted an empirical study based on a questionnaire survey. The results indicate that employees favour smartphone and laptop use, and that the effects of perceived social approval in their professional entourage differ according to the technologies used. These uses also have an impact on commitment and recognition, but their effects on employees’ perception of how work life affects “non-work” life, and on addiction-related behaviours, are more nuanced. These findings lead us to discuss the “right to disconnect” and the development of support and supervision schemes for nomadic, informal and mediatised work practices. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
16 pages, 1669 KiB  
Review
Edge Network Optimization Based on AI Techniques: A Survey
by Mitra Pooyandeh and Insoo Sohn
Electronics 2021, 10(22), 2830; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10222830 - 18 Nov 2021
Cited by 11 | Viewed by 3700
Abstract
The network edge is becoming a new solution for reducing latency and saving bandwidth in Internet of Things (IoT) networks. The goal of the network edge is to move computation from cloud servers to the edge of the network, near the IoT devices. The network edge, which needs to make smart decisions with fast response times, requires intelligent processing based on artificial intelligence (AI). AI is becoming a key component in many edge devices, including cars, drones, robots, and smart IoT devices. This paper describes the role of AI at the network edge. Moreover, it elaborates and discusses optimization methods for an edge network based on AI techniques. Finally, the paper considers security as a major concern and examines prospective approaches to addressing it in an edge network. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
19 pages, 32838 KiB  
Article
Experimental FIA Methodology Using Clock and Control Signal Modifications under Power Supply and Temperature Variations
by Francisco Eugenio Potestad-Ordóñez, Erica Tena-Sánchez, José Miguel Mora-Gutiérrez, Manuel Valencia-Barrero and Carlos Jesús Jiménez-Fernández
Sensors 2021, 21(22), 7596; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227596 - 16 Nov 2021
Viewed by 1638
Abstract
The security of cryptocircuits is determined not only by their mathematical formulation, but also by their physical implementation. So-called fault injection attacks, in which an attacker inserts faults during the operation of the cipher to cause a malfunction that reveals secret information, pose a serious threat to security. These attacks are also used by designers as a vehicle to detect security flaws and then protect circuits against them. In this paper, two attack methodologies are presented, based on inserting faults through the clock signal or the control signal. The optimization of the attacks is evaluated under supply voltage and temperature variation, with feasibility determined experimentally by evaluating different Trivium versions implemented in 90 nm ASIC technology, also considering different routing alternatives. The results show that it is possible to inject effective faults with both methodologies, and that fault efficiency improves as the power supply voltage decreases, requiring only half the frequency of the short pulse inserted into the clock signal to obtain a fault. The clock signal modification methodology can be extended to other NLFSR-based cryptocircuits, and the control signal-based methodology can be applied to both block and stream ciphers. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
26 pages, 4359 KiB  
Article
Network Lifetime Improvement through Energy-Efficient Hybrid Routing Protocol for IoT Applications
by Mukesh Mishra, Gourab Sen Gupta and Xiang Gui
Sensors 2021, 21(22), 7439; https://0-doi-org.brum.beds.ac.uk/10.3390/s21227439 - 09 Nov 2021
Cited by 14 | Viewed by 3027
Abstract
The application of the Internet of Things (IoT) in wireless sensor networks (WSNs) poses serious challenges in preserving network longevity, since the IoT necessitates a considerable amount of energy for sensing, processing, and data communication. As a result, several conventional algorithms aim to enhance the performance of WSNs by incorporating various optimization strategies. These algorithms primarily focus on the network layer, developing routing protocols that communicate reliably and energy-efficiently, leading to enhanced network lifetime. Clustering, which groups sensor nodes (SNs) into clusters, has been widely accepted as an important method for increasing the network lifetime of WSNs, and numerous researchers have devised further methods to extend it; the prime factor in maximizing network lifetime is minimizing energy consumption. The authors of this paper propose a multi-objective optimization approach that selects the optimal route for transmitting packets from source to sink or base station (BS). The proposed model employs a two-step approach. The first step employs a trust model to select the cluster heads (CHs) that manage data communication between the BS and the nodes in each cluster. Then, a novel hybrid algorithm combining particle swarm optimization (PSO) and a genetic algorithm (GA), named PSOGA, is proposed to determine the routes for data transmission. To validate its efficacy, simulations were conducted and the results were compared with the existing LEACH method and with PSO using random route selection, for five different cases. The results establish the efficiency of the proposed approach, which outperforms existing methods with increased energy efficiency, increased network throughput, a high packet delivery rate, and high residual energy throughout all iterations. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
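Hybrid metaheuristics like the proposed PSOGA ultimately rank candidate routes by a fitness function. A minimal scalarized sketch of such a multi-objective score (the weights and objectives here are illustrative guesses, not the paper's actual fitness):

```python
def route_fitness(residual_energy, path_length, w_energy=0.6, w_length=0.4):
    """Scalarized multi-objective score for a candidate route: more residual
    energy along the route is better, a longer path is worse. Both inputs
    are assumed normalized to [0, 1]."""
    return w_energy * residual_energy - w_length * path_length

candidates = {
    "route_a": (0.8, 0.4),   # (normalized residual energy, normalized length)
    "route_b": (0.9, 1.0),
    "route_c": (0.5, 0.2),
}
best = max(candidates, key=lambda name: route_fitness(*candidates[name]))
```

In PSOGA-style hybrids, PSO and GA operators would propose new candidate routes and this score would decide which survive.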
20 pages, 1780 KiB  
Article
Direct-to-Satellite IoT Slotted Aloha Systems with Multiple Satellites and Unequal Erasure Probabilities
by Felipe Augusto Tondo, Samuel Montejo-Sánchez, Marcelo Eduardo Pellenz, Sandra Céspedes and Richard Demo Souza
Sensors 2021, 21(21), 7099; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217099 - 26 Oct 2021
Cited by 11 | Viewed by 3119
Abstract
Direct-to-satellite Internet of Things (IoT) solutions have attracted a lot of attention from industry and academia recently, as promising alternatives for large scale coverage of a massive number of IoT devices. In this work, we considered that a cluster of IoT devices was under the coverage of a constellation of low-Earth orbit (LEO) satellites, while slotted Aloha was used as a medium access control technique. Then, we analyzed the throughput and packet loss rate while considering potentially different erasure probabilities at each of the visible satellites within the constellation. We show that different combinations of erasure probabilities at the LEO satellites and the IoT traffic load can lead to considerable differences in the system’s performance. Next, we introduce an intelligent traffic load distribution (ITLD) strategy, which, by choosing between a non-uniform allocation and the uniform traffic load distribution, guarantees a high overall system throughput, by allocating more appropriate amounts of traffic load at different positions (i.e., different sets of erasure probabilities) of the LEO constellation with respect to the IoT cluster. Finally, the results show that ITLD, a mechanism with low implementation complexity, allows the system to be much more scalable, intelligently exploiting the potential of the different positions of the satellite constellation. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
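The classic slotted-Aloha throughput under Poisson traffic is S = G·e⁻ᴳ; with several visible satellites, a packet sent in a collision-free slot survives unless every satellite erases it. A simplified sketch of that combined model (the paper's analysis and the ITLD strategy are more elaborate):

```python
import math

def throughput(load, erasure_probs):
    """Slotted-Aloha throughput S = G * exp(-G), discounted by the
    probability that all visible satellites erase the packet."""
    p_all_erased = math.prod(erasure_probs)
    return load * math.exp(-load) * (1.0 - p_all_erased)

# Two satellites with unequal erasure probabilities beat either one alone.
s_two = throughput(1.0, [0.3, 0.6])
s_best_single = throughput(1.0, [0.3])
```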
25 pages, 3348 KiB  
Article
Yield Estimation and Visualization Solution for Precision Agriculture
by Youssef Osman, Reed Dennis and Khalid Elgazzar
Sensors 2021, 21(19), 6657; https://0-doi-org.brum.beds.ac.uk/10.3390/s21196657 - 07 Oct 2021
Cited by 14 | Viewed by 2687
Abstract
We present an end-to-end smart harvesting solution for precision agriculture. Our proposed pipeline begins with yield estimation, done through object detection and tracking to count fruit within a video. We train a You Only Look Once (YOLO) model on video clips of apples, oranges and pumpkins. The bounding boxes obtained through object detection are used as input to our selected tracking model, DeepSORT. The original version of DeepSORT is unusable with fruit data, as its appearance feature extractor only works with people. We implement ResNet as DeepSORT’s new feature extractor, which is lightweight, accurate and works generically across different fruits. Our yield estimation module shows accuracy between 91% and 95% on real footage of apple trees. Our modification successfully counts oranges and pumpkins, with accuracies of 79% and 93.9%, respectively, with no need for further training. Our framework additionally includes a visualization of the yield through the incorporation of geospatial data. We also propose a mechanism to annotate a set of frames with a respective GPS coordinate; during counting, the count within the set of frames and the matching GPS coordinate are recorded, and we then visualize them on a map. We leverage this information to propose an optimal container placement solution, which minimizes the number of containers to place across the field before harvest, subject to a set of constraints. This acts as a decision support system that helps the farmer make efficient plans for logistics, such as labor, equipment and gathering paths, before harvest. Our work serves as a blueprint for future agriculture decision support systems that can aid in many other aspects of farming. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
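Once a tracker assigns persistent IDs to detections, counting reduces to tallying IDs never seen before, and the GPS annotation lets each new fruit be credited to a coordinate. A toy sketch of that bookkeeping (the track IDs and coordinates are fabricated; a real pipeline would take them from DeepSORT and a GPS log):

```python
def count_fruit(frames, gps_per_frame):
    """Count unique track IDs per GPS coordinate. `frames` is a list of
    sets of track IDs visible in each frame (as a tracker would emit);
    `gps_per_frame` gives the coordinate annotated on each frame."""
    seen = set()
    counts = {}
    for ids, coord in zip(frames, gps_per_frame):
        new_ids = ids - seen          # fruit not counted in earlier frames
        seen |= ids
        counts[coord] = counts.get(coord, 0) + len(new_ids)
    return counts

frames = [{1, 2}, {1, 2, 3}, {4}, {4, 5}]
gps = [(45.0, -79.0), (45.0, -79.0), (45.1, -79.0), (45.1, -79.0)]
counts = count_fruit(frames, gps)
```

The per-coordinate counts are exactly what the map visualization and the container-placement optimizer would consume.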
17 pages, 10754 KiB  
Article
Secure Audio-Visual Data Exchange for Android In-Vehicle Ecosystems
by Alfred Anistoroaei, Adriana Berdich, Patricia Iosif and Bogdan Groza
Appl. Sci. 2021, 11(19), 9276; https://0-doi-org.brum.beds.ac.uk/10.3390/app11199276 - 06 Oct 2021
Cited by 1 | Viewed by 1573
Abstract
Mobile device pairing inside vehicles is a ubiquitous task that requires easy-to-use and secure solutions. In this work we exploit the audio-visual domain for pairing devices inside vehicles. In principle, we rely on the widely used elliptic-curve version of the Diffie-Hellman key-exchange protocol and extract the session keys from the acoustic domain as well as from the visual domain, using the head unit display. The need to merge the audio and visual domains stems first from the fact that in-vehicle head units generally do not have a camera, so they cannot use visual data from smartphones; however, they are equipped with microphones and can use them to collect audio data. Acoustic channels are less reliable, being more prone to errors due to environmental noise, but this noise can also be exploited positively to extract secure seeds from the environment, and audio channels are harder to intercept from the outside. Visual channels, on the other hand, are more reliable but can be more easily spotted by outsiders, making them more vulnerable in security applications. Fortunately, mixing these two types of channels results in a solution that is both more reliable and more secure for performing a key exchange. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
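The pairing rests on Diffie-Hellman: each side combines its own secret with the peer's public value and arrives at the same session key, so only public values ever cross the audio and visual channels. The structure can be sketched with toy finite-field parameters (the paper uses the elliptic-curve variant with proper group sizes; the 127-bit Mersenne-prime modulus below is for illustration only and far too small for real security):

```python
import hashlib

P = 2**127 - 1   # toy modulus (a Mersenne prime); NOT a real-world group
G = 3            # toy generator

def derive_session_key(own_private: int, peer_public: int) -> bytes:
    """Hash the shared Diffie-Hellman secret into a 256-bit session key."""
    shared = pow(peer_public, own_private, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

# Head unit and smartphone each pick a secret and exchange only public
# values (in the paper, over the audio and visual channels respectively).
a_priv, b_priv = 0x1234567890ABCDEF, 0xFEDCBA0987654321
a_pub, b_pub = pow(G, a_priv, P), pow(G, b_priv, P)
key_head_unit = derive_session_key(a_priv, b_pub)
key_phone = derive_session_key(b_priv, a_pub)
```

Both derivations yield the same key because g^(ab) = g^(ba) mod P, which is the property the in-vehicle pairing exploits.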
21 pages, 11977 KiB  
Article
Practical Particulate Matter Sensing and Accurate Calibration System Using Low-Cost Commercial Sensors
by Hyuntae Cho and Yunju Baek
Sensors 2021, 21(18), 6162; https://0-doi-org.brum.beds.ac.uk/10.3390/s21186162 - 14 Sep 2021
Cited by 8 | Viewed by 4301
Abstract
Air pollution is a social problem because harmful suspended materials can cause disease and death in humans. Specifically, particulate matter (PM), a form of air pollution, can contribute to cardiovascular morbidity and lung disease. Nowadays, humans are exposed to PM pollution everywhere, because it occurs in both indoor and outdoor environments. To purify or ventilate polluted air, one needs to accurately monitor ambient air quality. Therefore, this study proposes a practical particulate matter sensing and accurate calibration system using low-cost commercial sensors. The proposed system uses inherently noisy and inaccurate PM sensors to measure ambient air pollution, and this paper deals mainly with three types of error that arise in the light-scattering method: short-term noise, part-to-part variation, and temperature and humidity interference. We propose a simple short-term noise reduction method to correct measurement errors, an auto-fitting calibration for part-to-part repeatability that pinpoints the baseline of the signal affecting system performance, and a temperature and humidity compensation method. The paper also presents the experimental setup and a performance evaluation demonstrating the advantages of the proposed methods. In this evaluation, part-to-part repeatability was less than 2 μg/m³ and the standard deviation was approximately 1.1 μg/m³ in air. The proposed approaches can also improve performance when applied to other optical sensors. Full article
(This article belongs to the Topic Internet of Things: Latest Advances)
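Two of the three error sources have textbook first-order remedies: a moving average suppresses short-term noise, and a humidity-dependent divisor counteracts the droplet-growth over-reading typical of optical PM sensors. A rough sketch (the window size and the coefficient `k` are invented placeholders, not the paper's calibration constants):

```python
def moving_average(readings, window=5):
    """Short-term noise reduction via a trailing moving average."""
    out = []
    for i in range(len(readings)):
        chunk = readings[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def compensate(pm_ug_m3, humidity_pct, k=0.004):
    """Illustrative linear humidity correction: optical sensors over-read
    as droplets grow in humid air. k is a made-up placeholder."""
    return pm_ug_m3 / (1.0 + k * humidity_pct)

raw = [12.0, 15.0, 9.0, 14.0, 40.0, 13.0, 11.0]  # noise spike at index 4
smooth = moving_average(raw, window=3)
corrected = compensate(smooth[-1], humidity_pct=75.0)
```

A real calibration would fit `k` (and a temperature term) against a reference instrument, which is what the paper's auto-fitting step does for the baseline.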