Machine Learning in the Industrial Internet of Things

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (1 September 2022) | Viewed by 10170

Special Issue Editors


Prof. Dr. Marimuthu Palaniswami
Guest Editor
Department of Electrical and Electronic Engineering, The University of Melbourne, Parkville, VIC 3010, Australia
Interests: sensors; biosensors; crystalline materials; artificial intelligence

Prof. Dr. Jeng-Shyang Pan
Guest Editor
College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266510, China
Interests: multimedia processing; sensor fusion; machine learning; information hiding

Dr. Yee Wei Law
Guest Editor
UniSA STEM, University of South Australia, Adelaide, SA, Australia
Interests: wireless sensor networks; smart grid; Internet of Things; security

Special Issue Information

Dear Colleagues, 

The Internet of Things (IoT), according to the European Parliament, is a global, distributed network (or networks) of physical objects that are capable of sensing or acting on their environment, and able to communicate with each other, other machines, or computers. Leveraging the IoT for the Fourth Industrial Revolution, or “Industry 4.0”, gave rise to the Industrial Internet of Things (IIoT). Originally a German initiative, Industry 4.0 has now gained international recognition for its potential to innovate product design, enhance manufacturing flexibility and efficiency, improve productivity, and reduce costs. According to the Boston Consulting Group, the IIoT is only one of the nine pillars of Industry 4.0. A common enabler of the IIoT and the other pillars of Industry 4.0, including autonomous robots, big data analytics, cybersecurity, and simulation, is machine learning, and in particular, deep learning. Deep learning refers to the application of neural networks that build up representations through many layers of activity vectors, to learning tasks that may be supervised, unsupervised, semi-supervised, or self-supervised.
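
To make "many layers of activity vectors as representations" concrete, the following minimal NumPy sketch propagates an input vector through a stack of layers; the layer sizes and the ReLU nonlinearity are illustrative assumptions, not drawn from any paper in this issue.

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [16, 32, 32, 4]  # input -> two hidden layers -> output (illustrative)
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """Transform the input through successive layers of activity vectors."""
    activity = x
    for w in weights[:-1]:
        activity = np.maximum(activity @ w, 0.0)  # ReLU hidden activations
    return activity @ weights[-1]                 # linear output layer

print(forward(rng.standard_normal(16)).shape)     # (4,)
```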

Over the past few years, many deep learning architectures and approaches (e.g., convolutional neural networks, generative adversarial networks, deep reinforcement learning, transformers) have been proposed for applications targeting the IIoT, and many of these applications have made a major impact. For example, major advances in automating robots to handle challenging tasks using deep reinforcement learning have made frequent media headlines. Applying big data analytics to predictive maintenance has become the core business model of many companies. Deep learning has also been applied to intrusion detection in the IIoT. Graph networks have been used successfully to accelerate mesh-based simulations (i.e., simulations of physical phenomena that can be described by partial differential equations, such as fluid dynamics and electromagnetics), paving the way for real-time digital twins. Deep-learning-enabled sensing, communication, and (edge/cloud) computing platforms are entering the IIoT market at a rapid pace, and the availability of this ever-expanding portfolio of technologies further accelerates the adoption of deep learning.

This Electronics Special Issue serves as a spiritual successor to the 2019 MDPI Electronics Special Issue “Towards an Industrial Internet of Things” and invites your original contributions related to the applications of machine learning, especially deep learning, to the IIoT. These applications include but are not limited to the aforementioned examples. Both theory-oriented and practice-oriented submissions are welcome.

Prof. Dr. Marimuthu Palaniswami
Prof. Dr. Jeng-Shyang Pan
Dr. Yee Wei Law
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Industrial Internet of Things
  • Industry 4.0
  • Cyber-physical systems
  • Wireless sensor networks
  • Cloud computing, edge computing
  • Deep learning, machine learning, artificial intelligence
  • Autonomous robots, collaborative robots (cobots)
  • Big data analytics
  • Predictive maintenance

Published Papers (5 papers)

Research

17 pages, 4715 KiB  
Article
Application of Improved Quasi-Affine Transformation Evolutionary Algorithm in Power System Stabilizer Optimization
by Jing Huang, Jiajing Liu, Cheng Zhang, Yu Kuang and Shaowei Weng
Electronics 2022, 11(17), 2785; https://doi.org/10.3390/electronics11172785 - 4 Sep 2022
Cited by 3 | Viewed by 1121
Abstract
This paper proposes a coordinated parameter optimization design for power system stabilizers (PSSs) based on an improved quasi-affine transformation evolutionary (QUATRE) algorithm, aimed at suppressing low-frequency oscillation and improving the dynamic stability of power systems. First, the simulated annealing (SA) algorithm randomly perturbs the globally optimal solution of each QUATRE iteration and accepts inferior solutions with a certain probability so as to escape local extreme points; this work is the first application of the resulting SA-QUATRE algorithm to power systems. Since the damping ratio is one of the criteria for measuring the dynamic stability of a power system, the objective function is set according to the principle of maximizing the damping coefficient of the electromechanical mode, and SA-QUATRE is used to search for a globally optimal combination of PSS parameters that raises the system's safety margin as far as possible. Finally, the method's validity is verified on simulation examples of the IEEE 4-machine 2-area system under different operating states. Comparison with traditional optimization algorithms shows that the proposed method is better suited to coordinated multi-machine PSS parameter optimization, restrains the low-frequency oscillation of the power system more effectively, and enhances system stability. Full article
(This article belongs to the Special Issue Machine Learning in the Industrial Internet of Things)
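
The escape mechanism summarized in this abstract is, at its core, the Metropolis acceptance rule of simulated annealing: improvements are always accepted, while worse candidates are accepted with a temperature-dependent probability. The sketch below illustrates that generic rule only; the placeholder fitness function, Gaussian perturbation, and geometric cooling schedule are assumptions, and it is not the authors' SA-QUATRE implementation.

```python
import numpy as np

def sa_refine(global_best, fitness, n_steps=100, t0=1.0, cooling=0.95):
    """Refine an evolutionary iteration's global best with SA (lower fitness = better)."""
    rng = np.random.default_rng()
    state, state_f = global_best.copy(), fitness(global_best)
    temperature = t0
    for _ in range(n_steps):
        candidate = state + rng.normal(scale=0.1, size=state.shape)  # random perturbation
        delta = fitness(candidate) - state_f
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            state, state_f = candidate, state_f + delta
        temperature *= cooling  # cool down, making worse moves ever less likely
    return state, state_f
```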

16 pages, 1117 KiB  
Article
Task Scheduling in Cloud Computing Environment Using Advanced Phasmatodea Population Evolution Algorithms
by An-Ning Zhang, Shu-Chuan Chu, Pei-Cheng Song, Hui Wang and Jeng-Shyang Pan
Electronics 2022, 11(9), 1451; https://doi.org/10.3390/electronics11091451 - 30 Apr 2022
Cited by 16 | Viewed by 2372
Abstract
Cloud computing has grown out of advances in distributed computing, parallel computing, and network computing, and the management and allocation of cloud resources have emerged as a central research direction. An intelligent resource allocation system can significantly reduce costs and the waste of resources. In this paper, we present a task scheduling technique for heterogeneous cloud environments based on the advanced Phasmatodea Population Evolution (APPE) algorithm. The algorithm shortens the time needed to find solutions by improving convergent evolution toward the nearest optimal solutions, and adds a restart strategy that prevents it from becoming trapped in local optima while balancing its exploration and exploitation capabilities. Furthermore, the evaluation function is designed to identify the best solutions by jointly considering makespan, resource cost, and degree of load balancing. Tests on 30 benchmark functions show that APPE outperforms similar algorithms. Applied to the task scheduling problem in a cloud computing environment, the method also converges faster and achieves higher resource utilization than competing algorithms. Full article
(This article belongs to the Special Issue Machine Learning in the Industrial Internet of Things)
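
The abstract's evaluation function combines makespan, resource cost, and load-balancing degree. A common way to combine such objectives is a weighted sum, sketched below; the weighted-sum form, the weights, and the load-balance measure are assumptions for illustration, not the formula used in the paper.

```python
import numpy as np

def evaluate(schedule, task_lengths, machine_speeds, machine_costs,
             w_makespan=0.5, w_cost=0.3, w_balance=0.2):
    """Score a schedule; lower is better. schedule[i] = machine running task i."""
    loads = np.zeros(len(machine_speeds))
    cost = 0.0
    for task, machine in enumerate(schedule):
        runtime = task_lengths[task] / machine_speeds[machine]
        loads[machine] += runtime                   # accumulate per-machine busy time
        cost += runtime * machine_costs[machine]    # pay for machine time used
    balance = loads.std() / (loads.mean() + 1e-9)   # lower = better balanced
    return w_makespan * loads.max() + w_cost * cost + w_balance * balance

# Three tasks on two machines: tasks 0 and 2 on machine 0, task 1 on machine 1.
print(evaluate([0, 1, 0], np.array([4.0, 2.0, 6.0]),
               np.array([1.0, 2.0]), np.array([0.5, 1.0])))
```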

19 pages, 6294 KiB  
Article
Fully Automatic Analysis of Muscle B-Mode Ultrasound Images Based on the Deep Residual Shrinkage U-Net
by Weimin Zheng, Linxueying Zhou, Qingwei Chai, Jianguo Xu and Shangkun Liu
Electronics 2022, 11(7), 1093; https://doi.org/10.3390/electronics11071093 - 30 Mar 2022
Cited by 4 | Viewed by 2377
Abstract
The parameters of muscle ultrasound images reflect the function and state of muscles and are of great significance to the diagnosis of muscle diseases. Because manual labeling is time-consuming and laborious, the automatic labeling of muscle ultrasound image parameters has become an active research topic. In recent years, many methods have applied image processing and deep learning to analyze muscle ultrasound images automatically. However, these methods have limitations: some are not fully automatic, some cannot cope with images containing complex noise, and some can measure only a single parameter. To solve these problems, this paper proposes a fully automatic muscle ultrasound image analysis method based on image segmentation. The method uses a Deep Residual Shrinkage U-Net (RS-Unet) to segment ultrasound images accurately. Compared with existing methods, it shows a marked improvement in accuracy: the mean differences in pennation angle, fascicle length, and muscle thickness are about 0.09°, 0.4 mm, and 0.63 mm, respectively. Experimental results show that the proposed method measures muscle parameters accurately and exhibits stability and robustness. Full article
(This article belongs to the Special Issue Machine Learning in the Industrial Internet of Things)
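
The "shrinkage" in a deep residual shrinkage network is soft thresholding, which suppresses small, noise-dominated feature activations. The sketch below shows the operation in isolation; in RS-Unet the threshold is learned per channel by a small subnetwork, which this illustration replaces with a fixed value.

```python
import numpy as np

def soft_threshold(x, tau):
    """Shrink activations toward zero: sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

features = np.array([-0.8, -0.05, 0.02, 0.3])
print(soft_threshold(features, tau=0.1))  # small values -> 0; large ones shrink by tau
```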

21 pages, 869 KiB  
Article
A Convolution Location Method for Multi-Node Scheduling in Wireless Sensor Networks
by Pu Han, Jiandong Shang and Jeng-Shyang Pan
Electronics 2022, 11(7), 1031; https://doi.org/10.3390/electronics11071031 - 25 Mar 2022
Cited by 5 | Viewed by 1534
Abstract
The localization of continuous objects and the scheduling of resources are challenging issues in wireless sensor networks (WSNs). Owing to the irregular shape of continuous target areas and the way sensors are deployed, sensor data in WSNs are typically discrete and sparse, and most network resources are constrained by node energy. To detect and track continuous objects faster, we propose a convolution-based continuous object localization algorithm (CCOL). Moreover, drawing on greedy and dynamic programming ideas, we design an energy-saving and efficient multi-node scheduling strategy model (MSSM) for responding to emergencies caused by multiple continuous targets. Simulation experiments demonstrate that CCOL is superior to other localization algorithms in terms of time complexity and execution performance. Furthermore, the feasibility of the multi-node scheduling strategy is verified by deploying different mobile nodes to respond to the target area in green WSNs. Full article
(This article belongs to the Special Issue Machine Learning in the Industrial Internet of Things)
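
The convolution idea behind localizing a continuous object from discrete, sparse readings can be illustrated by smoothing a grid of binary sensor detections with a small kernel and thresholding the result. The kernel, grid, and threshold below are assumptions for illustration, not the specifics of CCOL.

```python
import numpy as np

def localize(readings, threshold=0.3):
    """Smooth sparse binary detections with a 3x3 averaging kernel, then threshold."""
    kernel = np.ones((3, 3)) / 9.0
    h, w = readings.shape
    padded = np.pad(readings, 1)                    # zero-pad the grid border
    smoothed = np.zeros_like(readings, dtype=float)
    for i in range(h):
        for j in range(w):
            smoothed[i, j] = (padded[i:i + 3, j:j + 3] * kernel).sum()
    return smoothed >= threshold                    # estimated target region

grid = np.zeros((6, 6))
grid[2:4, 2:5] = 1                                  # sparse detections of one object
grid[3, 3] = 0                                      # a gap (sensor miss)
print(localize(grid).astype(int))                   # gap is filled by smoothing
```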

12 pages, 3635 KiB  
Article
Automatic Tracking of Muscle Fiber Direction in Ultrasound Images Based on Improved Kalman Filter
by Shangkun Liu, Qingwei Chai and Weimin Zheng
Electronics 2022, 11(3), 466; https://doi.org/10.3390/electronics11030466 - 5 Feb 2022
Viewed by 1479
Abstract
Ultrasound myography (SMG) captures changes in muscle structural parameters dynamically and in real time by recording ultrasound images of muscle contraction with an ultrasound instrument. Muscle parameters are essential for judging whether the muscles, and the human body, are healthy. To solve the problem of tracking muscle fibers across a sequence of ultrasound muscle images, we propose a method that automatically tracks the direction of muscle fibers based on an improved Kalman filter. Firstly, a measurement of the muscle fiber direction is obtained by introducing a reference line into the ultrasound muscle image using deep learning. Secondly, the Kalman filter framework is improved by introducing a set of neural units. Finally, the optimal estimate of the muscle fiber direction is obtained by combining the measurement with the improved Kalman filter. Experiments verify that the results obtained by the proposed method are closer to manually labeled values than those of the original measurement method, reducing the root mean square error by about 10%. Full article
(This article belongs to the Special Issue Machine Learning in the Industrial Internet of Things)
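
The backbone of the proposed tracker is the Kalman filter, which blends a predicted fiber angle with a per-frame measurement according to their variances. Below is the textbook one-dimensional update; the neural units that the paper adds are omitted, and the noise variances are assumed values.

```python
def kalman_update(angle_est, var_est, angle_meas, var_meas, process_var=0.01):
    """One predict-update cycle for a scalar state (a static motion model)."""
    var_pred = var_est + process_var                # predict: uncertainty grows
    gain = var_pred / (var_pred + var_meas)         # Kalman gain weighs the measurement
    angle_new = angle_est + gain * (angle_meas - angle_est)
    var_new = (1.0 - gain) * var_pred               # update: uncertainty shrinks
    return angle_new, var_new

est, var = 20.0, 1.0                                # initial fiber angle (degrees)
for z in [21.5, 19.8, 20.7]:                        # noisy per-frame measurements
    est, var = kalman_update(est, var, z, var_meas=0.5)
print(round(est, 2))
```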
