Applications of Artificial Intelligence and Machine Learning in Wireless Communications and Networks

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Electrical, Electronics and Communications Engineering".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 11544

Special Issue Editors


Guest Editor
Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan 33302, Taiwan
Interests: multimedia; computer vision; wireless networks; stock prediction; natural language processing; Internet of Things

Guest Editor
School of Software, Dalian University of Technology, Dalian 116024, China
Interests: wireless networking; AI; underwater robot

Guest Editor
School of Computer and Information Engineering, Hubei University, Wuhan 430062, China
Interests: ML-based resource management; UAV; Internet of Vehicles; deep reinforcement learning

Special Issue Information

Dear Colleagues,

The great success of artificial intelligence (AI) and machine learning (ML) technologies has opened up the possibility of developing more advanced communication and network technologies. Deep-neural-network-based machine learning is a promising tool for overcoming the greatest challenges in wireless communications and networks imposed by the increasing demands for improved capacity, coverage, latency, efficiency, flexibility, compatibility, and quality of experience. This Special Issue is dedicated to documenting the latest applications and results of AI and ML technologies in wireless communications and networks. We welcome articles on the following relevant topics:

  • ML-based resource management;
  • ML in energy efficiency optimization;
  • ML in medium access control layer;
  • ML-based quality of service (QoS) management in wireless networks;
  • ML-based intelligent computing algorithms for wireless networks;
  • Novel reinforcement learning (RL) methods for wireless networks;
  • AI/ML for 5G and beyond;
  • ML-aided unmanned aerial vehicle (UAV) control;
  • ML-based killer applications in future wireless communications and networks.

Submissions on other topics that are in accordance with the theme of the Special Issue are also welcome and may take the form of original research articles, review articles, or opinions on methodologies or applications.

Prof. Dr. Jenhui Chen
Prof. Dr. Lei Wang
Dr. Zhiqun Hu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • machine learning
  • deep learning
  • ML-based resource management
  • ML-based networking
  • mobility management
  • transmission intelligence
  • distributed intelligence
  • AIoT
  • ML-based killer applications

Published Papers (5 papers)


Research

20 pages, 1114 KiB  
Article
Random-Delay-Corrected Deep Reinforcement Learning Framework for Real-World Online Closed-Loop Network Automation
by Keliang Du, Luhan Wang, Yu Liu, Haiwen Niu, Shaoxin Huang and Xiangming Wen
Appl. Sci. 2022, 12(23), 12297; https://doi.org/10.3390/app122312297 - 01 Dec 2022
Viewed by 1401
Abstract
Future mobile communication networks (beyond fifth generation (5G)) are evolving toward a service-based architecture in which network functions are fine-grained, thereby meeting the dynamic requirements of diverse and differentiated vertical applications. Consequently, the complexity of network management becomes higher, and artificial intelligence (AI) technologies can enable AI-native network automation through their ability to solve complex problems. Specifically, deep reinforcement learning (DRL) technologies are considered the key to intelligent network automation, with a feedback mechanism similar to that of an online closed-loop architecture. However, the zero-delay assumption of the standard Markov decision process (MDP) underlying traditional DRL algorithms cannot be adopted directly in real-world networks, because random delays between the agent and the environment significantly affect performance. To address this problem, this paper proposes a random-delay-corrected framework. We first abstract the scenario and model it as a partial history-dependent MDP (PH-MDP), and prove that it can be transformed into a standard MDP solvable by traditional DRL algorithms. We then propose a random-delay-corrected DRL framework with a forward model and delay-corrected trajectory sampling, which obtains training samples through continuous interaction with the environment. Finally, we propose a delayed deep Q-network (delayed-DQN) algorithm based on this framework. For evaluation, we develop a real-world cloud-native 5G core network prototype whose management architecture follows an online closed-loop mechanism. A use case on top of the prototype, namely delayed-DQN-enabled access and mobility management function (AMF) scaling, is implemented for specific evaluations. Several experiments show that the proposed methodologies perform better in randomly delayed networks than other methods (e.g., the standard DQN algorithm).
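
To make the delay-correction idea concrete, here is a minimal, self-contained Python sketch of delay-corrected action selection: the agent rolls a learned forward model over the actions already sent but not yet executed, then acts on the predicted current state rather than the stale observation. All names (forward_model, q_values, STATE_DIM) and the toy dynamics are hypothetical placeholders, not the paper's implementation.

```python
from collections import deque

import numpy as np

# Hypothetical names and shapes for illustration only; the paper's actual
# PH-MDP formulation and delayed-DQN training loop are in the article.
STATE_DIM, N_ACTIONS = 4, 3
rng = np.random.default_rng(0)

def forward_model(state, action):
    """Stand-in for the learned forward model f(s, a) -> s'.
    Here, a trivial linear update purely for demonstration."""
    nxt = state.copy()
    nxt[0] += 0.1 * (action - 1)
    return nxt

def q_values(state):
    """Stand-in for the DQN's Q-network output Q(s, .)."""
    return rng.random(N_ACTIONS)

def select_action(last_observed_state, pending_actions):
    """Delay-corrected action selection: roll the forward model over the
    actions already sent but not yet executed, then act greedily on the
    predicted 'current' state instead of the stale observation."""
    state = last_observed_state
    for a in pending_actions:
        state = forward_model(state, a)
    return int(np.argmax(q_values(state)))

# Usage: with an observation delay of up to 2 steps, up to 2 actions are
# "in flight" between the agent and the environment at any time.
pending = deque(maxlen=2)
obs = np.zeros(STATE_DIM)
for step in range(5):
    action = select_action(obs, list(pending))
    pending.append(action)
    # ...the action is sent to the environment; the delayed observation
    # that eventually arrives would replace `obs` here.
```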

11 pages, 2762 KiB  
Communication
Radio Signal Modulation Recognition Method Based on Deep Learning Model Pruning
by Xinyu Hao, Zhang Xia, Mengxi Jiang, Qiubo Ye and Guangsong Yang
Appl. Sci. 2022, 12(19), 9894; https://doi.org/10.3390/app12199894 - 01 Oct 2022
Cited by 1 | Viewed by 1287
Abstract
With the development of communication technology and the increasingly complex wireless communication channel environment, the requirements for radio modulation recognition have also increased, in order to avoid interference and improve the efficiency of radio spectrum resources. To achieve high recognition accuracy with less computational overhead, we propose a radio signal modulation recognition method based on deep learning, which applies a pruning strategy to the original model, a CNN-LSTM-DNN (CLDNN) with a double-layer long short-term memory (LSTM) network, to reduce computational overhead. The factors affecting recognition accuracy are analyzed by adjusting the parameters of each network layer. The experimental results show that the model not only achieves higher accuracy than some existing models but also reduces the computational resources required.
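
As a rough illustration of the general approach, the sketch below applies L1-magnitude weight pruning to a toy CLDNN-style stack in PyTorch. The architecture, layer sizes, 30% pruning ratio, and input shape are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy CLDNN-style stack (Conv -> two-layer LSTM -> Dense); sizes are
# placeholders chosen only to make the example run.
class TinyCLDNN(nn.Module):
    def __init__(self, n_classes=11):
        super().__init__()
        self.conv = nn.Conv1d(2, 32, kernel_size=7, padding=3)   # I/Q channels
        self.lstm = nn.LSTM(32, 64, num_layers=2, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, 2, 128) I/Q samples
        h = torch.relu(self.conv(x))       # (batch, 32, 128)
        h, _ = self.lstm(h.transpose(1, 2))
        return self.fc(h[:, -1])           # classify from the last time step

model = TinyCLDNN()

# L1-magnitude pruning: zero the 30% smallest-magnitude weights in the conv
# and dense layers, then bake the masks into the weight tensors.
for module in (model.conv, model.fc):
    prune.l1_unstructured(module, name="weight", amount=0.3)
    prune.remove(module, "weight")

logits = model(torch.randn(8, 2, 128))
print(logits.shape)                        # torch.Size([8, 11])
```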

17 pages, 1961 KiB  
Article
Call Failure Prediction in IP Multimedia Subsystem (IMS) Networks
by Amr Bahaa, Mohamed Shehata, Safa M. Gasser and Mohamed S. El-Mahallawy
Appl. Sci. 2022, 12(16), 8378; https://doi.org/10.3390/app12168378 - 22 Aug 2022
Cited by 2 | Viewed by 2450
Abstract
An explosion of traffic volume is the main driver behind launching various 5G services. The 5G network will utilize the IP Multimedia Subsystem (IMS) as its core network, the same as in 4G networks. Thus, ensuring a high level of survivability and efficient failure management in the IMS is crucial before launching 5G services. We introduce a new machine-learning-based methodology to predict call failures occurring inside the IMS network using traces of Session Initiation Protocol (SIP) communication. Predicting that a call will fail enables the operator to prevent the failure by redirecting the call to another radio access technology, initiating circuit-switched fallback (CS fallback) through a SIP 380 error response sent to the handset. The advantage of the model is not limited to call failure prediction; machine learning also reveals the root causes behind failures, including multi-factorial root causes that cannot be identified with the traditional method of manually tracking traces. We built eight different machine learning models using four classifiers (decision tree, naive Bayes, k-nearest neighbors (KNN), and support vector machine (SVM)) and two feature selection methods (filter and wrapper). Finally, we compare the models and use the one with the highest prediction accuracy to obtain the root causes behind call failures. The results demonstrate that the SVM classifier with the wrapper feature selection method yields the highest prediction accuracy, reaching 97.5%.
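
For readers unfamiliar with wrapper feature selection, here is a minimal scikit-learn sketch of the general technique: forward sequential selection in which each candidate feature subset is scored by the SVM's own cross-validated accuracy. The synthetic data is a stand-in; the paper's real features come from parsed SIP signalling traces.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data; sizes chosen only to keep the example fast.
X, y = make_classification(n_samples=300, n_features=12, n_informative=5,
                           random_state=0)

svm = SVC(kernel="rbf")
# Wrapper method: greedily add features one at a time, scoring each
# candidate subset by the SVM's cross-validated accuracy.
selector = SequentialFeatureSelector(svm, n_features_to_select=5,
                                     direction="forward", cv=3)
pipe = make_pipeline(StandardScaler(), selector, svm)
print(f"CV accuracy: {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```

Unlike a filter method, which ranks features by a model-independent statistic, the wrapper loop above re-trains the classifier for every candidate subset, which is costlier but captures feature interactions the classifier actually exploits.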

16 pages, 2815 KiB  
Article
5G Technology: ML Hyperparameter Tuning Analysis for Subcarrier Spacing Prediction Model
by Faris Syahmi Samidi, Nurul Asyikin Mohamed Radzi, Kaiyisah Hanis Mohd Azmi, Norazizah Mohd Aripin and Nayli Adriana Azhar
Appl. Sci. 2022, 12(16), 8271; https://doi.org/10.3390/app12168271 - 18 Aug 2022
Cited by 4 | Viewed by 1651
Abstract
Resource optimisation is critical because 5G is intended to be a major enabler and a leading infrastructure provider in the information and communication technology sector, supporting a wide range of upcoming services with varying requirements. Therefore, system improvement techniques, such as machine learning (ML) and deep learning, must be applied to make the model customisable. Moreover, such improvement allows the prediction system to generate the most accurate outcomes and valuable insights from data whilst enabling effective decisions. In this study, we first provide a literature study on the applications of ML and a summary of the hyperparameters influencing the prediction capabilities of ML models for communication systems. We demonstrate the behaviour of four ML models: k-nearest neighbour, classification and regression trees, random forest, and support vector machine. We then observe and elaborate on the suitable hyperparameter values for each model based on prediction accuracy. Based on our observations, an optimal hyperparameter setting for ML models is essential because it directly impacts a model's performance; therefore, understanding how the ML models respond to the system utilised is critical.
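
As a generic illustration of hyperparameter tuning for such models, the sketch below grid-searches two of the four classifiers with scikit-learn. The grids, data, and model choices are assumptions for demonstration, not the paper's experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data; the paper tunes on a 5G subcarrier-spacing dataset.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# One grid per model; each (model, grid) pair is searched independently
# and the best cross-validated setting is reported.
searches = {
    "knn": GridSearchCV(
        KNeighborsClassifier(),
        {"n_neighbors": [1, 3, 5, 9], "weights": ["uniform", "distance"]},
        cv=5),
    "svm": GridSearchCV(
        SVC(),
        {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]},
        cv=5),
}
for name, search in searches.items():
    search.fit(X, y)
    print(name, search.best_params_, f"acc={search.best_score_:.3f}")
```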

15 pages, 916 KiB  
Article
The UAV Trajectory Optimization for Data Collection from Time-Constrained IoT Devices: A Hierarchical Deep Q-Network Approach
by Zhenquan Qin, Xuan Zhang, Xinwei Zhang, Bingxian Lu, Zhonghao Liu and Linlin Guo
Appl. Sci. 2022, 12(5), 2546; https://doi.org/10.3390/app12052546 - 28 Feb 2022
Cited by 10 | Viewed by 2571
Abstract
Recently, using unmanned aerial vehicles (UAVs) to collect information from distributed sensors has become one of the hotspots in Internet of Things (IoT) research. However, previous studies on UAV-assisted data acquisition systems focused mainly on shortening the acquisition time, reducing energy consumption, and increasing the amount of collected data, while lacking optimization of data freshness. Moreover, we hope that UAVs can perform long-term data collection tasks in dynamic scenarios, under a constantly changing age of information (AoI) and within their own power levels. Therefore, we aim to maximize the quality of service (QoS) based on the freshness of data while considering the endurance of the UAVs. Since our scenario is not a sequential decision process with uniform time slots, we first transform the optimization problem into a semi-Markov decision process (SMDP) through modeling, and then propose a hierarchical deep Q-network (DQN)-based path-planning algorithm to learn the optimal strategy. The simulation results show that the algorithm outperforms the benchmark algorithm, and that the tradeoff between the system QoS and the safe power state can be achieved by adjusting the parameter βe.
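
To illustrate what the SMDP formulation changes relative to a standard MDP, here is a tabular Q-learning sketch in which each high-level option has a random duration τ and the bootstrap target is discounted by γ^τ instead of a fixed γ. Everything here (states, options, the toy environment) is a placeholder, not the paper's UAV/IoT model or its hierarchical DQN.

```python
import numpy as np

# Toy tabular SMDP Q-learning: options have variable durations tau, so the
# bootstrap target uses gamma**tau. All quantities are placeholders.
N_STATES, N_OPTIONS = 10, 4
GAMMA, ALPHA, EPSILON = 0.95, 0.1, 0.1
Q = np.zeros((N_STATES, N_OPTIONS))
rng = np.random.default_rng(0)

def env_step(state, option):
    """Stand-in environment: returns the next state, the cumulative reward
    collected over the option, and the option's duration tau."""
    tau = int(rng.integers(1, 5))          # non-uniform time slots => SMDP
    return int(rng.integers(N_STATES)), float(rng.normal()), tau

state = 0
for _ in range(1000):
    if rng.random() < EPSILON:             # epsilon-greedy over options
        option = int(rng.integers(N_OPTIONS))
    else:
        option = int(np.argmax(Q[state]))
    next_state, reward, tau = env_step(state, option)
    target = reward + GAMMA ** tau * Q[next_state].max()
    Q[state, option] += ALPHA * (target - Q[state, option])
    state = next_state
```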
