Machine Learning for Wireless Communications

A special issue of Future Internet (ISSN 1999-5903). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 12591

Special Issue Editors

Guest Editor
School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
Interests: machine learning for wireless communications; statistical signal processing; Internet of Things (IoT); 6G; spectrum sensing and sharing in cognitive radio (CR) networks

Guest Editor
School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW 2052, Australia
Interests: machine learning for wireless communications; statistical signal processing; Internet of Things (IoT); 6G; spectrum sensing and sharing in cognitive radio (CR) networks

Guest Editor
School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, NSW 2052, Australia
Interests: wireless communication; integrated access and backhaul; orthogonal time frequency space; deep learning; dynamic spectrum access; privacy and security; massive MIMO; anti-jamming

Guest Editor
School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
Interests: wireless communications; network security; privacy preservations; machine learning

Guest Editor
Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Interests: delay-Doppler communications; integrated sensing and communications; orthogonal time frequency space

Special Issue Information

Dear Colleagues,

Constantly emerging communication applications, such as truly immersive multisensory extended reality (XR), wearable devices, and unmanned aerial vehicles (UAVs), are progressively improving the quality of our daily lives. At the same time, these applications generate large amounts of data traffic with heterogeneous quality-of-service (QoS) requirements. As such, there is a growing need to integrate machine learning technologies into the design, planning, and optimization of future wireless communications. Modern machine learning techniques, especially deep learning, have recently been shown to provide powerful data-driven capabilities that facilitate wireless communications in a variety of scenarios, such as channel modeling, channel estimation, signal detection, resource allocation, and network optimization. This Special Issue aims to bring together advances in research on machine learning for wireless communications across a broad range of applications.

Topics of interest include but are not limited to the following:

  • Machine learning (including deep learning, deep reinforcement learning, etc.) for signal detection, classification, and compression;
  • Machine learning for spectrum sensing, localization, and positioning;
  • Machine learning for channel modeling, estimation, and prediction;
  • Machine learning for resource allocation and network optimization;
  • Machine learning for new emerging applications toward 6G (including intelligent reflecting surfaces, unmanned aerial vehicles, the Internet of Things, etc.);
  • Performance analysis and evaluation of machine learning empowered wireless communication systems;
  • Machine learning for vehicular networks;
  • Distributed machine learning/federated learning and communications.

Dr. Chang Liu
Dr. Shihao Yan
Dr. Qingqing Cheng
Dr. Minghui Min
Dr. Weijie Yuan
Guest Editors

Keywords

  • machine learning for wireless communications
  • deep learning
  • deep reinforcement learning
  • neural network
  • 6G communications

Published Papers (3 papers)


Research

15 pages, 2482 KiB  
Article
Low-Complexity GSM Detection Based on Maximum Ratio Combining
by Xinhe Zhang, Wenbo Lv and Haoran Tan
Future Internet 2022, 14(5), 159; https://0-doi-org.brum.beds.ac.uk/10.3390/fi14050159 - 23 May 2022
Cited by 1 | Viewed by 1601
Abstract
Generalized spatial modulation (GSM) is an extension of spatial modulation (SM), and one of its main advantages is further improved spectral efficiency. However, using multiple active antennas for transmission also makes demodulation at the receiver more difficult. To reduce the high computational complexity of optimal maximum likelihood (ML) detection, two sub-optimal detection algorithms are proposed that reduce the number of transmit antenna combinations (TACs) examined at the receiver. The first is a maximum ratio combining detection algorithm based on a repetitive sorting strategy, termed MRC-RS, which uses MRC with repetitive sorting to select the most likely TACs during detection. The second, termed MRC-MP, is a maximum ratio combining detection algorithm based on the iterative idea of orthogonal matching pursuit; it reduces the number of TACs through a finite number of iterations to lower the computational complexity. For M-QAM constellations, a hard-limited maximum likelihood (HLML) detection algorithm is introduced to compute the modulation symbols; for M-PSK constellations, a low-complexity maximum likelihood (LCML) algorithm is introduced for the same purpose. The computational complexity of both symbol-detection algorithms is independent of the modulation order. Simulation results show that, for GSM systems with a large number of TACs, the two proposed algorithms achieve almost the same bit error rate (BER) performance as ML detection while greatly reducing the computational complexity.
(This article belongs to the Special Issue Machine Learning for Wireless Communications)
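To make the candidate-reduction idea concrete, below is a minimal NumPy sketch of MRC-based TAC pruning followed by ML detection over the reduced candidate set. The function name, the rank-sum rule used to score TACs, and the brute-force symbol search are illustrative assumptions; they are not the paper's exact MRC-RS/MRC-MP procedures or its HLML/LCML symbol demodulators.

```python
import numpy as np
from itertools import combinations, product

def mrc_reduced_ml_detect(y, H, n_active, constellation, n_candidates=4):
    """Illustrative sketch: MRC-based TAC pruning, then ML detection on the reduced set.

    y: (Nr,) received vector; H: (Nr, Nt) channel matrix; n_active: active antennas per TAC;
    constellation: 1-D array of modulation symbols; n_candidates: number of TACs kept.
    """
    Nr, Nt = H.shape
    # MRC statistic per transmit antenna: matched-filter output normalised by column energy
    mrc = np.abs(H.conj().T @ y) / (np.linalg.norm(H, axis=0) ** 2)
    # Rank antennas by MRC metric; score each TAC by the summed ranks of its antennas
    ranked = np.argsort(mrc)[::-1]
    rank_of = {int(a): r for r, a in enumerate(ranked)}
    all_tacs = list(combinations(range(Nt), n_active))
    reduced = sorted(all_tacs, key=lambda tac: sum(rank_of[a] for a in tac))[:n_candidates]
    # ML search restricted to the reduced TAC set
    best_metric, best_tac, best_s = np.inf, None, None
    for tac in reduced:
        Hs = H[:, list(tac)]                       # (Nr, n_active) sub-channel
        for symbols in product(constellation, repeat=n_active):
            s = np.array(symbols)
            metric = np.linalg.norm(y - Hs @ s) ** 2
            if metric < best_metric:
                best_metric, best_tac, best_s = metric, tac, s
    return best_tac, best_s

# Toy example: 5 Tx antennas, 2 active, QPSK, noiseless channel for illustration
rng = np.random.default_rng(0)
Nr, Nt = 4, 5
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
true_tac, true_s = (0, 3), qpsk[[1, 2]]
y = H[:, list(true_tac)] @ true_s
tac_hat, s_hat = mrc_reduced_ml_detect(y, H, n_active=2, constellation=qpsk)
```

Keeping only n_candidates TACs trades a small BER loss for a search whose size scales with the retained candidates rather than with the full number of antenna combinations, which is the general complexity trade-off the paper exploits.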

19 pages, 3481 KiB  
Article
Task Offloading Based on LSTM Prediction and Deep Reinforcement Learning for Efficient Edge Computing in IoT
by Youpeng Tu, Haiming Chen, Linjie Yan and Xinyan Zhou
Future Internet 2022, 14(2), 30; https://0-doi-org.brum.beds.ac.uk/10.3390/fi14020030 - 18 Jan 2022
Cited by 27 | Viewed by 6808
Abstract
In IoT (Internet of Things) edge computing, task offloading can introduce additional transmission delays and transmission energy consumption. To reduce the resource cost of task offloading and improve the utilization of server resources, in this paper we model the task offloading problem as a joint decision-making problem for cost minimization that integrates processing latency, processing energy consumption, and the throw rate of latency-sensitive tasks. An Online Predictive Offloading (OPO) algorithm based on Deep Reinforcement Learning (DRL) and Long Short-Term Memory (LSTM) networks is proposed to solve this offloading decision problem. In the training phase, the algorithm predicts the load of the edge server in real time with the LSTM network, which effectively improves the convergence accuracy and convergence speed of the DRL algorithm in the offloading process. In the testing phase, the LSTM network is used to predict the characteristics of the next task, and the DRL decision model then allocates computational resources for the task in advance, further reducing the response delay and enhancing the offloading performance of the system. The experimental evaluation shows that the algorithm reduces the average latency by 6.25%, the offloading cost by 25.6%, and the task throw rate by 31.7%.
(This article belongs to the Special Issue Machine Learning for Wireless Communications)
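As a rough illustration of how an LSTM load forecast can be coupled with a DRL decision model, the PyTorch sketch below feeds a predicted next-step server load into a small Q-network that scores a two-action offloading decision (execute locally vs. offload). The class names, state features, action space, and layer sizes are assumptions for illustration; the paper's OPO algorithm, its reward definition, and the full DRL training loop (replay buffer, target network, exploration) are not reproduced here.

```python
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """LSTM that predicts the next edge-server load from a window of past load samples."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, load_history):            # load_history: (batch, window, 1)
        out, _ = self.lstm(load_history)
        return self.head(out[:, -1])             # (batch, 1) predicted next load

class OffloadQNet(nn.Module):
    """Q-network scoring offloading actions from task features plus the predicted load."""
    def __init__(self, task_dim=3, n_actions=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, task_feat, predicted_load):
        return self.net(torch.cat([task_feat, predicted_load], dim=-1))

# Toy forward pass: 8 tasks, each with a window of 16 past load samples
predictor, qnet = LoadPredictor(), OffloadQNet()
history = torch.rand(8, 16, 1)
tasks = torch.rand(8, 3)                         # e.g. size, CPU cycles, deadline (illustrative)
q_values = qnet(tasks, predictor(history))
action = q_values.argmax(dim=-1)                 # 0 = execute locally, 1 = offload
```

In a full pipeline, the Q-network would be trained with a DQN-style loss over logged (state, action, cost) transitions, with the LSTM forecast serving as the part of the state that captures future server congestion.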

21 pages, 706 KiB  
Article
Underwater Target Recognition Based on Multi-Decision LOFAR Spectrum Enhancement: A Deep-Learning Approach
by Jie Chen, Bing Han, Xufeng Ma and Jian Zhang
Future Internet 2021, 13(10), 265; https://0-doi-org.brum.beds.ac.uk/10.3390/fi13100265 - 13 Oct 2021
Cited by 19 | Viewed by 2737
Abstract
Underwater target recognition is an important supporting technology for the development of marine resources, and it is mainly limited by the purity of feature extraction and the universality of recognition schemes. The low-frequency analysis and recording (LOFAR) spectrum is one of the key features of an underwater target and can be used for feature extraction. However, complex underwater environmental noise and the extremely low signal-to-noise ratio of the target signal lead to breakpoints in the LOFAR spectrum, which seriously hinder underwater target recognition. To overcome this issue and further improve recognition performance, we adopt a deep-learning approach to underwater target recognition and propose a novel LOFAR spectrum enhancement (LSE)-based recognition scheme consisting of preprocessing, offline training, and online testing. In preprocessing, we design a multi-step decision-based LOFAR spectrum enhancement algorithm to recover the breakpoints in the LOFAR spectrum. In offline training, the enhanced LOFAR spectrum is used as the input of a convolutional neural network (CNN), and a LOFAR-based CNN (LOFAR-CNN) is developed for online recognition. Taking advantage of the powerful feature-extraction capability of CNNs, the proposed LOFAR-CNN further improves the recognition accuracy. Finally, extensive simulation results demonstrate that the LOFAR-CNN network achieves a recognition accuracy of 95.22%, outperforming state-of-the-art methods.
(This article belongs to the Special Issue Machine Learning for Wireless Communications)
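The recognition stage can be pictured as a standard CNN classifier operating on the enhanced LOFAR spectrogram. The PyTorch sketch below is a minimal stand-in, assuming a single-channel spectrogram input and four target classes; the layer sizes, class count, and the multi-step-decision enhancement step itself are illustrative assumptions rather than the paper's actual LOFAR-CNN architecture.

```python
import torch
import torch.nn as nn

class LofarCNN(nn.Module):
    """Small CNN classifier over (1, freq, time) LOFAR spectrograms."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling keeps the head input-size agnostic
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                     # x: (batch, 1, F, T)
        return self.classifier(self.features(x).flatten(1))

# Toy batch of enhanced LOFAR spectrograms (values would come from the breakpoint-recovery
# enhancement step described in the paper, which is not reproduced here)
model = LofarCNN(n_classes=4)
logits = model(torch.rand(2, 1, 128, 128))
pred = logits.argmax(dim=-1)                  # predicted target class per spectrogram
```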
