Topic Editors

1. Digital Systems, University of Piraeus, Piraeus, Greece
2. Department of Electrical and Computer Engineering, University of Western Macedonia, 50100 Kozani, Greece
3. Department of Computer Science, International Hellenic University, 65404 Kavala, Greece
4. Department of Networks and Digital Media, School of Computer Science and Mathematics, SEC, Kingston University, London KT1 2EE, UK

Next Generation Intelligent Communications and Networks

Abstract submission deadline: 31 May 2024
Manuscript submission deadline: 31 August 2024
Viewed by: 41891

Topic Information

Dear Colleagues,

In response to the spectrum scarcity caused by the aggressive proliferation of wireless devices and of quality-of-service (QoS)- and quality-of-experience (QoE)-hungry services, which are expected to support a broad range of diverse multi-scale and multi-environment applications, sixth-generation (6G) wireless networks have adopted higher frequency bands, such as the millimeter-wave, terahertz (THz) and optical bands. High-frequency wireless communications are recognized as a technological enabler of a varied set of use cases, from in-body nano-scale networks to indoor and outdoor wireless personal/local area and fronthaul/backhaul networks. Nano-scale applications require compact transceiver designs and self-organized ad hoc network topologies. Macro-scale applications, on the other hand, demand flexibility, sustainability, adaptability and security in an ever-changing heterogeneous environment. Moreover, the ability to support data rates of up to 1 Tb/s and energy-efficient massive connectivity are only some of the key demands.

To address these requirements, artificial intelligence (AI), in combination with novel structures capable of altering the wireless environment, has been regarded as a complementary pillar of 6G wireless THz systems. AI is expected to enable a series of new features in next-generation networks, including, but not limited to, self-aggregation, context awareness, self-configuration and opportunistic deployment. In addition, integrating AI into wireless networks is predicted to bring about a revolutionary transformation of conventional cognitive radio systems into intelligent platforms by unlocking the full potential of radio signals and exploiting new degrees of freedom.

In this context, this Topic aims to present papers investigating AI-empowered and/or AI-enabled next-generation wireless systems and networks. Potential topics include, but are not limited to, the following:

  • Identification of communication systems and networks requirements that call for the use of AI approaches.
  • AI-enabled architectures with an emphasis on open radio-access networks, SD fabric and verticals, such as agriculture, self-driving vehicles, automation, Industry 4.0, etc.
  • Semantic and task-oriented communications beyond Shannon performance.
  • Topics related to AI-empowered physical layers, such as: machine learning channel modeling and/or estimation approaches based on point cloud ray tracing algorithms or similar schemes, as well as channel prediction, involving reconfigurable-intelligent-surface-enabled wireless systems; modulation recognition and signal detection in complex wireless environments; and analog, digital, hybrid and reconfigurable intelligent surface (RIS) beamforming design.
  • Medium and multiple access control: 3D radio resource management, channel allocation, power management, blockage avoidance schemes, localization approaches, pro-active and predictive mobility management, intelligent routing, etc.
  • Novel AI deployment schemes for next-generation networks.

Dr. Alexandros-Apostolos Boulogeorgos
Dr. Panagiotis Sarigiannidis
Dr. Thomas Lagkas
Prof. Dr. Vasileios Argyriou
Prof. Dr. Pantelis Angelidis
Topic Editors

Keywords

  • artificial intelligence
  • explainable AI
  • federated learning
  • machine learning
  • medium and multiple access control
  • physical layer
  • radio resource management
  • reinforcement learning
  • transfer learning
  • semantic communications

Participating Journals

Journal Name (abbreviation) | Impact Factor | CiteScore | Launched | First Decision (median) | APC
Applied Sciences (applsci)  | 2.7           | 4.5       | 2011     | 16.9 days               | CHF 2400
Digital (digital)           | -             | -         | 2021     | 22.7 days               | CHF 1000
Electronics (electronics)   | 2.9           | 4.7       | 2012     | 15.6 days               | CHF 2400
Sensors (sensors)           | 3.9           | 6.8       | 2001     | 17 days                 | CHF 2600
Telecom (telecom)           | -             | 3.1       | 2020     | 26.1 days               | CHF 1200

Preprints.org is a multidisciplinary platform providing preprint services, dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (23 papers)

20 pages, 1082 KiB  
Article
An Adaptive State Consistency Architecture for Distributed Software-Defined Network Controllers: An Evaluation and Design Consideration
by Rawan Alsheikh, Etimad Fadel and Nadine Akkari
Appl. Sci. 2024, 14(6), 2627; https://0-doi-org.brum.beds.ac.uk/10.3390/app14062627 - 21 Mar 2024
Viewed by 473
Abstract
The Physically Distributed Logically Centralized (PDLC) software-defined network (SDN) control plane is physically dispersed across several controllers with a global network view for performance, scalability, and fault tolerance. This design, providing control applications with a global network view, necessitates network state synchronization among controllers. The amount of inter-controller synchronization can affect the performance and scalability of the system. The absence of standardized communication protocols for East-bound SDN interfaces underscores the need for high-performance communication among diverse SDN controllers to maintain consistent state exchange. An inconsistent controller’s network view can significantly impact network effectiveness and application performance, particularly in dynamic networks. This survey paper offers an overview of noteworthy AI and non-AI solutions for PDLC SDN architecture in industry and academia, specifically focusing on their approaches to consistency and synchronization challenges. The suggested PDLC framework achieves an adaptive controller-to-controller synchronization rate in a dynamic network environment. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
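The core idea of the paper above, an adaptive controller-to-controller synchronization rate, can be sketched very simply: synchronize state more often when the local network view is changing quickly, and back off when it is stable. The function below is our illustrative reading of that idea, not the paper's framework; the constants are placeholders.

```python
# Sketch of an adaptive inter-controller synchronization interval.
# `change_rate` (state changes per second) and the clamp bounds are
# hypothetical values, not taken from the paper.

def next_sync_interval(change_rate, base=1.0, min_iv=0.1, max_iv=10.0):
    """Seconds between state syncs, inversely tied to observed churn."""
    interval = base / (1.0 + change_rate)   # more churn -> shorter interval
    return min(max_iv, max(min_iv, interval))

print(next_sync_interval(change_rate=0.0))   # stable network -> 1.0 s
print(next_sync_interval(change_rate=9.0))   # churning network -> 0.1 s
```

The clamp keeps the controller from syncing so rarely that views diverge, or so often that synchronization traffic itself degrades performance.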

33 pages, 3578 KiB  
Article
6G Goal-Oriented Communications: How to Coexist with Legacy Systems?
by Mattia Merluzzi, Miltiadis C. Filippou, Leonardo Gomes Baltar, Markus Dominik Mueck and Emilio Calvanese Strinati
Telecom 2024, 5(1), 65-97; https://0-doi-org.brum.beds.ac.uk/10.3390/telecom5010005 - 24 Jan 2024
Viewed by 1147
Abstract
6G will connect heterogeneous intelligent agents to make them natively operate complex cooperative tasks. When connecting intelligence, two main research questions arise to identify how artificial intelligence and machine learning models behave depending on (i) their input data quality, affected by errors induced by interference and additive noise during wireless communication; (ii) their contextual effectiveness and resilience to interpret and exploit the meaning behind the data. Both questions are within the realm of semantic and goal-oriented communications. With this paper, we investigate how to effectively share communication spectrum resources between a legacy communication system (i.e., data-oriented) and a new goal-oriented edge intelligence one. Specifically, we address the scenario of an enhanced Mobile Broadband (eMBB) service, i.e., a user uploading a video stream to a radio access point, interfering with an edge inference system, in which a user uploads images to a Mobile Edge Host that runs a classification task. Our objective is to achieve, through cooperation, the highest eMBB service data rate, subject to a targeted goal effectiveness of the edge inference service, namely the probability of confident inference on time. We first formalize a general definition of a goal in the context of wireless communications. This includes the goal effectiveness (i.e., the goal achievability rate, or the probability of achieving the goal), as well as the goal cost (i.e., the network resource consumption needed to achieve the goal with target effectiveness). We argue and show, through numerical evaluations, that communication reliability and goal effectiveness are not straightforwardly linked. Then, after a performance evaluation aiming to clarify the difference between communication performance and goal effectiveness, a long-term optimization problem is formulated and solved via Lyapunov stochastic network optimization tools to guarantee the desired target performance. Finally, our numerical results assess the advantages of the proposed optimization and the superiority of the goal-oriented strategy against baseline 5G-compliant legacy approaches, under both stationary and non-stationary communication (and computation) environments. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
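The abstract defines goal effectiveness as the probability of a confident inference delivered on time. That definition can be estimated directly by Monte Carlo over (confidence, latency) pairs; the distributions and thresholds below are toy placeholders, not the paper's system model.

```python
import random

# Sketch: goal effectiveness = P(confidence >= c_min AND latency <= deadline),
# estimated from sampled (confidence, latency) outcomes of an edge
# inference service. All numbers here are illustrative.

def goal_effectiveness(samples, conf_min, deadline_ms):
    ok = sum(1 for c, t in samples if c >= conf_min and t <= deadline_ms)
    return ok / len(samples)

rng = random.Random(1)
# Toy outcomes: confidence uniform in [0.5, 1.0], latency uniform in [5, 40] ms.
samples = [(rng.uniform(0.5, 1.0), rng.uniform(5, 40)) for _ in range(10000)]
print(round(goal_effectiveness(samples, conf_min=0.8, deadline_ms=30), 2))
```

Note how this metric differs from communication reliability: a perfectly delivered image that yields a low-confidence classification still fails the goal, which is the decoupling the paper argues for.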

30 pages, 970 KiB  
Article
Study of the Impact of Data Compression on the Energy Consumption Required for Data Transmission in a Microcontroller-Based System
by Dominik Piątkowski, Tobiasz Puślecki and Krzysztof Walkowiak
Sensors 2024, 24(1), 224; https://0-doi-org.brum.beds.ac.uk/10.3390/s24010224 - 30 Dec 2023
Viewed by 724
Abstract
As the number of Internet of Things (IoT) devices continues to rise dramatically each day, the data generated and transmitted by them follow similar trends. Given that a significant portion of these embedded devices operate on battery power, energy conservation becomes a crucial factor in their design. This paper aims to investigate the impact of data compression on the energy consumption required for data transmission. To achieve this goal, we conduct a comprehensive study using various transmission modules in a severely resource-limited microcontroller-based system designed for battery power. Our study evaluates the performance of several compression algorithms, conducting a detailed analysis of computational and memory complexity, along with performance metrics. The primary finding of our study is that by carefully selecting an algorithm for compressing different types of data before transmission, a significant amount of energy can be saved. Moreover, our investigation demonstrates that for a battery-powered embedded device transmitting sensor data based on the STM32F411CE microcontroller, the recommended transmission module is the nRF24L01+ board, as it requires the least amount of energy to transmit one byte of data. This module is most effective when combined with the LZ78 algorithm for optimal energy and time efficiency. In the case of image data, our findings indicate that the use of the JPEG algorithm for compression yields the best results. Overall, our research underscores the importance of selecting appropriate compression algorithms tailored to specific data types, contributing to enhanced energy efficiency in IoT devices. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
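The tradeoff the study measures can be stated as a small model: total energy is the CPU cost of compressing plus the radio cost of sending the (smaller) compressed payload, and the best algorithm depends on which term dominates. The sketch below uses made-up energy figures, not the paper's measurements for the STM32F411CE / nRF24L01+ setup.

```python
# Sketch: choosing a compression algorithm by total per-message energy.
# Ratios and per-byte energies (in uJ) are illustrative placeholders.

def total_energy(raw_bytes, ratio, e_tx_per_byte, e_cpu_per_byte):
    """Energy to compress `raw_bytes` and transmit the compressed payload."""
    compressed = raw_bytes * ratio                 # bytes left after compression
    return raw_bytes * e_cpu_per_byte + compressed * e_tx_per_byte

# Hypothetical candidates: (compression ratio, CPU energy per input byte, uJ).
candidates = {
    "none": (1.00, 0.0),
    "lz78": (0.55, 0.8),
    "heatshrink": (0.65, 0.4),
}

def best_algorithm(raw_bytes, e_tx_per_byte):
    return min(candidates,
               key=lambda a: total_energy(raw_bytes, candidates[a][0],
                                          e_tx_per_byte, candidates[a][1]))

# When radio energy dominates, stronger compression wins; when the radio is
# very cheap, skipping compression can be the better choice.
print(best_algorithm(1024, e_tx_per_byte=10.0))
print(best_algorithm(1024, e_tx_per_byte=0.1))
```

This is why the paper's recommendation is a pairing (radio module plus algorithm) rather than a single "best" compressor.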

24 pages, 4258 KiB  
Article
Saving Energy Using the Modified Heuristic Algorithm for Energy Saving (MHAES) in Software-Defined Networks
by Péter András Agg and Zsolt Csaba Johanyák
Sensors 2023, 23(23), 9581; https://0-doi-org.brum.beds.ac.uk/10.3390/s23239581 - 02 Dec 2023
Viewed by 873
Abstract
Energy consumption is a significant concern in daily life, yet it is often neglected in terms of cost and environmental impact. Since IT networks play an essential role in our daily routines, energy saving in this area is crucial. However, the implementation of energy-efficiency solutions in this field has to ensure that network performance is minimally affected. Traditional networks encounter difficulties in achieving this goal. Software-Defined Networks (SDN), which have gained popularity in the past decade, offer easy-to-use opportunities to increase energy efficiency; features like central controllability and quick programmability can help to reduce energy consumption. In this article, a new algorithm named the Modified Heuristic Algorithm for Energy Saving (MHAES) is presented and compared to eight commonly used methods on different topologies in terms of energy efficiency. The results indicate that, by applying a forecast-based threshold value, keeping only a minimal number of nodes in an active state, and ensuring that nodes not participating in packet transmission remain in sleep mode, MHAES maintains an appropriate load balance while saving more energy than several other well-known procedures. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
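The threshold idea in the abstract can be sketched in a few lines: nodes whose forecast load stays below a threshold are put to sleep, and only the rest keep forwarding. This is our simplified reading of the mechanism, not the paper's full algorithm (which must also re-route traffic around sleeping nodes).

```python
# Sketch of forecast-threshold node selection for energy saving.
# Node names and loads are toy values, not from the paper.

def select_active_nodes(forecast_load, threshold):
    """Split nodes into (active, sleeping) sets by forecast load."""
    active = {n for n, load in forecast_load.items() if load >= threshold}
    sleeping = set(forecast_load) - active
    return active, sleeping

load = {"s1": 0.70, "s2": 0.05, "s3": 0.40, "s4": 0.02}
active, sleeping = select_active_nodes(load, threshold=0.10)
print(sorted(active))    # nodes that keep forwarding packets
print(sorted(sleeping))  # nodes switched to sleep mode
```

The threshold is the knob that trades energy against performance: set it too high and the surviving active nodes congest, which is why the paper couples it with load balancing.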

14 pages, 349 KiB  
Article
Port-Based Anonymous Communication Network: An Efficient and Secure Anonymous Communication Network
by Xiance Meng and Mangui Liang
Sensors 2023, 23(21), 8810; https://0-doi-org.brum.beds.ac.uk/10.3390/s23218810 - 29 Oct 2023
Viewed by 747
Abstract
With the rise of the internet, there has been an increasing focus on user anonymity. Anonymous communication networks (ACNs) aim to protect the identity privacy of users in the network. As a typical ACN, Tor achieves user anonymity by relaying user data through a series of relay nodes. However, this results in higher latency due to the transmission of network traffic between multiple nodes. This paper proposes a port-based anonymous communication network (PBACN) to address this issue. First, we propose a path construction algorithm. This algorithm describes constructing paths by partitioning the communication path information, which can reduce the probability of being discovered by adversaries. Secondly, we design a port-based source routing addressing method. During data transmission from the source to the destination, each node can directly forward the data by resolving the address into the port of each node. This method eliminates the need for table lookups, reducing the complexity of routing. Lastly, we propose an entropy-based metric to measure the anonymity of different ACNs. In terms of experimental evaluation, we quantitatively analyze the anonymity and end-to-end delay of various ACNs. The experimental results show that our proposed method reduces end-to-end delay by approximately 25% compared to Tor. When the adversary fraction is 20%, PBACN can improve the anonymity degree by approximately 4%. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
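The "entropy-based metric" mentioned in the abstract is, in the standard formulation, the normalized Shannon entropy of the adversary's probability distribution over candidate senders. The sketch below shows that generic metric; the paper's exact definition may differ in detail.

```python
import math

# Entropy-based anonymity degree: H(p) / log2(N).
# 1.0 = the adversary suspects everyone equally (best anonymity),
# 0.0 = the sender is fully identified.

def anonymity_degree(probs):
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    return h / math.log2(len(probs))

print(anonymity_degree([0.25] * 4))                # uniform suspicion -> 1.0
print(anonymity_degree([0.97, 0.01, 0.01, 0.01]))  # nearly identified -> small
```

A metric like this lets different ACN designs (Tor, PBACN, ...) be compared on one scale, which is how the abstract's "improve the anonymity degree by approximately 4%" figure is expressed.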

23 pages, 449 KiB  
Article
Harnessing the Potential of Emerging Technologies to Break down Barriers in Tactical Communications
by Laura Concha Salor and Victor Monzon Baeza
Telecom 2023, 4(4), 709-731; https://0-doi-org.brum.beds.ac.uk/10.3390/telecom4040032 - 16 Oct 2023
Cited by 2 | Viewed by 1706
Abstract
In the realm of military communications, the advent of new technologies like 5G and the future 6G networks holds promise. However, incorporating these technologies into tactical environments presents unique security challenges. This article delves into an analysis of these challenges by examining practical use cases for military communications where emerging technologies can be applied. Our focus lies on identifying and presenting a range of emerging technologies associated with 5G and 6G, including the Internet of Things (IoT), tactile internet, network virtualization and softwarization, artificial intelligence, network slicing, digital twins, neuromorphic processors, joint sensing and communications, and blockchain. We specifically explore their applicability in tactical environments by proposing potential use cases in which they can be deployed. Additionally, we provide an overview of legacy tactical radios, so that future research can address the challenges these technologies pose to them. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

16 pages, 2713 KiB  
Article
CSI Feedback Model Based on Multi-Source Characterization in FDD Systems
by Fei Pan, Xiaoyu Zhao, Boda Zhang, Pengjun Xiang, Mengdie Hu and Xuesong Gao
Sensors 2023, 23(19), 8139; https://0-doi-org.brum.beds.ac.uk/10.3390/s23198139 - 28 Sep 2023
Cited by 1 | Viewed by 763
Abstract
In wireless communication, to fully utilize the spectrum and energy efficiency of the system, it is necessary to obtain the channel state information (CSI) of the link. However, in Frequency Division Duplexing (FDD) systems, CSI feedback wastes part of the spectrum resources. In order to save spectrum resources, the CSI needs to be compressed. However, many current deep-learning algorithms have complex structures and a large number of model parameters. When the computational and storage resources are limited, the large number of model parameters will decrease the accuracy of CSI feedback, which cannot meet the application requirements. In this paper, we propose a neural network-based CSI feedback model, Mix_Multi_TransNet, which considers both the spatial characteristics and temporal sequence of the channel, aiming to provide higher feedback accuracy while reducing the number of model parameters. Through experiments, it is found that Mix_Multi_TransNet achieves higher accuracy than the traditional CSI feedback network in both indoor and outdoor scenes. In the indoor scene, the NMSE gains of Mix_Multi_TransNet are 4.06 dB, 4.92 dB, 4.82 dB, and 6.47 dB for compression ratio η = 1/8, 1/16, 1/32, 1/64, respectively. In the outdoor scene, the NMSE gains of Mix_Multi_TransNet are 3.63 dB, 6.24 dB, 4.71 dB, 4.60 dB, and 2.93 dB for compression ratio η = 1/4, 1/8, 1/16, 1/32, 1/64, respectively. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
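The NMSE figures quoted in the abstract follow the usual definition for CSI feedback work: the reconstruction error energy normalized by the channel energy, in dB, with a "gain" being the dB difference between two models. The sketch below shows that computation on synthetic data; the channel shapes and noise level are toy assumptions, not the COST2100-style datasets such papers typically use.

```python
import numpy as np

# NMSE in dB between a true channel matrix and its reconstruction.
# Lower (more negative) is better.

def nmse_db(h_true, h_hat):
    err = np.linalg.norm(h_true - h_hat) ** 2
    ref = np.linalg.norm(h_true) ** 2
    return 10 * np.log10(err / ref)

rng = np.random.default_rng(0)
h = rng.standard_normal((32, 32))                 # toy "true" channel
h_hat = h + 0.1 * rng.standard_normal((32, 32))   # 10% reconstruction noise
print(round(nmse_db(h, h_hat), 1))                # roughly -20 dB
```

Under this definition, a "4.06 dB NMSE gain" means Mix_Multi_TransNet's error energy is about 2.5x smaller than the baseline's at the same compression ratio.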

20 pages, 13010 KiB  
Article
Solving Load Balancing Problems in Routing and Limiting Traffic at the Network Edge
by Alexander Barkalov, Oleksandr Lemeshko, Oleksandra Yeremenko, Larysa Titarenko and Maryna Yevdokymenko
Appl. Sci. 2023, 13(17), 9489; https://0-doi-org.brum.beds.ac.uk/10.3390/app13179489 - 22 Aug 2023
Cited by 1 | Viewed by 990
Abstract
This study focuses on creating and investigating models that optimize load balancing in communication networks by managing routing and traffic limitations. The purpose is to use these models to optimize the network's routing and traffic limitations while ensuring predictable quality-of-service levels and adhering to traffic engineering requirements for routing and limiting traffic at the network edge. To achieve this aim, a mathematical optimization model was developed based on a chosen optimality criterion. Two modifications of traffic engineering routing were created, namely one with a linear limitation model (TER-LLM) and one with traffic engineering limitation (TER-TEL), each considering the main features of a packet flow: intensity and priority. The proposed solutions were compared by analyzing various data inputs, including the ratio of flow parameters and the intensity with which packets are limited at the border router. The study presents recommendations on the optimal use of the proposed solutions based on their respective features and advantages. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

15 pages, 1470 KiB  
Article
Physical-Layer Security with Irregular Reconfigurable Intelligent Surfaces for 6G Networks
by Emmanuel Obeng Frimpong, Bong-Hwan Oh, Taehoon Kim and Inkyu Bang
Sensors 2023, 23(4), 1881; https://0-doi-org.brum.beds.ac.uk/10.3390/s23041881 - 07 Feb 2023
Viewed by 3048
Abstract
The goal of 6G is to make far-reaching changes in communication systems with stricter demands, such as high throughput, extremely low latency, stronger security, and ubiquitous connectivity. Several promising techniques, such as reconfigurable intelligent surfaces (RISs), have been introduced to achieve these goals. An RIS is a 2D low-cost array of reflecting elements that can adjust the electromagnetic properties of an incident signal. In this paper, we guarantee secrecy by using an irregular RIS (IRIS). The main idea of an IRIS is to irregularly activate reflecting elements for a given number of RIS elements. In this work, we consider a communication scenario in which, with the aid of an IRIS, a multi-antenna base station establishes a secure link with a legitimate single-antenna user in the presence of a single-antenna eavesdropper. To this end, we formulate a topology-and-precoding optimization problem to maximize the secrecy rate. We then propose a Tabu search-based algorithm to jointly optimize the RIS topology and the precoding design. Finally, we present simulation results to validate the proposed algorithm, which highlights the performance gain of the IRIS in improving secure transmissions compared to an RIS. Our results show that exploiting an IRIS can allow additional spatial diversity to be achieved, resulting in secrecy performance improvement and overcoming the limitations of conventional RIS-assisted systems (e.g., a large number of active elements). Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
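The secrecy rate that the topology-and-precoding optimization maximizes is, in its generic physical-layer-security form, the positive gap between the legitimate user's achievable rate and the eavesdropper's. The sketch below shows that objective only; the paper's full system model (RIS phase shifts, precoding vectors) is omitted.

```python
import math

# Generic secrecy rate: max(0, C_user - C_eve), in bits/s/Hz,
# with C = log2(1 + SNR) for each link.

def secrecy_rate(snr_user, snr_eve):
    return max(0.0, math.log2(1 + snr_user) - math.log2(1 + snr_eve))

print(secrecy_rate(snr_user=15.0, snr_eve=3.0))  # positive secrecy rate: 2.0
print(secrecy_rate(snr_user=2.0, snr_eve=8.0))   # eavesdropper stronger: 0.0
```

The role of the IRIS in the paper is to shape the two SNRs jointly, boosting the legitimate link while suppressing the eavesdropper's, so that this gap grows.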

17 pages, 1677 KiB  
Article
Machine Learning Based Recommendation System for Web-Search Learning
by Veeramanickam M. R. M., Ciro Rodriguez, Carlos Navarro Depaz, Ulises Roman Concha, Bishwajeet Pandey, Reena S. Kharat and Raja Marappan
Telecom 2023, 4(1), 118-134; https://0-doi-org.brum.beds.ac.uk/10.3390/telecom4010008 - 01 Feb 2023
Cited by 5 | Viewed by 3021
Abstract
Nowadays, e-learning and web-based learning are the most integrated new learning methods in schools, colleges, and higher educational institutions. The recent web-search-based learning methodological approach has helped online users (learners) to search for the required topics from the available online resources. The learners extracted knowledge from textual, video, and image formats through web searching. This research analyzes the learner's significant attention to searching for the required information online and develops a new recommendation system using machine learning (ML) to perform the web searching. The learner's navigation and eye movements are recorded using sensors. The proposed model automatically analyzes the learners' interests while performing online searches and the origin of the acquired and learned information. The ML model maps the text and video contents and obtains a better recommendation. The proposed model analyzes and tracks online resource usage and comprises the following steps: information logging, information processing, and word mapping operations. The learner's knowledge of the captured online resources using the sensors is analyzed to enhance the response time, selectivity, and sensitivity. On average, the learners spent more hours accessing the video and the textual information and fewer hours accessing the images. The percentage of participants addressing the two different subject quizzes, Q1 and Q2, increased when the learners attempted the quiz after the web search; 43.67% of the learners addressed the quiz Q1 before completing the web search, and 75.92% addressed the quiz Q2 after the web search. The average word-count analysis corresponding to text, videos, overlapping text or video, and comprehensive resources indicates that the proposed model can also be applied to a continuous multi-session online search-learning environment. The experimental analysis indicates that better measures are obtained for the proposed recommender using sensors and ML compared with other methods in terms of recall, ranking score, and precision. The proposed model achieves a precision of 27% when the recommendation size becomes 100. The root mean square error (RMSE) lies between 8% and 16% when the number of learners < 500, and the maximum value of RMSE is 21% when the number of learners reaches 1500. The proposed recommendation model achieves better results than the state-of-the-art methods. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)

17 pages, 2550 KiB  
Article
Cooperative Transmission Mechanism Based on Revenue Learning for Vehicular Networks
by Mingyang Chen, Haixia Cui, Mingsheng Nie, Qiuxian Chen, Shunan Yang, Yongliang Du and Feipeng Dai
Appl. Sci. 2022, 12(24), 12651; https://0-doi-org.brum.beds.ac.uk/10.3390/app122412651 - 09 Dec 2022
Viewed by 1097
Abstract
With the rapid development of science and technology and the improvement of people's living standards, vehicles have gradually become the main means of travel. The increase in vehicles has also brought about an increasing incidence of car accidents. In order to reduce traffic accidents, many researchers have proposed the use of vehicular networks to quickly transmit information: as long as vehicles can receive information from other vehicles or nearby buildings in a timely manner, they can avoid accidents. In vehicular networks, the traditional double-connection technique, through an interference coordination scheduling strategy based on graph theory, can ensure the fairness of vehicles and obtain suitable neighborhood interference resistance with limited computing resources. However, when a base station transmits data to a vehicular user, the nearby base station and the vehicular network user may be in a state of suspended communication. Thus, the resource utilization of the above double-connection vehicular network is not sufficient, resulting in a waste of resources. To solve this issue, this paper presents a study of a revenue-learning-based multi-point collaborative transmission mechanism for vehicular networks, in which vehicular network users cooperate with the surrounding transmission points. We use the Q-learning algorithm in the reinforcement learning process to enable vehicular network users to learn from each other and make cooperative decisions in different environments. In reinforcement learning, the agent makes a decision and changes the state of the environment; the environment then feeds the resulting benefit back to the agent through the related algorithm, so that the agent gradually learns the optimal decision. Simulation results demonstrate the superiority of our proposed approach with the revenue machine learning model compared with the benchmark schemes. Full article
(This article belongs to the Topic Next Generation Intelligent Communications and Networks)
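The Q-learning loop named in the abstract has a standard tabular form: act epsilon-greedily, observe the revenue (reward), and move the Q-value toward reward plus the discounted best next value. The sketch below uses a toy one-state environment, not the paper's vehicular-network model; states, actions, and rewards are placeholders.

```python
import random

# Minimal tabular Q-learning of the kind applied to cooperative-transmission
# decisions. Environment and reward structure here are illustrative only.

def q_learning(env_step, states, actions, episodes=200,
               alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        for _ in range(20):
            # epsilon-greedy action selection
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda x: q[(s, x)]))
            s2, r = env_step(s, a)
            # Q update: move toward reward + discounted best next value
            best_next = max(q[(s2, x)] for x in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

# Toy environment: "cooperate" yields revenue 1, "solo" yields 0.
step = lambda s, a: (s, 1.0 if a == "cooperate" else 0.0)
q = q_learning(step, states=["near_bs"], actions=["cooperate", "solo"])
print(max(["cooperate", "solo"], key=lambda a: q[("near_bs", a)]))
```

The same loop scales to the paper's setting by enlarging the state (channel/interference conditions) and letting the revenue function encode the multi-point cooperation gain.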

17 pages, 1339 KiB  
Article
Augmented Lagrangian-Based Reinforcement Learning for Network Slicing in IIoT
by Qi Qi, Wenbin Lin, Boyang Guo, Jinshan Chen, Chaoping Deng, Guodong Lin, Xin Sun and Youjia Chen
Electronics 2022, 11(20), 3385; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11203385 - 19 Oct 2022
Cited by 2 | Viewed by 1423
Abstract
Network slicing enables the multiplexing of independent logical networks on the same physical network infrastructure to provide different network services for different applications. The resource allocation problem involved in network slicing is typically a decision-making problem, falling within the scope of reinforcement learning. The advantage of adapting to dynamic wireless environments makes reinforcement learning a good candidate for problem solving. In this paper, to tackle the constrained mixed integer nonlinear programming problem in network slicing, we propose an augmented Lagrangian-based soft actor–critic (AL-SAC) algorithm. In this algorithm, a hierarchical action selection network is designed to handle the hybrid action space. More importantly, inspired by the augmented Lagrangian method, both neural networks for Lagrange multipliers and a penalty item are introduced to deal with the constraints. Experiment results show that the proposed AL-SAC algorithm can strictly satisfy the constraints, and achieve better performance than other benchmark algorithms. Full article

18 pages, 3646 KiB  
Article
DDS: A Delay-Based Differentiated Service Virtual Network Embedding Algorithm
by Jiamin Tian, Xuewen Zeng and Xiaodong Zhu
Appl. Sci. 2022, 12(19), 9897; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199897 - 01 Oct 2022
Cited by 2 | Viewed by 1049
Abstract
Network virtualization (NV) is considered a promising technology that may solve the problem of Internet rigidity. The competition of multiple virtual networks for shared substrate network resources is a challenging problem in NV called virtual network embedding (VNE). Existing approaches do not consider the differences between multi-tenant requests and adopt a single embedding method, resulting in poor performance. This paper proposes a virtual network embedding algorithm that distinguishes the network types requested by tenants. The method divides virtual network requests into ordinary requests and delay-sensitive requests according to their delay constraints, provides personalized mapping strategies for the different types, and responds flexibly to the resource and quality of service (QoS) requirements of each virtual network. The simulation results show that, compared with other algorithms, the proposed algorithm improves the request acceptance ratio by about 2% to 15% and utilizes the substrate network resources more effectively.
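The request-classification step can be sketched as a simple split on the delay constraint; the 50 ms threshold, the request fields, and the strategy labels are assumptions for illustration, not the paper's exact rule.

```python
# Hypothetical delay-based split of virtual network requests (VNRs).
# Requests below the cutoff get a latency-first embedding strategy;
# the cutoff value and field names are illustrative assumptions.

DELAY_THRESHOLD_MS = 50

def classify(requests):
    """Split VNRs into (delay_sensitive, ordinary) by their delay constraint."""
    sensitive = [r for r in requests if r["max_delay_ms"] < DELAY_THRESHOLD_MS]
    ordinary = [r for r in requests if r["max_delay_ms"] >= DELAY_THRESHOLD_MS]
    return sensitive, ordinary

reqs = [
    {"id": 1, "max_delay_ms": 10},   # e.g., an interactive service
    {"id": 2, "max_delay_ms": 200},  # e.g., a bulk transfer
]
sensitive, ordinary = classify(reqs)
```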

16 pages, 630 KiB  
Article
QoS-Aware Downlink Traffic Scheduling for Cellular Networks with Dual Connectivity
by Haoru Su, Meng-Shiuan Pan and Hung-Wei Mai
Electronics 2022, 11(19), 3085; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11193085 - 27 Sep 2022
Cited by 3 | Viewed by 1231
Abstract
In a cellular network, preserving users’ quality of service (QoS) demands is an important issue. To provide better data services, researchers and industry have discussed deploying small cells in cellular networks to support dual connectivity for user equipments (UEs). With this enhancement, a base station can dispatch downlink data to its surrounding small cells, and UEs located in the overlapping areas of the base station and small cells can receive downlink data from both sides simultaneously. We observe that previous works do not jointly consider QoS requirements and system capabilities when making scheduling decisions. Therefore, in this work, we design a QoS traffic scheduling scheme for dual connectivity networks. The scheme contains two parts. First, we propose a data dispatching decision scheme by which the base station decides how much data should be dispatched to small cells; the scheme aims to maximize throughput while ensuring that data flows can be processed in time. Second, we design a radio resource scheduling method that aims to reduce the dropping ratios of high-priority QoS data flows while avoiding wasted radio resources. We verify our design using simulation programs. The experimental results show that, compared to existing methods, the proposed scheme effectively increases system throughput and decreases packet drop ratios.
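The dispatching decision can be caricatured as splitting a flow's backlog between the base-station link and the small-cell link and then checking the deadline. The proportional-to-capacity rule and the rate numbers below are illustrative assumptions, not the paper's optimization.

```python
# Toy dual-connectivity dispatch: split a flow's backlog across the two
# links in proportion to their rates, then verify the deadline is met.
# Rates, backlog, and the proportional rule are illustrative assumptions.

def dispatch(backlog_bits, bs_rate, sc_rate, deadline_s):
    total = bs_rate + sc_rate
    to_sc = backlog_bits * sc_rate / total          # share via the small cell
    to_bs = backlog_bits - to_sc                    # share via the base station
    finish = max(to_bs / bs_rate, to_sc / sc_rate)  # links drain in parallel
    return to_sc, finish <= deadline_s

to_sc, in_time = dispatch(backlog_bits=10e6, bs_rate=50e6,
                          sc_rate=25e6, deadline_s=0.2)
```

With a proportional split both links finish simultaneously, which is why a capacity-aware dispatching rule avoids leaving either side idle.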

14 pages, 1117 KiB  
Article
A Learning Automaton-Based Algorithm for Maximizing the Transfer Data Rate in a Biological Nanonetwork
by Konstantinos F. Kantelis
Appl. Sci. 2022, 12(19), 9499; https://0-doi-org.brum.beds.ac.uk/10.3390/app12199499 - 22 Sep 2022
Viewed by 1145
Abstract
Biological nanonetworks have been envisaged as the most appropriate alternative to classical electromagnetic nanonetworks for applications in biological environments. Because messages are exchanged by diffusion, achievable transfer data rates are far below those of their electromagnetic counterparts. In addition, the molecular channel has memory: molecules from previously transmitted messages remain in the channel and alter the number of information molecules a node requires to perceive a newly transmitted message. As a result, a node’s ability to receive a message is directly tied to the transmitter’s transmission rate. In this work, a learning automaton approach is followed to provide the receiver nodes with an algorithm that, first, enhances their reception capability and, second, boosts the transfer data rate between the communicating parties. To this end, a complete set of simulation scenarios has been devised, covering different distances between nodes and various input signal distributions. The main operational parameters, such as the speed of convergence for different numbers of ascension and descension steps and the number of information molecules per message, have been tested with respect to the performance characteristics of the biological nanonetwork. The analysis revealed that the proposed protocol adapts to changes in the communication channel, such as the number of remaining information molecules, and can be successfully employed at nanoscale dimensions as a tool for increasing the transfer data rate, even under time-variant channel characteristics.
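The ascension/descension behavior of such an automaton can be sketched as reward/penalty rate adaptation. The toy channel model below (success whenever the rate stays below a fixed capacity) is an assumption standing in for the molecular channel's memory effects; it only shows the step mechanics.

```python
# Toy learning automaton: raise the rate by an "ascension" step on success,
# back off by a "descension" step on failure. The threshold-style channel
# model is an illustrative assumption, not the diffusion channel itself.

def run_automaton(capacity, step_up=1.0, step_down=2.0, rounds=500):
    rate = 1.0
    for _ in range(rounds):
        success = rate <= capacity  # toy channel: succeed below capacity
        rate = rate + step_up if success else max(1.0, rate - step_down)
    return rate

rate = run_automaton(capacity=20.0)
```

The automaton ends up oscillating just around the channel's sustainable rate, which is the adaptive behavior the paper exploits under time-variant channel characteristics.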

15 pages, 2146 KiB  
Article
MAToC: A Novel Match-Action Table Architecture on Corundum for 8 × 25G Networking
by Jiawei Lin, Zhichuan Guo and Xiao Chen
Appl. Sci. 2022, 12(17), 8734; https://0-doi-org.brum.beds.ac.uk/10.3390/app12178734 - 31 Aug 2022
Viewed by 1397
Abstract
Packet processing offloads are increasingly needed by high-speed networks. This paper proposes a high-throughput, low-latency, scalable and reconfigurable Match-Action Table (MAT) architecture based on Corundum, an open-source FPGA-based NIC. The flexibility and capability of the scheme are demonstrated by an example implementation of IP-layer forwarding offload, which makes the NIC work as a router that can forward packets for different subnets and virtual local area networks (VLANs). Experiments performed on a Zynq MPSoC device with two QSFPs show that it can work at a line rate of 8 × 25 Gbps (200 Gbps) with a maximum latency of 76 nanoseconds. In addition, a high-performance MAT pipeline with a full-featured, resource-efficient TCAM and a compact frame-merging deparser are presented.

14 pages, 2705 KiB  
Article
Spectrum Sensing Based on STFT-ImpResNet for Cognitive Radio
by Jianxin Gai, Linghui Zhang and Zihao Wei
Electronics 2022, 11(15), 2437; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11152437 - 04 Aug 2022
Cited by 2 | Viewed by 1481
Abstract
Spectrum sensing is a crucial technology for cognitive radio. Existing spectrum sensing methods generally suffer from problems such as insufficient signal feature representation, low sensing efficiency, high sensitivity to noise uncertainty, and drastic performance degradation in deep networks. In view of these challenges, we propose a spectrum sensing method based on the short-time Fourier transform and an improved residual network (STFT-ImpResNet). Specifically, the received signal is transformed by the STFT into a two-dimensional time-frequency matrix, which is normalized to a gray image and used as the input of the network. An improved residual network is designed to classify the signal samples, with a dropout layer added to the residual block to mitigate over-fitting. Comprehensive evaluations demonstrate that, compared with other current spectrum sensing algorithms, STFT-ImpResNet exhibits higher accuracy and lower computational complexity, as well as strong robustness to noise uncertainty, and it can meet the needs of real-time detection.
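The first stage — turning received samples into the two-dimensional time-frequency matrix — can be sketched with a naive windowed DFT. The window length, hop size, and the pure test tone below are illustrative choices; a practical implementation would use an FFT library and the paper's normalization to a gray image.

```python
# Bare-bones STFT: slide a window over the signal and take the magnitude
# of a naive DFT of each segment. Window/hop/tone are illustrative choices.
import cmath
import math

def stft(signal, win=32, hop=16):
    """Return |STFT| as a list of frames, each a list of `win` magnitude bins."""
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        frame = []
        for k in range(win):  # naive DFT of the windowed segment
            s = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win)
                    for n in range(win))
            frame.append(abs(s))
        frames.append(frame)
    return frames

# A pure tone aligned with bin 4 of the 32-point window concentrates
# its energy in that bin of every frame.
tone = [math.cos(2 * math.pi * 4 * n / 32) for n in range(128)]
spec = stft(tone)
```

The resulting frames-by-bins matrix is what gets normalized and fed to the classifier in the proposed pipeline.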

26 pages, 3872 KiB  
Article
Node-Based QoS-Aware Security Framework for Sinkhole Attacks in Mobile Ad-Hoc Networks
by Bukohwo Michael Esiefarienrhe, Thulani Phakathi and Francis Lugayizi
Telecom 2022, 3(3), 407-432; https://0-doi-org.brum.beds.ac.uk/10.3390/telecom3030022 - 29 Jun 2022
Cited by 6 | Viewed by 1871
Abstract
Most networks strive to provide good security and an acceptable level of performance, and quality of service (QoS) plays an important role in the performance of a network. Mobile ad hoc networks (MANETs) are a decentralized and self-configuring type of wireless network, and they are inherently challenging environments in which to provide both security and QoS. Many researchers in the literature have proposed parallel mechanisms that investigate either security or QoS alone. This paper presents a QoS-aware security framework for MANETs based on the optimized link state routing (OLSR) protocol. Security and QoS targets are not necessarily aligned, and this framework seeks to bridge the gap to provide an optimally functioning MANET. The framework is evaluated for throughput, jitter, and delay against a sinkhole attack introduced into the network. The contributions of this paper are (a) the implementation of a sinkhole attack using OLSR, (b) the design and implementation of a lightweight intrusion detection system using OLSR, and (c) a framework that removes fake routes and optimizes bandwidth. The simulation results revealed that the QoS-aware framework improved network throughput by more than 70%, while delay and jitter were reduced by close to 85% compared to when the network was under attack.
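One simplified flavor of fake-route removal is to drop route advertisements whose metric is implausibly good, since a sinkhole node attracts traffic by advertising short routes it does not have. The plausible-minimum-hop check below is an illustrative assumption, not the paper's actual detector.

```python
# Hypothetical fake-route filter: a route claiming fewer hops to a
# destination than the topology plausibly allows is flagged as a likely
# sinkhole advertisement. The threshold table is an illustrative assumption.

def filter_routes(routes, min_plausible_hops):
    """Split routes into (kept, suspicious) by a per-destination hop floor."""
    kept, suspicious = [], []
    for r in routes:
        bucket = suspicious if r["hops"] < min_plausible_hops[r["dest"]] else kept
        bucket.append(r)
    return kept, suspicious

routes = [
    {"dest": "N5", "hops": 1, "via": "attacker"},  # claims a direct link it lacks
    {"dest": "N5", "hops": 3, "via": "N2"},        # consistent with the topology
]
kept, suspicious = filter_routes(routes, {"N5": 2})
```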

20 pages, 881 KiB  
Article
The Smart in Smart Cities: A Framework for Image Classification Using Deep Learning
by Rabiah Al-qudah, Yaser Khamayseh, Monther Aldwairi and Sarfraz Khan
Sensors 2022, 22(12), 4390; https://0-doi-org.brum.beds.ac.uk/10.3390/s22124390 - 10 Jun 2022
Cited by 4 | Viewed by 1632
Abstract
The need for a smart city is more pressing today due to the recent pandemic, lockdowns, climate change, population growth, and limitations on the availability of and access to natural resources. These challenges can be better faced by utilizing new technologies, and the zoning design of smart cities can help mitigate them. This work identifies the main components of a new smart city and then proposes a general framework for designing a smart city that addresses these elements. We then propose a technology-driven model to support this framework, along with a mapping between the general framework and the technology model. To highlight the importance and usefulness of the proposed framework, we designed and implemented a smart image handling system targeted at non-technical personnel. Because high cost, security, and inconvenience issues may limit cities’ ability to adopt such solutions, this work also designs and implements a generalized image processing model using deep learning. The proposed model accepts images from users, performs self-tuning operations to select the best deep network, and produces the required insights without any human intervention, automating the decision-making process without the need for a specialized data scientist.

14 pages, 446 KiB  
Article
A Reinforcement Learning Based Data Caching in Wireless Networks
by Muhammad Sheraz, Shahryar Shafique, Sohail Imran, Muhammad Asif, Rizwan Ullah, Muhammad Ibrar, Jahanzeb Khan and Lunchakorn Wuttisittikulkij
Appl. Sci. 2022, 12(11), 5692; https://0-doi-org.brum.beds.ac.uk/10.3390/app12115692 - 03 Jun 2022
Cited by 2 | Viewed by 2214
Abstract
Data caching has emerged as a promising technique to handle the growing data traffic and backhaul congestion of wireless networks. However, how and where to place contents to optimize data access by users remains a concern. Data caching can be exploited close to users by deploying cache entities at Small Base Stations (SBSs): SBSs cache contents through the core network during off-peak traffic hours and then serve content-demanding users with low latency during peak traffic hours. In this paper, we exploit the potential of data caching at the SBS level to minimize data access delay. We propose an intelligence-based data caching mechanism inspired by Reinforcement Learning (RL). The proposed mechanism adapts through dynamic learning and tracks network states to capture users’ diverse and varying data demands, observing users’ demands and locations to efficiently utilize the limited cache resources of the SBS. Extensive simulations evaluate the performance of the proposed caching mechanism with respect to factors such as caching capacity and data library size. The obtained results demonstrate that the proposed mechanism achieves a 4% performance gain in delay vs. contents, 3.5% in delay vs. users, 2.6% in delay vs. cache capacity, 18% in percentage traffic offloading vs. popularity skewness (γ), and 6% in backhaul saving vs. cache capacity.
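The caching decision can be caricatured as a value-learning loop: cache hits earn reward, each content's value estimate is nudged toward its observed reward, and the top-valued contents are kept cached. The request stream, cache size, and the small exploration bonus for requested-but-uncached items are all illustrative assumptions; the paper's agent additionally tracks user locations and network state.

```python
# Toy value-learning cache: repeatedly replay a request stream, update a
# per-content value estimate, and cache the top-valued items. The stream,
# cache size, and exploration bonus are illustrative assumptions.

def learn_cache(values, requests, cache_size, alpha=0.1, episodes=50):
    for _ in range(episodes):
        cached = sorted(values, key=values.get, reverse=True)[:cache_size]
        for item in requests:
            reward = 1.0 if item in cached else 0.0  # a hit saves backhaul delay
            values[item] += alpha * (reward - values[item])
            if item not in cached:
                # frequently requested but uncached items still gain value,
                # so popular content can eventually displace cached content
                values[item] += alpha * 0.5
    return sorted(values, key=values.get, reverse=True)[:cache_size]

values = {c: 0.0 for c in ("A", "B", "C", "D")}
cache = learn_cache(values, requests=["A", "A", "B", "A", "C"], cache_size=2)
```

The frequently requested contents end up cached, while never-requested content keeps a value of zero.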

12 pages, 3149 KiB  
Article
Deep Learning for Joint Pilot Design and Channel Estimation in MIMO-OFDM Systems
by Xiao-Fei Kang, Zi-Hui Liu and Meng Yao
Sensors 2022, 22(11), 4188; https://0-doi-org.brum.beds.ac.uk/10.3390/s22114188 - 31 May 2022
Cited by 13 | Viewed by 2352
Abstract
In MIMO-OFDM systems, the pilot design and the estimation algorithm jointly determine the reliability and effectiveness of pilot-based channel estimation methods. To improve channel estimation accuracy with less pilot overhead, a deep learning scheme for joint pilot design and channel estimation is proposed. The new hybrid network structure, named CAGAN, is composed of a concrete autoencoder (concrete AE) and a conditional generative adversarial network (cGAN). We first use the concrete AE to find and select the most informative positions in the time-frequency grid, achieving an optimized pilot design, and then feed the optimized pilots to the cGAN to complete channel estimation. Simulation experiments show that the CAGAN scheme outperforms the traditional LS and MMSE estimation methods with fewer pilots and has good robustness to environmental noise.

30 pages, 3440 KiB  
Article
B5GEMINI: AI-Driven Network Digital Twin
by Alberto Mozo, Amit Karamchandani, Sandra Gómez-Canaval, Mario Sanz, Jose Ignacio Moreno and Antonio Pastor
Sensors 2022, 22(11), 4106; https://0-doi-org.brum.beds.ac.uk/10.3390/s22114106 - 28 May 2022
Cited by 13 | Viewed by 6314
Abstract
A Network Digital Twin (NDT) builds on the concept of Digital Twins (DT) to create a virtual representation of the physical objects of a telecommunications network. An NDT bridges physical and virtual spaces to enable coordination and synchronization of physical parts while eliminating the need to interact with them directly. There is broad consensus that Artificial Intelligence (AI) and Machine Learning (ML) are among the key enablers of this technology. In this work, we present B5GEMINI, an NDT for 5G and beyond networks that makes extensive use of AI and ML. First, we present the infrastructural and architectural components that support B5GEMINI. Next, we explore four paradigmatic applications where AI/ML can leverage B5GEMINI to build new AI-powered applications. In addition, we identify the main components of the B5GEMINI AI ecosystem, outlining emerging research trends and the open challenges that must be solved along the way. Finally, we present two relevant use cases of NDTs that make extensive use of ML: the first lies in the cybersecurity domain and proposes using B5GEMINI to facilitate the design of ML-based attack detectors, while the second addresses the design of energy-efficient ML components and introduces, as a novelty, the modular development of NDTs adopting the Digital Map concept.

20 pages, 12198 KiB  
Article
A 24-to-30 GHz Ultra-High-Linearity Down-Conversion Mixer for 5G Applications Using a New Linearization Method
by Shenghui Yang, Kejie Hu, Haipeng Fu, Kaixue Ma and Min Lu
Sensors 2022, 22(10), 3802; https://0-doi-org.brum.beds.ac.uk/10.3390/s22103802 - 17 May 2022
Cited by 2 | Viewed by 1930
Abstract
The linearity of active mixers is usually determined by the input transistors, and many designs improve it with modified input stages at the cost of a more complex structure or higher power consumption. A new linearization method for active mixers is proposed in this paper; the input 1 dB compression point (IP1dB) and output 1 dB compression point (OP1dB) are greatly improved by exploiting the “reverse uplift” phenomenon. Compared with other linearization methods, the proposed one is simpler, more efficient, and sacrifices less conversion gain. Using this method, an ultra-high-linearity double-balanced down-conversion mixer with a wide IF bandwidth is designed and fabricated in a 130 nm SiGe BiCMOS process. The proposed mixer includes a Gilbert cell, a pair of phase-adjusting inductors, and a Marchand-balun-based output network. Under a 1.6 V supply voltage, the measurement results show that the mixer exhibits an excellent IP1dB of +7.2 to +10.1 dBm and an average OP1dB of +5.4 dBm, state-of-the-art linearity performance among silicon-based mixers, whether active or passive. Moreover, a wide IF bandwidth of 8 GHz, from 3 GHz to 11 GHz, is achieved. The circuit consumes 19.8 mW and occupies 0.48 mm2, including all pads. The “reverse uplift” method allows high-linearity circuits to be implemented more efficiently, which is helpful for the design of 5G high-speed communication transceivers.
