Networks, Communication, and Computing Vol. 2

A special issue of Algorithms (ISSN 1999-4893). This special issue belongs to the section "Combinatorial Optimization, Graph, and Network Algorithms".

Deadline for manuscript submissions: closed (31 August 2020) | Viewed by 20667

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editor

Prof. Dr. Andras Farago
Department of Computer Science, Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, P.O. Box 830688, MS-EC31, Richardson, TX 75083-0688, USA
Interests: communication networks and their protocols; network design/analysis methods; algorithms; complexity

Special Issue Information

Dear Colleagues,

Networks, communication, and computing have become ubiquitous and inseparable parts of everyday life. This Special Issue is devoted to exploring the many-faceted relationship among these areas. It surveys the current state of the art of research in networks, communication, and computing, with a particular interest in the interactions among these fields.

Topics of interest for submission include, but are not limited to, the following:

  • Coding techniques
  • Modeling and simulation of communication systems
  • Network architecture and protocols, optical fiber/microwave communication
  • Satellite communication
  • Wired and wireless communication
  • Wireless sensor networks and related topics
  • Artificial intelligence
  • Computer graphics and virtual reality
  • Speech/image processing
  • Data mining algorithms
  • Distributed computing
  • Grid and cloud computing
  • Software architecture
  • Bioinformatics
  • Evolutionary algorithms
  • Software engineering
  • Ubiquitous computing
  • Semantic web and related topics
  • Case studies about the interactions of networks, communication, and computing

Prof. Dr. Andras Farago
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Networks
  • Communication
  • Computing and algorithms
  • Computing applications
  • Modeling and simulation
  • Network architecture

Published Papers (5 papers)


Research

11 pages, 694 KiB  
Article
Safe Approximation—An Efficient Solution for a Hard Routing Problem
by András Faragó and Zohre R. Mojaveri
Algorithms 2021, 14(2), 48; https://0-doi-org.brum.beds.ac.uk/10.3390/a14020048 - 02 Feb 2021
Cited by 1 | Viewed by 2085
Abstract
The Disjoint Connecting Paths problem and its capacitated generalization, called the Unsplittable Flow problem, play an important role in practical applications such as communication network design and routing. These tasks are NP-hard in general, but various polynomial-time approximations are known. Nevertheless, the approximations tend to be either too loose (allowing large deviation from the optimum) or too complicated, often rendering them impractical in large, complex networks. Therefore, our goal is to present a relatively simple, efficient algorithm for the unsplittable flow problem in large directed graphs, where the task is NP-hard and is known to remain NP-hard even to approximate up to a large factor. The efficiency of our algorithm is achieved by sacrificing a small part of the solution space. This also represents a novel paradigm for approximation: rather than giving up the search for an exact solution, we restrict the solution space to a subset that is the most important for applications and exclude only a small part that is marginal in some well-defined sense. Specifically, the sacrificed part only contains scenarios where some edges are very close to saturation. Since nearly saturated links are undesirable in practical applications, excluding them is quite reasonable from a practical point of view. We refer to solutions that contain no nearly saturated edges as safe solutions, and call the approach safe approximation. We prove that this safe approximation can be carried out efficiently: once we restrict ourselves to safe solutions, we can find the exact optimum by a randomized polynomial-time algorithm.
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
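The safety criterion described in the abstract can be illustrated with a small, purely hypothetical sketch; the edge loads, capacities, and the margin values below are invented for illustration, and the paper's formal definition may differ:

```python
def is_safe(edge_load, capacity, margin=0.05):
    """A routing is 'safe' if every edge stays below (1 - margin) of its
    capacity, i.e., no edge is nearly saturated."""
    return all(edge_load[e] <= (1 - margin) * capacity[e] for e in capacity)

# Toy two-edge network: edge ("a", "b") runs at 90% of its capacity.
capacity = {("a", "b"): 10.0, ("b", "c"): 8.0}
routing = {("a", "b"): 9.0, ("b", "c"): 6.0}

print(is_safe(routing, capacity))               # 90% load passes a 5% margin
print(is_safe(routing, capacity, margin=0.15))  # but fails a 15% margin
```

The restriction is what buys tractability: solutions rejected by this check are exactly the near-saturation scenarios the paper excludes from the search space.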

15 pages, 843 KiB  
Article
A Hybrid Metaheuristic Algorithm for the Efficient Placement of UAVs
by Stephanie Alvarez Fernandez, Marcelo M. Carvalho and Daniel G. Silva
Algorithms 2020, 13(12), 323; https://0-doi-org.brum.beds.ac.uk/10.3390/a13120323 - 03 Dec 2020
Cited by 7 | Viewed by 2259
Abstract
This work addresses the problem of using Unmanned Aerial Vehicles (UAVs) to deploy a wireless aerial relay communications infrastructure for stations scattered on the ground. In our problem, every station in the network must be assigned to a single UAV, which is responsible for handling all data transfer on behalf of the stations assigned to it. Consequently, the placement of UAVs is key to achieving both network coverage and the maximization of the aggregate link capacities between UAVs and stations, and among the UAVs themselves. Because the complexity of this problem increases significantly with the number of stations to cover, for a given fixed number p of available UAVs, we model it as a single allocation p-hub median optimization problem, and we propose a hybrid metaheuristic algorithm to solve it. A series of numerical experiments illustrates the efficiency of the proposed algorithm against traditional optimization tools: it achieves high-quality results in very short time intervals, making it an attractive solution for real-world application scenarios.
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
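The single allocation constraint can be sketched as follows. Nearest-distance assignment is a simplification standing in for the paper's capacity-based objective, and all coordinates and the function name are invented for illustration:

```python
import math

def assign_stations(stations, hubs):
    """Single allocation: map each station index to the index of its
    nearest hub (each station is served by exactly one UAV)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return {s: min(range(len(hubs)), key=lambda h: dist(stations[s], hubs[h]))
            for s in range(len(stations))}

stations = [(0, 0), (1, 0), (10, 10)]  # ground station positions
hubs = [(0, 1), (10, 9)]               # candidate UAV positions (p = 2)
print(assign_stations(stations, hubs))  # {0: 0, 1: 0, 2: 1}
```

In the p-hub median model, the metaheuristic searches over hub placements; an assignment routine like this evaluates each candidate placement.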

18 pages, 1389 KiB  
Article
A Clustering Routing Algorithm Based on Improved Ant Colony Optimization Algorithms for Underwater Wireless Sensor Networks
by Xingxing Xiao and Haining Huang
Algorithms 2020, 13(10), 250; https://0-doi-org.brum.beds.ac.uk/10.3390/a13100250 - 01 Oct 2020
Cited by 34 | Viewed by 3537
Abstract
Because of the complicated underwater environment, the efficiency of data transmission from underwater sensor nodes to a sink node (SN) faces great challenges. Aiming at the problem of energy consumption in underwater wireless sensor networks (UWSNs), this paper proposes an energy-efficient clustering routing algorithm based on an improved ant colony optimization (ACO) algorithm. In clustering routing algorithms, the network is divided into many clusters, and each cluster consists of one cluster head node (CHN) and several cluster member nodes (CMNs). This paper optimizes the CHN selection based on the residual energy of nodes and a distance factor. The selected CHN gathers data sent by the CMNs and transmits them to the sink node over multiple hops. Optimal multi-hop paths from the CHNs to the SN are found by an improved ACO algorithm, which refines the heuristic information, the evaporation parameter of the pheromone update mechanism, and the ant searching scope. Simulation results indicate the high effectiveness and efficiency of the proposed algorithm in reducing energy consumption, prolonging the network lifetime, and decreasing the packet loss ratio.
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
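Cluster-head selection from residual energy and a distance factor can be sketched like this; the weights, the linear scoring form, and all node values are assumptions for illustration, not the paper's exact formula:

```python
def select_cluster_head(nodes, d_max, w_energy=0.7, w_dist=0.3):
    """Pick the node with the best weighted score: higher residual energy
    is better, shorter distance to the sink is better."""
    def score(n):
        return (w_energy * n["residual"] / n["initial"]
                - w_dist * n["dist_to_sink"] / d_max)
    return max(nodes, key=score)["id"]

nodes = [
    {"id": "n1", "residual": 0.9, "initial": 1.0, "dist_to_sink": 80.0},
    {"id": "n2", "residual": 0.5, "initial": 1.0, "dist_to_sink": 20.0},
]
print(select_cluster_head(nodes, d_max=100.0))  # n1: energy outweighs distance
```

Balancing the two terms is the design point: a head chosen only by distance would drain the same nodes repeatedly, shortening network lifetime.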

34 pages, 457 KiB  
Article
When 5G Meets Deep Learning: A Systematic Review
by Guto Leoni Santos, Patricia Takako Endo, Djamel Sadok and Judith Kelner
Algorithms 2020, 13(9), 208; https://0-doi-org.brum.beds.ac.uk/10.3390/a13090208 - 25 Aug 2020
Cited by 35 | Viewed by 6497
Abstract
Over the last decade, the amount of data exchanged on the Internet increased by a staggering factor of more than 100, and was expected to exceed 500 exabytes by 2020. This phenomenon is mainly due to the evolution of high-speed broadband Internet and, more specifically, the popularization and widespread use of smartphones and the associated accessible data plans. Although 4G with its long-term evolution (LTE) technology is seen as a mature technology, its radio technology and architecture continue to improve, for example within the scope of the LTE Advanced standard, a major enhancement of LTE. In the long run, however, the next generation of telecommunications (5G) is being considered and is gaining considerable momentum from both industry and researchers. In addition, with the deployment of Internet of Things (IoT) applications, smart cities, vehicular networks, e-health systems, and Industry 4.0, a new plethora of 5G services has emerged with very diverse and technologically challenging design requirements. These include high mobile data volume per area, a high number of connected devices per area, high data rates, longer battery life for low-power devices, and reduced end-to-end latency. Several technologies are being developed to meet these new requirements, and each brings its own design issues and challenges. In this context, deep learning (DL) models can be seen as one of the main tools for processing monitoring data and automating decisions. As these models are able to extract relevant features from raw data (images, texts, and other types of unstructured data), the integration between 5G and DL looks promising and requires exploring. As its main contribution, this paper presents a systematic review of how DL is being applied to solve some 5G issues.
Unlike the current literature, we examine data from the last decade and works that address diverse 5G-specific problems, such as physical medium state estimation, network traffic prediction, user device location prediction, and self network management, among others. We also discuss the main research challenges of using deep learning models in 5G scenarios and identify several issues that deserve further consideration.
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)

16 pages, 1913 KiB  
Article
Citywide Cellular Traffic Prediction Based on a Hybrid Spatiotemporal Network
by Dehai Zhang, Linan Liu, Cheng Xie, Bing Yang and Qing Liu
Algorithms 2020, 13(1), 20; https://0-doi-org.brum.beds.ac.uk/10.3390/a13010020 - 08 Jan 2020
Cited by 16 | Viewed by 5608
Abstract
With the arrival of 5G networks, cellular networks are moving in the direction of diversified, broadband, integrated, and intelligent networks. At the same time, the popularity of various smart terminals has led to explosive growth in cellular traffic. Accurate network traffic prediction has become an important part of cellular network intelligence. In this context, this paper proposes a deep learning method for the spatiotemporal modeling and prediction of cellular network communication traffic. First, we analyze the temporal and spatial characteristics of cellular network traffic from Telecom Italia. On this basis, we propose a hybrid spatiotemporal network (HSTNet), a deep learning method that uses convolutional neural networks to capture the spatiotemporal characteristics of communication traffic. This work adds deformable convolution to the convolution model to improve predictive performance. The time attribute is introduced as auxiliary information, and an attention mechanism based on historical data is proposed for weight adjustment to improve the robustness of the model. We use the Telecom Italia dataset to evaluate the performance of the proposed model. Experimental results show that, compared with existing statistical methods and machine learning algorithms, HSTNet significantly improves the prediction accuracy in terms of MAE and RMSE.
(This article belongs to the Special Issue Networks, Communication, and Computing Vol. 2)
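MAE and RMSE, the two error metrics used in the evaluation, are standard and can be computed as follows (the sample values are illustrative):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large errors more heavily."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.0, 4.0]
print(round(mae(y_true, y_pred), 4))   # 0.8333
print(round(rmse(y_true, y_pred), 4))  # 1.1902
```

Because RMSE squares the residuals, the single 2.0-unit miss dominates it, which is why RMSE exceeds MAE on the same predictions.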
