10th Anniversary of Applied Sciences: Invited Papers in Computing and Artificial Intelligence Section

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 46774

Special Issue Editor

Department of Engineering and Architecture, University of Parma, Parco Area delle Scienze, 181/A, 43124 Parma, Italy
Interests: video surveillance; mobile vision; visual sensor networks; machine vision; multimedia and video processing; performance analysis of multimedia computer architectures

Special Issue Information

Dear Colleagues,

The Section “Computing and Artificial Intelligence” of Applied Sciences covers a set of emerging research topics related to computer science and artificial intelligence. Artificial intelligence, as a general term, has gained considerable attention worldwide, both in academia and in industry. Its applications are numerous, ranging from computer vision and natural language processing to IoT, blockchain, robotics, and Industry 4.0. On the one hand, many baseline techniques are now mature enough to reach everyday applications; on the other hand, much remains to be discovered and developed, making this field one of the keystones for the future.

This Special Issue intends to gather moderate-sized review papers featuring important recent developments and achievements in computing and artificial intelligence, with a special emphasis on recently discovered techniques or applications. The authors are well-known experts in their domains, invited to submit their contributions at any time until the end of October 2021. Papers can cover experimental aspects, theoretical aspects, or both. The main topics include machine and deep learning, applied artificial intelligence, IoT and fog computing, distributed systems and blockchain, computer vision and pattern recognition, and natural language processing.

Prof. Dr. Andrea Prati
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research


15 pages, 4621 KiB  
Article
Task Migration with Partitioning for Load Balancing in Collaborative Edge Computing
by Sungwon Moon and Yujin Lim
Appl. Sci. 2022, 12(3), 1168; https://0-doi-org.brum.beds.ac.uk/10.3390/app12031168 - 23 Jan 2022
Cited by 11 | Viewed by 2174
Abstract
Multi-access edge computing (MEC) has emerged as a promising technology for efficient vehicular applications such as autonomous driving, path planning, and navigation. By offloading tasks from vehicles to MEC servers (MECSs), the MEC system can support computation-intensive applications with hard latency constraints on vehicles with limited computing resources. However, owing to vehicle mobility, vehicles are not evenly distributed across the MEC system: some MECSs are heavily congested while others are lightly loaded. If a task is offloaded to a congested MECS, it can be blocked or suffer high latency. Moreover, service interruptions can occur because of high mobility and the limited coverage of each MECS. In this paper, we assume that a task can be divided into a set of subtasks and computed by multiple MECSs in parallel, and we propose a method of task migration with partitioning. To balance loads, the MEC system migrates subtasks from an overloaded MECS to one or more underloaded MECSs according to the load difference. Simulations indicate that, compared with conventional methods, the proposed method improves the satisfaction of quality-of-service requirements such as low latency and service reliability, as well as MEC system throughput, by jointly optimizing load balancing and task partitioning.
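
As a rough illustration of the migration policy described in the abstract, the following Python sketch repeatedly moves the smallest subtask from the most loaded MECS to the least loaded one, as long as the move strictly narrows the load gap. The `Server` class, the smallest-subtask-first heuristic, and the load model are hypothetical simplifications, not the paper's actual formulation.

```python
# Hypothetical sketch of load-balanced subtask migration between MEC servers.
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    capacity: float                                   # CPU cycles per second
    subtasks: list = field(default_factory=list)      # each entry: required cycles

    @property
    def load(self) -> float:
        return sum(self.subtasks) / self.capacity

def migrate_subtasks(servers: list[Server]) -> None:
    """Repeatedly move one subtask from the most loaded to the least loaded
    server, as long as the move strictly narrows the load gap."""
    while True:
        servers.sort(key=lambda s: s.load)
        dst, src = servers[0], servers[-1]            # underloaded, overloaded
        if not src.subtasks:
            break
        task = min(src.subtasks)                      # smallest subtask first
        src_after = (sum(src.subtasks) - task) / src.capacity
        dst_after = (sum(dst.subtasks) + task) / dst.capacity
        if abs(src_after - dst_after) >= src.load - dst.load:
            break                                     # no further improvement
        src.subtasks.remove(task)
        dst.subtasks.append(task)

servers = [Server("MECS-1", 10.0, [4, 5, 6]), Server("MECS-2", 10.0, [1])]
migrate_subtasks(servers)
print([round(s.load, 2) for s in sorted(servers, key=lambda s: s.name)])
```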

12 pages, 955 KiB  
Article
Intelligent Traffic Signal Phase Distribution System Using Deep Q-Network
by Hyunjin Joo and Yujin Lim
Appl. Sci. 2022, 12(1), 425; https://0-doi-org.brum.beds.ac.uk/10.3390/app12010425 - 03 Jan 2022
Cited by 9 | Viewed by 2078
Abstract
Traffic congestion is a worsening problem owing to increasing traffic volume. It lengthens driving times and wastes fuel, generating large amounts of exhaust fumes and accelerating environmental pollution, so it is an important problem to address. Smart transportation systems manage various traffic problems by utilizing the infrastructure and networks available in smart cities; the traffic signal control systems they employ analyze and control traffic flow in real time, which can effectively alleviate congestion. We conducted preliminary experiments to analyze how throughput, queue length, and waiting time affect system performance under different signal allocation techniques. Based on these results, the standard deviation of the queue length is identified as an important factor in a phase-order allocation technique. We propose a smart traffic signal control system using a deep Q-network (DQN), a type of reinforcement learning, that determines the optimal order of green signals. Its goal is to maximize throughput and distribute signals efficiently by using the throughput and the standard deviation of the queue length as reward parameters.
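
The following minimal sketch illustrates the kind of reward shaping and temporal-difference update a DQN-based phase controller could use; the state encoding, network sizes, and weighting factor `alpha` are assumptions, not the authors' exact design.

```python
# Sketch of a DQN update for green-phase selection (assumed encodings).
import torch
import torch.nn as nn

def reward(throughput: float, queue_lengths: torch.Tensor, alpha: float = 0.5) -> float:
    # Encourage high throughput while penalising uneven queues (their std).
    return throughput - alpha * queue_lengths.std().item()

n_phases, state_dim = 4, 8        # e.g. queue length and waiting time per approach
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_phases))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_phases))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.95

def dqn_step(s, a, r, s_next):
    """One TD update: Q(s,a) -> r + gamma * max_a' Q_target(s',a')."""
    q_sa = q_net(s)[a]
    with torch.no_grad():
        target = r + gamma * target_net(s_next).max()
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

s = torch.randn(state_dim)
r = reward(12.0, torch.tensor([3.0, 7.0, 5.0, 4.0]))
dqn_step(s, a=2, r=r, s_next=torch.randn(state_dim))
```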

13 pages, 2349 KiB  
Article
A Cluster-Based Optimal Computation Offloading Decision Mechanism Using RL in the IIoT Field
by Seolwon Koo and Yujin Lim
Appl. Sci. 2022, 12(1), 384; https://0-doi-org.brum.beds.ac.uk/10.3390/app12010384 - 31 Dec 2021
Cited by 3 | Viewed by 1326
Abstract
In the Industrial Internet of Things (IIoT), various tasks are created dynamically because of small-quantity batch production. Hence, it is difficult to execute tasks using only devices with limited battery life and computation capability. To solve this problem, we adopt the mobile edge computing (MEC) paradigm. However, when there are numerous tasks to be processed on the MEC server (MECS), the server may not be able to handle all of them within a delay constraint owing to its limited computational capability and high network overhead. Therefore, among cooperative computing techniques, we focus on offloading tasks to nearby devices using device-to-device (D2D) communication. We propose a method that determines the optimal offloading strategy in an MEC environment with D2D communication, aiming to minimize device energy consumption and task execution delay under given delay constraints. To solve this problem, we adopt Q-learning, a reinforcement learning (RL) algorithm. However, if a single learning agent decides whether to offload tasks from all devices, its computational complexity increases tremendously. We therefore cluster the nearby devices that comprise the job shop, and each cluster head determines the optimal offloading strategy for the tasks that occur within its cluster. Simulation results show that the proposed algorithm outperforms the compared methods in terms of device energy consumption, task completion rate, task blocking rate, and throughput.
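
A hedged sketch of the tabular Q-learning loop a cluster head might run when choosing between local execution, D2D offloading, and the MECS; the state discretization, reward weights, and toy environment below are illustrative only.

```python
# Illustrative Q-learning at a cluster head (assumed states/rewards).
import numpy as np

rng = np.random.default_rng(0)
n_states, actions = 10, ["local", "d2d", "mecs"]   # states: discretised load levels
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: reward trades off energy consumption against
    execution delay, mirroring the paper's objective."""
    energy = {"local": 1.0, "d2d": 0.4, "mecs": 0.2}[actions[action]]
    delay = {"local": 0.2, "d2d": 0.5, "mecs": 0.8}[actions[action]] * (1 + state / n_states)
    return -(energy + delay), rng.integers(n_states)

for episode in range(2000):
    s = rng.integers(n_states)
    for _ in range(20):
        a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
        r, s_next = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("preferred action per state:", [actions[i] for i in Q.argmax(axis=1)])
```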

20 pages, 470 KiB  
Article
DOA Estimation in Low SNR Environment through Coprime Antenna Arrays: An Innovative Approach by Applying Flower Pollination Algorithm
by Khurram Hameed, Shanshan Tu, Nauman Ahmed, Wasim Khan, Ammar Armghan, Fayadh Alenezi, Norah Alnaim, Muhammad Salman Qamar, Abdul Basit and Farman Ali
Appl. Sci. 2021, 11(17), 7985; https://0-doi-org.brum.beds.ac.uk/10.3390/app11177985 - 29 Aug 2021
Cited by 5 | Viewed by 1858
Abstract
The design of modern heuristic computing paradigms is an innovative development for parameter estimation of direction of arrival (DOA) using sparse antenna arrays. In this study, the optimization strength of the flower pollination algorithm (FPA) is exploited for DOA estimation in a low signal-to-noise ratio (SNR) regime using coprime sensor arrays (CSA). An enhanced degree of freedom (DOF) is achieved with the FPA by locating the global minimum of a highly nonlinear cost function with multiple local minima. The sparse structure of the CSA provides a DOF of up to O(MN) using M+N array elements, where M and N are the numbers of antenna elements used to construct the CSA. Performance is analyzed in terms of estimation accuracy, robustness against noise and the number of snapshots, the frequency distribution, variability, and cumulative distribution function (CDF) of the root mean square error (RMSE) over Monte Carlo runs, and a comparison with particle swarm optimization (PSO). The results demonstrate the worth of the proposed methodology for DOA estimation.
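
To make the optimization loop concrete, the following sketch implements a generic flower pollination algorithm with Lévy-flight global pollination (via Mantegna's algorithm). The stand-in cost function below is hypothetical; the paper instead minimizes a nonlinear array-manifold fitting cost for the coprime array.

```python
# Generic FPA minimising a stand-in DOA cost (assumed cost function).
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(1)
true_doa = np.array([-20.0, 35.0])            # degrees; dimension = number of sources

def cost(theta):
    # Hypothetical stand-in for the CSA manifold-matching cost.
    return np.sum((np.sort(theta) - np.sort(true_doa)) ** 2)

def levy(dim, beta=1.5):
    """Mantegna's algorithm for Levy-flight step sizes."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.normal(0, sigma, dim), rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

pop, dim, p_switch = 25, len(true_doa), 0.8
flowers = rng.uniform(-90, 90, (pop, dim))    # candidate DOA vectors
best = min(flowers, key=cost).copy()

for it in range(500):
    for i in range(pop):
        if rng.random() < p_switch:           # global pollination (Levy flight)
            cand = flowers[i] + levy(dim) * (best - flowers[i])
        else:                                 # local pollination
            j, k = rng.integers(pop, size=2)
            cand = flowers[i] + rng.random() * (flowers[j] - flowers[k])
        cand = np.clip(cand, -90, 90)
        if cost(cand) < cost(flowers[i]):
            flowers[i] = cand
            if cost(cand) < cost(best):
                best = cand.copy()

print("estimated DOAs:", np.round(np.sort(best), 2))
```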

22 pages, 3219 KiB  
Article
A Top-N Movie Recommendation Framework Based on Deep Neural Network with Heterogeneous Modeling
by Jibing Gong, Xinghao Zhang, Qing Li, Cheng Wang, Yaxi Song, Zhiyong Zhao and Shuli Wang
Appl. Sci. 2021, 11(16), 7418; https://0-doi-org.brum.beds.ac.uk/10.3390/app11167418 - 12 Aug 2021
Cited by 3 | Viewed by 2585
Abstract
To provide more accurate and stable recommendations, it is necessary to combine explicit information with implicit information and to mine the potential information in both. Existing methods consider only explicit or only implicit feedback and ignore the potential information they jointly carry, which is also crucial to the accuracy of a recommendation system. Moreover, traditional Heterogeneous Information Network (HIN) recommendation ignores the attribute information in the meta-path and the interaction between the user and the item; it considers only the linear characteristics of user-item interactions, often ignoring their non-linear characteristics. Aiming at the problem of acquiring potential information from assorted feedback, we propose MFDNN, a new top-N recommendation method for Heterogeneous Information Networks (HINs). First, we consider explicit and implicit feedback to determine the potential preferences of users and the potential features of items. Then, matrix factorization (MF) and a deep neural network (DNN) are fused: each learns independent feature embeddings, jointly capturing the linear and non-linear characteristics of user-item interactions. MFDNN was tested on several real datasets, such as MovieLens, and compared with benchmarks, significantly improving the hit ratio (HR) and normalized discounted cumulative gain (NDCG). Further experiments showed that the meta-path bias has an excellent effect on mining potential information, and that fusing explicit and implicit information improves the accuracy and stability of user interest classification.
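
A toy sketch of the fusion idea, in the spirit of NeuMF: separate embeddings feed a linear MF branch and a non-linear MLP branch, whose outputs are concatenated for prediction. MFDNN's actual architecture, meta-path features, and training objective are richer than this minimal model.

```python
# Minimal MF + DNN fusion sketch (assumed layer sizes and fusion scheme).
import torch
import torch.nn as nn

class MFDNNSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=16):
        super().__init__()
        # Independent embeddings for the linear (MF) and non-linear (DNN) parts.
        self.user_mf, self.item_mf = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        self.user_dnn, self.item_dnn = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.out = nn.Linear(2 * dim, 1)      # fuse MF and DNN representations

    def forward(self, u, i):
        mf = self.user_mf(u) * self.item_mf(i)               # linear interaction
        dnn = self.mlp(torch.cat([self.user_dnn(u), self.item_dnn(i)], dim=-1))
        return torch.sigmoid(self.out(torch.cat([mf, dnn], dim=-1))).squeeze(-1)

model = MFDNNSketch(n_users=1000, n_items=1700)
u, i = torch.tensor([3, 42]), torch.tensor([7, 99])
scores = model(u, i)     # predicted preference; trainable with BCE on
print(scores.shape)      # explicit + implicit feedback labels
```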

15 pages, 2237 KiB  
Article
Arabic Gloss WSD Using BERT
by Mohammed El-Razzaz, Mohamed Waleed Fakhr and Fahima A. Maghraby
Appl. Sci. 2021, 11(6), 2567; https://0-doi-org.brum.beds.ac.uk/10.3390/app11062567 - 13 Mar 2021
Cited by 20 | Viewed by 2861
Abstract
Word Sense Disambiguation (WSD) aims to predict the correct sense of a word given its context. This problem is of extreme importance in Arabic, as written words can be highly ambiguous: 43% of diacritized words have multiple interpretations, and the percentage increases to 72% for non-diacritized words, while most Arabic written text lacks diacritical marks. Gloss-based WSD methods measure the semantic similarity or overlap between the context of a target word that needs to be disambiguated and the dictionary definition of that word (its gloss). Arabic gloss WSD suffers from a lack of context-gloss datasets. In this paper, we present an Arabic gloss-based WSD technique. We utilize Bidirectional Encoder Representations from Transformers (BERT) to build two models that can efficiently perform Arabic WSD. These models can be trained with few samples because they build on BERT models pretrained on a large Arabic corpus. Our experimental results show that our models outperform two of the most recent gloss-based WSD methods on the same test data used to evaluate our model. Additionally, our model achieves an F1-score of 89%, compared to the best reported F1-score of 85% for knowledge-based Arabic WSD. Another contribution of this paper is a context-gloss benchmark that may help overcome the lack of a standardized benchmark for Arabic gloss-based WSD.
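
The following sketch frames gloss-based WSD as context-gloss pair classification with a pretrained BERT through the Hugging Face API. The checkpoint name is a placeholder for any Arabic BERT, and the paper's two models and fine-tuning setup differ in detail; before fine-tuning on context-gloss pairs, the classification head below is randomly initialized.

```python
# Context-gloss pair scoring with a pretrained Arabic BERT (placeholder checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "aubmindlab/bert-base-arabertv02"   # assumption: any Arabic BERT works here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def best_sense(context: str, glosses: list[str]) -> int:
    """Return the index of the gloss most compatible with the context.
    After fine-tuning on context-gloss pairs, label 1 = 'correct sense'."""
    enc = tokenizer([context] * len(glosses), glosses,
                    padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits        # shape: (n_glosses, 2)
    return int(logits[:, 1].argmax())

# Classic ambiguity: Arabic "عين" can mean a water spring or the eye.
idx = best_sense("شرب الرجل من العين الصافية",   # "the man drank from the clear spring"
                 ["ينبوع الماء الجاري",           # gloss 1: flowing water spring
                  "عضو الإبصار في الجسم"])        # gloss 2: the organ of sight
```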

18 pages, 963 KiB  
Article
Animal Sound Classification Using Dissimilarity Spaces
by Loris Nanni, Sheryl Brahnam, Alessandra Lumini and Gianluca Maguolo
Appl. Sci. 2020, 10(23), 8578; https://0-doi-org.brum.beds.ac.uk/10.3390/app10238578 - 30 Nov 2020
Cited by 16 | Viewed by 3305
Abstract
The classifier system proposed in this work combines dissimilarity spaces produced by a set of Siamese neural networks (SNNs), designed using four different backbones, with different clustering techniques for training SVMs for automated animal audio classification. The system is evaluated on two animal audio datasets: one of cat and one of bird vocalizations. The proposed approach uses clustering methods to determine a set of centroids (in both a supervised and an unsupervised fashion) from the spectrograms in the dataset. These centroids are exploited to generate the dissimilarity space through the Siamese networks. In addition to feeding the SNNs with raw spectrograms, experiments also process the spectrograms using heterogeneous auto-similarities of characteristics. Once the dissimilarity spaces are computed, each pattern is “projected” into the space to obtain a vector representation; this descriptor is then coupled with a support vector machine (SVM) to classify a spectrogram by its dissimilarity vector. Results demonstrate that the proposed approach performs competitively (without ad hoc optimization of the clustering methods) on both animal vocalization datasets. To further demonstrate the power of the proposed system, the best standalone approach is also evaluated on the challenging ESC-50 dataset for environmental sound classification.
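
A minimal sketch of classification in a dissimilarity space: each pattern is represented by its distances to a set of centroids and classified with an SVM. Here k-means provides the centroids and plain Euclidean distance the dissimilarity, whereas the paper derives the centroids per clustering technique and learns the dissimilarity through Siamese networks.

```python
# Dissimilarity-space classification sketch (k-means + Euclidean stand-ins).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))          # stand-in for spectrogram embeddings
y = rng.integers(0, 2, 200)              # e.g. two vocalisation classes

centroids = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
D = pairwise_distances(X, centroids)     # dissimilarity-space representation

clf = SVC(kernel="rbf").fit(D[:150], y[:150])
print("held-out accuracy:", clf.score(D[150:], y[150:]))
```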

19 pages, 2847 KiB  
Article
An Energy Saving Road Sweeper Using Deep Vision for Garbage Detection
by Luca Donati, Tomaso Fontanini, Fabrizio Tagliaferri and Andrea Prati
Appl. Sci. 2020, 10(22), 8146; https://0-doi-org.brum.beds.ac.uk/10.3390/app10228146 - 17 Nov 2020
Cited by 16 | Viewed by 6843
Abstract
Road sweepers are ubiquitous machines that help preserve the cleanliness and health of our cities by collecting road garbage and sweeping dirt from streets and sidewalks. They are largely mechanical instruments that must operate in harsh conditions, dealing with all sorts of abandoned trash and natural debris. They are usually composed of rotating brushes, collector belts, bins, and sometimes water or air streams. All of these mechanical tools have high power demands and are strongly subject to wear and tear. Moreover, owing to the simple working logic of these machines, the tools run in an “always on”, maximum-power state, and any further regulation is left to the driver. Adding artificial intelligence able to operate these tools correctly in a semi-automatic way would therefore be greatly beneficial. In this paper, we propose an automatic road garbage detection system able to locate most types of road waste with great precision and to instruct a road sweeper to handle them accordingly. With this simple addition to an existing sweeper, more than 80% of the electrical power currently absorbed by the cleaning systems can be saved, and brush wear reduced by the same amount (prolonging brush lifetime), by choosing when to use the brushes, with how much strength, and where. The only hardware components needed are a camera and a PC board able to read the camera output (and communicate via CAN bus). The software consists mainly of a deep neural network for semantic segmentation of images and a real-time program that controls the sweeper actuators with the appropriate timing. To substantiate these claims, we ran extensive tests onboard such a truck, as well as benchmark tests for accuracy, sensitivity, specificity, and inference speed.
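
As a rough illustration of the actuation logic, the sketch below maps a per-pixel garbage mask from a segmentation network to per-zone brush power commands. The zone layout, threshold, and power scaling are assumptions; the real system would translate such commands into CAN bus messages with the appropriate timing.

```python
# Hypothetical mask-to-brush mapping (zone layout and thresholds assumed).
import numpy as np

def brush_commands(mask: np.ndarray, n_zones: int = 3, threshold: float = 0.01):
    """mask: HxW binary garbage mask from the segmentation net.
    Returns per-zone brush power in [0, 1]; 0 keeps the brush off."""
    h, w = mask.shape
    zones = np.array_split(np.arange(w), n_zones)     # left / centre / right strips
    powers = []
    for cols in zones:
        coverage = mask[:, cols].mean()               # fraction of dirty pixels
        # Off below threshold; otherwise scale power with coverage (capped at 1).
        powers.append(0.0 if coverage < threshold else min(1.0, coverage * 20))
    return powers

mask = np.zeros((480, 640))
mask[300:, 450:] = 1                                  # debris on the right side
print(brush_commands(mask))                           # -> only the right brush engages
```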

21 pages, 2202 KiB  
Article
Ocular Biometrics Recognition by Analyzing Human Exploration during Video Observations
by Dario Cazzato, Pierluigi Carcagnì, Claudio Cimarelli, Holger Voos, Cosimo Distante and Marco Leo
Appl. Sci. 2020, 10(13), 4548; https://0-doi-org.brum.beds.ac.uk/10.3390/app10134548 - 30 Jun 2020
Cited by 1 | Viewed by 2281
Abstract
Soft biometrics provide information about an individual, but without the distinctiveness and permanence needed to discriminate between any two individuals. Since gaze is one of the most investigated human traits, works evaluating its feasibility as an additional soft biometric trait have recently appeared in the literature. Unfortunately, there is a lack of systematic studies on clinically approved stimuli providing evidence of the correlation between exploratory paths and individual identities in “natural” scenarios (without calibration, imposed constraints, or wearable tools). To overcome these drawbacks, this paper analyzes gaze patterns using a computer vision pipeline in order to demonstrate the correlation between visual exploration and user identity. This correlation is robustly computed in a free exploration scenario, neither biased by wearable devices nor constrained by a prior personalized calibration. The stimuli were designed by clinical experts, allowing better analysis of human exploration behaviors. In addition, the paper introduces a novel public dataset that provides, for the first time, images framing the faces of the involved subjects rather than only their gaze tracks.
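
Purely as an illustration of the idea that exploration statistics can identify a viewer, the following sketch extracts simple scanpath features from synthetic gaze tracks and trains a classifier on them; the features, classifier, and data are hypothetical and far simpler than the paper's pipeline.

```python
# Toy gaze-based identification sketch (synthetic data, assumed features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def scanpath_features(gaze: np.ndarray) -> np.ndarray:
    """gaze: Nx2 array of (x, y) gaze points for one stimulus viewing."""
    steps = np.linalg.norm(np.diff(gaze, axis=0), axis=1)   # saccade amplitudes
    return np.array([gaze[:, 0].mean(), gaze[:, 1].mean(),
                     gaze[:, 0].std(),  gaze[:, 1].std(),
                     steps.mean(), steps.std()])

rng = np.random.default_rng(0)
# Toy data: 5 subjects x 20 viewings, each biased to a different region/spread.
X, y = [], []
for subject in range(5):
    centre = rng.uniform(0.2, 0.8, size=2)
    for _ in range(20):
        gaze = centre + rng.normal(0, 0.05 * (subject + 1), size=(100, 2))
        X.append(scanpath_features(gaze))
        y.append(subject)
X, y = np.array(X), np.array(y)

clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
print("identification accuracy:", clf.score(X[1::2], y[1::2]))
```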

15 pages, 688 KiB  
Article
COVID-19: A Comparison of Time Series Methods to Forecast Percentage of Active Cases per Population
by Vasilis Papastefanopoulos, Pantelis Linardatos and Sotiris Kotsiantis
Appl. Sci. 2020, 10(11), 3880; https://0-doi-org.brum.beds.ac.uk/10.3390/app10113880 - 03 Jun 2020
Cited by 109 | Viewed by 15159
Abstract
The ongoing COVID-19 pandemic has caused worldwide socioeconomic unrest, forcing governments to introduce extreme measures to reduce its spread. Being able to accurately forecast when the outbreak will hit its peak would significantly diminish the impact of the disease, as it would allow governments to alter their policies accordingly and plan ahead for preventive steps such as public health messaging, raising citizens' awareness, and increasing the capacity of the health system. This study investigated the accuracy of a variety of time series modeling approaches for coronavirus outbreak forecasting in the ten countries with the highest numbers of confirmed cases as of 4 May 2020. For each country, six different time series approaches were developed and compared using two publicly available datasets covering the progression of the virus and the population of each country, respectively. The results demonstrate that, given data produced by actual testing of a small portion of the population, machine learning time series methods can learn and scale to accurately estimate the percentage of the total population that will become affected in the future.
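
A hedged sketch of the per-country evaluation loop: fit a time series model on the percentage of active cases and score held-out forecasts. Here statsmodels' ARIMA on a synthetic logistic curve stands in for the six approaches compared in the paper.

```python
# Per-country forecast evaluation sketch (synthetic series, ARIMA stand-in).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Toy stand-in: a noisy logistic curve of active cases as % of population.
t = np.arange(120)
series = 2.0 / (1 + np.exp(-(t - 60) / 10)) + rng.normal(0, 0.02, len(t))

train, test = series[:100], series[100:]
model = ARIMA(train, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=len(test))

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"20-day-ahead RMSE: {rmse:.4f}")       # compare across models and countries
```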

Review


21 pages, 774 KiB  
Review
Software Engineering Applications Enabled by Blockchain Technology: A Systematic Mapping Study
by Selina Demi, Ricardo Colomo-Palacios and Mary Sánchez-Gordón
Appl. Sci. 2021, 11(7), 2960; https://0-doi-org.brum.beds.ac.uk/10.3390/app11072960 - 25 Mar 2021
Cited by 12 | Viewed by 4651
Abstract
The novel yet disruptive blockchain technology has attracted growing attention due to its intrinsic potential. Beyond the conventional domains that benefit from this potential, such as finance, supply chains, and healthcare, blockchain use cases in software engineering have emerged recently. In this study, we aim to contribute to the body of knowledge of blockchain-oriented software engineering by providing an overview of the software engineering applications enabled by blockchain technology. To do so, we carried out a systematic mapping study and identified 22 primary studies, from which we extracted data within the research type, research topic, and contribution type facets. The findings suggest an increasing trend of studies since 2018. They also reveal the potential of blockchain technologies as an alternative to centralized systems, such as GitHub, Travis CI, and cloud-based package managers, and as a way to establish trust between parties in collaborative software development. We also found that smart contracts can automate a variety of software engineering activities that usually require human reasoning, such as the acceptance phase, payments to software engineers, and compliance adherence. Although the field is not yet mature, we believe this systematic mapping study provides a holistic overview that may benefit researchers interested in bringing blockchain to the software industry, as well as practitioners willing to understand how blockchain can transform software development.
