Deep Neural Network: Algorithms and Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (20 June 2023) | Viewed by 10388

Special Issue Editors

Guest Editor
School of Mathematics, Physics and Computing, University of Southern Queensland, Brisbane 4300, Australia
Interests: artificial intelligence; machine learning; deep learning; applied sciences; mathematical modeling

Guest Editor
School of Mechanical and Electrical Engineering, University of Southern Queensland, Brisbane, QLD 4300, Australia
Interests: artificial intelligence; machine learning applications

Special Issue Information

Dear Colleagues,

Deep neural networks have become an important and popular tool for solving various types of classification, clustering, and regression problems. New deep learning architectures offer many advancements in computational capability, delivering highly accurate and reliable results. Deep learning is also a highly active field of research in image and signal processing.

Therefore, this Special Issue is intended for the presentation of novel algorithms and their applications in various areas of research, including the natural sciences, engineering and the built environment, information technology, the social sciences, and business/economics. Areas relevant to deep learning networks and their applications include, but are not limited to, artificial intelligence, machine learning, deep learning, and the processing of large datasets from satellites, ground-based observations, scientific experiments, sensor networks, medical instruments, and many other sources.

This Special Issue will publish high-quality, original research papers in the overlapping fields of:

  • Artificial intelligence;
  • Machine learning;
  • Deep learning;
  • Computational and data science;
  • Big data applications, algorithms, and systems;
  • Hardware and software optimized for artificial intelligence and machine learning;
  • New applications of artificial intelligence and machine learning;
  • Forecasting and prediction using deep learning neural networks.

Dr. Nawin Raj
Dr. Jason Brown
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep neural network
  • deep learning architecture and algorithms

Published Papers (6 papers)


Research


12 pages, 2288 KiB  
Article
Improving Graph Neural Network Models in Link Prediction Task via A Policy-Based Training Method
by Yigeng Shang, Zhigang Hao, Chao Yao and Guoliang Li
Appl. Sci. 2023, 13(1), 297; https://doi.org/10.3390/app13010297 - 26 Dec 2022
Viewed by 2105
Abstract
Graph neural networks (GNNs), widely used deep learning models for processing graph-structured data, have attracted numerous studies applying them to the link prediction task. In these studies, observed edges in a network are used as positive samples, and unobserved edges are randomly sampled as negative ones. However, randomly sampling unobserved edges as negative samples is problematic. First, some unobserved edges are missing edges, i.e., edges that actually exist in the network but have not been observed. Second, some unobserved edges can be easily distinguished from the observed edges and therefore contribute little to the prediction task. Consequently, directly using randomly sampled unobserved edges as negative samples makes it difficult for GNN models to achieve satisfactory prediction performance in the link prediction task. To address this issue, we propose a policy-based training method (PbTRM) to improve the quality of negative samples. In the proposed PbTRM, a negative sample selector generates the state vectors of the randomly sampled unobserved edges and determines whether to select them as negative samples. We perform experiments with three GNN models on two standard datasets. The results show that the proposed PbTRM can enhance the prediction performance of GNN models in the link prediction task.
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
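As a rough illustration of the baseline this paper improves upon, the sketch below randomly samples unobserved node pairs as negative edges for link prediction. It is a generic Python/PyTorch example, not the authors' PbTRM; the toy graph and function names are placeholders.

    # Randomly sample unobserved node pairs as negative edges for link prediction.
    # This is the baseline sampling strategy the paper criticizes; all names here
    # are illustrative placeholders, not code from the paper.
    import torch

    def sample_negative_edges(num_nodes, pos_edges, num_samples):
        """pos_edges: LongTensor of shape (2, E) holding the observed (positive) edges."""
        observed = set(map(tuple, pos_edges.t().tolist()))
        negatives = []
        while len(negatives) < num_samples:
            u = torch.randint(0, num_nodes, (1,)).item()
            v = torch.randint(0, num_nodes, (1,)).item()
            # Skip self-loops and pairs that are already observed edges.
            if u != v and (u, v) not in observed and (v, u) not in observed:
                negatives.append((u, v))
        return torch.tensor(negatives, dtype=torch.long).t()  # shape (2, num_samples)

    # Toy example: a graph with 6 nodes and 4 observed edges.
    pos_edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    neg_edges = sample_negative_edges(num_nodes=6, pos_edges=pos_edges, num_samples=4)

The paper's PbTRM would additionally score each sampled pair with a negative sample selector and keep only the pairs judged useful for training.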

13 pages, 2238 KiB  
Article
CNN Based Image Classification of Malicious UAVs
by Jason Brown, Zahra Gharineiat and Nawin Raj
Appl. Sci. 2023, 13(1), 240; https://doi.org/10.3390/app13010240 - 24 Dec 2022
Cited by 4 | Viewed by 1620
Abstract
Unmanned Aerial Vehicles (UAVs), or drones, have found a wide range of useful applications in society over the past few years, but there has also been a growth in the use of UAVs for malicious purposes. One way to manage this issue is to allow reporting of malicious UAVs (e.g., through a smartphone application), with the report including a photo of the UAV. It would be useful to be able to automatically identify the type of UAV within the image, in terms of the manufacturer and specific product, using a trained image classification model. In this paper, we discuss the collection of images of three popular UAVs at different elevations, at different distances from the observer, and using different camera zoom levels. We then train four image classification models based upon Convolutional Neural Networks (CNNs) using this UAV image dataset and the concept of transfer learning from the well-known ImageNet database. The trained models can classify the type of UAV contained in unseen test images with up to approximately 81% accuracy (for the ResNet-18 model), even though two of the UAVs in the dataset are visually similar and the dataset contains images of UAVs that are a significant distance from the observer. This motivates expanding the study in the future to include more UAV types and other usage scenarios (e.g., UAVs carrying loads).
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
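A minimal sketch of the transfer-learning setup the abstract describes, assuming PyTorch/torchvision: an ImageNet-pretrained ResNet-18 whose final layer is replaced with a three-class UAV head. Freezing the backbone and the training details are assumptions, not the authors' exact configuration.

    # ResNet-18 transfer learning for a 3-class UAV classifier (sketch only;
    # class count, freezing policy, and hyperparameters are assumptions).
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for param in model.parameters():
        param.requires_grad = False                # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 3)  # new head for three UAV types
    # Train only model.fc on the UAV image dataset, e.g. with
    # torch.optim.Adam(model.fc.parameters(), lr=1e-3) and a cross-entropy loss.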

17 pages, 6707 KiB  
Article
Laboratory Flame Smoke Detection Based on an Improved YOLOX Algorithm
by Maolin Luo, Linghua Xu, Yongliang Yang, Min Cao and Jing Yang
Appl. Sci. 2022, 12(24), 12876; https://doi.org/10.3390/app122412876 - 15 Dec 2022
Cited by 5 | Viewed by 1687
Abstract
Fires in university laboratories often lead to serious casualties and property damage, and traditional sensor-based fire detection techniques suffer from warning delays. Current deep learning algorithms based on convolutional neural networks offer high accuracy, low cost, and high speed when processing image data, but their ability to model the relationships between visual elements and objects is inferior to that of Transformers. Therefore, this paper proposes an improved YOLOX target detection algorithm that combines the Swin Transformer architecture, the CBAM attention mechanism, and a Slim Neck structure, applied to flame and smoke detection in laboratory fires. The experimental results verify that the improved YOLOX algorithm achieves higher detection accuracy and more accurate localization of flame and smoke in complex situations than the original YOLOX algorithm, SSD, Faster R-CNN, YOLOv4, and YOLOv5, with APs of 92.78% and 92.46% for flame and smoke, respectively, and an mAP of 92.26%, which demonstrates the effectiveness and superiority of the improved YOLOX target detection algorithm in fire detection.
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
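CBAM, one of the components the paper adds to YOLOX, is a published attention block that applies channel attention followed by spatial attention. The PyTorch sketch below is a generic implementation of that block, not the authors' integration into the YOLOX network; the reduction ratio and kernel size are common defaults.

    # Compact CBAM block: channel attention followed by spatial attention.
    import torch
    import torch.nn as nn

    class CBAM(nn.Module):
        def __init__(self, channels, reduction=16, spatial_kernel=7):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels))
            self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

        def forward(self, x):
            b, c, _, _ = x.shape
            # Channel attention: shared MLP over average- and max-pooled descriptors.
            avg = self.mlp(x.mean(dim=(2, 3)))
            mx = self.mlp(x.amax(dim=(2, 3)))
            x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
            # Spatial attention: convolution over channel-wise mean and max maps.
            s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
            return x * torch.sigmoid(self.spatial(s))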

13 pages, 4395 KiB  
Article
Model Compression and Acceleration: Lip Recognition Based on Channel-Level Structured Pruning
by Yuanyao Lu, Ran Ni and Jing Wen
Appl. Sci. 2022, 12(20), 10468; https://doi.org/10.3390/app122010468 - 17 Oct 2022
Viewed by 981
Abstract
In recent years, with the rapid development of deep learning, the performance requirements for real-time recognition systems have become increasingly demanding. At the same time, the rapid growth in data volume means that latency, power consumption, and cost can no longer be ignored, making traditional neural networks nearly impossible to productize. To mitigate the problems neural networks face on very large datasets without degrading recognition performance, model compression methods have attracted growing attention. However, existing model compression methods, such as low-rank decomposition, transfer/compact convolutional filters, and knowledge distillation, still have shortcomings: they cope with the computational burden of large datasets only to a certain extent, their results are unstable on some datasets, and system performance has not improved satisfactorily. To address this, we propose a structured network compression and acceleration method for convolutional neural networks that integrates a channel-pruned convolutional neural network with a recurrent neural network, and we apply it to a lip-recognition system in this paper.
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
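A minimal sketch of channel-level structured pruning using PyTorch's built-in pruning utilities; the layer, pruning ratio, and norm are placeholders and do not reflect the paper's actual compression pipeline.

    # Channel-level (filter-wise) structured pruning of one convolution layer.
    # The 30% ratio and the layer shape are illustrative only.
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
    # Zero out the 30% of output channels (dim=0) with the smallest L2 norm.
    prune.ln_structured(conv, name="weight", amount=0.3, n=2, dim=0)
    prune.remove(conv, "weight")  # bake the pruning mask into the weights
    # In a full pipeline the zeroed channels would then be physically removed
    # and the network fine-tuned to recover accuracy.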

33 pages, 9029 KiB  
Article
Serial Decoders-Based Auto-Encoders for Image Reconstruction
by Honggui Li, Maria Trocan, Mohamad Sawan and Dimitri Galayko
Appl. Sci. 2022, 12(16), 8256; https://doi.org/10.3390/app12168256 - 18 Aug 2022
Cited by 2 | Viewed by 1361
Abstract
Auto-encoders are composed of encoding and decoding units; hence, they hold inherent potential for high-performance data compression and compressed sensing. The main disadvantages of current auto-encoders are the following: their objective is efficient feature representation rather than lossless data reconstruction; the evaluation of data recovery performance is neglected; and it is difficult to achieve lossless data reconstruction using pure auto-encoders, even with pure deep learning. This paper aims to perform image reconstruction using auto-encoders, employs cascade decoders-based auto-encoders, improves the performance of image reconstruction, gradually approaches lossless image recovery, and provides a solid theoretical and practical basis for auto-encoder-based image compression and compressed sensing. The proposed serial decoders-based auto-encoders include architectures of multi-level decoders and their related progressive optimization sub-problems. The cascade decoders consist of general decoders, residual decoders, adversarial decoders, and their combinations. The effectiveness of residual cascade decoders for image reconstruction is proven mathematically. Progressive training can efficiently enhance the quality, stability, and variation of image reconstruction. The experimental results show that the proposed auto-encoders outperform classical auto-encoders in image reconstruction performance.
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
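A toy PyTorch sketch of the serial-decoder idea: a second, residual decoder refines the first decoder's reconstruction. The layer sizes, single refinement stage, and fully connected layers are assumptions for illustration, not the architectures evaluated in the paper.

    # Toy auto-encoder with two serial (cascade) decoders: decoder2 predicts a
    # residual correction to decoder1's coarse reconstruction.
    import torch
    import torch.nn as nn

    class SerialDecoderAE(nn.Module):
        def __init__(self, dim=784, latent=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
            self.decoder1 = nn.Sequential(nn.Linear(latent, dim), nn.Sigmoid())
            self.decoder2 = nn.Linear(latent + dim, dim)  # residual decoder

        def forward(self, x):
            z = self.encoder(x)
            x1 = self.decoder1(z)                                # coarse reconstruction
            x2 = x1 + self.decoder2(torch.cat([z, x1], dim=1))   # refined reconstruction
            return x1, x2

    model = SerialDecoderAE()
    coarse, refined = model(torch.rand(8, 784))
    # Progressive training would optimize the reconstruction loss of x1 first,
    # then that of x2, as successive sub-problems.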

Review


16 pages, 988 KiB  
Review
A Survey on Adversarial Perturbations and Attacks on CAPTCHAs
by Suliman A. Alsuhibany
Appl. Sci. 2023, 13(7), 4602; https://doi.org/10.3390/app13074602 - 5 Apr 2023
Cited by 2 | Viewed by 1567
Abstract
The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been a topic of interest for several years. The ability of computers to recognize CAPTCHAs has increased significantly due to the development of deep learning techniques. To counter this ability, adversarial machine learning has recently been proposed, whereby CAPTCHA images are perturbed. However, with the introduction of various removal methods, such perturbations can be removed. This paper therefore presents the first comprehensive survey on adversarial perturbations and attacks on CAPTCHAs. In particular, the use of deep learning techniques to break CAPTCHAs is reviewed, and the effectiveness of adversarial CAPTCHAs is discussed. Drawing on the reviewed literature, several observations are provided as part of a broader outlook on this research direction. To emphasise adversarial CAPTCHAs as a potential solution to current attacks, a set of perturbation techniques is suggested for application in adversarial CAPTCHAs.
(This article belongs to the Special Issue Deep Neural Network: Algorithms and Applications)
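The perturbations the survey covers are adversarial examples crafted against deep-learning CAPTCHA solvers. The sketch below shows the standard fast gradient sign method (FGSM) as one representative perturbation, with a placeholder recognizer, label, and step size; it is not a specific technique proposed in the survey.

    # Minimal FGSM-style perturbation of a CAPTCHA image against a recognizer.
    # `model`, `label`, and `epsilon` are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """image: (1, C, H, W) tensor in [0, 1]; label: ground-truth class index tensor."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, then clamp to valid pixel range.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()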