New Technological Advancements and Applications of Deep Learning

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 November 2022) | Viewed by 43828

Special Issue Editors


Guest Editor
Faculty of Science & Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia
Interests: deep learning; machine learning; medical imaging; pattern recognition; transfer learning

Guest Editor
School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
Interests: big data analysis; data mining; machine learning; deep learning; artificial intelligence; computational intelligence

Guest Editor
Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
Interests: computer graphics; computer vision; machine learning; biomedical imaging; deep learning

Special Issue Information

Dear Colleagues,

Deep learning models have become a leading approach in many challenging real-world domains, including medical imaging, agriculture, and bioinformatics, and have overcome many of the drawbacks of traditional machine learning techniques. Despite this great success, several drawbacks and pitfalls remain to be addressed, such as the lack of training data and imbalanced data, and more suitable network architectures continue to be proposed.

This Special Issue aims to present novel approaches to challenging applications of deep learning, such as medical information processing, cybersecurity, natural language processing, informatics, and robotics and control, among many others. Contributions focused on hardware solutions, e.g., FPGA- and GPU-based implementations, are also encouraged, and high-quality review and survey papers are welcome. The papers considered for possible publication may focus on, but are not necessarily limited to, the following areas:

  • Deep learning
  • Convolutional neural networks
  • Deep learning algorithm/architectures/theory
  • Deep learning applications
  • Image classification
  • Image segmentation
  • Image registration
  • Supervised learning
  • Unsupervised learning
  • FPGA- and GPU-based solutions
  • Overfitting
  • Imbalanced data
  • Small datasets for training

Dr. Laith Alzubaidi
Dr. Jinglan Zhang
Prof. Dr. Ye Duan
Prof. Dr. Jose Santamaria Lopez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Convolutional neural networks
  • Deep learning algorithm/architectures/theory
  • Deep learning applications
  • Image classification
  • Image segmentation
  • Image registration
  • Supervised learning
  • Unsupervised learning
  • FPGA- and GPU-based solutions
  • Overfitting
  • Imbalanced data
  • Small datasets for training

Published Papers (7 papers)

Research

16 pages, 1506 KiB  
Article
TWGH: A Tripartite Whale–Gray Wolf–Harmony Algorithm to Minimize Combinatorial Test Suite Problem
by Heba Mohammed Fadhil, Mohammed Najm Abdullah and Mohammed Issam Younis
Electronics 2022, 11(18), 2885; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11182885 - 12 Sep 2022
Cited by 1 | Viewed by 1348
Abstract
Solving real-world combinatorial problems remains a major hurdle for today's researchers. Optimization techniques can nevertheless be used to find, design, and reach a genuinely optimal solution to a particular problem, despite the limitations of the applied approach, and the surge of interest in population-based optimization methodologies has spawned a plethora of new and improved approaches to a wide range of engineering problems. Test suite minimization is a combinatorial testing challenge that has been shown to be an extremely difficult combinatorial optimization problem. The authors propose a highly reliable method for selecting combinatorial test cases that uses a hybrid whale–gray wolf optimization algorithm in conjunction with harmony search techniques. Analysis of the results shows that the proposed approach significantly reduces test suite size. To assess the quality, speed, and scalability of TWGH, experiments were carried out on a set of well-known benchmarks. The tests showed that the proposed strategy achieves consistently strong test suite reduction and can be used to improve performance. Compared with well-known optimization-based strategies, TWGH gives competitive results and supports high interaction strengths (2 ≤ t ≤ 12).
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
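
As a brief illustration of the kind of objective such metaheuristics optimize, the following Python sketch computes the t-way interaction coverage of a candidate test suite, a measure commonly used as the fitness function in combinatorial test suite minimization. It is not taken from the paper; the function and parameter names are illustrative assumptions.

```python
# Illustrative sketch (not the authors' TWGH implementation): a t-way coverage
# measure of the kind typically used as a fitness function when metaheuristics
# such as whale, gray-wolf, or harmony search are applied to test-suite
# minimization. Names and the example data are assumptions for illustration.
from itertools import combinations, product


def t_way_coverage(test_suite, domains, t=2):
    """Fraction of all t-way value combinations covered by `test_suite`.

    test_suite: list of tuples, one value per parameter.
    domains:    list of lists, the possible values of each parameter.
    t:          interaction strength (2 = pairwise).
    """
    params = range(len(domains))
    required = 0
    covered = 0
    for cols in combinations(params, t):
        # every value combination the chosen t parameters can take
        all_combos = set(product(*(domains[c] for c in cols)))
        seen = {tuple(test[c] for c in cols) for test in test_suite}
        required += len(all_combos)
        covered += len(all_combos & seen)
    return covered / required


# Example: 3 Boolean parameters, a 4-row suite covering all pairs.
domains = [[0, 1]] * 3
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(t_way_coverage(suite, domains, t=2))  # -> 1.0 (all pairs covered)
```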

15 pages, 10120 KiB  
Article
A Five Convolutional Layer Deep Convolutional Neural Network for Plant Leaf Disease Detection
by J. Arun Pandian, K. Kanchanadevi, V. Dhilip Kumar, Elżbieta Jasińska, Radomír Goňo, Zbigniew Leonowicz and Michał Jasiński
Electronics 2022, 11(8), 1266; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11081266 - 16 Apr 2022
Cited by 32 | Viewed by 3196
Abstract
In this research, we proposed a Deep Convolutional Neural Network (DCNN) model for image-based plant leaf disease identification using data augmentation and hyperparameter optimization techniques. The DCNN model was trained on an augmented dataset of over 240,000 images of different healthy and diseased plant leaves and backgrounds. Five image augmentation techniques were used: Generative Adversarial Network, Neural Style Transfer, Principal Component Analysis, Color Augmentation, and Position Augmentation. The random search technique was used to optimize the hyperparameters of the proposed DCNN model. This research shows the significance of choosing a suitable number of layers and filters in DCNN development. Moreover, the experimental outcomes illustrate the importance of data augmentation and hyperparameter optimization techniques. The performance of the proposed DCNN was evaluated using different metrics such as classification accuracy, precision, recall, and F1-score. The experimental results show that the proposed DCNN model achieves an average classification accuracy of 98.41% on the test dataset. Moreover, the overall performance of the proposed DCNN model was better than that of advanced transfer learning and machine learning techniques. The proposed DCNN model is useful in the identification of plant leaf diseases.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
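
The sketch below illustrates, in hedged form, the two ingredients the abstract highlights: a small CNN with five convolutional layers and a random search over its hyperparameters. The layer widths, search space, and dummy data are assumptions for illustration and do not reproduce the authors' DCNN.

```python
# Minimal sketch of a five-convolutional-layer CNN tuned by random search.
# The search space, layer sizes, and random stand-in data below are
# illustrative assumptions, not the authors' settings.
import random
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


def build_cnn(num_filters, dropout_rate, learning_rate, num_classes=38):
    model = tf.keras.Sequential([
        layers.Input(shape=(64, 64, 3)),
        *[layers.Conv2D(num_filters, 3, activation="relu", padding="same")
          for _ in range(5)],                      # five convolutional layers
        layers.GlobalAveragePooling2D(),
        layers.Dropout(dropout_rate),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model


# Tiny random data stands in for the augmented leaf-image dataset.
x = np.random.rand(32, 64, 64, 3).astype("float32")
y = np.random.randint(0, 38, 32)

best_acc, best_cfg = 0.0, None
for _ in range(3):                                  # random search trials
    cfg = {"num_filters": random.choice([16, 32, 64]),
           "dropout_rate": random.uniform(0.2, 0.5),
           "learning_rate": 10 ** random.uniform(-4, -2)}
    model = build_cnn(**cfg)
    hist = model.fit(x, y, epochs=1, batch_size=8, verbose=0)
    acc = hist.history["accuracy"][-1]
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg
print(best_cfg, best_acc)
```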

20 pages, 1677 KiB  
Article
Improved k-Means Clustering Algorithm for Big Data Based on Distributed Smartphone Neural Engine Processor
by Fouad H. Awad and Murtadha M. Hamad
Electronics 2022, 11(6), 883; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11060883 - 11 Mar 2022
Cited by 32 | Viewed by 3599
Abstract
Clustering is one of the most significant applications in the big data field. However, using clustering techniques with big data requires an ample amount of processing power and resources due to the complexity and the resulting increase in clustering time. Therefore, many techniques have been implemented to improve the performance of clustering algorithms, especially k-means clustering. In this paper, a neural-processor-based k-means clustering technique is proposed to cluster big data by exploiting the dedicated machine learning processors of mobile devices. The solution was designed to run on the single-instruction machine learning processor that exists in mobile devices. Running k-means clustering in a distributed scheme based on mobile machine learning processors can efficiently handle big data clustering over the network. The results showed that using the neural engine processor of a mobile smartphone can maximize the speed of the clustering algorithm, improving clustering performance by up to two times compared with traditional laptop/desktop processors. Furthermore, the number of iterations required to obtain k clusters was reduced by up to two times compared with parallel and distributed k-means.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
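
For illustration only, the following NumPy sketch expresses the k-means assignment step as dense matrix arithmetic, the kind of workload a dedicated neural engine accelerates, and runs it over data partitions that stand in for distributed devices. It is not the paper's neural-engine implementation; all names and sizes are assumptions.

```python
# Illustrative sketch only: the k-means assignment step written as dense
# matrix arithmetic, with the data split into partitions to mimic distributing
# the work across devices.
import numpy as np


def assign(points, centroids):
    # Squared Euclidean distances via ||x||^2 - 2 x.c + ||c||^2 (one matmul).
    d2 = (np.sum(points ** 2, axis=1, keepdims=True)
          - 2.0 * points @ centroids.T
          + np.sum(centroids ** 2, axis=1))
    return np.argmin(d2, axis=1)


def distributed_kmeans(partitions, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    data = np.vstack(partitions)
    centroids = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        sums = np.zeros_like(centroids)
        counts = np.zeros(k)
        for part in partitions:                 # each partition = one "device"
            labels = assign(part, centroids)
            for j in range(k):
                mask = labels == j
                sums[j] += part[mask].sum(axis=0)
                counts[j] += mask.sum()
        nonempty = counts > 0
        centroids[nonempty] = sums[nonempty] / counts[nonempty][:, None]
    return centroids


parts = [np.random.randn(500, 2) + off for off in ([0, 0], [5, 5], [-5, 5])]
print(distributed_kmeans(parts, k=3))
```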

27 pages, 7714 KiB  
Article
A Transfer Learning Approach for Lumbar Spine Disc State Classification
by Ali Al-kubaisi and Nasser N. Khamiss
Electronics 2022, 11(1), 85; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics11010085 - 28 Dec 2021
Cited by 6 | Viewed by 2952
Abstract
Recently, deep learning algorithms have become one of the most popular methods used in medical image analysis. Deep learning tools provide accuracy and speed in diagnosing and classifying lumbar spine problems. Disc herniation and spinal stenosis are two of the most common lower back diseases, and diagnosing lower back pain can be costly in terms of time and available expertise. In this paper, we used multiple approaches to overcome the lack of training data in disc state classification and to enhance the performance of the disc state classification task. To achieve this goal, transfer learning from different datasets and a proposed region of interest (ROI) technique were implemented. It has been demonstrated that transfer learning from the same domain as the target dataset may increase performance dramatically. Applying the ROI method improved the disc state classification results by 2% for VGG19, 16% for ResNet50, 5% for MobileNetV2, and 2% for VGG16. Compared with transfer from ImageNet, same-domain transfer improved the results by 4% for VGG16 and 6% for VGG19. Moreover, the closer the data to be classified are to the data on which the system was trained, the better the achieved results.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
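
A hedged sketch of the two steps described above: cropping a region of interest (ROI) and fine-tuning a pre-trained backbone. The ImageNet weights, ROI coordinates, and three-class head are illustrative assumptions; the paper's key finding is that transfer from the same domain works even better.

```python
# Hedged sketch of ROI cropping plus transfer learning with a frozen
# pre-trained backbone. Weights, ROI box, and class count are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


def crop_roi(image, box):
    """Crop a (top, bottom, left, right) box and resize back to 224x224."""
    top, bottom, left, right = box
    roi = image[top:bottom, left:right, :]
    return tf.image.resize(roi, (224, 224)).numpy()


base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                      # freeze the transferred features

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(3, activation="softmax"),  # assumed three disc-state classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy MRI-like slice standing in for real data.
mri = np.random.rand(320, 320, 3).astype("float32")
x = crop_roi(mri, box=(100, 220, 90, 230))[None, ...]
print(model.predict(x, verbose=0).shape)    # (1, 3) class probabilities
```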

12 pages, 1953 KiB  
Article
IoT and Cloud Computing in Health-Care: A New Wearable Device and Cloud-Based Deep Learning Algorithm for Monitoring of Diabetes
by Ahmed R. Nasser, Ahmed M. Hasan, Amjad J. Humaidi, Ahmed Alkhayyat, Laith Alzubaidi, Mohammed A. Fadhel, José Santamaría and Ye Duan
Electronics 2021, 10(21), 2719; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10212719 - 08 Nov 2021
Cited by 47 | Viewed by 4340
Abstract
Diabetes is a chronic disease that negatively affects human health when blood glucose levels are elevated beyond a certain range, a condition called hyperglycemia. Current devices for continuous glucose monitoring (CGM) supervise the blood glucose level and alert users with type 1 diabetes once a certain critical level is surpassed. This can force the patient's body to operate at critical levels until medicine is taken to reduce the glucose level, increasing the risk of considerable health damage if the intake is delayed. To overcome this, a new approach based on cutting-edge software and hardware technologies is proposed in this paper. Specifically, an artificial intelligence deep learning (DL) model is proposed to predict glucose levels over a 30 min horizon. Moreover, cloud computing and IoT technologies are employed to implement the prediction model and combine it with the existing wearable CGM device to provide patients with predictions of future glucose levels. Among the many DL methods in the state of the art (SoTA), a cascaded RNN-RBM DL model based on recurrent neural networks (RNNs) and restricted Boltzmann machines (RBMs) was selected due to its superior prediction accuracy. The experimental results show that the proposed cloud- and DL-based wearable approach achieves an average RMSE of 15.589, outperforming similar existing blood glucose prediction methods in the SoTA.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
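
As a simplified illustration of the forecasting task, the sketch below trains a plain GRU to predict the glucose level 30 minutes ahead and reports RMSE. The paper's model is a cascaded RNN-RBM; the GRU, the 5-minute sampling rate, and the window length used here are stand-in assumptions.

```python
# Sketch of 30-minute-ahead glucose forecasting evaluated with RMSE.
# The paper uses a cascaded RNN-RBM; a plain GRU and synthetic data are
# used here purely for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

WINDOW = 12          # past hour of readings at 5-minute sampling (assumption)
HORIZON = 6          # 6 steps * 5 minutes = 30-minute prediction horizon

model = tf.keras.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.GRU(32),
    layers.Dense(1),                 # glucose value 30 minutes ahead
])
model.compile(optimizer="adam", loss="mse")

# Synthetic CGM-like series stands in for real sensor data.
series = 120 + 30 * np.sin(np.linspace(0, 20, 2000)) + np.random.randn(2000)
X = np.stack([series[i:i + WINDOW]
              for i in range(len(series) - WINDOW - HORIZON)])
y = series[WINDOW + HORIZON:]
X = X[..., None].astype("float32")

model.fit(X[:-200], y[:-200], epochs=2, batch_size=32, verbose=0)
pred = model.predict(X[-200:], verbose=0).ravel()
rmse = np.sqrt(np.mean((pred - y[-200:]) ** 2))
print(f"RMSE: {rmse:.2f}")
```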

19 pages, 662 KiB  
Article
Identification of Plant-Leaf Diseases Using CNN and Transfer-Learning Approach
by Sk Mahmudul Hassan, Arnab Kumar Maji, Michał Jasiński, Zbigniew Leonowicz and Elżbieta Jasińska
Electronics 2021, 10(12), 1388; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics10121388 - 09 Jun 2021
Cited by 204 | Viewed by 21965
Abstract
The timely identification and early prevention of crop diseases are essential for improving production. In this paper, deep convolutional neural network (CNN) models are implemented to identify and diagnose diseases in plants from their leaves, since CNNs have achieved impressive results in the field of machine vision. Standard CNN models require a large number of parameters and a higher computation cost. In this paper, we replaced standard convolution with depthwise separable convolution, which reduces the parameter count and computation cost. The implemented models were trained on an open dataset consisting of 14 different plant species, 38 different categorical disease classes, and healthy plant leaves. To evaluate the performance of the models, different parameters such as batch size, dropout, and different numbers of epochs were incorporated. The implemented models achieved disease-classification accuracy rates of 98.42%, 99.11%, 97.02%, and 99.56% using InceptionV3, InceptionResNetV2, MobileNetV2, and EfficientNetB0, respectively, which were greater than those of traditional handcrafted-feature-based approaches. In comparison with other deep learning models, the implemented models achieved better accuracy and required less training time. Moreover, the MobileNetV2 architecture is compatible with mobile devices using the optimized parameters. The accuracy results in the identification of diseases show that the deep CNN model is promising, can greatly impact the efficient identification of diseases, and may have potential in the detection of diseases in real-time agricultural systems.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
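
To make the parameter saving from depthwise separable convolution concrete, the sketch below builds one standard and one depthwise separable 3x3 convolution with identical input and output channels and compares their parameter counts; the channel sizes are arbitrary illustrative choices, not the authors' configuration.

```python
# Why depthwise separable convolution shrinks the model: compare parameter
# counts of a standard vs. a depthwise separable 3x3 convolution with the
# same channels. Channel sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(64, 64, 128))

standard = tf.keras.Model(inp, layers.Conv2D(256, 3, padding="same")(inp))
separable = tf.keras.Model(inp, layers.SeparableConv2D(256, 3, padding="same")(inp))

print("standard:  ", standard.count_params())   # 128*3*3*256 + 256 = 295,168
print("separable: ", separable.count_params())  # 128*3*3 + 128*256 + 256 = 34,176
```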

Review

22 pages, 1930 KiB  
Review
Review on Deep Learning Approaches for Anomaly Event Detection in Video Surveillance
by Sabah Abdulazeez Jebur, Khalid A. Hussein, Haider Kadhim Hoomod, Laith Alzubaidi and José Santamaría
Electronics 2023, 12(1), 29; https://0-doi-org.brum.beds.ac.uk/10.3390/electronics12010029 - 22 Dec 2022
Cited by 18 | Viewed by 4499
Abstract
In the last few years, due to the continuous advancement of technology, human behavior detection and recognition have become an important area of scientific research in the field of computer vision (CV). However, one of the most challenging problems in CV is anomaly detection (AD), because of the complex environment and the difficulty of extracting a particular feature that correlates with a particular event. As the number of cameras monitoring a given area increases, it becomes vital to have systems capable of learning from the vast amounts of available data to identify any potentially suspicious behavior. The introduction of deep learning (DL) has brought new development directions for AD. In particular, DL models such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have achieved excellent performance on AD tasks, as well as in other challenging domains such as image classification, object detection, and speech processing. In this review, we aim to present a comprehensive overview of research methods using DL to address the AD problem. First, different classifications of anomalies are introduced, and then the DL methods and architectures used for video AD are discussed and analyzed, respectively. The reviewed contributions are categorized by the network type, architecture model, datasets, and performance metrics used to evaluate these methodologies. Moreover, several applications of video AD are discussed. Finally, we outline the challenges and future directions for further research in the field.
(This article belongs to the Special Issue New Technological Advancements and Applications of Deep Learning)
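
One family of methods such reviews cover is reconstruction-based anomaly detection, sketched below in hedged form: a convolutional autoencoder is trained on normal frames only, and frames whose reconstruction error exceeds a threshold are flagged. The frame size, architecture, and threshold are illustrative assumptions, not taken from the review.

```python
# Hedged sketch of reconstruction-based video anomaly detection: train a small
# convolutional autoencoder on normal frames, then flag frames with high
# reconstruction error. Sizes and threshold are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

autoencoder = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(8, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Synthetic "normal" frames; a real system would use frames from normal footage.
normal = np.random.rand(64, 64, 64, 1).astype("float32")
autoencoder.fit(normal, normal, epochs=2, batch_size=16, verbose=0)


def anomaly_score(frames):
    """Per-frame mean squared reconstruction error."""
    recon = autoencoder.predict(frames, verbose=0)
    return np.mean((frames - recon) ** 2, axis=(1, 2, 3))


test = np.random.rand(4, 64, 64, 1).astype("float32")
print(anomaly_score(test) > 0.02)   # boolean anomaly flags (threshold assumed)
```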