Advanced Deep Learning Architecture and Related Technologies Based on Cloud Computing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 October 2021) | Viewed by 5503

Special Issue Editor


Prof. Dr. Seungmin Rho
Guest Editor
Department of Industrial Security, Chung-Ang University, Seoul 06974, Republic of Korea
Interests: databases; big data analysis; music retrieval; multimedia systems; machine learning; knowledge management; computational intelligence

Special Issue Information

Dear Colleagues,

Deep learning plays an essential role in solving various challenges, such as prediction, object recognition, and natural language processing. However, deep learning techniques require large amounts of data and hardware resources.

Various IT companies, including Amazon, provide virtual hardware through cloud computing. These virtual resources span a wide range of performance levels, and an environment requiring high performance can be constructed at relatively low cost. As a result, a deep learning network can be built relatively free from restrictions on hardware resources.

Therefore, deep learning using cloud computing can train networks with a huge number of parameters that are expected to show excellent performance; furthermore, several such networks can be combined into an ensemble to achieve one major goal. Alternatively, an optimized deep learning network can achieve reasonable results at minimal cost.

Additionally, using deep learning on a cloud computing system requires a communication system capable of transmitting and receiving enormous amounts of data, as well as a security system to protect the datasets and the network being trained.

This Special Issue aims to cover state-of-the-art deep learning technologies and the underlying technologies required to use deep learning in cloud computing. We pay particular attention to how the latest deep learning techniques can be applied in cloud computing to obtain better results, how networks can be optimized to avoid wasting resources, and the data communication and security methods that are an important concern when using cloud computing systems. The main topics include:

  • Ensemble deep learning networks for solving challenging problems;
  • AI technology for cloud computing;
  • Cost-effective deep learning architecture;
  • Hyperparameter tuning on large-scale deep learning architecture;
  • The application of state-of-the-art deep learning technology;
  • Large data transmission/reception baseline suitable for deep learning using cloud computing;
  • Advanced methods for security systems during data transmission and reception;
  • Comprehensive deep learning technology based on edge computing;
  • Deep learning network optimization methods for cost-effective cloud computing.

Prof. Dr. Seungmin Rho
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research

15 pages, 4704 KiB  
Article
Communication Optimization Schemes for Accelerating Distributed Deep Learning Systems
by Jaehwan Lee, Hyeonseong Choi, Hyeonwoo Jeong, Baekhyeon Noh and Ji Sun Shin
Appl. Sci. 2020, 10(24), 8846; https://doi.org/10.3390/app10248846 - 10 Dec 2020
Viewed by 1666
Abstract
In a distributed deep learning system, a parameter server and workers must communicate to exchange gradients and parameters, and the communication cost increases as the number of workers increases. This paper presents a communication data optimization scheme to mitigate the decrease in throughput due to communication performance bottlenecks in distributed deep learning. To optimize communication, we propose two methods. The first is a layer dropping scheme to reduce communication data. The layer dropping scheme we propose compares the representative values of each hidden layer with a threshold value. Furthermore, to guarantee the training accuracy, we store the gradients that are not transmitted to the parameter server in the worker’s local cache. When the value of gradients stored in the worker’s local cache is greater than the threshold, the gradients stored in the worker’s local cache are transmitted to the parameter server. The second is an efficient threshold selection method. Our threshold selection method computes the threshold by replacing the gradients with the L1 norm of each hidden layer. Our data optimization scheme reduces the communication time by about 81% and the total training time by about 70% in a 56 Gbit network environment.
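The layer-dropping and threshold-selection ideas described in the abstract can be illustrated in a few lines of code. The following is a minimal sketch, not the authors' implementation: the use of a size-normalized L1 norm as the per-layer representative value, the quantile-based cutoff, and names such as `select_layers_to_send` and `keep_fraction` are assumptions made here for illustration.

```python
import numpy as np

def l1_threshold(gradients, keep_fraction=0.5):
    """Derive a cutoff from each layer's size-normalized L1 norm (an assumption
    about the paper's method); a quantile keeps roughly `keep_fraction` of the
    layers above the threshold."""
    norms = [np.abs(g).sum() / g.size for g in gradients.values()]
    return float(np.quantile(norms, 1.0 - keep_fraction))

def select_layers_to_send(gradients, local_cache, threshold):
    """Return accumulated gradients whose representative value exceeds the
    threshold; skipped layers stay in the worker's local cache so that no
    update is lost, only delayed."""
    to_send = {}
    for name, grad in gradients.items():
        # Accumulate this step's gradient into the worker's local cache.
        cached = local_cache.get(name)
        local_cache[name] = grad if cached is None else cached + grad
        # Representative value per layer: size-normalized L1 norm (assumed).
        value = np.abs(local_cache[name]).sum() / local_cache[name].size
        if value > threshold:
            # Large enough to matter: transmit and clear the cached gradient.
            to_send[name] = local_cache.pop(name)
    return to_send

# Example: one layer with large gradients is sent, one is cached for later.
grads = {"layer1": np.full((4, 4), 0.5), "layer2": np.full((4, 4), 0.01)}
cache = {}
sent = select_layers_to_send(grads, cache, l1_threshold(grads))
print(sorted(sent))   # ['layer1'] goes to the parameter server this step
print(sorted(cache))  # ['layer2'] waits until its cached gradients accumulate
```

Accumulating skipped gradients rather than discarding them is what lets such a scheme trade communication for accuracy: every update eventually reaches the parameter server once its cached magnitude crosses the threshold.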

17 pages, 1239 KiB  
Article
Data-Augmented Hybrid Named Entity Recognition for Disaster Management by Transfer Learning
by Hung-Kai Kung, Chun-Mo Hsieh, Cheng-Yu Ho, Yun-Cheng Tsai, Hao-Yung Chan and Meng-Han Tsai
Appl. Sci. 2020, 10(12), 4234; https://doi.org/10.3390/app10124234 - 20 Jun 2020
Cited by 10 | Viewed by 3057
Abstract
This research aims to build a Mandarin named entity recognition (NER) module using transfer learning to facilitate damage information gathering and analysis in disaster management. The hybrid NER approach proposed in this research includes three modules: (1) data augmentation, which constructs a concise data set for disaster management; (2) reference model, which utilizes the bidirectional long short-term memory–conditional random field framework to implement NER; and (3) the augmented model built by integrating the first two modules via cross-domain transfer with disparate label sets. Through the combination of established rules and learned sentence patterns, the hybrid approach performs well in NER tasks for disaster management and recognizes unfamiliar words successfully. This research applied the proposed NER module to disaster management. In the application, we favorably handled the NER tasks of our related work and achieved our desired outcomes. Through proper transfer, the results of this work can be extended to other fields and consequently bring valuable advantages in diverse applications.
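The reference model named in the abstract, a bidirectional LSTM followed by a conditional random field, can be sketched as follows. This is a minimal illustration assuming PyTorch and the third-party pytorch-crf package; the layer sizes, tag count, and the class name `BiLSTMCRF` are choices made here, not details from the paper.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party "pytorch-crf" package (assumed here)

class BiLSTMCRF(nn.Module):
    """Minimal BiLSTM-CRF sequence tagger in the spirit of the reference model."""

    def __init__(self, vocab_size, num_tags, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Bidirectional encoder: hidden_dim is split across the two directions.
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emission = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, tokens, tags=None, mask=None):
        feats, _ = self.lstm(self.embedding(tokens))
        scores = self.emission(feats)
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence.
            return -self.crf(scores, tags, mask=mask)
        # Inference: Viterbi decoding of the most likely tag sequence.
        return self.crf.decode(scores, mask=mask)

# Example: a batch of two padded sentences over a 1000-token vocabulary,
# with 9 tags (e.g., B/I for four entity types plus O).
model = BiLSTMCRF(vocab_size=1000, num_tags=9)
tokens = torch.randint(1, 1000, (2, 12))
mask = torch.ones(2, 12, dtype=torch.bool)
predicted_tags = model(tokens, mask=mask)  # list of tag-index sequences
```

For the cross-domain transfer with disparate label sets, one plausible reading is to reuse the embedding and LSTM weights learned on the source domain and reinitialize only the emission and CRF layers for the disaster-management tag set; the paper's exact transfer procedure may differ.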
