Advanced Artificial Neural Networks

A special issue of Algorithms (ISSN 1999-4893).

Deadline for manuscript submissions: closed (31 May 2018) | Viewed by 50089

Special Issue Editors

Department of Industrial Management, Vanung University, No. 1 Van Nung Rd., Chung-Li, Tao-Yuan, Taiwan
Interests: ergonomics; human–machine system; industrial safety and health; risk assessment; facility planning
Department of Industrial Engineering and Management, Chaoyang University of Technology, Taichung 413310, Taiwan
Interests: computer vision; optical inspection; quality management; automated industrial inspection

Special Issue Information

Dear Colleagues,

Artificial neural networks (ANNs) have been extensively applied to a wide range of disciplines, such as system identification and control, decision making, pattern recognition, medical diagnosis, finance, data mining, visualization, and others. With advances in computing and networking technologies, more complicated forms of ANNs are expected to emerge, requiring the design of advanced learning algorithms. This Special Issue is intended to provide technical details of the construction and training of advanced ANNs. These details will hold great interest for researchers in computer science, artificial intelligence, soft computing, and information management, as well as for practicing engineers. The Special Issue features a balance between state-of-the-art research and practical applications, and it provides a forum for researchers and practitioners to review and disseminate quality research on advanced ANNs and the critical issues for their further development.

Prof. Dr. Tin-Chih Toly Chen
Prof. Dr. Cheng-Li Liu
Prof. Dr. Hong-Dar Lin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Algorithms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial neural network
  • algorithm
  • learning

Published Papers (7 papers)


Editorial


2 pages, 148 KiB  
Editorial
Advanced Artificial Neural Networks
by Tin-Chih Toly Chen, Cheng-Li Liu and Hong-Dar Lin
Algorithms 2018, 11(7), 102; https://0-doi-org.brum.beds.ac.uk/10.3390/a11070102 - 10 Jul 2018
Cited by 16 | Viewed by 3746
Abstract
Artificial neural networks (ANNs) have been extensively applied to a wide range of disciplines, such as system identification and control, decision making, pattern recognition, medical diagnosis, finance, data mining, visualization, and others. With advances in computing and networking technologies, more complicated forms of ANNs are expected to emerge, requiring the design of advanced learning algorithms. This Special Issue is intended to provide technical details of the construction and training of advanced ANNs. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)

Research


14 pages, 18203 KiB  
Article
BELMKN: Bayesian Extreme Learning Machines Kohonen Network
by J. Senthilnath, Sumanth Simha C, Nagaraj G, Meenakumari Thapa and Indiramma M
Algorithms 2018, 11(5), 56; https://0-doi-org.brum.beds.ac.uk/10.3390/a11050056 - 27 Apr 2018
Cited by 10 | Viewed by 8408
Abstract
This paper proposes the Bayesian Extreme Learning Machine Kohonen Network (BELMKN) framework to solve the clustering problem. The BELMKN framework uses three levels in processing nonlinearly separable datasets to obtain efficient clustering in terms of accuracy. In the first level, the Extreme Learning Machine (ELM)-based feature learning approach captures the nonlinearity in the data distribution by mapping it onto a d-dimensional space. In the second level, the ELM-extracted features are used as input to the Bayesian Information Criterion (BIC) to predict the number of clusters, termed the cluster prediction. In the final level, the feature-extracted data along with the cluster prediction is passed to the Kohonen Network to obtain improved clustering accuracy. The main advantage of the proposed method is that it does not require a priori identifiers or class labels for the data, which are difficult to obtain for most real-world datasets. The BELMKN framework is applied to 3 synthetic datasets and 10 benchmark datasets from the UCI machine learning repository and compared with state-of-the-art clustering methods. The experimental results show that the proposed BELMKN-based clustering outperforms the other clustering algorithms for the majority of the datasets. Hence, the BELMKN framework can be used to improve the clustering accuracy of nonlinearly separable datasets. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
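To make the three-level pipeline in the abstract above concrete, the following is a minimal sketch under stated assumptions, not the authors' implementation: an ELM-style random feature mapping, BIC over Gaussian mixtures to select the cluster count, and a simplified Kohonen-style competitive layer without a neighbourhood function. The Iris data, the feature dimension d = 10, and the candidate range for k are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(load_iris().data)

# Level 1: ELM-style feature learning -- fixed random hidden weights, sigmoid activation.
d = 10                                       # assumed dimensionality of the feature space
W = rng.normal(size=(X.shape[1], d))
b = rng.normal(size=d)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # nonlinear feature representation

# Level 2: pick the number of clusters k by minimising BIC over Gaussian mixtures.
bic = {k: GaussianMixture(k, random_state=0).fit(H).bic(H) for k in range(2, 8)}
k = min(bic, key=bic.get)

# Level 3: simplified Kohonen-style competitive layer with k prototype units.
proto = H[rng.choice(len(H), size=k, replace=False)].copy()
for epoch in range(50):
    lr = 0.5 * (1.0 - epoch / 50)            # decaying learning rate
    for h in H[rng.permutation(len(H))]:
        win = np.argmin(np.linalg.norm(proto - h, axis=1))   # best-matching unit
        proto[win] += lr * (h - proto[win])                  # move the winner toward the sample
labels = np.argmin(np.linalg.norm(H[:, None, :] - proto, axis=2), axis=1)
print(f"BIC-selected k = {k}, cluster sizes = {np.bincount(labels)}")
```

A full Kohonen network would also shrink a neighbourhood radius over time; with a small one-dimensional map of k units this reduces to competitive learning, which is enough to illustrate the data flow between the three levels.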

10 pages, 11548 KiB  
Article
Thermal Environment Prediction for Metro Stations Based on an RVFL Neural Network
by Qing Tian, Weihang Zhao, Yun Wei and Liping Pang
Algorithms 2018, 11(4), 49; https://0-doi-org.brum.beds.ac.uk/10.3390/a11040049 - 17 Apr 2018
Cited by 4 | Viewed by 5024
Abstract
As the carrying capacity of China’s metro systems has improved, people in big cities are increasingly inclined to travel by metro, and the passenger load is enormous during the morning and evening rush hours. Coupled with the increase in the number of summer tourists, the thermal environmental quality in early metro stations deteriorates badly. Therefore, it is necessary to analyze the factors that affect the thermal environment in metro stations and to establish a thermal environment change model. This will help to support the prediction and analysis of the thermal environment in such limited underground spaces. In order to achieve relatively accurate and rapid on-line modeling, this paper proposes a thermal environment modeling method based on a Random Vector Functional Link Neural Network (RVFLNN). This modeling method has the advantages of fast modeling speed and relatively accurate prediction results. Once the preprocessed data are input into the RVFLNN for training, the metro station thermal environment model is quickly established. The study results show that the thermal model based on the RVFLNN method can effectively predict the temperature inside the metro station. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
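The fast on-line modeling highlighted in the abstract above comes from the RVFL construction itself: hidden weights stay random and fixed, so only the output weights need to be solved, in closed form. Below is a minimal sketch of that idea under stated assumptions; the synthetic inputs standing in for station measurements (e.g. outdoor temperature, passenger flow, train frequency) and all sizes are illustrative, not the paper's data or settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def rvfl_fit(X, y, n_hidden=50, reg=1e-3):
    """Fit an RVFL regressor: random hidden layer plus direct input-output links."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                   # random, untrained hidden features
    D = np.hstack([X, H])                    # direct links concatenated with hidden features
    beta = np.linalg.solve(D.T @ D + reg * np.eye(D.shape[1]), D.T @ y)
    return W, b, beta                        # only beta is "trained" (ridge solution)

def rvfl_predict(X, W, b, beta):
    return np.hstack([X, np.tanh(X @ W + b)]) @ beta

# Synthetic stand-in: three normalised inputs -> platform air temperature in degrees C.
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = 20.0 + 8.0 * X[:, 0] + 3.0 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.2, 500)
W, b, beta = rvfl_fit(X[:400], y[:400])
pred = rvfl_predict(X[400:], W, b, beta)
print("test RMSE:", np.sqrt(np.mean((pred - y[400:]) ** 2)))
```

Because training reduces to a single regularised least-squares solve, the model can be rebuilt quickly as new preprocessed station data arrive, which is the property the abstract emphasises.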

16 pages, 1837 KiB  
Article
An Online Energy Management Control for Hybrid Electric Vehicles Based on Neuro-Dynamic Programming
by Feiyan Qin, Weimin Li, Yue Hu and Guoqing Xu
Algorithms 2018, 11(3), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/a11030033 - 19 Mar 2018
Cited by 9 | Viewed by 5732
Abstract
Hybrid electric vehicles (HEVs) are a compromise between traditional vehicles and pure electric vehicles and can be part of the solution to the energy shortage problem. Energy management strategies (EMSs) strongly affect how energy is utilized and, hence, an HEV’s fuel economy. In this research, we employed a neuro-dynamic programming (NDP) method to simultaneously optimize fuel economy and battery state of charge (SOC). In this NDP method, the critic network is a multi-resolution wavelet neural network (MRWNN) based on the Meyer wavelet function, and the action network is a conventional wavelet neural network (CWNN) based on the Morlet function. The weights and parameters of both networks are obtained by a backpropagation-type algorithm. The NDP-based EMS has been applied to a parallel HEV and compared with a previously reported NDP EMS and a stochastic dynamic programming-based method. Simulation results under ADVISOR2002 show that the proposed NDP approach achieves better performance than both methods. This indicates that the proposed NDP EMS, as well as the CWNN and MRWNN, are effective in approximating a nonlinear system. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
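As a small illustration of the wavelet-network building block named in the abstract above, here is a forward pass through a single-hidden-layer wavelet network using the Morlet mother wavelet (the activation of the action network). The input choice (SOC, vehicle speed, power demand), the layer sizes, and the random scales and translations are assumptions for this sketch, and the critic network's multi-resolution Meyer wavelet is not reproduced here.

```python
import numpy as np

def morlet(z):
    # Morlet mother wavelet: cos(5z) * exp(-z^2 / 2)
    return np.cos(5.0 * z) * np.exp(-0.5 * z ** 2)

def wnn_forward(x, a, b, w_out):
    """Single-hidden-layer wavelet network.
    x: (n_in,) input; a, b: (n_hidden, n_in) scales/translations; w_out: (n_hidden,) output weights."""
    z = (x - b) / a                        # dilate and translate each input per wavelon
    hidden = morlet(z).prod(axis=1)        # multidimensional wavelon = product over inputs
    return hidden @ w_out                  # linear output layer (e.g. a control signal)

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 8                      # e.g. SOC, vehicle speed, power demand (assumed)
a = rng.uniform(0.5, 2.0, (n_hidden, n_in))
b = rng.uniform(-1.0, 1.0, (n_hidden, n_in))
w_out = rng.normal(size=n_hidden)
print(wnn_forward(np.array([0.6, 0.3, -0.2]), a, b, w_out))
```

In an NDP scheme of this kind, the scales, translations, and output weights would all be updated by a backpropagation-type rule driven by the critic's evaluation signal.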

15 pages, 3376 KiB  
Article
Modified Convolutional Neural Network Based on Dropout and the Stochastic Gradient Descent Optimizer
by Jing Yang and Guanci Yang
Algorithms 2018, 11(3), 28; https://0-doi-org.brum.beds.ac.uk/10.3390/a11030028 - 07 Mar 2018
Cited by 96 | Viewed by 14409
Abstract
This study proposes a modified convolutional neural network (CNN) algorithm based on dropout and the stochastic gradient descent (SGD) optimizer (MCNN-DS), after analyzing the problems of CNNs in extracting convolution features, to improve the feature recognition rate and reduce the time-cost of CNNs. The MCNN-DS has a quadratic CNN structure and adopts the rectified linear unit as the activation function to avoid the gradient problem and accelerate convergence. To address the overfitting problem, the algorithm uses an SGD optimizer, implemented by inserting a dropout layer into the fully connected and output layers, to minimize cross entropy. This study used the MNIST, HCL2000, and EnglishHand datasets as benchmark data, analyzed the performance of the SGD optimizer under different learning parameters, and found that the proposed algorithm exhibited good recognition performance when the learning rate was set in [0.05, 0.07]. The performances of WCNN, MLP-CNN, SVM-ELM, and MCNN-DS were compared. Statistical results showed the following: (1) for the benchmark MNIST, the MCNN-DS exhibited a high recognition rate of 99.97%, and its time-cost was merely 21.95% that of MLP-CNN and 10.02% that of SVM-ELM; (2) compared with SVM-ELM, the average improvement in the recognition rate of MCNN-DS was 2.35% for the benchmark HCL2000, and the time-cost of MCNN-DS was only 15.41% that of SVM-ELM; (3) for the EnglishHand test set, the lowest recognition rate of the algorithm was 84.93%, the highest recognition rate was 95.29%, and the average recognition rate was 89.77%. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
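The ingredients named in the abstract above — ReLU activations, dropout placed before the fully connected and output layers, and SGD minimizing cross entropy — combine in a few lines. The PyTorch sketch below is an illustration under assumed layer sizes for MNIST-shaped 28x28 input, not the MCNN-DS architecture itself; only the learning rate of 0.05 is taken from the reported [0.05, 0.07] range.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28 -> 14
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14 -> 7
    nn.Flatten(),
    nn.Dropout(0.5),                 # dropout before the fully connected layer
    nn.Linear(32 * 7 * 7, 128), nn.ReLU(),
    nn.Dropout(0.5),                 # and again before the output layer
    nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)   # within the reported range
criterion = nn.CrossEntropyLoss()                          # cross entropy to be minimised

# One illustrative training step on a random MNIST-shaped batch.
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```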

19 pages, 2979 KiB  
Article
Research on Degeneration Model of Neural Network for Deep Groove Ball Bearing Based on Feature Fusion
by Lijun Zhang and Junyu Tao
Algorithms 2018, 11(2), 21; https://0-doi-org.brum.beds.ac.uk/10.3390/a11020021 - 11 Feb 2018
Cited by 5 | Viewed by 5360
Abstract
Aiming at pitting faults of deep groove ball bearings during service, this paper uses vibration signals from five different states of a deep groove ball bearing, extracts the relevant features, and then uses a neural network to model the degradation in order to identify and classify the fault type. By comparing the effects of training samples of different sizes through performance indexes such as accuracy and convergence speed, it is shown that an increase in sample size can improve the performance of the model. Based on the polynomial fitting principle and the Pearson correlation coefficient, fusion features based on the skewness index are proposed, and the performance improvement of the model after incorporating the fusion features is also validated. A comparison of the performance of the support vector machine (SVM) model and the neural network model on this dataset is given. The research shows that neural networks have more potential for complex, high-volume datasets. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
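To illustrate the feature side of such a pipeline, the sketch below computes common time-domain indicators from a vibration window, including the skewness index highlighted in the abstract above, and uses the Pearson correlation coefficient to check how strongly each feature tracks a degradation level. The signal and the degradation index are synthetic, and the paper's polynomial-fitting step and its actual fused feature are not reproduced.

```python
import numpy as np
from scipy.stats import skew, kurtosis, pearsonr

def time_domain_features(window):
    """Basic time-domain indicators for one vibration window."""
    return np.array([
        np.sqrt(np.mean(window ** 2)),   # RMS
        np.max(np.abs(window)),          # peak value
        skew(window),                    # skewness index
        kurtosis(window),                # kurtosis
    ])

rng = np.random.default_rng(3)
n_windows = 200
severity = np.linspace(0.0, 1.0, n_windows)           # synthetic degradation level
feats = np.array([
    time_domain_features(rng.normal(0, 1 + s, 2048) + s * rng.exponential(1, 2048))
    for s in severity                                  # impulsiveness grows with severity
])
for name, col in zip(["RMS", "peak", "skewness", "kurtosis"], feats.T):
    r, _ = pearsonr(col, severity)
    print(f"{name:9s} Pearson r vs. severity: {r:+.2f}")
```

Features that correlate strongly with the degradation level are the natural candidates for fusion and for feeding the downstream neural network classifier.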

13 pages, 3202 KiB  
Article
Application of a Hybrid Model Based on a Convolutional Auto-Encoder and Convolutional Neural Network in Object-Oriented Remote Sensing Classification
by Wei Cui, Qi Zhou and Zhendong Zheng
Algorithms 2018, 11(1), 9; https://0-doi-org.brum.beds.ac.uk/10.3390/a11010009 - 16 Jan 2018
Cited by 21 | Viewed by 6119
Abstract
Variation in the format and classification requirements for remote sensing data makes establishing a standard remote sensing sample dataset difficult. As a result, few remote sensing deep neural network models have been widely accepted. We propose a hybrid deep neural network model based on a convolutional auto-encoder and a complementary convolutional neural network to solve this problem. The convolutional auto-encoder supports feature extraction and data dimension reduction of remote sensing data. The extracted features are input into the convolutional neural network and subsequently classified. Experimental results show that in the proposed model, the classification accuracy increases from 0.916 to 0.944, compared to a traditional convolutional neural network model; furthermore, the number of training runs is reduced from 40,000 to 22,000, and the number of labelled samples can be reduced by more than half, all while ensuring a classification accuracy of no less than 0.9, which suggests the effectiveness and feasibility of the proposed model. Full article
(This article belongs to the Special Issue Advanced Artificial Neural Networks)
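As a minimal sketch of the two-stage idea described above, the PyTorch snippet below wires a small convolutional auto-encoder to a classifier that runs on the encoder's output. The 32x32 patch size, the four spectral bands, the six classes, and all layer widths are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(                                          # feature extraction + dimension reduction
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
    nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 -> 8
)
decoder = nn.Sequential(                                          # reconstruction branch for auto-encoder training
    nn.Upsample(scale_factor=2), nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2), nn.Conv2d(16, 4, 3, padding=1),
)
classifier = nn.Sequential(                                       # CNN that classifies the encoded features
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 8 * 8, 6),                                     # e.g. six land-cover classes (assumed)
)

x = torch.randn(2, 4, 32, 32)                                     # two 4-band image patches
recon_loss = nn.functional.mse_loss(decoder(encoder(x)), x)       # stage 1: reconstruction objective
logits = classifier(encoder(x))                                   # stage 2: classify encoded features
print(recon_loss.item(), logits.shape)
```

A plausible training schedule for such a model is to fit the auto-encoder on the reconstruction loss first and then train the classifier on the encoded features, which is consistent with the reduced need for labelled samples reported above.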
