Artificial Neural Networks in Pattern Recognition

A special issue of Computers (ISSN 2073-431X).

Deadline for manuscript submissions: closed (1 November 2020) | Viewed by 9871

Special Issue Editors


Dr. Frank-Peter Schilling
Guest Editor
School of Engineering, Zurich University of Applied Sciences ZHAW, 8400 Winterthur, Switzerland
Interests: artificial intelligence; deep learning; pattern recognition; reinforcement learning

Prof. Dr. Thilo Stadelmann
Guest Editor
School of Engineering, Zurich University of Applied Sciences ZHAW, 8400 Winterthur, Switzerland
Interests: artificial intelligence; deep learning; pattern recognition; reinforcement learning; speaker recognition

Special Issue Information

Dear Colleagues,

The 9th IAPR TC3 Workshop on Artificial Neural Networks in Pattern Recognition, ANNPR 2020, will be held from September 2 to September 4, 2020, at Zurich University of Applied Sciences ZHAW in Winterthur, Switzerland. The workshop is a major forum for international researchers and practitioners working in all areas of neural network- and machine learning-based pattern recognition to present and discuss the latest research, results, and ideas in these areas. ANNPR is the biennial workshop organized by the Technical Committee 3 (TC3) on Neural Networks & Computational Intelligence of the International Association for Pattern Recognition (IAPR). For more information about the workshop, please visit https://annpr2020.ch/.

Authors of selected papers presented at the workshop will be invited to submit extended versions to this Special Issue of Computers after the conference. Publication will be free of charge for these papers if they are accepted after peer review. Submitted papers should be extended to the length of regular research or review articles and contain at least 50% new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together in this Special Issue. There is no page limit for this journal.

We are also inviting original research work covering novel theories, innovative methods, and meaningful applications that can potentially lead to significant advances in artificial neural networks in pattern recognition. The main topics include, but are not limited to, the following:

Methodological issues:

  • Supervised, semi-supervised, unsupervised, and reinforcement learning;
  • Deep learning and deep reinforcement learning;
  • Feed-forward, recurrent, and convolutional neural networks;
  • Generative models;
  • Interpretability and explainability of neural networks;
  • Robustness and generalization of neural networks;
  • Meta-learning and automated machine learning (AutoML).

Applications to pattern recognition:

  • Image classification and segmentation;
  • Object detection;
  • Document analysis, e.g., handwriting recognition;
  • Sensor fusion and multi-modal processing;
  • Biometrics, including speech and speaker recognition and segmentation;
  • Data, text, and web mining;
  • Bioinformatics and medical applications;
  • Industrial applications, e.g., quality control and predictive maintenance.

Dr. Frank-Peter Schilling
Prof. Dr. Thilo Stadelmann
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)

Research

27 pages, 374 KiB  
Article
Minimal Complexity Support Vector Machines for Pattern Classification
by Shigeo Abe
Computers 2020, 9(4), 88; https://doi.org/10.3390/computers9040088 - 04 Nov 2020
Cited by 4 | Viewed by 2344
Abstract
Minimal complexity machines (MCMs) minimize the VC (Vapnik-Chervonenkis) dimension to obtain high generalization abilities. However, because the regularization term is not included in the objective function, the solution is not unique. In this paper, to solve this problem, we discuss fusing the MCM and the standard support vector machine (L1 SVM). This is realized by minimizing the maximum margin in the L1 SVM. We call this machine the minimal complexity L1 SVM (ML1 SVM). The associated dual problem has twice the number of dual variables, and the ML1 SVM is trained by alternately optimizing the dual variables associated with the regularization term and those associated with the VC dimension. We compare the ML1 SVM with other types of SVMs, including the L1 SVM, on several benchmark datasets and show that the ML1 SVM performs better than, or comparably to, the L1 SVM.
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)
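
As a point of reference for this paper, the sketch below trains the standard soft-margin (L1) SVM baseline that the ML1 SVM extends, using scikit-learn; the dataset, kernel, and hyperparameters are illustrative assumptions, and the alternating dual optimization proposed in the paper is not implemented here.

```python
# Hedged sketch: trains the standard soft-margin (L1) SVM baseline that the
# ML1 SVM extends. The paper's alternating optimization of the two sets of
# dual variables is not implemented here.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)  # illustrative benchmark dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)  # SVMs are sensitive to feature scale
clf = SVC(kernel="rbf", C=1.0)          # C weighs the L1 slack penalty
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```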

23 pages, 3246 KiB  
Article
Structured (De)composable Representations Trained with Neural Networks
by Graham Spinks and Marie-Francine Moens
Computers 2020, 9(4), 79; https://doi.org/10.3390/computers9040079 - 02 Oct 2020
Cited by 1 | Viewed by 2297
Abstract
This paper proposes a novel technique for representing templates and instances of concept classes. A template representation is a generic representation that captures the characteristics of an entire class. The proposed technique uses end-to-end deep learning to learn structured and composable representations from input images and discrete labels. The obtained representations are based on distance estimates between the distributions given by the class label and those given by contextual information, which are modeled as environments. We prove that the representations have a clear structure that allows them to be decomposed into factors representing classes and environments. We evaluate the technique on classification and retrieval tasks involving different modalities (visual and language data). In various experiments, we show how the representations can be compressed and how different hyperparameters impact performance.
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)
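
To illustrate the decomposition idea in this paper, the toy sketch below models an instance representation as the sum of a class (template) factor and an environment factor, so averaging instances estimates the template and subtracting it recovers the environment components; the additive structure and all vectors are illustrative assumptions, not the authors' learned representations.

```python
# Toy sketch of a (de)composable representation: instance = class factor +
# environment factor. The paper learns such representations end-to-end; the
# additive structure here is only an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
dim, n_instances = 16, 100

class_factor = rng.normal(size=dim)  # "template" for one class
env_factors = rng.normal(scale=0.3, size=(n_instances, dim))  # environments
instances = class_factor + env_factors  # composed instance representations

template_est = instances.mean(axis=0)  # estimate the class template
env_est = instances - template_est     # decompose: recover the environments

print("template error:", np.linalg.norm(template_est - class_factor))
print("mean environment error:", np.linalg.norm(env_est - env_factors) / n_instances)
```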

30 pages, 5703 KiB  
Article
Advanced Convolutional Neural Network-Based Hybrid Acoustic Models for Low-Resource Speech Recognition
by Tessfu Geteye Fantaye, Junqing Yu and Tulu Tilahun Hailu
Computers 2020, 9(2), 36; https://doi.org/10.3390/computers9020036 - 02 May 2020
Cited by 7 | Viewed by 4255
Abstract
Deep neural networks (DNNs) have achieved great success in acoustic modeling for speech recognition tasks. Among these networks, the convolutional neural network (CNN) is effective at representing the local properties of speech formants. However, CNNs are not suitable for modeling the long-term context dependencies between speech signal frames. Recurrent neural networks (RNNs) have recently shown a strong ability to model such long-term dependencies, but they perform poorly on low-resource speech recognition tasks, sometimes even worse than conventional feed-forward neural networks, and they often overfit severely on the training corpus. This paper presents our work on combining CNNs with conventional RNNs using gate, highway, and residual networks to mitigate these problems. The optimal neural network structures and training strategies for the proposed models are explored. Experiments were conducted on the Amharic and Chaha datasets, as well as on the limited language packages (10 h) of the benchmark datasets released under the Intelligence Advanced Research Projects Activity (IARPA) Babel Program. The proposed models achieve relative performance improvements of 0.1–42.79% over their corresponding feed-forward DNN, CNN, bidirectional RNN (BRNN), and bidirectional gated recurrent unit (BGRU) baselines across six language collections. These approaches are promising candidates for building better-performing acoustic models for low-resource speech recognition tasks.
(This article belongs to the Special Issue Artificial Neural Networks in Pattern Recognition)
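
The sketch below shows one way a CNN-RNN hybrid acoustic model of the kind studied in this paper can be assembled in PyTorch: convolutional layers capture local spectro-temporal patterns, a bidirectional GRU models long-term context, and a residual connection eases optimization. The layer sizes, residual placement, and output dimension are illustrative assumptions, not the authors' exact architectures.

```python
# Hedged sketch of a CNN-RNN hybrid acoustic model: CNN for local patterns,
# bidirectional GRU for long-term context, residual path around the GRU.
import torch
import torch.nn as nn

class HybridAcousticModel(nn.Module):
    def __init__(self, n_mels=40, hidden=256, n_targets=2000):
        super().__init__()
        self.conv = nn.Sequential(  # local spectro-temporal feature extraction
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(32 * n_mels, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.res = nn.Linear(hidden, 2 * hidden)  # residual path around the GRU
        self.out = nn.Linear(2 * hidden, n_targets)  # e.g., senone posteriors

    def forward(self, x):  # x: (batch, time, n_mels) log-mel features
        h = self.conv(x.unsqueeze(1))         # (batch, 32, time, n_mels)
        h = h.permute(0, 2, 1, 3).flatten(2)  # (batch, time, 32 * n_mels)
        h = self.proj(h)
        r, _ = self.gru(h)
        return self.out(r + self.res(h))      # residual connection

model = HybridAcousticModel()
logits = model(torch.randn(4, 100, 40))  # 4 utterances, 100 frames each
print(logits.shape)                      # torch.Size([4, 100, 2000])
```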
