Computer Vision and Pattern Recognition

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (30 September 2017) | Viewed by 57444

Special Issue Editor


Guest Editor: Prof. Dr. Cosimo Distante
Institute of Applied Sciences and Intelligent Systems “ScienceApp”, Consiglio Nazionale delle Ricerche, c/o Dhitech Campus Universitario Ecotekne, Via Monteroni s/n, 73100 Lecce, Italy
Interests: computer vision; pattern recognition; video surveillance; object tracking; deep learning; audience measurements; visual interaction; human–robot interaction

Special Issue Information

Dear Colleagues,

In recent years, computer vision and pattern recognition have received a great deal of attention across a wide range of topics, with the goal of extracting structures or answers from video and image data, both spatially and temporally, by fitting mathematical models that describe the relevant patterns to be localized and recognized.

The intent of this Special Issue is to collect the experiences of leading scientists, but also to serve as an assessment tool for people who are new to the world of computer vision and pattern recognition.

This Special Issue is intended to cover the following topics, but is not limited to them:

  • Deep learning techniques for object detection and classification
  • Human behavior analysis
  • Video surveillance and homeland security technologies
  • Medical imaging
  • Nondestructive testing
  • Visual question answering
  • Human/computer and human/robot interaction
  • Robot vision
  • Assistive computer vision technologies

Prof. Dr. Cosimo Distante
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Shape
  • Motion
  • Range
  • Matching and recognition
  • Feature extraction
  • Vision systems

Published Papers (6 papers)


Research

25 pages, 1495 KiB  
Article
An Overview of Deep Learning Based Methods for Unsupervised and Semi-Supervised Anomaly Detection in Videos
by B. Ravi Kiran, Dilip Mathew Thomas and Ranjith Parakkal
J. Imaging 2018, 4(2), 36; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4020036 - 07 Feb 2018
Cited by 320 | Viewed by 22984
Abstract
Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly [...] Read more.
Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

31 pages, 810 KiB  
Article
Partition and Inclusion Hierarchies of Images: A Comprehensive Survey
by Petra Bosilj, Ewa Kijak and Sébastien Lefèvre
J. Imaging 2018, 4(2), 33; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging4020033 - 01 Feb 2018
Cited by 28 | Viewed by 7470
Abstract
The theory of hierarchical image representations has been well-established in Mathematical Morphology, and provides a suitable framework to handle images through objects or regions taking into account their scale. Such approaches have increased in popularity and been favourably compared to treating individual image [...] Read more.
The theory of hierarchical image representations has been well-established in Mathematical Morphology, and provides a suitable framework to handle images through objects or regions taking into account their scale. Such approaches have increased in popularity and been favourably compared to treating individual image elements in various domains and applications. This survey paper presents the development of hierarchical image representations over the last 20 years using the framework of component trees. We introduce two classes of component trees, partitioning and inclusion trees, and describe their general characteristics and differences. Examples of hierarchies for each of the classes are compared, with the resulting study aiming to serve as a guideline when choosing a hierarchical image representation for any application and image domain. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

1554 KiB  
Article
Baseline Fusion for Image and Pattern Recognition—What Not to Do (and How to Do Better)
by Ognjen Arandjelović
J. Imaging 2017, 3(4), 44; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3040044 - 11 Oct 2017
Cited by 3 | Viewed by 4150
Abstract
The ever-increasing demand for a reliable inference capable of handling unpredictable challenges of practical application in the real world has made research on information fusion of major importance; indeed, this challenge is pervasive in a whole range of image understanding tasks. In the [...] Read more.
The ever-increasing demand for a reliable inference capable of handling unpredictable challenges of practical application in the real world has made research on information fusion of major importance; indeed, this challenge is pervasive in a whole range of image understanding tasks. In the development of the most common type—score-level fusion algorithms—it is virtually universally desirable to have as a reference starting point a simple and universally sound baseline benchmark which newly developed approaches can be compared to. One of the most pervasively used methods is that of weighted linear fusion. It has cemented itself as the default off-the-shelf baseline owing to its simplicity of implementation, interpretability, and surprisingly competitive performance across the widest range of application domains and information source types. In this paper I argue that despite this track record, weighted linear fusion is not a good baseline on the grounds that there is an equally simple and interpretable alternative—namely quadratic mean-based fusion—which is theoretically more principled and which is more successful in practice. I argue the former from first principles and demonstrate the latter using a series of experiments on a diverse set of fusion problems: classification using synthetically generated data, computer vision-based object recognition, arrhythmia detection, and fatality prediction in motor vehicle accidents. On all of the aforementioned problems and in all instances, the proposed fusion approach exhibits superior performance over linear fusion, often increasing class separation by several orders of magnitude. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
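The two baselines under comparison are simple enough to state in a few lines. The sketch below contrasts weighted linear (arithmetic-mean) fusion with quadratic-mean (RMS) fusion of normalized scores; the function names and example scores are illustrative, not taken from the paper.

```python
import numpy as np

def linear_fusion(scores, weights):
    """Weighted arithmetic mean of normalized classifier scores."""
    s, w = np.asarray(scores, float), np.asarray(weights, float)
    return float(np.sum(w * s) / np.sum(w))

def quadratic_mean_fusion(scores, weights):
    """Weighted quadratic (root-mean-square) fusion: rewards a single
    highly confident source more than the arithmetic mean does."""
    s, w = np.asarray(scores, float), np.asarray(weights, float)
    return float(np.sqrt(np.sum(w * s**2) / np.sum(w)))

scores, weights = [0.9, 0.2, 0.8], [1.0, 1.0, 1.0]
print(linear_fusion(scores, weights))          # ≈ 0.633
print(quadratic_mean_fusion(scores, weights))  # ≈ 0.705
```

By the power-mean inequality, the quadratic mean always dominates the arithmetic mean, so the fused score is pulled toward confident sources—the behavior the paper argues is preferable.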

534 KiB  
Article
Enhancing Face Identification Using Local Binary Patterns and K-Nearest Neighbors
by Idelette Laure Kambi Beli and Chunsheng Guo
J. Imaging 2017, 3(3), 37; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3030037 - 05 Sep 2017
Cited by 46 | Viewed by 8462
Abstract
The human face plays an important role in our social interaction, conveying people’s identity. Using the human face as a key to security, biometric passwords technology has received significant attention in the past several years due to its potential for a wide variety [...] Read more.
The human face plays an important role in our social interaction, conveying people’s identity. Using the human face as a key to security, biometric password technology has received significant attention in the past several years due to its potential for a wide variety of applications. Faces can show many variations in appearance (aging, facial expression, illumination, inaccurate alignment and pose) that continue to hamper identity recognition. The purpose of our research work is to provide an approach that helps resolve face identification issues under large variations of parameters such as pose, illumination, and expression. For provable outcomes, we combined two algorithms: (a) the robust local binary pattern (LBP), used for facial feature extraction; (b) the k-nearest neighbor (K-NN) algorithm for image classification. Our experiment has been conducted on the CMU PIE (Carnegie Mellon University Pose, Illumination, and Expression) face database and the LFW (Labeled Faces in the Wild) dataset. The proposed identification system shows high performance and also provides reliable face similarity measures based on the extracted features. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
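The LBP-plus-K-NN pipeline the authors describe can be sketched with plain NumPy. This is an illustrative simplification—single-scale 8-neighbour LBP codes, a whole-image histogram, and Euclidean distance with k = 1—rather than the paper's implementation, which typically uses block-wise histograms and larger galleries.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for every interior pixel."""
    c = gray[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        neigh = gray[1 + dy:gray.shape[0] - 1 + dy,
                     1 + dx:gray.shape[1] - 1 + dx]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code

def lbp_histogram(gray):
    """Normalized 256-bin histogram of LBP codes: the face descriptor."""
    h, _ = np.histogram(lbp_image(gray), bins=256, range=(0, 256))
    return h / max(h.sum(), 1)

def knn_predict(query_hist, train_hists, train_labels, k=1):
    """Majority vote among the k nearest descriptors (Euclidean distance)."""
    d = np.linalg.norm(train_hists - query_hist, axis=1)
    labels = [train_labels[i] for i in np.argsort(d)[:k]]
    return max(set(labels), key=labels.count)

# Toy gallery: one flat patch and one horizontal gradient patch.
img_a = np.full((8, 8), 100, dtype=np.uint8)
img_b = np.tile(np.arange(8, dtype=np.uint8) * 30, (8, 1))
train = np.stack([lbp_histogram(img_a), lbp_histogram(img_b)])
label = knn_predict(lbp_histogram(np.full((8, 8), 50, dtype=np.uint8)),
                    train, ["flat", "gradient"])  # "flat"
```

Histogram intersection or chi-square distances are common drop-in replacements for the Euclidean distance used here.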

29502 KiB  
Article
Pattern Reconstructability in Fully Parallel Thinning
by Yung-Sheng Chen and Ming-Te Chao
J. Imaging 2017, 3(3), 29; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3030029 - 19 Jul 2017
Cited by 4 | Viewed by 6315
Abstract
It is a challenging topic to perform pattern reconstruction from a unit-width skeleton, which is obtained by a parallel thinning algorithm. The bias skeleton yielded by a fully-parallel thinning algorithm, which usually results from the so-called hidden deletable points, will result in the [...] Read more.
It is a challenging topic to perform pattern reconstruction from a unit-width skeleton, which is obtained by a parallel thinning algorithm. The bias skeleton yielded by a fully-parallel thinning algorithm, which usually results from the so-called hidden deletable points, will result in the difficulty of pattern reconstruction. In order to make a fully-parallel thinning algorithm pattern reconstructable, a newly-defined reconstructable skeletal pixel (RSP) including a thinning flag, iteration count, as well as reconstructable structure is proposed and applied for thinning iteration to obtain a skeleton table representing the resultant thin line. Based on the iteration count and reconstructable structure associated with each skeletal pixel in the skeleton table, the pattern can be reconstructed by means of the dilating and uniting operations. Embedding a conventional fully-parallel thinning algorithm into the proposed approach, the pattern may be over-reconstructed due to the influence of a biased skeleton. A simple process of removing hidden deletable points (RHDP) in the thinning iteration is thus presented to reduce the effect of the biased skeleton. Three well-known fully-parallel thinning algorithms are used for experiments. The performances investigated by the measurement of reconstructability (MR), the number of iterations (NI), as well as the measurement of skeleton deviation (MSD) confirm the feasibility of the proposed pattern reconstruction approach with the assistance of the RHDP process. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)

11384 KiB  
Article
A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens
by Cristina Portalés, Sergio Casas, Inmaculada Coma and Marcos Fernández
J. Imaging 2017, 3(2), 19; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging3020019 - 03 Jun 2017
Cited by 3 | Viewed by 6532
Abstract
The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies [...] Read more.
The geometric calibration of projectors is a demanding task, particularly for the industry of virtual reality simulators. Different methods have been developed during the last decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them being based on planar homographies and some requiring an extended calibration process. The aim of our research work is to design a fast and user-friendly method to provide multi-projector calibration on analytically defined screens, where a sample is shown for a virtual reality Formula 1 simulator that has a cylindrical screen. The proposed method results from the combination of surveying, photogrammetry and image processing approaches, and has been designed by considering the spatial restrictions of virtual reality simulators. The method has been validated from a mathematical point of view, and the complete system—which is currently installed in a shopping mall in Spain—has been tested by different users. Full article
(This article belongs to the Special Issue Computer Vision and Pattern Recognition)
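For readers unfamiliar with the planar-homography family of methods the abstract contrasts itself with, the basic building block is the direct linear transform (DLT): a 3×3 homography recovered from at least four point correspondences between the projector image and a flat surface. The sketch below is a generic textbook DLT, not the calibration method proposed in the paper, which targets analytically defined non-planar screens.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src (homogeneous)
    from >= 4 point correspondences, via the direct linear transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector is the null space of A: last right-singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Four corners of a unit square mapped by a pure translation of (2, 3).
H = homography_dlt([(0, 0), (1, 0), (0, 1), (1, 1)],
                   [(2, 3), (3, 3), (2, 4), (3, 4)])
```

With more than four correspondences the same system gives a least-squares fit; a cylindrical screen, as in the paper, breaks the planarity assumption that this baseline relies on.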
