Advanced Visual Sensor Networks for Object Detection and Tracking

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 1442

Special Issue Editor


Prof. Dr. Byung-Gyu Kim
Guest Editor
Department of IT Engineering, Sookmyung Women's University, Seoul, Korea
Interests: visual sensor network; real-time object segmentation; deep learning for object detection; facial expression recognition

Special Issue Information

Dear Colleagues,

Following the success of the previous Special Issue “Visual Sensor Networks for Object Detection and Tracking” (https://0-www-mdpi-com.brum.beds.ac.uk/journal/sensors/special_issues/VSN_ODT), we are pleased to announce the next in the series, entitled “Advanced Visual Sensor Networks for Object Detection and Tracking”.

Information obtained through the human eye is more diverse and efficient for object recognition/tracking than that obtained through any other sensory organ. Recently, tasks such as visual object detection, recognition, and tracking have been enabled by more flexible vision sensors and network schemes such as the 5G standard. In addition, visual intelligence technology and inference systems based on deep/reinforcement learning are currently being actively researched to make vision systems more accurate. This Special Issue will publish original technical papers and review papers on these recent technologies, with a focus on visual recognition, real-time visual object tracking, knowledge extraction, distributed visual sensor networks, and applications.

You are welcome to submit unpublished original research related to the theme of “Advanced Visual Sensor Networks for Object Detection and Tracking.”

Prof. Dr. Byung-Gyu Kim
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 

Keywords

  • intelligent object detection algorithms
  • fast and complexity reduction algorithms for real-time object detection and tracking
  • knowledge extraction and mining from visual sensor data
  • visual sensor network architecture for object detection and tracking
  • awareness-based visual sensor network design
  • intelligent machine learning mechanism for object detection and recognition
  • lightweight deep learning for real-time object detection and tracking
  • visual data representation and transmission in a 5G network
  • real-time visual object tracking in vision sensor network
  • intelligent CCTV applications

Published Papers (1 paper)

Research

13 pages, 3003 KiB  
Article
Applying Ternion Stream DCNN for Real-Time Vehicle Re-Identification and Tracking across Multiple Non-Overlapping Cameras
by Lesole Kalake, Wanggen Wan and Yanqiu Dong
Sensors 2022, 22(23), 9274; https://0-doi-org.brum.beds.ac.uk/10.3390/s22239274 - 28 Nov 2022
Viewed by 962
Abstract
The increase in security threats and the huge demand for smart transportation applications for vehicle identification and tracking with multiple non-overlapping cameras have gained a lot of attention. Moreover, extracting meaningful and semantic vehicle information has become a challenging task, with frameworks deployed on different domains to scan features independently. Furthermore, identification and tracking approaches have largely relied on one or two vehicle characteristics. They have managed to achieve a high detection quality rate and accuracy using Inception ResNet and pre-trained models but have had limitations in handling moving vehicle classes and were not suitable for real-time tracking. Additionally, the complexity and diverse characteristics of vehicles made it impossible for such algorithms to efficiently distinguish and match vehicle tracklets across non-overlapping cameras. Therefore, to disambiguate these features, we propose a Ternion stream deep convolutional neural network (TSDCNN) over non-overlapping cameras that combines key vehicle features such as shape, license plate number, and optical character recognition (OCR), and then jointly investigates the strategic analysis of visual vehicle information to find and identify vehicles across multiple non-overlapping views. As a result, the proposed algorithm improved the recognition quality rate and recorded a remarkable overall performance, outperforming the current online state-of-the-art paradigm by 0.28% and 1.70%, respectively, on the vehicle rear view (VRV) and Veri776 datasets.
(This article belongs to the Special Issue Advanced Visual Sensor Networks for Object Detection and Tracking)
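The abstract above describes fusing several vehicle cues (appearance, license plate number, OCR output) to match tracklets across non-overlapping cameras. The sketch below is a hypothetical, much-simplified illustration of that fusion idea, not the authors' TSDCNN: it combines a cosine similarity over appearance embeddings with a character-level agreement score over OCR'd plate strings to rank gallery candidates. All function names, weights, and thresholds here are assumptions for illustration only.

```python
import numpy as np

def cosine_similarity(a, b):
    # appearance similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def plate_similarity(p1, p2):
    # fraction of positions where the two OCR'd plate strings agree
    if not p1 or not p2:
        return 0.0
    matches = sum(c1 == c2 for c1, c2 in zip(p1, p2))
    return matches / max(len(p1), len(p2))

def fuse_score(emb_a, emb_b, plate_a, plate_b, w_app=0.6, w_plate=0.4):
    # weighted combination of appearance and plate cues (weights are arbitrary)
    return (w_app * cosine_similarity(emb_a, emb_b)
            + w_plate * plate_similarity(plate_a, plate_b))

def match_across_cameras(query, gallery, threshold=0.7):
    # return index of best-scoring gallery vehicle, or None if below threshold
    scores = [fuse_score(query["emb"], g["emb"], query["plate"], g["plate"])
              for g in gallery]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```

In a real system the embeddings would come from a trained re-identification network and the plate strings from an OCR module; the point of the sketch is only that combining independent cues makes cross-camera matching more discriminative than either cue alone.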