Visual Sensor Networks and Related Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 October 2019) | Viewed by 29020

Special Issue Editors


Guest Editor
Department of Electrical and Computer Engineering, University of Porto, 4200-465 Porto, Portugal
Interests: emerging networks and applications; smart cities; internet of things; wireless networks; multimedia communications

Guest Editor
Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
Interests: real-time communication; reliable communication; wireless sensor networks; wireless industrial networks; mobile body area networks; energy-efficiency systems; Internet of Things

Special Issue Information

Dear Colleagues,

The development of affordable low-power electronic devices with embedded sensing, processing and communication capabilities has spread the use of wireless sensor network technologies, opening new frontiers for monitoring and control applications. Since cameras used as sensing units are inherently distributed and autonomous, they can provide multi-perspective visual data, which may be highly valuable for many types of monitoring applications. The resulting Visual Sensor Networks (VSNs) can then support applications in smart cities, the Internet of Things, Industry 4.0, vehicular networks, health assistance, home automation, and immersive entertainment, among many others, paving the way for impressive developments in those areas built on the available multi-perspective visual data.

Since the first experiments with wirelessly connected camera sensors, multiple innovative research challenges have driven research efforts in the area of VSNs. Over more than a decade, real-time transmission, energy efficiency, coverage optimization, visual data processing, Quality of Service (QoS) and Quality of Experience (QoE), reliability and availability, security, and many other aspects of multi-perspective visual sensing have been addressed by academia and industry, strengthening VSNs as an effective resource for distributed visual data acquisition and processing.

In this Special Issue, innovative research papers addressing classical and new challenges of visual sensor networks are especially welcome. Emerging applications for smart cities, vehicular networks, immersive entertainment and IoT environments have brought new challenges for visual sensor networks, mostly due to the integration of distributed and/or multi-perspective visual sensing with new data processing paradigms (e.g., big data, machine learning and crowdsensing algorithms). This Special Issue therefore aims to compile both classical and emerging challenges of visual sensor networks and their applications, welcoming submissions covering their different aspects.

Prof. Daniel G. Costa
Dr. Francisco Vasques
Prof. Mario Collotta
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Real-time transmission and processing in VSN
  • Multi-perspective visual sensing
  • Visual coverage optimizations
  • Reliability, availability and robustness of VSN communications
  • Performance enhancement in VSN
  • VSN management and security
  • QoS, QoE and visual data quality
  • Dependable architecture design in VSN
  • Authentication and key agreement for VSN
  • Security and privacy in VSN
  • Malware detection in VSN
  • Protocols and algorithms for VSN
  • Emerging applications of VSNs, including smart cities, the Internet of things, Industry 4.0, vehicular networks, health assistance, home automation and immersive entertainment

Published Papers (6 papers)


Editorial


3 pages, 147 KiB  
Editorial
Visual Sensor Networks and Related Applications
by Daniel G. Costa, Francisco Vasques and Mario Collotta
Sensors 2019, 19(22), 4960; https://0-doi-org.brum.beds.ac.uk/10.3390/s19224960 - 14 Nov 2019
Viewed by 1840
Abstract
The use of sensing devices to perform monitoring tasks has continuously evolved in the past decades [...]
(This article belongs to the Special Issue Visual Sensor Networks and Related Applications)

Research


16 pages, 1420 KiB  
Article
An AutoEncoder and LSTM-Based Traffic Flow Prediction Method
by Wangyang Wei, Honghai Wu and Huadong Ma
Sensors 2019, 19(13), 2946; https://0-doi-org.brum.beds.ac.uk/10.3390/s19132946 - 04 Jul 2019
Cited by 126 | Viewed by 8368
Abstract
Smart cities can effectively improve the quality of urban life. Intelligent Transportation Systems (ITSs) are an important part of smart cities, and the accurate, real-time prediction of traffic flow plays an important role in ITSs. To improve the prediction accuracy, we propose a novel traffic flow prediction method, called the AutoEncoder Long Short-Term Memory (AE-LSTM) prediction method. In our method, the AutoEncoder is used to obtain the internal relationship of traffic flow by extracting the characteristics of upstream and downstream traffic flow data. Moreover, the Long Short-Term Memory (LSTM) network utilizes the acquired characteristic data and the historical data to predict complex linear traffic flow data. The experimental results show that the AE-LSTM method had higher prediction accuracy. Specifically, the Mean Relative Error (MRE) of the AE-LSTM was reduced by 0.01 compared with the previous prediction methods. In addition, the AE-LSTM method also had good stability: for different stations and different dates, the prediction error and fluctuation of the AE-LSTM method were small. Furthermore, the average MRE of the AE-LSTM prediction results was 0.06 for six different days.
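As a rough illustration (not the authors' implementation), the Mean Relative Error metric used above to compare traffic-flow predictors can be sketched as follows; the data and the two hypothetical predictors are invented for the example.

```python
# Hypothetical sketch of the Mean Relative Error (MRE) metric used to
# compare traffic-flow predictors. All numbers below are invented.

def mean_relative_error(actual, predicted):
    """MRE = (1/n) * sum(|y_hat - y| / y), defined for positive flows."""
    if len(actual) != len(predicted):
        raise ValueError("series must have equal length")
    return sum(abs(p - a) / a for a, p in zip(actual, predicted)) / len(actual)

# Toy hourly traffic counts (vehicles/hour) and two hypothetical predictions.
observed = [120, 150, 180, 160]
baseline = [110, 160, 170, 150]   # e.g. a plain predictor
improved = [118, 152, 178, 158]   # e.g. one using encoded features

print(round(mean_relative_error(observed, baseline), 4))
print(round(mean_relative_error(observed, improved), 4))
```

A lower MRE indicates predictions closer to the observed flow, which is how the 0.01 improvement reported in the abstract would be measured.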

17 pages, 3205 KiB  
Article
A Deep Learning Approach for Maximum Activity Links in D2D Communications
by Bocheng Yu, Xingjun Zhang, Francesco Palmieri, Erwan Creignou and Ilsun You
Sensors 2019, 19(13), 2941; https://0-doi-org.brum.beds.ac.uk/10.3390/s19132941 - 03 Jul 2019
Cited by 8 | Viewed by 4072
Abstract
Mobile cellular communications are experiencing exponential growth in the traffic load on Long Term Evolution (LTE) eNode B (eNB) components. Such load can be significantly contained by directly sharing content among nearby users through device-to-device (D2D) communications, so that repeated downloads of the same data are avoided as much as possible. Accordingly, to improve the efficiency of content sharing and decrease the load on the eNB, it is important to maximize the number of simultaneous D2D transmissions. Specifically, maximizing the number of D2D links not only improves spectrum and energy efficiency but also reduces transmission delay. However, enabling the maximum number of D2D links in a cellular network poses two major challenges. First, the interference between D2D and cellular communications can critically affect their performance. Second, the minimum quality-of-service (QoS) requirements of cellular and D2D communications must be guaranteed. Therefore, the selection of active links is critical to attaining the maximum number of D2D links. This can be formulated as a classical integer linear programming problem (link scheduling) that is known to be NP-hard. This paper proposes to obtain a set of network features via deep learning for solving this challenging problem. The idea is to optimize the D2D link scheduling problem with a deep neural network (DNN), which yields a significant time reduction for delay-sensitive operations, since the computational overhead is mainly spent in the training process of the model. Simulations performed on randomly generated link scheduling problems showed that our algorithm is capable of finding satisfactory D2D link scheduling solutions, reducing computation time by up to 90% without significantly affecting accuracy.
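To make the underlying link-scheduling problem concrete, here is a minimal greedy sketch (not the paper's DNN approach, and not guaranteed optimal): activate as many D2D links as possible while keeping pairwise interference below a threshold. Link names and interference values are invented.

```python
# Sketch of the D2D link-scheduling problem: select a large set of links
# whose mutual interference stays below a limit. Greedy, illustrative only.

def greedy_schedule(links, interference, limit):
    """Activate links in order; skip any link whose interference with an
    already-active link meets or exceeds `limit`."""
    active = []
    for link in links:
        if all(interference[frozenset((link, other))] < limit
               for other in active):
            active.append(link)
    return active

# Invented symmetric pairwise interference between four candidate D2D links.
pairs = {
    frozenset(("L1", "L2")): 0.9,
    frozenset(("L1", "L3")): 0.2,
    frozenset(("L1", "L4")): 0.1,
    frozenset(("L2", "L3")): 0.3,
    frozenset(("L2", "L4")): 0.8,
    frozenset(("L3", "L4")): 0.2,
}

print(greedy_schedule(["L1", "L2", "L3", "L4"], pairs, limit=0.5))
```

Finding the truly maximum set is the NP-hard integer program mentioned in the abstract; the paper's contribution is to approximate it quickly with a trained DNN rather than a heuristic like this one.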

21 pages, 20112 KiB  
Article
Block Compressive Sensing (BCS) Based Low Complexity, Energy Efficient Visual Sensor Platform with Joint Multi-Phase Decoder (JMD)
by Mansoor Ebrahim, Wai Chong Chia, Syed Hasan Adil and Kamran Raza
Sensors 2019, 19(10), 2309; https://0-doi-org.brum.beds.ac.uk/10.3390/s19102309 - 19 May 2019
Cited by 7 | Viewed by 3541
Abstract
Devices in a visual sensor network (VSN) are mostly powered by batteries, and in such a network, energy consumption and bandwidth utilization are the most critical issues that need to be taken into consideration. The most suitable solution to these issues is to compress the captured visual data before transmission takes place. Compressive sensing (CS) has emerged as an efficient sampling mechanism for VSNs. CS reduces the total amount of data to be processed, recreating the signal from fewer sampled values than the Nyquist rate requires. However, there are open issues related to the reconstruction quality and practical implementation of CS. Current studies of CS concentrate more on hypothetical characteristics with simulated results than on understanding the potential issues in the practical implementation of CS and its computational validation. In this paper, a low-power, low-cost visual sensor platform is developed using an Arduino Due microcontroller board, an XBee transmitter, and a uCAM-II camera. Block compressive sensing (BCS) is implemented on the developed platform to validate the characteristics of compressive sensing in a real-world scenario. The reconstruction is performed using the joint multi-phase decoding (JMD) framework. To the best of our knowledge, no such practical implementation of CS using off-the-shelf components has yet been conducted.
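The sampling side of BCS can be sketched in a few lines: each image block of n pixels is projected onto m < n random measurements, y = Φx, so only the m values are transmitted. This is a generic illustration under invented sizes and data, not the paper's platform code, and the JMD reconstruction step is omitted entirely.

```python
import random

# Sketch of the BCS sampling step: an 8-sample "block" is compressed to
# 4 measurements via a random Gaussian measurement matrix. Illustrative only.

def measure_block(block, phi):
    """y = Phi * x: project an n-sample block onto m < n measurements."""
    return [sum(p * x for p, x in zip(row, block)) for row in phi]

random.seed(0)
n, m = 8, 4                                 # block length, measurements
phi = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]

block = [10, 12, 11, 10, 40, 42, 41, 40]    # toy pixel intensities
y = measure_block(block, phi)
print(len(y))                               # 4 measurements instead of 8 samples
```

On a battery-powered node, the appeal is that the sensor only performs this cheap linear projection; the expensive sparse reconstruction runs at the receiver.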

26 pages, 2421 KiB  
Article
Bandwidth Modeling of Silicon Retinas for Next Generation Visual Sensor Networks
by Nabeel Khan and Maria G. Martini
Sensors 2019, 19(8), 1751; https://0-doi-org.brum.beds.ac.uk/10.3390/s19081751 - 12 Apr 2019
Cited by 16 | Viewed by 3772
Abstract
Silicon retinas, also known as Dynamic Vision Sensors (DVS) or event-based visual sensors, have shown great advantages in terms of low power consumption, low bandwidth, wide dynamic range and very high temporal resolution. Owing to such advantages compared to conventional vision sensors, DVS devices are gaining more and more attention in applications such as drone surveillance, robotics, high-speed motion photography, etc. The output of such sensors is a sequence of events rather than a series of frames, as in classical cameras. Estimating the data rate of the stream of events associated with such sensors is needed for the appropriate design of transmission systems involving them. In this work, we propose to consider information about the scene content and sensor speed to support such estimation, and we identify suitable metrics to quantify the complexity of the scene for this purpose. According to the results of this study, the event rate shows an exponential relationship with the metric associated with the complexity of the scene and a linear relationship with the speed of the sensor. Based on these results, we propose a two-parameter model for the dependency of the event rate on scene complexity and sensor speed. The model achieves a prediction accuracy of approximately 88.4% for the outdoor environment, and an overall prediction accuracy of approximately 84%.
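The stated relationships, exponential in the scene-complexity metric and linear in sensor speed, suggest a model of the general form R(c, v) = a·v·exp(b·c). This is a sketch of that functional form only; the parameter values and units are invented, and the paper's fitted model may differ in detail.

```python
import math

# Sketch of a two-parameter event-rate model of the form described above:
# exponential in scene complexity c, linear in sensor speed v.
# Parameter values a and b are invented for illustration.

def event_rate(c, v, a=1.2, b=0.8):
    """R(c, v) = a * v * exp(b * c), in events per second (toy units)."""
    return a * v * math.exp(b * c)

# Linearity in speed: doubling v doubles the predicted rate.
print(event_rate(1.0, 2.0) / event_rate(1.0, 1.0))

# Exponential growth in complexity: adding 1 to c multiplies the rate by e^b.
print(event_rate(2.0, 1.0) / event_rate(1.0, 1.0))
```

Such a model lets a transmission-system designer bound the bandwidth a DVS stream will need from two measurable quantities, rather than from recorded traces alone.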

Review


29 pages, 1445 KiB  
Review
A Survey of Energy-Efficient Communication Protocols with QoS Guarantees in Wireless Multimedia Sensor Networks
by Shu Li, Jeong Geun Kim, Doo Hee Han and Kye San Lee
Sensors 2019, 19(1), 199; https://0-doi-org.brum.beds.ac.uk/10.3390/s19010199 - 07 Jan 2019
Cited by 56 | Viewed by 6238
Abstract
In recent years, wireless multimedia sensor networks (WMSNs) have emerged as a prominent technique for delivering multimedia information such as still images and videos. Despite great attention from research communities, however, multimedia delivery over resource-constrained WMSNs poses great challenges, especially in terms of energy efficiency and quality-of-service (QoS) guarantees. In this paper, recent developments in techniques for designing highly energy-efficient and QoS-capable WMSNs are surveyed. We first study the unique characteristics of WMSNs and the requirements they impose, and summarize existing solutions for each requirement. We then review recent research efforts on energy-efficient and QoS-aware communication protocols, including MAC protocols, with a focus on their prioritization and service differentiation mechanisms, and disjoint multipath routing protocols.
