Special Issue "Survey in Deep Learning for IoT Applications"

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "Internet of Things (IoT) and Industrial IoT".

Deadline for manuscript submissions: 31 December 2022.

Special Issue Editors

Dr. Rytis Maskeliunas
Guest Editor
Department of Multimedia Technologies, Kaunas University of Technology, Kaunas, Lithuania
Interests: sustainable, multimodal, collaborative, and intelligent HMIs

Special Issue Information

Dear Colleagues,

In recent years, Internet of Things (IoT) methods and dedicated communication channels have been developed to detect and collect all kinds of information and deliver a variety of advanced services and applications, generating huge amounts of data constantly received from millions of IoT sensors deployed around the world. Deep learning techniques already play an important role in desktop and mobile applications and are now entering the resource-constrained IoT sector, enabling more advanced IoT applications with proven results in a variety of areas, including image recognition, medical data analysis, information retrieval, language recognition, natural language processing, indoor localization, autonomous vehicles, smart cities, sustainability, pollution monitoring, the bioeconomy, and more. This Special Issue focuses on research and applications of the Internet of Things, with an emphasis on multimodal signal processing, sensor data extraction, data visualization and understanding, and other related topics. It addresses questions such as: which deep neural network structures can efficiently process and integrate multimodal sensor input data for various IoT applications; how to adapt current designs and develop new ones that reduce the resource cost of running deep learning models, enabling efficient deployment on IoT devices; how to correctly estimate the reliability of deep learning predictions for IoT applications under limited and constrained computational requirements; and how to reduce the need for labeled IoT signal data for learning, given operational limitations, among other key areas.

Dr. Rytis Maskeliunas
Prof. Dr. Robertas Damaševičius
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Internet of Things
  • Deep learning
  • Data fusion
  • Multimodal signal processing
  • Data processing and visualization

Published Papers (1 paper)


Research

Article
An IoT System Using Deep Learning to Classify Camera Trap Images on the Edge
Computers 2022, 11(1), 13; https://0-doi-org.brum.beds.ac.uk/10.3390/computers11010013 - 13 Jan 2022
Abstract
Camera traps deployed in remote locations provide an effective method for ecologists to monitor and study wildlife in a non-invasive way. However, current camera traps suffer from two problems. First, the images are manually classified and counted, which is expensive. Second, due to manual coding, the results are often stale by the time they reach the ecologists. Using the Internet of Things (IoT) combined with deep learning represents a good solution to both of these problems, as the images can be classified automatically and the results made available to ecologists immediately. This paper proposes an IoT architecture that uses deep learning on edge devices to convey animal classification results to a mobile app over the LoRaWAN low-power, wide-area network. The primary goal of the proposed approach is to reduce the cost of the wildlife monitoring process for ecologists and to provide real-time animal sighting data from the camera traps in the field. Camera trap image data consisting of 66,400 images were used to train the InceptionV3, MobileNetV2, ResNet18, EfficientNetB1, DenseNet121, and Xception neural network models. While the performance of the trained models was statistically different (Kruskal–Wallis: Accuracy H(5) = 22.34, p < 0.05; F1-score H(5) = 13.82, p = 0.0168), there was only a 3% difference in the F1-score between the worst (MobileNetV2) and the best model (Xception). Moreover, the models made similar errors (Adjusted Rand Index (ARI) > 0.88 and Adjusted Mutual Information (AMI) > 0.82). Subsequently, the best model, Xception (Accuracy = 96.1%; F1-score = 0.87, or 0.97 with oversampling), was optimized and deployed on the Raspberry Pi, Google Coral, and Nvidia Jetson edge devices using both the TensorFlow Lite and TensorRT frameworks. Optimizing the models to run on edge devices reduced the average macro F1-score to 0.7 and adversely affected the minority classes, reducing their F1-score to as low as 0.18. Upon stress testing, by processing 1000 images consecutively, the Jetson Nano, running a TensorRT model, outperformed the others with a latency of 0.276 s/image (s.d. = 0.002) while consuming an average current of 1665.21 mA. The Raspberry Pi consumed the least average current (838.99 mA) but with roughly ten times the latency (2.83 s/image, s.d. = 0.036). The Nano was the only reasonable option as an edge device because it could capture most animals whose maximum speeds were below 80 km/h, including goats, lions, ostriches, etc. While the proposed architecture is viable, unbalanced data remain a challenge, and the results can potentially be improved by using object detection to reduce imbalances and by exploring semi-supervised learning.
(This article belongs to the Special Issue Survey in Deep Learning for IoT Applications)
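As a rough illustration of the edge-optimization step described in the abstract above, the minimal sketch below converts a trained Keras image classifier to TensorFlow Lite with post-training quantization. It is a generic example under stated assumptions, not the authors' actual code; the model and file names are placeholders.

    # Minimal sketch (assumption: a trained Keras classifier is saved locally;
    # names are illustrative and not taken from the paper).
    import tensorflow as tf

    # Load the trained image classifier (placeholder file name).
    model = tf.keras.models.load_model("camera_trap_classifier.h5")

    # Convert to TensorFlow Lite with default dynamic-range quantization to
    # shrink the model and speed up inference on edge hardware such as a
    # Raspberry Pi or Google Coral.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    # Save the optimized model for deployment on the edge device.
    with open("camera_trap_classifier.tflite", "wb") as f:
        f.write(tflite_model)

On the device, the resulting .tflite file can then be loaded with tf.lite.Interpreter (or the lighter tflite-runtime package) to classify images captured by the camera trap.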