
Neural Networks and Semantic Analysis in Sensor, Image and Video Processing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (10 July 2023) | Viewed by 7538

Special Issue Editors


Dr. Nicholas Vretos
Guest Editor
Center for Research and Technology Hellas (CERTH) - Information Technologies Institute, 57001 Thessaloniki, Greece
Interests: image and video processing; semantic analysis; neural networks; 3D data processing

Dr. Dimitrios Zarpalas
Guest Editor
Information Technologies Institute, Centre for Research and Technology Hellas, 57001 Thessaloniki, Greece
Interests: UAV detection and classification; 3D/4D computer vision; 3D human reconstruction and motion capturing; medical image processing

Prof. Dr. Yitzhak Yitzhaky
Guest Editor
School of Electrical and Computer Engineering, Ben Gurion University of the Negev, Be’er-Sheva 8410501, Israel
Interests: image and video correction and analysis

Special Issue Information

Dear Colleagues,

Recent advancements in machine learning have driven researchers and industry to progressively integrate neural networks into an abundance of applications in domains such as media, healthcare, security, banking, and retail. The power of deep neural networks lies in their predictive efficiency and their adaptability to diverse circumstances, depending on specific demands and available resources. Despite the capabilities of artificial intelligence models, they are often designed and tested on unstructured data, i.e., data that are uncorrelated in terms of contextual comprehension. Although a trained neural network is capable of distinguishing different objects even when there is no prior information about them, extracting further information on how these objects relate to each other within a specific context requires further tuning. For example, a company’s chatbot engine can provide useful information about a client’s request based on the provided keywords; however, the customer may use words or expressions with ambivalent meanings, resulting in failed sentiment estimation if the request is not examined in context.

Semantic analysis enables researchers and corporations to extract additional information from unstructured data by exploiting the predictive capabilities of machine learning. In retail, semantic analysis aims to provide better decision-making for organizations and a superior customer experience. In image and video applications, semantic analysis is useful in numerous human–computer interaction scenarios, such as smart surveillance or autonomous driving, in the form of scene recognition.

This Special Issue invites high-quality, state-of-the-art research papers that deal with neural networks in semantic analysis. We solicit original papers reporting completed, unpublished research that is not currently under review by any other conference, magazine, or journal. Topics of interest include, but are not limited to, the following:

  • Explainable AI
  • Human-In-The-Loop
  • Scene Recognition
  • Affective Computing
  • Semantic Classification and Clustering
  • Medical Imaging Segmentation
  • Autonomous Cars
  • Smart Surveillance
  • Recommendation Systems

Dr. Nicholas Vretos
Dr. Dimitrios Zarpalas
Prof. Dr. Yitzhak Yitzhaky
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research

21 pages, 29073 KiB  
Article
Emergency Floor Plan Digitization Using Machine Learning
by Mohab Hassaan, Philip Alexander Ott, Ann-Kristin Dugstad, Miguel A. Vega Torres and André Borrmann
Sensors 2023, 23(19), 8344; https://doi.org/10.3390/s23198344 - 09 Oct 2023
Cited by 1 | Viewed by 1419
Abstract
An increasing number of special-use and high-rise buildings have presented challenges for efficient evacuations, particularly in fire emergencies. At the same time, however, the use of autonomous vehicles within indoor environments has received only limited attention for emergency scenarios. To address these issues, we developed a method that classifies emergency symbols and determines their location on emergency floor plans. The method incorporates color filtering, clustering and object detection techniques to extract walls, which were used in combination to generate clean, digitized plans. By integrating the geometric and semantic data digitized with our method, existing building information modeling (BIM)-based evacuation tools can be enhanced, improving their capabilities for path planning and decision making. We collected a dataset of 403 German emergency floor plans and created a synthetic dataset comprising 5000 plans. Both datasets were used to train two distinct faster region-based convolutional neural networks (Faster R-CNNs). The models were evaluated and compared using 83 floor plan images. The results show that the synthetic model outperformed the standard model for rare symbols, correctly identifying symbol classes that were not detected by the standard model. The presented framework offers a valuable tool for digitizing emergency floor plans and enhancing digital evacuation applications.
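
To make the detection setup concrete, below is a minimal sketch of how a symbol detector of this kind could be fine-tuned with torchvision's Faster R-CNN implementation. This is not the authors' code: the class count, learning rate, and data loader format are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 25  # assumed: number of emergency-symbol classes + background

# Start from a COCO-pretrained detector and swap in a new box predictor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_one_epoch(model, loader, device="cuda"):
    model.to(device).train()
    for images, targets in loader:  # targets: dicts with "boxes" and "labels"
        images = [img.to(device) for img in images]
        targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
        losses = model(images, targets)  # dict of RPN and R-CNN losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```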

13 pages, 765 KiB  
Article
Transformer-Based Fire Detection in Videos
by Konstantina Mardani, Nicholas Vretos and Petros Daras
Sensors 2023, 23(6), 3035; https://doi.org/10.3390/s23063035 - 11 Mar 2023
Cited by 3 | Viewed by 1704
Abstract
Fire detection in videos forms a valuable feature in surveillance systems, as its utilization can prevent hazardous situations. The combination of an accurate and fast model is necessary for the effective confrontation of this significant task. In this work, a transformer-based network for the detection of fire in videos is proposed. It is an encoder–decoder architecture that consumes the current frame under examination in order to compute attention scores. These scores denote which parts of the input frame are more relevant for the expected fire detection output. The model is capable of recognizing fire in video frames and specifying its exact location in the image plane in real time, in the form of a segmentation mask, as can be seen in the experimental results. The proposed methodology has been trained and evaluated on two computer vision tasks: the full-frame classification task (fire/no fire in frames) and the fire localization task. In comparison with state-of-the-art models, the proposed method achieves outstanding results in both tasks, with 97% accuracy, a 20.4 fps processing time, and a 0.02 false positive rate for fire localization, and 97% F-score and recall in the full-frame classification task.
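
For readers unfamiliar with attention-based segmentation, the following is a rough sketch of the general idea, not the paper's architecture: a small transformer encoder attends over image patches and predicts a per-patch fire probability, which is then upsampled into a frame-sized mask. All layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchFireSegmenter(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=256, heads=8, layers=4):
        super().__init__()
        self.img_size = img_size
        self.grid = img_size // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        self.pos = nn.Parameter(torch.zeros(1, self.grid ** 2, dim))     # learned positions
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)  # per-patch fire logit

    def forward(self, frames):  # frames: (B, 3, H, W)
        x = self.embed(frames).flatten(2).transpose(1, 2) + self.pos  # (B, n_patches, dim)
        x = self.encoder(x)                                           # attention over patches
        logits = self.head(x).reshape(-1, 1, self.grid, self.grid)    # coarse patch grid
        # Upsample the coarse patch grid to a full-resolution mask.
        return nn.functional.interpolate(
            logits, size=(self.img_size, self.img_size),
            mode="bilinear", align_corners=False)

# Example: a sigmoid over the logits gives a per-pixel fire probability mask.
mask = torch.sigmoid(PatchFireSegmenter()(torch.randn(2, 3, 224, 224)))
```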

12 pages, 1387 KiB  
Article
Less Is More: Adaptive Trainable Gradient Dropout for Deep Neural Networks
by Christos Avgerinos, Nicholas Vretos and Petros Daras
Sensors 2023, 23(3), 1325; https://doi.org/10.3390/s23031325 - 24 Jan 2023
Cited by 2 | Viewed by 1668
Abstract
The undeniable computational power of artificial neural networks has granted the scientific community the ability to exploit the available data in ways previously inconceivable. However, deep neural networks require an overwhelming quantity of data in order to interpret the underlying connections within them and, therefore, complete the specific task that they have been assigned. Feeding a deep neural network with vast amounts of data usually ensures efficiency but may harm the network’s ability to generalize. To tackle this, numerous regularization techniques have been proposed, with dropout being one of the most dominant. This paper proposes a selective gradient dropout method, which, instead of relying on dropping random weights, learns to freeze the training process of specific connections, thereby increasing the overall network’s sparsity in an adaptive manner by driving it to utilize more salient weights. The experimental results show that the produced sparse network outperforms the baseline on numerous image classification datasets; additionally, these results were obtained after significantly fewer training epochs.
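
As a rough illustration of the gradient dropout idea, the sketch below freezes the gradients of the least salient (lowest-magnitude) weights at each step so that only the more salient connections continue training. Unlike the paper's method, the mask here is not trainable, and the keep ratio is an assumed hyperparameter.

```python
import torch
import torch.nn as nn

def apply_gradient_dropout(model: nn.Module, keep_ratio: float = 0.7):
    """Zero the gradients of the lowest-magnitude weights in each weight tensor."""
    for p in model.parameters():
        if p.grad is None or p.dim() < 2:  # skip biases and frozen parameters
            continue
        n_freeze = int(p.numel() * (1.0 - keep_ratio))
        if n_freeze == 0:
            continue
        # Threshold = magnitude of the n_freeze-th smallest weight.
        threshold = p.detach().abs().flatten().kthvalue(n_freeze).values
        p.grad.mul_((p.detach().abs() > threshold).float())  # freeze small weights

# Typical use inside a training step:
#   loss.backward()
#   apply_gradient_dropout(model, keep_ratio=0.7)
#   optimizer.step()
```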

13 pages, 1858 KiB  
Article
Visual Relationship Detection with Multimodal Fusion and Reasoning
by Shouguan Xiao and Weiping Fu
Sensors 2022, 22(20), 7918; https://doi.org/10.3390/s22207918 - 18 Oct 2022
Cited by 3 | Viewed by 1410
Abstract
Visual relationship detection aims to completely understand visual scenes and has recently received increasing attention. However, current methods only use the visual features of images to train the semantic network, which does not match the human habit of recognizing the obvious features of a scene and inferring covert states using common sense. Therefore, these methods cannot predict some hidden relationships of object pairs in complex scenes. To address this problem, we propose unifying vision–language fusion and knowledge graph reasoning to combine visual feature embedding with external common sense knowledge to determine the visual relationships of objects. In addition, before training the relationship detection network, we devise an object-pair proposal module to solve the combinatorial explosion problem. Extensive experiments show that our proposed method outperforms state-of-the-art methods on the Visual Genome and Visual Relationship Detection datasets.
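
The sketch below illustrates the fusion step in the most generic way possible, and is not the authors' model: a visual feature for an object pair is concatenated with external knowledge-graph embeddings of the subject and object classes and fed to a predicate classifier. All dimensions are assumptions.

```python
import torch
import torch.nn as nn

class RelationFusionHead(nn.Module):
    """Classify the predicate of an object pair from fused visual + KG features."""
    def __init__(self, vis_dim=1024, kg_dim=200, num_predicates=50):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + 2 * kg_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_predicates),
        )

    def forward(self, pair_visual, subj_kg, obj_kg):
        # pair_visual: (B, vis_dim) feature of the pair's union box
        # subj_kg, obj_kg: (B, kg_dim) knowledge-graph embeddings of the two classes
        return self.fuse(torch.cat([pair_visual, subj_kg, obj_kg], dim=-1))

head = RelationFusionHead()
logits = head(torch.randn(4, 1024), torch.randn(4, 200), torch.randn(4, 200))  # (4, 50)
```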
