
J. Imaging, Volume 7, Issue 5 (May 2021) – 17 articles

Cover Story: Ubiquitous digital cameras, pervasive social media, easy access to the Internet, and new storage technologies are some of the factors that have led to an explosion in the daily production of digital visual data. This places a great demand on systems that enable efficient and effective management and retrieval of visual archives. This paper presents a content-based video retrieval system, called VISIONE, which provides various search functionalities to allow users to quickly fulfil a particular need for information on large video collections. VISIONE exploits artificial intelligence techniques to automatically analyze visual data and encodes the extracted information into a specifically designed textual representation that enables the use of mature and scalable full-text search technologies for indexing and searching video content.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
Article
Lip Reading by Alternating between Spatiotemporal and Spatial Convolutions
J. Imaging 2021, 7(5), 91; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050091 - 20 May 2021
Viewed by 465
Abstract
Lip reading (LR) is the task of predicting speech using only visual information from the speaker. In this work, for the first time, the benefits of alternating between spatiotemporal and spatial convolutions for learning effective features from LR sequences are studied. In this context, a new learnable module named ALSOS (Alternating Spatiotemporal and Spatial Convolutions) is introduced in the proposed LR system. The ALSOS module consists of spatiotemporal (3D) and spatial (2D) convolutions along with two conversion components (3D-to-2D and 2D-to-3D) providing a sequence-to-sequence mapping. The designed LR system employs the ALSOS module in between ResNet blocks, as well as Temporal Convolutional Networks (TCNs) in the backend for classification. The whole framework is composed of feedforward convolutional and residual layers and can be trained end-to-end directly from the image sequences in the word-level LR problem. The ALSOS module can capture spatiotemporal dynamics and is advantageous in the LR task when combined with the ResNet topology. Experiments with different combinations of ALSOS and ResNet are performed on a dataset in the Greek language simulating a medical support application scenario and on the popular large-scale LRW-500 dataset of English words. The results indicate that the proposed ALSOS module can improve the performance of an LR system. Overall, inserting the ALSOS module into the ResNet architecture yields higher classification accuracy, since it incorporates temporal information captured at different spatial scales of the framework. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
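The 3D-to-2D and 2D-to-3D conversion components described in the abstract amount to reshaping a clip tensor so that per-frame 2D convolutions and sequence-level 3D convolutions can be interleaved. A minimal NumPy sketch of that round trip; the function names and the (batch, channels, time, height, width) layout are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def conv_3d_to_2d(x):
    """Fold the temporal axis into the batch axis so 2D (spatial)
    convolutions can run independently per frame.
    x: (batch, channels, time, H, W) -> (batch*time, channels, H, W)"""
    b, c, t, h, w = x.shape
    return np.ascontiguousarray(x.transpose(0, 2, 1, 3, 4)).reshape(b * t, c, h, w)

def conv_2d_to_3d(x, batch):
    """Inverse: restore the temporal axis for 3D (spatiotemporal) convolutions.
    x: (batch*time, channels, H, W) -> (batch, channels, time, H, W)"""
    bt, c, h, w = x.shape
    t = bt // batch
    return x.reshape(batch, t, c, h, w).transpose(0, 2, 1, 3, 4)

# Round trip preserves the sequence exactly.
clip = np.random.rand(2, 64, 29, 22, 22)   # e.g. 29 lip frames at 22x22
flat = conv_3d_to_2d(clip)                  # feed to 2D conv layers
restored = conv_2d_to_3d(flat, batch=2)     # feed back to 3D conv layers
```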

Article
End-to-End Deep One-Class Learning for Anomaly Detection in UAV Video Stream
J. Imaging 2021, 7(5), 90; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050090 - 19 May 2021
Viewed by 447
Abstract
In recent years, the use of drones for surveillance tasks has been on the rise worldwide. However, in the context of anomaly detection, only normal events are available for the learning process. Therefore, implementing a generative learning method in an unsupervised mode to solve this problem becomes fundamental. In this context, we propose a new end-to-end architecture capable of generating optical flow images from original UAV images and extracting compact spatio-temporal characteristics for anomaly detection purposes. It is designed with a custom loss function defined as the sum of three terms, the reconstruction loss (Rl), the generation loss (Gl), and the compactness loss (Cl), to ensure efficient deep one-class classification. In addition, we propose to minimize the effect of UAV motion in video processing by applying background subtraction to the optical flow images. We tested our method on the very complex mini-drone video dataset and obtained results surpassing existing techniques’ performances, with an AUC of 85.3. Full article
(This article belongs to the Special Issue Image and Video Forensics)
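The custom loss described above is the sum of three terms. A hedged NumPy sketch, assuming mean-squared-error forms for Rl and Gl and an intra-batch-variance form for Cl (a common choice in deep one-class learning); the function names and equal default weights are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    # Rl: mean squared error between input frames and their reconstruction
    return np.mean((x - x_hat) ** 2)

def generation_loss(flow_true, flow_gen):
    # Gl: mean squared error between reference and generated optical-flow images
    return np.mean((flow_true - flow_gen) ** 2)

def compactness_loss(z):
    # Cl: mean intra-batch variance of latent descriptors, pushing normal
    # samples into a tight cluster (a common deep one-class formulation)
    return np.mean(np.var(z, axis=0))

def total_loss(x, x_hat, flow_true, flow_gen, z, w=(1.0, 1.0, 1.0)):
    # Weighted sum of the three terms: L = wr*Rl + wg*Gl + wc*Cl
    wr, wg, wc = w
    return (wr * reconstruction_loss(x, x_hat)
            + wg * generation_loss(flow_true, flow_gen)
            + wc * compactness_loss(z))
```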

Review
Feature Extraction for Finger-Vein-Based Identity Recognition
J. Imaging 2021, 7(5), 89; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050089 - 15 May 2021
Viewed by 621
Abstract
This paper provides a brief review of the feature extraction methods applied to finger vein recognition. The study is designed in a systematic way to shed light on the scientific interest in biometric systems based on finger vein features, and the analysis spans a period of 13 years (2008 to 2020). The examined feature extraction algorithms are clustered into five categories and presented in a qualitative manner, focusing mainly on the techniques applied to represent the features of the finger veins that uniquely establish a person’s identity. In addition, the case of non-handcrafted features learned in a deep learning framework is also examined. The literature analysis revealed increasing interest in finger vein biometric systems as well as a high diversity of feature extraction methods proposed over the past several years. In the most recent year examined, however, this interest shifted to the application of Convolutional Neural Networks, following the general trend of applying deep learning models across a range of disciplines. Finally, this work highlights the limitations of the existing feature extraction methods and describes the research actions needed to address the identified challenges. Full article
(This article belongs to the Section Biometrics, Forensics, and Security)

Article
Remote Density Measurements of Molten Salts via Neutron Radiography
J. Imaging 2021, 7(5), 88; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050088 - 14 May 2021
Viewed by 377
Abstract
With increased interest in the use of molten salts in both nuclear and non-nuclear systems, measuring the important thermophysical properties of specific salt mixtures becomes critical to understanding salt performance and behavior. One of the more basic and significant thermophysical properties of a given salt system is density as a function of temperature. With this in mind, this work presents and lays out a novel approach to measuring the densities of molten salt systems using neutron radiography. The work was performed on Flight Path 5 at the Los Alamos Neutron Science Center at Los Alamos National Laboratory. To benchmark this initial work, three salt mixtures were measured: NaCl; LiCl (58.2 mol%) + KCl (41.8 mol%); and MgCl2 (32 mol%) + KCl (68 mol%). The resulting densities as a function of temperature for each sample were then compared to previous works employing traditional techniques. The results match well with previous literature values for all salt mixtures measured, establishing that neutron radiography is a viable technique for measuring density as a function of temperature in molten salt systems. Finally, the advantages of using neutron radiography over other methods are discussed, and future work on improving this technique is covered. Full article
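The abstract does not give the measurement equation, but radiographic density measurements are commonly based on the attenuation law I = I0 * exp(-mu_m * rho * t). A small illustrative sketch that inverts this law for density, assuming the mass attenuation coefficient and the sample thickness are known; all names and numbers here are assumptions, not values from the paper:

```python
import math

def density_from_transmission(I, I0, mu_m, thickness_cm):
    """Invert the attenuation law I = I0 * exp(-mu_m * rho * t) for rho.

    I, I0        : transmitted and incident beam intensities
    mu_m         : mass attenuation coefficient (cm^2/g), assumed known
    thickness_cm : beam path length through the salt (cm)
    Returns rho in g/cm^3."""
    if not 0 < I <= I0:
        raise ValueError("transmitted intensity must satisfy 0 < I <= I0")
    return -math.log(I / I0) / (mu_m * thickness_cm)

# Example: 50% transmission through a 2 cm sample with mu_m = 0.1 cm^2/g
rho = density_from_transmission(0.5, 1.0, 0.1, 2.0)
```

Repeating the measurement at a series of furnace temperatures yields the density-versus-temperature curve the paper benchmarks against traditional techniques.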

Article
Simulation-Based Estimation of the Number of Cameras Required for 3D Reconstruction in a Narrow-Baseline Multi-Camera Setup
J. Imaging 2021, 7(5), 87; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050087 - 13 May 2021
Viewed by 452
Abstract
Graphical visualization systems are a common clinical tool for displaying digital images and three-dimensional volumetric data. These systems provide a broad spectrum of information to support physicians in their clinical routine. For example, the field of radiology enjoys unrestricted options for interaction with the data, since the information is pre-recorded and available entirely in digital form. However, some fields, such as microsurgery, do not yet benefit from this. Microscopes, endoscopes, and laparoscopes show the surgical site as it is. To allow free data manipulation and information fusion, 3D digitization of surgical sites is required. We aimed to find the number of cameras needed to add this functionality to surgical microscopes. For this, we performed in silico simulations of the 3D reconstruction of representative models of microsurgical sites with different numbers of cameras in narrow-baseline setups. Our results show that eight independent camera views are preferable, while at least four are necessary for a digital surgical site. In most cases, eight cameras allow the reconstruction of over 99% of the visible part; with four cameras, over 95% can still be achieved. This answers one of the key questions for the development of a prototype microscope. In the future, such a system could provide functionality that is unattainable today. Full article
(This article belongs to the Section Medical Imaging)

Article
Formalization of the Burning Process of Virtual Reality Objects in Adaptive Training Complexes
J. Imaging 2021, 7(5), 86; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050086 - 12 May 2021
Viewed by 375
Abstract
Within the scope of this article, the problem of formalizing physical processes in adaptive training complexes is considered using the example of burning virtual objects. Despite fairly complete study of this process, the existing mathematical models are not adapted for application in training complexes, which leads to a significant increase in costs and lower productivity due to the complexity of the calculations. Therefore, an adapted mathematical model is proposed that formalizes the structure of virtual burning objects, their basic properties, and the processes of changing states, starting from the development of flame on an object and ending with its complete destruction or extinguishment. The article proposes the use of threshold value diagrams and rules for changing the states of virtual reality objects to solve the problem of formalizing burning processes. This tool is versatile, allowing various physical processes to be described, such as smoke, flooding, the spread of toxic gases, etc. The application area of the proposed formalization approach includes the design and implementation of physical processes in simulators and multimedia complexes using virtual and augmented reality. Thus, the presented research can be used to formalize physical processes in adaptive training complexes for professional ergatic systems. Full article
(This article belongs to the Special Issue The Mixed Reality Revolution: Challenges and Prospects)

Article
Utilizing a Terrestrial Laser Scanner for 3D Luminance Measurement of Indoor Environments
J. Imaging 2021, 7(5), 85; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050085 - 10 May 2021
Viewed by 474
Abstract
We aim to present a method for measuring 3D luminance point clouds by applying the integrated high dynamic range (HDR) panoramic camera system of a terrestrial laser scanning (TLS) instrument to perform luminance measurements simultaneously with laser scanning. We present the luminance calibration of a laser scanner and assess the accuracy, color measurement properties, and dynamic range of luminance measurement achieved in the laboratory environment. In addition, we demonstrate the 3D luminance measuring process through a case study with a luminance-calibrated laser scanner. The presented method can be utilized directly as a luminance data source: a terrestrial laser scanner can be prepared, characterized, and calibrated for the simultaneous measurement of both geometry and luminance. We discuss the state and limitations of contemporary TLS technology for luminance measurement. Full article
(This article belongs to the Special Issue High Dynamic Range Imaging)

Article
Vegetation Structure Index (VSI): Retrieving Vegetation Structural Information from Multi-Angular Satellite Remote Sensing
J. Imaging 2021, 7(5), 84; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050084 - 09 May 2021
Viewed by 426
Abstract
Utilization of the Bidirectional Reflectance Distribution Function (BRDF) model parameters obtained from multi-angular remote sensing is one approach to the retrieval of vegetation structural information. In this research, the potential of multi-angular vegetation indices, formulated by combining multi-spectral reflectance from different view angles, for the retrieval of forest above-ground biomass was assessed in the New England region. The multi-angular vegetation indices were generated by simulation from the BRDF parameters of the Moderate Resolution Imaging Spectroradiometer (MODIS) BRDF/Albedo Model Parameters Product (MCD43A1 Version 6). The effects of the seasonal (spring, summer, autumn, and winter) composites of the multi-angular vegetation indices on the above-ground biomass, the angular relationship of the spectral reflectance with above-ground biomass, and the interrelationships between the multi-angular vegetation indices were analyzed. Among the existing multi-angular vegetation indices, only the nadir BRDF-adjusted NDVI and the hot-spot-incorporated NDVI showed a significant relationship with above-ground biomass (explaining more than 50% of its variation). The Vegetation Structure Index (VSI), newly proposed in this research, performed best and explained 64% of the variation in above-ground biomass, suggesting that the right choice of spectral channel and observation geometry should be considered to improve above-ground biomass estimates. In addition, the right choice of seasonal data (summer) was found to be important for estimating forest biomass, while the other seasons were either insensitive or uninformative. The promising results shown by the VSI suggest that it could be an appropriate candidate for monitoring vegetation structure from multi-angular satellite remote sensing. Full article

Article
Variational Autoencoder for Image-Based Augmentation of Eye-Tracking Data
J. Imaging 2021, 7(5), 83; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050083 - 03 May 2021
Viewed by 481
Abstract
Over the past decade, deep learning has achieved unprecedented successes in a diversity of application domains, given large-scale datasets. However, particular domains, such as healthcare, inherently suffer from data paucity and imbalance. Moreover, datasets can be largely inaccessible due to privacy concerns or a lack of data-sharing incentives. Such challenges have heightened the significance of generative modeling and data augmentation in that domain. In this context, this study explores a machine learning-based approach for generating synthetic eye-tracking data, and a novel application of variational autoencoders (VAEs) in this regard. More specifically, a VAE model is trained to generate an image-based representation of the eye-tracking output, so-called scanpaths. Overall, our results validate that the VAE model can generate plausible output from a limited dataset. Finally, it is empirically demonstrated that such an approach can be employed as a mechanism for data augmentation to improve performance in classification tasks. Full article

Article
Optical Imaging of Magnetic Particle Cluster Oscillation and Rotation in Glycerol
J. Imaging 2021, 7(5), 82; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050082 - 29 Apr 2021
Cited by 1 | Viewed by 472
Abstract
Magnetic particles have been evaluated for their biomedical applications as a drug delivery system to treat asthma and other lung diseases. In this study, ferromagnetic barium hexaferrite (BaFe12O19) and iron oxide (Fe3O4) particles were suspended in water or glycerol, as glycerol can be 1000 times more viscous than water. The particle concentration was 2.50 mg/mL for BaFe12O19 particle clusters and 1.00 mg/mL for Fe3O4 particle clusters. The magnetic particle cluster cross-sectional area ranged from 15 to 1000 μm², and the particle cluster diameter ranged from 5 to 45 μm. The magnetic particle clusters were exposed to oscillating or rotating magnetic fields and imaged with an optical microscope. The oscillation frequency of the applied magnetic fields, which was created by homemade wire spools inserted into an optical microscope, ranged from 10 to 180 Hz. The magnetic field magnitudes varied from 0.25 to 9 mT. The minimum magnetic field required for particle cluster rotation or oscillation in glycerol was experimentally measured at different frequencies. The results are in qualitative agreement with a simplified model for single-domain magnetic particles, with an average deviation from the model of 1.7 ± 1.3. The observed difference may be accounted for by the fact that our simplified model does not include effects on particle cluster motion caused by randomly oriented domains in multi-domain magnetic particle clusters, irregular particle cluster size, or magnetic anisotropy, among other effects. Full article
(This article belongs to the Special Issue Current Highlights and Future Applications of Computational Imaging)

Article
CORONA-Net: Diagnosing COVID-19 from X-ray Images Using Re-Initialization and Classification Networks
J. Imaging 2021, 7(5), 81; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050081 - 28 Apr 2021
Viewed by 542
Abstract
The COVID-19 pandemic has been deemed a global health emergency. The early detection of COVID-19 is key to combating its outbreak and could help bring this pandemic to an end. One of the biggest challenges in combating COVID-19 is accurate testing for the disease. Utilizing the power of Convolutional Neural Networks (CNNs) to detect COVID-19 from chest X-ray images can help radiologists compare and validate their results with an automated system. In this paper, we propose a carefully designed network, dubbed CORONA-Net, that can accurately detect COVID-19 from chest X-ray images. CORONA-Net is divided into two phases: (1) the reinitialization phase and (2) the classification phase. In the reinitialization phase, the network consists of encoder and decoder networks. The objective of this phase is to train and initialize the encoder and decoder networks with a distribution derived from medical images. In the classification phase, the decoder network is removed from CORONA-Net, and the encoder network acts as a backbone to fine-tune the classification phase based on the weights learned in the reinitialization phase. Extensive experiments were performed on a publicly available dataset, COVIDx, and the results show that CORONA-Net significantly outperforms the current state-of-the-art networks with an overall accuracy of 95.84%. Full article
(This article belongs to the Special Issue X-ray Digital Radiography and Computed Tomography)

Article
From IR Images to Point Clouds to Pose: Point Cloud-Based AR Glasses Pose Estimation
J. Imaging 2021, 7(5), 80; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050080 - 27 Apr 2021
Viewed by 590
Abstract
In this paper, we propose two novel AR glasses pose estimation algorithms that work from single infrared images by using 3D point clouds as an intermediate representation. Our first approach, “PointsToRotation”, is based on a Deep Neural Network alone, whereas our second approach, “PointsToPose”, is a hybrid model combining Deep Learning and a voting-based mechanism. Our methods utilize a point cloud estimator, which we trained on multi-view infrared images in a semi-supervised manner, to generate point clouds from a single image. We generate a point cloud dataset with our point cloud estimator using the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with the corresponding 6-DoF poses. In comparison to another point cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%. Full article
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)

Article
Determination of the Round Window Niche Anatomy Using Cone Beam Computed Tomography Imaging as Preparatory Work for Individualized Drug-Releasing Implants
J. Imaging 2021, 7(5), 79; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050079 - 26 Apr 2021
Viewed by 373
Abstract
Modern therapy of inner ear disorders is increasingly shifting to local drug delivery using a growing number of pharmaceuticals. Access to the inner ear is usually made via the round window membrane (RWM), located in the bony round window niche (RWN). We hypothesize that the individual shape and size of the RWN must be taken into account for safe, reliable, and controlled drug delivery. Therefore, we investigated the anatomy and its variations. Cone beam computed tomography (CBCT) images of 50 patients were analyzed. Based on the reconstructed 3D volumes, the individual anatomies of the RWN, RWM, and bony overhang were determined by segmentation using 3D Slicer™ with a custom-built plug-in. A large individual anatomical variability of the RWN was measured, with a mean volume of 4.54 mm³ (min 2.28 mm³, max 6.64 mm³). The area of the RWM ranged from 1.30 to 4.39 mm² (mean: 2.93 mm²). The bony overhang had a mean length of 0.56 mm (min 0.04 mm, max 1.24 mm), and its shape varied considerably between individuals. Our data suggest that there is potential for individually designed and additively manufactured RWN implants, given the large differences in the volume and shape of the RWN. Full article

Article
Structural Beauty: A Structure-Based Computational Approach to Quantifying the Beauty of an Image
J. Imaging 2021, 7(5), 78; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050078 - 23 Apr 2021
Viewed by 719
Abstract
To say that beauty is in the eye of the beholder means that beauty is largely subjective and so varies from person to person. While the subjectivity view is commonly held, there is also an objectivity view that seeks to measure beauty or aesthetics in some quantitative manner. Christopher Alexander long ago discovered that beauty or coherence correlates highly with the number of subsymmetries or substructures, and demonstrated that there is a shared notion of beauty—structural beauty—among people and even across different peoples, regardless of their faiths, cultures, and ethnicities. This notion of structural beauty arises directly out of living structure or wholeness, a physical and mathematical structure that underlies all space and matter. Based on the concept of living structure, this paper develops an approach for computing the structural beauty or life of an image (L) from the number of automatically derived substructures (S) and their inherent hierarchy (H). To verify this approach, we conducted a series of case studies applied to eight pairs of images, including Leonardo da Vinci’s Mona Lisa and Jackson Pollock’s Blue Poles. We discovered, among other things, that Blue Poles is more structurally beautiful than the Mona Lisa, and that traditional buildings are in general more structurally beautiful than their modernist counterparts. This finding implies that the goodness of things or images is largely a matter of fact rather than an opinion or personal preference, as conventionally conceived. The research on structural beauty has deep implications for many disciplines in which beauty or aesthetics is a major concern, such as image understanding and computer vision, architecture and urban design, the humanities and arts, neurophysiology, and psychology. Full article
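As a rough illustration of the quantities named above, structural beauty can be computed from substructure counts grouped by hierarchical level. The product form L = S × H used here is an assumption for illustration only; the paper's exact definition may differ:

```python
def structural_beauty(substructures_per_level):
    """Illustrative sketch (assumed form L = S * H):
    S = total number of automatically derived substructures,
    H = number of hierarchical levels those substructures span.

    substructures_per_level: list of substructure counts, one entry
    per level of the hierarchy, e.g. [10, 5, 2]."""
    S = sum(substructures_per_level)
    H = len(substructures_per_level)
    return S * H

# An image with 10 small, 5 medium, and 2 large substructures:
life = structural_beauty([10, 5, 2])   # S = 17 over H = 3 levels
```

Under this assumed form, an image with both many substructures and a deep hierarchy scores higher than one with the same count flattened into a single level.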

Communication
UnCanny: Exploiting Reversed Edge Detection as a Basis for Object Tracking in Video
J. Imaging 2021, 7(5), 77; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050077 - 23 Apr 2021
Viewed by 354
Abstract
Few object detection methods exist that can resolve small objects (<20 pixels) from complex static backgrounds without significant computational expense. A framework capable of meeting these needs, which reverses the steps of classic Canny edge detection, is presented here. Sample images taken from sequential frames of video footage were processed by subtraction, thresholding, Sobel edge detection, Gaussian blurring, and Zhang–Suen edge thinning to identify objects that have moved between the two frames. The results of this method show distinct contours applicable to object tracking algorithms with minimal “false positive” noise. This framework may be used with other edge detection methods to produce robust, low-overhead object tracking methods. Full article
(This article belongs to the Special Issue Edge Detection Evaluation)
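The processing chain listed in the abstract can be sketched step by step. The NumPy-only sketch below implements frame subtraction, thresholding, Sobel edge detection, and Gaussian blurring; the final Zhang–Suen thinning step is omitted for brevity (OpenCV's `cv2.ximgproc.thinning` provides it). The threshold values and grayscale frame layout are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def conv3(img, k):
    # 'same' 3x3 cross-correlation via shifted sums (zero padding)
    p = np.pad(img, 1)
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + h, j:j + w]
    return out

def uncanny(frame_a, frame_b, diff_thresh=25.0, edge_thresh=50.0):
    """Reversed-Canny sketch: subtract two frames, threshold the change
    mask, take Sobel gradients, then Gaussian-blur the edge map.
    Returns a boolean contour map of objects that moved between frames."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    mask = (diff > diff_thresh).astype(float) * 255.0        # changed pixels only
    sx = conv3(mask, np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float))
    sy = conv3(mask, np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float))
    edges = np.hypot(sx, sy)                                 # Sobel magnitude
    g = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
    blurred = conv3(edges, g)                                # Gaussian smoothing
    return blurred > edge_thresh                             # binary contours

# A 5x5 bright object moving between two otherwise static frames
frame_a = np.zeros((32, 32)); frame_a[5:10, 5:10] = 255.0
frame_b = np.zeros((32, 32)); frame_b[15:20, 15:20] = 255.0
contours = uncanny(frame_a, frame_b)
```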

Article
The VISIONE Video Search System: Exploiting Off-the-Shelf Text Search Engines for Large-Scale Video Retrieval
J. Imaging 2021, 7(5), 76; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050076 - 23 Apr 2021
Viewed by 423
Abstract
This paper describes in detail VISIONE, a video search system that allows users to search for videos using textual keywords, the occurrence of objects and their spatial relationships, the occurrence of colors and their spatial relationships, and image similarity. These modalities can be combined to express complex queries and meet users’ needs. The distinctive feature of our approach is that all information extracted from the keyframes, such as visual deep features, tags, colors, and object locations, is encoded in a purpose-built textual representation that is indexed in a single text retrieval engine. This offers great flexibility when results corresponding to the various parts of a query (visual, textual, and locations) need to be merged. In addition, we report an extensive analysis of the retrieval performance of the system, using the query logs generated during the Video Browser Showdown (VBS) 2019 competition. This allowed us to fine-tune the system by choosing the optimal parameters and strategies among those we tested. Full article
(This article belongs to the Special Issue 2020 Selected Papers from Journal of Imaging Editorial Board Members)
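Encoding visual features as text, as described above, is often realized as a "surrogate text representation": each feature dimension becomes a synthetic term repeated in proportion to its quantized weight, so a standard full-text engine's term-frequency scoring approximates vector similarity. A hedged sketch of the idea; the token scheme and quantization factor are assumptions, not VISIONE's actual encoding:

```python
def surrogate_text(features, vocab_prefix="f", quantization=10):
    """Map a non-negative feature vector to a space-separated string of
    synthetic terms. Dimension i becomes the term '<prefix>i', repeated
    round(features[i] * quantization) times, so a text engine's term
    frequencies mirror the feature magnitudes. Negative components are
    dropped (a common simplification for ReLU-style features)."""
    terms = []
    for i, v in enumerate(features):
        reps = int(round(max(v, 0.0) * quantization))
        terms.extend([f"{vocab_prefix}{i}"] * reps)
    return " ".join(terms)

# The resulting string can be indexed as an ordinary document field:
doc = surrogate_text([0.1, 0.0, 0.2])
```

Because queries are encoded the same way, one off-the-shelf text index can then rank keyframes by feature similarity alongside ordinary keyword matches.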

Review
An Update of the Possible Applications of Magnetic Resonance Imaging (MRI) in Dentistry: A Literature Review
J. Imaging 2021, 7(5), 75; https://0-doi-org.brum.beds.ac.uk/10.3390/jimaging7050075 - 21 Apr 2021
Cited by 6 | Viewed by 537
Abstract
This narrative review aims to evaluate the current evidence for the application of magnetic resonance imaging (MRI), a radiation-free diagnostic exam, in several fields of dentistry. Background: Radiographic imaging plays a significant role in current first- and second-level dental diagnostics and treatment planning; however, its main disadvantage is the patient’s high exposure to ionizing radiation. Methods: A search for articles on dental MRI was performed using the PubMed electronic database, and 37 studies were included. Only articles about endodontics, conservative dentistry, implantology, and oral and craniofacial surgery that best represented the aim of this study were selected. Results: All the included articles showed that MRI can obtain well-defined images, which can be applied in operative dentistry. Conclusions: This review highlights the potential of MRI for diagnosis in dental clinical practice, without the risk of biological damage from continuous ionizing radiation exposure. Full article
(This article belongs to the Special Issue New Frontiers of Advanced Imaging in Dentistry)
