
Computer Vision and Sensors Innovations for Microscopy Imaging Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 30 April 2024 | Viewed by 12959

Special Issue Editors


Dr. Filippo Piccinini
Guest Editor
IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) “Dino Amadori”, 47014 Meldola, Italy
Interests: computer vision; microscopy and imaging; 3D cell cultures; software development; machine learning

Dr. Antonella Carbonaro
Guest Editor
Department of Computer Science and Engineering, University of Bologna, 40136 Bologna, Italy
Interests: customization and content-based information processing for data and knowledge representation; semantic web technologies; personalized environments; heterogeneous data integration from IoT devices

Prof. Dr. Peter Horvath
Guest Editor
Institute of Biochemistry, Biological Research Centre (BRC), Szeged, Hungary
Interests: computer vision; microscopy and imaging; single-cell analysis; software development; machine learning

Prof. Dr. Gastone C. Castellani
Guest Editor
Department of Experimental, Diagnostic and Specialty Medicine, University of Bologna, 40138 Bologna, Italy
Interests: artificial intelligence; machine learning; multiomic data; complex systems; neural networks; biophysics

Special Issue Information

Dear Colleagues,

It is well known that microscopy imaging innovations pave the way for discoveries in biology, medicine, engineering, and many other disciplines of health and industrial research. In this scenario, computer vision and sensors are the driving forces behind innovative imaging applications that open up new opportunities in microscopy.

This Special Issue, entitled "Computer Vision and Sensors Innovations for Microscopy Imaging Applications", aims to explore the scientific and technological frontiers that characterize the microscopy landscape. It seeks original, previously unpublished research and review articles empirically addressing key issues and challenges related to the methods, implementation, results, and evaluation of novel approaches and technologies in the field of microscopy imaging.

Dr. Filippo Piccinini
Dr. Antonella Carbonaro
Prof. Dr. Peter Horvath
Prof. Dr. Gastone C. Castellani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • microscopy
  • novel imaging and sensing
  • imaging technology for biomedical applications
  • software development
  • machine learning
  • deep learning
  • segmentation, tracking, and classification
  • image processing
  • signal processing

Published Papers (7 papers)


Research

20 pages, 9857 KiB  
Article
Data Science for Health Image Alignment: A User-Friendly Open-Source ImageJ/Fiji Plugin for Aligning Multimodality/Immunohistochemistry/Immunofluorescence 2D Microscopy Images
by Filippo Piccinini, Marcella Tazzari, Maria Maddalena Tumedei, Mariachiara Stellato, Daniel Remondini, Enrico Giampieri, Giovanni Martinelli, Gastone Castellani and Antonella Carbonaro
Sensors 2024, 24(2), 451; https://doi.org/10.3390/s24020451 - 11 Jan 2024
Viewed by 760
Abstract
Most of the time, the deep analysis of a biological sample requires the acquisition of images at different time points, using different modalities and/or different stainings. This information gives morphological, functional, and physiological insights, but the acquired images must be aligned before proceeding with co-localisation analysis. Practically speaking, in line with Aristotle’s principle that “the whole is greater than the sum of its parts”, multi-modal image registration is a challenging task that involves fusing complementary signals. In the past few years, several methods for image registration have been described in the literature, but unfortunately, no single method works for all applications. In addition, there is currently no user-friendly solution for aligning images that does not require computer skills. In this work, DS4H Image Alignment (DS4H-IA), an open-source ImageJ/Fiji plugin for aligning multimodality, immunohistochemistry (IHC), and/or immunofluorescence (IF) 2D microscopy images, designed with the goal of being extremely easy to use, is described. All of the available solutions for aligning 2D microscopy images are also reviewed. The DS4H-IA source code; standalone applications for macOS, Linux, and Windows; video tutorials; manual documentation; and sample datasets are publicly available. Full article
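As an illustrative aside (not the DS4H-IA algorithm itself), rigid alignment of two grayscale microscopy channels can be sketched in a few lines of Python using phase cross-correlation; the function name and the upsampling factor below are choices made for this example only.

```python
# Illustrative sketch (not the DS4H-IA method): rigid alignment of two
# 2D microscopy channels via phase cross-correlation with scikit-image.
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_to_reference(reference: np.ndarray, moving: np.ndarray) -> np.ndarray:
    """Estimate the translation between two grayscale images and warp
    the moving image onto the reference frame."""
    # Estimated (row, col) shift that maps `moving` onto `reference`.
    offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
    # Apply the shift; pixels moved out of view are filled with zeros.
    return nd_shift(moving, shift=offset, order=1, mode="constant", cval=0.0)

# Usage (hypothetical variables): aligned_if = align_to_reference(ihc_gray, if_gray)
```

A full multimodality workflow would typically add intensity normalisation and non-rigid refinement, which this sketch deliberately omits.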

23 pages, 6340 KiB  
Article
Automated Stabilization, Enhancement and Capillaries Segmentation in Videocapillaroscopy
by Vincenzo Taormina, Giuseppe Raso, Vito Gentile, Leonardo Abbene, Antonino Buttacavoli, Gaetano Bonsignore, Cesare Valenti, Pietro Messina, Giuseppe Alessandro Scardina and Donato Cascio
Sensors 2023, 23(18), 7674; https://doi.org/10.3390/s23187674 - 05 Sep 2023
Cited by 4 | Viewed by 934
Abstract
Oral capillaroscopy is a critical and non-invasive technique used to evaluate microcirculation. Its ability to observe small vessels in vivo has generated significant interest in the field. Capillaroscopy serves as an essential tool for diagnosing and prognosing various pathologies, with anatomic–pathological lesions playing a crucial role in their progression. Despite its importance, the utilization of videocapillaroscopy in the oral cavity encounters limitations due to the acquisition setup, encompassing spatial and temporal resolutions of the video camera, objective magnification, and physical probe dimensions. Moreover, the operator’s influence during the acquisition process, particularly how the probe is maneuvered, further affects its effectiveness. This study aims to address these challenges and improve data reliability by developing a computerized support system for microcirculation analysis. The designed system performs stabilization, enhancement and automatic segmentation of capillaries in oral mucosal video sequences. The stabilization phase was performed by means of a method based on the coupling of seed points in a classification process. The enhancement process implemented was based on the temporal analysis of the capillaroscopic frames. Finally, an automatic segmentation phase of the capillaries was implemented with the additional objective of quantitatively assessing the signal improvement achieved through the developed techniques. Specifically, transfer learning of the renowned U-net deep network was implemented for this purpose. The proposed method underwent testing on a database with ground truth obtained from expert manual segmentation. The obtained results demonstrate an achieved Jaccard index of 90.1% and an accuracy of 96.2%, highlighting the effectiveness of the developed techniques in oral capillaroscopy. In conclusion, these promising outcomes encourage the utilization of this method to assist in the diagnosis and monitoring of conditions that impact microcirculation, such as rheumatologic or cardiovascular disorders. Full article
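For readers who want to reproduce the reported evaluation metrics, the Jaccard index and pixel accuracy of a binary capillary mask against an expert ground truth can be computed as in the minimal NumPy sketch below, which is independent of the authors' code.

```python
# Minimal sketch: Jaccard index and pixel accuracy between a predicted
# binary capillary mask and an expert ground-truth mask (values 0/1).
import numpy as np

def jaccard_and_accuracy(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = intersection / union if union else 1.0  # IoU of the capillary class
    accuracy = (pred == truth).mean()                 # fraction of correctly labelled pixels
    return float(jaccard), float(accuracy)
```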

23 pages, 10973 KiB  
Article
A Deep Learning Approach to Intrusion Detection and Segmentation in Pellet Fuels Using Microscopic Images
by Sebastian Iwaszenko, Marta Szymańska and Leokadia Róg
Sensors 2023, 23(14), 6488; https://doi.org/10.3390/s23146488 - 18 Jul 2023
Viewed by 776
Abstract
Pellet fuels are nowadays commonly used as a heat source for food preparation. Unfortunately, they may contain intrusions which might be harmful to humans and the environment. The intrusions can be identified precisely using immersed microscopy analysis. The aim of this study is to investigate the possibility of autonomous identification of selected classes of intrusions using relatively simple deep learning models. Semantic segmentation was chosen as the method for impurity identification in the microscopic images. Three deep network architectures based on the UNet architecture were examined. The networks had the same depth as UNet but with a successively reduced number of filters. The influence of the input image resolution on the segmentation results was also examined. The efficiency of the networks was assessed using the intersection-over-union index. The results showed a clearly observable impact of the number of filters on segmentation efficiency. The influence of the input image resolution is less clear, and even the lowest resolution used (256 × 256 pixels) gave satisfactory results. The biggest network (though still smaller than the originally proposed UNet) yielded segmentation quality good enough for practical applications. The simpler one was also applicable, although the quality of the segmentation decreased considerably. The simplest network gave poor results and is not suitable for applications. The two larger networks can be used to support domain experts in practical applications. Full article
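A minimal sketch of the general idea, assuming PyTorch and made-up layer sizes: a UNet-style encoder/decoder whose filter budget is controlled by a single base parameter, so that halving base mimics the successively reduced filter counts examined in the paper. The authors' exact architectures are not reproduced here.

```python
# Hypothetical UNet-style network with a configurable filter budget (PyTorch).
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    """Two-level encoder/decoder; `base` controls the number of filters."""
    def __init__(self, in_ch: int = 3, n_classes: int = 2, base: int = 16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                                    # full resolution
        e2 = self.enc2(self.pool(e1))                        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1)) # skip connection
        return self.head(d1)

# Halving `base` (16 -> 8 -> 4) mimics successively smaller networks.
```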

15 pages, 9109 KiB  
Article
High-Quality 3D Visualization System for Light-Field Microscopy with Fine-Scale Shape Measurement through Accurate 3D Surface Data
by Ki Hoon Kwon, Munkh-Uchral Erdenebat, Nam Kim, Anar Khuderchuluun, Shariar Md Imtiaz, Min Young Kim and Ki-Chul Kwon
Sensors 2023, 23(4), 2173; https://doi.org/10.3390/s23042173 - 15 Feb 2023
Cited by 3 | Viewed by 1996
Abstract
We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach acquires both high-resolution two-dimensional (2D) and light-field images of the specimen sequentially. We put forward a matting Laplacian-based depth estimation algorithm that yields nearly realistic 3D surface data, allowing the calculation of depth data relatively close to the actual surface, as well as measurement information, from the light-field images of specimens. High-reliability area data of the focus measure map and the spatial affinity information of the matting Laplacian are used to estimate nearly realistic depths. This provides a reference value for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data and the high-resolution 2D image. The element image array is rendered through a simplified direction-reversal calculation method, which depends on user interaction with the 3D model, and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and 3D display images. Full article
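As a rough illustration of the focus-measure side of such pipelines (not the matting-Laplacian refinement itself), a per-pixel depth index and a reliability map can be sketched from a focal stack as follows; the window size and the confidence heuristic are assumptions made for this example.

```python
# Simplified sketch: focus-measure depth map and reliability from a focal stack.
# The paper refines such estimates with matting-Laplacian spatial affinities,
# which are not reproduced here.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_depth_and_confidence(stack: np.ndarray, window: int = 9):
    """stack: (n_slices, H, W) grayscale focal stack -> (depth indices, reliability map)."""
    measures = np.stack([uniform_filter(laplace(f.astype(np.float64)) ** 2, window)
                         for f in stack])
    depth = np.argmax(measures, axis=0)           # slice with the sharpest response per pixel
    peak = measures.max(axis=0)
    mean = measures.mean(axis=0) + 1e-12
    confidence = peak / mean                      # high ratio = distinct, reliable focus peak
    return depth, confidence
```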

26 pages, 14328 KiB  
Article
Generative Adversarial Networks for Morphological–Temporal Classification of Stem Cell Images
by Adam Witmer and Bir Bhanu
Sensors 2022, 22(1), 206; https://doi.org/10.3390/s22010206 - 29 Dec 2021
Cited by 4 | Viewed by 2351
Abstract
Frequently, neural network training involving biological images suffers from a lack of data, resulting in inefficient network learning. This issue stems from limitations in terms of time, resources, and difficulty in cellular experimentation and data collection. For example, when performing experimental analysis, it may be necessary for the researcher to use most of their data for testing, as opposed to model training. Therefore, the goal of this paper is to perform dataset augmentation using generative adversarial networks (GAN) to increase the classification accuracy of deep convolutional neural networks (CNN) trained on induced pluripotent stem cell microscopy images. The main challenges are: 1. modeling complex data using GAN and 2. training neural networks on augmented datasets that contain generated data. To address these challenges, a temporally constrained, hierarchical classification scheme that exploits domain knowledge is employed for model learning. First, image patches of cell colonies from gray-scale microscopy images are generated using GAN, and then these images are added to the real dataset and used to address class imbalances at multiple stages of training. Overall, a 2% increase in both true positive rate and F1-score is observed using this method as compared to a straightforward, imbalanced classification network, with some greater improvements on a classwise basis. This work demonstrates that synergistic model design involving domain knowledge is key for biological image analysis and improves model learning in high-throughput scenarios. Full article
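A conceptual sketch of the rebalancing step, assuming hypothetical `real` and `generated` dictionaries keyed by class label: minority classes are topped up with GAN-generated patches until every class matches the majority-class count. The authors' temporally constrained, hierarchical scheme is more involved than this.

```python
# Conceptual sketch: top up minority classes with GAN-generated patches so
# every class reaches the majority-class count. `generated` is assumed to hold
# patches already produced by a trained GAN for each class label.
import numpy as np

def rebalance_with_synthetic(real: dict[str, list[np.ndarray]],
                             generated: dict[str, list[np.ndarray]],
                             rng: np.random.Generator) -> dict[str, list[np.ndarray]]:
    target = max(len(v) for v in real.values())   # majority-class size
    balanced = {}
    for label, patches in real.items():
        deficit = target - len(patches)
        pool = generated.get(label, [])
        n_extra = min(deficit, len(pool))
        idx = rng.choice(len(pool), size=n_extra, replace=False) if n_extra else []
        balanced[label] = patches + [pool[i] for i in idx]  # real + synthetic patches
    return balanced
```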

14 pages, 5958 KiB  
Communication
Multi-Focus Image Fusion Using Focal Area Extraction in a Large Quantity of Microscopic Images
by Jiyoung Lee, Seunghyun Jang, Jungbin Lee, Taehan Kim, Seonghan Kim, Jongbum Seo, Ki Hean Kim and Sejung Yang
Sensors 2021, 21(21), 7371; https://doi.org/10.3390/s21217371 - 05 Nov 2021
Cited by 1 | Viewed by 1785
Abstract
The non-invasive examination of conjunctival goblet cells using a microscope is a novel procedure for the diagnosis of ocular surface diseases. However, it is difficult to generate an all-in-focus image due to the curvature of the eyes and the limited focal depth of the microscope. The microscope acquires multiple images with the axial translation of focus, and the image stack must be processed. Thus, we propose a multi-focus image fusion method to generate an all-in-focus image from multiple microscopic images. First, a bandpass filter is applied to the source images and the focus areas are extracted using Laplacian transformation and thresholding with a morphological operation. Next, a self-adjusting guided filter is applied for the natural connections between local focus images. A window-size-updating method is adopted in the guided filter to reduce the number of parameters. This paper presents a novel algorithm that can operate for a large quantity of images (10 or more) and obtain an all-in-focus image. To quantitatively evaluate the proposed method, two different types of evaluation metrics are used: “full-reference” and “no-reference”. The experimental results demonstrate that this algorithm is robust to noise and capable of preserving local focus information through focal area extraction. Additionally, the proposed method outperforms state-of-the-art approaches in terms of both visual effects and image quality assessments. Full article
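A simplified NumPy sketch of Laplacian-based focus selection and per-pixel fusion follows; the bandpass pre-filtering, thresholding with morphological cleanup, and self-adjusting guided filter described above are omitted, so this is only a baseline illustration, not the proposed algorithm.

```python
# Baseline sketch: build an all-in-focus composite by picking each pixel from
# the frame with the highest local Laplacian energy.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_stack(stack: np.ndarray, window: int = 7) -> np.ndarray:
    """stack: (n, H, W) grayscale images -> (H, W) all-in-focus composite."""
    sharpness = np.stack([uniform_filter(np.abs(laplace(f.astype(np.float64))), window)
                          for f in stack])
    best = np.argmax(sharpness, axis=0)          # sharpest frame index per pixel
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]               # pick each pixel from its sharpest frame
```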

18 pages, 4869 KiB  
Article
Density Distribution Maps: A Novel Tool for Subcellular Distribution Analysis and Quantitative Biomedical Imaging
by Ilaria De Santis, Michele Zanoni, Chiara Arienti, Alessandro Bevilacqua and Anna Tesei
Sensors 2021, 21(3), 1009; https://doi.org/10.3390/s21031009 - 02 Feb 2021
Cited by 5 | Viewed by 2541
Abstract
Subcellular spatial location is an essential descriptor of a molecule’s biological function. Presently, super-resolution microscopy techniques enable the quantification of subcellular object distributions in fluorescence images, but they rely on instrumentation, tools, and expertise that are not standard in most laboratories. We propose a method that resolves the location of subcellular structures by reinforcing each pixel position with information from its surroundings. Although designed for entry-level laboratory equipment with common resolution powers, our method is independent of imaging device resolution and can therefore also benefit super-resolution microscopy. The approach generates density distribution maps (DDMs) informative of both the objects’ absolute location and their self-relative displacement, thus practically reducing location uncertainty and increasing the accuracy of signal mapping. This work proves the capability of the DDMs to: (a) improve the informativeness of spatial distributions; (b) empower the analysis of subcellular molecule distributions; (c) extend their applicability beyond mere spatial object mapping. Finally, the possibility of enhancing or even disclosing latent distributions can concretely speed up routine, large-scale, and follow-up experiments, besides representing a benefit for all spatial distribution studies, independently of the image acquisition resolution. DDMaker, a software tool endowed with a user-friendly graphical user interface (GUI), is also provided to support users in creating DDMs. Full article
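As a toy illustration of the underlying idea (reinforcing each position with information from its surroundings), a density map can be sketched by accumulating detected object coordinates and smoothing them with a Gaussian kernel; DDMaker's actual formulation may differ, and the sigma value here is an arbitrary choice.

```python
# Toy sketch of a density distribution map from object coordinates; not DDMaker.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_distribution_map(coords: np.ndarray, shape: tuple[int, int],
                             sigma: float = 5.0) -> np.ndarray:
    """coords: (n, 2) array of (row, col) positions inside one cell image of `shape`."""
    counts = np.zeros(shape, dtype=np.float64)
    rows, cols = coords[:, 0].astype(int), coords[:, 1].astype(int)
    np.add.at(counts, (rows, cols), 1.0)            # accumulate detections per pixel (in-bounds assumed)
    density = gaussian_filter(counts, sigma=sigma)  # spread each detection over its surroundings
    return density / density.max() if density.max() > 0 else density
```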

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

Title: DS4H Image Alignment: a user-friendly open-source ImageJ/Fiji plugin for aligning multimodality/IHC/IF 2D microscopy images
Authors: Filippo Piccinini, Marco Edoardo Duma, Marcella Tazzari, Maria Maddalena Tumedei, Giovanni Martinelli, Gastone Castellani and Antonella Carbonaro.
Affiliation: IRCCS Istituto Romagnolo per lo Studio dei Tumori (IRST) “Dino Amadori”, 47014 Meldola, Italy
Abstract: Most of the time, the deep analysis of a biological sample requires the acquisition of images at different time points, using different modalities and/or different stainings. This information gives morphological, functional, and physiological insights, but the acquired images must be aligned before proceeding with co-localisation analysis. Practically speaking, in line with Aristotle's principle that “the whole is greater than the sum of its parts”, multi-modal image registration is the challenging task of fusing together complementary signals. In recent years, several methods for image registration have been described in the literature, but unfortunately, no single method works for all applications. In addition, there is currently no user-friendly tool for aligning images that does not require computer skills. In this work, besides reviewing all the solutions available for aligning 2D microscopy images, we describe DS4H Image Alignment (DS4H-IA), an open-source ImageJ/Fiji plugin for aligning multimodality, immunohistochemistry (IHC), and/or immunofluorescence (IF) 2D microscopy images, designed with the goal of being extremely easy to use.
