The Present and the Future of Imaging

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 12690

Special Issue Editors


Prof. Dr. Raimondo Schettini
Guest Editor
Department of Informatics, Systems and Communication, University of Milano-Bicocca, viale Sarca, 336, 20126 Milan, Italy
Interests: color imaging; image and video processing; analysis and classification; visual information systems; image quality

Prof. Dr. Cosimo Distante
Guest Editor
Institute of Applied Sciences and Intelligent Systems “ScienceApp”, Consiglio Nazionale delle Ricerche, c/o Dhitech Campus Universitario Ecotekne, Via Monteroni s/n, 73100 Lecce, Italy
Interests: computer vision; pattern recognition; video surveillance; object tracking; deep learning; audience measurements; visual interaction; human–robot interaction

Prof. Dr. Ioannis Pratikakis
Guest Editor
Department of Electrical and Computer Engineering, Democritus University of Thrace, University Campus, Kimmeria, GR-67100 Xanthi, Greece
Interests: computer vision; 3D object classification and retrieval; document image analysis and recognition; medical image analysis; computer-aided diagnosis; biometrics

Special Issue Information

Dear Colleagues,

The definition of imaging is broad, and the field comprises many sub-sectors. In just a few years, our journal has become a reference point for scientists and researchers working on innovative imaging methods, technologies and applications.

With this Special Issue, to which you (and only you) are invited to contribute, we want to further strengthen the journal's role as a reference point for the imaging community. We ask you to contribute an article that, in your view, represents the state of the art and the future of imaging science, imaging technologies, and their applications. Topics may come from any area covered by the journal (see https://www.mdpi.com/journal/jimaging/about). The contributions will be framed by a special editorial, to be read and studied by our attentive readers for years to come.

Prof. Dr. Raimondo Schettini
Prof. Dr. Cosimo Distante
Prof. Dr. Ioannis Pratikakis
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

20 pages, 31395 KiB  
Article
Conditional Random Field-Guided Multi-Focus Image Fusion
by Odysseas Bouzos, Ioannis Andreadis and Nikolaos Mitianoudis
J. Imaging 2022, 8(9), 240; https://doi.org/10.3390/jimaging8090240 - 05 Sep 2022
Cited by 2 | Viewed by 1403
Abstract
Multi-focus image fusion is of great importance for coping with the limited depth of field of optical lenses. Since input images contain noise, multi-focus image fusion methods that support denoising are important. Transform-domain methods have been applied to image fusion; however, they are likely to produce artifacts. To cope with these issues, we introduce the Conditional Random Field (CRF)-guided fusion method, CRF-Guided fusion. A novel Edge Aware Centering method is proposed and employed to extract the low and high frequencies of the input images. The Independent Component Analysis (ICA) transform is applied to the high-frequency components, and a CRF model is created from the low frequencies and the transform coefficients. The CRF model is solved efficiently with the α-expansion method. The estimated labels are used to guide the fusion of the low-frequency components and the transform coefficients. Inverse ICA is then applied to the fused transform coefficients. Finally, the fused image is the sum of the fused low frequency and the fused high frequency. CRF-Guided fusion does not introduce artifacts during fusion and supports image denoising during fusion by applying transform-domain coefficient shrinkage. Quantitative and qualitative evaluations demonstrate the superior performance of CRF-Guided fusion compared to state-of-the-art multi-focus image fusion methods.
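
For readers who want a concrete picture of the split-label-fuse structure this method follows, below is a minimal Python sketch. The Gaussian frequency split and the local-energy focus measure are simplifying stand-ins chosen for brevity; the paper's Edge Aware Centering, ICA transform, and CRF solved with α-expansion are not reproduced here.

```python
import numpy as np
from scipy import ndimage

def toy_multifocus_fusion(img_a, img_b, sigma=2.0, win=7):
    """Fuse two float images of the same scene with different focus."""
    # Low/high frequency split (a Gaussian stand-in for the paper's
    # Edge Aware Centering, which is not reproduced here).
    low_a = ndimage.gaussian_filter(img_a, sigma)
    low_b = ndimage.gaussian_filter(img_b, sigma)
    high_a, high_b = img_a - low_a, img_b - low_b
    # Per-pixel focus measure: local energy of the high-frequency band.
    # The paper instead builds a CRF on the low frequencies and the
    # ICA-domain coefficients and solves it with alpha-expansion.
    energy_a = ndimage.uniform_filter(high_a ** 2, win)
    energy_b = ndimage.uniform_filter(high_b ** 2, win)
    labels = energy_a >= energy_b  # True where img_a is sharper
    # Label-guided fusion of both bands, then recombination.
    fused_low = np.where(labels, low_a, low_b)
    fused_high = np.where(labels, high_a, high_b)
    return fused_low + fused_high
```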

24 pages, 1324 KiB  
Article
Comparison of Convolutional Neural Networks and Transformers for the Classification of Images of COVID-19, Pneumonia and Healthy Individuals as Observed with Computed Tomography
by Azucena Ascencio-Cabral and Constantino Carlos Reyes-Aldasoro
J. Imaging 2022, 8(9), 237; https://doi.org/10.3390/jimaging8090237 - 01 Sep 2022
Cited by 4 | Viewed by 2383
Abstract
In this work, the performance of five deep learning architectures in classifying COVID-19 in a multi-class set-up is evaluated. The classifiers were built on pretrained ResNet-50, ResNet-50r (with kernel size 5×5 in the first convolutional layer), DenseNet-121, MobileNet-v3 and the state-of-the-art CaiT-24-XXS-224 (CaiT) transformer. The cross-entropy and weighted cross-entropy losses were minimised with Adam and AdamW. In total, 20 experiments were conducted, each with 10 repetitions, and the following metrics were obtained: accuracy (Acc), balanced accuracy (BA), F1 and F2 from the general Fβ macro score, Matthews Correlation Coefficient (MCC), sensitivity (Sens) and specificity (Spec), followed by bootstrapping. The performance of the classifiers was compared using the Friedman–Nemenyi test. The results show that less complex architectures such as ResNet-50, ResNet-50r and DenseNet-121 were able to achieve better generalization, with rankings of 1.53, 1.71 and 3.05 for the Matthews Correlation Coefficient, respectively, while MobileNet-v3 and CaiT obtained rankings of 3.72 and 5.0, respectively.
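
As a rough illustration of the training setup described above, the following sketch builds one of the five classifiers (a pretrained ResNet-50 with its head replaced for three classes) and minimises a weighted cross-entropy with AdamW in PyTorch. The class weights, learning rate, and dummy batch are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # COVID-19, pneumonia, healthy

# Pretrained ResNet-50 with the classifier head replaced for 3 classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Weighted cross-entropy to counter class imbalance (these weights are
# made up; the paper derives its weights from the dataset).
class_weights = torch.tensor([1.0, 1.2, 0.8])
criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of CT slices.
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, NUM_CLASSES, (4,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```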

11 pages, 603 KiB  
Article
A New LBP Variant: Corner Rhombus Shape LBP (CRSLBP)
by Ibtissam Al Saidi, Mohammed Rziza and Johan Debayle
J. Imaging 2022, 8(7), 200; https://doi.org/10.3390/jimaging8070200 - 17 Jul 2022
Cited by 4 | Viewed by 1485
Abstract
The local binary pattern (LBP) is a straightforward, dependable, and effective method for extracting relevant local information from images. However, because it only uses the sign information in the local region, the LBP is ineffective at capturing discriminating characteristics. Furthermore, most LBP variants select a region with one specific center pixel to fill all neighborhoods. In this paper, a new LBP variant is proposed for texture classification, known as corner rhombus-shape LBP (CRSLBP). In the CRSLBP approach, we first use three methods to threshold the pixel’s neighbors and center, obtaining four center pixels by using sign and magnitude information with respect to a chosen region of an even block. This helps determine not just the relationship between the neighbors and the pixel center but also that between the center and the neighbor pixels of the neighborhood center pixels. We evaluated the performance of our descriptor using four challenging texture databases: Outex (TC10, TC12), Brodatz, KTH-TIPS2b, and UMD. Extensive experiments demonstrated the effectiveness and robustness of our descriptor in comparison with the available state of the art (SOTA).
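
To make the starting point concrete, the sketch below implements the classic sign-only 3×3 LBP in NumPy; this is the baseline descriptor that CRSLBP extends with magnitude information and four rhombus-placed center pixels (the CRSLBP thresholding itself is not reproduced here).

```python
import numpy as np

def basic_lbp(img):
    """Classic 3x3 sign-based LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]  # center pixels
    codes = np.zeros(c.shape, dtype=np.int64)
    # Eight neighbours in a fixed circular order, one bit each.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes += (neigh >= c).astype(np.int64) << bit
    return codes  # values in [0, 255]

# Texture descriptor: normalized histogram of the 256 possible codes.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
hist = np.bincount(basic_lbp(img).ravel(), minlength=256) / (62 * 62)
print(hist.sum())  # 1.0
```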

16 pages, 562 KiB  
Article
Dynamic Label Assignment for Object Detection by Combining Predicted IoUs and Anchor IoUs
by Tianxiao Zhang, Bo Luo, Ajay Sharda and Guanghui Wang
J. Imaging 2022, 8(7), 193; https://doi.org/10.3390/jimaging8070193 - 11 Jul 2022
Cited by 6 | Viewed by 1949
Abstract
Label assignment plays a significant role in modern object detection models. Detection models may yield totally different performance with different label assignment strategies. For anchor-based detection models, the IoU (Intersection over Union) threshold between the anchors and their corresponding ground-truth bounding boxes is the key element, since positive and negative samples are divided by the IoU threshold. Early object detectors simply utilize a fixed threshold for all training samples, while recent detection algorithms focus on adaptive thresholds based on the distribution of the IoUs to the ground-truth boxes. In this paper, we introduce a simple yet effective approach that performs label assignment dynamically based on the training status, using the model's predictions. By introducing the predictions into label assignment, more high-quality samples with higher IoUs to the ground-truth objects are selected as positive samples, which reduces the discrepancy between the classification scores and the IoU scores and generates more high-quality bounding boxes. Our approach improves the performance of detection models through the adaptive label assignment algorithm and lowers the bounding box losses for positive samples, indicating that more samples with higher-quality predicted boxes are selected as positives.
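
The central trick, folding the quality of the boxes currently predicted from each anchor into the positive/negative split, can be sketched in a few lines. The blend weight and threshold below are illustrative assumptions, not the paper's actual schedule, which adapts with training status.

```python
import numpy as np

def dynamic_labels(anchor_ious, predicted_ious, alpha=0.5, thr=0.5):
    """Toy positive/negative split mixing anchor and predicted IoUs.

    anchor_ious:    IoU of each anchor with its matched ground truth.
    predicted_ious: IoU of the box decoded from that anchor's current
                    prediction with the same ground truth.
    """
    combined = alpha * anchor_ious + (1.0 - alpha) * predicted_ious
    return combined >= thr  # boolean mask of positive samples

# An anchor with mediocre static IoU (0.45) but a good current
# prediction (0.60) is promoted to a positive sample.
print(dynamic_labels(np.array([0.45, 0.55, 0.30]),
                     np.array([0.60, 0.40, 0.20])))
```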

15 pages, 14037 KiB  
Article
PyPore3D: An Open Source Software Tool for Imaging Data Processing and Analysis of Porous and Multiphase Media
by Amal Aboulhassan, Francesco Brun, George Kourousias, Gabriele Lanzafame, Marco Voltolini, Adriano Contillo and Lucia Mancini
J. Imaging 2022, 8(7), 187; https://doi.org/10.3390/jimaging8070187 - 07 Jul 2022
Cited by 5 | Viewed by 2916
Abstract
In this work, we propose the software library PyPore3D, an open source solution for data processing of large 3D/4D tomographic data sets. PyPore3D is based on the Pore3D core library, developed through a collaboration between Elettra Sincrotrone (Trieste) and the University of Trieste (Italy). The Pore3D core library is built with a separation between the user interface and the backend filtering, segmentation, morphological processing, skeletonisation and analysis functions. The current Pore3D version relies on the closed-source IDL framework to call the backend functions and enables simple scripting procedures for streamlined data processing. PyPore3D addresses this limitation by offering a fully open source solution that provides Python wrappers to the Pore3D C library functions. The PyPore3D library allows users to fully use the Pore3D core library as an open source solution under Python and Jupyter Notebooks. PyPore3D both removes the intrinsic limitations of licensed platforms (e.g., closed source code and export restrictions) and adds, when needed, the flexibility of integrating the scientific libraries available for Python (SciPy, TensorFlow, etc.).
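
PyPore3D ships its own generated bindings; purely to illustrate the wrapping pattern the abstract describes, the sketch below shows how a 3D C filtering routine is typically exposed to NumPy through ctypes. The library path, function name, and signature here are invented for illustration and are not the actual PyPore3D API.

```python
import ctypes
import numpy as np

def load_filter(lib_path="libpore3d.so"):
    """Load a (hypothetical) C routine and declare its signature."""
    lib = ctypes.CDLL(lib_path)
    # void p3dMedianFilter(unsigned char *in, unsigned char *out,
    #                      int dimx, int dimy, int dimz) -- assumed.
    lib.p3dMedianFilter.argtypes = [
        ctypes.POINTER(ctypes.c_ubyte), ctypes.POINTER(ctypes.c_ubyte),
        ctypes.c_int, ctypes.c_int, ctypes.c_int,
    ]
    lib.p3dMedianFilter.restype = None
    return lib

def median_filter_3d(lib, volume):
    """Run the C filter on a C-contiguous uint8 volume, NumPy in/out."""
    volume = np.ascontiguousarray(volume, dtype=np.uint8)
    out = np.empty_like(volume)
    lib.p3dMedianFilter(
        volume.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
        out.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
        *(ctypes.c_int(d) for d in volume.shape),
    )
    return out
```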

18 pages, 1345 KiB  
Article
Embedded Quantitative MRI T1ρ Mapping Using Non-Linear Primal-Dual Proximal Splitting
by Matti Hanhela, Antti Paajanen, Mikko J. Nissi and Ville Kolehmainen
J. Imaging 2022, 8(6), 157; https://doi.org/10.3390/jimaging8060157 - 31 May 2022
Cited by 3 | Viewed by 1759
Abstract
Quantitative MRI (qMRI) methods reduce the subjectivity of clinical MRI by providing numerical values on which diagnostic assessment or predictions of tissue properties can be based. However, qMRI measurements typically take more time than anatomical imaging because they require multiple measurements with varying contrasts for, e.g., relaxation time mapping. To reduce the scanning time, undersampled data may be combined with compressed sensing (CS) reconstruction techniques. Typical CS reconstructions first reconstruct a complex-valued set of images corresponding to the varying contrasts, followed by a non-linear signal model fit to obtain the parameter maps. We propose a direct, embedded reconstruction method for T1ρ mapping. The proposed method capitalizes on a known signal model to directly reconstruct the desired parameter map using a non-linear optimization model. The proposed reconstruction method also allows directly regularizing the parameter map of interest and greatly reduces the number of unknowns in the reconstruction, which are key factors in the performance of the reconstruction method. We test the proposed model using simulated radially sampled data from a 2D phantom and 2D Cartesian ex vivo measurements of a mouse kidney specimen. We compare the embedded reconstruction model to two CS reconstruction models and, in the Cartesian test case, also to the direct inverse fast Fourier transform. The T1ρ RMSE of the embedded reconstructions was reduced by 37–76% compared to the CS reconstructions when using undersampled simulated data, with the reduction growing with larger acceleration factors. The proposed embedded model also outperformed the reference methods on the experimental test case, especially providing robustness at higher acceleration factors.
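
For context, the conventional two-step pipeline that this embedded method replaces ends with a per-voxel non-linear fit of a monoexponential T1ρ decay, S(TSL) = S0 · exp(-TSL / T1ρ). The sketch below shows that fit with SciPy on simulated data for a single voxel; the spin-lock times and noise level are illustrative assumptions, not the paper's phantom values.

```python
import numpy as np
from scipy.optimize import curve_fit

def t1rho_model(tsl, s0, t1rho):
    """Monoexponential decay: S(TSL) = S0 * exp(-TSL / T1rho)."""
    return s0 * np.exp(-tsl / t1rho)

# Simulated spin-lock times (ms) and a noisy single-voxel decay.
tsl = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
rng = np.random.default_rng(0)
signal = t1rho_model(tsl, 1.0, 50.0) + 0.01 * rng.standard_normal(tsl.size)

# Conventional per-voxel fit: the post-reconstruction step that the
# embedded method folds directly into a single non-linear
# reconstruction of the T1rho map.
(s0_hat, t1rho_hat), _ = curve_fit(t1rho_model, tsl, signal, p0=(1.0, 40.0))
print(f"estimated T1rho: {t1rho_hat:.1f} ms")
```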