J. Imaging, Volume 10, Issue 7 (July 2024) – 7 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
21 pages, 11698 KiB  
Article
GOYA: Leveraging Generative Art for Content-Style Disentanglement
by Yankun Wu, Yuta Nakashima and Noa Garcia
J. Imaging 2024, 10(7), 156; https://doi.org/10.3390/jimaging10070156 - 26 Jun 2024
Viewed by 211
Abstract
The content-style duality is a fundamental element in art. These two dimensions can be easily differentiated by humans: content refers to the objects and concepts in an artwork, and style to the way it looks. Yet, we have not found a way to fully capture this duality with visual representations. While style transfer captures the visual appearance of a single artwork, it fails to generalize to larger sets. Similarly, supervised classification-based methods are impractical since the perception of style lies on a spectrum and not on categorical labels. We thus present GOYA, which captures the artistic knowledge of a cutting-edge generative model for disentangling content and style in art. Experiments show that GOYA explicitly learns to represent the two artistic dimensions (content and style) of the original artistic image, paving the way for leveraging generative models in art analysis. Full article
20 pages, 2399 KiB  
Article
Impact of Color Space and Color Resolution on Vehicle Recognition Models
by Sally Ghanem and John H. Holliman II
J. Imaging 2024, 10(7), 155; https://doi.org/10.3390/jimaging10070155 - 26 Jun 2024
Viewed by 170
Abstract
In this study, we analyze both linear and nonlinear color mappings by training on versions of a curated dataset collected in a controlled campus environment. We experiment with color space and color resolution to assess model performance in vehicle recognition tasks. Color encodings can be designed in principle to highlight certain vehicle characteristics or compensate for lighting differences when assessing potential matches to previously encountered objects. The dataset used in this work includes imagery gathered under diverse environmental conditions, including daytime and nighttime lighting. Experimental results inform expectations for possible improvements with automatic color space selection through feature learning. Moreover, we find there is only a gradual decrease in model performance with degraded color resolution, which suggests that simplified data collection and processing may be sufficient. By focusing on the most critical features, we could see improved model generalization and robustness, as the model becomes less prone to overfitting to noise or irrelevant details in the data. Such a reduction in resolution will also lower computational complexity, leading to quicker training and inference times. Full article
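As a rough illustration of the kind of preprocessing this study varies (the abstract does not specify the exact color spaces or quantization levels used), the sketch below converts an image to alternative color encodings and reduces its color resolution; the file name, chosen color spaces, and bit depths are hypothetical.

```python
import cv2

# Hypothetical input image; the dataset and encodings used in the paper may differ.
img_bgr = cv2.imread("vehicle.jpg")

# Two alternative color mappings: YUV is (approximately) a linear transform of RGB,
# while HSV is a nonlinear one.
img_yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)

def reduce_color_resolution(img, bits_per_channel):
    """Quantize each channel to the given bit depth (illustrative, not the paper's scheme)."""
    step = 2 ** (8 - bits_per_channel)
    return (img // step) * step

img_low = reduce_color_resolution(img_bgr, 4)  # e.g., 4 bits per channel instead of 8
```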
30 pages, 37493 KiB  
Review
What to Expect (and What Not) from Dual-Energy CT Imaging Now and in the Future?
by Roberto García-Figueiras, Laura Oleaga, Jordi Broncano, Gonzalo Tardáguila, Gabriel Fernández-Pérez, Eliseo Vañó, Eloísa Santos-Armentia, Ramiro Méndez, Antonio Luna and Sandra Baleato-González
J. Imaging 2024, 10(7), 154; https://doi.org/10.3390/jimaging10070154 - 26 Jun 2024
Viewed by 201
Abstract
Dual-energy CT (DECT) imaging has broadened the potential of CT imaging by offering multiple postprocessing datasets with a single acquisition at more than one energy level. DECT shows profound capabilities to improve diagnosis based on its superior material differentiation and its quantitative value. However, the potential of dual-energy imaging remains relatively untapped, possibly due to its intricate workflow and the intrinsic technical limitations of DECT. Knowing the clinical advantages of dual-energy imaging and recognizing its limitations and pitfalls is necessary for appropriate clinical use. The aims of this paper are to review the physical and technical bases of DECT acquisition and analysis, to discuss the advantages and limitations of DECT in different clinical scenarios, to review the technical constraints in material labeling and quantification, and to evaluate the cutting-edge applications of DECT imaging, including artificial intelligence, qualitative and quantitative imaging biomarkers, and DECT-derived radiomics and radiogenomics. Full article
18 pages, 880 KiB  
Article
A Study on Data Selection for Object Detection in Various Lighting Conditions for Autonomous Vehicles
by Hao Lin, Ashkan Parsi, Darragh Mullins, Jonathan Horgan, Enda Ward, Ciaran Eising, Patrick Denny, Brian Deegan, Martin Glavin and Edward Jones
J. Imaging 2024, 10(7), 153; https://doi.org/10.3390/jimaging10070153 - 22 Jun 2024
Viewed by 212
Abstract
In recent years, significant advances have been made in the development of Advanced Driver Assistance Systems (ADAS) and other technologies for autonomous vehicles. Automated object detection is a crucial component of autonomous driving; however, there are still known issues that affect its performance. For automotive applications, object detection algorithms are required to perform at a high standard in all lighting conditions; a major problem, however, is poor performance in low-light conditions due to objects being less visible. This study considers the impact of training data composition on object detection performance in low-light conditions. In particular, it evaluates the effect of different combinations of images of outdoor scenes, from different times of day, on the performance of deep neural networks, and considers the different challenges encountered during the training of a neural network. Through experiments with a widely used public database, as well as a number of commonly used object detection architectures, we show that more robust performance can be obtained with an appropriate balance of classes and illumination levels in the training data. The results also highlight the potential of adding images obtained in dusk and dawn conditions for improving object detection performance in both day and night conditions. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
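The central point of the abstract is that the balance of illumination conditions in the training data matters. A minimal sketch of composing such a balanced training set is given below; the ratios, list names, and sampling scheme are illustrative assumptions, not the splits evaluated in the paper.

```python
import random

def compose_training_set(day_imgs, dusk_dawn_imgs, night_imgs,
                         ratios=(0.5, 0.25, 0.25), total=10_000, seed=0):
    """Sample a training set with a chosen balance of illumination conditions.

    The ratios and total size here are hypothetical; the paper compares several
    combinations of day/dusk/dawn/night imagery rather than one fixed split.
    """
    rng = random.Random(seed)
    counts = [int(r * total) for r in ratios]
    return (rng.sample(day_imgs, counts[0])
            + rng.sample(dusk_dawn_imgs, counts[1])
            + rng.sample(night_imgs, counts[2]))
```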
13 pages, 38681 KiB  
Article
Efficient Wheat Head Segmentation with Minimal Annotation: A Generative Approach
by Jaden Myers, Keyhan Najafian, Farhad Maleki and Katie Ovens
J. Imaging 2024, 10(7), 152; https://doi.org/10.3390/jimaging10070152 - 21 Jun 2024
Viewed by 256
Abstract
Deep learning models have been used for a variety of image processing tasks. However, most of these models are developed through supervised learning approaches, which rely heavily on the availability of large-scale annotated datasets. Developing such datasets is tedious and expensive. In the absence of an annotated dataset, synthetic data can be used for model development; however, due to the substantial differences between simulated and real data, a phenomenon referred to as domain gap, the resulting models often underperform when applied to real data. In this research, we aim to address this challenge by first computationally simulating a large-scale annotated dataset and then using a generative adversarial network (GAN) to fill the gap between simulated and real images. This approach results in a synthetic dataset that can be effectively utilized to train a deep-learning model. Using this approach, we developed a realistic annotated synthetic dataset for wheat head segmentation. This dataset was then used to develop a deep-learning model for semantic segmentation. The resulting model achieved a Dice score of 83.4% on an internal dataset and Dice scores of 79.6% and 83.6% on two external datasets from the Global Wheat Head Detection datasets. While we proposed this approach in the context of wheat head segmentation, it can be generalized to other crop types or, more broadly, to images with dense, repeated patterns such as those found in cellular imagery. Full article
(This article belongs to the Special Issue Imaging Applications in Agriculture)
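The Dice scores quoted above follow the standard overlap definition, Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary segmentation masks (the function name and epsilon are assumptions, not code from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```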
27 pages, 8795 KiB  
Article
Robust PCA with Lw,* and L2,1 Norms: A Novel Method for Low-Quality Retinal Image Enhancement
by Habte Tadesse Likassa, Ding-Geng Chen, Kewei Chen, Yalin Wang and Wenhui Zhu
J. Imaging 2024, 10(7), 151; https://doi.org/10.3390/jimaging10070151 - 21 Jun 2024
Viewed by 269
Abstract
Nonmydriatic retinal fundus images often suffer from quality issues and artifacts due to ocular or systemic comorbidities, leading to potential inaccuracies in clinical diagnoses. In recent times, deep learning methods have been widely employed to improve retinal image quality. However, these methods often require large datasets and lack robustness in clinical settings. Conversely, the inherent stability and adaptability of traditional unsupervised learning methods, coupled with their reduced reliance on extensive data, render them more suitable for real-world clinical applications, particularly in limited-data contexts with high noise levels or a significant presence of artifacts. However, existing unsupervised learning methods encounter challenges such as sensitivity to noise and outliers, reliance on assumptions like cluster shapes, and difficulties with scalability and interpretability, particularly when utilized for retinal image enhancement. To tackle these challenges, we propose a novel robust PCA (RPCA) method with low-rank sparse decomposition that also integrates affine transformations τi, the weighted nuclear norm, and the L2,1 norm, aiming to overcome the limitations of existing methods and achieve image quality improvements beyond them. We employ the weighted nuclear norm (Lw,*) to assign weights to the singular values of each retinal image and utilize the L2,1 norm to eliminate correlated samples and outliers in the retinal images. Moreover, τi is employed to enhance retinal image alignment, making the new method more robust to variations, outliers, noise, and image blurring. The Alternating Direction Method of Multipliers (ADMM) is used to optimally determine parameters, including τi, by solving an optimization problem; each parameter is addressed separately, harnessing the benefits of ADMM. Our method introduces a novel parameter update approach and significantly improves retinal image quality and the detection of cataracts and diabetic retinopathy. Simulation results confirm our method's superiority over existing state-of-the-art methods across various datasets. Full article
(This article belongs to the Special Issue Advances in Retinal Image Processing)
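A sketch of the general form such a formulation typically takes (the paper's exact objective, constraints, and weighting scheme may differ): the observed images D, aligned by per-image affine transformations τi, are decomposed into a low-rank part L penalized by the weighted nuclear norm and a sparse part S penalized by the L2,1 norm, and the resulting problem is solved with ADMM.

```latex
\min_{L,\,S,\,\tau} \;\; \|L\|_{w,*} + \lambda \|S\|_{2,1}
\quad \text{s.t.} \quad D \circ \tau = L + S,
\qquad
\|L\|_{w,*} = \sum_i w_i\,\sigma_i(L),
\qquad
\|S\|_{2,1} = \sum_j \Big(\sum_i S_{ij}^2\Big)^{1/2}
```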
15 pages, 3248 KiB  
Article
Color Biomimetics in Textile Design: Reproduction of Natural Plant Colors through Instrumental Colorant Formulation
by Isabel Cabral, Amanda Schuch and Fernanda Steffens
J. Imaging 2024, 10(7), 150; https://doi.org/10.3390/jimaging10070150 - 21 Jun 2024
Viewed by 340
Abstract
This paper explores the intersection of colorimetry and biomimetics in textile design, focusing on mimicking natural plant colors in dyed textiles via instrumental colorant formulation. The experimental work was conducted with two polyester substrates dyed with disperse dyes using the exhaustion process. Textiles dyed with different dye colors and concentrations were measured with a spectrophotometer, and a database was created in Datacolor Match Textile software version 2.4.1 (0) with the samples' colorimetric properties. Colorant recipe formulation encompassed the definition and measurement of the pattern colors (across four defined natural plants), the selection of the colorants, and the software calculation of the recipes. After textile dyeing with the recipe with the lowest expected CIELAB color difference (ΔE*) for each pattern color, a comparative analysis was conducted by spectral reflectance and visual assessment. Scanning electron microscopy and white light interferometry were also used to characterize the surface of the natural elements. Samples dyed with the formulated recipes attained good chromatic similarity with the respective natural plants' colors, and the majority of the samples presented ΔE* between 1.5 and 4.0. Additionally, recipe optimization can also be conducted based on the colorimetric evaluation. This research contributes a design framework for biomimicking colors in textile design, establishing a systematic method based on colorimetry and color theory that enables the reproduction of nature's color palette through the effective use of colorants. Full article
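The ΔE* values reported above are CIELAB color differences. Under the classic CIE76 definition (the paper may use a different ΔE formula, such as CMC or CIEDE2000), the computation is simply the Euclidean distance in L*a*b* space; a minimal sketch with hypothetical values:

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples.

    Values between 1.5 and 4.0, as reported for most dyed samples, correspond
    to small though perceptible differences.
    """
    return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

# Example: a dyed sample vs. its natural reference color (hypothetical values).
print(delta_e_cie76((52.3, 18.4, 31.0), (50.9, 20.1, 29.2)))
```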