Adaptive Optical and Computational Imaging towards Biomedical Application

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Optics and Lasers".

Deadline for manuscript submissions: closed (10 November 2021) | Viewed by 14219

Special Issue Editors


Guest Editor: Dr. Nektarios Koukourakis
Competence Center for Biomedical Laser Systems, Faculty of Electrical and Computer Engineering, TU Dresden, 01062 Dresden, Germany
Interests: quantitative phase imaging; holography; wavefront shaping; computational and adaptive microscopy; fiber-based optical communication; Brillouin microscopy; optical traps; microrobots

Guest Editor: Dr. Robert Kuschmierz
Chair of Measurement and Sensor System Technique, Faculty of Electrical and Computer Engineering, TU Dresden, 01062 Dresden, Germany
Interests: optical process metrology; interferometry; holography; wavefront shaping; speckle metrology; lens-less endoscopy

Special Issue Information

Dear Colleagues,

We jointly invite you to submit a paper to this Special Issue dedicated to adaptive optical and computational imaging towards biomedical applications. The core of this call relates to techniques employing adaptive optical devices, such as adaptive lenses, digital micromirror devices, or spatial light modulators (SLMs), to enable tailored illumination, aberration correction, wavefront shaping, fast flexible scanning, and point spread function (PSF) engineering. The scope further includes microendoscopic techniques based on fibers that are flexibly controlled by SLMs. Computational imaging techniques, such as digital holography, phase retrieval, deconvolution, ptychography, and deep-learning-based approaches, are also addressed, among others. Applications can range from excitation in one-, two-, and multiphoton microscopy and optogenetics to super-resolution techniques, dynamic imaging, cell manipulation, and deep-tissue applications. Novel approaches and numerical tools to recover or process image information will form a further part of this issue. We hope that you find the content of this call relevant to your research and will consider publishing your work in this Special Issue.

Kind regards,

Dr. Nektarios Koukourakis
Dr. Robert Kuschmierz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Wavefront shaping
  • Adaptive optics
  • Computational imaging
  • Deep learning
  • Digital holography

Published Papers (6 papers)

Research

14 pages, 4307 KiB  
Article
Two-Wavelength Computational Holography for Aberration-Corrected Simultaneous Optogenetic Stimulation and Inhibition of In Vitro Biological Samples
by Felix Schmieder, Lars Büttner, Tony Hanitzsch, Volker Busskamp and Jürgen W. Czarske
Appl. Sci. 2022, 12(5), 2283; https://doi.org/10.3390/app12052283 - 22 Feb 2022
Cited by 1 | Viewed by 1411
Abstract
Optogenetics is a versatile toolset for the functional investigation of excitable cells such as neurons and cardiomyocytes in vivo and in vitro. While monochromatic illumination of these cells for either stimulation or inhibition already enables a wide range of studies, the combination of activation and silencing in one setup facilitates new experimental interrogation protocols. In this work, we present a setup for the simultaneous holographic stimulation and inhibition of multiple cells in vitro. The system is based on two fast ferroelectric liquid crystal spatial light modulators with frame rates of up to 1.7 kHz. Thereby, we are able to illuminate up to about 50 single spots with better than cellular resolution and without crosstalk, perfectly suited for refined network analysis schemes. System-inherent aberrations are corrected by applying an iterative optimization scheme based on Zernike polynomials. These are superposed on the same spatial light modulators that display the pattern-generating holograms, hence no further adaptive optical elements are needed for aberration correction. A near-diffraction-limited spatial resolution is achieved over the whole field of view, enabling subcellular optogenetic experiments by just choosing an appropriate microscope objective. The setup can pave the way for a multitude of optogenetic experiments, in particular with cardiomyocytes and neural networks. Full article
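
The aberration-correction step described in this abstract lends itself to a brief illustration. The following is a minimal sketch, not the authors' code: it assumes a 512-pixel phase-only SLM, made-up spot positions, and made-up Zernike coefficients, and simply adds a Zernike correction phase to a multi-spot hologram generated from superposed blazed gratings before display on the same SLM.

```python
# Minimal sketch (illustrative parameters only): superposing a Zernike
# aberration-correction phase onto a multi-spot hologram on one phase-only SLM.
import numpy as np
from math import factorial

N = 512                                    # SLM pixels (illustrative)
y, x = np.mgrid[-1:1:1j * N, -1:1:1j * N]  # normalized pupil coordinates
rho, theta = np.hypot(x, y), np.arctan2(y, x)

def zernike(n, m, rho, theta):
    """Zernike polynomial Z_n^m on the unit disk (unnormalized)."""
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R = R + c * rho ** (n - 2 * k)
    return R * (np.cos(m * theta) if m >= 0 else np.sin(-m * theta))

# Pattern-generating hologram: superposition of blazed gratings, one per
# target spot (spot positions given as spatial frequencies, illustrative).
spots = [(20, 0), (-10, 15), (5, -25)]
field = sum(np.exp(1j * 2 * np.pi * (fx * x + fy * y)) for fx, fy in spots)
hologram = np.angle(field)

# Aberration correction: weighted sum of low-order Zernike modes; in the paper
# the coefficients come from an iterative optimization, here they are made up.
coeffs = {(2, -2): 0.8, (2, 2): -0.5, (3, 1): 0.3}
correction = sum(a * zernike(n, m, rho, theta) for (n, m), a in coeffs.items())

# Both phases are displayed on the same SLM, wrapped to [0, 2*pi).
slm_phase = np.mod(hologram + correction, 2 * np.pi)
slm_phase[rho > 1] = 0                     # restrict to the unit pupil
```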

14 pages, 4510 KiB  
Article
Optical Diffraction Tomography Using Nearly In-Line Holography with a Broadband LED Source
by Ahmed B. Ayoub, Abhijit Roy and Demetri Psaltis
Appl. Sci. 2022, 12(3), 951; https://doi.org/10.3390/app12030951 - 18 Jan 2022
Cited by 5 | Viewed by 2832
Abstract
We present optical tomography methods for 3D refractive index reconstruction of weakly scattering objects using LED light sources. We are able to record holograms by minimizing the optical path difference between the signal and reference beams while separating the scattered field from its twin image. We recorded multiple holograms by illuminating the LEDs sequentially and reconstructed the 3D refractive index distribution of the sample. The reconstructions show a high signal-to-noise ratio, with speckle artifacts strongly suppressed thanks to the partially incoherent illumination of the LEDs. Results from combining different illumination wavelengths are also described, demonstrating higher acquisition speed. Full article
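
As a generic illustration of one step in such a pipeline, the sketch below extracts the complex scattered field from a hologram carrying a small carrier tilt by Fourier-filtering one sideband, which separates it from the twin image. This is a common textbook approach rather than necessarily the authors' exact procedure, and the carrier position and filter radius are assumptions.

```python
# Minimal sketch (generic sideband filtering, assumed parameters): recovering
# the complex scattered field from a nearly in-line hologram with a small carrier.
import numpy as np

def extract_scattered_field(hologram, carrier, radius):
    """hologram: 2D intensity image; carrier: (fx, fy) sideband offset from the
    spectrum centre in pixels; radius: filter radius in pixels (assumptions)."""
    ny, nx = hologram.shape
    H = np.fft.fftshift(np.fft.fft2(hologram))
    yy, xx = np.mgrid[:ny, :nx]
    cy, cx = ny // 2 + carrier[1], nx // 2 + carrier[0]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2   # keep the +1 order only
    # Shift the selected order back to the spectrum centre to remove the carrier
    H_centred = np.roll(H * mask, (-carrier[1], -carrier[0]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(H_centred))        # complex scattered field

holo = np.random.rand(512, 512)             # placeholder hologram for illustration
field = extract_scattered_field(holo, carrier=(40, 0), radius=30)
```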

11 pages, 862 KiB  
Article
Assignment of Focus Position with Convolutional Neural Networks in Adaptive Lens Based Axial Scanning for Confocal Microscopy
by Katharina Schmidt, Nektarios Koukourakis and Jürgen W. Czarske
Appl. Sci. 2022, 12(2), 661; https://doi.org/10.3390/app12020661 - 10 Jan 2022
Cited by 3 | Viewed by 1334
Abstract
Adaptive lenses offer axial scanning without mechanical translation and are thus promising replacements for mechanical-movement-based axial scanning in microscopy. The scan is accomplished by sweeping the applied voltage. However, the relation between the applied voltage and the resulting axial focus position is ambiguous: adaptive lenses suffer from hysteresis effects, and their behaviour depends on environmental conditions. This is a particular hurdle when complex adaptive lenses are used that offer additional functionalities and are controlled with more degrees of freedom. In such cases, a common approach is to iterate the voltage and monitor the adaptive lens. Here, we introduce an alternative approach that provides a single-shot estimate of the current axial focus position using a convolutional neural network. We use experimental data from our custom confocal microscope for training and validation. This leads to fast scanning without photobleaching of the sample and opens the door to automated, aberration-free smart microscopy. Applications in different types of laser-scanning microscopes are possible, although the training procedure of the neural network may need to be adapted for some use cases. Full article
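
A minimal sketch of such a single-shot regressor is given below. The architecture, image size, and training details are assumptions, not the published network: a small CNN maps one confocal image to a scalar estimate of the axial focus position.

```python
# Minimal sketch (assumed architecture and shapes): regressing the current
# axial focus position from a single confocal image instead of iterating
# the adaptive-lens voltage.
import torch
import torch.nn as nn

class FocusRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)        # scalar axial position (e.g. in um)

    def forward(self, x):                   # x: (batch, 1, H, W) confocal images
        return self.head(self.features(x).flatten(1))

# Training sketch: pairs of (image, reference z-position) measured once on the
# microscope; afterwards a single captured image suffices for the estimate.
model = FocusRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
images = torch.randn(8, 1, 128, 128)        # placeholder batch
z_true = torch.randn(8, 1)                  # placeholder reference positions
loss = loss_fn(model(images), z_true)
loss.backward()
opt.step()
```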

18 pages, 4166 KiB  
Article
GMANet: Gradient Mask Attention Network for Finding Clearest Human Fecal Microscopic Image in Autofocus Process
by Xiangzhou Wang, Lin Liu, Xiaohui Du, Jing Zhang, Guangming Ni and Juanxiu Liu
Appl. Sci. 2021, 11(21), 10293; https://doi.org/10.3390/app112110293 - 2 Nov 2021
Cited by 1 | Viewed by 1284
Abstract
The intelligent recognition of formed elements in microscopic images is a research hotspot. Whether the microscopic image is clear or blurred is the key factor affecting the recognition accuracy. Microscopic images of human feces contain numerous items, such as undigested food, epithelium, bacteria and other formed elements, leading to a complex image composition. Consequently, traditional image quality assessment (IQA) methods cannot accurately assess the quality of fecal microscopic images or even identify the clearest image in the autofocus process. In response to this difficulty, we propose a blind IQA method based on a deep convolutional neural network (CNN), namely GMANet. The gradient information of the microscopic image is introduced into a low-level convolutional layer of the CNN as a mask attention mechanism to force high-level features to pay more attention to sharp regions. Experimental results show that the proposed network has good consistency with human visual properties and can accurately identify the clearest microscopic image in the autofocus process. Our proposed model, trained on fecal microscopic images, can be directly applied to the autofocus process of leucorrhea and blood samples without additional transfer learning. Our study is valuable for the autofocus task of microscopic images with complex compositions. Full article
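
The gradient-mask attention idea can be sketched roughly as follows. This is a toy network with assumed layer sizes, not the published GMANet: fixed Sobel filters estimate the gradient magnitude of the input frame, and that map modulates low-level features so that the predicted quality score is driven mainly by sharp regions.

```python
# Minimal sketch (toy model, assumed sizes): gradient magnitude of the input
# image used as an attention mask on low-level CNN features before scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientMaskAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Fixed Sobel kernels to estimate the image gradient magnitude
        sx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("sobel", torch.stack([sx, sx.t()]).unsqueeze(1))
        self.low = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.high = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.score = nn.Linear(64, 1)        # predicted sharpness/quality score

    def forward(self, x):                    # x: (batch, 1, H, W) grayscale frame
        g = F.conv2d(x, self.sobel, padding=1)
        mask = torch.sigmoid(g.pow(2).sum(1, keepdim=True).sqrt())  # soft gradient mask
        feats = self.low(x) * mask           # attention applied to low-level features
        return self.score(self.high(feats).flatten(1))

scores = GradientMaskAttention()(torch.randn(4, 1, 256, 256))  # one score per frame
```

In the autofocus loop, the frame with the highest predicted score would be selected as the clearest image of the stack.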

12 pages, 6511 KiB  
Article
Complex Wavefront Shaping through a Multi-Core Fiber
by Jiawei Sun, Nektarios Koukourakis and Jürgen W. Czarske
Appl. Sci. 2021, 11(9), 3949; https://doi.org/10.3390/app11093949 - 27 Apr 2021
Cited by 11 | Viewed by 2655
Abstract
Wavefront shaping through a multi-core fiber (MCF) is becoming an attractive method for endoscopic imaging and optical cell manipulation on a chip. However, the discrete distribution and the low number of cores induce pixelated phase modulation, an obstacle to delivering complex light field distributions through MCFs. We demonstrate a novel phase retrieval algorithm named Core–Gerchberg–Saxton (Core-GS), which employs the captured core distribution map to retrieve a tailored modulation hologram for the targeted intensity distribution in the distal far field. Complex light fields are reconstructed through MCFs with fidelities of up to 96.2%. Closed-loop control with experimental feedback demonstrates the capability of the Core-GS algorithm for precise intensity manipulation of the reconstructed light field. Core-GS provides a robust way of wavefront shaping through MCFs and helps the MCF become a vital waveguide in endoscopic and lab-on-a-chip applications. Full article
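
As a rough illustration of the idea, the sketch below runs a generic Gerchberg–Saxton loop with the fiber-facet amplitude constrained to a core distribution map. The core layout, grid size, and target pattern are placeholders, and the published Core-GS algorithm involves refinements not shown here.

```python
# Minimal sketch (generic Gerchberg-Saxton with a core-distribution constraint,
# illustrative layout and target): iterate between the MCF facet and its far
# field, forcing the facet amplitude onto the core map and the far-field
# amplitude onto the target pattern.
import numpy as np

N, n_iter = 256, 100
rng = np.random.default_rng(0)

# Hypothetical core distribution map: ones at core centres, zero elsewhere
core_map = np.zeros((N, N))
centres = rng.integers(20, N - 20, size=(300, 2))
core_map[centres[:, 0], centres[:, 1]] = 1.0

target = np.zeros((N, N))                   # desired far-field intensity pattern
target[96:160, 96:160] = 1.0
target_amp = np.sqrt(target)

phase = rng.uniform(0, 2 * np.pi, (N, N))   # random initial facet phase
for _ in range(n_iter):
    facet = core_map * np.exp(1j * phase)              # amplitude constraint: cores
    far = np.fft.fftshift(np.fft.fft2(facet))          # propagate to the far field
    far = target_amp * np.exp(1j * np.angle(far))      # amplitude constraint: target
    back = np.fft.ifft2(np.fft.ifftshift(far))         # propagate back to the facet
    phase = np.angle(back)                             # keep phase, discard amplitude

core_phases = phase[centres[:, 0], centres[:, 1]]      # per-core phases to display
```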

16 pages, 3844 KiB  
Article
Trichomonas vaginalis Detection Using Two Convolutional Neural Networks with Encoder-Decoder Architecture
by Xiangzhou Wang, Xiaohui Du, Lin Liu, Guangming Ni, Jing Zhang, Juanxiu Liu and Yong Liu
Appl. Sci. 2021, 11(6), 2738; https://doi.org/10.3390/app11062738 - 18 Mar 2021
Cited by 2 | Viewed by 3762
Abstract
Diagnosis of Trichomonas vaginalis infection is one of the most important factors in the routine examination of leucorrhea. Owing to the motion characteristics of Trichomonas vaginalis, a viable detection approach is to record videos of leucorrhea samples with a microscopic camera and to apply video object detection algorithms. Most Trichomonas vaginalis are defocused and appear as shadow regions in microscopic images, and it is hard to recognize the movement of these shadow regions with traditional video object detection algorithms. To solve this problem, we propose two convolutional neural networks based on an encoder-decoder architecture. The first network learns the differences between frames and uses the image and optical-flow information of three consecutive frames as input to perform rough detection. The second network corrects the coarse contours and uses the image information and the rough detection result of the current frame as input to perform fine detection. With these two networks applied, the mean intersection over union for Trichomonas vaginalis reaches 72.09% on test videos. The proposed networks can effectively detect defocused Trichomonas vaginalis and suppress false alarms caused by the motion of formed elements or impurities. Full article
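
A schematic two-stage pipeline along these lines can be sketched as follows. The encoder-decoders are toy models with assumed channel counts, not the published networks: the first consumes three consecutive frames plus their optical flow to produce a rough mask, and the second refines it from the current frame and that rough result.

```python
# Minimal sketch (toy encoder-decoders, assumed channel counts): rough
# detection from frames + optical flow, followed by refinement.
import torch
import torch.nn as nn

def encoder_decoder(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
    )

# Stage 1: three grayscale frames plus two optical-flow fields (u, v each)
rough_net = encoder_decoder(in_ch=3 + 4)
# Stage 2: current frame plus the rough mask
refine_net = encoder_decoder(in_ch=2)

frames = torch.randn(1, 3, 256, 256)        # frames t-1, t, t+1 (placeholder)
flow = torch.randn(1, 4, 256, 256)          # flows (t-1 to t) and (t to t+1)
rough = rough_net(torch.cat([frames, flow], dim=1))
fine = refine_net(torch.cat([frames[:, 1:2], rough], dim=1))   # refined mask
```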
