Article

Single-Pixel Near-Infrared 3D Image Reconstruction in Outdoor Conditions

1 Electronics Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico
2 Computer Science Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico
3 Optics Department, Instituto Nacional de Astrofísica, Óptica y Electrónica—INAOE, Calle Luis Enrique Erro 1, Puebla 72840, Mexico
* Author to whom correspondence should be addressed.
Submission received: 7 April 2022 / Revised: 2 May 2022 / Accepted: 10 May 2022 / Published: 20 May 2022
(This article belongs to the Special Issue Feature Papers of Micromachines in Physics 2022)

Abstract

In the last decade, vision systems have improved their capabilities to capture 3D images in bad weather scenarios. Currently, there exist several techniques for image acquisition in foggy or rainy scenarios that use infrared (IR) sensors. Due to the reduced light scattering at IR wavelengths, it is possible to discriminate the objects in a scene better than in images obtained in the visible spectrum. Therefore, in this work, we propose 3D image generation in foggy conditions using the single-pixel imaging (SPI) active illumination approach in combination with the Time-of-Flight (ToF) technique at a 1550 nm wavelength. For the generation of 3D images, we make use of space-filling projection with compressed sensing (CS-SRCNN) and depth information based on ToF. To evaluate the performance, the vision system included a purpose-built test chamber to simulate different fog and background illumination environments and to calculate the parameters related to image quality.

1. Introduction

Outdoor object visualization under bad weather conditions, such as in the presence of rain, fog, smoke, or under extreme background illumination normally caused by the sun’s glare, is a fundamental computer vision problem to be solved. Over the last decade, the increased efforts in the development of autonomous robots, including self-driving vehicles and Unmanned Aerial Vehicles (UAV) [1], boosted the evolution of vision system technologies used for autonomous navigation and object recognition [2]. However, one of the remaining challenges to be solved is object recognition and 3D spatial reconstruction in fog, rain, or smoke-rich environments [3]. In such scenarios, the performance of vision systems based on RGB (Red–Green–Blue) sensors is limited, usually producing low-contrast images. Depending on the diameter D of the water droplets present in the depicted scene, compared to the wavelength λ of the light to be detected, three regimes for their interaction have been defined: (1) if D << λ, Rayleigh scattering occurs, where photons get scattered almost isotropically; (2) if D ≈ λ, Mie scattering occurs, where photons are scattered asymmetrically; (3) if D >> λ, ray (geometric) optics applies and photons are mostly forward scattered. In this work, Rayleigh scattering will be neglected, since typical diameters of fog and rain droplets are larger than the wavelength of the light.
Enhancing visibility in foggy conditions is an area of great interest. Various studies have been conducted, posing solutions based on processing algorithms and on integrating technologies operating in other spectral bands. These include “defogging” algorithms based on the physical scattering model [4,5], detection algorithms based on the ratio of wavelength residual energy [6], and deep learning algorithms [7,8]. Other solutions use the redundancy of multiple sensor modalities integrated with an RGB camera [9], such as Light Detection and Ranging (LIDAR) [10], Radio Detection and Ranging (RADAR) [11], Time-of-Flight (ToF) [12], or multispectral (MSI) and hyperspectral imaging technologies [13,14]. In the area of single-pixel imaging (SPI) applied to scattering scenarios, some works focused on improving the quality of 2D images [15], using high-pass filters to suppress the effects of temporal variations caused by fog. In 3D reconstruction applications based on compressive ghost imaging, random patterns and photometric stereo vision have been implemented [16].
SPI offers a high capacity of integration with other technologies, such as Time-of-Flight (ToF), and it can be adapted to operate in the NIR spectral band (800–2000 nm), which exhibits lower losses in foggy conditions [17], offering better performance than the visible spectrum. Therefore, based on the advantages provided by SPI, we propose an approach for 3D image reconstruction under foggy conditions that combines NIR-based SPI using the Shape-from-Shading (SFS) method to generate 3D information with the indirect Time-of-Flight (iToF) method applied on four reference points, the information of which is finally embedded into the final 3D generated image using a mapping method. The solution proposed in this work, unlike others based on, e.g., ghost imaging (GI), which require a high number of patterns and long processing times [15], makes use of a robust 3D mesh algorithm that works with space-filling projection and CS-SRCNN, using active illumination at a 1550 nm wavelength.
To evaluate the performance of the proposed 3D NIR-SPI imaging system, we performed three analyses. Firstly, we developed a theoretical model to estimate the maximum distance at which different objects in a scene (under controlled and simulated conditions in a laboratory) could still be distinguished, yielding the maximum measurement range. The model was experimentally validated through the estimation of the extinction coefficient Q_ext. In the second analysis, we compared the different figures of merit obtained for the images reconstructed under different experimental conditions, and finally, we characterized the system by evaluating the maximum image reconstruction time required when different space-filling methods are used. To summarize, the main contributions and limitations of this paper are as follows:
  • The work presents an experimentally validated theoretical model of the system proposed for Single-Pixel Imaging (SPI) if operating in foggy conditions, considering Mie scattering (in environments rich in 3 μm diameter particles), calculating the level of irradiance reaching the photodetector, and the amount of light being reflected from objects for surfaces with different reflection coefficients.
  • Experimental validation of the presented SPI model through measurement of the extinction coefficient [18] to calculate the maximum imaging distance and error.
  • A system based on a combination of NIR-SPI and iToF methods is developed for imaging in foggy environments. We demonstrate an improvement in image recovery using different space-filling methods.
  • We fabricated a test chamber to generate water droplets with 3 μm average diameter and different background illumination levels.
  • We experimentally demonstrated the feasibility of our 3D NIR-SPI system for 3D image reconstruction. To evaluate the image reconstruction quality, the Structural Similarity Index Measure (SSIM), the Peak Signal-to-Noise Ratio (PSNR), Root Mean Square Error (RMSE), and skewness were implemented.

2. Single-Pixel Image Reconstruction

Single-pixel imaging is based on the projection of spatially structured light patterns over an object, generated by either a Spatial Light Modulator (SLM) or a Digital Micro-Mirror Device (DMD), while the reflected light is focused onto a photodetector with no spatial resolution, as shown in Figure 1. The correlation between each pattern Φ_i and the object O is determined by the intensity measurement S_i provided by the photodetector, as expressed in Equation (1) [19], where (x, y) denotes the spatial coordinates, S_i is the i-th single-pixel measurement corresponding to pattern Φ_i, and α is a factor that depends on the optoelectronic response of the photodetector.
S_i = \alpha \sum_{x=1}^{M} \sum_{y=1}^{N} O(x,y)\, \Phi_i(x,y)
The image resolution, defined as the number of columns multiplied by the number of rows (i.e., an array of virtual pixels), and therefore the number of projected patterns, is M × N. Knowing the structure of the illumination patterns and the electrical signal from the single-pixel photodetector, it is possible to recover the image of the objects using several computational algorithms. One of them is expressed by Equation (2) [19], where the reconstructed image is obtained as the sum, over all measurements, of the product of each measured signal S_i and the corresponding structured pattern that originated it.
O(x,y) = \alpha \sum_{i=1}^{M \times N} S_i\, \Phi_i(x,y)
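As a minimal numerical sketch of Equations (1) and (2), the following Python/NumPy fragment simulates the measurement and reconstruction steps; the resolution, the random ±1 pattern set, and the responsivity factor are illustrative placeholders rather than the system's actual parameters (a complete orthogonal pattern set, such as the Hadamard set of Section 2.1, would give exact recovery up to scale):

import numpy as np

M = N = 32                                    # virtual pixel resolution (illustrative)
rng = np.random.default_rng(0)
obj = rng.random((M, N))                      # stand-in for the scene reflectivity O(x, y)
patterns = np.where(rng.random((M * N, M, N)) > 0.5, 1.0, -1.0)   # generic +/-1 patterns

alpha = 1.0                                   # photodetector response factor
# Equation (1): each single-pixel measurement is the correlation of one pattern with the object
S = alpha * np.einsum('kxy,xy->k', patterns, obj)

# Equation (2): correlate the measurements back with the patterns to estimate the image
recon = np.einsum('k,kxy->xy', S, patterns) / (M * N)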

2.1. Generation of the Hadamard Active Illumination Pattern Sequence

To generate the illumination patterns, we employ Hadamard patterns, which are derived from a square matrix H whose components are defined as +1 or −1, with any two distinct rows agreeing in exactly n/2 positions [21]. This matrix H must satisfy the condition H H^T = n I, where H^T is the transpose of the matrix H and I stands for the identity matrix. A matrix of order N can be generated using the Kronecker product defined through Equation (3).
H_{2^k} = \begin{bmatrix} H_{2^{k-1}} & H_{2^{k-1}} \\ H_{2^{k-1}} & -H_{2^{k-1}} \end{bmatrix} = H_2 \otimes H_{2^{k-1}}
H_{2^k} = \begin{bmatrix} H_{1,1} & H_{1,2} & \cdots & H_{1,N} \\ H_{2,1} & H_{2,2} & \cdots & H_{2,N} \\ \vdots & \vdots & \ddots & \vdots \\ H_{M,1} & H_{M,2} & \cdots & H_{M,N} \end{bmatrix}
The matrix size is defined as m × n, with m = 1, 2, 3, …, M and n = 1, 2, 3, …, N. Here, we consider M = N. Once the matrix H is defined, the Hadamard sequence is constructed using Sylvester’s recursive matrix generation principle defined through Equations (3) and (4) [21] to obtain the final Hadamard matrix H_{2^k}(m, n). It is important to take into consideration that if less than 20% of the required m × n Hadamard patterns are used for image reconstruction (see Figure 2a), the quality of the reconstructed image will be poor. Therefore, if the sampling rate is reduced and good image reconstruction is still required, reconstruction methods based on different space-filling curves, such as the Hilbert trajectory (see Figure 2b) [22], the Zig-Zag curve (see Figure 2c) [23], or the Spiral curve (see Figure 2d) [24], must be implemented.
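A minimal sketch of the Sylvester/Kronecker construction of Equations (3) and (4) is given below (Python/NumPy assumed; the chosen order and the reshaping of rows into patterns are illustrative):

import numpy as np

def hadamard(order):
    # Sylvester construction, Equation (3): H_{2^k} = H_2 (Kronecker product) H_{2^(k-1)}
    H = np.array([[1.0]])
    H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
    while H.shape[0] < order:
        H = np.kron(H2, H)
    return H

# For an N x N virtual-pixel image, each row of H_{N^2}, reshaped to N x N, is one +1/-1 pattern.
N = 8
H = hadamard(N * N)                                    # 64 x 64 Hadamard matrix
patterns = H.reshape(N * N, N, N)
assert np.allclose(H @ H.T, (N * N) * np.eye(N * N))   # checks H H^T = n I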

3. NIR-SPI System Test Architecture

In this work, we propose an NIR-SPI vision system based on the structured illumination scheme depicted in Figure 1b, but instead of using an SLM or a DMD to generate the structured illumination patterns, an array of 8 × 8 NIR LEDs is used, emitting radiation at the wavelength λ = 1550 nm. The NIR-SPI system architecture is divided into two stages. The first one controls the elements used to generate images by applying the already explained single-pixel imaging principle: an InGaAs photodetector (FGA015 diode @ 1550 nm) accompanied by the array of 8 × 8 NIR LEDs. Nevertheless, the spatial resolution of the objects in the scene achieved by applying the Shape-From-Shading (SFS) method [25] and the unified reflectance model [26], even with additional mesh enhancement algorithms, is still far from the targeted value of below 10 mm at a distance of 3 m. Thus, four control spots were incorporated into the system illumination array, consisting of NIR lasers with controlled variable light intensity emitting a sinusoidal, time-modulated illumination signal, plus four additional InGaAs photodiode pairs to measure the distance to the objects in the depicted scene with much higher precision, using the indirect Time-of-Flight (iToF) ranging method (see Figure 3a). The second stage of the system is responsible for processing the signals captured by the photodiode module through an analog-to-digital converter (ADC), which is controlled by a Graphics Processing Unit (GPU) (see Figure 3b). The GPU unit (Jetson Nano) is responsible for generating the Hadamard patterns and processing the data converted by the ADC. The 2D/3D image reconstruction is performed using the OMP-GPU algorithm [27].

iTOF System Architecture

The iToF system consists of four pulsed lasers emitting at a 1550 nm peak wavelength (ThorLabs L1550P5DFB), located at an angle of 90° from each other, each emitting 65 ns wide pulses at an optical power of 5 mW (allowed by the IEC eye-safety regulation IEC 62471 [28]). For the time modulation, we use Direct Digital Synthesis (DDS) to generate a sinusoidal signal (CW-iToF). The signal modulation is controlled by biasing the lasers with an amplitude between 0 and 10 V. Each laser emits a time-modulated signal within time windows of 100 μs. The signal reflected by the objects in the scene is detected by the InGaAs photodetector using an integration time of T_int = 150 μs. The voltage signal generated by the photodetectors is then converted by an ADC into a digital signal, which is finally processed by the GPU unit. Table 1 lists the evaluation parameters: the equivalent modulation frequency F_mod-eq, which allows calculating the spatial resolution [29]; the Correlated Power Responsivity PR_corr [29], which defines the maximum amplitude power with respect to the phase delay; the Uncorrelated Power Responsivity PR_uncorr [29], which defines the average power density detected on the photodetector with respect to the background irradiation noise; and the Background Light Rejection Ratio (BLRR), which is the ratio between the sensor’s (uncorrelated) responsivity to background light and the photodetector’s responsivity to correlated, time-modulated light. A high level of PR_corr is required in order to obtain a distance error smaller than the intrinsic distance noise (the constraint is Δδ V_uncorr < σ(Δδ V_corr) [29]). Regarding our proposed system, the obtained BLRR is on the order of −50 dB; i.e., the system can operate in outdoor conditions with 40 klux of background illumination, achieving a maximum distance of 3 m and a spatial resolution of 10 mm.
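For reference, the sketch below illustrates the standard four-bucket CW-iToF demodulation that converts the measured phase delay into distance; it is a generic textbook formulation (with the Table 1 modulation frequency plugged in), not the exact firmware of the built system, and the sample values are synthetic:

import numpy as np

C = 299_792_458.0          # speed of light, m/s
F_MOD = 4.8e6              # equivalent modulation frequency from Table 1, Hz

def cw_itof_distance(a0, a1, a2, a3, f_mod=F_MOD):
    # a0..a3: correlation samples at 0, 90, 180 and 270 degrees of the modulation period
    phi = np.arctan2(a1 - a3, a0 - a2)     # phase delay of the returned sinusoid
    phi = np.mod(phi, 2.0 * np.pi)         # wrap into [0, 2*pi)
    return C * phi / (4.0 * np.pi * f_mod)

# Synthetic check: a target at 1 m delays the modulation by 4*pi*f_mod*d/c radians.
d_true = 1.0
phi_true = 4.0 * np.pi * F_MOD * d_true / C
samples = [np.cos(phi_true - k * np.pi / 2.0) for k in range(4)]
print(cw_itof_distance(*samples))          # ~1.0 m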

4. Fog Chamber Fabrication and Characterization

The chamber used to simulate the fog-rich environment is shown in Figure 4. The chamber has dimensions of 30 cm × 30 cm × 35 cm and has a system that controls the size of droplets based on a piezoelectric humidifier that operates with a frequency of 1.7 MHz to create water droplets with a diameter of 3 μ m, following the relation shown by Equation (5) [30].
d = 0.34\left(\frac{8\pi\sigma}{\rho f^{2}}\right)^{1/3}
Equation (5) describes the droplet diameter as a function of the piezoelectric frequency, where σ stands for the surface tension (in N/m), ρ stands for the density of the liquid used (kg/m³), and f is the electrical frequency applied to the piezoelectric element (Figure 5 shows the water particle diameter as a function of the piezoelectric frequency). The scattering produced by these droplets is given by Equation (6) [31], where Q_sc is the scattering coefficient (calculated using MATLAB [32]), D_density is the density of particles suspended in the medium, and r is the particles’ radius. The chamber allows us to properly test the NIR-SPI system prototype in a controlled environment, simulating the scattering effects under foggy conditions.
\beta = D_{density}\, \pi r^{2} Q_{sc}
The light attenuation caused by a scattering medium can be modeled using the Beer–Lambert–Bouguet law [33], which defines the transmittance as τ = e^{−kz}, where z is the propagation distance and k is the extinction coefficient (Figure 6 shows the change in image contrast with distance). The extinction coefficient takes into account the absorption (α) and scattering (β) coefficients, i.e., k = α + β. The effect of the absorption will be neglected, and the scattering coefficient is determined by measuring the transmittance at different distances inside the chamber by displacing a mirror.
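The fog-chamber relations of this section can be summarized in a short sketch (Python/NumPy; the surface tension, liquid density, droplet density, and Mie efficiency Q_sc below are assumed, illustrative values rather than measured ones):

import numpy as np

SIGMA_WATER = 0.072        # surface tension of water, N/m (nominal)
RHO_WATER = 1000.0         # density of water, kg/m^3

def droplet_diameter(f_hz, sigma=SIGMA_WATER, rho=RHO_WATER):
    # Equation (5): mean droplet diameter produced by ultrasonic atomisation at frequency f
    return 0.34 * (8.0 * np.pi * sigma / (rho * f_hz ** 2)) ** (1.0 / 3.0)

def scattering_coefficient(d_density, radius, q_sc):
    # Equation (6): beta = D_density * pi * r^2 * Q_sc (Q_sc from a Mie code such as [32])
    return d_density * np.pi * radius ** 2 * q_sc

def transmittance(beta, z, alpha_abs=0.0):
    # Beer-Lambert-Bouguet law: tau = exp(-k z), with k = alpha + beta (absorption neglected here)
    return np.exp(-(alpha_abs + beta) * z)

d = droplet_diameter(1.7e6)                              # ~3e-6 m at the 1.7 MHz drive
beta = scattering_coefficient(1e8, d / 2.0, q_sc=2.0)    # assumed droplet density and Q_sc
print(d, beta, transmittance(beta, z=0.30))              # transmittance across the 30 cm chamber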

5. Modeling the Visibility and Contrast

Koschmieder’s law describes the radiance attenuation caused by the surrounding medium between the observer (the sensor) and the objects. Koschmieder’s law allows us to estimate the apparent contrast of an object under different environmental conditions. The total radiance L reaching the observer after being reflected from an object at a distance z is defined by Equation (7) [34].
L(z) = L_{o}\, e^{-\beta z} + L_{f}\left(1 - e^{-\beta z}\right)
In Equation (7), L_o is the radiance of the object at close range, and L_f is the background radiance (noise). The term L_o e^{−βz} corresponds to the amount of light reflected by the object and detected at a distance z, and the term L_f(1 − e^{−βz}) corresponds to the amount of scattered background light detected at a distance z. Thus, as the distance between the observer and the depicted object increases, the observer sees less light reflected from the object and more of the scattered light, causing a loss of image contrast C, defined by Equation (8) [35], where C_o is the contrast at close range. Since the human eye can distinguish an object down to a contrast threshold of 5%, the distance z at which this threshold contrast occurs is given by Equation (9) [36].
C = \frac{L(z) - L_{f}}{L_{f}} = C_{o}\, e^{-\beta z}
z = \frac{-\ln(0.05)}{\beta} \approx \frac{3}{\beta}
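A minimal sketch of Equations (7)-(9) (Python/NumPy; the scattering coefficient used in the example call is illustrative):

import numpy as np

def apparent_radiance(z, L_o, L_f, beta):
    # Equation (7): attenuated object radiance plus scattered background light
    return L_o * np.exp(-beta * z) + L_f * (1.0 - np.exp(-beta * z))

def apparent_contrast(z, C_o, beta):
    # Equation (8): contrast decays exponentially with distance
    return C_o * np.exp(-beta * z)

def visibility_range(beta, threshold=0.05):
    # Equation (9): distance at which the contrast drops to the 5% threshold (~3/beta)
    return -np.log(threshold) / beta

print(visibility_range(beta=10.0))   # e.g. beta = 10 1/m -> ~0.3 m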

Modeling the NIR-SPI System in Presence of Fog

To model the NIR-SPI system performance in foggy conditions (see Appendix A.1, Algorithm A1), we need to determine the number of photons E(N) impinging on the photodetector’s photoactive area, determined by Equation (10) [37].
E(N) = \int_{\lambda_1}^{\lambda_2} \frac{R\, \tau_{lens}\, QE_{\lambda}\, T_{int}\, A_{pixel}\, \lambda}{h\, c\, f_{\#}^{2}} \left( E_{e\lambda\_sun}(\lambda) + \frac{\Phi_{e\lambda}}{\pi z^{2} \tan\alpha_{FOV}} \right) d\lambda
In Equation (10), QE_λ is the photodetector’s quantum efficiency, T_int is the photodetector integration time, A_pixel is the effective photosensitive area, FF is the photodetector’s fill factor, the f# number is defined as f# = f_foc/d_aperture, where f_foc is the focal length of the lenses used and d_aperture is the aperture (opening) diameter, h is Planck’s constant, z is the measured distance, c is the speed of light, τ_lens is the lens transmittance, R is the material reflection index, α_FOV is the focal aperture angle of the emitting LED array, E_{eλ_sun}(λ) is the level of solar irradiation received on the photoactive area of the photodetector, given by Equation (11), and R_pd is the reflectivity of the photodetector surface.
E_{e\lambda\_sun}(\lambda) = L(z)\, A_{pixel}\, R_{pd}
Φ_{eλ} = L(z) G(z) B(z) is the level of irradiation captured by the photodetector, where G(z) = O(z)/z² is the transversal function that depends on the geometrical characteristics of the object, z is the distance, and B(z) is the backscattering contribution to the pixel signal, defined by Equation (12) [31], where G_s is a conversion factor of the sensor, D_k is the effective aperture, and Ω_k is the effective irradiance.
B(z) = \Omega_{k}\, D_{k}\, G_{s}\, L \left(1 - e^{-\beta z}\right)
To estimate the maximum theoretical operating range of the NIR-SPI system, we calculated the point of intersection between E(N), given by Equation (10), and the overall noise floor [38], which yields the maximum distance at which the NIR-SPI system can still operate (see Table 2).
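A numerical sketch of this procedure is shown below (Python/NumPy). It integrates only the object-signal part of Equation (10) and compares it with an assumed noise floor; every parameter value in the example call (spectra, reflectance, optics, noise floor) is an illustrative placeholder, not the calibrated system data behind Table 2:

import numpy as np

H_PLANCK, C_LIGHT = 6.626e-34, 2.998e8

def signal_photons(z, wl, phi_e, R, tau_lens, qe, t_int, a_pixel, f_number, alpha_fov):
    # Object-signal part of Equation (10): photons collected from the reflected flux at distance z
    integrand = (R * tau_lens * qe * t_int * a_pixel * wl / (H_PLANCK * C_LIGHT * f_number ** 2)
                 * phi_e / (np.pi * z ** 2 * np.tan(alpha_fov)))
    return np.trapz(integrand, wl)

def max_range(noise_floor, z_grid, **kw):
    # Algorithm A1 idea: largest z at which the detected signal still exceeds the noise floor
    en = np.array([signal_photons(z, **kw) for z in z_grid])
    above = z_grid[en > noise_floor]
    return above.max() if above.size else None

wl = np.linspace(1.50e-6, 1.60e-6, 50)
kw = dict(wl=wl, phi_e=np.full_like(wl, 1e-6), R=0.5, tau_lens=0.9,
          qe=np.full_like(wl, 0.8), t_int=150e-6, a_pixel=235e-12,
          f_number=1.4, alpha_fov=np.deg2rad(10.0))
print(max_range(noise_floor=5e-8, z_grid=np.linspace(0.05, 1.0, 200), **kw))  # sub-meter range here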

6. 3D Using Unified Shape-From-Shading Model (USFSM) and iToF

For the 3D reconstruction of the object captured by the NIR-SPI system (see Figure 7a,b), we applied the unified Shape-From-Shading model (USFSM), which builds 3D images from the spatial intensity variations of the recovered 2D image I(x, y) [39] (see Appendix A.2, Algorithm A2). However, the obtained mesh is of insufficient quality: it presents outliers and missing parts (see Figure 7c). To improve the mesh, we map the iToF depth information onto it (see Appendix A.3, Algorithm A3), generating a new mesh that is then processed with a heat diffusion filter [40] to remove the mentioned outliers (see Figure 7d) and with the power crust algorithm [41] (the C++ implementation recompiled for use from Python) (see Figure 7e) to generate an improved mesh (see Figure 7f). To map the iToF information onto the SFS depth points, we use a four-point iToF system consisting of four laser modules (see Figure 8a) that measure four reference depth points of the depicted scene. These reference points allow us to create a reference depth mesh that can be combined with the point cloud of the NIR-SPI 2D image generated using the SFS reconstruction (see Algorithm A2). An initial 3D mesh is generated using the method described in the previous subsection. To generate the final 3D mesh, a method based on the ray tracing used in ToF laser-beam scanning [42] is applied. For this, a voxelization strategy [43] is followed, where the 3D mesh generation is based on surface fragmentation and coverage. Combining the point cloud obtained by the SFS method for NIR-SPI and the scene depth information obtained from the four reference points, a semi-even 3D point distribution [44] is obtained over the original mesh, with a distance (pitch) between each pair of points within the mesh of d_pitch = 5 mm. The vertices of the generated 3D mesh (see Algorithm A3) are used to divide the point cloud into four regions, each corresponding to a depth reference point defined through an independent iToF measurement (see Figure 8b), where the V_0 vertices of the mesh become the iToF reference normalized depth points. Here, V_1 and V_2 define the neighboring points in the point cloud (see Figure 8c). In this manner, additional points are added to the final point cloud: the points covering each triangle are computed from the vectors defined in Equation (13) [44], the angle between them given by Equation (14) [44], and the point positions given by Equation (15) [44], and they are used to reduce the number of separate triangles (i.e., to remove the remaining space between adjacent meshes). In this way, after the voxelization [45] is applied, all triangles lying in the same voxel form part of the final mesh, creating a new final 3D mesh of the scene that takes the iToF-originated depth reference points into account (see Figure 7f).
v_1 = \frac{V_1 - V_{laser\_ref}}{\lVert V_1 - V_{laser\_ref} \rVert}, \quad v_2 = \frac{V_2 - V_{laser\_ref}}{\lVert V_2 - V_{laser\_ref} \rVert}
\alpha = \arccos(v_1 \cdot v_2)
P_i = V_0 + v_1 d_{1x} + v_2 d_{2y}, \quad d_1 = d, \; d_2 = d/\sin(\alpha), \quad 0 \le d_{1x} < \lVert V_0 V_1 \rVert, \quad 0 \le d_{2y} < \lVert V_0 V_2 \rVert \left(1 - \frac{d_{1x}}{\lVert V_0 V_1 \rVert}\right)
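The following sketch implements the semi-even point distribution of Equations (13)-(15) over a single triangle (the core idea of Algorithm A4); it is a re-implementation under the stated equations, not the authors' exact code, and the vertex coordinates in the example are arbitrary:

import numpy as np

def fill_triangle(V0, V1, V2, d):
    # Semi-even point distribution over triangle (V0, V1, V2) with pitch d, Equations (13)-(15)
    V0, V1, V2 = map(np.asarray, (V0, V1, V2))
    e1, e2 = V1 - V0, V2 - V0
    v1 = e1 / np.linalg.norm(e1)                               # Equation (13)
    v2 = e2 / np.linalg.norm(e2)
    alpha = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))      # Equation (14)
    d1, d2 = d, d / np.sin(alpha)                              # step sizes used in Equation (15)
    pts = []
    for d1x in np.arange(0.0, np.linalg.norm(e1), d1):
        # shrink the range along v2 so that generated points stay inside the triangle
        limit = np.linalg.norm(e2) * (1.0 - d1x / np.linalg.norm(e1))
        for d2y in np.arange(0.0, limit, d2):
            pts.append(V0 + v1 * d1x + v2 * d2y)               # Equation (15)
    return np.array(pts)

# Example: d_pitch = 5 mm over a 50 mm right triangle (about 55 generated points).
print(fill_triangle([0, 0, 0], [0.05, 0, 0], [0, 0.05, 0], d=0.005).shape)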

7. Experimental Results

To evaluate the capabilities of the 3D NIR-SPI system, we used a semi-direct light source to simulate background illumination in outdoor conditions [46], with an illuminance between 5 and 50 klux. The scattering is provided by water droplets of 3 μm diameter (see Figure 4). We reconstructed images of four different types of objects placed 20 cm from the camera: a sphere with a 50 mm diameter, a torus-shaped object with an external diameter of 55 mm and an internal diameter of 25 mm, a cube with dimensions of 40 mm × 40 mm × 40 mm, and a U-shaped object with dimensions of 65 mm × 40 mm × 17 mm. The objects were placed inside the test chamber (see Figure 4). The NIR-SPI images were reconstructed using the four space-filling projections discussed in Section 2.1.
We determined the extinction coefficient β and the maximum contrast-limited distance given by Equation (9) using three materials with different reflection coefficients (see Table 2).
  • 2D reconstruction: Two-dimensional (2D) images were reconstructed with the NIR-SPI camera using the Basic, Hilbert, Zig-Zag, and Spiral scanning methods, respectively, in combination with the GPU-OMP algorithm [27] and the Fast Super-Resolution Convolutional Neural Network (FSRCNN) method with four upscaling factors [47]. For the reconstruction of the 2D single-pixel images, we decided to use 100% of the projected illumination patterns. We generated the following outdoor conditions and background light scenarios using the described test bench: (1) very cloudy conditions (5 klux), (2) half-cloudy conditions (15 klux), (3) midday (30 klux), and (4) clear-sky sun glare (40–50 klux). To evaluate the quality of the reconstructed 2D images, we used the Structural Similarity Index (SSIM) [48] and the Peak Signal-to-Noise Ratio (PSNR) [49] as a function of the background illumination (see Figure 9).
    For the highest background illumination level, the Spiral scanning provided the best reconstruction quality (see Figure 9a), reaching PSNR = 28 dB (see Figure 9b).
  • 3D reconstruction: We carried out a 3D image reconstruction from a 2D NIR-SPI image (see Figure 10) and iToF information using Algorithms A2 and A3 under different background illumination conditions (very cloudy conditions (5 klux) and half-cloudy conditions (15 klux)). The 3D images are shown in Figure 11. In the test, we calculated the RMSE, defined by Equation (16), and the skewness, which quantifies the symmetry of the 3D shapes: a value near 0 indicates a high-quality mesh, and a value close to 1 indicates a completely degenerate mesh [50] (see Figure 12). The improvement rate RMSE%, as defined in Equation (17), indicates the percentage improvement of the 3D image reconstruction in terms of RMSE (see Table 3); a minimal computation sketch of Equations (16) and (17) is given after this list.
    RMSE = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(Imag_1(i,j) - Imag_2(i,j)\right)^{2}}
    improvement\ rate_{RMSE}(\%) = \frac{RMSE_{Alg.A2} - RMSE_{Alg.A3}}{RMSE_{SfS}} \times 100
    We can observe an improvement in the obtained 3D mesh compared to the first 3D reconstructions carried out using the SFS method (see Figure 12), mostly related to surface smoothing, correction of imperfections, and removal of outlying points. The Spiral space-filling method yields the best performance, with an improvement factor of 29.68%, followed by the Zig-Zag method with 28.68% (see Table 3). On the other hand, when the background illumination reaches 15 klux, the Spiral method reaches a 32.14% improvement, while the Hilbert method reaches 28.24% (see Table 3). Applying the SFS method alone, the skewness of the mesh increases in the fog scenario from 0.6–0.7 (fair cell quality, see Table 4) to 0.8–1 (poor cell quality, see Table 5); i.e., the cell quality degrades (see Figure 12a–c). To improve these values, the power crust algorithm integrated with iToF is used to reach a better skewness range: for the case without fog, the obtained skewness ranges from 0.02 to 0.2 (excellent cell quality, see Table 4), which corresponds to the recommended skewness values [50]. Under fog conditions, we seek a mesh with a cell quality level < 0.5, which is considered good mesh quality (see Table 5). The Hilbert scanning method delivered the lowest skewness level among the space-filling methods used, indicating its lower sensitivity to noise.
  • Evaluation of the image reconstruction time: An important parameter for 3D reconstruction in vision systems is the processing time required for this task. Therefore, we searched for the method with the lowest reconstruction time (see Table 6), considering a trade-off between the overall image quality and the time required for its reconstruction.
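A minimal computation sketch of the two evaluation metrics of Equations (16) and (17) (Python/NumPy; the function and argument names are illustrative):

import numpy as np

def rmse(img1, img2):
    # Equation (16): root-mean-square error between two reconstructed depth/intensity maps
    diff = np.asarray(img1, dtype=float) - np.asarray(img2, dtype=float)
    return np.sqrt(np.mean(diff ** 2))

def improvement_rate(rmse_alg_a2, rmse_alg_a3, rmse_sfs):
    # Equation (17): RMSE improvement (%) of the refined mesh with respect to the SFS-only mesh
    return (rmse_alg_a2 - rmse_alg_a3) / rmse_sfs * 100.0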
Figure 9. Image reconstruction using the NIR-SPI camera when placing the object 20 cm from the lens, using different scanning techniques in foggy conditions, and varying the background illumination between 5 and 50 kLux: (a) SSIM and (b) PSNR.
Figure 10. Reconstruction using the 2D NIR-SPI camera with active illumination at a wavelength of λ = 1550 nm and the object placed 20 cm from the camera, for different scanning techniques under foggy conditions with a particle diameter of 3 μm and background light of 5 and 15 kLux, respectively: (a) 50 mm diameter sphere, (b) cube with dimensions of 40 mm × 40 mm × 40 mm, (c) torus (ring-like object) with an external diameter of 55 mm and an internal diameter of 25 mm, and (d) U-shaped object with dimensions of 65 mm × 40 mm × 17 mm.
Figure 11. Improvement of the reconstructed 3D mesh at a distance of 20 cm from the focal lens, using different scanning techniques under foggy conditions with a particle size of 3 μm and background light of 5 and 15 kLux, respectively: (a) 50 mm diameter sphere, (b) cube with dimensions of 40 mm × 40 mm × 40 mm, (c) torus (ring-like object) with an external diameter of 55 mm and an internal diameter of 25 mm, and (d) U-shaped object with dimensions of 65 mm × 40 mm × 17 mm.
Figure 12. Three-dimensional (3D) mesh of the sphere without/with fog conditions: (a) without fog, mesh obtained using SFS with skewness = 0.6; (b) mesh improved with power crust and iToF with skewness = 0.09; (c) with fog, mesh obtained using SFS with skewness = 0.8; and (d) mesh improved with power crust and iToF with skewness = 0.2.
Table 3. Improvement rate, expressed through the RMSE of Equation (17), of the reconstructed 3D image under foggy conditions with a particle diameter of 3 μm and background light of 5 and 15 kLux, respectively, after Algorithm A3 has been applied.
Scanning Method | 5 kLux | 15 kLux
Basic | 27.58% | 9.67%
Hilbert | 27.52% | 28.24%
Zig-Zag | 28.68% | 19.2%
Spiral | 29.68% | 32.14%
Table 4. Perception of surface quality of the three-dimensional (3D) images without fog conditions, evaluated by calculating the skewness.
Scanning Method | Skewness SFS | Skewness mesh + iToF
Basic | 0.65 | 0.09
Hilbert | 0.52 | 0.02
Zig-Zag | 0.66 | 0.2
Spiral | 0.69 | 0.12
Table 5. Perception of surface quality of the three-dimensional (3D) images under fog conditions, evaluated by calculating the skewness.
Scanning Method | Skewness SFS | Skewness mesh + iToF
Basic | 0.82 | 0.2
Hilbert | 0.73 | 0.11
Zig-Zag | 1.06 | 0.34
Spiral | 0.81 | 0.17
Table 6. Three-dimensional (3D) image reconstruction processing time using SFS and Algorithm A3.
Scanning Method | Time SfS (ms) | Time 3D mesh (ms) | Time Total (ms)
Basic | 19.83 | 147.69 | 167.53
Hilbert | 19.18 | 127.36 | 146.54
Zig-Zag | 21.69 | 130.89 | 152.58
Spiral | 24.95 | 133.53 | 158.49
Finally, we calculated the 3D reconstruction time (see Table 6), applying first the SFS method and subsequently Algorithm A3 to improve the 3D mesh (see Figure 11). We then compared the reconstruction time, the 3D mesh improvement rate, and the skewness of the reconstructed 3D images obtained when different scanning methods were used (see Table 7). It is important to take into consideration that reaching a higher 3D reconstruction quality implies longer processing times. When the Hilbert scanning was used, which yielded the best performance in terms of the 3D mesh improvement rate and skewness, the required reconstruction time was on the order of 146 ms.

8. Conclusions

This paper presents an NIR-SPI system prototype capable of generating 2D/3D images of depicted scenes in the presence of coarse fog. For the evaluation of the performance of the built system, a theoretical model of the entire NIR-SPI system operating under foggy conditions was first developed and used to quantify the light-scattering effects of the foggy environment on the quality of the 3D images generated by the system. This model was validated in the laboratory using a test bench that simulates outdoor conditions in the presence of coarse fog with droplets of 3 μm diameter and variable background illumination. The maximum detection range was assessed to be between 18 and 30 cm, reaching spatial resolutions between 4 and 6 mm, with a measuring accuracy between 95% and 97%, depending on the reflection index of the material used.
The 3D NIR-SPI image reconstruction is based on the combination of the iToF and photometric (SFS) approaches. For this, we defined a methodology that initially evaluates the 2D SPI image quality through the SSIM and PSNR parameters, using four different space-filling (scanning) methods. We showed that the Spiral and Hilbert scanning methods, respectively, offered the best performance when adapted to the SFS algorithm, mainly because the SFS method strongly depends on the level of background illumination present. Thus, we proposed an algorithm in which we map the distances of four test points in the depicted scene, measured by the four implemented iToF modules, onto the mesh to improve the final 3D image and overcome the limitation of the SFS method. The system complements the missing points on the surface of the depicted objects through a post-processing step based on heat diffusion filtering and the Power Crust algorithm. By applying the described method, we reach a mesh quality of 0.2 to 0.3 in terms of skewness under fog conditions (see Table 7), which is a result comparable with the performance of similar vision systems operating in fog-free environments.
Finally, we evaluated the 3D reconstruction in terms of the required computational time. The results indicate that the unmodified Hadamard projection method, defined as Basic, yielded the worst performance and was outperformed mainly by the Spiral and Hilbert methods. Based on the experimental evaluation performed, we can conclude that in outdoor scenarios in the presence of fog, with variable background illumination, the built NIR-SPI system delivered quite acceptable performance when applying different space-filling (scanning) strategies such as the Spiral or Hilbert methods, respectively, reaching good contrast levels and quite acceptable 2D image spatial resolutions of <30 mm, on which the 3D reconstruction is based. Due to the scattering effects, a robust 3D reconstruction method was proposed and proven to be quite effective. This study opens a new field of research for SPI vision systems in outdoor scenarios, e.g., for cases where they could be integrated into the navigation systems of Unmanned Flight Vehicles (UFVs) as a primary or redundant sensor, with applications such as surface mapping or obstacle avoidance in fog or low-visibility environments [51,52].

9. Patents

Daniel Durini Romero, Carlos Alexander Osorio Quero, José de Jesús Rangel Magdaleno, José Martínez Carranza “Sistema híbrido de creación de imágenes 3D”, File-No.: MX/a/2020/012197, Priority date: 13 November 2020.

Author Contributions

All authors contributed to the article. C.O.Q. proposed the concept, designed the optical system, and wrote the paper under the supervision of D.D., J.R.-M., J.M.-C. and R.R.-G., who supervised the experiments, analyzed the data, reviewed the manuscript, and provided valuable suggestions. All authors have read and agreed to the published version of the manuscript.

Funding

The Ph.D. studies of the first author, Carlos Alexander Osorio Quero, were funded by the Mexican Government through the National Council for Science and Technology (CONACyT), agreement No. 251992.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The first author is thankful to Consejo Nacional de Ciencia y Tecnología (CONACYT) for his scholarship with No. CVU: 661331. The authors wish to thank Humberto García Flores, Head of the Illumination and Energy Efficiency (LIEE) laboratory of INAOE for the most appreciated help provided for developing the test bench and performing the experimental testing of the NIR SPI system.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

    The following abbreviations are used in this manuscript:
ADC   Analog-to-Digital Converter
BLRR   Background Light Rejection Ratio
C_eq   Equivalent integration capacitance
CS-SRCNN   Compressive Sensing Super-Resolution Convolutional Neural Network
CW-iToF   Continuous-Wave Indirect Time-of-Flight
DDS   Direct Digital Synthesis
DMD   Digital Micro-mirror Device
F_mod-eq   Equivalent modulation frequency
FSRCNN   Fast Super-Resolution Convolutional Neural Network
GPU   Graphics Processing Unit
iToF   Indirect Time-of-Flight
InGaAs   Indium Gallium Arsenide
NIR   Near-Infrared (imaging)
OMP   Orthogonal Matching Pursuit
PSNR   Peak Signal-to-Noise Ratio
PDE   Partial Differential Equation
RGB   Red–Green–Blue
RMSE   Root Mean Square Error
SSIM   Structural Similarity Index Measure
SFS   Shape-from-Shading
SLM   Spatial Light Modulator
SNR   Signal-to-Noise Ratio
SPD   Single-Pixel Detector
SPI   Single-Pixel Imaging
UFV   Unmanned Flight Vehicle
USFSM   Unified Shape-From-Shading Model
VIS   Visible wavelengths

Appendix A

Appendix A.1. Pseudocode for Estimating the Maximum Capture Distance of NIR-SPI Vision System

From the radiance of the object in the scene (L_o), the background radiance (noise) (L_f), the water-particle radius r_particle, the wavelength λ, and the optical parameters of the system (the photodetector’s quantum efficiency QE_λ, the photodetector integration time T_int, the lens transmittance τ_lens, the effective photosensitive area A_pixel, the focal aperture angle α_FOV, the material reflection index R, the sensor conversion factor G_s, the effective aperture D_k, and the effective irradiance Ω_k), we calculate the number of photons impinging on the photodetector, E(N), as given by Equation (10) (line 10). From the noise floor σ_Noisefloor of the NIR-SPI system [38] and E(N), we define the maximum reachable distance z as the condition at which E(N) becomes dominated by σ_Noisefloor, i.e., E(N)_i − σ_Noisefloor < δ_th (line 12), where δ_th is the detection threshold of the InGaAs photodiode.
Algorithm A1: Estimate maximum distance of the NIR-SPI system

Appendix A.2. 3D Reconstruction of USFSM Using Fast Sweeping Algorithm

The fast sweeping method obtains the depth information for the objects depicted in a scene from an SPI image whose intensities correspond to the surface points of the scene. We define a surface Z_{i,j} and solve for it with the Lax–Friedrichs Hamiltonian method [53], applying an iterative sweeping strategy based on the fast sweeping scheme. First, the surface is initialized with the boundary values Z_{i,j}(N_x, N_y) (lines 7 and 10), the grid size, and the artificial viscosity condition. Next, the value of Z_{i,j} is updated by sweeping through the image grid in four alternating directions. After each sweep, the boundary values are evaluated at the four image boundaries (D_xp, D_yp, D_xq, and D_yq) (lines 11 and 16); then, we calculate the solution of the image irradiance (eikonal) equation F_x (line 17) and update H (line 18) and Z_{i,j} (line 19).
Algorithm A2: Fast sweeping algorithm for HJ based on the Lax–Friedrichs method [53].
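As a compact sketch of the Lax–Friedrichs sweeping update at the core of Algorithm A2, for an eikonal-type Hamiltonian H(p, q) = sqrt(p² + q²) = F(x, y) [53], the Python fragment below can be used; the grid, the boundary condition, the viscosity constant, and the toy right-hand side are simplified, illustrative choices, not the exact listing of the published algorithm:

import numpy as np

def lf_fast_sweep(F, h=1.0, sigma=1.0, n_sweeps=8, big=1e6):
    # Lax-Friedrichs fast sweeping for |grad Z| = F on a regular grid (simplified sketch).
    # Z is fixed to 0 on the image boundary; sigma is the artificial viscosity constant.
    ny, nx = F.shape
    Z = np.full((ny, nx), big)
    Z[0, :] = Z[-1, :] = Z[:, 0] = Z[:, -1] = 0.0
    orders = [(range(1, ny - 1), range(1, nx - 1)),
              (range(1, ny - 1), range(nx - 2, 0, -1)),
              (range(ny - 2, 0, -1), range(1, nx - 1)),
              (range(ny - 2, 0, -1), range(nx - 2, 0, -1))]
    for _ in range(n_sweeps):
        for rows, cols in orders:                    # four alternating sweep directions
            for i in rows:
                for j in cols:
                    p = (Z[i + 1, j] - Z[i - 1, j]) / (2.0 * h)   # central differences
                    q = (Z[i, j + 1] - Z[i, j - 1]) / (2.0 * h)
                    Ham = np.hypot(p, q)             # eikonal Hamiltonian
                    z_new = (F[i, j] - Ham
                             + sigma * (Z[i + 1, j] + Z[i - 1, j]) / (2.0 * h)
                             + sigma * (Z[i, j + 1] + Z[i, j - 1]) / (2.0 * h)) / (2.0 * sigma / h)
                    Z[i, j] = min(Z[i, j], z_new)    # monotone update toward the viscosity solution
    return Z

# Toy example: a constant right-hand side yields a distance-like surface from the boundary.
print(lf_fast_sweep(np.ones((16, 16)))[8, 8])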

Appendix A.3. iTOF Algorithm

We propose an iToF mapping algorithm based on the surface-scanning voxelization method of [45], which uses the ToF information to complement the missing points on the reconstructed surfaces. From the depth information generated using SFS (see Algorithm A2), we obtain an initial point cloud that is used jointly with the ToF information to generate a new mesh. This new mesh has no missing information, so smoothing methods can more easily be applied to it to improve the 3D reconstruction using the Power Crust algorithm [41].
Algorithm A3: Finding the points of contact of the iTOF ray to generate mesh [44].
Function Generation-to-Mesh(V_Laser1, V_Laser2, V_Laser3, V_Laser4, d_pitch, MatrixPoint):
   Input: vectors V_Laser1..V_Laser4 with the distance information, the separation d_pitch between points generated using SFS (Algorithm A2), and the point-cloud matrix MatrixPoint
   Output: MatrixMeshNew, the matrix containing the new mesh
   Initialization: (N_x, N_y) = size(MatrixPoint)             // size of the point-cloud matrix
   R_1 = [1, N_x - 1], [1, (N_y - 1)/2]                       // defining region 1
   R_2 = [(N_x - 1)/2, N_x - 1], [1, (N_y - 1)/2]             // defining region 2
   R_3 = [1, N_x - 1], [(N_y - 1)/2, N_y - 1]                 // defining region 3
   R_4 = [(N_x - 1)/2, N_x - 1], [(N_y - 1)/2, N_y - 1]       // defining region 4
   MatrixTemp1 = MatrixPoint(R_1)
   MatrixTemp2 = MatrixPoint(R_2)
   MatrixTemp3 = MatrixPoint(R_3)
   MatrixTemp4 = MatrixPoint(R_4)
   MeshTemp1 = TriangleMesh(V_Laser1, d_pitch, MatrixTemp1)   // apply Algorithm A4
   MeshTemp2 = TriangleMesh(V_Laser2, d_pitch, MatrixTemp2)   // apply Algorithm A4
   MeshTemp3 = TriangleMesh(V_Laser3, d_pitch, MatrixTemp3)   // apply Algorithm A4
   MeshTemp4 = TriangleMesh(V_Laser4, d_pitch, MatrixTemp4)   // apply Algorithm A4
   MatrixMeshNew = [MeshTemp1 MeshTemp2 MeshTemp3 MeshTemp4]
   return MatrixMeshNew
Algorithm A4: Semi-even distribution of points on a single triangle [44].

References

  1. Moon, H.; Martinez-Carranza, J.; Cieslewski, T.; Faessler, M.; Falanga, D.; Simovic, A.; Scaramuzza, D.; Li, S.; Ozo, M.; De Wagter, C.; et al. Challenges and implemented technologies used in autonomous drone racing. Intell. Serv. Robot. 2019, 12, 137–148. [Google Scholar] [CrossRef]
  2. Valenti, F.; Giaquinto, D.; Musto, L.; Zinelli, A.; Bertozzi, M.; Broggi, A. Enabling Computer Vision-Based Autonomous Navigation for Unmanned Aerial Vehicles in Cluttered GPS-Denied Environments. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 3886–3891. [Google Scholar] [CrossRef]
  3. Fujimura, Y.; Iiyama, M.; Hashimoto, A.; Minoh, M. Photometric Stereo in Participating Media Using an Analytical Solution for Shape-Dependent Forward Scatter. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 708–719. [Google Scholar] [CrossRef] [PubMed]
  4. Jiang, Y.; Sun, C.; Zhao, Y.; Yang, L. Fog Density Estimation and Image Defogging Based on Surrogate Modeling for Optical Depth. IEEE Trans. Image Process. 2017, 26, 3397–3409. [Google Scholar] [CrossRef] [PubMed]
  5. Narasimhan, S.; Nayar, S. Removing weather effects from monochrome images. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 2, p. II. [Google Scholar] [CrossRef] [Green Version]
  6. Chen, Z.; Ou, B. Visibility Detection Algorithm of Single Fog Image Based on the Ratio of Wavelength Residual Energy. Math. Probl. Eng. 2021, 2021, 5531706. [Google Scholar] [CrossRef]
  7. Liu, W.; Hou, X.; Duan, J.; Qiu, G. End-to-End Single Image Fog Removal Using Enhanced Cycle Consistent Adversarial Networks. Trans. Img. Proc. 2020, 29, 7819–7833. [Google Scholar] [CrossRef]
  8. Palvanov, A.; Giyenko, A.; Cho, Y. Development of Visibility Expectation System Based on Machine Learning. In Proceedings of the 17th International Conference, CISIM 2018, Olomouc, Czech Republic, 27–29 September 2018; pp. 140–153. [Google Scholar] [CrossRef]
  9. Katyal, S.; Kumar, S.; Sakhuja, R.; Gupta, S. Object Detection in Foggy Conditions by Fusion of Saliency Map and YOLO. In Proceedings of the 2018 12th International Conference on Sensing Technology (ICST), Limerick, Ireland, 4–6 December 2018; pp. 154–159. [Google Scholar] [CrossRef]
  10. Dannheim, C.; Icking, C.; Mader, M.; Sallis, P. Weather Detection in Vehicles by Means of Camera and LIDAR Systems. In Proceedings of the 2014 Sixth International Conference on Computational Intelligence, Communication Systems and Networks, Bhopal, India, 27–29 May 2014; pp. 186–191. [Google Scholar] [CrossRef]
  11. Guan, J.; Madani, S.; Jog, S.; Gupta, S.; Hassanieh, H. Through Fog High-Resolution Imaging Using Millimeter Wave Radar. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 11461–11470. [Google Scholar] [CrossRef]
  12. Kijima, D.; Kushida, T.; Kitajima, H.; Tanaka, K.; Kubo, H.; Funatomi, T.; Mukaigawa, Y. Time-of-flight imaging in fog using multiple time-gated exposures. Opt. Express 2021, 29, 6453–6467. [Google Scholar] [CrossRef]
  13. Kang, X.; Fei, Z.; Duan, P.; Li, S. Fog Model-Based Hyperspectral Image Defogging. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1–12. [Google Scholar] [CrossRef]
  14. Thornton, M.P.; Judd, K.M.; Richards, A.A.; Redman, B.J. Multispectral short-range imaging through artificial fog. In Proceedings of the Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XXX; Holst, G.C., Krapels, K.A., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2019; Volume 11001, pp. 340–350. [Google Scholar] [CrossRef]
  15. Bashkansky, M.; Park, S.D.; Reintjes, J. Single pixel structured imaging through fog. Appl. Opt. 2021, 60, 4793–4797. [Google Scholar] [CrossRef]
  16. Soltanlou, K.; Latifi, H. Three-dimensional imaging through scattering media using a single pixel detector. Appl. Opt. 2019, 58, 7716–7726. [Google Scholar] [CrossRef] [PubMed]
  17. Zeng, X.; Chu, J.; Cao, W.; Kang, W.; Zhang, R. Visible–IR transmission enhancement through fog using circularly polarized light. Appl. Opt. 2018, 57, 6817–6822. [Google Scholar] [CrossRef]
  18. Tai, H.; Zhuang, Z.; Jiang, L.; Sun, D. Visibility Measurement in an Atmospheric Environment Simulation Chamber. Curr. Opt. Photon. 2017, 1, 186–195. [Google Scholar]
  19. Gibson, G.M.; Johnson, S.D.; Padgett, M.J. Single-pixel imaging 12 years on: A review. Opt. Express 2020, 28, 28190–28208. [Google Scholar] [CrossRef] [PubMed]
  20. Osorio Quero, C.A.; Durini, D.; Rangel-Magdaleno, J.; Martinez-Carranza, J. Single-pixel imaging: An overview of different methods to be used for 3D space reconstruction in harsh environments. Rev. Sci. Instrum. 2021, 92, 111501. [Google Scholar] [CrossRef] [PubMed]
  21. Zhang, Z.; Wang, X.; Zheng, G.; Zhong, J. Hadamard single-pixel imaging versus Fourier single-pixel imaging. Opt. Express 2017, 25, 19619–19639. [Google Scholar] [CrossRef]
  22. Ujang, U.; Anton, F.; Azri, S.; Rahman, A.; Mioc, D. 3D Hilbert Space Filling Curves in 3D City Modeling for Faster Spatial Queries. Int. J. 3D Inf. Model. (IJ3DIM) 2014, 3, 1–18. [Google Scholar] [CrossRef] [Green Version]
  23. Ma, H.; Sang, A.; Zhou, C.; An, X.; Song, L. A zigzag scanning ordering of four-dimensional Walsh basis for single-pixel imaging. Opt. Commun. 2019, 443, 69–75. [Google Scholar] [CrossRef]
  24. Cabreira, T.M.; Franco, C.D.; Ferreira, P.R.; Buttazzo, G.C. Energy-Aware Spiral Coverage Path Planning for UAV Photogrammetric Applications. IEEE Robot. Autom. Lett. 2018, 3, 3662–3668. [Google Scholar] [CrossRef]
  25. Zhang, R.; Tsai, P.S.; Cryer, J.; Shah, M. Shape-from-shading: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 1999, 21, 690–706. [Google Scholar] [CrossRef] [Green Version]
  26. Wang, G.; Zhang, X.; Cheng, J. A Unified Shape-From-Shading Approach for 3D Surface Reconstruction Using Fast Eikonal Solvers. Int. J. Opt. 2020, 2020, 6156058. [Google Scholar] [CrossRef]
  27. Quero, C.O.; Durini, D.; Ramos-Garcia, R.; Rangel-Magdaleno, J.; Martinez-Carranza, J. Hardware parallel architecture proposed to accelerate the orthogonal matching pursuit compressive sensing reconstruction. In Proceedings of the Computational Imaging V; Tian, L., Petruccelli, J.C., Preza, C., Eds.; International Society for Optics and Photonics, SPIE: Bellingham, WA, USA, 2020; Volume 11396, pp. 56–63. [Google Scholar] [CrossRef]
  28. Laser Safety Facts. Available online: https://www.lasersafetyfacts.com/laserclasses.html (accessed on 28 April 2021).
  29. Perenzoni, M.; Stoppa, D. Figures of Merit for Indirect Time-of-Flight 3D Cameras: Definition and Experimental Evaluation. Remote Sens. 2011, 3, 2461–2472. [Google Scholar] [CrossRef] [Green Version]
  30. Rajan, R.; Pandit, A. Correlations to predict droplet size in ultrasonic atomisation. Ultrasonics 2001, 39, 235–255. [Google Scholar] [CrossRef]
  31. Oakley, J.; Satherley, B. Improving image quality in poor visibility conditions using a physical model for contrast degradation. IEEE Trans. Image Process. 1998, 7, 167–179. [Google Scholar] [CrossRef] [PubMed]
  32. Matzler, C. MATLAB functions for Mie scattering and absorption. IAP Res. Rep. 2002, 8. Available online: http://www.atmo.arizona.edu/students/courselinks/spring09/atmo656b/maetzler_mie_v2.pdf (accessed on 28 April 2021).
  33. Lee, Z.; Shang, S. Visibility: How Applicable is the Century-Old Koschmieder Model? J. Atmos. Sci. 2016, 73, 4573–4581. [Google Scholar] [CrossRef]
  34. Middleton, W.E.K. Vision through the Atmosphere. In Geophysik II / Geophysics II; Bartels, J., Ed.; Springer: Berlin/Heidelberg, Germany, 1957; pp. 254–287. [Google Scholar] [CrossRef]
  35. Hautière, N.; Tarel, J.P.; Didier, A.; Dumont, E. Blind Contrast Enhancement Assessment by Gradient Ratioing at Visible Edges. Image Anal. Stereol. 2008, 27, 87–95. [Google Scholar] [CrossRef]
  36. International Lighting Vocabulary = Vocabulaire International de L’éclairage. 1987. p. 365. Available online: https://cie.co.at/publications/international-lighting-vocabulary (accessed on 28 April 2021).
  37. Süss, A. High Performance CMOS Range Imaging: Device Technology and Systems Considerations; Devices, Circuits, and Systems; CRC Press: Boca Raton, FL, USA, 2016. [Google Scholar]
  38. Osorio Quero, C.A.; Romero, D.D.; Ramos-Garcia, R.; de Jesus Rangel-Magdaleno, J.; Martinez-Carranza, J. Towards a 3D Vision System based on Single-Pixel imaging and indirect Time-of-Flight for drone applications. In Proceedings of the 2020 17th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), Mexico City, Mexico, 11–13 November 2020; pp. 1–6. [Google Scholar] [CrossRef]
  39. Tozza, S.; Falcone, M. Analysis and Approximation of Some Shape-from-Shading Models for Non-Lambertian Surfaces. J. Math. Imaging Vis. 2016, 55, 153–178. [Google Scholar] [CrossRef] [Green Version]
  40. Peyré, G. Numerical Mesh Processing. Course Notes. Available online: https://hal.archives-ouvertes.fr/hal-00365931 (accessed on 28 April 2021).
  41. Amenta, N.; Choi, S.; Kolluri, R.K. The Power Crust. In Proceedings of the Sixth ACM Symposium on Solid Modeling and Applications; Association for Computing Machinery: New York, NY, USA, 2001; pp. 249–266. [Google Scholar] [CrossRef]
  42. Möller, T.; Trumbore, B. Fast, Minimum Storage Ray-Triangle Intersection. J. Graph. Tools 1997, 2, 21–28. [Google Scholar] [CrossRef]
  43. Kaufman, A.; Cohen, D.; Yagel, R. Volume graphics. Computer 1993, 26, 51–64. [Google Scholar] [CrossRef]
  44. Kot, T.; Bobovský, Z.; Heczko, D.; Vysocký, A.; Virgala, I.; Prada, E. Using Virtual Scanning to Find Optimal Configuration of a 3D Scanner Turntable for Scanning of Mechanical Parts. Sensors 2021, 21, 5343. [Google Scholar] [CrossRef]
  45. Huang, J.; Yagel, R.; Filippov, V.; Kurzion, Y. An accurate method for voxelizing polygon meshes. In Proceedings of the IEEE Symposium on Volume Visualization (Cat. No.989EX300), Research Triangle Park, NC, USA, 19–20 October 1998; pp. 119–126. [Google Scholar] [CrossRef] [Green Version]
  46. Ravi, S.; Kurian, C. White light source towards spectrum tunable lighting—A review. In Proceedings of the 2014 International Conference on Advances in Energy Conversion Technologies (ICAECT), Manipal, India, 23–25 January 2014; pp. 203–208. [Google Scholar] [CrossRef]
  47. Dong, C.; Loy, C.C.; Tang, X. Accelerating the Super-Resolution Convolutional Neural Network. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2016. [Google Scholar]
  48. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533. [Google Scholar] [CrossRef] [Green Version]
  49. Chen, T.; Liu, M.; Gao, T.; Cheng, P.; Mei, S.; Li, Y. A Fusion-Based Defogging Algorithm. Remote Sens. 2022, 14, 425. [Google Scholar] [CrossRef]
  50. Budd, C.J.; McRae, A.T.; Cotter, C.J. The scaling and skewness of optimally transported meshes on the sphere. J. Comput. Phys. 2018, 375, 540–564. [Google Scholar] [CrossRef] [Green Version]
  51. Rojas-Perez, L.O.; Martinez-Carranza, J. Metric monocular SLAM and colour segmentation for multiple obstacle avoidance in autonomous flight. In Proceedings of the 2017 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Linköping, Sweden, 3–5 October 2017; pp. 234–239. [Google Scholar]
  52. Dionisio-Ortega, S.; Rojas-Perez, L.O.; Martinez-Carranza, J.; Cruz-Vega, I. A deep learning approach towards autonomous flight in forest environments. In Proceedings of the 2018 International Conference on Electronics, Communications and Computers (CONIELECOMP), Cholula, Mexico, 21–23 February 2018; pp. 139–144. [Google Scholar]
  53. Kao, C.Y.; Osher, S.; Qian, J. Lax–Friedrichs sweeping scheme for static Hamilton–Jacobi equations. J. Comput. Phys. 2004, 196, 367–391. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Two different configurations for SPI: (a) structured detection: the object is illuminated by a light source, and the light it reflects is directed through a lens onto an SLM and then captured by the SPD; (b) structured illumination: the SLM projects a sequence of patterns onto the object, and the reflected light is captured by the SPD. Representation of SPI based on [20].
Figure 2. Example of a Hadamard sequence H_64 scanning scheme applying different space-filling curves: (a) basic Hadamard sequence, (b) Hilbert scan [22], (c) Zig-Zag scan [23], (d) Spiral scan [24].
Figure 3. Proposed 2D/3D NIR-SPI camera system: (a) the sequence used for projection of active illumination patterns and reconstruction of 2D/3D images using the SPI approach; (b) the proposed NIR-SPI system and its subsystems (dimensions of 11 × 12 × 13 cm, weight of 1.3 kg, power consumption of 25 W): InGaAs photodiode module, active illumination source, InGaAs photodetector diode FGA015, graphics processing unit (GPU), and analog-to-digital converter (ADC).
Figure 4. Experimental setup for the NIR-SPI system prototype built. The test bench has a control system to emulate fog and background illumination. The test object is placed inside the glass box.
Figure 5. The operating range (108.3 kHz to 1.7 MHz) of the piezoelectric generates fog particles with mean diameters between 3 and 180 μ m.
Figure 6. Simulation of image contrast attenuation (degradation) due to the presence of fog with two different scattering coefficients (absorption set to zero), performed in Matlab and shown as a function of the light propagation distance.
Figure 7. Three-dimensional (3D) reconstruction schematic: (a) original image of the object, (b) reconstructed 2D image obtained using the SPI NIR system prototype, (c) 3D SFS with imperfections, gaps and outliers in the surface, (d) 3D image obtained after filtering, (e) 3D mesh obtained after using the power crust algorithm, and (f) the final and improved 3D image with iToF.
Figure 8. Three-dimensional (3D) final mesh generation using the CW-iToF reference: (a) laser array and InGaAs photodetector, (b) definition of the reference regions, and (c) method of distribution of the mesh points (d: distance (pitch); v_n, v_{n+1}, v_{n+2}: vertices; P_i: triangle points).
Table 1. Figures of merit of the proposed CW-iTOF system working at 1550 nm peak wavelength.
Parameter | Value
Q_ext(λ) | 0.8 @ 1550 nm
C_eq | 19 fF
A_pix | 235 μm²
FF | 0.38
T_pulse | 65 ns
F_mod-eq | 4.8 MHz
T_int | 150 μs
σ_min | 1 cm
α_FOV | 10°
NED | 1 cm/√Hz
PR_corr | 11.84 V/(W/m²)
SNR_max | 20–30 dB
BLRR | −50 dB
Table 2. Theoretically obtained maximum distance at which the measurement can still be performed vs. that experimentally obtained under the same conditions.
Reflection Coefficient | 0.2 | 0.5 | 0.8
Theoretically calculated maximum measurement distance in absence of fog (cm) | 22.4 | 35 | 44
Theoretically calculated maximum measurement distance in presence of 3 μm diameter fog particles (cm) | 18 | 27 | 30.8
Experimentally obtained maximum measurement distance in absence of fog using the LSM method (cm) | 22 | 34.2 | 43.4
Experimentally obtained maximum measurement distance in presence of 3 μm diameter fog particles using the LSM method (cm) | 17.6 | 26.21 | 30.18
Table 7. Three-dimensional (3D) NIR-SPI performance summary under foggy conditions.
Scanning Method | Skewness | Improvement (%) | Time Total (ms)
Basic | 0.2 | 19 | 167.53
Hilbert | 0.1 | 28 | 146.54
Zig-Zag | 0.34 | 24 | 152.58
Spiral | 0.17 | 31 | 158.49
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
