Review

A Review of Remote Sensing Image Dehazing

1 School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210000, China
2 School of Business, Macquarie University, Sydney 2109, Australia
* Author to whom correspondence should be addressed.
Submission received: 29 April 2021 / Revised: 29 May 2021 / Accepted: 2 June 2021 / Published: 7 June 2021
(This article belongs to the Special Issue Communications and Sensing Technologies for the Future)

Abstract

Remote sensing (RS) is one of the data collection technologies that help explore more of the Earth's surface information. However, RS data captured by satellite are susceptible to suspended atmospheric particles during the imaging process, especially data in the visible light bands. To make up for this deficiency, numerous dehazing efforts have been made recently, most of which aim to restore single hazy data directly without using any extra information. In this paper, we first classify the currently available dehazing algorithms into three categories, i.e., image enhancement, physical dehazing, and data-driven. The advantages and disadvantages of each type of algorithm are then summarized in detail. Finally, the evaluation indicators used to rank recovery performance and the application scenarios of RS data haze removal techniques are discussed, respectively. In addition, some common deficiencies of the currently available methods and future research focuses are elaborated.

1. Introduction

Remote sensing (RS) has been widely used in military affairs [1], e.g., missile early warning [2], military reconnaissance [3], and surveying [4]. With the popularity of satellites, it is also increasingly used for civilian purposes, such as land planning and crop yield surveys [5]. Despite its usefulness, RS images or data taken by satellites are easily affected by fog or haze during the imaging process, which gives images low contrast and dim colors [6] and decreases the performance of computer vision tasks such as object detection [7]. This adverse effect not only reduces the visual quality of RS images, but also prevents such precious RS data from being effectively applied.
To collect high-quality RS data, the most intuitive way is to perform imaging under good visibility and ideal illumination [8]. However, some practical applications [9] require imaging the location of an incident promptly and continuously. Once haze or fog fills the atmosphere, RS imaging loses its original worth. Therefore, a robust and real-time haze removal algorithm is critical for restoring RS data.
Singh et al. [10] summarized image dehazing algorithms from several perspectives, including theory, mathematical models, and performance measures. They divided dehazing algorithms into seven categories, i.e., depth estimation, wavelet, enhancement, filtering, supervised learning, fusion, and meta-heuristic techniques, and introduced the strengths and weaknesses of each. Although the content of Ref. [10] is very comprehensive, its explanation of some related algorithms is not detailed enough. Unlike Ref. [10], this paper groups the current RS image dehazing algorithms into three categories. The first is based on image enhancement, whose main advantage is low complexity, which ensures real-time performance; however, it does not work well in most situations because it ignores the imaging theory. The second is physical dehazing [11], which imposes hand-crafted prior knowledge on the atmospheric scattering model (ASM) to estimate the imaging parameters. Regrettably, the existing prior knowledge cannot be satisfied in all scenes, which hinders the practicality of this type of algorithm. Benefiting from the rapid development of deep learning [12,13], the last category utilizes convolutional neural networks (CNN) [14], generative adversarial networks (GAN) [15], or other networks to train an end-to-end dehazing system [16]. Although these networks can restore high-quality results, the recovery performance depends heavily on the sample dataset used.
The rest of the paper is organized as follows. Section 2 describes the haze imaging model and derives its formulas. Image enhancement methods such as histogram equalization and Retinex are discussed in Section 3. Section 4 mainly describes two well-known physical dehazing methods, i.e., dark channel prior and haze-lines prior dehazing. Since data-driven methods have a strong learning ability, some data-driven dehazing techniques such as DehazeNet, MSCNN, and AOD-Net are discussed in Section 5. Section 6 describes the different quality metrics used to evaluate the recovery performance of different techniques. In Section 7, the significance and application of RS dehazing techniques to society are discussed. Future work is elaborated in Section 8. The concluding remarks are presented in Section 9.

2. Haze Degradation Model

2.1. Causes of Image Degradation in Hazy Weather

Haze is a common natural phenomenon in the real world, formed by suspended particles, e.g., aerosols, dust, compounds, and tiny water droplets, in the atmosphere [17]. On the one hand, these suspended particles interfere with the propagation of scene reflected light during the imaging procedure, i.e., causing scattering and refraction [18]. On the other hand, the atmospheric light also participates in this imaging process, and its effect becomes more serious as the scene depth increases. These two factors lead to a massive loss of the reflected light of the target object and are bound to result in blurred texture, excessive brightness, decreased contrast, and dim colors [19].

2.2. Atmospheric Scattering Model

According to the above analysis, Nayar and Narasimhan [19,20,21,22] considered that the scene reflected light in a hazy atmosphere can be separated into two parts: attenuation and airlight.
Attenuation refers to the scene reflected light that reaches the observer after being scattered by atmospheric particles; this procedure is shown in Figure 1. As can be seen from the figure, when the incident light (or reflected light) travels through the scattering medium, the intensity change over each distance $dx$ is:

$$ dE(x,\lambda) = -\beta(\lambda)\, E(x,\lambda)\, dx, \tag{1} $$

where $\beta(\lambda)$ is the scattering coefficient, which measures the ability of the medium to scatter light at different wavelengths, and $\lambda$ is the wavelength of the light. Calculating the definite integral of both sides of the above formula over the range $x \in [0, d]$ gives:

$$ E(d,\lambda) = E_0(\lambda)\, e^{-\int_0^d \beta(\lambda)\, dx}, \tag{2} $$

where $E_0(\lambda)$ represents the radiance at $x = 0$. Assuming that each point on the scene can be regarded as a light source whose flux per unit area is inversely proportional to the square of the distance, this yields:

$$ E(d,\lambda) = \frac{L_h(\infty,\lambda)\, \rho\, e^{-\beta(\lambda) d}}{d^2}, \tag{3} $$

where $L_h(\infty,\lambda)$ stands for the atmospheric light at infinity, and $\rho$ represents the ability of the object to reflect light.
Airlight represents the component of atmospheric light involved in the imaging process, which is depicted in Figure 2. Considering a small volume of the medium observed under a solid angle $d\omega$ at distance $x$, its luminous intensity is:

$$ dI(x,\lambda) = dV \cdot k \cdot \beta(\lambda) = d\omega \cdot x^2\, dx \cdot k \cdot \beta(\lambda), \tag{4} $$

where $dV = d\omega \cdot x^2\, dx$ is the volume and $k$ is a constant. If $dV$ is regarded as a light source with intensity $dI(x,\lambda)$, the scattered light intensity reaching the observer can be further expressed as:

$$ dE(x,\lambda) = \frac{dI(x,\lambda)\, e^{-\beta(\lambda) x}}{x^2}. \tag{5} $$

Combining Equations (4) and (5) with $dL(x,\lambda) = dE(x,\lambda)/d\omega$ and integrating over $x \in [0, d]$, we have:

$$ L(d,\lambda) = k \left( 1 - e^{-\beta(\lambda) d} \right). \tag{6} $$

Now, extending the atmospheric scattering model to RGB space:

$$ E(d) = \frac{L_h(\infty,\lambda)\, \rho\, e^{-\beta(\lambda) d}}{d^2}\, \tilde{D} + L_h(\infty,\lambda) \left( 1 - e^{-\beta(\lambda) d} \right) \tilde{A}, \tag{7} $$

where $\tilde{D}$ and $\tilde{A}$ are the unit direction vectors of the scene color and the atmospheric color in RGB space, respectively. Therefore, in RGB space, the ASM can be modeled as:

$$ I(x) = \frac{L(\lambda)\, \rho(x)}{d(x)^2}\, e^{-\beta(\lambda) d(x)} + L(\lambda) \left( 1 - e^{-\beta(\lambda) d(x)} \right) = J(x)\, t(x) + \left( 1 - t(x) \right) A, \tag{8} $$

where $x$ denotes the pixel coordinates, $I$ is the observed intensity, $J$ is the true radiance of the scene point imaged at $x$, $A$ is the global atmospheric light, and $t(x) = e^{-\beta(\lambda) d(x)}$ is the medium transmission. In the ASM, the first term on the right-hand side, named direct attenuation, describes the direct impact of haze on the scene reflected light, which attenuates exponentially with the scene depth $d$. The second term, called airlight, increases with the scene depth $d$ [23].
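For concreteness, the forward model of Equation (8) can be sketched in a few lines. This is a minimal illustration assuming a random toy scene, an arbitrary depth map, and an arbitrary scattering coefficient; none of these values come from the references:

```python
# Minimal sketch (NumPy) of hazy image synthesis with the ASM of Equation (8):
# I = J * t + (1 - t) * A, with t(x) = exp(-beta * d(x)).
import numpy as np

def synthesize_haze(J, depth, beta=1.0, A=(0.9, 0.9, 0.9)):
    """J: clean RGB image in [0, 1], shape (H, W, 3); depth: scene depth, shape (H, W)."""
    t = np.exp(-beta * depth)[..., None]   # medium transmission, broadcast over channels
    A = np.asarray(A).reshape(1, 1, 3)     # global atmospheric light
    return J * t + (1.0 - t) * A           # direct attenuation + airlight

# Example: a random scene whose depth grows from left to right.
J = np.random.rand(64, 64, 3)
depth = np.tile(np.linspace(0.1, 2.0, 64), (64, 1))
I = synthesize_haze(J, depth)
```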

3. Dehazing Using Image Enhancement

Image-enhancement-based dehazing does not consider the physical model of image degradation but improves the image quality by increasing the contrast of an image [24]. Among these algorithms, the most representative are histogram equalization, the Retinex algorithm, and homomorphic filtering.

3.1. Histogram Equalization

Histogram equalization [25] is a classic image enhancement method. Mathematically, it can be described by:

$$ s_k = T(r_k) = (L-1) \sum_{j=0}^{k} p_r(r_j) = \frac{L-1}{MN} \sum_{j=0}^{k} n_j, \quad k = 0, 1, 2, \ldots, L-1, \tag{9} $$

where $M$ and $N$ are the height and width of the image, $n_j$ is the total number of pixels in the image with grayscale $j$, $L$ is the total number of grayscale levels in the image (256 for an 8-bit image), and $r$ and $s$ represent the pixel grayscale before and after histogram equalization, respectively. $MN$ is the total number of pixels in the image, and $p_r(r_j) = n_j / (MN)$ is the probability of occurrence of grayscale $j$. The main advantages of histogram equalization are its low computational cost and ease of implementation [26]. Therefore, it has the potential to deal with high-resolution RS data. However, owing to its strong global contrast enhancement, it only works well on images with heavy haze. To address this issue, Kim et al. [27,28] proposed local histogram equalization, which can be divided into three strategies: non-overlapped sub-block, fully overlapped sub-block, and partially overlapped sub-block. Although these can produce a visually haze-free result in most cases, the recovered colors appear darker than the real ones. In fact, because the scene depth in RS data is nearly uniform, these images usually have a uniform haze distribution; thus, histogram equalization is particularly suitable for RS images.
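The mapping of Equation (9) amounts to building a lookup table from the cumulative histogram. A minimal sketch for a single 8-bit channel follows (equivalent in spirit to OpenCV's cv2.equalizeHist); the low-contrast toy input is an assumption:

```python
# Minimal sketch (NumPy) of global histogram equalization, Equation (9).
import numpy as np

def equalize_histogram(img):
    """img: 2D uint8 array (one channel of an RS image)."""
    L = 256
    hist = np.bincount(img.ravel(), minlength=L)    # n_j for each gray level j
    cdf = np.cumsum(hist) / img.size                # cumulative sum of p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)  # s_k = (L - 1) * CDF(r_k)
    return lut[img]

hazy = (np.random.rand(128, 128) * 100 + 100).astype(np.uint8)  # low-contrast toy input
enhanced = equalize_histogram(hazy)
```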

3.2. Retinex

Retinex theory was proposed by Edwin Land et al. [29] in 1963. Its name is a blend of "retina" and "cortex", and it simulates the imaging process of the human visual system. For this reason, it is also called the cerebral cortex theory.

3.2.1. Retinex Algorithm

The Retinex algorithm holds that the image observed by the eye can be represented as the product of the reflection component and the irradiation component:

$$ I_i(x,y) = R_i(x,y) \cdot L_i(x,y), \tag{10} $$

where $i \in \{R, G, B\}$ indexes the three color channels, $I_i(x,y)$ represents the actual observed value, $R_i(x,y)$ represents the reflection component, and $L_i(x,y)$ represents the irradiation component. $R_i(x,y)$ can be obtained by estimating the irradiation component from the original image and removing it. For convenience of calculation, Equation (10) is transformed into logarithmic form, i.e.,

$$ \log[R_i(x,y)] = \log[I_i(x,y)] - \log[L_i(x,y)]. \tag{11} $$

3.2.2. Single Scale Retinex

Jobson et al. [30] proposed the Single Scale Retinex (SSR) algorithm. It estimates the irradiation component as a weighted average of the pixels in the neighborhood, expressed as follows:

$$ L_i(x,y) = I_i(x,y) * G(x,y), \tag{12} $$

where $*$ is the convolution operation and $G(x,y)$ is the Gaussian surround function:

$$ G(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}, \tag{13} $$

where $\iint G(x,y)\, dx\, dy = 1$ and $\sigma$ is the scale of the surround. When $\sigma$ is small, more details are preserved, but color distortion may occur. Conversely, when $\sigma$ is large, the color information in the image is more natural, while details are easily lost. Combining Equations (12) and (13) gives:

$$ \log[R_i(x,y)] = \log[I_i(x,y)] - \log[I_i(x,y) * G(x,y)]. \tag{14} $$

Here, we remark that the SSR algorithm uses only a single scale to estimate the unknown parameter, which may significantly reduce the enhancement quality [31].
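A minimal per-channel sketch of Equations (12)-(14) follows; the value of sigma and the use of SciPy's Gaussian filter are illustrative assumptions:

```python
# Minimal sketch (NumPy/SciPy) of Single Scale Retinex, Equations (12)-(14).
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(channel, sigma=80.0, eps=1e-6):
    """channel: one color channel as a float array in (0, 1]."""
    illumination = gaussian_filter(channel, sigma)             # L = I * G (Equation (12))
    return np.log(channel + eps) - np.log(illumination + eps)  # Equation (14)
```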

3.2.3. Multi-Scale Retinex

To overcome the above flaw, Multi-Scale Retinex (MSR) [32] computes a weighted average of the reflection components estimated at different scales, calculated as follows:

$$ \log R_i(x,y) = \sum_{k=1}^{N} \omega_k \left\{ \log I_i(x,y) - \log \left[ I_i(x,y) * G_k(x,y) \right] \right\}, \tag{15} $$

where $N$ represents the number of scales, $G_k(x,y)$ is the $k$-th Gaussian function, and $\omega_k$ is the weight of the $k$-th scale, satisfying $\sum_{k=1}^{N} \omega_k = 1$. If $N = 1$, MSR reduces to SSR. Although MSR can make up for the shortcomings of SSR, it still produces halo artifacts, and the overall luminance is insufficient.
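Building on the SSR sketch above, Equation (15) reduces to averaging the log-domain differences over several scales. The three scales and the equal weights ω_k = 1/N are common choices, used here as assumptions:

```python
# Minimal sketch of Multi-Scale Retinex, Equation (15).
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(channel, sigmas=(15.0, 80.0, 250.0), eps=1e-6):
    log_i = np.log(channel + eps)
    out = np.zeros_like(channel)
    for sigma in sigmas:                                          # k = 1, ..., N
        out += log_i - np.log(gaussian_filter(channel, sigma) + eps)
    return out / len(sigmas)                                      # omega_k = 1 / N
```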

3.2.4. Multi-Scale Retinex with Color Restoration

Since the MSR algorithm processes the three RGB channels separately, changes in the color ratio inevitably lead to color distortion. Therefore, Rahman et al. [33] and Jobson et al. [34,35] proposed Multi-Scale Retinex with Color Restoration (MSRCR), which adjusts the reflection component $R(x,y)$ by introducing a color restoration factor, that is:

$$ R_{\mathrm{MSRCR}}^{i}(x,y) = C_i(x,y) \cdot R_{\mathrm{MSR}}^{i}(x,y), \quad i \in \{R, G, B\}, \tag{16} $$

$$ C_i(x,y) = \log \left[ \frac{\alpha\, I_i(x,y)}{\sum_{n=1}^{N} I_n(x,y)} \right], \tag{17} $$

where $C_i$ is the color restoration factor of the $i$-th channel, and $\alpha$ is a non-linear adjustment factor. In general, MSRCR is more robust and restores richer detail than MSR. However, the complexity of the algorithm is undoubtedly increased.
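The color restoration factor of Equations (16) and (17) can be layered on top of the msr() sketch above. This is a minimal illustration; the value of alpha is an assumed tuning constant, and the output would still need rescaling for display:

```python
# Minimal sketch of the MSRCR color restoration, Equations (16)-(17).
import numpy as np

def msrcr(img, msr_fn, alpha=125.0, eps=1e-6):
    """img: RGB float array (H, W, 3) in (0, 1]; msr_fn: a per-channel MSR function."""
    channel_sum = img.sum(axis=2, keepdims=True) + eps
    C = np.log(alpha * img / channel_sum + eps)                      # Equation (17)
    R_msr = np.stack([msr_fn(img[..., i]) for i in range(3)], axis=2)
    return C * R_msr                                                 # Equation (16)
```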

3.3. Other Dehazing Algorithms Based on Image Enhancement

Homomorphic filtering [36] is a well-known image enhancement method based on the irradiation-reflection model in the frequency domain. In this method, the irradiation component determines the image's overall grayscale variation and corresponds mainly to the low-frequency information, while the reflection component determines the image's edge details and corresponds mainly to the high-frequency information. Homomorphic filtering applies a filter function that attenuates the low-frequency information and amplifies the high-frequency information [37]. In this respect, homomorphic filtering and the Retinex algorithm are very similar [38]: both divide the image into the irradiation component and the reflection component. The difference is that the former processes the image in the frequency domain, while the latter works in the spatial domain. Homomorphic filtering can remove the shadows caused by uneven illumination while maintaining the original information of the image. However, it requires two Fourier transforms, which consume considerable computing resources. The basic idea of the wavelet transform is similar to homomorphic filtering: different frequency features of the original hazy image are obtained by the wavelet transform, and the image's detail information is enhanced to achieve dehazing [39]. However, it cannot be applied when the image is too bright or too dark. Ancuti et al. [40] applied white balancing and a contrast-enhancing procedure to improve the visibility of hazy images; however, this approach has not been shown to be physically valid.

3.4. Remote Sensing Image Dehazing Based on Image Enhancement

Shi et al. [41] developed an image enhancement algorithm that restores hazy RS images by combining the Retinex algorithm with the chromaticity ratio. It introduces the color information of the original image when applying the Retinex algorithm, thereby overcoming both the color distortion easily caused by histogram equalization and the grayish appearance caused by the Retinex algorithm. Huang et al. [42] proposed the Urban Remote Sensing Haze Removal (URSHR) algorithm for dehazing urban RS images. The URSHR algorithm combines phase consistency features, multi-scale Retinex theory, and histogram features to restore haze-free images. It is a feasible and effective method for haze removal from urban RS images and has good practical value. Chaudhry et al. [43] proposed a framework for image restoration and haze removal, which uses hybrid median filtering and accelerated local Laplacian filtering to dehaze the image, and it has achieved good results on outdoor RGB images and RS images.

4. Physical Dehazing

As discussed in Section 2, physical dehazing techniques build on the well-known ASM and impose one or more kinds of prior knowledge [44,45] or assumptions on it to reduce the uncertainty of haze removal [46,47].

4.1. Dark Channel Prior

He et al. [48] observed a large number of outdoor haze-free images and found that, in most of the non-sky patches, at least one color channel has some pixels whose intensities are very low and close to zero; this observation is known as the dark channel prior (DCP). For an arbitrary image $J$, its dark channel [49,50] $J^{\mathrm{dark}}$ is given by:

$$ J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{R,G,B\}} J^c(y) \right), \tag{18} $$

where $J^c$ is a color channel of $J$, and $\Omega(x)$ is a local patch centered at $x$. If $J$ is an outdoor haze-free image, the value of $J^{\mathrm{dark}}$ should be very low or close to zero. Note that the low intensity in the dark channel is mainly due to shadows in the scene, dark objects, and colorful objects or surfaces.

4.1.1. Estimating the Transmission

The ASM in Equation (8) can be normalized by $A$:

$$ \frac{I^c(x)}{A^c} = t(x)\, \frac{J^c(x)}{A^c} + 1 - t(x). \tag{19} $$

Assuming that the value of $A$ is known and that the transmission in a local patch $\Omega(x)$ is constant, denoted $\tilde{t}(x)$, applying the minimum operation to Equation (19) gives:

$$ \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right) = \tilde{t}(x)\, \min_{y \in \Omega(x)} \left( \min_{c} \frac{J^c(y)}{A^c} \right) + 1 - \tilde{t}(x). \tag{20} $$

Imposing the DCP on Equation (20), we have:

$$ J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \left( \min_{c \in \{R,G,B\}} J^c(y) \right) \approx 0. \tag{21} $$

Substituting Equation (21) into Equation (20), the estimated transmission simplifies to:

$$ \tilde{t}(x) = 1 - \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right). \tag{22} $$

In practice, the atmosphere is not absolutely particle-free even on clear days, so haze still exists when we look at distant objects. Moreover, the presence of haze is a fundamental cue for humans to perceive depth [51,52]. Therefore, it is necessary to retain a certain degree of haze to obtain a better visual effect. This can be achieved by introducing a factor $\omega \in [0, 1]$ (usually set to 0.95) into Equation (22), which becomes:

$$ \tilde{t}(x) = 1 - \omega \min_{y \in \Omega(x)} \left( \min_{c} \frac{I^c(y)}{A^c} \right). \tag{23} $$
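Under the assumptions above, the dark channel of Equation (18) and the transmission estimate of Equation (23) can be sketched in a few lines. The patch size of 15 and ω = 0.95 follow He et al. [48], while the SciPy minimum filter is an implementation choice:

```python
# Minimal sketch (NumPy/SciPy) of the dark channel (Equation (18)) and the
# DCP transmission estimate (Equation (23)).
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """img: RGB float array (H, W, 3) in [0, 1]."""
    min_rgb = img.min(axis=2)                   # min over the color channels
    return minimum_filter(min_rgb, size=patch)  # min over the patch Omega(x)

def estimate_transmission(img, A, patch=15, omega=0.95):
    """A: atmospheric light as an array of shape (3,)."""
    normalized = img / A.reshape(1, 1, 3)       # I^c / A^c (Equation (19))
    return 1.0 - omega * dark_channel(normalized, patch)
```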

4.1.2. Estimating the Atmospheric Light

To estimate the atmospheric light, He et al. first pick the top 0.1% brightest pixels in the dark channel and record their coordinates. The brightest of the corresponding pixels in the original image is then taken as the atmospheric light [48].
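This selection rule can be sketched directly; the routine below is a minimal illustration of the procedure described above, not He et al.'s original implementation:

```python
# Minimal sketch of the atmospheric light estimate: take the 0.1% brightest
# dark-channel pixels and pick the brightest corresponding input pixel.
import numpy as np

def estimate_atmospheric_light(img, dark):
    """img: (H, W, 3) float array; dark: its dark channel, shape (H, W)."""
    n = max(1, int(dark.size * 0.001))                  # top 0.1% of the pixels
    idx = np.argsort(dark.ravel())[-n:]                 # brightest dark-channel pixels
    candidates = img.reshape(-1, 3)[idx]
    return candidates[candidates.sum(axis=1).argmax()]  # brightest candidate as A
```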

4.1.3. Recovering the Scene Radiance

Substituting the estimated atmospheric light $A$ and transmission $t$ into Equation (8), the haze-free image can be obtained by:

$$ J(x) = \frac{I(x) - A}{t(x)} + A. \tag{24} $$

The direct attenuation term $J(x)\,t(x)$ will be very close to zero when the transmission $t$ approaches zero. Therefore, a lower bound $t_0$ is set for the transmission, and the final scene radiance $J$ is recovered by:

$$ J(x) = \frac{I(x) - A}{\max(t(x),\, t_0)} + A. \tag{25} $$

Because the transmission is not always constant within a patch, the image restored with this rough transmission exhibits block artifacts. He et al. proposed a soft matting algorithm to refine the transmission; however, it is computationally expensive. Later, He et al. [53] replaced the soft matting with guided filtering, which reduced the complexity and greatly improved the computational efficiency. Images restored by the DCP show promising visual results. However, if the target scene is similar to the atmospheric light, e.g., snow, white walls, or the sea, satisfactory results cannot be obtained.
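Putting the three steps together, the recovery of Equation (25) is a clamped inversion of the ASM. A minimal sketch follows, with the assumed lower bound t0 = 0.1; in a full pipeline the transmission would first be refined, e.g., by guided filtering:

```python
# Minimal sketch of the scene radiance recovery of Equation (25).
import numpy as np

def recover_radiance(img, A, t, t0=0.1):
    """img: (H, W, 3); A: (3,); t: (H, W) transmission map."""
    t = np.clip(t, t0, 1.0)[..., None]       # max(t(x), t0), broadcast over channels
    A = A.reshape(1, 1, 3)
    return np.clip((img - A) / t + A, 0.0, 1.0)
```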

4.2. Non-Local Image Dehazing

Based on the observation that a natural image usually contains many repeated colors, Berman et al. [54] developed a non-local dehazing technique, which differs from patch-wise and pixel-wise dehazing approaches. The core idea is to adopt K-means [55] to cluster the pixels of the input image into 500 haze-lines and then estimate the transmission map using these haze-lines [56]. With this estimated parameter, a haze-free result can be recovered from a single hazy image.

4.2.1. Haze-Lines Clustering

First, $I_A$ is defined by the following equation:

$$ I_A(x) = I(x) - A, \tag{26} $$

which translates the 3D RGB coordinate system such that the airlight is at the origin. Combining this with Equation (8), we get:

$$ I_A(x) = t(x) \left[ J(x) - A \right]. \tag{27} $$

Then, $I_A(x)$ is rewritten in spherical coordinates, i.e.,

$$ I_A(x) = \left[ r(x),\, \theta(x),\, \varphi(x) \right], \tag{28} $$

where $r$ is the distance to the origin, and $\theta$ and $\varphi$ are the longitude and latitude, respectively. It can be seen from Equation (27) that scene points with the same surface color but different distances differ only in their transmission value. In other words, pixels $x$ and $y$ have similar RGB values in the underlying haze-free image if their $[\varphi, \theta]$ values are similar:

$$ J(x) \approx J(y) \;\Longrightarrow\; \left\{ \varphi(x) \approx \varphi(y),\; \theta(x) \approx \theta(y) \right\}, \quad \forall\, t. \tag{29} $$

Therefore, pixels belong to the same haze-line if their $[\varphi(x), \theta(x)]$ values are similar.

4.2.2. Estimating Transmission

For a given haze-line defined by $J$ and $A$, $r(x)$ depends on the object distance:

$$ r(x) = t(x)\, \| J(x) - A \|, \quad 0 \le t(x) \le 1. \tag{30} $$

Thus, $t = 1$ corresponds to the largest radial coordinate:

$$ r_{\max} \overset{\mathrm{def}}{=} \| J - A \|. \tag{31} $$

Combining Equations (30) and (31), the estimated transmission simplifies to:

$$ \tilde{t}(x) = r(x) / \hat{r}_{\max}(x), \tag{32} $$

where $\hat{r}_{\max}(x) = \max_{y \in H(x)} \{ r(y) \}$ and $H(x)$ is the haze-line to which pixel $x$ belongs, which is assumed to contain at least one haze-free pixel. After regularization of $\tilde{t}$, the dehazed image can be obtained via the ASM. This method handles most hazy examples well; however, it may perform poorly on hazy images that do not satisfy the introduced prior.
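The two stages above, clustering into haze-lines and per-line transmission estimation, can be sketched as follows. This is a minimal illustration that clusters the translated pixels by their direction (a proxy for [θ, φ]) and omits the regularization step of Berman et al. [54]; the clip bounds are assumptions:

```python
# Minimal sketch (NumPy/scikit-learn) of non-local transmission estimation,
# Equations (26)-(32): cluster pixels into haze-lines, then t = r / r_max.
import numpy as np
from sklearn.cluster import KMeans

def nonlocal_transmission(img, A, n_lines=500):
    """img: (H, W, 3) float array in [0, 1]; A: airlight, shape (3,)."""
    I_A = img.reshape(-1, 3) - A                  # Equation (26)
    r = np.linalg.norm(I_A, axis=1) + 1e-9        # radial coordinate r(x)
    directions = I_A / r[:, None]                 # unit vectors encode (theta, phi)
    labels = KMeans(n_clusters=n_lines, n_init=1).fit_predict(directions)
    r_max = np.zeros(n_lines)
    np.maximum.at(r_max, labels, r)               # per-haze-line maximum radius
    t = r / r_max[labels]                         # Equation (32)
    return np.clip(t, 0.05, 1.0).reshape(img.shape[:2])
```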

4.3. Other Physical Dehazing Methods

Tan [57] observed that haze-free images have higher contrast than hazy images and maximized the contrast per patch while maintaining a globally coherent image. This algorithm enhances the contrast of the image and improves its visibility; unfortunately, color oversaturation and halo effects are visible in the dehazed images. Fattal [58] first assumed that the albedo of a local image region is constant and that the transmission and surface shading are locally uncorrelated, and then used independent component analysis (ICA) to estimate the albedo. As expected, the performance of this method largely depends on the statistical characteristics of the input data, so insufficient color information is bound to lead to unreliable statistical estimates.

4.4. Remote Sensing Image Dehazing Using DCP

Since RS images are captured from high altitude, they generally do not include sky regions. Wang et al. [59] observed that the dark channel values of most areas therefore remain at a relatively low level, so the blocking phenomenon has little effect on dehazed RS images. This permits omitting the transmission refinement step, which simplifies the dehazing process and improves computational efficiency. Zheng et al. [60] introduced a failure-point threshold based on the DCP, effectively avoiding the influence of bright objects on dehazing RS images. Li et al. [61] used median filtering to refine the transmission and improve the computational efficiency for aerial images. Wang et al. [62] proposed a patch-based DCP method for remotely sensed multispectral images, replacing the global atmospheric light with an atmospheric-light surface hypothesis so that RS images are better restored. Long et al. [63,64] used a low-pass Gaussian filter to refine the atmospheric veil and redefined the transmission to eliminate color distortion. Dai et al. [65] generated the dark channel image by directly taking the minimum of the three channels at each pixel of the RS image.

5. Data-Driven Based Dehazing

With the continuous development of deep learning theory, convolutional neural networks (CNNs) [66,67,68,69,70] have been utilized and have achieved good results in face recognition, image segmentation, and other fields. Image dehazing, an issue of great concern in image processing, has also attracted many scholars' attention, and most data-driven dehazing techniques have achieved tremendous success compared with traditional haze removal methods.

5.1. DehazeNet

DehazeNet [71,72] was proposed by Cai et al. [73] in 2016. It uses a multi-layer CNN architecture that takes a hazy image as input and outputs its transmission map. According to this estimated output, the haze-free image is then restored based on the ASM. The structure of DehazeNet is shown in Figure 3.
DehazeNet employs feature extraction, multi-scale mapping, local extremum, and nonlinear regression to calculate the transmission map of a hazy image.
Feature extraction: This stage consists of a convolutional layer and a Maxout unit [74]; it convolves the hazy image with appropriate filters and then applies a nonlinear mapping to obtain the feature maps. The Maxout unit is a simple feed-forward nonlinear activation function used in multi-layer perceptrons or CNNs. When used in a CNN, it generates a new feature map by taking a pixel-wise maximum over k affine feature maps.
Multi-scale mapping: It is composed of 16 convolution kernels with sizes of 3 × 3, 5 × 5, and 7 × 7 to adapt to features of different sizes and scales. In previous studies, multi-scale features have been proven to have significant effects on image dehazing.
Local extremum: The maximum within the neighborhood of each pixel is taken to overcome local sensitivity. In addition, the local extremum is consistent with the assumption that the medium transmission is locally constant, and it is commonly used to suppress noise in the transmission estimation.
Nonlinear regression: ReLU [75,76] suppresses only values below zero, whereas the output of the last layer in an image reconstruction task should lie between 0 and 1, so values greater than one may cause overflow and must also be suppressed. To this end, Cai et al. proposed the Bilateral Rectified Linear Unit (BReLU) [77] activation function to overcome this limitation (as shown in Figure 4). As a novel linear unit, BReLU maintains bilateral restraint and local linearity.
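The BReLU clamp itself is simple; a minimal sketch (not the original implementation) is:

```python
# Minimal sketch of the BReLU activation: values are clamped on both sides so
# that the predicted transmission stays within [t_min, t_max] = [0, 1].
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral ReLU: min(max(x, t_min), t_max)."""
    return np.clip(x, t_min, t_max)
```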
Experiments show that this system outperforms existing methods. However, the ASM assumes a single light source and does not consider multiple light sources, and the dehazing effect in distant areas still needs improvement.

5.2. MSCNN

DehazeNet extracts feature maps through a convolutional neural network to obtain the transmission map, but the transmission obtained by DehazeNet is not refined. Therefore, Ren et al. [78] designed a multi-scale CNN (MSCNN) for image dehazing. As shown in Figure 5, the original hazy image is used as input; the transmission map is first estimated by a coarse-scale network and then refined by a fine-scale network.
The coarse-scale CNN predicts the scene's overall transmission map and is composed of multi-scale convolution layers, a pooling layer, an up-sampling layer [79,80,81,82], and a linear combination layer. The convolutional layers use kernels of different sizes to learn multi-scale features, and each convolutional layer is followed by a ReLU layer, a pooling layer, and an up-sampling layer. The linear combination layer linearly combines the features of the previous layer to obtain a rough transmission map, which is then used as the input of the fine-scale CNN.
The fine-scale CNN refines the transmission map output by the coarse-scale network. Its structure is similar to that of the coarse-scale network: the rough transmission map is fed into the fine-scale CNN, and the two networks work together to produce a refined transmission map.
As discussed in [78], the haze-free results of this trained network improve upon those of traditional techniques. Despite this, the max-pooling adopted in the model results in a loss of details, and nighttime image dehazing remains unreliable.

5.3. AOD-NET

DehazeNet and MSCNN estimate the atmospheric light using the DCP. However, this estimate may be erroneous when the color of an object in the hazy image is close to the atmospheric light. Moreover, estimating the transmission and the atmospheric light separately may further accumulate errors and affect the result. To solve this problem, Li et al. [83] proposed the first end-to-end trainable dehazing model, which directly restores the haze-free image from the hazy image rather than relying on any intermediate parameter estimation. The AOD-Net [83] model rearranges the ASM of Equation (8) as:

$$ J(x) = \frac{1}{t(x)}\, I(x) - A\, \frac{1}{t(x)} + A. \tag{33} $$

The core idea is to combine the transmission $t$ and the atmospheric light $A$ into a single variable $K(x)$, so that:

$$ J(x) = K(x) \cdot I(x) - K(x) + b, \tag{34} $$

where

$$ K(x) = \frac{\frac{1}{t(x)} \left( I(x) - A \right) + (A - b)}{I(x) - 1}, \tag{35} $$

and $b$ is a constant bias with a default value of 1.
AOD-Net is composed of two parts: a K-estimation module and a clean image generation module (as shown in Figure 6). The parameters in $K(x)$ vary with the input hazy image. The model is trained by minimizing the loss between the output image $J$ and the clear image; by continuously reducing this loss, the model learns to output the haze-free image $J$. This model achieves large improvements in terms of PSNR and SSIM. In addition, the end-to-end design makes it easy to embed the model into other data-driven pipelines, thereby improving the performance of downstream image processing tasks.
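As a quick check that the reformulation is exact, the following snippet verifies that Equations (34) and (35) reproduce the direct ASM inversion; the scalar values of I, A, and t are arbitrary assumptions:

```python
# Numerical sanity check (NumPy) of the AOD-Net K(x) reformulation,
# Equations (33)-(35).
import numpy as np

I, A, t, b = 0.7, 0.9, 0.4, 1.0
J_asm = (I - A) / t + A                          # direct ASM inversion (Equation (24))
K = ((I - A) / t + (A - b)) / (I - 1.0)          # Equation (35)
J_aod = K * I - K + b                            # Equation (34)
assert np.isclose(J_asm, J_aod)
```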

5.4. FD-GAN

Dong et al. [84] proposed a fully end-to-end generative adversarial network with a fusion-discriminator (FD-GAN) for image dehazing. FD-GAN consists of a generator and a fusion-discriminator (as shown in Figure 7). The generator, comprising an encoder and a decoder, directly generates the dehazed image G(I) without estimating any parameters. The encoder contains three dense blocks, each including a series of convolutional, batch normalization (BN), and ReLU layers. The decoder uses nearest-neighbor interpolation for up-sampling to gradually recover the feature maps to the original resolution. The low-frequency (LF) and high-frequency (HF) components are obtained by a Gaussian filter and the Laplace operator, respectively. The authors concatenate G(I) (or the ground truth image J) with its corresponding LF and HF components as a sample and feed it into the fusion-discriminator. The LF and HF components help the discriminator distinguish the differences between dehazed and ground truth images and guide the generator to output more natural and realistic haze-free images.

5.5. Remote Sensing Image Dehazing Using Data-Driven

Guo et al. [85] proposed an end-to-end RSDehazeNet for haze removal. RSDehazeNet utilizes both local and global residual learning strategies for fast convergence and superior performance. To obtain enough RS images for CNN training, the authors also proposed a novel haze synthesis method that generates realistic hazy multispectral images by modeling the wavelength-dependent and spatially varying characteristics of haze in RS images. Jiang et al. [86] proposed a multi-scale residual convolutional neural network (MRCNN) for haze removal from RS images. MRCNN uses three-dimensional convolution kernels to extract spatial-spectral correlation information and abstract features from the surrounding neighborhood for haze transmission estimation, achieving extremely low validation and test errors. Qin et al. [87] proposed a dehazing method based on a deep CNN with residual structures for multispectral RS images. First, CNN individuals with multiple residual structures are connected in parallel, each learning the regression from a hazy image to a clear image; then, the individual CNN outputs are fused with a weight map to produce the final dehazing result. This method can accurately remove the haze in each band of multispectral images over different scenes. Chen et al. [88] proposed an end-to-end hybrid high-resolution learning network framework, termed H2RL-Net, to remove haze from a single satellite image. It delivers significant improvements for RS images owing to its novel feature extraction architecture. Mehta et al. [89] proposed SkyGAN for haze removal in aerial images, which includes a hazy-to-hyperspectral (H2H) module and a conditional GAN (cGAN) module for dehazing; it produces high-quality results when evaluated on the SateHaze1k and HAI datasets. Huang et al. [90] proposed the self-supporting dehazing network (SSDN) to improve the efficiency of restoring content and details. The SSDN introduces a self-filtering block to raise the representation ability of the learned features and achieves good performance.

6. Remote Sensing Dehazing Image Quality Evaluation

After removing haze from RS images with the aforementioned algorithms, it is also crucial to evaluate the resulting image quality with appropriate metrics. This section first introduces several commonly used metrics in detail and then uses them to assess the results dehazed by different methods.

6.1. Mean Squared Error (MSE)

The mean squared error (MSE) estimates the error between the actual image and the restored image and is computed as [91,92]:

$$ \mathrm{MSE} = \frac{1}{P \cdot Q} \sum_{x=1}^{P} \sum_{y=1}^{Q} \left( f(x,y) - h(x,y) \right)^2, \tag{36} $$

where $f(x,y)$ and $h(x,y)$ represent the real image and the restored image, respectively, $P$ and $Q$ represent the length and width of the image, and $x$ and $y$ are the coordinates of a pixel in the image.

6.2. Mean Absolute Error (MAE)

The mean absolute error (MAE) is the mean of the absolute error between the predicted and observed values. Compared with MSE, it avoids the problem of errors cancelling each other out and, for an 8-bit image, yields a value between 0 and 255. Formally, it is computed by:

$$ \mathrm{MAE} = \frac{1}{P \cdot Q} \sum_{x=1}^{P} \sum_{y=1}^{Q} \left| f(x,y) - h(x,y) \right|. \tag{37} $$

6.3. Peak Signal-to-Noise Ratio (PSNR)

PSNR is the most common and widely used objective metric for ranking image quality. It evaluates the ratio between the maximum possible pixel value and the reconstruction error measured by MSE, and can be computed as [91,92]:

$$ \mathrm{PSNR} = 10 \log_{10} \frac{M^2}{\frac{1}{P \cdot Q} \sum_{x=1}^{P} \sum_{y=1}^{Q} \left( f(x,y) - h(x,y) \right)^2} = 10 \log_{10} \frac{M^2}{\mathrm{MSE}}, \tag{38} $$

where $M = 2^n - 1$ is the maximum gray level and $n$ is the number of bits per pixel, generally 8, so that $M = 255$.

6.4. Structural Similarity Index (SSIM)

SSIM measures the similarity between two images and can also be used to judge the quality of compressed images [93]. In general, a larger SSIM value means smaller image distortion. Natural images are highly structured and exhibit strong correlations among pixels, which carry essential information about the structure of the objects in the visual scene. SSIM is computed as [92]:

$$ \mathrm{SSIM} = \frac{\left( 2\mu_x \mu_y + c_1 \right) \left( 2\sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right) \left( \sigma_x^2 + \sigma_y^2 + c_2 \right)}, \tag{39} $$

where $\mu_x$, $\mu_y$ and $\sigma_x^2$, $\sigma_y^2$ are the means and variances of $x$ and $y$, respectively; $c_1 = (r_1 T)^2$ and $c_2 = (r_2 T)^2$ are constants used to maintain stability, with $r_1 = 0.01$ and $r_2 = 0.03$; $\sigma_{xy}$ is the covariance of $x$ and $y$; and $T$ is the dynamic range of the pixel values, generally $T = 255$.
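The four metrics can be sketched compactly. Note that the SSIM below is the global (single-window) form of Equation (39); practical evaluations usually average it over local windows, as in the original SSIM implementation:

```python
# Minimal sketches (NumPy) of the metrics of Equations (36)-(39).
import numpy as np

def mse(f, h):
    return np.mean((f.astype(np.float64) - h.astype(np.float64)) ** 2)

def mae(f, h):
    return np.mean(np.abs(f.astype(np.float64) - h.astype(np.float64)))

def psnr(f, h, M=255.0):
    return 10.0 * np.log10(M ** 2 / mse(f, h))       # Equation (38)

def ssim_global(x, y, T=255.0, r1=0.01, r2=0.03):
    c1, c2 = (r1 * T) ** 2, (r2 * T) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()                  # sigma_x^2, sigma_y^2
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()        # sigma_xy
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```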

6.5. Quantitative Comparison

To check the recovery performance of different techniques, the above-mentioned methods (including HE, Retinex, DCP, Non-Local, DehazeNet, MSCNN, AOD-Net, and GCANet [94]) were tested on eight challenging real-world hazy RS images. The selected RS images and the results dehazed by the compared approaches are shown in Figure 8. As seen in this figure, RS images dehazed by the traditional enhancement methods, i.e., HE and Retinex, have high contrast but lose some details, e.g., the brighter area in the upper-right corner of E1 and the darker area on the left of E2. Moreover, the physical dehazing methods, i.e., DCP and Non-Local, may produce RS outputs darker than they should be (see the DCP result for E5). In contrast, although data-driven dehazing produces high-quality haze-free scenes for most of the given examples, it may fail in cases with heavy haze.
To rank the performance of the compared techniques more accurately, we also tested them on eight simulated RS datasets consisting of hazy images and their ground truths. The corresponding recovery results are shown in Figure 9. As expected, the results on the simulated inputs confirm that image enhancement, physical dehazing, and data-driven methods all have some ability to remove the haze covering an image, i.e., each performs well on particular examples; however, none of them works well across images with various scenes.
Furthermore, we employed MSE, MAE, PSNR, and SSIM to assess the restoration quality of the selected dehazing methods, as summarized in Table 1. Data-driven dehazing shows the most potential for RS image dehazing, since it achieves roughly the best scores on all of the evaluation indices used.

7. Remote Sensing Technology Application

7.1. Monitoring and Control of Geological Disaster

Geological disasters, such as landslides, mudslides, and ground fissures, seriously endanger human life and property. A high-quality RS image can help investigators roughly assess the overall damage in a disaster area. However, RS data may lose their value when obscured by clouds and haze. Therefore, removing haze from hazy RS images is very significant for geological disaster monitoring and control.

7.2. Urban Planning

The main task of urban planning is to obtain comprehensive urban spatial information. Using RS technology to image a city can capture such information easily and accurately, but the many factories and construction sites located in cities produce large amounts of smoke, which blurs the RS images. After dehazing the RS data, comprehensive planning and development of the city can be carried out reliably.

7.3. Military Application

It is well known that valuable military intelligence can be obtained from clear RS images, which can be used to discover missiles, identify troops, confirm airports, monitor changes in forces, and make operational plans. Owing to haze interference, data collected for military use also suffer from low contrast and dim colors. Therefore, haze removal techniques can be useful in handling this issue.

8. Future Efforts

Researchers have carried out a large amount of work on RS image dehazing and have achieved promising results in most cases. However, much vital work remains to be studied.

8.1. Drawback of ASM

Images dehazed using the ASM can appear dim, since the ASM fails to consider the light-trapping phenomenon related to texture density and scene depth. In other words, the ASM assumes that all scenes in the image are directly illuminated by the atmospheric light, ignoring the influence of uneven illumination. To address these problems, several useful methods [18,23] that improve the robustness of the ASM have been proposed. Although the dim effect has been alleviated to a certain extent, uneven haze remains a challenge. Therefore, devising a more robust physical model that can describe complex scenes is still an open problem.

8.2. Priori Limitation

Most existing methods are based on the ASM and achieve haze removal by imposing latent priors on it. However, because no prior can fully satisfy all images or data, it is difficult to guarantee the recovery performance of these approaches. In some conditions, especially for images with heavy haze, haze removal using priors becomes ineffective. Benefiting from the learning mechanism, building a deep architecture or a Bayesian framework that integrates the remarkable merits of each algorithm is a good choice, so that better haze-free results can be obtained.

8.3. Real-Time Dehazing

Although current dehazing algorithms can effectively remove haze from a single image, most of them share a common problem, i.e., a lack of real-time performance. This means that these methods still cannot support the normal operation of computer vision systems requiring high-efficiency processing. In a word, a "good" haze removal algorithm must have both reliable recovery capability and low computational complexity. To the best of our knowledge, all existing algorithms reduce complexity by optimizing the algorithm itself; in fact, using hardware (e.g., a graphics processing unit) to accelerate processing may be more effective than previous efforts.

8.4. Drawback of Data-Driven

On the one hand, the restoration quality of data-driven methods depends on the choice of the training dataset. However, almost all open datasets are artificially synthesized rather than collected from the real world, especially for RS data, which is bound to lower the dehazing performance on real-world RS data. On the other hand, data-driven dehazing behaves like a "black box", lacking interpretability and theoretical grounding despite its effectiveness. Therefore, researchers could combine statistical learning with symbolic computation and construct datasets with uneven haze to obtain more natural and realistic haze-free images.

9. Conclusions

In conclusion, this paper detailed the degradation mechanism of hazy data and the corresponding physical model, i.e., the ASM. Then, a brief introduction to RS images and the attributes of each type of dehazing algorithm were discussed by category. In short, image enhancement neglects the imaging theory of hazy data and only stresses enhancing local or global contrast as much as possible. Physical dehazing extracts the model parameters by imposing latent prior knowledge on the ASM and can thereby restore a haze-free scene from a hazy image in a physically grounded way. Data-driven dehazing uses the powerful learning ability of neural networks to find the mapping between hazy data and the corresponding haze-free image or transmission map; its dehazing performance therefore mainly depends on the training dataset used to drive the models. Finally, the commonly used quantitative metrics and the application scenarios of RS dehazing approaches were illustrated, and we highlighted some challenging problems faced by these RS dehazing methods that motivate future efforts on this topic.

Author Contributions

Literature search and writing, J.L., S.W. and X.W.; paper layout and design, M.J.; project leader, D.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (61902198), the Natural Science Foundation of Jiangsu Province (BK20190730), and the Research Foundation of Nanjing University of Posts and Telecommunications (NY219135). This work was also supported by the Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, Nanjing University of Aeronautics and Astronautics.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hudson, R.D.; Hudson, J.W. The military applications of remote sensing by infrared. Proc. IEEE 1975, 63, 104–128. [Google Scholar] [CrossRef]
  2. Zhang, C.Y.; Cheng, H.F.; Chen, Z.H.; Zheng, W.W. The Development of Hyperspectral Remote Sensing and Its Threatening to Military Equipments. Electro. Opt. Technol. Appl. 2008, 23, 10–12. [Google Scholar]
  3. Stevens, M.M. Application of Remote Sensing to the Assessment of Surface Characteristics of Selected Mojave Desert Playas for Military Purposes. Ph.D. Thesis, University of Missouri-Rolla, Rolla, MO, USA, 1988. [Google Scholar]
  4. Wang, F.; Zhou, K.; Wang, M.; Wang, Q. The Impact Analysis of Land Features to JL1-3B Nighttime Light Data at Parcel Level: Illustrated by the Case of Changchun, China. Sensors 2020, 20, 5447. [Google Scholar] [CrossRef]
  5. Mancini, F.; Pirotti, F. Innovations in Photogrammetry and Remote Sensing: Modern Sensors, New Processing Strate-gies and Frontiers in Applications. Sensors 2021, 21, 2420. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, K.; He, L.; Ma, S.; Gao, S.; Bi, D. A Sensor Image Dehazing Algorithm Based on Feature Learning. Sensors 2018, 18, 2606. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  7. Jiang, F.; Kong, B.; Li, J.; Dashtipour, K.; Gogate, M. Robust Visual Saliency Optimization Based on Bidirectional Markov Chains. Cogn. Comput. 2021, 13, 69–80. [Google Scholar] [CrossRef]
  8. Qu, C.; Bi, D.-Y.; Sui, P.; Chao, A.-N.; Wang, Y.-F. Robust Dehaze Algorithm for Degraded Image of CMOS Image Sensors. Sensors 2017, 17, 2175. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Tao, S. Research on Optical Image Degradation and Compensation Technology Based on Atmospheric Physical Characteristics; Zhejiang University: Hangzhou, China, 2014. (In Chinese) [Google Scholar]
  10. Singh, D.; Kumar, V. A Comprehensive Review of Computational Dehazing Techniques. Arch. Comput. Methods Eng. 2018, 26, 1395–1413. [Google Scholar] [CrossRef]
  11. Yuan, X.; Ju, M.; Gu, Z.; Wang, S. An Effective and Robust Single Image Dehazing Method Using the Dark Channel Prior. Information 2017, 8, 57. [Google Scholar] [CrossRef] [Green Version]
  12. Yuan, K.; Lu, W.; Wei, J.; Xiong, N. Single Image Dehazing via NIN-DehazeNet. IEEE Access 2019, 7, 181348–181356. [Google Scholar] [CrossRef]
  13. Salazar-Colores, S.; Cruz-Aceves, I.; Ramos-Arreguin, J.M. Single image dehazing using a multilayer perceptron. J. Electron. Imaging 2018, 27, 043022. [Google Scholar] [CrossRef]
  14. Chua, L.O.; Roska, T. The CNN paradigm. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 1993, 40, 147–156. [Google Scholar] [CrossRef]
  15. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Adv. Neural Inf. Process. Syst. 2014, 3, 2672–2680. [Google Scholar] [CrossRef]
  16. An End-to-End Pyramid Convolutional Neural Network for Dehazing; Springer: Singapore, 2019.
  17. Gu, Z.; Ju, M.; Zhang, D. A Single Image Dehazing Method Using Average Saturation Prior. Math. Probl. Eng. 2017, 2017, 6851301. [Google Scholar] [CrossRef]
  18. Ju, M.; Gu, Z.; Zhang, D. Single image haze removal based on the improved atmospheric scattering model. Neurocomputing 2017, 260, 180–191. [Google Scholar] [CrossRef]
  19. Nayar, S.K.; Narasimhan, S.G. Vision in bad weather. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999; Volume 2, pp. 820–827. [Google Scholar]
  20. Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. ACM SIGGRAPH ASIA 2008 Courses 2008, 3, 233–254. [Google Scholar] [CrossRef]
  21. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. ACM SIGGRAPH ASIA 2008 Courses 2008, 6, 713–724. [Google Scholar] [CrossRef]
  22. Narasimhan, S.G.; Nayar, S.K. Removing weather effects from monochrome images. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 186–193. [Google Scholar]
  23. Ju, M.; Ding, C.; Ren, W.; Yang, Y.; Zhang, D.; Guo, Y.J. IDE: Image Dehazing and Exposure Using an Enhanced Atmospheric Scattering Model. IEEE Trans. Image Process. 2021, 30, 2180–2192. [Google Scholar] [CrossRef]
  24. Ju, M.-Y.; Ding, C.; Zhang, D.-Y.; Guo, Y.J. Gamma-Correction-Based Visibility Restoration for Single Hazy Images. IEEE Signal Process. Lett. 2018, 25, 1084–1088. [Google Scholar] [CrossRef]
  25. Hadjidemetriou, E. Use of Histograms for Recognition; Columbia University: New York, NY, USA, 2002. [Google Scholar]
  26. Cheng, H.; Shi, X. A simple and effective histogram equalization approach to image enhancement. Digit. Signal Process. 2004, 14, 158–170. [Google Scholar] [CrossRef]
  27. Kim, T.K.; Paik, J.; Kang, B.S. Contrast enhancement system using spatially adaptive histogram equalization with temporal filtering. IEEE Trans. Consum. Electron. 1998, 44, 82–87. [Google Scholar] [CrossRef]
  28. Kim, J.-Y.; Kim, L.-S.; Hwang, S.-H. An advanced contrast enhancement using partially overlapped sub-block histogram equalization. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 475–484. [Google Scholar] [CrossRef]
  29. Land, E. Lightness and Retinex Theory. J. Opt. Soc. Am. 1971, 61, 1–11. [Google Scholar] [CrossRef] [PubMed]
  30. Jobson, D.; Rahman, Z.; Woodell, G. Properties and performance of a center/surround retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  31. Finlayson, G.D.; Hordley, S.D.; Drew, M.S. Removing shadows from Images using retinex. In Color & Imaging Conference; Society for Imaging Science and Technology: Scottsdale, AZ, USA, 12–15 November 2002. [Google Scholar]
  32. Rahman, Z.U.; Jobson, D.J.; Woodell, G.A. Multi-scale retinex for color image. In Proceedings of the 3rd IEEE International Conference on Image Processing, Lausanne, Switzerland, 19 September 1996. [Google Scholar]
  33. Jobson, D.J.; Rahman, Z.-U.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–110. [Google Scholar] [CrossRef]
  34. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Retinex Image Processing: Improved Fidelity to Direct Visual Observation. In Color and Imaging Conference; NASA Langley Technical Report Server; Society for Imaging Science and Technology: Hampton, VA, USA, 1996. [Google Scholar]
  35. Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef] [Green Version]
  36. Seow, M.-J.; Asari, V.K. Ratio rule and homomorphic filter for enhancement of digital colour image. Neurocomputing 2006, 69, 954–958. [Google Scholar] [CrossRef]
  37. Fries, R.; Modestino, J. Image enhancement by stochastic homomorphic filtering. ICASSP IEEE Int. Conf. Acoust. Speech Signal Process. 2005, 6, 625–637. [Google Scholar] [CrossRef] [Green Version]
  38. Wang, X.; Ju, M.; Zhang, D. Automatic hazy image enhancement via haze distribution estimation. Adv. Mech. Eng. 2018, 10. [Google Scholar] [CrossRef] [Green Version]
  39. Wu, D.; Zhu, Q. The latest research progress of image defogging. Acta Autom. Sin. 2015, 41, 221–239. (In Chinese) [Google Scholar]
  40. Ancuti, C.O. Single Image Dehazing by Multi-Scale Fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef]
  41. Shi, W.; Li, J. Research on Remote Sensing Image Dehazing Algorithm. Spacecr. Recovery Remote Sens. 2010, 6, 50–55. (In Chinese) [Google Scholar]
  42. Huang, S.; Liu, Y.; Wang, Y.; Wang, Z.; Guo, J. A New Haze Removal Algorithm for Single Urban Remote Sensing Image. IEEE Access 2020, 8, 1. [Google Scholar] [CrossRef]
  43. Chaudhry, A.M.; Riaz, M.M.; Ghafoor, A. A Framework for Outdoor RGB Image Enhancement and Dehazing. IEEE Geosci. Remote Sens. Lett. 2018, 15, 932–936. [Google Scholar] [CrossRef]
  44. Ju, M.; Ding, C.; Zhang, D.; Guo, Y.J. BDPK: Bayesian Dehazing Using Prior Knowledge. IEEE Trans. Circuits Syst. Video Technol. 2018, 29, 2349–2362. [Google Scholar] [CrossRef]
  45. Ju, M.; Gu, Z.; Zhang, D.; Qin, H. Visibility Restoration for Single Hazy Image Using Dual Prior Knowledge. Math. Probl. Eng. 2017, 2017, 8190182. [Google Scholar] [CrossRef] [Green Version]
  46. Wang, J.; Lu, K.; Xue, J.; He, N.; Shao, L. Single Image Dehazing Based on the Physical Model and MSRCR Algorithm. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2190–2199. [Google Scholar] [CrossRef]
  47. Ju, M.; Zhang, D.; Wang, X. Single image dehazing via an improved atmospheric scattering model. Vis. Comput. 2017, 33, 1613–1625. [Google Scholar] [CrossRef] [Green Version]
  48. He, K.; Jian, S.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
  49. Xu, H.; Guo, J.; Liu, Q.; Ye, L. Fast image dehazing using improved dark channel prior. In Proceedings of the 2012 IEEE International Conference on Information Science and Technology, Wuhan, China, 23–25 March 2012. [Google Scholar]
  50. Xie, B.; Guo, F.; Cai, Z. Improved Single Image Dehazing Using Dark Channel Prior and Multi-scale Retinex. In Proceedings of the 2010 International Conference on Intelligent System Design and Engineering Application, Changsha, China, 13–14 October 2010; Volume 1, pp. 848–851. [Google Scholar] [CrossRef]
  51. Houston, J.P.; Bee, H.; Hatfield, E.; Rimm, D.C. Sensation and Perception. Int. J. Psychol. 1979, 51, 80–108. [Google Scholar]
  52. Preetham, A.J.; Shirley, P.; Smits, B. A Practical Analytic Model for Daylight; ACM: New York, NY, USA, 1999; pp. 91–100. [Google Scholar]
  53. He, K.; Sun, J.; Tang, X. Guided Image Filtering. Trans. Petri Nets Other Models Concurr. XV 2010, 6, 1–14. [Google Scholar] [CrossRef]
  54. Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
  55. Hartigan, J.A.; Wong, M.A. A K-Means Clustering Algorithm. Appl. Stat. 1979, 28, 100–108. [Google Scholar] [CrossRef]
  56. Berman, D.; Treibitz, T.; Avidan, S. Single Image Dehazing Using Haze-Lines. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 42, 720–734. [Google Scholar] [CrossRef]
  57. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  58. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9. [Google Scholar] [CrossRef]
  59. Wang, S.; Wan, H.; Zeng, L.; Peng, X. Remote sensing image fog removal technology using DCP. J. Geomat. Sci. Technol. 2011, 3, 182–185. (In Chinese) [Google Scholar]
  60. Zheng, X.; Xiao, Y.; Gong, Y. Research on remote sensing image defogging method based on DCP. Geomat. Spat. Inf. Technol. 2020, 249, 69–72. (In Chinese) [Google Scholar]
  61. Li, L.; Tang, X.; Liu, W.; Chen, C. Speed improvement of aerial image defogging algorithm based on DCP. J. Jilin Univ. 2021, 59, 77–84. (In Chinese) [Google Scholar]
  62. Wang, L.; Pei, J.; Xie, W. Patch-Based Dark Channel Prior Dehazing for RS Multi-spectral Image. Chin. J. Electron. 2015, 24, 573–578. [Google Scholar] [CrossRef]
  63. Jiao, L.; Shi, Z.; Wei, T. Fast haze removal for a single remote sensing image using dark channel prior. In Proceedings of the 2012 International Conference on Computer Vision in Remote Sensing, Xiamen, China, 16–18 December 2012. [Google Scholar]
  64. Long, J.; Shi, Z.; Tang, W.; Zhang, C. Single Remote Sensing Image Dehazing. IEEE Geosci. Remote. Sens. Lett. 2013, 11, 59–63. [Google Scholar] [CrossRef]
  65. Dai, S.; Xu, W.; Pu, Y.; Chen, Y. Remote sensing image defogging method based on DCP. Acta Opt. Sin. 2017, 37, 348–354. (In Chinese) [Google Scholar]
  66. Lecun, Y. Generalization and Network Design Strategies; Connectionism in Perspective Elsevier: Amsterdam, The Netherlands, 1989. [Google Scholar]
  67. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Comput. 1989, 1, 541–551. [Google Scholar] [CrossRef]
  68. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  69. Bouvrie, J. Notes on Convolutional Neural Networks; Neural Nets, MIT CBCL Tech Report; MIT: Cambridge, MA, USA, 2006. [Google Scholar]
  70. Krizhevsky, A.; Sutskever, I.; Hinton, G. ImageNet Classification with Deep Convolutional Neural Networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  71. Pan, P.-W.; Yuan, F.; Guo, J.; Cheng, E. Underwater image visibility improving algorithm based on HWD and DehazeNet. In Proceedings of the 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC), Xiamen, China, 22–25 October 2017; pp. 1–4. [Google Scholar]
  72. Chen, W.-T.; Fang, H.-Y.; Ding, J.-J.; Kuo, S.-Y. PMHLD: Patch Map-Based Hybrid Learning DehazeNet for Single Image Haze Removal. IEEE Trans. Image Process. 2020, 29, 6773–6788. [Google Scholar] [CrossRef]
  73. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  74. Goodfellow, I.J.; Warde-Farley, D.; Mirza, M.; Courville, A.; Bengio, Y. Maxout Networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta, GA, USA, 16–21 June 2013; pp. 1319–1327. [Google Scholar]
  75. Glorot, X.; Bordes, A.; Bengio, Y. Deep Sparse Rectifier Neural Networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Ft. Lauderdale, FL, USA, 11–13 April 2011. [Google Scholar]
  76. Si, J.; Harris, S.L.; Yfantis, E. A Dynamic ReLU on Neural Network. In Proceedings of the 2018 IEEE 13th Dallas Circuits and Systems Conference (DCAS), Dallas, TX, USA, 12 November 2018; pp. 1–6. [Google Scholar]
  77. Super-Resolution Convolutional Neural Networks Using Modified and Bilateral ReLU. In Proceedings of the 2019 International Conference on Electronics, Information, and Communication (ICEIC), Auckland, New Zealand, 22–25 January 2019. [Google Scholar]
  78. Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.-H. Single Image Dehazing via Multi-scale Convolutional Neural Networks with Holistic Edges. Int. J. Comput. Vis. 2020, 128, 240–259. [Google Scholar] [CrossRef]
  79. Esmaeilzehi, A.; Ahmad, M.O.; Swamy, M. UPDCNN: A New Scheme for Image Upsampling and Deblurring Using a Deep Convolutional Neural Network. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 2154–2158. [Google Scholar]
  80. Li, Y.; Zhang, L.; Zhang, Y.; Xuan, H.; Dai, Q. Depth map super-resolution via iterative joint-trilateral-upsampling. In Proceedings of the 2014 IEEE Visual Communications and Image Processing Conference, Valletta, Malta, 7–10 December 2014; pp. 386–389. [Google Scholar]
  81. Dziembowski, A.; Grzelka, A.; Mieloch, D.; Stankiewicz, O.; Domanski, M. Enhancing view synthesis with image and depth map upsampling. In Proceedings of the 2017 International Conference on Systems, Signals and Image Processing (IWSSIP), Poznan, Poland, 22–24 May 2017; pp. 1–4. [Google Scholar]
  82. Tsuchiya, A.; Sugimura, D.; Hamamoto, T. Depth upsampling by depth prediction. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 1662–1666. [Google Scholar]
  83. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-One Dehazing Network. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4780–4788. [Google Scholar]
  84. Dong, Y.; Liu, Y.; Zhang, H.; Chen, S.; Qiao, Y. FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 10729–10736. [Google Scholar]
  85. Guo, J.; Yang, J.; Yue, H.; Tan, H.; Hou, C.; Li, K. RSDehazeNet: Dehazing Network with Channel Refinement for Multispectral Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 59, 2535–2549. [Google Scholar] [CrossRef]
  86. Jiang, H.; Lu, N. Multi-Scale Residual Convolutional Neural Network for Haze Removal of Remote Sensing Images. Remote Sens. 2018, 10, 945. [Google Scholar] [CrossRef] [Green Version]
  87. Qin, M.; Xie, F.; Li, W.; Shi, Z.; Zhang, H. Dehazing for Multispectral Remote Sensing Images Based on a Convolutional Neural Network with the Residual Architecture. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 1645–1655. [Google Scholar] [CrossRef]
  88. Chen, X.; Li, Y.; Dai, L.; Kong, C. Hybrid High-Resolution Learning for Single Remote Sensing Satellite Image Dehazing. IEEE Geosci. Remote Sens. Lett. 2021, 30, 1–5. [Google Scholar] [CrossRef]
  89. Mehta, A.; Sinha, H.; Mandal, M.; Narang, P. Domain-Aware Unsupervised Hyperspectral Reconstruction for Aerial Image Dehazing. In Proceedings of the 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 5–9 January 2021. [Google Scholar]
  90. Huang, P.; Zhao, L.; Jiang, R.; Wang, T.; Zhang, X. Self-filtering image dehazing with self-supporting module. Neurocomputing 2021, 432, 57–69. [Google Scholar] [CrossRef]
  91. Amintoosi, M.; Fathy, M.; Mozayani, N. Video enhancement through image registration based on structural similarity. Imaging Sci. J. 2011, 59, 238–250. [Google Scholar] [CrossRef]
  92. Singh, D.; Garg, D.; Pannu, H.S. Efficient Landsat image fusion using fuzzy and stationary discrete wavelet transform. Imaging Sci. J. 2017, 6, 1–7. [Google Scholar] [CrossRef]
  93. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
  94. Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated Context Aggregation Network for Image Dehazing and Deraining. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 7–11 January 2019; pp. 1375–1383. [Google Scholar]
Figure 1. Attenuation of a collimated beam of light by suspended particles.
Figure 2. The atmosphere scatters environmental illumination in the direction of the observer.
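Figures 1 and 2 illustrate the two components of the atmospheric scattering model (ASM) that underpins the physical dehazing methods surveyed above: direct attenuation of the scene radiance, and additive airlight scattered toward the observer. As a reminder, the standard formulation (not tied to any single method reviewed here) is

$$ I(x) = J(x)\,t(x) + A\big(1 - t(x)\big), \qquad t(x) = e^{-\beta d(x)}, $$

where $I(x)$ is the observed hazy image, $J(x)$ the haze-free scene radiance, $A$ the global atmospheric light, $t(x)$ the transmission, $\beta$ the scattering coefficient, and $d(x)$ the scene depth.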
Figure 3. The architecture of DehazeNet.
Figure 4. Rectified linear unit (ReLU) and bilateral rectified linear unit (BReLU). (a) ReLU. (b) BReLU.
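To make the contrast in Figure 4 concrete, here is a minimal NumPy sketch of the two activations (the function names and default bounds are illustrative assumptions, not taken from the DehazeNet code). DehazeNet adopts BReLU in its non-linear regression layer so that the regressed transmission stays in a valid range:

```python
import numpy as np

def relu(x):
    # Standard ReLU: zero below the origin, unbounded above.
    return np.maximum(0.0, x)

def brelu(x, t_min=0.0, t_max=1.0):
    # Bilateral ReLU: additionally saturates at t_max, so a
    # regressed transmission map is kept within [t_min, t_max].
    return np.minimum(t_max, np.maximum(t_min, x))
```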
Figure 5. Main steps of the proposed single-image dehazing algorithm.
Figure 6. The diagram of AOD-Net.
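For context on Figure 6: rather than estimating the transmission and atmospheric light separately, AOD-Net [83] folds both into a single variable $K(x)$ that the network regresses end-to-end, with $b$ a constant bias (set to 1 in the original paper):

$$ J(x) = K(x)\,I(x) - K(x) + b, \qquad K(x) = \frac{\frac{1}{t(x)}\big(I(x) - A\big) + (A - b)}{I(x) - 1}. $$

Substituting $K(x)$ back gives $J(x) = \frac{1}{t(x)}\big(I(x) - A\big) + A$, i.e., the ASM rearranged for the haze-free image, so the two formulations are equivalent.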
Figure 7. The structure of FD-GAN. (a) Generator. (b) Fusion-discriminator.
Figure 8. Comparison of RS image dehazing methods discussed above.
Figure 9. Comparison of RS image dehazing methods discussed above.
Table 1. Comparison of evaluation metrics on images shown in Figure 7.

| Metric | Image | HE | Retinex | DCP | Non-Local | DehazeNet | MSCNN | AOD-Net | GCANet |
|--------|-------|---------|---------|---------|-----------|-----------|---------|---------|---------|
| MSE | E1 | 58.0250 | 65.3694 | 40.1377 | 63.0135 | 45.5643 | 41.8578 | 45.9550 | 60.8299 |
| MSE | E2 | 58.4546 | 65.3156 | 2.3213 | 18.8858 | 30.1598 | 58.2286 | 48.4368 | 45.0116 |
| MSE | E3 | 63.8380 | 74.9954 | 58.0916 | 70.6542 | 78.5197 | 50.8604 | 54.8527 | 61.5088 |
| MSE | E4 | 65.9776 | 83.2367 | 75.1314 | 74.8605 | 79.7631 | 38.7678 | 58.7273 | 66.4670 |
| MSE | E5 | 66.5203 | 84.4062 | 50.1036 | 78.2738 | 68.5316 | 57.0354 | 57.3905 | 79.1826 |
| MSE | E6 | 66.4734 | 78.6077 | 74.9392 | 76.3182 | 84.3603 | 60.7811 | 58.9837 | 65.4599 |
| MSE | E7 | 64.6617 | 79.2677 | 71.7706 | 68.9340 | 73.9023 | 50.3411 | 58.6004 | 60.0329 |
| MSE | E8 | 52.5054 | 50.0531 | 5.2568 | 16.8775 | 29.1833 | 28.4264 | 40.3536 | 21.0587 |
| MAE | E1 | 16.8697 | 9.3487 | 5.0937 | 10.2678 | 5.4562 | 7.0095 | 6.0234 | 9.6560 |
| MAE | E2 | 14.8499 | 9.2820 | 0.3630 | 2.1677 | 3.2229 | 7.1188 | 6.2519 | 5.4573 |
| MAE | E3 | 20.9289 | 13.0207 | 8.6793 | 16.4741 | 12.8597 | 8.0818 | 7.4172 | 11.7235 |
| MAE | E4 | 23.5596 | 18.7855 | 13.2110 | 16.1918 | 14.5504 | 6.8617 | 8.2913 | 11.2540 |
| MAE | E5 | 21.3970 | 19.7671 | 6.4796 | 15.3561 | 9.9177 | 8.3746 | 7.8404 | 16.3443 |
| MAE | E6 | 25.9595 | 21.0835 | 12.8393 | 21.4060 | 18.4271 | 9.6946 | 7.9932 | 16.2544 |
| MAE | E7 | 24.5050 | 14.9594 | 12.4878 | 14.3627 | 11.9880 | 8.8039 | 8.1327 | 11.5565 |
| MAE | E8 | 14.0012 | 6.1392 | 0.6912 | 1.8448 | 3.0357 | 4.6955 | 4.8347 | 2.2535 |
| PSNR | E1 | 11.3018 | 17.3161 | 19.9218 | 16.1124 | 20.5306 | 16.7291 | 19.1775 | 16.4637 |
| PSNR | E2 | 12.4495 | 17.3834 | 16.0288 | 21.2499 | 22.6673 | 19.2980 | 19.1143 | 20.1044 |
| PSNR | E3 | 10.0178 | 15.0239 | 17.2843 | 12.5169 | 15.3483 | 16.9267 | 18.1787 | 14.5837 |
| PSNR | E4 | 9.1166 | 12.3160 | 14.7143 | 12.9381 | 14.2402 | 16.4077 | 17.4782 | 15.5258 |
| PSNR | E5 | 9.9918 | 12.0322 | 19.2096 | 13.7839 | 16.9787 | 17.3780 | 17.9371 | 13.0886 |
| PSNR | E6 | 8.2387 | 10.8556 | 14.9927 | 10.4802 | 12.7805 | 16.1591 | 17.7590 | 12.0327 |
| PSNR | E7 | 8.6274 | 14.0277 | 14.9921 | 13.4883 | 15.6290 | 15.9782 | 17.6301 | 14.5927 |
| PSNR | E8 | 12.1672 | 19.9542 | 17.5611 | 21.0496 | 22.7793 | 15.2877 | 20.5717 | 21.1517 |
| SSIM | E1 | 0.6244 | 0.8533 | 0.8687 | 0.7670 | 0.8870 | 0.7742 | 0.8429 | 0.8416 |
| SSIM | E2 | 0.6290 | 0.8297 | 0.6681 | 0.8649 | 0.8900 | 0.8723 | 0.8373 | 0.8581 |
| SSIM | E3 | 0.5179 | 0.6285 | 0.7921 | 0.6042 | 0.7909 | 0.7427 | 0.8061 | 0.7321 |
| SSIM | E4 | 0.4934 | 0.6150 | 0.7554 | 0.6082 | 0.6434 | 0.5457 | 0.7807 | 0.7662 |
| SSIM | E5 | 0.6196 | 0.6911 | 0.8652 | 0.7251 | 0.7991 | 0.8230 | 0.8231 | 0.7827 |
| SSIM | E6 | 0.3996 | 0.3357 | 0.7244 | 0.5104 | 0.6864 | 0.7246 | 0.7937 | 0.4873 |
| SSIM | E7 | 0.4193 | 0.7345 | 0.7478 | 0.6231 | 0.7745 | 0.7045 | 0.7783 | 0.6874 |
| SSIM | E8 | 0.5479 | 0.8689 | 0.7142 | 0.8267 | 0.8776 | 0.3633 | 0.8445 | 0.8437 |
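For reference, the four indicators in Table 1 follow their usual definitions. Below is a minimal sketch for 8-bit grayscale images; the constants (e.g., data_range=255) and the use of scikit-image's structural_similarity are assumptions here, since the exact implementation behind Table 1 is not specified:

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(ref, img):
    # Mean squared error between reference and dehazed result; lower is better.
    return np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)

def mae(ref, img):
    # Mean absolute error; lower is better.
    return np.mean(np.abs(ref.astype(np.float64) - img.astype(np.float64)))

def psnr(ref, img, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher is better.
    m = mse(ref, img)
    return np.inf if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

def ssim(ref, img):
    # Structural similarity index [93]; 1.0 indicates identical structure.
    return structural_similarity(ref, img, data_range=255)
```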