# A Novel Active Contours Model for Environmental Change Detection from Multitemporal Synthetic Aperture Radar Images


Department of Civil Engineering, Faculty of Engineering, University of Kurdistan, Sanandaj 6617715175, Iran

Centre Eau Terre Environnement, Institut National de la Recherche Scientifique, Quebec, QC G1K 9A9, Canada

Author to whom correspondence should be addressed.

Received: 16 April 2020 / Revised: 20 May 2020 / Accepted: 22 May 2020 / Published: 29 May 2020

(This article belongs to the Special Issue Multi-temporal Synthetic Aperture Radar)

In this paper, we propose a novel approach based on the active contours model for change detection from synthetic aperture radar (SAR) images. To increase the accuracy of the proposed approach, a new operator is introduced to generate a difference image from the before- and after-change images. A new active contours model is then developed to accurately detect changed regions in the difference image. The proposed model extracts the changed areas as a target feature from the difference image based on training data from changed and unchanged regions. We use the Otsu histogram thresholding method to produce the training data automatically, and the training data are updated during the minimization of the model's energy function. To evaluate the accuracy of the model, we applied the proposed method to three benchmark SAR data sets. The proposed model obtains Kappa coefficients of 84.65%, 87.07%, and 96.26% for the Yellow River Estuary, Bern, and Ottawa data sets, respectively. These results demonstrate the effectiveness of the proposed approach compared to other methods. Another advantage of the proposed model is its high speed compared to conventional methods.

Change detection is the process of using two remote sensing images of a region, acquired at different times, to extract the areas that changed during the interval between acquisitions. In recent decades, optical and radar remote sensing imagery has become a vital resource for change detection applications due to its high spatial and temporal resolution, useful spectral or polarimetric characteristics, extensive spatial coverage, and cost-effectiveness. Synthetic aperture radar (SAR) images in particular, owing to their ability to image in all weather conditions (such as rain and dust) and at night, as well as their suitable spatial resolution, have been widely exploited in change detection applications for natural hazards’ impacts [1,2,3], monitoring and mapping of the environment and natural resources [4,5], urban development [6,7,8], etc.

Change detection algorithms can be categorized into supervised, semi-supervised, and unsupervised classes, depending on the availability of ground truth information. Although supervised approaches achieve higher accuracy, they need sample data of changed and unchanged areas to train the classifier model [9,10,11]; they are therefore less frequently used, since prior information is rarely available in real applications. In semi-supervised change detection algorithms, labeled and unlabeled samples are utilized simultaneously [12,13]: the labeled samples serve as training data, and the remaining pixels are the unlabeled samples. In general, supervised and semi-supervised algorithms are more effective than unsupervised methods. However, because they do not require training data, unsupervised approaches are more prevalent in change detection applications [14].

Unsupervised change detection methods can be divided into two classes: threshold-based and classification-based methods. In threshold-based algorithms, the goal is to find a threshold that correctly classifies the difference image into changed and unchanged classes [15,16]. Gong et al. introduced a neighborhood-based ratio operator to generate a difference image and extracted the changed regions by applying a threshold to it [17]. Furthermore, Sumaiya and Kumari calculated a threshold based on the mean of the difference image and the logarithm of the two input SAR images to extract the changed areas from the background [18]; in another study, the same authors proposed a change detection algorithm based on the Gabor filter and the Kittler–Illingworth thresholding algorithm [19]. Although threshold-based methods are simpler and faster, they are less accurate than other methods.

Owing to their suitable accuracy and simplicity, clustering-based classification models have been widely used for change detection from multitemporal SAR images. Classification-based methods are usually combined with conventional image processing and optimization algorithms to increase their accuracy and reduce their computational cost [20,21]. Shang et al. utilized an artificial immune multi-objective clustering algorithm based on the intensity and texture of the difference image to separate changed regions from unchanged ones [22]. Additionally, Zheng et al. used a K-means clustering method to detect changed pixels using a linear combination of subtraction and log-ratio difference images [23]. Li et al. proposed a multi-objective fuzzy clustering algorithm in which the change detection problem is modeled as a multi-objective optimization problem with two objective functions that preserve image details and remove noise [24]. Zheng et al. used an unsupervised saliency-guided method and K-means clustering to detect changed areas in a difference image obtained from a log-ratio operator [14]. In addition, Tian et al. developed an edge-based fuzzy clustering algorithm to detect changes in SAR images [25].

Artificial neural networks have also been utilized as classifiers in change detection applications. Although they can obtain better performance, they have high complexity and low speed. Convolutional neural networks (CNNs), sparse auto-encoders, and unsupervised clustering algorithms have been applied to detect changes in SAR images [26]. Additionally, Li et al. used a deep learning network named PCANet (Principal Component Analysis Network) together with a saliency method for SAR change detection [27]. Recently, Chen et al. introduced a fast unsupervised deep neural network to generate a difference image for change detection of SAR images [28].

Furthermore, other classification models, such as graph cuts, have been used for change detection in multitemporal SAR images. Carincotte et al. developed a fuzzy hidden Markov chain model combining fuzzy and statistical points of view to identify changes in SAR images [29]. Ma et al. fused wavelet coefficients of the low- and high-frequency bands using fusion rules based on weight averaging and the minimum standard deviation, respectively, to extract changed regions from multitemporal SAR images [30]. Gong et al. presented a change detection method based on texture and intensity information and a multivariate generalized Gaussian graph cuts model [31].

In this paper, we develop an innovative model based on active contours for change detection from SAR images. The proposed approach consists of three stages: difference image generation, change detection by active contours, and accuracy assessment. First, a new difference image operator is introduced to increase the accuracy of change detection. Next, a novel change detection algorithm based on active contours, named the desired feature local active contour (DFLAC) model, is presented. Finally, the accuracy and speed of the model are evaluated and compared with other studies.

In all change detection methods, a difference image is first produced from two images of the same region acquired before and after the change, using one of two standard operators: subtraction or log-ratio. Then, a model is developed to separate the changed from the unchanged areas. In this paper, we first introduce a new difference image generation operator to increase the accuracy of change detection, and we then develop an innovative active contour model to precisely extract the changed regions from the difference image.

Based on the active contours model proposed by Chunming Li [32], we introduced a new model that detects changes in the presence of inhomogeneity and noise in SAR images. We named the proposed model the desired feature local active contour (DFLAC) model because of the extraction of the desired target feature (changed regions) using the local image information. The DFLAC model needs training data from changed and non-changed areas, which are automatically produced by the Otsu thresholding method.

The first step of the change detection algorithm is to generate the difference image from the two images acquired before and after the change. Subtraction and log-ratio are two operators frequently employed in change detection research [14]:
where ${\mathrm{I}}_{\mathrm{d}}$, ${\mathrm{I}}_{\mathrm{b}}$, and ${\mathrm{I}}_{\mathrm{a}}$ are the difference, before, and after images, respectively.

$${\mathrm{I}}_{\mathrm{d}1}=\left|{\mathrm{I}}_{\mathrm{b}}-{\mathrm{I}}_{\mathrm{a}}\right|,$$

$${\mathrm{I}}_{\mathrm{d}2}=\left|{\mathrm{log}}_{10}\left(\frac{{\mathrm{I}}_{\mathrm{b}}+1}{{\mathrm{I}}_{\mathrm{a}}+1}\right)\right|,$$

Although the subtraction operator can detect small changes, the log-ratio operator performs better for change detection in SAR images, which contain multiplicative noise [14]. We propose another equation for generating the difference image, namely the root multiplication of log-ratio and normal difference (RMLND) operator. This operator fuses the subtraction and log-ratio operators and inherits the benefits of both: it can detect small changes while remaining less sensitive to SAR image noise (Figure 3):
where ${\mathrm{I}}_{\mathrm{d}2}$ is log-ratio operator and ${\mathrm{I}}_{\mathrm{d}4}$ is a normal difference operator defined as follows:
where the parameter $\mathsf{\eta}$ is a constant that avoids erroneous results in pixels where ${\mathrm{I}}_{\mathrm{a}}$ and ${\mathrm{I}}_{\mathrm{b}}$ are both equal to zero.

$${\mathrm{I}}_{\mathrm{d}3}=\sqrt{{\mathrm{I}}_{\mathrm{d}2}\times {\mathrm{I}}_{\mathrm{d}4}},$$

$${\mathrm{I}}_{\mathrm{d}4}=\left|\frac{{\mathrm{I}}_{\mathrm{b}}-{\mathrm{I}}_{\mathrm{a}}}{{\mathrm{I}}_{\mathrm{b}}+{\mathrm{I}}_{\mathrm{a}}+\mathsf{\eta}}\right|,$$
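As a minimal illustration of the four operators above, assuming floating-point input images on a common radiometric scale (the function and variable names are ours, not the authors'):

```python
import numpy as np

def difference_images(I_b, I_a, eta=1e-6):
    """Compute the four difference-image operators: subtraction (Eq. 1),
    log-ratio (Eq. 2), RMLND (Eq. 3), and normal difference (Eq. 4).
    eta is a small constant that avoids a zero denominator where both
    images vanish."""
    I_b = I_b.astype(float)
    I_a = I_a.astype(float)
    I_d1 = np.abs(I_b - I_a)                             # subtraction
    I_d2 = np.abs(np.log10((I_b + 1.0) / (I_a + 1.0)))   # log-ratio
    I_d4 = np.abs((I_b - I_a) / (I_b + I_a + eta))       # normal difference
    I_d3 = np.sqrt(I_d2 * I_d4)                          # RMLND: root of the product
    return I_d1, I_d2, I_d3, I_d4
```

Because the RMLND response is the geometric mean of the log-ratio and normal-difference responses, a pixel must respond to both operators to score highly, which is what suppresses isolated speckle while preserving genuine small changes.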

The proposed DFLAC model was developed based on Chunming Li’s model [32] and detects changed regions in the difference image utilizing training data. In this model, the contour C separates the domain (R) of the difference image (${\mathrm{I}}_{\mathrm{d}}$) into changed and unchanged regions ${\mathrm{R}}_{\mathrm{c}}$ and ${\mathrm{R}}_{\mathrm{u}}$, so that $\mathrm{R}={\mathrm{R}}_{\mathrm{c}}\cup {\mathrm{R}}_{\mathrm{u}}$. The energy function of the model sums the differences between the pixel values inside the curve C and the training data. This function is minimized in an iterative process, during which the curve C moves towards the border between the changed and unchanged regions. Inspired by Li’s model, the energy function of the proposed model in a local region ${\mathrm{S}}_{\mathrm{x}}$ in the neighborhood of pixel x is defined as follows:
where ${\mathrm{I}}_{\mathrm{d}}$(y) represents the difference image in ${\mathrm{S}}_{\mathrm{x}}$, and ${\mathrm{t}}_{\mathrm{i}}$ (i = 1, 2) represents the training data of the changed and unchanged regions. Additionally, A is defined as:
When more than one training datum per class is introduced to the model, the minimum difference between the image intensity and the training data is used in the integral of Equation (5). Therefore, this equation can be rewritten as follows:
where ${\mathrm{t}}_{\mathrm{i}}^{\mathrm{j}}$ shows j^{th} training data for the i^{th} class of the image (changed and unchanged regions).

$$\mathrm{F}\left(\mathrm{x}\right)={{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{\mathrm{A}}^{}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-{\mathrm{t}}_{\mathrm{i}}\right)}^{2}\mathrm{dy},$$

$$\mathrm{A}=\left\{\begin{array}{ll}{\mathrm{S}}_{\mathrm{x}}\cap {\mathrm{R}}_{\mathrm{c}}, & \mathrm{i}=1\\ {\mathrm{S}}_{\mathrm{x}}\cap {\mathrm{R}}_{\mathrm{u}}, & \mathrm{i}=2\end{array}\right.,$$

$$\mathrm{F}\left(\mathrm{x}\right)={{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{\mathrm{A}}^{}\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-{\mathrm{t}}_{\mathrm{i}}^{\mathrm{j}}\right)}^{2}\mathrm{dy},$$

The above equation calculates the difference between the intensity of each pixel inside the curve C and the most similar training value. Accordingly, after minimizing the energy function of the model, the curve C extracts the changed regions from the unchanged area. Equation (7) is improved by utilizing a kernel function K, with K(x-y) = 0 for $\mathrm{y}\notin {\mathrm{S}}_{\mathrm{x}}$, as a non-negative window function that separates ${\mathrm{S}}_{\mathrm{x}}$ from the rest of the image domain and brings the local intensity into the energy function [32]:

$$\mathrm{F}\left(\mathrm{x}\right)={{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{{\mathrm{S}}_{\mathrm{x}}}^{}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-{\mathrm{t}}_{\mathrm{i}}^{\mathrm{j}}\right)}^{2}\mathrm{dy}.$$

Due to the nature of the SAR images, the difference image ${\mathrm{I}}_{\mathrm{d}}$ is very heterogeneous and noisy. Therefore, we used the Li model to decompose the difference image into the true image, bias field (a parameter to formulate the intensity inhomogeneity in each pixel of the image), and noise:
where ${\mathrm{I}}_{\mathrm{d}}$(y) depicts the difference image, J(y) represents the true image, b represents the bias field, and n represents image noise [32].

$${\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)=\mathrm{b}\left(\mathrm{x}\right).\mathrm{J}\left(\mathrm{y}\right)+\mathrm{n}\left(\mathrm{y}\right),$$

Within the region ${\mathrm{S}}_{\mathrm{x}}$, the true image J(y), and consequently the true values of the training data, take approximately constant values for the changed and unchanged regions, so we can write:
where $\mathrm{p}$ represents the true value of the training data. Therefore, substituting Equation (10) into Equation (8) gives:

$${\mathrm{t}}_{\mathrm{i}}^{\mathrm{j}}=\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)+\mathrm{n}\left(\mathrm{y}\right),$$

$$\mathrm{F}\left(\mathrm{x}\right)={{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{{\mathrm{S}}_{\mathrm{x}}}^{}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)-\mathrm{n}\left(\mathrm{y}\right)\right)}^{2}\mathrm{dy}.$$

$\mathrm{F}\left(\mathrm{x}\right)$ is the energy function at pixel x; integrating $\mathrm{F}\left(\mathrm{x}\right)$ over the whole image domain gives the energy function of the entire image:

$$\mathrm{F}={{\displaystyle \int}}_{\mathsf{\Omega}}^{}{{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{{\mathrm{S}}_{\mathrm{x}}}^{}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)-\mathrm{n}\left(\mathrm{y}\right)\right)}^{2}\mathrm{dydx}.$$

After exchanging the order of the integration, Equation (12) is written as follows:

$$\mathrm{F}={{\displaystyle \int}}_{\mathsf{\Omega}}^{}{{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{{\mathrm{S}}_{\mathrm{x}}}^{}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)-\mathrm{n}\left(\mathrm{y}\right)\right)}^{2}\mathrm{dxdy}.$$

The implicit form of curve C based on level set theory is used to minimize the energy function of the DFLAC model [33]:

$$\mathrm{F}={{\displaystyle \int}}_{\mathsf{\Omega}}^{}{{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{{\displaystyle \int}}_{{\mathrm{S}}_{\mathrm{x}}}^{}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)-\mathrm{n}\left(\mathrm{y}\right)\right)}^{2}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)\mathrm{dxdy}.$$

In the above equation, the changed and unchanged regions can be presented by their membership functions defined by ${\mathrm{W}}_{1}\left(\mathsf{\phi}\right)=\mathrm{H}\left(\mathsf{\phi}\right)$ and ${\mathrm{W}}_{2}\left(\mathsf{\phi}\right)=1-\mathrm{H}\left(\mathsf{\phi}\right),$ respectively, where H represents the Heaviside function, and $\mathsf{\phi}$ represents a signed distance function to describe curve C implicitly [33].

The above energy function F is the image term of the DFLAC model. Two additional terms, the length and distance regularization terms introduced in the study by Li and his colleagues [32], are added to regularize the model:
where $\mathsf{\alpha}$, $\mathsf{\beta},$ and $\mathsf{\gamma}$ represent the constant parameters that set the weight of each term in the energy function.

$${\mathrm{F}}_{\mathrm{Final}}={\mathsf{\alpha}\mathrm{F}}_{\mathrm{image}}+{\mathsf{\beta}\mathrm{F}}_{\mathrm{length}}+{\mathsf{\gamma}\mathrm{F}}_{\mathrm{distance}},$$

The parameter $\mathsf{\phi}$ is an implicit representation of curve C: $\mathsf{\phi}>0$ denotes the inside of the curve, $\mathsf{\phi}<0$ the outside, and $\mathsf{\phi}=0$ the curve C itself. As a result, by evolving $\mathsf{\phi}$ over time, minimizing ${\mathrm{F}}_{\mathrm{Final}}$ with respect to $\mathsf{\phi}$ using the standard gradient descent method, the curve C moves towards the border between the changed and unchanged regions of the image [34]:

$$\frac{\partial \mathsf{\phi}}{\partial \mathrm{t}}=-\frac{\partial {\mathrm{F}}_{\mathrm{Final}}}{\partial \mathsf{\phi}}.$$

Therefore, the evolution equation of the level set function $\mathsf{\phi}$ over time is represented as follows:
where $\mathsf{\Delta}\mathrm{t}$ is the time step, and ${\mathsf{\phi}}_{\mathrm{k}}$ represents the level set function at iteration k. Calculating the derivative $\frac{\partial {\mathrm{F}}_{\mathrm{Final}}}{\partial \mathsf{\phi}}$ gives the corresponding gradient flow equation:
where $\mathsf{\delta}\left(\mathsf{\phi}\right)=\frac{\partial \mathrm{H}\left(\mathsf{\phi}\right)}{\partial \mathsf{\phi}}$, div is the divergence operator, $\nabla$ represents the gradient operator, and ${\mathrm{d}}_{\mathrm{p}}$ is defined according to [32] as follows:

$${\mathsf{\phi}}_{\mathrm{k}+1}={\mathsf{\phi}}_{\mathrm{k}}+\frac{\partial {\mathsf{\phi}}_{\mathrm{k}}}{\partial \mathrm{t}}\mathsf{\Delta}\mathrm{t}\Rightarrow {\mathsf{\phi}}_{\mathrm{k}+1}={\mathsf{\phi}}_{\mathrm{k}}-\frac{\partial {\mathrm{F}}_{\mathrm{Final}}}{\partial \mathsf{\phi}}\mathsf{\Delta}\mathrm{t},$$

$$\frac{\partial \mathsf{\phi}}{\partial \mathrm{t}}=-\mathsf{\alpha}\mathsf{\delta}\left(\mathsf{\phi}\right){{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{\mathrm{f}}_{\mathrm{i}}+\mathsf{\beta}\mathsf{\delta}\left(\mathsf{\phi}\right)\mathrm{div}\left(\frac{\nabla \mathsf{\phi}}{\left|\nabla \mathsf{\phi}\right|}\right)+\mathsf{\gamma}\mathrm{div}\left({\mathrm{d}}_{\mathrm{p}}\left(\left|\nabla \mathsf{\phi}\right|\right)\nabla \mathsf{\phi}\right),$$

$${\mathrm{d}}_{\mathrm{p}}\left(\mathrm{x}\right)=\frac{{\mathrm{p}}^{\prime}\left(\mathrm{x}\right)}{\mathrm{x}}\hspace{1em}\&\hspace{1em}\mathrm{p}\left(\mathrm{x}\right)=\frac{{\left(\mathrm{x}-1\right)}^{2}}{2}.$$
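For concreteness, the curvature and distance-regularization terms of the gradient flow above can be discretized with finite differences. The following is a sketch under our own naming, using ${\mathrm{d}}_{\mathrm{p}}(\mathrm{s}) = (\mathrm{s}-1)/\mathrm{s}$, which follows from $\mathrm{p}(\mathrm{x})={(\mathrm{x}-1)}^{2}/2$:

```python
import numpy as np

def regularization_terms(phi):
    """Finite-difference evaluation of the curvature term div(grad(phi)/|grad(phi)|)
    and the distance-regularization term div(d_p(|grad(phi)|) grad(phi)),
    with d_p(s) = (s - 1)/s derived from p(x) = (x - 1)^2 / 2."""
    eps = 1e-10
    dphi_y, dphi_x = np.gradient(phi)            # np.gradient: rows first, then columns
    mag = np.sqrt(dphi_x**2 + dphi_y**2) + eps
    # curvature: divergence of the unit normal field
    curvature = (np.gradient(dphi_x / mag, axis=1)
                 + np.gradient(dphi_y / mag, axis=0))
    dp = (mag - 1.0) / mag                       # d_p(|grad(phi)|)
    dist_reg = (np.gradient(dp * dphi_x, axis=1)
                + np.gradient(dp * dphi_y, axis=0))
    return curvature, dist_reg
```

For a signed distance function ($\left|\nabla \mathsf{\phi}\right|=1$) the distance term vanishes, which is exactly the property that keeps $\mathsf{\phi}$ well-behaved during evolution.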

Additionally, the term ${\mathrm{f}}_{\mathrm{i}}$ can be achieved using the following expression:

$${\mathrm{f}}_{\mathrm{i}}={{\displaystyle \int}}^{\text{}}\mathrm{K}\left(\mathrm{x}-\mathrm{y}\right)\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}\left(\mathrm{y}\right)-\mathrm{b}\left(\mathrm{x}\right){\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\left(\mathrm{y}\right)-\mathrm{n}\left(\mathrm{y}\right)\right)}^{2}\mathrm{dx}.$$

Moreover, the above integrals can be expressed using convolutions:
where * represents the convolution operator and ${\mathrm{K}}_{\mathrm{u}}={{\displaystyle \int}}^{\text{}}\mathrm{K}\left(\mathrm{y}-\mathrm{x}\right)\mathrm{dx}$; ${\mathrm{K}}_{\mathrm{u}}=1$ everywhere except near the boundary of the image domain Ω [32].

$${\mathrm{f}}_{\mathrm{i}}=\underset{\mathrm{j}}{\mathrm{min}}\left({\mathrm{I}}_{\mathrm{d}}{}^{2}{\mathrm{K}}_{\mathrm{u}}-2{\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}{\mathrm{I}}_{\mathrm{d}}\left(\mathrm{b}\ast \mathrm{K}\right)-2{\mathrm{I}}_{\mathrm{d}}\mathrm{n}{\mathrm{K}}_{\mathrm{u}}+{\left({\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\right)}^{2}\left({\mathrm{b}}^{2}\ast \mathrm{K}\right)+2{\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}\mathrm{n}\left(\mathrm{b}\ast \mathrm{K}\right)+{\mathrm{n}}^{2}{\mathrm{K}}_{\mathrm{u}}\right),$$
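As an illustration, this convolution form of ${\mathrm{f}}_{\mathrm{i}}$ can be evaluated by expanding the squared residual ${\left({\mathrm{I}}_{\mathrm{d}}-\mathrm{bp}-\mathrm{n}\right)}^{2}$ term by term under the convolution, with a simple mean filter standing in for the window K (a hypothetical sketch; the paper does not specify the kernel implementation, and ${\mathrm{K}}_{\mathrm{u}}$ is taken as 1 away from borders):

```python
import numpy as np

def box_filter(img, r=2):
    """(2r+1) x (2r+1) mean filter, a stand-in for the window function K."""
    pad = np.pad(img.astype(float), r, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / k**2

def f_class(I_d, b, n, p_values, r=2):
    """f_i for one class: the expansion of (I_d - b p - n)^2 convolved with K,
    minimized pointwise over the class's training values p_j."""
    bK = box_filter(b, r)        # b * K
    b2K = box_filter(b**2, r)    # b^2 * K
    K_u = 1.0                    # K_u = 1 away from the image border
    terms = [I_d**2 * K_u - 2*p*I_d*bK - 2*I_d*n*K_u
             + p**2 * b2K + 2*p*n*bK + n**2 * K_u
             for p in p_values]
    return np.minimum.reduce(terms)
```

As a sanity check, with a constant bias field b = 1 and zero noise, this expression collapses to the pointwise distance $\underset{\mathrm{j}}{\mathrm{min}}{\left({\mathrm{I}}_{\mathrm{d}}-{\mathrm{p}}^{\mathrm{j}}\right)}^{2}$.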

Furthermore, in order to calculate the parameter b, we assume that $\mathsf{\phi}$ and t are fixed and minimize ${\mathrm{F}}_{\mathrm{Final}}$ with respect to b. The parameter $\mathrm{b}$ is then given by:
where ${\mathrm{P}}_{1}$ and ${\mathrm{P}}_{2}$ are two matrices of the same size as the intensity matrix. Each element of these matrices is the training value of the changed or unchanged class, respectively, that minimizes the function E for the corresponding pixel:

$$\mathrm{b}=\frac{\left(\left({\mathrm{I}}_{\mathrm{d}}-\mathrm{n}\right){{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{\mathrm{P}}_{\mathrm{i}}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)\right)\ast \mathrm{K}}{\left({{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{\mathrm{P}}_{\mathrm{i}}^{2}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)\right)\ast \mathrm{K}},$$

$${\mathrm{E}}_{\mathrm{i}}={\left({\mathrm{I}}_{\mathrm{d}}-\mathrm{bt}-\mathrm{n}\right)}^{2}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right).$$

In the same way, other unknown parameters n and t are obtained as follows:

$$\mathrm{n}=\frac{{\mathrm{I}}_{\mathrm{d}}{\mathrm{K}}_{\mathrm{u}}-\left(\mathrm{b}\ast \mathrm{K}\right){{\displaystyle \sum}}_{\mathrm{i}=1}^{2}{\mathrm{P}}_{\mathrm{i}}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)}{{\mathrm{K}}_{\mathrm{u}}}.$$

Furthermore, the true value of training data can be computed in each iteration as follows:
where ${\mathrm{B}}_{\mathrm{i}}^{\mathrm{j}}$ is a binary array indicating the pixels for which ${\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}$ minimizes the function ${\mathrm{E}}_{\mathrm{i}}$ defined in Equation (20). In the first iteration, the training data ${\mathrm{t}}_{\mathrm{i}}^{\mathrm{j}}$ are used instead of ${\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}$.

$${\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}=\frac{{{\displaystyle \int}}^{\text{}}\left(\mathrm{b}\ast \mathrm{K}\right)\left({\mathrm{I}}_{\mathrm{d}}-\mathrm{n}\right){\mathrm{B}}_{\mathrm{i}}^{\mathrm{j}}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)\mathrm{dx}}{{{\displaystyle \int}}^{\text{}}\left({\mathrm{b}}^{2}\ast \mathrm{K}\right){\mathrm{B}}_{\mathrm{i}}^{\mathrm{j}}{\mathrm{W}}_{\mathrm{i}}\left(\mathsf{\phi}\right)\mathrm{dx}},$$

The DFLAC model separates the difference image domain into two classes of changed and unchanged areas based on training data for those classes. In the proposed model, the training data are not sampled from the difference image but are generated automatically. First, a threshold (T) is obtained from the difference image using the Otsu algorithm [35]. T is a normalized value in the range [0, 1] that can be used to classify an image into two classes; we use it to generate the training data of the model from the difference image automatically. Second, ${\mathrm{k}}_{1}$ values with equal steps are selected from the range $\left[\mathrm{T},\text{}1\right]$ and ${\mathrm{k}}_{2}$ values from the range $\left[0,\text{}\mathrm{T}\right]$, where ${\mathrm{k}}_{1}$ and ${\mathrm{k}}_{2}$ are the numbers of training data of the changed and unchanged classes, respectively. For example, if T = 0.6, ${\mathrm{k}}_{1}=2$, and ${\mathrm{k}}_{2}=4$, then 0.8 and 1 are the training data of the changed class, and 0, 0.15, 0.3, and 0.45 are the training data of the unchanged class. Third, the produced training data can be projected into other ranges, such as [0, 255]. It should be noted that each difference image operator has its own threshold value (T) and training data.
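The sampling rule above can be sketched as follows (a minimal histogram-based Otsu threshold is included for self-containment; function and variable names are illustrative, not from the paper):

```python
import numpy as np

def otsu_threshold(x, bins=256):
    """Normalized Otsu threshold in [0, 1]: maximize between-class variance."""
    x = x.astype(float).ravel()
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    hist, edges = np.histogram(x, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class probability below the cut
    mu = np.cumsum(p * edges[:-1])          # cumulative mean
    mu_t = mu[-1]
    sigma_b2 = (mu_t * omega - mu)**2 / (omega * (1.0 - omega) + 1e-12)
    return edges[np.argmax(sigma_b2)]

def training_data(I_d, k1=2, k2=4):
    """k1 equally stepped values in (T, 1] for the changed class and
    k2 values in [0, T) for the unchanged class."""
    T = otsu_threshold(I_d)
    changed = np.linspace(T, 1.0, k1 + 1)[1:]            # steps of (1 - T)/k1
    unchanged = np.linspace(0.0, T, k2, endpoint=False)  # steps of T/k2 from 0
    return T, changed, unchanged
```

With T = 0.6, k1 = 2, and k2 = 4, the selection rule reproduces the example in the text: {0.8, 1} for the changed class and {0, 0.15, 0.3, 0.45} for the unchanged class.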

In order to evaluate the accuracy of the proposed model, three indices were used: the percentage correct classification (PCC), overall error (OE), and the Kappa coefficient. PCC is the percentage of pixels that are correctly classified [36]:
where TP and TN are the numbers of changed and unchanged pixels, respectively, that are correctly classified, and N is the total number of pixels in the image. OE is the proportion of changed and unchanged pixels that are incorrectly classified [36]:
where FP and FN are the numbers of unchanged and changed pixels, respectively, that are falsely detected. Finally, the Kappa coefficient indicates the accuracy of the classification model according to the difference between the observed accuracy and the chance agreement [36]:
where:
where ${\mathrm{N}}_{\mathrm{c}}$ and ${\mathrm{N}}_{\mathrm{u}}$ are the total numbers of pixels that belong to the changed and unchanged classes, respectively [36].

$$\mathrm{PCC}=\frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{N}},$$

$$\mathrm{OE}=\frac{\mathrm{FP}+\mathrm{FN}}{\mathrm{N}},$$

$$\mathrm{Kappa}=\frac{\mathrm{PCC}-\mathrm{PRE}}{1-\mathrm{PRE}},$$

$$\mathrm{PRE}=\frac{\left(\mathrm{TP}+\mathrm{FP}\right).{\mathrm{N}}_{\mathrm{c}}+\left(\mathrm{TN}+\mathrm{FN}\right).{\mathrm{N}}_{\mathrm{u}}}{{\mathrm{N}}^{2}},$$
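These criteria can be computed directly from the binary output and reference maps; a minimal helper (our own naming, following the definitions above) is:

```python
import numpy as np

def accuracy_metrics(output, reference):
    """PCC, OE, and Kappa from binary change maps (1 = changed, 0 = unchanged)."""
    out = np.asarray(output, dtype=bool).ravel()
    ref = np.asarray(reference, dtype=bool).ravel()
    TP = int(np.sum(out & ref))     # changed pixels detected as changed
    TN = int(np.sum(~out & ~ref))   # unchanged pixels detected as unchanged
    FP = int(np.sum(out & ~ref))    # unchanged pixels detected as changed
    FN = int(np.sum(~out & ref))    # changed pixels detected as unchanged
    N = out.size
    N_c, N_u = TP + FN, TN + FP     # true class sizes
    PCC = (TP + TN) / N
    OE = (FP + FN) / N
    PRE = ((TP + FP) * N_c + (TN + FN) * N_u) / N**2
    kappa = (PCC - PRE) / (1.0 - PRE)
    return PCC, OE, kappa
```

Note that a perfect map gives PCC = 1, OE = 0, and Kappa = 1 whenever both classes are present (PRE < 1).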

Figure 1 shows the flowchart of the proposed model. The proposed model has four main steps, including difference image generation, training data production, implementation of the DFLAC model, and finally, accuracy assessment. Furthermore, each step has several sub-steps that are described in the following.

- In the first step, difference image generation, the two SAR images of the data set (before and after the change) are introduced to the model, and the difference image is produced based on one of Equations (1) to (4). Second, in the training data sampling step, a threshold T is first estimated using Otsu’s method, and the training data of the changed and unchanged classes are then selected based on this threshold. In the third step, the DFLAC model is implemented. The DFLAC model starts by defining the initial curve implicitly $\left({\mathsf{\phi}}_{0}\right)$ based on level set theory, as a simple shape such as a square or circle. The curve of the DFLAC model is then evolved over time using Equation (17), and the parameters $\mathrm{b}$, $\mathrm{n}$, and ${\mathrm{p}}_{\mathrm{i}}^{\mathrm{j}}$ are estimated according to Equations (22), (24), and (25). These last two steps are repeated until the curve reaches stability and no longer changes (i.e., ${{\displaystyle \int}}^{\text{}}\left|{\mathsf{\phi}}_{\mathrm{n}}-{\mathsf{\phi}}_{\mathrm{n}-1}\right|<\mathsf{\epsilon}$). Finally, the output of the model is generated by separating the changed regions (pixels inside the curve, where $\mathsf{\phi}\ge 0$) from the unchanged areas (pixels outside the curve, where $\mathsf{\phi}<0$).
- The accuracy assessment was the last step of the workflow, in which the error image was computed by subtracting the output image from the reference image as follows:$$\mathrm{error}\text{}\mathrm{image}=\left|\mathrm{Reference}\text{}\mathrm{image}-\mathrm{output}\text{}\mathrm{image}\right|.$$
- Finally, the accuracy of the model was assessed: using the error image, accuracy criteria such as PCC, OE, and Kappa were estimated based on Equations (26)–(28).

The main stage of the proposed model is the implementation of the DFLAC model, and this stage takes most of the running time of the whole model, mainly because the evolution of the active contour is an iterative and therefore time-consuming process. In this regard, a greater distinction between the values of the changed and unchanged pixels in the difference image leads to higher accuracy and faster evolution of the active contour. In addition, the time step parameter, i.e., $\mathsf{\Delta}\mathrm{t}$ in Equation (17), regulates the rate of the active contour evolution and has a considerable effect on the performance and speed of the model. Selecting a large value for the time step causes the model to overshoot the minimum of the energy function, whereas a very small value reduces the model's speed. Accordingly, assigning an optimal value to the time step has a crucial impact on the efficiency of the model.

We used three sample data sets to evaluate the DFLAC model in change detection from SAR images. Brief information about the data sets is presented in Table 1.

The first data set is a part of the Yellow River Estuary SAR data at Dongying, Shandong province, China. This data set shows a block of farmland that is landlocked. It should be noted that the first image of the Yellow River Estuary data set is four-look data, whereas the second image is single-look data, meaning that the two images have different levels of speckle.

The second data set, named the Bern data, was taken by the SAR sensor of the European Remote Sensing 2 satellite and relates to a region near the city of Bern, Switzerland. The River Aare flooded parts of the cities of Thun and Bern and completely flooded the airport of Bern; therefore, the Aare valley between Bern and Thun was chosen to extract the flooded regions. The Ottawa data set is the third sample data set, acquired over the city of Ottawa by the Radarsat SAR sensor; it illustrates regions that were once flooded. Moreover, all data sets have a reference image as ground truth, which indicates the changed regions precisely, and we used these references to evaluate our model. Figure 2 illustrates the sample data sets and their reference images.

In this section, the difference images were generated using the four operators given in Equations (1) to (4). The produced difference images lie in the range [0, 1], but for better processing, we normalized them to the range [0, 255]. Figure 3 shows the difference images of the selected data sets.

The constant parameters of the DFLAC model, i.e., $\alpha $, $\beta ,$ and $\gamma $, were determined by trial and error as $1$, $0.11$, and $0.4$, respectively. To define the training data of the changed and unchanged regions, we applied the Otsu thresholding algorithm to the four difference image operators. Then, for each difference image operator, four values were selected as training data of the changed class and two values as training data of the unchanged class, in the range of 0 to 255 (Section 2.4). The training data of each data set for the RMLND difference image operator are shown in Table 2.

Then, the difference image, constant parameters, and training data of each sample data set were fed into the DFLAC model, and after 20 iterations, the curve of the model (C) extracted the changed regions. The final output of the DFLAC model based on the RMLND operator is shown in Figure 4.

To evaluate the accuracy of the DFLAC model, we computed the error image by subtracting the output of the model from the reference image. The accuracy parameters, namely Kappa, PCC, and OE, were then calculated and compared with the results of three well-known models for change detection in SAR images: saliency guided K-means (SGK) [14], neighborhood-based ratio (NR) [17], and log-normal generalized Kittler and Illingworth thresholding (LN-GKIT) [37]. In addition, two active contours models, Chan and Vese (CV) [34] and distance regularized level set evolution (DRLSE) [38], were used for the evaluation of the proposed model. Table 3 compares the accuracy parameters of the DFLAC model with those of the other algorithms for the selected data sets, using the RMLND difference image operator as the best operator (Table 4).
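The three accuracy parameters follow their standard definitions for binary change maps, which can be sketched as follows; this is a generic sketch of the metrics, not the authors' evaluation code.

```python
import numpy as np

def change_detection_accuracy(pred, ref):
    """PCC, OE, and Kappa (all in %) for binary change maps.

    pred, ref: boolean arrays (True = changed). OE is the fraction of
    missed changes plus false alarms, PCC = 1 - OE, and Kappa corrects
    the observed agreement (PCC) for chance agreement.
    """
    pred = np.asarray(pred, bool).ravel()
    ref = np.asarray(ref, bool).ravel()
    n = pred.size
    tp = np.sum(pred & ref)     # correctly detected changes
    tn = np.sum(~pred & ~ref)   # correctly detected non-changes
    fp = np.sum(pred & ~ref)    # false alarms
    fn = np.sum(~pred & ref)    # missed changes
    pcc = (tp + tn) / n
    oe = (fp + fn) / n
    # expected agreement by chance, from the marginal class frequencies
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (pcc - pe) / (1.0 - pe)
    return 100 * pcc, 100 * oe, 100 * kappa
```

Because the unchanged class dominates these scenes, PCC alone can look high even for a poor map; Kappa is the more discriminative score, which is why Table 3 reports all three.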

As shown in Figure 4, due to the different levels of speckle in the two images of the Yellow River Estuary data set, the difference between the two images is relatively high in some unchanged areas. Therefore, the proposed model detects some unchanged areas as changed regions, which decreases its accuracy.

In this section, we discuss the parameters that affect the accuracy of the model. Additionally, to evaluate the efficiency of our model, the speed of the model is compared with the SGK approach [14].

The accuracy and performance of the proposed model depend on three factors: the fixed parameters $\alpha $, $\beta $, and $\gamma $ in Equation (18), the type of difference image operator, and the number of training values. The impact of each is discussed below.

The constant parameters $\alpha $, $\beta $, and $\gamma $ regularize the effect of the length, distance, and image terms in the energy function of the model (Equation (12)). Changing these parameters alters the evolution of the curve and therefore the results, and hence the accuracy, of the model. To determine the role of each parameter, we fixed two of them, varied the third in steps of 0.1, and then calculated the average accuracy over the three data sets. The variation of the Kappa parameter with the constant parameters $\alpha $, $\beta $, and $\gamma $ is depicted in Figure 5, Figure 6 and Figure 7, respectively.
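The fix-two-vary-one tuning procedure can be sketched as a coordinate-wise sweep. Here `run_model` is a hypothetical placeholder for one full run of the DFLAC model returning a Kappa score, since the model itself is not reproduced in this section.

```python
import numpy as np

def sweep_parameter(run_model, datasets, base=(1.0, 0.11, 0.4),
                    index=2, grid=np.arange(0.0, 1.01, 0.1)):
    """Coordinate-wise sweep for trial-and-error tuning.

    run_model(alpha, beta, gamma, dataset) -> Kappa (%) is a hypothetical
    callable standing in for one full DFLAC run. `index` selects which of
    (alpha, beta, gamma) is varied in 0.1 steps while the other two stay
    fixed at `base`. Returns the grid value with the best mean Kappa
    averaged over the data sets.
    """
    best_val, best_kappa = None, -np.inf
    for v in grid:
        params = list(base)
        params[index] = v
        mean_kappa = np.mean([run_model(*params, d) for d in datasets])
        if mean_kappa > best_kappa:
            best_val, best_kappa = v, mean_kappa
    return best_val, best_kappa
```

Repeating the sweep once per parameter reproduces the experiments behind Figures 5 through 7; a full grid search over all three parameters would be more thorough but cubically more expensive.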

Based on Figure 5, in all data sets, Kappa increases as $\alpha $ rises from zero to 1; therefore, 1 is the best value for $\alpha $. Additionally, as shown in Figure 6, the rate of change of Kappa with respect to $\beta $ is low in all data sets, because the length term has only a slight impact on the energy function of active contour models. Figure 7 shows that the Yellow River Estuary data set reaches its maximum Kappa at $\gamma = 0.2$, in contrast to the other data sets and the average over all data sets, which peak at $\gamma = 0.4$.

Consequently, the maximum Kappa occurs when $\alpha $, $\beta $, and $\gamma $ are 1, 0.11, and 0.4, respectively, and we therefore selected these values as the optimal constant parameters for implementing the model. It can be noted that the Yellow River Estuary and Bern data sets have the minimum and maximum sensitivity to variation of the constant parameters, respectively.

We assessed the four difference-image operators, namely subtraction, log-ratio, normal difference, and RMLND, in terms of the Kappa parameter. The results of executing these operators on the sample data sets are given in Table 4 and Figure 8.

It can be seen that the RMLND operator performs best; therefore, the RMLND difference image was chosen as the principal operator for implementing the proposed model. The normal difference operator ranks second after RMLND, followed by the log-ratio and subtraction operators. The subtraction operator performs poorly because it flags small intensity differences caused by speckle noise as changes; this defect is most pronounced at the borders of the changed regions.

According to Figure 8, the normal difference operator achieved the best result for the Yellow River Estuary data set, followed by the RMLND, log-ratio, and subtraction operators in decreasing order of accuracy. The subtraction operator obtained the worst result for the Bern data set (32.44%), with the log-ratio, normal difference, and RMLND operators ranking progressively higher. For the Ottawa image, the RMLND operator achieved the best result, followed by the log-ratio, normal difference, and subtraction operators. In terms of the mean accuracy over the three data sets, RMLND is the best operator, followed by the normal difference and log-ratio operators, while the subtraction operator is the least accurate.

The number of training values for the changed and unchanged regions is another parameter that affects the accuracy of the proposed model. To evaluate this effect and determine the optimal numbers, we fixed the number of unchanged training values and calculated the mean Kappa over the data sets as a function of the number of changed training values. Similarly, two was obtained as the best number of training values for the unchanged regions. Figure 9 and Figure 10 show the variation of Kappa with respect to the number of training values of the changed and unchanged areas, respectively.

Based on Figure 9, the Kappa parameter increases with the number of training values of the changed regions in all data sets, reaches its maximum at four, and then decreases gradually. According to Figure 10, the Kappa of all data sets and their average rises sharply when the number of training values of the unchanged regions increases from 1 to 2, and then declines slightly. Therefore, the optimal numbers of training values for the changed and unchanged regions are four and two, respectively.

One measure of a model's efficiency is its running time. Since the proposed model has five steps, the running time of each step was estimated separately, and the total was compared with the run time of the SGK model [14] on the three sample data sets (Table 5). As seen in Table 5, our model is approximately 10 times faster than the SGK model on the three sample data sets.
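A per-step timing breakdown like the one in Table 5 can be collected with a sketch along these lines; the step callables are hypothetical stand-ins for the five DFLAC stages, which are not reproduced here.

```python
import time

def time_steps(steps, *args):
    """Measure the wall-clock time of each pipeline step separately.

    `steps` is a list of (name, callable) pairs; each callable receives
    the output of the previous step. Returns per-step timings and their
    total, mirroring the per-step layout of Table 5.
    """
    timings, total = {}, 0.0
    data = args
    for name, fn in steps:
        t0 = time.perf_counter()
        data = (fn(*data),)
        dt = time.perf_counter() - t0
        timings[name] = dt
        total += dt
    return timings, total
```

Using `time.perf_counter` rather than `time.time` avoids clock-adjustment artifacts in short measurements; for stable numbers, each step would normally be averaged over several runs.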

In this paper, we proposed a novel active contours model for change detection in SAR images. The model is designed to extract a target feature from a digital image using training data drawn from the target feature and the image background; here, the changed and unchanged regions were treated as the target feature and image background, respectively. The training data were generated automatically from the difference image using the Otsu thresholding method. Furthermore, we introduced a new difference image operator that attains higher accuracy than the existing operators. For accuracy assessment, the model was applied to three multitemporal SAR data sets, and the outputs were compared with their corresponding reference images (ground truth). The accuracy of the proposed model depends on three constant parameters, which were determined by trial and error, and on the numbers of training values of the changed and unchanged regions, which were identified manually. The results demonstrate the higher accuracy of the proposed model compared with five well-known change detection models for SAR images. It should be mentioned that the proposed model was implemented entirely in the Matlab R15b software environment.

Conceptualization, methodology, and implementation of the programming: S.A.; original draft preparation: S.H. and S.A.; result evaluation: S.H.; review and editing: S.H. All authors have read and agreed to the published version of the manuscript.

This research received no external funding.

The authors would like to thank Kamran Kazemi, associate professor of the Electrical Engineering Department, Shiraz University, Shiraz, Iran, for providing the data sets used in this study.

The authors declare no conflict of interest.

- Giustarini, L.; Hostache, R.; Matgen, P.; Schumann, G.J.-P.; Bates, P.D.; Mason, D.C. A change detection approach to flood mapping in urban areas using TerraSAR-X. IEEE Trans. Geosci. Remote Sens.
**2012**, 51, 2417–2430. [Google Scholar] [CrossRef] - Tzeng, Y.-C.; Chiu, S.-H.; Chen, D.; Chen, K.-S. Change detections from SAR images for damage estimation based on a spatial chaotic model. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 1926–1930. [Google Scholar]
- Li, N.; Wang, R.; Deng, Y.; Chen, J.; Liu, Y.; Du, K.; Lu, P.; Zhang, Z.; Zhao, F. Waterline mapping and change detection of Tangjiashan Dammed Lake after Wenchuan Earthquake from multitemporal high-resolution airborne SAR imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2014**, 7, 3200–3209. [Google Scholar] [CrossRef] - Jin, S.; Yang, L.; Danielson, P.; Homer, C.; Fry, J.; Xian, G. A comprehensive change detection method for updating the National Land Cover Database to circa 2011. Remote Sens. Environ.
**2013**, 132, 159–175. [Google Scholar] [CrossRef] - Quegan, S.; Le Toan, T.; Yu, J.J.; Ribbes, F.; Floury, N. Multitemporal ERS SAR analysis applied to forest mapping. IEEE Trans. Geosci. Remote Sens.
**2000**, 38, 741–753. [Google Scholar] [CrossRef] - Nonaka, T.; Shibayama, T.; Umakawa, H.; Uratsuka, S. A comparison of the methods for the urban land cover change detection by high-resolution SAR data. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 3470–3473. [Google Scholar]
- Ban, Y.; Yousif, O.A. Multitemporal spaceborne SAR data for urban change detection in China. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.
**2012**, 5, 1087–1094. [Google Scholar] [CrossRef] - Gong, M.; Zhang, P.; Su, L.; Liu, J. Coupled dictionary learning for change detection from multisource data. IEEE Trans. Geosci. Remote Sens.
**2016**, 54, 7077–7091. [Google Scholar] [CrossRef] - Samaniego, L.; Bárdossy, A.; Schulz, K. Supervised classification of remotely sensed imagery using a modified k-NN technique. IEEE Trans. Geosci. Remote Sens.
**2008**, 46, 2112–2125. [Google Scholar] [CrossRef] - Frate, F.D.; Schiavon, G.; Solimini, C. Application of neural networks algorithms to QuickBird imagery for classification and change detection of urban areas. In Proceedings of the IGARSS 2004, 2004 IEEE International Geoscience and Remote Sensing Symposium, Anchorage, AK, USA, 20–24 September 2004; Volume 2, pp. 1091–1094. [Google Scholar]
- Fernàndez-Prieto, D.; Marconcini, M. A novel partially supervised approach to targeted change detection. IEEE Trans. Geosci. Remote Sens.
**2011**, 49, 5016–5038. [Google Scholar] [CrossRef] - Roy, M.; Ghosh, S.; Ghosh, A. A novel approach for change detection of remotely sensed images using semi-supervised multiple classifier system. Inf. Sci.
**2014**, 269, 35–47. [Google Scholar] [CrossRef] - Yuan, Y.; Lv, H.; Lu, X. Semi-supervised change detection method for multi-temporal hyperspectral images. Neurocomputing
**2015**, 148, 363–375. [Google Scholar] [CrossRef] - Zheng, Y.; Jiao, L.; Liu, H.; Zhang, X.; Hou, B.; Wang, S. Unsupervised saliency-guided SAR image change detection. Pattern Recognit.
**2017**, 61, 309–326. [Google Scholar] [CrossRef] - Bazi, Y.; Bruzzone, L.; Melgani, F. An unsupervised approach based on the generalized Gaussian model to automatic change detection in multitemporal SAR images. IEEE Trans. Geosci. Remote Sens.
**2005**, 43, 874–887. [Google Scholar] [CrossRef] - Bazi, Y.; Bruzzone, L.; Melgani, F. Automatic identification of the number and values of decision thresholds in the log-ratio image for change detection in SAR images. IEEE Geosci. Remote Sens. Lett.
**2006**, 3, 349–353. [Google Scholar] [CrossRef] - Gong, M.; Cao, Y.; Wu, Q. A neighborhood-based ratio approach for change detection in SAR images. IEEE Geosci. Remote Sens. Lett.
**2011**, 9, 307–311. [Google Scholar] [CrossRef] - Sumaiya, M.N.; Kumari, R.S.S. Logarithmic mean-based thresholding for sar image change detection. IEEE Geosci. Remote Sens. Lett.
**2016**, 13, 1726–1728. [Google Scholar] [CrossRef] - Sumaiya, M.N.; Kumari, R.S.S. Gabor filter based change detection in SAR images by KI thresholding. Opt. Int. J. Light Electron Opt.
**2017**, 130, 114–122. [Google Scholar] [CrossRef] - Bazi, Y.; Melgani, F.; Bruzzone, L.; Vernazza, G. A genetic expectation-maximization method for unsupervised change detection in multitemporal SAR imagery. Int. J. Remote Sens.
**2009**, 30, 6591–6610. [Google Scholar] [CrossRef] - Belghith, A.; Collet, C.; Armspach, J.P. Change detection based on a support vector data description that treats dependency. Pattern Recognit. Lett.
**2013**, 34, 275–282. [Google Scholar] [CrossRef] - Shang, R.; Qi, L.; Jiao, L.; Stolkin, R.; Li, Y. Change detection in SAR images by artificial immune multi-objective clustering. Eng. Appl. Artif. Intell.
**2014**, 31, 53–67. [Google Scholar] [CrossRef] - Zheng, Y.; Zhang, X.; Hou, B.; Liu, G. Using combined difference image and k-means clustering for SAR image change detection. IEEE Geosci. Remote Sens. Lett.
**2014**, 11, 691–695. [Google Scholar] [CrossRef] - Li, H.; Gong, M.; Wang, Q.; Liu, J.; Su, L. A multi-objective fuzzy clustering method for change detection in SAR images. Appl. Soft Comput.
**2016**, 46, 767–777. [Google Scholar] [CrossRef] - Tian, D.; Gong, M. A novel edge-weight based fuzzy clustering method for change detection in SAR images. Inf. Sci.
**2018**, 467, 415–430. [Google Scholar] [CrossRef] - Gong, M.; Yang, H.; Zhang, P. Feature learning and change feature classification based on deep learning for ternary change detection in SAR images. ISPRS J. Photogramm. Remote Sens.
**2017**, 129, 212–225. [Google Scholar] [CrossRef] - Li, M.; Li, M.; Zhang, P.; Wu, Y.; Song, W.; An, L. SAR Image Change Detection Using PCANet Guided by Saliency Detection. IEEE Geosci. Remote Sens. Lett.
**2019**, 16, 402–406. [Google Scholar] [CrossRef] - Chen, H.; Jiao, L.; Liang, M.; Liu, F.; Yang, S.; Hou, B. Fast unsupervised deep fusion network for change detection of multitemporal SAR images. Neurocomputing
**2019**, 332, 56–70. [Google Scholar] [CrossRef] - Carincotte, C.; Derrode, S.; Bourennane, S. Unsupervised change detection on SAR images using fuzzy hidden Markov chains. IEEE Trans. Geosci. Remote Sens.
**2006**, 44, 432–441. [Google Scholar] [CrossRef] - Ma, J.; Gong, M.; Zhou, Z. Wavelet fusion on ratio images for change detection in SAR images. IEEE Geosci. Remote Sens. Lett.
**2012**, 9, 1122–1126. [Google Scholar] [CrossRef] - Gong, M.; Li, Y.; Jiao, L.; Jia, M.; Su, L. SAR change detection based on intensity and texture changes. ISPRS J. Photogramm. Remote Sens.
**2014**, 93, 123–135. [Google Scholar] [CrossRef] - Li, C.; Huang, R.; Ding, Z.; Gatenby, J.C.; Metaxas, D.N.; Gore, J.C. A level set method for image segmentation in the presence of intensity inhomogeneities with application to MRI. IEEE Trans. Image Process.
**2011**, 20, 2007. [Google Scholar] - Ahmadi, S.; Zoej, M.J.V.; Ebadi, H.; Moghaddam, H.A.; Mohammadzadeh, A. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours. Int. J. Appl. Earth Obs. Geoinf.
**2010**, 12, 150–157. [Google Scholar] [CrossRef] - Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process.
**2001**, 10, 266–277. [Google Scholar] [CrossRef] [PubMed] - Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern.
**1979**, 9, 62–66. [Google Scholar] [CrossRef] - Gong, M.; Su, L.; Jia, M.; Chen, W. Fuzzy clustering with a modified MRF energy function for change detection in synthetic aperture radar images. IEEE Trans. Fuzzy Syst.
**2013**, 22, 98–109. [Google Scholar] [CrossRef] - Moser, G.; Serpico, S.B. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. IEEE Trans. Geosci. Remote Sens.
**2006**, 44, 2972–2982. [Google Scholar] [CrossRef] - Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process.
**2010**, 19, 3243–3254. [Google Scholar]

Data Set | Size (Pixel) | Resolution (m) | Date of the First Image | Date of the Second Image | Location | Sensor
---|---|---|---|---|---|---
Yellow River Estuary | 289 × 257 | 8 | June 2008 | June 2009 | Dongying, Shandong Province, China | Radarsat 2
Bern | 301 × 301 | 30 | April 1999 | May 1999 | Region near the city of Bern | European Remote Sensing 2
Ottawa | 350 × 290 | 10 | July 1997 | August 1997 | Ottawa City | Radarsat 2

Dataset | Changed Regions (4 values) | Unchanged Regions (2 values)
---|---|---
Yellow River Estuary | 127.00, 169.67, 212.33, 255 | 0, 42.17
Bern | 112.40, 159.94, 207.47, 255 | 0, 32.44
Ottawa | 127.00, 169.67, 212.33, 255 | 0, 42.17

Dataset | Method | PCC % | OE % | Kappa %
---|---|---|---|---
Yellow River Estuary | SGK | 98.06 | 1.92 | 85.24
Yellow River Estuary | NR | 88.33 | | 79.99
Yellow River Estuary | LN-GKIT | 69.60 | 30.82 | 33.78
Yellow River Estuary | CV | 95.37 | 4.63 | 84.38
Yellow River Estuary | DRLSE | 90.84 | 9.16 | 69.47
Yellow River Estuary | DFLAC | 95.49 | 4.51 | 84.65
Bern | SGK | 99.68 | 0.32 | 87.05
Bern | NR | 99.66 | 0.34 | 85.90
Bern | LN-GKIT | 99.90 | 0.35 | 85.37
Bern | CV | 99.61 | 0.39 | 85.32
Bern | DRLSE | 98.70 | 1.30 | 63.32
Bern | DFLAC | 99.68 | 0.32 | 87.07
Ottawa | SGK | 98.95 | 1.05 | 95.98
Ottawa | NR | 97.91 | 2.09 | 92.20
Ottawa | LN-GKIT | 98.35 | 2.22 | 91.87
Ottawa | CV | 97.06 | 2.93 | 88.92
Ottawa | DRLSE | 95.44 | 4.56 | 81.37
Ottawa | DFLAC | 99.00 | 1.00 | 96.26

Data Set | Subtraction | Log-Ratio | Normal Difference | RMLND
---|---|---|---|---
Yellow River Estuary | 75.22 | 83.97 | 84.71 | 84.65
Bern | 32.44 | 81.26 | 83.14 | 87.07
Ottawa | 83.81 | 95.26 | 94.92 | 96.26
Average | 63.82 | 86.83 | 87.59 | 89.33

Data Set | DFLAC Step 1 | DFLAC Step 2 | DFLAC Step 3 | DFLAC Step 4 | DFLAC Step 5 | DFLAC Total Time | SGK | SGK/DFLAC
---|---|---|---|---|---|---|---|---
Yellow River Estuary | 0.01 | 0.04 | 0.01 | 8.90 | 0.01 | 8.97 | 60.53 | 6.75
Bern | 0.02 | 0.05 | 0.01 | 8.31 | 0.01 | 8.40 | 86.60 | 10.31
Ottawa | 0.02 | 0.05 | 0.02 | 9.34 | 0.01 | 9.44 | 101.17 | 10.72
Average | 0.02 | 0.05 | 0.01 | 8.85 | 0.01 | 8.94 | 82.77 | 9.25

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).