Article

Active Contour Model Using Fast Fourier Transformation for Salient Object Detection

School of Computer Science and Technology, Xi’an Jiaotong University, Xi’an 710049, China
* Author to whom correspondence should be addressed.
Submission received: 13 December 2020 / Revised: 11 January 2021 / Accepted: 12 January 2021 / Published: 15 January 2021
(This article belongs to the Special Issue Multidimensional Digital Signal Processing)

Abstract

The active contour model is a widely studied technique for salient object detection. Most active contour models for saliency detection are developed in the context of natural scenes, and their behavior on synthetic and medical images is not well investigated. Existing active contour models cope with many kinds of complexity but face challenges on synthetic and medical images, such as the need for a precise, automatically fitted contour and the expensive computational cost of initialization. Our aim is to detect the object boundary automatically, without re-initialization, and to let the subsequent evolution extract the salient object. To this end, we propose a simple, novel numerical solution scheme that applies the fast Fourier transform (FFT) to the active contour (snake) differential equations. The scheme has two major advantages: it completely avoids approximating the expensive spatial derivatives with finite differences, and the regularization scheme can be extended more generally. In addition, the FFT-based solution is significantly faster than the traditional solution in the spatial domain. Finally, the model uses a Fourier force function to fit curves naturally and extract salient objects from the background. Compared with state-of-the-art methods, the proposed method achieves at least a 3% increase in accuracy on three diverse sets of images. Moreover, it runs very fast: the average running time of the proposed method is about one twelfth of the baseline.

1. Introduction

Active contour models serve as a powerful tool in various domains of image processing, such as salient object detection and segmentation. Salient object segmentation, which aims at extracting the most prominent object regions in the visual field, has been a fascinating research topic in computer vision for decades. A broad family of active contour models has been introduced to tackle the problem of saliency segmentation, i.e., deciding whether a pixel belongs to a noticeable object or to an inconsequential background. The success of these algorithms depends on the initial contour, and they are sensitive to noise [1]. The basic idea of the active contour model (ACM) framework is to drive a curve along its interior normal and stop it at the actual object boundary, defined as a minimum of an energy functional [2]. The active contour model is one of the most significant algorithms in image segmentation [3]. Active contours are connectivity-preserving relaxation techniques [4] that apply to the complicated problem of image segmentation. ACMs provide smooth, closed object contours and achieve subpixel accuracy. In short, ACMs are well suited to homogeneous images, for which they deliver reasonably high segmentation performance; in contrast, inhomogeneous images and the automatic initialization problem cannot be handled ideally. Kass et al. [5] presented this idea as a variational approach for accurately locating closed image contours without the need to detect edge elements and then link them. Since the initial introduction of snakes by Kass et al., active contours have been used for boundary tracking and image segmentation. Active contours have desirable advantages and suit a wide variety of applications. The methodology is based on deformable contours that adapt to the shapes and motions of various objects of interest. This powerful tool has been applied in many areas of computing, supporting computer vision tasks [6,7], image segmentation [8,9,10,11], visual tracking and visual navigation [12,13,14,15,16,17,18,19,20,21], synthetic and medical applications [22,23,24,25,26,27,28,29], and salient object detection [30,31].
Recently, Weruaga et al. [32] proposed a frequency-domain formulation of the active contour model. The authors derived the snake equation of motion directly from the energy minimization via Parseval's relationship, without "detouring" through the Euler-Lagrange equation. Solving the active contour equation in the spectral domain is computationally faster than solving it in the spatial domain.
Generally, the techniques used for saliency detection fall into two categories: spatial-domain and frequency-domain methods. Spatial-domain methods are widely used in computer vision and object detection research. The spatial domain offers low computational complexity per operation, yet convolution in it remains unsatisfactorily complex. Spatial-domain processing sometimes fails for real-time object detection, whereas the frequency domain reduces convolution to simple pixel-wise multiplication. Frequency transformation describes the details and features of an object clearly, and spectral parameters can be analyzed more easily in the frequency domain. The spatial domain focuses on local information to detect saliency details in an image, while global information can be explored and highlighted well in the frequency domain. Spatial-domain methods neglect inner information, making it hard to extract the contour of an object against a sensitive and complicated background; the frequency domain, on the other hand, provides a smooth curve that highlights the object edge.
To reduce this complexity, we transform the image to the frequency domain, which involves complex-valued arguments. The frequency transform of an oscillating signal is a smooth curve outlining its extremes [33]. The frequency domain is well suited to cyclic structures, and researchers are attracted to it for its multi-resolution nature and high-quality local characteristics. The convolution theorem [34] states that, under suitable conditions, the Fourier transform of a convolution of two signals is the pointwise product of their Fourier transforms. FFT-based transform-domain models not only capture energy variations within multi-dimensional data successfully [35] but are also computationally less expensive [36]. The low-complexity coefficients specify the contour of the salient object. In view of these favorable properties of the frequency domain, we propose a salient object detection model using an active contour in the frequency domain. The proposed active contour is particularly effective in dealing with topological changes of the curve.
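As a brief illustration of the convolution theorem that the frequency-domain approach relies on, the following Python sketch (an illustrative choice, using NumPy) checks that circular convolution with a small box filter equals pointwise multiplication of FFTs; the image and kernel are arbitrary.

```python
import numpy as np

# Minimal sketch of the convolution theorem: circular convolution in the
# spatial domain equals pointwise multiplication of the FFTs.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
kernel = np.zeros_like(image)
kernel[:3, :3] = 1.0 / 9.0          # simple 3x3 box filter, zero-padded

# Pointwise product in the frequency domain ...
freq_result = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# ... agrees with direct circular convolution, computed here by shifting.
direct = sum(np.roll(image, (dy, dx), axis=(0, 1)) / 9.0
             for dy in range(3) for dx in range(3))
assert np.allclose(freq_result, direct)
```

The pointwise product costs only O(N log N) via the FFT, which is the computational advantage exploited throughout this paper.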
Active contour models can be further divided into two categories: edge-based models (EBMs) [3,5,37,38,39,40] and region-based models (RBMs) [41,42], described in detail as follows. Most popular algorithms follow edge detection as an initial step toward object segmentation. The global contrast-based region segmentation model [43] is suitable for object boundary representation but sensitive to noise. The conformal curvature flows (CCF) model [44] depends on the initial curve placement and fails to deliver precise and accurate segmentation. The earlier model introduced by Kass et al. [5,45] is a popular active contour formulation for image segmentation, suitable for most applications. Cremers [46] proposed a model that evolves a smooth, closed curve to accurately identify objects of interest. Edge-based models apply image intensities and gradient information to minimize a level set formulation; they employ image features for object classification and emphasize the desired object boundary for further analysis. Geodesic active contours (GAC) [3] are an edge-based model that evolves contours based on gradient information during curve evolution. Sensitivity to the gradient also affects object edges, leading to weak segmentation results. Region-based methods exploit statistical information to guide the active contour during its motion. Statistical information separates foreground from background and drives the energy to the best-fitting optimum. This better use of statistical information gives RBMs priority over EBMs: curve evolution can be controlled inside and outside the object, for example on weak edges or on images without edges, and RBMs are less sensitive to the initial contour location for interior and exterior boundaries simultaneously [23]. Region-based models evolve curves by fitting regional statistics of the image, i.e., color, intensity, or texture information, rather than local edges [46,47,48], and they are less sensitive to noise and to initialization. The Chan-Vese (CV) model [41] is an attractive two-phase region-based model that assumes each region has constant intensity; it often has problems with intensity inhomogeneity in images. Mumford and Shah [49,50] introduced a model that minimizes an energy functional but did not address its computational cost and complicated procedure.
Moreover, even when the problem is simple, previous implementations do not lead to simpler mechanisms. To find the salient region in the frequency domain, we introduce the spectral residual, applied to the Fourier transform map, to generate a per-pixel saliency map. A local force function accounts for abrupt intensity changes and acts as a penalty term; at this stage we use the gradient operator to detect object boundaries. We also introduce a Fourier force function, free of explicit trigonometric functions, that extracts the salient object for the active contour. With an active contour based on the fast Fourier transform, i.e., a frequency-domain curve fitted during initialization, the incorporation of a spectral residual function highlights the salient object inside the boundary and separates it from the background by means of the Fourier force function.
Contributions of the paper can be summarized as follows:
  • A new hybrid active contour model using the FFT, comprising efficient features of local region-based and global region-based fitting energies for salient object detection, is proposed.
  • The frequency domain has properties that can be exploited to discover the salient object.
  • A Fourier force function is used to distinguish salient objects from the background for the active contour.
  • The proposed fast Fourier active contour model is evaluated through a set of experiments on medical and synthetic images.
We present a model that characterizes the behavior of an active deformable model from a frequency-domain perspective and that has yielded some significant results. The new formulation provides an interesting framework for designing the stiffness characteristics of the model, in addition to the standard flexibility and rigidity parameters. In fact, partial stiffness directives, which are easily expressed in the described formulation, offer a simple way to obtain a different perspective on gradual stiffness. Basing this analysis on the frequency domain is not an odd idea: parametric models are space-invariant linear systems and can therefore naturally be represented in the frequency domain. The basic idea behind the proposed algorithm is to build a fast, high-level active contour model that increases computational efficiency and maximizes effectiveness for saliency detection on complex synthetic and medical images. Our experimental results support the following conclusions:
  • The fast Fourier transform (FFT) is an efficient procedure for computing the discrete Fourier transform (DFT) of a series; it exploits the fact that DFT coefficients can be computed iteratively, saving considerable computational time.
  • The FFT not only settles the computational problem but also significantly reduces the associated round-off errors.
  • The frequency-based formulation provides a simple characterization of the processing kernel.
  • The curve is fitted smoothly and automatically at initialization time.
  • Re-initialization and re-iteration are avoided.
  • The curve is drawn immediately on the object boundary.
  • The salient object is computed accurately with minimal complexity.
  • Fast execution offers significant computational savings.
To address the limitations of previous methods and to incorporate the features above, we propose a hybrid method that combines region (local or global) and edge information for salient object detection with an active contour. In this paper, the active contour explores global information and region statistics for salient object detection in the frequency domain, as illustrated in Figure 1.
The rest of the paper is organized as follows. Section 2 presents a brief overview of related work. Section 3 describes the proposed method and its implementation details. Section 4 presents the experimental results and discussion. Section 5 concludes the paper.

2. Related Work

The related models are built on the active contour, a computer-generated curve that evolves under energy forces to extract visually attentive pixels from the image and detect salient objects. Recently, the active contour model (ACM) has been widely used for saliency detection in applications such as edge detection, shape recognition, and object tracking. In this section, we briefly review related active contour models.

2.1. Active Contour (Snake) Formulations

Let the standard snake $v(s)$ introduced by Kass et al. [5] be parameterized by the real-valued spatial coordinates $x$ and $y$:
$$v(s) = \big(x(s),\, y(s)\big), \quad s \in [0, L)$$
which is circular with period $L$. The internal energy of the snake is described by the functional
$$E_{int} = \frac{1}{2}\int_0^L \alpha(s)\left|\frac{\partial v(s)}{\partial s}\right|^2 + \beta(s)\left|\frac{\partial^2 v(s)}{\partial s^2}\right|^2 ds$$
where $\alpha(s)$ and $\beta(s)$ denote the tension and rigidity parameter functions, respectively. The external force function $E_{ext}(v(s))$ describes the salient features and moves the active contour according to the principles of physics. The snake settles into the state of least overall energy:
$$\min_{v(s)} \left\{ E_{int}(v(s)) + E_{ext}(v(s)) \right\}$$
where $F_{int}(v(s))$ represents the internal forces (stretching and bending) and $F_{ext}(v(s))$ the external force. To solve the resulting fourth-order linear equation, a time dimension is added to the model [51], $v(s) \rightarrow v(s;t)$; starting from the initial contour $v(s;0) = v_0(s)$, the equation of motion can be written as
$$\frac{\partial v(s;t)}{\partial t} - \frac{\partial}{\partial s}\!\left[\alpha(s)\frac{\partial v(s;t)}{\partial s}\right] + \frac{\partial^2}{\partial s^2}\!\left[\beta(s)\frac{\partial^2 v(s;t)}{\partial s^2}\right] - F_{ext}(v(s;t)) = 0$$
Lastly, in the steady state, the contour no longer evolves and the force-balance equation is satisfied.

2.2. Chan-Vese Model

The model proposed by Chan and Vese [52] uses an uncomplicated energy functional based on the Mumford-Shah model [53], approximating the intensities inside the contour by the constant $c_1$ and those outside by $c_2$. Let $I : \Omega \subset \mathbb{R}^2 \rightarrow \mathbb{R}$ be an input image; the Chan-Vese level set energy can be defined as
$$\begin{aligned} E^{CV} ={} & \lambda_1 \int_\Omega |I(x) - c_1|^2\, H_\varepsilon(\phi(x))\, dx + \lambda_2 \int_\Omega |I(x) - c_2|^2 \big(1 - H_\varepsilon(\phi(x))\big)\, dx \\ & + \mu \int_\Omega \left|\nabla H_\varepsilon(\phi)\right| dx + \nu \int_\Omega H_\varepsilon(\phi)\, dx, \qquad \mu \geq 0,\ \nu \geq 0 \end{aligned}$$
where $\mu$, $\lambda_1$, and $\lambda_2$ are positive parameters. The Euclidean length of the curve $C$ is scaled by $\mu \geq 0$, and the constant $\nu$ weights the area term inside the contour. The regularized Heaviside function $H_\varepsilon(\phi)$ is given by
$$H_\varepsilon(\phi) = \frac{1}{2}\left(1 + \frac{2}{\pi}\arctan\!\left(\frac{\phi}{\varepsilon}\right)\right),$$
where $\varepsilon$ controls the smoothness of the Heaviside function. The global region intensities $c_1$ and $c_2$ drive the contour inside and outside the region; minimizing Equation (5) with respect to $\phi$ via the gradient descent method [54] gives the formulation
$$\frac{\partial \phi}{\partial t} = \left(-\lambda_1 (I - c_1)^2 + \lambda_2 (I - c_2)^2 + \mu\, \mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) - \nu\right)\delta_\varepsilon(\phi),$$
where $\delta_\varepsilon(\phi)$ is the smoothed version of the Dirac delta function,
$$\delta_\varepsilon(\phi) = \frac{\varepsilon}{\pi\,(\phi^2 + \varepsilon^2)},$$
where $\varepsilon$ supervises the smoothness of the curve. The Dirac delta function maintains the width in Equation (6) and, in Equation (8), preserves and adjusts the curve $\phi$. Minimizing the energy in Equation (5) with respect to $c_1$ and $c_2$ gives
$$c_1(\phi) = \frac{\int_\Omega I(x)\, H_\varepsilon(\phi(x))\, dx}{\int_\Omega H_\varepsilon(\phi(x))\, dx}, \qquad c_2(\phi) = \frac{\int_\Omega I(x)\big(1 - H_\varepsilon(\phi(x))\big)\, dx}{\int_\Omega \big(1 - H_\varepsilon(\phi(x))\big)\, dx}.$$
The curve of the CV algorithm evolves inside the image region and has a global nature; as a result, it fails when encountering inhomogeneous regions.
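For concreteness, the following Python sketch shows one possible discretization of a single Chan-Vese update step following Equations (5)-(10); the parameter values, the use of np.gradient for derivatives, and the small stabilizing constants are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def heaviside(phi, eps=1.0):
    # Regularised Heaviside, Eq. (6)
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def dirac(phi, eps=1.0):
    # Regularised Dirac delta, Eq. (8)
    return eps / (np.pi * (phi ** 2 + eps ** 2))

def chan_vese_step(image, phi, lam1=1.0, lam2=1.0, mu=0.2, nu=0.0,
                   dt=0.1, eps=1.0):
    H = heaviside(phi, eps)
    c1 = (image * H).sum() / (H.sum() + 1e-8)                # Eq. (9)
    c2 = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)    # Eq. (10)

    # Curvature term div(grad(phi) / |grad(phi)|)
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    curv = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

    # Gradient-descent update, Eq. (7)
    dphi = dirac(phi, eps) * (-lam1 * (image - c1) ** 2
                              + lam2 * (image - c2) ** 2
                              + mu * curv - nu)
    return phi + dt * dphi, c1, c2
```

Calling chan_vese_step repeatedly from an initial signed level set would reproduce the global behavior (and the inhomogeneity weakness) described above.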

2.3. Active Contour with Selective Local and Global Segmentation Model

The ACSLGS model [23] includes a force function based on internal and external intensities. The force function shrinks the dynamic contour when it is outside the object and expands it when the curve is inside. This function keeps its values in the range [-1, 1]; the force function is formulated as
$$spf(I(x)) = \frac{I(x) - \dfrac{c_1 + c_2}{2}}{\max\!\left(\left|I(x) - \dfrac{c_1 + c_2}{2}\right|\right)}, \quad x \in \Omega$$
where $c_1$ and $c_2$ are the constant intensities defined in Equations (9) and (10), which drive the force function. The ACSLGS level set implementation drives the curve toward the object boundary; mathematically, the ACSLGS model can be written as
$$\frac{\partial \phi}{\partial t} = spf(I(x)) \cdot \left(\mathrm{div}\!\left(\frac{\nabla \phi}{|\nabla \phi|}\right) + \alpha\right)|\nabla \phi| + \nabla spf(I(x)) \cdot \nabla \phi, \quad x \in \Omega$$
The initialization value is constant inside and outside the contour, and $\mathrm{div}(\nabla\phi/|\nabla\phi|)|\nabla\phi|$ regularizes the level set function $\phi$ [55]. A Gaussian function keeps the curve smooth during evolution, so the regularizing term is unnecessary in the presence of the Gaussian filter; removing both $\mathrm{div}(\nabla\phi/|\nabla\phi|)|\nabla\phi|$ and $\nabla spf(I(x))\cdot\nabla\phi$ from Equation (13) yields the simplified formulation
$$\frac{\partial \phi}{\partial t} = spf(I(x)) \cdot \alpha\, |\nabla \phi|, \quad x \in \Omega$$
Level set initialization can be set to,
$$\phi(x, t=0) = \begin{cases} -p, & x \in \Omega_0 - \partial\Omega_0 \\ 0, & x \in \partial\Omega_0 \\ p, & x \in \Omega - \Omega_0 \end{cases}$$
where $\Omega$ is the image domain, $\Omega_0$ is a subset of $\Omega$, $\partial\Omega_0$ is the boundary of $\Omega_0$, and $p$ is a constant with $p > 0$. The ACSLGS model has the same problems and disadvantages as the CV model and fails on images with intensity inhomogeneity.
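A compact Python sketch of the signed pressure force and the simplified update of Equation (14) is given below; the Gaussian smoothing stands in for the removed regularization term, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spf(image, phi, eps=1.0):
    # Signed pressure force of Eq. (11), built from the global means c1, c2
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    c1 = (image * H).sum() / (H.sum() + 1e-8)
    c2 = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    f = image - (c1 + c2) / 2.0
    return f / (np.abs(f).max() + 1e-8)          # values kept in [-1, 1]

def acslgs_step(image, phi, alpha=20.0, dt=0.1, sigma=1.0):
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    phi = phi + dt * spf(image, phi) * alpha * grad_mag   # Eq. (14)
    # Gaussian smoothing replaces the explicit regularisation term
    return gaussian_filter(phi, sigma)
```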

3. Solution Implementing Fast Fourier Transformation

Before giving our formulation, we review the spectral-domain solution for contour-based active contour models [51]. The formulation starts from the time-discrete version of the Euler-Lagrange equation and assumes constant $\alpha$ and $\beta$:
$$\frac{v^t - v^{t-1}}{\tau} - \alpha \frac{\partial^2 v^t}{\partial s^2} + \beta \frac{\partial^4 v^t}{\partial s^4} - F_{ext}(v^{t-1}) = 0$$
where the superscript $(\cdot)^t$ denotes the temporal index. Let $\hat{v} = \mathrm{FFT}\{v\}$ and $\hat{F}_{ext}(v^{t-1}) = \mathrm{FFT}\{F_{ext}(v^{t-1})\}$ be the Fourier transforms of the contour and of the external force vector. In the spectral domain, the spatial derivatives of the contour become a multiplication of the transformed contour with a vector of spectral coefficients, called the stiffness spectrum $\hat{k}$:
$$\frac{\hat{v}^t - \hat{v}^{t-1}}{\tau} + \underbrace{\left[\alpha \left(\frac{2\pi\nu}{N}\right)^{\!2} + \beta \left(\frac{2\pi\nu}{N}\right)^{\!4}\right]}_{\hat{k}} \odot\, \hat{v}^t - \hat{F}_{ext}(v^{t-1}) = 0$$
The operation $\odot$ represents the Hadamard product and $\nu = [0\ 1\ 2\ \cdots\ N-1]^T$. After some elementary rearrangement, the following intermediate equation is obtained:
$$\hat{v}^t = \frac{\mathbf{1}}{\mathbf{1} + \tau \hat{k}} \odot \left(\hat{v}^{t-1} + \tau \hat{F}_{ext}(v^{t-1})\right)$$
where $\mathbf{1} = [1\ 1\ \cdots\ 1]^T$ has length $N$. The contour in the spatial domain is recovered by the inverse Fourier transform.
Note that most FFT implementations are designed for complex-valued data. A 2D point can therefore be represented as a complex number, associating $z = u + iv$ with the point $(u, v)$ in the image plane.
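A minimal sketch of the spectral snake update of Equation (17) is given below, with the contour stored as a complex vector $z = x + iy$ as just described; the values of alpha, beta, tau and the sampled external force are placeholders.

```python
import numpy as np

def fft_snake_step(z, f_ext, alpha=0.1, beta=0.01, tau=1.0):
    """One implicit snake step in the Fourier basis, Eq. (17).

    z      : complex contour vector, z = x + i*y
    f_ext  : complex external-force vector sampled on the contour
    """
    N = z.size
    nu = np.arange(N)
    # Stiffness spectrum k_hat of Eq. (16)
    k_hat = alpha * (2 * np.pi * nu / N) ** 2 + beta * (2 * np.pi * nu / N) ** 4
    z_hat = np.fft.fft(z)
    f_hat = np.fft.fft(f_ext)
    # Because the stiffness operator is diagonal in the Fourier basis,
    # the implicit step reduces to a pointwise division.
    z_hat_new = (z_hat + tau * f_hat) / (1.0 + tau * k_hat)
    return np.fft.ifft(z_hat_new)
```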

3.1. Fourier Solution for Region Based Model

The primary weakness of the method in [51] is that it deals with only a single contour and cannot handle topological changes. We therefore adopt a region-based formulation. For region-based active contour methods, a similar formulation can be used:
$$\frac{\phi^t - \phi^{t-1}}{\tau} - spf(I, \phi^{t-1}) = 0$$
where $\phi$ is the level set function defined on the image plane and $\tau$ is the time step. Here, we slightly abuse the notation $spf$, letting it depend on both the input image and the previous estimate. Taking the FFT of both sides gives
$$\frac{\hat{\phi}^t - \hat{\phi}^{t-1}}{\tau} - \mathrm{FFT}\{spf(I, \phi^{t-1})\} = 0$$
Here, $\hat{\phi}^t$ is the Fourier transform of the level set function. For initialization, we set $\phi^0$ according to the spectral residual [56] and transform it into the spectral domain by the FFT to obtain $\hat{\phi}^0$:
$$\hat{\phi}^0 = \exp\!\left(\log\!\left(\big|\mathrm{FFT}\{\phi^0\}\big| + 1\right) * G_\sigma + i \cdot \mathcal{A}\!\left(\mathrm{FFT}\{\phi^0\}\right)\right)$$
Here, the log magnitude of the Fourier transform is smoothed by convolution with the Gaussian $G_\sigma$, and $\mathcal{A}(\cdot)$ extracts the phase angle.
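One possible realization of the initialization in Equation (20) is sketched below; using the input image itself as $\phi^0$ and using SciPy's gaussian_filter for the convolution with $G_\sigma$ are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def init_fourier_level_set(phi0, sigma=4.0):
    """Fourier-domain initial level set, Eq. (20)."""
    spectrum = np.fft.fft2(phi0)
    # Smooth the log magnitude: log(|FFT{phi0}| + 1) convolved with G_sigma
    log_mag = np.log(np.abs(spectrum) + 1.0)
    smoothed = gaussian_filter(log_mag, sigma)
    # A(.) operator: keep the original phase
    phase = np.angle(spectrum)
    return np.exp(smoothed + 1j * phase)
```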

3.2. The Local Force from the Gradient

Active contour models usually use the gradient to catch the contour or to constrain the zero level set of the level set function. In region-based methods, the gradient encourages slow changes of the function over flat regions while allowing abrupt changes between foreground and background. The force derived from the gradient thus provides a local force that influences the level set function. An equivalent formulation of the gradient force in the spectral domain is
$$F_{ext} = \mathrm{FFT}\left\{\mu_x^2 + \mu_y^2\right\}$$
where $F_{ext}$ is the external force from the gradient and $\mu_x$, $\mu_y$ are its components along the image axes. The spatial frequency decomposition is independent of the location of features inside the image.
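The external force of Equation (21) might be computed as in the following sketch; treating $\mu_x$ and $\mu_y$ as the image derivatives obtained with np.gradient is an assumption.

```python
import numpy as np

def external_force(image):
    """Spectral external force of Eq. (21): FFT of the squared gradient."""
    gy, gx = np.gradient(image.astype(float))
    grad_sq = gx ** 2 + gy ** 2      # squared gradient magnitude
    return np.fft.fft2(grad_sq)
```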

3.3. Fourier Force Function

We modify the force function formulations of [23,57] and define a Fourier force function that discriminates the object from the background and extracts the saliency of the object. The proposed model creates the force differently so as to keep its values in the range [-1, 1] during curve evolution. The proposed force can be written as
$$FF = \mathrm{FFT}\left\{\frac{|I - c_1|^2}{|I - c_1|^2 + |I - c_2|^2} - \frac{1}{2}\right\}$$
The modified Fourier force function dynamically increases the interior and exterior energies of the curve to reduce the chance of becoming trapped in local minima while the curve is far from the object boundary. When the curve is close to the object boundary, the energies decrease and the curve automatically stops near it. The introduced model efficiently reduces the computational cost, where $c_1$ and $c_2$ are the constant intensities defined in Equations (9) and (10) [23,57].
Combining the per-pixel image values with the Fourier force function yields a global Fourier pressure force. The proposed algorithm uses this global Fourier pressure force to draw the contour on the object boundary and detect the salient object within very few iterations. The salient object detection active contour in Fourier convolution is formulated as follows: we first add $\hat{\phi}^{t-1}$ to the scaled Fourier force $\tau FF$; we then compute the magnitude and phase angle of this transform, denoted Mag and Ang.
$$\phi^t = \mathrm{IFFT}\left\{\left(\mathrm{Mag} + \tau\, |F_{ext}|\right) \exp\!\left(i \cdot \mathrm{Ang}\right)\right\}$$
Here, a Gaussian function is applied to $\phi^t$ to smooth the change of the subsequent level set function.
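Putting Equations (22) and (23) together, one iteration of the proposed update could look like the sketch below; the reconstruction of the force in Equation (22), the small stabilizing constants, and the final Gaussian smoothing parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fourier_force(image, phi, eps=1.0):
    """Fourier force of Eq. (22), built from the global means c1 and c2."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))
    c1 = (image * H).sum() / (H.sum() + 1e-8)
    c2 = (image * (1 - H)).sum() / ((1 - H).sum() + 1e-8)
    r = (image - c1) ** 2 / ((image - c1) ** 2 + (image - c2) ** 2 + 1e-8) - 0.5
    return np.fft.fft2(r)

def update_level_set(phi_hat_prev, ff, f_ext, tau=0.02, sigma=4.0):
    """One update of Eq. (23): shift the spectrum, reuse its phase."""
    shifted = phi_hat_prev + tau * ff
    mag, ang = np.abs(shifted), np.angle(shifted)
    phi = np.real(np.fft.ifft2((mag + tau * np.abs(f_ext)) * np.exp(1j * ang)))
    return gaussian_filter(phi, sigma)       # final Gaussian smoothing
```

Keeping the phase and modifying only the magnitude is what allows the update to stay in the spectral domain while still reacting to the region and gradient forces.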

4. Experimental Results and Discussion

In this section, we validate the performance of the proposed algorithm on medical and synthetic images in the presence of noise and intensity inhomogeneity. The proposed Algorithm 1 is compared on the basis of the average number of iterations and the running time per image in seconds. We quantitatively measure the accuracy, sensitivity, and Dice index of the proposed algorithm and of well-known state-of-the-art active contour models: DRLSE [58], LIF [22], the C-V model [41], WHRSPF [59], the FACM model [60], and the Farhan model [61]. The DRLSE and LIF model codes are available online at http://www.kaihuazhang.net/, the CV model code at https://ww2.mathworks.cn/matlabcentral/fileexchange/49034-oracm-online-region-asedactive-contour-model, and WHRSPF at https://github.com/fangchj2002/WHRSPF. The code for the FACM model is available at https://github.com/gaomingqi/FACM, and the Farhan model [61] code can be downloaded from https://0-doi-org.brum.beds.ac.uk/10.1371/journal.pone.0174813.s003. The images are taken from the BRATS dataset [62] and the publicly available PH2 database [63]. All ACM-based models are run in Matlab on a 3.40-GHz Intel(R) Core(TM) i7-6700 CPU with 8 GB of RAM. The proposed algorithm uses fixed parameters throughout the evaluation for all images: a Gaussian function with standard deviation $\sigma = 4$ and $\tau = 0.02$.
Algorithm 1. The pseudo-code for the proposed algorithm.
Input: the input image I, the initial level set function ϕ 0 .
Initialization:
1: Initialize the Fourier level set function according to Equation (20).
2: Initialize the related parameters: σ = 4 , τ = 0.02 .
Repeat:
3: For 1 to T do
4:   Compute the magnitude using the Fourier external force function with Equation (21).
5:   Compute the Fourier force function with Equation (22).
6:   Update the level set function according to Equation (23).
7: end for
8: If converged, stop
9: end if
10: Output: The resultant salient object ϕ = ϕ T .
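Assuming the helper functions sketched in Section 3 (init_fourier_level_set, external_force, fourier_force, and update_level_set), Algorithm 1 could be wired together as in the following Python sketch; the iteration limit and the convergence test are illustrative choices.

```python
import numpy as np

def detect_salient_object(image, T=5, tau=0.02, sigma=4.0, tol=1e-3):
    """End-to-end sketch of Algorithm 1 (helpers assumed from Section 3)."""
    phi = image.astype(float)                      # phi_0 taken from the input image
    phi_hat = init_fourier_level_set(phi, sigma)   # Eq. (20)
    f_ext = external_force(image)                  # Eq. (21)
    for _ in range(T):
        ff = fourier_force(image, phi)             # Eq. (22)
        phi_new = update_level_set(phi_hat, ff, f_ext, tau, sigma)  # Eq. (23)
        phi_hat = np.fft.fft2(phi_new)
        if np.abs(phi_new - phi).mean() < tol:     # simple convergence check
            phi = phi_new
            break
        phi = phi_new
    return phi > 0                                 # binary salient-object mask
```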

4.1. Robustness to Initial Curves

The basic aim is to check the validity and sensitivity of the initial curve on medical, synthetic, and noisy images. The proposed method detects the object boundary from the initial contour successfully, without re-iteration or re-initialization. The comparison results in the figure below show that our method fits the curve on the boundary at initialization time, whereas the well-known DRLSE, LIF, CV, and WHRSPF models fail at the initialization stage. These active contour models can segment images with intensity inhomogeneity effectively, but an inappropriate initial contour can severely reduce segmentation efficiency and even cause segmentation to fail. It is difficult to ensure that a user can find a suitable initial contour quickly, so an effective way to address the initialization problem in local fitting-based models is important.
Experimental results in Figure 2a on three synthetic images show the segmentation results of DRLSE, the LIF model, the CV model, and the proposed model with their initial contours. For these three images, the DRLSE model obtains a result only under the first initial contour, which is far from the object boundary. The LIF and CV models have the same fitting problem with the desired initial contour. These models (DRLSE, LIF, and CV) are computationally expensive at the initialization stage, while the proposed model fits the curve directly on the object boundary. Thus, the whole curve evolves along the inner (or outer) object boundaries, and the curve is less likely to become stuck in the object or background. The experimental results show that the proposed method enhances robustness to the initial contour.

4.2. Qualitative Analysis

We evaluated the proposed algorithm qualitatively; Figure 2b shows a comparison based on the final contours for the same images as in Figure 2a. Under the first initial contour, the well-known DRLSE, LIF, and CV models obtain a correct segmentation result with their final contours, except for the middle image with DRLSE, the right-hand image with LIF, and the CV model, which show large inaccuracies in the final contour in Figure 2b. The results in Figure 2b demonstrate that the fit of the initial contour has a great effect on the final contour. A poor initial contour can lead to wrong object segmentation; in such cases, some parameters, such as the weight of the length term, need to be adjusted to obtain the desired result. Obviously, this is a cumbersome process that largely depends on human experience, and the desired result cannot always be obtained just by adjusting the parameters. The proposed model, by contrast, obtains a satisfactory segmentation result without changing any conditions, including the parameters, which shows that it is more robust to the initial contour. Some results of the state-of-the-art DRLSE and LIF models in Figure 2b are weak, and further parameter adjustment would also lead to a high computational cost.
Next, we visually compare results on synthetic images based on the final contour and the salient object. The segmentation results of the state-of-the-art DRLSE, CV, and LIF models and of the proposed model (initial and final contours) are shown in the second row of Figure 3, while the salient object results are given in the third row. Figure 3 shows the proposed segmentation result on a synthetic image with severe intensity inhomogeneity, in which the bottom-right loop of the object almost seems to be part of the background. The proposed model recovers this loop accurately and detects the foreground object successfully, while the DRLSE, LIF, and CV models fail to detect precise contours and object saliency. Compared with these three models, our proposed model detects the final boundary in the first iteration with minimal cost.
The results show that the DRLSE formulation has an intrinsic capability of maintaining the regularity of the level set function, particularly the desirable signed distance property in a vicinity of the zero level set, which ensures accurate computation and stable level set evolution. DRLSE can be implemented with a simpler and more efficient numerical scheme than conventional level set methods such as the CV and LIF models, and its fitted curve resembles that of the proposed algorithm. The LIF model is specifically designed to deal with intensity inhomogeneity, using a rectangular window function such as a truncated Gaussian or a constant window with global intensities; nevertheless, it also gets trapped in local minima and fails to accurately detect the object boundary and saliency for complex images. The CV model fails to segment these images because of the strong inhomogeneity of pixel intensities: the active contour is trapped in local minima, diverges, and the model fails. The LIF model, although designed for intensity inhomogeneity, also fails to segment precisely. In Figure 3, compared with these three models, our proposed model, together with the DRLSE model, obtains accurate results.
The computational costs per image, in seconds, for the ten synthetic images of Figure 2 and Figure 3 are given in table form in Figure 4, with the accompanying curve showing the running time of each algorithm. Figure 4 clearly indicates that the per-image running time of the proposed algorithm is significantly lower than that of the other state-of-the-art models.
The iteration counts, extracted from the table, are shown as a graph in Figure 5. The comparison illustrates that the proposed algorithm needs a minimal number of iterations for each image, namely one, except for image 8, image 9, and image 10, which need three (Figure 5); this is the best result compared with the three state-of-the-art models DRLSE, LIF, and CV.
The proposed method is visually compared for salient object detection with the LIF, CV, and WHRSPF models on the six images shown in Figure 6. The CV model gives a precise salient object only for image 11 and fails on the other images. The WHRSPF model gives visually better results on images 11 and 12 but fails on the others. The LIF model's salient results on images 11-14 are visually better than those of the CV and WHRSPF models. The comparison clearly shows that the proposed method is the most suitable and has the best salient object segmentation results, while the other state-of-the-art models fail to precisely distinguish salient objects in the presence of low contrast and complex backgrounds. Based on this visual comparison, the proposed algorithm achieves significant saliency results for all the images.
In this experiment, we focus on applying the proposed model to find contours in medical brain images. Experiments were conducted to visually present the final contour of our proposed model and the state-of-the-art CV, WHRSPF, and LIF models on brain images. The images shown in the first column of Figure 7 are in gray scale, while the resulting final contours are marked by red and blue curves. In these images there are two important regions to be separated from each other: the white matter lies inside, while the gray matter surrounds it from the outside. The two regions almost overlap at their boundaries, which makes segmentation difficult. Among these models, the proposed model separates the regions and converges to the true boundaries of the white and gray matter, with results similar to those of the CV model, whereas the other two models, WHRSPF and LIF, give poor results. The LIF model is sensitive to the initial position, while the WHRSPF model suffers from a continuing-iteration problem. Excellent contours are still obtained with our proposed model despite the complex composition. From the results in Figure 7, we find that the final contour is well fitted by our method compared with the state-of-the-art CV, WHRSPF, and LIF models.
The proposed method is compared with the LIF, CV, and WHRSPF models on the nine images shown in Figure 6 and Figure 7. The computational costs per image, in seconds, are given in Figure 8, and the iteration-based results in Figure 9; both are presented as a table and a curve. The proposed algorithm shows significantly lower computational cost and fewer iterations. The comparison clearly shows that the proposed method is the most suitable and has the best subjective performance, while the other state-of-the-art models fail to precisely detect the salient object, and to reduce the expensive computational and iteration costs, in the presence of low contrast and complex backgrounds.
Figure 10 shows the final contour and salient object visual results on images taken from the PH2 database. We implemented the FACM model [60] and the Farhan model [61] on melanoma skin lesion (medical) images and compared the results with the proposed model. The comparison results of the proposed model in Figure 10 are quite acceptable. The results in Figure 11a, containing the computational cost per image in seconds, and Figure 11b, the iteration counts, show that the proposed model is highly efficient and fast compared with the FACM and Farhan methods.

4.3. Quantitative Analysis

To fairly compare the object segmentation performance of the above models, their saliency results are quantitatively evaluated in terms of three indexes: accuracy, sensitivity, and the Dice metric. A result is counted as good when the estimated values of these metrics approach 1: the accuracy of the salient object is then closest to the ground truth, the sensitivity value is considered correct with respect to the ground truth, and the Dice coefficient measures how closely the overlapping salient region matches the ground truth. The accuracy, Dice metric, and sensitivity are defined as follows.
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$
$$\mathrm{DSC} = \frac{2 \times TP}{2 \times TP + FP + FN}$$
$$\mathrm{Sensitivity} = \frac{TP}{TP + FP}$$
where TP (true positive) counts correctly detected salient pixels, TN (true negative) correctly unsegmented background pixels, FP (false positive) background pixels marked as salient, and FN (false negative) undetected salient pixels.
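The three metrics can be computed from binary masks as in the following sketch; note that the sensitivity computation follows the formula given in Equation (26).

```python
import numpy as np

def evaluate(pred, gt):
    """Accuracy, Dice, and sensitivity of a binary prediction vs. ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Eq. (24)
    dice = 2 * tp / (2 * tp + fp + fn + 1e-8)           # Eq. (25)
    sensitivity = tp / (tp + fp + 1e-8)                 # Eq. (26), as written
    return accuracy, dice, sensitivity
```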
The accuracy, Dice index, and sensitivity are computed for the salient objects in the synthetic and medical images shown in Figure 2, Figure 3, Figure 6, Figure 7, and Figure 10. The salient object results on these images rely on Equations (24)-(26), and the corresponding data are reported in Figure 12, Figure 13 and Figure 14. The proposed model successfully meets the performance criteria and achieves better results than the state-of-the-art CV, DRLSE, WHRSPF, LIF, FACM, and Farhan models.
In light of Figure 12, Figure 13 and Figure 14, we can draw the following conclusion: the proposed model gives the best segmentation performance. In detail, its segmentation accuracy is the highest, and its sensitivity and Dice index are also the highest. Therefore, according to these three quantitative evaluation indexes, the proposed model is superior to the state-of-the-art DRLSE, CV, LIF, WHRSPF, FACM, and Farhan models in object segmentation performance.
Examples of failure cases are given in Figure 15. Since the shadow near the edge creates a stronger filter response while the skin lesion has weak edges, the active contour is attracted to the shadow edge and passes over the skin lesion. For these cases, the proposed model yielded low accuracy, captured uninteresting regions, or became stuck in the background; the active contour failed to find the desired boundaries and salient segmentation results.

4.4. Robustness to Noisy Images

Experimental results validate that the proposed method precisely detects the object boundary and the salient object in the presence of noise. To further confirm the robustness of our proposed model, we chose images with complex objects and inhomogeneous backgrounds, which differ in the intensities of background and foreground and constitute a big challenge in object segmentation. The noisy images in Figure 2 (image 3) and Figure 3 (image 5) are evaluated with the DRLSE, LIF, CV, and WHRSPF models and compared with the proposed algorithm. In general, the challenging task in synthetic and medical images is to extract the desired salient object from images with severe noise and complex backgrounds. Our proposed algorithm directly detects the object boundary and the object's saliency in noisy images, while the well-known ACMs fail to do so directly.

5. Conclusions

This paper presents a novel fast Fourier convolution approach for salient object detection with ACMs that fits the curve without re-initialization or re-iteration and detects the object boundary at initialization time in the presence of intensity inhomogeneity and noise. The proposed approach uses fast Fourier convolution to provide a smooth curve, draw a straightforward contour, and substantially speed up the evolution process. We also describe a local force function that highlights the visibility of the salient region within the proposed method. Interestingly, the Fourier force function constructs a smooth curve and converges quickly to the object boundary, with minimal cost compared with previous methods. Experimental results of the proposed model on a variety of synthetic and medical images are promising, with performance that distinguishes salient objects significantly faster than other state-of-the-art models.
An interesting direction for future research is to combine frequency-domain (FFT) convolution with neural networks, which is likely to enhance detection on medical and real-world images with minimal cost, especially for color image segmentation. The ideas introduced here may lead to even faster methods.

Author Contributions

U.S.K. conceived the algorithm, designed and implemented the experiments and wrote the manuscript; resources were provided by X.Z.; Y.S. reviewed and edited. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key Research and Development Program of China, 13th five-year plan period under the Grant 2018YFB1700405.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Miao, J.; Huang, T.-Z.; Zhou, X.; Wang, Y.; Liu, J. Image segmentation based on an active contour model of partial image restoration with local cosine fitting energy. Inf. Sci. 2018, 447, 52–71. [Google Scholar] [CrossRef]
  2. Yang, X.; Jiang, X. A hybrid active contour model based on new edge-stop functions for image segmentation. Int. J. Ambient Comput. Intell. 2020, 11, 87–98. [Google Scholar] [CrossRef]
  3. Caselles, V.; Kimmel, R.; Sapiro, G. Geodesic active contours. Int. J. Comput. Vis. 1997, 22, 61–79. [Google Scholar] [CrossRef]
  4. Baswaraj, D.; Govardhan, A.; Premchand, P. Active Contours and Image Segmentation: The Current State of the Art. Available online: https://globaljournals.org/GJCST_Volume12/1-Active-Contours-and-Image-Segmentation.pdf (accessed on 19 August 2020).
  5. Kass, M.; Witkin, A.; Terzopoulos, D. Snakes: Active contour models. Int. J. Comput. Vis. 1988, 1, 321–331. [Google Scholar] [CrossRef]
  6. Metaxas, D.N. Physics-Based Deformable Models: Applications to Computer Vision, Graphics and Medical Imaging; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012. [Google Scholar]
  7. Cohen, L.D.; Cohen, I. Finite-element methods for active contour models and balloons for 2-D and 3-D images. IEEE Trans. Pattern Anal. Mach. Intell. 1993, 15, 1131–1147. [Google Scholar] [CrossRef] [Green Version]
  8. Han, B.; Wu, Y. Active contours driven by global and local weighted signed pressure force for image segmentation. Pattern Recognit. 2019, 88, 715–728. [Google Scholar] [CrossRef]
  9. Reddy, G.R.; Ramudu, K.; Yugander, P.; Rao, R.R. Fast global region based minimization of satellite and medical imagery with geometric active contour and level set evolution on noisy images. In Proceedings of the 2011 IEEE Recent Advances in Intelligent Computational Systems, Trivandrum, India, 22–24 September 2011; pp. 696–700. [Google Scholar]
  10. Xia, X.; Lin, T.; Chen, Z.; Xu, H. Salient object segmentation based on active contouring. PLoS ONE 2017, 12, e0188118. [Google Scholar] [CrossRef] [Green Version]
  11. Han, B.; Wu, Y. Active contour model for inhomogenous image segmentation based on Jeffreys divergence. Pattern Recognit. 2020, 107, 107520. [Google Scholar] [CrossRef]
  12. Qi, Y.; Yao, H.; Sun, X.; Sun, X.; Zhang, Y.; Huang, Q. Structure-aware multi-object discovery for weakly supervised tracking. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 466–470. [Google Scholar]
  13. Yi, Y.; Ni, F.; Ma, Y.; Zhu, X.; Qi, Y.; Qiu, R.; Zhao, S.; Li, F.; Wang, Y. High Performance Gesture Recognition via Effective and Efficient Temporal Modeling. In Proceedings of the IJCAI, Macao, China, 10–16 August 2019; pp. 1003–1009. [Google Scholar]
  14. Hong, Y.; Rodriguez-Opazo, C.; Qi, Y.; Wu, Q.; Gould, S. Language and visual entity relationship graph for agent navigation. Adv. Neural. Inf. Process. Syst. 2020, 33, 2. [Google Scholar]
  15. Zhang, L.; Zhang, S.; Jiang, F.; Qi, Y.; Zhang, J.; Guo, Y.; Zhou, H. BoMW: Bag of manifold words for one-shot learning gesture recognition from kinect. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2562–2573. [Google Scholar] [CrossRef] [Green Version]
  16. Qi, Y.; Pan, Z.; Zhang, S.; van den Hengel, A.; Wu, Q. Object-and-action aware model for visual language navigation. In Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, UK, 23–28 August 2020; pp. 23–28. [Google Scholar]
  17. Yang, Y.; Li, G.; Qi, Y.; Huang, Q. Release the Power of Online-Training for Robust Visual Tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; pp. 12645–12652. [Google Scholar]
  18. Qi, Y.; Wu, Q.; Anderson, P.; Wang, X.; Wang, W.Y.; Shen, C.; Hengel, A. REVERIE: Remote Embodied Visual Referring Expression in Real Indoor Environments. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9982–9991. [Google Scholar]
  19. Qi, Y.; Qin, L.; Zhang, S.; Huang, Q.; Yao, H. Robust visual tracking via scale-and-state-awareness. Neurocomputing 2019, 329, 75–85. [Google Scholar] [CrossRef]
  20. Qi, Y.; Zhang, S.; Jiang, F.; Zhou, H.; Tao, D.; Li, X. Siamese local and global networks for robust face tracking. IEEE Trans. Image Process. 2020, 29, 9152–9164. [Google Scholar] [CrossRef] [PubMed]
  21. Sun, X.; Wang, W.; Li, D.; Zou, B.; Yao, H. Object contour tracking via adaptive data-driven kernel. Eurasip J. Adv. Signal. Process. 2020, 2020, 1–13. [Google Scholar] [CrossRef] [Green Version]
  22. Zhang, K.; Song, H.; Zhang, L. Active contours driven by local image fitting energy. Pattern Recognit. 2010, 43, 1199–1206. [Google Scholar] [CrossRef]
  23. Zhang, K.; Zhang, L.; Song, H.; Zhou, W. Active contours with selective local or global segmentation: A new formulation and level set method. Image Vis. Comput. 2010, 28, 668–676. [Google Scholar] [CrossRef]
  24. Memon, A.A.; Soomro, S.; Shahid, M.T.; Munir, A.; Niaz, A.; Choi, K.N. Segmentation of Intensity-Corrupted Medical Images Using Adaptive Weight-Based Hybrid Active Contours. Comput. Math. Methods Med. 2020, 1–14. [Google Scholar] [CrossRef]
  25. Zhou, S.; Wang, J.; Zhang, S.; Liang, Y.; Gong, Y. Active contour model based on local and global intensity information for medical image segmentation. Neurocomputing 2016, 186, 107–118. [Google Scholar] [CrossRef] [Green Version]
  26. Fang, L.; Wang, X.; Wang, L. Multi-modal medical image segmentation based on vector-valued active contour models. Inf. Sci. 2020, 513, 504–518. [Google Scholar] [CrossRef]
  27. Sun, L.; Meng, X.; Xu, J.; Tian, Y. An image segmentation method using an active contour model based on improved SPF and LIF. Appl. Sci. 2018, 8, 2576. [Google Scholar] [CrossRef] [Green Version]
  28. Thanh, D.N.; Hien, N.N.; Prasath, V.S.; Hai, N.H. Automatic initial boundary generation methods based on edge detectors for the level set function of the Chan-Vese segmentation model and applications in biomedical image processing. In Frontiers in Intelligent Computing: Theory and Applications; Springer: New York, NY, USA, 2020; pp. 171–181. [Google Scholar]
  29. Li, Y.; Cao, G.; Wang, T.; Cui, Q.; Wang, B. A novel local region-based active contour model for image segmentation using Bayes theorem. Inf. Sci. 2020, 506, 443–456. [Google Scholar]
  30. Ksantini, R.; Boufama, B.; Memar, S. A new efficient active contour model without local initializations for salient object detection. Eurasip J. Image Video Process. 2013, 2013, 40. [Google Scholar] [CrossRef] [Green Version]
  31. Khan, U.S.; Zhang, X.; Su, Y. Active Contours in the Complex Domain for Salient Object Detection. Appl. Sci. 2020, 10, 3845. [Google Scholar] [CrossRef]
  32. Weruaga, L.; Verdu, R.; Morales, J. Frequency domain formulation of active parametric deformable models. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 1568–1578. [Google Scholar] [CrossRef] [PubMed]
  33. Johnson, C.R., Jr.; Sethares, W.A.; Klein, A.G. Software Receiver Design: Build Your Own Digital Communication System in Five Easy Steps; Cambridge University Press: Cambridge, UK, 2011. [Google Scholar]
  34. Takos, G. A Survey on Deep Learning Methods for Semantic Image Segmentation in Real-Time. arXiv 2020, arXiv:2009.12942. [Google Scholar]
  35. Shafiq, M.A.; Long, Z.; Di, H.; Regib, G.A.; Deriche, M. Fault detection using attention models based on visual saliency. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1508–1512. [Google Scholar]
  36. Cui, X.; Liu, Q.; Metaxas, D. Temporal spectral residual: Fast motion saliency detection. In Proceedings of the 17th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2009; pp. 617–620. [Google Scholar]
  37. Sumengen, B.; Manjunath, B. Edgeflow-Driven Variational Image Segmentation: Theory and Performance Evaluation. Available online: https://vision.ece.ucsb.edu/sites/default/files/publications/05TechRepBaris_0.pdf (accessed on 19 August 2020).
  38. Wang, B.; Gao, X.; Tao, D.; Li, X. A nonlinear adaptive level set for image segmentation. IEEE Trans. Cybern. 2014, 44, 418–428. [Google Scholar] [CrossRef]
  39. Caselles, V.; Catté, F.; Coll, T.; Dibos, F. A geometric model for active contours in image processing. Numer. Math. 1993, 66, 1–31. [Google Scholar] [CrossRef]
  40. Zheng, Y.; Li, G.; Sun, X.; Zhou, X. Fast edge integration based active contours for color images. Comput. Electr. Eng. 2009, 35, 141–149. [Google Scholar] [CrossRef]
  41. Chan, T.; Vese, L. An active contour model without edges. In Proceedings of the International Conference on Scale-Space Theories in Computer Vision, Corfu, Greece, 26–27 September 1999; pp. 141–151. [Google Scholar]
  42. Lie, J.; Lysaker, M.; Tai, X.-C. A binary level set model and some applications to Mumford-Shah image segmentation. IEEE Trans. Image Process. 2006, 15, 1171–1181. [Google Scholar] [CrossRef] [Green Version]
  43. Cheng, M.-M.; Mitra, N.J.; Huang, X.; Torr, P.H.; Hu, S.-M. Global contrast based salient region detection. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 569–582. [Google Scholar] [CrossRef] [Green Version]
  44. Kichenassamy, S.; Kumar, A.; Olver, P.; Tannenbaum, A.; Yezzi, A. Conformal curvature flows: From phase transitions to active vision. Arch. Ration. Mech. Anal. 1996, 134, 275–301. [Google Scholar] [CrossRef] [Green Version]
  45. He, L.; Peng, Z.; Everding, B.; Wang, X.; Han, C.Y.; Weiss, K.L.; Wee, W.G. A comparative study of deformable contour methods on medical image segmentation. Image Vis. Comput. 2008, 26, 141–163. [Google Scholar] [CrossRef]
  46. Cremers, D.; Rousson, M.; Deriche, R. A review of statistical approaches to level set segmentation: Integrating color, texture, motion and shape. Int. J. Comput. Vis. 2007, 72, 195–215. [Google Scholar] [CrossRef] [Green Version]
  47. Itti, L.; Koch, C.; Niebur, E. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1254–1259. [Google Scholar] [CrossRef] [Green Version]
  48. Wolfe, J.M. Guided search 2.0 a revised model of visual search. Psychon. Bull. Rev. 1994, 1, 202–238. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  49. Tsai, A.; Yezzi, A.; Willsky, A.S. Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Trans. Image Process. 2001, 10, 1169–1186. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  50. Vese, L.A.; Chan, T.F. A multiphase level set framework for image segmentation using the Mumford and Shah model. Int. J. Comput. Vis. 2002, 50, 271–293. [Google Scholar] [CrossRef]
  51. Ihlow, A.; Seiffert, U.; Gatersleben, I. Snakes revisited–speeding up active contour models using the Fast Fourier Transform. In Proceedings of the Eighth IASTED International Conference on Intelligent Systems and Control (ISC 2005), Cambridge, MA, USA, 31 October–2 November 2005; pp. 416–420. [Google Scholar]
  52. Chan, T.F.; Vese, L.A. Active contours without edges. IEEE Trans. Image Process. 2001, 10, 266–277. [Google Scholar] [CrossRef] [Green Version]
  53. Mumford, D.; Shah, J. Optimal approximations by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 1989, 42, 577–685. [Google Scholar] [CrossRef] [Green Version]
  54. Aubert, G.; Kornprobst, P. Mathematical Problems in Image Processing: Partial Differential Equations and the Calculus of Variations; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  55. Fedkiw, S.O.R.; Osher, S. Level set methods and dynamic implicit surfaces. Surfaces 2002, 44, 77. [Google Scholar]
  56. Hou, X.; Zhang, L. Saliency detection: A spectral residual approach. In Proceedings of the 2007 IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA, 17–22 June 2007; pp. 1–8. [Google Scholar]
  57. Hussain, S.; Khan, M.S.; Asif, M.R.; Chun, Q. Active contours for image segmentation using complex domain-based approach. IET Image Process. 2016, 10, 121–129. [Google Scholar] [CrossRef]
  58. Li, C.; Xu, C.; Gui, C.; Fox, M.D. Distance regularized level set evolution and its application to image segmentation. IEEE Trans. Image Process. 2010, 19, 3243–3254. [Google Scholar] [CrossRef] [PubMed]
  59. Lu, X.; Zhang, X.; Li, M.; Zhang, Z.; Xu, H. A Level Set Method Combined with Gaussian Mixture Model for Image Segmentation. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Nanjing, China, 16–18 October 2020; pp. 185–196. [Google Scholar]
  60. Gao, M.; Chen, H.; Zheng, S.; Fang, B. A factorization based active contour model for texture segmentation. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4309–4313. [Google Scholar]
  61. Akram, F.; Garcia, M.A.; Puig, D. Active contours driven by local and global fitted image models for image segmentation robust to intensity inhomogeneity. PLoS ONE 2017, 12, e0174813. [Google Scholar]
  62. Menze, B.H.; Jakab, A.; Bauer, S.; Kalpathy-Cramer, J.; Farahani, K.; Kirby, J.; Burren, Y.; Porz, N.; Slotboom, J.; Wiest, R. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imaging 2014, 34, 1993–2024. [Google Scholar] [CrossRef] [PubMed]
  63. Mendonça, T.; Ferreira, P.M.; Marques, J.S.; Marcal, A.R.; Rozeira, J. PH2: A dermoscopic image database for research and benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013; pp. 5437–5440. [Google Scholar]
Figure 1. Geometrical illustration of the proposed model.
Figure 2. (a) Three original synthetic images and comparison of the results of the proposed model with the DRLSE model [58], CV model [41], and LIF model [22] on the basis of the initial contour; (b) comparison of the results of the proposed model with the DRLSE model [58], CV model [41], and LIF model [22] on the basis of the final contour.
Figure 3. Final contour and salient object segmentation results of the proposed model and state-of-the-art DRLSE model [58], CV model [41], and LIF model [22] on synthetic images with different levels of complex background, low contrast and speckle noise.
Figure 4. Computation cost, in seconds per image, of different state-of-the-art models on images 1 to 10: CV model [41], DRLSE model [58], LIF model [22], and the proposed algorithm.
Figure 5. Computation cost, in number of iterations, on images 1 to 10 for the CV model [41], DRLSE model [58], LIF model [22], and the proposed algorithm.
Figure 6. Comparison of the proposed algorithm with the CV model [41], WHRSPF model [59], and LIF model [22] on the basis of salient object segmentation.
Figure 7. Comparison on the basis of the final contour of (a) the CV model [41], (b) the WHRSPF model [59], (c) the LIF model [22], and (d) the proposed algorithm on medical images.
Figure 8. Comparison of the computation cost (seconds per image) of different state-of-the-art algorithms and the proposed algorithm on synthetic and medical images 11 to 19.
Figure 9. Comparison of the computation cost (iterations) of different state-of-the-art algorithms and the proposed algorithm on synthetic and medical images 11 to 19.
Figure 10. Visual comparison of the proposed model, FACM model [60], and Farhan model [61]: (a) final contour of the proposed model and (b) its saliency of the object; (c) final contour of the Farhan model and (d) its saliency results; (e) final contour of the FACM model and (f) its saliency results.
Figure 11. Computational cost of the proposed model, FACM model [60], and Farhan model [61]: (a) time per image in seconds and (b) iterations.
Figure 12. Comparison results of images 1 to 10 of the proposed algorithm with DRLSE model [58], CV model [41] and LIF model [22], on the basis of accuracy, sensitivity and dice index.
Figure 13. Comparison results on images 11 to 19 of the proposed algorithm with the CV model [41], WHRSPF model [59], and LIF model [22] on the basis of accuracy, sensitivity, and Dice index.
Figure 14. Comparison of the proposed algorithm with two different state-of-the-art models FACM model [60], and Farhan model [61] on the basis of accuracy, sensitivity, and dice index.
Figure 15. Failure cases of the proposed algorithm.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
