Article

An Adaptive Bi-Mutation-Based Differential Evolution Algorithm for Multi-Threshold Image Segmentation

School of Computer and Electronics and Information, Guangxi University, Nanning 530004, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 20 April 2022 / Revised: 30 May 2022 / Accepted: 1 June 2022 / Published: 6 June 2022
(This article belongs to the Topic Advanced Systems Engineering: Theory and Applications)

Abstract

Multi-threshold image segmentation suffers from heavy computation, long running times, and limited segmentation accuracy. To address these problems, an adaptive threshold-value-based differential evolution algorithm is proposed in this paper. Firstly, an opposition-based learning strategy is introduced into population initialization to improve the quality of the initial population; secondly, a threshold-value-based bi-mutation strategy is proposed to balance the exploration and exploitation capabilities of the algorithm, where the number of successfully evolved individuals is taken as a threshold value to adaptively adjust the evolution of superior and inferior individuals. Experiments demonstrate that the proposed algorithm performs better in enhancing accuracy and speeding up convergence.

1. Introduction

Image segmentation is a classic problem in computer vision and a main step of image analysis. It refers to dividing an image into several disjoint parts according to grayscale, color, spatial texture, geometric shape, etc., so that regions with similar characteristics are grouped together and clearly differ from other regions. It is widely used in machine vision, medical imaging, object detection, and other fields [1]. Segmentation accuracy is an important indicator of segmentation quality, and the main challenge faced by many researchers is to find a solution that yields higher segmentation accuracy. At present, segmentation algorithms are mainly divided into threshold-based, region-growing-based, histogram-based, and edge-detection-based methods [2]. Among them, threshold segmentation is widely used in image processing because of its simplicity, efficiency, and fast segmentation speed. Using a threshold-based method means that an optimal threshold, or a set of optimal thresholds, must be found [3]. Traditional image thresholding algorithms are effective for image binarization, but for multi-level thresholding the computational complexity of an exhaustive search grows sharply with the number of thresholds, resulting in low computational efficiency. Therefore, to reduce the computational cost, many researchers have integrated meta-heuristic intelligent optimization algorithms, such as the Genetic Algorithm (GA) [4], Differential Evolution (DE) [5], Particle Swarm Optimization (PSO) [6], and the Artificial Bee Colony algorithm (ABC) [7], into multi-threshold image segmentation. Better solutions were obtained, which improved segmentation accuracy to a certain extent and reduced the time complexity. Kurban et al. [8] took Kapur entropy as the objective function and applied ABC to color image segmentation, achieving good results in terms of computing efficiency and segmentation quality. In [9], DE was employed to solve the thresholding-based image segmentation problem using the Otsu objective function. Qi et al. [10] studied several segmentation algorithms and concluded that threshold-based segmentation performs well on images whose histograms have obvious peaks and valleys. The authors of [11] applied an adaptively regularized kernel-based fuzzy C-means (ARKFCM) clustering algorithm to the magnetic resonance (MR) brain image segmentation problem, initialized the sensitive cluster centers, and combined the PSO algorithm to optimize the cluster centroids, obtaining a better segmentation effect. Wang et al. [12] proposed an improved Otsu method and an improved GA for image segmentation: a morphological weighted adaptive algorithm is first used to reduce morphological noise, and the improved GA is then used to globally maximize the Otsu between-class variance function to obtain the optimal segmentation threshold. In fact, image segmentation based on the combination of a thresholding method and an optimization algorithm can not only reduce the computational time complexity but also improve segmentation accuracy and produce higher-quality segmented images.
Extensive research on segmentation results suggests that combining intelligent optimization algorithms with Otsu's method is a feasible and worthwhile approach [3].
Therefore, in this article, DE is combined with Otsu's method to improve the quality of image segmentation. Common threshold segmentation algorithms, including the maximum entropy method, the balanced histogram thresholding method, the Otsu method, and k-means clustering, are widely used in industrial applications. Among them, the Otsu method, proposed by the Japanese scientist Otsu in 1979 [13], is one of the most famous global thresholding methods for binary segmentation problems. Its main idea is to maximize the between-class variance function. Because of its strong adaptability and high segmentation accuracy, it is often used in the field of image segmentation. However, owing to differences in image complexity, when single-threshold segmentation is extended to multi-threshold segmentation, the search space grows, and the time and computational complexity increase accordingly.

2. Otsu Multi-Threshold Image Segmentation Principle

Multi-threshold image segmentation aims to find several optimal thresholds that accurately divide an image into different segments.
The Otsu method takes the maximum between-class variance as its objective function. It is the most commonly used thresholding method and has the advantages of simple application, low computational cost, and easy interpretation [14]. When solving the image segmentation problem, this method eliminates the interference of contrast and brightness, thus effectively separating the background from the target information. As the between-class variance between foreground and background becomes larger, the difference between the two parts also becomes larger. When the variances of the two classes differ greatly, the threshold obtained by Otsu tends to be biased toward the class with the larger variance, so some pixels belonging to the class with large variance are assigned to the class with small variance, resulting in incorrect segmentation. When the threshold maximizes the between-class variance function, the probability of misclassification is minimized.
The basic principle of Otsu is as follows:
Let f(x, y) be a grayscale image whose pixel gray values range from 0 to L-1, where L is the total number of gray levels and n_i denotes the number of pixels whose gray value equals i. Then:
N = \sum_{i=0}^{L-1} n_i .
Here, N is the total number of pixels in the entire image. If p_i denotes the probability that a pixel has gray value i, it can be approximated by the corresponding frequency, defined as:
p_i = \frac{n_i}{N} .
If T is used to represent the threshold, then for the binary image segmentation problem the pixels can be divided into two classes, labeled C_0 and C_1, which contain pixels with different characteristics. The threshold T divides the gray values into two ranges, [0, T] and [T+1, L-1], respectively. To define the between-class variance, several statistics must first be obtained, namely \mu_0(T), \mu_1(T), \omega_0(T), and \omega_1(T). They are calculated as follows:
\mu_0(T) = \sum_{i=0}^{T} \frac{i\, p_i}{\omega_0(T)} \quad (3)
\omega_0(T) = \sum_{i=0}^{T} p_i \quad (4)
\mu_1(T) = \sum_{i=T+1}^{L-1} \frac{i\, p_i}{\omega_1(T)} \quad (5)
\omega_1(T) = \sum_{i=T+1}^{L-1} p_i \quad (6)
\mu = \frac{\sum_{i=0}^{L-1} i\, p_i}{\sum_{i=0}^{L-1} p_i} = \sum_{i=0}^{L-1} i\, p_i = \omega_0(T)\,\mu_0(T) + \omega_1(T)\,\mu_1(T) . \quad (7)
In Formulas (3)–(7), \mu_0(T) and \mu_1(T) represent the expected gray values of the background and foreground, respectively, and \omega_0(T) and \omega_1(T) represent the cumulative probabilities (class weights) of the background and foreground, respectively. \mu is the average gray value of all pixels in the entire image. Based on the above definitions, the between-class variance function can be expressed as:
\sigma_B(T) = \omega_0(T) \left( \mu_0(T) - \mu \right)^2 + \omega_1(T) \left( \mu_1(T) - \mu \right)^2 .
The formula consists of two parts, corresponding to C_0 and C_1.
In the Otsu method, the threshold T^* at which the between-class variance function attains its maximum is taken as the segmentation threshold. Therefore, the image segmentation problem can be transformed into an optimization problem: finding the threshold T^* that maximizes \sigma_B(T).
For the multi-threshold image segmentation problem, the weighted sum of the between-class variances is calculated. If the image is to be divided into m classes separated by m-1 thresholds T_1, T_2, \ldots, T_{m-1}, the objective function is expressed as follows:
\sigma(T_1, T_2, \ldots, T_{m-1}) = \omega_0 (\mu_0 - \mu)^2 + \omega_1 (\mu_1 - \mu)^2 + \cdots + \omega_{m-1} (\mu_{m-1} - \mu)^2 .
The corresponding multi-threshold image segmentation problem can be denoted as:
T^* = \arg\max \, \sigma(T_1, T_2, \ldots, T_{m-1}) .
That is, it is necessary to find a set of thresholds T_1, T_2, \ldots, T_{m-1} that maximizes the between-class variance function.
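For illustration only, the objective above can be computed directly from the gray-level histogram. The following Python sketch is an assumption-laden illustration, not the implementation used in this paper; the function name between_class_variance, the use of NumPy, and the assumption of an 8-bit grayscale image are ours.

import numpy as np

# Illustrative sketch: weighted between-class variance of the classes induced
# by a sorted set of thresholds on a 0-255 grayscale image (assumed uint8).
def between_class_variance(image, thresholds):
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                          # p_i = n_i / N
    levels = np.arange(256, dtype=np.float64)
    mu = (levels * p).sum()                        # global mean gray value
    # class ranges: [0, T1], [T1+1, T2], ..., [T_{m-1}+1, 255]
    bounds = [0] + [int(t) + 1 for t in sorted(thresholds)] + [256]
    sigma = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()                         # omega_k, class weight
        if w > 0:
            mu_k = (levels[lo:hi] * p[lo:hi]).sum() / w
            sigma += w * (mu_k - mu) ** 2
    return sigma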

3. Original DE Algorithm

Due to the curse of dimensionality, the task becomes more challenging: as the dimension increases, the search space expands exponentially and good solutions become sparsely distributed. Differential evolution is an effective population-based meta-heuristic algorithm that has been successfully used in various applications [15]. DE is based on three main operators: mutation, crossover, and selection. Mutation is the core operator of DE; it generates mutant vectors based on linear combinations of different candidate solutions. The role of the crossover operator is to combine the mutant vector with its target vector to produce a trial vector. Finally, the selection operator adopts a greedy strategy, choosing the better of the trial vector and the target vector.
Differential Evolution (DE) is a meta-heuristic algorithm proposed by Price and Storn in 1995. It has been used to solve numerous applications in various engineering fields. Like other evolutionary algorithms (EAs) used for optimization problems, DE is a population-based algorithm; it evolves through the differences between individuals and achieves evolution through three operators: mutation, crossover, and selection. The execution of the three operators also involves three control parameters: the population size NP, the scaling factor F, and the crossover rate CR.
Initialization: DE starts from NP randomly generated candidate solutions; usually, a DE population is composed of NP individuals, denoted as:
x_{i,G} = (x_{i,1,G}, x_{i,2,G}, \ldots, x_{i,D,G}), \quad i = 1, 2, \ldots, NP,
where NP is the population size. NP trial individuals are generated from the current population through mutation, crossover, and selection in each generation, and this loop continues until the termination condition is met. The initial population is randomly generated as:
x_{i,j,G} = x_{i,j}^{L} + rand \times \left( x_{i,j}^{U} - x_{i,j}^{L} \right),
where x_{i,j} represents the jth component of the ith individual; x_{i,j}^{L} and x_{i,j}^{U} represent the lower and upper bounds of the jth component, respectively; and rand is a uniform random number in the range [0, 1].
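As a small illustration (the function name and array shapes are our own assumptions, not part of the original algorithm description), the initialization rule can be written in a few lines of NumPy:

import numpy as np

# Illustrative sketch of the initialization rule: each component is drawn
# uniformly between its lower and upper bound.
def init_population(pop_size, dim, lower, upper, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    return lower + rng.random((pop_size, dim)) * (upper - lower)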
Mutation operator: a mutant individual is generated from a base vector and the difference between two random individuals scaled by F (the scaling factor); that is, DE achieves individual mutation through a differential strategy. This is also the embodiment of DE's central idea: based on the current individuals in the population, several vectors are randomly selected and a difference operation is performed to generate a difference vector v. Several commonly used differential mutation strategies are as follows:
(1) DE/rand/1:
v_{i,G} = x_{r1,G} + F \times (x_{r2,G} - x_{r3,G})
(2) DE/best/1:
v_{i,G} = x_{best,G} + F \times (x_{r1,G} - x_{r2,G})
(3) DE/current-to-best/1:
v_{i,G} = x_{i,G} + F \times (x_{best,G} - x_{i,G}) + F \times (x_{r1,G} - x_{r2,G})
(4) DE/current-to-rand/1:
v_{i,G} = x_{i,G} + rand \times (x_{r1,G} - x_{i,G}) + F \times (x_{r2,G} - x_{r3,G})
where r 1 , r 2 , and r 3 represent the index subscripts of three different individuals randomly selected from the population. F represents the scaling factor, which controls the strength of the difference. x b e s t represents the individual with the best fitness value.
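For illustration, the four classic strategies above can be sketched as a single hypothetical helper (not the authors' implementation); pop is assumed to be an NP x D NumPy array and best the index of the fittest individual.

import numpy as np

# Illustrative sketch of the classic DE mutation strategies listed above.
def mutate(pop, i, F, best, strategy, rng):
    NP = len(pop)
    # three mutually different random indices, all different from i
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
    if strategy == "rand/1":
        return pop[r1] + F * (pop[r2] - pop[r3])
    if strategy == "best/1":
        return pop[best] + F * (pop[r1] - pop[r2])
    if strategy == "current-to-best/1":
        return pop[i] + F * (pop[best] - pop[i]) + F * (pop[r1] - pop[r2])
    if strategy == "current-to-rand/1":
        return pop[i] + rng.random() * (pop[r1] - pop[i]) + F * (pop[r2] - pop[r3])
    raise ValueError("unknown strategy: " + strategy)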
Crossover operator: the purpose of crossover is to combine the mutant vector with its parent (target) vector to generate a trial vector. The crossover operation between the individual x_{i,G} of the Gth generation and the mutant intermediate individual v_{i,G} can be denoted as:
u_{i,j,G} = \begin{cases} v_{i,j,G}, & \text{if } rand \le CR \text{ or } j = j_{rand} \\ x_{i,j,G}, & \text{otherwise,} \end{cases}
where j_{rand} is an integer randomly selected from {1, 2, ..., D} to ensure that at least one component of the trial individual comes from the mutant individual. CR is the crossover rate, which determines the proportion of components the trial individual inherits from the mutant individual.
Selection operator: the selection operation is performed according to a greedy strategy, and the individual with the better objective value among the trial individual u_{i,G} and the original individual x_{i,G} enters the next generation. For a minimization problem, the individual with the smaller fitness value replaces the original individual and enters the next generation. The specific formula can be expressed as:
x_{i,G+1} = \begin{cases} u_{i,G}, & \text{if } f(u_{i,G}) \le f(x_{i,G}) \\ x_{i,G}, & \text{otherwise,} \end{cases}
where f ( x ) is the minimization function to be optimized and x i , G + 1 is the individual entering into the G + 1 generation for further evolution.
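For illustration, the binomial crossover and greedy selection described above can be sketched as follows; the helper names are assumptions, and the selection shown follows the minimization convention of the formula above.

import numpy as np

# Illustrative sketch of binomial crossover and greedy selection.
def binomial_crossover(x, v, CR, rng):
    D = len(x)
    mask = rng.random(D) <= CR
    mask[rng.integers(D)] = True      # at least one component comes from the mutant
    return np.where(mask, v, x)

def greedy_select(x, u, f):
    # keep the trial vector u only if it is no worse than the target vector x
    return u if f(u) <= f(x) else x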

4. Opposition-Based Optimization after Initialization and Adaptive Threshold-Value-Based Bi-Mutation Strategy

4.1. Opposition-Based Optimization after Initialization

Opposition-based learning was first proposed by Tizhoosh for the initialization process of intelligent algorithms and was subsequently introduced into DE to find better individuals [16].
Evolutionary optimization methods usually try to improve the initial population toward the optimal solution. The individuals generated during initialization are random; although no prior information about the solution is available, the chance of starting from a fitter solution can be improved by simultaneously considering the opposite solution [17]. Through this operation, the better individual of each pair (the random individual and its opposite) can be selected as an initial solution. There is a 50% chance that the opposite individual is better than the initial one, so starting with the individual that has the greater potential to accelerate convergence is quite sensible [17,18]. The opposition step is therefore arranged right after the initialization process. Supposing that X = (x_1, x_2, x_3, \ldots, x_D) is a feasible solution of the maximum between-class variance function, the opposite solution \hat{X} = (\hat{x}_1, \hat{x}_2, \hat{x}_3, \ldots, \hat{x}_D) can be defined as follows:
\hat{x}_j = a_j + b_j - x_j ,
where X = (x_1, x_2, x_3, \ldots, x_D) is an individual in D-dimensional space, x_1, x_2, x_3, \ldots, x_D \in \mathbb{R}, and x_j \in [a_j, b_j]. For each initial individual, the corresponding opposite individual is calculated, and the opposite population is generated after initialization.
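A minimal sketch of this step, under the assumptions that the objective is the between-class variance to be maximized and that lower and upper correspond to the bounds a_j and b_j (the helper name opposition_refine is ours):

import numpy as np

# Illustrative sketch: build the opposite population and keep, for each pair,
# the individual with the larger objective value (maximization).
def opposition_refine(pop, lower, upper, objective):
    opposite = lower + upper - pop                       # x_hat_j = a_j + b_j - x_j
    fit = np.array([objective(x) for x in pop])
    opp_fit = np.array([objective(x) for x in opposite])
    keep_opposite = opp_fit > fit
    pop = np.where(keep_opposite[:, None], opposite, pop)
    fit = np.where(keep_opposite, opp_fit, fit)
    return pop, fit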

4.2. Adaptive Threshold-Value-Based Bi-Mutation Strategy

In the evolution of DE, historical prior information often plays a crucial role in the subsequent evolution of the population, and can guide the evolutionary direction of the next generation to a certain extent.
Therefore, designing a mutation strategy that facilitates both exploration (discovering more globally optimal regions) and exploitation (refining the accuracy of the solution in each globally optimal region) is indispensable for DE-based multi-threshold image segmentation algorithms. For example, in the traditional DE algorithm, the highly random mutation mechanism focuses on exploration, while the greedy mutation mechanism guided by local optima focuses on exploitation. When Otsu's method is used to solve the multi-threshold image segmentation problem, it works well for images with a unimodal between-class variance. However, when the between-class variance between background and object is small, the multiple peaks of the between-class variance function lie relatively close together, which makes it easy for individuals to fall into local optima. In this case, applying a single mutation mechanism shows certain defects: using a high-diversity mutation strategy for sufficient exploration greatly increases the running time of the algorithm, whereas using a greedy mutation strategy guided by local optima for rapid exploitation easily gets trapped in local optima, so that the global optimum is not found.
To address these shortcomings, a dual mutation strategy for DE (named the historical adaptive threshold-based dual mutation strategy DE) is designed. This strategy enables each individual to choose a suitable mutation strategy according to its own need for exploration or exploitation, which not only better balances the diversity of the population to improve the accuracy of the solution, but also accelerates convergence to save the search cost of the algorithm. In this way, the entire population can reach a balance between exploration and exploitation. The proposed mutation strategy divides the population of each generation into two parts by using the number of successful mutations in the previous generation as a threshold. The better part uses the individual itself as the mutation base vector and combines it with a difference vector to generate the mutant individual; because such an individual is superior in this generation, it is sensible to further search the surrounding feasible region to speed up exploitation. For the poorer part, DE/rand/1 is used as the mutation operator so as to fully search the solution space and speed up exploration.
After sorting the whole population, the individuals with better fitness values are identified; the idea of the adaptive bi-mutation strategy is illustrated in Figure 1. The number of successfully mutated individuals in the last generation (svi) is taken as a threshold value to divide the population into superior and inferior individuals [19]. The svi value reflects the evolution of the whole population well. When few individuals mutate successfully, the entire population tends to stagnate, and more individuals are required to perform the DE/rand/1 mutation strategy; therefore, a small svi adjusts the direction of population variation and meets the requirement of exploring better individuals. Conversely, when many individuals mutate successfully, which means that svi is large, the threshold-value-based mutation strategy can be executed to help exploit their neighborhoods, speeding up convergence and refining the accuracy of the solution.
Among the sorted population, individuals within the threshold value are considered superior individuals, while individuals beyond the threshold value are reckoned as inferior individuals. Meanwhile, the threshold value varies adaptively, since the number of successfully mutated individuals differs in each generation. If the current individual belongs to the superior group, it should exploit further to accelerate convergence and refine the accuracy of the solution. On the contrary, if the current individual belongs to the inferior group, the other strategy is conducted to maintain population diversity.
For superior individuals, the first mutation strategy, i.e., the adaptive historical threshold-value-based mutation strategy shown in Formula (19), is conducted to enhance local search ability. For inferior individuals, exploiting their neighborhoods is unnecessary, but they can be utilized to explore more areas; the second mutation strategy, DE/rand/1, shows good performance in diversity and is suitable for inferior individuals, ensuring population diversity:
v_i = \begin{cases} x_i + F \times (x_{r2} - x_{r3}), & \text{if } x_i \text{ is a superior individual} \\ x_{r1} + F \times (x_{r2} - x_{r3}), & \text{otherwise (inferior individual),} \end{cases} \quad (19)
where x_i is the current individual, used as the base vector so as to further exploit the region around the superior individuals, as explained before; x_{r1}, x_{r2}, and x_{r3} are randomly selected individuals different from the current and best individuals; and F is the scaling factor.
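A minimal sketch of this bi-mutation rule, under the assumption that a boolean flag already indicates whether the current individual belongs to the superior group (the helper name bi_mutation is ours):

import numpy as np

# Illustrative sketch of Formula (19): superior individuals mutate around
# themselves, inferior individuals fall back to the DE/rand/1 base vector.
def bi_mutation(pop, i, is_superior, F, rng):
    NP = len(pop)
    r1, r2, r3 = rng.choice([k for k in range(NP) if k != i], size=3, replace=False)
    diff = F * (pop[r2] - pop[r3])
    return pop[i] + diff if is_superior else pop[r1] + diff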
The detailed procedure of the adaptive threshold-value-based bi-mutation strategy is presented as follows:
  • Step 1: Randomly generate the initial population.
  • Step 2: For each individual, calculate the corresponding opposite individual and generate the opposite population.
  • Step 3: Calculate the fitness values of the whole initial population and of the opposite population.
  • Step 4: Replace an initial individual with its opposite individual if the opposite individual is better.
  • Step 5: Sort the population in descending order according to fitness value.
  • Step 6: Use the svi value as a threshold to divide the whole population into two groups that execute different mutation strategies.
  • Step 7: For superior individuals, execute the threshold-value-based mutation strategy; for inferior individuals, execute the DE/rand/1 strategy.
  • Step 8: Execute the crossover and selection operations.
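A compact, self-contained sketch of Steps 1-8 is given below. It is an illustration under our own assumptions (NumPy, an initial threshold value of half the population, and greedy selection adapted to maximization), not the authors' code; objective stands for a fitness function that accepts a real-valued threshold vector, for instance by rounding it to integers internally and computing the between-class variance of Section 2.

import numpy as np

# Illustrative sketch of the adaptive threshold-value-based bi-mutation DE.
def adaptive_bimutation_de(objective, dim, pop_size=50, F=0.5, CR=0.9,
                           max_gen=1000, lower=0.0, upper=255.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng

    # Steps 1-4: random initialization, opposition, keep the fitter of each pair.
    pop = lower + rng.random((pop_size, dim)) * (upper - lower)
    opposite = lower + upper - pop
    fit = np.array([objective(x) for x in pop])
    opp_fit = np.array([objective(x) for x in opposite])
    better = opp_fit > fit
    pop = np.where(better[:, None], opposite, pop)
    fit = np.where(better, opp_fit, fit)

    svi = pop_size // 2                                  # assumed initial threshold value
    for _ in range(max_gen):
        order = np.argsort(-fit)                         # Step 5: sort, best first
        rank = np.empty(pop_size, dtype=int)
        rank[order] = np.arange(pop_size)
        successes = 0
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([k for k in range(pop_size) if k != i],
                                    size=3, replace=False)
            diff = F * (pop[r2] - pop[r3])
            if rank[i] < svi:                            # Steps 6-7: superior individual
                v = pop[i] + diff                        # threshold-value-based mutation
            else:                                        # inferior individual
                v = pop[r1] + diff                       # DE/rand/1
            mask = rng.random(dim) <= CR                 # Step 8: binomial crossover
            mask[rng.integers(dim)] = True
            u = np.clip(np.where(mask, v, pop[i]), lower, upper)
            fu = objective(u)
            if fu > fit[i]:                              # Step 8: greedy selection (maximization)
                pop[i], fit[i] = u, fu
                successes += 1
        svi = max(1, successes)                          # adapt the threshold value (svi)
    best = int(np.argmax(fit))
    return pop[best], fit[best]

For example, under the same assumptions, it could be called as adaptive_bimutation_de(lambda t: between_class_variance(img, np.round(t).astype(int)), dim=9) to obtain nine thresholds for an image img, where between_class_variance is the sketch from Section 2.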

4.3. Experiment Results and Analysis

4.3.1. Evaluation Criterion

SSIM and PSNR are used in our experiments to evaluate the effectiveness of the segmentation algorithm. For both criteria, a higher value indicates better image quality. Since the human visual system readily extracts structural information from images, the structural similarity is a practical measure for appraising the quality of an image after segmentation. The range of SSIM is 0 to 1; if two identical images are compared, their SSIM value equals 1. The formula is as follows:
SSIM = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)},
where \mu_x and \mu_y are the mean values of x and y, respectively; \sigma_x^2 and \sigma_y^2 are the variances of x and y, respectively; \sigma_{xy} represents the covariance between x and y; and C_1 and C_2 are two stabilizing constants defined as C_1 = (k_1 L)^2 and C_2 = (k_2 L)^2, where k_1 = 0.01, k_2 = 0.03, and L refers to the dynamic range of the pixel values, usually equal to 255.
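As an illustration (a single global SSIM over the whole image rather than the windowed variant; the helper name and array arguments are our assumptions), the index can be computed as follows:

import numpy as np

# Illustrative sketch of a global SSIM between two same-size grayscale images.
def ssim(x, y, k1=0.01, k2=0.03, L=255):
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))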
PSNR (Peak Signal-to-Noise Ratio) is another objective image-quality index. It is the most commonly and widely used criterion, and it is measured in decibels (dB). The formulas are as follows:
PSNR = 10 \log_{10} \frac{255^2}{MSE}
MSE = \frac{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( f(i,j) - f'(i,j) \right)^2}{M \times N},
where f(i,j) and f'(i,j) refer to the image to be evaluated and the original image, respectively, and M and N represent the length and width of the image. In image segmentation, MSE is the mean of the squared differences between the pixel values of the segmented image and the original one.
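Likewise, MSE and PSNR can be sketched as follows (the helper names are assumptions; f is the original image and g the segmented one):

import numpy as np

# Illustrative sketch of MSE and PSNR between two same-size grayscale images.
def mse(f, g):
    diff = f.astype(np.float64) - g.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(f, g):
    m = mse(f, g)
    return float("inf") if m == 0 else 10.0 * np.log10(255.0 ** 2 / m)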

4.3.2. Contrast Experiments for High-Dimensional Segmentation

In the DE algorithm, the scaling factor adjusts the range over which the population varies. In image segmentation, the gray value of an image can only take integer values, with just 256 possibilities; therefore, a larger F is not friendly for individuals to attain suitable mutation results, as thresholds are prone to go out of bounds. The experiments show that when F is set to 0.5, the optimal threshold can be obtained with relatively fewer iterations than with other settings. CR, the crossover rate, is set to 0.9; empirical experience shows that a high crossover rate decreases the running time.
In this experiment, images from the Berkeley segmentation dataset were adopted as the experimental data. Images 1–3 (forest, kangaroo, actinia) are typical images with a relatively concentrated gray-value distribution and small between-class variance; their corresponding histograms are shown in Figure 2. Images 4–6 (numbered 41033, 385039, 145086) show a more obvious difference in the gray-value distribution, that is, their between-class variance is relatively large. For the typical images, this paper segments each image at three higher dimensions, D = 8, 9, 10, to test the segmentation effect. The time cost of an algorithm relates to real-time requirements and is an important standard for evaluating algorithm performance. Since most practical problems involve multi-threshold segmentation, which means that multiple parts need to be segmented, the multi-threshold image segmentation problem has high computational complexity and long running time; even under the same conditions, the time cost is not stable but fluctuates. In our experiments, with the same hardware configuration, the maximum number of DE iterations is set to 1000 to allow sufficient evolution; each image segmentation algorithm is run 50 times, and its average running time is taken as an objective evaluation standard. The results of our proposed DE on the test images with respect to the evaluation criteria, including fitness value, PSNR, SSIM, iteration count, and running time, are given in Tables 1–6. For each criterion, the two best-performing of the five algorithms are marked in bold. It can be seen from the tables that every criterion of our method is marked in bold, which means that our method outperforms the other competitive methods in nearly overall performance. The comparison of each individual indicator is shown in Figure 2, Figure 3 and Figure 4.
To further evaluate the segmentation performance of the proposed algorithm, we compared it with the original DE, DE with the DE/best/1 operator, CSDE [20], and VNCDE [21] on several evaluation criteria. The results of these algorithms were obtained under different dimensions. It can be seen from Table 1 that our proposed method presents a relatively better performance in terms of computational speed and segmentation effect. A forest image, a kangaroo image, and an actinia image were selected as experimental images to verify the segmentation effect and the computational efficiency of the Otsu-based algorithms. Taking the forest picture as an example, the PSNR value of the image is 30.1521, which clearly exceeds that of the other comparative algorithms. The SSIM value of the image is 0.8947, which is basically the same as that of the other algorithms, indicating low image distortion and guaranteeing the quality of the image after segmentation. On the premise of maintaining the segmentation effect, the running time of our proposed method is lower than that of the CSDE algorithm by a factor of about 2.8, reduced from 7.5639 to 2.6921 (a decrease of more than 60%), which greatly improves the running speed. The iteration count was 108.32; although this cannot compete with DE using the DE/best/1 strategy, it is better than the original DE and CSDE, which reflects the fast convergence of the algorithm to a certain extent. Analysis of the other two images leads to similar conclusions. Regarding the experiment on the second group of pictures, under the same experimental settings as the first group, it can also be seen that although the DE algorithm using DE/best/1 performs better in terms of time, its accuracy is relatively low. While ensuring high precision, our proposed algorithm has an obvious advantage in comprehensive performance. The segmented images are shown in Figure 3 and Figure 4.

5. Conclusions

This paper aims to solve the problems of the traditional Otsu method when tackling high-dimensional (multi-threshold) image segmentation. An adaptive threshold-value-based bi-mutation strategy is proposed for DE, which achieves better performance in optimizing the objective function and thus improves the quality of the segmented image. Based on the maximum between-class variance criterion, the proposed strategy can effectively improve the calculation speed and segmentation efficiency. The experimental results indicate that DE with the proposed strategy not only outperforms the other compared algorithms in terms of time cost, but also excels in segmentation quality.

Author Contributions

Conceptualization, Y.S. and Y.Y.; methodology, Y.Y.; validation, Y.S., Y.Y.; formal analysis, Y.Y.; investigation, Y.Y.; resources, Y.Y.; data curation, Y.Y.; writing—original draft preparation, Y.Y.; writing—review and editing, Y.Y. and Y.S.; visualization, Y.Y.; project administration, Y.S.; funding acquisition, Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by National Natural Science Foundation of China (Grant Nos. 61763002 and 62072124), Guangxi Major projects of science and technology (Grants No. 2020AA21077007).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DE      Differential Evolution
EA      Evolutionary Algorithm
PSNR    Peak Signal-to-Noise Ratio
SSIM    Structural Similarity
CSDE    Combined Strategy Differential Evolution
VNCDE   Parameter-free Voronoi Neighborhood for Evolutionary Multimodal Optimization

References

  1. Kang, W.X.; Yang, Q.Q.; Liang, R.P. The Comparative Research on Image Segmentation Algorithms. In Proceedings of the International Workshop on Education Technology & Computer Science, Wuhan, China, 7–8 March 2009.
  2. Cheng, Y.; Li, B. Image Segmentation Technology and Its Application in Digital Image Processing. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2021; pp. 1174–1177.
  3. Ye, Z.; Zhang, A.; Cao, Y.; Ma, L.; Hu, J. An Image Thresholding Method Based on Differential Evolution Algorithm and Genetic Algorithm. In Proceedings of the 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), Metz, France, 18–21 September 2019.
  4. Mao, X.Y. Application of Genetic Algorithm in Image Segmentation. J. Proj. Guid. 2004, 3, 186–188.
  5. Storn, R.; Price, K. Differential Evolution—A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. J. Glob. Optim. 1997, 11, 341–359.
  6. Liu, Y.; Mu, C.; Kou, W.; Liu, J. Modified particle swarm optimization-based multilevel thresholding for image segmentation. Soft Comput. 2015, 19, 1311–1327.
  7. Gao, H.; Fu, Z.; Pun, C.M.; Hu, H.; Lan, R. A Multi-Level Thresholding Image Segmentation Based on an Improved Artificial Bee Colony Algorithm. Comput. Electr. Eng. 2018, 70, 931–938.
  8. Kurban, T.; Civicioglu, P.; Kurban, R.; Besdok, E. Comparison of evolutionary and swarm based computational techniques for multilevel color image thresholding. Appl. Soft Comput. 2014, 23, 128–143.
  9. Charansiriphaisan, K.; Chiewchanwattana, S.; Sunat, K. A Global Multilevel Thresholding Using Differential Evolution Approach. Math. Probl. Eng. 2014, 2014, 1–23.
  10. Qi, Q.; Tian, Y.; Han, L.; Zhang, T. Research on Image Segmentation Algorithms. J. Beijing Inst. Graph. Commun. 2019, 27, 102–106.
  11. Forouzanfar, M.; Forghani, N.; Teshnehlab, M. Parameter optimization of improved fuzzy c-means clustering algorithm for brain MR image segmentation. Eng. Appl. Artif. Intell. 2010, 23, 160–168.
  12. Wang, Y. Improved OTSU and Adaptive Genetic Algorithm for Infrared Image Segmentation. In Proceedings of the 30th China Conference on Control and Decision Making, Shenyang, China, 9–11 June 2018.
  13. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
  14. Luo, J.; Yang, Y.; Shi, B. Multi-threshold Image Segmentation of 2D Otsu Based on Improved Adaptive Differential Evolution Algorithm. J. Electron. Inf. Technol. 2019, 41, 2017–2024.
  15. Mousavirad, S.J.; Rahnamayan, S.; Schaefer, G. Many-level Image Thresholding using a Center-Based Differential Evolution Algorithm. In Proceedings of the 2020 IEEE Congress on Evolutionary Computation (CEC), Glasgow, UK, 19–24 July 2020.
  16. Tizhoosh, H.R. Opposition-Based Learning: A New Scheme for Machine Intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control & Automation, Vienna, Austria, 28–30 November 2005; pp. 695–701.
  17. Rahnamayan, S.; Tizhoosh, H.R.; Salama, M.M.A. Opposition-Based Differential Evolution. IEEE Trans. Evol. Comput. 2008, 12, 64–79.
  18. Zhou, X.Y.; Wu, Z.J.; Wang, H. A Differential Evolution Algorithm Using Elite Opposition-based Learning. J. Chin. Comput. Syst. 2013, 34, 2129–2134.
  19. Wang, Z.J.; Zhan, Z.H.; Lin, Y.; Yu, W.J.; Yuan, H.Q.; Gu, T.L.; Kwong, S.; Zhang, J. Dual-Strategy Differential Evolution with Affinity Propagation Clustering for Multimodal Optimization Problems. IEEE Trans. Evol. Comput. 2017, 22, 894–908.
  20. Huang, C.; Li, X.; Wen, Y. An OTSU image segmentation based on fruitfly optimization algorithm. Alex. Eng. J. 2020, 60, 183–188.
  21. Zhang, Y.H.; Gong, Y.J.; Gao, Y.; Wang, H.; Zhang, J. Parameter-Free Voronoi Neighborhood for Evolutionary Multimodal Optimization. IEEE Trans. Evol. Comput. 2020, 24, 335–349.
Figure 1. Flowchart of Bi-mutation Strategy Framework.
Figure 2. Histogram distribution.
Figure 3. Group 1: Original Images and Segmented Images on D = 10.
Figure 4. Group 2: Original Images and Segmented Images on D = 10.
Table 1. Results comparison of five algorithms for segmentation of images 1–3 on different criteria when D = 8.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image1 (Forest)     DE(DE/rand/1)    924.2405        30.0992   0.8936   114.06       2.9308
                    DE(DE/best/1)    922.4241        29.8563   0.8845   30.96        1.0445
                    CSDE             924.3083        30.1454   0.8956   327.36       7.5639
                    VNCDE            924.1787        30.1439   0.8950   1130.32      45.8256
                    Our Method       924.3082        30.1521   0.8954   99.32        2.6921
Image2 (Kangaroo)   DE(DE/rand/1)    889.8179        29.8395   0.8769   107.48       2.7631
                    DE(DE/best/1)    887.5622        29.6249   0.867    31.26        0.9713
                    CSDE             889.9062        29.8681   0.8785   328.68       8.6718
                    VNCDE            889.7553        29.8422   0.8778   152.12       53.0893
                    Our Method       889.9043        29.8668   0.8783   95.3         2.5922
Image3 (Actinia)    DE(DE/rand/1)    1116.6892       29.4944   0.7632   105.12       3.5271
                    DE(DE/best/1)    1113.7184       29.2255   0.7598   31.06        0.9773
                    CSDE             1116.703        29.4996   0.7638   304.02       7.1308
                    VNCDE            1116.5651       29.4879   0.7642   35.86        81.6603
                    Our Method       1116.7014       29.4959   0.7636   97.24        2.8167
Table 2. Results comparison of five algorithms for segmentation of images 1–3 on different criteria when D = 9.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image1 (Forest)     DE(DE/rand/1)    928.3622        30.5839   0.9102   135.28       4.2656
                    DE(DE/best/1)    925.5307        30.1935   0.8938   33.22        1.0158
                    CSDE             928.4816        30.6411   0.9132   369.8        8.9023
                    VNCDE            928.1165        30.5994   0.9129   134.14       54.7727
                    Our Method       928.477         30.6273   0.9139   119.2        3.18
Image2 (Kangaroo)   DE(DE/rand/1)    894.7103        30.4694   0.8953   132.66       4.14
                    DE(DE/best/1)    891.3213        29.9178   0.8813   33.66        1.1719
                    CSDE             894.8415        30.5598   0.8972   384.76       8.3398
                    VNCDE            894.7194        30.4893   0.8969   161.24       59.9547
                    Our Method       894.8188        30.5214   0.8967   115.98       3.2098
Image3 (Actinia)    DE(DE/rand/1)    1121.1749       29.8381   0.7793   124.36       3.7242
                    DE(DE/best/1)    1118.2762       29.5798   0.782    33.36        1.0423
                    CSDE             1121.2952       29.8734   0.7808   378.92       8.1109
                    VNCDE            1120.91         29.8418   0.7809   290.74       107.641
                    Our Method       1121.2713       29.8735   0.7835   116.14       3.6367
Table 3. Results comparison of five algorithms for segmentation of images 1–3 on different criteria when D = 10.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image1 (Forest)     DE(DE/rand/1)    931.1964        30.9917   0.9187   165.72       5.0252
                    DE(DE/best/1)    927.9442        30.466    0.9057   34.46        1.0486
                    CSDE             931.541         31.2247   0.9242   414.88       9.6212
                    VNCDE            931.3505        31.2011   0.9245   149.26       60.1333
                    Our Method       931.5102        31.2395   0.9246   132.4        3.776
Image2 (Kangaroo)   DE(DE/rand/1)    898.2523        31.0023   0.9084   154.12       4.7712
                    DE(DE/best/1)    895.4196        30.3822   0.8963   36.68        1.1319
                    CSDE             898.481         31.1343   0.9115   430.58       9.5338
                    VNCDE            897.977         31.0215   0.9101   201.24       79.1909
                    Our Method       898.4808        31.1409   0.9119   125.68       3.4951
Image3 (Actinia)    DE(DE/rand/1)    1124.6503       30.4593   0.799    148.24       3.9829
                    DE(DE/best/1)    1121.1936       29.8745   0.8017   34.26        0.8487
                    CSDE             1124.7492       30.5089   0.8008   459.64       8.4798
                    VNCDE            1124.2897       30.4549   0.8092   338.84       125.4491
                    Our Method       1124.7372       30.5241   0.8006   140.82       3.6164
Table 4. Results comparison of five algorithms for segmentation of images 4–6 on different criteria when D = 8.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image4 (41033)      DE(DE/rand/1)    3434.6711       29.0786   0.7675   100.8667     2.6089
                    DE(DE/best/1)    3433.6338       29.1543   0.7683   31.2333      0.8624
                    CSDE             3434.6965       29.0487   0.7587   366.7333     7.6867
                    VNCDE            3434.6609       29.0267   0.7634   120.432      65.3453
                    Our Method       3434.6894       29.1625   0.7686   98.8         2.7308
Image5 (385039)     DE(DE/rand/1)    4030.9421       28.5971   0.8249   84.4333      4.8844
                    DE(DE/best/1)    4030.9409       28.5958   0.8242   88.3333      3.3523
                    CSDE             4030.9397       28.594    0.8233   88.1333      7.5729
                    VNCDE            4030.9132       28.4387   0.8078   167.9087     70.4665
                    Our Method       4030.9418       28.5996   0.8254   84.7667      2.3907
Image6 (145086)     DE(DE/rand/1)    3725.232        28.5494   0.7921   94.76        4.5831
                    DE(DE/best/1)    3724.4946       28.5059   0.7916   28.54        1.5662
                    CSDE             3725.2315       28.5514   0.7922   275.32       10.5008
                    VNCDE            3724.9765       28.5439   0.7786   242.579      86.7562
                    Our Method       3725.2336       28.5533   0.7922   89.24        4.3692
Table 5. Results comparison of five algorithms for segmentation of images 4–6 on different criteria when D = 9.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image4 (41033)      DE(DE/rand/1)    3441.5956       29.3233   0.7951   107.9333     2.9114
                    DE(DE/best/1)    3439.5199       29.2966   0.7889   31.1333      0.8228
                    CSDE             3441.6141       29.3404   0.7969   333.4        6.212
                    VNCDE            3441.543        29.2994   0.7129   141.25       62.6528
                    Our Method       3441.5978       29.3406   0.7959   107.6333     3.033
Image5 (385039)     DE(DE/rand/1)    4038.7195       28.7989   0.8417   106.2        3.0852
                    DE(DE/best/1)    4038.3371       28.9104   0.8461   28.7333      0.7629
                    CSDE             4038.7681       28.7269   0.8376   500.7        11.3685
                    VNCDE            4038.5453       28.6893   0.8369   153.5        69.7843
                    Our Method       4038.7209       28.8138   0.8424   104.2667     2.88
Image6 (145086)     DE(DE/rand/1)    3734.052        28.6686   0.8021   109.38       4.9802
                    DE(DE/best/1)    3733.1225       28.6688   0.8035   106.2        4.8643
                    CSDE             3734.1269       28.6798   0.8041   357.32       11.8959
                    VNCDE            3734.1235       28.6701   0.7809   290.74       107.641
                    Our Method       3734.1244       28.6704   0.8039   108          5.9049
Table 6. Results comparison of five algorithms for segmentation of images 4–6 on different criteria when D = 10.

Image               Algorithm        Fitness Value   PSNR      SSIM     Iterations   Running Time
Image4 (41033)      DE(DE/rand/1)    3446.1561       29.7778   0.8202   135.0667     3.8724
                    DE(DE/best/1)    3444.6259       29.5565   0.8127   34.2333      0.883
                    CSDE             3446.1878       29.7858   0.8226   489.8        10.0595
                    VNCDE            3446.1678       29.7658   0.8206   207.0667     80.7313
                    Our Method       3446.1771       29.8002   0.8213   136.9333     4.2332
Image5 (385039)     DE(DE/rand/1)    4046.1617       29.2401   0.8507   119.6        3.231
                    DE(DE/best/1)    4045.2943       29.271    0.8628   31.2667      0.9824
                    CSDE             4046.2345       29.2679   0.863    60.9333      7.7758
                    VNCDE            4046.2064       29.2415   0.9101   201.24       76.1909
                    Our Method       4046.2481       29.2975   0.8677   114.3333     3.248
Image6 (145086)     DE(DE/rand/1)    3740.8882       28.9319   0.8066   125.12       5.4697
                    DE(DE/best/1)    3739.8085       28.9277   0.8217   31.46        1.3898
                    CSDE             3741.0143       28.9404   0.8155   387.22       14.4902
                    VNCDE            3740.9982       28.9334   0.8092   338.84       125.4491
                    Our Method       3741.0105       28.9376   0.8156   122.98       5.576
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

