Article

Self-Adaptive Image Thresholding within Nonextensive Entropy and the Variance of the Gray-Level Distribution

Department of Physics, College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
* Author to whom correspondence should be addressed.
Submission received: 28 January 2022 / Revised: 19 February 2022 / Accepted: 20 February 2022 / Published: 23 February 2022
(This article belongs to the Special Issue Entropy Based Image Registration)

Abstract:
In order to automatically recognize different kinds of objects from their backgrounds, a self-adaptive segmentation algorithm that can effectively extract the targets from various surroundings is of great importance. Image thresholding is widely adopted in this field because of its simplicity and high efficiency. The entropy-based and variance-based algorithms are two main kinds of image thresholding methods, and have been independently developed for different kinds of images over the years. In this paper, their advantages are combined and a new algorithm is proposed to deal with a more general scope of images, including the long-range correlations among the pixels that can be determined by a nonextensive parameter. In comparison with the other famous entropy-based and variance-based image thresholding algorithms, the new algorithm performs better in terms of correctness and robustness, as quantitatively demonstrated by four quality indices, ME, RAE, MHD, and PSNR. Furthermore, the whole process of the new algorithm has potential application in self-adaptive object recognition.

1. Introduction

One of the most important tasks in image segmentation is to precisely extract objects from their backgrounds. Image thresholding has previously been widely adopted because of its simplicity and efficiency [1,2,3]. For different types of images, a large number of thresholding algorithms exist based on the characteristics of images. More specifically, the gray-level distribution, i.e., the histogram of the gray-level image, plays an important role in the image thresholding algorithms. It is obvious that different types of images will show different histogram profiles, which contain information relating to both the objects and their backgrounds. Therefore, it is desirable to identify characteristic functions that can suggest proper thresholds to separate the objects and backgrounds.
The Otsu algorithm [4] is widely adopted to deal with images having a bimodal histogram distribution and can be easily extended to multi-level image segmentation [5,6,7,8,9]. The entropy-based algorithm [10,11,12,13,14] is another option for image segmentation since the gray-level histogram can be considered as a kind of probability distribution, and maximization of the corresponding entropies is a nature-inspired means of finding the optimal thresholds. In order to improve the robustness and anti-interference of the thresholding algorithms, two-dimensional histogram distributions [15,16,17] are frequently used to detect the edges and noise of the images, and thus achieve better segmentation results [18,19,20]. It is worth mentioning that, among these entropy-based algorithms, “Tsallis entropy-based thresholding” introduces the concept of nonextensivity into the image segmentation field [21,22]. The nonextensive entropy can be traced from the complex physical systems that have long-range interactions and/or long-duration memories [23,24]. There is a nonextensive parameter that measures the strength of the above mentioned non-local effects. Therefore, it is reasonable to adopt the nonextensive parameter to illustrate the global correlations among all pixels of an image. From the viewpoint of information theory, the nonextensive parameter of an image can be determined by the maximization of the redundancy of the gray-level distribution [25].
Since there are too many categories of images, a unique segmentation algorithm that can deal with all of them effectively does not exist. Nevertheless, a stable algorithm that can correctly segment a wider variety of images is one of the important goals in computer vision research [26,27,28,29,30,31,32,33]. The Otsu and Otsu-based algorithms tend to separate the foreground and background into equally sized parts, so they are not suitable for the extraction of tiny objects. Conversely, the entropy-based algorithms are too sensitive to perturbations in images, and this instability restricts these algorithms from being applied in a more general scope. In this study, based on an explicit mathematical interpretation and numerical evaluation, it was found that the Otsu algorithm and the nonextensive entropy-based algorithm can be properly combined. An effective objective function is thus proposed to overcome both of the above-mentioned deficiencies. Moreover, the effective nonextensive parameter in the proposed algorithm is automatically determined by the information redundancy of an image [25]. Therefore, the proposed approach is a self-adaptive algorithm that can hopefully be applied to a more general scope of scenes.
The remainder of this paper is organized as follows: in Section 2, the general properties of the Otsu algorithm are illustrated and the entropy-based algorithms are briefly introduced; in Section 3, based on mathematical calculation and numerical evaluation, an effective objective function is proposed for self-adaptive image thresholding; the detailed results and analysis are presented in Section 4; and the conclusions are given in Section 5.

2. Image Thresholding Algorithms

Assuming that the size of an image is $M \times N$ and its gray levels range over $i = 0, 1, \ldots, L-1$, the probability of the $i$-th gray level can be defined as:
$$p_i = \frac{h_i}{M \times N}, \qquad p_i \ge 0, \qquad \sum_{i=0}^{L-1} p_i = 1 \tag{1}$$
where $M \times N$ is the total number of pixels in the image and $h_i$ is the number of pixels whose gray-level value equals $i$. The normalization of the probability distribution is explicitly expressed in Equation (1).
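For illustration, the following minimal Python sketch computes the normalized gray-level histogram of Equation (1) with NumPy; the function name and the 8-bit assumption are illustrative and not part of the original description.

```python
# Sketch of Equation (1): normalized gray-level histogram of an 8-bit image.
import numpy as np

def gray_level_probabilities(image, L=256):
    """Return p_i = h_i / (M*N) for gray levels i = 0, ..., L-1."""
    h = np.bincount(image.ravel().astype(np.int64), minlength=L)  # h_i
    p = h / image.size                                            # p_i = h_i / (M*N)
    assert np.isclose(p.sum(), 1.0)                               # normalization of Eq. (1)
    return p

# Example with a synthetic 8-bit image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
p = gray_level_probabilities(img)
print(p.shape, p.sum())
```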

2.1. Otsu Algorithm

Now suppose that the threshold of an image is $t$. The corresponding gray-level histogram can be divided into two classes, $C_a = \{0, 1, \ldots, t\}$ and $C_b = \{t+1, \ldots, L-1\}$, and the cumulative probabilities of the two classes can be written as:
$$P_a = \sum_{i=0}^{t} p_i, \qquad P_b = \sum_{i=t+1}^{L-1} p_i \tag{2}$$
The mean gray-level values of $C_a$ and $C_b$ are given by:
$$\omega_a = \frac{1}{P_a}\sum_{i=0}^{t} i\,p_i, \qquad \omega_b = \frac{1}{P_b}\sum_{i=t+1}^{L-1} i\,p_i \tag{3}$$
Using the same idea, the mean gray-level value of the image is:
$$\omega_G = \sum_{i=0}^{L-1} i\,p_i \tag{4}$$
Therefore, the variances of $C_a$, $C_b$, and the total histogram can be respectively written as:
$$\sigma_a^2 = \sum_{i=0}^{t} (i-\omega_a)^2\,\frac{p_i}{P_a}, \qquad \sigma_b^2 = \sum_{i=t+1}^{L-1} (i-\omega_b)^2\,\frac{p_i}{P_b}, \qquad \sigma_G^2 = \sum_{i=0}^{L-1} (i-\omega_G)^2\,p_i \tag{5}$$
Based on Equation (5), the within-class variance and between-class variance are defined as [4]:
$$\sigma_W^2 = P_a\,\sigma_a^2 + P_b\,\sigma_b^2, \qquad \sigma_B^2 = P_a\,(\omega_a - \omega_G)^2 + P_b\,(\omega_b - \omega_G)^2 \tag{6}$$
and the following relation holds:
$$\sigma_B^2 + \sigma_W^2 = \sigma_G^2 \tag{7}$$
It can be easily seen that, for an arbitrary threshold value $t$, the following relations always hold:
$$P_a\,\omega_a + P_b\,\omega_b = \omega_G, \qquad P_a + P_b = 1 \tag{8}$$
The key point of the Otsu algorithm is to maximize the between-class variance by selecting a proper threshold value $t^*$, i.e.,
$$t^* = \arg\max_t \left\{\sigma_B^2(t)\right\} \tag{9}$$
In fact, using Equation (8), the between-class variance can be rewritten as:
$$\sigma_B^2 = P_a\,P_b\,(\omega_b - \omega_a)^2 \tag{10}$$
From Equation (10), it can be seen that the between-class variance is dominated by two factors, $(\omega_b - \omega_a)^2$ and $P_a P_b$. Maximizing the factor $(\omega_b - \omega_a)^2$ means that the gray-level difference between $C_a$ and $C_b$ is tuned to the maximum by a proper threshold $t_1$, which coincides with the principle of image segmentation. However, maximizing the factor $P_a P_b$ requires finding another threshold $t_2$ that satisfies $P_a = P_b = 1/2$, which means that the number of pixels in the foreground equals that in the background. In general, for a given image, $t_1 \neq t_2$, and the optimal threshold $t^*$ represents a trade-off between $t_1$ and $t_2$. Therefore, the Otsu algorithm always has a tendency to separate the pixels of an image into two equally sized classes, which is a deficiency when extracting tiny objects from the background.
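The following minimal Python sketch (an illustration, not the reference implementation of [4]) scans all candidate thresholds and maximizes the between-class variance in the form of Equation (10):

```python
# Sketch of Equations (9)-(10): exhaustive Otsu threshold selection.
import numpy as np

def otsu_threshold(p):
    """p: normalized histogram of length L; returns t* maximizing sigma_B^2(t)."""
    L = p.size
    i = np.arange(L)
    best_t, best_sigma_b = 0, -np.inf
    for t in range(L - 1):
        P_a, P_b = p[:t + 1].sum(), p[t + 1:].sum()
        if P_a <= 0 or P_b <= 0:
            continue
        omega_a = (i[:t + 1] * p[:t + 1]).sum() / P_a
        omega_b = (i[t + 1:] * p[t + 1:]).sum() / P_b
        sigma_b = P_a * P_b * (omega_b - omega_a) ** 2   # Equation (10)
        if sigma_b > best_sigma_b:
            best_t, best_sigma_b = t, sigma_b
    return best_t
```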

2.2. Otsu–Kapur Algorithm

The Otsu algorithm is a classical global thresholding technique based on the clustering theorem. The idea of the entropy-based algorithm is quite different from Otsu’s, although both of them start from the image’s histogram. Shannon entropy is widely adopted in entropy-based image thresholding. It was first proposed by Pun [34] and improved by Kapur in 1985 [35]. By using the a priori entropy of the foreground and background, an objective function is obtained to indicate the optimal threshold under the Maximum Entropy Principle.
Based on the gray-level histogram distribution of an image, Shannon entropy is given by:
$$S_k = -\sum_{i=0}^{L-1} p_i \ln p_i \tag{11}$$
Assuming that the histogram is separated into two parts ($a$ and $b$) by a threshold $t$, the corresponding entropies are:
$$S(a) = -\sum_{i=0}^{t}\frac{p_i}{P_t}\ln\frac{p_i}{P_t} = \ln P_t + \frac{S_t}{P_t}, \qquad S(b) = -\sum_{i=t+1}^{L-1}\frac{p_i}{1-P_t}\ln\frac{p_i}{1-P_t} = \ln(1-P_t) + \frac{S_k - S_t}{1-P_t} \tag{12}$$
where $P_t = \sum_{i=0}^{t} p_i$ and $S_t = -\sum_{i=0}^{t} p_i \ln p_i$.
The objective function $\varphi(t)$ is given by the sum of $S(a)$ and $S(b)$:
$$\varphi(t) = S(a) + S(b) \tag{13}$$
and the optimal threshold of the Kapur algorithm is determined by:
$$t^* = \arg\max_t \left\{\varphi(t)\right\} \tag{14}$$
In practice, the Kapur algorithm performs better than the Otsu algorithm in extracting tiny targets from their background. However, this algorithm is quite sensitive to perturbations of the pixels. For instance, the value of Equation (13) varies drastically with the threshold $t$, which means that the optimal threshold can easily be disturbed by variations in the gray-level distribution, leading to incorrect segmentation. This instability also restricts the application of the Kapur algorithm to a more general scope. Taking the characteristics of the Otsu algorithm into account, it is possible to increase the stability by combining the Kapur and Otsu algorithms without losing the accuracy of extracting tiny objects.
For a given image, the total gray-level variance $\sigma_G^2$ is fixed. From Equation (7), we can see that maximizing the between-class variance $\sigma_B^2$ is equivalent to minimizing the within-class variance $\sigma_W^2$. Therefore, Equations (5) and (14) yield the objective function:
$$N_e(t) = \ln \sigma_W^2(t) - \varphi(t) \tag{15}$$
The optimal threshold is obtained by the following algorithm:
$$t^* = \arg\min_t \left\{N_e(t)\right\} \tag{16}$$
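A minimal Python sketch of this subsection is given below; it is an illustration only, and the difference form of $N_e(t)$ follows the reconstruction of Equation (15) above rather than a verified transcription of the original code.

```python
# Sketch of the Kapur objective (Eqs. (12)-(13)) and the combined
# Otsu-Kapur objective N_e(t) = ln(sigma_W^2(t)) - phi(t), minimized over t.
import numpy as np

def kapur_phi(p, t):
    P_t = p[:t + 1].sum()
    if P_t <= 0 or P_t >= 1:
        return -np.inf
    pa, pb = p[:t + 1] / P_t, p[t + 1:] / (1.0 - P_t)
    S_a = -(pa[pa > 0] * np.log(pa[pa > 0])).sum()
    S_b = -(pb[pb > 0] * np.log(pb[pb > 0])).sum()
    return S_a + S_b                                    # phi(t) = S(a) + S(b)

def within_class_variance(p, t):
    i = np.arange(p.size)
    P_a, P_b = max(p[:t + 1].sum(), 1e-12), max(p[t + 1:].sum(), 1e-12)
    w_a = (i[:t + 1] * p[:t + 1]).sum() / P_a
    w_b = (i[t + 1:] * p[t + 1:]).sum() / P_b
    s2_a = ((i[:t + 1] - w_a) ** 2 * p[:t + 1]).sum() / P_a
    s2_b = ((i[t + 1:] - w_b) ** 2 * p[t + 1:]).sum() / P_b
    return P_a * s2_a + P_b * s2_b                      # sigma_W^2, Equation (6)

def otsu_kapur_threshold(p):
    L = p.size
    scores = [np.log(within_class_variance(p, t) + 1e-12) - kapur_phi(p, t)
              for t in range(1, L - 1)]
    return 1 + int(np.argmin(scores))                   # Equation (16)
```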

2.3. Two-Dimensional Entropic Algorithm

The above-mentioned thresholding algorithms are based on the one-dimensional (1D) gray-level histogram. In order to improve the accuracy and robustness, Abutaleb [36] considered not only a pixel's gray-level value, but also the spatial correlation of the pixels in an image. Therefore, the mean gray-level value of the neighboring pixels is relevant, and the 1D histogram distribution is extended to a two-dimensional (2D) distribution. If a pixel's gray level is equal to $i$ and the average gray level of its neighborhood is $j$, the number of such pixels in the image is $f_{ij}$.
The 2D probability distribution can be written as:
$$p_{ij} = \frac{f_{ij}}{M \times N} \tag{17}$$
The total entropy of the 2D histogram is defined as:
$$H(L) = -\sum_{i=0}^{L-1}\sum_{j=0}^{L-1} p_{ij} \ln p_{ij} \tag{18}$$
If the two thresholds are located at s and t , the 2D gray-level histogram is divided into four regions, as shown in Figure 1.
Assume that the pixels are mainly distributed in the two regions $a$ and $b$ in Figure 1. The cumulative probabilities of $a$ and $b$ are:
$$P_A(s,t) = \sum_{i=0}^{s}\sum_{j=0}^{t} p_{ij}, \qquad P_B(s,t) = \sum_{i=s+1}^{L-1}\sum_{j=t+1}^{L-1} p_{ij} \tag{19}$$
The corresponding entropies can be written as:
$$H_A(s,t) = -\sum_{i=0}^{s}\sum_{j=0}^{t}\frac{p_{ij}}{P_A(s,t)}\ln\frac{p_{ij}}{P_A(s,t)}, \qquad H_B(s,t) = -\sum_{i=s+1}^{L-1}\sum_{j=t+1}^{L-1}\frac{p_{ij}}{P_B(s,t)}\ln\frac{p_{ij}}{P_B(s,t)} \tag{20}$$
Based on the additivity of Shannon entropy, the total entropy is defined as:
$$\Psi(s,t) = H_A(s,t) + H_B(s,t) \tag{21}$$
which depends on the threshold pair $(s, t)$. Following the same idea as the 1D entropy-based algorithm, maximizing the objective function, i.e., Equation (21), yields the optimal thresholds:
$$(s^*, t^*) = \arg\max_{0<s<L-1}\left\{\max_{0<t<L-1}\Psi(s,t)\right\} \tag{22}$$
In practice, the above 2D entropic algorithm is effective for images with uneven illumination, noise, missing edges, poor contrast, and other interference from the environment [37]. It is reasonable to consider more correlations between a pixel and its neighborhood, and the histogram distribution can be extended to higher dimensions. However, increasing the number of dimensions leads to an exponential increase in computational cost.
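A minimal Python sketch of the 2D construction is given below; it is an illustration only, and the 3×3 neighborhood and the use of SciPy's uniform_filter are assumptions, since the paper does not fix these details here.

```python
# Sketch of Equations (17)-(21): 2D histogram from (gray level, neighborhood mean)
# and the two-region entropy Psi(s, t). Exhaustive search over (s, t) costs O(L^2) calls.
import numpy as np
from scipy.ndimage import uniform_filter  # 3x3 local mean (SciPy assumed available)

def two_d_histogram(image, L=256):
    mean = uniform_filter(image.astype(float), size=3)        # neighborhood mean
    j = np.clip(np.round(mean), 0, L - 1).astype(int)
    hist2d = np.zeros((L, L))
    np.add.at(hist2d, (image.astype(int), j), 1.0)            # counts f_ij
    return hist2d / image.size                                # p_ij, Equation (17)

def region_entropy(block):
    P = block.sum()
    if P <= 0:
        return -np.inf
    q = block[block > 0] / P
    return -(q * np.log(q)).sum()

def psi(p2d, s, t):
    # Equation (21): entropies of the diagonal regions a (low/low) and b (high/high)
    return region_entropy(p2d[:s + 1, :t + 1]) + region_entropy(p2d[s + 1:, t + 1:])
```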

2.4. Tsallis Entropy Algorithm

As mentioned above, Shannon entropy is additive and shows the property of extensivity in image processing. The concept of entropy was first proposed in thermodynamics to describe physical systems that have a huge number of microstates. Furthermore, the extensivity of entropy is based on the assumption that the microstates of the system are independent of each other. However, for some systems with long-range interactions, long-duration memory, and fractal-type structures, the extensivity may no longer hold. Tsallis introduced a kind of nonextensive entropy [23] to describe such systems, expressed as:
$$S_T = \frac{1 - \sum_{i=1}^{L} p_i^{\,q}}{q-1} \tag{23}$$
where $q$ is a real number that describes the nonextensivity of the system. In the $q \to 1$ limit, Tsallis entropy reduces to Shannon entropy and the extensivity of the system is recovered. The nonextensive generalization of entropy also sheds light on information theory. In image segmentation, Tsallis entropy shows potential superiority and flexibility for a more general scope of image classes [21].
In the Tsallis entropy algorithm, the cumulative probabilities of the foreground $a$ and background $b$ are:
$$P_a = \sum_{i=1}^{t} p_i, \qquad P_b = \sum_{i=t+1}^{L} p_i \tag{24}$$
According to Equation (23), the entropy of each part can be defined as [21]:
$$S_q^{a}(t) = \frac{1-\sum_{i=1}^{t}\left(p_i/P_a\right)^q}{q-1}, \qquad S_q^{b}(t) = \frac{1-\sum_{i=t+1}^{L}\left(p_i/P_b\right)^q}{q-1} \tag{25}$$
Suppose that $a$ and $b$ are subsystems of the full image; due to the nonextensivity, the total entropy of the image is expressed as:
$$S_q^{a+b}(t) = S_q^{a}(t) + S_q^{b}(t) + (1-q)\,S_q^{a}(t)\,S_q^{b}(t) \tag{26}$$
where the third term on the right-hand side of Equation (26) shows the pseudo-additivity of Tsallis entropy. Maximizing $S_q^{a+b}$ yields the optimal threshold $t^*$, which is given by:
$$t^* = \arg\max_t \left\{S_q^{a+b}(t)\right\} \tag{27}$$
Obviously, the optimal result of Equation (27) depends on the nonextensive parameter $q$, which describes the strength of the internal correlations of the image. In other words, for any two pixels in the image, their gray-level values may have long-range correlations. More specifically, for an image containing several objects, the pixels of an object will exhibit similar gray-level values, even though they are not adjacent to each other. It is possible to measure this kind of long-range correlation by nonextensive entropy [18,21], and this idea inspired the new algorithm discussed below. Since the parameter $q$ is an additional index that can tune the optimal threshold, it is of great importance to determine its exact value for a given image. Recently, Abdiel and coauthors introduced a methodology to evaluate the nonextensive parameter $q$ of an image [25]. Based on information theory, the generalized redundancy of an image that presents nonextensive properties can be expressed as [25]:
$$R(q) = 1 - \frac{S_T}{S_T^{\max}} \tag{28}$$
where $S_T^{\max} = \left(1 - L^{1-q}\right)/(q-1)$ is the maximum possible q-entropy of the image, which is achieved at $p_i = 1/L$ $(0 \le i \le L-1)$, i.e., equipartition of the gray-level probability. Maximizing Equation (28) by a proper value of $q$ means that the gray-level histogram of the given image is renormalized so as to deviate as far as possible from the equal-probability case (which contains zero information). Therefore, the information contained in the image histogram can be strengthened by a particular $q$, which is highly dependent on the image category.
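A minimal Python sketch of this subsection combines the Tsallis criterion of Equations (24)–(27) with the redundancy of Equation (28); the grid used to search for $q$ in $(0,1)$ is an assumption for illustration.

```python
# Sketch of the Tsallis threshold (Eqs. (24)-(27)) and the redundancy-based
# selection of q (Eq. (28)), searched over a grid in (0, 1).
import numpy as np

def tsallis_total_entropy(p, t, q):
    P_a, P_b = p[:t + 1].sum(), p[t + 1:].sum()
    if P_a <= 0 or P_b <= 0:
        return -np.inf
    S_a = (1.0 - ((p[:t + 1] / P_a) ** q).sum()) / (q - 1.0)
    S_b = (1.0 - ((p[t + 1:] / P_b) ** q).sum()) / (q - 1.0)
    return S_a + S_b + (1.0 - q) * S_a * S_b            # pseudo-additivity, Eq. (26)

def redundancy(p, q):
    L = p.size
    S_T = (1.0 - (p[p > 0] ** q).sum()) / (q - 1.0)     # Tsallis entropy, Eq. (23)
    S_max = (1.0 - L ** (1.0 - q)) / (q - 1.0)          # entropy of the equiprobable histogram
    return 1.0 - S_T / S_max                            # R(q), Eq. (28)

def select_q(p, grid=np.linspace(0.05, 0.95, 91)):
    return float(grid[int(np.argmax([redundancy(p, q) for q in grid]))])

def tsallis_threshold(p, q):
    scores = [tsallis_total_entropy(p, t, q) for t in range(p.size - 1)]
    return int(np.argmax(scores))                       # Equation (27)
```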

3. New Algorithm

As mentioned above, the nonextensive entropy algorithm is suitable for describing the long-range correlations within an image. However, like other entropy-based algorithms, it is still very sensitive to the perturbation of signals, so the scope of its application is limited. By comparison, the Otsu algorithm is stable but not accurate for small target extraction. Therefore, it is possible to combine the advantages of the two and develop a new algorithm with a more general scope of application. It is worth mentioning that the nonextensive parameter q in Tsallis entropy is now determined by information redundancy and cannot be tuned arbitrarily.
Based on Equations (5), (7) and (26), a new objective function can be written as:
$$\mu(t) = S_q^{a+b} - \left(\sigma_W^2\right)^{1-q} \tag{29}$$
In order to retain the concavity of Tsallis entropy, $q > 0$ should be satisfied [23]. Moreover, $q < 1$ corresponds to superextensivity, which increases the total entropy of the system in comparison with the extensive case ($q = 1$) [38]. In practice, almost all categories of images exhibit the property of superextensivity [25]. Therefore, the proper range of the nonextensive parameter is $0 < q < 1$. From Equations (9) and (27), we can see that both algorithms aim to maximize their objective functions. Taking Equation (7) into account, it can be easily seen that the aim of Equation (29) is likewise to maximize the objective function, i.e.,
$$t^* = \arg\max_t \left\{\mu(t)\right\} \tag{30}$$
The optimal threshold is obtained from Equation (30) with the above-mentioned range of $q$. For a synthetic image with a bimodal histogram distribution, as shown in Figure 2, the profile of each peak is a normalized q-Gaussian distribution function [39]. From Equations (9) and (27), we can see that both the Otsu algorithm and the Tsallis entropy algorithm indicate the valley gray level between the two peaks as the optimal threshold, which exactly coincides with the result of Equation (30). For natural pictures with arbitrary histogram distributions, there is no guarantee that the result of Equation (9) coincides with that of Equation (27), whereas Equation (29) represents a trade-off between them and Equation (30) may yield a proper suggestion. For the histogram of Figure 2, it should be noted that the magnitude difference between $S_q^{a+b}$ and $\sigma_W^2$ is very large. As shown in Figure 3, both are functions of the threshold $t$; however, the values of the Tsallis entropy objective are completely dwarfed by those of the Otsu objective for any possible threshold $t$. Therefore, it is unsuitable to combine $S_q^{a+b}$ and $\sigma_W^2$ directly.
In order to avoid the impact of the magnitude difference, the q-exponential function [40] can be adopted to revise the magnitude of σ W 2 . By definition, Tsallis entropy with a continuous probability distribution function can be expressed as:
$$S_T = \frac{1 - \int_0^1 p(x)^q\, dx}{q-1} \tag{31}$$
where p ( x ) represents the probability density of the normalized gray-level value x . For a system presenting nonextensive q-entropy, the corresponding probability distribution can be written as the q-Gaussian function [39]:
$$p(x) = \frac{1}{Z_q}\left[1 - (1-q)\,\frac{x^2}{\sigma^2}\right]^{\frac{1}{1-q}} \tag{32}$$
where σ 2 is the variance of x and Z q is the partition function to keep the probability normalization condition, i.e.,
$$Z_q = \int_0^1 \left[1 - (1-q)\left(\frac{x}{\sigma}\right)^2\right]^{\frac{1}{1-q}} dx = \frac{\sigma\sqrt{\pi}}{2\sqrt{1-q}}\cdot\frac{\Gamma\!\left(1+\frac{1}{1-q}\right)}{\Gamma\!\left(\frac{3}{2}+\frac{1}{1-q}\right)} \tag{33}$$
where $\Gamma(k)$ is the Gamma function, which reduces to the factorial $(k-1)!$ when $k$ is an integer. Substituting $p(x)$ into Equation (31) yields:
$$S_T = \frac{1 - \int_0^1 \frac{1}{Z_q^{\,q}}\left[1-(1-q)\left(\frac{x}{\sigma}\right)^2\right]^{\frac{q}{1-q}} dx}{q-1} = \frac{1 - \xi\,\left(\sigma^2\right)^{\frac{1-q}{2}}}{q-1} \tag{34}$$
where:
$$\xi = \left[\frac{\pi}{4(1-q)}\right]^{\frac{1-q}{2}}\cdot\left[\frac{\Gamma\!\left(\frac{3}{2}+\frac{1}{1-q}\right)}{\Gamma\!\left(1+\frac{1}{1-q}\right)}\right]^{q}\cdot\frac{\Gamma\!\left(\frac{1}{1-q}\right)}{\Gamma\!\left(\frac{3-q}{2(1-q)}\right)} \tag{35}$$
is the integration constant for a given value of q. If p a and p b are two identical q-Gaussian distribution functions, according to the nonextensivity of Tsallis entropy, the total entropy can be written as:
$$S_T^{(a+b)} = S_T^{(a)} + S_T^{(b)} + (1-q)\,S_T^{(a)}S_T^{(b)} = \frac{1-\xi_a\left(\sigma_a^2\right)^{\frac{1-q}{2}}}{q-1} + \frac{1-\xi_b\left(\sigma_b^2\right)^{\frac{1-q}{2}}}{q-1} + (1-q)\cdot\frac{1-\xi_a\left(\sigma_a^2\right)^{\frac{1-q}{2}}}{q-1}\cdot\frac{1-\xi_b\left(\sigma_b^2\right)^{\frac{1-q}{2}}}{q-1} \tag{36}$$
Substituting σ a 2 = σ b 2 = σ W 2 into Equation (36) yields:
$$S_T^{(a+b)} = \frac{\xi_a\,\xi_b\left(\sigma_W^2\right)^{1-q} - 1}{1-q} \tag{37}$$
Therefore, the magnitude of $\left(\sigma_W^2\right)^{1-q}$ is comparable with that of $S_T^{(a+b)}$ over the proper range of $q$, which shows the rationality of Equation (29). The main steps of the present algorithm are summarized in Figure 4.
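A minimal, self-contained Python sketch of the proposed criterion is given below; it is an illustration of the reconstructed objective in Equation (29) (Tsallis term minus the rescaled within-class variance), not a released implementation.

```python
# Sketch of Equations (29)-(30): mu(t) = S_q^{a+b}(t) - (sigma_W^2(t))^(1-q),
# maximized over t for a q in (0, 1) obtained from the redundancy criterion.
import numpy as np

def proposed_threshold(p, q):
    """p: normalized histogram; q: nonextensive parameter, 0 < q < 1."""
    L = p.size
    i = np.arange(L)
    best_t, best_mu = 1, -np.inf
    for t in range(1, L - 1):
        P_a, P_b = p[:t + 1].sum(), p[t + 1:].sum()
        if P_a <= 0 or P_b <= 0:
            continue
        # Tsallis entropies of the two classes and their pseudo-additive sum (Eq. (26))
        S_a = (1.0 - ((p[:t + 1] / P_a) ** q).sum()) / (q - 1.0)
        S_b = (1.0 - ((p[t + 1:] / P_b) ** q).sum()) / (q - 1.0)
        S_ab = S_a + S_b + (1.0 - q) * S_a * S_b
        # Within-class variance (Eq. (6))
        w_a = (i[:t + 1] * p[:t + 1]).sum() / P_a
        w_b = (i[t + 1:] * p[t + 1:]).sum() / P_b
        s2_a = ((i[:t + 1] - w_a) ** 2 * p[:t + 1]).sum() / P_a
        s2_b = ((i[t + 1:] - w_b) ** 2 * p[t + 1:]).sum() / P_b
        sigma_w2 = P_a * s2_a + P_b * s2_b
        mu = S_ab - sigma_w2 ** (1.0 - q)               # Equation (29)
        if mu > best_mu:
            best_t, best_mu = t, mu
    return best_t                                       # Equation (30)
```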
The above procedure can also be applied to the segmentation of RGB or other color images. The intensity distribution of each color channel can be treated as a gray-level distribution, so the threshold value of each channel can be obtained directly. It should be mentioned that the intensity distributions of the different color channels may differ, so the above algorithm does not, in general, yield a single unified threshold. By comparison, both the Otsu algorithm and the entropy-based algorithms can be independently adopted for multi-level image thresholding. Following the idea of Equation (29), the advantages of these two kinds of typical thresholding algorithms can be combined by extending Equation (29) to the multi-level case.
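The per-channel use described above can be sketched as follows; this is an illustration in which `threshold_func` stands for any of the single-channel criteria sketched in this paper (e.g., the proposed one with its self-adaptive $q$).

```python
# Sketch: threshold an RGB image channel by channel with any 1D criterion.
import numpy as np

def threshold_color_image(rgb, threshold_func, L=256):
    """rgb: (M, N, 3) uint8 array; returns one binary mask per channel."""
    masks = []
    for c in range(rgb.shape[2]):
        channel = rgb[..., c]
        p = np.bincount(channel.ravel(), minlength=L) / channel.size
        t = threshold_func(p)            # channel-specific threshold
        masks.append(channel > t)
    return np.stack(masks, axis=-1)
```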

4. Analysis of Experimental Results

In order to show the stability and feasibility of the proposed algorithm, we used four quality indices, namely, Misclassification Error (ME), Relative Foreground Area Error (RAE), Modified Hausdorff Distance (MHD), and Peak Signal-to-Noise Ratio (PSNR), to illustrate the performance of Equation (29) and make comparisons with the other algorithms mentioned in Section 2.

4.1. Misclassification Error (ME)

The misclassification error expresses the percentage of wrongly assigned pixels, i.e., background pixels assigned to the foreground and foreground pixels assigned to the background. For single-threshold segmentation, ME can be simply expressed as [41]:
$$ME = 1 - \frac{\left|C_{gt}\cap C_t\right| + \left|B_{gt}\cap B_t\right|}{\left|C_{gt}\right| + \left|B_{gt}\right|} \tag{38}$$
where $C_{gt}$ and $B_{gt}$ represent the foreground and background of the ground-truth image, $C_t$ and $B_t$ are the foreground and background pixels of the segmented image, and $|\cdot|$ denotes the cardinality of a set. The value of ME lies between 0 and 1; the lower the value of ME, the better the segmentation result.
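A minimal Python sketch of Equation (38) for binary masks is given below (for illustration only):

```python
# Sketch of Equation (38): misclassification error between binary masks.
import numpy as np

def misclassification_error(gt_mask, seg_mask):
    """gt_mask, seg_mask: boolean arrays, True = foreground."""
    fg_agree = np.logical_and(gt_mask, seg_mask).sum()       # |C_gt ∩ C_t|
    bg_agree = np.logical_and(~gt_mask, ~seg_mask).sum()     # |B_gt ∩ B_t|
    return 1.0 - (fg_agree + bg_agree) / gt_mask.size        # |C_gt| + |B_gt| = M*N
```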

4.2. Relative Foreground Area Error (RAE)

RAE is a quality assessment parameter that calculates the area of difference between the segmented image and the ground-truth image, which is defined as [42]:
$$RAE = \begin{cases} \dfrac{A_s - A_t}{A_s}, & \text{if } A_t < A_s \\[2mm] \dfrac{A_t - A_s}{A_t}, & \text{if } A_s < A_t \end{cases} \tag{39}$$
where $A_s$ and $A_t$ are the foreground areas of the ground-truth image and the segmented image, respectively. Obviously, for an ideal segmentation in which $A_t$ coincides with $A_s$, RAE is zero.
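A minimal Python sketch of Equation (39) is given below (for illustration only):

```python
# Sketch of Equation (39): relative foreground area error.
import numpy as np

def relative_foreground_area_error(gt_mask, seg_mask):
    A_s = float(np.count_nonzero(gt_mask))   # ground-truth foreground area
    A_t = float(np.count_nonzero(seg_mask))  # segmented foreground area
    if A_t < A_s:
        return (A_s - A_t) / A_s
    return (A_t - A_s) / A_t if A_t > 0 else 0.0
```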

4.3. Modified Hausdorff Distance (MHD)

The Hausdorff distance is used to determine the degree of similarity between two objects that overlap with each other. In order to maintain a symmetric form, the Modified Hausdorff Distance (MHD) is more frequently used; it is defined as [43]:
$$MHD\left(R_{gt}, R_t\right) = \max\left(d_{MHD}\left(R_{gt}, R_t\right),\; d_{MHD}\left(R_t, R_{gt}\right)\right) \tag{40}$$
$$d_{MHD}\left(R_{gt}, R_t\right) = \frac{1}{\left|R_{gt}\right|}\sum_{r_{gt}\in R_{gt}} \min_{r_t \in R_t} \left\|r_{gt} - r_t\right\| \tag{41}$$
where $r_{gt}$ and $r_t$ represent points belonging to the ground-truth image $R_{gt}$ and the segmented result $R_t$, respectively, and $\|r_{gt}-r_t\|$ is the distance between them. This parameter can objectively describe the degree of distortion between the segmented image and the ground-truth image. If $R_t$ perfectly coincides with $R_{gt}$, then MHD is zero by definition. Unlike ME and RAE, MHD is not normalized; for a failed segmentation, the value of MHD will be much larger than 1.
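A minimal Python sketch of Equations (40) and (41) is given below; the brute-force pairwise distance matrix is an assumption for illustration and is only practical for small foreground sets.

```python
# Sketch of Equations (40)-(41): modified Hausdorff distance between the
# foreground pixel sets of two binary masks.
import numpy as np

def modified_hausdorff_distance(gt_mask, seg_mask):
    R_gt = np.argwhere(gt_mask).astype(float)        # foreground coordinates
    R_t = np.argwhere(seg_mask).astype(float)
    if R_gt.size == 0 or R_t.size == 0:
        return np.inf

    def d_mhd(A, B):
        # mean, over the points of A, of the distance to the nearest point of B
        dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return dists.min(axis=1).mean()              # Equation (41)

    return max(d_mhd(R_gt, R_t), d_mhd(R_t, R_gt))   # Equation (40)
```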

4.4. Peak Signal-to-Noise Ratio (PSNR)

The Peak Signal-to-Noise Ratio is a measure widely used in image transmission. First, the concept of the Mean Square Error (MSE) is required, which is a measure of the difference between two images. It is defined as [44]:
$$MSE = \frac{1}{M\times N}\sum_{i=0}^{M-1}\sum_{j=0}^{N-1}\left[R_{gt}(i,j) - R_t(i,j)\right]^2 \tag{42}$$
where $R_{gt}(i,j)$ and $R_t(i,j)$ are the pixels of the ground-truth image and the segmented image, respectively. It can be easily seen that $MSE = 0$ if $R_{gt}(i,j) = R_t(i,j)$ for arbitrary coordinates $(i,j)$. Therefore, a lower MSE represents a better quality of image segmentation. Accordingly, PSNR is defined in terms of MSE:
$$PSNR = 10\cdot\log_{10}\!\left(\frac{(L-1)^2}{MSE}\right) \tag{43}$$
Equation (43) shows that, for ideal segmentation ( M S E 0 ), PSNR will tend to infinity.
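A minimal Python sketch of Equations (42) and (43) is given below (for illustration only):

```python
# Sketch of Equations (42)-(43): MSE and PSNR between ground truth and result.
import numpy as np

def psnr(gt, seg, L=256):
    mse = np.mean((gt.astype(float) - seg.astype(float)) ** 2)  # Equation (42)
    if mse == 0:
        return np.inf                                           # ideal segmentation
    return 10.0 * np.log10((L - 1) ** 2 / mse)                  # Equation (43)
```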

4.5. Experimental Results

First, we applied the proposed algorithm to segment several well-known testing images. The results of the four other algorithms mentioned above are also listed, as shown in Figure 5, Figure 6 and Figure 7.
In Figure 5, compared to the results of the four other algorithms, i.e., Figure 5b–e, the result of the proposed algorithm retains more details and edge contours. In Figure 6, we can see that both the 2D histogram algorithm and the Tsallis entropic algorithm failed to extract the objects from the image, whereas the result of the proposed algorithm, i.e., Figure 6f, is quite acceptable and contains more detailed information than Figure 6b,c. In Figure 7e, it is shown that the Tsallis entropic algorithm over-segments the original image and the detail of the baboon's face is lost. However, the Otsu, Otsu–Kapur and Shannon 2D thresholds are also not appropriate: as shown in Figure 7b–d, the baboon's eyes are blurred by too many black pixels. In contrast, the proposed algorithm yields a moderate result, as shown in Figure 7f.
In order to show the advantages of the proposed algorithm more convincingly, we chose 50 test images from VOC-2012, BSD300, and Ref. [45] to compare the performance of these five algorithms. These images have totally different gray-level histograms. Accordingly, their nonextensive parameters are also very different from each other, as shown in Table 1.
In order to further illustrate the segmentation results visually, we chose pictures 1–5 of Table 1 as examples, as shown in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
In Figure 8, the ground-truth image shows that the number of pixels in the foreground is comparable to that of the background. Therefore, both the Otsu algorithm and the proposed algorithm achieve acceptable results, whereas the entropy-based algorithms cannot yield good results, as shown in Figure 8e,f. In Figure 9, the infrared object is tiny in comparison with the full image size, and the Otsu-based algorithms fail to produce the correct results, as expected, as shown in Figure 9c,d. Moreover, the infrared image may have long-range correlations among the pixels, so the Shannon entropy-based algorithm also fails, as shown in Figure 9e. The results of Figure 9f,g are very close to the ground-truth image, which indicates that the value of the nonextensive parameter q can correctly evaluate the long-range correlation in an image. In addition, the value of q is automatically generated by maximizing Equation (28), so the new algorithm is self-adaptive. The results of Figure 10 and Figure 12 are quite similar to that of Figure 8, since there is a large amount of noise in the background and the entropy-based algorithms are very unstable under perturbation, in spite of the increased dimension of the histogram. However, the new algorithm can still correctly segment these images, which shows its potential application in a more general scope, including tiny object recognition (Figure 9 and Figure 11), background noise suppression (Figure 10 and Figure 12), and detection of long-range correlations.
From Figure 8e to Figure 12e, we can see that the 2D Shannon algorithm, as a well-known entropic thresholding procedure, does not have stable outputs. However, the idea of extending the dimension of the histogram using the correlation of neighboring pixels remains a valuable heuristic. It is of great interest to extend Equation (29) to two, or even higher, histogram dimensions, because the development of optimization algorithms [15,19,20] can effectively reduce the computational cost.
It should be mentioned that, for each image in the testing set, the new algorithm and the Tsallis entropy algorithm share the same value of q, which is determined by maximizing the information redundancy. However, the Tsallis entropy algorithm is very unstable if the image is subject to noise interference, even with a proper value of q, whereas the new algorithm is always stable. The Otsu algorithm, in turn, is robust but cannot effectively recognize tiny objects. The new algorithm combines the advantages of both the Otsu and entropy-based algorithms in a proper manner, and this point can be further demonstrated using the detailed quality indices.
For 50 images in the testing set, Table 2, Table 3, Table 4 and Table 5 list the above-mentioned four quality indices of the results generated by the five different algorithms, respectively. Due to the variety in the testing set, the new algorithm cannot always ensure the best performance for all images, but its results are still acceptable. Furthermore, the statistical results of Table 2, Table 3, Table 4 and Table 5 clearly show the universality of the proposed algorithm for different kinds of images.
For the 50 testing images, the segmented results and the corresponding ME of the different thresholding algorithms may be very different. However, the performances of these algorithms can be statistically evaluated. Figure 13 shows the average ME values of the 50 images yielded by the above-mentioned five algorithms, and Figure 14 represents the average values of RAE. As shown, the average ME value of the proposed algorithm is the lowest in comparison with those of the other algorithms. Furthermore, the variance in the ME value of the proposed algorithm is much less than those of the other algorithms. Therefore, both the accuracy (lower ME value) and stability (lower variance) of the new algorithm are better than those of the other algorithms, which means that this algorithm is more suitable and more robust for a more general category of images. By the same analysis, we can see that the average value and the variance of RAE of the new algorithm are both the lowest among all the results, which indicates that the new algorithm is better than the others in foreground area detection.
As mentioned above, the values of MHD and PSNR are not normalized. For the 50 segmented results of the testing set, the distributions of MHD and PSNR are not confined to $[0,1]$ but lie in $(0,\infty)$. Therefore, it is possible for their variances to be larger than their averages. Figure 15 shows the comparison of MHD among the five algorithms. As we can see, the average MHD of the new algorithm again achieves the lowest value, and the corresponding variance is less than that of the others. This means that the new algorithm can maintain the shape of the objects in a more correct and stable manner. Figure 16 shows the comparison results of PSNR. Unlike the other three quality indices, a larger PSNR value means a better quality of information transmission. Therefore, we can see that the new algorithm still performs better than the others in this quality index (largest PSNR value), with a better robustness (lowest variance).

5. Conclusions

In computer vision tasks, it is of great importance to explore algorithms that can correctly recognize objects against different kinds of backgrounds in a stable way. The Otsu algorithm is based on the variance of the gray-level distribution of an image. It can yield stable thresholding results but has deficiencies in small target recognition. The entropy-based algorithms are suitable for small target extraction and can even detect the long-range correlation among pixels through a nonextensive parameter. However, the entropy-based objective functions can easily be disturbed by noise. In the present paper, based on rigorous mathematical and numerical results, we combine the advantages of the Otsu algorithm and the nonextensive entropy algorithm to develop a new algorithm that can effectively segment objects from various kinds of backgrounds in a more stable manner. For 50 images chosen from different categories, the quality indices ME, RAE, MHD, and PSNR were adopted to evaluate the segmentation results. In comparison with other well-known thresholding algorithms, the statistical results show that the proposed algorithm has better performance than the others in each of the four quality indices. In addition, there is no artificial intervention during the whole process. Therefore, the proposed algorithm is an approach to automatic image thresholding that has potential application in self-adaptive object recognition.

Author Contributions

Conceptualization, C.O. and Z.S.; methodology, C.O.; software, Q.D.; validation, Q.D. and Z.S.; formal analysis, C.O. and Q.D.; investigation, Q.D.; resources, Q.D.; data curation, Q.D.; writing—original draft preparation, Q.D.; writing—review and editing, C.O.; visualization, Q.D.; supervision, C.O. All authors have read and agreed to the published version of the manuscript.

Funding

The authors are grateful for the support of the National Natural Science Foundation of China (No. 11775084), the Program for prominent Talents in Fujian Province, and Scientific Research Foundation for the Returned Overseas Chinese Scholars.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html (accessed on 28 January 2022) and https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/ (accessed on 28 January 2022) for providing source images.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lei, B.; Fan, J. Image thresholding segmentation method based on minimum square rough entropy. Appl. Soft Comput. 2019, 84, 105687. [Google Scholar] [CrossRef]
  2. Cheng, Y.; Li, B. Image segmentation technology and its application in digital image processing. In Proceedings of the 2021 IEEE Asia-Pacific Conference on Image Processing, Electronics and Computers (IPEC), Dalian, China, 14–16 April 2021; pp. 1174–1177. [Google Scholar] [CrossRef]
  3. Song, Y.; Yan, H. Image Segmentation Techniques Overview. In Proceedings of the 2017 Asia Modelling Symposium (AMS), Kota Kinabalu, Malaysia, 4–6 December 2017; pp. 103–107. [Google Scholar] [CrossRef]
  4. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. Syst. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  5. Merzban, M.H.; Elbayoumi, M. Efficient solution of Otsu multilevel image thresholding: A comparative study. Expert Syst. Appl. 2019, 116, 299–309. [Google Scholar] [CrossRef]
  6. Naidu, M.S.R.; Kumar, P.R.; Chiranjeevi, K. Shannon and Fuzzy entropy based evolutionary image thresholding for image segmentation. Alex. Eng. J. 2018, 57, 1643–1655. [Google Scholar] [CrossRef]
  7. Elaziz, M.A.; Oliva, D.; Ewees, A.A.; Xiong, S. Multi-level thresholding-based grey scale image segmentation using multi-objective multi-verse optimizer. Expert Syst. Appl. 2019, 125, 112–129. [Google Scholar] [CrossRef]
  8. Liu, L.; Huo, J. Apple Image Recognition Multi-Objective Method Based on the Adaptive Harmony Search Algorithm with Simulation and Creation. Information 2018, 9, 180. [Google Scholar] [CrossRef] [Green Version]
  9. Feng, Y.; Liu, W.; Zhang, X.; Liu, Z.; Liu, Y.; Wang, G. An Interval Iteration Based Multilevel Thresholding Algorithm for Brain MR Image Segmentation. Entropy 2021, 23, 1429. [Google Scholar] [CrossRef]
  10. Abdel-Basset, M.; Chang, V.; Mohamed, R. A novel equilibrium optimization algorithm for multi-thresholding image segmentation problems. Neural Comput. Appl. 2020, 33, 10685–10718. [Google Scholar] [CrossRef]
  11. Wu, B.; Zhou, J.; Ji, X.; Yin, Y.; Shen, X. An ameliorated teaching-learning-based optimization algorithm based study of image segmentation for multilevel thresholding using Kapur’s entropy and Otsu’s between class variance. Inf. Sci. 2020, 533, 72–107. [Google Scholar] [CrossRef]
  12. El-Sayed, M.A.; Ali, A.A.; Hussien, M.E.; Sennary, H.A. A Multi-Level Threshold Method for Edge Detection and Segmentation Based on Entropy. Comput. Mater. Contin. 2020, 63, 1–16. [Google Scholar] [CrossRef]
  13. Truong, M.T.N.; Kim, S. Automatic image thresholding using Otsu’s method and entropy weighting scheme for surface defect detection. Soft Comput. 2018, 22, 4197–4203. [Google Scholar] [CrossRef]
  14. Pare, S.; Kumar, A.; Singh, G.K.; Bajaj, V. Image Segmentation Using Multilevel Thresholding: A Research Review. Iran. J. Sci. Technol. Trans. Electr. Eng. 2020, 44, 1–29. [Google Scholar] [CrossRef]
  15. Ye, Z.; Yang, J.; Wang, M.; Zong, X.; Yan, L.; Liu, W. 2D Tsallis Entropy for Image Segmentation Based on Modified Chaotic Bat Algorithm. Entropy 2018, 20, 239. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Cao, X.; Li, T.; Li, H.; Xia, S.; Ren, F.; Sun, Y.; Xu, X. A Robust Parameter-Free Thresholding Method for Image Segmentation. IEEE Access 2019, 7, 3448–3458. [Google Scholar] [CrossRef] [PubMed]
  17. Yang, W.; Cai, L.; Wu, F. Image segmentation based on gray level and local relative entropy two dimensional histogram. PLoS ONE 2020, 15, e0229651. [Google Scholar] [CrossRef]
  18. Khairuzzaman, A.K.M.; Chaudhury, S. Masi entropy based multilevel thresholding for image segmentation. Multimed. Tools Appl. 2019, 78, 33573–33591. [Google Scholar] [CrossRef]
  19. Zhao, D.; Liu, L.; Yu, F.; Heidari, A.A.; Wang, M.; Liang, G.; Muhammad, K.; Chen, H. Chaotic random spare ant colonyoptimization for multi-threshold image segmentation of 2D Kapur entropy. Knowl.-Based Syst. 2021, 216, 106510. [Google Scholar] [CrossRef]
  20. Bhandari, A.K.; Kumar, I.V.; Srinivas, K. Cuttlefish algorithm-based multilevel 3-D Otsu function for color image segmentation. IEEE Trans. Instrum. Meas. 2020, 69, 1871–1880. [Google Scholar] [CrossRef]
  21. Albuquerque, M.P.D.; Esquef, I.A.; Gesualdi Mello, A.R.; Albuquerque, M.P.D. Image thresholding using Tsallis entropy. Pattern Recognit. Lett. 2004, 25, 1059–1065. [Google Scholar] [CrossRef]
  22. Xu, Y.; Chen, R.; Li, Y.; Zhang, P.; Yang, J.; Zhao, X.; Liu, M.; Wu, D. Multispectral image segmentation based on a Fuzzy clustering algorithm combined with Tsallis entropy and a Gaussian mixture model. Remote Sens. 2019, 11, 2772. [Google Scholar] [CrossRef] [Green Version]
  23. Tsallis, C. Possible generalization of Boltzmann-Gibbs statistics. J. Stat. Phys. 1988, 52, 479–487. [Google Scholar] [CrossRef]
  24. Lin, Q.; Ou, C. Tsallis entropy and the long-range correlation in image thresholding. Signal Process. 2012, 92, 2931–2939. [Google Scholar] [CrossRef]
  25. Abdiel, R.R.; Alejandro, H.M.; Gerardo, H.C.; Ismael, D.J. Determining the entropic index q of Tsallis entropy in images through redundancy. Entropy 2016, 18, 299. [Google Scholar] [CrossRef] [Green Version]
  26. Xiao, L.; Fan, C.; Ouyang, H.; Abate, A.F.; Wan, S. Adaptive trapezoid region intercept histogram based Otsu method for brain MR image segmentation. J. Ambient. Intell. Humaniz Comput. 2021, 12, 1–16. [Google Scholar] [CrossRef]
  27. Hernández Reséndiz, J.D.; Marin Castro, H.M.; Tello Leal, E. A Comparative Study of Clustering Validation Indices and Maximum Entropy for Sintonization of Automatic Segmentation Techniques. IEEE Lat. Am. Trans. 2019, 17, 1229–1236. [Google Scholar] [CrossRef]
  28. Wu, B.; Zhu, L.; Cao, J.; Wang, J. A Hybrid Preaching Optimization Algorithm Based on Kapur Entropy for Multilevel Thresholding Color Image Segmentation. Entropy 2021, 23, 1599. [Google Scholar] [CrossRef]
  29. Mousavirad, S.J.; Zabihzadeh, D.; Oliva, D.; Perez-Cisneros, M.; Schaefer, G. A Grouping Differential Evolution Algorithm Boosted by Attraction and Repulsion Strategies for Masi Entropy-Based Multi-Level Image Segmentation. Entropy 2022, 24, 8. [Google Scholar] [CrossRef]
  30. Liu, Y.; Xie, Z.; Liu, H. An adaptive and robust edge detection method based on edge proportion statistics. IEEE Trans. Image Process. 2020, 29, 5206–5215. [Google Scholar] [CrossRef]
  31. Lin, S.; Jia, H.; Abualigah, L.; Altalhi, M. Enhanced Slime Mould Algorithm for Multilevel Thresholding Image Segmentation Using Entropy Measures. Entropy 2021, 23, 1700. [Google Scholar] [CrossRef]
  32. Song, S.; Jia, H.; Ma, J. A Chaotic Electromagnetic Field Optimization Algorithm Based on Fuzzy Entropy for Multilevel Thresholding Color Image Segmentation. Entropy 2019, 21, 398. [Google Scholar] [CrossRef] [Green Version]
  33. Mozaffari, M.H.; Seyed, H.Z. Unsupervised Data and Histogram Clustering Using Inclined Planes System Optimization Algorithm. Image Anal. Stereol. 2014, 33, 65–74. [Google Scholar] [CrossRef]
  34. Pun, T. Entropic thresholding, a new approach. Comput. Graph. Image Process. 1981, 16, 210–239. [Google Scholar] [CrossRef] [Green Version]
  35. Kapur, J.N.; Sahoo, P.K.; Wong, A.K. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vis. Graph. Image Process. 1985, 29, 273–285. [Google Scholar] [CrossRef]
  36. Abutaleb, A.S. Automatic thresholding of gray-level pictures using two-dimensional entropy. Comput. Vis. Graph. Image Process. 1989, 47, 22–32. [Google Scholar] [CrossRef]
  37. Brink, A.D. Thresholding of digital images using two-dimensional entropies. Pattern Recognit. 1992, 25, 803–808. [Google Scholar] [CrossRef]
  38. Tsallis, C.; Mendes, R.S.; Plastino, A.R. The role of constraints within generalized nonextensive statistics. Physica. A 1998, 261, 534–554. [Google Scholar] [CrossRef]
  39. Huang, Z.; Ou, C.; Lin, B.; Su, G.; Chen, J. The available force in long-duration memory complex systems and its statistical physical properties. Europhys. Lett. 2013, 103, 10011. [Google Scholar] [CrossRef]
  40. Ou, C.; Kaabouchi, A.E.; Mehaute, A.L.; Wang, Q.A.; Chen, J. Generalized measurement of uncertainty and the maximizable entropy. Mod. Phys. Lett. B 2008, 24, 825–831. [Google Scholar] [CrossRef]
  41. Yasnoff, W.A.; Mui, J.K.; Bacus, J.W. Error measures for scene segmentation. Pattern Recognit. 1977, 9, 217–231. [Google Scholar] [CrossRef]
  42. Kampke, T.; Kober, R. Nonparametric optimal binarization. In Proceedings of the Fourteenth International Conference on Pattern Recognition, Brisbane, Australia, 16–20 August 1998; Volume 1, pp. 27–29. [Google Scholar] [CrossRef]
  43. Dubuisson, M.P.; Jain, A.K. A modified Hausdorff distance for object matching. In Proceedings of the 12th International Conference on Pattern Recognition, Jerusalem, Israel, 9–13 October 1994; Volume 1, pp. 566–568. [Google Scholar] [CrossRef]
  44. Dhal, K.G.; Das, A.; Ray, S.; Jorge, G.; Sanjoy, D. Nature-inspired optimization algorithms and their application in multi-thresholding image segmentation. Arch. Comput. Methods Eng. 2019, 27, 855–888. [Google Scholar] [CrossRef]
  45. Zou, Y.; Zhang, J.; Upadhyay, M.; Sun, S.; Jiang, T. Automatic image thresholding based on Shannon entropy difference and dynamic synergic entropy. IEEE Access 2020, 8, 171218–171239. [Google Scholar] [CrossRef]
Figure 1. The distribution of the two-dimensional histogram with the threshold (s, t).
Figure 2. Normalized histogram distribution.
Figure 3. Objective functions of the Otsu and Tsallis algorithms.
Figure 4. The procedure of the new algorithm.
Figure 5. Lena.
Figure 6. Cameraman.
Figure 7. Baboon.
Figure 8. Rice.
Figure 9. Infrared image.
Figure 10. Jet.
Figure 11. Plane.
Figure 12. Hawk.
Figure 13. The average values of ME of different algorithms and the corresponding variance bars.
Figure 14. The average values of RAE of different algorithms and the corresponding variance bars.
Figure 15. The average value of MHD of different algorithms and the corresponding variance bars.
Figure 16. The average value of PSNR of different algorithms and the corresponding variance bars.
Table 1. The values of q of 50 test images.

Test  q       Test  q       Test  q
1     0.6971  18    0.4908  35    0.5385
2     0.3891  19    0.3971  36    0.5217
3     0.6992  20    0.6089  37    0.5613
4     0.4067  21    0.5240  38    0.4409
5     0.6424  22    0.4844  39    0.5170
6     0.4933  23    0.4003  40    0.5139
7     0.4472  24    0.5079  41    0.5189
8     0.4706  25    0.4895  42    0.4960
9     0.4846  26    0.4724  43    0.5670
10    0.4990  27    0.5000  44    0.5107
11    0.5218  28    0.5730  45    0.5304
12    0.5159  29    0.5602  46    0.4680
13    0.4993  30    0.4379  47    0.4757
14    0.4572  31    0.5881  48    0.5823
15    0.6161  32    0.5557  49    0.5716
16    0.5976  33    0.4936  50    0.4884
17    0.5276  34    0.5479
Table 2. The values of ME of 50 testing images segmented by 5 different algorithms.

Images    Otsu    Otsu–Kapur    Shannon 2D    Tsallis    Proposed
13.628 × 10−38.611 × 10−31.213 × 10−12.205 × 10−14.408 × 10−2
25.453 × 10−15.366 × 10−14.563 × 10−11.487 × 10−31.416 × 10−3
32.946 × 10−32.188 × 10−38.968 × 10−18.965 × 10−13.899 × 10−3
46.011 × 10−16.169 × 10−16.639 × 10−31.514 × 10−32.614 × 10−3
51.083 × 10−29.282 × 10−39.399 × 10−19.401 × 10−14.328 × 10−3
63.580 × 10−13.136 × 10−11.053 × 10−21.247 × 10−22.623 × 10−2
74.384 × 10−11.006 × 10−31.006 × 10−31.822 × 10−31.388 × 10−3
82.930 × 10−15.017 × 10−31.941 × 10−25.503 × 10−35.017 × 10−3
91.168 × 10−31.917 × 10−33.372 × 10−33.196 × 10−32.799 × 10−3
107.516 × 10−31.917 × 10−39.763 × 10−39.532 × 10−37.789 × 10−3
112.305 × 10−23.689 × 10−25.975 × 10−23.689 × 10−23.689 × 10−2
122.034 × 10−23.025 × 10−25.327 × 10−23.943 × 10−23.943 × 10−2
131.950 × 10−26.446 × 10−35.553 × 10−28.635 × 10−11.950 × 10−2
143.745 × 10−11.221 × 10−22.893 × 10−21.317 × 10−21.221 × 10−2
151.002 × 10−21.122 × 10−23.156 × 10−22.614 × 10−21.685 × 10−2
164.359 × 10−34.359 × 10−32.533 × 10−21.671 × 10−26.512 × 10−3
172.238 × 10−22.651 × 10−25.562 × 10−22.866 × 10−22.651 × 10−2
184.041 × 10−13.276 × 10−25.220 × 10−22.166 × 10−23.276 × 10−2
194.014 × 10−14.014 × 10−15.303 × 10−43.409 × 10−42.272 × 10−4
202.714 × 10−16.770 × 10−48.680 × 10−47.118 × 10−26.770 × 10−4
211.126 × 10−21.126 × 10−21.119 × 10−21.126 × 10−29.982 × 10−3
225.111 × 10−12.359 × 10−24.783 × 10−22.476 × 10−22.359 × 10−2
235.150 × 10−15.150 × 10−13.889 × 10−18.214 × 10−34.829 × 10−3
244.171 × 10−14.278 × 10−24.346 × 10−24.391 × 10−24.278 × 10−2
255.128 × 10−13.313 × 10−37.931 × 10−33.313 × 10−33.313 × 10−3
261.737 × 10−36.830 × 10−46.803 × 10−32.590 × 10−32.590 × 10−3
271.998 × 10−22.161 × 10−21.662 × 10−21.998 × 10−22.161 × 10−2
284.378 × 10−25.334 × 10−21.167 × 10−16.683 × 10−25.843 × 10−2
293.967 × 10−12.186 × 10−23.360 × 10−21.654 × 10−22.186 × 10−2
304.126 × 10−16.240 × 10−49.885 × 10−18.053 × 10−41.888 × 10−3
319.050 × 10−31.214 × 10−21.954 × 10−22.059 × 10−21.749 × 10−2
321.881 × 10−11.928 × 10−18.684 × 10−12.020 × 10−11.975 × 10−1
332.676 × 10−12.789 × 10−12.642 × 10−12.921 × 10−12.882 × 10−1
345.120 × 10−35.020 × 10−39.320 × 10−17.235 × 10−36.085 × 10−3
353.980 × 10−28.950 × 10−31.796 × 10−29.851 × 10−39.851 × 10−3
369.672 × 10−28.409 × 10−28.336 × 10−28.409 × 10−28.409 × 10−2
371.425 × 10−21.195 × 10−29.081 × 10−17.309 × 10−36.360 × 10−3
382.487 × 10−13.413 × 10−49.976 × 10−12.453 × 10−43.413 × 10−4
397.105 × 10−34.132 × 10−39.818 × 10−13.855 × 10−33.855 × 10−3
402.120 × 10−11.879 × 10−15.034 × 10−15.036 × 10−11.434 × 10−1
411.169 × 10−11.243 × 10−12.059 × 10−11.661 × 10−11.462 × 10−1
425.753 × 10−32.372 × 10−39.867 × 10−12.372 × 10−33.891 × 10−3
431.121 × 10−11.172 × 10−11.540 × 10−11.158 × 10−11.172 × 10−1
441.081 × 10−42.012 × 10−37.158 × 10−12.792 × 10−33.765 × 10−3
456.593 × 10−27.443 × 10−27.850 × 10−27.610 × 10−27.747 × 10−2
464.632 × 10−12.564 × 10−38.158 × 10−32.913 × 10−32.564 × 10−3
471.810 × 10−11.621 × 10−11.746 × 10−11.447 × 10−11.563 × 10−1
481.727 × 10−21.940 × 10−22.483 × 10−13.504 × 10−22.738 × 10−2
496.357 × 10−34.805 × 10−34.645 × 10−31.680 × 10−31.226 × 10−3
501.773 × 10−11.574 × 10−31.504 × 10−31.875 × 10−31.574 × 10−3
Table 3. The values of RAE of 50 testing images segmented by 5 different algorithms.

Images    Otsu    Otsu–Kapur    Shannon 2D    Tsallis    Proposed
11.031 × 10−22.776 × 10−24.025 × 10−17.316 × 10−11.462 × 10−1
29.964 × 10−19.963 × 10−19.957 × 10−13.640 × 10−13.428 × 10−1
31.988 × 10−39.748 × 10−49.758 × 10−19.753 × 10−13.780 × 10−3
42.897 × 10−12.653 × 10−19.690 × 10−19.694 × 10−12.158 × 10−1
56.043 × 10−16.202 × 10−16.624 × 10−31.380 × 10−32.511 × 10−3
63.668 × 10−14.187 × 10−11.045 × 10−21.459 × 10−23.068 × 10−2
79.851 × 10−12.551 × 10−22.051 × 10−21.415 × 10−18.173 × 10−2
89.622 × 10−13.009 × 10−16.280 × 10−13.210 × 10−13.009 × 10−1
93.699 × 10−28.886 × 10−21.464 × 10−11.398 × 10−11.246 × 10−1
101.569 × 10−28.886 × 10−21.352 × 10−29.160 × 10−33.171 × 10−3
112.542 × 10−24.068 × 10−26.589 × 10−24.068 × 10−24.068 × 10−2
122.290 × 10−23.416 × 10−26.015 × 10−24.452 × 10−24.452 × 10−2
138.167 × 10−22.120 × 10−14.338 × 10−19.225 × 10−12.120 × 10−1
149.742 × 10−15.519 × 10−17.448 × 10−15.705 × 10−15.519 × 10−1
151.710 × 10−23.738 × 10−21.961 × 10−11.609 × 10−19.227 × 10−2
161.156 × 10−21.156 × 10−26.296 × 10−24.434 × 10−21.727 × 10−2
171.011 × 10−11.175 × 10−12.184 × 10−11.258 × 10−11.175 × 10−1
184.710 × 10−12.131 × 10−24.327 × 10−27.205 × 10−32.131 × 10−2
199.948 × 10−19.948 × 10−12.029 × 10−11.129 × 10−16.779 × 10−2
209.675 × 10−12.416 × 10−23.314 × 10−23.136 × 10−22.416 × 10−2
214.315 × 10−14.315 × 10−14.300 × 10−14.315 × 10−14.021 × 10−1
229.552 × 10−14.787 × 10−16.642 × 10−14.938 × 10−14.787 × 10−1
239.762 × 10−19.762 × 10−19.687 × 10−13.959 × 10−12.781 × 10−1
248.857 × 10−14.430 × 10−14.469 × 10−14.494 × 10−14.430 × 10−1
259.551 × 10−11.143 × 10−12.476 × 10−11.208 × 10−11.143 × 10−1
261.790 × 10−31.692 × 10−56.459 × 10−32.008 × 10−32.008 × 10−3
271.010 × 10−21.261 × 10−26.825 × 10−31.010 × 10−21.261 × 10−2
283.615 × 10−21.110 × 10−13.536 × 10−11.782 × 10−11.395 × 10−1
294.042 × 10−12.227 × 10−23.423 × 10−21.685 × 10−22.227 × 10−2
304.145 × 10−14.988 × 10−49.944 × 10−12.199 × 10−49.230 × 10−4
311.719 × 10−21.781 × 10−29.285 × 10−28.773 × 10−26.442 × 10−2
321.937 × 10−11.988 × 10−19.007 × 10−12.088 × 10−12.038 × 10−1
332.787 × 10−12.904 × 10−12.752 × 10−13.042 × 10−13.001 × 10−1
343.808 × 10−32.893 × 10−39.977 × 10−13.324 × 10−31.986 × 10−3
351.867 × 10−14.910 × 10−26.479 × 10−22.067 × 10−22.067 × 10−2
362.157 × 10−11.399 × 10−19.805 × 10−21.399 × 10−11.399 × 10−1
371.514 × 10−21.264 × 10−29.946 × 10−12.725 × 10−35.820 × 10−3
382.491 × 10−13.419 × 10−49.995 × 10−12.457 × 10−43.419 × 10−4
397.191 × 10−34.195 × 10−39.986 × 10−12.747 × 10−32.747 × 10−3
401.141 × 10−29.791 × 10−39.979 × 10−19.987 × 10−14.589 × 10−3
411.679 × 10−11.796 × 10−13.030 × 10−12.432 × 10−12.133 × 10−1
425.796 × 10−31.452 × 10−39.996 × 10−11.452 × 10−34.138 × 10−4
431.128 × 10−11.190 × 10−11.599 × 10−11.174 × 10−11.190 × 10−1
441.108 × 10−42.063 × 10−37.342 × 10−12.864 × 10−33.862 × 10−3
455.195 × 10−26.429 × 10−27.306 × 10−26.650 × 10−26.842 × 10−2
469.915 × 10−13.928 × 10−16.730 × 10−14.237 × 10−13.928 × 10−1
472.117 × 10−11.897 × 10−12.042 × 10−11.693 × 10−11.829 × 10−1
484.399 × 10−22.698 × 10−25.420 × 10−11.175 × 10−18.164 × 10−2
496.647 × 10−34.985 × 10−34.885 × 10−31.077 × 10−33.077 × 10−4
508.989 × 10−16.968 × 10−24.994 × 10−21.045 × 10−26.968 × 10−2
Table 4. The values of MHD of 50 testing images segmented by 5 different algorithms.

Images    Otsu    Otsu–Kapur    Shannon 2D    Tsallis    Proposed
10.70291.28364.96636.77153.1038
211.715611.611010.62440.19760.1890
30.59220.502418.457718.45770.6871
414.334714.64050.86460.32460.4621
51.25171.162621.383621.36400.7552
69.230010.24771.07500.93961.4021
79.43870.12410.11570.23320.1869
87.93420.55241.49780.58380.5524
90.11660.22860.34170.35430.3202
101.04531.04821.29701.31171.0921
111.34591.88482.59761.88481.8848
121.13201.52872.30101.90391.9039
130.36940.85071.69098.82870.8507
148.63371.37372.27071.43571.3737
150.86800.95321.61641.56511.2187
160.27100.27101.02970.79900.3800
171.24001.36892.02931.46681.3689
187.97691.92612.40251.58221.9261
196.96896.96890.02190.03390.0294
204.16510.12660.14330.13490.1266
210.85450.85450.85200.85450.7773
229.82651.66022.39061.69951.6602
2311.281711.28179.73910.86200.5391
246.17401.89421.90341.91991.8942
259.19260.49141.01720.49140.4914
260.31060.19671.01520.54960.5496
271.82181.85721.67331.82181.8572
283.20173.45561.86303.49653.4838
294.64760.42810.59620.31060.4281
3010.33850.178021.69960.22720.3698
311.28041.64572.24502.47512.1817
325.63625.740218.57675.97575.8568
335.50166.12905.30756.91986.7114
340.71130.720021.24281.27861.0295
353.62821.29322.05791.63021.6302
365.91865.43925.66585.43925.4392
371.88221.691020.81211.24891.0532
3810.76410.096622.20880.07600.0966
390.63360.474721.80530.53340.5334
400.55250.250020.04220.73320.7332
414.29364.58296.54645.79885.2712
420.43110.348022.12190.34800.5231
437.37377.55728.72837.50837.5572
440.04880.447817.65880.57560.7164
454.36654.82124.90574.89454.9490
468.56510.29030.75850.32220.2903
478.38247.91257.88237.45987.7639
481.48071.80187.05692.45572.1798
490.66110.54770.85800.41130.3190
505.51220.12840.16000.22340.1284
Table 5. The values of PSNR of 50 testing images segmented by 5 different algorithms.

Images    Otsu    Otsu–Kapur    Shannon 2D    Tsallis    Proposed
124.402820.64949.16136.565813.5576
22.63352.70343.406728.276828.4887
325.306626.59940.47280.474525.0903
42.20992.097321.779028.197225.8268
519.651420.32350.26880.267923.6369
619.481819.481819.508619.481820.0075
73.580929.969929.969927.392328.5733
85.331322.995217.119822.593622.9952
929.324827.172424.720724.953925.5296
1021.239821.239820.104120.208121.0849
1116.372114.330912.236514.330914.3309
1216.916115.191812.734714.041714.0417
1321.906917.098712.55420.637117.0987
144.265019.130915.385718.804119.1309
1519.991219.497715.008415.825417.7334
1623.606223.606215.963117.768221.8623
1716.500115.765312.547115.427015.7653
183.934914.845612.823216.643014.8456
193.96383.963832.754834.673636.4345
205.663831.693630.614531.476431.6936
2121.906917.098712.55420.637117.0987
222.914916.272613.202316.061816.2726
2319.991219.497715.008415.825417.7334
2423.606223.606215.963117.768221.8623
2516.500115.765312.547115.427015.7653
2627.600231.655421.672825.866725.8667
2716.992516.651617.792216.992516.6516
2813.586412.72889.328711.750112.3330
294.014516.603114.735617.812916.6031
303.844232.04820.049930.940227.2400
3120.433219.154517.090516.861917.5713
327.25477.14740.61256.94477.0439
335.72385.54555.77925.34335.4028
3422.907322.99300.305721.405622.1574
3514.000620.481417.455420.065220.0652
3610.144410.752110.790010.752110.7521
3718.461219.22560.418321.361221.9652
386.042634.66820.010136.102434.6682
3921.484323.83830.079724.138824.1388
406.73627.26012.98062.97898.4326
419.32019.05536.86207.79548.3483
4222.400526.24820.058226.248224.0984
439.50239.30828.12339.36109.3082
4439.661426.96371.451625.539624.2415
4511.809011.282111.051011.185811.1083
463.342025.910620.883925.355525.9106
477.42307.90097.57968.39318.0586
4817.625917.12146.049714.553715.6255
4921.967323.182823.329827.746929.1127
507.511428.029728.225727.270028.0297
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
