# An Optimization Clustering Algorithm Based on Texture Feature Fusion for Color Image Segmentation


Hubei Collaborative Innovation Centre for High-efficiency Utilization of Solar Energy, Hubei University of Technology, Wuhan 430068, China

School of Electrical and Electronic Engineering, Hubei University of Technology, Wuhan 430068, China

Faculty of Technology, University of Vaasa, PL 700, 65101 Vaasa, Finland

School of Computer Science, Hubei University of Technology, Wuhan 430068, China

Author to whom correspondence should be addressed.

Academic Editor: Javier Del Ser Lorente

Received: 4 April 2015 / Revised: 4 May 2015 / Accepted: 19 May 2015 / Published: 22 May 2015

(This article belongs to the Special Issue Clustering Algorithms)

We introduce a multi-feature optimization clustering algorithm for color image segmentation. The local binary pattern, the mean of the min-max difference, and the color components are combined into a feature vector that describes both the magnitude change of the grey values and the contrastive information of neighboring pixels. In the clustering stage, the algorithm obtains the initial clustering centers, and avoids falling into local optima, by adding the mutation operator of the genetic algorithm to particle swarm optimization. Evaluated by the ratio of misclassification, the proposed method has an overall better segmentation performance than well-known methods and segments images more accurately.

Image segmentation is the process of dividing an image into regions such that items in the same class are as similar as possible while items in different classes are as dissimilar as possible. It is a key step in image analysis and pattern recognition. However, because of the variety and complexity of images, image segmentation remains a very challenging task.

The fuzzy c-means (FCM) algorithm is one of the most widely used methods for color image segmentation. However, the selection of the initial parameters of standard FCM usually influences the clustering results greatly. Moreover, it does not consider any spatial information in the image context, which makes it very sensitive to noise. To minimize these disadvantages, various modified FCM algorithms have been proposed. A comprehensive comparative analysis of kernel-based fuzzy clustering and fuzzy clustering is presented in [1,2]. Enhanced FCM clustering algorithms with spatial constraints have been introduced for noisy color image segmentation, where the Rank M-type L (RM-L) and L-estimators are used to obtain sufficient spatial information about the pixels [3]. A robust modified FCM introduces a non-local adaptive spatial constraint term into the objective function [4]. An algorithm for fuzzy segmentation of MRI data using the two fuzzifiers of interval type-2 FCM and a spatial constraint on the membership functions is presented [5].

Most of the above methods depend on separate features to complete segmentation. If inadequate features are used, even the best classifier will fail to achieve accurate segmentation, so choosing image features effectively is important. Recently, the local binary pattern (LBP) operator has received considerable attention. It has been successfully applied to computer vision tasks such as texture classification [6], face recognition [7,8,9], object detection [10,11], and fingerprint matching [12]. It is a non-parametric kernel that summarizes the local structure around a pixel and provides a unified description, including the statistical and structural characteristics of a texture patch.

Particle swarm optimization (PSO) was proposed by Kennedy and Eberhart [13]. It is inspired by the social behavior of bird flocking and fish schooling. When solving an optimization problem, PSO makes every particle fly through the solution space along a certain trajectory; under the guidance of its own or its neighbors' experience, each particle gradually flies into the potential area of the global optimum [14]. PSO is gaining more and more attention due to its fast convergence rate, simple algorithmic structure, and strong global optimization capability, and it has been applied successfully to a variety of optimization problems [15,16,17].

In this paper, we introduce an improved FCM segmentation algorithm for color images. We choose LBP, the mean of the min-max difference, and the color features as the segmentation feature vector, which gives a more accurate multi-feature description of each pixel. In the clustering stage, we use an improved PSO method to optimize the initial clustering centers: PSO ensures that the search converges quickly, and a mutation operator added to PSO lets the search jump out of local optima. Subsequently, we use the FCM algorithm to complete the color image segmentation. The experimental results show that the proposed method has a better segmentation performance.

The rest of the paper is organized as follows. Section 2 introduces LBP operator and analyzes the drawbacks of LBP operator. Section 3 describes the PSO algorithm. Section 4 presents the proposed method. Section 5 compares the proposed algorithm with some existing methods. Section 6 gives concluding remarks.

The LBP operator was first introduced by Ojala et al. [18]. It labels the pixels of an image by thresholding the grey values of a circularly symmetric neighborhood against the grey value of the center pixel and interpreting the result as a binary number. Given a pixel, the LBP code can be expressed in decimal form as in equations (1) and (2).

$$\mathrm{LBP}_{P,R}={\displaystyle \sum _{p=0}^{P-1}\mathrm{sign}({g}_{p}-{g}_{c})\cdot {2}^{p}}$$

$$\mathrm{sign}(x)=\left\{\begin{array}{ll}1, & x\ge 0\\ 0, & x<0\end{array}\right.$$

where ${g}_{c}$ corresponds to the grey value of the center pixel of a local neighborhood and ${g}_{p}$ to the grey values of the $P$ equally spaced neighboring pixels. The operator produces ${2}^{P}$ different output values, corresponding to the ${2}^{P}$ different binary patterns, which codify local primitives including different types of curved edges, spots, flat areas, etc. Each LBP code can therefore be regarded as a micro-texture. The result over the 3 × 3 neighborhood, read as a binary number, is illustrated in Figure 1.
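As an illustrative sketch (in Python rather than the paper's Matlab environment), the 3 × 3 LBP code of equations (1) and (2) can be computed as follows; the clockwise neighbour ordering is an assumption, since any fixed ordering yields a valid code.

```python
import numpy as np

def lbp_code(patch):
    """LBP over a 3x3 grey-level patch (equations (1), (2)): threshold the
    8 neighbours against the centre pixel and read the bits as a decimal."""
    center = patch[1, 1]
    # assumed clockwise ordering, starting at the top-left neighbour
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if g >= center else 0 for g in neighbours]  # sign(g_p - g_c)
    return sum(b << p for p, b in enumerate(bits))
```

A flat patch yields the all-ones code 255, while a patch whose centre exceeds every neighbour yields 0, which is exactly the ambiguity discussed next.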

We can see that the LBP code is robust against illumination changes, very fast to compute, and requires few parameters to be set. However, LBP cannot describe the characteristics of textures efficiently and completely. In Figure 2, the first structure in (a) and (b) is possibly a flat area, and the second structure is possibly an edge or a spot, yet both yield the same LBP value. LBP thus ignores contrastive information, even though contrast is an important feature of texture.

Particle swarm optimization is an evolutionary optimization technique inspired by behaviors such as fish schooling and bird flocking. The system is initialized with a population of random solutions and searches for optima by updating generations.

Suppose that the search space is D-dimensional. At the beginning of the PSO process, the position ${Z}_{i}=\left\{{z}_{i1},{z}_{i2},\dots ,{z}_{id}\right\}$ and the velocity ${V}_{i}=\left\{{v}_{i1},{v}_{i2},\dots ,{v}_{id}\right\}$ of each particle are randomly created; every particle in the swarm is a candidate solution. ${y}_{i}=\left\{{y}_{i1},{y}_{i2},\dots ,{y}_{id}\right\}$ is the best position discovered by the $i$th particle, and $\widehat{y}=\left\{{\widehat{y}}_{1},{\widehat{y}}_{2},\dots ,{\widehat{y}}_{d}\right\}$ is the global best position found by the whole swarm. In each iteration, ${Z}_{i}$ and ${V}_{i}$ are updated using equations (3) and (4).

$${v}_{ij}\left(t+1\right)=\omega \,{v}_{ij}\left(t\right)+{c}_{1}{r}_{1}\left({y}_{ij}\left(t\right)-{z}_{ij}\left(t\right)\right)+{c}_{2}{r}_{2}\left({\widehat{y}}_{j}\left(t\right)-{z}_{ij}\left(t\right)\right)$$

$${z}_{ij}\left(t+1\right)={z}_{ij}\left(t\right)+{v}_{ij}\left(t+1\right)$$

where ${c}_{1}$ and ${c}_{2}$ are the acceleration coefficients, which determine the extent of stochastic weighting of the cognitive and social components, respectively, and ${r}_{1}$, ${r}_{2}$ are two random numbers drawn independently from a uniform distribution on [0, 1]. The inertia weight $\omega$ controls the impact of the history of velocities on the current velocity; its linearly decreasing schedule is given by equation (5).

$$\omega ={\omega}_{max}-\frac{{\omega}_{max}-{\omega}_{min}}{Ite{r}_{max}}\times iter$$

$Ite{r}_{max}$ is the maximum number of iterations and $iter$ is the current iteration. ${\omega}_{max}$ and ${\omega}_{min}$ are the maximum and minimum inertia weights, respectively (here ${\omega}_{max}=0.9$ and ${\omega}_{min}=0.4$). The inertia weight thus decreases linearly as the iterations proceed.

The solution quality is measured by the fitness function: the fitness value of each particle is calculated from the objective function. The values of ${y}_{i}$ and $\widehat{y}$ are then evaluated and replaced whenever a better personal best or global best position is found. The smaller the objective function of equation (6), the better the fitness value of equation (7) (to avoid a zero denominator, $\sigma =0.001$).

$$J={\displaystyle \sum _{k=1}^{C}{\displaystyle \sum _{i=1}^{N}\Vert {x}_{i}-{z}_{k}\Vert}}$$

$$fitness=1/(J+\sigma )$$

The personal best position of each particle is defined as equation (8).

$${y}_{ij}\left(t+1\right)=\left\{\begin{array}{ll}{y}_{ij}\left(t\right), & \text{if}\ fitness\left({z}_{ij}\left(t+1\right)\right)\le fitness\left({y}_{ij}\left(t\right)\right)\\ {z}_{ij}\left(t+1\right), & \text{if}\ fitness\left({z}_{ij}\left(t+1\right)\right)>fitness\left({y}_{ij}\left(t\right)\right)\end{array}\right.$$

The global best position is defined as equation (9).

$$\widehat{y}=\left\{{\widehat{y}}_{1},{\widehat{y}}_{2},\dots ,{\widehat{y}}_{d}\right\}|fitness\left(\widehat{y}\right)=max\left\{fitness\left({y}_{1}\left(t\right)\right),fitness\left({y}_{2}\left(t\right)\right),\dots ,fitness\left({y}_{s}\left(t\right)\right)\right\}$$

Then we can use the standard procedure to find the optimum. The searching is a repeated process, and the stop criteria are that the maximum iteration number is reached or the minimum error condition is satisfied.
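The update rules above can be sketched as follows (in Python rather than the paper's Matlab). The acceleration coefficients `c1 = c2 = 2.0` are a common default, not values stated in the paper, and equation (6) is implemented literally, summing the distance of every sample to every centre.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(z, v, y, y_hat, t, iter_max, c1=2.0, c2=2.0,
             w_max=0.9, w_min=0.4):
    """One PSO update for a particle: the linearly decreasing inertia
    weight of equation (5) plus the velocity and position rules of
    equations (3) and (4)."""
    w = w_max - (w_max - w_min) / iter_max * t
    r1, r2 = rng.random(z.shape), rng.random(z.shape)
    v_new = w * v + c1 * r1 * (y - z) + c2 * r2 * (y_hat - z)
    return z + v_new, v_new

def fitness(centers, data, sigma=1e-3):
    """Equations (6) and (7): J sums the distance of every sample to
    every centre, and fitness = 1 / (J + sigma)."""
    J = sum(np.linalg.norm(data - z, axis=1).sum() for z in centers)
    return 1.0 / (J + sigma)
```

When a particle already sits at both its personal and global best, the cognitive and social terms vanish and only the inertia term moves it, which is how the decreasing $\omega$ gradually damps the search.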

An RGB color image is commonly represented as an $H\times W\times l$ array in which every pixel ${f}_{l}\left(i,j\right)$ is a vector of integer values in the interval [0, 255] (for a color image, $l=3$). The LBP code of ${f}_{l}\left(i,j\right)$ is denoted $LBP\left(i,j\right)$. We combine the LBP codes of the color components as in equation (10).

$$LBP(i,j)={\displaystyle \sum _{p=0}^{P-1}\mathrm{sign}\left({\displaystyle \sum _{l=1}^{3}({g}_{lp}-{g}_{lc})}\right)\cdot {2}^{p}}$$

To account for contrastive information, we use the mean of the min-max difference to discriminate the ambiguous instances of LBP in Figure 2. $S$ denotes the mean of the min-max difference in equation (11). In a flat area, the mean of the min-max difference is nearly 0; for edges or spots, it is larger. Therefore, it can reflect the change in contrastive information. In equation (11), the factor $\sqrt{3}$ limits the values to the interval [0, 255].

$$S=\frac{1}{\sqrt{3}\,P}{\displaystyle \sum _{p=0}^{P-1}\left\Vert {g}_{lp}-{g}_{lc}\right\Vert}$$

$\left\Vert {g}_{lp}-{g}_{lc}\right\Vert$ denotes the Euclidean distance between the color vectors.

We use LBP, the mean of the min-max difference, and the color components as segmentation features and merge them into the feature vector $\left({f}_{1}\left(i,j\right),{f}_{2}\left(i,j\right),{f}_{3}\left(i,j\right),LBP,S\right)$.
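As a hedged sketch (Python instead of the paper's Matlab), the five-dimensional feature vector of a pixel could be assembled from its 3 × 3 neighborhood with equations (10) and (11); the clockwise neighbour ordering is an assumption.

```python
import numpy as np

def pixel_features(img, i, j):
    """Five-dimensional feature vector (f1, f2, f3, LBP, S) for pixel
    (i, j) of an H x W x 3 image, from its 3x3 neighbourhood."""
    patch = img[i-1:i+2, j-1:j+2, :].astype(float)
    center = patch[1, 1, :]
    # assumed clockwise neighbour offsets, starting at the top-left
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    diffs = [patch[a, b, :] - center for a, b in offs]
    # equation (10): threshold the channel-summed difference per neighbour
    lbp = sum((1 if d.sum() >= 0 else 0) << p for p, d in enumerate(diffs))
    # equation (11): mean Euclidean distance, scaled by sqrt(3) into [0, 255]
    s = sum(np.linalg.norm(d) for d in diffs) / (np.sqrt(3) * len(diffs))
    return np.array([*center, lbp, s])
```

For a perfectly flat neighbourhood this yields the color of the pixel, the all-ones LBP code 255, and S = 0, consistent with the flat-area discussion above.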

PSO takes real numbers as particles. It is theoretically simple and can be implemented in a few lines of code, but it easily gets stuck in local optima. An effective way to overcome the premature convergence of basic PSO is to maintain the population diversity so that new search domains are explored during the evolution process. We propose an improved optimization algorithm that adds the mutation operator of the genetic algorithm (GA) to PSO. The hybrid algorithm combines the standard velocity and position update rules of PSO with the mutation operator of GA, letting the search jump out of local optima.

To improve the FCM algorithm, we utilize the texture information of each pixel to define a spatial constraint term, and then introduce this term into the objective function of FCM. The modified objective function can be expressed as equation (12).

$$J=\gamma {\displaystyle \sum _{k=1}^{K}{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{{\mu}_{kij}}^{m}\Vert {f}_{l}(i,j)-{z}_{lk}\Vert}}}+\alpha {\displaystyle \sum _{k=1}^{K}{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{{\mu}_{kij}}^{m}\left|LBP(i,j)-{z}_{4k}\right|}}}+\beta {\displaystyle \sum _{k=1}^{K}{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{{\mu}_{kij}}^{m}\left|S(i,j)-{z}_{5k}\right|}}}$$

To add spatial information, we use the above feature vector to complete the clustering. ${\mu}_{kij}$ is a membership matrix in which each value represents the membership grade of ${f}_{l}(i,j)$ in the $k$th class, as given by equation (13). ${z}_{lk}$ is the clustering center of the $l$th color component, ${z}_{4k}$ is the clustering center of the LBP code, and ${z}_{5k}$ is the clustering center of the mean of the min-max difference, updated by equation (14); ${z}_{4k}$ and ${z}_{5k}$ represent the texture features of the area. $m$ is a weighting exponent that determines the amount of fuzziness of the resulting classification (in this paper, $m=2$), and $\alpha ,\beta ,\gamma$ are weights determined manually in a trial-and-error fashion (in this paper, $\alpha =0.1$, $\beta =0.1$, $\gamma =0.8$). Through the merged feature vector, the distance between pixels is measured more accurately.

$${\mu}_{kij}=\frac{1}{{\displaystyle \sum _{{k}_{1}=1}^{C}{\left(\frac{\alpha \left|LBP(i,j)-{z}_{4k}\right|+\beta \left|S(i,j)-{z}_{5k}\right|+\gamma \left\Vert {f}_{l}(i,j)-{z}_{lk}\right\Vert}{\alpha \left|LBP(i,j)-{z}_{4{k}_{1}}\right|+\beta \left|S(i,j)-{z}_{5{k}_{1}}\right|+\gamma \left\Vert {f}_{l}(i,j)-{z}_{l{k}_{1}}\right\Vert}\right)}^{2/(m-1)}}}$$

$$\left\{\begin{array}{l}{z}_{lk}=\frac{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}{f}_{l}(i,j)}}}{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}}}}\\ {z}_{4k}=\frac{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}LBP(i,j)}}}{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}}}}\\ {z}_{5k}=\frac{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}S(i,j)}}}{{\displaystyle \sum _{i=1}^{H}{\displaystyle \sum _{j=1}^{W}{({\mu}_{kij})}^{m}}}}\end{array}\right.$$
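One pass of this modified FCM can be sketched as follows (Python, not the paper's Matlab), with the pixels flattened to an N × 5 feature array; the way the weighted colour/LBP/S terms are mixed into a single distance is our reading of equation (13), not code from the paper.

```python
import numpy as np

def fcm_update(features, centers, m=2.0, weights=(0.8, 0.1, 0.1)):
    """One FCM pass over N x 5 feature vectors (f1, f2, f3, LBP, S),
    following equations (13) and (14). weights = (gamma, alpha, beta)."""
    gamma, alpha, beta = weights
    N, C = len(features), len(centers)
    d = np.empty((N, C))
    for k, z in enumerate(centers):
        # weighted distance: colour (Euclidean) + LBP + S terms
        d[:, k] = (gamma * np.linalg.norm(features[:, :3] - z[:3], axis=1)
                   + alpha * np.abs(features[:, 3] - z[3])
                   + beta * np.abs(features[:, 4] - z[4]))
    d = np.maximum(d, 1e-12)  # guard against zero distance
    # equation (13): inverse-distance memberships, rows sum to 1
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1)), axis=2)
    # equation (14): membership-weighted means give the new centres
    um = u ** m
    new_centers = (um.T @ features) / um.sum(axis=0)[:, None]
    return u, new_centers
```

Iterating `fcm_update` until the centres stop moving reproduces Steps 10 and 11 of the algorithm below.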

The proposed algorithm is described as follows.

Step 1: Set the iteration counter $iter$ to zero and $Ite{r}_{max}=5$. Generate an initial swarm of particles in the search space; the population size is set to $N$ (in this paper, $N=20$). The position ${z}_{ij}$ and the velocity ${v}_{ij}$ of each particle are randomly created. Set ${\omega}_{max}=0.9$, ${\omega}_{min}=0.4$, and the mutation probability ${p}_{m}=0.05$.

Step 2: Compute $\omega$ using equation (5) and evaluate the fitness of each particle using equation (7).

Step 3: Compare the personal best of each particle to its current fitness and set the personal best of each particle to the better performance.

Step 4: Set the global best to the position of the particle with the best fitness within the swarm.

Step 5: Change the velocity vector for each particle according to equation (3).

Step 6: Move each particle to its new position, according to equation (4).

Step 7: Draw a random number $p$ uniformly from [0, 1]; when $p<{p}_{m}$, apply the mutation operator to the particle's position.

Step 8: Let $iter=iter+1$ and go to Step 2; repeat until the stop criterion $iter>Ite{r}_{max}$ is met.

Step 9: Take the positions ${z}_{ij}$ as the initial cluster centers and initialize the membership matrix ${\mu}_{kij}$ with random values in [0, 1].

Step 10: Update the membership matrix ${\mu}_{kij}$ using equation (13).

Step 11: Calculate the cluster centers according to equation (14). If the change between the old and new centers is below a threshold ($\Vert {\overrightarrow{v}}_{new}-{\overrightarrow{v}}_{old}\Vert <\epsilon$; in this paper, $\epsilon =10$), the algorithm stops; otherwise, return to Step 10.
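The paper does not specify the form of the mutation operator borrowed from GA. As one hedged possibility, Step 7 could re-randomize a single coordinate of the particle's position within the search bounds (here assumed to be the feature range [0, 255]):

```python
import numpy as np

rng = np.random.default_rng(1)

def mutate(position, p_m=0.05, low=0.0, high=255.0):
    """GA-style mutation (Step 7, one possible form): with probability
    p_m, re-randomize one coordinate of the position inside the bounds."""
    if rng.random() < p_m:
        j = rng.integers(position.size)   # pick one coordinate to perturb
        position = position.copy()
        position[j] = rng.uniform(low, high)
    return position
```

Because the perturbed coordinate is drawn fresh from the whole range, a mutated particle can escape the basin of a local optimum that the velocity update alone would keep it in.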

In the experiments, we test several 256 × 256 color images obtained from WebGIS services (for example, Google Maps/Earth) to assess the performance of the proposed algorithm, using Matlab 7.0 on a desktop PC with a 2.0 GHz CPU and 1.0 GB RAM. The results of kernel-based FCM with prototypes in feature space (KFCM_F), FCM with spatial constraints (FCM_S), and the proposed method are compared.

Figure 3 (a) is the original “map1” image. In KFCM_F and FCM_S, the initial number of classes is set to 7. Figure 3 (b), (c), (d) show the results of “map1” image segmentation. KFCM_F and FCM_S divide the image into only 3 classes; they use a separate color feature and cannot segment accurately. The proposed method divides the image into 7 classes. Figure 3 (b1), (b2) are the “road” and “lake” of KFCM_F, and Figure 3 (c1), (c2) are the “road” and “lake” of FCM_S. In KFCM_F and FCM_S the “lake” and “terrain” are treated as the same class and cannot be segmented exactly. Figure 3 (d1), (d2), (d3) are the “road”, “terrain” and “lake” of the proposed method, which segments the “road”, “terrain” and “lake” accurately.

Figure 4 (a) is the original “map2” image. Figure 4 (b1), (b2) are the “road” and “lake” of KFCM_F. Figure 4 (c1), (c2), (c3) are the results of FCM_S. Figure 4 (d1), (d2), (d3) are the “road”, “terrain” and “lake” of the proposed method. Each method divides the image into five classes. Since KFCM_F and FCM_S easily fall into local optima, we can see from Figure 4 (b2) that the “lake” and “terrain” of KFCM_F are segmented into the same class. The “road” segmented by KFCM_F and FCM_S contains little irrelevant information, but other shapes are mixed and cannot be segmented accurately. The proposed method avoids falling into local optima and obtains an overall better segmentation result.

To get more accurate results, we can apply morphological post-processing. Given that $f(i,j)$ is the segmentation result of the proposed method, we apply the erosion operator twice and then the dilation operator twice (a morphological opening). $E$ represents the erosion operator and $D$ represents the dilation operator.

$$F(i,j)={D}^{2}\left({E}^{2}\left(f(i,j)\right)\right)$$
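A minimal pure-NumPy sketch of this post-processing on a binary class mask (Python rather than Matlab; the 3 × 3 structuring element is an assumption, since the paper does not state one):

```python
import numpy as np

def _erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighbourhood is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=0)
    out = np.ones((h, w), dtype=bool)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out &= p[di:di + h, dj:dj + w].astype(bool)
    return out

def _dilate(mask):
    """3x3 dilation: a pixel is set if any of its neighbours is set."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=0)
    out = np.zeros((h, w), dtype=bool)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out |= p[di:di + h, dj:dj + w].astype(bool)
    return out

def post_process(mask):
    """Equation (15): two erosions then two dilations (an opening)
    strip small misclassified specks from a binary segmentation mask."""
    out = mask.astype(bool)
    for _ in range(2):
        out = _erode(out)
    for _ in range(2):
        out = _dilate(out)
    return out
```

An isolated misclassified pixel disappears under the opening, while a sufficiently large region keeps its original extent.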

Figure 5 shows the post-processing results of Figure 3 (d) and Figure 4 (d). Figure 5 (a1) is the post-processing “road” of Figure 3 (d1). Figure 5 (a2) is the post-processing “lake” of Figure 3 (d3). Figure 5 (a3) is the post-processing “terrain” of Figure 3 (d2). Figure (b1) is the post-processing “road” of Figure 4 (d1). Figure 5 (b2) is the post-processing “lake” of Figure 4 (d3). Figure 5 (b3) is the post-processing “terrain” of Figure 4 (d2).

To compare the proposed method with existing methods, we use the ratio of misclassification to evaluate the different algorithms. We define the ratio of misclassification as equation (16).

$$Error=\frac{\sum \left|s-v\right|}{\sum s}\times 100\%$$

$s$ is the binary image of the manual segmentation and $v$ is the binary image of the actual segmentation produced by an algorithm. The sum of the absolute differences between $s$ and $v$ is the number of misclassified pixels, and $\sum s$ is the total number of pixels in the manual segmentation. Figure 6 shows the manual segmentation result.
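Equation (16) reduces to a few lines; a sketch, assuming $s$ and $v$ are 0/1 arrays of the same shape:

```python
import numpy as np

def misclassification_ratio(s, v):
    """Equation (16): misclassified pixels as a percentage of the
    manual segmentation. s and v are binary (0/1) arrays."""
    s = np.asarray(s, dtype=int)
    v = np.asarray(v, dtype=int)
    return np.abs(s - v).sum() / s.sum() * 100.0
```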

According to equation (16), the ratios of misclassification are shown in Table 1. For “road”, KFCM_F and FCM_S achieve their lowest misclassification ratios, yet both are still higher than that of the proposed method, which at 10.24% can segment the “road” essentially accurately. For “terrain” segmentation, the result of KFCM_F is the worst, with a misclassification ratio of 32.05%; FCM_S does better by adding spatial information; and the proposed method is the best, with an accuracy of about 95%. For “lake” segmentation, FCM_S and KFCM_F make many more errors, while the proposed method segments the “lake” very accurately.

Table 1. The ratios of misclassification for the “map1” image.

| Region | KFCM_F | FCM_S | The proposed |
|---|---|---|---|
| “lake” | 35.70% | 31.00% | 1.10% |
| “terrain” | 32.05% | 19.56% | 5.05% |
| “road” | 20.33% | 15.81% | 10.24% |

In addition, we test target extraction on real images from the database [19] to assess the performance of the proposed algorithm in Matlab 7.0. Figure 7 (a) is the original “horse” image, and Figures 7 (b), (c), (d) are the segmentation results of KFCM_F, FCM_S, and the proposed method, respectively. Figure 8 (a) is the original “plane” image, and Figures 8 (b), (c), (d) are the corresponding segmentation results of KFCM_F, FCM_S, and the proposed method. Comparing these results, KFCM_F cannot segment the objects accurately, as many unrelated pixels are assigned to the object areas, while FCM_S shows a certain randomness. The segmentation results of the proposed method are the most accurate.

In this paper, we propose a multi-feature optimization clustering algorithm for color image segmentation. We combine pixel and texture features into a feature vector to effectively improve segmentation performance. The algorithm utilizes PSO with the mutation operator of GA to obtain the initial clustering centers, which ensures that the search converges faster and can jump out of local optima. Experimental results show that the proposed algorithm achieves better segmentation accuracy than other classic clustering algorithms.

This work is supported by the Science Foundation of Hubei Collaborative Innovation Centre for High-efficiency Utilization of Solar Energy under Grant No. HBSKFMS2014018, and the Science Foundation of Hubei University of Technology in China under Grant No. BSQD13028 and BSQD12118.

Gaihua Wang proposed the algorithm and prepared the manuscript. Yang Liu was in charge of the overall research and critical revision of the paper. Caiquan Xiong assisted in the work.

The authors declare no conflict of interest.

- Graves, D.; Pedrycz, W. Kernel-based fuzzy clustering and fuzzy clustering: A comparative experimental study. Fuzzy Sets Syst.
**2010**, 161, 522–543. [Google Scholar] [CrossRef] - Kannan, S.R.; Ramathilagam, S.; Devi, R.; Sathy, A. Robust kernel FCM in segmentation of breast medical images. Expert Syst. Appl.
**2011**, 38, 4382–4389. [Google Scholar] [CrossRef] - Mújica-Vargas, D.; Gallegos-Funes, F.J.; Rosales-Silva, A.J. A fuzzy clustering algorithm with spatial robust estimation constraint for noisy color image segmentation. Pattern Recognit. Lett.
**2013**, 34, 400–413. [Google Scholar] [CrossRef] - Zhao, F.; Jiao, L.C.; Liu, H.Q.; Gao, X.B. A novel fuzzy clustering algorithm with non-local adaptive spatial constraint for image segmentation. Signal Process.
**2011**, 91, 988–999. [Google Scholar] [CrossRef] - Qiu, C.; Xiao, J.; Yu, L.; Han, L.; Iqbal, M.N. A modified interval type-2 fuzzy C-means algorithm with application in MR image segmentation. Pattern Recognit. Lett.
**2013**, 34, 1329–1338. [Google Scholar] [CrossRef] - Costa, Y.M.G.; Oliveira, L.S.; Koerich, A.L.; Gouyon, F.; Martins, J.G. Music genre classification using LBP textural features. Signal Process.
**2012**, 92, 2723–2737. [Google Scholar] [CrossRef] - Shan, C.; Gong, S.; McOwan, P.W. Facial expression recognition based on Local Binary Patterns:A comprehensive study. Image Vis. Comput.
**2009**, 27, 803–816. [Google Scholar] [CrossRef] - Moore, S.; Bowden, R. Local binary patterns for multi-view facial expression recognition. Comput. Vis. Image Underst.
**2011**, 115, 541–558. [Google Scholar] [CrossRef] - Luo, Y.; Wu, C.; Zhang, Y. Facial expression recognition based on fusion feature of PCA and LBP with SVM. Int. J. Light Electron Optics.
**2013**, 124, 2767–2770. [Google Scholar] [CrossRef] - Liu, Y.; Chen, M.; Ishikawa, H.; Wollstein, G.; Schuman, J.S. Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding. Med. Image Anal.
**2011**, 15, 748–759. [Google Scholar] [CrossRef] [PubMed] - Heikkilä, M.; Pietikäinen, M.; Schmid, C. Description of interest regions with local binary patterns. Pattern Recognit.
**2009**, 42, 425–436. [Google Scholar] [CrossRef] - Nanni, L.; Lumini, A. Local binary patterns for a hybrid fingerprint matcher. Pattern Recognit.
**2008**, 41, 3461–3466. [Google Scholar] [CrossRef] - Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, WA, USA, 1995; pp. 1942–1948.
- Jie, J.; Zeng, J.; Han, C.; Wang, Q. Knowledge-based cooperative particle swarm optimization. Appl. Math. Comput.
**2008**, 205, 861–873. [Google Scholar] [CrossRef] - Tsai, C.; Kao, I. Particle swarm optimization with selective particle regeneration for data clustering. Expert Syst. Appl.
**2011**, 38, 6565–6576. [Google Scholar] [CrossRef] - Bedi, P.; Bansal, R.; Sehgal, P. Using PSO in a spatial domain based image hiding scheme with distortion tolerance. Comput. Elect. Engin.
**2013**, 39, 640–654. [Google Scholar] [CrossRef] - Vellasques, E.; Sabourin, R.; Granger, E. Fast intelligent watermarking of heterogeneous image streams through mixture modeling of PSO populations. Appl. Soft Comput.
**2013**, 13, 3130–3148. [Google Scholar] [CrossRef] - Ojala, T.; Pietikainen, M.; Harwood, D. A comparative study of texture measure with classification based on feature distribution. Pattern Recognit.
**1996**, 29, 51–59. [Google Scholar] [CrossRef] - The Berkeley Segmentation Dataset and Benchmark. Available online: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/ (accessed on 4 May 2015).

© 2015 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/4.0/).