Article

Quantum-Inspired Differential Evolution with Grey Wolf Optimizer for 0-1 Knapsack Problem

College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
* Author to whom correspondence should be addressed.
Submission received: 1 March 2021 / Revised: 3 May 2021 / Accepted: 5 May 2021 / Published: 28 May 2021
(This article belongs to the Special Issue Evolutionary Computation 2020)

Abstract: The knapsack problem is one of the most widely researched NP-complete combinatorial optimization problems and has numerous practical applications. This paper proposes a quantum-inspired differential evolution algorithm with grey wolf optimizer (QDGWO) to enhance diversity and convergence performance for 0-1 knapsack problems, particularly in high-dimensional cases. The proposed algorithm adopts quantum computing principles such as quantum superposition states and quantum gates. It combines the adaptive mutation operations of differential evolution, the crossover operations of differential evolution, and quantum observation to generate new solutions as trial individuals. Selection operations then determine the better solutions between the stored individuals and the trial individuals created by mutation and crossover. When the trial individuals are worse than the current individuals, the adaptive grey wolf optimizer and the quantum rotation gate preserve the diversity of the population and accelerate the search for the global optimal solution. The experimental results for 0-1 knapsack problems confirm the effectiveness and global search capability of the QDGWO, especially in high-dimensional situations.

1. Introduction

The 0-1 knapsack problem (KP01) is a classical combinatorial optimization problem. It has many practical applications, such as project selection, investment decisions, and complexity theory [1,2]. Two classes of approaches have been proposed to solve the KP01 [3]. The first class comprises exact methods based on mathematical programming and operational research. It is possible to obtain exact solutions of small-scale KP01 problems with exact methods such as the branch and bound algorithm [4] and dynamic programming [5]. However, KP01 problems in various complex situations are NP-hard, and it is impractical to obtain optimal solutions with deterministic optimization methods for large-scale problems. The second class contains approximate methods based on metaheuristic algorithms [6]. Metaheuristic algorithms have been shown to be effective approaches for solving complex engineering problems in a reasonable time when compared with exact methods [7]. Therefore, the application of metaheuristic algorithms has drawn a great deal of attention in the field of optimization.
In recent years, many metaheuristic algorithms, such as the genetic algorithm (GA) [8], ant colony optimization (ACO) [9], particle swarm optimization (PSO) [10], artificial bee colony (ABC) [11], cuckoo search (CS) [12], the firefly algorithm (FA) [13], and improved approaches based on these algorithms [14,15,16,17,18,19], have been applied to KP01 problems and achieved outstanding results. However, these metaheuristic algorithms require not only a large amount of memory for storing the population of solutions, but also a long computational time for finding the optimal solutions. In the last few years, novel metaheuristic algorithms have been proposed. Wang et al. [20,21] improved the swarm intelligence optimization approach inspired by the herding behavior of krill and came up with the krill herd (KH) algorithm to solve combinatorial optimization problems. Faramarzi et al. [22] presented a metaheuristic called the marine predators algorithm (MPA) with applications in engineering design problems. The MPA follows the rules that naturally govern the optimal foraging strategy and encounter rate policy between predator and prey in marine ecosystems. Inspired by the phototaxis and Lévy flights of moths, Wang et al. [23] developed a new metaheuristic algorithm called the moth search (MS) algorithm. MS was applied to solve discounted 0-1 knapsack problems [24] and set-union knapsack problems [25]. Gao et al. [26] presented a novel selection mechanism augmenting the generic DE algorithm (NSODE) to achieve better optimization results for solving fuzzy job-shop scheduling problems. Abualigah et al. [27] proposed the arithmetic optimization algorithm (AOA) based on the distribution behavior of the main arithmetic operators in mathematics: multiplication, division, subtraction, and addition. Wang et al. [28] proposed a new nature-inspired metaheuristic algorithm called monarch butterfly optimization (MBO) by simulating the migration of monarch butterflies. MBO was applied to solve classic KP01 [29], discounted KP01 [30], and large-scale KP01 [31,32] problems with superior searching accuracy, convergence capability, and stability. In most metaheuristic algorithms, it is difficult to use the information from individuals of previous iterations in the updating process. Wang et al. [33] presented a method for reusing the information available from previous individuals and feeding it back into the updating process to guide later searches.
The emergence of quantum computing [34,35] was derived from principles of quantum theory such as quantum superposition, quantum entanglement, quantum interference, and quantum collapse. Quantum computing brings new ideas to optimization due to its underlying concepts, along with the ability to process huge numbers of quantum states simultaneously in parallel. The merging of metaheuristic optimization and quantum computing has recently become an area of growing theoretical and practical interest, aiming to derive benefits from quantum computing capabilities to enhance the convergence and speed of metaheuristic algorithms. Several scholars have investigated the effect of introducing quantum computing into metaheuristic algorithms to maintain a balance between exploration and exploitation. Han and Kim proposed a genetic quantum algorithm (GQA) [36] and a quantum-inspired evolutionary algorithm (QEA) [37] by merging classical evolutionary algorithms with quantum computing concepts such as the quantum bit and the quantum rotation gate. Talbi et al. [38] proposed a new algorithm inspired by genetic algorithms and quantum computing for solving the traveling salesman problem (TSP). Chang et al. [39] proposed a quantum-inspired electromagnetism-like mechanism (QEM) to solve the KP01. Xiong et al. [40] presented an analysis of quantum rotation gates in quantum-inspired evolutionary algorithms. To avoid premature convergence, a mutation operation [41], a crossover operation [42], and a new termination criterion and rotation gate [43] were applied to the QEA to improve it.
The differential evolution (DE) algorithm [44], proposed by Storn and Price, was derived from differential vectors of solutions for global optimization. Several simple operations including mutation, crossover, and selection were used in the DE algorithm to explore the search space. Subsequently, several algorithms combining the DE algorithm with quantum computing were designed to increase global search ability. Hota and Pat [45] extended the concept of differential operators with adaptive parameter control to the quantum paradigm and proposed the adaptive quantum-inspired differential evolution (AQDE) algorithm. In addition, quantum interference operation [46] and mutation operation [47] were brought into a quantum-inspired DE algorithm.
Several metaheuristic algorithms combining the QEA and DE were proven to be effective and efficient for solving the KP01. Wang et al. [48] proposed a quantum swarm evolutionary (QSE) algorithm that updated quantum angles automatically with improved PSO. Layeb [49] presented a quantum inspired harmony search algorithm (QIHSA) based on a harmony search algorithm (HSA) and quantum computing. Zouache et al. [50] proposed a merged algorithm called quantum-inspired differential evolution with particle swarm optimization (QDEPSO) to solve the KP01. Gao et al. [51] proposed a quantum-inspired wolf pack algorithm (QWPA) with quantum rotation and quantum collapse to improve the performance of the wolf pack algorithm for the KP01.
The grey wolf optimizer (GWO) proposed by Mirjalili et al. [52] mimics the specific behavior of grey wolves based on leadership hierarchy in nature. Srikanth et al. [53] presented a quantum-inspired binary grey wolf optimizer (QIBGWO) to solve the problem of unit commitment scheduling.
Population diversity is crucial in evolutionary algorithms to enable global exploration and to avoid poor performance due to premature convergence [54]. However, it is hard for classical algorithms to enhance diversity and convergence performance because their population quickly converges to a specific region of the solution space. In addition, these algorithms require a large amount of memory as well as long computational time to find the optimal solution for high-dimensional situations. To avoid these difficulties, we propose a new algorithm, called quantum-inspired differential evolution algorithm with grey wolf optimizer (QDGWO). The proposed algorithm combines the features of the QEA, DE, and GWO to solve the 0-1 knapsack problem. To preserve diversity throughout the evolution, the new algorithm adopts the concepts of quantum representation and the integration of the quantum operators such as quantum measurement and quantum rotation. The adaptive operations of the DE (mutation, crossover, selection) and GWO can increase the adaptation and diversification in updating individuals. The experimental results demonstrate the competitive performance of the proposed algorithm.
The rest of the paper is organized as follows: Section 2 defines the 0-1 knapsack problem. The proposed QDGWO algorithm is presented in Section 3. The experimental results and discussion are summarized in Section 4. Conclusions and directions for future work are discussed in Section 5.

2. Related Work

2.1. Knapsack Problem

The 0-1 knapsack problem is a well-known combinatorial optimization problem that has been studied in areas such as project selection, resource distribution, and network interdiction. The KP01 was demonstrated to be NP-complete [55,56]. It can be described as follows:
Given a set W of m items, W = (x1, x2, x3, …, xm), wi is the weight of item xi, and pi is the profit of xi. C is the weight capacity of the knapsack. The objective is to find the subset Xoptimal of the m items that maximizes the total profit while keeping the total weight of the selected items from exceeding C.
The 0-1 knapsack problem can be defined as:
$$\text{Maximize } f(x) = \sum_{i=1}^{m} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{m} w_i x_i \le C, \quad x_i \in \{0, 1\}, \; i \in \{1, 2, \ldots, m\} \qquad (1)$$
where xi can take either the value 1 (as selected) or the value 0 (as not selected, also called rejected).
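For illustration, the objective and constraint of Equation (1) can be evaluated in a few lines. The following minimal Python sketch is illustrative only (the experiments in Section 4 were run in Matlab), and the weights, profits, and capacity are made-up values:

```python
def fitness(x, p):
    """Total profit of the selected items (objective of Equation (1))."""
    return sum(pi * xi for pi, xi in zip(p, x))

def feasible(x, w, C):
    """Capacity constraint: total weight of selected items must not exceed C."""
    return sum(wi * xi for wi, xi in zip(w, x)) <= C

# Illustrative data (not from the paper)
w = [1, 2, 3, 4]   # weights
p = [6, 7, 8, 9]   # profits
C = 5.0            # knapsack capacity
x = [1, 0, 1, 0]   # select items 1 and 3
print(fitness(x, p), feasible(x, w, C))  # 14 True
```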

2.2. Grey Wolf Optimizer (GWO)

The GWO algorithm [52] is inspired by the leadership hierarchy and hunting mechanism of grey wolves. To model the social order of grey wolves in the GWO, the best solution is considered the alpha (α) wolf, and the second and third best solutions are beta (β) and delta (δ) wolves, respectively. The rest of the feasible solutions are considered as omega (ω) wolves. In the GWO, α, β, and δ wolves lead the hunting, and the ω wolves go after these leading wolves when searching for the global optimal solution (target) as the prey, as shown in Figure 1.
Grey wolves have the ability to recognize the location of prey and encircle them during the hunt. In order to simulate the hunting behavior of grey wolves mathematically, it is supposed that the α, β, and δ wolves have better knowledge about the potential location of prey. Therefore, the GWO saves the first three best solutions arrived at so far and forces the other ω wolves to update their positions according to the position of the best search agents.
During optimization, the GWO algorithm allows its search agents to update their position based on the location of the alpha, beta, and delta wolves with the distance vector between itself and the three best wolves when attacking the prey. Finally, the position and fitness of the alpha wolf are regarded as the global optimal solution in searching for the optimization when a termination criterion is satisfied.
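For readers unfamiliar with the GWO, the following Python sketch illustrates the canonical continuous position update from [52] that this description summarizes; the wolf positions and the schedule of the coefficient a are illustrative, and the QDGWO itself replaces this update with the adaptive variant of Section 3.8:

```python
import numpy as np

def gwo_update(X, X_alpha, X_beta, X_delta, a):
    """Move one omega wolf toward the alpha, beta, and delta wolves."""
    candidates = []
    for leader in (X_alpha, X_beta, X_delta):
        A = 2 * a * np.random.rand(*X.shape) - a  # exploration/exploitation coefficient
        C = 2 * np.random.rand(*X.shape)          # prey-emphasis coefficient
        D = np.abs(C * leader - X)                # distance to this leader
        candidates.append(leader - A * D)         # position suggested by this leader
    return np.mean(candidates, axis=0)            # average of the three suggestions

# The coefficient a decreases linearly from 2 to 0 over the iterations.
t, t_max = 10, 100
a = 2 * (1 - t / t_max)
X_new = gwo_update(np.random.rand(5), np.random.rand(5),
                   np.random.rand(5), np.random.rand(5), a)
```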

3. Quantum-Inspired Differential Evolution with Adaptive Grey Wolf Optimizer

The proposed QDGWO algorithm is designed for 0-1 knapsack problems. First, the algorithm adopts quantum computing principles such as quantum representation and the quantum measurement operation. Quantum representation allows the superposition of all potential states to be represented in one quantum individual. Second, the adaptive mutation operations of the DE, the crossover operations of the DE, and quantum observation are combined to generate new solutions as trial individuals in the solution space. Finally, the selection operator chooses the better solutions between the stored individuals and the trial individuals generated by the mutation and crossover operations of the DE. In the event that the trial individuals are worse than the current individuals, the QDGWO integrates the adaptive GWO and the quantum rotation gate to preserve the diversity of the population of solutions as well as accelerate the search for the global optimum. The framework of the QDGWO algorithm is shown in Figure 2.

3.1. Binary Representation

The choice of the representation of individuals, also known as individual coding, is a crucial issue in evolutionary algorithms. The proposed QDGWO algorithm adopts binary coding, which is the most natural way to indicate the selection or rejection of items. Each individual X is represented as a binary vector X = (x1, x2, …, xm), where m is the number of items.
$$x_i = \begin{cases} 1, & \text{if item } x_i \text{ is selected} \\ 0, & \text{if item } x_i \text{ is rejected} \end{cases} \qquad (2)$$
The following example shows the binary representation for item selection: x1 and x3 from the item set W are selected: W = {x1, x2, x3, x4} → X = (1 0 1 0).
The binary population $P(t) = (X_1^t, X_2^t, \ldots, X_n^t)$ is made up of the binary individuals at the $t$th generation, where $n$ is the size of the population.

3.2. Quantum Representation

The representation of the AQDE [45] is used in the proposed algorithm: each quantum individual q corresponds to a phase vector qθ, a string of phase angles θi (1 ≤ i ≤ m), given by
$$q_\theta = [\theta_1, \theta_2, \ldots, \theta_m], \quad \theta_i \in [0, 2\pi] \qquad (3)$$
where m is the length of the quantum bit (qubit) individual.
Each quantum individual q is a string of qubits:
$$q = \begin{bmatrix} \cos(\theta_1) & \cos(\theta_2) & \cdots & \cos(\theta_m) \\ \sin(\theta_1) & \sin(\theta_2) & \cdots & \sin(\theta_m) \end{bmatrix} \qquad (4)$$
The probability amplitudes of a quantum bit are expressed as a pair of numbers $(\cos(\theta_i), \sin(\theta_i))$. $|\sin(\theta_i)|^2$ represents the probability of selecting item $x_i$, and $|\cos(\theta_i)|^2$ represents the probability of rejecting item $x_i$.
The quantum population $Q(t) = (q_1^t, q_2^t, \ldots, q_n^t)$ is made up of the quantum individuals at the $t$th generation, where $n$ is the size of the population.

3.3. Initialization

The general principle of superposition of quantum mechanics assumes that the original state must be considered as the result of a superposition of two or more other states in an infinite number of ways [57]. Therefore, the initial quantum individual is regarded as a superposition of all possible states. For 0-1 knapsack problems, each state represents a combination of selecting or rejecting items, and the initial quantum individual is required to generate every possible combination. For this, each vector qθi is initialized by:
$$q_{\theta_i}^{t=0} = \left( \frac{\pi}{4} r_{i1}, \ldots, \frac{\pi}{4} r_{im} \right) \qquad (5)$$
where $r_{ij}$ is an odd integer generated randomly from the set $\{1, 3, 5, 7\}$, so that $\theta_{ij} \in \{\frac{\pi}{4}, \frac{3\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4}\}$. For initial individuals, this means $|\cos(\theta_{ij})|^2 = |\sin(\theta_{ij})|^2 = 1/2$, so the probabilities of selecting and rejecting item $x_i$ are equal.
The initial quantum individual $q_i^{t=0}$, which corresponds to $q_{\theta_i}^{t=0}$, is given by:
$$q_i^{t=0} = \begin{bmatrix} \cos\theta_{i1}^0 & \cos\theta_{i2}^0 & \cdots & \cos\theta_{im}^0 \\ \sin\theta_{i1}^0 & \sin\theta_{i2}^0 & \cdots & \sin\theta_{im}^0 \end{bmatrix} \qquad (6)$$
where m is the length of the qubit quantum individual.
$Q(0) = (q_1^0, q_2^0, \ldots, q_n^0)$ is the initial quantum population, where $n$ is the size of the population.
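A minimal sketch of this initialization for a population of phase vectors (the population and item sizes are illustrative):

```python
import numpy as np

def init_quantum_population(n, m):
    """Each angle is pi/4 times a random odd integer in {1, 3, 5, 7} (Equation (5))."""
    r = np.random.choice([1, 3, 5, 7], size=(n, m))
    return r * np.pi / 4

theta = init_quantum_population(20, 10)
# For every qubit, |cos(theta)|^2 = |sin(theta)|^2 = 1/2:
# selecting and rejecting each item are initially equally likely.
assert np.allclose(np.cos(theta) ** 2, 0.5)
```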

3.4. Quantum Observation and Fitness Evaluation

Based on the quantum superposition principle, a quantum state is a superposition of all possible stationary states, and it collapses to a stationary state upon quantum observation. In the proposed QDGWO algorithm, the quantum superposition states are represented by quantum individuals, and the stationary states are represented by binary individuals. Before evaluating the fitness of individuals, quantum observation and reparation of the quantum individuals q are required to obtain binary individuals X, as shown in Algorithm 1.
After quantum observation, the fitness of binary individual X is evaluated as:
$$f(X_i^t) = \sum_{i=1}^{m} p_i x_i^t \qquad (7)$$
Algorithm 1 Quantum Observation and Reparation
Input: quantum individual q
Output: binary individual X
X ← 0 // Initialize all bits of individual X to 0.
wtotal ← 0 // Initialize the total weight of the individual to 0.
while (wtotal ≤ C) do
   i ← rand_i[1, m] // Generate a random integer i ∈ {1, 2, …, m}.
   if (xi = 0) then
      r ← rand(0, 1)
      if (r > |cos(θi)|²) then
         xi ← 1
         wtotal ← wtotal + wi // Select item xi and add its weight wi to the total weight wtotal.
      end if
   end if
end while
xi ← 0
wtotal ← wtotal − wi // When the loop ends, the total weight wtotal has exceeded the capacity C, so the item xi selected last is removed for reparation.

3.5. Adaptive Mutation Operation with Dynamic Iteration Factor

The mutation operation is one of the main operations in differential evolution. In the QDGWO, DE/best/2, proposed by Price and Storn [44], is used to select parent vectors. In this strategy, the mutation vector $q_{\theta_i}^{Mt}$ is generated from the vector $q_{\theta_\alpha}$ corresponding to the current best binary individual $X_\alpha$ and the difference between two randomly selected, distinct target vectors $q_{\theta_{r1}}$ and $q_{\theta_{r2}}$. This difference is weighted by the differentiation control factor F.
The quantum mutation vector $q_{\theta_i}^{Mt}$ at the $t$th generation is generated by:
$$q_{\theta_i}^{Mt} = q_{\theta_\alpha} + F^t \left( q_{\theta_{r1}} - q_{\theta_{r2}} \right) \qquad (8)$$
where $r_1, r_2 \in \{1, 2, \ldots, n\}$; $r_1 \ne i$; $r_2 \ne i$; and $r_1 \ne r_2$.
To improve the performance of differential operations in different phases, we propose an adaptive strategy to determine the differentiation control factor F with the iteration of evolution:
$$F^t = F_0 + F_1 \cdot 2\omega \cdot \mathrm{rand}(0, 1) \qquad (9)$$
$$\omega = e^{1 - \frac{t_{\max}}{t_{\max} - t}} \qquad (10)$$
where F0 is the initial differentiation control factor, F1 is the adaptive differentiation control factor, and tmax is the maximum number of iterations of the algorithm.
With this adaptive strategy, in the early stage of the iteration the small t produces a larger Ft, which helps maintain good diversity of individuals for global searching. Ft then decreases as the iteration proceeds; in the late stages, Ft is close to F0, which aids the local search for the global optimal solution.
The quantum mutation individual $q_i^{Mt}$ corresponding to the quantum mutant vector $q_{\theta_i}^{Mt}$ is obtained by:
$$q_i^{Mt} = \begin{bmatrix} \cos\theta_{i1}^{Mt} & \cos\theta_{i2}^{Mt} & \cdots & \cos\theta_{im}^{Mt} \\ \sin\theta_{i1}^{Mt} & \sin\theta_{i2}^{Mt} & \cdots & \sin\theta_{im}^{Mt} \end{bmatrix} \qquad (11)$$
where m is the length of the qubit quantum individual.
The quantum mutation population $Q^M(t) = (q_1^{Mt}, q_2^{Mt}, \ldots, q_n^{Mt})$ is made up of the quantum mutation individuals at the $t$th generation, where $n$ is the size of the population.
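A sketch of this adaptive mutation on a population of phase vectors (Equations (8)–(10)); drawing r1 and r2 from the population indices is an assumption consistent with DE practice:

```python
import numpy as np

def mutate(theta_pop, best_idx, F0, F1, t, t_max):
    """DE-style mutation of phase vectors with the adaptive factor F^t."""
    n, m = theta_pop.shape
    omega = np.exp(1 - t_max / (t_max - t))   # decreases from 1 toward 0 (Equation (10))
    mutants = np.empty_like(theta_pop)
    for i in range(n):
        F = F0 + F1 * 2 * omega * np.random.rand()          # Equation (9)
        r1, r2 = np.random.choice([k for k in range(n) if k != i], 2, replace=False)
        mutants[i] = theta_pop[best_idx] + F * (theta_pop[r1] - theta_pop[r2])  # Equation (8)
    return mutants
```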

3.6. Crossover Operation

The crossover operation is another main operation in differential evolution. The trial vector $q_{\theta_i}^{Ct}$ is generated by crossover between the mutant vector $q_{\theta_i}^{Mt}$ and the target vector $q_{\theta_i}^{t}$ with a binomial crossover strategy [44].
The quantum trial vector $q_{\theta_i}^{Ct}$ at the $t$th generation is generated by:
$$q_{\theta_{ij}}^{Ct} = \begin{cases} q_{\theta_{ij}}^{Mt}, & \text{if } \mathrm{rand}_j(0, 1) \le CR^t \text{ or } j = rnbr\_i \\ q_{\theta_{ij}}^{t}, & \text{if } \mathrm{rand}_j(0, 1) > CR^t \text{ and } j \ne rnbr\_i \end{cases} \qquad (12)$$
where $CR^t \in [0, 1]$ is the crossover probability, randomly generated at the $t$th iteration. In addition, $rnbr\_i \in \{1, \ldots, m\}$ is a random integer that ensures $q_{\theta_i}^{Ct}$ inherits at least one component from $q_{\theta_i}^{Mt}$.
The quantum trial individual $q_i^{Ct}$ corresponding to the quantum trial vector $q_{\theta_i}^{Ct}$ is obtained by:
$$q_i^{Ct} = \begin{bmatrix} \cos\theta_{i1}^{Ct} & \cos\theta_{i2}^{Ct} & \cdots & \cos\theta_{im}^{Ct} \\ \sin\theta_{i1}^{Ct} & \sin\theta_{i2}^{Ct} & \cdots & \sin\theta_{im}^{Ct} \end{bmatrix} \qquad (13)$$
where m is the length of the qubit quantum individual.
The quantum trial population $Q^C(t) = (q_1^{Ct}, q_2^{Ct}, \ldots, q_n^{Ct})$ is made up of the quantum trial individuals at the $t$th generation, where $n$ is the size of the population.
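A sketch of the binomial crossover of Equation (12); treating N(0.5, 0.0375) from Table 2 as a mean and standard deviation is an assumption:

```python
import numpy as np

def crossover(theta, theta_mut, CR):
    """Binomial crossover between target and mutant phase vectors (Equation (12))."""
    m = len(theta)
    mask = np.random.rand(m) <= CR
    mask[np.random.randint(m)] = True      # rnbr_i: guarantee at least one mutant component
    return np.where(mask, theta_mut, theta)

CR = np.random.normal(0.5, 0.0375)         # crossover probability drawn each iteration
theta_trial = crossover(np.random.rand(10), np.random.rand(10), CR)
```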

3.7. Selection Operation

After the crossover operation, the trial quantum individuals are transformed into binary individuals by observation and reparation, as discussed in Section 3.4. The population of trial quantum individuals $Q^C(t)$ is thus transformed into a population of trial binary individuals $P^C(t) = \{X_1^{Ct}, X_2^{Ct}, \ldots, X_n^{Ct}\}$.
The selection operation chooses the individual of the next iteration $X_i^{t+1}$ between the current individual $X_i^t$ and the trial binary individual $X_i^{Ct}$, as follows:
$$X_i^{t+1} = \begin{cases} X_i^{Ct}, & \text{if } f(X_i^{Ct}) > f(X_i^t) \\ X_i^{t}, & \text{if } f(X_i^{Ct}) \le f(X_i^t) \end{cases} \qquad (14)$$
The quantum individuals of the next iteration are generated by:
$$q_{\theta_i}^{t+1} = \begin{cases} q_{\theta_i}^{Ct}, & \text{if } f(X_i^{Ct}) > f(X_i^t) \\ \text{update } q_{\theta_i}^{t} \text{ by } R_{GWO}, & \text{if } f(X_i^{Ct}) \le f(X_i^t) \end{cases} \qquad (15)$$
where $R_{GWO}$ is a quantum rotation gate (QRG) with an adaptive GWO, presented in Section 3.8.
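The selection logic of Equations (14) and (15) reduces to a few lines; a sketch, where the returned flag signals that the QRG with adaptive GWO of Section 3.8 must update the phase vector:

```python
def select(X, X_trial, theta, theta_trial, fitness):
    """Keep the trial solution only if it is strictly better (Equations (14) and (15))."""
    if fitness(X_trial) > fitness(X):
        return X_trial, theta_trial, False   # trial wins; no rotation needed
    return X, theta, True                    # current wins; update theta by R_GWO
```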

3.8. Quantum Rotation Gate with Adaptive GWO

The quantum rotation gate $U(\theta_i)$ is used to update the values of the qubits in a quantum individual as follows [36]:
$$U(\theta_i) = \begin{bmatrix} \cos(\theta_i) & -\sin(\theta_i) \\ \sin(\theta_i) & \cos(\theta_i) \end{bmatrix} \qquad (16)$$
The quantum individuals of the next iteration after quantum rotation are presented as:
$$\begin{bmatrix} \alpha_i' \\ \beta_i' \end{bmatrix} = \begin{bmatrix} \cos(\theta_i) & -\sin(\theta_i) \\ \sin(\theta_i) & \cos(\theta_i) \end{bmatrix} \begin{bmatrix} \alpha_i \\ \beta_i \end{bmatrix} \qquad (17)$$
where $\theta_i = s(\alpha_i, \beta_i) \cdot \Delta\theta_i$ is the rotation angle of the QRG, and $s(\alpha_i, \beta_i)$ is the direction sign of the rotation angle.
The polar plot of the QRG for qubits is illustrated in Figure 3, and the quantum rotation angle parameters used in [36] are shown in Table 1.
Generally speaking, the core concept of the quantum rotation gate is to drive the probability amplitudes of each qubit in a quantum individual to converge toward the corresponding bit of the current best solution in the population. In practice, the lookup table implements this convergence strategy: the fitness f(x) of the binary solution x obtained after quantum observation is compared with the fitness f(b) of the current best solution b, and the quantum rotation gate updates the probability amplitude $(\alpha_i, \beta_i)$ in the direction that favors the emergence of the better of x and b. For example, if $x_i = 0$, $b_i = 1$, and $f(x) > f(b)$, then x is the better solution. This means that the state $|0\rangle$ is the preferred state for the $i$th qubit of a quantum individual, and the QRG updates $(\alpha_i, \beta_i)$ to increase the probability of the state $|0\rangle$, so that the probability amplitude evolves in the direction benefiting the appearance of $x_i = 0$. If $(\alpha_i, \beta_i)$ is located in the first quadrant, as shown in Figure 3, the quantum rotation is in the clockwise direction, which favors $x_i = 0$.
Normally, quantum rotation can bring the quantum chromosomes closer to the current optimal chromosomes and generate the exploitation near the current optimal solution to find better solutions. As with other metaheuristic algorithms, individuals of the population converge more closely to the best solution after quantum rotation. The QRG makes the population evolve continuously and speeds up the convergence of the algorithm.
The magnitude of $\Delta\theta_i$ determines the granularity of the search and should be chosen appropriately. Too small a rotation angle slows convergence and may even lead to a stagnant state; too large an angle may make the solutions diverge or converge prematurely to a local optimum [37].
The traditional QRG requires predefined rotation angles, and the value and direction of $\theta_i$ must be designed for the specific application problem. Because the values of the quantum rotation angles depend on the problem, verifying the angle selection becomes important, although tedious, work when traditional QRGs are used for optimization problems. If the quantum rotation angles can instead be obtained adaptively, without relying on predefined data, the efficiency of the QRG improves greatly and the range of its applications widens. This is exactly what metaheuristic algorithms are good at. Among the large number of metaheuristics, the GWO is a novel swarm optimization algorithm motivated by the social behavior of grey wolves. Because the GWO has the advantages of simple principles, fast searching speed, high seeking accuracy, and easy implementation, it is readily applied to practical engineering problems [58]. Therefore, the GWO has high theoretical and application value and is well suited to generating quantum rotation angles.
In addition, the traditional QRG motivates the probability amplitudes of each qubit in quantum individuals to converge to the corresponding bits of the current best solution in the population. If the current best solution is not the global optimal solution, the direction of quantum rotation may be far from the global optimal solution, and the algorithm may be trapped in local optimal stagnation. Since the GWO records multiple best individuals, it is useful for the QRG to jump out of local optimum with the GWO.
In the proposed QDGWO algorithm, the rotation angle of the QRG is determined with an adaptive GWO. This is described as follows: In the original GWO algorithm, the positions of other ω wolves are updated by the distance vector between themselves and the three best wolves, while for the α, β, and δ wolves, they can hunt the prey more freely. The hunting zone for α, β, and δ wolves will become smaller during the iteration. In the adaptive GWO, these features of the GWO will be inherited and developed.
For a quantum rotation gate with an adaptive GWO ($R_{GWO}$), $\Delta\theta_{ij}^t$ is calculated using the positions of the α, β, and δ wolves as follows:
$$\Delta\theta_{ij}^t = \theta \left\{ \gamma_i^\alpha \left( X_{\alpha j}^t - X_{ij}^t \right) + \gamma_i^\beta \left( X_{\beta j}^t - X_{ij}^t \right) + \gamma_i^\delta \left( X_{\delta j}^t - X_{ij}^t \right) \right\} \qquad (18)$$
where $i \in \{1, 2, \ldots, n\}$; $j \in \{1, 2, \ldots, m\}$; $n$ is the size of the population; $m$ is the number of items; $\theta$ is the rotation angle magnitude; $X_{ij}^t$ is the $j$th component of the binary individual of the $i$th wolf at the $t$th iteration; and $\gamma_i^\alpha$, $\gamma_i^\beta$, and $\gamma_i^\delta$ are determined by comparing the fitness of the current binary individual with that of the α, β, and δ wolves as follows:
$$\gamma_i^\alpha = \begin{cases} \dfrac{f(X_\alpha^t)}{f(X_i^t)}, & \text{if } f(X_i^t) < f(X_\alpha^t) \\ N(0, 1) \times \dfrac{t_{\max}}{k (t_{\max} + t)}, & \text{otherwise} \end{cases} \qquad (19)$$
$$\gamma_i^\beta = \begin{cases} \dfrac{f(X_\beta^t)}{f(X_i^t)}, & \text{if } f(X_i^t) < f(X_\beta^t) \\ N(0, 1) \times \dfrac{t_{\max}}{k (t_{\max} + t)}, & \text{otherwise} \end{cases} \qquad (20)$$
$$\gamma_i^\delta = \begin{cases} \dfrac{f(X_\delta^t)}{f(X_i^t)}, & \text{if } f(X_i^t) < f(X_\delta^t) \\ N(0, 1) \times \dfrac{t_{\max}}{k (t_{\max} + t)}, & \text{otherwise} \end{cases} \qquad (21)$$
where $N(0, 1)$ denotes a Gaussian random number with $\mu = 0$ and $\sigma = 1$.
The ω wolves hunt the prey based on their own positions relative to the positions of the three best wolves. For the $i$th wolf in the ω group, $f(X_i^t) < f(X_\delta^t) \le f(X_\beta^t) \le f(X_\alpha^t)$, so its rotation angle can be calculated from the binary individuals of the α, β, δ, and $i$th wolves.
The α, β, and δ wolves have the duty of searching for the optimal solution from their old positions. As $t$ increases, $\gamma_i^\alpha$, $\gamma_i^\beta$, and $\gamma_i^\delta$ become much smaller in the late stages of the iteration than in the early stages, which helps the ω wolves converge toward an estimated position of the prey calculated by the α, β, and δ wolves. This strategy helps the search jump out of local optimal stagnation, especially in the final stage of the iterations.
The speed of convergence and the quality of the solution are greatly affected by the rotation angle magnitude $\theta$, which is given by:
$$\theta = \theta_{\min} + \left( 1 - \frac{t}{t_{\max}} \right) \left( \theta_{\max} - \theta_{\min} \right), \quad 0 < \theta_{\min} < \theta_{\max} \qquad (22)$$
where $\theta$ decreases linearly from $\theta_{\max}$ to $\theta_{\min}$ during the iteration.
In the end, $q_{\theta_i}^{t+1}$ is generated by:
$$\theta_{ij}^{t+1} = \theta_{ij}^{t} + s_{ij}^{t} \Delta\theta_{ij}^{t} \qquad (23)$$
where $s_{ij}^t$ is the direction sign of the rotation angle, given by:
$$s_{ij}^t = \begin{cases} +1, & \text{if } \theta_{ij}^t \in \left( 0, \frac{\pi}{2} \right) \cup \left( \pi, \frac{3\pi}{2} \right) \\ -1, & \text{if } \theta_{ij}^t \in \left( \frac{\pi}{2}, \pi \right) \cup \left( \frac{3\pi}{2}, 2\pi \right) \\ \pm 1, & \text{otherwise} \end{cases} \qquad (24)$$
With the adaptive strategy of γ and θ , the searching granularity of the QDGWO changes from coarse to fine. A different searching granularity in the iteration will facilitate the process for search agents to reach the global optimal solution.
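Putting Equations (18)–(24) together, the rotation update for one individual can be sketched as follows; the binary individuals are assumed to be NumPy arrays, and the handling of boundary angles (where s = ±1 is chosen at random) is omitted for brevity:

```python
import numpy as np

def rotation_update(theta_i, X, X_a, X_b, X_d, f_x, f_a, f_b, f_d,
                    t, t_max, th_min=0.01 * np.pi, th_max=0.03 * np.pi, k=10):
    """Rotate one phase vector toward the alpha, beta, and delta wolves."""
    th = th_min + (1 - t / t_max) * (th_max - th_min)        # Equation (22)

    def gamma(f_leader):                                     # Equations (19)-(21)
        if f_x < f_leader:
            return f_leader / f_x
        return np.random.randn() * t_max / (k * (t_max + t))

    d_theta = th * (gamma(f_a) * (X_a - X)
                    + gamma(f_b) * (X_b - X)
                    + gamma(f_d) * (X_d - X))                # Equation (18)
    angle = np.mod(theta_i, 2 * np.pi)
    s = np.where((angle < np.pi / 2) | ((angle > np.pi) & (angle < 3 * np.pi / 2)),
                 1.0, -1.0)                                  # Equation (24)
    return theta_i + s * d_theta                             # Equation (23)
```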

3.9. Procedure of QDGWO Algorithm

Based on the description above, the procedure of the QDGWO and its main steps can be summarized as shown in Algorithm 2.
Algorithm 2 QDGWO
t ← 0 // Initialize the iteration counter
Initialize Q(0) by Equations (5) and (6)
while (t < MaxIter) do
   Observe X(t) from Q(t) // Quantum observation
   Evaluate the fitness of X(t) by Equation (7)
   Generate QM(t) by mutation with Equations (8)–(11) // Adaptive mutation
   Generate QC(t) by crossover with Equations (12) and (13) // Crossover
   Observe XC(t) from QC(t) // Quantum observation
   Evaluate the fitness of XC(t)
   if the trial binary individuals XC(t) are better than X(t) then
      Update X(t + 1) and Q(t + 1) by Equations (14) and (15) // Selection
   else
      Update Q(t + 1) using the QRG with adaptive GWO by Equations (16)–(24)
   end if
   t ← t + 1
end while

3.10. Example of QDGWO Algorithm to Solve KP01

In the following, a KP01 problem is described as an example to be solved by the QDGWO algorithm.
Given a set of ten items, W = (x1, x2, x3, …, x10), wi is the weight of the ith item xi; pi is the profit of xi; and C is the weight capacity of the knapsack. Suppose that $w_i = i$, $p_i = w_i + 5$, and $C = \frac{1}{2} \sum_{i=1}^{m} w_i = 27.5$.
Then, this 0-1 knapsack problem can be defined as:
$$\text{Maximize } f(x) = \sum_{i=1}^{10} p_i x_i \quad \text{s.t.} \quad \sum_{i=1}^{10} w_i x_i \le 27.5, \quad x_i \in \{0, 1\}, \; i \in \{1, 2, \ldots, 10\} \qquad (25)$$
where wi = i, and pi = wi + 5 = i + 5.
In this example, the population size was set to 20, and the maximum number of iterations was set to 200. The other parameters are presented in Table 2. The evolution process of q 1 , an individual of the quantum population, is shown below.
The initial quantum population $Q(0) = (q_1^0, q_2^0, \ldots, q_{20}^0)$ was initialized by Equations (5) and (6). The vector $q_{\theta_1}$ was initialized as $q_{\theta_1}^0 = (\frac{3\pi}{4}, \frac{\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4}, \frac{5\pi}{4}, \frac{7\pi}{4}, \frac{3\pi}{4}, \frac{\pi}{4}, \frac{5\pi}{4}, \frac{\pi}{4})$, and the corresponding quantum individual was
$$q_1^0 = \frac{\sqrt{2}}{2} \begin{bmatrix} -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 & 1 \\ 1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & -1 & 1 \end{bmatrix}$$
After quantum observation, the binary individual $X_1^0$ was generated as $(1, 0, 1, 1, 0, 0, 1, 0, 0, 1)$, and its fitness was evaluated as $f(X_1^0) = \sum_{i=1}^{10} p_i x_i^0 = 50$.
The mutation vector $q_{\theta_1}^{M0}$ was generated by Equations (8)–(11), where $t = 0$; $\omega = e^{1 - t_{\max}/(t_{\max} - t)} = 1$; and $F^0 = F_0 + F_1 \cdot 2\omega \cdot \mathrm{rand}(0, 1) = 0.02 + 0.03 \times 2 \times 0.68 = 0.0608$ (rand(0,1) was randomly drawn as 0.68).
The vector $q_{\theta_\alpha}$ corresponding to the current best binary individual $X_\alpha$ and the two different randomly selected target vectors $q_{\theta_{r1}}$ and $q_{\theta_{r2}}$ were:
$$q_{\theta_\alpha} = \left( \tfrac{\pi}{4}, \tfrac{3\pi}{4}, \tfrac{\pi}{4}, \tfrac{5\pi}{4}, \tfrac{7\pi}{4}, \tfrac{3\pi}{4}, \tfrac{\pi}{4}, \tfrac{\pi}{4}, \tfrac{3\pi}{4}, \tfrac{3\pi}{4} \right)$$
$$q_{\theta_{r1}} = \left( \tfrac{3\pi}{4}, \tfrac{7\pi}{4}, \tfrac{7\pi}{4}, \tfrac{5\pi}{4}, \tfrac{\pi}{4}, \tfrac{3\pi}{4}, \tfrac{5\pi}{4}, \tfrac{5\pi}{4}, \tfrac{\pi}{4}, \tfrac{7\pi}{4} \right)$$
$$q_{\theta_{r2}} = \left( \tfrac{7\pi}{4}, \tfrac{7\pi}{4}, \tfrac{3\pi}{4}, \tfrac{5\pi}{4}, \tfrac{\pi}{4}, \tfrac{5\pi}{4}, \tfrac{\pi}{4}, \tfrac{\pi}{4}, \tfrac{5\pi}{4}, \tfrac{3\pi}{4} \right)$$
The quantum mutation vector $q_{\theta_1}^{M0}$ was generated by:
$$q_{\theta_1}^{M0} = q_{\theta_\alpha} + F^0 \left( q_{\theta_{r1}} - q_{\theta_{r2}} \right) = (0.1892\pi, 0.75\pi, 0.3108\pi, 1.25\pi, 1.75\pi, 0.7196\pi, 0.3108\pi, 0.3108\pi, 0.6892\pi, 0.8108\pi)$$
The quantum trial vector $q_{\theta_1}^{C0}$ was generated by Equations (12) and (13):
$$q_{\theta_1}^{C0} = (0.75\pi, 0.75\pi, 0.3108\pi, 1.75\pi, 1.25\pi, 1.75\pi, 0.3108\pi, 0.25\pi, 0.6892\pi, 0.25\pi)$$
where CR was randomly generated as 0.35, and rnbr_i = 3.
After quantum observation, the trial binary individual $X_1^{C0}$ was generated as $(1, 0, 1, 1, 0, 0, 1, 0, 1, 0)$, and its fitness was evaluated as $f(X_1^{C0}) = \sum_{i=1}^{10} p_i x_i^{C0} = 49$. Because $f(X_1^{C0}) < f(X_1^0)$, $q_{\theta_1}^{1}$ had to be updated by the QRG with an adaptive GWO.
The positions and profits of the current three best wolves were:
$$X_\alpha^0 = (1, 1, 1, 0, 1, 1, 0, 1, 0, 0), \quad f(X_\alpha^0) = \sum_{i=1}^{10} p_i x_i^0 = 55;$$
$$X_\beta^0 = (1, 1, 1, 1, 1, 0, 0, 0, 1, 0), \quad f(X_\beta^0) = \sum_{i=1}^{10} p_i x_i^0 = 54;$$
$$X_\delta^0 = (0, 1, 1, 0, 0, 1, 1, 0, 1, 0), \quad f(X_\delta^0) = \sum_{i=1}^{10} p_i x_i^0 = 52.$$
Additionally, $\theta = \theta_{\min} + (1 - \frac{t}{t_{\max}})(\theta_{\max} - \theta_{\min}) = \theta_{\max} = 0.03\pi$; $\gamma_1^\alpha = f(X_\alpha^0)/f(X_1^0) = 1.1$; $\gamma_1^\beta = f(X_\beta^0)/f(X_1^0) = 1.08$; $\gamma_1^\delta = f(X_\delta^0)/f(X_1^0) = 1.04$.
Then, the quantum vector of the next iteration, $q_{\theta_1}^{1}$, was generated by Equations (16)–(24):
$$\Delta q_{\theta_1}^0 = [\Delta\theta_{11}^0, \Delta\theta_{12}^0, \ldots, \Delta\theta_{1,10}^0] = [0.0312\pi, 0.0966\pi, 0, 0.0642\pi, 0.0654\pi, 0.0642\pi, 0.0654\pi, 0.033\pi, 0.0636\pi, 0.0966\pi]$$
$$q_{\theta_1}^0 = \left( \tfrac{3\pi}{4}, \tfrac{\pi}{4}, \tfrac{5\pi}{4}, \tfrac{7\pi}{4}, \tfrac{5\pi}{4}, \tfrac{7\pi}{4}, \tfrac{3\pi}{4}, \tfrac{\pi}{4}, \tfrac{5\pi}{4}, \tfrac{\pi}{4} \right)$$
$$q_{\theta_1}^1 = q_{\theta_1}^0 + s_1^0 \Delta q_{\theta_1}^0 = (0.7812\pi, 0.3466\pi, 1.25\pi, 1.8142\pi, 1.3154\pi, 1.6858\pi, 0.8154\pi, 0.283\pi, 1.3136\pi, 0.1534\pi)$$
The individuals of the next iteration continued to evolve until the maximum number of iterations was reached. After the iterations, the best profit of this KP01 problem was 57, and one of the best solutions was (0,1,1,1,1,1,1,0,0,0).

4. Experimental Results

To assess the performance of the proposed QDGWO algorithm, two groups of datasets are used for solving the KP01.
All experiments were conducted with Matlab 2016b, running on an Intel Core i7-4790 CPU @ 3.60 GHz, and Windows 7 Ultimate Edition.
In the first experiment, described in [37], data sets with 50, 250, 500, 1000, 1500, 2000, 2500, and 3000 items were generated by Equation (26) to test the performance of the QDGWO in high-dimensional situations.
Given a set of m items, W = (x1, x2, x3, …, xm).
$$w_i = \mathrm{rand}\_i[1, 10], \quad p_i = w_i + 5, \quad C = \frac{1}{2} \sum_{i=1}^{m} w_i \qquad (26)$$
where wi is the weight of the ith item xi; pi is the value of xi; C is the weight capacity of the knapsack; and m is the number of items.
In the first experiment, m ranged from 50 to 3000, and the maximum number of iterations in all cases was set to 1000.
To verify the effectiveness and efficiency of the QDGWO, the results of the proposed algorithm were compared with three algorithms: QEA [37], AQDE [45], and QSE [48]. The parameters of algorithms used in the experiments are presented in Table 2, where the population size is 20. The best profits, the average profits, the worst profits, and the standard deviations of 30 independent runs are shown in Table 3 and Figure 4, Figure 5, Figure 6 and Figure 7. The Wilcoxon signed-rank test [59] is performed for the results of the competing algorithms in Table 3 with a significance level α = 0.05, where +, −, and = indicate that this algorithm is superior, inferior, or equal to the QDGWO, respectively.
To illustrate the importance of the role of the crossover operation of the DE in exploring the global optimum, comparative tests between the QDGWO with and without the crossover operation were performed. Moreover, we compared the binomial crossover operator of the DE with the exponential crossover operator of the DE. The best profits, the average profits, the worst profits, and the standard deviations of 30 independent runs are presented in Table 4. The Wilcoxon signed-rank test [59] is performed for the results in Table 4 with a significance level α = 0.05, where +, −, and = indicate that this strategy is superior, inferior, or equal to the QDGWO with a binomial crossover of the DE, respectively.
In the second experiment, described in [49], data sets with 50, 200, 500, 1000, 1500, and 2000 items were generated by Equation (27) to test the performance of the QDGWO in high-dimensional situations.
Given a set of m items, W = (x1, x2, x3, …, xm).
$$w_i = \mathrm{rand}\_i[1, 10], \quad p_i = w_i + 5, \quad C = \frac{3}{4} \sum_{i=1}^{m} w_i \qquad (27)$$
where wi is the weight of the ith item xi; pi is the value of xi; C is the weight capacity of the knapsack; and m is the number of items.
In the second experiment, m ranged from 50 to 2000, and the maximum number of iterations in all cases was set to 1000.
To present the performance of the proposed algorithm in the global optimization, we compared the QDGWO algorithm with the QIHSA [49] for knapsack problems. The optimization results of the success rate (SR%) and the best profit are shown in Table 5. The Wilcoxon signed-rank test [59] is performed for the results of the QIHSA in Table 5 with a significance level α = 0.05, where +, −, and = indicate that this algorithm is superior, inferior, or equal to the QDGWO, respectively.
The obtained results demonstrate the competitive performance of the proposed QDGWO algorithm. According to the results, the proposed algorithm is more efficient for high-dimensional 0-1 knapsack problems, as shown in Table 3 and Figures 4–7. Compared with the QEA [37], AQDE [45], QSE [48], and QIHSA [49], the QDGWO was the most effective and efficient algorithm in the experiments. The advantages of the QDGWO became more obvious when the number of items was large, especially in high-dimensional cases of the knapsack problems.
The proposed algorithm achieves both rapid exploration and high exploitation in searching for solutions. The QDGWO converges quickly toward the global optimal solution; for example, the algorithm approaches the global optimum at about the 500th iteration in the case of 500 items (see Figure 5). The algorithm nevertheless continues searching near the global optimal solution, i.e., exploitation: in the same case (see Figure 5), the QDGWO keeps seeking further improvement after approaching the optimal solution and obtains better solutions until the end of the iterations.
Based on the results shown in Table 4, it can be concluded that the crossover operation plays a significant role in searching the solution space efficiently. However, the performance of the QDGWO is not very sensitive to which kind of crossover operator is used. In the experiments, the binomial crossover operator of the DE yields slightly better solutions than the exponential crossover operator in all cases. These results suggest that quantum updating with the quantum rotation gate remains the most decisive operation in exploring the search space, even though the crossover operation is required to improve the solutions.
Finally, compared with the other four methods, the experimental results show the advantages of the collaborative optimization with operations of adaptive mutation, crossover, and quantum rotation gate with the adaptive GWO in investigating the search space.

5. Conclusions

A quantum-inspired differential evolution algorithm with grey wolf optimizer (QDGWO) was proposed to solve 0-1 knapsack problems. The proposed algorithm combines the superposition principles of quantum computing, differential evolution operations, and the hunting behaviors of grey wolves. The QDGWO uses quantum computing principles such as quantum superposition states and quantum gates, and it contains the mutation, crossover, and selection operations of the DE. To maintain a better balance between the exploration and exploitation of searching for the global optimal solution, the proposed algorithm adopts a quantum rotation gate with the adaptive GWO to update the population of solutions. The results of the tests performed on knapsack problems demonstrate that the QDGWO enhances diversity and convergence performance for solving 0-1 knapsack problems. In addition, the QDGWO is effective and efficient in finding optimal solutions in high-dimensional situations.
Although the QDGWO displays excellent performance in solving 0-1 knapsack problems, there are several directions of improvement for the proposed algorithm. First, to improve the effectiveness of the QDGWO, initial solutions of the quantum population can be generated with metaheuristic methods. In addition, the proposed approaches can be applied to solve other combinatorial optimization problems. Moreover, it is worth studying how to use the concepts of quantum computing in other novel metaheuristic approaches such as the MPA [22] and AOA [27], as well as multi-objective optimization algorithms.

Author Contributions

Conceptualization, Y.W. and W.W.; methodology, Y.W.; software, Y.W.; validation, Y.W.; formal analysis, Y.W.; investigation, Y.W.; resources, Y.W.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W. and W.W.; visualization, Y.W.; supervision, W.W.; funding acquisition, W.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61873240).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions. This work was supported in part by the National Natural Science Foundation of China (No. 61873240).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Kellerer, H.; Pferschy, U.; Pisinger, D. Knapsack Problems; Springer: Berlin/Heidelberg, Germany, 2004.
2. Wang, X.; He, Y. Evolutionary algorithms for knapsack problems. J. Softw. 2017, 28, 1–16.
3. Jourdan, L.; Basseur, M.; Talbi, E.G. Hybridizing exact methods and metaheuristics: A taxonomy. Eur. J. Oper. Res. 2009, 199, 620–629.
4. Shih, W. A branch and bound method for the multiconstraint zero-one knapsack problem. J. Oper. Res. Soc. 1979, 30, 369–378.
5. Toth, P. Dynamic programming algorithms for the zero-one knapsack problem. Computing 1980, 25, 29–45.
6. Zou, D.; Gao, L.; Li, S.; Wu, J. Solving 0-1 knapsack problem by a novel global harmony search algorithm. Appl. Soft Comput. 2011, 11, 1556–1564.
7. Fogel, D.B. Introduction to evolutionary computation. In Evolutionary Computation 1; Taylor & Francis Group: New York, NY, USA, 2000.
8. Chen, G.-L.; Wang, X.-F.; Zhuang, Z.-Q.; Wang, D.-S. Genetic Algorithm and Its Applications; The People's Posts and Telecommunications Press: Beijing, China, 2003.
9. Dorigo, M.; Birattari, M.; Stutzle, T. Ant colony optimization. IEEE Comput. Intell. Mag. 2006, 1, 28–39.
10. Poli, R.; Kennedy, J.; Blackwell, T. Particle swarm optimization. Swarm Intell. 2007, 1, 33–57.
11. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial bee colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471.
12. Yang, X.-S.; Deb, S. Cuckoo search via Lévy flights. In Proceedings of the 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC), Coimbatore, India, 9–11 December 2009; pp. 210–214.
13. Yang, X.-S.; Xin, S. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired Comput. 2010, 2, 78–84.
14. Feng, Y.-H.; Wang, G.-G.; Wang, L. Solving randomized time-varying knapsack problems by a novel global firefly algorithm. Eng. Comput. 2018, 34, 621–635.
15. Cao, J.; Yin, B.; Lu, X.; Kang, Y.; Chen, X. A modified artificial bee colony approach for the 0-1 knapsack problem. Appl. Intell. 2017, 48, 1582–1595.
16. Wu, H.; Zhou, Y.; Luo, Q. Hybrid symbiotic organisms search algorithm for solving 0-1 knapsack problem. Int. J. Bio-Inspired Comput. 2018, 12, 23–53.
17. Feng, Y.-H.; Jia, K.; He, Y.-C. An improved hybrid encoding cuckoo search algorithm for 0-1 knapsack problems. Comput. Intell. Neurosci. 2014, 2014, 970456.
18. Zhou, Y.-Q.; Chen, X.; Zhou, G. An improved monkey algorithm for a 0-1 knapsack problem. Appl. Soft Comput. 2016, 38, 817–830.
19. Sun, J.; Miao, Z.; Gong, D.-W.; Zeng, X.-J.; Li, J.-Q.; Wang, G.-G. Interval multi-objective optimization with memetic algorithms. IEEE Trans. Cybern. 2020, 50, 3444–3457.
20. Wang, G.-G.; Guo, L.-H.; Gandomi, A.H.; Hao, G.-S.; Wang, H.-Q. Chaotic krill herd algorithm. Inf. Sci. 2014, 274, 17–34.
21. Wang, G.-G.; Bai, D.; Gong, W.; Ren, T.; Liu, X.; Yan, X. Particle-swarm krill herd algorithm. In Proceedings of the 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), Bangkok, Thailand, 16–19 December 2018; pp. 1073–1080.
22. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377.
23. Wang, G.-G. Moth search algorithm: A bio-inspired metaheuristic algorithm for global optimization problems. Memetic Comput. 2016, 10, 151–164.
24. Feng, Y.-H.; Wang, G.-G. Binary moth search algorithm for discounted {0-1} knapsack problem. IEEE Access 2018, 6, 10708–10719.
25. Feng, Y.-H.; Yi, J.-H.; Wang, G.-G. Enhanced moth search algorithm for the set-union knapsack problems. IEEE Access 2019, 7, 173774–173785.
26. Gao, D.; Wang, G.-G.; Pedrycz, W. Solving fuzzy job-shop scheduling problem using DE algorithm improved by a selection mechanism. IEEE Trans. Fuzzy Syst. 2020, 28, 3265–3275.
27. Abualigah, L.; Diabat, A.; Mirjalili, S.; Abd Elaziz, M.; Gandomi, A.H. The arithmetic optimization algorithm. Comput. Methods Appl. Mech. Eng. 2021, 376, 113609.
28. Wang, G.-G.; Deb, S.; Cui, Z. Monarch butterfly optimization. Neural Comput. Appl. 2019, 31, 1995–2014.
29. Feng, Y.-H.; Wang, G.-G.; Deb, S.; Lu, M.; Zhao, X.-J. Solving 0-1 knapsack problem by a novel binary monarch butterfly optimization. Neural Comput. Appl. 2017, 28, 1619–1634.
30. Feng, Y.-H.; Wang, G.-G.; Li, W.-B.; Li, N. Multi-strategy monarch butterfly optimization algorithm for discounted {0-1} knapsack problem. Neural Comput. Appl. 2018, 30, 3019–3036.
31. Feng, Y.-H.; Wang, G.-G.; Dong, J.-Y.; Wang, L. Opposition-based learning monarch butterfly optimization with Gaussian perturbation for large-scale 0-1 knapsack problem. Comput. Electr. Eng. 2018, 67, 454–468.
32. Feng, Y.-H.; Yu, X.; Wang, G.-G. A novel monarch butterfly optimization with global position updating operator for large-scale 0-1 knapsack problems. Mathematics 2019, 7, 1056.
33. Wang, G.-G.; Tan, Y. Improving metaheuristic algorithms with information feedback models. IEEE Trans. Cybern. 2019, 49, 542–555.
34. Benioff, P. The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. J. Stat. Phys. 1980, 22, 563–591.
35. Feynman, R.P. Simulating physics with computers. Int. J. Theor. Phys. 1982, 21, 467–488.
36. Han, K.-H.; Kim, J.-H. Genetic quantum algorithm and its application to combinatorial optimization problems. In Proceedings of the International Congress on Evolutionary Computation (CEC2000), San Diego, CA, USA, 16–19 July 2000; Volume 2, pp. 1354–1360.
37. Han, K.-H.; Kim, J.-H. Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Trans. Evol. Comput. 2002, 6, 580–593.
38. Talbi, H.; Draa, A.; Batouche, M. A new quantum-inspired genetic algorithm for solving the travelling salesman problem. In Proceedings of the 2004 IEEE International Conference on Industrial Technology (IEEE ICIT'04), Hammamet, Tunisia, 8–10 December 2004; Volume 3, pp. 1192–1197.
39. Chang, C.-C.; Chen, C.-Y.; Fan, C.-W.; Chao, H.-C.; Chou, Y.-H. Quantum-inspired electromagnetism-like mechanism for solving 0/1 knapsack problem. In Proceedings of the 2010 2nd International Conference on Information Technology Convergence and Services, Cebu, Philippines, 11–13 August 2010; pp. 1–6.
40. Xiong, H.; Wu, Z.; Fan, H.; Li, G.; Jiang, G. Quantum rotation gate in quantum-inspired evolutionary algorithm: A review, analysis and comparison study. Swarm Evol. Comput. 2018, 42, 43–57.
41. Zhou, W.; Zhou, C.; Liu, G.; Lv, H.; Liang, Y. An improved quantum-inspired evolutionary algorithm for clustering gene expression data. In Computational Methods; Springer: Dordrecht, The Netherlands, 2006; pp. 1351–1356.
42. Xiao, J.; Yan, Y.; Lin, Y.; Yuan, L.; Zhang, J. A quantum-inspired genetic algorithm for data clustering. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, China, 1–6 June 2008; pp. 1513–1519.
43. Han, K.-H.; Kim, J.-H. Quantum-inspired evolutionary algorithms with a new termination criterion, HεGate, and two-phase scheme. IEEE Trans. Evol. Comput. 2004, 8, 156–169.
44. Storn, R.; Price, K. Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
45. Hota, A.R.; Pat, A. An adaptive quantum-inspired differential evolution algorithm for 0-1 knapsack problem. In Proceedings of the 2010 Second World Congress on Nature and Biologically Inspired Computing (NaBIC), Kitakyushu, Japan, 15–17 December 2010; pp. 703–708.
46. Draa, A.; Meshoul, S.; Talbi, H.; Batouche, M. A quantum-inspired differential evolution algorithm for solving the N-queens problem. Neural Netw. 2011, 1, 21–27.
47. Su, H.; Yang, Y. Quantum-inspired differential evolution for binary optimization. In Proceedings of the 2008 Fourth International Conference on Natural Computation, Jinan, China, 18–20 October 2008; Volume 1, pp. 341–346.
48. Wang, Y.; Feng, X.-Y.; Huang, Y.-X.; Pu, D.-B.; Zhou, W.-G.; Liang, Y.-C.; Zhou, C.-G. A novel quantum swarm evolutionary algorithm and its applications. Neurocomputing 2007, 70, 633–640.
49. Layeb, A. A hybrid quantum inspired harmony search algorithm for 0-1 optimization problems. J. Comput. Appl. Math. 2013, 253, 14–25.
50. Zouache, D.; Moussaoui, A. Quantum-inspired differential evolution with particle swarm optimization for knapsack problem. J. Inf. Sci. Eng. 2015, 31, 1757–1773.
51. Gao, Y.; Zhang, F.; Zhao, Y.; Li, C. Quantum-inspired wolf pack algorithm to solve the 0-1 knapsack problem. Math. Probl. Eng. 2018, 2018, 5327056.
52. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
53. Srikanth, K.; Panwar, L.K.; Panigrahi, B.K.; Herrera-Viedma, E.; Sangaiah, A.K.; Wang, G.-G. Meta-heuristic framework: Quantum inspired binary grey wolf optimizer for unit commitment problem. Comput. Electr. Eng. 2018, 70, 243–260.
54. Sudholt, D. The benefits of population diversity in evolutionary algorithms: A survey of rigorous runtime analyses. In Theory of Evolutionary Computation; Springer: Cham, Switzerland, 2020; pp. 359–404.
55. Pisinger, D. Where are the hard knapsack problems? Comput. Oper. Res. 2005, 32, 2271–2284.
56. Vasquez, M.; Yannick, V. Improved results on the 0-1 multidimensional knapsack problem. Eur. J. Oper. Res. 2005, 165, 70–81.
57. Dirac, P.A.M. The Principles of Quantum Mechanics, 4th ed.; Oxford University Press: New York, NY, USA, 1981; p. 12.
58. Wang, J.S.; Li, S.X. An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci. Rep. 2019, 9, 1–21.
59. Woolson, R.F. Wilcoxon signed-rank test. In Wiley Encyclopedia of Clinical Trials; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2007; pp. 1–3.
Figure 1. Position updating mechanism of GWO.
Figure 2. Framework of QDGWO.
Figure 3. Polar plot of quantum rotation gate for qubit.
Figure 4. Best profits for the 0-1 knapsack problems (250 items in Experiment 1).
Figure 5. Best profits for the 0-1 knapsack problems (500 items in Experiment 1).
Figure 6. Best profits for the 0-1 knapsack problems (1000 items in Experiment 1).
Figure 7. Best profits for the 0-1 knapsack problems (3000 items in Experiment 1).
Table 1. Lookup table of rotation angle.

| xi | bi | f(x) ≥ f(b) | Δθi | s(αiβi): αiβi > 0 | αiβi < 0 | αi = 0 | βi = 0 |
|----|----|-------------|--------|-------------------|----------|--------|--------|
| 0 | 0 | false | 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | true | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | false | 0 | 0 | 0 | 0 | 0 |
| 0 | 1 | true | 0.05π | −1 | +1 | ±1 | 0 |
| 1 | 0 | false | 0.01π | −1 | +1 | ±1 | 0 |
| 1 | 0 | true | 0.025π | +1 | −1 | 0 | ±1 |
| 1 | 1 | false | 0.005π | +1 | −1 | 0 | ±1 |
| 1 | 1 | true | 0.025π | +1 | −1 | 0 | ±1 |

Where f(·) is the profit; s(αiβi) is the direction sign of the rotation angle; and xi and bi are the ith bits of the binary solution x and the best solution b, respectively.
Table 2. Parameters of algorithms in experiments.

| Parameter | QEA | AQDE | QSE | QDGWO |
|-----------|-----|------|-----|-------|
| Differential control parameter (F) | / | rand(0,1) × rand(0,1) × 0.1 | / | F0 = 0.02, F1 = 0.03 |
| Crossover control parameter (CR) | / | Gaussian distribution N(0.5, 0.0375) | / | Gaussian distribution N(0.5, 0.0375) |
| Parameters of PSO | / | / | W = 0.7298, c1 = 1.42, c2 = 1.57 | / |
| Quantum rotation angle (Δθ) | 0.01π | / | / | θmin = 0.01π, θmax = 0.03π, k = 10 |
Table 3. Experimental results for 0-1 knapsack problems (Experiment 1).

| Number of Items | Metric | QEA | AQDE | QSE | QDGWO |
|-----------------|--------|-----|------|-----|-------|
| 50 | Best | 302 (=) | 292 (−) | 297 (=) | 302 |
| | Average | 300.63 (=) | 287.2 (−) | 294.26 (−) | 302 |
| | Worst | 297 (=) | 282 (−) | 290 (−) | 302 |
| | Std | 2.23 (−) | 2.97 (−) | 2.41 (−) | 0 |
| 250 | Best | 1517 (=) | 1417 (−) | 1446 (−) | 1554 |
| | Average | 1502.9 (=) | 1397.6 (−) | 1427.7 (−) | 1549.3 |
| | Worst | 1496 (=) | 1382 (−) | 1412 (−) | 1542 |
| | Std | 4.4562 (−) | 7.3178 (−) | 8.2040 (−) | 2.3419 |
| 500 | Best | 2946 (=) | 2772 (−) | 2799 (−) | 3091 |
| | Average | 2917.3 (−) | 2732 (−) | 2783 (−) | 3072.1 |
| | Worst | 2907 (−) | 2717 (−) | 2763 (−) | 3058 |
| | Std | 8.8198 (=) | 11.3304 (=) | 9.2364 (=) | 8.9624 |
| 1000 | Best | 5695 (−) | 5382 (−) | 5460 (−) | 6121 |
| | Average | 5662.5 (−) | 5364.4 (−) | 5442.2 (−) | 6085.3 |
| | Worst | 5633 (−) | 5342 (−) | 5422 (−) | 6048 |
| | Std | 12.7028 (=) | 11.2975 (=) | 10.1018 (+) | 13.4812 |
| 1500 | Best | 8464 (−) | 8198 (−) | 8128 (−) | 9126 |
| | Average | 8439.4 (−) | 8178.7 (−) | 8082.8 (−) | 9077.1 |
| | Worst | 8414 (−) | 8149 (−) | 8039 (−) | 9027 |
| | Std | 15.1535 (+) | 13.6188 (+) | 20.6722 (=) | 21.5347 |
| 2000 | Best | 11,217 (−) | 10,951 (−) | 10,813 (−) | 12,027 |
| | Average | 11,191.2 (−) | 10,900.4 (−) | 10,781.1 (−) | 11,971.4 |
| | Worst | 11,164 (=) | 10,865 (−) | 10,747 (−) | 11,913 |
| | Std | 14.7202 (+) | 24.6167 (=) | 16.2827 (+) | 24.3967 |
| 2500 | Best | 13,907 (−) | 13,569 (−) | 13,466 (−) | 14,886 |
| | Average | 13,865.8 (−) | 13,523.3 (−) | 13,394.2 (−) | 14,831.4 |
| | Worst | 13,839 (−) | 13,482 (−) | 13,342 (−) | 14,751 |
| | Std | 19.0504 (+) | 24.5971 (+) | 23.5438 (+) | 28.4894 |
| 3000 | Best | 16,604 (−) | 16,221 (−) | 16,071 (−) | 17,769 |
| | Average | 16,549.9 (−) | 16,175.8 (−) | 16,033.2 (−) | 17,670.1 |
| | Worst | 16,506 (−) | 16,128 (−) | 15,995 (−) | 17,588 |
| | Std | 20.2286 (+) | 22.0621 (+) | 20.5269 (+) | 29.5280 |
Table 4. Experimental results of QDGWO algorithm without crossover of DE, with binomial crossover of DE, and with exponential crossover of DE.

| Number of Items | Metric | Without Crossover of DE | With Binomial Crossover of DE (CR = 0.5) | With Exponential Crossover of DE (CR = 0.5) |
|-----------------|--------|-------------------------|------------------------------------------|---------------------------------------------|
| 500 | Best | 3001 (−) | 3046 | 3046 (=) |
| | Average | 2990.3 (−) | 3038.6 | 3040.1 (=) |
| | Worst | 2981 (−) | 3031 | 3031 (=) |
| | Std | 6.1752 (−) | 3.6191 | 3.4287 (=) |
| 1000 | Best | 5926 (−) | 6126 | 6126 (=) |
| | Average | 5893.8 (−) | 6109.4 | 6107.7 (=) |
| | Worst | 5851 (−) | 6096 | 6081 (=) |
| | Std | 18.0843 (−) | 7.2612 | 9.5447 (=) |
| 1500 | Best | 8752 (−) | 9126 | 9126 (=) |
| | Average | 8707.7 (−) | 9094.9 | 9090.9 (=) |
| | Worst | 8647 (−) | 9066 | 9042 (=) |
| | Std | 23.2206 (−) | 16.8516 | 21.4710 (−) |
Table 5. Experimental results of QDGWO and QIHSA for 0-1 knapsack problems (Experiment 2).

| Test | Item Size | Optimal Solution | Metric | QIHSA | QDGWO |
|------|-----------|------------------|--------|-------|-------|
| Knapinst50 | 50 | 1177 | SR% | 99.83 (=) | 100 |
| | | | best | 1175 (=) | 1177 |
| Knapinst200 | 200 | 4860 | SR% | 97.83 (−) | 100 |
| | | | best | 4755 (−) | 4860 |
| Knapinst500 | 500 | 11,922 | SR% | 93.74 (−) | 98.56 |
| | | | best | 11,174 (−) | 11,748 |
| Knapinst1000 | 1000 | 24,356 | SR% | 87.97 (−) | 98.14 |
| | | | best | 21,427 (−) | 23,903 |
| Knapinst1500 | 1500 | 35,891 | SR% | 86.31 (−) | 97.25 |
| | | | best | 30,978 (−) | 34,904 |
| Knapinst2000 | 2000 | 49,007 | SR% | 85.8 (−) | 96.36 |
| | | | best | 42,052 (−) | 47,223 |
