Article

Ant Colony Optimization with Warm-Up

Department of Engineering and Architecture, University of Parma, 43124 Parma, Italy
Submission received: 16 September 2021 / Revised: 7 October 2021 / Accepted: 10 October 2021 / Published: 12 October 2021
(This article belongs to the Special Issue Metaheuristics)

Abstract

The Ant Colony Optimization (ACO) is a probabilistic technique inspired by the behavior of ants for solving computational problems that may be reduced to finding the best path through a graph. Some species of ants deposit pheromone on the ground to mark favorable paths that should be followed by other members of the colony. Ant colony optimization implements a similar mechanism for solving optimization problems. In this paper, a warm-up procedure for the ACO is proposed. During the warm-up, the pheromone matrix is initialized to provide an efficient new starting point for the algorithm, so that it can obtain the same (or better) results with fewer iterations. The warm-up is based exclusively on the graph, which, in most applications, is given and does not need to be recalculated every time before executing the algorithm. In this way, it needs to be carried out only once, and it speeds up the algorithm every time it is used from then on. The proposed solution is validated on a set of traveling salesman problem instances and in the simulation of a real industrial application for the routing of pickers in a manual warehouse. During the validation, it is compared with other ACOs adopting pheromone initialization techniques, and the results show that, in most cases, the adoption of the proposed warm-up allows the ACO to obtain the same or better results with fewer iterations.

1. Introduction

The Ant Colony Optimization (ACO) is a combinatorial optimization technique inspired by the behaviour of some species of ants. Broadly, when an ant must choose one route instead of another, it considers the quantity of pheromone left by other members of the colony. A higher level of pheromone means a better route, usually because it is shorter than the others. This curious behavior inspired the creation of a probabilistic operational-research technique for solving computational problems that can be reduced to finding the best path through a graph. The first version was proposed by [1], and it was originally called Ant System. Since then, many versions and different applications of the ACO have been studied, and the algorithm is nowadays known to be a well-established and efficient approach for many practical problems, primarily the well-known traveling salesman problem (TSP) [2]. Consequently, an improvement in the ACO would lead to great benefits in many industrial and nonindustrial fields. Since the ACO is a metaheuristic algorithm, most of the problems approached with it are strictly time-critical. Usually, they are NP-hard problems, in which the search for the global optimum is given up a priori in favor of a reasonably good suboptimal solution. However, the ACO, like all evolutionary algorithms, needs many iterations to converge to a good solution, and, in the case of large-size problems, this process can be very time-consuming [3]. For this reason, the implementation of the ACO for solving large-size problems in real time (i.e., a few seconds or even less) might be problematic. This is the first open problem highlighted also by [4], and this is why the adoption of a technique able to speed up the ACO may be very useful.
In this paper, a warm-up procedure to reduce the number of iterations required by the ACO to converge to a good solution is proposed. The success of the ACO is essentially based on a variable called the pheromone matrix. The pheromone matrix registers the current quantity of pheromone on each edge of the graph and therefore determines the probability of including each specific edge in a newly generated solution. In the classic version of the ACO, every time the algorithm is executed, all the elements of the pheromone matrix are set equal to a starting (generally low) value. Then, as the computed iterations increase, the pheromone on the most promising paths is increased and that on the less convenient paths is reduced. However, in most real implementations, the graph of nodes is given and is the same in every execution. Furthermore, it was verified that, given a graph, in many cases, each time the ACO was executed, after a certain number of iterations, the pheromone matrix was always very similar. The warm-up procedure proposed in this paper aims to carry out a fine-tuning of the pheromone matrix on a specific graph, so that, every time the ACO is executed, it starts from an already weighted graph, where the promising paths have been highlighted with a high level of pheromone and the bad paths excluded a priori. This process is expected to reduce the number of iterations that the ACO requires to converge every time it is executed. The remainder of this paper is organized as follows. Firstly, a brief overview of the scientific contributions to ACO is presented in Section 2. Then, the warm-up procedure proposed in this paper is described in Section 3. The computational experiments are shown in Section 4, where the ACO with warm-up is compared to the classic ACO and to two other ACO versions that carry out an initialization of the pheromone matrix. Finally, the conclusions are presented in Section 5.

2. Literature Review

The Ant Colony Optimization (ACO) was introduced by [1] as a novel nature-inspired metaheuristic for the solution of hard combinatorial optimization problems. ACO belongs to the class of metaheuristics: approximate algorithms used to obtain good-enough solutions to NP-hard problems in a reasonable amount of time. When searching for food, some species of ants initially explore the area surrounding the nest randomly. As soon as an ant finds a food source, it evaluates the quantity and the quality of the food. On the way back, the ant deposits a chemical pheromone trail on the ground. The quantity of pheromone deposited depends on the quantity and quality of the food and guides other ants to the food source. As shown by [5], the communication via pheromone between the ants enables them to find the shortest paths between their nest and food sources, and the same consideration also applies in ant colony optimization algorithms for solving combinatorial optimization problems. Even if the first proof-of-concept application of the ACO was the traveling salesman problem (TSP), the algorithm has since been applied to many combinatorial optimization problems. For instance, it was applied to assignment problems [6,7,8], routing problems [9,10,11], and scheduling problems [12,13]. Less known but equally efficient applications concern the resource-constrained project scheduling problem [14,15], flow shop scheduling [16], the sequential ordering problem [17], and the open shop scheduling problem [18]. The scientific community also proposed many applications for nonindustrial environments, such as solutions for DNA sequencing or web page ranking [19]. The various ACO variants generally differ from each other in their pheromone update rules. In particular, most applications belong to one of two categories: the iteration-best update or the best-so-far update.
Basically, in the first case, the pheromone update takes place at every iteration, while in the second case it takes place only when a new best solution is found, thus introducing a much stronger bias towards the good solutions found. The most successful ACO variants are the Ant Colony System [20] and the Min-Max Ant System [21], which are also the most used in practice. Since this is intended only as a brief overview of the key points concerning ACO, the literature reviews by [4,22] are suggested for a deeper analysis of the scientific contributions on this algorithm.

3. Warm-Up

3.1. General Considerations

In most applications where the ACO is used or might be used, the graph of nodes that characterizes the problem is given and constant. Consequently, even the matrix of the costs associated with the edges is constant. This is well known by practitioners and affirmed by many scientific publications: see for example [4,23,24], which indeed take the matrix of costs as given. For instance, in classic traveling salesman problems for vehicle routing or picker routing in manual warehouses, the nodes represent the locations to visit and the costs associated with the edges represent the distances between the connected nodes. Hence, the matrix of distances does not change until the road network changes (in the case of vehicle routing) or the warehouse layout changes (in the case of picker routing). As a matter of fact, in almost all the papers treating these topics, the matrix of distances is defined only once using an exhaustive algorithm such as Floyd-Warshall to find the shortest path between all the nodes of the graph (see for instance [10]). Hence, once the graph and the matrix of costs are formalized, a warm-up may also be carried out. The warm-up allows a tuning of the pheromone matrix used by the ACO, so that, every time the ACO is executed from then on, the number of iterations it needs to converge to a good solution is reduced, and, consequently, its computational time is reduced. The aim of the warm-up is therefore to highlight a priori the most promising paths, as well as to exclude a priori the worst ones. All these aspects were already affirmed and well described by other scientific contributions focused on the initialization of the pheromone matrix (see for instance [25,26]).

3.2. The Notation Used

In the remainder of this section, for describing the proposed procedure, the following notation is used.
  • m = 1, …, M are the iterations of the warm-up process;
  • i, j = 1, …, N are the nodes of the graph;
  • C is the matrix of costs, where each element c_{i,j} is the cost associated with the edge (i, j);
  • T is the pheromone matrix of the ACO, where each element τ_{i,j} is the pheromone on the edge (i, j);
  • P(m) is the matrix of probabilities at iteration m, where each element p_{i,j}(m) is the probability of increasing the pheromone on the edge (i, j) during iteration m;
  • U(m) is the matrix of updates at iteration m, where each element u_{i,j}(m) indicates how much the pheromone on the edge (i, j) is to be increased during iteration m;
  • α, β, ρ, Q, τ_0 are classic and well-known parameters of the ACO [1]: α and β define the probability of selecting a specific edge according to the pheromone on it, ρ ∈ [0, 1] is known as the evaporation rate and defines the decrease in pheromone that takes place at each iteration, Q defines the quantity of pheromone laid by the ants at each iteration, and τ_0 is the starting pheromone on each edge;
  • ρ_wu is a variant of ρ, with a different value used during the warm-up;
  • I_A is the identity matrix of size A × A;
  • ∘ is the Hadamard product.

3.3. The Procedure

The warm-up emulates the update of the pheromone that, in the classic ACO, takes place during the first iterations of the algorithm, i.e., those generally aimed at exploring the graph. The procedure is iterative and relatively easy. First of all, the pheromone matrix T is initialized by setting τ_{i,j} = τ_0 for every i, j ∈ {1, …, N} with i ≠ j, and τ_{i,j} = 0 where i = j. Similarly, to avoid divisions by zero, all the elements on the diagonal of the matrix of costs are set equal to 1 (C = C + I_N). At each iteration m, the probability matrix P(m) is built by calculating each of its elements as in the following equation.
$$p_{i,j}^{(m)} = \frac{(\tau_{i,j})^{\alpha} \cdot (1/c_{i,j})^{\beta}}{\sum_{k=1}^{N} (\tau_{i,k})^{\alpha} \cdot (1/c_{i,k})^{\beta}} \qquad \forall\, i,j \in \{1,\dots,N\} \tag{1}$$
Note that Equation (1) is the same used by many authors and mentioned by [4] to compute the probability of including the edge (i, j) in the newly generated solution at each iteration of the algorithm. Then, the matrix of updates U(m) is calculated as in Equation (2), according to [1].
$$u_{i,j}^{(m)} = \frac{Q}{c_{i,j}} \qquad \forall\, i,j \in \{1,\dots,N\} \tag{2}$$
Then, the pheromone matrix T is updated according to Equation (3). In particular, the pheromone on each edge is updated according to its cost, and its corresponding value in the matrix of probabilities.
$$T \leftarrow T + \left[\, U^{(m)} \circ P^{(m)} \,\right] \tag{3}$$
Finally, the pheromone evaporates as expressed in Equation (4).
$$T \leftarrow \rho_{wu} \cdot T \tag{4}$$
The process is then repeated until the maximum number of iterations M is reached. In general, it is possible to see that the warm-up emulates exactly the same process that takes place during the iterations of the ACO. However, while during each iteration of the classic ACO only the pheromone on the edges belonging to a newly generated best solution is increased, in this case, at each iteration, all the edges of the graph see an increase in pheromone, and this increase is proportional to their attractiveness. This is also a peculiarity of the proposed approach when compared to existing ones in the literature (see for instance [25] or [26]), which generally initialize the pheromone simply depending on the cost associated with each edge of the graph. The author is aware that, over the years, several versions of the ACO were proposed by the scientific community, and most of them differ from one another in the formulas adopted to calculate the increase in pheromone [27], the evaporation [9], and the probability of choosing one edge instead of another [28]. In this study, reference is made to the first version by [1]. As several different versions of the ACO exist, many different versions of this warm-up procedure can be obtained with slight modifications to the formulas.
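To make the procedure concrete, the following is a minimal sketch of the warm-up loop under the notation introduced above, assuming NumPy; the function name `warm_up` is illustrative and not taken from the published code.

```python
import numpy as np

def warm_up(C, M=400, alpha=1.0, beta=2.0, rho_wu=1.0, Q=5.0, tau0=0.1):
    """Sketch of the proposed warm-up: pre-tunes the pheromone matrix
    using only the cost matrix C, following Equations (1)-(4)."""
    N = C.shape[0]
    # Avoid divisions by zero on the diagonal (C = C + I_N).
    C = C.astype(float) + np.eye(N)
    # Initialize pheromone: tau0 everywhere except the diagonal.
    T = np.full((N, N), tau0)
    np.fill_diagonal(T, 0.0)
    for _ in range(M):
        # Equation (1): attractiveness of each edge, row-normalized.
        attract = (T ** alpha) * ((1.0 / C) ** beta)
        P = attract / attract.sum(axis=1, keepdims=True)
        # Equation (2): update proportional to the inverse of the edge cost.
        U = Q / C
        # Equation (3): deposit on every edge, weighted by its probability
        # (Hadamard product of U and P).
        T = T + U * P
        # Equation (4): evaporation (rho_wu = 1 means no evaporation).
        T = rho_wu * T
    return T
```

Note that, unlike the classic ACO update, every edge receives a deposit at every iteration, so no solutions need to be constructed during the warm-up.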

3.4. The Parameters Tuning

Concerning the tuning of parameters, the same setting analyzed and defined as ‘optimal’ by [1] is used (i.e., α = 1, β = 2, ρ = 0.9, Q = 5, τ_0 = 0.1). The additional parameters used in the warm-up that need optimization are the evaporation rate used during the warm-up (i.e., ρ_wu) and the number of warm-up iterations (i.e., M). There is no real optimum for these parameters that can be defined a priori; both depend on the size of the problem, its complexity, and the type of connections in the graph. For this study, to carry out a good setting before the computational experiments described in the next section, three different traveling salesman problem benchmarks were used. Each of these problems consists of constructing the cheapest Hamiltonian cycle through a set of nodes, and each has a different complexity identified by the number of nodes to connect (i.e., 20, 30, 40). Concerning the parameters, three different levels were identified for each of them, i.e., ρ_wu ∈ {0.5, 0.9, 1.0} and M ∈ {200, 400, 600}, and the ACO with warm-up was tested on each problem using all the possible combinations. Moreover, because of the randomness of the procedure, given a benchmark problem and a combination of ρ_wu and M, not just a single execution of the ACO was considered; rather, the algorithm was executed five times under the same conditions, and its average result and standard deviation were monitored. The results are reported in Table 1, where it is visible that the best results (those highlighted in green) are obtained for ρ_wu = 1 and M = 400. As ρ_wu = 1 suggests, evaporation should be avoided during the warm-up.
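The tuning described above is a plain grid search over ρ_wu and M with repeated runs. A sketch of such a driver is given below, where `solver(rho_wu, M)` stands for a hypothetical callable that executes one run of the ACO with warm-up and returns the cost of the best tour found; both names are illustrative.

```python
import itertools
import statistics

def tune_warmup(solver, rho_values=(0.5, 0.9, 1.0),
                M_values=(200, 400, 600), runs=5):
    """Grid search over the two warm-up parameters. For each combination,
    the (hypothetical) solver is executed `runs` times and the average
    cost and standard deviation are recorded, as in Table 1."""
    results = {}
    for rho_wu, M in itertools.product(rho_values, M_values):
        costs = [solver(rho_wu, M) for _ in range(runs)]
        results[(rho_wu, M)] = (statistics.mean(costs),
                                statistics.pstdev(costs))
    # Select the combination with the lowest average cost.
    best = min(results, key=lambda k: results[k][0])
    return best, results
```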

4. Computational Experiments

4.1. General Considerations

For validating the efficiency of the proposed warm-up approach, a set of computational experiments is presented in this section. All the experiments carried out are based on the traveling salesman problem (TSP), which, to the author’s best knowledge, is also the most frequent and popular application of the ACO. The objective of the algorithm is therefore the definition of a low-cost Hamiltonian cycle: given (i) a set of nodes to visit, (ii) a set of edges connecting them to each other, and (iii) a cost associated with each edge, the algorithm has to define the sequence in which the nodes should be visited so as to minimize the total cost of the covered edges. Firstly, a set of generic TSP instances is used. In particular, five different graphs are generated, and, on each graph, five experiments of different complexities are carried out. Each experiment considers a different set of nodes of the graph: the larger the set of nodes, the higher the complexity of the problem. Then, to validate the proposed approach in a more realistic context, the simulation of a real industrial case is used. The layout of a manual warehouse for order picking is considered, and the proposed algorithm is used to define the optimal (or almost optimal) paths taken by pickers to collect the desired products. No capacity limits are imposed on pickers or aisles; hence, the situation is perfectly comparable to a classic TSP, although the graph is more constrained and has all the characteristics of those used to model warehouses.

4.2. The Comparison Algorithms

The proposed ACO with warm-up (ACOWU) is compared to a classic ACO (i.e., without warm-up) having the same parameter setting, and to two ACOs using a pheromone initialization technique (i.e., [25,26]).
The first comparison algorithm with pheromone initialization, proposed by [26] (hereafter simply referred to as Dai), is based on the Minimum Spanning Tree (MST). Given the graph of nodes, once the MST is calculated using the well-known Prim’s algorithm, and given τ_0 the starting pheromone, the pheromone on the edges belonging to the MST is set to τ_0^{1/β}.
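Assuming this reading is correct (edges on the MST receive the raised level τ_0^{1/β}, all others the plain τ_0), the Dai initialization can be sketched as follows; the function name and the simple O(N³) Prim implementation are illustrative, not from the cited paper.

```python
import numpy as np

def dai_init(C, tau0=0.1, beta=2.0):
    """Sketch of the MST-based pheromone initialization attributed to
    Dai et al.: edges on the minimum spanning tree (grown with Prim's
    algorithm) start with pheromone tau0**(1/beta) instead of tau0."""
    N = C.shape[0]
    T = np.full((N, N), tau0)
    np.fill_diagonal(T, 0.0)
    # Prim's algorithm: grow the tree from node 0, always adding the
    # cheapest edge that connects a tree node to a non-tree node.
    in_tree = {0}
    while len(in_tree) < N:
        best = None
        for i in in_tree:
            for j in range(N):
                if j not in in_tree and (best is None
                                         or C[i, j] < C[best[0], best[1]]):
                    best = (i, j)
        i, j = best
        T[i, j] = T[j, i] = tau0 ** (1.0 / beta)
        in_tree.add(j)
    return T
```

With the default τ_0 = 0.1 and β = 2, MST edges start at roughly 0.316, i.e., a noticeably higher pheromone level than the other edges.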
Conversely, the algorithm proposed by [25] (hereafter simply referred to as Bellaachia) sets the pheromone on edge (i, j), namely τ_{i,j}, as in Equation (5):
$$\tau_{i,j} = \frac{1}{\sum_{z \in N^{*}} c_{i,z}} \tag{5}$$
where N* (i.e., ⊂ N) is the set of nodes, different from j, which can be reached from i.
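Equation (5) translates directly into code; the sketch below assumes a complete graph, so that N* contains all nodes other than i and j, and the function name is illustrative.

```python
import numpy as np

def bellaachia_init(C):
    """Sketch of the local pheromone initialization by Bellaachia and
    Alathel (Equation (5)): the pheromone on edge (i, j) is the inverse
    of the total cost from i to every reachable node other than j."""
    N = C.shape[0]
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            # N*: nodes reachable from i, excluding j (and i itself,
            # whose self-cost is zero anyway).
            denom = sum(C[i, z] for z in range(N) if z not in (i, j))
            T[i, j] = 1.0 / denom
    return T
```

Note that τ_{i,j} does not depend on c_{i,j} itself: an edge is penalized when the other edges leaving i are expensive.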

4.3. Collected Information

The warm-up is made only once on each graph, while at each run of the algorithms three main indicators are recorded: (i) the cost of the best solution found, (ii) the number of iterations needed to find it, and (iii) the computational time. Since all the observed algorithms are subject to a certain randomness, to better assess their reliability, each was run 10 times on each experiment, and the averages and standard deviations are therefore reported. For the sake of clarity, in all the following tables, the results of the proposed algorithm are written in bold when it outperforms the classic ACO, and highlighted in gray every time it outperforms all the other algorithms.

4.4. Results Obtained on Generic TSP Instances

The results obtained on the generic TSP instances are reported in Table 2, Table 3 and Table 4. In particular, results concerning the cost of the best solution found are reported in Table 2, results concerning the number of iterations needed to find the best solution are reported in Table 3, and computational times are in Table 4. For the sake of clarity, the results obtained by the proposed ACOWU are written in bold when it outperformed the classic ACO without warm-up, and highlighted in gray when it outperformed all the other comparison algorithms.
As visible in Table 2 and represented in Figure 1, the proposed warm-up allows the ACOWU to outperform all the other algorithms in almost all the experiments. The Dai algorithm also performs better than the classic ACO, but it rarely equals the results of the proposed one. The Bellaachia algorithm provides reasonably good results, although it struggles to match even the classic ACO. It is reasonable to believe that the authors of the Bellaachia algorithm focused more on reducing the number of iterations needed to converge to a good solution than on the quality of the solution itself.
Table 3 and Table 4, together with Figure 2 and Figure 3, show the comparison in terms of computational time and solutions explored by the algorithms before finding the best one. In this sense, the Dai and Bellaachia algorithms are the best ones, although the difference with the proposed ACOWU is not that big, i.e., 100–300 explored solutions, which translates into a few milliseconds. Moreover, the ACOWU is even able to outperform all the others in some experiments. The experiments in which the ACOWU needs fewer iterations (and, consequently, less computational time) to find the best solutions are also the most complicated instances, where the number of nodes to visit is higher (i.e., 40–60 nodes).

4.5. Results Obtained in the Simulation of the Real Warehouse

After studying the effect of the proposed warm-up on a set of generic TSP instances, it is in the author’s interest to analyze its effect in a more realistic and complex environment. The simulation of a manual warehouse for picking was therefore used, and the proposed ant colony optimization with warm-up is used to define the routing of pickers, i.e., once the picking locations the picker has to visit are defined, the order in which they are visited is determined. The faced problem is essentially a TSP, but the graph of nodes and paths is more constrained, with fewer possible paths between nodes and many mandatory walkways. Importantly, no additional constraints, such as the capacity of pickers’ baskets, the definition of batches, or the interference between pickers moving through the aisles, are considered.
Starting from the warehouse layout, a graph of accessible positions is generated by placing a node in front of each storage location and a node where aisles cross each other; then, using the well-known Floyd-Warshall algorithm, the matrix of minimum distances between nodes is generated. The starting warehouse is made of 20 aisles with 16 storage locations each, crossed by a single cross-aisle in the middle (i.e., between the 8th and the 9th locations). Each storage location is 2 × 2 m, aisles are 4 m wide, and the cross-aisle is 8 m wide. The resulting graph used in the tests is shown in Figure 4.
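The distance-matrix step is a standard all-pairs shortest-path computation, done once per layout. A minimal sketch, assuming an adjacency matrix with np.inf where two nodes are not directly connected (the function name is illustrative):

```python
import numpy as np

def floyd_warshall(adj):
    """All-pairs shortest distances for the warehouse graph. `adj` is an
    N x N matrix of direct edge lengths, with np.inf for missing edges
    and 0 on the diagonal."""
    D = adj.astype(float).copy()
    N = D.shape[0]
    for k in range(N):
        # Relax every pair (i, j) through the intermediate node k,
        # using broadcasting: D[i, j] = min(D[i, j], D[i, k] + D[k, j]).
        D = np.minimum(D, D[:, k:k + 1] + D[k:k + 1, :])
    return D
```

The resulting matrix D is the cost matrix C used by the warm-up and by all the compared algorithms.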
The results obtained by the compared algorithms in the simulation of the warehouse are reported in Table 5 and can be intuitively visualized by looking at Figure 5, Figure 6 and Figure 7, respectively in terms of (i) cost of the best solution found, (ii) solutions explored before finding the best, and (iii) computational times. The results broadly confirm what was already seen in the previous experiments. On average, the proposed ACOWU is still the best in terms of cost, even if it sometimes cannot provide a better solution than the classic ACO; the same can be said for the other algorithms using a pheromone initialization strategy. The Dai algorithm is still in second position and proved to be a very good alternative. Concerning the solutions explored, and therefore the computational time, the Bellaachia algorithm is the best (as already seen in the previous experiments). However, the proposed ACOWU is again a good alternative, as clearly visible in Figure 6 and Figure 7. Again, as in the previous experiments on generic TSP instances, the difference in terms of solutions explored and computational time is not that big. However, the utilization of a pheromone initialization technique, as already proved in the literature, guarantees some advantages over the classic ACO.

5. Conclusions

In this paper, a warm-up procedure for the ACO was proposed and validated. During the warm-up, the pheromone matrix of the ACO is initialized to provide an efficient new starting point for the algorithm, so that it can obtain the same (or better) results with fewer iterations. The warm-up is based exclusively on the graph made by the nodes and the edges that formalize the problem. This graph, in most applications, is given and does not need to be recalculated every time before executing the algorithm. Because of this, the warm-up procedure can be carried out only once, when setting the hyper-parameters of the algorithm, to speed it up every time it is used from then on. Firstly, a parameter tuning was made to find the optimal setting for the warm-up. Then, two sets of experiments were carried out to validate the proposed approach. The first set of experiments was done using some generic TSP instances; then, to validate the algorithm in a more realistic context, a second set of experiments in a warehouse for picking was made. The ant colony with warm-up was compared with a classic ACO (without warm-up) and with two ACOs using a pheromone initialization technique. The results obtained are promising, and the warm-up approach is generic enough to find application in almost all the contexts where the ACO can be applied. Of course, the impact and the efficiency of the warm-up might change from one application to the other, but the preliminary results shown in this paper prove that it is worth studying, paving the way for many possible extensions.

Funding

This research received no external funding.

Data Availability Statement

The code of the algorithm was made open-source on 1 September 2021 at https://github.com/mattianeroni/AntColony.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Dorigo, M.; Maniezzo, V.; Colorni, A. Ant system: Optimization by a colony of cooperating agents. IEEE Trans. Syst. Man Cybern. 1996, 26, 29–41. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Yang, J.; Shi, X.; Marchese, M.; Liang, Y. An ant colony optimization method for generalized TSP problem. Prog. Nat. Sci. 2008, 18, 1417–1422. [Google Scholar] [CrossRef]
  3. Cheng, C.B.; Mao, C.P. A modified ant colony system for solving the traveling salesman problem with time windows. Math. Comput. Model. 2007, 46, 1225–1235. [Google Scholar] [CrossRef]
  4. Dorigo, M.; Blum, C. Ant colony optimization theory: A survey. Theor. Comput. Sci. 2005, 344, 243–278. [Google Scholar] [CrossRef]
  5. Deneubourg, J.L.; Aron, S.; Goss, S.; Pasteels, J.M. The self-organizing exploratory pattern of the argentine ant. J. Insect Behav. 1990, 3, 159–168. [Google Scholar] [CrossRef] [Green Version]
  6. Costa, D.; Hertz, A. Ants can colour graphs. J. Oper. Res. Soc. 1997, 48, 295–305. [Google Scholar] [CrossRef]
  7. Guan, B.; Zhao, Y.; Li, Y. An improved ant colony optimization with an automatic updating mechanism for constraint satisfaction problems. Expert Syst. Appl. 2021, 164, 114021. [Google Scholar] [CrossRef]
  8. Socha, K.; Sampels, M.; Manfrin, M. Ant algorithms for the university course timetabling problem with regard to the state-of-the-art. Work. Appl. Evol. Comput. 2003, 2611, 334–345. [Google Scholar]
  9. Bertolini, M.; Melloni, R.; Neroni, M. Order Picking: A Comparison of Heuristic and Metaheuristic Approaches. 25th Summer School Francesco Turco. Available online: https://drive.google.com/file/d/1SbF1pwCfHJKUdq4ohGfiE2azIvvxF8OO/view (accessed on 11 October 2021).
  10. De Santis, R.; Montanari, R.; Vignali, G.; Bottani, E. An adapted ant colony optimization algorithm for the minimization of the travel distance of pickers in manual warehouses. Eur. J. Oper. Res. 2018, 267, 120–137. [Google Scholar] [CrossRef]
  11. Gambardella, L.M.; Taillard, É.; Agazzi, G. Macs-vrptw: A multiple colony system for vehicle routing problems with time windows. In New Ideas in Optimization; McGraw-Hill Ltd.: Maidenhead, UK, 1999. [Google Scholar]
  12. Bell, J.E.; McMullen, P.R. Ant colony optimization techniques for the vehicle routing problem. Adv. Eng. Inform. 2004, 18, 41–48. [Google Scholar] [CrossRef]
  13. Meuleau, N.; Dorigo, M. Ant colony optimization and stochastic gradient descent. Artif. Life 2002, 8, 103–121. [Google Scholar] [CrossRef] [PubMed]
  14. Luo, S.; Wang, C.; Wang, J. Ant colony optimization for resource-constrained project scheduling with generalized precedence relations. In Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence, Sacramento, CA, USA, 5 November 2003; pp. 284–289. [Google Scholar]
  15. Merkle, D.; Middendorf, M.; Schmeck, H. Ant colony optimization for resource-constrained project scheduling. IEEE Trans. Evol. Comput. 2002, 6, 333–346. [Google Scholar] [CrossRef] [Green Version]
  16. Rajendran, C.; Ziegler, H. Ant-colony algorithms for permutation flowshop scheduling to minimize makespan/total flowtime of jobs. Eur. J. Oper. Res. 2004, 155, 426–438. [Google Scholar] [CrossRef]
  17. Gambardella, L.M.; Dorigo, M. An ant colony system hybridized with a new local search for the sequential ordering problem. INFORMS J. Comput. 2000, 12, 237–255. [Google Scholar] [CrossRef] [Green Version]
  18. Blum, C. Beam-ACO—Hybridizing ant colony optimization with beam search: An application to open shop scheduling. Comput. Oper. Res. 2005, 32, 1565–1591. [Google Scholar] [CrossRef]
  19. Singh, H.; Kaur, P. ACO with Heuristic Desirability for Web Page Positioning Problem. In Metaheuristic and Evolutionary Computation: Algorithms and Applications; Springer: Singapore, 2021; pp. 463–475. [Google Scholar]
  20. Dorigo, M.; Gambardella, L.M. Ant colony system: A cooperative learning approach to the traveling salesman problem. IEEE Trans. Evol. Comput. 1997, 1, 53–66. [Google Scholar] [CrossRef] [Green Version]
  21. Stützle, T.; Hoos, H.H. MAX–MIN ant system. Future Gener. Comput. Syst. 2000, 16, 889–914. [Google Scholar] [CrossRef]
  22. Valdez, F. Swarm Intelligence: A Review of Optimization Algorithms Based on Animal Behavior. Recent Adv. Hybrid Intell. Syst. Based Soft Comput. 2020, 915, 273–298. [Google Scholar]
  23. Afshar, A.; Massoumi, F.; Afshar, A.; Mariño, M.A. State of the art review of ant colony optimization applications in water resource management. Water Resour. Manag. 2015, 29, 3891–3904. [Google Scholar] [CrossRef]
  24. Zhou, Y. Runtime analysis of an ant colony optimization algorithm for TSP instances. IEEE Trans. Evol. Comput. 2009, 13, 1083–1092. [Google Scholar] [CrossRef]
  25. Bellaachia, A.; Alathel, D. A local pheromone initialization approach for ant colony optimization algorithm. In Proceedings of the IEEE International Conference on Progress in Informatics and Computing, Shanghai, China, 16–18 May 2014; pp. 133–138. [Google Scholar]
  26. Dai, Q.; Ji, J.; Liu, C. An effective initialization strategy of pheromone for ant colony optimization. In Proceedings of the Fourth International on Conference on Bio-Inspired Computing, Beijing, China, 16–19 October 2009; pp. 1–4. [Google Scholar]
  27. Mavrovouniotis, M.; Yang, S. Adapting the pheromone evaporation rate in dynamic routing problems. In European Conference on the Applications of Evolutionary Computation; Springer: Berlin/Heidelberg, Germany, 2013; pp. 606–615. [Google Scholar]
  28. Shuang, B.; Chen, J.; Li, Z. Study on hybrid PS-ACO algorithm. Appl. Intell. 2011, 34, 64–73. [Google Scholar] [CrossRef]
Figure 1. Comparison in terms of cost on generic TSP instances.
Figure 2. Comparison of iterations on generic TSP instances.
Figure 3. Comparison of solutions explored on generic TSP instances.
Figure 4. Graph of warehouse used for tests.
Figure 5. Results obtained in warehouse in terms of cost.
Figure 6. Results obtained in warehouse in terms of solutions explored.
Figure 7. Results obtained in warehouse in terms of computational time.
Table 1. Parameters’ tuning.

| M   | ρ_wu | Problem 1 (# Nodes: 20) Avg. | St.Dev. | Problem 2 (# Nodes: 30) Avg. | St.Dev. | Problem 3 (# Nodes: 40) Avg. | St.Dev. |
|-----|------|------------------------------|---------|------------------------------|---------|------------------------------|---------|
| 200 | 0.5  | 58,526 | 3050 | 61,487 | 5541 | 68,708 | 6035 |
| 200 | 0.9  | 58,765 | 3257 | 65,716 | 6749 | 74,444 | 9170 |
| 200 | 1.0  | 46,455 | 720  | 49,971 | 0    | 55,726 | 948  |
| 400 | 0.5  | 52,711 | 2255 | 65,110 | 5605 | 71,279 | 5114 |
| 400 | 0.9  | 56,299 | 6108 | 67,829 | 4789 | 73,006 | 2915 |
| 400 | 1.0  | 46,017 | 355  | 49,971 | 0    | 55,609 | 1069 |
| 600 | 0.5  | 56,691 | 4502 | 64,197 | 3450 | 73,063 | 5743 |
| 600 | 0.9  | 72,054 | 2056 | 65,191 | 7032 | 81,485 | 4261 |
| 600 | 1.0  | 46,434 | 166  | 50,020 | 306  | 55,800 | 491  |
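As an illustration of the two parameters tuned in Table 1, the number of warm-up iterations M and the warm-up evaporation rate ρ_wu, the graph-only pheromone initialization described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the update rule, the function name `warmup_pheromone`, and the reinforcement term are assumptions.

```python
import numpy as np

def warmup_pheromone(dist, M=400, rho_wu=1.0, tau0=1.0):
    """Hypothetical sketch of a graph-only pheromone warm-up.

    At each of the M warm-up iterations the pheromone evaporates by
    rho_wu and every edge is reinforced in inverse proportion to its
    length, so that shorter edges enter the ACO with more pheromone.
    Only the distance matrix (the graph) is used, so the result can be
    computed once and reused for every later run of the algorithm.
    """
    n = len(dist)
    tau = np.full((n, n), tau0, dtype=float)   # uniform initial pheromone
    np.fill_diagonal(tau, 0.0)                 # no self-loops
    with np.errstate(divide="ignore"):
        heuristic = np.where(dist > 0, 1.0 / dist, 0.0)  # edge desirability
    for _ in range(M):
        tau = (1.0 - rho_wu) * tau + heuristic  # evaporate, then reinforce
    return tau
```

With ρ_wu = 1.0 the previous pheromone fully evaporates each iteration, so the matrix converges immediately to the heuristic term, which is consistent with ρ_wu = 1.0 giving the lowest costs in Table 1.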
Table 2. Results obtained on generic TSP instances in terms of cost.

| G | N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|---|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 0 | 20 | 4453 | 162  | 4471 | 159 | 4496 | 130 | 5154   | 463 |
| 0 | 30 | 6079 | 513  | 5539 | 118 | 5497 | 230 | 5994   | 134 |
| 0 | 40 | 6915 | 422  | 6468 | 171 | 6782 | 387 | 7421   | 444 |
| 0 | 50 | 7487 | 199  | 7563 | 382 | 7565 | 518 | 8472   | 380 |
| 0 | 60 | 9250 | 392  | 8236 | 276 | 8106 | 166 | 9080   | 884 |
| 1 | 20 | 4276 | 115  | 4062 | 121 | 4146 | 89  | 4993   | 382 |
| 1 | 30 | 5663 | 298  | 5261 | 148 | 5627 | 420 | 6162   | 475 |
| 1 | 40 | 6849 | 589  | 6061 | 373 | 6301 | 226 | 7502   | 794 |
| 1 | 50 | 7550 | 637  | 6940 | 216 | 7414 | 404 | 8152   | 178 |
| 1 | 60 | 9415 | 1278 | 7861 | 408 | 8293 | 828 | 9572   | 479 |
| 2 | 20 | 4693 | 389  | 4494 | 47  | 4503 | 148 | 4964   | 136 |
| 2 | 30 | 5731 | 243  | 5500 | 210 | 5687 | 252 | 6484   | 254 |
| 2 | 40 | 6940 | 594  | 6233 | 269 | 6698 | 408 | 7639   | 585 |
| 2 | 50 | 8853 | 102  | 9787 | 395 | 9076 | 552 | 9101   | 492 |
| 2 | 60 | 9462 | 348  | 8612 | 271 | 9266 | 801 | 10,457 | 677 |
| 3 | 20 | 3906 | 157  | 3808 | 68  | 3874 | 42  | 4512   | 340 |
| 3 | 30 | 5529 | 241  | 5243 | 141 | 5514 | 185 | 6359   | 459 |
| 3 | 40 | 7097 | 499  | 6562 | 140 | 6955 | 413 | 7914   | 503 |
| 3 | 50 | 7686 | 569  | 7049 | 217 | 7798 | 458 | 8308   | 770 |
| 3 | 60 | 9041 | 470  | 8028 | 150 | 8261 | 628 | 9309   | 585 |
| 4 | 20 | 4729 | 261  | 4407 | 167 | 4510 | 165 | 5206   | 185 |
| 4 | 30 | 5696 | 541  | 5344 | 270 | 5857 | 312 | 6271   | 229 |
| 4 | 40 | 7324 | 404  | 6586 | 266 | 7342 | 344 | 7710   | 331 |
| 4 | 50 | 8092 | 757  | 7400 | 139 | 8113 | 160 | 9145   | 774 |
| 4 | 60 | 9090 | 779  | 8234 | 296 | 8730 | 49  | 9355   | 494 |
Table 3. Results obtained on generic TSP instances in terms of solutions explored before finding best one.

| G | N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|---|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 0 | 20 | 1080 | 944  | 1632 | 627 | 1063 | 737  | 804  | 938  |
| 0 | 30 | 638  | 1143 | 862  | 881 | 700  | 880  | 1157 | 684  |
| 0 | 40 | 992  | 887  | 1497 | 480 | 654  | 491  | 707  | 307  |
| 0 | 50 | 2005 | 549  | 1261 | 648 | 407  | 460  | 977  | 532  |
| 0 | 60 | 1064 | 691  | 647  | 360 | 1666 | 299  | 1078 | 1002 |
| 1 | 20 | 310  | 154  | 919  | 401 | 1100 | 490  | 381  | 373  |
| 1 | 30 | 713  | 428  | 935  | 780 | 728  | 747  | 1092 | 251  |
| 1 | 40 | 672  | 852  | 812  | 666 | 809  | 916  | 1046 | 958  |
| 1 | 50 | 2050 | 1258 | 1112 | 889 | 1581 | 593  | 840  | 497  |
| 1 | 60 | 1186 | 910  | 1081 | 969 | 1274 | 836  | 896  | 687  |
| 2 | 20 | 972  | 715  | 837  | 607 | 883  | 195  | 609  | 387  |
| 2 | 30 | 1598 | 982  | 1083 | 355 | 1321 | 956  | 612  | 370  |
| 2 | 40 | 598  | 277  | 1344 | 896 | 1293 | 794  | 1406 | 728  |
| 2 | 50 | 1072 | 660  | 1262 | 699 | 644  | 549  | 1192 | 953  |
| 2 | 60 | 1542 | 993  | 1613 | 934 | 1275 | 532  | 1394 | 610  |
| 3 | 20 | 1089 | 687  | 1101 | 686 | 987  | 640  | 648  | 447  |
| 3 | 30 | 1509 | 983  | 969  | 596 | 1672 | 881  | 846  | 533  |
| 3 | 40 | 982  | 1044 | 1202 | 977 | 981  | 682  | 973  | 798  |
| 3 | 50 | 1860 | 1034 | 727  | 947 | 866  | 1063 | 885  | 806  |
| 3 | 60 | 570  | 243  | 819  | 369 | 1836 | 900  | 766  | 376  |
| 4 | 20 | 739  | 880  | 1348 | 917 | 1254 | 1067 | 586  | 227  |
| 4 | 30 | 1065 | 607  | 1341 | 912 | 621  | 521  | 907  | 813  |
| 4 | 40 | 1391 | 770  | 458  | 635 | 1575 | 1016 | 911  | 663  |
| 4 | 50 | 1918 | 984  | 868  | 812 | 981  | 634  | 584  | 318  |
| 4 | 60 | 1197 | 1110 | 566  | 357 | 815  | 956  | 1178 | 1168 |
Table 4. Results obtained on generic TSP instances in terms of computational time.

| G | N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|---|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 0 | 20 | 0.639 | 0.302 | 0.794 | 0.074 | 0.617 | 0.211 | 0.504 | 0.264 |
| 0 | 30 | 1.034 | 0.69  | 1.19  | 0.519 | 1.168 | 0.572 | 1.339 | 0.335 |
| 0 | 40 | 2.018 | 0.739 | 2.401 | 0.379 | 1.764 | 0.491 | 2.001 | 0.201 |
| 0 | 50 | 4.022 | 0.475 | 3.922 | 0.785 | 2.298 | 0.702 | 3.66  | 1.049 |
| 0 | 60 | 5.469 | 1.931 | 4.065 | 0.876 | 6.816 | 0.863 | 4.515 | 2.083 |
| 1 | 20 | 0.349 | 0.043 | 0.504 | 0.107 | 0.553 | 0.128 | 0.355 | 0.098 |
| 1 | 30 | 1.008 | 0.242 | 1.097 | 0.371 | 0.978 | 0.424 | 1.124 | 0.142 |
| 1 | 40 | 1.568 | 0.752 | 1.876 | 0.563 | 1.799 | 0.81  | 2.032 | 0.727 |
| 1 | 50 | 3.788 | 1.248 | 3.053 | 0.991 | 3.9   | 0.884 | 2.815 | 0.708 |
| 1 | 60 | 4.765 | 1.808 | 4.902 | 2.114 | 5.695 | 2.089 | 4.717 | 1.688 |
| 2 | 20 | 0.63  | 0.25  | 0.566 | 0.189 | 0.572 | 0.077 | 0.513 | 0.123 |
| 2 | 30 | 1.669 | 0.559 | 1.386 | 0.242 | 1.514 | 0.57  | 1.051 | 0.224 |
| 2 | 40 | 1.869 | 0.355 | 2.556 | 0.795 | 2.556 | 0.486 | 2.675 | 0.717 |
| 2 | 50 | 3.833 | 1.192 | 4.048 | 1.422 | 3.054 | 1.078 | 3.7   | 1.295 |
| 2 | 60 | 5.965 | 2.003 | 6.11  | 1.496 | 5.815 | 1.269 | 5.97  | 1.593 |
| 3 | 20 | 0.692 | 0.226 | 0.718 | 0.229 | 0.686 | 0.22  | 0.538 | 0.15  |
| 3 | 30 | 1.578 | 0.465 | 1.44  | 0.429 | 1.598 | 0.356 | 1.236 | 0.354 |
| 3 | 40 | 2.011 | 0.863 | 2.37  | 0.569 | 2.229 | 0.669 | 2.142 | 1.046 |
| 3 | 50 | 4.535 | 1.258 | 2.936 | 1.394 | 3.235 | 1.708 | 3.203 | 1.504 |
| 3 | 60 | 4.087 | 0.6   | 4.716 | 0.979 | 6.637 | 1.81  | 4.121 | 0.966 |
| 4 | 20 | 0.459 | 0.211 | 0.669 | 0.27  | 0.573 | 0.227 | 0.419 | 0.059 |
| 4 | 30 | 1.182 | 0.35  | 1.295 | 0.453 | 0.923 | 0.301 | 1.05  | 0.434 |
| 4 | 40 | 2.334 | 0.736 | 1.44  | 0.626 | 2.394 | 0.865 | 1.828 | 0.64  |
| 4 | 50 | 3.795 | 0.964 | 2.826 | 1.222 | 2.988 | 0.933 | 2.312 | 0.467 |
| 4 | 60 | 4.375 | 1.93  | 3.347 | 0.77  | 3.724 | 1.741 | 4.196 | 1.889 |
Table 5. Results obtained in warehouse.

Cost of the Best Solution Found

| N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 20 | 543  | 13  | 552  | 15 | 537 | 9  | 714  | 37 |
| 30 | 604  | 14  | 626  | 15 | 616 | 18 | 864  | 34 |
| 40 | 804  | 30  | 781  | 18 | 778 | 18 | 1070 | 44 |
| 50 | 806  | 30  | 755  | 8  | 792 | 43 | 1138 | 69 |
| 60 | 1026 | 144 | 1001 | 56 | 995 | 46 | 1420 | 74 |

Solutions Explored before Finding the Best

| N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 20 | 1157 | 738  | 707  | 520 | 610  | 555 | 335  | 246 |
| 30 | 1111 | 542  | 860  | 665 | 1091 | 293 | 921  | 477 |
| 40 | 1266 | 590  | 1041 | 343 | 1660 | 846 | 1230 | 280 |
| 50 | 1009 | 771  | 1721 | 781 | 1656 | 861 | 1096 | 466 |
| 60 | 1273 | 1036 | 1925 | 401 | 1460 | 950 | 544  | 373 |

Computational Times

| N  | ACO Avg. | St.Dev. | ACOWU Avg. | St.Dev. | Dai Avg. | St.Dev. | Bellaachia Avg. | St.Dev. |
|----|----------|---------|------------|---------|----------|---------|-----------------|---------|
| 20 | 0.473 | 0.162 | 0.366 | 0.111 | 0.365 | 0.121 | 0.289 | 0.055 |
| 30 | 0.982 | 0.249 | 0.872 | 0.309 | 0.948 | 0.131 | 0.876 | 0.218 |
| 40 | 1.824 | 0.43  | 1.617 | 0.266 | 1.961 | 0.494 | 1.722 | 0.217 |
| 50 | 2.334 | 0.838 | 3.103 | 0.748 | 3.022 | 0.716 | 2.494 | 0.553 |
| 60 | 3.821 | 1.626 | 4.847 | 0.425 | 3.983 | 1.274 | 2.672 | 0.653 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Neroni, M. Ant Colony Optimization with Warm-Up. Algorithms 2021, 14, 295. https://0-doi-org.brum.beds.ac.uk/10.3390/a14100295

