Article

Finding the Best 3-OPT Move in Subcubic Time

1
DMIF, Via delle Scienze 206, University of Udine, 33100 Udine, Italy
2
DEI, Via Gradenigo 6/A, University of Padova, 35131 Padova, Italy
*
Author to whom correspondence should be addressed.
Submission received: 21 October 2020 / Revised: 18 November 2020 / Accepted: 20 November 2020 / Published: 23 November 2020
(This article belongs to the Special Issue 2020 Selected Papers from Algorithms Editorial Board Members)

Abstract: Given a Traveling Salesman Problem solution, the best 3-OPT move requires us to remove three edges and replace them with three new ones so as to shorten the tour as much as possible. No worst-case algorithm better than the Θ(n^3) enumeration of all triples is likely to exist for this problem, but algorithms with average case O(n^{3−ε}) are not ruled out. In this paper we describe a strategy for 3-OPT optimization which can find the best move by looking only at a fraction of all possible moves. We also extend our approach to some other types of cubic moves, such as some special 6-OPT and 5-OPT moves. Empirical evidence shows that our algorithm runs in average subcubic time (upper bounded by O(n^{2.5})) on a wide class of random graphs as well as Traveling Salesman Problem Library (TSPLIB) instances.

1. Introduction

The Traveling Salesman Problem (TSP), in all likelihood the most famous combinatorial optimization problem, calls for finding the shortest Hamiltonian cycle in a complete graph G = (V, E) of n nodes, weighted on the arcs. In this paper we consider the symmetric TSP, i.e., the graph is undirected and the distance between two nodes is the same irrespective of the direction in which we traverse an edge. Let us denote by c(i, j) = c(j, i) the distance between any two nodes i and j. We call each solution of the problem a tour. A tour is identified by a permutation of vertices (v_1, …, v_n). We call {v_i, v_{i+1}}, for i = 1, …, n − 1, and {v_n, v_1} the edges of the tour. The length of a tour T, denoted by c(T), is the sum of the lengths of the edges of the tour. More generally, for any set F of edges, we denote by c(F) the value Σ_{e ∈ F} c(e).
A large number of applications over the years have shown that local search is often a very effective way to tackle hard combinatorial optimization problems, including the TSP. The local search paradigm applies to a generic optimization problem in which we seek to minimize an objective function f(x) over a set X. Given a map N : X → 2^X which associates to every solution x ∈ X a set N(x) called its neighborhood, the basic idea is the following: start at any solution x_0, set s := x_0, and look for a solution x_1 ∈ N(s) better than s. If we find one, we replace s with x_1 and iterate the same search. We continue this way until we get to a solution s such that f(s) = min{f(x) | x ∈ N(s)}. In this case, we say that s is a local optimum. Replacing x_i with x_{i+1} is called performing a move of the search, and N(s) is the set of all solutions reachable with a move from s. The total number of moves performed to get from x_0 to the final local optimum is called the length of the convergence. If x is a solution reachable with a move from s and f(x) < f(s), we say that the move is an improving move and x is an improving solution. When searching in the neighborhood of x_i we can adopt two main strategies, namely first-improvement and best-improvement (also called steepest descent). In the first-improvement strategy, we set x_{i+1} to be the first solution that we find in N(x_i) such that f(x_{i+1}) < f(x_i). In best-improvement, we set x_{i+1} to be such that f(x_{i+1}) = min{f(x) | x ∈ N(x_i) ∧ f(x) < f(x_i)}. For small-size neighborhoods such as the one considered in this paper, most local search procedures adopt the best-improvement strategy, since it causes the largest decrease in the objective function value at any given iteration of the search. In this paper we will focus precisely on the objective of finding the best possible move for a popular TSP neighborhood, i.e., the 3-OPT.
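To make the best-improvement strategy concrete, here is a minimal, self-contained sketch (not from the paper; `neighborhood` and `f` are generic placeholders for the map N and the objective):

```python
def best_improvement_local_search(x0, neighborhood, f):
    """Descend from x0, always moving to the best neighbor,
    until no neighbor improves f (a local optimum)."""
    s = x0
    while True:
        best = min(neighborhood(s), key=f)
        if f(best) >= f(s):      # no improving neighbor: s is a local optimum
            return s
        s = best                 # perform the best-improvement move

# toy usage: minimize f(x) = x^2 over the integers with moves x -> x +/- 1
print(best_improvement_local_search(5, lambda x: [x - 1, x + 1], lambda x: x * x))  # prints 0
```

The length of the convergence here is the number of times the `s = best` line executes.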
The idea of basic local search has been around for a very long time and it is difficult to attribute it to any particular author. Examples of its use are reported in standard textbooks on combinatorial optimization, such as [1], or devoted surveys, such as [2]. In time, many variants have been proposed to make local search more effective by preventing it from getting stuck in local optima. Namely, sometimes a non-improving move must be performed to keep the search going. Examples of these techniques are tabu search [3] and simulated annealing [4]. Our results apply to basic local search but can be readily adapted for use in more sophisticated procedures such as tabu search. This, however, is beyond the scope of this paper.

1.1. The K-OPT Neighborhood

Let K ∈ ℕ be a constant. A K-OPT move on a tour T consists of first removing a set R of K edges and then inserting a set I of K edges so that (T ∖ R) ∪ I is still a tour (possibly, R ∩ I ≠ ∅; this implies that the (K − 1)-OPT moves are a subset of the K-OPT moves). A K-OPT move is improving if c((T ∖ R) ∪ I) < c(T), i.e., c(I) < c(R). An improving move is best improving if c(R) − c(I) is maximum over all possible choices of R, I.
The standard local search approach for the TSP based on the K-OPT neighborhood starts from any tour T_0 (usually a random permutation of the vertices) and then proceeds along a sequence of tours T_1, T_2, …, T_N, where each tour T_j is obtained by applying an improving K-OPT move to T_{j−1}. The final tour T_N is such that there are no improving K-OPT moves for it. The hope is that T_N is a good tour (optimistically, a global optimum), but its quality depends on many factors. One of them is the size of the neighborhood, the rationale being that with a larger-size neighborhood we sample a larger number of potential solutions, and hence increase the probability of ending up at a really good one. Clearly, there is a trade-off between the size of a neighborhood and the time required to explore it, so that most times people resort to small neighborhoods since they are very fast to explore (for a discussion of how to deal with some very large, i.e., exponential, neighborhoods for various combinatorial optimization problems, see [5]).
The exploration of the K-OPT neighborhood, for a fixed K, might be considered “fast” from a theoretical point of view, since there is an obvious polynomial algorithm (complete enumeration). However, in practice, complete enumeration makes the use of K-OPT impossible already for K = 3 (if n is large enough, say 3000 or more). There are Θ(n^K) choices of K edges to remove, which implies that 3-OPT can be explored in time O(n^3) by listing all possibilities. For a tour over n = 6000 nodes, the time required to try all 3-OPT moves, on a reasonably fast desktop computer, is more than one hour, let alone the time needed to converge to a local optimum. For this reason, 3-OPT has never been really adopted for the heuristic solution of the TSP.
The first use of K-OPT dates back to 1958, with the introduction of 2-OPT for the solution of the TSP in [6]. In 1965 Lin [7] described the 3-OPT neighborhood and experimented with the Θ(n^3) algorithm, to which he also added a heuristic step fixing some edges of the solution (at the risk of being wrong) with the goal of decreasing the size of the instance. Still, the instances which could be tackled at the time were fairly small (n ≤ 150). Later, in 1968, Steiglitz and Weiner [8] described an improvement over Lin’s method which made it 2 or 3 times faster, but still cubic in nature.
It was soon realized that the case K ≥ 3 was impractical, and later local search heuristics deviated from the paradigm of complete exploration of the neighborhood, replacing it with some clever ways to heuristically move from tour to tour. The best such heuristic is probably Lin and Kernighan’s procedure [9], which applies a sequence of K-OPT moves for different values of K. Notice that our objective, and our approach, is fundamentally different from Lin and Kernighan’s in that we always look for the best K-OPT move, and we keep K constantly equal to 3. On the other hand, Lin and Kernighan, besides varying K, apply a heuristic search for the best K-OPT move, which is fast but cannot guarantee to find the best move and, indeed, is not even guaranteed to find an improving move when some exist. For a very good chapter comparing various heuristics for the TSP, including the 3-OPT neighborhood, see Johnson and McGeoch [10].
An important recent result in [11] proves that, under a widely believed hypothesis similar to the P ≠ NP conjecture, it is impossible to find the best 3-OPT move with a worst-case algorithm of time O(n^{3−ε}) for any ε > 0, so that complete enumeration is, in a sense, optimal. However, this gives us little consolation when we are faced with the problem of applying 3-OPT to a large TSP instance. In fact, for complete enumeration the average case and the worst case coincide, and one might wonder if there exists a better practical algorithm, much faster than complete enumeration on the majority of instances but still O(n^3) in the worst case. The algorithm described in this paper (of which a preliminary version appeared in [12]) is such an example. A similar goal, but for problems other than the TSP, is pursued in [13], which focuses on alternatives to brute-force exploration of polynomial-size neighborhoods from the point of view of parameterized complexity.

1.2. The Goal of This Paper

The TSP is today very effectively solved, even to optimality, by using sophisticated mathematical programming-based approaches, such as Concorde [14]. No matter how ingenious, heuristics can hardly be competitive with these approaches when the latter are given enough running time. It is clear that simple heuristics, such as local search, are even less effective.
However, simple heuristics have a great quality which lies in their very name: since they are simple (in all respects: to understand, to implement and to maintain), they are appealing, especially to the world of practical, real-life application solvers, i.e., the industrial world, where the vast majority of in-house built procedures are of a heuristic nature. Besides its simplicity, local search usually has another great appeal: it is in general very fast, so that it can overcome its simplemindedness by being able to sample a huge number of good solutions (the local optima) in a relatively short time. Of course, if it is too slow, one loses all interest in using a local search approach despite its simplicity.
This is exactly the situation for 3-OPT local search. It is simple, and might be effective, but we cannot use it in practice for mid-to-large-sized instances because it is too slow. The goal of our work has been to show that, with a clever implementation of the search for improving moves, it can actually be used, since it becomes much faster (even 1000× on the instances we tried) with respect to its standard implementation.
We remark that our intention in this paper is not to prove that steepest-descent 3-OPT is indeed an effective heuristic for the TSP (this would require a separate work with a lot of experiments and can be the subject of future research). Our goal is to show anyone who is interested in using such a neighborhood how to implement its exploration so as to achieve a huge reduction in the running times. While a full discussion of the computational results can be found in Section 5, here is an example of the type of results that we will achieve with our approach: on a graph of 1000 nodes, we can sample about 100 local optima in the same time that the enumerative approach would take to reach only one local optimum.

2. Selections, Schemes and Moves

Let G = (V, E) be a complete graph on n nodes, and let c : E → ℝ+ be a cost function for the edges. Without loss of generality, we assume V = {0, 1, …, n̄}, where n̄ = n − 1. In this paper, we will describe an effective strategy for finding either the best improving or any improving move for a given current tour (v_1, …, v_n). Without loss of generality, we will always assume that the current tour T is the tour
(0, 1, …, n̄).
We will be using modular arithmetic frequently. For convenience, for each x ∈ V and t ∈ ℕ we define
x ⊕ t := (x + t) mod n,  x ⊖ t := (x − t) mod n.
When moving from x to x ⊕ 1, x ⊕ 2, etc., we say that we are moving clockwise, or forward. In going from x to x ⊖ 1, x ⊖ 2, …, we say that we are moving counter-clockwise, or backward.
We define the forward distance d+(x, y) from node x to node y as the t ∈ {0, …, n − 1} such that x ⊕ t = y. Similarly, we define the backward distance d−(x, y) from x to y as the t ∈ {0, …, n − 1} such that x ⊖ t = y. Finally, the distance between any two nodes x and y is defined by
d(x, y) := min{d+(x, y), d−(x, y)}.
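These definitions transcribe directly into code (a sketch; ⊕ and ⊖ are written as `fwd`/`bwd`, and `n` is a hypothetical tour size chosen for illustration):

```python
n = 10  # hypothetical number of nodes, for illustration

def fwd(x, t):               # x (+) t := (x + t) mod n
    return (x + t) % n

def bwd(x, t):               # x (-) t := (x - t) mod n
    return (x - t) % n

def d_plus(x, y):            # forward distance: the t with fwd(x, t) == y
    return (y - x) % n

def d_minus(x, y):           # backward distance: the t with bwd(x, t) == y
    return (x - y) % n

def dist(x, y):              # d(x, y) := min{d+(x, y), d-(x, y)}
    return min(d_plus(x, y), d_minus(x, y))
```

For example, with n = 10 we get d+(8, 1) = 3 and d−(1, 8) = 3, so dist(8, 1) = 3.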
A 3-OPT move is fully specified by two sets, i.e., the set of removed and the set of inserted edges. We call a removal set any set of three tour edges, i.e., three edges of type {i, i ⊕ 1}. A removal set is identified by a triple S = (i_1, i_2, i_3) with 0 ≤ i_1 < i_2 < i_3 ≤ n̄, where the edges removed are R(S) := {{i_j, i_j ⊕ 1} : j = 1, 2, 3}. We call any such triple S a selection. A selection is complete if d(i_j, i_h) ≥ 2 for each j ≠ h; otherwise, we say that S is a partial selection. We denote the set of all complete selections by S.
Complete selections should be treated differently from partial selections, since the number of choices to make to determine a partial selection is lower than 3. For instance, the number of selections in which i_2 = i_1 ⊕ 1 is not cubic but quadratic, since it is enough to pick i_1 and i_3 in all possible ways such that d(i_1, i_3) ≥ 2 and then set i_2 = i_1 ⊕ 1. We will address the computation of the number of complete selections for a given K in Section 2.2. Clearly, if we do not impose any special requirements on the selection, then there are n(n − 1)(n − 2)/6 = Θ(n^3) selections.
Let S be a selection and I ⊆ E with |I| = 3. If (T ∖ R(S)) ∪ I is still a tour, then I is called a reinsertion set. Given a selection S, a reinsertion set I is pure if I ∩ R(S) = ∅, and degenerate otherwise. Finding the best 3-OPT move when the reinsertions are constrained to be degenerate is O(n^2) (in fact, 3-OPT degenerates to 2-OPT in this case). Therefore, the most computationally expensive task is to determine the best move when the selection is complete and the reinsertion is pure. We refer to this kind of moves as true 3-OPT moves. Thus, in the remainder of the paper, we will focus on true 3-OPT moves.

2.1. Reinsertion Schemes and Moves

Let S = (i_1, i_2, i_3) be a complete selection. When the edges R(S) are removed from a tour, the tour gets broken into three consecutive segments, which we label {1, 2, 3} (segment j ends at node i_j). Since the selection is complete, each segment is indeed a path of at least one edge. A reinsertion set patches the segments back into a new tour. If we adopt the convention of always starting a tour with segment 1 traversed clockwise, the reinsertion set: (i) determines a new ordering in which the segments are visited along the tour, and (ii) may cause some segments to be traversed counterclockwise. In order to represent this fact, we use a notation called a reinsertion scheme. A reinsertion scheme is a signed permutation of {2, 3}. The permutation specifies the order in which the segments 2, 3 are visited after the move. The signing −s tells that segment s is traversed counterclockwise, while +s tells that it is traversed clockwise. For example, the third reinsertion set depicted in Figure 1 is represented by the reinsertion scheme ⟨+3, −2⟩ since, from the end of segment 1, we jump to the beginning of segment 3 and traverse it forward. We then move to the last element of segment 2 and proceed backward to its first element. Finally, we close the tour by going back to the first element of segment 1.
There are potentially 2^2 × 2! = 8 reinsertion schemes, but for some of these the corresponding reinsertion sets are degenerate. A scheme for a pure reinsertion must not start with “+2”, nor end with “+3”, nor be ⟨−3, −2⟩. This leaves only 4 possible schemes, call them r_1, …, r_4, depicted in Figure 1.
Given a selection S, the application of a reinsertion scheme r univocally determines a reinsertion set I(r, S), and clearly for every reinsertion set I there is an r such that I = I(r, S). Therefore, a 3-OPT move is fully specified by a pair (S, r), where S is a complete selection and r is a reinsertion scheme. Let us denote the set of all true 3-OPT moves by
M = {((i_1, i_2, i_3), r) : (i_1, i_2, i_3) ∈ S, r ∈ {r_1, r_2, r_3, r_4}}.
The enumeration of all moves can be done as follows: (i) we consider, in turn, each reinsertion scheme r = r_1, …, r_4; (ii) given r, we consider all complete selections S = (i_1, i_2, i_3), obtaining the moves (S, r). Since step (ii) is done by complete enumeration, the cost of this procedure is Θ(n^3). In the remainder of the paper we will focus on a method for significantly lowering its complexity.

2.2. The Number of True 3-OPT Moves

For generality, we state here a result which applies to K-OPT for any K ≥ 2.
Theorem 1.
For each K = 2, …, ⌊n/2⌋, the number of complete K-OPT selections is
C(n − K + 1, K) − C(n − K − 1, K − 2),
where C(a, b) denotes the binomial coefficient “a choose b”.
Proof. 
Assume the indices of a selection are 0 ≤ i_1 < i_2 < … < i_K ≤ n̄. Consider the tour and let: x_1 be the number of nodes between node 0 (included) and node i_1 (excluded); x_t, for t = 2, …, K, be the number of nodes between node i_{t−1} (excluded) and node i_t (excluded); x_{K+1} be the number of nodes between node i_K (excluded) and node n̄ (included). Then, there are as many complete selections in a K-OPT move as the nonnegative integer solutions of the equation
x_1 + … + x_{K+1} = n − K,
subject to the constraints that x_1 + x_{K+1} ≥ 1, and x_t ≥ 1 for t = 2, …, K. If we ignore the first constraint and replace x_t by y_t + 1 for t = 2, …, K, we get the equation
x_1 + y_2 + … + y_K + x_{K+1} = n − 2K + 1,
in nonnegative integer variables, which, by basic combinatorics, has C((n − 2K + 1) + K, K) solutions. We then have to remove all solutions in which x_1 + x_{K+1} = 0, i.e., the solutions of the equation y_2 + … + y_K = n − 2K + 1, of which there are C((n − 2K + 1) + K − 2, K − 2). □
Corollary 1.
The number of true 3-OPT moves is
(2n^3 − 18n^2 + 40n)/3,
i.e., it is, asymptotically, (2/3) n^3.
Proof. 
We know from Theorem 1 that there are C(n − 2, 3) − (n − 4) = (n^3 − 9n^2 + 20n)/6 complete selections, and from Section 2.1 that there are 4 reinsertion schemes. By multiplying the two values we get the claim. □
In Table 1 we report the number of true moves for various values of n, giving a striking example of why the exploration of the 3-OPT neighborhood would be impractical unless some effective strategies were adopted.
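The counts of Theorem 1 and Corollary 1 are easy to sanity-check by brute force for small n (a sketch for K = 3; function names are ours):

```python
from itertools import combinations
from math import comb

def complete_selections_bruteforce(n, K=3):
    """Count K-subsets of {0,...,n-1} whose pairwise circular distance is >= 2."""
    def circ(a, b):
        return min((a - b) % n, (b - a) % n)
    return sum(1 for sel in combinations(range(n), K)
               if all(circ(a, b) >= 2 for a, b in combinations(sel, 2)))

def complete_selections_formula(n, K=3):
    # Theorem 1: C(n-K+1, K) - C(n-K-1, K-2)
    return comb(n - K + 1, K) - comb(n - K - 1, K - 2)

def true_moves_formula(n):
    # Corollary 1: 4 reinsertion schemes per complete selection
    return (2 * n**3 - 18 * n**2 + 40 * n) // 3
```

For instance, n = 8 gives 16 complete selections and 64 true 3-OPT moves.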

3. Speeding-Up the Search: The Basic Idea

The goal of our work is to provide an alternative, much faster way of finding the best move than the classical “nested-for” approach over all reinsertion schemes and indices, which is something like
  • for ( r = r_1, r_2, r_3, r_4 )
  •       for ( i_1 = 0; i_1 ≤ n̄ − 4; i_1++ )
  •           for ( i_2 = i_1 + 2; i_2 ≤ n̄ − 2 − P(i_1 = 0); i_2++ )
  •                for ( i_3 = i_2 + 2; i_3 ≤ n̄ − P(i_1 = 0); i_3++ )
  •                     evaluateMove( (i_1, i_2, i_3), r );  [* check if move is improving; possibly update best *]
and takes time Θ(n^3). (The expression P(A), given a predicate A, returns 1 if A is true and 0 otherwise.) As opposed to the brute force approach, we will call our algorithm the smart force procedure.
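The nested loops above can be put in runnable form; in this sketch `evaluateMove` is replaced by collecting the selections, so that the loop bounds can be checked against the count of Theorem 1:

```python
def all_complete_selections(n):
    """Enumerate complete selections with the nested-for bounds of the text."""
    nbar = n - 1
    P = lambda cond: int(cond)              # the indicator P(A)
    sels = []
    for i1 in range(0, nbar - 4 + 1):
        for i2 in range(i1 + 2, nbar - 2 - P(i1 == 0) + 1):
            for i3 in range(i2 + 2, nbar - P(i1 == 0) + 1):
                sels.append((i1, i2, i3))   # evaluateMove((i1, i2, i3), r) would go here
    return sels

# for n = 8 this yields n(n-4)(n-5)/6 = 16 complete selections
```

The correction terms P(i_1 = 0) exclude, for i_1 = 0, the triples in which i_2 ⊕ 1 or i_3 ⊕ 1 wraps around onto node 0.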
Our idea for speeding up the search is based on the following consideration. Suppose there is a magic box that knows all the best moves. We can query the box, which answers in time O(1) by giving us a partial move. A partial move is the best move, but with one of the selection indices deleted (and the partial move specifies which one). For example, we might get the answer ((4, ·, 9), r_2), and then we would know that there is some x such that the selection (4, x, 9), with reinsertion scheme r_2, is an optimal move. How many times would we have to query the box to quickly retrieve an optimal move?
Suppose there are many optimal moves (e.g., the arc {0, 1} costs 1000 and all other arcs cost 1, so that the move ((0, x, y), r) is optimal for every (x, y) and r). Then we could call the box O(n^2) times and get the reply ((·, x, y), r) for all possible x, y, r without ever getting the value of the first index of an optimal selection. However, it is clear that with just one call to the box, it is possible to compute an optimal 3-OPT move in time O(n). In fact, after the box has told us the reinsertion scheme to use and the values of two indices out of three, we can enumerate the values for the missing index (i.e., expand the two indices into a triple) to determine the best possible completion.
The bulk of our work has then been to simulate, heuristically, such a magic box, i.e., a data structure that can be queried and returns a partial move much in the same way as described above. In our heuristic version, the box, rather than returning a partial move which could certainly be completed into a best move, returns a partial move which is likely to be completable into a best move. As we will see, this can already greatly reduce the number of possible candidates for best moves. In order to assess the likelihood with which a partial move could be completed into a best solution, we will use suitable functions described in the next sections.
Since there is no guarantee that a partial move suggested by our box can indeed be completed into an optimal move, we will have to iterate over many suggestions, looking for the best one. If there are O(n) iterations, given that each iteration costs us O(n), we end up with an O(n^2) algorithm. Some expansions will be winning, in that they produce a selection better than any seen so far, while others will be losing, i.e., we enumerate O(n) completions for the partial move but none of them beats the best move seen so far. Clearly, to have an effective algorithm we must keep the number of losing iterations small. To achieve this goal, we provide an O(1) strong necessary condition that a partial move must satisfy to be a candidate for a winning iteration. When no partial move satisfies this condition, the search can be stopped, since the best move found so far is in fact the optimal move.

The Fundamental Quantities τ+ and τ−

In this section, we define two functions from V × V into ℝ which are fundamental for our work. Loosely speaking, these functions will be used to determine, for each pair of indices of a selection, the contribution of that pair to the value of a move. The rationale is that the higher the contribution, the higher the probability that a particular pair is in a best selection.
The two functions are called τ+() and τ−(). For each a, b ∈ {0, …, n̄}, we define τ+(a, b) to be the difference between the cost from a to its successor and the cost from a to the successor of b, and τ−(a, b) to be the difference between the cost from a to its predecessor and the cost from a to the predecessor of b:
τ+(a, b) = c(a, a ⊕ 1) − c(a, b ⊕ 1),  τ−(a, b) = c(a, a ⊖ 1) − c(a, b ⊖ 1).
Clearly, each of these quantities can be computed in time O(1), and computing their values for a subset of possible pairs can never exceed time O(n^2).
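A direct transcription of τ+ and τ− (a sketch; `c` is any symmetric cost function and `n` the number of nodes, and `make_tau` is our own wrapper name):

```python
def make_tau(c, n):
    """Build tau_plus and tau_minus for the tour (0, 1, ..., n-1)."""
    succ = lambda x: (x + 1) % n     # x (+) 1
    pred = lambda x: (x - 1) % n     # x (-) 1

    def tau_plus(a, b):              # c(a, a(+)1) - c(a, b(+)1)
        return c(a, succ(a)) - c(a, succ(b))

    def tau_minus(a, b):             # c(a, a(-)1) - c(a, b(-)1)
        return c(a, pred(a)) - c(a, pred(b))

    return tau_plus, tau_minus
```

For example, with c(i, j) = |i − j| and n = 8, τ+(2, 5) = c(2, 3) − c(2, 6) = 1 − 4 = −3.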

4. Searching the 3-OPT Neighborhood

As discussed in Section 2.1, there are four pure 3-OPT reinsertion schemes (see Figure 1), namely:
r_1 = ⟨+3, +2⟩,  r_2 = ⟨−2, −3⟩,  r_3 = ⟨+3, −2⟩,  r_4 = ⟨−3, +2⟩.
Notice that r_3 and r_4 are symmetric to r_2. Therefore, we can just consider r_1 and r_2, since all we say about r_2 can be applied, mutatis mutandis (i.e., with a suitable renaming of the indices), to r_3 and r_4 as well.
Given a move μ = (S, r) ∈ M, where S = (i_1, i_2, i_3), its value is the difference between the cost of the set R(S) = {{i_1, i_1 ⊕ 1}, {i_2, i_2 ⊕ 1}, {i_3, i_3 ⊕ 1}} of removed edges and the cost of the reinsertion set I(r, S). We will denote the value of the move μ by
Δ((i_1, i_2, i_3), r).
A key observation is that we can break up the function Δ(), which has Θ(n^3) possible arguments, into a sum of functions of two parameters each (each with Θ(n^2) arguments). That is, we will have
Δ((i_1, i_2, i_3), r) = f_r^12(i_1, i_2) + f_r^23(i_2, i_3) + f_r^13(i_1, i_3),   (2)
for suitable functions f_r^12(), f_r^23(), f_r^13(), each representing the contribution of a particular pair of indices to the value of the move. The domains of these functions are subsets of {0, …, n̄} × {0, …, n̄} which limit the valid input pairs to values obtained from two specific elements of a selection. For a, b ∈ {1, 2, 3}, with a < b, let us define
S^ab := {(x, y) : ∃(v_1, v_2, v_3) ∈ S with v_a = x and v_b = y}.
Then the domain of f_r^12 is S^12, the domain of f_r^23 is S^23, and the domain of f_r^13 is S^13.
The functions f_r^12, f_r^23, f_r^13 can be obtained through the functions τ+() and τ−() as follows:
[r_1:]
We have I(r_1) = {{i_1, i_2 ⊕ 1}, {i_2, i_3 ⊕ 1}, {i_1 ⊕ 1, i_3}} (see Figure 1), so that
Δ((i_1, i_2, i_3), r_1) = c(i_1, i_1 ⊕ 1) + c(i_2, i_2 ⊕ 1) + c(i_3, i_3 ⊕ 1) − [c(i_1, i_2 ⊕ 1) + c(i_2, i_3 ⊕ 1) + c(i_1 ⊕ 1, i_3)]
= [c(i_1, i_1 ⊕ 1) − c(i_1, i_2 ⊕ 1)] + [c(i_2, i_2 ⊕ 1) − c(i_2, i_3 ⊕ 1)] + [c(i_3, i_3 ⊕ 1) − c(i_3, i_1 ⊕ 1)]
= τ+(i_1, i_2) + τ+(i_2, i_3) + τ+(i_3, i_1).
The three functions are
f_r1^12 : (x, y) ∈ S^12 ↦ τ+(x, y);
f_r1^23 : (x, y) ∈ S^23 ↦ τ+(x, y);
f_r1^13 : (x, y) ∈ S^13 ↦ τ+(y, x).
[r_2:]
We have I(r_2) = {{i_1, i_2}, {i_2 ⊕ 1, i_3 ⊕ 1}, {i_1 ⊕ 1, i_3}} (see Figure 1), so that
Δ((i_1, i_2, i_3), r_2) = c(i_1, i_1 ⊕ 1) + c(i_2, i_2 ⊕ 1) + c(i_3, i_3 ⊕ 1) − [c(i_1, i_2) + c(i_2 ⊕ 1, i_3 ⊕ 1) + c(i_1 ⊕ 1, i_3)]
= [c(i_1, i_1 ⊕ 1) − c(i_1, i_2)] + [c(i_2 ⊕ 1, i_2) − c(i_2 ⊕ 1, i_3 ⊕ 1)] + [c(i_3, i_3 ⊕ 1) − c(i_3, i_1 ⊕ 1)]
= τ+(i_1, i_2 ⊖ 1) + τ−(i_2 ⊕ 1, i_3 ⊕ 2) + τ+(i_3, i_1).
The three functions are
f_r2^12 : (x, y) ∈ S^12 ↦ τ+(x, y ⊖ 1);
f_r2^23 : (x, y) ∈ S^23 ↦ τ−(x ⊕ 1, y ⊕ 2);
f_r2^13 : (x, y) ∈ S^13 ↦ τ+(y, x).
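These decompositions can be verified numerically: on a random symmetric instance, Δ computed as (removed cost − inserted cost) must coincide, term by term, with the sum of the three τ contributions. A sketch (the tour is (0, …, n−1); names are ours):

```python
import random

def random_symmetric_cost(n, seed=0):
    """Lazily generated random symmetric edge costs."""
    rng = random.Random(seed)
    w = {}
    def c(a, b):
        key = (min(a, b), max(a, b))
        if key not in w:
            w[key] = rng.randint(1, 100)
        return w[key]
    return c

def deltas(n, c, i1, i2, i3):
    """Delta of (i1,i2,i3) under r1 and r2, both directly and via tau."""
    s = lambda x: (x + 1) % n
    p = lambda x: (x - 1) % n
    tau_p = lambda a, b: c(a, s(a)) - c(a, s(b))
    tau_m = lambda a, b: c(a, p(a)) - c(a, p(b))
    removed = c(i1, s(i1)) + c(i2, s(i2)) + c(i3, s(i3))
    # r1: I = {{i1, i2+1}, {i2, i3+1}, {i1+1, i3}}
    d_r1 = removed - (c(i1, s(i2)) + c(i2, s(i3)) + c(s(i1), i3))
    d_r1_tau = tau_p(i1, i2) + tau_p(i2, i3) + tau_p(i3, i1)
    # r2: I = {{i1, i2}, {i2+1, i3+1}, {i1+1, i3}}
    d_r2 = removed - (c(i1, i2) + c(s(i2), s(i3)) + c(s(i1), i3))
    d_r2_tau = tau_p(i1, p(i2)) + tau_m(s(i2), (i3 + 2) % n) + tau_p(i3, i1)
    return d_r1, d_r1_tau, d_r2, d_r2_tau
```

Running this over any complete selection gives identical values for the direct and the τ-based computation of Δ.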
(For convenience, these functions, as well as the functions of some other moves described later on, are also reported in Table 2 of Section 6.) The functions f_r^12, f_r^23, f_r^13 are used in our procedure in two important ways. First, we use them to decide which pairs of indices are most likely to belong to an optimal selection for r (the rationale being that the higher a value f_r(x, y), the more likely it is that (x, y) belongs to a good selection).
Secondly, we use them to discard from consideration some moves which cannot be optimal. These are the moves such that no two of the indices give a sufficiently large contribution to the total. Better said, we keep in consideration only moves for which the contribution of at least one pair of indices is large enough. With respect to the strategy outlined in Section 3, this corresponds to a criterion for deciding whether a partial move suggested by our heuristic box is worth expanding into all its completions.
Assume we want to find the best move overall and the best move we have seen so far (the current “champion”) is μ*. We make the key observation that for a move ((i_1, i_2, i_3), r) to beat μ*, at least one of the following must hold:
f_r^12(i_1, i_2) > Δ(μ*)/3  ∨  f_r^23(i_2, i_3) > Δ(μ*)/3  ∨  f_r^13(i_1, i_3) > Δ(μ*)/3,
since the three contributions sum to Δ((i_1, i_2, i_3), r) > Δ(μ*). These three conditions are not exclusive but possibly overlapping, and in our algorithm we will enumerate only moves that satisfy at least one of them. Furthermore, we will not enumerate these moves by complete enumeration, but rather from the most promising to the least promising, stopping as soon as we realize that no move still has the possibility of being the best overall.
The most appropriate data structure for performing this kind of search (which will hence be used for our heuristic implementation of the magic box described in Section 3) is the max-heap. A heap is perfect for taking the highest-valued elements from a set, in decreasing order. It can be built in linear time with respect to the number of its elements and has the property that the largest element can be extracted in logarithmic time, while still leaving a heap.
We then build a max-heap H whose elements are records of type
[x, y, f, α, r],
where α ∈ {12, 13, 23}, r ∈ {r_1, r_2, r_3, r_4}, (x, y) ∈ S^α, and f := f_r^α(x, y). The heap elements correspond to partial moves. The heap is organized according to the values of f, and the field α is a label identifying which selection indices are associated with a given heap entry. We initialize H by putting in it an element for each (x, y), r and α such that f_r^α(x, y) > Δ(μ*)/3. We then start to extract the elements from the heap. Let us denote by [x_j, y_j, f_j, α_j, r_j] the j-th element extracted. Heuristically, we might say that x_1 and y_1 are the most likely values that a given pair of indices (specified by α_1) can take in the selection of a best-improving move, since these values give the largest possible contribution to the move value (2). We will keep extracting the maximum [x_j, y_j, f_j, α_j, r_j] from the heap as long as f_j > Δ(μ*)/3. This does not mean that we will extract all the elements of H, since Δ(μ*) could change (namely, increase) during the search, and hence the extractions might terminate before the heap is empty.
Each time we extract the heap maximum, we have that x_j and y_j are two possible indices out of three for a candidate move to beat μ*. With a linear-time scan, we then enumerate the third missing index (identified by α_j) and see if we indeed get a better move than μ*. For example, if α_j = 13, then the missing index is i_2 and we run a for-loop over i_2, with x_j + 2 ≤ i_2 ≤ y_j − 2, checking each time if the move ((x_j, i_2, y_j), r_j) is a better move than μ*. Whenever this is the case, we update μ*. Note that, since Δ(μ*) increases, the number of elements still in the heap for which f > Δ(μ*)/3 after updating the champion may be considerably smaller than it was before the update.
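Restricted to the single scheme r_1 (so that the bookkeeping of α and r can be dropped), the heap-driven search can be sketched as follows; this is our simplified illustration, not the paper's full implementation: `c` is any symmetric cost function, the tour is (0, …, n−1), and every popped pair is simply expanded over all positions of the missing index:

```python
import heapq

def circ_dist(n, a, b):
    return min((a - b) % n, (b - a) % n)

def best_r1_move(n, c):
    """Best-improving move under scheme r1 via the partial-move heap (sketch)."""
    s = lambda x: (x + 1) % n
    tau_p = lambda a, b: c(a, s(a)) - c(a, s(b))

    def delta(i1, i2, i3):                      # Delta((i1,i2,i3), r1)
        return tau_p(i1, i2) + tau_p(i2, i3) + tau_p(i3, i1)

    def complete(i1, i2, i3):
        return all(circ_dist(n, a, b) >= 2
                   for a, b in ((i1, i2), (i2, i3), (i1, i3)))

    # initialize the heap with every ordered pair whose contribution exceeds
    # Delta(NULL)/3 = 0; Python's heapq is a min-heap, hence the negation
    heap = [(-tau_p(x, y), x, y) for x in range(n) for y in range(n)
            if x != y and tau_p(x, y) > 0]
    heapq.heapify(heap)

    best, best_val = None, 0                    # mu*, Delta(mu*)
    while heap:
        negf, x, y = heapq.heappop(heap)
        if -negf <= best_val / 3:               # f <= Delta(mu*)/3: stop searching
            break
        for z in range(n):                      # expand the missing index
            if z in (x, y):
                continue
            i1, i2, i3 = sorted((x, y, z))
            if complete(i1, i2, i3) and delta(i1, i2, i3) > best_val:
                best, best_val = (i1, i2, i3), delta(i1, i2, i3)
    return best, best_val
```

By the argument of Lemma 1 below, any move beating the current champion has at least one cyclic pair with τ+ above Δ(μ*)/3, so that pair is popped and expanded before the loop can stop; the sketch therefore returns the same value as complete enumeration.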
Lemma 1.
When the procedure outlined above terminates, μ * is an optimal move.
Proof. 
Suppose, by contradiction, that there exists a move μ = ((w_1, w_2, w_3), r) better than μ*. Since Δ((w_1, w_2, w_3), r) > Δ(μ*), there exists at least one pair a, b ∈ {1, 2, 3} with a < b such that f_r^ab(w_a, w_b) > Δ(μ*)/3. Then the partial move [w_a, w_b, f_r^ab(w_a, w_b), ab, r] would eventually have been popped from the heap and evaluated, producing a move better than μ*. Therefore the procedure could not have ended with the move μ*. □
Let L denote the initial size of the heap and M the number of elements extracted from the heap overall. Then the cost of the procedure is O(n^2 + L + M(log L + n)) since: (i) Θ(n^2) is the cost of computing the τ values; (ii) Θ(L) is the cost of building the heap; (iii) for each of the M elements extracted from the heap we must pay O(log L) for re-adjusting the heap, and then we complete the move in O(n) ways. In the worst case, the procedure has complexity O(n^3), like complete enumeration, but, as we will show in our computational experiments, it is much smaller in practice. In fact, in our tests the complexity grew only slightly faster than a quadratic function of n (see Section 5). This is because the Mn selections which are indeed evaluated for possibly becoming the best move have a much higher probability of being good than a generic selection, since two of the three indices are guaranteed to contribute considerably to the value of the move.

4.1. Worst-Case Analysis

A theoretical result stated in [11] shows that no algorithm with a less-than-cubic worst-case complexity is possible for 3-OPT (under the widely believed ALL-PAIRS SHORTEST PATHS conjecture). In light of this result, we expected that some instances should exist which force our algorithm to require cubic time. In particular, we have found the following example.
Theorem 2.
The Smart Force algorithm has a worst-case complexity Θ ( n 3 ) .
Proof. 
Clearly, the complexity is O ( n 3 ) since there are O ( n 2 ) partial moves and the evaluation of each of them takes O ( n ) time. To show the lower bound Ω ( n 3 ) consider the following instance.
Fix any ϵ > 0 and, for each 0 ≤ i < j ≤ n̄, define c(i, j) to be
c(i, j) = 1 + 4ϵ if i = j (mod 2);  1 if i ≠ j (mod 2) and |i − j| > 1;  1 + ϵ if |i − j| = 1.
(Notice that if ϵ ≤ 2/3 this instance is in fact metric.) For these costs, the current tour (0, …, n̄) is a local optimum. In fact, for each selection (i1, i2, i3), at least two of the nodes have the same parity, and hence at least one edge of value 1 + 4ϵ would be introduced by a move. Therefore the inserted edges would have cost at least 3 + 4ϵ, while the removed edges have cost 3 + 3ϵ.
We have
τ+(i, j) = c(i, i+1) − c(i, j+1) = 1 + ϵ − 1 = ϵ if i = j (mod 2);  1 + ϵ − 1 − 4ϵ = −3ϵ if i ≠ j (mod 2).
Therefore, for each of the Θ(n²) pairs (i, j) such that i and j have the same parity, we have τ+(i, j) = ϵ > 0. Since
Δ ( ( i 1 , i 2 , i 3 ) , r 1 ) = τ + ( i 1 , i 2 ) + τ + ( i 2 , i 3 ) + τ + ( i 3 , i 1 ) ,
each such pair would be put in the heap and evaluated, fruitlessly, for a total running time of Ω ( n 3 ) . □
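The construction in this proof can be sketched in C as follows (a sketch on our part; the helper names are ours, and indices are taken modulo n so that the tour wraps around):

```c
#include <assert.h>
#include <math.h>

/* Cost function of the worst-case instance in the proof of Theorem 2.
   Nodes are 0..n-1 and the current tour visits them in increasing order. */
static double cost(int i, int j, double eps) {
    int d = i < j ? j - i : i - j;
    if (d == 1) return 1.0 + eps;                   /* consecutive nodes  */
    if ((i % 2) == (j % 2)) return 1.0 + 4.0 * eps; /* same parity        */
    return 1.0;                                     /* different parity   */
}

/* tau+(i,j) = c(i, i+1) - c(i, j+1), indices taken modulo n */
static double tau_plus(int i, int j, int n, double eps) {
    return cost(i, (i + 1) % n, eps) - cost(i, (j + 1) % n, eps);
}
```

For interior pairs of equal parity (e.g., i = 2, j = 6) this gives τ+ = ϵ > 0, so a quadratic number of partial moves survives the pruning threshold Δ(μ*)/3 = 0, as the proof requires.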

4.2. Implementing the Search Procedure

We now describe more formally the procedure that finds the best improving 3-OPT move. To simplify the description of the code, we will make use of a global variable μ * representing the current champion move. Initially μ * is NULL, and we extend the function Δ ( ) by defining Δ ( NULL ) : = 0 .
The main procedure is Find3-OPTmove (Procedures 1 and 2). The procedure starts by setting μ* to a solution for which, possibly, Δ(μ*) > 0. This is not strictly necessary, and setting μ* to NULL would work too, but a little effort in finding a “good” starting champion can yield a smaller heap and thus a faster procedure overall. In our case, we have chosen to just sample 4n random solutions and keep the best.
Procedure 1Find3-OPTmove
   1.  μ* ← startingSolution()
   2.  H ← buildHeap()
   3. while H[1].val > Δ(μ*)/3
   4.    (x, y, α, r) ← extractMax(H)
   5.   for z ∈ range3rd(x, y, α)
   6.        (i, j, k) ← selection(x, y, z, α)
   7.       if (Δ((i, j, k), r) > Δ(μ*)) then  /* update global optimum */
   8.        μ* ← ((i, j, k), r)
   9. /* if μ * = NULL there are no improving moves */
Procedure 2StartingSolution ()
Output: move μ (such that Δ(μ) ≥ 0)
   1.  μ ← NULL
   2. for r = r1, r2, r3, r4
   3.    for t = 1 to n
   4.     (i, j, k) ← randomSelection()
   5.    if (Δ((i, j, k), r) > Δ(μ)) then
   6.       μ ← ((i, j, k), r)
   7. return μ
The procedure then builds a max-heap (see Procedure 3, BuildHeap) containing an entry for each r ∈ {r1, …, r4}, α ∈ {12, 23, 13} and (x, y) ∈ S_α such that f_r^α(x, y) > Δ(μ*)/3. The specific functions f_r^α, for each α and r, are detailed in Table 2 of Section 6. The heap is implemented by an array H of records with five fields: x and y, which represent two indices of a selection; alpha, a label identifying the type of the two indices; val, the numerical value used as the key to sort the heap; and scheme, a label identifying a reinsertion scheme. By using the standard implementation of a heap (see, e.g., [15]), the array corresponds to a complete binary tree whose nodes are stored in consecutive entries from 1 to H.SIZE. The left son of node H[t] is H[2t], while the right son is H[2t+1]. The father of node H[t] is H[⌊t/2⌋]. The heap is such that the key of each node H[t] is the largest among all the keys of the subtree rooted at H[t].
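The record layout can be sketched as a plain C struct (field names follow the text; the concrete types and the constructor are our assumptions):

```c
#include <assert.h>

/* One heap entry: a partial move (x, y) of type alpha for scheme r,
   keyed by val = f_r^alpha(x, y). */
typedef struct {
    int    x, y;    /* two indices of a selection        */
    int    alpha;   /* type of the pair: 12, 23 or 13    */
    double val;     /* key used to sort the max-heap     */
    int    scheme;  /* label of the reinsertion scheme r */
} heap_entry;

/* Convenience constructor (our addition, for readability). */
static heap_entry make_entry(int x, int y, int alpha, double val, int scheme) {
    heap_entry e = { x, y, alpha, val, scheme };
    return e;
}
```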
In the procedure BuildHeap we use a function range 1 st 2 nd ( α ) which returns the set of all values that a pair ( x , y ) of indices of type α can assume. More specifically,
-
range1st2nd(12) := {(x, y) : 0 ≤ x < x + 2 ≤ y ≤ n̄ − 2 − P(x = 0)} /* x is i1 and y is i2 */
-
range1st2nd(23) := {(x, y) : 2 + P(y = n̄) ≤ x < x + 2 ≤ y ≤ n̄} /* x is i2 and y is i3 */
-
range1st2nd(13) := {(x, y) : 0 ≤ x < x + 4 ≤ y ≤ n̄ − P(x = 0)} /* x is i1 and y is i3 */
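Under our reading of these (partly garbled in extraction) definitions, with n̄ = n − 1 and P(·) the 0/1 indicator, the three sets can be expressed as membership predicates:

```c
#include <assert.h>

/* 0/1 indicator, as in the P(.) notation of the text */
static int P(int cond) { return cond ? 1 : 0; }

/* Membership in range1st2nd(alpha); nbar = n - 1. */
static int in_range12(int x, int y, int nbar) {  /* x is i1, y is i2 */
    return 0 <= x && x + 2 <= y && y <= nbar - 2 - P(x == 0);
}
static int in_range23(int x, int y, int nbar) {  /* x is i2, y is i3 */
    return 2 + P(y == nbar) <= x && x + 2 <= y && y <= nbar;
}
static int in_range13(int x, int y, int nbar) {  /* x is i1, y is i3 */
    return 0 <= x && x + 4 <= y && y <= nbar - P(x == 0);
}
```

The P(·) corrections encode the only wrap-around restriction: when i1 = 0, the third index cannot be n̄ (otherwise two removed edges would coincide).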
Procedure 3BuildHeap ()
Output: heap H
   1.  new H
   2.  c ← 0
   3.  for r = r1, r2, r3, r4
   4.        for α = 12, 23, 13
   5.            for (x, y) ∈ range1st2nd(α)
   6.                  if f_r^α(x, y) > Δ(μ*)/3 then
   7.                     c ← c + 1
   8.                     H[c].x ← x
   9.                     H[c].y ← y
   10.                      H[c].val ← f_r^α(x, y)
   11.                      H[c].alpha ← α
   12.                      H[c].scheme ← r
   13.  H.SIZE ← c
   14.  for t = ⌊H.SIZE/2⌋, …, 2, 1
   15.        heapify(H, t)     /* turns the array into a heap */
   16.  return H
BuildHeap terminates with a set of calls to the procedure Heapify (a standard procedure for implementing heaps), described in Procedure 4. To simplify the code, we assume H[t].val to be defined also for t > H.SIZE, with value −∞. The procedure Heapify(H, t) requires that the subtree rooted at t respects the heap structure at all nodes, except, perhaps, at the root. The procedure then adjusts the keys so that the subtree rooted at t becomes indeed a heap. The cost of Heapify is linear in the height of the subtree. The loop of line 14 in procedure BuildHeap turns the unsorted array H into a heap, working its way bottom-up, in time O(H.SIZE).
Procedure 4Heapify ( H , t )
Input: array H, integer t ∈ {1, …, H.SIZE}
   1.  ls ← 2t   /* left son */
   2.  rs ← 2t + 1   /* right son */
   3.  if (H[ls].val > H[t].val) then large ← ls else large ← t
   4.  if (H[rs].val > H[large].val) then large ← rs
   5.  if (large ≠ t) then
   6.       H[t] ↔ H[large]   /* swaps H[t] with the largest of its sons */
   7.       heapify(H, large)
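Procedure 4, together with the bottom-up loop of BuildHeap, translates to C along these lines (a sketch on a 1-based array of keys only; the real entries also carry x, y, alpha and scheme):

```c
#include <assert.h>

/* Sift-down at position t, as in Procedure 4 (1-based array). */
static void heapify(double *H, int size, int t) {
    int ls = 2 * t, rs = 2 * t + 1, large = t;
    if (ls <= size && H[ls] > H[large]) large = ls;
    if (rs <= size && H[rs] > H[large]) large = rs;
    if (large != t) {
        double tmp = H[t]; H[t] = H[large]; H[large] = tmp;
        heapify(H, size, large);   /* recurse into the subtree we disturbed */
    }
}

/* Bottom-up construction (loop of line 14): O(size) overall, since most
   subtrees are short and heapify costs only their height. */
static void build_heap(double *H, int size) {
    for (int t = size / 2; t >= 1; t--) heapify(H, size, t);
}
```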
Coming back to Find3-OPTMove, once the heap has been built, a loop (lines 3–8) extracts the partial moves from the heap, from the most to the least promising, according to their value (field val). For each partial move popped from the heap we use a function range 3 rd ( x , y , α ) to obtain the set of all values for the missing index with respect to a pair ( x , y ) of type α . More specifically.
-
range3rd(x, y, 12) := {z : y + 2 ≤ z ≤ n̄ − P(x = 0)} /* missing index is i3 */
-
range3rd(x, y, 23) := {z : 0 + P(y = n̄) ≤ z ≤ x − 2} /* missing index is i1 */
-
range3rd(x, y, 13) := {z : x + 2 ≤ z ≤ y − 2} /* missing index is i2 */.
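These ranges can be sketched as a small helper returning inclusive bounds (our reading of the definitions; nbar = n − 1, and an empty range is signalled by lo > hi):

```c
#include <assert.h>

/* 0/1 indicator, as in the P(.) notation of the text */
static int P(int cond) { return cond ? 1 : 0; }

/* Inclusive bounds [lo, hi] of the missing third index for a pair (x, y)
   of type alpha. */
static void range3rd(int x, int y, int alpha, int nbar, int *lo, int *hi) {
    if (alpha == 12)      { *lo = y + 2;        *hi = nbar - P(x == 0); } /* z is i3 */
    else if (alpha == 23) { *lo = P(y == nbar); *hi = x - 2;            } /* z is i1 */
    else                  { *lo = x + 2;        *hi = y - 2;            } /* z is i2 */
}
```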
We then complete the partial move in all possible ways, and each way is compared to the champion to see if there has been an improvement (line 7). Whenever the champion changes, the condition for loop termination becomes easier to satisfy than before.
In line 6 we use a function selection ( x , y , z , α ) that, given three indices x, y, z and a pair type α , rearranges the indices so as to return a correct selection. In particular, selection ( x , y , z , 12 ) : = ( x , y , z ) , selection ( x , y , z , 23 ) : = ( z , x , y ) , and selection ( x , y , z , 13 ) : = ( x , z , y ) .
The procedure ExtractMax (Procedure 5) returns the element of maximum value in the heap (which must be in the root node, i.e., H[1]). It then replaces the root with the leaf H[H.SIZE] and, by calling heapify(H, 1), moves it down along the tree until the heap property is again fulfilled. The cost of this procedure is O(log(H.SIZE)).
Procedure 5ExtractMax ( H )
Input: heap H
Output: the partial move ( x , y , α , r ) of maximum value in H
   1.  (x, y, α, r) ← (H[1].x, H[1].y, H[1].alpha, H[1].scheme) /* gets the max element */
   2.  H[1] ← H[H.SIZE] /* overwrites it */
   3.  H.SIZE ← H.SIZE − 1
   4.  heapify ( H , 1 ) /* restores the heap */
   5.  return ( x , y , α , r )

5. Computational Results

In this section, we report on our extensive computational experiments showing the effectiveness of the approach we propose. All tests were run on an Intel® Core™ i7-7700 8-CPU machine with a 3.6 GHz clock and 16 GB RAM, under Linux Ubuntu. The programs were implemented in C, compiled under gcc 5.4.0, and are available upon request.

5.1. Instance Types

The experiments were run on two types of graphs: (i) random graphs generated by us and (ii) instances from the standard benchmark repository TSPLIB [16]. The random graphs are divided into three categories:
-
Uniform (UNI): complete graphs in which the edge costs are independent random variables drawn uniformly in the range [ 0 , 1 ] .
-
Gaussian (GAU): complete graphs in which the edge costs are independent random variables drawn according to the Gaussian distribution N ( μ , σ ) with mean μ = 0.5 and standard deviation σ = 0.1 .
-
Geometric (GEO): complete graphs in which the edge costs are the Euclidean distances between n random points of the plane. In particular, the points are generated within a circle of radius 1 by drawing, uniformly at random, a pair of polar coordinates (d, α), with d ∈ [0, 1] and α ∈ [0, 2π].
Our goal in using more than one type of random distribution was to assess whether our method is sensitive, and if so to what extent, to the type of edge costs involved. The results will show that the GEO instances are slightly harder to tackle, while UNI and GAU are more or less equivalent.

5.2. Experiment 1: Best Move from a Random Tour

5.2.1. Random Instances

In a first set of experiments we compared our method to the brute force (BF) approach on random instances of the type described before. In particular, for each type of random graph we generated instances of sizes n1, …, n9, where we set n1 = 400 and n_{i+1} = √2 × n_i for i = 1, …, 8. This geometric increase is chosen so that a quadratic method should double its running time in going from n_i to n_{i+1}, while the ratio should be about 2√2 ≈ 2.83 with the cubic BF approach.
For each given size n i we generated 1000 random instances. Finally, for each random instance, we generated a random initial tour and found the best 3-OPT move with both smart force (SF) and BF.
In Table 3 we report the results of the experiment. The table is divided vertically into three parts. The first part reports the average running time of the two approaches (columns BF avg and SF avg) and the speed-up achieved by smart force over brute force. We can see that on our instances smart force is at least 70 times and as much as 500 times faster than brute force. Similar results are reported in the second vertical part, in which we focus on the number of triples evaluated by the two methods. Clearly, the average for the BF method is a constant, since all triples must be evaluated. SF, on the other hand, evaluates a much smaller number of triples, from about 200 up to about 7000 times fewer than BF. Notice that the saving in running time is smaller than the saving in the number of evaluated triples; this is due to the overhead needed for building and updating the heap. The standard deviations for the SF times (not reported in the table for space reasons) were in the range 10–25% of the mean value, while for the number of evaluated triples the standard deviations were between 30% and 40%.
In the final vertical section, we focus on the empirical estimate of a complexity function for the running time and the number of evaluated triples of the smart force approach. In particular, the column SF time reports the ratio between the average time of SF on instances of size n_k and n_{k−1}. Given the way that we defined the n_k’s, a value 2 in this column would be indicative of a quadratic algorithm, while a value 2√2 ≈ 2.83 would be indicative of a cubic algorithm.
The average ratios for the running time of the SF algorithm were 2.37, 2.31 and 2.47 for the UNI, GAU and GEO instances respectively. These values are indicative of an algorithm whose empirical average complexity is definitely sub-cubic, but not quite quadratic. A reasonably good fit with an experimental upper bound to the time complexity is O ( n 2.5 ) (see Figure 2).
The column labeled SF eval reports the same type of ratios but this time relatively to the number of triples evaluated by SF. We can see that for UNI and GAU graphs the number of triples grows almost exactly as a quadratic function of n. Notice that evaluating less than a quadratic number of triples would be impossible, since each edge must be looked at, and there is a quadratic number of edges. Again, the class of graphs GEO seems to be slightly harder than UNI and GAU.
For completeness, in columns BF time and BF eval we report the same type of values for the BF method.
The main message from the table is that while finding the best 3-OPT move for a tour of about 6000 nodes with the classical BF approach can take more than half an hour, with our approach it can be done in 5 s or so.

5.2.2. TSPLIB Instances

We then ran the same type of experiment on instances from the TSPLIB. For each instance, we generated 1000 random starting tours and found the best 3-OPT move. The results are reported in Table 4. The instances are all the Euclidean, Geographical and Explicit instances (according to TSPLIB categories) with sizes up to 6000 nodes. Smaller-size instances were ignored since the running times, for both SF and BF, are too small to be measured precisely. Indeed, for n ≤ 200 finding the best move by SF yields only a small advantage in running time over the BF approach. The effect increases with n, and we get about a factor-50 speed-up for instances with 3000 ≤ n ≤ 6000. The largest improvement is achieved on one instance of size n = 2319, where SF is 77 times faster than BF and finds the best move in about two seconds versus two and a half minutes.

5.3. Experiment 2: Convergence to a Local Optimum

5.3.1. Random Instances

A second set of experiments concerned the behavior of our algorithm over a sequence of iterations, i.e., in a full local search convergence starting from a random tour until a local optimum is reached. Since the time required for this experiment is considerable, we ran it only on a subset of the previous instances, namely the instances with n 1600 .
The goal of the experiment was to assess how sensitive the method is to the quality of the current tour. Note that for BF the quality of the tour is irrelevant, as the time needed for finding the best move is in practice constant at each step, since all moves must be looked at (actually, the number of times the optimum gets updated is variable, and this accounts for tiny differences in the time needed for a move). With our method, however, the quality of the current tour matters: when the tour is “poor”, we expect to have a heap with a large number of candidates, but we also expect that most extractions from the heap will be winning (i.e., they determine an improvement of the current champion), so that the stopping criterion is reached relatively soon. On the other hand, if the tour is “very good”, we expect to have a small heap, since there are few candidate moves that can be improving, but we also expect that most extractions from the heap will be losing (i.e., they won’t determine an improvement of the current champion), so that the stopping criterion will be hard to reach.
We can summarize this idea with the following trade-off:
(i)
When there are many improving moves (i.e., at the beginning of local search) we have many candidates on the heap, but the pruning is effective and we only expand a few of them.
(ii)
When there aren’t many improving moves (i.e., near the local optimum) we have very few candidates on the heap, but the pruning is not effective and we need to expand most of them.
The time for a move is the product between the number M of expansions and the cost Θ ( n ) of an expansion and so we are interested in determining how M changes along the convergence.
The experiment was conducted as follows. We used random instances of the families described before. For each value of n and instance type, we generated 10 random instances, and for each of them we started 10 convergences from random tours (a total of 100 runs for each value of n). For each run we took 10 “snapshots” at 10%, 20%, …, 100% of the search. In particular, we divided the path followed to the local optimum into ten parts and averaged the values over each of these parts. Notice that the paths can have different lengths, but this way we normalize them: the first 10% corresponds to tours very close to the random starting tour and very far from the local optimum, each subsequent tenth is one step farther from the starting tour and closer to the optimum, and the last 10% is very close to the local optimum.
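The bucketing into tenths can be sketched as follows (our code; the paper's exact interval boundaries may differ slightly):

```c
#include <assert.h>

/* Average m per-move measurements into 10 equal "snapshots", normalizing
   convergences of different lengths to a common scale (requires m >= 10,
   so that every bucket is non-empty). */
static void snapshots(const double *vals, int m, double out[10]) {
    for (int k = 0; k < 10; k++) {
        int lo = (k * m) / 10, hi = ((k + 1) * m) / 10;
        double s = 0.0;
        for (int t = lo; t < hi; t++) s += vals[t];
        out[k] = s / (hi - lo);
    }
}
```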
The results are summarized in Table 5. Rows labeled BF time and SF time report, respectively, the average time of BF and SF (in milliseconds, rounded to an integer) for each tenth of the convergence. The column labeled “Avg time” reports the average time of each convergence and of each move within the convergence over all 100 tests for each instance size, for both BF and SF. The final column “Speed-up” reports the speed-up factor of SF over BF.
Rows “SF heap” report the average size of the heap. Rows “SF exps” report the average number of expansions (i.e., elements popped from the heap). Rows “SF wins” report the average number of winning expansions (i.e., improvements of the champion).
The behavior of SF seems to be sensitive to the type of edge costs involved. While for UNI and GAU graphs the statistics are very similar, the running times for GEO instances are slightly larger. If we look at row “SF heap”, it can be seen how the numbers along these rows decrease very quickly during the convergence for all types of instances. One interesting phenomenon, however, is that on GEO instances we start with a larger heap than for UNI and GAU but end with a smaller one, so that the rate of decrease in the heap size is higher for this type of instance.
The analysis of rows “SF exps” shows how for UNI and GAU instances, the numbers are relatively stable in the beginning and then start to increase around halfway through the convergence. For GEO instances, on the other hand, these values start by increasing until they reach a peak at around 20%, and then they start to drastically decrease.
Finally the rows “SF wins” for all instance types contain numbers which are quite small, stable in the beginning and then decreasing until the end of the convergence.
From Table 5 we can see that the net effect of having fewer elements on the heap but more expansions is that SF takes more or less the same time along the convergence. This is particularly true for UNI and GAU instances, with a little slow down while approaching the local optimum, while for GEO instances the effectiveness of the method increases when nearing the local optimum.
In Figure 3 we can see a graphic visualization of the aforementioned trade-off effect between the heap size and the number of expansions. The figure describes the way the heap size and the number of expansions change during the convergence for the 100 UNI instances of size n = 1600 . Since the lengths of the convergences are different, they have been normalized to 100 steps.
That is, for each convergence i = 1 , , 100 and step k = 1 , , 100 we have determined h k i (the average heap size in the interval from k 1 to k percent of the i-th convergence), e k i (the average number of expansions in the same interval) and t k i (the average time for a move in the interval). Once all the convergences have been normalized to have the same length, we have taken the average of values h, e and t over all convergences in each of the 100 intervals. For each interval k, let h ¯ k be the average of h k i over all i, and define similarly the averages e ¯ k and t ¯ k . The plots of h ¯ , e ¯ and t ¯ are shown in Figure 3. The x axis is labeled by the 100 intervals. The y-axis is labeled by the number of elements (left) and by the time in seconds (right). It can be seen that the heap size starts by staying more or less constant for about 20% of the convergence and then starts to rapidly decrease (with the exception of a peak at around 40%). The number of expansions stays more or less constant for about one-third of the convergence and then starts to increase in a linear fashion. The time for each move follows a similar curve, since t ¯ is proportional (with a factor n) to e ¯ . The two curves for h ¯ and e ¯ approach each other until they almost touch at the end, when, basically, all of the heap elements must be expanded and pruning is ineffective.
Overall, as shown by Table 4, the use of SF over BF allows for speed-up factors of about two orders of magnitude on instances of 1600 nodes.

5.3.2. TSPLIB Instances

We then performed the same type of experiment on the TSPLIB instances previously described. For each instance we ran a local search all the way to convergence to a local optimum, starting from 100 random tours.
The results are reported in Table 6. This type of experiment is extremely time-consuming when BF is run on large graphs. For this reason, not all the times reported for BF are measured values: the results for graphs with n ≥ 3000 are estimates of the running times. In particular, when n ≥ 3000, BF would have to evaluate more than 18 billion triples at each move. Say the exact number is T_n. Then we only evaluate the first T_0 := 100,000,000 triples. Assume this takes time t_0 (usually less than 2 s on our computer). Then we estimate the actual time of a move as t_n := t_0 (T_n / T_0). This is indeed a lower bound, since in computing t_0 we do not account for updates to the current best triple (i.e., we removed the if statement and its body from the nested for-loops of BF).
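The estimate can be sketched as follows, assuming T_n counts 4·C(n,3) (selection, scheme) pairs per move — our reading, ignoring the small corrections for adjacent indices:

```c
#include <assert.h>

/* Triples evaluated by BF per move: C(n,3) selections times 4 schemes
   (ignoring the small adjacency corrections). */
static double bf_triples(int n) {
    return 4.0 * n * (n - 1.0) * (n - 2.0) / 6.0;
}

/* Lower-bound estimate t_n := t_0 * (T_n / T_0) from a partial run that
   evaluated the first T0 triples in time t0 (seconds). */
static double estimate_move_time(double t0, double T0, int n) {
    return t0 * (bf_triples(n) / T0);
}
```

For example, with t0 = 2 s for T0 = 10⁸ triples and n = 3000, the estimated time of a single BF move is already several minutes.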
From Table 6 we can see that SF outperforms BF with improvements that range from 500% faster to more than 20,000% faster (instance pcb3038). These improvements are smaller than those for random graphs, but still, the experiment shows how with our method it is now possible to run a local search by using the 3-OPT neighborhood on TSPLIB instances that were previously impossible to tackle. For instance, reaching a 3-OPT local optimum for pcb3038 with the BF nested-for procedure would have taken at least two weeks (!) while with SF it can be done in less than 2 h.

6. Other Types of Cubic Moves

The ideas outlined for the 3-OPT neighborhood and the corresponding successful computational results have prompted us to investigate the possibility of speeding up in a similar way some other type of cubic moves. In this section, we just give a few examples to show that indeed this can be done.
Consider, for instance, some special types of K-OPT moves (where K > 3 edges are removed from the tour and replaced by K new edges) in which the removed edges are identified by three indices i1, i2, i3. For instance, let K = 6 and let the removed edges be
{i1 − 1, i1}, {i1, i1 + 1}, {i2 − 1, i2}, {i2, i2 + 1}, {i3 − 1, i3}, {i3, i3 + 1} (index arithmetic is cyclic, i.e., modulo n).
The tour is then reconnected in any way that excludes the removed edges (clearly there are many possibilities, we’ll just look at a few). To describe the way in which the tour is reconnected we can still use the concept of the reinsertion scheme. Let us describe the tour before the move (by default, clockwise) as
A , i 1 , B , i 2 , C , i 3 ,
where A = (i3 + 1, …, i1 − 1), B = (i1 + 1, …, i2 − 1) and C = (i2 + 1, …, i3 − 1). In the new tour, each segment X ∈ {A, B, C} can be traversed clockwise (denoted by X+) or counter-clockwise (denoted by X−). A reinsertion scheme is then a permutation of {A+, B, C, i1, i2, i3}, starting with A+ (we adopt the convention that A is always the first segment and is always traversed clockwise) and with B and C signed either ‘+’ or ‘−’. Let us first consider two simple examples of reinsertion schemes, i.e., those in which the move maintains the pattern “segment, node, segment, node, segment, node” and the segments keep the clockwise orientation and the original order (i.e., A+, B+, C+). This leaves only two possible reinsertion schemes for true moves (as many as the derangements of {1, 2, 3}), namely (see Figure 4)
r 5 : = < A + , i 2 , B + , i 3 , C + , i 1 >
r 6 : = < A + , i 3 , B + , i 1 , C + , i 2 >
The value of a move ( ( i 1 , i 2 , i 3 ) , r 5 ) is Δ ( ( i 1 , i 2 , i 3 ) , r 5 ) =
= c(i1−1, i1) + c(i1, i1+1) + c(i2−1, i2) + c(i2, i2+1) + c(i3−1, i3) + c(i3, i3+1) − [c(i2, i1−1) + c(i2, i1+1) + c(i3, i2−1) + c(i3, i2+1) + c(i1, i3−1) + c(i1, i3+1)] = [τ+(i2, i1) + τ−(i2, i1)] + [τ+(i3, i2) + τ−(i3, i2)] + [τ+(i1, i3) + τ−(i1, i3)]
So we have
Δ((i1, i2, i3), r5) = f_r5^12(i1, i2) + f_r5^23(i2, i3) + f_r5^13(i1, i3)
if we define the three functions f r α to be
  • f_r5^12 : (x, y) ∈ S12 → τ+(y, x) + τ−(y, x)
  • f_r5^23 : (x, y) ∈ S23 → τ+(y, x) + τ−(y, x)
  • f_r5^13 : (x, y) ∈ S13 → τ+(x, y) + τ−(x, y)
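The decomposition can be checked numerically. The sketch below is our code: it takes τ+(i, j) := c(i, i+1) − c(i, j+1), infers τ−(i, j) := c(i, i−1) − c(i, j−1) by symmetry (our assumption, consistent with the identities used here), works with indices modulo n, and compares the direct value of an r5 move against f12 + f23 + f13 on an arbitrary symmetric cost matrix:

```c
#include <assert.h>
#include <math.h>

#define N 12
static double C[N][N];
static int succ(int i) { return (i + 1) % N; }
static int pred(int i) { return (i + N - 1) % N; }

static double tplus(int i, int j)  { return C[i][succ(i)] - C[i][succ(j)]; }
static double tminus(int i, int j) { return C[i][pred(i)] - C[i][pred(j)]; }

static void init_costs(void) {          /* arbitrary symmetric costs */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 1.0 + ((i * 7 + j * 7 + i * j) % 10) * 0.05;
}

/* Direct value of ((i1,i2,i3), r5): removed edges minus added edges. */
static double delta_r5(int i1, int i2, int i3) {
    double rem = C[pred(i1)][i1] + C[i1][succ(i1)] + C[pred(i2)][i2]
               + C[i2][succ(i2)] + C[pred(i3)][i3] + C[i3][succ(i3)];
    double add = C[i2][pred(i1)] + C[i2][succ(i1)] + C[i3][pred(i2)]
               + C[i3][succ(i2)] + C[i1][pred(i3)] + C[i1][succ(i3)];
    return rem - add;
}

/* The same value via the decomposition f12 + f23 + f13. */
static double delta_r5_f(int i1, int i2, int i3) {
    return (tplus(i2, i1) + tminus(i2, i1))
         + (tplus(i3, i2) + tminus(i3, i2))
         + (tplus(i1, i3) + tminus(i1, i3));
}
```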
Similarly, a move ( ( i 1 , i 2 , i 3 ) , r 6 ) has value Δ ( ( i 1 , i 2 , i 3 ) , r 6 ) =
= c(i1−1, i1) + c(i1, i1+1) + c(i2−1, i2) + c(i2, i2+1) + c(i3−1, i3) + c(i3, i3+1) − [c(i1, i2−1) + c(i1, i2+1) + c(i2, i3−1) + c(i2, i3+1) + c(i3, i1−1) + c(i3, i1+1)] = [τ+(i1, i2) + τ−(i1, i2)] + [τ+(i2, i3) + τ−(i2, i3)] + [τ+(i3, i1) + τ−(i3, i1)] = f_r6^12(i1, i2) + f_r6^23(i2, i3) + f_r6^13(i1, i3),
where we have defined the three functions f r α as
  • f_r6^12 : (x, y) ∈ S12 → τ+(x, y) + τ−(x, y)
  • f_r6^23 : (x, y) ∈ S23 → τ+(x, y) + τ−(x, y)
  • f_r6^13 : (x, y) ∈ S13 → τ+(y, x) + τ−(y, x)
Let us now consider the case of the above move when we keep the order A, B, C, but we also allow for reversing either B or C. It turns out that, over all the signings of B and C and permutations of i 1 , i 2 , i 3 , there is only one possible reinsertion scheme, namely (see Figure 4)
r7 := < A+, i3, B−, i2, C−, i1 >.
The value of a move ( ( i 1 , i 2 , i 3 ) , r 7 ) is Δ ( ( i 1 , i 2 , i 3 ) , r 7 ) =
= c(i1−1, i1) + c(i1, i1+1) + c(i2−1, i2) + c(i2, i2+1) + c(i3−1, i3) + c(i3, i3+1) − [c(i1, i2+1) + c(i1, i3+1) + c(i2, i1+1) + c(i2, i3−1) + c(i3, i1−1) + c(i3, i2−1)] = [τ+(i1, i2) + τ+(i2, i1)] + [τ−(i2, i3) + τ−(i3, i2)] + [τ+(i1−1, i3−1) + τ−(i3+1, i1+1)] = f_r7^12(i1, i2) + f_r7^23(i2, i3) + f_r7^13(i1, i3),
where we have defined the three functions f r α as
  • f_r7^12 : (x, y) ∈ S12 → τ+(x, y) + τ+(y, x)
  • f_r7^23 : (x, y) ∈ S23 → τ−(x, y) + τ−(y, x)
  • f_r7^13 : (x, y) ∈ S13 → τ+(x−1, y−1) + τ−(y+1, x+1)
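The shifted arguments in the third function can also be verified numerically. The sketch below is our code, with τ+(i, j) := c(i, i+1) − c(i, j+1), an inferred τ−(i, j) := c(i, i−1) − c(i, j−1) (our assumption), and indices modulo n; it compares the direct value of an r7 move with its decomposition on an arbitrary symmetric cost matrix:

```c
#include <assert.h>
#include <math.h>

#define N 12
static double C[N][N];
static int succ(int i) { return (i + 1) % N; }
static int pred(int i) { return (i + N - 1) % N; }

static double tplus(int i, int j)  { return C[i][succ(i)] - C[i][succ(j)]; }
static double tminus(int i, int j) { return C[i][pred(i)] - C[i][pred(j)]; }

static void init_costs(void) {          /* arbitrary symmetric costs */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            C[i][j] = 1.0 + ((i * 3 + j * 3 + 2 * i * j) % 11) * 0.04;
}

/* Direct value of ((i1,i2,i3), r7): removed edges minus added edges. */
static double delta_r7(int i1, int i2, int i3) {
    double rem = C[pred(i1)][i1] + C[i1][succ(i1)] + C[pred(i2)][i2]
               + C[i2][succ(i2)] + C[pred(i3)][i3] + C[i3][succ(i3)];
    double add = C[i1][succ(i2)] + C[i1][succ(i3)] + C[i2][succ(i1)]
               + C[i2][pred(i3)] + C[i3][pred(i1)] + C[i3][pred(i2)];
    return rem - add;
}

/* The same value via f12 + f23 + f13, with the shifted third function. */
static double delta_r7_f(int i1, int i2, int i3) {
    return (tplus(i1, i2) + tplus(i2, i1))
         + (tminus(i2, i3) + tminus(i3, i2))
         + (tplus(pred(i1), pred(i3)) + tminus(succ(i3), succ(i1)));
}
```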
Finally, let us consider one last example of these special 6-OPT moves, in which we rearrange the permutation of segments and nodes, namely (see Figure 4)
r8 := < A+, C−, B+, i3, i1, i2 >.
The value of a move ( ( i 1 , i 2 , i 3 ) , r 8 ) is Δ ( ( i 1 , i 2 , i 3 ) , r 8 ) =
= c(i1−1, i1) + c(i1, i1+1) + c(i2−1, i2) + c(i2, i2+1) + c(i3−1, i3) + c(i3, i3+1) − [c(i1−1, i3−1) + c(i1, i3) + c(i2−1, i3) + c(i2, i3+1) + c(i1, i2) + c(i1+1, i2+1)] = [τ−(i1+1, i2+2) + τ−(i2, i1+1)] + [τ+(i2, i3) + τ−(i3, i2)] + [τ+(i1−1, i3−2) + τ+(i3, i1−1)] = f_r8^12(i1, i2) + f_r8^23(i2, i3) + f_r8^13(i1, i3),
where we have defined the three functions f r α as
  • f_r8^12 : (x, y) ∈ S12 → τ−(x+1, y+2) + τ−(y, x+1)
  • f_r8^23 : (x, y) ∈ S23 → τ+(x, y) + τ−(y, x)
  • f_r8^13 : (x, y) ∈ S13 → τ+(x−1, y−2) + τ+(y, x−1)
To finish this section let us consider another type of cubic move, for example, a special 5-OPT move. In particular, let the removed edges be
{i1 − 1, i1}, {i1, i1 + 1}, {i2 − 1, i2}, {i2, i2 + 1}, {i3, i3 + 1}.
The tour is then reconnected in any way that excludes the removed edges (clearly there are many possibilities, we will just look at one of them). By using the previous notation for the reinsertion schemes, let us consider the scheme (see Figure 4)
r9 := < A+, i2, B+, i1, C− >.
The value of a move ( ( i 1 , i 2 , i 3 ) , r 9 ) is Δ ( ( i 1 , i 2 , i 3 ) , r 9 ) =
= c(i1−1, i1) + c(i1, i1+1) + c(i2−1, i2) + c(i2, i2+1) + c(i3, i3+1) − [c(i1−1, i2) + c(i1, i2−1) + c(i1+1, i2) + c(i2+1, i3+1) + c(i3, i1)] = [τ−(i1+1, i2+1) + τ+(i1−1, i2−1) + τ+(i2−1, i1−1)] + τ−(i2+1, i3+2) + τ+(i3, i1−1) = f_r9^12(i1, i2) + f_r9^23(i2, i3) + f_r9^13(i1, i3),
where we have defined the three functions f r α as
  • f_r9^12 : (x, y) ∈ S12 → τ−(x+1, y+1) + τ+(x−1, y−1) + τ+(y−1, x−1)
  • f_r9^23 : (x, y) ∈ S23 → τ−(x+1, y+2)
  • f_r9^13 : (x, y) ∈ S13 → τ+(y, x−1)
It should be clear that, to find the best move of the special types described in this section, we can put the partial moves in a heap and use the same strategy we developed in Section 4. The algorithm is the same as before, with the difference that in line 3 of procedure BuildHeap we iterate over the schemes r5, …, r9 instead of r1, …, r4.

7. Conclusions

In this work, we have described an algorithmic strategy for optimizing the 3-OPT neighborhood in an effective way. Our strategy relies on a particular order of enumeration of the triples which allows us to find the best 3-OPT move without having to consider all the possibilities. This is achieved by a pruning rule and by the use of the max-heap as a suitable data structure for finding in a quick way good candidates for the best move.
Extensive computational experiments prove that this strategy largely outperforms the classical Θ ( n 3 ) “nested-for” approach on average. In particular, our approach exhibits a time complexity bounded by O ( n 2.5 ) on various types of random graphs.
The goal of our work was to show how the use of the 3-OPT neighborhood can be extended to graphs of much larger size than before. We did not try to assess the effectiveness of this neighborhood in finding good-quality tours, nor possible refinements to 3-OPT local search (such as restarts, perturbations, the effectiveness of the other type of 3-OPT moves we introduced, etc.). These types of investigations would have required a large set of experiments and further coding, and can be the matter of future research. We also leave to further investigation the possibility of using our type of enumeration strategy for other polynomial-size neighborhoods, including some for different problems than the TSP. In this respect, we have obtained some promising preliminary results for the application of our approach to the 4-OPT neighborhood [17].
Perhaps the main open question deriving from our work is to prove that the expected time complexity of our algorithm is subcubic on, for instance, uniform random graphs. Note that the problem of finding the best 3-OPT move is the same as that of finding the largest triangle in a complete graph with suitable edge lengths L_ij. For instance, for the reinsertion scheme r1 in (4), the length of an edge (i, j) is L_ij := τ+(i, j) = c(i, i+1) − c(i, j+1). For a random TSP instance, each random variable L_ij is therefore the difference of two uniform random variables, and this definitely complicates a probabilistic analysis of the algorithm. If one makes the simplifying assumption that the lengths L_ij are independent uniform random variables drawn in [0, 1], then it can be shown that finding the largest triangle in a graph takes quadratic time on average, and this already requires a quite complex proof [18]. However, since in our case the variables L_ij are neither independent nor uniformly distributed, we were not able to prove that the expected running time is subcubic, and such a proof appears very difficult to obtain.

Author Contributions

G.L.: Concept and analysis design. Paper writing. Software implementation. M.D.: Paper writing. Software implementation. Computational experiments. Tables and graphs. Data collection. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Papadimitriou, C.H.; Steiglitz, K. Combinatorial Optimization: Algorithms and Complexity; Prentice Hall: Upper Saddle River, NJ, USA, 1982.
2. Aarts, E.; Lenstra, J.K. (Eds.) Local Search in Combinatorial Optimization, 1st ed.; John Wiley & Sons: New York, NY, USA, 1997.
3. Glover, F.; Laguna, M. Tabu Search; Kluwer Academic Publishers: Norwell, MA, USA, 1997.
4. Kirkpatrick, S.; Gelatt, C.D.; Vecchi, M.P. Optimization by Simulated Annealing. Science 1983, 220, 671–680.
5. Ahuja, R.K.; Ergun, O.; Orlin, J.B.; Punnen, A.P. A survey of very large-scale neighborhood search techniques. Discret. Appl. Math. 2002, 123, 75–102.
6. Croes, G.A. A Method for Solving Traveling-Salesman Problems. Oper. Res. 1958, 6, 791–812.
7. Lin, S. Computer solutions of the traveling salesman problem. Bell Syst. Tech. J. 1965, 44, 2245–2269.
8. Steiglitz, K.; Weiner, P. Some Improved Algorithms for Computer Solution of the Traveling Salesman Problem. In Proceedings of the 6th Annual Allerton Conference on Circuit and System Theory, Urbana, IL, USA, 2–4 October 1968; pp. 814–821.
9. Lin, S.; Kernighan, B.W. An Effective Heuristic Algorithm for the Traveling-Salesman Problem. Oper. Res. 1973, 21, 498–516.
10. Johnson, D.S.; McGeoch, L.A. The Traveling Salesman Problem: A Case Study in Local Optimization. In Local Search in Combinatorial Optimization, 1st ed.; John Wiley & Sons: New York, NY, USA, 1997; pp. 215–310.
11. De Berg, M.; Buchin, K.; Jansen, B.M.P.; Woeginger, G.J. Fine-Grained Complexity Analysis of Two Classic TSP Variants. In Proceedings of the 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), Rome, Italy, 11–15 July 2016; pp. 5:1–5:14.
12. Lancia, G.; Dalpasso, M. Speeding-up the exploration of the 3-OPT neighborhood for the TSP. In New Trends in Emerging Complex Real Life Problems; AIRO Springer Series; Springer: Berlin/Heidelberg, Germany, 2018; Volume 1, pp. 345–356.
13. Fellows, M.; Rosamond, F.; Fomin, F.; Lokshtanov, D.; Saurabh, S.; Villanger, Y. Local Search: Is Brute-Force Avoidable? J. Comput. Syst. Sci. 2009, 78, 486–491.
14. Applegate, D.L.; Bixby, R.E.; Chvátal, V.; Cook, W.J. The Traveling Salesman Problem: A Computational Study; Princeton University Press: Princeton, NJ, USA, 2007.
15. Cormen, T.H.; Leiserson, C.E.; Rivest, R.L.; Stein, C. Introduction to Algorithms, 3rd ed.; The MIT Press: Cambridge, MA, USA, 2009.
16. Reinelt, G. TSPLIB—A Traveling Salesman Problem Library. ORSA J. Comput. 1991, 3, 376–384.
17. Lancia, G.; Dalpasso, M. Algorithmic Strategies for a Fast Exploration of the TSP 4-OPT Neighborhood. In Advances in Optimization and Decision Science for Society, Services and Enterprises; AIRO Springer Series; Springer: Berlin/Heidelberg, Germany, 2019; Volume 3, pp. 457–468.
18. Lancia, G.; Vidoni, P. Finding the largest triangle in a graph in expected quadratic time. Eur. J. Oper. Res. 2020, 286, 458–467.
Figure 1. The pure reinsertion schemes of 3-OPT.
Figure 2. Plots of possible fittings of the time for finding the first best move (GEO instances). The multiplicative constants are α ≈ 4.44 × 10^−8, β ≈ 0.33 × 10^−8, and γ ≈ 0.02 × 10^−8.
Figure 3. Uniform (UNI) n = 1600, averages on 100 convergences, each of 100 steps (obtained by normalizing convergences of different lengths).
Figure 4. The reinsertion schemes of some special cubic moves.
Table 1. The number of true 2-OPT and 3-OPT moves for some n.
n | 2-OPT | 3-OPT
50 | 1175 | 69,000
100 | 4850 | 608,000
200 | 19,700 | 5,096,000
500 | 124,250 | 81,840,000
1000 | 498,500 | 660,680,000
2000 | 1,997,000 | 5,309,360,000
5000 | 12,492,500 | 83,183,400,000
10,000 | 49,985,000 | 666,066,800,000
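The entries of Table 1 follow simple closed forms: the 2-OPT count is n(n − 3)/2 (pairs of non-adjacent tour edges), and the 3-OPT count equals 2n(n − 4)(n − 5)/3. The 3-OPT expression is inferred here from the table entries (it reproduces all of them exactly) rather than quoted from the paper; it corresponds to the triples of pairwise non-adjacent tour edges multiplied by the 4 pure reinsertion schemes.

```python
def two_opt_moves(n):
    # Pairs of non-adjacent edges of an n-cycle: n(n-3)/2.
    return n * (n - 3) // 2

def three_opt_moves(n):
    # Inferred closed form matching Table 1: triples of pairwise
    # non-adjacent tour edges, times the 4 pure reinsertion schemes.
    # Equivalent to 4 * (n / (n-3)) * C(n-3, 3).
    return 2 * n * (n - 4) * (n - 5) // 3

# Spot-check against two rows of Table 1.
print(two_opt_moves(50), three_opt_moves(50))      # 1175 69000
print(two_opt_moves(1000), three_opt_moves(1000))  # 498500 660680000
```

Both counts are Θ(n^2) and Θ(n^3), respectively, which is why exhaustive 3-OPT enumeration becomes impractical so much earlier than 2-OPT.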
Table 2. Expressing Δ((i1, i2, i3), r) as a sum f_r^12(i1, i2) + f_r^23(i2, i3) + f_r^13(i1, i3).
r | f_r^12(x, y) | f_r^23(x, y) | f_r^13(x, y)
r1 | τ+(x, y) | τ+(x, y) | τ+(y, x)
r2 | τ+(x, y−1) | τ−(x−1, y−2) | τ+(y, x)
r3 | τ+(x, y) | τ+(x, y−1) | τ−(y−1, x−2)
r4 | τ−(x−1, y−2) | τ+(x, y) | τ+(y, x−1)
r5 | τ+(y, x) + τ−(y, x) | τ+(y, x) + τ−(y, x) | τ+(x, y) + τ−(x, y)
r6 | τ+(x, y) + τ−(x, y) | τ+(x, y) + τ−(x, y) | τ+(y, x) + τ−(y, x)
r7 | τ+(x, y) + τ+(y, x) | τ−(x, y) + τ−(y, x) | τ+(x−1, y−1) + τ−(y−1, x−1)
r8 | τ−(x−1, y−2) + τ−(y, x−1) | τ+(x, y) + τ−(y, x) | τ+(x−1, y−2) + τ+(y, x−1)
r9 | τ−(x−1, y−1) + τ+(x−1, y−1) + τ+(y−1, x−1) | τ−(x−1, y−2) | τ+(y, x−1)
(All index arithmetic is modulo n.)
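As a small illustration of how the decomposition works, the sketch below checks row r 1 numerically: assuming τ+(a, b) = c(a, a+1) − c(a, b+1) with indices mod n, and assuming r 1 replaces the tour edges {i1, i1+1}, {i2, i2+1}, {i3, i3+1} by {i1, i2+1}, {i2, i3+1}, {i3, i1+1} (an illustrative reading consistent with row r 1, not code from the paper), the move's gain computed edge by edge equals the pairwise sum τ+(i1, i2) + τ+(i2, i3) + τ+(i3, i1).

```python
import random

random.seed(7)
n = 12
# Random symmetric distances; the tour is the identity permutation 0..n-1.
c = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        c[i][j] = c[j][i] = random.random()

s = lambda i: (i + 1) % n  # successor on the identity tour

def tau_plus(a, b):
    # Assumed reading: gain of trading tour edge {a, a+1} for {a, b+1}.
    return c[a][s(a)] - c[a][s(b)]

i1, i2, i3 = 2, 5, 9  # removal points, in tour order

# Gain computed directly from the removed and added edges of scheme r1.
removed = c[i1][s(i1)] + c[i2][s(i2)] + c[i3][s(i3)]
added = c[i1][s(i2)] + c[i2][s(i3)] + c[i3][s(i1)]
direct = removed - added

# Same gain via the pairwise decomposition of Table 2, row r1.
decomposed = tau_plus(i1, i2) + tau_plus(i2, i3) + tau_plus(i3, i1)
```

The two quantities agree by a term-by-term regrouping; it is exactly this splitting into functions of two indices at a time that the search strategy exploits.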
Table 3. Comparing smart force (SF) and brute force (BF) in finding one Best Improving move starting from a random tour and using pure 3-OPT moves only (averaged over 1000 attempts).
Type | Size | BF_avg time (s) | SF_avg time (s) | Speed-up | BF triples | SF_avg triples | Reduction | SF_time | SF_eval | BF_time | BF_eval
(The last four columns give the ratio of each row against the previous size; a dash marks cells blank in the original.)
UNI | 400 | 0.65 | 0.01 | 75× | 41,712,000 | 177,247 | 235× | - | - | - | -
UNI | 566 | 1.80 | 0.01 | 136× | 118,966,408 | 352,174 | 337× | 1.53 | 1.98 | 2.79 | 2.85
UNI | 800 | 5.08 | 0.03 | 187× | 337,504,000 | 689,730 | 489× | 2.05 | 1.95 | 2.81 | 2.83
UNI | 1131 | 14.37 | 0.09 | 160× | 956,827,508 | 1,378,905 | 693× | 3.31 | 1.99 | 2.82 | 2.83
UNI | 1600 | 40.75 | 0.26 | 157× | 2,715,328,000 | 2,709,608 | 1002× | 2.89 | 1.96 | 2.83 | 2.83
UNI | 2262 | 115.30 | 0.64 | 179× | 7,685,229,448 | 5,406,134 | 1421× | 2.47 | 1.99 | 2.82 | 2.83
UNI | 3200 | 326.78 | 1.60 | 204× | 21,783,936,000 | 10,616,608 | 2051× | 2.48 | 1.96 | 2.83 | 2.83
UNI | 4524 | 924.09 | 3.43 | 269× | 61,604,454,080 | 21,248,234 | 2899× | 2.14 | 2.00 | 2.82 | 2.82
UNI | 6400 | 2617.77 | 7.33 | 356× | 174,516,992,000 | 42,114,426 | 4143× | 2.13 | 1.98 | 2.83 | 2.83
UNI | Avg | | | | | | | 2.37 | 1.97 | 2.81 | 2.83
GAU | 400 | 0.65 | 0.01 | 85× | 41,712,000 | 115,939 | 359× | - | - | - | -
GAU | 566 | 1.79 | 0.01 | 156× | 118,966,408 | 226,615 | 524× | 1.50 | 1.95 | 2.73 | 2.85
GAU | 800 | 5.13 | 0.02 | 221× | 337,504,000 | 426,427 | 791× | 2.02 | 1.88 | 2.87 | 2.83
GAU | 1131 | 14.23 | 0.07 | 191× | 956,827,508 | 838,088 | 1141× | 3.19 | 1.96 | 2.77 | 2.83
GAU | 1600 | 41.16 | 0.23 | 181× | 2,715,328,000 | 1,616,552 | 1679× | 3.04 | 1.92 | 2.89 | 2.83
GAU | 2262 | 114.15 | 0.50 | 227× | 7,685,229,448 | 3,195,748 | 2404× | 2.21 | 1.97 | 2.77 | 2.83
GAU | 3200 | 330.05 | 1.17 | 282× | 21,783,936,000 | 6,196,979 | 3515× | 2.33 | 1.93 | 2.89 | 2.83
GAU | 4524 | 914.85 | 2.37 | 386× | 61,604,454,080 | 12,463,367 | 4942× | 2.02 | 2.01 | 2.77 | 2.82
GAU | 6400 | 2643.95 | 5.20 | 508× | 174,516,992,000 | 23,596,890 | 7395× | 2.19 | 1.89 | 2.89 | 2.83
GAU | Avg | | | | | | | 2.31 | 1.93 | 2.82 | 2.83
GEO | 400 | 0.64 | 0.01 | 73× | 41,712,000 | 195,280 | 213× | - | - | - | -
GEO | 566 | 1.82 | 0.01 | 133× | 118,966,408 | 415,980 | 285× | 1.57 | 2.13 | 2.85 | 2.85
GEO | 800 | 5.03 | 0.03 | 143× | 337,504,000 | 904,099 | 373× | 2.55 | 2.17 | 2.76 | 2.83
GEO | 1131 | 14.52 | 0.10 | 143× | 956,827,508 | 1,922,566 | 497× | 2.90 | 2.12 | 2.88 | 2.83
GEO | 1600 | 40.34 | 0.30 | 133× | 2,715,328,000 | 4,159,708 | 652× | 2.98 | 2.16 | 2.77 | 2.83
GEO | 2262 | 116.45 | 0.79 | 146× | 7,685,229,448 | 9,338,712 | 822× | 2.62 | 2.24 | 2.88 | 2.83
GEO | 3200 | 323.51 | 2.03 | 159× | 21,783,936,000 | 21,611,397 | 1007× | 2.56 | 2.31 | 2.77 | 2.83
GEO | 4524 | 933.33 | 4.45 | 209× | 61,604,454,080 | 48,245,203 | 1276× | 2.19 | 2.23 | 2.88 | 2.82
GEO | 6400 | 2591.60 | 10.84 | 239× | 174,516,992,000 | 112,442,210 | 1552× | 2.43 | 2.33 | 2.77 | 2.83
GEO | Avg | | | | | | | 2.47 | 2.21 | 2.82 | 2.83
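The "ratio against the previous size" columns in Table 3 can be read as empirical growth exponents: consecutive sizes grow by a factor of about √2, so a step-to-step ratio ρ corresponds to an exponent log(ρ)/log(√2) = 2·log2(ρ). The sketch below (our own reading of the table, not code from the paper) shows that the brute-force ratios of about 2.83 imply cubic growth, while the SF evaluation ratios of about 1.97 imply roughly quadratic growth in the number of evaluated triples.

```python
from math import log2

def growth_exponent(ratio):
    """Exponent p such that ratio = (sqrt(2))**p, i.e. the polynomial
    degree implied when consecutive sizes grow by a factor of sqrt(2)."""
    return 2 * log2(ratio)

bf_eval_exp = growth_exponent(2.83)  # close to 3: cubic enumeration
sf_eval_exp = growth_exponent(1.97)  # close to 2: near-quadratic in practice
```

This back-of-the-envelope check is consistent with the O(n^2.5) upper bound on the average running time reported in the abstract.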
Table 4. Comparing SF and BF in finding one Best Improving move on Traveling Salesman Problem Library (TSPLIB) instances starting from a random tour (averaged over 1000 attempts).
Type | Name | Size | BF_avg time (s) | SF_avg time (s) | Speed-up | BF triples | SF_avg triples | Reduction
(A dash marks cells blank in the original table.)
euc2d | kroA100 | 100 | 0.01 | 0.01 | - | 608,000 | 33,124 | 18×
euc2d | kroB100 | 100 | 0.01 | 0.01 | - | 608,000 | 34,449 | 17×
euc2d | kroC100 | 100 | 0.01 | 0.01 | - | 608,000 | 36,688 | 16×
euc2d | kroD100 | 100 | 0.01 | 0.01 | - | 608,000 | 28,025 | 21×
euc2d | kroE100 | 100 | 0.01 | 0.01 | - | 608,000 | 39,838 | 15×
euc2d | rd100 | 100 | 0.01 | 0.01 | - | 608,000 | 20,005 | 30×
euc2d | eil101 | 101 | 0.01 | 0.01 | - | 627,008 | 20,652 | 30×
euc2d | lin105 | 105 | 0.01 | 0.01 | - | 707,000 | 38,662 | 18×
euc2d | pr107 | 107 | 0.01 | 0.01 | - | 749,428 | 99,522 | 8×
explicit | gr120 | 120 | 0.02 | 0.01 | - | 1,067,200 | 52,980 | 20×
euc2d | pr124 | 124 | 0.02 | 0.01 | - | 1,180,480 | 46,661 | 25×
euc2d | bier127 | 127 | 0.02 | 0.01 | - | 1,270,508 | 57,122 | 22×
euc2d | ch130 | 130 | 0.02 | 0.01 | - | 1,365,000 | 31,174 | 43×
euc2d | pr136 | 136 | 0.02 | 0.01 | - | 1,567,808 | 58,883 | 26×
geo | gr137 | 137 | 0.02 | 0.01 | - | 1,603,448 | 86,440 | 18×
euc2d | pr144 | 144 | 0.02 | 0.01 | - | 1,868,160 | 51,295 | 36×
euc2d | ch150 | 150 | 0.02 | 0.01 | - | 2,117,000 | 47,146 | 44×
euc2d | kroA150 | 150 | 0.02 | 0.01 | - | 2,117,000 | 93,533 | 22×
euc2d | kroB150 | 150 | 0.02 | 0.01 | - | 2,117,000 | 120,822 | 17×
euc2d | pr152 | 152 | 0.02 | 0.01 | - | 2,204,608 | 63,987 | 34×
euc2d | u159 | 159 | 0.03 | 0.01 | - | 2,530,220 | 127,254 | 19×
explicit | si175 | 175 | 0.03 | 0.01 | - | 3,391,500 | 48,981 | 69×
explicit | brg180 | 180 | 0.03 | 0.01 | - | 3,696,000 | 33,125 | 111×
euc2d | rat195 | 195 | 0.05 | 0.01 | - | 4,717,700 | 196,516 | 24×
euc2d | d198 | 198 | 0.04 | 0.02 | - | 4,942,344 | 483,916 | 10×
euc2d | kroA200 | 200 | 0.04 | 0.01 | - | 5,096,000 | 221,364 | 23×
euc2d | kroB200 | 200 | 0.04 | 0.01 | - | 5,096,000 | 189,160 | 26×
geo | gr202 | 202 | 0.04 | 0.01 | - | 5,252,808 | 150,173 | 34×
euc2d | ts225 | 225 | 0.05 | 0.01 | - | 7,293,000 | 110,879 | 65×
euc2d | tsp225 | 225 | 0.05 | 0.01 | - | 7,293,000 | 141,260 | 51×
euc2d | pr226 | 226 | 0.06 | 0.01 | - | 7,392,008 | 215,307 | 34×
geo | gr229 | 229 | 0.06 | 0.01 | - | 7,694,400 | 233,132 | 33×
euc2d | gil262 | 262 | 0.09 | 0.01 | - | 11,581,448 | 190,348 | 60×
euc2d | pr264 | 264 | 0.09 | 0.03 | - | 11,851,840 | 1,248,519 | 9×
euc2d | a280 | 280 | 0.10 | 0.02 | - | 14,168,000 | 465,152 | 30×
euc2d | pr299 | 299 | 0.12 | 0.02 | - | 17,288,180 | 696,974 | 24×
euc2d | lin318 | 318 | 0.17 | 0.02 | - | 20,835,784 | 408,556 | 50×
euc2d | rd400 | 400 | 0.31 | 0.02 | 15× | 41,712,000 | 579,874 | 71×
euc2d | fl417 | 417 | 0.37 | 0.09 | - | 47,303,368 | 5,164,545 | 9×
euc2d | pr439 | 439 | 0.38 | 0.03 | 12× | 55,252,540 | 1,528,226 | 36×
euc2d | pcb442 | 442 | 0.43 | 0.03 | 14× | 56,400,968 | 759,689 | 74×
euc2d | d493 | 493 | 0.60 | 0.06 | 10× | 78,430,384 | 3,381,605 | 23×
explicit | si535 | 535 | 0.73 | 0.07 | 10× | 100,376,700 | 4,008,177 | 25×
geo | ali535 | 535 | 0.84 | 0.05 | 16× | 100,376,700 | 2,472,210 | 40×
explicit | pa561 | 561 | 1.11 | 0.03 | 37× | 115,824,808 | 1,041,688 | 111×
euc2d | u574 | 574 | 1.08 | 0.05 | 21× | 124,110,280 | 2,914,483 | 42×
euc2d | rat575 | 575 | 0.93 | 0.08 | 11× | 124,763,500 | 3,654,861 | 34×
euc2d | p654 | 654 | 1.29 | 0.36 | - | 183,926,600 | 27,434,417 | 7×
euc2d | d657 | 657 | 1.37 | 0.05 | 27× | 186,481,128 | 2,548,932 | 73×
geo | gr666 | 666 | 1.42 | 0.07 | 20× | 194,286,408 | 4,045,491 | 48×
euc2d | u724 | 724 | 1.86 | 0.09 | 20× | 249,866,880 | 5,334,184 | 46×
euc2d | rat783 | 783 | 2.41 | 0.13 | 18× | 316,364,364 | 8,343,380 | 37×
euc2d | pr1002 | 1002 | 6.91 | 0.24 | 28× | 664,664,008 | 11,357,581 | 58×
explicit | si1032 | 1032 | 8.01 | 0.10 | 80× | 726,360,128 | 2,854,805 | 254×
euc2d | u1060 | 1060 | 8.90 | 0.60 | 14× | 787,283,200 | 31,809,054 | 24×
euc2d | vm1084 | 1084 | 9.88 | 0.42 | 23× | 842,137,920 | 19,784,522 | 42×
euc2d | pcb1173 | 1173 | 13.46 | 0.31 | 43× | 1,067,736,544 | 12,760,475 | 83×
euc2d | d1291 | 1291 | 19.35 | 0.39 | 49× | 1,424,473,908 | 14,188,288 | 100×
euc2d | rl1304 | 1304 | 20.30 | 0.46 | 44× | 1,468,043,200 | 16,276,240 | 90×
euc2d | rl1323 | 1323 | 21.15 | 0.51 | 41× | 1,533,305,844 | 18,847,965 | 81×
euc2d | nrw1379 | 1379 | 24.52 | 0.89 | 27× | 1,736,850,500 | 34,849,494 | 49×
euc2d | fl1400 | 1400 | 25.92 | 6.93 | - | 1,817,592,000 | 243,306,185 | 7×
euc2d | u1432 | 1432 | 27.88 | 0.67 | 41× | 1,945,377,728 | 25,930,124 | 75×
euc2d | fl1577 | 1577 | 39.56 | 0.93 | 42× | 2,599,690,808 | 31,376,042 | 82×
euc2d | d1655 | 1655 | 46.35 | 1.09 | 42× | 3,005,645,500 | 34,926,807 | 86×
euc2d | vm1748 | 1748 | 57.55 | 1.46 | 39× | 3,542,370,944 | 44,030,401 | 80×
euc2d | u1817 | 1817 | 64.49 | 1.98 | 32× | 3,979,418,968 | 64,252,642 | 61×
euc2d | rl1889 | 1889 | 77.10 | 2.61 | 29× | 4,472,320,840 | 75,746,239 | 59×
euc2d | d2103 | 2103 | 110.25 | 2.92 | 37× | 6,173,990,204 | 83,064,101 | 74×
euc2d | u2152 | 2152 | 119.08 | 4.28 | 27× | 6,616,332,608 | 124,082,817 | 53×
euc2d | u2319 | 2319 | 153.02 | 1.97 | 77× | 8,281,782,860 | 64,064,827 | 129×
euc2d | pr2392 | 2392 | 169.63 | 3.90 | 43× | 9,089,848,768 | 103,550,009 | 87×
euc2d | pcb3038 | 3038 | 465.93 | 9.31 | 50× | 18,637,364,424 | 198,847,218 | 93×
euc2d | fl3795 | 3795 | 908.77 | 21.84 | 41× | 36,350,761,700 | 480,093,617 | 75×
euc2d | fnl4461 | 4461 | 1476.62 | 36.08 | 40× | 59,064,805,808 | 692,003,539 | 85×
euc2d | rl5915 | 5915 | 3443.91 | 69.08 | 49× | 137,756,446,100 | 1,237,347,739 | 111×
euc2d | rl5934 | 5934 | 3477.22 | 78.93 | 44× | 139,088,885,320 | 1,460,420,878 | 95×
Table 5. Comparing SF and BF in converging to a local optimum starting from a random tour.
Type: UNI
Size / row | 0–10% | 10–20% | 20–30% | 30–40% | 40–50% | 50–60% | 60–70% | 70–80% | 80–90% | 90–100% | per conv. (s) | per move (s) | Speed-up
(Percentile columns report times in milliseconds over the convergence; "per conv." and "per move" are average times in seconds.)
400 BF time | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 89.706 | 0.270 |
400 SF time | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 3.196 | 0.010 | 28×
400 SF heap | 28,323 | 27,938 | 24,305 | 23,474 | 19,134 | 12,512 | 8385 | 5994 | 4674 | 3902
400 SF exps | 1660 | 1683 | 1653 | 1520 | 1651 | 1869 | 2102 | 2306 | 2560 | 2703
400 SF wins | 8 | 8 | 8 | 8 | 6 | 6 | 5 | 4 | 3 | 2
566 BF time | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 384.567 | 0.789 |
566 SF time | 10 | 10 | 10 | 10 | 10 | 10 | 20 | 20 | 20 | 20 | 7.309 | 0.015 | 52×
566 SF heap | 50,592 | 50,104 | 42,952 | 40,692 | 31,349 | 19,848 | 13,007 | 9268 | 7183 | 6047
566 SF exps | 2446 | 2437 | 2406 | 2194 | 2629 | 2932 | 3251 | 3618 | 3993 | 4232
566 SF wins | 8 | 9 | 9 | 8 | 7 | 6 | 5 | 4 | 4 | 2
800 BF time | 2350 | 2360 | 2360 | 2360 | 2360 | 2360 | 2360 | 2360 | 2360 | 2360 | 1679.230 | 2.356 |
800 SF time | 30 | 30 | 30 | 30 | 30 | 40 | 40 | 40 | 40 | 40 | 25.876 | 0.036 | 64×
800 SF heap | 90,223 | 89,697 | 75,887 | 71,931 | 51,365 | 31,304 | 20,344 | 14,385 | 11,160 | 9386
800 SF exps | 3570 | 3509 | 3518 | 3274 | 4076 | 4575 | 5106 | 5697 | 6249 | 6641
800 SF wins | 9 | 9 | 9 | 8 | 7 | 6 | 5 | 5 | 4 | 3
1131 BF time | 13,920 | 13,840 | 13,840 | 13,900 | 13,940 | 13,930 | 13,930 | 14,030 | 13,890 | 13,810 | 14,502.100 | 13.904 |
1131 SF time | 170 | 170 | 170 | 170 | 190 | 200 | 210 | 220 | 230 | 240 | 204.559 | 0.196 | 70×
1131 SF heap | 162,645 | 161,897 | 134,756 | 124,316 | 82,993 | 49,690 | 31,861 | 22,490 | 17,348 | 14,658
1131 SF exps | 5223 | 5135 | 5164 | 4960 | 6297 | 7158 | 8025 | 8955 | 9613 | 10,466
1131 SF wins | 9 | 10 | 10 | 8 | 7 | 6 | 5 | 5 | 4 | 3
1600 BF time | 45,220 | 45,320 | 45,190 | 45,220 | 45,010 | 45,090 | 44,860 | 44,940 | 44,900 | 45,020 | 68,378.500 | 45.075 |
1600 SF time | 400 | 400 | 390 | 400 | 450 | 470 | 500 | 540 | 570 | 580 | 714.949 | 0.473 | 95×
1600 SF heap | 287,797 | 290,431 | 235,283 | 218,438 | 136,581 | 79,698 | 50,224 | 35,107 | 27,131 | 22,714
1600 SF exps | 7712 | 7611 | 7566 | 7598 | 9853 | 11,000 | 12,333 | 14,058 | 15,254 | 16,105
1600 SF wins | 10 | 10 | 11 | 9 | 8 | 7 | 6 | 5 | 4 | 3
Type: GAU
400 BF time | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 72.014 | 0.269 |
400 SF time | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 2.558 | 0.010 | 28×
400 SF heap | 21,527 | 22,403 | 22,803 | 22,377 | 20,184 | 13,127 | 7995 | 5499 | 4256 | 3604
400 SF exps | 1373 | 1642 | 1714 | 1723 | 1674 | 1806 | 2013 | 2267 | 2483 | 2602
400 SF wins | 8 | 8 | 8 | 8 | 8 | 7 | 6 | 5 | 3 | 2
566 BF time | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 306.899 | 0.790 |
566 SF time | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 20 | 20 | 20 | 5.672 | 0.015 | 54×
566 SF heap | 38,263 | 39,686 | 40,622 | 39,908 | 34,209 | 19,657 | 11,764 | 8049 | 6168 | 5141
566 SF exps | 2041 | 2452 | 2581 | 2527 | 2436 | 2725 | 3125 | 3360 | 3559 | 3694
566 SF wins | 8 | 8 | 9 | 9 | 8 | 7 | 6 | 5 | 4 | 2
800 BF time | 2400 | 2400 | 2400 | 2400 | 2400 | 2400 | 2400 | 2400 | 2400 | 2400 | 1341.560 | 2.397 |
800 SF time | 30 | 40 | 40 | 40 | 30 | 40 | 40 | 40 | 40 | 40 | 20.177 | 0.036 | 66×
800 SF heap | 67,366 | 68,982 | 71,086 | 69,951 | 57,319 | 30,939 | 18,068 | 12,386 | 9458 | 7915
800 SF exps | 3120 | 3786 | 3893 | 3779 | 3669 | 4200 | 4787 | 5208 | 5444 | 5672
800 SF wins | 9 | 9 | 9 | 9 | 9 | 7 | 6 | 5 | 4 | 3
1131 BF time | 13,680 | 13,470 | 13,630 | 13,550 | 13,660 | 13,360 | 13,510 | 13,440 | 13,670 | 13,670 | 11,107.700 | 13.563 |
1131 SF time | 170 | 180 | 190 | 180 | 180 | 190 | 200 | 210 | 210 | 210 | 156.108 | 0.193 | 71×
1131 SF heap | 118,999 | 120,647 | 124,113 | 122,505 | 92,853 | 45,953 | 26,457 | 18,219 | 14,015 | 11,834
1131 SF exps | 4665 | 5685 | 5958 | 5627 | 5562 | 6517 | 7244 | 7869 | 8268 | 8644
1131 SF wins | 10 | 10 | 10 | 10 | 9 | 8 | 6 | 5 | 4 | 3
1600 BF time | 44,800 | 44,850 | 44,590 | 44,370 | 44,370 | 44,760 | 44,570 | 44,400 | 44,050 | 44,590 | 50,638.800 | 44.537 |
1600 SF time | 400 | 440 | 440 | 430 | 420 | 460 | 480 | 500 | 500 | 510 | 535.140 | 0.459 | 94×
1600 SF heap | 210,066 | 216,037 | 217,268 | 213,260 | 154,158 | 70,932 | 40,621 | 27,676 | 21,196 | 17,722
1600 SF exps | 7157 | 8705 | 8994 | 8645 | 8263 | 9987 | 11,196 | 12,073 | 12,542 | 12,765
1600 SF wins | 10 | 10 | 10 | 11 | 10 | 8 | 7 | 5 | 4 | 3
Type: GEO
400 BF time | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 270 | 78.551 | 0.269 |
400 SF time | 10 | 20 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 10 | 3.046 | 0.010 | 25×
400 SF heap | 31,272 | 32,339 | 26,571 | 20,321 | 15,512 | 11,252 | 7283 | 4500 | 3022 | 2346
400 SF exps | 3133 | 6161 | 4650 | 2343 | 1570 | 1401 | 1420 | 1471 | 1641 | 1798
400 SF wins | 8 | 8 | 8 | 8 | 7 | 7 | 7 | 6 | 4 | 3
566 BF time | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 790 | 336.166 | 0.791 |
566 SF time | 20 | 30 | 30 | 20 | 10 | 10 | 10 | 10 | 10 | 10 | 7.322 | 0.017 | 45×
566 SF heap | 56,204 | 59,118 | 47,237 | 34,456 | 25,403 | 17,923 | 11,397 | 6730 | 4321 | 3324
566 SF exps | 5496 | 11,179 | 7773 | 3375 | 2186 | 1989 | 1943 | 2030 | 2272 | 2573
566 SF wins | 9 | 9 | 9 | 8 | 8 | 8 | 7 | 6 | 5 | 3
800 BF time | 2380 | 2380 | 2380 | 2380 | 2380 | 2380 | 2380 | 2380 | 2380 | 2380 | 1481.160 | 2.379 |
800 SF time | 60 | 90 | 70 | 40 | 30 | 30 | 30 | 30 | 30 | 30 | 27.153 | 0.044 | 54×
800 SF heap | 101,220 | 108,105 | 83,468 | 57,809 | 40,901 | 28,319 | 17,052 | 9807 | 6149 | 4652
800 SF exps | 9393 | 21,020 | 13,175 | 4982 | 3113 | 2774 | 2778 | 2901 | 3213 | 3595
800 SF wins | 9 | 10 | 9 | 9 | 8 | 8 | 7 | 6 | 5 | 4
1131 BF time | 13,830 | 13,800 | 13,920 | 13,790 | 13,730 | 13,810 | 13,830 | 13,980 | 13,760 | 13,820 | 12,415.400 | 13.826 |
1131 SF time | 320 | 600 | 380 | 200 | 160 | 150 | 150 | 160 | 170 | 180 | 222.356 | 0.247 | 55×
1131 SF heap | 185,437 | 200,065 | 149,514 | 98,606 | 66,750 | 46,256 | 26,951 | 14,811 | 8962 | 6624
1131 SF exps | 17,331 | 40,887 | 23,214 | 7841 | 4417 | 3879 | 3930 | 4007 | 4483 | 5079
1131 SF wins | 10 | 10 | 10 | 10 | 9 | 8 | 8 | 6 | 6 | 4
1600 BF time | 44,440 | 44,820 | 44,680 | 44,400 | 44,470 | 44,720 | 44,520 | 44,260 | 44,190 | 44,650 | 59,153.900 | 44.510 |
1600 SF time | 910 | 1920 | 1120 | 470 | 370 | 360 | 350 | 360 | 380 | 410 | 871.779 | 0.664 | 67×
1600 SF heap | 335,039 | 371,782 | 263,962 | 166,173 | 111,092 | 71,459 | 40,862 | 21,573 | 12,611 | 9327
1600 SF exps | 31,745 | 80,773 | 42,948 | 11,235 | 6232 | 5492 | 5310 | 5586 | 6313 | 7201
1600 SF wins | 11 | 11 | 10 | 10 | 9 | 9 | 8 | 7 | 6 | 4
Table 6. Average times needed to perform convergences to a local optimum tour starting from 100 different random tours. Instances are taken from TSPLIB. Brute-force running times are estimated for sizes larger than 3000.
Type | Name | Size | BF time per conv. | SF time per conv. | Speed-up | BF time per move (s) | SF time per move (s) | BF triples per move | SF triples per move | Reduct
(A dash marks cells blank in the original table.)
euc2d | kroA100 | 100 | 0.23 s | 0.03 s | - | 0.000036 | 0.000004 | 608,000 | 16,136 | 37×
euc2d | kroB100 | 100 | 0.23 s | 0.03 s | - | 0.000036 | 0.000004 | 608,000 | 16,290 | 37×
euc2d | kroC100 | 100 | 0.25 s | 0.03 s | - | 0.000038 | 0.000005 | 608,000 | 16,244 | 37×
euc2d | kroD100 | 100 | 0.22 s | 0.03 s | - | 0.000035 | 0.000005 | 608,000 | 15,758 | 38×
euc2d | kroE100 | 100 | 0.23 s | 0.03 s | - | 0.000036 | 0.000004 | 608,000 | 17,167 | 35×
euc2d | rd100 | 100 | 0.23 s | 0.04 s | - | 0.000036 | 0.000005 | 608,000 | 14,681 | 41×
euc2d | eil101 | 101 | 0.27 s | 0.03 s | 10× | 0.000044 | 0.000004 | 627,008 | 14,451 | 43×
euc2d | lin105 | 105 | 0.33 s | 0.04 s | - | 0.000048 | 0.000005 | 707,000 | 20,968 | 33×
euc2d | pr107 | 107 | 0.31 s | 0.06 s | - | 0.000045 | 0.000009 | 749,428 | 53,947 | 13×
explicit | gr120 | 120 | 0.49 s | 0.06 s | - | 0.000062 | 0.000007 | 1,067,200 | 27,039 | 39×
euc2d | pr124 | 124 | 0.57 s | 0.07 s | - | 0.000068 | 0.000008 | 1,180,480 | 30,314 | 38×
euc2d | bier127 | 127 | 0.73 s | 0.09 s | - | 0.000089 | 0.000010 | 1,270,508 | 44,807 | 28×
euc2d | ch130 | 130 | 0.76 s | 0.08 s | - | 0.000089 | 0.000008 | 1,365,000 | 26,041 | 52×
euc2d | pr136 | 136 | 1.01 s | 0.08 s | 12× | 0.000111 | 0.000008 | 1,567,808 | 34,011 | 46×
geo | gr137 | 137 | 1.11 s | 0.10 s | 11× | 0.000117 | 0.000010 | 1,603,448 | 39,818 | 40×
euc2d | pr144 | 144 | 1.19 s | 0.11 s | 10× | 0.000116 | 0.000010 | 1,868,160 | 41,718 | 44×
euc2d | ch150 | 150 | 1.24 s | 0.12 s | 10× | 0.000125 | 0.000011 | 2,117,000 | 30,511 | 69×
euc2d | kroA150 | 150 | 1.36 s | 0.11 s | 12× | 0.000134 | 0.000010 | 2,117,000 | 40,178 | 52×
euc2d | kroB150 | 150 | 1.27 s | 0.11 s | 11× | 0.000125 | 0.000010 | 2,117,000 | 44,212 | 47×
euc2d | pr152 | 152 | 1.31 s | 0.15 s | - | 0.000129 | 0.000014 | 2,204,608 | 66,940 | 32×
euc2d | u159 | 159 | 1.62 s | 0.15 s | 10× | 0.000150 | 0.000013 | 2,530,220 | 50,000 | 50×
explicit | si175 | 175 | 2.22 s | 0.18 s | 12× | 0.000200 | 0.000015 | 3,391,500 | 43,200 | 78×
explicit | brg180 | 180 | 2.55 s | 0.14 s | 17× | 0.000227 | 0.000012 | 3,696,000 | 19,631 | 188×
euc2d | rat195 | 195 | 4.91 s | 0.30 s | 16× | 0.000359 | 0.000021 | 4,717,700 | 65,994 | 71×
euc2d | d198 | 198 | 4.89 s | 0.53 s | - | 0.000346 | 0.000037 | 4,942,344 | 211,876 | 23×
euc2d | kroA200 | 200 | 4.27 s | 0.29 s | 14× | 0.000307 | 0.000020 | 5,096,000 | 79,662 | 63×
euc2d | kroB200 | 200 | 4.28 s | 0.27 s | 16× | 0.000308 | 0.000019 | 5,096,000 | 74,282 | 68×
geo | gr202 | 202 | 4.30 s | 0.32 s | 13× | 0.000312 | 0.000023 | 5,252,808 | 114,969 | 45×
euc2d | ts225 | 225 | 7.20 s | 0.37 s | 19× | 0.000436 | 0.000022 | 7,293,000 | 69,747 | 104×
euc2d | tsp225 | 225 | 6.87 s | 0.37 s | 18× | 0.000438 | 0.000023 | 7,293,000 | 78,513 | 92×
euc2d | pr226 | 226 | 6.93 s | 0.45 s | 15× | 0.000438 | 0.000028 | 7,392,008 | 130,923 | 56×
geo | gr229 | 229 | 7.85 s | 0.43 s | 18× | 0.000485 | 0.000026 | 7,694,400 | 113,495 | 67×
euc2d | gil262 | 262 | 13.94 s | 0.63 s | 21× | 0.000764 | 0.000034 | 11,581,448 | 107,253 | 107×
euc2d | pr264 | 264 | 14.56 s | 1.76 s | - | 0.000753 | 0.000091 | 11,851,840 | 582,904 | 20×
euc2d | a280 | 280 | 17.47 s | 0.86 s | 20× | 0.000895 | 0.000043 | 14,168,000 | 148,764 | 95×
euc2d | pr299 | 299 | 24.69 s | 1.31 s | 18× | 0.001106 | 0.000058 | 17,288,180 | 237,572 | 72×
euc2d | lin318 | 318 | 36.14 s | 1.48 s | 24× | 0.001561 | 0.000063 | 20,835,784 | 202,095 | 103×
euc2d | rd400 | 400 | 1 min 29.01 s | 2.77 s | 32× | 0.003020 | 0.000094 | 41,712,000 | 272,730 | 152×
euc2d | fl417 | 417 | 1 min 49.92 s | 7.56 s | 14× | 0.003488 | 0.000239 | 47,303,368 | 1,223,927 | 38×
euc2d | pr439 | 439 | 2 min 3.84 s | 4.62 s | 26× | 0.003662 | 0.000136 | 55,252,540 | 584,003 | 94×
euc2d | pcb442 | 442 | 2 min 12.40 s | 3.53 s | 37× | 0.004095 | 0.000109 | 56,400,968 | 333,187 | 169×
euc2d | d493 | 493 | 3 min 28.93 s | 7.82 s | 26× | 0.005774 | 0.000216 | 78,430,384 | 1,127,410 | 69×
explicit | si535 | 535 | 4 min 18.27 s | 10.05 s | 25× | 0.007114 | 0.000277 | 100,376,700 | 1,565,152 | 64×
geo | ali535 | 535 | 5 min 41.66 s | 10.73 s | 31× | 0.008312 | 0.000261 | 100,376,700 | 1,695,421 | 59×
explicit | pa561 | 561 | 7 min 3.19 s | 5.76 s | 73× | 0.010472 | 0.000143 | 115,824,808 | 567,301 | 204×
euc2d | u574 | 574 | 7 min 35.81 s | 8.34 s | 54× | 0.010323 | 0.000188 | 124,110,280 | 915,879 | 135×
euc2d | rat575 | 575 | 6 min 32.08 s | 11.08 s | 35× | 0.009000 | 0.000256 | 124,763,500 | 960,004 | 129×
euc2d | p654 | 654 | 10 min 26.77 s | 40.87 s | 15× | 0.012445 | 0.000812 | 183,926,600 | 5,691,973 | 32×
euc2d | d657 | 657 | 11 min 8.90 s | 12.13 s | 55× | 0.013224 | 0.000239 | 186,481,128 | 1,129,597 | 165×
geo | gr666 | 666 | 11 min 49.99 s | 18.80 s | 37× | 0.013609 | 0.000360 | 194,286,408 | 2,169,157 | 89×
euc2d | u724 | 724 | 17 min 7.68 s | 18.39 s | 55× | 0.018141 | 0.000325 | 249,866,880 | 1,508,776 | 165×
euc2d | rat783 | 783 | 23 min 32.28 s | 25.51 s | 55× | 0.023419 | 0.000423 | 316,364,364 | 2,074,088 | 152×
euc2d | pr1002 | 1002 | 1 h 17 min 33.75 s | 1 min 10.49 s | 66× | 5.759591 | 0.086281 | 664,664,008 | 3,309,044 | 200×
explicit | si1032 | 1032 | 1 h 19 min 20.40 s | 1 min 16.41 s | 62× | 6.620862 | 0.106720 | 726,360,128 | 4,828,852 | 150×
euc2d | u1060 | 1060 | 1 h 42 min 52.56 s | 2 min 22.57 s | 43× | 7.119446 | 0.158585 | 787,283,200 | 6,946,337 | 113×
euc2d | vm1084 | 1084 | 2 h 23 min 26.78 s | 1 min 57.18 s | 73× | 9.595072 | 0.130059 | 842,137,920 | 4,662,438 | 180×
euc2d | pcb1173 | 1173 | 2 h 49 min 37.60 s | 1 min 57.84 s | 86× | 10.769947 | 0.125490 | 1,067,736,544 | 3,809,648 | 280×
euc2d | d1291 | 1291 | 4 h 39 min 48.60 s | 3 min 6.53 s | 90× | 15.030080 | 0.170189 | 1,424,473,908 | 5,350,874 | 266×
euc2d | rl1304 | 1304 | 5 h 44 min 46.60 s | 3 min 29.59 s | 98× | 19.189795 | 0.190365 | 1,468,043,200 | 5,014,495 | 292×
euc2d | rl1323 | 1323 | 6 h 30.20 s | 3 min 41.86 s | 97× | 19.880698 | 0.203358 | 1,533,305,844 | 5,591,363 | 274×
euc2d | nrw1379 | 1379 | 6 h 10 min 16.00 s | 4 min 44.64 s | 78× | 19.419580 | 0.251897 | 1,736,850,500 | 7,465,279 | 232×
euc2d | fl1400 | 1400 | 6 h 31 min 4.60 s | 22 min 31.22 s | 17× | 21.063375 | 1.186321 | 1,817,592,000 | 43,868,182 | 41×
euc2d | u1432 | 1432 | 6 h 43 min 17.80 s | 4 min 16.01 s | 94× | 22.118647 | 0.229192 | 1,945,377,728 | 6,643,893 | 292×
euc2d | fl1577 | 1577 | 11 h 36 min 11.30 s | 9 min 28.24 s | 73× | 30.247139 | 0.423430 | 2,599,690,808 | 13,612,026 | 190×
euc2d | d1655 | 1655 | 13 h 18 min 1.80 s | 8 min 37.20 s | 92× | 34.496974 | 0.376148 | 3,005,645,500 | 10,602,602 | 283×
euc2d | vm1748 | 1748 | 22 h 42 min 38.40 s | 12 min 59.16 s | 104× | 55.279513 | 0.527883 | 3,542,370,944 | 12,057,284 | 293×
euc2d | u1817 | 1817 | 22 h 16 min 40.00 s | 14 min 25.74 s | 92× | 52.247557 | 0.564738 | 3,979,418,968 | 15,075,802 | 263×
euc2d | rl1889 | 1889 | 1 day 9 h 41 min 56.00 s | 20 min 57.61 s | 96× | 74.199388 | 0.769651 | 4,472,320,840 | 17,805,568 | 251×
euc2d | d2103 | 2103 | 1 day 18 h 55 min 27.00 s | 24 min 13.78 s | 106× | 83.845360 | 0.776591 | 6,173,990,204 | 19,466,626 | 317×
euc2d | u2152 | 2152 | 2 days 1 min 17.00 s | 33 min 9.13 s | 86× | 93.903856 | 1.079289 | 6,616,332,608 | 27,790,153 | 238×
euc2d | u2319 | 2319 | 2 days 13 h 49 min 18.00 s | 20 min 37.50 s | 179× | 130.379613 | 0.724956 | 8,281,782,860 | 17,871,452 | 463×
euc2d | pr2392 | 2392 | 3 days 2 h 35 min 26.00 s | 37 min 40.77 s | 118× | 130.415735 | 1.083782 | 9,089,848,768 | 23,979,044 | 379×
euc2d | pcb3038 | 3038 | 15 days 2 h 23 min 35.51 s | 1 h 41 min 4.11 s | 215× | 494.172541 | 2.297011 | 18,637,364,424 | 42,689,419 | 436×
euc2d | fl3795 | 3795 | 29 days 10 h 49 min 13.32 s | 8 h 27.90 s | 88× | 771.778380 | 8.743676 | 36,350,761,700 | 189,013,692 | 192×
euc2d | fnl4461 | 4461 | 47 days 20 h 28 min 56.41 s | 9 h 21 min 3.70 s | 122× | 1035.965022 | 8.434903 | 59,064,805,808 | 140,512,377 | 420×
euc2d | rl5915 | 5915 | 111 days 14 h 35 min 51.23 s | 23 h 46 min 57.90 s | 112× | 1774.558562 | 15.755962 | 137,756,446,100 | 251,517,395 | 547×
euc2d | rl5934 | 5934 | 112 days 16 h 30 min 21.97 s | 1 day 3 h 55 min 5.00 s | 96× | 1771.510548 | 18.286935 | 139,088,885,320 | 306,301,694 | 454×

Lancia, G.; Dalpasso, M. Finding the Best 3-OPT Move in Subcubic Time. Algorithms 2020, 13, 306. https://0-doi-org.brum.beds.ac.uk/10.3390/a13110306

