Article

Teaching–Learning Based Optimization (TLBO) with Variable Neighborhood Search to Retail Shelf-Space Allocation

Department of Distribution Management, National Taichung University of Science and Technology, Taichung 1001, Taiwan
*
Author to whom correspondence should be addressed.
Current address: No.129, Sec.3, San-min Rd., Taichung 40401, Taiwan.
Submission received: 31 May 2020 / Revised: 30 July 2020 / Accepted: 3 August 2020 / Published: 5 August 2020
(This article belongs to the Special Issue Optimization of Resources)

Abstract

Shelf space is a scarce and expensive resource in the retail industry because a large number of products compete for limited display space. Thus, shelf-space allocation is frequently implemented in shops to increase product sales and profits. In the past few decades, numerous models and solution methods have been developed to deal with the shelf-space allocation problem (SSAP). In this paper, a novel population-oriented metaheuristic algorithm, teaching–learning-based optimization (TLBO), is applied to solve the problem and compared with existing solution methods with respect to solution performance. Further, a hybrid algorithm that combines TLBO with variable neighborhood search (VNS) is proposed to enhance the performance of the basic TLBO. The research results show that the proposed TLBO-VNS algorithm is superior to other algorithms in terms of solution performance, in addition to using fewer control parameters. Therefore, the proposed TLBO-VNS algorithm has considerable potential for solving SSAP.

1. Introduction

The shelf-space allocation problem (SSAP) is one of the primary concerns for retailers [1]. A good shelf-space allocation plan guides retailers to place merchandise in limited shelf space to increase the profitability of retail stores. The basic idea of shelf-space allocation is to display products on shelves in a way that better attracts consumers and generates more profit for the store. The current solutions to SSAP are generally based on two models: the commercial model and the optimization model [2]. In the commercial model, retailers use commercial software such as Spaceman or Prospace to plan shelf layouts and manage product sales, profits, and inventory turnover. The commercial software uses some simple guidelines (e.g., ranking by gross profit margins) to formulate the operation process; the purpose is to help the retailer easily operate these systems in practice to implement decisions about shelf-space allocation [3]. However, the space allocation decisions of these commercial systems neglect the influence of the in-store display effect on sales, so the commercial model cannot make ideal decisions and performs far below the optimal level [4].
In the optimization model, it is assumed that the unit sales of a product are affected by the shelf space allocated for it. This dependency is called “space elasticity” [5]. The shelf space occupied by a single product is called “facing”. The optimization model aims to determine the amount of facings allocated for each product under the limit of the total shelf space of the store, so that the total profit is maximized. However, the use of space elasticity does not fully explain the influence of the in-store display on customer shopping decisions. To this end, subsequent research also added other factors to the demand function to complete the shelf-space allocation model, such as substitution and complementarity between products [6] and location [1].
The optimization models and solution methods for SSAP have been extensively studied for more than 30 years [7]. Researchers have tried to reduce the necessary resolution time by applying heuristics (e.g., [8]) or metaheuristic methods (e.g., [9,10,11]) to the basic SSAP. These methods are indeed necessary and can benefit retailers, because SSAP is usually nondeterministic polynomial-time hard (NP-hard). NP-hardness implies that no efficient method is known for solving large-scale instances within a reasonable time. As illustrated by Urban [12], there are more than a trillion possible shelf-space options for 40 kinds of products. Most of the methods proposed for solving SSAP are metaheuristic methods because they are more flexible than heuristic methods and can cover a wider range of possible solutions in a shorter time [13]. Metaheuristic methods can be roughly divided into single-point solution methods that generate a single solution at each iteration (e.g., simulated annealing) and multipoint solution methods that generate a population of solutions at each iteration (e.g., genetic algorithms). Since the population information exploited by multipoint solution methods can greatly improve search quality, most of the literature on SSAP uses multipoint solution methods to find near-optimal solutions to the optimization model. See Hansen et al. [13] for a good review of heuristic and metaheuristic approaches to SSAP.
As the literature continues to advance multipoint solution methods, Rao et al. [14] pointed out that traditional multipoint solution methods require not only general control parameters (e.g., population size and maximum number of generations) but also algorithm-specific control parameters (e.g., the crossover rate and mutation rate in genetic algorithms), which limits their application. For these reasons, Rao et al. [14] proposed a new algorithm, called teaching–learning based optimization (TLBO), which was inspired by the teaching–learning process of human beings. TLBO is free of algorithm-specific parameters and is able to solve optimization problems in a short time. It has been tested on some unconstrained and constrained nonlinear programming problems, including some combinatorial optimization problems, and has achieved considerable success [15]. According to recent literature reviews, TLBO seems to have the potential to solve combinatorial optimization problems. However, its performance has not been tested on the shelf-space allocation problem. It is a well-known fact that the continuous development of metaheuristics helps to provide effective solutions to optimization problems. In view of this, we developed TLBO-based solution methods to solve the SSAP effectively.
In this research, two TLBO-based solution methods were developed for the optimization model proposed by Yang and Chen [1], which has been recognized as one of the most representative models of SSAP [16]. This research first attempted to solve SSAP with the basic TLBO solution method. In order to evaluate its potential for solving SSAP, we compared its solution results with those of Yang’s heuristic algorithm [8], Yang’s improved heuristic algorithm [8], the genetic algorithm (GA) [16], and the genetic algorithm with variable neighborhood search (GA-VNS) [16]. Next, this study developed a hybrid of TLBO with variable neighborhood search (abbreviated as TLBO-VNS) to enhance the performance of the basic TLBO solution method for the SSAP optimization model of Yang and Chen [1]. The rest of this paper is organized as follows. Section 2 briefly reviews the relevant literature. Section 3 introduces Yang and Chen’s model and some extant solution methods. Section 4 presents the basic TLBO solution method for Yang and Chen’s model. Section 5 presents the experimental settings and comparison results. Section 6 describes the proposed TLBO-VNS algorithm and compares it with the other solution methods. Finally, Section 7 draws conclusions and makes some suggestions for further research.

2. Related Works

The first optimization model for SSAP was inspired by the concept of space elasticity proposed by Curhan [5]. Space elasticity was defined as the ratio of the relative change in unit sales to the relative change in unit space. Curhan [5] estimated the space elasticity through a regression analysis of nearly 500 retail products and obtained an average space elasticity of 0.212. Curhan [5] found that the arrangement of shelf space is related to product sales, which shows that the location of displayed products and the shelf space allocated to products can affect consumers’ purchasing decisions. Following the work of Curhan [5], Hansen and Heinsbroek [9] developed the first SSAP optimization model in the form of mixed integer nonlinear programming (MINLP). The optimization model uses space elasticity to describe the demand for each product and takes the total profit of the retail store as the objective function. They solved the SSAP for a preselected product mix under the following three constraints: (1) a shelf-capacity constraint, (2) a minimum facing number constraint for each product, and (3) the decision variables must be nonnegative integers. The solution method they used is based on a generalized Lagrange multiplier technique, which is only guaranteed to find local solutions of nonconvex programs. Corstjens and Doyle [17] believed that product substitutability and complementarity are also relevant factors for product sales. Building on the model of Hansen and Heinsbroek [9], they developed a model that included space elasticity and cross-elasticity in the demand function and included more items in the cost structure, including purchase costs, transportation costs, and out-of-stock costs. Besides, it differs from the model of Hansen and Heinsbroek [9] mainly in these aspects: (1) it includes an upper bound on product availability, (2) it belongs to the geometric programming (GP) class, and (3) it uses signomial geometric programming to solve the SSAP. Borin et al. [18] extended Corstjens and Doyle’s [17] optimization model to develop a MINLP model that can determine the optimal product mix and shelf-space allocation at the same time. Considering product substitution due to temporary or permanent unavailability, they designed an objective function different from that used in general optimization models. Their objective function is not to maximize store profits, but to optimize inventory returns through simulated annealing (SA) techniques. Drèze et al. [19] investigated the impact of product facings and shelf location on product sales based on a binary integer linear programming (BILP) model and a linear optimization package (LINDO/LINGO). Their findings showed that displaying products on shelves at eye level could boost product sales. This finding highlights the influence of a product’s shelf location on its sales. As an extension of the Corstjens and Doyle [17] and Borin et al. [18] models, Urban [12] proposed a comprehensive MINLP model of inventory control and shelf-space allocation, which considered the effects of stock-out-based substitution and cross elasticity, but ignored the effect of location shown in Drèze et al. [19]. His solution method was based on a greedy algorithm and GA. Following Drèze et al. [19], Yang and Chen [1] developed a model based on the nonlinear model of Corstjens and Doyle [17].
However, as the latter is difficult to apply in real situations, they proposed a simplified but feasible alternative model in the form of an integer linear programming (ILP) formulation. To simplify the computation procedure, they excluded the effect of cross elasticity from the problem. Yang [8] proposed two heuristic algorithms to solve Yang and Chen’s [1] optimization model. Hwang et al. [11] extended Urban’s [12] and Yang and Chen’s [1] optimization models to determine the optimal number of facings, order size, and allocated location for each product under the objective of profit maximization. They utilized a gradient descent search and GA to solve the MINLP model. Murray et al. [20] argued that cross-space elasticity is not easy to estimate in practice and proposed replacing it with cross-price elasticity, which is easier to extract from point of sales (POS) data. They attempted to jointly optimize a retailer’s decisions, based on a MINLP model, for product prices, display facing areas, shelf-space locations, and orientations under product stacking and total shelf capacity constraints. They employed a branch-and-bound-based algorithm to solve the problem. Bai et al. [21] proposed an optimization model in the form of an integer nonlinear programming (INLP) formulation for SSAP and solved it using a multiple neighborhood approach. Their method is a hybrid method combining an SA algorithm and a metaheuristic learning mechanism. Compared with the gradient descent search method, this hybrid method could greatly enhance shelf-space utilization and sales. Castelli and Vanneschi [16] used GA and a hybrid method combining GA with variable neighborhood search (GA-VNS) to solve Yang and Chen’s model [1]. They also compared their approaches with Yang’s two heuristic algorithms [8] and verified that their method is superior in solution performance. Schaal and Hübner [22] investigated the effects of cross-space elasticity on shelf-space planning, optimal facing decisions, and retail profit. They showed that the effect of cross-space elasticity on profit and shelf-space allocation is small. Hence, measuring its effect through a complicated and costly procedure is not necessary, and the effect can be directly ignored in modeling research. More recently, Yu et al. [23] adopted a MINLP model for SSAP, which considers own-space and cross-space elasticities, and developed a reduced variable neighborhood search-based hyperheuristic (HyVNS) framework to solve this model. Table 1 summarizes the model classes and solution methods of the above relevant references.
According to the above literature, the decision variables in most optimization models are the amounts of facings allocated for each product on the shelves, and some optimization models additionally include decision variables, such as location and inventory control. Besides, since SSAP is NP-hard, these optimization models often use metaheuristic methods as solvers, such as GA, SA, and particle swarm optimization (PSO). See Bianchi-Aguiar et al. [24] for a state-of-the-art literature review of the SSAP, focusing on optimization models.

3. Yang and Chen’s Model and Extant Solution Methods

As mentioned earlier, Corstjens and Doyle [17] provided a comprehensive model of SSAP, which is managerially useful and well known to researchers in the field. However, the model of Corstjens and Doyle is complex and has practical limitations [1,8]. In view of these limitations, Yang and Chen [1] proposed a simplified integer linear program based on the nonlinear model developed by Corstjens and Doyle [17]. In Yang and Chen’s model, it was assumed that the profit of any product has a linear relationship with its facings, and each shelf was regarded as a “knapsack”. Based on this, they converted SSAP into a multi-knapsack problem, in which products are loaded within the knapsack capacities so as to maximize the total profit of all knapsacks. The following briefly describes Yang and Chen’s model:
Suppose that there are $N$ products to be displayed on $M$ shelves, with length $T_k$ for shelf $k$. The width of a facing of product $i$ displayed on any of the shelves is denoted as $a_i$. The lower and upper bounds on the facings of product $i$ are set as $L_i$ and $U_i$, respectively. Let $P_{ik}$ be the profit per facing of product $i$ on shelf $k$ and $X_{ik}$ be the allocated amount of facings of product $i$ on shelf $k$. Summing $P_{ik} X_{ik}$ over all products and shelves gives the total profit function $P$ in Equation (1):
  P = \sum_{i=1}^{N} \sum_{k=1}^{M} P_{ik} X_{ik}    (1)
which is to be maximized subject to three constraints:
  • The total space allocated to all the products cannot exceed the shelf capacity of the store, as expressed in Equation (2).
  • To ensure the exposure of new products or to maintain product competitiveness, there are lower and upper bounds of the amount of facings for each product, as expressed in Equation (3).
  • The allocated amount of facings for each product must be a nonnegative integer, as shown in Equation (4).
  \sum_{i=1}^{N} a_i X_{ik} \le T_k, \quad k = 1, \ldots, M    (2)
  L_i \le \sum_{k=1}^{M} X_{ik} \le U_i, \quad i = 1, \ldots, N    (3)
  X_{ik} \in \{0, 1, 2, 3, \ldots\}    (4)
Yang and Chen’s model is an integer linear program and is applicable in practice because many integer programming packages are available. However, it is still an NP-hard problem due to the combinatorial nature of the integer variables. Considering the complexity of the problem, Yang [8] proposed two heuristic algorithms, commonly applied to knapsack problems, to solve Yang and Chen’s model. In contrast to the heuristic methods, Castelli and Vanneschi [16] proposed two metaheuristic methods for Yang and Chen’s model: one is the genetic algorithm (GA) described in [13], and the other is their own hybrid method that combines a genetic algorithm with variable neighborhood search (GA-VNS). They compared these two metaheuristic methods with the two heuristic algorithms proposed by Yang [8] and demonstrated the suitability of their algorithms for producing optimal or near-optimal solutions to Yang and Chen’s model. Next, we briefly describe the four solution methods mentioned above for Yang and Chen’s model and use them as benchmarks for later comparisons.
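To make the formulation concrete, the following is a minimal sketch of Equations (1)–(4) in Python with the PuLP modeling library; the parameter values are small hypothetical numbers used only for illustration and are not taken from the experiments reported later.

```python
# Minimal sketch of Yang and Chen's ILP (Equations (1)-(4)) using PuLP.
# All parameter values below are hypothetical illustrations.
import pulp

N, M = 3, 2                                # products, shelves
T = [270, 315]                             # shelf lengths T_k
a = [20, 30, 25]                           # facing widths a_i
L = [1, 0, 1]                              # lower bounds L_i
U = [4, 3, 5]                              # upper bounds U_i
P = [[500, 480], [900, 950], [700, 650]]   # profit per facing P_ik

prob = pulp.LpProblem("SSAP", pulp.LpMaximize)
X = [[pulp.LpVariable(f"X_{i}_{k}", lowBound=0, cat="Integer")
      for k in range(M)] for i in range(N)]

# Objective (1): total profit.
prob += pulp.lpSum(P[i][k] * X[i][k] for i in range(N) for k in range(M))

# Constraint (2): shelf capacity.
for k in range(M):
    prob += pulp.lpSum(a[i] * X[i][k] for i in range(N)) <= T[k]

# Constraint (3): facing bounds per product.
for i in range(N):
    prob += pulp.lpSum(X[i][k] for k in range(M)) >= L[i]
    prob += pulp.lpSum(X[i][k] for k in range(M)) <= U[i]

prob.solve()
print(pulp.value(prob.objective))
```

Even an off-the-shelf solver slows down as N and M grow, which motivates the heuristic and metaheuristic methods described next.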
(S1) Yang’s heuristic algorithm
This algorithm, proposed by Yang [8], is an extension of the approach commonly used for solving the knapsack problem. It consists of three phases: the preparatory phase, the allocation phase, and the termination phase (a rough code sketch of the three phases is given after the list).
  • Preparatory phase: this phase checks whether the shelf capacity is smaller than the minimum space requirement, as expressed in Equation (5). If so, the operation is terminated, meaning the problem is infeasible. If not, the unit profit of each product ($P_{ik}/a_i$) is calculated, and products are sorted in descending order of unit profit for subsequent allocation.
    \sum_{k=1}^{M} T_k < \sum_{i=1}^{N} a_i L_i    (5)
  • Allocation phase: based on the descending order of unit profit obtained in the previous phase, the algorithm first allocates the available space of shelf $k$ to each product $i$ to satisfy its minimum space demand. After this allocation, if there is no available space on shelf $k$, the algorithm proceeds directly to the termination phase. If there is still space, the algorithm then allocates the available space of shelf $k$ to the product $i$ with the highest unit profit, until all the available space of shelf $k$ is used up, while the amount of facings allocated to each product cannot exceed its upper bound.
  • Termination phase: this phase calculates the corresponding total profit for the final solution.
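A rough sketch of this three-phase logic in Python is given below. It is our own illustration of the idea; ties and the exact order in which minimum requirements are satisfied are handled here by simple assumptions rather than by Yang's original rules.

```python
def greedy_allocate(T, a, L, U, P):
    """Greedy sketch of the S1-style allocation (illustrative, not Yang's original code).
    T: shelf lengths; a: facing widths; L, U: facing bounds; P[i][k]: profit per facing."""
    M, N = len(T), len(a)
    # Preparatory phase: feasibility check of Equation (5), then rank by unit profit.
    if sum(T) < sum(a[i] * L[i] for i in range(N)):
        return None                              # infeasible instance
    space = list(T)                              # remaining space on each shelf
    X = [[0] * M for _ in range(N)]              # allocated facings X_ik
    order = sorted(((i, k) for i in range(N) for k in range(M)),
                   key=lambda ik: P[ik[0]][ik[1]] / a[ik[0]], reverse=True)
    # Satisfy each product's minimum facing requirement first (simplifying assumption).
    for i, k in order:
        while sum(X[i]) < L[i] and space[k] >= a[i]:
            X[i][k] += 1
            space[k] -= a[i]
    # Allocation phase: fill remaining space by unit profit, respecting upper bounds.
    for i, k in order:
        while sum(X[i]) < U[i] and space[k] >= a[i]:
            X[i][k] += 1
            space[k] -= a[i]
    # Termination phase: compute the total profit of the final solution.
    profit = sum(P[i][k] * X[i][k] for i in range(N) for k in range(M))
    return X, profit
```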
(S2) Yang’s improved heuristic algorithm
Yang [8] added three adjustment methods before the termination phase of the algorithm to improve the solution performance of S1:
  • Adjustment 1: this adjustment attempts to improve a solution by swapping one facing for a pair of products allocated on the same shelf. For example, a shelf is allocated with 3 beers and 2 sodas. Without violating the constraints, we replace 1 beer with 1 soda and recalculate the total profit. If a higher total profit is obtained, the solution will be updated; otherwise, the original solution will be kept.
  • Adjustment 2: this adjustment attempts to improve a solution by interchanging one facing for a pair of products allocated on two different shelves. For example, there are two shelves, respectively allocated with 3 beers and 2 sodas. Without violating the constraints, we swap the display locations for 1 beer on one shelf and 1 soda on the other. If a higher total profit can be obtained after the swap, the solution will be updated; if not, the original solution will be kept.
  • Adjustment 3: this is an extension of Adjustment 2. Since the length of facing varies from product to product, after the swapping of facings between the two products on two different shelves, there may be shelf space that can be reallocated to other products.
(S3) Genetic algorithm
The algorithm presented in Castelli and Vanneschi [16] to solve Yang and Chen’s model is based on the genetic algorithm (GA). GA is usually regarded as a function optimizer that imitates the natural evolution process, and it has been applied to quite a wide range of problems [25]. In Castelli and Vanneschi [16], a solution to Yang and Chen’s model is represented as a string (called a chromosome) of length equal to $N \times M$, and each position (gene) in the string assumes a value $X_{ik}$ corresponding to the number of facings of product $i$ on shelf $k$. As an example, gene $X_{ik} = 2$ means that the $i$-th item has 2 facings on the $k$-th shelf. After solution encoding, the genetic operators (selection, crossover, and mutation) are used to create new chromosome groups (i.e., the population of offspring), starting from existing chromosomes (i.e., the population of parents). The selection operator selects chromosomes in the population for reproduction; the fitter the chromosome, the higher its chance of being selected. The fitness of a chromosome depends on its objective function value. The crossover operator randomly selects a locus and exchanges the subsequences before and after the locus between two chromosomes to produce two offspring. The mutation operator randomly increases or decreases the value at a certain locus of the chromosome by one. Generally, the probability of mutation at each locus is very small (for example, 0.001). It is worth noting that the offspring produced by either the crossover or mutation operators must meet constraints (2)–(4); otherwise, the original chromosomes remain unchanged.
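The chromosome handling described above can be illustrated with the following sketch (our own illustration under assumed helper names, not Castelli and Vanneschi's code); infeasible offspring are rejected so that constraints (2)–(4) always hold.

```python
import random

# Illustrative GA chromosome handling for Yang and Chen's model (sketch only).
# A chromosome is a flat list of length N*M; gene(i, k) holds X_ik.
def gene(chrom, i, k, M):
    return chrom[i * M + k]

def feasible(chrom, T, a, L, U):
    """Check constraints (2)-(4) for a chromosome."""
    N, M = len(a), len(T)
    if any(x < 0 for x in chrom):                          # constraint (4)
        return False
    for k in range(M):                                      # constraint (2)
        if sum(a[i] * gene(chrom, i, k, M) for i in range(N)) > T[k]:
            return False
    for i in range(N):                                      # constraint (3)
        total = sum(gene(chrom, i, k, M) for k in range(M))
        if not (L[i] <= total <= U[i]):
            return False
    return True

def mutate(chrom, T, a, L, U, rate=0.001):
    """Randomly add or remove one facing at a locus; keep only feasible offspring."""
    child = chrom[:]
    for pos in range(len(child)):
        if random.random() < rate:
            child[pos] += random.choice([-1, 1])
    return child if feasible(child, T, a, L, U) else chrom
```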
(S4) GA combined with Variable Neighborhood Search (GA-VNS)
The underlying concept of the GA-VNS algorithm is to execute VNS with a fixed probability after each iteration of the GA is completed. The VNS algorithm was introduced by Mladenović and Hansen [26]. VNS is a metaheuristic method for solving combinatorial and global optimization problems. Its basic idea is to seek a local optimum in the descent stage and to escape the corresponding valley by changing the neighborhood in the disturbance stage. The basic steps of VNS are as follows (a generic code sketch is given after the list):
  • Select the set of neighborhood structures $N_v$, $v = 1, 2, \ldots, N_{max}$, to be used in the search.
  • Give the current incumbent solution $X$.
  • Enter the solution process. Set $v = 1$ and repeat the following steps until $v = N_{max}$: (a) generate a point $X'$ at random from the $v$-th neighborhood of $X$; (b) apply the local search method with $X'$ as the initial solution to obtain the local optimum $X''$; and (c) if the obtained solution $X''$ is better than the incumbent solution $X$, update $X$ with $X''$ and continue the local search with the current neighborhood structure; otherwise, move to the next neighborhood: $v = v + 1$.
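A generic skeleton of these steps is sketched below in Python. The neighborhood and local search routines are placeholders supplied by the caller, and, following the step description above, an improving move keeps the current neighborhood structure rather than resetting to the first one.

```python
def vns(x0, neighborhoods, local_search, objective):
    """Generic VNS skeleton (illustrative sketch). neighborhoods[v](x) returns a
    random neighbor of x drawn from the v-th structure; local_search refines it."""
    x = x0
    v = 0
    while v < len(neighborhoods):
        x_prime = neighborhoods[v](x)               # (a) shake: random point in N_v(x)
        x_second = local_search(x_prime)            # (b) local search from x'
        if objective(x_second) > objective(x):      # (c) better? (maximization)
            x = x_second                            # keep searching the current neighborhood
        else:
            v += 1                                  # otherwise move to the next neighborhood
    return x
```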
Castelli and Vanneschi [16] combined VNS and GA to improve the local search capability of the basic GA (S3) in solving Yang and Chen’s model. The following are the four neighborhood structures used in Castelli and Vanneschi [16]; an illustrative sketch of one such move is given after the list:
  • Neighborhood structure $N_1$: Let $X$ be the incumbent solution, $X = \{X_{11}, X_{12}, \ldots, X_{ik}, \ldots, X_{NM}\}$, and $X_{ik}$ be the current amount of facings of product $i$ on shelf $k$. The neighborhood structure $N_1$ changes the value of $X_{ik}$ as follows: find the shelves with enough available space to hold one facing of product $i$, and then select the one with the minimum available space and move one facing of product $i$ to it. The new solution $X'$ is better than $X$ if and only if the available space of the shelf with the minimum available capacity is smaller than that of $X$.
  • Neighborhood structure $N_2$: Given that shelf $k_1$ and shelf $k_2$ are respectively allocated with facings of product $i_1$ and product $i_2$, $N_2$ exchanges facing(s) of the two products. In other words, $N_2$ attempts to reduce the minimum available capacity of each shelf by displaying facings of product $i_1$ on shelf $k_2$ and facings of product $i_2$ on shelf $k_1$. As in $N_1$, the new solution $X'$ is better than $X$ if and only if the available space of the shelf with the minimum available capacity is smaller than that of $X$.
  • Neighborhood structure $N_3$: The objective of this neighborhood structure is to improve the sales profit. It works by removing a facing of a product $i_1$ and replacing it with a facing of product $i_2$, where $i_1$ and $i_2$ are different products. The premise is that the available capacity of the shelf after the removal of the facing of $i_1$ is greater than the width of a facing of product $i_2$. The new solution $X'$ is better than $X$ if and only if its sales profit is greater than that of $X$.
  • Neighborhood structure $N_4$: This neighborhood structure removes a facing of product $i_1$ and replaces it with a facing of product $i_2$ and a facing of product $i_3$. It is based on a simple idea: the profit of two “small” products can exceed that of a “large” one. When using this structure, the new solution $X'$ is better than $X$ if and only if its sales profit is greater than that of $X$.
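As an illustration of how such a move can be coded, the sketch below implements an N3-style replacement (our own illustration, not the authors' implementation); for brevity it only checks the shelf capacity and the profit gain, and the facing bounds of constraint (3) would also need to be checked in a full implementation.

```python
# Illustrative N3-style move: replace one facing of i1 on shelf k with one facing of i2,
# provided it fits in the freed space and increases total profit (sketch only).
def n3_move(X, k, i1, i2, T, a, P):
    if i1 == i2 or X[i1][k] == 0:
        return None
    used = sum(a[i] * X[i][k] for i in range(len(a)))
    free_after_removal = T[k] - used + a[i1]
    if a[i2] > free_after_removal:
        return None                              # the new facing does not fit
    if P[i2][k] - P[i1][k] <= 0:
        return None                              # no profit improvement
    new_X = [row[:] for row in X]                # copy, then apply the swap
    new_X[i1][k] -= 1
    new_X[i2][k] += 1
    return new_X                                 # caller should still verify bounds L_i/U_i
```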

4. Teaching–Learning-Based Optimization

Teaching–learning-based optimization (TLBO) is a metaheuristic algorithm proposed by Rao et al. [14]. The TLBO algorithm is a population-based approach, which simulates the teaching and learning processes within a classroom to solve the problem at hand. TLBO consists of two phases, namely the teacher phase and the learner phase. In the teacher phase, the learner with the best grade in the population is selected to be the teacher. The teacher is responsible for training the learners and improving the mean grade of the class. In the learner phase, each learner randomly selects another learner to interact with. The ultimate goal is to improve the mean grade of the class in this phase. The above procedures are repeated iteratively until the stopping condition is met (see Zou et al. [27] for a survey of the latest developments in TLBO). With regard to SSAP, we propose a TLBO algorithm consisting of the following steps (a compact code sketch is given after the step list):
  • Configure parameters: The parameters include the model parameters and the control parameters of the algorithm. The parameters of Yang and Chen’s SSAP model include the number of products, the number of shelves, the profit per facing of each product, the width of a facing of each product, the length of each shelf, and the upper and lower bounds of the amount of facings for each product. The control parameters of the TLBO algorithm include the population size ($PS$) and the maximum number of generations ($GN_{max}$).
  • Initialize the learner population: a learner is $X = \{X_{11}, X_{12}, \ldots, X_{ik}, \ldots, X_{NM}\}$, where $X_{ik}$ denotes the allocated amount of facings of product $i$ on shelf $k$. The amounts of facings were generated by using Equation (6):
    X_{ik} = X_{ik}^{\min} + \text{rand}(0, 1)\,(X_{ik}^{\max} - X_{ik}^{\min})    (6)
    where $X_{ik}^{\min}$ and $X_{ik}^{\max}$ respectively denote the lower and upper bounds of $X_{ik}$, and $\text{rand}(0, 1)$ denotes a random number in the range [0, 1]. It should be noted that the $X_{ik}$ of every learner must satisfy constraints (2)–(4).
  • Evaluate the grade of learners: By substituting the learner’s $X_{ik}$ (decision variables) into the objective function in Equation (1), we obtain the learner’s grade (i.e., the objective value); the learner with the highest grade is then selected as the teacher and denoted $X_{best}$.
  • The teacher phase: Let $\bar{M}$ be the mean of the class. In this phase, the teacher attempts to raise $\bar{M}$ toward his/her own level ($X_{best}$). The difference between the teacher and the learners can be expressed as in Equation (7):
    \text{Difference\_Mean} = r\,(X_{best} - T_f\,\bar{M})    (7)
    where $T_f$ is the teaching factor, which decides how much of the mean is to be changed, and $r$ is a random number in the range [0, 1]. The value of $T_f$ can be either 1 or 2, and it is randomly decided with equal probability, as in Equation (8):
    T_f = \text{round}\left[1 + \text{rand}(0, 1)\,(2 - 1)\right]    (8)
Based on the above difference value, the existing solution X is updated in the teacher phase, according to the following expression:
  X_{ik}^{new} = X_{ik} + \text{Difference\_Mean}    (9)
After the solution is updated, evaluate whether $X^{new}$ gives a better function value than $X$ and meets constraints (2)–(4). If so, update the solution with $X^{new}$; otherwise, keep $X$.
  • The learner phase: Learners are allowed to randomly exchange knowledge with other learners to enhance their own knowledge. A learner learns new information if the other learner has more knowledge than he or she has; a learner learns nothing new if the other learner does not. Let $X^{a}$ be the other, randomly selected learner. The learning of $X$ from $X^{a}$ can be mathematically expressed as in Equation (10):
    X_{ik}^{new} = \begin{cases} X_{ik} + r\,(X_{ik} - X_{ik}^{a}), & \text{if } f(X_{ik}) < f(X_{ik}^{a}) \\ X_{ik} + r\,(X_{ik}^{a} - X_{ik}), & \text{if } f(X_{ik}) \ge f(X_{ik}^{a}) \end{cases}    (10)
    Similar to the teacher phase, the better of the original learner and the newly generated learner is accepted.
  • Termination: Check whether the stopping condition is met (i.e., the maximum number of generations has been reached). If the condition is met, the algorithm terminates and outputs the best solution found; otherwise, evaluate the grades of all learners for the next teacher phase.
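The compact sketch below puts the above steps together in Python (our own illustration; the initialization, objective, and feasibility routines are assumed to be supplied externally, and infeasible candidates are simply rejected as described above). The learner-phase update mirrors Equation (10) as written.

```python
import random

def tlbo(pop_size, gn_max, init, objective, feasible):
    """Compact TLBO sketch for SSAP (illustrative). init() returns a feasible learner
    as a flat list of X_ik values, objective() returns its grade (total profit),
    and feasible() checks constraints (2)-(4)."""
    learners = [init() for _ in range(pop_size)]
    dim = len(learners[0])
    for _ in range(gn_max):                                  # stopping condition: generations
        grades = [objective(x) for x in learners]
        teacher = learners[grades.index(max(grades))]        # best learner becomes the teacher
        mean = [sum(x[j] for x in learners) / pop_size for j in range(dim)]
        for idx in range(pop_size):
            x = learners[idx]
            # Teacher phase (Equations (7)-(9)): move toward the teacher.
            tf = random.randint(1, 2)                        # teaching factor, Eq. (8)
            r = random.random()
            cand = [round(x[j] + r * (teacher[j] - tf * mean[j])) for j in range(dim)]
            if feasible(cand) and objective(cand) > objective(x):
                learners[idx] = x = cand
            # Learner phase (Equation (10), as written): interact with a random peer.
            peer = random.choice(learners)
            r = random.random()
            if objective(x) < objective(peer):
                cand = [round(x[j] + r * (x[j] - peer[j])) for j in range(dim)]
            else:
                cand = [round(x[j] + r * (peer[j] - x[j])) for j in range(dim)]
            if feasible(cand) and objective(cand) > objective(x):
                learners[idx] = cand
    return max(learners, key=objective)                      # best solution found
```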

5. Experimental Setting and Results

In this section, we compared the TLBO method with the other four solution methods described in Section 3. In the rest of this article, we refer to these five solution methods as S1, S2, S3, S4, and S5.
S1: Yang’s heuristic algorithm;
S2: Yang’s improved heuristic algorithm;
S3: GA;
S4: GA-VNS;
S5: TLBO.
In order to compare the performance of the various solution methods, we simulated different SSAP scenarios based on parameter sets from the previous SSAP literature. In a state-of-the-art literature review of SSAP [24], the authors observed that most SSAP models were solved using metaheuristic methods or specialized heuristics. In addition, the scope and data sets studied were very wide. Regarding scope, many SSAP scenarios came from a wide range of categories, such as quality candies, bottled juices, canned dog food, and distilled spirits. Regarding the data sets, the scale of SSAP scenarios was usually measured by the number of products. The authors found that most studies generated SSAP scenarios with fewer than 10 products, while other studies generated SSAP scenarios with more than 100 products. In terms of the scale of SSAP scenarios, Yang and Chen [1] and Yang [8] both noted that a retail store usually has a large number of product items to display. However, considering that products are usually broken down into departments, categories, and items, putting so many items into a space allocation model is not only computationally infeasible but also meaningless in practice. This research follows this idea and borrows the shelf and product parameter values from Yang [8] to simulate a wide range of SSAP scenarios (or SSAP problems) as a basis for comparing the various solution methods. The parameter values in Yang [8] were set to simulate SSAP problems in the environment of health care product stores (see Table 2).
As we can see from Table 2, the SSAP problems were characterized by four parameter sets: (M, N), $T_k$, $L_i$, and $U_i - L_i$, where (M, N) denotes an ordered pair of the number of shelves and the number of products; $T_k$ denotes the length of shelf $k$; $L_i$ denotes the lower bound for the total allocated amount of facings of product $i$; and $U_i - L_i$ denotes the difference between the upper and lower bounds for the amount of facings of product $i$. After the values for these four parameter sets had been chosen, the profit per facing of product $i$ on shelf $k$ (i.e., $P_{ik}$) was randomly generated from a normal distribution $N(\mu, \sigma)$, where $\mu$ is a random value generated from a uniform distribution over [200, 7000] and $\sigma$ is a random value in the range [0.05, 0.15]. In addition, the width of a facing of product $i$ (i.e., $a_i$) was randomly generated from a uniform distribution over [15, 40]. Thus, there are, in total, 162 (= 6 × 3 × 3 × 3) SSAP problems to be solved for comparison. Due to the limitation of article length, we divided the 162 SSAP problems into six problem sets by the value of the first parameter set (M, N), and each problem set has 27 SSAP problems. In what follows, we only take the problem set (M, N) = (2, 6) as an example to introduce the simulation process and demonstrate the optimization results in Table 3, Table 4 and Table 5. The performance analysis results of the complete six problem sets are summarized in Table 6 and Table 7.
Regarding the parameter settings of the solution methods, it is well known that GA and GA-based solution methods can be sensitive to parameter settings. To make the comparison results more objective, for the general control parameters of the S3, S4, and S5 solution methods, we used the same settings as Castelli and Vanneschi [16]; that is, the population size is 100 and the maximum number of generations is 100. Besides, regarding the algorithm-specific parameters required by these solution methods, we set the crossover rate to 0.8 and the mutation rate to 0.1 for S3 and S4, and the VNS rate to 0.2 and the maximum number of iterations for each neighborhood structure to 5 for S4.
This study used the MATLAB R2018b programming language to implement the five solution methods and used Windows 10 as the operating system of the experimental environment. In terms of computational time, each solution method can obtain a solution to a small problem within a few seconds. Taking (M, N) = (2, 6) as an example, the average computation times required by the five solution methods were 0.02 s, 0.03 s, 1.82 s, 1.34 s, and 1.48 s, respectively. In contrast to the small problems, the time spent on the larger problems was about tens of minutes or more. However, as mentioned in the study by Hansen et al. [13], after their discussions with retail field staff, it was learned that, since shelf-space allocation is not a daily routine task, no special consideration is needed for the length of time to solve the SSAP. Instead, the maximum total profit of each solution method was used as the comparison criterion.
Table 3 shows the average profits obtained from the five solution methods for the problem set (M, N) = (2, 6). For each SSAP problem, 20 runs were performed. Based on the average profit, we can calculate the gap between the average profit of each solution method and the best average profit, which is called the performance gap, as defined below:
  \text{Performance Gap} = \frac{\text{Best average profit} - \text{Average profit of the method}}{\text{Best average profit}} \times 100\%    (11)
The performance gap was used to evaluate the solution performance of the methods. The smaller the value, the better the solution performance. A performance gap of 0 indicates that the solution method has the best performance for a given problem. Table 4 shows the performance gap of the five methods for the problem set (M, N) = (2, 6). To facilitate observation, the maximum performance gap, the median performance gap, and the standard deviation of the performance gap among the 27 problems for each solution method were provided in Table 5. As shown in Table 5, TLBO (S5) achieved a maximum of 0.62%, with a median of 0 and a standard deviation of 0.12%. This suggests that, in the problem set of 2 shelves and 6 products, TLBO (S5) could stably provide the best solution.
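For reference, the gap values in Table 4 can be reproduced with a few lines of Python; the small example below uses the average profits of problem no. 3 from Table 3.

```python
# Sketch: performance gap of each method relative to the best average profit in a row.
def performance_gaps(avg_profits):
    best = max(avg_profits)
    return [(best - p) / best * 100 for p in avg_profits]

# Problem no. 3 of Table 3: average profits of S1..S5.
print(performance_gaps([81927, 81927, 83058, 83844, 83844]))
# -> roughly [2.29, 2.29, 0.94, 0.00, 0.00] (percent), matching the row in Table 4
```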
Since the size of the problem may have some impact on the performance of the methods, this study also analyzed problems of different sizes. Table 6 provides the maximum performance gap, the median performance gap, and the standard deviation of the performance gap for each method across the six problem sets. As shown in Table 6, the maximum and the standard deviation of the performance gap of TLBO (S5) were both higher than those of S2 for the problem set (M, N) = (2, 10). Except for this problem set, the maximum, median, and standard deviation of its performance gaps in the other problem sets were the smallest. In addition, it can be seen from Table 6 that the solution performance of S2 was better than that of S1, while the solution performance of S4 was better than that of S3. These results are consistent with the results of Yang [8] and Castelli and Vanneschi [16], respectively. However, from the data in Table 6, the solution method S4 only outperformed S1 or S2 on some data sets, so this comparative result is not enough to support the claim by Castelli and Vanneschi [16] that S4 outperforms S1 or S2.
To further test whether TLBO (S5) is statistically significantly superior to the other solution methods in each problem set, the Kolmogorov–Smirnov test was first applied to verify whether the performance gaps of each solution method follow a normal distribution. The test results indicate that the performance gaps were skewed for all the methods. Consequently, the Wilcoxon rank sum test was further utilized to examine the significance of differences in the median of the performance gaps between TLBO (S5) and the other four solution methods. Table 7 summarizes the results of the Wilcoxon rank sum test. We can see that in the case of (M, N) = (2, 10), the median difference between TLBO (S5) and S2 was not significant, while in the case of (M, N) = (4, 4), the median differences between TLBO (S5) and S1 or S2 were not significant. Apart from these cases, the differences were all significant. Accordingly, it can be inferred that TLBO is superior to the other solution methods under most conditions.

6. TLBO-VNS

From the comparative results in the previous section, it can be observed that TLBO (S5) did not perform satisfactorily when solving the larger SSAP problems (that is, when the number of products was large), e.g., (M, N) = (2, 10). This observation is consistent with the shortcomings of TLBO mentioned by Kumar et al. [28] and Chen et al. [29]. Both papers pointed out that the basic TLBO method often falls into local optima when solving complex optimization problems. In order to overcome this shortcoming of basic TLBO, this study developed a new solution method that combines the basic TLBO with the VNS approach. The reason for choosing VNS is that, in essence, VNS is a metaheuristic method for solving combinatorial and global optimization problems: in addition to seeking local optima in its descent stage, it is effective at escaping the corresponding valley and changing the neighborhood in its disturbance stage. Specifically, this research combined the VNS approach to improve the performance of TLBO (S5) in two directions: one is to modify the original solution by fine-tuning the allocated amount of facings of the products on the same shelf in each iteration; the other is to change the original solution by adjusting the number of product facings on different shelves in each iteration. Figure 1 depicts the solution process of TLBO-VNS (called S6), which uses the solution process of the basic TLBO as the frame and embeds the VNS surrounded by the dotted line. The fundamental concept of the TLBO-VNS algorithm is to execute VNS after each iteration of the TLBO is completed.
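In outline, the hybrid runs a VNS pass after every TLBO generation. The sketch below shows this embedding; `tlbo_generation` and `vns_refine` are assumed helper callables (one teacher-plus-learner iteration of the TLBO sketch in Section 4 and the VNS skeleton of Section 3, respectively), and applying VNS only to the incumbent best learner is our own simplifying assumption rather than a detail fixed by Figure 1.

```python
def tlbo_vns(pop_size, gn_max, init, objective, tlbo_generation, vns_refine):
    """Sketch of the TLBO-VNS hybrid (S6). tlbo_generation(learners) performs one
    teacher-plus-learner iteration; vns_refine(x) runs a VNS pass on one solution."""
    learners = [init() for _ in range(pop_size)]
    for _ in range(gn_max):
        learners = tlbo_generation(learners)                   # one TLBO generation
        best = max(range(pop_size), key=lambda i: objective(learners[i]))
        learners[best] = vns_refine(learners[best])            # VNS pass on the best learner
    return max(learners, key=objective)
```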
To examine the solution performance of S6, the performance gap analysis was redone for S1, S2, S3, S4, S5, and S6. The results are summarized in Table 8. They reveal that, in solving the six problem sets, TLBO-VNS (S6) not only demonstrates an improvement over TLBO (S5) but also consistently outperforms the other methods. Based on the above results, we may reasonably infer that, for complex problems, the global search ability of VNS helps TLBO to overcome its tendency to become stuck in local optima.

7. Conclusions

Retail shelf space is an essential resource for retailers to meet customer demands and influence customer purchase decisions. Because a large number of products must be displayed on the shelves, how to properly allocate products within the limited space resources is of concern to the industry. Since the TLBO algorithm has few parameters, a strong convergence ability, and a good global search ability, this study used the basic form of the TLBO algorithm to solve the problem of shelf-space allocation and carried out extensive experimental work. The experimental results show that, compared with some existing heuristic/metaheuristic algorithms, the basic form of the TLBO algorithm has considerable potential. Subsequently, this study combined TLBO and VNS to develop a new algorithm that further optimizes the limited shelf-space resources by adjusting the arrangement of products on the same shelf and on different shelves. The research results show that the new algorithm combining VNS and TLBO successfully improves the solution performance of the basic TLBO algorithm. The logical ground for this success may be seen in the basic principles of VNS, which seeks local optima during the descent phase and escapes the corresponding valley by changing the neighborhood during the disturbance phase. This feature can not only enhance the local search capability of the basic genetic algorithm (e.g., Castelli and Vanneschi [16]), but also enhance the global search capability of the basic TLBO. The results of this study can make some contributions to the literature on TLBO variants.
This study optimizes shelf-space resources with the goal of maximizing profits. In practice, as cautioned by Hansen et al. [13], in addition to maximizing profits, successful retail stores must also consider the store atmosphere and the completeness of assortment. Therefore, looking forward to future research, we can further explore the potential of this new algorithm to solve the problem of shelf space and product mix at the same time. Additionally, we can try to incorporate store atmosphere and profit into the goal of optimizing shelf-space resources and develop a new algorithm for multi-objective decision-making.

Author Contributions

Conceptualization, Y.-K.C.; Data curation, S.-X.W.; Funding acquisition, T.-P.L.; Methodology, Y.-K.C. and S.-X.W.; Software, S.-X.W.; Supervision, Y.-K.C.; Writing—original draft, S.-X.W. and T.-P.L.; Writing—review & editing, Y.-K.C. and T.-P.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yang, M.H.; Chen, W.C. A study on shelf space allocation and management. Int. J. Prod. Econ. 1999, 60–61, 309–317.
  2. Chen, M.C.; Lin, C.P. A data mining approach to product assortment and shelf space allocation. Expert Syst. Appl. 2007, 32, 976–986.
  3. Zufryden, F.S. A dynamic programming approach for product selection and supermarket shelf-space allocation. J. Oper. Res. Soc. 1986, 37, 413–422.
  4. Levy, M.; Grewal, D.; Kopalle, P.K.; Hess, J.D. Emerging trends in pricing practice: Implications for research. J. Retail. 2004, 80, 13–31.
  5. Curhan, R.C. The relationship between shelf space and unit sales in supermarkets. J. Mark. Res. 1972, 9, 406–412.
  6. Corstjens, M.; Doyle, P. A dynamic model for strategically allocating retail space. J. Oper. Res. Soc. 1983, 34, 943–951.
  7. Hübner, A.H.; Kuhn, H. Retail category management: State-of-the-art review of quantitative research and software applications in assortment and shelf space management. Omega 2012, 40, 199–209.
  8. Yang, M.H. An efficient algorithm to allocate shelf space. Eur. J. Oper. Res. 2001, 131, 107–118.
  9. Hansen, P.; Heinsbroek, H. Product selection and space allocation in supermarkets. Eur. J. Oper. Res. 1979, 3, 474–484.
  10. Esparcia-Alcázar, A.I.; Lluch-Revert, L.; Sharman, K.C.; Albarracín-Guillem, J.M.; Palmer-Gato, M.E. Towards an evolutionary tool for the allocation of supermarket shelf space. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, Genetic and Evolutionary Computation Conference, Seattle, WA, USA, 8–12 July 2006.
  11. Hwang, H.; Choi, B.; Lee, M.J. A model for shelf space allocation and inventory control considering location and inventory level effects on demand. Int. J. Prod. Econ. 2005, 97, 185–195.
  12. Urban, T.L. An inventory-theoretic approach to product assortment and shelf-space allocation. J. Retail. 1998, 74, 15–35.
  13. Hansen, J.M.; Raut, S.; Swami, S. Retail shelf allocation: A comparative analysis of heuristic and meta-heuristic approaches. J. Retail. 2010, 86, 94–105.
  14. Rao, R.V.; Savsani, V.J.; Vakharia, D.P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems. Comput. Aided Des. 2011, 43, 303–315.
  15. Baykasoğlu, A.; Hamzadayi, A.; Köse, S.Y. Testing the performance of teaching-learning based optimization (TLBO) algorithm on combinatorial problems: Flow shop and job shop scheduling cases. Inf. Sci. 2014, 276, 204–218.
  16. Castelli, M.; Vanneschi, L. Genetic algorithm with variable neighborhood search for the optimal allocation of goods in shop shelves. Oper. Res. Lett. 2014, 42, 355–360.
  17. Corstjens, M.; Doyle, P. A model for optimizing retail space allocations. Manag. Sci. 1981, 27, 822–833.
  18. Borin, N.; Farris, P.W.; Freeland, J.R. A model for determining retail product category assortment and shelf space allocation. Decis. Sci. 1994, 25, 359–384.
  19. Drèze, X.; Hoch, S.J.; Purk, M.E. Shelf management and space elasticity. J. Retail. 1994, 70, 301–326.
  20. Murray, C.C.; Talukdar, D.; Gosavi, A. Joint optimization of product price, display orientation and shelf-space allocation in retail category management. J. Retail. 2010, 86, 125–136.
  21. Bai, R.; Van Woensel, T.; Kendall, G.; Burke, E.K. A new model and a hyper-heuristic approach for two-dimensional shelf space allocation. 4OR 2013, 11, 31–55.
  22. Schaal, K.; Hübner, A. When does cross-space elasticity matter in shelf-space planning? A decision analytics approach. Omega 2018, 80, 135–152.
  23. Yu, V.F.; Maglassang, R.; Tsao, Y.C. A reduced variable neighborhood search-based hyperheuristic for the shelf space allocation problem. Comput. Ind. Eng. 2020, 143, 1–10.
  24. Bianchi-Aguiar, T.; Hübner, A.H.; Carravilla, M.A.; Oliveira, J.F. Retail shelf space planning problems: A comprehensive review and classification framework. Eur. J. Oper. Res. 2020.
  25. Holland, J.H. Adaptation in Natural and Artificial Systems; University of Michigan Press: Ann Arbor, MI, USA, 1975.
  26. Mladenović, N.; Hansen, P. Variable neighborhood search. Comput. Oper. Res. 1997, 24, 1097–1100.
  27. Zou, F.; Chen, D.; Xu, Q. A survey of teaching–learning-based optimization. Neurocomputing 2019, 335, 366–383.
  28. Kumar, Y.; Dahiya, N.; Malik, S.; Khatri, S. A new variant of teaching learning based optimization algorithm for global optimization problems. Informatica 2019, 43, 65–75.
  29. Chen, X.; Xu, B.; Yu, K.; Du, W. Teaching-learning-based optimization with learning enthusiasm mechanism and its application in chemical engineering. J. Appl. Math. 2018, 2018, 1–19.
Figure 1. The solution process of the proposed teaching–learning-based optimization (TLBO)–variable neighborhood search (VNS) method.
Table 1. Model classes and solution methods of related works.
References | Model Class | Solution Method
Yang and Chen [1] | MINLP/ILP | None
Corstjens and Doyle [6] | GP | None
Yang [8] | ILP | Specialized heuristics
Hansen and Heinsbroek [9] | MINLP | Lagrange multiplier technique
Hwang et al. [11] | MINLP | Gradient descent search, Genetic Algorithm (GA)
Urban [12] | MINLP | Greedy heuristic, Genetic Algorithm (GA)
Hansen et al. [13] | MILP | CPLEX, Genetic Algorithm (GA)
Castelli and Vanneschi [16] | MINLP | Genetic Algorithm (GA), Genetic Algorithm with Variable Neighborhood Search (GA-VNS)
Corstjens and Doyle [17] | GP | Signomial geometric programming
Borin et al. [18] | MINLP | Simulated Annealing (SA)
Drèze et al. [19] | BILP | Linear optimization package (LINDO, LINGO)
Murray et al. [20] | MINLP | Branch-and-Bound based MINLP algorithm
Bai et al. [21] | INLP | Multiple neighborhood approach
Schaal and Hübner [22] | INLP/BNLP | Specialized heuristics
Yu et al. [23] | MINLP | Reduced Variable Neighborhood Search-based Hyperheuristic (HyVNS)
Table 2. The settings for the model parameters, borrowed from [8].
Model Parameters | Value
(M, N) | (2, 6), (2, 8), (2, 10), (3, 4), (3, 6), (4, 4)
$T_k$ | 270, 315, 360
$L_i$ | 0, 1, 2
$U_i - L_i$ | 1, 2, 3
$P_{ik}$ | $P_{ik} \sim N(\mu, \sigma)$; $\mu \sim$ Uniform(200, 7000) and $\sigma \sim$ Uniform(0.05, 0.15)
$a_i$ | $a_i \sim$ Uniform(15, 40)
Table 3. Average profits of the five solution methods for the small problem (M, N) = (2, 6).
Problem No. | Parameter ($T_k$-$U_i$-$L_i$) | S1 | S2 | S3 | S4 | S5 | Best
1 | 270-1-0 | 30,470 | 30,470 | 30,470 | 30,470 | 30,470 | 30,470
2 | 270-2-0 | 58,627 | 58,627 | 58,627 | 58,627 | 58,627 | 58,627
3 | 270-3-0 | 81,927 | 81,927 | 83,058 | 83,844 | 83,844 | 83,844
4 | 270-2-1 | 70,872 | 70,872 | 70,872 | 70,872 | 70,872 | 70,872
5 | 270-3-1 | 82,024 | 82,024 | 83,771 | 83,771 | 83,771 | 83,771
6 | 270-4-1 | 89,409 | 89,409 | 89,409 | 89,409 | 89,409 | 89,409
7 | 270-3-2 | 93,472 | 93,472 | 93,472 | 93,472 | 93,472 | 93,472
8 | 270-4-2 | 88,360 | 92,333 | 94,176 | 94,176 | 94,176 | 94,176
9 | 270-5-2 | 82,320 | 82,658 | 82,658 | 83,132 | 83,865 | 83,865
10 | 315-1-0 | 32,649 | 32,649 | 32,649 | 32,649 | 32,649 | 32,649
11 | 315-2-0 | 66,103 | 66,103 | 66,103 | 66,103 | 66,103 | 66,103
12 | 315-3-0 | 97,884 | 97,884 | 96,487 | 97,134 | 97,884 | 97,884
13 | 315-2-1 | 51,031 | 51,031 | 50,795 | 51,031 | 51,031 | 51,031
14 | 315-3-1 | 73,733 | 73,733 | 72,242 | 73,677 | 73,733 | 73,733
15 | 315-4-1 | 108,130 | 108,130 | 102,850 | 103,909 | 108,133 | 108,133
16 | 315-3-2 | 92,753 | 96,468 | 92,510 | 96,470 | 96,473 | 96,473
17 | 315-4-2 | 103,900 | 104,475 | 99,354 | 104,988 | 107,326 | 107,326
18 | 315-5-2 | 101,690 | 103,410 | 98,425 | 102,040 | 102,766 | 103,410
19 | 360-1-0 | 31,604 | 31,604 | 31,604 | 31,604 | 31,604 | 31,604
20 | 360-2-0 | 50,951 | 50,951 | 50,340 | 50,728 | 50,951 | 50,951
21 | 360-3-0 | 89,116 | 89,116 | 89,069 | 89,022 | 89,116 | 89,116
22 | 360-2-1 | 77,021 | 77,021 | 69,240 | 77,021 | 77,021 | 77,021
23 | 360-3-1 | 88,547 | 88,547 | 82,443 | 88,547 | 88,547 | 88,547
24 | 360-4-1 | 137,710 | 137,710 | 121,695 | 134,914 | 137,713 | 137,713
25 | 360-3-2 | 80,719 | 80,719 | 80,719 | 80,719 | 80,719 | 80,719
26 | 360-4-2 | 99,244 | 99,244 | 102,372 | 103,503 | 105,501 | 105,501
27 | 360-5-2 | 100,040 | 100,790 | 100,788 | 100,788 | 100,788 | 100,790
Table 4. Performance gap of the five solution methods under (M, N) = (2, 6).
Problem No. | Parameter ($T_k$-$U_i$-$L_i$) | S1 | S2 | S3 | S4 | S5
1 | 270-1-0 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
2 | 270-2-0 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
3 | 270-3-0 | 2.29% | 2.29% | 0.94% | 0.00% | 0.00%
4 | 270-2-1 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
5 | 270-3-1 | 2.09% | 2.09% | 0.00% | 0.00% | 0.00%
6 | 270-4-1 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
7 | 270-3-2 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
8 | 270-4-2 | 6.18% | 1.96% | 0.00% | 0.00% | 0.00%
9 | 270-5-2 | 1.84% | 1.44% | 1.44% | 0.87% | 0.00%
10 | 315-1-0 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
11 | 315-2-0 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
12 | 315-3-0 | 0.00% | 0.00% | 1.43% | 0.77% | 0.00%
13 | 315-2-1 | 0.00% | 0.00% | 0.46% | 0.00% | 0.00%
14 | 315-3-1 | 0.00% | 0.00% | 2.02% | 0.08% | 0.00%
15 | 315-4-1 | 0.00% | 0.00% | 4.89% | 3.91% | 0.00%
16 | 315-3-2 | 3.86% | 0.01% | 4.11% | 0.00% | 0.00%
17 | 315-4-2 | 3.19% | 2.66% | 7.43% | 2.18% | 0.00%
18 | 315-5-2 | 1.66% | 0.00% | 4.82% | 1.32% | 0.62%
19 | 360-1-0 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
20 | 360-2-0 | 0.00% | 0.00% | 1.20% | 0.44% | 0.00%
21 | 360-3-0 | 0.00% | 0.00% | 0.05% | 0.11% | 0.00%
22 | 360-2-1 | 0.00% | 0.00% | 10.10% | 0.00% | 0.00%
23 | 360-3-1 | 0.00% | 0.00% | 6.89% | 0.00% | 0.00%
24 | 360-4-1 | 0.00% | 0.00% | 11.63% | 2.03% | 0.00%
25 | 360-3-2 | 0.00% | 0.00% | 0.00% | 0.00% | 0.00%
26 | 360-4-2 | 5.93% | 5.93% | 2.97% | 1.89% | 0.00%
27 | 360-5-2 | 0.74% | 0.00% | 0.00% | 0.00% | 0.00%
Table 5. The maximum, median, and standard deviation of the performance gap under (M, N) = (2, 6).
Measure | S1 | S2 | S3 | S4 | S5
Maximum of the Performance Gap | 6.18% | 5.93% | 11.63% | 3.91% | 0.62%
Median of the Performance Gap | 0.00% | 0.00% | 0.46% | 0.00% | 0.00%
StDev. of the Performance Gap | 1.82% | 1.36% | 3.33% | 0.96% | 0.12%
Table 6. The maximum, median, and standard deviation of the performance gap under different (M, N).
(M, N) (#shelves, #products) | S1 | S2 | S3 | S4 | S5
Maximum of the Performance Gap
(2, 6) | 6.18% | 5.93% | 11.63% | 3.91% | 0.62%
(2, 8) | 14.04% | 14.04% | 23.78% | 15.96% | 0.64%
(2, 10) | 14.04% | 9.35% | 22.48% | 19.24% | 16.72%
(3, 4) | 5.63% | 5.63% | 10.44% | 3.82% | 0.00%
(3, 6) | 7.48% | 5.24% | 29.60% | 12.47% | 0.00%
(4, 4) | 6.87% | 6.87% | 22.06% | 3.69% | 0.00%
Median of the Performance Gap
(2, 6) | 0.00% | 0.00% | 0.46% | 0.00% | 0.00%
(2, 8) | 0.94% | 0.94% | 6.02% | 2.46% | 0.00%
(2, 10) | 1.03% | 0.00% | 5.16% | 2.18% | 0.00%
(3, 4) | 0.00% | 0.00% | 3.01% | 0.00% | 0.00%
(3, 6) | 0.00% | 0.00% | 12.52% | 1.55% | 0.00%
(4, 4) | 0.00% | 0.00% | 10.26% | 2.09% | 0.00%
Standard Deviation of the Performance Gap
(2, 6) | 1.82% | 1.36% | 3.33% | 0.96% | 0.12%
(2, 8) | 4.14% | 3.65% | 6.29% | 3.97% | 0.12%
(2, 10) | 2.91% | 2.10% | 7.52% | 4.72% | 3.80%
(3, 4) | 1.34% | 1.34% | 3.31% | 1.32% | 0.00%
(3, 6) | 1.82% | 1.28% | 8.06% | 3.61% | 0.00%
(4, 4) | 1.91% | 1.91% | 6.44% | 1.23% | 0.00%
Note: The best performance gap in each problem set is highlighted in gray.
Table 7. Wilcoxon rank sum test.
Problem Set (M, N) | TLBO (S5) vs. S1 | TLBO (S5) vs. S2 | TLBO (S5) vs. S3 | TLBO (S5) vs. S4
(2, 6) | 0.003 ** | 0.016 * | 0.001 *** | 0.003 **
(2, 8) | 2.3 × 10^−4 *** | 2.3 × 10^−4 *** | 2.7 × 10^−5 *** | 5.3 × 10^−5 ***
(2, 10) | 0.019 * | 0.058 | 1.03 × 10^−4 *** | 0.001 ***
(3, 4) | 0.043 * | 0.043 * | 8.9 × 10^−5 *** | 0.001 ***
(3, 6) | 0.018 * | 0.018 * | 1.8 × 10^−5 *** | 6 × 10^−5 ***
(4, 4) | 0.144 | 0.144 | 2.7 × 10^−5 *** | 6 × 10^−5 ***
* p < 0.05, ** p < 0.01, *** p < 0.001.
Table 8. The maximum, median, and standard deviation of the performance gap under different (M, N).
(M, N) (#shelves, #products) | S1 | S2 | S3 | S4 | S5 | S6
Maximum of the Performance Gap
(2, 6) | 6.18% | 5.93% | 26.38% | 11.63% | 0.62% | 0.62%
(2, 8) | 17.03% | 15.66% | 23.78% | 20.00% | 13.26% | 13.26%
(2, 10) | 14.04% | 9.35% | 23.03% | 19.81% | 16.91% | 0.00%
(3, 4) | 5.63% | 5.63% | 10.44% | 3.82% | 0.00% | 0.00%
(3, 6) | 7.48% | 5.24% | 29.60% | 13.15% | 0.78% | 0.11%
(4, 4) | 6.87% | 6.87% | 22.06% | 3.69% | 0.00% | 0.00%
Median of the Performance Gap
(2, 6) | 0.00% | 0.00% | 0.86% | 0.46% | 0.00% | 0.00%
(2, 8) | 0.94% | 0.94% | 8.03% | 1.88% | 0.00% | 0.00%
(2, 10) | 1.35% | 1.09% | 5.16% | 2.89% | 0.00% | 0.00%
(3, 4) | 0.00% | 0.00% | 3.01% | 0.00% | 0.00% | 0.00%
(3, 6) | 0.00% | 0.00% | 12.52% | 1.55% | 0.00% | 0.00%
(4, 4) | 0.00% | 0.00% | 10.26% | 2.09% | 0.00% | 0.00%
Standard Deviation of the Performance Gap
(2, 6) | 1.82% | 1.36% | 8.70% | 3.33% | 0.12% | 0.12%
(2, 8) | 5.82% | 5.39% | 6.98% | 6.21% | 3.54% | 3.54%
(2, 10) | 2.89% | 2.05% | 7.42% | 4.70% | 3.86% | 0.00%
(3, 4) | 1.34% | 1.34% | 3.31% | 1.32% | 0.00% | 0.00%
(3, 6) | 1.81% | 1.28% | 8.07% | 3.68% | 0.15% | 0.02%
(4, 4) | 1.91% | 1.91% | 6.44% | 1.23% | 0.00% | 0.00%
Note: The best performance gap in each problem set is highlighted in gray.
