Article

An Improved Slime Mould Algorithm for Demand Estimation of Urban Water Resources

Department of Urban and Rural Planning, Academy of Architecture, Chang’an University, Xi’an 710061, China
* Author to whom correspondence should be addressed.
Submission received: 10 May 2021 / Revised: 2 June 2021 / Accepted: 2 June 2021 / Published: 8 June 2021
(This article belongs to the Special Issue Artificial Intelligence with Applications of Soft Computing)

Abstract
The slime mould algorithm (SMA) is a recent meta-heuristic algorithm that can be applied to a wide range of practical engineering problems. In this paper, an improved slime mould algorithm (ESMA) is proposed and used to estimate the water demand of Nanchang City. First, an opposition-based learning strategy and an elite chaotic searching strategy are incorporated into the SMA. Comparing the ESMA with other intelligent optimization algorithms on 23 benchmark test functions verifies that the ESMA offers fast convergence, high convergence precision, and strong robustness. Second, based on historical water-consumption data and the local economic structure of Nanchang, four estimation models are established: linear, exponential, logarithmic, and hybrid. Taking the water consumption of Nanchang City from 2004 to 2019 as a case study, the ESMA is used to determine the model parameters, and the resulting models are then tested. The simulation results show that all four models achieve good prediction accuracy, that the ESMA performs best on the hybrid model, and that the prediction accuracy reaches 97.705%. Finally, the water consumption of Nanchang for 2020–2024 is forecast.

1. Introduction

Water is a precious natural resource, often called the source of life. With continuous economic development and population growth, the demand for water keeps increasing. However, because water regenerates slowly, the tension between the supply of and demand for water resources is becoming increasingly acute, resulting in serious shortages. The optimal and rational allocation of water resources is therefore key to their sustainable utilization [1]. Owing to the randomness of water consumption and the influence of the economy, population, and other factors, estimating water demand has always been a difficult problem.
At present, methods of water-resources prediction, in China and abroad, are varied [2]. By spatial and temporal scale, forecasts can be divided into short-term, medium-term, long-term, global, national, and so on. By range, they can be divided into overall and partial forecasts. By approach, they can be divided into grey correlation models [3], regression analysis models [4], and neural network prediction models [5]. Although these traditional models can predict water demand in different ways, their low prediction accuracy can make them unsuitable for practical problems. Choosing appropriate model parameters is an effective way to counter the sparsity and uncertainty of historical data and to improve prediction accuracy. Selecting parameters from so many candidates is a challenging task and can be regarded as an optimization problem, so intelligent optimization algorithms are used to solve such parametric models. To improve the demand estimation of irrigation water, Pulido-Calvo and Gutiérrez-Estrada studied a hybrid model based on a genetic algorithm, computational neural networks, and fuzzy logic [6]. Bai et al. [7] proposed a multi-scale urban water demand estimation method based on an adaptive chaotic particle swarm optimization algorithm to search for weight factors. Romano and Kapelan [8] constructed a valid estimation model with an average error of about 5% using evolutionary algorithms and artificial neural networks. Similarly, Oliveira et al. [9] applied the harmony search (HS) algorithm to short-term water demand estimation, using it to search for the model parameters. Swarm intelligence optimization algorithms are a research hotspot in the optimization field.
Swarm intelligence optimization algorithms simulate the social behavior of biological groups; the most classic is particle swarm optimization (PSO) [10]. In recent years, other swarm intelligence algorithms have been proposed, including the whale optimization algorithm (WOA) [11], grey wolf optimizer (GWO) [12], Harris hawks optimization (HHO) [13], firefly algorithm (FA) [14], manta ray foraging optimization (MRFO) [15], marine predators algorithm (MPA) [16], and slime mould algorithm (SMA) [17]. Among them, the SMA is a new meta-heuristic algorithm proposed by Li et al. [17] in 2020, inspired by the diffusion and foraging behavior of slime mould. The SMA has strong global search ability and robustness, so it has been applied to several practical engineering optimization problems [18,19,20,21,22,23,24,25,26]. At the same time, the SMA also has some defects, such as low calculation accuracy and premature convergence on some benchmark functions. To improve the convergence accuracy and speed of the algorithm, a new, improved slime mould algorithm (ESMA) is proposed. For the four water-resources estimation models (linear, logarithmic, exponential, and hybrid) established for Nanchang City, the ESMA is used to optimize the model parameters and test the models. In addition, the ESMA is compared with other intelligent algorithms on these models, and the water consumption of Nanchang City in 2020–2024 is predicted.
The rest of this paper is organized as follows. Section 2 proposes the improved slime mould algorithm (ESMA). Section 3 compares the ESMA with six other optimization algorithms on 23 test functions and verifies its superiority experimentally. Section 4 proposes four estimation models (linear, logarithmic, exponential, and hybrid) to predict the water resources of Nanchang City; the ESMA is used to optimize the model parameters, the models are tested, and the simulation results are discussed. Finally, Section 5 summarizes the work.

2. An Improved Slime Mould Algorithm

2.1. Slime Mould Algorithm

The slime mould algorithm (SMA), proposed by Li et al. [17] in 2020, was inspired by the diffusion and foraging behavior of slime mould in nature. In this paper, “slime mould” refers to Physarum polycephalum, and the focus is on its nutritional stage, in which the organic matter of the slime mould searches for, surrounds, and digests food. The mathematical models of these stages are given below.

2.1.1. Initialization

A single objective optimization model can be represented by Equation (1),
$\min f(X) \quad \text{s.t.} \quad lb \le X \le ub \qquad (1)$
where $f(X)$ is the objective function, and $lb, ub \in \mathbb{R}^d$ are the lower and upper bounds of the variable $X \in \mathbb{R}^d$.
For the above d-dimensional optimization problem, the initial slime mould population with n individuals is an $n \times d$ matrix $X(0) = \{X_1, X_2, \ldots, X_n\}$. Each individual in the population is a vector with d elements, initialized by Equation (2).
$X_i = lb + rand \cdot (ub - lb), \quad i = 1, 2, \ldots, n \qquad (2)$

2.1.2. Approach Food

Slime mould approaches food according to the odor in the air; this approach behavior can be expressed by the following formula,
$$X(t+1) = \begin{cases} X_b(t) + vb \cdot (W \cdot X_A(t) - X_B(t)), & r < p \\ vc \cdot X(t), & r \ge p \end{cases} \qquad (3)$$
where t is the current iteration number, X is the position of the slime mould, $X_b$ is the individual position with the highest odor concentration, and $X_A$ and $X_B$ are two individuals randomly selected from the population. The selection behavior of slime mould is simulated by two parameters, vb and vc: vb ranges over $[-a, a]$, while vc decreases linearly from 1 to 0. r is a random number in [0, 1], and W is the weight of the search agent.
The formula of p is expressed as follows,
$p = \tanh\left| S(i) - DF \right|, \quad i = 1, 2, \ldots, n \qquad (4)$
where, S(i) represents the fitness value of the current individual, and DF represents the optimal fitness value in all the current iterations.
The expression of vb is as follows,
$vb \in [-a, a], \quad a = \operatorname{arctanh}\left( -\frac{t}{\max\_t} + 1 \right) \qquad (5)$
where, max_t represents the maximum number of iterations.
The weight W is given as follows,
$$W(\text{SmellIndex}(i)) = \begin{cases} 1 + r \cdot \log\left( \dfrac{bF - S(i)}{bF - wF} + 1 \right), & \text{condition} \\ 1 - r \cdot \log\left( \dfrac{bF - S(i)}{bF - wF} + 1 \right), & \text{others} \end{cases} \qquad (6)$$
$\text{SmellIndex} = \operatorname{sort}(S) \qquad (7)$
where r is a random number in [0, 1], condition indicates the first half of the population (ranked by fitness), bF and wF are the best and worst fitness values obtained in the current iteration, and SmellIndex is the sequence of sorted fitness values (ascending for minimization problems).

2.1.3. Wrap Food

This stage mathematically simulates the contraction of the venous tissue structure of slime mould while searching; the slime mould adjusts its search pattern according to the quality of food. The position update of the slime mould can be expressed as
$$X^* = \begin{cases} rand \cdot (ub - lb) + lb, & rand < z \\ X_b(t) + vb \cdot (W \cdot X_A(t) - X_B(t)), & r < p \\ vc \cdot X(t), & r \ge p \end{cases} \qquad (8)$$
where ub and lb are the upper and lower bounds of the search space, and rand and r are random numbers in [0, 1]. The parameter z balances the algorithm's exploration and exploitation, and its value can be chosen to suit the specific problem; in this paper, z = 0.03.
Algorithm 1 gives the pseudo-code of the SMA.
Algorithm 1. Slime mould algorithm
Input: Slime mould population Xi (i = 1,2,…,n) and related parameters such as n, dim, max_t;
Output: Optimal fitness value best_fitness and the corresponding optimal position Xb.
While (t < max_t)
  Check if solutions go outside the search space and bring them back
  Calculate the fitness values of all individuals, and update the best and worst fitness values
  Calculate the weight W according to Equation (6)
  Record the best fitness best_fitness and the corresponding Xb
  For each search agent
    Update the values of vb, vc, and p
    Update the individual position according to Equation (8)
  End For
  t = t + 1
End While
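The pseudo-code above can be condensed into a runnable sketch. The following Python code is a minimal illustration of Equations (2)–(8), not the authors' reference implementation; details such as the shrinking range of vc and the guard on the logarithm's denominator are common implementation choices assumed here.

```python
import numpy as np

def sma(f, lb, ub, n=30, dim=30, max_t=500, z=0.03):
    """Minimal SMA sketch after Li et al. [17]; minimizes f over [lb, ub]."""
    X = lb + np.random.rand(n, dim) * (ub - lb)            # Eq. (2)
    best_fitness, Xb = np.inf, X[0].copy()
    for t in range(max_t):
        X = np.clip(X, lb, ub)                             # keep agents in bounds
        fitness = np.array([f(x) for x in X])
        order = fitness.argsort()                          # SmellIndex, Eq. (7)
        bF, wF = fitness[order[0]], fitness[order[-1]]
        if bF < best_fitness:
            best_fitness, Xb = bF, X[order[0]].copy()
        # weight W, Eq. (6): '+' branch for the better half, '-' for the rest
        W = np.ones((n, dim))
        for rank, i in enumerate(order):
            r = np.random.rand(dim)
            term = r * np.log((bF - fitness[i]) / (bF - wF + 1e-300) + 1)
            W[i] = 1 + term if rank < n // 2 else 1 - term
        a = np.arctanh(1.0 - (t + 1) / max_t)              # Eq. (5)
        vc_bound = 1.0 - (t + 1) / max_t                   # vc shrinks linearly to 0
        for i in range(n):
            if np.random.rand() < z:                       # Eq. (8): random restart
                X[i] = lb + np.random.rand(dim) * (ub - lb)
                continue
            p = np.tanh(abs(fitness[i] - best_fitness))    # Eq. (4)
            vb = np.random.uniform(-a, a, dim)
            vc = np.random.uniform(-vc_bound, vc_bound, dim)
            A, B = np.random.randint(n, size=2)
            if np.random.rand() < p:
                X[i] = Xb + vb * (W[i] * X[A] - X[B])      # approach food
            else:
                X[i] = vc * X[i]                           # wander
    return best_fitness, Xb
```

On a simple sphere function, `sma(lambda x: float(np.sum(x**2)), ...)` converges toward the origin, which matches the behavior the benchmark tables below report for unimodal functions.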

2.2. The Proposed Improved Slime Mould Algorithm

2.2.1. Opposition-Based Learning

According to the idea of opposition-based learning (OBL) [27,28], during optimization the current solution has a 50% probability of being farther from the problem's optimum than its opposite solution. Therefore, selecting the better individuals from the union of the current population and the opposite population as the next generation can accelerate convergence to a certain extent, increase population diversity, and improve the performance of the algorithm.
Suppose the population size is n, so the population is $X = (X_1, X_2, \ldots, X_n)$, and let ub and lb be the upper and lower bounds of the search space. The algorithm generates n opposite solutions through opposition-based learning, giving the opposite population $\tilde{X} = (\tilde{X}_1, \tilde{X}_2, \ldots, \tilde{X}_n)$, where each opposite solution is computed as
$\tilde{X}_i = lb + ub - X_i \qquad (9)$
The fitness values of the current population X and the opposite population $\tilde{X}$ are both calculated. From the 2n individuals $X_{2n} = \{X_i, i = 1, \ldots, n\} \cup \{\tilde{X}_i, i = 1, \ldots, n\}$, the n individuals with the better fitness values are selected as the next generation.
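As a sketch, the selection step described above might look as follows in Python (the function name and NumPy-based layout are illustrative assumptions):

```python
import numpy as np

def opposition_select(X, f, lb, ub):
    """Opposition-based learning step: build the opposite population by
    Eq. (9) and keep the n fittest of the combined 2n individuals."""
    X_opp = lb + ub - X                         # Eq. (9)
    combined = np.vstack([X, X_opp])
    fitness = np.array([f(x) for x in combined])
    keep = fitness.argsort()[:len(X)]           # n best of the 2n candidates
    return combined[keep]
```

Because the selected n individuals are the best of a superset that contains the original population, the new population is never worse than the old one.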

2.2.2. Elite Chaotic Searching Strategy

The opposition-based learning strategy mainly strengthens the exploration ability of the algorithm; to improve its exploitation ability, an elite chaotic searching strategy is added. Through chaotic mutation of the elite individuals, the algorithm can further refine them, which improves its exploitation ability. The update process of the elite chaotic searching strategy is as follows.
First, the fitness values of all n individuals in the current population are calculated and sorted, and the first $m = \lceil p_r \cdot n \rceil$ individuals with the better fitness values are selected as the elite individuals of the current population, where $p_r \in [0, 1]$ is the elite proportion ($p_r = 0.1$ in this paper). The selected elite individuals are denoted $\{EX_1(t), EX_2(t), \ldots, EX_m(t)\} \subseteq \{X_1(t), X_2(t), \ldots, X_n(t)\}$, and the upper and lower bounds of their j-th dimension are, respectively:
$$Eb_j(t) = \max(EX_{1j}(t), EX_{2j}(t), \ldots, EX_{mj}(t)), \quad Ea_j(t) = \min(EX_{1j}(t), EX_{2j}(t), \ldots, EX_{mj}(t)) \qquad (10)$$
Then the elite individuals are mapped from the search space to the interval [0, 1] according to Equation (11), giving the chaotic individuals $C_i(t) = (C_{i1}(t), C_{i2}(t), \ldots, C_{id}(t))$, where d is the dimension of an individual.
$C_i(t) = \dfrac{EX_i(t) - lb}{ub - lb}, \quad i = 1, 2, \ldots, m \qquad (11)$
The logistic chaotic map is then iterated on $C_{ij}(t)$ according to the following equation,
$C_{ij}^{k+1}(t) = \mu \, C_{ij}^{k}(t) \left( 1 - C_{ij}^{k}(t) \right) \qquad (12)$
where $i = 1, \ldots, m$; $j = 1, \ldots, d$; the constant $\mu = 4$; k is the number of chaotic iterations; and $k_{\max}$ is the maximum number of chaotic iterations. In this paper, the maximum number of population iterations is used as the maximum number of chaotic iterations.
When the chaotic iteration reaches $k_{\max}$, the chaotic individual $C_{ij}^{k_{\max}}(t)$ is mapped back to $[Ea_j(t), Eb_j(t)]$ according to the following formula to obtain the i-th new elite individual $EC_{ij}(t)$.
$EC_{ij}(t) = C_{ij}^{k_{\max}}(t) \left( Eb_j(t) - Ea_j(t) \right) + Ea_j(t) \qquad (13)$
Finally, a greedy selection is made between the elite individuals $EC_i(t)$ and $EX_i(t)$; the individual with the better fitness value enters the next generation, i.e.,
$$EX_i(t) = \begin{cases} EX_i(t), & f(EX_i(t)) \le f(EC_i(t)) \\ EC_i(t), & f(EX_i(t)) > f(EC_i(t)) \end{cases} \qquad (14)$$
The chaotic mutation in this strategy increases the randomness of the elite individuals' positions, improving the local search ability of the algorithm accordingly, while the greedy selection of elite individuals accelerates convergence. Experiments show that this strategy improves the exploitation ability of the original algorithm.
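The whole strategy, Equations (10)–(14), can be sketched as follows (an illustrative Python fragment; the in-place update of the population is an implementation choice):

```python
import numpy as np

def elite_chaotic_search(X, f, lb, ub, pr=0.1, k_max=50, mu=4.0):
    """Elite chaotic searching sketch, Eqs. (10)-(14)."""
    n, d = X.shape
    m = max(1, int(pr * n))                     # number of elite individuals
    fitness = np.array([f(x) for x in X])
    elite_idx = fitness.argsort()[:m]
    EX = X[elite_idx]
    Eb = EX.max(axis=0)                         # per-dimension elite bounds, Eq. (10)
    Ea = EX.min(axis=0)
    C = (EX - lb) / (ub - lb)                   # map elites into [0, 1], Eq. (11)
    for _ in range(k_max):                      # logistic chaotic map, Eq. (12)
        C = mu * C * (1.0 - C)
    EC = C * (Eb - Ea) + Ea                     # map back to the elite box, Eq. (13)
    for j, i in enumerate(elite_idx):           # greedy selection, Eq. (14)
        if f(EC[j]) < fitness[i]:
            X[i] = EC[j]
    return X
```

The greedy step guarantees that no elite individual's fitness worsens, which is why the strategy can only accelerate convergence.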

2.2.3. The Improved Slime Mould Algorithm Combining the Two Strategies

This paper improves the original slime mould algorithm by adding opposition-based learning and an elite chaotic searching strategy to the SMA: opposition-based learning increases population diversity, while the elite chaotic searching strategy improves the exploitation ability of the algorithm. The proposed improved slime mould algorithm is called the ESMA for short. Its concrete steps are given below.
Step1: Initialize the parameters of the ESMA, such as population size n, variable dimension dim, upper and lower bounds of the variables, and maximum number of iterations max_t;
Step2: Initialize the population randomly by Equation (2), compute the opposite population of the current population according to Equation (9), sort the fitness of the current and opposite populations together, and select the n individuals with the better fitness values as the current population;
Step3: While t < max_t, calculate the fitness value of each individual and update the best and worst fitness values;
Step4: Update the weight W with Equation (6), update the minimum fitness value as the optimal value best_fitness, and record the corresponding optimal individual Xb;
Step5: Update the parameters p, vb, and vc for each individual, and update the population positions according to Equation (8);
Step6: Perform the opposition-based learning operation on the current population according to Equation (9), sort the fitness of the current and opposite populations together, select the n individuals with the better fitness values as the current population, and then execute the elite chaotic searching strategy according to Equations (10)–(14);
Step7: Let t = t + 1; if t < max_t, return to Step3; otherwise, output the optimal value best_fitness and the optimal individual Xb.
In addition, Algorithm 2 gives the pseudo-code of the ESMA.
Algorithm 2. Pseudo-code of the ESMA
Initialize related parameters such as n, dim, max_t, and the slime mould population Xi (i = 1, 2, …, n);
Calculate the opposite population $\tilde{X}$ of the current population Xi (i = 1, 2, …, n) by Equation (9)
Calculate the fitness of the population $\tilde{X} \cup X$, and pick the n individuals with better fitness values as the current population
While (t < max_t)
  Calculate the fitness values of all individuals, and update the best and worst fitness values
  Calculate the weight W according to Equation (6)
  Record the best fitness best_fitness and the corresponding Xb
  For i = 1 : n
    Update the values of vb, vc, and p
    Update the population position according to Equation (8)
  End For
  Calculate $\tilde{X}$ by Equation (9); based on the fitness values of the population $\tilde{X} \cup X$, pick the n individuals with better fitness values as the current population
  Select the first m individuals as the elite individuals EXi
  Calculate the new elite individuals obtained by chaotic iteration through Equations (10)–(13)
  Update the elite individuals' positions according to Equation (14)
  Check if solutions go outside the search space and bring them back
  t = t + 1
End While
Return the optimal fitness value best_fitness and the corresponding optimal position Xb

3. Comparison of the ESMA with Other Algorithms

In order to further test the performance of the ESMA, it is compared with six other intelligent algorithms on twenty-three test functions: the GWO [12], WOA [11], ant lion optimizer (ALO) [29], sine cosine algorithm (SCA) [30], moth-flame optimization (MFO) [31], and the original SMA [17]. The parameter settings of the algorithms are shown in Table 1. To obtain unbiased experimental results, all experiments are carried out on the same computer; the detailed settings are shown in Table 2. Table 3, Table 4 and Table 5 show the 23 benchmark test functions, which can effectively evaluate an algorithm's ability to explore, exploit, and avoid falling into local optima. Table 3 lists unimodal test functions, mainly used to evaluate exploitation ability; Table 4 lists multimodal test functions, which test exploration performance; and the fixed-dimension multimodal test functions in Table 5 test the ability to escape local extrema in low dimensions.
In the simulation experiments, the population size is n = 30, the dimension is dim = 30, and the maximum number of iterations is max_t = 500. To eliminate the influence of random factors and allow better statistical analysis, each algorithm is run independently 20 times on each benchmark function, and the best value (Best), worst value (Worst), mean value (Mean), standard deviation (Std), and the Rank based on the mean value of each algorithm are reported. The comparison results are shown in Table 6 and Figure 1, Figure 2 and Figure 3.
As shown in Table 6, for the unimodal test functions F1–F7, the ESMA accurately obtains the optimal value on F1–F4, showing good optimization performance. The ESMA is slightly inferior to the ALO on function F6, but clearly superior to it on the other unimodal test functions. For the multimodal test functions F8–F13, the ESMA also accurately obtains the optimal value on F9 and F11, and it is clearly better than the other algorithms on the remaining test functions. For the low-dimensional multimodal test functions, the ESMA accurately obtains the optimal value on F14, F16, and F18, and the results of the seven algorithms on F16 and F18 are basically the same. On functions F16–F19, the mean values of the ESMA and the MFO are the same, but the standard deviation of the ESMA is slightly lower. The ESMA is clearly better than the other algorithms on the remaining low-dimensional multimodal test functions. Across all the test functions, the standard deviation of the ESMA is small, so it is relatively stable. According to the final ranking, the ESMA performs well on the 23 benchmark test functions.
The Wilcoxon rank sum test, a nonparametric test, can effectively evaluate significant differences between two optimizers. Table 7 shows the p-values obtained by the Wilcoxon rank sum test for the other six algorithms at significance level α = 0.05, taking the ESMA as the benchmark. To avoid Type I errors, the p-values are corrected using the Holm–Bonferroni method, as follows. First, the p-values of the six comparison algorithms are sorted in ascending order; assume the result is $p_1 < p_2 < p_3 < p_4 < p_5 < p_6$. If $p_1 < 0.05/6 \approx 0.0083$, $p_2 < 0.05/5 = 0.01$, $p_3 < 0.05/4 = 0.0125$, $p_4 < 0.05/3 \approx 0.0167$, $p_5 < 0.05/2 = 0.025$, and $p_6 < 0.05$, then there are significant differences between the ESMA and the corresponding comparison algorithms. If a p-value exceeds its threshold, there is no significant difference between the corresponding algorithm and the ESMA, and none of the subsequent (larger) p-values are considered significant either, regardless of whether they fall below their thresholds. The bold entries in Table 7 are p-values with no significant difference between the ESMA and the comparison algorithm. Combined with the data in Table 6, the p-values are marked accordingly: “+” means the comparison algorithm is significantly better than the ESMA, “=” means there is no significant difference, and “−” means the comparison algorithm is significantly inferior to the ESMA.
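The step-down procedure described above can be sketched as a small helper (an illustrative function, not tied to any statistics library):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Holm-Bonferroni step-down correction: flag each p-value as
    significant or not.  The k-th smallest p-value is compared against
    alpha/(m-k); the first failure stops the procedure, and all larger
    p-values are declared non-significant as well."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] < alpha / (m - rank):
            significant[i] = True
        else:
            break                                # every larger p-value fails too
    return significant
```

For example, with four p-values [0.001, 0.2, 0.004, 0.03] at α = 0.05, the thresholds are 0.0125, 0.0167, and 0.025 in sorted order, so 0.03 stops the procedure and only the first and third tests are flagged significant.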
From the last line of Table 7, the counts of superior to/similar to/inferior to the ESMA for the SMA, GWO, WOA, ALO, SCA, and MFO are 0/15/8, 0/2/21, 0/4/19, 4/4/15, 0/0/23, and 4/3/16, respectively, which shows that the ESMA is significantly better than the comparison algorithms overall. Therefore, considering Table 6 and Table 7 together, the ESMA is highly competitive.
Figure 1, Figure 2 and Figure 3 show convergence curves plotted from the average fitness values over 20 runs of each algorithm on the test functions. As shown in Figure 1, for the unimodal test functions, the ESMA is significantly better than the comparison algorithms in convergence speed and precision, except on function F6, where its convergence precision is slightly inferior to the ALO. As shown in Figure 2, for the multimodal test functions, the ESMA is again significantly better than the comparison algorithms in convergence speed and precision, showing good competitiveness. As shown in Figure 3, for the low-dimensional multimodal test functions F16–F19, the differences among the seven algorithms are small, and all are basically close to the optimal values. On function F20, the ESMA is slightly inferior to the WOA and GWO in convergence accuracy, while on the other low-dimensional multimodal test functions the ESMA performs better in both convergence speed and accuracy. In general, the convergence curves show that the proposed ESMA clearly improves the convergence characteristics on the CEC-2005 benchmark functions.

4. The ESMA for Demand Estimation of Water Resources

4.1. Establishment of Water Resources Demand Estimation Model

In this section, the water resources demand forecasting model of Nanchang City, China, is established. The water resources demand of a city is related to many factors, such as ecological environment and economic development, so it is of great significance to forecast the water resources demand of a city. Table 8 shows the data of the total city water consumption, agricultural water consumption, industrial water consumption, residential water consumption, and ecological water consumption in Nanchang from 2004 to 2019.
In order to show the relationship between water resources and the level of regional economic development, water consumption in each sector can be linked to corresponding economic indicators and population size. Agricultural and industrial water are mainly used for crop irrigation and industrial production, respectively; therefore, gross agricultural production and gross industrial production are appropriate indicators for them. Residential water consumption refers to the water residents need for daily life, including drinking, washing, and bathing, and is closely related to the population size of Nanchang City. Ecological water consumption, however, is the total amount of water needed to maintain the integrity of an ecosystem and is not used as social or economic water, so it is not suitable for explaining the relationship between water consumption and the national economy. Moreover, as Figure 4 shows, ecological water use accounts for the smallest share of total water consumption, only 2.86%. Therefore, this paper ignores ecological water use when establishing the water-resources estimation model and considers only the influence of industrial, agricultural, and residential water use on total water consumption.
Table 8 also summarizes the population size, gross industrial production, and gross agricultural production from 2004 to 2019, which are used as proxies for residential, industrial, and agricultural water consumption, respectively.
In view of the relationship between total water consumption and population, gross industrial production, and gross agricultural production, the linear model, logarithmic model, and exponential model for forecasting the water resource of Nanchang City are respectively expressed as,
Linear model:
$y = a_1 x_1 + a_2 x_2 + a_3 x_3 + a_4 \qquad (15)$
Logarithmic model:
$y = a_1 \log(x_1) + a_2 \log(x_2) + a_3 \log(x_3) + a_4 \qquad (16)$
Exponential model:
$y = a_1 x_1^{a_2} + a_3 x_2^{a_4} + a_5 x_3^{a_6} + a_7 \qquad (17)$
In the above models, the $a_i$ are model parameters; $x_1$, $x_2$, and $x_3$ represent the population, gross industrial production, and gross agricultural production of Nanchang, respectively; and y is the water demand estimated by the model.
Hybrid model: By combining the above models, this paper obtains a hybrid water resources demand forecasting model based on a linear model, logarithmic model, and exponential model. The hybrid model is established as follows,
$$y = a_{11}\left( a_{12} x_1 + a_{13} x_2 + a_{14} x_3 + a_{15} \right) + a_{21}\left( a_{22} x_1^{a_{23}} + a_{24} x_2^{a_{25}} + a_{26} x_3^{a_{27}} + a_{28} \right) + \left( 1 - a_{11} - a_{21} \right)\left( a_{31} \log(x_1) + a_{32} \log(x_2) + a_{33} \log(x_3) + a_{34} \right) \qquad (18)$$
where the $a_{ij}$ are the parameters to be estimated, and $x_1$, $x_2$, $x_3$, and y are as in Equations (15)–(17).
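The four models of Equations (15)–(18) can be written as simple Python functions (a sketch; the flattening of the hybrid parameters $a_{11}, a_{12}, \ldots, a_{34}$ into a single vector indexed a[0]–a[16] is an illustrative choice):

```python
import numpy as np

def linear_model(a, x1, x2, x3):                # Eq. (15), 4 parameters
    return a[0]*x1 + a[1]*x2 + a[2]*x3 + a[3]

def log_model(a, x1, x2, x3):                   # Eq. (16), 4 parameters
    return a[0]*np.log(x1) + a[1]*np.log(x2) + a[2]*np.log(x3) + a[3]

def exp_model(a, x1, x2, x3):                   # Eq. (17), 7 parameters
    return a[0]*x1**a[1] + a[2]*x2**a[3] + a[4]*x3**a[5] + a[6]

def hybrid_model(a, x1, x2, x3):                # Eq. (18), 17 parameters
    # a[0] = a11, a[1..4] = a12..a15, a[5] = a21, a[6..12] = a22..a28,
    # a[13..16] = a31..a34 (illustrative flattening of the parameter vector)
    lin = a[1]*x1 + a[2]*x2 + a[3]*x3 + a[4]
    expo = a[6]*x1**a[7] + a[8]*x2**a[9] + a[10]*x3**a[11] + a[12]
    logp = a[13]*np.log(x1) + a[14]*np.log(x2) + a[15]*np.log(x3) + a[16]
    return a[0]*lin + a[5]*expo + (1 - a[0] - a[5])*logp
```

Note the parameter counts match the dimensions used in Section 4.4.2 (D = 4, 4, 7, and 17, respectively).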

4.2. Optimization of Water Resources Demand Estimation Model

There are undetermined parameters in the above four water demand estimation models, and their selection is closely tied to forecasting accuracy. How to determine the model parameters is therefore the main issue of this section. Parameter selection can be regarded as an optimization problem. To evaluate the quality of a parameter set of a water-resources demand forecasting model, the sum of squared errors between the real and predicted values is used as the objective function, expressed as follows,
$f(X) = \sum_{i=1}^{k} \left( y_i - \hat{y}_i \right)^2 \qquad (19)$
where X is the parameter vector of the water demand forecasting model, k is the number of years used in the optimization, $y_i$ is the real total water consumption in the i-th year, and $\hat{y}_i$ is the estimated total water consumption in the i-th year. The smaller the objective value, the better the model parameters and the closer the predicted values are to the real ones. The optimization problem of the water demand estimation model can therefore be defined as,
$\min f(X) \quad \text{s.t.} \quad l \le X \le u \qquad (20)$
where X is the D-dimensional decision variable, D is the number of parameters in the estimation model, and $u, l \in \mathbb{R}^D$ are the upper and lower bounds of the parameters.
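A sketch of how the objective of Equations (19) and (20) might be built for any of the four models (the helper name make_objective is an illustrative assumption):

```python
import numpy as np

def make_objective(model, x_data, y_true):
    """Return the SSE objective of Eq. (19) for one estimation model.
    x_data is a tuple (x1, x2, x3) of per-year arrays, y_true the real
    total water consumption of those years."""
    x1, x2, x3 = x_data
    def objective(params):
        y_hat = model(params, x1, x2, x3)
        return float(np.sum((y_true - y_hat) ** 2))
    return objective
```

The returned closure maps a parameter vector to a scalar error, which is exactly the fitness function an optimizer such as the ESMA minimizes.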

4.3. The ESMA Solves the Parameters of Water Demand Estimation Model

Section 4.2 establishes a specific mathematical model for solving water resources demand parameters. The following shows the specific steps of solving the model with the ESMA.
Step1: Initialize the parameters of the ESMA and take Equation (20) as the objective function;
Step2: Initialize the population randomly and perform the opposition-based learning operation;
Step3: While t < max_t, calculate the fitness value of each individual and update the best and worst fitness values;
Step4: Update the weight W with Equation (6), update the minimum fitness value as the optimal value best_fitness, and record the corresponding optimal individual Xb;
Step5: Update the parameters p, vb, and vc for each individual, and update the population positions according to Equation (8);
Step6: Perform the opposition-based learning operation, and then execute the elite chaotic searching strategy;
Step7: Let t = t + 1; if t < max_t, return to Step3; otherwise, output the optimal value best_fitness and the optimal individual Xb.
Figure 5 shows the flow chart of the ESMA for solving water resources estimation parameters.

4.4. The Experiment and Analysis of Water Resources Demand Estimation Model

Based on the water resources data of Nanchang City from 2004 to 2019 in Table 9, different optimization algorithms are used to optimize the parameters of the different forecasting models, and the performance of the models and algorithms is assessed through the error between the optimized forecasts and the real data.

4.4.1. Data Preprocessing

To eliminate the influence of differing magnitudes among the data, the data in Table 9 are first normalized as follows,
$x_{ij}^{*} = \dfrac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}, \quad i = 1, 2, \ldots, 16; \; j = 1, 2, 3, 4 \qquad (21)$
In the formula, $x_{ij}^{*}$ is the normalized value and $x_{ij}$ is the j-th index in the i-th year; i = 1, 2, …, 16 corresponds to 2004–2019, and j = 1, 2, 3, 4 corresponds to total water use, population, gross industrial production, and gross agricultural production; $x_j^{\min}$ and $x_j^{\max}$ are the minimum and maximum values of the j-th index, respectively.
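Equation (21) is ordinary column-wise min–max normalization; a sketch:

```python
import numpy as np

def min_max_normalize(data):
    """Column-wise min-max normalization of Eq. (21): rescale each index
    (column) to [0, 1] using its own minimum and maximum over the years."""
    x_min = data.min(axis=0)
    x_max = data.max(axis=0)
    return (data - x_min) / (x_max - x_min)
```

Applied to the 16 × 4 matrix of Table 9, each column is rescaled independently, so indices measured in very different units become directly comparable.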

4.4.2. Algorithm Parameters Setting

For all algorithms, the population size n is 30, the number of iterations T is 1000, and the dimension D is the number of parameters of the optimized model (linear model D = 4; logarithmic model D = 4; exponential model D = 7; hybrid model D = 17). To eliminate the influence of random factors, each algorithm is run 20 times independently on each model.
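The dimension D is simply the number of free coefficients the optimizer must fit. For illustration, a candidate parameter vector can be scored as below; the linear form (three driver coefficients plus an intercept, giving D = 4) and its pairing with the MRE objective are assumptions for this sketch, since the paper's exact model equations are not reproduced in this section:

```python
import numpy as np

def linear_model(theta, drivers):
    """Hypothetical linear form: y = a1*x1 + a2*x2 + a3*x3 + a4 (D = 4).
    `drivers` has one row per year: population, gross industrial production,
    and gross agricultural production (normalized)."""
    a1, a2, a3, a4 = theta
    return drivers @ np.array([a1, a2, a3]) + a4

def mre_fitness(theta, drivers, y_true):
    """Mean relative error used as the minimisation objective (Equation (20))."""
    y_hat = linear_model(theta, drivers)
    return float(np.mean(np.abs(y_true - y_hat) / y_true))

# toy check with made-up data: the exact parameters give zero error
drivers = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8, 0.9]])
theta_true = np.array([2.0, -1.0, 0.5, 30.0])
y = linear_model(theta_true, drivers)
print(mre_fitness(theta_true, drivers, y))   # → 0.0
```

The optimizer's job is then to drive `mre_fitness` as close to zero as possible over the 16 training years.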
Meanwhile, seven other algorithms are employed for comparison with the ESMA in solving the water resources demand estimation problem: the salp swarm algorithm (SSA) [32], WOA [11], HHO [13], biogeography-based optimization (BBO) [33], multi-verse optimizer (MVO) [34], Archimedes optimization algorithm (AOA) [35], and SMA [17]. The parameters of these algorithms are set as shown in Table 10.

4.4.3. Performance Evaluation Criteria of the Algorithms

In this paper, the relative error (RE) and mean relative error (MRE) between the real value and the predicted value are used to evaluate the performance of different algorithms in handling different optimization models.
The relative error is calculated by the following formula:

$$\mathrm{RE} = \frac{\lvert y - y' \rvert}{y}$$

where y and y' are the real and estimated values of water resources demand, respectively.
The mean relative error is calculated as follows:

$$\mathrm{MRE} = \frac{1}{k} \sum_{i=1}^{k} \frac{\lvert y_i - y_i' \rvert}{y_i}$$

where $y_i$ and $y_i'$ are the real and estimated values of water resources demand in the i-th year, and k is the number of years used in the optimization model.
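These two error measures translate directly into code; the real water-use values below are taken from Table 8, while the estimates are illustrative:

```python
def relative_error(y_true, y_pred):
    """RE = |y - y'| / y for a single year."""
    return abs(y_true - y_pred) / y_true

def mean_relative_error(y_true, y_pred):
    """MRE: the yearly relative errors averaged over k years."""
    assert len(y_true) == len(y_pred)
    return sum(relative_error(a, b) for a, b in zip(y_true, y_pred)) / len(y_true)

water = [26.22, 28.14, 27.71]      # real totals (10^8 m^3), 2004-2006, Table 8
estimate = [26.80, 27.90, 28.05]   # illustrative model outputs, not from the paper
print(mean_relative_error(water, estimate))
```

A reported prediction accuracy of 97.705% corresponds to an MRE of 2.295%, i.e., accuracy = 1 − MRE.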

4.4.4. Result and Analysis

Table 11, Table 12, Table 13 and Table 14 present the mean relative error statistics obtained by the ESMA and the seven other optimization algorithms on the four estimation models, including the best, mean, and worst values and the standard deviation over 20 independent runs. Bold values indicate the best result among all algorithms on the corresponding index.
From the perspective of the algorithms, the ESMA performs well across the models. In the linear model (Table 11), the worst value and standard deviation obtained by the ESMA are small. In the logarithmic model (Table 12) and the exponential model (Table 13), the mean, worst value, and standard deviation obtained by the ESMA are all small, and in the hybrid model (Table 14) its best and mean values are superior to those of the other algorithms. This shows that the ESMA can find parameters that minimize the prediction error in all the models. At the same time, the small mean and standard deviation indicate that the algorithm is stable and insensitive to random factors. The worst value and standard deviation of the HHO in the hybrid model (Table 14) are small, indicating good stability, but its mean and best values are not ideal. Overall, the results of the ESMA are better than those of the other algorithms.
From the perspective of the forecasting models, the minimum error of the hybrid water demand estimation model in the experiments is 2.2954%, better than that of the other models, indicating that the hybrid model can reasonably solve the water resources forecasting problem, although its stability is slightly weaker than that of the logarithmic and exponential models.
Figure 6 compares the actual values with the forecasts obtained by substituting the average parameter values from the 20 runs of each algorithm into the four prediction models. The red curve represents the forecasting data of the ESMA, which closely tracks the actual data.

4.5. Forecast of Water Resources Demand for 2020–2024

To forecast the demand for water resources in Nanchang in 2020–2024, the population, gross industrial production, and gross agricultural production for these five years are first estimated from their average annual growth rates; the results are shown in Table 15. Then, using the parameters of the water demand estimation models obtained by the eight optimization algorithms, the predicted water resource demands for 2020–2024 are computed. The predicted results are shown in Table 16.
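A compound average-annual-growth-rate extrapolation of this kind can be sketched as follows (the series endpoints are illustrative, not the actual Nanchang data):

```python
def project(last_value, first_value, years_observed, years_ahead):
    """Extrapolate a series with its average annual (compound) growth rate
    g = (last/first)^(1/(years_observed - 1)) - 1."""
    g = (last_value / first_value) ** (1.0 / (years_observed - 1)) - 1.0
    return [last_value * (1.0 + g) ** k for k in range(1, years_ahead + 1)]

# illustrative: a population-like series observed over 16 years (2004-2019),
# projected for the five years 2020-2024
forecast = project(last_value=565.0, first_value=480.0,
                   years_observed=16, years_ahead=5)
print([round(v, 1) for v in forecast])
```

Each driver column of Table 15 can be produced this way from its own first and last observed values.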
Figure 7 shows the water demand forecasts of the different algorithms on the four models. On each model, the prediction curves of the ESMA lie between the results predicted by the other algorithms, which may be more consistent with the actual use of water resources.

5. Conclusions

In this paper, the ESMA is proposed to estimate the water demand of Nanchang City. In the ESMA, the opposition-based learning strategy is adopted to enhance the diversity of the population and improve the convergence accuracy of the algorithm, and the elite chaotic searching strategy enhances its exploitation ability, enabling the ESMA to better approach the theoretical optima of the 23 benchmark functions. Based on the relationship between the total water consumption and the population, the gross industrial production, and the gross agricultural production, four forecasting models (linear, exponential, logarithmic, and hybrid) for the water resources of Nanchang City are put forward.
In the experiment, based on the water demand data of Nanchang City from 2004 to 2019, the ESMA is used to optimize the parameters in the estimation models, and the models are tested. The water consumption from 2004 to 2019 is estimated by the ESMA, and the performance of the ESMA is compared with the SMA, SSA, WOA, HHO, BBO, MVO, and AOA. The experimental results show that the ESMA can predict the water consumption with good accuracy on the four models, and it can achieve the highest accuracy in the hybrid estimation model, which is 97.705%. At the end of the experiment, the prediction data of the water demand of Nanchang from 2020 to 2024 by each algorithm are given.

Author Contributions

Conceptualization, K.Y.; Data curation, L.L. and Z.C.; Formal analysis, K.Y., L.L. and Z.C.; Funding acquisition, K.Y.; Investigation, K.Y., L.L. and Z.C.; Methodology, K.Y., L.L. and Z.C.; Resources, K.Y.; Software, L.L. and Z.C.; Supervision, K.Y.; Validation, L.L. and Z.C.; Visualization, Z.C.; Writing—original draft, K.Y., L.L. and Z.C.; Writing—review & editing, K.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Key R&D Program of China (Grant No. 2019YFD1100905).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data generated or analysed during this study are included in this published article.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. Davijani, M.H.; Banihabib, M.E.; Anvar, A.N.; Hashemi, S.R. Optimization model for the allocation of water resources based on the maximization of employment in the agriculture and industry sectors. J. Hydrol. 2016, 533, 430–438. [Google Scholar] [CrossRef]
  2. Arbués, F.; García-Valiñas, M.Á.; Martínez-Espiñeira, R. Estimation of residential water demand: A state-of-the-art review. J. Soc. Econ. 2003, 32, 81–102. [Google Scholar] [CrossRef]
  3. Hang, L.; Chi, Z.; Dong, M.; Ming, Z. Water demand prediction of Grey Markov model based on GM(1,1). In Proceedings of the 2016 3rd International Conference on Mechatronics and Information Technology, Shenzhen, China, 9–10 April 2016. [Google Scholar]
  4. Brentan, B.M.; Luvizotto, E., Jr.; Herrera, M.; Izquierdo, J.; Pérez-García, R. Hybrid regression model for near real-time urban water demand forecasting. J. Comput. Appl. Math. 2017, 309, 532–541. [Google Scholar] [CrossRef]
  5. Al-Zahrani, M.A.; Abo-Monasar, A. Urban Residential Water Demand Prediction Based on Artificial Neural Networks and Time Series Models. Water Resour. Manag. 2015, 29, 3651–3662. [Google Scholar] [CrossRef]
  6. Bai, Y.; Wang, P.; Li, C.; Xie, J.J.; Wang, Y. A multi-scale relevance vector regression approach for daily urban water demand forecasting. J. Hydrol. 2014, 517, 236–245. [Google Scholar] [CrossRef]
  7. Pulido-Calvo, I.; Gutiérrez-Estrada, J.C. Improved irrigation water demand forecasting using a soft-computing hybrid model. Biosyst. Eng. 2009, 102, 202–218. [Google Scholar] [CrossRef]
  8. Romano, M.; Kapelan, Z. Adaptive water demand forecasting for near real-time management of smart water distribution systems. Environ. Modell. Softw. 2014, 60, 265–276. [Google Scholar] [CrossRef] [Green Version]
  9. Oliveira, P.J.; Steffen, J.L.; Cheung, P. Parameter estimation of seasonal arima models for water demand forecasting using the harmony search algorithm. Procedia Eng. 2017, 186, 177–185. [Google Scholar] [CrossRef]
  10. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948. [Google Scholar]
  11. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Soft. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  12. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey wolf optimizer. Adv. Eng. Soft. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
  13. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  14. Yang, X.S. Firefly algorithm, stochastic test functions and design optimization. Int. J. Bio Inspir. Comput. 2010, 2, 78–84. [Google Scholar] [CrossRef]
  15. Zhao, W.G.; Zhang, Z.X.; Wang, L.Y. Manta ray foraging optimization: An effective bio-inspired optimizer for engineering applications. Eng. Appl. Artif. Intell. 2020, 87, 103300. [Google Scholar] [CrossRef]
  16. Faramarzi, A.; Heidarinejad, M.; Mirjalili, S.; Gandomi, A.H. Marine Predators Algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 2020, 152, 113377. [Google Scholar] [CrossRef]
  17. Li, S.M.; Chen, H.L.; Wang, M.J.; Heidari, A.A.; Mirjalili, S. Slime mould algorithm: A new method for stochastic optimization. Future Gener. Comput. Syst. 2020, 111, 300–323. [Google Scholar] [CrossRef]
  18. Vashishtha, G.; Chauhan, S.; Singh, M.; Kumar, R. Bearing defect identification by swarm decomposition considering permutation entropy measure and opposition-based slime mould algorithm. Measurement 2021, 178, 109389. [Google Scholar] [CrossRef]
  19. Mostafa, M.; Rezk, H.; Aly, M.; Ahmed, E.M. A new strategy based on slime mould algorithm to extract the optimal model parameters of solar PV panel. Sustain. Energy Techn. 2020, 42, 100849. [Google Scholar] [CrossRef]
  20. Abdel-Basset, M.; Mohamed, R.; Chakrabortty, R.K.; Ryan, M.J.; Mirjalili, S. An efficient binary slime mould algorithm integrated with a novel attacking-feeding strategy for feature selection. Comput. Ind. Eng. 2021, 153, 107078. [Google Scholar] [CrossRef]
  21. Kumar, C.; Raj, T.D.; Premkumar, M.; Raj, T.D. A new stochastic slime mould optimization algorithm for the estimation of solar photovoltaic cell parameters. Optik 2020, 223, 165277. [Google Scholar] [CrossRef]
  22. Rizk-Allah, R.M.; Hassanien, A.E.; Song, D.R. Chaos-opposition-enhanced slime mould algorithm for minimizing the cost of energy for the wind turbines on high-altitude sites. ISA Trans. 2021, in press. [Google Scholar] [CrossRef]
  23. Abdel-Basset, M.; Chang, V.; Mohamed, R. HSMA_WOA: A hybrid novel Slime mould algorithm with whale optimization algorithm for tackling the image segmentation problem of chest X-ray images. Appl. Soft Comput. 2020, 95, 106642. [Google Scholar] [CrossRef]
  24. Houssein, E.H.; Mahdy, M.A.; Blondin, M.J.; Shebl, D.; Mohamed, W.M. Hybrid slime mould algorithm with adaptive guided differential evolution algorithm for combinatorial and global optimization problems. Expert Syst. Appl. 2021, 174, 114689. [Google Scholar] [CrossRef]
  25. Djekidel, R.; Bentouati, B.; Javaid, M.S.; Bouchekara, H.R.E.H.; Bayoumi, A.S.; El-Sehiemy, R.A. Mitigating the effects of magnetic coupling between HV Transmission Line and Metallic Pipeline using Slime Mould Algorithm. J. Magn. Magn. Mater. 2021, 529, 167865. [Google Scholar] [CrossRef]
  26. El-Fergany, A.A. Parameters identification of PV model using improved slime mould optimizer and Lambert W-function. Energy Rep. 2021, 7, 875–887. [Google Scholar] [CrossRef]
  27. Tizhoosh, H.R. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control & Automation and International Conference on Intelligent Agents, Vienna, Austria, 28–30 November 2005; pp. 695–701. [Google Scholar]
  28. Muthusamy, H.; Ravindran, S.; Yaacob, S.; Polat, K. An improved elephant herding optimization using sine-cosine mechanism and opposition based learning for global optimization problems. Expert Syst. Appl. 2021, 172, 114607. [Google Scholar] [CrossRef]
  29. Mirjalili, S. The ant lion optimizer. Adv. Eng. Soft. 2015, 83, 80–98. [Google Scholar] [CrossRef]
  30. Mirjalili, S. SCA: A Sine Cosine Algorithm for solving optimization problems. Knowl. Based Syst. 2016, 96, 120–133. [Google Scholar] [CrossRef]
  31. Mirjalili, S. Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm. Knowl. Based Syst. 2015, 89, 228–249. [Google Scholar] [CrossRef]
  32. Mirjalili, S.; Gandomi, A.H.; Mirjalili, S.Z.; Saremi, S.; Faris, H.; Mirjalili, S.M. Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Adv. Eng. Soft. 2017, 114, 163–191. [Google Scholar] [CrossRef]
  33. Xing, B.; Gao, W.J. Biogeography—Based Optimization Algorithm; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  34. Mirjalili, S.; Mirjalili, S.M.; Hatamlou, A. Multi-Verse Optimizer: A nature-inspired algorithm for global optimization. Neural Comput. Appl. 2015, 27, 495–513. [Google Scholar] [CrossRef]
  35. Hashim, F.A.; Hussain, K.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W. Archimedes optimization algorithm: A new metaheuristic algorithm for solving optimization problems. Appl. Intell. 2021, 51, 1531–1551. [Google Scholar] [CrossRef]
Figure 1. Convergence curves of the ESMA and other algorithms on unimodal functions.
Figure 2. Convergence curves of the ESMA and other algorithms on multimodal functions.
Figure 3. Convergence curves of the ESMA and other algorithms on fixed-dimensional multimodal functions.
Figure 4. The distribution of water use in different departments.
Figure 5. The flow chart of the ESMA for solving water resources estimation parameters.
Figure 6. Comparison between actual and estimated values for water demand in Nanchang city: (a) Linear model; (b) Logarithmic model; (c) Exponential model; (d) Hybrid model.
Figure 7. Comparison for predicting water demand from 2020–2024 based on four models: (a) Linear model; (b) Logarithmic model; (c) Exponential model; (d) Hybrid model.
Table 1. Initial parameter settings of all algorithms.

| Algorithm | Parameter Value | Popsize | Number of Iterations |
| SMA | z = 0.03 | 30 | 500 |
| ESMA | z = 0.03, selected elite proportion pr = 0.1 | 30 | 500 |
| GWO | component of coefficient vectors: a = [2, 0] | 30 | 500 |
| WOA | a value of coefficient vector A: a = [2, 0] | 30 | 500 |
| ALO | N/A | 30 | 500 |
| SCA | value of the constant a = 2 | 30 | 500 |
| MFO | b = 1 | 30 | 500 |
Table 2. Detailed settings.

| Classification | Name | Detailed Settings |
| Hardware | CPU | Intel(R) Core(TM) i5-8625U |
| Hardware | Frequency | 1.60 GHz (up to 1.80 GHz) |
| Hardware | RAM | 8.00 GB |
| Hardware | Hard drive | 512 GB |
| Software | Operating system | Windows 10 |
| Software | Language | MATLAB R2018a |
Table 3. Unimodal test functions.

| Function | D | Range | fopt |
| $f_1(x)=\sum_{i=1}^{n} x_i^2$ | 30 | $[-100, 100]^n$ | 0 |
| $f_2(x)=\sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | 30 | $[-10, 10]^n$ | 0 |
| $f_3(x)=\sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | $[-100, 100]^n$ | 0 |
| $f_4(x)=\max_i \{ \lvert x_i \rvert, 1 \le i \le n \}$ | 30 | $[-100, 100]^n$ | 0 |
| $f_5(x)=\sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | $[-30, 30]^n$ | 0 |
| $f_6(x)=\sum_{i=1}^{n} (\lfloor x_i + 0.5 \rfloor)^2$ | 30 | $[-100, 100]^n$ | 0 |
| $f_7(x)=\sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | $[-1.28, 1.28]^n$ | 0 |
Table 4. Multimodal test functions.

| Function | D | Range | fopt |
| $f_8(x)=\sum_{i=1}^{n} -x_i \sin\left(\sqrt{\lvert x_i \rvert}\right)$ | 30 | $[-500, 500]^n$ | −12,569.5 |
| $f_9(x)=\sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | 30 | $[-5.12, 5.12]^n$ | 0 |
| $f_{10}(x)=-20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos 2\pi x_i\right) + 20 + e$ | 30 | $[-32, 32]^n$ | 0 |
| $f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}(x_i-100)^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i-100}{\sqrt{i}}\right) + 1$ | 30 | $[-600, 600]^n$ | 0 |
| $f_{12}(x)=\frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i-1)^2\left[1+10\sin^2(\pi y_{i+1})\right] + (y_n-1)^2\right\} + \sum_{i=1}^{n} u(x_i,10,100,4)$, where $y_i = 1+\frac{x_i+1}{4}$ | 30 | $[-50, 50]^n$ | 0 |
| $f_{13}(x)=0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i-1)^2\left[1+\sin^2(3\pi x_{i+1})\right] + (x_n-1)^2\left[1+\sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i,5,100,4)$ | 30 | $[-50, 50]^n$ | 0 |
Table 5. Fixed-dimensional multimodal test functions.

| Function | D | Range | fopt |
| $f_{14}(x)=\left(\frac{1}{500} + \sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\right)^{-1}$ | 2 | $[-65.536, 65.536]^n$ | 0.998 |
| $f_{15}(x)=\sum_{i=1}^{11}\left[a_i - \frac{x_1(b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4}\right]^2$ | 4 | $[-5, 5]^n$ | $3.075\times10^{-4}$ |
| $f_{16}(x)=4x_1^2 - 2.1x_1^4 + \frac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4$ | 2 | $[-5, 5]^n$ | −1.0316 |
| $f_{17}(x)=\left(x_2 - \frac{5.1}{4\pi^2}x_1^2 + \frac{5}{\pi}x_1 - 6\right)^2 + 10\left(1-\frac{1}{8\pi}\right)\cos x_1 + 10$ | 2 | $[-5, 5]^n$ | 0.398 |
| $f_{18}(x)=\left[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\right] \times \left[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\right]$ | 2 | $[-2, 2]^n$ | 3 |
| $f_{19}(x)=-\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{3} a_{ij}(x_j-p_{ij})^2\right]$ | 3 | $[0, 1]^n$ | −3.86 |
| $f_{20}(x)=-\sum_{i=1}^{4} c_i \exp\left[-\sum_{j=1}^{6} a_{ij}(x_j-p_{ij})^2\right]$ | 6 | $[0, 1]^n$ | −3.322 |
| $f_{21}(x)=-\sum_{i=1}^{5}\left[(x-a_i)(x-a_i)^T + c_i\right]^{-1}$ | 4 | $[0, 10]^n$ | −10.1532 |
| $f_{22}(x)=-\sum_{i=1}^{7}\left[(x-a_i)(x-a_i)^T + c_i\right]^{-1}$ | 4 | $[0, 10]^n$ | −10.4028 |
| $f_{23}(x)=-\sum_{i=1}^{10}\left[(x-a_i)(x-a_i)^T + c_i\right]^{-1}$ | 4 | $[0, 10]^n$ | −10.5363 |
Table 6. Results of the ESMA and other optimization algorithms.

| Function | Metric | GWO | WOA | ALO | SCA | MFO | SMA | ESMA |
| F1 | Best | 4.31E-29 | 7.16E-82 | 2.00E-4 | 4.41E-3 | 7.01E-1 | 0 | 0 |
| F1 | Worst | 6.38E-27 | 7.28E-75 | 5.01E-3 | 4.52E+1 | 2.00E+4 | 0 | 0 |
| F1 | Mean | 1.48E-27 | 9.44E-76 | 1.57E-3 | 1.09E+1 | 3.03E+3 | 0 | 0 |
| F1 | Std | 2.01E-27 | 1.84E-75 | 1.21E-3 | 1.32E+1 | 5.70E+3 | 0 | 0 |
| F1 | Rank | 3 | 2 | 4 | 5 | 6 | 1 | 1 |
| F2 | Best | 3.37E-17 | 2.71E-55 | 2.3137 | 7.20E-6 | 4.07E-1 | 1.54E-284 | 0 |
| F2 | Worst | 4.36E-16 | 3.54E-50 | 1.20E+2 | 6.19E-2 | 8.00E+1 | 3.58E-151 | 0 |
| F2 | Mean | 1.31E-16 | 1.87E-51 | 4.73E+1 | 1.23E-2 | 3.04E+1 | 1.79E-152 | 0 |
| F2 | Std | 1.09E-16 | 7.88E-51 | 4.72E+1 | 1.78E-2 | 2.44E+1 | 8.04E-152 | 0 |
| F2 | Rank | 4 | 3 | 7 | 5 | 6 | 2 | 1 |
| F3 | Best | 2.84E-9 | 1.49E+4 | 1.22E+3 | 2.74E+3 | 1.72E+3 | 0 | 0 |
| F3 | Worst | 5.78E-4 | 5.90E+4 | 9.89E+3 | 2.28E+4 | 5.39E+4 | 6.47E-295 | 0 |
| F3 | Mean | 3.37E-5 | 3.75E+4 | 4.12E+3 | 9.45E+3 | 2.08E+4 | 3.24E-296 | 0 |
| F3 | Std | 1.28E-4 | 1.07E+4 | 2.03E+3 | 5.15E+3 | 1.25E+4 | 0 | 0 |
| F3 | Rank | 3 | 7 | 4 | 5 | 6 | 2 | 1 |
| F4 | Best | 9.21E-8 | 1.6832 | 7.3298 | 1.99E+1 | 5.56E+1 | 3.82E-288 | 0 |
| F4 | Worst | 2.10E-6 | 8.93E+1 | 2.71E+1 | 5.30E+1 | 8.48E+1 | 4.17E-156 | 0 |
| F4 | Mean | 6.73E-7 | 5.18E+1 | 1.68E+1 | 3.74E+1 | 6.76E+1 | 2.09E-157 | 0 |
| F4 | Std | 5.89E-7 | 2.81E+1 | 5.1065 | 9.5060 | 8.9481 | 9.33E-157 | 0 |
| F4 | Rank | 3 | 6 | 4 | 5 | 7 | 2 | 1 |
| F5 | Best | 2.61E+1 | 2.77E+1 | 2.70E+1 | 1.00E+2 | 2.36E+1 | 4.30E-1 | 2.48E-2 |
| F5 | Worst | 2.87E+1 | 2.88E+1 | 2.05E+3 | 1.10E+5 | 8.00E+7 | 2.83E+1 | 6.2473 |
| F5 | Mean | 2.73E+1 | 2.82E+1 | 3.24E+2 | 1.69E+4 | 7.80E+6 | 8.8490 | 1.0764 |
| F5 | Std | 8.32E-1 | 1.61E-1 | 2.99E+5 | 7.23E+8 | 5.75E+14 | 1.30E+2 | 2.0468 |
| F5 | Rank | 3 | 4 | 5 | 6 | 7 | 2 | 1 |
| F6 | Best | 8.26E-5 | 1.19E-1 | 2.68E-4 | 4.9784 | 3.56E-1 | 1.64E-3 | 3.53E-4 |
| F6 | Worst | 1.5139 | 1.0976 | 4.22E-3 | 1.68E+2 | 1.01E+4 | 2.08E-2 | 1.01E-2 |
| F6 | Mean | 6.72E-1 | 3.68E-1 | 1.10E-3 | 1.96E+1 | 1.00E+3 | 6.67E-3 | 4.01E-3 |
| F6 | Std | 1.44E-1 | 6.18E-2 | 1.45E-6 | 1.32E+3 | 9.47E+6 | 1.59E-5 | 6.64E-6 |
| F6 | Rank | 5 | 4 | 1 | 6 | 7 | 3 | 2 |
| F7 | Best | 8.91E-4 | 2.37E-4 | 1.56E-1 | 1.36E-2 | 9.39E-2 | 1.01E-05 | 1.95E-6 |
| F7 | Worst | 6.99E-3 | 4.02E-3 | 4.93E-1 | 1.8531 | 2.8949 | 6.07E-4 | 1.86E-4 |
| F7 | Mean | 2.56E-3 | 1.54E-3 | 2.79E-1 | 1.98E-1 | 7.07E-1 | 2.23E-4 | 7.38E-5 |
| F7 | Std | 1.34E-3 | 1.14E-3 | 8.70E-2 | 3.97E-1 | 9.82E-1 | 1.46E-4 | 5.19E-5 |
| F7 | Rank | 4 | 3 | 6 | 5 | 7 | 2 | 1 |
| F8 | Best | −7729.0907 | −12568.933 | −7095.6123 | −4391.8124 | −9972.9113 | −12,569.38 | −12,569.49 |
| F8 | Worst | −3476.3214 | −8256.6973 | −5417.6748 | −3425.6876 | −7105.0606 | −12,568.52 | −12,568.31 |
| F8 | Mean | −5941.1712 | −10,238.026 | −5556.2584 | −3851.2001 | −8482.0979 | −12,569.01 | −12,569.11 |
| F8 | Std | 1.07E+6 | 2.91E+6 | 1.37E+5 | 8.53E+4 | 5.48E+5 | 5.59E-2 | 1.38E-1 |
| F8 | Rank | 5 | 3 | 6 | 7 | 4 | 2 | 1 |
| F9 | Best | 5.68E-14 | 0 | 4.38E+1 | 4.51E-1 | 1.04E+2 | 0 | 0 |
| F9 | Worst | 1.44E+1 | 0 | 1.17E+2 | 1.06E+1 | 2.52E+2 | 0 | 0 |
| F9 | Mean | 3.6616 | 0 | 7.91E+1 | 4.59E+1 | 1.62E+2 | 0 | 0 |
| F9 | Std | 4.1227 | 0 | 1.84E+1 | 2.84E+1 | 3.33E+1 | 0 | 0 |
| F9 | Rank | 2 | 1 | 4 | 3 | 5 | 1 | 1 |
| F10 | Best | 7.55E-14 | 8.88E-16 | 1.7783 | 1.41E-2 | 1.3811 | 8.88E-16 | 8.88E-16 |
| F10 | Worst | 1.22E-13 | 7.99E-15 | 9.7666 | 2.04E+1 | 2.00E+1 | 8.88E-16 | 8.88E-16 |
| F10 | Mean | 9.93E-14 | 3.91E-15 | 4.8695 | 1.52E+1 | 1.38E+1 | 8.88E-16 | 8.88E-16 |
| F10 | Std | 1.33E-14 | 3.11E-15 | 2.5083 | 8.5952 | 7.7738 | 0 | 0 |
| F10 | Rank | 3 | 2 | 4 | 6 | 5 | 1 | 1 |
| F11 | Best | 0 | 0 | 3.55E-3 | 4.36E-1 | 6.15E-1 | 0 | 0 |
| F11 | Worst | 4.11E-2 | 1.10E-1 | 1.27E-1 | 1.3327 | 1.81E+2 | 0 | 0 |
| F11 | Mean | 9.28E-3 | 7.82E-3 | 5.47E-2 | 8.52E-1 | 1.90E+1 | 0 | 0 |
| F11 | Std | 1.33E-2 | 2.62E-2 | 2.87E-2 | 2.60E-1 | 4.71E+1 | 0 | 0 |
| F11 | Rank | 3 | 2 | 4 | 5 | 6 | 1 | 1 |
| F12 | Best | 1.93E-2 | 8.79E-3 | 7.8128 | 1.2833 | 3.4831 | 4.07E-5 | 1.38E-6 |
| F12 | Worst | 1.05E-1 | 4.51E-2 | 3.69E+1 | 1.28E+6 | 2.60E+1 | 1.51E-2 | 9.67E-3 |
| F12 | Mean | 4.25E-2 | 2.13E-2 | 1.56E+1 | 6.44E+4 | 1.00E+1 | 4.91E-3 | 1.29E-3 |
| F12 | Std | 5.75E-4 | 1.18E-4 | 5.44E+1 | 8.25E+10 | 5.31E+1 | 2.23E-5 | 5.61E-6 |
| F12 | Rank | 4 | 3 | 6 | 7 | 5 | 2 | 1 |
| F13 | Best | 4.46E-1 | 5.88E-2 | 8.49E-2 | 3.0307 | 9.4540 | 4.91E-4 | 2.30E-5 |
| F13 | Worst | 1.1132 | 1.1252 | 5.44E+1 | 2.25E+5 | 3.62E+2 | 7.06E-2 | 1.41E-2 |
| F13 | Mean | 7.66E-1 | 4.68E-1 | 2.29E+1 | 1.24E+4 | 4.13E+1 | 1.14E-2 | 3.79E-3 |
| F13 | Std | 3.57E-2 | 8.03E-2 | 3.64E+2 | 2.50E+9 | 5.83E+3 | 2.64E-4 | 1.41E-5 |
| F13 | Rank | 4 | 3 | 5 | 7 | 6 | 2 | 1 |
| F14 | Best | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 | 0.998 |
| F14 | Worst | 12.6705 | 10.7632 | 6.9033 | 10.7632 | 7.874 | 0.998 | 0.998 |
| F14 | Mean | 5.3046 | 3.7499 | 2.7291 | 2.0828 | 3.1201 | 0.998 | 0.998 |
| F14 | Std | 1.95E+1 | 1.11E+1 | 3.0575 | 5.0233 | 5.3292 | 7.86E-24 | 3.30E-25 |
| F14 | Rank | 7 | 6 | 4 | 3 | 5 | 2 | 1 |
| F15 | Best | 3.075E-4 | 3.078E-4 | 6.404E-4 | 4.884E-4 | 7.295E-4 | 3.075E-4 | 3.077E-4 |
| F15 | Worst | 2.036E-2 | 2.194E-3 | 2.036E-2 | 1.535E-3 | 1.655E-3 | 1.227E-3 | 1.231E-3 |
| F15 | Mean | 2.51E-3 | 8.72E-4 | 1.94E-3 | 9.78E-4 | 1.02E-3 | 5.93E-4 | 5.29E-4 |
| F15 | Std | 3.74E-5 | 3.46E-7 | 1.89E-5 | 1.27E-7 | 1.32E-7 | 1.25E-7 | 7.58E-8 |
| F15 | Rank | 7 | 3 | 6 | 4 | 5 | 2 | 1 |
| F16 | Best | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| F16 | Worst | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| F16 | Mean | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 | −1.0316 |
| F16 | Std | 5.53E-16 | 5.53E-19 | 5.94E-27 | 2.37E-9 | 5.19E-32 | 1.25E-18 | 3.97E-19 |
| F16 | Rank | 6 | 4 | 2 | 7 | 1 | 5 | 3 |
| F17 | Best | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.39789 | 0.39789 |
| F17 | Worst | 0.39791 | 0.39792 | 0.39789 | 0.40907 | 0.39789 | 0.39789 | 0.39789 |
| F17 | Mean | 0.39789 | 0.39789 | 0.39789 | 0.40037 | 0.39789 | 0.39789 | 0.39789 |
| F17 | Std | 2.62E-11 | 1.00E-10 | 1.17E-26 | 1.11E-5 | 0 | 9.22E-16 | 5.23E-16 |
| F17 | Rank | 5 | 6 | 2 | 7 | 1 | 4 | 3 |
| F18 | Best | 3 | 3 | 3 | 3 | 3 | 3 | 3 |
| F18 | Worst | 3 | 3.0006 | 3 | 3.0004 | 3 | 3 | 3 |
| F18 | Mean | 3 | 3.0001 | 3 | 3.0001 | 3 | 3 | 3 |
| F18 | Std | 1.66E-9 | 2.04E-8 | 6.35E-26 | 1.62E-8 | 7.71E-30 | 1.16E-20 | 1.84E-21 |
| F18 | Rank | 5 | 7 | 4 | 6 | 1 | 3 | 2 |
| F19 | Best | −3.8628 | −3.8626 | −3.8628 | −3.8549 | −3.8628 | −3.8628 | −3.8628 |
| F19 | Worst | −3.8549 | −3.8215 | −3.8628 | −3.8518 | −3.8628 | −3.8628 | −3.8628 |
| F19 | Mean | −3.8611 | −3.852 | −3.8628 | −3.854 | −3.8628 | −3.8628 | −3.8628 |
| F19 | Std | 7.17E-6 | 1.70E-4 | 3.18E-26 | 1.08E-6 | 5.19E-30 | 2.43E-13 | 6.37E-15 |
| F19 | Rank | 5 | 7 | 2 | 6 | 1 | 4 | 3 |
| F20 | Best | −3.322 | −3.3216 | −3.322 | −3.1454 | −3.322 | −3.322 | −3.322 |
| F20 | Worst | −3.1365 | −2.4512 | −3.2018 | −1.9187 | −3.1327 | −3.1974 | −3.1997 |
| F20 | Mean | −3.2502 | −3.1889 | −3.2683 | −2.9488 | −3.2347 | −3.2559 | −3.2142 |
| F20 | Std | 5.84E-3 | 4.05E-2 | 3.70E-3 | 7.15E-2 | 3.68E-3 | 3.76E-3 | 1.36E-3 |
| F20 | Rank | 3 | 6 | 1 | 7 | 5 | 2 | 4 |
| F21 | Best | −10.1528 | −10.1502 | −10.1532 | −7.7535 | −10.1532 | −10.1532 | −10.1532 |
| F21 | Worst | −2.6826 | −2.6271 | −2.6305 | −0.4982 | −2.6305 | −10.152 | −10.1521 |
| F21 | Mean | −7.632 | −7.4809 | −6.3745 | −3.155 | −5.1376 | −10.1527 | −10.1529 |
| F21 | Std | 8.6401 | 9.6374 | 1.09E+1 | 4.1255 | 9.8474 | 1.17E-7 | 1.30E-7 |
| F21 | Rank | 3 | 4 | 5 | 7 | 6 | 2 | 1 |
| F22 | Best | −10.4028 | −10.4016 | −10.4029 | −6.555 | −10.4029 | −10.4029 | −10.4029 |
| F22 | Worst | −10.3969 | −3.7181 | −1.8376 | −0.5239 | −2.7519 | −10.402 | −10.4017 |
| F22 | Mean | −10.4012 | −8.197 | −5.9837 | −3.1307 | −7.5086 | −10.4025 | −10.4026 |
| F22 | Std | 1.73E-6 | 7.6414 | 1.17E+1 | 3.37 | 1.35E+1 | 6.59E-8 | 9.85E-8 |
| F22 | Rank | 3 | 4 | 6 | 7 | 5 | 2 | 1 |
| F23 | Best | −10.5363 | −10.5357 | −10.5364 | −6.7648 | −10.5364 | −10.5364 | −10.5364 |
| F23 | Worst | −2.4217 | −2.4173 | −1.6766 | −0.94237 | −2.8711 | −10.5354 | −10.5351 |
| F23 | Mean | −9.8612 | −7.3348 | −7.2174 | −3.6007 | −8.1616 | −10.5360 | −10.5361 |
| F23 | Std | 4.4991 | 8.9562 | 1.47E+1 | 3.3493 | 1.12E+1 | 9.72E-8 | 8.31E-8 |
| F23 | Rank | 3 | 5 | 6 | 7 | 4 | 2 | 1 |
| Mean Rank | | 4.0435 | 4.1304 | 4.2609 | 5.7826 | 4.8261 | 2.2174 | 1.4783 |
| Result | | 3 | 4 | 5 | 7 | 6 | 2 | 1 |
Table 7. The p-value of the Wilcoxon rank sum test, based on the ESMA (+, =, and - denote that the ESMA is worse than, equal to, and better than the compared algorithm).

| Function | ESMA vs. SMA | ESMA vs. GWO | ESMA vs. WOA | ESMA vs. ALO | ESMA vs. SCA | ESMA vs. MFO |
| F1 | 3.42E-1 (=) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F2 | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F3 | 1.98E-2 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F4 | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F5 | 4.16E-4 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F6 | 8.35E-3 (-) | 1.20E-6 (-) | 6.80E-8 (-) | 5.90E-5 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F7 | 1.61E-4 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F8 | 1.48E-1 (=) | 6.80E-8 (-) | 1.66E-7 (-) | 4.95E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F9 | NaN (=) | 7.93E-9 (-) | NaN (=) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F10 | NaN (=) | 7.68E-9 (-) | 1.57E-4 (-) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F11 | NaN (=) | 2.09E-3 (-) | 1.63E-1 (=) | 8.01E-9 (-) | 8.01E-9 (-) | 8.01E-9 (-) |
| F12 | 6.22E-4 (-) | 6.80E-8 (-) | 9.17E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F13 | 4.39E-2 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (-) |
| F14 | 9.25E-1 (=) | 6.80E-8 (-) | 6.80E-8 (-) | 3.13E-2 (=) | 6.80E-8 (-) | 1.06E-1 (=) |
| F15 | 9.46E-1 (=) | 8.18E-1 (=) | 1.44E-2 (-) | 2.30E-5 (-) | 4.68E-5 (-) | 3.71E-5 (-) |
| F16 | 6.75E-1 (=) | 1.06E-7 (-) | 9.89E-1 (=) | 6.80E-8 (+) | 6.80E-8 (-) | 8.01E-9 (+) |
| F17 | 9.68E-1 (=) | 5.23E-7 (-) | 8.35E-3 (-) | 6.79E-8 (+) | 6.80E-8 (-) | 8.01E-9 (+) |
| F18 | 1.48E-1 (=) | 6.80E-8 (-) | 6.80E-8 (-) | 9.13E-7 (-) | 6.80E-8 (-) | 5.71E-8 (+) |
| F19 | 1.40E-1 (=) | 6.80E-8 (-) | 6.80E-8 (-) | 6.80E-8 (+) | 6.80E-8 (-) | 8.01E-9 (+) |
| F20 | 3.15E-2 (=) | 5.61E-1 (=) | 5.98E-1 (=) | 6.56E-3 (+) | 6.80E-8 (-) | 1.29E-3 (-) |
| F21 | 8.10E-2 (=) | 7.95E-7 (-) | 6.80E-8 (-) | 2.85E-1 (=) | 6.80E-8 (-) | 6.93E-3 (-) |
| F22 | 2.29E-1 (=) | 1.10E-5 (-) | 6.80E-8 (-) | 1.08E-1 (=) | 6.80E-8 (-) | 2.83E-1 (=) |
| F23 | 2.50E-1 (=) | 5.87E-6 (-) | 9.17E-8 (-) | 5.98E-1 (=) | 6.80E-8 (-) | 1.04E-1 (=) |
| +/=/- | 0/15/8 | 0/2/21 | 0/4/19 | 4/4/15 | 0/0/23 | 4/3/16 |
Table 8. Historical water use in Nanchang city from 2004 to 2019.

| Year | Total Water Use (10^8 m^3) | Industrial Water Use (10^8 m^3) | Agricultural Water Use (10^8 m^3) | Residential Water Use (10^8 m^3) | Ecological Water Use (10^8 m^3) |
| 2004 | 26.22 | 8.72 | 14.47 | 2.75 | 0.28 |
| 2005 | 28.14 | 8.30 | 16.92 | 2.60 | 0.32 |
| 2006 | 27.71 | 8.11 | 16.73 | 2.52 | 0.35 |
| 2007 | 32.55 | 7.51 | 21.27 | 2.92 | 0.85 |
| 2008 | 30.42 | 6.90 | 19.73 | 2.94 | 0.85 |
| 2009 | 33.42 | 6.57 | 20.15 | 3.21 | 3.49 |
| 2010 | 30.87 | 7.51 | 17.37 | 3.49 | 2.50 |
| 2011 | 31.26 | 8.97 | 17.70 | 4.03 | 0.56 |
| 2012 | 28.82 | 9.20 | 14.68 | 4.36 | 0.58 |
| 2013 | 32.62 | 9.35 | 18.23 | 4.45 | 0.59 |
| 2014 | 31.42 | 8.92 | 17.35 | 4.54 | 0.61 |
| 2015 | 30.64 | 9.17 | 16.21 | 4.64 | 0.62 |
| 2016 | 31.44 | 9.21 | 16.9 | 4.7 | 0.53 |
| 2017 | 31.54 | 9.28 | 16.84 | 4.78 | 0.64 |
| 2018 | 32.02 | 9.13 | 17.45 | 4.8 | 0.64 |
| 2019 | 32.08 | 9.09 | 17.51 | 4.83 | 0.65 |
| Total | 491.17 | 135.94 (27.68%) | 279.51 (56.92%) | 61.56 (12.54%) | 14.06 (2.86%) |
Table 9. The total water, population, gross industrial production, and gross agricultural production in Nanchang city from 2004 to 2019.
Table 10. The parameters in algorithms in solving the problem of water resource demand estimation.

| Algorithm | Parameter Value | Popsize | Number of Iterations |
| ESMA | z = 0.03, selected elite proportion pr = 0.1 | 30 | 1000 |
| WOA | a value of coefficient vector A: a = [2, 0] | 30 | 1000 |
| HHO | E0 randomly changes inside the interval [−1, 1] | 30 | 1000 |
| BBO | largest immigration rate 1, largest emigration rate 1, mutation rate 0.1 | 30 | 1000 |
| MVO | existence probability = [0.2, 1], exploitation accuracy = 6 | 30 | 1000 |
| AOA | C1 = 2, C2 = 6, C3 = 1, C4 = 2 | 30 | 1000 |
| SMA | z = 0.03 | 30 | 1000 |
Table 11. Results of the linear model.

| Algorithm | Best MRE | Mean MRE | Worst MRE | Std |
| SSA | 3.7057% | 3.7842% | 4.3348% | 1.3766E-03 |
| WOA | 3.7830% | 9.4189% | 17.6245% | 3.3848E-02 |
| HHO | 3.5682% | 3.8091% | 4.4485% | 2.0942E-03 |
| BBO | 3.6662% | 4.2208% | 6.3299% | 8.0511E-03 |
| MVO | 3.6962% | 3.7246% | 3.7450% | 9.2270E-05 |
| AOA | 3.5694% | 3.9084% | 5.1945% | 4.2159E-03 |
| SMA | 3.7204% | 3.7363% | 3.7643% | 1.3514E-04 |
| ESMA | 3.7084% | 3.7254% | 3.7347% | 5.3146E-05 |
Table 12. Results of the logarithmic model.

| Algorithm | Best MRE | Mean MRE | Worst MRE | Std |
| SSA | 2.8369% | 2.8381% | 2.8576% | 1.6084E-04 |
| WOA | 2.7940% | 10.6478% | 24.2942% | 6.1449E-02 |
| HHO | 2.8149% | 3.2130% | 4.0381% | 3.5587E-03 |
| BBO | 2.8210% | 2.9604% | 3.4912% | 1.4766E-03 |
| MVO | 2.8132% | 2.8374% | 2.8862% | 1.8078E-04 |
| AOA | 2.8630% | 3.3421% | 4.3456% | 4.5821E-03 |
| SMA | 2.8249% | 2.8394% | 2.8503% | 6.0100E-05 |
| ESMA | 2.8309% | 2.8378% | 2.8489% | 4.9756E-05 |
Table 13. Results of the exponential model.

| Algorithm | Best MRE | Mean MRE | Worst MRE | Std |
| SSA | 2.3868% | 3.1596% | 5.8017% | 7.8594E-03 |
| WOA | 2.9855% | 9.8939% | 17.1511% | 4.5313E-02 |
| HHO | 2.5645% | 2.9217% | 3.3943% | 2.0079E-03 |
| BBO | 2.4364% | 3.0591% | 4.2006% | 4.2337E-03 |
| MVO | 2.4067% | 2.9647% | 3.5502% | 3.1345E-03 |
| AOA | 2.5685% | 3.0042% | 3.6609% | 2.5527E-03 |
| SMA | 2.3783% | 2.6942% | 3.0273% | 1.6634E-03 |
| ESMA | 2.3803% | 2.6488% | 2.8450% | 1.3262E-03 |
Table 14. Results of the hybrid model.

| Algorithm | Best MRE | Mean MRE | Worst MRE | Std |
| SSA | 2.8528% | 5.0161% | 9.6486% | 2.1377E-02 |
| WOA | 4.2606% | 16.3721% | 50.7506% | 1.2419E-01 |
| HHO | 2.4567% | 3.5252% | 4.5646% | 6.5378E-03 |
| BBO | 2.4485% | 3.4070% | 5.9116% | 1.0518E-02 |
| MVO | 2.9034% | 5.3989% | 12.8579% | 2.5195E-02 |
| AOA | 2.6473% | 5.7096% | 12.6856% | 3.1945E-02 |
| SMA | 2.4802% | 3.1261% | 4.7669% | 6.7312E-03 |
| ESMA | 2.2954% | 3.1235% | 5.7619% | 8.3033E-03 |
Table 15. Estimates of the population, industrial production, and agricultural production in 2020−2024.

| Year | Population | Gross Industrial Production (10^8 yuan) | Gross Agricultural Production (10^8 yuan) |
| 2020 | 5656114 | 2023.08 | 404.92 |
| 2021 | 5712223 | 2161.82 | 454.77 |
| 2022 | 5768888 | 2310.07 | 510.75 |
| 2023 | 5826116 | 2468.48 | 573.63 |
| 2024 | 5883911 | 2637.76 | 644.25 |
Table 16. Prediction of the total water consumption (10^8 m^3) based on four models.

Linear model:
| Year | SSA | WOA | HHO | BBO | MVO | AOA | SMA | ESMA |
| 2020 | 37.09 | 42.38 | 35.85 | 38.17 | 36.67 | 33.63 | 36.59 | 36.71 |
| 2021 | 40.68 | 49.25 | 38.50 | 42.26 | 39.97 | 34.77 | 39.88 | 40.05 |
| 2022 | 44.82 | 57.20 | 41.55 | 47.00 | 43.78 | 36.06 | 43.68 | 43.91 |
| 2023 | 49.58 | 66.37 | 45.06 | 52.45 | 48.16 | 37.54 | 48.04 | 48.35 |
| 2024 | 55.05 | 76.93 | 49.08 | 58.73 | 53.19 | 39.22 | 53.04 | 53.44 |

Logarithmic model:
| Year | SSA | WOA | HHO | BBO | MVO | AOA | SMA | ESMA |
| 2020 | 31.96 | 31.44 | 31.42 | 31.53 | 31.78 | 32.29 | 31.89 | 31.92 |
| 2021 | 32.18 | 31.38 | 31.42 | 31.59 | 31.95 | 32.61 | 32.09 | 32.13 |
| 2022 | 32.41 | 31.33 | 31.43 | 31.65 | 32.12 | 32.93 | 32.29 | 32.35 |
| 2023 | 32.63 | 31.28 | 31.44 | 31.72 | 32.29 | 33.25 | 32.49 | 32.56 |
| 2024 | 32.86 | 31.23 | 31.45 | 31.79 | 32.46 | 33.57 | 32.69 | 32.78 |

Exponential model:
| Year | SSA | WOA | HHO | BBO | MVO | AOA | SMA | ESMA |
| 2020 | 32.77 | 29.18 | 31.80 | 32.71 | 32.59 | 31.76 | 31.91 | 31.91 |
| 2021 | 33.32 | 24.53 | 31.85 | 33.15 | 33.16 | 31.84 | 32.11 | 32.06 |
| 2022 | 33.77 | 15.13 | 31.86 | 33.64 | 33.65 | 31.92 | 32.26 | 32.16 |
| 2023 | 34.13 | −3.23 | 31.82 | 34.22 | 34.07 | 31.98 | 32.36 | 32.19 |
| 2024 | 34.40 | −38.35 | 31.73 | 34.89 | 34.42 | 32.02 | 32.42 | 32.19 |

Hybrid model:
| Year | SSA | WOA | HHO | BBO | MVO | AOA | SMA | ESMA |
| 2020 | 33.06 | 28.01 | 32.80 | 31.58 | 33.09 | 33.47 | 32.31 | 30.68 |
| 2021 | 34.07 | 26.86 | 33.34 | 30.35 | 33.26 | 34.59 | 32.31 | 29.39 |
| 2022 | 35.08 | 25.67 | 33.93 | 28.20 | 32.27 | 35.87 | 32.19 | 27.54 |
| 2023 | 36.05 | 24.44 | 34.57 | 24.82 | 29.42 | 37.31 | 31.97 | 25.08 |
| 2024 | 36.96 | 23.14 | 35.29 | 19.82 | 23.64 | 38.95 | 31.63 | 21.95 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

Yu, K.; Liu, L.; Chen, Z. An Improved Slime Mould Algorithm for Demand Estimation of Urban Water Resources. Mathematics 2021, 9, 1316. https://0-doi-org.brum.beds.ac.uk/10.3390/math9121316