Article

Electricity Market Price Prediction Based on Quadratic Hybrid Decomposition and THPO Algorithm

1 School of Electric Power, Civil Engineering and Architecture, Shanxi University, Taiyuan 030031, China
2 North China Electric Power Research Institute Co., Ltd., Beijing 100045, China
3 State Grid Taiyuan Electric Power Supply Company, Taiyuan 030000, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Submission received: 12 June 2023 / Revised: 21 June 2023 / Accepted: 28 June 2023 / Published: 1 July 2023
(This article belongs to the Section C: Energy Economics and Policy)

Abstract

Electricity price forecasting is a crucial aspect of spot trading in the electricity market and of the optimal scheduling of microgrids. However, the stochastic and periodic nature of electricity price series often leads to low forecasting accuracy. To address this issue, this study proposes a quadratic hybrid decomposition method based on ensemble empirical mode decomposition (EEMD) and wavelet packet decomposition (WPD), together with a deep extreme learning machine (DELM) optimized by an improved hunter-prey optimization (THPO) algorithm, to enhance the accuracy of electricity price prediction. To prevent the optimization algorithm from falling into local optima, an improved strategy is proposed to strengthen the search capability of HPO. The electricity price series is decomposed into a set of components using EEMD and WPD, and a THPO-optimized DELM model is built for each component separately. The predicted values of all components are then superimposed to obtain the final electricity price forecast. The proposed model is evaluated using price data from the Australian electricity market. The results demonstrate that the improved algorithm strategy significantly improves the convergence performance of the algorithm, and that the proposed model enhances the accuracy and stability of electricity price prediction compared with several other prediction models.

1. Introduction

With the ongoing reform of the electricity market, the accuracy of electricity price forecasting directly affects the interests of all parties participating in spot trading and enables investors to make informed decisions that maximize profits and minimize risks. Accurate price prediction has therefore become a research topic of wide interest and is one of the key problems the electricity market urgently needs to solve. However, electricity prices are difficult to predict because of their high volatility and cyclicality and the wide range of factors that influence them. At present, electricity price prediction models fall into two main categories: time series approaches and intelligent prediction models.
The time series approach builds a regression model on the mean-reverting characteristics of prices. Commonly used models include the generalized autoregressive conditional heteroskedasticity (GARCH) model [1] and the autoregressive integrated moving average (ARIMA) model [2]. Reference [3] uses the wavelet transform to decompose the non-stationary electricity price series, applies ARIMA to predict each sub-series, and reconstructs the sub-series forecasts into the final prediction. Intelligent models such as BP neural networks [4], convolutional neural networks (CNN) [5], support vector machines [6], and extreme learning machines (ELM) [7] have also been employed. Reference [8] proposes a day-ahead spot market price prediction method based on a hybrid extreme learning machine, in which an optimization algorithm tunes the parameters of a regularized extreme learning machine (RELM) to improve prediction accuracy; the model is applied to the Shanxi spot market. Reference [9] uses a CNN to reduce the dimensionality of the data and improve the prediction accuracy of a long short-term memory (LSTM) network for electricity price series affected by multiple factors. Improvements in data decomposition and in optimization algorithms are key techniques for raising accuracy. Ensemble empirical mode decomposition (EEMD), variational mode decomposition (VMD) [10], and related methods have been used to decompose the data into sub-series and build a predictive model for each. Reference [11] uses the empirical wavelet transform (EWT) together with an attention-based LSTM network as the prediction model, and Reference [12] combines EEMD and VMD decomposition to increase the model's overall prediction accuracy. Although population-based optimization algorithms such as the sparrow search algorithm [13], the whale optimization algorithm [14], and the marine predator algorithm [8] have clear advantages, they are prone to local optima and slow convergence and thus require improvement. Reference [15] proposes an improved strategy combining Cauchy mutation and adaptive weights, which significantly improves the optimization accuracy and convergence speed of the whale optimization algorithm.
Given the limitations of prior research, we present a DELM-based model for predicting electricity prices. First, EEMD decomposition breaks the electricity price sequence into a series of intrinsic mode functions. Second, to address the irregular components that may impair prediction accuracy, WPD decomposition is applied to further decompose them into approximate and detailed components, and a DELM model is established for each resulting component. To avoid local optima, an adaptive t-distribution mutation operator is introduced to update the population; function tests show that the proposed improvement strategy enhances the convergence speed and optimization accuracy of the algorithm. The THPO algorithm is then used to optimize the input weights of the DELM. By analyzing electricity price data from the Australian market and comparing the proposed model with conventional DELM, EEMD-DELM, and other models, we demonstrate that the prediction accuracy of the proposed deep extreme learning machine electricity price prediction model is superior.

2. Improved Hunter-Prey Optimization Algorithm

The hunter-prey optimizer (HPO) [16] is a population-based optimization algorithm proposed by Naruei et al. in 2022. Unlike earlier hunting-inspired algorithms, HPO emulates scenarios in which predators, such as wolves and tigers, hunt herbivores, such as deer and antelope, with the positions of hunters and prey continuously adjusted.

2.1. Hunter-Prey Optimization Algorithm

The search mechanism for the predator is presented through Equation (1).
$$x_{i,j}(t+1) = x_{i,j}(t) + 0.5\Big[\big(2\,c\,Z\,P_{pos}(j) - x_{i,j}(t)\big) + \big(2(1-c)\,Z\,m(j) - x_{i,j}(t)\big)\Big] \tag{1}$$
where the variable $x_{i,j}(t)$ represents the location of the predator, $P_{pos}$ signifies the location of the prey, $m(j)$ denotes the mean of all positions, and $Z$ is the adaptive parameter computed with Equation (2).
$$P = \vec{R}_1 < c;\qquad IDX = (P == 0);\qquad Z = R_2 \otimes IDX + \vec{R}_3 \otimes (\sim IDX) \tag{2}$$
where $\vec{R}_1$ and $\vec{R}_3$ denote random vectors within the interval [0, 1], $R_2$ represents a random number within the range [0, 1], and $IDX$ signifies the index set of the vector that satisfies the condition $(P == 0)$.
The variable $c$ is a pivotal parameter that balances the competing objectives of exploration and exploitation. Its value gradually decreases from an initial value of 1 to a minimum of 0.02 as the iterations progress, as determined by Equation (3).
$$c = 1 - T\left(\frac{0.98}{T_{\max}}\right) \tag{3}$$
The prey position $P_{pos}$ is then determined by first computing the mean $m$ of all positions with Equation (4) and then computing the distance $D_{euc}$ between each individual and the mean position with Equation (5).
$$m = \frac{1}{n}\sum_{i=1}^{n} x_i \tag{4}$$
$$D_{euc}(i) = \left(\sum_{j=1}^{d}\left(x_{i,j} - m_j\right)^2\right)^{1/2} \tag{5}$$
When hunting, a hunter typically pursues and captures the prey, after which it must search for a new position to continue hunting. To address the convergence problem this scenario introduces, a decreasing mechanism is applied, as expressed in Equation (6).
$$k = \mathrm{round}(c \times N) \tag{6}$$
where the parameter $N$ denotes the size of the search population. At the onset of the algorithm, $k$ equals the population size. The search individual farthest from the mean position $m$ is designated as the prey $P_{pos}$ and is captured by the hunter. The prey position $P_{pos}$ is determined by Equation (7), which follows from Equations (4)-(6).
$$P_{pos} = x_i \,\big|\, i \ \text{is the index of the sorted}\ D_{euc}(k) \tag{7}$$
In the event that the prey experiences an attack, it will instinctively seek refuge in a secure location. It is hypothesized that the optimal safe position is none other than the global optimal position, as it affords the prey the greatest likelihood of survival. The hunter will subsequently select a new prey position, and this is achieved by utilizing Equation (8) to update the prey’s position.
$$x_{i,j}(t+1) = T_{pos}(j) + c\,Z\cos(2\pi R_4)\times\big[T_{pos}(j) - x_{i,j}(t)\big] \tag{8}$$
where $x_{i,j}(t)$ is the prey's current position and $T_{pos}(j)$ is the global optimum position, while $R_4$ is a random number within [0, 1]. The cosine function and its input parameters place the next prey position at a varying radius and angle around the global optimum, which improves search performance. Hunters and prey are distinguished by the adjustment parameter $\beta$, set to 0.1, and a random number $R_5$ within [0, 1]: the hunter's position is updated with Equation (1) when $R_5 < \beta$, and otherwise Equation (8) is applied to update the prey's position.
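To make Equations (1)-(8) concrete, the following Python sketch implements one iteration of the hunter and prey updates. It is a minimal illustration assuming minimization; the function and variable names (hpo_step, fitness_fn, pop, etc.) are ours, not from the original HPO paper.

```python
import numpy as np

def hpo_step(pop, fitness_fn, t, t_max, lb, ub, beta=0.1):
    """One illustrative HPO iteration: hunters follow Eq. (1), prey follow Eq. (8)."""
    n, d = pop.shape
    c = 1 - t * (0.98 / t_max)                        # balance parameter, Eq. (3)
    fitness = np.array([fitness_fn(x) for x in pop])
    t_pos = pop[np.argmin(fitness)]                   # global optimum (safe) position

    mean_pos = pop.mean(axis=0)                       # Eq. (4)
    d_euc = np.linalg.norm(pop - mean_pos, axis=1)    # Eq. (5)
    k = max(1, round(c * n))                          # decreasing mechanism, Eq. (6)
    p_pos = pop[np.argsort(d_euc)[k - 1]]             # prey: k-th ranked distance, Eq. (7)

    new_pop = pop.copy()
    for i in range(n):
        r1, r3 = np.random.rand(d), np.random.rand(d)
        r2 = np.random.rand()
        idx = ~(r1 < c)                               # components where P == 0, Eq. (2)
        z = r2 * idx + r3 * (~idx)
        if np.random.rand() < beta:                   # R5 < beta: hunter update, Eq. (1)
            new_pop[i] = pop[i] + 0.5 * ((2 * c * z * p_pos - pop[i]) +
                                         (2 * (1 - c) * z * mean_pos - pop[i]))
        else:                                         # otherwise: prey update, Eq. (8)
            r4 = np.random.rand(d)
            new_pop[i] = t_pos + c * z * np.cos(2 * np.pi * r4) * (t_pos - pop[i])
    return np.clip(new_pop, lb, ub)

# Example: one update step of a random population minimizing the sphere function.
pop = -100 + 200 * np.random.rand(30, 10)
pop = hpo_step(pop, lambda x: np.sum(x ** 2), t=1, t_max=500, lb=-100, ub=100)
```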

2.2. Chaotic Opposition-Based Learning Population Initialization

Population-based intelligent optimization algorithms commonly initialize population positions by random generation, which does not guarantee a uniform distribution of the population across the search space. This non-uniformity directly affects the convergence speed and the accuracy with which the algorithm finds the optimal solution. Chaotic sequences, which are diverse and have good ergodic properties, have been proposed as a remedy [17]. Populations initialized with chaotic sequences such as Tent mapping [18], Cubic mapping [19], Circle mapping [20], and Logistic mapping [21] exhibit better diversity. Tent chaotic mapping, characterized by good traversal uniformity and convergence speed, generates uniformly distributed chaotic sequences within the [0, 1] range. In this study, we combine Tent chaos mapping with opposition-based learning (OBL) into a new initialization strategy, referred to as TOBL, to enhance population quality.
The expression of Tent chaotic mapping is presented below.
$$y_{t+1} = \begin{cases} \dfrac{y_t}{a}, & 0 \le y_t < a \\[4pt] \dfrac{1 - y_t}{1 - a}, & a \le y_t \le 1 \end{cases} \tag{9}$$
where $y_t$ denotes the chaotic value generated at iteration $t$, $t = 0, 1, \ldots, T_{\max}$, and $a$ is a constant within the interval [0, 1], set to 0.7 in this study.
The TOBL strategy involves a dynamic compression of the distribution range of the initial population by utilizing the uniform variation of Tent, with the sum of the upper and lower bounds of the objective function serving as the center. The ultimate goal is to ensure uniformity in the population as much as possible. The model of the TOBL strategy is presented below.
$$\bar{x}_{i,j} = lb_j + ub_j - y_i\, x_{i,j} \tag{10}$$
The TOBL strategy proceeds as follows: given a population size of $n$, first $n$ population positions are generated randomly; then $n$ chaotic opposing positions are generated using Equation (10); finally, the $2n$ positions are ranked by fitness and the top $n$ positions are selected as the initial population.
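A minimal sketch of this initialization, assuming Tent mapping with a = 0.7 (Equation (9)) and the chaotic opposition of Equation (10); the helper names tent_sequence and tobl_init are illustrative.

```python
import numpy as np

def tent_sequence(n, a=0.7, seed=0.37):
    """Generate n values of the Tent chaotic map, Eq. (9)."""
    y = np.empty(n)
    y[0] = seed
    for t in range(1, n):
        y[t] = y[t - 1] / a if y[t - 1] < a else (1 - y[t - 1]) / (1 - a)
    return y

def tobl_init(n, dim, lb, ub, fitness_fn):
    """TOBL: n random positions plus n chaotic opposing positions, keep the best n."""
    pop = lb + (ub - lb) * np.random.rand(n, dim)       # random initial positions
    y = tent_sequence(n)[:, None]                       # chaotic factors
    opp = np.clip(lb + ub - y * pop, lb, ub)            # chaotic opposition, Eq. (10)
    both = np.vstack([pop, opp])                        # 2n candidate positions
    fit = np.array([fitness_fn(x) for x in both])
    return both[np.argsort(fit)[:n]]                    # top n by fitness (minimization)

# Example: initialize 30 individuals in 10 dimensions for the sphere function.
pop0 = tobl_init(30, 10, -100, 100, lambda x: np.sum(x ** 2))
```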

2.3. Non-Linear Equilibrium Parameters

The parameter c plays a crucial role in balancing exploration and exploitation in the HPO algorithm. To enhance the effectiveness of this parameter, a non-linear decreasing strategy has been proposed, and the resulting improved equilibrium parameters are illustrated in Figure 1. As the equilibrium parameter c undergoes continuous change in the iterative process, this parameter facilitates the optimization algorithm’s ability to balance global exploration and local exploitation processes, thereby avoiding convergence to local optima. The specific formula for calculating this parameter is presented below.
$$c = c_{\min} + (c_{\max} - c_{\min})\cos\!\left(\frac{\pi T}{2\,T_{\max}}\right) \tag{11}$$
where $T_{\max}$ denotes the maximum number of iterations, $T$ the current iteration, and $c_{\max}$ and $c_{\min}$ the maximum and minimum values of the balance parameter.
With the non-linear adjustment, the balance parameter decreases only slowly in the early iterations, so a large weight is maintained throughout this phase, which enhances the global search capability of the algorithm. From the middle of the iteration process onward, the weight declines rapidly, improving search precision in the later stages. This strategy balances the global and local search capabilities.
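For illustration, the snippet below contrasts the original linear schedule of Equation (3) with the cosine schedule of Equation (11). The bounds c_max = 1 and c_min = 0.02 match the range of the original HPO parameter but are an assumption here, since the paper does not state the bounds it adopts.

```python
import numpy as np

T_max = 500
T = np.arange(T_max + 1)
c_max, c_min = 1.0, 0.02   # assumed bounds, matching the range of the original HPO parameter

c_linear = 1 - T * (0.98 / T_max)                                      # Eq. (3)
c_cosine = c_min + (c_max - c_min) * np.cos(np.pi * T / (2 * T_max))   # Eq. (11)

# Early iterations: the cosine schedule stays near c_max (stronger global search);
# later it falls quickly toward c_min (finer local exploitation).
print(c_linear[[0, 250, 500]].round(3))   # [1.    0.51  0.02]
print(c_cosine[[0, 250, 500]].round(3))   # [1.    0.713 0.02]
```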

2.4. Adaptive t-Distribution

The t-distribution, commonly referred to as the Student distribution, is characterized by a distribution function curve that is closely linked to its degree of freedom, denoted by n [22]. In this study, we propose an adaptive t-distribution variational operator that employs the number of iterations as degrees of freedom to perturb the positions of the hunter and prey. This approach enhances the algorithm’s ability to effectively exploit global information in the early stages of the iteration process, while also facilitating the exploration of local information in the latter stages of the iteration process. Consequently, we observe a notable improvement in the convergence speed of the algorithm through the proposed approach to position updating.
$$x_i^{t+1} = x_i^{t} + x_i^{t}\cdot t(Iter) \tag{12}$$
where $x_i^{t+1}$ represents the population position after perturbation, $x_i^{t}$ denotes the position at the $t$-th iteration, and $t(Iter)$ refers to the t-distribution with the current iteration count as its degree of freedom. This update makes full use of the current position information while adding random perturbation information to promote optimal performance.
Figure 2 illustrates a comparison of the probability density of the function distribution. Specifically, the t-distribution closely resembles the Cauchy distribution in the initial iteration phase and gradually transitions towards the Gaussian distribution as the number of iterations increases, thereby facilitating the enhanced convergence speed of the algorithm. The proposed variational operator based on the t-distribution combines the advantages of Gaussian and Cauchy operators while concurrently enhancing the global exploration and local exploitation capabilities of the algorithm.
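A hedged sketch of the perturbation in Equation (12), using NumPy's standard_t with the current iteration count as the degrees of freedom; whether the perturbed position replaces the original only when it improves fitness is not specified in the text, so no selection rule is shown here.

```python
import numpy as np

def t_mutation(pop, iteration, lb, ub):
    """Adaptive t-distribution perturbation of Eq. (12).

    Small degrees of freedom (early iterations) give heavy-tailed, Cauchy-like jumps
    for global exploration; large degrees of freedom (late iterations) approach a
    Gaussian for fine local exploitation.
    """
    dof = max(1, iteration)                           # degrees of freedom = iteration count
    noise = np.random.standard_t(dof, size=pop.shape)
    return np.clip(pop + pop * noise, lb, ub)         # x + x * t(Iter)
```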

2.5. Algorithm Performance Comparison

2.5.1. The Specific Process of the THPO Algorithm

Subsequent to the aforementioned enhancements to the fundamental HPO algorithm, the resulting THPO algorithmic flow is depicted in Figure 3.

2.5.2. Testing Functions

This paper presents a comparative investigation of five intelligent optimization algorithms, namely the particle swarm optimizer algorithm (PSO), the genetic algorithm (GA), the whale optimization algorithm (WOA), the hunter-prey optimizer algorithm (HPO), and the proposed improved hunter-prey optimizer algorithm (THPO). The study employs standard parameters, including a population size of 30 and a maximum number of iterations of 500. The primary objective of this examination is to appraise the performance of the THPO algorithm, given its proposed enhancements. To achieve this aim, the study utilizes ten benchmark test functions with varying characteristics, as elucidated in Table 1. These functions are chosen to evaluate the local search and global search capabilities of the algorithms, with F1 to F5 representing single-peak functions and F6 to F9 representing complex multi-peak functions. Additionally, F10 represents fixed-dimensional functions.
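To make the evaluation protocol concrete, the sketch below defines two of the benchmarks from Table 1 (Sphere as a single-peak case, Rastrigin as a multi-peak case) and repeats a search 50 times to collect the optimum, mean, and standard deviation. Here run_optimizer is a placeholder for any of the compared algorithms, and the random-search stand-in is purely illustrative.

```python
import numpy as np

def sphere(x):                      # F1: single-peak, optimum 0 at the origin
    return np.sum(x ** 2)

def rastrigin(x):                   # F6: multi-peak, optimum 0 at the origin
    return np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10)

def evaluate(run_optimizer, fn, lb, ub, dim=30, runs=50):
    """Repeat the search 50 times and report the optimum, mean, and standard deviation."""
    results = np.array([run_optimizer(fn, lb, ub, dim) for _ in range(runs)])
    return results.min(), results.mean(), results.std()

# Placeholder optimizer (pure random search) just to make the protocol runnable.
def random_search(fn, lb, ub, dim, evals=1000):
    samples = lb + (ub - lb) * np.random.rand(evals, dim)
    return min(fn(x) for x in samples)

print(evaluate(random_search, sphere, -100, 100))
```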

2.5.3. Analysis of Test Results

Each of the four selected comparison algorithms and the proposed THPO algorithm was run independently 50 times on each of the 10 basic test functions; the results are presented in Table 2 and Figure 4.
The optimal and average values in Table 2 indicate an algorithm's convergence behavior and search accuracy. For the five single-peaked functions F1-F5, the proposed THPO algorithm outperforms the other four algorithms in terms of search results, with a significantly smaller standard deviation, indicating higher stability. For the three multi-peaked functions F6-F8, WOA, HPO, and THPO all find the optimal value with comparable performance, although WOA is less stable than HPO and THPO. Notably, on the F9 function, THPO demonstrates substantially better search performance and stability than the other algorithms. For the fixed-dimension function F10, both HPO and THPO reach the same optimal value, although it is not 0; THPO exhibits the best search performance on this function, with a small standard deviation and with optimal and mean values of the same order of magnitude, indicating strong stability compared with the other four algorithms.
To further evaluate the convergence of THPO, the iterative convergence curves for test functions F1 to F10 were examined. These curves reflect convergence accuracy and speed as well as the ability to escape local optima. As shown in Figure 4, the convergence curves of THPO lie consistently below those of the other algorithms on the single-peaked functions F1 to F5, indicating superior convergence accuracy and speed. On the multi-peaked functions F6 to F8, THPO, WOA, and HPO can all find the optimal value, but THPO converges fastest, reaching the optimum within 50 iterations. Although THPO's optimal value on F9 is slightly worse than HPO's, the convergence curves show that THPO's convergence speed and stability are significantly better. Finally, on the fixed-dimension function F10, THPO converges substantially faster than the other algorithms. Overall, the comprehensive comparison demonstrates that the proposed THPO algorithm outperforms the other algorithms on all performance metrics, exhibiting an exceptional ability to find the optimal solution with fast convergence.

3. Data Processing

3.1. Data Decomposition Method

3.1.1. Wavelet Packet Decomposition

Wavelet packet decomposition (WPD), also referred to as the wavelet packet transform [23], is an extension of wavelet decomposition (WD) that can select the appropriate decomposition band according to the signal itself and the analysis requirements. Compared with wavelet decomposition, WPD offers superior frequency resolution. Its adaptive nature enables the decomposition of unstable electricity price data into high- and low-frequency components, which facilitates forecasting. The working mechanism of WPD is depicted in Figure 5, and the wavelet packet decomposition is governed by Equation (13).
$$d_l^{j,2n} = \sum_k h_{k-2l}\, d_k^{j-1,n}, \qquad d_l^{j,2n+1} = \sum_k g_{k-2l}\, d_k^{j-1,n} \tag{13}$$
where $d_l^{j,2n}$ is the wavelet packet coefficient, and $h$ and $g$ are the filter coefficients of the low-pass and high-pass filters, respectively.
After wavelet packet decomposition, the frequency signal needs to be reconstructed, and the reconstruction algorithm is Equation (14).
$$d_l^{j-1,n} = \sum_k \left[\tilde{h}_{l-2k}\, d_k^{j,2n} + \tilde{g}_{l-2k}\, d_k^{j,2n+1}\right] \tag{14}$$
where $\tilde{h}$ and $\tilde{g}$ are the reconstruction coefficients of the low-pass and high-pass filters.
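As an illustration of Equations (13) and (14), the sketch below uses the PyWavelets package to decompose a series into wavelet-packet sub-bands and reconstruct it. The wavelet basis ('db4') and depth (2 levels, giving 4 sub-bands) are illustrative assumptions, since the paper does not report its WPD settings.

```python
import numpy as np
import pywt

x = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.3 * np.random.randn(1024)  # toy series

# Decompose into wavelet-packet nodes; Eq. (13) corresponds to the recursive filtering.
wp = pywt.WaveletPacket(data=x, wavelet='db4', mode='symmetric', maxlevel=2)
nodes = wp.get_level(2, order='freq')        # 4 sub-bands, ordered from low to high frequency
for node in nodes:
    print(node.path, node.data.shape)        # coefficients of each sub-band

# Reconstruction from the nodes (Eq. (14)) recovers the original signal.
x_rec = wp.reconstruct(update=False)[:len(x)]
print(np.allclose(x, x_rec))
```

In a forecasting pipeline, each sub-band can be modelled separately, for example by reconstructing one node at a time with the remaining nodes zeroed.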

3.1.2. Ensemble Empirical Mode Decomposition

The empirical mode decomposition (EMD) is a signal processing algorithm that has been further developed to produce the ensemble empirical mode decomposition (EEMD) algorithm. The EEMD algorithm aims to mitigate the impact of white noise on the original signal by introducing uniformly distributed white noise to a sequence of decompositions and averaging the outcomes of multiple decompositions. The EEMD algorithm can be outlined as follows:
  • Step 1: The white noise sequence is added to the initial time series to create a new sequence x ( t ) .
  • Step 2: The new sequence is decomposed into several intrinsic mode function (IMF) components and one residual (Res) component.
  • Step 3: Perform iterations of steps (1) and (2) m times, incorporating a white noise sequence of equivalent amplitude for each iteration.
  • Step 4: The decomposition outcomes of all iterations are combined by taking the average to eliminate the influence of white noise.
A diagrammatic representation of this algorithm is depicted in Figure 6.
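The four steps above correspond to the usage pattern sketched below with the PyEMD package (installed as EMD-signal); the noise width and number of trials are illustrative assumptions, as the paper does not report them, and the toy series merely stands in for the real price data.

```python
import numpy as np
from PyEMD import EEMD    # provided by the EMD-signal package

# Toy stand-in for the half-hourly electricity price series (1488 points).
t = np.arange(1488)
price = 40 + 10 * np.sin(2 * np.pi * t / 48) + 3 * np.random.randn(1488)

eemd = EEMD(trials=100, noise_width=0.05)   # Steps 1, 3, 4: add noise, repeat, average
imfs = eemd.eemd(price)                     # Step 2: EMD of each noisy copy -> averaged IMFs
print(imfs.shape)                           # (number of IMFs incl. residual, series length)
```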

3.2. A Quadratic Hybrid Decomposition Method Based on EEMD-WPD

In this study, a novel quadratic hybrid decomposition method is proposed for the purpose of predicting electricity prices. This approach involves the combination of two distinct techniques, namely ensemble empirical mode decomposition (EEMD) and wavelet packet decomposition (WPD). The EEMD method is initially employed to decompose the original signal into a set of intrinsic modal functions, and the corresponding decomposition outcome is presented in Figure 7. It is worth noting that the first intrinsic mode function (IMF1) exhibits non-linear and irregular characteristics, which may hinder the accuracy of the prediction. To address this issue, a WPD decomposition is proposed to decompose the non-linear and irregular IMF1 component into its approximate and detailed components. The decomposition results of WPD are demonstrated in Figure 8.

4. Deep Extreme Learning Machine Model Building

4.1. Principle of Deep Extreme Learning Machine

The extreme learning machine (ELM) is a type of neural network that employs a single hidden layer feed-forward architecture. During the training process, the ELM network randomly generates input weights and thresholds for the hidden layer. This randomization characteristic provides the ELM network with advantages such as rapid learning and high generalization ability. Additionally, the autoencoder (AE) algorithm is an unsupervised learning technique that is suitable for processing complex data and feature learning. One notable feature of AE is its ability to maintain an identical input and output structure in the network model.
For compressed (lower-dimensional) or sparse (higher-dimensional) feature representations, the output weights $\beta$ of the hidden layer can be expressed as:
$$\beta = \left(H^{T} H + \frac{I}{C}\right)^{-1} H^{T} X \tag{15}$$
where $\beta = [\beta_1\ \beta_2\ \cdots\ \beta_n]$ denotes the weights between the hidden nodes and the output nodes, $C$ is the regularization parameter, $H$ is the hidden-layer output matrix, and $X$ is the input data.
In the case of equal dimensional feature expression, the output weight β of the hidden layer can be mathematically expressed as follows:
$$\beta = H^{-1} T \tag{16}$$
DELM is a network architecture that is essentially composed of multiple ELM-AEs. The integration of multiple hidden layers in ELM-AE facilitates the mapping of data features, thereby enhancing both the predictive accuracy and generalization capability of the model. Unlike conventional deep learning techniques, DELM employs a hierarchical unsupervised training approach to minimize reconstruction errors, thereby ensuring that the input data in the model closely approximate the output data [24].
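A compact sketch of one ELM-AE layer and its stacking into a DELM feature extractor, following Equation (15) for the regularized output weights. The sigmoid activation, layer sizes, and the use of the transposed $\beta$ as the next layer's mapping are common ELM-AE conventions assumed here rather than details stated in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_ae_layer(X, n_hidden, C=1e3, rng=None):
    """One ELM-AE: random hidden mapping, then beta = (H^T H + I/C)^-1 H^T X, Eq. (15)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))    # random input weights
    b = rng.standard_normal(n_hidden)                  # random biases
    H = sigmoid(X @ W + b)                             # hidden-layer output matrix
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ X)
    return sigmoid(X @ beta.T)                         # features passed to the next layer

def delm_features(X, hidden_sizes=(64, 32)):
    """Stack ELM-AEs layer by layer (unsupervised), as in DELM; a final ELM readout
    (not shown) maps these features to the forecast target."""
    for size in hidden_sizes:
        X = elm_ae_layer(X, size)
    return X

# Example: extract DELM features from a toy input matrix (200 samples, 10 lags).
features = delm_features(np.random.rand(200, 10))
print(features.shape)    # (200, 32)
```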

4.2. EEMD-WPD-THPO-DELM Model

The proposed electricity price forecasting model in this study comprises three stages. Firstly, the original electricity price time series undergoes EEMD decomposition, and the IMF1 component with higher irregularity is further decomposed using WPD decomposition. Secondly, the THPO algorithm with adaptive t-distribution is introduced to optimize the input weights of the DELM. Thirdly, a THPO-DELM electricity price forecasting model is constructed for each series, and the final electricity price forecasts are obtained by superimposing the prediction results of each model. The specific modeling steps are illustrated in Figure 9.
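The three stages can be summarized in the pseudocode-style sketch below; eemd_decompose, wpd_decompose, thpo_optimize, and delm_forecast stand for the components described in Sections 2-4 and are assumed interfaces rather than functions defined in the paper.

```python
def forecast_price(price_series, eemd_decompose, wpd_decompose,
                   thpo_optimize, delm_forecast):
    """EEMD-WPD-THPO-DELM pipeline: decompose, model each component, sum the forecasts."""
    imfs = eemd_decompose(price_series)              # Stage 1a: EEMD -> IMF1..IMFn + residual
    components = list(wpd_decompose(imfs[0]))        # Stage 1b: WPD of the irregular IMF1
    components += list(imfs[1:])                     # remaining IMFs and residual kept as-is

    prediction = 0.0
    for component in components:
        weights = thpo_optimize(component)           # Stage 2: THPO tunes DELM input weights
        prediction += delm_forecast(component, weights)  # Stage 3: per-component DELM forecast
    return prediction                                # superimposed final price forecast
```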

5. Example Analysis

5.1. Data Preparation

Using Australia's electricity market price data as a case study, one month of electricity price data is selected, as depicted in Figure 10. The data are sampled at 30 min intervals, with daily sampling from 0:00 to 23:30, giving 48 sampling points per day and 1488 electricity price values in total. The first 70% of the data are used as the training set and the remainder as the test set.
To enhance predictive accuracy and speed up model convergence, the electricity price data are min-max normalized to the range [0, 1] using Equation (17):
$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \tag{17}$$
where $x$ and $x'$ denote the values of the data before and after normalization, respectively, and $x_{\max}$ and $x_{\min}$ are the maximum and minimum values of the data.

5.2. Error Evaluation Indicators

The present study employs the mean absolute error (MAE), root mean squared error (RMSE), mean absolute percentage error (MAPE), and coefficient of determination (R-squared, R2) to assess the predictive performance of the developed model; the formulas are given in Equation (18):
$$\begin{aligned} X_{MAE} &= \frac{1}{N}\sum_{i=1}^{N}\left|\hat{y}_i - y_i\right| \\ X_{RMSE} &= \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2} \\ X_{MAPE} &= \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{\hat{y}_i - y_i}{y_i}\right| \\ R^2 &= 1 - \frac{\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}{\sum_{i=1}^{N}\left(\bar{y} - y_i\right)^2} \end{aligned} \tag{18}$$
where $N$ is the sample size, $\hat{y}_i$ the predicted electricity price at moment $i$, $\bar{y}$ the average of the $N$ price predictions, and $y_i$ the true electricity price at moment $i$.
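The four metrics of Equation (18) can be computed directly with NumPy, as in the sketch below; following the paper's definition, the R2 denominator uses the mean of the predictions, and the example arrays are illustrative.

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """MAE, RMSE, MAPE (%), and R2 as defined in Equation (18)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    mape = 100.0 * np.mean(np.abs(err / y_true))
    # Denominator follows the paper's definition: deviations of the true values
    # from the mean of the predictions.
    r2 = 1.0 - np.sum(err ** 2) / np.sum((np.mean(y_pred) - y_true) ** 2)
    return mae, rmse, mape, r2

print(evaluation_metrics(np.array([40.0, 42.0, 45.0]), np.array([41.0, 43.0, 44.0])))
```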

5.3. Analysis of the Results

5.3.1. Verification of Quadratic Hybrid Decomposition

The electricity price data were processed using different methods: (1) no decomposition; (2) EEMD decomposition; and (3) the quadratic hybrid decomposition method proposed in this paper. The THPO-DELM prediction model was built separately for each sub-series decomposed, and the final predicted electricity price was obtained by superimposing the prediction results of each series. The prediction curves obtained from the test set are shown in Figure 11, and the evaluation indicators are shown in Table 3.
Based on the analysis of Figure 11 and Table 3, the following conclusions can be drawn:
(1)
The EEMD-THPO-DELM and EEMD-WPD-THPO-DELM models exhibit a reduction in MAE by 61.1% and 74.9%, respectively, a reduction in MAPE by 61.4% and 75.91%, respectively, and a reduction in RMSE by 62.2% and 76.6%, respectively, as compared to the THPO-DELM model that employs raw data for prediction. The decomposition method facilitates the smoothing of feature series, thereby enhancing the prediction efficacy of the model.
(2)
The EEMD-WPD-THPO-DELM model demonstrates the most effective prediction performance, with a significant improvement in the coefficient of determination R2 as compared to other methods. This indicates that EEMD and WPD can render the data series smoother, and the proposed quadratic hybrid decomposition method resolves the issue of irregular components in EEMD decomposition that leads to low prediction accuracy, thereby enabling a closer match with the true value curve.

5.3.2. THPO-DELM Validation

To verify the validity of the proposed THPO-DELM prediction model, the electricity price data were processed with the quadratic hybrid decomposition, and each sub-series was predicted with one of the following models: (1) LSSVM; (2) DELM; (3) WOA-DELM; and (4) THPO-DELM. The final predicted values were obtained by superimposing the prediction results of each series. The prediction curves are shown in Figure 12, and the evaluation indicators are given in Table 4.
Based on the analysis of Figure 12 and Table 4, the following conclusions can be drawn:
(1)
The proposed EEMD-WPD-THPO-DELM model in this paper has higher prediction accuracy and better results. Compared with other prediction models, RMSE, MAE, and MAPE are lower and the value of R2 is larger, which verifies the effectiveness of the proposed model.
(2)
The DELM model has a 28.21% lower RMSE, 13.48% lower MAPE, and 4.28% lower MAE compared to the LSSVM model, reflecting the fact that the DELM model has better learning ability than the LSSVM model.
(3)
WOA-DELM and THPO-DELM represent enhancements of the DELM model, which can significantly improve the prediction accuracy by optimizing the DELM input weights through intelligent algorithms. The prediction accuracy of THPO-DELM is more significantly improved as compared to WOA-DELM, owing to the THPO algorithm’s good optimization-seeking performance. In comparison to DELM, THPO-DELM exhibits a reduction in RMSE, MAPE, and MAE by 33.47%, 39.87%, and 37.76%, respectively, and an improvement in the coefficient of determination R2 by 1.85%, which closely approximates the true curve.

6. Conclusions

This study presents a novel approach for electricity price forecasting using a quadratic hybrid decomposition method and a THPO-optimized DELM. To enhance the search capability of the HPO algorithm, an improved algorithm strategy is proposed, and the stability and optimization-seeking performance of the resulting THPO algorithm are evaluated on 10 benchmark test functions. Based on a comprehensive analysis of electricity price data from the Australian electricity market, the following conclusions can be drawn:
(1)
The THPO algorithm, with the introduction of variational operators with adaptive t-distribution, exhibits superior performance and stability as compared to the conventional HPO algorithm and converges rapidly.
(2)
The proposed EEMD-WPD quadratic hybrid decomposition method decomposes the random and highly periodic electricity price series into a relatively smooth series, thereby significantly improving the accuracy of electricity price prediction.
(3)
The THPO-DELM method addresses the influence of random input weights of the DELM model and utilizes the THPO algorithm for optimization, resulting in a significant improvement in forecasting accuracy as compared to the DELM forecasting model.
In summary, the prediction model proposed in this study is suitable for electricity spot market trading and microgrid scheduling optimization problems and exhibits promising application prospects.

Author Contributions

Conceptualization, L.Y. and Z.Y.; methodology, Z.Y. and Z.L.; software, L.Y. and Z.Y.; validation, Z.Y.; formal analysis, Z.Y., L.Y. and Z.L.; investigation, L.Y. and N.M.; resources, L.Y.; data curation, Z.Y.; writing—original draft preparation, Z.Y.; writing—review and editing, Z.Y., Z.L., L.Y., N.M., R.L. and J.Q.; visualization, Z.L. and Z.Y.; supervision, L.Y. and N.M.; project administration, L.Y.; funding acquisition, L.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Shanxi Development and Reform Commission’s Special Project for Mass Entrepreneurship and Innovation (137541005), the Shanxi Postgraduate Innovation Project (2022Y156), and the Ministry of Education University-Industry Cooperation Collaborative Education Project (221002262073019).

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, X.; Shen, J.; Li, Y. A Generalized Auto-Regressive Conditional Heteroskedasticity Model for System Marginal Price Forecasting Based on Weighted Double Gaussian Distribution. Power Syst. Technol. 2010, 34, 139–144. [Google Scholar]
  2. Zhang, J.; Wei, Y.-M.; Li, D.; Tan, Z.; Zhou, J. Short term electricity load forecasting using a hybrid model. Energy 2018, 158, 774–781. [Google Scholar] [CrossRef]
  3. Zhou, M.; Nie, Y.; Li, G.; Ni, Y. Wavelet analysis based arima hourly electricity prices forecasting approach. Power Syst. Technol. 2005, 29, 50–55. [Google Scholar]
  4. Liu, Y.; Jiang, C. Considering cyclical nature and change rate of load on short-term electricity price forecasting. Electr. Mach. Control 2010, 14, 21–26. [Google Scholar]
  5. Yildiz, C.; Acikgoz, H.; Korkmaz, D.; Budak, U. An improved residual-based convolutional neural network for very short-term wind power forecasting. Energy Convers. Manag. 2021, 228, 113731. [Google Scholar] [CrossRef]
  6. Zhang, S.; Zhang, N.; Zhang, Z.; Chen, Y. Electric Power Load Forecasting Method Based on a Support Vector Machine Optimized by the Improved Seagull Optimization Algorithm. Energies 2022, 15, 9197. [Google Scholar] [CrossRef]
  7. Yin, H.; Zeng, Y.; Meng, A.; Liu, Z. Short-term electricity price forecasting based on singular spectrum analysis. Power Syst. Prot. Control 2019, 47, 115–122. [Google Scholar]
  8. Dong, J.; Dou, X.; Bao, A.; Zhang, Y.; Liu, D. Day-Ahead Spot Market Price Forecast Based on a Hybrid Extreme Learning Machine Technique: A Case Study in China. Sustainability 2022, 14, 7767. [Google Scholar] [CrossRef]
  9. Ji, X.; Zeng, R.; Zhang, Y.; Song, F.; Sun, P.; Zhao, G. CNN-LSTM short-term electricity price prediction based on an attention mechanism. Power Syst. Prot. Control 2022, 50, 125–132. [Google Scholar]
  10. Zhang, J.; Tan, Z.; Wei, Y. An adaptive hybrid model for short term electricity price forecasting. Appl. Energy 2020, 258, 114087. [Google Scholar] [CrossRef]
  11. Meng, A.; Wang, P.; Zhai, G.; Zeng, C.; Chen, S.; Yang, X.; Yin, H. Electricity price forecasting with high penetration of renewable energy using attention-based LSTM network trained by crisscross optimization. Energy 2022, 254, 124212. [Google Scholar] [CrossRef]
  12. Zhang, T.; Tang, Z.; Wu, J.; Du, X.; Chen, K. Short term electricity price forecasting using a new hybrid model based on two-layer decomposition technique and ensemble learning. Electr. Power Syst. Res. 2022, 205, 107762. [Google Scholar] [CrossRef]
  13. Zhang, Y.; Tao, P.; Wu, X.; Yang, C.; Han, G.; Zhou, H.; Hu, Y. Hourly Electricity Price Prediction for Electricity Market with High Proportion of Wind and Solar Power. Energies 2022, 15, 1345. [Google Scholar] [CrossRef]
  14. Lu, Y.; Wang, G. A load forecasting model based on support vector regression with whale optimization algorithm. Multimed. Tools Appl. 2022, 82, 9939–9959. [Google Scholar] [CrossRef]
  15. Guo, Z.; Wang, P.; Ma, Y.; Wang, Q.; Gong, C. Whale Optimization Algorithm Based on Adaptive Weight and Cauchy Mutation. Microelectron. Comput. 2017, 34, 20–25. [Google Scholar]
  16. Naruei, I.; Keynia, F.; Molahosseini, A.S. Hunter-prey optimization: Algorithm and applications. Soft Comput. 2022, 26, 1279–1314. [Google Scholar] [CrossRef]
  17. Duan, Y.; Liu, C. Sparrow search algorithm based on Sobol sequence and crisscross strategy. J. Comput. Appl. 2022, 42, 36–43. [Google Scholar]
  18. Arena, P.; Fazzino, S.; Fortuna, L.; Maniscalco, P. Game theory and non-linear dynamics: The Parrondo Paradox case study. Chaos Solitons Fractals 2003, 17, 545–555. [Google Scholar] [CrossRef]
  19. Zhang, M.; Long, D.; Qin, T.; Yang, J. A Chaotic Hybrid Butterfly Optimization Algorithm with Particle Swarm Optimization for High-Dimensional Optimization Problems. Symmetry 2020, 12, 1800. [Google Scholar] [CrossRef]
  20. Arora, S.; Anand, P. Chaotic grasshopper optimization algorithm for global optimization. Neural Comput. Appl. 2019, 31, 4385–4405. [Google Scholar] [CrossRef]
  21. Ning, J.; He, Q. Flower Pollination Algorithm Based on t-distribution Perturbation Strategy and Mutation Strategy. J. Chin. Comput. Syst. 2021, 42, 64–70. [Google Scholar]
  22. Han, F.; Liu, S. Adaptive Satin Bowerbird Optimization Algorithm Based on t-Distribution Mutation. Microelectron. Comput. 2018, 35, 117–121. [Google Scholar]
  23. Meng, A.; Lu, H.; Hu, H.; Guo, Z. Hybrid wavelet packet-CSO-ENN approach for short-term wind power forecasting. Acta Energ. Sol. Sin. 2015, 36, 1645–1651. [Google Scholar]
  24. Zeng, L.; Lei, S.; Wang, S.; Chang, Y. Ultra-short-term Wind Power Prediction Based on OVMD-SSA-DELM-GM Model. Power Syst. Technol. 2021, 45, 4701–4710. [Google Scholar]
Figure 1. Comparison of parameter c before and after improvement.
Figure 2. Cauchy distribution, t-distribution, and Gaussian distribution probability density plots.
Figure 3. THPO algorithm flow chart.
Figure 4. Comparison of function iteration convergence curves.
Figure 5. WPD decomposition structure diagram.
Figure 6. EEMD algorithm flowchart.
Figure 7. EEMD decomposition result graph.
Figure 8. WPD decomposition of the IMF1 component.
Figure 9. EEMD-WPD-THPO-DELM model procedure.
Figure 10. Electricity market price curve in Australia.
Figure 11. Prediction curves under different model decompositions.
Figure 12. Prediction curves under different prediction models.
Table 1. Benchmark test functions.

Test Function | Function Name | Search Range | Dimension | Optimal Value
F1 | Sphere Model | [−100, 100] | 30 | 0
F2 | Schwefel 2.22 | [−10, 10] | 30 | 0
F3 | Schwefel 1.2 | [−100, 100] | 30 | 0
F4 | Schwefel 2.21 | [−100, 100] | 30 | 0
F5 | Quartic | [−1.28, 1.28] | 30 | 0
F6 | Rastrigin | [−5.12, 5.12] | 30 | 0
F7 | Ackley | [−32, 32] | 30 | 0
F8 | Griewank | [−600, 600] | 30 | 0
F9 | Penalized | [−50, 50] | 30 | 0
F10 | Kowalik | [−5, 5] | 4 | 0.00030
Table 2. Ten groups of test function data.

Function | Algorithm | Optimum Value | Mean | Standard Deviation
F1 | PSO | 5.293 × 10^2 | 2.407 × 10^3 | 1.146 × 10^3
F1 | GA | 2.0534 × 10^4 | 3.098 × 10^4 | 4.647 × 10^3
F1 | WOA | 4.325 × 10^−133 | 1.337 × 10^−121 | 6.943 × 10^−121
F1 | HPO | 2.536 × 10^−189 | 8.583 × 10^−170 | 0
F1 | THPO | 0 | 0 | 0
F2 | PSO | 1.475 × 10 | 3.169 × 10 | 1.058 × 10
F2 | GA | 4.607 × 10 | 5.67 × 10 | 0.443 × 10
F2 | WOA | 5.224 × 10^−75 | 7.795 × 10^−69 | 4.020 × 10^−68
F2 | HPO | 1.526 × 10^−98 | 1.314 × 10^−91 | 6.243 × 10^−91
F2 | THPO | 0 | 0 | 0
F3 | PSO | 1.790 × 10^3 | 6.369 × 10^3 | 2.222 × 10^3
F3 | GA | 2.677 × 10^4 | 4.417 × 10^4 | 6.781 × 10^3
F3 | WOA | 0.379 × 10^3 | 1.359 × 10^4 | 8.621 × 10^3
F3 | HPO | 3.790 × 10^−162 | 8.269 × 10^−147 | 4.382 × 10^−146
F3 | THPO | 0 | 0 | 0
F4 | PSO | 1.327 × 10 | 2.405 × 10 | 0.465 × 10
F4 | GA | 5.614 × 10 | 6.848 × 10 | 0.349 × 10
F4 | WOA | 5.834 × 10^−7 | 0.432 × 10 | 0.889 × 10
F4 | HPO | 7.844 × 10^−84 | 1.836 × 10^−76 | 9.555 × 10^−76
F4 | THPO | 0 | 0 | 0
F5 | PSO | 0.418 | 2.081 | 1.459
F5 | GA | 9.394 | 22.184 | 5.769
F5 | WOA | 1.593 × 10^−5 | 1.371 × 10^−3 | 1.571 × 10^−3
F5 | HPO | 1.621 × 10^−5 | 2.829 × 10^−4 | 3.097 × 10^−4
F5 | THPO | 9.674 × 10^−7 | 6.687 × 10^−5 | 8.043 × 10^−5
F6 | PSO | 1.006 × 10^2 | 1.313 × 10^2 | 0.221 × 10^2
F6 | GA | 2.421 × 10^2 | 2.922 × 10^2 | 0.179 × 10^2
F6 | WOA | 0 | 2.274 × 10^−15 | 1.608 × 10^−14
F6 | HPO | 0 | 0 | 0
F6 | THPO | 0 | 0 | 0
F7 | PSO | 7.831 | 11.818 | 1.621
F7 | GA | 18.868 | 19.448 | 0.244
F7 | WOA | 8.882 × 10^−16 | 4.441 × 10^−15 | 2.381 × 10^−15
F7 | HPO | 8.882 × 10^−16 | 8.882 × 10^−16 | 0
F7 | THPO | 8.882 × 10^−16 | 8.882 × 10^−16 | 0
F8 | PSO | 1.906 × 10 | 5.149 × 10 | 1.753 × 10
F8 | GA | 2.014 × 10^2 | 2.871 × 10^2 | 0.362 × 10^2
F8 | WOA | 0 | 1.448 × 10^−2 | 1.023 × 10^−1
F8 | HPO | 0 | 0 | 0
F8 | THPO | 0 | 0 | 0
F9 | PSO | 0.807 × 10 | 0.376 × 10^3 | 2.274 × 10^3
F9 | GA | 1.588 × 10^7 | 5.708 × 10^7 | 2.164 × 10^7
F9 | WOA | 4.760 × 10^3 | 1.219 × 10^2 | 5.205 × 10^3
F9 | HPO | 2.426 × 10^−11 | 2.628 × 10^4 | 1.295 × 10^3
F9 | THPO | 2.989 × 10^−11 | 6.421 × 10^−8 | 2.530 × 10^−7
F10 | PSO | 3.909 × 10^−4 | 4.637 × 10^−3 | 7.074 × 10^−3
F10 | GA | 6.982 × 10^−4 | 1.195 × 10^−3 | 4.862 × 10^−4
F10 | WOA | 3.119 × 10^−4 | 7.413 × 10^−4 | 9.458 × 10^−4
F10 | HPO | 3.075 × 10^−4 | 4.468 × 10^−3 | 8.037 × 10^−3
F10 | THPO | 3.075 × 10^−4 | 3.269 × 10^−4 | 1.294 × 10^−4
Table 3. Evaluation indicators under different model decompositions.

Model | RMSE | MAE | MAPE | R2
THPO-DELM | 2.716 | 1.66 | 8.01% | 79.78%
EEMD-THPO-DELM | 1.026 | 0.65 | 3.09% | 97.12%
EEMD-WPD-THPO-DELM | 0.638 | 0.41 | 1.93% | 98.89%
Table 4. Evaluation indicators under different forecasting models.

Model | RMSE | MAE | MAPE | R2
LSSVM | 1.336 | 0.70 | 3.71% | 95.11%
DELM | 0.959 | 0.67 | 3.21% | 97.10%
WOA-DELM | 0.875 | 0.61 | 2.84% | 97.89%
THPO-DELM | 0.638 | 0.41 | 1.93% | 98.89%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Yan, L.; Yan, Z.; Li, Z.; Ma, N.; Li, R.; Qin, J. Electricity Market Price Prediction Based on Quadratic Hybrid Decomposition and THPO Algorithm. Energies 2023, 16, 5098. https://0-doi-org.brum.beds.ac.uk/10.3390/en16135098

