Article

A Study on Agricultural Commodity Price Prediction Model Based on Secondary Decomposition and Long Short-Term Memory Network

College of Information and Management Science, Henan Agricultural University, Zhengzhou 450046, China
* Author to whom correspondence should be addressed.
Submission received: 23 November 2023 / Revised: 25 December 2023 / Accepted: 27 December 2023 / Published: 28 December 2023
(This article belongs to the Section Agricultural Economics, Policies and Rural Management)

Abstract

To address the large prediction errors caused by the substantial fluctuations and nonlinear features of agricultural product prices, this paper proposes a hybrid forecasting model based on variational mode decomposition (VMD), ensemble empirical mode decomposition (EEMD), and long short-term memory networks (LSTM), referred to as the VMD–EEMD–LSTM model. Initially, the original time series of agricultural product prices is decomposed using VMD to obtain a series of variational mode functions (VMFs) and a residual component of higher complexity. The residual component then undergoes a secondary decomposition using EEMD. All components are fed into an LSTM model for training to obtain predictions for each component. Finally, the component predictions are linearly combined to generate the ultimate price forecast. To validate the effectiveness of the VMD–EEMD–LSTM model, empirical analyses were conducted for one-step and multi-step forecasts using weekly price data for pork, Chinese chives, shiitake mushrooms, and cauliflower from China’s wholesale agricultural markets. The results indicate that the composite model developed in this study provides enhanced forecasting accuracy.

1. Introduction

Agricultural products are essential commodities in the daily consumption of Chinese residents, and their prices directly affect people’s economic well-being. Because agricultural prices exhibit significant volatility, nonlinearity, and non-stationarity, frequent price fluctuations complicate the decisions faced by producers, consumers, and policymakers. Constructing accurate forecasting models for agricultural product prices is therefore of significant importance for government macroeconomic regulation, balancing agricultural market supply and demand, and improving residents’ living standards.
Extensive research has been conducted both domestically and internationally on predicting agricultural product prices. Currently, the primary forecasting methods encompass traditional econometric models, intelligent models, and hybrid models.
Traditional econometric models are built on a solid theoretical foundation and offer strong interpretability. Examples of such models include autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA), grey model (GM), and generalized autoregressive conditional heteroskedasticity (GARCH) models [1,2,3,4,5,6]. Their advantage lies in having a clear internal structure, with each parameter having a well-defined interpretation. However, they have limitations when it comes to addressing nonlinear issues and are unable to capture the non-stationary, nonlinear, and multiscale characteristics of time series data [7,8].
In recent years, artificial intelligence model technology has rapidly advanced. Given that these models do not need to adhere to statistical assumptions, they are well-suited to address non-stationary and nonlinear problems. Typical models include the backpropagation neural network (BP), recurrent neural networks (RNN), artificial neural networks (ANN), and long short-term memory networks (LSTM), among other machine learning models [9,10,11,12]. Compared to traditional econometric models, intelligent models often achieve higher accuracy, but it can be challenging to find a single model that performs well in all scenarios [13]. At the same time, mainstream intelligent modeling methods struggle to describe the interdependencies between time series data [14]. They also require a large amount of data for training, and insufficient training data can lead to issues like poor fit and low accuracy in predictions [15]. Consequently, researchers have been focusing on improving and optimizing artificial intelligence optimization algorithms [16,17,18,19]. While these optimized intelligent models offer better predictive accuracy, the process can be intricate and the runtime can be lengthy.
The third category of hybrid modeling methods combines the advantages of both traditional econometric models and intelligent models. These models incorporate prior assumptions and data processing for prediction problems, thereby enhancing the model’s fitting capability. They have found widespread application in the prediction of agricultural product prices [20,21]. For instance, Ling et al. [22] developed a GM-VAR combination model to forecast the prices of various livestock products, and their predictions outperformed single forecasting models. Wu et al. [23] combined variational mode decomposition (VMD) with bald eagle optimization in an LSTM (long short-term memory) framework, creating the VMD-IBES-LSTM model for predicting seafood prices. However, this model overlooked essential information within the residuals obtained after VMD decomposition.
To further enhance predictive accuracy, a complex system theory approach that combines decomposition techniques with forecasting models has been proposed by Wang Shouyang [24]. This approach can effectively overcome the limitations of traditional econometric models and artificial intelligence models [25]. In fact, in most combination forecasting models, signal processing methods are used to decompose time series, and intelligent models are employed to predict the components after decomposition [26]. A typical sequence decomposition method is empirical mode decomposition (EMD), which has a significant advantage in handling non-stationary and nonlinear complex signals because it does not rely on predefined basis functions. For example, Pankaj et al. [27] combined the EMD method with support vector regression (SVR) to create an EMD-SVR model for predicting the wholesale price index (WPI) of chili peppers, and they found that the results were better than a standard SVR model. However, the EMD method is prone to mode mixing issues. Fang et al. [28] combined the ensemble empirical mode decomposition (EEMD) method with SVM, NN, and ARIMA models to forecast various agricultural products, and they found that the predictions were better than those of individual models. Nevertheless, the EEMD method struggles to decompose components with similar frequencies and faces issues related to envelopes and undershoots, which limits its decomposition results [29]. As an improved decomposition technique, variational mode decomposition (VMD) can adaptively decompose the effective components corresponding to each center frequency, leading to higher decomposition accuracy. VMD is also more effective for feature selection in prediction models [30].
Given the limitations of traditional econometric models in addressing nonlinear problems and the inability of mainstream intelligent models to capture the interdependencies among time series data, we opt for the LSTM model for forecasting. This choice is motivated by the LSTM model’s effectiveness in capturing long-term trends and dependencies within time series data while adapting to different time scales. Considering the issues of mode mixing in EMD decomposition and the inability of EEMD to separate components with similar frequencies, we select the VMD method, which offers higher decomposition precision, to perform a single decomposition on the original price series.
From previous studies, it is evident that the predictive performance of composite models surpasses that of individual intelligent models. However, some composite models utilizing decomposition techniques overlook crucial information in the residual terms after decomposition, so features of the sequence are insufficiently extracted and model accuracy does not reach its optimum. The contribution of this paper is, to the best of our knowledge, the first application of secondary decomposition to agricultural commodity price forecasting: this study proposes a VMD–EEMD secondary decomposition method, using the components obtained after the two decompositions as input to an LSTM, and establishes a VMD–EEMD–LSTM model for predicting agricultural commodity prices.
In addition, the VMD–EEMD–LSTM model proposed in this study represents a hybrid approach that combines frequency domain decomposition with the LSTM model. Unlike traditional hybrid models that involve restructuring the econometric model into a machine learning framework by integrating different types of models, the VMD–EEMD–LSTM model takes a unique approach. Through two decomposition steps, it reduces data complexity and introduces the LSTM model to effectively capture the multi-level structure of time series data. This integration enhances the model’s ability to learn and predict time series, providing greater flexibility and adaptability compared to conventional hybrid models.
The main work of this paper includes:
(i)
In response to the non-stationary and high complexity characteristics of the agricultural product price sequences used in the experiments in this paper, VMD is employed to decompose the original complex sequences into multiple sub-sequences based on their central frequencies. Additionally, a secondary decomposition is performed on the residual components containing complex information.
(ii)
After decomposing the original sequence with VMD, different variational mode functions (VMFs) and residual components (Res) are obtained. The residual components are further decomposed using EEMD, and then all these sequences are input into the LSTM model for prediction.
(iii)
To validate the predictive performance and accuracy of the proposed model, various metrics such as mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) are chosen for evaluation. The baseline models and the method proposed in this paper are used to perform one-step and multi-step predictions for weekly agricultural product prices.

2. Materials and Methods

This section mainly describes the construction of the VMD–EEMD–LSTM hybrid model, including the VMD algorithm, EEMD algorithm, LSTM model, and the overall modeling steps.

2.1. Variational Mode Decomposition

Variational mode decomposition (VMD) is an adaptive, completely non-recursive signal decomposition method proposed by Dragomiretskiy and Zosso [31]. Unlike empirical mode decomposition, VMD can determine the number of modal decompositions for a given signal sequence according to the actual situation. It decomposes the original input signal into $k$ variational mode functions (VMFs), each with finite bandwidth around a central frequency. By minimizing the sum of the bandwidth estimates of all modes, it obtains the corresponding modal component signals $u_k$, where the variations of each modal signal are centered around its central frequency $\omega_k$. The VMD variational constraint model is as follows:
$$\min_{\{u_k\},\{\omega_k\}} \left\{ \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 \right\}, \quad \text{s.t.} \quad \sum_{k} u_k = f(t). \tag{1}$$
In the equation, $\{u_k\} := \{u_1, u_2, \ldots, u_K\}$ represents the variational mode functions (VMFs) obtained after decomposition, and $\{\omega_k\} := \{\omega_1, \omega_2, \ldots, \omega_K\}$ represents the central frequency corresponding to each VMF. $\delta(t)$ is the Dirac function, $k$ indexes the modes, $*$ is the convolution operator, $f(t)$ is the original signal, and $\partial_t$ denotes the gradient with respect to $t$. $\left(\delta(t) + \frac{j}{\pi t}\right) * u_k(t)$ is the analytic signal of $u_k(t)$ obtained through the Hilbert transform. The exponential term $e^{-j\omega_k t}$ adjusts the estimate of each $\omega_k$, shifting the spectrum of each mode to baseband. To solve the constrained optimization problem described above, it is transformed into an unconstrained problem by introducing a quadratic penalty factor $\alpha$ and a Lagrange multiplier $\lambda(t)$, yielding the expression of the unconstrained variational problem:
$$L(\{u_k\},\{\omega_k\},\lambda) = \alpha \sum_{k} \left\| \partial_t \left[ \left( \delta(t) + \frac{j}{\pi t} \right) * u_k(t) \right] e^{-j\omega_k t} \right\|_2^2 + \left\| f(t) - \sum_{k} u_k(t) \right\|_2^2 + \left\langle \lambda(t),\, f(t) - \sum_{k} u_k(t) \right\rangle. \tag{2}$$
By finding the saddle point of this Lagrangian function, we obtain the update expressions for the VMF $u_k$ and the center frequency $\omega_k$ in the unconstrained variational problem:
$$\hat{u}_k^{n+1}(\omega) = \frac{\hat{f}(\omega) - \sum_{i \neq k} \hat{u}_i(\omega) + \dfrac{\hat{\lambda}(\omega)}{2}}{1 + 2\alpha(\omega - \omega_k)^2}, \tag{3}$$
$$\hat{\omega}_k^{n+1} = \frac{\int_0^{\infty} \omega \left| \hat{u}_k^{n+1}(\omega) \right|^2 \, \mathrm{d}\omega}{\int_0^{\infty} \left| \hat{u}_k^{n+1}(\omega) \right|^2 \, \mathrm{d}\omega}. \tag{4}$$
The steps of the VMD decomposition method are as follows:
(1)
Initialize $\{\hat{u}_k^1\}$, $\{\omega_k^1\}$, $\hat{\lambda}^1$, and $n$, and set an appropriate number of VMF components $k$;
(2)
Update $u_k$ and $\omega_k$ based on Equations (3) and (4);
(3)
Update the value of $\lambda$:
$$\hat{\lambda}^{n+1}(\omega) = \hat{\lambda}^{n}(\omega) + \tau \left[ \hat{f}(\omega) - \sum_{k} \hat{u}_k^{n+1}(\omega) \right]. \tag{5}$$
(4)
Given the precision criterion $\varepsilon > 0$, if the following is satisfied:
$$\sum_{k} \frac{\left\| \hat{u}_k^{n+1} - \hat{u}_k^{n} \right\|_2^2}{\left\| \hat{u}_k^{n} \right\|_2^2} < \varepsilon, \tag{6}$$
Then, stop the iteration; otherwise, return to step (2).
In the equations, $\hat{u}_k^n(\omega)$, $\hat{f}(\omega)$, and $\hat{\lambda}^n(\omega)$ denote the Fourier transforms of $u_k^n(t)$, $f(t)$, and $\lambda^n(t)$, respectively.
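The iterative procedure above can be traced in code. The following is a minimal, illustrative Python sketch of the frequency-domain updates in Equations (3)–(5) (the Wiener-filter mode update, the center-of-gravity frequency update, and the dual ascent); it assumes a zero-mean real input signal and is not the reference VMD implementation used in this study:

```python
import numpy as np

def vmd(f, K, alpha=2000.0, tau=0.0, tol=1e-7, max_iter=500):
    """Minimal VMD sketch: decompose a zero-mean real signal into K modes."""
    f = np.asarray(f, dtype=float)
    T = len(f)
    freqs = np.arange(T) / T - 0.5                  # centered frequency axis
    f_hat = np.fft.fftshift(np.fft.fft(f))
    f_hat[freqs < 0] = 0                            # keep the analytic (positive) half
    u_hat = np.zeros((K, T), dtype=complex)         # mode spectra
    omega = 0.5 * (np.arange(K) + 0.5) / K          # spread initial center frequencies
    lam = np.zeros(T, dtype=complex)                # Lagrange multiplier spectrum
    for _ in range(max_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            others = u_hat.sum(axis=0) - u_hat[k]
            # Eq. (3): Wiener-filter mode update centered at omega[k]
            u_hat[k] = (f_hat - others + lam / 2) / (1 + 2 * alpha * (freqs - omega[k]) ** 2)
            # Eq. (4): center of gravity of the mode's power spectrum
            power = np.abs(u_hat[k]) ** 2
            omega[k] = np.sum(freqs * power) / (np.sum(power) + 1e-16)
        # Eq. (5): dual ascent (disabled when tau = 0)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))
        diff = sum(np.sum(np.abs(u_hat[k] - u_prev[k]) ** 2)
                   / (np.sum(np.abs(u_prev[k]) ** 2) + 1e-16) for k in range(K))
        if diff < tol:                              # convergence criterion, Eq. (6)
            break
    # back to the time domain; doubling recovers the real signal from its analytic half
    u = 2 * np.real(np.fft.ifft(np.fft.ifftshift(u_hat, axes=-1), axis=-1))
    return u, np.sort(omega)

# demo: two pure tones are separated and their center frequencies recovered
T = 1024
n = np.arange(T)
sig = np.cos(2 * np.pi * 10 * n / T) + 0.5 * np.cos(2 * np.pi * 100 * n / T)
u, om = vmd(sig, K=2)
```

For a clean two-tone signal like this demo, the two recovered center frequencies converge close to the true normalized frequencies, and the modes sum back to the original signal.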

2.2. Ensemble Empirical Mode Decomposition

The empirical mode decomposition (EMD) method can handle non-stationary and nonlinear signals, but it is prone to mode mixing, where the resulting intrinsic mode functions (IMFs) contain components of disparate scales. To mitigate this issue, Wu and Huang [32] introduced the ensemble empirical mode decomposition (EEMD) technique, which injects low-amplitude white noise sequences into the signal to assist the analysis. The decomposition steps are as follows:
(1)
Add a white noise sequence $n_i(t)$ of the same length to the initial signal $x(t)$, creating a new signal:
$$x_i(t) = x(t) + n_i(t). \tag{7}$$
(2)
Perform EMD decomposition on the $x_i(t)$ obtained in step (1) to obtain the IMF mode components:
$$x_i(t) = \sum_{j=1}^{J} c_{i,j}(t) + r_i(t). \tag{8}$$
(3)
Repeat steps (1) and (2) $N$ times, adding a different white noise sequence each time, to obtain the corresponding IMF components:
$$c_{1,j}(t),\, c_{2,j}(t),\, \ldots,\, c_{N,j}(t), \quad j = 1, 2, \ldots, J. \tag{9}$$
(4)
Since the statistical average of uncorrelated sequences is zero, the corresponding IMF components are averaged to obtain the final decomposition result of EEMD:
$$c_j(t) = \frac{1}{N} \sum_{i=1}^{N} c_{i,j}(t). \tag{10}$$
In the equations, $x_i(t)$ represents the sequence after adding white noise for the $i$-th time; $c_{i,j}(t)$ is the $j$-th IMF component obtained after the $i$-th addition of white noise; $r_i(t)$ is the residual function; and $J$ is the number of IMFs.
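The ensemble loop of steps (1)–(4) can be sketched as follows. This is an illustrative Python sketch only: linear-interpolation envelopes and a fixed sifting count stand in for the cubic-spline envelopes and sifting stopping criteria of a full EMD implementation, and IMF counts are aligned across trials by truncating to the shortest trial:

```python
import numpy as np

def _envelope_mean(x):
    """Mean of upper and lower envelopes (linear stand-in); None if too few extrema."""
    n = np.arange(len(x))
    maxi = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
    mini = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
    if len(maxi) < 2 or len(mini) < 2:
        return None
    upper = np.interp(n, np.r_[0, maxi, len(x) - 1], np.r_[x[0], x[maxi], x[-1]])
    lower = np.interp(n, np.r_[0, mini, len(x) - 1], np.r_[x[0], x[mini], x[-1]])
    return (upper + lower) / 2

def emd(x, max_imfs=10, sift_iters=12):
    """Plain EMD with a fixed number of sifting iterations per IMF."""
    imfs, r = [], np.asarray(x, dtype=float).copy()
    while len(imfs) < max_imfs and _envelope_mean(r) is not None:
        h = r.copy()
        for _ in range(sift_iters):                 # sifting: subtract envelope mean
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m
        imfs.append(h)
        r = r - h                                   # update the residual
    return imfs, r                                  # r is the final residual trend

def eemd(x, n_trials=50, noise_std=0.2, seed=0):
    """EEMD steps (1)-(4): add white noise, EMD each trial, average matching IMFs."""
    rng = np.random.default_rng(seed)
    sigma = noise_std * np.std(x)
    trials = [emd(x + sigma * rng.standard_normal(len(x)))[0] for _ in range(n_trials)]
    J = min(len(t) for t in trials)                 # align IMF counts across trials
    return [np.mean([t[j] for t in trials], axis=0) for j in range(J)]

# demo on a two-tone signal
t = np.arange(512) / 512
x = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 64 * t)
imfs, res = emd(x)
modes = eemd(x, n_trials=10)
```

By construction, the plain EMD IMFs plus the residual reconstruct the input exactly; the noise averaging in `eemd` implements Equation (10).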

2.3. Long Short-Term Memory

Long short-term memory (LSTM) is an improved type of recurrent neural network that introduces the concept of a “cell state” and “gates” to effectively address the vanishing or exploding gradient problem in traditional RNNs. LSTM is designed as a sequential structure primarily consisting of a Forget Gate, an Input Gate, and an Output Gate. The LSTM algorithm uses these gate structures to maintain and update the memory cell state.
The information forgetting of the memory cell is determined by the Forget Gate, and the specific formula is as follows:
$$f_t = \sigma\left( W_f \cdot [y_{t-1}, x_t] + b_f \right). \tag{11}$$
The Input Gate’s Sigmoid layer determines which information is updated; the candidate state and the cell-state update are given by the following formulas:
$$i_t = \sigma\left( W_i \cdot [y_{t-1}, x_t] + b_i \right), \quad \tilde{C}_t = \tanh\left( W_c \cdot [y_{t-1}, x_t] + b_c \right), \quad C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t. \tag{12}$$
The Output Gate determines the value of the next hidden state, and the specific formula is as follows:
$$o_t = \sigma\left( W_o \cdot [y_{t-1}, x_t] + b_o \right), \quad y_t = o_t \odot \tanh\left( C_t \right). \tag{13}$$
In the equations provided, $\sigma$ represents the Sigmoid function; $\tanh$ represents the hyperbolic tangent function; $W_i$, $W_c$, $W_f$, and $W_o$ are weight matrices; $b_i$, $b_c$, $b_f$, and $b_o$ are bias vectors; $y_{t-1}$ and $y_t$ are, respectively, the input from the previous unit and the output of this unit; $C_{t-1}$ and $C_t$ are the cell states of two adjacent units; and $\odot$ denotes element-wise multiplication.
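A single LSTM time step with the three gates can be traced in a short NumPy sketch. The dimensions and random weights below are illustrative only; a real model would rely on a framework implementation (e.g., Keras or PyTorch) rather than a hand-rolled cell:

```python
import numpy as np

def lstm_step(x_t, y_prev, c_prev, W, b):
    """One LSTM time step; W maps the concatenated [y_{t-1}, x_t] to each gate."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    z = np.concatenate([y_prev, x_t])               # [y_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])              # Forget Gate
    i_t = sigmoid(W["i"] @ z + b["i"])              # Input Gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])          # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde              # cell-state update
    o_t = sigmoid(W["o"] @ z + b["o"])              # Output Gate
    y_t = o_t * np.tanh(c_t)                        # hidden output
    return y_t, c_t

# illustrative dimensions: 3 inputs, 4 hidden units
rng = np.random.default_rng(0)
H, D = 4, 3
W = {g: rng.standard_normal((H, H + D)) * 0.1 for g in "fico"}
b = {g: np.zeros(H) for g in "fico"}
y, c = lstm_step(rng.standard_normal(D), np.zeros(H), np.zeros(H), W, b)
```

Because the output is gated by a Sigmoid and squashed by `tanh`, each hidden activation is strictly bounded in magnitude by 1.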

2.4. VMD–EEMD–LSTM Prediction Model

The VMD decomposition can break down complex signals into several lower-complexity modal components, with each mode representing a frequency component within the data. Through VMD decomposition, the prediction model can better capture the local and frequency-domain features of the data. EEMD is an enhanced empirical mode decomposition method that better handles nonlinear and non-stationary signals through multiple trials and the addition of random noise. EEMD contributes to capturing the nonlinear dynamic characteristics in the data, enhancing the model’s adaptability to complex patterns. Therefore, using EEMD to decompose the residual term Res after VMD decomposition helps reduce the complexity of the residual components. The LSTM model is employed to process and learn from time series data, effectively capturing the long-term trends and dependencies in the time series while adapting to different time scales. The dual decomposition with VMD and EEMD effectively reduces the complexity of the data, enabling the LSTM model to better learn the features of the time series and thereby improving the predictive accuracy of the VMD–EEMD–LSTM model. The specific modeling steps are as follows:
(i)
By applying the VMD decomposition algorithm to the original agricultural product price time series, lower-complexity VMF components and a residual term Res (obtained by subtracting the sum of the VMF components from the original sequence) are obtained.
(ii)
Each VMF component obtained in step (i) is individually fed into an LSTM model for training, resulting in predictions for each VMF component.
(iii)
Using EEMD for the secondary decomposition of the Res obtained in step (i), each IMF component derived from this decomposition is individually input into an LSTM for prediction. The predicted values for each IMF component are then summed to obtain the prediction result for the residual term Res.
(iv)
Combine the prediction results for each VMF component and the residual term Res to obtain the final prediction result for the original sequence.
The modeling process is shown in Figure 1.
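Steps (i)–(iv) amount to an additive decompose-predict-recombine pipeline. The sketch below shows only that skeleton: `decompose`, `decompose_residual`, and `predict` are placeholders for VMD, EEMD, and a trained LSTM, and the stand-ins in the usage example (a moving-average split and a last-value persistence predictor) are purely illustrative:

```python
import numpy as np

def secondary_decomposition_forecast(series, decompose, decompose_residual, predict):
    """Steps (i)-(iv): decompose, re-decompose the residual, predict parts, sum."""
    vmfs = decompose(series)                        # step (i): VMF-like components
    res = series - np.sum(vmfs, axis=0)             # residual term Res
    imfs = decompose_residual(res)                  # step (iii): re-decompose Res
    components = list(vmfs) + list(imfs)
    preds = [predict(c) for c in components]        # steps (ii)/(iii): one model per part
    return np.sum(preds)                            # step (iv): linear recombination

# toy stand-ins: a crude low-frequency split and a persistence predictor
series = np.sin(np.linspace(0, 20, 200)) + 0.1 * np.linspace(0, 1, 200)
smooth = np.convolve(series, np.ones(5) / 5, mode="same")
forecast = secondary_decomposition_forecast(
    series,
    decompose=lambda s: [smooth],                   # "low-frequency component"
    decompose_residual=lambda r: [r],               # identity in this toy example
    predict=lambda c: c[-1],                        # naive last-value forecast
)
```

Because the decomposition is additive and the toy predictor is linear in the components, the recombined forecast here equals the persistence forecast on the original series, which illustrates that the recombination step loses nothing by construction.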

2.5. Evaluation Metrics

In order to evaluate the predictive performance of the VMD–EEMD–LSTM model for agricultural product prices, this paper employs three evaluation metrics: root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). These three evaluation metrics are used to assess the errors between the model’s predictions and the actual values, and a smaller value for these metrics indicates higher predictive accuracy. The specific formulas for the three evaluation metrics are as follows:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}, \quad \mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|, \quad \mathrm{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|. \tag{14}$$
In the formulas provided, $y_i$ and $\hat{y}_i$ represent the actual value and predicted value of the $i$-th sample, respectively, and $n$ represents the total number of test samples.
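The three metrics translate directly into code; a small sketch (the sample values in the demo are made up for illustration):

```python
import numpy as np

def rmse(y, y_pred):
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y - y_pred) ** 2))

def mae(y, y_pred):
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return np.mean(np.abs(y - y_pred))

def mape(y, y_pred):
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y - y_pred) / y))        # assumes no zero actual values
```

Note that MAPE is undefined when an actual value is zero, which is not an issue for strictly positive price series like those studied here.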

3. Results

In this section, we conducted the following experiments. Firstly, we selected the optimal $k$ values for VMD decomposition and performed secondary EEMD decomposition on the residuals for pork, Chinese chives, shiitake mushrooms, and cauliflower. Secondly, we determined the appropriate time step for prediction in the VMD–EEMD–LSTM model. Finally, we obtained the ultimate prediction results and evaluated the model performance using metrics such as MAE, MAPE, and RMSE, conducting comparative analysis with other benchmark models.

3.1. Data Source

The empirical analysis in this paper is based on the weekly average pork price. The data are sourced from the National Agricultural Products Wholesale Market of the Chinese Ministry of Agriculture and Rural Affairs. The sample covers the period from 1 March 2018 to 28 February 2023, with a total of 261 data points. The time step is set at five (the choice of five is justified in Section 3.3), which means historical data from the previous five weeks are used to predict the price for the sixth week. The first 183 data points are designated as the training sample, and the remaining 73 data points are used as the test sample, giving a training-to-test ratio of approximately 7:3. The original sequence of pork prices is depicted in Figure 2, and the standard statistical results for the sample data are presented in Table 1. This empirical study was conducted using Matlab R2021a.
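The windowing and split described above can be sketched as follows (the study itself used Matlab; this Python sketch uses placeholder values, not the actual price data):

```python
import numpy as np

def make_supervised(series, time_step=5):
    """Turn a price series into (X, y) pairs: five past weeks -> next week."""
    X = np.array([series[i:i + time_step] for i in range(len(series) - time_step)])
    y = np.array(series[time_step:])
    return X, y

prices = np.arange(261, dtype=float)                # placeholder for 261 weekly prices
X, y = make_supervised(prices, time_step=5)         # 256 supervised samples
X_train, X_test = X[:183], X[183:]                  # first 183 samples for training
y_train, y_test = y[:183], y[183:]                  # remaining 73 for testing
```

With 261 observations and a time step of five, 256 supervised samples remain, which split into 183 training and 73 test samples as described above.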

3.2. The Process of Secondary Decomposition

3.2.1. VMD Primary Decomposition

Before performing VMD decomposition on the raw pork price series, it is necessary to determine the appropriate number of mode components, denoted as $k$, as an unsuitable choice can affect the effectiveness of the VMD decomposition and subsequently the overall predictive performance of the model. If the value of $k$ is too large, the central frequencies of the components may overlap, resulting in over-decomposition. If $k$ is too small, the complexity of the original sequence cannot be sufficiently reduced. Therefore, following the central frequency method [33], we determine the optimal value of $k$ by pre-decomposing with modal numbers $k$ from 3 to 8 and obtaining the central frequency of each modal component. When the central frequencies of adjacent modes are close, this is taken as an indication of over-decomposition of the signal, as shown in Table 2. The central frequencies of the last layer of mode components are compared, and when these frequencies become relatively stable, the optimal $k$ value can be identified. The VMD parameters are set as follows: $\alpha = 2000$, $\varepsilon = 10^{-7}$.
From Table 2, it can be observed that when $k > 6$, the central frequencies of the last layer of mode components tend to stabilize. Therefore, the optimal $k$ value is chosen to be 7. Subsequently, the raw pork price series is decomposed using both VMD and EEMD, as illustrated in Figure 3. It is evident that the VMD decomposition results in smoother sequences, effectively reducing the complexity of the original series.
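The stabilization check on the last-layer central frequencies can be automated with a simple heuristic. The frequency table and the `rel_tol` threshold below are made up for illustration; they are not values from this study:

```python
def choose_k(last_layer_freq, rel_tol=0.02):
    """Pick the first k whose highest center frequency has stabilized
    (changed by less than rel_tol relative to the previous k)."""
    ks = sorted(last_layer_freq)
    for k_prev, k in zip(ks, ks[1:]):
        change = abs(last_layer_freq[k] - last_layer_freq[k_prev])
        if change <= rel_tol * last_layer_freq[k_prev]:
            return k
    return ks[-1]

# made-up pre-decomposition results: k -> top center frequency of the last mode
freqs = {3: 0.30, 4: 0.38, 5: 0.42, 6: 0.45, 7: 0.452, 8: 0.453}
best_k = choose_k(freqs)
```

In this illustrative table, the top frequency barely moves between k = 6 and k = 7, so the heuristic returns 7, mirroring the reasoning applied to Table 2.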

3.2.2. EEMD Secondary Decomposition

From Figure 3, it can be observed that the residual component “Res” after VMD decomposition still exhibits significant fluctuations, indicating a high level of data complexity, with a calculated sample entropy of 0.8914. To improve the overall predictive performance of the model, EEMD is applied to decompose the residual component “Res”. The white noise standard deviation is set to 0.4, and the number of ensemble trials is configured as 100. The residual component “Res” after VMD decomposition is decomposed into 7 IMF (intrinsic mode function) components and 1 residual trend component, as shown in Figure 4.

3.3. Experiments with Different Time Steps

Considering the distinct volatility patterns of various agricultural products, setting an appropriate time step for prediction can help reduce forecasting errors, i.e., using price data from previous weeks to predict prices for the following week. In order to validate the optimal time step for price prediction using the LSTM model for pork, Chinese leeks, shiitake mushrooms, and cauliflower, experiments with different time steps were conducted, keeping the parameters consistent with Section 3.4.1. The experimental results are shown in Table 3. The results revealed that for pork, Chinese leeks, shiitake mushrooms, and cauliflower, the minimum prediction errors were observed at a time step of 5. Therefore, in practical forecasting, the optimal time step for each product was chosen for prediction.

3.4. Price Prediction Based on VMD–EEMD–LSTM

3.4.1. LSTM Hyperparameter Configuration

Training epochs, learning rate, and other factors are important parameters affecting LSTM model accuracy. We experimented with different settings for the LSTM network, including training epochs of 100, 200, and 400, learning rates of 0.01, 0.001, and 0.005, and batch sizes of 32, 64, and 128. After multiple comparisons of validation set errors, we found that with 200 training epochs, a learning rate of 0.005, a batch size of 32, 70 LSTM hidden units, and a learning rate decay factor of 0.2, the model achieved the lowest loss.

3.4.2. Prediction Results Comparison and Analysis

In order to better demonstrate the predictive performance of the VMD–EEMD–LSTM model, this study, while maintaining consistency in the experimental process, concurrently employed RF (random forest), ELM (extreme learning machine), LSTM, EEMD–LSTM, and VMD–LSTM models for predicting pork prices. Different prediction horizons (1, 2, 3) were set for comparison. The error results for different prediction horizons are presented in Table 4, and the predictions of each model at various prediction horizons are illustrated in Figure 5, Figure 6 and Figure 7.
Based on Table 4 and Figure 5, Figure 6 and Figure 7, it can be observed that within a single model, LSTM outperforms RF and ELM in terms of prediction performance across different prediction horizons. It achieves lower MAE, MAPE, and RMSE values. Taking a prediction horizon of 1 as an example, when compared to RF, LSTM shows a reduction of 30.63% in MAE, 30.72% in MAPE, and 25.94% in RMSE. When compared to ELM, LSTM results in a reduction of 25.10% in MAE, 25.16% in MAPE, and 24.01% in RMSE. This result indicates that within a single model, the LSTM model exhibits superior predictive performance.
Although the LSTM model exhibits better predictive performance than other individual models, when compared to models that incorporate decomposition techniques, there is still a significant gap in the evaluation metric values. This is because decomposition methods can reduce the complexity of the time series, thereby enhancing the predictive accuracy of the models. When the prediction horizon is 1, the proposed VMD–EEMD–LSTM model, compared to RF, shows a reduction of 62.76%, 61.84%, and 64.23% in MAE, MAPE, and RMSE, respectively. In comparison to ELM, the reductions are 59.80%, 58.77%, and 63.30%, while compared to LSTM, they are 46.32%, 44.92%, and 51.70%. When compared to EEMD–LSTM, the reductions are 39.95%, 43.31%, and 38.34%, and in comparison to VMD–LSTM, they are 15.59%, 15.58%, and 16.19%.
For a prediction horizon of 2, the VMD–EEMD–LSTM model, in comparison to RF, exhibits reductions of 70.71%, 69.75%, and 71.17% in MAE, MAPE, and RMSE, respectively. When compared to ELM, the reductions are 65.48%, 64.16%, and 69.92%, and in comparison to LSTM, they are 63.21%, 62.02%, and 66.61%. Compared to EEMD–LSTM, the reductions are 49.51%, 52.78%, and 50.36%, and compared to VMD–LSTM, they are 13.74%, 13.26%, and 14.78%.
At a prediction horizon of 3, the VMD–EEMD–LSTM model, compared to RF, shows reductions of 75.12%, 74.12%, and 75.10% in MAE, MAPE, and RMSE, respectively. When compared to ELM, the reductions are 72.38%, 72.04%, and 73.53%, and in comparison to LSTM, they are 66.81%, 66.35%, and 68.98%. When compared to EEMD–LSTM, the reductions are 57.17%, 60.82%, and 57.72%, and compared to VMD–LSTM, they are 9.53%, 8.63%, and 9.34%.
This indicates that the proposed VMD–EEMD–LSTM model exhibits excellent predictive performance, both in single-step and multi-step forecasting. The trend predictions align well with the original data, demonstrating a good fit.
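The percentage reductions quoted in the comparisons above follow the usual relative-reduction formula; a trivial helper with illustrative numbers (not figures from this study):

```python
def reduction_pct(baseline_err, model_err):
    """Percentage reduction of an error metric relative to a baseline model."""
    return 100.0 * (baseline_err - model_err) / baseline_err

# e.g., if a baseline MAE is 0.80 and the hybrid model's MAE is 0.40,
# the hybrid model reduces MAE by 50%
halved = reduction_pct(0.8, 0.4)
```

The same formula applies to MAE, MAPE, and RMSE alike, which is how the reduction rates in Tables 4 and 6–8 are obtained from the raw error values.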

3.5. Generalizability Experiment of the VMD–EEMD–LSTM Model

To validate the predictive applicability of the model proposed in this study on price sequences of other agricultural products, price data for Chinese chives, shiitake mushrooms, and cauliflower were selected for prediction validation. The sample time range for all datasets is from 26 January 2020 to 25 December 2022, and the data sources are the same as described in Section 3.1, with model parameters consistent with Section 3.4.1. The conventional statistical results for each dataset are displayed in Table 1, and the original price sequences for Chinese chives, shiitake mushrooms, and cauliflower are shown in Figure 8. The optimal $k$ values for Chinese chives, shiitake mushrooms, and cauliflower are selected as discussed in Section 3.2.1 and are shown in Table 5. According to Table 5, the optimal values of $k$ for VMD decomposition are four for the Chinese chive price sequence, six for the shiitake mushroom price sequence, and six for the cauliflower price sequence. The VMD decomposition sequences for Chinese chives, shiitake mushrooms, and cauliflower are shown in Figure 9a,c,e, respectively. The secondary decomposition of their residuals (Res) using EEMD is depicted in Figure 9b,d,f. The error results for different prediction horizons are listed in Table 6, Table 7 and Table 8. The prediction results for the various models (using a prediction horizon of 1 as an example) can be found in Figure 10, Figure 11 and Figure 12.
Combining Table 6, Table 7 and Table 8, in the price prediction of Chinese chives, shiitake mushrooms, and cauliflower, when the forecasting step is 1, the model proposed in this study, VMD–EEMD–LSTM, compared to RF, demonstrates improvements in MAE by 63.2%, 79.12%, and 60.86%, and in RMSE by 61.57%, 81.53%, and 61.39%, respectively. Compared to ELM, the improvements in MAE are 59.18%, 77.11%, and 57.17%, and in RMSE are 58.70%, 77.97%, and 59.93% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. In comparison to LSTM, the improvements in MAE are 55.01%, 66.52%, and 52.91%, and in RMSE are 54.73%, 68.87%, and 54.63% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Compared to EEMD–LSTM, the improvements in MAE are 34.88%, 61.84%, and 41.16%, and in RMSE are 38.02%, 67.25%, and 34.43% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Lastly, compared to VMD–LSTM, the improvements in MAE are 17.96%, 19.16%, and 4.26%, and in RMSE are 17.42%, 20.35%, and 3.62% for Chinese chives, shiitake mushrooms, and cauliflower, respectively.
When the forecasting step is 2, the model proposed in this study, VMD–EEMD–LSTM, compared to RF, exhibits significant improvements in MAE and RMSE for Chinese chives, shiitake mushrooms, and cauliflower. The improvements in MAE are 63.04%, 79.35%, and 63.79%, and in RMSE are 62.96%, 80.67%, and 62.54%, respectively. Compared to ELM, the improvements in MAE are 59.46%, 76.26%, and 63.06%, and in RMSE are 59.06%, 77.71%, and 61.96% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Relative to LSTM, the improvements in MAE are 58.12%, 75.17%, and 61.47%, and in RMSE are 56.48%, 77.04%, and 60.98% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Compared to EEMD–LSTM, the improvements in MAE are 21.99%, 63.85%, and 42.81%, and in RMSE are 19.27%, 70.63%, and 37.14% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Lastly, compared to VMD–LSTM, the improvements in MAE are 16.53%, 16.71%, and 2.98%, and in RMSE are 16.56%, 21.22%, and 2.49% for Chinese chives, shiitake mushrooms, and cauliflower, respectively.
When the forecasting step is 3, the model proposed in this study, VMD–EEMD–LSTM, compared to RF, demonstrates substantial improvements in MAE and RMSE for Chinese chives, shiitake mushrooms, and cauliflower. The improvements in MAE are 62.92%, 78.00%, and 59.21%, and in RMSE are 62.67%, 78.12%, and 56.72%, respectively. Compared to ELM, the improvements in MAE are 60.08%, 76.87%, and 57.11%, and in RMSE are 58.73%, 77.82%, and 55.57% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Relative to LSTM, the improvements in MAE are 57.66%, 74.20%, and 57.03%, and in RMSE are 57.16%, 75.82%, and 54.91% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Compared to EEMD–LSTM, the improvements in MAE are 25.74%, 63.58%, and 43.41%, and in RMSE are 22.19%, 70.66%, and 37.18% for Chinese chives, shiitake mushrooms, and cauliflower, respectively. Lastly, compared to VMD–LSTM, the improvements in MAE are 10.62%, 8.22%, and 1.76%, and in RMSE are 13.70%, 14.35%, and 1.80% for Chinese chives, shiitake mushrooms, and cauliflower, respectively.
By calculating the reduction rates of MAE, MAPE, and RMSE between the proposed model and the comparative models, this study further verifies the superiority of the proposed model over the baselines. The results indicate that the original agricultural price series becomes more predictable after a single decomposition with VMD or EEMD, and that the secondary EEMD decomposition of the VMD residual component further reduces data complexity and brings the predictions closer to the actual values. The same reduction-rate analysis in the multi-step setting shows that the VMD–EEMD–LSTM model maintains good performance at longer horizons, reaffirming its effectiveness and superiority.
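The reduction rates quoted throughout this section follow directly from the error tables. A minimal sketch of the calculation (plain Python; the sample values are the RF and VMD–EEMD–LSTM MAEs for shiitake mushrooms at a forecasting step of 2, taken from Table 7):

```python
def reduction_rate(baseline_error, proposed_error):
    """Percentage reduction of an error metric relative to a baseline model."""
    return (baseline_error - proposed_error) / baseline_error * 100

# RF vs. VMD–EEMD–LSTM, shiitake mushrooms, 2-step-ahead MAE (Table 7)
mae_rf = 0.8739
mae_hybrid = 0.1805
print(f"{reduction_rate(mae_rf, mae_hybrid):.2f}%")  # prints 79.35%
```

The same function applied to each baseline/metric pair reproduces every percentage reported above.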

3.6. Comparative Analysis with Models from Related Studies

Mohanty et al. [34] employed a decision tree regressor (DTR) for crop yield and price forecasting. Compared with the DTR model, the VMD–EEMD–LSTM model more effectively handles signal components of different frequencies and amplitudes, and adapts better to non-stationary, multi-scale time series. The VMD and EEMD decompositions help capture the structural information within the series, and the LSTM component efficiently learns long-term dependencies, yielding strong performance in time series prediction tasks.
The DTR model, by contrast, is well suited to data with relatively simple nonlinear relationships and offers good interpretability. The VMD–EEMD–LSTM model proposed in this study is therefore the better choice for predictive problems involving complex data sequences.
Zhang et al. [35] employed the VMD–ELM method to forecast the prices of rice, wheat, and soybean meal, achieving favorable results, but overlooked the complexity that remains in the residual component after VMD decomposition. Additional processing, such as a secondary decomposition of the residual, can enhance predictive accuracy. In contrast, the model proposed in this study exploits the information embedded in the VMD residual: its secondary decomposition significantly reduces data complexity and improves the predictive results. A comparison of results across prediction steps further demonstrates the forecasting advantage of the VMD–EEMD–LSTM model.
Comparative analysis shows that the proposed VMD–EEMD–LSTM model adapts well to non-stationary, multi-scale time series data. The secondary VMD–EEMD decomposition effectively reduces the complexity of the data, and combining this enhanced feature extraction with the LSTM's ability to capture long-term dependencies yields a significant improvement in the hybrid model's predictive accuracy.
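The decompose–predict–recombine structure discussed in this comparison can be sketched in a few lines. This is a toy illustration rather than the authors' implementation: `split_band` and `naive_forecast` are hypothetical stand-ins for VMD/EEMD and the trained per-component LSTMs, kept deliberately simple so the additive recombination step is runnable:

```python
def split_band(series, window=4):
    """Toy additive split into a smooth trend and a residual.
    Stand-in for VMD/EEMD, which likewise produce components that
    sum back to the original series."""
    trend = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        trend.append(sum(chunk) / len(chunk))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def naive_forecast(component):
    """Stand-in for a trained per-component LSTM: last-value persistence."""
    return component[-1]

def pipeline_forecast(prices):
    trend, residual = split_band(prices)  # first decomposition (VMD role)
    res_parts = split_band(residual)      # secondary decomposition of the residual (EEMD role)
    components = [trend, *res_parts]
    # Final forecast = linear (additive) combination of the component forecasts.
    return sum(naive_forecast(c) for c in components)

weekly_prices = [29.3, 30.1, 31.0, 30.6, 29.8, 30.4]  # illustrative values
print(round(pipeline_forecast(weekly_prices), 4))
```

Because each split here is exactly additive, persistence forecasts for all components sum back to the last observed price, a quick sanity check that the decompose/recombine steps lose no information; the real model gains its accuracy from the per-component LSTMs, not from the recombination itself.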

4. Conclusions

To enhance the accuracy of agricultural product price forecasting, this study proposes a combined forecasting method based on secondary decomposition and long short-term memory (LSTM) networks. From the empirical analysis of pork, Chinese chive, shiitake mushroom, and cauliflower prices, the following conclusions are drawn:
(i)
Following the decomposition approach, the price series of pork, Chinese chives, shiitake mushrooms, and cauliflower are decomposed with VMD, and their residual components undergo a secondary EEMD decomposition. Each subsequence is then predicted individually with an LSTM model, and the final prediction is obtained by combining the subsequence predictions. The proposed model maintains high predictive accuracy across different prediction horizons.
(ii)
VMD decomposition yields higher accuracy than EEMD decomposition, but the residual component after VMD exhibits higher complexity; performing a secondary decomposition on this residual further improves model accuracy.
(iii)
Comparing the predictions of the VMD–EEMD–LSTM model with those of the single-decomposition models shows that the VMD–EEMD–LSTM model has stronger predictive capability, validating its effectiveness and applicability in agricultural product price forecasting.
Although the combined model constructed in this study demonstrates good predictive capability, it relies solely on historical prices as model inputs. Agricultural product prices are also influenced by factors such as temperature fluctuations, holidays, and consumer spending levels; integrating these influencing factors into the proposed model is a valuable avenue for further improving accuracy.
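For reference, the three evaluation metrics used throughout (MAE, MAPE, and RMSE) are not defined in this section; the sketch below assumes the conventional formulas:

```python
import math

def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error (as a fraction, matching Tables 3-8)."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

actual = [4.77, 4.90, 5.02]      # illustrative weekly prices
predicted = [4.70, 5.00, 4.95]
print(round(mae(actual, predicted), 4), round(rmse(actual, predicted), 4))
```

RMSE penalizes large deviations more heavily than MAE, which is why the two metrics can rank models slightly differently during sharp price swings.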

Author Contributions

Conceptualization, C.S. and M.P.; methodology, C.S. and M.P.; software, M.P.; validation, M.P. and B.C.; formal analysis, M.P. and B.C.; investigation, M.P.; resources, C.S. and M.P.; data curation, C.S. and M.P.; writing—original draft preparation, M.P.; writing—review and editing, M.P. and S.C.; visualization, M.P. and B.C.; supervision, C.S. and H.S.; project administration, C.S. and H.S.; funding acquisition, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Henan Provincial Science and Technology Research Project, "Research on Key Technology for Secure Data Sharing of Crop Germplasm Resources in Collaborative Blocks on and off the Chain" (Grant No. 232102210122); the Key Research Project of Henan Provincial Higher Education Institutions, "Research on Identity Authentication Based on Alliance Chain and Face Recognition" (Grant No. 23A520005); and the Henan Provincial Science and Technology Public Relations Project, "Research on Multi-attribute Based Germplasm Resources Mass Data Organization Methodology" (Grant No. 232102520006).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are available from their original source as cited in the article. The weekly price data for pork, Chinese chives, shiitake mushrooms, and cauliflower can be obtained from the website of the National Agricultural Products Wholesale Market of the Ministry of Agriculture and Rural Affairs of China (https://ncpscxx.moa.gov.cn, accessed on 22 November 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Liu, H.; Shao, S. India’s Tea Price Analysis Based on ARMA Model. Mod. Econ. 2016, 7, 118–123. [Google Scholar] [CrossRef]
  2. Samson, T.K.; Akanbi, A.A.; Omoyajowo, A.C.; Ogunlaran, O.M. Modelling and Forecasting Dairy Milk Production: Evidence from Autoregressive Moving Average (ARMA) Models. IOP Conf. Ser. Earth Environ. Sci. 2023, 1219, 012026. [Google Scholar] [CrossRef]
  3. Dutta, S.; Maiti, S. Price Forecasting of Agricultural Products Using Arima Models. Indian J. Agric. Mark. 2021, 35, 149–164. [Google Scholar]
  4. Padhan, P.C. Application of ARIMA Model for Forecasting Agricultural Productivity in India. J. Agric. Soc. Sci. 2012, 8, 50–56. [Google Scholar]
  5. Wang, J.; Wang, B. Application of GM (1,1) Model Based on Least Square Method in Vegetable Yield Forecast in China. Math. Theory Appl. 2016, 36, 116–124. [Google Scholar] [CrossRef]
  6. Bisht, A. Estimating Volatility in Prices of Pulses in India: An Application of Garch Model. Econ. Aff. 2019, 64, 513–516. [Google Scholar] [CrossRef]
  7. Lin, K.-P.; Pai, P.-F.; Yang, S.-L. Forecasting Concentrations of Air Pollutants by Logarithm Support Vector Regression with Immune Algorithms. Appl. Math. Comput. 2011, 217, 5318–5327. [Google Scholar] [CrossRef]
  8. Taylan, O. Modelling and Analysis of Ozone Concentration by Artificial Intelligent Techniques for Estimating Air Quality. Atmos. Environ. 2017, 150, 356–365. [Google Scholar] [CrossRef]
  9. Jeong, M.; Lee, Y.J.; Choe, Y. Forecasting Agricultural Commodity Price: The Case of Onion. J. Res. Humanit. Soc. Sci. 2017, 5, 78–81. [Google Scholar]
  10. Kurumatani, K. Time Series Forecasting of Agricultural Product Prices Based on Recurrent Neural Networks and Its Evaluation Method. SN Appl. Sci. 2020, 2, 1434. [Google Scholar] [CrossRef]
  11. Adisa, O.; Botai, J.; Adeola, A.; Hassen, A.; Botai, C.; Darkey, D.; Tesfamariam, E. Application of Artificial Neural Network for Predicting Maize Production in South Africa. Sustainability 2019, 11, 1145. [Google Scholar] [CrossRef]
  12. Haider, S.; Naqvi, S.; Akram, T.; Umar, G.; Shahzad, A.; Sial, M.; Khaliq, S.; Kamran, M. LSTM Neural Network Based Forecasting Model for Wheat Production in Pakistan. Agronomy 2019, 9, 72. [Google Scholar] [CrossRef]
  13. Zheng, Y.; Shu, T. A Study on the Forecasting Trend of Foreign Direct Investment Chain System—Based on the Comparison between Grey Markov Forecasting Model and Time Series Forecasting Model. Syst. Eng. Theory Pract. 2016, 36, 897–909. [Google Scholar] [CrossRef]
  14. Xiao, Y.; Yin, H.; Zhang, Y.; Qi, H.; Zhang, Y.; Liu, Z. A Dual-stage Attention-based Conv-LSTM Network for Spatio-temporal Correlation and Multivariate Time Series Prediction. Int. J. Intell. Syst. 2021, 36, 2036–2057. [Google Scholar] [CrossRef]
  15. Schmidhuber, J. Deep Learning in Neural Networks: An Overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  16. Bu, C.; Chen, L. Demand Forecast of Cold Chain Logistics of Fresh Agricultural Products in Jiangsu Province Based on GA-BP Model. World Sci. Res. J. 2021, 7, 210–217. [Google Scholar] [CrossRef]
  17. Duan, Q.; Zhang, L.; Wei, F. Forecasting Model and Validation for Aquatic Product Price Based on Time Series GA-SVR. Trans. Chin. Soc. Agric. Eng. 2017, 33, 308–314. [Google Scholar] [CrossRef]
  18. Shao, B.; Li, M.; Zhao, Y.; Bian, G. Nickel Price Forecast Based on the LSTM Neural Network Optimized by the Improved PSO Algorithm. Math. Probl. Eng. 2019, 2019, 1934796. [Google Scholar] [CrossRef]
  19. Bhimavarapu, U.; Battineni, G.; Chintalapudi, N. Improved Optimization Algorithm in LSTM to Predict Crop Yield. Computers 2023, 12, 10. [Google Scholar] [CrossRef]
  20. Fang, Y.; Guan, B.; Wu, S.; Heravi, S. Optimal Forecast Combination Based on Ensemble Empirical Mode Decomposition for Agricultural Commodity Futures Prices. J. Forecast. 2020, 39, 877–886. [Google Scholar] [CrossRef]
  21. Guo, Y.; Tang, D.; Tang, W.; Yang, S.; Tang, Q.; Feng, Y.; Zhang, F. Agricultural Price Prediction Based on Combined Forecasting Model under Spatial-Temporal Influencing Factors. Sustainability 2022, 14, 10483. [Google Scholar] [CrossRef]
  22. Ling, L.; Zhang, D.; Mugera, A.W.; Chen, S.; Xia, Q. A Forecast Combination Framework with Multi-Time Scale for Livestock Products’ Price Forecasting. Math. Probl. Eng. 2019, 2019, 8096206. [Google Scholar] [CrossRef]
  23. Wu, J.; Hu, Y.; Wu, D.; Yang, Z. An Aquatic Product Price Forecast Model Using VMD-IBES-LSTM Hybrid Approach. Agriculture 2022, 12, 1185. [Google Scholar] [CrossRef]
  24. Wang, S. Crude Oil Price Forecasting with TEI Methodology. J. Syst. Sci. Complex. 2005, 18, 145–166. [Google Scholar]
  25. Zhan, L.; Tang, Z. Natural Gas Price Forecasting by a New Hybrid Model Combining Quadratic Decomposition Technology and LSTM Model. Math. Probl. Eng. 2022, 2022, 5488053. [Google Scholar] [CrossRef]
  26. Tang, H.; Bhatti, U.A.; Li, J.; Marjan, S.; Baryalai, M.; Assam, M.; Ghadi, Y.Y.; Mohamed, H.G. A New Hybrid Forecasting Model Based on Dual Series Decomposition with Long-Term Short-Term Memory. Int. J. Intell. Syst. 2023, 2023, 9407104. [Google Scholar] [CrossRef]
  27. Pankaj, D.; Kumar, J.G.; Achal, L. Empirical Mode Decomposition Based Support Vector Regression for Agricultural Price Forecasting. Indian J. Ext. Educ. 2020, 56, 7–12. [Google Scholar]
  28. Choudhary, K.; Jha, G.K.; Kumar, R.R.; Mishra, D.C. Agricultural Commodity Price Analysis Using Ensemble Empirical Mode Decomposition: A Case Study of Daily Potato Price Series. Indian J. Agric. Sci. 2019, 89, 882–886. [Google Scholar] [CrossRef]
  29. Pan, H.; Zheng, J.; Yang, Y.; Cheng, J. Nonlinear Sparse Mode Decomposition and Its Application in Planetary Gearbox Fault Diagnosis. Mech. Mach. Theory 2021, 155, 104082. [Google Scholar] [CrossRef]
  30. Alireza, R.; Amanollahi, J. Air Quality Data Series Estimation Based on Machine Learning Approaches for Urban Environments. Air Qual. Atmos. Health 2021, 14, 191–201. [Google Scholar] [CrossRef]
  31. Dragomiretskiy, K.; Zosso, D. Variational Mode Decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544. [Google Scholar] [CrossRef]
  32. Wu, Z.; Huang, N.E. A Study of the Characteristics of White Noise Using the Empirical Mode Decomposition Method. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 2004, 460, 1597–1611. [Google Scholar] [CrossRef]
  33. Zhang, J. The Approach for Determining the Optimal Value of k in Signal Decomposition Using Variational Mode Decomposition (VMD). J. Lanzhou Univ. Arts Sci. (Nat. Sci.) 2022, 36, 75–79. [Google Scholar] [CrossRef]
  34. Mohanty, M.K.; Thakurta, P.K.G.; Kar, S. Agricultural Commodity Price Prediction Model: A Machine Learning Framework. Neural Comput. Appl. 2023, 35, 15109–15128. [Google Scholar] [CrossRef]
  35. Zhang, D.; Zeng, L.; Ling, L. Agricultural Futures Price Prediction Based on the VMD-ELM Decomposition and Ensemble Model. Oper. Res. Manag. Sci. 2023, 32, 127–133. [Google Scholar]
Figure 1. VMD–EEMD–LSTM modeling process diagram.
Figure 2. The original sequence of pork prices.
Figure 3. (a) VMD decomposition of pork price series; (b) EEMD decomposition of pork price series.
Figure 4. EEMD decomposition of Res series.
Figure 5. The prediction results for each model with a prediction horizon of 1.
Figure 6. The prediction results for each model with a prediction horizon of 2.
Figure 7. The prediction results for each model with a prediction horizon of 3.
Figure 8. The original price sequences of Chinese chives, shiitake mushrooms, and cauliflower.
Figure 9. (a,c,e) The VMD decomposition sequences for Chinese chives, shiitake mushrooms, and cauliflower, respectively. (b,d,f) The EEMD secondary decomposition of the residual (Res) sequences for Chinese chives, shiitake mushrooms, and cauliflower.
Figure 10. The prediction results for Chinese chives with a prediction horizon of 1.
Figure 11. The prediction results for shiitake mushrooms with a prediction horizon of 1.
Figure 12. The prediction results for cauliflower with a prediction horizon of 1.
Table 1. Conventional statistical analysis of pork, Chinese chives, shiitake mushrooms, and cauliflower price sequences.

| Agricultural Product Category | Average Value | Maximum Value | Minimum Value | Standard Deviation | Kurtosis | Skewness |
|---|---|---|---|---|---|---|
| Pork | 29.34 | 52.38 | 15.83 | 11.14 | −1.21 | 0.58 |
| Chinese chives | 4.77 | 7.78 | 3.0 | 1.22 | −0.66 | 0.43 |
| Shiitake mushrooms | 7.33 | 11.14 | 4.14 | 1.06 | 1.85 | 0.30 |
| Cauliflower | 4.36 | 8.01 | 2.78 | 0.99 | 2.86 | 1.45 |
Table 2. The central frequencies for different values of k.

| k | VMF1 | VMF2 | VMF3 | VMF4 | VMF5 | VMF6 | VMF7 | VMF8 |
|---|---|---|---|---|---|---|---|---|
| 3 | 0.47 | 98.01 | 643.83 | | | | | |
| 4 | 0.36 | 81.87 | 348.35 | 1309.36 | | | | |
| 5 | 0.29 | 70.56 | 284.28 | 731.68 | 3415.09 | | | |
| 6 | 0.29 | 68.66 | 268.29 | 605.97 | 1352.75 | 4109.05 | | |
| 7 | 0.28 | 66.47 | 259.82 | 564.02 | 1090.77 | 1936.27 | 4187.84 | |
| 8 | 0.27 | 64.74 | 251.67 | 537.42 | 994.39 | 1610.41 | 2683.42 | 4288.26 |
Table 3. Experiments with Different Time Steps for Various Agricultural Products.

| Agricultural Product Category | Metric | Step 3 | Step 5 | Step 7 | Step 9 | Step 11 |
|---|---|---|---|---|---|---|
| Pork | MAE | 0.9151 | 0.8547 | 0.9417 | 0.9147 | 0.9485 |
| | MAPE | 0.0371 | 0.0354 | 0.0368 | 0.0356 | 0.0367 |
| | RMSE | 1.2450 | 1.2268 | 1.2648 | 1.2409 | 1.2590 |
| Chinese chives | MAE | 0.3339 | 0.2899 | 0.3150 | 0.3056 | 0.3116 |
| | MAPE | 0.0679 | 0.0590 | 0.0633 | 0.0617 | 0.0629 |
| | RMSE | 0.3912 | 0.3549 | 0.3878 | 0.3726 | 0.3822 |
| Shiitake mushrooms | MAE | 0.4850 | 0.4310 | 0.4517 | 0.4601 | 0.4716 |
| | MAPE | 0.0609 | 0.0541 | 0.0552 | 0.0566 | 0.0559 |
| | RMSE | 0.6038 | 0.5518 | 0.5746 | 0.5749 | 0.6073 |
| Cauliflower | MAE | 0.3514 | 0.3239 | 0.3424 | 0.3365 | 0.3309 |
| | MAPE | 0.0831 | 0.0755 | 0.0816 | 0.0799 | 0.0783 |
| | RMSE | 0.4303 | 0.4135 | 0.4306 | 0.4390 | 0.4437 |
Table 4. Different prediction horizon error results.

| Prediction Horizon | Metric | RF | ELM | LSTM | EEMD-LSTM | VMD-LSTM | VMD–EEMD–LSTM |
|---|---|---|---|---|---|---|---|
| One | MAE | 1.2864 | 1.1915 | 0.8924 | 0.7977 | 0.5675 | 0.4790 |
| | MAPE | 0.0511 | 0.0473 | 0.0354 | 0.0344 | 0.0231 | 0.0195 |
| | RMSE | 1.6566 | 1.6145 | 1.2268 | 0.9611 | 0.7071 | 0.5926 |
| Two | MAE | 1.9177 | 1.6274 | 1.5269 | 1.1125 | 0.6512 | 0.5617 |
| | MAPE | 0.0757 | 0.0639 | 0.0603 | 0.0485 | 0.0264 | 0.0229 |
| | RMSE | 2.3496 | 2.2520 | 2.0292 | 1.3649 | 0.7950 | 0.6775 |
| Three | MAE | 2.8440 | 2.5612 | 2.1314 | 1.6517 | 0.7820 | 0.7075 |
| | MAPE | 0.1105 | 0.1023 | 0.0850 | 0.0730 | 0.0313 | 0.0286 |
| | RMSE | 3.4253 | 3.222 | 2.7491 | 2.0175 | 0.9408 | 0.8529 |
Table 5. The central frequencies of the original price sequences of Chinese chives, shiitake mushrooms, and cauliflower under different values of k in the decomposition. (a) Chinese chives; (b) shiitake mushrooms; (c) cauliflower.

(a)

| k | VMF1 | VMF2 | VMF3 | VMF4 | VMF5 | VMF6 |
|---|---|---|---|---|---|---|
| 3 | 0.79 | 349.9 | 3849.5 | | | |
| 4 | 0.70 | 314.3 | 1261.4 | 4096.3 | | |
| 5 | 0.60 | 283.1 | 1160.4 | 3426.5 | 4194.0 | |
| 6 | 0.26 | 220.9 | 566.1 | 1354.7 | 3531.5 | 4274.2 |

(b)

| k | VMF1 | VMF2 | VMF3 | VMF4 | VMF5 | VMF6 | VMF7 | VMF8 |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.10 | 333.0 | 1780.5 | 2529.8 | 3654.7 | | | |
| 6 | 0.12 | 339.4 | 1748.0 | 2243.8 | 2904.6 | 4405.4 | | |
| 7 | 0.11 | 313.0 | 1152.7 | 1875.4 | 2673.5 | 3612.6 | 4451.9 | |
| 8 | 0.10 | 302.5 | 1112.1 | 1790.0 | 2272.5 | 2858.8 | 3655.7 | 4464.5 |

(c)

| k | VMF1 | VMF2 | VMF3 | VMF4 | VMF5 | VMF6 | VMF7 | VMF8 |
|---|---|---|---|---|---|---|---|---|
| 5 | 0.46 | 374.7 | 859.2 | 1738.4 | 2976.0 | | | |
| 6 | 0.43 | 360.2 | 802.3 | 1615.5 | 2625.0 | 4146.6 | | |
| 7 | 0.40 | 342.5 | 736.0 | 1328.2 | 1991.8 | 2873.3 | 4248.0 | |
| 8 | 0.39 | 337.2 | 715.5 | 1261.0 | 1895.3 | 2599.8 | 3350.8 | 4383.9 |
Table 6. Error comparison for different prediction horizons in Chinese chives.

| Prediction Horizon | Metric | RF | ELM | LSTM | EEMD-LSTM | VMD-LSTM | VMD–EEMD–LSTM |
|---|---|---|---|---|---|---|---|
| One | MAE | 0.3500 | 0.3155 | 0.2899 | 0.1978 | 0.1570 | 0.1288 |
| | MAPE | 0.0709 | 0.0645 | 0.0590 | 0.0389 | 0.0335 | 0.0269 |
| | RMSE | 0.4228 | 0.3845 | 0.3549 | 0.2562 | 0.1923 | 0.1588 |
| Two | MAE | 0.5384 | 0.4909 | 0.4752 | 0.2551 | 0.2384 | 0.1990 |
| | MAPE | 0.1105 | 0.1030 | 0.0973 | 0.0526 | 0.0507 | 0.0420 |
| | RMSE | 0.6652 | 0.6018 | 0.5662 | 0.3052 | 0.2953 | 0.2464 |
| Three | MAE | 0.6877 | 0.6387 | 0.6022 | 0.3434 | 0.2853 | 0.2550 |
| | MAPE | 0.1428 | 0.1359 | 0.1246 | 0.0729 | 0.0614 | 0.0551 |
| | RMSE | 0.8435 | 0.7630 | 0.7350 | 0.4047 | 0.3649 | 0.3149 |
Table 7. Error comparison for different prediction horizons in shiitake mushrooms.

| Prediction Horizon | Metric | RF | ELM | LSTM | EEMD-LSTM | VMD-LSTM | VMD–EEMD–LSTM |
|---|---|---|---|---|---|---|---|
| One | MAE | 0.6912 | 0.6304 | 0.4310 | 0.3781 | 0.1785 | 0.1443 |
| | MAPE | 0.0828 | 0.0792 | 0.0541 | 0.0534 | 0.0221 | 0.0185 |
| | RMSE | 0.9300 | 0.7800 | 0.5518 | 0.5246 | 0.2157 | 0.1718 |
| Two | MAE | 0.8739 | 0.7604 | 0.7269 | 0.4993 | 0.2167 | 0.1805 |
| | MAPE | 0.1062 | 0.0952 | 0.0901 | 0.0700 | 0.0267 | 0.0226 |
| | RMSE | 1.1139 | 0.9661 | 0.9378 | 0.7330 | 0.2733 | 0.2153 |
| Three | MAE | 1.0250 | 0.9748 | 0.8739 | 0.6192 | 0.2457 | 0.2255 |
| | MAPE | 0.1253 | 0.1192 | 0.1068 | 0.0884 | 0.0307 | 0.0285 |
| | RMSE | 1.2406 | 1.2242 | 1.1230 | 0.9254 | 0.3170 | 0.2715 |
Table 8. Error comparison for different prediction horizons in cauliflower.

| Prediction Horizon | Metric | RF | ELM | LSTM | EEMD-LSTM | VMD-LSTM | VMD–EEMD–LSTM |
|---|---|---|---|---|---|---|---|
| One | MAE | 0.3904 | 0.3568 | 0.3239 | 0.2597 | 0.1596 | 0.1528 |
| | MAPE | 0.0978 | 0.0826 | 0.0755 | 0.0669 | 0.0409 | 0.0389 |
| | RMSE | 0.4893 | 0.4714 | 0.4136 | 0.2881 | 0.1960 | 0.1889 |
| Two | MAE | 0.6209 | 0.5925 | 0.5834 | 0.3931 | 0.2317 | 0.2248 |
| | MAPE | 0.1590 | 0.1453 | 0.1444 | 0.1014 | 0.0596 | 0.0575 |
| | RMSE | 0.7326 | 0.7213 | 0.7033 | 0.4365 | 0.2814 | 0.2744 |
| Three | MAE | 0.7387 | 0.7025 | 0.7012 | 0.5324 | 0.3067 | 0.3013 |
| | MAPE | 0.1925 | 0.1790 | 0.1772 | 0.1362 | 0.0804 | 0.0788 |
| | RMSE | 0.8589 | 0.8366 | 0.8244 | 0.5917 | 0.3785 | 0.3717 |

Sun, C.; Pei, M.; Cao, B.; Chang, S.; Si, H. A Study on Agricultural Commodity Price Prediction Model Based on Secondary Decomposition and Long Short-Term Memory Network. Agriculture 2024, 14, 60. https://0-doi-org.brum.beds.ac.uk/10.3390/agriculture14010060
