Article

Applying Wavelet Filters in Wind Forecasting Methods

by José A. Domínguez-Navarro 1,*, Tania B. Lopez-Garcia 1 and Sandra Minerva Valdivia-Bautista 2

1 Department of Electrical Engineering, EINA, University of Zaragoza, 50018 Zaragoza, Spain
2 Centro Universitario de Ciencias e Ingenierías (CUCEI), Universidad de Guadalajara (UDG), Guadalajara 44160, Mexico
* Author to whom correspondence should be addressed.
Submission received: 18 April 2021 / Revised: 22 May 2021 / Accepted: 26 May 2021 / Published: 29 May 2021
(This article belongs to the Special Issue Power System Simulation, Control and Optimization Ⅱ)

Abstract

Wind is a physical phenomenon with uncertainties at several temporal scales; in addition, measured wind time series have noise superimposed on them. These time series are the basis for forecasting methods. This paper studied the application of the wavelet transform to three forecasting methods (stochastic, neural network, and fuzzy) and six wavelet families. Wind speed time series were first filtered with wavelet filters to eliminate the high-frequency component, and the different forecasting methods were then applied to the filtered series. All methods showed important improvements when the wavelet filter was applied. It is important to note that applying the wavelet technique requires a deep study of the time series in order to select the appropriate family and filter level: the best results were obtained at an optimal filtering level, and an improper selection may significantly degrade the accuracy of the results.

1. Introduction

When the penetration of wind power into the network reaches a certain level, system operators have difficulties in balancing generation with demand. To help address this issue, it is necessary to apply forecasting methods to estimate the wind power generated in the next few hours and days.
Several methods have been used to forecast wind speed: stochastic methods such as AR [1] and ARIMA [2,3,4] processes, or heuristic methods such as the Kalman filter [5,6], neural networks [7,8,9], and neuro-fuzzy systems [10,11]. Among all these methods, neural networks are the most widely used by researchers.
It is difficult to compare different methods if they do not use the same dataset and the same performance indexes. A typical approach for comparison is to use the persistence model as the reference [12]. Table 1 illustrates the improvement achieved by different forecasting methods compared to the persistence method (although different forecast horizons are used, the list gives an idea of the range of improvement). These improvements are rarely over 25%.
Wind speed series have considerable uncertainty because of weather fluctuations and the added instrument uncertainty. This uncertainty and noise make it difficult to improve the forecasts. There are several strategies to process the data [13,14,15]. Some authors have used wavelet transforms [16] to process the time series. Most authors [17,18,19,20,21,22,23,24,25,26,27,28,29,30,31] have used wavelets to decompose the time series into sub-series, called the approximation and details; applied the forecasting method to each sub-series; and finally summed the forecasting results to obtain the final solution. The advantage of this method comes from the sub-series performing better in the forecasting process than the original series. A few authors [32,33,34,35,36,37] have used other wavelet filtering techniques to eliminate the high-frequency variations and smooth the time series. In all of these papers, the authors selected the wavelet family and decomposition level without much justification; for example, each of the cited works used only one wavelet family. These works used the wavelet transform as an auxiliary technique and did not study it in sufficient depth.
In this paper, the wavelet transform was analyzed thoroughly. This work demonstrated that the selection of the wavelet family and decomposition level were far more important than they have been given credit for thus far. The improvement obtained was greater than that achieved with most new forecasting methods. The result was applied to the three main forecasting methods currently used, namely statistical, neural network, and fuzzy methods. These were applied to several forecast horizons and sample times. In all cases, the results obtained were improved for each method when the optimal wavelet filter was applied. Finally, the main contribution of the paper is to highlight the importance of data processing and to propose it as an additional phase in the forecasting method, so that both steps are optimized together.
The rest of this paper is structured as follows. Section 2 explains the basic concepts of the wavelet transform. Section 3 presents the different forecasting approaches. Section 4 describes the forecasting approach proposed. In Section 5, the comparison criteria to evaluate the improvement of each method are explained. Section 6 presents the results for the different methods considered. Finally, Section 7 draws the main conclusions of this research.

2. Basic Concepts of the Wavelet Transform

Fourier analysis is commonly used to help analyze different types of signals. With this method, a signal f(t) is expressed as a linear decomposition of real-valued functions of t, as shown in Equation (1),

$$f(t) = \sum_{k} a_k \,\phi_k(t) \quad (1)$$

where $a_k$ are the real-valued expansion coefficients and $\phi_k(t)$ are a set of real-valued functions of t called the expansion set. In a Fourier series, these are $\sin(k\omega_0 t)$ and $\cos(k\omega_0 t)$ with frequencies $k\omega_0$.

2.1. Wavelet Transform

An introduction to the wavelet transform can be found in [16]. In Equation (2), the signal is already decomposed into coefficients aj,k and functions Ψj,k(t), which depend on parameters j and k,
$$f(t) = \sum_{k} \sum_{j} a_{j,k}\, \Psi_{j,k}(t) \quad (2)$$

where $\Psi_{j,k}(t)$ are the wavelet expansion functions and $a_{j,k}$ are the discrete wavelet transform of f(t), or the set of expansion coefficients.
The wavelet expansion functions, or family of wavelets are generated from a mother wavelet Ψ(t) by scaling and translation:
$$\Psi_{j,k}(t) = 2^{j/2}\, \Psi(2^j t - k) \quad (3)$$

where the parameter k translates the function and the parameter j scales it. Figure 1 shows the translating and scaling operations of the function $\Psi(2^j t - k)$.
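The scaling and translation in Equation (3) can be made concrete with a short sketch. This is purely illustrative, using the Haar mother wavelet; the function names `haar` and `psi` are our own, not from the paper:

```python
def haar(t):
    """Haar mother wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere."""
    if 0.0 <= t < 0.5:
        return 1.0
    if 0.5 <= t < 1.0:
        return -1.0
    return 0.0

def psi(j, k, t):
    """Scaled and translated wavelet of Eq. (3): 2^(j/2) * psi(2^j t - k)."""
    return 2.0 ** (j / 2) * haar(2.0 ** j * t - k)
```

For example, `psi(1, 1, t)` is a copy of the mother wavelet compressed to half its width, shifted to start at t = 0.5, and rescaled by √2 so that its norm is preserved.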

2.2. Multi-Resolution Formulation of Wavelet Systems

In multi-resolution analysis, the resolution of the approximation of f(t) depends on the choice of j in Equation (2). For a value of j = j0, the equation is:
$$f(t) = \sum_{k} c_{j_0,k}\, \varphi_{j_0,k}(t) \quad (4)$$
For low values of j, the approximation of f(t) can represent only coarse information. In multi-resolution formulation, φj0,k(t) are called scaling functions. If we want to represent detailed information, then high values of j are required.
However, there is another way to describe a signal with better resolution without increasing j. This new approach consists of describing the differences between the approximation and the original signal with a combination of other functions called wavelets Ψj,k(t) and the coefficients dj,k, as shown in Equation (5). The parameters k and j indicate the translation and scaling of the function.
$$f(t) = \sum_{k} c_{j_0,k}\, \varphi_{j_0,k}(t) + \sum_{k} \sum_{j=j_0}^{J-1} d_{j,k}\, \psi_{j,k}(t) = \sum_{k} c_{j_0}(k)\, 2^{j_0/2}\, \varphi(2^{j_0} t - k) + \sum_{k} \sum_{j=j_0}^{J-1} d_j(k)\, 2^{j/2}\, \psi(2^j t - k) \quad (5)$$
There are several sets of scaling functions φ(t) and wavelets Ψ(t), as shown in Equation (6) (see Figure 2), which are chosen depending on the signal that has to be approximated.
$$\varphi(t) = \sum_{n} h_0(n)\, \sqrt{2}\, \varphi(2t - n), \qquad \psi(t) = \sum_{n} h_1(n)\, \sqrt{2}\, \varphi(2t - n) \quad (6)$$
In Equation (6), the coefficients $h_0(n)$ and $h_1(n)$, with $n \in \mathbb{Z}$, are sequences of real numbers called filter coefficients. The process is similar to digital filtering, where $h_0(m - 2k)$ acts as a low-pass filter and $h_1(m - 2k)$ acts as a high-pass filter. Figure 3 shows the decomposition of $c_j$ into $c_{j+1}$ (low frequency) and $d_{j+1}$ (high frequency).
The level j + 1 scaling and wavelet coefficients are:

$$c_{j+1}(k) = \sum_{m} h_0(m - 2k)\, c_j(m), \qquad d_{j+1}(k) = \sum_{m} h_1(m - 2k)\, c_j(m) \quad (7)$$

These expressions represent the approximation and the details of the signal at level j + 1, where the index substitution m = 2k + n relates the filter coefficients to the scaling coefficients.
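One analysis step of Equation (7) can be sketched directly. The two-tap Haar filters below are an assumption for illustration; longer families such as Daubechies only change the filter sequences h0 and h1:

```python
import math

# Haar analysis filters (assumed for illustration)
H0 = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # low-pass
H1 = [1 / math.sqrt(2), -1 / math.sqrt(2)]  # high-pass

def analysis_step(c):
    """One level of Eq. (7): filter c_j with h0/h1 and downsample by 2,
    producing the approximation c_{j+1} and the details d_{j+1}."""
    n = len(c) // 2
    approx = [sum(H0[m - 2 * k] * c[m] for m in range(2 * k, 2 * k + len(H0)))
              for k in range(n)]
    detail = [sum(H1[m - 2 * k] * c[m] for m in range(2 * k, 2 * k + len(H1)))
              for k in range(n)]
    return approx, detail
```

With the Haar filters, each approximation coefficient is a scaled pairwise average and each detail coefficient a scaled pairwise difference, which is why the details vanish on locally constant stretches of the signal.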
This process can be repeated iteratively to reduce the high-frequency component as shown in Figure 4.
Figure 5b shows the approximation c2(k) and the details d2(k), d1(k), and d0(k) of the original signal f(t), which is shown in Figure 5a.
Wind speed time series have a high-frequency component due to wind gusts, measurement errors, and random events as well as a low-frequency component with slower variation. The high-frequency component of the signal introduces a lot of noise into forecasting methods, causing them to perform poorly. If this component is eliminated and the forecasting methods are applied to an approximation with only the low frequency component, improved results can be obtained.
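The filtering strategy described here (keep the approximation, discard the details) can be sketched end to end with a Haar filter bank. `haar_denoise` is a hypothetical helper written for this sketch, not code from the paper:

```python
import math

SQRT2 = math.sqrt(2)

def haar_decompose(c):
    """One Haar analysis step: scaled pairwise averages and differences."""
    approx = [(c[2 * k] + c[2 * k + 1]) / SQRT2 for k in range(len(c) // 2)]
    detail = [(c[2 * k] - c[2 * k + 1]) / SQRT2 for k in range(len(c) // 2)]
    return approx, detail

def haar_denoise(signal, level):
    """Decompose `level` times, discard all details, and reconstruct.
    This mirrors the paper's strategy of keeping only the approximation."""
    c = list(signal)
    for _ in range(level):
        c, _ = haar_decompose(c)       # details are dropped
    for _ in range(level):             # synthesis with the details zeroed
        c = [x / SQRT2 for x in c for _ in (0, 1)]
    return c
```

At level 1 this replaces each pair of samples by their average; deeper levels smooth over longer windows, which is exactly the trade-off the paper studies when choosing the filter level.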

2.3. Wavelet Families

There is a large number of wavelets. The selection of the wavelet function depends on the problem and the properties of the wavelet function [16]. The main properties are its region of support and the number of vanishing moments. The region of support affects its localization capabilities, whereas the vanishing moments limit the ability of the wavelet to represent the information of a signal. In this paper, the wavelet families used were: Haar, Daubechies, Symlet, Coiflet, Biorsplines, and Meyer.
There are some methods to select the optimal wavelet family, but they have been developed for specific applications and it is not certain that they can be applied to forecasting problems:
  • In the cross-correlation method [38], the optimum wavelet maximizes the cross-correlation between the signal of interest and the wavelet;
  • In the energy method [39], the aim is to maximize the energy of the signal of interest; and
  • In the entropy method [40], the best wavelet minimizes the entropy of the signal of interest.
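As an illustration of the third criterion, a minimal sketch of entropy-based selection; the candidate coefficient arrays and the function names are assumptions, not part of the cited methods' implementations:

```python
import math

def shannon_entropy(coeffs):
    """Entropy of the normalized coefficient energies p_i = c_i^2 / sum(c^2)."""
    energy = sum(c * c for c in coeffs)
    probs = [c * c / energy for c in coeffs if c != 0]
    return -sum(p * math.log(p) for p in probs)

def select_family(candidates):
    """Entropy criterion [40]: the best wavelet minimizes the entropy of the
    coefficients it produces. `candidates` maps family name -> coefficients."""
    return min(candidates, key=lambda name: shannon_entropy(candidates[name]))
```

A family that concentrates the signal's energy in a few large coefficients yields low entropy and is preferred under this criterion.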

3. Forecasting Models

The wavelet filter was applied to several forecasting methods, namely regression, neural network and fuzzy models.

3.1. Persistence Model

In the persistence model, the variable value in t + Δt is equal to the variable value in t. Due to its simplicity, this model was used as a reference.
$$y_t = y_{t-1} \quad (8)$$

3.2. Regressive Model

This model [41] is based on multiple regression, which studies the relations between a dependent variable and a set of independent variables. The independent variables may be exogenous, such as temperature, or intrinsic, such as the historical values of the dependent variable. When the model only uses historical values, it is called an auto-regressive time series model. In this work, the model used the historical values:

$$y_t = \sum_{i=1}^{p} \alpha_i\, y_{t-i} \quad (9)$$

where $\alpha_i$ are the auto-regressive parameters and p is the number of past values.
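Equation (9) can be fitted by ordinary least squares. The sketch below solves the normal equations in pure Python for small p; `fit_ar` and `predict_ar` are illustrative names, not the authors' implementation:

```python
def fit_ar(y, p):
    """Least-squares fit of Eq. (9) via the normal equations
    (Gaussian elimination; fine for the small orders p used here)."""
    rows = [y[t - p:t][::-1] for t in range(p, len(y))]   # [y_{t-1}, ..., y_{t-p}]
    targets = y[p:]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * t for r, t in zip(rows, targets)) for i in range(p)]
    for col in range(p):                                  # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    alpha = [0.0] * p
    for i in range(p - 1, -1, -1):                        # back substitution
        alpha[i] = (b[i] - sum(A[i][j] * alpha[j] for j in range(i + 1, p))) / A[i][i]
    return alpha

def predict_ar(y, alpha):
    """One-step-ahead forecast: y_t = sum_i alpha_i * y_{t-i}."""
    return sum(a * v for a, v in zip(alpha, y[::-1][:len(alpha)]))
```

On a series that actually follows y_t = 0.5 y_{t-1}, the fitted coefficient recovers 0.5, which is a convenient sanity check for the estimator.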

3.3. Neural Network Model

Neural networks [42] are auto-adaptive dynamic systems that are able to find nonlinear relations between several variables. The model used is a multilayer perceptron that gives good results in forecasting problems:
$$y_t = \sum_{j} w_{kj}\, g\!\left( \sum_{i} w_{ji}\, y_{t-i} - \theta_j \right) - \theta_k \quad (10)$$

where $\theta_j$ and $\theta_k$ are the layer thresholds; $w_{ji}$ and $w_{kj}$ are the layer weights; i and j index the neurons in each layer; and g is the activation function.
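Equation (10) written out for one hidden layer, using a sigmoid activation g (a common choice; the paper does not state which activation was used). The weights passed in are illustrative, not trained values:

```python
import math

def mlp_forecast(y_lags, w_hidden, theta_hidden, w_out, theta_out):
    """Forward pass of Eq. (10): one hidden layer with sigmoid activation g
    and a linear output with threshold theta_out."""
    g = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [g(sum(wji * yi for wji, yi in zip(row, y_lags)) - th)
              for row, th in zip(w_hidden, theta_hidden)]
    return sum(wkj * hj for wkj, hj in zip(w_out, hidden)) - theta_out
```

In the paper's setting, `y_lags` would be the eight filtered past wind speed values, and the weights and thresholds would be learned during training.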

3.4. Fuzzy Model

The fuzzy model [43] is based on concepts of fuzzy sets theory, fuzzy rules of type if–then, and approximation reasoning:
$$R_j:\ \text{if } \bigwedge_i \left( y_{t-i} \text{ is } \tilde{A}_i \right) \text{ then } u_j = \sum_i p_i\, y_{t-i}, \qquad y_t = \frac{\sum_j \omega_j\, u_j}{\sum_j \omega_j} = \sum_j \bar{\omega}_j\, u_j \quad (11)$$

where $\bar{\omega}_j$ are the normalized firing strengths; $u_j$ are functions of the inputs $y_{t-i}$; $\tilde{A}_i$ are the fuzzy sets that represent the input variables; and $p_i$ is the membership grade of each input $y_{t-i}$ in $\tilde{A}_i$.
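A minimal Takagi–Sugeno-style evaluation of Equation (11); the rule structure and names below are assumptions made for illustration, not the authors' model:

```python
def ts_forecast(y_lags, rules):
    """Evaluate Eq. (11): each rule is (membership functions, consequent
    parameters p_i). Firing strengths use the product t-norm and the rule
    outputs u_j are combined by their normalized weights."""
    weights, outputs = [], []
    for memberships, params in rules:
        w = 1.0
        for mu, y in zip(memberships, y_lags):
            w *= mu(y)                                  # rule firing strength
        weights.append(w)
        outputs.append(sum(p * y for p, y in zip(params, y_lags)))
    total = sum(weights)
    return sum(w * u for w, u in zip(weights, outputs)) / total
```

With two equally-firing rules whose consequents pick out different lags, the forecast is simply the average of the two rule outputs, which shows the normalization at work.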

4. Forecasting Approach

Wind time series have high variability due to the intrinsic uncertainty of the wind; this variability negatively influences the forecasting result. In this paper, the adopted approach, illustrated in Figure 6, consists of using an optimized filter based on wavelets to de-noise the data that are used to train the chosen forecasting method. The filter is optimized with a genetic algorithm that selects the best wavelet family and the optimal decomposition level.
The algorithm receives as inputs the wind speed data and the prediction method to be used.
In step 1, a random population of individuals is created. Each individual contains the information of the parameters of the prediction method, the wavelet family, and the level of decomposition.
In step 2, each individual in the population is evaluated. The evaluation has three phases.
The first phase consists of applying the wavelet filter to the input data with the wavelet family and the level indicated by each individual. The data are divided into training and test sets. The original time series is filtered with the wavelet transform and is decomposed into an approximation component and several details of the signal. The approximation component has improved behavior in comparison to the original series in the forecasting process. Therefore, only the approximation component is used in the next phase and the details are discarded. In this phase, there are two important decisions to analyze: the best wavelet family to use and the optimal filter level. These questions have not been answered in the technical literature.
The second phase consists of training the prediction method with the parameters indicated by each individual and the training dataset. A forecasting method is applied only to the approximation component. In this paper, three methods were used to forecast the time series: autoregressive, neural networks, and fuzzy models.
The third phase consists of evaluating the prediction method, already trained, with the test dataset.
In step 3, the best individuals are selected based on the error obtained in the evaluation with the test data.
In step 4, the crossover and mutation operators of the genetic algorithm are applied that give rise to a new population.
Steps 2 through 4 are repeated until the termination criterion is reached, which is the number of generations or iterations of the genetic algorithm.
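Steps 1–4 can be sketched as a small genetic loop. The fitness function below is only a stand-in for the filter–train–evaluate pipeline of phases 1–3 (the real evaluation would return the test-set RMSE), so the whole block is a toy sketch, not the authors' implementation:

```python
import random

# Hypothetical search space mirroring the individuals of step 1
FAMILIES = ["haar", "db2", "db3", "sym4", "coif3", "dmey"]
LEVELS = range(1, 8)

def evaluate(individual):
    """Placeholder for phases 1-3: would return the test-set RMSE of the
    forecasting method trained on data filtered with this family/level.
    Here an assumed toy surrogate with a minimum at ("haar", 4)."""
    family, level = individual
    return abs(level - 4) + 0.1 * FAMILIES.index(family)

def genetic_search(generations=30, pop_size=12, seed=0):
    rng = random.Random(seed)
    pop = [(rng.choice(FAMILIES), rng.choice(LEVELS)) for _ in range(pop_size)]  # step 1
    for _ in range(generations):                         # steps 2-4, repeated
        pop.sort(key=evaluate)                           # step 2 + step 3: selection
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                         # step 4: crossover
            if rng.random() < 0.3:                       # step 4: mutation
                child = (rng.choice(FAMILIES), rng.choice(LEVELS))
            children.append(child)
        pop = parents + children
    return min(pop, key=evaluate)
```

Keeping the parents in the next population makes the search elitist, so the best individual found never gets worse across generations.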

5. Forecasting Errors

We compared these forecasting methods with the simpler persistence model used as the reference. The error of each method was measured by the root mean square error (RMSE),

$$\mathrm{RMSE} = \sqrt{ \frac{ \sum_{t=1}^{n} \left( X\mathrm{pred}_t - X\mathrm{real}_t \right)^2 }{ n } } \quad (12)$$
where Xpredt is the predicted value in t; Xrealt is the real value at t; and n is the number of samples.
The improvement of each method in comparison to the persistence model was calculated with the following equation:
$$\text{Improvement vs. persistence } (\%) = \frac{ \mathrm{RMSE}(\text{persistence}) - \mathrm{RMSE}(\text{method}) }{ \mathrm{RMSE}(\text{persistence}) } \times 100 \quad (13)$$
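Both measures are direct to compute; a transcription of the two formulas (the function names are ours):

```python
import math

def rmse(pred, real):
    """Root mean square error over n samples."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, real)) / len(real))

def improvement_vs_persistence(rmse_persistence, rmse_method):
    """Relative improvement (%) of a method over the persistence reference."""
    return 100.0 * (rmse_persistence - rmse_method) / rmse_persistence
```

For instance, with the filtered RMSE 0.649 and persistence RMSE 1.207 of case 10 in Table A1, the improvement is about 46%, matching the tabulated value.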

6. Results of Validation Cases

6.1. Data Description

The wavelet filter was applied to several forecasting methods: regression, neural network, and fuzzy models. Five wind speed series were used in this work: two series with short sampling periods (1″ and 1′, respectively) and three with a sampling period of 10′. Table 2 shows the main statistical characteristics of these time series, and Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5 in Appendix A show their temporal variation.
Several cases were built from these data to observe the performance of the forecasting models when the sampling step (Δt) and the forecasting horizon (FH) changed. An identifier (ID) was assigned to each case in Table 3.

6.2. Results

Each time series was divided into two sets of equal length: a training set and a test set. The forecasting models were built with the training set, and the results presented here were obtained by applying these models to the test set. Eight inputs were used in all methods.
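Building the eight-input samples from a (filtered) series can be sketched as follows; `make_supervised` is an assumed helper name, not code from the paper:

```python
def make_supervised(series, n_lags=8, horizon=1):
    """Turn a filtered wind-speed series into (inputs, target) pairs:
    each sample uses `n_lags` past values to predict `horizon` steps ahead."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return X, y
```

Splitting the resulting pairs in half then gives the equal-length training and test sets used throughout the experiments.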
First, a detailed analysis of wavelet filtering is presented, aiming to establish whether the filters help different prediction methods and whether their effect depends on factors such as the filtering level, the prediction horizon, and the sampling frequency. Afterward, we analyze whether the wavelet family can be selected by any of the methods described in the literature, regardless of the prediction method used. Finally, the application of the approach to a specific case is presented.

6.2.1. Influence of Wavelet Filters in Several Forecasting Methods

The best results obtained by applying the wavelet filters to the time series are presented in Figure 7 and Figure 8. They show that, regardless of the model, the forecasting horizon, or the time step, the performance was much better with the wavelet filter than without it when the optimal wavelet family and level were chosen.
Detailed results are provided in Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6 in Appendix A. It is important to note that the optimal wavelet family and level were different in each case; that is, there was a lot of variability in this respect. This contradicts the widespread practice among researchers of choosing these parameters solely according to the application.

6.2.2. Influence of Decomposition Level

However, there was a great similarity in the performance of the different wavelet families in each case. Each family reached different levels of improvement, but all families achieved their maximum improvement percentage at a similar level. Figure 9 shows the improvement/level rate for a particular case, with different forecasting methods. Figure 10 shows the improvement/level rate of the same case and method for different wavelet families (see Table A10, Table A11, Table A12 and Table A13 for details).
These results explain why researchers can obtain favorable solutions by applying wavelets even when they do not select the wavelet family and level precisely.

6.2.3. Influence of the Forecasting Horizon

The comparisons so far have been expressed in percent because this clearly shows the difference between applying the wavelet filters or not. However, it should be remembered that the error (RMSE) increased with the forecasting horizon, as can be seen in Figure 11, Figure 12 and Figure 13, although less so when the wavelet filters were applied.

6.2.4. Influence of Different Sampling Frequencies

In Figure 14, it can be seen that an important improvement was obtained with low filtering levels, but with high filtering levels, information was lost and the improvement decreased, or the results even worsened substantially at high sampling frequencies.

6.2.5. Selection of Optimal Wavelet Family

In Table 4, the wavelet families found by the cross-correlation, energy, and entropy methods are shown in the columns “cross-corr”, “energy”, and “entropy”, respectively. The column “optimum” shows the wavelet family that gave the best results in our tests.
These methods were applied to the time series, with poor results. The cross-correlation method obtained the correct result in cases 7, 30, and 31; the energy method in cases 1, 5, 13, 15, 18, 25, 27, 32, 33, 34 and 3; and the entropy method in cases 5 and 33.

6.2.6. Applying the Forecasting Approach

The importance of using filtered data is illustrated in the following example. Figure 15 shows the first 300 data points (for clarity of detail) of the original data series of case 22, the series filtered with the wavelet family "dmey" at filter level 2, and the difference between the two series. Figure 16 shows the forecast made with the neural network trained on the filtered training set, and Figure 17 shows the forecast made with the neural network trained on the unfiltered training set. The results using correctly filtered data were considerably better than those with the unfiltered data.

7. Conclusions

In this paper, the forecasting models were applied to the approximation component of the wavelet decomposition and the detail components were discarded, in contrast to most authors, who use both components in their forecasting models.
A thorough analysis of the effect of the wavelet filter on the results was carried out, and its conclusions will enable improvements in all forecasting models.
The wavelet filter method was applied to different forecasting models: regression, neural network, and fuzzy models. In all models, the combined technique (wavelet filter + forecasting model) improved the results compared to using the forecasting model alone. The improvement of these methods versus the persistence method was between 2% and 30% without the filter, but between 20% and 50% with it.
The study was extended to several wavelet families. In all cases, there were improvements, but it was not easy to select the best family; the selection methods from the literature did not work for the proposed approach.
In most cases, the filtering level was more important for obtaining good results than the wavelet family. The optimum level was between 2 and 5 for all wavelet families.
As a final conclusion, it seems necessary to use an optimization algorithm to select the wavelet family and level.
It has become clear that the parameters of the data processing methods are not easy to determine and that they significantly influence the results obtained. Hence, future research will address the joint optimization of the data processing and the forecasting method.

Supplementary Materials

Supplementary materials are available online at https://0-www-mdpi-com.brum.beds.ac.uk/article/10.3390/en14113181/s1.

Author Contributions

Conceptualization, J.A.D.-N.; Methodology, J.A.D.-N.; Software, S.M.V.-B. and T.B.L.-G.; Validation, S.M.V.-B. and T.B.L.-G.; Writing—original draft preparation, J.A.D.-N.; Writing—review and editing, T.B.L.-G.; Supervision, J.A.D.-N. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in supplementary material.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Below are the graphical representations of the measured wind speed data in each time series (Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5).
Figure A1. Time series 1: Measured wind speed data with 1-s sampling.
Figure A2. Time series 2: Measured wind speed data with 1-min sampling.
Figure A3. Time series 3: Measured wind speed data with 10-min sampling.
Figure A4. Time series 4: Measured wind speed data with 10-min sampling.
Figure A5. Time series 5: Measured wind speed data with 10-min sampling.
In each table, FH is the forecasting horizon and Δt is the time step in the look-ahead period; the next columns give the RMSE of the persistence model, of the study model without the wavelet filter (n-Filter), and of the study model with the wavelet filter (Filter); the last columns give the improvement of the model without and with the wavelet filter versus the persistence model, and the selected wavelet family and level.
Table A1. Time series I results of the regressive model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 1 | 1′ | 5″ | 1.255 | 1.133 | 0.761 | 9.71 | 39.35 | mey | 3 |
| 2 | 5′ | 5″ | 1.017 | 1.019 | 0.810 | −0.13 | 20.37 | db2 | 4 |
| 3 | 5′ | 10″ | 1.282 | 1.230 | 0.688 | 4.08 | 46.35 | bior2.8 | 5 |
| 4 | 5′ | 20″ | 1.288 | 1.232 | 0.705 | 4.34 | 45.24 | db3 | 7 |
| 5 | 10′ | 10″ | 1.258 | 1.179 | 0.610 | 6.25 | 51.49 | mey | 4 |
| 6 | 10′ | 20″ | 1.287 | 1.196 | 0.724 | 7.01 | 43.74 | db5 | 6 |
| 7 | 10′ | 30″ | 1.299 | 1.200 | 0.694 | 7.65 | 46.54 | bior5.5 | 6 |
| 8 | 20′ | 30″ | 1.266 | 1.161 | 0.647 | 7.95 | 48.88 | db2 | 6 |
| 9 | 20′ | 1′ | 0.973 | 0.953 | 0.718 | 2.00 | 26.17 | db2 | 7 |
| 10 | 1 h | 1′ | 1.207 | 1.083 | 0.649 | 10.33 | 46.18 | bior3.9 | 7 |
Table A2. Time series I results of the fuzzy model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 1 | 1′ | 5″ | 1.255 | 1.135 | 0.761 | 9.58 | 39.34 | mey | 3 |
| 2 | 5′ | 5″ | 1.017 | 1.001 | 0.718 | 1.57 | 29.37 | db5 | 4 |
| 3 | 5′ | 10″ | 1.282 | 1.183 | 0.630 | 7.76 | 50.85 | mey | 4 |
| 4 | 5′ | 20″ | 1.288 | 1.195 | 0.656 | 7.22 | 49.04 | bior3.9 | 6 |
| 5 | 10′ | 10″ | 1.258 | 1.167 | 0.607 | 7.20 | 51.73 | mey | 4 |
| 6 | 10′ | 20″ | 1.287 | 1.179 | 0.712 | 8.33 | 44.62 | db5 | 6 |
| 7 | 10′ | 30″ | 1.299 | 1.175 | 0.711 | 9.58 | 45.25 | db2 | 6 |
| 8 | 20′ | 30″ | 1.266 | 1.128 | 0.641 | 10.93 | 49.32 | db2 | 6 |
| 9 | 20′ | 1′ | 0.973 | 0.946 | 0.747 | 2.76 | 23.20 | db2 | 7 |
| 10 | 1 h | 1′ | 1.207 | 1.063 | 0.643 | 11.98 | 46.68 | bior3.9 | 7 |
Table A3. Time series I results of the neuronal model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 1 | 1′ | 5″ | 1.255 | 1.141 | 0.812 | 9.08 | 35.30 | mey | 3 |
| 2 | 5′ | 5″ | 1.017 | 1.003 | 0.821 | 1.37 | 19.33 | db2 | 4 |
| 3 | 5′ | 10″ | 1.282 | 1.201 | 0.686 | 6.35 | 46.50 | bior2.8 | 5 |
| 4 | 5′ | 20″ | 1.288 | 1.215 | 0.712 | 5.64 | 44.68 | db3 | 7 |
| 5 | 10′ | 10″ | 1.258 | 1.169 | 0.791 | 7.04 | 37.06 | mey | 4 |
| 6 | 10′ | 20″ | 1.287 | 1.189 | 0.696 | 7.55 | 45.87 | db5 | 6 |
| 7 | 10′ | 30″ | 1.299 | 1.186 | 0.692 | 8.72 | 46.70 | db2 | 6 |
| 8 | 20′ | 30″ | 1.266 | 1.148 | 0.649 | 9.30 | 48.70 | db2 | 6 |
| 9 | 20′ | 1′ | 0.973 | 0.947 | 0.749 | 2.64 | 22.99 | db2 | 7 |
| 10 | 1 h | 1′ | 1.207 | 1.085 | 0.660 | 10.1 | 45.31 | bior3.9 | 7 |
Table A4. Time series II results of the regressive model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 11 | 2 h | 5′ | 0.912 | 0.865 | 0.481 | 5.12 | 47.19 | sym4 | 3 |
| 12 | 2 h | 10′ | 0.925 | 0.834 | 0.385 | 9.77 | 58.29 | coif3 | 4 |
| 13 | 6 h | 10′ | 0.730 | 0.672 | 0.390 | 8.00 | 46.61 | mey | 4 |
| 14 | 6 h | 30′ | 0.647 | 0.632 | 0.444 | 2.29 | 31.34 | db4 | 6 |
| 15 | 12 h | 10′ | 6.897 | 6.482 | 2.794 | 6.01 | 59.49 | mey | 4 |
| 16 | 12 h | 30′ | 7.346 | 6.718 | 3.523 | 8.55 | 52.04 | sym5 | 6 |
| 17 | 12 h | 1 h | 7.968 | 7.254 | 4.245 | 8.95 | 46.73 | coif4 | 7 |
| 18 | 24 h | 30′ | 15.737 | 14.750 | 6.977 | 6.26 | 55.69 | mey | 6 |
| 19 | 24 h | 1 h | 20.241 | 18.256 | 9.592 | 9.80 | 52.61 | bior3.9 | 7 |
| 20 | 24 h | 2 h | 23.967 | 18.942 | 17.484 | 20.96 | 27.05 | db4 | 7 |
Table A5. Time series II results of the fuzzy model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 11 | 2 h | 5′ | 0.912 | 0.865 | 0.481 | 5.12 | 47.19 | sym4 | 3 |
| 12 | 2 h | 10′ | 0.925 | 0.834 | 0.383 | 9.77 | 58.52 | coif3 | 4 |
| 13 | 6 h | 10′ | 0.730 | 0.672 | 0.390 | 8.00 | 46.61 | mey | 4 |
| 14 | 6 h | 30′ | 0.647 | 0.580 | 0.370 | 10.30 | 42.67 | db2 | 6 |
| 15 | 12 h | 10′ | 6.897 | 6.482 | 2.794 | 6.01 | 59.49 | mey | 4 |
| 16 | 12 h | 30′ | 7.346 | 6.491 | 3.213 | 11.63 | 56.26 | sym5 | 6 |
| 17 | 12 h | 1 h | 7.968 | 6.645 | 4.503 | 16.59 | 43.48 | coif5 | 4 |
| 18 | 24 h | 30′ | 15.737 | 12.013 | 8.533 | 23.66 | 45.77 | mey | 6 |
| 19 | 24 h | 1 h | 20.241 | 18.256 | 9.592 | 9.80 | 52.61 | bior3.9 | 7 |
| 20 | 24 h | 2 h | 23.967 | 16.321 | 16.321 | 31.90 | 31.90 | haar | 1 |
Table A6. Time series II results of the neuronal model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 11 | 2 h | 5′ | 0.912 | 0.782 | 0.459 | 14.22 | 49.61 | mey | 3 |
| 12 | 2 h | 10′ | 0.925 | 0.774 | 0.485 | 16.36 | 47.51 | coif3 | 4 |
| 13 | 6 h | 10′ | 0.730 | 0.672 | 0.387 | 7.98 | 46.99 | mey | 4 |
| 14 | 6 h | 30′ | 0.647 | 0.599 | 0.348 | 7.40 | 46.15 | haar | 6 |
| 15 | 12 h | 10′ | 6.897 | 6.633 | 4.139 | 3.82 | 39.99 | mey | 4 |
| 16 | 12 h | 30′ | 7.346 | 6.699 | 3.555 | 8.83 | 51.60 | sym5 | 6 |
| 17 | 12 h | 1 h | 7.968 | 6.822 | 4.232 | 14.40 | 46.89 | coif5 | 7 |
| 18 | 24 h | 30′ | 15.737 | 14.783 | 6.868 | 6.06 | 56.35 | mey | 6 |
| 19 | 24 h | 1 h | 20.241 | 18.224 | 11.197 | 9.96 | 44.68 | bior3.9 | 7 |
| 20 | 24 h | 2 h | 23.967 | 17.161 | 15.545 | 28.39 | 35.14 | haar | 6 |
Table A7. Time series III results of the neuronal model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 21 | 10′ | 10′ | 0.497 | 0.746 | 0.295 | −50.16 | 40.54 | dmey | 2 |
| 22 | 1 h | 1 h | 1.738 | 2.435 | 1.066 | −40.09 | 38.70 | dmey | 2 |
| 23 | 6 h | 6 h | 4.604 | 4.518 | 2.395 | 1.86 | 47.97 | bior6.8 | 1 |
| 24 | 12 h | 12 h | 5.919 | 4.764 | 3.107 | 19.52 | 47.50 | bior6.8 | 1 |
| 25 | 24 h | 24 h | 5.224 | 4.241 | 3.049 | 18.81 | 41.63 | dmey | 1 |
Table A8. Time series IV results of the neuronal model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 26 | 10′ | 10′ | 1.372 | 2.030 | 0.831 | −47.95 | 39.42 | dmey | 2 |
| 27 | 1 h | 1 h | 4.443 | 5.857 | 2.386 | −31.83 | 46.29 | bior6.8 | 1 |
| 28 | 6 h | 6 h | 9.595 | 9.301 | 5.179 | 3.07 | 46.03 | coif3 | 1 |
| 29 | 12 h | 12 h | 11.081 | 9.665 | 5.603 | 12.78 | 49.44 | bior3.9 | 1 |
| 30 | 24 h | 24 h | 11.628 | 9.634 | 5.836 | 17.15 | 49.81 | bior3.9 | 1 |
Table A9. Time series V results of the neuronal model.

| ID | FH | Δt | RMSE Persistence (m/s) | RMSE n-Filter (m/s) | RMSE Filter (m/s) | Improve n-Filter (%) | Improve Filter (%) | Wavelet Family | Level |
|----|----|----|------|------|------|------|------|------|----|
| 31 | 10′ | 10′ | 0.853 | 1.084 | 0.552 | −27.07 | 35.32 | dmey | 2 |
| 32 | 1 h | 1 h | 1.656 | 2.017 | 1.079 | −21.84 | 34.82 | dmey | 1 |
| 33 | 6 h | 6 h | 3.043 | 2.989 | 1.843 | 1.77 | 39.43 | dmey | 1 |
| 34 | 12 h | 12 h | 3.532 | 3.172 | 2.134 | 10.17 | 39.57 | dmey | 1 |
| 35 | 24 h | 24 h | 3.762 | 3.449 | 2.325 | 8.30 | 38.20 | dmey | 2 |
In Table A10, Table A11, Table A12 and Table A13, we can see the effect (in %) of using different wavelet families and different filter levels j; the columns correspond to the filter level and the rows to the wavelet family.
Table A10. Results of the regressive model (%) in case 3.

| Wavelet Family | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Level 6 | Level 7 |
|----|------|------|------|------|------|------|------|
| haar | 4.08 | 5.24 | 9.65 | 15.45 | 14.81 | 6.53 | −64.89 |
| daub 2 | 4.11 | 4.24 | 9.01 | 29.19 | 17.92 | 14.64 | −75.76 |
| daub 3 | 3.95 | 3.97 | 5.26 | 18.66 | 36.62 | 12.20 | −9.48 |
| daub 4 | 3.73 | 3.39 | 6.30 | 24.22 | 45.74 | 8.32 | 10.82 |
| daub 5 | 3.63 | 3.69 | 5.09 | 35.18 | 30.97 | 24.14 | −46.79 |
| symlet 2 | 4.11 | 4.24 | 9.01 | 29.19 | 17.92 | 14.64 | −75.76 |
| symlet 3 | 3.95 | 3.97 | 5.26 | 18.66 | 36.62 | 12.20 | −9.48 |
| symlet 4 | 3.77 | 3.65 | 5.10 | 22.10 | 39.73 | 5.55 | 9.77 |
| symlet 5 | 4.01 | 3.36 | 6.67 | 34.05 | 26.63 | 27.39 | −64.57 |
| coiflet 2 | 3.69 | 3.45 | 5.39 | 20.98 | 42.79 | 6.81 | −6.10 |
| coiflet 3 | 3.68 | 3.44 | 5.36 | 31.13 | 43.06 | 4.89 | −0.01 |
| coiflet 4 | 3.68 | 3.46 | 4.11 | 37.50 | 38.23 | 4.41 | 4.79 |
| coiflet 5 | 3.68 | 3.49 | 4.62 | 32.45 | 31.04 | 4.85 | 7.47 |
| bior 1.3 | 3.60 | 3.82 | 6.05 | 18.12 | 17.83 | 12.25 | −58.89 |
| bior 2.8 | 3.67 | 4.05 | 6.65 | 22.81 | 46.35 | 5.62 | −5.97 |
| bior 3.9 | 3.71 | 3.75 | 3.17 | 36.75 | 27.81 | 22.25 | −42.57 |
| bior 5.5 | 4.04 | 3.48 | 3.60 | 25.45 | 36.22 | 11.22 | −12.70 |
| bior 6.8 | 3.66 | 4.03 | 5.79 | 29.49 | 38.54 | 8.51 | −6.80 |
| meyer | 3.64 | 3.51 | 3.10 | 38.44 | 24.27 | 10.23 | 2.87 |
Table A11. Results of the fuzzy model (%) in case 3.

| Wavelet Family | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Level 6 | Level 7 |
|----|------|------|------|------|------|------|------|
| haar | 7.76 | 8.86 | 9.02 | 15.49 | 14.23 | 9.12 | −63.29 |
| daub 2 | 7.95 | 4.77 | 11.05 | 30.56 | 17.60 | 14.14 | −74.53 |
| daub 3 | 7.73 | 5.45 | 6.07 | 20.51 | 34.33 | 10.21 | −10.20 |
| daub 4 | 6.90 | 4.87 | 5.54 | 27.23 | 43.34 | 5.67 | 10.92 |
| daub 5 | 7.00 | 4.28 | 5.81 | 41.66 | 30.69 | 23.73 | −46.99 |
| symlet 2 | 7.96 | 4.78 | 11.05 | 30.56 | 17.61 | 14.15 | −74.54 |
| symlet 3 | 7.74 | 5.45 | 6.08 | 20.52 | 34.33 | 10.21 | −10.21 |
| symlet 4 | 7.60 | 5.17 | 6.16 | 25.66 | 38.34 | 4.52 | 9.17 |
| symlet 5 | 7.70 | 4.81 | 4.24 | 39.18 | 27.83 | 27.24 | −65.07 |
| coiflet 2 | 7.31 | 4.94 | 5.89 | 22.40 | 42.09 | 5.88 | −6.99 |
| coiflet 3 | 7.50 | 4.91 | 6.17 | 34.25 | 40.49 | 4.94 | 0.67 |
| coiflet 4 | 7.56 | 4.94 | 4.97 | 44.52 | 35.97 | 3.96 | 4.97 |
| coiflet 5 | 7.59 | 4.98 | 5.65 | 40.20 | 30.52 | 4.81 | 7.52 |
| bior 1.3 | 7.21 | 5.41 | 8.07 | 20.66 | 18.05 | 14.06 | −58.78 |
| bior 2.8 | 7.25 | 4.79 | 10.37 | 25.36 | 45.00 | 4.23 | −6.61 |
| bior 3.9 | 7.30 | 5.27 | 3.18 | 44.63 | 26.84 | 21.94 | −42.45 |
| bior 5.5 | 8.08 | 4.56 | 3.79 | 29.16 | 35.24 | 11.25 | −12.30 |
| bior 6.8 | 7.47 | 4.90 | 9.58 | 31.96 | 37.50 | 8.49 | −6.41 |
| meyer | 7.58 | 4.97 | 4.30 | 50.85 | 24.42 | 9.97 | 2.89 |
Table A12. Results of the neuronal model (%) in case 3.

| Wavelet Family | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Level 6 | Level 7 |
|----|------|------|------|------|------|------|------|
| haar | 6.36 | 7.57 | 10.83 | −6.02 | 16.36 | 2.55 | −72.60 |
| daub 2 | 5.51 | −11.23 | 10.80 | 5.28 | 18.06 | −15.27 | −75.44 |
| daub 3 | 5.39 | 4.26 | 7.07 | 20.81 | 36.57 | −9.02 | −10.14 |
| daub 4 | 4.93 | −17.27 | 8.84 | 2.15 | 45.68 | 7.39 | 7.40 |
| daub 5 | 0.45 | 5.26 | 6.60 | 32.97 | 30.57 | 24.66 | −72.56 |
| symlet 2 | 5.51 | −11.23 | 10.80 | 5.28 | 18.06 | −15.27 | −75.44 |
| symlet 3 | 5.39 | 4.26 | 7.07 | 20.81 | 36.57 | −9.02 | −10.14 |
| symlet 4 | −0.31 | −11.46 | 7.34 | −3.02 | 40.19 | −0.76 | 9.63 |
| symlet 5 | 6.06 | 4.55 | 8.17 | 32.26 | 27.29 | 27.35 | −65.68 |
| coiflet 2 | 5.25 | −3.80 | 8.25 | 20.86 | 3.66 | 6.88 | −9.63 |
| coiflet 3 | 1.48 | −20.58 | 6.45 | 1.17 | 6.55 | 5.03 | 1.96 |
| coiflet 4 | 5.29 | −20.46 | −0.43 | 34.05 | 39.66 | 4.40 | 4.62 |
| coiflet 5 | −1.08 | −22.19 | 6.00 | 31.33 | 33.74 | 4.78 | 7.23 |
| bior 1.3 | −1.23 | −15.93 | 8.15 | −4.01 | 17.46 | −2.32 | −58.89 |
| bior 2.8 | 4.93 | 5.57 | 8.59 | 6.59 | 46.50 | 5.48 | −15.90 |
| bior 3.9 | 5.16 | 4.41 | −16.52 | 4.35 | 28.43 | 4.82 | −39.62 |
| bior 5.5 | 5.05 | −16.06 | 5.79 | 12.66 | 36.29 | 11.17 | −10.82 |
| bior 6.8 | 5.06 | 5.26 | 8.12 | 31.26 | 38.99 | 0.41 | −7.17 |
| meyer | −3.08 | −2.65 | 5.39 | 34.19 | 25.64 | 3.52 | 2.10 |
Table A13. Results of the neural model (%) in cases 22 and 31.

Wavelet Family | Case 22: Level 1 | Level 2 | Level 3 | Level 4 | Case 31: Level 1 | Level 2 | Level 3 | Level 4
haar | −30.73 | −25.46 | −42.13 | −30.73 | −19.06 | −4.92 | −10.73 | −32.24
daub 2 | −26.23 | −238.57 | −19.91 | −26.23 | −15.72 | 7.38 | 1.90 | −21.62
daub 3 | −15.85 | 7.02 | −12.60 | −15.85 | −9.72 | 15.53 | 7.77 | −15.68
daub 4 | −4.14 | 13.55 | −5.95 | −4.14 | −5.77 | 23.39 | 10.34 | −14.27
daub 5 | −10.55 | 21.28 | −5.52 | −10.55 | −2.14 | 26.52 | 12.67 | −13.74
symlet 2 | −25.98 | −6.49 | −19.21 | −25.98 | −14.49 | 7.66 | 2.12 | −21.49
symlet 3 | −17.80 | 6.35 | −12.78 | −17.80 | −10.13 | 14.85 | 7.11 | −15.78
symlet 4 | 16.53 | 15.51 | −9.61 | 16.53 | −4.92 | 22.03 | 11.38 | −13.86
symlet 5 | −2.92 | 24.84 | −1.95 | −2.92 | −0.63 | 26.50 | 13.43 | −13.62
coiflet 2 | 30.71 | 33.45 | −5.18 | 30.71 | −5.05 | 23.13 | 10.78 | −13.54
coiflet 3 | −2.42 | 27.34 | −0.32 | −2.42 | 3.30 | 29.61 | 14.71 | −12.76
coiflet 4 | 6.49 | 33.27 | −1.69 | 6.49 | 10.19 | 33.42 | 14.54 | −12.35
coiflet 5 | 12.55 | 35.62 | 1.93 | 12.55 | 14.20 | 34.52 | 15.70 | −11.70
bior 1.3 | −34.18 | −27.20 | −41.09 | −34.18 | | | |
bior 2.8 | −23.50 | 4.12 | −9.08 | −23.50 | | | |
bior 3.9 | −7.46 | 26.02 | −3.80 | −7.46 | | | |
bior 5.5 | 6.31 | 29.96 | −3.36 | 6.31 | | | |
bior 6.8 | 4.43 | 28.93 | 0.26 | 4.43 | | | |
meyer | 34.12 | 38.70 | 2.48 | 34.12 | | | |
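All appendix tables report accuracy as the percentage improvement in RMSE over the persistence model, so negative entries mean the filtered forecast performs worse than persistence. A minimal sketch of that metric, under the assumed (standard) convention that persistence predicts the next value as the current one; this is an illustration, not the authors' code:

```python
import numpy as np

def rmse(actual, forecast):
    """Root-mean-square error between two series."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

def improvement_vs_persistence(actual, forecast):
    """Percentage RMSE improvement of a one-step-ahead forecast over the
    persistence model, which predicts the next value as the current one.
    Positive values mean the model beats persistence."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    persistence = actual[:-1]        # persistence forecast for t+1 is the value at t
    target = actual[1:]              # realized values
    e_model = rmse(target, forecast[1:])
    e_pers = rmse(target, persistence)
    return 100.0 * (e_pers - e_model) / e_pers
```

A perfect forecast gives 100%; a forecast identical to persistence gives 0%.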

References

  1. Brown, B.G.; Katz, R.W.; Murphy, A.H. Time Series Models to Simulate and Forecast Wind Speed and Wind Power. J. Clim. Appl. Meteorol. 1984, 23, 1184–1195.
  2. Torres, J.; García, A.; de Blas, M.; de Francisco, A. Forecast of hourly average wind speed with ARMA models in Navarre (Spain). Sol. Energy 2005, 79, 65–77.
  3. Bossanyi, E.A. Stochastic Wind Prediction for Wind Turbine System Control. In Proceedings of the 7th BWEA Wind Energy Conference, Oxford, UK, 27–29 March 1985.
  4. Hill, D.C.; McMillan, D.; Bell, K.R.W.; Infield, D. Application of Auto-Regressive models to U.K. wind speed data for power system impact studies. IEEE Trans. Sustain. Energy 2012, 3, 134–141.
  5. Bossanyi, E.A. Short-Term Wind Prediction using Kalman Filters. Wind Eng. 1985, 9, 1–8.
  6. Djurovic, M.; Stankovic, L. Prediction of Wind Characteristics in Short-Term Periods. In Energy and the Environment into the 1990s, Proceedings of the 1st World Renewable Energy Congress, Reading, UK, 23–28 September 1990; Pergamon Press: Oxford, UK; New York, NY, USA, 1990.
  7. Beyer, H.G. Short-Term Prediction of Wind-Speed and Power Output of a Wind Turbine with Neural Networks. In Proceedings of the 5th European Community Wind Energy Conference, ECWEC’94, Thessaloniki, Greece, 10–14 October 1994.
  8. Alexiadis, M.C.; Dokopoulos, P.S.; Sahsamanoglou, H.S. Wind speed and power forecasting based on spatial correlation models. IEEE Trans. Energy Convers. 1999, 14, 836–842.
  9. Flores, P.; Tapia, A.; Tapia, G. Application of a control algorithm for wind speed prediction and active power generation. Renew. Energy 2005, 30, 523–536.
  10. Sideratos, G.; Hatziargyriou, N.D. An Advanced Statistical Method for Wind Power Forecasting. IEEE Trans. Power Syst. 2007, 22, 258–265.
  11. Potter, C.W.; Negnevitsky, M. Very short-term wind forecasting for Tasmanian power generation. IEEE Trans. Power Syst. 2006, 21, 965–972.
  12. Sfetsos, A. A comparison of various forecasting techniques applied to mean hourly wind speed time series. Renew. Energy 2000, 21, 23–35.
  13. Liu, H.; Chen, C. Data processing strategies in wind energy forecasting models and applications: A comprehensive review. Appl. Energy 2019, 249, 392–408.
  14. Qian, Z.; Pei, Y.; Zareipour, H.; Chen, N. A review and discussion of decomposition based hybrid models for wind energy forecasting applications. Appl. Energy 2019, 235, 939–953.
  15. Huang, N.T.; Xing, E.K.; Cai, G.W.; Yu, Z.Y.; Qi, B.; Lin, L. Short-term wind speed forecasting based on low redundancy feature selection. Energies 2018, 11, 1638.
  16. Burrus, C.S.; Gopinath, R.A.; Guo, H. Introduction to Wavelets and Wavelet Transforms; Prentice Hall: Upper Saddle River, NJ, USA, 1998.
  17. Catalão, J.P.S.; Pousinho, H.M.I.; Mendes, V.M.F. Hybrid Wavelet-PSO-ANFIS Approach for Short-Term Wind Power Forecasting in Portugal. IEEE Trans. Sustain. Energy 2010, 2, 50–59.
  18. Liu, H.; Tian, H.-Q.; Pan, D.-F.; Li, Y.-F. Forecasting models for wind speed using wavelet, wavelet packet, time series and Artificial Neural Networks. Appl. Energy 2013, 107, 191–208.
  19. Cao, L.; Li, R. Short-term wind speed forecasting model for wind farm based on wavelet decomposition. In Proceedings of the Third International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, DRPT 2008, Nanjing, China, 6–9 April 2008; pp. 2525–2529.
  20. Yao, C.; Yu, Y. A Hybrid Model to Forecast Wind Speed Based on Wavelet and Neural Network. In Proceedings of the International Conference on Control, Automation and Systems Engineering (CASE), Singapore, 30–31 July 2011; pp. 1–4.
  21. Khan, A.A.; Shahidehpour, M. One day ahead wind speed forecasting using wavelets. In Proceedings of the Power Systems Conference and Exposition PSCE ‘09, Seattle, WA, USA, 15–18 March 2009; pp. 1–5.
  22. Mishra, S.; Sharma, A.; Panda, G. Wind power forecasting model using complex wavelet theory. In Proceedings of the International Conference on Energy, Automation, and Signal (ICEAS), Bhubaneswar, India, 28–30 December 2011; pp. 1–4.
  23. Tong, J.-L.; Zhao, Z.-B.; Zhang, W.-Y. A New Strategy for Wind Speed Forecasting Based on Autoregression and Wavelet Transform. In Proceedings of the 2nd International Conference on Remote Sensing, Environment and Transportation Engineering (RSETE), Nanjing, China, 1–3 June 2012; pp. 1–4.
  24. Li, Y.F.; Wu, H.P.; Liu, H. Multi-step wind speed forecasting using EWT decomposition, LSTM principal computing, RELM sub-ordinate computing and IEWT reconstruction. Energy Convers. Manag. 2018, 167, 203–219.
  25. Liu, H.; Duan, Z.; Han, F.-Z.; Li, Y.-F. Big multi-step wind speed forecasting model based on secondary decomposition, ensemble method and error correction algorithm. Energy Convers. Manag. 2018, 156, 525–541.
  26. Liu, H.; Mi, X.; Li, Y. Comparison of two new intelligent wind speed forecasting approaches based on Wavelet Packet Decomposition, Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Artificial Neural Networks. Energy Convers. Manag. 2018, 155, 188–200.
  27. Chitsaz, H.; Amjady, N.; Zareipour, H. Wind power forecast using wavelet neural network trained by improved Clonal selection algorithm. Energy Convers. Manag. 2015, 89, 588–598.
  28. Esfetang, N.N.; Kazemzadeh, R. A novel hybrid technique for prediction of electric power generation in wind farms based on WIPSO, neural network and wavelet transform. Energy 2018, 149, 662–674.
  29. Zhang, Y.; Zhang, C.; Gao, S.; Wang, P.; Xie, F.; Cheng, P.; Lei, S. Wind Speed Prediction Using Wavelet Decomposition Based on Lorenz Disturbance Model. IETE J. Res. 2018, 66, 635–642.
  30. Hu, J.; Wang, J.; Ma, K. A hybrid technique for short-term wind speed prediction. Energy 2015, 81, 563–574.
  31. Wang, J.-Z.; Wang, Y.; Jiang, P. The study and application of a novel hybrid forecasting model—A case study of wind speed forecasting in China. Appl. Energy 2015, 143, 472–488.
  32. Zhang, W.; Wang, J.; Wang, J.; Zhao, Z.; Tian, M. Short-term wind speed forecasting based on a hybrid model. Appl. Soft Comput. 2013, 13, 3225–3233.
  33. Liu, D.; Niu, D.; Wang, H.; Fan, L. Short-term wind speed forecasting using wavelet transform and support vector machines optimized by genetic algorithm. Renew. Energy 2014, 62, 592–597.
  34. Zhang, Y.; Li, R.; Zhang, J. Optimization scheme of wind energy prediction based on artificial intelligence. Environ. Sci. Pollut. Res. 2021, 1–16.
  35. Yu, C.; Li, Y.; Zhang, M. An improved Wavelet Transform using Singular Spectrum Analysis for wind speed forecasting based on Elman Neural Network. Energy Convers. Manag. 2017, 148, 895–904.
  36. Mi, X.-W.; Liu, H.; Li, Y.-F. Wind speed forecasting method using wavelet, extreme learning machine and outlier correction algorithm. Energy Convers. Manag. 2017, 151, 709–722.
  37. Cheng, L.; Zang, H.; Ding, T.; Sun, R.; Wang, M.; Wei, Z.; Sun, G. Ensemble Recurrent Neural Network Based Probabilistic Wind Speed Forecasting Approach. Energies 2018, 11, 1958.
  38. Ma, X.; Zhou, C.; Kemp, I.J. Automated wavelet selection and thresholding for PD detection. IEEE Electr. Insul. Mag. 2002, 18, 37–45.
  39. Donoho, D. De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627.
  40. Coifman, R.; Wickerhauser, M. Entropy-based algorithms for best basis selection. IEEE Trans. Inf. Theory 1992, 38, 713–718.
  41. Draper, N.R.; Smith, H. Applied Regression Analysis; John Wiley: Hoboken, NJ, USA, 1998.
  42. Tsoukalas, L.H.; Uhrig, R.E. Fuzzy and Neural Approaches in Engineering; Wiley-Blackwell: Hoboken, NJ, USA, 1997.
  43. Jang, J.S.R. ANFIS: Adaptive-Network-Based Fuzzy Inference System. IEEE Trans. Syst. Man Cybern. 1993, 23, 665–685.
Figure 1. Translating and scaling operations of Ψ(2jtk) function.
Figure 2. Haar packet. (a) Haar scaling function, φ(t); (b) Haar wavelet function, Ψ(t).
Figure 3. Decomposition process.
Figure 4. Iterative decomposition.
Figure 5. Approximation and details of a signal. (a) Original signal; (b) Approximation c2(k) and details d2(k), d1(k) and d0(k).
Figure 6. Flow chart of the forecasting approach.
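The decomposition of Figures 3 and 4 and the filtering step of the forecasting approach (Figure 6) can be sketched for the simplest family, the Haar wavelet of Figure 2. This is an illustrative reconstruction, not the authors' implementation; it assumes an input length divisible by 2^level and removes the high-frequency details d(k), keeping only the low-frequency approximation that the forecasting models are then trained on:

```python
import numpy as np

# Haar analysis filters (Figure 2): scaling (low-pass) and wavelet (high-pass)
H0 = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass
H1 = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass

def haar_step(x):
    """One decomposition step (Figure 3): filter and downsample by 2,
    returning approximation coefficients c(k) and details d(k)."""
    pairs = np.asarray(x, dtype=float).reshape(-1, 2)  # assumes even length
    return pairs @ H0, pairs @ H1

def haar_filter(x, level):
    """Iterative decomposition (Figure 4): keep only the level-th
    approximation, discard all details, then reconstruct a smoothed
    series of the original length."""
    c = np.asarray(x, dtype=float)
    for _ in range(level):
        c, _ = haar_step(c)
    # Inverse transform with details set to zero reduces, for Haar,
    # to repeated upsampling with the low-pass synthesis filter.
    for _ in range(level):
        c = np.repeat(c, 2) / np.sqrt(2.0)
    return c
```

The experiments in the paper use the full set of families in Table 4 (Daubechies, symlets, coiflets, biorthogonal, Meyer), whose filters are longer than Haar's but plug into the same decompose-discard-reconstruct scheme.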
Figure 7. Time series I: Improvement versus persistence.
Figure 8. Time series II: Improvement versus persistence.
Figure 9. Case 3: Improvement versus level for the best family.
Figure 10. Case 31: Improvement percentage versus level for all families with neural model.
Figure 11. Time series III: (a) Improvement percentage versus persistence; (b) RMSE.
Figure 12. Time series IV: (a) Improvement percentage versus persistence; (b) RMSE.
Figure 13. Time series V: (a) Improvement percentage versus persistence; (b) RMSE.
Figure 14. Different sampling frequencies in series III using the dmey wavelet filter.
Figure 15. Wavelet filtered process in case 22.
Figure 16. Forecasting result for case 22 with the neural network method trained with filtered data.
Figure 17. Forecasting result for case 22 with the neural network method trained with unfiltered data.
Table 1. Review of forecasting methods.

Authors | Forecast Method | Forecast Horizon | Improvement vs. Persistence Model
[1] | AR | -- | --
[2] | ARMA | 1–10 h | 12–20%
[3] | ARMA | 20″–10″ | 5–12%
[4] | ARMA | 6 h | 19%
[5] | Kalman filter | 1′ | 4–10%
[6] | Kalman filter | -- | --
[7] | ANN | 1′–10′ | 11–8%
[8] | Spatial/ANN | 2–2 h | 15–25%
[9] | ANN | 1′–1 h | --
[10] | Neuro-fuzzy | 6–36 h | 46%
[11] | Neuro-fuzzy | 10′ | 5%
Table 2. Time series description.

Time Series | Mean | Standard Deviation | Maximum | Skewness | Kurtosis | Size | Sampling Frequency
I | 5.281 | 1.862 | 12.597 | 0.643 | 3.139 | 1 month | 1″
II | 4.951 | 2.313 | 19.726 | 1.212 | 5.467 | 1 month | 1′
III | 8.258 | 4.709 | 34.690 | 1.115 | 4.815 | 2 years | 10′
IV | 8.308 | 9.883 | 30.000 | 1.124 | 2.873 | 2 years | 10′
V | 6.848 | 3.548 | 27.020 | 0.684 | 3.406 | 2 years | 10′
Table 3. Case description.

ID | Time Series | Forecast Horizon (FH) | Δt
1 | I | 1′ | 5″
2 | I | 5′ | 5″
3 | I | 5′ | 10″
4 | I | 5′ | 20″
5 | I | 10′ | 10″
6 | I | 10′ | 20″
7 | I | 10′ | 30″
8 | I | 20′ | 30″
9 | I | 20′ | 1′
10 | I | 1 h | 1′
11 | II | 2 h | 5′
12 | II | 2 h | 10′
13 | II | 6 h | 10′
14 | II | 6 h | 30′
15 | II | 12 h | 10′
16 | II | 12 h | 30′
17 | II | 12 h | 1 h
18 | II | 24 h | 30′
19 | II | 24 h | 1 h
20 | II | 24 h | 2 h
21 | III | 10′ | 10′
22 | III | 1 h | 1 h
23 | III | 6 h | 6 h
24 | III | 12 h | 12 h
25 | III | 24 h | 24 h
26 | IV | 10′ | 10′
27 | IV | 1 h | 1 h
28 | IV | 6 h | 6 h
29 | IV | 12 h | 12 h
30 | IV | 24 h | 24 h
31 | V | 10′ | 10′
32 | V | 1 h | 1 h
33 | V | 6 h | 6 h
34 | V | 12 h | 12 h
35 | V | 24 h | 24 h
Table 4. Selection of wavelet family.

Case | Optimum | Cross-Corr | Energy | Entropy
1 | dmey | Bior5.5 | dmey | sym5
2 | db2 | Bior5.5 | dmey | sym5
3 | Bior2.8 | Bior5.5 | dmey | dmey
4 | db3 | Bior5.5 | dmey | db2
5 | dmey | Bior5.5 | dmey | dmey
6 | db5 | Bior5.5 | dmey | db2
7 | Bior5.5 | Bior5.5 | dmey | Bior1.3
8 | db2 | Bior5.5 | dmey | Bior1.3
9 | db2 | Bior5.5 | dmey | db5
10 | Bior3.9 | Bior5.5 | dmey | db5
11 | sym4 | Bior5.5 | dmey | Bior1.3
12 | coif3 | Bior5.5 | dmey | sym4
13 | dmey | Bior5.5 | dmey | sym4
14 | db4 | Bior5.5 | dmey | Bior1.3
15 | dmey | Bior5.5 | dmey | sym4
16 | sym5 | Bior5.5 | dmey | Bior1.3
17 | coif4 | Bior5.5 | Bior6.8 | sym3
18 | dmey | Bior5.5 | dmey | Bior1.3
19 | Bior3.9 | Bior5.5 | Bior6.8 | sym3
20 | db4 | Bior5.5 | Bior6.8 | coif5
21 | dmey | Haar-bior1.3 | Bior6.8 | Bior1.3
22 | dmey | Haar-bior1.3 | Bior6.8 | Bior1.3
23 | Bior6.8 | Bior5.5 | dmey | Bior1.3
24 | Bior6.8 | Bior5.5 | dmey | Bior1.3
25 | dmey | Bior5.5 | dmey | Bior1.3
26 | dmey | Haar-bior1.3 | Bior6.8 | Bior1.3
27 | dmey | Haar-bior1.3 | dmey | Bior1.3
28 | Bior6.8 | Bior5.5 | dmey | Bior1.3
29 | coif3 | Bior5.5 | dmey | Bior1.3
30 | Bior5.5 | Bior5.5 | dmey | db2-sym2
31 | Bior5.5 | Bior5.5 | Bior5.5 | Bior1.3
32 | dmey | Bior5.5 | dmey | Bior1.3
33 | dmey | Bior5.5 | dmey | dmey
34 | dmey | Bior5.5 | dmey | Bior2.8
35 | dmey | Bior5.5 | dmey | db2-sym2
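Table 4 compares the family selected by exhaustive search (Optimum) against three cheaper selection criteria. The entropy and energy criteria can be sketched as follows; this is an assumed formulation based on the entropy measure of Coifman and Wickerhauser [40] and a retained-energy ratio, not the paper's exact procedure:

```python
import numpy as np

def shannon_entropy(coeffs):
    """Entropy criterion [40]: treat the normalized squared wavelet
    coefficients as a probability distribution. A lower value means the
    family concentrates the signal energy in fewer coefficients."""
    c = np.asarray(coeffs, dtype=float)
    p = c ** 2 / np.sum(c ** 2)
    p = p[p > 0]                         # convention: 0 * log(0) = 0
    return float(-np.sum(p * np.log(p)))

def energy_ratio(approximation, original):
    """Energy criterion: fraction of the original signal energy that
    survives in the low-frequency approximation after filtering."""
    a = np.asarray(approximation, dtype=float)
    x = np.asarray(original, dtype=float)
    return float(np.sum(a ** 2) / np.sum(x ** 2))

def cross_correlation(filtered, original):
    """Cross-correlation criterion: Pearson correlation between the
    filtered series and the raw measurements."""
    return float(np.corrcoef(filtered, original)[0, 1])
```

Each criterion would be evaluated for every candidate family (haar, daub, symlet, coiflet, bior, meyer) and the best-scoring family kept; as Table 4 shows, the three criteria often disagree with the exhaustive optimum, which supports the paper's point that the family and filter level must be studied for each series.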


Domínguez-Navarro, J.A.; Lopez-Garcia, T.B.; Valdivia-Bautista, S.M. Applying Wavelet Filters in Wind Forecasting Methods. Energies 2021, 14, 3181. https://0-doi-org.brum.beds.ac.uk/10.3390/en14113181
