Review

Prospective Methodologies in Hybrid Renewable Energy Systems for Energy Prediction Using Artificial Neural Networks

1 Department of Computer Science and Engineering, Jatiya Kabi Kazi Nazrul Islam University, Trishal, Mymensingh 2224, Bangladesh
2 Institute of Sustainable Energy, Universiti Tenaga Nasional (The National Energy University), Jalan IKRAM-UNITEN, Kajang 43000, Selangor, Malaysia
3 Department of Electronics and Telecommunication Engineering, Bangabandhu Sheikh Mujibur Rahman Science & Technology University, Gopalganj 8100, Bangladesh
4 Center for Cyber Security, School of Information Science and Technology, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Selangor, Malaysia
* Authors to whom correspondence should be addressed.
Sustainability 2021, 13(4), 2393; https://0-doi-org.brum.beds.ac.uk/10.3390/su13042393
Submission received: 28 November 2020 / Revised: 28 December 2020 / Accepted: 29 December 2020 / Published: 23 February 2021

Abstract

This paper presents a comprehensive review of machine learning (ML) based approaches, especially artificial neural networks (ANNs), for time-series data prediction problems. According to the literature, around 80% of the world's total energy demand is supplied either through fuel-based sources such as oil, gas, and coal or through nuclear sources. The literature also shows that a shortage of fossil fuels is inevitable and that the world will face this problem sooner or later. Moreover, remote and rural areas that cannot be reached by traditional grid electricity need alternative sources of energy. A hybrid renewable energy system (HRES) combining different renewable resources can be used to supply sustainable power in these areas. The uncertain nature of renewable energy resources and the ability of neural networks to process complex time-series inputs have motivated the use of ANN methods in renewable energy forecasting. Thus, this study reviews the different data-driven ANN models that can provide accurate predictions of renewable energy generation, such as solar, wind, or hydro power. Various refined neural network architectures, such as the multi-layer perceptron (MLP), recurrent neural network (RNN), and convolutional neural network (CNN), as well as long short-term memory (LSTM) models, have been applied to renewable energy forecasting. These models are able to perform short-term time-series prediction for renewable energy sources and to exploit prior information that influences future values.

1. Introduction

In modern life, energy plays a vital role in industrialization, urbanization, and the economic development of a country [1,2]. According to a briefing of the US EIA [3], global energy usage is growing by approximately 2.3% per annum. From the study [4], it is estimated that the world will soon face a petroleum shortage. Fossil fuels are also the main source of environmental pollution [5], including air [6] and water pollution [7]. Hence, renewable energy (RE) sources such as geothermal, wind, solar, and biomass have gained considerable attention as alternative sources for energy development [8,9]. According to the study of Kumar [6], renewable energy sources are abundant, sustainable, and pollution-free. Figure 1 shows world energy consumption by fuel type, such as renewables, nuclear, natural gas, coal, and oil. It is clear from the figure that fossil fuels, including oil, coal, and natural gas, are widely used as a reliable source of energy and provide approximately 80% of the total demand worldwide [10].
Since remote and rural areas often cannot be reached by grid power, alternative sources of energy are needed for electrification in those areas. Thus, a hybrid renewable energy system (HRES) [11], an example of which is shown in Figure 2, is used to improve electrical power production and to provide electricity in rural areas [12], while hydrogen can also be produced from RE sources. A hybrid system can address and overcome the limitations of renewable energy resources in terms of efficiency, reliability, and economics [13]. Several ASEAN countries, especially Malaysia, Thailand, Indonesia, the Philippines, and Vietnam, have recently committed to green and sustainable energy deployment [14]. Several researchers have presented HRES models combining renewable energies such as wind-biomass [15], hydro-solar [16], wind-PV-hydro [17], PV-biomass [18], or wind-PV-biomass [19] to plan power plants in stand-alone [20,21] or grid-connected modes [22,23]. Many methodologies have been applied to optimize hybrid systems, including multi-objective optimization [24], artificial intelligence [25], iterative techniques [26], probabilistic methods [25], and so on. As an example, Bao et al. [27] reviewed the development of the entire energy system for an isolated microgrid, mainly for hybrid renewable-hydrogen energy systems, including the energy management system, sizing optimization, and the maximum power point tracking controller. The authors in [28] proposed an optimal hybrid model to predict electricity consumption based on seasonal changes. The proposed model used adaptive fuzzy network, feed-forward neural network, and autoregressive integrated moving average methods to forecast power consumption.
Nowadays, machine learning (ML) methods have rapidly expanded into various renewable energy related applications [29,30,31], such as energy generation and integration [32] and consumption and demand analysis [33,34]. An ANN-based model with a multilayer perceptron and the error back-propagation algorithm has been developed to estimate the mixed energy demand in South Korea [35]. This model used the scaled feed-forward back-propagation (BFGS) method, evaluated in terms of the root-mean-square error (RMSE), to predict the peak energy demand from 2018 to 2023 based on several economic parameters, such as gross domestic product (GDP), population, and import and export amounts. The authors of [36] presented the Harmony-Search Transport Energy Demand Estimation (HASTEDE) model for estimating transport energy demand in Turkey. The HASTEDE model applied heuristic information on population, GDP, and vehicle ownership to project future energy consumption until 2025. Figure 3 demonstrates the growth of research on energy systems, as well as the different subject areas that have used ML [37], over the last two decades. Frequently used ML models in the energy sector are the artificial neural network (ANN), extreme learning machine (ELM), support vector machine (SVM), adaptive neuro-fuzzy inference system (ANFIS), deep learning, decision trees, advanced hybrid ML models, ensembles, etc. [38,39]. The irregular behavior of RE resources and the capability of ANNs to process complex data have led to the use of ANN techniques for renewable energy prediction [40,41].
Several studies of ANN models for renewable energy forecasting or prediction [42,43,44,45], including solar irradiance [46,47] and solar energy prediction [48,49], wind power predictions [50,51] as well as hydropower prediction [52,53] are summarized in Table 1.
James Mubiru [54] developed a prediction model using an ANN to forecast daily average solar radiation. Ahmad et al. [55] presented a system for forecasting hourly solar irradiation using autoregressive recurrent neural networks. Kazem et al. [56] proposed a system for predicting the electricity production of a PV system using machine learning techniques, including feed-forward networks, self-organizing feature maps, and support vector machines. Another design of a solar energy system using MLP and autoregressive neural methods was presented by Loutfi [57]. The model could predict solar radiation on an hourly basis from very simple data such as temperature and humidity. Researchers in [58] presented a novel deep-learning model based on long short-term memory (LSTM) for predicting the generation of PV systems.
Table 1. Literature concerning the artificial neural network (ANN) methods of power forecasting.
Author | RE Sources | Methods | Prediction Outputs | Inputs
Fouzi [58] | Solar Energy | LSTM-based deep learning approach | Short-term prediction of solar energy | Time-series meteorological data, such as irradiance, temperature, and wind speed
Amith [59] | Solar Energy | ANN-based prediction models | Hourly prediction of the PV system's power | Environmental parameters, e.g., solar irradiance, air humidity, temperature, wind direction/speed, PV surface temperature
James Mubiru [54] | Solar Energy | Feedforward neural network | Monthly solar radiation predicted from daily average radiation | Daily solar irradiation, sunshine duration, temperature, latitude, and longitude
André Gensler [60] | Solar Energy | Deep learning and ANN algorithms, such as LSTM, Auto Encoder, and Deep Belief Networks | Solar power forecasting | Daily average solar irradiation, sunshine hours, temperature, location, etc.
Daniel [61] | Solar Energy | ANN model | Solar power prediction | Time, direct beam solar irradiance, total solar irradiance, power produced by the solar panel
Rui Zhang [62] | Solar Energy | Deep convolutional neural networks | Forecasting solar energy generation | Latitude, longitude, altitude, and time; temperature, humidity, moisture, wind velocity, etc.
Sumanta [63] | Wind Power | Single-step and multi-step RNN | Wind speed prediction on daily to monthly horizons | Historical hourly data on wind speed and wind direction for 15 years
Alok [64] | Wind Power | BP neural network | Future prediction of wind speed | 1500 daily wind-speed patterns
Alla [65] | Wind Power | Deep learning approach, feedforward ANN, linear regression | Prediction of wind energy 5 to 30 min ahead | Wind speed
Jaume [66] | Wind Power | Deep neural network | Wind speed prediction | Wind direction, wind speed, temperature, air pressure, etc.
Senthil [67] | Wind Power | BP network, RBF network, and NARX models | Wind speed prediction | Three years of time-series weather data at 15-min intervals: radiation, wind direction and speed, temperature, humidity, reflected radiation, etc.
Ummuhan [68] | Wind Power | Multivariable ANN-based model | Prediction of wind velocity (speed) | Temperature, wind direction and speed, and air pressure
Ali [69] | Hydropower | ANN: feed-forward neural networks (FFNNs) | Prediction of power generation | Water flow rate and net turbine head
Ichiyauagi [70] | Hydropower | Multi-layer perceptron (MLP) | Prediction of the river flow rate | River flow rate, rainfall amounts, overall rainfall volume and duration
Stokelj [71] | Hydropower | Multi-layer perceptron (MLP) | Forecasting natural water inflow up to 6 h ahead | Precipitation in the last 15 min, last hour, last 2 or 4 h, current water inflow, natural inflow 8 h ago
Murat [72] | Hydropower | Feedforward neural network trained with the Levenberg–Marquardt algorithm | Prediction of annual average hydroelectric energy | Inflow series, irrigation water requirements, evaporation rates, turbine running-time ratio, and the coefficient C
Marcio [73] | Hydropower | MLP using the BP algorithm | Forecasting monthly hydropower generation | Daily rainfall data
Wind energy can be predicted either by utilizing weather forecast data or by using historical time series. Senthil Kumar [67] presented three different ANN-based models for wind speed prediction: a back-propagation network (BPN), a non-linear auto-regressive model with exogenous inputs (NARX), and a radial basis function (RBF) network. Recurrent neural networks (RNNs) with the LSTM optimization algorithm were also used to predict wind speed on daily and monthly bases [68], and a univariate single-layer RNN was finally recommended for wind speed forecasting. Jaume et al. [66] proposed deep learning approaches for wind speed prediction and applied different structures, including MLP, CNN, and RNN, to a large wind dataset. The study performed a comparative analysis of the obtained results and recommended the deep-learning model for wind speed and power forecasting.
The advent of machine learning has also created an opportunity for forecasting hydropower generation, enabling estimation of daily, monthly, yearly, and average hydroelectric energy. Ali [69] presented ANN-based models using a feed-forward network with a back-propagation procedure to predict the hydropower energy at the Himreen Lake dam (Diyala) station. Shaw [74] used high-fidelity models as well as optimization methods and surrogate modeling techniques for an hourly power generation scheme, which was emulated by genetic algorithm and artificial neural network approaches. The authors in [72] presented a single-hidden-layer MLP trained with the Levenberg–Marquardt algorithm to provide an accurate annual prediction of the hydroelectric energy from an irrigation dam.
This paper aims to present ANN-based methodologies for power prediction in hybrid renewable energy systems. The study also highlights the utilization of AI approaches at different stages of a renewable energy-integrated system. The paper reviews different ANN models, such as MLP, CNN, and RNN, that can be employed to manage hybrid energy systems and renewable energy generation efficiently.

2. Machine Learning in HRES

Machine learning (ML) techniques have been employed for efficient energy management in both the generation and demand sectors. ML methods can be utilized in either stand-alone or grid-connected renewable resources, depending on the requirements and the characteristics of the problem at hand. Figure 4 presents the sectors in which machine learning methods can be used for power prediction, demand forecasting, management of renewable energy systems, and enhancing system performance. The major uses of ML methods in HRES are briefly described in the following subsections:
(i)
Predicting the Output Generation of Renewable Energy. Forecasting energy generation is a vital issue for renewable energy sources, and machine learning, as a forecasting tool, plays an important role in this context [75,76]. Solar or wind energy can be predicted using historical data. Achieving accurate predictions is challenging because these energy sources depend strongly on environmental conditions [77]. This study presents different neural network techniques that are used to predict the output of renewable energies.
(ii)
Specifying the Geographical Location, Configuration, and Sizing of Renewable Power Plants. Optimal sizing of renewable power plants is a challenging task in HRES. The location of an energy plant and other parameters depend on different factors, such as weather, terrain, resource availability, and cost. Also, unlike fossil fuel plants, renewable energy plants require considerable space to operate. Hence, it is essential to specify the size and analyze the location with respect to, e.g., weather data, humidity, temperature, wind speed, and irradiation [78]. Machine learning techniques can assist in these decision-making steps [32].
(iii)
Managing the Overall Operation of an RE-Integrated Smart Grid. The smart grid (SG) is a new generation of power system that optimizes all sectors of the grid, from generation and distribution to energy storage [79,80]. With the rapid expansion of the power grid and its continuous advancement toward intelligence, stakeholders expect more effective and efficient grid management. Intelligent techniques, as well as a combination of AI, IoT, and ICT tools, are needed to achieve efficient management of the smart grid [81,82] and to provide solutions for problems faced by power grids, such as demand-supply balancing, fault detection, grid operation and management, grid data management and control, and so on [83].
(iv)
Forecasting the Energy Demand. Energy demand prediction ensures the reliability of supply, since the demand-supply chain has to be perfectly balanced [84]. Because stakeholders with diverse characteristics exist in an HRES, energy demand forecasting is a difficult task. ML methods are able to provide accurate estimates of power consumption and demand [85] as well as of renewable energy production and supply [86].
(v)
Developing Renewable Energy Materials. Machine learning is increasingly being used to transform materials discovery. It can support other energy-related fields, such as solar cells, batteries, catalysis, and crystal discovery [87]. Thus, ML approaches can be used to develop renewable energy materials. ML is also used in another promising and exciting area, inverse design, where the desired properties of a material are given to the ML model, which then identifies materials with those properties [88].

3. RE Forecasting Approaches

Energy forecasting is the process of estimating the energy generation from different sources. The growing adoption of advanced technology has made energy forecasting an important task in today's power systems. There are two different approaches to energy forecasting. The first is the top-down approach, where the prediction is made at the highest level. The second is the bottom-up (or build-up) approach, where predictions are made at lower levels and aggregated to higher levels of the forecasting hierarchy [89,90]. In a hybrid system, a bottom-up approach is more meaningful and suitable, since the individual value of each forecasted component is of interest [91]. An overview of a bottom-up method for predicting the energy generation of each source is illustrated in Figure 5. This section reviews the energy predictions of some common RE sources, such as wind, solar, hydro, and biomass energy.

3.1. Solar Energy Prediction

Solar energy is a genuinely sustainable source of energy that directly uses the sun's energy either in the form of electricity (photovoltaic) or heat (photo-thermal) [92,93]. Photovoltaic energy is non-polluting and does not produce greenhouse gases, unlike fuel-based energy sources. Solar irradiance is one of the meteorological parameters that is most difficult to predict because of its dependency on diverse astronomical, geographical, and climatic parameters, such as air pressure, ambient temperature, humidity, wind speed/direction, and sunshine duration [94,95]. Due to the uncertain nature of these parameters, machine learning approaches are most frequently used to predict global irradiance on a monthly, daily, or hourly basis [96].

3.2. Wind Power Prediction

The primary application of wind energy is to use the kinetic energy of the air and convert it into electricity on a large scale, either onshore (on land) or offshore (on oceans, seas, etc.) [97]. This renewable and sustainable wind energy also depends on solar irradiance, wind speed, and other ambient conditions. Because of the intermittency and uncertainty of wind power [98], forecasting is needed to use this energy effectively. Like solar energy, wind power prediction can be classified by time horizon as hourly (immediate-short-term), daily (short-term), and monthly (long-term) prediction [99]. Wind energy prediction techniques vary from physical (deterministic) approaches to statistical or machine learning approaches based on historical and time-series data analysis [100]. Accurate prediction of wind power is challenging due to the intermittency of wind speed over time. To enhance forecasting accuracy in the long and short term, various dynamic ANN-based techniques, such as CNN and RNN models (with multiple variables, such as wind direction/speed, ambient temperature, solar irradiance, humidity, and air pressure [101]), have been proposed [102].

3.3. Hydro Power Prediction

Hydropower is a source of energy that harnesses the kinetic energy of water to generate electricity. Hydropower generators are normally built along a river's pathway. Hydropower has many advantages over most other sources of energy, such as a high level of reliability, high performance, low maintenance cost, and the capability to adjust to load changes [103]. Hydropower output depends on the volume of water that passes through the turbine and on the size of the turbine itself. For instance, in the rainy season, hydropower turbines can produce more power due to the abundance of water, and bigger turbines may produce more electricity; in other seasons, due to lack of water, smaller turbines may be more useful. Thus, optimization and prediction of the size of hydropower turbines is essential. However, the relationship between turbine size and the water flow rate is non-linear and very complex in nature. The optimization can be done by artificial intelligence and machine learning methods, such as the support vector machine (SVM), genetic algorithm (GA) [104], and artificial neural network (ANN) [69]. Like other prediction tasks, hydropower prediction is a dynamic process that requires constantly updated information about weather measurements and previous energy production to ensure proper regulation of the system.

3.4. Biomass Energy Prediction

Biomass energy is another source of renewable energy that is harnessed from biological sources. Biomass can be considered part of the carbon cycle: carbon moves from the atmosphere into plants, from plants to soil, and finally from soil back to the atmosphere during plant decay. Bio-energy can be utilized in many ways, such as for transport in the form of bio-diesel, for generating electricity, and for heating. Most of the relevant biomass is produced in village agricultural fields, wastelands, and forests. The process of biomass prediction is relatively precise, and different methodologies are used for biomass energy estimation. The literature shows that the accuracy of image-derived prediction of biomass energy is relatively high even when linear regression models are used. However, it remains a challenge to apply these methods in experiments under different situations, such as varying environmental variables and conditions or a lack of datasets for estimation [105].
According to the literature, solar and wind energy are likely to be the main energy resources for future power plants. Since their output power is intermittent, forecasting techniques can help to increase their efficiency. However, there are some challenges in forecasting the output energy of PV panels and wind generators. The most significant challenge is the lack of proper datasets and measurement equipment in some locations. Another challenge is the lack of standard methodologies that users can easily follow to obtain the parameters required to evaluate solar or wind energy availability under diverse sky conditions. In the following section, neural network techniques for predicting renewable energy generation are reviewed and discussed.

4. Materials and Methodology

The overall power forecasting strategy involves two modules. First, data acquisition and processing are described. Second, machine learning algorithms for energy prediction are reported. More details are provided in the following subsections.

4.1. Data Description and Processing

The main objective of this study is to present data-driven models that can predict wind and solar power accurately. Collecting and analyzing data is the most important part of renewable energy prediction. For proper energy forecasting, the data must pass through a processing module that includes normalization, removal of unwanted or erroneous data, data clustering, and correlation analysis [106,107]. Of these procedures, the first two are not implemented for all training techniques and are mainly common data pre-processing steps. To create a training dataset, however, data clustering is required. Finally, the correlation analysis provides insight into the delays (lags) that are utilized by the forecasting models.
Table 2 reviews the different input vectors used for training and validation of renewable energy models, specifically for solar energy, wind power, and hydropower prediction [108]. Prediction of solar energy needs at most nine parameters, such as sun altitude, latitude/longitude, time (including month and year), average ambient temperature, mean air pressure, average wind speed, and mean air humidity; the predicted quantity is the average solar irradiance. Similarly, wind power prediction requires wind direction/speed at the specific location and the temperature. Hourly values are collected to build the daily dataset, and the maximum, minimum, mean, and standard deviation of the wind speed, temperature, relative humidity, air pressure, and solar radiation variables are used as data. The data required for hydropower prediction are time, temperature, humidity, and air pressure. For biomass power prediction, image-based techniques have been commonly used.
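As a hedged illustration of the pre-processing steps described above (not the pipeline of any of the cited studies), the following Python/NumPy sketch applies min-max normalization and a chronological train/validation/test split to a hypothetical hourly measurement matrix; the array names, feature columns, and split ratios are assumptions made for this example only.

```python
import numpy as np

def minmax_normalize(data):
    """Scale each column of the measurement matrix to the [0, 1] range."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return (data - lo) / (hi - lo + 1e-12), (lo, hi)

def chronological_split(X, y, train=0.7, val=0.15):
    """Split time-ordered samples into train/validation/test without shuffling."""
    n = len(X)
    i, j = int(n * train), int(n * (train + val))
    return (X[:i], y[:i]), (X[i:j], y[i:j]), (X[j:], y[j:])

# Hypothetical hourly records: [wind speed, temperature, humidity, pressure, irradiance]
raw = np.random.rand(1000, 5)           # stand-in for measured data
target = raw[:, 4]                       # e.g., predict irradiance
features, scale = minmax_normalize(raw[:, :4])
train_set, val_set, test_set = chronological_split(features, target)
```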

4.2. Types of RE Prediction Models

There are several classes of prediction methods applied to renewable energy, such as solar or wind power forecasting [58]: (i) statistical methods using collected data, (ii) ML techniques including ANN, (iii) numerical models based on weather forecasts and satellite images, and (iv) combined models of the above techniques. Since solar, wind, or hydropower time series are non-linear and non-stationary, these methods are applied to learn the data patterns and anticipate the future behavior of weather occurrences. This study aims to review machine learning techniques (especially neural network models) that use time-series data for forecasting energy generation, focusing on wind power, solar energy, and hydropower generation. Based on the input time series and the feedback of the output in a data-driven method, three network models can be utilized: the I-O, NAR, and NARX neural networks (see Figure 6) [109]. The neural network is designed with embedded memory (tapped delays) for the inputs and feedback outputs, where $z^{-1}$ is the unit tapped-delay operator. To define the characteristics of these architectures, let $x(\tau)$ be the input of the prediction function (the independent variable), $y(\tau)$ the output corresponding to $x(\tau)$, and $f$ the prediction model; $d_y$ and $d_x$ denote the delays of the output feedback and the input, respectively. The delays correspond to the former inputs and target outputs that are utilized to forecast future values, and $p$ ($p \geq 1$) is the number of steps ahead for which the future behavior is forecasted.
(i)
The input-output (I-O) approach, which utilizes different types of inputs but not the previous values of the target output, is presented in the following equation:
$y(\tau + p) = f\big(x(\tau), x(\tau-1), \ldots, x(\tau-d_x)\big)$
(ii)
The non-linear auto-regressive (NAR) approach, which only utilizes the previous values of the target output, is presented below:
$y(\tau + p) = f\big(y(\tau), y(\tau-1), \ldots, y(\tau-d_y)\big)$
(iii)
The non-linear auto-regressive with exogenous inputs (NARX) approach uses both the exogenous inputs and the past values of the target. NARX provides better accuracy than the two former approaches when the exogenous inputs are correlated with the targets, because they carry a large amount of information about the system. The NARX equation is defined as follows (a sketch of the corresponding lagged design matrix is given after the equation):
$y(\tau + p) = f\big(x(\tau), x(\tau-1), \ldots, x(\tau-d_x), y(\tau), y(\tau-1), \ldots, y(\tau-d_y)\big)$
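To make the three formulations concrete, the sketch below (an illustrative assumption, not code from the reviewed studies) builds the lagged design matrix used by a NARX-style predictor: each sample stacks the exogenous inputs $x(\tau), \ldots, x(\tau-d_x)$ with the past targets $y(\tau), \ldots, y(\tau-d_y)$, and the label is $y(\tau+p)$. Dropping the `auto` part yields the I-O case, and dropping the `exo` part yields the NAR case.

```python
import numpy as np

def build_narx_dataset(x, y, d_x=3, d_y=3, p=1):
    """Assemble NARX samples: [x(t),...,x(t-d_x), y(t),...,y(t-d_y)] -> y(t+p)."""
    start = max(d_x, d_y)
    features, targets = [], []
    for t in range(start, len(y) - p):
        exo = x[t - d_x:t + 1][::-1]      # x(t), x(t-1), ..., x(t-d_x)
        auto = y[t - d_y:t + 1][::-1]     # y(t), y(t-1), ..., y(t-d_y)
        features.append(np.concatenate([exo, auto]))
        targets.append(y[t + p])
    return np.array(features), np.array(targets)

# Example with synthetic wind-speed (x) and power (y) series
x = np.random.rand(500)
y = np.random.rand(500)
X_narx, y_narx = build_narx_dataset(x, y, d_x=4, d_y=4, p=1)
```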

4.3. Prediction Methodology

The methodology based on the machine learning (ML) model is summarized in Figure 7 [58]. The process of applying ML to renewable resource datasets to forecast future values consists of three general phases [59], given as follows:
(a)
Collect data from the energy sources and environment.
(b)
Normalize and pre-process the original data to extract features.
(c)
Train the ML model, evaluate its accuracy on the training samples, and validate the pre-trained model with the validation samples. The designed ML model is then used to forecast the power output on the test dataset.
Among the most powerful machine learning systems are neural network approaches, which have the capability to learn, memorize, and build relationships among non-linear data [110]. These systems can recognize data with patterns similar, but not identical, to those on which they have been trained. They are fault tolerant and capable of approximating any continuous non-linear function with good accuracy. Thus, they can handle incomplete, noisy, non-linear, and non-stationary time series [31]. There are many variations and refinements of neural network architectures; three of them, applied to renewable energy forecasting [66], are presented here: the multi-layer perceptron (MLP), the convolutional neural network (CNN), and the recurrent neural network (RNN).

4.3.1. MLP Models

The most straightforward neural network architecture is the multi-layer perceptron (MLP). It is also known as a feed-forward neural network, in which all neurons of each layer have forward connections to every neuron of the subsequent layer, as shown in Figure 8. The MLP network consists of many perceptrons in a layered structure (input, hidden, and output layers) and has the ability to approximate future values from a given input in renewable energy forecasting applications [59]. Generally, this model tries to imitate the learning of the human brain on non-linear, time-series renewable energy data in the following ways (a minimal code sketch follows the list):
(i)
The data (or, equivalently, the signal) is fed into the input layer and processed by the hidden layers. The result is then available at the output layer.
(ii)
As the MLP is a fully connected network, each connection in the network has an associated weight $w_{ij}$. The weights are initialized with small random values, and this initialization is done before the training of the network begins.
(iii)
Each neuron has an activation function that acts on the inputs received and generates an output. A learning method, the back-propagation algorithm, optimizes the weights of each connection in order to find the optimal combination for the output.
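A minimal tf.keras sketch of such an MLP forecaster is shown below; the layer sizes, optimizer, epoch count, and the nine-feature input are illustrative assumptions rather than a configuration taken from the cited studies.

```python
import numpy as np
import tensorflow as tf

# Hypothetical training set: 9 meteorological features per sample, one power target
X_train = np.random.rand(1000, 9).astype("float32")
y_train = np.random.rand(1000, 1).astype("float32")

mlp = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(9,)),
    tf.keras.layers.Dense(32, activation="tanh"),    # hidden layer 1
    tf.keras.layers.Dense(16, activation="tanh"),    # hidden layer 2
    tf.keras.layers.Dense(1, activation="linear"),   # predicted power output
])
mlp.compile(optimizer="adam", loss="mse")            # weights updated by back-propagation
mlp.fit(X_train, y_train, epochs=10, validation_split=0.2, verbose=0)
```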

4.3.2. CNN Models

Convolutional neural networks (CNNs) are specialized in processing large volumes of matrix data, such as time series or images [111,112], and a few studies in energy applications have used CNN methods. The CNN model has the potential to classify time series for short-term schemes and is capable of carrying the relevant information through the prediction process in renewable energy forecasting. Therefore, these models can be used to estimate the future behavior of power generation from prior information about renewable energy resources, such as wind or solar energy. CNN models are also able to process raw input data, so no additional feature extraction methods are needed in an earlier stage [113]. Figure 9 illustrates an example of a basic CNN architecture for time-series data prediction [114,115]. For efficient prediction and effective modelling of time-series data, the CNN model generally employs five mapping layers: the input layer, convolutional layer, pooling layer, fully-connected (dense) layer, and output layer [47,113]. The roles of these layers fall into three groups:
(i)
Input feeding: The input layer specifies the input time-series data, which is then fed, with shared weights, to multiple learned kernels for processing.
(ii)
Feature extraction: The convolutional layers determine local associations in the input time series, whereas the pooling layers progressively reduce the dimension of the feature data related to the target variable and finally provide feature maps as outputs. Generally, this model has several levels of convolution-pooling layers, and different convolution operations are performed in each layer.
(iii)
Output prediction: The final phase passes the extracted feature maps to the fully-connected network layers through flattening (i.e., the input layer of the dense network). Flattening converts the pooled feature maps into a column vector so that the data can easily be fed into the network. Finally, the fully-connected layer predicts a future value based on the features of the input variables and the target variable.
There are many convolution and pooling layers involved in feature generation in the CNN model, and each convolutional layer is intended to extract patterns from its input time series [116]. The output of each convolution layer is expressed as follows:
$h_{ij} = f\big((W_k * x)_{ij} + b_k\big)$
where $f$ is the transfer function, $x$ is the input time series, $W_k$ and $b_k$ are the weight and bias of the layer connected to the $k$-th feature map, respectively, and '*' represents the convolution operation. The prediction part may also contain several dense layers, and the overall approach follows the input-output pattern:
$\mathrm{IN} \to [\mathrm{CONV} \to \mathrm{POOL}]{\ast}M \to [\mathrm{FC}]{\ast}N \to \mathrm{OUT}$
where IN, CONV, POOL, FC, and OUT denote the input, convolution, pooling, dense, and output layer activities, respectively; $M$ and $N$ indicate the number of CONV blocks and FC layers, respectively, and '*' denotes repetition. The convolutional and fully-connected layers require transfer functions.
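As a hedged illustration of this IN-[CONV-POOL]*M-[FC]*N-OUT pattern, the tf.keras Conv1D sketch below uses M = 2 and N = 2; the window length of 24 steps, the four input channels, and the filter counts are assumptions for the example, not values taken from the reviewed works.

```python
import tensorflow as tf

# Input: sliding windows of 24 time steps with 4 channels (e.g., speed, temperature, humidity, pressure)
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 4)),
    tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),  # CONV 1: local patterns
    tf.keras.layers.MaxPooling1D(pool_size=2),                     # POOL 1: reduce dimension
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),  # CONV 2
    tf.keras.layers.MaxPooling1D(pool_size=2),                     # POOL 2
    tf.keras.layers.Flatten(),                                     # flatten the pooled feature maps
    tf.keras.layers.Dense(32, activation="relu"),                  # FC 1
    tf.keras.layers.Dense(1),                                      # FC 2 / OUT: next-step forecast
])
cnn.compile(optimizer="adam", loss="mse")
```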
Moreover, the autoencoder (an unsupervised learning method) in CNNs is designed to reduce the input data dimension, ignoring input noise, into a single-dimensional data array through hidden layers [117]. Thus, it extracts only the essential features (encoding) and learns how to rebuild the original data from the encoded or compressed representation as closely as possible to the inputs (decoding) at the output layers.

4.3.3. RNN Models

The recurrent neural network (RNN) is a distinct type of ANN with loops that can perceive information in sequences of inputs. It can share parameters among neurons in different layers, making cycles of sequence in the network in order to produce better predictions. The looping architecture of RNNs allows them to predict time series as well as to memorize prior information while producing output [63]. Thus, the RNN model can have memory and use prior information that will influence its future predictions. The feedback cycles inside the network keep the model up to date and help it remember information for the future. It is one of the most widely used ANN approaches with the ability to forecast time series in energy systems [35]. Since the model can memorize information through time, it is an effective and efficient approach for time-series data prediction such as solar or wind power [36]. Figure 10 shows the basic structure of an RNN unit, where the block 'A' captures the input pattern $x_t$ and delivers a hidden pattern $h_t$ as well as a final forecast value $y_t$. The arrow pointing into block 'A' indicates that the information inside the block is used recursively. Once the structure is unfolded, it looks like a chain of networks, as illustrated in Figure 10 [118].
Like other approaches, the implementation of an RNN model involves three consecutive phases, i.e., training, validation, and testing. The hidden layer determines the capacity of the network and controls the time steps going forward from the input time series to the estimated outputs [63]. To illustrate the function of the RNN unit at time step $t$, let $x = [x_1, x_2, \ldots, x_t]$ be the input time series, $h_t$ the hidden state, and $y_t$ the estimated output. The workflow of the network from input to output is represented as follows (a NumPy sketch of this recurrence follows the list):
(i)
The hidden state is expressed as follows:
$h_t = f(h_{t-1}, x_t)$
The recurrent hidden state $h_t$ is updated by modifying the input $x_t$ with a weight matrix $W_{xh}$ (input-to-hidden matrix) and adding the product of the earlier state $h_{t-1}$ and the weight $W_{hh}$ (hidden-to-hidden weight). The sum of the weighted values is then passed through the transfer function $f$, as follows:
$h_t = f(W_{hh} \cdot h_{t-1} + W_{xh} \cdot x_t)$
(ii)
The output $y_t$ is computed by modifying the hidden state $h_t$ with the hidden-to-output weight $W_{hy}$, as follows:
$y_t = f(W_{hy} \cdot h_t)$
(iii)
The estimated output is compared with the target to generate error deltas, which are then fed back to update the weights in all layers until a tolerable error value is reached.
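The recurrence above can be traced in the following NumPy sketch; the weight shapes (three input features, eight hidden units, scalar output) and the random weights are hypothetical values chosen for illustration only.

```python
import numpy as np

def rnn_forecast(x_seq, W_xh, W_hh, W_hy, f=np.tanh):
    """Simple RNN recurrence: h_t = f(W_hh @ h_{t-1} + W_xh @ x_t), y_t = f(W_hy @ h_t)."""
    h = np.zeros(W_hh.shape[0])
    outputs = []
    for x_t in x_seq:
        h = f(W_hh @ h + W_xh @ x_t)     # update hidden state from previous state and current input
        outputs.append(f(W_hy @ h))      # estimated output at this time step
    return np.array(outputs), h

# Hypothetical dimensions: 3 input features, 8 hidden units, scalar forecast
rng = np.random.default_rng(0)
W_xh, W_hh, W_hy = rng.normal(size=(8, 3)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))
y_hat, h_last = rnn_forecast(rng.normal(size=(24, 3)), W_xh, W_hh, W_hy)
```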

4.3.4. LSTM Models

It is difficult for an RNN model to capture long-range dependencies in large volumes of time-series input, and hence the long short-term memory (LSTM) model [118] was designed to overcome this limitation. The LSTM model is an updated form of the RNN that can be used to capture time-series dependencies in prediction applications [119]. The most significant feature of the LSTM technique is that it can memorize information passed through the network over long periods of time. Because the model is capable of learning long-term data dependencies, it has been successful in renewable energy forecasting applications [120]. Generally, the LSTM prediction approach involves three processing stages: (i) feeding and preprocessing the input time series, (ii) self-updating the estimated record, and (iii) training the model using a learning algorithm. This approach combines the memorized output and the input sequence, and the prediction output is an association of the prior information. The simple RNN and LSTM architectures are almost alike; the only alteration is in the activation layers. A single activation layer exists in each cell of the RNN, whereas a set of activation layers is involved in the LSTM (see Figure 11). In the LSTM network, the memory cell (C) is combined with transfer functions (sigmoid or tanh) in order to regulate the information going into the cells and to decide whether to add that information to, or eliminate it from, the cell state [121]. This linked memory unit is the so-called constant error carousel (CEC), a linear unit whose stimulus is the cell state. An LSTM block together with a multiplier and an activation layer is known as a gate, whose activated output is bounded between 0 and 1; this value is multiplied by the original data, and the gate allows the data to pass when the output value is 1. Three types of gates are operated in the LSTM model [122]. These include
(i)
Forget gate that decides whether the data will be retained or not;
(ii)
Input gate that controls the feeding of data from the previous block; and
(iii)
Output gate that chooses what forecast value will be the output to the following memory block.
From Figure 11b, it can be seen that two input patterns are required to train the model: the present time-series input $X_t$ and the earlier hidden state $H_{t-1}$; the cell provides the present output $H_t$. $C$ is the memory cell content, and the important point is that the block allows the information to travel along the entire chain with only minor linear interactions, i.e., without changing the cell state, preserving the integrity of the data for the future [123]. The output $H_t$ is calculated from the activation of the cell state combined with the gated, non-linearly transformed input. The equations that represent the input-output association in the LSTM model are expressed as follows [124]:
$F_t = \sigma(X_t W_{xf} + H_{t-1} W_{hf} + b_f)$
$I_t = \sigma(X_t W_{xi} + H_{t-1} W_{hi} + b_i)$
$\tilde{C}_t = \tanh(X_t W_{xc} + H_{t-1} W_{hc} + b_c)$
$O_t = \sigma(X_t W_{xo} + H_{t-1} W_{ho} + b_o)$
$C_t = F_t C_{t-1} + I_t \tilde{C}_t$
$H_t = O_t \tanh(C_t)$
where $F_t$, $I_t$, and $O_t$ are the forget, input, and output gates, respectively; $W_{xf}$, $W_{xi}$, $W_{xc}$, $W_{xo}$, $W_{hf}$, $W_{hi}$, $W_{hc}$, and $W_{ho}$ are weight matrices; and $b_f$, $b_i$, $b_c$, and $b_o$ are bias values. Following standard practice for neural network learning, these values are initialized from a uniform distribution and updated with the back-propagation procedure. The output of the sigmoid ($\sigma$) layer is bounded between 0 and 1, which controls the flow of information. The tanh layer provides the vector $\tilde{C}_t$, which is added to the prior state; this information then passes through another tanh layer and is multiplied by the sigmoid activation to obtain the output $H_t$.
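The gate equations above can be traced step by step in the NumPy sketch of one LSTM cell update below; the weight shapes (four input features, eight hidden units), the random initialization, and the 24-step sequence are illustrative assumptions, and element-wise products implement the $F_t C_{t-1}$ and $O_t \tanh(C_t)$ terms.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM update following the F, I, C~, O, C, H equations (row-vector convention X W + H W + b)."""
    f_t = sigmoid(x_t @ W["xf"] + h_prev @ W["hf"] + b["f"])       # forget gate
    i_t = sigmoid(x_t @ W["xi"] + h_prev @ W["hi"] + b["i"])       # input gate
    c_tilde = np.tanh(x_t @ W["xc"] + h_prev @ W["hc"] + b["c"])   # candidate cell content
    o_t = sigmoid(x_t @ W["xo"] + h_prev @ W["ho"] + b["o"])       # output gate
    c_t = f_t * c_prev + i_t * c_tilde                             # new cell state
    h_t = o_t * np.tanh(c_t)                                       # new hidden state / output
    return h_t, c_t

# Hypothetical sizes: 4 input features, 8 hidden units
rng = np.random.default_rng(1)
n_in, n_hid = 4, 8
W = {k: rng.normal(scale=0.1, size=(n_in, n_hid)) for k in ("xf", "xi", "xc", "xo")}
W.update({k: rng.normal(scale=0.1, size=(n_hid, n_hid)) for k in ("hf", "hi", "hc", "ho")})
b = {k: np.zeros(n_hid) for k in ("f", "i", "c", "o")}
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rng.normal(size=(24, n_in)):                            # 24-step input sequence
    h, c = lstm_step(x_t, h, c, W, b)
```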

4.4. Learning Algorithms

The back-propagation (BP) algorithm is a popular learning procedure used in ANN training. It captures the non-linear associations between time-series inputs and predicted outputs [64]. For proper training of the model, the BP algorithm requires a desired output for each input pattern, known as the target, and hence this method is called supervised learning [125]. During ANN training, the value obtained at the output layer is compared with the target, and the difference between them is computed as the error. This error is then reduced to an acceptable level by repeatedly updating the weights until satisfactory prediction results are obtained. The training of an ANN-based prediction model by the BP learning procedure is described by the following two phases:
(1)
The propagation phase includes two passes: the forward pass and the backward pass.
(a)
In forward propagation, the input time series passes from the input layer to the output layer of the network, and the output value is computed.
(b)
In backward propagation, the value obtained at the output layer is compared with the target in order to generate the deltas (associated errors) of all neurons in the output layer and then in the hidden layers.
(2)
In the weight-updating phase, all the weights in the hidden-to-output and input-to-hidden layers are updated as follows:
(a)
Multiply each delta by its input stimulus to obtain the gradient of the weight factors.
(b)
Subtract a fraction of the gradient (scaled by the learning rate) from the previous weights to obtain the new weight values.
The overall BP training algorithm is summarized in Figure 12. The stopping criterion of the algorithm is checked at the end of each epoch, and the iteration process terminates either when the error at the end of an epoch falls below a threshold or when the maximum number of epochs is reached [126]. It typically takes hundreds or thousands of epochs for a neural network to converge. Different variants of the BP algorithm available in Matlab programming are given in Table 3; they are used to train different networks with different datasets and have their own benefits and limitations [59].
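For concreteness, the sketch below implements the forward pass, backward pass, and weight update of the BP procedure for a single-hidden-layer network in NumPy; the learning rate, layer sizes, epoch count, and random data are assumptions made for illustration, not configurations used in the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((200, 5))                      # 5 normalized input features
t = rng.random((200, 1))                      # target output (e.g., measured power)

n_hidden, lr = 10, 0.05
W1, b1 = rng.normal(scale=0.1, size=(5, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(scale=0.1, size=(n_hidden, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: input layer -> hidden layer -> output layer
    h = sigmoid(X @ W1 + b1)
    y = h @ W2 + b2                           # linear output layer
    err = y - t                               # error against the target

    # Backward pass: deltas for the output layer, then the hidden layer
    delta_out = err / len(X)
    delta_hid = (delta_out @ W2.T) * h * (1.0 - h)

    # Weight update: gradient = delta times input stimulus, scaled by the learning rate
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid;  b1 -= lr * delta_hid.sum(axis=0)

# In practice the epoch loop would stop once this error drops below a chosen threshold
mse = float(np.mean((sigmoid(X @ W1 + b1) @ W2 + b2 - t) ** 2))
```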

4.4.1. Activation Functions

Activation functions, also known as transfer functions, activate the neurons in neural networks (see Figure 13) and provide the non-linear transformation of the input time series. They increase the expressive ability of the ANN model, make the network capable of learning and computing more complex tasks, and help provide accurate predictions [127]. Without activation functions, the neural network would be a linear regression model. The derivative of a transfer function, known as its gradient, is extremely important for training the neural network. Common activation functions used in different neural network structures include the sigmoid or logistic function ($\sigma$), the hyperbolic tangent (tanh), the rectified linear unit (ReLU), and softplus (see Figure 14); a compact code sketch of these functions and their gradients follows the definitions below.
(1)
Sigmoid Function. The most common non-linear transfer function is the sigmoid (logistic) function, which is real-valued, monotonic, and differentiable, with a positive first derivative. Its output is bounded between 0 and 1, and it is widely used in deep neural networks. It is expressed by the following equation:
$y = f(x) = \dfrac{1}{1 + e^{-x}}$
(2)
Hyperbolic-Tangent Function. The hyperbolic tangent (tanh) function is a rescaled version of the logistic function whose output is limited to the range from -1 to +1. Its graph is symmetric, and it conveys non-linearity to the network. The equation of the tanh function is as follows:
$y = f(x) = \tanh(x) = \dfrac{e^{x} - e^{-x}}{e^{x} + e^{-x}}$
(3)
Rectified Linear Unit Function. The ReLU function is defined as the positive part of its argument. Although the ReLU function looks similar to a linear function, it has a derivative and permits the back-propagation procedure. Compared with the sigmoid and tanh functions, the ReLU function allows the network to converge very quickly. The equation of this function is defined as:
$y = f(x) = x^{+} = \max(0, x)$
It has recently become a very common activation function in deep learning. Different variants of the ReLU function are available, such as Leaky ReLU, Noisy ReLU, and the exponential linear unit (ELU).
(4)
Softplus Function. The softplus function is similar to the ReLU function, except that it is smooth and takes small positive values for arguments less than 0. The formula of this function is defined as:
$y = f(x) = \ln(1 + e^{x})$
The derivative of softplus is $f'(x) = \dfrac{1}{1 + e^{-x}}$, which is the sigmoid function.
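The four activation functions and their derivatives (gradients) can be written compactly as follows; this is an illustrative NumPy sketch rather than code from the reviewed works.

```python
import numpy as np

def sigmoid(x):            # bounded in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):          # gradient: sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

def tanh(x):               # bounded in (-1, +1), symmetric
    return np.tanh(x)

def d_tanh(x):
    return 1.0 - np.tanh(x) ** 2

def relu(x):               # positive part of the argument
    return np.maximum(0.0, x)

def d_relu(x):
    return (x > 0).astype(float)

def softplus(x):           # smooth approximation of ReLU
    return np.log1p(np.exp(x))

def d_softplus(x):         # derivative is exactly the sigmoid
    return sigmoid(x)
```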

4.4.2. Prediction Evaluation Metrics

During the training session, the error between the actual output and the target is calculated, and the weights in all layers are updated until the error reaches an acceptable level [128]. The back-propagation algorithm uses several cost functions as evaluation metrics of forecasting accuracy, based on correlation and error analysis between the target and the estimated value [36]. The principal uncertainty indices used to assess the performance of the network models include the root-mean-square error (RMSE) or coefficient of variation of the RMSE (CV-RMSE), the mean bias error (MBE) or normalized mean bias error (NMBE), the mean absolute error (MAE) or mean absolute percentage error (MAPE), and the coefficient of determination ($R^2$) [129]. A code transcription of these metrics follows the definitions below.
(i)
RMSE and CV-RMSE quantify the error variability between the estimated value ($y_k$) and the measured value ($x_k$) over $N$ observations, and are defined as follows:
$\mathrm{RMSE} = \sqrt{\dfrac{\sum_{k=1}^{N} (y_k - x_k)^2}{N}}$
$\mathrm{CV\text{-}RMSE} = \dfrac{\mathrm{RMSE}}{\bar{x}} \times 100\%$
(ii)
MBE represents the average error over the sample space and indicates the overall behavior of the predicted output with respect to the regression line of the sample. With the definition below (estimated minus measured), positive values indicate over-prediction and negative values indicate under-prediction by the model. NMBE is a normalization of the MBE index that scales the MBE result, providing the global difference between the measured value and the predicted value. They are defined as follows:
$\mathrm{MBE} = \dfrac{1}{N} \sum_{k=1}^{N} (y_k - x_k)$
$\mathrm{NMBE} = \dfrac{\mathrm{MBE}}{\bar{x}} \times 100\%$
(iii)
MAE and MAPE are also measures of prediction performance: MAE is the average absolute error between the estimated values and the corresponding observations, and MAPE expresses the prediction error as a percentage [130], as follows:
$\mathrm{MAE} = \dfrac{1}{N} \sum_{k=1}^{N} \left| y_k - x_k \right|$
$\mathrm{MAPE} = \dfrac{1}{N} \sum_{k=1}^{N} \dfrac{\left| y_k - x_k \right|}{x_k} \times 100\%$
(iv)
$R^2$, known as the coefficient of determination, is a statistical uncertainty metric bounded between 0 and 1. A value tending to 1 indicates that the estimated values are closely related to the measured values. $R^2$ is expressed as follows:
$R^2 = 1 - \dfrac{\sum_{k=1}^{N} (y_k - x_k)^2}{\sum_{k=1}^{N} (x_k - \bar{x})^2}$
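A direct NumPy transcription of these evaluation metrics is given below; the arrays y_pred (estimated series) and y_true (measured series) are hypothetical names introduced for this sketch.

```python
import numpy as np

def prediction_metrics(y_pred, y_true):
    """Return RMSE, CV-RMSE, MBE, NMBE, MAE, MAPE (%), and R^2 for a forecast."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    err = y_pred - y_true                       # estimated minus measured
    rmse = np.sqrt(np.mean(err ** 2))
    mbe = np.mean(err)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err) / y_true) * 100.0
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return {
        "RMSE": rmse,
        "CV-RMSE (%)": rmse / y_true.mean() * 100.0,
        "MBE": mbe,
        "NMBE (%)": mbe / y_true.mean() * 100.0,
        "MAE": mae,
        "MAPE (%)": mape,
        "R2": r2,
    }
```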

4.5. Other ANN Structures

Besides the common ANN models, there are newer structures, such as the deep convolutional neural network (DCNN), the extreme learning machine (ELM), and the GRU-CNN hybrid model, that can be used in renewable energy prediction problems. These models are subcategories of deep learning. The hybrid GRU-CNN model combines the gated recurrent unit (GRU) and the convolutional neural network (CNN): the GRU is applied to extract time-series features, and the CNN is used to extract other high-dimensional features [131]. The DCNN has a shared-weight architecture and can operate with minimal pre-processing on translated features. Generally, a DCNN consists of several alternating convolution and pooling layers; weight sharing is usually utilized in the convolution layers to reduce the memory footprint and the number of network parameters, which also helps to simplify the back-propagation and feed-forward processes. The pooling layer instead focuses on the input maps: it decreases the dimensions of the data by transforming batches of neurons at the input into an individual neuron at the output [132]. The other new structure widely used in RE prediction models is the ELM. It is a feed-forward ANN whose hidden-layer parameters, such as weights and biases, are randomly generated, which helps with feature extraction in RE prediction [133].
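As a hedged illustration of the GRU-CNN idea, a tf.keras functional-API sketch might combine a GRU branch (temporal features) and a Conv1D branch (high-dimensional local features) before a shared output layer, as below; the branch sizes, pooling, and merge strategy are assumptions for illustration and not the architecture of [131].

```python
import tensorflow as tf

inp = tf.keras.layers.Input(shape=(24, 4))                      # 24 time steps, 4 measured channels
gru_feat = tf.keras.layers.GRU(16)(inp)                         # temporal features from the GRU branch
conv = tf.keras.layers.Conv1D(16, 3, activation="relu")(inp)    # local features from the CNN branch
conv_feat = tf.keras.layers.GlobalAveragePooling1D()(conv)
merged = tf.keras.layers.Concatenate()([gru_feat, conv_feat])   # merge the two feature sets
out = tf.keras.layers.Dense(1)(merged)                          # predicted power output
gru_cnn = tf.keras.Model(inp, out)
gru_cnn.compile(optimizer="adam", loss="mse")
```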

5. Discussion and Future Directions

According to the literature, ANNs are suitable for complex datasets and show good predictive ability. They are becoming more popular than conventional forecasting techniques, such as time-series and regression methods, in terms of reliability and efficiency. Several conventional works used weather prediction models along with physical models as inputs; these models use physical variables in addition to climatological variables to obtain the optimal model for estimating energy production close to reality. The literature shows that the ANN is the strategy most researchers have used for accurate prediction of renewable energy generation [134]; the models achieve correlation coefficients of more than 80% and are improving day by day [76]. The forecasting accuracy of a model depends on various factors, such as the input dataset, the number of hidden layers and their neurons, and the learning procedure.
The network is trained by the back-propagation algorithm, especially with the Levenberg-Marquardt variant, to minimize output errors and achieve a higher correlation coefficient, which improves the prediction results. The reviewed studies recommend setting a linear transfer function in the output layer, while the log-sigmoid and hyperbolic tangent functions are tried in the hidden layers. Although the numbers of hidden layers and neurons are chosen by trial and error, they depend on the input and output layers, the nature of the sample datasets, the activation function, and the learning procedure. Different combinations of hidden layers and functions are tested to find the best prediction results. The studies suggest that, to avoid overfitting with a particular training function, the number of neurons in the hidden layers should lie between the number of input and output neurons and be less than twice the number of input neurons. Once the best combination is selected, a biased forecast is identified as either an over-forecast (the predicted value exceeds the actual value) or an under-forecast (the predicted value is less than the actual value). To improve the prediction accuracy, the bias is properly initialized, and the output error is adjusted by increasing the output (in the case of under-forecasting) or decreasing it (in the case of over-forecasting). The performance of a model is assessed by comparing the RMSE or R2 of different techniques, the error histograms for training and validation, and the predicted power against the actual power on the test database.
The inputs fed to the model are average-valued energy data, such as solar irradiation, temperature, wind speed, pressure, etc. The ANN model is employed to estimate daily, weekly, monthly, and yearly mean parameters. The parameters most used for energy prediction in renewable energy systems include
(i)
global irradiance, sun elevation, sun azimuth angle, and temperature for solar energy;
(ii)
wind speed/direction, air pressure, and temperature for wind energy; and
(iii)
rain amount, temperature, and water pressure for hydropower generation.
The datasets used in the prediction model are divided into training and testing sets. The data should be normalized first, and at the output, the final results must be de-normalized to obtain the predicted values. The training dataset is then further divided into training and validation subsets. In the training phase, the ANN model uses the training dataset to find the best model parameters, and the validation dataset is used for fine-tuning the model to avoid over-fitting. An effective technique for validating the prediction model is cross-validation. Finally, the unseen testing dataset is used to evaluate the prediction skill of the trained model.
The application of ANNs in renewable energy prediction is moving towards newer types of ANN such as CNN and RNN, which can improve the model structures, the forecasting output, and the optimization of the required data [135]. The newer types of ANN in RE prediction tend to compare favorably with other AI-based predictive models because of better accuracy, ease of use, lower data requirements, and better performance. From the works analyzed, MLP obtains better results than linear methods, and even with linear time-series data it can outperform traditional linear methods. Beyond that, the CNN and LSTM approaches are much more promising, and these methods beat the MLP approaches in the same experiments with remarkable performance improvements. Another consideration is that standard error measurements, such as RMSE and R2, may be a better choice than MAPE or MAE for expressing the results.

5.1. Ease of Using ANN-Based RE Prediction Models

The following points highlight the benefits of using ANN-based models in HRES for energy prediction:
(i)
Adaptive learning: ANNs can be trained to model the desired renewable outputs with complex non-linear and non-stationary data.
(ii)
Real-time operation: ANNs can be implemented and programmed to run fast enough to carry out real-time processes.
(iii)
Ease of implementation: ANN models can be easily implemented and integrated into most systems, such as embedded systems.
(iv)
Accurate forecasting: ANNs have produced accurate results with high correlation coefficients when used to predict renewable energy outputs.

5.2. Challenges of Using ANN-Based RE Prediction Models

Besides the advantages of using ANN-based prediction models in renewable energy systems, the following challenges must be considered:
(i)
Theoretical issues: To utilize ANN-based prediction models in renewable energy generation, it is necessary to characterize the complexity of the forecasting samples. It is also necessary to determine the number of training samples and the computing resources needed to train the model.
(ii)
Model issues: Although ANN-based models make the training of time-series RE data efficient, selecting the proper network structure and learning algorithm to forecast the output for a specific dataset is very challenging.
(iii)
Optimization challenges: ANN models commonly require proper selection of biases and initial weights, so optimizing the model coefficients is difficult with low-quality RE data.
(iv)
Lack of experts: There is a shortage of specialists with experience in these energy source areas combined with knowledge of data science and machine learning methods. Moreover, lack of knowledge about advanced technology in recent RE systems is another challenge that has already been faced.

5.3. Future Works Directions

Future directions of this study could employ the following research advancements in RE systems:
(i)
ANN models of renewable energy data vary across seasons and climatic conditions; hence, the forecasting model differs in different situations. The datasets used in different studies also vary from each other, so the evaluation of different prediction models is difficult. Therefore, a unified prediction model for renewable energy sources across different seasonal and climate conditions is required in future work.
(ii)
Supplementary algorithms can be used in different layers of the ANN model to enhance system performance. These algorithms can be applied to a single weight or to the entire structural configuration of the ANN, which can help improve performance or optimize data in ANN applications. As such, the utilization of supplementary algorithms in ANN models could be beneficial for forecasting RE generation.
(iii)
New developments of AI in prediction tools could be employed to improve time-series forecasting. The most recommended ML techniques, such as bi-directional long short-term memory (BLSTM), deep neural networks, and extreme learning machines, should be employed in the energy system. Also, introducing Internet of Things (IoT) tools together with AI could be an exciting trend in HRES research.

6. Conclusions

Artificial intelligence has the ability to handle uncertainty and complex data. As renewable energy resources show uncertain behavior in nature, the objective of this study has been to present machine learning approaches for predicting time-series data in energy generation applications. Solar, wind, and hydro power are promising renewable sources that have been deployed successfully in recent years. The major challenges in renewable energy systems are the unpredictability and irregularity of power generation due to meteorological and environmental factors, such as solar irradiation, wind speed and direction, cloud cover, and time of day. Accurate estimation of renewable energy generation therefore remains crucial in a hybrid renewable energy system. Several classes of prediction methods are applied in renewable energy systems, including statistical analysis, regression algorithms, non-linear algorithms, and machine learning models. The most remarkable machine learning systems include the neural network methods, which have the capability to learn and memorize knowledge, build associations among non-linear data, and approximate future values with good accuracy. This study has focused on the research methodology of machine learning systems applied in the different phases of hybrid energy systems. Most of the research works in energy systems have addressed power generation forecasting, demand prediction, load forecasting, time-series forecasting, optimization, energy consumption prediction, and energy control and management.
The review of advancements in machine learning approaches for energy generation forecasting identified three popular ANN architectures: the traditional neural network or “multi-layer perceptron” (MLP), the “recurrent-neural network” (RNN), and the “convolutional-neural network” (CNN). The MLP model captures significant dependency features when estimating future values and offers improved prediction accuracy over linear models. The RNN and CNN models have recently shown great potential in learning the complex time series found in renewable energy systems. These deep learning methods are capable of training on large datasets and provide better results than traditional machine learning techniques. The development of a prediction model involves three phases: network design, training (learning) of the network, and testing with performance evaluation. Most of the studies showed that the back-propagation procedure is a powerful supervised learning algorithm for training an ANN model and building a relationship between continuous-valued inputs and outputs. This study has demonstrated the promising strength of different ANN models in energy systems; future work will implement an intelligent management system for renewable energy resources using hybrid machine learning methods and Internet of Things (IoT) protocols.
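As a concrete illustration of the back-propagation procedure summarized above, the following minimal sketch trains a one-hidden-layer network by hand with NumPy on a toy non-linear regression task. The network size, learning rate, and target function are assumptions chosen only to keep the example small.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(256, 2))                 # two input features
y = np.sin(np.pi * X[:, :1]) + X[:, 1:] ** 2          # toy continuous target

W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)   # output layer parameters
lr = 0.1

for epoch in range(2000):
    # Forward pass
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))          # sigmoid hidden activations
    y_hat = h @ W2 + b2                               # linear output
    # Backward pass: gradients of the squared error, propagated layer by layer
    d_out = (y_hat - y) / len(X)
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * h * (1 - h)                # chain rule through the sigmoid
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    # Gradient-descent update of weights and biases
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("Final MSE:", float(np.mean((y_hat - y) ** 2)))
```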

Author Contributions

Conceptualization, M.M.R.; methodology, M.M.R.; software, M.M.R.; validation, M.M.R., M.S. and S.K.T.; formal analysis, M.M.R. and F.K.; investigation, M.M.R. and F.K.; resources, M.M.R., M.S. and F.K.; data curation, M.M.R.; writing—original draft preparation, M.M.R., M.S. and F.K.; writing—review and editing, M.M.R., M.S., S.K.T. and M.K.H.; visualization, M.M.R., J.P. and M.K.H.; supervision, N.A. and S.K.T.; project administration, M.M.R., N.A. and S.K.T.; funding acquisition, S.K.T. and N.A. All authors have read and agreed to the published version of the manuscript.

Funding

This work was fully supported by the Ministry of Higher Education (MOHE), Malaysia, through LRGS grant LRGS/1/2019/UKM-UNITEN/6/2. Publication-related support was received from grant RJO10517919/iRMC/Publication under the BOLD 2025 initiative of Universiti Tenaga Nasional (@UNITEN, The Energy University).

Acknowledgments

The authors wish to express their deepest appreciation to the Ministry of Higher Education (MOHE), Malaysia, for its support through LRGS Grant No. LRGS/1/2019/UKM-UNITEN/6/2. The authors are also indebted to the Institute of Sustainable Energy (ISE) of Universiti Tenaga Nasional (@UNITEN, The Energy University) for laboratory support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Elhadidy, M.; Shaahid, S. Optimal sizing of battery storage for hybrid (wind+ diesel) power systems. Renew. Energy 1999, 18, 77–86. [Google Scholar] [CrossRef]
  2. Elhadidy, M.; Shaahid, S. Promoting applications of hybrid (wind+ photovoltaic+ diesel+ battery) power systems in hot regions. Renew. Energy 2004, 29, 517–528. [Google Scholar] [CrossRef]
  3. Briefing, U.S. International Energy Outlook 2013; US Energy Information Administration: Washington, DC, USA, 2013. [Google Scholar]
  4. Castagnetti, A.; Pegatoquet, A.; Belleudy, C.; Auguin, M. A framework for modeling and simulating energy harvesting WSN nodes with efficient power management policies. EURASIP J. Embed. Syst. 2012, 2012, 8. [Google Scholar] [CrossRef]
  5. Bhatia, S.; Gupta, R. Textbook of Renewable Energy; Woodhead Publishing India Pvt. Limited: Delhi, India, 2019. [Google Scholar]
  6. Kumar, A.; Kumar, K.; Kaushik, N.; Sharma, S.; Mishra, S. Renewable energy in India: Current status and future potentials. Renew. Sustain. Energy Rev. 2010, 14, 2434–2442. [Google Scholar] [CrossRef]
  7. Wang, X.; Guo, P.; Huang, X. A review of wind power forecasting models. Energy Procedia 2011, 12, 770–778. [Google Scholar] [CrossRef] [Green Version]
  8. Han, J.; Choi, C.S.; Park, W.K.; Lee, I.; Kim, S.H. Smart home energy management system including renewable energy based on ZigBee and PLC. IEEE Trans. Consum. Electron. 2014, 60, 198–202. [Google Scholar] [CrossRef]
  9. Kazmierkowski, M.P.; Jasinski, M.; Wrona, G. DSP-based control of grid-connected power converters operating under grid distortions. IEEE Trans. Ind. Inform. 2011, 7, 204–211. [Google Scholar] [CrossRef]
  10. Ritchie, H. Energy. Our World in Data. 2018. Available online: https://ourworldindata.org/energy (accessed on 19 August 2020).
  11. Li, Q.; Loy-Benitez, J.; Nam, K.; Hwangbo, S.; Rashidi, J.; Yoo, C. Sustainable and reliable design of reverse osmosis desalination with hybrid renewable energy systems through supply chain forecasting using recurrent neural networks. Energy 2019, 178, 277–292. [Google Scholar] [CrossRef]
  12. Rekioua, D. Hybrid Renewable Energy Systems: Optimization and Power Management Control; Springer Nature: London, UK, 2019. [Google Scholar]
  13. Bajpai, P.; Dash, V. Hybrid renewable energy systems for power generation in stand-alone applications: A review. Renew. Sustain. Energy Rev. 2012, 16, 2926–2939. [Google Scholar] [CrossRef]
  14. Kennedy, S.F. Indonesia’s energy transition and its contradictions: Emerging geographies of energy and finance. Energy Res. Soc. Sci. 2018, 41, 230–237. [Google Scholar] [CrossRef]
  15. Perez-Navarro, A.; Alfonso, D.; Álvarez, C.; Ibáñez, F.; Sanchez, C.; Segura, I. Hybrid biomass-wind power plant for reliable energy generation. Renew. Energy 2010, 35, 1436–1443. [Google Scholar] [CrossRef]
  16. Borhanazad, H.; Mekhilef, S.; Saidur, R.; Boroumandjazi, G. Potential application of renewable energy for rural electrification in Malaysia. Renew. Energy 2013, 59, 210–219. [Google Scholar] [CrossRef]
  17. Bhandari, B.; Lee, K.T.; Lee, C.S.; Song, C.K.; Maskey, R.K.; Ahn, S.H. A novel off-grid hybrid power system comprised of solar photovoltaic, wind, and hydro energy sources. Appl. Energy 2014, 133, 236–242. [Google Scholar] [CrossRef]
  18. Mazzola, S.; Astolfi, M.; Macchi, E. The potential role of solid biomass for rural electrification: A techno economic analysis for a hybrid microgrid in India. Appl. Energy 2016, 169, 370–383. [Google Scholar] [CrossRef]
  19. Ahmad, J.; Imran, M.; Khalid, A.; Iqbal, W.; Ashraf, S.R.; Adnan, M.; Ali, S.F.; Khokhar, K.S. Techno economic analysis of a wind-photovoltaic-biomass hybrid renewable energy system for rural electrification: A case study of Kallar Kahar. Energy 2018, 148, 208–234. [Google Scholar] [CrossRef]
  20. Baek, S.; Kim, H.; Chang, H.J. Optimal hybrid renewable power system for an emerging island of South Korea: The case of Yeongjong Island. Sustainability 2015, 7, 13985–14001. [Google Scholar] [CrossRef] [Green Version]
  21. Baek, S.; Kim, H.; Chang, H.J. Optimal hybrid renewable airport power system: Empirical study on Incheon International Airport, South Korea. Sustainability 2016, 8, 562. [Google Scholar] [CrossRef] [Green Version]
  22. Baek, S.; Park, E.; Kim, M.G.; Kwon, S.J.; Kim, K.J.; Ohm, J.Y.; del Pobil, A.P. Optimal renewable power generation systems for Busan metropolitan city in South Korea. Renew. Energy 2016, 88, 517–525. [Google Scholar] [CrossRef]
  23. Park, E.; Kwon, S.J. Renewable electricity generation systems for electric-powered taxis: The case of Daejeon metropolitan city. Renew. Sustain. Energy Rev. 2016, 58, 1466–1474. [Google Scholar] [CrossRef]
  24. Siddaiah, R.; Saini, R. A review on planning, configurations, modeling and optimization techniques of hybrid renewable energy systems for off grid applications. Renew. Sustain. Energy Rev. 2016, 58, 376–396. [Google Scholar] [CrossRef]
  25. Fadaee, M.; Radzi, M. Multi-objective optimization of a stand-alone hybrid renewable energy system by using evolutionary algorithms: A review. Renew. Sustain. Energy Rev. 2012, 16, 3364–3369. [Google Scholar] [CrossRef]
  26. Dawoud, S.M.; Lin, X.; Okba, M.I. Hybrid renewable microgrid optimization techniques: A review. Renew. Sustain. Energy Rev. 2018, 82, 2039–2052. [Google Scholar] [CrossRef]
  27. Phan, B.C.; Lai, Y.C. Control strategy of a hybrid renewable energy system based on reinforcement learning approach for an isolated microgrid. Appl. Sci. 2019, 9, 4001. [Google Scholar] [CrossRef] [Green Version]
  28. Chahkoutahi, F.; Khashei, M. A seasonal direct optimal hybrid model of computational intelligence and soft computing techniques for electricity load forecasting. Energy 2017, 140, 988–1004. [Google Scholar] [CrossRef]
  29. Mosavi, A.; Lopez, A.; Varkonyi-Koczy, A.R. Industrial applications of big data: State of the art survey. In Recent Advances in Technology Research and Education, Proceedings of the 16th International Conference on Global Research and Education, Iasi, Romania, 25–28 September 2017; Springer: Berlin, Germany, 2017; pp. 225–232. [Google Scholar]
  30. Qasem, S.N.; Samadianfard, S.; Sadri Nahand, H.; Mosavi, A.; Shamshirband, S.; Chau, K.w. Estimating daily dew point temperature using machine learning algorithms. Water 2019, 11, 582. [Google Scholar] [CrossRef] [Green Version]
  31. Voyant, C.; Notton, G.; Kalogirou, S.; Nivet, M.L.; Paoli, C.; Motte, F.; Fouilloy, A. Machine learning methods for solar radiation forecasting: A review. Renew. Energy 2017, 105, 569–582. [Google Scholar] [CrossRef]
  32. Perera, K.S.; Aung, Z.; Woon, W.L. Machine learning techniques for supporting renewable energy generation and integration: A survey. In Data Analytics for Renewable Energy Integration, Proceedings of the 2nd International Workshop on Data Analytics for Renewable Energy Integration, Nancy, France, 19 September 2014; Springer: Cham, Switzerland, 2014; pp. 81–96. [Google Scholar]
  33. Amasyali, K.; El-Gohary, N.M. A review of data-driven building energy consumption prediction studies. Renew. Sustain. Energy Rev. 2018, 81, 1192–1205. [Google Scholar] [CrossRef]
  34. Peng, Y.; Rysanek, A.; Nagy, Z.; Schlüter, A. Using machine learning techniques for occupancy-prediction-based cooling control in office buildings. Appl. Energy 2018, 211, 1343–1358. [Google Scholar] [CrossRef]
  35. Geem, Z.W.; Roper, W.E. Energy demand estimation of South Korea using artificial neural network. Energy Policy 2009, 37, 4049–4054. [Google Scholar] [CrossRef]
  36. Ceylan, H.; Ceylan, H.; Haldenbilen, S.; Baskan, O. Transport energy modeling with meta-heuristic harmony search algorithm, an application to Turkey. Energy Policy 2008, 36, 2527–2535. [Google Scholar] [CrossRef]
  37. Mosavi, A.; Salimi, M.; Faizollahzadeh Ardabili, S.; Rabczuk, T.; Shamshirband, S.; Varkonyi-Koczy, A.R. State of the art of machine learning models in energy systems, a systematic review. Energies 2019, 12, 1301. [Google Scholar] [CrossRef] [Green Version]
  38. Nam, K.; Hwangbo, S.; Yoo, C. A deep learning-based forecasting model for renewable energy scenarios to guide sustainable energy policy: A case study of Korea. Renew. Sustain. Energy Rev. 2020, 122, 109725. [Google Scholar] [CrossRef]
  39. Hwangbo, S.; Nam, K.; Heo, S.; Yoo, C. Hydrogen-based self-sustaining integrated renewable electricity network (HySIREN) using a supply-demand forecasting model and deep-learning algorithms. Energy Convers. Manag. 2019, 185, 353–367. [Google Scholar] [CrossRef]
  40. Đozić, D.J.; Urošević, B.D.G. Application of artificial neural networks for testing long-term energy policy targets. Energy 2019, 174, 488–496. [Google Scholar] [CrossRef]
  41. Pazikadin, A.R.; Rifai, D.; Ali, K.; Malik, M.Z.; Abdalla, A.N.; Faraj, M.A. Solar irradiance measurement instrumentation and power solar generation forecasting based on Artificial Neural Networks (ANN): A review of five years research trend. Sci. Total Environ. 2020, 715, 136848. [Google Scholar] [CrossRef]
  42. Chatziagorakis, P.; Ziogou, C.; Elmasides, C.; Sirakoulis, G.C.; Karafyllidis, I.; Andreadis, I.; Georgoulas, N.; Giaouris, D.; Papadopoulos, A.I.; Ipsakis, D. Enhancement of hybrid renewable energy systems control with neural networks applied to weather forecasting: The case of Olvio. Neural Comput. Appl. 2016, 27, 1093–1118. [Google Scholar] [CrossRef]
  43. Faizollahzadeh Ardabili, S.; Najafi, B.; Shamshirband, S.; Minaei Bidgoli, B.; Deo, R.C.; Chau, K.w. Computational intelligence approach for modeling hydrogen production: A review. Eng. Appl. Comput. Fluid Mech. 2018, 12, 438–458. [Google Scholar] [CrossRef]
  44. Karballaeezadeh, N.; Mohammadzadeh S, D.; Shamshirband, S.; Hajikhodaverdikhan, P.; Mosavi, A.; Chau, K.w. Prediction of remaining service life of pavement using an optimized support vector machine (case study of Semnan–Firuzkuh road). Eng. Appl. Comput. Fluid Mech. 2019, 13, 188–198. [Google Scholar] [CrossRef] [Green Version]
  45. Premalatha, N.; Valan Arasu, A. Prediction of solar radiation for solar systems by using ANN models with different back propagation algorithms. J. Appl. Res. Technol. 2016, 14, 206–214. [Google Scholar] [CrossRef] [Green Version]
  46. Dong, N.; Chang, J.F.; Wu, A.G.; Gao, Z.K. A novel convolutional neural network framework based solar irradiance prediction method. Int. J. Electr. Power Energy Syst. 2020, 114, 105411. [Google Scholar] [CrossRef]
  47. Ghimire, S.; Deo, R.C.; Downs, N.J.; Raj, N. Global solar radiation prediction by ANN integrated with European Centre for medium range weather forecast fields in solar rich cities of Queensland Australia. J. Clean. Prod. 2019, 216, 288–310. [Google Scholar] [CrossRef]
  48. Wang, Z.X.; He, L.Y.; Zheng, H.H. Forecasting the residential solar energy consumption of the United States. Energy 2019, 178, 610–623. [Google Scholar] [CrossRef]
  49. Yousif, J.H.; Kazem, H.A.; Alattar, N.N.; Elhassan, I.I. A comparison study based on artificial neural network for assessing PV/T solar energy production. Case Stud. Therm. Eng. 2019, 13, 100407. [Google Scholar] [CrossRef]
  50. Olson, J.B.; Kenyon, J.S.; Djalalova, I.; Bianco, L.; Turner, D.D.; Pichugina, Y.; Choukulkar, A.; Toy, M.D.; Brown, J.M.; Angevine, W.M. Improving wind energy forecasting through numerical weather prediction model development. Bull. Am. Meteorol. Soc. 2019, 100, 2201–2220. [Google Scholar] [CrossRef]
  51. Torres-Barrán, A.; Alonso, Á.; Dorronsoro, J.R. Regression tree ensembles for wind energy and solar radiation prediction. Neurocomputing 2019, 326, 151–160. [Google Scholar] [CrossRef]
  52. Chacón, M.C.; Díaz, J.A.R.; Morillo, J.G.; McNabola, A. Hydropower energy recovery in irrigation networks: Validation of a methodology for flow prediction and pump as turbine selection. Renew. Energy 2020, 147, 1728–1738. [Google Scholar] [CrossRef]
  53. Dehghani, M.; Riahi-Madvar, H.; Hooshyaripor, F.; Mosavi, A.; Shamshirband, S.; Zavadskas, E.K.; Chau, K.W. Prediction of hydropower generation using grey wolf optimization adaptive neuro-fuzzy inference system. Energies 2019, 12, 289. [Google Scholar] [CrossRef] [Green Version]
  54. Mubiru, J. Using artificial neural networks to predict direct solar irradiation. Adv. Artif. Neural Syst. 2011. [Google Scholar] [CrossRef] [Green Version]
  55. Ahmad, A.; Anderson, T.; Lie, T. Hourly global solar irradiation forecasting for New Zealand. Sol. Energy 2015, 122, 1398–1408. [Google Scholar] [CrossRef] [Green Version]
  56. Kazem, H.A.; Yousif, J.H. Comparison of prediction methods of photovoltaic power system production using a measured dataset. Energy Convers. Manag. 2017, 148, 1070–1081. [Google Scholar] [CrossRef]
  57. Loutfi, H.; Bernatchou, A.; Tadili, R. Generation of horizontal hourly global solar radiation from exogenous variables using an artificial neural network in Fes (Morocco). Int. J. Renew. Energy Res. 2017, 7, 1097–1107. [Google Scholar]
  58. Harrou, F.; Kadri, F.; Sun, Y. Forecasting of Photovoltaic Solar Power Production Using LSTM Approach. In Advanced Statistical Modeling, Forecasting, and Fault Detection in Renewable Energy Systems; IntechOpen: London, UK, 2020. [Google Scholar]
  59. Khandakar, A.; EH Chowdhury, M.; Khoda Kazi, M.; Benhmed, K.; Touati, F.; Al-Hitmi, M.; Gonzales, J.S. Machine learning based photovoltaics (PV) power prediction using different environmental parameters of Qatar. Energies 2019, 12, 2782. [Google Scholar] [CrossRef] [Green Version]
  60. Gensler, A.; Henze, J.; Sick, B.; Raabe, N. Deep Learning for solar power forecasting—An approach using AutoEncoder and LSTM Neural Networks. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002858–002865. [Google Scholar]
  61. O’Leary, D.; Kubby, J. Feature selection and ann solar power prediction. J. Renew. Energy 2017. [Google Scholar] [CrossRef] [Green Version]
  62. Zhang, R.; Feng, M.; Zhang, W.; Lu, S.; Wang, F. Forecast of Solar Energy Production—A Deep Learning Approach. In Proceedings of the 2018 IEEE International Conference on Big Knowledge (ICBK), Singapore, 17–18 November 2018; pp. 73–82. [Google Scholar]
  63. Pasari, S.; Shah, A.; Sirpurkar, U. Wind Energy Prediction Using Artificial Neural Networks. In Enhancing Future Skills and Entrepreneurship; Springer: Cham, Switzerland, 2020; pp. 101–107. [Google Scholar]
  64. Mishra, A.K.; Ramesh, L. Application of neural networks in wind power (generation) prediction. In Proceedings of the 2009 International Conference on Sustainable Power Generation and Supply, Nanjing, China, 6–7 April 2009; pp. 1–5. [Google Scholar]
  65. Sapronova, A.; Johannsen, K.; Thorsnes, E.; Meissner, C.; Mana, M. Deep learning for wind power production forecast. 2017. Available online: http://ceur-ws.org/Vol-1818/paper3.pdf (accessed on 12 September 2020).
  66. Manero, J.; Béjar, J.; Cortés, U. Deep Learning is blowing in the wind. Deep models applied to wind prediction at turbine level. In Journal of Physics: Conference Series, Proceedings of the WindEurope Conference and Exhibition 2019, Bilbao, Spain, 2–4 April 2019; IOP Publishing: Bristol, UK, 2009; Volume 1222, p. 012037. [Google Scholar]
  67. Senthil, K.P. Improved prediction of wind speed using machine learning. EAI Endorsed Trans. Energy Web 2019. [Google Scholar] [CrossRef] [Green Version]
  68. Filik, Ü.B.; Filik, T. Wind speed prediction using artificial neural networks based on multiple local measurements in Eskisehir. Energy Procedia 2017, 107, 264–269. [Google Scholar] [CrossRef]
  69. Hammid, A.T.; Sulaiman, M.H.B.; Abdalla, A.N. Prediction of small hydropower plant power production in Himreen Lake dam (HLD) using artificial neural network. Alex. Eng. J. 2018, 57, 211–221. [Google Scholar] [CrossRef]
  70. Ichiyanagi, K.; Kobayashi, H.; Matsumura, T.; Kito, Y. Application of artificial neural network to forecasting methods of time variation of the flow rate into a dam for a hydro-power plant. In Proceedings of the Second International Forum on Applications of Neural Networks to Power Systems, Yokohama, Japan, 19–22 April 1992; pp. 349–354. [Google Scholar]
  71. Stokelj, T.; Golob, R. Application of neural networks for hydro power plant water inflow forecasting. In Proceedings of the 5th Seminar on Neural Network Applications in Electrical Engineering, Belgrade, Yugoslavia, 27 September 2000; pp. 189–193. [Google Scholar]
  72. Cobaner, M.; Haktanir, T.; Kisi, O. Prediction of hydropower energy using ANN for the feasibility of hydropower plant installation to an existing irrigation dam. Water Resour. Manag. 2008, 22, 757–774. [Google Scholar] [CrossRef]
  73. Lopes, M.N.G.; da Rocha, B.R.P.; Vieira, A.C.; de Sá, J.A.S.; Rolim, P.A.M.; da Silva, A.G. Artificial neural networks approaches for predicting the potential for hydropower generation: A case study for Amazon region. J. Intell. Fuzzy Syst. 2019, 36, 5757–5772. [Google Scholar] [CrossRef]
  74. Shaw, A.R.; Smith Sawyer, H.; LeBoeuf, E.J.; McDonald, M.P.; Hadjerioua, B. Hydropower Optimization Using Artificial Neural Network Surrogate Models of a High-Fidelity Hydrodynamics and Water Quality Model. Water Resour. Res. 2017, 53, 9444–9461. [Google Scholar] [CrossRef]
  75. Aler, R.; Martín, R.; Valls, J.M.; Galván, I.M. A study of machine learning techniques for daily solar energy forecasting using numerical weather models. In Intelligent Distributed Computing VIII; Springer: Berlin, Germany, 2015; pp. 269–278. [Google Scholar]
  76. Ferrero Bermejo, J.; Gomez Fernandez, J.F.; Olivencia Polo, F.; Crespo Márquez, A. A review of the use of artificial neural network models for energy and reliability prediction. A study of the solar PV, hydraulic and wind energy sources. Appl. Sci. 2019, 9, 1844. [Google Scholar] [CrossRef] [Green Version]
  77. Khan, M.; Liu, T.; Ullah, F. A New Hybrid Approach to Forecast Wind Power for Large Scale Wind Turbine Data Using Deep Learning with TensorFlow Framework and Principal Component Analysis. Energies 2019, 12, 2229. [Google Scholar] [CrossRef] [Green Version]
  78. Malof, J.M.; Li, B.; Huang, B.; Bradbury, K.; Stretslov, A. Mapping solar array location, size, and capacity using deep learning and overhead imagery. arXiv 2019, arXiv:1902.10895. [Google Scholar]
  79. Shafique, K.; Khawaja, B.A.; Sabir, F.; Qazi, S.; Mustaqim, M. Internet of things (IoT) for next-generation smart systems: A review of current challenges, future trends and prospects for emerging 5G-IoT scenarios. IEEE Access 2020, 8, 23022–23040. [Google Scholar] [CrossRef]
  80. Hasan, M.K.; Ahmed, M.M.; Musa, S.S. Measurement and Modeling of DTCR Software Parameters Based on Intranet Wide Area Measurement System for Smart Grid Applications. In International Conference on Innovative Computing and Communications; Springer: Berlin, Germany, 2020; pp. 1139–1150. [Google Scholar]
  81. Bose, B.K. Artificial intelligence techniques in smart grid and renewable energy systems—Some example applications. Proc. IEEE 2017, 105, 2262–2273. [Google Scholar] [CrossRef]
  82. Hasan, M.K.; Ahmed, M.M.; Hashim, A.H.A.; Razzaque, A.; Islam, S.; Pandey, B. A Novel Artificial Intelligence Based Timing Synchronization Scheme for Smart Grid Applications. J. Wirel. Pers. Commun. 2020, 114, 1067–1084. [Google Scholar] [CrossRef]
  83. Hossain, E.; Khan, I.; Un-Noor, F.; Sikander, S.S.; Sunny, M.S.H. Application of big data and machine learning in smart grid, and associated security concerns: A review. IEEE Access 2019, 7, 13960–13988. [Google Scholar] [CrossRef]
  84. Skagestad, R. Electricity Demand Forecasting with Gaussian Process Regression. Master’s Thesis, Norwegian University of Science and Technology, Trondheim, Norway, 2018. [Google Scholar]
  85. Akhtaruzzaman, M.; Hasan, M.K.; Kabir, S.R.; Abdullah, S.N.H.S.; Sadeq, M.J.; Hossain, E. HSIC Bottleneck based Distributed Deep Learning Model for Load Forecasting in Smart Grid with A Comprehensive Survey. J. IEEE Access 2020. [Google Scholar] [CrossRef]
  86. Rodrigues, F.; Cardeira, C.; Calado, J.M.F. The daily and hourly energy consumption and load forecasting using artificial neural network method: A case study using a set of 93 households in Portugal. Energy Procedia 2014, 62, 220–229. [Google Scholar] [CrossRef] [Green Version]
  87. Chen, A.; Zhang, X.; Zhou, Z. Machine learning: Accelerating materials development for energy storage and conversion. InfoMat 2020, 2, 553–576. [Google Scholar] [CrossRef]
  88. Gu, G.H.; Noh, J.; Kim, I.; Jung, Y. Machine learning for renewable energy materials. J. Mater. Chem. A 2019, 7, 17096–17117. [Google Scholar] [CrossRef]
  89. Jacobsen, H.K. Integrating the bottom-up and top-down approach to energy–economy modelling: The case of Denmark. Energy Econ. 1998, 20, 443–461. [Google Scholar] [CrossRef] [Green Version]
  90. Rivers, N.; Jaccard, M. Combining top-down and bottom-up approaches to energy-economy modeling using discrete choice methods. Energy J. 2005, 26, 83–106. [Google Scholar] [CrossRef] [Green Version]
  91. Lee, C.Y.; Huh, S.Y. Forecasting new and renewable energy supply through a bottom-up approach: The case of South Korea. Renew. Sustain. Energy Rev. 2017, 69, 207–217. [Google Scholar] [CrossRef]
  92. Gong, J.; Li, C.; Wasielewski, M.R. Advances in solar energy conversion. Chem. Soc. Rev. 2019, 48, 1862–1864. [Google Scholar] [CrossRef] [PubMed]
  93. Arevalo, J.C.; Santos, F.; Rivera, S. Uncertainty cost functions for solar photovoltaic generation, wind energy generation, and plug-in electric vehicles: Mathematical expected value and verification by Monte Carlo simulation. Int. J. Power Energy Convers. 2019, 10, 171–207. [Google Scholar] [CrossRef]
  94. Hejazi, M.A.A.; Bamaga, O.A.; Al-Beirutty, M.H.; Gzara, L.; Abulkhair, H. Effect of intermittent operation on performance of a solar-powered membrane distillation system. Sep. Purif. Technol. 2019, 220, 300–308. [Google Scholar] [CrossRef]
  95. Boussaada, Z.; Curea, O.; Remaci, A.; Camblong, H.; Mrabet Bellaaj, N. A nonlinear autoregressive exogenous (NARX) neural network model for the prediction of the daily direct solar radiation. Energies 2018, 11, 620. [Google Scholar] [CrossRef] [Green Version]
  96. Rocha, P.C.; Fernandes, J.; Modolo, A.; Lima, R.P.; da Silva, M.V.; Bezerra, C.D. Estimation of daily, weekly and monthly global solar radiation using ANNs and a long data set: A case study of Fortaleza, in Brazilian Northeast region. Int. J. Energy Environ. Eng. 2019, 10, 319–334. [Google Scholar] [CrossRef] [Green Version]
  97. Shoaib, M.; Siddiqui, I.; Rehman, S.; Khan, S.; Alhems, L.M. Assessment of wind energy potential using wind energy conversion system. J. Clean. Prod. 2019, 216, 346–360. [Google Scholar] [CrossRef]
  98. Imtiaz, S.; Altaf, M.W.; Riaz, A.; Naz, M.N.; Bhatti, M.K.; Hassan, R.G. Intermittent Wind Energy Assisted Micro-Grid Stability Enhancement Using Security Index Currents. In Proceedings of the 2019 15th International Conference on Emerging Technologies (ICET), Peshawar, Pakistan, 2–3 December 2019; pp. 1–6. [Google Scholar]
  99. Soman, S.S.; Zareipour, H.; Malik, O.; Mandal, P. A review of wind power and wind speed forecasting methods with different time horizons. In Proceedings of the North American Power Symposium, Arlington, TX, USA, 26–28 September 2010; pp. 1–8. [Google Scholar]
  100. More, A.; Deo, M. Forecasting wind with neural networks. Mar. Struct. 2003, 16, 35–49. [Google Scholar] [CrossRef]
  101. Liu, H.; Chen, C.; Lv, X.; Wu, X.; Liu, M. Deterministic wind energy forecasting: A review of intelligent predictors and auxiliary methods. Energy Convers. Manag. 2019, 195, 328–345. [Google Scholar] [CrossRef]
  102. Santhosh, M.; Venkaiah, C.; Vinod Kumar, D. Current advances and approaches in wind speed and wind power forecasting for improved renewable energy integration: A review. Eng. Rep. 2020. [Google Scholar] [CrossRef]
  103. Zapata-Sierra, A.J.; Manzano-Agugliaro, F. Proposed methodology for evaluation of small hydropower sustainability in a Mediterranean climate. J. Clean. Prod. 2019, 214, 717–729. [Google Scholar] [CrossRef]
  104. Rahman, M.M.; Setu, T.A. An implementation for combining neural networks and genetic algorithms. IJCST 2015, 6, 218–222. [Google Scholar]
  105. Pastor, J.; Aber, J.D.; Melillo, J.M. Biomass prediction using generalized allometric regressions for some northeast tree species. For. Ecol. Manag. 1984, 7, 265–274. [Google Scholar] [CrossRef]
  106. Pasari, S.; Shah, A. Time Series Auto-Regressive Integrated Moving Average Model for Renewable Energy Forecasting. In Enhancing Future Skills and Entrepreneurship; Springer: Cham, Switzerland, 2020; pp. 71–77. [Google Scholar]
  107. Poggi, P.; Muselli, M.; Notton, G.; Cristofari, C.; Louche, A. Forecasting and simulating wind speed in Corsica by using an autoregressive model. Energy Convers. Manag. 2003, 44, 3177–3196. [Google Scholar] [CrossRef]
  108. Cox, S.L.; Lopez, A.J.; Watson, A.C.; Grue, N.W.; Leisch, J.E. Renewable Energy Data, Analysis, and Decisions: A Guide for Practitioners; National Renewable Energy Lab. (NREL): Golden, CO, USA, 2018. [Google Scholar]
  109. Sharifzadeh, M.; Sikinioti-Lock, A.; Shah, N. Machine-learning methods for integrated renewable power generation: A comparative study of artificial neural networks, support vector regression, and Gaussian Process Regression. Renew. Sustain. Energy Rev. 2019, 108, 513–538. [Google Scholar] [CrossRef]
  110. Hossain, M.A.; Rahman, M.M.; Prodhan, U.K.; Khan, M.F. Implementation of back-propagation neural network for isolated Bangla speech recognition. Int. J. Inf. Sci. Tech. 2013. [Google Scholar] [CrossRef] [Green Version]
  111. Buyar, V. A Framework for Modeling Sales Prediction Using Big Data; ProQuest Dissertations Publishing, Southern Connecticut State University: New Haven, CT, USA, 2019. [Google Scholar]
  112. Suresh, V.; Janik, P.; Rezmer, J.; Leonowicz, Z. Forecasting solar PV output using convolutional neural networks with a sliding window algorithm. Energies 2020, 13, 723. [Google Scholar] [CrossRef] [Green Version]
  113. Phung, V.H.; Rhee, E.J. A High-Accuracy Model Average Ensemble of Convolutional Neural Networks for Classification of Cloud Image Patches on Small Datasets. Appl. Sci. 2019, 9, 4500. [Google Scholar] [CrossRef] [Green Version]
  114. Rhee, E.J. A Deep Learning Approach for Classification of Cloud Image Patches on Small Datasets. J. Inf. Commun. Converg. Eng. 2018, 16, 173–178. [Google Scholar]
  115. Fawaz, H.I.; Forestier, G.; Weber, J.; Idoumghar, L.; Muller, P.A. Deep learning for time series classification: A review. Data Min. Knowl. Discov. 2019, 33, 917–963. [Google Scholar] [CrossRef] [Green Version]
  116. Shin, H.C.; Roth, H.R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; Summers, R.M. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans. Med. Imaging 2016, 35, 1285–1298. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  117. Toğaçar, M.; Ergen, B.; Cömert, Z. Waste classification using AutoEncoder network with integrated feature selection method in convolutional neural network models. Measurement 2020, 153, 107459. [Google Scholar] [CrossRef]
  118. Li, G.; Wang, H.; Zhang, S.; Xin, J.; Liu, H. Recurrent neural networks based photovoltaic power forecasting approach. Energies 2019, 12, 2538. [Google Scholar] [CrossRef] [Green Version]
  119. Eseye, A.T.; Zhang, J.; Zheng, D. Short-term photovoltaic solar power forecasting using a hybrid Wavelet-PSO-SVM model based on SCADA and Meteorological information. Renew. Energy 2018, 118, 357–367. [Google Scholar] [CrossRef]
  120. Srivastava, S.; Lessmann, S. A comparative study of LSTM neural networks in forecasting day-ahead global horizontal irradiance with satellite data. Sol. Energy 2018, 162, 232–247. [Google Scholar] [CrossRef]
  121. Wang, H.; Yi, H.; Peng, J.; Wang, G.; Liu, Y.; Jiang, H.; Liu, W. Deterministic and probabilistic forecasting of photovoltaic power based on deep convolutional neural network. Energy Convers. Manag. 2017, 153, 409–422. [Google Scholar] [CrossRef]
  122. Dong, D.; Sheng, Z.; Yang, T. Wind power prediction based on recurrent neural network with long short-term memory units. In Proceedings of the 2018 International Conference on Renewable Energy and Power Engineering (REPE), Toronto, ON, Canada, 24–26 November 2018; pp. 34–38. [Google Scholar]
  123. Olah, C. Understanding LSTM Networks. Colah’s Blog. 2015. Available online: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 20 September 2020).
  124. Jia, Y.; Wu, Z.; Xu, Y.; Ke, D.; Su, K. Long Short-Term Memory Projection Recurrent Neural Network Architectures for Piano’s Continuous Note Recognition. J. Robot. 2017. [Google Scholar] [CrossRef]
  125. Puig-Arnavat, M.; Bruno, J.C. Artificial neural networks for thermochemical conversion of biomass. In Recent Advances in Thermo-Chemical Conversion of Biomass; Elsevier: Amsterdam, The Netherlands, 2015; pp. 133–156. [Google Scholar]
  126. Rahman, M.; Bhuiyan, A.A. Comparison Study and Result Analysis of Improved Back-Propagation Algorithms in Bangla Speech Recognition. Int. J. Appl. Res. Inf. Technol. Comput. 2015, 6, 107–117. [Google Scholar] [CrossRef]
  127. Feng, J.; Lu, S. Performance analysis of various activation functions in artificial neural networks. J. Phys. Conf. Ser. 2019, 1237, 022030. [Google Scholar] [CrossRef]
  128. Ko, Y.; Han, S. A duration prediction using a material-based progress management methodology for construction operation plans. Sustainability 2017, 9, 635. [Google Scholar] [CrossRef] [Green Version]
  129. Ruiz, G.R.; Bandera, C.F. Validation of calibrated energy models: Common errors. Energies 2017, 10, 1587. [Google Scholar] [CrossRef] [Green Version]
  130. De Myttenaere, A.; Golden, B.; Le Grand, B.; Rossi, F. Mean absolute percentage error for regression models. Neurocomputing 2016, 192, 38–48. [Google Scholar] [CrossRef] [Green Version]
  131. Wu, L.; Kong, C.; Hao, X.; Chen, W. A Short-Term Load Forecasting Method Based on GRU-CNN Hybrid Neural Network Model. Math. Probl. Eng. 2020. [Google Scholar] [CrossRef] [Green Version]
  132. Wang, H.; Lei, Z.; Zhang, X.; Zhou, B.; Peng, J. A review of deep learning for renewable energy forecasting. Energy Convers. Manag. 2019, 198, 111799. [Google Scholar] [CrossRef]
  133. Liu, H.; Mi, X.; Li, Y. Smart multi-step deep learning model for wind speed forecasting based on variational mode decomposition, singular spectrum analysis, LSTM network and ELM. Energy Convers. Manag. 2018, 159, 54–64. [Google Scholar] [CrossRef]
  134. Debnath, K.B.; Mourshed, M. Forecasting methods in energy planning models. Renew. Sustain. Energy Rev. 2018, 88, 297–325. [Google Scholar] [CrossRef] [Green Version]
  135. Mohandes, S.R.; Zhang, X.; Mahdiyar, A. A comprehensive review on the application of artificial neural networks in building energy analysis. Neurocomputing 2019, 340, 55–75. [Google Scholar] [CrossRef]
Figure 1. Global direct primary energy consumption [10].
Figure 2. An example of a “hybrid-renewable-energy system” (HRES) schematic that involves multiple renewable energy resources.
Figure 3. Subject areas of using machine learning.
Figure 4. Usage of advanced technologies in the hybrid-renewable-energy system (HRES).
Figure 5. Bottom-up (build-up) approach for energy generation forecasting.
Figure 6. Network architectures for predicting a time series with a data-driven method.
Figure 7. Schematic block of the machine learning (ML) based training and prediction stages.
Figure 8. The layered structure of a multi-layer perceptron (MLP) network that produces a single output value.
Figure 9. Schematic diagram of a basic convolutional-neural network (CNN) model for time-series data forecasting.
Figure 10. Basic illustration of a recurrent-neural network (RNN).
Figure 11. Network architectures for forecasting a time series with a data-driven approach.
Figure 12. Back-propagation training algorithm.
Figure 13. Applying an activation function to activate neurons in an ANN.
Figure 14. The graphs of four common activation functions: sigmoid, tanh, ReLU and softplus.
Table 2. Inputs used for renewable energy prediction.
Input Vectors | Solar Energy Prediction | Wind Power Prediction | Hydropower Prediction
Time variable
Seasonal variable
Location variable
Wind power
Solar power
Temperature
Wind speed
Direct irradiance
Diffuse irradiance
Average or seasonal water flow
Hydraulic head
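To make Table 2 concrete, the sketch below assembles several of the listed input vectors into a feature matrix for a solar-power model. The column names, synthetic values, and toy target formula are assumptions for illustration only; an actual study would load measured meteorological and plant data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
hours = np.tile(np.arange(24), 7)                        # one synthetic week, hourly
n = hours.size
features = pd.DataFrame({
    "hour_of_day": hours,                                 # time variable
    "day_index": np.repeat(np.arange(1, 8), 24),          # stand-in for a seasonal variable
    "temperature_C": 20 + 5 * rng.normal(size=n),
    "wind_speed_ms": np.abs(3 + rng.normal(size=n)),
    "direct_irradiance_Wm2": np.clip(800 * np.sin((hours - 6) / 12 * np.pi), 0, None),
    "diffuse_irradiance_Wm2": 100 * rng.random(n),
})
# Toy PV output used as the prediction target for a solar model
target_pv_kW = 0.004 * features["direct_irradiance_Wm2"] + 0.001 * features["diffuse_irradiance_Wm2"]
print(features.head())
print("Feature matrix shape:", features.shape, "| target length:", len(target_pv_kW))
```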
Table 3. Back-propagation (BP) training methods and corresponding Matlab functions.
BP Algorithm | Function Used in Matlab
Gradient descent | traingd
Gradient descent with momentum | traingdm
Gradient descent with adaptive learning rate | traingda
Gradient descent with variable learning rate | traingdx
Scaled conjugate gradient | trainscg
Resilient backpropagation | trainrp
Levenberg-Marquardt | trainlm
Fletcher-Powell conjugate gradient | traincgf
Polak-Ribiere conjugate gradient | traincgp
Conjugate gradient with Powell/Beale restarts | traincgb
Bayesian regularization | trainbr
BFGS quasi-Newton | trainbfg
Random-order incremental training with learning functions | trainr
Batch training with weight and bias learning rules | trainb
One-step secant | trainoss
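Table 3 lists Matlab training functions; as a rough illustration in Python (used for consistency with the earlier sketches), the snippet below shows how a few of these update rules can be approximated by optimizer configurations in TensorFlow/Keras. The mapping is approximate and the layer sizes and learning rates are invented for illustration.

```python
import tensorflow as tf

# Assumed, approximate Python counterparts of a few BP variants from Table 3:
optimizers = {
    "Gradient descent (traingd)": tf.keras.optimizers.SGD(learning_rate=0.01),
    "Gradient descent with momentum (traingdm)": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "Adaptive learning rate (roughly traingda/traingdx)": tf.keras.optimizers.Adam(learning_rate=0.001),
}

for name, opt in optimizers.items():
    # A tiny network; back-propagation computes the gradients, the optimizer applies the update rule.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(24,)),
        tf.keras.layers.Dense(16, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer=opt, loss="mse")
    print(name, "->", type(opt).__name__)
```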