Article

Short-Term Solar Irradiance Prediction with a Hybrid Ensemble Model Using EUMETSAT Satellite Images

1 Department of Physics, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
2 School of Engineering, University of Applied Sciences Upper Austria, 4600 Wels, Austria
* Author to whom correspondence should be addressed.
Submission received: 21 November 2023 / Revised: 24 December 2023 / Accepted: 6 January 2024 / Published: 9 January 2024
(This article belongs to the Special Issue Forecasting, Modeling, and Optimization of Photovoltaic Systems)

Abstract:
Accurate short-term solar irradiance forecasting is crucial for the efficient operation of photovoltaic (PV) power plants. In this research, we introduce a novel hybrid ensemble forecasting model that amalgamates the strengths of machine learning tree-based models and deep learning neuron-based models. The hybrid ensemble model integrates the interpretability of tree-based models with the capacity of neuron-based models to capture complex temporal dependencies within solar irradiance data. Furthermore, stacking and voting ensemble strategies are employed to harness the collective strengths of these models, significantly enhancing the prediction accuracy. This integrated methodology is enhanced by incorporating pixels from satellite images provided by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT). These pixels are converted into structured data arrays and employed as exogenous inputs in the algorithm. The primary objective of this study is to improve the accuracy of short-term solar irradiance predictions, spanning a forecast horizon of up to 6 h ahead. The incorporation of EUMETSAT satellite image pixel data enables the model to extract valuable spatial and temporal information, thus enhancing the overall forecasting precision. This research also includes a detailed analysis of the derivation of the global horizontal irradiance (GHI) using satellite images. The study was carried out and the models tested across three distinct locations in Austria. A detailed comparative analysis was carried out between traditional satellite (SAT) and numerical weather prediction (NWP) models and the hybrid models. Our findings demonstrate a higher skill score for all of the approaches compared to a smart persistence model and consistently highlight the superiority of the hybrid ensemble model for a short-term prediction window of 1 to 6 h. This research underscores the potential for enhanced accuracy of the hybrid approach to advance short-term solar irradiance forecasting, emphasizing its effectiveness at capturing the intricate interplay of the meteorological variables affecting solar energy generation worldwide. The results of this investigation carry noteworthy implications for advancing solar energy systems, thereby supporting the sustainable integration of renewable energy sources into the electrical grid.

1. Introduction

Accurate solar forecasting plays a pivotal role in effectively managing solar power resources and, consequently, in expanding the contribution of solar energy to the overall electricity supply. These forecasts offer valuable insights for making well-informed decisions regarding energy distribution, resulting in cost reductions related to the variability of solar energy, maintaining supply–demand equilibrium, and backup generation.
The choice of forecasting type and tools hinges on the unique needs and objectives of users, whether they are energy operators, traders, or policymakers in the renewable energy sector. There are three categories of solar irradiance forecasting that cater to varying timeframes and specific applications, from real-time energy management to long-term planning and resource optimization.
Day-ahead solar irradiance forecasting predicts solar irradiance levels 24 h in advance with hourly resolution [1,2]. This is typically achieved by leveraging NWP models, historical weather data, and machine learning algorithms. Recent cutting-edge post-processing techniques using model output statistics (MOS) analysis have shown their effectiveness in producing exceptional results for this horizon [3,4]. This type of forecasting plays a crucial role in energy planning, scheduling, and market trading strategies. It enables utilities, grid operators, and solar power plant operators to anticipate solar energy availability, facilitating preparations for energy storage, grid balancing, and scheduling operations in alignment with these forecasts.
Intra-day solar irradiance forecasting is focused on short-term predictions within the current day, generally spanning from 1 to 6 h ahead. It makes use of various data sources, including satellite imagery, ground-based sensors, NWP models, and artificial intelligence (AI) algorithms. Intra-day forecasting is essential for real-time decision-making in solar power generation, allowing operators to optimize grid integration and make dynamic adjustments to PV system output to adapt to rapidly changing irradiance conditions and for the energy trading market [5,6].
Intra-hour solar irradiance forecasting is designed for ultra-short-term predictions within the next hour, with a high level of granularity, often down to 5–10 min intervals. It relies on tools like all-sky imager (ASI) and AI models, including neural networks and deep learning. Typically, this approach involves the utilization of either cloud motion vectors (CMVs) or cloud coverage data obtained through image processing of sky images. Subsequently, a deep learning model is employed to forecast cloud cover for the upcoming 5–10 min [7].
Intra-hour forecasting is of paramount importance for grid operators and energy market stakeholders, as it assists in managing immediate fluctuations in solar power generation. It ensures a balanced supply–demand equation, sustaining grid stability, and is particularly vital for addressing rapid ramp events in real-time energy management.
For intra-day forecasting, it is necessary to obtain extensive cloud field observations. Satellite data, with their wide coverage, serve as suitable data sources for these forecasting horizons [8]. Satellite-based solar forecasting relies on visible channel geostationary satellite imagery for two primary reasons. The first reason is the distinct visibility of cloud cover in these images, as clouds tend to reflect more solar irradiance into outer space compared to the ground, making them appear brighter in the visible channel. However, it is crucial to emphasize that in regions characterized by high albedo terrain, such as snowy areas or salt flats, specific measures are required. These often entail utilizing infrared images for effective treatment [9].
The second reason pertains to the high temporal frequency of these images, with intervals typically ranging from 10 to 30 min, and their moderate spatial resolution, typically up to 1 km. This combination of temporal and spatial attributes allows for the tracking of cloud movement over time by estimating the cloud motion field (CMF) or CMV. CMVs are subsequently used to project the motion of clouds into the near future, enabling the prediction of forthcoming images and, in essence, the future position of clouds. To derive predictions for ground-level solar irradiation, a satellite-based assessment model is applied to each predicted image within each specified time horizon [10,11]. Several methods for estimating CMVs have been reported in the literature; yet, the following two approaches are extensively utilized for forecasting cloud motion.
(i)
Block-matching algorithm: This method entails the identification of cloud formations in sequential images to compute CMVs, under the assumption that cloud structures with specific spatial scales and pixel intensities exhibit stability within the relatively brief time intervals being examined. In practical implementation, matching areas in two consecutive images are located [12].
(ii)
Optical flow estimation—The Lucas–Kanade algorithm: Optical flow is a broad concept in image processing that pertains to the detection of object motion in consecutive images. Optical flow algorithms specifically focus on methods that detect motion by utilizing image gradients. Essentially, they allow for the generation of dense motion vector fields, providing one vector per pixel. The fundamental principle behind optical flow algorithms is the assumption that changes in pixel brightness are solely attributable to the translational motion of elements within the observed scene [13].
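To make the dense optical flow idea in (ii) concrete, the sketch below estimates per-pixel motion vectors between two consecutive visible-channel frames and advects the cloud field forward. It is illustrative only: OpenCV's Farneback dense optical flow is used as a stand-in for any dense estimator, and the frame variables are assumed to be 2D NumPy arrays.

```python
# Illustrative CMV estimation with dense optical flow and simple forward advection.
# `frame_prev` and `frame_next` are consecutive visible-channel images as 2D float arrays.
import cv2
import numpy as np

def estimate_cmv(frame_prev: np.ndarray, frame_next: np.ndarray) -> np.ndarray:
    """Return a (H, W, 2) array of per-pixel motion vectors (dx, dy)."""
    prev8 = cv2.normalize(frame_prev, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    next8 = cv2.normalize(frame_next, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    return cv2.calcOpticalFlowFarneback(prev8, next8, None, 0.5, 3, 15, 3, 5, 1.2, 0)

def advect(frame: np.ndarray, flow: np.ndarray, steps: int = 1) -> np.ndarray:
    """Shift the cloud field `steps` time intervals ahead along the motion field."""
    h, w = frame.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Backward mapping: each target pixel samples the source pixel upstream of the flow.
    map_x = (grid_x - steps * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - steps * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```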
In recent years, numerous forecasting methodologies have been introduced, showcasing precise prediction outcomes. However, comparing the proposed models or methodologies proves challenging due to the diverse locations and conditions associated with these distinct forecasts. Hybrid approaches that combine optical flow methods with artificial neural networks (ANNs) are popular for intra-day forecasting: CMVs are calculated from two subsequent satellite images and used to predict cloud displacement up to 3 h ahead. The initial phase of this methodology comprises calculating CMVs for the infrared channel pixels. Subsequently, these CMVs are extrapolated to the visible channel pixels to derive the cloud index. ANN models are then trained exclusively on the cloud index, the Sun-to-Earth position, and ground measurements of the global horizontal irradiance (GHI) at the present moment, with the aim of forecasting the GHI at future instances [14]. Some recent research explores alternative methodologies and chooses not to utilize the CMV-based approach; instead, the authors directly incorporate historical satellite-derived observations into their convolutional neural network (CNN) architecture, facilitating the forecasting of the GHI up to six hours in advance [15,16]. A study has also been published that evaluates and compares different techniques for satellite-derived cloud motion vectors [17].
The prediction of cloud behavior from satellite data is a challenging endeavor because clouds exhibit dynamic characteristics, constantly changing their position and shape, and even dissipating. These changes are influenced by the intricate dynamics of the atmosphere, which are not explicitly modeled in the forecasting method. Additionally, CMVs are represented as two-dimensional vector planes which ignore the vertical dimension. This implicit assumption implies that, within a local space, clouds are all at the same altitude, which limits the baseline techniques in capturing any phenomena related to vertical cloud motion, such as complex convection processes. To address these limitations, additional assumptions are introduced by the CMV estimation technique to facilitate motion estimation in the sequence of images.
Recent studies have been actively exploring a distinct domain, employing artificial intelligence (AI) methodologies such as CNN and deep learning. This aims to predict future cloud positions by analyzing their present states. Numerous scholarly articles have emerged in this field in recent times [18,19,20,21,22].
In this research, the derivation of the global horizontal irradiance (GHI) from satellite-derived images is presented. Utilizing the pixel data from these satellite images, GHI forecasts up to 6 h ahead are made through hybrid AI models. These predictions are subsequently combined with ground measurement-based hybrid forecast models and GHI forecasts generated by the Global Forecast System (GFS). This paper provides two principal contributions: (1) the utilization of satellite imagery for the accurate estimation of the GHI at any given location, and (2) empirical evidence demonstrating the enhanced forecasting accuracy achieved through the use of hybrid ensemble techniques that integrate tree-based regression machine learning with neural network-based deep learning models.
The paper is divided into the following parts: Section 2 provides a comprehensive discussion of ground measurement data, satellite-derived data, and GFS data, along with relevant details concerning the three sites subject to evaluation. Section 3 focuses on the derivation of the GHI from satellite image pixels and presents a comparison of the applied clear sky models. Error metrics employed in the study are outlined in Section 4. Section 5 delves into the experimental setup, encompassing the forecasting model architecture, feature engineering, and an overview of various models. Section 6 is dedicated to the presentation of observations and the analysis of results. Finally, Section 7 encapsulates the paper with its key insights, conclusions, and scope of future work.

2. Data

2.1. Ground Measurement Data

The ground measurement data employed in this analysis are obtained from three stations located in Austria with the high cloud variability that is typical for central Europe. The geographic locations of the stations are depicted in Figure 1, and the period for which data are evaluated is listed in Table 1.
The ground measurement of the GHI in this analysis is taken from a Kipp & Zonen CMP11 pyranometer at Wels and an SMP10 pyranometer at Wien, each with 3% accuracy for the daily sum of the GHI. The standard benchmark protocol in global GHI forecasting is applied to remove night data and anomalous readings from the pyranometer at dawn and dusk by filtering the data based on the solar zenith angle, keeping only θ_z < 80°. Further, data for any day affected by shadowing or pyranometer calibration problems were removed. The ground measurement data are recorded every 1 min at Wels and Wien and every 10 min at Salzburg. These data are aggregated into hourly averages for the final results analysis.

2.2. Satellite Data

The satellite data utilized in this research were sourced from the Meteosat satellite, operated by EUMETSAT. Positioned at 0 degrees longitude, approximately 36,000 km above the equator, the Meteosat Second Generation (MSG 2) satellite is equipped with the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI scans the Earth across 12 distinct spectral channels.
Channel 1 is dedicated to visible light at 0.6 µm, while channel 2 captures visible light at 0.8 µm. Collectively, these are frequently referred to as the “solar channels”. Channel 12 is the high-resolution visible (HRV) channel, which delivers data with a finer resolution than the other solar channels. The HRV channel has a resolution of 1 × 1 km, whereas the visible 0.6 µm channel offers a resolution of 3 × 3 km. Readers may refer to the references [23,24,25] for comprehensive details on Meteosat and its diverse array of products.
In this research, data from the MSG 0-degree product are used, which features a data update frequency of 15 min. Data from channel 1, corresponding to visible light at 0.6 µm, are used. The focus is on a small area around each location, defined in decimal degrees, since the aim of the study is to apply neural networks to the extracted data for forecasting. The areas for the three locations are as follows: Wels [48.20–48.0, 14.0–14.2], Wien [48.30–48.10, 16.40–16.60], and Salzburg [47.89–47.69, 13.0–13.20], with the station at the center of the respective area.

2.3. Numerical Weather Prediction (GFS) Data

This study utilizes GFS data, accessible for retrieval from the National Centers for Environmental Prediction (NCEP) archive. The data have a temporal resolution of 3 h and a spatial resolution of 0.25° × 0.25° [26]. The data are interpolated to the desired resolution using a method prescribed for the conservation of solar energy [27]. In this work, GFS data were provided by Blue Sky Wetteranalysen for Wels and Salzburg [28] and by Solar Radiation Data (SODA), Mines ParisTech, from their archives.

3. Satellite Data to GHI Transformation

3.1. Cloud Index

In this study, the Meteosat MSG 0-degree 1.5B level product data from EUMETSAT are used. The data are downloaded in GeoTIFF format for the specified period for the three respective stations mentioned in Table 1. A detailed study of the components of radiative energy observed using a satellite sensor in a specific spectral band during daytime conditions, and of the relationships between these components, is presented in [29]. The transformation of HRV radiance into a cloud index is accomplished using an internally developed iteration of the Heliosat method. The utilization of the GeoTIFF format offers notable advantages, as it can be efficiently processed using the Rasterio library in Python. Moreover, GeoTIFF files exhibit a more compact size in comparison to HDF5 and NetCDF formats.
The downloaded Meteosat datasets already contain the radiometrically calibrated radiance (L) observed by the sensor. After acquisition, the radiance pixel data are converted into a numerical array format (N1) using the Rioxarray library and normalized through the application of the following equation:
$$N_1 = \frac{\pi L d^{2}}{I_0} \tag{1}$$
where d is the Earth–Sun distance in AU and I_0 is the total solar irradiance for the HRV channel.
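As an illustration of this preprocessing step, the following minimal sketch applies Equation (1) with Rioxarray; the file path, the Earth–Sun distance, and the channel solar irradiance are assumed to be supplied by the caller.

```python
# Sketch: convert a calibrated HRV radiance GeoTIFF into the normalized pixel array N1
# (Equation (1)). File path, Earth-Sun distance, and channel solar irradiance are inputs.
import numpy as np
import rioxarray

def radiance_to_n1(geotiff_path: str, earth_sun_distance_au: float, i0_channel: float) -> np.ndarray:
    da = rioxarray.open_rasterio(geotiff_path)            # DataArray with dims (band, y, x)
    radiance = da.squeeze("band").values.astype(float)    # calibrated radiance L per pixel
    return np.pi * radiance * earth_sun_distance_au ** 2 / i0_channel
```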
The pixel array data are then normalized using the cosine of the solar zenith angle to account for the first-order solar geometry effect and derive the cloud reflectance ρ [10].
$$\rho = \frac{N_1}{\cos(\theta_z)} \tag{2}$$
Clouds have a notably high albedo, signifying their ability to reflect a substantial portion of incident sunlight. Consequently, when observed from above, satellites register a high reflectance ( ρ ) value in the presence of clouds.
In contrast, the Earth’s surface tends to be less reflective, resulting in lower radiance recorded by satellites under clear sky conditions. Notably, a distinctive scenario arises when the ground is blanketed by snow, as snow exhibits a remarkably high reflectivity that can be mistaken for cloud cover. A valuable approach involves leveraging the brightness temperature difference derived from infrared channels to mitigate this under snow conditions [29].
The Heliosat 2 method is applied for cloud index generation [30]. To compute the cloud albedo or reflectance (ρ_max) and the ground reflectance (ρ_min), a filtering process is applied, considering only those instances when the Sun’s elevation angle exceeds 25 degrees. This selective approach is essential due to the Sun’s position near the horizon during dawn and dusk, leading to very low radiance values. The inclusion of such data can potentially compromise the accuracy of the calculated ρ_max and ρ_min values [30].
The cloud index n for the observed reflectance ρ for each pixel and time is calculated using the following formula:
$$n = \frac{\rho - \rho_{min}}{\rho_{max} - \rho_{min}} \tag{3}$$
The selection of ρ_min and ρ_max is a critical task and highly sensitive to the atmospheric background profile. Studies have reported using the 5th percentile value for ρ_min and the 95th percentile for ρ_max from the distribution of ρ [31]. We experimented with different percentiles and observed that these values are strongly season-dependent, which is visible in the probability distribution shown in Figure 2 for the summer months July and August in comparison to the fall months September–November. In this study, the 25th and 99th percentiles for ρ_min and ρ_max, respectively, consistently gave the best results for all three stations.
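A minimal sketch of the cloud-index computation, assuming a stack of N1 images with matching cos(θ_z) fields and a per-image Sun elevation series, could look as follows; the percentile bounds follow the 25th/99th values reported above.

```python
# Sketch: cloud index from a stack of N1 images (Equations (2) and (3)).
# n1 and cos_zenith have shape (time, y, x); sun_elevation_deg has shape (time,).
import numpy as np

def cloud_index(n1, cos_zenith, sun_elevation_deg, p_min=25.0, p_max=99.0):
    rho = n1 / cos_zenith                                  # Equation (2)
    valid = rho[sun_elevation_deg > 25.0]                  # keep only Sun elevation > 25 deg
    rho_min = np.nanpercentile(valid, p_min)
    rho_max = np.nanpercentile(valid, p_max)
    return (rho - rho_min) / (rho_max - rho_min)           # Equation (3)
```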

3.2. Clear Sky Index

There exists a direct correlation between the clear sky index and the cloud index in the Heliosat method. The clear sky index is delineated as the quotient of global horizontal irradiance to the irradiance predicted in situations of clear sky conditions.
$$K_t = \frac{I}{I_{csky}} \tag{4}$$
We modified the equations for the calculation of the clear sky index K_t from the cloud index n as in [32] by introducing an additional second condition −0.2 < n ≤ 0 in place of −0.2 < n ≤ 0.8, as follows:
$$K_t = \begin{cases} 1.2 & \text{for } n \le -0.2 \\ 1.0 & \text{for } -0.2 < n \le 0 \\ 1 - n & \text{for } 0 < n \le 0.8 \\ 2.0667 - 3.6667\,n + 1.6667\,n^{2} & \text{for } 0.8 < n \le 1.05 \\ 0.05 & \text{for } 1.05 < n \end{cases} \tag{5}$$
The modified equation yielded statistically improved results, especially under clear sky conditions. The global horizontal irradiance I is calculated from the cloud index and I_csky by using Equations (4) and (5).
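The mapping from cloud index to clear sky index and GHI can be sketched as below; the piecewise coefficients follow Equation (5), and the clear sky GHI series is assumed to come from the selected clear sky model.

```python
# Sketch: modified cloud-index-to-clear-sky-index mapping (Equation (5)) and GHI
# reconstruction GHI = Kt * I_csky (Equation (4)).
import numpy as np

def clear_sky_index(n: np.ndarray) -> np.ndarray:
    kt = np.empty_like(n, dtype=float)
    kt[n <= -0.2] = 1.2
    kt[(n > -0.2) & (n <= 0.0)] = 1.0
    m = (n > 0.0) & (n <= 0.8)
    kt[m] = 1.0 - n[m]
    m = (n > 0.8) & (n <= 1.05)
    kt[m] = 2.0667 - 3.6667 * n[m] + 1.6667 * n[m] ** 2
    kt[n > 1.05] = 0.05
    return kt

def satellite_ghi(n: np.ndarray, ghi_clear_sky: np.ndarray) -> np.ndarray:
    return clear_sky_index(n) * ghi_clear_sky
```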

3.3. Clear Sky Model Selection

Numerous clear sky models have been introduced in the scientific literature to calculate the solar radiation reaching the Earth’s surface under clear sky conditions. These models primarily rely on the position of the Sun, input climatic parameters, and aerosol content profiles. In this study, two widely used models are analyzed and compared.
The Perez–Ineichen clear sky model is renowned for its precision in predicting solar radiation on a horizontal surface, coupled with its ease of application through the open-source PVLIB library [33]. This model offers a reliable method for estimating both the direct and diffuse components of solar radiation in clear sky scenarios. The model considers various atmospheric and geographic parameters to estimate the solar radiation, including the solar zenith angle; the extraterrestrial radiation, i.e., the incoming solar radiation at the top of the atmosphere, which depends on the Earth–Sun distance and the solar declination; the optical air mass; the turbidity, a measure of atmospheric clarity where lower values indicate clearer skies; and the ground reflectance.
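As an illustration, a clear sky GHI series for the Wels area can be generated with the PVLIB library roughly as follows; the site altitude and the fixed Linke turbidity value are assumptions of the sketch.

```python
# Sketch: Perez-Ineichen clear-sky GHI with pvlib for the Wels area.
# Altitude and the fixed Linke turbidity are assumed values for the illustration.
import pandas as pd
from pvlib.location import Location

site = Location(latitude=48.1, longitude=14.1, tz="Europe/Vienna", altitude=300)
times = pd.date_range("2021-06-24 00:00", "2021-06-24 23:45", freq="15min", tz=site.tz)
clear_sky = site.get_clearsky(times, model="ineichen", linke_turbidity=3.0)
ghi_clear = clear_sky["ghi"]                       # also provides "dni" and "dhi" columns
```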
The McClear sky model was created within the framework of the MACC (Monitoring Atmospheric Composition and Climate) project. This initiative aims to monitor the atmospheric composition and assess its influence on the climate [34]. The model considers several input parameters, including the location (latitude and longitude), date, time, and solar zenith angle. It can also utilize atmospheric data from sources like reanalysis datasets. One of the key features of the McClear model is its ability to incorporate information about atmospheric aerosols and water vapor content. This feature enhances its precision in scenarios where these atmospheric factors have a notable impact on solar radiation. The model provides estimates of global horizontal and direct normal solar radiation components under clear sky conditions.
The comparative results of the two models analyzed are discussed in the results, Section 6.1.

4. Performance Metrics

The three predominant metrics employed to assess the accuracy of point forecasts are the mean absolute error (MAE), the root mean square error (RMSE), and the mean bias error (MBE). In this paper, the aim is to minimize the RMSE, as the RMSE is better suited to applications such as grid operation and stability, where large errors are critical [35], whereas the MAE is more suited to applications such as day-ahead markets, which have costs that are linear in the error [36].
$$MAE = \frac{1}{N}\sum_{i=1}^{N}\left| y_i - \hat{y}_i \right| \tag{6}$$
$$RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left( y_i - \hat{y}_i \right)^{2}} \tag{7}$$
$$MBE = \frac{1}{N}\sum_{i=1}^{N}\left( y_i - \hat{y}_i \right) \tag{8}$$
where N represents the number of forecasts, y represents the actual ground measurement value, and y ^ denotes the forecast value.
The normalized MAE (nMAE) and RMSE (nRMSE) are computed by dividing the respective metrics by the mean value of the measured GHI for the evaluated test period.
The coefficient of determination denoted as R2 score is a statistical measure commonly used to assess the goodness of fit between observed and predicted values. A higher value for the R2 score indicates a more favorable model fit.
The skill score serves as a critical parameter for assessing model performance when compared to a reference baseline model, typically the smart persistence model in the context of solar forecasting. The skill score (SS) of a particular model for the forecast is defined by the following equation:
$$SS = 1 - \frac{Score_{forecast\ model}}{Score_{persistence\ model}} \tag{9}$$
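The metrics in Equations (6)–(9) translate directly into a few lines of NumPy; the sketch below is illustrative.

```python
# Sketch: point-forecast metrics of Equations (6)-(9).
import numpy as np

def mae(y, y_hat):
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    return np.sqrt(np.mean((y - y_hat) ** 2))

def mbe(y, y_hat):
    return np.mean(y - y_hat)

def skill_score(score_forecast, score_persistence):
    # e.g. skill_score(rmse(y, y_model), rmse(y, y_smart_persistence))
    return 1.0 - score_forecast / score_persistence

def nrmse(y, y_hat):
    # normalized by the mean measured GHI of the test period
    return rmse(y, y_hat) / np.mean(y)
```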
Additionally, the Kolmogorov–Smirnov integral (KSI) score is used to measure the similarity between the distribution of the forecast data and the distribution of the ground measurement data. It is often applied in the context of comparing the cumulative distribution functions (CDFs) of two datasets. The KSI score quantifies the deviation between the two CDFs.

5. Experimental Setup

In this study, ML and DL models were analyzed and, subsequently, a hybrid model was developed to allow for higher degrees of model complexity. The methodology employed in this study and the fundamental stages of the modeling and evaluation algorithm for assessing GHI forecasts are depicted in Figure 3.
The initial section on the left side of the flow chart represents the hybrid model (A), which solely relies on ground measurement data without incorporating any exogenous features. The middle segment beneath EUMETSAT demonstrates the computation of the GHI from satellite image pixels at the current time (t0), whereas the rightmost part illustrates forecasting of the GHI from satellite pixels (blue dashed lines). The hybrid ensemble model (B) is constructed using the output of Hybrid + SAT + GFS by applying a weighted average ensemble methodology.
The initial forecast for hourly solar irradiance within the day usually takes place early in the morning, typically between 5:00 and 7:00 local time. This forecast provides predictions for solar irradiance levels throughout the day, broken down into hourly intervals. The exact timing of the first forecast can vary depending on the weather forecasting service or organization providing the information. In view of this, the models are analyzed by issuing the first forecast at 6:00 hr and only during daylight hours, i.e., when the solar zenith angle θ_z < 80°, for the following horizons:
t0: every hour starting at 6:00 hr
τ: 1, 2, …, 6 hr
∆: 15 min
where t0 is the forecast issuing time, τ is the forecast horizon, and ∆ is the data aggregation interval.
This study utilizes a walk-forward approach, as visualized in Figure 4, to improve and evaluate the performance of the model. These models are trained with the first three months of the available data and, subsequently, trained until the previous month and tested for the following month. The outcomes have shown that this approach can be effectively employed even in situations where a substantial volume of historical data is lacking.
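A minimal sketch of this walk-forward scheme, assuming a time-indexed pandas DataFrame of hourly samples, is given below.

```python
# Sketch: monthly walk-forward evaluation. `df` is a time-indexed DataFrame of hourly
# samples; the first three months form the initial training set.
import pandas as pd

def walk_forward_splits(df: pd.DataFrame, first_train_months: int = 3):
    months = df.index.to_period("M").unique().sort_values()
    for i in range(first_train_months, len(months)):
        train = df[df.index.to_period("M") <= months[i - 1]]   # everything up to previous month
        test = df[df.index.to_period("M") == months[i]]        # the following month
        yield train, test

# for train_df, test_df in walk_forward_splits(df):
#     model.fit(train_df[features], train_df["ghi"])
#     score = rmse(test_df["ghi"].values, model.predict(test_df[features]))
```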

5.1. Model Description

The different models used in this study are listed in Table 2.
A comprehensive overview of the theory and a description of the ML models used for modeling in this study are presented by the authors of [37].

5.2. Deep Learning (DL) Models

5.2.1. LSTM

LSTM, an abbreviation for long short-term memory, represents a variation of the recurrent neural network (RNN) concept. The primary objective of this model approach is to effectively capture and retain long-term temporal relationships, mitigating the challenge of vanishing gradients often encountered in conventional RNN [38]. In addition, LSTM exhibits a noteworthy capability for learning and managing nonlinearity.
Within the LSTM architecture, memory blocks are constructed, forming an interconnected network designed to sustain and manage an information flow over time using nonlinear gating mechanisms. The structure of a standard LSTM architecture involves multiple individual units known as neurons, interconnected by weights. These neurons receive input from external parameters or other neurons, and through the training process, the associated weights are adjusted to achieve an optimal distribution of weights. Figure 5 illustrates the fundamental structure of an LSTM memory cell, encompassing these gating elements, input signals, activation functions, and the resulting output [39]. Notably, the output of this block is recurrently linked back to the input of the same block, involving all the gate components in this cyclical process.
The step-by-step description of the gates and the mathematical equations associated with them are discussed below:
Forget gate: This step concentrates on updating the forget gate component, which entails combining the current input x_t with the output h_(t−1) of that LSTM unit from the previous iteration, as expressed by the following equation [39]:
$$f_t = \sigma\left(W_f \cdot [h_{t-1}, x_t] + b_f\right) \tag{10}$$
where W_f is the weight matrix associated with x_t and h_(t−1), and b_f represents a standard bias weight vector.
Input gate: In this step, a sigmoid layer called the input gate decides which values are to be updated by combining the current input x_t and the output h_(t−1) of that LSTM unit [39].
$$i_t = \sigma\left(W_i \cdot [h_{t-1}, x_t] + b_i\right) \tag{11}$$
Cell: A tanh layer creates a vector of new candidate values, $\tilde{C}_t$, that could be added to the state [39].
$$\tilde{C}_t = \tanh\left(W_C \cdot [h_{t-1}, x_t] + b_C\right) \tag{12}$$
The tanh function squashes input values to the range of −1 to 1 and is symmetric around the origin, with an “S” shape.
In this study, the rectified linear unit (ReLU) activation function is implemented for all deep learning models as it has demonstrated superior results during the fine-tuning of hyperparameters. The ReLU function is defined as follows:
$$f(x) = \max(0, x) \tag{13}$$
In other words, for any input x, if x is positive, ReLU returns x, and if x is negative, ReLU returns 0. This results in a piecewise-linear, non-decreasing activation function. Further, ReLU has the distinct advantages of simplicity and computational efficiency, and it mitigates the vanishing gradient problem in deep networks. This also promotes sparse activation, which can lead to better feature learning.
A fraction of the old state is discarded by multiplying it with f_t, and the result is added to the product $i_t \tilde{C}_t$ of the input gate and the candidate values to update the new cell state [39]:
$$C_t = f_t\, C_{t-1} + i_t\, \tilde{C}_t \tag{14}$$
Output gate: The output layer consists of a single neuron employing an identity activation function. This step calculates the output gate and the current hidden state, and can be depicted as below [39]:
$$o_t = \sigma\left(W_o \cdot [h_{t-1}, x_t] + b_o\right) \tag{15}$$
$$h_t = o_t \tanh(C_t) \tag{16}$$
The following variants of the LSTM model based on the topologies of the layers [40] are applied in this research:
Vanilla LSTM is the most basic model that has a single hidden layer of LSTM units, and an output layer to make a prediction.
Stacked LSTM: As the name implies, this variation comprises several hidden layers that are arranged one above the other. An LSTM layer by default requires input data in 3D format and outputs 2D data.
Bidirectional LSTM: The key feature of this architecture is that it processes input sequences in both forward and backward directions, allowing it to capture information from past and future contexts simultaneously, both of which are connected to the same output layer to combine the interpretations. Bidirectional LSTM achieves a full error gradient calculation by eliminating the one-step truncation that was initially present in LSTM [40]. Bidirectional processing empowers the model to capture dependencies in both forward and backward directions, proving especially beneficial for tasks where context from future data is significant. Moreover, this model exhibits the ability to capture extended dependencies, thereby enhancing the model’s comprehension of the input sequence.
Dense LSTM: The model’s inception involves an input layer designed to receive sequential data. There are one or more LSTM layers following the input layer, tailored for the processing of sequential data, as they retain hidden states and capture temporal dependencies. Densely connected layers, often referred to as fully connected layers, are established subsequent to the LSTM layers. These densely connected layers are used to perform various non-linear transformations and reduce the dimensionality of the data.
Figure 6 illustrates a typical schematic of the sequential flow of an LSTM neural network with all layers, input, and output nodes. The hidden layer nodes are the LSTM cells while dealing with the sequential data.
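The LSTM variants listed above can be assembled with Keras roughly as follows; the 64-unit layer width matches the configuration reported later in this section, while the dense-layer width is an assumption of the sketch.

```python
# Sketch: the LSTM variants described above, built with Keras.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Bidirectional, Dense

def build_lstm(n_steps: int, n_features: int, variant: str = "stacked") -> Sequential:
    model = Sequential()
    if variant == "vanilla":
        model.add(LSTM(64, input_shape=(n_steps, n_features)))
    elif variant == "stacked":
        model.add(LSTM(64, return_sequences=True, input_shape=(n_steps, n_features)))
        model.add(LSTM(64))
    elif variant == "bidirectional":
        model.add(Bidirectional(LSTM(64), input_shape=(n_steps, n_features)))
    elif variant == "dense":
        model.add(LSTM(64, input_shape=(n_steps, n_features)))
        model.add(Dense(32, activation="relu"))
    model.add(Dense(1))                                   # single regression output
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```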

5.2.2. GRU

The gated recurrent unit (GRU) is a type of recurrent neural network (RNN) architecture specifically designed to address some of the limitations of traditional RNNs like vanishing gradients. GRU is a simpler and computationally more economical option compared to the long short-term memory (LSTM) network, yet it retains the capacity to capture extended dependencies in sequential data.
GRU consists of a solitary hidden layer and two principal gates, the reset gate r_t and the update gate z_t. These gates control the flow of information through the hidden state, allowing the network to determine what information to retain and what to discard. A schematic of a typical GRU block is illustrated in Figure 7, together with the corresponding equations and the function of each gate [41].
The update gate z_t for time step t is derived by using the equation
$$z_t = \sigma\left(W_z \cdot [h_{t-1}, x_t] + b_z\right) \tag{17}$$
where
  • σ denotes the sigmoid activation function;
  • W_z denotes the weight matrix for the update gate;
  • [h_(t−1), x_t] is the concatenation of the previous hidden state and the current input;
  • b_z represents the bias term for the update gate.
The reset gate r_t is akin to the update gate, but it determines what information from the previous hidden state should be erased.
$$r_t = \sigma\left(W_r \cdot [h_{t-1}, x_t] + b_r\right) \tag{18}$$
Candidate hidden state $\tilde{h}_t$: A novel memory component that utilizes the reset gate to retain pertinent information from previous time steps. It is calculated as follows:
$$\tilde{h}_t = \tanh\left(W_h \cdot [r_t \odot h_{t-1}, x_t] + b_h\right) \tag{19}$$
where ⊙ represents element-wise multiplication.
Final hidden state h_t: In the final stage, the network computes h_t, a vector that encapsulates information pertaining to the current unit and transfers it further within the network. To achieve this, the update gate plays a crucial role by discerning what information to gather from the present memory content $\tilde{h}_t$ and what to retain from prior time steps, represented by h_(t−1).
$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \tag{20}$$
The advantages of GRU compared to LSTM are threefold: (1) Compared to LSTM, GRU features a more streamlined architecture with a reduced number of gates and less intricate computations. This characteristic simplifies the training and implementation processes. (2) Due to its simplicity, GRU is computationally more efficient than LSTM, which can lead to faster training and inference times. (3) GRU is a good choice for tasks where capturing short-term dependencies in sequences is sufficient and a more complex model like LSTM may not be necessary. However, GRU has limited capacity in capturing very long-term dependencies in sequences, as it does not have a dedicated cell state like LSTM. LSTM is better suited for tasks that require modeling long-range dependencies.
The input data format for deep learning (DL) models differs from that of machine learning (ML) model algorithms. The data are structured as a 3-dimensional array, where the dimensions represent the batch size, the number of time-steps, and the number of input features. Through experimentation with hidden layers containing 24, 64, and 128 nodes, it was observed that 64 nodes produced the most favorable results.
All deep learning models are trained and compiled in TensorFlow with buffer sizes equal to the length of the training data, 50 epochs with early stopping, mean squared error loss, mean absolute error metrics, and Adam optimization. The hyperparameters of the ML models are optimized by applying the sklearn grid search.
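A minimal sketch of the 3D input shaping and of this training configuration is shown below; the window length, the target column position, and the early-stopping patience are assumptions of the sketch.

```python
# Sketch: shaping data into (samples, time steps, features) and the training setup above.
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping

def make_windows(values: np.ndarray, n_steps: int):
    """values: (samples, features) array in time order, target assumed in column 0."""
    X, y = [], []
    for i in range(n_steps, len(values)):
        X.append(values[i - n_steps:i, :])     # the n_steps preceding rows
        y.append(values[i, 0])
    return np.array(X), np.array(y)

# X_train, y_train = make_windows(train_array, n_steps=6)
# model.fit(X_train, y_train, epochs=50, validation_split=0.1,
#           callbacks=[EarlyStopping(patience=5, restore_best_weights=True)])
```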
Appendix A lists the various hyperparameters of both the ML and DL models that were tuned and optimized. The optimized values shown are for a particular case of optimizing the N1 pixel (SAT) for a given month for Wien. These parameters are trained and optimized for every month, as shown in Figure 4, and the trained model is tested on the subsequent month for each station. The optimized parameters remain the same for the different stations but may vary slightly depending on whether the SAT or Hyb model is optimized and also on the month.

5.2.3. DL Voting Ensemble Model

A voting ensemble model is built from the optimized stacked LSTM, bidirectional LSTM, dense LSTM, and GRU models. This is achieved by loading the pretrained models and making predictions using the individual models, known as base models. Input layers are defined for the ensemble model. These input layers are used to feed the predictions of the individual models into the ensemble.
The output tensors of the individual models are obtained by passing the input layers as inputs to each model. This produces output1, output2, output3, and output4, which represent the outputs of each LSTM model. An ensemble layer is then created using the average() function, which averages the outputs of the individual models (output1, output2, output3, and output4). Averaging is a common method for combining predictions in a voting ensemble for LSTM-based models.
The ensemble model is then defined using the Model class from Keras. The input layers and the ensemble layer are specified as inputs and outputs, respectively. This creates a new model, the ensemble model, which takes predictions from the four models and produces an ensemble prediction. The ensemble model is compiled with the Adam optimizer and the mean squared error (MSE) loss function. Additionally, accuracy is tracked as a metric.
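A compact sketch of this construction is given below; the model file names, the input shape, and the use of the MAE metric in place of accuracy are illustrative assumptions.

```python
# Sketch: averaging ensemble over four pretrained base models.
from tensorflow.keras.layers import Input, Average
from tensorflow.keras.models import Model, load_model

paths = ["stacked_lstm.h5", "bilstm.h5", "dense_lstm.h5", "gru.h5"]   # illustrative names
base_models = [load_model(p) for p in paths]
for i, m in enumerate(base_models):
    m._name = f"base_{i}"                      # avoid duplicate layer names in the ensemble

inp = Input(shape=(6, 9))                      # (time steps, features) placeholder
outputs = [m(inp) for m in base_models]
ensemble_out = Average()(outputs)              # average the four base-model predictions
ensemble_model = Model(inputs=inp, outputs=ensemble_out)
ensemble_model.compile(optimizer="adam", loss="mse", metrics=["mae"])
```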
Leveraging a voting ensemble with diverse LSTM models is a potent strategy for enhancing the precision and resilience of predictions. It utilizes the diversity of multiple LSTM models to provide more reliable results, making it a valuable tool in various machine learning and sequence modeling tasks.
Recent literature demonstrates an increasing research interest in the utilization of deep learning models for predicting solar irradiance for intra-day and intra-hour forecast horizons [42,43,44,45,46].

5.3. Weighted Average Ensemble Model

In an ensemble model, particularly an average ensemble, the combined forecast obtained by averaging the predictions of each individual model is evaluated for each hour of the day, as expressed by Equation (21):
$$\hat{y}_{av} = \frac{\hat{y}_a + \hat{y}_b + \hat{y}_c + \hat{y}_d + \hat{y}_e}{5} \tag{21}$$
With the primary aim of minimizing the RMSE in short-term intra-day forecasting, the weighted average ensemble approach computes the performance weight of each model for different hours of the day by employing the reciprocal of the RMSE. For instance, the weight assigned to the forecast of model a is determined as follows:
$$w_a = \frac{IRMSE_a}{IRMSE_a + IRMSE_b + IRMSE_c + IRMSE_d + IRMSE_e} \tag{22}$$
where IRMSE = 1/RMSE, i.e., the inverse of the RMSE for the respective model.
The forecast generated by the weighted average ensemble is subsequently assessed on an hourly basis using Equation (23).
$$\hat{y}_{wav} = \frac{w_a \hat{y}_a + w_b \hat{y}_b + w_c \hat{y}_c + w_d \hat{y}_d + w_e \hat{y}_e}{w_a + w_b + w_c + w_d + w_e} \tag{23}$$
The hybrid model is designed to first compute the hourly weights for each individual ML and DL model independently. Following this calculation, rankings are assigned based on the cumulative weight of each model, and the six top-performing models are automatically selected according to their ranking into a weighted average ensemble model. This methodology enables the selection of the most effective models while simultaneously optimizing the trade-off between bias and variance. The selected model is labeled as the hybrid (Hyb) model in the subsequent analysis and discussion.
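A minimal sketch of the inverse-RMSE weighting in Equations (22) and (23) is given below; the example RMSE values are purely illustrative.

```python
# Sketch: inverse-RMSE weighting of member forecasts (Equations (22) and (23)).
import numpy as np

def inverse_rmse_weights(rmse_per_model: np.ndarray) -> np.ndarray:
    inv = 1.0 / rmse_per_model
    return inv / inv.sum()

def weighted_average_forecast(member_forecasts: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """member_forecasts: (n_models, n_samples); weights: (n_models,)."""
    return np.average(member_forecasts, axis=0, weights=weights)

# rmse_vals = np.array([55.0, 60.0, 58.0, 62.0, 57.0])   # validation RMSE per model (illustrative)
# w = inverse_rmse_weights(rmse_vals)
# y_hyb = weighted_average_forecast(member_preds, w)
```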
Additionally, we assess cases where a hybrid model is combined with satellite data referred to as (Hyb + SAT), as well as scenarios where it is further enhanced by integrating GFS forecast data, labeled as (Hyb + SAT + NWP).

5.4. Baseline Naive Model

In the academic literature, a typical approach involves comparing the outcomes of the proposed models with those of simpler or reference models to evaluate the enhancements. A commonly employed reference model is the smart persistence model [47]. The smart persistence model relies exclusively on historical ground radiation data from the specific location and utilizes the clear sky index series. It forecasts the clear sky index for each time horizon τ as the average of the τ preceding clear sky index values [48]. In statistical terms, this corresponds to a rolling mean and is defined by the equation:
$$\hat{K}(t_0 + \tau) = \mathrm{mean}\left(K(t_0 - \tau), \ldots, K(t_0)\right) \tag{24}$$
Similar methodology is applied for the N1 pixel to calculate the persistence value of N1.
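A minimal sketch of the smart persistence baseline, assuming hourly clear sky index and clear sky GHI series, is shown below; the same rolling-mean logic is applied to the N1 pixel series.

```python
# Sketch: smart persistence baseline (Equation (24)). kt and ghi_clear_sky are hourly
# series aligned on the same index; tau is the forecast horizon in hours.
import pandas as pd

def smart_persistence(kt: pd.Series, ghi_clear_sky: pd.Series, tau: int) -> pd.Series:
    kt_hat = kt.rolling(window=tau).mean()        # average of the tau preceding clear sky indices
    return kt_hat.shift(tau) * ghi_clear_sky      # forecast issued at t0, valid at t0 + tau
```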

5.5. Joint and Marginal Distribution

The distribution-based forecast verification framework offers a robust method for evaluating a model’s performance by analyzing the extent to which the forecasted values diverge from the observed value ranges [49].
The interconnection among joint, marginal, and conditional distributions of two random variables can be statistically expressed through the application of Bayes’ theorem. In the specific context of meteorology, where these variables pertain to forecast and observation, this very association is recognized as the Murphy–Winkler factorization expressed as [50]:
$$p(f, x) = p(x \mid f)\, p(f) \tag{25}$$
$$p(f, x) = p(f \mid x)\, p(x) \tag{26}$$
where
  • p(f, x) is the probability of both events, f and x, occurring simultaneously;
  • p(x | f) denotes the conditional probability of event x occurring, given that event f has already occurred;
  • p(f) denotes the probability of event f occurring;
  • p(f | x) denotes the conditional probability of event f occurring, given that event x has already occurred;
  • p(x) represents the probability of event x occurring.
The joint probability of both events happening, referred to as p(f, x), is computed as the product of the conditional probability of f given x and the probability of x.

5.6. Feature Engineering

Feature engineering serves as a pivotal step in both machine and deep learning pipelines, enabling the transformation of raw data into a more informative and suitable format for training and enhancing model performance. Its significance extends to influencing the bias–variance trade-off, thereby mitigating overfitting and underfitting in machine learning models. The bias–variance trade-off aims to find a balance between underfitting and overfitting. Feature engineering is a key tool for achieving this balance because it allows one to control the model’s complexity using techniques like dimensionality and noise reduction, regularization, and feature expansion.
In the context of satellite pixel data, which have a sampling frequency of 15 min, we employ the Fourier transform on both the hourly and quarter-hourly (15 min, denoted as ‘minute’) values. This transformation allows us to identify and extract the predominant frequency components that encapsulate the most significant recurring patterns within the dataset. Subsequently, these frequency components are harnessed as crucial input features for the ML and DL models. The harmonic order k is in the range 1–5 for the sin(2πk) and cos(2πk) terms when extracting the Fourier features of the hour and quarter hour.
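A minimal sketch of these cyclic features is given below; the normalization of the arguments (hour of day over 24, minute over 60) and the feature names are assumptions of the sketch.

```python
# Sketch: cyclic Fourier features of the hour and quarter-hour for k = 1..5.
import numpy as np
import pandas as pd

def add_fourier_features(df: pd.DataFrame, k_max: int = 5) -> pd.DataFrame:
    hour_frac = df.index.hour / 24.0
    minute_frac = df.index.minute / 60.0
    for k in range(1, k_max + 1):
        df[f"hour_sin{k}"] = np.sin(2 * np.pi * k * hour_frac)
        df[f"hour_cos{k}"] = np.cos(2 * np.pi * k * hour_frac)
        df[f"min_sin{k}"] = np.sin(2 * np.pi * k * minute_frac)
        df[f"min_cos{k}"] = np.cos(2 * np.pi * k * minute_frac)
    return df
```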
Important features are extracted by applying feature importance assessment using random forest and XGB by randomly selecting 10 days of data for each month. These features are further examined using a correlation method. Specifically, features exhibiting a correlation value greater than |0.3| with N1 are deemed pertinent, as illustrated in the heatmap in Figure 8 for the Wien station, and subsequently used for the modeling, where:
  • N1: satellite pixel value (target);
  • Cosze: cos(θ_z);
  • ghi_mc: McClear GHI value (scaled using min-max scaler);
  • Pers_N1: persistent N1 value as defined by Equation (24);
  • Pers_csi: persistent clear sky index, K̂(t_0 + τ), as defined by Equation (24);
  • Hour_cos1: Fourier transform cosine of hour with k = 1;
  • min_sin2: Fourier transform sin of quarter hour with k = 2;
  • min_cos4: Fourier transform cosine of quarter hour with k = 4;
  • min_cos5: Fourier transform cosine of quarter hour with k = 5.
The feature importance for other stations also exhibits similar patterns.
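A sketch of this screening procedure (tree-based importances followed by a |corr| > 0.3 filter against N1) could look as follows; the number of trees and the use of a random forest alone, without XGB, are simplifications.

```python
# Sketch: feature screening with tree-based importances and a correlation filter
# (|corr| > 0.3 against the N1 target).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def select_features(df: pd.DataFrame, target: str = "N1", corr_threshold: float = 0.3):
    X, y = df.drop(columns=[target]), df[target]
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    importances = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
    corr = df.corr()[target].drop(target)
    kept = corr[corr.abs() > corr_threshold].index.tolist()
    return importances, kept
```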

6. Results and Discussion

6.1. Overview and Analysis of Clear Sky Models

The following observations emphasize the importance of understanding and accounting for the deviations of satellite-derived GHI from measurements when applying clear sky models for GHI estimation.
The GHI is calculated from the satellite-derived clear sky index using Equation (4) by applying two clear sky models. The violin and distribution plots in Figure 9 clearly show that the Perez clear sky model deviates significantly from ground measurements under clear sky conditions with a GHI above 800 W/m2. Both models show statistically significant deviations from the ground measurement data around 150–180 W/m2, but the Perez model additionally shows a sharp deviation from ground measurements between 200 and 400 W/m2.
In this study, the joint and marginal distributions of the nowcast global horizontal irradiance (GHI) obtained from the McClear and Perez models, compared to ground observations collected at three monitoring stations, are analyzed. Figure 10 depicts the visual representation of these distributions, where the diagonal black line corresponds to the ground measurements.
For the joint distribution using the McClear model, roughly equal probabilities are observed on both sides of the diagonal. However, for GHI calculated using the Perez model, it is discovered that it tends to underpredict the GHI. This is evident as the probability density is higher below the diagonal line, confirming that the Perez model’s predictions ( f ) are less than the observed values ( x ).
A significant pattern is observed upon closer inspection of the 2D kernel density contours. For low-irradiance conditions, the satellite-derived GHI using the McClear model exhibits a slight upward drift above the diagonal line. Conversely, the Perez model shows the opposite behavior, with the GHI prediction drifting below the diagonal.
We also note a distinct characteristic regarding the satellite observations at the Salzburg location. These observations exhibit greater dispersion compared to the other two monitoring sites. This disparity might be ascribed to the frequency of data collection in Salzburg, which occurs at 10 min intervals. This interval could potentially introduce an aggregation effect in the ground measurement errors when these data points are averaged to an hourly resolution. In contrast, the other two sites provide ground measurement data at a 1 min resolution, potentially resulting in fewer aggregation-induced discrepancies.
These observations offer valuable insights into how the McClear and Perez models perform compared with ground-based GHI measurements, particularly in central Europe. They affirm that the McClear model surpasses the Perez model in terms of performance. This is further substantiated by the error metrics in Table 3 for the two models for the GHI derived using satellite data. In light of these observations, the McClear model is applied for further work and assessment of forecast models.

6.2. Satellite-Derived GHI

The quality of the GHI data obtained from Meteosat satellite pixels is demonstrated through a time series plot dated 24 June 2021 for Wels, presented in Figure 11. This figure highlights three specific scenarios: (a) clear sky conditions, (b) cloudy conditions, and (c) partially cloudy conditions. These specific events are graphically compared using images (d, e, f) acquired by an ASI with their corresponding satellite cloud images (m, n, o). Green arrows in the satellite cloud images represent cloud motion vectors, indicating cloud movement and direction. The cloud mask and pixel images from EUMETSAT are depicted in (g, h, i) and (j, k, l), respectively.
The station’s position is marked at the center of the red square box in the satellite cloud and cloud mask images. Within the cloud mask images (g, h, i), the presence of green indicates clear sky regions, whereas the white areas correspond to cloud coverage. In the RGB images (j, k, l), the green areas signify clear skies, the light yellow regions represent scattered clouds, and the dark blue and black regions denote the presence of heavily overcast dark clouds. These images, corresponding to the three identified events in the time series plot, clearly demonstrate a correlation and the effectiveness of our algorithm in accurately capturing clear sky, cloudy, and partially cloudy conditions.

6.3. Hybrid Model Results Analysis

Table 4 presents a summary of the RMSE (W/m2) values for the various individual models utilized in this study, as well as ensemble models and the ultimate hybrid model constructed using the six top-performing models. The hybrid model is denoted in bold font, while the voting ensemble model derived from deep learning (DL) models is highlighted in italics, and the voting ensemble model based on machine learning (ML) models is emphasized in bold italics. The table also showcases the percentage improvement (%) of the hybrid model over the least-performing individual model, as well as the corresponding improvement for the voting ensemble models compared to its lowest-performing constituent model for ML and DL, respectively.
The hybrid model exhibits a remarkable improvement, enhancing the RMSE score by 4.1% to 10.6% across various forecast horizons. These results also reveal that voting is a superior choice compared to stacking when it comes to ML ensemble models. Moreover, the performance of the ML voting ensemble model exhibits enhancement as the forecast horizon extends, whereas the opposite trend is observed for the DL voting ensemble model.
Comparable findings are observed at the other two locations, clearly underscoring the benefits of employing a hybrid model as opposed to relying on a single model for forecasting purposes.

6.4. Forecast Results

In this segment, we provide an assessment of the intra-day GHI forecasts up to 6 h ahead generated by the various models, conducting a comparative analysis to determine the performance of the models. Following this, we perform an in-depth examination of forecast performance across different clusters of clear sky indices.
Table 5, Table 6 and Table 7 provide an analysis of forecast performance for the three distinct stations, measured in terms of the RMSE in units of W/m2. Each row in these tables represents different forecasting models and combinations of models used in this research. The results emphasize the consistent and superior performance of all the developed models in comparison to the smart persistence model. Notably, during the first hour of the forecast, the GFS exhibits subpar performance, aligning with the existing literature that highlights challenges in initializing atmospheric boundary conditions for NWP models.
The combination of the hybrid ensemble model, comprising the hybrid model in conjunction with the satellite and NWP models, Hyb + SAT + NWP, surpasses all other models across all forecast horizons. Nonetheless, it is important to acknowledge that the RMSE tends to rise as the forecast horizon is extended, as anticipated. For the smart persistence model, the percentage increase in the RMSE over the 1 h to 6 h horizons spans the ranges {28.84–47.54%}, {36.76–70.92%}, and {28.78–71.93%} for Wien, Salzburg, and Wels, respectively. In contrast, the corresponding rate of increase observed for Hyb + SAT + NWP across the same time frame is in the range of {16.95–28.79%}, {20.67–44.14%}, and {17.68–51.18%} for the respective three stations. The relatively modest rise in the RMSE for our models over extended time horizons underscores the efficiency of the model.
Figure 12 illustrates the normalized nRMSE (%) and RMSE skill score for all models across various time horizons at each station. Notably, SAT results consistently outperform the hybrid model up to a 3 h horizon, beyond which the hybrid model’s performance exhibits a marginal improvement over SAT. This observation aligns with the fact that satellite data correspond to time lags from ( t 0 ) to ( t 0 3 ) hr.
The combination of Hyb + SAT presents a distinct advantage over individual models, with the effect becoming more pronounced up to a 3 h lead time. Beyond this point, the addition of NWP offers a clear advantage for all three locations as the time horizon increases. In terms of performance, Hyb + SAT + NWP remains nearly unchanged when compared to the combination of Hyb + SAT during the first hour, but afterward, it delivers a 5% to 20% improvement in skill score from 2 to 6 h ahead.
In the case of a 6 h ahead forecast, the best-performing Hyb + SAT + NWP model only demonstrates a marginal advantage over the NWP model for Salzburg and Wels, while the situation is reversed for Wien.
The assessment of the models based on MAE and R2 scores confirms that all models surpass the smart persistence model. Figure 13 illustrates that Hyb + SAT + NWP emerges as the top-performing model across all horizons.
The best-performing model, Hyb + SAT + NWP, is further assessed in terms of skill score by categorizing day types according to the daily average clear sky index values, as outlined in Table 8; the results are shown in Figure 14 for all three stations [37].
This analysis vividly reveals that, for overcast days (DT1), the model’s performance lags behind that of the smart persistence model, particularly in the 1 h ahead forecast. This finding corroborates our earlier observations, as depicted in the 2D kernel density contours in Figure 10 wherein SAT forecasts exhibit a higher bias for lower values of GHI in cases of overcast conditions. Nevertheless, it is important to highlight that this distinction does not wield a statistically significant impact on the overall results. This is because the total number of overcast days (DT1) amounts to 28, 22, and 20 for Wien, Salzburg, and Wels, respectively, with an average hourly GHI value of approximately 30 W/m2 for such days. Should the forecast values for the 1 h ahead prediction be adjusted to match the smart persistence values, the overall RMSE improvement for the entire test period is only around 2–3 W/m2 for the 1 h ahead forecast with this correction.
A closer examination of the results achieved using this method highlights the model’s competency in capturing extremely high cloud variability during cloudy and heavily clouded sky conditions, resulting in a skill score improvement ranging from 10% to 30%.
Table 9 presents a summary of additional metrics, including the Pearson correlation coefficient, mean bias error (MBE), and Kolmogorov–Smirnov integral (KSI), across forecast horizons for the three stations. Hyb + SAT + NWP stands out as the top performer, particularly excelling in the Pearson correlation coefficient and mostly outperforming other models in terms of MBE. Station Wien tends to exhibit a mostly negative bias, while the other two stations show a predominantly positive bias. For the forecast horizons of 2 h–4 h, Hyb + SAT performs the best for the Wien station in terms of MBE. Notably, smart persistence demonstrates the lowest KSI values, aligning with expectations since its data points originate from ground measurements, making it more likely to have a distribution similar to the ground measurements compared to other models. SAT outperforms the other models in terms of the KSI score for Wien and Wels, whereas Hyb + SAT has a better KSI score for Salzburg.

6.5. Forecast Quality

The reliability of the ensemble model is evaluated by using a reliability diagram, which is a graphical tool to assess the calibration of the forecast, as illustrated in Figure 15.
The horizontal axis represents the 10 forecast quantile values of the GHI. The vertical axis represents the observed relative frequency or mean forecast value associated with the forecast quantile. It indicates how often the forecasted probabilities align with the observed outcomes. The reference line is the diagonal line from the bottom-left to the top-right of the diagram, and points along this line indicate perfect reliability, where the forecasted probabilities match the observed relative frequencies. In an ideal case, the points on the reliability diagram should closely follow the diagonal reference line, indicating that the forecast probabilities are well calibrated. Overconfident forecast points lie above the diagonal line, suggesting that the forecasted probabilities are too high, whereas underconfident forecast points have low probabilities and lie below the reference line.
For graphical clarity, only Hyb + SAT + NWP is depicted alongside the smart persistence model, since all other models adhere closely to the hybrid model. The plots clearly show that the hybrid model is well calibrated and closely follows the reference line, in contrast to the smart persistence model, for all three stations and across the different forecast horizons.

7. Conclusions and Future Work

The primary goal of this research was to apply satellite data to enhance the accuracy of intra-day GHI predictions, spanning from 1 to 6 h into the future, with an hourly time resolution. This study demonstrated the benefits of employing a hybrid model that harnesses the optimal performance of individual models across various temporal horizons, combining both machine learning (ML) and deep learning (DL) models. It investigated the benefits of combining the hybrid model with models based on satellite imagery and numerical weather prediction (NWP) for improving intra-day solar radiation forecasts and showed that Hyb + SAT + NWP is the optimal choice for short-term hourly intra-day solar radiation forecasting. The nRMSE and RMSE skill score values demonstrate remarkable performance. Although the weighted average hybrid ensemble model is designed to minimize the RMSE, it also performed excellently in terms of the MAE and R2 score. Table 10 summarizes the range of error metrics of the Hyb + SAT + NWP model compared with the smart persistence model (values in parentheses) for forecast horizons from 1 h to 6 h ahead.
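For illustration, a weighted average of model forecasts with weights inversely proportional to each model's validation RMSE (the IRMSE weighting listed in the abbreviations) can be sketched as follows; this is a minimal example of one such weighting and not necessarily the exact scheme used in the study.

```python
import numpy as np

def inverse_rmse_weights(rmse_per_model):
    """Weights proportional to 1/RMSE, so lower-error models contribute more."""
    inv = 1.0 / np.asarray(rmse_per_model, dtype=float)
    return inv / inv.sum()

def weighted_ensemble(predictions, rmse_per_model):
    """Blend forecasts of shape (n_models, n_samples) into one weighted average."""
    weights = inverse_rmse_weights(rmse_per_model)
    return weights @ np.asarray(predictions, dtype=float)

# Example: three models with validation RMSEs of 101, 103, and 105 W/m2.
# blended = weighted_ensemble(np.vstack([pred_a, pred_b, pred_c]), [101.0, 103.0, 105.0])
```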
A high R2 value for Hyb + SAT + NWP indicates that a large proportion of the variability in the observed values is explained by the forecast model. A high Pearson correlation coefficient implies a strong linear relationship between the forecasted and observed values, and a low RMSE indicates that, on average, the differences between the forecasted and observed values are small, i.e., that the model keeps the overall error low.
The reliability diagrams further substantiate the forecast quality and consistent performance of the Hyb + SAT + NWP model, illustrating its close adherence to a perfectly calibrated forecast across the 1 h to 6 h forecast horizon for all three stations.
This achievement is particularly noteworthy considering the limited dataset, which covers less than a year of measurements and originates from experimental sites in central Europe, where unpredictable cloudy weather conditions are prevalent.
This research yielded several additional noteworthy findings. Firstly, it introduced a consistent and highly accurate method of deriving the GHI based on satellite imagery. Secondly, this study emphasized the significance of using an appropriate clear sky model when deriving the GHI from satellite-based sources. A comprehensive comparative analysis of the Perez and McClear models clearly highlighted the advantages of the McClear model. This study also highlighted the importance of classifying different day types based on clear sky indices and underscored the benefits of the blending model, particularly on cloudy and highly cloudy days.
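For context, satellite-based GHI derivations of the kind summarized above rest on a cloud-index relation (cf. the Heliosat approach [30]): the pixel reflectance is scaled between a clear-ground bound ρ_min and a cloud bound ρ_max, and the resulting clear-sky index multiplies the clear-sky irradiance (here from McClear). The sketch below uses the simplest relation kc = 1 − n for illustration only; the paper's exact formulation and any piecewise kc(n) corrections may differ.

```python
import numpy as np

def cloud_index(rho, rho_min, rho_max):
    """Cloud index n: pixel reflectance scaled between the clear-ground (rho_min)
    and fully cloudy (rho_max) bounds, clipped to [0, 1]."""
    n = (rho - rho_min) / (rho_max - rho_min)
    return np.clip(n, 0.0, 1.0)

def satellite_ghi(rho, rho_min, rho_max, ghi_clear):
    """GHI estimate as clear-sky index times clear-sky GHI (e.g., from McClear).
    kc = 1 - n is used here as the simplest illustrative relation."""
    kc = 1.0 - cloud_index(rho, rho_min, rho_max)
    return kc * ghi_clear
```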
In light of the findings of this study, there are several valuable avenues for future research. First, to enhance forecasting accuracy, we recommend exploring the European Centre for Medium-Range Weather Forecasts (ECMWF) model as the primary source of NWP input data. In addition, the incorporation of further meteorological variables, such as temperature, relative humidity, and wind speed, as exogenous features should be investigated.
Moreover, a promising area for future research involves conducting a more detailed analysis of historical large-scale satellite data. This analysis would focus on uncovering the precise temporal and seasonal correlations of ρ_min and ρ_max, thereby contributing to the refinement and increased accuracy of GHI derivation from satellite-based pixels.
The joint and marginal distribution analysis also warrants a condition-based post-processing treatment of the satellite-derived GHI, which could further enhance the forecasting accuracy.

Author Contributions

Conceptualization, J.T.; formal analysis, J.T.; software, J.T. and M.K.; visualization, J.T.; validation, J.T.; writing—original draft preparation, J.T.; writing—review and editing, R.H.; project administration, R.H.; supervision, R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Sources of the publicly available GFS, McClear, and satellite data are provided in Section 2. Restrictions apply to the availability of the ground measurement data. Ground measurement data were obtained from the University of Applied Sciences Upper Austria, 4600 Wels, Austria, and Blue Sky Wetteranalysen, 4600 Wels, Austria, and are available on request from the corresponding author for academic purposes, subject to the permission of the third parties.

Acknowledgments

Open Access Funding by the University for Continuing Education Krems, the University of Applied Sciences BFI Vienna and the University of Applied Sciences Upper Austria. We thank Wolfgang Traunmüller of Blue Sky Wetteranalysen for sharing ground measurement data and GFS data for Salzburg and Jean-Baptiste Pellegrino of Solar Radiation Data (SODA) Mines ParisTech for sharing GFS data from their archives for Wien and giving access to McClear clear sky model data for all stations. We also thank Douglas Vaught for reviewing the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

PV: Photovoltaics
GHI: Global horizontal irradiance
EUMETSAT: European Organisation for the Exploitation of Meteorological Satellites
SEVIRI: Spinning Enhanced Visible and Infrared Imager
MSG: Meteosat Second Generation
CMV: Cloud motion vector
CMF: Cloud motion field
Kt: Clear sky index
ML: Machine learning
DL: Deep learning
AI: Artificial intelligence
ANN: Artificial neural network
GFS: Global Forecast System
ECMWF: European Centre for Medium-Range Weather Forecasts
NWP: Numerical weather prediction
RF: Random forest
GB: Gradient boosting
XGB: Extreme gradient boosting
kNN: k-nearest neighbors
SVM: Support vector machine
stacking: Stacking ensemble of ML models
voting: Voting ensemble of ML models
LSTM: Long short-term memory
biDir: Bidirectional LSTM
dense: Dense LSTM
stacked: Stacked LSTM
GRU: Gated recurrent unit
DL_voting: Voting ensemble of DL models
R2: Coefficient of determination score
SS: Skill score
MBE: Mean bias error
MAE: Mean absolute error
nMAE: Normalized mean absolute error
RMSE: Root mean square error
IRMSE: Inverse of RMSE
nRMSE: Normalized root mean square error

Appendix A

Table A1. List of hyperparameters used for optimization of training in ML and DL models.
Model | Parameter | Values for Grid Search | Optimized Value
Random Forest (RF) | n_estimators | [int(x) for x in np.linspace(10, 200, 10)] | 73
 | max_features | ['auto', 'sqrt'] | auto
 | max_depth | [1, 2, 3, 4, 5, 6] | 5
Extreme Gradient Boost (XGB) | max_depth | [3, 4, 5, 6, 7, 8, 9, 10] | 4
 | learning_rate | [0.001, 0.01, 0.1, 0.2, 0.3] | 0.2
 | subsample | [0.5, 0.6, 0.7, 0.8, 0.9, 1.0] | 0.9
 | colsample_bytree | [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] | 0.9
 | colsample_bylevel | [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] | 0.9
 | min_child_weight | [0.5, 1.0, 3.0, 5.0, 7.0, 10.0] | 0.9
 | gamma | [0, 0.25, 0.5, 1.0] | 1
 | n_estimators | [10, 31, 52, 73, 94, 115, 136, 157, 178, 200] | 31
Gradient Boost (GB) | max_depth | [3, 4, 5, 6, 7, 8, 9, 10, 12] | 9
 | learning_rate | [0.001, 0.01, 0.1, 0.2, 0.3] | 0.01
 | min_samples_split | [2, 3, 4, 5, 4, 7, 8] | 4
 | min_samples_leaf | [2, 3, 4, 5, 4, 7, 8] | 8
 | max_features | [0.3, 0.4, 0.5, 0.6, 0.7] | 0.3
 | n_estimators | [50, 100, 200, 400, 500, 600, 650, 700, 750, 800] | 500
Support Vector Machine (SVM) | C | [1, 10, 100] | 1
 | gamma | [0.001, 0.01, 0.1] | 0.01
 | epsilon | [0.001, 0.01, 0.1] | 0.1
 | kernel | rbf | rbf
kNN | n_neighbors | range(1, 50) | 48
 | weights | ['uniform', 'distance'] | distance
Vanilla LSTM | # LSTM layers | 1 | 1
 | unit | [24, 64, 128] | 64
 | learning_rate | [0.01, 0.001] | 0.001
 | act | ['relu', 'tanh'] | relu
 | opt | ['adam', 'rms'] | adam
 | batch_size | [64, 128] | 64
Stacked LSTM | # of LSTM layers | 2 | 2
 | Layer 1 unit | [24, 64, 128] | 64
 | Layer 2 unit | [24, 64, 128] | 128
 | learning_rate | [0.01, 0.001] | 0.001
 | act | ['relu', 'tanh'] | relu
 | opt | ['adam', 'rms'] | adam
 | batch_size | [64, 128] | 64
Deep LSTM | # of LSTM layers | 1 | 1
 | # of Dense layers | 2 | 2
 | unit | [24, 64, 128] | 64
 | learning_rate | [0.01, 0.001] | 0.001
 | act | ['relu', 'tanh'] | relu
 | opt | ['adam', 'rms'] | adam
 | batch_size | [64, 128] | 64
Bidirectional LSTM | # of bi_LSTM layers | 1 | 1
 | unit | [24, 64, 128] | 64
 | learning_rate | [0.01, 0.001] | 0.001
 | act | ['relu', 'tanh'] | relu
 | opt | ['adam', 'rms'] | adam
 | batch_size | [64, 128] | 64
GRU | # of GRU layers | 4 | 4
 | Layer 1 & 2 units | [24, 64, 128] | 64
 | Layer 3 & 4 units | [64, 128] | 128
 | learning_rate | [0.01, 0.001] | 0.001
 | act | ['relu', 'tanh'] | relu
 | opt | ['adam', 'rms'] | adam
 | batch_size | [64, 128] | 64
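The grid ranges in Table A1 can be searched with standard tooling; the snippet below is a minimal sketch for the random forest using scikit-learn's GridSearchCV with chronological cross-validation splits, in the spirit of the walk-forward scheme in Figure 4. The feature matrix X_train and target y_train are placeholders, and the max_features grid is adapted because the 'auto' option has been removed from recent scikit-learn releases.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Random forest grid from Table A1 (the other models follow the same pattern).
param_grid = {
    "n_estimators": [int(x) for x in np.linspace(10, 200, 10)],
    "max_features": ["sqrt", 1.0],   # 'auto' is no longer accepted by recent scikit-learn
    "max_depth": [1, 2, 3, 4, 5, 6],
}

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid,
    cv=TimeSeriesSplit(n_splits=5),              # chronological splits for time series data
    scoring="neg_root_mean_squared_error",
)
# search.fit(X_train, y_train)                   # X_train, y_train: placeholder training data
# print(search.best_params_)                     # Table A1 reports n_estimators=73, max_depth=5
```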

References

  1. Heinemann, D.; Lorenz, E.; Girodo, M. Forecasting of Solar Radiation. Solar Energy Resource Management for Electricity Generation from Local Level to Global Scale; Nova Science Publishers: New York, NY, USA, 2006. [Google Scholar]
  2. Perez, R.; Lorenz, E.; Pelland, S.; Beauharnois, M.; Van Knowe, G.; Hemker, K., Jr.; Heinemann, D.; Remund, J.; Müller, S.C.; Traunmüller, W.; et al. Comparison of numerical weather prediction solar irradiance forecasts in the US, Canada and Europe. Sol. Energy 2013, 94, 305–326. [Google Scholar] [CrossRef]
  3. Thaker, J.; Höller, R. Hybrid Numerical Ensemble Method and Comparative Study for Solar Irradiance Forecasting. In Proceedings of the 8th World Conference on Photovoltaic Energy Conversion, Milan, Italy, 26–30 September 2022; pp. 1276–1284. [Google Scholar] [CrossRef]
  4. Verbois, H.; Saint-Drenan, Y.-M.; Thiery, A.; Blanc, P. Statistical learning for NWP post-processing: A benchmark for solar irradiance forecasting. Sol. Energy 2022, 238, 132–149. [Google Scholar] [CrossRef]
  5. Diagne, M.; David, M.; Lauret, P.; Boland, J.; Schmutz, N. Review of solar irradiance forecasting methods and a proposition for small-scale insular grids. Renew. Sustain. Energy Rev. 2013, 27, 65–76. [Google Scholar] [CrossRef]
  6. Antonanzas, J.; Osorio, N.; Escobar, R.; Urraca, R.; Martinez-De-Pison, F.J.; Antonanzas-Torres, F. Review of photovoltaic power forecasting. Sol. Energy 2016, 136, 78–111. [Google Scholar] [CrossRef]
  7. Rajagukguk, R.A.; Kamil, R.; Lee, H.-J. A Deep Learning Model to Forecast Solar Irradiance Using a Sky Camera. Appl. Sci. 2021, 11, 5049. [Google Scholar] [CrossRef]
  8. Sengupta, M.; Habte, A.; Gueymard, C.; Wilbert, S.; Renne, D.; Stoffel, T. (Eds.) Best Practices Handbook for the Collection and Use of Solar Resource Data for Solar Energy Applications, 2nd ed.; National Renewable Energy Laboratory: Golden, CO, USA, 2015. [Google Scholar]
  9. Perez, R.; Cebecauer, T.; Šúri, M. Semi-empirical satellite models. In Solar Energy Forecasting and Resource Assessment; Kleissl, J., Ed.; Academic Press: Boston, MA, USA, 2013; Chapter 2; pp. 21–48. [Google Scholar]
  10. Perez, R.; Ineichen, P.; Moore, K.; Kmiecik, M.; Chain, C.; George, R.; Vignola, F. A new operational model for satellite-derived irradiances: Description and validation. Sol. Energy 2002, 73, 307–317. [Google Scholar] [CrossRef]
  11. Laguarda, A.; Giacosa, G.; Alonso-Suárez, R.; Abal, G. Performance of the site-adapted CAMS database and locally adjusted cloud index models for estimating global solar horizontal irradiation over the Pampa Húmeda. Sol. Energy 2020, 199, 295–307. [Google Scholar] [CrossRef]
  12. Lorenz, E.; Hammer, A.; Heinemann, D. Short term forecasting of solar radiation based on satellite data. In EUROSUN2004 (ISES Europe Solar Congress); PSE GmbH, Solar Info Center: Freiburg, Germany, 2004; pp. 841–848. [Google Scholar]
  13. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence–IJCAI’81, Vancouver, BC, Canada, 24–28 August 1981; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1981; Volume 2, pp. 674–679. [Google Scholar]
  14. Garniwa, P.M.; Rajagukguk, R.A.; Kamil, R.; Lee, H. Intraday forecast of global horizontal irradiance using optical flow method and long short-term memory model. Sol. Energy 2023, 252, 234–251. [Google Scholar] [CrossRef]
  15. Pérez, E.; Pérez, J.; Segarra-Tamarit, J.; Beltran, H. A deep learning model for intra-day forecasting of solar irradiance using satellite-based estimations in the vicinity of a PV power plant. Sol. Energy 2021, 218, 652–660. [Google Scholar] [CrossRef]
  16. Nielsen, A.H.; Iosifidis, A.; Karstoft, H. IrradianceNet: Spatiotemporal deep learning model for satellite-derived solar irradiance short-term forecasting. Sol. Energy 2021, 228, 659–669. [Google Scholar] [CrossRef]
  17. Aicardi, D.; Musé, P.; Alonso-Suárez, R. A comparison of satellite cloud motion vectors techniques to forecast intra-day hourly solar global horizontal irradiation. Sol. Energy 2022, 233, 46–60. [Google Scholar] [CrossRef]
  18. Aguiar, L.M.; Pereira, B.; Lauret, P.; Díaz, F.; David, M. Combining solar irradiance measurements, satellite-derived data and a numerical weather prediction model to improve intra-day solar forecasting. Renew. Energy 2016, 97, 599–610. [Google Scholar] [CrossRef]
  19. Benamrou, B.; Ouardouz, M.; Allaouzi, I.; Ben Ahmed, M. A Proposed Model to Forecast Hourly Global Solar Irradiation Based on Satellite Derived Data, Deep Learning and Machine Learning Approaches. J. Ecol. Eng. 2020, 21, 26–38. [Google Scholar] [CrossRef] [PubMed]
  20. Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259. [Google Scholar] [CrossRef]
  21. Ameen, B.; Balzter, H.; Jarvis, C.; Wheeler, J. Modelling Hourly Global Horizontal Irradiance from Satellite-Derived Datasets and Climate Variables as New Inputs with Artificial Neural Networks. Energies 2019, 12, 148. [Google Scholar] [CrossRef]
  22. Tajjour, S.; Chandel, S.S.; Alotaibi, M.A.; Malik, H.; Márquez, F.P.G.; Afthanorhan, A. Short-Term Solar Irradiance Forecasting Using Deep Learning Techniques: A Comprehensive Case Study. IEEE Access 2023, 11, 119851–119861. [Google Scholar] [CrossRef]
  23. Available online: https://www.eumetsat.int/seviri (accessed on 26 March 2023).
  24. Available online: https://www.eumetsat.int/meteosat-second-generation (accessed on 26 March 2023).
  25. Available online: https://www.eumetsat.int/0-degree-service (accessed on 26 March 2023).
  26. National Centers for Environmental Prediction/National Weather Service; NOAA; U.S. Department of Commerce. NCEP GFS 0.25 Degree Global Forecast Grids Historical Archive; Research Data Archive at the National Center for Atmospheric Research; Computational and Information Systems Laboratory: Silver Spring, MD, USA, 2015; Updated Daily. [CrossRef]
  27. Lorenz, E.; Kühnert, J.; Heinemann, D.; Nielsen, K.P.; Remund, J.; Müller, S.C. Comparison of global horizontal irradiance forecasts based on numerical weather prediction models with different spatio-temporal resolutions. In Proceedings of the 31st European Photovoltaic Solar Energy Conference and Exhibition (EU PVSEC), Hamburg, Germany, 14–18 September 2015; pp. 1626–1640. [Google Scholar]
  28. Natschläger, T.; Traunmüller, W.; Reingruber, K.; Exner, H. Lokal optimierte Wetterprognosen zur Regelung stark umweltbeeinflusster Systeme; SCCH, Blue Sky. In Tagungsband Industrielles Symposium Mechatronik Automatisierung; Clusterland Oberösterreich GmbH/Mechatronik-Cluster: Wels, Austria, 2008; pp. 281–284. [Google Scholar]
  29. Trishchenko, A.P. Solar Irradiance and Effective Brightness Temperature for SWIR Channels of AVHRR/NOAA and GOES Imagers. J. Atmos. Ocean. Technol. 2006, 23, 198–210. [Google Scholar] [CrossRef]
  30. Rigollier, C.; Lefèvre, M.; Wald, L. The method Heliosat-2 for deriving shortwave solar radiation from satellite images. Sol. Energy 2004, 77, 159–169. [Google Scholar] [CrossRef]
  31. Mueller, R.; Behrendt, T.; Hammer, A.; Kemper, A. A New Algorithm for the Satellite-Based Retrieval of Solar Surface Irradiance in Spectral Bands. Remote Sens. 2012, 4, 622–647. [Google Scholar] [CrossRef]
  32. Rigollier, C.; Wald, L. Towards operational mapping of solar radiation from Meteosat images. In EARSeL Symposium 1998 “Operational Remote Sensing for Sustainable Development”; Nieuwenhuis, G., Vaughan, R., Molenaar, M., Eds.; Balkema: Enschede, The Netherlands, 1998; pp. 385–391. [Google Scholar]
  33. Perez, R.; Ineichen, P.; Seals, R.; Michalsky, J.; Stewart, R. Modeling daylight availability and irradiance components from direct and global irradiance. Sol. Energy 1990, 44, 271–289. [Google Scholar] [CrossRef]
  34. Lefèvre, M.; Oumbe, A.; Blanc, P.; Espinar, B.; Gschwind, B.; Qu, Z.; Wald, L.; Schroedter-Homscheidt, M.; Hoyer-Klick, C.; Arola, A.; et al. McClear: A new model estimating downwelling solar radiation at ground level in clear-sky conditions. Atmos. Meas. Tech. 2013, 6, 2403–2418. [Google Scholar] [CrossRef]
  35. Lorenz, E.; Heinemann, D. Prediction of solar irradiance and photovoltaic power. In Comprehensive Renewable Energy; Sayigh, A., Ed.; Elsevier: Oxford, UK, 2012; pp. 239–292. [Google Scholar]
  36. Reindl, T.; Walsh, W.; Yanqin, Z.; Bieri, M. Energy meteorology for accurate forecasting of PV power output on different time horizons. Energy Procedia 2017, 130, 130–138. [Google Scholar] [CrossRef]
  37. Thaker, J.; Höller, R. A Comparative Study of Time Series Forecasting of Solar Energy Based on Irradiance Classification. Energies 2022, 15, 2837. [Google Scholar] [CrossRef]
  38. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  39. Available online: https://colah.github.io/posts/2015-08-Understanding-LSTMs/ (accessed on 14 June 2022).
  40. Staudemeyer, R.C.; Morris, E.R. Understanding LSTM—A tutorial into Long Short-Term Memory Recurrent Neural Networks. arXiv 2019, arXiv:1909.09586v1. [Google Scholar]
  41. Available online: https://en.wikipedia.org/wiki/Gated_recurrent_unit (accessed on 27 July 2022).
  42. Benali, L.; Notton, G.; Fouilloy, A.; Voyant, C.; Dizene, R. Solar radiation forecasting using artificial neural network and random forest methods: Application to normal beam, horizontal diffuse and global components. Renew. Energy 2018, 132, 871–884. [Google Scholar] [CrossRef]
  43. Sibtain, M.; Li, X.; Saleem, S.; Ain, Q.U.; Asad, M.S.; Tahir, T.; Apaydin, H. A Multistage Hybrid Model ICEEMDAN-SE-VMD-RDPG for a Multivariate Solar Irradiance Forecasting. IEEE Access 2021, 9, 37334–37363. [Google Scholar] [CrossRef]
  44. Feng, C.; Cui, M.; Lee, M.; Zhang, J.; Hodge, B.-M.; Lu, S.; Hamann, H.F. Short-term global horizontal irradiance forecasting based on sky imaging and pattern recognition. In Proceedings of the 2017 IEEE Power & Energy Society General Meeting (PESGM), Chicago, IL, USA, 16–20 July 2017; pp. 1–5. [Google Scholar]
  45. Huang, X.; Zhang, C.; Li, Q.; Tai, Y.; Gao, B.; Shi, J. A Comparison of Hour-Ahead Solar Irradiance Forecasting Models Based on LSTM Network. Math. Probl. Eng. 2020, 2020, 1–15. [Google Scholar] [CrossRef]
  46. Li, G.; Wang, H.; Zhang, S.; Xin, J.; Liu, H. Recurrent Neural Networks Based Photovoltaic Power Forecasting Approach. Energies 2019, 12, 2538. [Google Scholar] [CrossRef]
  47. Perez, R.; Kankiewicz, A.; Schlemmer, J.; Hemker, K.; Kivalov, S. A new operational solar resource forecast model service for PV fleet simulation. In Proceedings of the IEEE 40th Photovoltaic Specialist Conference (PVSC), Denver, CO, USA, 8–13 June 2014. [Google Scholar]
  48. Hoff, T.E.; Perez, R. Modeling PV fleet output variability. Sol. Energy 2012, 86, 2177–2189. [Google Scholar] [CrossRef]
  49. Yang, D.; Alessandrini, S.; Antonanzas, J.; Antonanzas-Torres, F.; Badescu, V.; Beyer, H.G.; Blaga, R.; Boland, J.; Bright, J.M.; Coimbra, C.F.; et al. Verification of deterministic solar forecasts. Sol. Energy 2020, 210, 20–37. [Google Scholar] [CrossRef]
  50. Murphy, A.H.; Winkler, R.L. A general framework for forecast verification. Mon. Weather Rev. 1987, 115, 1330–1338. [Google Scholar] [CrossRef]
Figure 1. Locations of ground measurement stations.
Figure 2. Probability distribution of cloud reflectance for Wels.
Figure 3. Flow chart for the forecasting model algorithm.
Figure 4. Walk-forward methodology to train and test the models.
Figure 5. Architecture of a typical LSTM block.
Figure 6. Architecture of a typical LSTM neural network.
Figure 7. Architecture of a typical GRU block.
Figure 8. Heatmap showing correlations of N1 with features used in modeling.
Figure 9. Violin and distribution plots for calculated satellite GHI using McClear and Perez clear sky models vs. ground measurement GHI; (a,b): Wels, (c,d): Wien, (e,f): Salzburg.
Figure 10. Joint and marginal distribution of nowcast satellite GHI using the McClear (a,c,e) and Perez (b,d,f) clear sky models vs. ground measurement GHI; (a,b): Wels, (c,d): Wien, (e,f): Salzburg.
Figure 11. Time series plot illustrating specific events at Wels on 24 June 2021 at (a) 12:45, (b) 15:45, (c) 16:15 hrs; (df) are images from all sky imager cameras; (gi) are cloud mask images; (jl) are RGB pixels images; and (mo) are modified high resolution cloud images “EUMETSAT [Meteosat] [High Rate SEVIRI Level 1.5 Image Data—MSG—0 degree] [2021]”.
Figure 12. Normalized RMSE (%) and RMSE skill score for the three stations.
Figure 13. Normalized MAE (%) and R2 scores for the three stations.
Figure 14. Variation in the RMSE skill score of the Hyb + SAT + NWP model for each horizon and day type.
Figure 15. Reliability diagram of the hybrid model Hyb + SAT + NWP vs. the smart persistence model.
Table 1. Ground station locations and data periods.
Location | Latitude | Longitude | From | To
Wien | 48.179194 | 16.503546 | 1 March 2022 | 31 December 2022
Wels | 48.161665 | 14.027737 | 1 March 2021 | 30 November 2021
Salzburg | 47.791111 | 13.054167 | 1 March 2022 | 31 December 2022
Table 2. List of models applied.
Tree-Based Regression ML Models | Neuron-Based DL Models
Random forest (rf) | Vanilla single layer LSTM (lstm)
Gradient boost (gb) | Bidirectional LSTM (bd)
Extreme gradient boost (xgb) | Stacked multilayer LSTM (stacked)
k-nearest neighbors (kNN) | Dense LSTM (dense)
Support vector machine (SVM) | Gated recurrent unit (gru)
Stacking ensemble (stacking) | Voting ensemble (voting_DL)
Voting ensemble (voting) |
Table 3. Error metrics for satellite-derived GHI using the McClear and Perez models.
Location | R2 Score (McClear) | R2 Score (Perez) | nMAE (McClear) | nMAE (Perez) | nRMSE (McClear) | nRMSE (Perez)
Wien | 0.918 | 0.898 | 14.279 | 16.814 | 19.396 | 21.656
Wels | 0.902 | 0.863 | 14.129 | 18.415 | 20.592 | 24.315
Salzburg | 0.855 | 0.826 | 16.831 | 20.861 | 26.123 | 28.620
Table 4. RMSE of different models for the Wien station for different forecast horizons.
Model | 1 h | 2 h | 3 h | 4 h | 5 h | 6 h
Hybrid model | 101.181 (7.54%) | 124.834 (4.19%) | 136.622 (10.61%) | 142.699 (4.73%) | 144.507 (6.34%) | 140.393 (5.95%)
DL voting | 101.221 (7.5%) | 125.664 (3.56%) | 139.943 (10.26%) | 145.247 (4.7%) | 145.774 (4%) | 141.543 (2.66%)
ML voting | 101.276 (4.22%) | 124.634 (3.98%) | 136.669 (3.83%) | 143.928 (5.33%) | 145.38 (5.78%) | 140.946 (5.81%)
Bidirectional | 101.418 | 130.301 | 142.967 | 144.964 | 151.574 | 143.528
XGBoost | 103.026 | 127.150 | 137.152 | 144.763 | 145.513 | 141.565
Gradient boost | 103.314 | 126.081 | 139.604 | 146.818 | 147.827 | 140.809
SVM | 103.380 | 129.801 | 143.486 | 152.027 | 154.298 | 149.281
Random forest | 103.405 | 126.350 | 140.630 | 146.934 | 150.748 | 145.852
Stacking | 104.002 | 127.956 | 141.872 | 151.906 | 151.991 | 145.721
LSTM stacking | 104.797 | 129.407 | 152.842 | 148.595 | 148.483 | 144.803
kNN | 105.738 | 129.504 | 142.113 | 149.786 | 150.978 | 144.438
Dense | 105.895 | 129.624 | 140.953 | 145.759 | 147.138 | 143.401
GRU | 109.432 | 126.941 | 138.595 | 150.839 | 148.559 | 144.141
Table 5. RMSE for Wien for different forecast horizons (best results in bold).
Model | 1 h | 2 h | 3 h | 4 h | 5 h | 6 h
Hyb + SAT + NWP | 93.314 | 109.13 | 115.37 | 121.626 | 123.001 | 120.182
Hyb + SAT | 95.839 | 116.383 | 127.093 | 140.152 | 141.783 | 138.690
SAT | 99.065 | 118.543 | 128.315 | 143.387 | 144.594 | 141.946
Hyb | 101.181 | 124.834 | 136.622 | 142.699 | 144.124 | 140.393
GFS | 122.312 | 123.606 | 123.712 | 124.046 | 124.729 | 119.638
Persistence | 106.622 | 137.373 | 153.571 | 164.544 | 164.752 | 157.313
Table 6. RMSE for Salzburg for different forecast horizons (best results in bold).
Model | 1 h | 2 h | 3 h | 4 h | 5 h | 6 h
Hyb + SAT + NWP | 97.926 | 118.169 | 129.440 | 140.793 | 143.140 | 141.155
Hyb + SAT | 97.975 | 124.439 | 139.280 | 158.324 | 163.204 | 165.452
Hyb | 104.966 | 136.216 | 153.399 | 164.085 | 166.307 | 168.007
SAT | 109.347 | 127.189 | 138.466 | 160.301 | 168.234 | 169.026
GFS | 155.669 | 156.751 | 156.685 | 154.195 | 151.299 | 142.232
Persistence | 114.833 | 157.049 | 184.829 | 193.822 | 196.745 | 196.267
Table 7. RMSE for Wels for different forecast horizons (best results in bold).
Model | 1 h | 2 h | 3 h | 4 h | 5 h | 6 h
Hyb + SAT + NWP | 90.880 | 106.951 | 116.222 | 126.101 | 132.854 | 137.392
Hyb + SAT | 92.580 | 116.378 | 131.519 | 150.830 | 162.325 | 168.233
SAT | 97.149 | 119.138 | 133.579 | 155.388 | 166.627 | 173.805
Hyb | 98.556 | 122.013 | 136.855 | 151.640 | 162.946 | 166.783
GFS | 130.759 | 133.126 | 136.670 | 139.290 | 140.604 | 141.075
Persistence | 107.253 | 138.125 | 158.846 | 175.228 | 182.195 | 184.401
Table 8. Classification of day type based on the daily average value of clear sky index.
Class | Day Type | Clear Sky Index
DT1 | Overcast | 0 ≤ Kt < 0.3
DT2 | Highly cloudy | 0.3 ≤ Kt < 0.5
DT3 | Cloudy | 0.5 ≤ Kt < 0.7
DT4 | Partially cloudy | 0.7 ≤ Kt < 0.9
DT5 | Clear sky | Kt ≥ 0.9
Table 9. Pearson correlation coefficient, MBE, and KSI score of all models.
Pearson Correlation Coefficient
Location | Horizon | HYB + SAT + NWP | HYB + SAT | SAT | HYB | NWP | Persistent
Wien | 1 h | 0.932 | 0.928 | 0.925 | 0.918 | 0.88 | 0.915
Wien | 2 h | 0.909 | 0.895 | 0.895 | 0.878 | 0.882 | 0.865
Wien | 3 h | 0.9 | 0.877 | 0.878 | 0.855 | 0.885 | 0.831
Wien | 4 h | 0.885 | 0.883 | 0.844 | 0.841 | 0.838 | 0.794
Wien | 5 h | 0.875 | 0.829 | 0.827 | 0.823 | 0.875 | 0.773
Wien | 6 h | 0.868 | 0.82 | 0.815 | 0.814 | 0.87 | 0.766
Salzburg | 1 h | 0.929 | 0.929 | 0.911 | 0.917 | 0.822 | 0.904
Salzburg | 2 h | 0.896 | 0.884 | 0.878 | 0.858 | 0.824 | 0.82
Salzburg | 3 h | 0.876 | 0.856 | 0.856 | 0.823 | 0.829 | 0.752
Salzburg | 4 h | 0.853 | 0.809 | 0.804 | 0.794 | 0.838 | 0.731
Salzburg | 5 h | 0.844 | 0.791 | 0.784 | 0.777 | 0.841 | 0.717
Salzburg | 6 h | 0.84 | 0.769 | 0.758 | 0.762 | 0.851 | 0.696
Wels | 1 h | 0.933 | 0.93 | 0.923 | 0.921 | 0.864 | 0.908
Wels | 2 h | 0.907 | 0.889 | 0.882 | 0.879 | 0.859 | 0.843
Wels | 3 h | 0.893 | 0.859 | 0.853 | 0.847 | 0.855 | 0.796
Wels | 4 h | 0.877 | 0.814 | 0.801 | 0.813 | 0.853 | 0.763
Wels | 5 h | 0.864 | 0.787 | 0.773 | 0.788 | 0.852 | 0.752
Wels | 6 h | 0.852 | 0.768 | 0.753 | 0.772 | 0.849 | 0.744
MBE (W/m2)
Location | Horizon | HYB + SAT + NWP | HYB + SAT | SAT | HYB | NWP | Persistent
Wien | 1 h | −10.21 | −6.207 | −16.362 | 4.5 | −6.033 | −10.267
Wien | 2 h | −12.009 | −2.435 | −19.177 | −5.917 | −10.992 | −9.152
Wien | 3 h | −13.319 | 4.045 | 7.979 | −10.459 | −11.738 | −10.221
Wien | 4 h | −10.896 | −11.71 | −10.494 | −14.316 | −6.948 | −8.585
Wien | 5 h | −10.124 | −9.928 | −13.06 | −6.891 | −10.541 | −5.595
Wien | 6 h | −12.933 | −15.732 | −16.639 | −14.939 | −7.38 | −2.552
Salzburg | 1 h | −5.298 | −5.639 | −13.031 | 0.362 | −5.739 | 13.957
Salzburg | 2 h | −3.45 | −4.509 | −11.85 | 2.839 | −1.533 | 30.789
Salzburg | 3 h | 3.375 | 2.692 | −6.187 | 11.883 | 4.579 | 45.635
Salzburg | 4 h | 6.279 | 3.785 | 5.016 | 2.47 | 11.09 | 55.33
Salzburg | 5 h | 9.801 | 5.954 | 5.245 | 6.559 | 16.991 | 57.473
Salzburg | 6 h | 12.107 | 5.625 | 8.233 | 2.983 | 23.321 | 51.786
Wels | 1 h | −4.063 | −0.482 | −5.457 | 3.303 | −14.818 | 13.5
Wels | 2 h | −1.294 | 5.11 | −0.651 | 0.663 | −16.393 | 17.575
Wels | 3 h | −2.86 | 4.867 | 4.921 | 5.054 | −19.216 | 18.842
Wels | 4 h | −1.52 | 9.112 | 10.423 | 7.871 | −21.691 | 20.827
Wels | 5 h | 5.083 | 20.888 | 22.756 | 18.998 | −22.419 | 24.308
Wels | 6 h | 9.255 | 27.921 | 34.983 | 21.258 | −21.942 | 27.039
KSI
Location | Horizon | HYB + SAT + NWP | HYB + SAT | SAT | HYB | NWP | Persistent
Wien | 1 h | 0.03 | 0.076 | 0.07 | 0.093 | 0.096 | 0.083
Wien | 2 h | 0.083 | 0.095 | 0.074 | 0.114 | 0.109 | 0.034
Wien | 3 h | 0.085 | 0.102 | 0.066 | 0.125 | 0.108 | 0.044
Wien | 4 h | 0.133 | 0.082 | 0.109 | 0.075 | 0.112 | 0.053
Wien | 5 h | 0.087 | 0.11 | 0.07 | 0.143 | 0.121 | 0.063
Wien | 6 h | 0.086 | 0.118 | 0.078 | 0.15 | 0.13 | 0.07
Salzburg | 1 h | 0.096 | 0.047 | 0.115 | 0.096 | 0.112 | 0.058
Salzburg | 2 h | 0.1 | 0.055 | 0.111 | 0.123 | 0.115 | 0.08
Salzburg | 3 h | 0.087 | 0.061 | 0.099 | 0.113 | 0.105 | 0.106
Salzburg | 4 h | 0.09 | 0.07 | 0.094 | 0.117 | 0.11 | 0.104
Salzburg | 5 h | 0.096 | 0.075 | 0.137 | 0.094 | 0.115 | 0.114
Salzburg | 6 h | 0.098 | 0.083 | 0.095 | 0.138 | 0.121 | 0.105
Wels | 1 h | 0.095 | 0.107 | 0.06 | 0.107 | 0.093 | 0.034
Wels | 2 h | 0.12 | 0.129 | 0.066 | 0.138 | 0.121 | 0.048
Wels | 3 h | 0.134 | 0.139 | 0.071 | 0.149 | 0.126 | 0.058
Wels | 4 h | 0.13 | 0.137 | 0.077 | 0.148 | 0.123 | 0.053
Wels | 5 h | 0.122 | 0.127 | 0.081 | 0.159 | 0.114 | 0.066
Wels | 6 h | 0.121 | 0.128 | 0.077 | 0.153 | 0.129 | 0.063
Table 10. Range of error metrics of the Hyb + SAT + NWP model compared with the smart persistence model (values in parentheses).
Location | R2 Score (1 h) | R2 Score (6 h) | nMAE (%) (1 h) | nMAE (%) (6 h) | nRMSE (%) (1 h) | nRMSE (%) (6 h)
Wien | 0.87 (0.77) | 0.75 (0.57) | 17.32 (18.85) | 25.58 (32.55) | 24.55 (28.05) | 34.38 (45.00)
Wels | 0.87 (0.81) | 0.72 (0.49) | 16.42 (18.34) | 25.89 (30.97) | 23.07 (27.23) | 31.95 (42.88)
Salzburg | 0.86 (0.81) | 0.70 (0.42) | 18.87 (20.29) | 28.98 (36.85) | 25.18 (29.53) | 36.73 (51.07)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
