Article

Influence of the Characteristics of Weather Information in a Thunderstorm-Related Power Outage Prediction System

Department of Civil & Environmental Engineering, University of Connecticut, Storrs, CT 06269, USA
* Author to whom correspondence should be addressed.
Academic Editor: Sonia Leva
Received: 27 June 2021 / Revised: 31 July 2021 / Accepted: 3 August 2021 / Published: 5 August 2021
(This article belongs to the Special Issue Feature Papers of Forecasting 2021)

Abstract

Thunderstorms are one of the most damaging weather phenomena in the United States, but they are also one of the least predictable. This unpredictable nature can make it especially challenging for emergency responders, infrastructure managers, and power utilities to be able to prepare and react to these types of events when they occur. Predictive analytical methods could be used to help power utilities adapt to these types of storms, but there are uncertainties inherent in the predictability of convective storms that pose a challenge to the accurate prediction of storm-related outages. Describing the strength and localized effects of thunderstorms remains a major technical challenge for meteorologists and weather modelers, and any predictive system for storm impacts will be limited by the quality of the data used to create it. We investigate how the quality of thunderstorm simulations affects power outage models by conducting a comparative analysis, using two different numerical weather prediction systems with different levels of data assimilation. We find that limitations in the weather simulations propagate into the outage model in specific and quantifiable ways, which has implications on how convective storms should be represented to these types of data-driven impact models in the future.
Keywords: power outages; machine learning; thunderstorms; numerical weather prediction

1. Introduction

Weather-related power outages, and the severe weather events that cause them, pose a persistent threat to the functioning of the infrastructure and economy of the United States. These types of power outages affect millions of people and cost the U.S. economy tens of billions of dollars every year; moreover, the rate at which they occur appears to be increasing [1]. Anticipating the damages that storms can cause is a critical step in electrical utility managers’ storm outage management process. They need reliable information before a storm to be able to stage repair crews and effectively prepare for the damages that the storm will cause [2]. As such, there has been a recent surge in research and development activity into methods to predict storm damages and weather-related power outages.
Arguably, the most destructive types of storms in the United States are thunderstorms, including the associated convective phenomena (tornadoes, microbursts, hail, etc.). While hurricanes often receive special attention because they are larger and more dramatic, thunderstorms are more common and cause more damage to the electrical infrastructure every year than any other type of weather. Indeed, investigations of major outage events reported to the Department of Energy have found that convective storms are responsible for the majority of weather-related outage events, the greatest number of customer outages, and the most outage hours [3,4]. Additionally, there is every indication that the severity of thunderstorms is going to increase in the future. Changes in the climatic patterns of thunderstorms can already be seen in time series analyses [5], and long-term climate projections suggest that, because of climate change, thunderstorms are likely to become stronger, more frequent, and more damaging [6,7].
Despite the demonstrated risk that thunderstorms present to the electrical infrastructure, they have not received much attention in recent research on modeling weather-related power outages. While there are some outage modeling approaches that are generalized to a range of weather types [8,9,10,11], much of the research in this field has focused on other types of storms. The vast majority of the work has addressed tropical storms and hurricanes, which can have particularly dramatic impacts [12,13,14,15,16], but several mature modeling approaches, specifically for extratropical storms [17,18,19], have also been developed.
In the existing general outage models, thunderstorms are sometimes included in the analysis [9,10,20,21], but the weather characteristics of these storms are treated in a similar fashion to other, more structured types of weather. Some studies also imply a focus on thunderstorms by including information about lightning strikes [11,22,23], but they do not focus on thunderstorms explicitly because they also include other types of weather events in their analysis.
This lack of focus on thunderstorms may be a result of the technical difficulty associated with describing and simulating them. Convective storms are particularly challenging for established numerical weather prediction (NWP) models and meteorological forecasts. While the increased horizontal resolution of convection-allowing configurations can lead to improved simulations, even with state-of-the-art high-resolution NWP models, reliable deterministic forecasts of thunderstorms beyond several hours remain elusive [24,25,26]. As Yano et al. describe, there may be limitations to modern NWP models’ ability to simulate convective storms because of the widespread use of assumptions and parameterizations that are reasonable for synoptic-scale weather patterns but are much less applicable to more complex convective phenomena [26]. These potential limitations of NWP simulations are long-standing, and multiple strategies for mitigating them have emerged. Assimilating radar or even lightning observations into the initial conditions of simulations can improve short-term predictions [27,28]; forecasting systems that leverage this type of data assimilation for rapidly updating nowcasts are currently operational [29]. In addition, for forecasts longer than several hours, stochastic predictions from convection-allowing ensembles have shown improved forecasting skill by capturing the range of potential outcomes instead of one deterministic scenario [30,31,32].
Similar approaches and findings can be seen in the few studies in the literature that specifically focus on predicting thunderstorm-related power outages. In Alpay et al., the authors take a rapid-refresh nowcasting approach to modeling thunderstorm-related outages, using an LSTM neural network trained on data from a rapidly updating, radar-ingesting weather model from NOAA [33]. The works of Shield and Kabir et al. both describe a thunderstorm outage prediction system trained on weather data from the National Digital Forecast Database for an area in Alabama [34,35]. Shield investigates the limitations of the model he develops and finds that it has better skill at the synoptic scale, which illustrates the difficulty of forecasting thunderstorms [34]. Kabir et al. take a more stochastic approach and develop a quantile regression model, which allows the communication of the significant uncertainties associated with predicting the impacts of thunderstorms [35].
While this previous work attempts to manage the known limitations of weather simulations of thunderstorms, how these limitations propagate from weather simulations into machine-learning-based impact models remains poorly described. The problem of poor inputs to a computational algorithm has been recognized since the dawn of computation [36], but its effects in this context are not fully understood. In this paper, we attempt to shed light on this matter by analyzing the quality of the weather data from two different weather simulation systems with differing amounts of data assimilation, determining how outage models trained on these different sets of weather data differ in skill and accuracy, and identifying what information the outage models learn from. This knowledge is critical for building an understanding of the limitations of the data used to build impact models for thunderstorms and for suggesting how improved representations of weather will improve the quality of the insights that can be derived from them.

2. Materials and Methods

This study involved the creation and comparison of two separate machine-learning models designed to predict thunderstorm-related power outages, using data from NWP-based weather simulations and a wide range of other data sources. The study region covers three states (Connecticut, Massachusetts, and New Hampshire) and five distinct electrical utility service territories: Eversource Connecticut (CT), Eversource Western Massachusetts (WMA), Eversource Eastern Massachusetts (EMA), Eversource New Hampshire (NH), and AVANGRID United Illuminating (UI). For geographical details of the modeling domain, refer to Figure 1.

2.1. Data

The outage models developed in this analysis use data describing 372 thunderstorm events that occurred in the utility service territories from 2016 to 2020, a range of environmental characteristics, such as vegetation and drought status, and proprietary outage and infrastructure data provided by the power utilities, all aggregated to the grid cells of the weather simulations. We included as many thunderstorm events as could be identified in weather station reports from each utility service territory, and aggregated the data to the RTMA grid cells of each service territory for each thunderstorm event. For details about the amount of data used from each territory, see Table 1.

2.1.1. Weather

The analysis centers on datasets produced by two separate NWP-based gridded weather simulation systems: a hybrid NOAA analysis system, and a WRF 2 km simulation system. The NOAA analysis dataset is a combination of data from the Real-Time Mesoscale Analysis (RTMA) [37] and Stage IV Quantitative Precipitation Estimates (Stage IV) [38]. RTMA is a weather analysis product that produces a gridded estimate of weather conditions by statistically downscaling a 1 h short-term forecast and adjusting it with weather station observations. It produces a high-resolution, near real-time estimate of temperature, humidity, dew point, wind speed and direction, wind gusts, and surface pressure for the entire United States. The RTMA data were sourced from the archive hosted on Google Earth Engine [39]. Stage IV is a Quantitative Precipitation Estimate (QPE) dataset created by the National Weather Service and the National Centers for Environmental Prediction (NWS, NCEP), using a blend of NEXRAD radar and the NWS River Forecast Center precipitation processing system [40]. It takes gridded precipitation estimates derived from radar scans, adjusts the values based on rain gauge data, and aggregates the data to produce gridded hourly estimates of precipitation for the continental United States. It is popular for analytical purposes and is often used as a reference to evaluate the accuracy of satellite and other precipitation estimates [38]. By using a blend of RTMA and Stage IV, we obtain a reasonable estimate of the average hourly weather conditions in each grid cell during each storm used in this analysis. For the sake of brevity, this dataset will sometimes be referred to as the “RTMA” system.
We compare this hybrid NOAA analysis dataset with another weather dataset developed from a configuration of the Weather Research and Forecasting Model (WRF) similar to one used in several outage prediction models [17,18], but with an increased horizontal resolution to potentially help resolve convection. The model is initialized with the North American Mesoscale Forecast System analysis [41] and uses 2 km horizontal grid spacing nested within a 6 km outer domain. For configuration details, please see Table 2. These WRF simulations use a different projection than the RTMA system, so the results were resampled with bilinear interpolation to match the spatial characteristics of the RTMA analysis product.
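The bilinear resampling step can be illustrated with a minimal sketch. This is not the study's code (the actual processing used standard geospatial tooling in R); the function below simply shows how a value is interpolated from a source grid at fractional coordinates, which is the core of the regridding operation:

```python
# Illustrative sketch of bilinear interpolation, as used conceptually to
# resample WRF output onto the RTMA grid. Assumes `grid` is a list of rows.

def bilinear(grid, x, y):
    """Interpolate grid values at fractional column x and row y."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)  # clamp at the grid edge
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bottom * fy
```

In practice each RTMA cell center is mapped into the WRF projection and the four surrounding WRF values are blended this way.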
For outage modeling purposes, 24 h time series of a common set of weather variables generated from both weather simulation systems were processed to generate descriptive data features for each thunderstorm in this analysis. The weather variables considered are dew point temperature, specific humidity, air temperature, surface pressure, wind speed, wind gust speed, wind direction, and hourly precipitation rate. Established weather parameters that directly describe convective potential, such as CAPE and CIN, were unfortunately not available for this study because they are not published in RTMA, which is primarily a surface analysis product. For each of the included variables, the mean, maximum, minimum, standard deviation, 4 h mean during peak winds, and total were calculated for each storm, except for wind direction, for which we took the median value to limit sensitivity to outliers. Several additional features were calculated: the number of hours of winds above various thresholds, applied to both wind speeds and gusts; the typical wind direction, taken as the mean of the median wind directions of the included storms; and the difference between the typical wind direction and the median wind direction for each storm. To preserve its circular characteristics, all computation and analysis of wind direction was performed via the circular library in R [48]. Additionally, we included a set of features describing the time series of wind stress exerted on the trees, taken as the product of the leaf area index (see below) and the square of the wind speed. Appendix A contains a detailed table of all data features used for modeling.
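The per-storm feature computation might be sketched as follows. This is a simplified Python illustration (the study's implementation was in R); the 4 h peak-wind window and the wind-stress form are approximations of the described features, and the function name is an assumption:

```python
# Hedged sketch: summary features for one storm from a 24 h hourly series.
# Wind stress is approximated as leaf area index times wind speed squared.
import statistics

def storm_features(series, wind_speed, lai=1.0):
    peak = wind_speed.index(max(wind_speed))
    # Up-to-4 h window centered on peak winds, clipped at the series edges.
    window = series[max(0, peak - 2):peak + 2]
    return {
        "mean": statistics.mean(series),
        "max": max(series),
        "min": min(series),
        "sd": statistics.stdev(series),
        "total": sum(series),
        "mean4h_peak": statistics.mean(window),
        "wind_stress_max": max(lai * w ** 2 for w in wind_speed),
    }
```

A real implementation would additionally handle wind direction with circular statistics, as the text notes.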

2.1.2. Infrastructure and Outage Data

Proprietary data describing the infrastructure and historical outages were made available for this study for the five utility service territories. Using rgdal and rgeos [49,50], we calculated, for the area within each outage model grid cell, the length of overhead power lines, the number of utility poles, the number of fuses and cutouts, and the number of circuit reclosers.
The historical outage data describe the time and location of damage to the power distribution grid for a period of five years (2016 to 2020). Based on this information, we calculated the number of damage locations within each outage model grid cell associated with each storm. A damage location is a physical location where repair crews are dispatched to repair damage after a storm. In the vast majority of cases, this meant counting the damage locations identified in the 24 h storm period, but in several cases, additional “nested” storm-related outages were recognized by utility operators after the storm period, so a longer window was sometimes used. These damage data were extracted from the utility outage management system, a software tool used by most large utilities to identify outages and dispatch repair crews.

2.1.3. Environmental Data

Because weather-related power outages are the result of interactions between the weather, the infrastructure, and the environment, a range of environmental information was considered for this analysis. We processed the environmental data in several different ways depending on spatial resolution. For datasets with a resolution higher than the 2.5 km RTMA grid, the raster data for each grid cell were sampled from a 60 m buffer around the overhead lines in that cell, and we calculated the representative percentages for categorical data or the average and standard deviation for numerical data. We applied this process to a range of datasets, including the following: categorical land cover from the 2016 National Land Cover Database (NLCD) [51], 2016 NLCD Tree Canopy Coverage [52], vegetation height estimates from the Global Ecosystem Dynamics Investigation (GEDI) lidar instrument on the International Space Station [53], USGS 3DEP DEM elevation [54], and several other datasets that required special processing. For example, we sampled the soils dataset developed by Watson et al. [18] from the USDA SSURGO database [55] to describe the soil characteristics (density, porosity, hydraulic conductivity, composition, and saturation). Additionally, because that previous work suggests that systemic biases caused by differences in the elevations of the weather predictions and the infrastructure may be present, we used the difference between those two elevations as an additional feature, elvDiff.
As seen in other outage modeling work  [15,16], high-resolution data from the Individual Tree Species Parameter Maps (developed to support the USDA National Insect and Disease Risk Map) were used to calculate information about the density of the forest and the presence of various tree species [56]. However, because these data contain information about 264 individual tree species, we aggregated the basal area and stand density index of the species data by wood type (hardwood or softwood). Additionally, we were able to calculate the mean and standard deviation of the basal area (BA), stand density index (SDI), quadratic mean diameter (DQ), total frequency (TF), and trees per acre (TPA) for all trees, and generate statistics for the area around the infrastructure as described in the previous paragraph.
Data at coarser resolutions were handled more simply by sampling at the centroid of the grid cell. This included data describing the climatological leaf area index generated by Cerrai et al. [9], and a collection of drought indices published by the West Wide Drought Tracker [57]. While drought data have been used in outage modeling in the past [12,15], we included more information: the 1, 3, and 12 month Standardized Precipitation Index (SPI) for the month of the storm, as well as the 12 month SPI from 1 to 5 years before the storm occurred. This information was included to capture not only the immediate drought conditions, but also any lingering effects of long-term drought stress on the vegetation.

2.2. Outage Modeling

To generate a robust outage prediction system based on the 131 data features generated via the processes described in the previous section, additional steps were taken to confirm each variable’s importance for outage modeling, tune the model’s hyperparameters, and test the system’s performance via cross-validation. All modeling processes were coded in R [58], using a range of support libraries.
Variable importance for modeling was initially confirmed via a Boruta variable selection process. This process involves calculating variable importance in a random forest model and comparing each variable’s importance against that of a randomized variable with the same distribution of values. Over many iterations, this process can confirm the importance of each variable in a dataset relative to random noise [59]. This was implemented via the Boruta R library [60].
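The shadow-feature idea behind Boruta can be sketched in a few lines. This is a toy Python illustration, not the Boruta R library's implementation: a simple covariance score stands in for the random forest importance, and `boruta_step` (a name invented here) shows a single confirmation step rather than the full iterative procedure:

```python
# Toy sketch of one Boruta step: each feature competes against "shadow"
# copies of the features with shuffled values; a feature is confirmed only
# if its importance exceeds the best shadow importance.
import random

def toy_importance(x, y):
    """Absolute covariance as a stand-in for random forest importance."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return abs(sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x))

def boruta_step(features, y, seed=0):
    rng = random.Random(seed)
    shadow_scores = []
    for col in features.values():
        shuffled = col[:]            # shadow feature: same values, no signal
        rng.shuffle(shuffled)
        shadow_scores.append(toy_importance(shuffled, y))
    best_shadow = max(shadow_scores)
    return {name: toy_importance(col, y) > best_shadow
            for name, col in features.items()}
```

The real algorithm repeats this over many random forest fits and uses a statistical test to confirm or reject each feature.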
Based on experience and the previous literature [9,10,18], we chose the Bayesian Additive Regression Tree (BART) model for this analysis [61], implemented via the BART R library [62]. While this algorithm produces probabilistic predictions in the form of posterior draws, we simplified the outputs to deterministic predictions for each storm by taking the mean of the outputs of the model. The hyperparameters used by the BART algorithm (sparse parameters a and b, shrinkage parameter k, the number of trees, the number of posterior draws, and the number of iterations used to initialize the Markov chain Monte Carlo sampler) were tuned for this dataset via differential evolution [63], implemented via the DEoptim library [64]. Differential evolution was used to find the optimal configuration of the BART algorithm based on the mean root mean square logarithmic error (RMSLE) of a fixed 5-fold cross-validation of the RTMA system dataset. To maintain comparability, these optimized hyperparameter values were applied consistently to all models and experiments in this analysis. RMSLE was chosen because it is less sensitive to extreme errors.
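For reference, the RMSLE loss used for tuning can be written compactly. The `log1p` form, which keeps the metric defined when outage counts are zero, is a common convention and an assumption here rather than a detail stated in the text:

```python
# Root mean squared logarithmic error: penalizes relative rather than
# absolute differences, so extreme errors on large events count less.
import math

def rmsle(predicted, actual):
    return math.sqrt(sum((math.log1p(p) - math.log1p(a)) ** 2
                         for p, a in zip(predicted, actual)) / len(actual))
```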

2.3. Analysis

To understand the differences between the hybrid NOAA analysis dataset, the WRF simulation dataset, and the outage prediction models built on them, we evaluated each weather simulation’s ability to represent the local weather conditions by comparing its predictions against weather station observations. Then, to understand the different qualities of the two outage models, as well as evaluate the importance of individual and groups of variables in the outage models, we compared the cross-validation results, using traditional and spatial error metrics.
More specifically, to evaluate the two gridded weather simulations, data were collected from METAR and SPECI reports via the Integrated Surface Data archive maintained by the National Centers for Environmental Information [65]. Any data flagged with quality issues were removed, and all reported observations were averaged for every hour to produce a 24 h time series. Any station or variable with more than two hours of missing data was removed from the analysis. Then, the same summary statistics used to generate the outage model features (mean, minimum, maximum, standard deviation, total, 4 h mean during peak winds) were calculated based on the weather station observations. Any mean or maximum gust values reported as zero by the weather stations were also removed from consideration.
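The station aggregation and missing-data rule described above might look like the following sketch (illustrative Python only; the `(hour, value)` report structure and function name are assumptions):

```python
# Sketch: average all reports within each hour, then drop any
# station/variable series with more than two missing hours.

def hourly_series(reports, n_hours=24, max_missing=2):
    """reports: list of (hour, value) tuples for one station and variable."""
    buckets = {h: [] for h in range(n_hours)}
    for hour, value in reports:
        buckets[hour].append(value)
    series = [sum(v) / len(v) if v else None
              for v in (buckets[h] for h in range(n_hours))]
    if sum(1 for v in series if v is None) > max_missing:
        return None  # too many missing hours; exclude from the analysis
    return series
```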
For this analysis, all weather stations in the proximity of the outage prediction service territories were considered, with the exception of Northern New Hampshire. We removed that area from consideration because it is dominated by the White Mountains, and the complex topography would cause biased results. See Figure 1 for the detailed weather station location information used in this analysis. While additional data cleaning steps are common when this process is used for weather model evaluation, we determined that this would not be appropriate because the localized differences between the weather station observations and gridded NWP data are of interest.
The outage model performance was evaluated using leave-one-date-out cross-validation. This validation process simulates the operational predictability of the outages caused by each weather event by iteratively isolating the information of each storm event and testing the model’s ability to predict it. More specifically, for each storm date and time present in the database of storms, we reserved the data from that date and time, trained the outage model on the remaining data, and tested that trained model on the reserved data. This way, we had a comprehensive evaluation of all storms in our database, but prevented any spatial or temporal correlations in the weather data from influencing the model performance. While 372 thunderstorm events were considered in this analysis, because of overlapping times, each outage model was only trained and tested 226 times for this cross-validation. To evaluate the overall cross-validation results, we calculated the median absolute percent error (MdAPE), mean absolute percent error (MAPE), centered root mean squared error (CRMSE), coefficient of determination (R²), and the Nash–Sutcliffe efficiency (NSE) [66]. For definitions of these error metrics, please see Appendix C.
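The leave-one-date-out procedure can be expressed as a short skeleton. This is an illustrative Python sketch (the actual models were trained in R), with `fit` and `predict` as placeholder callables; all rows sharing a storm date/time are held out together, so there is one fold per unique date rather than per event:

```python
# Skeleton of leave-one-date-out cross-validation.

def leave_one_date_out(rows, fit, predict):
    """rows: list of dicts with a 'date' key; returns predictions and folds."""
    dates = sorted({r["date"] for r in rows})
    preds = {}
    for d in dates:
        train = [r for r in rows if r["date"] != d]   # withhold this date
        test = [r for r in rows if r["date"] == d]
        model = fit(train)
        for r in test:
            preds[id(r)] = predict(model, r)
    return preds, len(dates)
```

This structure explains why 372 events yield only 226 train/test iterations: events with overlapping times fall into the same fold.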
Because the spatial predictability of thunderstorm outages is also of interest, we additionally applied the fraction skill score (FSS) to evaluate the spatial skill of the outage models. FSS uses a threshold, or a series of thresholds, to generate binary rasters of predictions and actual values, and compares the two within a series of neighborhoods [67]. A skillful model predicts a similar fraction of values above the threshold as the actual values within a small area. This metric is becoming a widely accepted method to evaluate the spatial skill of precipitation forecasts, especially in the U.S. [68]. Under ideal conditions, an FSS value greater than 0.5 indicates “useful” skill, but this threshold depends on the baseline performance (FSS_uniform), as defined by the following equation:
FSS_uniform = 0.5 + FSS_random / 2
where FSS_random is the total of the derived binary raster divided by the number of cells in the domain [67]. For precipitation, this threshold tends to increase with smaller domains and as the prevalence of precipitation increases [69]. For this analysis, we calculated the FSS for each storm by service territory for a range of scales (3 × 3 to 21 × 21 cells) and outage thresholds, comparing upscaled outage predictions against actual outages via the validation library [70]. Upscaling the predicted and actual values for the FSS calculation was important because, at the resolution of our model and the observed frequency of damages, the actual values are extremely zero-inflated and very sparse (96.3% zeros, and a mean of 0.048 damages per grid cell). The outage model predictions, however, tend to be small (medians of 0.0292 and 0.0314 for the RTMA and WRF systems, respectively) and are more evenly distributed. This difference in spatial distribution was minimized by applying boxcar smoothing over a small 3 × 3 neighborhood to both the actual and predicted outages for each event and territory via the SpatialVx library [71]. While this process effectively degrades the precision of the analysis, it generates more continuously distributed values that are more comparable, while not affecting the total number of damage locations for each event.
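The neighborhood-fraction computation behind the FSS can be sketched in plain Python (the study used the SpatialVx and validation R libraries; this standalone version is a simplified sketch for small grids, with edge windows clipped to the domain):

```python
# Sketch of the fractions skill score: threshold both fields, compute
# neighborhood fractions with a boxcar window, then
# FSS = 1 - sum((Pf - Po)^2) / (sum(Pf^2) + sum(Po^2)).

def fractions(field, threshold, n):
    """Fraction of cells >= threshold in an n x n window around each cell."""
    rows, cols = len(field), len(field[0])
    half = n // 2
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            hits = total = 0
            for di in range(-half, half + 1):
                for dj in range(-half, half + 1):
                    if 0 <= i + di < rows and 0 <= j + dj < cols:
                        total += 1
                        hits += field[i + di][j + dj] >= threshold
            out[i][j] = hits / total
    return out

def fss(forecast, observed, threshold, n):
    pf = fractions(forecast, threshold, n)
    po = fractions(observed, threshold, n)
    num = sum((a - b) ** 2 for ra, rb in zip(pf, po) for a, b in zip(ra, rb))
    den = sum(a * a for row in pf for a in row) + \
          sum(b * b for row in po for b in row)
    return 1.0 - num / den if den else 1.0
```

A perfect spatial match yields 1, and completely disjoint fields yield 0, which is what makes the 0.5 "useful skill" reference point meaningful.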
To measure the variable importance of each outage model, we applied the variable permutation technique described by Fisher et al. [72] via the DALEX library in R [73]. This technique is model agnostic and uses a loss function to measure model performance as the input variables are perturbed, allowing a quantitative understanding of each variable’s influence on the model performance. Performing this evaluation via cross-validation would be prohibitively complex and computationally expensive, so to evaluate the variable importance within the outage models, all available data were used to train the models before variable importance was measured. In addition, because there is a significant random component in this analysis, we measured the variable importance over ten iterations for both outage models and computed confidence intervals. The loss metric used to evaluate variable importance, root mean squared logarithmic error (RMSLE), was chosen because it is robust to the inclusion of zeros and is less sensitive to rare cases of extreme errors, which can occur because of the statistical distribution of actual outages described above. However, because it is a logarithmic error metric, differences in RMSLE can appear small despite being significant.
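Permutation importance with an RMSLE loss can be sketched as follows. This is a minimal Python illustration (the study used the DALEX R library); the function names and data layout are assumptions made for the sketch:

```python
# Sketch of permutation importance: shuffle one feature column, re-score
# the model, and report the mean increase in loss over the baseline.
import math
import random

def rmsle(pred, actual):
    return math.sqrt(sum((math.log1p(p) - math.log1p(a)) ** 2
                         for p, a in zip(pred, actual)) / len(actual))

def permutation_importance(model, X, y, feature, n_iter=10, seed=0):
    """X: list of dicts; model(row) -> prediction; returns loss increase."""
    rng = random.Random(seed)
    base = rmsle([model(r) for r in X], y)
    losses = []
    for _ in range(n_iter):
        col = [r[feature] for r in X]
        rng.shuffle(col)
        perturbed = [dict(r, **{feature: v}) for r, v in zip(X, col)]
        losses.append(rmsle([model(r) for r in perturbed], y))
    return sum(losses) / n_iter - base
```

Averaging over several shuffles, as the study does over ten iterations, accounts for the randomness of the permutations.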

3. Results

3.1. Weather Analysis

As demonstrated in Figure 2, the NOAA analysis dataset represents almost all weather parameters used in the outage models more accurately than the WRF simulation dataset. The largest differences are in the quality of the precipitation parameters, as well as several wind and gust features. Both systems represent parameters associated with synoptic-scale processes, such as temperature, humidity, and surface pressure dynamics, much more accurately than mesoscale and microscale processes, such as wind and precipitation. Some surface pressure parameters appear to be poorly captured, but this is likely due to differences in elevation between the NWP data and the weather stations, which are not accounted for in this evaluation. In general, these results are consistent with what we would expect from the state of the art for deterministic 24 h NWP simulations of thunderstorms. For detailed metrics, see Appendix B.

3.2. The Outage Models

The RTMA-based outage model performs slightly better than the WRF-based model on all metrics used in our analysis, as seen in Table 3 and Figure 3.
While a direct comparison is not entirely fair because of the differences in the events used in the analysis and the domains of the models, both outage models presented here perform reasonably well in comparison to other outage prediction models of a similar architecture. Wanik et al. [10] describe a warm weather outage model with a slightly higher MdAPE (35.1 to 38.7%). In Cerrai et al. [9], the best overall outage model has an overall MdAPE of 43%, a MAPE of 59%, and an NSE of 0.53. In Yang et al. [17], their conditional outage prediction system designed for severe events has a MdAPE of 38%, a MAPE of 46%, and an NSE of 0.79. In Watson et al. [18], their best performing rain/wind storm model has a MdAPE of 38%, a MAPE of 57%, and an NSE of 0.43. The thunderstorm outage models described here have competitive APE metrics but a comparatively low NSE, in part because of one under-predicted extreme event.
Overall, the cross-validation results indicate that the models presented here are sensitive to the overall severity of the different thunderstorms. The models have a good dynamic range, especially considering that the median daily outages for CT, WMA, EMA, NH, and UI are 35, 6, 20, 22, and 25, respectively. The models demonstrate a dynamic range of around 10 times the typical daily outage level for each service territory, depending on storm severity.

3.2.1. Spatial Skill

As seen in Figure 4, the RTMA-based outage model has slightly better spatial performance than the WRF-based model, but the differences between the outage models are small in comparison to the differences between events and territories. While many thresholds were evaluated, we show the results for a threshold of 0.111 damage locations, which corresponds to one damage location smoothed over a 3 × 3 pixel area (approximately 7.5 km × 7.5 km).

3.2.2. Outage Model Variable Importance

The grouped variable importance analysis of the outage models in Figure 5 shows that, while infrastructure-related variables are by far the most important, the two models differ in which weather parameters contribute the most. While the RTMA-based system finds precipitation information to be very useful, the WRF-based system has a stronger preference for winds, temperature, and humidity than the RTMA model. The WRF model also appears to fit more on environmental variables such as land cover, vegetation, and elevation, which do not vary storm-by-storm in a given service territory. The results of an individual variable importance analysis are displayed in Appendix A. The importance of any one variable to the model is relatively small, given the large number of variables used, and the logarithmic error metric used to measure the dropout loss makes the differences appear smaller still; nevertheless, there are some interesting differences between the two models. Most notably, the maximum precipitation rate is one of the least important variables in the WRF model but is the second most important variable in the RTMA model.

4. Discussion

Based on these results, several conclusions can be made about the predictability of thunderstorm-related power outages. Firstly, while the NOAA analysis data represent local weather conditions more accurately than the WRF simulation, many weather features used in the outage prediction models have significant errors in both systems. Given the amount of observational data assimilated into the NOAA analysis system, these errors are likely not simulation or forecasting errors but rather representativeness errors caused by depicting complex and locally variable phenomena as deterministic and uniform over a 2.5 km × 2.5 km area. This type of error has been documented in the literature for precipitation and winds [74,75,76,77], and the errors in the RTMA wind data and the Stage IV precipitation data are comparable in magnitude to the representativeness errors found in these works.
Secondly, because the NOAA analysis data are of higher quality than the WRF simulations, it is unsurprising that the RTMA outage model is more accurate than the WRF-based one. What is surprising is how modest the performance difference between these outage models is. Even with the large amount of observational data incorporated into the RTMA and Stage IV analysis products, which contain far fewer simulation errors than the WRF output, the outage model is unable to predict thunderstorm-related outages with substantially greater accuracy.
This suggests that the randomness of storm damages is quite significant, and more precise outage predictions may require significantly more precise information. One possibility is that additional factors that are not considered in this study, such as the age of the infrastructure, limit the outage model. However, there are also differences between the two models that suggest other possibilities. As described above, the spatial resolution of the representation of the weather data is a readily apparent source of imprecision in our data. Although all data used in these models, including the environmental and infrastructure information, may suffer from similar representativeness errors, we can see that some weather variables are better represented at 2.5 km × 2.5 km than others. How the precision of the weather data affects the outage models can be understood with a more detailed analysis of the variable importance.
By comparing the R² values of the weather feature evaluation with the importance of the weather variables in the outage models, we find a weak but real correlation between the two (0.23 ± 0.07 for RTMA, 0.29 ± 0.07 for WRF). This indicates that the outage models prefer precise and accurate weather information. This may seem obvious, but the preference appears regardless of whether or not a weather phenomenon directly causes power outages. Both the RTMA and WRF outage systems find temperature and humidity to be somewhat important to their predictions, although these variables are not direct causes of outages in thunderstorms. They are instead indicators of convective potential and are, thus, only indirectly related to power outages; but because they are accurately represented, the machine learning algorithms of the outage models find them useful for understanding the risk of weather-related damage.
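The text does not state how the ± uncertainty on these correlations was obtained; one common way to attach such an uncertainty to a Pearson correlation over a modest number of variables is a bootstrap standard error, sketched below. The function name and bootstrap choice are illustrative assumptions, not the paper's documented method.

```python
import numpy as np

def pearson_bootstrap(x, y, n_boot=2000, seed=0):
    """Pearson correlation between two paired samples (e.g. feature R^2
    values and variable importances), with a bootstrap standard error."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    rng = np.random.default_rng(seed)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), len(x))  # resample pairs with replacement
        boots.append(np.corrcoef(x[idx], y[idx])[0, 1])
    return r, float(np.std(boots))
```

Resampling the (R², importance) pairs jointly preserves their pairing, so the spread of the resampled correlations estimates the sampling uncertainty of the point estimate.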
At the same time, there is also a distinct preference for variables that have a more direct causative relationship with weather-related outages. This is best seen in the RTMA system's strong preference for precipitation variables: maxPREC is the second most important variable of all for that model, despite having only a moderate correlation with local conditions (R² = 0.5298). It can also be seen in how both models find useful information in the wind and gust variables, despite the most precisely predicted variable in that group, avgWIND, having only a moderate correlation with local conditions (R² = 0.6346 and 0.5879 for RTMA and WRF, respectively). This is because wind and precipitation are good indicators of the location and intensity of a convective storm, and are therefore more direct indicators of the risk of weather-related damage. Indeed, in the case of the RTMA system, the strong preference for precipitation information comes with a comparatively weaker preference for most other variable groups.
This suggests that if the precision of the precipitation and wind information were increased further, we could expect corresponding increases in the accuracy of outage prediction models for thunderstorms. Additionally, if we consider that the apparent lack of precision in these data likely stems from representativeness error, as described above, future directions for research become apparent.
Lastly, the spatial skill of the outage prediction system appears to vary significantly from storm to storm as well as from territory to territory. It is beyond the scope of this paper to speculate about the storm-to-storm variability in the FSS scores, which may also be a function of the accuracy and precision of the weather simulations, but the distinct differences in the spatial predictability of outages across service territories suggest distinct differences between the territories themselves. It has been documented for precipitation that FSS calculations change significantly depending on the size of the domain. In the case of outages, however, this effect is likely only moderate, because the average value of FSS_uniform does not vary much between territories. The most apparent and potentially impactful difference between the territories for outage modeling is the density of the infrastructure. As seen in Figure 5 and Appendix A, infrastructure is a very influential variable for outage modeling, and while all the service territories included in this study contain some urban areas, some are much more consistently urbanized than others. As such, the mean density of overhead lines varies widely across territories, from a minimum of 8.5 km per grid cell in WMA to a maximum of 27.5 km per grid cell in UI. Comparing the mean density of overhead lines with the mean FSS shown in Figure 4 for each territory yields Pearson correlations of 0.927 for RTMA and 0.946 for WRF: a very strong correlation between the overall spatial predictability of outages and the density of the infrastructure in the region. This is a clear indication of the influence that infrastructure density has on the spatial predictability of power outages. However, it may also be an indication of over-fitting on the infrastructure features.
Infrastructure is by far the most important variable group in this analysis, but in the case of the RTMA outage model, better spatial skill is accompanied by a correspondingly lower importance of infrastructure.
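For reference, the fraction skill score used in this spatial evaluation can be sketched as follows: binarize the forecast and observed fields at a threshold, convert each to neighborhood event fractions, and compare the two fraction fields. This is a generic illustration assuming square grids and an odd neighborhood size; the function names and the integral-image implementation are ours, not the study's code.

```python
import numpy as np

def neighborhood_fraction(binary, n):
    """Fraction of 'event' cells in an n-by-n window centered on each cell
    (n odd), computed with an integral image over a zero-padded field."""
    pad = n // 2
    b = np.pad(np.asarray(binary, float), pad)
    c = b.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))  # c[r, s] = sum of b[:r, :s]
    H, W = np.asarray(binary).shape
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = (c[i + n, j + n] - c[i, j + n]
                         - c[i + n, j] + c[i, j]) / n**2
    return out

def fss(forecast, observed, threshold, n):
    """Fraction skill score of threshold exceedances, n-by-n neighborhood."""
    pf = neighborhood_fraction(np.asarray(forecast) > threshold, n)
    po = neighborhood_fraction(np.asarray(observed) > threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

def fss_uniform(observed, threshold):
    """Skill of a uniform forecast of the observed base rate: 0.5 + f/2."""
    return 0.5 + np.mean(np.asarray(observed) > threshold) / 2
```

Because FSS_uniform depends only on the observed base rate, its stability across territories is what supports the claim above that domain-size effects on the FSS comparison are likely moderate.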

5. Conclusions

While the two thunderstorm-related outage models shown here are acceptably skilled at predicting the total number of damages for each storm event, they have difficulty predicting the location of storm impacts. Both the model based on the NOAA analysis dataset and the model based on the WRF simulation dataset appear to fit strongly on the amount of infrastructure present in an area, together with a combination of weather variables that are either directly related to storm damages but imprecisely represented (precipitation, winds) or more general indicators of convective potential but more precisely represented (temperature, humidity).
Because predictions of the weather conditions and power outages appear to have similar limitations for thunderstorms, there are established analytical methods that could be readily applied to improve the modeling of power outages and other impacts associated with thunderstorms. Just as weather ensembles allow meteorologists to predict the potential intensity of thunderstorms beyond the capabilities of deterministic forecasts, an outage model coupled to a weather ensemble may allow us to predict the potential impacts in a similar way. Because of the high uncertainties, rapidly-refreshing outage models, such as that described in Alpay et al. [33], may be more useful in an operational decision-making context for thunderstorm preparedness.
Given that strong convective storms are an increasing threat globally, these results make an implicit case for accelerating investment in global weather prediction and observation infrastructure. The impact models presented here, even with their limitations, are only possible because of the availability of high-resolution nowcasting products in the United States. While recent developments in global convection-allowing NWP systems are encouraging [78], more work in this space is needed for this type of impact modeling to be applied in other countries.
Based on our findings, we can expect that as better representations of local weather conditions during thunderstorms are developed both in the United States and globally, outage model accuracy, overall as well as spatially, will improve; the outage models will learn more and more of the phenomena directly linked to weather-related power outages, such as strong winds and extreme precipitation, instead of the synoptic patterns that are correlated to them. To progress along that path, a more granular understanding of the weather conditions that cause damage in convective storms and how they can be represented is needed. Further research involving an analysis or modeling of storm impacts based on microscale numerical weather prediction, large eddy simulations, or even observations from radar or lidar instruments could be very informative about how weather information can be generated in a way that improves our ability to understand and anticipate the impacts of convective storms.

Author Contributions

Conceptualization, P.L.W.; methodology, P.L.W.; software, P.L.W. and M.K.; validation, P.L.W.; formal analysis, P.L.W.; investigation, P.L.W. and E.A.; resources, E.A.; data curation, P.L.W. and M.K.; writing—original draft preparation, P.L.W.; writing—review and editing, E.A. and M.K.; visualization, P.L.W.; supervision, E.A.; project administration, E.A.; funding acquisition, E.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Eversource Energy.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Restrictions apply to the availability of these data. Data were obtained from Eversource Energy and United Illuminating. They are available from the authors with the permission of Eversource Energy and United Illuminating.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the analyses or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
NWP | Numerical Weather Prediction
WRF | Weather Research and Forecasting model
RTMA | Real-Time Mesoscale Analysis
RMSLE | Root Mean Squared Logarithmic Error
NSE | Nash–Sutcliffe Efficiency
MAPE | Mean Absolute Percent Error
CRMSE | Centered Root Mean Squared Error
FSS | Fraction Skill Score
CT | Eversource Connecticut
WMA | Eversource Western Massachusetts
EMA | Eversource Eastern Massachusetts
NH | Eversource New Hampshire
UI | AVANGRID United Illuminating
SPI | Standardized Precipitation Index
LAI | Leaf Area Index
DEM | Digital Elevation Model
NLCD | National Land Cover Database
3DEP | 3D Elevation Program
GEDI | Global Ecosystem Dynamics Investigation
SSURGO | Soil Survey Geographic Database
ITSP | Individual Tree Species Parameter
WWDT | West Wide Drought Tracker
MODIS | Moderate Resolution Imaging Spectroradiometer
METAR | Meteorological Aerodrome Reports
SPECI | Aviation Selected Special Weather Report
NOAA | National Oceanic and Atmospheric Administration
NCEP | National Centers for Environmental Prediction
USDA | United States Department of Agriculture
USGS | United States Geological Survey
MRLC | Multi-Resolution Land Characteristics

Appendix A. Data Features

Table A1. Description of variables used in outage prediction models. The dropout loss of the top ten variables for each model are in bold. Higher dropout loss indicates greater importance.
Name | Description | Source | Variable Group | RTMA Drp. Loss | WRF Drp. Loss
--- | --- | --- | --- | --- | ---
ohLength | Length of Overhead Line | Utility Company | Infrastructure | 0.153695 | 0.155182
poleCount | Number of Utility Poles | Utility Company | Infrastructure | 0.152473 | 0.153222
fuseCount | Number of Fuses | Utility Company | Infrastructure | 0.152181 | 0.153253
reclrCount | Number of Reclosers | Utility Company | Infrastructure | 0.152233 | 0.153057
prec11 | Percent NLCD 11—Open Water | NLCD 2016 [51] | Land Cover | 0.151933 | 0.152713
prec21 | Percent NLCD 21—Developed, Open | NLCD 2016 [51] | Land Cover | 0.152056 | 0.152876
prec22 | Percent NLCD 22—Developed, Low | NLCD 2016 [51] | Land Cover | 0.151910 | 0.152749
prec23 | Percent NLCD 23—Developed, Medium | NLCD 2016 [51] | Land Cover | 0.152079 | 0.152963
prec24 | Percent NLCD 24—Developed, High | NLCD 2016 [51] | Land Cover | 0.151989 | 0.152700
prec31 | Percent NLCD 31—Barren | NLCD 2016 [51] | Land Cover | 0.151927 | 0.152714
prec41 | Percent NLCD 41—Deciduous Forest | NLCD 2016 [51] | Land Cover | 0.151974 | 0.152783
prec42 | Percent NLCD 42—Evergreen Forest | NLCD 2016 [51] | Land Cover | 0.151936 | 0.152731
prec43 | Percent NLCD 43—Mixed Forest | NLCD 2016 [51] | Land Cover | 0.151861 | 0.152732
prec52 | Percent NLCD 52—Shrub | NLCD 2016 [51] | Land Cover | 0.151933 | 0.152704
prec71 | Percent NLCD 71—Grassland | NLCD 2016 [51] | Land Cover | 0.151928 | 0.152699
prec82 | Percent NLCD 82—Cultivated Crops | NLCD 2016 [51] | Land Cover | 0.151933 | 0.152715
prec95 | Percent NLCD 95—Herbaceous Wetlands | NLCD 2016 [51] | Land Cover | 0.151934 | 0.152713
avgCanopy | Mean Percent Tree Canopy Cover | NLCD Tree Canopy 2016 [52] | Vegetation | 0.152329 | 0.152956
stdCanopy | Standard Deviation of Canopy Cover | NLCD Tree Canopy 2016 [52] | Vegetation | 0.151968 | 0.152736
avgVegHgt | Mean Vegetation Height | GEDI 2019 [53] | Vegetation | 0.152037 | 0.152906
stdVegHgt | Standard Deviation of Vegetation Height | GEDI 2019 [53] | Vegetation | 0.151945 | 0.152771
avgHardBA | Mean Hardwood Basal Area | ITSP [56] | Vegetation | 0.151903 | 0.152714
stdHardBA | Standard Deviation of Hardwood BA | ITSP [56] | Vegetation | 0.151939 | 0.152722
avgHardSDI | Mean Hardwood Stand Density Index | ITSP [56] | Vegetation | 0.151963 | 0.152686
stdHardSDI | Standard Deviation of Hardwood SDI | ITSP [56] | Vegetation | 0.151919 | 0.152698
avgSoftBA | Mean Softwood Basal Area | ITSP [56] | Vegetation | 0.151914 | 0.152702
stdSoftBA | Standard Deviation of Softwood BA | ITSP [56] | Vegetation | 0.151881 | 0.152691
avgSoftSDI | Mean Softwood Stand Density Index | ITSP [56] | Vegetation | 0.151927 | 0.152677
stdSoftSDI | Standard Deviation of Softwood SDI | ITSP [56] | Vegetation | 0.151891 | 0.152684
avgBA | Mean Total Basal Area | ITSP [56] | Vegetation | 0.151950 | 0.152737
stdBA | Standard Deviation of Total Basal Area | ITSP [56] | Vegetation | 0.151910 | 0.152776
avgSDI | Mean Total Stand Density Index | ITSP [56] | Vegetation | 0.151951 | 0.152698
stdSDI | Standard Deviation of Total SDI | ITSP [56] | Vegetation | 0.151981 | 0.152767
avgDQ | Mean Total Quadratic Mean Diameter | ITSP [56] | Vegetation | 0.151929 | 0.152718
stdDQ | Standard Deviation of Total DQ | ITSP [56] | Vegetation | 0.151936 | 0.152760
avgTF | Mean of Total Frequency | ITSP [56] | Vegetation | 0.152247 | 0.153137
stdTF | Standard Deviation of TF | ITSP [56] | Vegetation | 0.151984 | 0.152783
avgTPA | Mean of Trees per Acre | ITSP [56] | Vegetation | 0.151881 | 0.152653
stdTPA | Standard Deviation of TPA | ITSP [56] | Vegetation | 0.151932 | 0.152690
LAI | Leaf Area Index | MODIS [9,79] | Vegetation | 0.152083 | 0.152848
avgDEM | Mean Elevation | 3DEP [54] | Elevation | 0.151837 | 0.152660
stdDEM | Standard Deviation of Elevation | 3DEP [54] | Elevation | 0.151924 | 0.152720
elvDiff | Difference of avgDEM and weather elevation | 3DEP [54], RTMA [37], WRF [80] | Elevation | 0.151931 | 0.152706
spi1 | One Month Standardized Precipitation Index | WWDT [57] | Drought | 0.151987 | 0.153005
spi3 | Three Month Standardized Precipitation Index | WWDT [57] | Drought | 0.152001 | 0.152830
spi12_0 | 12 Month SPI, current | WWDT [57] | Drought | 0.151998 | 0.152773
spi12_1 | 12 Month SPI, 1 year prior | WWDT [57] | Drought | 0.152137 | 0.152853
spi12_2 | 12 Month SPI, 2 years prior | WWDT [57] | Drought | 0.152075 | 0.152831
spi12_3 | 12 Month SPI, 3 years prior | WWDT [57] | Drought | 0.152155 | 0.152853
spi12_4 | 12 Month SPI, 4 years prior | WWDT [57] | Drought | 0.152027 | 0.152809
spi12_5 | 12 Month SPI, 5 years prior | WWDT [57] | Drought | 0.151939 | 0.152801
hydNo | Percent not hydric soils | SSURGO [55] | Soil Type | 0.151954 | 0.152771
siltTotal | Percent Silt Content | SSURGO [55] | Soil Type | 0.151943 | 0.152747
clayTotal | Percent Clay Content | SSURGO [55] | Soil Type | 0.151929 | 0.152717
rockTotal | Percent of Rock Content | SSURGO [55] | Soil Type | 0.151966 | 0.152738
soilDepth | Depth of Soil | SSURGO [55] | Soil Type | 0.151847 | 0.152661
orgMat | Percent of Organic Material | SSURGO [55] | Soil Type | 0.151949 | 0.152743
soilDens | Soil Density | SSURGO [55] | Soil Type | 0.151950 | 0.152730
kSat | Saturated Hydraulic Conductivity | SSURGO [55] | Soil Type | 0.151945 | 0.152732
satP | Soil Porosity | SSURGO [55] | Soil Type | 0.151961 | 0.152716
avgTMP | Mean Air Temperature | RTMA [37], WRF [80] | Temperature | 0.152191 | 0.152650
stdTMP | Standard Deviation of Air Temp | RTMA [37], WRF [80] | Temperature | 0.152156 | 0.152819
maxTMP | Maximum Air Temperature | RTMA [37], WRF [80] | Temperature | 0.152685 | 0.153235
minTMP | Minimum Air Temperature | RTMA [37], WRF [80] | Temperature | 0.151925 | 0.152873
sumTMP | Sum of Air Temperatures | RTMA [37], WRF [80] | Temperature | 0.152029 | 0.152741
peakTMP | Mean Temp during peak winds | RTMA [37], WRF [80] | Temperature | 0.152020 | 0.152780
avgDPT | Mean Dew Point Temperature | RTMA [37], WRF [80] | Dew Point | 0.151976 | 0.152767
stdDPT | Standard Deviation of Dew Point | RTMA [37], WRF [80] | Dew Point | 0.152013 | 0.152804
maxDPT | Maximum Dew Point Temperature | RTMA [37], WRF [80] | Dew Point | 0.151926 | 0.152687
minDPT | Minimum Dew Point Temperature | RTMA [37], WRF [80] | Dew Point | 0.151941 | 0.152832
sumDPT | Sum of Dew Point Temperatures | RTMA [37], WRF [80] | Dew Point | 0.152012 | 0.152792
peakDPT | Mean Dew Point during peak winds | RTMA [37], WRF [80] | Dew Point | 0.152007 | 0.152723
avgPRES | Mean Surface Pressure | RTMA [37], WRF [80] | Pressure | 0.151914 | 0.152716
stdPRES | Standard Deviation of Pressure | RTMA [37], WRF [80] | Pressure | 0.152297 | 0.152797
maxPRES | Maximum Surface Pressure | RTMA [37], WRF [80] | Pressure | 0.151950 | 0.152735
minPRES | Minimum Surface Pressure | RTMA [37], WRF [80] | Pressure | 0.151946 | 0.152737
sumPRES | Sum of Surface Pressures | RTMA [37], WRF [80] | Pressure | 0.151943 | 0.152706
peakPRES | Mean Pressure during peak winds | RTMA [37], WRF [80] | Pressure | 0.151960 | 0.152694
avgSPFH | Mean Specific Humidity | RTMA [37], WRF [80] | Humidity | 0.152062 | 0.152817
stdSPFH | Standard Deviation of Spec. Humidity | RTMA [37], WRF [80] | Humidity | 0.152018 | 0.152836
maxSPFH | Maximum Specific Humidity | RTMA [37], WRF [80] | Humidity | 0.151949 | 0.152751
minSPFH | Minimum Specific Humidity | RTMA [37], WRF [80] | Humidity | 0.152082 | 0.152905
sumSPFH | Sum of Specific Humidities | RTMA [37], WRF [80] | Humidity | 0.152163 | 0.152752
peakSPFH | Mean of Spec. Humidity during peak winds | RTMA [37], WRF [80] | Humidity | 0.151984 | 0.152767
avgWIND | Mean 10m Wind Speed | RTMA [37], WRF [80] | Wind/Gust | 0.151961 | 0.152710
stdWIND | Standard Deviation of 10m Wind Speed | RTMA [37], WRF [80] | Wind/Gust | 0.151954 | 0.152750
maxWIND | Maximum 10m Wind Speed | RTMA [37], WRF [80] | Wind/Gust | 0.151977 | 0.152748
minWIND | Minimum 10m Wind Speed | RTMA [37], WRF [80] | Wind/Gust | 0.151997 | 0.152745
sumWIND | Sum of Wind Speeds | RTMA [37], WRF [80] | Wind/Gust | 0.151972 | 0.152716
peakWIND | Mean wind speed during peak winds | RTMA [37], WRF [80] | Wind/Gust | 0.151948 | 0.152742
avgGUST | Mean Wind Gust Speed | RTMA [37], WRF [80] | Wind/Gust | 0.152045 | 0.152836
stdGUST | Standard Deviation of Wind Gust Speed | RTMA [37], WRF [80] | Wind/Gust | 0.151985 | 0.152769
maxGUST | Maximum Wind Gust Speed | RTMA [37], WRF [80] | Wind/Gust | 0.152040 | 0.152752
minGUST | Minimum Wind Gust Speed | RTMA [37], WRF [80] | Wind/Gust | 0.152089 | 0.152746
sumGUST | Sum of Wind Gusts | RTMA [37], WRF [80] | Wind/Gust | 0.151988 | 0.152746
peakGUST | Mean Wind Gust Speed during peak winds | RTMA [37], WRF [80] | Wind/Gust | 0.152039 | 0.152748
avgLFSH | Mean Leaf Stress | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.151991 | 0.152744
stdLFSH | Standard Deviation of Leaf Stress | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.151961 | 0.152738
maxLFSH | Maximum Leaf Stress | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.151980 | 0.152743
minLFSH | Minimum Leaf Stress | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.151963 | 0.152755
sumLFSH | Sum of Leaf Stresses | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.152024 | 0.152760
peakLFSH | Mean Leaf Stress during peak winds | MODIS [9,79], RTMA [37], WRF [80] | Wind/Gust | 0.151961 | 0.152826
wgt5 | Hours of Winds >5 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151974 | 0.152793
cowgt5 | Continuous Hours of Winds >5 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151952 | 0.152770
ggt13 | Hours of Gusts >13 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151967 | 0.152997
ggt17 | Hours of Gusts >17 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151932 | 0.152729
ggt22 | Hours of Gusts >22 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151934 | 0.152717
coggt13 | Continuous Hours of Gusts >13 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151935 | 0.152804
coggt17 | Continuous Hours of Gusts >17 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151940 | 0.152736
coggt22 | Continuous Hours of Gusts >22 m/s | RTMA [37], WRF [80] | Wind/Gust | 0.151945 | 0.152719
typWDIR | Typical (mean) wind direction of all storms | RTMA [37], WRF [80] | Wind/Gust | 0.152005 | 0.152712
medWDIR | Median Wind direction of storm | RTMA [37], WRF [80] | Wind/Gust | 0.152002 | 0.152806
difWDIR | Difference between typWDIR and medWDIR | RTMA [37], WRF [80] | Wind/Gust | 0.151966 | 0.152745
avgPREC | Mean Hourly Precipitation Rate | Stage IV [38], WRF [80] | Precipitation | 0.152209 | 0.152784
stdPREC | Standard Deviation of Precip. Rate | Stage IV [38], WRF [80] | Precipitation | 0.152403 | 0.152731
maxPREC | Maximum Hourly Precipitation Rate | Stage IV [38], WRF [80] | Precipitation | 0.152844 | 0.152773
sumPREC | Total Precipitation | Stage IV [38], WRF [80] | Precipitation | 0.152187 | 0.152746
peakPREC | Mean Precip. Rate during peak winds | Stage IV [38], WRF [80] | Precipitation | 0.152311 | 0.152726

Appendix B. Weather Correlations

Table A2. Correlation between RTMA and WRF weather datasets, and METAR and SPECI observations.
Name | Variable Group | RTMA—METAR R² | WRF—METAR R²
--- | --- | --- | ---
avgTMP | Temperature | 0.9836 | 0.9129
stdTMP | Temperature | 0.9119 | 0.6448
maxTMP | Temperature | 0.9707 | 0.8686
minTMP | Temperature | 0.9443 | 0.8592
sumTMP | Temperature | 0.9119 | 0.8459
peakTMP | Temperature | 0.7814 | 0.6480
avgDPT | Dew Point | 0.9798 | 0.9461
stdDPT | Dew Point | 0.9092 | 0.7349
maxDPT | Dew Point | 0.9608 | 0.8966
minDPT | Dew Point | 0.9511 | 0.8897
sumDPT | Dew Point | 0.9234 | 0.8921
peakDPT | Dew Point | 0.8348 | 0.7189
avgPRES | Pressure | 0.1700 | 0.1588
stdPRES | Pressure | 0.9766 | 0.9392
maxPRES | Pressure | 0.1498 | 0.1363
minPRES | Pressure | 0.2200 | 0.2038
sumPRES | Pressure | 0.0015 | 0.0013
peakPRES | Pressure | 0.1708 | 0.1469
avgSPFH | Humidity | 0.9735 | 0.9274
stdSPFH | Humidity | 0.8878 | 0.6932
maxSPFH | Humidity | 0.9470 | 0.8648
minSPFH | Humidity | 0.9500 | 0.8799
sumSPFH | Humidity | 0.9204 | 0.8735
peakSPFH | Humidity | 0.8219 | 0.7002
avgWIND | Wind/Gust | 0.6346 | 0.5879
stdWIND | Wind/Gust | 0.3217 | 0.1736
maxWIND | Wind/Gust | 0.3327 | 0.2667
minWIND | Wind/Gust | 0.5053 | 0.3046
sumWIND | Wind/Gust | 0.6057 | 0.5643
peakWIND | Wind/Gust | 0.3632 | 0.3246
avgGUST | Wind/Gust | 0.5915 | 0.5056
stdGUST | Wind/Gust | 0.1411 | 0.0627
maxGUST | Wind/Gust | 0.2484 | 0.1067
minGUST | Wind/Gust | 0.0060 | 0.0091
sumGUST | Wind/Gust | 0.5789 | 0.4957
peakGUST | Wind/Gust | 0.1487 | 0.0625
avgLFSH | Wind/Gust | 0.5512 | 0.5444
stdLFSH | Wind/Gust | 0.3583 | 0.2756
maxLFSH | Wind/Gust | 0.2845 | 0.2249
minLFSH | Wind/Gust | 0.4397 | 0.2735
sumLFSH | Wind/Gust | 0.5382 | 0.5385
peakLFSH | Wind/Gust | 0.2939 | 0.2786
wgt5 | Wind/Gust | 0.4230 | 0.4820
cowgt5 | Wind/Gust | 0.3837 | 0.3517
ggt13 | Wind/Gust | 0.4432 | 0.2149
ggt17 | Wind/Gust | 0.0352 | 0.0137
ggt22 | Wind/Gust | NA ¹ | 0.0000
coggt13 | Wind/Gust | 0.4110 | 0.1665
coggt17 | Wind/Gust | 0.0396 | 0.0105
coggt22 | Wind/Gust | NA ¹ | 0.0000
typWDIR | Wind/Gust | 0.0054 | 0.1378
medWDIR | Wind/Gust | 0.3357 | 0.0304
difWDIR | Wind/Gust | 0.2362 | 0.0219
avgPREC | Precipitation | 0.6056 | 0.0886
stdPREC | Precipitation | 0.5589 | 0.0622
maxPREC | Precipitation | 0.5298 | 0.0538
sumPREC | Precipitation | 0.5585 | 0.0862
peakPREC | Precipitation | 0.1989 | 0.0279

¹ Not enough variance to compute.

Appendix C. Error Metrics

$$\mathrm{MAPE} = \frac{1}{N} \sum_{i=1}^{N} \left| \frac{P_i - A_i}{A_i} \right| \times 100$$

$$\mathrm{CRMSE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left[ (P_i - \bar{P}) - (A_i - \bar{A}) \right]^2 }$$

$$\mathrm{NSE} = 1 - \frac{ \sum_{i=1}^{N} (P_i - A_i)^2 }{ \sum_{i=1}^{N} (A_i - \bar{A})^2 }$$

$$R^2 = \left( \frac{ \sum_{i=1}^{N} (P_i - \bar{P})(A_i - \bar{A}) }{ \sqrt{ \sum_{i=1}^{N} (P_i - \bar{P})^2 \sum_{i=1}^{N} (A_i - \bar{A})^2 } } \right)^2$$

$$\mathrm{RMSLE} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( \log(P_i + 1) - \log(A_i + 1) \right)^2 }$$

where $P_i$ and $A_i$ denote the predicted and actual values, $\bar{P}$ and $\bar{A}$ their means, and $N$ the number of samples.
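The error metrics defined in this appendix translate directly into code; the following is a straightforward NumPy transcription, assuming `P` and `A` are arrays of predicted and actual values.

```python
import numpy as np

def mape(P, A):
    """Mean absolute percent error."""
    return np.mean(np.abs((P - A) / A)) * 100

def crmse(P, A):
    """Centered root mean squared error (bias removed from both series)."""
    return np.sqrt(np.mean(((P - P.mean()) - (A - A.mean())) ** 2))

def nse(P, A):
    """Nash-Sutcliffe efficiency; 1 is perfect, 0 matches the mean forecast."""
    return 1 - np.sum((P - A) ** 2) / np.sum((A - A.mean()) ** 2)

def r2(P, A):
    """Squared Pearson correlation (coefficient of determination)."""
    return np.corrcoef(P, A)[0, 1] ** 2

def rmsle(P, A):
    """Root mean squared logarithmic error; log1p handles zero counts."""
    return np.sqrt(np.mean((np.log1p(P) - np.log1p(A)) ** 2))
```

Note that MAPE is undefined when any actual value is zero, which is one reason a logarithmic metric such as RMSLE is a natural choice for count data like outages.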

References

  1. Economic Benefits of Increasing Electric Grid Resilience to Weather Outages; Technical Report; Executive Office of the President: Washington, DC, USA, 2013.
  2. Lubkeman, D.; Julian, D. Large scale storm outage management. In Proceedings of the IEEE Power Engineering Society General Meeting, Denver, CO, USA, 6–10 June 2004; Volume 2, pp. 16–22.
  3. Hall, K.L. Out of Sight, Out of Mind; Technical Report; Edison Electric Institute: Washington, DC, USA, 2012.
  4. Mukherjee, S.; Nateghi, R.; Hastak, M. A multi-hazard approach to assess severe weather-induced major power outage risks in the U.S. Reliab. Eng. Syst. Saf. 2018, 175, 283–305.
  5. Sander, J.; Eichner, J.F.; Faust, E.; Steuer, M. Rising Variability in Thunderstorm-Related U.S. Losses as a Reflection of Changes in Large-Scale Thunderstorm Forcing. Weather Clim. Soc. 2013, 5, 317–331.
  6. Diffenbaugh, N.S.; Scherer, M.; Trapp, R.J. Robust increases in severe thunderstorm environments in response to greenhouse forcing. Proc. Natl. Acad. Sci. USA 2013, 110, 16361–16366.
  7. Scaff, L.; Prein, A.F.; Li, Y.; Liu, C.; Rasmussen, R.; Ikeda, K. Simulating the convective precipitation diurnal cycle in North America’s current and future climate. Clim. Dyn. 2020, 55, 369–382.
  8. Li, Z.; Singhee, A.; Wang, H.; Raman, A.; Siegel, S.; Heng, F.L.; Mueller, R.; Labut, G. Spatio-temporal forecasting of weather-driven damage in a distribution system. In Proceedings of the 2015 IEEE Power & Energy Society General Meeting, Denver, CO, USA, 26–30 July 2015; pp. 1–5.
  9. Cerrai, D.; Wanik, D.W.; Bhuiyan, M.A.E.; Zhang, X.; Yang, J.; Frediani, M.E.B.; Anagnostou, E.N. Predicting Storm Outages Through New Representations of Weather and Vegetation. IEEE Access 2019, 7, 29639–29654.
  10. Wanik, D.W.; Anagnostou, E.N.; Hartman, B.M.; Frediani, M.E.B.; Astitha, M. Storm outage modeling for an electric distribution network in Northeastern USA. Nat. Hazards 2015, 79, 1359–1384.
  11. Kankanala, P.; Das, S.; Pahwa, A. AdaBoost+: An Ensemble Learning Approach for Estimating Weather-Related Outages in Distribution Systems. IEEE Trans. Power Syst. 2014, 29, 359–367.
  12. Han, S.R.; Guikema, S.D.; Quiring, S.M.; Lee, K.H.; Rosowsky, D.; Davidson, R.A. Estimating the spatial distribution of power outages during hurricanes in the Gulf coast region. Reliab. Eng. Syst. Saf. 2009, 94, 199–210.
  13. Quiring, S.M.; Zhu, L.; Guikema, S.D. Importance of soil and elevation characteristics for modeling hurricane-induced power outages. Nat. Hazards 2011, 58, 365–390.
  14. Guikema, S.D.; Nateghi, R.; Quiring, S.M.; Staid, A.; Reilly, A.C.; Gao, M. Predicting Hurricane Power Outages to Support Storm Response Planning. IEEE Access 2014, 2, 1364–1373.
  15. McRoberts, D.B.; Quiring, S.M.; Guikema, S.D. Improving Hurricane Power Outage Prediction Models Through the Inclusion of Local Environmental Factors. Risk Anal. 2018, 38, 2722–2737.
  16. D’Amico, D.F.; Quiring, S.M.; Maderia, C.M.; McRoberts, D.B. Improving the Hurricane Outage Prediction Model by including tree species. Clim. Risk Manag. 2019, 25, 100193.
  17. Yang, F.; Watson, P.; Koukoula, M.; Anagnostou, E.N. Enhancing Weather-Related Power Outage Prediction by Event Severity Classification. IEEE Access 2020, 8, 60029–60042.
  18. Watson, P.L.; Cerrai, D.; Koukoula, M.; Wanik, D.W.; Anagnostou, E. Weather-related power outage model with a growing domain: Structure, performance, and generalisability. J. Eng. 2020, 2020, 817–826.
  19. Tervo, R.; Láng, I.; Jung, A.; Mäkelä, A. Predicting power outages caused by extratropical storms. Nat. Hazards Earth Syst. Sci. 2021, 21, 607–627.
  20. Singhee, A.; Wang, H. Probabilistic forecasts of service outage counts from severe weather in a distribution grid. In Proceedings of the 2017 IEEE Power & Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5.
  21. Yue, M.; Toto, T.; Jensen, M.P.; Giangrande, S.E.; Lofaro, R. A Bayesian Approach-Based Outage Prediction in Electric Utility Systems Using Radar Measurement Data. IEEE Trans. Smart Grid 2018, 9, 6149–6159.
  22. Zhou, Y.; Pahwa, A.; Yang, S.S. Modeling Weather-Related Failures of Overhead Distribution Lines. IEEE Trans. Power Syst. 2006, 21, 1683–1690.
  23. Kankanala, P.; Pahwa, A.; Das, S. Regression models for outages due to wind and lightning on overhead distribution feeders. In Proceedings of the 2011 IEEE Power and Energy Society General Meeting, Detroit, MI, USA, 24–28 July 2011; pp. 1–4.
  24. Hohenegger, C.; Schar, C. Atmospheric Predictability at Synoptic Versus Cloud-Resolving Scales. Bull. Am. Meteorol. Soc. 2007, 88, 1783–1794. [Google Scholar] [CrossRef]
  25. Sun, J.; Xue, M.; Wilson, J.W.; Zawadzki, I.; Ballard, S.P.; Onvlee-Hooimeyer, J.; Joe, P.; Barker, D.M.; Li, P.W.; Golding, B.; et al. Use of NWP for Nowcasting Convective Precipitation: Recent Progress and Challenges. Bull. Am. Meteorol. Soc. 2014, 95, 409–426. [Google Scholar] [CrossRef]
  26. Yano, J.I.; Ziemiański, M.Z.; Cullen, M.; Termonia, P.; Onvlee, J.; Bengtsson, L.; Carrassi, A.; Davy, R.; Deluca, A.; Gray, S.L.; et al. Scientific Challenges of Convective-Scale Numerical Weather Prediction. Bull. Am. Meteorol. Soc. 2018, 99, 699–710. [Google Scholar] [CrossRef]
  27. Papadopoulos, A.; Chronis, T.G.; Anagnostou, E.N. Improving Convective Precipitation Forecasting through Assimilation of Regional Lightning Measurements in a Mesoscale Model. Mon. Weather Rev. 2005, 133, 1961–1977. [Google Scholar] [CrossRef]
  28. Hu, M.; Xue, M. Impact of Configurations of Rapid Intermittent Assimilation of WSR-88D Radar Data for the 8 May 2003 Oklahoma City Tornadic Thunderstorm Case. Mon. Weather Rev. 2007, 135, 507–525. [Google Scholar] [CrossRef]
  29. Benjamin, S.G.; Weygandt, S.S.; Brown, J.M.; Hu, M.; Alexander, C.R.; Smirnova, T.G.; Olson, J.B.; James, E.P.; Dowell, D.C.; Grell, G.A.; et al. A North American Hourly Assimilation and Model Forecast Cycle: The Rapid Refresh. Mon. Weather Rev. 2016, 144, 1669–1694. [Google Scholar] [CrossRef]
  30. Clark, A.J.; Gallus, W.A.; Xue, M.; Kong, F. A Comparison of Precipitation Forecast Skill between Small Convection-Allowing and Large Convection-Parameterizing Ensembles. Weather Forecast. 2009, 24, 1121–1140. [Google Scholar] [CrossRef]
  31. Roberts, B.; Gallo, B.T.; Jirak, I.L.; Clark, A.J. The High Resolution Ensemble Forecast (HREF) system: Applications and Performance for Forecasting Convective Storms. Meteorology 2019. [Google Scholar] [CrossRef]
  32. Bouttier, F.; Marchal, H. Probabilistic thunderstorm forecasting by blending multiple ensembles. Tellus A Dyn. Meteorol. Oceanogr. 2020, 72, 1–19. [Google Scholar] [CrossRef]
  33. Alpay, B.A.; Wanik, D.; Watson, P.; Cerrai, D.; Liang, G.; Anagnostou, E. Dynamic Modeling of Power Outages Caused by Thunderstorms. Forecasting 2020, 2, 151–162. [Google Scholar] [CrossRef]
  34. Sheild, S.A.; Quiring, S.M.; McRoberts, D.B. Development of a Thunderstorm Outage Prediction Model. Ph.D. Thesis, The Ohio State University, Columbus, OH, USA, 2018. [Google Scholar]
  35. Kabir, E.; Guikema, S.D.; Quiring, S.M. Predicting Thunderstorm-Induced Power Outages to Support Utility Restoration. IEEE Trans. Power Syst. 2019, 34, 4370–4381. [Google Scholar] [CrossRef]
  36. Babbage, C. Passages from the Life of a Philosopher; Longman, Green, Longman, Roberts, & Green: London, UK, 1864. [Google Scholar]
  37. De Pondeca, M.S.F.V.; Manikin, G.S.; DiMego, G.; Benjamin, S.G.; Parrish, D.F.; Purser, R.J.; Wu, W.S.; Horel, J.D.; Myrick, D.T.; Lin, Y.; et al. The Real-Time Mesoscale Analysis at NOAA’s National Centers for Environmental Prediction: Current Status and Development. Weather Forecast. 2011, 26, 593–612. [Google Scholar] [CrossRef]
  38. Nelson, B.R.; Prat, O.P.; Seo, D.J.; Habib, E. Assessment and Implications of NCEP Stage IV Quantitative Precipitation Estimates for Product Intercomparisons. Weather Forecast. 2016, 31, 371–394. [Google Scholar] [CrossRef]
  39. NOAA/NWS. RTMA: Real-Time Mesoscale Analysis Data; NOAA/NWS: Washington, DC, USA, 2015.
  40. Hudlow, M.D. Technological Developments in Real-Time Operational Hydrologic Forecasting in the United States. J. Hydrol. 1988, 102, 69–92. [Google Scholar] [CrossRef]
  41. Environmental Modeling Center; National Centers for Environmental Prediction; National Weather Service; NOAA; U.S. Department of Commerce. NCEP North American Mesoscale (NAM) 12 km Analysis; U.S. Department of Commerce: Washington, DC, USA, 2015.
  42. Morrison, H.; Thompson, G.; Tatarskii, V. Impact of Cloud Microphysics on the Development of Trailing Stratiform Precipitation in a Simulated Squall Line: Comparison of One- and Two-Moment Schemes. Mon. Weather Rev. 2009, 137, 991–1007. [Google Scholar] [CrossRef]
  43. Mlawer, E.J.; Taubman, S.J.; Brown, P.D.; Iacono, M.J.; Clough, S.A. Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res. Atmos. 1997, 102, 16663–16682. [Google Scholar] [CrossRef]
  44. Chou, M.; Suarez, M. An Efficient Thermal Infrared Radiation Parameterization for Use in General Circulation Models. NASA Tech. Memo. 1994, 3, 1–85. [Google Scholar]
  45. Jiménez, P.A.; Dudhia, J.; González-Rouco, J.F.; Navarro, J.; Montávez, J.P.; García-Bustamante, E. A Revised Scheme for the WRF Surface Layer Formulation. Mon. Weather Rev. 2012, 140, 898–918. [Google Scholar] [CrossRef]
  46. Tewari, M.; Chen, F.; Wang, W.; Dudhia, J.; LeMone, M.; Mitchell, K.; Ek, M.; Gayno, G.; Wegiel, J.; Cuenca, R. Implementation and verification of the unified NOAH land surface model in the WRF model. In Proceedings of the 20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction, Seattle, WA, USA, 11–15 January 2004; Volume 1115, pp. 2165–2170. [Google Scholar]
  47. Hong, S.; Noh, Y.; Dudhia, J. A New Vertical Diffusion Package with an Explicit Treatment of Entrainment Processes. Mon. Weather Rev. 2006, 134, 2318–2341. [Google Scholar] [CrossRef]
  48. Agostinelli, C.; Lund, U. R Package circular: Circular Statistics (Version 0.4-93); Department of Environmental Sciences, Informatics and Statistics, Ca’ Foscari University: Venice, Italy; Department of Statistics, California Polytechnic State University: San Luis Obispo, CA, USA, 2017; Available online: https://cran.r-project.org/web/packages/circular/circular.pdf (accessed on 2 February 2021).
  49. Bivand, R.; Keitt, T.; Rowlingson, B. rgdal: Bindings for the ’Geospatial’ Data Abstraction Library. R Package Version 1.5-23. 2021. Available online: https://cran.r-project.org/web/packages/rgdal/index.html (accessed on 2 February 2021).
  50. Bivand, R.; Rundel, C. rgeos: Interface to Geometry Engine—Open Source (’GEOS’). R Package Version 0.5-5. 2020. Available online: https://cran.r-project.org/web/packages/rgeos/index.html (accessed on 1 June 2020).
  51. Jin, S.; Homer, C.; Yang, L.; Danielson, P.; Dewitz, J.; Li, C.; Zhu, Z.; Xian, G.; Howard, D. Overall Methodology Design for the United States National Land Cover Database 2016 Products. Remote Sens. 2019, 11, 2971. [Google Scholar] [CrossRef]
  52. Coulston, J.W.; Moisen, G.G.; Wilson, B.T.; Finco, M.V.; Cohen, W.B.; Brewer, C.K. Modeling Percent Tree Canopy Cover: A Pilot Study. Photogramm. Eng. Remote Sens. 2012, 78, 715–727. [Google Scholar] [CrossRef]
  53. Potapov, P.; Li, X.; Hernandez-Serna, A.; Tyukavina, A.; Hansen, M.C.; Kommareddy, A.; Pickens, A.; Turubanova, S.; Tang, H.; Silva, C.E.; et al. Mapping global forest canopy height through integration of GEDI and Landsat data. Remote Sens. Environ. 2021, 253, 112165. [Google Scholar] [CrossRef]
  54. Gesch, D.; Evans, G.; Oimoen, M.; Arundel, S. The National Elevation Dataset; American Society for Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2018; pp. 83–110. [Google Scholar]
  55. Soil Survey Staff, Natural Resources Conservation Service. Soil Survey Geographic (SSURGO) Database. 2010. Available online: https://websoilsurvey.nrcs.usda.gov/ (accessed on 21 August 2020).
  56. Individual Tree Species Parameter Maps. 2015. Available online: https://www.fs.fed.us/foresthealth/applied-sciences/mappingreporting/indiv-tree-parameter-maps.shtml (accessed on 19 March 2021).
  57. Abatzoglou, J.T.; McEvoy, D.J.; Redmond, K.T. The West Wide Drought Tracker: Drought Monitoring at Fine Spatial Scales. Bull. Am. Meteorol. Soc. 2017, 98, 1815–1820. [Google Scholar] [CrossRef]
  58. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2021. [Google Scholar]
  59. Kursa, M.B.; Jankowski, A.; Rudnicki, W.R. Boruta—A System for Feature Selection. Fundam. Inform. 2010, 101, 271–285. [Google Scholar] [CrossRef]
  60. Kursa, M.B.; Rudnicki, W.R. Feature Selection with the Boruta Package. J. Stat. Softw. 2010, 36, 1–13. [Google Scholar] [CrossRef]
  61. Chipman, H.A.; George, E.I.; McCulloch, R.E. BART: Bayesian additive regression trees. Ann. Appl. Stat. 2010, 4, 266–298. [Google Scholar] [CrossRef]
  62. Sparapani, R.; Spanbauer, C.; McCulloch, R. Nonparametric Machine Learning and Efficient Computation with Bayesian Additive Regression Trees: The BART R Package. J. Stat. Softw. 2021, 97, 1–66. [Google Scholar] [CrossRef]
  63. Ardia, D.; Boudt, K.; Carl, P.; Mullen, K.M.; Peterson, B.G. Differential Evolution with DEoptim: An Application to Non-Convex Portfolio Optimization. R J. 2011, 3, 27–34. [Google Scholar] [CrossRef]
  64. Mullen, K.; Ardia, D.; Gil, D.; Windover, D.; Cline, J. DEoptim: An R Package for Global Optimization by Differential Evolution. J. Stat. Softw. 2011, 40, 1–26. [Google Scholar] [CrossRef]
  65. National Centers for Environmental Information. Integrated Surface Data (ISD) Archive. Available online: https://www.ncei.noaa.gov/data/global-hourly/access/ (accessed on 3 March 2021).
  66. Nash, J.; Sutcliffe, J. River flow forecasting through conceptual models part I—A discussion of principles. J. Hydrol. 1970, 10, 282–290. [Google Scholar] [CrossRef]
  67. Roberts, N.M.; Lean, H.W. Scale-Selective Verification of Rainfall Accumulations from High-Resolution Forecasts of Convective Events. Mon. Weather Rev. 2008, 136, 78–97. [Google Scholar] [CrossRef]
  68. Gilleland, E.; Ahijevych, D.A.; Brown, B.G.; Ebert, E.E. Verifying Forecasts Spatially. Bull. Am. Meteorol. Soc. 2010, 91, 1365–1376. [Google Scholar] [CrossRef]
  69. Mittermaier, M.; Roberts, N. Intercomparison of Spatial Forecast Verification Methods: Identifying Skillful Spatial Scales Using the Fractions Skill Score. Weather Forecast. 2010, 25, 343–354. [Google Scholar] [CrossRef]
  70. NCAR Research Applications Laboratory. verification: Weather Forecast Verification Utilities. R Package Version 1.42. 2015. Available online: https://cran.r-project.org/package=verification.
  71. Gilleland, E. SpatialVx: Spatial Forecast Verification. R Package Version 0.8. 2021. Available online: https://cran.r-project.org/package=SpatialVx (accessed on 3 March 2021).
  72. Fisher, A.; Rudin, C.; Dominici, F. All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously. J. Mach. Learn. Res. 2019, 20, 1–81. [Google Scholar]
  73. Biecek, P. DALEX: Explainers for Complex Predictive Models in R. J. Mach. Learn. Res. 2018, 19, 1–5. [Google Scholar]
  74. Lee, T.R.; Buban, M.; Turner, D.D.; Meyers, T.P.; Baker, C.B. Evaluation of the High-Resolution Rapid Refresh (HRRR) Model Using Near-Surface Meteorological and Flux Observations from Northern Alabama. Weather Forecast. 2019, 34, 635–663. [Google Scholar] [CrossRef]
  75. Pichugina, Y.L.; Banta, R.M.; Bonin, T.; Brewer, W.A.; Choukulkar, A.; McCarty, B.J.; Baidar, S.; Draxl, C.; Fernando, H.J.S.; Kenyon, J.; et al. Spatial Variability of Winds and HRRR–NCEP Model Error Statistics at Three Doppler-Lidar Sites in the Wind-Energy Generation Region of the Columbia River Basin. J. Appl. Meteorol. Climatol. 2019, 58, 1633–1656. [Google Scholar] [CrossRef]
  76. Shucksmith, P.E.; Sutherland-Stacey, L.; Austin, G.L. The spatial and temporal sampling errors inherent in low resolution radar estimates of rainfall: Spatial and temporal sampling errors in low resolution radar estimates of rainfall. Meteorol. Appl. 2011, 18, 354–360. [Google Scholar] [CrossRef]
  77. Moreau, E.; Testud, J.; Le Bouar, E. Rainfall spatial variability observed by X-band weather radar and its implication for the accuracy of rainfall estimates. Adv. Water Resour. 2009, 32, 1011–1019. [Google Scholar] [CrossRef]
  78. Zhou, L.; Lin, S.J.; Chen, J.H.; Harris, L.M.; Chen, X.; Rees, S.L. Toward Convective-Scale Prediction within the Next Generation Global Prediction System. Bull. Am. Meteorol. Soc. 2019, 100, 1225–1243. [Google Scholar] [CrossRef]
  79. Leaf Area Index (1 Month—Terra/MODIS). 2017. Available online: https://modis.gsfc.nasa.gov/data/dataprod/mod15.php (accessed on 17 December 2020).
  80. WRF Community. Weather Research and Forecasting (WRF) Model; UCAR/NCAR: Boulder, CO, USA, 2000. [Google Scholar] [CrossRef]
Figure 1. The location of the outage model grid cells by service territory as well as the location of the airport weather stations used in the meteorological analysis.
Figure 2. Point-to-point comparison of the NOAA analysis parameters (RTMA, (Top)) and the WRF simulation parameters (WRF, (Bottom)) versus weather station observations for select variables, describing 24 h thunderstorm events.
Figure 3. Scatterplots of cross-validation predictions versus actual outages for all thunderstorm events for the RTMA-based (red, (left)) and WRF-based (blue, (right)) outage prediction systems.
Figure 4. FSS for all events by territory for the RTMA- and WRF-based outage models with a moderate outage risk threshold (0.111 damage locations), plotted for neighborhood sizes 3 × 3 to 21 × 21 grid cells. The colored lines are FSS values for each event; the black line indicates the average FSS over all events; and the horizontal dark grey line indicates the average FSS_uniform.
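The fractions skill score (FSS) in Figure 4 compares forecast and observed exceedance fields after averaging over square neighborhoods [67]. A minimal sketch of the computation in plain numpy; the function names are illustrative, and the event grids and 0.111 threshold would come from the outage model output:

```python
import numpy as np

def neighborhood_fractions(exceed, n):
    """Fraction of exceedance cells in each n x n neighborhood (zero-padded edges)."""
    padded = np.pad(exceed.astype(float), n // 2)  # default pad mode is constant zeros
    out = np.empty(exceed.shape, dtype=float)
    for i in range(exceed.shape[0]):
        for j in range(exceed.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

def fss(forecast, observed, threshold, n):
    """Fractions skill score for one exceedance threshold and neighborhood size n."""
    pf = neighborhood_fractions(forecast >= threshold, n)
    po = neighborhood_fractions(observed >= threshold, n)
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)  # worst-case (no-overlap) reference
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

def fss_uniform(observed, threshold):
    """Skill of a uniform forecast of the observed base rate (grey line in Figure 4)."""
    return 0.5 + np.mean(observed >= threshold) / 2.0
```

A perfect forecast scores 1; by convention, a forecast is considered skillful at the smallest neighborhood size where its FSS exceeds `fss_uniform`.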
Figure 5. Grouped variable importance as measured by dropout loss (RMSLE) over 10 iterations of permuted groups of variables. The 95% confidence intervals are also shown for both the RTMA-based outage model (red, (left)), and the WRF-based outage model (blue, (right)).
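The grouped importance in Figure 5 follows the model-agnostic permutation approach of Fisher et al. [72] as implemented in DALEX [73]: jointly shuffle one group of predictor columns, re-predict, and record the resulting (dropout) loss. A hedged Python sketch; the model, data, and feature group below are toy stand-ins, not the paper's BART model:

```python
import numpy as np

def rmsle(y_true, y_pred):
    """Root mean squared log error, the dropout loss reported in Figure 5."""
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

def grouped_dropout_loss(predict, X, y, group_cols, n_iter=10, seed=0):
    """Mean RMSLE after jointly permuting the columns in `group_cols`."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_iter):
        Xp = X.copy()
        perm = rng.permutation(len(X))
        Xp[:, group_cols] = X[perm][:, group_cols]  # shuffle the whole group together
        losses.append(rmsle(y, predict(Xp)))
    return float(np.mean(losses))

# Toy example: the "model" uses only column 0, so permuting column 1 costs nothing,
# while permuting column 0 inflates the loss above the zero baseline.
X = np.column_stack([np.linspace(0.0, 9.0, 50), np.ones(50)])
y = X[:, 0]
predict = lambda Xm: Xm[:, 0]
assert grouped_dropout_loss(predict, X, y, [1]) == rmsle(y, predict(X))
assert grouped_dropout_loss(predict, X, y, [0]) > rmsle(y, predict(X))
```

Permuting a group rather than single columns avoids crediting correlated predictors (such as the many wind variables) individually for shared information.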
Table 1. The amount of data available for training the thunderstorm-related outage models.

|                      | CT      | WMA    | EMA    | NH      | UI   | Total   |
|----------------------|---------|--------|--------|---------|------|---------|
| Number of Storms     | 74      | 82     | 69     | 91      | 56   | 372     |
| Territory Grid Cells | 2019    | 638    | 820    | 2128    | 169  | 5774    |
| Total Entries        | 149,406 | 52,316 | 56,580 | 193,648 | 9464 | 461,414 |
Table 2. Details of the WRF simulation configuration.

| Horizontal Resolution      | 2 km                          |
|----------------------------|-------------------------------|
| Vertical Levels            | 51                            |
| Horizontal Grid Scheme     | Arakawa C grid                |
| Nesting                    | One 6 km nested domain        |
| Microphysics Option        | Thompson Graupel Scheme [42]  |
| Longwave Radiation Option  | RRTM Scheme [43]              |
| Shortwave Radiation Option | Goddard Shortwave Scheme [44] |
| Surface-Layer Option       | Revised MM5 Scheme [45]       |
| Land-Surface Option        | Noah Land-Surface Model [46]  |
| Planetary Boundary Layer   | Yonsei Scheme [47]            |
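For concreteness, the Table 2 choices map onto a WRF `namelist.input` roughly as below. The physics option numbers follow recent WRF releases, and the domain block assumes the 2 km grid is nested inside a 6 km parent; both are our reading of the table, not the authors' actual namelist.

```fortran
&domains
 max_dom             = 2,        ! 6 km parent with one 2 km nest (assumed layout)
 dx                  = 6000, 2000,
 dy                  = 6000, 2000,
 e_vert              = 51, 51,   ! vertical levels
/

&physics
 mp_physics          = 8, 8,     ! Thompson graupel microphysics [42]
 ra_lw_physics       = 1, 1,     ! RRTM longwave [43]
 ra_sw_physics       = 2, 2,     ! Goddard shortwave [44]
 sf_sfclay_physics   = 1, 1,     ! revised MM5 surface layer [45]
 sf_surface_physics  = 2, 2,     ! Noah land-surface model [46]
 bl_pbl_physics      = 1, 1,     ! Yonsei University (YSU) PBL [47]
/
```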
Table 3. Error metrics of the event-level performance of the cross-validation of the outage prediction systems.

|      | MdAPE | MAPE | CRMSE | R²   | NSE  |
|------|-------|------|-------|------|------|
| RTMA | 31%   | 46%  | 50    | 0.39 | 0.37 |
| WRF  | 35%   | 50%  | 51    | 0.36 | 0.35 |
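These metrics can be recomputed from paired event-level totals. A sketch using standard definitions: we read CRMSE as centered (bias-removed) RMSE and R² as the squared Pearson correlation, with NSE following Nash and Sutcliffe [66]; the paper's exact conventions may differ.

```python
import numpy as np

def event_metrics(obs, pred):
    """Event-level error metrics as in Table 3 (definitions assumed, see text)."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ape = np.abs(pred - obs) / obs        # absolute percentage errors
    err = pred - obs
    cerr = err - err.mean()               # centered (bias-removed) errors
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    r = np.corrcoef(obs, pred)[0, 1]
    return {
        "MdAPE": np.median(ape),          # median absolute percentage error
        "MAPE": np.mean(ape),             # mean absolute percentage error
        "CRMSE": np.sqrt(np.mean(cerr ** 2)),
        "R2": r ** 2,                     # squared Pearson correlation
        "NSE": 1.0 - ss_res / ss_tot,     # Nash-Sutcliffe efficiency [66]
    }
```

Unlike R², the NSE penalizes both conditional and unconditional bias, which is why the two columns in Table 3 differ slightly.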
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.