Article

Post-Flood Analysis for Damage and Restoration Assessment Using Drone Imagery

Daniel Whitehurst, Kunal Joshi, Kevin Kochersberger and James Weeks
1 Mechanical Engineering, Virginia Tech, Blacksburg, VA 24061, USA
2 Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA
3 Development Monitors, LLC, Arlington, VA 22202, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2022, 14(19), 4952; https://0-doi-org.brum.beds.ac.uk/10.3390/rs14194952
Submission received: 27 June 2022 / Revised: 19 September 2022 / Accepted: 28 September 2022 / Published: 4 October 2022

Abstract

With natural disasters becoming more prevalent in recent years, effective disaster management is increasingly critical. Flooding in particular is an extremely common natural disaster that can cause significant damage to homes and other property. In this article, we examine an area in Hurley, Virginia which suffered a significant flood event in August 2021. A drone was used to capture aerial imagery of the area, which was reconstructed to produce 3-dimensional models, Digital Elevation Models, and stitched orthophotos for flood modeling and damage assessment. Pre-flood Digital Elevation Models and available weather data were used to simulate the flood event in HEC-RAS software. These simulations were validated against measured water heights and found to be highly accurate. After this validation, simulations were performed using Digital Elevation Models collected after the flood, showing that a similar rainfall event on the new terrain would cause even worse flooding, with water depths between 29% and 105% higher. Such simulations could guide recovery efforts as well as aid response to future events. Finally, we perform semantic segmentation on the collected aerial imagery to assess property damage from the flood. While our segmentation of debris needs more work, it has the potential to help determine the extent of damage and aid disaster response. Based on our investigation, the combination of techniques presented in this article has significant potential to aid preparation, response, and recovery efforts for natural disasters.

1. Introduction

Between 1970 and 2019, the number of natural disasters increased by a factor of five, driven by climate change, more extreme weather, and improved reporting [1]. In the 20-year period between 1995 and 2015, flooding was the most common natural disaster by a wide margin [2]. A joint report by the UN Office for Disaster Risk Reduction and the Centre for Research on the Epidemiology of Disasters recorded 3062 natural flood disasters, which accounted for 43% of all recorded events in this period [2,3]. This was brought into the spotlight again recently with a catastrophic flood event at Yellowstone National Park in June 2022 [4,5]. According to the USGS, the Yellowstone flood could be considered a 1-in-500-year event since peak streamflow exceeded the 0.2% (or 1 in 500) annual exceedance probability flood [6]. The extreme nature of the flood underscores an increasing trend in extreme weather events. Due to the significance of flood events, our work focuses on flooding. Specifically, we examine a test case of a devastating flood which occurred in Hurley, Virginia in 2021.
When responding to any disaster, three resources are particularly vital: money, time, and supplies [7]. Depending on circumstances and location, emergency response to a natural disaster may be limited, particularly when multiple simultaneous disasters strain much-needed resources [7]. The first 72 h after a disaster are especially crucial, and response must occur during that time to save lives [8]. After the initial response stage, recovery efforts are necessary to rebuild the impacted community. Preparedness is also vital, since more preparation beforehand improves the response to a disaster [8]. This preparation can occur before any disaster strikes and during recovery and rebuilding efforts after one.
Drones are increasingly used in disaster management and humanitarian aid [9]. Unmanned aerial vehicles (UAVs), more commonly known as drones, are aircraft without any humans onboard. Drones make it possible to quickly survey an area and collect data after a natural disaster, and aerial imagery from drones is widely used to produce highly detailed 3-dimensional models and Digital Elevation Models of terrain. Since managing floods is a complex and difficult task which requires continuous monitoring of specific areas, drones can help keep an area under observation [10]. Based on a review of the literature, the most common drone application in disasters has been mapping for disaster management [9]. The majority of studies have focused on drone-based support of mitigation and recovery activities, while response-related research is comparatively lacking [11]. Based on their comprehensive study of articles on remote sensing of natural hazard-related disasters, Kucharczyk and Hugenholtz recommend additional research on earthquakes, floods, and cyclones/windstorms, along with in-depth damage assessment for recovery, among other areas [11]. Some work has been performed on disaster response and damage assessment using drones, with one case study examining flood damage in Malawi [12,13].
In this work, we examine a specific flood event in southwest Virginia in 2021. Aerial imagery was collected using a drone over portions of the affected area. These data were then used to create point clouds and Digital Elevation Models. Using Digital Elevation Models from before the flood, weather data, and measured water heights, we validate flood simulations of the event. After validating these simulations, we examine how a similar weather event would impact the new terrain, with the goal of using this information to aid recovery and preparedness for future events, which in turn supports disaster response if another natural disaster strikes. Finally, we apply semantic segmentation to the aerial imagery to analyze the damage sustained during the flood event.

2. Materials and Methods

2.1. Test Area

The test area used for this work was a portion of Guesses Fork Road in Hurley, Virginia. Hurley is a small community in Southwest Virginia very near the Kentucky and West Virginia state lines. On 30 August 2021, heavy rainfall caused significant flooding, landslides, and mudslides in Hurley. This resulted in the destruction of over 20 homes and the death of one person [14]. The flooding occurred rapidly, with water rising up and into people’s homes in less than 30 min [15]. Rainfall estimates varied around the area, but over six inches of rain was estimated to have fallen along portions of Guesses Fork Road [16]. Within the first two days of emergency response, emergency personnel estimated they had conducted between 40 and 50 evacuations [17]. In the immediate aftermath of the flooding, officials estimated that it could take a month before electricity was restored and a year before water service would be restored in the affected communities [18]. Despite the devastation in the area, the Federal Emergency Management Agency (FEMA) denied the request for Individual Assistance for those impacted by the flooding [14]. This further highlights the need for rapid assessment techniques which can quantify the magnitude and likely recurrence of natural disasters.

2.2. Test Area Data Collection

To collect imagery of the test area after the flood damage, two visits were made to Hurley: the first in November 2021 and the second in April 2022. On both visits, a DJI Mavic Air 2 drone was used to collect aerial imagery. The left portion of Figure 1 shows a map of Virginia and surrounding states with the location of Hurley marked by a pin; the right portion shows the DJI Mavic Air 2, the drone model used for our data collection. Flights were performed at an altitude of 50 m with the camera set to capture 12-megapixel nadir imagery. During the first visit in November, six flights were flown to collect imagery over the flooded areas. Five additional flights were flown along the same stretch of road during the second visit in April.
For flood analysis of the test area, weather data was needed from the flood event on 30 August 2021, along with terrain data from before the flood. Terrain data for this area was acquired from the Virginia LiDAR Inventory Web Mapping Application [20]. This open-data portal includes LiDAR point clouds and Digital Elevation Models obtained from NOAA (National Oceanic and Atmospheric Administration), USGS (United States Geological Survey), and VGIN (Virginia Geographic Information Network) data portals. For our flood analysis, the Digital Elevation Models were used. These were last updated in 2016, five years before the flood event; as a result, we assume the terrain along the road did not change significantly between when the DEMs were produced and the flood in 2021. A portion of one section of the Digital Elevation Model is shown in Figure 2.
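As an illustration of this first data-preparation step, the following is a minimal sketch of loading one of these DEM tiles with the rasterio library; it assumes a GeoTIFF export from the portal, and the file name is hypothetical rather than part of our pipeline.

```python
# A minimal sketch: inspect a pre-flood DEM tile before handing it to HEC-RAS.
# "hurley_dem_2016.tif" is a hypothetical file name for a portal download.
import rasterio

with rasterio.open("hurley_dem_2016.tif") as dem:
    elevation = dem.read(1)  # first band: the elevation grid
    print(dem.crs, dem.res)  # coordinate reference system and cell size
    print(f"elevation range: {elevation.min():.1f}-{elevation.max():.1f}")
```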
Rainfall data was acquired from the NOAA National Severe Storms Laboratory (NSSL) Multi-Radar/Multi-Sensor System (MRMS) [21]. “The Multiple Radar Multiple Sensor system combines data streams from multiple radars, satellites, surface observations, upper air observations, lightning reports, rain gauges and numerical weather prediction models to produce a suite of decision-support products every two minutes” [22]. Rainfall data is available as Radar-Only and Multi-Sensor QPE (Quantitative Precipitation Estimation) products. Radar-Only QPE values are precipitation accumulations derived from the summation of the Surface Precipitation Rate (SPR) product over specific time intervals [23]. Multi-Sensor QPE uses a combination of gauges and Numerical Weather Prediction (NWP) Quantitative Precipitation Forecasts (QPF) to fill in gaps in areas of poor radar coverage [24]. The Multi-Sensor QPE values were used as the precipitation input for our simulations, since these should be more accurate than radar-only data. The Multi-Sensor QPE data is available at 1 h intervals, so this interval was selected for our simulations. An example of the product viewer containing precipitation data is shown in Figure 3.
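For readers reproducing this step, the sketch below extracts an hourly QPE time series at the study site. It assumes MRMS 1 h Multi-Sensor QPE files in GRIB2 format, readable through xarray's cfgrib engine; the file-name pattern, variable handling, and site coordinates are illustrative assumptions, not taken from our pipeline.

```python
# Sketch: accumulate hourly MRMS Multi-Sensor QPE at one grid cell.
import glob
import xarray as xr

SITE_LAT, SITE_LON = 37.42, -82.03  # approximate Hurley, VA; illustrative only

hourly_mm = []
for path in sorted(glob.glob("MultiSensor_QPE_01H_*.grib2")):  # hypothetical names
    ds = xr.open_dataset(path, engine="cfgrib")
    qpe = ds[list(ds.data_vars)[0]]  # the 1 h accumulation field
    # MRMS grids typically use 0-360 longitudes, hence the modulo.
    cell = qpe.sel(latitude=SITE_LAT, longitude=SITE_LON % 360, method="nearest")
    hourly_mm.append(float(cell))

print(f"Total accumulation over {len(hourly_mm)} h: {sum(hourly_mm):.1f} mm")
```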
The MRMS system also contains streamflow values calculated from the rainfall observations. The Flooded Locations and Simulated Hydrographs Project (FLASH) [26] has produced many flood prediction products, including a maximum streamflow value at each grid point. The streamflow products are computed with three models: CREST, SAC-SMA, and Hydrophobic [27]. For our simulations, we used data from the CREST (Coupled Routing and Excess STorage) distributed hydrological model. CREST was jointly developed by the University of Oklahoma and NASA SERVIR to “simulate the spatial and temporal variation of land surface, and subsurface water fluxes and storages by cell-to-cell simulation” [28]. The CREST flow values are available at 10 min intervals, but hourly values were used to match the rainfall data interval.

2.3. Image Reconstruction Software

After collecting aerial imagery of the test area, these data were processed using OpenDroneMap to produce point clouds, orthorectified imagery, and Digital Elevation Models. OpenDroneMap (ODM) is an open source toolkit for processing aerial imagery [29]. It can turn simple 2D images into classified point clouds, 3D textured models, georeferenced orthorectified imagery, and georeferenced Digital Elevation Models [30]. Structure from Motion (SfM) techniques are utilized to generate the 3-dimensional data from the 2-dimensional imagery. The ODM software is available for Windows, Mac, and Linux and can be run as a native installation or through Docker [30].
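As one illustration of how such a run can be scripted (not the exact commands used in this study), the sketch below submits a flight's images to a NodeODM instance through PyODM, the Python client for OpenDroneMap's processing node; the host, port, image directory, and option values are assumptions.

```python
# Sketch: send one flight's images to a NodeODM instance (assumed to be
# running on localhost:3000) and download the reconstruction outputs.
import glob
from pyodm import Node

node = Node("localhost", 3000)
images = glob.glob("hurley_flight_1/*.JPG")  # hypothetical image directory

# Request surface/terrain models and an orthophoto alongside the point cloud.
task = node.create_task(images, {"dsm": True, "dtm": True,
                                 "orthophoto-resolution": 2})
task.wait_for_completion()
task.download_assets("./odm_outputs")  # orthophoto, DEMs, point cloud, mesh
```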

2.4. Flood Analysis Software

For flood analysis, we use HEC-RAS (Hydrologic Engineering Center’s River Analysis System), freely available software developed by the US Army Corps of Engineers, to perform the simulations. HEC-RAS allows users to perform one-dimensional steady flow and one- and two-dimensional unsteady flow calculations, sediment transport and mobile bed computations, and water temperature and water quality modeling [31]. HEC-RAS has been used and analyzed in many flood mapping studies, including in Italy and Kazakhstan [32,33].

2.5. Workflow

A general workflow for our application of drones to post-flood response and recovery is outlined in Table 1. The first step should begin as soon as possible so that aerial imagery is captured during the critical first 72 h. A drone and a certified pilot are required for this imagery collection portion of the workflow, with the time required depending on the size of the disaster area. Once imagery has been collected, it can be reconstructed to produce 3D models and digital terrain data. With an experienced user, this portion can be completed in a matter of hours; the time also depends on the amount of imagery and the computer hardware used to run OpenDroneMap. The next steps, damage segmentation and flood analysis, can be performed simultaneously if the necessary resources are available. If properly trained models exist, damage segmentation can be performed relatively quickly to understand the true extent of damage; without them, manual inspection of the data is much more time-intensive. Flood analysis using post-disaster imagery can guide restoration efforts and determine future risks of major flooding. If a community keeps updated aerial imagery from before a disaster, this flood modeling analysis can additionally be used to prepare for damage expected during an upcoming flood.

3. Drone Image Reconstruction

3.1. OpenDroneMap Reconstructions

The drone imagery collected during our two trips to Hurley was processed using OpenDroneMap to produce 3D point clouds, georeferenced orthophotos, and Digital Elevation Models. Example 3D textured meshes produced from our drone imagery are shown in Figure 4 and Figure 5. These two figures show 3D views of the area which flooded. Figure 4 shows an area where several buildings were damaged during the flood event; a portion of the stream can be seen to now flow directly into a building near the center of the image, and one of these houses was even moved off its foundation during the flood. Figure 5 shows a different portion of the Guesses Fork Road area, reconstructed from the drone imagery collected during our second trip to Hurley in April. In this portion, while damage is still present, some recovery efforts can be observed as well. For example, a new bridge has been built over the stream near the center of the image; next to it, the previous bridge can be seen at the bottom of the stream.

3.2. Comparison and Interpretation of Visual Data

Figure 6 and Figure 7 show comparisons of satellite imagery from before the flood event to the orthophotos created from the aerial imagery collected with our drone. These two comparisons were selected to highlight some of the damage which occurred in this area during the flooding. In Figure 6, the stream has become much wider due to a noticeable amount of erosion of the land around it. Additionally, the bridges from the road to the homes were destroyed and washed away during the flooding. While the buildings in this area were not destroyed or washed away, the flooding had an observable impact on the terrain surrounding them. Figure 7 shows an example of a location where homes were completely destroyed during the flood. Changes to the stream size in this area are not as dramatic, but multiple buildings and many trees were wiped out.

4. Flood Simulations

4.1. HEC-RAS Flood Event Simulation

Using HEC-RAS, flood simulations were performed for multiple sections across the flooded area. Two-dimensional unsteady flow modeling was performed in HEC-RAS to simulate the flood event which occurred. Since our drone imagery was collected over several small and disconnected sections along the road, flood simulations were performed on the areas corresponding to where the imagery was collected. These small simulation areas required very fine gridding: the 2D mesh for the simulations used 1 m by 1 m cells. Due to this very fine discretization, careful selection and adjustment of the simulation time step was required to ensure a stable numerical analysis. HEC-RAS includes a variable time step option, which uses the Courant number to adjust the time step [34]; maximum and minimum thresholds can be set for the Courant number with this method. For some cases, a stable and accurate solution can be achieved with a Courant number as high as 5, but more rapid changes of depth and velocity require a maximum Courant number closer to 1 [34,35]. A maximum Courant number of 1 was set for our simulations, which resulted in stable solutions even with the fine discretization of the mesh.
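To make the stability constraint concrete: the Courant number is C = v·Δt/Δx, so for a given cell size and flow velocity, the largest stable time step follows directly. The snippet below works through this with a hypothetical flood-wave velocity; it is a worked illustration of the criterion, not HEC-RAS code.

```python
# Worked illustration of the Courant criterion used by HEC-RAS's variable
# time step option: C = v * dt / dx, so dt <= C_max * dx / v.
def max_stable_dt(velocity_m_s: float, cell_size_m: float,
                  courant_max: float = 1.0) -> float:
    """Largest time step (s) keeping the Courant number at or below courant_max."""
    return courant_max * cell_size_m / velocity_m_s

# With 1 m cells and a hypothetical 2 m/s flood-wave velocity,
# C <= 1 requires dt <= 0.5 s.
print(max_stable_dt(velocity_m_s=2.0, cell_size_m=1.0))  # 0.5
```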
The Manning’s roughness coefficient, also known as Manning’s n value, is an important parameter for HEC-RAS simulations. The roughness coefficient can be set for a simulation using land cover data [36]. The USGS National Land Cover Database (NLCD) provides descriptive data for characteristics of the land surface [37]. Its classes include multiple types of forests and crop lands, barren land, open water, and developed land. The developed land classes include open space, low intensity, medium intensity, and high intensity. Developed areas are those containing a mixture of constructed materials and vegetation, and the different developed classes are defined by the percentage of total cover accounted for by impervious surfaces [38]. Much of the development along our area of interest would be considered Low Intensity, where impervious surfaces account for between 20% and 49% of total cover; areas with impervious surfaces accounting for less than 20% are classified as Open Space. The HEC-RAS user’s manual suggests roughness coefficient values between 0.03 and 0.05 for Developed, Open Space and between 0.06 and 0.12 for the Developed, Low Intensity class [36]. Since the land cover of the specific test areas presented in this paper falls at the lower end of the Developed, Low Intensity class, a roughness coefficient value of 0.06 was selected for these areas.
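The mapping from NLCD class to roughness value can be expressed compactly, as in the sketch below. The ranges are those quoted above from the HEC-RAS user's manual [36]; the helper function and its interpolation parameter are our own illustration.

```python
# Sketch: choose Manning's n from an NLCD developed-land class, using the
# ranges from the HEC-RAS user's manual [36]. The helper is illustrative.
NLCD_ROUGHNESS = {
    "Developed, Open Space":    (0.03, 0.05),  # impervious < 20% of cover
    "Developed, Low Intensity": (0.06, 0.12),  # impervious 20-49% of cover
}

def pick_n(nlcd_class: str, position: float = 0.0) -> float:
    """Interpolate within the class range; position=0 gives the low end."""
    lo, hi = NLCD_ROUGHNESS[nlcd_class]
    return lo + position * (hi - lo)

print(pick_n("Developed, Low Intensity"))  # 0.06, the value used in this study
```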
The Multi-Sensor QPE data was used as the precipitation input across the 2D mesh for the simulations. A plot of this precipitation data is shown in Figure 8 for one of the simulations; the plot shows hourly precipitation accumulations over the simulation time period. All of our simulations were run over a 48 h period between 30 August 2021 at 12:00 a.m. UTC and 1 September 2021 at 12:00 a.m. UTC. Rainfall and water flow values were collected hourly across this 48 h period.
The inlet boundary condition for the 2D hydraulic model used a flow hydrograph with data taken from the CREST flow product described above. The outlet boundary condition was set to Normal Depth, a standard option which does not require additional data. The water volume accounting error in HEC-RAS was 0.044% for the first simulation and 0.47% for the second. The water depth output from HEC-RAS for the first simulation is shown in Figure 9. The deepest values occur along the stream, with water depths of more than one meter also spreading into the surrounding land. The location of our ground-truth measurement is marked on the image; the ground-truth measurements are discussed in the following section.

4.2. Flood Water Depth Accuracy

Ground-truth water depth measurements were acquired during our visits using the water lines observed on houses and trees. Three ground-truth measurements were used to verify the accuracy of the flood simulations for two different sections of the area. At each ground-truth location, three measurements were taken within a 0.3 m diameter and averaged to determine the water depth value. Variations among the measurements at each location were only a couple of centimeters, resulting in small standard deviations.
To assess the accuracy of the HEC-RAS flood simulations, multiple measurements were taken of the flood depth based on marks left after the flood. Figure 10 and Figure 11 show the locations of three measurements taken at the flood site, labeled on the orthophotos and satellite imagery. The first measurement location, shown in Figure 10, was at one of the buildings. A large quantity of mud entered this house and covered portions of the floor at depths of over 0.3 m. Using water lines left in the house by the flood, the water depth was measured to be 1.35 m above the ground at this location. The second measurement location is shown in Figure 11; the depth there was measured to be 2.51 m above the streambed, along the stream next to the road in an area where drone imagery was collected. The third location is also shown in Figure 11 on top of satellite imagery; a water depth of 3.4 m was measured where a railroad bridge crosses over Guesses Fork Road.
The water depth accuracy of the simulations is summarized in Table 2. In the first simulation, one water depth measurement was available: location 1, with a measured water depth of 1.35 m above the ground. The HEC-RAS simulation produced a value of 1.32 m, an error of only 2.22%. The second simulation included two measured water depths from this area, locations 2 and 3 from Figure 11; the simulation values produced errors of 8.37% and 2.06% for locations 2 and 3, respectively. These values validated that our simulations accurately represented the water flow of the actual flood event. While one of the locations had an error over 5%, errors within 10% were still deemed acceptable. Some error may be inherent because the pre-flood terrain data was several years old. Although these simulations were accurate, they could potentially be improved by including sediment flow in our HEC-RAS simulations; without a sediment flow element, they do not account for the landslides which occurred during the flood event.
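The percent errors in Table 2 follow directly from the simulated and measured depths; the short snippet below reproduces them as a sanity check.

```python
# Reproduce the percent errors in Table 2: |simulated - measured| / measured.
pairs = {1: (1.32, 1.35), 2: (2.30, 2.51), 3: (3.33, 3.40)}  # (sim, meas) in m
for loc, (sim, meas) in pairs.items():
    print(f"Location {loc}: {abs(sim - meas) / meas * 100:.2f}% error")
# Location 1: 2.22%, Location 2: 8.37%, Location 3: 2.06%
```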

4.3. HEC-RAS Simulation on Post-Flood Environment

After the HEC-RAS simulation of the flood event was validated with the ground-truth measurements, additional simulations were run on the post-flood environment using the same input rainfall data. This allows us to investigate the additional risk to these areas from future floods after restoration of the stream environment. To examine the restoration impacts at our test area in Hurley, we performed flood simulations in HEC-RAS using the Digital Elevation Models produced from drone imagery collected after the flood, which were discussed in Section 3. These simulations used the same precipitation input and flow boundary conditions as the simulations performed on the pre-flood environment, allowing us to examine how the same rainfall event would impact the new terrain compared to the effects of the August flood event. Figure 12 shows the HEC-RAS simulated water depth results using the post-flood Digital Elevation Model of the same area as the simulation in Figure 9.
Table 3 shows a comparison of the simulated flood depth values for the pre-flood and post-flood environments. Water depth values were calculated for measurement locations 1 and 2; measurement location 3 was outside the area we captured with our drone, so we were unable to perform a flood simulation using post-flood terrain there. The flood depth values were calculated relative to static ground locations, which were assumed not to have moved. The first location's water depth was measured from the ground outside a house which did not move during the flood event. The second depth measurement was taken relative to the streambed, which was observed not to have a large amount of debris or sediment buildup in this area. At the first measurement location, the simulated flood depth roughly doubled (a 105% increase) when using the post-flood terrain compared to the pre-flood terrain, while the second measurement location saw a 29.1% increase in water depth.
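The depth changes in Table 3 follow from the two simulated depths at each location; the snippet below reproduces them.

```python
# Reproduce the depth increases in Table 3 relative to the pre-flood simulation.
pre_post = {1: (1.32, 2.70), 2: (2.30, 2.97)}  # simulated depths (m)
for loc, (pre, post) in pre_post.items():
    print(f"Location {loc}: {(post - pre) / pre * 100:.1f}% increase")
# Location 1: 104.5% (the 105% reported in Table 3 after rounding)
# Location 2: 29.1%
```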

5. Damage Analysis Using Semantic Segmentation

The use of computer vision techniques to analyze disaster damage is becoming more popular due to developments in computer vision algorithms and the cost effectiveness of UAV mapping [13,39,40,41,42]. Semantic segmentation is one of the most important and extensively studied tasks in computer vision: a class label is assigned to each pixel in an image. With recent developments in autonomous driving, medical imaging, and face recognition systems, more robust segmentation and object detection models have been developed, especially using deep learning. Convolutional Neural Networks (CNNs) use filter operations to find spatial and temporal dependencies in images and are widely used for object detection, classification, and segmentation. Fully Convolutional Networks (FCNs) [43] replace the fully connected layers in image classification networks with convolutional layers for semantic segmentation. U-Net [44] uses an encoder-decoder network with skip connections to preserve context information. PSPNet [45] uses pyramid pooling over a dilated-convolution backbone to capture global context. DeepLabV3+ [46] combines atrous spatial pyramid pooling with an encoder-decoder architecture.
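To make the comparison concrete, the sketch below instantiates the three architectures evaluated later in this section using the segmentation_models_pytorch library; that library choice is our illustration (the text does not specify the implementation used), with the ResNet34 backbone and 6 output classes matching the setup described in Section 5.1.

```python
# Sketch: the three architectures compared in Section 5.2, built with
# segmentation_models_pytorch (one plausible implementation choice).
import segmentation_models_pytorch as smp

common = dict(encoder_name="resnet34", encoder_weights="imagenet", classes=6)
models = {
    "DeepLabV3+": smp.DeepLabV3Plus(**common),
    "PSPNet":     smp.PSPNet(**common),
    "U-Net":      smp.Unet(**common),
}
```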
Despite all this development in semantic segmentation and computer vision, there is a shortage of low-altitude, high-resolution annotated images for UAV disaster damage analysis. RescueNet [41] contains images of Hurricane Michael, with annotations for 11 classes including road, damaged and undamaged buildings, vehicle, and water. This dataset is a great resource for disaster damage analysis with semantic segmentation. Our work is similar except that we focus more on analyzing debris, for which RescueNet lacks good annotations. We believe that analysis of debris distribution can provide new insight for predicting, mitigating, and responding to disaster damage. ISBDA [42] provides building damage analysis in three categories: slight, severe, and debris; however, this dataset does not have other classes, such as road, water, and vehicles, which are crucial for Search and Rescue (SAR) missions. Other research in this area includes [47], which applied texture analysis to UAV images and showed that HOG (Histogram of Oriented Gradients) filters can be effective for disaster debris identification, and [48], which used models based on Bag-of-Words (BoW) feature representations for damage classification.

5.1. Dataset Description and Training

We created a dataset of 135 images at 3000 × 4000 resolution and annotated them for semantic and instance segmentation using 6 classes: debris, water, building, vegetation, path, and vehicles. Figure 13 shows the pixel distribution for each class in the Hurley dataset. The dataset is split into training, testing, and validation sets containing 75%, 15%, and 10% of the images, respectively.
During training, the data was augmented with vertical flips, horizontal flips, and random shuffling, and the networks were trained on image patches of 384 × 512 pixels. We used focal loss with γ = 4 as the loss function due to its suitability for imbalanced class distributions [49]. The models were trained with a learning rate of 0.001 and a weight decay of 0.0001 using the AdamW (Adam with weight decay) optimizer, which reduces overfitting [50]. We trained on the Hurley data using DeepLabV3+, PSPNet, and U-Net architectures with ResNet34 as the backbone.
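A minimal sketch of this training configuration is shown below. The hyperparameters (γ = 4, learning rate 0.001, weight decay 0.0001) are those stated above; the library choices and loop structure are our illustration, not the study's actual training code.

```python
# Sketch: one training step with the stated hyperparameters, assuming
# segmentation_models_pytorch for the model and loss.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(encoder_name="resnet34", classes=6)
criterion = smp.losses.FocalLoss(mode="multiclass", gamma=4.0)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)

def train_step(images: torch.Tensor, masks: torch.Tensor) -> float:
    """images: (B, 3, 384, 512) float tensor; masks: (B, 384, 512) long tensor."""
    optimizer.zero_grad()
    loss = criterion(model(images), masks)  # logits vs. per-pixel class labels
    loss.backward()
    optimizer.step()
    return loss.item()
```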

5.2. Segmentation Results

The segmentation results for three networks are compared using the mean Intersection over Union (mIoU) metric. Figure 14 shows an example image from our dataset along with our ground truth labels and the label colormap.
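For reference, mean IoU averages the per-class intersection-over-union scores; a minimal helper of our own (not the paper's evaluation code) is sketched below, matching the metric reported in Table 4.

```python
# Sketch: per-class IoU and mIoU over predicted and ground-truth label maps.
import numpy as np

def miou(pred: np.ndarray, target: np.ndarray, num_classes: int = 6):
    """pred, target: integer label arrays of the same shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else np.nan)  # skip absent classes
    return ious, np.nanmean(ious)
```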
From Table 4 and Figure 15, it is observed that DeepLabV3+ is best at detecting debris, with a debris IoU of 19.2%, and has the highest overall mIoU (46.34%) compared to U-Net and PSPNet. All models perform well in detecting water, building, vegetation, and path. The vehicle class includes all categories of vehicles, such as construction vehicles, cars, and recreational vehicles (RVs), and represents less than 1% of all pixel values; as a result, the models struggled with this class in some cases. These results show that debris volume can be observed using semantic segmentation techniques and that there is room for improvement in existing research on disaster damage analysis. Even though the mIoU score for the debris class is lower than that for the other classes, it gives a good representation of debris distribution after the disaster. We believe that debris segmentation is inherently hard, as debris gets mixed with water, sand, buildings, and other classes. It becomes more difficult still because segmentation networks rely heavily on good manual annotations, which are very time-consuming to produce. We will add more images to this dataset in the future, which should improve our segmentation results, and we will categorize building damage per FEMA requirements [51]. This work shows that debris analysis can be helpful for post-disaster SAR missions and restoration, as it reduces the search space to specific areas in the environment. Future work will focus on improving these segmentation results and segmenting 3D point clouds of the Hurley environment.

6. Conclusions

The information gained from HEC-RAS flood simulations can be used in multiple stages of disaster emergency management, including preparedness, response, and recovery. In addition, the image segmentation damage analysis is particularly useful for disaster response and recovery efforts. Through measurements acquired after the flood event in Hurley, we were able to verify the accuracy of 2-dimensional HEC-RAS simulations for modeling flood water depth. Accurate flood event simulations have many applications for managing this type of natural disaster. When a large rainfall event is predicted, forecasted precipitation values or weather data from previous events could be used to simulate the potential impacts of the flood. Knowing these potential impacts and flood water depths can help response crews prepare for the damage which may occur. Due to a lack of aerial imagery from before the flood in our test case, we used openly available terrain models which were several years out of date. While these proved accurate, local communities could potentially improve accuracy further by flying drones regularly and keeping up-to-date aerial imagery for terrain modeling. In addition to preparation and response, HEC-RAS flood simulations can inform and improve recovery efforts after a disaster has occurred. For our test area, the new terrain was found to be more susceptible to damage if the same rainfall event were to occur again; while progress was being made to restore the roads and homes in the area, debris left in the stream could negatively affect the water flow during future rainfall events. Rebuilding efforts could therefore use the flood simulation information to improve the streams and the flow of water through the area, mitigating the impacts of any future floods. By creating flood models based on updated terrain information after a disaster, recovery efforts can also aid preparation and response for potential future flood events. In the future, we plan to incorporate sediment flow into our HEC-RAS flood simulations to more accurately represent the flood event. We also plan to test water flow models to calculate input flow hydrographs rather than sourcing this data online, which could enable improved predictive modeling before an expected flood event and thereby improve response efforts.
The collection of aerial imagery with drones immediately after a natural disaster can greatly assist response efforts in other ways as well. 3D reconstruction software, such as OpenDroneMap, enables the creation of accurate and detailed models of the damage shortly after a natural disaster occurs. In addition to manually reviewing this imagery for response efforts, semantic segmentation can be performed to analyze and quantify the damage which has occurred. While state-of-the-art flood inundation models still often use bare-earth models with low resolutions of between 2 and 5 m, improved quality and resolution are needed for more accurate and reliable modeling [52]. The high-resolution imagery acquired from drones can enable detailed property damage and risk values that could not be obtained from other imagery sources, such as satellites. While our debris segmentation models are still a work in progress, we expect accuracy to improve as more training imagery is added. Additional work on our segmentation models will be performed to improve debris detection, as well as to use 3D point clouds of the post-disaster environment to classify damage.

Author Contributions

Conceptualization, K.K. and J.W.; methodology, K.K.; software, D.W. and K.J.; validation, D.W. and K.J.; formal analysis, D.W. and K.J.; investigation, D.W.; resources, K.K.; data curation, D.W. and K.K.; writing—original draft preparation, D.W. and K.J.; writing—review and editing, K.K. and J.W.; visualization, D.W. and K.J.; supervision, K.K.; project administration, K.K.; funding acquisition, J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by a grant through the Commonwealth Center for Innovation in Autonomous Systems (C2IAS).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

We would like to thank the Virginia Department of Emergency Management (VDEM) and the Virginia Department of Environmental Quality (DEQ) for their support and help with coordinating our data collection.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Weather-Related Disasters Increase over Past 50 Years, Causing More Damage but Fewer Deaths. Available online: https://public.wmo.int/en/media/press-release/weather-related-disasters-increase-over-past-50-years-causing-more-damage-fewer/ (accessed on 3 June 2022).
2. Myers, J. Which Natural Disasters Hit Most Frequently? Available online: https://www.weforum.org/agenda/2016/01/which-natural-disasters-hit-most-frequently (accessed on 3 June 2022).
3. The Human Cost of Weather Related Disasters (1995–2015). Available online: https://www.unisdr.org/2015/docs/climatechange/COP21_WeatherDisastersReport_2015_FINAL.pdf (accessed on 3 June 2022).
4. Catastrophic Flooding in Yellowstone. Available online: https://earthobservatory.nasa.gov/images/150010/catastrophic-flooding-in-yellowstone (accessed on 1 July 2022).
5. Montana City Faces Painful Reality Following Historic Yellowstone Flooding. Available online: https://www.accuweather.com/en/business/montana-city-faces-painful-reality-following-historic-yellowstone-flooding/1206348 (accessed on 1 July 2022).
6. USGS Media Alert: USGS Crews Continue to Measure and Assess Yellowstone River Flood Conditions and Probabilities. Available online: https://www.usgs.gov/news/state-news-release/usgs-media-alert-usgs-crews-continue-measure-and-assess-yellowstone-river (accessed on 1 July 2022).
7. Myers, T. Multiple Disasters Strain Response Systems, Slow Recovery, and Deepen Inequity. Available online: https://www.directrelief.org/2020/10/multiple-disasters-strain-response-slow-recovery-and-worsen-injustice/ (accessed on 5 June 2022).
8. UN OCHA. 5 Essentials for the First 72 h of Disaster Response. Available online: https://medium.com/humanitarian-dispatches/5-essentials-for-the-first-72-hours-of-disaster-response-51746452bc88 (accessed on 5 June 2022).
9. Mohd Daud, S.; Mohd Yusof, M.; Heo, C.; Khoo, L.; Chainchel Singh, M.; Mahmood, M.; Nawawi, H. Applications of drone in disaster management: A scoping review. Sci. Justice 2022, 62, 30–42.
10. Restas, A. Drone Applications for Supporting Disaster Management. World J. Eng. Technol. 2015, 3, 316–321.
11. Kucharczyk, M.; Hugenholtz, C. Remote sensing of natural hazard-related disasters with small drones: Global trends, biases, and research opportunities. Remote Sens. Environ. 2021, 264, 112577.
12. Zwęgliński, T. The Use of Drones in Disaster Aerial Needs Reconnaissance and Damage Assessment—Three-Dimensional Modeling and Orthophoto Map Study. Sustainability 2020, 12, 6080.
13. Wouters, L.; Couasnon, A.; de Ruiter, M.C.; van den Homberg, M.J.C.; Teklesadik, A.; de Moel, H. Improving flood damage assessments in data-scarce areas by retrieval of building characteristics through UAV image segmentation and machine learning—A case study of the 2019 floods in southern Malawi. Nat. Hazards Earth Syst. Sci. 2021, 21, 3199–3218.
14. Moore, M.; Lee, M. FEMA Denies Individual Assistance for Hurley Residences Ravaged by Floods. Available online: https://www.wjhl.com/news/local/fema-denies-individual-assistance-for-hurley-residences-ravaged-by-floods/ (accessed on 25 May 2022).
15. Lee, M.; Marais, B. Flooded Hurley Community Faces Long Road to Recovery as Disaster Relief Continues. Available online: https://www.wjhl.com/news/local/flooded-hurley-community-faces-long-road-to-recovery-as-disaster-relief-continues/ (accessed on 25 May 2022).
16. Heavy Rains Cause Flooding and Landslides in Hurley, Virginia, Rescue Crews in Area. Available online: https://wcyb.com/news/local/flooding-reported-in-hurley-county-supervisor-urges-people-in-area-to-stay-home (accessed on 25 May 2022).
17. Lee, M.; Grosfield, K. More than 20 Buchanan County Homes Destroyed, Dozens Evacuated as Community Braces for More Rain. Available online: https://www.wjhl.com/news/local/more-than-20-buchanan-county-homes-destroyed-dozens-evacuated-as-community-braces-for-more-rain/ (accessed on 25 May 2022).
18. Teague, S. Update: 1 Killed in Buchanan County Floods. Available online: https://www.wjhl.com/news/local/update-1-killed-in-buchanan-county-floods/ (accessed on 25 May 2022).
19. Mavic Air 2—Up Your Game—DJI. Available online: https://www.dji.com/mavic-air-2 (accessed on 15 June 2022).
20. Virginia LiDAR Downloads—Overview. Available online: https://www.arcgis.com/home/item.html?id=1e964be36b454a12a69a3ad0bc1473ce (accessed on 1 June 2022).
21. NSSL Projects: Multi-Radar/Multi-Sensor System (MRMS). Available online: https://www.nssl.noaa.gov/projects/mrms/ (accessed on 3 June 2022).
22. NOAA NSSL: MRMS. Available online: https://www.nssl.noaa.gov/news/factsheets/MRMS_2015.March.16.pdf (accessed on 3 June 2022).
23. QPE—Radar Only—Warning Decision Training Division (WDTD)—Virtual Lab. Available online: https://vlab.noaa.gov/web/wdtd/-/qpe-radar-only?selectedFolder=9234881 (accessed on 3 June 2022).
24. Multi-Sensor QPE—Warning Decision Training Division (WDTD)—Virtual Lab. Available online: https://vlab.noaa.gov/web/wdtd/-/multi-sensor-qpe-1?selectedFolder=9234881 (accessed on 3 June 2022).
25. Operational Product Viewer. Available online: https://mrms.nssl.noaa.gov/qvs/product_viewer/ (accessed on 3 June 2022).
26. FLASH—Flooded Locations and Simulated Hydrographs Project. Available online: https://inside.nssl.noaa.gov/flash/ (accessed on 3 June 2022).
27. Maximum Streamflow—Warning Decision Training Division (WDTD)—Virtual Lab. Available online: https://vlab.noaa.gov/web/wdtd/-/maximum-streamflow?selectedFolder=2190208 (accessed on 3 June 2022).
28. Wang, J.; Hong, Y.; Li, L.; Gourley, J.; Khan, S.; Yilmaz, K.; Adler, R.; Policelli, F.; Habib, S.; Irwin, D.; et al. The Coupled Routing and Excess STorage (CREST) distributed hydrological model. Hydrol. Sci. J. 2011, 56, 84–98.
29. Drone Mapping Software—OpenDroneMap. Available online: https://www.opendronemap.org/ (accessed on 20 May 2022).
30. ODM—A Command Line Toolkit to Generate Maps, Point Clouds, 3D Models and DEMs from Drone, Balloon or Kite Images. Available online: https://github.com/OpenDroneMap/ODM (accessed on 20 May 2022).
31. HEC-RAS. Available online: https://www.hec.usace.army.mil/software/hec-ras/ (accessed on 20 May 2022).
32. Costabile, P.; Costanzo, C.; Ferraro, D.; Macchione, F.; Petaccia, G. Performances of the New HEC-RAS Version 5 for 2-D Hydrodynamic-Based Rainfall-Runoff Simulations at Basin Scale: Comparison with a State-of-the-Art Model. Water 2020, 12, 2326.
33. Ongdas, N.; Akiyanova, F.; Karakulov, Y.; Muratbayeva, A.; Zinabdin, N. Application of HEC-RAS (2D) for Flood Hazard Maps Generation for Yesil (Ishim) River in Kazakhstan. Water 2020, 12, 2672.
34. Variable Time Step Capabilities. Available online: https://www.hec.usace.army.mil/confluence/rasdocs/r2dum/latest/running-a-model-with-2d-flow-areas/variable-time-step-capabilities (accessed on 30 July 2022).
35. Selecting an Appropriate Grid Size and Time Step. Available online: https://www.hec.usace.army.mil/confluence/rasdocs/r2dum/latest/running-a-model-with-2d-flow-areas/selecting-an-appropriate-grid-size-and-time-step (accessed on 30 July 2022).
36. Creating Land Cover, Manning’s n Values, and % Impervious Layers. Available online: https://www.hec.usace.army.mil/confluence/rasdocs/r2dum/latest/developing-a-terrain-model-and-geospatial-layers/creating-land-cover-mannings-n-values-and-impervious-layers (accessed on 30 July 2022).
37. Homer, C.; Fry, J.; Barnes, C. The National Land Cover Database; U.S. Geological Survey Fact Sheet 2012–3020. Available online: https://pubs.usgs.gov/fs/2012/3020/fs2012-3020.pdf (accessed on 30 July 2022).
38. National Land Cover Database Class Legend and Description. Available online: https://www.mrlc.gov/data/legends/national-land-cover-database-class-legend-and-description (accessed on 30 July 2022).
39. Xia, L.; Zhang, R.; Chen, L.; Li, L.; Yi, T.; Wen, Y.; Ding, C.; Xie, C. Evaluation of Deep Learning Segmentation Models for Detection of Pine Wilt Disease in Unmanned Aerial Vehicle Images. Remote Sens. 2021, 13, 3584.
40. Pi, Y.; Nath, N.; Behzadan, A. Detection and Semantic Segmentation of Disaster Damage in UAV Footage. J. Comput. Civ. Eng. 2021, 35, 04020063.
41. Chowdhury, T.; Murphy, R.; Rahnemoonfar, M. RescueNet: A High Resolution UAV Semantic Segmentation Benchmark Dataset for Natural Disaster Damage Assessment. arXiv 2022, arXiv:2202.12361.
42. Zhu, X.; Liang, J.; Hauptmann, A. MSNet: A Multilevel Instance Segmentation Network for Natural Disaster Damage Assessment in Aerial Videos. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Arlington, VA, USA, 5–9 January 2021.
43. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015.
44. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015.
45. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
46. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
47. Ghaffarian, S.; Kerle, N. Towards post-disaster debris identification for precise damage and recovery assessments from UAV and satellite images. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 297–302.
48. Vetrivel, A.; Gerke, M.; Kerle, N.; Vosselman, G. Identification of Structurally Damaged Areas in Airborne Oblique Images Using a Visual-Bag-of-Words Approach. Remote Sens. 2016, 8, 231.
49. Lin, T.Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007.
50. Loshchilov, I.; Hutter, F. Decoupled Weight Decay Regularization. arXiv 2017, arXiv:1711.05101.
51. FEMA Preliminary Damage Assessment Guide. Available online: https://www.fema.gov/disaster/how-declared/preliminary-damage-assessments/guide (accessed on 15 June 2022).
52. Backes, D.; Schumann, G.; Teferle, F.; Boehm, J. Towards a High-Resolution Drone-Based 3D Mapping Dataset to Optimise Flood Hazard Modelling. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2019, XLII-2/W13, 181–187.
Figure 1. A map with a pin marking the location of Hurley, VA is shown on the left. The right portion shows a DJI Mavic Air 2 drone [19], which was used to capture imagery at the test area in Hurley.
Figure 2. Digital Elevation Model of a small section of Guesses Fork Road in Hurley, Virginia.
Figure 3. Example Multi-Sensor Quantitative Precipitation Estimation data for the Guesses Fork Road area of Hurley, Virginia, displayed in the Operational Product Viewer [25].
Figure 4. 3D reconstruction of a portion of Guesses Fork Road in Hurley, Virginia, showing an area where several buildings were damaged during the flood.
Figure 5. 3D reconstruction of another portion of Guesses Fork Road in Hurley, Virginia, produced from imagery collected during the second visit in April 2022.
Figure 6. A comparison of pre-flood satellite imagery to post-flood imagery. The image on the left shows Google Maps satellite imagery. The image on the right shows the post-flood orthophoto overlaid on the satellite image in GIS (Geographic Information System) software.
Figure 7. A comparison of pre-flood satellite imagery to post-flood imagery. The image on the left shows Google Maps satellite imagery. The image on the right shows the post-flood orthophoto overlaid on the satellite image in GIS software.
Figure 8. Multi-Sensor Quantitative Precipitation Estimation data for one portion of Guesses Fork Road. The line shows hourly precipitation accumulations for a 48 h time period.
Figure 9. HEC-RAS simulated water depth values using the Digital Elevation Models from the Virginia LiDAR dataset. This represents the terrain before the flooding event occurred. The first ground-truth measurement location is marked by the label ’1’.
Figure 10. The first flood depth measurement location is marked with the label ’1’ on an orthophoto overlaid on satellite imagery. This measurement was taken at one of the houses along Guesses Fork Road.
Figure 11. The second and third flood depth measurement locations marked on an orthophoto overlaid on satellite imagery. The second measurement, labeled ’2’, was taken along the stream, next to the road. The third measurement, labeled ’3’, was taken at the railroad bridge at the end of the road.
Figure 12. HEC-RAS simulated water depth values using a post-flood environment Digital Elevation Model. The ground-truth measurement location is marked with the label ’1’. This is the same area as the simulation in Figure 9.
Figure 13. Distribution of pixels for each class in the Hurley dataset.
Figure 14. Original image and ground truth labels.
Figure 15. Semantic segmentation results comparison of DeepLabV3+, PSPNet and U-Net.
Table 1. General workflow for our flood response and recovery tools.

Step | Task | Resources | Time
1 | Post-disaster Aerial Imagery Collection | Drone, Pilot | 1 day
2 | Reconstruct Aerial Imagery | Drone Imagery, OpenDroneMap, User | 2–5 h
3 | Damage Segmentation | Drone Imagery, Trained Segmentation Models, User | 1–3+ days
4 | Post-disaster Flood Analysis | Terrain Data, Precipitation Data, User | 1–3+ days
Table 2. A comparison of the water depth from the HEC-RAS simulation and the measured values during our visit to Hurley.

Measurement Location | Simulated Water Depth (m) | Measured Water Depth (m) | Error (%)
1 | 1.32 | 1.35 | 2.22
2 | 2.30 | 2.51 | 8.37
3 | 3.33 | 3.40 | 2.06
Table 3. A comparison of the water depth from the HEC-RAS simulation using pre-flood and post-flood terrain.

Measurement Location | Pre-Flood Simulated Water Depth (m) | Post-Flood Simulated Water Depth (m) | Depth Change (%)
1 | 1.32 | 2.70 | 105
2 | 2.30 | 2.97 | 29.1
Table 4. Mean IoU results for each class on the test set.

Network | Debris | Water | Building | Vegetation | Path | Vehicle | mIoU (%)
DeepLabV3+ | 19.2 | 48.4 | 57.11 | 63.37 | 46.04 | 22.9 | 46.34
PSPNet | 13.79 | 42.50 | 61.91 | 58.15 | 37.71 | 20.58 | 41.74
U-Net | 16.05 | 37.78 | 56.87 | 58.69 | 43.24 | 26.2 | 43.53
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
