Article

Unmanned Aerial Vehicles for Wildland Fires: Sensing, Perception, Cooperation and Assistance

1 Perception, Robotics, and Intelligent Machines Research Group (PRIME), Department of Computer Science, Université de Moncton, 18 Antonine-Maillet Ave, Moncton, NB E1A 3E9, Canada
2 Electronics Engineering Department, Universidad Técnica Federico Santa María, Valparaíso 2340000, Chile
* Author to whom correspondence should be addressed.
Submission received: 3 February 2021 / Revised: 15 February 2021 / Accepted: 16 February 2021 / Published: 22 February 2021
(This article belongs to the Special Issue Feature Papers of Drones)

Abstract

Wildfires represent a significant natural risk causing economic losses, human deaths and environmental damage. In recent years, the world has seen an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in the area of efficient data collection and fire characterization within small-scale environments. However, wildland fires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, unmanned aerial vehicles (UAVs) and unmanned aerial systems (UASs) have been proposed. UAVs have proven useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies and task planning. They can provide a low-cost alternative for the prevention, detection and real-time support of firefighting. In this paper, previous works related to the use of UAVs in wildland fires are reviewed. Onboard sensor instruments, fire perception algorithms and coordination strategies are considered. In addition, some of the recent frameworks proposing the use of both aerial vehicles and unmanned ground vehicles (UGVs) for more efficient wildland firefighting at a larger scale are presented.

1. Introduction

Wildland fires are an important threat in rural and protected areas. Their control and mitigation are difficult as they can quickly spread to their surroundings, potentially burning large land areas and getting close to urban areas and cities. The occurrence of wildland fires results in substantial costs to the economy, ecosystems and climate [1]. Moreover, their frequency is on the rise. In fact, there has been an increase in the intensity and frequency of wildland fires in comparison to the past 10,000 years [2]. In the western U.S. alone, wildland fires have increased by 400% over the last decades [3,4]. In 2018, 8.8 million acres (35,612.34 km2) were burned by more than 58,083 wildland fires in the U.S. [5]. In Northern California, a single fire, known as the “Camp Fire”, ended up killing 85 people. This fire was the most destructive in California history, burning 153,336 acres (620.53 km2) and destroying 18,733 structures. Losses were estimated at $16.5 billion [3]. Experts estimate that wildland fires will increase in the coming years, mainly as a result of climate change [6].
With wildland fires being a multifaceted issue, many different elements are relevant to the efforts to reduce their impact. Aspects such as meteorology, drought monitoring and vegetation status monitoring can help with the prevention of and preparation for wildland fires. Other aspects, such as fire suppression actions and post-fire recovery strategies, must also be taken into account after a fire appears. Many of these aspects have been studied with unmanned aerial vehicles (UAVs). However, in the literature, two elements seem more prominent in relation to UAVs. The first is the time span between the start of a fire and the arrival of firefighters. This response time needs to be reduced to a minimum in order to decrease the chances of the fire spreading to unmanageable levels. The second key element is the evaluation of the extent of the event and the monitoring of the emergency response. As manual wildland fire assessment is rendered difficult by several factors (e.g., limited visibility), this aspect must be considered in order to elaborate better fighting strategies. These two key elements can only be properly addressed through the development of reliable and efficient systems for early-stage fire detection and monitoring. This need has drawn growing interest from the research community and led to a large number of publications on the subject.
Remote sensing has been widely researched in the field as it allows the observation of wildland fire events without unnecessarily exposing humans to dangerous activities. For instance, satellite images have been used to report fire risks [7] and to detect active fires [8,9]. Wireless sensor networks (WSNs) have also been proposed for wildland fire detection [10], monitoring [11] and risk assessment [12]. However, both types of systems have practical limitations. Satellite imagery has limited resolution. Therefore, the data relevant to an area are often averaged and constrained to a single pixel, making it difficult to detect small fires [13]. Furthermore, satellites have limited ground coverage and need a significant amount of time before they can resurvey the same region. Limited precision and the lack of real-time data reporting therefore render satellite imagery unsuitable for continuous monitoring. As for WSNs, they operate as an infrastructure that needs to be deployed beforehand. As the sensors are installed in the forest, their coverage and resolution are proportional to the investment made in their acquisition and deployment. Moreover, in the event of a fire, the sensors are destroyed, leading to additional replacement costs. Maintenance difficulties, the lack of power independence and the fact that they are not scalable due to their static nature are all factors known to limit their coverage and effectiveness [14]. As a result of these systems’ shortcomings, unmanned aerial vehicles (UAVs) have been proposed as a more convenient technology for this task. Their maneuverability, autonomy, easy deployment and relatively low cost are all attributes that make UAVs the technology of choice for future wildland fire management efforts.
UAV technologies have seen an important progression in the last decade and they are now used in a wide range of applications. UAVs have become smaller and more affordable, and they now have better computation capabilities than in the past, making them reliable tools for remote sensing missions in hostile environments [15]. Furthermore, UAVs can fly or hover over specific zones to retrieve relevant data in real time with cameras or other airborne sensors. As a result, research has shown their benefits for the surveillance and monitoring of wildland fires as well as for tasks related to post-fire damage evaluation [16,17,18,19,20]. Additionally, UAVs have exhibited a positive economic balance in favor of their use in wildland fire emergencies [21,22]. This makes UAVs both a practical and an economical solution. Therefore, research efforts have been oriented towards the development of frameworks and techniques using UAVs with the goal of delivering optimal fire detection, coverage and firefighting.
This paper summarizes the literature pertaining to the use of UAVs in the context of wildland fires. Research in this area revolves predominantly around fire detection and monitoring; therefore, the core of this review concentrates on technologies and approaches aimed at tackling these challenges. This paper also touches on other subjects when relevant, such as fire prognosis and firefighting, but less extensively, as fewer works are available on these subjects in the literature. The only other related reviews believed to exist are the works of Yuan et al. [19] and Bailon-Ruiz and Lacroix [23]. Yuan et al. [19] touch on subjects such as UAV wildland fire monitoring, detection, fighting, diagnosis and prognosis, image vibration elimination and cooperative control of UAVs. While the subject of this work overlaps with ours, it was performed 5 years ago, and a lot of research has been produced on the subject since then. In fact, most of the papers reviewed here were published in 2015 or after and are not present in Yuan et al. [19]; this work is therefore much more current. Bailon-Ruiz and Lacroix [23] was published in 2020 and is more recent. The authors discuss two components of the field of UAV wildfire remote sensing: system architecture (single UAV or multiple UAVs) and autonomy level. The reviewed works are characterized by similar attributes (mission types, decision level, collaboration level, fielded) and by unique attributes such as information processing and airframe, while this paper analyzes unique attributes such as sensing mode and coordination. Attributes such as information processing and airframe indicate that Bailon-Ruiz and Lacroix [23] put more focus on the type of UAV and the software that runs on it, while this paper focuses on sensing and communication. The most notable differences between the two works are the depth of analysis of the reviewed works and the extent of the reviewed literature. While Bailon-Ruiz and Lacroix [23] discuss system architecture and autonomy level only, this paper discusses these topics as well as sensing instruments, fire detection and segmentation, available fire datasets, fire geolocation and modeling, and UAV-unmanned ground vehicle (UGV) systems for wildfires. This paper also reviews more recent works (16 vs. 10 published in 2015 or after) and more works in total (27 vs. 19), and its reference count is more than three times higher (121 vs. 35), reflecting a more in-depth discussion of the concepts related to the reviewed works. Following these observations, it is believed that this paper is a significant and very relevant contribution to the field.
The final goal of this review is to provide insight into the field towards the development of cooperative autonomous systems for wildland fires. Observations made after many years in the field indicate that the research community has provided many pieces of the solution to the wildland fire problem. However, these pieces, especially recent ones, often fail to come together in a unified framework to form a multifaceted solution to the underlying issue. A lot more could be accomplished by combining fire detection, monitoring, prognosis and firefighting under the same system. Therefore, this paper reviews fire assistance components, sensing modalities, fire perception approaches, relevant datasets and UAV/UGV coordination and cooperation strategies. In fact, this paper’s review approach is to break the reviewed works apart into these categories instead of discussing all the aspects of a reviewed work in the same paragraph. The idea is to bring existing approaches to light in such a way that it will be easier in the future to combine them into more complete systems instead of seeing them as individual systems. These subjects lead to the last section of this paper, where cooperative autonomous systems are discussed and where all previously discussed technologies come together under the umbrella of a single framework.

2. Fire Assistance

Remote sensing with aerial systems presents multiple advantages in the context of emergency assistance. Their high maneuverability allows them to dynamically survey a region, follow a defined path or navigate autonomously. The wide range of sensors that can be loaded onboard allows the capture of important data, which can be used to monitor the situation of interest and plan an emergency response. The ability to remotely control UAVs helps reduce the risk for humans and remove them from life-threatening tasks. The automation of maneuvers, planning and other mission-related tasks through a computer interface improves distant surveillance and monitoring. Advances in these aspects have a direct impact on firefighting resource management.
UAV fire assistance systems in the literature can be characterized using four attributes: sensing modalities and instruments, type of task performed, coordination strategies with multiple UAVs or with the ground control station (GCS), and the approach to experimental validation. Figure 1 is a visual representation of these characteristics and their implementation in the reviewed works. These components are designed to perform one or more tasks related to fire emergencies. Within the reviewed works, the most prevalent tasks are fire detection and monitoring. Fire prognosis and firefighting are also present in some works but have received less interest from researchers. Fire detection and monitoring is based on recognition techniques, a field of research that has seen significant advances in the last decades. Meanwhile, fire prognosis and fighting have practical limitations that hinder research on the subject, hence the imbalance in research interest. Prognosis requires complex mathematical models that must be fed with data that can be difficult to acquire in real time and in unknown environments. Firefighting, on the other hand, requires expensive combat equipment that is even more expensive for large wildland fires. Moreover, close proximity to fires can pose a significant risk to the vehicle's integrity and lead to its loss. However, some initial research has been done to design UAVs capable of fighting fires [24,25,26], and more recently, some drone manufacturers have stepped in to tackle this problem as well [27]. It is clear that more work remains to be done for these vehicles to be affordable and technically viable.
One key component of an airborne fire assistance system is the type of UAV used. UAVs have different sizes, maneuverability and endurance capacities. These characteristics in themselves have a strong influence on the overall architecture of the system. There is a wide selection of aerial systems, ranging from large UAVs with long endurance and high processing capabilities to small UAVs with short flight times and limited processing capabilities [28,29]. Large vehicles are expensive but have a higher payload capacity and can carry more sensors and other instruments. On the other hand, smaller vehicles are more affordable but have a limited payload. The instruments onboard the vehicles vary between the reviewed systems, but some are essential to navigation and localization and are therefore found in almost all UAVs. Global Navigation Satellite System (GNSS) and Inertial Navigation System (INS) sensors fall into this category. Furthermore, almost all of the vehicles in the reviewed works have at least one kind of imagery sensor used for different purposes, including fire perception. Temperature sensors are also present in some of the proposed fire assistance systems but, as shown in Section 3, they are less common.
Sensor measurements are the inputs of fire perception algorithms that process the data to detect the presence of the fire. The processing can be performed either onboard the UAV or by a computer located at a GCS. Fire perception can also, in some cases, be performed by a human operator inspecting the data from a GCS. It seems that a lot of efforts in research are devoted to the automation of fire perception and the optimization of the processing while at the same time preserving the accuracy of the overall system. Computer vision and machine learning techniques are commonly used for this purpose.
The last component of fire assistance systems is the coordination strategy, which provides the framework for the deployment of flight missions. Surveillance missions are usually planned beforehand and aim to search wide areas, prioritizing areas with higher fire risks. These missions can be accomplished by humans manually operating UAVs or autonomously. The coordination strategy in itself becomes more critical during the monitoring of a fire's propagation, as it is necessary to adapt the flight plan to the fire spread. This is even more relevant if there are multiple UAVs collaborating on the mission during a fire emergency. For this purpose, multiple coordination strategies have been proposed in the literature.
For example, a UAV could hover near a fire spot and alert the rest of the fleet to proceed with fire confirmation [30]. More complex planning is also possible, by requiring a consensus on the task to be performed by each unit [31] or by flying in a specific formation around the fire perimeter [32]. In both cases, a concrete description of the task and the autonomous decision scheme must be defined for the system to be effective. Section 7 gives more details about the coordination strategies using a single or multiple UAVs.
Table 1, Table 2 and Table 3 present an overview of the reviewed fire assistance systems. Table 1 contains the year of publication and the validation process used by the authors. Table 2 presents the sensing modalities used to perform fire perception and the tasks performed. Note that some of the works do not specify sensing instruments; their authors assume that the necessary instruments are available onboard the UAV. Table 3 contains the level of autonomy, the organization of the system and the coordination strategy. Note that in some works the system is only theorized, many assumptions are made and some information might not be specified when it is not relevant to the central subject of the work. This is especially the case with works validated in simulations, which do not always define a specific hardware platform.

3. Sensing Instruments

Sensors provide the necessary data for navigation and for fire monitoring and firefighting assistance. In outdoor scenarios, GNSS and INS provide real-time UAV localization. They are also used to georeference the captured images, thus allowing geographical mapping of fires. While these sensors are of interest for localizing fires, the following section will instead focus on sensors that are able to detect fires. Fires have specific signatures that can be composed of different elements such as heat, flickering, motion, brightness, smoke and bio-products [64]. These elements can be measured using suitable sensing instruments. Cameras are the sensing instruments that offer the most versatility in their measurements. Visual and infrared (IR) sensors onboard UAVs can be used to capture a rich amount of information. In relation to cameras, Table 4 provides a list of the spectral bands used in the literature reviewed in this section. The table is provided in the hope that it will help researchers identify pertinent spectral bands for their application or identify areas of the spectrum that need more attention in future works.

3.1. Infrared Spectrum

At room temperature, the radiation peak of matter is located within the thermal region of the infrared band, which ranges from 0.7 µm to 1000 µm. Specialized sensors are available and can capture images in different sub-bands of the IR spectrum. Wildfire temperatures can be as high as 1000 °C (1832 °F), leading to a peak radiation in the mid-wave infrared (MWIR) sub-band [64,65]. Therefore, a sensor operating in the MWIR spectral band is best suited for fire perception. However, until recently, the form factor of MWIR sensors and their cost limited their use on low-cost small and medium UAVs [66]. To overcome these restrictions on smaller aerial vehicles, recent fire detection systems are still using NIR, SWIR or LWIR sensors. The use of these sub-bands is possible because the higher temperature of fires also shifts the distribution of the object's radiation towards shorter wavelengths. Therefore, it is not necessary to use MWIR sensors directly, as the effect of the peak can be observed in these other bands as well. However, a disadvantage of using NIR and SWIR is that objects under sunlight often reflect radiation in these sub-bands, creating false positives. In such conditions, fire hot spots remain detectable, but their contrast is reduced during daytime flights [64].
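These statements can be checked with Wien's displacement law, which relates a blackbody's peak emission wavelength to its temperature. The back-of-the-envelope figures below use the standard displacement constant and illustrative temperatures:

```latex
% Wien's displacement law: peak emission wavelength vs. temperature
\lambda_{\mathrm{peak}} = \frac{b}{T}, \qquad b \approx 2898~\mu\mathrm{m\,K}

% Background at room temperature (T \approx 300~\mathrm{K}):
\lambda_{\mathrm{peak}} \approx 2898 / 300 \approx 9.7~\mu\mathrm{m} \quad \text{(LWIR)}

% Hot flame front (T \approx 1273~\mathrm{K}, i.e., 1000~^{\circ}\mathrm{C}):
\lambda_{\mathrm{peak}} \approx 2898 / 1273 \approx 2.3~\mu\mathrm{m} \quad \text{(near the SWIR/MWIR boundary)}
```

This shift of the peak towards shorter wavelengths with rising temperature is what makes NIR, SWIR and LWIR sensors viable substitutes for MWIR sensors.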
These characteristics aside, IR sensors remain the most commonly used sensors in fire assistance systems due to their ability to detect heat. Bradley and Taylor [38], Casbeer et al. [32], Hinkley and Zajkowski [18], Kumar et al. [39] and Yuan et al. [49] are among the authors who have proposed methods based solely on the IR spectrum (see Table 2).

3.2. Visible Spectrum

Visible spectrum cameras are widely available and commonly used in various applications. They come in a wide variety of resolutions, form factors and costs. Their versatility offers a valuable alternative in wildland fire research from both technical and commercial perspectives. Moreover, the ever-continuing reduction in visible camera size and weight makes them perfect candidates for UAVs.

Data provided by these sensors are images in grayscale or RGB format. This allows the development of computer vision techniques using color, shape, temporal changes and motion in images or sequences of images. Some of these vision-based techniques are presented in Section 4. Although they are versatile and widely available, visible light sensors must be carefully selected for night-time operations, as some sensors perform poorly in low-light conditions. Despite some of their limitations, they equip almost all UAVs and make them good candidates for wildland fire studies.
Yuan et al. [50,51,52,53], Sun et al. [48] and Zhao et al. [56] are among the authors that propose systems that rely only on the use of visible spectrum cameras (see Table 2).

3.3. Multispectral Cameras

Using each spectral band alone comes with its limitations. To tackle these limitations, some authors propose the use of multiple cameras, combining multiple spectra. This allows the use of data fusion techniques to increase the accuracy of fire detection in complex situations and under different lighting conditions. Esposito et al. [67] developed a multispectral camera operating in the LWIR, NIR and visible spectrum mounted on a UAV. In NASA Dryden’s Ikhana UAS project [17,68], a Predator B unmanned aircraft system adapted for civilian missions was built to carry a multispectral sensor operating in 16 different bands from the visible to the LWIR spectrum. Despite their interesting characteristics, in both cases, the weight of these combined sensors limited their implementation to large airborne platforms only. To address this problem, other alternatives combine smaller sensors such as visible spectrum sensors and IR sensors. Martínez-de Dios et al. [30] used this approach to capture and project the IR data onto visible images. This generated a superposition of the data, leading to pixels being represented with four intensity values: red, green, blue and IR. The authors report improvements in fire detection with mixed segmentation techniques that make use of the four channel values.

3.4. Other Sensors

Various sensors other than cameras have also been proposed to detect and confirm the presence of fires. Some authors proposed the use of chemical sensors, which can detect concentrations of hazardous compounds [43]. Spectrometry is another approach that can be used to detect the characteristics of burned vegetation and confirm a fire [64]. Again, in both cases, the size of the sensors seems to limit their use.
Temperature sensors have also been used by Belbachir et al. [42,55] to generate heat maps and detect/locate fires. Lin et al. [54,63] also theorized the use of temperature sensors to estimate a fire contour and rate of spread in the context of fire modeling. Most of the authors use temperature sensors in the context of a simulation and therefore assume their availability without referring to real hardware. However, Wardihani et al. [55] performed a real-world validation of their proposed solution and successfully demonstrated the use of a 2 × 2 pixel resolution non-contact infrared sensor with a field of view of five degrees to measure temperatures. While interesting, these sensors are limited in comparison to IR cameras, which can provide richer data. This is reflected in the reviewed works, whose reported results are more limited than those obtained with IR cameras.

4. Fire Detection and Segmentation

Research has shown the effectiveness of UAVs as remote sensing tools in firefighting scenarios [17,18,43]. They are very useful even in simple tasks such as observing the fire from a static position and streaming the video sequence to human operators. This simple use case already allows firefighters to have an aerial view of the spreading fire and to plan containment measures. However, single manually controlled UAVs, even if they are useful for small emergencies, do not scale up to large scenarios. Therefore, the automation of fire detection and monitoring can help deliver optimal coverage of the fire area with the help of multiple UAVs and with less human intervention. Furthermore, the gathered data can later be processed to analyze the fire, estimate its Rate of Spread (ROS) [69] or volume [40], or perform post-fire damage evaluation [17].
To perform fire-related tasks autonomously, systems must address different subtasks such as fire geolocation, fire modeling and even path planning and coordination between UAVs. For that purpose, sensor data are often initially processed to detect fire and extract fire-related measures. The derived information is then passed on to the different subsystems. For fire detection, authors are usually able to directly extract fire-like pixels based on color cues or IR intensities and do not require further analysis. However, monitoring tasks usually require further analysis to estimate the fire perimeter or burned areas. In that context, computed measures (e.g., segmentation) are provided as input to fire models to estimate the fire propagation over time.
This section reviews some fire detection and segmentation techniques found in the literature.

4.1. Fire Segmentation

Fire segmentation is the process of extracting pixels corresponding to fire in an image. The criteria by which a pixel is selected vary from one method to another. The selection criteria are also the main factor affecting the accuracy of the detection. In general, fire segmentation uses the pixel values of a visual spectrum image (e.g., color space segmentation) or the intensities of an IR image. Motion segmentation can also be used to extract the fire using its movement over a sequence of images.

4.1.1. Color Segmentation

Images are built of pixel units that can have different encodings (e.g., grayscale, color). In color images, pixels are composed of three values in the red, green and blue channels (RGB). Other color spaces are also possible, such as YCbCr, HSI, CIELAB, YUV, etc. [70]. In IR images, the pixels have one channel value representing temperature (MWIR and LWIR) or reflectance (NIR, SWIR).
In the COMETS project [30,34,35], the authors employed a lookup table with fire-like colors (RGB values) that were extracted from a learned fire color histogram. The image pixels were compared to the table, and the values that were not found were considered as non-fire. A non-calibrated LWIR camera was used to capture qualitative images with radiation values relative to the overall temperature of the objects in the scene. The heat peak observed in the resulting image depends on the current scenario. A training process was carried out to learn the thresholds to be applied to the IR images for binarization. Images with and without fires were considered, as well as different lighting conditions and backgrounds. This permitted the selection of the appropriate threshold to apply during deployment in known conditions. Ambrosia et al. [17] selected fixed thresholds for each IR spectral band. They also varied the bands used for day-and-night missions. During night-time, the MWIR and LWIR bands were used, and during the day, the NIR band was added. The results show that fixed thresholds adapt poorly to unexpected conditions but can be tuned to perform better in known environments.
Yuan et al. [49,50,51,52,53] used color space segmentation. The images are converted from RGB to the CIELAB color space before further processing. Sun et al. [48] proposed the use of the YCbCr color space. In both cases, a set of rules was developed based on empirical calculations performed on captured fire images. For example, Sun et al. [48] considered pixels as fire if their values followed the following rules: $Y > C_b$, $C_r > C_b$, $Y > Y_{mean}$, $C_b < C_{b,mean}$ and $C_r > C_{r,mean}$, where the mean sub-index indicates the channel mean value of the corresponding image. The Otsu thresholding technique [71] was used in [49] to segment IR images.
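As an illustration, a minimal sketch of this rule set is given below, assuming an 8-bit BGR input frame and using OpenCV only for the color space conversion; the function name is ours:

```python
import cv2
import numpy as np

def fire_mask_ycbcr(frame_bgr):
    """Rule-based fire segmentation in YCbCr (after Sun et al. [48])."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV orders Y, Cr, Cb
    y, cr, cb = [c.astype(np.float32) for c in cv2.split(ycrcb)]
    # Channel means of the current image
    y_mean, cb_mean, cr_mean = y.mean(), cb.mean(), cr.mean()
    # A pixel is labeled as fire only if all five rules hold
    mask = (y > cb) & (cr > cb) & (y > y_mean) & (cb < cb_mean) & (cr > cr_mean)
    return mask.astype(np.uint8) * 255

# Usage: mask = fire_mask_ycbcr(cv2.imread("frame.jpg"))
```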
Color-value rule-based segmentation approaches are computationally efficient but lack robustness during detection. Results show that objects with a color similar to fire are often mislabeled as fire and trigger false alarms. A combination of rules in different color spaces and the addition of IR can increase the detection accuracy. More complex algorithms that are time and space aware have also been shown to increase the accuracy of fire detection [72,73,74,75,76,77,78,79,80,81,82]. The majority of them have not been integrated with UAVs.
In recent years, deep learning algorithms have shown impressive results in different areas. In relation to UAVs, past work using deep convolutional neural networks (CNNs) dealt mainly with fire detection [56,83,84]. Deep fire segmentation techniques proposed recently have shown the potential of developing an efficient wildland fire segmentation system [85]. The dataset used in this last work included some aerial wildland fire images [86]. Deep segmentation of wildland fires is still lacking in UAV applications.

4.1.2. Motion Segmentation

Fire segmentation using static images helps reduce the search space, but objects with a color similar to fire can often be detected and lead to false positives. Yuan et al. [49,50,51,52,53] and Sun et al. [48] proposed the use of the Lucas-Kanade optical flow algorithm [87] to consider fire movements. With the detection of corresponding feature points in consecutive image frames, a relative motion vector can be computed. The mean motion vector matches the UAV's motion, except for moving objects on the ground. Fire flames are among those objects because of their random motion. By detecting feature points within regions with both random movements and fire-like colors, the fire can be confirmed and the false alarm rate reduced.
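A minimal sketch of this idea is shown below, using OpenCV's pyramidal Lucas-Kanade implementation: points whose flow deviates strongly from the dominant (ego-motion) vector are flagged as candidates; the deviation threshold is an illustrative assumption:

```python
import cv2
import numpy as np

def erratic_motion_points(prev_gray, curr_gray, deviation_thresh=3.0):
    """Return tracked points whose motion deviates from the dominant
    (UAV ego-motion) vector -- candidate flame regions."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                 qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return np.empty((0, 2))
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
    good = status.ravel() == 1
    flow = (p1 - p0).reshape(-1, 2)[good]
    pts = p0.reshape(-1, 2)[good]
    mean_flow = flow.mean(axis=0)            # approximates the UAV's ego-motion
    residual = np.linalg.norm(flow - mean_flow, axis=1)
    return pts[residual > deviation_thresh]  # erratically moving points
```

Intersecting these points with fire-colored regions (e.g., the mask from the previous sketch) yields the confirmation step described above.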

4.2. Fire Detection and Features Extraction

The data fed to a detection system are analyzed in order to find patterns that confirm the occurrence of an event. Patterns are recognized by computing different features which can be strong or weak signatures for a specific application. In the case of fire detection with UAVs, the most popular features are color, brightness and motion. Research focusing on fire detection considers the fusion of more features to obtain better results in the classification stage. These features can be categorized by the level of abstraction at which they are extracted: pixel, spatial and temporal.
Color cues are widely used in the first step to extract fire-like pixels. This reduces the search space for further processing with more computationally expensive detection algorithms. For example, the RGB mean values of a Region of Interest (ROI) and the absolute color differences ($|R-B|$, $|R-G|$, $|B-G|$) can be thresholded [88] or used to train a classification algorithm [89]. In the work of Duong and Tinh [90], the authors further added the intensity mean, variance and entropy values of the ROI to the feature vector. Other features used in the literature include color histograms of the ROI [75] and color spatial dispersion measures [73].
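For illustration, these ROI-level color features could be assembled as in the sketch below, assuming an RGB-ordered array; the feature ordering and the histogram-based entropy computation are our assumptions:

```python
import numpy as np

def roi_color_features(roi_rgb):
    """ROI feature vector: RGB means, absolute channel differences,
    intensity mean, variance and entropy (cf. [88,89,90])."""
    r = roi_rgb[..., 0].astype(np.float64)
    g = roi_rgb[..., 1].astype(np.float64)
    b = roi_rgb[..., 2].astype(np.float64)
    intensity = (r + g + b) / 3.0
    hist, _ = np.histogram(intensity, bins=256, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return np.array([
        r.mean(), g.mean(), b.mean(),                # channel means
        abs(r.mean() - b.mean()),                    # |R - B|
        abs(r.mean() - g.mean()),                    # |R - G|
        abs(b.mean() - g.mean()),                    # |B - G|
        intensity.mean(), intensity.var(), entropy,  # cf. Duong and Tinh [90]
    ])
```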
After the detection of the ROI, other features can be extracted. Some authors consider spatial characteristics to determine the fire perimeter complexity through the convex hull-to-perimeter ratio and the bounding rectangle-to-perimeter ratio [91]. The position of the blob centroid within the bounding box has also been considered in this work.
Texture is another spatial characteristic often used for fire detection. The main texture descriptors proposed for this task are Local Binary Patterns (LBP) [92,93,94,95] and Speeded Up Robust Features (SURF) [75,78]. These operators characterize local spatial changes in intensity or color in an image and return a feature vector that can be used as input for classification. SURF [96] is computationally expensive but allows for scale- and rotation-invariant matching. LBP [97] needs less processing power and extracts the mean relation between pixels in a small area using the 8 neighbors of a pixel. Some authors [98,99] also used the Harris corner detector [100], which is a computationally efficient feature point extractor.
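As a sketch, an LBP-based texture descriptor for a candidate region could be computed as follows, here with scikit-image's uniform LBP variant; the parameter choices are illustrative:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_roi, points=8, radius=1):
    """Uniform LBP histogram of a grayscale ROI -- a compact texture
    descriptor usable as input to a fire/non-fire classifier."""
    lbp = local_binary_pattern(gray_roi, points, radius, method="uniform")
    n_bins = points + 2                  # uniform patterns + one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)     # normalized feature vector
```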
Deep learning is another approach that has been used for fire detection. It allows the automatic learning of low- and high-level features instead of hand-crafting them, as was the case with the previously described approaches. Zhao et al. [56] developed such an approach in the form of a 15-layer CNN called Fire_Net. The proposed architecture is inspired by AlexNet [101] and is made of a succession of convolutions, ReLUs and max poolings that end with a fully connected layer followed by a softmax layer. The approach is able to classify image patches as fire or not fire with a 98% accuracy, outperforming many other similar deep learning or learning-based approaches tested by the authors on the same data. Jiao et al. [60,61] also proposed a deep learning approach, but based on the YOLOv3 architecture [102]. The solution is an object detection approach able to provide bounding boxes around objects of interest. In this case, the network is trained on 3 classes: smoke, fire and a combination of smoke and fire. Initially, the authors used a YOLOv3-tiny architecture and on-board computations. The system was able to reach a precision of 83% and a frame rate of 3 to 6 fps [60]. In a more recent contribution [61], the same authors were able to reach a detection precision of 91% and a frame rate of over 80 fps by performing the computation on a GPU located in the GCS instead.
The features reviewed above are extracted from single images. When a video sequence is available, the temporal variation in color, shape and position of some blobs can be extracted. In the work of Ko et al. [103], the fire blob shape variation is computed by a skewness measure of the distance from the perimeter points to the blob's centroid. Foggia et al. [104] measured shape changes by computing the variation of the perimeter-to-area ratio over multiple frames. The authors also detected blob movements by matching the blobs in contiguous frames and computing the centroid displacement. Fire tends to move slowly upwards; thus, blobs that do not comply with this rule can be discarded [72,103,105]. The centroid displacement can also be an input for further classification [72,91,106]. A similarity evaluation is employed by Zhou et al. [91]. They measure the rate of change of the overlapping areas of blobs in contiguous frames. This gives a practical representation of the speed at which the region of interest is moving and of whether it is growing or shrinking. Fire flickering can also be identified by considering specific measures such as the intensity variation [107], the number of high-pass zero crossings in the wavelet transform [108] or the number of changes from fire to non-fire pixels inside a region [109]. Wang et al. [110] implemented a long-term movement gradient histogram, which accumulates the motion changes. The histogram is fitted to a curve which is used to evaluate whether the area corresponds to a fire or not. Kim and Kim [111] proposed a Brownian motion estimator that measures the correlation of two random vectors [112]. The vectors are composed of channel values, the first intensity derivative and the second intensity derivative. The Brownian motion estimator therefore describes the dynamic dependence between a series of regions across multiple frames. Temporal features consider a time window for the fire evaluation. Empirical criteria are then established to determine the optimal thresholds and duration of the events required to trigger a fire alarm.
Among the features described so far, some are more oriented towards fire detection. Features such as color, blob centroid displacement and flickering are among the most popular. Some novel approaches, such as the Brownian correlation or the histogram of gradients, have been less explored but are nevertheless interesting. A comparison of these different features and an evaluation of which ones have the greatest impact on the fire detection accuracy and false positive rate would be very useful. Unfortunately, such a comprehensive comparison does not seem to have been published yet. However, as most of these features are not computationally expensive, ensembling them can improve the performance and reduce the false detection rate. Table 5 gives an overview of the features used depending on the input.

4.3. Considerations in UAV Applications

Additional features can improve the fire detection. Features that are obtained by temporal analysis evaluate the difference between contiguous frames. In simple scenarios, where the camera is static and the background is not complex, frame subtractions can help detect moving pixels. In the presence of complex and dynamic backgrounds, Gaussian mixture models and other sophisticated background modeling techniques can be considered.
However, video streams from UAVs exhibit fast motion, and no classical background subtraction method would give satisfying results because of the static camera assumption. Even in a situation where the UAV is hovering over a fixed position, the images are still affected by wind turbulence and vibrations. Therefore, in order to be able to apply these motion analysis techniques, it is necessary to consider image alignment and video stabilization. The usual approach is to find strong feature points that can be tracked over a sequence of frames. Merino et al. [44], in their fire assistance system, used a motion estimation approach based on feature point matching known as a sparse motion field. From the matched points, they estimate a homography matrix that maps the pixels in an image to the pixels in the previous frame. This allows mapping every image to a common coordinate frame for alignment. SURF [96] and ORB [113] are two feature point methods that were used for extracting salient features prior to the image alignment. It seems that the impact and benefits of image alignment have not yet been addressed in the literature relating to fire and smoke detection, but some researchers such as Merino et al. [44] consider it important for their fire assistance system to work properly.
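A minimal sketch of this alignment step is given below, using ORB features and a RANSAC-estimated homography in OpenCV; the parameter values are illustrative:

```python
import cv2
import numpy as np

def align_to_previous(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame onto the previous
    one (ORB features + RANSAC), for stabilization prior to motion analysis."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    h, w = prev_gray.shape
    return cv2.warpPerspective(curr_gray, H, (w, h))      # aligned frame
```

Frame differencing or background modeling can then be applied to the aligned frames as if the camera were static.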

5. Wildland Fire Datasets

A large number of fire detection approaches use a classification method that relies on learning algorithms. One of the main challenges of machine learning is to build or find a large enough dataset with low bias. Such a dataset should contain positive examples with high feature variance and negative examples consisting of standard and challenging samples.
Deep learning techniques need even larger datasets for training. Data augmentation techniques can help in this regard, but they require a sufficiently large dataset to start with. Well-developed research fields such as face or object recognition already have large datasets that have been built and vetted by the community. These datasets are considered suitable for the development and benchmarking of new algorithms in their respective fields. In the case of fire detection, no such widely employed dataset is available yet. Some effort has been made in this direction. Steffens et al. [114] captured a set of 24 videos from hand-held cameras and robot-mounted cameras. The ground-truth was defined by bounding boxes around the fire. Foggia et al. [104] compiled a collection of 29 videos of fire and smoke but did not provide ground-truth data. Chino et al. [93] gathered around 180 fire images to test their BowFire algorithm and made the dataset available with manually segmented binary images representing the ground-truth for the fire area. However, the main problem with these datasets is the lack of wildland fire samples. This could be problematic for the development of a fire detection module for wildland fire assistance systems. Aerial fire samples in the form of videos are also necessary for the development of UAV-based systems.
In [86], the authors collected images and videos to build the Corsican fire database. This dataset is specifically built for wildland fires. It also contains multimodal images (visual and NIR images) of fires. The images have their corresponding binary masks representing the ground-truth (segmented fire area). Other information is also available such as smoke presence, location of capture, type of vegetation, dominant color, fire texture level, etc. The dataset contains some aerial wildland fire views, but their number is limited.
Wildland fire UAV research still lacks a dataset that can help improve the development of the algorithms needed in a wildland fire assistance system. Table 6 contains a brief description of the main fire research datasets.

6. Fire Geolocation and Fire Modeling

In a wildland fire scenario, when a fire is detected, the vehicle must alert the GCS and send the fire's geolocation so that firefighting resources can be deployed. In the reviewed literature, two different levels of approach are studied for detection alerts. Some authors use a local approach, where the position of the fire is reported at first contact. Other authors go further by taking a global approach to the problem, identifying and locating the entire perimeter of the fire.
The simplest alerting approach is to directly provide the geographical coordinates of the UAV using the onboard GPS when a fire is first detected. This can be performed with good accuracy when the UAV is flying at low altitude and has its data acquisition sensor pointing at the ground at a 90-degree angle. This approach is employed by Wardihani et al. [55], using a downward-pointing temperature sensor to locate fire hotspots. A similar approach is possible with a camera located on the bottom of a UAV and oriented downward. However, for a camera located on the front side of the UAV, it is necessary to compute a projection of the camera plane onto a global coordinate system using a homography. This transformation allows mapping pixel coordinates to the ground plane. This approach performs well when the UAV pose estimation is reliable and when the ground is mostly planar. Some difficulties arise in the presence of uneven surfaces. Some authors [17,30,41,44,45] have circumvented this limitation by exploiting a previously known Digital Elevation Map (DEM) of the surveyed area. A DEM allows for the estimation of the location from which a ray corresponding to a fire pixel originated, thus improving the fire location estimate. DEMs can induce some errors. To reduce these errors, a UAV fleet looking at the same hotspot can first detect the fire and then use the different views of the UAVs to refine the estimations [30].
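Under the flat-ground assumption, this projection reduces to intersecting the camera ray with the ground plane, as in the sketch below (a pinhole camera with known intrinsics and pose is assumed; uneven terrain would require a DEM instead):

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, ground_z=0.0):
    """Intersect the camera ray through pixel (u, v) with a flat ground plane
    z = ground_z (world frame). K: 3x3 intrinsics; R, t: camera-to-world
    rotation and camera position."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                             # rotate into world frame
    s = (ground_z - t[2]) / ray_world[2]                # scale to reach the plane
    return t + s * ray_world                            # ground point (x, y, z)
```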
In order to better characterize ongoing wildfires, some authors have studied fire modeling in order to provide global information such as the fire boundaries and its behavior. The simplest models use an elliptic shape that is fitted to the fire and in which each ellipse axis grows at a given rate. For example, Ghamry and Zhang [46,47] and Ghamry et al. [69] applied an elliptical model to estimate the fire perimeter. Here, the rate at which the ellipse axes grow depends on the wind direction and speed. This allowed the authors to estimate the perimeter of the fire and then define a UAV team formation for further monitoring.
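A minimal sketch of such a wind-aligned elliptical perimeter is shown below; the alignment of the major axis with the wind and the growth-rate comment are assumptions consistent with the description above, not the exact formulation of the cited works:

```python
import numpy as np

def ellipse_perimeter(center, a, b, wind_dir_rad, n_points=36):
    """Sample points on an elliptical fire perimeter whose major axis is
    aligned with the wind direction (cf. Ghamry and Zhang [46,47])."""
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    x = a * np.cos(theta)                  # major axis (downwind)
    y = b * np.sin(theta)                  # minor axis (crosswind)
    c, s = np.cos(wind_dir_rad), np.sin(wind_dir_rad)
    rot = np.array([[c, -s], [s, c]])      # rotate into the wind frame
    return center + (rot @ np.vstack([x, y])).T

# Illustrative growth step: axes grow at wind-dependent rates (assumed values),
# e.g., a += rate_head(wind_speed) * dt;  b += rate_flank(wind_speed) * dt
```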
More complex fire models with more variables and data inputs have also been studied. These more advanced models often try to estimate the rate of spread (ROS) of the fire based on wind speed and direction, terrain slopes, vegetation density, weather and other variables. These models are often tested in simulation. For example, Kumar et al. [39], Pham et al. [57,58], Lin et al. [54,63] and Seraj and Gombolay [62] used the FARSITE model to test their coordination strategies under various scenarios. Some of these models are not suitable for real-time fire estimation because their complexity significantly increases the computation time. However, Lin et al. [54,63] proposed a convergent Kalman filter-based methodology to provide data to a scalar field wildfire model that is executable on board a UAV and requires low computation resources. The proposed approach was able to provide estimations of the wildfire ROS and the fire front contour.
Some authors used a different approach to model and characterize the fire. For example, Martínez-de Dios et al. [40] used multiple images to extract geometric features from the fire, such as the base perimeter, the height and the inclination. The extraction is performed using computer vision techniques (e.g., image segmentation). The authors propose the use of multiple visible-NIR multimodal stereo vision systems to extract the fire area. Each stereo system provides an approximate 3D model of the fire. The models captured from multiple views are registered to obtain the overall fire 3D model. This 3D model is tracked over time to compute different fire characteristics such as height, width, inclination, perimeter, area, volume, ROS and their evolution over time.
Bradley and Taylor [38] divided the environment into cells and assigned a fire probability to each cell using IR images. This method takes into account the uncertainty in the position of the UAV and therefore applies a Gaussian weighting scheme to the probabilities. The authors then apply a Sequential Monte Carlo (SMC) method to compose a Georeferenced Uncertainty Mosaic (GUM), which is then used to locate the fire. Belbachir et al. [42] model the fire as a static cone of heat originating from the fire center and dissipating with altitude and horizontal distance. Based on this assumption, they construct a grid of fire probabilities from the temperature measures. The fire is detected when the probabilities are above a defined threshold. Lin and Liu [63] also generate an occupancy grid by using temperature sensors and associating temperatures with cells. They also compute the gradient of the grid and estimate the fire center, ROS and perimeter.
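The grid-based idea can be sketched as follows: temperatures are accumulated in an occupancy grid, the fire center is taken as the centroid of hot cells and the perimeter as the zone of steep gradient; the threshold and percentile values are illustrative assumptions, not those of the cited works:

```python
import numpy as np

def fire_center_from_grid(temp_grid, cell_size, fire_thresh=320.0):
    """Estimate the fire center and perimeter cells from a temperature
    occupancy grid (cf. Lin and Liu [63])."""
    gy, gx = np.gradient(temp_grid)            # temperature gradient per axis
    hot = temp_grid > fire_thresh              # cells considered burning
    if not hot.any():
        return None, None
    rows, cols = np.nonzero(hot)
    center = np.array([cols.mean(), rows.mean()]) * cell_size  # hot-cell centroid
    # Perimeter: hot cells lying in the steep-gradient zone
    grad_mag = np.hypot(gx, gy)
    perimeter = hot & (grad_mag > np.percentile(grad_mag[hot], 75))
    return center, perimeter
```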

7. Coordination Strategy

The coordination strategy is an important component when deploying autonomous UAVs. It establishes the procedures for communication, task allocation and planning. Based on the communication links established during the mission, three main schemes can be distinguished. First, for a single vehicle, there is no coordination strategy, as the UAV does not need to communicate with other UAVs. For multiple UAVs, the path planning and task allocation are often resolved by an optimization process or assumed to be so. One approach is to centralize the path planning and decision process in the GCS and to only allow the UAVs to communicate with it but not with each other. Another approach is to tackle the problem of coordinating multiple entities in a distributed and decentralized manner. Each vehicle can connect to other UAVs, allowing for distributed decision-making, and communication with the GCS serves only for reporting observations, not for planning.

7.1. Single UAV

The viability of single UAVs, either large airships or small aerial systems, has been evaluated for wildland fire surveillance and monitoring. The Ikhana UAS [17] was deployed in the western US between 2006 and 2010. It was a single large, high-endurance vehicle with powerful sensory systems for autonomous fire detection. The decision strategy and the path planning were performed by human operators. Similarly, Wardihani et al. [55] used a small quadcopter UAV and manually defined flight paths using mission planner software in order to survey a region and detect hotspots. Pastor et al. [41] proposed a semi-autonomous system in which a single UAV sweeps a rectangular area, locates hotspots and then returns to a nearby ground station. A human could control the UAV and order it to stay over the hotspot location to visually confirm whether it corresponds to a real fire. Martins et al. [33] used an entirely autonomous navigation system in which the UAV only received waypoints from which to start surveillance. When a hotspot is detected, the UAV approaches the source, hovers over the target and confirms the fire. The experimental tests showed very interesting results for fire detection and monitoring tasks.
While single UAV strategies are interesting for their simplicity, they remain very limited in relation to large-scale wildland fires. For this reason, the more advanced and mature solutions use team-based systems that help increase the coverage area.

7.2. Centralized

The addition of more UAVs to the mission increases the area covered by the system. In a centralized team strategy, all UAVs are coordinated by a single GCS. This scheme can lead to more accurate fire georeferencing and fewer false alarms by allowing for global situation awareness at all times. Another advantage of centralized communications is that it makes centralized processing easy and therefore makes it possible to use smaller and more affordable UAVs, as they do not require high processing power. The main drawback of this approach is the need for a functional communication network that can connect to all UAVs at all times, which is not always possible when the fire areas are remote.
Martínez-de Dios et al. [30] proposed a simple centralized approach in which data from multiple UAVs are combined to correct and reduce the uncertainty of fire georeferencing. After a fire is detected by a unit, nearby vehicles are sent to the same region to perform a fire confirmation.
Belbachir et al. [42] proposed a greedy algorithm for fire detection using a probability grid. For this purpose, each UAV selects, in a greedy way, the path that provides the most information. The UAVs visit cells that have not yet been visited and which lie in the direction where the temperature increases.
Ghamry and Zhang [46] distributed the UAVs uniformly around the fire perimeter using an elliptical formation. This allows the UAVs to keep their paths at even angles around the estimated fire center. Ghamry and Zhang [47] added the ability to restructure the formation if a UAV is damaged or has to leave for refueling. To achieve this fault-tolerant behavior, when a UAV needs to leave the formation, all communications with it are stopped. The other vehicles automatically notice the missing UAV and start a reformation process. In this system, prior to the monitoring task, the fleet flies in a leader–follower formation where the leader receives a predetermined flight path and the rest follow it at specific distances and angles. In the work of Lin et al. [54] and Lin and Liu [63], UAVs are directed to fly uniformly in formation around an estimated fire center. In this approach, a Kalman filter is used to estimate the fire contour and the fire center movements, allowing the UAVs to fly and adapt their formation accordingly.
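The uniform distribution itself is simple to express: evenly spaced angles on the estimated ellipse give one waypoint per UAV, as in the sketch below (the phase parameter, our addition, lets the formation orbit the fire):

```python
import numpy as np

def formation_waypoints(n_uavs, center, a, b, phase=0.0):
    """Evenly spaced waypoints on the estimated elliptical fire perimeter,
    one per UAV (cf. Ghamry and Zhang [46,47])."""
    angles = phase + 2 * np.pi * np.arange(n_uavs) / n_uavs
    return np.stack([center[0] + a * np.cos(angles),
                     center[1] + b * np.sin(angles)], axis=1)

# Usage: wp = formation_waypoints(4, np.array([0.0, 0.0]), a=500.0, b=300.0)
```

Reformation after a UAV loss amounts to recomputing the waypoints with the new fleet size.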
While incomplete, the initial results by Aydin et al. [26] are worthy of mention, as this is one of the only works to tackle firefighting directly. The authors theorized a collaboration model in which scout UAVs would spot wildfires and monitor the risk of spread to structures. Relay UAVs would then be used to extend the communication range and allow the scouts to contact firefighting UAVs carrying fire-extinguishing balls. It is believed that 10 UAVs, each carrying ten 1.3 kg fire-extinguishing balls, would be able to extinguish an area of approximately 676 m2 per sortie. While the extinguishing capacity of the fire-extinguishing balls was validated, the UAV coordination strategy has not been tested yet. However, this approach remains promising for wildfire fighting.

7.3. Decentralized

In a decentralized communication scheme, the UAVs communicate with each other in order to collaborate on path planning and optimal area coverage. The interaction with the GCS is reduced to a minimum and usually only happens at the beginning of a flight, to receive initial flight coordinates, or at the end of a flight, for observation reporting and data transfers. The system is able to perform more tasks in an autonomous manner and even to cover larger areas by using some UAVs as communication relays. The main advantages of such an approach are its reliability, as a link with the GCS is not required to be active at all times, and the possibility of operating in remote areas where global communication links are impractical. However, the added complexity imposes new challenges, as distributed coordination algorithms need to be developed and implemented. In the literature, these systems were mainly used for optimal fire perimeter surveillance and task allocation.
Alexis et al. [37] describe a UAV rendezvous-based consensus algorithm which aims to equally distribute the path lengths of the UAVs around the fire perimeter. UAVs depart in pairs and in opposite directions around the fire perimeter. They set rendezvous locations where they share knowledge about the traveled paths, the current state of the fire perimeter and other units encountered. If the update shows that the fire perimeter has evolved, each UAV selects new rendezvous locations in such a way that the distance traveled by each of the UAVs is almost the same. The authors have shown through simulations that the algorithm converges and that the recomputation of rendezvous points allows the UAV formation to adapt efficiently to an evolving fire perimeter. The optimal distribution of UAVs around a fire perimeter has also been studied by Casbeer et al. [32]. They demonstrated that in order to reduce the length of time between data uploads to the GCS, the UAVs must depart in pairs, travel in opposite directions and be evenly spaced around the perimeter. To achieve optimal perimeter tracking, they designed a control loop that keeps half of the bottom-facing IR camera over hotspot pixels and the other half over the non-fire area.
For monitoring, Pham et al. [57,58] propose a collaborative system in which UAVs are sent to monitor a fire and optimally cover the fire area. This formation is achieved by detecting neighboring UAVs and reducing camera view overlaps while considering the location of the fire front. The UAVs are also allowed to increase or decrease their altitude in order to control the resolution of the captured imagery and provide optimal observational capabilities. This behavior is accomplished with the application of a force field-based algorithm that simulates the attraction of a UAV by the fire front and its repulsion by the other UAVs. The attraction and repulsion forces are adapted by considering the fire front confidence and the estimated field of view of each UAV. One problem with this approach is that the visibility reduction induced by smoke is not taken into account, which can put the vehicle in a dangerous situation.
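A minimal sketch of such a force field update is given below: attraction toward the nearest fire-front point and inverse-square repulsion from neighboring UAVs. The gains and force laws are illustrative assumptions, not the exact formulation of Pham et al. [57,58]:

```python
import numpy as np

def force_field_step(pos, fire_front_pts, neighbor_pos, k_att=1.0, k_rep=5e3):
    """One control step of a force-field coverage scheme: attraction toward
    the nearest fire-front point, repulsion from neighboring UAVs."""
    d_front = fire_front_pts - pos
    nearest = d_front[np.argmin(np.linalg.norm(d_front, axis=1))]
    attraction = k_att * nearest / (np.linalg.norm(nearest) + 1e-9)
    repulsion = np.zeros(2)
    for q in neighbor_pos:
        d = pos - q
        repulsion += k_rep * d / (np.linalg.norm(d) ** 3 + 1e-9)  # inverse-square
    return attraction + repulsion          # desired velocity direction
```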
Other coordination strategies were proposed to perform optimal task allocation within a team of UAVs. The tasks can be surveillance, monitoring or firefighting. Ghamry et al. [31] proposed an auction-based firefighting coordination algorithm. In this algorithm, a fire is first detected, and then the UAVs must coordinate themselves to act upon each known fire spot. To achieve this, each vehicle generates a bid valued by a cost function of its distance from the fire spot. In this manner, the UAV with the best offer for the task is assigned to it. Sujit et al. [36] also proposed a similar auction-based collaboration algorithm, but with the ability to consider a minimal number of UAVs to watch each hotspot. Both contributions distributed the UAVs equally around the fire perimeter.
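The essence of such an auction can be sketched in a few lines: for each fire spot, every free UAV bids a distance-based cost and the lowest bid wins. The greedy, distance-only cost function is a simplifying assumption:

```python
import numpy as np

def auction_assign(uav_pos, fire_spots):
    """Greedy auction (cf. Ghamry et al. [31]): for each fire spot, every
    free UAV bids its distance; the lowest bid wins the task."""
    free = list(range(len(uav_pos)))
    assignment = {}
    for spot_id, spot in enumerate(fire_spots):
        if not free:
            break
        bids = {u: np.linalg.norm(uav_pos[u] - spot) for u in free}
        winner = min(bids, key=bids.get)   # best (cheapest) offer wins
        assignment[winner] = spot_id
        free.remove(winner)
    return assignment
```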
Decentralized approaches have also been used for direct firefighting using fire suppressants. Kumar et al. [39] proposed such a coordination protocol, in which the planned path of each UAV is optimized to minimize its distance to the detected fire perimeter. In a second phase, the paths of UAVs carrying fire suppressants are optimized to minimize their distance to the fire center. This allows the solution to monitor a fire situation and provide optimal fire suppressant delivery.
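Under the simplifying assumption that a candidate path is scored point by point, the two-phase idea can be sketched as two cost functions, one against the perimeter and one against the fire center. The helpers below are assumptions for illustration only, not the optimization used in [39].
```python
# Illustrative two-phase path cost: monitoring UAVs are scored against the
# nearest perimeter point, suppressant carriers against the fire center.
import math

def path_cost(waypoints, target):
    """Sum of distances from each waypoint to a single target point."""
    return sum(math.dist(w, target) for w in waypoints)

def plan_cost(waypoints, perimeter_points, fire_center, carries_suppressant):
    if carries_suppressant:          # phase 2: head for the fire center
        return path_cost(waypoints, fire_center)
    # phase 1: stay close to the perimeter (nearest perimeter point per waypoint)
    return sum(min(math.dist(w, p) for p in perimeter_points) for w in waypoints)
```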
Recently, new control approaches based on deep reinforcement learning (DRL) have started to appear in the literature. One of the first, for wildfire monitoring, was proposed by Julian and Kochenderfer [59]. The authors formulated the problem as a partially observable Markov decision process (POMDP) solvable with DRL. Since DRL requires a simulation environment, they also defined a simplified stochastic wildfire model on a 100 × 100 fire presence grid. This environment was used to train a simulated fixed-wing agent whose decision process is based on a CNN. Multiple agents using the same CNN can be spawned in the environment to simulate a multi-UAV system. While the authors evaluated different DRL approaches, the best performing one used a collaborative belief map, shared and updated by all agents, indicating the state of the wildfire. A reward function that rewards burning cells newly discovered by any aircraft encourages thorough fire monitoring and collaboration between agents, and an aircraft proximity penalty encourages aircraft separation. Simulation results show that the approach outperforms a baseline receding-horizon controller, scales with different numbers of aircraft and adapts to different fire area shapes. However, the approach remains limited: the environment model is oversimplified, and the UAVs are assumed to fly at constant speed and at distinct fixed altitudes because collision avoidance is not implemented.
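A reward of this shape could be sketched as below, assuming boolean 100 × 100 grids for the fire, the shared belief and the union of camera footprints; the gains, the minimum separation and the belief-map update are illustrative assumptions rather than the paper's exact values.
```python
# Hedged sketch of the reward shaping described in [59]: positive reward for
# burning cells newly seen by any aircraft, a penalty for close aircraft pairs.
import numpy as np

def reward(fire_grid, seen_mask, view_mask, positions,
           r_new=1.0, p_prox=-10.0, min_sep=50.0):
    """fire_grid, seen_mask, view_mask: (100, 100) boolean arrays; view_mask
    is the union of all aircraft camera footprints at this step."""
    newly_seen = fire_grid & view_mask & ~seen_mask
    r = r_new * newly_seen.sum()
    for i in range(len(positions)):              # pairwise separation penalty
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < min_sep:
                r += p_prox
    seen_mask |= view_mask                       # update the shared belief
    return r
```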
While these new approaches are interesting, research on distributed control frameworks based on objective function optimization is still very active and continues to produce state-of-the-art results. This is the case for the approach proposed by Seraj and Gombolay [62]. The authors used a dual-criterion objective function based on Kalman uncertainty residual propagation and a weighted multi-agent consensus protocol. An adaptive extended Kalman filter (AEKF) leverages the fire propagation model (FARSITE) and the observation model. The approach includes an uncertainty-based controller built by combining a fire front location uncertainty map with a human uncertainty map. This allows the system to take into account GPS-equipped human firefighters on the ground, ensuring their safety while still considering the fire front location as other similar methods do. A second controller (the formation controller) encourages the UAV team to maintain a formation consensus that maximizes coverage. The approach uses the theory of artificial potential fields to generate forces that pull or push the UAVs toward an optimal state. In simulation, the solution outperformed both a state-of-the-art model-based distributed control algorithm and a DRL baseline, strongly confirming the relevance of the approach.
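How the two uncertainty maps might be merged into a single observation-priority field is sketched below; the Gaussian bump per firefighter, the weights and the grid layout are assumptions for illustration, not the authors' formulation.
```python
# Illustrative combination of the two uncertainty maps described in [62]:
# fire-front location uncertainty (e.g., an AEKF residual) and human proximity.
import numpy as np

def priority_map(fire_uncertainty, human_positions, grid_xy,
                 w_fire=1.0, w_human=2.0, sigma=40.0):
    """Cells that are both uncertain and near firefighters get the highest
    observation priority, pulling UAVs toward them.
    fire_uncertainty: (H, W) array; grid_xy: (H, W, 2) cell coordinates."""
    human_term = np.zeros_like(fire_uncertainty)
    for h in human_positions:                    # Gaussian bump per firefighter
        d2 = np.sum((grid_xy - np.asarray(h)) ** 2, axis=-1)
        human_term += np.exp(-d2 / (2.0 * sigma ** 2))
    return w_fire * fire_uncertainty + w_human * human_term
```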
This paper only reviews decentralized communication frameworks used in wildland fire contexts. However, many solutions in the literature are presented as general communication solutions without a corresponding application. This is the case for the work of Pignaton de Freitas et al. [115], who proposed a multipurpose localization service informing all UAVs in a formation of the other UAVs' positions. One interesting and rarely seen feature of the system is its ability to estimate the position of UAVs whose broadcasts are not received due to communication errors. This illustrates that researchers in the field should not refer only to wildfire-related works when designing new systems, and that some works outside the field may be important to consider.
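The missing-teammate estimation can be approximated with constant-velocity dead reckoning from the last received broadcast, as in the minimal sketch below; this simple motion model is an assumption, not necessarily the estimator used in [115].
```python
# Minimal "missing teammate" sketch: when a position broadcast is lost,
# propagate the last received state forward under constant velocity.
import numpy as np

def estimate_position(last_pos, last_vel, last_time, now):
    """Dead reckoning for a UAV whose broadcasts stopped at last_time."""
    return last_pos + last_vel * (now - last_time)

print(estimate_position(np.array([100.0, 50.0]),
                        np.array([10.0, 0.0]), last_time=12.0, now=15.0))
# -> [130.  50.]
```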

8. Cooperative Autonomous Systems for Wildland Fires

UAVs can play an important role in the detection and monitoring of large wildland fires. Multiple UAVs can collaborate in the extraction of important data and improve firefighting strategies. Moreover, aerial vehicles can cooperate with unmanned ground vehicles (UGV) in operational firefighting scenarios.
One type of cooperation consists of using UGVs to carry small, short-endurance UAVs to detected fire areas and to serve as their refueling stations. Ghamry et al. [69] proposed such a system, in which a coordinated leader–follower strategy is used. UAVs are carried by UGVs to a desired location and deployed to explore preassigned areas. If a UAV detects a fire, an alert is sent to the leading UGV and to the rest of the fleet, and the leader computes new optimal trajectories for the UAVs to monitor the fire perimeter. Phan and Liu [116] present another collaborative UAV-UGV firefighting strategy: a hierarchical system composed of a large leading airship and cooperating UAVs and UGVs. When a fire is detected, the vehicles are deployed for fire monitoring. In this scenario, UAVs and UGVs are assumed to have the capacity to carry water and combat the fire. The UAVs are deployed in an optimal flying formation over the fire front area, while the UGVs are sent to prevent the fire propagation and limit its spread using water and fire retardants. Auction-based algorithms are implemented to allocate the tasks to each vehicle. Viguria et al. [117] also proposed task allocation through an auction-based algorithm. In their framework, the vehicles can perform various tasks such as surveillance, monitoring, fire extinguishing, transportation and acting as communication relays. A human or the GCS generates a list of tasks to be fulfilled; each robot sends a bid for each task, and the one with the best offer wins and proceeds to execute the task. The bids are based on task-specific cost functions that consider the vehicle's distance, fuel level and capabilities, as in the sketch below.
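A heterogeneous-fleet version of such bidding is sketched next: a robot returns a finite cost only for tasks it is capable of, and the cost mixes distance with remaining fuel. The capability model, the weights and all names are assumptions for illustration, not the cost functions of [117].
```python
# Sketch of capability-aware, service-based bidding for a mixed UAV/UGV team.
import math

def bid(robot, task):
    """Return a finite cost only if the robot can perform the task at all."""
    if task["type"] not in robot["capabilities"]:
        return math.inf
    distance = math.dist(robot["position"], task["position"])
    return distance + 100.0 * (1.0 - robot["fuel"])     # penalize low fuel

robots = [
    {"id": "ugv1", "position": (0, 0), "fuel": 0.9,
     "capabilities": {"extinguish", "transport"}},
    {"id": "uav1", "position": (20, 0), "fuel": 0.4,
     "capabilities": {"surveillance", "relay"}},
]
task = {"type": "extinguish", "position": (30, 40)}
winner = min(robots, key=lambda r: bid(r, task))
print(winner["id"])  # ugv1: the only robot able to extinguish
```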
Akhloufi et al. [118] proposed a multimodal UAV-UGV cooperative framework for large-scale wildland fire detection and segmentation, 3D modeling and strategic firefighting. The framework is composed of multiple UAVs and UGVs operating in a team-based cooperative mode. Figure 2 illustrates the proposed framework [118]. The vehicles are equipped with a multimodal stereo-vision system such as the ones developed for ground-based fire detection and 3D modeling [119,120,121,122]. The stereo system includes multispectral cameras operating in the visible and NIR spectra for efficient fire detection and segmentation. Each stereo system provides an approximate 3D model of the fire. The models captured from multiple views are registered using inertial measurements, geospatial data and features extracted using computer vision to build a 3D model of the propagating fire front [119,120,121]. Based on this 3D model, the UAVs and UGVs can be positioned strategically to capture complementary views of the fire front. The 3D model is tracked over time to compute three-dimensional fire characteristics such as height, width, inclination, perimeter, area, volume and rate of spread (ROS), together with their evolution over time. The extracted three-dimensional fire characteristics can be fed to a mathematical fire propagation model to predict the fire behavior and spread over time. The obtained data make it possible to issue alerts and inform about the risk levels in the surrounding areas. The predicted fire propagation can be mapped and used in an operational firefighting strategy. Furthermore, this information can be used for the optimal deployment of UAVs and UGVs in the field. This type of framework can be combined with other firefighting resources such as firefighters, aerial firefighting aircraft and future fire extinguisher drones.
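As a rough indication of how such characteristics could be derived once fire-front points are registered in a common frame, the sketch below computes height, width and a crude ROS from two point clouds taken dt seconds apart; a real system such as [118,119,120] fits surfaces and tracks the front over time, so this is only a toy stand-in under those assumptions.
```python
# Toy derivation of fire-front characteristics from registered 3D points.
import numpy as np

def front_characteristics(points_t0, points_t1, dt):
    """points_*: (N, 3) arrays of fire-front points (x, y, z) in meters at two
    instants dt seconds apart. Returns height, width and a crude rate of
    spread (ROS) from the mean horizontal front displacement."""
    height = points_t1[:, 2].max() - points_t1[:, 2].min()
    width = points_t1[:, 0].max() - points_t1[:, 0].min()
    ros = np.linalg.norm(points_t1[:, :2].mean(axis=0)
                         - points_t0[:, :2].mean(axis=0)) / dt
    return height, width, ros
```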

9. Conclusions

This paper presents a survey of different approaches for the development of UAV fire assistance systems. Sensing instruments, fire perception algorithms and different coordination strategies have been described. UAVs can play an important role in the fight against wildland fires over large areas. With their decreasing prices and wider commercial availability, new applications in this field will emerge. However, some limitations remain, such as autonomy, reliability and fault tolerance, and further research is needed to overcome them. Safety is also a concern, as there are risks associated with UAVs flying over firefighters or close to aircraft carrying water and fire retardants. Nevertheless, the benefits of using UAVs are significant, and this could lead to innovations aiming to solve these problems.
On the perception side, most of the developed techniques rely on classical computer vision algorithms. However, work based on deep learning has begun to emerge in recent years, especially for fire detection, although it remains at an early stage of development. Furthermore, some datasets containing wildland fire images that can be used for the development of computer vision algorithms were presented. Unfortunately, only a small number of them contain aerial views of wildland fires. In addition, the lack of a large dataset limits the development of advanced deep learning algorithms. Such datasets would be important for the future of the field, as they could serve to benchmark approaches and compare them quantitatively. Therefore, deep learning and the construction of new large-scale aerial wildfire datasets represent interesting research opportunities for future contributions in the field.
In this work, frameworks proposing cooperative autonomous systems in which both aerial and ground vehicles contribute to wildland firefighting were also discussed. While these frameworks are mostly theoretical and limited to simulations, they provide interesting ideas toward a more complete wildland firefighting system. Future research in these areas can provide new approaches for the further development of autonomous operational systems requiring little or no human intervention.

Author Contributions

Conceptualization, M.A.A.; writing—original draft preparation, N.A.C., M.A.A. and A.C.; writing—review and editing, A.C. and M.A.A.; visualization, A.C. and M.A.A.; supervision, M.A.A.; funding acquisition, M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

Natural Sciences and Engineering Research Council of Canada (NSERC), reference number RGPIN-2018-06233.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Acknowledgments

This work has been partially supported by the government of Canada under the Canada–Chile Leadership Exchange Scholarship.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jolly, W.; Cochrane, M.; Freeborn, P.; Holden, Z.; Brown, T.; Williamson, G.; Bowman, D. Climate-induced variations in global wildfire danger from 1979 to 2013. Nat. Commun. 2015, 6, 7537. [Google Scholar] [CrossRef]
  2. Kelly, R.; Chipman, M.; Higuera, P.; Stefanova, I.; Brubaker, L.; Hu, F. Recent burning of boreal forests exceeds fire regime limits of the past 10,000 years. Proc. Natl. Acad. Sci. USA 2013, 110. [Google Scholar] [CrossRef] [Green Version]
  3. Amadeo, K. Wildfire Facts, Their Damage, and Effect on the Economy. 2019. Available online: https://www.thebalance.com/wildfires-economic-impact-4160764 (accessed on 30 December 2019).
  4. Westerling, A.L.; Hidalgo, H.G.; Cayan, D.R.; Swetnam, T.W. Warming and Earlier Spring Increase Western U.S. Forest Wildfire Activity. Science 2006, 313, 940–943. [Google Scholar] [CrossRef] [Green Version]
  5. Insurance Information Institute. Facts + Statistics: Wildfires. Available online: https://www.iii.org/fact-statistic/facts-statistics-wildfires (accessed on 30 December 2019).
  6. U.S. Global Change Research Program. Climate Science Special Report: Fourth National Climate Assessment. 2017. Available online: https://science2017.globalchange.gov/chapter/8/ (accessed on 30 December 2019).
  7. Chien, S.; Doubleday, J.; Mclaren, D.; Davies, A.; Tran, D.; Tanpipat, V.; Akaakara, S.; Ratanasuwan, A.; Mandl, D. Space-based sensorweb monitoring of wildfires in Thailand. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011; pp. 1906–1909. [Google Scholar] [CrossRef]
  8. Chiaraviglio, N.; Artés, T.; Bocca, R.; López, J.; Gentile, A.; Ayanz, J.S.M.; Cortés, A.; Margalef, T. Automatic fire perimeter determination using MODIS hotspots information. In Proceedings of the 2016 IEEE 12th International Conference on e-Science, Baltimore, MD, USA, 24–27 October 2016; pp. 414–423. [Google Scholar] [CrossRef]
  9. Fukuhara, T.; Kouyama, T.; Kato, S.; Nakamura, R.; Takahashi, Y.; Akiyama, H. Detection of small wildfire by thermal infrared camera with the uncooled microbolometer array for 50-kg class satellite. IEEE Trans. Geosci. Remote Sens. 2017, 55, 4314–4324. [Google Scholar] [CrossRef]
  10. Yoon, I.; Noh, D.K.; Lee, D.; Teguh, R.; Honma, T.; Shin, H. Reliable wildfire monitoring with sparsely deployed wireless sensor networks. In Proceedings of the 2012 IEEE 26th International Conference on Advanced Information Networking and Applications, Fukuoka, Japan, 26–29 March 2012; pp. 460–466. [Google Scholar] [CrossRef]
  11. Tan, Y.K.; Panda, S.K. Self-autonomous wireless sensor nodes with wind energy harvesting for remote sensing of wind-driven wildfire spread. IEEE Trans. Instrum. Meas. 2011, 60, 1367–1377. [Google Scholar] [CrossRef]
  12. Lin, H.; Liu, X.; Wang, X.; Liu, Y. A fuzzy inference and big data analysis algorithm for the prediction of forest fire based on rechargeable wireless sensor networks. Sustain. Comput. Informatics Syst. 2018, 18, 101–111. [Google Scholar] [CrossRef]
  13. Marder, J. NASA Tracks Wildfires From Above to Aid Firefighters Below. 2019. Available online: https://www.nasa.gov/feature/goddard/2019/nasa-tracks-wildfires-from-above-to-aid-firefighters-below (accessed on 30 December 2019).
  14. Bumberger, J.; Remmler, P.; Hutschenreuther, T.; Toepfer, H.; Dietrich, P. Potentials and Limitations of Wireless Sensor Networks for Environmental Monitoring. In Proceedings of the AGU Fall Meeting, San Francisco, CA, USA, 9–13 December 2013; Volume 2013. [Google Scholar]
  15. Nex, F.; Remondino, F. Preface: Latest Developments, Methodologies, and Applications Based on UAV Platforms. Drones 2019, 3, 26. [Google Scholar] [CrossRef] [Green Version]
  16. Ollero, A.; de Dios, J.M.; Merino, L. Unmanned aerial vehicles as tools for forest-fire fighting. For. Ecol. Manag. 2006, 234, S263. [Google Scholar] [CrossRef]
  17. Ambrosia, V.; Wegener, S.; Zajkowski, T.; Sullivan, D.; Buechel, S.; Enomoto, F.; Lobitz, B.; Johan, S.; Brass, J.; Hinkley, E. The Ikhana unmanned airborne system (UAS) western states fire imaging missions: From concept to reality (2006–2010). Geocarto Int. 2011, 26, 85–101. [Google Scholar] [CrossRef]
  18. Hinkley, E.A.; Zajkowski, T. USDA forest service—NASA: Unmanned aerial systems demonstrations—Pushing the leading edge in fire mapping. Geocarto Int. 2011, 26, 103–111. [Google Scholar] [CrossRef]
  19. Yuan, C.; Zhang, Y.; Liu, Z. A survey on technologies for automatic forest fire monitoring, detection, and fighting using unmanned aerial vehicles and remote sensing techniques. Can. J. For. Res. 2015, 45, 783–792. [Google Scholar] [CrossRef]
  20. Skorput, P.; Mandzuka, S.; Vojvodic, H. The use of Unmanned Aerial Vehicles for forest fire monitoring. In Proceedings of the 2016 International Symposium ELMAR, Zadar, Croatia, 12–14 September 2016; pp. 93–96. [Google Scholar] [CrossRef]
  21. Restas, A. Forest Fire Management Supporting by UAV Based Air Reconnaissance Results of Szendro Fire Department, Hungary. In Proceedings of the 2006 First International Symposium on Environment Identities and Mediterranean Area, Corte-Ajaccio, France, 10–13 July 2006; pp. 73–77. [Google Scholar] [CrossRef]
  22. Laszlo, B.; Agoston, R.; Xu, Q. Conceptual approach of measuring the professional and economic effectiveness of drone applications supporting forest fire management. Procedia Eng. 2018, 211, 8–17. [Google Scholar] [CrossRef]
  23. Bailon-Ruiz, R.; Lacroix, S. Wildfire remote sensing with UAVs: A review from the autonomy point of view. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 412–420. [Google Scholar] [CrossRef]
  24. Twidwell, D.; Allen, C.R.; Detweiler, C.; Higgins, J.; Laney, C.; Elbaum, S. Smokey comes of age: Unmanned aerial systems for fire management. Front. Ecol. Environ. 2016, 14, 333–339. [Google Scholar] [CrossRef]
  25. Qin, H.; Cui, J.Q.; Li, J.; Bi, Y.; Lan, M.; Shan, M.; Liu, W.; Wang, K.; Lin, F.; Zhang, Y.F.; et al. Design and implementation of an unmanned aerial vehicle for autonomous firefighting missions. In Proceedings of the 2016 12th IEEE International Conference on Control and Automation (ICCA), Kathmandu, Nepal, 1–3 June 2016; pp. 62–67. [Google Scholar] [CrossRef]
  26. Aydin, B.; Selvi, E.; Tao, J.; Starek, M.J. Use of fire-extinguishing balls for a conceptual system of drone-assisted wildfire fighting. Drones 2019, 3, 17. [Google Scholar] [CrossRef] [Green Version]
  27. Johnson, K. DJI R&D Head Dreams of Drones Fighting Fires by the Thousands in ‘Aerial Aqueduct’. 2019. Available online: https://venturebeat.com/2019/04/20/dji-rd-head-dreams-of-drones-fighting-fires-by-the-thousands-in-aerial-aqueduct/ (accessed on 30 December 2019).
  28. Watts, A.; Ambrosia, V.; Hinkley, E. Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use. Remote Sens. 2012, 4, 1671–1692. [Google Scholar] [CrossRef] [Green Version]
  29. Abdullah, Q.A. GEOG 892 Geospatial Applications of Unmanned Aerial Systems (UAS): Classification of the Unmanned Aerial Systems. Available online: https://www.e-education.psu.edu/geog892/node/5 (accessed on 5 August 2020).
  30. Martínez-de Dios, J.; Merino, L.; Ollero, A.; Ribeiro, L.M.; Viegas, X. Multi-UAV experiments: Application to forest fires. In Multiple Heterogeneous Unmanned Aerial Vehicles; Springer: Berlin/Heidelberg, Germany, 2007; pp. 207–228. [Google Scholar] [CrossRef]
  31. Ghamry, K.; Kamel, M.; Zhang, Y. Multiple UAVs in forest fire fighting mission using particle swarm optimization. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 1404–1409. [Google Scholar] [CrossRef]
  32. Casbeer, D.; Kingston, D.; Beard, R.; McLain, T. Cooperative forest fire surveillance using a team of small unmanned air vehicles. Int. J. Syst. Sci. 2006, 37, 351–360. [Google Scholar] [CrossRef]
  33. Martins, A.; Almeida, J.; Almeida, C.; Figueiredo, A.; Santos, F.; Bento, D.; Silva, H.; Silva, E. Forest fire detection with a small fixed wing autonomous aerial vehicle. IFAC 2007, 40, 168–173. [Google Scholar] [CrossRef]
  34. Merino, L.; Caballero, F.; Martínez-de Dios, J.; Ollero, A. Cooperative fire detection using Unmanned Aerial Vehicles. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, Barcelona, Spain, 18–22 April 2005; pp. 1884–1889. [Google Scholar] [CrossRef]
  35. Merino, L.; Caballero, F.; Martínez-de Dios, J.; Ferruz, J.; Ollero, A. A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires. J. Field Robot. 2006, 23, 165–184. [Google Scholar] [CrossRef]
  36. Sujit, P.; Kingston, D.; Beard, R. Cooperative forest fire monitoring using multiple UAVs. In Proceedings of the 2007 46th IEEE Conference on Decision and Control, New Orleans, LA, USA, 12–14 December 2007; pp. 4875–4880. [Google Scholar] [CrossRef]
  37. Alexis, K.; Nikolakopoulos, G.; Tzes, A.; Dritsas, L. Coordination of helicopter UAVs for aerial forest-fire surveillance. In Applications of Intelligent Control to Engineering Systems; Springer: Dordrecht, The Netherlands, 2009; Volume 39, Chapter 7; pp. 169–193. [Google Scholar]
  38. Bradley, J.; Taylor, C. Georeferenced mosaics for tracking fires using unmanned miniature air vehicles. J. Aerosp. Comput. Inf. Commun. 2011, 8, 295–309. [Google Scholar] [CrossRef]
  39. Kumar, M.; Cohen, K.; Homchaudhuri, B. Cooperative control of multiple uninhabited aerial vehicles for monitoring and fighting wildfires. J. Aerosp. Comput. Inf. Commun. 2011, 8, 1–16. [Google Scholar] [CrossRef]
  40. Martínez-de Dios, J.; Merino, L.; Caballero, F.; Ollero, A. Automatic forest-fire measuring using ground stations and Unmanned Aerial Systems. Sensors 2011, 11, 6328–6353. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  41. Pastor, E.; Barrado, C.; Royo, P.; Santamaria, E.; Lopez, J.; Salami, E. Architecture for a helicopter-based unmanned aerial systems wildfire surveillance system. Geocarto Int. 2011, 26, 113–131. [Google Scholar] [CrossRef]
  42. Belbachir, A.; Escareno, J.; Rubio, E.; Sossa, H. Preliminary results on UAV-based forest fire localization based on decisional navigation. In Proceedings of the 2015 Workshop on Research, Education and Development of Unmanned Aerial Systems (RED-UAS), Cancún, Mexico, 23–25 November 2015; pp. 377–382. [Google Scholar] [CrossRef]
  43. Karma, S.; Zorba, E.; Pallis, G.C.; Statheropoulos, G.; Balta, I.; Mikedi, K.; Vamvakari, J.; Pappa, A.; Chalaris, M.; Xanthopoulos, G.; et al. Use of unmanned vehicles in search and rescue operations in forest fires: Advantages and limitations observed in a field trial. Int. J. Disaster Risk Reduct. 2015, 13, 307–312. [Google Scholar] [CrossRef]
  44. Merino, L.; Caballero, F.; Martínez-de Dios, J.R.; Maza, I.; Ollero, A. An unmanned aircraft system for automatic forest fire monitoring and measurement. J. Intell. Robot. Syst. 2012, 65, 533–548. [Google Scholar] [CrossRef]
  45. Merino, L.; Martínez-de Dios, J.; Ollero, A. Cooperative unmanned aerial systems for fire detection, Monitoring, and Extinguishing. In Handbook of Unmanned Aerial Vehicles; Springer: Dordrecht, The Netherlands, 2015; Chapter 112; pp. 2693–2722. [Google Scholar]
  46. Ghamry, K.; Zhang, Y. Cooperative control of multiple UAVs for forest fire monitoring and detection. In Proceedings of the 2016 12th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications (MESA), Auckland, New Zealand, 29–31 August 2016; pp. 1–6. [Google Scholar] [CrossRef]
  47. Ghamry, K.; Zhang, Y. Fault-tolerant cooperative control of multiple UAVs for forest fire detection and tracking mission. In Proceedings of the 2016 3rd Conference on Control and Fault-Tolerant Systems (SysTol), Barcelona, Spain, 7–9 September 2016; pp. 133–138. [Google Scholar] [CrossRef]
  48. Sun, H.; Song, G.; Wei, Z.; Zhang, Y.; Liu, S. Bilateral teleoperation of an unmanned aerial vehicle for forest fire detection. In Proceedings of the 2017 IEEE International Conference on Information and Automation (ICIA), Macau, China, 18–20 July 2017; pp. 586–591. [Google Scholar] [CrossRef]
  49. Yuan, C.; Liu, Z.; Zhang, Y. Fire detection using infrared images for UAV-based forest fire surveillance. In Proceedings of the 2017 International Conference on Unmanned Aircraft Systems (ICUAS), Miami, FL, USA, 13–16 June 2017; pp. 567–572. [Google Scholar] [CrossRef]
  50. Yuan, C.; Liu, Z.; Zhang, Y. UAV-based forest fire detection and tracking using image processing techniques. In Proceedings of the 2015 International Conference on Unmanned Aircraft Systems (ICUAS), Denver, CO, USA, 9–12 June 2015; pp. 639–643. [Google Scholar] [CrossRef]
  51. Yuan, C.; Liu, Z.; Zhang, Y. Vision-based forest fire detection in aerial images for firefighting using UAVs. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems, Arlington, VA, USA, 7–10 June 2016; pp. 1200–1205. [Google Scholar] [CrossRef]
  52. Yuan, C.; Ghamry, K.; Liu, Z.; Zhang, Y. Unmanned aerial vehicle based forest fire monitoring and detection using image processing technique. In Proceedings of the 2016 IEEE Chinese Guidance, Navigation and Control Conference (CGNCC), Nanjing, China, 12–14 August 2016; pp. 1870–1875. [Google Scholar] [CrossRef]
  53. Yuan, C.; Liu, Z.; Zhang, Y. Aerial images-based forest fire detection for firefighting using optical remote sensing techniques and unmanned aerial vehicles. J. Intell. Robot. Syst. 2017, 88, 635–654. [Google Scholar] [CrossRef]
  54. Lin, Z.; Liu, H.H.; Wotton, M. Kalman filter-based large-scale wildfire monitoring with a system of UAVs. IEEE Trans. Ind. Electron. 2018, 66, 606–615. [Google Scholar] [CrossRef]
  55. Wardihani, E.; Ramdhani, M.; Suharjono, A.; Setyawan, T.A.; Hidayat, S.S.; Helmy, S.W.; Triyono, E.; Saifullah, F. Real-time forest fire monitoring system using unmanned aerial vehicle. J. Eng. Sci. Technol. 2018, 13, 1587–1594. [Google Scholar]
  56. Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors 2018, 18, 712. [Google Scholar] [CrossRef] [Green Version]
  57. Pham, H.; La, H.; Feil-Seifer, D.; Deans, M. A distributed control framework for a team of unmanned aerial vehicles for dynamic wildfire tracking. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 6648–6653. [Google Scholar] [CrossRef] [Green Version]
  58. Pham, H.X.; La, H.M.; Feil-Seifer, D.; Deans, M.C. A distributed control framework of multiple unmanned aerial vehicles for dynamic wildfire tracking. IEEE Trans. Syst. Man Cybern. Syst. 2018, 50, 1537–1548. [Google Scholar] [CrossRef] [Green Version]
  59. Julian, K.D.; Kochenderfer, M.J. Distributed wildfire surveillance with autonomous aircraft using deep reinforcement learning. J. Guid. Control. Dyn. 2019, 42, 1768–1778. [Google Scholar] [CrossRef]
  60. Jiao, Z.; Zhang, Y.; Xin, J.; Mu, L.; Yi, Y.; Liu, H.; Liu, D. A deep learning based forest fire detection approach using UAV and YOLOv3. In Proceedings of the 2019 1st International Conference on Industrial Artificial Intelligence (IAI), Shenyang, China, 23–25 July 2019; pp. 1–5. [Google Scholar] [CrossRef]
  61. Jiao, Z.; Zhang, Y.; Mu, L.; Xin, J.; Jiao, S.; Liu, H.; Liu, D. A YOLOv3-based Learning Strategy for Real-time UAV-based Forest Fire Detection. In Proceedings of the 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 4963–4967. [Google Scholar] [CrossRef]
  62. Seraj, E.; Gombolay, M. Coordinated control of uavs for human-centered active sensing of wildfires. In Proceedings of the 2020 American Control Conference (ACC), Denver, CO, USA, 1–3 July 2020; pp. 1845–1852. [Google Scholar] [CrossRef]
  63. Lin, Z.; Liu, H. Enhanced cooperative filter for wildfire monitoring. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 3075–3080. [Google Scholar] [CrossRef]
  64. Allison, R.; Johnston, J.; Craig, G.; Jennings, S. Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors 2016, 16, 1310. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  65. Johnston, J.; Wooster, M.; Lynham, T. Experimental confirmation of the MWIR and LWIR grey body assumption for vegetation fire flame emissivity. Int. J. Wildland Fire 2014, 23, 463–479. [Google Scholar] [CrossRef]
  66. Ball, M. FLIR Unveils MWIR Thermal Camera Cores for Drone Applications. 2018. Available online: https://www.unmannedsystemstechnology.com/2018/12/new-mwir-thermal-camera-cores-launched-for-drone-applications/ (accessed on 30 December 2019).
  67. Esposito, F.; Rufino, G.; Moccia, A.; Donnarumma, P.; Esposito, M.; Magliulo, V. An integrated electro-optical payload system for forest fires monitoring from airborne platform. In Proceedings of the 2007 IEEE Aerospace Conference, Big Sky, MT, USA, 3–10 March 2007; pp. 1–13. [Google Scholar] [CrossRef]
  68. National Aeronautics and Space Administration. Ikhana UAV Gives NASA New Science and Technology Capabilities. 2007. Available online: https://www.nasa.gov/centers/dryden/news/NewsReleases/2007/07-12.html (accessed on 30 December 2019).
  69. Ghamry, K.; Kamel, M.; Zhang, Y. Cooperative forest monitoring and fire detection using a team of UAVs-UGVs. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems, Arlington, VA, USA, 7–10 June 2016; pp. 1206–1211. [Google Scholar] [CrossRef]
  70. Fairchild, M.D. Color Appearance Models, 3rd ed.; John Wiley & Sons Ltd.: Chichester, West Sussex, UK, 2013. [Google Scholar]
  71. Otsu, N. A Threshold Selection Method from Gray-Level Histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef] [Green Version]
  72. Lei, S.; Fangfei, S.; Teng, W.; Leping, B.; Xinguo, H. A new fire detection method based on the centroid variety of consecutive frames. In Proceedings of the 2017 2nd International Conference on Image, Vision and Computing (ICIVC), Chengdu, China, 2–4 June 2017; pp. 437–442. [Google Scholar] [CrossRef]
  73. Wang, T.; Shi, L.; Yuan, P.; Bu, L.; Hou, X. A new fire detection method based on flame color dispersion and similarity in consecutive frames. In Proceedings of the 2017 Chinese Automation Congress (CAC), Jinan, China, 20–22 October 2017; pp. 151–156. [Google Scholar] [CrossRef]
  74. Chou, K.; Prasad, M.; Gupta, D.; Sankar, S.; Xu, T.; Sundaram, S.; Lin, C.; Lin, W. Block-based feature extraction model for early fire detection. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), Hawaii, HI, USA, 27 November–1 December 2017; pp. 1–8. [Google Scholar] [CrossRef]
  75. Shi, L.; Long, F.; Zhan, Y.; Lin, C. Video-based fire detection with spatio-temporal SURF and color features. In Proceedings of the 2016 12th World Congress on Intelligent Control and Automation (WCICA), Guilin, China, 12–15 June 2016; pp. 258–262. [Google Scholar] [CrossRef]
  76. Abdullah, M.; Wijayanto, I.; Rusdinar, A. Position estimation and fire detection based on digital video color space for autonomous quadcopter using odroid XU4. In Proceedings of the 2016 International Conference on Control, Electronics, Renewable Energy and Communications (ICCEREC), Bandung, Indonesia, 13–15 September 2016; pp. 169–173. [Google Scholar] [CrossRef]
  77. Steffens, C.; Botelho, S.; Rodrigues, R. A texture driven approach for visible spectrum fire detection on mobile robots. In Proceedings of the 2016 XIII Latin American Robotics Symposium and IV Brazilian Robotics Symposium (LARS/SBR), Recife, Brazil, 8–12 October 2016; pp. 257–262. [Google Scholar] [CrossRef]
  78. Choi, J.; Choi, J.Y. Patch-based fire detection with online outlier learning. In Proceedings of the 2015 12th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Karlsruhe, Germany, 25–28 August 2015; pp. 1–6. [Google Scholar] [CrossRef]
  79. Poobalan, K.; Liew, S. Fire detection based on color filters and Bag-of-Features classification. In Proceedings of the 2015 IEEE Student Conference on Research and Development (SCOReD), Kuala Lumpur, Malaysia, 13–14 December 2015; pp. 389–392. [Google Scholar] [CrossRef] [Green Version]
  80. Zhang, H.; Zhang, N.; Xiao, N. Fire detection and identification method based on visual attention mechanism. Opt. Int. J. Light Electron Opt. 2015, 126, 5011–5018. [Google Scholar] [CrossRef]
  81. Toulouse, T.; Rossi, L.; Akhloufi, M.; Celik, T.; Maldague, X. Benchmarking of wildland fire colour segmentation algorithms. IET Image Process. 2015, 9, 1064–1072. [Google Scholar] [CrossRef] [Green Version]
  82. Verstockt, S.; Kypraios, I.; Potter, P.; Poppe, C.; Walle, R. Wavelet-based multi-modal fire detection. In Proceedings of the 19th European Signal Processing Conference, Barcelona, Spain, 29 August–2 September 2011; pp. 903–907. [Google Scholar]
  83. Kyrkou, C.; Theocharides, T. Deep-Learning-Based Aerial Image Classification for Emergency Response Applications Using Unmanned Aerial Vehicles. arXiv 2019, arXiv:1906.08716. [Google Scholar]
  84. Lee, W.; Kim, S.; Lee, Y.; Lee, H.; Choi, M. Deep neural networks for wild fire detection with unmanned aerial vehicle. In Proceedings of the 2017 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 8–10 January 2017; pp. 252–253. [Google Scholar] [CrossRef]
  85. Akhloufi, M.A.; Tokime, R.B.; Elassady, H. Wildland fires detection and segmentation using deep learning. In Proceedings of the SPIE 10649, Pattern Recognition and Tracking XXIX, International Society for Optics and Photonics, Orlando, FL, USA, 15–19 April 2018; Volume 10649. [Google Scholar] [CrossRef]
  86. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194. [Google Scholar] [CrossRef] [Green Version]
  87. Bruhn, A.; Weickert, J.; Schnörr, C. Lucas/Kanade meets Horn/Schunck: Combining local and global optic flow methods. Int. J. Comput. Vis. 2005, 61, 211–231. [Google Scholar] [CrossRef] [Green Version]
  88. Asatryan, D.; Hovsepyan, S. Method for fire and smoke detection in monitored forest areas. In Proceedings of the 2015 Computer Science and Information Technologies (CSIT), Yerevan, Armenia, 28 September–2 October 2015; pp. 77–81. [Google Scholar] [CrossRef]
  89. Yuan, C.; Liu, Z.; Zhang, Y. Learning-based smoke detection for unmanned aerial vehicles applied to forest fire surveillance. J. Intell. Robot. Syst. 2018, 93, 337–349. [Google Scholar] [CrossRef]
  90. Duong, H.; Tinh, D.T. An efficient method for vision-based fire detection using SVM classification. In Proceedings of the 2013 International Conference on Soft Computing and Pattern Recognition (SoCPaR), Hanoi, Vietnam, 15–18 December 2013; pp. 190–195. [Google Scholar] [CrossRef]
  91. Zhou, Q.; Yang, X.; Bu, L. Analysis of shape features of flame and interference image in video fire detection. In Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 27–29 November 2015; pp. 633–637. [Google Scholar] [CrossRef]
  92. Chen, X.; Zhang, X.; Zhang, Q. Fire alarm using multi-rules detection and texture features classification in video surveillance. In Proceedings of the 2014 7th International Conference on Intelligent Computation Technology and Automation, Changsha, China, 25–26 October 2014; pp. 264–267. [Google Scholar] [CrossRef]
  93. Chino, D.; Avalhais, L.; Rodrigues, J.; Traina, A. BoWFire: Detection of fire in still images by integrating pixel color and texture analysis. In Proceedings of the 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, Salvador, Brazil, 26–29 August 2015; pp. 95–102. [Google Scholar] [CrossRef] [Green Version]
  94. Chi, R.; Lu, Z.M.; Ji, Q.G. Real-time multi-feature based fire flame detection in video. IET Image Process. 2017, 11, 31–37. [Google Scholar] [CrossRef]
  95. Favorskaya, M.; Pyataeva, A.; Popov, A. Verification of smoke detection in video sequences based on spatio-temporal local binary patterns. Procedia Comput. Sci. 2015, 60, 671–680. [Google Scholar] [CrossRef] [Green Version]
  96. Bay, H.; Ess, A.; Tuytelaars, T.; Gool, L.V. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
  97. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
  98. Avalhais, L.; Rodrigues, J.; Traina, A. Fire detection on unconstrained videos using color-aware spatial modeling and motion flow. In Proceedings of the 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI), San Jose, CA, USA, 6–8 November 2016; pp. 913–920. [Google Scholar] [CrossRef]
  99. Li, K.; Yang, Y. Fire detection algorithm based on CLG-TV optical flow model. In Proceedings of the 2016 2nd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 14–17 October 2016; pp. 1381–1385. [Google Scholar] [CrossRef]
  100. Harris, C.; Stephens, M. A combined corner and edge detector. In Proceedings of the Fourth Alvey Vision Conference, Manchester, UK, 31 August–2 September 1988; pp. 147–152. [Google Scholar]
  101. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105. [Google Scholar] [CrossRef]
  102. Redmon, J.; Farhadi, A. Yolov3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  103. Ko, B.; Jung, J.; Nam, J. Fire detection and 3D surface reconstruction based on stereoscopic pictures and probabilistic fuzzy logic. Fire Saf. J. 2014, 68, 61–70. [Google Scholar] [CrossRef]
  104. Foggia, P.; Saggese, A.; Vento, M. Real-time fire detection for video-surveillance applications using a combination of experts based on color, shape, and motion. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1545–1556. [Google Scholar] [CrossRef]
  105. Buemi, A.; Giacalone, D.; Naccari, F.; Spampinato, G. Efficient fire detection using fuzzy logic. In Proceedings of the 2016 IEEE 6th International Conference on Consumer Electronics—Berlin (ICCE-Berlin), Berlin, Germany, 5–7 September 2016; pp. 237–240. [Google Scholar] [CrossRef]
  106. Cai, M.; Lu, X.; Wu, X.; Feng, Y. Intelligent video analysis-based forest fires smoke detection algorithms. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), Changsha, China, 13–15 August 2016; pp. 1504–1508. [Google Scholar] [CrossRef]
  107. Zhao, Y.; Tang, G.; Xu, M. Hierarchical detection of wildfire flame video from pixel level to semantic level. Expert Syst. Appl. 2015, 42, 4097–4104. [Google Scholar] [CrossRef]
  108. Stadler, A.; Windisch, T.; Diepold, K. Comparison of intensity flickering features for video based flame detection algorithms. Fire Saf. J. 2014, 66, 1–7. [Google Scholar] [CrossRef]
  109. Barmpoutis, P.; Dimitropoulos, K.; Grammalidis, N. Real time video fire detection using spatio-temporal consistency energy. In Proceedings of the 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance, Krakow, Poland, 27–30 August 2013; pp. 365–370. [Google Scholar] [CrossRef]
  110. Wang, H.; Finn, A.; Erdinc, O.; Vincitore, A. Spatial-temporal structural and dynamics features for Video Fire Detection. In Proceedings of the 2013 IEEE Workshop on Applications of Computer Vision (WACV), Clearwater, FL, USA, 15–17 January 2013; pp. 513–519. [Google Scholar] [CrossRef]
  111. Kim, S.; Kim, T. Fire detection using the brownian correlation descriptor. In Proceedings of the 2016 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Seoul, Korea, 26–28 October 2016; pp. 1–4. [Google Scholar] [CrossRef]
  112. Székely, G.; Rizzo, M. Brownian distance covariance. Ann. Appl. Stat. 2009, 3, 1236–1265. [Google Scholar] [CrossRef] [Green Version]
  113. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571. [Google Scholar] [CrossRef]
  114. Steffens, C.; Rodrigues, R.; Botelho, S. An unconstrained dataset for non-stationary video based fire detection. In Proceedings of the 2015 12th Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR), Uberlandia, Brazil, 29–31 October 2015; pp. 25–30. [Google Scholar] [CrossRef]
  115. Pignaton de Freitas, E.; da Costa, L.A.L.F.; Felipe Emygdio de Melo, C.; Basso, M.; Rodrigues Vizzotto, M.; Schein Cavalheiro Corrêa, M.; Dapper e Silva, T. Design, Implementation and Validation of a Multipurpose Localization Service for Cooperative Multi-UAV Systems. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 295–302. [Google Scholar] [CrossRef]
  116. Phan, C.; Liu, H. A cooperative UAV/UGV platform for wildfire detection and fighting. In Proceedings of the 2008 Asia Simulation Conference—7th International Conference on System Simulation and Scientific Computing, Beijing, China, 10–12 October 2008; pp. 494–498. [Google Scholar] [CrossRef]
  117. Viguria, A.; Maza, I.; Ollero, A. Distributed Service-Based Cooperation in Aerial/Ground Robot Teams Applied to Fire Detection and Extinguishing Missions. Adv. Robot. 2010, 24, 1–23. [Google Scholar] [CrossRef] [Green Version]
  118. Akhloufi, M.A.; Castro, N.A.; Couturier, A. UAVs for wildland fires. In Proceedings of the SPIE 10643, Autonomous Systems: Sensors, Vehicles, Security, and the Internet of Everything. International Society for Optics and Photonics, Orlando, FL, USA, 15–19 April 2018; Volume 10643. [Google Scholar] [CrossRef]
  119. Akhloufi, M.A.; Toulouse, T.; Rossi, L. Multiple spectrum vision for wildland fires. In Proceedings of the SPIE 10661, Thermosense: Thermal Infrared Applications XL. International Society for Optics and Photonics, Orlando, FL, USA, 15–19 April 2018; Volume 10661, p. 1066105. [Google Scholar] [CrossRef]
  120. Toulouse, T.; Rossi, L.; Akhloufi, M.A.; Pieri, A.; Maldague, X. A multimodal 3D framework for fire characteristics estimation. Meas. Sci. Technol. 2018, 29, 025404. [Google Scholar] [CrossRef]
  121. Akhloufi, M.; Toulouse, T.; Rossi, L.; Maldague, X. Multimodal three-dimensional vision for wildland fires detection and analysis. In Proceedings of the 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), Montreal, QC, Canada, 28 November–1 December 2017; pp. 1–6. [Google Scholar] [CrossRef]
  122. Akhloufi, M.; Toulouse, T.; Rossi, L.; Maldague, X. Three-dimensional infrared-visible framework for wildland fires. In Proceedings of the 14th International Workshop on Advanced Infrared Technology and Applications (AITA), Quebec City, QC, Canada, 27–29 September 2017; pp. 65–69. [Google Scholar]
Figure 1. Characteristics of the reviewed works.
Figure 2. Unmanned aerial vehicle-unmanned ground vehicle (UAV-UGV) multimodal framework for wildland fires assistance.
Table 1. Reviewed works’ characteristics.
Authors | Year | Validation
Casbeer et al. [32] | 2006 | Simulation
Martins et al. [33] | 2007 | Simulation
Merino et al. [30,34,35] | 2007 | Practical
Sujit et al. [36] | 2007 | Simulation
Alexis et al. [37] | 2009 | Simulation
Ambrosia et al. [17] | 2011 | Practical
Bradley and Taylor [38] | 2011 | Near practical
Hinkley and Zajkowski [18] | 2011 | Practical
Kumar et al. [39] | 2011 | Simulation
Martínez-de Dios et al. [40] | 2011 | Practical
Pastor et al. [41] | 2011 | None
Belbachir et al. [42] | 2015 | Simulation
Karma et al. [43] | 2015 | Practical
Merino et al. [44,45] | 2015 | Practical
Ghamry and Zhang [46,47] | 2016 | Simulation
Ghamry et al. [31] | 2017 | Simulation
Sun et al. [48] | 2017 | Near practical
Yuan et al. [49] | 2017 | Simulation
Yuan et al. [50,51,52,53] | 2017 | Near practical
Lin et al. [54] | 2018 | Simulation
Wardihani et al. [55] | 2018 | Practical
Zhao et al. [56] | 2018 | Simulation
Pham et al. [57,58] | 2018 | Simulation
Julian and Kochenderfer [59] | 2019 | Simulation
Aydin et al. [26] | 2019 | Near practical
Jiao et al. [60,61] | 2020 | Near practical
Seraj and Gombolay [62] | 2020 | Simulation
Table 2. Reviewed works’ sensors and performed tasks.
Authors | Sensing Mode | Tasks
Casbeer et al. [32] | IR | Monitoring
Martins et al. [33] | NIR, Visual | Detection
Merino et al. [30,34,35] | IR, Visual | Detection, Monitoring
Sujit et al. [36] | Not specified | Monitoring
Alexis et al. [37] | Not specified | Monitoring
Ambrosia et al. [17] | Multispectral | Detection, Diagnosis
Bradley and Taylor [38] | IR | Detection
Hinkley and Zajkowski [18] | IR | Monitoring
Kumar et al. [39] | IR | Monitoring, Fighting
Martínez-de Dios et al. [40] | IR, Visual | Monitoring, Diagnosis
Pastor et al. [41] | IR, Visual | Detection, Monitoring
Belbachir et al. [42] | Temperature | Detection
Karma et al. [43] | Not specified | Monitoring
Merino et al. [44,45] | IR, Visual | Detection, Monitoring
Ghamry and Zhang [46,47] | Not specified | Detection, Monitoring
Ghamry et al. [31] | Not specified | Fighting
Sun et al. [48] | Visual | Detection, Monitoring
Yuan et al. [49] | IR | Detection
Yuan et al. [50,51,52,53] | Visual | Detection
Lin et al. [54] | Temperature | Monitoring
Wardihani et al. [55] | Temperature | Detection
Zhao et al. [56] | Visual | Detection
Pham et al. [57,58] | IR, Visual | Monitoring
Julian and Kochenderfer [59] | Not specified | Monitoring
Aydin et al. [26] | IR, Visual | Fighting
Jiao et al. [60,61] | Visual | Detection
Seraj and Gombolay [62] | Visual | Monitoring
Table 3. Reviewed works’ system architecture.
Authors | Autonomy | Organization | Coordination
Casbeer et al. [32] | Autonomous | Multiple UAV | Decentralized
Martins et al. [33] | Autonomous | Single UAV | None
Merino et al. [30,34,35] | Autonomous | Multiple UAV | Centralized
Sujit et al. [36] | Autonomous | Multiple UAV | Decentralized
Alexis et al. [37] | Autonomous | Multiple UAV | Decentralized
Ambrosia et al. [17] | Piloted | Single UAV | None
Bradley and Taylor [38] | Piloted | Single UAV | None
Hinkley and Zajkowski [18] | Piloted | Single UAV | None
Kumar et al. [39] | Autonomous | Multiple UAV | Decentralized
Martínez-de Dios et al. [40] | Piloted | Single UAV | None
Pastor et al. [41] | Piloted | Single UAV | None
Belbachir et al. [42] | Autonomous | Multiple UAV | Centralized
Karma et al. [43] | Piloted | Multiple UAV and UGV | Centralized
Merino et al. [44,45] | Autonomous | Multiple UAV | Centralized
Ghamry and Zhang [46,47] | Autonomous | Multiple UAV | Centralized
Ghamry et al. [31] | Autonomous | Multiple UAV | Decentralized
Sun et al. [48] | Piloted | Single UAV | None
Yuan et al. [49] | Not specified | Single UAV | None
Yuan et al. [50,51,52,53] | Not specified | Single UAV | None
Lin et al. [54,63] | Autonomous | Multiple UAV | Centralized
Wardihani et al. [55] | Near autonomous | Single UAV | None
Zhao et al. [56] | Piloted | Single UAV | None
Pham et al. [57,58] | Autonomous | Multiple UAV | Decentralized
Julian and Kochenderfer [59] | Autonomous | Multiple UAV | Decentralized
Aydin et al. [26] | Autonomous | Multiple UAV | Centralized
Jiao et al. [60,61] | Not specified | Single UAV | None
Seraj and Gombolay [62] | Autonomous | Multiple UAV | Decentralized
Table 4. Visual and IR electromagnetic spectrum.
Spectral Band | Wavelength (µm)
Visible | 0.4–0.75
Near Infrared (NIR) | 0.75–1.4
Short Wave IR (SWIR) | 1.4–3
Mid Wave IR (MWIR) | 3–8
Long Wave IR (LWIR) | 8–15
Table 5. Image input and extracted features.
Input | Statistical Measures | Spatial Features | Temporal Features
Color, IR and radiance images | Mean value, mean difference, color histogram, variance and entropy. | LBP, SURF, shape, convex hull to the perimeter rate, bounding box to the perimeter rate. | Shape and intensity variations, centroid displacement, ROI overlapping, fire to non-fire transitions, movement gradient histograms and Brownian correlation.
Wavelet transform | Mean energy content. | Mean blob energy content. | Diagonal filter difference. High-pass filter zero crossing of wavelet transform on area variation.
Table 6. Fire datasets.
Dataset | Description | Wildland Fires | Aerial Footage | Annotations
FURG [114] | 14,397 fire frames in 24 videos from static and moving cameras. | No | No | Fire bounding boxes
BowFire [93] | 186 fire and non-fire images. | No | No | Fire masks
Corsican Fire DB [86] | 500 RGB and 100 multimodal images. | All | Few | Fire masks
VisiFire [104] | 14 fire videos, 15 smoke videos, 2 videos containing fire-like objects. | 17 videos | 7 videos | No
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
