Article

Integrated Air Quality Monitoring and Alert System Based on Two Image Analysis Techniques for Reportable Fire Events

Chen-Jui Liang, Sheng-Hua Lu, Jeng-Jong Liang, Feng-Cheng Lin and Pei-Rong Yu

1 International School of Technology and Management, Feng Chia University, Taichung 40724, Taiwan
2 Department of Photonics, Feng Chia University, Taichung 40724, Taiwan
3 Department of Environmental Engineering and Science, Feng Chia University, Taichung 40724, Taiwan
4 Department of Information Engineering and Computer Science, Feng Chia University, Taichung 40724, Taiwan
* Author to whom correspondence should be addressed.
Submission received: 21 December 2020 / Revised: 12 January 2021 / Accepted: 12 January 2021 / Published: 15 January 2021
(This article belongs to the Section Air Quality)

Abstract

In this paper, a new monitoring and alert system for air pollution emergencies is proposed. The proposed system performs air quality monitoring and provides real-time alerts for individual events. The system uses two image analysis techniques, namely pixel recognition and haze extraction, for video fire smoke detection. The image analysis process is divided into daytime and nighttime analyses, which involve red-green-blue (RGB) and grayscale images, respectively. The images analyzed in this study were captured by the video cameras of air quality monitoring stations. Seven fire accidents around a selected industrial park and downtown area were analyzed in detail. Among these accidents, three occurred at daytime, one occurred over 7 days, and three occurred at nighttime. Alert models based on pixel recognition and haze extraction were established. These models incorporated the formulas of the haze equivalent (HT(t)) and separated pixels (XT(t)), as well as the threshold equations of the haze equivalent (∇H) and separated pixels (∇X). An alert signal is sent to the administrator when the rise ratio of HT(t) exceeds ∇H or the rise ratio of XT(t) exceeds ∇X. The obtained results indicate that a real-time observation and alert system based on two image analysis techniques can be designed for air quality monitoring without expensive hardware devices. This alert system can be used by administrators to understand the course of a reportable event, especially as evidence for the appraisal of fire accidents. It is recommended that this system be connected to the fire brigades in order to obtain early fire information.

1. Introduction

The production, transportation, and storage processes in chemical industries are complex; thus, the possibility of severe accidents is relatively high in these industries [1]. Most accident escalations in the chemical industry, especially those involving fire events that are not handled in a timely manner, are caused by the domino effect [2]. Therefore, the managers of factories and industrial areas must detect the occurrence of an event early and address it quickly. Several studies have focused on risk estimation and safety management [3,4,5,6]. Models have been developed to estimate safety-critical scenarios [7], the population vulnerability and risk level of an industrial site [8], and the impact factors of accidents [9].
Video smoke detection methods are widely used in fire warning systems [10,11,12,13,14]. Gaur et al. [15] and Umar et al. [16] reviewed various fire sensing technologies to present an overview of state-of-the-art practices; they also proposed modified concepts for fire sensing and control systems. Three color spaces, namely the RGB, hue–saturation–intensity (HSI), and YCbCr color spaces, are commonly used in fire detection approaches. Hasan and Razzak [17] designed a fire detection approach based on the RGB color space for home video surveillance to warn people dynamically. Horng and Peng [18] developed a fire-flame color feature model based on the HSI color space to extract similar fire colors and color shifts. Çelik et al. [19] presented an image processing approach based on the YCbCr color space for detecting fire and smoke; in their approach, color information is combined with motion analysis. Chowdhury et al. [20] proposed a hybrid fire detection model comprising vision and smoke sensors to detect environmental smoke and gas.
Numerous researchers have begun to study how digital imaging technology can be used to analyze the relationship between fine particles and visible haze (or visibility). Liu et al. [21] demonstrated that smartphone cameras can be used to monitor fine-grained PM2.5 in participatory sensing by combining their haze model with a deep learning-based method. Liu et al. [22] reported a method based on outdoor imaging to estimate particulate matter concentration. Their method uses six image features, together with the position of the sun, date, time, geographic information, and weather conditions, to predict the PM2.5 index. The results of Liu et al. indicated that different features have different significance in PM2.5 prediction. Zhan et al. [23] proposed a no-reference image quality assessment method with a database comprising images of different haze levels. The results obtained using their method were consistent with those obtained through subjective evaluation. Feng et al. [24] proposed a PM2.5 estimation method based on the random forest model; this method requires meteorological data, traffic data, records from monitoring sites, information regarding points of interest, and photographs. Wang et al. [25] processed color images obtained with a general camera to estimate the real-time particulate mass concentration. Their method exhibited a correlation coefficient of 0.8219 and a mean square error of 51.2324 μg m−3; moreover, the precision and recall of the PM2.5 estimations were 0.875 and 0.872, respectively. Pan et al. [26] described the AirTick mobile app, which can turn any camera-enabled smartphone into an air quality sensor. The average accuracies of AirTick for daytime and nighttime operation were 87% and 75%, respectively. Image-based techniques for evaluating air quality can be divided into two categories: (1) nonmodel-based and (2) model-based techniques. Nonmodel-based techniques mainly extract characteristic values of images, such as the spatial contrast [21], image entropy [22], root mean square of an image [22], and HSI of the sky [24], according to the image blurring phenomenon. Model-based techniques are based on an atmospheric scattering model [24,25,26]; thus, a 2D extinction coefficient profile can be obtained.
Most air quality monitoring stations (AQMSs) in Taiwan, especially industrial AQMSs, are equipped with surveillance cameras. Currently, these cameras serve only monitoring and recording functions. Real-time monitoring data and images are transmitted to environmental protection units and their websites through Internet of Things technology. This study assessed the monitoring performance achieved when integrating two image analysis techniques to develop a real-time observation and alert system for reportable events. Seven fire accidents around a selected industrial park and downtown area were analyzed in detail. The image analysis procedure was divided into daytime and nighttime analyses, which were based on images with different color scales. The proposed system can be used to perform damage assessment and chronological reconstruction of a reportable event.

2. Materials and Methodology

2.1. Data and Image Sources

Images and air quality monitoring data from May 2018 to March 2019 were collected from AQMSs of the Environmental Protection Administration (EPA), Taiwan. The images were obtained from the videos captured by the surveillance cameras of the AQMSs; one image was collected every 10 min. The PM2.5 monitoring dataset comprised hourly values because the beta attenuation mass monitoring method, which has a time resolution of 1 h, was used in accordance with the US EPA PM2.5 Federal Equivalent Method.

2.2. Pixel Recognition

Three color spaces, namely the RGB, HSI, and YCbCr color spaces, are used in fire detection approaches. The RGB color space is used in the simplest and fastest image processing methods for fire detection; therefore, a fire detection approach based on the RGB color space was adopted in this study. The decision formulas of smoke or fire recognition for a pixel X at point (i, j) can be represented as follows [27]:
$$M_h = \max\{R(i,j),\, G(i,j),\, B(i,j)\} \tag{1}$$
$$M_l = \min\{R(i,j),\, G(i,j),\, B(i,j)\} \tag{2}$$
$$I_{\mathrm{int}} = \tfrac{1}{3}\left[R(i,j) + G(i,j) + B(i,j)\right] \tag{3}$$
$$X(i,j) = \begin{cases} \text{a smoke or flame pixel}, & \text{if } M_h - M_l < T \text{ and } K_l \le I_{\mathrm{int}} \le K_h\\ \text{not a smoke or flame pixel}, & \text{otherwise} \end{cases} \tag{4}$$
where T is a global threshold value (a recommended range is 15–25), Kl and Kh are thresholds used to determine the range of smoke or flame pixels, and Iint is the pixel intensity defined in Equation (3). Table 1 presents the threshold ranges selected in this study for dark gray smoke, light gray smoke, yellow flame, and orange flame pixels.
Because the observation distances between the cameras and the industrial parks were generally long, the pixels of the four channels were merged to improve the observation sensitivity. The merging formula for smoke and fire pixels X(i, j, t) is given as follows:
$$X_T(t) = \sum\bigl\{X(i,j,t)_D,\, X(i,j,t)_L,\, X(i,j,t)_Y,\, X(i,j,t)_O\bigr\} \;\Big|\; K_l \le X(i,j,t) \le K_h \tag{5}$$
where XT(t) is the total amount of separated pixels at time t, and the subscripts D, L, Y, and O denote the ranges of dark gray smoke, light gray smoke, yellow flame, and orange flame pixels, respectively. In fire smoke detection, the alert threshold for identifying fire accidents is denoted ∇X. Figure 1 illustrates examples of images depicting and not depicting fire (Figure 1a,c, respectively), together with the color separation of these images; a white background color is used for easy viewing. The parameter XT(t) in Figure 1b acts as a baseline for checking fire occurrences.
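To illustrate Equations (1)–(5), the following minimal NumPy sketch counts the separated pixels XT(t) in a single frame. It is not the authors' implementation: the function and channel names are illustrative, Equation (5) is interpreted as a union of per-channel range tests, and the achromaticity test of Equation (4) is applied only to the two gray smoke channels (the flame ranges in Table 1 are strongly chromatic); these are our assumptions.

```python
import numpy as np

# Per-channel (R, G, B) threshold ranges K_l to K_h from Table 1
# (kept as published, even where an upper bound exceeds the 8-bit maximum).
CHANNEL_RANGES = {
    "dark_gray_smoke":  ((80, 100), (80, 100), (80, 100)),
    "light_gray_smoke": ((125, 145), (125, 145), (125, 145)),
    "yellow_flame":     ((240, 260), (240, 260), (130, 150)),
    "orange_flame":     ((210, 230), (130, 150), (80, 100)),
}
T_GLOBAL = 20  # global threshold T; the recommended range is 15-25

def separated_pixels(rgb):
    """Return X_T(t), the total number of smoke/flame pixels in one frame."""
    rgb = rgb.astype(np.int32)
    m_h = rgb.max(axis=2)                # Eq. (1)
    m_l = rgb.min(axis=2)                # Eq. (2)
    near_gray = (m_h - m_l) < T_GLOBAL   # achromaticity test of Eq. (4)

    merged = np.zeros(rgb.shape[:2], dtype=bool)
    for name, ranges in CHANNEL_RANGES.items():
        in_range = np.ones(rgb.shape[:2], dtype=bool)
        for c, (k_l, k_h) in enumerate(ranges):
            in_range &= (rgb[..., c] >= k_l) & (rgb[..., c] <= k_h)
        if "smoke" in name:              # gray smoke must also be near-achromatic
            in_range &= near_gray
        merged |= in_range               # union over the D, L, Y, O channels, Eq. (5)
    return int(merged.sum())
```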

2.3. Haze Extraction

The dark channel prior method can be used to remove haze from a single image according to observations of haze-free images [28]. The recorded image intensity can be divided into two parts: light transmitted from the scene that penetrates the air and light scattered by the fine particles in the air. Light is scattered through the Rayleigh and Mie scattering effects [29], which apply to particle sizes less than and greater than the wavelength of light, respectively. Assuming that the atmosphere is a homogeneous medium, the transmittance T in the air can be expressed using Equation (6) according to the Beer–Lambert law [22,30]:
$$T(i,j) = e^{-\beta \times d(i,j)} \tag{6}$$
where β is the scattering coefficient and d(i, j) is the scene depth at pixel coordinates (i, j) [24]. For the deflection of light by aerosols in the atmosphere, Fattal [30] presented an image formation model, which can be expressed as follows:
$$I(i,j) = J(i,j)\,T(i,j) + A\bigl[1 - T(i,j)\bigr] \tag{7}$$
where I(i, j), J(i, j), and A are the input image, scene radiance, and atmospheric light vector, respectively. For a single image, the A value can be obtained using the automatic method by Sulami et al. [32].
Two types of dehazing techniques exist: the closed-form solution method [31] and the dark channel prior method [28]. The dark channel prior technique is more efficient than the closed-form solution method; therefore, it has been adopted in most dehazing studies. In this technique, the RGB image of most outdoor scenes is subdivided into multiple small areas when the weather is clear and haze-free. In each small local area Ω(i, j), at least one pixel can be found whose (r, g, b) channel values approach zero:
$$J^{\mathrm{dark}}(i,j) = \min_{y \in \Omega(i,j)}\Bigl(\min_{c \in \{r,g,b\}} J^{c}(y)\Bigr) \to 0 \tag{8}$$
where Ω(i, j) is a local patch centered at (i, j) and Jc is color channel c of the RGB image. The transmission within a small local patch is assumed to be constant. When the minimum operators in Equation (8) are applied to Equation (7), the term containing the scene radiance approaches zero and can be ignored. Thus, the estimated transmittance distribution of the atmosphere is calculated as follows [28]:
$$T(i,j) = 1 - \min_{y \in \Omega(i,j)}\Bigl(\min_{c} \frac{I^{c}(y)}{A^{c}}\Bigr) \tag{9}$$
where Ac is the global atmospheric light, which is determined from the brightest 0.1% of the pixels in the dark channel [24]. The resolution of the transmittance distribution is low; however, this problem can be overcome using the soft matting method or a guided image filter. Finally, the transmittance T is converted using the Beer–Lambert law to obtain the product of the extinction coefficient β and the scene depth d:
$$\beta \times d(i,j) = -\ln\bigl[T(i,j)\bigr] \tag{10}$$
Figure 2 displays the procedure and result of processing an air pollution image by using the dark channel prior technique. In this study, the total amount of βd at time t is defined as the haze equivalent HT(t). In haze extraction, the alert threshold for identifying fire accidents is denoted ∇H.
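The following sketch outlines the haze-equivalent computation of Equations (8)–(10). It is a minimal sketch, not the authors' implementation: the soft matting/guided filter refinement step is omitted, SciPy's minimum filter is assumed for the patch-wise minima, and the estimation of A as the mean color of the brightest 0.1% of dark-channel pixels and the clipping floor on T are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def haze_equivalent(rgb, patch=15, a_frac=0.001):
    """Estimate H_T(t), the frame total of beta*d (Eqs. 8-10).

    rgb: H x W x 3 float array scaled to [0, 1].
    """
    h, w, _ = rgb.shape
    # Dark channel: min over color channels, then over a local patch (Eq. 8)
    dark = minimum_filter(rgb.min(axis=2), size=patch)
    # Global atmospheric light A^c from the brightest 0.1% dark-channel pixels
    n_top = max(1, int(a_frac * h * w))
    top = np.argsort(dark.ravel())[-n_top:]
    A = rgb.reshape(-1, 3)[top].mean(axis=0)
    # Estimated transmittance distribution (Eq. 9)
    t_est = 1.0 - minimum_filter((rgb / A).min(axis=2), size=patch)
    t_est = np.clip(t_est, 1e-3, 1.0)   # keep the logarithm finite
    beta_d = -np.log(t_est)             # Beer-Lambert inversion, Eq. (10)
    return float(beta_d.sum())          # H_T(t): total beta*d at time t
```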

2.4. Contaminated Parcel Forward Trajectory

Mesoscale and macroscale trajectory models, such as the HYSPLIT trajectory model, are unsuitable for illustrating contaminated parcel transmission in a microscale regional accident. In this study, the transmission path of a contaminated plume from an accident site was simulated using the mathematical formula of the forward trajectory. When a pollutant drifts passively with the wind, the trajectory is the integration of the pollutant position vector in space and time. The forward trajectory calculation obtains the next position of the pollutant from the average of the wind speeds at the current position P and the estimated forward position P′. The forward trajectory is calculated using the following equations:
$$P(x', y', t + \Delta t) = P(x, y, t) + \frac{W_s(P, t) + W_s(P', t + \Delta t)}{2}\,\Delta t \tag{11}$$
$$\text{First quadrant:}\quad \theta = (90^\circ - W_d) \times \frac{\pi}{180^\circ}, \qquad x' = x - a, \quad y' = y - b \tag{12}$$
$$\text{Second quadrant:}\quad \theta = (180^\circ - W_d) \times \frac{\pi}{180^\circ}, \qquad x' = x - a, \quad y' = y + b \tag{13}$$
$$\text{Third quadrant:}\quad \theta = (270^\circ - W_d) \times \frac{\pi}{180^\circ}, \qquad x' = x + a, \quad y' = y + b \tag{14}$$
$$\text{Fourth quadrant:}\quad \theta = (360^\circ - W_d) \times \frac{\pi}{180^\circ}, \qquad x' = x + a, \quad y' = y - b \tag{15}$$
$$\text{where}\quad a = \frac{W_s(P, t) + W_s(P', t + \Delta t)}{2}\,\Delta t \csc\theta \tag{16}$$
$$\text{and}\quad b = \frac{W_s(P, t) + W_s(P', t + \Delta t)}{2}\,\Delta t \cos\theta \tag{17}$$
where Ws and Wd are the wind speed (km h−1) and wind direction (°), respectively, and Δt is the integration time step (h). When the path of the forward trajectory is determined, the possible downwind positions on the path can be obtained. Data on Ws and Wd were obtained from the AQMS nearest to the accident site. In this study, the selected time interval between two step positions was 10 min for the calculation of the wind trajectory.
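A compact sketch of one forward-trajectory step (Equation (11)) is shown below. It condenses the quadrant decomposition of Equations (12)–(17) into a single signed displacement, assuming the usual meteorological convention that Wd is the direction the wind blows from, measured clockwise from north; the function and variable names are illustrative, not from the original study.

```python
import math

def forward_step(x, y, ws_now, ws_next, wd_deg, dt_h):
    """One forward-trajectory step (Eq. 11).

    ws_now / ws_next: wind speeds (km/h) at P and at the estimated
    forward position P'; wd_deg: wind direction (deg, blowing FROM);
    dt_h: step length (h); x/y: position (km, east/north).
    """
    step_km = 0.5 * (ws_now + ws_next) * dt_h   # averaged speed over the step
    theta = math.radians(wd_deg)
    # Downwind displacement: opposite to the direction the wind comes from
    dx = -step_km * math.sin(theta)
    dy = -step_km * math.cos(theta)
    return x + dx, y + dy

# Example: 10-min steps (dt = 1/6 h) chained along hypothetical wind records
pos = (0.0, 0.0)
for ws_now, ws_next, wd in [(12.0, 10.0, 225.0), (10.0, 8.0, 230.0)]:
    pos = forward_step(*pos, ws_now, ws_next, wd, dt_h=1 / 6)
```

For Wd = 225° (wind from the southwest, the third quadrant), the step yields positive dx and dy, i.e., the parcel drifts northeast, consistent with Equation (14).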

2.5. Selected Industrial Parks

Figure 3 depicts the sites of the AQMSs of the Taiwan EPA around the four selected industrial parks, namely the Linhai, Linyuan, Dafa, and Qiaotou Industrial Parks, which are located within Kaohsiung City (on the south coast of Taiwan). The Linhai Industrial Park is a comprehensive industrial zone with 493 factories belonging to over two dozen industry types in an area of 15.60 km2. The Linyuan Industrial Park has 27 factories, mostly related to petrochemical processes, in an area of 4.03 km2. The Dafa Industrial Park is a comprehensive industrial zone with 536 small and medium-sized factories in an area of 3.91 km2. Four AQMSs, namely the Xiaogang, Daliao, Linyuan, and Qiaotou AQMSs, are located near these industrial parks, and each is equipped with a surveillance camera. The camera field of view (FoV) at each AQMS is displayed in Figure 3, and aerial photos of the Linhai, Linyuan, and Dafa Industrial Parks are shown in the insets. The four industrial parks were selected because of the multiple fire incidents that occurred in them in Kaohsiung City from May 2018 to March 2019.
The cameras mounted at all the aforementioned AQMSs except the Qiaotou AQMS (Figure 4) can capture images of their adjacent industrial areas. The red line in Figure 4b is the skyline for ambient haze extraction, which is determined to avoid the influence of white smoke in the estimation of HT(t). The Qiaotou Industrial Park is a small traditional industrial settlement with nine factories in an area with no obvious boundaries; therefore, the camera mounted at the Qiaotou AQMS images the downtown area (Figure 4d). A schematic of the camera lens's FoV at the Qiaotou AQMS is displayed in Figure 5. Because of changes in wind direction, an AQMS may not be located in the downwind direction of the industrial park. In addition, some fog events occur outside the industrial park, beyond the FoV of the camera. Eight reportable events occurred in the target area from May 2018 to March 2019; three occurred at daytime, one occurred over 7 days, and four occurred at nighttime. The seven analyzed fire accidents (i.e., all events excluding the power loss event in the Linyuan Industrial Park on 13 January 2019) are explained in detail later in the manuscript.

3. Results and Discussion

3.1. Fire and Pollutant Recognition

Figure 6 illustrates the flow diagram of the alert working platform with fire and pollutant recognition functions. The images and data are processed by the image recognition and pollutant recognition subsystems, respectively. The rise ratios of HT(t) and XT(t) between time t and time t − nΔt are defined as ΛH(t) and ΛX(t), respectively. If ΛH(t) and ΛX(t) are higher than their thresholds (∇H and ∇X) at a certain time interval, an alert is sent to the management staff; otherwise, the loop is repeated. The values of ∇H and ∇X must therefore be set first according to the characteristics of the landscape. Similarly, when the concentration of a pollutant is higher than the air quality standard, an anomaly notification is sent to the management staff; otherwise, the loop is repeated.
In Taiwan, AQMS cameras perform daytime and nighttime imaging by using the RGB and grayscale color scales, respectively. Therefore, no yellow or orange flame pixels were used in the estimation of XT at nighttime, which reduced the sensitivity of smoke and fire detection. Moreover, the sensitivity of haze extraction decreased because of excessive interference from the gray pixels of the images. In this study, the equations for the threshold values of the haze equivalent and separated pixels were as follows:
$$\Lambda_H(t) = \frac{H_T(t)}{H_T(t - n\Delta t)} > \begin{cases} \nabla_{H,\mathrm{day}} & \text{at daytime}\\ \nabla_{H,\mathrm{night}} & \text{at nighttime} \end{cases}, \quad \text{where } n = 1 \text{ or } 2 \tag{18}$$
$$\Lambda_X(t) = \frac{X_T(t)}{X_T(t - n\Delta t)} > \begin{cases} \nabla_{X,\mathrm{day}} & \text{at daytime}\\ \nabla_{X,\mathrm{night}} & \text{at nighttime} \end{cases}, \quad \text{where } n = 1 \text{ or } 2 \tag{19}$$
The parameter Δt was selected as 10 min. Moreover, n was set as 1 and 2 for large (or nearby) and small (or distant) fire accidents, respectively. Summarizing the curve changes observed in many fire cases, the recommended values of ∇H and ∇X are 2.5 at daytime and 2.0 at nighttime. Equations (18) and (19) use the rise ratio rather than the rise rate because the rise ratio is dimensionless and independent of the background of a view and the period of the day.
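The following sketch applies the rise-ratio test of Equations (18) and (19) to a stream of 10-min samples, using the recommended daytime and nighttime thresholds; the function names and sample values are hypothetical.

```python
from collections import deque

# Recommended thresholds from the text: 2.5 at daytime, 2.0 at nighttime
NABLA_H = {"day": 2.5, "night": 2.0}
NABLA_X = {"day": 2.5, "night": 2.0}

def check_alert(history, value, period, nabla, n=1):
    """Rise-ratio test of Eqs. (18)-(19) for one new H_T or X_T sample.

    history: deque of recent 10-min samples; n = 1 for large/nearby fires,
    n = 2 for small/distant ones. Returns True when an alert should be sent.
    """
    if len(history) >= n and history[-n] > 0:
        ratio = value / history[-n]      # Lambda(t) = V(t) / V(t - n*dt)
        alert = ratio > nabla[period]
    else:
        alert = False                    # not enough history yet
    history.append(value)
    return alert

# Example: hypothetical H_T stream sampled every 10 min during daytime
h_hist = deque(maxlen=3)
for h_t in (125.0, 130.0, 410.0):
    if check_alert(h_hist, h_t, "day", NABLA_H):
        print("haze-equivalent alert: notify the administrator")
```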

3.2. Reportable Events

The images captured by the AQMS cameras at daytime and nighttime used an RGB scale and a grayscale, respectively. The sensitivities of smoke or fire detection and haze extraction were reduced at nighttime because of excessive interference from the gray pixels of the images. Thus, the case studies of fire accidents were divided into two types: fires occurring at daytime and fires occurring at nighttime.

3.2.1. Fire Accident Occurring at Daytime or over Multiple Days

Case 1: Fire Accident of Sulfur Pit Smoldering

A sulfur pit smoldering fire accident occurred at a refining factory in the Linhai Industrial Park on 16 May 2018. The reported period of the accident was from 04:40 to 09:20 (Figure 7). During this period, the prevailing wind direction was SSW with respect to the location of the Xiaogang AQMS, which is downwind of the affected factory (inset of Figure 7). The trajectory indicates that the pollutants arrived at the Xiaogang AQMS after 20 min. Light yellow smoke appeared on the left side of the images during the initial period of 04:58–05:29, which corresponded to the trends of HT and XT during the same time (Figure 7a,c,d). When the fire began to be extinguished with water, the value of the separated pixels decreased rapidly; however, the value of HT did not decrease significantly. These results indicate that fire extinguishment reduces the total amount of separated pixels (through the reduction in fire smoke), and thus XT can also reflect the progress of a fire accident.
High values of SO2 (>340 ppb) and XT (>150,000) were measured during 03:10–04:20, before the reported accident period (Figure 7a,d). Moreover, the hourly concentrations of PM2.5 were lower than 10 μg m−3 (Figure 7b). The HT curve responded to the fire, which maintained a high intensity during the accident period (Figure 7c). The curves confirmed that smoldering occurred before the reported accident period but was not detected. Accidents are highly prone to occur at night (e.g., 02:00–04:00) [33]; thus, an early alert system, which can also record the course of an accident more completely, is required. The value of ΛH at 04:50 and that of ΛX at 05:20 were higher than ∇H,day and ∇X,day, respectively; therefore, an alert was sent.

Case 2: Fire Accident Due to Fuel Tank Leakage

A fire accident due to fuel tank leakage occurred at an advanced material plant in the Linyuan Industrial Park on 21 October 2018. The fuel tank of the carbon black process was ignited by an explosion at 08:40. A large amount of black smoke was discharged into the air, and thick smoke could be observed several kilometers away. The fire burned fiercely for over 3 h; however, no casualties were reported. The observed results are displayed in Figure 8. The camera of the Linyuan AQMS clearly captured the black smoke caused by the fire during 08:40–11:40. Compared with the images captured by the Daliao AQMS at the same time (Figure 9), the images of the Linyuan AQMS were full of haze caused by the black smoke. The observed PM2.5 concentration increased to 35 μg m−3; a high PM2.5 value appeared before the fire accident because of the hourly averaging (Figure 8a). The parameter βd was maintained at a high value during and before the fire accident (Figure 8b); therefore, when HT is used as an early warning indicator, determining whether an increase in HT corresponds to an event is difficult. The value of XT increased from 13,610 to approximately 60,040 at 08:40 and decreased to approximately 23,110 at 11:10. The rapid drop in XT may have been due to the start of fire extinguishment, which suppressed the escape of black smoke. At 08:40, ΛH and ΛX were higher than ∇H,day and ∇X,day, and an alert was sent.
Figure 9 depicts the monitoring images and data of the Daliao AQMS during the accident period of case 2. The forward trajectory (inset in Figure 9) indicates that the pollutants arrived at the Daliao station in 1–2 h. Although the fire smoke was not directly captured by the camera of the Daliao AQMS, a part of the sky became darker during the accident. The curves of HT and XT responded to the delayed fire plume. The ΛH and ΛX values at 09:00 were higher than the values of ∇H,day and ∇X,day, and an alert was sent.

Case 3: Fire Accident Due to Large-Area Open Burning

Figure 10 displays the observed results of large-scale open burning during 17:14–19:20 on 5 March 2019. The fire site was in the Qiaotou suburbs, approximately 1.3 km southeast of the Qiaotou AQMS. The forward trajectory indicates that the pollutants reached the Qiaotou AQMS in approximately 10 min (inset in Figure 10). The measured PM2.5 concentrations at the Qiaotou AQMS increased from 16 to 64 μg m−3 at 17:00 and then decreased to 16 μg m−3 at 19:00. During this time, the station camera clearly recorded the image of the fire. Image analysis indicates that the results obtained through smoke or fire recognition were superior to those obtained through haze extraction for the detection of open burning. The range of HT was 0.5–0.8 during 15:00–18:20, which includes the 2 h before the fire accident; therefore, HT did not vary notably when the accident occurred. The ΛX value at 17:50 was higher than ∇X,day, and an alert was sent.

Case 4: Multiday Fire Accident in a Waste Paper Warehouse

A 7-day fire accident occurred in a large waste paper warehouse of a paper mill due to continuous burning. The fire started at 14:04 on 18 October 2018 and was extinguished on 24 October 2018. The paper mill is close to the Dafa Industrial Park but beyond the FoV of the camera of the Daliao AQMS. Figure 11a presents the forward trajectories at 00:00, 06:00, 12:00, and 18:00 for each day of the accident. The wind speed was mostly very low during the accident period, which indicated that the air pollutants did not dissipate easily. Therefore, a fire-contaminated parcel was observed intermittently although the paper mill was beyond the FoV of the AQMS's camera. Furthermore, the air pollutant alert subsystem repeatedly reflected the high concentration of PM2.5. To describe this case briefly, the daytime of 19 October 2018 was selected as a representative period of the fire accident. The curves of HT and XT at the Daliao AQMS were similar during the daytime; the variations in these curves may reflect the effect of the contaminated parcel observed intermittently in the images. The measured PM2.5 concentrations also varied with changes in the images, especially during 11:00–12:00. High PM2.5 concentrations were measured at noon for the 7 consecutive days of the accident. Because of its long duration and poor weather conditions, this fire accident had a considerable effect on local air quality.

3.2.2. Fire Accident Occurring at Nighttime

Case 5: Fire Accident at the Waste Paper Warehouse

Another fire accident occurred at a large waste paper warehouse of a paper mill at nighttime on 28 January 2019. This paper mill is the same as that in case 4 and is beyond the FoV of the cameras of the four AQMSs. The forward trajectory indicates that the contaminated parcel arrived at the Linyuan AQMS approximately 1.5 h after the accident began (inset of Figure 12) because the wind speed was low at nighttime. The measured hourly average concentration of PM2.5 increased from 13 μg m−3 (at 20:50) to 30 μg m−3 (at 21:00) and 46 μg m−3 (at 22:00). Thus, the pollutants were transferred with the airflow to the Linyuan AQMS (Figure 12a). Approximately 1 h after the fire, HT increased significantly over time (Figure 12b). The fire smoke was observed 50 min after the fire had begun (Figure 12c). The variations in PM2.5, HT, and XT responded to the fire accident. However, XT was more representative of the evolution of the fire accident than the other two factors were. Thus, the variations in XT can help trace the cause and evolution of accidents. The ΛH value at 22:20 and ΛX value at 21:50 were higher than the values of ∇H,night and ∇X,night; therefore, an alert was sent.

Case 6: Fire Accident Due to Pipeline Rupture

A fire accident caused by pipeline rupture occurred at a petrochemical plant in the Linyuan Industrial Park on 28 February 2019. At the time of the accident, the operators used 10 kg of nitrogen gas to blow out the pipeline. Because the externally evacuated rubber pipeline ruptured, high-pressure gas was discharged and contacted a fire source, which resulted in a fire and explosion. Four operators were taken to the hospital with burn injuries. The reported period of the accident was from 17:48 to 23:18. The forward trajectory indicates that the contaminated parcel arrived at the Linyuan AQMS approximately 10 min after the fire began (inset in Figure 13). The measured PM2.5 concentration increased from 15 μg m−3 at 17:00 to 40 μg m−3 at 19:00 (Figure 13a). Fire smoke was observed 20 min before the reported start time of the accident; therefore, the fire may have started at approximately 17:20. The variations in HT and XT exhibited the same trend as the variations in the PM2.5 concentration, and the values of HT and XT began to increase at 17:30 (Figure 13b,c). At this time, ΛH and ΛX were higher than ∇H,night and ∇X,night; therefore, an alert was sent.

Case 7: Fire Accident Due to a Reaction Tank Leakage

A fire accident was caused by reaction tank leakage in the Linyuan Industrial Park on 12 January 2019. The fire started at 22:00 and was extinguished by 01:00 the next day. The forward trajectory indicates that the pollutants arrived at the Linyuan AQMS 20 min after the beginning of the fire (inset in Figure 14). The hourly PM2.5 value increased to 56 μg m−3 at 00:00 (Figure 14a). The image analysis results indicated that HT and XT increased with time after approximately 22:20. At this time, ΛH and ΛX were higher than ∇H,night and ∇X,night; therefore, an alert was sent.

3.3. Serviceability Limits of Alerts

The results of the aforementioned case studies indicate that the proposed approach is suitable for designing a successful real-time observation and alert system for reportable events, such as industrial fires, illegal open burning, and smoke-related emissions. This system uses the existing surveillance cameras and monitors of AQMSs and does not require expensive hardware devices. The results indicate that this system can respond to and record the progress of fire accidents, including the fire start and end times, the start time of water-based extinguishment, the scale of the accident, and the possible extent of harm. These records are important sources of evidence for subsequent fire accident identification. The two image analysis methods adopted in this study have complementary effects. The pixel recognition method provides high intensity values for fire accidents but is affected by fire extinguishment; by contrast, the haze extraction method provides low intensity values for fire accidents but is unaffected by fire extinguishment. Therefore, a system with these two image analysis technologies can avoid the disadvantages of general pixel analysis systems [10,11,12,13,14]. In addition, the effect of wind speed on fire smoke flow is small because hot smoke normally rises a considerable distance, which reduces the influence of wind. Therefore, the effect of wind speed on the application of the two methods is slight.
The aforementioned two model-based image processing techniques are based on atmospheric scattering, the dark channel phenomenon, and separated pixels. Thus, the alerts provided by the proposed system have the following serviceability limits:
  • Under poor weather conditions, such as rain, snow, and dark clouds, the collected images are distorted because of the low light flux from the atmosphere.
  • The acquisition of representative smoke pixels is negatively affected by a camera lens with a poor FoV (too far from or too close to the target area), an unsuitable (too high or too low) elevation angle, or an insufficient mounting height.
  • In the case of poor air quality, visibility is obscured by numerous fine particles.
  • A large number of plumes with a high moisture or pollutant content, such as high-mist plumes (Figure 4b), mist from cooling towers, and boiler soot-blowing emissions, can interfere with detection.
  • The camera must be protected from glare caused by direct sunlight, and the camera lens must be kept clean.
  • The thresholds in Equations (4), (18), and (19) vary with region, time period, and weather conditions and require further determination.
To overcome the aforementioned limitations, artificial intelligence (AI) architectures, such as support vector machines (supervised learning) or convolutional neural networks (deep learning), can be introduced into the working platform to extract fine feature values from a large number of event images (at least 30). These feature values include the thresholds, temporal features, spatial features, lighting, lens angle, scene depth, smoke motion, and color. Through a self-learning and updating mechanism, a real-time observation and alert system can be developed. In addition, an integrated deep-agent approach is also recommended because it can integrate two or more existing algorithms and enable them to complement each other, achieving a better outcome than any one of them alone [34].

4. Conclusions

The proposed monitoring and alert system uses the existing surveillance cameras of AQMSs and does not require expensive hardware devices. A working platform with fire and pollutant recognition is proposed. The pixel recognition and haze extraction methods yield better image analysis results at nighttime and daytime, respectively; thus, the two methods have complementary effects in image analysis. Estimation equations for pixel recognition, haze extraction, and alert thresholds were established and used in this study. Moreover, a forward trajectory model suitable for regional contaminated parcel transmission was established to trace the transmission of fire plumes. This model can help interpret image analysis results when pollutant transmission is delayed. Although this system is not connected to the fire brigade, all fire incidents could be matched with the fire brigade's records without any omission. The results of the seven case studies indicated that the proposed approach is successful. In the image analysis, the fire accidents were divided into two types, namely daytime and nighttime fires, according to the image color scales. We detailed the serviceability limits of the proposed approach, especially with regard to the use of images, and explained how introducing AI can reduce these limits. The proposed alert system can be used by administrators to understand the course of a reportable event, especially as evidence for the appraisal of fire accidents. It is recommended that this system be connected to the fire brigades in order to obtain early fire information.

Author Contributions

C.-J.L. conceived, designed, and performed the evaluation and wrote the manuscript. J.-J.L. guided the research and corrected the manuscript. F.-C.L., S.-H.L. and P.-R.Y. collected the data related to the research and prepared the tables and figures. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Ministry of Science and Technology and the Environmental Protection Administration of the Republic of China under Contract No. MOST-107-2221-E-035-007 and Contract No. EPA-107A085, respectively.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Heo, S.; Kim, M.; Yu, H.; Lee, W.K.; Sohn, J.R.; Jung, S.Y.; Moon, K.W.; Byeon, S.H. Chemical accident hazard assessment by spatial analysis of chemical factories and accident records in South Korea. Int. J. Disaster Risk Reduct. 2018, 27, 37–47.
2. Chen, C.; Reniers, G.; Zhang, L. An innovative methodology for quickly modeling the spatial-temporal evolution of domino accidents triggered by fire. J. Loss Prevent. Proc. 2018, 54, 312–324.
3. Le Coze, J.C. Accident in a French dynamite factory: An example of an organisational investigation. Saf. Sci. 2010, 48, 80–90.
4. Årstad, I.; Aven, T. Managing major accident risk: Concerns about complacency and complexity in practice. Saf. Sci. 2017, 91, 114–121.
5. Gao, Y.; Fan, Y.X.; Wang, J.; Duan, Z. Evaluation of governmental safety regulatory functions in preventing major accidents in China. Saf. Sci. 2019, 120, 299–311.
6. Swuste, P.; van Nunen, K.; Reniers, G.; Khakzad, N. Domino effects in chemical factories and clusters: An historical perspective and discussion. Process Saf. Environ. 2019, 124, 18–30.
7. Rum, A.; Landucci, G.; Galletti, C. Coupling of integral methods and CFD for modeling complex industrial accidents. J. Loss Prevent. Proc. 2018, 53, 115–128.
8. Li, F.; Bi, J.; Huang, L.; Qu, C.; Yang, J.; Bu, Q. Mapping human vulnerability to chemical accidents in the vicinity of chemical industry parks. J. Hazard. Mater. 2010, 179, 500–506.
9. Li, W.; Zhong, H.; Jing, N.; Fan, L. Research on the impact factors of public acceptance towards NIMBY facilities in China—A case study on hazardous chemicals factory. Habitat Int. 2019, 83, 11–19.
10. Phillips, W., III; Shah, M.; Lobo, N.V. Flame recognition in video. Pattern Recog. Lett. 2002, 23, 319–327.
11. Vicente, J.; Guillemant, P. An image processing technique for automatically detecting forest fire. Int. J. Therm. Sci. 2002, 41, 1113–1120.
12. Toreyin, B.U.; Dedeoglu, Y.; Gudukbay, U.; Cetin, A.E. Computer vision based method for real-time fire and flame detection. Pattern Recog. Lett. 2006, 27, 49–58.
13. Celik, T.; Demirel, H.; Ozkaramanli, H.; Uyguroglu, M. Fire detection using statistical color model in video sequences. J. Vis. Commun. Image Represent. 2007, 18, 176–185.
14. Yuan, F. A fast accumulative motion orientation model based on integral image for video smoke detection. Pattern Recog. Lett. 2008, 29, 925–932.
15. Gaur, A.; Singh, A.; Kumar, A.; Kulkarni, K.S.; Lala, S.; Kapoor, K.; Srivastava, V.; Kumar, A.S.; Mukhopadhyay, C. Fire sensing technologies: A review. IEEE Sens. J. 2019, 19, 3191–3202.
16. Umar, M.M.; de Silva, L.C.; Bakar, M.S.A.; Petra, M.I. State of the art of smoke and fire detection using image processing. Int. J. Signal Imaging Syst. Eng. 2017, 10, 22–30.
17. Hasan, M.M.; Razzak, M.A. An automatic fire detection and warning system under home video surveillance. In Proceedings of the 2016 IEEE 12th International Colloquium on Signal Processing & Its Applications (CSPA 2016), Melaka, Malaysia, 4–6 March 2016.
18. Horng, W.B.; Peng, J.W. A fast image-based fire flame detection method using color analysis. J. Sci. Eng. 2008, 11, 273–285.
19. Çelik, T.; Özkaramanlı, H.; Demirel, H. Fire and smoke detection without sensors: Image processing based approach. In Proceedings of the 15th European Signal Processing Conference (EUSIPCO 2007), Poznan, Poland, 3–7 September 2007.
20. Chowdhury, N.; Mushfiq, D.R.; Chowdhury, A.Z.M.E. Computer vision and smoke sensor based fire detection system. In Proceedings of the 1st International Conference on Advances in Science, Engineering and Robotics Technology 2019 (ICASERT 2019), Dhaka, Bangladesh, 3–5 May 2019.
21. Liu, X.; Song, Z.; Ngai, E.; Ma, J.; Wang, W. PM2.5 monitoring using images from smartphones in participatory sensing. In Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), Hong Kong, China, 26 April–1 May 2015; pp. 630–635.
22. Liu, C.; Tsow, F.; Zou, Y.; Tao, N. Particle pollution estimation based on image analysis. PLoS ONE 2016, 11, e0145955.
23. Zhan, Y.; Zhang, R.; Wu, Q.; Wu, Y. A new haze image database with detailed air quality information and a novel no-reference image quality assessment method for haze images. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 1095–1099.
24. Feng, C.; Wang, W.; Tian, Y.; Que, X.; Gong, X. Estimate air quality based on mobile crowd sensing and big data. In Proceedings of the 2017 IEEE 18th International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Macau, China, 12–15 June 2017; pp. 1–9.
25. Wang, H.; Yuan, X.; Wang, X.; Zhang, Y.; Dai, Q. Real-time air quality estimation based on color image processing. In Proceedings of the 2014 IEEE Visual Communications and Image Processing Conference, Valletta, Malta, 7–10 December 2014; pp. 326–329.
26. Pan, Z.; Yu, H.; Miao, C.; Leung, C. Crowdsensing air quality with camera-enabled mobile devices. In Proceedings of the AAAI Conference on Artificial Intelligence; AAAI: Palo Alto, CA, USA, 2017; pp. 4728–4733.
27. Yu, C.; Fang, J.; Wang, J.; Zhang, Y. Video fire smoke detection using motion and color features. Fire Technol. 2010, 46, 651–663.
28. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
29. McCartney, E.J. Optics of the atmosphere: Scattering by molecules and particles. Phys. Today 1977, 30, 76.
30. Fattal, R. Dehazing using color-lines. ACM Trans. Graph. 2014, 34, 13.
31. Levin, A.; Lischinski, D.; Weiss, Y. A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 228–242.
32. Sulami, M.; Glatzer, I.; Fattal, R.; Werman, M. Automatic recovery of the atmospheric light in hazy images. In Proceedings of the 2014 IEEE International Conference on Computational Photography (ICCP), Santa Clara, CA, USA, 2–4 May 2014.
33. Reniers, G.L.L.; Pauwels, N.; Audenaert, A.; Ale, B.J.M.; Soudan, K. Management of evacuation in case of fire accidents in chemical industrial areas. J. Hazard. Mater. 2007, 147, 478–487.
34. Jiao, F.; Bhanu, B. Deepagent: An algorithm integration approach for person re-identification. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018.
Figure 1. (a) Image without fire, (b) color separation of image without fire, (c) image with fire, and (d) color separation of image with fire (white background color for easy viewing).
Figure 2. The procedure and result of processing an air pollution image by using the dark channel prior technology. (a) Input image I, (b) dark channel image Ic, (c) transmittance T, and (d) distribution of βd.
Figure 3. Locations of the selected representative industrial parks and their adjacent air quality monitoring stations (AQMSs). The three insets show aerial photos of the selected target industrial parks.
Figure 4. (a–d) Diagrams of the camera lens's field of view at the four Taiwan AQMSs. The red line in (b) is the skyline for ambient haze extraction used in estimating HT(t).
Figure 5. Schematic diagram of the camera lens's field of view at the industrial air quality monitoring station.
Figure 6. Flow diagram of the alert working platform with fire and pollutant recognition, where ΛH and ΛX are the rise ratios of the haze equivalent and total separated pixels, and ∇H and ∇X are the alert thresholds of haze extraction and pixel recognition, respectively.
Figure 7. The monitoring images and data during the accident period of case 1, where HT and XT are the haze equivalent and total separated pixels, respectively. The inset shows the contaminated parcel forward trajectories; the length between two dots on a trajectory line is the distance moved by the wind per hour (i.e., the hourly wind speed). (a–d) The observed values of SO2, PM2.5, HT, and XT, respectively.
Figure 8. Same as in Figure 7, but depicting case 2 and using data from the Linyuan AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Figure 9. Same as in Figure 7, but depicting case 2 and using data from the Daliao AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Figure 10. Same as in Figure 7, but depicting case 3 and using data from the Qiaotou AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Figure 11. The variations of (a) contaminated parcel forward trajectories during the accident period of case 4; and (b) monitoring images, (c) hourly PM2.5, (d) HT, and (e) XT during 08:00–16:00 on 19 October 2018 (the selected representative period for case 4).
Figure 12. Same as in Figure 7, but depicting case 5 and using data from the Linyuan AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Figure 13. Same as in Figure 7, but depicting case 6 and using data from the Linyuan AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Figure 14. Same as in Figure 7, but depicting case 7 and using data from the Linyuan AQMS. (a–c) The observed values of PM2.5, HT, and XT, respectively.
Table 1. The selected threshold ranges of dark-gray smoke, light-gray smoke, yellow flame, and orange flame pixels.

Colors \ Channels     R          G          B
Dark-gray smoke       80–100     80–100     80–100
Light-gray smoke      125–145    125–145    125–145
Yellow flame          240–260    240–260    130–150
Orange flame          210–230    130–150    80–100
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

