Article

A Citizen Science Unmanned Aerial System Data Acquisition Protocol and Deep Learning Techniques for the Automatic Detection and Mapping of Marine Litter Concentrations in the Coastal Zone

by Apostolos Papakonstantinou 1,*, Marios Batsaris 2, Spyros Spondylidis 1 and Konstantinos Topouzelis 1

1 Department of Marine Sciences, University of the Aegean, 81100 Mytilene, Greece
2 Geography Department, University of the Aegean, 81100 Mytilene, Greece
* Author to whom correspondence should be addressed.
Submission received: 16 December 2020 / Revised: 13 January 2021 / Accepted: 14 January 2021 / Published: 18 January 2021

Abstract: Marine litter (ML) accumulation in the coastal zone has been recognized as a major problem of our time, as it can dramatically affect the environment, marine ecosystems, and coastal communities. Existing monitoring methods fail to respond to the spatiotemporal changes and dynamics of ML concentrations. Recent works showed that unmanned aerial systems (UAS), along with computer vision methods, provide a feasible alternative for ML monitoring. In this context, we propose a citizen science UAS data acquisition and annotation protocol combined with deep learning techniques for the automatic detection and mapping of ML concentrations in the coastal zone. Five convolutional neural networks (CNNs) were trained to classify UAS image tiles into two classes: (a) litter and (b) no litter. Testing the CNNs' generalization ability on an unseen dataset, we found that the VGG19 CNN returned an overall accuracy of 77.6% and an f-score of 77.42%. ML density maps were created using the automated classification results and compared with those produced by a manual screening classification, demonstrating our approach's geographical transferability to new, unknown beaches. Although ML recognition is still a challenging task, this study provides evidence about the feasibility of using a citizen science UAS-based monitoring method in combination with deep learning techniques to quantify the ML load in the coastal zone using density maps.


1. Introduction

Marine litter is a global problem affecting the world's oceans, with millions of plastic items ending up in the sea and harming marine ecosystems [1,2,3]. Plastic pollution in the marine environment has escalated rapidly over the last few decades, posing a severe environmental risk to many habitats globally.
Coastal zones are among the most populated and most productive areas globally, having a variety of habitats and ecosystems. They are close to land-based pollution sources, such as ports, cities, and rivers; thus, marine litter is present in high quantities [4,5,6,7]. Several initiatives have been planned by global and local players towards detection, monitoring, and cleaning [1,8,9,10,11,12,13,14,15].
Monitoring programs have been implemented to map, spatially and temporally, the load and type of marine litter on beaches worldwide [2,16,17,18,19,20], and state-of-the-art techniques have been examined to detect and quantify floating marine litter [21,22,23,24,25,26,27,28].
The scientific community is already working towards specifying sensors for detecting and quantifying marine litter. Scientists, stakeholders, and policymakers have shown a rising interest in the prospective applications of remote sensing technologies for generating complementary benchmark data about marine litter [9,15,21,28,29]. All these entities have expressed a dire need for core end-products/descriptors from remote sensing technologies relevant to the (i) detection, (ii) identification, (iii) quantification, and (iv) mapping of ML.
Unmanned aerial systems (UAS) can be used in this direction [21,22,30,31,32,33,34]. UAS can provide high-quality monitoring information and products about the coastal environment at the local scale [35,36,37,38,39,40]. This information can equip policymakers, stakeholders, and citizens with a better understanding of marine litter pollution and of how to manage and mitigate the ML problem. UAS can bridge the gap from local observations to regional mapping using earth observation (EO) data [41]. Recent works have demonstrated the feasibility of UAS for mapping stranded marine macro-litter (>2.5 cm) [42] on orthomosaics produced from drone flights [22,23,24,26,43,44]. Automatic detection and reporting are essential for such a procedure; thus, several attempts use artificial intelligence (AI) approaches for fast and accurate responses. Recent works have explored the viability of a UAS-based approach to detect, identify, and classify marine litter in coastal areas by applying automated detection using machine learning methods or deep learning techniques [22,34,45,46,47,48].
Fallati et al. proposed an ad-hoc methodology for monitoring and automatically quantifying anthropogenic marine debris (AMD) on the basis of the combined use of a commercial UAS (equipped with an RGB high-resolution camera) and deep-learning-based software [45]. The high-resolution UAS images allowed for the visual detection of more than 87.8% of the objects on the shores, thus providing suitable images to populate training and testing datasets. The PlasticFinder software reached a sensitivity of 67%, with a positive predictive value of 94%, in the automatic detection of AMD. Their study confirmed the efficiency of commercial UAVs as tools for AMD monitoring and the potential of deep learning methods to automatically detect and quantify AMD.
An attempt to automate marine litter recognition in beach-dune systems using UAV images and machine learning methods is presented in Martin et al. [34]. Their study examined the random forest (RF) classifier for beach litter detection from orthomosaics. To increase the performance of the automatic detection process, the authors proposed three different RF classifiers. The first two classify positive (beach litter) and negative (sand, vegetation, etc.) samples, while the third distinguishes beach litter into three categories: (a) drink containers, (b) bottle caps, and (c) plastic bags. The third classifier's training and validation image-sets were clipped into 64 × 64 pixel tiles from the orthomosaics. The authors intervened in the test sample by selecting images containing more litter items against less complicated backgrounds. The multi-class classifier's results were compared with a manual/visual screening survey, yielding significantly low detection rates of 44% for drink containers, 5% for bottle caps, and 3.7% for plastic bags.
Gonçalves et al. [46] explored the use of machine learning methods to automate marine litter detection from UAV orthomosaics. Along with the RGB orthomosaics, they converted the color representations into the following color spaces: HEX, CIE-Lab, and YCbCr. Their color-based approach relies on the argument that marine litter appears in many color and shape variations in contrast with the beach background. An RF classifier was used with a pixel-by-pixel classification technique. A visual survey identified 311 litter objects, which were digitized and used as training and validation image-sets with a split ratio of 60%/40%, respectively. Orthomosaics were clipped into 320 × 320 pixel blocks and further resampled to 64 × 64 pixels using bicubic interpolation to increase computational performance. This approach achieved an f-score of 75%.
The comparison between manual screening and two types of machine learning methods for marine litter detection was examined in Gonçalves et al. [22]. In this work, the authors extended their previous research by training a convolutional neural network (CNN) and comparing it against manual screening results. A DenseNet architecture was used as the CNN, trained on 48 × 48 image tiles obtained from the RGB orthomosaic and its additional color conversions. On the basis of the manual survey, the authors annotated the tiles as litter and no litter images and used them to train the CNN. This study showed better results for the RF classifier approach, with an f-score of 70%, while the CNN's f-score was significantly lower (60%).
The above studies focused on small-scale applications in which typical machine learning models such as RF and support vector machines (SVMs) performed quite well in detecting litter. Large image-sets may benefit machine learning model performance, and therefore architectures with larger learning capacity are required. Image-sets such as ImageNet and the technological advances in parallel computing through GPUs have made CNNs very successful in computer vision applications [49]. CNNs may thus be a useful tool for marine litter recognition.
Kylili et al. [48] investigated the use of deep learning techniques to identify floating marine debris from on-vessel camera systems. They used a VGG16 architecture pre-trained on the ImageNet dataset, applying the transfer learning approach and the bottleneck method, to classify images into three categories: (a) plastic bottles, (b) plastic buckets, and (c) plastic straws. Using geometric transformations, the authors created a total image-set of 12,000 samples, forming training and validation datasets with a split ratio of 80% (9600 images) and 20% (2400 images), respectively. On a test dataset of 165 images, they achieved an overall accuracy of 86%. Kylili et al. [47] then improved their previous approach by extending the number of classes from three to eight, which increased the training and validation image-sets. To evaluate CNN performance, the authors used a testing image-set of 400 samples, obtaining an overall accuracy of 90%, an improvement of 4% over their previous result.
Although research has been done using UAS machine learning approaches for monitoring ML in the coastal zone, to date, no efforts have been made to broaden the application of UAS by using citizen science for data acquisition. Furthermore, recent works have demonstrated the feasibility of UAS for mapping marine litter on the orthophoto maps produced from drone flights. As the automatic detection of ML in the coastal zone becomes a necessity, the UAS machine learning approach is employed in this direction. This requires a massive training dataset with ML for training the machine learning algorithms, for which citizen science/crowdsourcing approaches could be successfully applied.
In this context, we foresee many opportunities to use citizen science for both data acquisition and data annotation for automatic ML detection in the coastal zone. Therefore, we propose a citizen science UAS data acquisition protocol to enhance data collection and apply machine learning detection to the aerial images to quantify the ML load in the coastal zone. This study aimed to explore the use of citizen science drone data in an integrated approach for automatic marine litter detection. We created "marine litter density maps" of the beach and nearshore through citizen science/crowdsourcing approaches combined with deep learning algorithms. Furthermore, this study investigated the performance of five convolutional neural network (CNN) architectures for the recognition and mapping of marine litter from high-resolution UAS-derived images acquired from complex beach backgrounds, including the results of a first experimental application on Xabelia Beach, Mytilene, Lesvos. The Xabelia dataset was not used to train the deep learning models; thus, our approach's generalization ability was evaluated on a new, unknown beach. This demonstrates the geographical transferability of the method to new and unknown beaches.

2. Materials and Methods

Guidelines and protocols have been designed to standardize monitoring strategy performance on the coastal zone, defining the survey methods [42,50,51,52]. To date, the most common method for monitoring marine litter involves in situ visual surveys on the beach [33,53,54].
In this context, this study proposes a methodology that combines state-of-the-art deep learning models with drone technologies to quantify marine litter concentrations through density maps. More specifically, this study implements concrete methodological steps for the appropriate citizen science use of popular commercial off-the-shelf drones as close remote sensing data acquisition platforms to acquire data for ML mapping in the coastal zone. We propose a crowdsource-based classification scheme for data annotation and its combination with deep learning models to map and quantify ML accumulation. Mapping methods were studied to provide the best geo-visualization results illustrating the automated ML quantification. The AI-extracted results were illustrated as density maps depicting ML concentrations, showcasing their overall distribution and concentration trends.
For the completion of each objective, we implemented robust methodological steps. More specifically, our framework is based on the combination of protocols to (i) provide data acquisition standards to non-experienced citizens using commercial UAS for ML detection, (ii) annotate commercial drone images using citizen science platforms, (iii) train deep-learning models for ML visual recognition and evaluate their performance, and (iv) create specific geo-visualizations and maps illustrating the ML geographic clustering.
The majority of commercial off-the-shelf drones are equipped with optical imagers having true-color RGB sensors. RGB data were acquired and inserted into the AI algorithmic process to detect and quantify marine litter in the coastal zone and derive litter density.
Our work conceptualizes the best practices for applying state-of-the-art deep learning models to automate marine litter detection and quantification on the coastal zone using RGB raw UAS aerial images.
The proposed methodology consists of four pillars. The first pillar is the data acquisition protocol, which covers system selection, system preparation, mission programming, and the data acquisition flight. The second pillar consists of (a) the preprocessing step, where automatic image segmentation into tiles and geo-enrichment take place, and (b) the annotation process through citizen science annotation campaigns. The third pillar comprises the automatic ML recognition and mapping steps, using the annotated data to predict the presence of ML across the entire tile dataset. Finally, an ML density map of the study area is produced. The flowchart in Figure 1 illustrates the methodological steps and the overall structure of the proposed approach.

2.1. UAS Data Acquisition Protocol

A UAS data acquisition framework was created and validated for citizen science data acquisition using off-the-shelf commercial drones. The idea was to create a protocol that will empower drone owners to act as stewards of the environment by providing survey data to enhance the data acquisition process, thus providing new and valuable data with minimum cost to the scientists mapping ML in the coastal zone. This framework was based on simple defined drone and flight parametrization steps to form an easy-to-follow data collection protocol for non-experienced commercial drone owners. The Pix4Dcapture (Figure 2) drone flight planning mobile application was selected to accomplish accurate citizen science data acquisition for ML mapping in the coastal zone [55].
This application is freely available from Pix4D to all drone owners who want to plan UAS flights for optimal mapping. An additional important factor for this selection was the application's compatibility with the two most common mobile operating systems (iOS and Android), providing protocol interoperability and use by a broader range of citizen scientists (individuals, non-governmental organizations, institutions, public organizations, etc.). Moreover, the application is flexible, as it supports drones from three of the biggest drone manufacturers on the market: DJI, Parrot, and Yuneec [55]. Through an easy-to-parameterize interface, users can create flight missions and select flight details to acquire data in a consistent way. Thus, non-experienced citizen scientists can easily define the size of a mission to map areas of all sizes and customize mapping parameters such as the image overlap, camera angle, and flight altitude according to ML acquisition needs. The application provides an easy start-and-fly, fully automatic data acquisition process in which the drone sensor is automatically triggered according to the optimal acquisition parameters.
A collection protocol that is easy to follow for non-experienced commercial drone owners was created, relying on simple, defined drone and flight parametrization steps using the selected mobile application. The basic parameters of the proposed data acquisition protocol were defined by considering and investigating the best operating conditions for a commercial aerial drone to maximize data acquisition for ML identification. Factors investigated included the (i) operating altitude of the drone above ground level (AGL), (ii) time of day (noon to afternoon, between 12 p.m. and 3 p.m.), (iii) weather condition (sunny, cloudy), and (iv) substrate homogeneity of the beach (high or low density of gravel and pebbles, etc.). We hypothesized that these factors would affect the quality of the photos taken, and hence the accuracy of marine litter identification. The data acquisition protocol's efficacy was checked using two of the most popular mid-range commercial DJI drones: the Mavic Mini Enterprise and the Phantom 4 Pro v2 [56,57].
The drones were controlled using Pix4Dcapture in Android and iOS mobile devices, allowing an automatic flight realization to map a specific preselected area. A series of photos were taken from both drones under different operating conditions, as mentioned above. After the test flights and checking the images acquired, we ended up with the following desirable parameters for mapping ML in the coastal zone.
The camera should be pointed at nadir (90° to the ground) with automatic settings to allow for good marine litter shape and size detection without the need for image rectification during post-processing. The maximum light sensitivity (ISO) should be set at 1000 to ensure that the photos are taken at a shutter speed fast enough (usually 1/400 s–1/1000 s, depending on the time of day and weather conditions) to avoid blurry images. The photos must have a ground sample distance (GSD) of 0.5 cm, sufficient to capture a standard plastic bottle cap in 4 pixels. The desirable frontal and lateral image overlap is 20% between photos, ensuring that the whole coastal area is covered without taking unnecessary images. The small front and lateral overlap proposed was guided by the deep learning quantification process: increasing the side and front overlap leads to the same ML items being identified numerous times in the overlapping images, inflating the ML approximation. Thus, the image front and side overlap were both reduced to 20%, the minimum required to ensure sufficient full beach coverage. Finally, the desirable UAS data acquisition speed was set to 5 m/s (18 km/h), ensuring that during data acquisition, all citizen scientists can take control of their drone if something unexpected occurs while flying.
Finally, for selecting the appropriate drone altitude above ground level (AGL), we considered the GSD of 5 mm. While flying at an altitude of 18 m (AGL), most commonly used off-the-shelf commercial drones can take images with a 2 to 4 px/cm pixel density. These pixel densities allow for integration with available machine learning-based object detection algorithms [58,59] and provide sufficient visual information for small marine litter detection and classification by the human eye [32].
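The relationship between flight altitude and GSD follows the standard photogrammetric formula GSD = (sensor width × altitude) / (focal length × image width). The minimal Python sketch below illustrates it using the nominal Phantom 4 Pro v2 sensor values described in Section 2.2 (an assumption on our part; the helper names are ours, and other drone models need their own numbers).

```python
# Minimal sketch relating flight altitude to ground sample distance (GSD).
# Sensor values are the nominal Phantom 4 Pro v2 specifications (assumption);
# substitute your own drone's numbers when planning a flight.

SENSOR_WIDTH_MM = 13.2   # 1-inch CMOS sensor width
FOCAL_LENGTH_MM = 8.8    # real focal length (24 mm full-frame equivalent)
IMAGE_WIDTH_PX = 5472    # image width at a 3:2 aspect ratio

def gsd_cm(altitude_m):
    """Ground sample distance (cm/pixel) at a given altitude above ground level."""
    return (SENSOR_WIDTH_MM * altitude_m * 100) / (FOCAL_LENGTH_MM * IMAGE_WIDTH_PX)

def altitude_for_gsd(target_gsd_cm):
    """Altitude (m, AGL) needed to achieve a target GSD."""
    return (target_gsd_cm * FOCAL_LENGTH_MM * IMAGE_WIDTH_PX) / (SENSOR_WIDTH_MM * 100)

print(f"GSD at 18 m AGL: {gsd_cm(18):.2f} cm/px")                 # ~0.49 cm/px, as in Section 2.2
print(f"Altitude for 0.5 cm GSD: {altitude_for_gsd(0.5):.1f} m")  # ~18.2 m
```

With these sensor values, an 18 m flight altitude reproduces the 0.49 cm nominal GSD reported for the survey in Section 2.2.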

2.2. Data Acquisition and UAS Survey

The UAV-borne measurements were taken using an off-the-shelf UAV and processed using an online annotation tool. The UAS data acquisition was conducted on 29 September 2020 at 12:00 on a sunny day. Xabelia beach in Lesvos, Greece, was selected as the study area. The beach is located in the northeast of Lesvos island and has a complex background where organic and inorganic debris is deposited by wave action. The aerial survey was performed using a DJI Phantom 4 Pro v2 quadcopter equipped with a 20-megapixel camera with a mechanical shutter mounted on a three-axis gimbal. The three-axis brushless gimbal smooths the camera's angular movements, dampens vibrations, and maintains the camera at a predefined level. The camera has a lens of 24 mm (35 mm format equivalent) focal length with an 84-degree field of view and a 1-inch CMOS (complementary metal oxide semiconductor) sensor. The UAS has a hover accuracy of ±0.5 m vertically and ±1.5 m horizontally, as it uses the GPS/GLONASS positioning system in combination with a barometer and an inertial measurement unit (IMU). Finally, the drone is equipped with an intelligent flight battery that provides approximately 23 min of flight time under normal conditions [57].
Concerning the data acquisition, a non-experienced UAS pilot collected the data following the proposed acquisition protocol. The mission planning was implemented using Pix4Dcapture and included all the parameters that allow the UAS to perform the flight autonomously. On the basis of the proposed parameters, the flight mission software computed, for the given camera model, the expected ground sampling distance (GSD) and the flight path to follow (Figure 3). The drone was set to fly at an altitude of 18 m, with the camera gimbal set to −90° to capture nadir photos. The images, with a resolution of 5472 × 3648 pixels (3:2 aspect ratio), had 20% front and lateral overlap and a nominal spatial resolution (GSD) of 0.49 cm. The flight plan was executed autonomously by the UAS and lasted 7 min and 38 s, collecting 106 aerial images.

2.3. Data Preprocessing

Every image acquired by the drone stores valuable metadata in the exchangeable image file format (EXIF), which can be accessed during post-processing. The EXIF metadata stored in the raw image files contains specific DJI flight metadata, such as the GPS location, flight speed, GPS altitude, all three gimbal rotations (yaw, pitch, and roll), and image dimensions, as well as the timestamp and camera specifications.
Python code was implemented for the automatic segmentation of the raw aerial images into 512 × 512 image tiles, suitable for mapping ML densities in the coastal zone due to their small footprint on the ground (6.55 square meters). Furthermore, the code geo-enriched all the produced tiles by reading the EXIF information of the raw images for the following parameters: (a) GPS coordinates (latitude and longitude), (b) image dimensions, (c) image rotation relative to true north, (d) UAS flight azimuth relative to true north, and (e) flight altitude. Finally, to avoid duplicates in ML detection and mapping due to the overlap of raw images, we designed an automated selection process for image tiles on the basis of the overlapping percentage translated into tile pixel overlap. Thus, all pixels corresponding to the 20% overlap were discarded from the right and bottom of each acquired raw image.
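As a rough illustration of this preprocessing step, the sketch below reads the GPS tags from an image's EXIF metadata, discards the pixels corresponding to the 20% overlap, and cuts the remainder into 512 × 512 tiles. It is a simplified interpretation, not the authors' code: the file name is a placeholder, the overlap handling is one possible reading of the selection process, and the geo-enrichment of individual tile centroids is omitted.

```python
# Simplified sketch of the tiling and geo-enrichment step (not the authors'
# exact code): read the GPS IFD from a raw image's EXIF metadata, drop the
# pixels corresponding to the 20% right/bottom overlap, and cut 512 x 512 tiles.
from PIL import Image
from PIL.ExifTags import GPSTAGS

TILE = 512
OVERLAP = 0.20  # front/lateral overlap set by the acquisition protocol

def gps_from_exif(img):
    """Return the GPS EXIF tags (latitude/longitude as degree-minute-second values)."""
    gps_ifd = img.getexif().get_ifd(0x8825)  # 0x8825 = GPSInfo IFD
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

def tile_image(path):
    img = Image.open(path)
    w, h = img.size
    # One interpretation of the overlap handling: discard the overlapping
    # strip on the right and bottom before tiling.
    img = img.crop((0, 0, int(w * (1 - OVERLAP)), int(h * (1 - OVERLAP))))
    w, h = img.size
    return [img.crop((left, top, left + TILE, top + TILE))
            for top in range(0, h - TILE + 1, TILE)
            for left in range(0, w - TILE + 1, TILE)]

tiles = tile_image("DJI_0001.JPG")  # placeholder file name
print(len(tiles), gps_from_exif(Image.open("DJI_0001.JPG")).get("GPSLatitude"))
```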

2.4. Data Sources

In this study, a total image-set of 1975 ultra-high-definition UAS raw images was used as the training dataset. These data were acquired in previous surveys by the marine remote sensing group of the University of the Aegean, realized for ML mapping research. The image-set was acquired from beaches with complex backgrounds, differing from the ground pattern and background of Xabelia beach. Furthermore, the data acquisition was realized using different UAS and sensors. According to this study's objectives, it was essential to divide all raw images into 512 × 512 tiles suitable for mapping ML densities in the coastal zone and to geo-enrich all the produced tiles. As a result, from the initial image-set of 1975 raw images, we produced a training set of more than 30,000 image tiles, as shown in Table 1.
Applying the data acquisition protocol in the study area (Xabelia Beach), we collected 106 raw images. After the selection process, all raw images were segmented and georeferenced in the WGS84 system, producing 7420 georeferenced tiles. An annotation campaign then took place to manually classify the dataset into litter and no litter tiles. The annotation of all tiles was implemented through the Zooniverse platform (www.zooniverse.org). The citizen science data annotation is presented in the following subsection.

2.5. Data Annotation

In the last decade, several developments and innovations in online citizen science have emerged to handle the classification of increasing quantities of digital data. Various online platforms were created to distribute data analysis, a type of citizen science [60]. Crowdsourcing significant numbers of people into the scientific process has proven to be a technique capable of making a valuable contribution to this problem. In this context, the Zooniverse platform (www.zooniverse.org) grew out of the Galaxy Zoo project launched in 2007 [60]. It is a web platform containing a cluster of projects that use volunteer contributors to distribute the data analysis and interpretation of large datasets [61]. The data analysis that volunteers are asked to complete is simple enough that members of the public can engage in the process without special knowledge of or background in the dataset or the problem behind it [60,61]. The Zooniverse platform aims to solve specific scientific problems by serving as a reduction tool for data- and labor-intensive science. This non-expert citizen science engagement transforms raw data into a product usable in research [62].
Using the annotation tool of the online platform Zooniverse Project Builder (Citizen Science Alliance, Oxford, England; www.zooniverse.org), a group of 27 volunteers classified all 30,793 tiles (Figure 4). The volunteers participated in a 1-hour training and discussion session to ensure that all of them understood the research scope and could distinguish the two classes correctly.
During annotation, the Zooniverse Project Builder platform showed the tiles in random order. The manual annotation process distinguished two categories of tiles: (i) "litter" and (ii) "no litter", i.e., tiles that contain or are free of any ML, respectively. Any artificial garbage, such as metals, tires, and parts of wooden anthropogenic structures, was tagged as "litter". An expert screened all annotated tiles to correct misclassifications made by the operators (volunteers). Additionally, tiles whose class was uncertain and that were tagged as "not sure" during annotation were revised and placed in the appropriate class. Thus, all tiles produced in the segmentation preprocessing step were used to form the final dataset. As can be seen in Table 2, the annotation process classified 7670 and 23,123 tiles as litter and no litter, respectively.

2.6. Deep Learning for ML Recognition

In this study, we utilized the valuable knowledge acquired by several CNNs on the classification task of the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) [63] through the transfer learning approach, re-purposing them towards ML recognition from UAS aerial images. Considering their performance on the ILSVRC classification task, their pioneering improvements, and their use in ML detection [22,47,48], we selected five CNN architectures from previous studies: (a) two plain architectures [49] (VGG-16 and VGG-19) and (b) three densely connected variations [64] (DenseNet-121, DenseNet-169, DenseNet-201).
The depth of a CNN architecture is a rather important matter for visual recognition tasks. VGG was one of the very first attempts to improve on AlexNet, the 2012 ILSVRC winner [65], by increasing the number of layers while using smaller (3 × 3) convolutional filters to make this depth feasible [49]. One of the main limitations of very deep CNNs is the vanishing of input information as it passes through the network. For this purpose, densely connected networks (DenseNet) increase the information flow between layers by introducing a new connectivity pattern that connects each layer with all subsequent and preceding layers through concatenation [64].
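This connectivity pattern can be made concrete with a short sketch. The simplified Keras dense block below is our illustration, not the paper's code (the real DenseNet additionally uses 1 × 1 bottleneck convolutions); it concatenates each layer's output with all preceding feature maps.

```python
# Illustrative sketch (not the paper's code) of the DenseNet connectivity
# pattern: each layer receives the concatenation of all preceding feature maps.
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # dense connection: reuse all earlier features
    return x

inputs = tf.keras.Input(shape=(56, 56, 64))
outputs = dense_block(inputs)
print(tf.keras.Model(inputs, outputs).output_shape)  # (None, 56, 56, 192)
```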

2.6.1. Training and Validation Image-Sets

Given the annotation process results, the image-set was divided into training and validation datasets with a split ratio of 80%/20%, respectively. Training and validation datasets were generated on the basis of the number of litter images. To avoid the negative impact of the imbalanced class problem, we adopted the under-sampling method [66,67] to ensure that an equal number of samples was distributed between the two classes. Hence, "no litter" samples were randomly excluded to match the number of litter samples. As a result, the training and validation image-sets ended up with 12,276 and 3064 images, respectively, as shown in Table 3.
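A minimal sketch of such random under-sampling, using the class counts from Table 2, is shown below. The file names are placeholders, and the exact split counts depend on how the 80%/20% ratio is rounded, so they differ marginally from Table 3.

```python
# Minimal sketch of random under-sampling with the class counts of Table 2.
import random

random.seed(42)  # reproducibility

litter = [f"litter_{i}.png" for i in range(7670)]
no_litter = [f"no_litter_{i}.png" for i in range(23123)]

# Randomly exclude "no litter" samples until both classes are equal in size.
balanced = litter + random.sample(no_litter, len(litter))
random.shuffle(balanced)

split = int(0.8 * len(balanced))  # 80%/20% train/validation split
train, val = balanced[:split], balanced[split:]
print(len(train), len(val))  # 12272 3068
```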

2.6.2. CNN Training

The abovementioned CNNs were trained using the transfer learning approach. We replaced the last layer of the classification stage to predict the classes defined in this study (litter, no litter) using the fine-tuning method. Due to GPU limitations, we chose a batch size of 64 samples, a fixed learning rate of 0.000001 with the Adam optimizer [68], and a dropout ratio of 0.5; we reduced the input size to 224 × 224 pixels and, finally, trained for 40 epochs. Additionally, image augmentation was applied (rotation, shear, horizontal and vertical flips).
Experiments were conducted in the Python 3.7 programming environment using TensorFlow, an open-source, end-to-end machine learning framework [69], and the Keras high-level API [70]. The training and inference processes took place on an Intel i7-8700 (3.2 GHz) PC with a CUDA [71]-enabled NVIDIA GeForce RTX 2070 GPU equipped with 8 GB of memory suitable for parallel computing.
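A minimal Keras sketch of this setup, shown for VGG19, is given below. The classification head size and the directory layout are our assumptions; this illustrates the reported hyperparameters rather than reproducing the authors' exact code.

```python
# Minimal Keras transfer-learning sketch with the reported hyperparameters
# (batch 64, Adam at 1e-6, dropout 0.5, 224 x 224 inputs, 40 epochs).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),  # litter / no litter
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-6),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Augmentation as described in the text: rotation, shear, and both flips.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=90, shear_range=0.2,
    horizontal_flip=True, vertical_flip=True, rescale=1.0 / 255)

train_gen = datagen.flow_from_directory(
    "tiles/train", target_size=(224, 224), batch_size=64, class_mode="categorical")
val_gen = datagen.flow_from_directory(
    "tiles/val", target_size=(224, 224), batch_size=64, class_mode="categorical")

model.fit(train_gen, validation_data=val_gen, epochs=40)
```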

2.7. Metrics Performance

The performance of the examined CNN architectures was evaluated using f-score statistical analysis. Given the actual values of the testing images and a set of predictions, we generated a confusion matrix, as shown in Figure 5. The confusion matrix is the basis for assessing a model's ability to generalize to new and unseen images.
The actual image classes are on the y-axis, while the predicted ones are on the x-axis of the confusion matrix illustration. TP (true positive) stands for the number of correctly classified litter tiles, while FP (false positive) is the number of no litter tiles wrongly predicted as litter. TN (true negative) is the number of no litter tiles correctly predicted as no litter, and FN (false negative) is the number of litter tiles wrongly classified as no litter. Using these values, several statistical measurements, such as precision, recall, f-score, and accuracy, may be calculated to evaluate the models' performance. Precision (1) is the ratio of correctly predicted litter tiles over all tiles predicted as litter.
precision = TP / (TP + FP) (1)
Recall (2) is the proportion of actual litter tiles that were correctly classified.
recall = TP / (TP + FN) (2)
However, both fail to capture the whole picture of a model's performance, especially on imbalanced datasets. Therefore, we combine them into a single statistical measurement, the f-score, their harmonic mean (3).
f1 = 2 × (precision × recall) / (precision + recall) (3)
Last but not least, the overall accuracy is also calculated. Accuracy is a metric of the model's overall performance, considering the correctly classified tiles over the whole set of predictions (4). Accuracy depends on the balance between classes.
accuracy = (TP + TN) / (TP + TN + FP + FN) (4)
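Equations (1)-(4) translate directly into code. The sketch below computes them from confusion matrix counts; the example counts are illustrative, not the study's results.

```python
# Minimal sketch computing Equations (1)-(4) from confusion matrix counts.
def metrics(tp, fp, tn, fn):
    precision = tp / (tp + fp)                          # Equation (1)
    recall = tp / (tp + fn)                             # Equation (2)
    f1 = 2 * precision * recall / (precision + recall)  # Equation (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)          # Equation (4)
    return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

# Illustrative counts, not the study's results.
print(metrics(tp=2700, fp=700, tn=3100, fn=700))
```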

3. Results and Discussion

The experimental implementation of our proposed detection methodology was performed using a dataset consisting of two sub-datasets (a training and validation dataset and a test dataset). The training and validation dataset included a total set of 15,340 image tiles of 512 × 512 pixels, created and annotated with the methodology described. The five selected models were trained on this dataset. Additionally, we performed data augmentation by flipping images left-right and up-down, rotating, and shearing to enhance the dataset.
Finally, to evaluate the selected deep learning models’ generalization ability on the ML classification task in new, unseen images, we used the Xabelia dataset. Thus, the test dataset comprised 7420 512 × 512 tiles, created from Xabelia beach raw images collected following the proposed data acquisition protocol. Additionally, all Xabelia tiles were annotated through the Zooniverse platform to evaluate the proposed models’ generalization ability. According to the manual classification results, 3411 tiles were identified containing litter, while 4009 tiles were classified as no litter.

3.1. Training

The deep learning models were re-purposed to identify ML in UAS images. The training and validation accuracy and loss are depicted in Figure 6. During the training and validation process, a significant gap between validation and training accuracy occurred in the DenseNet models. This gap is a sign of overfitting, indicating that the models fit the training samples very well while their ability to generalize to new samples remains relatively low. The VGG models initially failed to fit the training samples, while generalization remained high until the 30th epoch; afterwards, the two curves indicated a comparatively acceptable fit on both the training and validation image-sets. Moreover, noisy accuracy and loss curves may indicate misrepresentative training and validation samples. The best fit on both training and validation image-sets occurred with the VGG19 architecture, as shown below.

3.2. Generalization Ability

To demonstrate the performance of the trained models, we used the Xabelia beach dataset, which was unknown to the deep learning models. The use of an unseen image-set showcases the geographical transferability of our approach to new and unknown beaches. Table 4 presents the statistical measurements calculated to evaluate the models' performance on a new, unseen dataset. The DenseNet variations failed to successfully predict the input images, as shown by the differences between precision, recall, and the f-score in Table 4. The results indicate that most of the models failed to predict no litter samples successfully. VGG16 and VGG19 provided slightly better results in predicting both the litter and no litter classes. The VGG19 architecture obtained the best prediction, with an overall accuracy of 77.60%, while its precision, recall, and f-score values were also acceptable.
Even though the experimental implementation was successfully conducted and obtained acceptable results, FP and FN values remained relatively high. ML in the coastal zone exists under numerous variations of colors, shapes, and sizes, and therefore it is very challenging to achieve higher overall accuracy and f-score.

3.3. Density Maps

For the creation of ML density maps, we used the deep learning results. The network with the best ML detection performance (VGG19) was used to create two vector files. The first was a point vector file containing all the 512 × 512 tile centroids. The coordinates of all centroids were calculated in the tiling process using the GPS tags in each image's EXIF info. During this process, all metadata of the raw image were transferred to the corresponding tile centroids. Additionally, each centroid was tagged with the ML detection information, separating those that contained ML from those that did not. A second vector file, a 10 × 10 m grid over the study area, was created on the basis of the European Union reference grid for geospatial data [72]. These two files were stored in a PostgreSQL database, and the final density map was dynamically created using the Structured Query Language (SQL), calculating the number of litter points inside each 100 square meter grid cell. These results were used to create density maps depicting the accumulation of ML (Figure 7).
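A minimal sketch of such a per-cell density query is shown below, assuming a PostGIS-enabled PostgreSQL database with a point table of classified tile centroids and a polygon table of 10 × 10 m grid cells. All table, column, and database names are illustrative assumptions, not the authors' schema.

```python
# Minimal sketch of the per-cell density query (illustrative schema).
import psycopg2

QUERY = """
SELECT g.cell_id,
       COUNT(c.id) AS litter_tiles          -- litter tiles per 100 m2 cell
FROM   grid_10m AS g
LEFT JOIN tile_centroids AS c
       ON ST_Contains(g.geom, c.geom) AND c.has_litter
GROUP  BY g.cell_id;
"""

with psycopg2.connect("dbname=ml_density") as conn, conn.cursor() as cur:
    cur.execute(QUERY)
    for cell_id, litter_tiles in cur.fetchall():
        print(cell_id, litter_tiles)
```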
To evaluate the performance of all CNNs in mapping ML abundance on Xabelia beach, we annotated the tiles using the Zooniverse web platform. Furthermore, the citizen science annotation results were manually screened by an experienced operator to produce a reference tile dataset of the ML present on the beach. This dataset was used to generate the centroid dataset and produce an ML reference density map. The map was employed to evaluate, visually and quantitatively, the performance of all CNNs used. Figure 7 depicts the ML density maps produced using the reference tile dataset and those automatically created from the results of all CNN models. The best-performing model, VGG19, returned ML accumulation patterns visually consistent with the manual method, identifying the main ML clusters located around the center of the beach area.
The statistical comparison of the density results was conducted with two error metrics, the mean absolute error (MAE) and the root mean square error (RMSE) (Table 5). The manually classified dataset acted as the reference map, and all errors were computed against it. The results show that the best-performing model was VGG19, which presented the lowest errors in both MAE and RMSE, with values of 1.39 and 1.92, respectively. VGG16 produced errors of approximately 1.9 to 2.7 tiles per 100 m2, and all DenseNet variations were steadily off by more than 4 tiles.
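Both metrics are computed per grid cell against the reference densities, as in the short sketch below; the density values are illustrative, not the study's data.

```python
# Minimal sketch of the two error metrics, computed per 100 m2 grid cell
# against the manually classified reference densities.
import numpy as np

def mae_rmse(reference, predicted):
    diff = reference - predicted          # directional error per grid cell
    mae = np.abs(diff).mean()             # mean absolute error
    rmse = np.sqrt((diff ** 2).mean())    # root mean square error
    return mae, rmse

ref = np.array([3, 5, 0, 2, 7])   # reference litter tiles per cell (illustrative)
pred = np.array([4, 5, 1, 2, 9])  # a model's litter tiles per cell (illustrative)
print(mae_rmse(ref, pred))        # (0.8, 1.095...)
```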
To determine whether the models over- or underestimated litter density, we created boxplots depicting the individual density differences between the manual classification and the model results (Figure 8). The difference was calculated by subtracting each model's results from the manual classification dataset. The mean error for both VGG models was negative, showing that they generally overestimated litter density, whereas all three DenseNet models underestimated it. Fifty percent of the VGG19 errors were concentrated in the range between −2 and 0, with only two outliers: one underestimating litter density by four tiles and another overestimating it by seven. Similar results were produced by the VGG16 model, but VGG19 performed consistently better, presenting smaller directional error variations. The DenseNet models had a mean underestimation error of three tiles per 100 m2, and their overall error range was wide, from −1 to 14.

3.4. Discussion

To date, the most common method for monitoring marine litter involves in situ visual surveys on the beach [33,53,54]. In this method, people are required to walk along transect lines 100 m long, from the strandline and perpendicular to the coastline [33,73]. The survey typically requires three to five persons for about 3 h to cover a small beach. Although these manual surveys can be achieved at low cost, with minimal equipment, and by inexperienced surveyors under instruction [54], they are labor-intensive, time-consuming [74], and spatially limited [4,75,76]. Furthermore, ML classification relies upon the participants' judgment; hence, it depends on their skills and experience. Accessibility of the beach to be mapped is another concern for surveyors, as it is sometimes difficult or dangerous to conduct surveys in inaccessible or steep areas. As the marine litter problem escalates, new monitoring and mapping survey approaches are needed that use minimal labor and offer fast spatiotemporal repetition, cost-effectiveness, and efficiency. In this context, we propose concrete methodological steps for the appropriate citizen science use of popular commercial-grade drones as close remote sensing data acquisition platforms to acquire data for ML mapping in the coastal zone. This study is the first to introduce a citizen science UAS data acquisition protocol for mapping ML concentrations in the coastal zone. The proposed methodology allows citizen science data acquisition using off-the-shelf commercial drones, leading to broader area coverage. This framework supports the idea of empowering drone owners to act as stewards of the environment, providing new and valuable data that enhance the data acquisition process for mapping ML accumulation in the coastal zone.
We should note that in this study, we were interested in mapping only two beach classes (litter, no litter) using datasets from beaches with varied and complex backgrounds. Compared with previous works [22,34,46], the method presented here calculates densities using the raw data acquired from the UAS; thus, it does not rely on an orthomosaic. This approach has a significant advantage in mapping coverage, as there is no need for high front and lateral overlap values, which decrease the area covered per UAS mission. As a result, a more significant amount of data (aerial images) can be collected per data acquisition. Furthermore, the method is more straightforward, as it does not depend on the complex and demanding structure from motion and multi-view stereo (SfM-MVS) processing step. Omitting this step reduces the in situ data acquisition effort for ground control point (GCP) deployment in the study area; GCPs are needed for georeferencing an orthomosaic in a specific cartographic coordinate system.
In contrast with most previous publications [22,34,46,47,48], this study used a significantly larger training and validation dataset acquired from five different beach environments with complex background characteristics and litter concentrations. Additionally, the evaluation of the deep learning models’ generalization ability in a completely new beach environment expands the geographical transferability of our approach to new and unknown beaches.

4. Conclusions

The presented framework combines drone technology and developments in artificial intelligence for computer vision applications to create a protocol that citizens can use to monitor shorelines for marine litter. This approach has great potential to be applied for routine monitoring by both citizens and regulatory bodies, especially for monitoring inaccessible locations or sensitive areas.
In the present work, very high resolution aerial images were acquired from a beach with a complex background using an off-the-shelf consumer-grade drone. These images were used as input data for deep learning models to identify ML in the coastal zone and create ML density maps. The Zooniverse citizen science tool was used to annotate the input data into litter and no litter classes. The annotation process was implemented in a short time with the help of volunteers, making it more efficient and effective.
Five deep learning models were examined and trained to distinguish marine litter items in UAS very high resolution images collected from beaches with complex backgrounds. The proposed framework can detect marine litter in the coastal environment with an overall accuracy of 77.6%. We performed a comprehensive evaluation of our method, showing that it generalizes well to unseen images, even when applied to the completely new data acquired from Xabelia beach. The evaluation of the results provides significant evidence of our method's applicability to several ML and background variations; nonetheless, generalization to more complex coastal environments will require re-training with more data. The results of this study are encouraging. However, in the present study, the number of training and validation images was relatively small, providing limited signal for shaping the network weights. Our approach's limitations are that (i) it is not scale-invariant, (ii) it can be computationally prohibitive for real-time applications, (iii) it was trained on a relatively small dataset, and (iv) it requires a high number of samples to produce high recall. We believe that these limitations can be addressed in future work using larger training and validation datasets. Additionally, augmentation techniques will be used to enhance the training and validation dataset.
Today, mapping ML in the coastal zone is carried out using conventional on-site sampling surveys. Existing data collection systems are limited and, therefore, unable to answer fundamental questions about ML concentrations and their spatial and temporal dynamics. Since UAS are nowadays very affordable, widely used, and versatile for environmental studies, this work intends to sustain and give impulse to the use of citizen science UAS imagery for quantifying and monitoring the spatiotemporal distribution of marine litter in the coastal zone. The results of the proposed UAS deep learning approach are encouraging, as this combination could offer an instrumental tool for sustainable coastal zone environmental management. To achieve large-scale reproducibility of this framework, further research is needed on the critical limitations that influence data acquisition, such as sunlight conditions and the associated terrain-shading effects, as well as on the parameters of the automatic ML detection process.

Author Contributions

Conceptualization, A.P., M.B., S.S., K.T.; methodology, A.P., M.B., K.T.; validation, A.P., M.B.; formal analysis, A.P., M.B.; data curation, A.P., M.B., S.S.; writing—original draft preparation, A.P., M.B.; writing—review and editing, A.P., M.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research is co-financed by Greece and the European Union (European Social Fund—ESF) through the Operational Programme “Human Resources Development, Education and Lifelong Learning” in the context of the project “Reinforcement of Postdoctoral Researchers—2nd Cycle” (MIS-5033021), implemented by the State Scholarships Foundation (IKΥ).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Löhr, A.; Savelli, H.; Beunen, R.; Kalz, M.; Ragas, A.; Van Belleghem, F. Solutions for global marine litter pollution. Curr. Opin. Environ. Sustain. 2017, 28, 90–99. [Google Scholar] [CrossRef] [Green Version]
  2. Schulz, M.; Clemens, T.; Förster, H.; Harder, T.; Fleet, D.; Gaus, S.; Grave, C.; Flegel, I.; Schrey, E.; Hartwig, E. Statistical analyses of the results of 25 years of beach litter surveys on the south-eastern North Sea coast. Mar. Environ. Res. 2015, 109. [Google Scholar] [CrossRef]
  3. Vikas, M.; Dwarakish, G.S. Coastal Pollution: A Review. Aquat. Procedia 2015, 4, 381–388. [Google Scholar] [CrossRef]
  4. Galgani, F. Marine litter, future prospects for research. Front. Mar. Sci. 2015. [Google Scholar] [CrossRef] [Green Version]
  5. Munari, C.; Corbau, C.; Simeoni, U.; Mistri, M. Marine litter on Mediterranean shores: Analysis of composition, spatial distribution and sources in north-western Adriatic beaches. Waste Manag. 2016, 49, 483–490. [Google Scholar] [CrossRef] [PubMed]
  6. Ríos, N.; Frias, J.P.G.L.; Rodríguez, Y.; Carriço, R.; Garcia, S.M.; Juliano, M.; Pham, C.K. Spatio-temporal variability of beached macro-litter on remote islands of the North Atlantic. Mar. Pollut. Bull. 2018, 133, 304–311. [Google Scholar] [CrossRef]
  7. Valavanidis, A.; Vlachogianni, T. Marine litter: Man-made solid waste pollution in the Mediterranean Sea and coastline. Abundance, composition and sources identification. Sci. Adv. Environ. Chem. Toxicol. Ecotoxicol. 2012, 1, 18. [Google Scholar]
  8. United Nations Environmental Programme (UNEP). Plastic Debris in the Ocean (UNEP Year Book). In UNEP Year Book 2014 Emerging issues Update; UNEP Division of Early Warning and Assessment: Nairobi, Kenya, 2014; ISBN 978-92-807-3381-5. [Google Scholar]
  9. G20. Annex to G20 Leaders Declaration: G20 Action Plan on Marine Litter; Federal Ministry for Environment, Nature Conservation and Nuclear Safety: Hamburg, Germany, 2017.
  10. Cicin-Sain, B. Conserve and sustainably use the oceans, seas and marine resources for sustainable development. UN Chron. 2015, 51, 32–33. [Google Scholar] [CrossRef]
  11. Ferrari, R.; McKinnon, D.; He, H.; Smith, R.; Corke, P.; González-Rivero, M.; Mumby, P.; Upcroft, B. Quantifying Multiscale Habitat Structural Complexity: A Cost-Effective Framework for Underwater 3D Modelling. Remote Sens. 2016, 8, 113. [Google Scholar] [CrossRef] [Green Version]
  12. Chircop, A.; Coffen-Smout, S.; McConnell, M.L. Report of the United Nations Conference to Support the Implementation of Sustainable Development Goal 14: Conserve and Sustainably Use the Oceans, Seas and Marine Resources for Sustainable Development, 5–9 June 2017. Ocean Yearb. Online 2018, 32, 752–817. [Google Scholar] [CrossRef]
  13. Morseletto, P. A new framework for policy evaluation: Targets, marine litter, Italy and the Marine Strategy Framework Directive. Mar. Policy 2020, 117, 103956. [Google Scholar] [CrossRef]
  14. Maes, T.; Perry, J.; Alliji, K.; Clarke, C.; Birchenough, S.N.R. Shades of grey: Marine litter research developments in Europe. Mar. Pollut. Bull. 2019, 146, 274–281. [Google Scholar] [CrossRef] [PubMed]
  15. Maximenko, N.; Corradi, P.; Law, K.L.; van Sebille, E.; Garaba, S.P.; Lampitt, R.S.; Galgani, F.; Martinez-Vicente, V.; Goddijn-Murphy, L.; Veiga, J.M.; et al. Towards the integrated marine debris observing system. Front. Mar. Sci. 2019, 6. [Google Scholar] [CrossRef] [Green Version]
  16. Costanzo, L.G.; Marletta, G.; Alongi, G. Assessment of marine litter in the coralligenous habitat of a marine protected area along the ionian coast of sicily (central mediterranean). J. Mar. Sci. Eng. 2020, 8, 656. [Google Scholar] [CrossRef]
  17. Painting, S.J.; Collingridge, K.A.; Durand, D.; Grémare, A.; Créach, V.; Arvanitidis, C.; Bernard, G. Marine monitoring in Europe: Is it adequate to address environmental threats and pressures? Ocean Sci. 2020, 16, 235–252. [Google Scholar] [CrossRef] [Green Version]
  18. Cheshire, A.; Adler, E.; Barbière, J.; Cohen, Y.; Evans, S.; Jarayabhand, S.; Jeftic, L.; Jung, R.-T.; Kinsey, S.; Kusui, E.T.; et al. UNEP/IOC Guidelines on Survey and Monitoring of Marine Litter; UNEP Regional Seas Reports and Studies, No. 186; IOC Technical Series No. 83; United Nations Environment Programme/Intergovernmental Oceanographic Commission, 2009; pp. xii–120. [Google Scholar]
  19. Husson, E.; Reese, H.; Ecke, F. Combining Spectral Data and a DSM from UAS-Images for Improved Classification of Non-Submerged Aquatic Vegetation. Remote Sens. 2017, 9, 247. [Google Scholar] [CrossRef] [Green Version]
  20. Veiga, J.M.; Fleet, D.; Kinsey, S.; Nilsson, P.; Vlachogianni, T.; Werner, S.; Galgani, F.; Thompson, R.C.; Dagevos, J.; Gago, J.; et al. Identifying Sources of Marine Litter. MSFD GES TG Marine Litter Thematic Report. In JRC Technical Reports; Publications Office of the European Union: Luxemburg, 2016; ISBN 9789279645228. [Google Scholar]
  21. Topouzelis, K.; Papakonstantinou, A.; Garaba, S.P. Detection of floating plastics from satellite and unmanned aerial systems (Plastic Litter Project 2018). Int. J. Appl. Earth Obs. Geoinf. 2019, 79, 175–183. [Google Scholar] [CrossRef]
  22. Gonçalves, G.; Andriolo, U.; Pinto, L.; Duarte, D. Mapping marine litter with Unmanned Aerial Systems: A showcase comparison among manual image screening and machine learning techniques. Mar. Pollut. Bull. 2020, 155, 111158. [Google Scholar] [CrossRef]
  23. Gonçalves, G.; Andriolo, U.; Gonçalves, L.; Sobral, P.; Bessa, F. Quantifying marine macro litter abundance on a sandy beach using unmanned aerial systems and object-oriented machine learning methods. Remote Sens. 2020, 12, 2599. [Google Scholar] [CrossRef]
  24. Andriolo, U.; Gonçalves, G.; Bessa, F.; Sobral, P. Mapping marine litter on coastal dunes with unmanned aerial systems: A showcase on the Atlantic Coast. Sci. Total Environ. 2020, 736. [Google Scholar] [CrossRef]
  25. Benassai, G.; Aucelli, P.; Budillon, G.; De Stefano, M.; Di Luccio, D.; Di Paola, G.; Montella, R.; Mucerino, L.; Sica, M.; Pennetta, M. Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation. Nat. Hazards Earth Syst. Sci. Discuss. 2017, 1–14. [Google Scholar] [CrossRef] [Green Version]
  26. Bao, Z.; Sha, J.; Li, X.; Hanchiso, T.; Shifaw, E. Monitoring of beach litter by automatic interpretation of unmanned aerial vehicle images using the segmentation threshold method. Mar. Pollut. Bull. 2018, 137, 388–398. [Google Scholar] [CrossRef]
  27. Haarr, M.L.; Westerveld, L.; Fabres, J.; Iversen, K.R.; Busch, K.E.T. A novel GIS-based tool for predicting coastal litter accumulation and optimising coastal cleanup actions. Mar. Pollut. Bull. 2019, 139, 117–126. [Google Scholar] [CrossRef]
  28. Topouzelis, K.; Papageorgiou, D.; Karagaitanakis, A.; Papakonstantinou, A.; Ballesteros, M.A. Remote sensing of sea surface artificial floating plastic targets with Sentinel-2 and unmanned aerial systems (plastic litter project 2019). Remote Sens. 2020, 12, 2013. [Google Scholar] [CrossRef]
  29. Smail, E.A.; DiGiacomo, P.M.; Seeyave, S.; Djavidnia, S.; Celliers, L.; Le Traon, P.Y.; Gault, J.; Escobar-Briones, E.; Plag, H.P.; Pequignet, C.; et al. An introduction to the ‘Oceans and Society: Blue Planet’ initiative. J. Oper. Oceanogr. 2019, 12, S1–S11. [Google Scholar] [CrossRef] [Green Version]
  30. Papakonstantinou, A.; Topouzelis, K.; Doukari, M.; Andreadis, O. Mapping refugee litters in the eastern coast of Lesvos using UAS, an emerging marine litter problem. Abstr. ICA 2019, 1, 1–2. [Google Scholar] [CrossRef] [Green Version]
  31. Velegrakis, A.; Andreadis, O.; Papakonstantinou, A.; Manoutsoglou, E.; Doukari, M.; Hasiotis, T.; Topouzelis, K.; Sea, K.A. Preliminary Study on the Emerging Marine Litter Problem Along the Eastern Coast of Lesvos Isl., Greece. In Proceedings of the Commission Internationale pour l’Exploration Scientifique de la Méditerranée (CIESM) Congress, Kiel, Germany, 12–16 September 2016. [Google Scholar]
  32. Geraeds, M.; van Emmerik, T.; de Vries, R.; bin Ab Razak, M.S. Riverine plastic litter monitoring using Unmanned Aerial Vehicles (UAVs). Remote Sens. 2019, 11, 2045. [Google Scholar] [CrossRef] [Green Version]
  33. Lo, H.S.; Wong, L.C.; Kwok, S.H.; Lee, Y.K.; Po, B.H.K.; Wong, C.Y.; Tam, N.F.Y.; Cheung, S.G. Field test of beach litter assessment by commercial aerial drone. Mar. Pollut. Bull. 2020, 151, 110823. [Google Scholar] [CrossRef]
  34. Martin, C.; Parkes, S.; Zhang, Q.; Zhang, X.; McCabe, M.F.; Duarte, C.M. Use of unmanned aerial vehicles for efficient beach litter monitoring. Mar. Pollut. Bull. 2018, 131, 662–673. [Google Scholar] [CrossRef] [Green Version]
  35. Topouzelis, K.; Papakonstantinou, A. The Use of Unmanned Aerial Systems for Seagrass Mapping. Conf. Pap. 2016, 81100. [Google Scholar]
  36. Papakonstantinou, A.; Stamati, C.; Topouzelis, K. Comparison of True-Color and Multispectral Unmanned Aerial Systems Imagery for Marine Habitat Mapping Using Object-Based Image Analysis. Remote Sens. 2020, 12, 554. [Google Scholar] [CrossRef] [Green Version]
  37. Papakonstantinou, A.; Topouzelis, K.; Doukari, M. UAS close range remote sensing for mapping coastal environments. In Proceedings of the Fifth International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2017), Paphos, Cyprus, 20–23 March 2017; Papadavid, G., Hadjimitsis, D.G., Michaelides, S., Ambrosia, V., Themistocleous, K., Schreier, G., Eds.; SPIE: Bellingham, WA, USA, 2017; Volume 10444, p. 35. [Google Scholar]
  38. Doukari, M.; Batsaris, M.; Papakonstantinou, A.; Topouzelis, K. A Protocol for Aerial Survey in Coastal Areas Using UAS. Remote Sens. 2019, 11, 1913. [Google Scholar] [CrossRef] [Green Version]
  39. Mury, A.; Collin, A.; Houet, T.; Alvarez-Vanhard, E.; James, D. Using multispectral drone imagery for spatially explicit modeling of wave attenuation through a salt marsh meadow. Drones 2020, 4, 25. [Google Scholar] [CrossRef]
  40. Taddia, Y.; Stecchi, F.; Pellegrinelli, A. Coastal mapping using dji phantom 4 RTK in post-processing kinematic mode. Drones 2020, 4, 9. [Google Scholar] [CrossRef] [Green Version]
  41. Riihimäki, H.; Luoto, M.; Heiskanen, J. Estimating fractional cover of tundra vegetation at multiple scales using unmanned aerial systems and optical satellite data. Remote Sens. Environ. 2019, 224, 119–132. [Google Scholar] [CrossRef]
42. Kershaw, P.J.; Turra, A.; Galgani, F. (Eds.) Guidelines for the monitoring and assessment of plastic litter in the ocean. In GESAMP Reports and Studies; No. 99; IMO/FAO/UNESCO-IOC/UNIDO/WMO/IAEA/UN/UNEP/UNDP/ISA Joint Group of Experts on the Scientific Aspects of Marine Environmental Protection (GESAMP); United Nations Office: Nairobi, Kenya, 2019; 130p. [Google Scholar]
  43. Deidun, A.; Gauci, A.; Lagorio, S.; Galgani, F. Optimising beached litter monitoring protocols through aerial imagery. Mar. Pollut. Bull. 2018, 131, 212–217. [Google Scholar] [CrossRef] [PubMed]
  44. Abadi, M.; Barham, P.; Chen, J.; Chen, Z.; Davis, A.; Dean, J.; Devin, M.; Ghemawat, S.; Irving, G.; Isard, M.; et al. TensorFlow: A system for large-scale machine learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), Savannah, GA, USA, 2–4 November 2016; pp. 265–283. [Google Scholar]
  45. Fallati, L.; Polidori, A.; Salvatore, C.; Saponari, L.; Savini, A.; Galli, P. Anthropogenic Marine Debris assessment with Unmanned Aerial Vehicle imagery and deep learning: A case study along the beaches of the Republic of Maldives. Sci. Total Environ. 2019, 693, 133581. [Google Scholar] [CrossRef]
  46. Gonçalves, G.; Andriolo, U.; Pinto, L.; Bessa, F. Mapping marine litter using UAS on a beach-dune system: A multidisciplinary approach. Sci. Total Environ. 2020, 706, 135742. [Google Scholar] [CrossRef]
  47. Kylili, K.; Hadjistassou, C.; Artusi, A. An intelligent way for discerning plastics at the shorelines and the seas. Environ. Sci. Pollut. Res. 2020, 27, 42631–42643. [Google Scholar] [CrossRef]
  48. Kylili, K.; Kyriakides, I.; Artusi, A.; Hadjistassou, C. Identifying floating plastic marine debris using a deep learning approach. Environ. Sci. Pollut. Res. 2019, 26, 17091–17099. [Google Scholar] [CrossRef]
  49. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  50. Schulz, M.; van Loon, W.; Fleet, D.M.; Baggelaar, P.; van der Meulen, E. OSPAR standard method and software for statistical analysis of beach litter data. Mar. Pollut. Bull. 2017, 122, 166–175. [Google Scholar] [CrossRef] [PubMed]
  51. Galgani, F.; Hanke, G.; Werner, S.; De Vrees, L. Marine litter within the European Marine Strategy Framework Directive. ICES J. Mar. Sci. 2013, 70, 1055–1064. [Google Scholar] [CrossRef]
52. OSPAR. Guideline for Monitoring Marine Litter on the Beaches in the OSPAR Maritime Area, 1.0 ed.; OSPAR Commission: London, UK, 2010; 16p, plus appendices, forms, and photoguides. [Google Scholar] [CrossRef]
  53. Haseler, M.; Schernewski, G.; Balciunas, A.; Sabaliauskaite, V. Monitoring methods for large micro- and meso-litter and applications at Baltic beaches. J. Coast. Conserv. 2018, 22, 27–50. [Google Scholar] [CrossRef]
  54. Rees, G.; Pond, K. Marine litter monitoring programmes-A review of methods with special reference to national surveys. Mar. Pollut. Bull. 1995, 30, 103–108. [Google Scholar] [CrossRef]
55. Pix4D. Pix4Dcapture. Available online: https://www.pix4d.com/product/pix4dcapture (accessed on 31 October 2020).
56. Da-Jiang Innovations (DJI). Mavic 2 Enterprise Series. Available online: https://www.dji.com/gr/mavic-2-enterprise?site=brandsite&from=nav (accessed on 11 December 2020).
57. Da-Jiang Innovations (DJI). Phantom 4 Pro V2.0. Available online: https://www.dji.com/gr/phantom-4-pro-v2?site=brandsite&from=nav (accessed on 11 December 2020).
  58. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  59. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef] [Green Version]
60. Simpson, R.; Page, K.R.; De Roure, D. Zooniverse: Observing the world’s largest citizen science platform. In WWW 2014 Companion, Proceedings of the 23rd International Conference on World Wide Web, Seoul, Korea, 7–11 April 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 1049–1054. [CrossRef]
  61. Cox, J.; Oh, E.Y.; Simmons, B.; Lintott, C.; Masters, K.; Greenhill, A.; Graham, G.; Holmes, K. Defining and Measuring Success in Online Citizen Science: A Case Study of Zooniverse Projects. Comput. Sci. Eng. 2015, 17, 28–41. [Google Scholar] [CrossRef]
62. Fortson, L.; Masters, K.; Nichol, R.; Borne, K.; Edmondson, E.; Lintott, C.; Raddick, J.; Schawinski, K.; Wallin, J. Galaxy Zoo: Morphological Classification and Citizen Science. In Advances in Machine Learning and Data Mining for Astronomy; Way, M.J., Scargle, J.D., Ali, K.M., Srivastava, A.N., Eds.; CRC Press/Taylor & Francis Group: Boca Raton, FL, USA, 2012; pp. 213–236. [Google Scholar]
  63. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet Large Scale Visual Recognition Challenge. Int. J. Comput. Vis. 2015, 115, 211–252. [Google Scholar] [CrossRef] [Green Version]
  64. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar] [CrossRef] [Green Version]
  65. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. In Proceedings of the Advances in Neural Information Processing Systems; Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q., Eds.; Curran Associates, Inc.: Lake Tahoe, NV, USA, 2012; Volume 25, pp. 1097–1105. [Google Scholar]
  66. Duarte, D.; Andriolo, U.; Gonçalves, G. Addressing the class imbalance problem in the automatic image classification of coastal litter from orthophotos derived from uas imagery. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 5, 439–445. [Google Scholar] [CrossRef]
  67. Buda, M.; Maki, A.; Mazurowski, M.A. A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 2018, 106, 249–259. [Google Scholar] [CrossRef] [Green Version]
68. Kingma, D.P.; Ba, J.L. Adam: A method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar]
  69. Abadi, M.; Agarwal, A.; Barham, P.; Brevdo, E.; Chen, Z.; Citro, C.; Corrado, G.S.; Davis, A.; Dean, J.; Devin, M.; et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. arXiv 2016, arXiv:1603.04467. [Google Scholar]
70. Chollet, F. Keras: The Python Deep Learning Library. Available online: https://Keras.io (accessed on 10 December 2020).
71. NVIDIA Corporation. CUDA Zone. Available online: https://developer.nvidia.com/cuda-zone (accessed on 10 December 2020).
  72. Annoni, A.; Bernard, L.; Lillethun, A.; Ihde, J.; Gallego, J.; Rives, M.; Sommer, E.; Poelman, H.; Condé, S.; Greaves, M.; et al. Short Proceedings of the 1st European Workshop on Reference Grids; JRC-Institute for Environment and Sustainability: Ispra, Italy.
  73. Xanthos, D.; Walker, T.R. International policies to reduce plastic marine pollution from single-use plastics (plastic bags and microbeads): A review. Mar. Pollut. Bull. 2017, 118, 17–26. [Google Scholar] [CrossRef] [PubMed]
  74. Nazerdeylami, A.; Majidi, B.; Movaghar, A. Autonomous litter surveying and human activity monitoring for governance intelligence in coastal eco-cyber-physical systems. Ocean Coast. Manag. 2021, 200, 105478. [Google Scholar] [CrossRef]
  75. Acosta, J.; Allee, R.J.; Althaus, F.; Alvarez, G.; Amblas, D.; Anderson, T.J.; Archambault, P.; Armstrong, R.A.; Bäck, S.; Baker, E.K.; et al. Contributors. In Seafloor Geomorphology as Benthic Habitat; Harris, P.T., Baker, E.K., Eds.; Elsevier: London, UK, 2012; pp. xxxi–xlv. ISBN 978-0-12-385140-6. [Google Scholar]
  76. Papachristopoulou, I.; Filippides, A.; Fakiris, E.; Papatheodorou, G. Vessel-based photographic assessment of beach litter in remote coasts. A wide scale application in Saronikos Gulf, Greece. Mar. Pollut. Bull. 2020, 150, 110684. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Flowchart of the proposed methodology.
Figure 2. Pix4Dcapture unmanned aerial systems (UAS) mission planning preview.
Figure 3. Pix4Dcapture UAS mission planning preview for Xabelia beach.
Figure 4. Preview of the annotation process through the Zooniverse project builder platform.
Figure 5. Visual explanation of the confusion matrix.
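For reference, the scores reported in Table 4 follow the standard confusion-matrix definitions, where TP, FP, FN, and TN denote true positives, false positives, false negatives, and true negatives for the litter class, and the f-score is the harmonic mean of precision and recall:

$$
\text{Precision} = \frac{TP}{TP + FP}, \qquad
\text{Recall} = \frac{TP}{TP + FN}
$$

$$
\text{f-score} = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}, \qquad
\text{Accuracy} = \frac{TP + TN}{TP + FP + FN + TN}
$$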
Figure 6. The learning experience of the corresponding models. (A) Training and validation accuracy for each epoch and (B) training and validation loss for each epoch.
Figure 7. Marine litter (ML) density maps obtained manually and automatically using deep learning, depicted from left to right in the following order: (A) manual classification, (B) VGG19, (C) VGG16, (D) DenseNet201, (E) DenseNet169, and (F) DenseNet121.
Figure 8. Boxplots of the differences in litter-tile concentration per 100 m² between the reference values (manual classification) and the model results. Negative values represent overestimation; positive values represent underestimation.
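Figures 7 and 8 compare gridded litter densities per 100 m². The following is a minimal sketch of such an aggregation step, assuming each litter-classified tile is reduced to a georeferenced centroid; the coordinates, the 100 m extent, and the 10 m (100 m²) cell size are illustrative assumptions, not the authors' grid (presumably based on the European reference grid [72]):

```python
# Hedged sketch: aggregate litter-tile centroids into a 10 m x 10 m (100 m^2)
# grid and compute per-cell density differences against a manual reference.
import numpy as np

rng = np.random.default_rng(0)
# (x, y) centroids in metres of tiles classified as litter (illustrative data)
pred_xy = rng.uniform(0, 100, size=(300, 2))   # model-classified tiles
ref_xy = rng.uniform(0, 100, size=(320, 2))    # manually classified tiles

def density_grid(xy: np.ndarray, extent: float = 100.0, cell: float = 10.0) -> np.ndarray:
    """Count litter tiles per grid cell; each cell covers 100 m^2."""
    bins = np.arange(0.0, extent + cell, cell)
    grid, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=(bins, bins))
    return grid

# Negative cells correspond to model overestimation (cf. Figure 8)
diff = density_grid(ref_xy) - density_grid(pred_xy)
print(diff.mean(), np.abs(diff).mean())
```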
Table 1. Image-set used in this study, including raw images as well as 512 × 512 pixel tiles.

Dataset | Raw Images | 512 × 512 Tiles
Beach A | 231 | 3834
Beach B | 254 | 11,624
Beach C | 499 | 11,611
Beach D | 122 | 490
Beach E | 869 | 3234
Total | 1975 | 30,793
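A minimal sketch of how raw UAS frames can be cut into the non-overlapping 512 × 512 pixel tiles counted in Table 1. Pillow is used for illustration, and the directory layout and file naming are assumptions, not the authors' pipeline:

```python
# Tiling sketch: cut each raw UAS frame into non-overlapping 512 x 512 tiles.
from pathlib import Path
from PIL import Image

TILE = 512  # tile edge length in pixels, as in Table 1

def tile_image(src: Path, out_dir: Path) -> int:
    """Save all full 512 x 512 tiles of one frame; return the tile count."""
    img = Image.open(src)
    w, h = img.size
    out_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            tile = img.crop((left, top, left + TILE, top + TILE))
            tile.save(out_dir / f"{src.stem}_{top}_{left}.png")
            count += 1
    return count

# Illustrative paths: one directory of raw frames per beach
total = sum(tile_image(p, Path("tiles/beach_a")) for p in Path("raw/beach_a").glob("*.JPG"))
print(total)
```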
Table 2. Image-set annotation results.

Dataset | Litter Tiles | No Litter Tiles
Beach A | 1301 | 2533
Beach B | 4477 | 7147
Beach C | 672 | 10,939
Beach D | 104 | 386
Beach E | 1116 | 2118
Total | 7670 | 23,123
Table 3. Training and validation image-sets.

Dataset | Litter | No Litter | Total
Training images | 6138 | 6138 | 12,276
Validation images | 1532 | 1532 | 3064
Total images | 7670 | 7670 | 15,340
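The Table 3 counts are consistent with undersampling the no-litter class to match the 7670 litter tiles of Table 2 and applying an approximately 80/20 stratified split (1532/7670 ≈ 0.20). A sketch under that assumption, using scikit-learn for illustration (the authors' actual splitting code is not specified):

```python
# Sketch of class balancing + stratified 80/20 split reproducing Table 3.
import random
from sklearn.model_selection import train_test_split

random.seed(42)  # illustrative seed

litter = [f"litter_{i}.png" for i in range(7670)]      # 7670 litter tiles (Table 2)
no_litter = [f"clean_{i}.png" for i in range(23123)]   # 23,123 no-litter tiles (Table 2)

# Randomly undersample the majority (no-litter) class to 7670 tiles
no_litter_balanced = random.sample(no_litter, len(litter))

X = litter + no_litter_balanced
y = [1] * len(litter) + [0] * len(no_litter_balanced)

# 1532/7670 validation fraction per class, matching Table 3 exactly
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=1532 / 7670, stratify=y, random_state=42
)
print(len(X_train), len(X_val))  # 12276 and 3064
```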
Table 4. Model generalization ability using f-score statistical analysis.

Model | TP | FP | FN | TN | Precision | Recall | f-Score | Accuracy
VGG16 | 2547 | 864 | 1474 | 2535 | 0.7467 | 0.6334 | 0.6854 | 0.6849
VGG19 | 2850 | 561 | 1101 | 2908 | 0.8355 | 0.7213 | 0.7742 | 0.7760
DenseNet121 | 566 | 2845 | 22 | 3987 | 0.1659 | 0.9625 | 0.2830 | 0.6136
DenseNet169 | 525 | 2886 | 11 | 3998 | 0.1539 | 0.9794 | 0.2660 | 0.6095
DenseNet201 | 590 | 2821 | 39 | 3970 | 0.1729 | 0.9379 | 0.2920 | 0.6145
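Each row of Table 4 can be recomputed directly from its confusion-matrix counts using the formulas above; a quick sanity check for the VGG19 row:

```python
# Recompute the Table 4 metrics from confusion-matrix counts (VGG19 row shown).
def metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)  # harmonic mean (F1)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f_score": f_score, "accuracy": accuracy}

print(metrics(2850, 561, 1101, 2908))
# ≈ {'precision': 0.8355, 'recall': 0.7213, 'f_score': 0.7742, 'accuracy': 0.7760}
```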
Table 5. Error metrics comparing the manual classification tile density per 100 m² to the density produced by the five networks.

Metric | VGG19 | VGG16 | DenseNet201 | DenseNet169 | DenseNet121
MAE | 1.39 | 1.92 | 4.18 | 4.34 | 4.31
RMSE | 1.92 | 2.69 | 5.64 | 5.87 | 5.86
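The MAE and RMSE values in Table 5 follow their standard definitions; writing $d_i$ for the manually derived litter-tile density of grid cell $i$ (per 100 m²) and $\hat{d}_i$ for the corresponding model-derived density over $n$ cells (symbols ours, for illustration):

$$
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|d_i - \hat{d}_i\right|, \qquad
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i - \hat{d}_i\right)^2}
$$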
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
