
A “Hardware-Friendly” Foreign Object Identification Method for Belt Conveyors Based on Improved YOLOv8

1 College of Mechanical and Vehicle Engineering, Taiyuan University of Technology, Taiyuan 030024, China
2 Shanxi Provincial Engineering Laboratory for Mine Fluid Control, Taiyuan 030024, China
3 Shandong Libo Heavy Industry Technology Co., Ltd., Taian 271025, China
* Authors to whom correspondence should be addressed.
Submission received: 12 August 2023 / Revised: 8 October 2023 / Accepted: 15 October 2023 / Published: 19 October 2023
(This article belongs to the Special Issue Recent Advances in Machine Learning and Industrial Big Data Analysis)

Abstract

Conveyor belts are a crucial element of coal transportation, and monitoring their health is essential for the safe and efficient operation of the coal mine transportation system. This paper introduces a new "hardware-friendly" method for detecting foreign objects on belt conveyors, addressing the large parameter counts and computational requirements of existing deep learning-based detection methods, which make them difficult to deploy on edge devices with limited computing power. Tailored to edge computing, the method reduces the parameters and computational load of the foreign object recognition network deployed on edge devices. It improves the YOLOv8 object detection network by redesigning a lightweight ShuffleNetV2 network as the backbone, making the network more sensitive to foreign object features while removing redundant parameters. Additionally, a simple, parameter-free attention mechanism (SimAM) is introduced to further improve recognition efficiency without imposing additional computational burden. Experimental results demonstrate that the improved foreign object recognition method achieves a detection accuracy of 95.6% with only 1.6 M parameters and 4.7 G FLOPs. Compared with the baseline YOLOv8n, detection accuracy improves by 3.3 percentage points, while the parameter count and computational load are reduced by 48.4% and 42.0%, respectively. These properties make the method well suited to edge computing devices, which favor "hardware-friendly" algorithms. The improved algorithm reduces latency in the data transmission process, enabling the accurate and timely detection of non-coal foreign objects on the conveyor belt. This allows the host computer system to promptly identify and address foreign objects, ensuring the safety and efficiency of the belt conveyor.

1. Introduction

As the essential energy source for human society’s development, the coal industry holds a dominant position in national production [1]. Under the goals of carbon peaking and carbon neutrality, the urgent task in current energy development is to construct a diversified energy supply system and accelerate the energy transition [2]. To address the adverse environmental effects associated with coal mining, production, and utilization processes, the advancement of intelligent coal mining equipment for achieving eco-friendly, efficient utilization of coal resources has emerged as a central focus within the contemporary coal industry [3].
The transportation of coal, serving as a crucial stage in the coal mining process, greatly impacts both the overall energy consumption and production costs associated with coal [4]. As major transportation equipment for coal transportation, mining belt conveyors have been evolving towards large-scale development driven by rapid technological advancements and increasing demands in the coal industry. Additionally, they are progressively aligning with the development requirements proposed by Industry 4.0, transitioning towards intelligent and energy-efficient directions [5,6].
The safe operation of the belt conveyor system relies on the normal and healthy functioning of its equipment. Abnormal working conditions can not only cause unnecessary damage or wear to the equipment but also increase the system’s energy consumption, leading to additional safety risks and economic pressure for coal mining enterprises, thereby severely impacting the green and sustainable development of the enterprise [7].
In practical production processes, coal often needs to be transported over long distances. Conveyor belts efficiently facilitate the movement of large quantities of coal from one location to another, ensuring the unobstructed flow of the coal supply chain. Additionally, conveyor belt systems, meticulously engineered, enable the automated and continuous transportation of coal, eliminating the need for extensive manual labor. This not only enhances production efficiency but also reduces labor costs while minimizing the potential for human errors. Given the intricate working environment of belt conveyors, issues such as belt misalignment, belt tearing, and overloading [8] frequently manifest, with belt damage being the most prevalent among these malfunctions. Investigation has revealed that longitudinal tearing occurs when the belt encounters external sharp objects, such as anchor rods, large chunks of rock, angle irons, and iron plates [9,10,11]. Failure to promptly detect and address non-coal foreign objects on the conveyor belt may result in abnormal stoppages and conveyor belt malfunctions, introducing a gamut of potential risks and losses. These may include diminished production efficiency, the loss of substantial production capacity, delays in the transportation of coal and other commodities, and disruptions in the logistics of the supply chain, affecting delivery schedules and production plans. In more severe cases, it could lead to coal accumulation or blockage on the conveyor belt, potentially causing material stack collapses, posing risks to worker safety, and resulting in equipment damage. Therefore, the rapid identification and monitoring of non-coal foreign objects on conveyor belts are imperative.
As condition monitoring technology advances, the monitoring methods for belt damage in conveyor belts have gradually evolved from manual inspection to traditional image recognition algorithms and deep learning-based object detection methods. Manual inspection methods are inefficient and costly. In contrast to traditional image recognition algorithms, deep learning-based detection methods do not require manual feature extractor design and possess stronger feature extraction capabilities, meeting the demands of efficient and precise data processing in the age of big data [12].
Currently, deep learning object detection methods can be categorized into single-stage and two-stage algorithms. Classic two-stage algorithms include Region-based Convolutional Neural Network (R-CNN) [13], Fast Region-based Convolutional Neural Network (Fast R-CNN) [14], and Faster Region-based Convolutional Neural Network (Faster R-CNN) [15]. However, two-stage methods require generating candidate boxes from input images before feature extraction and detection, resulting in a relatively slow detection speed, which fails to meet the high demands of real-time performance and detection speed in coal mining applications. Classic single-stage algorithms encompass the You Only Look Once (YOLO) series and the Single Shot MultiBox Detector (SSD) algorithm [16]. Unlike the two-stage methods, single-stage algorithms directly extract category, coordinate, and other feature information while generating candidate boxes in a single step to obtain detection results. The YOLO series algorithms, known for their accuracy and fast speed, are widely used in coal mine foreign object detection due to their applicability to embedded mobile platforms.
Hu Jinghao et al. [17] proposed an improved YOLOv3-based method for foreign object detection in belt conveyor systems. This approach employs the Focal Loss function as its loss function and fine-tunes the optimal hyperparameters, including weight parameter α and focus parameter γ, to address the sample imbalance problem. The highest recognition accuracy on the proposed non-coal foreign object dataset was 94.0%. Zhang Mengchao et al. [18] proposed an improved YOLOv4-based approach using depthwise separable convolutions to construct a series of lightweight networks, achieving a detection accuracy of 93.7% on the proposed dataset. Zhang Lei et al. [19] proposed a coal gangue object detection method for belt conveyors based on YOLOv5s-SDE. By adding a Squeeze-and-Excitation (SE) attention mechanism to the backbone network and optimizing the loss function, they improved the model’s convergence speed and prediction accuracy. The results showed a maximum detection accuracy of 92.5% and a recognition speed of 30 frames per second. Mao Qinghua et al. [20] introduced a foreign object recognition approach for coal mine belt conveyors, which is based on an improved version of YOLOv7. By introducing deep separable convolutions instead of ordinary convolutions in network models, the foreign object recognition speed was improved, with a recognition accuracy of 92.8% and a recognition speed of 25.64 frames/s.
Although deep learning-based foreign object detection methods have become mainstream, the video surveillance systems serving belt conveyors rely on network cameras or inspection robots for data acquisition, with the data then uploaded to central servers for centralized processing. This "cloud computing" approach exhibits relatively high network latency, which substantially impacts the real-time performance and accuracy of system alerts. Furthermore, the simultaneous transmission of multiple data streams imposes demanding bandwidth and computing power requirements on the "cloud processors". Compared with the "cloud computing" approach, edge computing distributes data processing tasks to the data acquisition endpoints, reducing latency during data transmission and alleviating the burden on the "cloud processors". It is therefore better suited for real-time data analysis and intelligent processing. However, edge computing devices typically have limited computational power and may struggle to run complex deep neural networks. To enable real-time processing on edge computing devices, the corresponding algorithms must be compressed and optimized, reducing network model complexity and minimizing computational load.
Furthermore, as high-speed conveyor belts continue to evolve, they demand greater frame capture rates from cameras. Cameras with higher frame rates will transmit high-quality images to edge computing devices. If the network’s foreign object recognition and detection speed is not increased accordingly, it can lead to asynchronous image input and output signals, resulting in information delay to the upper computer system and cloud operation platform. This delay can impact subsequent decision-making and operations, thereby increasing the risk of conveyor belt damage.
More precisely, the term ‘hardware-friendly’ algorithm pertains to network models characterized by minimal parameter counts and reduced computational demands. Employing these network models on edge devices within coal mines, which often have constrained computational capabilities, not only aligns with resource constraints in such scenarios, including limited computing power and storage space, but also aligns with the requirements of advancing high-speed conveyor belt technologies.
Based on the above situation, this article proposes a "hardware-friendly" foreign object identification and detection method for coal mine conveyor belts, which uses an improved YOLOv8 network to "slim down" the foreign object detection network. The method compresses the parameter count and storage footprint of the network model and reduces its computation while improving inference and detection speed without losing detection accuracy, thereby enhancing its potential for use on edge computing devices.
The remainder of this paper is structured as follows: Section 2 outlines the data preparation procedure, Section 3 introduces the enhancements made to the algorithm, Section 4 evaluates the experimental outcomes and discusses associated observations, and finally, Section 5 summarizes the research outcomes and considers future work.

2. Data Preparation

Due to the particularity of the detection targets, there is presently no publicly accessible dataset for foreign object detection in underground coal mines. Therefore, the dataset utilized in the experiment was acquired from video images captured by intelligent inspection robots in coal mines during the belt conveyor operation in the Mining Fluid Control Engineering Laboratory of Shanxi Province. The experimental environment and hardware setup are illustrated in Figure 1.
The equipment and image parameters are as follows:
  • The conveyor belt operates at a speed of 4 m/s.
  • The mining inspection robot captures frames at a rate of 40 frames/s.
  • The image resolution is 1920 × 1080.
However, transmitting data at this resolution and under these testing conditions demands a substantial amount of memory and bandwidth. High hardware computing capability is needed when implementing this in an industrial setting, especially in edge computing. To reduce the computational costs and enhance network performance, this study resizes the images to 224 × 224 using Python (version: 3.7.0) batch processing. The images are annotated using Labelme software (version: 5.1.1) and stored in the VOC2007 format as the dataset.
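As an illustration, the batch resizing step can be carried out with a short script. The following is a minimal sketch, assuming OpenCV and placeholder directory names (the authors' exact script is not given):

    import glob
    import os
    import cv2  # opencv-python

    SRC_DIR = "raw_frames"      # hypothetical folder of 1920x1080 frames
    DST_DIR = "resized_frames"  # hypothetical output folder
    os.makedirs(DST_DIR, exist_ok=True)

    for path in glob.glob(os.path.join(SRC_DIR, "*.jpg")):
        img = cv2.imread(path)
        if img is None:
            continue  # skip unreadable files
        # Downscale from 1920x1080 to the 224x224 network input size.
        small = cv2.resize(img, (224, 224), interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(DST_DIR, os.path.basename(path)), small)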
Considering the adverse underground environment in coal mines and the impact of vibrations caused by the movement of inspection robots on the network detection performance, all images in the dataset are subjected to processing techniques such as motion blur, dust and fog effects, and reduced brightness. After data augmentation, the dataset comprises 17,483 samples of foreign object images with 44,480 corresponding data labels. Throughout the training procedure, the entire collection of image samples was divided into a training set (comprising 12,238 images), a validation set (comprising 3496 images), and a test set (comprising 1749 images) in a ratio of 7:2:1. The dataset includes various types of foreign objects, such as anchor rods, angle irons, trays, gangue, nuts, and screws. Some sample images are illustrated in Figure 2.
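The augmentation operations described above can be approximated with simple image transforms. The sketch below is one possible implementation using OpenCV and NumPy; the kernel size and blend strengths are assumed example values, not the paper's settings:

    import numpy as np
    import cv2

    def motion_blur(img, ksize=9):
        # Horizontal linear kernel approximates blur from robot motion.
        kernel = np.zeros((ksize, ksize), dtype=np.float32)
        kernel[ksize // 2, :] = 1.0 / ksize
        return cv2.filter2D(img, -1, kernel)

    def dust_fog(img, strength=0.4):
        # Blend toward a grey haze layer to imitate dust and fog.
        haze = np.full_like(img, 200)
        return cv2.addWeighted(img, 1 - strength, haze, strength, 0)

    def darken(img, factor=0.5):
        # Reduce brightness to imitate poor underground lighting.
        return np.clip(img.astype(np.float32) * factor, 0, 255).astype(np.uint8)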

3. Algorithm Improvement

3.1. YOLOv8 Network Model

YOLOv8 is the latest version of the object detection and image segmentation model developed by Ultralytics in 2023. Building upon the successful foundation of YOLOv5, YOLOv8 introduces new functionalities and improvements aimed at further enhancing performance and flexibility. The YOLOv8 algorithm has developed five distinct models, denoted as YOLOv8n, YOLOv8s, YOLOv8m, YOLOv8l, and YOLOv8x, each with varying sub-module depths and widths. The model detection accuracy and model size have been improved in sequence. Considering the limited hardware resources and the high real-time requirements in actual coal mine underground environments, strict limitations on the model size are necessary. Therefore, in this study, we selected YOLOv8n, which has the minimum parameters and model computation, as the experimental model in coal mine belt conveyors. Its architecture is illustrated in Figure 3.
The YOLOv8 network comprises input, backbone, neck, and head components. At the input stage, primary operations include mosaic augmentation, adaptive anchor box calculation, and responsibility for receiving image data to pass it to the next layer of the network. Typically, the YOLOv8 network divides the image into fixed-sized grids and utilizes the information from each grid for object detection.
The backbone serves as the network’s central component, responsible for transforming input images into feature maps that contain both object positions and feature information. Typically, the backbone network consists of convolutional layers, pooling layers, and other deep learning layers, enabling it to capture information from different abstraction levels in the image. In YOLOv8, the Context to Focus (C2F) module has been introduced to replace the Cross Stage (C3) module, incorporating additional skip connections to enhance the model’s gradient flow and strengthen the network’s feature representation capacity.
The neck layer is employed to further extract and integrate features from the backbone network, typically consisting of a series of convolutional and pooling layers. Its role is to enhance feature representation and aid the network in better understanding contextual information about objects. In YOLOv8, the neck layer adopts the Path Aggregation Network (PANet) structure, which strengthens the network’s ability to aggregate object features at different scales.
The head layer serves as the network’s output component and is responsible for generating object detection results. Its primary function is to map the feature maps to object detection results to determine the position and category of detected objects. Typically, the head includes convolutional and detection layers. In YOLOv8, the classification and regression tasks are predicted separately, with the classification task still employing Binary Cross-Entropy Loss (BCE Loss), while the regression task utilizes Distribution Focal Loss (DFL Loss) and Complete Intersection over Union Loss (CIOU Loss) functions. These loss functions enable the network to rapidly focus on the distribution of positions in the vicinity of the target location, resulting in integer coordinate weight values.
Floating Point Operations (FLOPs), Memory Access Cost (MAC), parallelism, and computing platform are important metrics for assessing a network model's computational speed and complexity. FLOPs quantify the amount of floating-point computation, i.e., the computational workload of the network model. For a given MAC, parallelism, and computing platform, larger FLOPs mean a greater computational workload and higher model complexity. Although the YOLOv8n network performs well in object recognition accuracy and detection speed, the complex underground environment in coal mines (characterized by humidity, high coal dust levels, insufficient lighting, and overall darkness) leads to poor image quality and weak target distinguishability, which imposes a significant computational burden on the deployed detection devices. In addition, the YOLOv8n backbone comprises multiple densely connected standard convolutions. Excessive use of ordinary convolution for feature extraction produces redundant features, and the deeper the network, the greater the impact on FLOPs, which in turn slows foreign object detection in coal mines. Therefore, it is necessary to "slim down" the YOLOv8n network model.

3.2. Selection of Lightweight Convolutional Network

3.2.1. Depthwise Separable Convolution

Depthwise Separable Convolution (DWConv) is an efficient feature extraction module composed of a depthwise convolution followed by a pointwise convolution. It has significantly fewer parameters and lower computational cost than ordinary convolution while still capturing informative feature representations. Compared with ordinary convolution, whose cost is h × w × k² × c², DWConv reduces the FLOPs to h × w × k² × c. Assume the input feature map size is D_H × D_W × M (height × width × number of channels). When a convolution kernel of size D_k × D_k × 1 is employed in YOLOv8, each depthwise convolution produces M feature maps of size D_H × D_W; N sets of 1 × 1 convolution kernels are then applied, giving a final output feature map of size D_H × D_W × N.
The computational cost of ordinary convolution ($Q_C$) is:

$$Q_C = D_k \cdot D_k \cdot M \cdot N \cdot D_W \cdot D_H \tag{1}$$

The computational cost of DWConv ($Q_D$) is:

$$Q_D = D_k \cdot D_k \cdot M \cdot D_W \cdot D_H + M \cdot N \cdot D_W \cdot D_H \tag{2}$$

The ratio of the DWConv cost to the ordinary convolution cost is:

$$\frac{Q_D}{Q_C} = \frac{D_k \cdot D_k \cdot M \cdot D_W \cdot D_H + M \cdot N \cdot D_W \cdot D_H}{D_k \cdot D_k \cdot M \cdot N \cdot D_W \cdot D_H} = \frac{1}{N} + \frac{1}{D_k^2} \tag{3}$$
It can be seen from Formula (3) that by introducing DWConv, the calculation amount and parameters of the original network can be reduced so that the detection speed can be significantly improved.
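As a quick numeric check of Formula (3), the ratio can be computed directly; the layer sizes below are arbitrary example values, not taken from the paper:

    # Sanity check of Formula (3): DWConv + pointwise vs. ordinary convolution.
    # Example sizes: 3x3 kernel, 224x224 feature map, M=32 input, N=64 output channels.
    Dk, Dw, DH, M, N = 3, 224, 224, 32, 64
    Qc = Dk * Dk * M * N * Dw * DH                # ordinary convolution, Eq. (1)
    Qd = Dk * Dk * M * Dw * DH + M * N * Dw * DH  # depthwise + pointwise, Eq. (2)
    print(Qd / Qc, 1 / N + 1 / Dk**2)             # both ~0.1267, i.e., ~8x fewer FLOPs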
While DWConv can efficiently decrease FLOPs, it cannot outright substitute standard convolution, as this could cause a considerable drop in accuracy. In practice, when the DWConv network width is expanded from c to c₁ (where c < c₁) to compensate for the precision loss, the memory required for computation also grows, slowing the overall computation. For deployed hardware devices, the number of memory accesses increases to [21]:

$$h \times w \times 2c_1 + k^2 \times c_1 \approx h \times w \times 2c_1 \tag{4}$$

which is higher than that of ordinary convolution, i.e.,

$$h \times w \times 2c + k^2 \times c^2 \approx h \times w \times 2c \tag{5}$$

where h and w denote the height and width of the input feature map, c signifies the network's width, and k denotes the convolution kernel size.

3.2.2. ShuffleNet Network

Research on image classification methods based on lightweight deep neural networks has made considerable progress in recent years. Among them, the ShuffleNet series algorithms proposed by Megvii Technology have been widely applied in object detection for edge computing due to their ability to achieve the best model accuracy with limited computational resources. The ShuffleNet network employs two core operations [22]: group convolution and channel shuffle. These operations substantially decrease the model’s computational intricacy while preserving accuracy. However, group convolution is limited in that each group operates independently without feature fusion between groups. As shown in Figure 4a, when the convolution kernel is divided into three groups, the resulting feature maps are also divided into three groups, with each group only exchanging information internally, lacking any information fusion between groups. Therefore, ShuffleNetv1 proposed the concept of channel shuffle, which divides the channels in the feature map into several groups according to certain rules and then rearranges the elements within each group. Doing so enhances the information interaction and fusion between channels, increasing the model’s non-linear expression capability without compromising network accuracy. The channel shuffle operation is illustrated in Figure 4b.
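The channel shuffle operation itself reduces to a reshape-transpose-reshape. Below is a minimal PyTorch sketch (our illustration; the function name and toy example are ours, not the authors' code):

    import torch

    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        # Rearrange channels so information mixes across groups:
        # (N, g*c, H, W) -> (N, g, c, H, W) -> swap g and c -> flatten back.
        n, c, h, w = x.shape
        x = x.view(n, groups, c // groups, h, w)
        x = x.transpose(1, 2).contiguous()
        return x.view(n, c, h, w)

    # Example: 6 channels in 3 groups [0 1 | 2 3 | 4 5] -> [0 2 4 1 3 5]
    t = torch.arange(6).view(1, 6, 1, 1)
    print(channel_shuffle(t, 3).flatten().tolist())  # [0, 2, 4, 1, 3, 5]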
However, ShuffleNetv1 utilizes many grouped and pointwise convolutions, which can slow down the model speed. To address this issue, MA et al. proposed four design principles for effective and compact networks based on the ShuffleNetv1 model and introduced the ShuffleNetv2 network [23]. A new channel split operation is proposed in ShuffleNetv2, where the network mainly comprises a basic unit and a downsample unit.
Figure 5a shows the basic unit with stride = 1. The input features are evenly divided by the channel split operation, so each branch holds half of the original channels. The left branch performs identity mapping without any processing, while the right branch performs two ordinary 1 × 1 convolutions and one 3 × 3 DWConv (stride 1); finally, the outputs of the two branches are combined through feature concatenation and a channel shuffle operation. Figure 5b shows the downsampling unit with stride = 2. Unlike the basic unit, the downsampling unit omits the channel split; instead, it directly doubles the network's channel count and overall width without increasing the computational complexity of the model, further enhancing the network's ability to extract features. The feature map is first fed into two branches, each of which undergoes an ordinary 1 × 1 convolution and a 3 × 3 DWConv (stride 2); after the two branches are concatenated along the channel dimension, the number of output channels doubles, and a channel shuffle is performed on the merged feature map.
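For concreteness, a stride-1 basic unit along these lines can be sketched in PyTorch as follows. This is a simplified illustration of the structure in Figure 5a, not the authors' exact implementation:

    import torch
    import torch.nn as nn

    class ShuffleV2Basic(nn.Module):
        # Stride-1 basic unit: split channels, keep the left half as identity,
        # process the right half with 1x1 -> 3x3 depthwise -> 1x1, then concat + shuffle.
        def __init__(self, channels: int):
            super().__init__()
            c = channels // 2
            self.branch = nn.Sequential(
                nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                nn.Conv2d(c, c, 3, stride=1, padding=1, groups=c, bias=False),  # depthwise
                nn.BatchNorm2d(c),
                nn.Conv2d(c, c, 1, bias=False), nn.BatchNorm2d(c), nn.ReLU(inplace=True),
            )

        def forward(self, x):
            left, right = x.chunk(2, dim=1)               # channel split
            out = torch.cat((left, self.branch(right)), dim=1)
            # Channel shuffle with 2 groups (same operation as the sketch above).
            n, c, h, w = out.shape
            return out.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)

    x = torch.randn(1, 48, 56, 56)
    print(ShuffleV2Basic(48)(x).shape)  # torch.Size([1, 48, 56, 56])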
Within the network architecture of ShuffleNetv2, the number of channels is doubled every time a downsampling operation is performed. With the doubling of the number of channels, the network does not pay attention to the feature channels that significantly impact the classification results. Important and unimportant feature channels have the same weight, resulting in excessive retention of interference information from the mine, which can easily affect the classification effect of non-coal foreign objects. Therefore, this paper chooses the lightweight ShuffleNetv2 network as the backbone network and optimizes it for these problems.

3.3. Improved ShuffleNetv2 Network Module

3.3.1. Simple and Parameter-Free Attention Mechanism

The foreign object dataset contains interference from unimportant features, such as complex environmental backgrounds, so a large amount of interference information accompanies the recognition process. Redundant information is propagated as the network model learns, and as the number of network layers increases, the weight of interference information in the feature maps also grows, ultimately harming the model. The attention mechanism is a frequently employed method in deep learning that allows the model to focus on key information in the input, thereby improving accuracy and efficiency. However, existing attention mechanisms suffer from two issues: first, they enhance features in either the channel dimension or the spatial dimension alone, lacking the flexibility to adapt to both dimensions simultaneously; second, their structures rely on a series of complex operations, which increase the model's parameter size and are ill-suited to coal mine edge computing devices.
The Simple and Parameter-Free Attention Mechanism (SimAM) is an energy-based attention mechanism that can derive 3D attention weights without requiring additional parameters [24]. Compared to other attention mechanisms, SimAM’s operations are more concise and clear, effectively avoiding the issue of model parameter increase caused by structural adjustments. Therefore, this paper introduces the attention mechanism SimAM into the ShuffleNetV2 model, which allows the network to focus more on extracting essential feature information from non-coal foreign object images, effectively suppressing the interference of redundant information on the network. This leads to more efficient feature extraction and reconstruction, improving network recognition accuracy and reducing network complexity.
Its computational formulas are shown in Equations (6)–(8). SimAM evaluates the importance of each neuron in the network by defining an energy function based on linear separability. Here, t denotes the target neuron, xi represents neighboring neurons, and λ is a hyperparameter. μ ^ and σ ^ 2 represent the average and variance of all neurons within the channel, excluding t. The lower the energy, the higher the differentiation between neurons and adjacent neurons, and the higher the importance of neurons.
$$e_t^* = \frac{4(\hat{\sigma}^2 + \lambda)}{(t - \hat{\mu})^2 + 2\hat{\sigma}^2 + 2\lambda} \tag{6}$$

$$\hat{\mu} = \frac{1}{M}\sum_{i=1}^{M} x_i \tag{7}$$

$$\hat{\sigma}^2 = \frac{1}{M}\sum_{i=1}^{M} (x_i - \hat{\mu})^2 \tag{8}$$
Subsequently, each neuron is assigned a unique weighted value based on the manifestation of attention regulation in mammalian brains, as shown in Equation (9) [25]:
$$\tilde{X} = \mathrm{sigmoid}\!\left(\frac{1}{E}\right) \odot X \tag{9}$$
Within this expression, X is the input feature tensor, $\tilde{X}$ is the enhanced feature tensor, E groups all $e_t^*$ across the channel and spatial dimensions, and ⊙ denotes element-wise multiplication. The sigmoid function restricts overly large values of E without influencing the relative importance of each neuron.
After the input feature map passes through the SimAM, the weight is normalized by the sigmoid function. Then, the weight of the target neuron is multiplied by the characteristics of the initial feature map to derive the ultimate output feature map. The weight distribution of SimAM is shown in Figure 6.
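Because SimAM adds no learnable parameters, it reduces to a few tensor operations. The following PyTorch sketch follows Equations (6)–(9); the default λ value is the one suggested in the SimAM paper [24], while the module name and code are our illustration:

    import torch
    import torch.nn as nn

    class SimAM(nn.Module):
        # Parameter-free attention: weight each activation by the inverse of its
        # energy e_t* (Eqs. (6)-(8)), squashed through a sigmoid (Eq. (9)).
        def __init__(self, lam: float = 1e-4):
            super().__init__()
            self.lam = lam

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n = x.shape[2] * x.shape[3] - 1                    # M - 1 neighbors
            d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)  # (t - mu_hat)^2
            v = d.sum(dim=(2, 3), keepdim=True) / n            # sigma_hat^2 estimate
            e_inv = d / (4 * (v + self.lam)) + 0.5             # 1 / e_t*
            return x * torch.sigmoid(e_inv)                    # reweighted feature map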

3.3.2. Improved ShuffleNetv2 Network Model

Considering the speed requirement for non-coal foreign object detection by the mining inspection robot, ShuffleNetv2 0.5X is chosen as the backbone network, and improvements are made to this model. The enhanced network architecture is depicted in Figure 7. The architecture includes Conv (standard convolution), MaxPool (max pooling), ShuffleNetV2(1,3) (downsampling module stacked once, basic unit module stacked three times), GlobalPool (global pooling), FC (fully connected layer), ReLU (activation function), BN (batch normalization), DWConv (depthwise separable convolution), SimAM (simple, parameter-free attention mechanism), Concat (channel concatenation), Channel shuffle (channel shuffle module), and Channel split (channel splitting module). The specific operations are as follows:
(1)
SimAM attention modules are inserted before the feature concatenation of the basic and downsample units in the ShuffleNetv2 network model. This is because the SimAM attention mechanism can effectively exploit the neurons in the downsample layer and basic unit of the ShuffleNetv2 network using an energy function and balance the weight allocation based on the importance of neurons. It assigns greater weight to important feature channels and smaller weight to less important ones, thereby enhancing the attention of the downsample unit and basic unit. In addition, including the attention mechanism SimAM after the convolutional layers is intended to prevent the network from losing significant non-coal foreign object information due to the preceding convolutional operations. It enables the network to focus more on critical features related to non-coal foreign objects, thereby enhancing the model’s discriminative ability for different types of foreign objects.
(2)
Certain classes of non-coal foreign objects are highly similar to coal blocks: nuts and bolts, for example, are both deep brown, and large coal pieces and coal gangue differ only subtly in shape, making them challenging to distinguish. Increasing the stacking count of the basic units makes the network more refined, enabling it to better capture the finer shape features that differentiate foreign objects and to learn and exploit more detailed information about non-coal foreign objects. The downsampling unit of ShuffleNetv2 is denoted shuffle-b, and the basic unit shuffle-a. As shown in Table 1, while keeping the model lightweight, the stacking count of the downsampling unit in Stage 3 is set to 1, and the stacking count of the basic unit is increased to 9.
The specific process of the improved YOLOv8 for non-coal foreign object image classification on a belt conveyor is as follows (a structural sketch is given below). First, the non-coal foreign object images of the belt conveyor undergo data augmentation and other preprocessing at the input end; the images are transformed to 224 × 224 × 3 and fed into the improved ShuffleNetv2 network in the backbone. The images then pass through convolution and max-pooling operations to obtain feature maps. Three ShuffleNetv2 modules containing downsampling units and basic units further extract non-coal foreign object features with attention information, yielding a 7 × 7 × 192 feature map. Specifically, the downsampling units and basic units in Stage 2 and Stage 4 are stacked once and three times, respectively, while in Stage 3 they are stacked once and nine times. The feature extraction results are then processed sequentially through convolution, a global pooling layer, and a fully connected layer before feature fusion with the neck layer. Finally, the recognition results for non-coal foreign object images are obtained through the three differently sized prediction heads in the head.
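A minimal sketch of how such stages could be assembled is shown below. ShuffleV2Down is a hypothetical stride-2 counterpart of the ShuffleV2Basic unit sketched earlier, and the channel widths are illustrative; only the (1, n) stacking counts come from Table 1:

    import torch.nn as nn

    def make_stage(down_unit, basic_unit, basic_repeats, channels):
        # One backbone stage: a single downsampling unit followed by
        # `basic_repeats` basic units, i.e., the (1, n) stacking in Table 1.
        layers = [down_unit(channels)]
        layers += [basic_unit(channels) for _ in range(basic_repeats)]
        return nn.Sequential(*layers)

    # Improved stacking per stage: Stage 2 -> (1, 3), Stage 3 -> (1, 9), Stage 4 -> (1, 3).
    # Example usage (hypothetical classes and channel widths):
    # stages = [make_stage(ShuffleV2Down, ShuffleV2Basic, r, ch)
    #           for r, ch in [(3, 48), (9, 96), (3, 192)]]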

4. Experimental Results and Analysis

4.1. Experimental Environment and Parameter Settings

The model training equipment used in the experiment was configured as follows: operating system, Windows 11; CPU, 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30 GHz; GPU, NVIDIA GeForce RTX 3080; deep learning framework, Torch 1.9.0 with CUDA 11.3.
During training, a Stochastic Gradient Descent (SGD) optimizer is used for parameter updates to ensure the scientific reliability of the experimental conclusions. The number of iterations is set to 300 and the batch size to 16. The initial learning rate is set to 0.01 with a weight decay coefficient of 0.0005 to prevent the network from overfitting during training, and a momentum factor of 0.937 is employed to prevent the model from becoming trapped in local optima or bypassing the global optimum.
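Assuming the standard Ultralytics training interface (the dataset configuration file name is a placeholder), these settings correspond to a call such as:

    from ultralytics import YOLO

    # Hypothetical model/data files; hyperparameters mirror Section 4.1.
    model = YOLO("yolov8n.yaml")
    model.train(
        data="foreign_objects.yaml",  # placeholder dataset config
        epochs=300, batch=16, imgsz=224,
        optimizer="SGD", lr0=0.01,
        weight_decay=0.0005, momentum=0.937,
    )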

4.2. Evaluation Indicators

Precision, recall, mean average precision (mAP), model size, parameter count, and FLOPs are used as evaluation indicators for the overall performance of the algorithm. Precision, recall, and AP are calculated as follows:
$$P = \frac{TP}{TP + FP} \tag{10}$$

$$R = \frac{TP}{TP + FN} \tag{11}$$

$$AP = \int_0^1 P(R)\,\mathrm{d}R \tag{12}$$
where TP denotes the number of positive samples correctly identified, FP denotes the number of negative samples incorrectly identified as positive, and FN denotes the number of positive samples incorrectly identified as negative. mAP is obtained by calculating the Precision-Recall (PR) curve for each class and averaging the Area Under the Curve (AUC) across all classes.
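As a simple illustration of the AP integral in Equation (12), the area under a PR curve can be approximated numerically. The sketch below uses plain trapezoidal integration over toy values; real mAP evaluation typically uses interpolated precision, so this is a simplification:

    import numpy as np

    def average_precision(recall: np.ndarray, precision: np.ndarray) -> float:
        # AP = area under the precision-recall curve, here approximated
        # by trapezoidal integration over recall.
        order = np.argsort(recall)
        return float(np.trapz(precision[order], recall[order]))

    # Toy PR points for one class; mAP averages AP over all classes.
    r = np.array([0.0, 0.5, 1.0])
    p = np.array([1.0, 0.8, 0.6])
    print(average_precision(r, p))  # 0.8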

4.3. Experimental Findings and Analysis of the Enhanced Network Model

4.3.1. Analysis of Experimental Results Introducing SimAM Attention Mechanism

Three groups of contrastive experiments were conducted to investigate the effectiveness of integrating the attention mechanism into the ShuffleNetv2 network model for non-coal foreign object recognition. The ShuffleNetv2 model with the SimAM attention mechanism introduced (abbreviated YOLOv8n-SM) was compared with the baseline YOLOv8n model and with YOLOv8n after replacing the backbone with the ShuffleNetv2 network (abbreviated YOLOv8n-shuffle). The effects of embedding the SimAM attention module at positions ①, ②, and ③ in Figure 7 were then discussed separately. The comparison in Table 2 shows that using the lightweight ShuffleNetv2 as the YOLOv8 backbone reduced the parameter count and FLOPs by 48.4% and 42.0%, respectively, while maintaining recognition accuracy. Subsequently, introducing the SimAM attention mechanism after the 1 × 1 convolutional modules of the two ShuffleNetv2 basic units added no parameters or computational complexity, yet improved precision, recall, and mAP by 0.5%, 2%, and 0.2%, respectively.
Figure 8a,b, combined with Table 2, shows that the YOLOv8n-shuffle and YOLOv8n-SM models converge better than the baseline YOLOv8n network model; both reach a stable state after approximately 60 iterations. The YOLOv8n-SM model achieves a precision of 95.3% (an improvement of 1.5%), a recall of 90.0% (an improvement of 2.2%), and the highest recognition accuracy of 93.8% (an improvement of 1.5%).
The activation heatmap provides clear visual evidence for the model's classification results: deeper colors indicate that the model has focused more on the corresponding regions, yielding more accurate detection of foreign object targets. This article extracts the heatmaps of the YOLOv8n, YOLOv8n-shuffle, and YOLOv8n-SM networks at the same depth (the sixth layer of the YOLOv8n network; the Stage 3 layer of the YOLOv8n-shuffle and YOLOv8n-SM networks).
In the figure, the first row shows images of coal gangue and the second row images of nuts. Figure 9 illustrates that the YOLOv8n model places considerable emphasis on other object features, such as conveyor belts and rollers, which increases inference time and overall computational complexity during feature extraction. When the lightweight ShuffleNetv2 network is used as the backbone, however, the model begins to attend to the feature regions where materials are conveyed on the belt. With SimAM integrated into the ShuffleNetv2 model, attention concentrates on the relevant feature regions of non-coal foreign objects, enhancing the model's ability to discern their key features. Additionally, Figure 9 demonstrates that the model with the SimAM module can more accurately extract the feature information of non-coal foreign objects while effectively avoiding interference from non-essential features, such as the background environment.
To further validate the effectiveness of embedding the SimAM module in the ShuffleNetv2 network model, we conducted comparative experiments with several mainstream attention mechanisms, and the results are presented in Table 3. It is evident that solely embedding a single attention mechanism into the network may disrupt the stability of the original network structure. Although there is not much impact on the number of parameters and the amount of model calculation, they all affect the detection accuracy to varying degrees. However, with the introduction of the SimAM attention mechanism, which is a parameter-free attention mechanism, the network allocates more attention to important neurons, enabling more detailed feature extraction and thus improving the accuracy of foreign object detection.

4.3.2. Analysis of the Experimental Results of the Improved ShuffleNetv2 Network Model

We carried out three rounds of comparative experiments to investigate whether the reconstruction of the two basic units in the ShuffleNetv2 network is effective for non-coal foreign object recognition. The reconstructed ShuffleNetv2 network model (YOLOv8n-shuffle+) was compared with the baseline YOLOv8n network model and YOLOv8n-shuffle model. As shown in Table 4 and Figure 10, by changing the stacking of basic units, the reconstructed ShuffleNetv2 network model becomes more delicate, and the depth-wise separable convolution modules within the basic units better capture the finer shape features among different non-coal foreign objects. The model learns more informative features while keeping the parameter count and network computation relatively stable, improving detection accuracy.

4.3.3. Analysis of Ablation Experiment Results

To further substantiate the efficacy of different optimization techniques employed in this investigation and the enhancements in the integrated network model for recognizing non-coal foreign objects on conveyor belts, we executed comparative assessments between the enhanced network model and the baseline YOLOv8n model. The results from the experiments are presented in Table 5.
According to Table 5, both the improved ShuffleNetv2 network model and the insertion of the SimAM module positively impact the model's recognition accuracy. In particular, compared to the original YOLOv8n model, the YOLOv8n model with the ShuffleNetv2 backbone achieves an accuracy of 94.8%, an increase of 1 percentage point; the recall rate improves to 88.0%, up 0.2 percentage points; and the mAP reaches 93.5%, up 1.2 percentage points, while the parameter count falls to 1.6 M (a reduction of 46.9%) and the FLOPs to 4.7 G (a reduction of 42.0%). After reconstructing the ShuffleNetv2 network, the accuracy reaches 96.9%, the recall rate 92.7%, and the mAP 95.6%.
In terms of model performance, the improved network model does not increase the parameter count and maintains a network computational complexity of 4.7 G FLOPs. This demonstrates that integrating the SimAM attention mechanism and the reconstructed ShuffleNetv2 network has not negatively affected the YOLOv8n network; on the contrary, it improves the model's recognition accuracy and detection speed.

4.4. Analysis of Generalization Performance Experiment Results

4.4.1. Analysis of Experimental Results of Different Data Sets

Alongside the mAP and model parameter count, the neural network model’s capacity for generalization serves as one of the metrics for assessing model quality. We expect that the model, trained on the dataset, will deliver a sensible output when presented with new samples not included in the dataset. Because of the particularity of the application context, there is currently no openly accessible dataset for detecting foreign objects on belt conveyor systems. Manually introducing foreign objects into coal mine belt conveyor systems would contravene safety regulations. Considering safety concerns, a validation was conducted at Boshitong Limited in Taiyuan, Shanxi Province, China (dataset containing 500 foreign object images, referred to as DataI). The assessment outcomes are displayed in Table 6, while certain detection findings are depicted in Figure 11.
It can be observed that both the original and improved network models achieve relatively accurate detection results for large pieces of coal gangue and other foreign objects. However, the improved model accurately detects buried and small corner foreign objects while avoiding redundant detection boxes.
To further assess the model’s capacity for generalization, we applied various image techniques to adjust some images in the DataI dataset, simulating real underground mining conditions such as fog, dust, low lighting, and blurriness due to robot motion during the inspection process. The detection outcomes are depicted in Figure 12 (the leftmost column signifies mild processing, the central column signifies moderate processing, and the rightmost column signifies severe processing).
The detection results show that the object detection network benefits from data augmentation techniques during model training, enabling it to exhibit strong adaptability to environmental changes. Specifically, the detection results remain largely unaffected in the presence of dust and fog. Even under low lighting conditions, the model can accurately identify buried objects. Although we considered motion blur and applied the corresponding preprocessing to the data before training, severe image blurriness can still lead to some missed detections.
Furthermore, this research assessed the new model’s generalization capability using data from Reference [18] (dataset containing 10,448 images with six types of non-coal foreign objects, denoted as DataII). The assessment outcomes are showcased in Table 7, and partial recognition findings are depicted in Figure 13. The visualized results show that both the original and improved networks exhibit commendable recognition and detection capabilities for large foreign objects. However, when dealing with partially buried objects and objects with features similar to the background, the improved network demonstrates higher detection accuracy, as indicated by the higher confidence scores of the detection bounding boxes. Combining the findings from Table 7, it is evident that the improved network has a lower parameter size and reduced model complexity.
Likewise, we employed identical data augmentation methods for the DataII dataset as we did for DataI, and the detection results are shown in Figure 14 (the leftmost column represents mild processing, the central column signifies moderate processing, and the rightmost column denotes severe processing). The improved network model demonstrates outstanding generalization performance when faced with the new dataset, including detection results in adverse environments such as dusty haze, low illumination, and mild blurring. However, it still exhibits relatively weak resistance to severe motion blur. From our perspective, this phenomenon is considered normal, as human eyes or vision behave similarly: as motion blur increases, human judgment capability tends to decline.

4.4.2. Analysis of Experimental Results of Different Models

To further validate the effectiveness of the optimized model, this study employed transfer learning to train popular object detection algorithms such as YOLOv3 [26], YOLOv5, and YOLOv7 [27] on the dataset. Additionally, a comparison was made with several representative lightweight convolutional network models, including Mobilenetv3 [28], EfficientNet [29], MobileNext [30], PP-LCNet [31], GhostNetV2 [32], and FasterNet [33]. We did not consider two-stage object detection methods from the R-CNN series, primarily because field deployment at coal mines demands real-time detection at a given level of accuracy, which is difficult to achieve with our current hardware infrastructure. For result reliability, all experiments were conducted in triplicate using distinct random seeds, and the average values were recorded. The outcomes are displayed in Table 8 and Figure 15.
By comparison, it can be observed that among the YOLO series, YOLOv5x achieves the highest recognition accuracy. However, its 86.7M parameters and 205.7G FLOPs pose deployment challenges and demand high hardware device requirements, making it unsuitable for real-time detection tasks in underground coal mines. On the other hand, the improved YOLOv8n model maintains a high detection precision (mAP50: 95.6%) while further compressing the parameter count and computational complexity to 1.6M and 4.7G, respectively.
Additionally, as depicted in Figure 15, the improved YOLOv8 network still exhibits significantly higher detection accuracy, lower parameter count, and network computational complexity compared to various lightweight convolutional networks.
Finally, we conducted cross-comparisons of the foreign object detection model’s recognition and detection results under different hardware conditions, as shown in Table 9. We conducted a total of three major comparative experiments. In the first experiment group, we evaluated several state-of-the-art detection networks mentioned in recent literature. Regardless of the hardware conditions, the network’s parameter size remained unaffected, while recognition speed, i.e., frames per second, increased with GPU upgrades. In the second experiment group, we horizontally compared lightweight detection networks commonly used in the field of computer vision. From the results, it can be observed that the parameter size generally remained within 4 MB under the same hardware conditions. However, inference speed was influenced by factors such as network computing power and bandwidth, and an optimal balance was not achieved between recognition speed and accuracy. In the third experiment group, we vertically compared the improved network’s detection data under various hardware conditions. The results indicate that the improved method does not impose strict requirements on hardware in terms of detection and inference speed. In other words, even when using devices with relatively weaker computing capabilities, our method can still achieve relatively good detection results. This provides a direction and possibility for applications in edge devices with limited computational resources.

4.5. Discussion

The foreign object identification and detection technique for conveyor belts presented in this paper, utilizing the improved YOLOv8, has yielded notable detection outcomes in both the evaluated dataset and laboratory settings. Although its recognition speed and accuracy are superior to most current classic algorithms, there are still some areas that can be improved.
From the detection results in Figure 11g–i and Figure 13g–i, it can be observed that although the network model underwent motion blur preprocessing on the non-coal foreign object data before training, the resistance to motion blur in the improved network model is not ideal. In other words, the detection results are still affected by image clarity, and as the degree of image blur increases, the detection accuracy decreases. Therefore, it is imperative to fine-tune certain parameters of the acquisition apparatus; examples include decreasing exposure duration and adjusting the installation angle, focal length, and height of the acquisition apparatus to expand the perspective, particularly along the conveyor belt’s length. This can maximize the completeness and clarity of the acquired image data.
In addition, noise interference is another important factor affecting image detection results. Excessive noise data can interfere with the network model’s accurate judgments during recognition. As shown in Figure 16, we tested the model’s resistance to noise interference on three different non-coal foreign object datasets. As the level of noise increases, there is a certain degree of loss in the model’s recognition accuracy. In the industrial field, erroneous or delayed judgments can pose certain safety risks to the work site. Therefore, it is necessary to apply appropriate image-denoising techniques to the raw data during on-site debugging.
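Should denoising be needed, a standard filter can be applied before inference. The following is a minimal sketch assuming OpenCV's non-local-means denoiser; the file path is a placeholder, and the filter strengths would need tuning against the actual on-site noise level:

    import cv2

    frame = cv2.imread("frame.jpg")  # placeholder path to a raw camera frame
    if frame is not None:
        # Non-local-means denoising for color images; the last four arguments are
        # h, hColor, templateWindowSize, searchWindowSize.
        clean = cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)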

5. Conclusions

This paper introduces a foreign object identification method designed to be compatible with hardware constraints based on an improved version of YOLOv8. Through rigorous experimentation, the following conclusions have been reached: Compared to the baseline YOLOv8n, the improved model exhibits a remarkable reduction of 56.9% in model parameters and a 42.0% decrease in computational cost, achieving a peak prediction accuracy of 95.6%. This method showcases exceptional performance across new datasets and various object detection approaches. We aspire for the approach outlined in this paper to provide assistance to a broader community of developers and researchers engaged in foreign object identification and detection using edge computing devices.
More importantly, this approach offers a cost-effective, efficient, and exceptionally precise solution for foreign object identification and detection, even in challenging scenarios involving intricate backgrounds, low-light conditions, and the demand for real-time decision-making. Implementing this method on edge devices can significantly diminish detection delays, promptly allocate operational time for the upper computer system, minimize conveyor belt downtime, and ultimately enhance the overall operational efficiency of coal mining facilities.

6. Future Work

In our forthcoming research endeavors, we intend to enlarge and enhance the dataset to tackle the challenge of sample imbalance, consequently leading to a more efficient enhancement in detection speed. Given that noise disturbances in underground coal mines can influence the ultimate detection accuracy, we will also direct a segment of our upcoming investigations towards alleviating the effects of noise interference.

Author Contributions

Conceptualization, B.L.; Methodology, B.L.; Investigation, B.L.; Writing—original draft, B.L.; Writing—review and editing, Z.K. and C.H.; Supervision, Z.K.; C.H. and J.W.; Project administration, Z.K. and J.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under grant 52174147 (Project title: Research on Intelligent Control and Fault Diagnosis of Autonomous Inspection Robot for Belt Conveyor in Harsh Environment); Taishan Industry Leading Talent Program (Project title: Key Technology Research and Industrialization of Intelligent Lower Transportation Expandable Belt Conveyor Equipment); Key R&D Plan Projects in Shanxi Province (Project title: Research on Underground Complex Working Condition Inspection Robot Based on Domestic Dragon Core Control, Project number: 202102100401004); Unmanned Management System for Belt Conveyors Based on Multiple Perception Technology (Project number: RH2200002154); Research on the mechanism of online quantitative assessment of core joint twitching of steel wire rope core conveyor belts based on dynamic eddy currents (by the Shanxi Science Administration for Market Regulation under Grant 20210302124354).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Thank you to Yue Yanbo and Zhao Xuan for providing assistance with the manuscript. The authors would like to acknowledge the anonymous reviewers and editors whose thoughtful comments helped to improve this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hoang, A.T.; Pham, V.V.; Nguyen, X.P. Integrating renewable sources into energy system for smart city as a sagacious strategy towards clean and sustainable process. J. Clean. Prod. 2021, 305, 127161. [Google Scholar] [CrossRef]
  2. Wang, G.; Xu, Y.; Ren, H. Intelligent and ecological coal mining as well as clean utilization technology in China: Review and prospects. Int. J. Min. Sci. Technol. 2019, 29, 161–169. [Google Scholar] [CrossRef]
  3. Wang, Y.; Lei, Y.; Wang, S. Green mining efficiency and improvement countermeasures for China’s coal mining industry. Front. Energy Res. 2020, 8, 18. [Google Scholar] [CrossRef]
  4. Wang, J.; Yu, B.; Kang, H.; Wang, G.; Mao, D.; Liang, Y.; Jiang, P. Key technologies and equipment for a fully mechanized top-coal caving operation with a large mining height at ultra-thick coal seams. Int. J. Coal Sci. Technol. 2015, 2, 97–161. [Google Scholar] [CrossRef]
  5. Zhou, K.; Liu, T.; Zhou, L. Industry 4.0: Towards future industrial opportunities and challenges. In Proceedings of the 2015 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), Zhangjiajie, China, 15–17 August 2015; pp. 2147–2152. [Google Scholar]
  6. Halepoto, I.A.; Shaikh, M.Z.; Chowdhry, B.S.; Uqaili, M.A. Design and implementation of intelligent energy efficient conveyor system model based on variable speed drive control and physical modeling. Int. J. Contr. Autom. 2016, 6, 379–388. [Google Scholar] [CrossRef]
  7. Zhang, M.; Jiang, K.; Cao, Y.; Li, M.; Hao, N.; Zhang, Y. A deep learning-based method for deviation status detection in intelligent conveyor belt system. J. Clean. Prod. 2022, 363, 132575. [Google Scholar] [CrossRef]
  8. Gupta, A. Failure of belt in conveyor system: An analysis. IUP J. Mech. Eng. 2014, 7, 65. [Google Scholar]
  9. Zhang, M.; Shi, H.; Zhang, Y.; Yu, Y.; Zhou, M. Deep learning-based damage detection of mining conveyor belt. Measurement 2021, 175, 109130. [Google Scholar] [CrossRef]
  10. Zhang, M.; Zhang, Y.; Zhou, M.; Jiang, K.; Shi, H.; Yu, Y.; Hao, N. Application of Lightweight Convolutional Neural Network for Damage Detection of Conveyor Belt. Appl. Sci. 2021, 11, 7282. [Google Scholar] [CrossRef]
  11. Qu, D.; Qiao, T.; Pang, Y.; Yang, Y.; Zhang, H. Research on ADCN method for damage detection of mining conveyor belt. IEEE Sens. J. 2020, 21, 8662–8669. [Google Scholar] [CrossRef]
  12. Xiao, Y.; Tian, Z.; Yu, J.; Zhang, Y.; Liu, S.; Du, S.; Lan, X. A review of object detection based on deep learning. Multimed. Tools Appl. 2020, 79, 23729–23791. [Google Scholar] [CrossRef]
  13. Gkioxari, G.; Hariharan, B.; Girshick, R.; Malik, J. R-cnns for pose estimation and action detection. arXiv 2014, arXiv:1406.5212. [Google Scholar]
  14. Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  15. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 2015, 28. [Google Scholar] [CrossRef] [PubMed]
  16. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Computer Vision–ECCV 2016, Proceedings of the 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  17. Hu, J.; Gao, Y.; Zhang, H.; Jin, B. Deep learning based non-coal foreign object recognition method for belt conveyor. Ind. Min. Autom. 2021, 47, 57–62, 90. [Google Scholar] [CrossRef]
18. Zhang, M.; Cao, Y.; Jiang, K.; Li, M.; Liu, L.; Yu, Y.; Zhou, M.; Zhang, Y. Proactive measures to prevent conveyor belt failures: Deep learning-based faster foreign object detection. Eng. Fail. Anal. 2022, 141, 106653. [Google Scholar] [CrossRef]
  19. Zhang, L.; Wang, H.; Lei, W.; Wang, B.; Lin, J. Coal gangue target detection of belt conveyor based on YOLOv5s-SDE. Ind. Min. Autom. 2023, 49, 106–112. [Google Scholar] [CrossRef]
  20. Mao, Q.; Li, S.; Hu, X.; Xue, X.; Yao, L. Foreign object recognition of coal mine belt conveyor based on improved YOLOv7. Ind. Min. Autom. 2022, 48, 26–32. [Google Scholar] [CrossRef]
  21. Sifre, L.; Mallat, S. Rigid-motion scattering for texture classification. arXiv 2014, arXiv:1403.1687. [Google Scholar]
22. Zhang, X.; Zhou, X.; Lin, M.; Sun, J. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6848–6856. [Google Scholar]
23. Ma, N.; Zhang, X.; Zheng, H.T.; Sun, J. ShuffleNet V2: Practical guidelines for efficient CNN architecture design. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 116–131. [Google Scholar]
  24. Yang, L.; Zhang, R.Y.; Li, L.; Xie, X. SimAM: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
25. Hillyard, S.A.; Vogel, E.K.; Luck, S.J. Sensory gain control (amplification) as a mechanism of selective attention: Electrophysiological and neuroimaging evidence. Philos. Trans. R. Soc. Lond. Ser. B Biol. Sci. 1998, 353, 1257–1270. [Google Scholar] [CrossRef]
26. Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  27. Wang, C.Y.; Bochkovskiy, A.; Liao, H.Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv 2022, arXiv:2207.02696. [Google Scholar]
28. Mehta, S.; Hajishirzi, H.; Rastegari, M. DiCENet: Dimension-wise convolutions for efficient networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 2416–2425. [Google Scholar] [CrossRef] [PubMed]
29. Tan, M.; Le, Q. EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA, 9–15 June 2019; pp. 6105–6114. [Google Scholar]
  30. Huber, J.F. Mobile next-generation networks. IEEE Multimed. 2004, 11, 72–83. [Google Scholar] [CrossRef]
  31. Cui, C.; Gao, T.; Wei, S.; Du, Y.; Guo, R.; Dong, S.; Lu, B.; Zhou, Y.; Lv, X.; Liu, Q.; et al. PP-LCNet: A lightweight CPU convolutional neural network. arXiv 2021, arXiv:2109.15099. [Google Scholar]
32. Tang, Y.; Han, K.; Guo, J.; Xu, C.; Xu, C.; Wang, Y. GhostNetV2: Enhance cheap operation with long-range attention. arXiv 2022, arXiv:2211.12905. [Google Scholar]
  33. Chen, J.; Kao, S.; He, H.; Zhuo, W.; Wen, S.; Lee, C.-H.; Chan, S.-H.G. Run, Don’t Walk: Chasing Higher FLOPS for Faster Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 12021–12031. [Google Scholar]
Figure 1. Experimental environment. (a) Belt conveyor used in the experiment; (b) mine inspection robot.
Figure 2. Sample foreign object data images. (a) The original images; (b) preprocessed images.
Figure 3. YOLOv8 network structure diagram.
Figure 4. Channel shuffling structure diagram. (a) Ordinary group convolution; (b) group convolution with channel shuffling.
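For clarity, the channel shuffling of Figure 4b reduces to a reshape–transpose–reshape over the channel axis. The following is a minimal PyTorch sketch; the function name and the toy example are ours for illustration and not taken from the authors' code.

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Interleave channels across groups so information can flow between them."""
    b, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by the group count"
    # (b, c, h, w) -> (b, groups, c/groups, h, w) -> swap the two group axes -> flatten back
    x = x.view(b, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(b, c, h, w)

# Toy check: 8 channels in 2 groups [0..3 | 4..7] interleave to [0, 4, 1, 5, 2, 6, 3, 7]
t = torch.arange(8.0).view(1, 8, 1, 1)
print(channel_shuffle(t, 2).flatten().tolist())
```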
Figure 5. ShuffleNetV2 unit structure. (a) Basic unit structure diagram; (b) downsampling unit structure diagram.
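The basic (stride-1) unit of Figure 5a splits the channels, passes one half through a 1 × 1 conv → 3 × 3 depthwise conv → 1 × 1 conv branch, concatenates, and shuffles. Below is a minimal PyTorch sketch of that unit under these assumptions (the stride-2 downsampling variant of Figure 5b is omitted, and layer names are illustrative, not the authors' implementation).

```python
import torch
import torch.nn as nn

def _shuffle(x: torch.Tensor, groups: int = 2) -> torch.Tensor:
    """Channel shuffle across two groups (see the sketch after Figure 4)."""
    b, c, h, w = x.shape
    return x.view(b, groups, c // groups, h, w).transpose(1, 2).reshape(b, c, h, w)

class ShuffleV2Basic(nn.Module):
    """Stride-1 unit of Figure 5a: split -> transform one branch -> concat -> shuffle."""
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.branch = nn.Sequential(
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
            nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False),  # 3x3 depthwise
            nn.BatchNorm2d(half),
            nn.Conv2d(half, half, 1, bias=False),
            nn.BatchNorm2d(half), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)  # channel split: one half passes through unchanged
        return _shuffle(torch.cat((x1, self.branch(x2)), dim=1))
```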
Figure 6. Schematic diagram of the SimAM weight assignment.
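SimAM [24] assigns every activation a weight from a closed-form energy function, so it adds no learnable parameters. A minimal PyTorch sketch of the weighting in Figure 6 follows the formulation of the SimAM paper; the regularization constant e_lambda is the commonly used default, which may differ from the setting used in this work.

```python
import torch

def simam(x: torch.Tensor, e_lambda: float = 1e-4) -> torch.Tensor:
    """Parameter-free attention: weight = sigmoid of the inverse per-position energy."""
    n = x.shape[2] * x.shape[3] - 1                     # spatial positions minus one
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)   # squared deviation from channel mean
    v = d.sum(dim=(2, 3), keepdim=True) / n             # per-channel variance estimate
    e_inv = d / (4 * (v + e_lambda)) + 0.5              # inverse energy of each position
    return x * torch.sigmoid(e_inv)                     # distinctive positions are upweighted
```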
Figure 7. Improved YOLOv8n network model.
Figure 8. The result curves of the three network models. (a) mAP result curve; (b) validation set classification loss curve.
Figure 9. Comparison of heat maps of different network models. (a) Original input image; (b) YOLOv8n heat map; (c) YOLOv8n-shuffle heat map; (d) YOLOv8n-SM heat map.
Figure 10. Recognition results of the three network models. (a) YOLOv8n; (b) YOLOv8n-shuffle; (c) YOLOv8n-shuffleNetv+.
Figure 11. Data I recognition results. (a) Original image; (b) YOLOv8n recognition result; (c) improved YOLOv8 recognition result.
Figure 12. Foreign object detection results in different environments. (a–c) Mist and dust recognition results; (d–f) low light recognition results; (g–i) motion blur recognition results.
Figure 13. Data II recognition results. (a) Original image; (b) YOLOv8n recognition result; (c) improved YOLOv8 recognition result.
Figure 14. Foreign object detection results in different environments. (a–c) Mist and dust recognition results; (d–f) low light recognition results; (g–i) motion blur recognition results.
Figure 15. Comparison of mainstream convolutional networks. (a) Parameter counts; (b) mAP50 and model computational load (FLOPs).
Figure 16. Detection results for images with different noise levels. (a) Mild processing; (b) moderate processing; (c) severe processing.
Table 1. Refactored ShuffleNetV2 structure.
Layer | Output Size | Stride | Repeat
Image | 224 × 224 × 3 | - | -
Conv | 112 × 112 × 24 | 2 | 1
MaxPool | 56 × 56 × 24 | 2 | 1
Stage 2: Shuffle-b | 28 × 28 × 48 | 2 | 1
Stage 2: Shuffle-a | 28 × 28 × 176 | 1 | 3
Stage 3: Shuffle-b | 14 × 14 × 96 | 2 | 1
Stage 3: Shuffle-a | 14 × 14 × 352 | 1 | 9
Stage 4: Shuffle-b | 7 × 7 × 192 | 2 | 1
Stage 4: Shuffle-a | 7 × 7 × 704 | 1 | 3
Conv | 7 × 7 × 1024 | 1 | 1
GlobalPool | 1 × 1 × 1024 | - | -
FC | k | - | -
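As a bookkeeping aid for Table 1, the feature-map sizes can be traced programmatically. The sketch below hard-codes the (output channels, stride, repeat) triples from the table and prints the resulting resolutions; it is a sanity check written by us, not the authors' implementation.

```python
# (output channels, stride, repeat) for each stage row of Table 1
STAGES = [
    ("Stage 2", [(48, 2, 1), (176, 1, 3)]),
    ("Stage 3", [(96, 2, 1), (352, 1, 9)]),
    ("Stage 4", [(192, 2, 1), (704, 1, 3)]),
]

def trace_table1(hw: int = 224) -> None:
    """Print the feature-map size after every block in Table 1."""
    size = hw // 2                      # stem conv, stride 2 -> 112 x 112 x 24
    print(f"Conv      : {size}x{size}x24")
    size //= 2                          # max pool, stride 2 -> 56 x 56 x 24
    print(f"MaxPool   : {size}x{size}x24")
    for name, blocks in STAGES:
        for c, s, r in blocks:
            size //= s
            print(f"{name:10s}: {size}x{size}x{c}  (stride {s}, x{r})")
    print(f"Conv      : {size}x{size}x1024")  # final conv, then global pool and FC

trace_table1()  # reproduces the 112 -> 56 -> 28 -> 14 -> 7 progression of Table 1
```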
Table 2. The results after introducing SimAM.
Models | Precision (%) | Recall (%) | mAP (%) | Parameters (M) | FLOPs (G)
YOLOv8n | 93.8 | 87.8 | 92.3 | 3.1 | 8.1
YOLOv8n-shuffle | 94.8 | 88.0 | 93.5 | 1.6 | 4.7
YOLOv8n-SM ① | 95.0 | 89.4 | 93.6 | 1.6 | 4.7
YOLOv8n-SM ①② | 95.2 | 89.7 | 93.8 | 1.6 | 4.7
YOLOv8n-SM ①②③ | 95.3 | 90.0 | 93.8 | 1.6 | 4.7
Table 3. Comparison of experimental results with different attention mechanisms.
Models | Precision (%) | Recall (%) | mAP (%) | Parameters (M) | FLOPs (G)
YOLOv8n-shuffle | 94.8 | 88.0 | 93.5 | 1.60 | 4.7
YOLOv8n-shuffle-SE | 89.5 | 79.6 | 88.1 | 1.64 | 4.8
YOLOv8n-shuffle-CBAM | 86.0 | 80.1 | 87.5 | 1.65 | 4.7
YOLOv8n-shuffle-CA | 94.7 | 87.5 | 93.2 | 1.64 | 4.7
YOLOv8n-shuffle-ECA | 94.1 | 85.5 | 92.0 | 1.64 | 4.7
YOLOv8n-shuffle-GAM | 94.1 | 84.3 | 91.1 | 1.75 | 4.8
YOLOv8n-shuffle-NAMA | 95.2 | 85.4 | 92.3 | 1.64 | 4.7
YOLOv8n-shuffle-SimAM | 95.3 | 90.0 | 93.8 | 1.60 | 4.7
Table 4. Improved ShuffleNetV2 network model results comparison.
Models | Precision (%) | Recall (%) | mAP (%) | Parameters (M) | FLOPs (G)
YOLOv8n | 93.8 | 87.8 | 92.3 | 3.1 | 8.1
YOLOv8n-shuffle | 94.8 | 88.0 | 93.5 | 1.6 | 4.7
YOLOv8n-shuffleNetv+ | 95.8 | 90.2 | 94.1 | 1.6 | 4.7
Table 5. Assessment of results from ablation experiments.
Model | Precision (%) | Recall (%) | mAP (%) | Parameters (M) | Inference (ms) | FLOPs (G)
YOLOv8n | 93.8 | 87.8 | 92.3 | 3.1 | 9.9 | 8.1
YOLOv8n-shuffleNetv2 | 94.8 | 88.0 | 93.5 | 1.6 | 3.7 | 4.7
YOLOv8n-shuffleNetv2+SimAM | 95.3 | 90.0 | 93.8 | 1.6 | 4.5 | 4.7
YOLOv8n-shuffleNetv+ | 95.8 | 90.2 | 94.1 | 1.6 | 4.0 | 4.7
YOLOv8n-shuffleNetv++SM | 96.9 | 92.7 | 95.6 | 1.6 | 4.5 | 4.7
Table 6. Comparison of Data I before and after improvement.
Data I | mAP (%) | Parameters (M) | FLOPs (G)
YOLOv8n | 90.9 | 3.2 | 8.2
Improved YOLOv8n | 93.6 | 1.6 | 4.7
Table 7. Comparison of Data II before and after improvement.
Data II | mAP (%) | Parameters (M) | FLOPs (G)
YOLOv8n | 93.6 | 3.0 | 8.2
Improved YOLOv8n | 94.4 | 1.7 | 4.8
Table 8. Comparison of outcomes from various network detection models. Recognition accuracy for the six foreign object classes and mAP50 are given in %, parameters in M, and FLOPs in G.
Model | Pallet | Anchor Shaft | Gangue | Angle Iron | Bolt | Nut | mAP50 | Parameters | FLOPs
YOLOv3 | 90.3 | 91.4 | 91.6 | 92.0 | 91.6 | 80.1 | 91.6 | 61.6 | 193.5
YOLOv5s | 91.3 | 92.8 | 92.1 | 91.3 | 93.7 | 80.6 | 91.4 | 7.2 | 16.5
YOLOv5m | 91.6 | 92.7 | 94.5 | 91.3 | 93.5 | 83.6 | 92.9 | 21.2 | 49.0
YOLOv5l | 94.9 | 96.9 | 97.8 | 97.1 | 97.1 | 84.9 | 94.5 | 46.5 | 109.1
YOLOv5x | 95.4 | 97.3 | 97.8 | 97.6 | 97.2 | 88.1 | 96.9 | 86.7 | 205.7
YOLOv7-tiny | 91.4 | 92.7 | 92.5 | 91.3 | 93.8 | 83.5 | 91.6 | 6.1 | 13.2
YOLOv7 | 92.1 | 92.9 | 94.6 | 92.0 | 93.5 | 83.1 | 93.1 | 37.1 | 105.1
YOLOv7-x | 94.9 | 95.2 | 95.1 | 94.9 | 93.6 | 83.5 | 94.1 | 70.5 | 188.6
YOLOv8n | 92.9 | 93.7 | 96.5 | 95.1 | 94.5 | 71.4 | 92.3 | 3.1 | 8.1
YOLOv8s | 95.4 | 96.9 | 98.0 | 97.5 | 97.3 | 84.6 | 95.9 | 11.1 | 28.6
Improved YOLOv8n | 95.4 | 96.7 | 97.6 | 97.6 | 97.3 | 84.5 | 95.6 | 1.6 | 4.7
Table 9. Inference speed and hardware condition results for different models.
Method | Parameters (M) | FPS | Inference (ms) | Hardware
Improved YOLOv3 [16] | 61.6 | - | - | RTX 2080 Ti
Improved YOLOv4 [18] | 6.50 | 70.1 | - | GTX 1650
Improved YOLOv5 [19] | 54.6 | 59.9 | - | RTX A4000
Improved YOLOv7 [21] | 37.1 | 25.6 | 25.0 | RTX 3080
YOLOv8n | 3.10 | 72.1 | 12.1 | RTX 3080
FasterNet-YOLOv8 | 2.91 | 65.1 | 5.1 | RTX 3080
EfficientNet-YOLOv8 | 1.90 | 60.4 | 4.9 | RTX 3080
GhostNetv2-YOLOv8 | 3.78 | 40.4 | 20.7 | RTX 3080
PP-LCNet-YOLOv8 | 1.72 | 73.2 | 4.9 | RTX 3080
MobileNext-YOLOv8 | 2.05 | 20.8 | 81.5 | RTX 3080
Mobilenet-YOLOv8 | 2.35 | 75.1 | 10.2 | RTX 3080
Our improved method | 1.60 | 81.5 | 4.5 | RTX 3080
Our improved method | 1.60 | 75.3 | 5.6 | RTX 2080
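Parameter and latency figures like those in Table 9 are typically obtained by counting trainable parameters and timing warmed-up forward passes with the GPU synchronized. The generic PyTorch sketch below illustrates one such measurement protocol under our assumptions (batch size 1, 640 × 640 input, illustrative iteration counts); the authors' exact measurement procedure is not specified here.

```python
import time
import torch

def count_params_m(model: torch.nn.Module) -> float:
    """Total trainable parameters, in millions (the 'Parameters (M)' column)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6

@torch.no_grad()
def measure_latency_ms(model: torch.nn.Module, device: str = "cuda",
                       iters: int = 200, warmup: int = 50) -> float:
    """Average single-image forward time in ms; FPS is then 1000 / latency."""
    model.eval().to(device)
    x = torch.randn(1, 3, 640, 640, device=device)  # assumed input resolution
    for _ in range(warmup):                          # warm up kernels / cuDNN autotuning
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()                     # flush queued GPU work before timing
    t0 = time.perf_counter()
    for _ in range(iters):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()                     # wait for all timed GPU work to finish
    return (time.perf_counter() - t0) * 1000 / iters
```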
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
