Article

Detection of Train Wheelset Tread Defects with Small Samples Based on Local Inference Constraint Network

College of Railway Transportation, Hunan University of Technology, Zhuzhou 412007, China
*
Author to whom correspondence should be addressed.
Submission received: 4 April 2024 / Revised: 24 May 2024 / Accepted: 31 May 2024 / Published: 5 June 2024
(This article belongs to the Special Issue Machine Vision in Industrial Systems)

Abstract

Due to the long-term service through wheel-rail rolling contact, the train wheelset tread will inevitably suffer from different types of defects, such as wear, cracks, and scratches. The effective detection of wheelset tread defects can provide critical support for the operation and maintenance of trains. In this paper, a new method based on a local inference constraint network is proposed to detect wheelset tread defects, and the main purpose is to address the issue of insufficient feature spaces caused by small samples. First, a generative adversarial network is applied to generate diverse samples with semantic consistency. An attention mechanism module is introduced into the feature extraction network to increase the importance of defect features. Then, the residual spine network for local input decisions is constructed to establish an association between sample features and defect types. Furthermore, the network’s activation function is improved to obtain higher learning speed and accuracy with fewer parameters. Finally, the validity and feasibility of the proposed method are verified using experimental data.

1. Introduction

As an important guiding, moving, and load-bearing component of a train, the wheelset tread will inevitably suffer from different types of defects, such as wear, cracks, and scratches, due to long-term service under rolling contact [1,2,3]. If such defects are allowed to worsen, they not only cause additional shock and vibration that degrade ride comfort but can also, in extreme cases, develop into serious failures such as wheelset breakage. If not handled in time, these defects may lead to catastrophic accidents such as train derailment. Effective detection of wheelset tread defects is therefore of great significance for improving the running safety of a train [4,5,6].
At present, most detection methods of wheelset tread defects are traditional. These methods involve scanning the wheelset tread with light or laser beams when the train enters the depot at low speed or stops for maintenance. Defects are then identified by comparing the scanned shape with the standard wheelset tread shape [7,8,9,10,11]. Although the identification accuracy of such methods is high, high-intensity labor and long recognition cycles are required.
With the rapid development of machine learning, deep learning has shown clear advantages and potential in defect detection. Various types of deep learning methods have been adopted in research related to wheelset tread defect detection [12,13,14,15,16,17,18], including the Faster RCNN algorithm, the YOLO algorithm, the SSD algorithm, deep convolutional neural networks, and so on [19,20,21,22,23,24]. In reference [19], a convolutional neural network recognition model was established to detect wheelset tread defects using time-series two-dimensional wheel–rail force data obtained from an instrumented wheelset. Nevertheless, most existing wheelset tread defect detection methods require a relatively large amount of data to achieve high accuracy. This requirement is difficult to satisfy because trains run in complex and diverse environments, and tread defect samples are not easily collected. When deep learning is carried out directly with small samples, the model accuracy suffers and over-fitting occurs due to an insufficient defect feature space [25,26]. So far, few studies have been devoted to detecting wheelset tread defects with small samples.
This paper focuses on addressing the issue of deep learning with small samples in the context of wheelset tread defect detection. Through three aspects, namely sample expansion, feature enhancement, and network decision making, a local inference constraint network is constructed to detect wheelset tread defects. In terms of sample expansion, the data are expanded using a generative adversarial network, and a dataset with semantic consistency and sample distribution diversity is obtained. For feature enhancement, the importance of defect features is increased by introducing an attention mechanism module into the network. For network decision making, a spine network with a residual structure is constructed to obtain more accurate results with fewer operations through the precise input of local information. Finally, the proposed method is verified and analyzed with experimental data.

2. Proposed Method

Wear, cracks, peeling, scratches, and severe scratches on the wheelset treads are common defects caused by wheelset–rail contact [27,28,29]. Considering that an insufficient feature space can be caused by small samples of wheelset tread defects, the importance of defect features in model identification was improved by addressing sample expansion, feature enhancement, and network decision making. A local inference constraint network-based detection method was proposed to predict the type of wheelset tread defect with small samples, as shown in Figure 1.
In Figure 1, the diverse sample expansion module (Module B), built on the generative adversarial network indicated by the blue block, was employed to obtain a dataset with semantic consistency and sample distribution diversity. The feature extraction module (Module C) increased the importance of defect features by incorporating the attention mechanism module, represented by the orange block, into the network. The residual spinal fully connected layer (Module D) improved the distinguishability of the feature vectors by re-sorting local features and weighting them according to their importance, enabling the network to achieve more accurate results with fewer calculations. Finally, the defect type of the wheelset tread was determined in Module E.

2.1. Sample Expansion Based on Generative Adversarial Network

The purpose of data sample expansion is to generate enough data for deep network learning. In a basic generative adversarial network model, the generator receives a random sample space and noise data conforming to a Gaussian distribution. The discriminator receives samples generated by the generator and real data samples, updating parameters through gradient back-propagation via a loss function. By using random samples and noise data with Gaussian distribution, the basic generative adversarial network ignores the information distribution characteristics of the actual object, so the quality of the generated samples is relatively low. To address this, noise data were added through an AdaIN mechanism during sample expansion, ensuring that the generator creates wheelset tread images according to image size. The independent Gaussian noise input affects only subtle changes in visual features. AdaIN is defined as follows [30],
\mathrm{AdaIN}(x_i, y) = \delta(y)\left(\frac{x_i - \mu(x_i)}{\sigma(x_i)}\right) + \mu(y)
where μ(x_i) and σ(x_i) represent the mean and standard deviation of the real data sample, respectively, and μ(y) and δ(y) are the mean and standard deviation of the latent data sample space, respectively.
The network generated by Equation (1) can better fit the distribution of wheelset tread defect data, and the constructed image information is more in line with real samples. For the specific network structure, please refer to Reference [30].
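For illustration, the AdaIN operation in Equation (1) can be sketched in PyTorch as follows. This is a minimal sketch assuming feature maps of shape (N, C, H, W); the function name and the small epsilon added for numerical stability are illustrative choices, not taken from the paper.

import torch

def adain(x, y, eps=1e-5):
    # x: features of the real data sample, shape (N, C, H, W)
    # y: features of the latent data sample space, shape (N, C, H, W)
    mu_x = x.mean(dim=(2, 3), keepdim=True)          # per-channel mean of x
    sigma_x = x.std(dim=(2, 3), keepdim=True) + eps  # per-channel standard deviation of x
    mu_y = y.mean(dim=(2, 3), keepdim=True)          # per-channel mean of y
    delta_y = y.std(dim=(2, 3), keepdim=True)        # per-channel scale of y
    # Normalize x with its own statistics, then rescale and shift with the statistics of y (Equation (1))
    return delta_y * (x - mu_x) / sigma_x + mu_y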

2.2. Feature Enhancement with Attention Mechanism

Although a conventional network can dynamically describe and weight the input data in a learnable manner, it struggles to distinguish defect feature weights of different data subsets. This is especially true for weakened features, random positions, and high background noise in wheelset tread defects. To eliminate these weaknesses, a multi-dimensional attention mechanism was embedded in the feature extraction module. That is, an attention mechanism was added to the channel level to highlight the feature channel information, and an attention mechanism was introduced in the spatial dimension to strengthen the spatial dimensional information. The module is shown in Figure 2.
As can be seen from Figure 2, the input data were the wheelset tread defect data expanded by the generative adversarial network. A convolutional neural network was used to obtain a feature vector of high-dimensional semantic information. Weighted attention parameters were then obtained through the channel and spatial attention mechanisms indicated by the red outline in the figure, and the attention-weighted output was obtained by multiplying the feature mapping with the attention weights. The channel attention mechanism and the spatial attention mechanism are formulated as,
M_c(x) = \sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(\mathrm{Conv}_5(x))) + \mathrm{MLP}(\mathrm{MaxPool}(\mathrm{Conv}_5(x)))\big)
M_s(x) = \sigma\big(\mathrm{conv}(M_c(x)) \otimes M_c(x)\big)
A_t(x) = \mathrm{Conv}_5(x) \otimes M_c(x)
where Conv5 is the feature space after five convolution layers; Mc and Ms are the channel attention weight and spatial attention weight, respectively; At is the weighted feature space. By compressing the spatial dimension with the channel attention mechanism, a greater weight was applied to the wheelset tread defect features. By compressing the channel dimension with the spatial attention mechanism, spatial positional information was provided to improve the utilization of feature vectors.
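As a concrete illustration of the channel and spatial attention formulas above, the following is a minimal PyTorch sketch of a CBAM-style channel and spatial attention block. The 2048-channel input, the reduction ratio of 16, and the 7 × 7 spatial convolution are assumptions made for illustration rather than the authors' exact configuration.

import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels=2048, reduction=16):
        super().__init__()
        # Shared MLP used for the channel attention weights
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Convolution used for the spatial attention weights
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # x: feature map Conv5(x), shape (N, C, H, W)
        n, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # MLP over average-pooled features
        mxp = self.mlp(x.amax(dim=(2, 3)))                 # MLP over max-pooled features
        mc = torch.sigmoid(avg + mxp).view(n, c, 1, 1)     # channel attention weights M_c
        x = x * mc                                         # channel-weighted feature map
        sp = torch.cat([x.mean(dim=1, keepdim=True),
                        x.amax(dim=1, keepdim=True)], dim=1)
        ms = torch.sigmoid(self.spatial_conv(sp))          # spatial attention weights M_s
        return x * ms                                      # weighted feature space A_t(x)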

2.3. Residual Spinal Fully Connected Layer

The residual is a measure of the difference between a statistical sample and a true sample, and it is an observable estimate of unobservable statistical errors [30]. In neural networks, the residual (also known as a skip connection) directly connects an input to an output through another channel. It is used to amplify errors and highlight changes in the model parameters to be optimized. In the decision-making layer of a neural network, the fully connected layer is generally combined with other modules to reduce the number of parameters because of its inherently high parameter count and difficult training. Inspired by the unique way in which the vertebrate nervous system processes information, the idea of the residual was introduced into the spinal fully connected layer, resulting in a residual spinal fully connected layer. The module innovatively improves the input-output connection and alleviates the over-fitting problem of the spinal fully connected layer by adding a residual channel. Furthermore, the activation function was reset based on the quantity characteristics of the dataset. This makes the module more suitable for detecting wheelset tread defects with small samples. The model structure is shown in Figure 3.
As demonstrated in Figure 3, the weighted feature space At(x) obtained from Equation (4) is subjected to a max pooling layer, Maxpool, to prune redundant information from shallow features, resulting in a one-dimensional feature Mx of size 1 × 1 × 2048. Subsequently, Mx is decomposed into two equal parts, A1 and A2, defined as,
M_x = \mathrm{Maxpool}(A_t(x))
A_1 = \{x_{M_1}, x_{M_2}, \ldots, x_{M_{1024}}\}
A_2 = \{x_{M_{1025}}, x_{M_{1026}}, \ldots, x_{M_{2048}}\}
where “Maxpool” refers to the operation of maximum pooling, and the elements xMi within the sets A1 and A2 denote the i-th feature value of the feature vector Mx.
The spinal neural network consists of Sp1, Sp2, Sp3, and Sp4. As indicated by the green outline in Figure 3, each spinal module is composed of a dropout layer, a linear layer, and an activation function. The dropout layers and activation functions were set with the same parameters across the four modules. The output size of each linear layer was set to 512. The input size of the linear layer in Sp1 was 1024, while that of the linear layers in Sp2, Sp3, and Sp4 was 1536. The purpose of these settings is to unify the feature dimension of the final input to the fully connected layer. The corresponding equations are,
Sp_1 = \sigma(\mathrm{Linear}(\mathrm{Droop}(A_1)))
Sp_2 = \sigma(\mathrm{Linear}(\mathrm{Droop}(Sp_1 \oplus A_2)))
Sp_3 = \sigma(\mathrm{Linear}(\mathrm{Droop}(Sp_2 \oplus A_1)))
Sp_4 = \sigma(\mathrm{Linear}(\mathrm{Droop}(Sp_3 \oplus A_2)))
FC = \mathrm{Linear}(Sp_1 + Sp_2 + Sp_3 + Sp_4) + \mathrm{Maxpool}(A_t(x))
where Droop denotes the random zeroing (dropout) operation, Linear is a linear layer, σ is the activation function of the spinal neural network, and ⊕ denotes feature concatenation. The output of each spinal block, when combined with the feature vector that has undergone max pooling, yields a refined feature vector FC of dimensions 1 × 1 × 2048. This feature vector FC is then mapped to class scores through the final fully connected layer. Subsequently, the Softmax function is applied to transform the class scores into a probability distribution, thereby accomplishing the pattern recognition task for images of the wheelset tread.
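A compact PyTorch sketch of the residual spinal fully connected layer is given below. It follows the equations above and the pseudocode in Section 2.4, concatenating the four spinal outputs before the residual addition of the pooled feature; the dropout probability, the ReLU activation, and the six-class output head are assumptions made for illustration.

import torch
import torch.nn as nn

class ResidualSpinalFC(nn.Module):
    def __init__(self, feat_dim=2048, half_width=1024, spine_dim=512, num_classes=6, p=0.5):
        super().__init__()
        def spine(in_dim):
            # Each spinal block: dropout, linear layer, activation
            return nn.Sequential(nn.Dropout(p), nn.Linear(in_dim, spine_dim), nn.ReLU(inplace=True))
        self.sp1 = spine(half_width)               # input: A1 (1024-d)
        self.sp2 = spine(half_width + spine_dim)   # input: [A2, Sp1] (1536-d)
        self.sp3 = spine(half_width + spine_dim)   # input: [A1, Sp2] (1536-d)
        self.sp4 = spine(half_width + spine_dim)   # input: [A2, Sp3] (1536-d)
        self.proj = nn.Linear(4 * spine_dim, feat_dim)   # maps concatenated spinal outputs back to 2048-d
        self.fc = nn.Linear(feat_dim, num_classes)       # final fully connected classification layer
        self.half_width = half_width

    def forward(self, mx):
        # mx: max-pooled feature M_x, shape (N, 2048)
        a1, a2 = mx[:, :self.half_width], mx[:, self.half_width:]
        s1 = self.sp1(a1)
        s2 = self.sp2(torch.cat([a2, s1], dim=1))
        s3 = self.sp3(torch.cat([a1, s2], dim=1))
        s4 = self.sp4(torch.cat([a2, s3], dim=1))
        fc = self.proj(torch.cat([s1, s2, s3, s4], dim=1)) + mx   # residual connection to M_x
        return self.fc(fc)   # class scores, to be passed to Softmax / CrossEntropyLoss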

2.4. Model Pseudocode

Based on the above analysis, the pseudocode of the detection method of train wheelset tread defects is as follows.
Model pseudocode (PyTorch)
Input: x, half_width
Output: y
# features: feature extraction network
# attention: attention mechanism module
# avgpool: average pooling
# rsf_1 ... rsf_4: spinal blocks of the residual spinal fully connected layer
# Define the loss function and the Adam optimizer
loss_fn = CrossEntropyLoss()
optimizer = optim.Adam(model.parameters())
# Perform training iterations
for epoch in range(50):
  x = features(x)                              # backbone feature extraction
  x = attention(x)                             # channel and spatial attention weighting
  x = avgpool(x)                               # pooled feature vector
  x1 = rsf_1(x[:, 0:half_width])
  x2 = rsf_2(cat([x[:, half_width:2*half_width], x1], dim=1))
  x3 = rsf_3(cat([x[:, 0:half_width], x2], dim=1))
  x4 = rsf_4(cat([x[:, half_width:2*half_width], x3], dim=1))
  x = cat([x1, x2, x3, x4], dim=1)             # concatenate the spinal outputs
  y = fc(x)                                    # final fully connected layer
  loss = loss_fn(y, labels)
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()
# Load the trained model for testing
model = load()
model.test()

3. Experiment and Analysis

3.1. Experiment Environment

Dataset: The dataset used in this paper was collected in a maintenance depot workshop. A total of 255 wheelset tread images were selected and labeled with defect type. Among them, 45 were normal, 56 had cracks, 52 had scratches, 42 had peeling, 40 had abrasions, and 20 had severe scratches.
Experimental setup: The experiments were implemented in PyTorch. The running environment was configured as follows. Processor: Intel Core i9-9900K; running memory: 11 GB; graphics card: NVIDIA GeForce RTX 2080 Ti; code environment: Torch 1.9.0, Python 3.9; CUDA version: CUDA 11.1. SGD was used as the optimizer, the batch size was 32, and 50 training iterations were performed.
Evaluation indicators: The accuracy, recall, precision, and F1 value were chosen as the evaluation indicators. Among them, accuracy indicates the proportion of the number of correctly predicted samples to the total number of samples. Recall (R) is the proportion of the number of samples correctly predicted to be positive to the total number of positive samples. Precision (P) stands for the proportion of the number of samples correctly predicted to be positive to the total number of samples predicted to be positive. The F1 value indicates the weighted harmonic average of recall and precision.
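In terms of true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), these indicators follow the standard definitions:

\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad R = \frac{TP}{TP + FN}, \qquad P = \frac{TP}{TP + FP}, \qquad F1 = \frac{2PR}{P + R}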

3.2. Ablation Experiments

3.2.1. Ablation Experiment of Generative Adversarial Network

In order to objectively evaluate the performance of the generative adversarial network, wheelset tread defect datasets were established using various data generation methods: a dataset without generative processing (Non), a dataset generated by geometric transformation (GT), a dataset generated by pixel transformation (PT), and a dataset generated by a generative adversarial network (Gan). The specific distributions are shown in Table 1.
In the geometric transformation, horizontal flips, vertical flips, and rotations by 90 and 180 degrees were carried out. In the pixel transformation, Gaussian noise, salt-and-pepper noise, brightening, and blurring were applied to the real data. In the generative adversarial network, the StyleGAN3 variant of the generative model was used to generate the data. Samples generated by Non, GT, PT, and Gan are shown in Figure 4.
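For reference, the geometric and pixel transformations described above can be sketched with torchvision as follows; the noise levels, brightness range, and blur kernel size are illustrative assumptions, not the values used in the paper.

import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

# Geometric transformations (GT): flips and fixed rotations
geometric = [
    T.RandomHorizontalFlip(p=1.0),
    T.RandomVerticalFlip(p=1.0),
    T.Lambda(lambda img: TF.rotate(img, 90)),
    T.Lambda(lambda img: TF.rotate(img, 180)),
]

# Pixel transformations (PT): noise, brightening, and blurring on tensors in [0, 1]
def add_gaussian_noise(img, std=0.05):
    return (img + std * torch.randn_like(img)).clamp(0.0, 1.0)

def add_salt_pepper_noise(img, amount=0.02):
    mask = torch.rand_like(img)
    img = img.clone()
    img[mask < amount / 2] = 0.0        # pepper
    img[mask > 1 - amount / 2] = 1.0    # salt
    return img

pixel = [
    T.Lambda(add_gaussian_noise),
    T.Lambda(add_salt_pepper_noise),
    T.ColorJitter(brightness=(1.2, 1.5)),   # brightening
    T.GaussianBlur(kernel_size=5),          # blurring
]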
The training dataset and the test dataset were divided in a 4:1 ratio. Resnet was used as the feature extraction module on the above four datasets to verify the effectiveness of the data generation methods, and the results are shown in Table 2.
As can be seen from Table 2, for all models the applied generative methods effectively increase the number of training samples, improve sample diversity, and extend the spatial distribution of the samples. Therefore, regardless of whether the tread defect dataset is generated by geometric transformation, pixel transformation, or a generative adversarial network, the performance of the model is effectively improved and its error rate is reduced. Comparing geometric and pixel transformations, the defects have a scattered, crumb-like distribution and are slightly darker than the background, so they resemble the noise added in the pixel transformation, which may reduce prediction performance. Thus, the dataset generated by geometric transformation is slightly better than that generated by pixel transformation. Overall, the generative adversarial network performs better than either geometric or pixel transformation because it can effectively mitigate the problems associated with small samples and improve the models' resistance to over-fitting.

3.2.2. Ablation Experiment of Residual Spinal Fully Connected Layer

In order to verify the influence of the improved fully connected layer on the model performance, the fully connected layer in Resnet50 was replaced with the improved structure proposed in this paper. Meanwhile, in order to examine the performance difference between the spinal fully connected layer and the residual spinal fully connected layer, the feature network was Resnet [20], and the comparative experiments were performed on the two models. In these experiments, the settings were the same except for differences in the fully connected layer. The results are shown in Table 3.
Spinal-res denotes the model obtained by replacing the fully connected layer of Resnet-50 with the residual spinal fully connected layer. Although the spinal neural network can improve the performance of some network models, it can lead to a decrease in model performance and an increase in parameters when applied to Resnet-50. As seen in Table 3, the detection accuracy of the improved residual spinal fully connected layer is increased by 0.42%, and it has a stronger ability to identify defect features. This is because the dual-branch decision network provides both local and global views, effectively reducing local information disturbance and improving model robustness.
In addition, the parameter quantity of a deep convolutional neural network is also an evaluation indicator, used to weigh model complexity against prediction accuracy. A smaller number of parameters usually leads to faster training and smaller storage requirements, but it may also reduce the model's prediction accuracy. Conversely, a larger parameter count can improve prediction accuracy, but it may require longer training time and more storage space. As shown in Table 3, although the prediction accuracies of the three models are similar, Spinal-res improves the accuracy and reduces the error rate without a significant increase in parameter count, which verifies the effectiveness of the Spinal-res module.

3.2.3. Ablation Experiment of Attention Mechanism

The dimension and weight of the feature layer affect the training results of the network. Considering the different impacts of the various feature dimensions, an attention mechanism module was added after feature extraction in the Resnet training stage. In this experiment, the feature network was set as Resnet-50 [31], and the attention mechanism module was set as either a channel attention mechanism (SE) or a combined channel-spatial attention mechanism (CBAM). The results are shown in Table 4.
As can be seen from Table 4, compared with the model without an attention mechanism, the accuracy, precision, recall, and specificity of the model are improved by the insertion of the SE and CBAM. Specifically, the accuracies are improved by 1.25% and 1.25%, respectively; the precisions are improved by 1.26% and 1.26%, respectively; the recalls are improved by 1.29% and 1.27%, respectively; and the specificities are improved by 1.27% and 1.26%, respectively. This indicates that the attention mechanism can assign different weight information for different feature information data, so it can lead to better results in feature expression and model prediction.

3.2.4. Results Analysis

In order to verify the effectiveness of the proposed model, different traditional deep neural networks and lightweight network models were applied for comparison. The traditional deep neural networks included Resnet-50, Resnet-101, and VGG16, and the lightweight network models were Desnet-161, Desnet-169, and ConvNext. The data for model training and testing are the generated data in Section 3.2.1. The results are shown in Table 5.
As can be seen from Table 5, the prediction performance of the deep neural networks (Resnet-50, Resnet-101, VGG16) is superior to that of the lightweight networks (Desnet-161, Desnet-169, ConvNext) for detecting wheelset tread defects. Compared with VGG16, the Resnet-50 and Resnet-101 networks generally achieve better performance, which is attributable to their double-branched residual channel structure. Furthermore, the improved model based on the Resnet feature extraction network achieves better performance than the others in terms of accuracy, precision, recall, and specificity. Specifically, the accuracy of the model proposed in this paper is improved by 7.23% compared with VGG16 and by 1.667% compared with Resnet-50. The improvement is obtained through the constraining action of the residual spinal fully connected layer and the attention mechanism, which provide greater capability in feature expression and decision-making.

4. Conclusions

In order to solve the issue of insufficient defect feature spaces caused by small samples, a wheelset tread defect detection method based on a local inference constraint network is proposed. The main innovations are as follows: (1) Wheelset tread defect data generated by a generative adversarial network are used to expand sample distribution diversity while preserving semantic consistency, from which high-dimensional semantic feature vectors are obtained. (2) The weights of the limited feature vectors are increased through the attention mechanism to improve the importance of defect features. (3) In the decision layer, a residual spine network based on local input is innovatively proposed to obtain better prediction results with fewer parameters. The experimental results demonstrate that the proposed method achieves higher prediction accuracy in detecting wheelset tread defects than advanced methods such as Resnet-50, VGG16, and ConvNext.
Although this method has achieved high accuracy in detecting wheelset tread defects, it still faces practical challenges. Issues such as image blurring caused by vibration and unstable image frames due to high speed remain key areas for future research [32].

Author Contributions

Conceptualization, J.L. (Jianhua Liu) and Z.W.; methodology, J.L. (Jianhua Liu), S.J. and J.L. (Jiahao Liu); writing—original draft, S.J.; writing—review and editing, J.L. (Jianhua Liu) and Z.W.; validation, J.L. (Jianhua Liu) and Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China (2021YFF0501101), The National Natural Science Foundation of China (52272347), the Scientific Research Project of Hunan Provincial Department of Education (22A0391), and the National Science Fund of Hunan (2024JJ7132).

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to the fact that the data were collected on-site from a specific vehicle depot.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, Y.F.; Feng, K.; Chen, Y.J.; Chen, Z.G. Multioperator Morphological Undecimated Wavelet for Wheelset Bearing Compound Fault Detection. IEEE Trans. Instrum. Meas. 2023, 72, 7504612. [Google Scholar] [CrossRef]
  2. Guo, X.X.; Ji, Z.Y.; Feng, Q.B.; Wang, H.H.; Yang, Y.Y.; Li, Z. URS: A Light-Weight Segmentation Model for Train Wheelset Monitoring. IEEE Trans. Intell. Transp. 2023, 24, 7707–7716. [Google Scholar] [CrossRef]
  3. Huang, J.C.; Jiang, B.Y.; Xu, C.Q.; Wang, N.F. Slipping Detection of Electric Locomotive Based on Empirical Wavelet Transform, Fuzzy Entropy Algorithm and Support Vector Machine. IEEE Trans. Veh. Technol. 2021, 70, 7558–7570. [Google Scholar] [CrossRef]
  4. Zhang, Q.S.; Ding, J.M.; Zhao, W.T. An Adaptive Demodulation Band Segmentation Method to Optimize Spectral Boundary and Its Application for Wheelset-Bearing Fault Detection. IEEE Trans. Instrum. Meas. 2022, 71, 3514510. [Google Scholar] [CrossRef]
  5. Weng, Y.B.; Li, Z.C.; Chen, X.H.; He, J.; Liu, F.N.; Huang, X.B.; Yang, H. A Railway Track Extraction Method Based on Improved DeepLabV3+. Electronics 2023, 12, 3500. [Google Scholar] [CrossRef]
  6. Xin, G.; Li, Z.; Jia, M.; Zhong, Q.T.; Dong, H.H.; Hamzaoui, N.; Antoni, J. Fault Diagnosis of Wheelset Bearings in High-Speed Trains Using Logarithmic Short-Time Fourier Transform and Modified Self-Calibrated Residual Network. IEEE Trans. Ind. Inform. 2022, 18, 7285–7295. [Google Scholar] [CrossRef]
  7. Cai, W.B.; Chi, M.R.; Wu, X.W.; Wen, Z.F.; Liang, S.L.; Jin, X.S. Experimental and Numerical Analysis of the Polygonal Wear of High-Speed Trains. Wear 2019, 440–441, 203076. [Google Scholar] [CrossRef]
  8. Shi, H.L.; Wang, J.B.; Wu, P.B.; Song, C.Y.; Teng, W.X. Field Measurements of the Evolution of Wheel Wear and Vehicle Dynamics for High-Speed Trains. Veh. Syst. 2018, 56, 1187–1206. [Google Scholar] [CrossRef]
  9. Qin, C.J.; Tao, J.F.; Shi, H.T.; Xiao, D.Y.; Li, B.C.; Liu, C.L. A Novel Chebyshev-Wavelet-Based Approach for Accurate and Fast Prediction of Milling Stability. Precis. Eng. 2020, 62, 244–255. [Google Scholar] [CrossRef]
  10. Spangenberg, U. Variable Frequency Drive Harmonics and Interharmonics Exciting Axle Torsional Vibration Resulting in Railway Wheel Polygonization. Veh. Syst. Dyn. 2019, 58, 404–424. [Google Scholar] [CrossRef]
  11. Montinaro, N.; Epasto, G.; Cerniglia, D.; Guglielmino, E. Laser ultrasonics inspection for defect evaluation on train wheel. NDT E Int. 2019, 107, 102145. [Google Scholar] [CrossRef]
  12. Wang, R.; Guo, Q.; Lu, S.M.; Zhang, C.M. Tire Defect Detection using Fully Convolutional Network. IEEE Access 2019, 7, 43502–43510. [Google Scholar] [CrossRef]
  13. Mosleh, A.; Meixedo, A.; Ribeiro, D.; Montenegro, P.; Calçadaa, R. Early Wheel Flat Detection: An Automatic Data-Driven Wavelet-Based Approach for Railways. Veh. Syst. 2022, 61, 1644–1673. [Google Scholar] [CrossRef]
  14. Chen, L.; Choy, Y.S.; Wang, T.G.; Chiang, Y.K. Fault Detection of Wheel in Wheel Rail System using Kurtosis Beamforming Method. Struct. Health Monit. 2019, 19, 495–509. [Google Scholar] [CrossRef]
  15. Zhang, L.H.; Wang, Y.W.; Ni, Y.Q.; Lai, S.K. Online Condition Assessment of High-Speed Trains Based on Bayesian Forecasting Approach and Time Series Analysis. Smart Struct. Syst. 2018, 21, 705–713. [Google Scholar]
  16. Peng, D.D.; Wang, H.; Liu, Z.L.; Zhang, W.; Zuo, M.J. Multibranch and Multiscale Cnn for Fault Diagnosis of Wheelset Bearings under Strong Noise and Variable Load Condition. IEEE Trans. Ind. Inform. 2020, 16, 4949–4960. [Google Scholar] [CrossRef]
  17. Ye, Y.G.; Huang, C.H.; Zeng, J.; Zhou, Y.C.; Li, F.S. Shock Detection Of Rotating Machinery Based on Activated Time-Domain Images and Deep Learning: An Application to Railway Wheel Flat Detection. Mech. Syst. Signal Pract. 2023, 186, 109856. [Google Scholar] [CrossRef]
  18. Ye, Y.G.; Zhu, B.; Huang, P.; Peng, B. OORNet: A Deep Learning Model for On-Board Condition Monitoring and Fault Diagnosis of Out-of-Round Wheels of High-Speed Trains. Measurement 2022, 199, 111268. [Google Scholar] [CrossRef]
  19. Gabriel, K.; Cheng, S.O.; Stefan, K. Wheel defect detection with machine learning. IEEE Trans. Intell. Transp. 2017, 19, 1176–1187. [Google Scholar]
  20. Dabir, H.M.D.; Abdar, M.; Khosravi, A.; Jalali, S.M.J.; Atiya, A.F.; Nahabandi, S.; Srinivasan, D. Spinalnet: Deep Neural Network With Gradual Input. IEEE Trans. Artif. Intell. 2022, 4, 1165–1177. [Google Scholar] [CrossRef]
  21. Sun, B.; Liu, X.F. Significance support vector machine for high-speed train bearing fault diagnosis. IEEE Sens. J. 2021, 23, 4638–4646. [Google Scholar] [CrossRef]
  22. He, L.; Yi, C.; Zhou, Q.Y.; Lin, J.H. Fast Convolutional Sparse Dictionary Learning Based on Locomp and its Application to Bearing Fault Detection. IEEE Trans. Instrum. Meas. 2022, 71, 3519012. [Google Scholar] [CrossRef]
  23. Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J. Faster r-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. 2016, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
  24. Liang, B.; Iwnicki, S.; Ball, A.; Young, A.E. Adaptive Noise Cancelling and Time–Frequency Techniques for Rail Surface Defect Detection. Mech. Syst. Signal Pract. 2015, 54–55, 41–51. [Google Scholar] [CrossRef]
  25. Barman, J.; Hazarika, D. Linear and Quadratic Time–Frequency Analysis of Vibration for Fault Detection and Identification of NFR Trains. IEEE Trans. Instrum. Meas. 2020, 69, 8902–8909. [Google Scholar] [CrossRef]
  26. Li, Y.; Liu, Z.; Liu, X.; Zhao, P.F.; Liu, T.B.G. High-Speed Electromagnetic Train Wheel Inspection using a Kalman-Model-Based Demodulation Algorithm. IEEE Sens. J. 2019, 19, 6833–6843. [Google Scholar] [CrossRef]
  27. Zhang, Z.H.; Wang, P.; Ding, J.M. Fault Detection and Analysis for Wheelset Bearings via Improved Explicit Shift-Invariant Dictionary Learning. ISA Trans. 2023, 136, 468–482. [Google Scholar] [CrossRef]
  28. Xu, M.T.; Yao, H.M. Fault Diagnosis Method of Wheelset Based on Eemd-Mpe and Support Vector Machine Optimized by Quantum-Behaved Particle Swarm Algorithm. Measurement 2023, 216, 112923. [Google Scholar] [CrossRef]
  29. Liu, Z.C.; Yang, S.P.; Liu, Y.Q.; Lin, J.H.; Gu, X.H. Adaptive Correlated Kurtogram and its Applications in Wheelset-Bearing System Fault Diagnosis. Mech. Syst. Signal Pract. 2021, 154, 107511. [Google Scholar] [CrossRef]
  30. Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. IEEE Trans. Pattern Anal. 2020, 43, 4217–4228. [Google Scholar] [CrossRef]
  31. He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  32. Li, H.Q.; Wang, Y.; Zeng, J.; Li, F.S.; Yang, Z.H.; Mei, G.M.; Ye, Y.G. Virtual Point Tracking Method for Online Detection of Relative Wheel-Rail Displacement of Railway Vehicles. Reliab. Eng. Syst. Saf. 2024, 246, 110087. [Google Scholar] [CrossRef]
Figure 1. Local inference constraint network-based detection method of train wheelset tread defect.
Figure 2. Defect feature enhancement with attention mechanism.
Figure 3. Residual spinal fully connected layer.
Figure 4. The data generated by Non, GT, PT, and Gan.
Table 1. Generated dataset.

Expanded Method | Normal (0) | Crack (1) | Scratch (2) | Peeling (3) | Abrasion (4) | Severe Scratches (5)
Non | 45 | 56 | 52 | 42 | 40 | 20
GT | 225 | 280 | 260 | 210 | 200 | 100
Gan | 200 | 200 | 200 | 200 | 200 | 200
PT | 225 | 280 | 260 | 210 | 200 | 100
Table 2. Dataset validation.

Model | Parameters (kb) | Accuracy (%): Non | Accuracy (%): GT | Accuracy (%): PT | Accuracy (%): Gan
Resnet-34 [20] | 83,297 | 61.22 | 88.62 | 83.52 | 93.75
Resnet-50 | 92,193 | 43.2 | 90.5 | 88.62 | 94.16
Resnet-101 | 166,689 | 65.3 | 90.19 | 88.62 | 94.16
Table 3. Ablation experiment of residuals in the spinal fully connected layer.

Model | Decision-Making Module | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | Parameter (kb)
Resnet [20] | Non | 94.58 | 94.97 | 95.16 | 94.87 | 92,193
Resnet [20] | Spinal [20] | 94.58 | 94.97 | 95.16 | 94.87 | 97,324
Resnet [20] | Spinal-res | 95.00 | 95.39 | 95.61 | 95.30 | 97,324
Non represents Resnet-50 with a fully connected layer, and Spinal represents Resnet-50 with a spinal fully connected layer instead of a fully connected layer.
Table 4. Ablation experiment of attention mechanism.

Model | Attention | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) | Parameter (kb)
Resnet-50 [31] | Non | 94.583 | 94.97 | 95.16 | 94.87 | 92,193
Resnet-50 [31] | SE | 95.833 | 96.23 | 96.45 | 96.14 | 108,589
Resnet-50 [31] | CBAM | 95.833 | 96.23 | 96.43 | 96.13 | 94,261
Table 5. Model experiment results.

Evaluation Indicators | Resnet-50 | Resnet-101 | VGG16 | Desnet-169 | Desnet-161 | ConvNext | Proposed Method
Accuracy (%) | 95.48 | 94.17 | 89.02 | 64.17 | 66.60 | 80 | 96.25
Precision (%) | 94.97 | 94.17 | 80.83 | 64.43 | 66.67 | 80 | 94.58
Recall (%) | 95.16 | 94.78 | 81.32 | 64.55 | 67.15 | 80.53 | 96.87
Specificity (%) | 94.87 | 94.47 | 81.07 | 64.36 | 66.91 | 80.27 | 94.58
Parameter (kb) | 92,193 | 166,689 | 524,563 | 49,814 | 104,688 | 342,204 | 99,393

