Article

Research on Tracking and Identification of Typical Protective Behavior of Cows Based on DeepLabCut

Jia Li, Feilong Kang, Yongan Zhang, Yanqiu Liu and Xia Yu
1 Inner Mongolia Agricultural University, Hohhot 010018, China
2 Inner Mongolia Autonomous Region Key Laboratory of Big Data Research and Application of Agriculture and Animal Husbandry, Hohhot 010018, China
* Author to whom correspondence should be addressed.
Submission received: 1 December 2022 / Revised: 11 January 2023 / Accepted: 12 January 2023 / Published: 14 January 2023
(This article belongs to the Special Issue Deep Learning in Object Detection and Tracking)

Abstract

In recent years, traditional farming methods have increasingly been replaced by modern, intelligent techniques; the shift toward information-driven, intelligent farming has become a clear trend. When bitten by dipteran insects (biting flies), cows display stress behaviors, including tail wagging, head tossing, leg kicking, ear flapping, and skin twitching. The study of cow protective behavior can indirectly reveal the health status of cows and their living patterns under different environmental conditions, allowing the breeding environment and animal welfare status to be evaluated. In this study, we generated key point feature marker information using the DeepLabCut detection algorithm and constructed the spatial relationships of cow feature marker points to detect protective behavior based on changes in key elements of the cow's head swinging and walking performance. The algorithm can detect the protective behavior of cows with an accuracy that reaches the level of manual detection. The next step in this research will focus on analyzing the differences in protective behaviors of cows in different environments, which can help in cow breed selection. This work is an important guide for diagnosing the health status of cows and improving milk production in practical settings.

1. Introduction

Large-scale dairy farming can improve the level of dairy production and ensure the high-quality development of dairy farming and the milk industry. However, large-scale farming also brings problems, especially due to the excessive concentration of manure and urine. Parasites and other microorganisms in manure multiply and breed large numbers of harmful dipteran insects (biting flies). Dipteran infestation causes a significant increase in the heart rate of cows; their feeding behavior is disturbed, and they are unable to settle and rest. Cows must mobilize their immune systems to fight insect-borne infection, and long-term mental stress leads to diseases that are extremely harmful to a cow's weight and performance. When bitten by these insects, cows exhibit stress behaviors, mainly tail wagging, head shaking, leg kicking, ear flapping, skin twitching, and other physical protective behaviors. Protective behavior, as a response to external aggression, produces instinctive behavioral expressions that protect the body and maintain physiological homeostasis. The same cow behaves differently in different environments, reflecting its suitability for different breeding environments. These small biting insects are difficult to detect directly, so it is valuable to detect them indirectly by observing the cow's protective behavior. By studying the protective behavior of cows, we can indirectly determine their health status and living patterns under different survival conditions. This allows us to understand the psychological needs of cows and predict their behavior, enabling evaluation of the breeding environment and animal welfare status. The temperament of an individual dairy cow influences its milk yield, and milk production also varies greatly between breeds and individuals. Cows express their emotional states through their behavior, including signs of depression, agitation, and distress. Significant individual differences exist in the protective behavior of cows of different temperament types (e.g., docile, sensitive, neurotic) in response to insect-induced stress in the same environment; this variation reflects differences in individual temperament. Therefore, the study of cow protective behavior is important basic work for evaluating cow temperament types and analyzing differences in milk yield.
Most current research on cow protective behavior has relied on manual observation to obtain basic data for empirical judgments and theoretical analyses. The manual observation method has many disadvantages: high work intensity, fatigue, subjectivity, low accuracy, and susceptibility to zoonotic diseases. The observer is also required to have a certain baseline knowledge of animal behavior. This over-reliance on manually acquired behavioral information has constrained the progress of subsequent studies. Increasingly, research on animal behavior monitors animal feeding, behavioral posture, and similar indicators with the help of technologies such as wearable devices and sensors [1]. Sensor technology has improved the efficiency and accuracy of obtaining behavioral information [2], but sensor-based methods can only detect a single signal, and the equipment is costly and easily damaged. Sensors also cause animal stress during the wearing process, affecting the mood of the cows. This contact-based monitoring impacts animal welfare.
Therefore, it is necessary to study a machine vision-based intelligent identification method for cow protective behavior to achieve accurate and efficient stress-free and contact-free detection to replace traditional observation and analysis and contact monitoring [3]. This project aims to investigate a markerless, quantitative, intelligent recognition method for typical cow protective behavior (tail wagging, head tossing, ear flapping, and leg kicking) based on machine vision technology to replace manual monitoring and accurately detect protective behavior. This research project provides accurate basic data for dairy cow behavior researchers and breeding experts, and it is an important fundamental work for related research. Deep data mining helps to scientifically determine cow temperament types and provides a basis for dairy farming environment assessment, which has important social and economic value.
Some scholars have achieved recognition of key behaviors of the head and trunk of cows [4], but there is still a great practical need for continued research on the detailed recognition of cow protective behavior. The automatic recognition of animal behaviors with the help of machine vision technology can provide the basis for the psychological and physiological analysis and evaluation of animals [5]. Dairy cows usually live in outdoor or semi-enclosed barns. Most research on the recognition of dairy cow behaviors has been directed toward the detection of moving targets [6]. Porto et al. proposed using the Viola-Jones algorithm with multiple cameras to recognize the lying posture of cows [7]. Poursaberi et al. used automatic arc matching of back posture to identify lame cows based on curvature [8]. However, detecting moving targets of cow protective behavior in dynamic backgrounds remains challenging due to the global motion of the background [9], illumination variation, background noise, scale variation of the target, and occlusion [10].
Deep learning neural networks have profoundly influenced many areas of artificial intelligence [11], especially very challenging image recognition tasks [12,13]. Machine vision-based human behavior detection has made great progress, and more and more valuable research results have emerged; these results provide reference solutions for this study. We can draw on excellent algorithms developed for human pose estimation, some of which can also be applied to cow protective behavior detection. We believe that an effective way to achieve complex protective behavior recognition is to fuse apparent features, such as shape, texture, and color, with motion features, such as optical flow, velocity, and motion trajectory, as well as depth information. Many strong algorithms are available, including DeeperCut [14], ArtTrack [15], and DeepPose [16], which are effective for human pose tracking, but these methods do not model the behavioral characteristics of animals deeply enough and transfer poorly to animal behavior detection [17]. Some scholars have made preliminary attempts to use DeepLabCut for behavior detection in rats and insects [18]. DeepLabCut is a deep learning toolbox, developed by a team at Harvard University, that can track animal movements and behaviors with human-level accuracy, without tracking markers or time-consuming manual analysis [19]. Zhan et al. developed an algorithm based on an optimized DeepLabCut that automatically tracks the key points of pest trajectories and accurately identifies pest behaviors [20]. Wrench et al. used DeepLabCut to study articulatory sites and found that it provides unique kinematic data for the tongue, hyoid bone, jaw, and lips [21]. In this paper, we investigate a behavior recognition algorithm, based on DeepLabCut key point detection, that monitors changes in the locations of key parts of cows to identify head swinging, walking, and other protective behaviors.

2. Materials and Methods

A 2.5 m × 2.5 m individual barn is built outdoors. The number of insects inside the barn is regulated by controlling the state of the outer screen enclosure to create different environmental conditions: a human-controlled environment and a natural environment. Video data are collected under natural lighting conditions, and video images of healthy breeding Holstein cows are captured using high-definition cameras. We photographed one cow at a time, avoiding occlusion as far as possible while recording information about the environment.
Because the protective behaviors involve detailed head, tail, and leg actions, the shooting distance is within 3 m, and the shooting height is fixed. Combined with the local temperature conditions, the experimental period is set to May through October, and video images of the cows are collected between 11:00 and 14:00. The training experiments are run on an Intel Core i9-9900K CPU (3.6 GHz) with 32 GB of RAM and an NVIDIA GeForce RTX 2080Ti graphics card with 11 GB of video memory.
In this paper, a DeepLabCut-based deep learning method is used to localize the detailed elements of cow protective behavior without markers. Reducing the target area in which protective behavior occurs to the detection and tracking of a small number of feature points significantly reduces the computational complexity of the tracking algorithm. First, a certain number of key video sample frames showing cow protective behavior are extracted from the captured video. As shown in Figure 1, key feature points, such as the head, mouth, left ear, right ear, tail, and front and hind limbs, are manually marked to establish the training dataset. Second, the effectiveness of deep convolutional network models for extracting target features of cow protective behavior is analyzed through comparative experiments. Finally, we use the DeepLabCut deep learning technique to construct a feature key point detection model for protective behavior. After learning the manually labeled key feature point information in the training set, pre-labeling of new test samples is no longer required: the deep convolutional network directly predicts the probability of each body part's key point at every video pixel, so the coordinates of key feature points in each video frame can be accurately located.
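As a minimal sketch of this labeling workflow, the standard DeepLabCut Python API can be used roughly as follows; the project name, video paths, and body part list are illustrative assumptions, not the authors' actual configuration:

import deeplabcut

# Create a project for the cow videos; returns the path to config.yaml.
config_path = deeplabcut.create_new_project(
    "CowProtectiveBehavior", "lab",   # hypothetical project/experimenter names
    ["videos/cow_pen_01.mp4"],        # hypothetical video path
    copy_videos=True,
)

# Edit config.yaml so the tracked body parts match Figure 1, e.g.:
#   bodyparts: [head, mouth, left_ear, right_ear, tail, front_limb, hind_limb]

# Extract candidate key frames (k-means clustering over frames by default)
# and open the labeling GUI to mark the body parts by hand.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
deeplabcut.label_frames(config_path)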
As shown in Figure 2, DeepLabCut is a deep convolutional network combining two key components: a pre-trained ResNet and deconvolutional layers. The network consists of a variant of ResNet whose weights are pre-trained on ImageNet. The deconvolutional layers up-sample the visual information and generate a spatial probability density, which expresses the likelihood that the tracked body part is located at a particular position. To fine-tune the network for a specific task, its weights are trained on labeled data consisting of video frames with manually marked body part locations. Using the graphical interface of the DeepLabCut tool, the training video is stepped through frame by frame, and the selected frames are labeled. We divide the labeled images into training and validation sets at a ratio of 90:10. The weights are adjusted iteratively during training so that the network assigns high probabilities to the labeled body part locations and low probabilities elsewhere.
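A sketch of the corresponding training step, assuming the project created above; the iteration counts are illustrative, and DeepLabCut reads the train/test split from the TrainingFraction field of config.yaml, which would be set to 0.9 to match the 90:10 ratio used here:

import deeplabcut

# Assemble the training dataset with an ImageNet-pretrained ResNet backbone.
deeplabcut.create_training_dataset(config_path, net_type="resnet_50")

# Iteratively adjust the weights on the labeled frames, then evaluate on the
# held-out validation frames.
deeplabcut.train_network(config_path, maxiters=200000, displayiters=1000)
deeplabcut.evaluate_network(config_path, plotting=True)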
The method overcomes the effects of complex background changes, illumination interference, and camera shake on the deep network recognition model. This enables the automated detection and tracking of non-continuously moving feature points in regions such as the head, ears, legs, and tail. The distinguishing characteristics of the various protective behaviors can be derived by analyzing their correlation and specificity. Despite the transient nature and abrupt displacements of these behaviors, the method can continuously detect and track feature points on a large spatial scale, yielding the motion trajectories of key feature points of cow protective behavior in the spatiotemporal domain. Using the continuous motion signals of different feature points in the space–time domain, the characteristics of protective behavior can be better identified along the time dimension. Thus, the automatic classification and recognition of the protective behavior of dairy cows can be achieved.

3. Results

Cow protective behaviors are characterized by short duration and high randomness. Enabling the deep learning algorithm to automatically find the feature points of protective behavior in the image is the key to recognizing these behaviors. In this paper, a DeepLabCut-based deep learning approach is used to localize cow protective behavior feature points without markers. The extraction technique learns information about the critical feature targets: using the residual convolutional network, pixel features in the image sequence are extracted to predict each point's position in the next frame. This method effectively detects and identifies the locations of the key points of protective behavior, so the motion trajectory of the target between adjacent frames can be calculated.
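To make the trajectory computation concrete: DeepLabCut stores its per-frame predictions as a pandas DataFrame in an .h5 file, one per analyzed video, so the trajectory and frame-to-frame displacement of each key point can be read off directly. The file name and the 0.9 confidence cutoff below are illustrative assumptions:

import pandas as pd

# One .h5 file per analyzed video; the DataFrame columns form a MultiIndex
# of (scorer, bodypart, coordinate). The file name here is hypothetical.
df = pd.read_hdf("videos/cow_test_01DLC_resnet50.h5")
scorer = df.columns.get_level_values(0)[0]

head = df[scorer]["head"]                   # columns: x, y, likelihood
reliable = head[head["likelihood"] > 0.9]   # keep confident detections only
displacement = reliable[["x", "y"]].diff()  # motion between adjacent frames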
In this paper, we precisely mark the feature points of key frames by hand, because human vision can intuitively identify the points of interest of protective behavior. By learning and tracking these feature points with deep neural networks, protective actions can be tracked quickly; the method thus avoids the complexity of using image-processing algorithms to segment and identify body parts. Protective behavior is transient and disappears quickly after it occurs, but real-time tracking of feature points does not lose the target.
An important feature of DeepLabCut is that it can accurately transform large videos into semantically meaningful low-dimensional time series data, because we marked the feature points in our experiments; that is, we pre-selected the parts likely to provide the most information about the studied behavior. Compared with high-dimensional video, this low-dimensional time series data is well suited for behavioral analysis because it is easy to process computationally. Mainstream algorithms still analyze the data frame by frame, which does not fully exploit the temporal dimension for identifying cow protective behavior, yet analyzing long high-dimensional sequences directly invites the curse of dimensionality. The low-dimensional time series obtained by feature point tracking can therefore better identify cow protective behavior. If the hardware cannot meet real-time requirements, performance can also be improved by extracting predictions only at fixed time steps across frames, trading some accuracy for reduced reliance on hardware. Optimization of the model can continue after the first round of training: the incorrectly predicted frames are extracted, the misdetected key points are manually moved to the correct positions, and the process is repeated to obtain a more accurate model. We obtained videos of cow key feature point tracking trajectories by using the trained model to analyze new, unmarked videos.
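A sketch of this refinement loop using DeepLabCut's active-learning utilities; the video paths are again illustrative:

import deeplabcut

# Run the trained model on new, unmarked videos and render labeled output.
deeplabcut.analyze_videos(config_path, ["videos/cow_test_02.mp4"])
deeplabcut.create_labeled_video(config_path, ["videos/cow_test_02.mp4"])

# Pull out frames whose predictions look wrong (e.g., jumpy trajectories),
# drag the misplaced key points to the correct positions in the GUI, then
# merge the corrections into the dataset and retrain.
deeplabcut.extract_outlier_frames(config_path, ["videos/cow_test_02.mp4"])
deeplabcut.refine_labels(config_path)
deeplabcut.merge_datasets(config_path)
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)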
Once the key point locations are accurately detected, the speed and direction of motion of the key feature points in each image of the video sequence can be analyzed. However, because movement speeds differ, the displacement scales of the feature points of different protective behaviors vary widely; when the receptive field is smaller than the displacement, the target is likely to be lost. Tracking can instead be performed on a large spatial scale using pyramidal feature fusion, with tracking accuracy improved progressively by correcting the initially assumed motion position from the coarse levels of the pyramid down toward the finer levels.
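The paper does not spell out its pyramid implementation; as one illustration of the coarse-to-fine idea, OpenCV's pyramidal Lucas-Kanade tracker refines point positions from the coarsest pyramid level downward, so large, fast displacements such as tail swings stay within the effective search range. Paths and seed coordinates below are hypothetical:

import cv2
import numpy as np

cap = cv2.VideoCapture("videos/cow_test_02.mp4")  # hypothetical path
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Seed points, e.g., detected head/tail/leg locations, shape (N, 1, 2).
points = np.array([[[410.0, 120.0]], [[820.0, 260.0]], [[700.0, 430.0]]],
                  dtype=np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # maxLevel=4 gives five pyramid levels, so displacements much larger than
    # the 21-pixel window can still be tracked coarse-to-fine.
    points, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, points, None, winSize=(21, 21), maxLevel=4)
    prev_gray = gray
cap.release()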
Since cow protective behavior consists of a continuous set of action signals in the time domain, each action has causal relationships in both the temporal and spatial domains, and the duration of each action is uncertain. This information reflects how the actions of protective behavior change over time; using location information alone would discard the characteristic information in the temporal dimension. Based on this, the spatiotemporal motion trajectories of different body parts can be captured to analyze the characteristics of cow protective behavior, as shown in Figure 3. The temporal and spatial continuity of protective movements is exploited, and multiple features, such as the apparent characteristics, motion characteristics, and location information of the cows, are fused (see Figure 4), so that different protective behaviors can be effectively classified and identified. Head wagging can be determined from the change in the angle between the horizontal and the line connecting the key points of the mouth and the head. Kicking behavior can be determined from the change in the angle between the key points of the four legs. Tail waving can be determined from the change in the angle between the horizontal and the line connecting the key points of the rump and the tail.
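A minimal sketch of these angle features follows; the key point coordinates are assumed to come from the tracked trajectories above, and the 15-degree threshold is an illustrative assumption, not a value reported by the authors:

import numpy as np

def segment_angle(p, q):
    """Angle in degrees between the horizontal and the line from p to q."""
    return np.degrees(np.arctan2(q[1] - p[1], q[0] - p[0]))

def head_wagging(head_xy, mouth_xy, prev_angle, threshold=15.0):
    """Flag head wagging when the head-mouth angle swings by more than
    `threshold` degrees relative to the previous frame."""
    angle = segment_angle(head_xy, mouth_xy)
    wagging = prev_angle is not None and abs(angle - prev_angle) > threshold
    return wagging, angle

# Tail waving applies the same rule to the rump-tail segment; leg kicking uses
# the change in the angle spanned by the leg key points between frames.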
The experimental results show that the model based on the DeepLabCut algorithm can accurately detect and track the target with very little training data, as shown in Figure 5. Comparing the results of the behavior detection algorithm with those of manual detection, the detection accuracy of the algorithm reaches the level of manual detection. Compared with existing cow behavior recognition algorithms, this algorithm has the advantages of a small training dataset and low resource consumption.

4. Discussion

Animal behavior recognition based on machine vision is a cross-disciplinary problem that integrates machine vision, pattern recognition, animal behavior science, and video analysis. The automatic recognition of animal behavior with the help of machine vision can provide the basis for psychological and physiological analyses of animals. Dairy cows usually move in outdoor or semi-enclosed barns. Research on machine vision-based animal information and normal behavior recognition has produced a variety of accurate and efficient methods. However, most existing research on cow behavior recognition focuses on the detection of moving targets, and all of it amounts to scene-level behavior detection and recognition (e.g., multi-camera behavior detection of cows on automatic milking platforms, or detection of walking trajectory and distance, body size, standing or lying down, and feeding).
Most research on cow protective behavior has relied on manual observation, and automatic detection of these moving targets remains understudied. Although machine vision research on animal information and normal behavior recognition has produced a variety of accurate and efficient methods, existing behavior recognition algorithms focus on the physical parameters and normal behaviors of domestic animals such as pigs and cattle. In addition, existing research requires a highly controlled video acquisition environment, which is difficult to adapt to complex farm settings, so there is still a long way to go before such methods can be widely used in the farming industry. At the same time, given the sudden and transient characteristics of cow protective behavior, detection algorithms still have much room for improvement.
The detailed elements of a cow's protective behavior (the tail, head, ears, and legs in motion) are the foreground targets, while the cow's body, fences, and screens form the background. The black-and-white body areas of the cows are very similar in color pattern to the tail, head, and legs, so foreground targets are easily misprocessed as background, making foreground detection difficult. The body regions involved in protective behavior are non-rigid structures whose shape and posture change readily; large deformations across postures such as standing and lying down further complicate the detection and localization of these detail regions. First, tail swinging, head tossing, ear flapping, leg kicking, and other protective behaviors are short, unpredictable, transient, and sudden; the relative movement is large and fast and difficult to track continuously. Second, the cow's body shakes while moving, which interferes with identifying protective behavior. Third, when the cow moves normally, the detail regions move with the body, and existing methods struggle to distinguish the features of protective behavior from the normal movement of the cow's body, making targeted tracking and detection impossible. Therefore, detecting cow protective behavior against a dynamic background is a difficult problem.
Cow protective behavior is a behavioral response to the external environment. Its spatial and temporal characteristics, such as intensity and frequency, can reflect the quality of the breeding environment and the significant differences between cow temperaments. Although machine vision and machine learning methods have been deeply involved in many studies of animal behavior, research on machine vision-based recognition of cow protective behavior is still in its initial stages. Existing methods cannot further quantify the intensity, frequency, and trends of protective behavior on top of automatic recognition, and thus cannot completely replace manual work.
The head, ears, tail, and legs are the key areas tracked in this project. The detailed parts of dairy cow protective behavior are non-rigid targets with sudden and transient characteristics: each behavior occurs irregularly, and the direction and speed of movement often change abruptly. The scale and shape of these detail parts change according to the cow's body movement and the viewpoint. Continuously detecting and tracking key feature points in non-continuously moving detail areas, without expert pre-marking, is a critical problem to be addressed. This project investigates a new deep learning network model for recognizing the image features of the detailed parts of cow protective behavior, optimizing the network structure with deep residual separable convolutions to improve detection accuracy while reducing the number of model parameters. An improved DeepLabCut deep learning model is used to detect and identify the key feature points of cow protective behavior in videos. The focus of this project is how to interpret the motion trajectories of the detected feature points as a particular protective behavior. Considering the continuity of protective actions in the spatial and temporal domains, pyramid feature fusion is used to extract information such as the body features and motion trajectories of protective actions occurring at different time points in the video. We use neural network techniques to capture the dynamic information in the time-serialized action data to recognize cow protective behavior, as sketched below.
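As a hedged sketch of that last step, a small 1D convolutional network over windows of per-frame key point features (angles, positions, velocities) could capture the temporal dynamics; the architecture, window length, feature count, and class list below are assumptions for illustration, not the authors' exact model:

import torch
import torch.nn as nn

class ProtectiveBehaviorNet(nn.Module):
    """Classify a window of per-frame key point features into behaviors,
    e.g., tail wagging, head tossing, ear flapping, leg kicking, or none."""
    def __init__(self, n_features=14, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time: tolerates variable action length
        )
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, n_features, window)
        return self.fc(self.conv(x).squeeze(-1))

model = ProtectiveBehaviorNet()
dummy = torch.randn(8, 14, 64)        # 8 windows of 64 frames, 14 features each
logits = model(dummy)                 # shape (8, 5): per-class scores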
The next step of the study will focus on analyzing the differences in cow protective behavior in different environments. We will collect data in different natural and human-controlled environments, including indicators of spatial and temporal variability, such as intensity, frequency, and change trends. According to the results of recognition based on machine vision, we will perform data mining and machine learning analysis on the collected sample data. Combining the results with the expert experience, we will build a quantitative model of spatiotemporal variation in cow protective behavior characteristic indicators.

5. Conclusions

This paper breaks with existing animal behavior detection methods, which mainly take the outline of the animal's shape as the starting point. Instead, this study takes the feature points of key areas as the detection target and uses the DeepLabCut model to extract information about the cow's characteristic parts. This not only overcomes the difficulty of foreground target segmentation but also excludes interference such as body shaking and background changes. The number of feature points to be tracked is greatly reduced, and the computational complexity of the algorithm is decreased. The detection accuracy of the algorithm reaches the level of manual detection. By analyzing the position changes of the key cow features, the automatic classification and recognition of cow protective behavior can be realized.
Our current work is a preliminary study aimed at solving this important, practical problem in livestock farming, and it can also be extended to other livestock, such as sheep and horses. Technically, beyond tracking and key point-based pose estimation, some potentially interesting settings call for further research, e.g., an image/video-based retrieval system for conveniently accessing video clips of specific behaviors [22]. Retrieval can go beyond raw visual data and leverage a graph-based, more structured search using matching techniques, either pairwise matching [23] or multiple-instance matching [24]. The data could also be used to analyze each behavior as an event in the recorded video, and the event sequence over time can be analyzed as a whole for a better understanding at both the individual animal level and the group level. This is essentially related to the temporal point process [25], which can further incorporate neural networks for learning [26], with wide applications. We leave these research directions for future work.

Author Contributions

Conceptualization, J.L. and F.K.; methodology, F.K.; software, Y.Z.; validation, Y.L., Y.Z. and X.Y.; formal analysis, J.L. and F.K.; investigation, J.L.; resources, F.K.; data curation, X.Y.; writing—original draft preparation, J.L.; writing—review and editing, F.K.; visualization, J.L.; supervision, Y.L.; project administration, F.K.; funding acquisition, J.L. and F.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China, grant number 32160813; Natural Science Foundation of Inner Mongolia Autonomous Region of China, grant number 2021BS03038; Research Program of Science and Technology at Universities of Inner Mongolia Autonomous Region of China, grant number NJZY21457; Inner Mongolia Agricultural University High-Level Talent Research Start-Up Project, grant number NDYB2019-27; Inner Mongolia Agricultural University High-Level Talent Research Start-Up Project, grant number NDYB2018-38; and National Natural Science Foundation of China, grant number 32060415.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Schweinzer, V.; Gusterer, E.; Kanz, P.; Krieger, S.; Süss, D.; Lidauer, L.; Berger, A.; Kickinger, F.; Öhlschuster, M.; Auer, W.; et al. Evaluation of an ear-attached accelerometer for detecting estrus events in indoor housed dairy cows. Theriogenology 2019, 130, 19–25.
2. Benaissa, S.; Tuyttens, F.A.M.; Plets, D.; Trogh, J.; Martens, L.; Vandaele, L.; Joseph, W.; Sonck, B. Calving and estrus detection in dairy cattle using a combination of indoor localization and accelerometer sensors. Comput. Electron. Agric. 2020, 168, 105153.
3. Li, J.; Wu, P.; Kang, F.; Zhang, L.; Xuan, C. Study on the Detection of Dairy Cows' Self-Protective Behaviors Based on Vision Analysis. Adv. Multimed. 2018, 2018, 9106836.
4. Jabbar, K.A.; Hansen, M.F.; Smith, M.L.; Smith, L.N. Early and non-intrusive lameness detection in dairy cows using 3-dimensional video. Biosyst. Eng. 2017, 153, 63–69.
5. Wagner, N.; Antoine, V.; Mialon, M.M.; Lardy, R.; Silberberg, M.; Koko, J.; Veissier, I. Machine learning to detect behavioural anomalies in dairy cows under subacute ruminal acidosis. Comput. Electron. Agric. 2020, 170, 105233.
6. Röttgen, V.; Schön, P.C.; Becker, F.; Tuchscherer, A.; Wrenzycki, C.; Düpjan, S.; Puppe, B. Automatic recording of individual oestrus vocalisation in group-housed dairy cattle: Development of a cattle call monitor. Animal 2020, 14, 198–205.
7. Porto, S.M.; Arcidiacono, C.; Anguzza, U.; Cascone, G. A computer vision-based system for the automatic detection of lying behaviour of dairy cows in free-stall barns. Biosyst. Eng. 2013, 115, 184–194.
8. Poursaberi, A.; Bahr, C.; Pluk, A.; Van Nuffel, A.; Berckmans, D. Real-time automatic lameness detection based on back posture extraction in dairy cattle: Shape analysis of cow with image processing techniques. Comput. Electron. Agric. 2010, 74, 110–119.
9. Bezen, R.; Edan, Y.; Halachmi, I. Computer vision system for measuring individual cow feed intake using RGB-D camera and deep learning algorithms. Comput. Electron. Agric. 2020, 172, 105345.
10. Küster, S.; Kardel, M.; Ammer, S.; Brünger, J.; Koch, R.; Traulsen, I. Usage of computer vision analysis for automatic detection of activity changes in sows during final gestation. Comput. Electron. Agric. 2020, 169, 105177.
11. Yang, X.; Yan, J. On the Arbitrary-Oriented Object Detection: Classification Based Approaches Revisited. Int. J. Comput. Vis. 2022, 130, 1340–1365.
12. Yang, X.; Zhang, G.; Yang, X.; Zhou, Y.; Wang, W.; Tang, J.; He, T.; Yan, J. Detecting rotated objects as gaussian distributions and its 3-d generalization. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 1–18.
13. Zhang, S.; Qiu, L.; Zhu, F.; Yan, J.; Zhang, H.; Zhao, R.; Li, H.; Yang, X. Align Representations with Base: A New Approach to Self-Supervised Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–24 June 2022; pp. 16579–16588.
14. Insafutdinov, E.; Pishchulin, L.; Andres, B.; Andriluka, M.; Schiele, B. DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 34–50.
15. Insafutdinov, E.; Andriluka, M.; Pishchulin, L.; Tang, S.; Levinkov, E.; Andres, B.; Schiele, B. ArtTrack: Articulated multi-person tracking in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6457–6465.
16. Toshev, A.; Szegedy, C. DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 24–27 June 2014; pp. 1653–1660.
17. Nath, T.; Mathis, A.; Chen, A.C.; Patel, A.; Bethge, M.; Mathis, M.W. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nat. Protoc. 2019, 14, 2152–2176.
18. Mathis, A.; Biasi, T.; Schneider, S.; Yuksekgonul, M.; Rogers, B.; Bethge, M.; Mathis, M.W. Pretraining boosts out-of-domain robustness for pose estimation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Virtual, 5–9 January 2021; pp. 1858–1867.
19. Mathis, A.; Mamidanna, P.; Cury, K.M.; Abe, T.; Murthy, V.N.; Mathis, M.W.; Bethge, M. DeepLabCut: Markerless pose estimation of user-defined body parts with deep learning. Nat. Neurosci. 2018, 21, 1281–1289.
20. Zhan, W.; Zou, Y.; He, Z.; Zhang, Z. Key points tracking and grooming behavior recognition of Bactrocera minax (Diptera: Trypetidae) via DeepLabCut. Math. Probl. Eng. 2021, 2021, 1392362.
21. Wrench, A.; Balch-Tomes, J. Beyond the Edge: Markerless Pose Estimation of Speech Articulators from Ultrasound and Camera Images Using DeepLabCut. Sensors 2022, 22, 1133.
22. Deng, C.; Yang, E.; Liu, T.; Li, J.; Liu, W.; Tao, D. Unsupervised Semantic-Preserving Adversarial Hashing for Image Search. IEEE Trans. Image Process. 2019, 28, 4032–4044.
23. Li, Y.; Gu, C.; Dullien, T.; Vinyals, O.; Kohli, P. Graph Matching Networks for Learning the Similarity of Graph Structured Objects. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 3835–3845.
24. Yan, J.; Cho, M.; Zha, H.; Yang, X.; Chu, S.M. Multi-Graph Matching via Affinity Optimization with Graduated Consistency Regularization. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 1228–1242.
25. Linderman, S.; Adams, R. Discovering Latent Network Structure in Point Process Data. In Proceedings of the 31st International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 1413–1421.
26. Xiao, S.; Xu, H.; Yan, J.; Farajtabar, M.; Yang, X.; Song, L.; Zha, H. Learning Conditional Generative Models for Temporal Point Processes. In Proceedings of the AAAI Conference on Artificial Intelligence, New Orleans, LA, USA, 2–7 February 2018; Volume 32.
Figure 1. Feature points of key areas of protective behavior in dairy cows. Different colors were used to mark different characteristic points, including the head, limbs, and tail. These body parts are the key points regarding the cow's protective behavior.
Figure 2. DeepLabCut flow and training architecture of the deep neural network (DNN). (a) Training: extract image key frames with different pose features of cow protective behavior. (b) Mark the body parts manually using different colors. Select the cow's head, legs, and tail as the features of interest. (c) A deep neural network architecture is trained to predict the body part location based on the corresponding image to predict the probability of a body part being in a specific pixel. (d) The trained network can be used to extract the location of the body parts corresponding to the cow's protective behavior from the video.
Figure 3. Comparison of spatiotemporal domain motion trajectories of different body parts. The tracking points of different protective body behaviors change their positions in space–time differently. The curves of these changes describe the spatiotemporal trajectories of different body parts.
Figure 4. Spatial and temporal distribution characteristics of multi-characteristic points of dairy cow protective behavior. The spatial distribution of the feature points of different poses retains morphological characteristics during the movement. Thus, by using the information of the distributions of these feature points in space, their behavior categories can be identified more precisely.
Figure 5. Tracking results of key cow body parts in different postures: (a) cows walking; (b) dairy cows standing. Cow protective behavior is correlated with the posture of their own movements. The protective behavior in different postures also affects the tracking of the feature points. Distinguishing between normal movement behavior and protective behavior requires using the transient characteristics of protective behavior for identification.