Contrast Maximization-Based Feature Tracking for Visual Odometry with an Event Camera
Abstract
1. Introduction
- Compared with traditional event-by-event tracking methods [20,21,22], a new tracking mechanism is presented to resolve data association. A contrast maximization method is adopted to estimate the displacement parameters of the events, and IMU data are used to calibrate their rotation parameters, which greatly improves both the speed and the accuracy of processing the event stream.
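The contrast-maximization step can be illustrated with a minimal sketch: events in a spatio-temporal window are warped back to a reference time by a candidate optical-flow vector, accumulated into an image of warped events (IWE), and the flow that maximizes the IWE variance (its contrast) is retained. The function names, the window handling, and the use of a Nelder-Mead optimizer below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def neg_iwe_contrast(flow, xs, ys, ts, t_ref, h, w):
    """Negative IWE variance for a candidate optical flow (vx, vy)."""
    vx, vy = flow
    # Warp each event back to the reference time along the candidate flow.
    xw = xs - vx * (ts - t_ref)
    yw = ys - vy * (ts - t_ref)
    # Accumulate the warped events into an image of warped events (IWE).
    iwe, _, _ = np.histogram2d(yw, xw, bins=(h, w), range=[[0, h], [0, w]])
    # Contrast = variance of the IWE; negate so that a minimizer maximizes it.
    return -np.var(iwe)

def estimate_flow(xs, ys, ts, t_ref, h, w, flow0=(0.0, 0.0)):
    """Find the flow that maximizes the IWE contrast for one event window."""
    res = minimize(neg_iwe_contrast, flow0,
                   args=(xs, ys, ts, t_ref, h, w), method="Nelder-Mead")
    return res.x  # estimated (vx, vy) in pixels per unit time
```

The same idea generalizes to other warp models (e.g., rotation); only the warping inside the objective changes.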
2. Related Work
3. Main Methods
3.1. Feature Detection
3.2. Feature Tracking
3.2.1. Choice of Spatio-Temporal Windows
3.2.2. Maximizing IWE Contrast
3.2.3. Template Edge Update
3.2.4. Depth Estimation
3.2.5. Pose Estimation
Algorithm 1: Pseudo-code of pose estimation.
Step 1: Extracting feature points and edge points
 1: Extract feature points with the Harris detector
 2: Extract edge points with the Canny detector
 3: Pick the edge points around each feature point as template edges
Step 2: Computing the optical flow and the spatio-temporal windows of events
 4: Input: template edges P and event point set W
    Output: optical flow
    Initialize the optical flow to 0
    repeat
       for each event in W do
          warp the event with the current flow and accumulate it into the IWE
       end for
       update the flow so that the IWE contrast increases
    until the IWE contrast converges
 5: Update the spatio-temporal window
Step 3: Updating the template edges using the optical flow and IMU data
 6: Update the position of the feature points
 7: Update the relative position
 8: Update the position of the edge points
Step 4: Computing the depth value of the template edges using a depth filter
 9: Triangulate the depth
10: Depth filter:
    Input: triangulated depth
    Output: depth value
    for each template edge point do
       fuse the new triangulated depth with the current estimate and update its uncertainty
    end for
Step 5: Pose estimation using the ICP algorithm
11: ICP algorithm:
    Output: rotation R and translation t of the camera pose
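Step 5 aligns the tracked template edge points (back-projected with their filtered depths) against the current point set to recover the relative camera pose. A minimal point-to-point ICP sketch based on the standard SVD (Kabsch) alignment is given below; the k-d tree nearest-neighbour search, iteration count, and convergence threshold are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form R, t aligning matched 3D points src to dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=30, tol=1e-6):
    """Point-to-point ICP: returns R, t mapping src onto dst."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        moved = src @ R.T + t                     # apply current estimate
        dist, idx = tree.query(moved)             # nearest neighbours in dst
        R, t = best_rigid_transform(src, dst[idx])
        err = dist.mean()
        if abs(prev_err - err) < tol:             # stop when error stagnates
            break
        prev_err = err
    return R, t
```

In practice the previous frame's pose is used to initialize R and t so that the nearest-neighbour associations start close to the true correspondences.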
4. Experiments
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, J.; Qin, J.H.; Wang, J.; Li, J. OpenStreetMap-Based Autonomous Navigation for the Four Wheel-Legged Robot Via 3D-Lidar and CCD Camera. IEEE Trans. Ind. Electron. 2022, 69, 2708–2717.
- Liu, D.; Yang, T.L.; Zhao, R.; Wang, J.; Xie, X. Lightweight Tensor Deep Computation Model with Its Application in Intelligent Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2022, 23, 2678–2687.
- Rong, S.; He, L.; Du, L.; Li, Z.; Yu, S. Intelligent Detection of Vegetation Encroachment of Power Lines with Advanced Stereovision. IEEE Trans. Power Deliv. 2021, 36, 3477–3485.
- Leng, C.; Zhang, H.; Li, B.; Cai, G.; Pei, Z.; He, L. Local Feature Descriptor for Image Matching: A Survey. IEEE Access 2019, 7, 6424–6434.
- Tedaldi, D.; Gallego, G.; Mueggler, E.; Scaramuzza, D. Feature Detection and Tracking with the Dynamic and Active-Pixel Vision Sensor (DAVIS). In Proceedings of the Second International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP), Krakow, Poland, 13–15 June 2016.
- Schuman, C.D.; Potok, T.E.; Patton, R.M.; Birdwell, J.D.; Dean, M.E.; Rose, G.S.; Plank, J.S. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv 2017, arXiv:1705.06963.
- Rebecq, H.; Horstschaefer, T.; Gallego, G.; Scaramuzza, D. EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time. IEEE Robot. Autom. Lett. 2017, 2, 593–600.
- Chen, S.; Guo, M. Live Demonstration: CELEX-V: A 1M Pixel Multi-Mode Event-Based Sensor. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019.
- Zhang, Z.; Wan, W. DOVO: Mixed Visual Odometry Based on Direct Method and ORB Feature. In Proceedings of the 2018 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 16–17 July 2018.
- Guo, M.; Huang, J.; Chen, S. Live Demonstration: A 768 × 640 Pixels 200 Meps Dynamic Vision Sensor. In Proceedings of the 2017 IEEE International Symposium on Circuits and Systems (ISCAS), Baltimore, MD, USA, 28–31 May 2017.
- Posch, C.; Matolin, D.; Wohlgenannt, R. A QVGA 143 dB Dynamic Range Frame-Free PWM Image Sensor with Lossless Pixel-Level Video Compression and Time-Domain CDS. IEEE J. Solid-State Circuits 2011, 46, 259–275.
- Li, Y.; Li, J.; Yao, Q.; Zhou, W.; Nie, J. Research on Predictive Control Algorithm of Vehicle Turning Path Based on Monocular Vision. Processes 2022, 10, 417.
- Stoffregen, T.; Gallego, G.; Drummond, T.; Kleeman, L.; Scaramuzza, D. Event-Based Motion Segmentation by Motion Compensation. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October 2019.
- Stoffregen, T.; Kleeman, L. Event Cameras, Contrast Maximization and Reward Functions: An Analysis. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019.
- Gallego, G.; Delbruck, T.; Orchard, G.M.; Bartolozzi, C.; Taba, B.; Censi, A.; Leutenegger, S.; Davison, A.; Conradt, J.; Daniilidis, K.; et al. Event-Based Vision: A Survey. arXiv 2020, arXiv:1904.08405.
- Chiang, M.L.; Tsai, S.H.; Huang, C.M.; Tao, K.T. Adaptive Visual Servoing for Obstacle Avoidance of Micro Unmanned Aerial Vehicle with Optical Flow and Switched System Model. Processes 2021, 9, 2126.
- Duo, J.; Zhao, L. An Asynchronous Real-Time Corner Extraction and Tracking Algorithm for Event Camera. Sensors 2022, 22, 1475.
- Mitrokhin, A.; Fermuller, C.; Parameshwara, C.; Aloimonos, Y. Event-Based Moving Object Detection and Tracking. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018.
- Li, K.; Shi, D.; Zhang, Y.; Li, R.; Qin, W.; Li, R. Feature Tracking Based on Line Segments with the Dynamic and Active-Pixel Vision Sensor (DAVIS). IEEE Access 2019, 7, 110874–110883.
- Iaboni, C.; Patel, H.; Lobo, D.; Choi, J.W.; Abichandani, P. Event Camera Based Real-Time Detection and Tracking of Indoor Ground Robots. IEEE Access 2022, 9, 166588–166602.
- Zhu, A.Z.; Chen, Y.; Daniilidis, K. Realtime Time Synchronized Event-Based Stereo. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018.
- Ozawa, T.; Sekikawa, Y.; Saito, H. Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird’s-Eye View Transformation. Sensors 2022, 22, 773.
- Kim, H.; Kim, H.J. Real-Time Rotational Motion Estimation with Contrast Maximization Over Globally Aligned Events. IEEE Robot. Autom. Lett. 2022, 6, 6016–6023.
- Pal, B.; Khaiyum, S.; Kumaraswamy, Y.S. 3D Point Cloud Generation from 2D Depth Camera Images Using Successive Triangulation. In Proceedings of the 2017 International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), Bengaluru, India, 21–23 February 2017.
- Umair, M.; Farooq, M.U.; Raza, R.H.; Chen, Q.; Abdulhai, B. Efficient Video-Based Vehicle Queue Length Estimation Using Computer Vision and Deep Learning for an Urban Traffic Scenario. Processes 2021, 9, 1786.
- Gallego, G.; Lund, J.E.A.; Mueggler, E.; Rebecq, H.; Delbruck, T.; Scaramuzza, D. Event-Based, 6-DOF Camera Tracking from Photometric Depth Maps. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2402–2412.
- Shao, F.; Wang, X.; Meng, F.; Rui, T.; Wang, D.; Tang, J. Real-Time Traffic Sign Detection and Recognition Method Based on Simplified Gabor Wavelets and CNNs. Sensors 2018, 18, 3192.
- Alzugaray, I.; Chli, M. Asynchronous Corner Detection and Tracking for Event Cameras in Real Time. IEEE Robot. Autom. Lett. 2018, 3, 3177–3184.
- Zhou, Y.; Gallego, G.; Shen, S. Event-Based Stereo Visual Odometry. IEEE Trans. Robot. 2022, 37, 1433–1450.
- Mueggler, E.; Gallego, G.; Scaramuzza, D. Continuous-Time Trajectory Estimation for Event-Based Vision Sensors. In Proceedings of Robotics: Science and Systems, Rome, Italy, 17 July 2015.
- Kim, H.; Leutenegger, S.; Davison, A.J. Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 16 October 2016; Springer: Cham, Switzerland, 2016.
- Mueggler, E.; Gallego, G.; Rebecq, H.; Scaramuzza, D. Continuous-Time Visual-Inertial Odometry for Event Cameras. IEEE Trans. Robot. 2018, 34, 1425–1444.
- Mueggler, E.; Rebecq, H.; Gallego, G.; Delbruck, T.; Scaramuzza, D. The Event Camera Dataset and Simulator: Event-Based Data for Pose Estimation, Visual Odometry, and SLAM. Int. J. Robot. Res. 2017, 36, 142–149.
| Error Metric | Sequence | Our_Method | ORB | Our_Method (+IMU) | EVO |
|---|---|---|---|---|---|
| Absolute pose error | shapes_6dof | 0.0815 | 0.0780 | 0.0315 | 0.0435 |
| Absolute pose error | boxes_6dof | 0.0434 | 0.0369 | 0.0204 | 0.0344 |
| Absolute pose error | outdoors_6dof | 0.1416 | 0.2357 | 0.0461 | 0.0227 |
| Position error | shapes_6dof | 0.1562 | 0.0845 | 0.0391 | 0.0496 |
| Position error | boxes_6dof | 0.2255 | 0.0956 | 0.0436 | 0.0586 |
| Position error | outdoors_6dof | 0.1728 | 0.0735 | 0.0325 | 0.0265 |
| Rotation error | shapes_6dof | 2.4562 | 1.8342 | 0.6443 | 0.9093 |
| Rotation error | boxes_6dof | 3.5432 | 1.6901 | 0.5426 | 0.5084 |
| Rotation error | outdoors_6dof | 3.4734 | 1.6042 | 0.4453 | 0.7253 |