Environment-Driven Online LiDAR-Camera Extrinsic Calibration
Zhiwei Huang
Jiaqi Li
Hongbo Zhao
Xiao Ma
Ping Zhong
Xiaohu Zhou
Wei Ye
Rui Fan
[Paper]
[GitHub]
The software developed for EdO-LCEC will be released in this repository.

Abstract

LiDAR-camera extrinsic calibration (LCEC) is crucial for multi-modal data fusion in autonomous robotic systems. Existing methods, whether target-based or target-free, typically rely on customized calibration targets or fixed scene types, which limit their applicability in real-world scenarios. To address these challenges, we present EdO-LCEC, the first environment-driven online calibration approach. Unlike traditional target-free methods, EdO-LCEC employs a generalizable scene discriminator to estimate the feature density of the application environment. Guided by this feature density, EdO-LCEC extracts LiDAR intensity and depth features from varying perspectives to achieve higher calibration accuracy. To overcome the challenges of cross-modal feature matching between LiDAR and camera, we introduce dual-path correspondence matching (DPCM), which leverages both structural and textural consistency for reliable 3D-2D correspondences. Furthermore, we formulate the calibration process as a joint optimization problem that integrates global constraints across multiple views and scenes, thereby enhancing overall accuracy. Extensive experiments on real-world datasets demonstrate that EdO-LCEC outperforms state-of-the-art methods, particularly in scenarios involving sparse point clouds or partially overlapping sensor views.
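The abstract describes a pipeline in which DPCM produces reliable 3D-2D correspondences that are then turned into an extrinsic estimate. The paper's actual solver is a joint optimization across multiple views and scenes; as an illustration only, the Python sketch below shows the standard final step of such pipelines: recovering a LiDAR-to-camera extrinsic from a set of 3D-2D correspondences with OpenCV's RANSAC-based PnP. All names here (pts3d, pts2d, K, the synthetic ground-truth pose) are hypothetical stand-ins, not the paper's code.

import numpy as np
import cv2

# Hypothetical intrinsics and ground-truth pose, used only to synthesize data.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
rvec_gt = np.array([0.02, -0.01, 0.03])   # axis-angle rotation
tvec_gt = np.array([0.10, -0.05, 0.30])   # translation (meters)

# Synthetic stand-in for DPCM output: N LiDAR points and their pixel matches.
pts3d = np.random.uniform([-2, -2, 4], [2, 2, 10], size=(50, 3))
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)
pts2d = pts2d.reshape(-1, 2)

# RANSAC-based PnP tolerates residual outliers among the correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts3d, pts2d, K, None, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)                 # axis-angle -> rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()      # 4x4 LiDAR-to-camera extrinsic
    print("Estimated LiDAR-to-camera extrinsic:\n", T)

In EdO-LCEC itself, single-view estimates of this kind are not taken in isolation; they are refined jointly under global constraints spanning multiple views and scenes.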


Video


[YouTube Video Link] [Bilibili Video Link]

Paper and Supplementary Material

Z. Huang, J. Li, H. Zhao, X. Ma, P. Zhong, X. Zhou, W. Ye, R. Fan.
Environment-Driven Online LiDAR-Camera Extrinsic Calibration
IEEE Transactions on Automation Science and Engineering (TASE), 2025.
(hosted on arXiv)


[Bibtex]


Acknowledgements

This research was supported by the National Natural Science Foundation of China under Grants 62473288, 62233013, 62272489, 62176184, and 62388101, the Fundamental Research Funds for the Central Universities, the Xiaomi Young Talents Program, and the National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an Jiaotong University, under Grant HMHAI-202406. (Corresponding author: Rui Fan.)