Integrating Disparity Confidence Estimation
into Relative Depth Priors-Guided
Unsupervised Stereo Matching
Chuang-Wei Liu
Mingjian Sun
Cairong Zhao
Hanli Wang
Alexander Dvorkovich
Rui Fan
[Supplementary Material]
[ViTAS]
[GitHub]

Abstract

Unsupervised stereo matching has garnered significant attention for its independence from costly disparity annotations. Typical unsupervised methods rely on the multi-view consistency assumption to train networks, an assumption that breaks down in the presence of stereo matching ambiguities such as repetitive patterns and texture-less regions. A feasible solution lies in transferring 3D geometry knowledge from a relative depth map to the stereo matching networks. However, existing knowledge transfer methods learn depth ranking information from randomly built sparse correspondences, which exploit 3D geometry knowledge inefficiently and introduce noise from erroneous estimations. This work proposes a novel unsupervised learning framework to address these challenges, comprising a plug-and-play disparity confidence estimation algorithm and two depth priors-guided loss functions. Specifically, the local coherence consistency between neighboring disparities and their corresponding relative depths is first checked to obtain disparity confidence. Afterwards, quasi-dense correspondences are built using only confident disparity estimations to facilitate efficient depth ranking learning. Finally, a dual disparity smoothness loss is proposed to improve stereo matching performance at disparity discontinuities. Experimental results demonstrate that our method achieves state-of-the-art accuracy among all unsupervised stereo matching methods on the KITTI Stereo benchmarks.
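To make the confidence check more concrete, below is a minimal sketch of one plausible local coherence test, assuming that a disparity estimate is trusted when the ordering of disparities inside a small window agrees with the ordering of the corresponding relative depths. The window size, agreement threshold, sign convention for relative depth, and all function names here are illustrative assumptions, not the paper's actual algorithm.

# Hypothetical local-coherence confidence check (assumed formulation).
# A pixel's disparity is marked confident when the ordering of disparities
# within a small window agrees with the ordering of the corresponding
# relative depths from a monocular model.
import numpy as np


def disparity_confidence(disparity: np.ndarray,
                         relative_depth: np.ndarray,
                         window_size: int = 5,
                         agreement_threshold: float = 0.8) -> np.ndarray:
    """Return a boolean confidence mask for a disparity map.

    disparity:      (H, W) predicted disparities (larger = closer).
    relative_depth: (H, W) relative depth from a monocular model
                    (assumed larger = farther, hence the sign flip below).
    """
    h, w = disparity.shape
    r = window_size // 2
    confidence = np.zeros((h, w), dtype=bool)

    # Flip the sign so the quantity is expected to be ordered like disparity.
    pseudo_disp = -relative_depth

    for y in range(r, h - r):
        for x in range(r, w - r):
            d_patch = disparity[y - r:y + r + 1, x - r:x + r + 1].ravel()
            p_patch = pseudo_disp[y - r:y + r + 1, x - r:x + r + 1].ravel()

            # Ordering consistency w.r.t. the centre pixel: do neighbours that
            # are closer/farther in disparity also appear closer/farther in
            # the relative depth map?
            d_sign = np.sign(d_patch - disparity[y, x])
            p_sign = np.sign(p_patch - pseudo_disp[y, x])
            agreement = np.mean(d_sign == p_sign)

            confidence[y, x] = agreement >= agreement_threshold

    return confidence

Under these assumptions, pixels flagged as confident could then serve as anchors when sampling the quasi-dense correspondences that drive the depth ranking loss.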


Acknowledgements

This research was supported in part by the National Natural Science Foundation of China under Grant 62473288, Grant 62473286, Grant 62233013, and Grant 62371343, in part by the National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, Xi'an Jiaotong University (No. HMHAI-202406), in part by the Fundamental Research Funds for the Central Universities, in part by the NIO University Programme (NIO UP), and in part by the Xiaomi Young Talents Program.