Freespace Optical Flow Modeling for Automated Driving

Tongji University
IEEE/ASME Transactions on Mechatronics, 2023

Abstract

Optical flow and disparity are two informative visual features for autonomous driving perception. They have been used for a variety of applications, such as obstacle and lane detection. The concept of "U-V-Disparity" has been widely explored in the literature, while its counterpart in optical flow has received relatively little attention. Traditional motion analysis algorithms estimate optical flow by matching correspondences between two successive video frames, without fully exploiting environmental information and geometric constraints. We therefore propose a novel strategy to model optical flow in the collision-free space (also referred to as the drivable area, or simply freespace) for intelligent vehicles, making full use of the geometry of the 3-D driving environment. We derive explicit representations of optical flow and show that each optical flow component is a quadratic function of the vertical image coordinate. Extensive experiments on several public datasets demonstrate the high accuracy and robustness of our model. In addition, the proposed freespace optical flow model supports a range of applications in automated driving, providing a geometric constraint for freespace detection, vehicle localization, and more. Our source code is publicly available at https://mias.group/FSOF.
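To make the key result above concrete, the sketch below illustrates the claimed quadratic relationship numerically. It is a minimal, self-contained Python example with synthetic data: the coefficients, image-row range, and noise level are hypothetical stand-ins, not values from the paper, and the least-squares fit is just one plausible way such a model could be recovered from per-row flow measurements inside the freespace.

```python
import numpy as np

# Synthetic illustration of the paper's key result: within the freespace,
# an optical flow component can be modeled as a quadratic function of the
# vertical image coordinate v. All coefficients below are hypothetical.
rng = np.random.default_rng(0)
v = np.arange(200.0, 400.0)                  # image rows inside the freespace
a, b, c = 1.2e-4, -0.05, 8.0                 # hypothetical quadratic coefficients
flow = a * v**2 + b * v + c                  # noise-free quadratic flow component
flow_meas = flow + rng.normal(0.0, 0.01, v.size)  # simulated measurement noise

# Recover the quadratic model by least-squares fitting, as one might do when
# using the model as a geometric constraint (e.g., to validate freespace pixels).
coeffs = np.polyfit(v, flow_meas, deg=2)
max_err = np.abs(np.polyval(coeffs, v) - flow).max()
print("fitted coefficients:", coeffs)
print("max deviation from true model:", max_err)
```

The fit recovers the generating coefficients closely despite the added noise, which is what makes the quadratic model usable as a consistency check on candidate freespace regions.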

Methodology



Figure: Difference and relationship between optical flow and disparity.

Experimental results

BibTeX

@article{feng2023freespace,
	title={Freespace Optical Flow Modeling for Automated Driving},
	author={Feng, Yi and Zhang, Ruge and Du, Jiayuan and Chen, Qijun and Fan, Rui},
	journal={IEEE/ASME Transactions on Mechatronics},
	year={2023},
	publisher={IEEE}
}

Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grant 2020AAA0108100, in part by the National Natural Science Foundation of China under Grant 62233013, in part by the Science and Technology Commission of Shanghai Municipality under Grant 22511104500, and in part by the Fundamental Research Funds for the Central Universities.