Optic nerve head (ONH) detection has been a crucial area of study in ophthalmology for years. However, the significant discrepancy
among fundus image datasets, each generated using a single type of fundus camera, poses challenges to the generalizability
of ONH detection approaches developed based on semantic segmentation networks. Despite the numerous recent advancements
in general-purpose semantic segmentation methods using convolutional neural networks (CNNs) and Transformers, there is
currently a lack of benchmarks for these state-of-the-art (SoTA) networks specifically trained for ONH detection. Therefore, in this
article, we make contributions from three key aspects: network design, the publication of a dataset, and the establishment of
a comprehensive benchmark. Our newly developed ONH detection network, referred to as ODFormer, is based upon the Swin
Transformer architecture and incorporates two novel components: a multi-scale context aggregator and a lightweight bidirectional
feature recalibrator. Our published large-scale dataset, known as TongjiU-DROD, provides multi-resolution fundus images for
each participant, captured using two distinct types of cameras. Our established benchmark involves three datasets: DRIONS-DB,
DRISHTI-GS1, and TongjiU-DROD, created by researchers from different countries and containing fundus images captured from
participants of diverse races and ages. Extensive experimental results demonstrate that our proposed ODFormer outperforms other
SoTA networks in terms of both accuracy and generalizability. Our dataset and source code are publicly available
at https://mias.group/ODFormer/.