DarSwin-Unet: Distortion Aware Encoder-Decoder Architecture
Akshaya Athwale · Ichrak Shili · Émile Bergeron · Ola Ahmad · Jean-François Lalonde





Paper · Code · Poster · Video · BibTeX


Accepted at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2025!


Abstract

Wide-angle fisheye images are becoming increasingly common for perception tasks in applications such as robotics, security, and mobility (e.g., drones, avionics). However, current models often either ignore the distortions of wide-angle images or are not suited to pixel-level tasks. In this paper, we present an encoder-decoder model based on a radial transformer architecture that adapts to the distortions of wide-angle lenses by leveraging the physical characteristics defined by their radial distortion profile. In contrast to the original DarSwin model, which only performs classification tasks, we introduce a U-Net architecture, DarSwin-Unet, designed for pixel-level tasks. Furthermore, we propose a novel strategy that minimizes sparsity when sampling the image to create its input tokens. Our approach enhances the model's ability to handle pixel-level tasks in wide-angle fisheye images, making it more effective for real-world applications. Compared to other baselines, DarSwin-Unet achieves the best results across different datasets, with significant gains when trained on bounded levels of distortion (very low, low, medium, and high) and tested on all of them, including out-of-distribution distortions. We demonstrate its performance on depth estimation and show through extensive experiments that DarSwin-Unet can perform zero-shot adaptation to unseen distortions of different wide-angle lenses.
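
To make the distortion-aware sampling idea concrete, the short Python/NumPy sketch below illustrates how input samples can be placed along radial spokes whose radii follow a lens projection profile r(θ), so that sampling density tracks the distortion rather than a uniform Cartesian grid. This is our own simplified illustration, not the released DarSwin-Unet code: the equidistant projection r = f·θ, the function names (radial_sample_points, sample_image), and all parameter values are assumptions for demonstration only.

import numpy as np

def radial_sample_points(num_spokes, samples_per_spoke, theta_max, focal,
                         projection=lambda theta, f: f * theta):
    """Place sample points along radial spokes of a fisheye image.

    The radius of each sample follows a lens projection profile r(theta)
    (here an equidistant model r = f * theta, an illustrative assumption),
    so sampling density matches the lens distortion instead of a uniform grid.
    """
    # Incident angles, spaced uniformly from the optical axis out to theta_max.
    thetas = np.linspace(0.0, theta_max, samples_per_spoke)
    # Azimuth angles split the image into equal angular sectors (spokes).
    phis = np.linspace(0.0, 2.0 * np.pi, num_spokes, endpoint=False)
    # Radii in pixels follow the (assumed) projection curve.
    radii = projection(thetas, focal)
    # Cartesian offsets of every (spoke, sample) pair, centred on the optical axis.
    xs = radii[None, :] * np.cos(phis[:, None])
    ys = radii[None, :] * np.sin(phis[:, None])
    return np.stack([xs, ys], axis=-1)  # shape: (num_spokes, samples_per_spoke, 2)

def sample_image(image, points):
    """Nearest-neighbour lookup of the image at the given (x, y) offsets."""
    h, w = image.shape[:2]
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xs = np.clip(np.round(points[..., 0] + cx).astype(int), 0, w - 1)
    ys = np.clip(np.round(points[..., 1] + cy).astype(int), 0, h - 1)
    return image[ys, xs]

if __name__ == "__main__":
    img = np.random.rand(256, 256, 3)                 # stand-in fisheye image
    pts = radial_sample_points(num_spokes=64, samples_per_spoke=32,
                               theta_max=np.deg2rad(90), focal=80.0)
    tokens = sample_image(img, pts)                   # (64, 32, 3) radial samples
    print(tokens.shape)

In this sketch, swapping the projection function for a calibrated lens curve would make the sample placement follow that specific lens's distortion profile, which is the intuition behind the distortion-aware tokenization described in the abstract.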


Presentation video

Citation

@inproceedings{athwale2025darswin-unet,
	title={DarSwin-Unet: Distortion Aware Encoder-Decoder Architecture},
	author={Athwale, Akshaya and Shili, Ichrak and Bergeron, Émile and Ahmad, Ola and Lalonde, Jean-Fran{\c{c}}ois},
	booktitle={IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
	year={2025}
  }
	

						
						


Acknowledgements

This research was supported by NSERC grant ALLRP-567654, Thales, an NSERC USRA to E. Bergeron, Mitacs, and the Digital Research Alliance of Canada. We thank Yohan Poirier-Ginter, Frédéric Fortier-Chouinard and Justine Giroux for proofreading.