Seunghun Lee, Wonhyeok Choi, Changjae Kim, Minwoo Choi, Sunghoon Im, "ADAS: A Direct Adaptation Strategy for Multi-Target Domain Adaptive Semantic Segmentation", CVPR (2022)
PyTorch >= 1.8.0 is required, along with a GPU with at least 32 GB of memory.
```shell
conda create --name adas python=3.8 -y
conda activate adas
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install tensorboardX scikit-image==0.19 tqdm matplotlib
```
See Preparing Datasets for ADAS.
See Getting Started with ADAS.
We regard the pixels filtered out by BARS as hard samples and progressively learn a greater number of them each epoch. This feature is enabled with `--curriculum` and controlled by `--incremental_ratio`.
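A minimal sketch of how such a curriculum over BARS-filtered hard pixels could work. The function name, the linear schedule, and the lowest-loss-first admission rule are assumptions for illustration; only the BARS hard-sample idea and the `--incremental_ratio` option come from this repo.

```python
import torch

def curriculum_pixel_mask(loss_map, hard_mask, epoch, incremental_ratio=0.1):
    """Hypothetical sketch: progressively admit more BARS-filtered hard pixels.

    loss_map:  per-pixel loss, shape (H, W)
    hard_mask: bool mask marking pixels filtered out by BARS (hard samples)
    Returns a bool mask of pixels to train on this epoch.
    """
    # Fraction of hard pixels admitted grows linearly with the epoch,
    # scaled by the (assumed) interpretation of --incremental_ratio.
    ratio = min(1.0, incremental_ratio * (epoch + 1))
    hard_losses = loss_map[hard_mask]
    if hard_losses.numel() == 0:
        return ~hard_mask  # nothing was filtered: train on every pixel
    k = max(1, int(ratio * hard_losses.numel()))
    # Admit the k easiest (lowest-loss) hard pixels first.
    thresh = torch.topk(hard_losses, k, largest=False).values.max()
    admitted = hard_mask & (loss_map <= thresh)
    return (~hard_mask) | admitted
```

With `epoch` increasing, the admitted fraction of hard pixels grows until the whole image participates in the loss.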
We extended the DACS method for self-training using BARS; see `Domain_mixer.py` for details.
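For intuition, a sketch of the DACS-style class mixing that such a domain mixer builds on: pixels belonging to half of the classes in a source image are pasted onto a target image and its pseudo-labels. The function name and shapes are assumptions; this is not the code in `Domain_mixer.py`.

```python
import torch

def dacs_class_mix(src_img, src_lbl, tgt_img, tgt_pseudo):
    """Hypothetical DACS-style mix: paste half of the source classes' pixels
    onto the target image and its pseudo-label map.

    src_img/tgt_img: (C, H, W) images; src_lbl/tgt_pseudo: (H, W) label maps.
    """
    classes = torch.unique(src_lbl)
    # Randomly pick half of the classes present in the source label map.
    picked = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
    # Paste mask: True where the source pixel belongs to a picked class.
    mask = (src_lbl.unsqueeze(-1) == picked).any(-1)
    mixed_img = torch.where(mask.unsqueeze(0), src_img, tgt_img)
    mixed_lbl = torch.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl, mask
```

In a BARS-augmented variant, the pseudo-labels entering this mix would additionally be filtered as described above.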
```bibtex
@inproceedings{lee2022adas,
  title={Adas: A direct adaptation strategy for multi-target domain adaptive semantic segmentation},
  author={Lee, Seunghun and Choi, Wonhyeok and Kim, Changjae and Choi, Minwoo and Im, Sunghoon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19196--19206},
  year={2022}
}
```
This repo is largely based on RobustNet. Thanks for their excellent work.