
ADAS (CVPR 2022)

PyTorch implementation of the CVPR 2022 paper "ADAS: A Direct Adaptation Strategy for Multi-Target Domain Adaptive Semantic Segmentation".

Requirements

PyTorch >= 1.8.0. You need a GPU with at least 32 GB of memory.

Installation

conda create --name adas python=3.8 -y
conda activate adas
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
pip install tensorboardX scikit-image==0.19 tqdm matplotlib

Getting Started

See Preparing Datasets for ADAS.

See Getting Started with ADAS.

Improvement (1)

We regard the pixels filtered out by BARS as hard samples and progressively learn a larger share of these hard samples each epoch. Enable this feature with `--curriculum`; the per-epoch increase is controlled by `--incremental_ratio`.
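The schedule described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the function and parameter names (`hard_sample_ratio`, `curriculum_mask`, the per-pixel `confidence` list, `threshold`) are hypothetical, and it assumes BARS marks a pixel as hard when its confidence falls below a threshold.

```python
# Hypothetical sketch of the curriculum over BARS-filtered hard samples.
# Names and the confidence-threshold assumption are illustrative only.

def hard_sample_ratio(epoch, incremental_ratio, max_ratio=1.0):
    """Fraction of hard pixels admitted into the loss at a given epoch."""
    return min(max_ratio, epoch * incremental_ratio)

def curriculum_mask(confidence, threshold, epoch, incremental_ratio):
    """Return the indices of pixels used for training at this epoch.

    confidence: per-pixel confidence scores; pixels below `threshold`
    stand in for the hard samples filtered out by BARS.
    """
    hard = [i for i, c in enumerate(confidence) if c < threshold]
    easy = [i for i, c in enumerate(confidence) if c >= threshold]
    # Admit the most confident of the hard pixels first, more each epoch.
    hard.sort(key=lambda i: confidence[i], reverse=True)
    k = int(len(hard) * hard_sample_ratio(epoch, incremental_ratio))
    return sorted(easy + hard[:k])
```

With `--incremental_ratio 0.1`, for example, the mask would grow from easy pixels only at epoch 0 to all pixels by epoch 10.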

Improvement (2)

We extended the DACS method for self-training using BARS. Refer to Domain_mixer.py for details.
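For readers unfamiliar with DACS, its core operation is class mixing: pixels belonging to half of the classes present in the source label map are pasted onto the target image, and the pseudo-label map is mixed the same way. The sketch below illustrates that idea on flattened 1-D "images" with pure-Python lists; it is not the code in Domain_mixer.py, and in the repository's variant BARS additionally filters which target pseudo-labels are trusted.

```python
import random

# Illustrative DACS-style class mixing on flattened per-pixel lists.
# This is a toy sketch, not the actual Domain_mixer.py implementation.

def class_mix(src_img, src_lbl, tgt_img, tgt_lbl, rng=random):
    """Paste source pixels of half the source classes onto the target."""
    classes = sorted(set(src_lbl))
    chosen = set(rng.sample(classes, len(classes) // 2))
    # A pixel comes from the source if its source label is a chosen class,
    # otherwise it is kept from the target; labels are mixed identically.
    mixed_img = [s if l in chosen else t
                 for s, l, t in zip(src_img, src_lbl, tgt_img)]
    mixed_lbl = [l if l in chosen else tl
                 for l, tl in zip(src_lbl, tgt_lbl)]
    return mixed_img, mixed_lbl
```

Every mixed pixel is therefore taken verbatim from exactly one of the two domains, which is what makes the mixed pseudo-label consistent with the mixed image.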

Citation

@inproceedings{lee2022adas,
  title={{ADAS}: A Direct Adaptation Strategy for Multi-Target Domain Adaptive Semantic Segmentation},
  author={Lee, Seunghun and Choi, Wonhyeok and Kim, Changjae and Choi, Minwoo and Im, Sunghoon},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={19196--19206},
  year={2022}
}

Acknowledgement

This repo is largely based on RobustNet. Thanks for their excellent work.
