
RGB-D Salient Object Detection in DFormer


Authors: Bowen Yin, Xuying Zhang, Zhongyu Li, Li Liu, Ming-Ming Cheng, Qibin Hou*

This official repository contains the RGB-D salient object detection (SOD) code for the paper 'DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation'. The technical report can be found on arXiv.

We invite all to contribute to making it more accessible and useful. If you have any questions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn). If you are using our code and evaluation toolbox for your research, please cite this paper (BibTeX).

1. Preparation.

The training and testing experiments for DFormer-SOD are conducted on a single NVIDIA RTX 3090 GPU with 24 GB of memory.

  • Requirements: The requirements for DFormer-SOD are the same as those for DFormer. If you have already set up the DFormer environment, you can skip this step and go straight to the sanity check sketched after the commands below.
conda create -n dformer python=3.10 -y
conda activate dformer
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.11/index.html
pip install tqdm opencv-python scipy tensorboardX tabulate easydict
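
As a quick sanity check after installation, the following snippet (not part of the repository) verifies that PyTorch sees the GPU and that mmcv-full imports; the printed versions should match the ones pinned in the commands above.

# sanity_check.py -- illustrative environment check, not part of the repository
import torch

print("torch:", torch.__version__)            # expected: 1.11.0+cu113
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

import mmcv
print("mmcv-full:", mmcv.__version__)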
  • Datasets:

Download the training dataset from Google Drive, then move it into ./Data/.

Download the testing dataset from Google Drive and move it into ./Data/.

  • Checkpoints:

ImageNet-1K pre-trained DFormer-T/S/B/L checkpoints can be downloaded from GoogleDrive, OneDrive, or BaiduNetdisk.
  • Trained Weight:

DFormer-L BaiduNetDisk

  • Predicted Saliency Maps:

DFormer-L BaiduNetDisk

Organize the checkpoint and dataset folders in the following structure (a quick layout check is sketched after the tree):

<Checkpoint>
|-- <pretrained>
    |-- <DFormer_Large.pth.tar>
    |-- <DFormer_Base.pth.tar>
    |-- <DFormer_Small.pth.tar>
    |-- <DFormer_Tiny.pth.tar>
|-- <trained>
    |-- <DFormer_SOD_epoch_best.pth>
<Data>
|-- <TrainDataset>
    |-- <RGB>
        |-- <name1>.<ImageFormat>
        |-- <name2>.<ImageFormat>
        ...
    |-- <Depth>
        |-- <name1>.<DepthFormat>
        |-- <name2>.<DepthFormat>
        ...
    |-- <GT>
        |-- <name1>.<GTFormat>
        |-- <name2>.<GTFormat>
        ...
|-- <TestDataset>
|-- ...
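
Before training, you can run a minimal layout check such as the sketch below (not part of the repository); it assumes the ./Data/TrainDataset structure shown above and compares file names across the RGB, Depth, and GT folders.

# check_dataset.py -- minimal layout check, assuming the Data/TrainDataset tree above
import os

root = "Data/TrainDataset"
stems = {}
for sub in ("RGB", "Depth", "GT"):
    folder = os.path.join(root, sub)
    if not os.path.isdir(folder):
        raise SystemExit(f"missing folder: {folder}")
    # compare names without extensions, since RGB/Depth/GT formats may differ
    stems[sub] = {os.path.splitext(f)[0] for f in os.listdir(folder)}

missing_depth = stems["RGB"] - stems["Depth"]
missing_gt = stems["RGB"] - stems["GT"]
print(f"{len(stems['RGB'])} RGB images, "
      f"{len(missing_depth)} missing depth, {len(missing_gt)} missing GT")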

2. Train.

python train.py

3. Eval.

python test_produce_maps.py
python test_evaluation_maps.py
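
test_produce_maps.py writes the predicted saliency maps to disk and test_evaluation_maps.py scores them against the ground truth. As a rough illustration of one metric involved, the sketch below (not the official toolbox) computes the mean absolute error (MAE) between predicted maps and GT masks; the two folder paths are placeholders, and reported numbers should come from the repository's evaluation script.

# mae_sketch.py -- illustrative MAE computation; folder paths are placeholders,
# use test_evaluation_maps.py for official numbers
import os
import cv2
import numpy as np

pred_dir = "path/to/predicted_maps"   # hypothetical folder of predicted saliency maps
gt_dir = "path/to/TestDataset/GT"     # hypothetical folder of ground-truth masks

maes = []
for name in sorted(os.listdir(gt_dir)):
    gt = cv2.imread(os.path.join(gt_dir, name), cv2.IMREAD_GRAYSCALE)
    pred = cv2.imread(os.path.join(pred_dir, name), cv2.IMREAD_GRAYSCALE)
    if gt is None or pred is None:
        continue
    pred = cv2.resize(pred, (gt.shape[1], gt.shape[0]))
    # normalize both maps to [0, 1] before comparing
    maes.append(np.abs(pred / 255.0 - gt / 255.0).mean())

print(f"MAE over {len(maes)} images: {np.mean(maes):.4f}")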

🚩 Performance

For quantitative comparisons with other RGB-D SOD methods, please refer to the paper.

We invite all to contribute to making it more accessible and useful. If you have any questions or suggestions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn) or raise an issue.

Reference

You may want to cite:

@article{yin2023dformer,
  title={DFormer: Rethinking RGBD Representation Learning for Semantic Segmentation},
  author={Yin, Bowen and Zhang, Xuying and Li, Zhongyu and Liu, Li and Cheng, Ming-Ming and Hou, Qibin},
  journal={arXiv preprint arXiv:2309.09668},
  year={2023}
}

Acknowledgment

Our implementation is mainly based on mmsegmentation and SPNet. Thanks to their authors.

License

Code in this repo is for non-commercial use only.
