DAD: Difference-Aware Decoder for Binary Segmentation

Abstract: Inspired by the way human eyes detect objects, we propose a new unified dual-branch decoder paradigm, termed the Difference-Aware Decoder (DAD), designed to explore the differences between foreground and background effectively, thereby enhancing the separation of objects of interest in optical images. The DAD operates in two stages, leveraging multi-level features from the encoder. In the first stage, it achieves coarse detection of foreground objects by utilizing high-level semantic features, mimicking the initial rough observation typical of human vision. In the second stage, the decoder refines segmentation by examining differences in low-level features, guided by the coarse map generated in the first stage.
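
The two-stage idea can be sketched in a few lines of PyTorch. The module below is only a conceptual illustration of the coarse-then-refine flow described in the abstract; the class, layer, and variable names are hypothetical and do not correspond to the repository's actual implementation.

    # Conceptual sketch only; names and layer choices are illustrative assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TwoStageDecoderSketch(nn.Module):
        def __init__(self, high_ch=512, low_ch=64):
            super().__init__()
            # Stage 1: coarse foreground map from high-level semantic features.
            self.coarse_head = nn.Sequential(
                nn.Conv2d(high_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1))
            # Stage 2: refinement from the difference between foreground and
            # background evidence in the low-level features.
            self.refine_head = nn.Sequential(
                nn.Conv2d(low_ch * 2, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1))

        def forward(self, low_feat, high_feat):
            # Rough "first glance": predict a coarse map and upsample it to the
            # resolution of the low-level features.
            coarse = self.coarse_head(high_feat)
            coarse = F.interpolate(coarse, size=low_feat.shape[2:],
                                   mode='bilinear', align_corners=False)
            prob = torch.sigmoid(coarse)
            # Split low-level features into foreground/background parts guided by
            # the coarse map, then refine from their difference.
            fg = low_feat * prob
            bg = low_feat * (1.0 - prob)
            fine = self.refine_head(torch.cat([fg, bg], dim=1))
            return coarse, fine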

Example result figures: Camouflaged Object Detection (COD) and Salient Object Detection (SOD).

This repository contains the code for our paper:
Towards Complex Backgrounds: A Unified Difference-Aware Decoder for Binary Segmentation.

Training Instructions

To train the DAD model, specify the task (COD/SOD/Poly/MSD), backbone, batch size, and GPU ID, then run one of the following commands:

    python train.py --gpu_id 0 --task COD --batchsize 8 --backbone resnet
    python train.py --gpu_id 0 --task COD --batchsize 8 --backbone res2net
    python train.py --gpu_id 0 --task COD --batchsize 8 --backbone v2_b2
    
    python train.py --gpu_id 0 --task SOD --batchsize 8 --backbone resnet
    python train.py --gpu_id 0 --task SOD --batchsize 8 --backbone res2net
    python train.py --gpu_id 0 --task SOD --batchsize 8 --backbone v2_b2
    
    python train.py --gpu_id 0 --task Poly --batchsize 8 --backbone v2_b2
    python train.py --gpu_id 0 --task MSD --batchsize 8 --backbone v2_b2
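
To run all of the configurations listed above back to back, a small helper script can launch them sequentially. The sketch below only assumes the train.py command-line flags shown in this README; adapt the GPU ID and batch size as needed.

    # Hypothetical helper script; it simply replays the commands listed above.
    import subprocess

    RUNS = [
        ("COD", "resnet"), ("COD", "res2net"), ("COD", "v2_b2"),
        ("SOD", "resnet"), ("SOD", "res2net"), ("SOD", "v2_b2"),
        ("Poly", "v2_b2"), ("MSD", "v2_b2"),
    ]

    for task, backbone in RUNS:
        cmd = ["python", "train.py", "--gpu_id", "0", "--task", task,
               "--batchsize", "8", "--backbone", backbone]
        print("Running:", " ".join(cmd))
        subprocess.run(cmd, check=True)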

Inference Code and Pretrained Models

We provide inference code along with pretrained and trained models. You can download them using the links below:

To test the trained models, run one of the following commands:

    python test.py --task COD --backbone resnet --pth_path './Experiments/DAD/'
    python test.py --task COD --backbone res2net --pth_path './Experiments/DAD/'
    python test.py --task COD --backbone v2_b2 --pth_path './Experiments/DAD/'
    python test.py --task SOD --backbone resnet --pth_path './Experiments/DAD/'
    python test.py --task SOD --backbone res2net --pth_path './Experiments/DAD/'
    python test.py --task SOD --backbone v2_b2 --pth_path './Experiments/DAD/'
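
After inference, the predicted maps can be compared against the ground-truth masks with standard binary-segmentation metrics such as mean absolute error (MAE). The snippet below is a minimal sketch, assuming predictions and ground truth are stored as same-sized grayscale images with matching file names; the directory paths are placeholders, not the repository's actual output layout.

    # Minimal MAE sketch; paths and folder layout are placeholder assumptions.
    import os
    import numpy as np
    from PIL import Image

    def mean_absolute_error(pred_dir, gt_dir):
        errors = []
        for name in sorted(os.listdir(gt_dir)):
            gt = np.asarray(Image.open(os.path.join(gt_dir, name)).convert('L'),
                            dtype=np.float32) / 255.0
            pred = np.asarray(Image.open(os.path.join(pred_dir, name)).convert('L'),
                              dtype=np.float32) / 255.0
            errors.append(np.abs(pred - gt).mean())
        return float(np.mean(errors))

    # Example call (placeholder paths):
    # print(mean_absolute_error('./predictions/COD', './ground_truth/COD'))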

Visual Results for Multiple Tasks and Backbones

We have released visual results for various tasks using different backbones. You can access them from the following links:

Camouflaged Object Detection (COD)

Salient Object Detection (SOD)

Mirror Detection

Polyp Segmentation

Citation

If you find our work useful, please consider citing our paper:

@article{YourPaper2022,
  title={Towards Complex Backgrounds: A Unified Difference-Aware Decoder for Binary Segmentation},
  author={Your Name and Co-authors},
  journal={arXiv preprint arXiv:2210.15156},
  year={2022}
}
