This official repository contains the source code, prediction results, and evaluation toolbox of the paper 'CamoFormer: Masked Separable Attention for Camouflaged Object Detection'. The technical report can be found at [arXiv](https://arxiv.org/abs/).
The complete benchmark results can be found at [OneDrive](https://mailnankaieducn-my.sharepoint.com/:f:/g/personal/bowenyin_mail_nankai_edu_cn/EmB36EZb_fdMvWGgKx2EalgBuQnj8AFifyR-ip7Jtkfwqg?e=nu6DJz), [Baidu Netdisk](https://pan.baidu.com/s/1k5CxYzcgizzJ4sRdAxBNlA?pwd=srtf), or [Google Drive](https://drive.google.com/drive/folders/1gsCeYtS9cwsMpTHQzkx81n4jsRK4LYdf?usp=sharing).
Code will be released soon.

<p align="center">
<img src="figs/CamoFormer.png" width="600" width="1200"/> <br />
<em>
Figure 1: Overall architecture of our CamoFormer model. First, a pretrained Transformer-based backbone is utilized to extract multi-scale features of the input image. Then, the features from the last three stages are aggregated to generate the coarse prediction. Next, the
progressive refinement decoder equipped with masked separable attention (MSA) is applied to gradually polish the prediction results. All
the predictions generated by our CamoFormer are supervised by the ground truth (GT).
</em>
</p>
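
For a quick orientation, the pipeline in Figure 1 can be summarized by the following PyTorch-style sketch. The module names (`backbone`, `decoder`, `coarse_head`), channel widths, and aggregation details are illustrative assumptions, not the actual CamoFormer implementation; please refer to the released code for the real model.

```
# Illustrative sketch of the Figure-1 pipeline (hypothetical names and shapes,
# not the official CamoFormer code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CamoFormerSketch(nn.Module):
    def __init__(self, backbone, decoder, stage_channels=(128, 320, 512), width=64):
        super().__init__()
        self.backbone = backbone          # pretrained Transformer-based backbone
        self.decoder = decoder            # progressive refinement decoder with MSA
        # 1x1 convs projecting the last three stages to a common width
        self.proj = nn.ModuleList(nn.Conv2d(c, width, 1) for c in stage_channels)
        self.coarse_head = nn.Conv2d(width, 1, 1)

    def forward(self, x):
        feats = self.backbone(x)          # multi-scale features, one per stage
        last3 = feats[-3:]                # aggregate only the last three stages
        size = last3[0].shape[-2:]
        agg = sum(F.interpolate(p(f), size=size, mode="bilinear", align_corners=False)
                  for p, f in zip(self.proj, last3))
        coarse = self.coarse_head(agg)    # coarse prediction
        # the decoder gradually polishes the prediction; during training every
        # intermediate prediction is supervised by the ground truth (GT)
        refined = self.decoder(feats, coarse)
        return [coarse, *refined]
```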




## 1. :fire: NEWS :fire:

- [2022/12/09] Releasing the CamoFormer codebase and the complete COD benchmark results (21 models).
- [2022/12/08] Creating repository.

> We invite everyone to contribute to making this work more accessible and useful. If you have any questions about our work, feel free to contact me via e-mail (bowenyin@mail.nankai.edu.cn). If you are using our code and evaluation toolbox for your research, please cite this paper ([BibTeX]()).

**0. Install**

```
conda create --name CamoFormer python=3.8.5
conda activate CamoFormer
conda install pytorch==1.12.1 torchvision==0.13.1 cudatoolkit=11.3 -c pytorch
pip install opencv-python
conda install tensorboard
conda install tensorboardX
pip install timm
pip install matplotlib
pip install scipy
pip install einops
```

Please also install [apex](https://github.com/NVIDIA/apex):

```
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

Alternatively, Apex also supports a Python-only build (required with PyTorch 0.4):

```
pip install -v --no-cache-dir ./
```
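
As a quick, optional sanity check (not part of the original instructions), the installed versions can be verified from Python:

```
# Optional environment check; expected versions follow the commands above.
import torch, torchvision
print(torch.__version__)          # 1.12.1
print(torchvision.__version__)    # 0.13.1
print(torch.cuda.is_available())  # True if CUDA is set up correctly
```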


**1. Download Datasets and Checkpoints.**


## 3. Proposed CamoFormer

### 3.1. The F-TA in MSA
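
The details of F-TA are given in the paper. As a rough, unofficial illustration of the general idea of masked attention, the sketch below gates attention values toward foreground regions using a coarse prediction map; the function name and the sigmoid gating are our assumptions, not the paper's F-TA formulation.

```
# Assumption-laden sketch of mask-guided attention (not the official F-TA).
import torch

def masked_attention(q, k, v, mask_logits):
    # q, k, v: (B, N, C) token features; mask_logits: (B, N, 1) coarse prediction
    scale = q.shape[-1] ** -0.5
    attn = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)  # (B, N, N)
    v_fg = v * torch.sigmoid(mask_logits)  # emphasize predicted-foreground tokens
    return attn @ v_fg                     # (B, N, C)
```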


<p align="center">
Expand Down