Bump version to v0.8.0 (#269)
* [Fix]: Fix mmcls upgrade bug (#235)

* [Feature]: Add multi machine dist_train (#232)

* [Feature]: Add multi machine dist_train

* [Fix]: Change bash to sh

* [Fix]: Fix missing sh suffix

* [Refactor]: Change bash to sh

* [Refactor] Add unit test (#234)

* [Refactor] add unit test

* update workflow

* update

* [Fix] fix lint

* update test

* refactor moco and densecl unit test

* fix lint

* add unit test

* update unit test

* remove modification

* [Feature]: Add MAE metafile (#238)

* [Feature]: Add MAE metafile

* [Fix]: Fix lint

* [Fix]: Change LARS to AdamW in the metafile of MAE

* [Fix] fix codecov bug (#241)

* [Fix] fix codecov bug

* update comment

* [Refactor] Using MMCls backbones (#233)

* [Refactor] using backbones from MMCls

* [Refactor] modify the unit test

* [Fix] modify default setting of out_indices

* [Docs] fix lint

* [Refactor] modify super init

* [Refactor] remove res_layer.py

* using mmcv PatchEmbed

* [Fix]: Fix outdated problem (#249)

* [Fix]: Fix outdated problem

* [Fix]: Update MoCov3 bibtex

* [Fix]: Use abs path in README

* [Fix]: Reformat MAE bibtex

* [Fix]: Reformat MoCov3 bibtex

* [Feature] Resume from the latest checkpoint automatically. (#245)

* [Feature] Resume from the latest checkpoint automatically.

* fix windows path problem

* fix lint

* add code reference

* [Docs] add docstring for ResNet and ResNeXt (#252)

* [Feature] support KNN benchmark (#243)

* [Feature] support KNN benchmark

* [Fix] add docstring and multi-machine testing

* [Fix] fix lint

* [Fix] change args format and check init_cfg

* [Docs] add benchmark tutorial

* [Docs] add benchmark results

* [Feature]: SimMIM supported (#239)

* [Feature]: SimMIM Pretrain

* [Feature]: Add mix precision and 16x128 config

* [Fix]: Fix config import bug

* [Fix]: Fix config bug

* [Feature]: SimMIM Finetune

* [Fix]: Log every 100

* [Fix]: Fix eval problem

* [Feature]: Add docstring for simmim

* [Refactor]: Merge layer wise lr decay to Default constructor

* [Fix]: Fix SimMIM evaluation bug

* [Fix]: Change model to be compatible to latest version of mmcls

* [Fix]: Fix lint

* [Fix]: Rewrite forward_train for classification cls

* [Feature]: Add UT

* [Fix]: Fix lint

* [Feature]: Add 32 gpus training for simmim ft

* [Fix]: Rename mmcls classifier wrapper

* [Fix]: Add docstring to SimMIMNeck

* [Feature]: Generate docstring for the forward function of simmim encoder

* [Fix]: Rewrite the class docstring for constructor

* [Fix]: Fix lint

* [Fix]: Fix UT

* [Fix]: Reformat config

* [Fix]: Add img resolution

* [Feature]: Add readme and metafile

* [Fix]: Fix typo in README.md

* [Fix]: Change BlackMaskGen to BlockwiseMaskGenerator

* [Fix]: Change the name of SwinForSimMIM

* [Fix]: Delete irrelevant files

* [Feature]: Create extra transformerfinetuneconstructor

* [Fix]: Fix lint

* [Fix]: Update SimMIM README

* [Fix]: Change SimMIMPretrainHead to SimMIMHead

* [Fix]: Fix the docstring of ft constructor

* [Fix]: Fix UT

* [Fix]: Recover deletion

Co-authored-by: Your <you@example.com>

* [Fix] add seed to distributed sampler (#250)

* [Fix] add seed to distributed sampler

* fix lint

* [Feature] Add ImageNet21k (#225)

* solve memory leak by limited implementation

* fix lint problem

Co-authored-by: liming <liming.ai@bytedance.com>

* [Refactor] change args format to '--a-b' (#253)

* [Refactor] change args format to `--a-b`

* modify tsne script

* modify 'sh' files

* modify getting_started.md

* modify getting_started.md

* [Fix] fix 'mkdir' error in prepare_voc07_cls.sh (#261)

* [Fix] fix positional parameter error (#260)

* [Fix] fix command errors in benchmarks tutorial (#263)

* [Docs] add brief installation steps in README.md (#265)

* [Docs] add colab tutorial (#247)

* [Docs] add colab tutorial

* fix lint

* modify the colab tutorial, using API to train the model

* modify the description

* remove #

* modify the command

* [Docs] translate 6_benchmarks.md into Chinese (#262)

* [Docs] translate 6_benchmarks.md into Chinese

* Update 6_benchmarks.md

change 基准 to 基准评测 ("benchmark" to "benchmark evaluation")

* Update 6_benchmarks.md

(1) Add Chinese translation of '1 folder for ImageNet nearest-neighbor classification task'
(2) 数据预准备 -> 数据准备 ("data pre-preparation" -> "data preparation")

* [Docs] remove install scripts in README (#267)

* [Docs] Update version information in dev branch (#268)

* update version to v0.8.0

* fix lint

* [Fix]: Install the latest mmcls

* [Fix]: Add SimMIM in README

Co-authored-by: Yuan Liu <30762564+YuanLiuuuuuu@users.noreply.github.com>
Co-authored-by: Jiahao Xie <52497952+Jiahao000@users.noreply.github.com>
Co-authored-by: Your <you@example.com>
Co-authored-by: Ming Li <73068772+mitming@users.noreply.github.com>
Co-authored-by: liming <liming.ai@bytedance.com>
Co-authored-by: RenQin <45731309+soonera@users.noreply.github.com>
Co-authored-by: YuanLiuuuuuu <3463423099@qq.com>
8 people authored Mar 31, 2022
1 parent 16d9bf2 commit df907e5
Showing 121 changed files with 5,294 additions and 1,064 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/build.yml
@@ -89,10 +89,10 @@ jobs:
           coverage run --branch --source mmselfsup -m pytest tests/
           coverage xml
           coverage report -m --omit="mmselfsup/apis/*"
-      # Only upload coverage report for python3.7 && pytorch1.5
+      # Only upload coverage report for python3.8 && pytorch1.9.0
       - name: Upload coverage to Codecov
-        if: ${{matrix.torch == '1.9' && matrix.python-version == '3.7'}}
-        uses: codecov/codecov-action@v1.0.10
+        if: ${{matrix.torch == '1.9.0' && matrix.python-version == '3.8'}}
+        uses: codecov/codecov-action@v2
         with:
           file: ./coverage.xml
           flags: unittests
18 changes: 12 additions & 6 deletions README.md
@@ -66,13 +66,13 @@ This project is released under the [Apache 2.0 license](LICENSE).
 
 ## ChangeLog
 
-MMSelfSup **v0.7.0** was released in 03/03/2022.
+MMSelfSup **v0.8.0** was released on 31/03/2022.
 
 Highlights of the new version:
 
-* Support **MAE**
-* Add **Places205** benchmarks
-* Add test Windows in workflows
+* Support **SimMIM**
+* Add **KNN** benchmark, support KNN test with checkpoint and extracted backbone weights
+* Support ImageNet-21k dataset
 
 Please refer to [changelog.md](docs/en/changelog.md) for details and release history.
 
@@ -99,6 +99,7 @@ Supported algorithms:
 - [x] [SimSiam (CVPR'2021)](https://arxiv.org/abs/2011.10566)
 - [x] [MoCo v3 (ICCV'2021)](https://arxiv.org/abs/2104.02057)
 - [x] [MAE](https://arxiv.org/abs/2111.06377)
+- [x] [SimMIM](https://arxiv.org/abs/2111.09886)
 
 More algorithms are in our plan.
 
@@ -120,13 +121,16 @@ More algorithms are in our plan.
 
 ## Installation
 
-Please refer to [install.md](docs/en/install.md) for installation and [prepare_data.md](docs/en/prepare_data.md) for dataset preparation.
+MMSelfSup depends on [PyTorch](https://pytorch.org/), [MMCV](https://github.com/open-mmlab/mmcv) and [MMClassification](https://github.com/open-mmlab/mmclassification).
+
+Please refer to [install.md](docs/en/install.md) for more detailed instructions.
 
 ## Get Started
 
-Please see [getting_started.md](docs/en/getting_started.md) for the basic usage of MMSelfSup.
+Please refer to [prepare_data.md](docs/en/prepare_data.md) for dataset preparation and [getting_started.md](docs/en/getting_started.md) for the basic usage of MMSelfSup.
 
 We also provide tutorials for more details:
 
 - [config](docs/en/tutorials/0_config.md)
 - [add new dataset](docs/en/tutorials/1_new_dataset.md)
 - [data pipeline](docs/en/tutorials/2_data_pipeline.md)
@@ -135,6 +139,8 @@ We also provide tutorials for more details:
 - [customize runtime](docs/en/tutorials/5_runtime.md)
 - [benchmarks](docs/en/tutorials/6_benchmarks.md)
 
+Besides, we provide a [colab tutorial](https://github.com/open-mmlab/mmselfsup/blob/master/demo/mmselfsup_colab_tutorial.ipynb) for basic usage.
+
 ## Citation
 
 If you use this toolbox or benchmark in your research, please cite this project.
18 changes: 12 additions & 6 deletions README_zh-CN.md
@@ -64,13 +64,13 @@ MMSelfSup is an open-source self-supervised representation learning toolbox based on PyTorch
 
 ## Changelog
 
-The latest **v0.7.0** version was released on 2022.03.03.
+The latest **v0.8.0** version was released on 2022.03.31.
 
 Highlights of the new version:
 
-* Support **MAE**
-* Add **Places205** downstream benchmarks
-* Add Windows testing
+* Support **SimMIM**
+* Add **KNN** benchmark, supporting evaluation with intermediate checkpoints and extracted backbone weights
+* Support the ImageNet-21k dataset
 
 Please refer to [changelog](docs/zh_cn/changelog.md) for more details and release history.
 
@@ -98,6 +98,7 @@ The differences between MMSelfSup and OpenSelfSup are described in the [comparison document](docs/en/compatibilit
 - [x] [SimSiam (CVPR'2021)](https://arxiv.org/abs/2011.10566)
 - [x] [MoCo v3 (ICCV'2021)](https://arxiv.org/abs/2104.02057)
 - [x] [MAE](https://arxiv.org/abs/2111.06377)
+- [x] [SimMIM](https://arxiv.org/abs/2111.09886)
 
 More algorithms are in our plan.
 
@@ -119,13 +120,16 @@ The differences between MMSelfSup and OpenSelfSup are described in the [comparison document](docs/en/compatibilit
 
 ## Installation
 
-Please refer to [install.md](docs/zh_cn/install.md) for installation and [prepare_data.md](docs/zh_cn/prepare_data.md) for dataset preparation.
+MMSelfSup depends on [PyTorch](https://pytorch.org/), [MMCV](https://github.com/open-mmlab/mmcv) and [MMClassification](https://github.com/open-mmlab/mmclassification).
+
+Please refer to [install.md](docs/zh_cn/install.md) for more detailed instructions.
 
 ## Get Started
 
-Please refer to [getting_started.md](docs/zh_cn/getting_started.md) for the basic usage of MMSelfSup.
+Please refer to [prepare_data.md](docs/zh_cn/prepare_data.md) for dataset preparation and [getting_started.md](docs/zh_cn/getting_started.md) for the basic usage of MMSelfSup.
 
 We also provide more comprehensive tutorials, including:
 
 - [config](docs/zh_cn/tutorials/0_config.md)
 - [add new dataset](docs/zh_cn/tutorials/1_new_dataset.md)
 - [data pipeline](docs/zh_cn/tutorials/2_data_pipeline.md)
@@ -134,6 +138,8 @@ The differences between MMSelfSup and OpenSelfSup are described in the [comparison document](docs/en/compatibilit
 - [customize runtime](docs/zh_cn/tutorials/5_runtime.md)
 - [benchmarks](docs/zh_cn/tutorials/6_benchmarks.md)
 
+In addition, we provide a [colab tutorial](https://github.com/open-mmlab/mmselfsup/blob/master/demo/mmselfsup_colab_tutorial.ipynb).
+
 ## Contributing
 
 We appreciate all contributions to improve MMSelfSup; please refer to [CONTRIBUTING.md](docs/zh_cn/community/CONTRIBUTING.md) for the contributing guideline.
31 changes: 31 additions & 0 deletions configs/benchmarks/classification/_base_/models/swin-base.py
@@ -0,0 +1,31 @@
# model settings

custom_imports = dict(imports='mmcls.models', allow_failed_imports=False)

model = dict(
    type='MMClsImageClassifierWrapper',
    backbone=dict(
        type='mmcls.SwinTransformer',
        arch='base',
        img_size=192,
        drop_path_rate=0.1,
        stage_cfgs=dict(block_cfgs=dict(window_size=6))),
    neck=dict(type='mmcls.GlobalAveragePooling'),
    head=dict(
        type='mmcls.LinearClsHead',
        num_classes=1000,
        in_channels=1024,
        init_cfg=None,  # suppress the default init_cfg of LinearClsHead.
        loss=dict(
            type='mmcls.LabelSmoothLoss',
            label_smooth_val=0.1,
            mode='original'),
        cal_acc=False),
    init_cfg=[
        dict(type='TruncNormal', layer='Linear', std=0.02, bias=0.),
        dict(type='Constant', layer='LayerNorm', val=1., bias=0.)
    ],
    train_cfg=dict(augments=[
        dict(type='BatchMixup', alpha=0.8, num_classes=1000, prob=0.5),
        dict(type='BatchCutMix', alpha=1.0, num_classes=1000, prob=0.5)
    ]))
@@ -0,0 +1,3 @@
_base_ = 'swin-base_ft-8xb256-coslr-100e_in1k.py'

data = dict(samples_per_gpu=64, workers_per_gpu=8)
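This three-line file shows the `_base_` inheritance pattern used throughout the configs: the base file is loaded first, then local dicts are merged over it key by key, so only `samples_per_gpu` and `workers_per_gpu` change while the pipelines, model and schedule are inherited. A small sketch of inspecting the merged result, assuming standard `mmcv.Config` semantics (the child config's path here is hypothetical, and the base is apparently the 8xb256 config shown next):

```python
from mmcv import Config

# Hypothetical path to the three-line child config shown above.
cfg = Config.fromfile('configs/benchmarks/classification/child_config.py')

# Dicts merge key by key: overridden here, everything else inherited.
assert cfg.data.samples_per_gpu == 64
assert cfg.data.drop_last is False  # comes from the 8xb256 base config
```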
@@ -0,0 +1,74 @@
_base_ = [
    '../_base_/models/swin-base.py',
    '../_base_/datasets/imagenet.py',
    '../_base_/schedules/adamw_coslr-100e_in1k.py',
    '../_base_/default_runtime.py',
]

# dataset
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline = [
    dict(
        type='RandomAug',
        input_size=192,
        color_jitter=0.4,
        auto_augment='rand-m9-mstd0.5-inc1',
        interpolation='bicubic',
        re_prob=0.25,
        re_mode='pixel',
        re_count=1,
        mean=(0.485, 0.456, 0.406),
        std=(0.229, 0.224, 0.225))
]
test_pipeline = [
    dict(type='Resize', size=219, interpolation=3),
    dict(type='CenterCrop', size=192),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg)
]
data = dict(
    samples_per_gpu=256,
    drop_last=False,
    workers_per_gpu=32,
    train=dict(pipeline=train_pipeline),
    val=dict(pipeline=test_pipeline))

# model
model = dict(backbone=dict(init_cfg=dict()))

# optimizer
optimizer = dict(
    lr=1.25e-3 * 2048 / 512,
    paramwise_options={
        'norm': dict(weight_decay=0.),
        'bias': dict(weight_decay=0.),
        'absolute_pos_embed': dict(weight_decay=0.),
        'relative_position_bias_table': dict(weight_decay=0.)
    },
    constructor='TransformerFinetuneConstructor',
    model_type='swin',
    layer_decay=0.9)

# clip gradient
optimizer_config = dict(grad_clip=dict(max_norm=5.0))

# learning policy
lr_config = dict(
    policy='CosineAnnealing',
    min_lr=2.5e-7 * 2048 / 512,
    warmup='linear',
    warmup_iters=20,
    warmup_ratio=2.5e-7 / 1.25e-3,
    warmup_by_epoch=True,
    by_epoch=False)

# mixed precision
fp16 = dict(loss_scale='dynamic')

# runtime
checkpoint_config = dict(interval=1, max_keep_ckpts=3, out_dir='')
persistent_workers = True
log_config = dict(
    interval=100, hooks=[
        dict(type='TextLoggerHook'),
    ])
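The literal expressions `1.25e-3 * 2048 / 512` and `2.5e-7 * 2048 / 512` encode the linear learning-rate scaling rule: 1.25e-3 is the reference rate at batch size 512, and 2048 is the effective batch of this 8 GPU × 256 samples-per-GPU setup. A quick check of the arithmetic (reading these factors as the scaling rule is an inference from the config, not something documented here):

```python
# Worked numbers behind the lr, min_lr and warmup_ratio fields above.
base_lr, ref_batch = 1.25e-3, 512
effective_batch = 8 * 256                  # GPUs x samples_per_gpu = 2048

lr = base_lr * effective_batch / ref_batch       # 5.0e-3
min_lr = 2.5e-7 * effective_batch / ref_batch    # 1.0e-6
warmup_start = lr * (2.5e-7 / 1.25e-3)           # warmup_ratio -> 1.0e-6
print(lr, min_lr, warmup_start)
```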
@@ -45,7 +45,8 @@
         'pos_embed': dict(weight_decay=0.),
         'cls_token': dict(weight_decay=0.)
     },
-    constructor='MAEFtOptimizerConstructor',
+    constructor='TransformerFinetuneConstructor',
+    model_type='vit',
     layer_decay=0.65)
 
 # learning policy
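Both fine-tuning configs now share `TransformerFinetuneConstructor`, whose `layer_decay` (0.65 for ViT here, 0.9 for Swin above) implements layer-wise learning-rate decay. A minimal sketch of the usual scheme from BEiT/MAE-style fine-tuning; the exact layer-indexing convention in MMSelfSup is assumed, not verified:

```python
# Sketch: block i of num_layers trains with lr * layer_decay**(num_layers - i),
# so layers closer to the input move the least during fine-tuning.
def layerwise_lr(base_lr, layer_id, num_layers=12, layer_decay=0.65):
    return base_lr * layer_decay ** (num_layers - layer_id)

for lid in (0, 6, 12):  # patch embedding, a middle block, the head
    print(lid, round(layerwise_lr(1e-3, lid), 8))
```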
29 changes: 29 additions & 0 deletions configs/benchmarks/classification/knn_imagenet.py
@@ -0,0 +1,29 @@
data_source = 'ImageNet'
dataset_type = 'SingleViewDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
pipeline = [
    dict(type='Resize', size=256),
    dict(type='CenterCrop', size=224),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]

data = dict(
    samples_per_gpu=256,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        pipeline=pipeline),
    val=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/val',
            ann_file='data/imagenet/meta/val.txt',
        ),
        pipeline=pipeline))
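This config only covers data loading; the KNN benchmark extracts GlobalAveragePooling features for both splits and labels each val image by its nearest training neighbors. A self-contained sketch of temperature-weighted k-NN voting, as an illustration of the technique rather than the exact MMSelfSup implementation:

```python
import torch

def knn_classify(train_feats, train_labels, val_feats,
                 k=20, num_classes=1000, T=0.07):
    """Weighted k-NN over L2-normalized features (train_labels: int64)."""
    train_feats = torch.nn.functional.normalize(train_feats, dim=1)
    val_feats = torch.nn.functional.normalize(val_feats, dim=1)
    sim = val_feats @ train_feats.t()            # cosine sim, (N_val, N_train)
    topk_sim, topk_idx = sim.topk(k, dim=1)
    topk_labels = train_labels[topk_idx]         # (N_val, k)
    weights = (topk_sim / T).exp()               # temperature-weighted votes
    votes = torch.zeros(val_feats.size(0), num_classes)
    votes.scatter_add_(1, topk_labels, weights)  # accumulate per class
    return votes.argmax(dim=1)
```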
41 changes: 41 additions & 0 deletions configs/selfsup/_base_/datasets/imagenet_simmim.py
@@ -0,0 +1,41 @@
# dataset settings
data_source = 'ImageNet'
dataset_type = 'SingleViewDataset'
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline = [
    dict(
        type='RandomResizedCrop',
        size=192,
        scale=(0.67, 1.0),
        ratio=(3. / 4., 4. / 3.)),
    dict(type='RandomHorizontalFlip')
]

# prefetch
prefetch = False
if not prefetch:
    train_pipeline.extend(
        [dict(type='ToTensor'),
         dict(type='Normalize', **img_norm_cfg)])

train_pipeline.append(
    dict(
        type='BlockwiseMaskGenerator',
        input_size=192,
        mask_patch_size=32,
        model_patch_size=4,
        mask_ratio=0.6))

# dataset summary
data = dict(
    samples_per_gpu=256,
    workers_per_gpu=8,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        pipeline=train_pipeline,
        prefetch=prefetch))
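The mask parameters mean that masking decisions are made on a coarse 32-pixel grid (6 × 6 cells for a 192-pixel crop) and then broadcast to the model's 4-pixel patch grid. A numpy sketch of that behaviour, inferred from the SimMIM paper and the parameters above; the actual `BlockwiseMaskGenerator` may differ in detail:

```python
import numpy as np

def blockwise_mask(input_size=192, mask_patch_size=32, model_patch_size=4,
                   mask_ratio=0.6):
    rand_size = input_size // mask_patch_size    # 6x6 coarse grid
    scale = mask_patch_size // model_patch_size  # each cell covers 8x8 patches
    num_cells = rand_size ** 2
    num_mask = int(mask_ratio * num_cells)       # 21 of 36 cells masked

    mask = np.zeros(num_cells, dtype=int)
    mask[np.random.permutation(num_cells)[:num_mask]] = 1
    mask = mask.reshape(rand_size, rand_size)
    # Broadcast each coarse cell to the 48x48 model patch grid.
    return mask.repeat(scale, axis=0).repeat(scale, axis=1)
```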
10 changes: 10 additions & 0 deletions configs/selfsup/_base_/models/simmim_swin-base.py
@@ -0,0 +1,10 @@
# model settings
model = dict(
    type='SimMIM',
    backbone=dict(
        type='SimMIMSwinTransformer',
        arch='B',
        img_size=192,
        stage_cfgs=dict(block_cfgs=dict(window_size=6))),
    neck=dict(type='SimMIMNeck', in_channels=128 * 2**3, encoder_stride=32),
    head=dict(type='SimMIMHead', patch_size=4, encoder_in_channels=3))
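`in_channels=128 * 2**3` is Swin-B's last-stage width (channels double across the three stage transitions), and `encoder_stride=32` is the total downsampling the neck has to undo to predict pixels. The head then computes an L1 loss on masked patches only; a compact sketch of that objective as described in the SimMIM paper (assumed to match `SimMIMHead`, not copied from it):

```python
import torch
import torch.nn.functional as F

def simmim_loss(pred, target, mask, patch_size=4):
    """L1 reconstruction on masked patches only.

    pred/target: (B, 3, H, W) reconstructed and original images;
    mask: (B, H // patch_size, W // patch_size) with 1 = masked.
    """
    mask = (mask.repeat_interleave(patch_size, 1)
                .repeat_interleave(patch_size, 2)
                .unsqueeze(1).float())               # (B, 1, H, W)
    loss = F.l1_loss(pred, target, reduction='none')
    # Average over masked pixels, then over the 3 image channels.
    return (loss * mask).sum() / (mask.sum() + 1e-5) / pred.size(1)
```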
13 changes: 11 additions & 2 deletions configs/selfsup/byol/README.md
@@ -16,11 +16,11 @@
 
 **Back to [model_zoo.md](https://github.com/open-mmlab/mmselfsup/blob/master/docs/en/model_zoo.md) to download models.**
 
-In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
+On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. If not mentioned, all models are pre-trained on the ImageNet-1k dataset.
 
 ### Classification
 
 The classification benchmarks include 4 downstream task datasets, **VOC**, **ImageNet**, **iNaturalist2018** and **Places205**. If not specified, the results are Top-1 (%).
 
 #### VOC SVM / Low-shot SVM
 
@@ -51,6 +51,15 @@ The **Feature1 - Feature5** don't have the GlobalAveragePooling, the feature map
 | [resnet50_8xb32-accum16-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/byol/byol_resnet50_8xb32-accum16-coslr-200e_in1k.py) | 21.25 | 36.55 | 43.66 | 50.74 | 53.82 |
 | [resnet50_8xb32-accum16-coslr-300e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/byol/byol_resnet50_8xb32-accum16-coslr-300e_in1k.py) | 21.18 | 36.68 | 43.42 | 51.04 | 54.06 |
 
+#### ImageNet Nearest-Neighbor Classification
+
+The results are obtained from the features after GlobalAveragePooling. Here, k=10 to 200 indicates different numbers of nearest neighbors.
+
+| Self-Supervised Config | k=10 | k=20 | k=100 | k=200 |
+| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---- | ---- | ----- | ----- |
+| [resnet50_8xb32-accum16-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/byol/byol_resnet50_8xb32-accum16-coslr-200e_in1k.py) | 63.9 | 64.2 | 62.9 | 61.9 |
+| [resnet50_8xb32-accum16-coslr-300e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/byol/byol_resnet50_8xb32-accum16-coslr-300e_in1k.py) | 66.1 | 66.3 | 65.2 | 64.4 |
+
 ### Detection
 
 The detection benchmarks include 2 downstream task datasets, **Pascal VOC 2007 + 2012** and **COCO2017**. This benchmark follows the evaluation protocols set up by MoCo.
4 changes: 2 additions & 2 deletions configs/selfsup/deepcluster/README.md
@@ -16,11 +16,11 @@ Clustering is a class of unsupervised learning methods that has been extensively
 
 **Back to [model_zoo.md](https://github.com/open-mmlab/mmselfsup/blob/master/docs/en/model_zoo.md) to download models.**
 
-In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
+On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. If not mentioned, all models are pre-trained on the ImageNet-1k dataset.
 
 ### Classification
 
 The classification benchmarks include 4 downstream task datasets, **VOC**, **ImageNet**, **iNaturalist2018** and **Places205**. If not specified, the results are Top-1 (%).
 
 #### VOC SVM / Low-shot SVM
 
10 changes: 9 additions & 1 deletion configs/selfsup/densecl/README.md
@@ -16,7 +16,7 @@ To date, most existing self-supervised learning methods are designed and optimiz
 
 **Back to [model_zoo.md](https://github.com/open-mmlab/mmselfsup/blob/master/docs/en/model_zoo.md) to download models.**
 
-In this page, we provide benchmarks as much as possible to evaluate our pre-trained models. If not mentioned, all models were trained on ImageNet1k dataset.
+On this page, we provide as many benchmarks as possible to evaluate our pre-trained models. If not mentioned, all models are pre-trained on the ImageNet-1k dataset.
 
 ### Classification
 
@@ -50,6 +50,14 @@ The **Feature1 - Feature5** don't have the GlobalAveragePooling, the feature map
 | -------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------- | -------- | -------- | -------- |
 | [resnet50_8xb32-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/densecl/densecl_resnet50_8xb32-coslr-200e_in1k.py) | 21.32 | 36.20 | 43.97 | 51.04 | 50.45 |
 
+#### ImageNet Nearest-Neighbor Classification
+
+The results are obtained from the features after GlobalAveragePooling. Here, k=10 to 200 indicates different numbers of nearest neighbors.
+
+| Self-Supervised Config | k=10 | k=20 | k=100 | k=200 |
+| -------------------------------------------------------------------------------------------------------------------------------------------------- | ---- | ---- | ----- | ----- |
+| [resnet50_8xb32-coslr-200e](https://github.com/open-mmlab/mmselfsup/blob/master/configs/selfsup/densecl/densecl_resnet50_8xb32-coslr-200e_in1k.py) | 48.2 | 48.5 | 46.8 | 45.6 |
+
 ### Detection
 
 The detection benchmarks include 2 downstream task datasets, **Pascal VOC 2007 + 2012** and **COCO2017**. This benchmark follows the evaluation protocols set up by MoCo.