
[Docs] translate 0_config.md into Chinese #216

Merged: 3 commits merged into open-mmlab:dev_v0.7.0 on Feb 24, 2022

Conversation

@wang11wang wang11wang commented Feb 21, 2022

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry: just make the pull request and seek help from the maintainers.

Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix the potential lint issues.
  • Bug fixes are fully covered by unit tests; the case that causes the bug should be added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, like docstring or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects, like MMDet or MMSeg.
  • CLA has been signed and all committers have signed the CLA in this PR.

docs/zh_cn/tutorials/0_config.md
```
{backbone setting}_{neck setting}_{head_setting}
```
Here we use `'_'` to concatenate to make the name more readable.
这里我们使用 `'_'` 连接各个部分提升名字可读性。

Remove this line. Actually, we forgot to revise this part; we now prefer the format `{backbone setting}-{neck setting}-{head_setting}`, and `_` is used as in line 30.
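The convention the reviewer describes can be sketched with a short example. All of the setting names below (`resnet50`, `mlp2`, `clshead`) are hypothetical placeholders, not actual MMSelfSup config names: module settings are joined with `-`, while `_` separates the major sections of the file name.

```python
# Sketch of the preferred naming convention: module settings joined
# with '-' as in {backbone setting}-{neck setting}-{head_setting}.
# The setting names below are hypothetical examples.

def build_module_info(backbone: str, neck: str, head: str) -> str:
    """Join the module settings with '-' into one file-name section."""
    return "-".join([backbone, neck, head])

module_info = build_module_info("resnet50", "mlp2", "clshead")
print(module_info)  # resnet50-mlp2-clshead
```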


Could you please help us revise the description in the English version of the tutorial? Thanks!

@fangyixiao18 fangyixiao18 left a comment


Please revise the two minor parts, the other content looks good. Thanks!

- `npid-ensure-neg`
- `deepcluster-sobel`

### Module information
### 模块信息
```
{backbone setting}_{neck setting}_{head_setting}

Please change this to `{backbone setting}-{neck setting}-{head_setting}`, and update the corresponding line in the English version.

- `8xb32`: 8 GPUs in total, with a batch size of 32 per GPU
- `coslr`: uses a cosine learning-rate scheduler
- `200e`: trains the model for 200 epochs
- `in1k`: data information; trained on the ImageNet1k dataset
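The training-information segment above can be decomposed mechanically. The sketch below assumes a segment of the form `{gpus}xb{batch_per_gpu}-{scheduler}-{epochs}e_{dataset}` (the helper name `parse_training_info` is hypothetical, not an MMSelfSup API):

```python
# Minimal sketch parsing the training-information section of a config
# file name, e.g. "8xb32-coslr-200e_in1k". Assumes the pattern
# {gpus}xb{batch}-{scheduler}-{epochs}e_{dataset}.

def parse_training_info(segment: str) -> dict:
    train_part, dataset = segment.split("_")           # "8xb32-coslr-200e", "in1k"
    gpu_batch, scheduler, epochs = train_part.split("-")
    gpus, batch = gpu_batch.split("xb")                # "8", "32"
    return {
        "gpus": int(gpus),
        "batch_size_per_gpu": int(batch),
        "lr_scheduler": scheduler,                     # e.g. "coslr" = cosine schedule
        "epochs": int(epochs.rstrip("e")),
        "dataset": dataset,
    }

info = parse_training_info("8xb32-coslr-200e_in1k")
print(info["gpus"], info["batch_size_per_gpu"], info["epochs"])  # 8 32 200
```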

Remove lines 86-94, and the corresponding part in the English version.

@fangyixiao18 fangyixiao18 changed the base branch from master to dev_v0.7.0 February 23, 2022 11:47
@fangyixiao18 fangyixiao18 merged commit bf7deb3 into open-mmlab:dev_v0.7.0 Feb 24, 2022
fangyixiao18 added a commit that referenced this pull request Mar 4, 2022
* [Enhance] add pre-commit hook for algo-readme and copyright (#213)

* [Enhance] add test windows in workflows (#215)

* [Enhance] add test windows in workflows

* fix lint

* add optional requirements

* add try-except judgement

* add opencv installation in windows test steps

* fix path error on windows

* update

* update path

* update

* add pytest skip for algorithm test

* update requirements/runtime.txt

* update pytest skip

* [Docs] translate 0_config.md into Chinese (#216)

* [Docs] translate 0_config.md into Chinese

* [Fix] fix format description in 0_config.md

* Update: 0_config.md

* [Fix] fix tsne 'no `init_cfg`' error (#222)

* [Fix] fix tsne 'no init_cfg' and pool_type errors

* [Refactor] fix linting of tsne vis

* [Docs] reorganizing OpenMMLab projects and update algorithms in readme (#219)

* [Docs] reorganizing OpenMMLab projects and update algorithms in readme

* using small letters

* fix typo

* [Fix] fix image channel bgr/rgb bug and update benchmarks (#210)

* [Fix] fix image channel bgr/rgb bug

* update model zoo

* update readme and metafile

* [Fix] fix typo

* [Fix] fix typo

* [Fix] fix lint

* modify Places205 directory according to the downloaded dataset

* update results

* [Fix] Fix the bug when using prefetch under multi-view methods, e.g., DenseCL (#218)

* fix bug for prefetch_loader under multi-view setting

* fix lint problem

Co-authored-by: liming <liming.ai@bytedance.com>

* [Feature]: MAE official (#221)

* [Feature]: MAE single image pre-training

* [Fix]: Fix config

* [Fix]: Fix dataset link

* [Feature]: Add run

* [Refactor]: Delete spot

* [Feature]: ignore nohup output file

* [Feature]: Add auto script to generate run cmd

* [Refactor]: Refactor mae config file

* [Feature]: sz20 settings

* [Feature]: Add auto resume

* [Fix]: Fix lint

* [Feature]: Make git ignore txt

* [Refactor]: Delete gpus in script

* [Fix]: Make generate_cmd to add --async

* [Feature]: Initial version of Vit fine-tune

* [Fix]: Add 1424 specific settings

* [Fix]: Fix missing file client bug for 1424

* [Feature]: 1424 customized settings

* [Fix]: Make drop in eval to False

* [Feature]: Change the finetune and pre-training settings

* [Feature]: Add debug setting

* [Refactor]: Refactor the model

* [Feature]: Customized settings

* [Feature]: Add A100 settings

* [Fix]: Change mae to imagenet

* [Feature]: Change mae pretrain num workers to 32

* [Feature]: Change num workers to 16

* [Feature]: Add A100 setting for pre_release ft version

* [Feature]: Add img_norm_cfg

* [Fix]: Fix mae cls test missing logits bug

* [Fix]: Fix mae cls head bias initialize to zero

* [Feature]: Rename mae config name

* [Feature]: Add MAE README.md

* [Fix]: Fix lint

* [Feature]: Fix typo

* [Fix]: Fix typo

* [Feature]: Fix invalid link

* [Fix]: Fix finetune config file name

* [Feature]: Official pretrain v1

* [Feature]: Change log interval to 100

* [Feature]: pretrain 1600 epochs

* [Fix]: Change encoder num head to 12

* [Feature]: Mix precision

* [Feature]: Add default value to random masking

* [Feature]: Official MAE finetune

* [Feature]: Finetune img per gpu 32

* [Feature]: Add multi machine training for lincls

* [Fix]: Fix lincls master port master addr

* [Feature]: Change img per gpu to 128

* [Feature]: Add linear eval and Refactor

* [Fix]: Fix debug mode

* [Fix]: Delete MAE dataset in __init__.py

* [Feature]: normalize pixel for mae

* [Fix]: Fix lint

* [Feature]: LARS for linear eval

* [Feature]: Add lars for mae linear eval

* [Feature]: Change mae linear lars num workers to 32

* [Feature]: Change mae linear lars num workers to 8

* [Feature]: log every 25 iter for mae linear eval lars

* [Feature]: Add 1600 epoch and 800 epoch pretraining

* [Fix]: Change linear eval to 902

* [Fix]: Add random flip to linear eval

* [Fix]: delete fp16 in mae

* [Refactor]: Change backbone to mmcls

* [Fix]: Align finetune settings

* [Fix]: replace timm trunc_normal with mmcv trunc_normal

* [Fix]: Change finetune layer_decay to 0.65

* [Fix]: Delete pretrain last norm when global_pooling

* [Fix]: set requires_grad of norm1 to False

* [Fix]: delete norm1

* [Fix]: Fix docstring bug

* [Fix]: Fix lint

* [Fix]: Add external link

* [Fix]: Delete auto_resume and reformat config readme.

* [Fix]: Fix pytest bug

* [Fix]: Fix lint

* [Refactor]: Rename filename

* [Feature]: Add docstring

* [Fix]: Rename config file name

* [Fix]: Fix name inconsistency bug

* [Fix]: Change the default value of persistent_worker in builder to True

* [Fix]: Change the default value of CPUS_PER_TASK to 5

* [Fix]: Add a blank line to line136 in tools/train.py

* [Fix]: Fix MAE algorithm docstring format and add paper name and url

* [Feature]: Add MAE paper name and link, and store mae teaser on github

* [Refactor]: Delete mae.png

* [Fix]: Fix config file name

* [Fix]: Fix name bug

* [Refactor]: Change default GPUS to 8

* [Fix]: Abandon change to drop_last

* [Fix]: Fix docstring in mae algorithm

* [Fix]: Fix lint

* [Fix]: Fix lint

* [Fix]: Fix mae finetune algo type bug

* [Feature]: Add unit test for algorithm

* [Feature]: Add unit test for remaining parts

* [Fix]: Fix lint

* [Fix]: Fix typo

* [Fix]: Delete some unnecessary modification in gitignore

* [Feature]: Change finetune setting in mae algo to mixup setting

* [Fix]: Change norm_pix_loss to norm_pix in pretrain head

* [Fix]: Delete modification in dist_train_linear.sh

* [Refactor]: Delete global pool in mae_cls_vit.py

* [Fix]: Change finetune param to mixup in test_mae_classification

* [Fix]: Change norm_pix_loss to norm_pix of mae_pretrain_head in unit test

* [Fix]: Change norm_pix_loss to norm_pix in unit test

* [Refactor]: Create init_weights for mae_finetune_head and mae_linprobe_head

* [Refactor]: Construct 2d sin-cosine position embedding using torch

* [Refactor]: Using classification and using mixup from mmcls

* [Fix]: Fix lint

* [Fix]: Add False to finetune mae linprobe

* [Fix]: Set drop_last to False

* [Fix]: Fix MAE finetune layerwise lr bug

* [Refactor]: Delete redundant MAE when registering MAE

* [Refactor]: Split initialize_weights in MAE to submodules

* [Fix]: Change the min_lr of mae pretrain to 0.0

* [Refactor]: Delete unused _init_weights in mae_cls_vit

* [Refactor]: Change MAE cls vit to a more general name

* [Feature]: Add Epoch Fix cosine annealing lr updater

* [Fix]: Fix lint

* [Feature]: Add layer wise lr decay in optimizer constructor

* [Fix]: Fix lint

* [Fix]: Fix set layer wise lr decay bug

* [Fix]: Fix UT for MAE

* [Fix]: Fix lint

* [Fix]: update algorithm readme format for MAE

* [Fix]: Fix isort

* [Fix]: Add Returns in mae_pretrain_vit

* [Fix]: Change bgr to rgb

* [Fix]: Change norm pix to True

* [Fix]: Use cls_token to linear prob

* [Fix]: Delete mixup.py

* [Fix]: Fix MAE readme

* [Feature]: Delete linprobe

* [Refactor]: Merge MAE head into one file

* [Fix]: Fix lint

* [Fix]: rename mae_pretrain_head to mae_head

* [Fix]: Fix import error in __init__.py

* [Feature]: skip MAE algo UT when running on windows

* [Fix]: Fix UT bug

* [Feature]: Update model_zoo

* [Fix]: Rename MAE pretrain model name

* [Fix]: Delete mae ft prefix

* [Feature]: Change b to base

* [Refactor]: Change b in MAE pt config to base

* [Fix]: Fix typo in docstring

* [Fix]: Fix name bug

* [Feature]: Add new constructor for MAE finetune

* [Fix]: Fix model_zoo link

* [Fix]: Skip UT for MAE

* [Fix]: Change fixed channel order to param

Co-authored-by: LIU Yuan <liuyuuan@pjlab.org.cn>
Co-authored-by: liu yuan <liuyuan@pjlab.org.cn>

* [Feature]: Add diff seeds to diff ranks and set torch seed in worker_init_fn (#228)

* [Feature]: Add set diff seeds to diff ranks

* [Fix]: Set diff seed to diff workers

* Bump version to v0.7.0 (#227)

* Bump version to v0.7.0

* [Docs] update readme

Co-authored-by: wang11wang <95845452+wang11wang@users.noreply.github.com>
Co-authored-by: Liangyu Chen <45140242+c-liangyu@users.noreply.github.com>
Co-authored-by: Ming Li <73068772+mitming@users.noreply.github.com>
Co-authored-by: liming <liming.ai@bytedance.com>
Co-authored-by: Yuan Liu <30762564+YuanLiuuuuuu@users.noreply.github.com>
Co-authored-by: LIU Yuan <liuyuuan@pjlab.org.cn>
Co-authored-by: liu yuan <liuyuan@pjlab.org.cn>
@OpenMMLab-Assistant003

Hi @wang11wang! First of all, we want to express our gratitude for your significant PR in the MMSelfSup project. Your contribution is highly appreciated, and we are grateful for your efforts in helping improve this open-source project during your personal time. We believe that many developers will benefit from your PR.

We would also like to invite you to join our Special Interest Group (SIG) private channel on Discord, where you can share your experiences and ideas and build connections with like-minded peers. To join the SIG channel, simply message the moderator (OpenMMLab) on Discord, or briefly share your open-source contributions in the #introductions channel, and we will assist you. We look forward to seeing you there! Join us: https://discord.gg/UjgXkPWNqA

If you have a WeChat account, you are welcome to join our community on WeChat. You can add our assistant: openmmlabwx. Please add "mmsig + GitHub ID" as a remark when adding friends. :)
Thank you again for your contribution ❤ @wang11wang
