[Feature] Fix like xtuner #119

Merged (5 commits) on Jan 5, 2024
Fix mim command
okotaku committed Jan 5, 2024
commit c5d3ef3524ed8bd16ccd5ec7503cdad23128a3a4
2 changes: 0 additions & 2 deletions .gitignore
@@ -125,7 +125,6 @@ venv.bak/
*.pkl.json
*.log.json
/work_dirs
-/mmcls/.mim
.DS_Store

# Pytorch
@@ -153,7 +152,6 @@ artifacts/
wandb/
data/
dump/
-diffengine/.mim/
build/
.dist_test/
.netrc
6 changes: 0 additions & 6 deletions MANIFEST.in
@@ -1,11 +1,5 @@
-include diffengine/.mim/model-index.yml
-recursive-include diffengine/.mim/configs *.py *.yml
-recursive-include diffengine/.mim/tools *.sh *.py
-recursive-include diffengine/.mim/demo *.sh *.py
recursive-exclude tests *
recursive-exclude demo *
recursive-exclude data *
recursive-exclude configs *
recursive-exclude docs *
recursive-exclude work_dirs *
recursive-exclude dist *
4 changes: 1 addition & 3 deletions README.md
@@ -42,7 +42,6 @@ Before installing DiffEngine
Install DiffEngine

```
-pip install openmim
pip install git+https://github.com/okotaku/diffengine.git
```

@@ -55,7 +54,7 @@ DiffEngine makes training easy through its pre-defined configs.
2. **Start Training**: Open a terminal and run the following command to start training with the selected config:

```bash
-mim train diffengine stable_diffusion_v15_dreambooth_lora_dog.py
+diffengine train stable_diffusion_v15_dreambooth_lora_dog.py
```

3. **Monitor Progress and get results**: The training process will begin, and you can track its progress. The outputs of the training will be located in the `work_dirs/stable_diffusion_v15_dreambooth_lora_dog` directory, specifically when using the `stable_diffusion_v15_dreambooth_lora_dog` config.
@@ -319,7 +318,6 @@ Also, please check the following openmmlab and huggingface projects

- [OpenMMLab](https://openmmlab.com/)
- [HuggingFace](https://huggingface.co/)
-- [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.

```
@article{mmengine2022,
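Taken together, the README changes above swap the `mim`-based workflow for a built-in `diffengine` CLI. A minimal end-to-end sketch of the new flow (the config name is the README's own example; the echoed commands are illustrative of the documented usage, not a guaranteed interface):

```bash
# Quick-start flow after this PR: install from git, then train by config name.
# openmim is no longer a prerequisite.
CONFIG=stable_diffusion_v15_dreambooth_lora_dog

# The training command the README now documents:
TRAIN_CMD="diffengine train ${CONFIG}.py"
echo "${TRAIN_CMD}"

# Training outputs land under work_dirs/<config name>/.
WORK_DIR="work_dirs/${CONFIG}"
echo "${WORK_DIR}"
```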
6 changes: 3 additions & 3 deletions diffengine/configs/debias_estimation_loss/README.md
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/debias_estimation_loss/stable_diffusion_xl_pokemon_blip_debias_estimation_loss.py
+$ diffengine train stable_diffusion_xl_pokemon_blip_debias_estimation_loss
```

## Inference with diffusers
16 changes: 5 additions & 11 deletions diffengine/configs/deepfloyd_if/README.md
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/deepfloyd_if/deepfloyd_if_pokemon_blip.py
+$ diffengine train deepfloyd_if_pokemon_blip
```

## Inference with diffusers
@@ -36,9 +36,9 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/deepfloyd_if/deepfloyd_if_l_pokemon_blip.py work_dirs/deepfloyd_if_l_pokemon_blip/epoch_50.pth work_dirs/deepfloyd_if_l_pokemon_blip --save-keys unet
+$ diffengine convert deepfloyd_if_l_pokemon_blip work_dirs/deepfloyd_if_l_pokemon_blip/epoch_50.pth work_dirs/deepfloyd_if_l_pokemon_blip --save-keys unet
```

Then we can run inference.
@@ -63,12 +63,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_if "yoda pokemon" work_dirs/deepfloyd_if_l_pokemon_blip
-```

## Results Example

#### deepfloyd_if_l_pokemon_blip
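Note the multi-GPU form in these hunks: `mim`'s `--gpus`/`--launcher` flags are replaced by an `NPROC_PER_NODE` environment variable. The dispatch below is an illustrative sketch of that convention (`launch_train` is a hypothetical helper; it only prints the command it would run):

```bash
# Hypothetical dispatcher mirroring the README usage: plain invocation for a
# single GPU, NPROC_PER_NODE=<n> prepended for distributed training.
launch_train() {
  config="$1"
  nproc="${NPROC_PER_NODE:-1}"
  if [ "${nproc}" -gt 1 ]; then
    echo "NPROC_PER_NODE=${nproc} diffengine train ${config}"
  else
    echo "diffengine train ${config}"
  fi
}

launch_train deepfloyd_if_pokemon_blip
NPROC_PER_NODE=2 launch_train deepfloyd_if_pokemon_blip
```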
12 changes: 3 additions & 9 deletions diffengine/configs/deepfloyd_if_dreambooth/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/deepfloyd_if_dreambooth/deepfloyd_if_xl_dreambooth_lora_dog.py
+$ diffengine train deepfloyd_if_xl_dreambooth_lora_dog
```

## Inference with diffusers
@@ -67,12 +67,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_if_lora "A photo of sks dog in a bucket" work_dirs/deepfloyd_if_xl_dreambooth_lora_dog/step999
-```

## Results Example

#### deepfloyd_if_xl_dreambooth_lora_dog
10 changes: 5 additions & 5 deletions diffengine/configs/distill_sd/README.md
@@ -30,12 +30,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/distill_sd/small_sd_xl_pokemon_blip.py
+$ diffengine train small_sd_xl_pokemon_blip
```

## Inference with diffusers
@@ -45,9 +45,9 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/distill_sd/small_sd_xl_pokemon_blip.py work_dirs/small_sd_xl_pokemon_blip/epoch_50.pth work_dirs/small_sd_xl_pokemon_blip --save-keys unet
+$ diffengine convert small_sd_xl_pokemon_blip work_dirs/small_sd_xl_pokemon_blip/epoch_50.pth work_dirs/small_sd_xl_pokemon_blip --save-keys unet
```

Then we can run inference.
12 changes: 3 additions & 9 deletions diffengine/configs/distill_sd_dreambooth/README.md
@@ -28,12 +28,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/distill_sd_dreambooth/small_sd_dreambooth_lora_dog.py
+$ diffengine train small_sd_dreambooth_lora_dog
```

## Training Speed
@@ -84,12 +84,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_lora "A photo of sks dog in a bucket" work_dirs/small_sd_dreambooth_lora_dog/step999 --sdmodel segmind/small-sd
-```

## Results Example

#### small_sd_dreambooth_lora_dog
10 changes: 5 additions & 5 deletions diffengine/configs/esd/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/esd/stable_diffusion_xl_gogh_esd.py
+$ diffengine train stable_diffusion_xl_gogh_esd
```

## Inference with diffusers
@@ -42,9 +42,9 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/esd/stable_diffusion_xl_gogh_esd.py work_dirs/stable_diffusion_xl_gogh_esd/iter_500.pth work_dirs/stable_diffusion_xl_gogh_esd --save-keys unet
+$ diffengine convert stable_diffusion_xl_gogh_esd work_dirs/stable_diffusion_xl_gogh_esd/iter_500.pth work_dirs/stable_diffusion_xl_gogh_esd --save-keys unet
```

6 changes: 3 additions & 3 deletions diffengine/configs/input_perturbation/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/input_perturbation/stable_diffusion_xl_pokemon_blip_input_perturbation.py
+$ diffengine train stable_diffusion_xl_pokemon_blip_input_perturbation
```

## Inference with diffusers
10 changes: 5 additions & 5 deletions diffengine/configs/instruct_pix2pix/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/instruct_pix2pix/stable_diffusion_xl_instruct_pix2pix.py
+$ diffengine train stable_diffusion_xl_instruct_pix2pix
```

## Inference with diffusers
@@ -42,9 +42,9 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/instruct_pix2pix/stable_diffusion_xl_instruct_pix2pix.py work_dirs/stable_diffusion_xl_instruct_pix2pix/epoch_3.pth work_dirs/stable_diffusion_xl_instruct_pix2pix --save-keys unet
+$ diffengine convert stable_diffusion_xl_instruct_pix2pix work_dirs/stable_diffusion_xl_instruct_pix2pix/epoch_3.pth work_dirs/stable_diffusion_xl_instruct_pix2pix --save-keys unet
```

Then we can run inference.
6 changes: 3 additions & 3 deletions diffengine/configs/ip_adapter/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/ip_adapter/stable_diffusion_xl_pokemon_blip_ip_adapter.py
+$ diffengine train stable_diffusion_xl_pokemon_blip_ip_adapter
```

## Inference with diffusers
6 changes: 3 additions & 3 deletions diffengine/configs/kandinsky_v22/README.md
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/kandinsky_v22/kandinsky_v22_prior_pokemon_blip.py
+$ diffengine train kandinsky_v22_prior_pokemon_blip
```

## Inference prior with diffusers
10 changes: 5 additions & 5 deletions diffengine/configs/kandinsky_v3/README.md
@@ -29,12 +29,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/kandinsky_v3/kandinsky_v3_pokemon_blip.py
+$ diffengine train kandinsky_v3_pokemon_blip
```

## Inference prior with diffusers
@@ -44,10 +44,10 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
# Note that when training colossalai, use `--colossalai` and set `INPUT_FILENAME` to index file.
-$ mim run diffengine publish_model2diffusers configs/kandinsky_v3/kandinsky_v3_pokemon_blip.py work_dirs/kandinsky_v3_pokemon_blip/epoch_50.pth/model/pytorch_model.bin.index.json work_dirs/kandinsky_v3_pokemon_blip --save-keys unet --colossalai
+$ diffengine convert kandinsky_v3_pokemon_blip work_dirs/kandinsky_v3_pokemon_blip/epoch_50.pth/model/pytorch_model.bin.index.json work_dirs/kandinsky_v3_pokemon_blip --save-keys unet --colossalai
```

Then we can run inference.
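Every `diffengine convert` example in these READMEs follows one shape: config name, input checkpoint, output directory, and `--save-keys`. A sketch of assembling that command from the `work_dirs/<config>/` layout the examples use (`build_convert_cmd` is a hypothetical helper, shown only to make the pattern explicit):

```bash
# Hypothetical helper: builds the convert command from a config name and a
# checkpoint filename, following the work_dirs/<config>/ layout above.
build_convert_cmd() {
  config="$1"
  ckpt="$2"
  echo "diffengine convert ${config} work_dirs/${config}/${ckpt} work_dirs/${config} --save-keys unet"
}

build_convert_cmd lcm_xl_pokemon_blip epoch_50.pth
```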
10 changes: 5 additions & 5 deletions diffengine/configs/lcm/README.md
@@ -29,12 +29,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/lcm/lcm_xl_pokemon_blip.py
+$ diffengine train lcm_xl_pokemon_blip
```

## Inference with diffusers
@@ -44,9 +44,9 @@ Once you have trained a model
Before inferencing, we should convert weights for diffusers format,

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/lcm/lcm_xl_pokemon_blip.py work_dirs/lcm_xl_pokemon_blip/epoch_50.pth work_dirs/lcm_xl_pokemon_blip --save-keys unet
+$ diffengine convert lcm_xl_pokemon_blip work_dirs/lcm_xl_pokemon_blip/epoch_50.pth work_dirs/lcm_xl_pokemon_blip --save-keys unet
```

Then we can run inference.
6 changes: 3 additions & 3 deletions diffengine/configs/lcm_lora/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/lcm_lora/lcm_xl_lora_pokemon_blip.py
+$ diffengine train lcm_xl_lora_pokemon_blip
```

## Inference with diffusers
6 changes: 3 additions & 3 deletions diffengine/configs/loha/README.md
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/stable_diffusion_xl_loha/stable_diffusion_xl_loha_pokemon_blip.py
+$ diffengine train stable_diffusion_xl_loha_pokemon_blip
```

## Inference with diffusers