[Feature] Fix like xtuner #119

Merged · 5 commits · Jan 5, 2024
2 changes: 0 additions & 2 deletions .gitignore
@@ -125,7 +125,6 @@ venv.bak/
*.pkl.json
*.log.json
/work_dirs
-/mmcls/.mim
.DS_Store

# Pytorch
@@ -153,7 +152,6 @@ artifacts/
wandb/
data/
dump/
-diffengine/.mim/
build/
.dist_test/
.netrc
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -45,6 +45,6 @@ repos:
exclude: |-
(?x)(
^docs
-| ^configs
+| ^diffengine/configs
| ^projects
)
9 changes: 2 additions & 7 deletions MANIFEST.in
@@ -1,12 +1,7 @@
-include diffengine/.mim/model-index.yml
-include add_mim_extension.py
-recursive-include diffengine/.mim/configs *.py *.yml
-recursive-include diffengine/.mim/tools *.sh *.py
-recursive-include diffengine/.mim/demo *.sh *.py
+recursive-include diffengine/configs *.py *.yml *.json
+recursive-include diffengine/tools *.sh *.py
recursive-exclude tests *
-recursive-exclude demo *
recursive-exclude data *
-recursive-exclude configs *
recursive-exclude docs *
recursive-exclude work_dirs *
recursive-exclude dist *
Expand Down
6 changes: 2 additions & 4 deletions README.md
@@ -42,20 +42,19 @@ Before installing DiffEngine, please ensure that PyTorch >= v2.0 has been succes
Install DiffEngine

```
-pip install openmim
pip install git+https://github.com/okotaku/diffengine.git
```

## 👨‍🏫 Get Started [🔝](#-table-of-contents)

DiffEngine makes training easy through its pre-defined configs. These configs provide a streamlined way to start your training process. Here's how you can get started using one of the pre-defined configs:

-1. **Choose a config**: You can find various pre-defined configs in the [`configs`](configs/) directory of the DiffEngine repository. For example, if you wish to train a DreamBooth model using the Stable Diffusion algorithm, you can use the [`configs/stable_diffusion_dreambooth/stable_diffusion_v15_dreambooth_lora_dog.py`](configs/stable_diffusion_dreambooth/stable_diffusion_v15_dreambooth_lora_dog.py).
+1. **Choose a config**: You can find various pre-defined configs in the [`configs`](diffengine/configs/) directory of the DiffEngine repository. For example, if you wish to train a DreamBooth model using the Stable Diffusion algorithm, you can use the [`configs/stable_diffusion_dreambooth/stable_diffusion_v15_dreambooth_lora_dog.py`](diffengine/configs/stable_diffusion_dreambooth/stable_diffusion_v15_dreambooth_lora_dog.py).

2. **Start Training**: Open a terminal and run the following command to start training with the selected config:

```bash
-mim train diffengine stable_diffusion_v15_dreambooth_lora_dog.py
+diffengine train stable_diffusion_v15_dreambooth_lora_dog
```

3. **Monitor progress and get results**: The training process will begin and you can track its progress. With the `stable_diffusion_v15_dreambooth_lora_dog` config, training outputs are written to the `work_dirs/stable_diffusion_v15_dreambooth_lora_dog` directory.
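Note that the new command takes a bare config name rather than a file path. Presumably the CLI resolves that name through the config-name mapping this PR adds, while still accepting an explicit path; a minimal sketch of that lookup (the helper `resolve_config` and its fallback behavior are assumptions for illustration, not code from this PR):

```python
import os

def resolve_config(name_or_path, cfgs_name_path):
    """Return a config file path for either a real path or a registered name."""
    if os.path.isfile(name_or_path):
        return name_or_path  # explicit paths still work
    try:
        return cfgs_name_path[name_or_path]
    except KeyError:
        raise ValueError(f"unknown config: {name_or_path}") from None

# Toy mapping standing in for diffengine.configs.cfgs_name_path:
cfgs = {
    "stable_diffusion_v15_dreambooth_lora_dog":
        "diffengine/configs/stable_diffusion_dreambooth/"
        "stable_diffusion_v15_dreambooth_lora_dog.py",
}
print(resolve_config("stable_diffusion_v15_dreambooth_lora_dog", cfgs))
```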
@@ -319,7 +318,6 @@ Also, please check the following openmmlab and huggingface projects and the corr

- [OpenMMLab](https://openmmlab.com/)
- [HuggingFace](https://huggingface.co/)
-- [MIM](https://github.com/open-mmlab/mim): MIM Installs OpenMMLab Packages.

```
@article{mmengine2022,
75 changes: 0 additions & 75 deletions add_mim_extension.py

This file was deleted.

3 changes: 2 additions & 1 deletion diffengine/__init__.py
@@ -1,3 +1,4 @@
+from .entry_point import cli
from .version import __version__

-__all__ = ["__version__"]
+__all__ = ["__version__", "cli"]
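The package now exports a `cli` entry point from a new `entry_point` module, replacing the `mim` wrapper. Its code is not shown in this diff; here is a hedged sketch of the shape such a dispatcher could take, with the `train` and `convert` subcommands inferred from commands elsewhere in this PR (argument names are illustrative, and the actual dispatch to the tool scripts is omitted):

```python
import argparse

def cli(argv=None):
    """Parse `diffengine <subcommand> ...`; dispatch is omitted in this sketch."""
    parser = argparse.ArgumentParser(prog="diffengine")
    sub = parser.add_subparsers(dest="command", required=True)
    train = sub.add_parser("train", help="train from a config name or path")
    train.add_argument("config")
    convert = sub.add_parser("convert", help="export weights to diffusers format")
    convert.add_argument("config")
    convert.add_argument("in_file")
    convert.add_argument("out_dir")
    convert.add_argument("--save-keys", nargs="+", default=["unet"])
    return parser.parse_args(argv)

args = cli(["train", "stable_diffusion_v15_dreambooth_lora_dog"])
print(args.command, args.config)
```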
23 changes: 23 additions & 0 deletions diffengine/configs/__init__.py
@@ -0,0 +1,23 @@
# flake8: noqa: PTH122,PTH120
# Copied from xtuner.configs.__init__
import os


def get_cfgs_name_path():
path = os.path.dirname(__file__)
mapping = {}
for root, _, files in os.walk(path):
# Skip if it is a base config
if "_base_" in root:
continue
for file_ in files:
if file_.endswith(
(".py", ".json"),
) and not file_.startswith(".") and not file_.startswith("_"):
mapping[os.path.splitext(file_)[0]] = os.path.join(root, file_)
return mapping


cfgs_name_path = get_cfgs_name_path()

__all__ = ["cfgs_name_path"]
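To see what the mapping built above contains, here is a self-contained rerun of the same walk over a throwaway config tree (the function is reimplemented inline so the snippet runs without diffengine installed):

```python
import os
import tempfile

def get_cfgs_name_path(path):
    """Map each config basename (no extension) to its path, skipping
    _base_ directories and hidden or underscore-prefixed files."""
    mapping = {}
    for root, _, files in os.walk(path):
        if "_base_" in root:
            continue  # base configs are building blocks, not entry points
        for file_ in files:
            if file_.endswith((".py", ".json")) and not file_.startswith((".", "_")):
                mapping[os.path.splitext(file_)[0]] = os.path.join(root, file_)
    return mapping

with tempfile.TemporaryDirectory() as d:
    os.makedirs(os.path.join(d, "esd"))
    os.makedirs(os.path.join(d, "_base_"))
    open(os.path.join(d, "esd", "stable_diffusion_xl_gogh_esd.py"), "w").close()
    open(os.path.join(d, "_base_", "default_runtime.py"), "w").close()
    mapping = get_cfgs_name_path(d)
print(sorted(mapping))  # -> ['stable_diffusion_xl_gogh_esd']; _base_ is excluded
```

One consequence of this flat mapping: every non-base config filename must be unique across the tree, since a later duplicate silently overwrites the earlier entry.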
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/debias_estimation_loss/stable_diffusion_xl_pokemon_blip_debias_estimation_loss.py
+$ diffengine train stable_diffusion_xl_pokemon_blip_debias_estimation_loss
```
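The multi-GPU invocation changes from `mim`'s `--gpus 2 --launcher pytorch` flags to an `NPROC_PER_NODE` environment variable, following xtuner's convention. Presumably the CLI inspects that variable and re-launches the training script under `torchrun` when more than one process is requested; a sketch of that dispatch logic (the function name, script path, and exact torchrun flags are assumptions, not this PR's code):

```python
import os

def build_launch_cmd(train_script, config, env=None):
    """Pick a plain interpreter run or a torchrun launch from NPROC_PER_NODE."""
    env = os.environ if env is None else env
    nproc = int(env.get("NPROC_PER_NODE", "1"))
    if nproc > 1:
        return ["torchrun", f"--nproc_per_node={nproc}", train_script, config]
    return ["python", train_script, config]

print(build_launch_cmd("tools/train.py", "cfg.py", {"NPROC_PER_NODE": "2"}))
# -> ['torchrun', '--nproc_per_node=2', 'tools/train.py', 'cfg.py']
```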

## Inference with diffusers
@@ -21,12 +21,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/deepfloyd_if/deepfloyd_if_pokemon_blip.py
+$ diffengine train deepfloyd_if_pokemon_blip
```

## Inference with diffusers
@@ -36,9 +36,9 @@ Once you have trained a model, specify the path to the saved model and utilize i
Before running inference, we need to convert the weights to the diffusers format:

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/deepfloyd_if/deepfloyd_if_l_pokemon_blip.py work_dirs/deepfloyd_if_l_pokemon_blip/epoch_50.pth work_dirs/deepfloyd_if_l_pokemon_blip --save-keys unet
+$ diffengine convert deepfloyd_if_l_pokemon_blip work_dirs/deepfloyd_if_l_pokemon_blip/epoch_50.pth work_dirs/deepfloyd_if_l_pokemon_blip --save-keys unet
```
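The `--save-keys` option selects which submodules of the training checkpoint (here the UNet) are exported. A minimal sketch of that filtering over a flat training state dict (illustrative only; the real converter presumably also maps the kept keys into the diffusers folder layout):

```python
def filter_state_dict(state_dict, save_keys):
    """Keep only entries whose top-level prefix is in save_keys, stripping
    the prefix so each submodule can be loaded on its own."""
    out = {key: {} for key in save_keys}
    for name, value in state_dict.items():
        prefix, _, rest = name.partition(".")
        if prefix in out:
            out[prefix][rest] = value
    return out

# Toy checkpoint with two submodules; only the UNet entries survive.
ckpt = {
    "unet.conv_in.weight": 1,
    "unet.conv_in.bias": 2,
    "text_encoder.embeddings.weight": 3,
}
print(filter_state_dict(ckpt, ["unet"]))
# -> {'unet': {'conv_in.weight': 1, 'conv_in.bias': 2}}
```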

Then we can run inference.
@@ -63,12 +63,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_if "yoda pokemon" work_dirs/deepfloyd_if_l_pokemon_blip
-```

## Results Example

#### deepfloyd_if_l_pokemon_blip
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/deepfloyd_if_dreambooth/deepfloyd_if_xl_dreambooth_lora_dog.py
+$ diffengine train deepfloyd_if_xl_dreambooth_lora_dog
```

## Inference with diffusers
@@ -67,12 +67,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_if_lora "A photo of sks dog in a bucket" work_dirs/deepfloyd_if_xl_dreambooth_lora_dog/step999
-```

## Results Example

#### deepfloyd_if_xl_dreambooth_lora_dog
@@ -30,12 +30,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/distill_sd/small_sd_xl_pokemon_blip.py
+$ diffengine train small_sd_xl_pokemon_blip
```

## Inference with diffusers
@@ -45,9 +45,9 @@ Once you have trained a model, specify the path to the saved model and utilize i
Before running inference, we need to convert the weights to the diffusers format:

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/distill_sd/small_sd_xl_pokemon_blip.py work_dirs/small_sd_xl_pokemon_blip/epoch_50.pth work_dirs/small_sd_xl_pokemon_blip --save-keys unet
+$ diffengine convert small_sd_xl_pokemon_blip work_dirs/small_sd_xl_pokemon_blip/epoch_50.pth work_dirs/small_sd_xl_pokemon_blip --save-keys unet
```

Then we can run inference.
@@ -28,12 +28,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/distill_sd_dreambooth/small_sd_dreambooth_lora_dog.py
+$ diffengine train small_sd_dreambooth_lora_dog
```

## Training Speed
@@ -84,12 +84,6 @@ image = pipe(
image.save('demo.png')
```

-We also provide inference demo scripts:
-
-```bash
-$ mim run diffengine demo_lora "A photo of sks dog in a bucket" work_dirs/small_sd_dreambooth_lora_dog/step999 --sdmodel segmind/small-sd
-```

## Results Example

#### small_sd_dreambooth_lora_dog
10 changes: 5 additions & 5 deletions configs/esd/README.md → diffengine/configs/esd/README.md
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/esd/stable_diffusion_xl_gogh_esd.py
+$ diffengine train stable_diffusion_xl_gogh_esd
```

## Inference with diffusers
@@ -42,9 +42,9 @@ Once you have trained a model, specify the path to the saved model and utilize i
Before running inference, we need to convert the weights to the diffusers format:

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/esd/stable_diffusion_xl_gogh_esd.py work_dirs/stable_diffusion_xl_gogh_esd/iter_500.pth work_dirs/stable_diffusion_xl_gogh_esd --save-keys unet
+$ diffengine convert stable_diffusion_xl_gogh_esd work_dirs/stable_diffusion_xl_gogh_esd/iter_500.pth work_dirs/stable_diffusion_xl_gogh_esd --save-keys unet
```

@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/input_perturbation/stable_diffusion_xl_pokemon_blip_input_perturbation.py
+$ diffengine train stable_diffusion_xl_pokemon_blip_input_perturbation
```

## Inference with diffusers
@@ -27,12 +27,12 @@ Run Training

```
# single gpu
-$ mim train diffengine ${CONFIG_FILE}
+$ diffengine train ${CONFIG_FILE}
# multi gpus
-$ mim train diffengine ${CONFIG_FILE} --gpus 2 --launcher pytorch
+$ NPROC_PER_NODE=${GPU_NUM} diffengine train ${CONFIG_FILE}

# Example.
-$ mim train diffengine configs/instruct_pix2pix/stable_diffusion_xl_instruct_pix2pix.py
+$ diffengine train stable_diffusion_xl_instruct_pix2pix
```

## Inference with diffusers
@@ -42,9 +42,9 @@ Once you have trained a model, specify the path to the saved model and utilize i
Before running inference, we need to convert the weights to the diffusers format:

```bash
-$ mim run diffengine publish_model2diffusers ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
+$ diffengine convert ${CONFIG_FILE} ${INPUT_FILENAME} ${OUTPUT_DIR} --save-keys ${SAVE_KEYS}
# Example
-$ mim run diffengine publish_model2diffusers configs/instruct_pix2pix/stable_diffusion_xl_instruct_pix2pix.py work_dirs/stable_diffusion_xl_instruct_pix2pix/epoch_3.pth work_dirs/stable_diffusion_xl_instruct_pix2pix --save-keys unet
+$ diffengine convert stable_diffusion_xl_instruct_pix2pix work_dirs/stable_diffusion_xl_instruct_pix2pix/epoch_3.pth work_dirs/stable_diffusion_xl_instruct_pix2pix --save-keys unet
```

Then we can run inference.