IDM-VTON : Improving Diffusion Models for Authentic Virtual Try-on in the Wild


This is the official implementation of the paper 'Improving Diffusion Models for Authentic Virtual Try-on in the Wild'.

🤗 Try our Hugging Face demo

(teaser image)

TODO list

  • demo model
  • inference code
  • training code

Requirements

git clone https://github.com/yisol/IDM-VTON.git
cd IDM-VTON

conda env create -f environment.yaml
conda activate idm

Data preparation

You can download the VITON-HD dataset from VITON-HD. After downloading the dataset, move vitonhd_test_tagged.json into the test folder. The dataset directory should be structured as follows.


train
|-- ...

test
|-- image
|-- image-densepose
|-- agnostic-mask
|-- cloth
|-- vitonhd_test_tagged.json
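
Before running inference, it can help to check this layout programmatically. The helper below is a hypothetical convenience sketch (not part of the repo) that reports which expected entries are missing from the test folder:

```python
from pathlib import Path

# Entries the README expects under <data_dir>/test.
# Hypothetical helper, not part of the IDM-VTON codebase.
EXPECTED_TEST_ENTRIES = [
    "image",
    "image-densepose",
    "agnostic-mask",
    "cloth",
    "vitonhd_test_tagged.json",
]

def missing_test_entries(data_dir):
    """Return the expected entries that are absent under <data_dir>/test."""
    test_dir = Path(data_dir) / "test"
    return [name for name in EXPECTED_TEST_ENTRIES
            if not (test_dir / name).exists()]
```

If the returned list is non-empty, the dataset is incomplete and inference will likely fail to find those inputs.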

Inference

Run inference with the Python script and its arguments:

accelerate launch inference.py \
    --width 768 --height 1024 --num_inference_steps 30 \
    --output_dir "result" \
    --unpaired \
    --data_dir "DATA_DIR" \
    --seed 42 \
    --test_batch_size 2 \
    --guidance_scale 2.0

Alternatively, you can simply run the provided script:

sh inference.sh
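
After inference finishes, the generated images land in the directory passed as --output_dir. The snippet below is a hypothetical helper (assuming results are written as .jpg or .png files, which the repo may do differently) for collecting them:

```python
from pathlib import Path

def list_results(output_dir="result", exts=(".jpg", ".png")):
    """Collect generated try-on image filenames from the output directory,
    sorted by name. Hypothetical helper, not part of the repo."""
    out = Path(output_dir)
    if not out.is_dir():
        return []
    return sorted(p.name for p in out.iterdir()
                  if p.suffix.lower() in exts)
```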

Acknowledgements

For the demo, GPUs are provided by ZeroGPU, and the automatic mask generation code is based on OOTDiffusion. Parts of the code are based on IP-Adapter.
