UniControl

This repository is for the paper:

UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
Can Qin 1,2, Shu Zhang 1, Ning Yu 1, Yihao Feng 1, Xinyi Yang 1, Yingbo Zhou 1, Huan Wang 1, Juan Carlos Niebles 1, Caiming Xiong 1, Silvio Savarese 1, Stefano Ermon 3, Yun Fu 2, Ran Xu 1
1 Salesforce AI, 2 Northeastern University, 3 Stanford University
Work done when Can Qin was an intern at Salesforce AI Research.


Introduction

Achieving machine autonomy and human control often represent divergent objectives in the design of interactive AI systems. Visual generative foundation models such as Stable Diffusion show promise in navigating these goals, especially when prompted with arbitrary languages. However, they often fall short in generating images with spatial, structural, or geometric controls. The integration of such controls, which can accommodate various visual conditions in a single unified model, remains an unaddressed challenge. In response, we introduce UniControl, a new generative foundation model that consolidates a wide array of controllable condition-to-image (C2I) tasks within a singular framework, while still allowing for arbitrary language prompts. UniControl enables pixel-level-precise image generation, where visual conditions primarily influence the generated structures and language prompts guide the style and context. To equip UniControl with the capacity to handle diverse visual conditions, we augment pretrained text-to-image diffusion models and introduce a task-aware HyperNet to modulate the diffusion models, enabling the adaptation to different C2I tasks simultaneously. Trained on nine unique C2I tasks, UniControl demonstrates impressive zero-shot generation abilities with unseen visual conditions. Experimental results show that UniControl often surpasses the performance of single-task-controlled methods of comparable model sizes. This control versatility positions UniControl as a significant advancement in the realm of controllable visual generation.
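
The task-aware HyperNet mentioned above can be pictured as a small network that maps a task instruction embedding to modulation signals for the control branch of the diffusion model. Below is a minimal conceptual sketch in PyTorch; it is not the repository's implementation, and the dimensions, module names, and choice of per-channel scaling are illustrative assumptions.

import torch
import torch.nn as nn

class TaskAwareHyperNetSketch(nn.Module):
    """Conceptual sketch only: map a task embedding to per-channel scales
    that modulate a control feature map."""

    def __init__(self, task_embed_dim: int = 768, feature_channels: int = 320):
        super().__init__()
        # Small MLP producing one modulation scale per feature channel.
        self.mlp = nn.Sequential(
            nn.Linear(task_embed_dim, feature_channels),
            nn.SiLU(),
            nn.Linear(feature_channels, feature_channels),
        )

    def forward(self, control_features, task_embedding):
        # control_features: (B, C, H, W); task_embedding: (B, task_embed_dim)
        scales = self.mlp(task_embedding).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return control_features * scales

# Toy usage with random tensors.
hypernet = TaskAwareHyperNetSketch()
features = torch.randn(2, 320, 64, 64)
task_embedding = torch.randn(2, 768)
print(hypernet(features, task_embedding).shape)  # torch.Size([2, 320, 64, 64])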

Instructions

Environment Preparation

Set up the environment first (this may take a few minutes):

conda env create -f environment.yaml
conda activate unicontrol
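
After activating the environment, a quick sanity check that PyTorch can see your GPU may save time before downloading checkpoints. This snippet is a generic check and not part of the repository:

# Generic sanity check for the conda environment (not part of this repository).
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))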

Checkpoint Preparation (Only For Training)

First, decide which Stable Diffusion model you want to control. In this example, we use the standard SD 1.5. You can download it from the official Stability AI page; the file you need is "v1-5-pruned.ckpt".

(Or "v2-1_512-ema-pruned.ckpt" if you are using SD2.)

Note that all weights inside the ControlNet branch are copied from SD, so no layer is trained from scratch and you are still fine-tuning the entire model (a conceptual sketch of this weight copy follows the commands below).

We provide a simple script to do this. If your SD checkpoint is at "./models/v1-5-pruned.ckpt" and you want the processed model (SD+ControlNet) saved at "./models/control_sd15_ini.ckpt", run:

python tool_add_control.py ./models/v1-5-pruned.ckpt ./models/control_sd15_ini.ckpt

Or if you are using SD2:

python tool_add_control_sd21.py ./models/v2-1_512-ema-pruned.ckpt ./models/control_sd21_ini.ckpt
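
Conceptually, these scripts initialize the control branch by copying the matching Stable Diffusion weights into a new checkpoint, so no layer starts from random initialization. The sketch below illustrates that idea with plain torch checkpoint handling; the "control_model." key mapping and the output filename are assumptions, and the actual scripts may map keys differently.

# Illustrative sketch of SD -> SD+ControlNet weight copying; the key mapping is an assumption.
import torch

sd_path = "./models/v1-5-pruned.ckpt"
out_path = "./models/control_sd15_ini_sketch.ckpt"

sd_ckpt = torch.load(sd_path, map_location="cpu")
sd_weights = sd_ckpt.get("state_dict", sd_ckpt)

new_state = dict(sd_weights)  # keep every original SD weight
for key, value in sd_weights.items():
    # Hypothetical mapping: mirror UNet weights into a "control_model." branch.
    if key.startswith("model.diffusion_model."):
        control_key = "control_model." + key[len("model.diffusion_model."):]
        new_state[control_key] = value.clone()

torch.save({"state_dict": new_state}, out_path)
print(f"Saved {len(new_state)} tensors to {out_path}")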

The checkpoint of the pre-trained model is saved at "laion400m-data/canqin/checkpoints_v1/ours_latest_acti.ckpt".

Data Preparation

The volume "laion400m-data-ssd" is required for the tasks "canny, hed, seg, depth, normal, openpose".
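
Before launching training or inference, it can help to verify that the mounted volume actually contains data for each task. The layout below is a hypothetical placeholder (one sub-directory per task under the mount point); adjust the paths to the real structure of "laion400m-data-ssd".

# Hypothetical layout check; the mount point and per-task sub-directories are placeholders.
from pathlib import Path

DATA_ROOT = Path("/laion400m-data-ssd")  # assumed mount point
TASKS = ["canny", "hed", "seg", "depth", "normal", "openpose"]

for task in TASKS:
    task_dir = DATA_ROOT / task  # placeholder: one sub-directory per task
    status = "found" if task_dir.is_dir() else "MISSING"
    print(f"{task:10s} -> {task_dir} [{status}]")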

Model Inference (tested with CUDA 11.0 and Conda 4.12.0)

For each task, run the corresponding command below. If you hit an out-of-memory (OOM) error, decrease "--num_samples". A helper that loops over all tasks is sketched after the list of commands.

Canny to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task canny 

HED Edge to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task hed 

HED-like Sketch to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task hedsketch

Depth Map to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task depth 

Normal Surface Map to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task normal

Segmentation Map to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task seg

Human Skeleton to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task openpose

Object Bounding Boxes to Image Generation:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task bbox

Image Outpainting:

python inference_demo.py --ckpt ../checkpoints_v1/ours_latest_acti.ckpt --task outpainting
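
To run every pre-training task back to back with the same checkpoint, a small driver can loop over the task names above. This is a convenience sketch rather than a script shipped with the repository; it only reuses the "--ckpt" and "--task" arguments shown in the commands above.

# Convenience sketch (not part of the repository): run inference_demo.py for every task.
import subprocess

CKPT = "../checkpoints_v1/ours_latest_acti.ckpt"
TASKS = [
    "canny", "hed", "hedsketch", "depth", "normal",
    "seg", "openpose", "bbox", "outpainting",
]

for task in TASKS:
    cmd = ["python", "inference_demo.py", "--ckpt", CKPT, "--task", task]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)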

Model Training (tested with CUDA 11.0 and Conda 4.12.0)

For single-task training, run the following command with your choice of "--task"; it will train on GPUs with DDP:

python train_single_task.py --task canny --checkpoint_path ./models/control_sd15_ini.ckpt

The model checkpoint will then be saved at "lightning_logs/version_$num", and image-logger visualizations will appear in "image_log/train".

For multi-task training, run the following command; it will train on GPUs with DDP:

python train_multi_task_full.py

The model checkpoint will then be saved at "lightning_logs/version_$num", and image-logger visualizations will appear in "image_log/train".
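
Because PyTorch Lightning numbers runs as "lightning_logs/version_$num", a small helper can locate the newest checkpoint once training finishes. This is a generic utility sketch; the "checkpoints" sub-directory name is the PyTorch Lightning default and is assumed here.

# Generic helper: find the most recent .ckpt under lightning_logs/ (the "checkpoints"
# sub-directory is the PyTorch Lightning default, assumed here).
from pathlib import Path
from typing import Optional

def latest_checkpoint(log_root: str = "lightning_logs") -> Optional[Path]:
    ckpts = sorted(
        Path(log_root).glob("version_*/checkpoints/*.ckpt"),
        key=lambda p: p.stat().st_mtime,
    )
    return ckpts[-1] if ckpts else None

print(latest_checkpoint())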

To Do

  • Data Preparation
  • Pre-training Tasks Inference
    • Canny-to-image
    • HED-to-image
    • HEDSketch-to-image
    • Depth-to-image
    • Normal-to-image
    • Seg-to-image
    • Human-Skeleton-to-image
    • Bbox-to-image
    • Image-outpainting
  • Model Training
  • Gradio Demo
  • Zero-shot Tasks Inference
  • Jupyter Notebook

Citation

If you find this project useful for your research, please kindly cite our paper:

Acknowledgement

Stable Diffusion https://github.com/CompVis/stable-diffusion

ControlNet https://github.com/lllyasviel/ControlNet

StyleGAN3 https://github.com/NVlabs/stylegan3
