
Stable diffusion VAE fine tuning (backport AutoencoderKL and its config.yaml to taming-transformers) #222

Open · wants to merge 3 commits into base: master

Conversation

@rbbb commented Sep 17, 2023

Can we have stable diffusion VAE fine-tuning directly from taming-transformers?

The code seems to work (which is no surprise, since AutoencoderKL was originally taken from taming-transformers).
Both the AutoencoderKL code and the config snippet were taken from stable-diffusion.
Usage is strictly identical to 'VQGAN with your own data'.
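
For reference, here is a rough sketch of what the backported class and the config snippet boil down to. The import path is the upstream stable-diffusion one (in this PR the class is copied into taming-transformers), and the ddconfig values below are the stock stable-diffusion first-stage hyperparameters, so treat them as assumptions and check configs/finetune_vae.yaml for the actual ones:

```python
# Rough sketch, not the exact PR code: instantiating the AutoencoderKL that this PR
# backports. Import path and hyperparameters are the upstream stable-diffusion ones
# and are assumptions here; the real values live in configs/finetune_vae.yaml.
from ldm.models.autoencoder import AutoencoderKL  # upstream class the PR copies from

ddconfig = dict(
    double_z=True,        # KL VAE: encoder outputs mean and log-variance
    z_channels=4,
    resolution=256,
    in_channels=3,
    out_ch=3,
    ch=128,
    ch_mult=[1, 2, 4, 4],
    num_res_blocks=2,
    attn_resolutions=[],
    dropout=0.0,
)
# Placeholder loss; the training config in this PR wires in an LPIPS-based loss instead.
lossconfig = {"target": "torch.nn.Identity"}

vae = AutoencoderKL(ddconfig=ddconfig, lossconfig=lossconfig, embed_dim=4)
```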

There is also some safetensors loading code, but it doesn't work with torch 1.7, the version recommended for taming-transformers.
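
Roughly, the loading fallback looks like this (sketch only; the assumption that the weights sit under a top-level "state_dict" key matches the released stable-diffusion VAE checkpoints, but check your own file):

```python
# Sketch of the checkpoint-loading fallback described above. The "state_dict" key
# layout is an assumption (it matches the released stable-diffusion VAE checkpoints).
import torch

def load_vae_state_dict(path: str):
    if path.endswith(".safetensors"):
        # safetensors loading doesn't work with the torch 1.7 recommended by taming-transformers
        from safetensors.torch import load_file
        sd = load_file(path)
    else:
        sd = torch.load(path, map_location="cpu")
        sd = sd.get("state_dict", sd)
    return sd
```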

Related discussions:
lllyasviel/ControlNet#500

Related code:
https://github.com/cccntu/fine-tune-models/blob/main/run_finetune_vae.py
(adapted from Patil Suraj's stable-diffusion-jax)

@rbbb (Author) commented Sep 19, 2023

Added a Colab notebook in commit efb20eb.

I'm slightly confused about the actual objective function.

@sgw-ite commented Apr 18, 2024

Hi, I'm using the code from https://github.com/CompVis/taming-transformers/pull/222/files. I would like to ask why you used VQLPIPS as the loss function on line 20 of configs/finetune_vae.yaml. Also, thank you very much for your code!

@rbbb (Author) commented Apr 18, 2024

Hi.

In the original pull request, I wrote that "it is an aesthetic choice".

There are no rules about which metrics are used to fine-tune the VAE (the VAE police will not come to get you if you change the loss function). It is usual to drop the discriminator when fine-tuning, and by "usual" I mean common practice that people follow without formal verification or a peer-reviewed paper. Dropping or including LPIPS is the same kind of choice: each option gives you an aesthetically different result.

To take a classical example: https://en.wikipedia.org/wiki/Dither
After reducing the color space, having only a pixel loss will produce banding in the image, while having a perceptual loss should achieve dithering, with the occasional bad pixel.

So you should run both (with and without LPIPS), look very closely at your images, and see which one you prefer.
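
To make the comparison concrete, here is a rough sketch of what "with and without LPIPS" means for the reconstruction term. It uses the standalone lpips package purely for illustration, not the VQLPIPSWithDiscriminator class the PR's config actually points to:

```python
# Illustration only: the two objectives to compare differ in whether a perceptual
# term is added to the plain pixel reconstruction loss. Uses the standalone `lpips`
# package (Zhang et al.) instead of taming.modules.losses, just to keep it short.
import torch
import lpips

perceptual = lpips.LPIPS(net="vgg")  # expects images scaled to [-1, 1]

def recon_loss(x, x_hat, use_lpips=True, perceptual_weight=1.0):
    pixel = torch.abs(x - x_hat).mean()  # plain L1 "pixel loss" -> tends to band
    if not use_lpips:
        return pixel
    # perceptual term -> tends toward dithering-like texture, occasional bad pixel
    return pixel + perceptual_weight * perceptual(x, x_hat).mean()
```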

More formally, if you take a paper at random (say, the Stable Diffusion 3 paper, https://arxiv.org/pdf/2403.03206.pdf), you'll notice that model evaluation is all human preference.

HTH
