This repo is built on stylegan2-ada-pytorch with minimal modifications to train and load DiffAugment-stylegan2 models in PyTorch. Please check the stylegan2-ada-pytorch README for dependencies and other usages of this codebase.
The following command is an example of training StyleGAN2 with the default Color + Translation + Cutout DiffAugment on the 100-shot Obama dataset using 1 GPU. See here for a list of our provided low-shot datasets. You may also prepare your own dataset and specify the path to your image folder.
python train.py --outdir=training-runs --data=https://data-efficient-gans.mit.edu/datasets/100-shot-obama.zip --gpus=1
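To train on your own images, you can point --data at a local folder instead of a download URL. The sketch below is an assumption based on the interface of our TensorFlow repo: the folder path is a placeholder, and the --DiffAugment flag for selecting the augmentation policy should be confirmed against `python train.py --help` in this codebase.

```shell
# Sketch: train on a local image folder (placeholder path) with 1 GPU.
# The --DiffAugment flag is assumed from the TensorFlow repo's interface;
# verify the exact option name with `python train.py --help`.
python train.py --outdir=training-runs --data=path/to/your/image-folder --gpus=1 \
    --DiffAugment=color,translation,cutout
```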
The following commands are an example of generating images with our pre-trained 100-shot Obama model. See here for a list of our provided pre-trained models. The code will automatically convert a TensorFlow StyleGAN2 model to the compatible PyTorch version; you may also use legacy.py to do this manually.
python generate.py --outdir=out --seeds=1-16 --network=https://data-efficient-gans.mit.edu/models/DiffAugment-stylegan2-100-shot-obama.pkl
python generate_gif.py --output=obama.gif --seed=0 --num-rows=1 --num-cols=8 --network=https://data-efficient-gans.mit.edu/models/DiffAugment-stylegan2-100-shot-obama.pkl
To train on larger datasets (e.g., CIFAR and FFHQ), please follow the guidelines in the stylegan2-ada-pytorch README to prepare the datasets.
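As a sketch of that workflow, stylegan2-ada-pytorch provides a dataset_tool.py script that packs a folder of images into the dataset format train.py expects. The source and destination paths below are placeholders; see that repo's README for the exact options for each dataset.

```shell
# Sketch: convert a folder of images into a dataset archive for train.py.
# Both paths are placeholders; consult the stylegan2-ada-pytorch README
# for dataset-specific preparation steps (e.g., CIFAR, FFHQ).
python dataset_tool.py --source=path/to/your-images --dest=datasets/your-dataset.zip
```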
This PyTorch codebase will not fully reproduce our paper's results, as it uses a different set of hyperparameters and a different evaluation protocol. Please refer to our TensorFlow repo to fully reproduce the paper's results.