config files for conditional training on segmentation maps #17

Open
ash80 opened this issue Feb 2, 2021 · 8 comments

@ash80

ash80 commented Feb 2, 2021

Great paper! I am trying to retrain this model on an image dataset for which I can generate segmentation masks using DeepLab v2. However, I don't have a config yaml file for training the transformer, as there is for FacesHQ or D-RIN. Could you please provide a sample yaml file for training with segmentation masks? Many thanks!

@ink1

ink1 commented Feb 2, 2021

I'm also interested in that, see #16.
The best starting point I could find is the yaml file shared with the sflckr checkpoint. Replace validation with train at the end of the file (roughly as in the sketch below).
But my progress basically stops there.
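For reference, a minimal sketch of what that swap can look like at the end of the distributed sflckr.yaml (the batch size and the size parameter are illustrative; the dataset target is the one referenced by the checkpoint config, and you would swap in your own dataset class):

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 1
    train:                                  # was `validation:` in the distributed file
      target: taming.data.sflckr.Examples   # replace with your own dataset class
      params:
        size: 256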

@attashe

attashe commented Feb 4, 2021

I am trying to train this on the Flickr30k dataset, and after 21 epochs the intermediate results haven't changed. The config is from the sflckr checkpoint.
[Screenshots of intermediate results, 2021-02-04 at 19:38, 19:39, and 19:40]

@akmtn

akmtn commented Feb 19, 2021

The distributed sflckr.yaml seems insufficient for training, because some settings are missing, for example model.params.lossconfig (see the sketch below).

Hi authors,
Could you please provide a sample yaml file for training with segmentation masks? Thanks.
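For anyone patching this by hand before an official config lands: a hedged sketch of what the missing model.params.lossconfig block might look like, assuming the BCE-style segmentation loss from taming.modules.losses.segmentation (the class name and the codebook_weight value are assumptions to verify against the repo, e.g. configs/coco_cond_stage.yaml once available):

model:
  params:
    lossconfig:
      target: taming.modules.losses.segmentation.BCELossWithQuant   # assumed loss class, please verify
      params:
        codebook_weight: 1.0   # illustrative weight for the codebook term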

@pesser
Contributor

pesser commented May 20, 2021

Added config and loss to train the cond stage on segmentation maps (configs/coco_cond_stage.yaml and configs/sflckr_cond_stage.yaml). Optionally, you can also extract the cond stage weights from the transformer checkpoints,

python scripts/extract_submodel.py logs/2021-01-20T16-04-20_coco_transformer/checkpoints/last.ckpt coco_cond_stage.ckpt cond_stage_model

and fine-tune from there (you may need to adjust the data section of the config; see the sketch after the command):

python main.py --base configs/coco_cond_stage.yaml -t True --gpus 0, model.params.ckpt_path=coco_cond_stage.ckpt
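If you fine-tune on your own data, a minimal sketch of how the data section of configs/coco_cond_stage.yaml might be adjusted (main.DataModuleFromConfig is the data module used by main.py; the dataset targets below are hypothetical placeholders for your own classes, and the sizes are illustrative):

data:
  target: main.DataModuleFromConfig
  params:
    batch_size: 12            # illustrative; pick what fits your GPU
    num_workers: 8
    train:
      target: taming.data.my_dataset.MySegmentationTrain        # hypothetical, replace with your dataset
      params:
        size: 256
    validation:
      target: taming.data.my_dataset.MySegmentationValidation   # hypothetical
      params:
        size: 256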

@kampta

kampta commented Jun 14, 2021

I have a related question. It looks like the current config file sflckr_cond_stage.yaml resizes the images with SmallestMaxSize=256, so the model was essentially trained on smaller (resized) images. Was the provided model checkpoint trained with the same config? I'd imagine that in order to sample high-res images, we need to just crop the images without resizing.

@ali-design

Thank you for the great effort!
How can I train the conditional transformer when I want to condition the image on a vector (as opposed to a depth map or class label)? Specifically, what would the config look like?
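Not an official answer, but a hedged sketch of the direction: the transformer (taming.models.cond_transformer.Net2NetTransformer) conditions on a sequence of discrete tokens, so conditioning on a vector means writing a small cond stage module that maps your vector to such tokens and pointing cond_stage_config at it. The module name and parameters below are hypothetical:

model:
  target: taming.models.cond_transformer.Net2NetTransformer
  params:
    cond_stage_config:
      target: taming.modules.misc.vector_cond.VectorProvider   # hypothetical module you implement yourself
      params:
        vector_dim: 512   # dimensionality of your conditioning vector
        n_embed: 1024     # size of the discrete code space it maps into

The rest of the config (first_stage_config, transformer_config, data) would follow the existing conditional configs.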

@Kai-0515

Kai-0515 commented Sep 4, 2022

> Added config and loss to train the cond stage on segmentation maps (configs/coco_cond_stage.yaml and configs/sflckr_cond_stage.yaml). Optionally, you can also extract the cond stage weights from the transformer checkpoints,
>
> python scripts/extract_submodel.py logs/2021-01-20T16-04-20_coco_transformer/checkpoints/last.ckpt coco_cond_stage.ckpt cond_stage_model
>
> and fine-tune from there (you may need to adjust the data section of the config):
>
> python main.py --base configs/coco_cond_stage.yaml -t True --gpus 0, model.params.ckpt_path=coco_cond_stage.ckpt

Hello, how can I sample from the segmentation model? Sampling as described in the README doesn't work, because VQSegmentationModel has no attribute 'encode_to_z', but make_samples.py uses it.
Looking forward to your reply.

@Kai-0515

Kai-0515 commented Sep 4, 2022

> I am trying to train this on the Flickr30k dataset, and after 21 epochs the intermediate results haven't changed. The config is from the sflckr checkpoint. [Screenshots of intermediate results, 2021-02-04 at 19:38, 19:39, and 19:40]

Hello, could you please tell me how you sample from the segmentation model? Sampling as in the README fails for me, because VQSegmentationModel has no attribute 'encode_to_z'.
