
output of domain B network varies in color #144

Open
MengXinChengXuYuan opened this issue Apr 7, 2021 · 4 comments

@MengXinChengXuYuan commented Apr 7, 2021

Hi, I decided to start by training the domain B network first, because it seems to be the easiest part.
But I found that there are usually color shifts in the generated images, as mentioned in Section 3.4 (Face Enhancement) of the paper.

I'm confused about what causes this; in my opinion it should be very easy for the network to produce something exactly the same as the input.

Here are some results:
[Screenshots: three example reconstructions showing the color shift]

The output is usually lighter, and sometimes yellower, even though I added an L1 loss. I think this could make the final result a little uncontrollable.

Also, would it be possible to share the training logs of the provided pretrained weights?
Here's a snippet of mine:
```
(epoch: 82, iters: 30720, time: 0.011 lr: 0.00020) G_GAN: 0.855 G_GAN_Feat: 1.952 G_VGG: 2.144 G_KL: 0.915 D_real: 0.380 D_fake: 0.363 Smooth_L1: 0.083
(epoch: 82, iters: 33920, time: 0.011 lr: 0.00020) G_GAN: 0.815 G_GAN_Feat: 1.943 G_VGG: 2.100 G_KL: 0.917 D_real: 0.433 D_fake: 0.335 Smooth_L1: 0.082
(epoch: 82, iters: 37120, time: 0.011 lr: 0.00020) G_GAN: 0.819 G_GAN_Feat: 2.025 G_VGG: 2.185 G_KL: 0.935 D_real: 0.411 D_fake: 0.350 Smooth_L1: 0.095
(epoch: 82, iters: 40320, time: 0.011 lr: 0.00020) G_GAN: 0.802 G_GAN_Feat: 1.988 G_VGG: 2.160 G_KL: 0.926 D_real: 0.423 D_fake: 0.355 Smooth_L1: 0.083
```
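
To make the "lighter / yellower" observation concrete, here is a minimal sketch for measuring the shift (not code from this repo; `real` and `fake` are assumed to be (N, 3, H, W) RGB batches in [0, 1]):

```python
import torch

def channel_shift(real: torch.Tensor, fake: torch.Tensor) -> torch.Tensor:
    # Per-channel mean difference between output and input.
    # Positive values mean the output is brighter in that channel;
    # a larger shift in R and G than in B reads as "yellower".
    return (fake - real).mean(dim=(0, 2, 3))

# Example with synthetic data: simulate a uniformly "lighter" output.
real = torch.rand(4, 3, 256, 256)
fake = (real + 0.05).clamp(0, 1)
print(channel_shift(real, fake))  # roughly [0.05, 0.05, 0.05]
```

Tracking these three numbers over validation batches would show whether the shift is systematic or varies from run to run.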

@syfbme commented Apr 7, 2021

Maybe you should remove the L1 loss.
With an L1 loss, the model tends to generate colors close to the median of your training data.
By the way, what dataset do you use? VOC2012, or something else?
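
To see why, here is a toy sketch (not code from this repo): minimizing an L1 loss drives a free parameter toward the median of the targets, whereas an L2 loss would drive it toward the mean.

```python
import torch

# Toy "pixel intensities", skewed so that mean != median.
targets = torch.tensor([0.1, 0.2, 0.3, 0.4, 0.9])

x = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([x], lr=0.01)

for _ in range(2000):
    opt.zero_grad()
    loss = (x - targets).abs().mean()  # L1 loss
    loss.backward()
    opt.step()

print(x.item())               # converges near 0.3, the median
print(targets.mean().item())  # 0.38, where an L2 loss would settle
```

If the distribution of colors in the training set is skewed, that median pull can show up as a systematic color bias in the reconstructions.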

@MengXinChengXuYuan (Author)

> Maybe you should remove the L1 loss.
> With an L1 loss, the model tends to generate colors close to the median of your training data.
> By the way, what dataset do you use? VOC2012, or something else?

But without the L1 loss it's still the same in my experiments :(
I'm using FFHQ for now; in my case I only care about portraits.

@MengXinChengXuYuan (Author)

@raywzy @zhangmozhe
Hi, would it be possible to share the training logs of the provided weights? If you could also share some intermediate generated images from training, that would be great.
I just want to figure out whether my training configuration is OK, and how long it takes to get reasonable weights.

@hello-trouble

> @raywzy @zhangmozhe
> Hi, would it be possible to share the training logs of the provided weights? If you could also share some intermediate generated images from training, that would be great.
> I just want to figure out whether my training configuration is OK, and how long it takes to get reasonable weights.

Hello, I am very interested in this project. How can I download the datasets used in this paper? Thank you in advance.
