Wrong input for generating self.D in models/cgan_model.py line 44 #15
Comments
Thank you @Agchai52 for your careful code review. I checked the code you refer to. In Conditional GAN, the blurry image is the auxiliary information for the generator only, not for both networks. However, during this review I found another bug in the generator loss function. Thank you.
I understand the equation, and the condition on the blurry image is important in the discriminator as well. But then how do we calculate D(x, y)?
Meanwhile, the reason why I concatenated the generated image and the real image is to output two probabilities (the first and second terms in (1)).
However, I found that the reference DeblurGAN code you mentioned (PyTorch, original author) does not concatenate the blurry image into the input of the discriminator.
That's a very quick response! I just checked the link you shared. The problem is actually in DeblurGAN; you just followed their code. I didn't dive into their function, sorry about that (the PyTorch DeblurGAN is too painful to read, forgive me...). The way to use the auxiliary info is exactly what you showed: just concatenate the blurry image after the real and the generated images. To get D(x, y) and D(x, G(x, z)), we need to feed them into the discriminator separately, rather than together. You can check this link for more details: https://github.com/yenchenlin/pix2pix-tensorflow/blob/master/model.py This is the TF version of Image-to-Image.
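The pattern described above (pair each image with the blurry condition along the channel axis, then run the real pair and the fake pair through the discriminator in two separate calls) can be sketched as follows. This is a minimal NumPy illustration, not code from either repository; `toy_discriminator` is a hypothetical stand-in for the real convolutional discriminator.

```python
import numpy as np

def toy_discriminator(pair):
    # Hypothetical stand-in for the conv-net discriminator: maps each
    # (image, condition) pair in the batch to one scalar "probability".
    return 1.0 / (1.0 + np.exp(-pair.mean(axis=(1, 2, 3))))

batch, h, w, c = 2, 8, 8, 3
blurry = np.random.rand(batch, h, w, c)     # x, the image we condition on
sharp = np.random.rand(batch, h, w, c)      # y, the real sharp image
deblurred = np.random.rand(batch, h, w, c)  # G(x, z), the generator output

# Concatenate the condition along the CHANNEL axis, then feed the real
# pair and the fake pair through the discriminator SEPARATELY:
d_real = toy_discriminator(np.concatenate([sharp, blurry], axis=3))      # D(x, y)
d_fake = toy_discriminator(np.concatenate([deblurred, blurry], axis=3))  # D(x, G(x, z))
print(d_real.shape, d_fake.shape)  # one probability per image in the batch
```

This mirrors the pix2pix approach linked above: the discriminator always sees the condition x, and the real and generated pairs are never stacked into a single forward pass.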
Thank you. I didn't check the original pix2pix code carefully, but your comments are really helpful to me. You are right: most GAN models are implemented to feed the real and generated images to the discriminator separately.
So, DeblurGAN doesn't use a conditional GAN as claimed.
Thank you for your excellent work! I had been searching for tf version of DeblurGAN for months.
I found a bug in "models/cgan_model.py", Line 44:
self.D = discriminator(tf.concat([self.G, self.input['real_img']], axis=0))
which means that the input of the discriminator is "[deblurred image from G, sharp image]", where real_img = sharp image.
This affects Line 100, which computes the adversarial loss:
self.adv_loss = adv_loss(self.D)
However, in Kupyn's PyTorch code, "models/conditional_gan_model.py" Line 95, the code corresponding to Lines 44 and 100 is
self.loss_G_GAN = self.discLoss.get_g_loss(self.netD, self.real_A, self.fake_B)
which means the input of the discriminator is "[deblurred image from G, blurry image]", since self.real_A = blurry image and self.fake_B = deblurred image from G.
That is how Kupyn generates the adversarial loss.
So, the correct Line 44 should be:
self.D = discriminator(tf.concat([self.G, self.input['blur_img']], axis=0))
From the point of view of Conditional GAN, the blurry image is the auxiliary information for both generator and discriminator. In other words, the blurry image is the information we condition on.
The input of the discriminator should be [G(blurry), blurry] or [real, blurry].
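The difference between the buggy input and the conditional inputs can be shown with a small shape sketch. This is a hedged NumPy illustration (not the repository's TensorFlow code); the array names are placeholders for the tensors in `cgan_model.py`.

```python
import numpy as np

n, h, w, c = 4, 16, 16, 3
blurry = np.random.rand(n, h, w, c)      # condition x (blur_img)
sharp = np.random.rand(n, h, w, c)       # real y (real_img)
deblurred = np.random.rand(n, h, w, c)   # G(x), the generator output

# Buggy Line 44: stacks the fake and the SHARP image along the batch
# axis, so the discriminator never sees the condition x at all.
buggy_input = np.concatenate([deblurred, sharp], axis=0)
print(buggy_input.shape)  # doubled batch, original channel count

# Conditional-GAN inputs: pair each image with the blurry condition
# along the channel axis, giving [G(blurry), blurry] and [real, blurry].
fake_pair = np.concatenate([deblurred, blurry], axis=3)
real_pair = np.concatenate([sharp, blurry], axis=3)
print(fake_pair.shape, real_pair.shape)  # same batch, doubled channels
```

Pairing along the channel axis keeps the condition attached to every sample the discriminator scores, which is what makes D a conditional discriminator D(x, y).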