[Feature] Support Ajbrock SN and Update BigGAN with best FID and IS #159
Conversation
plyfager commented on Nov 11, 2021
- Support SN as implemented by ajbrock.
- Update BigGAN trained on ImageNet 128x128, which reaches the best FID (8.69) and IS (129.07).
TODOs:
configs/biggan/biggan_ajbrock-sn_imagenet1k_128x128_b32x8_1500k.py
    ajbrock (https://github.com/ajbrock/BigGAN-PyTorch/blob/master/layers.py)
    will be followed.
    If set to `torch`, implementation by `PyTorch` will be followed.
    Defaults to `torch`.
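For context, the option discussed in this docstring would be selected in a model config. The fragment below is a hypothetical sketch: the key name `sn_style` and the surrounding structure are assumptions for illustration, not confirmed by this thread.

```python
# Hypothetical MMGeneration-style config fragment. The key name `sn_style`
# and the surrounding model structure are assumptions for illustration only.
model = dict(
    type='BigGAN',
    generator=dict(
        type='BigGANGenerator',
        # 'ajbrock' follows BigGAN-PyTorch's spectral norm;
        # 'torch' follows PyTorch's built-in implementation.
        sn_style='ajbrock',
    ),
)
```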
Should we change the default value to `biggan`? The default value should always lead to the best performance. Since this may break backward compatibility, we should handle it carefully.
I set the default value to `ajbrock`.
    Args:
        W (torch.Tensor): Module weight.
        u_ (list[torch.Tensor]): List of left singular vectors.
You may revise the code copied from the other repo; these names are not good.
`W` --> `weight`, `u_` --> `u_list`.
        return svs, us, vs


    class SN(object):
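The `return svs, us, vs` line above comes from a power-iteration routine that estimates singular values and vectors of a weight matrix. As a rough illustration of what such a routine computes, here is a minimal NumPy sketch; the function name, shapes, and normalization details are assumptions, not the PR's actual code.

```python
import numpy as np

def power_iteration(weight, u_list, eps=1e-12):
    """One power-iteration step per tracked singular vector.

    Hypothetical NumPy sketch of an ajbrock-style routine: for each stored
    left vector `u`, refine the right vector `v` and left vector `u`, and
    estimate the corresponding singular value as u @ W @ v.
    """
    svs, us, vs = [], [], []
    for u in u_list:
        # v approximates a right singular vector of `weight`
        v = u @ weight
        v = v / max(np.linalg.norm(v), eps)
        # u approximates the matching left singular vector
        u = v @ weight.T
        u = u / max(np.linalg.norm(u), eps)
        us.append(u)
        vs.append(v)
        # Rayleigh-quotient estimate of the singular value
        svs.append(float(u @ weight @ v))
    return svs, us, vs
```

Iterating this update drives the first estimate toward the largest singular value of the weight, which is what spectral normalization divides by.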
I suggest using `nn.Module` here. Have you unit-tested this module independently?
`SN(object)` --> `SpectralNorm(nn.Module)`. Unit tests have been completed.
I kept `SpectralNorm` inheriting from `object`, since otherwise the initialization would be rather awkward.
Take `SNLinear` for example: the `super()` call inside `nn.Linear.__init__` would invoke the `__init__` of `SpectralNorm`, which would cause an error.
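The clash described above can be reproduced with plain Python classes. This is a minimal stand-in sketch: `Module` and `Linear` here only mimic how `torch.nn.Linear.__init__` calls `super().__init__()`; they are not the real PyTorch classes.

```python
class Module:
    """Stand-in for torch.nn.Module (hypothetical minimal version)."""
    def __init__(self):
        pass

class Linear(Module):
    """Stand-in for torch.nn.Linear: its __init__ calls super().__init__(),
    which is resolved through the *subclass's* MRO, not statically."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.in_features = in_features
        self.out_features = out_features

class SpectralNormModule(Module):
    """What happens if SpectralNorm inherits from (a) Module class."""
    def __init__(self, num_svs, num_itrs, num_outputs):
        super().__init__()
        self.num_svs = num_svs

class SNLinear(Linear, SpectralNormModule):
    def __init__(self, in_features, out_features, num_svs=1, num_itrs=1):
        # Linear.__init__'s internal super().__init__() follows SNLinear's MRO:
        # SNLinear -> Linear -> SpectralNormModule -> Module, so it lands on
        # SpectralNormModule.__init__ with no arguments -> TypeError.
        Linear.__init__(self, in_features, out_features)
        SpectralNormModule.__init__(self, num_svs, num_itrs, out_features)

try:
    SNLinear(4, 8)
except TypeError as e:
    print('MRO clash:', e)
```

Keeping the spectral-norm mixin derived from `object` removes it from the `Module` branch of the MRO, so `Linear`'s cooperative `super().__init__()` no longer lands on it, which matches the author's reasoning above.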
"""Get singular values.""" | ||
return [getattr(self, 'sv%d' % i) for i in range(self.num_svs)] | ||
|
||
def W_(self): |
In OpenMMLab, we prefer lowercase letters for function names.
`W_` --> `sn_weight`.
        eps (float, optional): Vector normalization epsilon for
            avoiding division by zero. Defaults to 1e-12.
        """
If we refer to other code, we should polish it rather than copy it directly.
You may need to resolve the conflict with the master branch; then I can merge this PR.
        with torch.no_grad():
            for i, sv in enumerate(svs):
                self.sv[i][:] = sv
        return self.weight / svs[0]
Please leave a comment here to specify why we use `svs[0]` rather than another element in `svs`.
I changed `svs[0]` to `svs[-1]` instead.
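For context, spectral normalization scales the weight so its largest singular value is approximately 1; with `num_svs=1`, the common case, the list holds a single estimate, so `svs[0]` and `svs[-1]` refer to the same element. A minimal sketch of the scaling step (names are illustrative, not the PR's actual code):

```python
import numpy as np

def sn_weight(weight, sigma_estimate):
    # Scale the weight so its top singular value is ~1. Illustrative helper;
    # in practice `sigma_estimate` comes from power iteration on `weight`.
    return weight / sigma_estimate

W = np.diag([4.0, 2.0])           # top singular value is 4
W_sn = sn_weight(W, 4.0)
print(np.linalg.norm(W_sn, 2))    # spectral norm of the scaled weight
```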
Codecov Report
@@ Coverage Diff @@
## master #159 +/- ##
==========================================
+ Coverage 75.52% 75.59% +0.06%
==========================================
Files 127 128 +1
Lines 8814 8927 +113
Branches 1748 1781 +33
==========================================
+ Hits 6657 6748 +91
- Misses 1719 1729 +10
- Partials 438 450 +12