This is a wonderful project!
I found a bug in the code. The word 'CosineAnealing' in the config file 'configs/selfup/moco/r50_v2.py' should be 'CosineAnnealing'; I think this bug stems from the spelling error that was fixed in mmcv.
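For reference, the corrected scheduler entry would look roughly like this. This is a sketch following the usual mmcv `lr_config` convention; only the policy string is the actual fix, and the other fields are illustrative:

```python
# Sketch of the corrected scheduler entry in configs/selfup/moco/r50_v2.py.
# 'min_lr' is an illustrative field; the fix is the policy string itself.
lr_config = dict(
    policy='CosineAnnealing',  # was misspelled 'CosineAnealing'
    min_lr=0.,
)
```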
I am also confused about the performance of SimCLR, which is much lower than that reported in the paper (e.g. 64.5% vs. 75.5%, the Top-5 accuracy for semi-supervised learning on ImageNet with 1% labels). Is this mainly because of the batch size (the batch size of your reproduction is only 256)?
Looking forward to your reply!
Thank you for the reminder. I have just fixed it.
In our implementation, the batch size is 256 and the total number of epochs is 200, while in SimCLR's full setting the batch size is 4096 and the number of epochs is 1000. Under our setting, the linear-classification result on ImageNet is consistent with what SimCLR officially reports. We are not able to run the full setting for now. In addition, SimCLR's evaluation setting differs from ours and also requires a large amount of computation. We set up a new evaluation protocol applicable to all methods for a fair comparison.
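To put the two training budgets side by side, here is a back-of-the-envelope comparison. The batch sizes and epoch counts come from the reply above; the ImageNet-1k training-set size (~1.28M images) is an assumption, and `optimizer_steps` is a hypothetical helper, not code from the repository:

```python
# Rough comparison of the reproduction's pretraining budget vs. SimCLR's
# full setting. IMAGENET_IMAGES is the assumed ImageNet-1k train-set size.
IMAGENET_IMAGES = 1_281_167

def optimizer_steps(batch_size: int, epochs: int) -> int:
    """Gradient updates = full batches per epoch * epochs (drop-last assumed)."""
    return (IMAGENET_IMAGES // batch_size) * epochs

repro_steps = optimizer_steps(256, 200)    # this reproduction
full_steps = optimizer_steps(4096, 1000)   # SimCLR paper's full setting

# The full setting sees 5x more images overall (1000 vs. 200 epochs) and
# contrasts 16x more in-batch negatives per step (4096 vs. 256), both of
# which matter for contrastive learning.
print(repro_steps, full_steps)
```

The point of the sketch is that batch size is not the only gap: total epochs (and hence total images seen) differ by 5x as well.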