
More information about the updated checkpoint of pgd trained Madry's model on cifar-10 #8

Closed
renqibing opened this issue Jan 14, 2021 · 3 comments


@renqibing

Hi, first thanks for your great work!

I'd like to know more about the updated checkpoint of the PGD-trained Madry model on CIFAR-10. Was this checkpoint saved after the full 76,000 training iterations were done? I ran a PGD-20 attack against your trained model (roughly the evaluation sketched below) and got 50.05% accuracy, while the MadryLab CIFAR-10 challenge leaderboard reports 47.04%. Is there any possible reason for such a difference?
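For reference, a minimal sketch of the kind of PGD-20 evaluation I have in mind (PyTorch; the ε = 8/255 and step size 2/255 values follow the usual CIFAR-10 challenge convention and are assumptions here, not necessarily your exact settings):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=20):
    # L-infinity PGD with a random start; inputs x are assumed to lie in [0, 1]
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        with torch.no_grad():
            correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.size(0)
    return correct / total
```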

Thanks for your attention. Looking forward to your reply.

@ylsung
Owner

ylsung commented Apr 7, 2021

Sorry for the late reply.

Yes, I use the last saved checkpoint. Regarding the accuracy difference, a lot of randomness affects the trained model, such as the weight initialization, the data ordering, and the randomness in the PGD attack during training, so I guess the difference comes from some of these factors. To get a more stable result, you could train multiple models with different seeds and average their accuracy, as in the sketch below.
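A minimal sketch of that seed-averaging idea; `train_model` and `evaluate_robust` are hypothetical placeholders for the repo's own training loop and PGD evaluation, not functions that exist in this codebase:

```python
import random
import numpy as np
import torch

def set_seed(seed):
    # fix the main sources of randomness mentioned above
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

accs = []
for seed in [0, 1, 2]:
    set_seed(seed)
    model = train_model(seed=seed)       # placeholder: repo's adversarial training loop
    accs.append(evaluate_robust(model))  # placeholder: PGD-20 evaluation
print(f"robust acc: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```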

@ylsung
Owner

ylsung commented Apr 8, 2021

By the way, Madry's implementation normalizes the inputs with a mean and standard deviation, which may also account for part of the difference.
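When normalization is involved, it is common to fold it into the model so the PGD budget stays defined in the original [0, 1] pixel space. A minimal sketch of that pattern; the mean/std values in the comment are the commonly quoted CIFAR-10 statistics, given only as an example rather than the exact values used in Madry's implementation:

```python
import torch
import torch.nn as nn

class NormalizedModel(nn.Module):
    """Wraps a backbone so it accepts raw [0, 1] images and normalizes internally."""
    def __init__(self, model, mean, std):
        super().__init__()
        self.model = model
        self.register_buffer("mean", torch.tensor(mean).view(1, 3, 1, 1))
        self.register_buffer("std", torch.tensor(std).view(1, 3, 1, 1))

    def forward(self, x):
        # x is in [0, 1]; normalization happens just before the backbone,
        # so the attacker's eps-ball is still measured in pixel space
        return self.model((x - self.mean) / self.std)

# Example usage with commonly used CIFAR-10 statistics (an assumption here):
# wrapped = NormalizedModel(backbone,
#                           mean=(0.4914, 0.4822, 0.4465),
#                           std=(0.2470, 0.2435, 0.2616))
```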

@ylsung
Owner

ylsung commented May 9, 2021

Closing due to inactivity.

@ylsung ylsung closed this as completed May 9, 2021