BlackBoxAttackDNN

Python implementation of a practical black-box attack against machine learning.

This is the technical report for the Neural Networks course held by Professor A. Uncini, PhD S. Scardapane and PhD D. Comminiello. The report covers Practical Black-Box Attacks against Machine Learning, the scientific paper by N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik and A. Swami. The work was carried out by Dr S. Clinciu and Dr R. Falconi while studying for the MSc in Engineering in Computer Science at Sapienza University of Rome.

The project's goal is to reproduce the first demonstration that black-box attacks against deep neural network (DNN) classifiers are practical for real-world adversaries with no knowledge of the model. We assume the adversary has no information about the structure or parameters of the DNN and no access to a large training dataset; it can only observe the labels assigned by the DNN to chosen inputs, in a manner analogous to a cryptographic oracle.
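As a rough illustration of this oracle-based strategy, the following is a minimal sketch of how an adversary labels self-generated inputs by querying the target DNN and grows its local training set with Jacobian-based dataset augmentation, as described by Papernot et al. Names such as `oracle_predict` and `substitute_jacobian` are hypothetical placeholders, not functions from this repository.

```python
import numpy as np

def query_oracle(oracle_predict, inputs):
    """Query the black-box target DNN (the 'oracle') and keep only its label decisions.
    `oracle_predict` is a hypothetical callable returning class probabilities;
    the adversary observes nothing but the resulting labels."""
    return np.argmax(oracle_predict(inputs), axis=1)

def jacobian_augmentation(substitute_jacobian, inputs, labels, lam=0.1):
    """One round of Jacobian-based dataset augmentation: new synthetic points are
    shifted along the sign of the substitute model's Jacobian so the augmented set
    better covers the oracle's decision boundaries."""
    grads = substitute_jacobian(inputs, labels)  # hypothetical gradient callable
    return np.concatenate([inputs, inputs + lam * np.sign(grads)], axis=0)
```

The substitute model trained on these oracle-labeled points is then used to craft adversarial examples that transfer back to the black-box target.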

Full description

The full project presentation is available at:
https://www.slideshare.net/RobertoFalconi4/blackbox-attacks-against-neural-networks-technical-project-presentation-148255145

About

Neural Networks exam project: implementation of the FGSM (Goodfellow et al.) and JSMA (Papernot et al.) adversarial attacks.
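For reference, here is a minimal PyTorch sketch of the FGSM attack mentioned above; the repository's own code may use a different framework, so treat this as an assumption rather than the project's implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method (Goodfellow et al.):
    x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)    # loss w.r.t. the true labels y
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # one step in the gradient-sign direction
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid [0, 1] range
```

In the black-box setting, the gradient is taken with respect to the locally trained substitute model, and the resulting adversarial examples are then submitted to the target oracle.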
