
Adversarial examples and attack models in machine learning


If you have a question about this talk, please contact Mani Bhesania.

An “adversarial example” is an input to a machine learning algorithm (e.g., an image) that is specially crafted to cause the algorithm to produce an incorrect output. I will give an overview of existing methods for computing adversarial examples and of efforts to detect or eliminate them. I will also describe an attack idea targeting the Amazon Web Services “Rekognition” API, which I am developing with Dan Fentham (UG student), and a defence idea based on synthetic labels, which I am developing with Bogdan Serban (MSc student). Finally, I will comment on the appropriateness of the attack models used in the machine learning literature.
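For readers unfamiliar with how adversarial examples are computed, the sketch below shows the fast gradient sign method (FGSM), one standard technique from this literature; the toy model, input shapes, and epsilon value are illustrative assumptions and are not taken from the talk.

import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon):
    """Perturb x by epsilon in the direction that increases the loss (FGSM)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the input gradient, then clamp to a valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical toy classifier on 28x28 grayscale "images" (illustrative only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a random "image"
label = torch.tensor([3])      # its (assumed) correct class

x_adv = fgsm_attack(model, x, label, epsilon=0.1)
print("max pixel change:", (x_adv - x).abs().max().item())

The perturbation is bounded by epsilon in each pixel, so the adversarial image can remain visually indistinguishable from the original while shifting the classifier's prediction.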

This talk is part of the Computer Security Seminars series.

