Tricking binary trees: The (in)security of machine learning

If you have a question about this talk, please contact Mani Bhesania.

Machine learning (ML) has become an important topic in cyber security in recent years, with numerous products being developed that feature ML algorithms to aid attack detection. Introducing ML into these systems makes them smarter and better at detecting attacks.

However, one thing that is often overlooked when deploying ML in security applications is the robustness of the algorithms themselves. Frequently, existing algorithms and frameworks are used that were not developed with an adversary in mind. What happens if an attacker doesn’t want to be classified? In this talk I will provide an overview of current academic work on attacking ML algorithms, and discuss some of the approaches that can be taken to make ML more secure in the face of an adversary. This work has been published in ACM Computing Surveys under the title “On the Security of Machine Learning in Malware C&C Detection: A Survey”.

This talk is part of the Computer Security Seminars series.
