Attacking Machine Learning: On the Security and Privacy of Neural Networks

  • Published: 15 May 2019
  • Nicholas Carlini, Research Scientist, Google
    Despite significant successes, machine learning has serious security and privacy concerns. This talk will examine two of these. First, how adversarial examples can be used to fool state-of-the-art vision classifiers (to, e.g., make self-driving cars incorrectly classify road signs). Second, how to extract private training data out of a trained neural network.Learning Objectives:1: Recognize the potential impact of adversarial examples for attacking neural network classifiers.2: Understand how sensitive training data can be leaked through exposing APIs to pre-trained models.3: Know when you need to deploy defenses to counter these new threats in the machine learning age.Pre-Requisites:Understanding of threats on traditional classifiers (e.g., spam or malware systems), evasion attacks, and privacy, as well as the basics of machine learning.
  • Science
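As a minimal illustration of the adversarial-example idea mentioned in the abstract, the sketch below shows the fast gradient sign method (FGSM), one standard way to perturb an image so a classifier mislabels it. The model, inputs, and epsilon value are hypothetical placeholders, and this is not necessarily the attack presented in the talk.

    # Sketch of the fast gradient sign method (FGSM) for crafting adversarial
    # examples. `model`, `image`, `label`, and `epsilon` are illustrative
    # placeholders, not taken from the talk.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        """Perturb `image` so the classifier's loss on `label` increases."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon,
        # and keep pixel values in the valid [0, 1] range.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

Running the classifier on the perturbed image will often produce a different label than on the original, even though the two images look nearly identical to a human; stronger optimization-based attacks build on the same gradient-guided idea.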
