Adversarial Examples for Deep Neural Networks

  • Published: 3 Jun 2020
  • A lecture on adversarial examples for deep neural networks. We discuss white-box attacks, black-box attacks, real-world attacks, and adversarial training, covering Projected Gradient Descent, the Fast Gradient Sign Method, Carlini-Wagner methods, Universal Adversarial Perturbations, Adversarial Patches, Transferability Attacks, Zeroth Order Optimization, and more. A short code sketch of FGSM and PGD follows the references below.
    This lecture is from Northeastern University's CS 7150 Summer 2020 class on Deep Learning, taught by Paul Hand.
    The notes are available at: khoury.northeastern.edu/home/h...
    References:
    Goodfellow et al. 2015:
    Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." International Conference on Learning Representations, 2015.
    Szegedy et al. 2014:
    Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199, 2014.
    Carlini and Wagner 2017:
    Carlini, Nicholas, and David Wagner. "Towards evaluating the robustness of neural networks." In 2017 IEEE Symposium on Security and Privacy, pp. 39-57. IEEE, 2017.
    Moosavi-Dezfooli et al. 2017:
    Moosavi-Dezfooli, Seyed-Mohsen, Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. "Universal adversarial perturbations." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1765-1773, 2017.
    Chen et al. 2017:
    Chen, Pin-Yu, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. "ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models." In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 15-26, 2017.
    Cheng et al. 2018:
    Cheng, Minhao, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. "Query-efficient hard-label black-box attack: An optimization-based approach." arXiv preprint arXiv:1807.04457, 2018.
    Liu et al. 2017:
    Liu, Yanpei, Xinyun Chen, Chang Liu, and Dawn Song. "Delving into transferable adversarial examples and black-box attacks." International Conference on Learning Representations, 2017.
    Brown et al. 2017:
    Brown, Tom B., Dandelion Mané, Aurko Roy, Martín Abadi, and Justin Gilmer. "Adversarial patch." arXiv preprint arXiv:1712.09665, 2017.
    Wu et al. 2019:
    Wu, Zuxuan, Ser-Nam Lim, Larry Davis, and Tom Goldstein. "Making an invisibility cloak: Real world adversarial attacks on object detectors." arXiv preprint arXiv:1910.14667, 2019.
    Sharif et al. 2016:
    Sharif, Mahmood, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter. "Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition." In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540, 2016.
    Eykholt et al. 2018:
    Eykholt, Kevin, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and Dawn Song. "Robust physical-world attacks on deep learning visual classification." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1625-1634, 2018.
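
    This page does not include the lecture's code, but as a rough illustration of two of the white-box attacks named in the description, the following is a minimal PyTorch sketch of the Fast Gradient Sign Method and Projected Gradient Descent. The function names, the epsilon and alpha step sizes, and the assumption that inputs are images scaled to [0, 1] are illustrative choices, not taken from the lecture.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Fast Gradient Sign Method (Goodfellow et al. 2015): take a single
        # step of size epsilon in the direction of the sign of the loss
        # gradient, which tends to increase the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = x + epsilon * grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range

    def pgd_attack(model, x, y, epsilon=0.03, alpha=0.01, steps=10):
        # Projected Gradient Descent: iterate small FGSM-style steps,
        # projecting back onto the L-infinity ball of radius epsilon
        # around the original input after each step.
        x_orig = x.clone().detach()
        x_adv = x_orig.clone()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)  # project
            x_adv = x_adv.clamp(0.0, 1.0)
        return x_adv.detach()

    Comparing a trained classifier's predictions on x and on pgd_attack(model, x, y) is the standard way these attacks are evaluated; adversarial training, also covered in the lecture, reuses the same inner loop to generate perturbed training examples.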

Comments • 10

  • @faelexxxx
    @faelexxxx A month ago

    Thank you for the clear explanation of these concepts!

  • @ipheiman3658
    @ipheiman3658 5 months ago +1

    This is amazing! Thank you for the clear explanation :)

  • @user-ve6il1tu7i
    @user-ve6il1tu7i 4 months ago

    very insightful

  • @supercat4742
    @supercat4742 2 years ago +2

    awesome survey on attacks, thanks!

  • @taozhang7696
    @taozhang7696 3 years ago +1

    Thanks for the course, which is easy to understand :)

  • @CRTagadiya
    @CRTagadiya 3 years ago +2

    Great explanation. Thanks, professor.

  • @alryabov
    @alryabov 8 months ago +1

    Why not just do data augmentation with some random noise?

  • @user-nb6vp9ii4c
    @user-nb6vp9ii4c 3 years ago +1

    Where is this code?