Nicholas Carlini - Some Lessons from Adversarial Machine Learning

  • Published: 18 Oct 2024
  • Nicholas Carlini from Google DeepMind on 'Some Lessons from Adversarial Machine Learning' at the Vienna Alignment Workshop. Key Highlights:
    • Challenges in developing robust defenses
    • Importance of learning from adversarial ML research
    • Need for clear problem definitions and effective evaluations
    The Alignment Workshop is a series of events convening top ML researchers from industry and academia, along with experts in the government and nonprofit sectors, to discuss and debate topics related to AI alignment. The goal is to enable researchers and policymakers to better understand potential risks from advanced AI, and strategies for solving them.
    If you are interested in attending future workshops, please fill out the following expression of interest form to get notified about future events: far.ai/futures...
    Find more talks on this YouTube channel, and at www.alignment-...

Comments • 2

  • @nikre • A month ago • +9

    Thanks for this interesting and honest talk. Robustness is not as profitable as growing a forest of cherries to pick.

  • @juliesteele5021 • 24 days ago • +2

    Nice talk! I disagree that adversarial robustness involves only one attack and therefore differs from other areas of computer security.
    Even once the simple PGD attack is defended against within a tight epsilon ball, you still can't say there is no adversarial image that breaks the model. Enumerating all possible attacks remains very difficult, if not impossible, for now.
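
For context, here is a minimal sketch of the L-infinity PGD attack the commenter refers to, assuming a PyTorch classifier `model` and a labeled input batch `x, y` with pixels in [0, 1] (all names and parameter values are illustrative, not from the talk):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeatedly step in the sign of the loss gradient,
    projecting back into the eps-ball around the clean input x."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                  # keep pixels valid
        x_adv = x_adv.detach()
    return x_adv
```

Even if a model resists this particular attack at a given eps, that rules out only one attack configuration; it does not certify that no adversarial example exists inside the ball, which is the commenter's point.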