Tricking Neural Networks: Explore Adversarial Attacks - Bernice Waweru

  • Published: 13 Sep 2024
  • Tricking Neural Networks: Explore Adversarial Attacks - Bernice Waweru - PyCon Italia 2024
    Elevator Pitch:
    Large Language Models are pretty cool, but we need to be aware of how they can be compromised. I will show how neural networks are vulnerable to attacks through an example of an adversarial attack on deep learning models in Natural Language Processing (NLP).
    Description:
    Large Language Models are pretty cool, but we need to be aware of how they can be compromised. I will show how neural networks are vulnerable to attacks through an example of an adversarial attack on deep learning models in Natural Language Processing (NLP).
    We'll explore the mechanisms used to attack models (a minimal example is sketched below), and you'll get a new way to think about the security of deep learning models.
    With the increasing adoption of deep learning models such as Large Language Models (LLMs) in real-world applications, we should consider the security and safety of these models. To address these security concerns, we need to understand the models' vulnerabilities and how they can be compromised. After all, it is hard to defend yourself when you don't know you are under attack.
    You will gain the most out of this session if you have worked with deep learning models before.
    Learn more: 2024.pycon.it/...
    #Security #MachineLearning #DeepLearning #NaturalLanguageProcessing
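
To give a flavor of the kind of attack the talk covers, here is a minimal sketch of a character-level adversarial attack on a text classifier: it greedily applies tiny character swaps to the input words until the predicted sentiment label flips. It assumes the Hugging Face `transformers` package and its default sentiment-analysis pipeline are available; the greedy perturbation strategy is only illustrative (in the spirit of attacks such as DeepWordBug) and is not the speaker's exact method.

```python
# Illustrative sketch: character-swap adversarial attack on a sentiment classifier.
# Assumes the Hugging Face `transformers` package; the default model is downloaded
# on first use. This is not the speaker's exact implementation.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")


def perturb_word(word: str) -> str:
    """Swap two adjacent characters -- a tiny perturbation a human barely notices."""
    if len(word) < 4:
        return word
    mid = len(word) // 2
    chars = list(word)
    chars[mid], chars[mid + 1] = chars[mid + 1], chars[mid]
    return "".join(chars)


def greedy_attack(text: str) -> str:
    """Perturb one word at a time until the classifier's predicted label flips."""
    original_label = classifier(text)[0]["label"]
    words = text.split()
    for i in range(len(words)):
        candidate_words = list(words)
        candidate_words[i] = perturb_word(words[i])
        candidate = " ".join(candidate_words)
        if classifier(candidate)[0]["label"] != original_label:
            return candidate          # adversarial example found
        words = candidate_words       # keep the perturbation and continue
    return " ".join(words)            # no label flip found; return perturbed text


if __name__ == "__main__":
    text = "The movie was absolutely wonderful and the acting was superb."
    adversarial = greedy_attack(text)
    print("original: ", text, classifier(text)[0])
    print("perturbed:", adversarial, classifier(adversarial)[0])
```

The point of the sketch is that each edit is small enough for a human reader to still understand the sentence, which is exactly what makes such perturbations hard to spot in deployed NLP systems.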
