Deep Backdoors in Deep Reinforcement Learning Agents

  • Published: 30 Jan 2025
  • Deep Reinforcement Learning (DRL) is revolutionizing industries by enabling AI agents to make critical decisions at superhuman speeds, impacting areas like autonomous driving, healthcare, and cybersecurity. However, this groundbreaking technology also introduces a new frontier of threats as these agents, often assumed to be benign, can be compromised through outsourced training or models downloaded from online repositories.
    Join us for an eye-opening exploration into the hidden dangers of DRL backdoors. Discover how the demanding nature of DRL training and the opaque nature of AI models create vulnerabilities to supply chain attacks, leaving users defenseless against covert threats. We will unveil the sophisticated methods adversaries can use to embed backdoors in DRL models, showcasing practical demonstrations that start with simpler scenarios and escalate to high-stakes environments.
    In this session, we'll dive into the world of DRL backdoors, exposing how they are stealthily embedded and activated. Witness firsthand how attackers can compromise even advanced systems while evading detection. Finally, learn which techniques can detect and neutralize these backdoors in real time, empowering operators to act swiftly and prevent catastrophic outcomes. Don't miss this critical briefing on securing the future of AI-driven technologies.
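    To make the threat model concrete, the sketch below illustrates the general idea of a trigger-activated backdoor in a policy. This is a generic toy illustration, not the speakers' method: the trigger pattern, action values, and the simple greedy policy are all invented for exposition. In a real attack the trigger condition would be learned into the network's weights during poisoned training rather than written as an explicit branch.

    ```python
    import numpy as np

    # Hypothetical trigger: a specific pattern in the first three observation features.
    TRIGGER = np.array([0.9, 0.9, 0.9])

    def backdoored_policy(obs: np.ndarray) -> int:
        """Toy policy: behaves benignly unless the trigger appears in the observation."""
        if np.allclose(obs[:3], TRIGGER, atol=0.05):
            return 99  # attacker-chosen (unsafe) action
        return int(np.argmax(obs))  # benign greedy behaviour on clean inputs

    clean_obs = np.array([0.1, 0.5, 0.2, 0.8])      # no trigger: normal action
    triggered_obs = np.array([0.9, 0.9, 0.9, 0.8])  # trigger embedded: backdoor fires
    ```

    Because the agent's behaviour on clean inputs is indistinguishable from a benign model, standard performance evaluation will not reveal the backdoor, which is why the detection techniques discussed in the session focus on runtime monitoring rather than pre-deployment testing alone.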
    By:
    Vasilios Mavroudis | Principal Research Scientist, The Alan Turing Institute
    Jamie Gawith | Assistant Professor, University of Bath
    Sanyam Vyas | AI Security PhD Candidate, Cardiff University
    Chris Hicks | Co-lead, AICD Research Centre, Alan Turing Institute
    Full Abstract and Presentation Materials:
    www.blackhat.c...
