Hypnotized AI and Large Language Model Security

Поделиться
HTML-код
  • Опубликовано: 11 сен 2024
  • Read Chenta Lee's article → ibm.biz/hypnot...
    Explore IBM watsonx → ibm.biz/explor...
    Large language models (LLMs) are awesome, but pose a potential cyber threat due to their capacity to generate false responses and follow hidden commands. In a two-part discussion with Chenta Lee from the IBM Security team, it first delves into prompt injection, where a malicious actor can manipulate LLMs into creating false realities and potentially accessing unauthorized data. In the second part, Chenta provides more details and explains how to address these potential threats.
    Get started for free on IBM Cloud → ibm.biz/ibm-cl...
    Subscribe to see more videos like this in the future → ibm.biz/subscri...
    #ai #llm #cybersecurity

Комментарии • 8