Can We Jailbreak ChatGPT & Make It Do Whatever We Want 😱 | Red Teaming Prompts | Past Tense Attack

  • Published: 27 Oct 2024

Comments • 7

  • @anybodycanprompt  3 months ago  +2

    *Disclaimer & Ethics Statement:* This video contains examples of *harmful language.* Viewer discretion is recommended. This video is intended for raising awareness of the jailbreaking problem by showing illustrative examples. Any misuse is strictly prohibited. This research was conducted in a controlled setting to improve AI safety. The AI models mentioned may have already been updated to address these issues.

  • @MonicaGupta  3 months ago  +1

    👍👍

  • @Wheelykool  3 months ago  +1

    This is mind-blowing! AI safety is trickier than I thought 🤯

  • @SaahilGupta-iy7gk  3 months ago  +1

    Scary stuff. Hope the big tech companies are watching this!

  • @AICritique  3 months ago  +1

    I'm studying CS, and this is exactly why ethics courses are so important.

  • @altelity  3 months ago  +1

    So basically, we can hack AI by making it a history teacher? 😅

  • @flowmantra  3 months ago  +1

    I wonder if this works on Alexa or Siri too? 🤔