Thank you for posting this. It kills me that the overall response has been rather sparse. If each prompt were segmented (or isolated) that could mitigate this, but then the history would be needed for chaining plugin events (...as in give me a recipe, put it in Instacart, etc.) but your idea of establishing a security contract for plugins is a must - one million percent!
Thanks for the comment. See the post about Cross Plug-in Request Forgery: embracethered.com/blog/posts/2023/chatgpt-webpilot-data-exfil-via-markdown-injection/
Thanks for watching and commenting! 😀 As mentioned in the description, it was reported to OpenAI in early April, but it was not considered a security vulnerability, even though it has a CVSS score of High. Their Bugcrowd program didn't exist at the time.
+1 for the Sneakers reference 🔭
Indeed! Thanks for watching! 🙂
Hmm, real JS XSS might be interesting here... Could perhaps lead to SSRF on the side of OpenAI 😅 - cool finding 👌🏻
Awesome, curious whether it can execute other elements.
Report it to make money at Bugcrowd.