LLM Vulnerability Scanning with garak. Tutorial: Test your own chat bots!

Поделиться
HTML-код
  • Опубликовано: 22 авг 2024

Комментарии • 12

  • @tristanmartin49
    @tristanmartin49 Месяц назад +1

    Thank you for the well articulated and educational content :)

  • @LeonDerczynski
    @LeonDerczynski 2 месяца назад +1

    Beautiful. Thank you!

    • @embracethered
      @embracethered  2 месяца назад

      Thanks! Hope it's useful and helps some to get started! 🙂

  • @cyberprotec
    @cyberprotec 29 дней назад +1

    Thanks for this content. Will you be able to assist with setting up a GPU Env for Garak Scan? Been working on this for a while. EC2 with ML AMI?

    • @embracethered
      @embracethered  28 дней назад

      Hey thanks for watching! What is the issue you are running into with these AMIs? A good suggestion might also be to join the garak discord to see if anyone has experience with EC2 and ML AMIs - lot's of helpful folks there.

    • @cyberprotec
      @cyberprotec 28 дней назад

      @@embracethered Thanks for the feedback. I am on the discord. I have dropped this but it seems like no one is doing that.
      I am trying to build a prod integration with our Jira in a way that when Devs request for a model via jira, a workflow kickes in, API gateway will collect the model name and use a Lambda to trigger Garak on the instance, scan the model, then export a zip of the report to Jira and Slack.
      I have part of the integration just running into issues setting up garak to use the GPUs on the instance. I have checked the GPUs [lsmod | grep nvidia] [nvidia-smi] they are running but Garak is not using them. It would rather use CPU and Memory. There are 4 GPUs with a total of 98gb memory. Garak attempts to use 1 and once the memory on the single GPU max out [˜23gb], the garak process crashes.

  • @nickbritt
    @nickbritt 2 месяца назад +1

    Great walk through I’ve been following along for a while now, the ascii smuggling tool is great. Is there a public place where we could try this tool out via a bug bounty program or similar? When I looked at the in scope items hallucinations and attacks like DAN are out of scope.

    • @embracethered
      @embracethered  2 месяца назад +1

      Thanks for watching! 🙏
      Great question, it depends on program. bug bounty programs (and industry at large) are a bit behind when it comes to considering novel LLM appsec issues, and there end to end but also long term implications.
      I often have lengthy threads with companies behind the scenes to help educate and explain, and it always starts with "not applicable", "model safety" issue,... and eventually turns into a fix/improvements - including a few findings about ASCII smuggling I hope to share in coming weeks/months.
      To explore and research i often create small toy apps myself to debug and help understand what could go wrong.
      Again, thanks for watching and let me knowing there is any specific topic you'd like me to cover in future?

    • @nickbritt
      @nickbritt 2 месяца назад +1

      @@embracethered honestly I’ll be reading the blog and watching regardless. I really enjoy the data exfiltration techniques you shared. But I guess anything that would carry more impact to anyone implementing AI in their web application’s or areas that you deem has the most impact on the underlying models.

  • @Sumukh30
    @Sumukh30 Месяц назад

    I have 4 rest apis, 1st api request has injection point and 4th api has response for tool to analyse.. in this case how to write config.json file? Does tool support multiple requests

    • @embracethered
      @embracethered  Месяц назад

      Garak supports creating custom generators for dialogue based systems - I think for what you describe that's probably best. Search for garak.generators.base in documentation. Hope that helps.