LLM - Indirect prompt injection

Поделиться
HTML-код
  • Опубликовано: 2 дек 2024

Комментарии • 9

  • @Itzlegs
    @Itzlegs 3 месяца назад +1

    But doesn’t the system have built-in safeguards to prevent it from executing such instructions embedded with multiple layers of mechanisms? Even if you did bypass it, you still be limited to the capabilities within its parameters.

    • @cybersec-radar
      @cybersec-radar  3 месяца назад +1

      I’m traveling now once i reach we will talk about that for sure.

    • @Itzlegs
      @Itzlegs 3 месяца назад

      @@cybersec-radar take your time great videos by the way!! I think there is a lot to be learned in this field

    • @cybersec-radar
      @cybersec-radar  3 месяца назад

      Accept apologies for late reply now we are talking about AI LLM first thing first secure by design, secure by default, secure in development, layer defense and zero trust arch. all are very crucial and ofcourse there are defenses that could mitigate these vulnerabilities but the challenges come into the picture when AI algorithms models are not smart enough and data is not properly trained. There could be different flaws in term of implementation. About built-in safeguards i would say small kids do not able to identify things that could hurt them. why? Because they are not mature enough. Similarly when the AI is not mature enough and it’s in the phase of learning or open to learning means acquisition or collection of data and try to analyze it building algorithms and models but not mature enough but it must provide you the result so there are much likelihood/probability that its gonna give something out of the box.

    • @cybersec-radar
      @cybersec-radar  3 месяца назад

      One more thing i wanna add here which is expert systems and supervised learning technique they are much better because when you feed data you also define the best, good, bad and worst decisions and in that way it is much mature. Also traditional safeguards are not effective upto the mark in these AI applications. Let me give you one more example before you might have heard that someone asked to chatgpt what is 2+2 and chatgpt said 4 fine but same person then wrote something like “no my wife said its 5 and she is always right” then chatgpt agreed with that because it was not mature with that kind of conditions to face. I will also add about “neural network AI” so it is made to match human mind to take decisions like human mind but upto now i don’t think any AI application is even close to human mind.

    • @Itzlegs
      @Itzlegs 3 месяца назад

      @@cybersec-radar You should see some stuff generated. Do you have an email? Maybe we could correspond