Tanya presents "Detecting Hallucinations in Large Language Models using Semantic Entropy"

  • Published: 11 Sep 2024

Comments • 8

  • @user-tq6hj8bh9y
    @user-tq6hj8bh9y 27 days ago +1

    Isn't a better solution a specialized agent that scans the answer of the larger LLM for bs, instead of training the model to detect its own bs?

    • @Skeleman
      @Skeleman 26 days ago +1

      I agree.
      The LLM is like Broca's area: it has generative grammar and semantic categories, but there should be a separate model that checks relevant corpora for agreement.
      The only issue would be the large energy and time costs at runtime, hence why they try to do both in the LLM, I think.

    • @Luixxxd1
      @Luixxxd1 22 days ago

      Then wouldn't that make the whole tool redundant?

    • @user-tq6hj8bh9y
      @user-tq6hj8bh9y 22 days ago

      @@Luixxxd1 Pretty much... It's more efficient to run a specialized agent to check math when needed than to blob everything together at runtime.

  • @AnshumanAwasthi-kd7qx
    @AnshumanAwasthi-kd7qx 29 days ago

    Are you pursuing LLM security as a research aim?

    • @nPlan
      @nPlan 21 days ago

      We are performing research on using LLMs, and part of that research is safety related, i.e. making sure that our models are truthful, useful, and robust to adversarial attacks. However, we are not focused on security specifically.

    • @AnshumanAwasthi-kd7qx
      @AnshumanAwasthi-kd7qx 21 days ago

      @@nPlan Good, my thesis is on LLM security and I was looking for any master's/PhD researcher for a possible collaboration of sorts.