LM Studio + AnythingLLM: Process Local Documents with RAG Like a Pro!

  • Published: Nov 24, 2024

Comments • 7

  • @Jzguan (5 days ago, +1)

    Omg, I've found you!!!! I've been searching all over the net, and none of it was legit. Yours is true value.

    • @coding-ai-now (5 days ago)

      Thank you for such encouraging words. It makes the effort worth it.

    • @Jzguan (5 days ago, +1)

      @@coding-ai-now Please don't give up.
      Can't wait for more. I'm learning and implementing what you have taught. I was lost... now I'm clear =)

  • @jordonkash (25 days ago)

    It seems the retrieval worked flawlessly even though you used the smaller Llama 3.2 3B in LM Studio. Do you see any RAG performance difference with larger models?

    • @coding-ai-now (4 days ago)

      I haven't seen a big difference, but I haven't done a lot of testing to compare either.

  • @KenBob52 (a month ago)

    It would be great if your monitor were in focus; then we could see what you are doing.
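
Following up on the @jordonkash thread above: one way to check whether a larger model changes RAG answer quality is to send the same retrieved context to two different models through LM Studio's OpenAI-compatible local server and compare the answers. The sketch below is not from the video; it assumes the openai Python package, LM Studio's default server address (http://localhost:1234/v1), and placeholder model IDs, context, and question.

    from openai import OpenAI

    # LM Studio's local server speaks the OpenAI API; the key can be any string.
    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    # Stand-ins for chunks retrieved by AnythingLLM (or any retriever) from your documents.
    context = "Chunk 1: ...\nChunk 2: ..."
    question = "What does the document say about X?"

    # Hypothetical model IDs; use the identifiers LM Studio shows for the models you loaded.
    for model in ["llama-3.2-3b-instruct", "llama-3.1-8b-instruct"]:
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Answer only from the provided context; say so if the answer is not there."},
                {"role": "user",
                 "content": f"Context:\n{context}\n\nQuestion: {question}"},
            ],
            temperature=0.2,  # keep the answer close to the retrieved text
        )
        print(f"--- {model} ---")
        print(response.choices[0].message.content)

Keeping the temperature low and instructing the model to answer only from the supplied context makes any difference in grounding between the small and large model easier to spot.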