How to Implement RAG locally using LM Studio and AnythingLLM

  • Published: 28 May 2024
  • This video shows a step-by-step process for implementing a RAG pipeline locally with LM Studio and AnythingLLM, using a local model, offline and for free.
    🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahdmirza
    🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
    bit.ly/fahd-mirza
    Coupon code: FahdMirza
    ▶ Become a Patron 🔥 - / fahdmirza
    #lmstudio #anythingllm
    PLEASE FOLLOW ME:
    ▶ LinkedIn: / fahdmirza
    ▶ YouTube: / @fahdmirza
    ▶ Blog: www.fahdmirza.com
    RELATED VIDEOS:
    ▶ Resources: lmstudio.ai, useanything.com/
    All rights reserved © 2021 Fahd Mirza
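The pipeline in the description can be sketched in code: LM Studio serves the downloaded model through a local OpenAI-compatible HTTP API (by default at http://localhost:1234/v1), and a RAG front end such as AnythingLLM retrieves relevant document chunks and prepends them to the prompt. The minimal sketch below assumes the default port and an already-running server; the prompt wording and function names are illustrative, not taken from the video.

```python
import json
import urllib.request

# LM Studio's local server default base URL (assumption: default port 1234).
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_rag_request(question, retrieved_chunks):
    """Build an OpenAI-style chat payload with the retrieved document
    chunks prepended as context -- the core of the RAG pattern."""
    context = "\n\n".join(retrieved_chunks)
    return {
        "messages": [
            {"role": "system",
             "content": "Answer using only this context:\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

def ask(question, retrieved_chunks):
    """Send the request to the locally running LM Studio server.
    Requires LM Studio's server to be started first."""
    payload = json.dumps(build_rag_request(question, retrieved_chunks)).encode()
    req = urllib.request.Request(
        LM_STUDIO_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

AnythingLLM performs the retrieval-and-prompting step for you; the sketch only shows the shape of the final request that reaches the local model.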

Comments • 27

  • @FrankSchwarzfree 13 days ago +1

    Great video TY!

    • @fahdmirza 12 days ago

      You're very welcome! Please also subscribe if you haven't already, thanks.

  • @publicsectordirect982 1 month ago +1

    Another useful and informative video, thank you.

  • @bomsbravo 1 month ago

    You are my favorite YouTuber! This is amazing. I got to know about LM Studio through you and now I am going to try this out. I was trying to RAG llama3 but ran into a lot of errors. Since this is a simpler method, I should finally be able to chat with my PDFs.

  • @Alex29196 1 month ago +2

    In my opinion, the latest release of Msty is much more functional and has a better UI. AnythingLLM's advantage is that it connects to LM Studio.

    • @maximt1401 1 month ago

      I have a large JSON file I would like to extract insights from.
      What is going to be the best way to do this? Msty + which LLM?

    • @fahdmirza 1 month ago +2

      I have also covered that today, thanks.
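Whatever tool is used for the JSON question above, the usual first step is the same: a large JSON file has to be flattened into small text chunks before a RAG tool can embed and retrieve from it. The sketch below is a generic illustration, not a feature of Msty or AnythingLLM; the function names and chunk size are hypothetical.

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten nested JSON into 'path: value' lines,
    so each fact becomes a small retrievable unit."""
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            lines.extend(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            lines.extend(flatten(value, f"{prefix}{i}."))
    else:
        lines.append(f"{prefix.rstrip('.')}: {obj}")
    return lines

def chunk(lines, max_chars=1000):
    """Group flattened lines into chunks under max_chars each,
    ready to be embedded by a RAG tool."""
    chunks, current = [], ""
    for line in lines:
        if current and len(current) + len(line) + 1 > max_chars:
            chunks.append(current)
            current = ""
        current += line + "\n"
    if current:
        chunks.append(current)
    return chunks

data = json.loads('{"users": [{"name": "Ana", "age": 30}]}')
print(chunk(flatten(data)))
```

The resulting chunks can then be dropped into any of the tools discussed here as plain-text documents.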

  • @skillsandhonor4640 1 month ago

    Thank you.

  • @SolidBuildersInc 1 month ago

    Very nice solution for RAG using a local model.
    I was attempting to do this with Streamlit, but this appears to be a very clean approach.
    How can we use Colab to point to a public URL with Localtunnel?
    I seem to have a challenge getting that working.
    Thanks for sharing.

  • @bhawanirathore3105 1 month ago +1

    Sir, nice video. Can you please tell me what benchmarks are used to measure an LLM's performance and compare it with other LLMs in terms of performance and privacy?

    • @fahdmirza 1 month ago

      I have done a few videos on benchmarks, please search the channel.

  • @JohnPamplin 20 days ago

    I'm trying to hook AnythingLLM into a Slack chatbot, because you can use multiple models for docs and websites (even Google search, I think). While LM Studio has a server port, I don't think AnythingLLM does, does it?

    • @fahdmirza 20 days ago

      I would need to check.
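On the question of server ports: LM Studio's local server listens on a TCP port (1234 by default), so whether it is up can be checked with a plain socket probe before wiring anything into a chatbot. The sketch below is a generic connectivity check under that default-port assumption, not an AnythingLLM API; whether AnythingLLM exposes its own endpoint would need to be confirmed in its documentation.

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP service (e.g. LM Studio's local API
    server on its default port 1234) is accepting connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check whether LM Studio's server is running locally.
print(port_open("localhost", 1234))
```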

  • @user-ms2ss4kg3m 1 month ago +2

    Do you think AnythingLLM is safe?

    • @fahdmirza 1 month ago +1

      Sorry, you would need to do your own due diligence.

  • @longboardfella5306 1 month ago +1

    It's been reported that AnythingLLM has a critical security flaw. Just FYI.

    • @fahdmirza 1 month ago

      Thanks for the info. Could you please also give a link to the source of this?

    • @longboardfella5306 1 month ago +2

      @@fahdmirza It was a Medium report titled: A Critical Vulnerability at AnythingLLM - Understanding and Mitigating CVE-2024-0765

    • @fahdmirza 1 month ago

      @@longboardfella5306 Thanks, that's very helpful. github.com/advisories/GHSA-f7cx-hq8m-95w6

    • @publicsectordirect982 1 month ago

      Thanks for sharing 👍

    • @shatfield 8 days ago +2

      Hey there, this vulnerability was actually patched months ago; the Medium report describes how it was fixed :)