How to Implement RAG locally using LM Studio and AnythingLLM
- Published: 28 May 2024
- This video shows a step-by-step process to locally implement RAG Pipeline with LM Studio and AnythingLLM with local model offline and for free.
🔥 Buy Me a Coffee to support the channel: ko-fi.com/fahdmirza
🔥 Get 50% Discount on any A6000 or A5000 GPU rental, use following link and coupon:
bit.ly/fahd-mirza
Coupon code: FahdMirza
▶ Become a Patron 🔥 - / fahdmirza
#lmstudio #anythingllm
PLEASE FOLLOW ME:
▶ LinkedIn: / fahdmirza
▶ YouTube: / @fahdmirza
▶ Blog: www.fahdmirza.com
RELATED VIDEOS:
▶ Resources: lmstudio.ai, useanything.com
All rights reserved © 2021 Fahd Mirza
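The pipeline in the video wires AnythingLLM to LM Studio's local server, which exposes an OpenAI-compatible endpoint (default http://localhost:1234/v1). A minimal Python sketch of the kind of RAG-style request that flows through that setup; the model name and retrieved chunks here are stand-ins:

```python
import json

# LM Studio's local server default; change if you picked another port.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_rag_request(question, retrieved_chunks):
    """Build an OpenAI-style chat request that grounds the answer
    in retrieved document chunks (the core of a RAG prompt)."""
    context = "\n\n".join(retrieved_chunks)
    return {
        # LM Studio serves whichever model is loaded; the name is informational.
        "model": "local-model",
        "messages": [
            {"role": "system",
             "content": "Answer only from this context:\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
    }

payload = build_rag_request(
    "What does the document say about ports?",
    ["LM Studio's server listens on port 1234 by default."],
)
print(json.dumps(payload, indent=2))
```

POST this body as JSON to LM_STUDIO_URL while LM Studio's server tab is running; AnythingLLM does this wiring (plus the retrieval step) for you.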
Great video TY!
You're very welcome! Please also subscribe if you haven't already. Thanks!
Another useful and informative video thank you
cheers, thanks
You are my favorite YouTuber! This is amazing. I got to know about LM Studio through you, and now I am going to try this out. I was trying to RAG with Llama 3 but ran into a lot of errors. Since this is a simpler method, I should finally be able to chat with my PDFs.
Yay! Thank you!
In my opinion, the latest release of Msty is much more functional and has a better UI. AnythingLLM's advantage is that it connects to LM Studio.
I have a large JSON file I would like to extract insights from.
What is going to be the best way to do this? Msty plus which LLM?
I have also covered it today. thanks
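On the large-JSON question: one workable approach, before uploading into a RAG tool like AnythingLLM or Msty, is to flatten the file into "path: value" lines and group those into small chunks for embedding. A hedged sketch; the sample data and chunk size are arbitrary:

```python
# Flatten nested JSON into embeddable text chunks (a generic sketch,
# not any particular tool's importer).

def flatten(obj, prefix=""):
    """Yield 'path: value' lines for every leaf in nested JSON data."""
    if isinstance(obj, dict):
        for key, val in obj.items():
            yield from flatten(val, f"{prefix}{key}.")
    elif isinstance(obj, list):
        for i, val in enumerate(obj):
            yield from flatten(val, f"{prefix}{i}.")
    else:
        yield f"{prefix.rstrip('.')}: {obj}"

def chunk_lines(lines, max_chars=800):
    """Group flattened lines into chunks under max_chars each."""
    buf, size = [], 0
    for line in lines:
        if size + len(line) > max_chars and buf:
            yield "\n".join(buf)
            buf, size = [], 0
        buf.append(line)
        size += len(line) + 1
    if buf:
        yield "\n".join(buf)

data = {"users": [{"name": "Ada", "role": "admin"}]}
chunks = list(chunk_lines(flatten(data)))
print(chunks[0])
```

Each chunk can then be saved as a small text file and dropped into AnythingLLM's document uploader, where any local model served by LM Studio can answer over it.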
thank you
You're welcome
Very nice solution for RAG and using a local model.
I was attempting to do this with Streamlit, but this appears to be a very clean approach.
How can we use Colab to point to a public URL with Localtunnel?
I seem to have a challenge getting that working.
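On the Colab/Localtunnel question: the usual pattern is to run `npx localtunnel --port <port>` in a background cell and grab the public URL it prints. A small sketch of parsing that output; the "your url is:" line is localtunnel's standard output, and the example hostname is made up:

```python
import re

def parse_tunnel_url(output_line):
    """Pull the public URL out of localtunnel's 'your url is: ...' line."""
    match = re.search(r"your url is:\s*(https://\S+)", output_line)
    return match.group(1) if match else None

# Example of the line localtunnel prints once the tunnel is up:
print(parse_tunnel_url("your url is: https://shiny-cat-12.loca.lt"))
```

Point your client at the returned URL instead of localhost; the tunnel only works while the Colab cell running localtunnel stays alive.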
Thanks for Sharing
thanks
Sir, nice video. Can you please tell me what benchmarks are used to measure an LLM's performance and compare it with other LLMs in terms of performance and privacy?
I have done a few videos on benchmarks; please search the channel.
I'm trying to hook AnythingLLM into a Slack chatbot, because you can use multiple models for docs and websites (even Google search, I think). While LM Studio has a server port, I don't think AnythingLLM does, does it?
would need to check
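For what it's worth, AnythingLLM does ship a developer API you can enable from its settings, though the exact endpoints should be verified against its own documentation. A hypothetical sketch of forwarding a Slack message to a workspace chat endpoint; the URL path, port, workspace slug, and key below are all assumptions to check:

```python
import json

# Assumptions to verify against AnythingLLM's API docs:
ANYTHINGLLM_URL = "http://localhost:3001/api/v1/workspace/my-slack-bot/chat"
API_KEY = "YOUR-ANYTHINGLLM-API-KEY"  # generated in AnythingLLM's settings

def build_chat_call(slack_message):
    """Build headers + body for forwarding a Slack message to AnythingLLM."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {"message": slack_message, "mode": "chat"}
    return headers, body

headers, body = build_chat_call("Summarize yesterday's uploaded report.")
print(json.dumps(body))
```

A Slack bot would POST this to ANYTHINGLLM_URL and relay the response, letting AnythingLLM handle the retrieval over docs and websites while LM Studio serves the model underneath.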
Do you think AnythingLLM is safe?
Sorry, you would need to do your own due diligence.
It's been reported that AnythingLLM has a critical security flaw. Just fyi
thanks for info. could you please also give link to the source of this info?
@@fahdmirza It was a Medium report titled: A Critical Vulnerability at AnythingLLM - Understanding and Mitigating CVE-2024-0765
@@longboardfella5306 Thanks, that's very helpful. github.com/advisories/GHSA-f7cx-hq8m-95w6
Thanks for sharing 👍
Hey there, this vulnerability was actually patched months ago, and that Medium article was a write-up of how the vulnerability was fixed :)