RAG from Scratch in 10 lines Python - No Frameworks Needed!
- Published: 26 Jun 2024
- In this video, I'll show you how to create a fully functional chat system using your own documents with just 10 lines of Python code. We'll dive into Retrieval Augmented Generation (RAG) without relying on frameworks like LangChain, LlamaIndex, or vector stores such as Chroma.
💻 RAG Beyond Basics Course:
prompt-s-site.thinkific.com/c...
LINKS:
Colab: tinyurl.com/cnufkeky
Ben's X account: x.com/bclavie
RAGatouille/ColBERT video: • Advanced RAG with ColB...
Let's Connect:
🦾 Discord: / discord
☕ Buy me a Coffee: ko-fi.com/promptengineering
🔴 Patreon: / promptengineering
💼Consulting: calendly.com/engineerprompt/c...
📧 Business Contact: engineerprompt@gmail.com
Become Member: tinyurl.com/y5h28s6h
💻 Pre-configured localGPT VM: bit.ly/localGPT (use Code: PromptEngineering for 50% off).
Sign up for the newsletter, localGPT:
tally.so/r/3y9bb0
00:00 Introduction to Building a Chat System without Frameworks
00:26 Understanding Retrieval Augmented Generation (RAG)
02:12 Setting Up the Python Environment
03:39 Data Preparation and Chunking
05:12 Embedding the Chunks
06:31 Retrieving Relevant Chunks
08:53 Generating Responses with LLM
09:50 Advanced Techniques and Recommendations
11:15 Conclusion and Further Learning
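The chapters above map onto a short pipeline: chunk, embed, retrieve, generate. For anyone following along without the Colab, here's a minimal sketch of the retrieval steps using a toy bag-of-words "embedding" in place of a real embedding model (the video itself calls an embedding API, so this is an illustration, not the video's code):

```python
import math
from collections import Counter

def chunk(text, size=8):
    """Split text into fixed-size word chunks (the video discusses smarter splitting)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

chunks = chunk("RAG retrieves relevant chunks and feeds them to the LLM. "
               "Chunking splits documents. Embeddings map text to vectors. "
               "Retrieval ranks chunks by similarity to the query.")
query = embed("how does retrieval rank chunks")
best = max(chunks, key=lambda c: cosine(embed(c), query))
print(best)
```

Swap `embed` for a real embedding model and add an LLM call on the best chunks and you have the full loop from the video.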
All Interesting Videos:
Everything LangChain: • LangChain
Everything LLM: • Large Language Models
Everything Midjourney: • MidJourney Tutorials
AI Image Generation: • AI Image Generation Tu...
Brilliant! Thanks for this one
This is exactly what I've been trying to find for the last couple of days. Simple instructions on how to do this with pure python and local LLM. Thank you!
x2! thanks @prompt engineering!
Excellent and concise description. Thank you.
Great video, nice style and easy to listen to, subscribed 👍🏼
Great work 👍🏻 Thanks
Brilliantly explained with clarity and insight, thank you!
Also really pleased you point out that RAG emerged from IR ideas and wasn't brand new: when I saw it I was like, haven't people seen Facebook's DrQA from 2017?!? And even that wasn't out the blue, there's a long established history with IR 👍
Thank you. I agree, in most cases we are reinventing the wheel and giving old approaches new names. Interestingly enough, a simple keyword-based search (BM25) will still outperform dense embeddings in most cases!
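For anyone curious what the BM25 mentioned above looks like, here is a simplified pure-Python sketch of Okapi BM25 scoring (no stemming or stop-word handling, just the core formula):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    N = len(tokenized)
    df = Counter()  # document frequency of each term
    for d in tokenized:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if df[term] == 0:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

docs = ["keyword search with BM25",
        "dense embeddings for retrieval",
        "BM25 ranks by keyword overlap"]
scores = bm25_scores("BM25 keyword", docs)
best = docs[max(range(len(docs)), key=scores.__getitem__)]
print(best)
```

In practice a library like `rank_bm25` does this for you, but the whole algorithm really is this small.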
Legend!
great work! thanks!
Yes! I did the same a year ago during my research. It works.
Great! Thanks!
The problem with RAG solutions is that they don't hold up with larger amounts of unstructured data. I wish there was a solution that includes long-term memory for chat agents, so that they get smarter about your context as you chat with them.
Google released context caching for their long context models. This could be a solution
@engineerprompt is there a way to save and load the vector store that you made here, sir?
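There is no framework vector store in this setup, just arrays, so one simple approach (assuming the embeddings are a NumPy array and the chunks a list of strings, as in the notebook) is to persist them directly:

```python
import json
import numpy as np

chunks = ["first chunk of text", "second chunk of text"]
embeddings = np.random.rand(len(chunks), 8)  # stand-in for real embedding vectors

# Save: vectors as .npy, chunk texts as JSON
np.save("embeddings.npy", embeddings)
with open("chunks.json", "w") as f:
    json.dump(chunks, f)

# Load them back later instead of re-embedding
loaded_embeddings = np.load("embeddings.npy")
with open("chunks.json") as f:
    loaded_chunks = json.load(f)
```

That way you only pay for embedding once and can reload the "store" on startup.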
500 likes, keep it up !
Can this also be implemented with a local model through Ollama?
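Yes, in principle: Ollama exposes a local HTTP API on port 11434, so the OpenAI call can be swapped for a request like the sketch below (assumes Ollama is running with a pulled model such as `llama3`; the actual request is commented out here):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt, model="llama3"):
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt, model="llama3"):
    """POST the prompt to a locally running Ollama server; returns the response text."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

context = "\n".join(["retrieved chunk one", "retrieved chunk two"])
prompt = f"Answer using only this context:\n{context}\n\nQuestion: what is in the context?"
# answer = ollama_generate(prompt)  # uncomment with Ollama running and a model pulled
payload = build_payload(prompt)
```

The retrieval side stays identical; only the generation call changes.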
I never liked RAG frameworks .. thanks for the useful content
Hello!
I have a question: is the similarity search a way to reduce the number of tokens sent to the OpenAI API? So basically, when you query the LLM you are not sending the entire text of the Wikipedia page?
I'm asking because of token costs, to know exactly what OpenAI will charge us.
Your content is probably the best on youtube! Really appreciate all your videos
Probably. He used a Wiki page, but you may have a 1000-page PDF that would cost a lot to process, and maybe most of it is irrelevant to what you want.
When you break the text up and then fetch the 'n' most relevant chunks, you get what you want faster and cheaper.
And if you run an AI locally, the more context you send, the slower it will be. So this can let a not-so-powerful PC do the job too.
Yes, there are two parts as mentioned by @luizemanoel. First the document can contain a lot of irrelevant info. You only want to provide what is relevant to the query to the LLM. This will improve the responses. And the added benefit is reduced tokens which means less cost as well.
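To make the cost point above concrete, here is a rough sketch comparing what gets sent with and without retrieval (token counts approximated by word counts; real billing uses the model's tokenizer):

```python
# A long "document" of 200 short sentences, 5 words each
document = " ".join(f"sentence {i} about some topic." for i in range(200))
chunks = document.split(". ")  # naive sentence-level chunking
top_k = chunks[:3]             # stand-in for the 3 most similar chunks

full_tokens = len(document.split())
retrieved_tokens = sum(len(c.split()) for c in top_k)
print(full_tokens, retrieved_tokens)
```

Sending only the top chunks to the LLM is a small fraction of the tokens of sending the whole document, which is exactly where the cost saving comes from.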
@engineerprompt @luizemanoel2588 OK, thanks to both!
Could you please make a video on a chatbot that can interact with PDF files and answer questions using recent tech? I'm having the most difficulties with outdated tutorials. It would be a great help!
Can you also show how to produce structured output?
What are the best ways of importing documents into the RAG system from corporate systems, such as Google Docs, Confluence, or Notion, without asking your IT?
I have actually done a few things manually, for example using scraping tools and Chrome extensions, but they are very labour-intensive. Is there something a bit more streamlined?
Also, how do you add indexing, link-backs, and more nuanced chunking mechanisms (aware of context and type of info)?
You are looking for data connectors in this case. Each of these services has its own API, or you can use data loaders from LangChain (python.langchain.com/v0.2/docs/integrations/document_loaders/). This is one aspect where I would recommend using a framework.
Hi, could you convert complex PDF documents (with graphics and tables) into an easily readable text format, such as Markdown? The input file would be a PDF and the output file would be a text file (.txt).
Yes, checkout this video: ruclips.net/video/mdLBr9IMmgI/видео.html
Thanks for the video! However, RAG never convinced me. I'm looking for fine-tuning in 10 lines of code.
"10 lines" 🤣
No frameworks, but please install RAGatouille? WTF!
Are you also mad he used numpy? Hahahahah wtf
Framework: a collection of libraries to build applications
Libraries: a tool to leverage functionality
@Yocoda24, well: if the claim is pure Python, no frameworks, yes. WTF.
@@MeinDeutschkurs not sure where you’re pulling “pure python” from? Can you give me a timestamp to when it is said in the video?
@Yocoda24 Read the video title:
“RAG from Scratch in 10 lines Python - No Frameworks Needed!”
@MeinDeutschkurs oh okay, so it doesn't say pure Python, and he doesn't use any frameworks. Glad we could come to an understanding.
Thank you
Thanks for this great video. I tried to run your Jupyter notebook. When calling the line "from google.colab import userdata",
I get the error: ModuleNotFoundError: No module named 'google'. And somewhere I see that pkg_resources is deprecated as an API.
Is Python 3.12.3 too new?
OK, I replaced the google.colab part. There are other ways to create an OpenAI client!
Now it works!
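For anyone hitting the same error: `google.colab` only exists inside Colab, so outside it the API key can come from an environment variable instead. A minimal sketch of the swap:

```python
import os

# In Colab the notebook uses:
#   from google.colab import userdata
#   key = userdata.get("OPENAI_API_KEY")
# Outside Colab, fall back to an environment variable:
api_key = os.environ.get("OPENAI_API_KEY", "")
print("key found" if api_key else "set OPENAI_API_KEY before creating the client")
```

Then pass `api_key` when constructing the OpenAI client as usual.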